transformers - 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch, TensorFlow, and JAX

  •    Python

🤗 Transformers provides thousands of pretrained models to perform tasks on text such as classification, information extraction, question answering, summarization, translation, text generation and more in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone. 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets, and then share them with the community on our model hub. At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
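As a minimal sketch of the "download and use" workflow the blurb describes, the `pipeline` helper fetches a default pretrained checkpoint for a task on first use and runs it on raw text (the example sentence is our own, and the exact default checkpoint is chosen by the library):

```python
from transformers import pipeline

# pipeline() downloads a default pretrained model for the given task
# on first use and wraps tokenization + inference in one call.
classifier = pipeline("sentiment-analysis")

result = classifier("Transformers makes state-of-the-art NLP easy to use.")
print(result)  # a list with one dict containing a 'label' and a 'score'
```

The same `pipeline` entry point accepts other task names such as `"summarization"` or `"translation_en_to_fr"`, each backed by a different pretrained model.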

long-range-arena - Long Range Arena for Benchmarking Efficient Transformers

  •    Python

Long-range arena is an effort toward systematic evaluation of efficient transformer models. The project aims to establish benchmark tasks/datasets with which transformer-based models can be evaluated in a systematic way, assessing their generalization power, computational efficiency, memory footprint, etc. Long-range arena also implements different variants of Transformer models in JAX, using Flax.

flaxmodels - Pretrained models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet.

  •    Python

The goal of this project is to make current deep learning models more easily available for the awesome Jax/Flax ecosystem. You will need Python 3.7 or later.

jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax)

  •    Python

A Flax (Linen) implementation of ResNet (He et al. 2015), Wide ResNet (Zagoruyko & Komodakis 2016), ResNeXt (Xie et al. 2017), ResNet-D (He et al. 2020), and ResNeSt (Zhang et al. 2020). The code is modular, so you can mix and match the various stem, residual, and bottleneck implementations. The models are tested to have the same intermediate activations and outputs as the torch.hub implementations, except ResNeSt-50 Fast, whose activations don't match exactly but whose final accuracy does.

jaxrl - Jax (Flax) implementation of algorithms for Deep Reinforcement Learning with continuous action spaces

  •    Jupyter

The goal of this repository is to provide simple and clean implementations to build research on top of. Please do not use this repository for baseline results; use the original implementations (SAC, AWAC, DrQ) instead. If you want to run this code on GPU, please follow the instructions from the official repository.
