ResNeXt-DenseNet - PyTorch Implementation for ResNet, Pre-Activation ResNet, ResNeXt, DenseNet, and Group Normalisation


PyTorch Implementation for ResNet, Pre-Activation ResNet, ResNeXt, DenseNet, and Group Normalisation

https://github.com/D-X-Y/ResNeXt-DenseNet


Related Projects

densenet.pytorch - A PyTorch implementation of DenseNet.

  •    Python

This is a PyTorch implementation of the DenseNet-BC architecture as described in the paper Densely Connected Convolutional Networks by G. Huang, Z. Liu, K. Weinberger, and L. van der Maaten. This implementation gets a CIFAR-10+ error rate of 4.77 with a 100-layer DenseNet-BC with a growth rate of 12. Their official implementation and links to many other third-party implementations are available in the liuzhuang13/DenseNet repo on GitHub. As the results table in the DenseNet paper shows, it provides competitive state-of-the-art results on CIFAR-10, CIFAR-100, and SVHN.
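The core of DenseNet-BC is a dense layer whose output is concatenated onto everything that came before it. A minimal PyTorch sketch of such a layer (illustrative only, not the repository's code; the class name and defaults are assumptions):

```python
import torch
import torch.nn as nn

class DenseLayerBC(nn.Module):
    """BN -> ReLU -> 1x1 bottleneck conv -> BN -> ReLU -> 3x3 conv; the output
    is concatenated with the input, so each layer adds `growth_rate` new maps."""
    def __init__(self, in_channels, growth_rate=12, bottleneck_width=4):
        super().__init__()
        inter = bottleneck_width * growth_rate
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter)
        self.conv2 = nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return torch.cat([x, out], dim=1)  # dense connectivity

# A layer that receives 24 feature maps and adds 12 more (growth rate 12).
layer = DenseLayerBC(24, growth_rate=12)
print(layer(torch.randn(1, 24, 32, 32)).shape)  # torch.Size([1, 36, 32, 32])
```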

ResNeXt - Implementation of a classification framework from the paper Aggregated Residual Transformations for Deep Neural Networks

  •    Lua

This repository contains a Torch implementation for the ResNeXt algorithm for image classification. The code is based on fb.resnet.torch. ResNeXt is a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call “cardinality” (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width.
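Because every aggregated transformation in a ResNeXt block has the same topology, the whole set can be realised as a single grouped convolution, with cardinality equal to the number of groups. A rough PyTorch sketch of that idea (not the repository's Lua/Torch code; class name and hyper-parameters are assumptions):

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Sketch of a ResNeXt bottleneck: the aggregated transformations are
    implemented as one grouped 3x3 convolution, with cardinality = groups."""
    def __init__(self, channels=256, bottleneck_width=4, cardinality=32):
        super().__init__()
        inner = cardinality * bottleneck_width  # e.g. 32 * 4 = 128
        self.body = nn.Sequential(
            nn.Conv2d(channels, inner, 1, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, inner, 3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(inner), nn.ReLU(inplace=True),
            nn.Conv2d(inner, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual connection

block = ResNeXtBlock()
print(block(torch.randn(1, 256, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
```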

attention-transfer - Improving Convolutional Networks via Attention Transfer (ICLR 2017)

  •    Jupyter

The code uses PyTorch (https://pytorch.org). Note that the original experiments were done using torch-autograd; we have so far validated that the CIFAR-10 experiments are exactly reproducible in PyTorch, and are in the process of doing so for ImageNet (results are very slightly worse in PyTorch, due to hyperparameters). This section describes how to get the results in Table 1 of the paper.
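Attention transfer matches spatial attention maps between a teacher and a student network; a common formulation sums the squared activations over channels and compares the L2-normalised maps. A small sketch under those assumptions (function names are placeholders, not the repository's API):

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    # Spatial attention map: channel-wise sum of |activations|^p,
    # flattened and L2-normalised per sample.
    a = feat.abs().pow(p).sum(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def at_loss(student_feat, teacher_feat):
    # Distance between normalised attention maps; channel counts may differ,
    # but spatial sizes must match.
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

s = torch.randn(8, 64, 16, 16)    # student feature maps
t = torch.randn(8, 256, 16, 16)   # teacher feature maps
print(at_loss(s, t).item())
```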

cnn-models - ImageNet pre-trained models with batch normalization for the Caffe framework

  •    Python

This repository contains convolutional neural network (CNN) models trained on ImageNet by Marcel Simon at the Computer Vision Group Jena (CVGJ) using the Caffe framework, as published in the accompanying technical report. Each model is in a separate subfolder and contains everything needed to reproduce the results. The repository currently contains the batch-normalization variants of AlexNet and VGG19, as well as the training code for Residual Networks (ResNet). No mean subtraction is required for the pre-trained models: a batch-normalization layer at the input effectively does the same thing.


SparseConvNet - Submanifold sparse convolutional networks

  •    C++

This library brings spatially-sparse convolutional networks to Torch/PyTorch. Moreover, it introduces Submanifold Sparse Convolutions, which can be used to build computationally efficient sparse VGG/ResNet/DenseNet-style networks. With regular 3x3 convolutions, the set of active (non-zero) sites grows rapidly, whereas with Submanifold Sparse Convolutions the set of active sites is unchanged: active sites look only at their active neighbors, and non-active sites incur no computational overhead. By stacking Submanifold Sparse Convolutions to build VGG- and ResNet-type ConvNets, information can flow along lines or surfaces of active points.
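The difference between the two convolution types can be illustrated on a toy dense tensor: a regular 3x3 convolution dilates the set of active sites, while a submanifold-style convolution only produces outputs at sites that were already active. The sketch below only mimics that rule with dense tensors and masking; the actual library operates on sparse data structures:

```python
import torch
import torch.nn.functional as F

x = torch.zeros(1, 1, 7, 7)
x[0, 0, 3, 1:6] = 1.0                        # a line of 5 active sites
active = (x != 0).float()

w = torch.ones(1, 1, 3, 3)
regular = F.conv2d(x, w, padding=1)               # active set grows around the line
submanifold = F.conv2d(x, w, padding=1) * active  # active set stays unchanged

print((regular != 0).sum().item())       # 21 active outputs
print((submanifold != 0).sum().item())   # 5 active outputs (same as input)
```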

DenseNet - DenseNet implementation in Keras

  •    Python

Bottleneck-compressed DenseNets (DenseNet-BC) offer further benefits, such as a reduced number of parameters, with similar or better performance. The best original model, DenseNet-100-24 (27.2 million parameters), achieves 3.74% error, whereas DenseNet-BC-190-40 (25.6 million parameters) achieves 3.46% error, a new state-of-the-art result on CIFAR-10.

DenseNet-Caffe - DenseNet Caffe Models, converted from https://github.com/liuzhuang13/DenseNet

  •    

We manually converted the original Torch models from https://github.com/liuzhuang13/DenseNet into Caffe format. Update (July 27, 2017): for your convenience, we also provide a link to these models on Baidu Disk.

video-classification-3d-cnn-pytorch - Video classification tools using 3D ResNet

  •    Python

This is PyTorch code for video (action) classification using a 3D ResNet trained with the companion training code. The 3D ResNet is trained on the Kinetics dataset, which includes 400 action classes. The code takes videos as input; in score mode it outputs class names and predicted class scores for every 16-frame clip, and in feature mode it outputs 512-dimensional features (taken after global average pooling) for every 16-frame clip. A Torch (Lua) version of this code is also available.
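In feature mode, each 16-frame clip is reduced to a single 512-dimensional vector by global average pooling. The sketch below illustrates that flow using torchvision's r3d_18 as a stand-in backbone (not the Kinetics-trained model shipped with this repository):

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Stand-in 3D ResNet to illustrate "feature mode": one 512-d vector per
# 16-frame clip after global average pooling.
model = r3d_18()
model.fc = nn.Identity()                      # drop the classifier head
model.eval()

video = torch.randn(1, 3, 64, 112, 112)       # (N, C, T, H, W): a 64-frame video
clips = video.split(16, dim=2)                # non-overlapping 16-frame clips
with torch.no_grad():
    feats = torch.stack([model(c) for c in clips], dim=1)
print(feats.shape)                            # torch.Size([1, 4, 512])
```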

pytorch-segmentation-detection - Image Segmentation and Object Detection in Pytorch

  •    Jupyter

So far, the library contains implementations of the FCN-32s (Long et al.), Resnet-18-8s, and Resnet-34-8s (Chen et al.) image segmentation models in PyTorch and the PyTorch/Vision library, together with a training routine, reported accuracy, and trained models for the PASCAL VOC 2012 dataset. To train these models on your data, you will have to write a dataloader for your dataset, as sketched below. Models for Object Detection will be released soon.
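The dataloader you have to write boils down to a torch Dataset that yields (image, mask) pairs. A minimal sketch, where the paths and the joint transform are placeholders rather than anything provided by the library:

```python
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class SegmentationDataset(Dataset):
    """Minimal (image, mask) dataset sketch for your own data; the transform
    is expected to convert the image to a tensor and augment both jointly."""
    def __init__(self, image_paths, mask_paths, transform=None):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert("RGB")
        mask = Image.open(self.mask_paths[idx])          # one class index per pixel
        if self.transform is not None:
            image, mask = self.transform(image, mask)    # joint transform
        return image, torch.as_tensor(np.array(mask), dtype=torch.long)
```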

PyTorch-GAN - PyTorch implementations of Generative Adversarial Networks.

  •    Python

Collection of PyTorch implementations of Generative Adversarial Network varieties presented in research papers. Model architectures will not always mirror the ones proposed in the papers, but I have chosen to focus on covering the core ideas rather than getting every layer configuration right. Contributions and suggestions of GANs to implement are very welcome.

One of the implemented papers, on label-conditioned image synthesis, describes its approach as follows: synthesizing high-resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high-resolution samples provide class information not present in low-resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.
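The simplest form of label conditioning is to embed the class label and concatenate it with the noise vector before the generator. A toy sketch of that idea (illustrative only, not the layer configuration of the paper or of this repository):

```python
import math
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy label-conditioned generator: the embedded class label is
    concatenated with the noise vector before a small MLP generator."""
    def __init__(self, n_classes=10, latent_dim=100, img_shape=(1, 28, 28)):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        self.img_shape = img_shape
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256), nn.ReLU(inplace=True),
            nn.Linear(256, math.prod(img_shape)), nn.Tanh(),
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(x).view(-1, *self.img_shape)

g = ConditionalGenerator()
imgs = g(torch.randn(16, 100), torch.randint(0, 10, (16,)))
print(imgs.shape)   # torch.Size([16, 1, 28, 28])
```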

pixel-cnn - Python3 / Tensorflow implementation of PixelCNN++, as described in "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications"

  •    Python

PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, by Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P. Kingma, and Yaroslav Bulatov. This code supports multi-GPU training of our improved PixelCNN on CIFAR-10 and Small ImageNet, and is easy to adapt for additional datasets. Training on a machine with 8 Maxwell TITAN X GPUs reaches 3.0 bits per dimension in about 10 hours and takes approximately 5 days to converge to 2.92 bits per dimension.
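The key ingredient named in the title is the discretized logistic likelihood: the probability of a pixel value is the logistic CDF mass falling into its quantization bin. A simplified single-component sketch in PyTorch (the paper uses a mixture plus extra numerical-stability tricks; this is not the repository's TensorFlow code):

```python
import torch

def discretized_logistic_logprob(x, mean, log_scale):
    """Log-likelihood of pixels x (rescaled to [-1, 1] in steps of 2/255)
    under a single discretized logistic distribution."""
    inv_scale = torch.exp(-log_scale)
    plus = torch.sigmoid(inv_scale * (x - mean + 1.0 / 255))
    minus = torch.sigmoid(inv_scale * (x - mean - 1.0 / 255))
    prob = plus - minus
    # Edge bins absorb everything below the lowest / above the highest value.
    prob = torch.where(x < -0.999, plus, prob)
    prob = torch.where(x > 0.999, 1.0 - minus, prob)
    return torch.log(prob.clamp_min(1e-12))

x = torch.rand(4, 3, 32, 32) * 2 - 1
print(discretized_logistic_logprob(x, torch.zeros_like(x), torch.zeros_like(x)).mean())
```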

pytorch-cifar - 95.16% on CIFAR10 with PyTorch

  •    Python

I'm playing with PyTorch on the CIFAR10 dataset.

efficient_densenet_pytorch - A memory-efficient implementation of DenseNets

  •    Python

A PyTorch implementation of DenseNets, optimized to save GPU memory. While DenseNets are fairly easy to implement in deep learning frameworks, most implementations (such as the original) tend to be memory-hungry. In particular, the number of intermediate feature maps generated by batch normalization and concatenation operations grows quadratically with network depth. It is worth emphasizing that this is not a property inherent to DenseNets, but rather to the implementation.
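One standard way to avoid storing those quadratically growing intermediates is gradient checkpointing: recompute the concatenation/BN/bottleneck during the backward pass instead of keeping its outputs. A rough sketch of that idea (illustrative, not this repository's exact implementation):

```python
import torch
import torch.nn as nn
import torch.utils.checkpoint as cp

class MemoryEfficientDenseLayer(nn.Module):
    """Recompute the concat -> BN -> ReLU -> 1x1 bottleneck during backward
    instead of storing its (quadratically growing) intermediate outputs."""
    def __init__(self, in_channels, growth_rate=12):
        super().__init__()
        inter = 4 * growth_rate
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter)
        self.conv2 = nn.Conv2d(inter, growth_rate, 3, padding=1, bias=False)

    def bottleneck(self, *prev_features):
        x = torch.cat(prev_features, dim=1)          # the expensive intermediate
        return self.conv1(torch.relu(self.bn1(x)))

    def forward(self, prev_features):
        # prev_features: list of feature maps from all earlier dense layers
        out = cp.checkpoint(self.bottleneck, *prev_features)
        return self.conv2(torch.relu(self.bn2(out)))

feats = [torch.randn(2, 24, 32, 32, requires_grad=True)]
layer = MemoryEfficientDenseLayer(24)
print(layer(feats).shape)   # torch.Size([2, 12, 32, 32])
```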

LightNet - LightNet: Light-weight Networks for Semantic Image Segmentation (Cityscapes and Mapillary Vistas Dataset)

  •    Python

This repository contains the code (in PyTorch) for "LightNet: Light-weight Networks for Semantic Image Segmentation" (underway) by Huijun Liu @ TU Braunschweig. Semantic segmentation is a significant part of modern autonomous driving systems, as an exact understanding of the surrounding scene is very important for navigation and driving decisions. Deep fully convolutional networks (FCNs) now have a very significant effect on semantic segmentation, but most of the relevant research has focused on improving segmentation accuracy rather than model computation efficiency. However, autonomous driving systems often run on embedded devices, where computing and storage resources are relatively limited. In this paper we describe several light-weight networks based on MobileNetV2, ShuffleNet, and Mixed-scale DenseNet for the semantic image segmentation task. Additionally, we introduce GAN-based data augmentation (pix2pixHD), concurrent Spatial-Channel Squeeze & Excitation (SCSE), and Receptive Field Block (RFB) into the proposed networks. We measure our performance on Cityscapes pixel-level segmentation and achieve up to 70.72% class mIoU and 88.27% category mIoU. We evaluate the trade-offs between mIoU, the number of operations measured by multiply-adds (MAdd), and the number of parameters.
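As a flavour of one of the named components, a concurrent Spatial-Channel Squeeze & Excitation (SCSE) block recalibrates a feature map with a channel gate and a spatial gate and combines the two. A minimal sketch (not LightNet's own code):

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent Spatial and Channel Squeeze & Excitation: a channel gate
    (global pooling + MLP) and a spatial gate (1x1 conv), results summed."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.channel_gate(x) + x * self.spatial_gate(x)

x = torch.randn(2, 64, 32, 32)
print(SCSE(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```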

Switchable-Normalization - Code for Switchable Normalization from "Differentiable Learning-to-Normalize via Switchable Normalization", https://arxiv

  •    HTML

Switchable Normalization is a normalization technique that learns different normalization operations for different normalization layers in a deep neural network in an end-to-end manner. This repository provides ImageNet classification results and models trained with Switchable Normalization. You are encouraged to cite the following paper if you use SN in research.
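Conceptually, Switchable Normalization computes Instance-, Layer- and Batch-Norm statistics for the same feature map and blends them with learned softmax weights. A simplified sketch of that mechanism (running statistics and other details of the paper and of this repository are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchNorm2d(nn.Module):
    """Simplified Switchable Normalization: IN/LN/BN means and variances are
    mixed with learned softmax weights, then used to normalize the input."""
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.mean_weight = nn.Parameter(torch.ones(3))   # weights for IN, LN, BN
        self.var_weight = nn.Parameter(torch.ones(3))

    def forward(self, x):
        mean_in = x.mean((2, 3), keepdim=True)
        var_in = x.var((2, 3), keepdim=True, unbiased=False)
        mean_ln = mean_in.mean(1, keepdim=True)
        var_ln = (var_in + mean_in ** 2).mean(1, keepdim=True) - mean_ln ** 2
        mean_bn = mean_in.mean(0, keepdim=True)
        var_bn = (var_in + mean_in ** 2).mean(0, keepdim=True) - mean_bn ** 2
        mw = F.softmax(self.mean_weight, dim=0)
        vw = F.softmax(self.var_weight, dim=0)
        mean = mw[0] * mean_in + mw[1] * mean_ln + mw[2] * mean_bn
        var = vw[0] * var_in + vw[1] * var_ln + vw[2] * var_bn
        return self.weight * (x - mean) / torch.sqrt(var + self.eps) + self.bias

print(SwitchNorm2d(8)(torch.randn(4, 8, 16, 16)).shape)  # torch.Size([4, 8, 16, 16])
```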

tensornets - High level network definitions with pre-trained weights in TensorFlow

  •    Python

High-level network definitions with pre-trained weights in TensorFlow (tested with TensorFlow >= 1.1.0). You can install TensorNets from PyPI (pip install tensornets) or directly from GitHub (pip install git+https://github.com/taehoonlee/tensornets.git).

awesome-very-deep-learning - 🔥A curated list of papers and code about very deep neural networks

  •    

awesome-very-deep-learning is a curated list of papers and code about implementing and training very deep neural networks. One example entry: Value Iteration Networks are very deep networks that have tied weights and perform approximate value iteration; they are used as an internal (model-based) planning module.

ImageAI - A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities

  •    Python

A Python library built to empower developers to build applications and systems with self-contained Deep Learning and Computer Vision capabilities using just a few lines of code. Built with simplicity in mind, ImageAI supports a list of state-of-the-art Machine Learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking, and image prediction training. ImageAI currently supports image prediction and training using 4 different Machine Learning algorithms trained on the ImageNet-1000 dataset. ImageAI also supports object detection, video detection, and object tracking using RetinaNet, YOLOv3, and TinyYOLOv3 trained on the COCO dataset. Eventually, ImageAI will provide support for wider and more specialized aspects of Computer Vision, including but not limited to image recognition in special environments and special fields.
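A typical object-detection call looks roughly like the following. The method names follow the project's documentation, but exact signatures and the pre-trained weight filename vary between ImageAI versions, so treat this as a sketch:

```python
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath("retinanet_coco_weights.h5")   # placeholder weight file path
detector.loadModel()

detections = detector.detectObjectsFromImage(
    input_image="input.jpg", output_image_path="output.jpg")
for d in detections:
    print(d["name"], d["percentage_probability"])
```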

OSVOS-PyTorch - PyTorch implementation of One-Shot Video Object Segmentation (OSVOS)

  •    Python

Check our project page for additional information. OSVOS is a method that tackles the task of semi-supervised video object segmentation. It is based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one-shot). Experiments on DAVIS 2016 show that OSVOS is faster than currently available techniques and improves the state of the art by a significant margin (79.8% vs 68.0%).
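The "one-shot" step amounts to briefly fine-tuning the pretrained parent network on the single annotated frame of the test sequence before segmenting the remaining frames. A schematic of that loop (placeholder names and hyper-parameters, not the repository's training code):

```python
import torch.nn as nn
import torch.optim as optim

def one_shot_finetune(parent_net, first_frame, first_mask, steps=500, lr=1e-8):
    """Fine-tune the parent network on the single annotated frame of the
    test sequence (the 'one-shot' step), then return the adapted network."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = optim.SGD(parent_net.parameters(), lr=lr, momentum=0.9)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(parent_net(first_frame), first_mask)  # fg/bg segmentation
        loss.backward()
        optimizer.step()
    return parent_net
```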




