one-pixel-attack-keras - Keras reimplementation of "One pixel attack for fooling deep neural networks" using differential evolution on Cifar10 and ImageNet

  •    Jupyter

How simple is it to cause a deep neural network to misclassify an image if an attacker is only allowed to modify the color of one pixel and only see the prediction probability? Turns out it is very simple. In many cases, an attacker can even cause the network to return any answer they want. The following project is a Keras reimplementation and tutorial of "One pixel attack for fooling deep neural networks".
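
The pixel search can be driven by SciPy's differential evolution optimizer. Below is a minimal sketch under assumed inputs: model is a hypothetical Keras CIFAR-10 classifier over 32x32x3 images scaled to [0, 1], and image and true_class are placeholder arguments, so treat this as an illustration of the technique rather than the repository's exact code.

    # One-pixel attack sketch: search over (x, y, r, g, b) for a single
    # pixel that minimizes the model's confidence in the true class.
    import numpy as np
    from scipy.optimize import differential_evolution

    def perturb(image, xs):
        # xs = (x, y, r, g, b): overwrite one pixel of a copy of the image.
        adv = image.copy()
        x, y, r, g, b = xs
        adv[int(x), int(y)] = np.array([r, g, b])
        return adv

    def attack(model, image, true_class):
        def objective(xs):
            adv = perturb(image, xs)
            # Lower true-class confidence means a better perturbation.
            return model.predict(adv[None, ...], verbose=0)[0][true_class]

        bounds = [(0, 31), (0, 31), (0, 1), (0, 1), (0, 1)]
        result = differential_evolution(objective, bounds, maxiter=75, popsize=20)
        return perturb(image, result.x)

Minimizing the true-class confidence gives the untargeted attack; a targeted variant would instead maximize the probability of a chosen class.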

pytorch-speech-commands - Speech commands recognition with PyTorch

  •    Python

Convolutional neural networks for the Google Speech Commands dataset with PyTorch. We, xuyuan and tugstugi, participated in the Kaggle TensorFlow Speech Recognition Challenge and finished in 10th place. This repository contains a simplified and cleaned-up version of our team's code.
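
For illustration only, a tiny PyTorch CNN over log-mel spectrograms for the competition's 12 labels (10 keywords plus "silence" and "unknown") could look like the following; this is a hypothetical sketch, not the team's architecture:

    # Hypothetical minimal CNN for 12-class speech command classification;
    # input is a batch of single-channel log-mel spectrograms.
    import torch
    import torch.nn as nn

    class SpeechCNN(nn.Module):
        def __init__(self, n_classes=12):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
            )

        def forward(self, x):  # x: (batch, 1, n_mels, time)
            return self.classifier(self.features(x))

    logits = SpeechCNN()(torch.randn(8, 1, 40, 101))  # e.g. 40 mel bins, ~1 s clip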

resnet-cifar10-caffe - ResNet 20 32 44 56 110 for CIFAR10 with caffe

  •    Python

It seems there is not much difference between ResNet-20 and plain-20. However, from the second plot, you can see that plain-110 has difficulty converging.




dawn-bench-entries - DAWNBench: An End-to-End Deep Learning Benchmark and Competition

  •    Python

To add your model to our leaderboard, open a Pull Request with title <Model name> || <Task name> || <Author name> (example PR), with JSON (and TSV where applicable) result files in the format outlined below. We evaluate image classification performance on the CIFAR10 dataset.
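
As a rough illustration of preparing a submission, the snippet below writes a result file and builds the PR title; the field names here are placeholders, since the authoritative JSON schema is the one outlined in the repository itself:

    # Hypothetical sketch of a DAWNBench submission file; treat the field
    # names as placeholders, not the real schema from the repository.
    import json

    entry = {
        "model": "ResNet-56",      # placeholder values throughout
        "task": "CIFAR10",
        "author": "Jane Doe",
        "test_accuracy": 0.94,
    }
    with open("CIFAR10-ResNet56.json", "w") as f:
        json.dump(entry, f, indent=2)

    # PR title in the required <Model name> || <Task name> || <Author name> form.
    pr_title = "{} || {} || {}".format(entry["model"], entry["task"], entry["author"])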

tf-vqvae - Tensorflow implementation of the paper [Neural Discrete Representation Learning](https://arxiv.org/abs/1711.00937)

  •    Jupyter

This repository implements the paper Neural Discrete Representation Learning (VQ-VAE) in Tensorflow. ⚠️ This is not an official implementation and may contain glitches (or even a major defect).
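
The core of VQ-VAE is its quantization bottleneck: each encoder output vector is snapped to its nearest codebook embedding, and during training gradients pass through via the straight-through estimator. A minimal NumPy sketch of the lookup, with an arbitrary codebook size and dimension:

    # VQ-VAE bottleneck sketch: replace each encoder output with its
    # nearest codebook embedding. In training, gradients flow through via
    # the straight-through estimator: z_q = z_e + stop_gradient(q - z_e).
    import numpy as np

    def quantize(z_e, codebook):
        # z_e: (n, d) encoder outputs; codebook: (K, d) embeddings.
        dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
        indices = dists.argmin(axis=1)
        return codebook[indices], indices

    codebook = np.random.randn(512, 64)   # K=512 codes of dimension 64
    z_q, idx = quantize(np.random.randn(10, 64), codebook)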

label-embedding-network - Label Embedding Network

  •    Python

This is an implementation of the paper Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks (https://arxiv.org/abs/1710.10393). A Label Embedding Network learns label representations (label embeddings) during the training of a deep network; the embeddings are learned adaptively and automatically through backpropagation. The original one-hot loss function is converted into a new loss over soft distributions, so that originally unrelated labels interact continuously with each other during training. As a result, the trained model achieves substantially higher accuracy and faster convergence. Experimental results on competitive tasks demonstrate the effectiveness of the method, and the learned label embeddings are reasonable and interpretable. The proposed method achieves results comparable to or better than state-of-the-art systems.
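
To illustrate the soft-distribution idea, the sketch below trains against softened targets with a KL-divergence loss; it omits the learned label-embedding network itself, so treat it as a simplified stand-in rather than the paper's exact loss:

    # Simplified stand-in for soft-label training: targets are smoothed
    # distributions instead of one-hot vectors, compared via KL divergence.
    import torch
    import torch.nn.functional as F

    def soft_label_loss(logits, soft_targets):
        # KL(soft_targets || softmax(logits)), averaged over the batch.
        log_probs = F.log_softmax(logits, dim=1)
        return F.kl_div(log_probs, soft_targets, reduction="batchmean")

    logits = torch.randn(4, 10, requires_grad=True)
    one_hot = F.one_hot(torch.tensor([3, 1, 7, 0]), 10).float()
    uniform = torch.full_like(one_hot, 1.0 / 10)
    soft = 0.9 * one_hot + 0.1 * uniform   # soften the one-hot targets
    soft_label_loss(logits, soft).backward()

In the paper the soft targets come from the learned label embeddings rather than a fixed smoothing mixture, which is what lets related labels interact during training.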

DenseNet-Cifar10 - Train DenseNet on Cifar-10 based on Keras

  •    Python

Trains DenseNet-40-10 on the CIFAR-10 dataset with data augmentation. The DenseNet implementation is based on titu1994/DenseNet, with some modifications to make it consistent with the Keras 2 interface.
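
Typical CIFAR-10 augmentation in Keras 2 uses ImageDataGenerator with small shifts and horizontal flips; the settings below are common choices, not necessarily this repository's exact configuration:

    # Common CIFAR-10 augmentation with the Keras 2 ImageDataGenerator.
    from keras.preprocessing.image import ImageDataGenerator
    from keras.datasets import cifar10

    (x_train, y_train), _ = cifar10.load_data()
    datagen = ImageDataGenerator(
        width_shift_range=0.125,   # up to 4 of 32 pixels
        height_shift_range=0.125,
        horizontal_flip=True,
    )
    # model.fit_generator(datagen.flow(x_train, y_train, batch_size=64), ...)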


tensorflow-cifar-10 - Cifar-10 CNN implementation using TensorFlow library with 20% error.

  •    Python

A CIFAR-10 convolutional network implementation example using the TensorFlow library. The best accuracy I obtained was 79.12% on the test set. Keep in mind that the network does not always train to the same accuracy, but it almost always exceeds 78%.

parle

  •    Python

This is the code for Parle: parallelizing stochastic gradient descent. We demonstrate an algorithm for the parallel training of deep neural networks that trains multiple copies of the same network, called "replicas", in parallel with a special coupling on their weights; it obtains significantly better generalization than a single network, as well as 2-5x faster convergence than a data-parallel implementation of SGD for a single network. In both cases, we construct an optimizer class that initializes the requisite buffers on different GPUs and handles all the updates after each mini-batch. As an example, we provide code for the MNIST and CIFAR-10 datasets with two prototypical networks, LeNet and All-CNN, respectively. The MNIST and CIFAR-10/100 datasets are downloaded and pre-processed (and stored in the proc folder) the first time parle is run.
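
Schematically, the coupling pulls each replica toward the mean of all replicas after its own gradient step. The NumPy sketch below is an elastic-averaging-style caricature of that idea, not Parle's actual optimizer:

    # Schematic coupled update: each replica takes its own gradient step,
    # then is pulled toward the mean of all replicas with strength rho.
    import numpy as np

    def coupled_step(replicas, grads, lr=0.1, rho=0.01):
        # replicas: list of weight vectors, one per GPU/copy.
        mean_w = np.mean(replicas, axis=0)
        return [w - lr * g - rho * (w - mean_w)
                for w, g in zip(replicas, grads)]

    replicas = [np.random.randn(5) for _ in range(3)]
    grads = [np.random.randn(5) for _ in range(3)]
    replicas = coupled_step(replicas, grads)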

kaggle-cifar10 - A baseline for kaggle-cifar10

  •    Python

This is a baseline for the Kaggle CIFAR-10 competition.





