Practical_DL - DL course co-developed by YSDA, Skoltech and HSE


This repo supplements the Deep Learning course taught at YSDA and Skoltech in spring 2018. For the previous iteration, see the fall17 branch. Lecture and seminar materials for each week are in the ./week* folders; homeworks are in the ./homework* folders.

https://github.com/yandexdataschool/Practical_DL

Related Projects

Practical_RL - A course in reinforcement learning in the wild

  •    Jupyter

A course on reinforcement learning in the wild. Taught on-campus at HSE and YSDA and maintained to be friendly to online students (both English and Russian). The syllabus is approximate: the lectures may occur in a slightly different order, and some topics may end up taking two weeks.

CADL - Course materials/Homework materials for the FREE MOOC course on "Creative Applications of Deep Learning w/ Tensorflow" #CADL

  •    Jupyter

This repository contains lecture transcripts and homework assignments as Jupyter Notebooks for the first of three Kadenze Academy courses on Creative Applications of Deep Learning w/ Tensorflow. It also contains a Python package with all the code developed during the three courses. The first course makes heavy use of Jupyter Notebook, which you will need for submitting the homeworks and for interacting with the guided session notebooks I will provide for each assignment. Follow along with this guide and we'll see how to obtain all of the necessary libraries. By the end of this, you'll have installed Jupyter Notebook, NumPy, SciPy, and Matplotlib. While many of these libraries aren't necessary for the deep learning we'll get to in later lectures, they are incredibly useful for manipulating data on your computer, preparing data for learning, and exploring results.
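As a quick way to confirm that setup, a short snippet (not part of the CADL materials, just a sanity check) can verify that the libraries import and report their versions:

```python
# Quick setup check (not part of the CADL materials): confirm the scientific
# Python stack described above is importable and print the installed versions.
import numpy
import scipy
import matplotlib

for name, module in [("NumPy", numpy), ("SciPy", scipy), ("Matplotlib", matplotlib)]:
    print("{}: {}".format(name, module.__version__))
```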

AgentNet - Deep Reinforcement Learning library for humans

  •    Python

AgentNet is a deep reinforcement learning framework designed for ease of research and prototyping of deep learning models for Markov Decision Processes. It has full support for the Lasagne deep learning library, giving you access to all of its convolutions, maxouts, poolings, dropouts, and so on.

keras-rl - Deep Reinforcement Learning for Keras.

  •    Python

keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithms efficiently on either CPU or GPU. Furthermore, keras-rl works with OpenAI Gym out of the box, so evaluating and playing around with different algorithms is easy. Of course, you can extend keras-rl according to your own needs: you can use built-in Keras callbacks and metrics or define your own, and it is easy to implement your own environments and even algorithms by extending a few simple abstract classes. In a nutshell: keras-rl makes it really easy to run state-of-the-art deep reinforcement learning algorithms, uses Keras (and thus Theano or TensorFlow), and was built with OpenAI Gym in mind.
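As a rough illustration of that workflow, here is a minimal DQN sketch in the spirit of keras-rl's own examples; the CartPole-v0 environment, network size, and hyperparameters are illustrative choices, not prescriptions from the project:

```python
# Minimal DQN sketch in the spirit of the keras-rl examples (environment,
# network size, and hyperparameters are illustrative, not prescriptive).
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

env = gym.make("CartPole-v0")
nb_actions = env.action_space.n

# A small fully connected Q-network built with plain Keras.
model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation="relu"),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

# keras-rl wires the model, replay memory, and exploration policy together.
agent = DQNAgent(model=model, nb_actions=nb_actions,
                 memory=SequentialMemory(limit=50000, window_length=1),
                 policy=EpsGreedyQPolicy(), nb_steps_warmup=100,
                 target_model_update=1e-2)
agent.compile(Adam(lr=1e-3), metrics=["mae"])

agent.fit(env, nb_steps=10000, verbose=2)        # train on the Gym env
agent.test(env, nb_episodes=5, visualize=False)  # evaluate the learned policy
```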

lectures - Oxford Deep NLP 2017 course

  •    

This repository contains the lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford. This is an applied course focussing on recent advances in analysing and generating speech and text using recurrent neural networks. We introduce the mathematical definitions of the relevant machine learning models and derive their associated optimisation algorithms. The course covers a range of applications of neural networks in NLP including analysing latent dimensions in text, transcribing speech to text, translating between languages, and answering questions. These topics are organised into three high level themes forming a progression from understanding the use of neural networks for sequential language modelling, to understanding their use as conditional language models for transduction tasks, and finally to approaches employing these techniques in combination with other mechanisms for advanced applications. Throughout the course the practical implementation of such models on CPU and GPU hardware is also discussed.


stanford-tensorflow-tutorials - This repository contains code examples for Stanford's course: TensorFlow for Deep Learning Research

  •    Python

This repository contains code examples for the course CS 20: TensorFlow for Deep Learning Research. It will be updated as the class progresses. The detailed syllabus and lecture notes can be found here. For this course, I use Python 3.6 and TensorFlow 1.4.1. For setup instructions and the list of dependencies, please see the setup folder of this repository.
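Since the course pins TensorFlow 1.4.1, the examples follow the graph-and-session style of the TF 1.x API; a minimal sketch of that pattern (illustrative only, not code from the repository):

```python
# Minimal TF 1.x graph/session pattern from the TensorFlow 1.4 era
# (illustrative sketch, not code from the course repository).
import tensorflow as tf

a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = a * b  # ops are added to the default graph; nothing runs yet

with tf.Session() as sess:
    # The graph only executes inside a session.
    print(sess.run(c, feed_dict={a: 3.0, b: 7.0}))  # 21.0
```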

nolearn - Combines the ease of use of scikit-learn with the power of Theano/Lasagne

  •    Python

nolearn contains a number of wrappers and abstractions around existing neural network libraries, most notably Lasagne, along with a few machine learning utility modules. All code is written to be compatible with scikit-learn. We recommend using venv (when using Python 3) or virtualenv (Python 2) to install nolearn.
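For a feel of that scikit-learn-style interface, here is a minimal sketch of nolearn's Lasagne wrapper; the layer sizes, hyperparameters, and toy data are illustrative assumptions rather than a recommended configuration:

```python
# Sketch of nolearn's scikit-learn-style wrapper around Lasagne
# (layer sizes, hyperparameters, and toy data are illustrative assumptions).
import numpy as np
from lasagne import layers, nonlinearities
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[
        ("input", layers.InputLayer),
        ("hidden", layers.DenseLayer),
        ("output", layers.DenseLayer),
    ],
    input_shape=(None, 784),
    hidden_num_units=100,
    output_num_units=10,
    output_nonlinearity=nonlinearities.softmax,
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    max_epochs=10,
    verbose=1,
)

X = np.random.rand(256, 784).astype(np.float32)        # toy inputs
y = np.random.randint(0, 10, size=256).astype(np.int32) # toy labels

# Familiar scikit-learn-style calls: fit / predict.
net.fit(X, y)
print(net.predict(X[:5]))
```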

ai-resources - Selection of resources to learn Artificial Intelligence / Machine Learning / Statistical Inference / Deep Learning / Reinforcement Learning

  •    

Update April 2017: It’s been almost a year since I posted this list of resources, and over the year there’s been an explosion of articles, videos, books, tutorials etc on the subject — even an explosion of ‘lists of resources’ such as this one. It’s impossible for me to keep this up to date. However, the one resource I would like to add is https://ml4a.github.io/ (https://github.com/ml4a) led by Gene Kogan. It’s specifically aimed at artists and the creative coding community.

This is a very incomplete and subjective selection of resources to learn about the algorithms and maths of Artificial Intelligence (AI) / Machine Learning (ML) / Statistical Inference (SI) / Deep Learning (DL) / Reinforcement Learning (RL). It is aimed at beginners (those without a Computer Science background who don't know anything about these subjects) and hopes to take them to quite advanced levels (able to read and understand DL papers). It is not an exhaustive list and only contains some of the learning materials that I have personally completed, so that I can include brief personal comments on them. It is also by no means the best path to follow (nowadays most MOOCs have full paths all the way from basic statistics and linear algebra to ML/DL). But this is the path I took, and in a sense it's a partial documentation of my personal journey into DL (actually I bounced around all of these back and forth like crazy).

As someone who has no formal background in Computer Science (but has been programming for many years), the language, notation, and concepts of ML/SI/DL and even CS were completely alien to me, and the learning curve was not only steep, but vertical, treacherous, and slippery like ice.

Lasagne - Lightweight library to build and train neural networks in Theano

  •    Python

For more details and alternatives, please see the Installation instructions. For support, please refer to the lasagne-users mailing list.
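As a quick orientation to what building and training a network with Lasagne looks like, here is a minimal sketch on top of Theano; the architecture, learning rate, and random toy data are illustrative assumptions:

```python
# Minimal sketch of defining and training a small network with Lasagne on top
# of Theano (architecture, learning rate, and toy data are illustrative).
import numpy as np
import theano
import theano.tensor as T
import lasagne

X = T.matrix("X")
y = T.ivector("y")

# Build the network: input -> dense(64, ReLU) -> dense(10, softmax).
l_in = lasagne.layers.InputLayer(shape=(None, 784), input_var=X)
l_hid = lasagne.layers.DenseLayer(l_in, num_units=64,
                                  nonlinearity=lasagne.nonlinearities.rectify)
l_out = lasagne.layers.DenseLayer(l_hid, num_units=10,
                                  nonlinearity=lasagne.nonlinearities.softmax)

# Loss and SGD-with-momentum updates are compiled into a Theano function.
prediction = lasagne.layers.get_output(l_out)
loss = lasagne.objectives.categorical_crossentropy(prediction, y).mean()
params = lasagne.layers.get_all_params(l_out, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.01)
train_fn = theano.function([X, y], loss, updates=updates)

# One toy training step on random data.
xb = np.random.rand(32, 784).astype(theano.config.floatX)
yb = np.random.randint(0, 10, size=32).astype(np.int32)
print(train_fn(xb, yb))
```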

cakechat - CakeChat: Emotional Generative Dialog System

  •    Python

CakeChat is written in Theano and Lasagne. It uses end-to-end trained embeddings of 5 different emotions to generate responses conditioned on a given emotion. The code is flexible and allows you to condition a response on an arbitrary categorical variable defined for some samples in the training data. With CakeChat you can, for example, train your own persona-based neural conversational model[5] or create an emotional chatting machine without external memory[4].

dlwin - GPU-accelerated Deep Learning on Windows 10 native

  •    Python

There are certainly a lot of guides to help you build great deep learning (DL) setups on Linux or Mac OS (including with TensorFlow which, unfortunately, as of this posting, cannot be easily installed on Windows), but few care about building an efficient Windows 10-native setup. Most focus on running an Ubuntu VM hosted on Windows or on using Docker, unnecessary - and ultimately sub-optimal - steps. We also found enough misleading/deprecated information out there to make it worthwhile to put together a step-by-step guide for the latest stable versions of Keras, TensorFlow, CNTK, MXNet, and PyTorch. Used either together (e.g., Keras with a TensorFlow backend) or independently -- PyTorch cannot be used as a Keras backend, TensorFlow can be used on its own -- they make for some of the most powerful deep learning Python libraries that work natively on Windows.

t81_558_deep_learning - Washington University (in St. Louis) deep learning course T81-558

  •    Jupyter

Deep learning is a group of exciting new technologies for neural networks. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks of much greater complexity. Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain. This course will introduce the student to computer vision with Convolutional Neural Networks (CNNs), time series analysis with Long Short-Term Memory (LSTM) networks, classic neural network structures, and applications to computer security. High Performance Computing (HPC) aspects will demonstrate how deep learning can be leveraged both on graphical processing units (GPUs) and on grids. The focus is primarily on the application of deep learning to problems, with some introduction to the mathematical foundations. Students will use the Python programming language to implement deep learning using Google TensorFlow and Keras. It is not necessary to know Python prior to this course; however, familiarity with at least one programming language is assumed. This course will be delivered in a hybrid format that includes both classroom and online instruction. This syllabus presents the expected class schedule, due dates, and reading assignments. Download the current syllabus.

deep-learning-coursera - Deep Learning Specialization by Andrew Ng on Coursera.

  •    Jupyter

This repo contains all my work for this specialization. All of the code, quiz questions, screenshots, and images are taken from the Deep Learning Specialization on Coursera, unless specified otherwise. As a CS major student and a long-time self-taught learner, I have completed many CS-related MOOCs on Coursera, Udacity, Udemy, and edX. I understand the hard time you spend understanding new concepts and debugging your programs. There are discussion forums on most MOOC platforms; however, even a question with a detailed description may take some time to be answered. I released these solutions for your reference only; they may help you save some time. I hope you don't copy any part of the code or look at the quiz solutions before starting your own attempt (the programming assignments are fairly easy if you read the instructions carefully). This is almost the simplest deep learning course I have ever taken, but the simplicity is built on the fabulous course content and structure. It's a treasure given by the deeplearning.ai team.

MBE - Course materials for Modern Binary Exploitation by RPISEC

  •    C

This repository contains the materials as developed and used by RPISEC to teach Modern Binary Exploitation at Rensselaer Polytechnic Institute in Spring 2015. This was a university course developed and run solely by students to teach skills in vulnerability research, reverse engineering, and binary exploitation. Vulnerability research & exploit development is something totally outside the bounds of what you see in a normal computer science curriculum, but central to a lot of what we RPISEC members find ourselves doing in our free time. We also find that subjects in offensive security tend to have a stigma around them in university that we would like to help shake off. These are practical, applied skills that we're excited to share with those interested in learning.

courses - fast.ai Courses

  •    Jupyter

These are the lecture materials from Practical Deep Learning for Coders. Two important parts of the course are our online forums and our wiki. If you are encountering an error, we recommend that you first search the forums and wiki for a solution. If you can't find the answer there, the next step is to ask your question on the forums. See this advice on how to ask for help in a way that will allow others to most quickly and effectively be able to help you. Please don't use Github Issues to ask for help debugging (many questions have already been answered in the forums).

deepo - A series of Docker images (and their generator) that allows you to quickly set up your deep learning research environment

  •    Python

If you want to share your data and configurations between the host (your machine or VM) and the container in which you are using Deepo, use the -v option to mount host directories into the container, e.g. mapping /host/data to /data and /host/config to /config. This makes /host/data from the host visible as /data in the container, and /host/config as /config. Such isolation reduces the chances of your containerized experiments overwriting or using the wrong data.
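For illustration only, the same /host/data to /data and /host/config to /config mapping expressed through the Docker SDK for Python (an alternative interface to the -v flag described above, not how the Deepo docs present it; the image name/tag is an assumption you may need to adjust):

```python
# Illustrative alternative to the `-v` flag: the same host/container volume
# mapping via the Docker SDK for Python. The image name is an assumption;
# adjust it to the Deepo tag you actually use.
import docker

client = docker.from_env()
output = client.containers.run(
    "ufoym/deepo",                       # assumed Deepo image/tag
    "ls /data /config",                  # any command you want to run
    volumes={
        "/host/data":   {"bind": "/data",   "mode": "rw"},
        "/host/config": {"bind": "/config", "mode": "rw"},
    },
    remove=True,
)
print(output.decode())
```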

Deep-Learning-Tricks - Enumerate diverse machine learning training tricks.

  •    

This is an attempt to enumerate the different machine learning training tricks I have gathered, as well as some network architectures. The goal is to briefly describe each trick and give an intuition about why it works. My knowledge is quite limited, so this is prone to errors/imprecisions; it should be a collaborative work, so feel free to complete or correct it. Most of the tricks may seem trivial to those with some experience in machine learning, but I feel that while there is a lot of very good theoretical material available for machine learning, there is still a lack of practical advice, which would really have helped me when I started. The excellent CS231n Stanford course already has a good list of training tricks. What: the learning rate is probably the most important hyperparameter to tune. One strategy for selecting hyperparameters is to sample them randomly (uniformly or on a log scale) and check the test error after a few epochs.
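A minimal sketch of that strategy follows; the search range, number of trials, and the train_and_evaluate stand-in are all hypothetical placeholders for your own training loop:

```python
# Sketch of the hyperparameter strategy described above: sample learning rates
# on a log scale and keep the one with the lowest validation error after a few
# epochs. `train_and_evaluate` is a hypothetical stand-in for a real training run.
import numpy as np

rng = np.random.RandomState(0)

def sample_learning_rate(low=1e-5, high=1e-1):
    # Uniform in log-space, so 1e-5..1e-4 is as likely as 1e-2..1e-1.
    return 10 ** rng.uniform(np.log10(low), np.log10(high))

def train_and_evaluate(lr, epochs=3):
    # Placeholder: replace with a short real training run that returns
    # the validation error; here a fake error surface stands in for it.
    return (np.log10(lr) + 3) ** 2 + rng.rand() * 0.1

candidates = [sample_learning_rate() for _ in range(20)]
errors = [train_and_evaluate(lr) for lr in candidates]
best = candidates[int(np.argmin(errors))]
print("best learning rate: %.2e" % best)
```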

practical-1 - Oxford Deep NLP 2017 course - Practical 1: word2vec

  •    Jupyter

For this practical, you'll be provided with a partially-complete IPython notebook, an interactive web-based Python computing environment that allows us to mix text, code, and interactive plots. We will be training word2vec models on TED Talk and Wikipedia data, using the word2vec implementation included in the Python package gensim. After training the models, we will analyze and visualize the learned embeddings.
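In the same spirit, here is a minimal gensim word2vec sketch; the toy corpus and hyperparameters are illustrative only (the practical itself uses TED Talk and Wikipedia data), and note that in gensim >= 4.0 the size argument is named vector_size:

```python
# Minimal gensim word2vec sketch (toy corpus and hyperparameters are
# illustrative; the practical trains on TED Talk and Wikipedia data).
# In gensim >= 4.0, `size` is named `vector_size`.
from gensim.models import Word2Vec

corpus = [
    ["we", "will", "train", "word", "embeddings"],
    ["word2vec", "learns", "vector", "representations", "of", "words"],
    ["similar", "words", "end", "up", "with", "similar", "vectors"],
]

model = Word2Vec(corpus, size=50, window=3, min_count=1, workers=2)

print(model.wv["word2vec"][:5])                 # the learned embedding vector
print(model.wv.most_similar("words", topn=3))   # nearest neighbours in the space
```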

grenade - Deep Learning in Haskell

  •    Haskell

Grenade is a composable, dependently typed, practical, and fast recurrent neural network library for concise and precise specification of complex networks in Haskell. Because the types are so rich, no specific term-level code is required to construct a network, although it is of course possible and easy to construct and deconstruct the networks and layers explicitly yourself.

Deep_reinforcement_learning_Course - Implementations from the free course Deep Reinforcement Learning with Tensorflow

  •    Jupyter

Deep Reinforcement Learning Course is a free series of blog posts and videos 🆕 about deep reinforcement learning, where we'll learn the main algorithms and how to implement them with Tensorflow. 📜 The articles explain the concepts from the big picture down to the mathematical details behind them.