stanford-cs229 - Exercise answers to the problem sets from the 2017 machine learning course cs229 by Andrew Ng at Stanford


I tried to record all the details in Jupyter notebooks. If you see any mistake, please let me know by opening a new issue. As for reinforcement learning, I've also implemented value iteration, policy iteration, SARSA, and Q-learning in JavaScript for a gridworld, with a web demo.
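For reference, the tabular Q-learning mentioned above can be sketched as follows. This is a minimal illustration in Python on a hypothetical 4x4 gridworld, not the repository's JavaScript demo; the grid size, rewards, and hyperparameters are assumptions chosen for the example.

```python
import random

N = 4                      # grid is N x N; a state is a (row, col) tuple
GOAL = (3, 3)              # reaching the goal ends the episode
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; bumping into a wall leaves the state unchanged."""
    r = min(max(state[0] + action[0], 0), N - 1)
    c = min(max(state[1] + action[1], 0), N - 1)
    next_state = (r, c)
    reward = 1.0 if next_state == GOAL else -0.04  # small per-step penalty
    return next_state, reward, next_state == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    random.seed(seed)
    Q = {(r, c): [0.0] * len(ACTIONS) for r in range(N) for c in range(N)}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = Q[state].index(max(Q[state]))
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: bootstrap from the greedy next-state value
            target = reward + (0.0 if done else gamma * max(Q[nxt]))
            Q[state][a] += alpha * (target - Q[state][a])
            state = nxt
    return Q

Q = q_learning()
# Greedy action at the start state should head toward the goal (down or right).
best = ACTIONS[Q[(0, 0)].index(max(Q[(0, 0)]))]
```

SARSA differs only in the update target: it bootstraps from the action actually taken next rather than the greedy maximum.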



Related Projects

Coursera-Machine-Learning - Coursera Machine Learning - Python code

  •    Jupyter

This repository contains Python implementations of certain exercises from the course by Andrew Ng. For a number of assignments in the course you are instructed to create complete, stand-alone Octave/MATLAB implementations of certain algorithms (linear and logistic regression, for example). The rest of the assignments depend on additional code provided by the course authors. For most of the code in this repository I have instead used existing Python implementations such as scikit-learn.

Stanford-CS-229-CN - A Chinese translation of the Stanford CS229 lecture notes

  •    HTML

A Chinese translation of the Stanford CS229 lecture notes.

machine-learning-coursera - Programming assignments from Coursera's Machine Learning course taught by Andrew Ng

  •    Matlab

These are the programming assignments from Coursera's Machine Learning course taught by Andrew Ng.

machine_learning - Python coded examples and documentation of machine learning algorithms.

  •    Python

This repo contains a collection of IPython notebooks detailing various machine learning algorithms. In general, the mathematics follows that presented in Dr. Andrew Ng's Machine Learning course taught at Stanford University (materials available from iTunes U, Stanford Machine Learning), Dr. Tom Mitchell's course at Carnegie Mellon (materials available here), and Christopher M. Bishop's "Pattern Recognition and Machine Learning". Unless otherwise noted, the Python code is original, and any errors or omissions should be attributed to me and not the aforementioned authors. Each ipynb provides a list of the pertinent reading material. It is suggested that the material be read in the order provided.

machine-learning-yearning - Translation of "Machine Learning Yearning" by Andrew Ng


Translation of "Machine Learning Yearning" by Andrew Ng.

deep-learning-coursera - Deep Learning Specialization by Andrew Ng on Coursera.

  •    Jupyter

This repo contains all my work for this specialization. Unless otherwise specified, all the code, quiz questions, screenshots, and images are taken from the Deep Learning Specialization on Coursera. As a CS major and a long-time self-taught learner, I have completed many CS-related MOOCs on Coursera, Udacity, Udemy, and edX. I understand the hard time you spend understanding new concepts and debugging your programs. There are discussion forums on most MOOC platforms, but even a question with a detailed description may take some time to be answered. I released these solutions for reference purposes only; they may help you save some time. I hope you don't copy any part of the code (the programming assignments are fairly easy if you read the instructions carefully) or look at the quiz solutions before you start your own adventure. This is almost the simplest deep learning course I have ever taken, but that simplicity rests on the fabulous course content and structure. It's a treasure from the course team.

machine-learning-yearning-cn - MACHINE LEARNING YEARNING BY ANDREW NG

  •    CSS

This is a human-readable summary of (and not a substitute for) the license. Disclaimer.

UFLDL-tutorial - Deep Learning and Unsupervised Feature Learning Tutorial Solutions

  •    Jupyter

These are solutions to the exercises up at the Stanford OpenClassroom Deep Learning class and Andrew Ng's UFLDL Tutorial. When I was solving these, I looked around for copies of the solutions so I could compare notes, because debugging learning algorithms is often tedious in a way that isn't educational, but almost everything I found was incomplete or obviously wrong. I don't promise that these don't have bugs, but they at least give outputs within the range of the expected outputs for the assignments. I've attempted to make this Octave compatible, so that you can run it with free software. It seems to work, but the results are slightly different. One side effect of this is that I'm using fminlbfgs instead of minFunc. It ran for me with Octave 3.6.4; my understanding is that Octave 3.8 and newer versions aren't completely backwards compatible, so you may run into problems with the current version of Octave. Pull requests welcome, of course.

Deep-Learning-Coursera - Deep Learning Specialization by Andrew Ng

  •    Jupyter

These are my personal projects for the course. The course covers deep learning from beginner to advanced level. Highly recommended for anyone wanting to break into AI.

stanford-tensorflow-tutorials - This repository contains code examples for Stanford's course: TensorFlow for Deep Learning Research

  •    Python

This repository contains code examples for the course CS 20: TensorFlow for Deep Learning Research. It will be updated as the class progresses. A detailed syllabus and lecture notes can be found here. For this course, I use Python 3.6 and TensorFlow 1.4.1. For setup instructions and the list of dependencies, please see the setup folder of this repository.


  •    Python

This repository contains my personal notes and summaries on the specialization courses by Andrew Ng, including all slides and notebooks, plus code and some material. I've enjoyed every little bit of the course and hope you enjoy my notes too. If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought-after skills in tech, and it will help you become good at it.

flare-fakenet-ng - FakeNet-NG - Next Generation Dynamic Network Analysis Tool

  •    Python

FakeNet-NG is a next-generation dynamic network analysis tool for malware analysts and penetration testers. It is open source and designed for the latest versions of Windows (and Linux, for certain modes of operation). FakeNet-NG is based on the excellent Fakenet tool developed by Andrew Honig and Michael Sikorski. The tool allows you to intercept and redirect all or specific network traffic while simulating legitimate network services. Using FakeNet-NG, malware analysts can quickly identify malware's functionality and capture network signatures. Penetration testers and bug hunters will find FakeNet-NG's configurable interception engine and modular framework highly useful when testing an application's specific functionality and prototyping PoCs.

Octave - my octave exercises for 2011 stanford machine learning class, posted after the due date of course

  •    Matlab

I took this class 5+ years ago. Please don't open any issues or pull requests. All examples come as is. Octave is a high-level, open-source programming language similar to MATLAB. I used it for the 2011 Stanford Machine Learning class.

Stanford-Machine-Learning-Course - machine learning course programming exercise

  •    Matlab

machine learning course programming exercise

deep-learning-specialization-coursera - Deep Learning Specialization by Andrew Ng on Coursera.

  •    Jupyter

This repo contains all my work for this specialization. Unless otherwise specified, all the code, quiz questions, screenshots, and images are taken from the Deep Learning Specialization on Coursera.

statlearning-notebooks - Python notebooks for exercises covered in Stanford statlearning class (where exercises were in R)


IPython notebooks that implement the R code for the StatLearning: Statistical Learning online course from Stanford University, taught by Profs. Trevor Hastie and Rob Tibshirani. The original code for the classes was written in R. The notebooks are also accessible from "A gallery of interesting IPython Notebooks" under the Statistics, Machine Learning and Data Science section. More information is in my blog post.

Deep-Learning-Tricks - Enumerate diverse machine learning training tricks.


This is an attempt to enumerate different machine learning training tricks I have gathered, as well as some network architectures. The goal is to briefly describe each trick and give an intuition about why it works. My knowledge is quite limited, so this is prone to errors and imprecisions. This should be a collaborative work, so feel free to complete or correct it. Most of the tricks may seem trivial to those with some experience in machine learning, but I feel that while there is a lot of very good theoretical material available for machine learning, there is still a lack of practical advice; such advice would really have helped me when I started. The excellent CS231n Stanford course already has a good list of training tricks.

What: The learning rate is probably the most important hyperparameter to tune. One strategy for selecting hyperparameters is to sample them randomly (uniformly or on a log scale) and check the test error after a few epochs.
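The random-search strategy above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: `train_and_eval` is a hypothetical stand-in that runs gradient descent on a 1-D quadratic and returns the final loss as the "error"; the sampling bounds and trial count are arbitrary choices, not recommendations.

```python
import math
import random

def sample_lr(low=1e-5, high=1e0):
    """Sample a learning rate log-uniformly between low and high."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

def train_and_eval(lr, steps=100):
    """Hypothetical trainer: gradient descent on f(w) = (w - 3)^2.

    Returns the final loss, standing in for the test error after a few epochs.
    """
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
        if not math.isfinite(w):      # diverged: learning rate too large
            return float("inf")
    return (w - 3) ** 2

# Random search: sample a handful of learning rates, keep the best one.
random.seed(0)
results = [(lr, train_and_eval(lr)) for lr in (sample_lr() for _ in range(20))]
best_lr, best_err = min(results, key=lambda t: t[1])
```

Sampling in log space matters here: a uniform draw from [1e-5, 1] would almost never propose values like 1e-4, even though learning rates are naturally compared across orders of magnitude.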