
distiller - Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research

  •    Python

Distiller is an open-source Python package for neural network compression research. Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
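
To make the idea concrete, here is a minimal sketch of magnitude-based weight pruning in plain PyTorch, the kind of sparsity-inducing step Distiller schedules and analyzes. This is not Distiller's own API, and the threshold is an arbitrary value chosen for the example.

    import torch
    import torch.nn as nn

    # One layer stands in for a full model; prune weights whose magnitude
    # falls below a cutoff, the simplest sparsity-inducing method.
    layer = nn.Linear(256, 10)
    threshold = 1e-2  # arbitrary cutoff for illustration

    with torch.no_grad():
        mask = layer.weight.abs() >= threshold  # keep large-magnitude weights
        layer.weight.mul_(mask.float())         # zero out the rest

    sparsity = 1.0 - mask.float().mean().item()
    print(f"weight sparsity after pruning: {sparsity:.2%}")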

neural-structured-learning - Training neural models with structured signals.

  •    Python

Neural Structured Learning (NSL) is a new learning paradigm for training neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit, as represented by a graph [1,2,5], or implicit, as induced by adversarial perturbation [3,4]. Structured signals are commonly used to represent relations or similarity among samples that may be labeled or unlabeled. Leveraging these signals during neural network training harnesses both labeled and unlabeled data, which can improve model accuracy, particularly when the amount of labeled data is relatively small. Additionally, models trained with adversarially perturbed samples have been shown to be robust against malicious attacks designed to mislead a model's prediction or classification.
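
As a concrete illustration, the sketch below wraps a Keras model with NSL's adversarial-regularization API, following the pattern in the project's tutorials. The layer sizes, multiplier, and step size are placeholder choices, not recommended settings.

    import neural_structured_learning as nsl
    import tensorflow as tf

    # A plain Keras model to be regularized with adversarial perturbations.
    base_model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])

    # Configure the implicit structure: perturb inputs adversarially and
    # penalize the model's sensitivity to those perturbations during training.
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
    adv_model = nsl.keras.AdversarialRegularization(
        base_model, label_keys=['label'], adv_config=adv_config)

    adv_model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'])

    # The wrapper expects features and labels bundled in one dict, e.g.:
    # adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)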

pyowl - Ordered Weighted L1 regularization for classification and regression in Python

  •    Python

The OWL norm, also known as the Sorted L1 norm or SLOPE, generalizes L1, L_inf, and OSCAR. In particular, OSCAR selects coefficients in groups with equal values, handling highly correlated features in a robust way.
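
Concretely, the OWL norm of a vector x with non-increasing weights w is the dot product of w with the absolute values of x sorted in decreasing order. The numpy sketch below (not pyowl's API) shows the norm and the weight vectors that recover each special case; lam1 and lam2 are illustrative regularization strengths.

    import numpy as np

    def owl_norm(x, w):
        # owl(x; w) = sum_i w_i * |x|_[i], with |x| sorted in decreasing order
        return np.sort(np.abs(x))[::-1] @ w

    x = np.array([0.5, -2.0, 1.0])
    n, lam1, lam2 = len(x), 1.0, 0.5

    w_l1 = np.full(n, lam1)                            # equal weights -> lam1 * ||x||_1
    w_linf = np.r_[lam1, np.zeros(n - 1)]              # one weight    -> lam1 * ||x||_inf
    w_oscar = lam1 + lam2 * np.arange(n - 1, -1, -1)   # linear decay  -> OSCAR

    assert np.isclose(owl_norm(x, w_l1), lam1 * np.abs(x).sum())
    assert np.isclose(owl_norm(x, w_linf), lam1 * np.abs(x).max())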

Deep-Learning-101 - The tools and syntax you need to code neural networks from day one.

  •    Jupyter

When I started learning deep learning, I spent two weeks researching: I selected tools, compared cloud services, and surveyed online courses. In retrospect, I wish I could have built neural networks from day one, and that is what this article sets out to do. You don't need any prerequisites, though a basic understanding of Python, the command line, and Jupyter notebooks will help. This repository contains the code experiments from the article.

bigKRLS - Now on CRAN, bigKRLS combines bigmemory & RcppArmadillo (C++) for speed in a new Kernel Regularized Least Squares algorithm

  •    R

Kernel Regularized Least Squares (KRLS) is a kernel-based, complexity-penalized method developed by Hainmueller and Hazlett (2013), designed to minimize parametric assumptions while maintaining interpretive clarity. Here, we introduce bigKRLS, an updated version of the original KRLS R package with algorithmic and implementation improvements that optimize speed and memory usage. These improvements allow users to straightforwardly estimate pairwise regression models with KRLS once N > 2500. bigKRLS has been available on CRAN since April 15, 2017. You may also be interested in our working paper, accepted by Political Analysis, which demonstrates the utility of bigKRLS by analyzing the 2016 US presidential election; our replication materials can be found on Dataverse, and our GitHub repo contains examples too. On the C++ side, we re-implement most major computations in the model via Rcpp and RcppArmadillo. These changes produce up to a 50% runtime decrease compared to the original R implementation.
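
For intuition, the KRLS estimator has a closed form: with kernel matrix K and regularization lambda, the choice coefficients are c = (K + lambda * I)^{-1} y. The numpy sketch below illustrates that core computation only, not bigKRLS's R/C++ implementation (which also returns pointwise derivatives and variance estimates); the bandwidth and lambda here are illustrative, and in practice lambda is chosen by cross-validation.

    import numpy as np

    def gaussian_kernel(X, bandwidth):
        # K_ij = exp(-||x_i - x_j||^2 / bandwidth)
        sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_dists / bandwidth)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    K = gaussian_kernel(X, bandwidth=X.shape[1])      # bandwidth = ncol(X), a common default
    lam = 0.5                                         # illustrative; tuned in practice
    c = np.linalg.solve(K + lam * np.eye(len(y)), y)  # c = (K + lam*I)^{-1} y
    yhat = K @ c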

SparseRegression.jl - Statistical Models with Regularization in Pure Julia

  •    Julia

This package relies on primitives defined in the JuliaML ecosystem to implement high-performance algorithms for penalized linear models that often produce sparsity in the coefficients.
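
The standard mechanism behind such sparsity is a proximal-gradient update with soft-thresholding. The sketch below shows that general technique (ISTA for the lasso) in numpy, not SparseRegression.jl's Julia API; the lambda and iteration count are illustrative.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1: shrink toward zero, clip at zero.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(X, y, lam=0.1, iters=500):
        # Minimize 0.5/n * ||y - Xb||^2 + lam * ||b||_1 by proximal gradient.
        n, p = X.shape
        step = n / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
        b = np.zeros(p)
        for _ in range(iters):
            grad = X.T @ (X @ b - y) / n
            b = soft_threshold(b - step * grad, step * lam)
        return b

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    b_true = np.zeros(20)
    b_true[:3] = [2.0, -1.0, 0.5]
    y = X @ b_true + 0.1 * rng.normal(size=100)

    b = ista(X, y)
    print(np.flatnonzero(np.abs(b) > 1e-6))  # typically recovers the first three indices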


machine-learning-course - R code for the assignments of the Coursera machine learning course

  •    R

These are the R versions of the assignments from Prof. Andrew Ng's online machine learning course (MOOC) on Coursera. This repository provides starter code for solving the assignments in the R statistical software; the completed assignments are also available beside each exercise file.