fastFM - A Library for Factorization Machines


The library fastFM is an academic project; the time and resources spent developing it are justified by citations of the software. If you publish scientific articles using fastFM, please cite the accompanying article (BibTeX entry in citation.bib). The library lets you use Factorization Machines in Python (2.7 & 3.x) with the well-known scikit-learn API. All performance-critical code has been written in C and wrapped with Cython. fastFM provides stochastic gradient descent (SGD) and coordinate descent (CD) optimization routines as well as Markov Chain Monte Carlo (MCMC) for Bayesian inference. The solvers can be used for regression, classification and ranking problems. Detailed usage instructions can be found in the online documentation and on arXiv.

http://ibayer.github.io/fastFM
https://github.com/ibayer/fastFM
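
A minimal usage sketch of the scikit-learn-style API with the ALS solver; hyperparameter values and the random data are illustrative only, and fastFM expects a scipy sparse feature matrix:

    # Minimal sketch of fastFM's scikit-learn-style API (ALS solver).
    # Hyperparameters and data are illustrative, not recommendations.
    import numpy as np
    import scipy.sparse as sp
    from fastFM import als

    # fastFM expects a sparse feature matrix (e.g. one-hot encoded users/items).
    X = sp.csc_matrix(np.random.rand(100, 20) > 0.8, dtype=np.float64)
    y = np.random.rand(100)

    fm = als.FMRegression(n_iter=100, rank=8, l2_reg_w=0.1, l2_reg_V=0.5)
    fm.fit(X, y)
    y_pred = fm.predict(X)

The sgd and mcmc modules expose the same fit/predict interface, so the solver can be swapped without changing the surrounding code.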

Related Projects

xlearn - High performance, easy-to-use, and scalable ML package, including linear model (LR), factorization machines (FM), and field-aware factorization machines (FFM) for Python and command line interface

  •    C++

xLearn is a high-performance, easy-to-use, and scalable machine learning package for large-scale problems. It is especially useful on large-scale sparse data, which is common in Internet services such as online advertising and recommender systems. If you are a user of liblinear, libfm, or libffm, xLearn offers a faster alternative. xLearn is written in carefully designed and optimized high-performance C++. The system is designed to maximize CPU and memory utilization, provide cache-aware computation, and support lock-free learning. Combining these techniques, xLearn is 5x-13x faster than comparable systems.
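
A rough sketch of training an FFM model through xLearn's Python bindings; the file names (train.ffm, valid.ffm, test.ffm) and parameter values are placeholders, not part of the original description:

    # Rough sketch of xLearn's Python API for a field-aware FM model.
    import xlearn as xl

    ffm_model = xl.create_ffm()            # field-aware factorization machine
    ffm_model.setTrain("./train.ffm")      # training data in libffm format
    ffm_model.setValidate("./valid.ffm")   # validation data for early stopping

    param = {"task": "binary", "lr": 0.2, "lambda": 0.002, "metric": "auc"}
    ffm_model.fit(param, "./model.out")    # train and save the model

    ffm_model.setTest("./test.ffm")
    ffm_model.predict("./model.out", "./output.txt")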

spotlight - Deep recommender models using PyTorch.

  •    Python

Spotlight uses PyTorch to build both deep and shallow recommender models. By providing both a slew of building blocks for loss functions (various pointwise and pairwise ranking losses), representations (shallow factorization representations, deep sequence models), and utilities for fetching (or generating) recommendation datasets, it aims to be a tool for rapid exploration and prototyping of new recommender models. See the full documentation for details.
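
A sketch of Spotlight's factorization workflow on its bundled MovieLens helper; the model settings are illustrative and exact signatures may differ between releases:

    # Sketch of an explicit-feedback factorization model with Spotlight.
    from spotlight.cross_validation import random_train_test_split
    from spotlight.datasets.movielens import get_movielens_dataset
    from spotlight.evaluation import rmse_score
    from spotlight.factorization.explicit import ExplicitFactorizationModel

    dataset = get_movielens_dataset(variant="100K")
    train, test = random_train_test_split(dataset)

    model = ExplicitFactorizationModel(n_iter=10, embedding_dim=32)
    model.fit(train)
    print("RMSE:", rmse_score(model, test))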

lightfm - A Python implementation of LightFM, a hybrid recommendation algorithm.

  •    Python

LightFM is a Python implementation of a number of popular recommendation algorithms for both implicit and explicit feedback, including efficient implementations of the BPR and WARP ranking losses. It's easy to use, fast (via multithreaded model estimation), and produces high-quality results. It also makes it possible to incorporate both item and user metadata into traditional matrix factorization algorithms: it represents each user and item as the sum of the latent representations of their features, allowing recommendations to generalise to new items (via item features) and to new users (via user features).
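
A minimal sketch using LightFM's bundled MovieLens sample and the WARP loss; loss choice, epoch count and k are illustrative, and item/user metadata would be passed via the item_features and user_features arguments:

    # Sketch of LightFM with a WARP ranking loss on the MovieLens sample.
    from lightfm import LightFM
    from lightfm.datasets import fetch_movielens
    from lightfm.evaluation import precision_at_k

    data = fetch_movielens(min_rating=5.0)

    model = LightFM(loss="warp")           # WARP ranking loss
    model.fit(data["train"], epochs=30, num_threads=2)

    print(precision_at_k(model, data["test"], k=5).mean())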

buffalo - TOROS Buffalo: A fast and scalable production-ready open source project for recommender systems

  •    Python

Buffalo is a fast and scalable production-ready open source project for recommender systems. Buffalo uses system resources effectively, enabling high performance even on low-spec machines. The implementation is optimized for CPU and SSD, and it also performs well with a GPU accelerator. Buffalo, developed by Kakao, has been used reliably in production for various Kakao services. The software is licensed under the Apache 2 license.

librec - LibRec: A Leading Java Library for Recommender Systems

  •    Java

LibRec (http://www.librec.net) is a Java library for recommender systems (Java 1.7 or higher required). It implements a suite of state-of-the-art recommendation algorithms, aiming to solve two classic recommendation tasks: rating prediction and item ranking. A demo movie recommender system built with LibRec is also available.


rumale - Rumale is a machine learning library in Ruby

  •    Ruby

Rumale (Ruby machine learning) is a machine learning library in Ruby. Rumale provides machine learning algorithms with interfaces similar to scikit-learn in Python. Rumale supports Linear / Kernel Support Vector Machine, Logistic Regression, Linear Regression, Ridge, Lasso, Kernel Ridge, Factorization Machine, Naive Bayes, Decision Tree, AdaBoost, Gradient Tree Boosting, Random Forest, Extra-Trees, K-nearest neighbor classifier, K-Means, K-Medoids, Gaussian Mixture Model, DBSCAN, SNN, Power Iteration Clustering, Multidimensional Scaling, t-SNE, Principal Component Analysis, Kernel PCA and Non-negative Matrix Factorization. The project was formerly known as "SVMKit"; if you are using SVMKit, please install Rumale and replace the SVMKit constants with Rumale.

implicit - Fast Python Collaborative Filtering for Implicit Feedback Datasets

  •    Python

Fast Python collaborative filtering for implicit feedback datasets. It implements Alternating Least Squares as described in the papers Collaborative Filtering for Implicit Feedback Datasets and Applications of the Conjugate Gradient Method for Implicit Feedback Collaborative Filtering.
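
A rough sketch of the library's ALS model; the expected matrix orientation and the recommend() signature have changed across versions, so treat the details below as approximate (recent releases expect a user-item CSR matrix):

    # Approximate sketch of the implicit library's ALS workflow.
    import scipy.sparse as sp
    from implicit.als import AlternatingLeastSquares

    # Synthetic user-item confidence matrix, for illustration only.
    user_items = sp.random(1000, 500, density=0.01, format="csr")

    model = AlternatingLeastSquares(factors=64, regularization=0.05, iterations=15)
    model.fit(user_items)

    # Recommend items for user 0; returns item ids and scores in recent versions.
    ids, scores = model.recommend(0, user_items[0], N=10)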

libfm - Library for factorization machines

  •    C++

Factorization machines (FM) are a generic approach that allows most factorization models to be mimicked through feature engineering. In this way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables with large domains. libFM is a software implementation of factorization machines that features stochastic gradient descent (SGD) and alternating least squares (ALS) optimization as well as Bayesian inference using Markov Chain Monte Carlo (MCMC). Please see the libFM 1.4.2 manual for details about how to use libFM. If you have questions, please visit the forum.
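
For reference, the second-order factorization machine model implemented by libFM (and the FM libraries above) predicts

    \hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j

where w_0 is a global bias, the w_i are per-feature weights, and each feature i is assigned a k-dimensional latent vector v_i whose inner products model all pairwise feature interactions.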

Surprise - A Python scikit for building and analyzing recommender systems

  •    Python

Surprise is a Python scikit for building and analyzing recommender systems that deal with explicit rating data. The name SurPRISE (roughly :) ) stands for Simple Python RecommendatIon System Engine.
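
A short sketch of Surprise's cross-validation workflow on the built-in ml-100k dataset; the choice of algorithm and fold count is illustrative:

    # Sketch of evaluating an SVD model with Surprise.
    from surprise import SVD, Dataset
    from surprise.model_selection import cross_validate

    data = Dataset.load_builtin("ml-100k")   # downloads the dataset on first use
    cross_validate(SVD(), data, measures=["RMSE", "MAE"], cv=5, verbose=True)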

universal-recommender - Highly configurable recommender based on PredictionIO and Mahout's Correlated Cross-Occurrence algorithm

  •    Scala

The Universal Recommender (UR) is a new type of collaborative filtering recommender based on an algorithm that can use data from a wide variety of user preference indicators, called the Correlated Cross-Occurrence (CCO) algorithm. Unlike the matrix factorization embodied in tools like MLlib's ALS, CCO can ingest any number of user actions, events, profile data, and contextual information, and then serve results in a fast and scalable way. It also supports item properties for building flexible business rules for filtering and boosting recommendations, and can therefore be considered a hybrid collaborative filtering and content-based recommender. Most recommenders can only use conversion events, such as buy or rate; using everything we know about a user and their context allows much better prediction of their preferences.

nimfa - Nimfa: Nonnegative matrix factorization in Python

  •    Python

Nimfa is a Python module that implements many algorithms for nonnegative matrix factorization. Nimfa is distributed under the BSD license. The project was started in 2011 by Marinka Zitnik as a Google Summer of Code project, and many volunteers have contributed since then. See the AUTHORS file for a complete list of contributors.
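
A small sketch of Nimfa's basic NMF interface; the rank, iteration count and random data are illustrative only:

    # Sketch of a standard NMF run with Nimfa.
    import numpy as np
    import nimfa

    V = np.random.rand(40, 100)                        # non-negative data matrix
    nmf = nimfa.Nmf(V, rank=10, max_iter=100, seed="random_vcol")
    fit = nmf()                                        # run the factorization

    W = fit.basis()                                    # basis matrix (40 x 10)
    H = fit.coef()                                     # coefficient matrix (10 x 100)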

paracel - Distributed training framework with parameter server

  •    C++

Paracel is a distributed computational framework designed for many machine learning problems: logistic regression, SVD, matrix factorization (BFGS, SGD, ALS, CG), LDA, Lasso, and more. Paracel partitions both the massive dataset and the massive parameter space. Unlike MapReduce-like systems, Paracel offers a simple communication model that lets you work with a global, distributed key-value store called a parameter server.

tensorly - TensorLy: Tensor Learning in Python.

  •    Python

TensorLy is a Python library that aims to make tensor learning simple and accessible. It makes it easy to perform tensor decomposition, tensor learning and tensor algebra. Its backend system allows computation to run seamlessly with NumPy, PyTorch, JAX, MXNet, TensorFlow or CuPy, and methods to run at scale on CPU or GPU. The only prerequisite is Python 3; the easiest way to install TensorLy is via the Anaconda distribution.
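
A brief sketch of a CP (PARAFAC) decomposition with TensorLy; the tensor shape and rank are illustrative, and the exact return value of parafac differs slightly between releases:

    # Sketch of a CP decomposition with TensorLy.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    tensor = tl.tensor(np.random.rand(3, 4, 5))
    weights, factors = parafac(tensor, rank=2)     # recent versions return (weights, factors)
    reconstruction = tl.cp_to_tensor((weights, factors))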

implicit-mf - Implicit matrix factorization as outlined in http://yifanhu.net/PUB/cf.pdf.

  •    Python

Python implementation of implicit matrix factorization as outlined in Collaborative Filtering for Implicit Feedback Datasets. Requires numpy version 1.7.1 or greater and scipy version 0.12.0 or greater.
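
For context, the model from that paper minimizes a confidence-weighted squared loss over all user-item pairs,

    \min_{x_*, y_*} \sum_{u,i} c_{ui} \left( p_{ui} - x_u^\top y_i \right)^2 + \lambda \left( \sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2 \right)

where p_{ui} indicates whether user u interacted with item i and the confidence c_{ui} = 1 + \alpha r_{ui} grows with the observed interaction strength r_{ui}; the two factor matrices are solved by alternating least squares.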

ml-ease - ADMM based large scale logistic regression

  •    Java

ml-ease is an open-source project for large-scale machine learning, including ADMM-based logistic regression and matrix factorization.

lrslibrary - Low-Rank and Sparse Tools for Background Modeling and Subtraction in Videos

  •    Matlab

Low-rank and sparse tools for background modeling and subtraction in videos. The LRSLibrary provides a collection of low-rank and sparse decomposition algorithms in MATLAB. The library was designed for motion segmentation in videos, but it can also be used (or adapted) for other computer vision problems (for more information, please see this page). Currently the LRSLibrary offers more than 100 algorithms based on matrix and tensor methods. The LRSLibrary was tested successfully on several MATLAB versions (e.g. R2014, R2015, R2016, R2017, on both x86 and x64), and requires at least R2014b.

libffm - A Library for Field-aware Factorization Machines

  •    C++

A Library for Field-aware Factorization Machines
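
As a quick reference, field-aware factorization machines replace the single latent vector per feature in an FM with one latent vector per (feature, field) pair, so the pairwise interaction term becomes

    \phi(\mathbf{w}, \mathbf{x}) = \sum_{j_1=1}^{n} \sum_{j_2=j_1+1}^{n} \langle \mathbf{w}_{j_1, f_{j_2}}, \mathbf{w}_{j_2, f_{j_1}} \rangle \, x_{j_1} x_{j_2}

where f_j denotes the field to which feature j belongs.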





