
spotlight - Deep recommender models using PyTorch.

  •    Python

Spotlight uses PyTorch to build both deep and shallow recommender models. It provides building blocks for loss functions (various pointwise and pairwise ranking losses), representations (shallow factorization representations, deep sequence models), and utilities for fetching or generating recommendation datasets, with the aim of supporting rapid exploration and prototyping of new recommender models. See the full documentation for details.
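
A rough usage sketch (the dataset helper, model, and parameters below follow the project's documented quickstart, but treat them as illustrative rather than canonical):

    # Fit an explicit-feedback factorization model on MovieLens 100K
    # and evaluate it on a held-out split.
    from spotlight.cross_validation import random_train_test_split
    from spotlight.datasets.movielens import get_movielens_dataset
    from spotlight.evaluation import rmse_score
    from spotlight.factorization.explicit import ExplicitFactorizationModel

    dataset = get_movielens_dataset(variant='100K')   # fetched on first use
    train, test = random_train_test_split(dataset)

    model = ExplicitFactorizationModel(loss='regression', n_iter=5)
    model.fit(train)
    print('Test RMSE:', rmse_score(model, test))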

lightfm - A Python implementation of LightFM, a hybrid recommendation algorithm.

  •    Python

LightFM is a Python implementation of a number of popular recommendation algorithms for both implicit and explicit feedback, including efficient implementations of the BPR and WARP ranking losses. It is easy to use, fast (via multithreaded model estimation), and produces high-quality results. It also makes it possible to incorporate both item and user metadata into traditional matrix factorization algorithms: each user and item is represented as the sum of the latent representations of its features, which allows recommendations to generalise to new items (via item features) and to new users (via user features).
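
A minimal sketch of the typical workflow (MovieLens fetcher and WARP loss as in the project's quickstart; the parameters are illustrative):

    import numpy as np
    from lightfm import LightFM
    from lightfm.datasets import fetch_movielens

    data = fetch_movielens(min_rating=4.0)            # dict of train/test sparse matrices

    model = LightFM(loss='warp', no_components=30)    # WARP ranking loss, 30-dim embeddings
    model.fit(data['train'], epochs=20, num_threads=2)

    # Score every item for user 3 and list the top 5 recommendations.
    scores = model.predict(3, np.arange(data['train'].shape[1]))
    print(np.argsort(-scores)[:5])

Item and user metadata enter the model through the item_features and user_features arguments of fit() and predict().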

ranking - Learning to Rank in TensorFlow

  •    Python

We envision that this library will provide a convenient open platform for hosting and advancing state-of-the-art ranking models based on deep learning techniques, and thus facilitate both academic research and industrial applications. TF-Ranking was presented at the premier Information Retrieval conferences SIGIR 2019 and ICTIR 2019; the slides are available here.
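
The sketch below is illustrative only (the feature dimensions and scoring network are made up, not taken from the library's examples); it shows how a Keras model can be compiled with one of TF-Ranking's listwise losses and ranking metrics:

    import tensorflow as tf
    import tensorflow_ranking as tfr

    list_size, num_features = 50, 136                 # hypothetical example shapes

    # A simple univariate scoring function: one score per document in the list.
    inputs = tf.keras.Input(shape=(list_size, num_features))
    hidden = tf.keras.layers.Dense(64, activation='relu')(inputs)
    scores = tf.squeeze(tf.keras.layers.Dense(1)(hidden), axis=-1)
    model = tf.keras.Model(inputs, scores)

    model.compile(
        optimizer='adam',
        loss=tfr.keras.losses.SoftmaxLoss(),          # listwise softmax cross-entropy
        metrics=[tfr.keras.metrics.NDCGMetric(topn=5)],
    )
    # model.fit(features, labels) expects relevance labels of shape (batch, list_size).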

stringsifter - A machine learning tool that ranks strings based on their relevance for malware analysis

  •    Python

StringSifter is a machine learning tool that automatically ranks strings based on their relevance for malware analysis. The pip install command installs two runnable scripts, flarestrings and rank_strings, into your Python environment. When developing from source, use pipenv run flarestrings and pipenv run rank_strings.
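The two scripts are designed to be chained, for example flarestrings suspicious_sample.exe | rank_strings (the sample file name here is just a placeholder).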

allRank - allRank is a framework for training learning-to-rank neural models based on PyTorch.

  •    Python

allRank provides an easy and flexible way to experiment with various LTR neural network models and loss functions. It is easy to add a custom loss and to configure the model and the training procedure. We hope that allRank will facilitate both research in neural LTR and its industrial applications. To help you get started, we provide a run_example.sh script, which generates dummy ranking data in libsvm format and trains a Transformer model on it using the provided example config.json file. Once you run the script, the dummy data can be found in the dummy_data directory and the results of the experiment in the test_run directory. Docker is required to run the example.

cs-ranking - Context-sensitive ranking in Python with TensorFlow

  •    Python

CS-Rank is a Python package for context-sensitive ranking algorithms. Check out our interactive notebooks to quickly find out what our package can do.

elasticsearch-ltr-demo - This demo uses data from TheMovieDB (TMDB) to demonstrate using Ranklib learning to rank models with Elasticsearch

  •    HTML

The demo uses data from TheMovieDB (TMDB) to show Ranklib learning-to-rank models working with Elasticsearch. To run it, start a supported version of Elasticsearch and follow the instructions to install the learning-to-rank plugin.
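Plugin installation goes through Elasticsearch's standard mechanism, roughly bin/elasticsearch-plugin install followed by the plugin artifact URL; the exact URL depends on your Elasticsearch version, so follow the demo's instructions.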

pyltr - Python learning to rank (LTR) toolkit

  •    Python

pyltr is a Python learning-to-rank toolkit with ranking models, evaluation metrics, data wrangling helpers, and more. This software is licensed under the BSD 3-clause license (see LICENSE.txt).
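
A short sketch of the intended workflow (file names and hyperparameters are placeholders; the API calls follow the project's README):

    import pyltr

    # Load LETOR/SVMLight-formatted ranking data.
    with open('train.txt') as f:
        TX, Ty, Tqids, _ = pyltr.data.letor.read_dataset(f)
    with open('test.txt') as f:
        EX, Ey, Eqids, _ = pyltr.data.letor.read_dataset(f)

    metric = pyltr.metrics.NDCG(k=10)
    model = pyltr.models.LambdaMART(metric=metric, n_estimators=200, learning_rate=0.05)
    model.fit(TX, Ty, Tqids)

    predictions = model.predict(EX)
    print('NDCG@10:', metric.calc_mean(Eqids, Ey, predictions))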


binge - Recommendation models that use binary rather than floating point operations at prediction time

  •    TeX

This repository contains an implementation of recommendation models that use binary rather than floating point operations at prediction time. This makes them much faster (and less memory intensive), but also less accurate. The details are in the paper and in the slides.
