AmpliGraph - Python library for Representation Learning on Knowledge Graphs https://docs

  •    Python

Open-source library based on TensorFlow that predicts links between concepts in a knowledge graph. AmpliGraph is a suite of neural machine learning models for relational learning, a branch of machine learning that deals with supervised learning on knowledge graphs.
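
As a rough illustration of the kind of model AmpliGraph provides (not its actual API), a knowledge-graph embedding model learns vectors for entities and relations and scores candidate triples; a minimal TransE-style sketch in NumPy:

```python
# Minimal sketch of the idea behind knowledge-graph embedding models such as
# those in AmpliGraph (illustrative only; this is NOT the AmpliGraph API).
import numpy as np

rng = np.random.default_rng(0)
entities = {"acetaminophen": 0, "fever": 1, "ibuprofen": 2}
relations = {"treats": 0}

dim = 16
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def transe_score(s, p, o):
    """TransE-style score: smaller ||e_s + r_p - e_o|| means a more plausible triple."""
    return -np.linalg.norm(E[entities[s]] + R[relations[p]] - E[entities[o]])

# Rank candidate objects for an incomplete triple (link prediction).
candidates = sorted(entities, key=lambda o: -transe_score("ibuprofen", "treats", o))
print(candidates)
```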

awesome-network-embedding - A curated list of network embedding techniques.

  •    

Also called network representation learning, graph embedding, knowledge embedding, etc. The task is to learn representations of the vertices of a given network.
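
As a minimal illustration of the task (one simple baseline, not representative of the more sophisticated techniques collected in the list), vertex embeddings can be obtained from a truncated SVD of the adjacency matrix:

```python
# One very simple baseline for "learning representations of the vertices":
# embed nodes via a truncated SVD of the adjacency matrix. Most techniques in
# the list (random-walk, deep, or attributed methods) are far more sophisticated.
import numpy as np

# Toy undirected graph: edges between node indices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

k = 2                                  # embedding dimension
U, S, _ = np.linalg.svd(A)
embeddings = U[:, :k] * S[:k]          # one k-dimensional vector per vertex
print(embeddings)
```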

ConMask - ConMask model described in the paper "Open-world Knowledge Graph Completion".

  •    Python

Code for the AAAI'18 paper "Open-world Knowledge Graph Completion". Warning: the current implementation needs a machine with four GPUs; this could be reduced to one GPU, but that requires code modification.

bimu - Bilingual Learning of Multi-sense Embeddings with Discrete Autoencoders

  •    Python

The individual similarity scores, presented as averages in the paper, are reported in the appendix. Run python3.4 examples/run_bimu.py --help for the full list of options, and set the Theano flags to THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32.
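
Since Theano reads THEANO_FLAGS when it is first imported, one way to set the flags is from a small Python launcher before starting the script (flag values and script path taken from the description above; this is only a sketch, not part of the repository):

```python
# Set the Theano flags in the environment before the training script starts,
# then invoke the example script from the repository.
import os
import subprocess

os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
subprocess.run(["python3.4", "examples/run_bimu.py", "--help"], check=True)
```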

hmm-reps - Hidden Markov models for word representations

  •    Python

Learn discrete and continuous word representations with Hidden Markov models, including variants defined over unlabeled and labeled parse trees. Note, however, that the running time is relatively slow and is especially sensitive to the number of states; a speed-up would be possible through the use of sparse matrices, but in several places the replacement is not trivial.
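
To see why the number of states dominates the cost, note that each step of the HMM forward recursion multiplies by an S x S transition matrix; with a sparse transition matrix the per-step cost drops to the number of non-zeros. A generic sketch (not code from hmm-reps):

```python
# Each step of the HMM forward recursion is a multiplication with an S x S
# transition matrix; scipy.sparse reduces that per-step cost when transitions
# are sparse. Generic illustration, not the repository's implementation.
import numpy as np
from scipy import sparse

S, T = 1000, 50                                # states, sequence length
A = sparse.random(S, S, density=0.01, format="csr", random_state=0)
row_sums = np.asarray(A.sum(axis=1)).ravel()
row_sums[row_sums == 0] = 1.0
A = sparse.diags(1.0 / row_sums) @ A           # row-normalise to transition probabilities
emission = np.random.rand(T, S)                # p(observation_t | state)

alpha = np.full(S, 1.0 / S) * emission[0]
for t in range(1, T):
    alpha = (A.T @ alpha) * emission[t]        # O(nnz(A)) instead of O(S^2) per step
print(alpha.sum())
```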

sigver_wiwd - Learned representation for Offline Handwritten Signature Verification

  •    Jupyter

This repository contains the code and instructions to use the trained CNN models described in [1] to extract features for Offline Handwritten Signatures. It also includes the models described in [2], which can generate a fixed-sized feature vector for signatures of different sizes. We tested the code on Ubuntu 16.04. The code can be used with or without GPUs; to use a GPU with Theano, follow the instructions in this link. Note that Theano takes time to compile the model, so it is much faster to instantiate the model once and run forward propagation for many images, instead of calling a script many times where each call instantiates the model and runs forward propagation for a single image.
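
The recommended pattern is to pay the compilation cost once and reuse the model for many images; a sketch with hypothetical stand-in functions (load_cnn_model and extract_features are placeholders, not necessarily the repository's API):

```python
# Instantiate (and let Theano compile) the model once, then run forward
# propagation for many images. The functions below are hypothetical stand-ins.
from pathlib import Path
import numpy as np

def load_cnn_model(weights_path):
    """Placeholder for the (slow) one-time model instantiation/compilation."""
    return {"weights": weights_path}

def extract_features(model, image):
    """Placeholder for one (fast) forward pass producing a feature vector."""
    return np.zeros(2048)

model = load_cnn_model("models/signet.pkl")      # slow step, done once

features = {}
for path in sorted(Path("signatures").glob("*.png")):
    image = np.zeros((150, 220))                 # stand-in for image loading/preprocessing
    features[path.name] = extract_features(model, image)   # fast per image
```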

cuNVSM - Neural Vector Space Models

  •    Cuda

⚠️ You need a CUDA-compatible GPU (compute capability 5.2+) to use this software. cuNVSM is a C++/CUDA implementation of state-of-the-art NVSM and LSE representation learning algorithms.

SERT - Semantic Entity Retrieval Toolkit

  •    Python

The Semantic Entity Retrieval Toolkit (SERT) is a collection of neural entity retrieval algorithms. SERT requires Python 3.5 and assorted modules. The trec_eval utility is required for evaluation and the end-to-end scripts. If you wish to train your models on GPGPUs, you will need a GPU compatible with Theano.


sub-character-cws - Sub-Character Representation Learning

  •    Python

Code and corpora for the paper "Dual Long Short-Term Memory Networks for Sub-Character Representation Learning" (accepted at ITNG 2018). We proposed to learn character- and sub-character-level representations jointly to capture deeper levels of semantic meaning. When applied to Chinese Word Segmentation as a case example, our solution achieved state-of-the-art results on both Simplified and Traditional Chinese, without extra Traditional-to-Simplified Chinese conversion.

ICE - ICE: Item Concept Embedding

  •    C++

The ICE toolkit is designed to embed the concepts of items into an embedding representation such that the resulting embeddings can be compared in terms of overall conceptual similarity regardless of item types (ICE: Item Concept Embedding via Textual Information, SIGIR 2017). For example, a song can be used to retrieve conceptually similar songs (homogeneous) as well as conceptually similar concepts (heterogeneous). Specifically, ICE incorporates items and their representative concepts (words extracted from the item's textual information) into a heterogeneous network and then learns the embeddings for both items and concepts in terms of the shared concept words. Since items are defined in terms of concepts, adding expanded concepts into the network allows the learned embeddings to be used to retrieve conceptually more diverse yet still relevant results.
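
A toy illustration of the shared-concept-word idea (not ICE's actual embedding algorithm, which learns embeddings from the heterogeneous item-concept network): items described by overlapping concept words can be ranked by conceptual similarity regardless of item type.

```python
# Score items by the overlap of their concept-word sets, so conceptually
# similar items can be retrieved regardless of type. This only illustrates the
# "shared concept words" idea, not ICE's embedding procedure.
item_concepts = {
    "song_a": {"love", "heartbreak", "ballad"},
    "song_b": {"love", "dance", "summer"},
    "poem_c": {"love", "heartbreak", "longing"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

query = "song_a"
ranked = sorted(
    (other for other in item_concepts if other != query),
    key=lambda other: jaccard(item_concepts[query], item_concepts[other]),
    reverse=True,
)
print(ranked)  # poem_c ranks above song_b: heterogeneous but conceptually closer
```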

proNet-core - A general-purpose network embedding framework: pair-wise representations optimization Network

  •    C++

In the near future, we will redesign the framework to provide solid APIs for fast development of different network embedding techniques. The included shell script helps obtain the representations of the Youtube links in the Youtube-links dataset.

program-induction - A library for program induction and learning representations.

  •    Rust

A library for program induction and learning representations. Implements Bayesian program learning and genetic programming. See the docs for more information.

Variational-Ladder-Autoencoder - Implementation of VLAE

  •    Python

This is the implementation of the Variational Ladder Autoencoder. Training this architecture with a standard VAE objective disentangles high- and low-level features without using any other prior information or inductive bias. This has been successful on MNIST, SVHN, and CelebA. LSUN is a little difficult for a VAE with pixel-wise reconstruction loss; however, with another recent work we can generate sharp results on LSUN as well. This architecture serves as the baseline architecture for that model.

robotics-rl-srl - S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics

  •    Python

This repository was made to evaluate State Representation Learning methods using Reinforcement Learning. It integrates various RL algorithms (PPO, A2C, ARS, ACKTR, DDPG, DQN, ACER, CMA-ES, SAC, TRPO) and different SRL methods (see the SRL repo), with automatic logging, plotting, and saving/loading of trained agents, in an efficient way (1 million steps in 1 hour with an 8-core CPU and 1 Titan X GPU). We also release customizable Gym environments for working with simulation (Kuka arm and mobile robot in PyBullet, running at 250 FPS on an 8-core machine) and real robots (Baxter robot, Robobo with ROS).

srl-zoo - State Representation Learning (SRL) zoo with PyTorch - Part of S-RL Toolbox

  •    Python

A collection of State Representation Learning (SRL) methods for Reinforcement Learning, written using PyTorch. Please read the documentation for more details; we provide Anaconda environment files and Docker images.

decagon - Graph convolutional neural network for multirelational link prediction

  •    Jupyter

This repository contains the code necessary to run the Decagon algorithm. Decagon is a method for learning node embeddings in multimodal graphs, and is especially useful for link prediction in highly multi-relational settings. See our paper for details on the algorithm. Decagon is used to address a burning question in pharmacology: predicting the safety of drug combinations.
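
A simplified sketch of the decoder side of multi-relational link prediction (Decagon's actual model uses a graph convolutional encoder and a richer tensor-factorization decoder; this only illustrates per-relation scoring of node-embedding pairs):

```python
# Per-relation bilinear scoring of node embeddings for multi-relational link
# prediction. Simplified illustration only, not Decagon's full model.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, num_relations, dim = 5, 2, 8

Z = rng.normal(size=(num_nodes, dim))       # node embeddings
D = rng.normal(size=(num_relations, dim))   # per-relation diagonal weights

def edge_probability(u, v, r):
    """Probability of an edge of type r between nodes u and v."""
    score = Z[u] @ np.diag(D[r]) @ Z[v]
    return 1.0 / (1.0 + np.exp(-score))     # sigmoid

print(edge_probability(0, 3, r=1))
```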

multi_object_datasets - Multi-object image datasets with ground-truth segmentation masks and generative factors

  •    Python

The datasets consist of multi-object scenes. Each image is accompanied by ground-truth segmentation masks for all objects in the scene. We also provide per-object generative factors (except in Objects Room) to facilitate representation learning. The generative factors include all necessary and sufficient features (size, color, position, etc.) to describe and render the objects present in a scene. Lastly, the segmentation_metrics module contains a TensorFlow implementation of the adjusted Rand index [3], which can be used to compare inferred object segmentations with ground-truth segmentation masks. All code has been tested to work with TensorFlow r1.14.
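
As an illustration of what the adjusted Rand index measures (using scikit-learn on a toy image rather than the repository's TensorFlow implementation):

```python
# The adjusted Rand index measures agreement between two segmentations viewed
# as per-pixel clusterings, corrected for chance. Toy example with scikit-learn;
# the repository ships its own TensorFlow implementation.
import numpy as np
from sklearn.metrics import adjusted_rand_score

ground_truth = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [2, 2, 2, 2]])
predicted    = np.array([[1, 1, 0, 0],
                         [1, 1, 0, 0],
                         [2, 2, 2, 0]])   # labels may be permuted; ARI handles that

print(adjusted_rand_score(ground_truth.ravel(), predicted.ravel()))
```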