tensorflow-triplet-loss - Implementation of triplet loss in TensorFlow

This repository contains a triplet loss implementation in TensorFlow with online triplet mining. Please check the blog post for a full description. The code structure is adapted from code I wrote for CS230 in this repository at tensorflow/vision. A set of tutorials for this code can be found here.

https://omoindrot.github.io/triplet-loss
https://github.com/omoindrot/tensorflow-triplet-loss
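
As a hedged illustration of what "online triplet mining" means here, the sketch below computes a batch-hard triplet loss in TensorFlow 1.x: for every anchor it mines the hardest positive and the hardest negative inside the current batch. Function and argument names are illustrative, not necessarily the repository's exact API; see the repository's triplet loss module for the real implementation.

```python
import tensorflow as tf

def batch_hard_triplet_loss(labels, embeddings, margin=1.0):
    """Batch-hard online triplet mining (illustrative sketch)."""
    # Pairwise squared Euclidean distances, shape (batch, batch).
    dot = tf.matmul(embeddings, tf.transpose(embeddings))
    sq_norms = tf.diag_part(dot)
    dists = tf.expand_dims(sq_norms, 1) - 2.0 * dot + tf.expand_dims(sq_norms, 0)
    dists = tf.maximum(dists, 0.0)  # guard against tiny negative values

    labels = tf.reshape(labels, [-1, 1])
    same = tf.cast(tf.equal(labels, tf.transpose(labels)), tf.float32)

    # Hardest positive: farthest embedding with the same label.
    hardest_pos = tf.reduce_max(dists * same, axis=1)

    # Hardest negative: closest embedding with a different label.
    # Same-label distances are pushed up so they are never selected.
    max_dist = tf.reduce_max(dists, axis=1, keepdims=True)
    hardest_neg = tf.reduce_min(dists + max_dist * same, axis=1)

    return tf.reduce_mean(tf.maximum(hardest_pos - hardest_neg + margin, 0.0))
```

Mining inside the batch ("online") avoids precomputing triplets offline and keeps most triplets informative as training progresses.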

Related Projects

Postgrey - Postfix Greylisting Policy-Daemon

  •    Perl

Postgrey is a Postfix policy server implementing greylisting, developed by David Schweikert. When Postfix receives a request for mail delivery via SMTP, the triplet CLIENT_IP / SENDER / RECIPIENT is built. If it is the first time this triplet is seen, or if the triplet was first seen less than 5 minutes ago, the mail is rejected with a temporary error. Hopefully, spammers and viruses will not try again later; legitimate mail servers will retry, as the RFC requires.
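
The triplet logic is simple enough to sketch; below is a toy, in-memory Python version of the decision (Postgrey itself is written in Perl, persists its triplet database to disk, and handles auto-whitelisting and more):

```python
import time

GREYLIST_DELAY = 300   # seconds; the 5-minute window described above
seen = {}              # triplet -> timestamp of first sighting (toy store)

def greylist_action(client_ip, sender, recipient):
    """Toy greylisting decision for one delivery attempt."""
    triplet = (client_ip, sender, recipient)
    now = time.time()
    first_seen = seen.setdefault(triplet, now)
    if now - first_seen < GREYLIST_DELAY:
        # Temporary error: a legitimate MTA will retry after a delay.
        return "DEFER_IF_PERMIT Greylisted, please try again later"
    return "DUNNO"  # defer to the remaining Postfix restrictions
```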

triplet_recommendations_keras - An example of doing MovieLens recommendations using triplet loss in Keras

  •    Jupyter

This example builds MovieLens recommendations with a triplet loss along the lines of BPR [1]. Note: a much richer set of neural network recommender models is available as Spotlight.

learning-to-learn - Learning to Learn in TensorFlow

  •    Python

New problems can be implemented very easily. You can see in train.py that the meta_minimize method of the MetaOptimizer class is given a function that returns the TensorFlow operation generating the loss we want to minimize (see problems.py for an example). It's important that all operations with Python side effects (e.g. queue creation) be done outside of the function passed to meta_minimize. The cifar10 function in problems.py is a good example of a loss function that uses TensorFlow queues.
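
A minimal problem in that spirit might look like the sketch below (a hypothetical example, not a copy of problems.py): it only builds graph operations and returns the loss, with no Python side effects inside.

```python
import tensorflow as tf

def quadratic_problem():
    """Hypothetical problem: returns the op computing the loss to minimize.

    Only graph construction happens here; anything with Python side
    effects (e.g. queue creation) must live outside this function.
    """
    x = tf.get_variable(
        "x", shape=[10], initializer=tf.random_normal_initializer())
    return tf.reduce_sum(tf.square(x))
```

Such a function would then be handed to MetaOptimizer.meta_minimize as described above.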

tensorflow_cookbook - Code for Tensorflow Machine Learning Cookbook

  •    Jupyter

This chapter introduces the main objects and concepts in TensorFlow. It also shows how to access the data used in the rest of the book and provides additional resources for learning about TensorFlow. Once the basic objects and methods are established, the book moves on to the components that make up TensorFlow algorithms: it starts with computational graphs, then covers loss functions and backpropagation, and ends by building a simple classifier and showing how to evaluate regression and classification algorithms.
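
Those ingredients fit in a few lines of TensorFlow 1.x; here is a hedged toy example (not taken from the book's notebooks) of a computational graph with a loss function trained by backpropagation:

```python
import tensorflow as tf

# Build a tiny computational graph: linear model + L2 loss.
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))  # L2 loss

# Backpropagation: gradient descent on the loss.
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, l = sess.run([train_op, loss], feed_dict={x: [[1.0]], y: [[2.0]]})
    print(l)
```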

certigrad - Bug-free machine learning on stochastic computation graphs

  •    Lean

Specifically, Certigrad is a system for optimizing over stochastic computation graphs that we debugged systematically in the Lean Theorem Prover, and ultimately proved correct in terms of the underlying mathematics. Stochastic computation graphs extend the computation graphs underlying systems like TensorFlow and Theano by allowing nodes to represent random variables and by defining the loss function to be the expected value of the sum of the leaf nodes over all the random choices in the graph. Certigrad allows users to construct arbitrary stochastic computation graphs out of the primitives that we provide. The main purpose of the system is to take a program describing a stochastic computation graph and to run a randomized algorithm (stochastic backpropagation) that, in expectation, samples the gradients of the loss function with respect to the parameters.
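
Certigrad itself is written and verified in Lean, but the idea is easy to illustrate with a toy TensorFlow sketch: under the reparameterization trick, each backward pass through a sampled graph yields an unbiased sample of the gradient of the expected loss.

```python
import tensorflow as tf

# Toy stochastic computation graph: loss = E_{x ~ N(mu, sigma^2)}[x^2].
# With the reparameterization x = mu + sigma * eps, eps ~ N(0, 1),
# backpropagation through the samples estimates d(loss)/d(mu, sigma).
mu = tf.Variable(1.0)
sigma = tf.Variable(0.5)
eps = tf.random_normal([1000])           # the random choices in the graph
x = mu + sigma * eps                     # reparameterized random node
loss = tf.reduce_mean(tf.square(x))      # Monte Carlo estimate of E[x^2]
grads = tf.gradients(loss, [mu, sigma])  # unbiased gradient samples

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))  # approx [2*mu, 2*sigma] = [2.0, 1.0]
```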


gradient-checkpointing - Make huge neural nets fit in memory

  •    Python

Training very deep neural networks requires a lot of memory. Using the tools in this package, developed jointly by Tim Salimans and Yaroslav Bulatov, you can trade off some of this memory usage with computation to make your model fit into memory more easily. For feed-forward models we were able to fit more than 10x larger models onto our GPU, at only a 20% increase in computation time. The memory intensive part of training deep neural networks is computing the gradient of the loss by backpropagation. By checkpointing nodes in the computation graph defined by your model, and recomputing the parts of the graph in between those nodes during backpropagation, it is possible to calculate this gradient at reduced memory cost. When training deep feed-forward neural networks consisting of n layers, we can reduce the memory consumption to O(sqrt(n)) in this way, at the cost of performing one additional forward pass (see e.g. Training Deep Nets with Sublinear Memory Cost, by Chen et al. (2016)). This repository provides an implementation of this functionality in Tensorflow, using the Tensorflow graph editor to automatically rewrite the computation graph of the backward pass.
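
Usage is meant to be a near drop-in replacement for tf.gradients; the sketch below follows the repository's README, but treat the module and function names as assumptions to verify against the repo.

```python
import tensorflow as tf
# memory_saving_gradients is the module shipped in this repository;
# the name is taken on trust from its README -- verify before use.
import memory_saving_gradients

# Monkey-patch tf.gradients so existing training code recomputes
# activations between automatically chosen checkpoint nodes instead
# of storing every activation for the backward pass.
tf.__dict__["gradients"] = memory_saving_gradients.gradients_memory
```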

neural-image-assessment - Implementation of NIMA: Neural Image Assessment in Keras

  •    Python

Implementation of NIMA: Neural Image Assessment in Keras + TensorFlow, with weights for a MobileNet model trained on the AVA dataset. NIMA assigns a mean and standard deviation score to images, and can be used as a tool to automatically inspect the quality of images or as a loss function to further improve the quality of generated images.
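
Concretely, the network predicts a distribution over the scores 1-10, and the reported numbers are that distribution's mean and standard deviation; a small NumPy illustration (the probabilities below are made up):

```python
import numpy as np

scores = np.arange(1, 11)  # the 10 rating buckets
p = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.25, 0.20, 0.10, 0.05, 0.02])

mean = np.sum(scores * p)                          # expected score
std = np.sqrt(np.sum(p * (scores - mean) ** 2))    # spread of the ratings
print("%.2f +/- %.2f" % (mean, std))
```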

hub - A library for transfer learning by reusing parts of TensorFlow models.

  •    Python

TensorFlow Hub is a library to foster the publication, discovery, and consumption of reusable parts of machine learning models. In particular, it provides modules, which are pre-trained pieces of TensorFlow models that can be reused on new tasks. If you'd like to contribute to TensorFlow Hub, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
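
A typical TF1-era usage pattern, with an example module handle (any published text-embedding module from tfhub.dev would do):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Reuse a pre-trained text-embedding module on a new task.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["A long sentence.", "single-word"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (2, 128)
```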

crfasrnn_keras - CRF-RNN Keras/Tensorflow version

  •    Python

This repository contains Keras/TensorFlow code for the "CRF-RNN" semantic image segmentation method, published in the ICCV 2015 paper Conditional Random Fields as Recurrent Neural Networks. This paper was initially described in an arXiv tech report. The online demo of this project won the Best Demo Prize at ICCV 2015. Original Caffe-based code of this project can be found here. Results produced with this Keras/TensorFlow code are almost identical to those of the Caffe-based version. The root directory of the clone will be referred to as crfasrnn_keras hereafter.

Knet.jl - Koç University deep learning framework.

  •    Julia

Knet uses dynamic computational graphs generated at runtime for automatic differentiation of (almost) any Julia code. This allows machine learning models to be implemented by defining just the forward calculation (i.e. the computation from parameters and data to loss) using the full power and expressivity of Julia. The implementation can use helper functions, loops, conditionals, recursion, closures, tuples and dictionaries, array indexing, concatenation and other high level language features, some of which are often missing in the restricted modeling languages of static computational graph systems like Theano, Torch, Caffe and Tensorflow. GPU operation is supported by simply using the KnetArray type instead of regular Array for parameters and data. Knet builds a dynamic computational graph by recording primitive operations during forward calculation. Only pointers to inputs and outputs are recorded for efficiency. Therefore array overwriting is not supported during forward and backward passes. This encourages a clean functional programming style. High performance is achieved using custom memory management and efficient GPU kernels. See Under the hood for more details.

keras-contrib - Keras community contributions

  •    Python

This library is the official extension repository for the Python deep learning library Keras. It contains additional layers, activations, loss functions, optimizers, etc. which are not yet available within Keras itself. All of these additional modules can be used in conjunction with core Keras models and modules. As the community contributions in Keras-Contrib are tested, used, validated, and their utility proven, they may be integrated into the Keras core repository. In the interest of keeping Keras succinct, clean, and powerfully simple, only the most useful contributions make it into Keras. This contribution repository is both the proving ground for new functionality, and the archive for functionality that (while useful) may not fit well into the Keras paradigm.

keras-quora-question-pairs - A Keras model that addresses the Quora Question Pairs dyadic prediction task

  •    Jupyter

A Keras model that addresses the Quora Question Pairs [1] dyadic prediction task. The model architecture is based on the Stanford Natural Language Inference [2] benchmark model developed by Stephen Merity [3], specifically the version using a simple summation of GloVe word embeddings [4] to represent each question in the pair. A difference between this and the Merity SNLI benchmark is that our final layer is Dense with sigmoid activation, as opposed to softmax. Another key difference is that we are using the max operator as opposed to sum to combine word embeddings into a question representation. We use binary cross-entropy as a loss function and Adam for optimization.
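
A skeleton of that architecture in Keras, with the hidden layers omitted and sizes made up for illustration:

```python
from keras.layers import Input, Embedding, GlobalMaxPooling1D, concatenate, Dense
from keras.models import Model

MAX_LEN, VOCAB, DIM = 25, 100000, 300  # illustrative sizes

q1 = Input(shape=(MAX_LEN,))
q2 = Input(shape=(MAX_LEN,))

# Shared embedding layer (GloVe-initialized in the real model).
embed = Embedding(VOCAB, DIM)

# Max over word embeddings -> fixed-size question representation.
r1 = GlobalMaxPooling1D()(embed(q1))
r2 = GlobalMaxPooling1D()(embed(q2))
merged = concatenate([r1, r2])

# Final Dense layer with sigmoid activation, as opposed to softmax.
out = Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[q1, q2], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```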

LargeMargin_Softmax_Loss - Implementation for <Large-Margin Softmax Loss for Convolutional Neural Networks> in ICML'16

  •    C++

We introduce a large-margin softmax (L-Softmax) loss for convolutional neural networks. L-Softmax loss can greatly improve the generalization ability of CNNs, so it is very suitable for general classification, feature embedding and biometrics (e.g. face) verification. We give the 2D feature visualization on MNIST to illustrate our L-Softmax loss. The paper is published in ICML 2016 and also available at arXiv.
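
For reference, the paper's loss replaces the target-class logit with a margin-sharpened angular term (sketched here from the paper; m is the integer margin and theta_j the angle between x_i and W_j):

```latex
L_i = -\log
  \frac{e^{\|W_{y_i}\|\,\|x_i\|\,\psi(\theta_{y_i})}}
       {e^{\|W_{y_i}\|\,\|x_i\|\,\psi(\theta_{y_i})}
        + \sum_{j \neq y_i} e^{\|W_j\|\,\|x_i\|\cos(\theta_j)}},
\qquad
\psi(\theta) = (-1)^k \cos(m\theta) - 2k,
\quad \theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right],
\; k \in \{0, \dots, m-1\}
```

Setting m = 1 recovers the standard softmax loss; larger m forces a larger angular margin between classes.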

Tensorflow-Project-Template - A best practice for tensorflow project template architecture.

  •    Python

A simple and well-designed structure is essential for any deep learning project, so after a lot of practice and contributing to TensorFlow projects, here is a TensorFlow project template that combines simplicity, a best-practice folder structure, and good OOP design. The main idea is that you write much of the same boilerplate every time you start a TensorFlow project, so wrapping up all this shared code lets you change just the core idea each time you start a new project. You will find a template file and a simple example in the model and trainer folders that show how to put together your first model.
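
The separation of concerns is roughly this (an illustrative skeleton; the template's actual class and file names may differ):

```python
class BaseModel:
    """Owns the graph: subclasses define the architecture and loss."""
    def __init__(self, config):
        self.config = config
        self.build_model()

    def build_model(self):
        raise NotImplementedError

class BaseTrainer:
    """Owns the training loop: subclasses define one epoch of training."""
    def __init__(self, sess, model, data, config):
        self.sess, self.model, self.data, self.config = sess, model, data, config

    def train(self):
        for epoch in range(self.config["num_epochs"]):
            self.train_epoch(epoch)

    def train_epoch(self, epoch):
        raise NotImplementedError
```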

EffectiveTensorflow - TensorFlow tutorials and best practices.

  •    

We aim to gradually expand this series by adding new articles and keep the content up to date with the latest releases of the TensorFlow API. If you have suggestions on how to improve this series or find the explanations ambiguous, feel free to create an issue, send patches, or reach out by email. The most striking difference between TensorFlow and other numerical computation libraries such as NumPy is that operations in TensorFlow are symbolic. This is a powerful concept that allows TensorFlow to do all sorts of things (e.g. automatic differentiation) that are not possible with imperative libraries such as NumPy. But it also comes at the cost of making it harder to grasp. Our attempt here is to demystify TensorFlow and provide some guidelines and best practices for more effective use of TensorFlow.
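
The contrast is easy to see side by side; in NumPy the result is computed immediately, while the TF1-style graph is symbolic and only evaluated inside a session:

```python
import numpy as np
import tensorflow as tf

x = np.random.normal(size=[10, 10])
y = np.random.normal(size=[10, 10])
z = np.dot(x, y)        # imperative: z holds numbers right away

a = tf.random_normal([10, 10])
b = tf.random_normal([10, 10])
c = tf.matmul(a, b)     # symbolic: c is a graph node, no numbers yet

with tf.Session() as sess:
    print(sess.run(c))  # evaluation happens here
```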

tensorflow-exercises - TensorFlow Exercises - focusing on the comparison with NumPy.

  •    Python

TensorFlow is arguably the most popular deep learning library as of 2017. This is designed to help those who want to familiarize themselves with TensorFlow functions. Particularly, I focus on comparing TensorFlow functions with the equivalent functions in NumPy, the de facto standard numerical computation library. I hope this will help you get comfortable with TensorFlow quickly.
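
In that spirit, a few common correspondences (a small sample for flavor, not taken from the exercises themselves):

```python
import numpy as np
import tensorflow as tf

a = np.array([[1., 2.], [3., 4.]])
t = tf.constant(a)

# np.sum        <->  tf.reduce_sum
# np.dot        <->  tf.matmul
# np.reshape    <->  tf.reshape
s_np = np.sum(a, axis=1)
s_tf = tf.reduce_sum(t, axis=1)

with tf.Session() as sess:
    print(s_np)            # [3. 7.], computed eagerly
    print(sess.run(s_tf))  # [3. 7.], computed when run
```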

AmpliGraph - Python library for Representation Learning on Knowledge Graphs https://docs

  •    Python

Open source library based on TensorFlow that predicts links between concepts in a knowledge graph. AmpliGraph is a suite of neural machine learning models for relational learning, a branch of machine learning that deals with supervised learning on knowledge graphs.
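
A hedged usage sketch following AmpliGraph's documented workflow (model and parameter names taken on trust from its docs; check the project documentation before use):

```python
import numpy as np
# Model and parameter names below are assumptions from AmpliGraph's docs.
from ampligraph.latent_features import TransE

# A knowledge graph as (subject, predicate, object) triples.
X = np.array([["a", "likes", "b"],
              ["b", "likes", "c"],
              ["a", "knows", "c"]])

model = TransE(batches_count=1, epochs=20, k=10)  # k = embedding size
model.fit(X)

# Score the plausibility of an unseen link.
scores = model.predict(np.array([["a", "likes", "c"]]))
print(scores)
```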

tensorflow - TensorFlow for R

  •    R

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. The TensorFlow API is composed of a set of Python modules that enable constructing and executing TensorFlow graphs. The tensorflow package provides access to the complete TensorFlow API from within R.

tensorflow_cc - Build and install TensorFlow C++ API library.

  •    CMake

This repository makes it possible to use the TensorFlow C++ API from outside the TensorFlow source tree, without the Bazel build system. It contains two CMake projects: the tensorflow_cc project downloads, builds, and installs the TensorFlow C++ API into the operating system, and the example project demonstrates its simple usage.