
The library's full documentation can be found here. Be sure to lint & pass the unit tests before submitting your pull request.

natural-language-processing machine-learning fuzzy-matching clustering record-linkage bayes bloom-filter canberra caverphone chebyshev cologne cosine classifier daitch-mokotoff dice fingerprint fuzzy hamming k-means jaccard jaro lancaster levenshtein lig metaphone mra ngrams nlp nysiis perceptron phonetic porter punkt schinke sorensen soundex stats tfidf tokenizer tversky vectorizer winkler

A collection of low-level machine learning algorithms for Node.js. This project is quite new and documentation will be on the way shortly. In the meantime you can check out the spec folder for examples of how to use the algorithms.

machine learning ml classifier clustering bayes k-means logistic regression

Node.js asynchronous implementation of the clustering algorithm k-means.

k-means clustering

This is a simple tool which uses the K-means++ algorithm to pick suitable terminal colors from a given image. The algorithm is an approximation for solving clustering/partitioning problems, which in this particular case means finding N = 8 dominant colors in the image. Naturally, it does not work well on pictures with a narrow spectrum. Currently, the code only works in the Pantheon terminal, but the colors can be extracted and manually inserted into your favorite terminal's settings. Running the script on the image below...
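The seeding step this tool relies on can be sketched in a few lines. Below is a generic k-means++ initialization in plain Python, not the tool's own code; the function name and the use of plain tuples for data points are illustrative assumptions.

```python
import random

def kmeanspp_init(points, k, seed=0):
    """k-means++ seeding: the first center is uniform random; each further
    center is drawn with probability proportional to its squared distance
    from the nearest center already chosen."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest existing center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        # Weighted draw: far-away points are more likely to become centers.
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers
```

On an image, `points` would be the (r, g, b) triples of the pixels and `k = 8`, matching the N = 8 dominant colors mentioned above.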

k-means colorscheme color-picker color-palette colors

When dealing with lots of data points, clustering algorithms may be needed in order to group them. The k-means algorithm partitions n data points into k clusters and finds the centroids of these clusters incrementally. The basic k-means algorithm is initialized with k centroids at random positions.
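The loop just described (random initialization, then alternating assignment and update steps) can be sketched as follows. This is a generic illustration of Lloyd's algorithm in plain Python, not any particular library's implementation.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Basic k-means: start from k random centroids, then alternate
    assignment and centroid-update steps until convergence."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # no centroid moved: converged
            break
        centroids = new
    return centroids, clusters
```

For example, `kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], 2)` converges to centroids (0, 0.5) and (10, 10.5).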

kmeans-algorithm clustering-algorithm kmeansplusplus k-means k-means++ clustering data partition algorithm kmeans browser

(Development is currently suspended.) Models are acted upon by the perceive or predict functions. These functions currently do the same thing; the wording is indicative of the nature of the result, and the action the model has taken on the data.

machine learning ml classifier clustering bayes k-means logistic regression perceptron neural net

WIP machine learning library, written in J. Various algorithm implementations, including MLPClassifiers, MLPRegressors, Mixture Models, K-Means, KNN, RBF-Network, Self-organizing Maps. Models can be serialized to text files, with a mixture of text and binary packing. The size of the serialized file depends on the size of the model, but will probably range from 10 MB upwards for NN models (including convnets and rec-nets).

machine-learning convolutional-neural-networks j deep-learning gaussian-mixture-models gaussian-processes self-organizing-map principal-component-analysis k-means hierarchical-clustering lstm ensemble-learning learning rbm restricted-boltzmann-machines multilayer-perceptron-network knn-classifier clustering

k-means kmeans machine-learning ml clustering

This repository contains a few brief experiments with Stanford NLP's GloVe, an unsupervised learning algorithm for obtaining vector representations of words. Similar to Word2Vec, GloVe creates a continuous N-dimensional representation of a word that is learned from its surrounding context words in a training corpus. Trained on a large corpus of text, these co-occurrence statistics (an N-dimensional vector embedding) cause semantically similar words to appear near each other in the resulting N-dimensional embedding space (e.g. "dog" and "cat" may appear near a region of other pet-related words in the embedding space, because the context words that surround both "dog" and "cat" in the training corpus are similar). All three scripts use the GloVe.6B pre-trained word embeddings created from the combined Wikipedia 2014 and Gigaword 5 datasets. They were trained on 6 billion tokens and contain 400,000 unique lowercase words. Trained embeddings are provided in 50, 100, 200, and 300 dimensions (822 MB download).
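A minimal sketch of how such embeddings are typically used: load the plain-text vectors, then compare words by cosine similarity. The parsing follows GloVe's published one-word-per-line format, but the helper names here are illustrative assumptions, not the repository's actual API.

```python
import math

def load_glove(lines):
    """Parse GloVe's plain-text format: each line is a word followed by
    its vector components, space-separated (as in glove.6B.50d.txt)."""
    vecs = {}
    for line in lines:
        word, *nums = line.split()
        vecs[word] = [float(x) for x in nums]
    return vecs

def cosine(u, v):
    """Cosine similarity: semantically close words score near 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm
```

With the real `glove.6B.50d.txt` file opened and passed to `load_glove`, `cosine(vecs["dog"], vecs["cat"])` would come out high, reflecting the pet-word neighborhood described above.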

glove-embeddings glove-vectors word2vec k-means glove nlp machine-learning embeddings word-game k-nearest-neighbors

This is the R version of the assignments from the online machine learning course (MOOC) on Coursera by Prof. Andrew Ng. This repository provides the starter code to solve the assignments in R; the completed assignments are also available beside each exercise file.

machine-learning learning-curve pca linear-regression gradient-descent svm principal-component-analysis clustering neural-network k-means recommender-system classification regularization anomalydetection gh

In this repository, source code is shared for the "TensorFlow 101: Introduction to Deep Learning" online course published on Udemy. The course consists of 18 lectures and includes 3 hours of material.

tensorflow tensorboard dnn neural-networks deep-learning deep-neural-networks classification regression clustering k-means kmeans supervised-learning unsupervised-learning machine-learning python-3

This is a (nearly absolutely) balanced kd-tree for fast kNN search, with poor performance for dynamic addition and removal. In fact, we adopt quicksort to rebuild the whole tree after changes to the nodes. Added or deleted nodes are cached and are not actually mapped into the tree until the rebuild method is invoked. The good thing is that we can always keep the tree balanced; the bad thing is that we have to wait some time for the tree rebuild to finish. Moreover, duplicated samples may be added with the tree still kept balanced. The thinking behind the implementation is posted here.
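The rebuild strategy described above (sort on the splitting axis, recurse on the median) is what keeps the tree balanced. Below is a generic Python sketch of such a median-split build, for illustration only; it is not the repository's actual code, and it omits the add/delete caching.

```python
def build_kdtree(points, depth=0):
    """Build a balanced kd-tree by sorting the points on the current
    splitting axis and recursing on the median element."""
    if not points:
        return None
    axis = depth % len(points[0])       # cycle through the dimensions
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                 # median split keeps the tree balanced
    return {
        "point": pts[mid],
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid + 1:], depth + 1),
    }

def height(node):
    """Height of the tree, counted in nodes."""
    if node is None:
        return 0
    return 1 + max(height(node["left"]), height(node["right"]))
```

For example, rebuilding from 15 points always yields a tree of height 4, the minimum possible, which is the balance guarantee the description refers to.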

kd-tree kdtrees algorithm tree-structure knn-search knn k-nearest-neighbours kmeans k-means kd-trees

This is the Xcode Playground to accompany the book Classic Computer Science Problems in Swift by David Kopec. The book is available for purchase directly from the publisher, Manning, and from other book vendors. The Playground is compatible with Swift 4 (Xcode 9).

book manning computer-science neural-network graph-algorithms search-algorithms genetic-algorithms k-means constraint-satisfaction-problem

This repository contains code referenced in my blog post Exploring k-means in Python, C++ and CUDA, where I implement k-means on a variety of platforms. In that post I show how CUDA implementations of k-means can outperform scikit-learn and scipy by factors of 72 and 90, respectively. The code is not particularly tidy, but gives an idea of how to implement k-means efficiently on a GPU.

k-means cpp cuda parallel machine-learning

shaman supports both simple linear regression and multiple linear regression. By default, shaman uses the Normal Equation for linear regression.
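For the one-feature case, the Normal Equation theta = (X^T X)^(-1) X^T y reduces to the familiar closed-form slope and intercept. The sketch below illustrates that math in plain Python; it is not shaman's actual API.

```python
def normal_equation(xs, ys):
    """Closed-form simple linear regression: with a bias column in X,
    the Normal Equation collapses to these slope/intercept formulas."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope
```

Fitting the exact line y = 2x + 1, e.g. `normal_equation([0, 1, 2, 3], [1, 3, 5, 7])`, recovers intercept 1.0 and slope 2.0.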

machine-learning linear-regression statistics gradient-descent-algorithm clustering k-means kmeans

A Python script to organize your images by similarity. It uses a k-means algorithm to separate them into clusters.

script k-means image-classification image-comparison machine-learning-algorithms