Displaying 1 to 20 of 23 results

This code provides a hyper-parameter optimization implementation for machine learning algorithms, as described in the paper: L. Yang and A. Shami, “On hyperparameter optimization of machine learning algorithms: Theory and practice,” Neurocomputing, vol. 415, pp. 295–316, 2020, doi: https://doi.org/10.1016/j.neucom.2020.07.061. To fit a machine learning model to different problems, its hyper-parameters must be tuned, and the chosen hyper-parameter configuration has a direct impact on the model's performance. The paper studies the optimization of hyper-parameters for common machine learning models, introduces several state-of-the-art optimization techniques, and discusses how to apply them to machine learning algorithms. It also surveys the many available libraries and frameworks developed for hyper-parameter optimization and discusses some open challenges in hyper-parameter optimization research. Moreover, experiments are conducted on benchmark datasets to compare the performance of different optimization methods and provide practical examples of hyper-parameter optimization.
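Grid search, one of the basic optimization methods the paper surveys, can be sketched in a few lines of plain Python. This is a generic illustration rather than code from this repository, and the toy objective merely stands in for a model's cross-validation score:

```python
import itertools

def grid_search(evaluate, param_grid):
    # Exhaustively evaluate every hyper-parameter combination and keep the best.
    # `evaluate` maps a configuration dict to a validation score (higher is better).
    best_score, best_params = float("-inf"), None
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective standing in for a cross-validation score; peaks at C=1.0, gamma=0.1.
objective = lambda p: -(p["C"] - 1.0) ** 2 - (p["gamma"] - 0.1) ** 2
params, score = grid_search(objective, {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]})
```

Random search and Bayesian optimization, also covered in the paper, replace the exhaustive product with sampling or a surrogate model over the same search space.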

machine-learning deep-learning random-forest optimization svm genetic-algorithm machine-learning-algorithms hyperparameter-optimization artificial-neural-networks grid-search tuning-parameters knn bayesian-optimization hyperparameter-tuning random-search particle-swarm-optimization hpo python-examples python-samples hyperband

I just built out v2 of this project, which now gives you analytics info from your models and is production-ready. machineJS is an amazing research project that clearly proved there's a hunger for automated machine learning. auto_ml tackles this exact same goal, but with more features, cleaner code, and the ability to be copy/pasted into production.

machine-learning data-science machine-learning-library machine-learning-algorithms ml data-scientists javascript-library scikit-learn kaggle numerai automated-machine-learning automl auto-ml neuralnet neural-network algorithms random-forest svm naive-bayes bagging optimization brainjs date-night sklearn ensemble data-formatting js xgboost scikit-neuralnetwork knn k-nearest-neighbors gridsearch gridsearchcv grid-search randomizedsearchcv preprocessing data-formatter kaggle-competition

More examples can be found in the example folder. All models are tested by 5-fold cross-validation on a PC with an Intel(R) Core(TM) i5-4590 CPU (3.30GHz) and 16.0GB RAM. All scores are the best scores achieved by gorse so far.

recommender-system svd svdplusplus knn slope-one co-clustering nmf machine-learning recommender bpr collaborative-filtering data-mining machinelearning avx2

N2 is an approximate nearest neighbor algorithm library written in C++ (with Python/Go bindings). It provides much faster search speed than other implementations when indexing large datasets, and it supports multi-core CPUs for index building. For more details on how to build N2 from source, see the installation instructions.

ml knn machine-learning approximate k-nearest-neighbors nearest-neighbor-search approximate-nearest-neighbor-search

Machine Learning for Real Estate

pandas scikit-learn knn plotly jupyter-notebook machine-learning real-estate unsupervised-learning

Machine Learning for Spot Prices

aws spot machine-learning knn scikit-learn pandas click

k-nearest neighbors search for RBush. Implements a simple depth-first kNN search algorithm using a priority queue.
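The priority-queue idea can be shown in miniature with Python's heapq. This is a generic sketch over raw points, not this library's JavaScript implementation, which traverses R-tree nodes best-first instead of scanning every point:

```python
import heapq
import math

def knn(points, query, k):
    # heapq.nsmallest keeps a bounded heap of the k best candidates seen so far,
    # the same priority-queue bookkeeping a tree-based kNN search performs per node.
    return heapq.nsmallest(k, points, key=lambda p: math.dist(p, query))

pts = [(0, 0), (3, 4), (1, 1), (6, 8), (2, 2)]
nearest = knn(pts, (1, 2), 3)
```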

rbush knn k-neareset-neighbors data-structure query

This is a learning-to-rank pipeline, part of a project in which we study the applicability of k-nearest neighbor search methods in IR and QA applications. This project is supported primarily by NSF grant #1618159: "Matching and Ranking via Proximity Graphs: Applications to Question Answering and Beyond". For more details, please check the Wiki page.

knn knn-search ibm-model1 embeddings proximity-graphs knn-graphs question-answering

python-timbl is a Python extension module wrapping the full TiMBL C++ programming interface. With this module, all functionality exposed through the C++ interface is also available to Python scripts. Being able to access the API from Python greatly facilitates prototyping TiMBL-based applications. This is the 2013 release by Maarten van Gompel, building on the 2006 release by Sander Canisius. For those used to the old library, there is one backwards-incompatible change: adapt your scripts to use import timblapi instead of import timbl, as the latter is now a higher-level interface.

timbl machine-learning knn k-nearest-neighbours

kd-trees are a compact data structure for answering orthogonal range and nearest neighbor queries on higher-dimensional point data. While they are not as efficient at answering orthogonal range queries as range trees - especially in low dimensions - kd-trees consume exponentially less space, support k-nearest neighbor queries, and are relatively cheap to construct. This makes them useful in small to medium dimensions for achieving a modest speed-up over a linear scan. It is also worth mentioning that for approximate nearest neighbor queries, or queries with a fixed-size radius, grids and locality-sensitive hashing are strictly better options. In these charts the transition between "Medium" and "Big" depends on how many points there are in the data structure: as the number of points grows larger, the dimension at which kd-trees become practical goes up.
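To make the idea concrete, here is a minimal static kd-tree with the pruning rule that keeps k-nearest-neighbor queries cheap. It is an illustrative Python toy using the usual median-split scheme, not this library's implementation:

```python
import heapq
import math

def build_kdtree(points, depth=0):
    # Split on alternating axes at the median, yielding a balanced tree.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def knn(node, target, k, depth=0, heap=None):
    # Depth-first search keeping the k best candidates in a max-heap
    # (negated distances); subtrees that cannot beat the current k-th
    # best distance are pruned.
    if heap is None:
        heap = []
    if node is None:
        return heap
    axis = depth % len(target)
    point = node["point"]
    dist = math.dist(point, target)
    if len(heap) < k:
        heapq.heappush(heap, (-dist, point))
    elif dist < -heap[0][0]:
        heapq.heapreplace(heap, (-dist, point))
    diff = target[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff <= 0 else (node["right"], node["left"])
    knn(near, target, k, depth + 1, heap)
    # Visit the far side only if the splitting plane is within the current radius.
    if len(heap) < k or abs(diff) < -heap[0][0]:
        knn(far, target, k, depth + 1, heap)
    return heap

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
nearest = [p for _, p in sorted(knn(tree, (9, 2), 2), reverse=True)]
```

A linear scan costs O(n) per query; the pruning step is what buys the modest speed-up mentioned above in small to medium dimensions.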

kdtree static pure range orthogonal bounding box point sphere query nearest neighbor knn nn rnn searching closest

Visually interact with dots in your browser, classifying unknown dots using the K-Nearest Neighbors algorithm. Playable at Lettier.com.

interactive-knearest-neighbors knn machine-learning data-science visualization html5 nearest-neighbor-search data-analysis scikit-learn machine-learning-algorithms ai classification statistics gui k-nearest-neighbors k-nearest-neighbor k-nearest-neighbours k-nn nearest neighbors

This is the repository for D-Lab’s Introduction to Machine Learning in R workshop.

machine-learning dlab-berkeley tutorial knn random-forest gradient-boosting-machine superlearner decision-trees

TiMBL is an open source software package implementing several memory-based learning algorithms, among which are IB1-IG, an implementation of k-nearest neighbor classification with feature weighting suitable for symbolic feature spaces, and IGTree, a decision-tree approximation of IB1-IG. All implemented algorithms have in common that they store some representation of the training set explicitly in memory. During testing, new cases are classified by extrapolation from the most similar stored cases. For over fifteen years TiMBL has mostly been used in natural language processing as a machine learning classifier component, but its use extends to virtually any supervised machine learning domain. Due to its particular decision-tree-based implementation, TiMBL is in many cases far more efficient at classification than a standard k-nearest neighbor algorithm would be.

machine-learning classification learning-algorithm timbl ib1 igtree decision-tree k-nearest-neighbours nearest-neighbours knn c-plus-plus

Implementations of RSA, Base64, Aho-Corasick (AC), KMP, SVM, kNN, hash tables, and more.

rsa base64 dispatch sandbox kmp ac svm knn interpreter

Go-mining is a small data-mining library written in Go.

mining smote data-mining random-forest data-mining-algorithms ln-smote knn cart cascaded-random-forest

WIP... the k-Nearest Neighbors algorithm (k-NN) implemented on Apache Spark. This uses a hybrid spill-tree approach to achieve high accuracy and search efficiency. The simplicity of k-NN and its lack of tuning parameters make it a useful baseline model for many machine learning problems.

spark knnEsse repositório foi criado com a intenção de difundir o ensino de Machine Learning em português. Os algoritmos aqui implementados não são otimizados e foram implementados visando o fácil entendimento. Portanto, não devem ser utilizados para fins de pesquisa ou outros fins além dos especificados.

machine-learning machine-learning-algorithms adaboost decision-trees kmeans knn linear-discriminant-analysis principal-component-analysis naive-bayes regression linear-regression neural-network redes-neurais-artificiais multilinear-regression polynomial-regression feature-selection

Run "bash install.sh" to download all the required libraries and data. This may take several minutes to tens of minutes, depending on the network connection. We have been running our code since Matlab R2011b; the latest version of the code is tested on Matlab R2015a. Please let us know if you run into problems.

knn nearest-neighbor-search matting vision segmentation foreground knn-matting vlfeat

This is a (nearly absolutely) balanced kd-tree for fast kNN search, with poor performance for dynamic addition and removal. In fact, we adopt quicksort to rebuild the whole tree after the nodes change. Added or deleted nodes are cached and are not actually mapped into the tree until the rebuild method is invoked. The good thing is that we can always keep the tree balanced; the bad thing is that we have to wait for the tree rebuild to finish. Moreover, duplicated samples may be added while the tree is still kept balanced. The idea behind the implementation is posted here.
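The cache-then-rebuild strategy described above can be outlined as follows. This is a hypothetical Python sketch: the class and method names are illustrative, and a sorted list stands in for the actual balanced kd-tree:

```python
class LazyBalancedTree:
    # Buffers additions and removals, folding them into the balanced
    # structure only when rebuild() is invoked, so queries between
    # rebuilds always run against a fully balanced tree.

    def __init__(self, points):
        self._points = sorted(points)  # stand-in for the balanced kd-tree
        self._added = []               # cached insertions, not yet in the tree
        self._removed = []             # cached deletions, still in the tree

    def add(self, point):
        self._added.append(point)      # duplicates allowed, as in the library

    def remove(self, point):
        self._removed.append(point)

    def rebuild(self):
        # Apply the cached changes, then rebuild from scratch (the library
        # reportedly uses quicksort-style median splitting at this point).
        for p in self._removed:
            self._points.remove(p)
        self._points.extend(self._added)
        self._added.clear()
        self._removed.clear()
        self._points.sort()

tree = LazyBalancedTree([(1,), (3,)])
tree.add((2,))
tree.remove((3,))
before = list(tree._points)  # pending changes are not visible yet
tree.rebuild()
after = list(tree._points)
```

The trade-off matches the description: queries never see an unbalanced tree, but changes only take effect after an explicit (and potentially slow) rebuild.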

kd-tree kdtrees algorithm tree-structure knn-search knn k-nearest-neighbours kmeans k-means kd-trees

Feel free to fork my code and add more segmentation, feature-extraction, or classification methods to this project.

knn decaptcha