Displaying 1 to 14 of 14 results

Course materials for General Assembly's Data Science course in Washington, DC (8/18/15 - 10/29/15).

data-science machine-learning scikit-learn data-analysis pandas jupyter-notebook course linear-regression logistic-regression model-evaluation naive-bayes natural-language-processing decision-trees ensemble-learning clustering regular-expressions web-scraping data-visualization data-cleaning

CatBoost is a machine learning method based on gradient boosting over decision trees. All CatBoost documentation is available here.

machine-learning decision-trees gradient-boosting gbm gbdt r kaggle gpu-computing catboost tutorial categorical-features distributed gpu coreml opensource data-science big-data

Python implementations of common machine learning algorithms.

linear-regression polynomial-regression logistic-regression decision-trees random-forest svm svr knn-classification naive-bayes-classifier kmeans-clustering hierarchical-clustering pca lda xgboost-algorithm

Practice and tutorial-style notebooks covering a wide variety of machine learning techniques.

numpy statistics pandas matplotlib regression scikit-learn classification principal-component-analysis clustering decision-trees random-forest dimensionality-reduction neural-network deep-learning artificial-intelligence data-science machine-learning k-nearest-neighbours naive-bayes

The Customer 360 solution provides a scalable way to build a customer profile enriched by machine learning. It also allows you to uniformly access and operate on data across disparate data sources (while minimizing raw data movement) and to leverage the power of Microsoft R Server for scalable modelling and accurate predictions. Ingestion and pre-processing: ingest, prepare, and aggregate live user activity data.

azure azure-functions datawarehouse data-science customer-enrichment feature-engineering data-virtualization multi-class-classification decision-trees

This is the repository for D-Lab's Introduction to Machine Learning in R workshop.

machine-learning dlab-berkeley tutorial knn random-forest gradient-boosting-machine superlearner decision-trees

The purpose of this repository is to use machine learning to solve NLP problems without involving deep learning, so only traditional machine learning methods are used here, including Naive Bayes, Decision Tree, Random Forest, GBDT, and so on. The first task uses Naive Bayes for binary classification: deciding whether or not a sentence is related to 'theft'.
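The Naive Bayes binary classification step described above can be sketched in pure Python as a bag-of-words multinomial classifier with Laplace smoothing. The tiny corpus, labels, and function names here are made up for illustration and are not from the repository:

```python
import math
from collections import Counter

# Minimal multinomial Naive Bayes for binary text classification
# (label 1 = 'theft'-related, 0 = not), with add-one smoothing.

def train(docs):
    """docs: list of (word_list, label). Returns (word counts, doc counts, vocab)."""
    counts = {0: Counter(), 1: Counter()}
    n_docs = {0: 0, 1: 0}
    for words, label in docs:
        counts[label].update(words)
        n_docs[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, n_docs, vocab

def predict(model, words):
    counts, n_docs, vocab = model
    total = n_docs[0] + n_docs[1]
    scores = {}
    for label in (0, 1):
        score = math.log(n_docs[label] / total)          # log prior
        denom = sum(counts[label].values()) + len(vocab)  # smoothed denominator
        for w in words:
            if w in vocab:                                # skip unseen words
                score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Made-up toy corpus.
corpus = [
    ("someone stole my bike last night".split(), 1),
    ("the thief took her wallet".split(), 1),
    ("we had lunch in the park".split(), 0),
    ("the weather is nice today".split(), 0),
]
model = train(corpus)
print(predict(model, "my wallet was stole".split()))  # -> 1 (theft-related)
```

The "naive" assumption is that word occurrences are conditionally independent given the class, which is why the score is just a sum of per-word log probabilities.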

naive-bayes-classifier decision-trees gbdt

Wrappers are written around Orange C4.5, sklearn CART, GUIDE, and QUEST. Each returns a Decision Tree object, defined in decisiontree.py, which supports several operations: classifying new, unknown samples; visualising the tree; exporting it to string, JSON, and DOT; etc. A wrapper around the R package inTrees and an implementation of ISM can be found in the constructors package.

decision-trees data-mining ensemble

AIFAD stands for Automated Induction of Functions over Algebraic Data Types. It is an application written in OCaml that improves decision tree learning by supporting significantly more complex kinds of data. This allows users to describe the data they want to learn functions on more conveniently, and can improve the accuracy and complexity of the resulting models. It handles multi-valued attributes; this has become widespread among decision tree learners, but some implementations still only support binary ones.

ocaml machine-learning decision-trees algebraic-data-types

This repository was created with the aim of spreading Machine Learning teaching in Portuguese. The algorithms implemented here are not optimized and were written for easy understanding. They should therefore not be used for research or for purposes other than those specified.

machine-learning machine-learning-algorithms adaboost decision-trees kmeans knn linear-discriminant-analysis principal-component-analysis naive-bayes regression linear-regression neural-network redes-neurais-artificiais multilinear-regression polynomial-regression feature-selection

Massive On-line Analysis (MOA) is an environment for massive data mining. MOA provides a framework for data stream mining and includes tools for evaluation and a collection of machine learning algorithms. It is related to the WEKA project and is also written in Java, while scaling to more demanding problems.

moa datastream classification ensemble-learning random-forest concept-drift decision-trees ensemble

A Go scoring API for Predictive Model Markup Language (PMML). The author is happy to implement new models on demand, or to assist with any other issue.

pmml random-forest classification machine-learning decision-trees

In 2018, the European Space Agency (ESA) organized a series of six lectures on Machine Learning at the European Space Operations Centre (ESOC). This repository contains the lecture resources: presentations, notebooks, and links to the videos (presentation and hands-on).

machinelearning machine-learning linear-regression support-vector-machines decision-trees random-forest neural-network deep-learning clustering pca anomaly-detection text-mining tf-idf topic-modeling lectures lecture-slides lecture-material lecture-videos

leaves is a library implementing prediction code for GBRT (Gradient Boosting Regression Trees) models in pure Go. The goal of the project is to make it possible to use models from popular GBRT frameworks in Go programs without C API bindings. To use an XGBoost model instead of a LightGBM one, just change leaves.LGEnsembleFromFile to leaves.XGEnsembleFromFile.

machine-learning lightgbm xgboost decision-trees boosting