
Jason's flows

C-Support Vector Classification. The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
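For orientation, a minimal sketch of how a C-SVC flow like this one is typically built in scikit-learn; the flow's recorded hyperparameters are not shown on this page, so the values below are illustrative assumptions:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    clf = SVC(C=1.0, kernel="rbf", gamma="scale")  # libsvm-backed C-SVC; values assumed
    clf.fit(X, y)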
Multi-layer Perceptron classifier. This model optimizes the log-loss function using LBFGS or stochastic gradient descent. (Added in scikit-learn 0.18.)
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
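A minimal sketch of the corresponding estimator; the solver choice and layer sizes are assumptions, not this flow's stored parameters:

    from sklearn.neural_network import MLPClassifier

    clf = MLPClassifier(hidden_layer_sizes=(100,), solver="lbfgs", max_iter=1000)
    clf.fit([[0., 0.], [1., 1.]], [0, 1])  # tiny toy problem
    print(clf.predict([[2., 2.]]))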
Linear classifiers (SVM, logistic regression, a.o.) with SGD training. This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
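A sketch of SGD-trained linear classification; loss and regularization settings here are illustrative (loss="hinge" yields a linear SVM):

    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4)  # assumed settings
    clf.fit([[0., 0.], [1., 1.]], [0, 1])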
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive…
1 run, 0 likes, 0 downloads, 0 reach, 1 impact
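A minimal random-forest sketch, with an assumed ensemble size:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=100, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)  # n_estimators assumed
    clf.fit(X, y)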
Gradient Boosting for classification. GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
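A gradient-boosting sketch under assumed stage count, learning rate, and tree depth:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=100, random_state=0)
    clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
    clf.fit(X, y)  # each stage fits a tree to the loss gradient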
Classifier implementing the k-nearest neighbors vote.
6 runs, 0 likes, 0 downloads, 0 reach, 6 impact
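A minimal k-NN sketch; k is an assumption, not this flow's setting:

    from sklearn.neighbors import KNeighborsClassifier

    clf = KNeighborsClassifier(n_neighbors=3)  # k assumed
    clf.fit([[0], [1], [2], [3]], [0, 0, 1, 1])
    print(clf.predict([[1.1]]))  # majority vote of the 3 nearest -> [0]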
An extra-trees classifier. This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
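An extra-trees sketch, analogous to the random forest above but with randomized split thresholds; the ensemble size is again assumed:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier

    X, y = make_classification(n_samples=100, random_state=0)
    clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)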
Naive Bayes classifier for multivariate Bernoulli models. Like MultinomialNB, this classifier is suitable for discrete data. The difference is that while MultinomialNB works with occurrence counts,…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
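A Bernoulli naive Bayes sketch on synthetic binary features (the data here is made up for illustration):

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.RandomState(1)
    X = rng.randint(2, size=(6, 10))    # binary presence/absence features
    y = np.array([1, 2, 3, 4, 4, 5])
    clf = BernoulliNB().fit(X, y)
    print(clf.predict(X[2:3]))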
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
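A ColumnTransformer sketch; the column names and sub-transformers are illustrative, not this flow's configuration:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.DataFrame({"age": [25, 32, 47], "city": ["NY", "SF", "NY"]})
    ct = ColumnTransformer([
        ("num", StandardScaler(), ["age"]),   # scale the numeric column
        ("cat", OneHotEncoder(), ["city"]),   # one-hot the categorical column
    ])
    print(ct.fit_transform(df))  # concatenated output of both branches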
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
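A Pipeline sketch with assumed steps (a scaler followed by an SVC); the actual steps of this flow are not listed here:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    pipe = Pipeline([("scale", StandardScaler()),  # intermediate transform step
                     ("svc", SVC())])              # final estimator
    pipe.fit([[0., 0.], [1., 1.]], [0, 1])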
Imputation transformer for completing missing values.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
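An imputation sketch, assuming the modern SimpleImputer API (the successor to scikit-learn's older Imputer class):

    import numpy as np
    from sklearn.impute import SimpleImputer

    imp = SimpleImputer(strategy="mean")
    print(imp.fit_transform([[1., 2.], [np.nan, 3.], [7., 6.]]))  # NaN -> column mean 4.0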
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
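The formula above in a minimal runnable form:

    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler()
    print(scaler.fit_transform([[0.], [1.], [2.]]))  # z = (x - u) / s per column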
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Encode categorical integer features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete)…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
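A one-hot encoding sketch; handle_unknown="ignore" is an assumption, not this flow's recorded option:

    from sklearn.preprocessing import OneHotEncoder

    enc = OneHotEncoder(handle_unknown="ignore")
    print(enc.fit_transform([["red"], ["green"], ["red"]]).toarray())
    # one column per category value: [[0,1], [1,0], [0,1]]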
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
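A variance-threshold sketch; with the default threshold of 0, only constant features are dropped:

    from sklearn.feature_selection import VarianceThreshold

    sel = VarianceThreshold(threshold=0.0)
    print(sel.fit_transform([[0, 2, 0], [0, 1, 4], [0, 1, 1]]))  # constant first column removed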
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
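An AdaBoost sketch with an assumed number of boosting rounds; by default each round fits a shallow decision-tree base estimator on reweighted data:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=100, random_state=0)
    clf = AdaBoostClassifier(n_estimators=50).fit(X, y)  # n_estimators assumed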
Classifier implementing the k-nearest neighbors vote.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Gradient Boosting for classification. GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
An extra-trees classifier. This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Naive Bayes classifier for multivariate Bernoulli models. Like MultinomialNB, this classifier is suitable for discrete data. The difference is that while MultinomialNB works with occurrence counts,…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Imputation transformer for completing missing values.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample `x` is calculated as z = (x - u) / s, where `u` is the mean of the training samples or zero if…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Imputation transformer for completing missing values.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Encode categorical integer features as a one-hot numeric array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete)…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
An AdaBoost classifier. An AdaBoost [1] classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Logistic Regression (aka logit, MaxEnt) classifier. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the…
1 run, 0 likes, 0 downloads, 0 reach, 0 impact
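A logistic-regression sketch; the explicit 'ovr' option mirrors the description above but is an assumed setting here:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(multi_class="ovr", max_iter=1000).fit(X, y)  # one-vs-rest, assumed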
J. Friedman, T. Hastie, R. Tibshirani (1998). Additive Logistic Regression: a Statistical View of Boosting. Stanford University.
56 runs, 0 likes, 0 downloads, 0 reach, 0 impact
J. Platt: Fast Training of Support Vector Machines using Sequential Minimal Optimization. In B. Schoelkopf and C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning,…
2 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
le Cessie, S., van Houwelingen, J.C. (1992). Ridge Estimators in Logistic Regression. Applied Statistics. 41(1):191-201.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Imputation transformer for completing missing values.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit…
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Feature selector that removes all low-variance features. This feature selection algorithm looks only at the features (X), not the desired outputs (y), and can thus be used for unsupervised learning.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
C-Support Vector Classification. The implementation is based on libsvm. The fit time complexity is more than quadratic with the number of samples, which makes it hard to scale to datasets with more than…
219 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Automatically created scikit-learn flow.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Automatically created scikit-learn flow.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Automatically created scikit-learn flow.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Automatically created scikit-learn flow.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Automatically created scikit-learn flow.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
D. Aha, D. Kibler (1991). Instance-based learning algorithms. Machine Learning. 6:37-66.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Wikipedia. Euclidean distance. URL http://en.wikipedia.org/wiki/Euclidean_distance.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
William W. Cohen: Fast Effective Rule Induction. In: Twelfth International Conference on Machine Learning, 115-123, 1995.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Leo Breiman (2001). Random Forests. Machine Learning. 45(1):5-32.
1 run, 0 likes, 0 downloads, 0 reach, 0 impact
R.C. Holte (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning. 11:63-91.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
J. Friedman, T. Hastie, R. Tibshirani (1998). Additive Logistic Regression: a Statistical View of Boosting. Stanford University.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
0 runs, 0 likes, 0 downloads, 0 reach, 0 impact
Ross Quinlan (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA.
1 run, 0 likes, 0 downloads, 0 reach, 0 impact
Weka implementation.
6 runs, 0 likes, 0 downloads, 0 reach, 5 impact