Flow
sklearn.ensemble._forest.RandomForestRegressor

Visibility: public
Uploaded: 16-03-2023 by Amanda Humphrey
Dependencies: sklearn==1.2.2, numpy>=1.17.3, scipy>=1.3.2, joblib>=1.1.1, threadpoolctl>=2.0.0
Runs: 0
Tags: openml-python, python, scikit-learn, sklearn, sklearn_1.2.2
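Since the flow is tagged openml-python, it can also be fetched programmatically. A minimal sketch using the openml-python client is below; `FLOW_ID` is a hypothetical placeholder, since the flow's numeric id is not shown on this page.

```python
# Hedged sketch: fetching this flow with the openml-python client.
import openml

FLOW_ID = 12345  # hypothetical placeholder; use the id from this page's URL

flow = openml.flows.get_flow(FLOW_ID)
print(flow.name)          # sklearn.ensemble._forest.RandomForestRegressor
print(flow.dependencies)  # sklearn==1.2.2, numpy>=1.17.3, ...
```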


A random forest regressor. A random forest is a meta estimator that fits a number of decision tree regressors on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the `max_samples` parameter if `bootstrap=True` (default); otherwise the whole dataset is used to build each tree.
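To make the averaging behavior concrete, here is a minimal, self-contained sketch on synthetic data. It uses scikit-learn's library defaults rather than the hyperparameter values stored in this flow (those are listed under Parameters below).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 4))
y = X[:, 0] + 0.1 * rng.normal(size=200)  # noisy target

# bootstrap=True (the default): each tree is fit on a bootstrap
# sub-sample of X, and predictions are averaged over all trees.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)
print(forest.predict(X[:3]))
```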

Parameters

bootstrap: Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree. Default: true
ccp_alpha: Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than `ccp_alpha` will be chosen. By default, no pruning is performed. See the scikit-learn user guide on minimal cost-complexity pruning for details. (Added in scikit-learn 0.22.) Default: 0.0
criterion: The function to measure the quality of a split. Supported criteria are "squared_error" for the mean squared error, which is equal to variance reduction as feature selection criterion and minimizes the L2 loss using the mean of each terminal node; "friedman_mse", which uses mean squared error with Friedman's improvement score for potential splits; "absolute_error" for the mean absolute error, which minimizes the L1 loss using the median of each terminal node; and "poisson", which uses reduction in Poisson deviance to find splits. Training with "absolute_error" is significantly slower than with "squared_error". Default: "friedman_mse"
max_depth: The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than `min_samples_split` samples. Default: null
max_features: The number of features to consider when looking for the best split. If int, consider `max_features` features at each split; if float, `max_features` is a fraction and `max(1, int(max_features * n_features_in_))` features are considered at each split; if "sqrt", then `max_features=sqrt(n_features)`; if "log2", then `max_features=log2(n_features)`; if None or 1.0, then `max_features=n_features`. A value of 1.0 is equivalent to bagged trees, and more randomness can be achieved by setting smaller values, e.g. 0.3. (The library default changed from "auto" to 1.0 in scikit-learn 1.1, and "auto" is deprecated.) This flow stores a fractional value; see the sketch after this list. Default: 0.41354225419909363
max_leaf_nodes: Grow trees with `max_leaf_nodes` in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, the number of leaf nodes is unlimited. Default: null
max_samples: If bootstrap is True, the number of samples to draw from X to train each base estimator. If None (the library default), draw `X.shape[0]` samples; if int, draw `max_samples` samples; if float, draw `max_samples * X.shape[0]` samples, so `max_samples` should be in the interval `(0.0, 1.0]`. (Added in scikit-learn 0.22.) Default: null
min_impurity_decrease: A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is `N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)`, where `N` is the total number of samples, `N_t` is the number of samples at the current node, `N_t_L` is the number of samples in the left child, and `N_t_R` is the number of samples in the right child. `N`, `N_t`, `N_t_R` and `N_t_L` all refer to the weighted sum if `sample_weight` is passed. (Added in scikit-learn 0.19.) Default: 0.0
min_samples_leaf: The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least `min_samples_leaf` training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression. If int, consider `min_samples_leaf` as the minimum number; if float, `min_samples_leaf` is a fraction and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node. (Float values for fractions added in scikit-learn 0.18.) Default: 14
min_samples_split: The minimum number of samples required to split an internal node. If int, consider `min_samples_split` as the minimum number; if float, `min_samples_split` is a fraction and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split. (Float values for fractions added in scikit-learn 0.18.) Default: 5
min_weight_fraction_leaf: The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when `sample_weight` is not provided. Default: 0.0
n_estimators: The number of trees in the forest. (The library default changed from 10 to 100 in scikit-learn 0.22.) Default: 100
n_jobs: The number of jobs to run in parallel. `fit`, `predict`, `decision_path` and `apply` are all parallelized over the trees. None means 1 unless in a `joblib.parallel_backend` context; -1 means using all processors. See the scikit-learn glossary for details. Default: null
oob_score: Whether to use out-of-bag samples to estimate the generalization score. Only available if `bootstrap=True`. Default: false
random_state: Controls both the randomness of the bootstrapping of the samples used when building trees (if `bootstrap=True`) and the sampling of the features to consider when looking for the best split at each node (if `max_features < n_features`). See the scikit-learn glossary for details. Default: null
verbose: Controls the verbosity when fitting and predicting. Default: 0
warm_start: When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See the scikit-learn glossary for details. Default: false
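The "Default:" values above are the hyperparameter settings stored in this flow, several of which differ from scikit-learn's library defaults (criterion, max_features, min_samples_leaf, min_samples_split). A sketch reconstructing the configured estimator:

```python
from sklearn.ensemble import RandomForestRegressor

# Hyperparameter values as stored in this flow (see the list above).
model = RandomForestRegressor(
    n_estimators=100,
    criterion="friedman_mse",
    max_depth=None,
    max_features=0.41354225419909363,  # fraction of features per split
    min_samples_split=5,
    min_samples_leaf=14,
    min_weight_fraction_leaf=0.0,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    bootstrap=True,
    oob_score=False,
    n_jobs=None,
    random_state=None,
    verbose=0,
    warm_start=False,
    ccp_alpha=0.0,
    max_samples=None,
)
```

With `bootstrap=True` and `max_samples=None`, each of the 100 trees is trained on a bootstrap sample of `X.shape[0]` rows, and `max(1, int(0.41354225419909363 * n_features_in_))` features are considered at each split.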

Runs

No runs have been uploaded for this flow yet.
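The sketch below shows one way a run of this flow could be produced and published with openml-python. `TASK_ID` is a hypothetical placeholder for a regression task id, and publishing requires a configured OpenML API key.

```python
import openml
from sklearn.ensemble import RandomForestRegressor

TASK_ID = 0  # hypothetical placeholder; substitute a real regression task id

task = openml.tasks.get_task(TASK_ID)
model = RandomForestRegressor(n_estimators=100, random_state=0)

# Evaluates the model locally using the task's cross-validation splits.
run = openml.runs.run_model_on_task(model, task)
run.publish()  # uploads the run to OpenML
```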