# Metrics and scoring: quantifying the quality of predictions

Contents:

- The scoring parameter: defining model evaluation rules
- Defining your scoring strategy from metric functions
- R² score, the coefficient of determination
- Mean Poisson, Gamma, and Tweedie deviances
- Visual evaluation of regression models

For "pairwise" metrics, between samples and not estimators or predictions, see the Pairwise metrics, Affinities and Kernels section.

## The scoring parameter: defining model evaluation rules

Model selection and evaluation tools, such as model_selection.GridSearchCV and model_selection.cross_val_score, take a scoring parameter that controls what metric they apply to the estimators evaluated.

### Common cases: predefined values

For the most common use cases, you can designate a scorer object with the scoring parameter by name; you can retrieve the names of all available scorers by calling get_scorer_names. If a wrong scoring name is passed, an InvalidParameterError is raised. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, such as mean_squared_error, are available as neg_mean_squared_error, which returns the negated value of the metric.

### Defining your scoring strategy from metric functions

The module sklearn.metrics also exposes a set of simple functions measuring a prediction error given ground truth and prediction: functions ending with _score return a value to maximize (the higher the better), while functions ending with _error or _loss return a value to minimize (the lower the better).
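As a sketch of that last idea, a metric function can be wrapped into a scorer with make_scorer; the Ridge estimator and synthetic data below are placeholder choices for illustration, not part of the original text:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import cross_val_score

# greater_is_better=False negates the _error metric, so the resulting
# scorer still follows the higher-is-better convention.
mse_scorer = make_scorer(mean_squared_error, greater_is_better=False)

# Placeholder data and estimator, for illustration only.
X, y = make_regression(n_samples=100, n_features=5, random_state=0)
print(cross_val_score(Ridge(), X, y, scoring=mse_scorer, cv=5))
```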
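For the predefined string values described under "Common cases" above, a minimal usage sketch (again with a placeholder estimator and data):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, n_features=5, random_state=0)

# "neg_mean_squared_error" is the negated MSE, so higher is better.
scores = cross_val_score(Ridge(), X, y, scoring="neg_mean_squared_error", cv=5)
print(scores)  # one negated-MSE value per fold
```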
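And to list every valid name for the scoring parameter (get_scorer_names is available in scikit-learn 1.0 and later):

```python
from sklearn.metrics import get_scorer_names

# Prints every string accepted by the scoring parameter; passing any
# other name raises InvalidParameterError.
print(get_scorer_names())
```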