OpenML
Measure

kappa

Cohen's kappa coefficient is a statistical measure of agreement for qualitative (categorical) items: it measures the agreement of the prediction with the true class, where 1.0 signifies complete agreement. It is generally considered a more robust measure than simple percent agreement, since kappa takes into account the agreement occurring by chance. However, some researchers have expressed concern over kappa's tendency to take the observed categories' frequencies as givens, which can have the effect of underestimating agreement for a category that is also commonly used; for this reason, kappa is considered an overly conservative measure of agreement.

The equation for kappa is: $$\kappa = \frac{\Pr(a) - \Pr(e)}{1 - \Pr(e)}$$ where Pr(a) is the relative observed agreement among raters, and Pr(e) is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly saying each category. If the raters are in complete agreement then kappa = 1. If there is no agreement among the raters other than what would be expected by chance (as defined by Pr(e)), kappa = 0.

See: Cohen, Jacob (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 (1): 37–46.
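The sketch below illustrates the formula above: Pr(a) is computed as the fraction of matching labels, and Pr(e) from the per-category frequencies of the true labels and the predictions. The function name and example data are illustrative, not part of the OpenML implementation.

    # Minimal sketch of Cohen's kappa for two lists of categorical labels.
    from collections import Counter

    def cohens_kappa(y_true, y_pred):
        n = len(y_true)
        # Pr(a): relative observed agreement between prediction and true class
        pr_a = sum(t == p for t, p in zip(y_true, y_pred)) / n
        # Pr(e): chance agreement from the observed frequency of each category
        true_counts = Counter(y_true)
        pred_counts = Counter(y_pred)
        pr_e = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
        return (pr_a - pr_e) / (1 - pr_e)

    # Example: 80% raw agreement, corrected for chance agreement (Pr(e) = 0.52)
    y_true = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes"]
    y_pred = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
    print(cohens_kappa(y_true, y_pred))  # ~0.58: agreement beyond chance, not complete

In this example the raw percent agreement is 0.8, but because both "raters" label "yes" most of the time, a fair amount of that agreement would occur by chance, so kappa is noticeably lower.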

Properties

Minimum value: -1
Maximum value: 1
Unit:
Optimization: Higher is better
