An Information Theoretic Metric for Multi-Class Categorization: The most common metrics used to evaluate a classifier are accuracy, precision, recall, and $F_1$-score. These metrics are widely used in machine learning, information retrieval, and text analysis (e.g., text categorization). Each of them is imperfect in some way: it captures only one aspect of predictor performance and can be misled by a pathological data set. Moreover, none of them can be used to compare predictors across different data sets. In this paper we present an information-theoretic performance metric that does not suffer from these flaws and can be used in both classification (binary and multi-class) and categorization (where each example may be placed in several categories) settings. The code to compute the metric is available under the Apache open-source license.
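The abstract does not spell out the metric's definition, but a common information-theoretic choice for this purpose is the mutual information between the actual and predicted labels, normalized by the entropy of the actual labels. A minimal sketch under that assumption (the function name and the normalization are illustrative, not the talk's exact formulation):

```python
import math
from collections import Counter

def normalized_mutual_information(actual, predicted):
    """I(A; P) / H(A): how much of the uncertainty in the actual
    labels A is resolved by the predicted labels P. Returns 1.0 for
    a perfect predictor and 0.0 for a predictor independent of A.
    This is a sketch of one information-theoretic metric, not
    necessarily the exact metric presented in the talk."""
    n = len(actual)
    pa = Counter(actual)            # marginal counts of actual labels
    pp = Counter(predicted)         # marginal counts of predicted labels
    joint = Counter(zip(actual, predicted))  # joint counts
    # H(A): entropy of the actual label distribution, in bits
    h_a = -sum((c / n) * math.log2(c / n) for c in pa.values())
    # I(A; P) = sum over (a, p) of p(a,p) * log2( p(a,p) / (p(a) * p(p)) )
    mi = sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pp[p] / n)))
             for (a, p), c in joint.items())
    return mi / h_a if h_a > 0 else 0.0
```

Unlike accuracy, this quantity is measured on a scale (fraction of label entropy explained) that is comparable across data sets with different class balances, which is the property the abstract emphasizes.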
Session Summary
An Information Theoretic Metric for Multi-Class Categorization
MLconf 2016 Seattle
Sam Steingold
Chief Data Scientist