Source: http://www.ncbi.nlm.nih.gov/pubmed/29238404
BioData Min. 2017 Dec 11;10:36.
doi: 10.1186/s13040-017-0154-4. eCollection 2017.

PMLB: a large benchmark suite for machine learning evaluation and comparison


Randal S Olson et al. BioData Min.

Abstract

Background: The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists.

Results: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered.

Conclusions: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

Keywords: Benchmarking; Data repository; Machine learning; Model evaluation.


Conflict of interest statement

Ethics approval: Not applicable. All data used in this study are publicly available online and do not contain private information about any particular individual.

Consent for publication: Not applicable.

Competing interests: The authors declare that they have no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Figures

Fig. 1 Histograms showing the distribution of meta-feature values from the PMLB datasets. Note the log scale of the y axes.

Fig. 2 Clustered meta-features of datasets in the PMLB projected onto the first two principal component axes (PCA 1 and PCA 2).

Fig. 3 Mean values of each meta-feature within the PMLB dataset clusters identified in Fig. 2.

Fig. 4 (a) Biclustering of the 13 ML models and 165 datasets according to the balanced accuracy of the models using their best parameter settings. (b) Deviation from the mean balanced accuracy across all 13 ML models, highlighting datasets on which all ML methods performed similarly versus those where certain methods performed better or worse than others. (c) Boundaries of the 40 contiguous biclusters identified from the 4 model-wise clusters by the 10 dataset-wise clusters.

Fig. 5 Balanced accuracy of the tuned ML models on each dataset across the PMLB suite of problems, sorted by the maximum balanced accuracy obtained for that dataset.
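Figures 4 and 5 report model performance as balanced accuracy, i.e., the unweighted mean of per-class recall, which corrects for class imbalance in a way plain accuracy does not. A minimal pure-Python sketch of the metric (the toy labels below are illustrative, not from the paper's experiments):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recall: every class contributes
    equally, regardless of how many samples it has."""
    correct = defaultdict(int)  # per-class correct predictions
    total = defaultdict(int)    # per-class sample counts
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy example: a majority-class guesser gets plain
# accuracy 0.9 but balanced accuracy only 0.5.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # → 0.5
```

This is why a constant predictor cannot look strong on the heavily imbalanced datasets in the suite: its recall on every minority class is zero, which drags the mean down.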
