
MLPerf Inference Benchmark

lib:d0e50ebb5b9d4ec9 (v1.0.0)

Authors: Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, Ramesh Chukka, Cody Coleman, Sam Davis, Pan Deng, Greg Diamos, Jared Duke, Dave Fick, J. Scott Gardner, Itay Hubara, Sachin Idgunji, Thomas B. Jablin, Jeff Jiao, Tom St. John, Pankaj Kanwar, David Lee, Jeffery Liao, Anton Lokhmotov, Francisco Massa, Peng Meng, Paulius Micikevicius, Colin Osborne, Gennady Pekhimenko, Arun Tejusve Raghunath Rajan, Dilip Sequeira, Ashish Sirasao, Fei Sun, Hanlin Tang, Michael Thomson, Frank Wei, Ephrem Wu, Lingjie Xu, Koichi Yamada, Bing Yu, George Yuan, Aaron Zhong, Peizhao Zhang, Yuchen Zhou
arXiv: 1911.02549
Document: PDF | DOI
Artifact development version: GitHub
Abstract URL: https://arxiv.org/abs/1911.02549v2


Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability.
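The abstract refers to the benchmarking method without showing how a submission is driven in practice. The paper's reference harness is built around the LoadGen load generator, so the following is a minimal, illustrative sketch assuming the mlperf_loadgen Python bindings are installed from the MLPerf Inference reference implementation; the toy predict() function, the sample counts, and the exact callback signatures (which have changed between LoadGen versions) are assumptions, not the authors' code.

```python
# Minimal sketch of a system under test (SUT) driven by MLPerf LoadGen.
# Assumes the mlperf_loadgen Python package from the reference implementation;
# exact signatures may differ across LoadGen versions.
import array
import numpy as np
import mlperf_loadgen as lg

# Toy "model": a real SUT would run an actual inference backend here.
def predict(sample_index):
    return np.zeros(1, dtype=np.float32)

def issue_queries(query_samples):
    # LoadGen hands us a batch of QuerySample objects; we answer each one.
    responses = []
    buffers = []  # keep response buffers alive until LoadGen copies them
    for qs in query_samples:
        output = predict(qs.index)
        buf = array.array("B", output.tobytes())
        buffers.append(buf)
        addr, length = buf.buffer_info()
        responses.append(lg.QuerySampleResponse(qs.id, addr, buf.itemsize * length))
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

def load_samples(sample_indices):
    pass  # a real query sample library (QSL) would stage these samples in memory

def unload_samples(sample_indices):
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.SingleStream  # one of the MLPerf scenarios
settings.mode = lg.TestMode.PerformanceOnly

# Note: older LoadGen releases expect an extra latency-reporting callback here.
sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)  # results are written to mlperf_log_summary.txt
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

The same harness structure is reused across scenarios; only the TestSettings change, which is how the rules keep measurements comparable across very different systems.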

Relevant initiatives  

Related knowledge about this paper:
Reproduced results (crowd-benchmarking and competitions)
Artifact and reproducibility checklists
Common formats for research projects and shared artifacts
Reproducibility initiatives
