Check the preview of the 2nd version of this platform, being developed by the open MLCommons taskforce on automation and reproducibility as a free, open-source and technology-agnostic on-prem platform.

SOTA: Validating MLPerf inference benchmark v0.5 results (object detection) via CK crowd-benchmarking

result:sota-mlperf-object-detection-v0.5-crowd-benchmarking (v1.0.0)
License: https://github.com/mlperf/policies/blob/master/TERMS%20OF%20USE.md
Creation date: 2019-12-19
Source: mlperf.org/inference-overview/#overview
cID: 4cd2850867df4241:844ef83a757b605a
Push data to this graph: see the docs and the graph meta description.
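
For reference, below is a minimal sketch of how a crowd-benchmarking result could be pushed to this graph over HTTP. The endpoint URL, the authorization header, and the payload fields are illustrative assumptions rather than the platform's documented API; consult the docs and the graph meta description for the authoritative interface.

    # Minimal sketch: publish one object-detection result to this CK scoreboard.
    # NOTE: the endpoint, auth header and payload fields below are assumptions
    # for illustration only; see the platform docs for the real API.
    import requests

    GRAPH_CID = "4cd2850867df4241:844ef83a757b605a"   # cID of this result graph (from the page above)
    API_URL = "https://cknowledge.io/api/v1/graphs/push"  # hypothetical endpoint

    result = {
        "cid": GRAPH_CID,
        "benchmark": "mlperf-inference-v0.5",
        "task": "object-detection",
        "division": "open",                  # unofficial / crowd-benchmarked entry
        "model": "ssd-mobilenet",            # example model name
        "scenario": "SingleStream",
        "latency_ms_90th_percentile": 42.0,  # example metric value
    }

    resp = requests.post(
        API_URL,
        json=result,
        headers={"Authorization": "Bearer <YOUR_API_KEY>"},  # hypothetical auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    print("Push status:", resp.status_code)
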
Don't hesitate to get in touch if you encounter any issues or would like to discuss this community project!
Live CK scoreboards are connected with portable CK workflows and adaptive containers to help the community participate in collaborative AI/ML/SW/HW benchmarking and design space exploration (DSE). See the accompanying paper and ACM TechTalk for more details.
This is a snapshot of MLPerf benchmark results together with unofficial results collected via the CK crowd-benchmarking platform using portable CK workflows.
The MLPerf name and logo are trademarks. See www.mlperf.org for more information.
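
As an illustration of the portable CK workflows mentioned above, the sketch below drives the CK command-line client from Python to pull a benchmark repository, install a model package, and run one object-detection workload. The repository name, package tags, and program name are assumptions for illustration and may differ from the actual ck-mlperf components.

    # Minimal sketch of driving portable CK workflows from Python.
    # ASSUMPTIONS: the repository name "ck-mlperf", the package tags and the
    # program name below are illustrative; check the CK/MLPerf docs for the
    # exact components used by this scoreboard.
    import subprocess

    def ck(*args):
        """Run one CK CLI command and fail loudly if it returns an error."""
        cmd = ["ck", *args]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Fetch the CK repository with the MLPerf inference workflows.
    ck("pull", "repo:ck-mlperf")

    # 2. Install a detection model package by tags (illustrative tags).
    ck("install", "package", "--tags=model,tf,ssd-mobilenet")

    # 3. Run one object-detection benchmark program (illustrative name);
    #    its results can later be pushed to this scoreboard.
    ck("run", "program:mlperf-inference-v0.5", "--cmd_key=object-detection")
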


Reproduced paper: MLPerf Inference Benchmark

SOTA: MLPerf inference benchmark v0.5 results snapshot (Open, Available) connected to the CK crowd-benchmarking platform
