SOTA: MLPerf inference benchmark v0.5 results snapshot (open, Available) for collaborative validation
Live CK scoreboards are connected to CK solutions
and adaptive containers, making it easier to participate in collaborative AI/ML/SW/HW benchmarking.
See the CK white paper for more details.
Benchmark results (performance): P_*_SS - Single Stream latency in milliseconds, P_*_MS - MultiStream in number of streams, P_*_S - Server in queries per second (QPS), P_*_O - Offline in inputs per second.
Benchmark results (accuracy): A_IC* - Top-1 accuracy, A_OD* - mAP, A_NMT* - BLEU score.
CK components: packages, software detection plugins.
Image Classification: IC1 - ImageNet, MobileNet-v1, IC2 - ImageNet, ResNet-50 v1.5.
Object detection: OD1 - COCO, SSD w/ MobileNet-v1, OD2 - COCO 1200x1200, SSD w/ ResNet-34.
Translation: NMT - WMT English-German, GNMT.
Form Factor: FF_M - Mobile/Handheld, FF_D - Desktop/Workstation, FF_S - Server, FF_E - Edge/Embedded.
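The column-naming scheme described by the legend above can be decoded programmatically. The sketch below is a hypothetical helper (not part of the CK API); the dictionaries simply restate the legend, and the `P_<task>_<scenario>` key format is an assumption based on the abbreviations shown.

```python
# Hypothetical decoder for scoreboard performance columns such as "P_IC1_SS".
# The mappings restate the legend above; the key format is assumed.

SCENARIOS = {
    "SS": ("Single Stream", "milliseconds"),
    "MS": ("MultiStream", "number of streams"),
    "S":  ("Server", "queries per second"),
    "O":  ("Offline", "inputs per second"),
}

TASKS = {
    "IC1": "Image Classification: ImageNet, MobileNet-v1",
    "IC2": "Image Classification: ImageNet, ResNet-50 v1.5",
    "OD1": "Object Detection: COCO, SSD w/ MobileNet-v1",
    "OD2": "Object Detection: COCO 1200x1200, SSD w/ ResNet-34",
    "NMT": "Translation: WMT English-German",
}

def decode_column(name: str) -> dict:
    """Split a performance column name like 'P_IC1_SS' into its parts."""
    kind, task, scenario = name.split("_")
    label, units = SCENARIOS[scenario]
    return {
        "kind": "performance" if kind == "P" else "accuracy",
        "task": TASKS[task],
        "scenario": label,
        "units": units,
    }

print(decode_column("P_IC1_SS"))
```

For example, `P_OD2_O` would decode to the Offline scenario for SSD w/ ResNet-34 on COCO, measured in inputs per second.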
These are not official MLPerf results but a community snapshot intended to collaboratively reproduce results and add portable workflows!
MLPerf name and logo are trademarks. See www.mlperf.org for more information.
Reproduced paper: MLPerf Inference Benchmark