docker:mlperf-inference-v0.7.openvino (v2.6.0)
Creation date: 2020-07-16
Source: GitHub
cID: 88eef0cd8c43b68a:dd04628a2cc32cdf



This CK-powered container is our attempt to provide a common API to customize, build and run AI and ML applications with different models, frameworks, libraries, datasets, compilers, formats, backends and platforms. Our ongoing goal is to make the onboarding process as simple as possible via this platform. Please check the CK white paper and don't hesitate to contact us if you have suggestions or feedback!


MLPerf Inference v0.7 - OpenVINO

This collection of images from dividiti tests automated, customizable and reproducible Collective Knowledge workflows for OpenVINO workloads.

CK_TAG (Dockerfile's extension) | Python | GCC   | Comments
ubuntu-20.04                    | 3.8.2  | 9.3.0 |

Set up Collective Knowledge

You will need to install Collective Knowledge to build images and save benchmarking results. Please follow the CK installation instructions and then pull our object detection repository:

$ ck pull repo:ck-mlperf

NB: Refresh all CK repositories after any updates (e.g. bug fixes):

$ ck pull all


Build an image

To build an image, e.g. from Dockerfile.ubuntu-20.04:

$ export CK_IMAGE=mlperf-inference-v0.7.openvino CK_TAG=ubuntu-20.04
$ cd `ck find docker:$CK_IMAGE` && docker build -t ctuning/$CK_IMAGE:$CK_TAG -f Dockerfile.$CK_TAG .
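If you script the build, it helps to compose the image reference and Dockerfile name once and reuse them. A minimal sketch, assuming the ctuning/ Docker Hub namespace and the Dockerfile.$CK_TAG naming scheme shown above:

```shell
# Compose the same names the build command above uses, so later
# steps (run, push, inspect) can reuse them consistently.
CK_IMAGE=mlperf-inference-v0.7.openvino
CK_TAG=ubuntu-20.04
IMAGE_REF="ctuning/${CK_IMAGE}:${CK_TAG}"   # full image reference
DOCKERFILE="Dockerfile.${CK_TAG}"           # per-tag Dockerfile name
echo "${IMAGE_REF}"
echo "${DOCKERFILE}"
```

You can then build with `docker build -t "$IMAGE_REF" -f "$DOCKERFILE" .` from the directory that `ck find docker:$CK_IMAGE` returns.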

Run the default command

To run the default command of an image, e.g. one built from Dockerfile.ubuntu-20.04:

$ export CK_IMAGE=mlperf-inference-v0.7.openvino CK_TAG=ubuntu-20.04
$ docker run --rm ctuning/$CK_IMAGE:$CK_TAG
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.242
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.381
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.277
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.031
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.189
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.575
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.224
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.264
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.265
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.036
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.194
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.620
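The output above is the standard COCO evaluation summary. If you want to track the headline mAP (the first Average Precision line) across runs, e.g. for a simple regression check, one way is to extract it from the captured log. A sketch, using a hypothetical one-line excerpt of the log shown above:

```shell
# Hypothetical log line (copied from the COCO summary above);
# in practice you would capture the full `docker run` output to a file.
LOG=' Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.242'
# Pull out the numeric value after the final '= '.
MAP=$(echo "$LOG" | sed -n 's/.*= \([0-9.]*\)$/\1/p')
echo "$MAP"
```

Comparing the extracted value against a stored reference (here 0.242) makes it easy to notice accuracy drift when rebuilding the image.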



