docker:ck-mlperf-inference-vision-with-ck.intel.ubuntu-18.04 (v2.5.0)
Creation date: 2019-10-28
Source: GitHub
cID: 88eef0cd8c43b68a:42cbbebeca304484


Description  

This CK-powered container is our attempt to provide a common API to customize, build and run AI and ML applications with different models, frameworks, libraries, datasets, compilers, formats, backends and platforms. Our ongoing goal is to make the onboarding process as simple as possible via this platform. Please check this CK white paper and don't hesitate to contact us if you have suggestions or feedback!

ReadMe  

MLPerf Inference - Object Detection - TensorFlow with Intel MKL

This collection of CK-powered adaptive containers is based on the MKL-optimized TensorFlow image from Intel (which is in turn based on Ubuntu 18.04).

The image includes about a dozen TensorFlow models for object detection, the COCO 2017 validation dataset, and MKL-optimized TensorFlow 1.15.2.

  1. Setup
  2. Usage

Setup

Note that you may need to run the commands below with sudo, unless you manage Docker as a non-root user.
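If you prefer the latter, the usual post-install steps from the official Docker documentation look as follows (a sketch; you may need to log out and back in for the group change to take effect):

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker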

Set up Collective Knowledge

You will need to install Collective Knowledge to build images and save benchmarking results. Please follow the CK installation instructions and then pull our object detection repository:

$ ck pull repo:ck-mlperf
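For reference, CK itself is typically installed via pip (a minimal sketch, assuming Python 3 and pip are available; see the CK installation instructions for other options):

$ python3 -m pip install ck --user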

NB: Refresh all CK repositories after any updates (e.g. bug fixes):

$ ck pull all

(This only updates CK repositories on the host system. To update the Docker image, rebuild it using the --no-cache flag.)

Set up environment variables

Set up a variable holding the image name:

$ export CK_IMAGE=mlperf-inference-vision-with-ck.intel.ubuntu-18.04

Set up the variable that points to the directory that contains your CK repositories (usually ~/CK or ~/CK_REPOS):

$ export CK_REPOS=${HOME}/CK
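To confirm that both variables are set as expected:

$ echo ${CK_IMAGE}
$ echo ${CK_REPOS}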

Download from Docker Hub

To download a prebuilt image from Docker Hub, run:

$ docker pull ctuning/${CK_IMAGE}
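To confirm the download, you can list the image (standard Docker usage):

$ docker image ls ctuning/${CK_IMAGE}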

NB: As the prebuilt TensorFlow variant does not support AVX2 instructions, we advise using the TensorFlow variant built from sources on compatible hardware. In fact, as the prebuilt image was built on an HP Z640 workstation with an Intel(R) Xeon(R) CPU E5-2650 v3 (launched in Q3'14), we advise rebuilding the image on your system.
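On Linux, one quick way to check whether your CPU supports AVX2 (a sketch based on /proc/cpuinfo):

$ grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not supported"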

Build

To build an image on your system, run:

$ ck build docker:${CK_IMAGE}

NB: This CK command is equivalent to:

$ cd `ck find docker:${CK_IMAGE}`
$ docker build --no-cache -f Dockerfile -t ctuning/${CK_IMAGE} .

Usage

Run inference once

Once you have downloaded or built an image, you can run inference on the CPU as follows:

$ docker run --env-file ${CK_REPOS}/ck-mlperf/docker/${CK_IMAGE}/env.list --rm ctuning/${CK_IMAGE} \
        "ck run program:mlperf-inference-vision --cmd_key=direct \
        --env.CK_LOADGEN_EXTRA_PARAMS='--count 50' \
        --env.CK_METRIC_TYPE=COCO \
        --env.CK_LOADGEN_SCENARIO=SingleStream \
        --env.CK_LOADGEN_MODE='--accuracy' \
        --dep_add_tags.weights=ssd,mobilenet-v1,quantized,mlperf,tf \
        --dep_add_tags.lib-tensorflow=vcpu --env.CUDA_VISIBLE_DEVICES=-1 --env.CK_LOADGEN_BACKEND=tensorflow \
        --env.CK_LOADGEN_REF_PROFILE=default_tf_object_det_zoo \
        --skip_print_timers"

Here, we run inference on 50 images on the CPU using the quantized SSD-MobileNet model.

NB: This is equivalent to the default run command:

$ docker run --rm ctuning/$CK_IMAGE
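You can also open an interactive shell in the container to explore its contents (standard Docker usage, not specific to CK):

$ docker run -it --rm ctuning/${CK_IMAGE} bash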

We describe all supported models and flags below.

Models

Our TensorFlow-Python application supports the following TensorFlow models trained on the COCO 2017 dataset. With the exception of a TensorFlow reimplementation of YOLO v3, all the models come from the TensorFlow Object Detection model zoo. Note that we report the reference accuracy (mAP in %) on the COCO 2017 validation dataset (5,000 images).

| Model | Unique CK Tags (<tags>) | Is Custom? | mAP in % |
|---|---|---|---|
| faster_rcnn_nas_lowproposals_coco | rcnn,nas,lowproposals,vcoco | 0 | 44.340195 |
| faster_rcnn_resnet50_lowproposals_coco | rcnn,resnet50,lowproposals | 0 | 24.241037 |
| faster_rcnn_resnet101_lowproposals_coco | rcnn,resnet101,lowproposals | 0 | 32.594327 |
| faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco | rcnn,inception-resnet-v2,lowproposals | 0 | 36.520117 |
| faster_rcnn_inception_v2_coco | rcnn,inception-v2 | 0 | 28.309626 |
| ssd_inception_v2_coco | ssd,inception-v2 | 0 | 27.765988 |
| ssd_mobilenet_v1_coco | ssd,mobilenet-v1,non-quantized,mlperf,tf | 0 | 23.111170 |
| ssd_mobilenet_v1_quantized_coco | ssd,mobilenet-v1,quantized,mlperf,tf | 0 | 23.591693 |
| ssd_mobilenet_v1_fpn_coco | ssd,mobilenet-v1,fpn | 0 | 35.353170 |
| ssd_resnet_50_fpn_coco | ssd,resnet50,fpn | 0 | 38.341120 |
| ssdlite_mobilenet_v2_coco | ssdlite,mobilenet-v2,vcoco | 0 | 24.281540 |
| yolo_v3_coco | yolo-v3 | 1 | 28.532508 |

Each model can be selected by adding the --dep_add_tags.weights=<tags> flag when running a customized command for the container. For example, to run inference on the quantized SSD-MobileNet model, add --dep_add_tags.weights=ssd-mobilenet,quantized; to run inference on the YOLO model, add --dep_add_tags.weights=yolo; and so on.
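For example, to benchmark SSD-ResNet50-FPN from the table above, only the weights tags in the run command from the Usage section need to change (a sketch):

$ docker run --env-file ${CK_REPOS}/ck-mlperf/docker/${CK_IMAGE}/env.list --rm ctuning/${CK_IMAGE} \
        "ck run program:mlperf-inference-vision --cmd_key=direct \
        --env.CK_LOADGEN_EXTRA_PARAMS='--count 50' \
        --env.CK_METRIC_TYPE=COCO \
        --env.CK_LOADGEN_SCENARIO=SingleStream \
        --env.CK_LOADGEN_MODE='--accuracy' \
        --dep_add_tags.weights=ssd,resnet50,fpn \
        --dep_add_tags.lib-tensorflow=vcpu --env.CUDA_VISIBLE_DEVICES=-1 --env.CK_LOADGEN_BACKEND=tensorflow \
        --env.CK_LOADGEN_REF_PROFILE=default_tf_object_det_zoo \
        --skip_print_timers"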

Flags

| Env Flag | Possible Values | Default Value | Description |
|---|---|---|---|
| --env.CK_LOADGEN_BACKEND | tensorflow | tensorflow | |
| --env.CK_LOADGEN_REF_PROFILE | mobilenet-tf, default_tf_object_det_zoo, default_tf_trt_object_det_zoo, tf_yolo, tf_yolo_trt | mobilenet-tf | The "LoadGen profile", which combines aspects of the model and the backend |
| --env.CK_LOADGEN_SCENARIO | SingleStream, Offline, MultiStream | SingleStream | The LoadGen testing scenario |
| --env.CK_LOADGEN_MODE | "--accuracy", "" | "--accuracy" | The LoadGen mode; an empty string stands for Performance mode |
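For example, to measure performance (rather than accuracy) under the Offline scenario, the corresponding flags in any of the commands above would become (a sketch):

        --env.CK_LOADGEN_SCENARIO=Offline \
        --env.CK_LOADGEN_MODE='' \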

Benchmark models individually

When you run inference using ck run, the results get printed to the terminal but not saved (and some are not even printed). You can use ck benchmark to save the results on the host system as CK experiment entries (JSON files).

Let's set up a variable that points to the directory on the host computer where the experiments will be collected, making sure $USER has write access to it:

$ export CK_EXPERIMENTS_DIR=/data/$USER/mlperf-inference-vision-experiments

$ mkdir -p ${CK_EXPERIMENTS_DIR}

When running ck benchmark via Docker, we map the internal output directory to $CK_EXPERIMENTS_DIR on the host to make the results easier to access (using parameters for the custom YOLO v3 model for a change):

$ docker run --env-file ${CK_REPOS}/ck-mlperf/docker/${CK_IMAGE}/env.list \
        --user=$(id -u):1500 --volume ${CK_EXPERIMENTS_DIR}:/home/dvdt/CK_REPOS/local/experiment \
        --rm ctuning/${CK_IMAGE} \
        "ck benchmark program:mlperf-inference-vision --cmd_key=direct --repetitions=1 \
        --env.CK_LOADGEN_EXTRA_PARAMS='--count 50' \
        --env.CK_METRIC_TYPE=COCO \
        --env.CK_LOADGEN_SCENARIO=SingleStream \
        --env.CK_LOADGEN_MODE='--accuracy' \
        --dep_add_tags.weights=yolo-v3 \
        --dep_add_tags.lib-tensorflow=vcpu \
        --env.CK_LOADGEN_BACKEND=tensorflow \
        --env.CK_LOADGEN_REF_PROFILE=tf_yolo_trt \
        --record --record_repo=local \
        --record_uoa=mlperf.open.object-detection.cpu.yolo_v3_coco.singlestream.accuracy \
        --tags=mlperf,open,object-detection,cpu,yolo_v3_coco,singlestream,accuracy \
        --skip_print_timers --skip_stat_analysis --process_multi_keys"

Docker parameters

  • --env-file: the path to the env.list file, which is usually located in the same folder as the Dockerfile. (Currently, the env.list files are identical for all the images.)
  • --volume: a folder with read/write permissions for the user that serves as shared space ("volume") between the host and the container.
  • --user: your user id on the host system and a fixed group id (1500) needed to access files in the container.

Gory details

We ask you to launch a container with --user=$(id -u):1500, where $(id -u) gives your user id on the host system and 1500 is the fixed group id of the dvdtg group in the image. We also ask you to mount a folder with read/write permissions with --volume=<folder_for_results>. This folder gets mapped to the /home/dvdt/CK_REPOS/local/experiment folder in the image. While the experiment folder belongs to the dvdt user, it is made accessible to the dvdtg group. Therefore, you can retrieve the results of a container run under your user id from this folder.
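After a run completes, the experiment entries should be accessible under your user id on the host, e.g.:

$ ls -la ${CK_EXPERIMENTS_DIR}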

CK parameters

  • --dep_add_tags.lib-tensorflow: specify vsrc to use TensorFlow built from sources (its execution can be controlled via flags), or vprebuilt to use prebuilt TensorFlow on the CPU.
  • --dep_add_tags.weights: specify the tags for a particular model.
  • --repetitions: the number of times to run an experiment (3 by default); we typically use --repetitions=1 for experiments that measure accuracy and e.g. --repetitions=10 for experiments that measure performance (see the performance-mode sketch after this list).
  • --record, --record_repo=local: must be present to have the results saved in the mounted volume.
  • --record_uoa: a unique name for each CK experiment entry; here, mlperf.open.object-detection is the common prefix for all experiments, cpu is the TensorFlow backend, yolo_v3_coco is unique to each model, singlestream is the LoadGen scenario, and accuracy indicates the accuracy mode.
  • --tags: specify the comma-separated tags for each CK experiment entry; we typically use parts of the experiment entry name.
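Putting these parameters together, a performance-mode counterpart of the accuracy experiment above might look as follows (a sketch; note the empty CK_LOADGEN_MODE, --repetitions=10, and the performance suffix in the entry name and tags):

$ docker run --env-file ${CK_REPOS}/ck-mlperf/docker/${CK_IMAGE}/env.list \
        --user=$(id -u):1500 --volume ${CK_EXPERIMENTS_DIR}:/home/dvdt/CK_REPOS/local/experiment \
        --rm ctuning/${CK_IMAGE} \
        "ck benchmark program:mlperf-inference-vision --cmd_key=direct --repetitions=10 \
        --env.CK_LOADGEN_EXTRA_PARAMS='--count 50' \
        --env.CK_METRIC_TYPE=COCO \
        --env.CK_LOADGEN_SCENARIO=SingleStream \
        --env.CK_LOADGEN_MODE='' \
        --dep_add_tags.weights=yolo-v3 \
        --dep_add_tags.lib-tensorflow=vcpu \
        --env.CK_LOADGEN_BACKEND=tensorflow \
        --env.CK_LOADGEN_REF_PROFILE=tf_yolo_trt \
        --record --record_repo=local \
        --record_uoa=mlperf.open.object-detection.cpu.yolo_v3_coco.singlestream.performance \
        --tags=mlperf,open,object-detection,cpu,yolo_v3_coco,singlestream,performance \
        --skip_print_timers --skip_stat_analysis --process_multi_keys"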

Explore design space

Putting this all together, we provide a shell script which can be found under:

$ ck find script:mlperf-inference-v0.5.open.object-detection

The script launches a full design space exploration over all the object detection models and available TensorFlow backends.

Analyze the results

Copy the results to a machine for analysis

Once you have accumulated some experiment entries in <folder_for_results>, you can zip them:

$ cd <folder_for_results>
$ zip -rv <file_with_results>.zip {.cm,*}
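One way to copy the archive to the analysis machine (assuming SSH access; the host name here is hypothetical):

$ scp <file_with_results>.zip user@analysis-host:~/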

Having copied <file_with_results>.zip to the machine where you would like to analyze the results, create a new repository there with a placeholder for experiment entries:

$ ck add repo:object-detection-tf-py-experiments --quiet
$ ck add object-detection-tf-py-experiments:experiment:dummy --common_func
$ ck rm  object-detection-tf-py-experiments:experiment:dummy --force

or:

$ ck add repo:object-detection-tf-py-experiments --quiet
$ ck create_entry --data_uoa=experiment --data_uid=bc0409fb61f0aa82 \
--path=`ck find repo:object-detection-tf-py-experiments`

and, finally, extract the results:

$ unzip <file_with_results>.zip -d `ck find repo:object-detection-tf-py-experiments`/experiment
$ ck list object-detection-tf-py-experiments:experiment:*
...

Visualize the results via Jupyter

TBC
