program:object-detection-onnx-py (v3.0.0)
Copyright: See copyright in the source repository
License: See license in the source repository
Creation date: 2019-06-03
Source: GitHub
cID: b0ac08fe1d3c2615:4bc385394b7a9350

Don't hesitate to get in touch if you encounter any issues or would like to discuss this community project!

Description  

This portable workflow is our attempt to provide a common CLI with a Python JSON API and a JSON meta description that automatically detects or installs the required components (models, datasets, libraries, frameworks, tools), and then builds, runs, validates, benchmarks and auto-tunes the associated program across diverse models, datasets, compilers, platforms and environments. Our ongoing goal is to make the onboarding process as simple as possible via this platform. Please check the CK white paper and don't hesitate to contact us if you have suggestions or feedback!
  • Automation framework: CK
  • Development repository: ck-ml
  • Source: GitHub
  • Available command lines:
    • ck run program:object-detection-onnx-py --cmd_key=default (META)
  • Support for host OS: any
  • Support for target OS: any
  • Tags: object-detection,onnx,lang-python
  • Template: Object Detection via ONNX (Python)
  • How to get the stable version via the client:
    pip install cbench
    cb download program:object-detection-onnx-py --version=3.0.0 --all
    ck run program:object-detection-onnx-py
  • How to get the development version:
    pip install ck
    ck pull repo:ck-ml
    ck run program:object-detection-onnx-py

  • CLI and Python API: module:program
  • Dependencies    

    ReadMe  

    MLPerf Inference - Object Detection - ONNX

    Installation

    Collective Knowledge (CK)

    $ python3 -m pip install ck --user
    

    CK repositories

    $ ck pull repo --url=https://github.com/krai/ck-mlperf
    

    ONNX library and runtime

    $ ck install package --tags=python-package,onnx
    $ ck install package --tags=python-package,onnxruntime
    

    Models

    SSD-ResNet34

    $ ck install package --tags=model,onnx,mlperf,ssd-resnet,downloaded
    

    SSD-MobileNet-v1

    $ ck install package --tags=model,onnx,mlperf,ssd-mobilenet,downloaded
    

    Datasets

NB: Preprocessing the images with OpenCV gives better accuracy than preprocessing with Pillow.

    SSD-ResNet34

    $ ck install package --tags=dataset,object-detection,preprocessed,full,side.1200
    

    SSD-MobileNet-v1

    $ ck install package --tags=dataset,object-detection,preprocessed,full,side.300
    

    Inference

    Parameters

    CK_BATCH_COUNT

    The number of images to be processed.

    Default: 1.

    CK_SKIP_IMAGES

    The number of images to skip.

    Default: 0.
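
    Taken together, these two variables select a contiguous slice of the dataset: skip the first CK_SKIP_IMAGES images, then process the next CK_BATCH_COUNT. A minimal sketch of how a program can honour them (the helper name select_images is hypothetical; the actual detect.py may read these variables differently):

    ```python
    import os

    def select_images(all_images):
        """Return the slice of the dataset implied by CK_BATCH_COUNT and
        CK_SKIP_IMAGES, using the documented defaults when unset."""
        batch_count = int(os.environ.get("CK_BATCH_COUNT", 1))  # default: 1
        skip_images = int(os.environ.get("CK_SKIP_IMAGES", 0))  # default: 0
        return all_images[skip_images:skip_images + batch_count]

    # Example: skip 2 images, then process the next 3.
    images = [f"img{i:03d}.jpg" for i in range(10)]
    os.environ["CK_BATCH_COUNT"] = "3"
    os.environ["CK_SKIP_IMAGES"] = "2"
    print(select_images(images))  # prints ['img002.jpg', 'img003.jpg', 'img004.jpg']
    ```

    In an actual run these variables are passed on the command line via --env.CK_BATCH_COUNT=... and --env.CK_SKIP_IMAGES=..., as in the examples below.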

    Models

    SSD-ResNet34

    50 images

    $ ck run program:object-detection-onnx-py --skip_print_timers \
    --dep_add_tags.dataset=preprocessed,using-opencv,side.1200 \
    --dep_add_tags.weights=ssd-resnet \
    --env.CK_BATCH_COUNT=50
    ...
    Convert results to coco ...
    
    Evaluate metrics as coco ...
    loading annotations into memory...
    Done (t=0.53s)
    creating index...
    index created!
    Loading and preparing results...
    DONE (t=0.03s)
    creating index...
    index created!
    Running per image evaluation...
    Evaluate annotation type *bbox*
    DONE (t=0.99s).
    Accumulating evaluation results...
    DONE (t=0.32s).
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.256
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.450
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.255
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.153
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.420
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.389
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.258
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.363
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.381
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.210
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.517
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.485
    
    Summary:
    -------------------------------
    All images loaded in 1.812857s
    Average image load time: 0.036257s
    All images detected in 53.678096s
    Average detection time: 1.071682s
    Total NMS time: 0.000000s
    Average NMS time: 0.000000s
    mAP: 0.2555006861214358
    Recall: 0.38062334131440473
    --------------------------------
    

    5,000 images

    $ ck run program:object-detection-onnx-py --skip_print_timers \
    --dep_add_tags.dataset=preprocessed,using-opencv,side.1200 \
    --dep_add_tags.weights=ssd-resnet \
    --env.CK_BATCH_COUNT=5000
    ...
    Convert results to coco ...
    
    Evaluate metrics as coco ...
    loading annotations into memory...
    Done (t=0.45s)
    creating index...
    index created!
    Loading and preparing results...
    DONE (t=6.37s)
    creating index...
    index created!
    Running per image evaluation...
    Evaluate annotation type *bbox*
    DONE (t=89.64s).
    Accumulating evaluation results...
    DONE (t=14.66s).
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.200
     Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.381
     Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.183
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.119
     Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.257
     Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.233
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.200
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.321
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.344
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.174
     Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.406
     Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.416
    
    Summary:
    -------------------------------
    All images loaded in 176.739452s
    Average image load time: 0.035348s
    All images detected in 5474.896789s
    Average detection time: 1.094935s
    Total NMS time: 0.000000s
    Average NMS time: 0.000000s
    mAP: 0.19952640873605498
    Recall: 0.343745110610767
    --------------------------------
    

    SSD-MobileNet-v1

    50 images

    $ ck run program:object-detection-onnx-py --skip_print_timers \
    --dep_add_tags.dataset=preprocessed,using-opencv,side.300 \
    --dep_add_tags.weights=ssd-mobilenet \
    --env.CK_BATCH_COUNT=50
    ...
    executing code ...
    Traceback (most recent call last):
      File "../detect.py", line 16, in <module>
        from coco_helper import (load_preprocessed_batch, image_filenames, original_w_h,
      File "/home/anton/CK/ck-mlperf/soft/lib.python.coco-helper/coco_helper/__init__.py", line 70, in <module>
        ) or os.environ['ML_MODEL_CLASS_LABELS']
      File "/usr/local/lib/python3.7/os.py", line 681, in __getitem__
        raise KeyError(key) from None
    KeyError: 'ML_MODEL_CLASS_LABELS'
    
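
    The KeyError above simply means the ML_MODEL_CLASS_LABELS environment variable was not set when the SSD-MobileNet model's environment was resolved: os.environ[key] raises KeyError for a missing key, whereas os.environ.get returns a default. A sketch of the failure mode and a more readable check (the helper require_env is hypothetical, not the actual coco_helper code):

    ```python
    import os

    def require_env(key):
        """Fail with an actionable message instead of a bare KeyError
        when a required environment variable is missing."""
        value = os.environ.get(key)
        if value is None:
            raise RuntimeError(
                f"{key} is not set; check that the model package "
                "registered its environment correctly."
            )
        return value

    # os.environ['ML_MODEL_CLASS_LABELS'] with the variable unset raises
    # KeyError: 'ML_MODEL_CLASS_LABELS' -- exactly the traceback above.
    os.environ["ML_MODEL_CLASS_LABELS"] = "/path/to/labels.txt"  # example value
    print(require_env("ML_MODEL_CLASS_LABELS"))  # prints /path/to/labels.txt
    ```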
