Go directly to the demo of a portable ML solution (object detection, TF, MobileNets, COCO)

MLPerf crowd-benchmarking | Webcam-based live test | ML/SW/HW autotuning scoreboard


One of our goals is to help researchers and practitioners find reproducible AI, ML, and systems papers with portable workflows and reusable artifacts (models, data sets, frameworks, libraries, tools), and quickly build upon them or deploy them in production. Furthermore, such workflows can be connected to live CK scoreboards so that everyone can participate in the collaborative validation, benchmarking, and optimization of novel techniques from published papers, thus supporting reproducibility initiatives at scientific conferences.

Note that the CK platform is a prototype and there is still a lot to be done. Feel free to get in touch if you have suggestions or encounter issues!

You can try some practical CK use cases from our partners:

Participate in collaborative ML & systems benchmarking

Preparing, submitting, and reproducing ML benchmarking results is a tedious, ad-hoc, and time-consuming process. Check these CK solutions to learn how to automatically install and run real applications from the MLPerf benchmark on your platform in a few relatively simple steps (see the sketch below). You can then participate in collaborative benchmarking to validate existing MLPerf results and submit new ones using these live CK scoreboards. You can also see all the dependencies on reusable components from this portal that are required to assemble this portable solution.
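For illustration only, here is a minimal Python sketch of the kind of steps such a solution automates, using the CK Python API (assuming the CK framework is installed with "pip install ck"). The repository name ck-mlperf, the package tags, and the program name object-detection-tf-py are illustrative and may differ from the exact solution you pick on the portal:

```python
# A minimal sketch, not the exact solution: pull a CK repository, install a model
# package by tags, and run a benchmark program via the CK Python API.
import ck.kernel as ck

def run(step):
    r = ck.access(step)       # every CK action takes and returns a dictionary
    if r['return'] > 0:
        ck.err(r)             # print the CK error message and exit
    return r

# 1) Pull a CK repository with benchmarking workflows
#    (equivalent to "ck pull repo:ck-mlperf"; the repo name is illustrative).
run({'action': 'pull', 'module_uoa': 'repo', 'data_uoa': 'ck-mlperf'})

# 2) Install a model package selected by tags
#    (equivalent to "ck install package --tags=..."; the tags are illustrative).
run({'action': 'install', 'module_uoa': 'package',
     'tags': 'model,tf,object-detection,mobilenet'})

# 3) Run the benchmark program
#    (equivalent to "ck run program:object-detection-tf-py"; the name is illustrative).
run({'action': 'run', 'module_uoa': 'program', 'data_uoa': 'object-detection-tf-py'})
```

The same steps can also be performed from the command line with the ck tool; the portal page for each solution lists the exact commands and dependencies.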

Test the above object detection solution in practice

Besides looking at benchmarking results, we also want to test research techniques on real data. You can check how the above object detection solution behaves in practice directly in your browser.

See other reproduced results (AI, ML, quantum, IoT)

Follow this link to find results reproduced during open competitions, reproducible hackathons, and collaborative benchmarking efforts that we have helped to organize since 2015.

Check our concept of a live research paper

See the live paper with reusable workflows for applying machine learning to compilers (our collaboration with the Raspberry Pi Foundation).

Create your own dashboard for crowd-benchmarking

Please follow this documentation to learn how to create your own customized dashboard for crowd-benchmarking and live research papers.
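As a rough illustration of the underlying mechanism, the Python sketch below stores one benchmarking result as a CK entry that a customized dashboard could read (assuming the CK framework is installed with "pip install ck"). The module name result, the entry name, and the metric fields are assumptions made for this example; the documentation above describes the exact structure that CK dashboards expect:

```python
# A minimal sketch, assuming the CK framework is installed ("pip install ck").
# The "result" module name, entry name, and metric fields are illustrative assumptions.
import ck.kernel as ck

def run(step):
    r = ck.access(step)
    if r['return'] > 0:
        ck.err(r)             # print the CK error message and exit
    return r

# Add one result entry with hypothetical metadata to the default local CK repository.
run({'action': 'add', 'module_uoa': 'result', 'data_uoa': 'mobilenet-v1-rpi4',
     'dict': {'latency_ms': 320, 'mAP': 0.23}})

# Load the entry back to inspect what a dashboard would see.
r = run({'action': 'load', 'module_uoa': 'result', 'data_uoa': 'mobilenet-v1-rpi4'})
print(r['dict'])
```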

Create your own portable CK solution for crowd-benchmarking

Preliminary documentation (MLPerf benchmark automation example)
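As a starting point, the Python sketch below shows how one might discover reusable CK components (here, packages) by tags when assembling such a portable solution (assuming the CK framework is installed with "pip install ck"). The tags are illustrative; the preliminary documentation above lists the actual dependencies used by the MLPerf automation example:

```python
# A minimal sketch: list reusable CK components (packages) matching some tags,
# similar to how a portable solution declares its dependencies. Tags are illustrative.
import ck.kernel as ck

r = ck.access({'action': 'search', 'module_uoa': 'package',
               'tags': 'lib,tensorflow'})
if r['return'] > 0:
    ck.err(r)                 # print the CK error message and exit

# The 'search' action returns the matching entries in the 'lst' list.
for entry in r['lst']:
    print(entry['repo_uoa'] + ':' + entry['module_uoa'] + ':' + entry['data_uoa'])
```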