Check out a prototype of the 2nd version of this platform, being developed by cKnowledge.org in collaboration with MLCommons.
[ Project overview, Reddit discussion, Android app, Chrome add-on, our reproducibility initiatives ]

Docker image with all AI and ML components (workflows, packages, automation actions)

We have collected all portable AI and ML workflows in one
GitHub repository, as suggested by our users.

You can start this container as follows:

docker run --rm -it ctuning/ck-ai:ubuntu-20.04

Inside the container, you can then prepare and run portable AI/ML workflows and program pipelines (including the MLPerf inference benchmark automation).
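For example, once inside the container, you can discover and run the shared workflows via the standard CK command line. This is only a sketch: the tag below is illustrative, and the available programs depend on the CK repositories bundled in the image.

ck list program                     # list all shared CK programs
ck search program --tags=mlperf     # find programs by tags (tag is illustrative)
ck run program:<selected-program>   # prepare and run a chosen workflow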

CK solutions to automate AI/ML/SW/HW benchmarking and optimization

CK dashboards for collaborative experimentation

Adaptive CK containers with portable workflows and reusable artifacts

Participate in collaborative ML and Systems benchmarking

Preparing, submitting, and reproducing ML benchmarking results is a tedious, ad hoc, and time-consuming process. Check these CK solutions to learn how to automatically install and run real applications from the MLPerf benchmark on your platform. You can then participate in collaborative benchmarking to validate existing MLPerf results and submit new ones using these live CK scoreboards. You can also see all the dependencies on reusable components from this portal that are required to assemble this portable solution.
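As a rough sketch, the flow on a Linux host with Python looks like this. The ck-mlperf repository is real; the package tags and program name below are examples, so check the linked CK solutions for the exact ones to use on your platform.

python3 -m pip install ck                 # install the CK framework
ck pull repo:ck-mlperf                    # pull the MLPerf automation repo with its dependencies
ck install package --tags=model,tflite    # resolve a model dependency (tags are illustrative)
ck benchmark program:image-classification-tflite --repetitions=1   # example MLPerf inference workflow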

Test the above object detection solution in your browser

Besides looking at benchmarking results, we also want to test research techniques in practice on real data sets. You can check how the above object detection solution works in practice directly in your browser.

See other reproduced results (AI, ML, quantum, IoT)

Follow this link to find reproduced results from open competitions, reproducible hackathons, and collaborative benchmarking efforts that we have helped to organize since 2015.

Check our concept of a live research paper

See the live paper with reusable workflows for applying machine learning to compilers (our collaboration with the Raspberry Pi Foundation).

Create your own dashboard for crowd-benchmarking

Please follow this documentation to learn how to create your own customized dashboard for crowd-benchmarking and live research papers.
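As a hypothetical local setup, a dashboard is served by the CK web module on top of experiment entries. The repository names below are real CK repos, while the scenario name is a placeholder you define yourself; double-check the flags against the documentation above.

ck pull repo:ck-web                           # CK web server with dashboard plugins
ck pull repo:ck-analytics                     # "experiment" module that stores results
ck start web                                  # serve CK dashboards locally (default port 3344)
ck display dashboard --scenario=my-scenario   # open your custom dashboard (placeholder name)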

Create your own portable CK solution for crowd-benchmarking

See the preliminary documentation (MLPerf benchmark automation example).
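A minimal sketch using the cbench client from PyPI, assuming the object detection solution mentioned above; the solution name is copied from the portal and may change, so treat these commands as illustrative and follow the documentation for the current ones.

python3 -m pip install cbench                                                    # client for cKnowledge.io solutions
cb init demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows       # initialize the portable solution
cb benchmark demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows  # run it and share results on the live scoreboard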