[ Project overview, Reddit discussion, Android app, Chrome add-on, All CK components on GitHub ]

Our goal is to help researchers and practitioners find and test portable workflows with reusable artifacts (models, data sets, frameworks, libraries, tools) for emerging deep tech (AI, ML, quantum) and to make them easier to adopt in practice, as described in this journal article. Such workflows can be connected to CK dashboards to collaboratively validate and reproduce results from published papers as part of ongoing reproducibility initiatives.

Check Docker image with all AI and ML components (workflows, packages, automation actions)

At our users' suggestion, we collected all portable AI and ML workflows in one GitHub repository and in this adaptive CK Docker container.

You can start it as follows:

docker run --rm -it ctuning/ck-ai:ubuntu-20.04

You can then prepare and run portable AI/ML workflows and program pipelines (including the MLPerf inference benchmark automation).

Check CK solutions to automate AI/ML/SW/HW benchmarking and optimization

Check examples of CK dashboards

Check Adaptive CK containers with portable workflows and reusable artifacts

Participate in collaborative ML and Systems benchmarking

Preparing, submitting, and reproducing ML benchmarking results is a tedious, ad hoc, and time-consuming process. Check these CK solutions to learn how to automatically install and run real applications from the MLPerf benchmark on your platform. You can then participate in collaborative benchmarking to validate existing MLPerf results and submit new ones using these live CK scoreboards. You can also see all the dependencies on reusable components from this portal that are required to assemble this portable solution.

Test the above object detection solution in your browser

Besides browsing benchmarking results, we also want to test research techniques on real data sets: you can try the above object detection solution directly in your browser.

See other reproduced results (AI, ML, quantum, IoT)

Follow this link to find reproduced results from the open competitions, reproducible hackathons, and collaborative benchmarking efforts we have helped organize since 2015.

Check our concept of a live research paper

See the live paper with reusable workflows for applying machine learning to compilers (our collaboration with the Raspberry Pi Foundation).

Create your own dashboard for crowd-benchmarking

Please follow this documentation to learn how to create your own customized dashboard for crowd-benchmarking and live research papers.

Create your own portable CK solution for crowd-benchmarking

See the preliminary documentation (MLPerf benchmark automation example).