One of our goals is to help researchers and practitioners find
reproducible AI, ML, and systems papers with portable workflows
and reusable artifacts (models, data sets, frameworks, libraries, tools)
and quickly build upon them or deploy them in production.
Furthermore, such workflows can be connected to allow everyone to participate in collaborative validation, benchmarking, and optimization of novel techniques from published papers, thus supporting reproducible research.
Participate in collaborative ML & systems benchmarking
Preparing, submitting, and reproducing ML benchmarking results is a tedious, ad hoc, and time-consuming process.
Check these CK solutions to learn how to automatically install and run real applications from the MLPerf benchmark on your platform in a few relatively simple steps:
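For illustration, here is a minimal sketch of such steps using the CK Python API (ck.access); the repository name, package tags, and program name below are only assumptions and may differ from the actual CK solutions linked above:

    # Minimal sketch: pull a CK repository, install dependencies, and run
    # one MLPerf application via the CK Python API.
    # NOTE: 'ck-mlperf', the package tags, and the program name are illustrative.
    import ck.kernel as ck

    def run_step(request):
        # Every CK action returns a dictionary; 'return' > 0 signals an error.
        result = ck.access(request)
        if result['return'] > 0:
            ck.err(result)
        return result

    # 1) Pull a CK repository with MLPerf automation recipes (illustrative name).
    run_step({'action': 'pull', 'module_uoa': 'repo',
              'data_uoa': 'ck-mlperf', 'out': 'con'})

    # 2) Install the required software dependencies (illustrative tags).
    run_step({'action': 'install', 'module_uoa': 'package',
              'tags': 'lib,tflite', 'out': 'con'})

    # 3) Compile and run one MLPerf application on the local platform
    #    (illustrative program name).
    run_step({'action': 'compile', 'module_uoa': 'program',
              'data_uoa': 'object-detection-tflite', 'out': 'con'})
    run_step({'action': 'run', 'module_uoa': 'program',
              'data_uoa': 'object-detection-tflite', 'out': 'con'})

The same steps are usually also available through the ck command line, which the portable solutions on this portal wrap for different platforms.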
You can then participate in collaborative benchmarking to validate existing MLPerf results and submit new ones using these live CK scoreboards.
You can also see all the dependencies on reusable components from this portal that are required to assemble this portable solution.
Test the above object detection solution in practice
Besides looking at benchmarking results, we also want to test research techniques on real data sets.
You can check how the above object detection solution works in practice directly in your browser.
See other reproduced results (AI, ML, quantum, IoT)
Follow this link to find reproduced results from open competitions, reproducible hackathons, and collaborative benchmarking efforts that we helped to organize.
Check our concept of a live research paper
See the live paper with reusable workflows for applying machine learning to compilers (our collaboration with the Raspberry Pi Foundation).
Create your own dashboard for crowd-benchmarking
Please follow this documentation to learn how to create your own customized dashboard for crowd-benchmarking and live research papers.
Create your own portable CK solution for crowd-benchmarking