Check a prototype of the second version of this platform, being developed by cKnowledge.org in collaboration with MLCommons.
[ Project overview, Reddit discussion, Android app, Chrome add-on, All CK components on GitHub ]
Track the behavior of diverse computational systems
Live dashboards with reproduced results to compare computational systems (trading off speed, accuracy, energy, and costs).
Aggregated MLPerf™ inference benchmark results
All published results
All CK components to automate SysML research, including the MLPerf benchmark
Adaptive CK containers with customizable workflows
Portable solutions for collaborative and reproducible benchmarking, including MLPerf (see the sketch after this list)
Check the Jupyter notebook with a demo of automated design space exploration (DSE) of ML/SW/HW stacks
Check the CK dashboard with the above results
Participate in ML crowd-benchmarking
Test MLPerf object detection in a browser
See solutions with live tests and demos
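The portable solutions above can also be driven programmatically. The snippet below is a minimal sketch of launching a CK program pipeline through the public ck.kernel.access() API; the program name "object-detection-demo" is a hypothetical placeholder, and the real name depends on which CK repositories you have pulled.

```python
# Minimal sketch: running a CK program pipeline from Python.
# Assumes the CK framework is installed (pip install ck) and that a
# repository containing the program has been pulled; the program name
# "object-detection-demo" is a hypothetical placeholder.
import ck.kernel as ck

r = ck.access({'action': 'run',
               'module_uoa': 'program',
               'data_uoa': 'object-detection-demo'})
if r['return'] > 0:
    ck.err(r)  # print the CK error message and exit
print('Benchmark finished, return code:', r['return'])
```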
Track scientific papers with reproduced results
Reproduced papers with portable workflows
Reproduced papers during artifact evaluation
Live reproducible papers and graphs
All papers
Our reproducibility initiatives for computational science
Track reusable components to build, benchmark and optimize ML systems
Online search
Portable meta-packages (models, datasets, frameworks, tasks)
ML models
Software detection components
Programs (portable CK pipelines)
AI/ML only
Reusable automation actions for scientific research and MLOps
Python modules to abstract continuously evolving artifacts (software, hardware, datasets, models); see the API sketch below
All shared components
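As a minimal sketch of how these Python modules expose shared components, the example below queries installed CK repositories through the public ck.kernel.access() entry point; the tag "mlperf" is an assumption, and the results depend on which repositories have been pulled.

```python
# Minimal sketch: searching shared CK components via the Python API.
# Assumes the CK framework is installed (pip install ck); the tag
# "mlperf" is an assumption and may not match any locally pulled repositories.
import ck.kernel as ck

r = ck.access({'action': 'search',
               'module_uoa': 'program',
               'tags': 'mlperf'})
if r['return'] > 0:
    ck.err(r)  # print the CK error message and exit

# Each entry describes one shared component: its repository, module, and data name.
for entry in r['lst']:
    print(entry['repo_uoa'] + ':' + entry['module_uoa'] + ':' + entry['data_uoa'])
```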