Check out a prototype of the 2nd version of this platform, which is being developed by cKnowledge.org in collaboration with MLCommons.
Reproducibility initiative at MLSys'20 (the Conference on Machine Learning and Systems)

Artifacts

The list of artifacts is now available here!

Important dates

Paper decision: January 3, 2020
Artifact submission: January 15, 2020
Artifact decision: February 22, 2020
Camera-ready paper: February 28, 2020
Conference: March 2-4, 2020

Reproducibility chairs

Reproducibility committee members

  • Xavier Bouthillier (University of Montreal)
  • Ting-Wu Chin (Carnegie Mellon University)
  • Murali Emani (Argonne National Laboratory)
  • Jessica Forde (Brown University)
  • Grigori Fursin (cTuning foundation)
  • Pierre Gronlier (OVH)
  • Hervé Guillou (CodeReef & cTuning foundation)
  • Erik Hemberg (MIT)
  • Ahmet Inci (Carnegie Mellon University)
  • Sagar Karandikar (UC Berkeley)
  • Tanvir Ahmed Khan (University of Michigan)
  • Asmita Pal (University of Wisconsin-Madison)
  • Andrew Pelegris (University of Toronto)
  • Shang Wang (University of Toronto)
  • Yiwen Zhu (Microsoft)

The process

Following the successful introduction of the artifact evaluation process at MLSys'19 and the success of related initiatives at NeurIPS'19 and ASPLOS'20, where more than half of the accepted papers submitted artifacts for evaluation, we are continuing the reproducibility initiative at the Conference on Machine Learning and Systems 2020.

The Conference on Machine Learning and Systems promotes reproducibility of experimental results and encourages code and data sharing to help the community quickly validate and compare alternative approaches. Authors of accepted papers are invited to formally describe their supporting materials (code, data, models, workflows, results) using the standard Artifact Appendix template and to submit them to the Artifact Evaluation (AE) process.

Note that this submission is voluntary and will not influence the final decision regarding the papers. The goal is to have the experimental results from accepted papers validated by an independent AE Committee in a collaborative way, and to help readers find articles with available, functional, and validated artifacts. For example, the ACM Digital Library already allows one to search for papers with available artifacts and reproducible results!

Artifact submission

You need to prepare your artifacts and Artifact Appendix using the following guidelines. You can then submit your paper with the standard Artifact Appendix via the dedicated MLSys AE website before January 15, 2020. Your submission will then be reviewed according to the following review guidelines. Please do not forget to describe the required hardware, software, data sets, and models in your artifact abstract - this is essential for finding appropriate evaluators! You can find examples of Artifact Appendices in these MLSys'19 papers.
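
To give a feel for what such an appendix looks like, below is a minimal LaTeX sketch of a check-list-style Artifact Appendix. The section headings and items are illustrative assumptions modelled on the check-list format used at previous AE events; the official template linked from the submission guidelines is authoritative, and the example values (algorithm, data set, hardware, metrics) are placeholders only.

  % Sketch of an Artifact Appendix skeleton (illustrative only;
  % use the official template linked from the MLSys AE website).
  % Intended to be appended to the accepted paper's LaTeX source.
  \appendix
  \section{Artifact Appendix}

  \subsection{Abstract}
  % One paragraph: what the artifact contains and which
  % experiments from the paper it reproduces.

  \subsection{Artifact check-list (meta-information)}
  \begin{itemize}
    \item {\bf Algorithm:} e.g. distributed SGD (placeholder)
    \item {\bf Data set:} e.g. ImageNet, publicly available (placeholder)
    \item {\bf Hardware:} e.g. 4x NVIDIA V100 GPUs (placeholder)
    \item {\bf Metrics:} e.g. training throughput, top-1 accuracy (placeholder)
    \item {\bf How much time is needed to complete experiments (approximately)?}
    \item {\bf Publicly available?} Yes/No
  \end{itemize}

  \subsection{Description}
  % How to access the artifact; hardware and software dependencies.

  \subsection{Installation}

  \subsection{Experiment workflow}

  \subsection{Evaluation and expected results}
  % Which claims from the paper are supported, and with what tolerances.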

AE is run by a separate committee whose task is to assess how well the submitted artifacts support the work described in the accepted papers, based on the standard ACM Artifact Review and Badging policy.

Depending on the evaluation results, camera-ready papers will include the artifact appendix and a set of ACM stamps of approval printed on their first page:

  • artifact available
  • artifact evaluated (functional)
  • results replicated

At the end of the process we will let you know how to add these badges and the artifact appendix to your camera-ready paper.

Questions and feedback

Please check the AE FAQs and feel free to ask questions or provide your feedback and suggestions via the dedicated AE discussion group.