This document (V20190727) provides guidelines for submitting your artifact for evaluation at a range of CS conferences and journals. It gradually evolves to define a common submission methodology based on our past Artifact Evaluations, open discussions, the ACM reviewing and badging policy (which we contributed to as part of the ACM taskforce on reproducibility), artifact-eval.org and your feedback (2018a, 2018b, 2017a, 2017b, 2014).


What to expect

We aim to formalize and unify artifact submission while keeping it relatively simple. You will need to pack your artifacts (code and data) using any publicly available tool. In exceptional cases when rare hardware or proprietary software is used, you can arrange remote access to a machine with the pre-installed software. You then need to prepare a small and informal Artifact Appendix using our AE LaTeX template (now used by ASPLOS, SML, CGO, PPoPP, Supercomputing, PACT, IA3, RTSS, ReQuEST and other ACM/IEEE conferences and workshops) to explain to evaluators what your artifact is and how to validate it. You will normally be allowed to add up to 2 pages of this Appendix to your final camera-ready paper. You will need to add this appendix to your paper and submit it to the AE submission website of the given event. You can find examples of such AE appendices in the following papers: SysML'19, ReQuEST-ASPLOS'18 (associated experimental workflow), CGO'17, PPoPP'16, SC'16. Note that since your paper is already accepted, artifact submission is single-blind, i.e. you can add the author names to your PDF!

Please do not forget to check the artifact reviewing guidelines to understand how your artifact will be evaluated. At the end of the process, you will receive a report with an overall assessment of your artifact and a set of ACM reproducibility badges.

Since our eventual goal is to promote collaborative and reproducible research, we see AE as a cooperative process between authors and reviewers to validate shared artifacts rather than a way of naming and shaming problematic artifacts. We therefore allow continuous and anonymous communication between authors and reviewers via HotCRP to fix raised issues until a given artifact can pass evaluation or until a major issue is detected.

Preparing artifacts for submission

You need to perform the following steps to submit your artifact for evaluation:
  1. Prepare an experimental workflow.

    You can skip this step if you just want to make your artifacts publicly available without validation of experimental results.

    You need to provide at least some scripts or Jupyter Notebooks to prepare and run experiments, as well as to report and validate the results (see the minimal sketch at the end of this step).

    Note that we are developing an open-source Collective Knowledge Framework (CK) to automate and standardize the evaluation of artifacts. CK helps to reuse various automation tasks, data sets, models, programs and portable workflows already shared by the community (see CK artifacts and workflows from reproduced papers). If you would like to use it for your submission, please check this guide or get in touch with the CK community for assistance.
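
    As an illustration, here is a minimal run-and-validate sketch in Python; the command, metric names, reference values and tolerance are hypothetical placeholders, and an equivalent shell script, Jupyter Notebook or CK workflow serves the same purpose:

```python
#!/usr/bin/env python3
"""Minimal sketch of a run-and-validate script.
All names, metrics and thresholds below are hypothetical examples."""

import json
import subprocess
import sys

# Hypothetical: the benchmark binary produced by your build step.
EXPERIMENT_CMD = ["./bin/benchmark", "--input", "data/sample.json"]

# Hypothetical reference results shipped with the artifact.
EXPECTED = {"speedup": 2.1}
TOLERANCE = 0.10  # accept +/-10% deviation from the reported numbers


def run_experiment():
    """Run the experiment and parse its JSON output."""
    out = subprocess.run(EXPERIMENT_CMD, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)


def validate(results):
    """Compare measured metrics against the expected values."""
    ok = True
    for key, expected in EXPECTED.items():
        measured = results.get(key)
        if measured is None or abs(measured - expected) > TOLERANCE * expected:
            print(f"MISMATCH: {key}: measured={measured}, expected~{expected}")
            ok = False
        else:
            print(f"OK: {key}: measured={measured}, expected~{expected}")
    return ok


if __name__ == "__main__":
    sys.exit(0 if validate(run_experiment()) else 1)
```

    Printing a clear OK/MISMATCH verdict (and returning a non-zero exit code on failure) makes it much easier for evaluators to check your results without digging through raw logs.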

  2. Pack your artifact (code and data) or provide easy access to it using any publicly available and free tool that you prefer or strictly require.

    For example, you can share it via a public code or data repository, or package it as a downloadable archive (see the sketch right after this item).
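
    A minimal packing sketch in Python is shown below; the directory and archive names are hypothetical, and any equivalent archiving tool or hosting service works just as well:

```python
#!/usr/bin/env python3
"""Minimal sketch for packing an artifact and recording its checksum.
Directory and file names are hypothetical examples."""

import hashlib
import tarfile

ARTIFACT_DIR = "my-artifact"          # code, data, scripts, README
ARCHIVE = "my-artifact-v1.0.tar.gz"   # what you upload or link to


def pack(src_dir: str, archive: str) -> None:
    """Create a compressed archive of the artifact directory."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=src_dir)


def sha256(path: str) -> str:
    """Compute a checksum so evaluators can verify the download."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    pack(ARTIFACT_DIR, ARCHIVE)
    print(ARCHIVE, sha256(ARCHIVE))
```

    Recording the checksum in your artifact abstract lets evaluators confirm that they downloaded exactly the version you submitted.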
  3. Write a brief artifact abstract with a SW/HW checklist to informally describe your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated and what the expected result is. In particular, stress if you use any proprietary software or hardware: this is critical to help AE chairs select appropriate reviewers! If you use proprietary benchmarks or tools (SPEC, Intel compilers, etc.), we suggest that you provide a simplified test case based on open-source software so that reviewers can quickly validate the functionality of your experimental workflow. See the sketch right after this item.
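
    For illustration only, the sketch below records such a checklist in a machine-readable file; all field names and values are hypothetical examples rather than the official template fields:

```python
#!/usr/bin/env python3
"""Minimal sketch of a machine-readable SW/HW checklist to accompany the
artifact abstract. Field names and values are illustrative only."""

import json

checklist = {
    "program": "open-source benchmark (simplified test case included)",
    "data_set": "sample inputs included in the archive (~100 MB)",
    "hardware": "x86_64 server with >= 16 GB RAM",
    "software": "Ubuntu 18.04, GCC 7+, Python 3.6+",
    "proprietary_dependencies": "none",   # stress these if you have any!
    "disk_space_required": "~2 GB",
    "time_to_prepare_workflow": "~30 minutes",
    "time_to_run_experiments": "~4 hours",
    "expected_result": "speedups within 10% of those reported in the paper",
    "public_availability": "yes (URL or DOI of the archive)",
    "license": "BSD-3-Clause",
}

with open("artifact-checklist.json", "w") as f:
    json.dump(checklist, f, indent=2)
```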

  4. Fill in the AE template (download here) and append it to the PDF of your (accepted) paper. Though it should be relatively intuitive, we still strongly suggest that you check the extra notes about how to fill in this template based on our past AE experience.

  5. Submit the artifact abstract and the new PDF at the AE submission website provided by the event.
If you encounter problems, notice any ambiguities or have any questions, do not hesitate to get in touch with the AE community via the dedicated AE Google group.

If accepted

You will need to add up to 2 pages of your AE appendix to your camera-ready paper, removing all unnecessary or confidential information. This will help readers better understand what was evaluated. If your paper is published in the ACM Digital Library, you do not need to add reproducibility stamps yourself - ACM will add them to your camera-ready paper! In other cases, the AE chairs will tell you how to add a stamp to your paper.

Sometimes artifact evaluation helps discover minor mistakes in the accepted paper - in such cases, you have a chance to add related notes and corrections to the Artifact Appendix of your camera-ready paper.

A few artifact examples from past conferences, workshops and journals

Methodology archive

We keep track of past submission and reviewing methodologies so that readers can understand which one was used for papers with evaluated artifacts. Also see the original AE procedures for programming language conferences.