
Extended artifact description (V20180713)

Here we provide a few informal suggestions to help you fill in the AE template for your submission. They are based on our past Artifact Evaluations and on our personal experience crowdsourcing and reproducing experiments (2017, 2014, 2009), and are intended to help you avoid common pitfalls and reduce the reviewers' burden.

If you encounter problems, find ambiguities or have any questions, do not hesitate to get in touch with the AE community via the dedicated AE Google group.

Abstract

Briefly and informally describe your artifact, including minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. This abstract will be used to select appropriate reviewers.

Artifact check-list (meta-information)

Together with the artifact abstract, this informal check-list will help us make sure that reviewers have the appropriate competency and technology to evaluate your artifact. It can also serve as meta-information to locate your artifact in Digital Libraries (under discussion/development). It was created based on past AE experience and your feedback to cover most artifacts in computer systems research, including SW/HW co-design, benchmarking, design space exploration, autotuning, architecture simulation, run-time adaptation, and more.

Fill in whatever is applicable with informal keywords and remove unrelated items (please consider the questions below just as informal hints about what reviewers are usually concerned about):

Description

How delivered

Describe how reviewers can access your artifact. Please also state the approximate disk space required after unpacking (to avoid surprises when an artifact needs 20GB of free space). We do not impose a strict limit, but we strongly suggest keeping the artifact to a few GB and avoiding unnecessary software in your VM images.
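As an illustration only (not part of any official template), a short script along the lines of the Python sketch below, which assumes the artifact has already been unpacked into a local directory, can be used to report its approximate size:

    import os

    def artifact_size_gb(root):
        """Sum the sizes of all files under 'root' and return the total in gigabytes."""
        total = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    total += os.path.getsize(path)
        return total / (1024 ** 3)

    if __name__ == "__main__":
        # Replace "." with the path to your unpacked artifact.
        print("Unpacked artifact size: %.2f GB" % artifact_size_gb("."))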

Hardware dependencies

Describe any specific hardware and hardware features strictly required to evaluate your artifact (vendor, CPU/GPU/FPGA, number of processors/cores, interconnect, memory, hardware counters, etc.).
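If it helps, a minimal sketch such as the following, which relies only on the Python standard library, can collect a few of the basic hardware and OS facts worth listing here (it is an illustration, not a required tool):

    import os
    import platform

    def hardware_summary():
        """Collect a few basic hardware/OS facts worth listing in the artifact description."""
        return {
            "machine": platform.machine(),      # e.g. x86_64, aarch64
            "processor": platform.processor(),  # may be empty on some systems
            "logical_cores": os.cpu_count(),
            "os": platform.platform(),
        }

    if __name__ == "__main__":
        for key, value in hardware_summary().items():
            print("%-15s %s" % (key, value))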

Software dependencies

Describe any specific OS and software packages required to evaluate your artifact. This is particularly important if you share source code that has to be rebuilt, or if you rely on proprietary software that you cannot include in your package. In such cases, we strongly suggest describing where to obtain all third-party tools and how to install them.

Note that we are trying to obtain AE licenses for some commonly used proprietary tools and benchmarks (you will be informed in case of a positive outcome).
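To make the dependencies described above easy to verify, you may want to ship a small check script. The sketch below is only an illustration: the tool names in REQUIRED_TOOLS are placeholders that you would replace with your actual dependencies.

    import shutil
    import subprocess

    # Placeholder list: replace with the compilers, libraries and tools
    # your artifact actually depends on.
    REQUIRED_TOOLS = ["gcc", "cmake", "python3"]

    def check_dependencies(tools):
        """Report whether each required tool is on PATH and print its version."""
        for tool in tools:
            path = shutil.which(tool)
            if path is None:
                print("MISSING: %s (please install it before evaluation)" % tool)
                continue
            out = subprocess.run([tool, "--version"], capture_output=True, text=True)
            version = out.stdout.splitlines()[0] if out.stdout else "unknown version"
            print("FOUND:   %s -> %s (%s)" % (tool, path, version))

    if __name__ == "__main__":
        check_dependencies(REQUIRED_TOOLS)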

Data sets

If third-party data sets are not included in your package (for example, because they are very large or proprietary), please provide details on how to download and install them. For proprietary data sets, we suggest providing reviewers with a public alternative subset for evaluation.
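A download helper can simplify this step for reviewers. The sketch below is purely illustrative: DATASET_URL and DATASET_SHA256 are hypothetical placeholders that you would replace with the real location and checksum of your (possibly reduced) public data set.

    import hashlib
    import urllib.request

    # Hypothetical placeholders: substitute the real download location and
    # the SHA-256 hash of your (possibly reduced) public data set.
    DATASET_URL = "https://example.org/artifact/dataset-subset.tar.gz"
    DATASET_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def fetch_dataset(url, expected_sha256, target="dataset-subset.tar.gz"):
        """Download the data set and verify its integrity before use."""
        urllib.request.urlretrieve(url, target)
        with open(target, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != expected_sha256:
            raise RuntimeError("Checksum mismatch: got %s" % digest)
        print("Data set downloaded and verified: %s" % target)

    if __name__ == "__main__":
        fetch_dataset(DATASET_URL, DATASET_SHA256)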

Installation

Describe the installation and setup procedure for your artifact (even if you provide a VM image or access to a remote machine), targeting a novice user.
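Automating the setup helps novice users considerably. The following sketch is hypothetical: INSTALL_STEPS lists example CMake commands that you would replace with the actual build steps of your artifact.

    import subprocess
    import sys

    # Hypothetical build steps; replace them with the actual commands
    # a novice user should run after unpacking your artifact.
    INSTALL_STEPS = [
        ["cmake", "-S", ".", "-B", "build"],
        ["cmake", "--build", "build", "-j", "4"],
    ]

    def install():
        """Run each installation step and stop with a clear message on failure."""
        for step in INSTALL_STEPS:
            print("Running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                sys.exit("Installation failed at step: %s" % " ".join(step))
        print("Installation completed successfully.")

    if __name__ == "__main__":
        install()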

Experiment workflow

Describe the experiment workflow and how it is implemented, invoked and customized (if needed), e.g. via OS scripts, an IPython/Jupyter notebook, a portable CK workflow, etc. See the following example of an experimental workflow for multi-objective, machine-learning-based autotuning:

[Figure: example experimental workflow for multi-objective, machine-learning-based autotuning]
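If your workflow is driven by scripts, a simple driver such as the sketch below can make it easy to invoke. The benchmark path and configurations are hypothetical placeholders; the script merely illustrates running each configuration and recording the measurements in a JSON file.

    import json
    import subprocess
    import time

    # Hypothetical benchmark command and configurations; replace with the
    # programs and parameters your workflow actually explores.
    BENCHMARK = ["./build/benchmark"]
    CONFIGURATIONS = [{"threads": 1}, {"threads": 2}, {"threads": 4}]

    def run_experiments(output_file="results.json"):
        """Run the benchmark for each configuration and record wall-clock times."""
        results = []
        for cfg in CONFIGURATIONS:
            cmd = BENCHMARK + ["--threads", str(cfg["threads"])]
            start = time.time()
            subprocess.run(cmd, check=True)
            results.append({"config": cfg, "seconds": time.time() - start})
        with open(output_file, "w") as f:
            json.dump(results, f, indent=2)
        print("Wrote", output_file)

    if __name__ == "__main__":
        run_experiments()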
Evaluation and expected result

Describe all steps necessary to evaluate your artifact using the workflow above. Also state whether reviewers will need to replicate your results (exact match) or reproduce them (possibly with varying results or under different experimental conditions). Finally, describe the expected results as well as the allowable variation (particularly important for performance numbers and speed-ups).
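For performance results, it often helps to ship a small checker that encodes the allowable variation explicitly. The sketch below is hypothetical: it assumes the results.json format produced by the workflow sketch above, and CLAIMED_SPEEDUP / ALLOWED_RELATIVE_DEVIATION are placeholders for your own numbers.

    import json

    # Hypothetical reference values: the speed-up reported in the paper and
    # the relative deviation you consider acceptable on other machines.
    CLAIMED_SPEEDUP = 2.5
    ALLOWED_RELATIVE_DEVIATION = 0.15  # within 15% of the claimed value

    def check_speedup(results_file="results.json"):
        """Compare the measured speed-up against the claimed one within a tolerance."""
        with open(results_file) as f:
            results = json.load(f)
        baseline = results[0]["seconds"]  # single-thread run from the driver above
        best = min(r["seconds"] for r in results)
        speedup = baseline / best
        deviation = abs(speedup - CLAIMED_SPEEDUP) / CLAIMED_SPEEDUP
        status = "OK" if deviation <= ALLOWED_RELATIVE_DEVIATION else "OUTSIDE EXPECTED RANGE"
        print("Measured speed-up %.2fx (claimed %.2fx): %s" % (speedup, CLAIMED_SPEEDUP, status))

    if __name__ == "__main__":
        check_speedup()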

Experiment customization

This section is currently optional, since customization is not always trivial. If possible, describe how to customize your workflow, e.g. whether it is possible to use different data sets, benchmarks, real applications, predictive models, software environments (compilers, libraries, run-time systems), hardware, etc. Also describe whether the workflow can be parameterized (whatever is applicable, such as changing the number of threads, optimizations, CPU/GPU frequency, accuracy, autotuning scenario, etc.). See the artifact descriptions of the following award-winning artifacts as examples of portable, customizable and reusable workflows implemented using the open-source Collective Knowledge framework with a portable package manager: [shared CK workflows from the 1st ACM ReQuEST-ASPLOS'18 tournament to co-design efficient SW/HW stacks for deep learning], [CGO'17 paper], [IA3 @ Supercomputing'17 paper], [SC'15].
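One simple way to expose such parameters is through command-line options in your workflow driver. The sketch below is illustrative only; the option names (threads, data set, compiler) are hypothetical examples of knobs an artifact might expose.

    import argparse

    def parse_args():
        """Expose the main workflow knobs so reviewers can customize experiments."""
        parser = argparse.ArgumentParser(description="Hypothetical artifact workflow driver")
        parser.add_argument("--threads", type=int, default=4,
                            help="number of worker threads to use")
        parser.add_argument("--dataset", default="small", choices=["small", "full"],
                            help="which data set to evaluate on")
        parser.add_argument("--compiler", default="gcc",
                            help="compiler used to rebuild the benchmarks")
        return parser.parse_args()

    if __name__ == "__main__":
        args = parse_args()
        print("Running workflow with threads=%d dataset=%s compiler=%s"
              % (args.threads, args.dataset, args.compiler))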

Notes

You can add informal notes for reviewers to draw their attention to known or possible issues (particularly if you plan to continue working on them after submission).

This guide was prepared by Grigori Fursin with contributions from Bruce Childers, Michael Heroux, Michela Taufer and other colleagues.