Briefly and informally describe your artifact, including minimal hardware and software requirements,
how it supports your paper, how it can be validated, and what the expected result is.
This description will be used to select appropriate reviewers.
Artifact check-list (meta-information)
Together with the artifact abstract, this informal check-list will help us make sure that reviewers
have the appropriate competency as well as the technology to evaluate your artifact.
It can also be used as meta-information to find your artifacts in digital libraries
(under discussion/development).
It was created based on past AE experience and author feedback to cover most of the
artifacts in computer systems research, including SW/HW co-design, benchmarking, design space exploration,
autotuning, architecture simulation, run-time adaptation, and more.
Fill in whatever is applicable with some informal keywords and remove unrelated items
(consider the questions below as informal hints about what reviewers are usually concerned with):
- Algorithm: Are you presenting a new algorithm?
- Program: Which benchmarks do you use (PARSEC, ARM real workloads, NAS, EEMBC, SPLASH, Rodinia, LINPACK, HPCG, MiBench, SPEC, cTuning, etc)? Are they included or should they be downloaded? Which version? Are they public or private? If they are private, is there a public analog to evaluate your artifact? What is the approximate size?
- Compilation: Do you present or require a specific compiler? Public/private? Is it included? Which version?
- Transformations: Do you present or require a program transformation tool (source-to-source, binary-to-binary, compiler pass, etc)? Public/private? Is it included? Which version?
- Binary: Are binaries included? Are they OS-specific? Which version?
- Data set: Do you use specific data sets (for example, ICL-NUIM, cTuning data sets, KDataSets, etc)? Are they included? If not, how can they be downloaded and installed? What is their approximate size?
- Run-time environment: Is your artifact OS-specific (Linux, Windows, MacOS, Android, etc)? Which version? What are the main software dependencies (JIT, libs, run-time adaptation frameworks, etc)? Do you need root access?
- Hardware: Do you need specific hardware (supercomputer, architecture simulator, CPU, GPU, neural network accelerator, FPGA) or specific features (such as hardware counters to measure power consumption, or access to CPU/GPU frequency)? Are they publicly available?
- Run-time state: Is your artifact sensitive to run-time state (cold/hot cache, network/cache contention, etc)?
- Execution: Are there any specific conditions during execution (sole user, process pinning, profiling, adaptation, etc)? Approximately how long will it run?
- Metrics: Which metrics are reported (execution time, inferences per second, Top-1 accuracy, static and dynamic energy consumption, etc)? This is particularly important for multi-objective benchmarking, optimization and co-design (see the ACM ReQuEST tournaments).
- Output: What is your output (console, file, table, graph) and what is your result (exact output, numerical results, measured characteristics, etc)?
- Experiments: How should experiments be prepared and results replicated/reproduced (OS scripts, manual steps by the user, IPython/Jupyter notebook, CK workflow, etc)? Do not forget to mention the tolerable variation of empirical results; a minimal timing sketch illustrating this follows the checklist.
- How much disk space is required (approximately)? This can help evaluators and end-users find appropriate resources.
- How much time is needed to prepare the workflow (approximately)? This can help evaluators and end-users estimate the resources needed to evaluate your artifact.
- How much time is needed to complete experiments (approximately)? This can help evaluators and end-users estimate the resources needed to evaluate your artifact.
- Publicly available? Will your artifact be publicly available? If yes, we may spend extra effort to help you with the documentation.
- Code/data licenses (if publicly available)? If your workflows and artifacts will be publicly available, please provide information about their licenses. This will help the community reuse your components.
- Workflow frameworks used? Did you use any standard workflow frameworks to automate and customize experiments (such as CK, OCCAM, Code Ocean or similar)?
- Archived? Note that author-created artifacts relevant to this paper will receive an ACM "artifact available" badge *only if* they have been placed in a publicly accessible archival repository such as Zenodo, FigShare or Dryad. A DOI will then be assigned to the artifacts and must be provided here! Personal web pages, Google Drive, GitHub, GitLab and BitBucket are not accepted for this badge. The authors can also share their artifact via the ACM DL; in that case they should contact the AE chairs to obtain a DOI (this is not yet automated, unlike for the repositories above).
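To illustrate the Experiments item above, here is a minimal sketch of how one might measure run time over several repetitions and report the observed variation, so reviewers know which deviations from the paper's numbers are tolerable. The benchmark binary and its flags are hypothetical placeholders, not part of any real artifact.

#!/usr/bin/env python3
# Minimal sketch: time a (hypothetical) benchmark several times and
# report the mean and the observed variation of the measurements.
import statistics
import subprocess
import time

RUNS = 10
CMD = ["./bench", "--input", "data/sample.in"]  # hypothetical binary and flags

timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(CMD, check=True, capture_output=True)
    timings.append(time.perf_counter() - start)

mean = statistics.mean(timings)
stdev = statistics.stdev(timings)
print(f"mean: {mean:.3f}s  stdev: {stdev:.3f}s  ({100 * stdev / mean:.1f}% variation)")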
How delivered
Describe how reviewers can access your artifact:
- Download package from a public website
- Download package from a private website (you will need to send the AE chairs information on how to access your artifact)
- Access artifact via a private machine with pre-installed software (only when access to rare hardware is required or proprietary software is used; you will need to send the AE chairs information and credentials to access your machine)
Please describe the approximate disk space required after unpacking your artifact
(to avoid surprises when an artifact requires 20 GB of free space). We do not have
a strict limit but strongly suggest limiting the required space to several GB and
avoiding unnecessary software in your VM images.
Hardware dependencies
Describe any specific hardware and its features
strictly required to evaluate your artifact
(vendor, CPU/GPU/FPGA, number of processors/cores, interconnect, memory,
hardware counters, etc).
Software dependencies
Describe any specific OS and software packages required to evaluate your
artifact. This is particularly important if you share source code
that has to be rebuilt or if you rely on proprietary software that you
cannot include in your package. In such cases, we strongly suggest describing
where to obtain and how to install all third-party tools.
Note that we are trying to obtain AE licenses for some commonly used proprietary tools and benchmarks
(you will be informed in case of a positive outcome).
Data sets
If third-party data sets are not included in your package (for example,
because they are very large or proprietary), please provide details on how to
download and install them.
In the case of proprietary data sets, we suggest providing reviewers
with a public alternative subset for evaluation.
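For example, a minimal sketch of a download-and-verify step (the URL and SHA-256 digest below are placeholders, not real values):

#!/usr/bin/env python3
# Minimal sketch: download a data set and verify its integrity.
# The URL and SHA-256 digest are placeholders, not real values.
import hashlib
import urllib.request

URL = "https://example.org/datasets/sample.tar.gz"  # placeholder
SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

urllib.request.urlretrieve(URL, "sample.tar.gz")

digest = hashlib.sha256(open("sample.tar.gz", "rb").read()).hexdigest()
if digest != SHA256:
    raise SystemExit(f"checksum mismatch: {digest}")
print("data set downloaded and verified")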
Installation
Describe the installation and setup procedures for your artifact
(even if you use a VM image or access to a remote machine), targeting
a novice user.
Experiment workflow
Describe the experiment workflow and how it is implemented,
invoked and customized (if needed), e.g. via OS scripts,
an IPython/Jupyter notebook, a portable CK workflow, etc.
See the following example of an experimental workflow
for multi-objective and machine-learning-based autotuning:
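As a minimal illustration, here is a sketch of such an autotuning loop, simplified to a single objective for brevity; the compiler flag space and benchmark source file are hypothetical placeholders:

#!/usr/bin/env python3
# Minimal illustrative sketch of an autotuning loop: try compiler flag
# combinations, measure run time, keep the best. The flag space and
# benchmark source file (bench.c) are hypothetical placeholders.
import subprocess
import time

FLAGS = [["-O2"], ["-O3"], ["-O3", "-funroll-loops"]]  # hypothetical search space

best = None
for flags in FLAGS:
    subprocess.run(["gcc", *flags, "-o", "bench", "bench.c"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    print(f"{' '.join(flags):24s} {elapsed:.3f}s")
    if best is None or elapsed < best[1]:
        best = (flags, elapsed)

print("best configuration:", " ".join(best[0]), f"({best[1]:.3f}s)")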
Evaluation and expected result
Describe all steps necessary to evaluate your artifact
using the workflow above. Also describe whether reviewers
will need to replicate your results (exact match) or reproduce them
(with possibly varying results or different experimental conditions).
Finally, describe the expected result as well as the allowable variation
(particularly important for performance numbers and speed-ups).
This part is currently optional since it is not always trivial.
Experiment customization
If possible, describe how to customize your workflow, i.e. whether
it is possible to use different data sets, benchmarks, real applications,
predictive models, software environments (compilers, libraries,
run-time systems), hardware, etc. Also describe whether it is possible
to parameterize your workflow (whatever is applicable, such as
changing the number of threads, optimizations, CPU/GPU frequency,
accuracy, autotuning scenario, etc); a sketch of such a parameterized
entry point follows.
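A minimal sketch of a parameterized experiment entry point (the parameter names and defaults are hypothetical examples, not part of any real artifact):

#!/usr/bin/env python3
# Minimal sketch of a parameterized experiment entry point.
# The parameter names and defaults are hypothetical examples.
import argparse

parser = argparse.ArgumentParser(description="Run one experiment")
parser.add_argument("--threads", type=int, default=4, help="number of worker threads")
parser.add_argument("--dataset", default="data/sample.in", help="input data set")
parser.add_argument("--repetitions", type=int, default=10, help="timed repetitions")
args = parser.parse_args()

print(f"running on {args.dataset} with {args.threads} threads, {args.repetitions} reps")
# ... invoke the actual benchmark here ...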
See the artifact descriptions of the following award-winning artifacts
as examples of portable, customizable and reusable workflows
implemented using the open-source Collective Knowledge workflow framework
with a portable package manager: the shared CK workflows from the
1st ACM ReQuEST-ASPLOS'18 tournament to co-design an efficient SW/HW stack for deep learning,
and the CGO'17, IA3 @ Supercomputing'17 and SC'15 papers.
Notes
You can add informal notes for reviewers to draw
their attention to known or possible issues (particularly
if you plan to continue working on them after submission).