Call for Artifacts
Aim
Reproducibility of experiments is crucial to foster an atmosphere of open, reusable and trustworthy research. To improve and reward reproducibility and to give more visibility and credit to the effort of tool developers in our community, authors of accepted papers will be invited to submit possible artifacts associated with their paper for evaluation, and based on the level of reproducibility they will be awarded one or more badges.
The goals of the artifact evaluation are manifold: to encourage authors to provide more substantial evidence for their papers, to reward authors who aim for reproducibility of their results and therefore create artifacts, to simplify the independent replication of the results presented in the paper, and to ease future comparisons with existing approaches.
Artifact submission is optional. Papers that are successfully evaluated will be awarded one or more artifact badges, but the result of the artifact evaluation will not alter the paper’s acceptance decision. We aim to assess the artifacts themselves, not the quality of the research linked to the artifact, which has already been assessed by the iFM 2025 program committee. The goal of our review process is to be constructive and to improve the submitted artifacts. An artifact should be rejected only if it cannot be improved to sufficient quality within the given time frame or if it is inconsistent with the paper.
Important Dates (AoE)
Artifact Registration | 15 Aug 2025 |
Artifact Submission | 22 Aug 2025 |
Smoke-Test | 31 Aug 2025 |
Rebuttal | 01-04 Sep 2025 |
Artifact Notification | 24 Sep 2025 |
Reviewing Criteria
All artifacts are evaluated by the artifact evaluation committee.
- Each artifact will be reviewed by at least three committee members.
- Reviewers will read the accepted paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper.
iFM awards the Artifacts Available and Artifacts Evaluated badges of EAPLS.
Artifacts Available Badge
The availability badge will be awarded if the artifact is made permanently and publicly available and has a DOI. We recommend services like Zenodo or figshare for this. The available badge is the bare minimum that an artifact should achieve: a binary with instructions on how to run an example problem is sufficient for this badge.
Artifacts Evaluated Badge
The evaluation badge has two levels, functional and reusable. Each successfully evaluated artifact receives at least the functional badge. The reusable badge is granted to artifacts of very high quality.
Functional
The official description has more details, but you should be able to reproduce the experimental results of the paper in the virtual environment you picked (Docker / VM).
Reusable
The artifact needs a description of how to reproduce the work outside of the virtual environment you picked (Docker / VM), and, even better, of how to use the tool for another purpose.
Process
The artifact evaluation consists of two phases: the smoke-test phase and the main-review phase.
Smoke-Test
During the initial smoke-test phase, reviewers will download the submitted artifacts, follow the instructions, and attempt to run the experiments to ensure basic functionality. In case of technical issues at this stage, and at the discretion of the AEC chairs, a single round of rebuttal may be applied to some artifacts shortly after submission. Authors may reply with a single set of answers to address the issues and facilitate further evaluation. Updating the submitted files or further communication will not be allowed; updated scripts may only be provided via the rebuttal form in EasyChair.
Main-Review
In the main-review phase, reviewers will try to reproduce any experiments or activities and evaluate the artifact w.r.t. the reviewing criteria mentioned above. The final review is communicated using EasyChair.
Awarding
Authors may use all granted badges on the title page of the respective paper. Artifacts that are not exercisable, such as protocols used for empirical studies, will be evaluated only for the Available badge, as the Functional and Reusable badges are not applicable.
Artifact Submission
An artifact submission consists of
- An abstract, to be written directly in EasyChair, that:
  - summarizes the artifact
  - explains its relation to the paper and states which badges the authors apply for
- A .pdf file of the most recent version of the accepted paper, which may differ from the submitted version to take reviewers’ comments into account. Please also look at the Artifact Packaging Guidelines below for more detailed information about the contents of the artifact.
- A DOI of the artifact, i.e., a link to a repository that provides a DOI, such as Zenodo, figshare, or Dryad.
- Information on whether your artifact is submitted as a Docker container or a VM.
- A checksum (SHA-256) of the artifact. We need the checksum to ensure the integrity of your artifact. You can generate it using the following command-line tools:
  - Linux: sha256sum <file>
  - Windows: CertUtil -hashfile <file> SHA256
  - macOS: shasum -a 256 <file>
The abstract and the .pdf file of your paper must be submitted via EasyChair:
https://easychair.org/my/conference?conf=ifm25
If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.
Packaging Guidelines
Your artifact should contain the following elements:
- The main artifact, i.e., the data, software, libraries, scripts, etc. required to replicate the results of your paper, and any additional software required by your artifact, including an installation description in the README. We recommend using a Docker image, but you can also use an OVA file based on Ubuntu 24.04 LTS.
- A LICENSE file describing the rights. Your license needs to allow the artifact evaluation committee members to download and evaluate the artifact, e.g., download, use, execute, and modify the artifact for the purpose of artifact evaluation. Please refer to typical open-source licenses. Artifacts without an open-source license are also accepted, but a type of license that allows the committee to assess the artifact needs to be specified. For quick help with possible licenses, visit https://choosealicense.com/.
- A README file that introduces the artifact to the user, i.e., describes what the artifact does, and guides the user through installation, setup tests, and replication of your results. It should contain:
  - the structure and content of the artifact
  - the steps to set up your artifact (do not assume that all reviewers are familiar with Docker; see the example commands below)
  - additional requirements for the artifact, such as the installation of proprietary software or particular hardware resources
  - detailed instructions for an early light review that allow reviewers to (1) verify that the artifact can properly run and (2) perform a short evaluation of the artifact before the full evaluation to detect any difficulties
  - detailed instructions for the use of the artifact and the replication of the results in the paper
  - a time estimate of how long the reproduction takes; it should not take more than 8 hours (on 2 threads) to reproduce the entire set of experiments
An example README from the CAV24-AE is available on Zenodo: https://doi.org/10.5281/zenodo.11118824.
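As an illustration, the setup steps for a Docker-based artifact could be as simple as the following sketch; the image, archive, and script names are placeholders only, not part of any particular artifact:

  # load the submitted image into the local Docker installation
  docker load -i ifm25-artifact.tar
  # start a container and mount a host directory for collecting results
  docker run -it --rm -v "$(pwd)/results:/results" ifm25-artifact bash
  # inside the container: run the smoke test first, then the full experiments
  ./run_smoke_test.sh
  ./run_all_experiments.sh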
In case your experiments cannot be replicated inside Docker or a VM, please contact the Artifact Evaluation Committee chairs before submission. Possible reasons may include the need for special hardware (FPGAs, GPUs, clusters, robots, etc.) or software licensing issues. In any case, you are encouraged to submit a complete artifact. This way, the reviewers have the option to replicate the experiments in the event that they have access to the required resources.
In case your artifact requires more than 8 hours (or an excessive amount of memory) to reproduce, please provide:
- the full set of log files you obtained
- the limited set of log files you obtained when running the artifact (this should not be the directory where the log files are produced by your experiments in the artifact!)
- the directory where the log files are produced by your experiments in the artifact
- scripts that allow reviewers to reproduce the results for both the subset included in the artifact and the full set
The structure of those three directories should be similar so that the reviewers can compare the log files (while being aware that the cluster might or might not be faster than the machine the artifact is executed on).
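A minimal sketch of such a layout, with purely hypothetical directory names:

  logs-full/     # complete logs from your own full run (e.g., obtained on a cluster)
  logs-subset/   # logs from your own run of the reduced benchmark subset
  results/       # directory where the artifact writes logs when the reviewers run it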
If you decide to use a VM, your artifact is allowed reasonable network access. Authors can therefore decide whether to supply a completely self-contained artifact, to rely on network access for external resources, or to combine both. However, the main artifact described in the paper must be included in the archive, and the use of external resources is at the authors’ own risk. If you are using external resources, please ensure that they are version-pinned to support long-term replicability. We anticipate that typical usage of this relaxation will be for the installation of third-party dependencies.
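For example, a dependency installation step with pinned versions could look like the following sketch; the package name, version numbers, and URL are placeholders only:

  # install an exact version instead of whatever is latest at review time
  pip install z3-solver==4.13.0.0
  # check out a fixed tag rather than the tip of a branch
  git clone --branch v1.2.3 https://example.org/solver.git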
Recommendations for Authors
- We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use it and, in particular, replicate your results. For example, keep the evaluation process simple, provide easy-to-use scripts, and include a detailed README document.
- Furthermore, the artifact and its documentation should be self-contained.
- However, we recommend not making the README longer than necessary. If your tool uses a standard input language (like DIMACS or SMT-LIB) and standard output (s SATISFIABLE or s UNSATISFIABLE), a link to a description is enough (a small example is sketched after this list).
- We do not expect a full execution of your artifact if the runtime is more than 8 hours (or even thousands of hours if a beefy cluster is actually needed).
- Indicate precisely what part of the paper you are reproducing (like Table 4 or Figure 5).
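For instance, for a SAT solver that reads standard DIMACS CNF input, it is enough for the README to show one call and the expected standard-format result; the solver and file names below are placeholders:

  ./solver benchmark.cnf
  s SATISFIABLE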
Artifact Evaluation Committee
Chairs
- Daniela Kaufmann, TU Wien
- Roberto Casadei, Università di Bologna
PC
TBA