pVLDB Reproducibility 2018

Starting in 2018, pVLDB joins SIGMOD in encouraging the database community to develop a culture of sharing and cross-validation. pVLDB's reproducibility effort is being developed in coordination with SIGMOD's.

What is pVLDB Reproducibility?

pVLDB Reproducibility has three goals:

  • Increase the impact of database research papers.
  • Enable easy dissemination of research results.
  • Enable easy sharing of code and experimentation set-ups.

In short, the goal is to build a culture where sharing the results, code, and scripts of database research is the norm rather than the exception. The challenge is to do this efficiently, which means building the technical expertise to do better research by making it repeatable and sharable. The pVLDB Reproducibility committee is here to help you with this.

Submission

Submit your accepted pVLDB papers for reproducibility through CMT. To submit, you'll need the following information:

  1. The title and abstract of your original, accepted pVLDB paper.
  2. A link to your original, accepted pVLDB paper.
  3. A short description of how the reviewer may retrieve your reproducibility submission. This should include at least the following information: a link to the code and instructions on how to use the scripts for (a) code compilation, (b) data generation, and (c) experimentation (a minimal driver sketch follows this list).
  4. A short description of the hardware needed to run your code and reproduce the experiments included in the paper, with a detailed specification of any unusual or not commercially available hardware. If your hardware is sufficiently specialized, please be prepared to give the reviewers access to it.
  5. A short description of any software or data necessary to run your code and reproduce the experiments included in the paper, particularly if it is restricted-access (e.g., commercial software without a free demo or academic version). If this is the case, please be prepared to give the reviewers access to the necessary software or data.
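
For concreteness, here is a minimal, hypothetical sketch of how the compilation, data-generation, and experimentation steps of item 3 could be exposed through a single driver script. Every file name, build command, and flag below is an illustrative assumption, not a required layout.

    #!/usr/bin/env python3
    """Hypothetical reproducibility driver: one entry point for the three
    steps reviewers need -- (a) compile, (b) generate data, (c) run experiments.
    All paths and commands are placeholders for illustration."""
    import argparse
    import subprocess


    def compile_code():
        # (a) code compilation: delegate to the project's own build system.
        subprocess.run(["make", "-C", "src"], check=True)


    def generate_data(scale):
        # (b) data generation: call a (hypothetical) generator script.
        subprocess.run(["python3", "scripts/generate_data.py", "--scale", str(scale)], check=True)


    def run_experiments(out_dir):
        # (c) experimentation: run the workloads reported in the paper.
        subprocess.run(["python3", "scripts/run_experiments.py", "--out", out_dir], check=True)


    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Reproduce the paper's experiments")
        parser.add_argument("step", choices=["compile", "data", "run", "all"])
        parser.add_argument("--scale", type=int, default=1, help="data set scale factor")
        parser.add_argument("--out", default="results/", help="directory for raw results")
        args = parser.parse_args()

        if args.step in ("compile", "all"):
            compile_code()
        if args.step in ("data", "all"):
            generate_data(args.scale)
        if args.step in ("run", "all"):
            run_experiments(args.out)

With such a driver, the retrieval instructions can simply point reviewers to a single command, e.g., "python3 reproduce.py all".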

In keeping with pVLDB itself, the pVLDB Reproducibility effort will use a rolling, monthly deadline. Papers received by 5 PM EST on the first of each month will be distributed for that month's round of reviews. We aim to complete each reproducibility review within two months.

Why should I be part of this?

You will make it easy for other researchers to compare against your work and to adopt and extend your research. This directly translates into more recognition for your work and higher impact.

How much overhead is it?

At first, making research sharable seems like extra overhead for authors. You just had your paper accepted at a major conference; why should you spend more time on it? The answer is to have more impact!

If you ask any experienced researcher in academia or in industry, they will tell you that they in fact already follow the reproducibility principles on a daily basis! Not as an afterthought, but as a way of doing good research.

Maintaining easily reproducible experiments simply makes working on hard problems much easier, because you can repeat your analysis for different data sets, different hardware, different parameters, and so on. Like other leading system designers, you will save significant amounts of time because you will minimize the set-up and tuning effort for your experiments. In addition, such practices help bring new students up to speed after a project has lain dormant for a few months.

Ideally, reproducibility should require close to zero extra effort.

Reproducibility Highlights

Leveraging Similarity Joins for Signal Reconstruction
Abolfazl Asudeh (University of Michigan), Azade Nazi (Microsoft), Jees Augustine (University of Texas at Arlington), Saravanan Thirumuruganathan (QCRI), Nan Zhang (Pennsylvania State University), Gautam Das (University of Texas at Arlington), Divesh Srivastava (AT&T Labs Research)

An Eight-Dimensional Systematic Evaluation of Optimized Search Algorithms on Modern Processors
Lars-Christian Schulz (University of Magdeburg), David Broneske (University of Magdeburg), Gunter Saake (University of Magdeburg)

Cardinality Estimation: An Experimental Survey
Hazar Harmouch (Hasso Plattner Institute), Felix Naumann (Hasso Plattner Institute)

Criteria and Process

Availability

Each submitted experiment should contain:

  1. System: a prototype system provided as a white box (source code, configuration files, build environment) or a fully specified black-box system.
  2. Input data: either the process to generate the input data, or, when the data is not generated, the actual data itself or a link to the data.
  3. Experiments: the set of experiments (system configuration and initialization, scripts, workload, measurement protocol) used to produce the raw experimental data.
  4. Plotting scripts: the scripts needed to transform the raw data into the graphs included in the paper (a minimal sketch of such a script follows this list).
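
As an illustration of item 4, the following is a minimal sketch of a plotting script that turns raw measurements into one of the paper's graphs. The CSV layout, column names, and figure name are assumptions made purely for the example.

    """Hypothetical plotting script (item 4): transform raw result data
    into a figure from the paper. CSV layout and column names are assumed."""
    import csv
    from collections import defaultdict

    import matplotlib.pyplot as plt

    # Raw measurements: one row per (algorithm, input size, runtime) observation.
    series = defaultdict(lambda: ([], []))
    with open("results/raw_runtimes.csv", newline="") as f:
        for row in csv.DictReader(f):
            xs, ys = series[row["algorithm"]]
            xs.append(int(row["input_size"]))
            ys.append(float(row["runtime_ms"]))

    # One line per algorithm, matching the style of the graph in the paper.
    for name, (xs, ys) in sorted(series.items()):
        plt.plot(xs, ys, marker="o", label=name)

    plt.xlabel("Input size (tuples)")
    plt.ylabel("Runtime (ms)")
    plt.legend()
    plt.savefig("figure_3.pdf")  # hypothetical figure name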

Replicability

The central results and claims of the paper should be supported by the submitted experiments, meaning that we can recreate result data and graphs that demonstrate behavior similar to that shown in the paper. When the results concern response times, for example, the exact numbers will typically depend on the underlying hardware. We do not expect to obtain results identical to those in the paper unless we happen to have access to identical hardware. Instead, we expect the overall behavior to match the conclusions drawn in the paper, e.g., that a given algorithm is significantly faster than another, or that a given parameter affects the behavior of a system positively or negatively.

Reproducibility

One important characteristic of strong research results is how flexible and robust they are with respect to parameters and the tested environment. For example, testing a new algorithm on several input data distributions, workload characteristics, and even hardware with diverse properties provides a more complete picture of its properties. Of course, a single paper cannot always cover the whole space of possible scenarios; typically the opposite is true. For this reason, we expect authors to provide, as part of their submission, a short description of different experiments one could run to test their work beyond what is already in the paper. Ideally, the scripts provided should enable such functionality so that reviewers can test these cases (a minimal sketch of such a parameterized runner appears at the end of this subsection). This allows reviewers to assess how “reproducible” the results of the paper are under different conditions.

We do not expect authors to perform any additional experiments on top of the ones in the paper. Any additional experiments submitted will be considered and tested, but they are not required. As long as the flexibility report shows that there is a reasonable set of existing experiments, a paper meets the flexibility criteria. What counts as reasonable will be judged on a case-by-case basis, depending on the topic of each paper; in practice, all accepted papers in top database conferences meet this criterion. You should see the flexibility report mainly as a way to describe the design space covered by the paper, as well as the design space that would be interesting to cover in the future, which may inspire others to work on open problems triggered by your work.
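
As one possible way to make the provided scripts flexible in this sense, the sketch below sweeps a few hypothetical parameters (data set, thread count, buffer size) and stores one result file per combination, so that reviewers can explore points beyond those reported in the paper. The parameter names and the runner script are illustrative assumptions.

    """Hypothetical parameter sweep: expose the experiment's knobs so that
    reviewers can rerun it under conditions beyond those in the paper."""
    import itertools
    import subprocess

    # Values used in the paper plus extra points a reviewer might want to try.
    DATASETS = ["uniform", "zipf", "real_world"]
    THREADS = [1, 4, 16]
    BUFFER_SIZES_MB = [64, 256]

    for dataset, threads, buffer_mb in itertools.product(DATASETS, THREADS, BUFFER_SIZES_MB):
        # Each combination writes its own result file, which makes runs easy to compare.
        out = f"results/{dataset}_t{threads}_b{buffer_mb}.csv"
        subprocess.run(
            ["python3", "scripts/run_experiments.py",
             "--dataset", dataset,
             "--threads", str(threads),
             "--buffer-mb", str(buffer_mb),
             "--out", out],
            check=True,
        )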

Process

Each paper is reviewed by one database group. The process happens in communication with the reviewers so that authors and reviewers can iron out any technical issues that arise. The end result is a short report that describes the outcome of the process.

The goal of the committee is to properly assess and promote database research! While we expect authors to do their best to prepare a submission that works out of the box, we know that unexpected problems sometimes appear and that in certain cases experiments are very hard to fully automate. The committee will not dismiss submissions if something does not work out of the box; instead, they will contact the authors to get their input on how to properly evaluate their work.

Reproducibility Committee

Committee

  • Boston University: Charalampos Tsourakakis
  • Delaware State University: Gokhan Kul
  • Duke University: Sudeepa Roy, Yuhao Wen, Prajakta Kalmegh, Zhengjie Miao, Yuchao Tao
  • IIT Delhi: Maya Ramanath, Madhulika Mohanty, Prajna Upadhyay
  • INRIA: Ioana Manolescu, Tien-Duc Cao
  • Imperial College London: Peter Pietzuch, George Theodorakis, Panagiotis Garefalakis
  • Tsinghua University: Guoliang Li, Ji Sun
  • Télécom ParisTech University: Fabian Suchanek, Jonathan Lajus
  • UC Davis: Mohammad Sadoghi, Suyash Gupta, Thamir Qadah, Patrick Liao, Domenic Cianfichi
  • University of Glasgow: Nikos Ntarmos, Iulia Popescu, Fotis Savva
  • University of Insubria: Elena Ferrari
  • University of New South Wales: Wenjie Zhang
  • University of Pittsburgh: Panos Chrysanthis, Bruce R Childers, Daniel Petrov, Xiaoyu Ge
  • University of Western Australia: Jianxin Li