Guidance on Code Review

Code review can increase the accuracy of results and improve the usability and maintainability of code, and it is a great opportunity to learn. For a more detailed overview of the motivations for code review, please see the Code Reviewing Process chapter.

The most important questions during code review are:

  • Are all the files available? If not, ask the editor to request the data/code from the authors.

  • Does it run?

  • Is it easy to understand? Or is it more complex than it should be?

  • If it is not just a script underlying a publication but part of software infrastructure: how easy is it to maintain?

Code does not have to be perfect: it has to work, be accompanied by sufficient documentation, and be maintainable where needed.

The online sustainability evaluation provided by the Software Sustainability Institute can help address issues that affect the sustainability of the software.

Code review in ReproHack style

This is more applicable when you’re reviewing the code underlying a research article.

1. Access

  • How easy is it to access the materials? Can you access all the materials?

    • Is the data stored in a separate directory or data repository? Is there a persistent identifier associated with the data/code?

2. Installation

  • Were you able to install everything? Did you run into any problems, and how did you solve them?

3. Documentation

Does the documentation contain information on:

  • how to install necessary software and dependencies?

  • how to use materials to reproduce the paper?

  • how to cite the materials, ideally in a form that can be copied and pasted? Provide suggestions on how to improve the documentation of the code if needed.

  • Are the inline comments in the code helpful and necessary? Comments should explain why some code exists, not what the code is doing; if the code is not clear enough to explain itself, it should be simplified. (See the short sketch after this list.)

  • Does the code follow applicable style guides (for example, the Google Style Guide)?
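
The difference between a "why" comment and a "what" comment is easiest to see in code. Below is a minimal, hypothetical Python sketch; the function, the threshold, and the detection-limit scenario are invented for illustration, and the layout simply follows common Python style:

```python
# Hypothetical example -- not taken from any specific project under review.

def filter_observations(values, threshold=0.05):
    """Drop observations below the detection threshold."""
    # Why this line exists: values under 0.05 are below the (hypothetical)
    # instrument's detection limit, so keeping them would bias later averages.
    return [v for v in values if v >= threshold]


# A "what" comment like the one below merely restates the code and adds nothing:
# loop over the values and keep the ones that are >= threshold
```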

4. Reproduction

  • Were you able to fully reproduce the paper?

  • How automated was the process of reproducing the paper? (See the sketch after this list for one way to provide a single entry point.)

  • How easy was it to link the analysis code to the plots it generates and to the sections of the manuscript in which it is described and its results are reported?
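
A highly automated reproduction usually means a single entry point that regenerates every output. The sketch below is a minimal, hypothetical illustration in Python; the step functions and file names are placeholders, not anyone's actual analysis code:

```python
"""Hypothetical run_all.py: one entry point that reproduces all outputs."""
from pathlib import Path


def prepare_data():
    # Placeholder for reading and cleaning the raw data.
    return [1.0, 2.0, 3.0]


def fit_model(data):
    # Placeholder for the actual analysis; here, simply the mean.
    return sum(data) / len(data)


def write_outputs(result):
    # Placeholder for writing the tables and figures reported in the paper.
    Path("results").mkdir(exist_ok=True)
    (Path("results") / "summary.txt").write_text(f"estimate: {result}\n")


if __name__ == "__main__":
    write_outputs(fit_model(prepare_data()))
    print("Reproduction finished; see results/summary.txt")
```

With a script like this, "how automated was the process?" has a concrete answer: one command.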

If you are not able to reproduce the article:

  • Were there missing dependencies?

  • Was the computational environment inadequately described or captured? (The sketch after this list shows one way to record it.)

  • Were there bugs in the code?

  • Does the code handle errors properly? Where could this be improved? If there are any tests, check whether they are correct, sensible, and useful (a minimal test sketch follows this list).

  • Did the code run but produce results (such as model outputs, tables, or figures) that differ from those published? By how much? Was this to be expected (for example, because the method uses random numbers)?
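
Two frequent causes of failed or inexact reproductions are an uncaptured computational environment and unseeded randomness. The hypothetical Python sketch below shows one low-effort way to address both; the output file name and the seed value are arbitrary choices, not an established convention:

```python
"""Hypothetical sketch: record the environment and fix the random seed."""
import platform
import random
import sys
from importlib.metadata import distributions

# Fix the seed so any stochastic steps give the same results on every run.
random.seed(20240101)

# Record the interpreter, platform, and installed package versions alongside
# the results, so a reviewer can inspect (or recreate) the environment later.
with open("environment-info.txt", "w") as fh:
    fh.write(f"python {sys.version}\n")
    fh.write(f"platform {platform.platform()}\n")
    for dist in distributions():
        fh.write(f"{dist.metadata['Name']}=={dist.version}\n")
```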
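
For error handling and tests, a useful benchmark is whether the tests exercise both the expected behaviour and a sensible failure mode. Here is a minimal, hypothetical sketch using pytest; `filter_observations` is an invented stand-in for any function from the project under review:

```python
# Hypothetical example; requires pytest to run (e.g. `pytest test_filtering.py`).
import pytest


def filter_observations(values, threshold=0.05):
    if not values:
        raise ValueError("no observations supplied")
    return [v for v in values if v >= threshold]


def test_filters_values_below_threshold():
    assert filter_observations([0.01, 0.2, 0.5]) == [0.2, 0.5]


def test_empty_input_raises_a_clear_error():
    with pytest.raises(ValueError):
        filter_observations([])
```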

5. User perspective

  • What did you find easy / intuitive? (For example: file structure, file naming, analysis workflow, documentation?)

  • What did you find confusing / difficult? Identify pressure points and provide constructive suggestions.

  • What did you enjoy? Identify aspects that worked well.

6. Acknowledge the effort from the authors and give them feedback in good faith. Also tell them what they did well!

CODECHECK

CODECHECK provides a workflow, guidelines and tools to evaluate computer programs underlying scientific papers. If you want to get involved as a codechecker in the community, or if you want to apply the CODECHECK principles in your journal or conference, please take a look at the Get Involved page.

Code Review of research software

Please see the Code Reviewing Process chapter for more details on reviewing software as a primary research output; it includes a checklist for the code review process.

Resources

Journal, conference and archive guidelines

Teaching Code Review

Sharing Code Review Experiences