At the 22nd International Conference on Software Engineering, Alastair Dunsmore, Marc Roper, and Murray Wood presented the findings of their study on three different techniques for code review.
The first approach was a “checklist review,” which outlined specific things that a reviewer should check for at the class, method, and class-hierarchy levels. The second approach was a “systematic review,” which referred reviewers to a reading plan meant to focus their attention. The third approach was a “use-case review,” in which the reviewer was provided with use cases describing how the code relates to other code in the system.
So what did they find? The reviewers guided by a checklist found more defects in less time than those using the other two methods -- a 30% improvement over the worst-performing approach.
That insight was presented in 2000, nearly two decades ago, and it bears repeating.
In the modern software world of CI/CD pipelines, a budding API economy, and automated testing tools, it’s time to make the case for something deceptively simple: checklists.
Embedding Your Priorities Into Your Review Process
First off, if you are someone who recoils at the very mention of a word like “process”, you’re not alone and you’re not wrong to be skeptical. Just keep in mind that there’s an important distinction between unproductive process (bureaucracy) and productive process (a smart way to work).
So, let's start at the top.
The primary function of a code review is to check code for quality.
An author submits their code to a reviewer, and the reviewer might reply to the pull request with a thumbs up and an “LGTM”. What are they actually saying (besides “Looks Good to Me”)?
The author might wonder:
- Did they run static analysis?
- Did they think my code would be easily readable in the future?
- Did they check to make sure the code isn’t redundant?
There’s no way to know if their notion of quality is unique or corresponds to a standard set across the team.
You could outline some standard code review guidelines on a wiki page, but that might collect dust over time.
Team priorities should be reflected in your regular code review process and explicitly listed out so that those priorities are translated into practice.
If your team cares about code maintainability, are you asking reviewers to note if the code is easy to understand? If you’re working in a highly-regulated field, are you making sure that other stakeholders have provided their feedback before moving ahead?
By creating checklists for different types of projects or deliverables, your team has the opportunity to create clear Definitions of Done across your peer review process.
When your team has clear expectations, you are more likely to have better code reviews. In our 2018 State of Code Review report, we found that there was a strong correlation between reviewers who knew what was expected of them and those who were satisfied with their review process.
Ideally, your checklists can be living and iterative documentation of your team priorities.
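As an illustration -- a minimal sketch, not a feature of any particular review tool -- a team's Definition of Done can be modeled as explicit, named checks that a review must satisfy before it counts as complete. The checklist items below are hypothetical examples of maintainability-focused priorities:

```python
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    description: str
    checked: bool = False


@dataclass
class ReviewChecklist:
    """A team's explicit Definition of Done for a code review."""
    items: list[ChecklistItem] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """Items the reviewer has not yet confirmed."""
        return [i.description for i in self.items if not i.checked]

    def is_done(self) -> bool:
        return not self.outstanding()


# Hypothetical checklist reflecting a team that prioritizes maintainability
checklist = ReviewChecklist([
    ChecklistItem("Static analysis ran with no new warnings"),
    ChecklistItem("Names and structure are easy to understand"),
    ChecklistItem("No redundant or duplicated logic introduced"),
])

checklist.items[0].checked = True
print(checklist.is_done())        # False: two items still outstanding
print(checklist.outstanding())
```

The point of making the list a data structure rather than a wiki page is that "done" becomes a computed answer, not a judgment call each reviewer makes privately.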
Improving Each Sprint With Iterative Reminders
Is there something that your team keeps forgetting to do?
In The Best Kept Secrets of Peer Code Review, Jason Cohen shares how his development team kept forgetting to kick the build number before QA sessions -- about 30% of the time. By building that step into the review checklist, the team remembered it every time.
If there are frequent mistakes or certain areas that your team wants to focus on, employing checklist items as reminders can be a simple and effective strategy for continuous improvement.
Design your checklists with usability in mind. If they are too long, they can feel like overkill. If they are too short, they might not provide much value.
Catching What Isn’t There
When a veteran developer starts reviewing a section of new code, it isn’t hard for them to catch typos or logic errors that need to be fixed. They can comment, mark the issues, and probably call the review complete.
But what if the code works fine, matches your styles and conventions, and passes automated analysis, but still doesn’t actually meet the requirements?
Checklists are powerful because they can remind a reviewer to check for things that might be omitted from the code in front of them. Without the requirements at hand, a reviewer might simply assume that the author was on the right track and move on.
By using a checklist, reviewers are working within the context of a project, not just the context of the code in front of them.
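One way to picture this -- a minimal sketch with made-up requirement IDs, which in practice would come from your tracker -- is to tag each signed-off checklist item with the requirement it verifies, so anything the code silently omits surfaces as an uncovered requirement:

```python
# Hypothetical requirement IDs for a project (in practice, pulled
# from an issue tracker such as Jira).
requirements = {"REQ-101", "REQ-102", "REQ-103"}

# Checklist items the reviewer signed off on, each tagged with the
# requirement it verifies.
reviewed = {
    "Pagination returns stable ordering": "REQ-101",
    "Errors are logged with correlation IDs": "REQ-103",
}

# Requirements no checklist item covered -- the "what isn't there".
uncovered = requirements - set(reviewed.values())
print(sorted(uncovered))  # ['REQ-102']
```

The set difference is trivial, but it captures the shift in perspective: the reviewer starts from the project's requirements and works toward the code, rather than the other way around.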
Connecting Your Workflow Across Tools
Speaking of context, if your team is using multiple tools to manage, track, test, and execute your software development, it can be difficult to keep a clear picture of your process. When do you use which tools for what?
Your checklists can detail each step in your process by citing specific tools.
If you are building your checklists in a peer review tool like Collaborator, you can create checklist items that link out to other tools. For example, your list could tie together projects in Jira, documentation in Confluence, test cases in HipTest, and repositories in GitHub.
Before you ship a new feature, you can go through a final review checklist to make sure that other stakeholders have provided their feedback, tests have been run successfully, and documentation has been updated.
By connecting checklists with your tooling, you can create a Digital Thread, or a comprehensive record from planning to deployment. This term has been popularized by quality management professionals in the manufacturing space, but certainly can apply to software development.
Showing a Clear Record of Your Work
If you are using a tool to drive your reviews, you get the additional benefit of checklists serving as a type of audit trail. Each item can document who checked it off, when, and if they had any qualifying notes.
In the future, if your team needs to look back on what kind of tests were run or who took part in a review, checklists can provide a clear history.
If your team is working in a highly-regulated industry, you may need to show proof that reviews took place for internal or external audits. Using checklists to maintain a record of peer reviews simplifies auditing, especially compared to piecing together evidence from scattered pull requests.
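A sketch of what such a record might look like -- the field names and JSON shape here are assumptions for illustration, not any tool's actual format: each sign-off captures who checked the item, when, and any qualifying notes, and the whole trail serializes into a record auditors can inspect later.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class SignOff:
    item: str
    reviewer: str
    checked_at: str   # ISO-8601 timestamp
    notes: str = ""


def sign_off(item: str, reviewer: str, notes: str = "") -> SignOff:
    """Record a checklist item as completed right now, in UTC."""
    return SignOff(item, reviewer,
                   datetime.now(timezone.utc).isoformat(), notes)


# Hypothetical review history for one change
trail = [
    sign_off("Unit tests pass", "dana"),
    sign_off("Regulatory review complete", "sam",
             notes="Compliance approved in ticket COMP-42"),
]

# A self-contained audit record
audit_record = json.dumps([asdict(s) for s in trail], indent=2)
print(audit_record)
```

Because every entry carries its own timestamp and reviewer, the record answers "who checked what, and when" without anyone having to reconstruct it from memory or commit history.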
Adopting Checklists that Actually Work
If your team is just starting to use checklists for reviews, don’t start with a full wish list. Start with a small, meaningful list that will actually be used.
If your team is already doing something really well and regularly, you might not need to include it in a checklist. Focus on new priorities and areas of improvement so you can clearly see the impact. In retrospectives, you can collaboratively discuss what might make sense to add or whether new additions have made a difference.
Each team is different and equipped with unique priorities. Embrace checklists as a way to define what matters to your team. When everyone knows explicitly what is expected of them and why, then they will be much more likely to organically grow a culture around those values.
If you are interested, you can learn more about creating checklists in Collaborator or you can get started with a free 30-day trial.