Hey, I want your opinion on code reviews: what is the best way to use them in a professional environment? Pick one of the following and give me your thoughts (ordered from most forgiving to most strict):
- no code reviews, they are useless
- optional code reviews
- mandatory reviews on code that is already merged, optional fixes
- mandatory reviews on code before merging (like a pull request), with a time-frame for optional fixes (i.e. whether to fix what has been pointed out is up to the author); the merge will occur either way.
- mandatory reviews on code before merging (PR) with mandatory fixes.
Of course in open source development with public contributions, you’ll often see (5), but I’m not convinced it could work in professional dev.
Edit: I’m talking about a team of 5 mid to senior devs (no junior or interns) working on a 2-3 year project without many security concerns, but feel free to give me your general opinion.


Ah, I see that OP added more details while I was still writing mine. Specifically, the detail about having only a group of 5 fairly-experienced engineers.
In that case, the question still has to focus on what is an acceptable risk and how risk decisions are made. After all, that’s the other half of code reviews: first is to identify something that doesn’t work, and second is to assess if it’s impactful or worth fixing.
As I said before, different projects have different definitions of acceptability. A startup is more amenable to shipping some rather ugly code if its success criterion is simply to have a working proof of concept for VCs to gawk at. But a military contractor that is financially on the hook for broken code would need to be risk-averse. Such a contractor might impose a two-person rule (i.e. all code must have been looked at by at least two pairs of eyeballs, the first being the author and the second being someone competent to review it).
In your scenario, you need to identify: 1) what your success criteria are, 2) what sort of bugs could threaten those criteria, and 3) which person or persons can make the determination that a bug falls into the must-fix category.
On that note, I’ve worked in organizations that extended the two-person rule into a two-person sign-off: if, during review, both persons find a bug but agree that it won’t impact the success criteria, they can sign off on it and it’ll go in.
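To make that concrete, here's a minimal sketch of what such a gate could look like if you automated it, say, as a CI merge check. Everything here (the `Review` structure, the bug-ID waiver idea, the thresholds) is a hypothetical illustration of the two-person sign-off, not any particular tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str                              # who looked at the code
    approved: bool                             # did they sign off on the change?
    waived_bugs: frozenset = field(default_factory=frozenset)  # bug IDs they judged non-impactful

def may_merge(author: str, reviews: list[Review], open_bugs: set[str]) -> bool:
    """Two-person sign-off: at least one approval from someone other than
    the author, and every known open bug waived by at least two distinct
    people (the author can count as one of the two)."""
    approvers = {r.reviewer for r in reviews if r.approved and r.reviewer != author}
    if not approvers:
        return False  # nobody besides the author has looked at it
    for bug in open_bugs:
        waivers = {r.reviewer for r in reviews if bug in r.waived_bugs}
        if len(waivers) < 2:
            return False  # a known bug lacks two-person agreement that it's acceptable
    return True
```

For example, if both the author and a reviewer waive the same known bug, the merge is allowed; if only one of them waives it, it's blocked:

```python
reviews = [
    Review("alice", approved=True, waived_bugs=frozenset({"BUG-7"})),  # the author
    Review("bob", approved=True, waived_bugs=frozenset({"BUG-7"})),
]
may_merge("alice", reviews, open_bugs={"BUG-7"})  # True: two people waived BUG-7
```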
Separately, I’ve been in an organization that allows anyone to voice a negative opinion during a code review, which blocks the code from merging until either that person is suitably convinced that their objections have been addressed, or a manager’s manager steps in and makes the risk decision themselves.
And there’s probably all levels in between those two. Maybe somewhere has a 3-person sign-off rule. Or there’s a place that only allows people with 2+ years of experience to block code from merging. But that’s the rub: the process should match how much risk is acceptable for the project.
Boeing, the maker of the 737 MAX jetliner with its faulty MCAS behavior, should probably use a more conservative process than, say, a tech startup that makes IoT devices. But even a tech startup could be on the hook for millions if its devices mishandle data in contravention of data protection laws like the EU’s GDPR or California’s CCPA. So sometimes certain parts of a codebase will be compartmentalized and subjected to higher scrutiny, because a bug there could be big enough to end the organization.
Thanks for the insight
I think this is a good part of what I needed to be told, thank you!