this post was submitted on 20 Mar 2026
24 points (76.1% liked)
Programming
Ah, I see that OP added more details while I was still writing mine. Specifically, the detail about having only a group of 5 fairly-experienced engineers.
In that case, the question still has to focus on what is an acceptable risk and how risk decisions are made. After all, that's the other half of code reviews: first is to identify something that doesn't work, and second is to assess if it's impactful or worth fixing.
As I said before, different projects have different definitions of acceptability. A startup is more amenable to shipping some rather ugly code if its success criterion is simply to have a working proof of concept for VCs to gawk at. But a military contractor that is financially on the hook for broken code would need to be risk-averse. Such a contractor might impose a two-person rule (i.e. all code must have been looked at by at least two pairs of eyeballs, the first being the author and the second being someone competent to review it).
In your scenario, you need to identify: 1) what your success criteria are, 2) what sorts of bugs could threaten those criteria, and 3) which person or persons can determine that a bug falls into the must-fix category.
On that note, I've worked in organizations that extended the two-person rule into a two-person sign-off: if during review both people find a bug but agree that it won't impact the success criteria, they can sign off on it and it'll go in.
Separately, I've been in an organization that allows anyone to voice a negative opinion during a code review, and that will block the code from merging until either that person is suitably convinced that their objections are ameliorated, or until a manager's manager steps in and makes the risk decision themselves.
And there's probably all levels in between those two. Maybe somewhere has a 3-person sign-off rule. Or there's a place that only allows people with 2+ years of experience to block code from merging. But that's the rub: the process should match how much risk is acceptable for the project.
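To make the range of policies above concrete, here's a minimal sketch (all names and thresholds are hypothetical, not any real tool's API) that models a merge-approval policy as data, so the "how much risk is acceptable" decision becomes an explicit, tunable setting rather than tribal knowledge:

```python
from dataclasses import dataclass

@dataclass
class MergePolicy:
    required_signoffs: int = 2           # two-person rule by default
    min_reviewer_experience_years: int = 0  # e.g. only 2+ years can block/approve
    objections_block: bool = True        # any unresolved objection blocks the merge

@dataclass
class Review:
    reviewer: str
    experience_years: int
    approved: bool                       # False means an objection was raised

def can_merge(policy: MergePolicy, reviews: list[Review]) -> bool:
    # Only reviewers meeting the experience bar count toward the decision.
    eligible = [r for r in reviews
                if r.experience_years >= policy.min_reviewer_experience_years]
    # An unresolved objection from an eligible reviewer blocks the merge.
    if policy.objections_block and any(not r.approved for r in eligible):
        return False
    # Otherwise, enough sign-offs lets it in.
    return sum(r.approved for r in eligible) >= policy.required_signoffs

# The same set of reviews passes or fails depending on the policy:
startup = MergePolicy(required_signoffs=1, objections_block=False)
contractor = MergePolicy(required_signoffs=2)
reviews = [Review("alice", 3, approved=True), Review("bob", 1, approved=False)]
# can_merge(startup, reviews) is True; can_merge(contractor, reviews) is False.
```

The point of the sketch is just that the knobs (sign-off count, who may block, whether objections block at all) are the process, and they should be set to match the project's acceptable risk.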
Boeing, the maker of the 737 MAX jetliner that had faulty MCAS behavior, probably should use a more conservative process than, say, a tech startup that makes IoT devices. But even a tech startup could be on the hook for millions if its devices mishandle data in contravention of data protection laws like the EU's GDPR or California's CCPA. So sometimes certain parts of a codebase will be compartmentalized and subject to higher scrutiny, because a bug there could be big enough to end the organization.
Thanks for the insight
I think this is a good part of what I needed to be told, thank you!