I’ve code reviewed over 750 pull requests at Amazon. Here’s my exact thought process.

Curtis Einsmann
4 min read · Nov 10, 2020


Disclaimer: I’m not representing Amazon in any way. Opinions written here are strictly my own.

I’ve code reviewed over 750 pull requests in my 5-year software engineering career at Amazon. As my team’s tech lead, I provide insightful feedback and uphold a high bar for code quality.

But as a junior engineer, I was a poor reviewer. I didn’t know where to start, what to look for, or how to comment. I left useless comments on style. I rushed through reviews, giving the classic “LGTM!” while missing obvious flaws.

What follows is my thought process for reviewing pull requests (PRs).

I understand why code reviews matter.

Quality code improves readability and understanding of the system, which speeds up the team’s feature velocity. The time spent reviewing code for defects and quality pays dividends in the long term.

Code reviews are an excellent coaching opportunity. Insightful feedback accelerates learning and growth.

They’re also an excellent learning opportunity for the reviewer. Every engineer solves problems in their own unique way. Reviewing code exposes me to problem-solving strategies I can leverage later.

I’ve taken the time to learn programming principles.

I understand principles like SOLID, DRY, and KISS. I’ve read Clean Code. I understand the importance of data structure selection, accurate abstraction, logical naming, and control flow.
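As a quick illustration of the kind of duplication DRY targets (the code and names below are invented for this article, not taken from any real PR):

```python
# Before: the same validation copy-pasted into two request handlers.
def create_order(request):
    if not request.get("user_id") or not request.get("items"):
        raise ValueError("user_id and items are required")
    ...  # create the order

def update_order(request):
    if not request.get("user_id") or not request.get("items"):
        raise ValueError("user_id and items are required")
    ...  # update the order

# After: the rule lives in exactly one named place.
def validate_order_request(request):
    """Raise ValueError if the request is missing required fields."""
    if not request.get("user_id") or not request.get("items"):
        raise ValueError("user_id and items are required")
```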

I’ve also read counterarguments to these principles. I understand that they’re guidelines, not rules. They work in some contexts but not others.

The most important questions are: does it work? Is it easy to maintain? Will we be able to understand it 2 years from now?

I review the PR slowly.

Insightful feedback requires a deep understanding of the change. I don’t let my eyes glaze over a line of code I don’t fully understand.

I pretend there’s a flaw hidden in the code; I’m the detective.

I start with the crux.

I scan through the PR to find the critical piece of implementation. This could be the API entry point or the refactored data access layer.

I start reading. I write comments as I go, knowing I’ll need to revise them later.

I read randomly until I achieve deep understanding.

I follow function calls. I jump around. I read existing code to see how the change fits. I read the task for context on the problem.

Every line of code is a puzzle piece.

I often end up reading the same code 3, 4, even 5 times. I read until the entire puzzle forms in my brain.

I read class by class.

This comes only after I achieve deep understanding.

I brainstorm important edge cases as I read the implementation. I ensure those edge cases are covered when I read the test code.
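For instance, if a PR added a hypothetical paginate helper, these are the kinds of edge cases I’d want to see in its tests (the function and test names below are invented for illustration):

```python
import unittest

def paginate(items, page_size):
    """Split items into consecutive pages of at most page_size elements."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

class PaginateEdgeCases(unittest.TestCase):
    def test_empty_input_yields_no_pages(self):
        self.assertEqual(paginate([], 3), [])

    def test_exact_multiple_has_no_partial_final_page(self):
        self.assertEqual(paginate([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_remainder_lands_in_a_shorter_final_page(self):
        self.assertEqual(paginate([1, 2, 3], 2), [[1, 2], [3]])

    def test_nonpositive_page_size_is_rejected(self):
        with self.assertRaises(ValueError):
            paginate([1], 0)

if __name__ == "__main__":
    unittest.main()
```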

I comment with kindness.

I comment on the code, not the person. I avoid using “you” or “your.” I prefix nitpicks with “nit.” I phrase most comments as questions or suggestions.
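For example (the wording here is hypothetical): instead of “You broke the retry logic,” I’d write “nit: it looks like this change skips the retry on timeout; was that intentional?”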

I leave at least one positive comment. (I sometimes forget to do this; I shouldn’t.)

I make myself available for a verbal discussion if the author wants one.

I comment with accuracy.

I write comments while reading. But I rewrite most of them after reaching deep understanding. The first pass of reading doesn’t give me a clear enough picture of the puzzle for an accurate comment.

I don’t leave ambiguous comments. Each comment explains:

  1. The problem with the code.
  2. The reason why I perceive it to be a problem.
  3. My recommendation for how the author can resolve it.
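Put together, a comment on a hypothetical snippet might look like this (the code and names are invented for illustration):

```python
def notify_all(orders, get_user, send_notification):
    for order in orders:
        user = get_user(order.user_id)  # one network call per order
        send_notification(user, order)
```

“This loop makes one get_user call per order (the problem). A large batch would mean hundreds of sequential network round trips, which could blow the endpoint’s latency budget (the reason). Could we fetch the users in one batched lookup and join them in memory? (the recommendation)”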

I leave style decisions up to the automated formatting tools.

I approve when the PR is good, not perfect.

I understand perfection is pie in the sky. I approve if it works and it’s readable.

But I hold my teammates to a relentlessly high standard. I leave a [kind, accurate] comment even over minor confusion. I hold the PRs I author to the same standard.

Some teammates have a proven track record of delivering quality, functional code. I understand these teammates deserve leniency. I’m not as thorough with these reviews, because I don’t need to be.

I seek feedback for whether I’m reviewing well.

It’s difficult to measure code quality. Possible metrics are minimal production defects, feature velocity, and new-hire ramp-up speed.

But these are hard to quantify. So is whether I’ve performed a good review.

I seek feedback from other engineers to understand if I’ve been helpful. I understand that I’ll improve as I perform more reviews.

How do you review pull requests? Does your thought process differ from mine? I’d like to hear it!

Like this article? It started on Twitter. Follow me for more!


Written by Curtis Einsmann

Software engineer, writing to help developers level up. All stories free. Follow me on Twitter for more: https://twitter.com/curtiseinsmann