I recently read Daniel Kahneman’s “Thinking, Fast and Slow” and it changed the way I look at the world. The book is about how people make decisions, and among other things, it inspired me to take a look at Synapse’s interview process and how we can make better hiring decisions.

To vastly oversimplify Kahneman’s life’s work, human beings see themselves as much more rational decision makers than they actually are. Our brains are incredibly good at pattern matching, and we use mental shortcuts to make decisions in ways that our conscious brain is often not aware of. Kahneman calls fast, unconscious, heuristic-based thinking “System 1” thinking, and slow, rational, deliberate thinking “System 2” thinking. If you’re not familiar with Kahneman’s work, this review of his book gives a good summary of its themes.

Since reading Kahneman’s book, I’ve been looking for ways to force my brain into using System 2 thinking to make better decisions. One of the examples Kahneman cites in his book is a review of Israeli judges’ rulings in parole hearings. The judges were substantially more likely to grant parole after they ate lunch (it’s worth clicking on the link and looking at the graph to see how dramatic the effect is). Kahneman’s theory is that System 2 thinking is computationally intensive, and glucose is the energy source for our brain. When we’re hungry, it’s harder to use System 2 thinking. I’ve started blocking out 10 minutes before situations where I need to make critical decisions so that I can focus, prepare, and assess whether I need a snack.

Thinking about interviewing in the context of Kahneman’s work doesn’t inspire confidence about an average human’s ability to make good hiring decisions. This is worrisome because hiring decisions are some of the most crucial decisions we make at Synapse. Synapsters work closely together in teams, and we care deeply about the community we’ve built. We want to hire people who will bring great skills to our project teams and share our core values like tenacity and egolessness.

After identifying interviews as a place where we want to promote System 2 thinking, I looked into the research literature on interviews. Many of the interview practices that researchers have found to be effective, like structured interviews, do encourage System 2 thinking. My fellow engineering directors and I agreed to make our interviews more consistent across teams and to make some changes that encourage System 2 thinking. Here are four changes that we’ve implemented for most engineering interviews:

1. We use a standard rubric to score candidates

If we want to evaluate candidates objectively, we need to be clear about what criteria we’re evaluating them against. Asking interviewers for specific information instead of a general assessment promotes System 2 thinking and ensures that each interviewer ties their evaluation to the right criteria. For my senior electrical engineering hires, here is a section of my rubric (a rough sketch of one way to capture it as a shared scorecard follows the list):

  • Do they have a good grasp of the technical fundamentals?
  • Do they have a good problem-solving approach – are they rational, methodical, creative?
  • How do they do at system-level thinking & product architecture formulation?
  • Are they good at project planning and task breakdown and prioritization?
  • How would they be to work with on a team? (Do they ask good questions, have no ego, and seem open to input from others?)
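
To make the rubric concrete, here is a rough sketch in Python of what a shared scorecard for criteria like these might look like. The criterion names and the 1-5 scale are placeholders for illustration, not our actual hiring template.

    # Illustrative only: criterion names and the 1-5 scale are placeholders,
    # not an actual hiring template.
    SENIOR_EE_RUBRIC = [
        "technical fundamentals",
        "problem-solving approach",
        "system-level thinking and architecture",
        "project planning and prioritization",
        "teamwork and openness to input",
    ]

    def blank_scorecard(criteria=SENIOR_EE_RUBRIC):
        # Each interviewer fills in a score from 1 to 5 per criterion after the interview.
        return {criterion: None for criterion in criteria}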

2. We have interview candidates complete a “design challenge” exercise before in-person interviews

Standing at a whiteboard in front of strangers during an in-person interview and working through questions that span an entire discipline of engineering isn’t entirely representative of what people do in their day-to-day lives at Synapse (although it does remind me of some of my more stressful client meetings). To get a more well-rounded picture of a candidate’s abilities, we give candidates for some positions an open-ended design challenge and ask them to send us back a presentation describing their recommendations. This looks much more like our day-to-day work at Synapse – we encounter a technical challenge or issue, do some background research and work, and present recommendations to our clients.

3. We agree on a standard interview format

Before starting interviews for an open position, we pick a set of topics and questions that cover the areas in the rubric that we’re going to assess candidates against. We’ll adjust the exact questions we ask based on a candidate’s background and expertise, but the basic outline remains consistent. The combination of a rubric and a consistent set of interview topics is in line with structured interviewing, which is a better predictor of job performance than unstructured interviewing.

4. After the interview, the full interview team meets for a candidate debrief

The first thing we do at our interview debrief is score the candidate against our rubric for the job. We do this as a blind vote so that the interviewers won’t be influenced by each other’s scores – a phenomenon Kahneman calls “anchoring bias”. After we score the candidate against the rubric, we open up discussion on the candidate. If the interviewers didn’t agree on the scoring for a particular criterion, we focus the discussion on that area.
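
To illustrate that last step, here’s a rough sketch in Python of how blind scores might be tallied and the widest disagreements flagged for discussion. The interviewer names, criteria, and scores are all made up.

    # Illustrative only: interviewer names, criteria, and scores are made up.
    blind_scores = {
        "interviewer_a": {"technical fundamentals": 4, "problem-solving approach": 3, "teamwork": 5},
        "interviewer_b": {"technical fundamentals": 4, "problem-solving approach": 5, "teamwork": 4},
        "interviewer_c": {"technical fundamentals": 3, "problem-solving approach": 2, "teamwork": 5},
    }

    def disagreements(scores, threshold=2):
        # Flag criteria where the gap between the highest and lowest blind score
        # meets the threshold; these are the areas to focus the debrief on.
        criteria = next(iter(scores.values())).keys()
        flagged = {}
        for criterion in criteria:
            values = [interviewer_scores[criterion] for interviewer_scores in scores.values()]
            spread = max(values) - min(values)
            if spread >= threshold:
                flagged[criterion] = spread
        return flagged

    print(disagreements(blind_scores))  # {'problem-solving approach': 3}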

Doing the debrief in person ensures that the interviewers are all focused on the evaluation and not distracted – the meeting moderator can make sure that everyone’s laptop is closed and their attention is on the discussion. Focused evaluators are more likely to use System 2 thinking. There’s an interesting connection here to diversity in engineering, which is another interest of mine. Workplace diversity research shows that rushed or distracted evaluators may act with more bias. To me, unconscious bias is another example of System 1 thinking; it’s the brain unconsciously pattern matching against available cultural stereotypes. As part of the effort to improve our interview process, I spent some time looking into research-backed recommendations for promoting diversity in hiring and was pleased by how well they lined up with more general research into effective interviewing.

My team has used this updated interview process in hiring three people in the last six months. Though I don’t have a large enough sample size to give any meaningful numerical metrics, I do believe our interviews have improved. I feel that I have more information about candidates and am better prepared to make hiring decisions after interviews. We are going to continue using this updated process, monitor how it is working, and make improvements. If it’s not successful, maybe we’ll just start scheduling all of our interviews immediately after lunch.