Why You Get Recruiting Wrong In 80% Of Cases And What To Do About It

I am currently reading a book about 'people decisions' titled 'It's Not the How or the What but the Who'. It is primarily about various aspects of recruiting, and I am enjoying it a lot so far. Although I have not finished it yet, I wanted to share an interesting insight that shows, with simple numbers, how hard recruiting actually is.

The author starts with a well-known observation: in complex professional jobs, the productivity of a 'top' employee can be 10x+ higher than that of an 'average' employee. This is very different from 'simple' manual jobs, where the difference is less than 50% according to some studies. The phenomenon is well captured by the chart below.

If this is true, then we should try to hire 'top' performers, especially for top positions within an organisation. Let's see how likely we are to succeed, assuming that:

  • top performers make up 10% of the population of potential hires,
  • we are 90% right in identifying the real traits of a candidate, i.e. whether she or he is 'top' or not.

Based on these assumptions, what are the chances that we will, on average, hire the top person for the job? As little as 50%. I found this very counter-intuitive, given that we assumed we are right in 90% of cases. The chart below explains why with some simple math.

What is even worse is that in reality our ability to correctly identify whether someone is 'top' or not is probably lower, say 70%. Under that assumption the odds drop further: we make the hire we want in only 21% of cases, on average. A 79% error rate!
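To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the book) that applies Bayes' theorem to compute the probability that a hired candidate is actually a top performer, for a given base rate and assessment accuracy:

```python
def p_top_given_hired(base_rate: float, accuracy: float) -> float:
    """Probability that a hired candidate is a top performer.

    base_rate: fraction of top performers in the candidate pool
    accuracy:  probability we classify any candidate correctly
               (assumed equal for top and average candidates)
    """
    # We hire a top performer when one shows up AND we classify correctly.
    p_hire_top = base_rate * accuracy
    # We hire an average performer when one shows up AND we misclassify.
    p_hire_avg = (1 - base_rate) * (1 - accuracy)
    # Bayes: P(top | hired) = P(hire a top) / P(hire anyone)
    return p_hire_top / (p_hire_top + p_hire_avg)

print(p_top_given_hired(0.10, 0.90))            # 0.5  -> the 50% case
print(round(p_top_given_hired(0.10, 0.70), 2))  # 0.21 -> the 21% case
```

With 90% accuracy, hiring a top performer (10% × 90% = 9%) is exactly as likely as hiring an average one by mistake (90% × 10% = 9%), which is why the result collapses to 50%.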

This is just a model meant to illustrate with simple numbers where the problem comes from and how important it is to really focus on getting recruiting right. The author suggests a few strategies to tackle these problems:
  • generate a great pool of candidates rather than an average one; focus on the right sources,
  • try to test people through work samples rather than just interviews and reference checks,
  • make a number of people assess a hire, but keep the number of assessors very small, so as not to wrongly eliminate too many great candidates from the process.

If you have developed further successful recruiting strategies to tackle the above issues, please share.

Based on my experience with the book so far, I highly recommend it to anyone involved in the recruiting process and in making decisions about people.

11 responses
Good article, Pawel. As a tech recruiter, another way in which I approach getting on board great candidates is through referrals. Excellent candidates tend to know other excellent candidates...
Thanks, Ciaran - makes sense!
Thanks for sharing, Pawel! Your example can be explained by the base rate fallacy (Bayes' theorem). I haven't read the book you are referring to, but I can recommend "Thinking, Fast and Slow" when it comes to decision making. P.S. It is a funny coincidence that I applied for the internship position Point Nine Capital offers right now :) Best, Fabian
Thanks, Fabian.
Indeed. It took a while to work it out analytically, though (with the help of our in-house mathematicians). Question: if a girl walks into my office (this should work for guys, too) and I hire her, what is the probability that she is a top performer?

Case #1: 10% (top performer) × 90% (I hire) = 9% (I hire a top performer)
Case #2: 10% (top performer) × 10% (I don't hire) = 1% (I miss a top performer)
Case #3: 90% (loser) × 10% (I hire) = 9% (I hire a loser)
Case #4: 90% (loser) × 90% (I don't hire) = 81% (I don't hire a loser)

Answer: I hire in 18% of cases, and within those the probabilities of hiring a loser and hiring a top performer are equal (9% each), therefore the probability that I hired a top performer is 50%. QED.

But look at it from a positive angle: the probability of not hiring a loser is 81%, and of missing a top performer only 1%.

So what to do to raise the odds? Here's the idea: don't hire them at once; put the selected 18% into another round of evaluation. Now you have a group of candidates of whom you know 50% are top performers and 50% are losers, which is much better than what you started with (only 10% top performers). In the next round you already have a 90% probability of hiring a top performer! (Do the math as above.)

But there's an important premise: the methods of detecting top performers in the successive rounds have to be uncorrelated; otherwise you always get the same results for the same people. How do you check that the methods are uncorrelated? Here's a possibility: calibrate them on the same group of people; they have to misidentify a different set of losers as top performers (do it on your own staff, for whom you know where on the curve in Fig 8-1 they sit). You don't really care if each method misses a different top performer (alpha error); your goal is to minimize the probability of hiring ANY loser (beta error).
Result: you don't really need an expensive testing program based on work samples (as proposed in the book), just an uncorrelated set of questionnaires. The beauty: the process can be automated. I'm interested in your thoughts!
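The multi-round idea in the comment above can be sketched in a few lines (my own illustration, assuming each round is 90% accurate and the rounds are perfectly uncorrelated): each round's output probability simply becomes the next round's base rate.

```python
def bayes_update(base_rate: float, accuracy: float) -> float:
    """One round of screening: returns P(top | hired) for this pool."""
    p_hire_top = base_rate * accuracy          # top candidate, correctly kept
    p_hire_avg = (1 - base_rate) * (1 - accuracy)  # loser, wrongly kept
    return p_hire_top / (p_hire_top + p_hire_avg)

rate = 0.10  # 10% top performers in the initial pool
for round_no in range(1, 4):
    # Assumes each round is independent of the previous ones (uncorrelated).
    rate = bayes_update(rate, 0.90)
    print(f"after round {round_no}: {rate:.1%} of hires are top performers")
```

This reproduces the comment's numbers: 50% after one round, 90% after two, and roughly 98.8% after three, as long as the independence assumption holds.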