Wednesday, February 11, 2015

Interviewing and Performance

Recently, someone asked me a deep question: we all know (and Google has the data) that interviews do a poor job of predicting on-the-job performance. If that is the case, would a different form of interviewing (say, pair programming) or some other form of testing do a better job?

My answer to that question is "no." What most of the other articles on this topic do not note is that Google actually does have data on the major factor influencing on-the-job performance (at least, performance as viewed by Google's notorious promotion committees). It turns out that even in 2003-2004, Google had data indicating that who your first tech lead at the company was strongly predicted how well you would later do inside Google.

There are several reasons for this. One obvious reason is that the promotion committee is likely to weigh your tech lead's comments on your performance more heavily than some random engineer's. The deeper reason, however, can be found in the book Chasing Stars: fundamentally, all organizations have stated or unstated rules for how they work, and how well the on-boarding process explains those rules to new employees and indoctrinates them in the culture goes a long way toward explaining their future performance.

Google, at the time I joined, was a largely oral culture. A typical Noogler joining the engineering team, if he was conscientious, would find several bugs a day in the engineering training document during his first week of working through it, each necessitating a documentation change. Old or out-of-date documentation was rampant, and the tech docs team had their hands full trying to keep up with the churn of code and internal APIs. If you actually had to get work done, your most important tool wasn't the documentation or the internal search engine (which was laughably bad), but knowing whom to talk to.

For instance, if you needed to make a change to crawl, and your tech lead knew to say, "Go talk to Arup Mukherjee and ask him how you would do this," you were in luck: you'd be productive and efficient. If your tech lead said, "Go read the documentation," or worse, "Use the Source, Luke," not only would you waste a lot of time reading both code and documentation (as I was forced to once when dealing with the search results mixer), chances are that when you were done you would have done it wrong, and your code reviewer would spend gobs of time correcting the way you did things, forcing you to redo everything until you got it right. If that happened, you might as well kiss your "Exceeds Expectations" performance review goodbye. (And yes, I lucked into knowing people who wouldn't just tell me whom to talk to, but would walk me to the person's cube, provide introductions, and make it clear that what I was supposed to do was important enough to deserve help.)

I'm fond of saying that context matters a lot when it comes to performance. This type of context-sensitive performance isn't necessarily the result of the tech lead deliberately leading the poor engineer astray. It was because the tech lead did not provide a suitable context for the engineer to work in, and in the process made the job much, much harder (or in some cases nearly impossible) for the new engineer. Hence, if your interview process is successful at eliminating people who can't actually do the job, but you end up with variable or unexpectedly poor on-the-job performance from people who should be doing well, you need to examine your on-boarding process or the training process for your leads/managers.

The follow-up question then is, "If performance is so context-determined, why do we bother with interviews?" The answer is that the goal of the interview isn't to predict future performance. The goal of the interview is to ensure sufficient technical competency and cultural compatibility that, with a good on-boarding process or a decent tech lead/manager, the new engineer ought to be able to do a great job. Hence, when I run interviews, I don't favor esoteric problems that require (for instance) dynamic programming, but basic data structure questions. While I consider tests such as the Fizz Buzz Test way too simple to indicate whether someone has basic computer science knowledge, a coding question that approximates that level of complexity (while still testing basic computer science concepts) is typically all that is needed to weed out people who simply can't code and shouldn't be allowed access to your source control system.
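To make that level concrete, here is a sketch of the kind of question I mean (an illustrative example, not one from my actual interview loop): checking whether a string of brackets is balanced. It exercises a basic data structure, the stack, without requiring any algorithmic cleverness.

    def is_balanced(s):
        """Return True if every bracket in s is closed in the right order."""
        pairs = {')': '(', ']': '[', '}': '{'}
        stack = []
        for ch in s:
            if ch in '([{':          # opener: remember it
                stack.append(ch)
            elif ch in pairs:        # closer: must match the most recent opener
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack             # leftover openers mean unbalanced

    assert is_balanced("([]{})")
    assert not is_balanced("([)]")
    assert not is_balanced("(")

A candidate who reaches for a stack here (or an equivalent counter, for a single bracket type) demonstrates exactly the kind of basic computer science knowledge the question is meant to probe, and that is all the interview needs to establish.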
