Thinking more about what annoyed me about Norvig’s post on hiring at Google, I realize that it’s not that it involved a computer simulation of hiring – it’s that it pretended to take a simple simulation seriously, like it was, well, science or something.
I don’t have anything against quick-and-dirty simulations – I actually think that they can be one of the most fun kinds of computer programming. You do what is essentially a thought experiment, and then you get to see your assumptions played out in front of you. You say “Hey, what if A’s hired A’s, and B’s hired C’s, but A’s made the occasional mistake and hired B’s instead – what would happen?”
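That kind of thought experiment fits in a few lines of code. Here's a minimal sketch of the toy model just described – the mistake rate, the starting population, and the rule that C's just keep hiring C's are all assumptions I'm making up for illustration, which is rather the point:

```python
import random

def simulate_hiring(n_hires=1000, mistake_rate=0.1, seed=42):
    """Toy model: A's hire A's but occasionally err and hire a B;
    B's hire C's; C's hire more C's (an assumption of this sketch).
    Returns a count of each grade in the final organization."""
    random.seed(seed)
    grades = ["A"]  # start with a single A-grade founder
    for _ in range(n_hires):
        hirer = random.choice(grades)  # any employee may do the next hire
        if hirer == "A":
            new_hire = "B" if random.random() < mistake_rate else "A"
        else:
            new_hire = "C"
        grades.append(new_hire)
    return {g: grades.count(g) for g in "ABC"}

print(simulate_hiring())
```

Run it a few times with different values of `mistake_rate` and you can watch the C's take over – or not. But all you've demonstrated is what your own assumptions imply, which is the trouble with these exercises.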
Of course, though, it’s very hard for simulations to prove anything about the world – all you are seeing are your own assumptions and their implications, brought to “life” in front of you. Simulation programs like this are to thought experiments what calculators are to mental arithmetic – they don’t change the activity, they just make it a lot more efficient. And the worst way to use them is when you already kinda know what conclusion you’d like to reach (like that your company’s hiring philosophy is a good one)…. because usually there are enough input parameters to juggle that if you don’t get the right answer the first time, you will eventually.
No doubt the root of my feeling of distaste is just that I spent a long long time in grad school studying old-school AI, and this is exactly the kind of program that AI specialized in for a while. Some parts of AI had an engineering focus, and the programs were judged on how useful they were; the rest was essentially computer-aided philosophy, with the thought experiments helpfully instrumented with print statements. I actually don’t have anything against this kind of philosophizing and model-building, and I do think that programs can play a really cool role in showing where your assumptions are leading you astray. It’s just closing the loop back to anything about the real world that’s problematic.