First, imagine a game where you listen to a few seconds of music and then try to identify the instrument, the style, the band, and the year it was first performed. Do you think a professional musician would score well? I think so.
My program was like this, except for our professional programmers. I selected 30 interesting lines of code from 20 different parts of our now-sprawling codebase. Shown one of these fragments, an employee would be asked to guess: What language is it? What does it do? Where did it come from? Have you worked on this code? When they were ready, I showed what I knew about the code and offered checkboxes where they could mark whether they thought they had guessed right.
A running average was computed between 0.0 and 4.0. When I tested our software architects they scored between 1.3 and 2.4 out of 4.0. A 4.0 would be nearly impossible because nobody has worked in that many places, so a 2.4 is awesome. I asked everyone scoring over 2.0 how they knew so much. Each had an explanation, and it was always some combination of curiosity and a willingness to work through confusion.
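Here is a minimal sketch of how that running average might be computed, assuming one point per correct guess (language, purpose, origin, and whether you worked on it); the field and function names are my own, not the original implementation:

```javascript
// One point for each of the four questions answered correctly.
// Field names are illustrative assumptions, not the original code.
function snippetScore(answers) {
  const checks = ['language', 'purpose', 'origin', 'workedOn'];
  return checks.filter(key => answers[key] === true).length; // 0..4
}

// Average across every snippet answered so far: a value between 0.0 and 4.0.
function runningAverage(allAnswers) {
  if (allAnswers.length === 0) return 0.0;
  const total = allAnswers.reduce((sum, a) => sum + snippetScore(a), 0);
  return total / allAnswers.length;
}

// Example: two snippets answered, two correct guesses on each.
runningAverage([
  { language: true, purpose: true, origin: false, workedOn: false },
  { language: true, purpose: false, origin: false, workedOn: true },
]); // => 2.0
```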
I wrote this in JavaScript and published it on GitHub Pages inside the company. I kept scores in the browser's local storage, indexed by the SHA-1 of each sample. That let me distribute revised versions with more or different snippets and have players pick up where they left off, with only the new snippets starting fresh.
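A sketch of that persistence scheme, assuming the Web Crypto API for the SHA-1 digest; the key prefix and record shape are my assumptions rather than the original code:

```javascript
// Key each snippet's result by the SHA-1 of its text, so a revised build
// with new snippets leaves previously answered ones untouched.
async function sha1Hex(text) {
  const bytes = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest('SHA-1', bytes);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

// Record and record shape ("snippet-<sha1>" -> answers object) are assumptions.
async function saveResult(snippetText, answers) {
  const key = 'snippet-' + (await sha1Hex(snippetText));
  localStorage.setItem(key, JSON.stringify(answers));
}

async function loadResult(snippetText) {
  const key = 'snippet-' + (await sha1Hex(snippetText));
  const stored = localStorage.getItem(key);
  return stored ? JSON.parse(stored) : null; // null = not yet answered
}
```

Because the key is derived from the snippet's own text, shipping a new version of the page with extra or edited snippets never collides with old records: unchanged snippets keep their saved answers, and anything new simply has no entry yet.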