Having worked at a number of places, I have, of course, done my share of technical tests. I know the form for this, and am perfectly comfortable writing code that has comprehensive unit tests, full comments, documentation, consideration of SOLID principles, which is tested against the examples, etc. I have an almost perfect track record when it comes to these, because generally they ask for a reasonably well-defined piece of logic that’s not excessive in scope and which doesn’t expect any niche knowledge that you can’t easily look up. Mostly they’re good tests, because they establish that you can write working code that someone else could work on. You can demonstrate an ability to work in a style, and the tests can be done in a candidate’s own time to minimise interfering with existing commitments.
The one time I failed a test came after I had already built a strong track record, and so it came as something of a surprise. The company apparently felt that all the above elements weren’t sufficient, despite stating explicitly that there was no need to ‘gold plate’ the test. The issue seems to have been that they also expected a full algorithm implementation on top of all that – the technical test options were a full Constraint Optimisation algorithm for meeting planning, a Graph Exploration algorithm for route planning, or a Natural Language Parser for interpreting free-text input. All from an unpaid piece of work that would just be thrown away afterwards.
I’m sorry, but no. If you pay me to implement a meeting planner I’ll do constraint optimisation (although I’ll probably use the available libraries for the job, since that’s likely to be more efficient than my reinventing the wheel). Similarly, if you want to work on a commercial project creating a bot or similar tool that needs to understand free-form natural language, then I’ll work on a full NLP system, although again I’d normally use libraries, since even ’60s-era expert systems are non-trivial amounts of work. I should also point out that I’d equally happily work on an Open Source project that works towards these kinds of systems. It’s not about being paid directly, but about my time being worth something. Working on a project that someone may use creates something of value from my time; implementing an extensive but throw-away technical test doesn’t.
I would ask what a company like this hopes to achieve by setting such a technical test. The first impression it creates is that they place little value on the time of the potential employee, because they demand unpaid effort so freely. This can’t create a good impression of the company, because any company willing to make such demands at this stage seems likely to make similar demands of its employees, which doesn’t suggest a good work-life balance.
The second is that they do not care about being clear in their requirements. One reason for relatively simple technical tests is that it is easier to be very clear about what is and isn’t expected. A complex test with a caveat not to gold plate leaves a candidate guessing how much to comment, document, refactor and restructure for SOLID principles, validate against bad inputs, handle additional use cases, unit test, plan for extensibility, etc. Most candidates will guess wrongly, but that’s not their failing, it’s the company’s, because it has failed to communicate its expectations and then blamed the candidate for not magically knowing them. A technical test isn’t a project where the goals are unclear, so it’s relatively easy to be very detailed in your specification. By all means expect Agile practices like test-driven development, but because there aren’t daily stand-ups, stakeholder participation, etc., some communication has to revert to a more traditional specification-document approach. I’ll push for Agile practices in the workplace, but I’ll equally recognise where they won’t work and will adapt my practices accordingly, and a company should, too.
To be clear, I did wonder whether a more advanced implementation was expected. The test was much longer than any other I had received in the past and included instructions not to ‘gold plate’, so I tried to balance how complex a system to implement against that instruction. I decided that it was most appropriate to focus on handling the defined cases and adding appropriate tests, and that a full-complexity algorithm would not be required for such a test; apparently this was wrong. I was left guessing on scope, not on how to implement logic, so their conclusion is that I don’t have sufficient knowledge, when what I didn’t know was how much was expected rather than how to implement what they expected. It’s the wrong conclusion, resulting from asking the question in the wrong way. I have implemented more complex Natural Language Processing than I implemented for the test (although my language for that was Prolog, as a tool well suited to the task, rather than C#). I’ve also done some Graph Exploration algorithms (if you’ve seen Project Euler you’ll know that several of its questions cannot be solved without some consideration of how to avoid brute-force calculations, and I’ve previously looked at route planning for a robot vehicle, implementing code in Matlab and Python). Constraint Optimisation I admittedly haven’t done much of, but I know that naive algorithms can hit severe performance issues, and I would generally prefer to use libraries that are likely to prove much more efficient than any first-cut algorithm I’d implement.
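For readers unfamiliar with the graph-exploration style of problem mentioned above, here is a minimal sketch of the idea in Python. This is not the test’s actual task – the graph, node names, and costs are invented purely for illustration – but it shows the core of route planning: a priority queue (Dijkstra’s algorithm) lets you find the cheapest route without brute-forcing every possible path.

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm: cheapest path cost in a weighted graph.

    `graph` maps each node to a dict of {neighbour: edge_cost}.
    A priority queue avoids brute-forcing every possible route.
    """
    frontier = [(0, start)]          # (cost so far, node)
    best = {start: 0}                # cheapest known cost to each node
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                 # stale queue entry, skip it
        for neighbour, edge in graph[node].items():
            new_cost = cost + edge
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour))
    return None                      # goal unreachable

# A toy route-planning graph (names and costs are illustrative only).
roads = {
    "depot": {"a": 2, "b": 5},
    "a": {"b": 1, "goal": 7},
    "b": {"goal": 2},
    "goal": {},
}
print(shortest_path_cost(roads, "depot", "goal"))  # → 5 (depot→a→b→goal)
```

Of course, this is exactly the sort of thing where, in production code, I’d reach for an existing graph library rather than maintain my own implementation.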
The strangest bit about the whole test was the requirement not to use 3rd party libraries. If you want to work efficiently, don’t reinvent the wheel. I put more effort into learning the principles and then the appropriate tools than into writing toy imitations of what’s already available. I’ll use R, or Pandas (in Python), or Accord.NET (in C#) for machine learning, despite having implemented a basic neural network and other algorithms in Octave, because the tools are there and perform better than any code I’d write. This requirement will favour those who reinvent the wheel over those who don’t. I’m valuable because I know the right tools to use. For example, I haven’t implemented quicksort and bubble sort and so on, because working in C# there’s no need to. After implementing a Neural Network (for a course) I’ve not used that code since, because I get better performance from the libraries in R and Matlab. There have been many articles about ‘NIH’ (Not Invented Here) syndrome, where significant resources are wasted implementing something that’s buggier and slower than 3rd party libraries that do the same job. I’ll ALWAYS call for identifying a complete toolbox first. Occasionally you’ll need to write lower-level tools or libraries. However, in the same way as you wouldn’t use Assembly to write an e-commerce site, you’re mostly better off understanding the basic principles of an algorithm (to understand its limitations and requirements) and then using an existing, well-tested implementation where one exists. I’d rather hire someone who can identify the best libraries to use than someone who can write a pale imitation of them.
I have a job. Between that and commuting that’s around 65 hours a week, currently. I have a family, and since I’m away during the week they expect my attention when I’m around at the weekend. I also need to sleep. Ask too much of my time unpaid and I’ll just say no. It’s not just about placing a value on my time, but if I push too hard I’ll get tired. I’m not going to risk doing a bad job for an existing client by being tired, because that would not be professional behaviour. Perhaps those companies asking candidates to implement extensive technical test pieces should consider what kind of candidate would put in that much time. Candidates between roles will do everything they can, but those who are sufficiently in demand to be currently engaged should be prioritising their current commitments over pursuing their next step. I don’t want to be the kind of person that puts those priorities the other way around, and I’d suggest that no company wants that kind of person either.
At the end of the day I’d say companies should consider their principles and the principles of the people they want to hire. Treat your candidates like their time is valuable and don’t waste it. Expect them to treat their existing client/employer as their top priority. This one wasn’t about the money (it would have been a pay cut), but it upset me because the company talked about social justice. They talked a good talk but didn’t walk the walk.
So what are the solutions? Well, for working out whether you want to work with me, drop me an email and I’ll happily send over example technical tests I’ve done in the past. I’m not afraid they’ll make me look bad, and I’d rather not keep doing random throw-away developments that create no value. Why not ask potential candidates if they have any examples of code they can send for evaluation before you ask them to implement something throw-away? Alternatively, if you want to specify something new, how about thinking about things in a different way? Why not hire them for a day to work on something small but real? Alternatively, point them at an Open Source project and ask them to do some work on it, or ask them to identify code they’ve contributed to an Open Source project. If you work with charities, maybe get something implemented for the charity. There are lots of ways to ensure that valuable time is not wasted on creating something with no actual value while still getting the information you need. Most of these suggestions don’t demand specific algorithm knowledge, and they avoid some of the pitfalls of the complex-algorithm technical test.