Our Quest for the Perfect Code Assessment
This Paired Blog is brought to you by Justin Wheeler, Senior Software Developer; David Crawford, Mobile App Developer; and Bill Osborn, Director of Account Management & Business Development.
The Bravo talent development team has a question written on a whiteboard in a common area: "How do we become the most attractive development shop in ~~West Michigan~~ the United States?" It's funny how a global pandemic can change your very micro view of talent and expand your playing field!
In a post-pandemic world where the constraints of location are looser than ever before, how do we attract and vet candidates from across the country? Pre-pandemic, we focused on attracting regional talent, relying on word-of-mouth and personal references to grow our development teams. While that is still the case, the world has gotten a lot smaller from a talent perspective over the last 18 months. For example, we work with organizations that never had a remote employee in their entire existence. Then, practically overnight, they had a 100% remote workforce. And the funny thing is... a remote workforce WORKED! Bravo, and the rest of the world, now have the opportunity to bring on the best and brightest across the globe, not just within driving distance of downtown Grand Rapids.
This fundamental change in work is an amazing opportunity to share the Bravo culture. With that comes the need to be diligent about who we bring into our family. This change sent our team on a mission to come up with a plan that gets us closer and closer to the perfect code assessment: one that matches our goal of being the most attractive development shop in the United States. In this post, we'll cover the common frustrations interviewers and candidates face in technical interviews, our dissatisfaction with the current solutions out there, and finally, our design for a better assessment.
What are the common frustrations developers face in technical interviews?
Personally, I’ve been frustrated in technical interviews to hear canned responses to common interview questions. When I ask what polymorphism is I don’t want the textbook definition recited to me. I want a general understanding and some examples of how that candidate utilized that concept before.
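To make that concrete, here is a minimal sketch of the kind of practical answer we'd hope to hear: not a recited definition, but a small example of polymorphism paying off in real code. The `Notifier` classes below are invented for illustration and are not part of our assessment.

```java
import java.util.List;

// Callers depend on this interface, not on any concrete class.
interface Notifier {
    String send(String message);
}

class EmailNotifier implements Notifier {
    public String send(String message) {
        return "email: " + message;
    }
}

class SmsNotifier implements Notifier {
    public String send(String message) {
        return "sms: " + message;
    }
}

public class Demo {
    public static void main(String[] args) {
        // The same loop works for every Notifier; the concrete type
        // is chosen at runtime. That is the practical payoff of
        // polymorphism: new channels can be added without touching callers.
        List<Notifier> channels = List.of(new EmailNotifier(), new SmsNotifier());
        for (Notifier n : channels) {
            System.out.println(n.send("build passed"));
        }
    }
}
```

A candidate who can walk through something like this, and point to where they used the pattern on a past project, tells us far more than a textbook definition ever could.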
From the candidate's perspective, I think traditional technical interviews cause too much stress. I know people who will spend days studying textbook terminology and practicing pointless coding exercises with tools like LeetCode. This doesn't benefit the candidate or the employer: in most real-life situations, it doesn't matter whether you can write code from memory to sort a list without using libraries.
I prefer practical examples that will tell me if the candidate will be a good employee or not. In these types of situations, the candidate shouldn’t feel the need to prepare for the technical interview. This is because the interview work will be very similar to how a real job would be, with no real right or wrong answer.
In reality, a developer's day-to-day involves, at a minimum, five things:
A ticket is created and documented
The developer works on the ticket
Testing is incorporated (Unit, functional, etc.)
A Pull Request is made by the developer
The developer is challenged on their decisions through PR review
Which of these are frequently ignored in the classic whiteboard problem? Everything but #2. Testing is a close second, but it's typically used to automate checking the "correctness" of the candidate's answer; the candidates themselves aren't usually writing the tests as part of the assessment.
So what are the common technical interview frustrations? They don't reflect reality, they're based on trick questions, they can be beaten by someone who gets lucky and knows the solution beforehand, there's no research phase, and they miss the rest of the "day-to-day" developer work we need someone to excel at. So we needed to design and evolve a solution that incorporates the realistic work of a developer.
Why were we not satisfied with current technology?
Pluralsight, Coderbyte, Filtered, and other tools were limited to the programming language itself, like Java. In the real world, projects lean heavily on frameworks, so knowing the frameworks is just as important as knowing the language, especially for very large frameworks like Spring. Popular programming assessment tools also have easy-to-find coding solutions on the web, and none of the assessments we found focused on areas like exception handling or testing.
There are a lot of solutions out there to help automate coding interviews. They will typically include these core features:
Tracking of code progress
Automated testing on solutions
These sound quite useful at first, but they typically focus on only one area of the development paradigm: solving an algorithm or some contrived problem. Any service that provides these assessments online is attempting to mimic the whiteboard problem, which has a major flaw: real work simply isn't done this way.
Additionally, when developers need to review the work, they have to leave their normal review process and switch to an online assessment process. Normal, formal reviews take the form of reviewing PRs. So why not review a candidate's work the same way real work is reviewed?
When considering how we wanted to screen candidates, we started to think about what’s important on a real project. This led us to think about how an employer views skills while on a project. The consensus was that the actual process of interacting with a live code base and ticketed issues would be a solid indicator of the candidate's abilities. This process mimicked live development work and would give us a lot of insight without having to watch the candidate write the code live.
Following this pattern we get to test the candidate not just on the core programming language, but also on:
Their ability to comprehend the business requirements (Trello)
Their ability to use version control (Git)
Their ability to update an existing code base (It’s really rare that a candidate would start a project from scratch)
Their ability to work with popular frameworks (Hibernate, Lombok, Spring, etc.)
Their ability to write unit tests (jUnit, Mockito, etc.)
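That last skill is worth illustrating. Below is a hedged, plain-Java sketch of the idea behind dependency-stubbed unit testing; in the actual assessment a candidate would reach for JUnit and Mockito, and the `TicketRepository`/`TicketService` names here are invented for illustration.

```java
// A dependency the service needs; in a real project this might hit a database.
interface TicketRepository {
    String findTitle(int ticketId);
}

// The small unit under test.
class TicketService {
    private final TicketRepository repo;

    TicketService(TicketRepository repo) {
        this.repo = repo;
    }

    String describe(int ticketId) {
        String title = repo.findTitle(ticketId);
        return title == null ? "unknown ticket" : "Ticket: " + title;
    }
}

public class TicketServiceTest {
    public static void main(String[] args) {
        // Hand-rolled stub, standing in for Mockito's when(...).thenReturn(...).
        TicketRepository stub = id -> id == 42 ? "Fix login bug" : null;
        TicketService service = new TicketService(stub);

        assertEquals("Ticket: Fix login bug", service.describe(42));
        assertEquals("unknown ticket", service.describe(7));
        System.out.println("all assertions passed");
    }

    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

A candidate who isolates a unit behind an interface like this, rather than testing against live infrastructure, is showing exactly the day-to-day testing habit we're screening for.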
The following is our solution:
Employer Responsibility (Bravo LT)
Create initial sample code in the target language (e.g., Angular, Java, etc.).
Ensure the code is visible to unauthenticated accounts (i.e., the repository is public). This avoids the managerial overhead of granting access.
Create tickets that clearly outline the requirements of the work to be done. Time-box the work clearly so the candidate knows to stop if a task is taking too long. This respects the candidate's time while ensuring timely responses and measuring time-management skills (no job gives a developer endless time to complete a task).
Ensure the ticket is visible to unauthenticated accounts, again to avoid managerial overhead.
Assign the ticket to the candidate via email or other means.
Review the Pull Request the candidate has created, question the decisions made, and check whether the changes meet the outlined acceptance criteria.
Candidate Responsibility
Read and comprehend the assigned ticket.
Ask questions (if any). It's common in the industry for requirements to be less clearly defined than they should be, so it's perfectly fair for the candidate to seek clarification if they're confused about the task.
Fork the repository. This avoids the managerial overhead of granting repository access: anyone can fork a public repo, while only added members can push to the original.
Complete the tasks defined in the forked repo.
Create a Pull Request into the original repo.
Address feedback added to the Pull Request by the employer.
Some early concerns brought up about this project revolve around the privacy of the assessment itself, and how to handle candidate response to PR feedback.
The GitHub project and Trello board are public. Doesn’t that make it easier for a candidate to “do better” than they usually would, if they already knew everything about the assessment details? We believe that’s missing the point.
We have clients with existing tech stacks or applications that a project could be built on, and therefore we likely already know a lot of the technical details before placing a developer. And with real-life work, there's a lot of research, discussion, and negotiation before committing to certain key tech details. Why not mimic this experience in the technical interview as well? It would be impressive for a candidate to do their research on us and see that we have public repositories used for assessments. If we assign a ticket to a candidate, they may not know the exact task, but they may be familiar with the environment because of their own proactivity.
We treat the assessment PR reviews as we would any real-life PR. We want to provide specific, technical feedback for the benefit of the candidate, but we don't expect new commits in response if that would push the candidate past the expected 1-2 hours of work. We want to question decisions for our own understanding, and show potential solutions and best practices for the candidate's learning.
We hope you enjoyed this break-down of our assessment process! Justin put a lot of work into setting it up, and we're excited to start using it!
If you’d like to try out our assessment for yourself, shoot us an email! We’re hiring for many interesting positions, and would love to have a discussion with you. You can find our openings here:
You can also check out our assessment repositories, and more, here: