The Why and How of Giving Room for Failure
Giving up control and creating room for failure is one of the most valuable skills I had to learn as a manager. It helped our team flourish and created many educational opportunities for individual contributors. However, a fear of disastrous mistakes and a misunderstanding of how people learn kept me from taking this approach earlier. I hope my story can help others reach this understanding more quickly.
As an individual contributor, I had been reasonably successful at maximizing my own performance. In the process, I learned to anticipate and prevent potential issues. Now, I was eager to help others acquire similar skills. My initial approach was to give lots of "proactive" feedback: I was constantly on the lookout for potential mistakes and offered advice about what I would do instead.
What I expected was that I could prevent any mistakes from happening while people somehow absorbed my past experiences through osmosis. In reality, this approach was neither effective nor scalable. As any good teacher or coach will tell you, we learn best when we get to practice on our own, not when someone prescribes solutions to us. The people who did improve significantly during this time did so despite my approach, not because of it.
Looking back at my own experiences, I realized that I learned most from situations where I felt a high level of control and felt responsible for the outcome. The most educational moments were the ones where I was also slightly out of my comfort zone. Going through such experiences helped me truly internalize new knowledge, rather than just knowing about it.
After some reflection and reading up on the topic, I realized that I had failed to promote the same experiences in others. I was preventing team members from making any of the mistakes that had taught me so much, inadvertently robbing them of valuable learning opportunities.
Take deployments, for example. In the beginning, I checked every pull request myself before it got deployed, to make sure it couldn’t bring down production. You don’t learn much about the risks associated with deploys if somebody else is always guarding the process for you like that. It’s only when you’re the one deploying those changes to production, and you feel responsible for them, that you actively learn about those risks.
After that realization, I started asking myself the following question: how can I create room for failure and learning while maintaining alignment and preventing disastrous mistakes? The latter concern was what had made me so overly careful in the first place.
Eventually, I found that the best way to answer that question was to define clear goals and expectations. This was important for two reasons. First, if there isn’t enough clarity about what’s expected, there’s a high chance of misalignment, which leads to suboptimal decisions at best and chaos at worst. Second, by setting goals rather than prescribing solutions, we give people freedom in how to achieve them. That allows them to take ownership and find the approach that works best for them.
Changing my habits to create clarity instead of providing solutions wasn't always easy. When done well, however, it allowed people to take responsibility and reach those goals without much guidance. Gradually they took on more and more, and pretty soon everyone was routinely doing the things I had once been so worried about delegating. Looking back now, it seems ridiculous that I ever worried so much about it.
Often, I still worried about mistakes, but I started handling those worries differently. Rather than trying to prevent every mistake, I focused on making sure we learned from the ones that did happen. Whenever we made a mistake, we held a post-mortem or retrospective and made sure we had at least one action item to do better next time.
This consistently led to incremental improvements to our tech stack and our processes, which over time added up to significant benefits. In the case of the deployment process, we went from me merging every pull request to everyone deploying through a Continuous Deployment setup, which ran not only our test suite but also our automated blackbox QA tests and sanity checks, such as verifying that our assets were correctly uploaded to the CDN.
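As a rough sketch of what such a CDN sanity check could look like (the base URL, asset list, and function names below are hypothetical illustrations, not the actual setup described above), a deploy step might simply verify that every expected asset responds on the CDN before the release is considered done:

```python
import urllib.request
import urllib.error

# Hypothetical values: a real pipeline would derive these from the build output.
CDN_BASE_URL = "https://cdn.example.com/assets"
EXPECTED_ASSETS = ["app.js", "app.css", "logo.svg"]


def asset_is_available(path: str) -> bool:
    """Check that an asset responds with HTTP 200 on the CDN."""
    request = urllib.request.Request(f"{CDN_BASE_URL}/{path}", method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status == 200
    except urllib.error.URLError:
        return False


def run_sanity_check() -> None:
    """Fail the deploy loudly if any expected asset is missing from the CDN."""
    missing = [path for path in EXPECTED_ASSETS if not asset_is_available(path)]
    if missing:
        raise SystemExit(f"Missing CDN assets: {', '.join(missing)}")
    print("All assets present on the CDN.")


if __name__ == "__main__":
    run_sanity_check()
```

The point of a check like this is that it turns a silent production problem into a loud, early pipeline failure, which is exactly the kind of safety net that makes delegating deploys feel safe.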
As for those "disastrous" mistakes I was originally worried about: most of them were prevented by the improvements I just described. For the ones that weren't, I did step in when I saw a mistake coming, but that became increasingly rare.
Additionally, I worked on instilling this mindset in everyone on our team, so that anyone could coach interns, or each other, in a similar fashion. This helped us share knowledge more systematically and made us more effective at onboarding new team members. One example I particularly liked is how we changed our pair and mob programming sessions. Too often, these practices involve a senior member doing the majority of the thinking and driving. Instead, we made sure that everyone got a chance to code, and that there were always moments where we asked each other to think through why we were doing things in a particular way.
In the end, giving people room for failure felt a lot more like giving them room for success. By creating an environment that promoted autonomy and learning from mistakes, we became a more resilient team.
Thanks to Alexander Claes, Charlotte Dekempeneer, and Filip Schouwenaars for reviewing my drafts.