Growth Engineering challenges and principles to overcome them

Following the wave of excitement about product-led growth as a discipline and the success stories of teams at some of the fastest growing companies, many tech startups are now spinning up their own growth teams to fuel their growth engine. Starting a new team is never trivial, and arguably the most critical role to hire for is engineering. However, there are only so many engineers with experience working in growth teams, and many of the folks being hired today - even senior engineers with years of experience building software - find themselves having to adapt to a different environment and facing challenges they haven’t encountered before.

Let’s look at some of the key differences in how Growth Engineering teams operate compared to other product teams, the challenges they face, and some of the levers they can pull to overcome those challenges.

The Product Growth approach

The main difference between Growth Engineering teams and other engineering teams is the way growth teams approach building software. I covered this in more depth in my post What is a Growth Engineer?, but in short: growth teams usually start from raw hypotheses and ideas on how to solve user problems, validate ideas and de-risk assumptions through user testing and experimentation, measure impact with A/B testing and analytics, learn from the results, and refine their ideas through iteration until the solution is ready to be released (or discarded).

The product growth approach: observe user problems, generate hypotheses, validate and de-risk, analyze results, iterate, release and/or learn

Because of this experimentation-driven and iterative approach, the most common outcomes when building something are to either double down on a solution and iterate multiple times to eventually release it to users, or instead discard the idea quickly and learn from it.

Traits and challenges of Growth Engineering work

Given this approach to building software and the possible outcomes of each initiative, we can start to outline certain traits and characteristics of Growth Engineering work, which we can define as:

  • Dynamic
  • Fast paced
  • “Touch everything - own nothing”

Dynamic

Growth Engineering work is dynamic in the sense that most initiatives will either go through many iterations or be discarded, so the code engineers initially write is usually short-lived and needs to change over time to adapt to new iterations and changing product requirements.

Fast paced

Growth Engineering work is fast paced because cutting down the time it takes to learn and iterate is crucial to the success of the team.

“Touch everything - own nothing”

Growth Engineering teams usually operate in many areas of the codebase that they don’t own, because they work on initiatives that span multiple product areas.

These traits and characteristics come with certain challenges:

  • How to make room for dynamism when designing and coding a solution? 
  • How to balance speed of execution and quality?
  • How to best collaborate with other teams?

Let’s look into each of these challenges and what guidelines and principles engineers can follow to overcome them.

Making room for dynamism

This is probably the first challenge engineers face when joining a growth team, and it usually unfolds in a situation like this.

As a new engineer on the team, you build and launch your first experiment following the initial spec and designs. A few days after launch the PM (product manager) asks you “can we add a second variant with this additional change?” and you answer “sure, I’ll need to refactor my code a bit, but it’s doable in a few days”. Once the experiment is over the PM says “the results are actually null and our success metric didn’t move, but we saw some good engagement with feature X, so let’s run a follow-up experiment with this other change” and you go “oh… ok. This isn’t supported by my current implementation, so I’ll need to refactor quite a lot, but it’s doable”. After the follow-up experiment is over the PM says “actually… can we revert to the original solution and add a third variant with this key difference so we can validate this hypothesis?” and you start questioning your choice of joining a growth team.

This might sound like an engineering nightmare to some, but it’s a very realistic chain of events many Growth Engineers can relate to.

So how can Growth Engineers make room for dynamism when designing and coding solutions so that they can easily adapt to changing requirements and multiple iterations?

Keep it simple

The first principle is to keep your code simple, especially when working on a new initiative or experiment where the level of uncertainty is high. This means: don’t over-engineer your solution to cover possible future use cases that might never be needed, avoid unnecessary abstractions that add complexity, and prefer code duplication over building DRY (Don’t Repeat Yourself) modules and APIs. Simple code is easier to extend with new use cases, refactor with better abstractions, or clean up.
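
As a rough illustration of what this can look like in practice (the component names and copy are hypothetical), two experiment variants can simply live side by side as slightly duplicated components instead of being forced into one prematurely generic abstraction:

```tsx
import React from "react";

// Sketch only: two variants of a signup banner kept as separate, slightly
// duplicated components. Some duplication is fine - if one variant wins,
// the other is simply deleted.

// Control: the existing banner.
export const SignupBannerControl = () => (
  <a href="/signup">Sign up for free</a>
);

// Variant under test: same banner with different copy and an extra subtitle.
export const SignupBannerTrialCopy = () => (
  <div>
    <a href="/signup">Start your free trial</a>
    <p>No credit card required</p>
  </div>
);
```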

Encapsulate it

The second principle is to encapsulate and centralize your experiment logic that determines what UI/UX to show. For example, instead of writing conditionals all over the codebase to split the UI depending on what variant a user is assigned to, you can use a React custom hook to encapsulate that logic.

Non-encapsulated experiment logic
Encapsulated experiment logic
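
As a sketch of what the encapsulated version might look like (the experiment key, variant names, and `getVariant` SDK call are all hypothetical placeholders for whatever your experimentation platform provides):

```tsx
import React from "react";

// Stand-in for your experimentation SDK's assignment call (hypothetical name).
declare function getVariant(experimentKey: string): string;

type OnboardingVariant = "control" | "checklist" | "guided-tour";

// Custom hook: the only place that knows the experiment key and its variants.
export function useOnboardingExperiment(): OnboardingVariant {
  const variant = getVariant("onboarding-redesign");
  return variant === "checklist" || variant === "guided-tour" ? variant : "control";
}

// Stub variant UIs, just for the sketch.
const ClassicOnboarding = () => <p>Classic onboarding</p>;
const OnboardingChecklist = () => <p>Onboarding with a setup checklist</p>;
const GuidedTour = () => <p>Guided product tour</p>;

// Callers just ask the hook what to render: no variant checks are scattered
// across the codebase, and adding or removing a variant only touches this file.
export function Onboarding() {
  const variant = useOnboardingExperiment();
  if (variant === "checklist") return <OnboardingChecklist />;
  if (variant === "guided-tour") return <GuidedTour />;
  return <ClassicOnboarding />;
}
```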

Make it declarative

For certain experiments it’s worth building some abstraction to make your business logic declarative and separate from the view layer, so that iterating on it is as simple as changing some parameters in a config. However, this can be a double-edged sword and you might end up over-engineering it. Understanding what you’re likely to iterate on will help you decide what should be built in a declarative way and what shouldn’t.
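
For example (a minimal sketch with made-up variant names and config shape), an onboarding checklist experiment could be described as data, so that a new iteration is mostly a config change rather than new view code:

```tsx
import React from "react";

// The business logic of the experiment lives in a declarative config,
// separate from the view layer.
type ChecklistStep = { id: string; label: string; href: string };

const checklistByVariant: Record<string, ChecklistStep[]> = {
  control: [
    { id: "profile", label: "Complete your profile", href: "/settings/profile" },
    { id: "invite", label: "Invite a teammate", href: "/team/invite" },
  ],
  // Iterating usually means editing this config, not the component below.
  "reordered-steps": [
    { id: "invite", label: "Invite a teammate", href: "/team/invite" },
    { id: "profile", label: "Complete your profile", href: "/settings/profile" },
  ],
};

// The view layer only knows how to render a list of steps.
export function OnboardingChecklist({ variant }: { variant: string }) {
  const steps = checklistByVariant[variant] ?? checklistByVariant.control;
  return (
    <ol>
      {steps.map((step) => (
        <li key={step.id}>
          <a href={step.href}>{step.label}</a>
        </li>
      ))}
    </ol>
  );
}
```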

Balancing speed of execution and quality

The second challenge Growth Engineers face is the need for speed of execution. In a fast-paced environment with lots of potential ideas to experiment with and limited time and resources, cutting the time it takes to launch experiments and learn from them is crucial: it lets you discard unsuccessful ideas early and iterate on successful ones quickly to deliver customer value sooner. Engineers new to growth often ask themselves “do I need to cut on code quality in order to build something faster?”. The answer is usually “no”.

Let’s debunk this once and for all: speed and quality are not at opposite ends of a spectrum; they usually go hand in hand. Increasing speed of execution doesn’t mean writing buggy, poorly performing, spaghetti code. In fact, poor quality code usually leads to lower speed of execution. Bugs and performance issues have a negative impact on user experience and can therefore invalidate experiment results, leading to - guess what - having to re-run experiments after fixing the bugs!

So let’s look into some of the ways to keep a high quality bar while optimizing for speed of execution and delivery.

First of all, what hurts speed of execution the most? This is not an exhaustive list, but some of the most impactful factors are:

  • Scope changes
  • Back and forth with product and design
  • Idle pull requests
  • Writing extensive tests

Cut scope

The most impactful lever you have to increase speed of execution is to cut scope. During the initial design phase we often tend to include nice-to-have requirements that are not essential to the experiment because they don’t directly help validate the experiment hypothesis. Once development has started and the technical effort needed to build each requirement is clearer, it’s your job as an engineer to re-align with product and design and de-scope if needed.

However, be careful not to cut so much scope that you compromise the experiment itself: the solution still needs to be good enough to validate (or invalidate) the hypothesis.

Take ownership

Taking ownership means that you’re not just building something someone told you to build: you understand the goals behind the experiment, what hypothesis you’re trying to validate, and why and how. This is crucial because it enables you to make micro product decisions on your own and to suggest alternative solutions and tradeoffs.

Make it easy to review

Often, pull requests remain open for a long time because review cycles are slow. This can happen for many reasons, but a common factor is that they require a high cognitive load to review. Cognitive load increases when the reviewer is not familiar with what you’re building and when code complexity is high. To reduce cognitive load and make it easier for others to review your code, loop reviewers in early - for example by writing tech specs and discussing implementation details before you even start working on something - and then keep them in the loop by discussing the tradeoffs and architecture decisions that come up along the way. It also helps to break down big chunks of work into smaller pull requests that can be reviewed individually.

Have a testing strategy

How to test your code with unit, integration, and e2e tests is always a heated topic when it comes to Growth Engineering. Because of the fast-paced environment and the often short-lived and dynamic code, it’s sometimes hard to justify writing extensive tests that might slow down your initial speed of delivery. On the other hand, tests help prevent things from breaking as new changes come in, and they ensure our code does what we think it should be doing. It’s therefore important to agree on a testing strategy and define guidelines within your team. A few rules of thumb are:

  • Add unit/integration tests for business logic that is likely to live in the codebase for an extended period of time (see the sketch after this list). This provides safeguards against regressions, which ensures the reliability of experiment results and ultimately increases speed of iteration.
  • Prefer manual QA testing over writing extensive e2e tests for the multiple concurrent variants of an experiment.
  • Proactively monitor metrics and analytics for the duration of the experiment to catch possible issues and fix them quickly.
  • Add e2e tests as part of the GA (General Availability) release process to ensure your solution doesn’t break over time and the experiment results won’t be undermined.
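
As a sketch of that first rule of thumb, the variant-resolution logic from the earlier encapsulation example can be extracted into a small pure function and covered by a couple of unit tests (the Vitest-style imports and the function name are assumptions, not a prescribed setup):

```ts
import { describe, expect, it } from "vitest";

type OnboardingVariant = "control" | "checklist" | "guided-tour";

// Pure variant-resolution logic, extracted so it can be tested without rendering UI.
function resolveOnboardingVariant(raw: string): OnboardingVariant {
  return raw === "checklist" || raw === "guided-tour" ? raw : "control";
}

describe("resolveOnboardingVariant", () => {
  it("returns known variants as-is", () => {
    expect(resolveOnboardingVariant("guided-tour")).toBe("guided-tour");
  });

  it("falls back to control for unknown or malformed assignments", () => {
    expect(resolveOnboardingVariant("???")).toBe("control");
  });
});
```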

It’s worth noting that there’s no “one size fits all” solution and every team should define their own strategy and guidelines based on the complexity of their codebase and existing established processes around testing.

Be intentionally scrappy

Sometimes you just gotta be scrappy and build something that simply works. For some experiments you know you’re writing short-lived, throw-away code, and it doesn’t matter if it’s not clean and maintainable, as long as it’s encapsulated enough and doesn’t impact other teams. You can be intentionally scrappy while still maintaining reliability and gracefully handling errors and edge cases. With experience you’ll develop a sense for how to balance scrappiness and quality, when to be scrappy and when to build more sound solutions, and which parts of a solution can be scrappy and which should not.

Collaborating with other teams

Because of the very wide scope growth teams operate in, Growth Engineers often end up touching parts of a codebase someone else owns. This requires good cross-team collaboration, which is not trivial, especially in large organizations. It’s therefore crucial for Growth Engineers to understand how to communicate well and build healthy, productive collaboration practices with other teams.

Do your homework

First of all, when you start looking into an area of the product you’re not familiar with, identify which team owns and maintains it. Then ask them how they prefer to communicate. Some teams have formal processes to submit a request, others hold office hours, others prefer async communication. Whatever their process is, follow it. But before reaching out with questions, make sure you have a basic understanding of the domain: read their documentation if it exists, dig into the codebase, and try to figure out how it works at a high level. Do your homework and be prepared. They’ll appreciate it and you’ll get off to a good start.

Find alignment

When you start talking to other teams about something you’re trying to build, the very first thing to do is align on the goals and business objectives of the initiative. Make sure everyone understands the why; only then discuss the what and the how. Aligning on the non-goals is equally important to avoid a lot of unnecessary back and forth. Discuss upfront the technical challenges that might arise with different solutions and how they might impact overall scope and effort. Initial alignment is crucial for effective collaboration with fewer misunderstandings along the way. People also tend to be more open and helpful when they understand why you’re doing what you’re doing.

Lead, but listen

Don’t expect other teams to keep your initiative top of mind; they have their own priorities, which might not align with yours. Sometimes you’ll have to ping people multiple times and make sure they follow up on outstanding requests. You’ll need to take the lead to keep your initiative moving forward. At the same time, it’s important to respect their time and take their advice and guidance into consideration. Keep them in the loop and ask them to review your tech specs and code. Align with their testing strategy and guidelines. And sometimes you’ll have to push back on their requests and re-align on the original goals of the project.

Conclusions

Given the growth approach to building software, we can describe Growth Engineering work as dynamic, fast paced, and touching everything while owning nothing. Each of these traits comes with challenges Growth Engineers need to face, and for each challenge we looked at principles and guidelines that can help overcome it.

Trait: Dynamic
Challenge: Making room for dynamism when designing and coding a solution
Principles:
  • Keep it simple: Don’t over-engineer your code trying to solve for future use cases, avoid unnecessary abstractions, prefer code duplication over DRY modules and APIs.
  • Encapsulate it: Centralize the experiment logic that determines what UI/UX to show.
  • Make it declarative: Make your business logic declarative and separated from the view layer.

Trait: Fast paced
Challenge: Balancing speed of execution and quality
Principles:
  • Cut scope: Re-align often with product and design during the development phase and cut scope when needed.
  • Take ownership: Take the time to understand the goals and hypotheses behind an experiment. Make micro product decisions and suggest alternative solutions and tradeoffs.
  • Make it easy to review: Loop reviewers in early to discuss implementation details, tradeoffs and architecture decisions. Break down big chunks of work into smaller pull requests.
  • Have a testing strategy: Agree on a testing strategy and define guidelines within your team.
  • Be intentionally scrappy: Balance scrappiness and quality by maintaining reliability and gracefully handling errors and edge cases.

Trait: “Touch everything - own nothing”
Challenge: Collaborating with other teams
Principles:
  • Do your homework: Identify how to best communicate with other teams and follow their processes. Read their documentation and get a high-level understanding of their codebase.
  • Find alignment: Align on the goals and business objectives. Talk about the why before discussing the what and how. Align on the non-goals. Discuss technical challenges and their impact on scope and effort.
  • Lead, but listen: Take the lead in making sure your initiative progresses, but listen to other teams’ advice and guidance. Keep them in the loop with reviews. Align with their testing strategy. If you need to push back, re-align on goals and objectives.

PS. if you want to join me at Webflow, we're hiring!