ISSUE #11 - NOVEMBER 14, 2022

Creating an experimentation-driven product culture

How do you create an experimentation-driven culture within your product team? What are the steps to set it up? How do you manage and prioritize your experiments? Practical advice and real-world examples.


One of the first things someone learns about product management is that "you have to focus on your users." Learn about their day-to-day, their needs, and what keeps them up at night.

Here's the thing, though. No matter how well you understand your users, you will rely on assumptions and hypotheses to conceptualize and build solutions for them. Regardless of how educated those assumptions are, they are likely wrong. Here is where an experimentation-driven culture will be invaluable for you.

An experimentation-driven product culture will allow you to focus on your users and explore their problems from different perspectives. By setting your hypotheses, iterating, and experimenting with the solutions you deliver, you can identify not just a good-enough way to solve those problems, but the best one.

Know, though, that your whole team needs to embrace this for it to succeed. In this post, you will find out how to take your first steps towards that, with examples of what I did and what (so far) is working.

Before you start

Before you get your hands dirty, you must be familiar with a few elements and definitions. Firstly, since we are seeking to embrace an experimentation-driven culture, it is evident that "experiments" are at the heart of it. However, experiments are only tools. The foundation of every experiment is a clear and concise hypothesis. So, let's define both of them, starting with the latter.

What is a hypothesis?

A hypothesis is the foundation of every experiment. It's a statement that should express a cause-and-effect relationship between two things.

In a product, a hypothesis should ideally express this cause-and-effect relationship between an action and a metric. It should look like:

"If we improve Y, the metric X will increase, because users will be able to do Y easier."

For example, one of the areas we are working on at Simpler is optimizing the checkout experience for online shoppers. That means one of the metrics we are optimizing is the checkout conversion rate (CR). One of the most recent assumptions that we prioritized for experimentation is:

"If we improve the address input form for users in the UK, our checkout conversion rate will increase because users that want their orders to be shipped into the UK will be able to find their address seamlessly."

What is an experiment?

As implied earlier, an experiment is the vehicle through which you will validate (or invalidate) your hypotheses. A hard reality here is that most of your experiments will fail, meaning they will invalidate your assumptions. However, you can extract significant learnings from your failures; arguably, especially from them.

A good experiment consists of:

  • A clear hypothesis
  • A concise description of what success looks like
  • The details of the experiment
  • The work needed to set it up

To help my colleagues formulate ideas for experiments that could prove to be valuable for us in the future, I have created an experiment template that can be found here.

Also, on this link, you may find an actual experiment doc with one of the experiments that we ran recently at Simpler.
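
To make the structure above concrete, here is a minimal sketch of how such an experiment record could be represented in code. The field names and the example values are my own illustration; they are not part of the template linked above.

    from dataclasses import dataclass, field

    @dataclass
    class Experiment:
        # A clear hypothesis, following the "if action, then metric, because reason" structure
        hypothesis: str
        # A concise description of what success looks like, ideally tied to a metric
        success_criteria: str
        # The details of the experiment (variants, audience, duration, etc.)
        details: str
        # The work needed to set it up
        setup_tasks: list[str] = field(default_factory=list)
        # not started / running / completed
        status: str = "not started"

    example = Experiment(
        hypothesis="If we improve the UK address input form, checkout CR will increase.",
        success_criteria="Checkout conversion rate for UK shoppers improves vs. control.",
        details="A/B test on the address step of the checkout, UK traffic only.",
        setup_tasks=["Design new address form", "Implement variant", "Set up tracking"],
    )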

Taking the first steps

We are done with some of the basics. So how can we get our hands dirty now?

To start experimenting, you must first have a few tools in place. The essentials, which my team uses as well, are:

  • A hypothesis library
  • Tools to help you set up your experiments

A hypothesis (or experiments) library is the pool where you keep track of your past, current, and future experiments. You may think of that as your backlog for product experiments.

I share my team's hypothesis library with the whole company so that anyone can contribute their ideas, as long as they don't just add a bare hypothesis but adhere to the hypothesis template I shared earlier. In practice, this means an entry will be deleted unless it explains the hypothesis, how success will be measured, and how we will know whether we succeeded.

I structure this library as a table in which each row is a hypothesis. Besides the hypothesis itself, each row contains an expected result (in the form of a metric), the experiment's status (not started, running, completed), and an owner. Those are the columns of the table.

Prioritizing your experiments

On top of those columns, there are another four, titled "I", "C", "E", and "Score". You can easily guess their meaning. As a prioritization framework, we are using ICE: we score each experiment based on its expected impact, our confidence, and the ease of implementation.

ICE works well for us since it is a straightforward framework to understand and it helps us deal with ambiguity. When experimenting, uncertainty exists by definition, so this framework allows us to balance our expectations about the impact of our activities with our confidence in our assumptions.
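
To illustrate how those columns come together, here is a minimal sketch of scoring and ordering a hypothesis library with ICE. The rows, the 1-10 scales, and the "multiply the three values" formula are my own assumptions for the example; the post doesn't prescribe an exact calculation.

    # Each row mirrors the library's columns: hypothesis, expected result,
    # status, owner, plus the I / C / E ratings (here on a 1-10 scale).
    library = [
        {"hypothesis": "Better UK address form -> higher checkout CR",
         "expected_result": "checkout CR up", "status": "not started",
         "owner": "PM A", "impact": 7, "confidence": 6, "ease": 8},
        {"hypothesis": "Show shipping cost earlier -> fewer drop-offs",
         "expected_result": "step abandonment down", "status": "not started",
         "owner": "PM B", "impact": 8, "confidence": 4, "ease": 5},
    ]

    # One common ICE convention: Score = Impact x Confidence x Ease.
    for row in library:
        row["score"] = row["impact"] * row["confidence"] * row["ease"]

    # The highest-scoring hypotheses become the next candidates for experimentation.
    for row in sorted(library, key=lambda r: r["score"], reverse=True):
        print(f'{row["score"]:>4}  {row["hypothesis"]}')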

Below, you may find a screenshot of the current form of our Experiments library at Simpler (some sensitive information has been hidden):

[Screenshot: product experiments hypothesis library]

Tooling you'll need

An experiment can take many forms. The simplest could be "if we implement X, then Y should happen," which only requires a simple observation. In most cases, however, you will be trying to optimize a metric. There, comparing the metric before the suggested change with the metric after it is a bad idea, because anything else that changed in the meantime (traffic mix, seasonality, promotions) will pollute the comparison. That is why, in what we call "multivariant" experiments (or A/B testing if you have only two variants), you always want to run the variants in parallel and compare them on comparable samples drawn from the same audience. This is the most reliable way to know which variant performs best.
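
As a rough illustration of what comparing variants means in practice, here is a minimal sketch of a two-proportion z-test on conversion data collected from two variants running in parallel. This is a standard statistical technique and my own addition; the tools mentioned below handle this kind of analysis for you.

    from math import sqrt, erfc

    def compare_variants(conv_a: int, n_a: int, conv_b: int, n_b: int) -> None:
        """Two-proportion z-test on conversions gathered over the same period."""
        rate_a, rate_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (rate_b - rate_a) / se
        p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
        print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")

    # Example: variant B converts at 5.5% vs. 5.0% for A, on ~10k sessions each.
    compare_variants(conv_a=500, n_a=10_000, conv_b=550, n_b=10_000)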

Some tools can help you deploy those variations and track their performance out of the box. My preference is Google Optimize. It's super easy to integrate with any app and has a comprehensive toolset that allows you to run split tests of any kind. Google Optimize also integrates seamlessly with Google Analytics, which is another benefit. However, other tools, such as Optimizely, are just as good.

Occasionally, when your experiments are more specialized, tools similar to those mentioned before might not be able to cover your requirements. In that case, building your own custom experiment management tools is also an option.
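
If you do go down the custom route, the core building block is usually a deterministic way to assign each user to a variant, so that the same user always sees the same variant. Here is a minimal sketch of one common approach (hash-based bucketing); the experiment name and the 50/50 split are made-up examples.

    import hashlib

    def assign_variant(user_id: str, experiment: str, weights: dict[str, float]) -> str:
        """Deterministically map a user to a variant of a given experiment."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
        cumulative = 0.0
        for variant, weight in weights.items():
            cumulative += weight
            if bucket <= cumulative:
                return variant
        return variant  # guard against floating-point rounding on the last variant

    # Example: a 50/50 split for a hypothetical "uk-address-form" experiment.
    print(assign_variant("user-123", "uk-address-form", {"control": 0.5, "new_form": 0.5}))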

Managing your experiments

Assuming you have set up everything, you are ready to start your journey in experiments land, which should be full of new insights and learnings. How do you manage it, and how do you keep the ball rolling?

Here is what I am doing:

Weekly Experiments Review

This is a weekly sync in which I participate alongside our UI/UX designers and other product managers or team stakeholders. The setup of this meeting is deliberate, aiming at three things:

  1. Align on the status of the ongoing experiments
  2. Extract insights from those experiments
  3. Plan the upcoming experiments

The agenda of the meeting is the following:

  • 10mins: Review our experiment metrics from last week and what we are focusing on optimizing.
  • 10mins: Review last week's experiments (or why they weren't launched).
  • 15mins: The person analyzing the experiment data shares insights, while the rest of the participants reflect on them or ask for any additional insights that would be valuable.
  • 10mins: Select the next experiments to run and assign them to their owners.
  • 10mins: Check and prioritize the experiments pipeline.

Biweekly Hypothesis Brainstorming

During this session, the main goal is to come up with new ideas, in the form of hypotheses, that can later be prioritized for testing. The source of those ideas is not always pure creativity, though. In my experience, most of them emerge as learnings from past experiments, especially the ones whose initial hypothesis was invalidated.

Concluding

Closing this post, I want to leave you with the following: most of the hypotheses you will try to validate will be proven wrong. You should not be discouraged. As mentioned earlier, even failed experiments carry learnings, which are often more valuable than the learnings from successful ones. Knowing what doesn't work is as important as knowing what does.

