How experiment design can derisk your digital strategy

We’ve all seen what happens when digital product development goes awry. Months (and even years) are spent planning, designing and developing technology. When launch day finally arrives, typically after a series of delays, the fanfare quickly gives way to disappointment. Despite all the good intentions, your shiny new product is not engaging customers in the ways you expected. Painful post-mortem meetings follow, with consolation sought in phrases like: “Oh well, at least we learnt something.”

Every product is built on the back of assumptions that might turn out to be wrong. But learning doesn’t have to be slow and expensive. At FT Strategies, we have helped many businesses adopt an approach that derisks the process of launching and improving products. You can read more by downloading our free report, “The Art and Science of Experimentation for Growth”.

This approach uses experimentation to shorten the loop between strategy and execution from months or years to days or weeks. In this blog post we take a deep dive into how you can design experiments that put validated learning at the heart of your approach.

1) Experiment design should be part of a wider Discovery Process

Organisations tend to jump too quickly to solutions and start building. This means they either play it safe or don’t solve the real problem. At the Financial Times, we approach product development with a discovery mindset. You can summarise our approach as: “We’re not going to build this until we know enough about it.” For this reason, we don’t run our experiments in isolation – they are part of our wider Discovery Process (read more about this process in our previous blog post, “How the FT uses customer discovery to design better value propositions”).

Experiment design takes place after we have understood the problem, via methods such as data analysis, competitor research and focus groups. Once we have summarised and shared this knowledge, we generate ideas for what we might build. Our goal is to create ideas that are: aligned with user needs and business goals, sufficiently ambitious, and varied (allowing us to gather more interesting feedback and avoid getting too attached to one solution). It’s at this stage that we design one or more experiments that will allow us to test some of these ideas.

[Image: The FT's Discovery Process]

A lightweight Discovery Process that any organisation can adopt is the Double Diamond, as popularised by the Design Council. This model follows similar principles to the FT’s Discovery Process. First, you aim to explore the problem space (typically by conducting user research), then decide on an area of focus, and then move into a phase of running experiments to test and refine a potential solution.

2) Experiments should test your riskiest assumptions, not your idea

By this stage in the process you should have a clear idea that you would like to test. For example, let’s imagine that a subscription music service (e.g. Spotify, Apple Music) has learnt that a certain segment of music fans is spending money on vinyl, and has defined an idea that involves selling physical music directly from their platform. The temptation might be to “build the thing”, but the goal at this stage is to maximise learning and minimise development time. As such, it’s important to think less about your idea, and more about the underlying assumptions of that solution.

To help our clients uncover those underlying assumptions, FT Strategies has used frameworks such as “The Four Big Risks” created by Silicon Valley Product Group. This framework distinguishes between four types of risk that are common to most product concepts:

  • Value risk (whether customers will buy it or users will choose to use it)
  • Usability risk (whether users can figure out how to use it)
  • Feasibility risk (whether our engineers can build what we need with the time, skills and technology we have)
  • Business viability risk (whether this solution also works for the various aspects of our business)

Taking our example of the subscription music service moving into the world of selling physical music, some of the assumptions might be:

  • Value: people will want to buy records; people would prefer to buy their physical music from us rather than physical stores; etc
  • Usability: people won’t be confused by the idea of buying from their streaming provider; we can design the “buy vinyl” feature in a way that is unobtrusive to other users; etc
  • Feasibility: this feature will require minimal changes to overall platform architecture; this feature will work with our existing billing service; etc
  • Viability: this feature will not be a distraction from our core product strategy; this will require minimal changes to wider teams and processes; this will not create misaligned incentives within our teams; etc

As you can see, there are many risks and assumptions lying behind every idea. In order to design a lightweight experiment, the next step would be to prioritise these assumptions according to the level of risk each one presents.

One easy way to do this is to score each assumption based on two criteria: severity of impact if we are wrong, and confidence that the assumption is correct.
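
To make this concrete, here is a minimal sketch of how such a scoring exercise might be captured in Python. The assumptions come from the music-service example above, but the numeric scores and the scales are purely illustrative – in practice the team would agree on them together.

```python
# Minimal sketch: ranking assumptions by risk.
# Scores are illustrative (1 = low, 5 = high); in practice the team
# would agree on these together during a prioritisation workshop.

assumptions = [
    # (assumption, impact_if_wrong, confidence_it_holds)
    ("People will want to buy records", 5, 4),
    ("People will prefer buying from us over physical stores", 5, 2),
    ("The feature works with our existing billing service", 3, 4),
    ("The feature won't distract from our core product strategy", 4, 3),
]

def risk_score(impact, confidence):
    # Riskiest = high impact if wrong, low confidence that it holds,
    # so invert confidence on the 1-5 scale.
    return impact * (6 - confidence)

ranked = sorted(assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True)

for assumption, impact, confidence in ranked:
    print(f"{risk_score(impact, confidence):>2}  {assumption}")
```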

3) Every experiment starts with a hypothesis

Once you have decided on the riskiest assumption, you can begin the process of designing your experiment. It is good to begin this process by reframing your assumption as a testable hypothesis. You can write hypotheses in lots of different ways, but we have found that the following format can work well in most situations:

We believe that… [insight about your audience]

So if we… [experiment summary]

We will see… [success criteria]

This structure encourages you to think about the key elements of an experiment: the audience, the experiment, and what success looks like. To continue our example of the music streaming service, one hypothesis might be:

We believe that… superfans will purchase physical records through our service

So if we… add an option to purchase vinyl to each album listing page

We will see… >1% of superfans click on that option to learn more, and >0.1% purchase an album
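
As an illustration, the sketch below shows how a hypothesis in this format could be captured as structured data and checked against experiment results. All of the numbers are made up, and the field names are our own invention.

```python
# Minimal sketch: capturing a hypothesis and its success criteria as data,
# then checking it against (hypothetical) experiment results.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str               # insight about your audience
    experiment: str           # experiment summary
    min_click_rate: float     # success criteria
    min_purchase_rate: float

vinyl = Hypothesis(
    belief="Superfans will purchase physical records through our service",
    experiment="Add a 'buy vinyl' option to each album listing page",
    min_click_rate=0.01,      # >1% of superfans click to learn more
    min_purchase_rate=0.001,  # >0.1% purchase an album
)

# Illustrative results -- these numbers are made up.
superfans_exposed = 50_000
clicks, purchases = 740, 61

click_rate = clicks / superfans_exposed
purchase_rate = purchases / superfans_exposed

validated = (click_rate > vinyl.min_click_rate
             and purchase_rate > vinyl.min_purchase_rate)
print(f"click rate {click_rate:.3%}, purchase rate {purchase_rate:.3%}, "
      f"hypothesis {'supported' if validated else 'not supported'}")
```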

4) Picking the right experiment for your hypothesis (e.g. A/B test, landing page, etc)

Once you have settled on your hypothesis, you can move into experiment design. Your goal at this stage is to minimise cost and maximise learning potential. There are many different types of experiment, including A/B tests, clickable wireframes, paper prototypes, fake door tests and software prototypes. Each is appropriate to specific circumstances. One useful way to navigate the options is to consider each experiment in terms of its level of product fidelity (how close is it to your brand’s products?) and its potential reach (see below).

[Image: experiment types mapped by level of product fidelity and potential reach]

An A/B test involves actually making a live change on a product or service. Different cohorts of users (i.e. Cohort A and B) will have a different experience. For example, you could make the colour of a button green for Cohort A, and blue for Cohort B. The benefit of this approach is that you can potentially reach a large number of users and measure quantitative differences in behaviour (e.g. which button has a higher click-through rate).
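
For illustration, one common way to implement the cohort split is to hash each user’s ID so that the same user always sees the same variant across visits. The Python sketch below shows this approach; the function and experiment names are hypothetical.

```python
# Minimal sketch: stable A/B cohort assignment via hashing.

import hashlib

def assign_cohort(user_id: str, experiment: str = "button-colour") -> str:
    # Salt the hash with the experiment name so different experiments
    # produce independent splits of the same user base.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Cohort A might see the green button, cohort B the blue one.
for user in ["user-101", "user-102", "user-103"]:
    print(user, "->", assign_cohort(user))
```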

A/B tests can be a great option when the concept you are testing does not require substantial development time. If you wanted to test something with a large development cost (e.g. the concept of an entirely new product or service), techniques like concept videos or landing pages might be a better option. These methods still allow you to measure the impact across a larger pool of customers, but can require a smaller technical investment. For example, when Joel Gascoigne was considering building a Twitter scheduling tool called Buffer, he decided to create a basic landing page rather than building the full service. This landing page simply described the service, and allowed potential customers to share their email address with him. This allowed him to measure interest in the product before committing time to building specific features.

[Image: Buffer’s ‘MVP’ landing page]

While quantitative learning is a great outcome from an experiment, we think it is important that our clients don’t underestimate the value of methods that are less scalable. For example, showing a basic paper prototype or clickable wireframe to a handful of customers can provide invaluable qualitative insights, particularly early on in the development process. The video below shows an example of testing a mobile phone provider’s app using just cardboard and hand-drawn interfaces. Researchers often find that a deliberately low-fidelity approach can help elicit richer feedback from users, partly because it’s obvious they aren’t being shown a “finished” product.

[Video: Low fidelity prototype testing of the EE app]

5) How to measure the results of your experiment

The goal of any experiment is validated learning. As we have seen, some experiments (e.g. A/B tests) lend themselves to the capture of quantifiable data. For this to work well, some technology investment is required: we have seen that most organisations need a combination of off-the-shelf software (e.g. Optimizely) and their own in-house tools. To get meaningful results, the team running the test will need to ensure that the results meet the criteria for statistical significance – that is, that the experiment has reached a large enough population to draw valid conclusions. There are online tools that can help with this (see the sketch below).
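
For a simple conversion experiment, the calculation behind such a tool is typically a two-proportion z-test. The Python sketch below shows roughly what an online significance calculator computes; the conversion counts are illustrative.

```python
# Minimal sketch of a two-proportion z-test, the kind of check an online
# significance calculator performs for a simple A/B conversion test.

from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Cohort A: 480 conversions from 10,000 users; Cohort B: 560 from 10,000.
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```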

At the FT, A/B testing is always our first choice for any product release, as it is our surest way of understanding causality. Alternative experimentation methods will never provide the same certainty, but the subjective feedback they generate is still a useful tool for learning. In particular, they are useful earlier in the development journey, or when we need to balance other business considerations (e.g. if we need quick results, or we’re testing something where statistical rigour will be hard to achieve).

How FT Strategies can help

We believe that a discovery-led experimental approach to product development lies at the heart of successful digital businesses. We have helped many organisations adopt the human, strategic and technical capabilities to derisk their approach to creating new products and services. If you’d like to learn more, or share a challenge you have in this space, please get in touch – we’d love to hear from you!

Download our report on designing and running experiments “The Art and Science of Experimentation for Growth” now.
