Experiment-Driven Product Development

A presentation at the Boye & Company Product Management Peer Group Call, July 2020, by Paul Rissen

Slide 1

Experiment-Driven Product Development
Paul Rissen, Senior Product Manager, Springer Nature
@r4isstatic

Slide 2

How do you know if you’re building the right thing? You don’t. Until much later.

Slide 3

How do you avoid making the wrong decisions? You can’t.

Slide 4

How do you minimise the time and effort spent on the wrong things? Experiment-Driven Product Development (XDPD).

Slide 5

What is Experiment-Driven Product Development?

Slide 6

An evolution of agile/lean product development that places the emphasis on questions rather than solutions.

Slide 7

The basic process of XDPD

Slide 8

What do I mean by an experiment?

Slide 9

Experiment != A/B test

Slide 10

Experiments are not just for R&D ‘innovation’ teams.

Slide 11

Experiments are a structured way of asking questions.

Slide 12

- What are we trying to discover?
- Why is it important for us to find this out?
- What are we going to do?
- What change or difference are we expecting to see?
- How will we measure the result?
- How precise do we need the result to be?
- How certain do we need to be?

Slide 13

This is not easy.

Slide 14

Detailed roadmaps
Guaranteed successes
No wasted effort

Slide 15

Detailed roadmaps
Guaranteed successes
No wasted effort

Slide 16

So why on Earth should I use this approach?

Slide 17

Focus on results.

Slide 18

Stop obsessing over pet solutions.

Slide 19

Listen to your users - at scale.

Slide 20

Data* as a stakeholder.
* specifically ‘user activity’ data

Slide 21

Challenge your assumptions.

Slide 22

Reduces the cost of failure.

Slide 23

Evidence over opinions.

Slide 24

We are scientists for product development.

Slide 25

Observe. Hypothesise. Experiment. Analyse. (repeat to fade)

Slide 26

Principles of XDPD

Slide 27

Involve the whole team, not just data scientists.

Slide 28

Data informed, not data driven.

Slide 29

Simplest Useful Thing

Slide 30

Simplest Useful Thing

Product/User POV:
- Simple = easy to use
- Useful = fulfils a need

Slide 31

Simplest Useful Thing

Developer’s POV:
- What can we do with what we have available to us, now?
- What’s the simplest thing we could build in order to test our hypothesis/answer our question?

Slide 32

Simplest Useful Thing

Experiment POV:
- What’s the simplest method we could use, in order to learn?
- What’s the lowest cost way to learn, that would still ensure reliable evidence?

Slide 33

Experiment-driven product development in practice

Slide 34

The Planning Phase

Slide 35

The Design Phase

Slide 36

The Analysis Phase

Slide 37

Don’t validate ideas. Test hypotheses.

Slide 38

Design the experiment scale and conditions before choosing a method. How certain & how precise do you need to be?
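"How certain and how precise" translates directly into a power calculation. As an illustrative sketch (mine, not from the talk): given a baseline rate, the smallest change you care about (the Minimum Detectable Effect), a false-positive rate, and a desired power, you can compute how many users each condition needs before choosing a method. The 2% baseline echoes the email-card example later in the deck; all other numbers are assumptions.

```python
# A minimal sketch of sizing a two-condition A/B test with statsmodels.
# Baseline rate, MDE, alpha and power are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.02   # current conversion rate (~2%, as in the email-card example)
mde = 0.005       # smallest absolute lift worth detecting (0.5 percentage points)
alpha = 0.05      # "how certain": tolerated false-positive rate
power = 0.80      # "how precise": chance of detecting a real effect of size MDE

effect = proportion_effectsize(baseline + mde, baseline)  # Cohen's h
n_per_condition = NormalIndPower().solve_power(
    effect_size=effect, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Need roughly {n_per_condition:,.0f} users per condition")
```

The answer to "how long will we run it?" then falls out of daily traffic divided into the required sample, rather than being picked by gut feel.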

Slide 39

NO PEEKING!
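Why no peeking? If you check significance repeatedly and stop the first time p < 0.05, the false-positive rate climbs well past 5%. A quick simulation (my illustration, not from the talk) shows this with an A/A test, where both arms are identical so any "significant" result is by definition a false positive:

```python
# Simulate an A/A test: both arms share the same true rate, so every
# "significant" result is a false positive. Compare peeking vs one fixed test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
TRUE_RATE = 0.02
N_PER_ARM = 10_000
LOOKS = [2_000, 4_000, 6_000, 8_000, 10_000]  # interim checkpoints
ALPHA = 0.05
N_SIMS = 2_000

def p_value(success_a, success_b, n):
    """Two-sided p-value from a two-proportion z-test with pooled variance."""
    pooled = (success_a + success_b) / (2 * n)
    se = np.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = (success_a - success_b) / (n * se)
    return 2 * stats.norm.sf(abs(z))

peeking_hits = fixed_hits = 0
for _ in range(N_SIMS):
    a = rng.random(N_PER_ARM) < TRUE_RATE
    b = rng.random(N_PER_ARM) < TRUE_RATE
    # Peeking: stop and declare a winner at the first "significant" look.
    if any(p_value(a[:n].sum(), b[:n].sum(), n) < ALPHA for n in LOOKS):
        peeking_hits += 1
    # Discipline: one test, at the pre-registered sample size.
    if p_value(a.sum(), b.sum(), N_PER_ARM) < ALPHA:
        fixed_hits += 1

print(f"False positives when peeking at every look: {peeking_hits / N_SIMS:.1%}")
print(f"False positives with one fixed-horizon test: {fixed_hits / N_SIMS:.1%}")
```

The fixed-horizon test stays near the nominal 5%; the peeking strategy roughly doubles it. Hence: decide the sample size up front, then don't look until you get there.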

Slide 40

<Experiment name>

Background
- Question: <What do we want to know?>
- Why: <Details of the question & background information>
- Hypothesis: <What are we going to do? What change are we expecting to see? How will we measure success?>

Experiment details
- What: <What are we going to do, e.g. A/B test, UX research, data analysis>
- Conditions:
  A. <A condition - new behaviour>
  B. <B condition - existing behaviour>
- Measures:
  - <Measure, including current rate if applicable>
- Details:
  - <What is our Minimum Detectable Effect threshold? (e.g. 1%)>
  - <How big a sample does each condition need?>
  - <Therefore, how long will we run the experiment at a minimum? E.g. 24 hours>
  - <What subset of the data will we use? E.g. config.nnn>
  - <Event name/filters>

Headline result: <What have we learned?>
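One way to make a template like this operational (my sketch, not something from the talk) is to encode each brief as a structured record that lives in version control next to the experiment code; the field names below are simply lifted from the template above.

```python
# Hypothetical encoding of the experiment template as a Python record.
from dataclasses import dataclass

@dataclass
class ExperimentBrief:
    name: str
    question: str               # What do we want to know?
    why: str                    # Background and why it matters
    hypothesis: str             # Expected change and how success is measured
    method: str                 # e.g. "A/B test", "UX research", "Data analysis"
    conditions: dict            # e.g. {"A": "new behaviour", "B": "existing behaviour"}
    measures: list              # what we will record, with current rates if known
    mde: float                  # Minimum Detectable Effect, e.g. 0.01 for 1%
    sample_per_condition: int   # from the power calculation
    min_duration: str           # e.g. "24 hours"
    data_subset: str = ""       # e.g. a config flag identifying the experiment
    headline_result: str = ""   # filled in only after the analysis phase
```

Filling every field before launch forces the "no peeking" discipline: the sample size, duration, and measures are all pinned down in advance.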

Slide 41

Email first on popup experiment

Background
- Question: Can we drive more email sign ups by moving the sign up card to the front of the popup?
- Why: Only ~2% of users that see recommendations see the email sign up card on the popup. We think showing the card earlier would increase sign ups.
- Hypothesis: Showing the sign up card at the start of the popup (before recommendations) will lead to more email sign ups than having the card at the back, but the verification rate will not change significantly.

Experiment details
- What: A/B experiment.
- Conditions:
  A. Email sign-up last card in popup journey
  B. Email sign-up 1st card in popup journey
- Measures:
  - Email subscribers
  - Email sign up rate (subscribes / no. of users seeing card)
  - Email verifications
  - Close + no thanks rate and CTR (as checks)
- Details:
  - 24 hours (2017-10-05 14:00:00 - 2017-10-06 14:00:00)
  - config.experiment_email_position

Headline result: Putting the email signup card first drives far more signups (and most of these still verify) without leading to a large adverse effect on other metrics, although it does massively reduce the recommendations we display (see full writeup for more details).

Slide 42

Latest Articles sliced by Subject

Background
- Question: Would showing latest articles, sliced by Subject, be a more useful feature than the traditional ‘Browse Articles’ box?
- Why: We want to explore ways that we can usefully slice content, to better show off the range of content within a broad-scope journal.
- Hypothesis: Displaying three of the latest articles for each top level subject will lead to a higher CTR to articles than the ‘Browse Articles’ box.

Experiment details
- What: A/B
- Conditions:
  A. Users visiting the journal homepage are shown the ‘Browse Articles’ box and the ‘Browse Subjects’ box (which just has links to the scoped searches)
  B. Users visiting the journal homepage are shown four subject boxes, each with a set of the three latest articles from that subject, and a link to the scoped search page
- Measures:
  - Unique pageviews on homepage
  - Unique clicks on any article in each subject box
  - Unique clicks to any article in ‘Browse Articles’ box
  - Unique pageviews to any SREP article
- Details:
  - 18-24th October (7 days)

Headline result: 5.4% click through to articles, compared to 4.2% with the traditional ‘Browse Articles’ grid of nine. “See All” for each subject received 10% UCTR, compared to 8% from the previous design, and 6.7% for ‘See All’ for all SREP.

Slide 43

What have we learnt?
- What are the raw numbers?
- What happened to the other ‘health check’ metrics?
- What was the significance and p value?
- What does this tell us?
- What does this mean/imply in relation to our hypothesis/question?
- What have we learned? What would we do differently?
- How does this affect our backlog of questions?
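For the significance and p-value question, a two-proportion z-test is one standard choice. The sketch below reuses the 5.4% vs 4.2% click-through rates from the previous slide, but the visitor counts are invented for illustration, since the slides report only the rates:

```python
# Illustrative significance check for an A/B click-through experiment.
# The rates (5.4% vs 4.2%) come from the slide; the raw counts are assumptions.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

clicks = [540, 420]          # condition B (subject boxes), condition A (Browse Articles)
visitors = [10_000, 10_000]  # hypothetical sample sizes per condition

z_stat, p_val = proportions_ztest(count=clicks, nobs=visitors)
ci_b = proportion_confint(clicks[0], visitors[0], method="wilson")
ci_a = proportion_confint(clicks[1], visitors[1], method="wilson")

print(f"z = {z_stat:.2f}, p = {p_val:.4f}")  # small p -> unlikely under "no difference"
print(f"B: {clicks[0]/visitors[0]:.1%} CTR, 95% CI {ci_b}")
print(f"A: {clicks[1]/visitors[1]:.1%} CTR, 95% CI {ci_a}")
```

Reporting the confidence intervals alongside the p-value answers the "raw numbers" question at the same time, and makes it obvious how precise the estimate actually is.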

Slide 44

Summary

In summary…

Slide 45

- Focus on questions not solutions
- Challenge your assumptions
- Gather evidence
- Be data informed, not data driven
- Involve the whole team
- Use results to inspire new experiments

Slide 46

Slide 47

eBook now available via Apress & Amazon:
https://www.amazon.co.uk/dp/1484255275
https://www.apress.com/gb/book/9781484255278

Also available on SpringerLink:
https://link.springer.com/book/10.1007/978-1-4842-5528-5

Preface & Intro available for free download!

Slide 48

Thank you.

https://www.paulrissen.com
https://www.linkedin.com/in/r4isstatic/
Twitter: @r4isstatic