
The Lean Startup - Time to Pivot... or Persevere?

Episode 94 - 26 Jul 2017

It's been a couple of weeks since our Minimum Viable Product / Minimum Viable Experiment came to an end.

Today, we'll take a critical look at what worked... and what didn't.

And I'll give you the entire Experiment Framework, in the form of a brand new Cheat Sheet / Worksheet.

Links

Noah Lorang's article

The Mundane MVP Cheat Sheet / Worksheet

Small beginnings

I like the idea of web testing.

I started running tests on my own website years ago, using a long-since discontinued Google tool.

It was fairly powerful: you could vary several different elements of the page, and the tool would look after serving up all the combinations and counting the results.

It would even tell you when the experiment was over: when one set of options had “won”.

Which was all great. However…

... my website didn’t get many visitors, so it might take weeks for enough data to come in.

Not very satisfying.

More traffic

Years later, I had the opportunity to run tests on the BBC Worldwide website.

No problem with slow tests here: with hundreds - even thousands - of visits per day, tests didn’t have to run very long to produce solid results.

At that time I made a bit of a name for myself. (Though not in a good way.)

You see, at the beginning of each experiment, I'd make a confident prediction of the outcome.

And on each and every occasion... I'd be DEAD WRONG.

New mindset

I was in very real danger of turning myself off testing. So I count myself lucky that I came across an article at that time.

Written by Noah Lorang at 37signals (since renamed to Basecamp), the article is entitled A/B Testing: It's not about the results, and it's definitely not about the why.

Here’s an extract, which details Noah’s testing mindset:

I don’t judge a test based on what feedback we might have gotten about it. I don’t judge a test based on what we think we might have learned about why a given variation performed. I don’t judge a test based on the improvement in conversion or any other quantitative measure.

I only judge a test based on whether we designed and administered it properly.

When I read this (and you can bet I shared the article with my BBC Worldwide colleagues) I stopped worrying about the OUTCOME of a test.

And I started focusing on the mechanics of the test.

Minimum Viable Experiment

I bring this up, as I'm sure you guessed, because of the experiment we ran a few weeks ago.

Without going into too much detail, the purpose of the experiment was to test demand for a course I was thinking of making.

The Development That Pays audience - that's you - provided a shortlist of titles.

And the Development That Pays audience - that's you again - selected the finalists. They were:

  • Agile That Pays
  • Real Life Agile
  • Agile Done Right

It fell to me to set up the experiment.

But was it a good experiment?

One that I would have been happy to show to Noah Lorang?

Let’s take a look.

Worksheet

To make it easier for you to take a look, I’ve put the details of the experiment onto a cheat sheet / worksheet:

The Mundane MVP Cheat Sheet / Worksheet

OK. Let’s evaluate.

Facebook Ad

As far as I can tell, this first part was fine: I have to assume that Facebook knows how many times it "served" the ads, and I'm sure it knows how to count clicks correctly.

While we’re here, let’s fill in some numbers for the winning advert “Agile Done Right”:

  • It was displayed 4868 times
  • It was clicked 96 times

That’s a click-through rate (CTR) of 1.97% - well above the benchmark target of 1%. Good news.

More good news: the absolute numbers of impressions and clicks were (just) high enough for the result to be statistically significant.
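If you'd like to sanity-check that claim, here's a rough Python sketch of the kind of test I have in mind - a two-proportion z-test comparing the winning ad against a runner-up. The winner's numbers are the real ones from above; the runner-up's 60 clicks from 4,800 impressions are made up purely for illustration.

```python
# Back-of-the-envelope significance check: two-proportion z-test.
# The winner's numbers are from the experiment; the runner-up's are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return (z score, two-sided p-value) for the difference in click-through rates."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    p_value = 1 - erf(abs(z) / sqrt(2))   # two-sided, normal approximation
    return z, p_value

z, p = two_proportion_z_test(96, 4868, 60, 4800)   # "Agile Done Right" vs a hypothetical runner-up
print(f"winner CTR = {96 / 4868:.2%}")             # 1.97%
print(f"z = {z:.2f}, p = {p:.3f}")                 # p below 0.05 counts as statistically significant
```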

So I would be happy to talk to Noah about this part of the experiment.

Agile Done Right will be the title of the course.

In the language of The Lean Startup, it’s not a Pivot; it’s a “Persevere”.

The Landing Page

Moving on to the next part of the experiment: the Landing Page.

The landing page was viewed 35 times, and the big yellow button was clicked 9 times.

That’s a click-through rate of 26%.

Missing visitors?

I shared the results with you a couple of weeks ago, and you were quick to spot that something looked fishy.

Rob said “it even raised more questions about CTR and Facebook/Google data.”

The Facebook/Google data “thing” that Rob is referring to is the disparity between these two numbers:

How can 96 people click the Facebook advert link… but only 35 people arrive on the Landing Page?

Part of the answer is that they are measurements of different things.

A click on a link - such as a link on a Facebook ad - is easy to measure. What Google measures is trickier: the page has to load… the JavaScript has to trigger…

There are a bunch of reasons why the number of recorded visits to the Landing Page would be lower than the number of clicks on the ad.

Landing Page

The click-through rate on the Landing Page - 26% - was way lower than I was expecting.

It’s looking like we find ourselves in a Pivot situation.

But before we talk about that, what about the quality of this part of the experiment?

Helene said: “With just a handful of clicks, it doesn't seem to be statistically significant.”

And Helene is right: with so few clicks, the data isn’t statistically significant.

Which brings us to a key challenge of testing what is, I suppose, a funnel: the further we descend into a funnel, the less data we have to play with.

I would have liked 10x more Landing Page clicks - at least.

Looking back up the funnel, to get 90 Landing Page clicks, we’d need 960 Facebook Advert clicks.

I paid roughly a dollar per click, so that would be a cost of $960.

If I re-ran the test with three ad variations, the cost would be the best part of three grand.

We’re testing something that I plan to give away for free, so that’s too rich for my blood.

Time

That’s one problem.

Another is that these 96 clicks took 5 days to show up.

I don’t want to wait 50 days for a test with ten times the data!
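For anyone who wants to see where the $960, the “3 grand” and the 50 days come from, here’s the back-of-the-envelope arithmetic as a quick Python sketch (the $1 cost-per-click is my rough figure from above):

```python
# Rough funnel arithmetic behind the cost and time estimates above.
ad_clicks      = 96     # clicks on the winning Facebook ad
landing_clicks = 9      # clicks on the big yellow button
days_elapsed   = 5      # how long those 96 ad clicks took to arrive
cost_per_click = 1.00   # roughly $1 per ad click (approximate)

target_landing_clicks = 10 * landing_clicks                                 # "10x more - at least" = 90
ad_clicks_needed      = target_landing_clicks * ad_clicks / landing_clicks  # = 960

cost_one_variant    = ad_clicks_needed * cost_per_click                     # ≈ $960
cost_three_variants = 3 * cost_one_variant                                  # ≈ $2,880 - "the best part of 3 grand"
days_needed         = ad_clicks_needed / (ad_clicks / days_elapsed)         # ≈ 50 days

print(cost_one_variant, cost_three_variants, days_needed)
```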

End of the road?

Is this the end of the road?

Am I saying that the Validated Learning that we seek is now out of reach?

Not quite.

Next steps

I have THREE ideas that might let us run a good test - without breaking the bank.

1. Run a single ad variant.

We already have a winning name, so we can get away with running a single ad variant.

That immediately cuts my costs in three.

2. Close the gap between Advert clicks and Landing Page visits

I think - hope - I can achieve this by counting both of these server-side:

  • requests for the Landing Page
  • clicks on the Landing Page

In theory, that will remove any JavaScript-related weirdness.
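To make that concrete, here’s a minimal sketch of the kind of server-side counting I have in mind. It assumes a Python/Flask landing page - not necessarily what the real page runs on - and keeps the counts in memory purely for illustration:

```python
# Minimal sketch of server-side counting (assumes a Flask landing page;
# counts are kept in memory here - use a database or log file in practice).
from flask import Flask, redirect, render_template_string

app = Flask(__name__)
counts = {"page_views": 0, "button_clicks": 0}

PAGE = """
<h1>Agile Done Right</h1>
<a href="/click"><button>Tell me more</button></a>
"""

@app.route("/landing")
def landing():
    counts["page_views"] += 1      # counted the moment the server serves the page
    return render_template_string(PAGE)

@app.route("/click")
def click():
    counts["button_clicks"] += 1   # counted when the button's link reaches the server
    return redirect("/thanks")

@app.route("/thanks")
def thanks():
    return "Thanks - you're on the list!"
```

No JavaScript tracking snippet involved: both numbers are counted by the server itself.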

3. Increase the Click-Through Rate

I’m paying for clicks, so increasing the Ad click-through rate won’t save me any money.

But increasing the Landing page CTR will.

Why? Because the moment I get “enough” clicks, I’ll stop the experiment.
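To put rough numbers on that - and the “improved” CTR here is pure guesswork, just to show the shape of the saving:

```python
# How the Landing Page CTR affects the cost of reaching 90 button clicks,
# assuming idea 2 works and every ad click is counted as a Landing Page view.
target_button_clicks = 90
cost_per_ad_click    = 1.00              # roughly $1, as before

for landing_page_ctr in (0.26, 0.50):    # observed CTR vs a hoped-for (invented) one
    ad_clicks_needed = target_button_clicks / landing_page_ctr
    cost = ad_clicks_needed * cost_per_ad_click
    print(f"LP CTR {landing_page_ctr:.0%}: ~{ad_clicks_needed:.0f} ad clicks, ~${cost:.0f}")
```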

That’s the theory. The plan is two-pronged.

  • Produce a better landing page. (Duh.)
  • Pick a better audience.

The latter, I have to say, is high risk. It could backfire.

Facebook has a thing called Lookalike Audiences.

I already have a COMPLETELY PERFECT IN EVERY WAY audience: you!

So my hope is that an audience JUST LIKE YOU would be quite likely to, you know, click.

In summary

To summarise:

The Facebook part of the experiment worked well. The Landing Page part of the experiment worked less well.

Now that the first part of the experiment has yielded a winning title, we’re in a position to run a single ad to a new, improved landing page.

By tweaking things here and there, the hope is to achieve a result of statistical significance, WITHOUT breaking the bank.

That’s the plan!