Data-Driven vs. Data-Informed

Many product organizations have stopped focusing on being data-driven, and are pivoting towards being data-informed instead.

What’s the difference between the two? Let me illustrate with a real scenario.

Product Study: Data-Driven Demand Assessment

One of my past initiatives was to find product-market fit for a particular product idea.

The goal was to use landing pages, SEO (search engine optimization), and SEM (search engine marketing) to determine the following:

  • Was our idea viable?
  • If so, how should we market it, and to what audience?

We already had hypotheses about how to market the product and which audience to target, since we had completed extensive qualitative and quantitative research confirming that our product solved a real customer pain.

We just needed to find the best way to bring that product idea to market. We decided to use as much quantitative data as possible to measure demand.

We pulled together several different landing pages, each one addressing a particular hypothesis we had about our customer segments.

For example, one of the landing pages was upscale and luxurious, aimed at wealthy customers who wanted peace of mind.

Another one of the landing pages was friendly and accessible, aimed at the middle of the market that needed help and expertise.

When designing our experiment, we decided to focus on the following metrics for each landing page, in priority order (a sketch of this weekly rollup in code follows the list):

  1. # of leads that qualified per week (prospects who actually might buy our product)
  2. # of leads submitted per week (regardless of prospect quality)
  3. # of impressions from Google per week (a.k.a. site traffic)
  4. Bounce rate
  5. Time on page
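
To make this concrete, here’s a minimal sketch of what that weekly rollup might look like in pandas, covering the first three metrics (bounce rate and time on page would come from a web-analytics tool). The event-log schema and sample rows are hypothetical, not our actual instrumentation:

```python
import pandas as pd

# Hypothetical event log: one row per tracked event per landing page.
events = pd.DataFrame([
    {"page": "luxury",   "week": "W01", "event": "impression"},
    {"page": "luxury",   "week": "W01", "event": "lead"},
    {"page": "luxury",   "week": "W01", "event": "qualified_lead"},
    {"page": "friendly", "week": "W01", "event": "impression"},
    {"page": "friendly", "week": "W01", "event": "lead"},
])

# Count each event type per landing page per week, then order the
# columns to match our metric priority.
weekly = (events
          .pivot_table(index=["page", "week"], columns="event",
                       aggfunc="size", fill_value=0)
          .reindex(columns=["qualified_lead", "lead", "impression"],
                   fill_value=0))
print(weekly)
```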

Our initial results weren’t particularly great across our key metrics on any of the landing pages. We weren’t alarmed, though; we had expected this. After all, Google AdWords takes time to optimize itself, and our landing pages were thin on content.

Strategy #1: Content

While we waited on Google AdWords, our first goal was to get the number of impressions up since impressions are a “top of the funnel” metric.

That is, if impressions are low, then all downstream metrics will be low as well, since every other metric depends on whether users even saw our landing pages to begin with.
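
As a quick illustration of that funnel math (every rate below is invented for the example):

```python
# Made-up funnel rates: each downstream count is a fraction of the
# one above it, so low impressions cap every metric below.
impressions = 500                      # weekly Google impressions
click_through_rate = 0.04              # impressions -> page visits
submit_rate = 0.05                     # visits -> submitted leads
qualify_rate = 0.25                    # submitted -> qualified leads

visits = impressions * click_through_rate        # 20 visits
leads = visits * submit_rate                     # 1 lead
qualified = leads * qualify_rate                 # 0.25 qualified leads
print(f"Expected qualified leads per week: {qualified:.2f}")
```

Doubling impressions doubles every downstream count, all else held equal, which is why we attacked the top of the funnel first.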

So, to bring our impressions up, we fleshed out the landing pages with additional pages, customer testimonials, and support channels (e.g. email addresses and phone numbers).

Over the next two weeks, we started getting significant traffic across a subset of these landing pages.

Success! Those particular landing pages must be the ones with the winning messaging, right?

Except, this traffic converted poorly. That is, people didn’t submit leads when they came to the website. We decided to look into user behavior to understand why that might be the case.

Strategy #2: Behavior Research

We installed tracking code to understand why people did not convert. Almost immediately, we noticed four distinct behavioral patterns across all of the landing pages (a sketch of how sessions might be bucketed into these patterns follows the list):

  1. Visitors who clicked away immediately (a small fraction of all visitors).
  2. Visitors who read the overview page in depth, then left.
  3. Visitors who came to the overview page, navigated other pages in depth, came back to the overview page, then left.
  4. Visitors who jumped straight to the lead form, filled it out halfway, then left.
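
Here’s a minimal sketch of how sessions might be bucketed into these four patterns from a page-view log. The page names, session shape, and the 10-second threshold are all assumptions for illustration:

```python
# Assumed session shape: the ordered list of pages viewed, total time
# on site, and how many lead-form fields were filled in.
def classify_session(pages, seconds_on_site, form_fields_filled):
    if form_fields_filled > 0:
        return "4: started the lead form, then abandoned it"
    if pages == ["overview"]:
        if seconds_on_site < 10:
            return "1: clicked away immediately"
        return "2: read the overview in depth, then left"
    if len(pages) > 2 and pages[0] == pages[-1] == "overview":
        return "3: browsed other pages, returned to the overview, then left"
    return "other"

print(classify_session(["overview"], 4, 0))                           # 1
print(classify_session(["overview"], 95, 0))                          # 2
print(classify_session(["overview", "pricing", "overview"], 180, 0))  # 3
print(classify_session(["overview", "lead_form"], 60, 3))             # 4
```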

This set of behaviors baffled us. With the decent volumes of traffic that we were attracting, why weren’t people encouraged to submit the form? After all, the vast majority of visitors had interacted with our landing pages.

Strategy #3: Segment-Specific Messaging

We went back to the drawing board and kept hammering away at the problem, keeping these newly discovered behavioral segments in mind. We tried new user flows, new content, new AdWords campaigns, new SEO strategies, and new plugins and widgets.

Nope. We still weren’t getting traction. Even when we unlocked a huge number of leads, we found that those leads didn’t qualify – that is, they didn’t actually care about our company, or they misunderstood what we were providing.

Yet no matter how many times we iterated on the messaging, our prospects still couldn’t grasp what we were selling.

At this point, our customer researcher pointed out that when we had interviewed customers, they told us they looked for products like ours through word of mouth.

Acting on their insight, we changed our ads to focus on online referrals. Yet, our product offering still wasn’t gaining traction.

We kept running experiments over the next month. We optimized lead volume, we optimized impressions, we reduced the bounce rate, we increased the on-page time. Just about every metric we wanted was going up except for the most important one – prospects who would actually buy our product.

Finally, we admitted defeat. Nothing we were doing was winning. We decided that there wasn’t market demand for our product after all and prepared to stop the initiative.

Strategy #4: Channel Exploration

At this point, our customer researcher gently brought up that we could try offline marketing before giving up.

That was a terrifying proposition for us. We had never marketed offline before. We didn’t have tracking set up for offline. How were we going to be data-driven?

They pushed us to try it. After all, if our core metric was “number of qualified leads”, we could always take the total dollars spent on a channel and divide it by the total number of qualified leads generated to get the acquisition cost per qualified lead.
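
A sketch of that channel-agnostic metric, with invented spend and lead counts:

```python
# Acquisition cost per qualified lead works for any channel, online
# or offline; the numbers here are invented for illustration.
def cost_per_qualified_lead(total_spend, qualified_leads):
    if qualified_leads == 0:
        return float("inf")  # no qualified demand at any price
    return total_spend / qualified_leads

online = cost_per_qualified_lead(total_spend=5000.00, qualified_leads=4)
offline = cost_per_qualified_lead(total_spend=1500.00, qualified_leads=30)
print(f"Online:  ${online:,.2f} per qualified lead")   # $1,250.00
print(f"Offline: ${offline:,.2f} per qualified lead")  # $50.00
```

Framing the metric this way let us compare channels on equal footing, regardless of how attributable each one was.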

Their proposal made sense. We switched to newspapers and flyers to assess demand. To mitigate the financial risk, we tightly limited our offline marketing budget and determined that we’d call the entire thing off within four weeks if we didn’t get results.

Within the first week of our first offline marketing campaign, we saw demand skyrocket. These customers were intensely interested in actually buying the product!

Lessons Learned: How Being Data-Informed Would Have Helped

We initially didn’t want to use offline media because offline attribution is hard.

After all, how can you measure how long someone looked at a flyer, or how many people saw a newspaper ad?

We knew that we could measure online. We were good at online measurements, and online measurements had pointed us in the right direction in the past. We could calculate standard deviations and visualize distributions, we could segment by behavior and demographics, we could even track the geographic location and the device type of every single visitor.

The problem is that the customers we were targeting weren’t seeking that kind of product through an online channel. They were seeking it offline.

We should have thought carefully about what “data-driven” means. If we only optimize using data, then we can only see the data we’re able to capture, and we can only optimize against that.

Since all of our analytics instrumentation was exclusively online, we could never have learned about that offline behavior through data.

If we had been data-informed instead of being data-driven, we would have gone back to the research immediately after the first iteration to double-check one of our fundamental hypotheses – that people are looking for this kind of product online.

We would have had the “why” to the “what” much earlier on.

Make no mistake – my teammates and stakeholders were extraordinarily capable, and no one was at fault.

This particular case of offline marketing was an organization-wide first for us, and I’m proud of our creativity, resilience, and resourcefulness as we worked our way through this challenge.

But because we were too heavily data-driven and had little experience in challenging our data, we rejected less attributable methods, leading us to entirely miss the mark.

I’ll note that this particular scenario was an edge case – after all, when you’re building digital products, you expect your users to come find you through the internet.

Still, this case study illustrates that being data-driven can be incredibly dangerous if you only optimize against existing metrics without digging deeper to understand whether those metrics are valid in the first place. Even the most talented teams can miss critical insights if they’re too data-driven.

I’m proud to say that after our learnings, our product culture shifted towards being more data-informed rather than being solely data-driven, leading to better product decisions and better outcomes across multiple initiatives.

Key Takeaways

There’s a particular time and place for quantitative data, and there’s a particular time and place for qualitative observations. Each requires deep understanding and experience to interpret properly.
Quantitative, measurable data is fantastic for telling you the “what”.

That is, data can show you real behaviors and provide a strong foundation for determining whether a product has made a positive impact.

Qualitative observations (e.g. interviews, shadowing) are especially powerful in telling you the “why”.

Given enough practice as an impartial and unbiased observer, you’ll find that your customers will reveal entirely new lines of inquiry and exciting new product hypotheses that you could never have seen from your data alone.

That’s why I now strongly prefer being data-informed over being data-driven. Data is just one of many inputs to the messy product decisions that we make every day.

As product managers, we serve other human beings as our final end users. That means we always need a strong grasp of both their “why” and their “what”.

Clement Kao
Clement Kao is Co-Founder of Product Manager HQ. He was previously a Principal Product Manager at Blend, an enterprise technology company that is inventing a simpler and more transparent consumer lending experience while ensuring broader access for all types of borrowers.