3 Ways to Analyze CRO Test Results Beyond Wins or Losses


A/B testing has long been seen as a way to find what works to improve conversions.

But that mindset is limiting.

If you focus only on what works, you risk overlooking everything that doesn’t. And that means missing the richest source of insight: tests that don’t go your way.

Growth doesn’t come from wins alone. It comes from learning fast, interpreting deeply, and evolving constantly.

That’s why we found Taciana Serafím’s approach to interpreting CRO test results so intriguing. In an interview for our CRO Perspectives series, she highlighted that she doesn’t see test results as just wins or losses. Instead, she analyzes them through three strategic lenses.

1. Winner/Loser/Inconclusive ratio

“This metric provides insight into the effectiveness of our testing hypotheses and the overall direction of our experimentation efforts. It helps us understand the proportion of tests that result in positive, negative, or neutral impacts on our objectives.”

VWO's Perspective: We see this ratio as a calibration tool. Just like a compass shows if you’re veering off course, this ratio helps teams self-correct their hypothesis quality, test design, and risk appetite:

  • Too many winners? You might be playing it safe. A high win rate often indicates that your hypotheses are rooted in known best practices or safe, incremental ideas. These are great for short-term lifts and demonstrating momentum, but they rarely lead to breakthrough insights or disruptive changes. It’s worth asking: Are we avoiding bold ideas in favor of guaranteed wins? Are we limiting ourselves to what’s worked before?

  • Too many losers? Losses aren’t inherently bad. Sometimes, they provide a necessary validation that the current experience is already strong—an underrated win in itself. But a streak of failed tests also raises deeper questions: Are we framing our hypotheses based on user needs and data? What new insight does this failure reveal? How can it inform a stronger hypothesis for my next test? Every loss is a stepping stone to a more refined experiment. Losses should drive curiosity, not discouragement.

  • Too many inconclusive results? This often signals structural or executional issues. Maybe your hypothesis is too weak to produce a meaningful effect, the traffic split is off, or your test isn’t running long enough.

Inconclusive tests aren’t failures either, but they can waste time and resources if they become the norm. They signal a need to tighten your experiment design process—focusing on clarity, volume, and targeting.
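
To make the calibration idea concrete, here is a minimal sketch (our own illustration, not a VWO feature) of how a team might tally this ratio from a log of past outcomes; the outcome labels and example data are assumptions for demonstration.

```python
from collections import Counter

def outcome_ratio(outcomes):
    """Summarize experiment outcomes into counts and shares of the total."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {
        label: {"count": counts[label], "share": round(100 * counts[label] / total, 1)}
        for label in ("winner", "loser", "inconclusive")
    }

# Hypothetical quarter of experiments
last_quarter = ["winner", "loser", "inconclusive", "winner", "loser",
                "loser", "inconclusive", "winner", "loser", "inconclusive"]
print(outcome_ratio(last_quarter))
# {'winner': {'count': 3, 'share': 30.0},
#  'loser': {'count': 4, 'share': 40.0},
#  'inconclusive': {'count': 3, 'share': 30.0}}
```

Reviewed quarter over quarter, these shares work like the compass described above: a drift toward any one bucket is the cue to revisit hypothesis quality, test design, or risk appetite.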


Figure: Three types of test results

A healthy program should contain a balanced mix: some safe bets, bold swings, and tests that challenge your understanding of the user. By examining the patterns behind your wins, losses, and inconclusive results, you can improve your testing strategy to reflect a better balance between impact, insight, and innovation.

We suggest you take the following steps:

  • Regularly review whether your hypotheses are rooted in solid data or based on assumptions. Use a simple checklist: Is it specific? Measurable? Based on user insight?
  • Target specific, high-intent user segments rather than broad audiences to detect meaningful impact and reduce inconclusive results.
  • Bring in cross-functional teams for ideation sessions that balance safe bets with bold moves. Encourage team members from marketing, product, support, and sales to contribute observations from their own domain. These frontline insights often uncover friction points or missed opportunities that can lead to major breakthroughs.


Figure: Steps for analyzing test results

2. Rate of tests yielding new learnings

“One of the core values of CRO is the continuous learning and improvement it fosters. This metric tracks the percentage of tests that contribute new insights into user behavior, preferences, or website usability issues.”

VWO's Perspective: Not every test has to increase conversions to be valuable. Some of the most powerful experiments are the ones that help us understand why users behave the way they do. It’s about building institutional knowledge and uncovering the 'why' behind user actions. This analysis captures the percentage of tests that offer new insights—about how users think, what they expect, what frustrates them, or what motivates them to act. 
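
As a purely illustrative sketch (the records and field names below are assumptions, not a VWO report format), the learning rate can be as simple as the share of experiments with a documented takeaway:

```python
# Each record notes whether the test produced a documented learning (illustrative data).
experiments = [
    {"name": "PDP description test", "outcome": "loser", "new_learning": True},
    {"name": "Checkout CTA copy", "outcome": "winner", "new_learning": False},
    {"name": "Pricing page layout", "outcome": "inconclusive", "new_learning": True},
]

learning_rate = sum(e["new_learning"] for e in experiments) / len(experiments)
print(f"Tests yielding new learnings: {learning_rate:.0%}")  # Tests yielding new learnings: 67%
```

If this share stays low, the fix is usually better documentation and qualitative follow-up, not more tests.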

A case in point: UK retail brand FreestyleXtreme decided to enhance their website by adding detailed product descriptions. This change was guided by direct customer feedback and best practices in eCommerce. The expectation? Higher conversions due to increased product clarity and trust.

But the results told a different story.

When tested against a design with no descriptions, the version with product descriptions saw a 31.38% drop in conversions.

The team did exactly what users said they wanted. So why the negative impact?

This result opened up new avenues of exploration around test design, user intent, and behavioral psychology. The most reasonable conclusion was that execution quality trumps feature requests, especially when it affects the natural course of the user journey. In this case, placing text-heavy descriptions above product images disrupted the visual hierarchy and introduced cognitive load.

Users had to scroll more, interpret dense blocks of text, and work harder to see the actual product—all of which added friction. This touches on a core principle in digital psychology: users follow paths of least resistance. When effort outweighs perceived reward, drop-off becomes inevitable.

The takeaway? CRO isn’t just about doing what users say—it’s about designing experiences that align with how they think, feel, and behave.

Read the full case study here.

What you should prioritize

  • Treat every test as a learning opportunity, not just a success/failure event.
  • Document not only results but hypotheses, assumptions, and surprising behaviors observed during the test.
  • Incorporate qualitative tools—like heatmaps, session recordings, and on-site surveys—to enrich learnings from quantitative results.
  • Based on new insights gathered, run iterative tests to validate hypotheses and determine their impact on user behavior.


Figure: How to prioritize throughout a test

Think of it as a measure of how much you're growing your understanding of your audience, not just your metrics. It helps teams build a library of learnings that inform future design, messaging, and product direction. The most impactful testing programs are those that see value in why a variant didn’t win—and use that insight to fuel smarter tests moving forward.

3. Rate of tests generating shared learnings across departments

“It measures the extent to which the insights gained from CRO activities are shared beyond the immediate team, influencing strategies and tactics in other departments. It underscores the collaborative nature of optimization and its role in driving company-wide improvements.”

VWO's Perspective: This is where CRO graduates from a team-specific initiative to an organization-wide culture. When test insights reach marketing, sales, product, or even customer success, the impact multiplies. This analysis tracks how well your learnings travel. In other words, this is where CRO stops being just a growth lever and becomes a decision-making engine for the organization.

Consider a hotel booking platform that ran a test on its reservation page. The goal was to reduce drop-offs and increase booking completions. But the test revealed something critical—users were abandoning their bookings due to unclear pricing details, especially around taxes, service charges, and add-ons like breakfast or late checkout.

That single discovery led to a wave of coordinated action across teams:

  • Marketing adjusted UX messaging to emphasize clear, upfront pricing—no surprises at checkout.
  • Customer support updated FAQs and response scripts to preemptively address common pricing questions.
  • Product redesigned the booking flow to surface a transparent, itemized cost breakdown before the final confirmation step.

Our recommendations:

  • Maintain a shared testing repository (Notion, VWO Plan, Airtable, etc.) that is accessible to all.
  • Involve other departments in test planning, so their goals and hypotheses feed into the backlog.
  • Use a structured summary format: "What was tested? What did we learn? Who else should know?" (a minimal template is sketched after this list).
  • Host monthly test readouts with representatives from marketing, product, design, and CX.
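
A minimal template for that structured summary might look like the sketch below; the function and field names are our own assumptions, not a Notion, Airtable, or VWO Plan schema.

```python
def test_summary(tested, learned, audience):
    """Format a shareable experiment readout: what was tested,
    what was learned, and who else should hear about it."""
    return (
        f"What was tested?      {tested}\n"
        f"What did we learn?    {learned}\n"
        f"Who else should know? {', '.join(audience)}"
    )

print(test_summary(
    tested="Itemized cost breakdown on the reservation page",
    learned="Unclear taxes and add-on fees were driving booking abandonment",
    audience=["Marketing", "Customer support", "Product"],
))
```

Keeping every readout in this shape makes the repository skimmable for teams outside CRO.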

Another valuable insight from Taciana: “We closely monitor business-related metrics tailored to our company’s goals, with a particular focus on the Return on Investment (ROI) of our initiatives.”

CRO teams often get caught up in optimizing micro-metrics—headline clicks, form fills, or bounce rate. But unless those metrics roll up to topline business goals—like revenue, lifetime value (LTV), retention, or acquisition efficiency—you're improving one layer of the funnel, not moving the needle.


Figure: Micro metrics or business metrics

Too often, teams celebrate a 10% lift without asking: Did it lead to more qualified leads? Did those leads convert to paying customers? Did the average order value increase or churn drop?

No matter how difficult these questions may seem, ask them to assess where your program stands:

  • Is your CRO program increasing revenue per visitor, or just inflating vanity metrics?
  • Are you solving real business challenges, or just running “safe” tests to pad the win column?

What you can do to implement ROI-driven experimentation:

  1. Start with business questions, not just test ideas: What strategic objective is this test aligned to—revenue per visitor, upsell rate, activation, etc.?
  2. Build a full-funnel measurement model: Connect test variants to downstream KPIs via analytics platforms, BI dashboards, or CRM systems. VWO integrates with tools like GA4, Mixpanel, and Salesforce to make this seamless.
  3. Define ROI benchmarks per initiative: For example, is a 3% lift in AOV worth more than a 10% lift in clicks? Not all wins are created equal—prioritize accordingly (a back-of-the-envelope comparison is sketched after this list).
  4. Make ROI review a habit: Look back at your last 10–15 experiments each quarter. Which ones delivered lasting business impact—and which just looked good on the surface?
  5. Involve business stakeholders early: Don’t wait until post-launch to talk ROI. Loop in finance, strategy, or growth leaders while shaping the experiment to align expectations.
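
To make step 3 concrete, here is a hedged back-of-the-envelope comparison; every traffic, conversion, and order-value figure below is invented for illustration, and the click lift is assumed not to change downstream conversion, which is exactly the assumption an ROI review should test.

```python
# Hypothetical monthly baseline (all numbers are assumptions for illustration).
visitors = 100_000
click_through = 0.20   # share of visitors who reach the product page
conversion = 0.03      # share of product-page visitors who buy
aov = 80.0             # average order value, in dollars

baseline = visitors * click_through * conversion * aov

# Scenario A: 3% lift in AOV, everything else unchanged.
revenue_aov_lift = visitors * click_through * conversion * (aov * 1.03)

# Scenario B: 10% lift in clicks, assuming downstream conversion holds.
revenue_click_lift = visitors * (click_through * 1.10) * conversion * aov

print(f"Baseline revenue: ${baseline:,.0f}")           # $48,000
print(f"+3% AOV:          ${revenue_aov_lift:,.0f}")   # $49,440
print(f"+10% clicks:      ${revenue_click_lift:,.0f}") # $52,800
```

On these made-up numbers the click lift looks bigger, but only because downstream conversion was assumed to hold; if the extra clicks are lower intent and conversion softens, the picture can flip, which is why benchmarks have to be set per initiative rather than read off a single metric.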

If your CRO program can’t prove its value to the business, it’ll always struggle to gain buy-in, resources, or influence. 

Way forward

So, no—A/B testing isn’t just about what works. It’s about discovering why something works, why it doesn’t, and what to build next because of it. Taciana’s structured approach to interpreting test outcomes offers a refreshing shift in testing mindset. She helps practitioners see test results as strategic inputs, not just performance indicators, for long-term business growth. She shares additional actionable takeaways that CRO practitioners can adopt to drive meaningful business impact. Read the full interview here.
