3 Ways to Analyze CRO Test Results Beyond Wins or Losses
A/B testing has long been seen as a way to find what works to improve conversions.
But that mindset is limiting.
If you focus only on what works, you risk overlooking everything that doesn’t. And that means missing the richest source of insight: tests that don’t go your way.
Growth doesn’t come from wins alone. It comes from learning fast, interpreting deeply, and evolving constantly.
That’s why we found Taciana Serafím’s approach to interpreting CRO test results so intriguing. In an interview for our CRO Perspectives series, she highlighted that she doesn’t see test results as just wins or losses. Instead, she breaks them down into three strategic types.
1. Winner/Loser/Inconclusive ratio
“This metric provides insight into the effectiveness of our testing hypotheses and the overall direction of our experimentation efforts. It helps us understand the proportion of tests that result in positive, negative, or neutral impacts on our objectives.”
VWO's Perspective: We see this ratio as a calibration tool. Just like a compass shows if you’re veering off course, this ratio helps teams self-correct their hypothesis quality, test design, and risk appetite.
Losing tests aren’t failures; they expose flawed assumptions you can now correct. Inconclusive tests aren’t failures either, but they can waste time and resources if they become the norm. They signal a need to tighten your experiment design process—focusing on clarity, volume, and targeting.
A healthy program contains a balanced mix: safe bets, bold swings, and tests that challenge your understanding of the user. By examining the patterns behind your wins, losses, and inconclusive results, you can rebalance your testing strategy across impact, insight, and innovation.
We suggest you start by auditing your recent tests: label each as a winner, loser, or inconclusive result, look for patterns in hypothesis quality, and rebalance your roadmap accordingly.
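To make this ratio easy to monitor, here is a minimal sketch in Python. The outcome labels and the 40% inconclusive guardrail are illustrative assumptions, not VWO guidance:

```python
from collections import Counter

# Hypothetical test log: one outcome label per completed experiment.
test_outcomes = [
    "winner", "loser", "inconclusive", "winner",
    "inconclusive", "loser", "inconclusive", "winner",
]

counts = Counter(test_outcomes)
total = len(test_outcomes)

for outcome in ("winner", "loser", "inconclusive"):
    share = counts[outcome] / total
    print(f"{outcome:>12}: {counts[outcome]} tests ({share:.0%})")

# Illustrative guardrail: if inconclusive results dominate, revisit
# hypothesis clarity, sample size, and targeting, as discussed above.
if counts["inconclusive"] / total > 0.40:
    print("Inconclusive rate is high: tighten your experiment design.")
```

Run over a real test log, a script like this turns the ratio from an occasional retrospective into a standing health check for the program.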
2. Rate of tests yielding new learnings
“One of the core values of CRO is the continuous learning and improvement it fosters. This metric tracks the percentage of tests that contribute new insights into user behavior, preferences, or website usability issues.”
VWO's Perspective: Not every test has to increase conversions to be valuable. Some of the most powerful experiments are the ones that help us understand why users behave the way they do. It’s about building institutional knowledge and uncovering the 'why' behind user actions. This analysis captures the percentage of tests that offer new insights—about how users think, what they expect, what frustrates them, or what motivates them to act.
A case in point: UK retail brand FreestyleXtreme decided to enhance their website by adding detailed product descriptions. This change was guided by direct customer feedback and best practices in eCommerce. The expectation? Higher conversions due to increased product clarity and trust.
But the results told a different story.
When tested against a design with no descriptions, the version with product descriptions saw a 31.38% drop in conversions.
The team did exactly what users said they wanted. So why the negative impact?
This result opened up new avenues of exploration around test design, user intent, and behavioral psychology. The most reasonable conclusion was that execution quality trumps feature requests, especially when it affects the natural course of the user journey. In this case, placing text-heavy descriptions above product images disrupted the visual hierarchy and introduced cognitive load.
Users had to scroll more, interpret dense blocks of text, and work harder to see the actual product—all of which added friction. This touches on a core principle in digital psychology: users follow paths of least resistance. When effort outweighs perceived reward, drop-off becomes inevitable.
The takeaway? CRO isn’t just about doing what users say—it’s about designing experiences that align with how they think, feel, and behave.
Read the full case study here.
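One practical habit before banking a learning like this: confirm the difference is statistically real. Below is a minimal two-proportion z-test sketch in Python; the visitor and conversion counts are invented for illustration and are not FreestyleXtreme’s actual data:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
    return z, p_value

# Hypothetical traffic: control (no descriptions) vs. variant (descriptions).
z, p = two_proportion_ztest(conv_a=410, n_a=10_000, conv_b=280, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the drop is real
```

A check like this keeps the team from building a narrative around noise before digging into the “why.”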
What you should prioritize: tests that teach you something, even when they don’t win.
Think of it as a measure of how much you're growing your understanding of your audience, not just your metrics. It helps teams build a library of learnings that inform future design, messaging, and product direction. The most impactful testing programs are those that see value in why a variant didn’t win—and use that insight to fuel smarter tests moving forward.
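One lightweight way to build that library is to log a structured record for every completed test. Below is a minimal sketch in Python; the schema and field names are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestLearning:
    """One entry in a team's library of experiment learnings (illustrative schema)."""
    test_name: str
    hypothesis: str
    outcome: str                 # "winner", "loser", or "inconclusive"
    insight: str                 # the 'why' behind the result
    next_ideas: list[str] = field(default_factory=list)

entry = TestLearning(
    test_name="Product description placement",
    hypothesis="Detailed descriptions above images will lift conversions",
    outcome="loser",
    insight="Text above images disrupted visual hierarchy and added friction",
    next_ideas=["Test collapsible descriptions below product images"],
)
print(entry.insight)
```

Even a simple record like this makes the “rate of tests yielding new learnings” countable: it is the share of tests whose entries contain a non-trivial insight.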
3. Rate of tests generating shared learnings across departments
“It measures the extent to which the insights gained from CRO activities are shared beyond the immediate team, influencing strategies and tactics in other departments. It underscores the collaborative nature of optimization and its role in driving company-wide improvements.”
VWO's Perspective: This is where CRO graduates from a team-specific initiative to an organization-wide culture. When test insights reach marketing, sales, product, or even customer success, the impact multiplies. This analysis tracks how well your learnings travel. In other words, this is where CRO stops being just a growth lever and becomes a decision-making engine for the organization.
Consider a hotel booking platform that ran a test on its reservation page. The goal was to reduce drop-offs and increase booking completions. But the test revealed something critical—users were abandoning their bookings due to unclear pricing details, especially around taxes, service charges, and add-ons like breakfast or late checkout.
That single discovery can trigger coordinated action across teams: product redesigns the price breakdown, marketing aligns campaign messaging with the true total cost, and customer support prepares clearer answers to fee-related questions.
Our recommendation: make insight-sharing a routine rather than an afterthought, whether through a searchable learnings repository, regular cross-functional readouts, or experiment reviews that include product, marketing, and support.
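If you want to quantify how well learnings travel, one option is to track which departments each insight reached and report the share of tests that influenced teams beyond CRO. A minimal sketch, with made-up test names and departments:

```python
# Hypothetical log: each test maps to the departments its insight reached.
insights_reach = {
    "Pricing clarity test": {"product", "marketing", "support"},
    "Checkout CTA copy": {"cro"},
    "Description placement": {"cro", "product"},
}

# A test counts as "shared" if its insight reached anyone outside the CRO team.
shared = [name for name, depts in insights_reach.items() if depts - {"cro"}]
rate = len(shared) / len(insights_reach)
print(f"Shared-learnings rate: {rate:.0%}")  # tests influencing other teams
```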
Another valuable insight from Taciana: “We closely monitor business-related metrics tailored to our company’s goals, with a particular focus on the Return on Investment (ROI) of our initiatives.”
CRO teams often get caught up in optimizing micro-metrics—headline clicks, form fills, or bounce rate. But unless those metrics roll up to topline business goals—like revenue, lifetime value (LTV), retention, or acquisition efficiency—you're improving a layer, not moving the needle.
Too often, teams celebrate a 10% lift without asking: Did it lead to more qualified leads? Did those leads convert to paying customers? Did the average order value increase or churn drop?
No matter how difficult these questions may seem, asking them is the only way to assess where your program truly stands.
To implement ROI-driven experimentation, tie every hypothesis to a business metric, estimate the expected impact before launch, and report results in revenue terms rather than relative lifts.
If your CRO program can’t prove its value to the business, it’ll always struggle to gain buy-in, resources, or influence.
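To make that case, it helps to translate lifts into money. Here is a back-of-the-envelope ROI sketch for a single test; every number below is an illustrative assumption, not a benchmark:

```python
# Illustrative inputs -- replace with your own program's numbers.
monthly_visitors = 50_000
baseline_cvr = 0.030          # 3% baseline conversion rate
observed_lift = 0.10          # 10% relative lift from the winning variant
avg_order_value = 80.0        # in your currency
test_cost = 6_000.0           # tooling, design, and analyst time

extra_orders = monthly_visitors * baseline_cvr * observed_lift
incremental_revenue = extra_orders * avg_order_value
roi = (incremental_revenue - test_cost) / test_cost

print(f"Extra orders/month: {extra_orders:.0f}")
print(f"Incremental revenue/month: {incremental_revenue:,.0f}")
print(f"ROI on this test: {roi:.0%}")
```

Reporting in these terms turns “we got a 10% lift” into “this test pays for itself in a month,” which is the language budget owners understand.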
Way forward
So, no—A/B testing isn’t just about what works. It’s about discovering why something works, why it doesn’t, and what to build next because of it. Taciana’s structured approach to interpreting test outcomes offers a refreshing shift in testing mindset: she helps practitioners treat test results as strategic inputs for long-term business growth, not just performance indicators. She shares additional actionable takeaways that CRO practitioners can adopt to drive meaningful business impact. Read the full interview here.