How to Estimate the Value of Data for Marketing
In my current role, I end up doing different types of consulting projects. Though they’re broadly around data and tech usage for media, the end goal the client is looking for varies considerably from project to project. For example, one project may be about helping the client figure out the best way to use a CDP, while another could be about creating a data partnership strategy, or a retail media partnership strategy. Over the last 12 months, I’ve noticed a common thread running through these projects, and that’s what I want to talk about today.
It is the lack of clarity within organisations about the value of data in marketing. Please don’t get me wrong - I’m not saying that the clients I work with do not realise that data is valuable. It’s actually the reverse. The fact that they’re ready to engage with us in paid consulting engagements is a strong indicator that they believe data-driven marketing can drive both growth and efficiency for their business, and they want our expertise to help them chart the roadmap. What I’m saying is that most organisations do not have a way to objectively estimate the most likely value, in financial terms, that different types of data can drive for them. As a result, it is often hard for marketers to articulate that value back to the business, resulting in decision paralysis.
Of course, if we could actually use the data in campaigns, there would be a clear way to measure the ROI of the data, but we have the proverbial chicken-and-egg problem here: we can’t run the campaigns until we get access to the data. One of the ways we’ve found to break this deadlock is to activate limited-time pilots. Let me take you through how we typically get to an unlock from this situation.
One of the first things to do is to map the different data sources you’re considering across two lenses - Scale vs Impact, and Cost/Complexity vs Impact. Each of these gives you an immediate sense of the relative priority of the data sources, and will move you towards a decision that takes your situation into account.
Scale vs Impact
This lens immediately gives you a sense of which part of the customer journey or sales funnel you’d want to apply the data to. Upper-funnel activities typically need high volume, while you can compromise on the impact that the data source drives. In the lower parts of the funnel, you’d typically need a data source that can work harder towards conversion. Because of their nature, such data sources are typically smaller in scale and more expensive.
Having this lens immediately helps a marketer connect to their immediate vs longer-term priorities, and helps create a sense of a roadmap. It’d be useful to add a note about the top-right corner. Getting access to a data source that provides high impact at high scale would be both very difficult and very expensive. Because of that, it may be worthwhile to think of this corner as a spectrum of its own, where, depending on the scale and impact, you can think of applying the data to the mid to lower parts of the funnel.
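To make the lens a little more concrete, here is a minimal sketch of how you might bucket candidate data sources into funnel stages based on rough scale and impact scores. The source names, scores and thresholds are purely illustrative assumptions, not benchmarks from any real engagement.

```python
# Minimal sketch: place candidate data sources on the Scale vs Impact lens
# and suggest a funnel stage for each. Names and 1-10 scores are illustrative.

data_sources = {
    # name: (scale_score, impact_score) on a 1-10 scale
    "Broad demographic segments": (9, 3),
    "Retailer loyalty-card data": (4, 8),
    "Site intent signals": (7, 7),
}

def suggest_funnel_stage(scale: int, impact: int) -> str:
    """Rough quadrant logic: high scale suits the upper funnel, high impact
    suits the lower funnel, and both together is a spectrum of its own."""
    if scale >= 6 and impact >= 6:
        return "mid-to-lower funnel (treat as a spectrum)"
    if scale >= 6:
        return "upper funnel (reach / awareness)"
    if impact >= 6:
        return "lower funnel (conversion)"
    return "deprioritise or revisit"

for name, (scale, impact) in data_sources.items():
    stage = suggest_funnel_stage(scale, impact)
    print(f"{name:30s} scale={scale} impact={impact} -> {stage}")
```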
Complexity vs Impact
The true cost of using a dataset goes far beyond the financial cost of procuring the data. Often there are challenges around data privacy, data access, organisational and technical readiness, and skill availability that stand in the way of extracting proper value from the dataset. I see this especially with first-party datasets. This lens gives us a sense of prioritisation based on how easy or complex the data is to use and the corresponding impact it can drive.
I guess the diagram above is self-explanatory; just a quick note about the bottom-left corner, where I’ve found that there’s often some residual value to be driven in low-maturity geos through a comparatively low-impact dataset that is easy to deploy. Surprisingly, those unexpected successes often get talked about more than the more impactful pilots.
Estimating Scale, Complexity and Impact
One advantage of the above approach is that you don’t have to immediately go through a big spreadsheet to work out the cost elements to perfection. Estimating scale is easy anyway - total volume or a proxy will do the job. Since this is about relative priority, for complexity you can assign a 1-5 or 1-10 score across each dimension of complexity and very easily get to a composite score. For those of you who are mathematically inclined, you will probably think of a weighted additive, or weighted average, model.
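As an illustration, here is a minimal sketch of such a weighted-average complexity score. The dimensions, weights and scores are made up for the example; substitute whatever dimensions matter in your organisation.

```python
# Minimal sketch of a weighted-average complexity score.
# Dimensions, weights and 1-5 scores are illustrative assumptions only.

complexity_weights = {
    "privacy_and_consent": 0.30,
    "data_access": 0.25,
    "technical_readiness": 0.25,
    "skills_available": 0.20,
}

def composite_complexity(scores: dict, weights: dict = complexity_weights) -> float:
    """Weighted average of 1-5 complexity scores; higher means harder to use."""
    return sum(weights[dim] * score for dim, score in scores.items())

# Example: a first-party CRM dataset as scored by the team
crm_scores = {
    "privacy_and_consent": 4,
    "data_access": 3,
    "technical_readiness": 2,
    "skills_available": 3,
}

print(f"Composite complexity: {composite_complexity(crm_scores):.2f} / 5")
```

Because the scores are only used for relative prioritisation, the exact weights matter less than scoring every candidate dataset consistently.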
Estimating impact, in my experience, needs a bit more work.
My first port of call is industry research, to get a sense of the benchmarks. Secondly, materials and case studies from data vendors are a valuable source of this information. In a perfunctory nod to Gen AI, let me also suggest that Perplexity AI is a good place to start building this intelligence up. However, I take this only as the first step. Over the years, I’ve created a list of trusted sources that I use for this purpose, but the numbers in the industry press are so generalised that it’s mostly difficult to map them to your specific use case.
My next step typically is to create an informed hypothesis of what impact a specific dataset should deliver. This is how I build the argument for the limited-time pilot. One of the main actions at this point is selecting an appropriate KPI. If your immediate priority is growth, then you’re likely to choose a volume metric, e.g. orders. If it’s efficiency, then you will probably choose an efficiency metric like cost per action. For limited-time pilots, I prefer efficiency metrics, because volume growth often depends on longer-term halo effects, on seasonality, and on other external factors over which we have no control.
Once we have decided on the KPI, a bit of spreadsheet work is required to work out the success criteria for the pilot. You can get as complex and complete as you like at this stage, but I’ll highlight the overall logic with a simple example below:
Now, in this, we also need to think about scale. Due to the well-established law of diminishing returns, efficiency typically reduces with scale, so we need to take that into account when constructing the pilot. In the above example, I would assume that applying the data source should break even in the first year, so that we’re in the green from the second year onwards. That’s a pretty attractive story to tell, right? In that case, the efficiency the pilot needs to demonstrate would be 3% x 5 = 15%.
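To make that arithmetic concrete, here is a minimal sketch of the break-even logic. The media spend, data cost and diminishing-returns multiplier are illustrative assumptions chosen to reproduce the 3% and 5x figures above, not numbers from a real engagement.

```python
# Minimal sketch of the pilot success-criteria arithmetic.
# All inputs are illustrative assumptions; plug in your own numbers.

annual_media_spend = 10_000_000   # full-scale working media budget (assumed)
annual_data_cost = 300_000        # cost of licensing / using the data source (assumed)

# Efficiency gain needed at full scale to break even in year one:
# the data pays for itself if it improves efficiency by its own cost.
break_even_efficiency = annual_data_cost / annual_media_spend   # 0.03, i.e. 3%

# A pilot runs at a fraction of full scale, before diminishing returns kick in,
# so a proportionally higher efficiency is expected. A 5x multiplier is assumed
# here to translate the full-scale break-even into a pilot target.
diminishing_returns_multiplier = 5
pilot_target_efficiency = break_even_efficiency * diminishing_returns_multiplier

print(f"Break-even efficiency at scale: {break_even_efficiency:.1%}")
print(f"Pilot target efficiency:        {pilot_target_efficiency:.1%}")
# -> 3.0% and 15.0%, matching the 3% x 5 = 15% in the example
```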
That is more or less the crux of it. I’ve found this line of reasoning to be quite effective in unlocking data projects. It’d be great to hear about your experiences and your comments. Do you use something similar? Something completely different? It’d be great to get the discussion going.
Also, if you’re interested in my sources of benchmarks, please comment, and I’ll do a post on that too.