The independent identity solutions entering the market aren't as good as a common industry standard would be, but they are probably the best we will have for some time. The government won't create a single browser-based identifier: its goal is to protect consumer privacy, not to help the digital advertising industry. A single independent solution is unlikely to be adopted by every player that matters, from Google to Apple. And an IAB standard would only cover PII from logged-in users, not unknown visitors. So we're stuck managing a hodge-podge of options for the foreseeable future.
Once again, publishers bear the most complexity when the industry doesn't play nice. Right now, many of them are holding fast to cookie-based options because the money and the data are still flowing. Within a year or so, though, currently popular cookie-based ID solutions will likely lose their appeal, and newer cookie alternatives will become more enticing. Publishers don't have the bandwidth to adopt them all, so they need a strategy that maximizes efficiency in an inefficient landscape. Advertisers and publishers will work together to create scaled audiences (and to handle campaign measurement, among other things) without third-party cookies, which means more companies will be trading on IDs and compliant data that must flow through the ecosystem as easily as possible.
Working with common naming conventions, creating standards and frameworks, and building toward transparency can ease experimentation and adoption as cookieless ID solutions and other new data-sharing patterns emerge. But this will only happen if publishers start experimenting now and plan for the complexities that will arise along the way.
New Logistics To Build For
Each new identity solution requires an entry point – a way for an advertiser or publisher to actually collect first-party data with user consent. These entry points will presumably create more friction than the status quo. Publishers and advertisers will also need automated prompts and a system to check collected data for errors, de-duplicate it, and merge it. Larger publishers will be able to navigate this tradeoff, and consortiums will emerge to share data with one another. Smaller publishers will not have the resources or clout to join those consortiums, so the open market's long tail will wither unless a cookie alternative that is just as easy to adopt is developed.
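The check-dedupe-merge step described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the field names (`hashed_email`, `source`, `consent`) and the rule that consent survives only if every duplicate record agrees are assumptions made for the example.

```python
def merge_records(records):
    """Collapse records that share a hashed email, keeping every
    collection source and treating consent as granted only if all
    duplicate records agree. Field names are hypothetical."""
    merged = {}
    for rec in records:
        key = rec["hashed_email"]
        if key not in merged:
            # First time we see this user: copy the record and start
            # tracking which entry points it came from.
            merged[key] = {**rec, "sources": {rec["source"]}}
        else:
            entry = merged[key]
            entry["sources"].add(rec["source"])
            # Conservative consent: one revoked record revokes all.
            entry["consent"] = entry["consent"] and rec["consent"]
    return list(merged.values())

records = [
    {"hashed_email": "a1b2", "source": "newsletter", "consent": True},
    {"hashed_email": "a1b2", "source": "paywall", "consent": True},
    {"hashed_email": "c3d4", "source": "newsletter", "consent": False},
]
```

Even at this toy scale, the design question is visible: the merge policy (here, "all duplicates must consent") is a business decision each publisher has to make explicitly.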
Once a publisher collects first-party data, it has to be made to scale. Partners need to read it, it needs to be matched to segments, and DSPs need it to match users across devices. Doing this for a single data set is complex enough; doing it for multiple identities is redundant. The result is SSPs, DSPs, and ID providers each creating and managing their own match tables (or whatever structure emerges) – an unfortunate example of duplicative rather than collaborative work.
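The duplication problem is easiest to see in data terms. Below is a hedged sketch of the match-table pattern: each partner keeps its own mapping from a shared publisher key to its internal ID, so the same user gets re-resolved once per partner. All IDs and partner names here are invented for illustration.

```python
# Each partner maintains its own match table keyed on the same
# publisher-side identifier -- the duplicative structure described above.
match_tables = {
    "ssp": {"pub-user-123": "ssp-9f3", "pub-user-456": "ssp-77a"},
    "dsp": {"pub-user-123": "dsp-41c"},
}

def resolve(user_id):
    """Repeat the same lookup against every partner's private table.
    A shared, collaborative table would do this work exactly once."""
    return {partner: table.get(user_id)
            for partner, table in match_tables.items()}
```

A call like `resolve("pub-user-456")` also shows the coverage gap: a user can exist in one partner's table and be missing (`None`) from another's, which is where cross-device matching breaks down.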
These logistics become harder still when publishers are liable for managing the data under GDPR, CCPA, and whatever policies follow. Data regulation will only grow more complex and rigorous, and few publishers will have the appetite or ability to solve these complexities on their own. CMPs help publishers manage user consent, but a publisher may need one or more CMPs to cover different regulations – each of which must first be integrated into its current adtech stack and data workflows, and then work well with the setups of its partners.
Lastly, buyers will only want to work with a few ID providers, at most. The two key selection criteria will be reach and quality (degree of determinism). Individual publishers who try to create their own IDs will therefore struggle to attract demand without significant market power. Instead, they should take the time now to experiment with the various privately funded identity providers and see who delivers the most value before the big shift away from cookies.
Probabilistic Modeling Will Be More Important – Scoring Helps
Leaving the cookie behind means brands and publishers will have more accurate data, collected from known individuals, but a smaller footprint, because it misses the many visitors who never log in or opt in. Perhaps 20% of users are logged in on a given site, so audience addressability from the buyer's perspective would be insufficient without data sharing between publishers, or without sophisticated, scaled third parties offering probabilistic modeling. Building confidence in that modeling is an exercise in "degrees of determinism." Everyone benefits if buyers can trust that they are trading on real, good data.
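The addressability gap above can be put in back-of-envelope numbers. The 20% logged-in figure comes from the text; the probabilistic match rate and the model-confidence weight are invented purely to show the shape of the calculation.

```python
# Back-of-envelope addressability math. Only the 20% logged-in share
# is from the text; the other two parameters are assumptions.
audience = 1_000_000
deterministic = int(audience * 0.20)   # known, logged-in users
modeled_rate = 0.50                    # assumed probabilistic match rate
model_confidence = 0.7                 # assumed "degree of determinism"

# Probabilistic modeling extends reach over the anonymous remainder...
modeled = int((audience - deterministic) * modeled_rate)
# ...but a buyer should discount it by how deterministic it really is.
effective_reach = deterministic + modeled * model_confidence
```

The point of the discount term is the article's point: modeled reach is only worth what its degree of determinism says it is worth.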
As in the bond market, a single rating system would give buyers confidence in how credible the data is and how it has performed. A common data quality assessment framework works best if it is universally accepted: it creates a layer of security and will help normalize pricing quickly, producing a healthier market sooner.
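A bond-style rating for data quality could be as simple as mapping a determinism score to a tier buyers can price against. The thresholds and tier labels below are hypothetical; the value of any such scheme, as the text argues, lies in everyone agreeing on one.

```python
def rate_segment(determinism: float) -> str:
    """Map a degree-of-determinism score (0.0 to 1.0) to a bond-style
    quality tier. Thresholds and labels are invented for illustration."""
    if determinism >= 0.9:
        return "AAA"  # deterministic: logged-in, consented data
    if determinism >= 0.7:
        return "AA"   # strong probabilistic match
    if determinism >= 0.5:
        return "A"    # moderate probabilistic match
    return "B"        # heavily modeled, priced accordingly
```

With a shared scale like this, a buyer comparing segments from two ID providers is comparing ratings, not reverse-engineering each provider's methodology.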
Rather than try to solve every problem with a common ID – which IAB CEO Randall Rothenberg admitted would require massive creativity and technology that doesn't exist today – we need to embrace reality: ID solutions are already in the market, and more are being built right now. The complexity is already here.
As with quality auditing for IVT and viewability, which private third-party companies manage, we're likely to add more value by focusing on ID standards in data trading and on scoring that everyone can use as a common language, whatever ID solution they implement. For publishers, who will be managing the largest variety of solutions, this will guide their business. The industry would benefit from a common playbook.