By Peter Wallace, MD for EMEA at GumGum
The case for contextual being the future of digital advertising seems to stack up almost weekly. News emerged recently that Oracle had decided to end third-party data services in Europe, following a multi-million dollar class-action lawsuit against the company over alleged breaches of GDPR data privacy regulations.
Using technology that matches ads with context – the online content that users are seeking and viewing in any given moment – is a much safer alternative, both for users and marketers. GumGum recently published research in partnership with Dentsu that finally proved what our ten years of experience with contextual targeting have shown us anecdotally: that impressions earned this way are significantly more accurate at targeting online audiences and are more cost-effective than those gained via behavioural targeting.
But given that contextual intelligence incorporates some complex technology, we know it can be tough for brands and agencies to feel confident about commissioning these services. With this in mind, here is my list of the key questions to ensure that whoever you choose truly knows the territory.
Does your platform employ machine learning?
One of the key technologies driving the evolution of contextual intelligence today is machine learning, a subset of artificial intelligence that ‘trains’ computers to learn automatically through experience. It isn’t strictly necessary to use machine learning in contextual targeting, but it makes a huge difference to the efficiency and accuracy of the software deployed to read and make decisions about context.
Some of the original providers of contextual targeting tech may not have incorporated machine learning or the use of deep neural networks. The latter refers to a way of making AI even more effective by using a system that mimics the way the brain learns. This technology is particularly useful when you’re analysing large quantities of unstructured data, like web pages. Without deep neural networks, the algorithms will be that much simpler and less able to detect important subtleties within text or visual-based content.
How does your brand safety system operate?
Contextual targeting isn’t just about reaching users based on the content they are viewing. It also incorporates an element of ‘anti-targeting’ – i.e. keeping ads clear of any dangerous or dubious content.
It’s important to understand how a provider achieves this, as there is a spectrum of complexity. Keyword-blocking is very much the basic service: it relies on avoiding content tagged with certain words that the system pre-defines as red flags. But it is a blunt instrument – does one mention of the word ‘shoot’ mark a web page as violent content, for example? Or is it just an article about photography or basketball?
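To see why keyword-blocking misfires, here is a minimal sketch of the approach; the blocklist and headlines are hypothetical illustrations, not any vendor’s actual system:

```python
# A naive keyword blocker: flag a page if any blocklisted word appears,
# with no regard for the surrounding context.
BLOCKLIST = {"shoot", "attack", "crash"}

def is_blocked(text: str) -> bool:
    """Return True if any blocklisted word appears anywhere in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A genuinely violent headline is caught...
print(is_blocked("Police shoot suspect after attack"))    # True
# ...but so is a harmless photography article: a false positive.
print(is_blocked("How to shoot portraits in low light"))  # True
```

Both headlines are blocked, even though only one poses a real brand-safety risk – which is exactly the bluntness the question above is probing for.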
A much better way of keeping ads safe is to analyse content using natural language processing (NLP). NLP examines the pattern, tone and context of text, much as a human reader would. It takes brand safety to a level of sophistication that simple keyword blocking can’t achieve.
How do you analyse image and video content?
GumGum uses a technology called Computer Vision, which allows us to examine the pixels in images and then classify them accordingly. Understanding the visuals in this way, along with our capabilities around analysing text using natural language processing, means we are able to scan the entire web page and understand its meaning and context at a much deeper level. It’s because of this combination of technologies that a recent study found our contextual intelligence platform, Verity, was 1.7X more accurate than other leading contextual vendors.
Similarly with video analysis, there are different levels of accuracy. Many providers will just programme their systems to look at the metadata that videos are tagged with. This can leave an advertiser open to potential brand safety threats, because the tagging can be simplistic and minimal (if it exists at all). We go a step further by integrating NLP with Computer Vision to build a much more accurate picture of the video content in question, identified through frame-by-frame analysis.
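The value of frame-by-frame analysis over metadata-only checks can be sketched as follows. The per-frame labels below stand in for hypothetical outputs of a computer-vision classifier, and the aggregation rule is an illustrative assumption, not a description of Verity’s internals:

```python
from collections import Counter

def classify_video(frame_labels: list[str], unsafe: set[str],
                   threshold: float = 0.1) -> str:
    """Mark a video unsafe if more than `threshold` of its frames
    carry an unsafe label, regardless of what its metadata claims."""
    counts = Counter(frame_labels)
    unsafe_frames = sum(counts[label] for label in unsafe)
    return "unsafe" if unsafe_frames / len(frame_labels) > threshold else "safe"

# Metadata might say "sports highlights", but the frames tell another story:
labels = ["sports"] * 8 + ["violence"] * 2
print(classify_video(labels, unsafe={"violence"}))  # unsafe
```

A metadata-only system would have cleared this video on its tag alone; inspecting the frames surfaces the risky segment.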
How do you source the ground truth data?
For contextual analysis to take place, computers can be programmed to learn, but humans have to feed in and define the information that starts this process – something referred to as ‘ground truth data’. It’s worth bearing in mind that ground truth can never be an entirely objective set of facts; it is a human interpretation. This is why contextual suppliers need a good system for questioning their teams’ unconscious biases, to prevent them seeping into the programming.
It’s worth finding out how a provider handles this element. You can’t beat the human touch here. It’s expensive in terms of labour to deploy, but necessary for ground truth data that is as unbiased and accurate as possible.
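One common way to interrogate labelling bias is to have multiple annotators label the same content and measure their agreement – low agreement signals ambiguous guidelines or individual bias in the ground truth. A minimal sketch using Cohen’s kappa, a standard chance-corrected agreement metric (the labels below are hypothetical):

```python
def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Agreement between two annotators, corrected for the agreement
    you would expect by chance alone. 1.0 = perfect, 0.0 = chance level."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

annotator_1 = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
annotator_2 = ["safe", "unsafe", "unsafe", "safe", "safe", "safe"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))  # 0.25
```

A kappa this low would tell a supplier that their labelling guidelines need tightening before the data is fed into training.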
Legislation to protect consumers’ data privacy will continue to be rolled out globally – Gartner has predicted that in just three years’ time, 65% of the world’s population will be protected by these types of laws. As such, the momentum behind privacy-friendly contextual intelligence will continue to increase over the coming years.
So it makes sense to start educating yourself and your team – and keep asking potential suppliers to prove their contextual credentials. The tech might be complex but the benefits to your business are likely to be significant, and more than worth the investment.