Although it first made headlines during the U.S. presidential election cycle, fake news is a global issue and as the UK General Election approaches, a London-based team of fact-checkers is working hard to limit the spread of false information that may influence the result.
Fake news is entirely fabricated content that is shared online and aims to be as controversial and outrageous as possible to gain maximum attention. Programmatic advertising has come under fire as the mechanism by which money is made from fake news, and the more views a story gets, the more ad revenue it can generate. Abandoning programmatic – which is expected to account for over three quarters of UK digital display ad spending this year – is not an option, so the industry is looking for other ways to tackle the issue.
Tech giants such as Facebook, Twitter, and Google are widely criticised for enabling the spread of fake news; false information circulated widely on these platforms following the recent devastating terrorist attack in Manchester. But the platforms are taking some measures to combat the growing trend, including part-funding the fact-checking initiative in the run-up to the election. Facebook has also published advice for spotting fake news, both on the platform itself and in print, removed accounts that spread false information, allowed users to flag questionable content, and updated its algorithms to spot potentially misleading content. Google is taking action to remove “downright false information” from its search results and prioritise high quality content instead.
So what more can brands do to combat this disturbing side effect of automated advertising and prevent their programmatic ads appearing next to fake news?
A call for clear definitions
One of the key issues with detecting fake news is defining precisely what it is. As an automated process, programmatic advertising relies on clear rules and standard definitions to operate. So, as long as fake news remains a subjective term it will be difficult to avoid misleading content entering the programmatic ecosystem.
In a practical sense, fake news is any content that is entirely made up to deceive readers and to generate as much attention and ad revenue as possible. However, other content types, such as humorous or satirical content designed to entertain or amuse, as well as strongly expressed personal opinions or hate speech, can also be labelled fake news.
While it is vital to have these clear definitions, it is also important to emphasise that fake news should not be tackled in isolation. Advertisers are just as keen to keep their ad dollars away from other types of inappropriate content such as extremist videos, as illustrated by the YouTube boycott, which apparently cost the platform 5% of its North American advertisers. The industry needs to take a more holistic approach to ad placements that helps advertisers avoid their ads appearing alongside any type of content – genuine or otherwise – that does not reflect their brand values.
This is where artificial intelligence (AI) comes in.
Understanding context through AI
To guarantee their ads are placed in an appropriate, brand-safe environment, advertisers need to fully understand the context of the web page their ads appear on, as well as the content within that page. This can’t be achieved using basic techniques such as keyword analysis or blacklists, which amount to little more than guesswork.
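To see why keyword blacklists amount to guesswork, consider a minimal sketch of how such a check typically works. The word list and function names here are purely illustrative:

```python
# Illustrative keyword/blacklist check of the kind the article criticises.
# The blacklist below is a made-up example, not a real industry list.
BLACKLIST = {"attack", "bomb", "scandal", "hoax"}

def keyword_flag(page_text: str) -> bool:
    """Flag a page if any blacklisted keyword appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in page_text.split()}
    return bool(words & BLACKLIST)

# The weakness: context is ignored entirely.
print(keyword_flag("Charity runners attack their personal best times"))   # → True (benign story blocked)
print(keyword_flag("Completely fabricated story with neutral wording"))   # → False (fake story missed)
```

A harmless sports story is blocked while a carefully worded fabrication sails through, which is exactly the gap contextual analysis is meant to close.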
Instead, advanced AI-based cognitive technologies such as semantic analysis and full Natural Language Processing can be used to automatically read content just as a human brain would. By understanding nuances and subtle changes in language, the technology can accurately reveal the sentiment and context of online content at granular page level, as well as the emotions that content evokes.
These AI-based technologies naturally filter out extreme or sensationalist content and give advertisers far more control over the type of content their ads are associated with. Analysis can be performed before a bid is placed to ensure programmatic ads are being positioned within relevant, on-brand environments, enabling advertisers to steer clear of any placements that carry negative associations for their brand or industry.
While AI-based technologies such as Natural Language Processing offer both scale and accuracy, they will never be 100% effective at detecting and avoiding inappropriate content – especially in the case of fake news, where definitions are still subjective and yet to be clarified. These technologies should always be combined with rigorous human review, pairing our natural ability to ascertain whether content is genuine and objective with the scale and advanced intelligence of semantic analysis techniques.
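One common way to combine automated analysis with human review is confidence-based routing: the classifier handles clear-cut pages, and borderline cases are escalated to a fact-checker. The thresholds and labels below are illustrative assumptions, not a description of any vendor's system:

```python
# Sketch of a human-in-the-loop routing rule: confident model decisions are
# automated, uncertain ones are queued for human review. All thresholds
# and labels are illustrative.
def route(fake_probability: float, low: float = 0.2, high: float = 0.8) -> str:
    if fake_probability >= high:
        return "block"         # model is confident the content is fake
    if fake_probability <= low:
        return "allow"         # model is confident the content is genuine
    return "human_review"      # uncertain: escalate to a fact-checker

print(route(0.95))  # → block
print(route(0.05))  # → allow
print(route(0.55))  # → human_review
```

Tightening the thresholds sends more pages to humans, trading review cost for accuracy – which is the balance between scale and judgement the paragraph above describes.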
Fake news may be gaining attention once again as the general election approaches, but rather than focusing on what is still largely a grey area, advertisers must take a more holistic approach to ad placement, considering the context and content they would like to be associated with or to avoid. They can then use an optimal combination of AI-based semantic analysis technologies and human verification to ensure effective, brand-safe programmatic placements.