Let’s Surf the AI Content Tsunami

In the future, humans will still win Oscars, Pulitzers and Lions. But they will create a tiny proportion of all content out there. The time will come when most content will be generated by AI. What will this new form of “generative” media mean for content, creators and the rest of us?

Gartner predicted that by 2018, 20% of business content would be authored by machines. A year on, we ran a straw poll from the stage of an “AI in Marketing” conference. Not a single delegate believed the number to be 20%. Most claimed to have never seen a single piece of generative content. So where is it?

Generative content is going pop.

Gartner’s confidence came from the fact that machines have already authored sports articles, press releases, legal documents, 3,000 financial reports each quarter for the Associated Press and one in every three articles on Bloomberg News. If you’ve never seen generative content, it may simply be because you didn’t know you were seeing it.

A better interpretation of our straw poll is that the marketers hadn’t yet used generative content themselves. Currently it is used in narrow domains where the information source (e.g. stock prices) or narrative (e.g. a football game) is standardised, making generation more of a rote task than a creative one. But two things are changing this: first, massive experimentation in the cloud, and second, the pattern-detection capabilities of deep neural networks. More dynamic forms of generative content are coming, with teams working on generative pop songs, novels, news articles and advertising. In many cases, machines are being taught not just to generate content, but to adjust their style and tone to better resonate with specific audiences. Content that is optimised for captivation opens up huge new commercial and ethical territory.
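
To make the “rote” end of that spectrum concrete, here is a minimal sketch of template-driven generation over standardised data. The company, figures and wording are invented purely for illustration; this is not any news agency’s actual system.

```python
# Minimal sketch of template-driven "rote" generation over standardised data.
# The company name and figures below are hypothetical.

def earnings_summary(company, eps, expected_eps, revenue_bn):
    """Turn structured financial figures into a one-sentence news blurb."""
    beat_or_miss = "beat" if eps > expected_eps else "missed"
    return (
        f"{company} reported earnings of ${eps:.2f} per share, which "
        f"{beat_or_miss} analyst expectations of ${expected_eps:.2f}, "
        f"on revenue of ${revenue_bn:.1f} billion."
    )

print(earnings_summary("ExampleCorp", eps=1.42, expected_eps=1.30, revenue_bn=12.7))
```

The narrative frame is fixed and only the figures change, which is why this style of generation works so well for quarterly earnings and match reports, and so poorly for anything that requires judgement.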

Generative content will be cheap, so eventually it will be everywhere.

An hour of a writer’s time may cost hundreds of dollars. An hour of uptime on a cloud server costs a fraction of a cent. What seems like a worrying prospect is likely to be more nuanced than the economics would suggest. Only one job has been totally eliminated by technology over the last 60 years in the US – the elevator operator. But every other job has changed in some way because of it, and creatives are not exempt. Understanding the impact of generative content is the best way for creators to take advantage of it.

First, machine-generated content will command a high share of consumer attention, regardless of whether it is “better” than the best human content, by virtue of its cost, speed and availability. Second, machine-generated content will usher in the personalisation of media. News stories will come in a flavour for every possible political persuasion. Ads will be tailored to every imaginable customer segment. In this world, human creators may very well find themselves in the business of telling people not what they want to hear, but what they need to.

Generative content won’t always feel human, and that won’t make it any less effective.

Machines taught to play chess or Go have been observed to discover and exhibit “alien” behaviours that are nonetheless highly effective. Generative content similarly has the potential to contain its own idiosyncrasies, especially if optimised for something other than copying humans. Attempts to generate commercial content, such as ads, which can be optimised for effectiveness rather than style, may ironically lead to the most creative machine-generated outcomes.

For novel and interesting generative styles to emerge, machines must be given the foundational means to create independently, along with access to humans who can provide feedback. In this way, perhaps both the machine and its human operators might learn something.

Generative content will teach us about originality and meaning without understanding either.

Learning machines can be made to create originally in a number of ways. Randomness can be introduced into their processes, yielding unexpected results. They can even be explicitly built to experiment. Just as anyone can appreciate the Mona Lisa without being able to replicate it, a world of useful, high-quality generative content may depend on machines’ ability to experiment naively but aggressively and learn what humans consider “good”, even if they don’t know how to get straight to “good” any more than we do.
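
To give the “randomness” idea a concrete form, here is a minimal sketch using a toy next-word distribution rather than a real language model. The sampling “temperature” below is a standard dial that trades predictability for surprise; the words and scores are invented for illustration.

```python
import numpy as np

# Toy next-word scores from a hypothetical model; real systems produce a
# distribution like this over tens of thousands of possible tokens.
words = ["profits", "dreams", "volcanoes", "teacups"]
logits = np.array([3.0, 1.5, 0.2, -1.0])

def sample_word(temperature, rng):
    """Higher temperature flattens the distribution, making rarer,
    more surprising words more likely to be chosen."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(words, p=probs)

rng = np.random.default_rng(0)
print([sample_word(0.5, rng) for _ in range(5)])  # mostly the safe choice
print([sample_word(2.0, rng) for _ in range(5)])  # more unexpected choices
```

At low temperature the machine plays it safe; at high temperature it starts saying things no copywriter would, which is where naive but aggressive experimentation begins.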

We may also need to accept that generative content will influence the style of the rest of humanity’s media. Tactics that machines rapidly discover to “work” – in the sense of attracting attention or clicks in the digital economy – will be picked up and used by all. Whether this new world of persuasive generative content is received well by consumers may well come down to how transparent media owners are about the source and intent of their material.

Generative content will need to be directed, so who directs it matters.

The hard limit of generative content may not be whether it is possible but where it should be used. Even as machines get better at generating content, they are not getting better at deciding what content to generate. Someone still needs to push the button.

In some instances, generative content will be a force for good. A hospital might employ generative prescription notes, personalised to patients, to nudge them towards strictly following their drug regimens. However, in the wrong hands, it could be misused. A restaurateur might use a generative system to overwhelm a competitor with believable but fake negative reviews.

Eventually, regulation may be needed for generative content. A rapidly evolving ethical framework for the media industry certainly will be. And most of all, the intent of the machines increasingly communicating with us should be made transparent. But perhaps you could say that of the people communicating with us, too.

