Do Androids Dream of Pizza?: Building Human Brands for Artificial Interfaces


Artificial intelligence (AI) has been around for a long time. Powering Google’s page rankings and haunting cable providers’ phone lines, it just hasn’t been as visible as it is today. AI has been buried in technology you interact with every day, built by big companies to deliver more precise ad targeting or more relevant content. Facebook uses image recognition to tag friends’ faces in photos and machine learning to deliver clickbait-free content to your feed. Pinterest uses image recognition to scan the products in pins you like, serving up similar lamps and leather jackets. Google leverages natural language processing and deep learning to spot email spam. Siri uses voice recognition to queue up that new single you want to hear.

Why AI is Exploding Now

There are two reasons why artificial intelligence has recently become more visible. First, the dramatic increase in computing power has made it practical for machine learning systems to learn, self-optimize, and generate new content. Second, the massive influx of data collected over the last couple of years – billions of Siri voice commands and Facebook clicks – has given AI more information to learn from than ever before. Together, these have made it possible for artificial intelligence to go from making smart ads to making smart text threads, homes, speakers, and cars. Artificial intelligence has exploded beyond the bounds of banner ads and out into the world, filling our Facebook Messenger inboxes with bots built on Facebook M that recommend products from brands, or news updates from bots run by Quartz and CNN, to name a few.

With iOS 10 this fall, AI assistant Siri will be in our iMessages, suggesting OpenTable restaurant reservations while we trade texts with friends. Amazon’s Alexa is in our speakers, making it socially acceptable to ask your sound system to call a Lyft, make a cocktail, or play a game of Jeopardy – preferably all at once. AI is in our cars, with Uber swapping chatty drivers for robots in its Pittsburgh fleet. And it’s infiltrating our media, with The Washington Post using robo-journalists to write Olympics coverage and a bot named Benjamin writing the script for the indie film “Sunspring,” starring Silicon Valley’s Thomas Middleditch. We’ll have to see what SAG-AFTRA membership looks like for bots.

Start with Assistive AI Experiences Before Jumping to Hands-Off AI Experiences

Artificial intelligence is being baked into everything. In a world where everything is media, brands have to reckon with the impact of artificial intelligence on how they advertise. While many companies are jumping straight to hands-off experiences that are wholly AI, they should start by crafting assistive experiences that mix human and intelligent machine. Microsoft’s Tay chatbot, for example, was wholly autonomous, using machine learning to self-generate responses to tweets without a human intermediary – so the robot’s brand voice was left to the machine and the inputs it received from Twitter trolls. Delta, on the other hand, uses a mix of human and AI to field your phone calls, with AI pulling up your medallion status, name, and account information so a Delta representative can skip a few steps and get you what you need faster.
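To make the distinction concrete, here is a minimal sketch of an assistive flow in Python – the machine gathers context, a human delivers the response. The names (lookup_caller, HumanAgent) and the CRM data are hypothetical illustrations, not how Delta actually builds its system.

```python
# Hypothetical sketch of an assistive, human-in-the-loop flow, loosely modeled
# on the Delta example above. All names and data are illustrative.

def lookup_caller(phone_number, crm):
    """AI step: pre-fetch the caller's profile so the human can skip routine questions."""
    profile = crm.get(phone_number, {})
    return {
        "name": profile.get("name", "there"),
        "status": profile.get("medallion_status", "standard"),
        "recent_trips": profile.get("recent_trips", []),
    }

class HumanAgent:
    """Human step: the person, not the machine, owns the brand voice."""
    def respond(self, context):
        return f"Hi {context['name']}, I see your {context['status']} status – how can I help?"

def handle_call(phone_number, crm, agent):
    context = lookup_caller(phone_number, crm)  # machine gathers context
    return agent.respond(context)               # human delivers the response

crm = {"+1-555-0100": {"name": "Alex", "medallion_status": "Gold", "recent_trips": ["JFK-SFO"]}}
print(handle_call("+1-555-0100", crm, HumanAgent()))
```

In a hands-off experience, the respond step would be generated by the model as well; in an assistive one, the machine only tees up the context.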

If everything from your AI-enabled speakers to your surroundings knows who you are and understands your preferences, the world becomes a recommendation engine for brands to deliver thoughtful, unexpected human experiences. Imagine going to Chipotle and having a fresh burrito bowl waiting, tailored to your taste based on past orders, diet, and calorie count. Or jumping in an Uber to head home from work, only to find the groceries from your to-do list already in the back seat. As AI gets smarter, advertisers won’t just proactively send you push notifications; they’ll send groceries, laundry detergent, or even cars that you return only if you don’t want them.
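A toy sketch of that kind of preference-driven recommendation might look like the following; the purchase history, calorie data, and suggest_order function are all hypothetical, not any brand’s actual logic.

```python
# Toy preference-driven recommendation: suggest the order a customer is most
# likely to want, based only on a hypothetical purchase history. Purely illustrative.

from collections import Counter

def suggest_order(purchase_history, menu_calories, max_calories=None):
    """Return the most frequently ordered item that fits an optional calorie cap."""
    for item, _ in Counter(purchase_history).most_common():
        if max_calories is None or menu_calories.get(item, 0) <= max_calories:
            return item
    return None

history = ["burrito bowl", "burrito bowl", "tacos", "burrito bowl", "salad"]
calories = {"burrito bowl": 750, "tacos": 620, "salad": 480}

print(suggest_order(history, calories))                    # -> "burrito bowl"
print(suggest_order(history, calories, max_calories=700))  # -> "tacos"
```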

Algorithm Training is the New Talent Development

As brands move from assistive AI to hands-off AI experiences, they will need to invest in training and algorithm development the same way they invest in talent. Fully autonomous AI runs on machine learning, so its core interactions – designed by the people who build them – are shaped by the inputs they receive from the outside world. AI doesn’t inherently understand cultural or social norms; such sensitivities need to be learned before brands can hand over their communication to bots. Nissan’s autonomous driving team hired an anthropologist to ensure that its algorithms are humanistic. And in an episode of Andreessen Horowitz’s podcast, Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, emphasized the need to hire diverse minds for AI teams, so that bots are built by people with different “value systems” from “all walks of life.”
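One hedged illustration of what “training in sensitivities” can mean in practice: instead of letting a generative bot publish whatever it produces – the Tay failure mode – a brand wraps generation in human-authored guardrails. The sketch below uses hypothetical names (generate_reply, BLOCKED_TOPICS) and a deliberately crude keyword filter; real systems rely on far more sophisticated moderation.

```python
# Illustrative guardrail around a generative bot. The generator is a stub;
# the point is that human-authored policy sits between the model and the public.

BLOCKED_TOPICS = {"politics", "violence"}  # human-curated, brand-specific policy
FALLBACK = "Thanks for reaching out – a member of our team will get back to you shortly."

def generate_reply(message):
    """Stand-in for a learned model's raw output (hypothetical)."""
    return f"Here's my hot take on {message}!"

def contains_blocked_topic(text):
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def brand_safe_reply(message):
    if contains_blocked_topic(message):   # screen the incoming prompt
        return FALLBACK
    draft = generate_reply(message)
    if contains_blocked_topic(draft):     # screen the generated output too
        return FALLBACK
    return draft

print(brand_safe_reply("your new sneakers"))   # generated reply goes out
print(brand_safe_reply("politics today"))      # fallback, handed to a human
```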

As artificial intelligence moves from automated to generative, it’s easy to envision a next-gen creative team composed of a copywriting bot that comes up with absurd concepts unbound by reason and human creatives who understand brand voice. The next wave of endorsements could have brands hosting training sessions with big-name talent, imbuing the bot with the wit of Chelsea Peretti or the genius of Neil deGrasse Tyson.

Brands Still Need a Human Touch

We’ve all felt the familiar frustration of AI not feeling human enough – shouting at Siri for not understanding our questions, or complaining when a bot doesn’t surface what we’re looking for. As artificial intelligence continues to seep into media and algorithms become the new interfaces through which we interact, brands will have to determine what their voice sounds like on these platforms. And no matter how sophisticated AI becomes, many customer interactions will still require a human touch.

