Solving The Social Dilemma: Building Ethical A.I.


By Field Garthwaite, CEO of IRIS.TV

Silicon Valley is having a moment.

Antitrust investigations are being launched against the largest companies in tech. Scientific research shows that our phones and apps are creating new forms of addiction comparable to substance abuse. New data regulations are being enacted all over the world to limit the collection and use of our personal data. The recent documentary ‘The Social Dilemma’ shines a light on the “massive scale contagion experiments” happening on platforms like Facebook. The film details how entertainment and social apps learn from our behavior to make us more addicted to them. By capturing more of our attention, they are learning how to wedge themselves into our subconscious.

Are the companies we trust with so much of our personal data actually trustworthy? Do the employees of these companies have ethical or moral obligations to do the right thing?

It is a massive problem. One that every marketer, regulator, engineer, product manager, entrepreneur and investor should be thinking about.

How do we build successful businesses that don’t trade in human futures?

Build Privacy First Products

No app or service requires our personal data in order to offer email, maps, messaging, or social feeds. Harvesting that data just happens to be the preferred business model in 2020. The next wave of services that focus on protecting users’ privacy can thrive, but it will require collaboration and a willingness from marketers to invest in finding new ways for apps and services to deliver a return on ad spend (ROAS).

Build Ethical A.I.

Businesses are already being built with “ethical AI” today. Teams have full control over how the machine learning works, and there is proactive oversight of cohorts, weights, and goals.
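One concrete form that proactive oversight can take is a recurring audit of model outcomes across user cohorts. The sketch below is a hypothetical illustration, not a description of any specific company's system; the cohort names, threshold, and metric are illustrative assumptions.

```python
# Hypothetical sketch: flag user cohorts whose model accuracy lags the
# overall rate by more than an agreed-upon threshold, so a team can review
# the model before it ships. All names and values here are illustrative.

def audit_cohorts(predictions, labels, cohorts, max_gap=0.05):
    """Return overall accuracy and cohorts that fall more than max_gap below it."""
    overall = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    flagged = {}
    for name in set(cohorts):
        idx = [i for i, c in enumerate(cohorts) if c == name]
        acc = sum(predictions[i] == labels[i] for i in idx) / len(idx)
        if overall - acc > max_gap:
            flagged[name] = acc  # cohort under-served relative to the whole
    return overall, flagged
```

A team might run a check like this on every retrain and block deployment when any cohort is flagged, which is one way "proactive oversight on cohorts" can be made routine rather than aspirational.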

Money and investment, not regulators, will determine whether we can live in a privacy-first economy.

Marketers can have the largest impact in directing change. But brands and agencies can’t keep investing in the same platforms at the same rate as before if they expect them to change. Pausing spend for one month by participating in a #stophateforprofit PR initiative isn’t going to protect us from social engineering and behavior modification.

Interpublic Group (IPG) has implemented a “Media Responsibility Audit” of social platforms but the rate of change has been too slow. We need a bigger change, faster. Today Fortune 500 brands are responsible for a large percentage of Facebook’s and Google’s overall revenue. If these brands don’t act—we can’t expect small businesses to make the difference and fund innovations.

Change will only come when better options arise to enable brands to realize their return on investment goals.

Marketers can help facilitate change by ensuring that they deliver on brand outcomes and ROAS, while supporting premium content, journalism, and divesting from firms that thrive on misinformation and emotional manipulation to drive profits.

Why now?

Social engineering has been shown to amplify hatred, genocide, and tribal divisions in social bubbles around the world, across races, religions, and political parties. Consider the following:

Facebook offered a rebuttal to the Social Dilemma in a PDF posted to its corporate site.

The response by Facebook was quickly scrutinized and countered by social media researchers.

One statement in Facebook’s response stood out:

“The idea that we allow misinformation to fester on our platform, or that we somehow benefit from this content, is wrong.”

Multiple scientific papers have shown how misinformation goes viral on social platforms. Misinformation increases usage and overall attention from users, and attention is the core asset that Facebook harnesses to drive ad sales.

Viral misinformation has become a profitable business for Facebook, Google, Twitter, TikTok, and all social networks.

What Can We Do About It?

Marketers

Marketers need to take the first step and set an example with their checkbook by divesting from platforms that enable widespread social engineering, misinformation and political propaganda.

  • Diversify investment away from tech juggernauts like Facebook and Google
  • Focus investment into new formats like connected TV to drive improved return on marketing investment

Adtech, Martech, and Data Companies

These companies can help the brands that use their platforms actively divest from social networks and instead turn to emerging formats like Connected TV, which can drive higher ROAS than social media.

  • Promote the use of privacy-first platforms where user consent is explicit
  • Build an industry on top of 1st party data and contextual intelligence that protects consumer privacy and delivers on marketer and agency expectations for ROI and efficiency
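A privacy-first platform with explicit consent can be thought of as a gate: personal first-party data is used only for purposes the user has opted into, and everything else falls back to contextual signals. The sketch below is a hypothetical illustration of that pattern; the field names and purpose labels are assumptions, not any real platform's API.

```python
# Hypothetical sketch of an explicit-consent gate for a privacy-first
# ad platform. Purposes, field names, and the fallback rule are
# illustrative assumptions only.

ALLOWED_PURPOSES = {"measurement", "contextual_ads"}

def allowed(user_consents, purpose):
    """True only if the user explicitly opted in to this specific purpose."""
    return purpose in ALLOWED_PURPOSES and purpose in user_consents

def select_signal(user, purpose):
    """Use first-party data only with consent; otherwise fall back to context."""
    if allowed(user.get("consents", set()), purpose):
        return {"type": "first_party", "data": user.get("profile", {})}
    # No consent: target on page-level context instead of the person.
    return {"type": "contextual", "data": user.get("page_context", {})}
```

The design choice here is that the default path is contextual: absent an explicit, purpose-specific opt-in, the system never touches personal data, which is the behavior the bullet points above call for.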

Publishers

News publishers and premium content owners are in an ideal position to drive change too. Instead of “going where the audience is,” publishers can focus on utilizing ethical practices of machine learning and marketing automation to grow their own businesses—without relying heavily on external platforms.

Premium content and large audiences are the core value drivers of media and advertising. Publishers that can bring audiences to their own sites and apps, and give brands transparency into how their inventory is brand safe and brand suitable, will thrive.

What’s Next?

Ultimately, it is marketers who will have the biggest impact in combating misinformation now and going forward. Brands should reward companies that invest in journalism, utilize ethical A.I., and leverage trusted practices to inhibit the spread of misinformation. Given the size and scale of Facebook and Google, and the malfeasance therein, the biggest difference that can be made over the next year is how actively marketers diversify investment away from these platforms and toward more ethical options.

