
Shelly Palmer - Reflecting on AI and Monoculturalism

SASKTODAY's newest columnist, Shelly Palmer, has been named LinkedIn's "Top Voice in Technology" and writes a popular daily business blog.

Does generative AI pose a threat to the rich tapestry of human expression? Are we on an inevitable path toward digital monoculturalism? This essay was written in the fall of 2016 and originally titled, “Digital Monoculturalism: Small Changes, Big Impact.” Considering how quickly generative AI content creation tools and copilots are being woven into the fabric of our lives, now would be a very good time to start thinking about what our human-created cultures truly mean to us.


While Waze leads us to our destinations via the quickest route, our dependence on this kind of decision-support system may also be the quickest route to a monocultural society.

You’ve said the word “algorithm” a thousand times this year, and you may have even written out your algorithmic goals in English, but have you ever coded an algorithm? Do you really know how any AI model is performing? What tiny mistakes (or purposeful small changes) are being made to subtly guide our decision-making?

Humans Are Incredibly Bad Decision-Makers

To make a good decision, you have to properly assess risk. Sadly, people almost always improperly assess risk. For example, you have a vanishingly small chance of dying in a plane crash, but a far greater chance of dying in a car accident on the way to the airport. You have a tiny chance of being killed by a terrorist, but a much higher chance of being killed by gun violence. By the numbers, cars and guns are far more life threatening than planes and terrorists. So it should be easy to find people to advocate for spending cuts on anti-terrorist programs in favor of funding programs to reduce gun violence. But without a clear understanding of the risks we face each day, our decisions are controlled by our hearts, not our heads.

That said, not every decision is emotional. We are often called upon to make conscious decisions, and, fortunately, we have developed some statistical tools that can increase our odds of success. In his book The Wisdom of Crowds, James Surowiecki explains how to use a diverse collection of independent thinkers for decision-making.

The book opens with a story about a crowd at a country fair. All of the people in the crowd were asked to guess the weight of an ox, and while none of them got the correct answer, the average of all of the answers was closest to the ox’s actual weight. Importantly, this methodology actually requires individuals to think independently. If the individuals are influenced by expertise, group dynamics or other types of bias, the results will be skewed.
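Because the ox story is at heart a statistical claim, here is a minimal Python sketch of it. The true weight, crowd size, and error sizes are illustrative assumptions rather than figures from the book; the point is simply that averaging many independent errors tends to cancel them out, while correlated, "influenced" guesses skew the result.

```python
# Minimal "Wisdom of Crowds" sketch; all numbers are illustrative assumptions.
import random
from statistics import mean

random.seed(42)
TRUE_WEIGHT = 1198  # lbs; a hypothetical "actual" weight for the ox

# Independent guessers: each is individually far off, with unbiased random error.
independent = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(800)]

# Influenced guessers: everyone anchors on one loud "expert" who guessed high,
# so the errors are correlated and no longer cancel out.
expert = TRUE_WEIGHT + 300
influenced = [0.7 * expert + 0.3 * (TRUE_WEIGHT + random.gauss(0, 150)) for _ in range(800)]

print(f"true weight:               {TRUE_WEIGHT}")
print(f"average independent guess: {mean(independent):.0f}")  # lands near the truth
print(f"average influenced guess:  {mean(influenced):.0f}")   # skewed toward the expert
```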

We’ve Set the Bar Pretty Low for AI

In an “observe and react” architecture, AI systems can use algorithms that loosely mimic a “Wisdom of Crowds” approach. If the AI system just makes “pretty bad” decisions (as opposed to “incredibly bad” decisions), the improvement will be measurable. But as we start to build interactive man/machine partnerships, things are going to change.

If only a few people were driving around in cars with relatively accurate traffic congestion maps, they would benefit from the knowledge and enjoy an alternative, and presumably quicker, route. However, the overall impact of AI on the larger traffic system would be negligible.

But I just logged on to Waze, and there are over 53,000 Wazers around me. All Wazers are seeking the fastest route to their respective destinations, and Waze is doing its best to help them. Are we all being sent via the best route? The better Waze gets, the more people will use it, and the more we use it, the smarter Waze will become, until … we are totally dependent on Waze to get us where we need to go. With 53,000 vehicles around New York City being “routed” by Waze, the impact on the larger traffic system is material.
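A toy simulation can make that "negligible vs. material" distinction concrete. This is not Waze's routing algorithm; the congestion formula, route capacities, and driver counts below are assumptions chosen only to show the feedback effect.

```python
# A toy feedback-loop sketch, not Waze's actual routing algorithm.
# The congestion formula, capacities, and driver counts are illustrative assumptions.

DRIVERS = 53_000

def travel_time(cars, free_flow_minutes, capacity):
    """Simple congestion model: travel time grows as a route fills up."""
    return free_flow_minutes * (1 + (cars / capacity) ** 2)

def simulate(routed_share):
    """Drivers who follow the recommendation take route A (nominally faster);
    everyone else splits evenly between routes A and B."""
    routed = DRIVERS * routed_share
    rest = (DRIVERS - routed) / 2
    time_a = travel_time(routed + rest, 20, 30_000)  # recommended route
    time_b = travel_time(rest, 25, 30_000)           # the alternative
    return time_a, time_b

for share in (0.02, 0.75):
    a, b = simulate(share)
    print(f"{share:.0%} of drivers routed alike -> route A: {a:.1f} min, route B: {b:.1f} min")
# With only a few users, the recommendation barely moves the system; once most
# drivers follow it, the "fast" route is no longer fast at all.
```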

The Road to Digital Monoculturalism Is Paved with Good Intentions

This would not matter as much if it were just about Waze. But Waze is a proxy for its parent company, Google. And it is also a proxy for Amazon and Facebook and IBM and Microsoft – the other founding members of the newly formed "Partnership on AI." Add Apple, and you now know the names of the very few "artificial intelligences" that you have been training to make decisions for you.

As AI and machine learning improve, more people are going to benefit, and through our interaction with the machines, the AI systems will make better decisions for us and in turn become more and more popular. And then it will happen: a small number of AI systems (most likely the aforementioned “Partnership on AI” group) will be making most of our decisions for us. We might not even notice that in the process, we devolved our diverse, multicultural world into a collection of distinct digital monocultures.

AI will sort our news feeds (it already does), our entertainment choices (it already does), our way-finding (it already does), and the energy efficiency of our homes and offices (it already can, but it is not widely deployed); make our financial decisions (it mostly does); make our medical decisions; make our business decisions; and probably make our political decisions too. The list of potential AI applications is bounded only by need and imagination.

I Don’t Know What I Don’t Know

I’m not really worried about rogue computers threatening our lives. I’m worried about the small number of programmers and coders charged with realizing the financial and political goals of their patrons. Could a ubiquitous social network skew or even direct an election? Could a traffic control system delay certain people from getting to work on time? Could an AI-enhanced financial services company deny loans or insurance due to zip code or race because it is the “best outcome” based on its programming? Could we train the AI that controls our news, communications and entertainment to restrict us to our comfort zones without even realizing what we’ve done?

I can imagine a world filled with digital monocultures, isolated from one another by feedback loops. Cognitively computed comfort zones will be much worse than our self-crafted comfort zones – worse because we won't know that, as a few artificial intelligences strive to algorithmically optimize our lives, the gains will come at the cost of our incredibly bad human decision-making. Which I'm sure we're going to miss.
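To make the feedback-loop mechanism concrete, here is a hedged sketch of a recommender that keeps serving more of whatever it already served. The topics, update rule, and numbers are hypothetical, not drawn from any real news or entertainment system; the takeaway is how quickly an even profile collapses into a single lane.

```python
# A hypothetical recommender loop, not any real platform's system; topics,
# weights, and the reinforcement rule are assumptions for illustration.
import random

random.seed(7)
TOPICS = ["politics", "sports", "science", "arts", "travel"]
weights = {t: 1.0 for t in TOPICS}  # start with an even interest profile

for _ in range(200):
    # Recommend in proportion to the learned profile...
    shown = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
    # ...and reinforce whatever was shown, because shown items get the clicks.
    weights[shown] *= 1.1

total = sum(weights.values())
for t in sorted(TOPICS, key=weights.get, reverse=True):
    print(f"{t:<9} {weights[t] / total:.1%} of the feed")
# The profile concentrates on whichever topic happened to get an early lead:
# a comfort zone computed for us, one small nudge at a time.
```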

Author’s note: This article was originally published on October 2, 2016. This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.


ABOUT SHELLY PALMER

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he is a regular commentator on CNN and writes a popular daily business blog.
