
UT News

Facebook is the Canary in the Coal Mine for the Future of Artificial Intelligence

Our elected representatives lack both the technical aptitude and the wisdom to provide the answers we need.


As Facebook CEO Mark Zuckerberg finished his walk through Congress, his inquisitors were left mumbling confusedly about data scraping and the Dark Web. Meanwhile, the rest of us were left to ourselves, asking hard questions about how we should react to the way Facebook seems to run its business.

Do you have any idea what the Cambridge Analytica data breach, Russian election meddling and fake news should mean for your own online presence? What about all the other scandals we haven’t yet discovered, or never will? Should you delete Facebook? Should you stop searching for unmentionables on Google? Are you already toast?

Only one thing is clear: Our elected representatives lack both the technical aptitude and the wisdom to provide the answers for you. When it comes to the hard questions surrounding data use and privacy, we are on our own.

Most of us ignored these questions 20 years ago when we first Googled something. We ignored them 10 years ago when we first signed up for Facebook. And we ignored them again five years ago when we first let our iPhones have our thumbprints.

But we can’t ignore these questions any longer, because they are about to get a lot more important. Facebook is the canary in the coal mine for the future of artificial intelligence.

Facebook became a successful company by mastering the art of artificial intelligence for targeted marketing. It fed masses of your personal data into powerful statistical models to yield targeted “content” recommendations. So did Amazon and Google and countless other companies. Many of the best minds in AI cut their teeth trying to get you to click on ads.

But AI will soon be used to help make decisions far more consequential than the ads you see on Facebook or the products Amazon recommends for you. It will inform which medical treatments you receive, which jobs you are offered and which colleges accept your application. And at the heart of every such system will sit one thing: your data.

Are you prepared to leave these matters to 30-year-old CEOs? The stakes are huge. If we get this right, AI could offer us safer workplaces, better health care, freedom from drudgery and fewer language barriers. It could bring us a world where people and machines work together to reach fairer decisions — about hiring, scholarships, loans and so much more.

But it could all go so badly wrong. Imagine a hospital that’s as cavalier with your data as Facebook. Or a college admissions officer who’s as lax about algorithmic bias as Google’s ad-targeting system — which, according to a recent study from Carnegie Mellon University, showed online ads for high-paying executive jobs five times as often to men as to women. That’s the treacherous path we’re walking now.

It does not need to be this way. We might, for example, look to Sweden as a model. It has some of the world’s strictest data-privacy laws, but also one of the world’s most advanced digital economies. Witness Spotify: a notably scandal-free Swedish company built on data science and AI.

So what can you as an ordinary citizen do to nudge us in the right direction? I’m a college professor, so it will not surprise you that my answer is education. Culture is more effective than law at enforcing good behavior — and abuse nearly always trades on a culture of ignorance, credulity and inertia.

The ideas behind AI may be surrounded by a force field of technical jargon, but they’re surprisingly simple. How does AI work? Why does it depend so strongly on data? When and where does it go wrong? I promise you the answers are within your reach — and if you care about the world, few questions are more urgent today.

With a bit of knowledge, you’ll be much more prepared to play an informed role in the coming age of thinking machines. Just as importantly, you won’t be such a mug when the next Facebook comes around.

James Scott is an associate professor of statistics and data science at the McCombs School of Business at The University of Texas at Austin. He is the co-author of the forthcoming book “AIQ: How People and Machines Are Smarter Together.”

A version of this op-ed appeared in Fortune.



Media Contact

University Communications
Email: UTMedia@utexas.edu
Phone: (512) 471-3151

Texas Perspectives is a wire-style service produced by The University of Texas at Austin that is intended to provide media outlets with meaningful and thoughtful opinion columns (op-eds) on a variety of topics and current events. Authors are faculty members and staffers at UT Austin who work with University Communications to craft columns that adhere to journalistic best practices and Associated Press style guidelines. The University of Texas at Austin offers these opinion articles for publication at no charge. Columns appearing on the service and this webpage represent the views of the authors, not of The University of Texas at Austin.
