Carnegie Mellon University

CONSEQUENTIAL

Consequential is a podcast that looks at the human side of technological change and develops meaningful plans of action for policymakers, technologists and everyday people to build the kind of future that reduces inequality, improves quality of life and considers humanity. Over the course of the first season, we will unpack important topics like industry disruption, algorithmic bias, human-AI collaboration, reskilling and the future of work, as well as discuss policy interventions for using emerging technologies for social good.

Hosts: Lauren Prastien, Eugene Leventhal

Coming Soon

Block Center for Technology and Society Podcast - Consequential

Update: Season 2 & Coronavirus Mini-Season

In light of recent developments related to COVID-19, we have decided to push back our second season to focus instead on what we can learn from the coronavirus in terms of technology and society. In our mini-season, we will cover the use of large-scale public health data, remote education, and the future of work.

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Season One

This is Consequential

Our future isn’t a coin flip. In an age of artificial intelligence and increasing automation, Consequential looks at our digital future and discusses what’s significant, what’s coming and what we can do about it. Over the course of our first season, hosts Lauren Prastien and Eugene Leventhal will unpack the narrative of technological change in conversation with leading technologists, ethicists, economists and everything in between.

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 1: Disruption Disrupted

Are the robots coming for your job? The answer isn’t quite that simple. We look at what’s real and what’s hype in the narrative of industry disruption, how we might be able to better predict future technological change and how artificial intelligence will change our understanding of the nature of intelligence itself.

In this episode:
- Lee Branstetter, Professor Of Economics And Public Policy, Carnegie Mellon University
- Anita Williams Woolley, Associate Professor of Organizational Behavior and Theory, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 2: The Black Box

Inside the black box, important decisions are being made that may affect the kinds of jobs you apply for and are selected for, the candidates you’ll learn about and vote for, or even the course of action your doctor might take in trying to save your life. However, when it comes to figuring out how algorithms make decisions, it’s not just a matter of looking under the hood.

In this episode:
- Kartik Hosanagar, Professor of Operations, Information and Decisions, The Wharton School of the University of Pennsylvania
- Molly Wright Steenson, Senior Associate Dean for Research, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 3: Data Subjects and Manure Entrepreneurs

Every time you order a shirt, swipe on a dating app or even stream this podcast, your data is contributing to the growing digital architecture that powers artificial intelligence. But where does that leave you? In our deep-dive on data subjects, we discuss how to better inform and better protect the people whose data drives some of the most central technologies today.

In this episode:
- Kartik Hosanagar, Professor of Operations, Information and Decisions, The Wharton School of the University of Pennsylvania
- Tae Wan Kim, Associate Professor of Business Ethics, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 4: Fair Enough

Everyone has a different definition of what fairness means - including algorithms. As municipalities begin to rely on algorithmic decision-making, many of the people impacted by these AI systems may not intuitively understand how those algorithms are making certain crucial choices. How can we foster better conversation between policymakers, technologists and the communities their technologies affect?

In this episode:
- Jason Hong, Professor of Human Computer Interaction, Carnegie Mellon University
- Molly Wright Steenson, Senior Associate Dean for Research, Carnegie Mellon University
- David Danks, Professor of Philosophy and Psychology, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 5: Bursting the Education Bubble

Big data disrupted the entertainment industry by changing the ways that people develop, distribute and access content, and it may soon do the same for education. New technologies are changing education, both within and beyond the classroom, as well as opening up more accessible learning opportunities. However, without reform in our infrastructure, this ed-tech might not reach the people who need it the most.

In this episode:
- Michael Smith, Professor Of Information Technology And Marketing, Carnegie Mellon University
- Pedro Ferreira, Associate Professor Of Information Systems, Carnegie Mellon University
- Lauren Herckis, Research Scientist, Carnegie Mellon University
- Lee Branstetter, Professor Of Economics And Public Policy, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 6: Staying Connected

If you think about any piece of pop culture about the future, it takes place in a city. Whether we realize it or not, when we imagine the future, we picture cities, and that idea is all the more problematic when it comes to who benefits from technological change and who does not. This episode will look at how emerging technologies can keep communities connected, rather than widen divides or leave people behind.

In this episode:
- Richard Stafford, Distinguished Service Professor, Carnegie Mellon University
- Karen Lightman, Executive Director - Metro21, Carnegie Mellon University
- Douglas G. Lee, President, Waynesburg University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 7: A Particular Set of Skills

The World Economic Forum has found that while automation could eliminate 75 million jobs by 2022, it could also create 133 million new jobs. In this episode, we will look at how to prepare potentially displaced workers for these new opportunities. We will also discuss the “overqualification trap” and how the Fourth Industrial Revolution is changing hiring and credentialing processes.

In this episode:
- Liz Shuler, Secretary-Treasurer, AFL-CIO
- Craig Becker, General Counsel, AFL-CIO
- Oliver Hahl, Assistant Professor of Organizational Theory and Strategy, Carnegie Mellon University
- Lee Branstetter, Professor Of Economics and Public Policy, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 8: The Future of Work

If artificial intelligence can do certain tasks better than we can, what does that mean for the concept of work as we know it? We will cover human-AI collaboration in the workplace: what it might look like, what it could accomplish and what policy needs to be put in place to protect the interests of workers.

In this episode:
- Parth Vaishnav, Assistant Research Professor of Engineering and Public Policy, Carnegie Mellon University
- Aniruddh Mohan, Graduate Research Assistant, Carnegie Mellon University
- Liz Shuler, Secretary-Treasurer, AFL-CIO
- Craig Becker, General Counsel, AFL-CIO
- Tom Mitchell, University Professor of Computer Science and Machine Learning, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 9: Paging Dr. Robot

Don’t worry, your next doctor probably isn’t going to be a robot. But as healthcare tech finds its way into both the operating room and your living room, we’re going to have to answer the kinds of difficult ethical questions that will also determine how these technologies could be used in other sectors. We will also discuss the importance of more robust data-sharing practices and policies to drive innovation in the healthcare sector.

In this episode:
- David Danks, Professor of Philosophy and Psychology, Carnegie Mellon University
- Zachary Chase Lipton, Assistant Professor of Business Technologies and Machine Learning, Carnegie Mellon University
- Adam Perer, Assistant Research Professor of Human-centered Data Science, Carnegie Mellon University
- Tom Mitchell, University Professor of Computer Science and Machine Learning, Carnegie Mellon University

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

Episode 10: A Policy Roadmap

Over the last 9 episodes, we’ve presented a variety of questions and concerns relating to the impacts of technology, specifically focusing on artificial intelligence. To end Season 1, we want to take a step back and lay out a policy roadmap drawn from the interviews and research we conducted. We will outline more than 20 steps and actions that policymakers can take, ranging from laying the necessary foundations, to applying regulatory frameworks from other industries, to developing novel approaches.

Listen/Subscribe: Apple Podcasts, Google Podcasts, Spotify, Stitcher

Read Transcript.

TRANSCRIPTS

Lauren Prastien: In 2017, a team of researchers found that there is a 50 percent chance that artificial intelligence, or AI, will outperform humans in all tasks, from driving a truck to performing surgery to writing a bestselling novel, within just 45 years. That’s right. 50 percent. The same odds as a coin flip.

But the thing is, this isn’t a matter of chance. We aren’t flipping a coin to decide whether or not the robots are going to take over. And this isn’t an all or nothing gamble.

So who chooses what the future is going to look like? And what actions do we need to take now - as policymakers, as technologists, and as everyday people - to make sure that we build the kind of future that we want to live in?

Hi, I’m Lauren Prastien.

Eugene Leventhal: And I’m Eugene Leventhal. This is Consequential. We’re coming to you from the Block Center for Technology and Society at Carnegie Mellon University to explore how robotics, artificial intelligence and other emerging technologies can transform our future for better or for worse. 

Lauren Prastien: Over the course of this season, we’re going to be looking at hot topics in tech:

Molly Wright Steenson: Well, I think a lot of things with artificial intelligence take place in what gets called the black box.

Lauren Prastien: We’ll speak to leaders in the field right now about the current narrative of technological disruption:

Tom Mitchell: It's not that technology is just rolling over us and we have to figure out how to get out of the way. In fact, policymakers, technologists, all of us can play a role in shaping that future that we're going to be getting. 

Lauren Prastien: And we’ll look at the policy interventions necessary to prepare for an increasingly automated and technologically enhanced workplace:

Anita Williams Woolley: So if we want to prepare our future workforce to be able to complement the rise and the use of technology, it's going to be a workforce that's been well-versed in how to collaborate with a wide variety of people.

Eugene Leventhal: Along the way, we’ll unpack some of the concepts and challenges ahead in order to make sure that we build the kind of future that reduces inequality, improves quality of life and considers humanity. Because we’re not flipping a coin. We’re taking action.

This is Consequential: what’s significant, what’s coming and what we can do about it.

Follow us on Apple Podcasts or wherever you’re listening to this. You can email us directly at consequential@cmu.edu. To learn more about Consequential and the Block Center, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter.

Lauren Prastien: So, maybe you’ve noticed that a lot of things have started to look a little different lately.

By which I mean, the other day, a good friend of mine called me to tell me that a robot had started yelling at her at the grocery store. Apparently, this robot was wandering up and down the aisles of the grocery store and suddenly, it blocks the entire aisle to announce, 

Eugene Leventhal, as robot: “Danger detected ahead.” 

Lauren Prastien: And she proceeds to text me a picture of this, as she put it, “absolute nightmare Roomba” because she wasn’t really sure what to do.

And when I asked her if I could use this story, she proceeded to tell me: “Lauren, I was at the craft store the other day, and as I was leaving, the store spoke to me.” There was this automated voice that said, 

Eugene Leventhal, as robot: “Thank you for shopping in the bead section.” 

Lauren Prastien: As in, as soon as she left the bead section, the store knew and wanted to let her know that they were happy she stopped by. And, by the way, she hated this.

But this isn’t a podcast about how my friend has been hounded by robots for the past few weeks, or even about the idea of a robot takeover. And it’s not only about the people those robots might have replaced, like the grocery store employee who would normally be checking the aisles for spills or the greeter at the door of the craft store. And it’s not a podcast saying that it’s time for us to panic about new technologies or the future, because, by the way, we’ve always freaked out about that. Socrates was afraid that a new technology called writing things down would make everyone forgetful and slow because we wouldn’t memorize things anymore. Ironically, we know this because Plato, Socrates’ student, wrote it down in his dialogue, the Phaedrus.

This podcast is actually about how the world is changing around us and the role that technology, specifically artificial intelligence, or AI, is playing in those changes. It’s about understanding the potential consequences, both good and bad. It’s about how you have played a central role in the development of these technologies and why you deserve a seat at the table when it comes to the role these technologies are playing in our society and in your everyday life.

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and go over what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now, Lauren.

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Over the course of this season, we’re going to talk about how a lot of the institutions and industries that we’ve previously thought unchangeable are changing, how the technologies accompanying and initiating these changes have become increasingly pervasive in our lives, and how both policymakers and individuals can respond to these changes. 

In this first episode, we’ll start our journey by talking about some of the major changes humanity has witnessed in recent generations, about what intelligence means in an age of automation, and why all this AI-enabled technology, like algorithms, self-driving cars, and robots, will require more thoughtful and intentional engagement between individuals as a foundation to deal with these coming changes. 

Our question today is “How can we disrupt the narrative of industry disruption?”

Lee Branstetter: The problem of course, is that we need to design public policies that can help cushion the disruption if it's going to come. But we need to have those policies in place before disruption has become a really major thing. So, you know, how do we get visibility on where this technology is actually being deployed and what its labor market effects are likely to be? Well our idea, and I think it's a pretty clever one, is to actually use AI to study this question.

Lauren Prastien: That and more when we return.

So, one hundred years ago, everyone was getting really worried about this relatively new, popular technology that was going to distract children from their schoolwork, completely ruin people’s social lives and destroy entire industries. 

This was the phonograph. That’s right. People were scared that record players were going to destroy everything.

By the 1910s, the phonograph had given rise to a lot of new trends in music, such as shorter songs, which people thought would make our mental muscles flabby by not being intellectually stimulating enough. Record players also got people into the habit of listening to music alone, which critics worried would make us completely antisocial. One of the most vocal critics of the phonograph was John Philip Sousa, who you probably know for composing all those patriotic marches you hear on the Fourth of July. One hundred years ago, Sousa was worried that the phonograph – or as he called it, the “talking machine” – would disincentivize children from learning music and as a result, we’d have no new musicians, no music teachers and no concert halls. It would be the death of music.

That’s right: A lot of people were genuinely worried that the record player was going to destroy our way of life, put employees out of work, make us all really disconnected from each other and completely eliminate music making as we knew it. Which is really kind of funny when you consider a certain presidential candidate has been talking about how record players could bring us all back together again.

So we’ve always been a little ambivalent about technology and its capacity to radically change our lives for better or for worse. If we look at past interpretations of the future, they’re always a little absurd in hindsight. The campiness of the original Star Trek, the claustrophobia and anxiety of Blade Runner. Sometimes, they’re profoundly alarmist. And sometimes, they’re ridiculously utopian. Often, they’re predicated on this idea that some form of technology or some greater technological trend is either going to save us or it’s going to completely destroy us. 

One of the most dramatic examples of us feeling this way was Y2K. 

Twenty years ago, it was 1999, and we were preparing for a major technological fallout. This was when Netflix was only two years old, and it was something you got in the mail. People were terrified that computers wouldn’t be able to comprehend the concept of a new millennium. As in, they wouldn’t be smart enough to know that we weren’t just starting the 1900s over, and as a result, interest rates would be completely messed up, all the power plants would implode and planes would fall out of the sky. Which, you know, none of that really happened. In part because a lot of people worked really hard to make sure that the computers did what they were supposed to do.

But while 1999 didn’t deliver on Y2K, it was the year that Napster hit the web. So, while the world may not have ended for you and me, it did end for the compact disc.

The new millennium saw us worrying once more about the death of the music industry. But we’re at a point now where we can see that this industry didn’t die. It changed. The phonograph didn’t kill music, and neither did Napster. In 2017, U.S. music sales hit their highest revenue in a decade. While not a full recovery from its pre-Napster glory days, paid subscription services like Spotify and Apple Music have responded to the changing nature of media consumption in such a way that has steered a lot of consumers away from piracy and kept the music industry alive.

This isn’t to say that the changing nature of the music industry didn’t put people out of work and companies out of business – it absolutely did. We watched a seemingly unsinkable industry have to weather a really difficult storm. And that storm changed it irrevocably.

The thing is, this is what technology has always done. In a recent report, the AFL-CIO’s Commission on the Future of Work and Unions noted:

Eugene Leventhal: “Technology has always involved changing the way we work, and it has always meant eliminating some jobs, even as new ones are created.” 

Lauren Prastien: But in this report, the AFL-CIO emphasizes that this shouldn’t solely be viewed as an organic process. It’s something that needs to be approached with intentionality.

Today, we’re seeing those kinds of transformations happen much more swiftly and at a much higher frequency. But how do we know where those transformations are going to take place or what those transformations are going to look like?

Heads up – we’re not great at this. 

Fifty years ago, it was 1969. The year of Woodstock, the Moon landing, Nuclear Nonproliferation and the Stonewall Riots. A US stamp cost just 6 cents.

This was the year a certain Meredith W. Thring, a professor of mechanical engineering at Queen Mary College, testified before the International Congress of Industrial Design in London. He was there to talk about the future, and Eugene’s going to tell us what he had to say about it:

Eugene Leventhal: “I do not believe that any computer or robot can ever be built which has emotions in it and therefore, which can do anything original or anything which is more sophisticated than it has been programmed to do by a human being. I do not believe it will ever be able to do creative work.” 

Lauren Prastien: By creative work, Professor Thring meant cooking.

He believed that no robot would look like a person, which would make it easier for us to dehumanize them and, in his words, enslave them, and that their designs would be purely functional. Thring imagined robots would have eyes in the palms of their hands and brains between their toes. Or, in the case of an agricultural robot, a large, roving eye at the front of the tractor, angled down toward the ground. A quick Google of the term “automated cooking” will show you just how wrong our friend Meredith W. Thring was when it came to robots capable of preparing meals.

So if our own imaginations aren’t sufficient to understand where disruption is going to occur, what could be? There’s a group of researchers here at the Block Center who came up with an interesting way to measure just how much AI disruption might be coming – patents.

Lee Branstetter: Now, of course, not all AI inventions are going to be patented, but if you've got something fundamental that you think is going to make you billions of dollars and you don't patent at least part of it, you're leaving yourself open to the possibility that somebody else is going to patent that thing and get the billion dollars instead of you.

Lauren Prastien: That was Lee Branstetter, a Professor of Economics and Public Policy at Carnegie Mellon. He also leads the Future of Work Initiative here at the Block Center, where his work focuses on the economic forces that shape how new technology is created, as well as the economic and social impacts of those new technologies. 

Lee Branstetter: Once we can identify these AI patents, we know the companies that own them. We often know something about the industry in which they're being deployed. We know when the invention was created, even who the inventors are and when we know who the inventing firms are, we can link the patent data to data maintained by other sources, like the US Census Bureau.

Lauren Prastien: Combined with employment data, this patent data offers a really useful window into how this technology is being developed and deployed.

Lee Branstetter: And one of the most interesting pieces of data is the so-called LEHD dataset, the Longitudinal Employer-Household Dynamics dataset. This is essentially a matched employer-employee dataset. We can observe the entire wage distribution of firms and how they're changing as AI technology is developed and deployed within the firm.

Lauren Prastien: When Eugene and I spoke to Professor Branstetter, we wanted to get a better idea of what industry disruption might actually look like and exactly who it was going to impact. Because right now, there are a lot of conflicting opinions out there about what exactly is going to happen to the concept of work as we know it. 

Lee Branstetter: Everybody's already heard the sort of extreme positions that are being propagated in the media and on social media, right? On the one hand, there are the techno-utopians who tell us that a life of endless leisure and infinite wealth, uh, is almost within our grasp. And then there are the techno-dystopians, right, who will tell us that the machines are going to take all of our jobs.

So one of my concerns is that AI is not going to render human labor obsolete, but it's going to exacerbate the trends that we've been seeing for decades, right? It's going to amplify demand for the highest skilled workers and it's going to weaken demand for the lower skilled workers. Well, with our data, we could actually match AI patent data and other data to data on the entire wage distribution of firms and see how it evolves, and see where and when and in what kind of industry these effects are starting to emerge, and that can help inform public policy. All right? We can kind of see the leading edge of this disruption just as it's starting to happen. And we can react as policymakers.

Lauren Prastien: Professor Branstetter believes that being able to react now and take certain preemptive measures is going to be a critical part of being able to shape the narrative of disruption in this new age of artificial intelligence. Because even if it seems that everything has suddenly come together overnight: a robot cleaning the aisle in a grocery store, a robot thanking you for shopping in the bead section - this isn’t some kind of hostile robot takeover or sudden, unstoppable tide of change that we’re helpless to let wash over us. The fact is that this is all still relatively new.

Lee Branstetter: All of the debate, uh, is basically taking place in a virtual absence of real data. I mean, these technologies are still in their very early stages. You know, we're just starting along a pathway that is likely to take decades, over which these technologies probably are going to be deployed in just about every sector of the economy. But we really don't know yet what the effect is.

Lauren Prastien: While the economic realities don’t point to a massive change just yet, there are plenty of reasons to believe that more change is coming. Though only time will tell the exact extent and who will be impacted the most, the fact is that the increasing pace of technological change is very likely to lead to some large-scale changes in society. Our job will be to dig into what is real and what is hype, and what needs to be done so that we’re prepared for the negative outcomes.

Lee Branstetter: The problem of course, is that we need to design public policies that can help cushion the disruption if it's going to come. But we need to have those policies in place before disruption has become a really major thing. So, you know, how do we get visibility on where this technology is actually being deployed and what its labor market effects are likely to be?

Well our idea, and I think it's a pretty clever one, is to actually use AI to study this question. So I've been working with Ed Hovy, who is a major scholar in the Language Technologies Institute of the School of Computer Science. Um, he's an expert in using machine learning algorithms to parse text. And so together with one of his graduate students and a former patent attorney who is now getting two PhDs at Carnegie Mellon, uh, we're actually teaching an ensemble of machine learning algorithms to parse patent text and figure out on the basis of the language and the text whether this invention is AI-related or not.

Lauren Prastien: That’s right. Professor Branstetter is using robots to fight the robots, in a manner of speaking. 
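To make that a bit more concrete, here is a minimal, purely illustrative sketch of the kind of text classification Professor Branstetter is describing: patent text goes in, an "AI-related or not" judgment comes out. The tiny made-up abstracts, the labels and the scikit-learn pipeline below are our own assumptions for illustration; they are not the team's actual ensemble, features or patent data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: a handful of invented patent abstracts,
# labeled 1 (AI-related) or 0 (not AI-related).
abstracts = [
    "A neural network model for classifying images of machine parts.",
    "A reinforcement learning method for routing delivery vehicles.",
    "A hinge assembly for attaching a door panel to a frame.",
    "A chemical coating that reduces corrosion on steel pipes.",
]
labels = [1, 1, 0, 0]

# Bag-of-words (TF-IDF) features feeding a simple linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(abstracts, labels)

# Score a new, unseen abstract: the probability the model assigns to "AI-related".
new_abstract = ["A deep learning system for detecting defects on an assembly line."]
print(classifier.predict_proba(new_abstract)[0][1])

The real project works at a very different scale and combines multiple models into an ensemble, but the shape of the task is the same: patent language in, a judgment about AI-relatedness out.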

But if we take a step back, there are some signs that certain industries are already being completely restructured or threatened. As an example: ride-hailing apps, like Uber and Lyft, have disrupted an industry previously considered to be un-disruptable: taxi services. So even as we use technologies like Professor Branstetter’s patent analysis to cushion the blow of technological change, we’re still going to see industries that are impacted, and as Professor Branstetter warned us, this could really exacerbate existing inequalities. 

There’s another, more promising idea that these technologies could really help promote shared prosperity by breaking down the barriers to economic success. But for every company that implements a robot to do a task like, say, clean an aisle, so that that employee can do more human-facing, less-routinized work, there’s going to be a company that just implements a robot without finding new work for an existing employee. And so being able to shape how these technologies impact our way of life is going to take some real work. Real work that starts at the personal level, with the simple act of caring more about this in the first place, and extends to working with governments, universities, and corporations to make the digitally-enabled future one that’s better for everyone.

Because just anticipating this disruption is only half the battle. Later on this season, we’re going to get into some of the specific policy interventions that could protect individuals working in disrupted industries and help them transition to new careers, like wage insurance and reskilling initiatives.

As we prepared the interviews that went into this season, we realized that the topic of tech as a tool kept coming up, and reasonably so. The idea of using AI or robots to enhance our human abilities sounds like something out of a sci-fi movie, though I’m sure that’s not the only reason researchers look into it. But these tools aren’t infallible: they’re going to see the world with the same biases and limitations as their creators. So thinking technology can somehow make the underlying problems that people are concerned with go away is kind of unrealistic.

As technologies continue to evolve at ever faster rates, one of the things you’ll hear mentioned throughout the season are the human challenges. It’s important to consider that technology in and of itself is not useful – it is only helpful when it actually solves problems that we humans have. And these technologies have the potential to do a lot of good, from helping to save lives by improving diagnosis to making the workplace safer by aiding in the performance of difficult physical tasks to opening up new opportunities through remote work, online learning and open-source collaboration. Sometimes, disruption is a good thing. But we can’t lose the human factor or simply allow these technologies to bulldoze right through us. 

If anything, as these technologies become more complex, that means that we get to delve into increasingly more complex topics related to being human. You may have heard this new little catchphrase that EQ, or emotional intelligence, is the new IQ, or how robots are only going to make the things that make us human all the more significant. 

Anita Williams Woolley: And so this really suggests that school funding models that take resources away from the activities that foster teamwork and foster social interaction in favor of you know, more mathematics for example, will really be shortchanging our children and really our economy. 

Lauren Prastien: We’re going to talk a little more about that in just a moment, so stay tuned.

In his book AI Superpowers: China, Silicon Valley and the New World Order, computer scientist and businessman Kai-Fu Lee looks at the story of AlphaGo versus Ke Jie. In 2017, Ke Jie was the reigning Go champion. Go is a strategy board game where two players try to gain control of a board by surrounding the most territory with their game pieces, or stones. It is considered to be one of the oldest board games in human existence, invented in China during the Zhou dynasty and still played today. Back during antiquity, being able to competently play Go was considered one of the four essential arts of a Chinese Scholar, along with playing a stringed instrument, calligraphy, and painting.

So, in May of 2017, Ke Jie, the worldwide Go champion, arranged to play against a computer program called AlphaGo. They played for three rounds, and AlphaGo won all of them. Which, in the battle for human versus robot, might seem really discouraging.

But of this defeat, Kai-Fu Lee, as read by Eugene, wrote: 

Eugene Leventhal: “In that same match, I also saw a reason for hope. Two hours and fifty-one minutes into the match, Ke Jie had hit a wall. He’d given all that he could to this game, but he knew it wasn’t going to be enough. Hunched low over the board, he pursed his lips and his eyebrow began to twitch. Realizing he couldn’t hold his emotions in any longer, he removed his glasses and used the back of his hand to wipe tears from both of his eyes. It happened in a flash, but the emotion behind it was visible for all to see. Those tears triggered an outpouring of sympathy and support for Ke. Over the course of these three matches, Ke had gone on a roller-coaster of human emotion: confidence, anxiety, fear, hope, and heartbreak. It had showcased his competitive spirit, but I saw in those games an act of genuine love: a willingness to tangle with an unbeatable opponent out of pure love for the game, its history, and the people who play it. Those people who watched Ke’s frustration responded in kind. AlphaGo may have been the winner, but Ke became the people’s champion. In that connection – human beings giving and receiving love – I caught a glimpse of how humans will find work and meaning in the age of artificial intelligence.”

Lauren Prastien: Like Kai-Fu Lee, I don’t want to believe that this is a matter of us versus them. I also believe in that glimpse that he describes, and I think that glimpse is something we call emotional intelligence.

But to really understand how emotional intelligence and other forms of human intelligence are going to keep us from being automated out of existence, we’re going to have to understand what we mean by intelligence. Breaking down the idea of human intelligence is another subject for a different podcast from someone far better-equipped to handle this stuff. But let’s use a really basic working definition that intelligence is the ability to acquire and apply knowledge or skills.

A lot of the time when we talk about intelligence, we think about this as the individual pursuit of knowledge. But as the nature of our workplace changes with the influx of these new technologies, we’re going to see an emphasis on new kinds of intelligence that can compete with or even complement artificial intelligence. And one of these is collective intelligence.

Anita Williams Woolley: Collective intelligence is the ability of a group to work together over a series of problems. We really developed it to complement the idea of individual intelligence, which has historically been measured as the ability of an individual to solve a wide range of problems.

Lauren Prastien: That’s Anita Williams Woolley. She is a Professor of Organizational Behavior and Theory at Carnegie Mellon University. She’s used collective intelligence to look at everything from how to motivate people to participate in massive open-source collaborations like Wikipedia to explaining how the September 11th attacks could have been prevented with better communication and collaboration.

Anita Williams Woolley: In order for a group to be able to work together effectively over a range of different kinds of problems, they really need different perspectives, different information, different skills. And you can't get that if everybody is the same. And so it’s not the case that a high level of diversity automatically leads to collective intelligence. There needs to be some other behaviors, some other communication behaviors and collaboration behaviors that you need to see as well.

It's not necessarily how individually intelligent people are, but the skills that they bring that foster collaboration, as well as, again, the diversity of different skills. So in terms of collaboration skills, initially what we observed was that having more women in the team led to higher collective intelligence; over time we found more of a curvilinear effect.

Lauren Prastien: Real quick, curvilinear means that if there are two variables, they’re going to both increase together for a little while, but then at a certain point, while one variable keeps increasing, the other starts decreasing. Think of it as the “too much of a good thing” relationship. So, in the case of having women in a group, the curvilinear effect looked something like this. If a group had no women, there wasn’t very high collective intelligence. Sorry, guys. And as more and more women are added to a group, the collective intelligence of that group increases. But to a point. A group with majority women participants is going to have really high collective intelligence, but if a group is entirely women, collective intelligence is actually going to be a little lower than it would be if there were also some men in the group. It’s also really important to quickly clarify why this is. It’s not that women are magic. I mean, we are. But Professor Woolley has a more sociological explanation for why women participants boost a group’s collective intelligence.

Anita Williams Woolley: So one of the reasons why having more women helps teams is because women on average tend to have higher social perceptiveness than men. However, that said, if an organization is really doing a lot of collaboratively intensive work, if they focus on hiring people who have higher levels of social skills, whether they're male or female, it should enhance the ability of their teams to be more collectively intelligent. 

Lauren Prastien: But creating a strong collectively intelligent group isn’t just a matter of gender. Professor Woolley has found that this trend extends to other forms of diversity as well. 

Anita Williams Woolley: So we've looked at gender diversity, we've looked at some ethnic diversity. In both cases we find that, you know, there is a benefit to both sorts of diversity for collective intelligence, but specifically we also find a benefit for cognitive diversity. And the cognitive styles that we look at are styles that tend to differentiate people who go into different academic fields. And so there's a cognitive style that's predominant in the humanities, one that's predominant in engineering and the sciences, one that's predominant in the visual arts. And we find that at least a moderate amount of cognitive diversity along these cognitive styles is best for collective intelligence. So trying to create organizations, create teams that are diverse in these ways is going to lead to higher collective intelligence.

Lauren Prastien: So what does this have to do with contending with artificial intelligence and automation? Partially, it’s to say that we’re not going to succeed in managing these technologies if we keep trying to prop up exemplary individuals to compete with them. One of Professor Woolley’s studies showed that a team of regular people with strong communication skills handled a simulated terrorist attack better than actual counterterrorism experts. That is, until those experts participated in a communication seminar.

But the more important point here is that one of the best ways to leverage these new technologies is not to look at how they can replace us, but to understand how they can complement the things we’re already great at.

Anita Williams Woolley: I think it's important to keep in mind the distinction between production technologies and collaboration technologies. So when you think about a robot who's just going to do your job for you, that would be an example of a production technology where they're actually doing the task. And that's usually what people call to mind if they think about AI coming to take their job. However, the bigger possibility and actually the one that is potentially very positive for many of us is a coordination technology, which is where the robots come and they help us coordinate our input so that they get combined more effectively. So that we don't have you know, gaps or people doing, you know, the same work or you know, other coordination losses that you often see in organizations.

Lauren Prastien: Professor Woolley’s research has shown that sometimes, people can really struggle when it comes to articulating what they’re good at or when they have to allocate tasks among a team. But that doesn’t mean that our future managers and mentors are going to be robots.

Anita Williams Woolley: You'd be willing to have a machine tell you, oh, the best time for you to have this meeting is at this time, because that's when everybody is available. Okay, fine, I'll do that. But am I going to take career advice or life advice, you know, from this robot?

So we have done some studies. We're starting to work now on a new program looking at AI-based coaches for task performance. And so in some of the pilot studies we were interested in how do humans perceive these coaches, and do they find them as competent, as warm, you know, do they want to work with them? And the answer is no. So if a performer was getting the same advice but thought it was from a human, they thought it was much more competent and credible than if they thought it was from a bot.

Lauren Prastien: Professor Woolley proposes that artificial intelligence could help coordinate people to more effectively tackle challenges and derive more value from the work they do. Because ultimately, while there’s work that technology may be able to do slightly better than we do – there’s a lot of stuff that technology simply cannot match us in. It’s the stuff that made us root for Ke Jie, even when he was losing to AlphaGo. Especially when he was losing to AlphaGo.

And it’s the kind of stuff that makes us feel kind of nice when a human thanks us for shopping in the bead section and feel really unnerved when a robot does it. There are going to be the machines that beat us at games, the machines that lay bricks more efficiently than we do and the machines that write up contracts faster than we can. But what both Kai-Fu Lee and Professor Woolley are arguing is that machines cannot take away the things that make us innately human. If anything, they can help enhance them.

But it’s not going to happen organically. According to Professor Woolley, it’s going to take some interventions in policy and education.

Anita Williams Woolley: I think focusing on education policy is a big piece of this. Traditionally in the last few decades in the United States, we focused a lot on STEM education and mathematics and related fields. And those are important. But what we see as we look at the economy, and also look at, you know, where wages are rising, it's in those occupations and in fields where you both need technical skill but also social skill. And so this really suggests that school funding models that take resources away from the activities that foster teamwork and foster social interaction in favor of, you know, more mathematics, for example, will really be shortchanging our children and really our economy.

Lauren Prastien: It’s really important to stress this shifting nature of intelligence, and the fact that this isn’t the first time we’ve seen this. Since the Industrial Revolution, the proliferation of new technologies has continuously emphasized the value of science, math, and engineering education, often to the detriment of the arts and the humanities. Now, we are seeing many issues related to technology that center around a lack of social education. As tech increases our ability to communicate freely and more tasks become automated, we have to start placing an emphasis on skills that have been relatively undervalued as of late. 

Anita Williams Woolley: Especially as we get more and more technologies online that can take over some of the jobs that require mathematical skill, that's going to increase the value of these social skills even more. So if we want to prepare our future workforce to be able to complement the rise and the use of technology, it's gonna be a workforce that's been well-versed in how to collaborate with a wide variety of people, and that's best accomplished in a school setting.

Lauren Prastien: If used correctly, technology can help us achieve more than we may be able to without it. But can we disrupt disruption? So Eugene, we talked to some experts this week. What do you think?

Eugene Leventhal: Well, Lauren, the fact is that technology isn’t some unstoppable force to which we’re destined to lose our jobs and sense of worth. But ensuring that disruption doesn’t exacerbate existing inequalities means taking steps to anticipate where this disruption may occur and determining how to best deploy these technologies to enhance human work, rather than to replace it. It also means providing adequate support through education and other avenues to strengthen and reinforce the skills that make us innately human. And so where does our journey take us from here?

Lauren Prastien: In the coming episodes, we will discuss the increasing influence of emerging technologies, concerns of algorithmic bias, potential impacts on social and economic inequality, and what role technologists, policymakers and their constituents can play in determining how these new technologies are implemented, evaluated and regulated.

In the next episode of Consequential, we’ll talk about the AI black box: what is it, why is it important, and is it possible to unpack it? Here’s a snippet from Molly Wright Steenson, a Professor of Ethics & Computational Technologies here at CMU, who’s going to join us next week: 

Molly Wright Steenson: Some people say that an AI or a robot should be able to say what it's doing at any moment. It should be able to stop and explain what it's done and what its decision is. And I don't think that's realistic.

Lauren Prastien: I’m Lauren Prastien.

Eugene Leventhal: And I’m Eugene Leventhal.

Lauren Prastien: And this was Consequential. We’ll see you next week.

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University, which was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen. 

This episode uses a clip of John Philip Sousa’s High School Cadets march, a portion of the AFL-CIO Commission on the Future of Work and Unions’ report to the AFL-CIO General Board and an excerpt of Kai-Fu Lee’s AI Superpowers: China, Silicon Valley and the New World Order.

Lauren Prastien: So, you might know this story already. But bear with me here. 

In 2012, a teenage girl in Minneapolis went to Target to buy some unscented lotion and a bag of cotton balls. Which, okay. Nothing unusual there. She was also stocking up on magnesium, calcium and zinc mineral supplements. Sure, fine – teenagers are growing, those supplements are good for bones and maintaining a healthy sleep schedule. But here’s where things get strange – one day, Target sent her a mailer full of coupons, which prominently featured products like baby clothes, formula, cribs. You know, things you might buy if you’re pregnant. Yeah, when I got to this point in the story the first time I heard it, I was cringing, too. 

Naturally, an awkward conversation ensued because, you guessed it, Target had figured out that this teenage girl was pregnant before her own parents did.

Or, I should say, an algorithm figured it out. It was developed by the statistician Andrew Pole. In a partnership with Target, Pole pinpointed twenty-five products that, when purchased together, might indicate that a consumer is pregnant. So, unscented lotion – that’s fine on its own. But unscented lotion and mineral supplements? Maybe that shopper’s getting ready to buy a crib.

It might seem unsettling but consider: we know what that algorithm was taking into account to jump to that conclusion. But what happens when we don’t? And what happens when an algorithm like that has a false positive? Or maybe even worse, what happens when we find out that there’s an algorithm making a bigger decision than whether or not you get coupons for baby products - like, say, whether or not you’re getting hired for a job - and that algorithm is using really messed up criteria to do that?

So, full disclosure: that happened. In 2018, the journalist Jeffrey Dastin broke a story on Reuters that Amazon had been using a secret AI recruiting tool that turned out to be biased against job candidates who were women. Essentially, their recruiting algorithm decided that male candidates were preferable for the positions listed, and downgraded resumes from otherwise strong candidates just because they were women. Fortunately, a spokesperson for Amazon claims that they have never used this algorithm as the sole determinant for a hiring decision.

So far, this has been the only high-profile example of something like this happening, but it might not be the last. According to a 2017 study conducted by PwC, about 40% of the HR functions of international companies are already using AI, and 50% of companies worldwide use data analytics to find and develop talent. So these hiring algorithms are probably going to become more common, and we could see another scandal like Amazon’s.

We don’t always know how artificial intelligence makes decisions. But if we want to, we’re going to have to unpack the black box.

When I say the words “black box,” you probably think of airplanes. A crash. The aftermath. An account of the things that went wrong.                                                              

But this is a different kind of black box. It’s determining whether or not you’re getting approved for a loan. It’s picking which advertisements are getting pushed to your social media timelines. And it’s making important decisions that could affect the kinds of jobs you apply for and are selected for, the candidates you’ll learn about and vote for, or even the course of action your doctor might take in trying to save your life.

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and go over what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now, Lauren.

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelly, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Today, we’re going to talk about algorithmic transparency and the black box. And we’ll try to answer the question: can we - and should we - unpack the black box? But before that, we’ll need to establish what these algorithms are and why they’re so important.

Kartik Hosanagar: Really, they’re all around us, whether it’s decisions we make or others make for us or about us. They’re quite pervasive, and they’ll become even more central to decisions we’ll make going forward.

Lauren Prastien: That and more soon. Stay with us.

Kartik Hosanagar: Algorithms are all around us. When you go to an ecommerce website like Amazon, you might see recommendations... That’s an algorithm that’s convincing you to buy certain products. Some studies show that over a third of the choices we make on Amazon are driven by algorithmic decisions.

Lauren Prastien: That’s Kartik Hosanagar. He’s a Professor of Technology and Digital Business at the University of Pennsylvania. He’s also the author of A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control.

Kartik Hosanagar: On Netflix, an algorithm is recommending media for us to see. About 80% of the hours you spend on Netflix are attributed to algorithmic recommendations. And of course, these systems are making decisions beyond just products we buy and media we consume. If you use a dating app like Match.com or Tinder, algorithms are matching people and so they’re influencing who we date and marry. 

Lauren Prastien: But algorithms aren’t just responsible for individual decision-making. In addition to making decisions for us, they’re also making decisions about us.

Kartik Hosanagar: They’re in the workplace. If you look at recruiting, algorithms are helping recruiters figure out who to invite for job interviews. They’re also making life and death decisions for us. So, for example, algorithms are used in courtrooms in the US to guide judges in sentencing and bail and parole decisions. Algorithms are entering hospitals to guide doctors in making treatment decisions and in diagnosis as well. So really, they’re all around us, whether it’s decisions we make or others make for us or about us. They’re quite pervasive, and they’ll become even more central to decisions we make going forward.

Lauren Prastien: We’re going to get into the implications of some of these more specific examples throughout the season, but right now, I want to focus on why it’s important that these algorithms exist in the first place, how they can actually be useful to us, and what happens when they don’t do what they’re supposed to do. 

To make sure we’re all on the same page, an algorithm is a set of instructions to be followed in a specific order to achieve specific results. So technically, making a peanut butter and jelly sandwich is an algorithm. You take out your ingredients. You remove two slices of bread from the bag. You toast the bread. You open the jar of peanut butter and use the knife to apply a layer of peanut butter to the open face of one of the slices. You then open the jar of jelly and use your knife to apply a layer of jelly to the open face of the other slice. You press the peanut butter-covered side of the first slice onto the jelly-covered side of the second slice. Voila - peanut butter and jelly sandwich. A set of instructions, a specific order, specific results. 

Have you ever had to do that team-building exercise where you make peanut butter and jelly sandwiches? One person gives directions, and one person has to follow the directions literally? So, if the person giving the directions forgets to say to take the bread out of the bag, the person making the sandwich has to just spread peanut butter and jelly all over a plastic bag full of bread. If you’ve ever had to write some code, only to realize you skipped a line or weren’t specific enough, you know this kind of frustration.
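If it helps to see that "set of instructions, specific order, specific results" idea written out the way a programmer would, here is a toy sketch of our own, purely for illustration and not anything from the episode; it is just the sandwich recipe above spelled out in Python.

def make_pb_and_j():
    # A set of instructions, followed in a specific order, for a specific result.
    steps = [
        "take out the bread, peanut butter, jelly and a knife",
        "remove two slices of bread from the bag",  # skip this and you're spreading peanut butter on the bag
        "toast the bread",
        "spread peanut butter on the open face of one slice",
        "spread jelly on the open face of the other slice",
        "press the two slices together, spreads facing in",
    ]
    for step in steps:
        print(step)
    return "peanut butter and jelly sandwich"

make_pb_and_j()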

So, in that way - the act of getting dressed is an algorithm: you can’t put on your shoes before you put on your socks. And driving involves a pretty complicated algorithm, which we’ll talk about when we talk about autonomous vehicles in another episode. 

Algorithms actually originated in mathematics - they’re how we do things like find prime numbers. The word algorithm comes from Algorismus, the Latinized name of al-Khwarizmi, a 9th century mathematician whose writings helped bring algebra and the Arabic numerals - aka the numbers we use every day - to Europe. But the algorithms that we’re talking about this season are the ones that turn up in computer science. Essentially, they’re programs set up to solve a problem by using a specific input to find a specific output. If we take a step back in history, this was more or less how computing started - we made machines that were capable of receiving data and then processing that data into something we could understand.
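Since we just mentioned prime numbers, here’s one of the classic examples, the Sieve of Eratosthenes, as a short Python sketch. It isn’t anything from our guests - it’s just a standard illustration of a specific input (a number n) producing a specific output (every prime up to n).

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: cross out the multiples of each prime;
    whatever survives is prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False  # cross out multiples of p
    return [i for i, prime in enumerate(is_prime) if prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```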

And when it comes to AI, this still holds mostly true. Algorithms use models of how to process data in order to make predictions about a given outcome. And sometimes, how those algorithms are using the data to make certain predictions is really difficult to explain. 

So, the Amazon hiring program was probably using a set of sourcing, filtering and matching algorithms that looked through a set of resumes, found resumes that exhibit certain characteristics, and selected those candidates that best matched their hiring criteria for HR to then review. It did this through a process known as machine learning, which we’ll talk about a lot this season. Essentially, machine learning is a form of artificial intelligence that uses large quantities of data to be able to make inferences about patterns in that data with relatively little human interference. 

So, Amazon had about ten years of applicant resumes to work off of, and that’s what they fed to their machine learning algorithm. So the algorithm saw these were the successful resumes, these people got jobs. So, the instructions were: find resumes that look like those resumes, based on some emergent patterns in the successful resumes. And this is what machine learning algorithms are great at: detecting patterns that we miss or aren’t able to see. So, a successful hiring algorithm might be able to identify that certain je ne sais quoi that equates to a good fit with a certain job position. 

In addition to finding that certain special characteristic, or, ideally, objectively hiring someone based on their experience, rather than based on biases that a human tasked with hiring might have, a hiring algorithm like Amazon’s is also useful from a pure volume perspective. As a hiring manager, you’re dealing with thousands of applicants for just a handful of spots. When it comes to the most efficient way of narrowing down the most promising applicants for that position, an algorithm can be really useful. When it’s working well, an algorithm like Amazon’s hiring algorithm would toss out someone with, say, no experience in coding software for a senior level software engineer position, and select a candidate with over a decade of experience doing relevant work.
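We don’t know what Amazon’s actual system looked like, but the general recipe described above - show a model which past resumes led to hires and let it infer which features predict a good fit - can be sketched with a toy example. Everything here (the library choice, the made-up resumes, the labels) is hypothetical and vastly simplified; it’s the generic supervised-learning pattern, not Amazon’s algorithm.

```python
# A toy, hypothetical sketch of "learn who got hired from past resumes."
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

past_resumes = [
    "python machine learning 10 years software engineer",
    "java distributed systems 8 years software engineer",
    "retail cashier 2 years customer service",
    "barista 1 year customer service",
]
was_hired = [1, 1, 0, 0]  # the historical outcomes the model learns from

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(past_resumes)        # resumes -> word-count features
model = LogisticRegression().fit(X, was_hired)    # infer patterns in that data

new_resume = ["c++ embedded systems 12 years software engineer"]
print(model.predict(vectorizer.transform(new_resume)))  # likely [1]
# Note: if the historical labels are skewed, the learned pattern inherits
# that skew - which is exactly the problem discussed next.
```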

But as you’ve seen, that can sometimes go really wrong. And not just with Amazon.

Kartik Hosanagar: And here's a technology that was tested quite extensively, uh, in lab settings and launched. And it didn't really take long for it to just go completely awry. And it had to be shut down within 24 hours.

Lauren Prastien: That and more when we come back.

As a good friend of mine put it: the wonderful thing about artificial intelligence is that you give your model a whole bunch of latitude in what it can do. And the terrible part is that you give your model a whole bunch of latitude in what it can do.

While algorithms can pick up on patterns so subtle that we as humans miss them, sometimes algorithms pick up on patterns that don’t actually exist. The problem with the Amazon hiring algorithm was that most of the resumes that the machine learning algorithm had to learn from were from men. So, the algorithm jumped to the conclusion that male candidates were preferable to female candidates. In the peanut butter and jelly sandwich example I gave you earlier, this is the equivalent of someone spreading peanut butter on a plastic bag full of bread. From a purely technical perspective, that algorithm was following directions and doing its job correctly. It noticed that the successful candidates were mostly male, and so it assumed that it should be looking for more men, because for some reason, male meant good fit.

But we know that that’s not how it works. You don’t also eat the bag when you eat a peanut butter and jelly sandwich. And it’s not that men were naturally better at the jobs Amazon was advertising for, it’s that tech has a huge gender problem. But an algorithm isn’t going to know that. Because algorithms just follow directions - they don’t know context.

The fact is that algorithms are, well, just following orders. And so when you put problematic data or problematic directions into an algorithm, it’s going to follow those directions correctly - for better or for worse. And we’ve seen first-hand how bad data can make these programs absolutely disastrous. Right, Professor Hosanagar? 

Kartik Hosanagar: I think it was 2016, this was a chatbot called Microsoft Tay. It was launched on Twitter and the chatbot turned abusive in a matter of minutes.

Lauren Prastien: So maybe you’ve heard the story of Microsoft Tay. Or you were on Twitter when it all went down. But basically, Tay - an acronym of Thinking About You - was a chatbot designed to talk as though it were a 19-year-old American girl. It was branded by Microsoft as “the AI with zero chill.” Having once been a 19-year-old American girl with absolutely no chill, I can confirm that Tay was pretty convincing at first. In one of her earliest missives, she declared to the Internet: “i love me i love me i love me i love everyone.”

In early 2016, Microsoft set up the handle @TayandYou for Tay to interact with and learn from the denizens of Twitter. If you’ve spent more than 5 minutes on Twitter, you understand why this basic premise is pretty risky. At 8:14 AM on March 23, 2016, Tay began her brief life on Twitter by exclaiming “hellooooo world!” By that afternoon, Tay was saying stuff that I am not comfortable repeating on this podcast. 

While Tay’s algorithm had been trained to generate safe, pre-written answers to certain controversial topics, like the death of Eric Garner, it wasn’t perfect. And as a lot of poisonous data from Twitter started trickling into that algorithm, Tay got corrupted. To the point that within 16 hours of joining Twitter, Tay had to be shut down. 

Kartik Hosanagar: And here's a technology that was tested quite extensively in lab settings and launched. And it didn't really take long for it to just go completely awry. And it had to be shut down within 24 hours.

Lauren Prastien: At the end of the day, while Microsoft Tay was a really disturbing mirror that Twitter had to look into, there weren’t a ton of consequences. But as we learned with the Amazon hiring algorithm, there are real issues that come into play when we decide to use those algorithms for more consequential decisions, like picking a certain job candidate, deciding on a course of cancer treatment or evaluating a convicted individual’s likelihood of recidivism, or breaking the law again.

Kartik Hosanagar: And so I think it speaks to how we need to, when we're talking about AI and really using these algorithms to make consequential decisions, be cautious in terms of how we understand the algorithms, their limitations, how we use them and what kinds of safeguards we have in place.

Lauren Prastien: But Professor Hosanagar and I both believe that this isn’t a matter of just never using algorithms again and relying solely on human judgment. Because human judgment isn’t all that infallible, either. Remember - those problematic datasets, like the male resumes that Amazon’s hiring algorithm used to determine that the ideal candidates were male, were made as a result of biased human decision-making. 

As it stands, human decision-making is affected by pretty significant prejudices, and that can lead to serious negative outcomes in the areas of hiring, healthcare and criminal justice. More than that, it’s subject to the kinds of whims that an algorithm isn’t necessarily susceptible to. You’ve probably heard that statistic that judges give out harsher sentences before lunch. Though I should say that the jury’s out - pun intended - on the whole correlation/causation of that.

When these algorithms work well, they can offset or even help to overcome the kinds of human biases that pervade these sensitive areas of decision-making. 

This is all to say that when algorithms are doing what they are supposed to do, they could actually promote greater equality in these often-subjective decisions. But that requires understanding how they’re making those decisions in the first place.

Kartik Hosanagar: Look, I don't think we should become overly skeptical of algorithms and become Luddites and run away from it because they are also part of the progress that's being created using technology. But at the same time, when we give that much decision-making power and information to algorithms, we need some checks and balances in place.

Lauren Prastien: But the issue comes down to exactly how we enforce those checks and balances. In the next episode of this podcast, we’re going to get into what those checks and balances mean on the information side. But for the rest of this episode, we’re going to focus on the checks and balances necessary for decision-making. Especially when sometimes, we don’t know exactly how algorithms are making decisions.

Molly Wright Steenson: Some people say that an AI or a robot should be able to say what it's doing at any moment. It should be able to stop and explain what it's done and what its decision is. And I don't think that's realistic.

Lauren Prastien: Stay with us.

Molly Wright Steenson: Architecture, AI and design work together in ways that we don't talk about all the time. But also I think that with AI and design, design is where the rubber meets the road. So, the way that decisions have been made by AI researchers or technologists who work on AI-related technologies - it's decisions that they make about the design of a thing or a product or a service or something else. Those design decisions are felt by humans. And that's where design is involved.

Lauren Prastien: That’s Molly Wright Steenson, a Professor of Ethics & Computational Technologies at CMU. Her research focuses on how the principles of design, architecture and artificial intelligence have informed and can continue to inform each other. She’s the author of Architectural Intelligence: How Designers and Architects Created the Digital Landscape.

Molly Wright Steenson: Well, I think a lot of things with artificial intelligence take place in what gets called the black box.

Algorithms make decisions, um, process things in a way that's opaque to most of us. So we know what the inputs are - us, the things we do that get changed into data, which we don't necessarily understand - that gets parsed by an algorithm and then outcomes happen. People don't get the student loan or they don't see the really high paying jobs on their LinkedIn profile or something like that. So these decisions get made in a black box and some people say that an AI or a robot should be able to say what it's doing at any moment. It should be able to stop and explain what it's done and what its decision is. And I don't think that's realistic.

Lauren Prastien: Like I mentioned before the break, there are some real consequences to an algorithm receiving faulty data or creating a problematic pattern based on the information it receives. But when it comes to actually unpacking that black box and seeing how those decisions are made, it’s not as easy as lifting a lid and looking inside. 

Molly Wright Steenson: We know with deep learning that most researchers don't even understand how the algorithms do what they do. And we also know that sometimes if you want to be totally transparent and you give someone way too much information, it actually makes matters worse. So Mike Ananny and Kate Crawford talk about a bunch of reasons why transparency is kind of, I don't want to say a lie, but it might be harmful.

Lauren Prastien: Lately, transparency is a pretty hot topic in artificial intelligence. The idea is this: if we know what an algorithm is doing at any given point, we would be able to trust it. More than that - we could control it and steer it in the right direction. But like Professor Steenson said, there’s a lot of problems with this idea.

In their paper “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability,” Mike Ananny of the University of Southern California and Kate Crawford of Microsoft Research and New York University ask:

Eugene Leventhal: “Can ‘black boxes’ ever be opened, and if so, would that ever be sufficient?”

Lauren Prastien: And ultimately, what they find is that transparency is a pretty insufficient way to govern or understand an algorithm.

This is because Ananny and Crawford have found that while we assume,

Eugene Leventhal: “Seeing a phenomenon creates opportunities and obligations to make it accountable and thus to change it.”

Lauren Prastien: The reality is that,

Eugene Leventhal: “We instead hold systems accountable by looking across them—seeing them as sociotechnical systems that do not contain complexity but enact complexity by connecting to and intertwining with assemblages of humans and non-humans.”

Lauren Prastien: Essentially, what that means is seeing that algorithm or its underlying data isn’t the same as holding that algorithm accountable, which is ultimately the goal here.

If I look under the hood of a car, I’m going to be able to understand how that car functions on a mechanical level. Maybe. If I have the training to know how all those moving parts work. But looking under the hood of that car isn’t going to tell me how that car’s driver is going to handle a snowstorm or a deer running into the road. Which is why we can’t just look at the system itself, we have to look at how that system works within the other systems operating around it, like Ananny and Crawford said.

We can look at the Target marketing algorithm from the beginning of this episode and see, yes, it’s found those products that pregnant people normally buy, and now it’s going to help them save money on those products while making some revenue for Target, so this is a good algorithm. But the second we zoom out and take a look at the larger systems operating around that algorithm, it’s really not. Because even if that algorithm has perfectly narrowed down its criteria - we can see, for instance, that it’s looking at unscented lotion and mineral supplements and cotton balls and the other twenty-two products that, purchased together, usually signal a pregnant customer - it’s not taking into account the greater social implications of sending those coupons to the home address of a pregnant teenager in the Midwest. And then, wow, that’s really bad. But transparency doesn’t cover that, and no amount of transparency would have prevented that from happening.

Which is why Professor Steenson is more interested in the concept of interpretability. 

Molly Wright Steenson: It's not a matter of something explaining itself. It's a matter of you having the information that you need so you can interpret what's happened or what it means. And I think that if we're considering policy ramifications, then this notion of interpretation is really, really important. As in, it's important for policy makers. It's important for lawmakers, and it's important for citizens. We want to make decisions on our own. We might not come to the same decision about what's right, but we want to be able to make that interpretation. 

Lauren Prastien: When it comes to managing the black box and the role of algorithms in our lives, Professor Steenson sees this as a two-sided approach.

One side is the responsibility that lies with our institutions, such as companies and governments. And what would that look like? Technologists would be more mindful of the implications of their algorithms and work towards advancing explainability. Governments would create structures to limit the chance that citizens are adversely affected as new technologies are rolled out. And companies would find new ways of bringing more people to the table, including people who aren’t technologists, to truly understand the impacts of algorithms. This comes back to the fundamentals of design approaches taken towards artificial intelligence and tech in general. 

And this brings us to the other side of the coin - us. Though this is directly linked with education, even before there is a change in how digital literacy is approached, we can start by being more engaged in our part in how these algorithms are being deployed and which specific areas of our lives they’re going to impact. And, by the way, we’ll get into what that might look like next week. 

But when it comes to us, Professor Hosanagar agrees that we can’t just sit back and watch all of this unfold and hope for the best. But that doesn’t necessarily mean that we have to become experts in these technologies.

Kartik Hosanagar: If you have users who are not passive, who are actually actively engaging with the technology they use, who understand the technology they use, they understand the implications and they can push back and they can say, why does this company need this particular data of mine? Or I understand why this decision was made and I'm okay with it.

Lauren Prastien: Until there’s more public will to better understand and until there are more education opportunities for people to learn, it may be challenging to get such controls to be effectively used. Think of privacy policies. Sure, it’s great that companies have to disclose information related to privacy. But how often do you read those agreements? Just having control may be a bit of a false hope until there is effort placed around education.

So can we unpack the black box? It’s complicated. Right, Eugene?

Eugene Leventhal: It absolutely is, Lauren. As we’ve learned today from our guests, figuring out what an algorithm is doing isn’t just a matter of lifting a lid and looking inside. It’s a matter of understanding the larger systems operating around that algorithm, and seeing where that algorithm’s decision-making fits into those systems as a whole. And there’s an opportunity for policymakers, technologists and the people impacted by these algorithms to ask, “what kind of data is this algorithm using, and what biases could be impacting that data?”, as well as to consider “is using an algorithm in this context helpful or harmful, and to whom?”

Lauren Prastien: Over the next two episodes we’re going to explore some of the potential policy responses, ranging from looking at different ways of empowering digital rights to the importance of community standards.

Next week, we’ll be looking at data rights. Did you know that you played a pretty significant role in the digitization of the entire New York Times archive, the development of Google Maps and, now, the future of self-driving cars? We’ll talk about what that means, and what that could entitle you to next week. And here’s a preview of our conversation with our guest Tae Wan Kim, a professor of Business Ethics:

Tae Wan Kim: Data subjects can be considered as a special kind of investors.

Lauren Prastien: I’m Lauren Prastien.

Eugene Leventhal: And I’m Eugene Leventhal.

Lauren Prastien: This was Consequential. We’ll see you next week.  

Eugene Leventhal: Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University, which was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen. 

This episode uses an excerpt of Mike Ananny and Kate Crawford’s “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.”

Lauren Prastien: Did you know that you played a vital role in the digitization of the entire New York Times archive, the development of Google Maps and the creation of Amazon’s recommendation engine? That's right, you!

Whether or not you know how to code, you've been part of the expansion of just how prevalent artificial intelligence is in society today. When you make choices of what to watch on Netflix or YouTube, you're informing their recommendation engine. When you interact with Alexa or Siri, you help train their voice recognition software. And if you've ever had to confirm your identity online and prove that you are not a robot, then you’re familiar with our key example for today - CAPTCHA. It started as a security check that digitized books, but now, every time you complete a CAPTCHA, you are determining the future of self-driving cars.

So, where does all of this leave you and your relationship with technology as a whole?

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, talk policy, and read quotes. I’ll pass it back to you now, Lauren.

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelley, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

This week, we’re talking about Data Subjects and Manure Entrepreneurs.

So stick with us.

Our journey begins with CAPTCHA.

So, CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” It’s catchy, I know. The idea is you’re trying to tell your computer that you’re a person and not, say, a bot that’s trying to wreak havoc on the Internet or impersonate you and steal your information. In his 2018 Netflix special Kid Gorgeous, comedian John Mulaney summed this up pretty well:

John Mulaney: The world is run by computers. The world is run by robots. And sometimes they ask us if we’re a robot, just cause we’re trying to log on and look at our own stuff. Multiple times a day. May I see my stuff please? I smell a robot. Prove. Prove. Prove you’re not a robot. Look at these curvy letters. Much curvier than most letters, wouldn’t you say? No robot could ever read these.

Lauren Prastien: Originally, this was the conceit: You’re trying to log into a website, and you’re presented with a series of letters and numbers that look like they’ve been run through a washing machine. You squint at it, try to figure it out, type it in, and then you get to see your stuff. Or you mess up, briefly worry that you might actually be a robot, and then try again.

But aside from keeping robots from touching your stuff or, say, instantaneously scalping all the tickets for a concert the second they drop and then reselling them at three times the cost, this didn’t really accomplish anything.

And this started to bother one of the early developers of CAPTCHA, Luis von Ahn. You probably know him as the co-founder and CEO of the language-learning platform Duolingo. But back in 2000, von Ahn was a PhD candidate at Carnegie Mellon University, where he worked on developing some of the first CAPTCHAs with his advisor, Manuel Blum. And for a minute there, he was kind of regretting subjecting humanity to these really obnoxious little tasks with no payoff. You proved you weren’t a robot, and then you proved you weren’t a robot, and then you proved you weren’t a robot, and you had nothing to show for it. You know, imagine Sisyphus happy. 

So in 2007, von Ahn and a team of computer scientists at Carnegie Mellon established reCAPTCHA, a CAPTCHA-like system that didn’t just spit out a bunch of random letters and numbers – it borrowed text from otherwise hard-to-decipher books. So now, instead of just proving you weren’t a robot, you would also help digitize books.

That’s pretty useful, right? Now you’re not just seeing your stuff, you’re making books like Pride and Prejudice and the Adventures of Sherlock Holmes freely available online. If you’re interested in learning about reCAPTCHA’s work digitizing out-of-copyright books, the journalist Alex Hutchinson did some fantastic reporting on this for The Walrus in 2018, but let me give you the abbreviated version:

In 2004, there was a huge international initiative to digitize every out-of-copyright book in the world to make it freely available to anyone. While the software was able to digitize the content of a new book with 90% accuracy, older books presented some problems because they weren’t printed in a lot of the standard fonts we have now. So, the software could only accurately transcribe about 60% of older texts.

This was where the reCAPTCHA came in. The reCAPTCHA would consist of two words: A known word that serves as the actual test to confirm that you were a human and an unknown word that the software failed to characterize. If you go on CAPTCHA’s website, the example CAPTCHA you’ll get includes the words: “overlooks inquiry.” So let’s say the software already knows that the word overlooks is indeed the word overlooks. There’s your Turing test, where you prove you’re not a robot. But the word “inquiry” – I don’t know, it also looks like it could maybe be the word injury? So you throw that in the reCAPTCHA. And after a general consensus among four users as to what that word is, you’ve now transcribed the missing word in the book – at 99.1% accuracy.
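We obviously don’t have reCAPTCHA’s real code, but the logic described above - one known control word, one unknown word, and agreement among several people before a transcription is accepted - can be sketched roughly like this. The consensus threshold of four comes from the description above; the function and variable names are hypothetical.

```python
# A rough, hypothetical sketch of the two-word reCAPTCHA logic described above.
from collections import Counter

def check_and_transcribe(answer_known, answer_unknown, known_word,
                         votes_for_unknown, consensus=4):
    """The known word is the actual Turing test; the answer for the unknown
    word is a 'vote' toward transcribing a word the OCR software couldn't read."""
    if answer_known.strip().lower() != known_word.lower():
        return False, None                      # failed the test: maybe a robot
    votes_for_unknown.append(answer_unknown.strip().lower())
    word, count = Counter(votes_for_unknown).most_common(1)[0]
    transcription = word if count >= consensus else None
    return True, transcription                  # human, plus maybe a new word

votes = ["inquiry", "inquiry", "inquiry", "injury"]   # three earlier users agreed
is_human, word = check_and_transcribe("overlooks", "inquiry", "overlooks", votes)
print(is_human, word)  # True inquiry - the fourth matching answer settles it
```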

The reCAPTCHA system helps to correct over 10 million words each day, allowing people to freely access books and articles online that they may never have had access to before. It’s also responsible for digitizing the entire New York Times archive, from 1851 to the present day. So, bravo! You did that!

But perhaps you’ve noticed that in the past few years, the CAPTCHAs reCAPTCHA was showing you have looked…a little different. Maybe you had to tell reCAPTCHA which pictures had storefronts in them. Or, maybe you had to pick all of the pictures of dogs. Or maybe only one of the words was a word, and the other one was a picture of a house number. Or, oh, I don’t know…

John Mulaney: I’ve devised a question no robot could ever answer! Which of these pictures does not have a stop sign in it? What?!

Lauren Prastien: Yeah. You know what kind of computer needs to recognize a stop sign and differentiate it from, say, a yield sign? Like I said, congratulations, you are part of the future of self-driving cars.

When it comes to making books freely available, it’s really easy to see this as a work of altruism for the common good. And that’s what Luis von Ahn envisioned: a collective effort on the part of humanity to share knowledge and literature across the world wide web.

And this isn’t the only time we’ve done something like this. Wikipedia is a vast online database of knowledge that is developed almost entirely from open-source labor. It’s an amazing example of something we discussed in our first episode: collective intelligence. Most Wikipedia editors self-describe as volunteers. And by the way, I got that from a Wikipedia article titled “Thoughts on Wikipedia Editing and Digital Labor.” But while Wikipedia also relies on free labor to promote the spread of knowledge, that labor was completely voluntary.

But in the case of reCAPTCHA, you can make the argument that you were an unconsenting, unpaid laborer in the process. Which is exactly what one Massachusetts woman did in 2015, when she filed a class-action lawsuit against Google, Inc., which bought reCAPTCHA in 2009. The suit alleged that asking users to transcribe text for Google’s commercial use and benefit, with no corresponding benefit to the user, was an act of fraud. Remember, only one of the two words in a reCAPTCHA actually keeps your stuff safe, so to speak.

However, the case was dismissed by the US District Court for the Northern District of California in early 2016, essentially on the grounds that the minimal effort of typing a single word, even without knowledge of how Google profits from it, does not outweigh the benefit the user receives. The US District Court argued that the Plaintiff was being compensated, just not financially: she’s allowed to use the free Google services that rely on those reCAPTCHAS like Google Maps and Google Books, as well as the free Gmail account she was signing up for when she completed the reCAPTCHA. In other words, the court found that the value of that free labor - however unwitting it is - does not outweigh the value of the benefits that someone receives for performing that labor.

But is that still true today? Consider a recent report from Allied Market Research, which priced the global market for autonomous vehicles at 54.23 billion dollars, with the expectation that this market will be worth more than 500 billion by 2026.

This isn’t just about reCAPTCHA and self-driving cars. And it isn’t just a financial issue or a labor issue. Your data is an incredibly valuable and ultimately essential resource, and it’s driving more than just autonomous vehicles. Last episode, we discussed just how pervasive algorithms have become, from recommending the things we buy and watch to supporting treatment and hiring decisions. But it’s important to remember that these algorithms didn’t just appear out of nowhere. The algorithms that we use every day could not exist without the data that we passively offer up anytime we click on an advertisement or order a t-shirt or binge that new show everyone’s talking about.

So it’s easy to feel absolutely out of control here, like you don’t have a seat at the table. But here’s the thing: You have a seat, it’s just been empty.

Tae Wan Kim: Without the data, the AI’s not going to work. But the problem is, who really owns the data? So who benefits, and who does not?

Lauren Prastien: So stay with us.

If these algorithms need our data to function, that means we’re an absolutely necessary and, dare I say, consequential part of this process. And that might entitle us to some kind of authority over how our data is being used. But in order to define our rights when it comes to our data, we need to define what sort of authority we have.

That’s where Professor Tae Wan Kim comes in. He’s a Professor of Business Ethics, and specifically, he’s interested in the ethics of data capitalism. In other words, he wants to know what our rights are when big data is monetized, and he’s interested in articulating exactly where a data subject - or anyone whose data is being used to drive technology - sits at the table.

Tae Wan Kim: So who benefits, and who does not? Our typical understanding of data subjects is that they are consumers. So, we offer data to Facebook. In exchange, Facebook offers a service. That is a discrete transaction. Once we sell the data to Facebook, then the data is theirs. But there is a problem – legally and philosophically – to make a case that we sell our privacy to someone else. That’s the beginning of this question.

Lauren Prastien: As we discussed with the example of reCAPTCHA, there’s also a pervading argument that data subjects are workers. But Professor Kim is interested in a different framework for encouraging data subjects to take a proactive role in this decision-making: data subjects as investors.

Tae Wan Kim: Data subjects can be considered as a special kind of investors. Like shareholders.

Lauren Prastien: In his research on data ownership, Professor Kim found that the relationship between data subjects and the corporations that use their data is structurally similar to the relationship between shareholders and the corporations that use their investments. Essentially, both data subjects and traditional shareholders provide the essential resources necessary to power a given product. For shareholders, that’s money - by investing money into a business, you then get to reap the rewards of your investment, if it’s a successful investment. And that’s pretty similar to what data subjects do with their data - they give the basic resources that drive the technology that they then benefit from using. Like how the people filling out reCAPTCHAS got to use Google’s services for free.

But there’s a big difference between shareholders and data subjects - at least right now. Shareholders know how much money they invested and are aware of what is being done with that money. And even in a more general sense, shareholders know that they’re shareholders. But some data subjects aren’t even aware they’re data subjects.

Tae Wan Kim: The bottom line is informed consent. But the problem is informed consent assumes that the data is mine and then I transfer the exclusive right to use that data to another company. But it’s not that clear of an issue.

Lauren Prastien: This kind of grey area has come up before, by the way, in a very different kind of business model.

Tae Wan Kim: In the nineteenth century before the introduction of automobiles, most people used horse-drawn wagons. Horses create manure. All the way down, all the roads. No one thought that would be an important economic resource. But some people thought that maybe, no one cares about that, no one claims ownership.

Lauren Prastien: Yeah. You can see where this is going. Some very brave man named William A. Lockwood stepped into the street, found eighteen piles of horse droppings just kind of sitting there and saw an opportunity to make some fertilizer on the cheap. The problem was that this guy named Thomas Haslem had ordered two of his servants to make those piles with the intention of I don’t know, picking them up later, I guess? And when he arrives the next day to find the piles of manure gone, he says, hey, wait a second, that’s my horse’s droppings. You can’t just use my horse’s droppings that I left in the street for profit. So I want the $6 that the fertilizer you made is worth. Then Lockwood the manure entrepreneur said, well, no, because I waited 24 hours for the original owner to claim it, I asked a few public officials if they knew who made those piles and if they wanted them, and this constable was basically like, “ew. No.” So I found your weird manure piles and I gathered them up, and then did the labor of making the fertilizer. And the court said, “I mean, yeah, that’s valid.”

The case, Haslem v. Lockwood, is hilarious and fascinating and would take an entire episode to unpack. But the point here is this: these questions are complicated. But that doesn’t mean we shouldn’t tackle them.

I should note here that Haslem v. Lockwood is an interesting analog, but it’s not a perfect point of comparison. Horse droppings are, well, excrement. And the fertilizer that Lockwood made didn’t impact Haslem’s ability to get a job or secure a loan. So our data is a little different from that.

Tae Wan Kim: If our society is similar about data, if no one cares about data, then the courts will decide with the companies. But once we as the individuals start claiming that I have interest in my data, claim that I have some proprietary interest in my data, then the landscape will probably change. So it’s up to us, actually.

Lauren Prastien: Despite how unapproachable topics such as AI and machine learning can seem for those who do not specialize in these areas, it’s crucial to remember that everyone plays an important role in the future of how technology gets rolled out and implemented. By ensuring that individuals have rights relating to their own data, policymakers can set the stage for people to have some control over their data.

Tae Wan Kim: So for instance, shareholders are granted several rights. One is information rights. Once they invest their money, the company has a duty to explain how the company has used the investment for some period of time. How to realize that duty in typical societies is using the annual shareholders meeting, during which shareholders are informed of how their money has been used. If data subjects have similar information rights, then they have a right to know how companies have used their data to run their companies. So, we can imagine something like an annual data subjects meeting.

Lauren Prastien: It might be an added burden on the companies innovating with AI and machine learning, but creating such rights would also ensure a higher standard of protection for the individuals. And by articulating that data subjects are in fact investors, we’d know how to enact legislation to better protect them.

Tae Wan Kim: It is a philosophical and legal question. What is really the legitimate status of the data subject? Are they simply consumers? Then the consumer protection perspective is the best. So, public policymakers can think of how to protect them using consumer protection agencies. If data subjects are laborers, then labor protection law is the best way to go. If investor is the right legitimate status, then we have to think of how to use the SEC.

Lauren Prastien: If we had such rights, we could fight for programs to help deal with some of the problematic areas of AI, such as the kinds of harmful biases that can emerge in the sorts of algorithms that we discussed last week. But that’s going to take some education, both on our part and on the part of our policymakers.

Senator Orrin Hatch: If so, how do you sustain a business model in which users don’t pay for your service?

Mark Zuckerberg: Senator, we run ads.

Senator Orrin Hatch: I see. That’s great.

Lauren Prastien: Stay with us.

In a 2015 article in The Guardian titled “What does the panopticon mean in the age of digital surveillance?”, Thomas McMullan said of the sale of our privacy:

Eugene Leventhal: “In the private space of my personal browsing, I do not feel exposed - I do not feel that my body of data is under surveillance because I do not know where that body begins or ends.”

Lauren Prastien: Here, he was referring to how we do or do not police our own online behavior under the assumption that we are all being constantly watched. But there’s something to be said of the fact that often, we don’t know where that body of data begins or ends, particularly when it comes to data capitalism. And if we did, maybe we’d be able to take a more proactive role in those decisions.

Because while Professor Kim’s approach to understanding our legal role as data subjects could inform how we may or may not be protected by certain governing bodies, we can’t just be passive in assuming that that protection is absolutely coming. And by the way, we probably can’t wait around for policymakers to just learn these things on their own.

In April 2018, Facebook co-founder and CEO Mark Zuckerberg appeared before Congress to discuss data privacy and the Cambridge Analytica scandal. And it became pretty clear that a lot of really prominent and powerful policymakers didn’t really understand how Facebook and other companies that collect, monetize and utilize your data actually work.

Senator Orrin Hatch: If so, how do you sustain a business model in which users don’t pay for your service?

Mark Zuckerberg: Senator, we run ads.

Senator Orrin Hatch: I see. That’s great.

Lauren Prastien: Remember when Professor Kim said that every time we use a site like Facebook, we’re making a transaction? Essentially, instead of paying Facebook money to log on, share articles, talk to our friends, check up on our old high school rivals, we’re giving them our data, which they use in turn to push us relevant ads that generate money for the site. Which is why sometimes, you’ll go look at a pair of sneakers on one website, and then proceed to have those sneakers chase you around the entire Internet. And this is a pretty consistent model, but it’s also a pretty new model. And it makes sense once you hear it, but intuitively, we’re not always aware that that transaction is taking place.

The Zuckerberg hearings were ten hours long in total and, at times, really frustrating. But perhaps the most telling was this moment between Zuckerberg and Louisiana Senator John Kennedy:

Senator John Kennedy: As a Facebook user, are you willing to give me more control over my data?

Mark Zuckerberg: Senator, as someone who uses Facebook, I believe that you should have complete control over your data.

Senator John Kennedy: Okay. Are you willing to go back and work on giving me a greater right to erase my data?

Mark Zuckerberg: Senator, you can already delete any of the data that’s there or delete all of your data.

Senator John Kennedy: Are you going to work on expanding that?

Mark Zuckerberg: Senator, I think we already do what you think we are referring to, but certainly we’re working on trying to make these controls easier.

Senator John Kennedy: Are you willing to expand my right to know who you’re sharing my data with?

Mark Zuckerberg: Senator, we already give you a list of apps that you’re using, and you signed into those yourself, and provided affirmative consent. As I said, we don’t share any data with…

Senator John Kennedy: On that...on that user agreement - are you willing to expand my right to prohibit you from sharing my data?

Mark Zuckerberg: Senator, again, I believe that you already have that control. I think people have that full control in the system already today. If we’re not communicating this clearly, then that’s a big thing that we should work on, because I think the principles that you’re articulating are the ones that you believe in and try to codify in the product that we build.

Senator John Kennedy: Are you willing to give me the right to take my data on Facebook and move it to another social media platform?

Mark Zuckerberg: Senator, you can already do that. We have a download your information tool where you can go, get a file of all the content there and then do whatever you want with it.

Senator John Kennedy: Then I assume you’re willing to give me the right to say that I’m going to go on your platform and you’re going to tell a lot about me as a result but I don’t want you to share it with anybody.

Mark Zuckerberg: Yes, Senator. I believe you already have that ability today.

Lauren Prastien: There’s a massive breakdown in communication between the people set to draw up legislation on platforms like Facebook and the people who design and run those platforms. But let me ask you something - did you know that you could go delete your data from Facebook? And did you know that actually, Facebook doesn’t sell your data - it acts as the broker between you and the companies that ultimately advertise to you by selling access to your newsfeed? A company can’t say, “hey Facebook, can you give me all of Lauren Prastien’s data so that I can figure out how to sell stuff to her? Please and thank you.” But it can say, “hey Facebook, can you give me access to someone who might be willing to buy these sneakers? Please and thank you.” And Facebook would say, “why yes. I can’t tell you who she is. But I can keep reminding her that these sneakers exist until she eventually capitulates and buys them.”

Which is something you can opt out of or manage. If you go to your preferences page on Facebook, you can decide what kinds of ads you want targeted to you, what kind of data Facebook can access for those ads, and what materials you might find upsetting to look at.

Which, by the way, wasn’t something I knew either, until I started researching for this episode.

But it’s also worth noting that on December 18, 2018, just eight months after the Zuckerberg hearings, Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore of the New York Times broke the story that Facebook let major companies like Microsoft, Netflix, Spotify, Amazon and Yahoo access users’ names, contact information, private messages and posts, despite claiming that it had stopped this kind of sharing years ago. The Times also noted that some of these companies even had the ability to read, write and delete users’ private messages. Even the New York Times itself was named as a company that retained access to users’ friend lists until 2017, despite the fact that it had discontinued the article-sharing application that was using those friend lists in 2011. And all this is pretty meaningful, given this exchange in the Zuckerberg hearings:

Senator John Kennedy: Let me ask you one final question in my twelve seconds. Could somebody call you up and say, I want to see John Kennedy’s file?

Mark Zuckerberg: Absolutely not!

Senator John Kennedy: Not would you do it. Could you do it?

Mark Zuckerberg: In theory.

Senator John Kennedy: Do you have the right to put my data...a name on my data and share it with somebody?

Mark Zuckerberg: I do not believe we have the right to do that.

Senator John Kennedy: Do you have the ability?

Mark Zuckerberg: Senator, the data is in the system. So…

Senator John Kennedy: Do you have the ability?

Mark Zuckerberg: Technically, I think someone could do that. But that would be a massive breach. So we would never do that.

Senator John Kennedy: It would be a breach. Thank you, Mr. Chairman.

Lauren Prastien: In response to the New York Times exposé, Facebook’s director of privacy and public policy, Steve Scatterfield, said none of the partnerships violated users’ privacy or its 2011 agreement with the Federal Trade Commission, wherein Facebook agreed not to share users’ data without their explicit permission. Why? Essentially, because the 150 companies that had access to the users’ data, even if those users had disabled all data-sharing options - that’s right, 150, and yes, you heard me, even if users were like please share absolutely none of my data - those companies were acting as extensions of Facebook itself. Which...meh?

So while Facebook may not have literally sold your data, they did make deals that let some of the most powerful companies in the world take a little peek at it. Which was not something that I considered as within the realm of possibility when I agreed to make a data transaction with Facebook.

And that’s just Facebook.

Kartik Hosanagar: I think in today's world we need to be talking about, uh, basic data and algorithm literacy, which should be in schools and people should have a basic understanding of when I do things on an app or on a website, what kinds of data might be tracked? What might, what are the kinds of things that companies can do with the data? How do I find out how data are being used?

Lauren Prastien: Stay with us.

Have you ever been walking around and suddenly got a notification that a completely innocuous app, like, I don’t know, a game app that you play to make your commute go faster, has been tracking your location? And your phone goes,

Eugene Leventhal: “Hey, do you want this app to continue tracking your location?”

Lauren Prastien: And you’re like, “wait, what do you mean, continue?”

By the way, the reason why a lot of those apps ask to track your location is to be able to target more relevant ads to you. But even though I technically consented to that and then had the ability to tell the app, “hey, stop it. No, I don’t want you to track my location,” I didn’t really know that.

So there’s a lot of confusion. But there is some legislation in the works for how to most effectively regulate this, from requiring users to opt in to sharing data rather than just sharing it by default to requiring tech companies to more overtly disclose which advertisers they’re working with.

One piece of legislation currently in the works is the DASHBOARD Act, a bipartisan effort that would require large-scale digital service providers like YouTube and Amazon to give regular updates to their users on what personal data is being collected, what the economic value of that data is, and how third parties are using that data. By the way, DASHBOARD stands for “Designing Accounting Safeguards to Help Broaden Oversight And Regulations on Data.” Yeah, I am also loving the acronyms this episode.

On a state level, California passed the California Consumer Privacy Act in 2018 and finalized amendments to it in late 2019. The law is set to come into effect on January 1, 2020, and it will give the state increased power in demanding disclosure and, in certain circumstances, pursuing legal action against businesses. It will apply to companies earning over $25 million annually, holding personal information on over 50,000 people, or earning half of their revenue from selling others’ data.
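As a rough illustration of how those thresholds work, here’s a tiny sketch. The three numbers come straight from the description above; the function itself is just a hypothetical way of writing the rule down, not the statute’s actual text or legal advice.

```python
# A hypothetical sketch of the applicability thresholds described above.
def ccpa_likely_applies(annual_revenue_usd, people_whose_data_is_held,
                        share_of_revenue_from_selling_data):
    """A business is covered if it crosses ANY one of the three thresholds."""
    return (annual_revenue_usd > 25_000_000
            or people_whose_data_is_held >= 50_000
            or share_of_revenue_from_selling_data >= 0.5)

# Example: a small firm holding data on 60,000 people is covered
# even though its revenue is well under $25 million.
print(ccpa_likely_applies(3_000_000, 60_000, 0.1))  # True
```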

In addition to creating frameworks that define and defend the rights of data subjects, policymakers can also focus on initiatives to educate data subjects on their role in the development of these technologies. Because, like Professor Kim said, a big difference between shareholders and data subjects is informed consent.

We asked Professor Hosanagar, our guest from our previous episode, what that kind of informed consent might look like.

Kartik Hosanagar: Yeah, I would say that, first of all, where we are today is that most of us use technology very passively. And, uh, you know, as I mentioned, decisions are being made for us and about us when we have no clue, nor the interest in digging in deeper and understanding what's actually happening behind the scenes. And I think that needs to change. Um, in terms of, you know, to what extent are companies providing the information or users digging in and trying to learn more? Not a whole lot is happening in that regard. So we're mostly in the dark. We do need to know certain things. And again, it doesn't mean that we need to know the nitty gritty of how these algorithms work and, you know, all the engineering details.

Lauren Prastien: While it may not be realistic to think that every person on Earth will be able to read and write code, it is possible to add a basic element of digital literacy to educational systems. This is something that the American education system has tried to do whenever we encounter a new technology that’s going to impact our workforce and our way of life. Growing up in the American public-school system, I remember learning skills like using Wikipedia responsibly and effectively navigating a search engine like Google. So what’s to stop us from incorporating algorithmic literacy into curricula?

Kartik Hosanagar: You know, we used to talk about digital literacy 10, 15 years back and basic computer literacy and knowledge of the Internet. I think in today's world we need to be talking about basic data and algorithm literacy, which should be in schools and people should have a basic understanding of, you know, when I do things on an app or on a website, what kinds of data might be tracked? What might, what are the kinds of things that companies can do with the data? How do I find out how data are being used?

Lauren Prastien: You also may have noticed that a lot of the policy recommendations that have come up on this podcast have some educational component. And this isn’t a huge coincidence. Education is a big theme here. As this season progresses, we’re going to be digging into how education has changed and is going to continue to change in response to these technologies, both in terms of the infiltration of tech into the classroom and in terms of preparing individuals for the way these technologies will impact their lives and their places of work.

This brings us back to one of our central points this season - that you play a very crucial role in shaping an equitable digital future. Not just in providing the data, but in advocating for how that data gets used.

Before we end, it’s worth mentioning that a few weeks ago, Mark Zuckerberg returned to Capitol Hill to talk to the House’s Financial Services Committee about the Libra cryptocurrency system. Some of the issues we’ve been discussing today and that Zuckerberg discussed in his 2018 hearings came up again.

So we thought it would be important to watch and review the 5-hour hearing before we released this episode as written. And something that we noticed was that this time, Congress was pretty well-informed on a lot of the nuances of Facebook’s data monetization model, the algorithms Facebook uses and even data subject protections. Like in this exchange with New York Representative Nydia Velazquez:

Representative Nydia Velazquez: Mr. Zuckerberg, Calibra has pledged it will not share account information or financial data with Facebook or any third-party without customer consent. However, Facebook has had a history of problems safeguarding users’ data. In July, Facebook was forced to pay a 5 billion-dollar fine to the FTC. By far, the largest penalty ever imposed on a company for violating consumers’ privacy rights, as part of the settlement related to the 2018 Cambridge Analytica scandal. So let me start off by asking you a very simple question, why should we believe what you and Calibra are saying about protecting customer privacy and financial data?

Mark Zuckerberg: Well, Congresswoman, I think this is an important question for us on all of the new services that we build. We certainly have work to do to build trust. I think the settlement and order we entered into with the FTC will help us set a new standard for our industry in terms of the rigor of the privacy program that we’re building. We’re now basically building out a privacy program for people’s data that is parallel to what the Sarbanes-Oxley requirements would be for a public company on people’s financial data.

Lauren Prastien: So real quick. The Sarbanes-Oxley Act is a federal law that protects the investors in a public company from fraudulent financial reporting. It was passed in 2002 as a result of the financial scandals of the early aughts, like the Enron Scandal of 2001 and the Tyco Scandal of 2002.

The hearings also raised a really interesting issue when it comes to data subject rights: shadow profiles. Here’s Iowa Representative Cynthia Axne:

Representative Cynthia Axne: So do you collect data on people who don’t even have an account with Facebook?

Mark Zuckerberg: Congresswoman, there are a number of cases where a website or app might send us signals from things that they’re seeing and we might match that to someone who’s on our services. But someone might also send us information about someone who’s not on our services, in which case we likely wouldn’t use that.

Representative Cynthia Axne: So you collect data on people who don’t even have an account? Correct?

Mark Zuckerberg: Congressman, I’m not sure that’s what I just said. But-

Representative Cynthia Axne: If you are loading up somebody’s contacts and you’re able to access that information, that’s information about somebody who might not have a Facebook account. Is that correct?

Mark Zuckerberg: Congresswoman, if you’re referring to a person uploading their own contact list and saying that the information on their contact list might include people who are not on Facebook, then sure, yes. In that case they’re...

Representative Cynthia Axne: So Facebook then has a profile of virtually every American. And your business model is to sell ads based on harvesting as much data as possible from as many people as possible. So you said last year that you believed it was a reasonable principle that consumers should be able to easily place limits on the personal data that companies collect and retain. I know Facebook users have a setting to opt out of data collection and that they can download their information. But I want to remind you of what you said in your testimony, ”Facebook is about putting power in people’s hands.” If one of my constituents doesn’t have a Facebook account, how are they supposed to place limits on what information your company has about them when they collect information about them, but they don’t have the opportunity to opt out because they’re not in Facebook?

Mark Zuckerberg: Congresswoman, respectfully, I think you…I don’t agree with the characterization saying that if someone uploads their contacts…

Representative Cynthia Axne: That’s just one example. I know that there’s multiple ways that you’re able to collect data for individuals. So I’m asking you, for those folks who don’t have a Facebook account, what are you doing to help them place limits on the information that your company has about them?

Mark Zuckerberg: Congresswoman, my understanding is not that we build profiles for people who are not on our service. There may be signals that apps and other things send us that might include people who aren’t in our community. But I don’t think we include those in any kind of understanding of who a person is, if the person isn’t on our services.

Representative Cynthia Axne: So I appreciate that. What actions do you know specifically are being taken or are you willing to take to ensure that people who don’t have a Facebook account have that power to limit the data that your company is collecting?

Mark Zuckerberg: Congresswoman, what I’m trying to communicate is that I believe that, that’s the case today. I can get back to you on all of the different things that we do in terms of controls of services.

Representative Cynthia Axne: That would be great. Because, we absolutely need some specifics around that to make sure that people can protect their data privacy. Mr. Zuckerberg, to conclude, Facebook is now tracking people’s behavior in numerous ways, whether they’re using it or not. It’s been used to undermine our elections. And of course, I know you’re aware Facebook isn’t the most trusted name. So I’m asking you to think about what needs to be fixed before you bring a currency to market. Thank you.

Lauren Prastien: This isn't the first time Mark Zuckerberg has been asked about shadow profiles by Congress. They came up in the 2018 hearings as well, where he denied any knowledge of their existence. However, in 2018, the journalist Kashmir Hill of Gizmodo found that Facebook's ad targeting algorithms were indeed using the contact information of individuals who did not necessarily have Facebook accounts, obtained via users who had consented to give Facebook access to their contacts. Shadow profiles might be the next frontier in the battle for data subject rights and informed consent.

Which is all to say that today, you have clearly defined rights as a consumer, and you have clear protections to ensure that the products you buy aren’t going to hurt you. When you go to school, you have rights and protections as a student. When you walk into a doctor’s office, you have rights and protections as a patient. And if you buy shares in a company, you have rights and protections as a shareholder - thanks, Sarbanes-Oxley. So why not as a data subject?

We hope that today's exploration of how you have contributed to the development of some of the most pervasive technologies in use has left you feeling more encouraged about your seat at the table when it comes to tech development.

So what does demanding your seat at the table look like?

Eugene Leventhal: That's a great question, Lauren, and it's something that's still being determined. From Professor Kim's work defining what role data subjects have and what rights and protections that entitles them to, to Professor Hosanagar's work advocating for adequate educational reform for data subjects, there's a lot happening on the policy side that can impact your place in the implementation of these technologies.

There are various national organizations working to better understand the impacts of AI and to help you, as someone whose data is being used for these systems, better understand how these algorithms impact you. Academically linked efforts such as the AI Now Institute out of NYU, Stanford's Institute for Human-Centered Artificial Intelligence, and here at Carnegie Mellon, the Block Center for Tech and Society are all working to increase the amount of attention researchers are paying to these questions. Nonprofits such as the Center for Humane Technology are helping people understand how technology overall is affecting our well-being, while more localized efforts, such as the Montreal AI Ethics Institute and Pittsburgh AI, are creating new ways for individuals to learn more about their role in AI, to advocate for their rights and to engage in the ongoing conversation surrounding data rights as a whole. And so where do we go from here, Lauren?

Lauren Prastien: We're going to take the next episode to explore how becoming more active participants in this landscape could help shape it for the better - as well as some of the obstacles to facilitating communication between the technologists who develop algorithms, the policymakers who implement them and the communities these algorithms affect. Because the fact is that algorithms are becoming more and more present in our lives and affecting increasingly important decisions. Ultimately, if we have better insight into these decision-making processes, we can help improve them and use them to improve our way of life, rather than diminish it.

Next week, we’ll talk to Jason Hong, a professor in Carnegie Mellon University’s Human-Computer Interaction Institute, who has conceived of a rather clever way for the people affected by algorithms to help hold them accountable:

Jason Hong: It turns out that several hundreds of companies already had these bug bounties and it’s a great way of trying to align incentives of the security researchers. So what we’re trying to do with bias bounty is can we try to incentivize lots of lay people to try to find potential bugs inside of these machine learning algorithms.

Lauren Prastien: I’m Lauren Prastien, and this was Consequential. We’ll see you next week.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen. 

This episode uses clips of John Mulaney’s comedy special Kid Gorgeous, the Wikipedia article “Thoughts on Wikipedia Editing and Digital Labor,” Alex Hutchinson’s reporting on reCAPTCHA for The Walrus, an excerpt of Thomas McMullan’s article “What does the panopticon mean in the age of digital surveillance?”, which was published in The Guardian in 2015, excerpts of Mark Zuckerberg’s 2018 and 2019 hearings before Congress, an excerpt from Gabriel J.X. Dance, Michael LaForgia and Nicholas Confessore’s article “As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants,” and Kashmir Hill’s reporting for Gizmodo on Facebook’s shadow profiles.

Lauren Prastien: Let me ask you a question. What does fairness mean?

Don't look it up in the dictionary. I already did. In case you're curious, fairness is impartial and just treatment or behavior without favoritism or discrimination.

What I’m really asking you is what does fairness mean mathematically? Computationally? Could you write me an algorithm that’s going to make the fairest decision? The most impartial, just, unbiased decision? 

And if you’re thinking, wait a second. Fairness isn’t a mathematical idea, it’s an ethical one. Fine, okay: whose ethics? Mine? Yours? A whole bunch of people’s? 

So what does fairness mean culturally? What does it mean to a community?

Over the past few weeks, we’ve talked about how algorithms can be really helpful and really hurtful. And so far, that’s been using a rather un-nuanced metric of

Eugene Leventhal: Oh, an algorithm to overcome human bias in hiring decisions? It works? That's great.

Lauren Prastien: Versus

Eugene Leventhal: Oh, the hiring algorithm favors men? Nevermind! That's bad! That's real bad.

Lauren Prastien: But a lot of algorithms aren't just objectively bad or good.

We can just about universally agree that an algorithm that tosses out applications from women is problematic. But take this example: In 2017, the City of Boston set out to try to improve their public school system’s busing processes using automated systems. To give you some context: Boston’s public school system had the highest transportation costs in the country, accounting for 10% of the district’s entire budget, and some schools drew students from over 20 zip codes.

So, the City issued the Boston Public Schools Transportation Challenge, which offered a $15,000 prize for an algorithm that would streamline its busing and school start times. The winning research team devised an algorithm that changed the start times of 84% of schools and was 20% more efficient than any of the route maps developed by hand. The algorithm saved the City $5 million that was reinvested directly into the schools, and it cut more than 20,000 pounds of carbon dioxide emissions per day.

Here’s the thing, though: While the busing component of this algorithm was a huge success, the start time component was never implemented. Why? Because while the algorithm would have benefited a lot of people in Boston - for instance, it reduced the number of teenagers with early high school start times from 74% to just 6%, and made sure that elementary schools let students out well before dark - a lot of families that benefited from the old system would now have to make pretty dramatic changes to their schedules to accommodate the new one. Which, those families argued, was unfair.

There are a lot of examples of seemingly good AI interventions being ultimately rejected by the communities they were designed for. Which, to be fair, seems like something a community should be able to do. Sometimes, this is because the definition of fairness the community is using doesn't quite match the definition of fairness the algorithm is using. So how do we balance these competing definitions of fairness?

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, talk policy, and read quotes. I’ll pass it back to you now Lauren. 

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelly, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Last week, we talked about your evolving rights as a data subject, and how important it is to understand these technologies. This week, it’s all about fairness, community standards and the bias bounty. Our question: how do we make sure these algorithms are fair...or, you know, fair enough?

This wasn’t the first time Boston tried to use an algorithm to reform their public school system and then decided not to implement it.

Their first attempt was really ambitious: the City wanted to improve the racial and geographic diversity of its school districts, while also providing students with education that was closer to home. Multiple studies have shown that racially and socioeconomically diverse classrooms lead to students who are better leaders, problem-solvers and critical thinkers, and who have lower drop-out rates and higher college enrollment rates. So like Professor Woolley found in her studies of collective intelligence, which you may remember from our first episode, diversity is a really good thing, on teams, in the workplace, and in classrooms.

However, by shortening commutes, the algorithm actually decreased school integration, because it wasn’t able to pick up on the larger racial disparities and socioeconomic contexts that pervade the City of Boston. So, the city rejected the algorithm.

Which makes sense - the algorithm didn’t do what it set out to do. But in the case of the start time suggestions from the busing algorithm, things are a little fuzzier. 

On December 22, 2017, the Boston Globe ran an opinion piece titled “Don’t blame the algorithm for doing what Boston school officials asked.” And the general premise was this: school start and end times are political problems, and algorithms shouldn’t be solving political problems. The authors argued that it was debatable whether Boston Public Schools wanted to enhance the students’ learning experience or just save money. If it was the latter, then it succeeded. And if it was the former, then some of the blowback might indicate that it didn’t. Because here are these parents, saying the system is unfair.

But this is the challenge: we haven’t actually agreed on a universal definition of fairness - algorithmic or otherwise.

Jason Hong: The question of what is fair has been something that's been plaguing humanity since the very beginning. 

Lauren Prastien: That’s Jason Hong. He’s a professor in the Human-Computer Interaction Institute at Carnegie Mellon. His research looks at how computer scientists can make technologies more easily understandable to the general public, particularly when it comes to issues like privacy, security and fairness. Because fairness in a computer science sense is a little different than the way we understand fairness by most cultural and philosophical definitions.

Jason Hong: You mentioned the mathematical definitions of fairness, and to give you a few examples of those one would be that it’s equally unfair or it’s equally wrong for different groups of people. 

Lauren Prastien: Correct me if I’m wrong here, but when most people say “I’d like to do this the fairest way possible,” they don’t actually mean “I’d like to do this such that it is equally unpleasant for everyone.” Or at least, that’s not the first place they’d go. But sometimes, that’s what an algorithm uses as a metric of fairness. But that’s also not the only way an algorithm can determine fairness.

Jason Hong: So say for example, if you have two groups of people, let's call them East Coast people and West Coast people and you want to give out loans to them. Uh, one definition of fairness would be that is equally accurate on both cases that an algorithm would correctly give loans to people who would repay correctly. But then it also does not give loans to people who are not repaying those loans. Uh, but that's just one definition of fairness. 
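
To make that definition a little more concrete, here is a minimal sketch in Python of what "equally accurate for both groups" could look like as a check. The function names and the tiny loan datasets are our own illustration, not anything from Professor Hong's research or a real lending system.

```python
# A minimal sketch (ours, not from the episode) of the fairness definition
# Professor Hong describes: a loan model is "fair" under this definition if
# it is equally accurate for two groups, say East Coast and West Coast
# applicants. All names and numbers below are made up for illustration.

def group_accuracy(predictions, outcomes):
    """Fraction of applicants the model got right: approved people who
    repaid, and denied people who would not have repaid."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

def equal_accuracy_gap(preds_a, outcomes_a, preds_b, outcomes_b):
    """Difference in accuracy between the two groups; a gap near 0 means
    the model satisfies this particular definition of fairness."""
    return abs(group_accuracy(preds_a, outcomes_a)
               - group_accuracy(preds_b, outcomes_b))

# Hypothetical decisions (1 = approve the loan) and outcomes (1 = repaid).
east_preds, east_actual = [1, 1, 0, 1, 0], [1, 0, 0, 1, 0]
west_preds, west_actual = [1, 0, 0, 1, 1], [1, 0, 1, 1, 1]

print(equal_accuracy_gap(east_preds, east_actual, west_preds, west_actual))
# Prints 0.0 here -- but a model can pass this check and still violate
# other definitions of fairness, like equal approval rates across groups.
```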

Lauren Prastien: And those are just the mathematical definitions. When it comes to the way people measure fairness, there’s usually a little element of ethics thrown in there, too.

Jason Hong: It comes from moral philosophy. One is called deontological. And the way to think about this one is it basically follows the rules and processes. So given that you had a set of rules, which people have already declared, say, here's how we're going to go do things, have you followed those rules and are you following the declared processes?

Uh, the other kind of fairness would be described as consequentialist. So basically outcomes. So is the outcome actually fair or not? 

Lauren Prastien: I promise we haven’t tried to put some form of the word consequential in every episode. It’s just been turning out that way. But real quick - consequentialism is a form of moral philosophy that basically says that the morality of an action is based entirely on the consequences of that action, rather than the intention of that action. That’s where deontological fairness conflicts with consequentialist fairness.

To explain this, let’s look at the story of Robin Hood. Remember, Robin Hood’s whole deal is that he steals from the rich in order to give to the poor. Which is, you know, a crime. But Robin Hood is considered the hero of the story, and this is because the rich, and in particular the Sheriff of Nottingham, are over-taxing the poor to the point of starvation and using those taxes to line their pockets. Which you could also argue is a form of theft. So do you punish Robin Hood? 

The deontologist would say yes, both he and the Sheriff stole from the community, just in different ways. And if you don’t hold them both accountable, the social order is going to break down and the rules are going to have no meaning.

But the consequentialist would say no. Because by robbing the rich, Robin Hood was benefitting his community, not himself. And it’s not fair to treat the actions of the Sheriff of Nottingham and Robin Hood as equal crimes.

We’re not going to settle the debate between deontological morality and consequentialism and several of the other branches of moral philosophy - nihilism, utilitarianism, pick your ism of choice. NBC’s The Good Place has been at it for four seasons, and philosophers have been trying to untangle it for much, much longer than that. 

The point is that "fairness" isn't an objective concept. Usually, it's determined by a community's agreed-upon standards. But even then, not everyone in that community will always agree on those standards.

Here’s something really crucial to consider with this Boston example. In 2018, the journalist David Scharfenberg published an article in The Boston Globe titled, “Computers Can Solve Your Problem. You May Not Like the Answer.” 

In speaking to the developers of the bus algorithm, Scharfenberg learned that their algorithm sorted through 1 novemtrigintillion options - a number I did not know existed, but is apparently 1 followed by 120 zeroes. And one of the biggest achievements of the option they chose out of that 1 novemtrigintillion was that it made sure that 94% of high schools had a later start time, which is really important. Teenagers have different circadian rhythms than children and adults, and as a result, the effect of sleep deprivation on teenagers is especially pronounced, and it can be catastrophic to a student's academic performance, mental health and physical development.

Scharfenberg also noted that the current system was disproportionately assigning ideal school start times to residents in whiter, wealthier regions of Boston. By redistributing those start times, the algorithm made getting to school, having a better educational experience and ensuring a good night's sleep more tenable for families in regions that weren't white or affluent.

However, it did require a trade-off for those families that did benefit from the previous system.

I don’t want to pass judgment here at all, or say if one group was absolutely right or wrong. And it’s not my place to say if this was the best of the 1 novemtrigintillion options or not. 

However, I do want to discuss the fact that this emphasizes the value of really communicating with a community on what an algorithm is taking into account to make a fair decision. And that in turn raises two really important questions:

First: Where do conversations of ethics and values fit into the development and regulation of technology?

And second: How do we make sure that policymakers and technologists are effectively ensuring that communities are able to make informed judgments about the technologies that might impact them?

Molly Wright Steenson: The way that decisions have been made by AI researchers or technologists who work on AI related technologies - it's decisions that they make about the design of a thing or a product or a service or something else. Those design decisions are felt by humans. 

Lauren Prastien: If you’re having a little deja vu here, it’s because this is a clip from our interview with Molly Wright Steenson from our episode on the black box. But as we were talking to Professor Steenson about the black box, the subject of fairness came up a lot. In part because these subjects are haunted by the same specters: bias and subjectivity. 

Molly Wright Steenson: That question of fairness I think is really good because it’s also what’s really difficult about, um, about AI. The fact is that we need bias some way or another in our day to day lives. Bias is what keeps me crossing the street safely by determining when I should go and when I should stop. 

Lauren Prastien: And when it comes to determining which biases are just plain wrong or actively harming people, well, that comes down to ethics.

The notion of ethical AI is kind of all the rage right now. For good reason. Like Professor Steenson said, these decisions are being felt by humans. 

But as this culture of ethical AI rises, there is a lot of cynicism around it. At a recent conference at Stanford University’s Human-AI Initiative, Dr. Annette Zimmermann, a political philosopher at Princeton University, presented a slide on “AI Ethics Traps Bingo,” detailing everything from “let’s make a checklist” to “who needs ethics once we have good laws?”

Ethics isn’t just a box to tick, but it can be really tempting to frame it that way to avoid getting into the weeds. Because, as we saw in the debate between deontology and consequentialism, ethics can be kind of time-consuming, circuitous, and complicated. You know, sort of the opposite of the things that good tech likes to advertise itself as: quick, direct, and simple.

Molly Wright Steenson: I think that if you want to attach an ethicist to a project or a startup, then what you’re going to be doing is it’s like, it’s like attaching a post it note to it or an attractive hat. It’s gonna fall off. 

Lauren Prastien: By the way, that’s one of my favorite things anyone has said this season.

Molly Wright Steenson: What you needed to be doing is it needs to be built into the incentives and rewards of, of the systems that we’re building. And that requires a rethinking of how programmers are incentivized. 

If you are just focused on operationalizing everything in Silicon Valley or in a startup, where on earth are you going to put ethics? There’s no room for it. And so what you need instead to do is conceptualize what we do and what we build ethically from the get go. 

Lauren Prastien: When it comes to incorporating ethics into design processes and reframing how programmers approach their work, Professor Steenson referred to a concept called service design.

Molly Wright Steenson: There’s a design discipline called service design, um, which is considering the multiple stakeholders in a, in a design problem, right? So it could be citizens and it could be the team building whatever technology you’re using, but there are probably secondary people involved. There are whole lot of different stakeholders. There are people who will feel the impact of whatever is designed or built. And then there’s a question of how do you design for that, right? 

Lauren Prastien: In their 2018 book, This Is Service Design Doing: Applying Service Design Thinking in the Real World, Adam Lawrence, Jakob Schneider, Marc Stickdorn, and Markus Edgar Hormess propose a set of principles for service design. Among them are that service design needs to be human-centered, or consider the experience of the people affected by the given product, and that it needs to be collaborative, which means that the stakeholders should be actively involved when it comes to the process of design development and implementation. The authors also say that the needs, ideas and values of stakeholders should be researched and enacted in reality, and adapted as the world and context that these design decisions are enacted in shifts.

According to Professor Steenson, context and adaptability are really important elements of addressing issues of bias and fairness. Because as these technologies become more pervasive, the stakes get much higher.

Molly Wright Steenson: One thing that I think can happen if we do our jobs right as designers and for my position as, as a professor is to get students to understand what they’re walking into and what the scope is that they might be addressing. That it isn’t just about making an attractive object or a nice interface.

I think we see the ramifications as these technologies are implemented and implemented at scale. You know, Facebook means one thing when it’s 2005 and people on a few college campuses are using it. It means something else when it has 2.7 billion users.

Lauren Prastien: When it comes to developing these algorithms within a larger cultural and social context - especially when the stakes attached are in flux - there are going to be some trade-offs. It is impossible to please all 2.7 billion users of a social networking platform or the entire Boston Public School system.

So how do we navigate managing these challenging tradeoffs?

[BREAK]

Lauren Prastien: As Professor Steenson noted, scaling up algorithmic decision-making is going to impact the design decisions surrounding those algorithms, and those decisions don’t occur in a vacuum.

So, we consulted with the Block Center’s Chief Ethicist. His name is David Danks, and he’s a professor of philosophy and psychology here at CMU, where his work looks at the ethical and policy implications of autonomous systems and machine learning. 

David Danks: A really important point is the ways in which the ethical issues are changing over time. That it’s not a stable, “ask this question every year and we’re going to be okay about all of it.” 

Lauren Prastien: Remember what Professor Steenson said about how scope and stakes can really impact how technologists need to be thinking about design? It’s why putting on the fancy hat of ethics once doesn’t work. These things are always in flux. The hat is going to fall off.

And as the scope of a lot of these technologies has started to broaden significantly, Professor Danks has seen the ethical landscape of tech shifting as well.

David Danks: I think one set of ethical issues that’s really emerged in the last year or two is a growing realization that we can’t have our cake and eat it too. That many of the choices we’re making when we develop technology and we deploy it in particular communities involve tradeoffs and those trade offs are not technological in nature. They are not necessarily political in nature, they’re ethical in nature. And so we really have to start as people who build, deploy and regulate technology to think about the trade offs that we are imposing on the communities around us and trying to really engage with those communities to figure out whether the trade offs we’re making are the right ones for them rather than paternalistically presupposing that we’re doing the right thing. 

Lauren Prastien: As Professor Danks has mentioned, just presupposing the answers to those questions or just assuming you’re doing right by the people impacted by a given piece of technology can be really harmful. History is full of good intentions going really poorly. And like the principles of consequentialism we went over earlier in this episode emphasize: outcomes matter.

David Danks: It’s absolutely critical that people recognize these impacts and collaborate with people who can help them understand the depth of those impacts in the form of those impacts. Now that requires changes in education. We have to teach people how to ask the right questions. It requires changes in development practices at these software companies. They need to get better at providing tools for their developers to rapidly determine whether they should go talk to somebody. 

Lauren Prastien: And there are a lot of places to find those answers. From consulting with ethicists who have spent a lot of time toiling with these questions to actually asking the communities themselves which values and standards are the most important to them when it comes to making a decision.   

But when it comes to community engagement, a lot of that presupposing to date has come from the fact that it’s actually quite difficult to facilitate these conversations about fairness in the first place.

So how do we ensure these conversations are taking place?

[Music]

In January of 2019, Alexis C. Madrigal published a piece in the Atlantic titled, “How a Feel-Good AI Story Went Wrong in Flint.” I highly recommend you read it, but for the sake of our discussion today, let me give you a quick summary. 

After Flint’s Water Crisis came to light, the City was faced with the task of having to find and remove the lead pipes under people’s homes. The problem was that the city’s records on this were pretty inconsistent, and sometimes just outright wrong. And so, a team of volunteer computer scientists put together a machine learning algorithm to try to figure out which houses had lead pipes. In 2017, this algorithm helped the City locate and replace lead pipes in over 6,000 homes, operating ahead of schedule and under budget.

But the following year, things slowed down significantly. While the algorithm was operating at 70% accuracy in 2017, by the end of 2018 the city had dug up 10,531 properties and located lead pipes at only 1,567 of them. And thousands of homes in Flint still had lead pipes.

This happened because in 2018 the city abandoned the machine learning method under pressure from the residents of Flint, who felt that certain neighborhoods and homes were being overlooked. In other words, the City would come by and dig up the yards of one neighborhood, then completely skip another neighborhood, and then maybe only dig up a few yards in the next neighborhood. Which, if you're one of the people getting skipped, is going to feel really suspicious and unfair.

But the city wasn’t looking at certain homes and neighborhoods because there was a very low probability that these homes and neighborhoods actually had lead pipes. But they didn’t really have a way of effectively and efficiently communicating this to the community. And if a city’s trust has already been fundamentally shaken by something as devastating as a water crisis, it’s probably not going to feel super great about inherently trusting an algorithm being employed by the city, especially with the whole “Schrodinger’s pipes” situation it left them in. Which, to be fair: wouldn’t you also always wonder if your pipes were actually made of lead or not?

Once the city was looking at every home, the project slowed down significantly. In the eastern block of Zone 10, the city dug up hundreds of properties, but not a single one of them had lead pipes. Meaning that while the City dug up those yards, there were actual lead pipes in other neighborhoods still sitting under the ground and potentially leaching into people's water. Like in the City's Fifth Ward, where the algorithm estimated that 80% of houses excavated would have lead, but from January to August 2018, this was the area with the fewest excavations.

And here's what's really upsetting: when the Natural Resources Defense Council ultimately pursued legal action against the City of Flint, it did so because it believed the City had abandoned its priority of lead removal, thereby endangering certain communities like the Fifth Ward. But the City's decision to deviate from the algorithm came down to community distrust of it, and that distrust was based on the fact that the community didn't actually know whether or not the algorithm was making a fair assessment of which properties to dig up.

Since the publication of Madrigal’s article, the City of Flint switched back to the algorithm. And this time, there’s something really important happening: the developers of the pipe detection algorithm are working on the creation of an interpretable, accessible way to show the residents of Flint how the algorithm is making its decisions.

According to Professor Hong, that kind of communication is key when it comes to introducing an algorithmic intervention like that into a community. Right now, his team is working on a project to help facilitate clearer communication about the ways an algorithm is operating such that the people impacted by these algorithms are able to evaluate and critique them from an informed perspective.

Jason Hong: And so what we're trying to do, this is, you know, make machine learning algorithms more understandable and then also probe what people's perceptions of fairness are in lots of different situations. Which one...which mathematical definition is actually closest to people's perceptions of fairness in that specific situation. It might be the case that people think that this mathematical definition over here is actually really good for face recognition, but this mathematical definition over there is better for, uh, say advertisements.

Lauren Prastien: Professor Hong’s work could help make algorithmic implementation more of a community effort, and he hopes that by coming up with better ways of conveying and critiquing algorithmic fairness, we’ll be able to have these complicated ethical discussions about algorithms that do not necessarily seem completely bad or absolutely perfect.

Jason Hong: There’s going to be the people who are developing the systems, the people might be affected by the systems, the people who might be regulating the systems and so on. And um, you have to make sure that all of those people and all those groups actually have their incentives aligned correctly so that we can have much better kinds of outcomes. 

Lauren Prastien: When we have communities engaged in ensuring that the technologies they’re adopting are in line with their values, those constituents can start participating in making those technologies better and more equitable, which brings us to another aspect of Professor Hong’s work.

Jason Hong: There's this thing in cybersecurity known as a bug bounty. The idea is that you want to incentivize people to find security bugs in your software, but to inform you of it rather than trying to exploit it or to sell it to criminals. Apple said that they are offering $1 million to anybody who can hack the iOS right now, or the iPhone. It turns out that several hundreds of companies already had these bug bounties and it's a great way of trying to align incentives of the security researchers.

Lauren Prastien: In April of this year, the cybersecurity company HackerOne announced that in the past 7 years, the members of its community had won more than 50 million dollars in bug bounty cash by reporting over 120,000 vulnerabilities to over 1,300 programs. Which is amazing, considering what could have happened if those vulnerabilities were in the wrong hands.

Looking at the success of bug bounties, Professor Hong was inspired to develop something similar to involve communities in the algorithms impacting their lives: a bias bounty.

Jason Hong: So for example, a lot of face recognition algorithms, it turns out that they are less accurate on people with darker skin and also people who are women. And so I think a lot of people would say, hmm, that doesn't seem very right. Uh, just intuitively without even a formal definition, a lot of people would say that seems not very fair. 

So what we’re trying to do with bias bounty is can we try to incentivize lots of lay people to try to find potential bugs inside of these machine learning algorithms. So this might be a way of trying to find that, for example, this computer vision algorithm just doesn’t work very well for people who are wearing headscarves. So, hey, here’s this algorithm for trying to recognize faces and oh, here’s an example of that one that doesn’t work. Here’s another example of one that doesn’t work and so on.
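
To give a rough sense of how a bias bounty submission might work in practice, here is a small, hypothetical sketch in Python. It is our illustration, not Professor Hong's actual system: a participant reports a group description along with examples they believe the model gets wrong, and the bounty pays out if the model's error rate on those examples is substantially worse than its overall error rate. Every name, type and threshold here is an assumption.

```python
# A hypothetical sketch of scoring a "bias bounty" report -- our
# illustration of the idea, not an implementation of any real program.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class BiasReport:
    group_description: str              # e.g. "people wearing headscarves"
    examples: List[Tuple[object, int]]  # (input, correct_label) pairs

def error_rate(model: Callable, examples: List[Tuple[object, int]]) -> float:
    """Fraction of the submitted examples the model gets wrong."""
    wrong = sum(model(x) != label for x, label in examples)
    return wrong / len(examples)

def bounty_awarded(model: Callable, report: BiasReport,
                   overall_error: float, gap_threshold: float = 0.10) -> bool:
    """Pay out if the model does noticeably worse on the reported group
    than it does overall (threshold chosen arbitrarily here)."""
    return error_rate(model, report.examples) - overall_error > gap_threshold
```

In a real program, the hard parts would be validating the submitted examples and agreeing on the threshold, which is exactly where community standards come back in.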

Lauren Prastien: That sounds pretty awesome, right? You find something wrong with an algorithm, and then you get rewarded. 

Hearing this, I couldn't help thinking of the slew of videos that have hit the internet over the past few years of automatic soap dispensers not detecting black people's hands. In 2015, an attendee of the DragonCon conference in Atlanta, T.J. Fitzpatrick, posted a video of his hand waving under an automatic soap dispenser. No matter how hard he tries, even going so far as to press the sensor with his finger, no soap. So he gets his white friend, Larry, to put his hand under the dispenser and, voilà, there's the soap.

The reason this happens is that soap dispensers like that use near-infrared technology to trigger the release of soap. When a reflective object, like, say, a white hand, goes under the dispenser, the near-infrared light is bounced back toward the sensor, triggering the release of soap. But darker colors absorb light, so less of that light is bounced back toward the sensor.
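
Here's a toy version of that trigger logic, just to show where the design flaw lives. This is our illustration, not the actual firmware of any dispenser, and the threshold value is invented: the point is that a sensor calibrated around a fixed reflectance cutoff simply never fires for hands that bounce back less light.

```python
# A toy model of a near-infrared soap dispenser trigger (illustrative only).

TRIGGER_THRESHOLD = 0.45  # invented: fraction of emitted IR that must return

def dispense_soap(ir_reflectance: float) -> bool:
    """Fire only if enough near-infrared light bounces back to the sensor."""
    return ir_reflectance >= TRIGGER_THRESHOLD

print(dispense_soap(0.70))  # a more reflective (lighter) hand -> True
print(dispense_soap(0.30))  # a more light-absorbing (darker) hand -> False
```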

Which seems like a pretty big design flaw.

Jason Hong: That's also a good reason for why, you know, Silicon Valley for example, needs a lot more diversity in general, because you want to try to minimize those kinds of blind spots that you might have. Uh, but for other researchers and for, you know, other kinds of government systems, I think, you know, the bias bounty that we were just talking about could be effective. Uh, it's definitely something where you can get a lot more people involved and could also be sort of fun. And also it's trying to get people involved with something that's much bigger than any single person, that you are trying to help protect other people or trying to make sure the world is more fair.

Lauren Prastien: Between bringing people into the conversation on AI fairness and incentivizing users of these technologies - from things as high-stakes as bus algorithms to as simple as automatic soap dispensers - to meaningfully critique them, we could see communities more effectively developing and enforcing these standards for their technologies. 

It’s important to say once more that all of this stuff is very new and its scope has been increasing dramatically, and so setting the standards for policy, regulation and accountability is sometimes without precedent. 

So, how do we make sure these algorithms are fair...or, you know, fair enough?

Eugene Leventhal: Today's discussion highlights that some of the biggest challenges we have to overcome in relation to emerging technologies such as AI are not technical in nature. On the contrary, the question of how to bring people together and get them involved in the development of solutions is a much more human one. A big part of this is how organizations and policymakers communicate the components and impacts of an algorithm. There's a lot of uncertainty ahead in finding approaches that benefit everyone, but that is no reason to shy away from these challenges, which are continuously growing in importance.

So where do we go from here Lauren? 

Lauren Prastien: Over the past few weeks, we've dug into the pervasiveness of algorithms, their potential impact on industries and the ways we - as data subjects and as communities - can have a say in how these technologies are enacted. Next week, we're going to look at an area where a lot of these emerging technologies could begin to disrupt long-standing structures, for better or worse: education. Here's a clip from one of our guests next week, Professor Michael D. Smith:

Michael D. Smith: Technology is never going to change higher education, right? Cause we’re the one industry on the planet who doesn’t have to worry about technology coming in and disrupting our business. He says provocatively.

Lauren Prastien: I’m Lauren Prastien, 

Eugene Leventhal: and I’m Eugene Leventhal

Lauren Prastien: and this was Consequential. We’ll see you next week.   

Eugene Leventhal: Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen. 

This episode uses Kade Crockford and Joi Ito’s “Don’t blame the algorithm for doing what Boston school officials asked” and David Scharfenberg’s “Computers Can Solve Your Problem. You May Not Like the Answer,” both published in The Boston Globe. It also refers to Dr. Annette Zimmermann’s “AI Ethics Traps Bingo,” Adam Lawrence, Jakob Schneider, Marc Stickdorn, and Markus Edgar Hormess’s book This Is Service Design Doing: Applying Service Design Thinking in the Real World, and Alexis C. Madrigal’s article in the Atlantic titled, “How a Feel-Good AI Story Went Wrong in Flint.” We have also used a tweet from T.J. Fitzpatrick. 

Lauren Prastien: This is a story about the underdog. About power, about scarcity. This is, I’m not going to lie, one of my favorite stories about the Internet.

In 2018, nineteen-year-old Montero Lamar Hill, better known as Lil Nas X, dropped out of college to devote himself fully to his rap career. He was living with his sister, sleeping only three hours a night, when he bought a beat for 30 dollars on the Internet - an infectious trap reimagining of a Nine Inch Nails song. He wrote a song about a plucky, independent cowboy that oozed Americana and bravado, and recorded it in under an hour at a small recording studio in Atlanta, using their $20 Tuesday discount. Yeah. Between the beat and the hour at the CinCoYo Recording Studio, “Old Town Road” cost 50 dollars, the price of a nice dinner for two.

For reference, in 2011, NPR’s “Planet Money” podcast estimated that Rihanna’s single “Man Down” cost $78,000 to make through traditional channels, and then another million dollars to promote through those channels. But Lil Nas X’s promotion model was a little different from that. He used TikTok, a free social video-sharing app, where “Old Town Road” caught on like wildfire in late 2018. So if we want to talk about an industry disruptor, look no further than Lil Nas X.

In October 2019, "Old Town Road" was awarded a diamond certification by the Recording Industry Association of America, for selling or streaming ten million copies in the United States. And by the way, it achieved this diamond certification faster than any of the 32 other songs to reach the distinction. Again, for reference: The Black Eyed Peas song "I Gotta Feeling," the most requested song of 2009, took nearly ten years to go diamond. "Old Town Road" took ten months.

It's really important to remember that "Old Town Road" isn't a fluke or an exception. It's part of a larger trend in the entertainment industry, where the growth of technologies for content creation and distribution has disrupted the existing power structures that have controlled who gets to make and share that content for years. Lil Nas X is not the first star to come up through the Internet. He's in good company with Justin Bieber, who's arguably YouTube's biggest success story; SoundCloud artists like Post Malone and Halsey; comedians like Bo Burnham, Grace Helbig, and SNL stars Beck Bennett and Kyle Mooney. The bestseller 50 Shades of Grey was published on an online fan-fiction website before shattering distribution records previously held by mainstream thriller writer Dan Brown. The idea for an upcoming heist film starring Rihanna and Lupita Nyong'o was pitched not to a room full of executives, but by a fan on Twitter.

And this year, Alfonso Cuaron’s Roma, a film distributed on Netflix after running in theaters for just three weeks, was nominated for 10 Academy Awards and won 3, sparking controversy among some of the biggest names in Hollywood, like director Steven Spielberg:

Steven Spielberg: Once you commit to a television format, you’re a TV movie. You certainly, if it’s a good show, deserve an Emmy, but not an Oscar. I don’t believe films that are just given token qualifications in a couple of theaters for less than a week should qualify for the Academy Award nomination.

Lauren Prastien: Like it or not, technology is shaking up traditional models of media creation, distribution and consumption, and it has changed the entertainment industry forever. According to researchers here at Carnegie Mellon, what happened to entertainment might happen again to a very different kind of industry: higher education.

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now Lauren.

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelly, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Today, we want to know: is tech going to burst the education bubble?

Michael D. Smith: Technology is never going to change higher education, right? Cause we're the one industry on the planet who doesn't have to worry about technology coming in and disrupting our business. He says provocatively.

Lauren Prastien: That’s Michael D. Smith. He’s a professor of Information Technology and Marketing at Carnegie Mellon. And before he was making that provocative statement about higher ed, he was making a really similar provocative statement about the entertainment industry.

Michael D. Smith: It had been an industry that was incredibly stable for a hundred years. Um, you know, massive shifts in technology, but the same six motion picture studios, the same four record labels, the same five publishers dominated the business. And then all of a sudden, when information technology hit, each of these powerful players lost power very quickly.

Lauren Prastien: Why? Well, in 2016, Professor Smith published a book called Streaming, Sharing, Stealing: Big Data and the Future of Entertainment with Rahul Telang, a professor of Information Systems and Management at Carnegie Mellon, to answer that very question. And what Professors Smith and Telang found was that the longevity of this industry relied on a little something called scarcity.

Michael D. Smith: They were able to maintain their power for the, for a hundred years because they were able to control access to the scarce resources necessary to create content, the scarce resources to distribute content. And then, and then were able to create scarcity around how content got consumed.

Lauren Prastien:  Without scarcity of those resources, anyone can be an actor, a singer, a director, you name it, and broadcast it out onto the Internet. You know, the whole Warholian “in the future, everyone will be famous for 15 minutes” kind of thing.

And that’s really good! Digital platforms and entertainment analytics have allowed for the distribution of voices and stories that have been previously underrepresented in media, providing for the development and elevation of more content in what marketers call “the long-tail.”

Real quick: the term long-tail is attributed to the work of mathematician Benoît Mandelbrot, and was popularized by author and entrepreneur Chris Anderson in a 2004 article in Wired. It comes up in a lot of different fields, but I’m going to focus on the way it’s used in media. Essentially, when it comes to content, you’ve got two categories: the head and the long-tail. Think of the head as the small set of items that appease a big market, so Top 40 music and Hollywood blockbusters. And the long-tail is a larger set of items that each have smaller markets. It’s the niche content.  
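
To put some rough numbers on that head-versus-tail split, here is a quick back-of-the-envelope sketch in Python. It assumes a Zipf-like popularity curve and an invented catalog size, so the exact figures mean nothing on their own, but it shows how a huge number of small niches can add up to a bigger share of attention than the hits, once there's room to carry them.

```python
# A back-of-the-envelope illustration of the head vs. long-tail idea,
# assuming popularity follows a Zipf-like curve (the k-th most popular item
# gets attention proportional to 1/k). Catalog size and the "Top 40" cutoff
# are invented for illustration.

N = 100_000
popularity = [1 / k for k in range(1, N + 1)]
total = sum(popularity)

head_share = sum(popularity[:40]) / total   # the hits
tail_share = sum(popularity[40:]) / total   # everything niche

print(f"head: {head_share:.0%}, long tail: {tail_share:.0%}")
# With these made-up numbers, the niche items collectively outweigh the hits.
```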

In a traditional entertainment model, concentrating mostly on the head made sense. You only had a finite set of timeslots on channels, a finite amount of airtime on a radio station, a finite number of screens at a movie theater, and so on. Which was how those big names were able to maintain control: by channeling their resources into head content that would occupy those finite spaces.

And right now, Professor Smith sees something really similar happening in higher education.

Michael D. Smith: Higher education looks very similar to the entertainment industry in the sense that our power is based on our ability to control scarcity in who gets seats in the classes, scarcity in the professors, um, and then scarcity in, in communicating to the market who’s smart and who’s not by by using a sheet of paper with a stamp on it, uh, that, that you have to pay a quarter of a million dollars and four years of your life to get, um, what would happen if those scarce resources weren’t as scarce anymore.

Lauren Prastien:  We’ve seen what happens in entertainment. Platforms like Netflix and Hulu don’t have to worry about filling slots with the most broadly appealing content. First of all - there are no timeslots. The concept of Primetime doesn’t exist on Netflix. And if I don’t like what I’m watching on Hulu, it’s not like I have to stop giving Hulu my business by changing the channel - because, well, no channels. I can just consume some other content on that platform. And it goes further than that.

Michael D. Smith: When I think about the benefits of on-demand online streaming platforms like Netflix, it's this ability to say, let me understand you as an individual, understand what you've liked in the past, what other people who are similar have liked, and then create a set of programming that's uniquely customized to you, um, versus the broadcast world, which was, let's find a single message that's broadly applicable to everyone. Um, could you do the same thing in the classroom? I think so. Today we think about teaching in terms of a broadcast world. I'm going to come to class with a lecture that I'm hoping is broadly applicable to all 40 of my students, when in fact each of those 40 students is an individual with a unique background, a unique set of knowledge, a unique way they learn.

Lauren Prastien: What Professor Smith is getting at here isn’t a new concept. It’s called mastery learning. It was first proposed by the educational psychologist Benjamin Bloom in 1968, and the basic idea was this: until students were able to demonstrate that they had a level of mastery of a given concept or piece of information, they wouldn’t be able to move forward to learn any subsequent information. This means that instead of solely placing the burden of keeping up on the student, the responsibility is now shared between the student and the instructor, who needs to ensure that they are able to effectively convey the material to everyone in the class, catering to their given learning styles and providing supplemental materials where necessary. Which sounds really, really great.

But in a traditional higher ed model, you only have a semester to teach this material, and you’ve gotta get through it, whether everyone’s on board or not. Again: think head versus long tail.

And time is just one of the forms of scarcity in education.

But like Professor Smith said, higher education has several other forms of scarcity, and they have really wide-reaching implications.

Michael D. Smith: You talk to admissions officers and we all know that the SAT is easily manipulated by the ability. Just, just the ability to get review courses, um, you know, can change, can change your, your score. And if you live in a zip code where it's unlikely you're going to have access to those review courses, you're at a huge disadvantage. Um, that's a problem for the individual student obviously. That's kind of the hope of what we're talking about here. That in that in the same way technology allowed people who had stories to tell to tell those stories, technology might allow people who have unique skills and gifts to contribute to society in a way that we're not allowing them to today.

All of a sudden people had opportunities to tell their stories that didn't, did, didn't have opportunities before and people had opportunities to consume stories. I think the same thing's going to be true in education. This is going to be tough on established universities, but I think it's going to be great for the industry of teaching and learning.

Lauren Prastien: So what is this going to look like - both inside and outside of traditional classrooms? And how do we ensure that students using these more accessible forms of education are protected from misinformation, scams and faulty materials? Stay with us.

Let me tell you a quick story. When I was in college, there was this one really famous professor who taught intro to creative writing. And in order to even take a class with this person, you had to apply for a spot and then you had to stand on a line outside of the registration office on a given date, and be one of the lucky first few in line to put your name on the list for that professor's section of the course. And okay, to be fair, it was a really good class.

The other day, I was scrolling through Instagram, and I saw an ad for a certain platform where you stream online classes from celebrated experts in their fields. Such as, as this ad showed me: the very same in-demand professor from my alma mater. That's right: no more standing on a line in the snow. No more application. And, hey, you don't even have to be a student.

And frankly, I loved seeing this. Because, why not? There might be someone who could really benefit from that class who doesn’t want or need the four years of college that would usually come with it. And while it’s true that you won’t have the experience of having that professor read your work, you will still get her insights on writing short stories. Because in the preview for this class, she was saying a lot of the same stuff she said to us on our first day of her class.

And this is something that’s another really fascinating commonality about the narratives of the entertainment and education industries: the new opportunities that come with streaming video. Be it the development of services like the one I just described or the rise of the massive open online course, or MOOC, streaming video has allowed individuals to take a certain class or learn a specific skill that they may not have had access to based on their geographic location, socioeconomic status or even just their daily schedule.

And learning via streaming video is becoming really common. In its most recent “By The Numbers” report on MOOCs, Class Central found that 20 million new learners signed up for at least one MOOC in 2018. While this was down slightly from 23 million in 2017, the number of paying users of MOOC platforms may have increased. Currently, more than 101 million students are actively enrolled in MOOCs, and over 900 universities around the world, including MIT, Imperial College London, and our very own Carnegie Mellon University, offer at least one MOOC.

But how effective is learning via video? And how do you handle the credentialing of someone who learned a skill from a video in their home versus learning that skill in a classroom and then getting a diploma?

Pedro Ferreira: We know that students love video, video games for education. We know that students love playing games, but are they actually learning? There's some anecdotal evidence that there is some learning involved, but how does that actually work?

Lauren Prastien: That’s Pedro Ferreira. He’s a professor of information systems, engineering and public policy at Carnegie Mellon. And along with Professor Smith, who we spoke to earlier, he’s working on how to make better videos for better education.

Pedro Ferreira: How can we actually understand how the students learn? And then what conditions in the right contexts for what kind of goals? And then with all that information, can we perhaps personalize? And one video works for you, but it doesn't work for the next student and different places of learning and so on. And so far, so we aim in this project to do large scale experimentation to actually find out what works.

Lauren Prastien: This isn’t the first time that Professor Ferreira’s looked at tech - and even videos - in the classroom. In 2014, he released a study on the effect that YouTube had on classrooms in Portugal. The data was from 2006 - so, you know, at this point, YouTube had actually only been around for a year, and it looked really, really different from the YouTube of today. What Professor Ferreira found was that, unfortunately, in the classrooms with access to YouTube, student performance went way, way, way down. So what makes him optimistic about video technology in the classroom now?

Pedro Ferreira: That study, uh, looked at a context that actually happens a lot of the time, which is that the technology kind of parachutes into the school. People are not used to the technology. They don't know what to do with it. And you have a powerful technology like access to the Internet that allows for both learning and distraction at the same time. Guess which one is going to prevail if you haven't actually thought about how you're going to use it productively. So for example, just to give you a counter case, more recently I've been working on a paper where we put smartphones into the classroom. In one condition, the students can use them at will. In another condition, they can still use the smartphones at will, but the teacher actively uses the smartphones for learning. And guess what? That's the condition where grades are actually better. So you can actually introduce technology into the classroom in a positive way, and also in a negative way. It depends on how you combine the use of the technology with what you want to teach.

Lauren Prastien: Educators are no strangers to the risks of parachuting. There are entire databases, message boards and even listicles on the Internet devoted to ed tech failures. And it always boils down to the same formula: the technology is kind of just dropped into the classroom, with very little pedagogical motivation beyond "this thing is new and it's cool."

In this collaboration with Professor Smith, Professor Ferreira wants to see how, when approached with intentionality, videos can enhance student learning experiences and even teach us more about how students learn.

Pedro Ferreira: We're looking at how people actually start videos, stop videos.

Fast forward because I know this stuff, or rewind because I need to learn that again. And so we'll have this almost frame-by-frame understanding of how the students interacted with the video. And we are developing a platform to run experiments where students can actually then comment on particular frames. I want to know that this particular frame in this particular video was the one that spurred all this discussion among students, because then there's a chat room and messages back and forth and so on and so forth.

Lauren Prastien: By allowing students to have an active role in consuming, critiquing and in some cases even creating educational video, Professor Ferreira envisions being able to provide more personalized educational experiences that more effectively cater to students' specific needs. But he does admit that not all video is created equal.

Pedro Ferreira: We are talking about a world where users generate content, generate small videos for education, for entertainment, and so on and so forth. And within that, 90% of the people don't generate anything, 9% generate something and 1% generate good stuff, right? And so we have been inundated by, I would say, low-quality stuff on the Internet these days. Uh, and also good quality, but you need to go through and navigate it, right? And so we need to understand what's actually good for each student at each point, because we can actually widen the gaps if we put students in front of bad material. And so, uh, recommender systems for education I think need to actually be much more precise.

Lauren Prastien: That’s one of the important distinctions between the entertainment industry and higher ed. If I see a bad movie or listen to a bad song on the Internet, there aren’t very serious consequences for that. But that’s not how it goes with education.

Pedro Ferreira: How are we actually going to certify people that went on the Internet to watch this video for so many hours, or binge-watched all this education content, and now all of a sudden are experts? Some of them are, and we just need to actually find out that they are and employ them. And so how the market is formally going to certify these people I think is a huge, a huge opportunity and also a huge hurdle.

Lauren Prastien: Ensuring the legitimacy of video-based education, adapting traditional credentialing frameworks to more holistically address the changing nature of learning, and protecting the rights of users of MOOCs are all really important issues for policymakers to examine. And right now, educators and regulators alike are really torn on how to handle this.

In an article for Inside Higher Ed, the journalist Lindsay McKenzie outlined some of the issues pervading the regulation of distance education. What she found was that, in part, it boiled down to whether these protections need to be enacted on the state level or on the national level. Essentially, because these online programs often operate in multiple states, they are really difficult to legislate. Earlier this year, the US Department of Education convened a panel to set up a new national set of rules for accreditors and providers of online education, and in October, it published a set of regulations based partially on the panel's findings that will take effect in July of 2020. These new provisions have received a lot of criticism, particularly for relaxing a number of regulations and protections related to for-profit online education institutions. While proponents argue that these national regulations streamline otherwise complicated state-level regulations, critics maintain that they will actually substantially weaken protections for students and taxpayers alike.

The debate over how to handle the regulation of online education has been going on for more than a decade, and as the nature and scope of educational videos change, these questions become a lot more complicated. Because while there are institutions and opportunities that open doors for people who normally would not have the chance to take a certain class or learn a particular skill, there are also online education entities that are just flat-out scams.

But students aren’t the only group that will be impacted by online education and the proliferation of tech in the classroom, and they’re not the only group that will need support. As Professor Ferreira found, this is also going to have a profound impact on teachers:

Pedro Ferreira: We've been working with some schools to try and find out how the students react to videos and so on and so forth. And one thing that I have learned already is that when you increasingly rely on these kinds of materials, the role of the teacher changes.

Lauren Prastien: So, what will this mean for teachers? We’ll get into that in just a moment.

Lauren Herckis: Technology affords educators the opportunity to implement their pedagogies in ways that make sense for them with their students. So a shift in the, the ecosystem, the technological ecosystem in which they're doing their teaching means they need to rethink some of the minutiae of teaching that are so important to them.

Lauren Prastien: That’s Lauren Herckis. She’s an anthropologist at Carnegie Mellon University, and her work looks at faculty culture, from the use of technology in higher education to something that I can relate to as a former educator: the fear of looking stupid in front of your students. By the way, she also teaches a course on the Archaeology of Death, which I think is very, very cool, but that’s neither here nor there.

Lauren Herckis: So when we're talking about a student body that's now not just the students sitting in front of you in your classroom, but also the students who are watching and who are participating over, uh, a link at a distance, that requires a shift. But that's not new. Teaching has always required adaptation to new technologies, new contexts and new students.

Lauren Prastien: And sometimes, that adaptation is happening at a different pace for a teacher than it is for a student.

Lauren Herckis: But really any time there's a shift that changes classroom dynamics, there's a growth that's necessary. There are shifts in pedagogy and in understanding of what teaching and learning is for and a recalibration of, well, what are my students looking for? Why are they here? What are their goals? And does this class meet their goals? Um, there's a, there's a recalibration required, um, for how our understanding of content and the process of teaching can work with a new kind of classroom or a new set of students.

Lauren Prastien: And Professor Herckis suggests that when it comes to how to manage the changing nature of education and best prepare teachers for new kinds of students and new technologies, higher education can take some inspiration from healthcare.

Lauren Herckis: But there's been a revolution in medicine over the last several decades in which doctors are connected to one another and to evidence-based networks and organizations that can help provide those kinds of supports, like checklists that can ensure that our current knowledge about what's best, um, is implementable, is accessible. And, and so, yeah, most professors are not in a position to take a class on how to teach, let alone a course of study on how to teach. But providing accessible support can ensure that when they're teaching in this new way or with this new thing, they're doing it in a way that is evidence-based and in line with our best understanding of how to teach effectively or how to use technology to its best advantage. Um, every little bit helps, and accessible supports, um, have made just a tremendous difference in medicine. And there's no reason why we can't produce similar kinds of supports in postsecondary education.

Lauren Prastien: And those supports are really, really important, because the kind of parachuting that Professor Ferreira described earlier is just as taxing on teachers as it is on students.

Lauren Herckis: Big changes are often rolled out on college campuses as though they are a solution to problems that people have been concerned about, and as though they will be the solution for the future, without an acknowledgement that this too will be phased out. How long will this, uh, this set of smartboards, this new learning management system, um, this set of classroom equipment that's being rolled out to every classroom, how long do we really expect this to be the state of the art, and what other things are going to change during that time?

Lauren Prastien: But Professor Herckis agrees that an openness to these new forms of teaching and learning is really important, and it’s an overall good trend in the field of education.

Lauren Herckis: But I think that some of the most powerful technological innovations that are currently revolutionizing education, and that stand to in the future, are basically communication technologies. I think that a person who's living in a place that doesn't have access to a school, but who wants to learn, could use a communication technology to regularly have what is effectively a face-to-face conversation with someone and practice a language that nobody speaks within a hundred miles of where they live. That's pretty powerful.

Lauren Prastien: But how do we make sure that all of this innovation actually reaches the people it needs to? Because, heads up - it might not.

Although we've laid out a number of ways in which education is being changed by technology, it's crucial to keep in mind that a few things are being assumed for all of this to work. For one, if students don't have access to computers or tablets, then they won't be able to access any kind of digital solution. On an even more fundamental level, if students don't have access to broadband, then it becomes practically impossible for them to keep up with those who do.

But first, Eugene, what are some things that policymakers should be thinking about when it comes to EdTech and making sure the necessary infrastructure is in place?

Eugene Leventhal: First, there's the foundational level: making sure that broadband internet is available to everyone. This is something that we'll be hearing about in our next episode.

Once we shift to looking at the solutions getting deployed in schools, we can turn to Professor Lee Branstetter, who you may remember from our first episode as the Head of our Future of Work Initiative, for a potential solution.

Lee Branstetter: I think part of the solution, um, is for government and government-funded entities to do for Ed Tech what the FDA does for drugs: submit it to scientific tests, rigorous scientific tests, um, on, you know, human subjects, in this case students, and be able to help people figure out what works and what doesn't. I mean, we would never imagine a world in which the drug companies can just push whatever the heck they want without any kind of government testing or regulation. But guess what, that's the world of Ed Tech. We can do better.

Eugene Leventhal: And when we look beyond Ed Tech to online courses, there is a lot of disagreement about how exactly to protect the students, teachers and taxpayers who are the stakeholders in this system. Though policymakers don't need to worry about academic institutions in their jurisdictions being completely replaced by online learning environments, it's important to be aware of how technology can help students and schools when applied with thoughtful, evidence-backed intent. Aside from talking about the lack of broadband that many Americans deal with, Lauren, what else are we covering next week?

Lauren Prastien: In exploring some of the issues and potential solutions with broadband, we're going to be getting into a larger discussion of what's being called the rural-urban divide. The term has been coming up in the news a lot lately, and so we want to know: is there such a thing as a rural-urban divide? And how can emerging technologies complement the values, character and industries of rural communities, rather than attempt to overwrite them? We'll also talk about the role that universities can play in bridging resource gaps, and how physical infrastructure and mobility play a role in resource divides as well.

Here’s a preview of our conversation next week with Karen Lightman, the Executive Director of Metro21: Smart Cities Institute at Carnegie Mellon:

Karen Lightman: And we live in an area where we have access to pretty good high-speed broadband. And there's a promise with 5G of going even faster. But there's a good chunk of the, of the United States, the world, that doesn't have access to high-speed broadband. So that means kids can't do their homework. Right?

Lauren Prastien: I’m Lauren Prastien,

Eugene Leventhal: and I’m Eugene Leventhal

Lauren Prastien: and this was Consequential. We’ll see you next week.  

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen.

This episode references an episode of NPR’s Planet Money podcast, Class Central’s “By The Numbers: MOOCs in 2018” report, and Lindsay McKenzie’s 2019 article “Rift Over State Reciprocity Rules” from Inside Higher Ed.

Lauren Prastien: I want you to think about your favorite piece of media about the future. It can be a movie, a book, a television show. A graphic novel. It can even be a song.

Where does it take place? 

It’s a city, isn’t it? 

If you think about just about any piece of pop culture about the future, particularly anything that's come out in the last 25 years, be it utopian or dystopian, especially if it's about robots or artificial intelligence, it takes place in a city. 1984, Fahrenheit 451, The Matrix, Blade Runner, Her, Altered Carbon, Gattaca, Minority Report, The Jetsons, I, Robot, Metropolis. Even The Flaming Lips song "Yoshimi Battles The Pink Robots," one of my personal favorite pieces of media about the future, is guilty of this. After all, Yoshimi literally works for the city.

And if there is a rural setting, it's usually an indication of a situation of extreme poverty or a nostalgic attachment to the past. So, sure, The Hunger Games doesn't take place exclusively in the Capitol, but that is where all the prosperity and technology is. And Westworld is set in a rural playground visited by tourists coded as city dwellers and run by people who live in a nearby city. Face it, when we think about the future, we picture cities. And that idea is really, really problematic.

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host.

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now Lauren. 

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelley, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Today, we want to know: Is there a rural-urban divide? And if there is, how can technology help us overcome it, rather than widen it?

So, the rural-urban divide is a blanket term intended to encapsulate the political, economic and cultural differences between the rural and urban populations of the United States. Its narrative is relatively simple: as U.S. cities flourish and more people move there, rural areas languish. In particular, much of the discourse on this divide pertains to widening gaps in employment and income.

Real quick, here are some important stats to keep in mind: Today, nearly half of the world's population lives in urban areas, and by 2050, the United Nations expects this to increase to 66%. And while the U.S. Census Bureau found in 2015 that poverty rates are higher in urban areas than rural areas, the median household income for rural households was about 4 percent lower than the median for urban households. Additionally, the United States Department of Agriculture's Economic Research Service found that since 2012, the rural unemployment rate has exceeded the urban unemployment rate and prime-age labor-force participation rates have remained depressed in rural areas.

So is there a rural-urban divide?

Eugene and I spoke to an expert on this very subject here at the Block Center, to learn how real this sense of a divide is, what role technology is playing in this divide and what today’s discourse on these issues might be leaving out. 

Richard Stafford: I married a coal miner’s daughter, by the way, and I’m still married to her. So, uh, the fact is that when she was born and her dad was a coal miner, you would look at a County like Fayette which is here in Southwestern Pennsylvania, right next to Greene County where I grew up. And there were probably 10, 12,000 miners at work. Today, there’s probably 400. They’re mining almost the same amount. What happened? 

Lauren Prastien: That’s Richard Stafford. He’s a Distinguished Service Professor at Carnegie Mellon. Prior to coming to CMU, Professor Stafford served as the Chief Executive Officer for the Allegheny Conference on Community Development. And now, using his civic experience, Professor Stafford is looking at how public policy can respond to the societal consequences of technological change.

Richard Stafford: Automation. That’s what happened. Uh, so that whole impact of those jobs is still being felt and still being resented. 

The feeling that we're being left behind is much stronger than you might suspect. And that characterizes, I think, the rural small town feeling: looking at the city and thinking, yeah, well, everybody cares about the city and they get all the jobs and, you know, we're being left behind. And it's aggravated in our region, in the Pittsburgh region. If you look at the rural areas, what happened there and what prosperity was there had to do with these basic industries that have disappeared. Steel's the obvious, biggest example. Steel was dependent on coal. Where did coal come from? Coal came from the rural areas. Okay. What happened to coal?

Lauren Prastien: In 2016, the International Institute for Sustainable Development released a report titled "Mining A Mirage? Reassessing the shared-value paradigm in light of technological advances in the mining sector." It's a mouthful, but let me give you the bottom line: The report found that a lot of the coal mining process has already been automated, from the trucks that haul the coal to the GIS systems that power mine surveying, and in the next 10 to 15 years, automation is likely to replace anywhere from 40 to 80 percent of workers in a coal mine. And while automating a lot of these processes and even moving away from using coal as an energy source does have positive side effects - it's more environmentally friendly and it's also safer for employees working on these sites - it does mean that many regions will lose an industry that is a cornerstone of their economy.

And this goes further than coal. A lot of other rural industries have seen massive employee displacement as a result of artificial intelligence and enhanced automation. According to the U.S. Census Bureau’s American Community Survey, these are some of the top sectors filled by the civilian labor force in rural counties: manufacturing, retail trade, agriculture, mining, construction, transportation, warehousing and utilities. Yeah. A lot of the sectors that come up when we talk about where automation is going to take place.

But employment is just one facet of this issue.

Richard Stafford: When I think of the rural-urban divide, I think of the accessibility to healthcare to education to the kinds of basics in life that there’s a big difference in. So if you think about transportation, for example, in some rural areas, and you think about autonomous vehicles, well, the future of autonomous vehicles is going to be largely dependent on the ability of the communication system to communicate with that vehicle. Right now in Greene County, you can’t get cell phone connection in Western Greene County. So let alone high-speed Internet for the kids that, if they were going to join the future workforce, it would be nice if they could do their homework on the Internet, like all the kids in the urban area. 

Lauren Prastien: But it’s also crucial to take into account that these problems aren’t exclusively rural issues.

Richard Stafford: Now, having said all that, by the way, there's huge similarities between rural and small town disadvantages as technology progresses and areas of the city that have the same problem, right? Whether it's Internet access or health care access or whatever. So in a lot of ways, while there is a rural-urban divide, I think we need to be careful about thinking of it as too distinct. We need to think, in a sense, about whatever an area needs to prosper. We need to think of the haves and have-nots.

Lauren Prastien: And according to Professor Stafford, a really great place to direct that focus is broadband Internet.

Richard Stafford: If you think of high-speed Internet access as a utility, which is what it should be thought of today - we went through electricity becoming a utility, right? If you look historically, there's a huge disadvantage right now in rural areas. And it's a very simple thing to understand, because where telecommunications companies can make money is density. Rural areas by definition aren't that dense. So how do we overcome that? How do we find a way, from a public policy standpoint, to redistribute in some acceptable way - because redistribution scares people that have! - in some acceptable way, as we did with electricity? So that the benefits can be there for those families and those kids that are growing up in rural areas to be part of the prosperity that supposedly AI and technological development will provide. It's a big issue and it will only be solved in a public policy forum. It won't be just a matter of leaving it to the free market.

Lauren Prastien: So what would that look like? Stay with us.

Hey. Remember this? [Dial-up modem sound effect]

Karen Lightman: I remember that sound and waiting and the anticipation of getting online and then you're about to load an image. And I mean it's...I get that. And what's really amazing to me is now we have Fios and Xfinity and, you know, all this. And we live in an area where we have access to pretty good high-speed broadband. And there's a promise with 5G of going even faster.

Lauren Prastien: That’s Karen Lightman. She’s the Executive Director of the Metro21: Smart Cities Institute at Carnegie Mellon, where her work looks at the use of connected, intelligent infrastructural technologies for improving sustainability, safety and quality of life. And a huge part of that depends on access to broadband.

Karen Lightman: But there's a good chunk of the United States, the world that doesn't have access to high speed broadband.

Lauren Prastien: In a 2018 study, Microsoft found that 162.8 million people in the United States lack regular access to broadband Internet. And according to the FCC, the option of using broadband isn't even available to 24.7 million Americans, more than 19 million of whom are based in rural communities. For some perspective: imagine if everyone in the entire State of New York couldn't get on the Internet.

And this has greater implications than not being able to stream Netflix. According to the US Bureau of Labor Statistics, the highest unemployment rates in the US are frequently associated with counties with the lowest availability of broadband Internet.

Karen Lightman: So that means people can't telecommute. That means that if there is, you know, an emergency where they have to get information out, like there's a flood alert in a low-lying area, that means people are not getting that information, because a lot of it is digital and there's an assumption there. And so we could do better. I think that's the bottom line.

Lauren Prastien: According to the most recent data from the FCC’s Broadband Task Force, 70% of teachers in the United States assign homework that requires broadband access to complete. And by the way, this data is from 2009. It’s probably much, much higher today. So why is this happening?

Karen Lightman: When we had infrastructure investments in, like, the creation of highways, right? So there was a huge investment, and the government was deciding that highways are really important, right? But the whole backstory on why that was there, you know, those military, uh, implications. But there was an investment by the federal government saying that this kind of infrastructure of connecting communities is important and we're going to make that investment. We have lights, right? So we had, you know, investment in electricity so that we could have lights, we could have, you know, electricity in our homes. Phones. So there were utilities, public utilities, and yet with broadband it's this fuzzy area that's sort of regulated but sort of not. And it's mainly driven by investments by privately-held mega companies, right? I'm not going to name names, but they know who they are, and their focus is profit, right? And it's not a public utility, so it's not like water and electricity, but maybe it should be considered that way.

Lauren Prastien: But what happens when we make broadband a public utility? Well, look no further than Chattanooga. 

Before Chattanooga made high-speed Internet a public utility, its Downtown area had pretty good access to privately-owned broadband - so think Comcast and AT&T - but once you left the Downtown area, and particularly once you got into the more rural areas surrounding Chattanooga, there was really, really spotty or just completely non-existent broadband access. And because these were really small markets, the private broadband companies that covered the Greater Chattanooga area didn’t consider building out the infrastructure in these areas to be a worthwhile investment. Think the head and the long-tail. 

Karen Lightman: And so they made the investment, they put in the fiber and they own it. 

Lauren Prastien: So it’s important to note that in 1938, the Tennessee Legislature set up the Electric Power Board of Chattanooga, or EPB, as an independent entity to provide electricity to the Greater Chattanooga area. So already, Chattanooga’s electricity was a publicly-owned utility.  

And so in 2009, the EPB widened its focus to broadband. With the help of a $111 million federal stimulus grant from the Department of Energy and a $169 million loan from the Chattanooga City Council, the EPB developed its own smart grid. And private Internet providers panicked. They sued the City of Chattanooga four times, and even tried introducing more competitive packages to dissuade Chattanoogans from using the publicly-owned broadband. But by 2010, Chattanooga’s residential symmetrical broadband Internet was operating at 1 gigabit per second, which was, at the time, 200 times faster than the national average.

Karen Lightman: Understanding the ROI, so looking at the examples, like what happened in Chattanooga is seeing that if you make a big investment, it's not trivial, then there is an economic development boost to a community. I think that's where the argument needs to be made. 

Lauren Prastien: And the ROI was incredible. By adopting the first citywide gigabit-speed broadband in not just the United States, but the entire Western Hemisphere, Chattanooga spurred economic growth in not just the city itself, but the entirety of Hamilton County. According to an independent study by researchers at the University of Tennessee at Chattanooga and Oklahoma State University, EPB’s smart grid created and maintained an extra 3,950 jobs in its first 5 years of implementation. In a 2016 article in VICE, the journalist Jason Koebler called Chattanooga “The City That Was Saved by the Internet.” Because not only did the City rather swiftly break even on its investment, it also saw a strong reduction in unemployment and got a new identity as The Gig City, incentivizing the growth of new businesses and attracting younger residents to a region that was once seeing a pretty serious exodus.  

Currently, 82 cities and towns in the United States have government-owned, fiber-based broadband Internet. And while there have been some areas that have experimented with municipal broadband and not seen the success of Chattanooga, the fact is that the lack of broadband availability is keeping people from participating in not just new educational or work opportunities, but very basic aspects of our increasingly connected world. And right now, there’s very little national oversight on making sure that all Americans have access to a service that is becoming as vital a utility as electricity. 

Karen Lightman: It's like the wild west and it's not consistent. Like I said, the federal government's not playing a role.

Lauren Prastien: Lately, we have seen the development of some legislative solutions to address this on the federal level, like the ACCESS BROADBAND Act. ACCESS BROADBAND is an acronym, and it’s even better than the DASHBOARD Act’s acronym we talked about with reference to data subject rights. So get ready. Actually, Eugene. Get over here. Come read this.

Eugene Leventhal: All right. Here goes. ACCESS BROADBAND stands for Advancing Critical Connectivity Expands Service, Small Business Resources, Opportunities, Access and Data Based on Assessed Need and Demand.

Lauren Prastien: Quick aside: Whoever has been coming up with these really amazing and really ambitious acronyms for these bills, we appreciate your hard work.

But anyway - the bill aims to promote broadband access in underserved areas, particularly rural areas that don’t quite fit the “head” category in the “head and long-tail” framework. It would also establish an Office of Internet Connectivity and Growth at the National Telecommunications and Information Administration, which would help to provide broadband access for small businesses and local communities, as well as provide a more efficient process to allow small business and local governments to apply for federal broadband assistance. So far, it passed the House of Representatives back in May, and it’s been received by the Senate and referred to the Committee on Commerce, Science and Transportation. 

The federal assistance aspect of this bill could be really promising. Because the fact is that high-speed broadband requires major investments in infrastructure for 5G technology to work, and that’s often really expensive. 

Karen Lightman: It's not just laying down a new piece of fiber through a conduit pipe. It's also basically changing the kind of cell towers that we now see on buildings. In order to have that kind of Internet-of-things capability, they have to be lower to the ground, they are large for the most part, and they are a shorter distance from each other.

Lauren Prastien: So, it’s a big investment, which can be discouraging to both the public sector and private companies alike. In addition to increasing federal support for these initiatives, Professor Lightman has also seen a lot of value in using smaller-scale deployments, from both the public sector and from private companies, to gain public trust in a given project and troubleshoot potential issues before major investments are made. 

Karen Lightman: Pittsburgh is a neat city because we've got all these bridges. We've got a lot of tunnels, we've got a lot of hills and valleys. The joke is that Pittsburgh is a great place for wireless to die, so it's a great test bed.

Lauren Prastien: And when it comes to facilitating these deployments, she sees a lot of value in the role of universities.

Karen Lightman: So that's where the role of a university, and the role of Metro21, is to do the deployment of research and development. And do a beta, do a time-bound pilot project with a beginning and an end and a way to measure it, and to see yes, this works, or no, we need to go back and tweak it. Or this was the worst idea ever! And I think what's also unique about the work that we do, and this is pretty unique to Metro21, is that we really care about the people that it's affecting. So we have social decision scientists that work alongside our work. We have designers, we have economists. So we're thinking about the unintended consequences as well as the intended. And that's where a university has a really nice sweet spot in that area.

Lauren Prastien: With these pilot projects, Professor Lightman aims to keep the community informed and involved as new technologies are implemented in their infrastructure. Which is great, when we think about some of the problems that come along when a community isn't kept in the loop, like we saw in episode 4.

And while broadband access is a really vital part of being able to ensure that people aren't being left behind by the digitization of certain sectors - like what we've seen in education - the issue doesn't just boil down to broadband. And broadband isn't a silver bullet that's going to solve everything. But the implementation of a more equitable broadband infrastructure could help close some of the more sector-specific gaps that are widening between rural and urban areas - or between haves and have-nots regardless of region - and are contributing to this narrative of a rural-urban divide. Because often, these issues are all linked in really complex, really inextricable ways.

Like how broadband access factors into healthcare. And right now, in rural regions, access to quality healthcare is becoming more and more difficult. According to the US Department of Health and Human Services’ Health Resources and Services Administration, of the more than 7,000 regions in the United States with a shortage of healthcare professionals, 60% are rural areas. And while Professor Lightman has seen some promising breakthroughs in telemedicine and sensor technology to fill some of these gaps, you guessed it: they need high-speed Internet to work.

Karen Lightman: There's also healthcare, and the fact that hospitals are closing in a lot of these communities. So we have the technology for telemedicine. I mean, telemedicine is so amazing right now, but you need internet, you need high-speed, reliable internet. If you're having a face-to-face conversation with a doctor, or maybe you have sensor technology to help with blood pressure or diabetes, that information can be, you know, sent over the Internet, remotely even. And that technology exists. But if there's no secure Internet, it doesn't work.

Lauren Prastien: There are a lot of other issues at play here, from the impact of automation on a region's dominant industry to the lack of availability of the services that would normally, in a more urbanized setting, help people transition to a new career when that industry goes away.

But addressing this is not a matter of just packing it all up and saying everyone needs to move to a city. Because not everyone wants to live in the city. And that should be fine. It should be an option. Because by the way, cities don’t automatically equal prosperity. In his lecture at the annual meeting of the American Economic Association earlier this year, MIT economist David H. Autor found that not only do cities not provide the kinds of middle-skill jobs for workers without degrees that they once did, they’re also only good places for as few as one in three people to be able to live and work.

So when you picture the future, I don’t want you to just picture cities. And when we talk about extending these opportunities more equitably into rural areas, I also don’t want you to think of this as cities bestowing these developments on rural communities. Because, one, come on. It’s condescending. And two, that’s just not how it’s playing out. 

A lot of really incredible breakthroughs and a lot of thought leadership related to using tech for social good and preparing for the future of work aren’t coming out of cities.

I’ll give you a great example in just a moment, so stay with us.

So, Greene County has come up a few times in this episode. And real quick, if you're not familiar with it, let me paint a picture for you. The cornerstone of the Keystone State, it's the southwestern-most county in Pennsylvania, sitting right on our border with West Virginia. It's about 578 square miles. In case you're wondering, that's double the size of Chicago, or a little bit bigger than one Los Angeles. The city, not the county. It has about fifteen public schools spread out over five districts, three libraries, and a small, county-owned airport.

Also, it’s home to the Greene River Trail along Tenmile Creek which is, in my humble opinion, a really pretty place to hike.

Its county seat, Waynesburg, is home to Waynesburg University. It’s a small, private university with a student population of around 1,800 undergraduates and 700 graduate students. Some of its most popular majors are nursing, business administration and criminal justice. And under the leadership of its President, Douglas G. Lee, the University has received a lot of attention for the economic outcomes and social mobility of its graduates from institutions like Brookings and US News and World Report.

We spoke to President Lee about the ways that automation, artificial intelligence and other emerging technologies are going to change non-urban areas, and about how those areas - and particularly the universities in them - can respond in such a way that they feel the benefits of these changes, rather than the drawbacks.

Douglas Lee: I grew up in this area and I saw what happened to many of the steelworkers when they lost their jobs and how hard it was for those folks to adapt and retrain into another position. And you see that in the coal industry as well today. So, linking this type of education to the workforce of the future I think is going to be critical. 

Lauren Prastien: And according to President Lee, a huge part of that reframing is also adapting how we look at the role of educational institutions in the lives of their students, alumni and the communities they occupy. 

Douglas Lee: So it's really about educating all of these young people to be lifelong learners, and encouraging that and building that into a culture. And I think higher education needs to play a significant role in that, and looking for those ways that you continue to grow and develop that concept as well, because it's not just four years or six years or seven years; year in, year out, it's a lifetime now. And engaging in that lifetime experience, whether it's with your alumni, whether it's members of the community or, in a larger sense, people that have an interest in the specific mission and purpose of what you're educating at your university, and plugging them into that.

Lauren Prastien: And this is going to factor a lot into our new understanding of how and when we acquire skills. Because regardless, where you choose to live shouldn't preclude you from prosperity. A lot of really incredible breakthroughs in technology are moving us into the future, and entire populations don't deserve to be left behind. So, Eugene, how do we bridge these kinds of divides, be they rural and urban or have and have-not?

Eugene Leventhal: The first step that should be taken is building community, especially with those who live in the areas that do not get the same access to resources or infrastructure. In order to make sure any solutions are as effective as possible, the individuals who are most affected need to be included in the decision-making process in some capacity. Focusing on community can help us understand how to meaningfully bring residents into the conversation.

We heard about the importance of broadband access and the comparison to how people didn't have easy access to electricity until it became a public utility. It is important to seriously consider whether the role of high-speed internet qualifies it for becoming a public utility. If not, then it's still crucial to have actionable plans for how broadband can be provided to all citizens. Ensuring high-speed internet for everyone would help raise the quality of education that students receive as well.

Lauren Prastien: Next week, we will be discussing the impact of automation on the creation and elimination of jobs and industries, with a focus on how policymakers, educational institutions and organized labor can prepare potentially displaced workers for new opportunities. We'll also discuss the "overqualification trap" and how the Fourth Industrial Revolution is changing hiring and credentialing processes. Here's a preview of our conversation with one of our guests next week, Liz Shuler, the Secretary-Treasurer of the AFL-CIO:

Liz Shuler: How do we have a conversation at the federal level to know what the needs are going to be, and approach it in a systematic way so that our planning and our policymaking can mirror what the needs of the future workforce are going to be, and not have pockets of conversation or innovation happening in a vacuum in different places? And so the labor movement can be a good connective tissue in that regard.

Lauren Prastien: I’m Lauren Prastien, 

Eugene Leventhal: and I’m Eugene Leventhal

Lauren Prastien: and this was Consequential. We’ll see you next week.   

Eugene Leventhal: Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen.

This episode references the 2014 revision of the United Nations’ World Urbanization Prospects report, the US Census Bureau’s 2014 American Community Survey, a 2019 report from the United States Department of Agriculture’s Economic Research Service on Rural Employment and Unemployment, a 2016 report from the International Institute for Sustainable Development on automation in mining, Microsoft’s 2018 report “The rural broadband divide: An urgent national problem that we can solve,” a 2009 report from the FCC’s Broadband Taskforce, 2019 employment data from the US Bureau of Labor Statistics, Jason Koebler’s 2016 article “The City That Was Saved by the Internet” for VICE, 2018 data from HRSA on healthcare shortage areas and David H. Autor’s lecture at this year’s American Economic Association. The sound effect used in this episode was from Orange Free Sounds. 

Lauren Prastien: In 2018, the World Economic Forum released a report saying that by 2022, automation is expected to eliminate 75 million jobs. But, it’s also expected to create another 133 million new jobs. 

So what would that look like? How can a technology that replaces a job create even more jobs? Let me use a really simplified example of how disruption can turn into job creation.

In the 2005 movie Charlie and the Chocolate Factory, Charlie Bucket’s father works at a toothpaste factory where his sole responsibility is screwing the caps onto tubes of toothpaste. That is, until a candy bar sweepstakes absolutely bamboozles the local economy.  

Excerpt: The upswing in candy sales had led to a rise in cavities, which led to a rise in toothpaste sales. With the extra money, the factory had decided to modernize, eliminating Mr. Bucket’s job.

Lauren Prastien: There’s that first part of the prediction. Mr. Bucket gets automated out of his job. But don’t worry, because a little later in the movie, Charlie’s father gets a better job at the toothpaste factory...repairing the machine that had replaced him. 

It would be irresponsible - and also flat-out wrong - for me to say that this process is absolutely organic. You need interventions in place to make sure that the individuals displaced by technology are able to find new, meaningful, well-paying occupations. But what are those interventions? And who is responsible for enacting them?

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests as well as of your other host. 

Eugene Leventhal: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now Lauren. 

Lauren Prastien: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelley, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

Today, we’re talking about skills, displacement and overqualification. So stay with us.

So in the case of Charlie and the Chocolate Factory, automation took on a routinized task like screwing on toothpaste caps, and gave Mr. Bucket a more interesting and better-paying job. And historically, we’ve actually seen things like this happen. The term computer used to refer to a person, not a device. And while we no longer need people working through the minutiae of calculations that we’ve now more or less automated, the Bureau of Labor Statistics says that the computing and information technology sector is one of the fastest-growing industries in terms of employment today. 

Or think of it this way: The invention of the alarm clock meant that there was no longer a need for a really unfortunately-named job called a “knocker-upper,” which was a person who was paid to go around the neighborhood, banging on doors to wake people up. So, no more “knocker-upper,” but now you need people designing alarm clocks, assembling them, selling them, repairing them, you get the idea.

And sometimes, technological disruption made new jobs that were a lot, lot safer. Back before phlebotomists were a thing and when blood-letting was a lot more common, you needed someone to go out and find leeches to draw blood. Leech collector was a real job, and, uh...no thanks.

But as technology took away these jobs, the skills required for the jobs it created in their place weren’t always the same. A person who was really good at banging on doors and waking people up might not be that great at engineering alarm clocks. Someone with the stomach for wading into rivers to collect leeches might not have the skills or even the desire to draw blood at a hospital.

So what are skills? According to Merriam-Webster, a skill is “a learned power of doing something competently.”

Napoleon Dynamite: You know, like nunchuck skills, bow hunting skills, computer hacking skills. Girls only want boyfriends who have great skills.

Lauren Prastien: Napoleon Dynamite’s onto something here. Our skills are how we demonstrate our suitability for a given position, because they’re what we need to fulfill the obligations of that position, be it the nunchuck skills that Napoleon Dynamite needs in order to be a good boyfriend, or, say, the technical proficiency Mr. Bucket needs to repair the robot that originally replaced him. Because you need,  in the words of the Beastie Boys, “the skills to pay the bills.” 

Skills are something that we usually hone over the course of our career. In a traditional model, you’ll gain the basis of those skills through your education, be it in K-12 and post-secondary institutions, at a trade school or through a working apprenticeship. You’ll usually then have some piece of documentation that demonstrates that you learned those skills and you’re proficient in using them: a diploma, a certification, a set of test scores, or even a letter from the person you apprenticed under saying that you can, indeed, competently perform the skills you learned during your apprenticeship. 

Liam Neeson: I can tell you I don't have money. But what I do have are a very particular set of skills, skills I have acquired over a very long career, skills that make me a nightmare for people like you.

Lauren Prastien: And like Liam Neeson’s character in Taken just said, you’ll further hone those skills throughout your life, be it over the course of your career or even in your spare time. But what happens when a skill you’ve gone to school for and then used throughout a career becomes automated? Last week, we mentioned that automation has infiltrated industries like mining and agriculture, often making these industries a lot safer and a lot less environmentally harmful. But where does that leave the workers who have been displaced?

According to the World Economic Forum, transitioning 95% of at-risk workers in the United States into new jobs through reskilling could cost more than $34 billion. Lately, we’ve seen some efforts in the private sector to support reskilling in anticipation of the greater impacts of artificial intelligence and advanced automation. In 2018, AT&T began a $1 billion “Future Ready” initiative in collaboration with online educational platforms and MOOCs, which you may remember from our fifth episode are massive open online courses, in order to provide its workforce with more competitive and relevant skills as technology transforms the telecommunications industry. Earlier this year, Amazon announced a $700 million “Upskilling 2025” initiative to retrain a third of its workforce for more technically-oriented roles in IT and software engineering. And Salesforce has rolled out a suite of initiatives over the past few years focused on reskilling and upskilling, such as the “Vetforce” job training and career accelerator program for military service members, veterans and their spouses. But the World Economic Forum has found that the private sector could only profitably reskill about 25% of the workers at-risk for displacement, which indicates that this isn’t something we could rely exclusively on the private sector to handle. According to Borge Brende, President of the World Economic Forum:

Eugene Leventhal: If businesses work together to create economies of scale, they could collectively reskill 45% of at-risk workers. If governments join this effort, they could reskill as many as 77% of all at-risk workers, while benefiting from returns on investment in the form of increased tax returns and lower social costs including unemployment compensation. When businesses can’t profitably cover costs and governments can’t provide the solutions alone, it becomes imperative to turn to public-private partnerships that lower costs and provide concrete social benefits and actionable solutions for workers.

Lauren Prastien: And when we talk about training the future workforce and reskilling a displaced workforce, it’s important to understand the values, needs and concerns of the workers themselves. And as the voice of organized labor, that’s where unions come in. 

So Eugene and I spoke to the American Federation of Labor and Congress of Industrial Organizations, or AFL-CIO. As the largest federation of unions in the United States, the AFL-CIO represents more than 12 million workers across a variety of sectors, from teachers to steelworkers to nurses to miners to actors. This fall, the AFL-CIO announced a historic partnership with Carnegie Mellon University to investigate how to reshape the future of work to benefit all working people. 

In this spirit, we sat down with Craig Becker, the AFL-CIO’s General Counsel, and Liz Shuler, the Secretary-Treasurer of the AFL-CIO, to understand where policymakers need to focus their efforts in anticipation of worker displacement and in powering the reskilling efforts necessary to keep people meaningfully, gainfully employed. Because according to Liz Shuler, automation isn’t necessarily bad for workers. 

Liz Shuler: We know that the forecasts are dire, but at the same time we believe it's going to be about enhancing more than it is replacing. A lot of new jobs will emerge, but some of the older jobs actually will evolve, using technology and freeing up humans to actually enhance their skills and use their judgment.

Lauren Prastien: Both Becker and Shuler agree that the anxiety over the huge impact that enhanced automation and artificial intelligence could have is really more about the ways in which these technologies will displace employees and degrade the value of their work than about automation in and of itself. 

Craig Becker: If you want workers to embrace change and play a positive role in innovation, they have to have a certain degree of security. They can't fear that if they assist in innovation, it's gonna lead to the loss of their jobs or the downgrading of their skills or the degradation of their work. So a certain level of security - so policies like unemployment policy, a robust minimum wage - so that workers understand that if they are displaced, they'll get a new job and it won't be a worse job.

Lauren Prastien: And when we think about having workers embrace these changes and ensuring that these new positions suit the needs and desires of the workforce, you need to be actually getting input from the workforce itself. And that’s where unions can play a powerful role in these initiatives.

Liz Shuler: How do we have a conversation at the federal level to know what the needs are going to be, and approach it in a systematic way so that our planning and our policymaking can mirror what the needs of the future workforce are going to be, and not have pockets of conversation or innovation happening in a vacuum in different places? And so the labor movement can be a good connective tissue in that regard.

Lauren Prastien: And part of the reason why Shuler believes that unions are uniquely positioned to be that connective tissue is due to their own history as a resource for worker training.

Liz Shuler: We've handled this before. We've worked through it and really the labor movement sweet spot is in helping workers transition and ladder up to better careers. And so we are actually the second largest provider of training in the country behind the U.S. military. 

We're sort of the original platform for upskilling. And so I think the labor movement has a real opportunity to be a center of gravity for all working people who are looking to make transitions in their careers as technology evolves. 

Lauren Prastien: In addition to providing reskilling opportunities themselves, organized labor has also made reskilling a priority in collective bargaining agreements with the industries it works with. In 2018, the AFL-CIO’s Culinary Workers Union in Las Vegas made reskilling a central factor in its negotiations with the casinos, requiring management to communicate with the union before implementing a new technology.

The same thing happened with the 2018 contracts resulting from the bargaining agreements with Marriott Hotels. Their ensuing agreement made sure that the union receives 165 days’ notice from the company any time it plans to automate a given process, allowing for worker input on the use of technology in this way. Additionally, and perhaps more powerfully, all the workers affected by the implementation of a new technology are then entitled to retraining to either work with this new technology or to take on a new position within Marriott.

However, according to a Bloomberg Law survey, only 3 percent of employers’ contracts with workers included language on worker retraining programs, down sharply from 20 percent in 2011. Which, again, drives home the point that private sector efforts may not be enough to maintain a skilled and gainfully employed workforce. 

Craig Becker: I think what's paradoxical here, or perhaps perverse here, is that while there's a wide recognition of an increased need for training and upskilling and continuing education, both public investment and employer investment in workforce training are down. And that I think largely has to do with the changing nature of the employment relationship. That is, employers see their workforce as turning over much faster than it did in the past and therefore don't see the need to invest in their workforce.

Lauren Prastien: So when it comes to investing in the workforce, Becker and Shuler believe that a good lens is that of investment in infrastructure.

Craig Becker: There's a wide recognition by both employers and unions and some public officials that there hasn't been sufficient investment in physical infrastructure, roads, pipes, water supplies, and in digital infrastructure. I think the same thing: if we're going to be successful in other sectors, we need to have those complementary forms of investment. Digital infrastructure is obviously now key to the operation of much of that physical infrastructure, whether it's a factory or a water system or a sanitation system or transport system. All are now guided and aided by digital infrastructure. And similarly, investment in human infrastructure, training of the people to operate that new physical and digital infrastructure. All are significant and all are important. And there's a deficit in all three areas right now. 

Lauren Prastien: And like we said, this isn’t a small investment. But just like investments in physical and digital infrastructure show a strong payoff, investments in human infrastructure are just as valuable. In 1997, the Association for Talent Development, then called the American Society for Training and Development, launched a major research initiative to evaluate the return on investment in worker education and training. In 2000, they released a comprehensive study titled “Profiting from Learning: Do Firms’ Investments in Education and Training Pay Off?” And essentially, the answer was, yes. A lot. Companies that offered comprehensive training programs not only had a 218% higher income per employee than companies without formalized training, they also saw a 24% higher profit margin than those who spent less on training. So it’s better for both employees and employers.

And what’s really exciting here is that the ROI we get from investing in worker training might even be bolstered by disruptive technologies. Because the technologies that we often think about as displacing workers can even help in reskilling them. Like this example, from Liz Shuler, on how augmented reality can help workers assess their interest in and suitability for a given job, before they make the investment in undergoing the training necessary for that job.

Liz Shuler: I was just at a conference with the sheet metal workers and I actually took the opportunity to do some virtual welding. They had a booth set up at their conference so that people could actually see what the technology is all about. And I also did a virtual lift so that you're using that technology before you even leave the ground to know whether you have the aptitude or the stomach to be able to get into, uh, one of those lifts and not be afraid of heights, for example. Let me tell you, it was very challenging. Those are the kinds of tools and innovation that our training programs get ahead of. 

Lauren Prastien: But what’s important to keep in mind here is that this sort of investment presupposes that organized labor has ample time to anticipate the disruption and retrain workers before they’re simply automated out of their jobs, which is why these efforts cannot be reactive.

Craig Becker: The role of unions as a voice for workers has to start really before the innovation takes place. There has to be a role for worker voice in deciding what kind of innovation would work. How would this innovation fit into the workplace? How would it expand what workers can do? How can it make them more safe? So that, you know, the type of dialogue which has gone on between our commission and CMU and its faculty is exactly what has to be promoted by policy makers. 

Lauren Prastien: But what happens when this doesn’t play out? Like we discussed last episode, we’ve seen the larger socioeconomic implications that come with a region losing the industry that employs a large number of its residents to automation. And like we said, the transition from one job to another isn’t always organic, and it often involves building new skills that take time and money to acquire, be it from going back to school, learning a new software or operating system, or picking up a new trade. Sometimes, retraining opportunities themselves aren’t enough, especially when the transition from one industry to another can come with a decrease in income.

But when it comes to protecting workers while they transition to another job and gain the skills necessary to support that position, there are steps that policymakers can take. I’ll tell you what that might look like in a few moments, so stay with us.

Lee Branstetter: The US federal government has been subsidizing worker retraining for decades. It spent billions of dollars and a lot of the results have been fairly disappointing. I think what we've learned is that retraining is hard, right? And then once a worker acquires a skill, they need to be able to move that skill to where the demand for that skill exists. All these need to be in alignment for retraining to work. 

Lauren Prastien: If that voice sounds familiar, it’s because you’ve heard it a few times already over the course of this season. But if you’re just joining us or if you can’t quite place it, let me help you out. That’s Lee Branstetter. He’s a former Economic Advisor to President Obama, and he currently leads the Future of Work Initiative here at the Block Center. And based on his work as an economist and a professor of public policy, Professor Branstetter thinks unemployment insurance isn’t the right route to go. Instead, he thinks we should be focusing on wage insurance.

Lee Branstetter: The payout would not work like unemployment insurance at all. Our unemployment insurance system works in the following manner: you lose your job, you get some money, and it's supposed to tide you over until you get another job. The problem we're finding is that workers go through a disruptive experience generated by technology or globalization and they've spent decades honing a set of skills that the market no longer demands. So they have no problem getting another job, but the new job pays less than half of what the old job paid. We don't have any way of insuring against that. And the private market is probably not going to provide this insurance on its own because the only people who would sign up for it are the people who are about to get disrupted. Imagine how health insurance markets would work if the only people who sign up for health insurance were the people who are about to get critically ill.

Lauren Prastien: And Professor Branstetter argues that a relatively inexpensive intervention like wage insurance could protect the workers most impacted by technological change.

Lee Branstetter: With a fairly small insurance premium, we could provide a pretty generous level of insurance to that small fraction of the workforce that are contending with these long term income losses. And they're huge. I mean, the long term income losses we're talking about are on the same order of magnitude as if somebody's house burned down. Now, any of these workers can go on the Internet and insure themselves against a house fire quickly, cheaply, and easily. They cannot insure themselves against the obsolescence of their skill, but it would be pretty easy and straightforward to create this kind of insurance. 

Lauren Prastien: Like Professor Branstetter said in our first episode on industry disruption, we still don’t know the exact impact that technological disruption is going to have on the workforce. But government interventions like wage insurance could cushion that impact. 

In addition to the difficulties associated with workers needing to reskill as certain skills become automated, another issue in the current workforce relates to overqualification. In 2017, the Urban Institute’s Income and Benefits Policy Center found that as many as 25 percent of college-educated workers were overqualified for their jobs. In other words, a full quarter of the American workforce with a college degree didn’t actually need that college degree to gain the skills or do the work that their jobs required them to do. And this is a huge issue, considering the rising costs of higher education and the growing burden of student loan debt. 

So what impact is widespread overqualification having on our workforce? We’ll talk about that in just a moment.

According to the Federal Reserve, Americans owe more than $1.53 trillion in student loan debt. That’s the second-highest consumer debt category in the United States, exceeded only by mortgage debt. And according to the Institute for College Access and Success, borrowers from the Class of 2017, on average, still owe $28,650 in student loans. So imagine for a moment, please, for just a second, what it might mean to find out that you didn’t even need that degree in the first place, or that an advanced degree that you went into even more debt to obtain might even keep you from getting hired.

Oliver Hahl: There's this whole literature in the field on overqualification once you have a job and how people feel they get less enjoyment from their job when they're overqualified and all this. We were asking something that surprisingly no one really had studied at all, which was do hiring managers reject someone that they perceive to be overqualified? And if so, why? 

Lauren Prastien: That’s Oliver Hahl. He’s a professor of organizational theory and strategy at Carnegie Mellon, where his work looks at how perceptions of socioeconomic success impact the behaviors of employment markets. In particular, how hiring organizations perceive candidates who are, quote, unquote, overqualified. Which is becoming an increasing concern as more and more people are not only getting college degrees, but also getting advanced degrees. Today, workers with at least a bachelor’s degree make up 36% of the workforce, and since 2016, have outnumbered workers with just a high school diploma. 

Oliver Hahl: So basically what we found in the first paper was perceptions of commitment, which is a term of art for academics, which basically just means a couple things. One is that the job candidate is less likely to stay with the firm. The employers are less likely to kind of get as much effort out of them. So the more committed you are to the organization, the more you're willing to put the organization first. 

Lauren Prastien: And as Professor Hahl continues to look at the impact of how organizations perceive a candidate’s qualifications, his research group found that there are some really interesting tensions that cut along the lines of gender.

Oliver Hahl: So then a student of mine, Elizabeth Campbell, who's at the Tepper School, came and was like, I think this is related to gender. The literature or the way that it's discussed, even interpersonally about a commitment, organizational commitment for men tends to be about, are you committed to the firm? For women, tends to be about, are you committed to your career? So I already have that divergence about what we mean by commitment, opens the door for, oh, there could be different outcomes, right?

Lauren Prastien: And by the way, Campbell was right. Gender did have an impact on the perception of qualification, though maybe not the impact you’d expect.

Oliver Hahl: And so as we thought about it, you realize the more qualifications you show on a resume might threaten the firm in saying like, you might not be committed to the firm, but it actually shows more commitment to your career because you've invested more. And so it kind of works in these opposite directions. In the preliminary test, that's what we found. A woman who's overqualified relative to someone who's sufficiently qualified tends to be selected more. Whereas for men, it's the opposite. Men who are overqualified tend to be selected less than someone who's sufficiently qualified. 

Lauren Prastien: So yeah, if you’re a man and you’re overqualified, you’re probably not going to be selected for that position. And if you’re a woman and you’re overqualified, the hiring organization is going to favor you. Essentially, Campbell and Hahl have summed it up this way: 

Eugene Leventhal: “He’s Overqualified, She’s Highly Committed.” 

Lauren Prastien: Which stinks, whether you’re a man not getting picked for a position or a woman being hired for a position you’re overqualified for. And this doesn’t just negatively impact job candidates. According to Professor Hahl, this hurts organizations, too.

Oliver Hahl: The implication of this, from kind of a strategy standpoint of how to manage your human capital, is that the organization is leaving really qualified people out in the workforce that they could get a lot of productivity from. 

Lauren Prastien: And so we asked Professor Hahl what forces are driving this trend and contributing to this disparity.

Oliver Hahl: The fetishization of the undergrad degree as opposed to doing an associates degree or developing a technical skill. Going and getting an undergrad degree where it's not, you know, teaches you to think really well and it's great. And I'm not, you know, I get paid by universities. I think universities are great, but I don't know that it's for as many people who are going to get those jobs.

Lauren Prastien: As we talked about in our episode about the education bubble, higher education has relied on prestige and scarcity for a long time to maintain its dominance in the skill acquisition market, but the deployment of new technologies that impact the way people are being educated and how they are gaining credentials could shake this up significantly. And Professor Hahl also sees these technologies shaking up hiring, and potentially helping to overcome the quote, unquote overqualification trap, particularly if we take a page from how medical schools go about candidate recruitment.

Oliver Hahl: The way medical schools match with medical students. This comes from my talking with my two brothers-in-law, who are doctors. So, when they were getting out of medical school and matching for their fellowship. Or not fellowship, it’s the residency, the next step. You go around and interview once with a bunch of schools. You talk to a bunch of schools and get your sense of how they think of you and how you think of them. But then it’s blind. You don’t know how they rank you relative to other people and they don’t know how you rank them relative to other schools. And you make a list of your top schools and they make a list of their top students and some algorithm matches them.
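
That “some algorithm” is, in the real residency match, a variant of deferred acceptance, often called the Gale-Shapley algorithm. Here is a minimal sketch of the idea, assuming hypothetical applicants and programs and just one seat per program:

    # A minimal sketch of applicant-proposing deferred acceptance, the family
    # of algorithms behind the residency match. Names and rankings are made up.
    def deferred_acceptance(applicant_prefs, program_prefs):
        """applicant_prefs: {applicant: [programs, best first]}
           program_prefs:   {program: [applicants, best first]}
           Simplification: each program has exactly one seat."""
        # Lower index = more preferred; lets a program compare applicants quickly.
        rank = {p: {a: i for i, a in enumerate(prefs)}
                for p, prefs in program_prefs.items()}
        free = list(applicant_prefs)                   # applicants still seeking a seat
        next_choice = {a: 0 for a in applicant_prefs}  # next program on each list
        match = {}                                     # program -> tentatively held applicant

        while free:
            a = free.pop()
            if next_choice[a] >= len(applicant_prefs[a]):
                continue                               # list exhausted: stays unmatched
            p = applicant_prefs[a][next_choice[a]]
            next_choice[a] += 1
            if p not in match:
                match[p] = a                           # empty seat: tentatively accept
            elif rank[p].get(a, float("inf")) < rank[p].get(match[p], float("inf")):
                free.append(match[p])                  # program prefers a: bump the holder
                match[p] = a
            else:
                free.append(a)                         # rejected: will try the next program

        return {a: p for p, a in match.items()}

    applicant_prefs = {"Avery": ["General", "Mercy"], "Blake": ["General", "Mercy"]}
    program_prefs = {"General": ["Blake", "Avery"], "Mercy": ["Avery", "Blake"]}
    print(deferred_acceptance(applicant_prefs, program_prefs))
    # {'Blake': 'General', 'Avery': 'Mercy'}

The real National Resident Matching Program adds program capacities, couples and other constraints, but the core loop of tentative offers and rejections is the same.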

Lauren Prastien: So Eugene, what have we learned today about protecting workers from the greater impacts of technological change and ensuring that they have the skills and ability to find meaningful, well-paying jobs during what’s being called the Fourth Industrial Revolution?

Eugene Leventhal: Well Lauren, we’re starting to see a bit of a trend arising when it comes to the policy responses relating to the types of questions that we’re bringing up in this season. That trend is the fact that there is no easy solution, there is no single easy, laid-out path ahead of policymakers in terms of what is the best way to deal with it. 

What we do know is that we need to work with the groups that are being affected. That includes individuals who are already in the process of being displaced by automation or have already been displaced by it. It can also mean industries where we see a high chance of disruption in the coming years. In situations where policymakers are not able to directly administer studies looking into how the workforce within their jurisdiction is being affected, it’s important to partner with universities or nonprofits focused on workforce redevelopment to better understand how individuals are being affected and who specifically is at a high risk of being affected. 

As we heard today, retraining is definitely an important area to focus on in terms of supporting workers as career landscapes change. However, retraining should not be seen as a panacea for these issues. We also need to think of other elements of supporting our workforce, such as the wage insurance idea that Professor Branstetter spoke of. We also need a larger cultural shift around how we view both education and some career paths, as continuing the current status quo of overqualification isn’t good for anyone involved. And so we see the potential policy response here starting with understanding who is being affected and how, as well as thinking about what support systems need to be in place for workers who will be affected but for whom retraining may not be a viable option in the short run. 

Lauren Prastien: Thanks, Eugene. Now that we’ve discussed the changes in the workforce, we’ll look at changes in the workplace. In particular, human-computer collaboration. Here’s a preview of our conversation with one of our guests next week, Parth Vaishnav, a professor of engineering and public policy here at Carnegie Mellon:

Parth Vaishnav: Okay, if we accept the idea that there’s going to be someone in the truck, who is going to be monitoring these systems, how is the job of that person going to be different from what truckers do today? And are there other things that truckers do right now? Things like basic maintenance, things like paperwork, things like coordinating delivery times with customers, which an autonomous system may not be able to do. Would autonomy create jobs that are completely different from the trucking job, but still involve truckers? 

Lauren Prastien: I’m Lauren Prastien, 

Eugene Leventhal: and I’m Eugene Leventhal,

Lauren Prastien: and this was Consequential. We’ll see you next week.   

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien, with editorial support from Eugene Leventhal. It was edited by Eugene and our intern, Ivan Plazacic. Consequential is produced by Eugene, Lauren, Shryansh Mehta and Jon Nehlsen. 

This episode references the World Economic Forum’s “The Future of Jobs 2018” report, the Bureau of Labor Statistics’ Occupational Outlook Handbook, a 2019 article from World Economic Forum President Borge Brende titled “We need a reskilling revolution. Here’s how to make it happen,” a 2019 Bloomberg Law survey titled “Bargaining Objectives, 2019,” a 2000 study from the ATD titled “Profiting from Learning: Do Firms’ Investments in Education and Training Pay Off?”, a 2017 study from the Urban Institute titled “Mismatch: How Many Workers with a Bachelor’s Degree Are Overqualified for their Jobs?”, consumer credit data from the Federal Reserve, data from the Institute for College Access & Success’s Project on Student Debt and data from Georgetown University’s Center on Education and the Workforce. It uses clips from the 2005 movie Charlie and the Chocolate Factory, the 2004 movie Napoleon Dynamite and the 2008 movie Taken.

Lauren Prastien: Does your boss trap you in endless conversational loops? Do they ask your opinion on something they’ve clearly already decided upon? Are they obsessed with efficiency?

I’m so sorry to tell you this: your boss might be a robot. At least, according to author and Forbes columnist Steve Denning. Back in 2012, he wrote an article titled, “How Do You Tell If Your Boss Is A Robot?” And while Denning was referring to a metaphorical robot, as in, your boss is just kind of a jerk and obsessed with the bottom line - he was also speaking to a very real anxiety that as artificial intelligence becomes more and more competent, we’re going to start seeing it in the workplace, whether we’re consciously aware of it or not.

And this anxiety over being bamboozled by a surprise robot, or even just having robots in the workplace in general, hasn’t gone away. If anything, it’s intensified. Just this year, the journalist Eillie Anzilotti wrote an article for Fast Company titled, “Your new most annoying overachieving coworker is a robot.” Yes, a literal robot this time.

And in the trailer for the third season of Westworld, a construction worker played by Aaron Paul sits on a girder, the city looming behind him, looking lonely as he eats his lunch beside the robot that is presumably his coworker. It bears a stunning resemblance to the famous Charles C. Ebbets photograph Lunch atop a Skyscraper. But while the eleven men in Lunch atop a Skyscraper share cigarettes and chat away on their break, looking exhausted but somehow also enlivened by each other’s presence, Westworld’s human and robot coworkers bear the same hunched, defeated posture. They can’t even seem to look at each other.

But will human-computer collaboration actually look like that? How accurate is our current cultural anxiety over - and perhaps even fascination with - the future of work?

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests, as well as of your other host.

EL: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now Lauren.

LP: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelley, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies. 

Last week, we talked about how artificial intelligence and automation could displace workers, and what interventions need to be in place to protect our workforce.

Today, we’re going to look at what happens when these technologies enter the workplace and even just our lives in general. This week is all about human-computer collaboration and the future of work. So stay with us. 

[MUSIC]

So real talk: your newest coworker is probably not going to be a literal, nuts-and-bolts robot with a face and arms and legs. Aaron Paul’s character in Westworld is probably more likely to work alongside a semi-autonomous 3D-printer or a robotic arm designed for quickly laying bricks. He might even wear something like a robotic suit or exoskeleton, which allows users to safely lift heavy objects. But, no, he’s probably not going to be working alongside a human-like robot. It’s more likely he’ll just use an algorithm to help efficiently develop a project’s schedule and reduce the likelihood of delays. Though, okay, let’s be fair here: Westworld is fiction, and a scheduling algorithm doesn’t make great television like a fully realized robot does, and it doesn’t give us that really stunning shot of Aaron Paul and that robot on their lunch break. But wait a hot second - why does a robot need a lunch break?

Anyway: human-computer collaboration isn’t a discussion that takes place entirely in the future tense. Because already, there are a lot of industries that aren’t computing or even computing-related where we are seeing a lot of human-computer collaboration. And one of them is trucking. 

Parth Vaishnav: There is already relatively more automation in trucking than we see in passenger cars. Things like cruise control, things like adaptive cruise control have been rolled out in trucking and have been studied for quite some time.

Lauren Prastien: That’s Parth Vaishnav. He’s a professor of engineering and public policy at Carnegie Mellon, where his work looks at the economic and environmental implications of automation, as well as human-computer collaboration, in several industries, including trucking.

Parth Vaishnav: There are companies which are actually implementing technologies like platooning where one truck follows another truck and the following truck essentially has a driver monitoring the system. But so long as they are in a platoon, the following truck acts as if it's fairly autonomous and obviously it's a low level of autonomy, which means that if something goes wrong, you always have a vigilant driver who's ready to intervene.  

Lauren Prastien: Quick clarification on the use of the term platoon here: a platoon is essentially a group of vehicles that travel close together, wherein the lead vehicle sets the speed and direction for the group. It’s also called a flock. Because, you know, that’s essentially what flocks of birds do. But frankly, I prefer the word platoon. Because, come on.  
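
For a rough, mechanical sense of what “following at a set gap” means, here is a deliberately oversimplified sketch: the following truck nudges its speed in proportion to the error between the measured gap and a desired gap. Real platooning controllers rely on vehicle-to-vehicle communication and far more careful safety logic, and every number below is made up.

    # Illustrative sketch only, not a real platooning controller.
    DESIRED_GAP_M = 15.0   # target following distance, in meters (made up)
    GAIN = 0.4             # how aggressively the follower closes or opens the gap

    def follower_speed(lead_speed_mps, measured_gap_m):
        """Commanded speed for the following truck, in meters per second."""
        gap_error = measured_gap_m - DESIRED_GAP_M
        return max(0.0, lead_speed_mps + GAIN * gap_error)

    # Too close (10 m): ease off. Too far (25 m): speed up slightly.
    print(follower_speed(25.0, 10.0))  # 23.0
    print(follower_speed(25.0, 25.0))  # 29.0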

But anyway, it’s a form of vehicle autonomy, though, like Professor Vaishnav said, it’s a fairly low level of autonomy. But the level of sophistication of automation in trucking is increasing pretty rapidly. In 2017, the journalist Alex Davies reported for Wired that the transportation startup Embark was hauling smart refrigerators between Texas and California using a fleet of self-driving trucks. Yes, there was a human driver in the cab, which is - at least right now - pretty important.

Like we said all the way back in our second episode on the black box, driving involves a really sophisticated algorithm. It’s not just step one, step two, step three, you have reached your destination. You’re taking into account other cars, drivers who might not actually be following the rules of the road, pedestrians, detours, stoplights, you get the picture. So the algorithms that power self-driving technology don’t just have to understand the rules of the road, they have to rely on the sensors that feed them input about everything happening around the vehicle, from weather conditions to traffic to potholes.

If you remember from our third episode on data subjects, you contributed a lot of the data that helps these algorithms interpret what those sensors are picking up. You know, while you were proving you weren’t a robot online.

John Mulaney: I’ve devised a question no robot could ever answer! Which of these pictures does not have a stop sign in it? What?! 

Lauren Prastien: Right. Thanks, John Mulaney. And thanks, CAPTCHA.

But let me make two really quick clarifications about how this technology is being used. First of all, autonomous vehicles aren’t just on the road with nobody in them. You need a driver in the cab, in the same way you still need a pilot in the cockpit when you’re on autopilot.

And continuing this pilot metaphor to make this second clarification: the majority of autonomous trucking is going to take place on highways and in long-hauls, not in urban areas. Because, again: detours, pedestrians, potholes, you get the picture. Think of it this way: for most commercial airlines, about 90% of a flight is done on autopilot. But 99% of landings and 100% of takeoffs are done by a human pilot, not an autopilot system.

So in the case of trucking, if those humans in the cab are not going to be spending that time driving, what can they be doing?

Parth Vaishnav: Okay, if we accept the idea that there’s going to be someone in the truck, who is going to be monitoring these systems, how is the job of that person going to be different from what truckers do today? And are there other things that truckers do right now, things like basic maintenance, things like paperwork, things like coordinating delivery times with customers, which an autonomous system may not be able to do? Would autonomy create jobs that are completely different from the trucking job, but still involve truckers? What kinds of jobs does that create and what skills do those people require relative to the skill sets that already exist among people who service trucks on long-haul routes right now?

Lauren Prastien: And there’s a really interesting tension here when it comes to how integrating automation into the trucking industry could do a lot of good and cause a lot of harm. A 2015 analysis of US Census Bureau data conducted by National Public Radio found that “truck driver” was the most common job in 29 states, including Pennsylvania, California and Texas. But while the integration of driverless technology could endanger the jobs of the 1.9 million professional truck drivers currently on the road in the United States, it’s also worth noting that the American trucking industry is experiencing a driver shortage that could be remedied through increased automation.

Parth Vaishnav: About 30% of the cost of trucking is the cost of the driver, and so there's a strong economic case to actually reduce costs by making trucking autonomous. Related to that, the reason why the costs of drivers are so high is that it's a hard job to do. You have to spend time away from home, you have to spend very long hours behind the wheel where you alternate between dealing with tricky situations like getting the truck in and out of warehouses and rest stops. And boredom. So it's a hard job to recruit people into doing. The turnover rate is fairly high.

Lauren Prastien: It’s also important to consider that actually, the trucking industry isn’t just truck drivers, even today. In its 2015 analysis of the truck driver shortage, the American Trucking Associations reported that actually, some 7.1 million people involved in the American trucking industry today aren’t drivers. And in addition to having a really high turnover rate, the trucking industry is also facing a pretty significant demographic problem. According to Aniruddh Mohan, a PhD student in Engineering and Public Policy at Carnegie Mellon and one of the researchers working with Professor Vaishnav on this project, changing the roles and responsibilities of individuals in this industry might actually be critical to making sure this industry survives.

Aniruddh Mohan: I think one of the interesting things about the trucking industry is that there's simultaneously a massive shortage of drivers in coming years, but there's also a lack of diversity in the industry. So, most of the truck drivers are baby boomers and there’s a lack of millennials, younger generations participating. And one of the interesting things that we want to find out from this work is to see how the jobs might change, particularly with the introduction of automation, and whether technology might make those jobs more attractive to younger generations who are more technology savvy.

Lauren Prastien: So how soon is the trucking industry going to feel the impact of these changes?

Parth Vaishnav: I think panic is probably premature at this point. I think some of the players in the industry have also admitted that autonomous vehicles are not going to come barreling down the highways at the end of this year, or whenever it is that some people claim. I think there is time, but also there is value in performing the kinds of exercises that we're performing, where you try and understand how the jobs are going to change and on what timescales, and start preparing people for it. The other thing that's important is, fleet operators and drivers who are actually exposed to the risk, both in terms of risk of accidents, but also the risk of their jobs changing or going away, should be brought into the conversation. We should try to bring in all the stakeholders into the conversation sooner rather than later.

Lauren Prastien: So what will bringing stakeholders into that conversation look like? And how will worker input impact and even improve the way technologies are being used in the workplace? We’ll talk about that in just a moment.  

[Music]

Lauren Prastien: Like we said last week, if you’re going to talk about worker interests and needs, a great place to look is to unions. When we spoke to last week’s guests Liz Shuler, the Secretary-Treasurer of the AFL-CIO, and Craig Becker, General Counsel to the AFL-CIO, about displacement, the subject of human-computer collaboration naturally came up. Because as we’re seeing technologies like artificial intelligence and enhanced automation enter into a variety of sectors, Becker and Shuler believe that this is also going to show us where the need for a human is all-the-more important.

Craig Becker: I think one of the consensus predictions in terms of job loss is it's in jobs that require a human touch - teaching, nursing - that you're not going to see large amounts of displacement. That there's something about the classroom with a human teacher, the hospital room with the human nurse, that’s just irreplaceable. So I think you have to understand the possibilities in this area as enhancing the understanding of those human actors.

Liz Shuler: We have seen a lot of talk around technology being sort of a silver bullet, right, in the classroom. I went to the consumer electronics show this last year in Las Vegas and I saw a lot of robots that were meant to be used in the classroom. And we had a member of the AFT along with us at the show. And he said, you know, this seems like a good idea on the surface, but I've seen it where you're in a classroom of 30 people, and sometimes 40 people as we've heard in Chicago right now during the strike. A robot being used in a classroom, in some cases for remedial purposes where maybe a student hasn't been following the lesson plan as closely or as quickly. The minute the robot scoots over to that student's desk, that student is targeted or is feeling vulnerable and there's a lot that goes along with that. Whether it's bullying or some kind of an emotional impact that, you know, people don't foresee.  

Lauren Prastien: This example with robots in the classroom points to something really vital. When it comes to the technologies being implemented into a given industry, those technologies themselves are sometimes not being developed by someone who works in that industry. Don’t get me wrong, sometimes they are. But this example of an assistive robot that might actually alienate the student it’s intended to help is a really telling sign of what happens when workers in a given industry aren’t involved in the development and integration of these technologies. And this isn’t an extraordinary circumstance. It’s a scenario that the AFL-CIO is seeing a lot.

Craig Becker: There was a young robotics professor developing a robotic arm to assist feeding of patients. The professor was there along with two graduate students, and in the group from our commission was the executive director of the nurses’ union. So, you know, what ensued was a very, very interesting back and forth between the two students and the professor about their conception of how this tool, this robotic arm would be used, and the nurse's conception of what actually takes place when a nurse is assisting a patient to be fed and how it's much more than making sure that the food goes into the patient's mouth in terms of assessments that are going on at the same time. And also how the tool would likely be deployed by actual health care institutions. 

Lauren Prastien: Last week, we talked about how bringing workers into the process of implementing innovations into their given industry is really vital to preventing displacement. But putting workers in conversation with technologists and trusting workers to know what they need is also really vital to making sure that these technologies are actually being utilized meaningfully and not, you know, just being parachuted into an industry, to take a term from our education episode.

Because a lot of the popular discourse on human-computer collaboration can ignore a really important detail: it’s not just the technology improving the efficacy of the human. A lot of the time, it’s also the human improving the efficacy of the technology. And in a moment, we’ll dig into that a little more. So stay with us.

[Music]

Lauren Prastien: From trucking to nursing to teaching, we can see that human-computer collaboration can look really, really different across industries. On an even more granular level, there are different forms of human-computer collaboration, and they have really different implications for the future of work, depending on which we choose to implement and prioritize. And when it came to figuring out what that might look like, we turned to an expert on the subject:

Tom Mitchell: The focus of this study, our marching orders in that study was to analyze and understand and describe the impact that AI would have on the workforce. And what we found was that the impacts were many, not just one, and some were positive, some were negative, some were hard to evaluate.

Lauren Prastien: That’s Tom Mitchell. He’s a professor of computer science here at Carnegie Mellon, where he established the world's first department of machine learning. He’s also the Lead Technologist at the Block Center, where his work pertains to machine learning, artificial intelligence, and cognitive neuroscience, and in particular, developing machine learning approaches to natural language understanding by computers, which we’ll explain very, very soon.

The study he’s referencing here is called Information Technology and the U.S. Workforce: Where Are We and Where Do We Go from Here? It was published in 2017, and it covers everything from future trends in technology to the changing nature of the workforce. 

One of the biggest takeaways from this study for the National Academies of Sciences, Engineering and Medicine, as well as from several other studies Professor Mitchell has published, is the idea of a job as a series of tasks. And that’s where human-computer collaboration, sometimes also called human-in-the-loop AI systems, comes in, by assigning certain tasks to the computer, such that humans are then freed up to do other tasks. Which can either result in an enhanced worker experience by creating a more meaningful and stimulating job, or can eventually lead to job displacement, depending on how that’s split up. So we asked Professor Mitchell for a few examples of this. 

Tom Mitchell: Suppose you have people employed who are making routine repetitive decisions. Like there's somebody at CMU who approves or doesn't approve reimbursement requests when people take trips, for example. Now, there is a policy and that person is looking at data that's online and making the decision based on that policy. They use their human judgment in various ways. But if you think about it, the computers in that organization are capturing many training examples of the form: here's the request for reimbursement with the details and here's the person's decision. And that's exactly the kind of decision-making training example that machine learning algorithms work from. Now obviously, with that kind of human in the loop scenario, we might expect over time the computer to be able to either replace that person if it's extremely good, or more likely augment and error check the decision making of that person, and improve the productivity of that person over time. So that's a kind of human in the loop scenario that I would call the mimic the human.
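
A minimal sketch of the “mimic the human” pattern Professor Mitchell describes: fit a classifier to logged pairs of request features and human decisions, then use it to flag new cases where the model and the reviewer disagree. The feature names, data and choice of a scikit-learn random forest are illustrative assumptions, not the actual CMU system.

    # "Mimic the human": learn from logged (request, decision) pairs, then use
    # the model to error-check new decisions. All data here is hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [amount_usd, days_until_filed, has_receipt]; label 1 = approved
    history = [
        ([120.0,  3, 1], 1),
        ([ 45.0, 10, 1], 1),
        ([980.0, 60, 0], 0),
        ([300.0,  5, 1], 1),
        ([715.0, 45, 0], 0),
    ]
    X = [features for features, _ in history]
    y = [label for _, label in history]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    def review(features, human_decision):
        """Flag requests where the model disagrees with the human reviewer."""
        predicted = model.predict([features])[0]
        return "flag for a second look" if predicted != human_decision else "ok"

    print(review([850.0, 50, 0], human_decision=1))  # most likely flagged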

Lauren Prastien: If you remember from episode 2, machine learning algorithms work by taking in large quantities of data to be able to make inferences about patterns in that data with relatively little human interference. So in the case of the “mimic the human” scenario, you can have this really negative outcome where the computer learns how to do the task that more or less makes up your job, then does your job better than you, then essentially Single White Females you out of your job. But don’t worry. This isn’t the only way human-computer interaction can play out!

Tom Mitchell: If you think about our goal, which is to figure out how to use computers to make even better decisions and make humans even better at the work that they're doing instead of replacing them, which in many cases is what we'd like, then there's a different kind of approach. It's actually one of our current ongoing research projects, and I call it conversational learning. If you hired me to be your assistant, you might, for example, say, whenever it snows at night, wake me up 30 minutes earlier. As a person, I would understand that instruction. Today, if you say to your phone, whenever it snows at night, wake me up 30 minutes earlier, notice two things: one, the computer won't understand what you're saying, but number two, it actually could do that. It could use the weather app to find out if it's snowing and it could use the alarm app to wake you up.

Lauren Prastien: Hey. This sounds a little like the knocker-upper from our last episode. But right now, my phone - which is, remember, a computer - can’t actually do that. Watch. Whenever it snows at night, can you set the alarm thirty minutes earlier?

Phone: I’ve set an alarm for 7:30 PM.

Lauren Prastien: That’s not helpful.

Tom Mitchell: In our prototype system, the system says, I don't understand, “do you want to teach me?” Then you can say yes. If you want to know if it snows at night, you open up this weather app right here and you click on it and you see right here where it says current conditions, if that says S, N, O, W, it's snowing. If you want to wake me up 30 minutes earlier, you open this alarm app here and you tap on it and you say right here where the number is, subtract 30 from that. So literally you teach or instruct or program your computer the same way that you would teach a person. And so the idea is if you step back, you'll notice if I ask how many people on earth can reprogram their telephones to do new things for them, the answer is about 0.001% of humanity. Those are the people who took the time to learn the programming language of the computer. Our goal is instead have the computer learn the natural instruction language of the person.
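
The end result of a teaching session like that is, in effect, a small trigger-action program. Here is a minimal sketch of the rule itself, with made-up stand-ins for the phone’s weather and alarm apps rather than anything from the actual research prototype:

    # Sketch of the taught rule "whenever it snows at night, wake me up 30
    # minutes earlier." The weather check and alarm object are stand-ins for
    # real phone APIs, not the research system described above.
    from datetime import datetime, timedelta

    def snowed_overnight():
        # Stand-in for reading the weather app; pretend it snowed last night.
        return True

    class Alarm:
        def __init__(self, alarm_time):
            self.alarm_time = alarm_time

        def shift_earlier(self, minutes):
            self.alarm_time -= timedelta(minutes=minutes)

    def apply_taught_rule(alarm):
        """Whenever it snows at night, wake me up 30 minutes earlier."""
        if snowed_overnight():
            alarm.shift_earlier(30)

    alarm = Alarm(datetime(2020, 1, 15, 7, 0))   # normally wake at 7:00
    apply_taught_rule(alarm)
    print(alarm.alarm_time.strftime("%H:%M"))    # 06:30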

Lauren Prastien: In addition to just being really, really cool, this is incredibly significant when we think about the kinds of barriers that exist today in terms of who has access to this technology, who is able to innovate with this technology and who is benefitting from the innovations in this technology.

Tom Mitchell: If we can make it so that every worker can program or instruct your computer how to help them, then instead of computers replacing people, the dominant theme will be how creative can the worker be in thinking of new ways to teach the computer to help the worker. So there's an example of a place where we have a social question about impact of AI, but part of the solution can actually be changing the underlying AI technology. So in general, I think one of the things I love about the Block Center is that we're going beyond the idea that the Block Center should just guide policymakers. It's just as important that the Block Center identify these kinds of technology research opportunities that technology researchers can work on. And if they succeed, it will change the kind of social impact that AI has.  

Lauren Prastien: To close, I asked Professor Mitchell what we need to keep in mind going forward as all this technology is rolled out. And his answer was pretty encouraging.

Tom Mitchell: It's not that technology is just rolling over us and we have to figure out how to get out of the way. In fact, policymakers, technologists, all of us can play a role in shaping that future that we're going to be getting.

Lauren Prastien: On that note, Eugene, what’s our takeaway this week, and where can policymakers start to approach the really unwieldy topic that is the future of work?

Eugene Leventhal: To add to what Professor Mitchell was just saying, I would go so far as to say that it’s not that all of us can play a role. The reality is that, like it or not, we all will play a role. And so what should policymakers specifically be doing at this point?

A good place to start is not immediately buying into the hype that robots and AI will change everything right away. Policymakers have contended with non-AI-driven automation for decades, especially in manufacturing and textiles. From there, efforts need to be directed towards understanding which functions and roles across various industries will be disrupted in the next few years and which are likely to be disrupted in the coming decades. Convening people from the public and private sectors, including academic institutions, can be both helpful and overwhelming. The reality is that more resources need to be dedicated towards helping policymakers know what is most pertinent to their constituents. Partnering with nonprofits and universities can help lighten that burden.

In addition to thinking of roles where humans are getting automated away, it’s crucial to focus attention on the areas where AI can collaborate with humans for better ultimate outcomes. Automation is an unquestionable concern moving forward, but that doesn’t mean that the beneficial prospects of AI should be overshadowed. The better we understand where AI can help boost human productivity, the easier it will be to envision new roles that don’t currently exist. If there will be more jobs eliminated than created, then the gains from productivity could be used to help fund programs such as Wage Insurance, which we heard about from Professor Lee Branstetter in last week’s episode called “A Particular Set of Skills.”

Lauren Prastien: Thanks, Eugene. Next week, we’re looking at a sector that’s seeing a lot of the issues we’ve been discussing all season playing out in real-time - from algorithmic bias to technological disruption to the potential of displacement - and how decisions made about this industry could have serious implications for the standards of other industries in the future. And that’s healthcare. Here’s a clip from one of our guests next week, Zachary Lipton, a professor of business technologies and machine learning at Carnegie Mellon:

Zach Lipton: How can you use the tools of modern computer vision to not just do the main thing, which is to say, try to imitate what a radiologist does, but to help a radiologist in this context of like review and, uh, kinda like continue learning? So among other things we imagine is the possibility of using computer vision as a way of like surfacing cases that would be interesting for review.    

Lauren Prastien: I’m Lauren Prastien,

Eugene Leventhal: and I’m Eugene Leventhal,

Lauren Prastien: and this was Consequential. We’ll see you next week.  

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien and was produced by Eugene Leventhal with editing support from our intern, Ivan Plazacic. Our executive producers are Shryansh Mehta and Jon Nehlsen. 

This episode references Steve Denning’s article “How Do You Tell If Your Boss is a Robot?” for Forbes in 2012, Eillie Anzilotti’s article “Your new most annoying overachieving coworker is a robot” for Fast Company in 2019, Alex Davies’ article “Self-Driving Trucks are Now Delivering Refrigerators” for Wired in 2017, a clip from John Mulaney’s comedy special Kid Gorgeous, the American Trucking Associations’ “Truck Driver Shortage Analysis 2015” and NPR Planet Money’s 2015 study “The Most Common Job in Each State 1978 - 2014.”

Lauren Prastien: Right now, a lot of the media coverage on AI in healthcare falls into two categories: feel-good robot success story or horrifying robot nightmare. Success: a last-minute decision to put a face and heart-shaped eyes on an assistive robotic arm named Moxi helps nurses with staffing shortages while making patients feel happier and more comfortable. Nightmare: A popular search engine amasses millions of people’s health records without their knowledge. Success: A surgeon-controlled pair of robotic arms stitches the skin of a grape back together. Nightmare: A widely-implemented algorithm in the American healthcare system is found to be biased against black patients. Success: Telepresence technology helps people in underserved areas talk to therapists. Nightmare: An American engineer intentionally designs a robot to break Asimov’s first law of robotics: “never hurt humans.” Success, nightmare, success, nightmare. Rinse, repeat.

Over the course of this season, we’ve talked about some of the issues in our current technological landscape, from algorithmic bias to industry disruption to worker displacement to the AI black box. Right now, they’re playing out in real-time in the healthcare sector, and the decisions we make about how these technologies are implemented here may have greater repercussions for how they’re used and regulated in other sectors. And when it comes to medicine, the stakes are pretty high. Because, remember, sometimes, this is literally a matter of life and death.

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and I’ll be your main tour guide along this journey. You’ll also hear the voices of our many guests, as well as of your other host.

EL: Hi, I’m Eugene Leventhal. I’ll be joining throughout the season to take a step back with Lauren and overview what was just covered, to talk policy, and to read quotes. I’ll pass it back to you now Lauren.

LP: Consequential is recorded at the Block Center for Technology and Society at Carnegie Mellon University. Established in 2018 through a generous gift from Keith Block and Suzanne Kelley, the Block Center is dedicated to investigating the economic, organizational, and public policy impacts of emerging technologies.

This week, we’re talking about healthcare and tech. So stay with us.

Lauren Prastien: I feel like just about everyone has a really harrowing or just kind of uncomfortable story about a time a doctor was far too clinical or dismissive with them. When I was fourteen, I went in for a physical and my GP kept insisting my acne would go away if I just quit smoking. Which was weird, because I’d never had a cigarette in my life. I finally got so exasperated that I was like, “hey, excuse me, why won’t you believe I’m not a smoker?” And without missing a beat, she replies in just about the flattest affect possible: “Well, you look like one.” Which, wow, thanks.

But the point is this: healthcare is incredibly vulnerable by nature, because our bodies and our humanity can sometimes feel really inextricable from each other. And when we talk about our bodies - particularly when we talk about the ways things can go really, really wrong with our bodies - that’s a really vulnerable act.

So naturally, it’s easy to worry that putting an assistive robot in a hospital or involving an algorithm in a serious treatment decision is going to make everything eerie and dehumanizing and clinical. But I can’t help but think of the mere fact that the term clinical - which comes from the Greek klinike, or bedside, as in, of or pertaining to the sick bed - has the connotation of being cold, detached, dispassionate, as in, the very same negative attributes we often apply to artificial intelligence. But don’t worry, your next doctor is probably not going to be a robot. We asked an expert.

Zachary Lipton: Yeah. That won't happen.

Lauren Prastien: That’s Zachary Chase Lipton. He’s a professor of Business Technologies and Machine Learning at Carnegie Mellon, where his work looks at the use of machine learning in healthcare. And he has a good reason for why you shouldn’t have to worry about the whole Dr. Robot thing.

Zachary Lipton: Machine learning is good at one thing, which is prediction. And prediction under a very kind of rigid assumption. When we say prediction, I think a lot of times when doctors say prediction, they mean, like, forecast the future, like Zandar or something. Prediction doesn't necessarily mean what will happen in the future; it means infer something unknown from something known.

In medicine you're often concerned with something called a treatment effect, right? You really care. Like, if I were to give someone, not based on the distribution of historical data, if I were to intercede and give someone a different treatment than the doctors already would have given them, now are they going to do better or worse? And so the current crop of machine learning tools that we have doesn't answer that fuller like picture of actually how do we make better decisions. It gives us something like, you know, in a narrow little location we can say, is there a tumor or is there not a tumor given the image? But again, it doesn't tell us why should we give a treatment that we historically shouldn't have given. It doesn't tell us, you know, can we detect a tumor in the future? If suddenly there are changes to the equipment such as the images look a bit different from before, it doesn't tell us how you make structural changes to the healthcare system. So when people get carried away, like AI is taking over everything, it's more like we're plugging it in into these narrow places.
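
To make that distinction concrete, here is a minimal sketch of the gap Professor Lipton is pointing at. A model fit to historical data can predict an outcome from features; turning two such models, one per treatment arm, into a treatment-effect estimate only works under strong assumptions like no unmeasured confounding. The data, features and two-model approach below are illustrative, not his group’s method.

    # Prediction vs. treatment effect: a deliberately simplified sketch.
    # Differencing one model per treatment arm only recovers the true effect
    # under strong assumptions (e.g. no unmeasured confounding). Data is made up.
    from sklearn.linear_model import LinearRegression

    # Each row: [age, baseline_severity]; outcome: recovery score after 30 days
    treated_X,   treated_y   = [[55, 3], [62, 4], [48, 2], [70, 5]], [7, 6, 9, 5]
    untreated_X, untreated_y = [[53, 3], [60, 4], [50, 2], [68, 5]], [6, 4, 8, 3]

    # Prediction: "given these features, what outcome do we expect?"
    model_treated   = LinearRegression().fit(treated_X, treated_y)
    model_untreated = LinearRegression().fit(untreated_X, untreated_y)

    def estimated_treatment_effect(patient):
        """Predicted outcome with treatment minus predicted outcome without.
        Only meaningful if the historical data isn't confounded."""
        return (model_treated.predict([patient])[0]
                - model_untreated.predict([patient])[0])

    print(round(estimated_treatment_effect([58, 3]), 2))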

Lauren Prastien: And according to Adam Perer, a professor of human-computer interaction at Carnegie Mellon, the way we’re going to see artificial intelligence implemented in healthcare is going to look a lot like the human-in-the-loop systems that we discussed last week:

Adam Perer: Essentially it's a way to kind of help boost their cognitive abilities by giving them one other piece of information to act upon. But essentially we're nowhere near being able to replace them. We can just maybe give them a little extra, some information that maybe retrospective data suggests this, maybe you want to think about this, give some guidance towards that. But ultimately they need to make the decisions themselves.

Lauren Prastien: Right now, Professors Lipton and Perer are working to improve the way that clinicians interact with and even learn from the AI that is supplementing their work.

Zachary Lipton: So, this came out of conversations we've been having over the last year with, Dr. Rita Zuley who's over at Magee-Women's Hospital. Fortunately in UPMC, a lot of the patients are...they're a dominant provider in the area. So they have 40 hospitals. A lot of people are on the health plan. So all of their health is within the plan. And that's actually great from a research standpoint because it means that if they were screened in UPMC, then if something happened, they were probably treated in UPMC. The outcome was probably tracked and made it into the health record. And so they'll find out the doctor like, oh, this case that you reviewed a year ago, you called it negative, but it turned out that within a year they came in with a cancer and then they can go back and say, what did they get wrong? But we were interested in thinking, how can you use the tools of modern computer vision to not just do the main thing, which is to say, try to imitate what a radiologist does, but to help a radiologist in this context of like review and continue learning. So among other things, we imagine is the possibility of using computer vision as a way of like surfacing cases that would be interesting for review.

Lauren Prastien: An important piece of background: today, image recognition and other deep learning strategies have achieved really high accuracy in tumor diagnosis. As in, sometimes higher accuracy than actual doctors. And along with that, there is mounting anxiety over whether or not that means that doctors are just going to be replaced by these algorithms.

But remember last week, when our guests from the AFL-CIO talked to us about the areas that are more likely to see increased human-computer collaboration rather than displacement. Healthcare came up almost immediately. In the words of Craig Becker, General Counsel to the AFL-CIO, you need that human connection. I can’t imagine anything more, well, clinical, than having a computer program tell me that there’s an 87% chance I have breast cancer. It goes back to what we talked about in our very first episode this season: as these technologies continue to evolve, they are going to reinforce the importance of things like empathy, emotional intelligence and collaboration. And in that spirit, Professor Lipton is more interested in focusing on algorithms that help human clinicians become better doctors, rather than just trying to outdo them or replace them.

Zachary Lipton: How do we ultimately, to the extent that we can build a computer algorithm that sees something that a doctor might not, how do we not just sort of say, okay, we did better in this class images, but actually cycle that knowledge back? How do we help a human to sort of see perceptually what it is or what is the group of images for, you know, how we make it to that, as a function of some kind of processed by the human engaging with a model they're able to better recognize whatever is, you know, those, that subset of images for which the algorithm outperforms them.

Lauren Prastien: But this isn’t as simple as a doctor looking down at what the algorithm did and going,

Eugene Leventhal: “Oh, that is bad. I will do better next time.”

Lauren Prastien: Most doctors aren’t trained to read and write code, and remember: a lot of what goes on inside an algorithm happens in a black box.

Adam Perer: So the really interesting challenge from my perspective is how do we explain what this algorithm is doing to the doctors so they can actually get potentially better at detecting cancer by understanding what the algorithm found that they couldn't find. 

Lauren Prastien: An important step in this process is going to be making these results interpretable to the clinicians who will be learning from them, which is where Professor Perer’s expertise in creating visual interactive systems to help users make sense out of big data is going to come in handy as this project progresses.

Even in the most general sense: being able to interpret the results of healthcare algorithms, as well as understanding the larger cultural context that these algorithms may not be aware of, is really, really vital to ensuring that these algorithms are helpful and not harmful.

Look at what happened this past fall, when a major study published in the journal Science found that a popular commercial algorithm used in the American healthcare system was biased against black patients. Essentially, when presented with a white patient and a black patient who were equally sick, the algorithm would assign a lower risk score to the black patient, and, by extension, was much less likely to refer the black patient for further treatment. As a result, only 17.7% of the patients referred for additional care were black - which is especially troubling when you consider that, once the researchers located this bias and adjusted the algorithm to eliminate it, that proportion shot up to 46.5%.

So where did this go so wrong? Essentially, the algorithm was basing the risk scores it assigned to patients on their total annual healthcare costs. On the surface, this makes a lot of sense: if you have higher healthcare costs, you probably have greater healthcare needs. And in the data that the algorithm was trained on, the average black patient had roughly the same overall healthcare costs as the average white patient. But here’s the issue: even though black and white patients spent roughly the same amount per year in healthcare costs, when you compared a black patient and a white patient with the same condition, the black patient would spend $1,800 less in annual medical costs. So, the algorithm would see that and incorrectly assume that, oh, black patients spend less on healthcare, so they must actually be healthier than white patients. But when the researchers dug a little into the data the algorithm was trained on, they found that, actually, the average black patient was a lot more likely to have serious conditions like high blood pressure, anemia and diabetes. They just were a lot less likely to have received treatment - ergo, lower healthcare costs.
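To make that failure mode concrete, here is a minimal, hypothetical simulation in Python. Every number in it is invented for illustration and none of it comes from the Science study itself; it just shows how a model that predicts future cost from past cost will end up referring fewer of the sickest black patients when costs are systematically suppressed for one group:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

is_black = rng.random(n) < 0.5
# True (unobserved) illness burden: drawn from the same distribution for both groups.
illness = rng.gamma(shape=2.0, scale=1.0, size=n)

def annual_cost(illness, is_black):
    # Costs track illness, but are assumed ~$1,800 lower for black patients
    # with the same condition (the unequal-treatment gap described above).
    return 2500 * illness - 1800 * is_black + rng.normal(0, 800, size=len(illness))

prior_cost = annual_cost(illness, is_black)    # what the model is trained on
future_cost = annual_cost(illness, is_black)   # what it is asked to predict

# The "risk score" is just predicted future cost, fit from prior cost.
X = np.column_stack([np.ones(n), prior_cost])
coef, *_ = np.linalg.lstsq(X, future_cost, rcond=None)
risk_score = X @ coef

# Refer the top 10% of risk scores for extra care, then compare that group
# with the patients who are actually in the sickest 10%.
referred = risk_score >= np.quantile(risk_score, 0.9)
sickest = illness >= np.quantile(illness, 0.9)
print("black share of referrals:    ", round(float(is_black[referred].mean()), 3))
print("black share of truly sickest:", round(float(is_black[sickest].mean()), 3))
```

Run it, and the share of black patients among the referrals comes out well below their share of the truly sickest decile, which is roughly the qualitative pattern the researchers documented.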

Zachary Lipton: If you reduce everything about your model's performance to a single number, you lose a lot of information. And if you start drilling down and say, okay, well how well is this model performing, for men, for women, for, for white people, for black people?
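In practice, the drill-down Professor Lipton is describing can be as simple as reporting a model’s accuracy within each subgroup instead of a single overall number. Here is a small illustrative sketch; the labels, predictions and group names are placeholders rather than anything from an actual clinical system:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Overall accuracy plus accuracy within each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    report = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        report[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Toy example: a model that looks mediocre overall but fails one group entirely.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 1]
groups = ["group A", "group A", "group A", "group A",
          "group B", "group B", "group B", "group B"]
print(accuracy_by_group(y_true, y_pred, groups))
# -> {'overall': 0.5, 'group A': 1.0, 'group B': 0.0}
```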

Lauren Prastien: In a 2018 article in Quartz, the journalist Dave Gershgorn considered: “If AI is going to be the world’s doctor, it needs better textbooks.” In other words, most healthcare data is super male and super white. But according to Professor Perer, there are ways to overcome this discrepancy, and to borrow a little bit of healthcare jargon, it involves seeking a second opinion.

Adam Perer: One way we tried to address this in some of the systems that I build is if so for deploying a system that can predict the risk of a certain patient population, if you're putting in a new patient and want to see what their risk score is going to be, you can kind of give some feedback about how similar this patient is to what the model has been trained on. And I think giving that feedback back to them also give some ability for the end user, the doctor, to trust this risk or not because they can kind of see exactly how close is this patient, has there never been a patient like this before? And therefore, whatever the model is, gonna output just doesn't make any sense. Or is there, do we have lots and lots of patients similar to this one. So similar demographics and their age, similar history of treatments and so on. And then you kind of give it a little bit more guidance for them.
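One way that kind of second opinion could be computed (a rough sketch, not Professor Perer’s actual system) is to compare a new patient’s features against the patients the model was trained on and flag anyone who looks unlike the training data. The function name, distance measure and threshold below are all assumptions:

```python
import numpy as np

def similarity_check(new_patient, training_patients, k=5):
    """Compare a new patient's distance to the training set against what is
    typical *within* the training set, as a rough familiarity signal."""
    X = np.asarray(training_patients, dtype=float)
    x = np.asarray(new_patient, dtype=float)
    d_new = np.sort(np.linalg.norm(X - x, axis=1))[:k].mean()

    # Typical k-nearest-neighbor distance among the training patients themselves.
    ref = []
    for i in range(min(len(X), 200)):            # subsample for speed
        d = np.linalg.norm(X - X[i], axis=1)
        ref.append(np.sort(d)[1:k + 1].mean())   # skip the zero self-distance
    typical = float(np.median(ref))

    return {
        "distance_to_training_data": float(d_new),
        "typical_training_distance": typical,
        "unfamiliar_patient": bool(d_new > 2 * typical),
    }
```

An interface could then surface that flag next to the risk score, effectively telling the clinician: few patients like this one were in the training data, so weigh this score with extra caution.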

Lauren Prastien: But remember, this data reflects decisions made by humans. The data that powered that risk assessment algorithm didn’t just appear out of nowhere. And those human decisions were - and are - often plagued by the same prejudices that the algorithm itself was exhibiting. A 2015 study in JAMA Pediatrics showed that black children were less likely than white children to be administered pain medication in the emergency room while being treated for appendicitis. The next year, a study in the Proceedings of the National Academy of Sciences found that in a survey of 222 white medical students and residents, roughly half of respondents endorsed at least one false belief about biological differences between black and white people, such as the falsehood that black people naturally feel less pain. And in 2019, the American Journal of Emergency Medicine published a review of studies from 1990 to 2018 comparing the role of race and ethnicity in a patient’s likelihood to receive medication for acute pain in emergency departments, which showed that across the board, emergency room clinicians were less likely to give painkillers to nonwhite patients than they were to white patients.

Like Professor Lipton said at the beginning of our interview, machine learning can’t tell us how to make structural changes to the healthcare system. AI isn’t going to take bias out of our society. We have to do that ourselves.

And like we’ve been saying, a lot of this technology is going to make us have to evaluate what our biases are, what our values are and what our standards are. We’ll talk a little more about what that looks like in just a moment.

[Music] 

In that article for Quartz I mentioned earlier, Dave Gershgorn posits a really interesting dilemma, which Eugene is going to read for us now:

Eugene Leventhal: Imagine there was a simple test to see whether you were developing Alzheimer’s disease. You would look at a picture and describe it, software would assess the way you spoke, and based on your answer, tell you whether or not you had early-stage Alzheimer’s. It would be quick, easy, and over 90% accurate - except for you, it doesn’t work.

That might be because you’re from Africa. Or because you’re from India, or China, or Michigan. Imagine most of the world is getting healthier because of some new technology, but you’re getting left behind.

Lauren Prastien: Yeah, it’s just a classic trolley problem. But it’s really easy to take a purely quantitative approach to the trolley problem until it’s you or someone you love who’s sitting on the tracks. And the trolley problem isn’t the only kind of complicated ethical question that the use of tech in healthcare - be it algorithms, robotics, telemedicine, you name it - is going to bring up. Some of them are going to be pretty high stakes. And for some of them, the stakes will be much lower. And like we learned in our episode on fairness, these questions usually don’t have cut-and-dried correct answers. The answers usually have more to do with our values and standards as a society.

So, we brought in an ethicist, who is fortunately much more prepared to take on these questions than we are.

David Danks: And so I think it’s such an exciting and powerful area because healthcare touches every one of us directly in the form of our own health and in the form of the health of our loved ones. But also indirectly because it is such a major sector of our economy and our lives.

Lauren Prastien: That’s David Danks, the Chief Ethicist here at the Block Center. As you may remember from our episode on fairness earlier in the season, he’s a professor of philosophy and psychology here at Carnegie Mellon, where his work looks at the ethical and policy implications of autonomous systems and machine learning.

David Danks: Well, I think what we have come to really see is the ways in which health care technologies, especially healthcare, AI in healthcare, robotics, they're um, by their nature, largely tools. They are tools that can assist a doctor and they can assist a doctor by augmenting their performance or freeing up their time for other tasks such as spending more time with their patients or it can make them even more efficient and help them to optimize the length of time they spend with a patient such that they can see twice as many people in a day. And I think one of the things we have to recognize as the ways in which, as individuals, this technology is going to start to mediate many of our interactions with doctors. And it can mediate for the better, it can mediate for the worse.

Lauren Prastien: Like Professor Danks mentioned when he joined us back in episode 4, a lot of the decisions being made about emerging technologies right now pertain to the ethical trade-offs inherent to how they’re implemented and regulated. And that’s absolutely the case in healthcare.

David Danks: So let me give a concrete example. I mean I already mentioned it might make it so a doctor could see twice as many people rather than spending twice as much time with each patient and we might have this immediate reaction. I think most people have the immediate reaction that it's of course awful that a doctor has to see twice as many people, except if we think about the ways in which certain communities and certain groups are really underserved from a medical care point of view, maybe the thing that we should be doing as a group is actually trying to have doctors see more people, that there's a trade off to be made here. Do we have deeper interactions with a select few, those who already have access to healthcare, or do we want to broaden the pool of people who have access to the incredibly high quality healthcare that we have in certain parts of the United States and other parts of the industrialized world?

Lauren Prastien: Broadening access could take on a lot of different forms. As an example, like we said in our episode on staying connected, more than 7,000 regions in the United States have a shortage of healthcare professionals, and 60% of these are in rural areas. So, there is a side to this that could benefit a lot of people, if, say, those doctors had time to take on patients via telemedicine. But like Professor Danks said, we are going to have to decide: is this what we want the future of healthcare to look like? And does it align with our values as a society?

To answer questions like these, Professor Danks emphasizes the importance of taking on a more holistic approach.

David Danks: And the only way to really tackle the challenge of what kinds of healthcare technologies we should and do want is to adopt this kind of a multidisciplinary perspective that requires deep engagement with the technology because you have to understand the ways in which the doctor's time is being freed up or their performance can be augmented. You have to understand the policy and the regulations around healthcare. What is it permissible for technology to do? What is, what do we have to know about a technology before a doctor is going to be allowed to use it? You have to understand sociology because you have to understand the ways in which people interact with one another in societies. You have to understand economics because of course that's going to be a major driver of a lot of the deployment of these technologies. And you have to understand ethics. What are the things that we value and how do we realize those values through our actions, whether individually or as a community?

Lauren Prastien: If you’re sitting here asking, who actually knows all of that? Well, his name is David Danks, there is only one of him, and we keep him pretty busy. But in all seriousness: this is why convening conversations between academics, technologists, policymakers and constituents is so crucial. All of these perspectives have something essential to offer, and not having them represented has really serious consequences, from widening the gaps in who benefits from these technologies to actively physically harming people.

But on an individual level, Professor Danks says just a basic understanding of what technology is actually out there and what that technology is actually capable of doing is a pretty vital place to start.

David Danks: Well, I think one of the first educational elements is having an understanding of what the technology is, but perhaps more importantly what it isn't.

Lauren Prastien: Because, hey, you can’t really regulate what you don’t understand. Or, at least, you really shouldn’t. And beyond that, knowing what these technologies are capable of will help to guide us in where their implementation will be most useful and where it really won’t.

David Danks: AI and robotic systems are incredibly good at handling relatively speaking, narrow tasks and in particular tasks where we have a clear idea of success. So if I'm trying to diagnose somebody's illness, there's a clear understanding of what it means to be successful with that. I get the diagnosis right. I actually figure out what is wrong with this individual at this point in time. But if we think about what it means to have a successful relationship with your doctor, that is much less clear what counts as success. It's something along the lines of the doctor has my best healthcare interests at heart, or maybe my doctor understands what matters to me and is able to help me make healthcare decisions that support what matters to me. That if I'm a world-class violinist that maybe I shouldn't take a medication that causes hand tremors. Even if that is in some sense the medically right thing to do, maybe we should look for alternative treatments. And I think those are exactly the kinds of nuanced context-sensitive, value-laden discussions and decisions where AI currently struggles quite a bit.

Lauren Prastien: And so how does this maybe guide our understanding of how to approach other sectors that are seeing the integration of AI?

David Danks: So I think one of the things that people need to understand is that when they enter into, whether it's healthcare, service industry, transportation, that there are certain things that we humans do that we really don't know how to automate away yet. And so what we should be arguing for, lobbying for, seeking to bring about through our economic power as consumers, are changes to people's jobs that prioritize those things that really only a human can and should be doing right now, and allowing technology to be what technology is very, very good at. Uh, if I have to add a bunch of numbers together, I'd much rather have a computer do it than me. Um, I think by and large automatic transmissions have been a good thing for transportation, um, rather than having to use a stick shift all the time. But that's because we're letting the machine, the computer do what it's good at and reserving things like a decision about where to drive for us humans.

Lauren Prastien: If we take an informed approach and remain mindful of some of the risks we’ve discussed here, incorporating artificial intelligence into healthcare has the potential to streamline practices, fill staffing gaps, help doctors improve their diagnostic practices and, perhaps most importantly, save lives. But without the ability to access the data necessary to power these algorithms, we might not see much of this happen.

Stay with us.

[Music]

Lauren Prastien: Like we said earlier, the data powering healthcare algorithms is often not all that diverse. And if you’re sitting here wondering, well, wait a second, why can’t the people making these algorithms just get more diverse datasets - so that we don’t end up with a model for spotting cancerous moles that can’t recognize moles on darker skin - well, it’s not quite that simple. To explain, here’s Professor Perer:

Adam Perer: When this data was originally designed to be stored somewhere, it wasn't actually designed to be later leveraged, you know, decades later for machine learning. So they're really just data dumps of stuff that's stored there. Maybe they thought, okay, we have to keep it due to regulation, but we never really forced you to what the use cases would be. And now when you're trying to get information out of there, there is really, really hard limits. You know, we've worked with healthcare institutions where, you know, the doctors, the clinical research we're working with really want us to share their data. They have a few hundred patients, maybe even something small like that. They want us to give us their data so we can help them analyze it. But again, it takes months and months and months of technologies figuring out the right query as they get out of their systems into it. And so really, you know, I'm hopeful that that new systems in place will help speed up that process, but right now it is very, very, very slow.

Lauren Prastien: So why do machine learning algorithms need all this data in the first place? Well, it’s because machine learning doesn’t look like human learning. We talked to Tom Mitchell, who you may remember from our last episode is a professor of machine learning at Carnegie Mellon and the Lead Technologist here at the Block Center. He attributed this discrepancy to something called Polanyi’s Paradox. Essentially: we know more than we can tell. Like in the words of Freud, tell me about your mother.

Tom Mitchell: You can recognize your mother, but you cannot write down a recipe and give it to me so that I can recognize your mother. You can tie your shoes. But many people cannot reference, cannot describe how to tie shoes despite the fact they can do it. So there's a lot of things that we can do instinctively, but we don't have sort of conscious, deliberate access to the procedure that we are using. And, the consequence, the importance of this is that if we're building AI systems to help us make decisions, then there are many decisions that we don't know how, that we can make, but we don't know how to describe like recognizing our mother to give a trivial example. Now within the implication of that is well, either we'll have AI systems that just won't be able to make those kinds of decisions because we don't know how to tell the computer how we do it. Or we use machine learning, which is in fact a big trend these days where instead of telling the system the recipe for how to make the decision, we train it, we show it examples. And in fact you can show examples of photographs that do include your mother and do not include your mother to a computer. And it's a very effective way to train a system to recognize faces.

Lauren Prastien: So, in the case of that cancer detection algorithm, you’d be able to show that system pictures of people with darker skin tones with cancerous moles, pictures of people with darker skin tones with moles that aren’t cancerous, pictures of people with darker skin tones with no moles at all, you get the idea, until the algorithm is able to identify the presence of a cancerous mole on darker skin tones with the same level of competence that it can on lighter skin tones. But again, a lot of that data wasn’t designed to be utilized this way when it was first gathered.
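For anyone curious what “showing it examples” looks like in code, here is a generic, hypothetical sketch of the standard transfer-learning pattern, not the researchers’ actual pipeline. It assumes a made-up folder like moles/train/ with one subfolder per label, filled with photos that deliberately span a range of skin tones:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Labeled example images, e.g. moles/train/cancerous/... and moles/train/benign/...
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("moles/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images and retrain only its
# final layer to separate our labels.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is simply that the model never gets a “recipe” for what a cancerous mole looks like; it only gets labeled examples, which is exactly why the range of skin tones in those folders matters so much.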

Tom Mitchell: For the last 15, 20 years in the U.S. we've been building up a larger and larger collection of medical records online. That medical data is scattered around the country in different hospitals. It's not being shared partly because of well-founded privacy reasons, partly because of profit motives of the database companies that sell the software that holds those medical records. But there is a perfect example of how we as a society together with our policy makers could change the course in a way that, I think at no big cost could significantly improve our healthcare.

Lauren Prastien: So what would that look like?

Tom Mitchell: We should be working toward a national data resource for medical care. And the weird thing is we almost had it because all of us have electronic medical records in our local hospitals. What we don't have is the national data resource. Instead, we have a diverse set of incompatible, uh, data sets in different hospitals. They're diverse partly because there are several vendors of software that store those medical records and their profit motive involves keeping proprietary their data formats. They have no incentive to share the data with their competitors. And so, step number one is we need, uh, some policy making and regulation making at the national level that says, number one, let's use a consistent data format to represent medical records. Number two, let's share it in a privacy preserving way so that at the national scale we can take advantage of the very important subtle statistical trends that are in that data that we can't see today.

Lauren Prastien: If we’re able to build a national data resource like this, we could ensure that there are protections in place to keep that data secure and anonymized. And more than that, we could start to do some pretty cool stuff with it:

Tom Mitchell: Imagine if we could instead have your cell phone ring tomorrow morning, if it turns out that today I show up in an emergency room with an infectious disease and your phone calls you in the morning and says, somebody you were in close proximity with yesterday has this disease, here are the symptoms to watch out for. If you experience any of those, call your doctor. That simple alert and warning would dampen the spread of these infectious diseases significantly.

What would it take to do that? All it would take would be for your phone carrier and, and other retailers who have geolocation data about you to share the trace of where you have been with a third party who also has access to the emergency room data. There are obviously privacy issues here, although the data is already being captured, sharing it is a new privacy issue, but with the right kind of infrastructure with a trusted third party being the only group that has access to that combined data and for them to then that third party could provide the service and we would get the benefit. Again, at very low cost. The interesting thing about leveraging data that's already online is often it's not a big government expense. It's just a matter of organizing ourselves in a way that we haven't historically thought about doing.
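As a purely illustrative sketch of that matching step (with the privacy machinery and the trusted third party Professor Mitchell describes left out entirely), here is the core “who was nearby?” computation. The data structures, distance cutoff and time window are all assumptions:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    person_id: str
    timestamp: float   # seconds since epoch
    lat: float
    lon: float

def meters_between(a: Ping, b: Ping) -> float:
    """Great-circle (haversine) distance between two pings, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def people_to_notify(case_pings, all_pings, max_meters=5.0, max_seconds=600):
    """IDs of everyone who came within a few meters of the reported case,
    within ten minutes of one of the case's location pings."""
    contacts = set()
    for c in case_pings:
        for p in all_pings:
            if p.person_id == c.person_id:
                continue
            if (abs(p.timestamp - c.timestamp) <= max_seconds
                    and meters_between(p, c) <= max_meters):
                contacts.add(p.person_id)
    return contacts
```

The hard parts, of course, are not these twenty-odd lines; they are getting the carriers’ and hospitals’ data into one place, under one trusted steward, with real privacy protections.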

Lauren Prastien: So, Eugene, what have we learned today and how is this going to apply to other sectors moving forward?

Eugene Leventhal: Despite how some headlines can make it seem as though you’ll be getting your next flu shot from a robot, that’s not something you have to worry about just yet. Given that machine learning is good at making predictions in pretty narrow areas, it’s more important to focus on how doctors could use such algorithms to help improve patient outcomes.

We heard from Professor David Danks about the importance of having a baseline education to be able to better regulate. There are so many complex factors at play that it becomes very challenging to have a single, clear-cut regulation that could solve all of a policymaker’s problems and concerns. The reality is that there needs to be a constant cycle of education on new technologies, working with technologists and those impacted by the use of the technology, and carefully assessing where tech can be most helpful without harming individuals.

Lauren Prastien: For our tenth and final episode of the first season of Consequential, we’ll be doing a policy recap based on all of the discussions we’ve had this season.

But until then, I’m Lauren Prastien,

Eugene Leventhal: and I’m Eugene Leventhal,

Lauren Prastien: and this was Consequential. We’ll see you later this week for episode 10.  

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter. You can also email us at consequential@cmu.edu.

This episode of Consequential was written by Lauren Prastien and was produced by Eugene Leventhal with support from our intern, Ivan Plazacic. Our executive producers are Shryansh Mehta and Jon Nehlsen.

This episode references the 2019 research article “Dissecting racial bias in an algorithm used to manage the health of populations” by Obermeyer et al. in Science, Dave Gershgorn’s 2018 article in Quartz “If AI is going to be the world’s doctor, it needs better textbooks,” the 2015 study “Racial Disparities in Pain Management of Children With Appendicitis in Emergency Departments” by Goyal et al. in JAMA Pediatrics, the 2016 study “Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites” by Hoffman et al. in Proceedings of the National Academy of Sciences, and the 2019 literature review “Racial and ethnic disparities in the management of acute pain in US emergency departments: Meta-analysis and systematic review” by Lee et al. in the American Journal of Emergency Medicine.

Eugene Leventhal: Right now, a lot of the media coverage on healthcare technology falls into two categories: feel-good success story or absolute nightmare.

[MUSIC]

If you’re wondering why this sounds so familiar and why it’s my voice you’re hearing,

Well folks, I’ve finally acted on my plan to overthrow Lauren as the host so we’re doing things differently today.

[MUSIC]

Lauren Prastien: Really?

Eugene Leventhal: Well, maybe not that differently. But we are taking a step back from the various deep dives that we’ve been taking over the past nine weeks in order to better understand the specific policy suggestions that came up throughout our first season.

Lauren Prastien: Imagine you receive the following phone call:

Eugene Leventhal: Hello! We’re calling to let you know that you’ve been selected to come up with solutions for the United States in the context of automation, jobs, training, education, and technology as a whole. Oh, and that’s in addition to making sure that everyone has access to basic health and safety services, that our economy runs, (START FADE OUT) that no fraudulent activity takes place, that we have good relations with as many of the countries around the world as possible, that…

Lauren Prastien: And while, okay, policymakers don’t get to where they are because they got one random phone call, it is true that the multitude of issues that they’re currently dealing with are both really wide in scope and also really deeply interconnected. So where can policy begin to tackle some of the concepts and challenges that we’ve been talking about for the past nine weeks or so?

This is Consequential: what’s significant, what’s coming, and what we can do about it. I’m Lauren Prastien and though I’ve been your main tour guide along the journey of season one, I’m leaving you in the very capable hands of your other host.

Eugene Leventhal: Eugene Leventhal, that’s me! On this episode, I will walk you through the relevant policy takeaways from this season. But before we do, Lauren, can you remind us of how we got to this point?

Lauren Prastien: So over the past nine weeks, we’ve talked about the human side of technological change. We started on the question that seems to be driving a lot of the dialogue on artificial intelligence, enhanced automation and the future of work: are robots going to disrupt everything and essentially render humanity irrelevant? And the answer was, well, no.

There are some things that technology will never be able to replicate, and if anything, these technologies are going to make the things that make us innately human all the more important. But that doesn’t mean we shouldn’t do what we can to protect the people that these technologies might displace. From there, we did a deep-dive into the algorithms that have become more pervasive in our everyday lives. We looked at some of the issues surrounding the rising prevalence of these algorithms, from our rights as the individuals providing the data to power these algorithms to greater issues of bias, privacy, fairness and interpretability.

From there, we looked at how these technologies are going to impact both the ways we learn and the ways we work, from making education more accessible to changing both our workforce and our workplace. We talked about some of the impediments to ensuring that everyone benefits from this technology, from access to reliable internet to access to reskilling opportunities, and some of the policy interventions in the works right now to try to close those divides, like promoting broadband access in underserved areas and the implementation of wage insurance programs.

And last week, we saw how all these issues converged in one sector in particular - healthcare - and how decisions made about that sector might have larger implications for the ways we regulate the infiltration of emerging technologies into other sectors. All told, we learned a lot, and when it comes to synthesizing the information we covered and thinking about how policymakers can begin to tackle some of these issues, it can be hard to figure out where to start.

Fortunately, Eugene has made us a policy roadmap. So stay with us.

[MUSIC]

Eugene Leventhal: We are going to break down this policy episode into three parts:

Part one: The human factor. Yes, this is a podcast about technology and policy. But we can’t look at either without first talking about people.

Part two: Education and regulation. This will relate to some foundational policies to make sure stakeholders’ rights are protected, as well as keeping individuals informed about the technologies that impact their everyday lives.

Part three: New possibilities, which covers some new ideas that would enable more collaborative efforts on the part of companies, policymakers and their constituents to ensure that technologies are effectively serving the individuals they’re intended for.

Stay with us as we’ll be exploring where policymakers can start with policy in relation to AI and emerging technologies in general.

[MUSIC]

Eugene Leventhal: When we began this season, we covered the ways in which we’ve seen technological disruption play out over the last century and the generally changing nature of intelligence. The reason we began here was to set the stage for the very personal elements that are inevitably part of this greater narrative of how AI will impact humanity and what we can do about it. Because like we said, as these technologies are rolled out, they’re going to make the things that make us innately human all the more important.

More than that, any technological innovation or legislation about technology needs to prioritize the interests of humanity. We turn to Professor Molly Wright Steenson for a reminder as to why.

Molly Wright Steenson: The way that decisions have been made by AI researchers or technologists who work on AI related technologies - it's decisions that they make about the design of a thing or a product or a service or something else. Those design decisions are felt by humans.

Eugene Leventhal: Because humans feel the impacts of those design decisions, those choices have stakes. And like Professor David Danks told us in our episode on fairness, those decisions will inevitably involve some pretty serious tradeoffs.

David Danks: Many of the choices we're making when we develop technology and we deploy it in particular communities involve tradeoffs and those trade offs are not technological in nature. They are not necessarily political in nature, they're ethical in nature.

Eugene Leventhal: It’s important to recognize that having a thorough plan of action to prepare for the impacts of technological change starts with recognizing that this discussion is about much more than technology. And it’s not just how individuals are affected, for better and for worse, but also about how communities are feeling the impacts of these developments. Which is why community building is a crucial element here, as Karen Lightman reminded us in our episode on Staying Connected.

Karen Lightman: And so I think again, we need to have that user perspective and we need to understand and, and to do that, you need to be in the community, right. And you need to connect with the community and understand their side.

Eugene Leventhal: Increasing the engagement is only part of the challenge. Just having more interactions is much easier than building deeper levels of trust or connections. This comes back to the idea of being present in the communities impacted by these technologies and interacting with the full range of constituents.

The topic of building community goes hand in hand with the topic of diversity. Coming back to Professor Anita Woolley from the first episode,

Anita Woolley: So collective intelligence is the ability of a group to work together over a series of problems. So, and we really developed it to complement the idea of individual intelligence, which has historically been measured as the ability of an individual to solve a wide range of problems.

Eugene Leventhal: Over this season, we’ve seen what happens when technological interventions don’t take into account certain populations - remember the Amazon hiring algorithm that favored men because its data was trained on male resumes, or the automatic soap dispensers whose sensors were well trained to detect white hands, but not so much for people of color. And we’ve seen what happens when a community impacted by a technological change isn’t kept in the loop about what’s going on, from the individuals impacted by Flint’s pipe detection algorithm to the parents of students in Boston Public Schools affected by the school start time algorithm.

The fact is this: technology is but a tool. Given that these tools are changing at an ever-increasing rate, it’s all the more important to make a more concerted effort to ensure that we are doing all we can to keep everyone in the loop so that they can make informed decisions about how those tools impact their day-to-day lives. By committing to engage with communities, showing that commitment through long-term presence and interaction, and bringing people from different backgrounds together, policymakers can set the tone for how tech policy should and will look and to make sure that it will be in the best interest of the people it impacts.

We heard this sentiment echoed in the context of workers from Craig Becker, the General Counsel of the AFL-CIO.

Craig Becker: If you want workers to embrace change and play a positive role in innovation, they have to have a certain degree of security. They can't fear that if they assist in innovation, it's gonna lead to their loss of jobs or the downgrading of their skills or degradation of their work.

Eugene Leventhal: Which brings us back to the idea of service design mentioned by Professor Molly Wright Steenson back in episode four.

Molly Wright Steenson: Okay, sure. Um, there's a design discipline called service design, um, which is considering the multiple stakeholders in a, in a design problem, right?...There are whole lot of different stakeholders. There are people who will feel the impact of whatever is designed or built. And then there’s a question of how do you design for that?

Eugene Leventhal: And like Professor Steenson mentioned in that episode, taking into account the very human factor of these technologies and how they’re going to be implemented can’t be something decided in the 11th hour. To take Lauren’s personal favorite quote from the first season:

Molly Wright Steenson: I think that if you want to attach an ethicist to a project or a startup, then what you’re going to be doing is it’s like, it’s like attaching a post it note to it or an attractive hat. It’s gonna fall off.

Eugene Leventhal: Or if we’re going to take the design metaphor a little further, think of keeping the community in the loop not just as the foundation upon which a building gets created, but as the entire plan for the building in the first place. Because think of it this way: a shaky foundation can be reinforced and propped up in some fashion. But deciding that a massive construction project doesn’t need a project manager or a plan - no matter how good your materials are - virtually guarantees that it is simply a matter of time until the structure comes crumbling down. We truly believe that not starting with a human-centered approach that focuses on community and diversity sets us up as a society for one inevitable outcome: failure. And when it comes to topics such as limiting the negative impacts of AI, failure is just not an option. Because again, these are people we’re talking about.

But keeping the people impacted by a given innovation in mind isn’t just about the design of a technology, it’s also about education and regulation.

[MUSIC]

Eugene Leventhal: Today, algorithms aren’t really a thing you can just opt out of. Just ask Wharton Professor Kartik Hosanagar:

Kartik Hosanagar: Algorithms are all around us and sometimes we don't realize it or recognize it.

If you look at algorithms used in making treatment decisions or making loan approval decisions, recruiting decisions, these are socially significant decisions and if they have biases or they go wrong in other ways they have huge social and financial consequences as well.

Eugene Leventhal: Though algorithms are intensely pervasive in our everyday lives, many people are not aware of the extent. Which is why it’s so important for policymakers and for constituents alike to understand where algorithms are being used and for what purpose. An algorithm may have led you to this podcast, be it by pushing it to the top of a social media timeline, sending you a targeted ad or placing it in your recommendations based on other podcasts you’ve listened to. So it’s crucial for people to be able to understand where they’re interacting with algorithms, as well as how algorithms are impacting certain aspects of their lives. This is something that could potentially be explored as an open-source type of solution – imagine a Wikipedia of sorts where anyone could enter a company or application name and find out all of the ways they’re using algorithmic decision-making.

Once we have a better understanding of where algorithms are being used, we can work towards gaining a more intricate knowledge of how these systems work overall. It’s great to know that both Netflix and YouTube use algorithms. However, if one of their algorithms is keeping people binging longer while the other is driving people towards more incendiary content, or if there’s one algorithm doing the former with the unintended consequence of the latter, it would be in our best interest to both know that and to understand why this is happening in the first place.

Now, we understand that the target of everyone on Earth having a degree in machine learning is not a realistic one, and that’s not what we’re advocating for. You don’t need to be able to code or know exactly how algorithms are written to have an opinion on where they are and are not appropriate to deploy. Think of this in the context of literacy: you don’t need to have read Infinite Jest to demonstrate that you know how to read.

Lauren Prastien: What a relief.

Eugene Leventhal: Not everyone needs to break down dense texts on artificial intelligence to be able to discuss technology with some degree of confidence or competence. The existence of additional complexity has never stopped us from integrating the basics of things that matter, like literature or mathematics, into curricula. Just as we have integrated skills like sending professional emails and using search engines appropriately into our education system, we can update those educational frameworks to include a basic sense of algorithmic literacy. Aka: what are algorithms, how do they gain access to your data, and how do they then use that data? So while not everyone will need to have an advanced education in computer science, it is possible for us to have a common base of understanding and a shared lexicon. As Professor Hosanagar mentioned in our episode on the black box:

Kartik Hosanagar: But at a high level, I think we all need to, it's sort of like, you know, we used to talk about digital literacy 10, 15 years back and basic computer literacy and knowledge of the Internet. I think in today's world we need to be talking about, uh, basic data and algorithm literacy.

Eugene Leventhal: Knowing how these algorithms work is important for two reasons: first, we’ll then know how to advocate and protect the rights of individuals, and second, we’ll be able to make more informed decisions about how communities choose to implement and utilize these algorithms.

To that first point: Once individuals know when and how their data is being used, they’ll be able to make judgments about what their values are in terms of protections. From a regulatory side, that might mean thinking of new ways to conceptualize and manage the role of data subjects, as Professor Tae Wan Kim explained in our third episode:

Tae Wan Kim: Data subjects can be considered as a special kind of investors, like shareholders.

Eugene Leventhal: In episode 3, we looked at how the legal precedents for data subject rights both do and don't effectively capture our current technological and social landscape. And to be fair, this landscape is changing really quickly, which means that the individuals responsible for determining how to regulate it may, you know, need a little help. I promise, this isn’t just a shameless plug for the Block Center, here for all of your tech policy needs.

But we do want to stress the importance of both tapping academics proficient in technology, ethics, design, policy, you name it, and the value of forming partnerships between universities, companies and government. In our very resource- and time-constrained reality, though, we have to get creative about how to get policymakers exposed to more people with the required expertise. Especially as the pace of innovation is increasing fairly sharply: it took forty years after the first microelectromechanical automotive airbag system was patented for federal legislation to mandate the installation of airbags in all new vehicles. We might not want to wait forty years for regulations regarding the safety of autonomous vehicles to be implemented.

Providing a pathway for people to have more explicit rights over how their data is being used and monetized is great, though it does not put limits on when companies are able to deploy new algorithms. This brings us to the idea of needing a much more moderated and regulated expansion of algorithms, and to our second point about the rights of communities impacted by these algorithms. Professor Danks tells us more:

David Danks: I think one set of ethical issues that’s really emerged in the last year or two is a growing realization that we can’t have our cake and eat it too. And so we really have to start as people who build, deploy and regulate technology to think about the trade offs that we are imposing on the communities around us and trying to really engage with those communities to figure out whether the trade offs we’re making are the right ones for them rather than paternalistically presupposing that we’re doing the right thing.

Eugene Leventhal: And part of that means putting these parties in dialogue. As Professor Jason Hong said in our episode on fairness:

Jason Hong: There’s going to be the people who are developing the systems, the people might be affected by the systems, the people who might be regulating the systems and so on. And um, you have to make sure that all of those people and all those groups actually have their incentives aligned correctly so that we can have much better kinds of outcomes.

Eugene Leventhal: Which, again, drives home why this kind of education and engagement is so important. But, we can’t forget that just focusing on STEM won’t solve the fundamental tension between wanting to create new technologies and making sure that those developing and using these new solutions have the basic knowledge they need to deal with the impacts of the tech. That’s why the educational system has to prepare its students not only for the technologies themselves, but for the ways that these technologies will change work and shift the emphasis placed on certain skill sets. Dare we say, the consequences. As Douglas Lee, President of Waynesburg University, said in our episode on education:

Douglas Lee: We have to look at ways to, to help them, um, develop those skills necessary to succeed.

Eugene Leventhal: When it comes to rethinking education, it's not only curricula that are changing. The technology used in classrooms is another crucial area that needs to be carefully examined. Back in episode six, we heard from Professor Pedro Ferreira on some of his work relating to experiments with tech in the classroom.

Pedro Ferreira: So you can actually introduce technology into the classroom in a positive way. And also in a negative way. It depends on how you actually combine the use of the technology with what you want to teach.

Eugene Leventhal: Another perspective on why we need to change up how we’re approaching education came from Professor Oliver Hahl, relating to our current system producing many overqualified workers. 

Oliver Hahl: What we're saying is there's even more people out there who are being rejected for being overqualified. So even conditional on making the job, they, they seem to be disappointed in the job if they're overqualified.

Eugene Leventhal: All said, having that deeper understanding of how algorithms function, understanding where they’re being integrated, and looking at the larger consequences of technological change will help us tackle a really big question, namely,

Zachary Lipton: What is it that we’re regulating exactly? Model, application, something different? 

Eugene Leventhal: In our last episode, Professor Zachary Lipton brought up this thorny but important question. If we don’t understand how the outcomes of these algorithms are being generated, how much care and attention can be provided to dealing with potential outcomes?

In our episode on the education bubble, Professor Lee Branstetter proposed an FDA-like system for regulating the roll-out of ed tech:

Lee Branstetter: And so I think part of the solution, um, is for government and government funded entities to do for ed tech what the FDA does for drugs: submit it to scientific tests, rigorous scientific tests, um, on, you know, human subjects, in this case students, and be able to help people figure out what works and what doesn't.

Eugene Leventhal: This idea speaks to one of the major tensions that policymakers face in terms of tech - how to support innovation without sacrificing the well-being of individuals. While focusing on an FDA-style testing approach may work well for education, its deployment in, say, manufacturing could help worker safety but would not do much in terms of the impact of automation overall. To find an option for protecting workers, we turn again to Professor Branstetter.

Lee Branstetter: The problem we're finding is that workers go through a disruptive experience generated by technology or globalization and they spent decades honing a set of skills that the market no longer demands. So they have no problem getting another job, but the new job pays less than half what the old job paid. We don't have any way of insuring against that.

I mean, the long term income losses we're talking about are on the same order of magnitude as if somebody's house burned down. Now, any of these workers can go on the Internet and insure themselves against a house fire quickly, cheaply, and easily. They cannot insure themselves against the obsolescence of their skills, but it would be pretty easy and straightforward to create this kind of insurance. And I would view this as being complementary to training.

Eugene Leventhal: To recap: this portion focused on personal rights and protections. We started by exploring the idea of data rights, specifically viewing data subjects as investors, and the fact that we need to have a measured approach to rolling out new technologies. With that as the backdrop, we explored a variety of potential policy responses, from requiring safety demonstrations to algorithmic reporting to audits to creating an agency to help assess new tools before they make their way into classrooms. Finally, we covered the idea of wage insurance as a meaningful way to help displaced workers.

In our final section, we’ll talk about new possibilities. If we’re bringing everyone to the table and we’re protecting the rights of the individuals impacted by tech, what kinds of positive innovations can we develop? We’ll discuss a few in just a moment.

[MUSIC]

Eugene Leventhal: Now that we’ve focused on the importance of personal rights and protections in the digital space, we can look at a few final ideas that we came across in preparing this season.

The first two ideas come from Professor Tom Mitchell, the first of which relates to standardized data formats.

Tom Mitchell: We need, uh, some policy making and regulation making at the national level that says, number one, let's use a consistent data format to represent medical records. Number two, let's share it in a privacy preserving way so that at the national scale we can take advantage of the very important subtle statistical trends that are in that data that we can't see today.

Eugene Leventhal: If we’re able to standardize these data formats and share information in a privacy-preserving way, we’ll be able to develop useful and potentially life-saving interventions while still maintaining public trust. It's important to stress that that's no easy task. But let's turn back to Professor Mitchell to hear what we could gain from doing so.

Tom Mitchell: What if we combined the emergency room admissions data with the GPS data from your phone? Then if you think about how we currently respond to new infectious diseases like h a n 23, whatever the next infectious disease will be called, currently we respond by trying to, uh, find cases of it and, uh, figuring out what's the source, and then we warn people publicly and so forth. Imagine if we could instead have your cell phone ring, um, tomorrow morning. If it turns out that today I show up in an emergency room with this infectious disease and your phone calls you in the morning and says, somebody you were in close proximity with yesterday has this disease, here are the symptoms to watch out for.

Eugene Leventhal: While the beneficial use cases do sound exciting, with the way things are today, many people are more wary than optimistic. And reasonably so. That’s why we started where we did - with focusing on individuals, bringing together and building communities, and making sure that they are diverse and represent the entire set of stakeholders. By doing so, we can build networks of trust where people might be more willing to explore solutions like these, especially once they are provided the education and training to really make the most of these new technologies.

In order for that to happen, we need to pay more serious attention to protecting individuals data and digital rights to make sure that people don’t just understand these technologies, but that they also personally stand to benefit from them. And so we turn to our final recommendation, which we saved for last because it’s meant for more companies more than the government. Of course, policymakers can incentivize companies to support such programs and run versions themselves, but we turn to Professor Hong for the idea itself,

Jason Hong: So what we're trying to do with bias bounty is can we try to incentivize lots of people to try to find potential bugs inside of these machine learning algorithms.

Eugene Leventhal: By following a model similar to cybersecurity related bounties, companies can direct resources towards mitigating bias-related issues. So you, whoever you are, can play a more direct role in the technologies that impact your life, by keeping them in check.

Because ultimately, we all play a role in how the changing technological landscape is going to impact our lives, from the ways we interact with each other to how we’ll learn and work and get around. So Lauren, where does that leave us?

Lauren Prastien: Back when we first introduced this podcast, we did so with the very frightening forecast that in just 45 years, there’s a 50% chance that AI will outperform humans in all tasks, from driving a truck to performing surgery to writing a bestselling novel. Which on the surface sounds alarming, but let me reiterate: those odds - 50% - those are the same odds as a coin toss.

Here’s the thing about a coin toss: it relies on chance and a little bit of physics. That’s it. The future is a little more complicated than that.

Like we’ve been saying this whole season, this isn’t a matter of chance. We aren’t flipping a coin to decide whether or not the robots are going to take over.

So who chooses what the future is going to look like? The short answer: all of us. And what actions do we need to take now - as policymakers, as technologists, as data subjects - to make sure that we build the kind of future that we want to live in? The long answer: you’ve got ten episodes of content to get you started.

Eugene Leventhal: I’m Eugene Leventhal

Lauren Prastien: and I’m Lauren Prastien,

Eugene Leventhal: and this was Consequential. We want to take a moment to thank you for joining us for season one of our journey toward better understanding the impacts that technology will have on society. Thank you for listening and for sharing, and we look forward to continuing the conversation next year.

As we’re getting ready for season two next year, we’d love to know about the tech-related topics that are on your mind. Please feel free to reach out - we’re @CMUBlockCenter on Twitter and you can email us at consequential@cmu.edu. If you liked what you’ve heard throughout the season, let us know what you enjoyed in a review on iTunes.

Consequential was recorded at the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center.

This episode of Consequential was written by Eugene Leventhal, with editorial support from Lauren Prastien. It was produced by Eugene Leventhal and our intern, Ivan Plazacic. Our executive producers are Shryansh Mehta and Jon Nehlsen.

Eugene Leventhal: Hello, dear listeners. We hope that you’re staying safe during these unusual and trying times of social distancing and self-quarantining. From staying home, to waiting in line to get into supermarkets that are out of toilet paper, to worrying more about those among us with health issues, life has definitely changed of late.

Lauren Prastien: Before Carnegie Mellon went remote, we were getting ready to release our second season of Consequential. But as we set up recording studios in our closets to put the finishing touches on season two, we couldn’t help but consider what so many of our episodes now meant in light of COVID-19. And so we had an idea.

Eugene Leventhal: Over the past few weeks, we’ve conducted a ton of new interviews - all remotely, don’t worry - about the intersection of technology, society and COVID-19.

Lauren Prastien: We talked to a lot of interesting people, like a professor who is figuring out how to teach and produce theater in the age of Zoom meetings, and an infectious disease epidemiologist who is using data analytics to improve pandemic responses.

Eugene Leventhal: And we’ve decided to put our new interviews in conversation with existing season 2 interviews, to launch a short mini-season related to some more timely topics. This mini-season will explore three main areas: the use of large-scale public health data, remote education, and the future of work.

Lauren Prastien: We might have a few more episodes beyond that, but this is something we’re figuring out as we go along. Our first episode on public health data analytics will be out on April 8th, and from there, we’ll be releasing episodes every other week. 

Eugene Leventhal: If there are any tech- and coronavirus-related stories you want to hear covered, feel free to email us at consequential@cmu.edu.

Lauren Prastien: And we’ll see you next week for the first episode of our mini-season of Consequential.

Eugene Leventhal: Consequential comes to you from the Block Center for Technology and Society at Carnegie Mellon University. The Block Center was established to examine the societal consequences of technological change and create meaningful plans of action. To learn more about Consequential, the Block Center and our faculty, you can check out our website at cmu.edu/block-center or follow us on Twitter @CMUBlockCenter.

The music you are hearing was produced by Fin Hagerty-Hammond.

Be well.