
How to craft effective AI policy

A conversation about equity and what it takes to make effective AI policy.
This episode was taped before a live audience at MIT Technology Review’s annual AI conference, EmTech Digital.
This episode was created by Jennifer Strong, Anthony Green, Erin Underwood and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. Episode art by Stephanie Arnett. Cover art by Eric Mongeon. Special thanks this week to Amy Lammers and Brian Bryson.
[PREROLL]
[TR ID]
Jennifer Strong: The applications of artificial intelligence are so embedded in our everyday lives that it’s easy to forget they’re there… But these systems, like the ones powering Instagram filters or the price of a car ride home… can rely on pre-existing datasets that fail to paint a complete picture of consumers.
It means people become outliers in that data – often the same people who’ve historically been marginalized. 
It’s why face recognition technologies are least accurate on women of color, and why ride-share services can actually be more expensive in low-income neighborhoods. So, how do we stop this from happening?
Well, would you believe a quote from Harry Potter and his wizarding world… might make a good starting point for this conversation?
I’m Jennifer Strong and this episode, our producer Anthony Green brings you a conversation about equity from MIT Technology Review’s AI conference, EmTech Digital. We’ll hear from Nicol Turner Lee—the director of the Center for Technology Innovation at the Brookings Institution—about what it takes to make effective AI policy.
[EPISODE IN]
Anthony Green: There’s a quote from Harry Potter of all places. 
Nicol Turner Lee: Oh Lord, I, I, I haven’t seen the Harry Potter episodes since my kids were little so I’ll try. 
[Laughter]
Anthony Green: Oh man. Uh, it’s a pretty good one. No, it’s, it’s just kind of stuck with me over the years. I’m honestly not even otherwise a big fan, but, um, the quote goes, “there will be a time where we must choose between what is right and what is easy,” and it feels like that applies pretty squarely to how companies design these systems. Right. So I guess my question is how can policy makers, right, start to push the needle in the right direction when it comes to favorable outcomes for AI in decision making?
Nicol Turner Lee: Well, that’s a great question. And again, thank you for having me. You may be wondering why I’m sitting here. I’m a sociologist. I’ve had the privilege of being on this stage for a couple of conferences here at MIT. But I got into this… And before I answer your question, because I think the quote that you’re referencing points to much of what my colleagues have talked about, which are the sociotechnical implications of these systems.
Anthony Green: Mm-hmm.
Nicol Turner Lee: So I’ve been doing this for about 30 years. And part of the challenge that we’ve had is that we’ve not seen equitable access to technology. And as we think about these emerging sophisticated systems, to your point, we have to think about the extent to which they have effects on regular everyday people, particularly people who are already marginalized. Already vulnerable in our society. So that quote has a lot of meaning because if we’re not careful, the technology in and of itself will sort of accelerate, I think, some of the progress that we’ve made when it comes to equity and civil rights. 
Anthony Green: Yeah. 
Nicol Turner Lee: Um, I’m gonna date myself for just a moment. I know I look a lot younger. When I was growing up I used to run home and watch the Jetsons, right. There were two cartoons. I watched Fred Flintstone, which if you all remember, he rode around in a car with rocks, and I watched the Jetsons…
Anthony Green: Powered with his feet. 
Nicol Turner Lee: I know, right! You’re too young to know about Fred Flintstone.
Anthony Green: Oh, Boomerang. 
Nicol Turner Lee: But, but if you notice, you know, Fred Flintstone is archaic. Right?
Anthony Green: Right.
Nicol Turner Lee: The, the rocks as wheels doesn’t work. 
Anthony Green: Yeah. 
Nicol Turner Lee: The Jetsons is actually realized. And part of the challenge and the reason that I have interest in this work outside of my, you know, PhD in sociology and my interest in technology is that these systems now are so much more generally purposed that they impact people when they are contextualized in environments.
And that’s where I think we have to have more conversations that point to your question. So roundabout way. But I think it’s really important that we have these conversations now, before the technology accelerates itself.
Anthony Green: Hundred percent. And I mean, you know, all of that said, right, policy making alone isn’t going to be the only solution needed to resolve these issues. So I would love it if you can speak to how accountability, specifically on the part of industry, comes into play as well. 
Nicol Turner Lee: Well, the problem with policy makers is that we’re not necessarily technologists. And so we can see a problem and we actually sort of see that problem in its outcomes. 
Anthony Green: Yeah. 
Nicol Turner Lee: So I don’t think there’s any policy maker, or very few outside of people like Ro Khanna and others, right, who actually know what it’s like to be in, in the tech space. 
Anthony Green: Sure. 
Nicol Turner Lee: That understands how these outcomes occur. They don’t understand what’s underneath the hood. Or as people say, I’m trying to move away from this language. It’s not really a black box. Right. It’s just a box.
Anthony Green: Right.
Nicol Turner Lee: Because there’s some, uh, judgments that come with calling it a black box. But when you think about policy and those outcomes you have to say to yourself, how do policy makers sort of take an organic iterative model and then legislate or regulate it? And that’s where people like me who are in the social sciences, I think, come in and have much more conversation on what they should be looking for. Um, so the accountability there is hard.
Anthony Green: Yeah.
Nicol Turner Lee: Because no one is talking the same language as many of you in this room, right. The technologists are sort of rushing to market. I call it permissionless forgiveness. Uh, as my colleague at the Center for Technology Innovation, Tom Wheeler, has that great phrase, “build it and then break it and then come back and fix it.” Well, guess what happens? That’s permissionless forgiveness. ’Cause what happens? We say we’re sorry when people have foreclosed, uh, mortgage rates, are in criminal justice systems where they’re detained longer because these models dictate those predictions.
Anthony Green: Right.
Nicol Turner Lee: So policy makers have not quite, Anthony, caught up to the speed of innovation. And we’ve said that for decades, but it’s actually true. 
Anthony Green: Absolutely. I mean, you’ve referred to this issue in the past as a civil and human rights issue. 
Nicol Turner Lee: It is. It is. 
Anthony Green: Right. So, I mean, can you kind of like expand on that and how that’s kind of shaped your conversations about policy?
Nicol Turner Lee: You know, it’s shaped my conversations from the standpoint of this. I, I, you know, shameless plug, I have a book coming out on the US digital divide so I’ve been very interested. I call it, uh, Digitally Invisible: How the Internet Is Creating the New Underclass. And it’s really about the digital divide going past the binary construction of who’s online, who’s not, to really thinking about what are the impacts when you are not connected.
Anthony Green: Right. 
Nicol Turner Lee: And how do these emerging technologies impact you? So to your point, I call it a civil rights issue because what the pandemic demonstrated is that without internet access, you were actually not able to get the same opportunities as everybody else. You could not register for your vaccine. You could not communicate with your friends and family. Fifty million school-aged kids were sent home, and 15 to 16 million of them could not learn. And now we’re seeing the effects of that.
Anthony Green: Yeah. 
Nicol Turner Lee: And so when we think about artificial intelligence systems that have now replaced what I call the death of analog. Replaced, uh, you know, how we used to do things in person. We’re now seeing, in a civil rights age, laws that are being violated. And that… in ways that I, I don’t necessarily attribute to the malfeasance of technologists. But what they’re doing is they’re foreclosing on opportunities that people have fought hard for.
Anthony Green: Sure. 
Nicol Turner Lee: 2016 election. When we had foreign operatives come in and manipulate the content that was available to voters. That was a form of voter suppression. 
Anthony Green: Right.
Nicol Turner Lee: And there was no place that those folks could go to like the Supreme Court or Congress to say my vote was just taken away based on the deep neural networks that were associated with what they were seeing.
Anthony Green: Yeah. 
Nicol Turner Lee: Or the misinformation around polling. We’re now at a state… when you are in a city like Boston and an Uber driver doesn’t pick you up because he sees your face in the profile. Where do you go for the type of, um, you know, the benefits of, of the civil rights regime that we have that was not based on a digital atmosphere? So part of my work at Brookings has been how do we look at the flexibility and agility of these systems to apply to emerging technologies. And we have no simple answer because these rules were not necessarily developed, you know, in the 21st century.
Anthony Green: Right.
Nicol Turner Lee: They were developed when my grandfather told me how he walked to school with the same pair of shoes, right. Where the bottom was out because he wanted an education. We don’t have that today. And I think it’s worth a conversation as these technologies become more ubiquitous. How are we developing not just inclusive and equitable AI but legally compliant AI? AI that makes sense, that people feel that they have some retribution for that malfeasance. So I’ll talk a little bit about some of the work we’re doing on there, but I think, you know, there’s a cadre of individuals like myself, some of them here at MIT, that are really trying to figure out how do we go back and make people accountable to the civil and human liberties of folks and not allow the technology to be the fall person when it comes to, you know, why things wreak havoc or go wrong.
Anthony Green: Don’t blame the robots. 
Nicol Turner Lee: You know! I tell people robots do not discriminate. I’m sorry. You know, we do, and, and there’s something to be said about that when we start looking at civil rights.
Anthony Green: I’m gonna go to the audience. Anyone got a question? 
Renee, audience member: Thank you so much. Renee, from São Paulo, Brazil.
Nicol Turner Lee: Hey!
Renee, audience member: There is a common theme in these last presentations. It’s about invisibility.
Nicol Turner Lee: Yes!
Renee, audience member: There are so many ways to be invisible. If, if you have the wrong badge you are invisible, like Harry Potter. If you are too old, if you have the wrong kind of skin. And there’s one very interesting thing. When we talk, we talk about data and AI. AI is proposing things about data that are available.
Nicol Turner Lee: Yeah.
Renee, audience member: But there are data that are completely invisible about people who are invisible. So what kind of solutions are we building if they are based on data… based on data about, always, the same people? How do we bring visibility to everybody?
Nicol Turner Lee: Yes!
Renee, audience member: So, thank you so much.
Nicol Turner Lee: No, I love that question. Can I jump right in on this one? 
Anthony Green: Go for it. 
Nicol Turner Lee: You know, uh, my colleague and friend Renee Cummings, who is the AI, uh, scientist in residence at the University of Virginia. She introduced me to, a few months ago, and we did a podcast where she was featured, this concept of what’s called data trauma.
Anthony Green: Mmmm.
Nicol Turner Lee: And I wanna sort of walk you through this because it blew me away when I began to think about the implications, and it goes to Renee’s question. What does it mean, you know, when we talk about AI, we often talk about the problem development, the data that we’re training it on, the way that we’re interpreting the outcomes or explaining them, but we never talk about the quality of the data and the fact that the data in and of itself holds within it the, the wounds of our society. I don’t care what people say. If you are training AI on criminal justice, um, issues, and you’re trying to make a fair and equitable AI that recognizes who should be detained or who should be released. And we all know that particular algorithm I’m talking about. If it is trained on US data, it is disproportionately going to overrepresent people of color.
So even though my friends, and I tell everybody this, just so you know, like she’s not coming in here, you know, being angry. I tell everybody you need a social scientist as a friend. I don’t care who you are. If you are a scientist, an engineer, a data scientist and you don’t have one social scientist as your friend, you’re not being honest to this problem. Right? Because what happens with that data? It comes with all of that noise. And despite our ability as scientists to sort of tease out that noise or diffuse the noise, you still have the basis and the foundation for the inequality. And so one of the things I’ve tried to tell people, it’s probably okay for us to recognize the trauma of the data that we’re using. It’s okay for us to realize that our models will be normative in the extent to which there will be bias. Technical bias, societal bias, outcome bias and prediction bias, but we should disclose what those things are. 
Anthony Green: Yeah. 
Nicol Turner Lee: And that’s where my work in particular has become really interesting to me as a person who is looking at this as, you know, the use of proxies and the use of data. For me, it becomes what part of the model is much more injurious to respondents and to outcomes. And what part should we disclose that we just don’t have the right data to predict accurately without some type of, you know, risk…
Anthony Green: Sure. 
Nicol Turner Lee: …to that population. 
Anthony Green: Yeah. 
Nicol Turner Lee: So to your question, I think if we acknowledge that, you know, I think then we can get to a point where we can have these honest conversations on how we bring interdisciplinary context to certain situations.
Anthony Green: We’ve got another question.
Kyle, audience member: Hi Nicol.
Nicol Turner Lee: Hey.
Kyle, audience member: I’m grateful for your perspective. Um, my name is Kyle. I run… I’m a data scientist by training and I run a team of AI and ML designers and developers. And so, you know, it scares me how fast the industry’s evolving. You mentioned GPT-3. We’re already talking about GPT-4 being in the works and the exponential leap in capabilities that’s gonna present. Something that you mentioned that really struck me is that legislators don’t understand what we’re doing. And I don’t believe that us as data scientists should be the ones making decisions about how to tie our hands behind our backs.
Nicol Turner Lee: Yeah. 
Kyle, audience member: And how to protect our work from having unintended consequences.
Nicol Turner Lee: Yes. 
Kyle, audience member: So how do we engage and how do we help legislators understand the real risks and not the hype that is sometimes heard or perceived in the media? 
Nicol Turner Lee: Yeah, no, I love that question. I’m actually gonna flip it. And I’m gonna talk about it in two ways in which I actually talk about it. So I do think that legislators who work in this space, particularly in those sensitive use cases.
So I tell people, I give this example all the time. I love shopping for boots and I’m okay with the algorithm that tells me as a consumer that I love boots, but as Latanya Sweeney’s work has indicated, if you associate other things with me. Uh, what other, uh, attributes does this particular person have? When does she buy boots? How many boots does she have? Does she check her credit when she’s buying boots? What kind of computer is she using when she’s buying her boots? If you begin to make that cumulative picture around me, then we run into what Dr. Sweeney has talked about—these associations that create that type of risk.
So to your first question, I think you’re right. That policy makers should actually define the guardrails, but I don’t think they need to do it for everything. I think we need to pick those areas that are most sensitive. The EU has called them high risk. And maybe we might take from that, some models that help us think about what’s high risk and where should we spend more time and potentially policy makers, where should we spend time together?
I’m a huge fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. Uh, I have an article coming out in an Oxford University Press book on an incentive-based rating system that I could talk about in just a moment. But I also think on the flip side that all of you have to take account for your reputational risk.
As we move into a much more digitally advanced society, it is incumbent upon developers to do their due diligence too. You can’t afford as a company to go out and put out an algorithm that you think, or an autonomous system that you think, is the best idea, and then end up on the front page of the newspaper. Because what that does is it degrades your consumers’ trust in your product.
And so what I tell, you know, both sides is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it applies to all populations. When it comes to disparate impact on financial products and services, there are great models that I’ve found in my work, in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand what proxies actually deliver disparate impact. There are areas, we just saw this right in the housing and appraisal market, where AI is being used to sort of, um, replace subjective decision making, but contributing more to the type of discrimination and predatory appraisals that we see. There are certain cases where we actually need policy makers to impose guardrails, but more so be proactive. I tell policymakers all the time, you can’t blame data scientists if the data is horrible.
Anthony Green: Right.
Nicol Turner Lee: Put more money in R and D. Help us create better data sets that are overrepresented in certain areas or underrepresented in terms of minority populations. The key thing is, it has to work together. I don’t think that we’ll have a good winning solution if policy makers actually, you know, lead this or data scientists lead it by itself in certain areas. I think you really need people working together and collaborating on what those principles are. We create these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or ad targeting. We know! We in this room, we cannot sit back and say, we don’t understand why we use these technologies. We know because they actually have a precedent for how they’ve been expanded in our society, but we need some accountability. And that’s really what I’m trying to get at. Who’s making us accountable for these systems that we’re creating?
It’s so interesting, Anthony, these last few, uh, weeks, as many of us have watched the, uh, conflict in Ukraine. My daughter, because I have a 15-year-old, has come to me with a variety of TikToks and other things that she’s seen to sort of say, “Hey mom, did you know that this is happening?” And I’ve had to sort of pull myself back ’cause I’ve gotten really involved in the conversation, not knowing that in some ways, once I go down that path with her, I’m going deeper and deeper and deeper into that well.
Anthony Green: Yeah.
Nicol Turner Lee: And I think for us as scientists, it kind of goes back to this “I Have a Dream” speech. We have to determine which side of history we wanna be on with this technology, folks. And how far down the rabbit hole do we wanna go to contribute? I think the greatness of AI is our ability to have human cognition wrapped up in these repetitive processes that go way beyond our wildest imagination of the Jetsons.
And that allows us to do things that none of us have been able to do in our lifetime. Where do we want to sit on the right side of history? And how do we want to handle these technologies so that we create better scientists? 
Anthony Green: Sure. 
Nicol Turner Lee: Not ones that are worse. And I think that’s a valid question to ask of this group. And it’s a valid question to ask of yourself. 
Anthony Green: I don’t know if we can end on anything better and we’re out of time! Nicol, we can go all day but..
Nicol Turner Lee: I know. I always feel like a Baptist preacher, you know, so if I have energy about it…
Anthony Green: Choir, can you sing it? 
Nicol Turner Lee: I know, right. I can’t sing it, but you can do that I Have A Dream speech, Anthony. 
[Laughter] 
Anthony Green: Oh man. You’re putting me on the stand and I’m already on stage. 
Nicol Turner Lee: Yeah, right haha. 
Anthony Green: Nicol, thank you so much. 
Nicol Turner Lee: Thank you so much as well. Appreciate it. 
Anthony Green: Absolutely. 
Nicol Turner Lee: Thank you everybody here.
[MIDROLL AD]
Jennifer Strong: This episode was produced by Anthony Green, Erin Underwood, and Emma Cillekens. It’s edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. It was recorded in front of a live audience at the MIT Media Lab in Cambridge Massachusetts, with special thanks to Amy Lammers and Brian Bryson.
Jennifer Strong: Thanks for listening. I’m Jennifer Strong. 