Human rights attorney Malika Saada Saar ’92 brings AI fireside chat series to Watson

Renowned human rights attorney and Watson Senior Fellow Malika Saada Saar has organized a series of events for the spring 2025 semester called Fireside Chats on Building AI for Humanity, featuring conversations with tech industry leaders about AI and human rights.

Senior Fellow in International and Public Affairs Malika Saada Saar is concerned about the implications of artificial intelligence (AI) for human rights in the U.S. and worldwide, but possibly not in the way you might expect. While much of the public conversation about AI has focused on speculation about arcane, science-fiction scenarios in which AI attains sentience and turns on humanity, Saada Saar's concerns are more practical and immediate. "I'm more concerned about what people do with AI than with AI itself," she said. 

Saada Saar has organized Fireside Chats on Building AI for Humanity, a series of conversations with tech industry leaders about AI and human rights for the spring 2025 semester. Saada Saar hopes to foster dialogue between academia and industry about the implications of widespread AI adoption for human rights and how it can be deployed responsibly. "I don't think we have a full appreciation of the centrality of this technology in our lives today," she said. 

Saada Saar said the series is "an opportunity to have critical conversations about a technology that is creating the fourth industrial revolution." "We currently find ourselves at a crossroads with AI," she said. "Either we move in the direction of rights, democracy, and the rule of law with this new technology and how it's designed and deployed, or we move in the direction of AI reinforcing the weaponization of technology against human rights and towards a totalitarian regime."

In organizing the series, Saada Saar said she considered it imperative to bring together the worlds of academia and the tech industry. "The conversations that should be happening are not happening enough in the tech sector, and I don't think they're happening enough in academia either," she said. "And to the extent that they are playing out, they're not happening at the intersection of academia and tech…It's really important to recognize the power of Brown University and make sure that we bring in tech leaders to engage in dialogue at this critical moment."

"This is the time to explore the implications of the technology and the opportunities to design and deploy it in a way that reinforces human rights and democracy," said Saada Saar, "and to recognize the danger and harms this technology could bring without proper safeguards."

According to Saada Saar, AI presents both countless opportunities and risks. "AI determines whether you will get health insurance, whether you will get a mortgage loan, whether you will get a job, and whether you will be admitted to a university," she said. "We must begin to grapple with the centrality of this technology in our lives and the way it has created a fourth industrial revolution. The influence of this technology is almost unprecedented."

Saada Saar said some of the problems with AI are well-known. "We understand and have many examples of how this technology reproduces hierarchies of power, reproduces gender and racialized stereotypes, and determines who is permitted access and opportunity and who is denied," she said. 

Saada Saar points to a central irony about AI: the fact that a radically new technology binds us to the past because it relies on information that is already available. "I talk with my students about how it's so interesting that this is the most complex technology that has ever been deployed, and yet it tethers us to the same regressive thinking of the 20th century and the centuries before in terms of issues around race and gender," she said. "I find that paradox fascinating — that we can have a technology that is so advanced and yet so regressive at the same time."

But Saada Saar said it's important that the conversation about AI expand beyond the threats it poses and also include potentially beneficial uses. "I think it's really important for human rights defenders to understand the technology and to think about how we can harness it," she said. "If we cede the technology to the despots, then we lose an opportunity to be the tip of the spear to think about how it can be used for humanity and to defend and advance human rights."

Saada Saar points to some of the ways advanced technology has been used to expand human rights in recent years. "As a human rights lawyer, I would argue that we recently had this moment of understanding the power of technology to bring us closer to better human rights standards," she said. "Tech — these phones and digital global platforms — has enabled the world to bear witness to human rights movements and human rights abuses. We saw that with the Arab Spring and the horrid crimes against humanity of the Assad government in Syria. Or Darnella Frazier, who courageously recorded the murder of George Floyd. One of the things you get taught as a human rights lawyer is the importance of bearing witness and documentation, and you document so that you ensure that the world bears witness to human rights abuses."

We cannot move recklessly. There is a false binary that says you can either innovate or be responsible. But actually, responsibility can drive innovation…If we approach AI from a rights-based position that is equity-informed, then we can design AI that will be better for humanity.

Malika Saada Saar, Senior Fellow in International and Public Affairs

Saada Saar said we need to think about similar ways in which the power of AI can be harnessed to advance the cause of human rights. "We need to make sure the NGO community understands this technology and invests in it," she said. "We need to make sure that there are places of intersection between technologists — the architects of AI — and those who are fighting for human rights."

According to Saada Saar, the people who create public policy must also understand AI and participate in these conversations. "We cannot move recklessly. There is a false binary that says you can either innovate or be responsible. But actually, responsibility can drive innovation," she said. "If we approach AI from a rights-based position that is equity-informed, then we can design AI that will be better for humanity."

Saada Saar cites the work of a previous fireside chat guest, Vinay Rao, as an example of human-rights-centered AI innovation. Rao is the head of trust and safety at Anthropic, the AI research company that developed Claude, an AI assistant "designed to be helpful, harmless, and honest" in its interactions with humans. Saada Saar pointed out that one of Claude's distinguishing features is that its "constitution" has 75 points, including sections from the UN Universal Declaration of Human Rights.

Erin Teague, who joined Saada Saar for a chat on March 12, is the chief product officer at Character.AI, a popular AI platform that allows users to create "characters" with their own "personalities" that other members of the community can interact with. 

Saada Saar, who worked with Teague when Teague was YouTube's director of product management, characterizes her as "the real deal." "As someone who has watched Erin's ascent and cheered her on as a Black woman in this space, I'm excited for the students to hear how she has tried to implement a regime of responsible tech for a platform that is nascent and fraught," she said.

On April 8, as part of a special AI for Impact and Justice roundtable event, Saada Saar will speak with Devshi Mehrotra, the founder and CEO of JusticeText, an AI-powered tool that promotes greater police accountability by helping attorneys analyze the thousands of hours of police body-cam footage captured every day. The final fireside chat of the semester will feature Howie Watchel, Microsoft's Head of UN and International Organizations Policy.