Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions around the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging tech policies, regulations and laws, providing insights into the benefits and risks and identifying strategies to harness tech for good.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I’ve been working in responsible AI governance for nearly a decade. My training in technology and public policy, and in their intersection with societal impacts, drew me into the field. AI is already pervasive and profoundly impactful in our lives — for better and for worse. It’s important to me to meaningfully contribute to society’s ability to harness this technology for good rather than stand on the sidelines.

What work are you most proud of (in the AI field)?

I’m really proud of two things we’ve accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure the responsible procurement and use of AI. We take our commitment to serve the public responsibly seriously. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I’ve gained firsthand experience thinking through how best to operationalize our responsible AI principles to safeguard our faculty, staff, students and the broader communities we serve. Second, I think it’s critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I’ve found it crucial to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech has been a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the AI field?

For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and offer invaluable support. And advocate for yourself and others, as your voice is essential in shaping an inclusive, equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are some of the most pressing issues facing AI as it evolves?

I believe one of the most pressing issues as AI evolves is the tendency to get hung up on the latest hype cycles. We’re seeing this now with generative AI. Sure, generative AI presents significant advancements and will have tremendous impact — good and bad. But other forms of machine learning are in use today, surreptitiously making decisions that directly affect everyone’s ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it’s more important that we focus on how and where machine learning is being applied, regardless of its technological prowess.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and equitable AI systems.

What is the best way to responsibly build AI?

Responsibly building AI involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring AI technologies are developed with human rights, fairness and inclusivity at their core are fundamental.

How can investors better push for responsible AI?

This is such an important question! For a long time, the role of investors went largely undiscussed, and I cannot express enough how impactful they are. I believe the trope that “regulation stifles innovation” is overused and often untrue. Instead, I firmly believe smaller firms can experience a late-mover advantage, learning from the larger AI companies that have been developing responsible AI practices and from the guidance emerging from academia, civil society and government. Investors have the power to shape the industry’s direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that focus on addressing social challenges through AI, promoting diversity and inclusion within the AI workforce and advocating for strong governance and technical strategies that help ensure AI technologies benefit society as a whole.
