To give AI-focused female academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching an interview series focusing on the remarkable women who have contributed to the AI revolution.
Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Public Voices in Technology Fellow at the OpEd Project, convened in collaboration with the MacArthur Foundation.
She is known for her research and advocacy work on technology. She previously served as a Race and Technology Practice Fellow at the Stanford Center on Philanthropy and Civil Society, and before that she held senior roles on the trust and safety teams at Twitter and Twitch. Navaroli is perhaps best known for her congressional testimony about Twitter, in which she described the ignored warnings of impending violence on social media that preceded the Jan. 6 Capitol attack.
Briefly, how did you get your start in artificial intelligence? What attracted you to the field?
About 20 years ago, I was working as a writer in the newsroom of my hometown newspaper during the summer it went digital. At the time, I was an undergraduate studying journalism. Social media sites like Facebook were sweeping campus, and I became obsessed with trying to understand how laws built around the printing press would evolve with emerging technologies. That curiosity led me to law school, where I migrated to Twitter, studied media law and policy, and watched the Arab Spring and Occupy Wall Street movements unfold. I put it all together and wrote my master’s thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.
I worked at a couple of law firms after graduation and then found my way to the Data & Society Research Institute, where I led the new think tank’s research on what was then called “big data,” civil rights, and justice. My work there looked at how early AI systems, such as facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms, replicated bias and created unintended consequences for marginalized communities. I then moved to Color of Change, where I led the first civil rights audit of a tech company, developed the organization’s playbook for tech accountability campaigns, and advocated for tech policy changes to governments and regulators. From there, I became a senior policy official on the trust and safety teams at Twitter and Twitch.
What work are you most proud of in the field of artificial intelligence?
I am most proud of my work inside technology companies using policy to practically shift the balance of power and correct bias within culture- and knowledge-producing algorithmic systems. At Twitter, I ran a couple of campaigns to verify individuals who had previously, shockingly, been excluded from the exclusive verification process, including Black women, people of color, and queer people. This included leading AI scholars such as Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020, when Twitter was still Twitter. At the time, verification meant that your name and content became part of Twitter’s core algorithm, because tweets from verified accounts were injected into recommendations, search results, and home timelines, and helped create trends. So the work of verifying new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some truly critical moments.
I’m also very proud of the research I did at Stanford that became Black in Moderation. When I was working inside tech companies, I noticed that no one was really writing or talking about the experiences I was having every day as a Black person working in the trust and safety space. So when I left the industry and returned to academia, I decided to talk with Black tech workers and bring their stories to light. The research ended up being the first of its kind and has sparked many new and important conversations about the experiences of tech employees with marginalized identities.
How do you overcome the challenges of the male-dominated technology industry and, by extension, the male-dominated AI industry?
As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been part of my entire life’s journey. In tech and AI, I think the most challenging aspect is what I call in my research “forced identity work.” I coined the term to describe recurring situations where employees with marginalized identities are treated as the voices and/or representatives of entire communities that share their identities.
Given the high stakes that come with developing new technology like AI, escaping that labor can sometimes feel almost impossible. I had to learn to set very specific boundaries for myself about what issues I was willing to engage with and when.
What are some of the most pressing issues facing AI as it develops?
According to investigative reporting, current generative AI models have already consumed all the data on the internet and will soon run out of available data to devour. So the world’s largest AI companies are turning to synthetic data, or information generated by AI itself rather than by humans, to continue training their systems.
The idea sent me down a rabbit hole, so I recently wrote an op-ed arguing that the use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So the path of training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the systems as new training data. I described this as potentially devolving into a feedback loop to hell.
Since I wrote the article, Mark Zuckerberg has touted that Meta’s updated Llama 3 chatbot was partially powered by synthetic data and was the “smartest” AI product on the market.
What are some issues that AI users should be aware of?
Artificial intelligence is already a ubiquitous part of our lives, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for experiments with this new, untested technology. But AI users shouldn’t feel powerless.
I have argued that technology advocates should come together and organize AI users to call for a people’s pause on AI. I believe the Writers Guild of America has shown that with organization, collective action, and patient determination, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulations, AI doesn’t have to become an existential threat to our futures.
What is the best way to build AI responsibly?
My experience working inside technology companies showed me just how much it matters who is in the room writing policies, making arguments, and making decisions. My path also showed me that I developed the skills I needed to succeed in the tech industry by starting in journalism school. Now I’m back working at Columbia Journalism School, and I’m interested in training the next generation of people who will do technology accountability work and develop AI responsibly, both inside tech companies and as external watchdogs.
I believe [journalism] school gives people such unique training in interrogating information, seeking truth, considering multiple points of view, creating logical arguments, and distilling fact and reality from opinion and misinformation. I think that’s a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and cannot do, and I look forward to creating a more paved path for those who come next.
I also believe that, in addition to skilled trust and safety workers, the AI industry needs external regulation. In the U.S., I have argued that this should come in the form of a new agency to regulate American technology companies, with the power to establish and enforce baseline safety and privacy standards. I’d also like to continue working to connect current and future regulators with former tech insiders who can help those in power ask the right questions and create new, practical solutions.