The digital classroom, transforming the way we learn

Joy Buolamwini: “We’re giving AI companies a free pass”

The pioneering AI researcher and activist shares her personal journey in a new book and explains her concerns about today’s AI systems. (MIT Technology Review)

A particular concern, says Buolamwini, is the basis upon which we are building today’s sparkliest AI toys, so-called foundation models. Technologists envision these multifunctional models serving as a springboard for many other AI applications, from chatbots to automated movie-making. They are built by scraping masses of data from the internet, inevitably including copyrighted content and personal information. Many AI companies are now being sued by artists, music companies, and writers, who claim their intellectual property was taken without consent.

The current modus operandi of today’s AI companies is unethical—a form of “data colonialism,” Buolamwini says, with a “full disregard for consent.”

“What’s out there for the taking, if there aren’t laws—it’s just pillaged,” she says. As an author, Buolamwini says, she fully expects her book, her poems, her voice, and her op-eds—even her PhD dissertation—to be scraped into AI models.

Big risk, big reward

Buolamwini also describes an episode in which she went up against a tech “Goliath,” Amazon. Her PhD research about auditing facial recognition systems elicited public attacks from senior executives at the company, which was at the time—in 2019—competing with Microsoft for a $10 billion contract to provide AI services to the Pentagon. After research by Buolamwini and Inioluwa Deborah Raji, another AI researcher, showed that Amazon’s facial recognition technology was biased, an Amazon vice president, Matt Wood, claimed that her paper and press coverage about it were “misleading” and drew “false conclusions.”

 

Read the full article at MIT Technology Review.

Lesson plan

Joy Buolamwini’s Work on AI Ethics

Objectives:

  • Students will learn about Joy Buolamwini’s groundbreaking research on bias in AI systems.
  • Students will understand the ethical implications of data colonialism and the importance of consent in AI development.
  • Students will be able to critically evaluate the work of AI companies and advocate for more ethical and responsible AI development.

Materials:

  • Articles on Joy Buolamwini’s work
  • Whiteboard or projector
  • Markers or pens

Procedure:

  1. Introduction: Begin by asking students what they know about AI and its potential benefits and risks. Explain that Joy Buolamwini is a leading AI researcher and activist who is working to make AI more ethical and accountable.
  2. Lecture: Give a brief lecture on Buolamwini’s research, focusing on her findings on bias in facial recognition systems and her concerns about the ethical implications of data colonialism.
  3. Discussion: Lead a discussion with students about the following questions:
    • What are the ethical implications of building AI systems on data that is scraped from the internet without consent?
    • How can we mitigate bias in AI systems?
    • What role can individuals and communities play in advocating for more ethical and responsible AI development?
  4. Activity: Divide students into small groups and have them brainstorm a list of actions that individuals and communities can take to advocate for more ethical and responsible AI development. Have each group share their ideas with the class.

Assessment:

  • Have students write a short essay reflecting on the following questions:
    • What are the most important things you learned about Joy Buolamwini’s work?
    • What are the most pressing ethical challenges facing AI development today?
    • What actions do you think individuals and communities can take to advocate for more ethical and responsible AI development?

Concise Version:

Joy Buolamwini is an AI researcher and activist who studies bias in AI systems. She has found that facial recognition systems are often biased against darker-skinned and female faces. She is also concerned about the ethical implications of data colonialism, the practice of scraping data from the internet without consent. Buolamwini’s work has had a significant impact on the field of AI ethics and has sparked important conversations about bias and fairness in AI systems.

In one notable incident, Buolamwini’s research on Amazon’s facial recognition technology was publicly criticized by an Amazon vice president. However, the research community rallied behind her, underscoring the importance of independent audits of bias in AI systems.

Buolamwini’s work is essential to building a more ethical and responsible AI future. It is a reminder that we must be vigilant about bias in AI systems and hold AI companies accountable for the technologies they develop.

Joy Buolamwini’s Research on Bias in Facial Recognition Systems and Data Colonialism

Joy Buolamwini is an AI researcher and activist who has conducted groundbreaking research on bias in facial recognition systems. Her work has shown that these systems are often more accurate at identifying white male faces than darker-skinned and female faces. This is largely because facial recognition systems are trained on datasets in which white male faces are overrepresented.

Buolamwini’s research has also raised concerns about the ethical implications of data colonialism. Data colonialism is the practice of scraping data from the internet without consent. This data is often used to train AI systems, including facial recognition systems. Buolamwini argues that data colonialism is unethical because it violates people’s privacy and can lead to the development of AI systems that are biased against certain groups.

Facial Recognition Bias

Buolamwini’s research on facial recognition bias began when she was a graduate student at MIT. She was working on a project to develop a facial recognition system that could be used to help people with disabilities. However, she soon discovered that the system often failed to detect her face at all until she put on a white mask.

Buolamwini then conducted a study of several commercial facial recognition systems and found that they, too, were biased against darker-skinned and female faces. For example, one system misclassified darker-skinned women as men nearly 35% of the time, while it misclassified lighter-skinned men as women less than 1% of the time.
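The key move in this kind of audit is to disaggregate error rates by subgroup rather than report a single overall accuracy, which can hide large gaps. Here is a minimal sketch in Python of that idea; the subgroup labels and the numbers in the sample data are hypothetical illustrations, not Buolamwini’s actual dataset:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic subgroup.

    records: list of (subgroup, true_label, predicted_label) tuples.
    Returns a dict mapping subgroup -> error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit data: 100 faces per group. A single overall
# accuracy (82%) would mask the 35x gap between the two groups.
records = (
    [("darker_female", "female", "male")] * 35
    + [("darker_female", "female", "female")] * 65
    + [("lighter_male", "male", "male")] * 99
    + [("lighter_male", "male", "female")] * 1
)
rates = error_rates_by_group(records)
# rates["darker_female"] -> 0.35, rates["lighter_male"] -> 0.01
```

Reporting the per-group rates side by side is what makes the disparity visible and comparable across vendors.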

Data Colonialism

Buolamwini’s research on data colonialism is based on her concerns about the way AI systems are trained. AI systems are trained on datasets scraped from the internet, and this data is often gathered without people’s consent.

Buolamwini argues that data colonialism is unethical because it violates people’s privacy and can lead to the development of AI systems that are biased against certain groups. For example, if an AI system is trained on a dataset of images that is mostly white and male, then the system is likely to be more accurate at identifying white male faces.

Conclusion

Buolamwini’s research is essential to building a more ethical and responsible AI future. It reminds us to remain vigilant about bias in AI systems and to hold AI companies accountable for the technologies they develop.

Here are some things that we can do to address the challenges raised by Buolamwini’s research:

  • Require AI companies to audit their systems for bias. AI companies should be required to audit their systems for bias and to make the results of these audits public.
  • Develop laws and regulations to protect people’s privacy from data colonialism. We need laws and regulations that prevent companies from collecting and using people’s data without their consent.
  • Educate the public about bias in AI systems. People need to be aware of the potential for bias in AI systems so that they can be critical consumers of these technologies.
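The first recommendation, auditing systems for bias, can be made concrete with a simple disparity check: compare each subgroup’s error rate to the best-performing subgroup and flag any gap over a chosen threshold. The sketch below is one illustrative way to do this; the 5-point threshold and the subgroup names are assumptions for the example, not an established standard:

```python
def audit_report(group_error_rates, max_gap=0.05):
    """Flag subgroups whose error rate exceeds the best-performing
    subgroup's rate by more than max_gap (an illustrative threshold)."""
    best = min(group_error_rates.values())
    flagged = {
        group: rate
        for group, rate in group_error_rates.items()
        if rate - best > max_gap
    }
    return {"best_rate": best, "flagged": flagged}

# Hypothetical per-group error rates from a facial-analysis audit.
rates = {
    "lighter_male": 0.01,
    "lighter_female": 0.07,
    "darker_male": 0.12,
    "darker_female": 0.35,
}
report = audit_report(rates)
# report["flagged"] contains every group except "lighter_male"
```

Publishing a report like this for each audited system, as the recommendation suggests, would let the public compare vendors on the same terms.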

By taking these steps, we can help to ensure that AI is used for good and that it benefits everyone.

 
