Timnit Gebru’s innovative research, dynamic voice, and entrepreneurial spirit have brought her to the forefront of AI, with her research featured in the New York Times and MIT Technology Review. Her recently published research, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” co-authored with Joy Buolamwini, highlights the staggering finding that darker-skinned females are the most misclassified group by commercial gender classification AI, with error rates of up to 34.7%, while error rates for lighter-skinned males were at most 0.8%.
Timnit’s work to promote inclusion in AI has also had a meaningful impact on the field. In her role as a postdoctoral researcher in Microsoft’s Fairness, Accountability, Transparency, and Ethics (FATE) group, she works to understand, identify, and publicly communicate algorithmic bias in everyday products. As cofounder of Black in AI, she’s helped foster a community that shares ideas and supports racial diversity in tech by increasing the presence of Black people in AI.
Timnit spoke with AI4ALL about what got her started in computer vision, how she’s engaging in data activism, and what needs to be done to create a positive and inclusive future for AI in this special installment of Role Models in AI.
As told to Nicole Halmi of AI4ALL by Timnit Gebru; edited by Panchami Bhat
NH: Can you describe what you do as a postdoctoral researcher in Microsoft’s Fairness, Accountability, Transparency, and Ethics (FATE) group? What kind of projects are you working on?
TG: As part of the FATE group, I study how AI impacts society right now, how it may negatively impact society in the future, and what it means to be educated about the potential negative impacts of AI so we can make the impacts positive instead.
So far, I have been very focused on not just my own education but also public education and raising awareness.
I want the Black community to be very aware of the ways in which data can be negatively affecting them so that they can then be more motivated to get into the field of AI and do something about it.
I want people to associate activism not just with marching in the streets but also with data activism. I don’t think of that as my core research work, but it’s been something I’ve been spending a lot of time on.
I also work on trying to expose biases in everyday products and figuring out how to better test for and communicate bias to users. To that end, I just published a paper with Joy Buolamwini (of MIT Media Lab) that exposed biases inherent in many different APIs for things as simple as gender classification.
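To make the idea of testing for bias concrete: the core move in an audit like Gender Shades is to report error rates per intersectional subgroup rather than a single overall accuracy. Below is a minimal, hypothetical Python sketch of that kind of disaggregated evaluation; the records, labels, and counts are invented for illustration and are not the paper’s actual data, benchmark, or code.

```python
from collections import defaultdict

# Hypothetical per-image records: (skin type, true gender, predicted gender).
# These values are made up for illustration; the real Gender Shades audit
# evaluates commercial APIs on a purpose-built benchmark.
predictions = [
    ("darker", "female", "male"),
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
    ("lighter", "male", "male"),
]

# Tally wrong and total counts per intersectional subgroup.
counts = defaultdict(lambda: [0, 0])  # (skin, gender) -> [wrong, total]
for skin, actual, predicted in predictions:
    subgroup = (skin, actual)
    counts[subgroup][0] += predicted != actual
    counts[subgroup][1] += 1

# A single overall accuracy would hide exactly the disparity that
# the per-subgroup breakdown makes visible.
for subgroup, (wrong, total) in sorted(counts.items()):
    print(f"{subgroup}: error rate {wrong / total:.1%}")
```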
As we’re trying to democratize AI and make sure that everybody in the world has access to it, we also need to make sure that people understand the pitfalls of using it.
Another thing I work on is how to talk to people in other disciplines. For example, the Department of Homeland Security (DHS) asked several tech companies to partner and build technology to monitor social networks and other activities of immigrants or people who want to immigrate to decide whether or not a person is going to be a threat. You have to understand the kinds of projects that people are proposing so you can say, “hey, I don’t think this is a good idea, and I strongly oppose it.” The work isn’t just sitting at your desk and writing code.
NH: You are a co-organizer of Black in AI, which was an important presence at the NIPS conference in December. Can you talk a little bit about what the group does, how it was founded, and what’s up next for the group?
TG: Black in AI is a community that gives support and helps people understand the challenges in the field. I see it as a combination of a community and advocacy.
Black in AI has a Facebook group, a Google group, and a forum where people can discuss research ideas, share their work, ask questions, and encourage collaborations — particularly cross-continental collaborations, like Africans with African Americans, Brazilians, and so on. Regardless of where in the world you are, some things are common to the experience of a lot of Black people in artificial intelligence.
Almost 20% of the world’s population is Black, which means we should be very well represented in these conferences, but we’re not. It’s not just about seeing someone who looks like you but finding someone who understands social problems faced by people in the Black community so you can feel that other people are working on trying to address these social problems too.
NIPS was in Long Beach last year, and it’s going to be in Montreal this year. We are already placing hurdles for people by deciding where to have these conferences. For example, I’m already thinking about visa issues people are going to face. I’m originally from Ethiopia, and 50% of Ethiopians who apply for a US or British visa don’t get one. Some people get accepted by universities to study, but they can’t go because they don’t get their visas. There are people in Nigeria who have to apply for a visa 6 months in advance to go to Canada. This is in addition to all the fees you have to pay, from an expensive registration to flights. Unless there are people embedded in the AI community advocating and creating awareness about these challenges, no one else is going to know these kinds of hurdles exist for certain groups of people.
I’m already seeing Black in AI’s impact. Rediet [Abebe] (Black in AI cofounder and graduate student at Cornell) told me that a lot of Black people were admitted to Cornell for PhD programs. A lot of people applied because they knew about the opportunities and because we told them to apply.
NH: What are some of the things people should be doing now to create a positive future for AI?
TG: What we value is at the root of many problems we’re facing in AI right now — who we market the field to, what our interview process is like, who we respect, and what kind of skills we value. Throughout computer science education and in the profession, we’ve implicitly said only a specific type of personality and skill set is allowed in this field. What we value indirectly shapes who ends up here.
You want people in AI who have compassion, who are thinking about social issues, who are thinking about accessibility.
You want to see how people treat those around them, the workspaces they’re creating for themselves, and whether they work with women. You want to think about how you’re advertising the field and what kinds of people you’re selecting for.
For example, Joy Buolamwini, my co-author, has so many different skills that were necessary to bring this project [Gender Shades] to fruition. In addition to being a great coder, she takes initiative, is creative, and effectively communicates research findings with videos. People like her should be encouraged to be in the mainstream of this field.
NH: Switching focus to your personal journey: how did you decide to get degrees in electrical engineering? Were you interested in the field at a young age, or did you discover it in college? And how did you come to specialize in computer vision?
TG: My dad was an electrical engineer, and my two sisters were also electrical engineers. It was kind of destiny. When I was growing up, I really liked math and physics, but because my dad was an electrical engineer, it seemed like a natural thing for me to pursue.
When I entered college, I was planning on double majoring in electrical engineering and music, but I found that I didn’t like some of the classes. I worked at Apple for a few years doing analog circuit design. Then I got a master’s and got even deeper into the hardware side of things.
I started a PhD working on optical coherence tomography, but I eventually left because I didn’t like it and felt very isolated. I wasn’t really interested in making physical devices like interferometers. During that time, I took an image processing class and started to get interested in computer vision instead, and that sparked my decision to pursue my PhD in computer vision [advised by Fei-Fei Li, AI4ALL’s co-founder].
NH: Who were your role models growing up? Do you have any role models now?
TG: My mom is my biggest role model because she solves problems. She came to the US at the age of 55 and completely switched her career. My father died when I was five years old, and she was left with three kids that she had to figure out how to raise on her own in a very politically difficult situation. At the time, Ethiopia had a military communist dictatorship, and she was worried.
She just assesses the situation and tries to solve the problem. She also tries to course-correct. When my mom thinks I’m losing self-confidence, she’ll say, “who cares, don’t worry,” and tell me to prove them wrong. On the other hand, when she thinks I’m getting too much praise, she’ll try to correct for that. My cousin called and told her that I was in the news, and my mom said, “these days anyone can get fifteen minutes of fame!” She tries to keep me grounded.
About Timnit
Timnit Gebru works in the Fairness, Accountability, Transparency, and Ethics (FATE) group at the Microsoft Research New York lab. Prior to joining Microsoft Research, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in mining large-scale, publicly available images to gain sociological insight, and in the computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. The Economist and others have recently covered part of this work. She is currently studying how to take dataset bias into account while designing machine learning algorithms, as well as the ethical considerations underlying any data mining project. As a cofounder of the group Black in AI, she works to both increase diversity in the field and reduce the impact of racial bias in the data.
Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert every week this winter. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.