Case studies, surveys, and insights from emerging AI talent
“Artificial intelligence is a tool that is neither the root cause nor the cure-all for societal problems. Ultimately, lasting change will only come as we address the ills of our society holistically and head-on.” — AI4ALL Special Interest Group on AI Agents & Human Agency
Changemakers in AI, the AI4ALL alumni community, are driven by their belief in AI's potential to effect social good. But Changemakers are also leaders who understand that effecting social good requires ongoing discussion of AI's societal and ethical implications. They are uniquely positioned to lead these discussions: as high school and college-age students, they belong to the population that will be most affected by AI development. Technologists and policymakers are creating the world Changemakers will have to live in, so it's only fair that we hear and amplify their voices.
As the FAT Conference winds down, we are sharing reports written and researched by AI4ALL Changemakers on the future of AI ethics. Our students volunteered to write these reports, both to deepen their own understanding of AI ethics and to build the skills needed to communicate what they learned to others. We hope these reports offer a glimpse into the future of AI ethics as seen by the students building that future: our Changemakers.
Students came from our summer programs at Stanford, Boston University, Columbia, Arizona State University, University of Maryland, Carnegie Mellon University, UC San Francisco, Princeton, University of Michigan, Simon Fraser University, and Berkeley. Mentors from Columbia, Harvard, Santa Clara University, Node.io, Caption Health, and University of Washington supported the research.
Ethics in Artificial Intelligence: the Why
Read the report here.
Highlights
- “Biases are almost always a product of humans. They can originate at every step of the creation of an NLP [natural language processing] algorithm, from the selection of data, the preparation of data, the creation of code, and the training of the algorithm.”
- “When such [facial recognition] apps mislabel or don’t recognize darker faces when they can almost always recognize and correctly identify lighter ones, they create a divide and cause dark-skinned people to feel excluded from data-induced societal ‘norms.’”
- “To ensure that we use this novel technology [computer vision] for social good, we must remain cautious of bias in data by ensuring that we avoid selection bias, out-group homogeneity bias, biased data representation, and biased labels, and bias in interpretation by avoiding overgeneralization, overfitting, and correlation fallacy.”
Contributors
Students: Emma B., Hari B., Meghna G., Elena L., Anika P., Mana V., Ecem Y.
Mentors: Pujaa Rajan, Node.io; Natalia Bilenko, Caption Health; Jared Moore, University of Washington
AI Agents & Human Agency
Read the report here.
Highlights
- “As algorithms learn the patterns embedded in data that have been shaped by existing systems of inequality and oppression, their predictions become skewed by present and historical patterns of injustice.”
- “By regarding the predictions that machine learning algorithms present us with as concrete facts, we risk losing sight of the crucial distinction between correlation and causation, thereby detaching our understanding from reality.”
- “Although AI holds considerable potential to improve the way we live and work, AI systems are only as effective as the data they are trained on.”
Contributors
Students: Jui K., Ashna K., Ines K., Connie L., Ria M., Ashna M., Rucheng P., Constance R., Sana S., Taylor W.
Mentors: William Frey, Columbia University; Shannon Vallor, Santa Clara University; Mutale Nkonde, Harvard University