The Peculiar Case of AI Bias in Higher Education
Why Biased Roads Don't Necessarily Lead to Biased Destinations
In recent years, higher education institutions have increasingly adopted data-driven approaches and AI systems to assist with critical functions like admissions, student services, pedagogy and curriculum design. However, there is growing concern that these systems may propagate harmful biases against certain student demographics.
Several high-profile cases have brought this issue to light. In 2018, Reuters reported that an experimental AI-based recruiting tool developed by Amazon had systematically discriminated against female candidates, leading Amazon to abandon the project. In May 2023, the Brookings Institution found that ChatGPT is politically biased, a claim that has since been substantiated by researchers from the University of East Anglia, who found ‘significant and systemic left-wing bias’.
On Harmful Biases
There is no doubt that biased AI can cause real harm. However, the assumption that bias is universally problematic merits closer scrutiny. AI systems, like any human creation, are shaped by the data used to train them and the humans who design them. Biases in the developers and biases in the training data inevitably lead to biases in system outputs. However, bias is not intrinsically good or bad; what matters most is how it interacts with power, social norms, and existing discrimination.
Bias is an inevitable part of human judgment and perception. For example, are teachers and educators completely unbiased in their interactions with students? Of course not. Social psychology shows that stereotyping is a universal cognitive process for managing complex environments. The goal should be minimizing harmful biases, not eliminating categorization itself.
Furthermore, while AI is often assumed to be neutral and objective, it reflects the values and viewpoints of its creators no less than human decision-makers do. Claims of pure objectivity seem illusory. The process of training machine learning systems involves many small choices by engineers that shape the system's behavior. Values alignment in AI is complex. Techniques like reinforcement learning from human feedback (RLHF) have been used to align models like ChatGPT with human values. But whose values? Feedback may come from narrow demographics, leading to biases against excluded groups. We must acknowledge the subjectivity inherent in determining what constitutes ethical AI behavior.
Bias In Itself Does Not Necessarily Shape Outcomes
Rather than chasing the illusion of eliminating bias, our goal should be ensuring AI systems transparently reflect ethical values aligned with society's ideals of fairness and justice. But this is easier said than done. Those harmed by algorithmic bias, including women, racial minorities, LGBTQ+ people, and other marginalized groups, should have a seat at the table in determining ethical AI in educational institutions. Their voices are crucially important.
Crucially, bias alone does not determine outcomes. Context and how bias interacts with power dynamics are essential factors. For example, while a loan approval AI may learn to associate certain zip codes with risk, whether this leads to unfair denials depends greatly on how this input factor is used and checked for discrimination. Well-intentioned but biased AI used properly in the appropriate context need not lead to harm, just as biased human decision-making does not always cause harm.
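The loan-approval example can be made concrete with a simple disparate-impact check: compare approval rates across groups and inspect their ratio. The sketch below is illustrative only, not drawn from any real system; the toy data, group labels, and choice of metric are all assumptions.

```python
# Illustrative sketch (hypothetical data): measuring whether decisions
# produce unequal approval rates across demographic groups.

def approval_rate(decisions, groups, label):
    """Share of applicants in the given group whose loan was approved."""
    outcomes = [d for d, g in zip(decisions, groups) if g == label]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of approval rates; values well below 1.0 suggest the
    protected group is approved less often than the reference group."""
    return (approval_rate(decisions, groups, protected)
            / approval_rate(decisions, groups, reference))

# Toy data: 1 = approved, 0 = denied, plus each applicant's group label.
decisions = [1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")
```

A check like this says nothing by itself about whether a disparity is justified; as the paragraph above argues, that judgment depends on context, how the input is used, and who bears the cost of errors.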
Moreover, not all bias is equally disconcerting. For instance, a search engine AI biased towards showing more popular and authoritative content is often useful for users. But bias that replicates historic discrimination may be unethical even without ill intent. We must thoughtfully examine each case of bias in context rather than taking a blanket stance.
The line between acceptable and unacceptable bias is not always clear-cut. Eliminating bias entirely, if that were ever possible, could also eliminate useful heuristics and patterns that ethically aid decision-making. The goal should be minimizing harmful bias, not bias per se.
Moving Forward
It should be obvious that higher education cannot attain pure objectivity through technology alone. However, through evolving use cases, policies, norms, and software features, academia can productively manage AI's risks and biases, much as has been done for crowd-sourced platforms like Waze and Wikipedia.
Techniques like rigorous testing for fairness, implementing ethics review boards for AI projects, and supporting whistleblowers who report harmful biases are all moves in the right direction. As with Wikipedia, we must thoughtfully develop governance and accountability to harness AI's benefits while addressing its risks.
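As one sketch of what "rigorous testing for fairness" might look like in practice, a heuristic such as the four-fifths rule (borrowed by analogy from US employment-selection guidelines) can be encoded as an automated check that runs before a model is deployed. Everything below, including the function name, the threshold, and the example rates, is hypothetical.

```python
# Hypothetical fairness check: the four-fifths rule flags cases where a
# protected group's selection rate falls below 80% of the reference
# group's rate. The 0.8 threshold is a heuristic, not a legal standard
# for admissions.

def passes_four_fifths(rate_protected, rate_reference, threshold=0.8):
    """True if the protected group's selection rate is at least
    `threshold` times the reference group's rate."""
    if rate_reference == 0:
        return True  # no basis for comparison
    return rate_protected / rate_reference >= threshold

# Example: a 45% admit rate vs. 60% fails the check (ratio 0.75),
# while 50% vs. 60% passes (ratio ~0.83).
assert passes_four_fifths(0.45, 0.60) is False
assert passes_four_fifths(0.50, 0.60) is True
```

Wrapping such checks in a test suite makes them a routine gate rather than an afterthought, which is the spirit of the governance measures described above.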
While AI bias certainly exists and raises valid concerns about real-world harms, it does not necessitate abandoning or halting the development of AI. AI is merely a magnifying glass through which we can observe already existing bias and other disquieting human practices that need attention, with or without AI. Technology reflects both the virtues and flaws of its creators, and AI is no different.
When all is said and done, I remain optimistic about the future of AI and higher education.