How can we design AI to reduce inequality?
To design AI that can help to reduce inequality, it’s important to consider the following factors:
- Algorithms and data: The algorithms and data used to train AI systems should be free from bias in order to avoid perpetuating existing inequalities. This requires careful consideration of the data sources and the potential for bias in the data.
- Fairness and transparency: AI systems should be designed to be fair and transparent, so that their decisions can be understood and evaluated by humans. This can help to ensure that the AI is not making biased or unfair decisions.
- Responsible deployment: AI should be deployed responsibly, in a way that considers the potential negative consequences on society, such as job displacement. It’s important to address these issues in a fair and equitable manner.
- Public policy: Governments and other organisations can play a role in promoting the responsible use of AI and addressing potential negative consequences. This can include regulations and policies that promote fairness and transparency in AI decision-making.
Time for a confession: the section above was written by ChatGPT, an AI chatbot. So it might be a bit biased.
ChatGPT has been all over the news in recent days, amusing people with its biblical advice on removing a peanut butter sandwich from a VCR while simultaneously alarming them about the implications for teaching English. As Ian Leslie notes in The Ruffian, although the chatbot’s ability to respond cogently to almost any broad question within seconds is impressive, its responses do “resemble the kind of answers students give when they are winging it”.
At the same time, its capabilities are a sign of how quickly AI can change society. If we want to engage with the debate in time to shape how AI develops and the implications of its development for our society and economy, we’d better hurry up.
There is no shortage of people sounding the alarm. Cathy O’Neil has written in Weapons of Math Destruction about how algorithms “threaten to rip apart our social fabric”, and how decisions on issues like whether we get a loan, or how much we pay for insurance, are made by mathematical models that often magnify bias and unfairness rather than eliminating them.

Algorithms have absorbed the common human assumption that we live in a meritocracy. The situation that an individual finds themselves in is assumed to be purely due to their talent and effort, with no account taken of the role of luck or circumstances. These models ‘keep people in their lanes’, acting as a block on social mobility, and they ‘codify the past’. The solution is to embed better values into our algorithms, so that fairness is part of the programming. We need to measure how, and how much, these models are increasing inequality, and then reverse the process.
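What might “measuring” that look like in practice? One common starting point (not from O’Neil’s book, just a widely used convention) is to compare a model’s approval rates across groups. Here is a minimal sketch in Python, with invented decision data and illustrative group labels, computing two standard fairness summaries:

```python
# Minimal sketch: quantifying whether a model's decisions differ across
# groups, via the demographic-parity gap and the disparate-impact ratio.
# The decision data below is invented for illustration.

def approval_rate(decisions):
    """Fraction of positive (e.g. loan-approved) decisions."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: 1 = approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)

# A gap of 0 means the two groups are approved at the same rate.
parity_gap = abs(rate_a - rate_b)

# A ratio below roughly 0.8 is often treated as a warning sign
# (the "four-fifths rule" used in US employment law).
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval gap: {parity_gap:.3f}")          # 0.375
print(f"disparate impact ratio: {impact_ratio:.2f}")  # 0.50
```

Real audits use richer metrics (error rates by group, calibration, and so on), but even this simple comparison makes “how much” a number that can be tracked and challenged.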
There is a lot of discussion happening about fairness and AI in the context of reducing bias from algorithms - ensuring fair treatment, if you like. But there’s rather less attention on the impact of AI on equality, or on fair opportunities. Drawing on the philosophical framework of John Rawls, the DeepMind research scientist Iason Gabriel has written in Toward a Theory of Justice for Artificial Intelligence about the need to ensure that AI can “meet a certain standard of public justification, support citizens’ rights, and promote substantively fair outcomes”, with a particular focus on their impact on the worst-off members of society. Gabriel talks about creating fairer AI systems that minimise inequality in the first place, as well as redistributing wealth to compensate for inequality. He suggests that “we need further research and public discussion about what substantively fair outcomes look like in practice, and about how AI systems can support this societal objective.”
James Plunkett has written a lot about social justice in a digital age (with a series written for JRF, as well as articles such as After social democracy, what next?). He points out that we need to think beyond liberal egalitarianism and its focus on distributive justice, and to consider the impacts of technological change on social cohesion, relational equality, and esteem. He calls for a greater focus on diagnosing the social ills of digital capitalism, as reformers like Joseph Rowntree and Beatrice Webb did for the previous incarnation a century or so ago. And he points out that we need to design a new set of governing arrangements for a digital age. Some of the policies suggested in his book End State deal with this issue. One is introducing principles-based regulation, focusing on outcomes rather than processes, so that people don’t lose out from algorithms. Another is opening up big tech to ensure that we all benefit from the IP and from the wealth created by it (a suggestion discussed in more detail by Joseph Stiglitz and Anton Korinek).
Perhaps we should leave the final word(s) to ChatGPT. When asked to write a Shakespearean sonnet about AI and inequality, it obliged straight away:
To AI, with love and caution do I write
Thou art a powerful tool, with great potential for good
But also the capacity to do harm, if not used right
Thou canst automate tedious tasks and create new job opportunities
Thou canst improve access to education and healthcare
But thou must be designed and implemented with fairness in mind
To prevent bias and discrimination, and the exacerbation of existing inequalities
Oh AI, thou art a double-edged sword
With the power to improve lives, or to do great harm
Let us use thee wisely, and guard against thy potential to do wrong.
Why haven’t we made more progress in tackling the social determinants of health? It would be a missed opportunity not to ask ChatGPT, which (who?) helpfully says:
There are many factors that have contributed to the slow progress in tackling the social determinants of health. One major factor is a lack of political will and funding for programs and policies that address these issues. In many cases, addressing the social determinants of health requires significant investment in areas such as education, housing, and infrastructure, which can be difficult to secure in a political climate that is focused on short-term gains and budget cuts. Additionally, there are often complex social and economic issues at play, which can make it challenging to develop effective solutions. Finally, the effects of the social determinants of health often manifest over the long term, making it difficult to measure the impact of interventions and to justify the necessary investments.
It may be generic, but it’s not a bad summary. This one is a hard nut to crack.
But there is reason for optimism. Last week saw the launch of Health Equals, a group of people and organisations that is campaigning to shape a society where each of us has our best chance of good health, no matter where we’re born, work or live. A key focus is on raising public awareness, to challenge the lack of political will that ChatGPT helpfully mentioned. Look out for some public campaigns in 2023.
If you haven’t yet done so, please sign up to be emailed Fair Comment every Monday.
Please suggest anything we should include in (or change about) Fair Comment.