
"Ethics for urgency" means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone.
Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call "ethics for urgency."
For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.
Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.
This interview has been edited for length and clarity.
Why do we need a new kind of ethics for AI?
With this pandemic we’re suddenly in a situation where people are really talking about whether AI could be useful, whether it could save lives. But the crisis has made it clear that we don’t have robust enough ethics procedures for AI to be deployed safely, and certainly not ones that can be implemented quickly.
What’s wrong with the ethics we have?
I spent the last couple of years reviewing AI ethics initiatives, looking at their limitations and asking what else we need. Compared to something like biomedical ethics, the ethics we have for AI isn’t very practical. It focuses too much on high-level principles. We can all agree that AI should be used for good. But what does that really mean? And what happens when high-level principles come into conflict?
For example, AI has the potential to save lives, but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven't figured out how to deal with the inevitable disagreements.
AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.
But ethics needs to be proactive and prepare for what could go wrong, not what has gone wrong already. Obviously, we can’t predict the future. But as these systems become more powerful and get used in more high-stakes domains, the risks will get bigger.
Article link: https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/