By Jack Cumming
The short answer is “no,” but AI can be trained to act ethically if its trainer has a wise sense of ethics. That word “trained,” however, is what differentiates AI from the machines we’ve long known. Machines are not ethical, and neither is AI.
Of course, the same can be true of a human. Think of a mechanistic worker who performs one task on an assembly line, whether the line produces machines for peace or for killing. The same task can be ethical or unethical.
Follow the Precedents
In this article, one of a series probing the potential of AI for senior living (see here and here), we explore this complex question and consider how augmented intelligence, or artificial intelligence, might best be governed.
From a resident’s perspective, the benefits of AI cannot come too soon. Thus, there is an urgency to take the organizational, staffing, and financial steps to ensure that senior living does not fall behind. Without residents, there can be no senior living industry, though the human needs of aging will persist.
At the outset, I have to give kudos to Daniel Huttenlocher of MIT, whose talk on these matters inspired this reflection. The basic insight is that we will do best if we do not reinvent ethics and governance principles merely because AI is suddenly front-and-center in public awareness.
What’s New?
Let’s be clear about what makes augmented or artificial intelligence unique. Previously, all machines had to be constructed or programmed by humans to do a prescribed task. Now, with the dramatic increase in both data and computing power, we can construct machines that learn from training.
That learning, or training, is nevertheless tailored by humans to the task, just as human-designed programming or mechanics previously tailored machines to the task. Or, from another perspective, training an AI device is similar to training a new employee to do specific tasks.
Politicians, take note: please restrain yourselves and put wisdom before short-term popular pandering. With that caution and that acknowledgment, here is the current state of my learning and reflection on this pressing question, weighing the pros and cons, the benefits and the pitfalls.
- I believe that all parties should be open about AI involvement when they deploy AI applications, much as passengers know when they are riding in a self-driving car. If a customer service agent or sales assistant is an AI-enabled machine interlocutor, that should not be concealed. Never pretend that a machine is a person.
- My preference would be that, rather than adopting de novo norms and constraints for AI, we apply the existing norms for ethical behavior codified in court cases, statutes, regulations, and accepted practices, just as they apply to other instruments of human agency. Let’s not reinvent what’s been time-tested and proven.
- Elaborating on this case-law approach: since AI and its future are imperfectly understood even by experts, it seems best to let a body of case law evolve before codifying AI law into statutes. No one should be allowed to disown responsibility or to limit accountability through devious contracting or self-serving government action.
- If we let fear anticipate risks that don’t yet exist, the danger is that irrational, emotional reactions may unduly restrict AI from delivering the human good it could achieve.
- Following the principles of today’s law and stare decisis, the human who deploys an AI application or an AI-governed robot should carry responsibility for the results, just as any machine user does.
- Machines can no more be responsible than an infant can, so we have to hold accountable the adult humans who unleash robots and AI. This is similar to how we hold animal owners responsible for any harm their animals cause. You can own a tiger, but you are responsible if the tiger escapes and attacks the public. On the same premise, that responsibility would extend, for example, to allegations of copyright infringement by, say, ChatGPT.
- The “training” of autonomously learning AI machines should be modeled on how we train and educate human beings. Likewise, those responsible for machine “training” need to be held to account, just as computer programmers are held to account for their work.
What Is Ethical?
This brings us to an overall ethical question that concerns businesses generally, whether they are for-profit or not-for-profit. After all, business is business. Some maintain that a business may pursue any avenue toward profit, fair or unfair, as long as it is legal. Others hold themselves and the businesses they lead to a higher standard.
If law is simply the codification of a culture’s morality, then concepts of moral behavior will generally develop before immoral deviations require punitive legal remedies. It’s axiomatic that you can’t legislate integrity. There will always be some people who push the limits, even veering into criminality.
And then there’s politics. Some believe that with freedom should come consequences for one’s actions. Others believe that a benign government should make everything fail-safe. Still others, often overlapping the first two camps, believe that government has unlimited funds and should fund positive outcomes for everyone.
As soon as ethical questions arise about something new, in this case robots and AI, fears of change surface to cloud the discussion. It’s best to encourage progress without assuming the worst, since fear can lead to socially destructive, inhibiting laws. Let progress unfold naturally, and then hold innovators to account for the consequences of their actions. That will create incentives for caution while allowing a better future for humankind.
Where would we be if fear of crashes had prevented Orville and Wilbur Wright from taking flight? Or, in our time, would we have been wise to fear the chemistry of LED lighting? The more proactively senior living adopts robotics and AI, the more likely the industry is to persevere.