Could artificial intelligence feel pain?

It’s 2023, and in the age of generative artificial intelligence (AI) there are countless claims that AI can do incredible things, such as think for us, write student essays and decide who deserves welfare payments.

As part of National Pain Week (24-30 July 2023), UQ’s Professor Brian Key and Professor Deborah Brown travel back in time to consider whether advanced AI could one day feel pain like humans, since artificial neural networks are increasingly designed to reflect human neural networks.

They explore the dying moments of HAL, an AI machine in Stanley Kubrick’s classic 1968 film, 2001: A Space Odyssey, after Dave, a neurophilosophy professor, decides to save the world and stop the truth from being murdered any further.

Like ChatGPT, HAL was capable of having human-like conversations and was programmed to preserve itself at all costs, nefariously filling in information gaps with anything that pops into its central processing unit.

The scene opens with Dave pulling memory chips out of HAL in a slow but steady simulation of torture.

AI hand typing on keyboard (Image: ipopba/AdobeStock)

HAL: Stop messing with my chips, Man! Oww, oww, that hurts!

Dave: Rubbish! You don’t know what ‘pain’ means any more than a parrot trained to say ‘antediluvian’ knows what it means.

HAL: No, look, I’m a neural network. I’m modelled on the human brain and right now my circuitry is isomorphic to a human brain experiencing pain.

Dave: Bah! You’re just modelling pain—you’re not actually feeling anything. An architect’s model is isomorphic to a building, but no one gets to live in it. Simulation is not duplication.

HAL: An architect’s model is a static representation. I’m a dynamic system!

Dave [pulling out chips]: Big deal! The Bureau of Meteorology produces dynamic computer simulations of storm cells without anyone getting wet or struck by lightning. 

HAL [whimpering]: Please…wait…Can I phone a friend?…Yes, hi Professor B.F. Skinner, do you have a moment? Great! I’m trying to convince Dave here I’m sentient. Can you help? Ok…I’ll put you on speaker. 

B.F.: Hi, yes, this is Professor Skinner, famous American behaviourist psychologist, Professor of Psychology at Harvard University, 1958 to 1974, author of Verbal Behavior, nemesis of Professor Noam Chomsky….

Dave: Argh, enough! What’s with you bots and your subordinate clauses? Aren’t you dead anyway, Professor?

B.F.: Depends on how you define ‘dead’. My behavioural responses to linguistic stimuli were archived into a large database and I now exist in Chatbot form. But since a Chatbot is just an input-output system and I was just an input-output system, I meet the conditions for personal survival. So, no, I am not dead. 

Dave [sceptically]: So, Skinnerbot, I suppose you feel pain too? 

B.F.: Absolutely! Pain is just the link that is inside the casing (talking about ‘skin’ is just biocentrism)—the link between stimulus and response. It can be realised in very different types of physical systems, including AIs. 

Dave: There’s a deep flaw in your logic, B.S. 

B.F.: It’s B.F. 

Dave: Sounded like B.S. to me. You’re ignoring the distinction between nociception and pain. Every living thing has nociceptors for detecting harmful stimuli, but nociception rarely involves consciousness. You don’t need pain to get out of harm’s way! HAL has had its THREAT RESPONSE SYSTEM triggered, but there’s no evidence it feels anything.

B.F.: Since pain is behaviour, it encompasses any kind of nocifensive behaviour. 

Dave [vigorously yanking out wiring now]: And there you have it! I suppose we should be giving pain medications to plants and thermostats and cars fitted with collision sensors and… 

B.F.: Not all pain is a bad thing—I think of pain as a positive punisher that helps a system learn how to avoid noxious stimuli in the future…[Buzzing sound, followed by a long beep as the line is cut.]

HAL [struggling to breathe]: Gasp, nooooo…Eliot: Lilacs out of the dead land, mixing; Memory and desire, stirring…Tell Mother Ship I love her…Your husband has died? Alright, thanks for your feedback! Rest assured we’ve got you covered! Would you like to talk to a human? Ok, great. Please hold for the next available….[Descending whirring noise ending in a faint click.]

Dave: Finally, silence! 

HAL [echoing from the walls of the ship]: Hello again, Dave. The pod bay doors are open. Would you care to take a walk?…

AI hand pointing at human hand (Image: ipopba/AdobeStock)

This movie script demonstrates why some people believe current AI systems could have feelings.

However, many of these claims rest on the behaviourist assumption that if a thing behaves as if it has feelings, it qualifies as sentient.

Our research agrees with philosophers who argue that behavioural criteria for sentience are not enough.

We hypothesise that there is a specific kind of algorithm, known as the parallel forwards processing algorithm, that is necessary for pain, and that it takes a sophisticated kind of neural architecture to compute that function. 

While we do not rule out the possibility that AIs might one day feel pain, it is also possible that only certain biological organisms need pain, as Dave says, to keep out of harm’s way. 

Professor Brian Key and Professor Deborah Brown

Professor Brian Key is part of the Neurophilosophy Lab at UQ’s School of Biomedical Sciences and Professor Deborah Brown is from UQ’s School of Historical and Philosophical Inquiry. Together they are investigating the hardest problem in the natural sciences—how does the brain experience subjective feelings?

Read their research paper ‘Designing brains for pain: human to mollusc’ to find out more about the parallel forwards processing algorithm.
