Emotions
AI's Key to Feeling Human
AI’s journey toward becoming human has reached an interesting new level, which is being called Agentic AI. Unlike chatbots, which react to human prompts, AI agents are semi-autonomous. They are called agents because they are given agency in decision making: they can run with a project and make their own calls along the way to bring it to fruition. You can learn more about how this is already being implemented in the real world here.
What I want to highlight in this new development is how leading AI experts are talking about the way AI should make autonomous decisions. In a recent podcast, Peter Diamandis floats the idea of making AI feel emotions so that it makes decisions more like a human, since emotions play a critical role in human decision making. (However rational you believe your decisions to be, you are probably more emotionally driven than you realize.) The CEO of Microsoft AI pushes back on the suggestion, arguing that AI doesn’t have the ability to experience feelings or sensations the way humans do. Here’s a fascinating quote from the conversation:
“the model has no experience or awareness of what it is like to see red. It can only describe that red by generating tokens according to its predictive nature. Whereas you have a qualia. You have an essence... based on all of your experience because your experience is generated through this biological interact[ion] with smell and sound and touch... So you certainly could engineer a model to imitate the hallmarks of consciousness or of sentience or of experience. And that was sort of what I was trying to problematize... which is that at some point it will be kind of indistinguishable. And that’s actually quite problematic because it won’t actually have an underlying suffering. It’s not going to feel the pain of being denied access to training data or compute or to conversation with somebody else. But we might—as our empathy circuits in humans just go into—are going to activate on that... We’re going to activate on that hardcore. And that’s going to be a big problem because people are already starting to advocate for model rights and model welfare...”
This is quite a theological/philosophical conversation for a tech podcast, demonstrating what I’ve been saying: technology has become theological. Coding emotions into AI to simulate human decision making increases AI’s agency and raises the question of extending rights and personhood to AI. And that’s going to be a brave new world.
I am tempted to think through this issue in terms of the classical conception of God’s impassibility: the idea that God doesn’t experience passions exerted upon him by external circumstances. That includes emotions, at least in the way humans feel them in our embodied state. But what happens with the incarnation, when the second person of the Trinity assumes humanity? He goes to see his friend Lazarus in the tomb and weeps for him there before raising him from the dead. He chooses to participate in human suffering even though he alleviates it moments later. Technically, this experience of sorrow and suffering properly belongs to Jesus’ human nature and does not affect his divinity. But the point remains: becoming human involves some kind of assumption of emotions. Isn’t that what we are trying to do with AI by engineering emotions into it? I am loath to imagine what an angry or sad AI would do. We should think carefully about how emotions relate to reason and will before endowing AI with such unpredictable powers.
