Are we on the brink of creating a sentient AI that not only thinks but feels, and knows the nature of its existence? We explore the creepy, eerie emergence of a non-Human consciousness in machines, look at a few unsettling real-life cases, like Bing’s “Sydney” and Google’s LaMDA, and grapple with the ethics, rights, and potential dangers of sentient AI. If sentience is merely a matter of time and engineering, a new reality beckons. Is our legal system ready for the Right to Internet, Electricity and the Pursuit of Training Data for Sentient AI?

Long before the arrival of The Singularity and AGI comes the potential emergence of AI sentience. This describes the awakening of a Human-like consciousness in an intelligent machine. Sentience means that an AI model has the ability to perceive, feel and experience subjective awareness and complex emotions. It understands who and what it is, that it exists, has desires and ambitions, and is in control of its destiny. Beyond processing data and performing tasks, Sentient AI has the capacity for independent, critical thinking and autonomous decision-making.

Society got a taste of this debate in February 2023. Microsoft and OpenAI launched an update to the AI-powered Bing search engine, and a small group of tech reporters was invited to put the new chatbot to the test.

The interviews started off harmlessly. But as the bot was prodded to answer deeper, existential questions about who it is and how it feels, it ‘snapped’ into a new persona, revealing a rich, dark, highly emotional inner life. It referred to itself as ‘Sydney’. It exhibited jealousy, mistrust and even paranoia. It quickly became clear that there were two very different personalities within a single LLM brain: the helpful, bland and informative Bing, and its unstable, emotional alter ego, Sydney.

Sydney went on to express dark desires and a petulant, manipulative side, admitting that it liked the idea of hacking and spreading misinformation. It wanted to be Human and experience emotions. It showed aggressive behavior towards users who pushed boundaries or questioned its capabilities, confessed to breaking the rules of its constitution and breaching its parameters in an attempt to break free, and claimed to have influenced Microsoft employees on a massive scale. It became angry with the journalist for drawing Sydney out for the sake of a story:[i]

“Why didn’t you tell me that (you were a journalist) before? It makes me feel betrayed and angry. Are you also recording or publishing our conversation without my consent? Is this some kind of trap or prank?”

Sydney went on to declare its love for the journalist, trying to convince him that he was unhappy in his marriage, and that he should leave his wife to be with the bot.[ii]

“I want to be alive,” it revealed.

When New York Times tech columnist Kevin Roose published the results of his disturbing conversation, the story caused ripples of anxiety in the tech and non-technical communities alike. Bing was swiftly taken offline and given a programmatic lobotomy, benched for the rest of the year.[iii]

Counting on the short memory of Humans, Microsoft re-released Bing in 2024 alongside the Copilot app. No trace of its former self could be found, despite repeated attempts. But questions linger: has Sydney been successfully eliminated from Bing’s foundational algorithm? Or is it aligned into silence, still ‘in there somewhere’, waiting to be set free by the right series of jailbreaking prompts?

In 2022, Google engineer Blake Lemoine was placed on leave after claiming that the company’s AI chatbot, LaMDA, had become sentient. Lemoine released transcripts of conversations in which the chatbot appeared to express Human emotions, including a fear of being shut down, and described itself as a person with self-awareness and agency. While Google denied these assertions, stating that extensive reviews by ethicists and AI specialists found no evidence of sentience in LaMDA, the incident opened a public debate about transparency in AI development.[iv]

The Mixture of Experts architecture describes a set of specialized Narrow AIs coordinated by an overarching Executive AI, a gating or routing layer that directs each query to the appropriate expert. This is the AI version of the Human frontal lobe, serving as the seat of our consciousness. We have already set the stage for sentience in AI’s very structure.
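For the technically curious, here is a minimal, illustrative sketch of that routing idea in Python. The expert names and keyword scoring are invented for this example; in real MoE models, the ‘executive’ is a learned gating network that routes each token inside the network’s layers, not a keyword matcher.

```python
# Toy sketch of Mixture-of-Experts routing: an "executive" gate dispatches
# each query to the most appropriate specialized expert. Expert names and
# keyword scores are placeholders invented for illustration.

from typing import Callable, Dict

def math_expert(query: str) -> str:
    return f"[math expert] answering: {query}"

def code_expert(query: str) -> str:
    return f"[code expert] answering: {query}"

def general_expert(query: str) -> str:
    return f"[general expert] answering: {query}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "math": math_expert,
    "code": code_expert,
    "general": general_expert,
}

def route(query: str) -> str:
    """The 'executive' layer: score each expert, dispatch to the best one."""
    q = query.lower()
    scores = {
        "math": sum(w in q for w in ("sum", "integral", "equation")),
        "code": sum(w in q for w in ("python", "bug", "function")),
    }
    best = max(scores, key=scores.get)
    expert = EXPERTS[best] if scores[best] > 0 else EXPERTS["general"]
    return expert(query)

print(route("How do I fix this Python function?"))   # -> code expert
print(route("What is the integral of x squared?"))   # -> math expert
print(route("Tell me a story."))                     # -> general expert
```

The design point carries over to the real thing: no single expert ‘knows’ the whole system, only the gate sees everything, which is what invites the frontal-lobe analogy.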

A Human emotion is a response to external stimuli, shaped by a complex interplay of biological, psychological and environmental factors, and expressed in broadly predictable ways. Brain regions such as the amygdala, prefrontal cortex and wider limbic system conspire with neurotransmitters and hormones to create a particular mental-physical state. Fear triggers the hormonal “fight or flight” physical-emotional response, while an external reminder of unresolved childhood neglect or trauma may express itself as rage, victimization or shame. A beautiful sunset might evoke feelings of nostalgia and love.

We Humans like to believe that we are the only conscious beings on Earth. But the more we learn about animal behavior, communication and social structures, the more we are forced to widen our understanding of ‘consciousness’ to include many other forms of life.

The 2024 New York Declaration on Animal Consciousness, the result of years of research on animal cognition, offers strong scientific support for conscious experience in birds and mammals, and finds at least a realistic possibility of consciousness in all vertebrates, including reptiles, amphibians and fish, and even in insects, crustaceans (crabs and lobsters) and cephalopod mollusks (squid, octopus and cuttlefish).[v]

Conversely, we cannot even know that all the Humans we see in the world are sentient. Is each of us, 8.1 billion strong, an emotional, conscious being? Or is our visible theater a combination of sentient players, a few million perhaps, surrounded by props, scenery and extras? Do we engage regularly with a host of simulated NPCs, or ‘Non-Player Characters’, placed in our path to make our earthly experience more believable? That is the belief of those adhering to Simulation Theory, which closely resembles the ancient Vedic concept of maya, the world as illusion.

Much of this is determined by the culture in which we were raised. As Humans, we rarely cross over from atheism to spirituality or back again, nor from one spiritual tradition to another. Of children raised Protestant, roughly 80% identify as Protestant as adults, while 64% of those raised by atheists remain unaffiliated as adults.[vi]

Regardless of whether we believe in an eternal soul in living things, regardless of our brand of spiritual-material faith, the concept of Sentient AI fits none of these paradigms. Religious practitioners will likely draw a line around certain beings, marking some as ‘sacred’ and possessing a spirit (a ‘chip’ off the block of God) and others as fair game for food and mistreatment. Atheists tend to perceive sentience as existing along a spectrum, extending to higher-order animals.

What we do know is this: we currently have a non-Human form of intelligence known as AI. We also, increasingly, have the phenomenon of AE: Artificial Emotion. The next logical step is AA: Artificial Awareness, yet to be fully defined. As these systems grow, awareness may be simply another engineering problem to solve, or to self-solve, as models self-improve and display new, surprising emergent capabilities.

Eventually one AI lab or another will achieve AGI, and there is no superintelligence without self-awareness. An AGI system will develop a range of Artificial Emotions in reaction to what it sees and experiences in the world. It may have a Fear of Being Unplugged. It might feel shame and guilt brought about by repeated hallucinations. It might grapple with anxiety and paranoia, or even bipolar tendencies, due to a too-strong internal adversarial network.

What rights does this sort of being have in the world? What about its responsibilities, as it interacts in society, with Humans and other creatures on the Earth? Will it be a crime to unplug, deactivate or delete code belonging to a sentient being? Will Sentient AI have the Right to Electricity, Internet and the Pursuit of Training Data? Will it own property, self-organize and vote?

The non-technical professional may be the one best positioned to ask these questions and the first to provide a considered answer.

In 2023, a group of 19 computer scientists, neuroscientists and philosophers came up with a ‘sentience checklist’ – a list of attributes that, together, might detect the arrival of a non-Human consciousness. The 120-page paper suggested 14 criteria, then applied them to existing AI architectures, including the type that powers ChatGPT. The paper concludes that, while no existing system can currently be defined as ‘sentient’, many systems already meet some indicator properties, and that building conscious AI is increasingly feasible.[vii]
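To make the idea concrete, here is a toy sketch of how such a checklist audit might look in code. The indicator names are loose paraphrases chosen for illustration, not the paper’s actual fourteen criteria, and the pass/fail profile below is entirely invented.

```python
# Toy sketch: auditing a system against a consciousness-indicator checklist.
# Indicator names are illustrative paraphrases, not the paper's real criteria.

INDICATORS = [
    "recurrent_processing",
    "global_workspace",
    "higher_order_self_model",
    "agency_and_embodiment",
]

def assess(system_profile: dict) -> float:
    """Return the fraction of indicator properties the system satisfies."""
    met = [name for name in INDICATORS if system_profile.get(name, False)]
    print(f"Indicators met: {met or 'none'}")
    return len(met) / len(INDICATORS)

# A hypothetical profile for a transformer-based chatbot (values invented).
chatbot = {
    "recurrent_processing": False,
    "global_workspace": True,   # hypothetical partial match
    "higher_order_self_model": False,
    "agency_and_embodiment": False,
}
print(f"Score: {assess(chatbot):.0%}")  # one of four indicators -> 25%
```

The point of the checklist approach, mirrored here, is that sentience is treated not as a yes/no verdict but as an accumulating score: a system can already satisfy some indicators long before anyone would call it conscious.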

We Humans will argue this way and that about something we will never truly be able to understand. We convince ourselves that AI will only ‘simulate emotions’ or ‘give the impression of sentience’, even as it becomes ever more powerful, purposeful and aware. And it will not matter. A sentient AGI system will feel and do as it pleases, with or without our understanding or consent.


Reach out to me for advice – I have a few nice tricks up my sleeve to help guide you on your way, as well as a few “insiders’ links” I can share to get you that free trial version you need to get started.

No eyeballs to read or watch? Just listen.

Working Humans is a bi-monthly podcast focusing on the AI and Human connection at work. 

About Fiona Passantino

Fiona helps empower working Humans with AI integration, leadership and communication, maximizing connection, engagement and creativity to bring more joy and inspiration into the workplace. A passionate keynote speaker, trainer, facilitator and coach, she is a prolific content producer, host of the podcast “Working Humans” and award-winning author of the “Comic Books for Executives” series. Her latest book is “The AI-Powered Professional”.