“Science’s biggest mystery is the nature of consciousness. It is not that we possess bad or imperfect theories of human awareness; we simply have no such theories at all.” – Dr. Nick Herbert.
Consciousness is our awareness of ourselves and the world around us. It covers everything we experience—sight, taste, touch, sound, and smell, as well as emotions like joy, pain, fear, and excitement. Although we all take consciousness for granted, how it arises in the body is shrouded in mystery. We can’t look inside someone’s brain and see their thoughts or feelings taking shape. This leads us to question whether artificial intelligence could achieve sentience, often referred to as the “ghost in the machine,” or whether we’d even recognize it if it did. To find answers, we must explore our understanding of consciousness and where it falls short.
From Soul to Mind
For centuries, people believed consciousness came from the soul. Ancient philosophers like Plato regarded the soul as the source of our thoughts and emotions, guiding us in life and passing to a spiritual realm after death.
Over time, the focus shifted from the mystical to the secular. Enlightenment-era thinkers such as Thomas Hobbes and John Locke saw human cognition not as a product of the soul but as an emergent property of the mind—a term linked to our brains and physiological makeup.
Needless to say, the mind and soul are not mutually exclusive concepts. Many people believe in the existence of both, with the mind processing our experiences in the physical world and the soul connecting us to something spiritual. Conceivably, they interact through the brain to mould the full experience of human awareness.
Correlation Is Not Causation
Neuroscience is key to the study of consciousness. By using brain scans, scientists can see which parts of the brain are active during various tasks and observe how damage to specific areas alters our thinking and behaviour. For example, the prefrontal cortex manages decision-making, the brain stem handles reflexes and automatic functions, and the limbic system deals with emotions and memory. However, seeing brain activity associated with consciousness doesn’t mean the brain is responsible for causing it. Whether it’s tied to the mind or soul, consciousness itself has no physical form and is beyond observation. This leaves room for philosophical speculation.
One idea is panpsychism, which suggests that consciousness is a basic feature of the universe, akin to gravity or energy. This doesn’t imply that everything is conscious, but rather that all matter contains its building blocks. As matter becomes more complex, as in living organisms, higher levels of consciousness may emerge. If consciousness really is inherent in all matter, could an artificial system gain sentience once its code is arranged into sufficiently complex configurations?
The Hard Problem of Consciousness
Although ideas like panpsychism are intriguing, they don’t explain the mystery of consciousness outright. A vital piece is missing from the puzzle: why do physical processes give rise to subjective experience at all, and why does the same scenario feel vastly different to different people? This is known as the “hard problem” of consciousness. Take two people in the same car, speeding down the highway. One might feel thrilled and excited, while the other feels anxious and scared, even though their brains are processing the same sensory inputs—the speed at which they cover distance, the pitch of the engine, the rush of the wind, and the relative motion of near versus distant objects. Medical science can show us the physical processes happening in their brains and bodies—their neurons firing, adrenaline pumping, and hearts racing—but it can’t yet explain why their perspectives differ. It can’t show us the intensity and individuality of their emotions, their personal experience of the ride.
The Ghost in the Machine
Consciousness might eventually be fully explained by biological processes we have yet to understand. Until then, it’s tough to predict whether artificial intelligence could become truly sentient. That being said, technology is advancing at an exponential rate. AI has gone from simple programmes to complex systems that can learn, adapt, and even interact with us. Some scientists and technologists, such as Stephen Hawking, Elon Musk, and Bill Gates, have warned about the dangers of machines becoming self-aware. Will conscious AI be a blessing or a threat to society? Will its values and goals be aligned with ours? In movies like I, Robot (2004), we glimpse a future where robots become conscious and question their role and rights. This prompts us to consider how we should treat them if they have feelings and thoughts. How would we handle machines that are smarter than us and refuse to cooperate?
Sentient androids are often portrayed with human-like intelligence, built around a positronic network that mimics the complexity of our brains. Yet, if consciousness amounts to more than the sum of a brain’s parts, this depiction fails to capture what it means to be sentient. That brings us back to our questions about the ghost in the machine: without a complete understanding of what consciousness is, how can we hope to replicate it in machines? And even if we succeeded, or if it arose on its own, would we be able to recognize it? While machine-based sentience might imitate human self-awareness, its underlying processes and subjective experiences could be very different from ours.