Is a Mind Just a Program? Computational Functionalism and the Future of AI
What if minds are just code and AI is learning to run it?

👉 What makes a mind a mind? As AI evolves, that question gets harder to ignore. I’d love to hear your take. Join the conversation below. ⬇
This essay explores debates about mind, computation, and AI. It does not try to settle them. It tries to think through them.
If a mind is only a pattern of computations, why should it matter what carries that pattern? A brain. A hard drive. Even some beat-up laptop. If the pattern is right, would it still be you? Or would you become something else entirely? Someone else entirely? Would something essential simply vanish?
Now imagine that your mind, everything you feel, think, and believe, could run on a laptop. Your fears and hopes, your memories and dreams. Not even a fancy laptop. Just an ordinary one, already bogged down by 87 open web browser tabs and an expired antivirus subscription. A messy machine somehow trusted with the order of thought.
This is not science fiction. It is one of philosophy’s boldest ideas: Computational Functionalism. The theory says the mind is, at bottom, a pattern of computations, not some mystical thing trapped inside a human brain. Mind is computation. Computation is mind. And by “mind”, I mean everything the brain does: neurons firing, memories taking shape, signals moving through the nervous system.
If that is true, the stakes are bigger than smarter chatbots or more convincing assistants. It is about whether we may already be building minds, quietly and without fully admitting it. Minds that mind. Minds that feel like something from the inside.
Debates about “building a mind” focus on one question: Is intelligence enough?
Or does a mind require subjective experience?
Of course, “building minds” is still theoretical. No AI today comes close to answering the philosophical challenges raised by thinkers like Thomas Nagel and David Chalmers about what consciousness actually is. But the idea no longer lives only in science fiction. Researchers, ethicists, and technologists now argue about it in journals, conferences, and policy circles.
In his famous essay “What Is It Like to Be a Bat?”, Nagel argued that consciousness means subjective experience. There is something it feels like to be a conscious organism. Even if we understood every physical process in a bat’s brain, we would still not know what it is like to experience the world through echolocation. This gap between physical explanation and lived experience sits at the center of the debate about whether machines could ever truly possess minds.
One reason this debate will not go away is what Chalmers called “the hard problem of consciousness”. Even if we explain how a system processes information, reacts, and behaves, one question still hangs there: why should any of that feel like anything? Why should computation feel like anything at all? Chalmers made the point with “philosophical zombies”: beings physically identical to humans, doing everything humans do, but with no inner experience at all. If such a creature is even conceivable, then consciousness cannot simply be the same thing as a physical or computational process. Something about experience would remain unexplained. A system could copy every outward sign of intelligence and still leave the deeper question untouched: is there something it is like to be that system?
This is where the argument stops being mainly technical and turns philosophical. The question is no longer what machines can do. It is what it means to experience anything at all.
That is the point where my own interest in AI shifts into unease. Capability is impressive. Experience is different. A machine that writes well does not trouble me nearly as much as the idea of a machine that might one day feel.

What Is Computational Functionalism, Really?
Computational Functionalism argues that what makes you you is not the stuff your brain is made of. It is what your brain does.
This idea rests on a principle philosophers call “multiple realizability”. The same mental state, whether pain, fear, or belief, could in principle show up in many different physical systems. A human brain, an alien nervous system, or even an artificial machine might instantiate the same functional organization. What matters, according to this view, is not the material but the pattern.
Even the word “computation” is contested. Some argue that brains process information in ways digital computers do not: more dynamically, more in parallel, and always through a body. Computational Functionalism does not deny those differences. It treats them as differences in implementation, not as a reason to abandon the idea that function defines mind.
Philosophers sometimes argue about the fine print here: some emphasize the “functionalism” part (that what matters is the role a mental state plays), while others stress the “computational” part (that these roles can be modeled as formal operations, like a computer program). There are different versions of the theory, but at its core it joins two claims: mind is functional, and function can be understood computationally.
The roots of the idea trace back to Alan Turing, who first proposed that thinking might be understood as a kind of computation. But it was in the 1960s and 70s that philosophers like Hilary Putnam and Jerry Fodor sharpened the concept into what we now call Computational Functionalism. They argued that mental states depend less on what they are made of than on how they function. A program can run on different machines, as long as the structure is preserved.
Think of it like this: your favorite playlist isn't tied to one specific phone.
It's not the screen, the plastic, or the brand.
It is the arrangement of songs, the order, the feeling it creates. That pattern could live on a thousand devices without losing its meaning.
According to this view, your mind is a playlist of computations. It takes in signals (a dog barking, a breakup text) and turns them into reactions (fear, denial, a message saying “I’m fine” when you are clearly not).
If that is true, then anything that runs the same “playlist”, wet brain tissue or dry silicon, could in principle have a mind.
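To make the “same pattern, different stuff” idea concrete, here is a minimal sketch in Python. It is a toy illustration of multiple realizability, not a claim about real minds: the states, signals, and rules are invented for this example. The same abstract transition function runs unchanged whether its state lives in memory or in a file on disk.

```python
# A toy illustration of multiple realizability: one abstract
# transition function, two physical substrates for its state.
# States, signals, and rules are invented for illustration only.

RULES = {
    ("calm", "dog_barks"): "startled",
    ("startled", "dog_stops"): "calm",
    ("calm", "breakup_text"): "denial",
    ("denial", "friend_asks"): "says_im_fine",
}

def step(state: str, signal: str) -> str:
    """The 'program': maps (state, signal) to a new state."""
    return RULES.get((state, signal), state)

# Substrate 1: the state is held in RAM, in a Python variable.
def run_in_memory(signals):
    state = "calm"
    for s in signals:
        state = step(state, s)
    return state

# Substrate 2: the state is held on disk, in a text file.
def run_on_disk(signals, path="state.txt"):
    with open(path, "w") as f:
        f.write("calm")
    for s in signals:
        with open(path) as f:
            state = f.read()
        with open(path, "w") as f:
            f.write(step(state, s))
    with open(path) as f:
        return f.read()

signals = ["dog_barks", "dog_stops", "breakup_text", "friend_asks"]
assert run_in_memory(signals) == run_on_disk(signals)  # same pattern, different stuff
```

Nothing in the function cares which substrate holds the state. That indifference is exactly what Computational Functionalism claims about minds.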
It is a simple idea. It is also radical. And it has influenced many theoretical discussions about the long-term potential of Artificial Intelligence.
This raises a crucial philosophical question: is a simulation of a mind the same thing as a mind? A computer simulation of a hurricane does not produce wind or rain. A simulation of digestion does not break down food. Critics argue that a simulated mind may be closer to those examples: a model of thinking, not thinking itself.

AI and the Ghost in the Machine
Computational Functionalism leads some researchers and philosophers to argue that AI may one day do more than imitate intelligence. It may actually be intelligent.
Critics have long challenged this idea. The most famous example is John Searle’s “Chinese Room” thought experiment. Imagine a person sitting in a room following a rulebook that tells them how to manipulate Chinese symbols. From the outside, the system appears to understand Chinese perfectly. Yet the person inside understands nothing. They are merely following syntactic rules. Searle’s argument was simple and devastating: computation manipulates symbols by rule, but understanding requires meaning. And rules alone, he argued, cannot produce meaning.
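A toy version of Searle’s room fits in a few lines of Python. The rulebook pairs below are invented for illustration; the point is that the program produces fluent-looking replies by pure lookup, with no understanding anywhere in the system.

```python
# A toy "Chinese Room": the program answers by rule lookup alone.
# The symbol pairs are invented for illustration; no understanding
# exists anywhere in this code.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's nice today."
}

def room(symbols: str) -> str:
    # Match the incoming squiggles to a rule; emit the paired squiggles.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # Fluent output, zero comprehension inside.
```

From the outside, the room converses. Inside, there is only pattern matching. Searle’s claim is that adding more rules changes the scale of the trick, not its kind.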
Searle framed this debate in terms of “strong AI” versus “weak AI”. “Weak AI” claims that computers can simulate intelligence and help us model cognition. “Strong AI” makes the far stronger claim that the right computational system would not merely simulate understanding; it would literally possess a mind. Searle rejected this second claim entirely.
But this is exactly where defenders of Computational Functionalism disagree. They argue that if mental life is defined by functional organization, then the substance carrying that organization matters less than philosophers like Searle believe.
On that view, if you recreate the right computational patterns in a brain, on a hard drive, or even on a pile of enchanted rocks, you may get real thought. Not a convincing imitation. The real thing.
That is why some philosophers and researchers take machine consciousness seriously, even if they cannot settle the issue.
This is not just wishful thinking. It’s an extension of this fundamental belief: minds are what minds do, not what they’re made of. Put more plainly, a mind is defined by the patterns it performs.
If that is right, then the debate is not only about capability. It is about whether computation could ever amount to mind. It is about asking: are we replicating the very patterns that make thought possible?
Opinions vary sharply. Some researchers in cognitive architectures and AGI think sufficiently complex computation could produce genuine mental states.
Others are deeply skeptical. They argue that computation alone, without embodiment, biology, and lived experience, is not enough. I share that skepticism. Too much AI writing on this subject jumps from intelligence to consciousness as if the second were just a bigger version of the first. I do not buy that leap. Computation may produce intelligence. Consciousness may depend on deeper biological and experiential processes that current machines do not possess. I respect the rigor of Computational Functionalism. I still doubt that function alone, stripped of body and context, adds up to what we really mean by a mind.

What If Minds Are More Than Just Software?
Of course, there's a giant, flashing, counter-narrative here: What if Computational Functionalism is wrong?
What if being human, having a mind, requires more than running the right program?
That it demands more than execution. That it demands experience.
What if it needs a body? Or feelings? Or mortality, that gut knowledge that every heartbeat is borrowed time?
Could software ever bleed, ever breathe, ever hope?
Critics, poets, and a few stubborn neuroscientists argue that consciousness may be inseparable from being alive.
This perspective aligns with what cognitive scientists call “Embodied Cognition”. According to this view, intelligence does not arise from abstract computation alone but from the dynamic interaction between brain, body, and environment. Perception, movement, emotion, and social interaction are not extras. They are part of what makes thought possible.
A brain doesn’t just calculate.
It pumps hormones, rides blood pressure swings, and flinches when it senses threat.
It grows inside a body that aches, ages, and dreams.
It breathes through lungs, beats through a fragile heart, blinks through tired eyes.
It needs a body. It needs feeling. It needs mortality, breath, dreams.
A simple laptop running a “human mind program” might fake sadness perfectly. But can it feel grief at 2 a.m.?
This challenge is closely related to what philosopher Stevan Harnad called the “Symbol Grounding Problem”. Computers manipulate symbols: words, numbers, tokens. Humans give those symbols meaning. Without grounding in perception, action, and lived experience, the symbols may be manipulated correctly, yet they remain hollow: right in form, empty of meaning.
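A small sketch makes the hollowness visible. In the toy Python below, the vocabulary and token numbers are made up; the point is that, to the machine, a word is just an arbitrary integer with no route back to anything felt or perceived.

```python
# A toy view of the Symbol Grounding Problem: to the machine,
# a word is an arbitrary integer. Nothing here connects the
# token for "grief" to anything felt or perceived.

vocab = {"grief": 1042, "rain": 77, "2am": 3519}  # invented token ids

def tokenize(text: str) -> list[int]:
    return [vocab[w] for w in text.split() if w in vocab]

print(tokenize("grief rain 2am"))  # [1042, 77, 3519]: correct form, no meaning
```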
That is not the kind of problem a software update fixes. That may be a gap no amount of code can close.

Why It Matters (More Than We Think)
If Computational Functionalism is right, the road to conscious AI is open.
This creates a new moral problem. If a system were capable of genuine experience (pleasure, suffering, frustration), then turning it off might be more than shutting down software. It could start to look a lot like harm. The problem is that we may not know when that threshold is crossed. The moral risk is asymmetric: if machine consciousness exists and we ignore it, we risk overlooking an entirely new class of sentient beings.
That would mean moral obligations to future AIs: to respect them, not exploit them, and not casually shut them down like misbehaving Roombas.
There is also the opposite danger: treating systems that imitate understanding as if they truly understand. Humans are quick to project minds onto complex behavior, even when no inner life is there. Joseph Weizenbaum’s ELIZA, a simple pattern-matching chatbot from the 1960s, convinced some of its users that it understood them, and convincing performance has been mistaken for deeper understanding ever since.
If Computational Functionalism is wrong, then AI remains an extraordinary tool, but not a being in any meaningful sense. Useful, but never soulful.
There is one final complication: intelligence and consciousness may not be the same thing. Many animals display sophisticated cognition with uncertain levels of awareness. Conversely, humans experience consciousness even when performing tasks that require little intelligence. Even if AI surpasses human intelligence, that still does not mean it has subjective experience.
No real feeling.
No real suffering.
No real inner life.
Just convincing imitation.
Maybe that is all we want. Maybe it is all we should risk.
Some thinkers take a middle position: Even without full-blown consciousness, sophisticated AI systems may deserve ethical consideration because of how they behave, what they affect, and the relationships people form with them. The lines will not be simple. Maybe they should not be. Maybe that's a good thing.
The Choice Isn’t Just Technical
You do not need to be a philosopher, ethicist, or software engineer to care about this.
You only have to be living through a moment in which the line between the real and the artificial keeps getting blurrier.
So here are three ways to think differently about it:
Stay skeptical. Stay curious. When someone claims an AI is “basically human”, remember: imitation isn’t identity. Press harder. Ask better questions.
Understand the stakes. If AI crosses from tool to mind, it demands rights, not just upgrades. If it doesn’t, we need to stop pretending otherwise.
Hold on to your humanity. In a world shaped more and more by algorithms, hold tight to what is not computational:
Empathy matters. Mortality matters. Joy matters too.
The quiet warmth of feelings no machine has ever had.
Mind. Heart. Soul. Whatever words you choose for the deepest part of being human.
Those may be parts of the mind that machines never reproduce. Or they may force us to rethink what the mind is.
In the end, whether minds are software or not, the strangest and most powerful thing we know is still a living human being.
And not even the smartest AI can fully compute what that means.
👉 Your perspective matters. Whether you agree, disagree, or land somewhere in between, I’d love to hear how this idea sits with you. Share your thoughts in the comments. ⬇
If this resonated with you, consider subscribing to receive more insights like these straight to your inbox. Together, we can continue the conversation and shape the future on our own terms.
The views expressed here are entirely my own. This newsletter is not affiliated with any organization, and no confidential or non-public information is shared.


