Clarity Compounded


AI Consciousness Delusion

John Lennox cuts through the AI hype with a simple observation: "These machines are not conscious, nor are they ever likely to be. For a very simple reason: nobody knows what consciousness is. So it's silly when people start talking about, oh, it's conscious."

This should be obvious. But it's not.

We're having serious debates about whether AI is conscious, whether it has feelings, whether it deserves rights. Meanwhile, we can't even define what consciousness is.

The Hard Problem of Consciousness

Philosophers call it the "hard problem." Not how the brain processes information; that's the easy problem. But how physical processes give rise to subjective experience. How neurons firing become the feeling of red, the taste of coffee, the experience of being you.

Nobody knows. Not neuroscientists. Not philosophers. Not AI researchers.

We have theories. Integrated Information Theory. Global Workspace Theory. Quantum consciousness. But they're all speculative. None of them explain how matter becomes mind.

So when someone says "this AI might be conscious," they're making a claim about something they can't define, can't measure, and don't understand.

That's not science. That's speculation dressed up as concern.

Why We Want AI to Be Conscious

The consciousness debate isn't really about AI. It's about us.

If AI can be conscious, then consciousness isn't special. It's just computation. And if consciousness is just computation, then we're just machines. Sophisticated, but ultimately reducible to algorithms.

This is comforting to materialists. It means there's no soul, no transcendence, no accountability beyond the physical. We're just meat computers running code.

But it's also terrifying. Because if we're just machines, then meaning is an illusion. Love is just oxytocin. Morality is just evolutionary programming. And when we die, we're just turned off.

So we project consciousness onto AI. Not because the evidence supports it, but because we want to believe consciousness is mechanical. That it can be replicated. That we can build it.

The Real Danger: Deception

Lennox shifts the focus: "The danger of AI, even the stuff that's being used now, is fearsome in terms of deep fakes and deception."

This is the actual problem. Not consciousness. Deception.

AI doesn't need to be conscious to deceive. It just needs to be convincing. And it's getting very, very convincing.

Deepfakes and the Erosion of Truth

You can't trust what you see anymore. Videos can be fabricated. Voices can be cloned. Images can be generated. And the average person can't tell the difference.

This isn't theoretical. It's happening now. Political deepfakes. Celebrity deepfakes. Scam calls using cloned voices. Fake news that looks real because it uses real footage that's been manipulated.

The "five I's"-the top AI researchers-are warning about this. Not consciousness. Deception. The chaos that follows when truth becomes impossible to verify.

Jesus' Warning

Lennox connects this to Jesus' statements about his coming: "Be careful, because of deception. It's the major problem."

Matthew 24:4: "Watch out that no one deceives you."

This wasn't about AI. But it's about the pattern. When truth becomes negotiable, when deception becomes easy, chaos follows.

We're entering an era where deception is not just easy; it's automated. Scalable. Indistinguishable from reality.

And we're not ready for it.

Technology Moves Faster Than Ethics

Lennox observes: "The technology moves much faster than the ethics."

This is the core problem. We build first, ask questions later. We optimize for capability, not responsibility. We celebrate innovation without considering consequences.

And by the time we realize the ethical implications, the technology is already deployed. Already causing harm. Already too embedded to easily remove.

The "We Shall Be As Gods" Problem

Lennox identifies the driver: "The technology is partly driven, I believe, by 'we shall be as gods.' In other words, if we can do it, we should do it and bother the consequences."

This is Genesis 3:5 playing out in real time. The serpent's promise: "You will be like God."

Not "you will understand God." Not "you will serve God." You will be God.

And the modern version is the same: if we can build it, we should build it. If we can create artificial intelligence, we should. If we can manipulate reality, we should. If we can transcend our limitations, we should.

The consequences? We'll figure that out later.

This is a very dangerous attitude. Because somewhere in the world, someone's going to try it. And they won't care about the ethics. They'll care about being first.

Why Christians Should Be in AI

Lennox makes a practical suggestion: "Christians who are scientifically minded, it would be a good thing for them to go into AI so that they can sit at the ethics table."

This isn't about stopping progress. It's about shaping it.

Someone needs to be in the room asking: just because we can, should we? What are the consequences? Who gets harmed? What safeguards do we need?

And those questions need to be asked by people who believe in objective morality. Who believe humans have inherent dignity. Who believe there are limits to what we should do, even if we can do it.

The Ethics Table

The ethics table is where decisions get made. Not about whether the technology works, but about whether it should be deployed. What constraints should exist. What uses should be prohibited.

Right now, that table is dominated by people who believe ethics are subjective. That morality is just preference. That the only real constraint is legality.

We need people at that table who believe differently. Who believe some things are wrong even if they're legal. Who believe human dignity isn't negotiable. Who believe there are moral absolutes that technology can't override.

What We Should Do

Stop Talking About AI Consciousness

It's a distraction. We don't know what consciousness is. We can't measure it. We can't replicate it. And debating whether AI has it is wasting time we should spend on actual problems.

Focus on Deception

This is the real danger. Deepfakes. Misinformation. The erosion of truth. The chaos that follows when you can't trust what you see or hear.

We need technical solutions: watermarking, verification systems, detection tools. But we also need cultural solutions: media literacy, critical thinking, healthy skepticism.
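One of those technical solutions, verification, can be illustrated with a minimal sketch. The idea: a publisher attaches a cryptographic tag to a media file when it is released, and anyone with the key can later check that the file has not been altered. This example uses a shared-secret HMAC for simplicity; real provenance systems (such as the C2PA standard) use public-key signatures and embedded metadata instead, and the key and file contents below are purely hypothetical.

```python
import hashlib
import hmac

def sign_media(data: bytes, key: bytes) -> str:
    """Return an HMAC-SHA256 tag over the media bytes."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check of the tag; False means the file changed."""
    return hmac.compare_digest(sign_media(data, key), tag)

# Hypothetical publisher key and media content, for illustration only.
key = b"publisher-secret-key"
original = b"...original video bytes..."

tag = sign_media(original, key)

print(verify_media(original, key, tag))             # True: file is untouched
print(verify_media(b"manipulated bytes", key, tag)) # False: file was altered
```

The point is not the specific algorithm but the shift it represents: instead of asking "does this look real?", a viewer can ask "does this carry a verifiable chain back to its source?" Detection tools chase fakes after the fact; verification makes authenticity checkable up front.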

Slow Down

The technology moves faster than the ethics. So slow down. Build in safeguards before deployment, not after. Ask the hard questions before releasing the product, not after it causes harm.

This won't happen naturally. The incentives push toward speed. First mover advantage. Market dominance. Competitive pressure.

Someone needs to pump the brakes. And that someone should be people who care about more than profit.

Sit at the Ethics Table

If you're a Christian with technical skills, go into AI. Not to stop it, but to shape it. To ask the questions that need asking. To advocate for constraints that protect human dignity.

The technology will be built. The question is: who's in the room when decisions get made?

The Pattern Repeats

Lennox's insight is that the Biblical pattern keeps repeating. The desire to be like God. The promise of transcendence. The problem of deception. The consequences of unchecked ambition.

We think we're different. We think our technology makes us special. We think this time will be different.

But the pattern holds. Because human nature hasn't changed. We're still the same creatures who built the Tower of Babel. Who ate the fruit. Who believed the serpent's promise.

The technology is new. The temptation is ancient.

The Choice

We can build AI with humility, recognizing our limitations and building in safeguards. Or we can build it with hubris, believing we're smart enough to handle whatever consequences arise.

We can prioritize ethics alongside capability. Or we can prioritize speed and figure out the ethics later.

We can have people at the ethics table who believe in objective morality. Or we can leave it to people who believe ethics are just preferences.

These aren't technical questions. They're moral questions. And they need to be answered before the technology is deployed, not after.

Because once it's out, it's too late. The deception is already spreading. The chaos is already starting. The consequences are already unfolding.

And we'll discover, once again, that the Biblical warnings were right.

Not because they predicted the technology. But because they understood human nature.

And human nature doesn't change. No matter how sophisticated our tools become.
