
Could an AI Discover the Existence of God?

  • Writer: Jared Martin
  • Apr 13, 2021
  • 5 min read

In 2016, AlphaGo, an advanced computer program developed by the British company DeepMind, defeated Lee Sedol 4 games to 1 in the ancient strategy game Go. (There's a documentary about AlphaGo that's pretty interesting, if you ever have the time.) This was notable for two reasons. First, it was the first time any computer program had played Go at a level beyond the best human players. But more interesting was the way AlphaGo had learned the game: DeepMind had not hand-coded its strategy. Instead, they 'fed' the algorithm thousands upon thousands of high-level human games, and from analyzing those millions of moves, the program gradually learned which moves worked (moves that more often led to a win) and which moves didn't.
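To make that idea a little more concrete, here is a toy sketch in Python of what "learning which moves worked" can look like. This is emphatically not DeepMind's actual method (AlphaGo trained deep neural networks); it's just the simplest version of the same instinct: go through a pile of finished game records, tally how often each move in each position was played by the eventual winner, and then prefer the moves with the best track record.

```python
# A toy illustration of learning from game records (not DeepMind's actual
# method, which trained deep neural networks): for every (position, move)
# pair seen in a batch of finished games, track how often the player who
# made that move went on to win, then prefer moves with the best record.
from collections import defaultdict

def learn_move_stats(games):
    """games: iterable of (moves, winner) pairs, where moves is a list of
    (position, move, player) tuples and winner is 'black' or 'white'."""
    wins = defaultdict(int)    # (position, move) -> games the mover went on to win
    plays = defaultdict(int)   # (position, move) -> games it appeared in
    for moves, winner in games:
        for position, move, player in moves:
            plays[(position, move)] += 1
            if player == winner:
                wins[(position, move)] += 1
    return {key: wins[key] / plays[key] for key in plays}

def best_move(stats, position, legal_moves):
    """Pick the legal move with the highest observed win rate
    (unseen moves default to 0.5, i.e. 'no opinion')."""
    return max(legal_moves, key=lambda m: stats.get((position, m), 0.5))
```

Scale that tallying idea up from a lookup table to a deep neural network that can generalize to positions it has never seen, and you have, in very rough shape, how the original AlphaGo bootstrapped itself from human games.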


The very next year, DeepMind developed AlphaGo Zero, an improved version of AlphaGo built on the same concept, except that instead of learning from historical human games, AlphaGo Zero played itself, over and over, learning everything about the game from scratch. In mere days, it went from knowing nothing except the rules to beating the original AlphaGo 100 games to 0. It did this with no human game data at all; it learned all its strategies on its own.
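For a flavor of what "learning by playing itself" means, here is a minimal self-play sketch, again my own toy in Python rather than anything resembling AlphaGo Zero's real combination of deep networks and Monte Carlo tree search. It uses the simple game of Nim instead of Go, keeps a table of move values, and nudges those values toward the final result of each game it plays against itself. The program starts knowing only the rules; everything else comes from the outcomes of its own games.

```python
# A minimal self-play sketch (a toy of my own, not AlphaGo Zero's algorithm):
# learn Nim (take 1-3 stones per turn; whoever takes the last stone wins)
# purely by playing against yourself and nudging a table of move values
# toward each game's final result.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)          # (stones_left, action) -> estimated value

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, epsilon=0.1):
    if random.random() < epsilon:                             # explore sometimes
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])   # otherwise exploit

def self_play_episode(start=15, alpha=0.5):
    stones, history = start, []             # alternating players share one brain
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move took the final stone and wins (+1);
    # the opponent's moves are scored -1. Update each move's value toward
    # the outcome the mover eventually got.
    for i, (state, action) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, action)] += alpha * (reward - Q[(state, action)])

for _ in range(50_000):
    self_play_episode()

# Ideally the greedy policy learns to leave a multiple of 4 stones whenever
# possible (the known optimal strategy for this version of Nim); exact
# results vary with the training parameters above.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, 16)})
```

The specific game isn't the point. The loop of "play yourself, see who won, adjust your preferences, repeat" is, in broad strokes, what let AlphaGo Zero surpass teachers it never had.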


DeepMind then generalized the approach with AlphaZero, which applied the same idea to other perfect-information games such as chess and shogi; again completely self-taught in a matter of days, it quickly outclassed even the strongest chess engines yet developed.


It is, on the surface, both exciting and terrifying that these machines we have made can so easily out-think us. And it was essentially unprecedented: computers had beaten humans at games like chess before, but those programs were still written and created by humans. Humans wrote Deep Blue, and Houdini, and Stockfish, gave them strategy, and told them what to look for. It was the sheer processing power of the computer that allowed it to evaluate enough positions, deeply enough, to defeat the human mind. AlphaZero was different. It had not been taught strategy by humans at all; it had discovered everything on its own. That is what made it a revolutionary step forward: eminently fascinating, but also a little scary.


This raises the question: what else could we apply AI to? Could it solve a Rubik's Cube? Easily. Could it trade stocks? Yes. Could it write news articles, or even poetry? Apparently. Can it play poker? Yes, and poker is different from chess or Go in that it's not a perfect-information game; there is quite a bit of information you don't know, such as which cards your opponents are holding. What else could AI solve?


In a sense, it is far easier for an AI to master chess or Go than it would be to master real life, simply because real life is far more complex. Chess feels complex to us as humans, and -- well, it is, because the astronomical number of combinations on its sixty-four squares is difficult to analyze in depth; the patterns in the game are so intricate that the human mind cannot comprehend them all.


But the problem for AI is that real life is far, far more complex than chess. Leave any emotional or spiritual factors out for the moment: how many material factors do you have to evaluate and deal with every day? In just getting out of bed, taking a shower, and getting dressed, you perform hundreds of different actions: identifying objects, picking them up and using them (which often requires precision within fractions of an inch), planning ahead to collect objects (such as towels) that you won't use right away, and complex physical actions such as putting on a shirt. These are all things we take for granted, things we hardly think about. We can walk down a hallway without considering the balance and stride of every step; we can recognize objects and individuals even if aspects of their physical appearance change; we can create backup plans on the fly if our information or the environment suddenly changes. Solving a Rubik's Cube is far easier, in many ways, than just living and functioning for one day as a human being (even in the twenty-first century).

And that's the problem AI faces now. Our world is so complex, so involved, that the information and processing power needed to master it remain far beyond AI's current reach.

But as AI masters human tasks, one at a time, we must ask: will that time one day come?

What if AI gradually strengthens to the point where it goes beyond single, controlled problems (such as board games or even generating prose) and can understand and function in real life as well as, or better than, a human? That's the long-term goal many AI researchers have. Right now we have what's called 'narrow AI': AI that focuses on a single task, such as driving a car or playing a game. The goal is to go from that to 'general AI': a machine with general intelligence that it can apply to any problem, just as humans do. And eventually, perhaps, that could become a superintelligence. What if we created the ultimate algorithm, fed it every piece of information we have about the universe, and let it learn to understand the universe itself? What if, instead of applying machine learning to things like chess or poker, we applied it to the entire world?


There are, of course, numerous problems that could arise. For example, chess has a clear goal: win the game within the rules, no matter how or why. What is the 'goal' of life? What counts as winning? What are the rules? In life, the 'how' and the 'why' matter very much indeed. Hypothetically, such an AI would have to learn those things on its own; it would be interesting to see what it came up with.

Fortunately, we are far from this realistically happening, and it may never happen at all. But it's an interesting thought, and one with serious ramifications, for there are all sorts of questions we could ask such a machine.

That brings us to the fundamental question in this post: could that AI discover the existence of God?


What if we asked this AI where the universe came from? What would it conclude was the most likely origin of the known universe? Would it be the Big Bang, intelligent design by some Creator, or something else?


This ties into something else: I believe, merely on the basis of the complexity and balance of the world we live in, that intelligent design by some creator is mathematically probable, and it would be fascinating if an AI concluded something similar. This isn't necessarily the Christian God, per se; there's little logical difference between intelligent design by a God and intelligent design by aliens running our world as a computer simulation, an idea the scientist Neil deGrasse Tyson has publicly entertained. But what would this AI conclude about Jesus? Would it be a Deist? An atheist?


Probably an agnostic, if we're being completely honest. But there are so many challenging moral questions we could put to such a superintelligence that it would be impossible to list them all. What is the meaning of life? What makes humans happy? What is the best system of government? What about healthcare? It's a fascinating concept to consider, and one we cannot yet fully answer. And maybe that is for the best.
