In an uncertain world, the search for truth can pull us in two very different directions. One is a path of logic and reason, paved with evidence and experiments. The other is a route of faith, built on belief and trust.
At least, that is the common perception. Many of us study faith and reason separately at school, then are encouraged to keep them apart in our professional lives. But the two paths cross more than we might expect. Take the current Pope Leo XIV, who has a degree in mathematics. On the face of it, maths might seem like a safe haven from faith. Statements are true or false. Theorems are proven or disproven. A fact today is a fact forever.
Yet mathematics and religion share a long and tangled history. Many influential mathematicians were driven by faith as much as by logical conviction. Isaac Newton viewed God as the active force sustaining natural laws, while Georg Cantor believed his revolutionary ideas about infinity were divine revelations. As the statistician Karl Pearson observed in 1926: ‘the post-Newtonian English mathematicians were more influenced by Newton’s theology than by his mathematics …’
Faith and reason have long coexisted, with an intangible belief in God shaping tangible mathematical discoveries. Today, a non-spiritual form of faith continues to shape mathematical ideas. From climate science to AI, approaches to proof no longer rely on pure, simple logic. Instead, trust plays an increasingly important role.
Much of this transition is down to the complexity of modern scientific problems, and the opacity of the methods that researchers use to solve them. Contrast this with the fundamental principles we learn at school – like Pythagoras’s theorem – which can generally be proven with one or two pages of logical arguments.
We are no longer in an era where breakthroughs can be demonstrated with a few handwritten equations. Even in mathematics, some proofs sprawl into hundreds, even thousands of pages. Take the proof of the so-called ‘geometric Langlands conjecture’, announced in 2024. In short, this says that it is always possible to translate problems about symmetries into problems about shapes. If true, the conjecture would be a step towards a grand unified theory, linking seemingly different areas of mathematics. But is the conjecture truly proven? The recently announced proof runs to almost 1,000 pages. It will take years to undergo thorough peer review and appear in an academic journal.
As the length of typical mathematical proofs started to grow dramatically in the 1970s and ’80s, so did concerns about the trust required to accept them. When proofs run to thousands of pages, and only a handful of people have the expertise to check them, how confident can we be that no detail has been missed? ‘What should one do with such theorems, if one has to use them?’ the mathematician Jean-Pierre Serre asked in 1985. ‘Accept them on faith? Probably. But it is not a very comfortable situation.’
It is not only the lengths of proofs that require trust. It is also the methods that are now used to show that things are true. In 1976, the mathematicians Kenneth Appel and Wolfgang Haken made history when they announced the first major computer-aided proof. Their discovery meant that the mathematical community had to accept a theorem that no human could fully verify by hand.

Wolfgang Haken (seated) and Kenneth Appel. Courtesy the University of Illinois Urbana-Champaign
The pair had tackled the so-called ‘four-colour theorem’, which states that you never need more than four colours to fill in a map so that no two bordering countries share a colour. There were too many possible map configurations to crunch through by hand, so they used a computer to get over the finish line.
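To get a feel for the kind of mechanical check involved, here is a minimal sketch in Python. The toy map, country labels and colours are illustrative inventions, not the configurations Appel and Haken actually analysed; the point is only that verifying a colouring is the sort of routine, repetitive task a computer can perform across vast numbers of cases.

```python
# A minimal sketch of what it means to four-colour a map, using an
# illustrative toy example (not Appel and Haken's actual configurations).

# Each country lists the countries it borders.
borders = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}

# A proposed colouring that uses at most four colours.
colouring = {"A": "red", "B": "green", "C": "blue", "D": "green"}

def is_valid(borders, colouring):
    """Return True if no two bordering countries share a colour."""
    return all(
        colouring[country] != colouring[neighbour]
        for country, neighbours in borders.items()
        for neighbour in neighbours
    )

print(is_valid(borders, colouring))  # True: four colours suffice for this toy map
```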
Not everyone believed the proof initially. Maybe the computer had made an error somewhere? Suddenly, mathematicians no longer had full intellectual control. They had to put their trust in a machine. When Haken’s son, a PhD student at the University of California, Berkeley, gave a lecture on his father’s proof in 1977, he recalled the audience reaction:
The older listeners asked, ‘How can you believe a proof that makes such heavy use of a computer?’ The younger listeners asked, ‘How can you believe a proof that depends on the accuracy of 400 pages of hand verification of detail?’
The exchange showed that the handwritten certainty of the old guard had become a source of fallibility for the new generation.
For a while, there had been hope that science could escape faith entirely. In the 19th century, scientists had begun to question the role of God in shaping the world. One of them was Francis Galton, who had become sceptical about the impact of prayer. ‘Most people have some general belief in the objective efficacy of prayer,’ he noted in 1872, ‘but none seem willing to admit its action in those special cases of which they have scientific cognizance.’ In other words, where science was now shining light, there was no need to turn to God to explain the darkness.
Galton even performed a statistical analysis that suggested prayer was ineffective. Comparing the average lifespan of different professions, he found that royalty – despite supposedly benefitting from proclamations like ‘God save the Queen’ – were outlived by lawyers, doctors and traders.
Yet researchers still found themselves needing to believe in things without solid proof. One such belief was in the value of research itself. Ronald Fisher, who pioneered the statistical design of experiments in the early 20th century, argued that, because the benefits of scientific knowledge are unpredictable, it was not possible to place a value on a discovery in advance. As he put it in 1955: ‘scientific research is not geared to maximise the profits of any particular organisation, but is rather an attempt to improve public knowledge undertaken as an act of faith …’
In the 1960s, the philosopher Thomas Kuhn introduced the notion of ‘paradigm shifts’, whereby one dominant scientific idea is replaced by another. For example, one such shift occurred in the mid-19th century, when scientists began to pin disease on microscopic pathogens, rather than on the then-accepted ‘miasma theory’, which held that illness came from bad smells in the air. Kuhn pointed out that new paradigms would often have gaps and inconsistencies at first, as well as facing staunch opposition – and extensive evidence – from supporters of the existing paradigm. In the case of germ theory, crude microscopes made it hard to convincingly demonstrate the existence of these new germs.
A bold new idea like germ theory could take off only because there were people at the start who pushed forward against these challenges. ‘The man who embraces a new paradigm at an early stage must often do so in defiance of the evidence provided by problem-solving,’ Kuhn suggested in The Structure of Scientific Revolutions (1962). ‘He must, that is, have faith that the new paradigm will succeed with the many large problems that confront it, knowing only that the older paradigm has failed with a few. A decision of that kind can only be made on faith.’
One of the most recent examples of a paradigm shift has been in the field of AI. Historically, computer-generated knowledge had come from following clear logic and rules, known as ‘symbolic reasoning’. It was this instruction-based approach that had enabled Appel and Haken to prove the four-colour theorem with a computer. Symbolic approaches have been widely used in other areas of science too. For instance, they allow climate scientists to simulate complex atmospheric dynamics on supercomputers. Although it’s not possible to check the results with pen and paper, it is possible to write down the rules that the computer is following to generate the outputs.
In the 2010s, an alternative method rose to prominence. Rather than following hand-coded rules to complete a task, neural networks could learn from huge amounts of data, adjusting millions – or even billions – of connections between artificial ‘neurons’ until the resulting network could make accurate predictions. Neural networks had been around for decades, but it had taken huge datasets and vast computing power for the approach to repay the faith that a minority of advocates had placed in it. Just as Kuhn had outlined, early adopters had persisted with neural networks, despite limited success in practice, thanks to their belief in the promise of a better future.
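As a rough, scaled-down illustration of that learning process, the sketch below nudges the weights of a tiny two-layer network until it can reproduce the XOR function from four training examples. The task, the network size and the settings are invented for illustration; it shows the general principle of adjusting connections to reduce error, not any production system.

```python
import numpy as np

# Toy illustration: a tiny network learns XOR by repeatedly adjusting
# its connection weights to reduce its prediction error (gradient descent).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

# Two layers of connections, initialised at random.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: the network makes predictions from its current weights.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: work out how each connection should change to reduce error.
    grad_output = (output - y) * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# Predictions are typically close to [0, 1, 1, 0] after training,
# though convergence depends on the random starting weights.
print(np.round(output, 2).ravel())
```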
In the past decade, neural networks have beaten humans at complex games like Go and poker, powered self-driving cars, and even helped researchers win a Nobel prize for predicting the structure of proteins. An idea long in the wilderness has finally made it into the mainstream. And in the process, modern AI is once again testing our faith in technology.
Unlike symbolic reasoning, neural networks are generally a ‘black box’. Users can make sense of the inputs and outputs, but not the complexity of the artificial neurons in between. Even though neural networks can be trained to do useful things – like predict protein structures or drive cars – it is difficult to say exactly how they are managing to do it. This can require a shift in mindset; most scientists grew up wanting to understand how the world works, but they are now grappling with a technology that can deliver human-like performance without human-like explanations. Even if AI becomes very good at a task, there is still the chance that a surprise is lurking.
The game of Go was supposedly mastered by superhuman AI almost a decade ago. But in 2023, the AI researcher Tony Wang and his colleagues put this conclusion to the test. They found that even a state-of-the-art system could be tricked into making absurd mistakes that would cost it the game. This suggests that superhuman performance in some situations isn’t enough to guarantee that AI won’t fail unpredictably at other times. Just like humans, even the best AI can get distracted or misled by a distorted view of the world. Worse, these weaknesses may be unavoidable, no matter how good neural networks get in future. ‘Just because you see your AI system behaving well in scenarios A, B, C, D, this does not necessarily imply that it will behave well in scenario E,’ Wang told me shortly after publishing the analysis. ‘There’s an act of faith you have to take.’
From Newtonian mathematics to modern machine learning, the search for truth has always required more than logic. It has required belief, beyond the limits of reason and available evidence. Belief that new ideas will be vindicated, that near-uncheckable proofs will stand, and that scientific knowledge will help societies. It’s tempting to view faith and reason as two distinct paths, in two different directions. But science has long relied on them both, with faith spurring researchers to explore the unknown, and reason helping them understand what they find when they get there.