In the 18th century, Scottish philosopher David Hume posed a confounding question about the nature of the scientific method. By questioning the logic behind making predictions based on past observations, he exposed a fundamental problem that has vexed logicians to this day. But now, through a new analysis, philosophers Prof. Gerhard Schurz and Dr. Paul Thorn at Heinrich Heine University Düsseldorf offer a fresh perspective – one that could finally help us escape Hume’s logical trap, through a concept known as regret-based meta-induction.
To gain a deeper understanding of the world around them, scientists typically build theories based on patterns they have recognised in their past observations. This approach has been a core part of the scientific method for centuries. But for some, it presents a deep logical puzzle.
Once a theory has been supported by observations, the scientific method assumes it will continue to hold true for all future observations: a type of reasoning known in logical theory as an ‘inductive inference’. Yet as early as the 18th century, Scottish philosopher David Hume raised a major objection to this assumption.
For any scientific theory to be truly reliable, it must be able to predict future outcomes accurately. But when a theory’s conclusions are based on induction – that is, derived from past experience – Hume pointed out that it will always be logically possible for the future to behave in a completely different way from the past. As such, the scientific method appears to rest on a shaky foundation.
On the surface, the only defence of inductive reasoning relies on a logical loop: induction has worked well in the past, and therefore, by induction itself, it should continue to work in the future. However, this statement simply assumes the very thing it is trying to prove – a textbook nonstarter in logical theory.
To see how this plays out, imagine a simple experiment. A light flashes red or blue in a hidden sequence, and a scientist is tasked with discovering the rule behind the pattern. If they observe thousands of red flashes in a row without a single blue, induction tells them the light will probably always flash red in the future.
But according to Hume, that conclusion is built on a logical loop: the scientist is assuming in advance that patterns in the past can be trusted to continue. Worse still, this logic can be mirrored – and even undermined – by the concept of ‘anti-induction’.
Anti-induction is the idea of predicting the opposite of what has been observed. So, if the light has flashed red every time so far, anti-inductive reasoning would say: ‘It must flash blue next.’ And although this strategy has clearly failed (the light just keeps flashing red), anti-induction claims, by its own logic, that its past failures prove it is about to start succeeding. So, it predicts blue again, confident the pattern will finally reverse.
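To make the contrast concrete, here is a minimal sketch – illustrative only, not the authors’ code – of how the competing strategies in the flashing-light experiment could be written as prediction rules. The function names and the red/blue encoding are assumptions made for this example.

```python
import random

def inductive(history):
    """Predict that the most frequently observed colour will continue."""
    if not history:
        return random.choice(["red", "blue"])
    return max(set(history), key=history.count)

def anti_inductive(history):
    """Predict the opposite of whatever induction predicts."""
    return "blue" if inductive(history) == "red" else "red"

def random_guess(history):
    """Ignore the history entirely and guess."""
    return random.choice(["red", "blue"])

# After a long run of red flashes, induction keeps predicting red,
# while anti-induction keeps predicting blue and keeps being wrong.
history = ["red"] * 1000
print(inductive(history), anti_inductive(history), random_guess(history))
```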
For Hume’s proponents, the contradiction is clear: induction and anti-induction each use the same self-referential logic to reach opposite conclusions. This paradox has left logicians stuck for centuries – if both approaches can justify themselves with circular reasoning, then neither can claim to be rationally superior. Today, Hume’s objection to induction still appears to stand as strong as ever – and with it, there remains a lingering doubt among some logicians over whether the scientific method itself can ever be fully justified.
Through a fresh analysis of the problem, Schurz and Thorn believe they may have found a way to break this logical loop. Drawing on the principle of regret-based meta-induction, they argue that Hume’s objection only holds up in theory, and breaks down under the conditions we actually find in the real world.
Since Hume’s time, the flaws in inductive reasoning have been expressed even more crisply through the No Free Lunch (NFL) theorem. In this context, the NFL theorem states that there is no shortcut to success, where ‘success’ means making accurate predictions about the future based purely on past data.
If we think back to the flashing light experiment, the NFL theorem points out that the hidden rule behind the sequence could be absolutely anything – completely regular, completely random, or somewhere in between. Because of that, no strategy, including induction, can consistently do better than just guessing at random. At least, not if all possible sequences are considered equally likely.
But Schurz and Thorn take this one step further by introducing regret-based meta-induction, or RW: a learning strategy that tracks the performance of multiple prediction methods over time. It works by giving less weight to unsuccessful methods (‘regretting’ their incorrect predictions) and more weight to methods that have proved more successful.
At first glance, RW seems to clash with the NFL theorem. If no method does better than any other, how could RW learn to favour one over time? However, Schurz and Thorn show that this apparent conflict disappears, for three important reasons.
Firstly, in the long run, RW consistently does better than one would expect based on the NFL theorem, because the NFL theorem assigns a probability of zero to regular possible worlds in which induction can be successful. For this reason, RW does get free lunches after all. Returning to our flashing light experiment, RW tracks several approaches, including induction, anti-induction, and random guessing.
Over time, RW shifts its weight toward the best-performing strategies. So, if the light keeps flashing red, RW will gradually learn that induction is the most reliable guide, and it will predict red more confidently. As a result, it reduces errors far more effectively than any fixed strategy.
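The sketch below shows one way this weighting could work in code. It uses an exponential (‘multiplicative’) weight update, a standard device in regret-based learning; the exact update rule, learning rate and strategy pool are assumptions made for this illustration, not the specific formulation used by Schurz and Thorn.

```python
import math
import random

def inductive(history):
    """Predict that the most frequently observed colour will continue."""
    if not history:
        return random.choice(["red", "blue"])
    return max(set(history), key=history.count)

def anti_inductive(history):
    """Predict the opposite of whatever induction predicts."""
    return "blue" if inductive(history) == "red" else "red"

def random_guess(history):
    """Ignore the history entirely and guess."""
    return random.choice(["red", "blue"])

STRATEGIES = {"induction": inductive,
              "anti-induction": anti_inductive,
              "random": random_guess}

def meta_induct(sequence, eta=1.0):
    """Follow the strategy with the lowest cumulative loss so far,
    weighting each strategy by exp(-eta * loss)."""
    losses = {name: 0.0 for name in STRATEGIES}
    history, errors = [], 0
    for outcome in sequence:
        # Past failures ('regret') shrink a strategy's weight, so the
        # meta-learner leans ever harder on whatever has been working.
        weights = {n: math.exp(-eta * losses[n]) for n in STRATEGIES}
        predictions = {n: f(history) for n, f in STRATEGIES.items()}
        best = max(weights, key=weights.get)
        if predictions[best] != outcome:
            errors += 1
        for n in STRATEGIES:  # update every strategy's track record
            if predictions[n] != outcome:
                losses[n] += 1.0
        history.append(outcome)
    return errors, losses

# On a long run of red flashes the meta-learner locks on to induction
# almost immediately and makes at most a few early mistakes.
print(meta_induct(["red"] * 1000))
```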
Secondly, Schurz and Thorn point out that the NFL theorem only holds under a very unrealistic assumption: that all possible light sequences are equally likely. If this were really the case, the world would be so unpredictable that no strategy could succeed. But in reality, scientists can discover structure, patterns, and regularities in virtually every system they study – from atomic-scale reactions to the motions of galaxies.
These patterns are precisely what make learning and prediction possible. While RW thrives in realistic environments, the NFL theorem is only relevant in fully chaotic systems – which are far rarer.
Finally, RW doesn’t just work in the long run. Schurz and Thorn show that it also performs well in short-term scenarios. In our experiment, RW can start identifying which strategies are working after just a handful of light flashes, and will shift toward them accordingly. That means even with limited data, it can already begin to favour inductive inferences when they prove to be the most effective method – reducing mistakes faster than strategies that stick rigidly to one method.
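To see how quickly the weights can separate, consider a hypothetical run of just five red flashes. Assuming induction gets all five right, anti-induction gets all five wrong, and random guessing is right about half the time (illustrative numbers, not results from the paper), the same exponential weighting sketched above already hands induction nearly all of the weight:

```python
import math

eta = 1.0  # illustrative learning rate
# Assumed losses after five flashes; these values are for illustration only.
losses = {"induction": 0.0, "anti-induction": 5.0, "random": 2.5}
weights = {name: math.exp(-eta * loss) for name, loss in losses.items()}
total = sum(weights.values())
for name, weight in weights.items():
    print(f"{name}: {weight / total:.2%}")
# Induction ends up with roughly 92% of the weight after only five flashes,
# so the pooled prediction is already firmly 'red'.
```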
Taken together, these three results form a strong rebuttal to the NFL theorem and to Hume’s original critique of inductive reasoning. While RW doesn’t claim that induction is always right, it does show that induction is the best-performing strategy in environments that have at least some kind of regularity, including most of the real-world systems that scientists actually study. Most importantly, RW allows us to adopt inductive reasoning without assuming in advance that it works, avoiding the circular logic that Hume warned about.
For Schurz and Thorn, this makes RW a powerful new foundation for scientific reasoning. It doesn’t just patch over Hume’s conundrum; it offers a watertight solution to one of the oldest and most troubling problems in the logic of science. Their work shows that the scientific method can be rationally justified in making predictions about the future – not by blind faith in the past, but through a method that learns, adapts, and earns its trust over time.