At the beginning of Part III of A Treatise on Probability, Keynes quotes the anti-inductive argument made by Hume in his Philosophical Essays concerning Human Understanding.
Hume makes a seemingly valid point. One would not be so bold as to draw general conclusions from a single observation; one requires multiple observations to be sure. But what, exactly, is the logical reason why adding observations adds certainty to our conclusions? Why does one extra confirmatory observation make us more sure that we are right? The answer is not immediately clear. It was from this line of reasoning that Hume concluded that it is very difficult to “know” anything inductively at all.
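One classical (if partial) answer to this question comes from Bayesian probability. Laplace’s rule of succession, a standard toy model not discussed by Hume or Keynes, makes the intuition concrete: after observing n successes in n trials, the probability assigned to the next trial succeeding is (n + 1)/(n + 2), which rises with every confirming observation but never reaches 1.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# Each uniform confirming observation nudges the probability up,
# but certainty (probability 1) is never reached.
for n in [1, 2, 10, 100]:
    print(f"{n} successes in {n} trials -> {float(rule_of_succession(n, n)):.3f}")
# prints 0.667, 0.750, 0.917, 0.990
```

This captures one half of the puzzle: more observations do, formally, increase our confidence. What it does not capture is Keynes’s point about varying the conditions, to which the rule is blind.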
However, according to Keynes, Hume misrepresents the argument. He is right to question the use of “uniform experiments” but Keynes comments that this is not how to draw inductive conclusions. One needs to vary the experiment. In Hume’s example, one could taste eggs of different sizes, from different countries, at different times of the day, prepared in different ways, etc. If each of these experiments leads to the same result (the same “taste and relish”) then we can be more certain in our inductive conclusion.
It seems to me the two are talking about two different things. Hume’s argument is about certainty: he argues that additional observation doesn’t help us “know” something. This is correct; we can never be certain when our basis of understanding is inductive reasoning. Keynes argues from a probabilistic perspective, asking whether the same conclusions hold when the conditions of the experiment are altered. If so, the conclusions that we originally drew are more likely to be true.
No right answer
The problem with this method is that we are fooled by false patterns exceedingly easily. We are too good at pattern matching, which gets us into trouble (System 1 thinking, as Kahneman calls it in Thinking, Fast and Slow). This ability has a use: identifying patterns works well for learning most things. Don’t lean too far one way on your bike, you’ll fall over. Don’t make noise when you’re hunting, the prey will run away. Recent developments in AI (neural networks, deep learning, etc.) are based on this pattern-recognition learning principle: give the computer masses of data and let it “learn” by figuring out patterns. Patterns generated by randomness can seem real, and when they are not, we are fooled.
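How easily randomness generates convincing patterns can be shown with a small simulation (my own illustration, not from the essay): generate pairs of completely independent random walks and measure how correlated they look. Among even a handful of unrelated pairs, some will appear strongly “related”.

```python
import random

random.seed(1)

def random_walk(n):
    """A cumulative sum of independent Gaussian steps."""
    x, walk = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        walk.append(x)
    return walk

def correlation(a, b):
    """Pearson correlation coefficient, computed by hand for self-containment."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Twenty pairs of walks that share no causal connection whatsoever.
best = max(abs(correlation(random_walk(200), random_walk(200)))
           for _ in range(20))
print(f"strongest |correlation| among 20 unrelated pairs: {best:.2f}")
```

The strongest of these purely spurious correlations is typically well above 0.5, which is exactly the kind of result that fools our pattern-matching instincts.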
Additionally, the non-uniform “experiments” that inductive conclusions are drawn from are not really experiments. The real world is not a laboratory; experimental conditions change mid-way through the “experiment”. This leads to overconfidence in inductive conclusions.
In a sense, we can never really know anything: neither deductive nor inductive arguments are ever certain. Deductive arguments, even those that make falsifiable predictions, rest on axioms that are themselves just theories, and those theories can be shown to be wrong. Inductive arguments are also dubious; even with all the observations in the universe, there is no real sense in which they prove a proposition to be true.
There is a probability attached to all “knowledge” (outside of mathematics). Keynes and Hume were both right but in different ways.
How can one use inductive arguments in a practical sense? It seems that our best bet may be to cross our fingers, close our eyes, and hope that observed patterns hold up. One certainly should not plan for the possibility of waking up one morning and suddenly disliking eggs, being unable to walk, or having forgotten English. That is neither practical nor productive.
It helps to know the limits of induction, to ensure that one is not easily fooled by a false pattern or seemingly-robust statistical inference. Some patterns will hold true (most of the time) and some are just outcomes of the volatility that is embedded in nearly all aspects of modern-day life. The trick is determining the difference between the two, or at least which patterns are more likely to be true in which circumstances.
Rather than relying purely on induction, we might be better served by combining a deductive argument with the method of non-uniform experimentation described by Keynes. This is an approach that Ray Dalio takes to correlation: first understanding the reasons for the correlation, and subsequently checking that the correlation holds across time (out-of-sample data) and space (different geographies and assets). Mark Spitznagel acts similarly in his book The Dao of Capital: he builds a deductive argument and uses data to “check his working”. These both seem like sensible approaches: begin with deduction and use probability and statistics to do a kind of sanity check. This avoids the pitfalls of naively inducing relationships between variables.
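A minimal sketch of what that out-of-sample sanity check looks like in practice (the data and the relationship here are invented for illustration; this is my gloss on the approach, not Dalio’s or Spitznagel’s actual method): posit a reason for a relationship, inspect it on one slice of the data, then verify it still holds on a slice that played no part in forming the hypothesis.

```python
import random

random.seed(7)

def correlation(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Toy data in which y genuinely depends on x (our "deductive" reason),
# plus noise.
x = [random.gauss(0, 1) for _ in range(400)]
y = [2 * xi + random.gauss(0, 1) for xi in x]

# Form the hypothesis on the first half, then check it on the held-out half.
half = len(x) // 2
in_corr = correlation(x[:half], y[:half])
out_corr = correlation(x[half:], y[half:])
print(f"in-sample r = {in_corr:.2f}, out-of-sample r = {out_corr:.2f}")
```

A genuine relationship survives the split; a spurious one, like the random-walk correlations above, typically collapses out of sample. That is the statistical analogue of Keynes’s varied experiments.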