How Good Are We at Predicting the Future?
I would suggest we are terrible at it. And that should give us pause when it comes to AI.
The history of technology prediction is a history of confident wrongness. In 1943, Thomas Watson, then president of IBM, reportedly said the world market for computers might amount to “maybe five” machines. In 1995, the astronomer Clifford Stoll wrote a Newsweek piece dismissing the internet as overhyped, arguing that online databases would never replace newspapers and that e-commerce was a fantasy. Even Bill Gates, no technological slouch, observed that we overestimate what technology can do in two years and underestimate what it can do in ten.
The pattern is remarkably consistent. We either wildly overestimate the short-term impact of new technologies (remember when blockchain was going to replace every institution within five years?) or we catastrophically underestimate their long-term consequences. Nobody building the early internet imagined it would reshape elections, create trillion-dollar companies, or leave millions of people psychologically dependent on dopamine hits from their phones.
It isn’t just that we fail to predict which technologies will emerge; we fail spectacularly at predicting the second- and third-order effects of the technologies we already have. Social media was supposed to connect the world. It did. It also polarised it. The smartphone was a communication device. It became an anxiety machine for an entire generation of teenagers. Nobody planned these outcomes. They emerged.
Now consider AI. We are in the grip of what might be the most transformative technology since electricity, and we are making predictions about it with the same confidence and likely the same accuracy as every generation before us. The optimists tell us AI will cure diseases, solve climate change, and unleash unprecedented productivity. The pessimists warn of mass unemployment, surveillance states, and existential risk. Both camps speak with remarkable certainty about a technology whose capabilities are shifting month by month.
The truth is that we do not know. We do not know because the most significant impacts of AI will almost certainly be the ones nobody is currently talking about. Just as the most profound consequence of the printing press wasn’t better books but the Reformation, the most profound consequence of AI probably isn’t anything on today’s conference agendas.
Perhaps we can agree on this: the speed of change is different this time. Previous technological revolutions gave societies time to adapt, however imperfectly. AI is compressing that timeline dramatically. The gap between invention and widespread disruption is shrinking, which means our poor track record of prediction becomes even more dangerous, because we have less time to course-correct when we get things wrong.
So where does that leave us? Not with better predictions, but perhaps with a need for better humility. The wisest response to AI may not be to forecast its future with false precision, but to build adaptability and agility into our institutions, our education systems, and ourselves. Because the one thing we can predict with confidence is that we’ll get it wrong.