Are We Worrying Too Much About AI? (Part 2)

I think AI does fall into the historical pattern we discussed yesterday, but with important caveats:

Like Past Patterns: The apocalyptic rhetoric, the focus on job displacement, the fears about human agency, the belief that "this changes everything": these echo precisely what people said about electricity, printing, and industrialisation.

Genuinely Different: But the speed, scope, and cognitive nature of AI do represent something novel. The historical pattern doesn’t guarantee a benign outcome; it just suggests our tendency toward panic often exceeds the actual risk.

The Real Challenge: I suspect the biggest risks aren’t the sci-fi scenarios people focus on, but the mundane ones: economic disruption happening faster than adaptation, concentration of power in a few hands, and making decisions about AI while in the grip of panic.

I hope that we can prepare for genuine risks while resisting the urge toward panic that has characterised every previous technological revolution.

Meanwhile, read this. It is well worth your time.

The 2028 Global Intelligence Crisis.

Are We Worrying Too Much About AI?

Around 370 BC, in Plato's Phaedrus, Socrates worried that writing would weaken memory and allow 'the pretence of understanding, rather than true understanding'.

In the 1440s, the printing press faced violent opposition: scribes' guilds destroyed machines and chased book merchants out of towns, fearing job losses and the spread of dangerous ideas.

In the 19th century, people thought the telephone would cause deafness, and that railway travel was so dangerous that a school board reportedly condemned trains as 'a device of Satan to lead immortal souls to hell'.

In the early 1900s, the sewing machine sparked fears that women's economic independence would disrupt family structures and society.

Is AI just another one on the list? Or is it different?

TBC.

Love a List

What is it about The List?

We grab paper and pen, or an iPad, and start writing, and suddenly all is well. Like many, I love a good list.

The power of the list, I would argue, lies in two extraordinary benefits:

First, it succinctly directs our mind to what needs attention. When everything is swirling in your head, nothing has priority. The moment you write it down, hierarchy appears; some things matter more; some things can wait. The list makes this explicit.

Second, it off-loads the storage of such data. Your brain is not so good at remembering large numbers of things but, if there is structure, is excellent at processing them. Let the paper (or screen) hold the information. Let your mind do the thinking and deciding.

How Good Are We at Predicting the Future?

I would suggest we are terrible at it. And that should give us pause when it comes to AI.

The history of technology prediction is a history of confident wrongness. In 1943, Thomas Watson, chairman of IBM, reportedly said there was a world market for "maybe five" computers. In 1995, the astronomer Clifford Stoll wrote a Newsweek piece dismissing the internet as overhyped, arguing that online databases would never replace newspapers and that e-commerce was a fantasy. Even Bill Gates, no technological slouch, observed that we tend to overestimate what technology can do in two years and underestimate what it can do in ten.

The pattern is remarkably consistent. We either wildly overestimate the short-term impact of new technologies (remember when blockchain was going to replace every institution within five years?) or we catastrophically underestimate their long-term consequences. Nobody building the early internet imagined it would reshape elections, create trillion-dollar companies, or leave millions of people psychologically dependent on dopamine hits from their phones.

It isn’t just that we fail to predict which technologies will emerge; we fail spectacularly at predicting the second- and third-order effects of the technologies we already have. Social media was supposed to connect the world. It did. It also polarised it. The smartphone was a communication device. It became an anxiety machine for an entire generation of teenagers. Nobody planned these outcomes. They emerged.

Now consider AI. We are in the grip of what might be the most transformative technology since electricity, and we are making predictions about it with the same confidence, and likely the same accuracy, as every generation before us. The optimists tell us AI will cure diseases, solve climate change, and unleash unprecedented productivity. The pessimists warn of mass unemployment, surveillance states, and existential risk. Both camps speak with remarkable certainty about a technology whose capabilities are shifting month by month.

The truth is that we do not know. We do not know because the most significant impacts of AI will almost certainly be the ones nobody is currently talking about. Just as the most profound consequence of the printing press wasn’t better books but the Reformation, the most profound consequence of AI probably isn’t anything on today’s conference agendas.

Perhaps we can agree on this: the speed of change is different this time. Previous technological revolutions gave societies time to adapt, however imperfectly. AI is compressing that timeline dramatically. The gap between invention and widespread disruption is shrinking, which means our poor track record of prediction becomes even more dangerous, because we have less time to course-correct when we get things wrong.

So where does that leave us? Not with better predictions, but perhaps with a need for better humility. The wisest response to AI may not be to forecast its future with false precision, but to build adaptability and agility in our institutions, our education, and ourselves. Because the one thing we can predict with confidence is that we’ll get it wrong.

They always help…

A walk; good music; pencil and paper and just write; a good book; a conversation with your dog; a long bath; weeding the garden.

The Compact Guides.

Long-time followers of my (mostly) daily writings will know that way back I wrote several business and personal development books. I then took a long pause to get my novel writing started. Late last year I returned to non-fiction with my Companion Series. The intention is that these are short, very easy to read, 100% practical, and instantly available worldwide. The first four are out, and the series will continue this year.

The first four:

How to Beat ChatGPT or How to Not Say AI Took My Job.

MEDS: meditation, exercise, diet and sleep; a powerful daily strategy for wellness.

The Tools of Excellence. 70 Devices, Concepts or Strategies for Brilliance.

Do Less yet Achieve More: the 80/20 strategy that transforms productivity.

From Liverpool Cellar to London Rooftop

The Beatles' first performance at The Cavern Club in Liverpool under that name took place on 9 February 1961, at a lunchtime show.

While a setlist from the lunchtime show is not reliably documented, the song most widely cited as opening their early Cavern-era performances is “Some Other Guy,” a rhythm & blues number originally recorded by Richie Barrett. It became a staple of their early live sets in 1961–62.

The Beatles' final live appearance as a group was the famous Rooftop Concert on 30 January 1969, atop the Apple Corps headquarters in London. The final song they ever performed live in public was "Get Back", which they played twice. After finishing the final take, John Lennon closed with his famous line: "I'd like to say thank you on behalf of the group and ourselves, and I hope we passed the audition."