Stop Press

Both books mentioned below are free for 72h thanks to the nice people at Kindle. Free. Worldwide. 72h.

On Writing

Writing as Excavation

E.L. Doctorow said that ‘writing a novel is like driving a car at night. You can see only as far as your headlights, but you can make the whole trip that way.’ This metaphor captures something essential about the creative process that many overlook: writing is not about executing a plan; it is about discovering what you are trying to say.

Most writers know this quietly but need permission to trust it. Stories are not built so much as uncovered. You do not start with a complete blueprint. You start with a situation, or a character, or a question, and you write to find out what happens.

For me, an idea grows and develops until I feel I have a book (or, if not a book, perhaps a short story). I start to write. I write daily. I chase quantity and then edit for quality. Characters appear from nowhere.

This can feel terrifying to new writers who perhaps think they should know everything before they begin. They ask: how can I start if I don’t know where I’m going? The answer is: you start anyway, and the act of writing reveals the destination.

Go write.

Are We Worrying Too Much About AI? (2)

I think AI does fall into the historical pattern we discussed yesterday, but with important caveats:

Like Past Patterns: The apocalyptic rhetoric, the focus on job displacement, the fears about human agency, the belief that “this changes everything”: all of these echo precisely what people said about electricity, printing, and industrialisation.

Genuinely Different: But the speed, scope, and cognitive nature of AI do represent something novel. The historical pattern doesn’t guarantee a benign outcome; it just suggests our tendency toward panic often exceeds the actual risk.

The Real Challenge: I suspect the biggest risks aren’t the sci-fi scenarios people focus on, but the mundane ones: economic disruption happening faster than adaptation, concentration of power in a few hands, and making decisions about AI while in the grip of panic.

I hope that we can prepare for genuine risks while resisting the urge toward panic that has characterised every previous technological revolution.

Meanwhile, read this. It is well worth your time.

The 2028 Global Intelligence Crisis.

Are We Worrying Too Much About AI?

In 370 BC, Socrates worried that writing would weaken memory and allow ‘the pretence of understanding, rather than true understanding’.

In the 1440s, the printing press faced violent opposition: scribes' guilds destroyed machines and chased book merchants out of towns, fearing job losses and the spread of dangerous ideas.

In the late 1800s, people thought the telephone would cause deafness, and earlier in the century railway travel was considered so dangerous that a school board condemned trains as ‘a device of Satan to lead immortal souls to hell’.

In the early 1900s, the sewing machine sparked fears that women’s economic independence would disrupt family structures and society.

Is AI just another one on the list? Or is it different?

TBC.

Love a List

What is it about The List?

We grab paper and pen, or an iPad, and start writing, and suddenly all is well. Like many, I love a good list.

The power of the list, I would argue, lies in two extraordinary benefits:

First, it succinctly directs our mind to what needs attention. When everything is swirling in your head, nothing has priority. The moment you write it down, hierarchy appears; some things matter more; some things can wait. The list makes this explicit.

Second, it off-loads the storage of such data. Your brain is not so good at remembering large numbers of things but, if there is structure, is excellent at processing them. Let the paper (or screen) hold the information. Let your mind do the thinking and deciding.

How Good Are We at Predicting the Future?

I would suggest we are terrible at it. And that should give us pause when it comes to AI.

The history of technology prediction is a history of confident wrongness. In 1943, Thomas Watson, chairman of IBM, reportedly said the world market for computers would stretch to “maybe five.” In 1995, the astronomer Clifford Stoll wrote a Newsweek piece dismissing the internet as overhyped, arguing that online databases would never replace newspapers and that e-commerce was a fantasy. Even Bill Gates, no technological slouch, admitted he overestimated what technology could do in two years and underestimated what it could do in ten.

The pattern is remarkably consistent. We either wildly overestimate the short-term impact of new technologies (remember when blockchain was going to replace every institution within five years?) or we catastrophically underestimate their long-term consequences. Nobody building the early internet imagined it would reshape elections, create trillion-dollar companies, or leave millions of people psychologically dependent on dopamine hits from their phones.

It isn’t just that we fail to predict which technologies will emerge; we fail spectacularly at predicting the second- and third-order effects of the technologies we already have. Social media was supposed to connect the world. It did. It also polarised it. The smartphone was a communication device. It became an anxiety machine for an entire generation of teenagers. Nobody planned these outcomes. They emerged.

Now consider AI. We are in the grip of what might be the most transformative technology since electricity, and we are making predictions about it with the same confidence and likely the same accuracy as every generation before us. The optimists tell us AI will cure diseases, solve climate change, and unleash unprecedented productivity. The pessimists warn of mass unemployment, surveillance states, and existential risk. Both camps speak with remarkable certainty about a technology whose capabilities are shifting month by month.

The truth is that we do not know. We do not know because the most significant impacts of AI will almost certainly be the ones nobody is currently talking about. Just as the most profound consequence of the printing press wasn’t better books but the Reformation, the most profound consequence of AI probably isn’t anything on today’s conference agendas.

Perhaps we can agree on this: the speed of change is different this time. Previous technological revolutions gave societies time to adapt, however imperfectly. AI is compressing that timeline dramatically. The gap between invention and widespread disruption is shrinking, which means our poor track record of prediction becomes even more dangerous, because we have less time to course-correct when we get things wrong.

So where does that leave us? Not with better predictions, but perhaps with a need for better humility. The wisest response to AI may not be to forecast its future with false precision, but to build adaptability and agility in our institutions, our education, and ourselves. Because the one thing we can predict with confidence is that we’ll get it wrong.

They always help…

A walk; good music; pencil and paper and just write; a good book; a conversation with your dog; a long bath; weeding the garden.