Are We Worrying Too Much About AI?

I think AI does fall into the historical pattern we discussed yesterday, but with important caveats:

Like Past Patterns: The apocalyptic rhetoric, the focus on job displacement, the fears about human agency, the belief that “this changes everything”: all of these echo precisely what people said about electricity, printing, and industrialisation.

Genuinely Different: But the speed, scope, and cognitive nature of AI do represent something novel. The historical pattern doesn’t guarantee a benign outcome; it just suggests our tendency toward panic often exceeds the actual risk.

The Real Challenge: I suspect the biggest risks aren’t the sci-fi scenarios people focus on, but the mundane ones: economic disruption outpacing our ability to adapt, concentration of power in a few hands, and decisions about AI being made in the grip of panic.

I hope that we can prepare for genuine risks while resisting the urge toward panic that has characterised every previous technological revolution.

Meanwhile, read this. It is well worth your time.

The 2028 Global Intelligence Crisis.