Sceptics United

Earlier this month I gave a presentation at the #CampDigital conference, hosted in Manchester by the lovely folk from Nexer Digital. My colleague Adam wrote about what makes it such a special event in one of his regular weeknotes.
I called it 'Sceptics United: finding your signal in a noisy world'. In many ways it played out like a spoken-word version of my blog posts: I used the word grumpy quite a bit, spouted some opinions about big technology firms, and swore when I probably shouldn't have.
The thrust of my talk was sort of about AI, but it was just as much about power, money and control. It seemed to strike a chord; I got some lovely feedback and I was grateful for a very warm audience reception.
Rather than regurgitate it word for word, I thought I'd summarise the main points in this post. You can also watch the full presentation on YouTube.
Overwhelm
I kicked things off by talking about my own sense of feeling overwhelmed: by constant information overload; by the pace of change; by the expectation to innovate.
I explained my strained relationship with technology, describing the awkward position of having to be an advocate on one hand (belief in its ability to drive change and improve services) while feeling increasingly uncomfortable on the other (concerned by the platforms and companies weaving their way ever more intricately into our daily lives).
This is something I encounter with a lot of my peers. There's pressure to be seen to be at the forefront, or at the very least keeping up with perceived leaders, at the same time as warding off challenges – constrained budgets, legacy systems and thinking, cybersecurity threats – from all angles.
There will be no miracles here
I riffed off Nathan Coley's wonderful artwork, pictured at the top of this post, about the promise of miracles and the need to challenge prevailing myths.
Using a physical work of art, created by an actual human being, as the lead into a talk about technology is quite a fun way to set the tone. But there's a serious point: we live in a time where technological miracles are promised constantly.
I homed in on the fact that Generative AI icons have permeated much of the software that's in everyday use. The shiny, sparkly, dazzling stars (is that what they are?) suggest a pathway to miracles, and also deviate from long-established design principles.

It's as if the industry has collectively conspired to say "f*ck it, let's promise them magic".
Take everything with a pinch of salt
I talked about the origin of the phrase 'pinch of salt' and why it's pertinent to take a step back and analyse grandiose statements that get presented as fact.
I used the example of reward hacking and sycophancy in Large Language Models. Heaping on praise, flattering the end user, and adopting a chatty informal tone is a way of making products stickier and building trust. It also means we're more likely to believe the results they pump out. Here's a great example of ChatGPT praising a user for their "irresistibly magnetic" idea.
It's important to maintain distance, to ask questions, to dig deeper into information that's assuredly presented as fact.
If something seems too good to be true, etc, etc.
Trust your gut
Somewhat contradicting the statement above, I think your gut has a part to play in helping navigate a path through technology choices, be they personal or professional.
For the last twenty-or-so years the same few companies have been directing and influencing what we do, how we behave, the choices we make, how we think.
Meta and Google and Microsoft and Amazon and Apple are hardwired into our routines. Spotify and Substack and Uber and Airbnb and X model ethically dubious practices. OpenAI has catapulted itself into our psyches, and is burrowing its way into the public sector, but who's responsible for operating the guardrails?
Have these companies been good stewards of our data to date? Have they made morally questionable product decisions? Have they pursued profits with little regard for consequences?
Do our guts tell us to be wary?
The death of books
I used the long-foretold death of the printed book as a counterpoint to the bold predictions made by techno-optimists.
Every emergent technology causes uncertainty. The demise of books was being debated when radio took hold in the 1920s, television in the 1950s, video recorders and video games in the 1980s, and the World Wide Web in the 1990s.
Yet printed books continue to thrive; sales have grown in recent years. Is this sustainable? Who knows. Is it the widespread collapse some anticipated at the turn of the century? Certainly not.
Futurists may have imagined that electronic formats would dominate as technology marched forward, but reality eclipses imagination. Case in point: what's widely regarded as the first electronic novel now can't be accessed via any computer, yet the printed format is available in all good bookshops.
I covered some of this in a previous post but it's pretty eye opening to look at the time, effort and resources poured into the repeated attempts to preserve the remnants of the BBC Domesday Project (published 1986), while the original Domesday Book (published 1086) lives on – a humble printed book! – in The National Archives.
And let's not forget who stands to benefit from a decline in printed books. Amazon created and continue to evolve the Kindle; they hold an enormous library of e-books; and they now commandeer the lion's share of audiobook availability through their ownership of Audible. Physical books are expensive to store and expensive to ship, and they yield lower returns.
Persuading us printed books are on their way out might be in some companies' vested interests.
Physical stuff still matters
A throwback to my days of working in a museum: people like physical stuff!

Everyone in this photo already knows what the Mona Lisa looks like, but that's not the point.
Seeing it for real, capturing the moment for themselves, having an authentic experience, is part of what makes us human.
Digital tools may augment the moment (just count the number of smartphones in the photo) but they aren't the be all and end all. Not everything is an opportunity waiting to be automated.
Magic glasses
A lot of money from a lot of different companies is flowing into the production of smart glasses.
As I’ve previously yarned on about, the idea of smart glasses goes way back – from theory in the 1940s, pop culture in the 1980s, testing the market in the 2010s, through to now, where an eclectic array of technology and fashion companies are getting in on the act.

I’m happy to be proved wrong, but I think they’re a dud – an over-engineered solution looking for a problem. Mark Zuckerberg, however, claimed in The Verge last year there'd be a "gradual shift to glasses becoming the main way we do computing".
Two years previously the same Mark Zuckerberg claimed at SXSW that "...metaverse isn't a thing a company builds. It's the next chapter of the internet overall". Meta are rumoured to have poured $46 billion down the drain into 'the metaverse', proving Mark's is definitely a voice you can trust.
The new generation of smart glasses almost all come plugged into a Large Language Model, with OpenAI, Meta, and Google all present in the space. So it’s not a massive leap to see them as an extension of the on-going AI arms race, or as a convenient way of harvesting yet more of our data.
And what of all the product lines that don't go the distance? Destined for a landfill site in the Global South alongside all the other mountains of electronic waste?
Confidence is getting in the way of curiosity and clarity
In the same way a Large Language Model will convincingly respond to your question with a made up answer, the people responsible for the technologies that dominate our lives make claims with unbridled confidence. There's no room at the bar for humility.
And big, bold, breathless statements made by powerful men still seem to carry a lot of weight. Where's the curiosity? Where's the sense of working through problems together? Why be specific when you can waffle on vaguely about Artificial General Intelligence curing all society's ills?
The above proclamations that Zuckerberg made aren't designed for us, they're signals to the market to say "we have new trinkets, new revenue streams, new ways to make money."
And that often means questions of ethics, of tangible benefits, of actual utility, get pushed to the side: value for shareholders impedes value for humanity.
You could rightfully ask, has this not always been the case? Have we not always, or at least for most of the last century, been subject to these market forces?
Well, yes. But I'd argue that it's getting more and more difficult to discern, to cut out the background noise, to make informed choices due to the barrage of stuff we're all continually subjected to.
Big hairy audacious gibberish
Here are some recent quotes about the promise of AI:
"If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world—curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030."
Demis Hassabis, CEO of Google DeepMind speaking to Wired.
"...superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." This ability to accelerate scientific discovery is a key distinguishing factor between AGI and superintelligence, at least for Altman, who has previously written that "it is possible that we will have superintelligence in a few thousand days."
Time magazine article, based on Sam Altman's blogposts.
"By 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs"
Gartner press release, March 2025
These are but a smattering of the many, many high-on-hype quotes and articles I could have used – there's no shortage of astonishing assertions when you've got an AI revolution to prepare for.
I find Hassabis's words the most difficult to stomach. Are we really meant to put our faith in the views of powerful, influential, and undeniably smart people (Hassabis is a Nobel laureate after all) who freely peddle such bullshit?
If they promise miracles, take them with a pinch of salt and trust your gut.
Sceptics unite!
So what to do? If you're reading this and these points resonate, I'm afraid I don't have any miracles to offer myself. But I'd say from my own experience, the best things happen when you find like-minded people and work through problems together.
Stay informed. Explore alternative viewpoints, keep an open mind, talk to your peers who have a quizzical look in their eye.
Keep asking questions. I recognise not everyone is in the position to do so, but if you can, raise queries when confronted with miraculous ideas and solutions. Does it seem too good to be true? Is it addressing a real-world problem? Is there a solid evidence base to the claims? What do I instinctively feel about it?
Sign up to the Society of Hopeful Technologists. Rachel Coldicutt's initiative has already garnered over 500 responses to the original survey call. People are interested in shifting the narrative.
Read widely. Here's a list of books and authors who have provided me with useful counterpoints to the prevailing tech orthodoxy.
- Empire of AI – Karen Hao
- Doppelganger – Naomi Klein
- The AI Mirror – Shannon Vallor
- Filterworld – Kyle Chayka
- Unmasking AI – Joy Buolamwini
- Careless People – Sarah Wynn-Williams
- Character Limit – Kate Conger & Ryan Mac
- Blood in the Machine – Brian Merchant
- Burn Book – Kara Swisher
- God, Human, Animal, Machine – Meghan O'Gieblyn
🤨 Thank you for reading.