
Appetite for disruption?

Contrasting the seismic digital transformation of the publishing industry with the vague promise of AI. With some slop thrown in.
[Image: close-up photo of lower-case lettering in a typesetter tray.]
It's called lower case 'cos the letters were in the lower case, dontcha know? Image by 57claudio, CC BY-SA 4.0, via Wikimedia Commons.

When I'm looking for examples of how technology has upended a specific industry, the full-scale disruption of publishing in the 1980s offers up a distinct before-and-after tale.

In the beforetimes there was a host of precise manual skills and procedures involved in the production of any printed publication; typesetting was an art and a science.

In the aftertimes there was the automation of a swathe of specialist processes unleashed by the Apple Mac, Aldus PageMaker and PostScript.

The transition was seismic and brutal and spelled the literal end of hot metal typesetting, for 100 or so years the dominant method of mass-market printing.

The Wapping dispute, which caused widespread demonstrations and unrest during the mid-eighties, exemplified the struggle as people pushed back against looming digitisation and job losses.

At the point I started using PageMaker in the early 1990s such disputes were already fading into memory. I was one of a newish wave of digital natives who only ever knew how to design pages or assemble artwork on the screen of a Mac. Any suggestion that you would use another method seemed peculiar, antiquated, a step back in time.

After I graduated from university I attended job interviews at both The Scotsman and The List magazine and found my skills were already too advanced for their specific and still largely manual set-ups: I simply couldn't do the design and layout roles they were advertising for.

I'm not pitching myself as some wunderkind; however, it was evident those publications – both of which survive today in mainly digital form – were going to have to evolve to keep pace with a rapidly transforming landscape.

Whether you bemoan the painful demise of a whole generation of expertise or cheer the 'democratisation' of an industry ripe for disruption, it's hard to argue against technology's specific role ushering in sizeable, tangible change.

Efficiency gains, shmefficiency shmains

I often find myself yearning for such tangibility when hearing (ad nauseam, on repeat) about the disruptive impact of Generative AI.

When I'm sitting in auditoria, or reading articles, or scrolling through responses to social media posts where I'm being told my entire organisation needs to pivot, that I must adopt an AI-first mindset, or that I should brace myself for the fourth industrial revolution, I either glaze over or feel my skin start to prickle – neither a particularly positive physical reaction.

It would be woefully naive to assert that AI isn't going to have an impact on the jobs market and emerging skills requirements. Brian Merchant, author of the superlative Blood in the Machine book (and newsletter of the same name), has been covering the very real consequences being felt as companies make a beeline for the AI promised land.

See his piece "The AI jobs crisis is here, now" – it's not coming, it has already arrived.

At the same time, it seems perfectly fair to question the extent, pace-of-change, and types of jobs and skills we're actually talking about.

As Merchant duly notes:

"The AI jobs crisis does not, as I’ve written before, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of “an AI-first strategy.”"

Too often the pitch for AI is used as an opportunity to roll out bland aphorisms and astonishing-yet-unspecific claims.

Much hubbub surrounds the case of Klarna, for instance, which very publicly staked its claim to going all-in on AI and now seems to be taking a more subdued stance.

Occasionally I'm pleasantly surprised when I learn about a niche example, a Generative AI solution that showcases a genuine step change, but mostly it still feels like a grift. And so many of the promised benefits fall into a bucket of vague efficiency gains and productivity improvements.

Anyone who's used the growing range of large language or multimodal models will likely recognise the potential to improve workflows, automate tasks, and generally cut down on menial chores.

Aside from the odd interesting individual case, how much of it is actually working in practice though? And how much of it scales, and interoperates, and plays nice with the rest of the tools that intertwine with our day-to-day lives?

Efficiency gains are what software vendors and software-as-a-service models have been promising for years, nay decades. It drives me slightly bonkers that practically every technology supplier and platform is pitching efficiency gains made through AI as the win. [Screams into void]: "Haven't we been paying our licence fees for you to have already sussed this out?"

When I attempt to turn on my out-of-office in Outlook, a frustratingly clunky task if ever there was one, I have a wry little smile to myself. Thanks for all the Copilot shizzle, Microsoft; couldn't you fix some of your core freakin' functionality first?

A final point on efficiency gains: admittedly it's easy to cherry pick evidence, however there are reputable studies suggesting that the opposite of efficiency might be happening. TL;DR: AI isn't "moving the needle" and may even be increasing workload.

Pin the tail

Back to tangibility. Maybe if there wasn't such a gold-rush mentality, with so many people high on buzzwords, a clearer picture might materialise. Where are the desktop-publishing-style studies that build the case for transformation? Are we meant to just blindly accept that this stuff is going to live up to its promise?

[Image: a young girl, blindfolded, playing pin the tail on the donkey. Image: https://www.flickr.com/photos/kilgub/]

At the moment, choosing an AI solution feels like a game of pin the tail on the donkey: close your eyes, move approximately in the right direction, and hope for the best. As much as the pressure may be on to leap headlong in, there’s still a huge amount of uncertainty to contend with.

As an organisation you may need to take a punt on one of thousands of emergent solutions – AI startups are big business – while at the same time finding yourself at the mercy of a clutch of big tech players. Alphabet, OpenAI, Meta, Anthropic and Microsoft are either furiously undercutting or trying to outdo one another to woo customers and establish theirs as the dominant AI model.

While the venture capital money is flowing and the good times roll, doing something on the relative cheap, using technology in a fairly flexible way, is still possible. That won't last though – it never, ever, does.

Anil Dash recently wrote a characteristically excellent post on the phoniness of "AI first", and this paragraph stood out for me:

"We don't actually have to follow along with the narratives that tech tycoons make up for each other. We choose the tools that we use, based on the utility that they have for us. It's strange to have to say it, but... there are people picking up and adopting AI tools on their own, because they find them useful."

Usefulness and utility, eh? How novel. If there was more clarity around the actual value AI brings and specific outcomes it delivers, and less overhyped-solutions-looking-for-problems, maybe all this would be a bit more palatable.

But that doesn't feel like it's coming anytime soon. Contrast the vagueness and banality of Jony Ive and Sam Altman's love-in earlier this month with the sharp, focused precision on display at the launch of Jony's masterpiece in 2007. God help us. 🙄

Behold the slopfest

One area where you can draw comparisons between publishing's digital revolution and AI is the slop it discharges on an unsuspecting world.

Where PageMaker led, others followed, and soon there was an array of WYSIWYG (what you see is what you get) tools available to every desktop-computer-owner-cum-amateur-designer.

The results weren't as prolific as the AI outputs now flooding every corner of the Internet, but the availability of software with a low barrier to entry (staring hard at you, Microsoft Publisher) led to an onslaught of printed publications, posters and leaflets that ignored the most basic rules of typography and design.

As much as my younger self was incredibly snobby about this, I now gaze back fondly at a more innocent era. Personally, I'd rather look at a terribly designed flyer displaying Comic Sans at a jaunty angle overlaid on an awful piece of clipart than some hyperrealistic AI-generated image or video, devoid of any personality, created purely to generate likes.

Guess I'm still a bit snobby.


Further reading/action:


🖥️ Thank you for reading.