
People like real things

Thoughts on trust, authenticity and bubbles in response to the launch of Sora, OpenAI’s new video-sharing app.
[Image: a young girl trying to pop a large bubble. Photo by Jacky Zhao on Unsplash]

In my last newsletter — which I now seem to be publishing quarterly, like some sort of downmarket academic journal — I had a short section on how physical stuff still matters in an age when digital tools dominate and distract.

For this post I’ve returned to a similar theme, partly in response to the launch of Sora, OpenAI’s new video-sharing app, which debuted at the end of September. 

If you haven't heard of Sora, Max Read's post on it is an excellent place to start, though Bobbie Johnson declaring "no new product has ever left me feeling so pessimistic" is enlightening too.

The app isn’t yet downloadable outside of the USA and Canada, but it's pretty inevitable it'll soon be spreading joy around the globe.

And, as per the old adage "when America sneezes, the world catches a cold", the evidence of Sora's output is already all over the web, like globules of machine-generated snot spraying our screens.

Here's 404 Media's article about the array of, ahem, captivating content you can peruse. Or why not watch OpenAI CEO Sam Altman as the main character in a selection of user-generated videos, or dressed in a Third Reich-style uniform? The hilarity never stops.

Many of the articles that accompanied Sora’s launch focused on its lack of robust guardrails. The New York Times highlights the step change in the misinformation it's possible to pump out:

"It also created videos of bombs exploding on city streets and other fake images of war — content that is considered highly sensitive for its potential to mislead the public about global conflicts. Fake and outdated footage has circulated on social media in all recent wars, but the app raises the prospect that such content could be tailor-made and delivered by perceptive algorithms to receptive audiences.”

There's occasionally an element of surprise in the coverage I've read, as if OpenAI — whose professed mission is to benefit all humanity — couldn't possibly have been so remiss as to ignore the potential downsides of making high-quality deepfakes available to everyone.

To which I would once again direct anyone with an interest in pulling back the curtain to Karen Hao's essential Empire of AI, a thorough account of OpenAI's moral bankruptcy.

Anyway, this post isn't so much about the slop content that's being served as about the platforms that serve it. My hot take: OpenAI doesn't seem to know much about product.

Hot take ahoy

As ridiculous as it might sound coming from a lowly little soul like me, up against a multi-billion (or is it trillion? I lose track) dollar enterprise, I think OpenAI's platforms miss the basics of what makes human beings tick.

You can sniff this in the weird numbering conventions that have accompanied each ChatGPT release (not to mention Sora/Sora 2's own emergence); you can see it in the clunky, unintuitive user interface decisions across ChatGPT's various iterations; and it's evident in the backlash following the launch of GPT-5. Most recently, the unveiling of Atlas, OpenAI's web browser, has raised many an eyebrow — here's Anil Dash taking it to task.

Does this even matter, you may ask? If these products are wildly successful in spite of a few peripheral challenges in using them, surely that's of minor significance? A tweak here and a rollback there, and everything's tickety-boo again.

My simplistic take is that these missteps do matter if your entire pitch is about changing society at large.

If you're preaching, as OpenAI do, about "increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility", shouldn't you be better at the basics?

If you're sitting atop mountains of data tracking the innermost thoughts of your user base, shouldn't you understand their quirks a little more intimately?

The lack of deftness might be fine if you're a small fish bobbing around in a small pond, but if you're a behemoth influencing government, corporations, and the economy at large, maybe you need to be better.

What if that same lack of deftness, of attention to detail, of nuance permeates your products and decisions?

"Kind of bad, in fact"

Cory Doctorow riffs on this in his recent post, positing "the AI bubble is driven by monopolists who've conquered their markets and have no more growth potential, who are desperate to convince investors that they can continue to grow by moving into some other sector". Note 'monopolists' — not technologists, or scientists, or product designers; this is a quest for money and power.

That same post warns that the AI bubble bursting would be catastrophic, far, far more consequential than the dotcom bust of the early noughties.

Ed Zitron, interviewed in a recent episode of the Channels podcast, also highlights the precarious economic territory we're in.

He makes a compelling link between OpenAI's incessant drive to scale and attract investment, and the unbridled hubris that accompanies their products and proclamations.

There are only so many times you can falsely promise the earth without something breaking...

And thus we get Sora, and the hubris is in full flow. In his blog post to accompany Sora's launch, Sam Altman claims that creativity "could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase”.

Yes, he really seems to be putting Sora on a par with the explosion of complex life itself.

If spouting this kind of bunk was done with a tongue firmly in cheek, then potentially the ground wouldn’t feel so shaky.

But judging by this recent interview for the A16Z newsletter, there's no irony or self-reflection here, just the predictable detached bravado of a Silicon Valley leader.

Perhaps he's being deliberately coy or playing dumb, but the plans outlined seem lackadaisical and poorly thought through.

Far from a Cambrian explosion, Katie Notopoulos crashes us back to reality with what's actually going on: "the problem with having your new video app full of teenage Jake Paul fans is that they might not necessarily be creating the kind of wondrous, magical, creative content you were hoping for — it might be kind of bad, in fact."

So-so social media

The social networks many of us use every day went through various growing pains to reach the kind of market dominance they now command. They didn’t simply arrive fully formed and flick on a monetisation switch. 

Instagram started life as a cutesy way of filtering photos before app-based image editing was commonplace; Facebook didn’t have a ‘like’ button; Twitter only adopted retweets and hashtags on the back of user behaviour; TikTok was a lip-synching app.

Gaze back into the dim and distant past, and you'll see these platforms have chopped and changed in order to build the kind of daily active usage, the prime real estate of online marketing, that allows them to turn a profit (or slowly enshittify, depending on your point of view).

Altman wants to monetise Sora as a means to offset the mountain of cash OpenAI is burning through every day. But he seems to be skipping critical steps on the route to a social platform that stands the test of time: one where interconnection, authenticity, creative expression, and value actually matter.

I was struck by this quote from Bobbie Johnson, pointing to the weird pseudo-connectedness of it all:

"Sora is a ghoulish puppet show, and exploring it feels like wandering around an empty funfair. Even when there is a semblance of genuine human interaction, it manages to be uncanny and disquieting: You can “collaborate” with other people who have uploaded their own likenesses into the system, but this really means your fake avatar collaborating with their fake avatar."

Generating the kind of content Sora proffers, at scale, is also eye-wateringly expensive and resource-intensive, given the enormous amount of compute required.

There's a not-beyond-the-realms-of-possibility scenario where Sora becomes a dead-eyed meme factory, with people posting their content on the platforms where they've already built a community — the social bit basically happens elsewhere.

So OpenAI benefits from neither the engagement nor the marketing dollars, and foots the enormous production bill.

Like I say, not very good at product.

Turning the tide?

In The Atlantic Charlie Warzel writes that "to live through this moment is to feel that some essential component of our shared humanity is being slowly leached out of the world".

Gloomy words indeed. But maybe this is the kind of wake-up call we need?

The title of this post is a point I absolutely stand by — people like real things; people respond to real people.

It's why I think generative AI will struggle when it comes to the direct role it plays in creative processes, and it's why I think platforms like Sora, and Meta's Vibes feed, ultimately miss the point.

These are novelty acts at best, not long-term bets.

Human connections have been key to the success of the most popular social networks, and their erosion is a crucial part of why many have now faltered and stagnated.

James O'Sullivan captures this beautifully, alongside many other salient points, in his superlative essay 'The Last Days of Social Media':

'The Last Days of Social Media' (NOEMA): "Social media promised connection, but it has delivered exhaustion."

Take away empathy and authenticity, and you lose trust.

Label me an out-of-touch idealist, but my hope is that the proliferation of poorly conceived software (and indeed hardware) will start to turn the tide and change the conversation about who we trust and how we conceptualise AI playing a part in our lives.

The so-called frontier AI companies, in their increasingly desperate attempts to dominate at any cost, are showing their lack of understanding and insight, and their inability to forge those all-important human connections. So why should we place so much faith in them to determine our futures?

Despite the pessimistic premise, it's the point Eduardo Porter concludes with in his article in The Guardian last week:

"Perhaps an AI crash could lay the groundwork to steer the technology away from Silicon Valley’s quest to build some supersmart, software-driven agent to replace meat-and-bones humans and spread a synthetic version of humanity across the galaxy. We could focus instead on building something that will help improve the lives of humans as they are."

The colour of munge

To end on a more upbeat note, there's a lovely phrase Brian Eno uses when asked for his thoughts on generative AI, around 50 minutes into this wide-ranging interview.

He likens its outputs to 'the colour of munge':

"When I was a kid, I liked watercolour painting a lot. And I used to notice that after a day of painting, the water that I was dipping my brush into, which was, of course, a mixture of all the colours I’d touched that day, was always the same colour. I called it munge"

Even if you're still raging at Ezra Klein, the entire podcast is well worth a listen — it's full of hope, humour, inspiration and self-awareness.

In short, the kind of perspective only a (real, living, breathing, human) artist can bring.


🎨 Thank you for reading