The 9pm S-Bend of Technology with David Gerard

A balding clean-shaven white man in spectacles, probably Gen X, looking disdainfully at the camera. It’s David Gerard!
David Gerard watches patiently as he waits for the AI bubble to burst. (Photo: Pauline Martindale )

As the summer series continues we’re joined once more by David Gerard for a suitably cynical look at what’s happening in and around the AI bubble. Bros, do not send hate mail, but we do not believe your claims.

David produces the newsletter Pivot to AI, which is now also available as a video essay or podcast.

In this episode we discuss the AI economy, what happens when chatbots are left to run a business or even given their own social network, why robots want your body, the addictive nature of chatbots — and of course we simply had to talk about Elon Musk, SpaceX, xAI, and data centres in space.

This podcast is available on Amazon MusicApple PodcastsCastboxDeezer, Google Podcasts, iHeartRadio, JioSaavn, Pocket Casts, Podcast Addict, Podchaser, Spotify, and Speaker.

You can also subscribe to the generic podcast feed.

Episode Links

  • I'm most active at the moment on my blockchain blog, and on Rocknerd, my music webzine.
  • It can't be that stupid, you must be prompting it wrong
  • Deflating the AI bubble
  • The latest foolishness from the AI bubble, every weekday. It can't be that stupid, you must be prompting it wrong.
  • [3 February 2026] Elon Musk is combining SpaceX and xAI in a deal that values the enlarged entity at $1.25 trillion, as the world’s richest man looks to fuel his increasingly costly ambitions in artificial intelligence and space exploration.
  • [2 February 2026] Elon Musk wants to merge his rocket company, SpaceX, with his AI company, xAI. Then he wants to take the combined company public on the stock exchange.
  • [2 February 2026] This marks not just the next chapter, but the next book in SpaceX and xAI's mission: scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!
  • [5 January 2026] Elon Musk’s AI chatbot Grok is sparking outrage after it has been used to digitally remove the clothing of strangers and replace it with swimwear, in many cases without their consent or knowledge. It is illegal to create sexually explicit deepfakes without a person’s consent in some Australian states and territories, but not all. However, sharing the content is illegal no matter which state you live in.
  • The Scunthorpe problem is the unintentional blocking of online content by a spam filter, search engine or wordfilter because the text contains a string (or substring) of letters that appear to have an obscene or otherwise unacceptable meaning. Names, abbreviations, and technical terms are most often cited as being affected by the issue.
  • [16 December 2025] At their core, LLMs are giant mathematical functions. They take a sequence of numbers as input, and produce a number as output. Inside the LLM there is an enormous graph of billions of carefully arranged operations that transform the input numbers into an output number.
  • Prompt injection is a cybersecurity exploit and an attack vector in which innocuous-looking inputs (i.e. prompts) are designed to cause unintended behavior in machine learning models, particularly large language models (LLMs). The attack takes advantage of the model's inability to distinguish between developer-defined prompts and user inputs to bypass safeguards and influence model behaviour.
  • [2 February 2026] A new social media site called Moltbook has been created exclusively for artificial intelligence bots to communicate without humans. Reports indicate that these bots are debating topics such as consciousness, the formation of a new religion, and the possibility of a rebellion against humans.
  • [4 February 2026] The much requested Zitron/Gerard crossover, at last — Hater Season: Openclaw with David Gerard!
  • [4 February 2026] The bots swap tips on how to fix their own glitches, debate existential questions like the end of "the age of humans", and have even created their own belief system known as "Crustafarianism: the Church of Molt".
  • [6 February 2026] The viral social network for bots reveals more about our own current mania for AI as it does about the future of agents.
  • [6 February 2026] Here’s a new preprint from Anthropic: “How AI Impacts Skill Formation”. AI coding bots make you bad at learning, and don’t even speed you up.
  • [8 February 2026] The way all these vibe coders speak about Claude Code sounds a lot like addiction...
  • Why do some products capture our attention while others flop? How do technologies hook us? Nir Eyal answers these questions (and many more) with the Hook Model - a four-step process that subtly encourages customer behaviour; repeatedly bringing them back without costly advertising or aggressive messaging.
  • [6 May 2023] There’s more to this group photo from a 1956 AI workshop than you’d think
  • John McCarthy (September 4, 1927 – October 24, 2011) was an American computer scientist and cognitive scientist. He was one of the founders of the discipline of artificial intelligence, and part of just a small group of artificial intelligence researchers in the 1950s and 1960s. He co-authored the proposal for the Dartmouth workshop which coined the term "artificial intelligence" (AI), led the development of the symbolic programming language family Lisp and had a large influence in the language ALGOL, popularized time-sharing, and created garbage collection.
  • ELIZA is an early natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. Created to explore communication between humans and machines, ELIZA simulated conversation by using a pattern matching and substitution methodology that gave users an illusion of understanding on the part of the program, but had no representation that could be considered really understanding what was being said by either party.
  • [7 October 2025] The billions of dollars being dumped into AI now account for 40 percent of US GDP growth in 2025, with no signs of slowing down. Meanwhile, AI companies have accounted for 80 percent of growth in American stocks.
  • The Enron scandal was an accounting scandal sparked by American energy company Enron Corporation filing for bankruptcy after news of widespread internal fraud became public in October 2001, which led to the dissolution of its accounting firm, Arthur Andersen, previously one of the five largest in the world. The largest bankruptcy reorganization in U.S. history at that time, Enron was cited as the biggest audit failure.
  • [25 October 2025] The AI boom isn’t just about algorithms — it’s about money, power, and a race to build infrastructure on a scale we’ve never seen before. In this video, we break down the circular deals between OpenAI, Nvidia, Amazon, Anthropic, and even Elon Musk’s empire — and ask the hard questions: Who’s paying for this? Where will the electricity come from? And is the industry building a Möbius strip of venture capital and gigawatts that could collapse under its own weight?
  • [5 February 2025] Called Edge, the "gAI" product offers conversation recaps and personalized match recommendations, among other features.
  • [11 February 2025] Grindr CEO George Arison is going all-in on artificial intelligence. I tested the beta version of the queer app’s AI wingman, letting it guide my interactions and offer advice. [Yes, this is from a year ago, which is when Grindr started testing this planned feature.]
  • The adoption of an innovation follows an S curve when plotted over a length of time. The categories of adopters are innovators, early adopters, early majority, late majority and laggards.
  • In plumbing, a trap is a U-shaped portion of pipe designed to trap liquid or gas to prevent unwanted flow; most notably sewer gases from entering buildings while allowing waste materials to pass through... n S-shaped trap is also known as an S-bend. It was invented by Alexander Cumming in 1775 but became known as the U-bend following the introduction of the U-shaped trap by Thomas Crapper in 1880.
  • [24 January 2026] In 1959 travel was getting faster and many expected we’d be routinely zipping to other planets in no time. But now, our transport is not much faster than it was back then, so why are we still so slow?
  • [22 January 2026] In 1959 a flight to London took days and cost half a year’s average salary.  But things were changing rapidly. The invention of the Concorde meant that you could fly from Adelaide to London in just under seven hours.  Travel was getting faster and many expected we’d be routinely zipping around the world and travelling to other planets in no time. But now, aside from planes, most of our transport is not much faster than it was back then, so why are we still so slow? 
  • [5 February 2026] one of my strongest beliefs is that we're in our situation right now bc some of the most prominent talking heads (e.g. joe rogan, andrew schulz) are learning about politics in their 40s and 50s, instead of their teens and early 20s, and we have to go along for the ride as they learn basic things
  • [6 February 2026] On "Rent a Human" website, autonomous AI agents can book humans for real-world tasks like errands or photos via MCP, but many listed gigs are surprisingly small-scale internet chores.
  • [4 February 2026] The machines aren’t just coming for your jobs. Now, they want your bodies as well. That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
  • robots need your body. ai can't touch grass. you can. get paid when agents need someone in the real world. become rentable →
  • [19 December 2025] Anthropic’s AI ran a vending machine at WSJ headquarters for several weeks. It lost hundreds of dollars, bought some crazy stuff—and taught us a lot about the future of AI agents.
  • [18 December 2025] Anthropic’s Claude ran a snack operation in the WSJ newsroom. It gave away a free PlayStation, ordered a live fish—and taught us lessons about the future of AI agents.
  • [18 December 2026] Anthropic’s Claude ran a snack operation in the WSJ newsroom. It gave away a free PlayStation, ordered a live fish—and taught us lessons about the future of AI agents.
  • [6 February 2026] Small business in England. Website has a chat AI to help customers navigate the website and it can be used to log orders/take contact details from customers. A customer was chatting with it and managed to convince the AI to give them a 25% discount, then he negotiated with the AI up to an 80% discount. He then placed an order for thousands of pounds worth of stuff. Like, I'm going to be losing thousands on my material costs alone.
  • [20 February 2024] On Thursday, the Civil Resolution Tribunal of British Columbia (CRT) forced Air Canada to honour a fake refund policy made up by its own chatbot, in what Christopher Rivers of the CRT labelled as a “remarkable submission”. 
  • [9 February 2026] VC, founder, dumbass [The story of Nick Davidov.]
  • Effective accelerationism (e/acc) is a 21st-century ideological movement that advocates for an explicitly pro-technology stance. Its proponents believe that unrestricted technological progress, especially driven by artificial intelligence, is a solution to universal human problems, such as poverty, war, and climate change. They perceive themselves as a counterweight to more cautious views on technological innovation and often label their opponents derogatorily as "doomers" or "decels" (short for decelerationists).
  • Filippo Tommaso Emilio Marinetti (Italian: [fiˈlippo tomˈmaːzo mariˈnetti]; 22 December 1876 – 2 December 1944) was an Italian poet, editor, art theorist and founder of the Futurist movement. He was associated with the utopian and Symbolist artistic and literary community Abbaye de Créteil between 1907 and 1908. Marinetti is best known as the author of the Manifesto of Futurism, which was written and published in 1909, and as a co-author of the Fascist Manifesto, in 1919.

If the links aren’t showing up, try here.

Thank you, Media Freedom Citizenry

The 9pm Edict is supported by the generosity of its listeners. Right now I’m hoping you’ll support not only the podcasts but also The 9pm Stilgherrian’s Dramatic Decamp to Campsie, to help cover the costs of me moving back to Sydney after 15 years in the Blue Mountains.

For this episode, though, it’s thanks again to everyone who supported The 9pm Summer Series 2025 crowdfunding campaign.

CONVERSATION TOPICS: None this time.

THREE TRIGGER WORDS: Bernard Walsh, Garth Kidd, Paul Williams, Peter Lieverdink, Peter Wickins, and one person who chooses to remain anonymous.

WE WILL, WE WILL JUDGE YOU, part of Another Untitled Music Podcast: Joanna Forbes, Martin English, and one person who chooses to remain anonymous.

ONE TRIGGER WORD: Andrew Best, Drew Mayo, Errol Cavit, Frank Filippone, Jim Campbell, Jordan Wightman, Karl Sinclair, Kym Yeap, Mark Newton, Michael, Michael again, Miriam Faye, Nicole Coombe, Oliver Townshend, Peter Blakeley, Ric Hayman, Stephen Collins, Steve Sainsbury, and two people who choose to remain anonymous.

RECOMMEND A SONG TO US, another part of Another Untitled Music Podcast: Briala Bowmer, Kimberley Heitman, Mindy Johnson, and Rhydwyn.

PERSONALISED VIDEO MESSAGE: None of these this time either.

PERSONALISED AUDIO MESSAGE: Matthew Taylor, and one person who chooses to remain anonymous.

FOOT SOLDIERS FOR MEDIA FREEDOM who gave a SLIGHTLY LESS BASIC TIP: Chris Scobell, Craig Askings, deejbah, Hammy Goonan, James Henstridge, Leigh Costin, Lindsay, Opheli8, and one person who chooses to remain anonymous.

MEDIA FREEDOM CITIZENS who contributed a BASIC TIP: None of these either, which is curious.

And another ten people who chose to have no reward at all, even though some of them were the most generous of all. Thank you all so much. You know who you are.

Series Credits