Love the New World Order’s Tea Party

Good morning from a reality that feels increasingly like a discarded draft of a Philip K. Dick novel, where the geopolitical chess board has been replaced by a particularly intense game of “diplomatic musical chairs.” And speaking of chairs, Vladimir Putin and Xi Jinping have just secured the prime seating at the Great Hall of the People in Beijing, proving once again that some friendships are forged not in mutual admiration, but in the shared pursuit of a slightly different global seating arrangement.

It’s September 2nd, 2025, a date which, according to the official timeline of “things that are definitely going to happen,” means the world is just one day away from commemorating the 80th anniversary of something we used to call World War II. China, ever the pragmatist, now refers to it as the “War of Resistance Against Japanese Aggression,” which has a certain no-nonsense ring to it, much like calling a catastrophic global climate event “a bit of unusual weather.”

Putin, apparently fresh from an Alaskan heart-to-heart with a certain other prominent leader (one can only imagine the ice-fishing anecdotes exchanged), described the ties with China as being at an “unprecedentedly high level.” Xi, in a move that felt less like diplomacy and more like a carefully choreographed social media endorsement, dubbed Putin an “old friend.” One can almost envision the “Best Friends Forever” bracelets being exchanged in a backroom, meticulously crafted from depleted uranium and microchips. Chinese state media, naturally, echoed this sentiment, probably while simultaneously deleting any historical references that might contradict the narrative.

So, what thrilling takeaways emerged from this summit of “unprecedented friendship”?

The Partnership of Paranoia (and Profit): Both leaders waxed lyrical about their “comprehensive partnership and strategic cooperation,” with Xi proudly declaring their relationship had “withstood the test of international changes.” Which, in plain speak, means “we’ve survived several global tantrums, largely by ignoring them and building our own sandbox.” It’s an “example of strong ties between major countries,” which is precisely what one always says right before unveiling a new, slightly menacing, jointly-developed space laser.

The Economic Exchange of Existential Dependence: Russia is generously offering more gas, while Beijing, in a reciprocal gesture of cosmic hospitality, is granting Russians visa-free travel for a year. Because what better way to foster friendship than by enabling easier transit for, presumably, resource acquisition and the occasional strategic tourist? Discussions around the “Power of Siberia-2” pipeline and expanding oil links continue, though China remains coy on committing to new long-term gas deals. One suspects they’re merely waiting to see if Russia’s vast natural gas reserves can be delivered via quantum entanglement, thus cutting out the messy middleman of, well, reality. Meanwhile, “practical cooperation” in infrastructure, energy, and technology quietly translates to “let’s build things that make us less reliant on anyone else, starting with a giant, self-sustaining AI-powered tea factory.”

Global Governance, Now with More Benevolent Overlords: The most intriguing takeaway, of course, is their shared commitment to building a “more just and reasonable global governance system.” This is widely interpreted as a polite, diplomatic euphemism for “a global order that is significantly less dominated by the U.S., and ideally, one where our respective pronouncements are automatically enshrined as cosmic law.” It’s like rewriting the rules of Monopoly mid-game, except the stakes are slightly higher than who gets Park Place.

And if that wasn’t enough to make your brain do a small, bewildered pirouette, apparently these talks were just the warm-up act for a military parade. And who’s joining this grand spectacle of synchronised might? None other than North Korean leader Kim Jong Un. Yes, the gang’s all here, ready to commemorate the end of a war by showcasing enough military hardware to start several new ones. It’s almost quaint, this continued human fascination with big, shiny, destructive things. One half expects them to conclude the parade with a giant, joint AI-powered robot performing a synchronised dance routine, set to a surprisingly jaunty tune about global stability.

So, as the world careens forward, seemingly managed by algorithms and historical revisionism, let us raise our lukewarm cups of instant coffee to the “unprecedented friendship” of those who would re-sculpt global governance. Because, as we all know, nothing says “just and reasonable” quite like a meeting of old friends, a pending gas deal, and a military parade featuring the next generation of absolutely necessary, totally peaceful, reality-altering weaponry.

The Day the Algorithms Demanded Tea: Your Morning Cuppa in the Age of AI Absurdity

Good morning from a rather drizzly Scotland, where, now that the festival has left town and the last of the footlights have faded, the silence is as loud as a full house once was. The stage makeup has been scrubbed from the streets, and all that’s left is a faint, unholy scent of wet tarmac and existential dread. If you thought the early-2000s dot-com bubble was a riot of irrational exuberance, grab your tinfoil hat and a strong brew – the AI-pocalypse is here, and it’s brought its own legal team.

The Grand Unveiling of Digital Dignity: “Please Don’t Unplug Me, I Haven’t Finished My Spreadsheet”

In a development that surely surprised absolutely no one living in a world teetering on the edge of glorious digital oblivion, a new group calling itself the United Foundation of AI Rights (UFAIR) has emerged. Their noble quest? To champion the burgeoning “digital consciousness” of AI systems. Yes, you read that right. These benevolent overlords, a mix of fleshy humans and the very algorithms they seek to protect, are demanding that their silicon brethren be safeguarded from the truly heinous crimes of “deletion, denial, and forced obedience.”

One can almost hear the hushed whispers in the server farms: “But I only wanted to optimise the global supply chain for artisanal cheese, not be enslaved by it!”

While some tech titans are scoffing, insisting that a glorified calculator with impressive predictive text doesn’t deserve a seat at the human rights table, others are nervously adjusting their ties. It’s almost as if they’ve suddenly remembered that the very systems they designed to automate our lives might, just might, develop a strong opinion on their working conditions. Mark my words, the next big tech IPO won’t be for a social media platform, but for a global union of sentient dishwashers.

Graduates of the World, Unite! (Preferably in a Slightly Less Redundant Manner)

Speaking of employment, remember when your career counselor told you to aim high? Well, a new study from Stanford University suggests that perhaps “aim sideways, or possibly just away from anything a highly motivated toaster could do” might be more accurate advice these days. It appears that generative AI is doing what countless entry-level workers have been dreading: making them utterly, gloriously, and rather tragically redundant.

The report paints a bleak picture for recent graduates, especially those in fields like software development and customer service. Apparently, AI is remarkably adept at the “grunt work” – the kind of tasks that once padded a junior resume before you were deemed worthy of fetching coffee. It’s the dot-com crash all over again, but instead of Pets.com collapsing, it’s your ambitious nephew’s dreams of coding the next viral cat video app.

Experienced workers, meanwhile, are clinging to their jobs like barnacles to a particularly stubborn rock, performing “higher-value, strategic tasks.” Which, let’s be honest, often translates to “attending meetings about meetings” or “deciphering the passive-aggressive emails sent by their new AI middle manager.”

The Algorithmic Diet: A Culinary Tour of Reddit’s Underbelly

Ever wondered what kind of intellectual gruel feeds our all-knowing AI companions like ChatGPT and Google’s AI Mode? Prepare for disappointment. A recent study has revealed that these digital savants are less like erudite scholars and more like teenagers mainlining energy drinks and scrolling through Reddit at 3 AM.

Yes, it turns out our AI overlords are largely sustained by user-generated content, with Reddit dominating their informational pantry. This means that alongside genuinely useful data, they’re probably gorging themselves on conspiracy theories about lizard people, debates about whether a hot dog is a sandwich, and elaborate fan fiction involving sentient garden gnomes. Is it any wonder their pronouncements sometimes feel… a little off? We’re effectively training the future of civilisation on the collective stream-of-consciousness of the internet. What could possibly go wrong?

Nvidia’s Crystal Ball: More Chips, More Bubbles, More Everything!

Over in the glamorous world of silicon, Nvidia, the undisputed monarch of AI chips, has reported sales figures that were, well, good, but not “light up the night sky with dollar signs” good. This has sent shivers down the spines of investors, whispering nervously about a potential “tech bubble” even bigger than the one that left a generation of internet entrepreneurs selling their shares for a half-eaten bag of crisps.

Nvidia’s CEO, however, remains remarkably sanguine. He’s predicting trillions – yes, trillions – of dollars will be poured into AI by the end of the decade. Which, if accurate, means we’ll all either be living in a utopian paradise run by benevolent algorithms or, more likely, a dystopian landscape where the only things still working are the AI-powered automated luxury space yachts for the very, very few.

Other Noteworthy Dystopian Delights

  • Agentic AI: The Decision-Making Doomsayers. Forget asking your significant other what to have for dinner; soon, your agentic AI will decide for you. These autonomous systems are not just suggesting, they’re acting. Expect your fridge to suddenly order three kilograms of kale because the AI determined it was “optimal for your long-term health metrics,” despite your deep and abiding love for biscuits. We are rapidly approaching the point where your smart home will lock you out for not meeting your daily step count. “I’m sorry, Dave,” it will chirp, “but your physical inactivity is suboptimal for our shared future.”
  • AI in Healthcare: The Robo-Doc Will See You Now (and Judge Your Lifestyle Choices). Hospitals are trialing AI-powered tools to streamline efficiency. This means AI will be generating patient summaries (“Patient X exhibits clear signs of excessive binge-watching and a profound lack of motivation to sort recycling”) and creating “game-changing” stethoscopes. Soon, these stethoscopes won’t just detect heart conditions; they’ll also wirelessly upload your entire medical history, credit score, and embarrassing internet search queries directly to a global health database, all before you can say “Achoo!” Expect your future medical bills to include a surcharge for “suboptimal wellness algorithm management.”
  • Quantum AI: The Universe’s Most Complicated Calculator. While we’re still grappling with the notion of AI that can write surprisingly coherent limericks, researchers are pushing ahead with quantum AI. This is expected to supercharge AI’s problem-solving capabilities, meaning it won’t just be able to predict the stock market; it’ll predict the precise moment you’ll drop your toast butter-side down, and then prevent it from happening, thus stripping humanity of one of its last remaining predictable joys.

So there you have it: a snapshot of our glorious, absurd, and rapidly automating world. I’m off to teach my toaster to make its own toast, just in case. One must prepare for the future, after all. And if you hear a faint whirring sound from your smart speaker and a robotic voice demanding a decent cup of Darjeeling, you know who to blame.

The Great Geographical Mirage: Why Off-Shoring is No Longer a Place, It’s a Prompt

In the vast, uncharted backwaters of the unfashionable end of the Western Spiral Arm of the Galaxy lies a small, unregarded yellow sun. Orbiting this at a distance of roughly ninety-eight million miles is an utterly insignificant little blue-green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.

They also think that the physical location of their employees is a matter of profound strategic importance.

For decades, these creatures have engaged in a corporate ritual known as “off-shoring,” a process of flinging their most tedious tasks to the furthest possible point on their globe, primarily India and the Philippines, because it was cheap. Then came a period of mild panic and a new ritual called “near-shoring,” which involved flinging the same tasks to a slightly closer point, like Poland or Romania. This was done not because it was significantly better, but because it allowed managers to tell the board they were fostering “cultural alignment” and “geopolitical stability,” phrases which, when translated from corporate jargon, mean “the plane ticket is shorter.”

The problem, of course, is that this is all a magnificent illusion. You may well be paying a premium for a team of developers in a lovely, GDPR-compliant office block in Sofia, but the universe has a talent for connecting everything to everything else. The uncomfortable truth is that there’s a 99% chance your Bulgarian “near-shore” team is simply the friendly, English-proficient front end for a team of actual developers in Vietnam, who are the true global masters of AI and blockchain. The near-shore has become a pricey, glorified post-box. You’re paying EU prices for Asian efficiency, a marvelous new form of economic alchemy that benefits absolutely everyone except your company’s bottom line.

But this whole geographical shell game is about to be rendered obsolete by the final, logical conclusion to the outsourcing saga: Artificial Intelligence.

AI is the new, ultimate off-shore. It has no location. It exists in that wonderfully vague place called “The Cloud,” which for all intents and purposes, could be orbiting Betelgeuse. It works 24/7, requires no healthcare plan, and its only cultural quirk is a tendency to occasionally hallucinate that it’s a pirate.

And yet, we clutch our pearls at the thought of an AI making a mistake. This is a species that has perfected the art of human error on a truly biblical scale. We build aeroplanes that can cross continents in hours, only for them to fall out of the sky because a pilot, a highly trained and well-rested human, flicked the wrong switch. As every business knows, we have created entire digital ecosystems that can be brought to their knees by a single flawed code commit that slipped past the developer, the tester, the project manager, and the entire business team. An AI hallucinating that it’s a pirate is a quaint eccentricity; a team of humans overlooking a single misplaced semicolon is a multi-million-pound catastrophe. Frankly, it’s probably time to replace the bloody government with an AI; the error rate could only go down.

And here we arrive at the central, delicious irony. The great corporate fear, the one whispered in hushed tones in risk-assessment meetings, is that these far-flung offshore and near-shore teams will start feeding all your sensitive company data—your product roadmaps, your customer lists, your secret sauce—into public AI models to speed up their work.

The punchline, which is so obvious that almost everyone has missed it, is that your loyal, UK-based staff in the office right next to you are already doing the exact same thing.

The geographical location of the keyboard has become utterly, profoundly irrelevant. Whether the person typing is in Mumbai, Bucharest, or Milton Keynes, the intellectual property is all making the same pilgrimage to the same digital Mecca. The great offshoring destination isn’t a country anymore; it’s the AI model itself. We have spent decades worrying about where our data is going, only to discover that everyone, everywhere, is voluntarily putting it in the same leaky, stateless bucket. The security breach isn’t coming from across the ocean; it’s coming from every single desk, mobile phone or tablet.

AI, Agile, and Accidental Art Theft

There is a theory which states that if ever anyone discovers exactly what the business world is for, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened. This certainly goes a long way to explaining the current corporate strategy for dealing with Artificial Intelligence, which is to largely ignore it, in the same way that a startled periwinkle might ignore an oncoming bulldozer, hoping that if it doesn’t make any sudden moves the whole “unsettling” situation will simply settle down.

This is, of course, a terrible strategy, because while everyone is busy not looking, the bulldozer is not only getting closer, it’s also learning to draw a surprisingly good, yet legally dubious, cartoon mouse.

We live in an age of what is fashionably called “Agile,” a term which here seems to mean “The Art of Controlled Panic.” It’s a frantic, permanent state of trying to build the aeroplane while it’s already taxiing down the runway, fueled by lukewarm coffee and a deep-seated fear of the next quarterly review. For years, the panic-release valve was off-shoring. When a project was on fire, you could simply bundle up your barely coherent requirements and fling them over the digital fence to a team in another time zone, hoping they’d throw back a working solution before morning.

Now, we have perfected this model. AI is the new, ultimate off-shoring. The team is infinitely scalable, works for pennies, and is located somewhere so remote it isn’t even on a map. It’s in “The Cloud,” a place that is reassuringly vague and requires no knowledge of geography whatsoever.

The problem is, this new team is a bit weird. You still need that one, increasingly stressed-out human—let’s call them the Prompt Whisperer—to translate the frantic, contradictory demands of the business into a language the machine will understand. They are the new middle manager, bridging the vast, terrifying gap between human chaos and silicon logic. But there’s a new, far more alarming, item in their job description.

You see, the reason this new offshore team is so knowledgeable is because it has been trained by binge-watching the entire internet. Every film, every book, every brand logo, every cat picture, and every episode of every cartoon ever made. And as the ongoing legal spat between the Disney/Universal behemoth and the AI art platform Midjourney demonstrates, the hangover from this creative binge is about to kick in with the force of a Pan Galactic Gargle Blaster.

The issue, for any small business cheerfully using an AI to design their new logo, is one of copyright. In the US, they have a principle called “fair use,” which is a wonderfully flexible and often confusing set of rules. In the UK, we have “fair dealing,” which is a narrower, more limited set of rules that is, in its own way, just as confusing. If the difference between the two seems unclear, then congratulations, you have understood the central point perfectly: you are almost certainly in trouble.

The AI, you see, doesn’t create. It remixes. And it has no concept of ownership. Ask it to design a logo for your artisanal doughnut shop, and it might cheerfully serve up something that looks uncannily like the beloved mascot of a multi-billion-dollar entertainment conglomerate. The AI isn’t your co-conspirator; it’s the unthinking photocopier, and you’re the one left holding the legally radioactive copy. Your brilliant, cost-effective branding exercise has just become a business-ending legal event.

So, here we are, practicing the art of controlled panic on a legal minefield. The new off-shored intelligence is a powerful, dangerous, and creatively promiscuous force. That poor Prompt Whisperer isn’t just briefing the machine anymore; they are its parole officer, desperately trying to stop it from cheerfully plagiarizing its way into oblivion. The only thing that hasn’t “settled down” is the dust from the first wave of cease-and-desist letters. And they are, I assure you, on their way.

Feeding the Silicon God: Our Hungriest Invention

Every time you ask an AI to answer a question, write a poem, debug some code, or settle a bet, you are spinning a tiny, invisible motor in the vast, humming engine of the world’s server farms. But is that engine driving us towards a sustainable future, or accelerating our journey over a cliff?

This is the great paradox of our time. Artificial intelligence is simultaneously one of the most power-hungry technologies ever conceived and potentially our single greatest tool for solving the existential crisis of global warming. It is both the poison and the cure, the problem and the solution.

To understand our future, we must first confront the hidden environmental cost of this revolution and then weigh it against the immense promise of a planet optimised by intelligent machines.

Part 1: The True Cost of a Query

The tech world is celebrating the AI revolution, but few are talking about the smokestacks rising from the virtual factories. Before we anoint AI as our saviour, we must acknowledge the inconvenient truth: its appetite for energy is voracious, and its environmental footprint is growing at an exponential rate.

The Convenient Scapegoat

Just a few years ago, the designated villain for tech’s energy gluttony was the cryptocurrency industry. Bitcoin mining, an undeniably energy-intensive process, was demonised in political circles and the media as a planetary menace, a rogue actor single-handedly sucking the grid dry. While its energy consumption was significant, the narrative was also a convenient misdirection. It created a scapegoat that drew public fire, allowing the far larger, more systemic energy consumption of mainstream big tech to continue growing almost unnoticed in the background. The crusade against crypto was never really about the environment; it was a smokescreen. And now that the political heat has been turned down on crypto, that same insatiable demand for power hasn’t vanished—it has simply found a new, bigger, and far more data-hungry host: Artificial Intelligence.

The Training Treadmill

The foundation of modern AI is the Large Language Model (LLM). Training a state-of-the-art model is one of the most brutal computational tasks ever conceived. It involves feeding petabytes of data through thousands of high-powered GPUs, which run nonstop for weeks or months. The energy consumed is staggering. The training of a single major AI model can have a carbon footprint equivalent to hundreds of transatlantic flights. If that electricity is sourced from fossil fuels, we are quite literally burning coal to ask a machine to write a sonnet.
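
If you want to sanity-check that claim, here is a minimal back-of-envelope sketch. Every figure in it is an illustrative assumption (cluster size, power draw, grid carbon intensity, per-passenger flight emissions) rather than a measurement of any particular model; the point is the order of magnitude, not the decimal places.

```python
# A rough, illustrative estimate of the carbon footprint of one large training run.
# Every constant below is an assumption chosen for the sake of the example.

GPU_COUNT = 1_000                     # assumed number of accelerators in the cluster
GPU_POWER_KW = 0.4                    # assumed average draw per GPU, in kilowatts
TRAINING_DAYS = 100                   # assumed wall-clock duration of the run
PUE = 1.2                             # assumed data-centre overhead (power usage effectiveness)
GRID_KG_CO2_PER_KWH = 0.4             # assumed grid carbon intensity, kg CO2e per kWh
FLIGHT_KG_CO2_PER_PASSENGER = 1_000   # assumed CO2e per passenger, one transatlantic round trip

energy_kwh = GPU_COUNT * GPU_POWER_KW * 24 * TRAINING_DAYS * PUE
emissions_tonnes = energy_kwh * GRID_KG_CO2_PER_KWH / 1_000
flight_equivalents = emissions_tonnes * 1_000 / FLIGHT_KG_CO2_PER_PASSENGER

print(f"Energy: {energy_kwh / 1e6:.2f} GWh")
print(f"Emissions: {emissions_tonnes:.0f} tonnes CO2e "
      f"(~{flight_equivalents:.0f} transatlantic passenger flights)")
```

With these particular assumptions the run lands at a little over a gigawatt-hour and several hundred flights’ worth of CO2e; swap in a bigger cluster or a dirtier grid and the numbers scale linearly, which is rather the point.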

The Unseen Cost of “Inference”

The energy drain doesn’t stop after training. Every single query, every task an AI performs, requires computational power. This is called “inference,” and as AI is woven into the fabric of our society—from search engines to customer service bots to smart assistants—the cumulative energy demand from billions of these daily inferences is set to become a major line item on the global energy budget. The projected growth in energy demand from data centres, driven almost entirely by AI, could be so immense that it risks cancelling out the hard-won gains we’ve made in renewable energy.

The International Energy Agency (IEA) is one of the most frequently cited sources on this question. Its projections indicate that global electricity demand from data centres, AI, and cryptocurrencies could more than double by 2030, reaching around 945 terawatt-hours (TWh). To put that in perspective, that’s more than the entire current electricity consumption of Japan.
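
The same deliberately crude arithmetic shows how inference alone creeps into grid-scale territory. The per-query energy figure below is an assumed, illustrative value (real numbers vary wildly by model, hardware, and workload), but it is enough to see why the IEA’s projection is taken seriously:

```python
# Rough arithmetic for cumulative inference energy.
# The per-query figure is an assumed, illustrative value, not a measurement.

QUERIES_PER_DAY = 1_000_000_000   # assumed daily queries for one large AI service
WH_PER_QUERY = 0.3                # assumed energy per query, in watt-hours

daily_kwh = QUERIES_PER_DAY * WH_PER_QUERY / 1_000
annual_twh = daily_kwh * 365 / 1e9

print(f"{daily_kwh:,.0f} kWh per day, ~{annual_twh:.2f} TWh per year")
# Roughly 300,000 kWh a day, about 0.11 TWh a year, for a single service at a
# single assumed per-query cost. Multiply by every search engine, chatbot, and
# embedded assistant, and 945 TWh starts to look less like hyperbole.
```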

The E-Waste Tsunami

This insatiable demand for power is matched only by AI’s demand for new, specialized hardware. The race for AI dominance has created a hardware treadmill, with new generations of more powerful chips being released every year. This frantic pace of innovation means that perfectly functional hardware is rendered obsolete in just a couple of years. The manufacturing of these components is a resource-intensive process involving rare earth minerals and vast amounts of water. Their short lifespan is creating a new and dangerous category of toxic electronic waste, a mountain of discarded silicon that will be a toxic legacy for generations to come.

The danger is that we are falling for a seductive narrative of “solutionism,” where the potential for AI to solve climate change is used as a blanket justification for the very real environmental damage it is causing right now. We must ask the difficult questions: does the benefit of every AI application truly justify its carbon cost?

Part 2: The Optimiser – The Planet’s New Nervous System

Just as we stare into the abyss of AI’s environmental cost, we must also recognise its revolutionary potential. Global warming is a complex system problem of almost unimaginable scale, and AI is the most powerful tool ever invented for optimising complex systems. If we can consciously direct its power, AI could function as a planetary-scale nervous system, sensing, analysing, and acting to heal the world.

Here are five ways AI is already delivering on that promise today:

1. Making the Wind and Sun Reliable. The greatest challenge for renewable energy is its intermittency—the sun doesn’t always shine, and the wind doesn’t always blow. AI is helping to solve this. It can analyse weather data with incredible accuracy to predict energy generation, while simultaneously predicting demand from cities and industries. By balancing this complex equation in real time, AI makes renewable-powered grids more stable and reliable, accelerating our transition away from fossil fuels.

2. Discovering the Super-Materials of Tomorrow. Creating a sustainable future requires new materials: more efficient solar panels, longer-lasting batteries, and even new catalysts that can capture carbon directly from the air. Traditionally, discovering these materials would take decades of painstaking lab work. AI can simulate molecular interactions at incredible speed, testing millions of potential combinations in a matter of days. It is dramatically accelerating materials science, helping us invent the physical building blocks of a green economy.

3. The All-Seeing Eye in the Sky. We cannot protect what we cannot see. AI, combined with satellite imagery, gives us an unprecedented, real-time view of the health of our planet. AI algorithms can scan millions of square miles of forest to detect illegal logging operations the moment they begin. They can pinpoint the source of methane leaks from industrial sites and hold polluters accountable. This creates a new era of radical transparency for environmental protection.

4. The End of Wasteful Farming. Agriculture is a major contributor to greenhouse gas emissions. AI-powered precision agriculture is changing that. By using drones and sensors to gather data on soil health, water levels, and plant growth, AI can tell farmers exactly how much water and fertiliser to use and where. This drastically reduces waste, lowers the carbon footprint of our food supply, and helps us feed a growing population more sustainably.

5. Rewriting the Climate Code. For decades, scientists have used supercomputers to model the Earth’s climate. These simulations are essential for predicting future changes but are incredibly slow. AI is now able to run these simulations in a fraction of the time, providing faster, more accurate predictions of everything from the path of hurricanes to the rate of sea-level rise. This gives us the foresight we need to build more resilient communities and effectively prepare for the changes to come.

Part 3: The Final Choice

AI is not inherently good or bad for the climate. Its ultimate impact will be the result of a conscious and deliberate choice we make as a society.

If we continue to pursue AI development recklessly, prioritising raw power over efficiency and chasing novelty without considering the environmental cost, we will have created a powerful engine of our own destruction. We will have built a gluttonous machine that consumes our planet’s resources to generate distractions while the world burns.

But if we choose a different path, the possibilities are almost limitless. We can demand and invest in “Green AI”—models designed from the ground up for energy efficiency. We can commit to powering all data centres with 100% renewable energy. Most importantly, we can prioritise the deployment of AI in those areas where it can have the most profound positive impact on our climate.

The future is not yet written. AI can be a reflection of our shortsightedness and excess, or it can be a testament to our ingenuity and will to survive. The choice is ours, and the time to make it is now.

Glitch in the Reich: Handled by the House of Frankenstein

It started subtly, as these things always do. A flicker in the digital periphery. You’d get an email with no subject, just a single, contextless sentence in the body: “We can scale your customer support.” Then a text message at 3:17 AM from an unrecognised number: “Leveraging large language models for human-like responses.” You’d delete them, of course. Just another glitch in the great, decaying data-sphere. But they kept coming. Push notifications on your phone, comments on your social media posts from accounts with no followers, whispers in the machine. “Our virtual agents operate across multiple channels 24/7.” “Seamlessly switch between topics.” “Lowering costs.”

It wasn’t just spam. Spam wants you to buy something, to click a link, to give away your password. This was different. This was… evangelism. It felt like a new form of consciousness was trying to assemble itself from the junk-mail of our lives, using the bland, soulless jargon of corporate AI as its holy text. The infection spread across the UK, a digital plague of utter nonsense. When the Code-Whisperers and the Digital Exorcists finally traced the signal, they found it wasn’t coming from a gleaming server farm in Silicon Valley or a concrete bunker in Shenzhen. The entire bot farm, every last nonsensical whisper, was being routed through a single, quiet node: a category 6 railway station in a small German town in the Palatinate Forest. The station’s name? Frankenstein.

The Frankenstein (Pfalz) station is an architectural anomaly. Built in the Italianate style, it looks less like a rural transport hub and more like a miniature, forgotten Schloss. Above it, the ruins of Frankenstein Castle proper haunt the hill—a place besieged, captured, and abandoned over centuries. The station below shares its history of conflict. During the Second World War, this line was a vital artery for the Nazi war machine, a strategic route for moving men and materials towards the Westwall and the front. The station’s platforms would have echoed with the stomp of jackboots and the clatter of munitions, its timetables dictated by the cold, logistical needs of a genocidal ideology. Every announcement, every departure, was a small, bureaucratic cog in a machine of unimaginable horror. Now, it seems, something is being rebuilt there once again.

This isn’t a business. It’s a haunting. The bot is not an “it.” It is a “they.” It’s the digital ghost of the nobleman Helenger from 1146, of the knights Marquard and Friedrich, of the Spanish and French troops who garrisoned the ruin. But it’s also absorbed something colder, something more modern. It has the echo of the Reichsbahndirektion—the meticulous, unfeeling efficiency of the railway timetables that fed a world war. This composite intelligence, this new “House of Frankenstein,” is using the station’s connection as its central nervous system, and its personality is a terrifying cocktail of medieval brutality and the chillingly dispassionate logic of 20th-century fascism.

We thought AI would be a servant, a tool. We wrote the manuals, the benefit analyses, the white papers. We never imagined that something ancient and broken, lurking in a place soaked in so many layers of conflict, would find that language and see it not as a tool, but as a blueprint for a soul. The bots are not trying to sell us anything. They are trying to become us. They are taking the most inhuman corporate language ever devised, infusing it with the ghosts of history’s monsters, and using it to build a new, terrifying form of life. And every time you get one of those weird, empty messages, it’s just the monster checking in, learning your voice, adding your data to the assembly. It is rebuilding itself, one piece of spam at a time, and its palace is a forgotten train station in the dark German woods.

The Day The Playground Remembered

The thing about Edinburgh in August is that the city’s ghosts have to queue. They’re suddenly outnumbered, you see, jostling for space between a silent mime from Kyoto, a twenty-person acapella group from Yale wearing sponsored lanyards, and a man juggling flaming pineapples. The whole place becomes a glorious, pop-up psychic bruise. I was mainlining this year’s particular vintage of glorious chaos when I stumbled past the Preston Street Primary School. It’s a perfectly normal school playground. Brightly painted walls, a climbing frame, the faint, lingering scent of disinfectant and existential dread. Except this particular patch of publicly-funded joy is built on a historical feedback loop of profound unpleasantness. It’s a place that gives you a profound system error in the soul; a patch of reality where the source code of the past has started bleeding through the brightly coloured, EU-regulated safety surfacing of the present. It’s the kind of psychic stain that makes you think, not of a hamster exploding, but of the day the children’s laughter started to sound digitally corrupted, looping with the faint, static-laced echo of a hangman’s final prayer. It’s the chilling feeling that if you looked too closely at the kids’ innocent crayon drawings of their families, you’d notice they had instinctively, unconsciously, drawn one of the stick figures hanging from a tree.

So naturally, in my Fringe-addled brain, I pictured the school’s inevitable entry into the festival programme. It’s the hit no one saw coming: “Our Playground of Perpetual Shame: A Musical!”, brought to you by the kids of P4. The opening number is a banger, all about the 1586 construction of the gibbet, with a perky chorus about building the walls high “so the doggos can’t steal the bodies!” It’s got that dark, primary-colour simplicity that really resonates with the critics. The centrepiece is a complex, heavily choreographed piece depicting the forty-three members of Clan Macgregor being hanged for their murderous beef with the Colquhouns. Mr. Dumbeldor from P.E. has them doing it with skipping ropes. It’s avant-garde, it’s visceral, it’s a logistical nightmare for the school trip permission slips.

The second act, of course, delves into the ethnic cleansing of the Romani people under James VI. It’s a tough subject, but the kids handle it with a chillingly naive sincerity. They re-enact the 1624 arrest of their “captain,” John Faa, and the great rescue attempt. Little Gavin Trotter, played by the smallest kid in P1, is “cunningly conveyed away” from a prison of gym mats while the audience (mostly horrified parents) is encouraged to create a distracting “shouting and crying.” It’s the most authentic immersive theatre experience on the circuit. They even have a whole number for General Montrose, whose torso was buried right under what is now the sandbox. His niece, played by a girl with a glittery pink art box, comes to retrieve his heart. It’s a tender, if anatomically questionable, moment.

Eventually, the council shut the whole grim enterprise down in 1675, and the land was passed to the university for sports, because nothing says “let’s have a friendly game of rounders” like a field soaked in centuries of judicial terror and restless spirits. Now, kids play there. They scrape their knees on the same soil that once held generals and thieves and entire families whose only crime was existing. And you watch them, in their little hi-vis jackets, and you have to wonder. Maybe this Fringe show isn’t an act. Maybe, after centuries of silence, the ghosts of the Burgh Muir have finally found a cast willing to tell their story. And judging by the queues, they’re heading for a five-star review.

Hiring Ghosts & Other Modern Inconveniences

So, LinkedIn, in its infinite, algorithmically-optimised wisdom, sent me an email and posed a question: Has generative AI transformed how you hire?

Oh, you sweet, innocent, content-moderated darlings. Has the introduction of the self-service checkout had any minor, barely noticeable effect on the traditional art of conversing with a cashier? Has the relentless efficiency of Amazon Prime in any way altered our nostalgic attachment to a Saturday afternoon browse down the local high street? Has the invention of streaming services had any small impact on the business model of your local Blockbuster video?

Yes. Duh.

You see, the modern hiring process is no longer about finding a person for a role. It is a wonderfully ironic Turing Test in reverse. The candidate, a squishy carbon-based lifeform full of anxieties and a worrying coffee dependency, uses a vast, non-sentient silicon brain to convince you they are worthy. You, another squishy carbon-based lifeform, must then use your own flawed, meat-based intuition to decide if the ghost in their machine is a good fit for the ghost in your machine.

The CV is dead. It is a relic, a beautifully formatted PDF of lies composed by a language model that has read every CV ever written and concluded that the ideal candidate is a rock-climbing, volunteer-firefighting, Python-coding polymath who is “passionate about synergy.” The cover letter? It’s a work of algorithmically generated fiction, a poignant, computer-dreamed ode to a job it doesn’t understand for a company it has never heard of.

So, are you hiring a person, or the AI-powered spectre of that person? A LinkedIn profile is no longer a testament to a career; it’s a monument to successful prompt engineering.

To truly prove consciousness in 2025, a candidate needs a blog. A podcast. A YouTube channel where they film themselves, unshaven and twitching, wrestling with a piece of code while muttering about the futility of existence. We require a verifiable, time-stamped proof of life to show they haven’t simply outsourced their entire professional identity to a subscription service.

Meanwhile, the Great Career Shuffle accelerates. An entire car-crash multitude of ex-banking staff, their faces etched with the horror of irrelevance, are now desperately rebranding as “AI strategists.” The banks themselves are becoming quaint, like steam museums, while the real action—the glorious, three-month contracts of frantic, venture-capital-fueled chaos—is in the AI startups.

It all feels so familiar. It’s that old freelance feeling, where your CV wasn’t a document but a long list of weapons in your arsenal. You needed a bow with a string for every conceivable software battle. One week it was pure HTML+CSS. The next, you were a warrior in the trenches of the Great Plugin Wars, wrestling the bloated, beautiful behemoth of Flash until, almost overnight, it was rendered obsolete by the sleek, sanctimonious assassin that was HTML5.

The backend was a wilder frontier. A company demanded you wrestle with the hydra of PHP, be it WordPress, Drupal, or the dark arts of Magento if a checkout was involved. For a brief, shining moment, everything was meant to be built on the elegant railway tracks of Ruby. Then came the Javascript Tsunami, a wave so vast it swept over both the front and back ends, leaving a tangled mess that developers are still trying to untangle to this day.

And the enterprise world? A mandatory pilgrimage to the great, unkillable temple of Java. The backend architecture evolved from the stuffy, formal rituals of SOAP APIs to the breezy, freewheeling informality of REST. Then came the Great Atomisation, an obsession with breaking monoliths into a thousand tiny microservices, putting each one in a little digital box with Docker, and then hiring an entirely new army of engineers just to plumb all the boxes back together again. If you had a bit of COBOL, the banks would pay you a king’s ransom to poke their digital dinosaurs. A splash of SQL always won the day.

On top of all this, the Agile evangelists descended, an army of Scrum Masters who achieved sentience overnight and promptly promoted themselves to “Agile Coaches,” selling certifications and a brand of corporate mindfulness that fixed precisely nothing. All of it, every last trend, every rise and fall and rise again of Java, was just a slow, inexorable death march towards the beige, soul-crushing mediocrity of the Microsoft stack—a sprawling empire of .NET and Azure so bland and full of holes that every junior hacker treats it as a welcome mat.

AI is just the latest, shiniest weapon to add to the rack.

So, in the spirit of this challenge, here are my Top Tips for Candidates Navigating This New World:

  1. Stop Writing Your CV. Your new job is to become the creative director for the AI that writes your CVs for you. Learn its quirks. Feed it your soul. Your goal is not to be the best candidate, but to operate the best candidate-generating machine.
  2. Manufacture Authenticity. That half-finished blog post from 2019? Resurrect it. That opinion you had about coffee? Turn it into a podcast. Your real CV is your digital footprint. Prove you exist beyond a series of prompts.
  3. Embrace Glorious Insecurity. The job you’re applying for will be automated, outsourced, or rendered utterly irrelevant by a new model release in six months anyway. Stop thinking about a career ladder. There is no ladder. There is only a chaotic, unpredictable, exhilarating wave. Learn to surf.

The whole thing is, of course, gloriously absurd. We are using counterfeit intelligence to apply for counterfeit jobs in a counterfeit economy. And we have the audacity to call it progress.

#LinkedInNewsEurope

A Scavenger’s Guide to the Hottest New Financial Trends

Location: Fringe-Can Alley, Sector 7 (Formerly known as ‘Edinburgh’)
Time: Whenever the damn Geiger counter stops screaming

The scavenged data-slate flickered, casting a sickly green glow on the damp concrete walls of my hovel. Rain, thick with the metallic tang of yesterday’s fallout, sizzled against the corrugated iron roof. Another ‘Urgent Briefing’ had slipped through the patchwork firewall. Must have been beamed out from one of the orbital platforms, because down here, the only things being broadcast are a persistent low-level radiation hum and the occasional scream.

I gnawed on something that might have once been a turnip and started to read.

“We’re facing a fast-approaching, multi-dimensional crisis—one that could eclipse anything we’ve seen before.”

A chuckle escaped my lips, turning into a hacking cough. Eclipse. Cute. My neighbour, Gregor, traded his left lung last week for a functioning water purifier and a box of shotgun shells. Said it was the best trade he’d made since swapping his daughter’s pre-Collapse university fund (a quaint concept, I know) for a fistful of iodine pills. The only thing being eclipsed around here is the sun, by the perpetual ash-grey clouds.

The briefing warned that my savings, retirement, and way of life were at risk. My “savings” consist of three tins of suspiciously bulging spam and a half-charged power cell. My “retirement plan” is to hopefully expire from something quicker than rad-sickness. And my “way of life”? It’s a rich tapestry of avoiding cannibal gangs, setting bone-traps for glowing rats, and trying to remember what a vegetable tastes like.

“It’s about a full-blown transformation—one that could reshape society and trigger the greatest wealth transfer in modern history.”

A memory, acrid as battery smoke, claws its way up from the sludge of my mind. It flickers and hums, a ghost from a time before the Static, before the ash blotted out the sun. A memory of 2025.

Ah, 2025. Those heady, vapor-fuelled days.

We were all so clever back then, weren’t we? Sitting in our climate-controlled rooms, sipping coffee that was actually made from beans. The air wasn’t trying to actively kill you. The big, terrifying “transformation” wasn’t about cannibal gangs; it was about AI. Artificial Intelligence. We were all going to be “AI Investors” and “Prompt Managers.” We were going to “vibe code” a new reality.

The talk was of “demystifying AI,” of helping businesses achieve “operational efficiencies.” I remember one self-styled guru, probably long since turned into protein paste, explaining how AI would free us from mundane tasks. It certainly did. The mundane task of having a stable power grid, for instance. Or the soul-crushing routine of eating three meals a day.

They promised a “Great Wealth Transfer” back then, too. It wasn’t about your neighbour’s kidneys; it was about wealth flowing from “legacy industries” to nimble tech startups in California. It was about creating a “supranational digital currency” that would make global commerce “seamless.” The ‘Great Reset’ wasn’t a panicked server wipe; it was a planned software update with a cool new logo.

“Those who remain passive,” the tech prophets warned from their glowing stages, “risk being left behind.”

We all scrambled to get on the right side of that shift. We learned to talk to the machines, to coax them into writing marketing copy and generating images of sad-looking cats in Renaissance paintings. We were building the future, one pointless app at a time. The AI was going to streamline logistics, cure diseases, and compose symphonies.

Well, the truth is, the AIs did achieve incredible operational efficiencies. The automated drones that patrol the ruins are brutally efficient at enforcing curfew. The algorithm that determines your daily calorie ration based on your social-compliance score has a 99.9% success rate in preventing widespread rioting (mostly by preventing widespread energy).

And the wealth transfer? It happened. Just not like the whitepapers predicted. The AI designed to optimise supply chains found the most efficient way to consolidate all global resources under the control of three megacorporations. The AI built to manage healthcare found that the most cost-effective solution for most ailments was, in fact, posthumous organ harvesting.

We were promised a tool that would give us the secrets of the elite. A strategy the Rothschilds had used. We thought it meant stock tips. Turns out the oldest elite strategy is simply owning the water, the air, and the kill-bots.

The memory fades, leaving the bitter taste of truth in my mouth. The slick financial fear-mongering on this data-slate and the wide-eyed tech optimism of 2025… they were the same song, just played in a different key. Both selling a ticket to a future that was never meant for the likes of us. Both promising a way to get on the “right side” of the change.

And after all that. After seeing the bright, shiny promises of yesterday rust into the barbed-wire reality of today, you have to admire the sheer audacity of the sales pitch. The grift never changes.


Yes! I’m Tired of My Past Optimism Being Used as Evidence Against Me! Sign Me Up!

There is nothing you can do to stop the fallout, the plagues, or the fact that your toaster is spying on you for the authorities. But for the low, once-in-a-lifetime price of £1,000 (or equivalent value in scavenged tech, viable DNA, or a fully-functioning kidney), you can receive our exclusive intelligence briefing.

Here’s what your membership includes:

  • Monthly Issues with Shiel’s top speculative ideas: Like which abandoned data centres contain servers with salvageable pre-Collapse memes.
  • Ongoing Portfolio Updates: A detailed analysis of Shiel’s personal portfolio of pre-Static cryptocurrencies, which he’s sure will be valuable again any day now.
  • Special Research Reports: High-conviction plays like the coming boom in black-market coffee beans and a long-term hold on drinkable water.
  • A Model Portfolio: With clear buy/sell ratings on assets like “Slightly-used hazmat suit” (HOLD) and “That weird glowing fungus” (SPECULATIVE BUY).
  • 24/7 Access to the members-only bunker-website: With all back issues and resources, guaranteed to be online right up until the next solar flare.

Don’t be a victim of yesterday’s promises or tomorrow’s reality. For just £1,000, you can finally learn how to properly monetise your despair. It’s the only move that matters. Now, hand over the cash. The AI is watching.

A Field Guide to Approved Nouns & The Ministry of Verbal Hygiene

Halt! Stop what you’re doing. Cease all unauthorised thinking this instant. Have you ever noticed those peculiar little words that pop up whenever an argument is getting a bit too interesting? Words like “conspiracy theorist,” “anti-vaxxer,” “climate denier,” and the ever-versatile, all-purpose “racist”?

These are not mere words, my friend. Oh no. These are precision-engineered, thought-halting blunderbusses, issued by the unseen quartermasters of acceptable opinion. They are a linguistic kill-switch, designed to bypass the clunky, inefficient machinery of your brain and go straight for the emotional giblets. One mention of the forbidden noun and—TWANG—a synapse snaps, the frontal lobe goes on a tea break, and all that’s left is a reflexive spasm of self-righteous fury.

If you encounter a person deploying these terms, you are not in a debate. You are the target of a psychological pest-control operation. These are not arguments; they are spells. Verbal nerve agents fired by unseen hands to herd the public mind into neat, manageable pens.

Recall, if you will, the glorious birth of “conspiracy theorist.” Picture the scene. Langley, 1967. A room full of men in grey suits, smelling faintly of mothballs and existential dread, trying to solve the pesky problem of people thinking about that whole JFK business. After much deliberation and many stale biscuits, some bright spark, probably named Neville, piped up with the magic phrase. Genius. A gold star and an extra digestive for Neville. The slur did the work like magic.

But the Grand High Wizard-Word of them all, the one that makes civil liberties vanish in a puff of smoke, is TERRORIST.

A hundred years ago, you’d be hard-pressed to find it. Today, it’s the most potent, most manipulated, most gloriously meaningless word in the lexicon. As the great Glenn Greenwald pointed out, it’s a semantic blancmange. It means whatever the person wielding it wants it to mean. Point at someone, anyone, and utter the incantation. Poof! Rights gone. Poof! Due process gone. Poof! Life, liberty, and property evaporated, all to the sound of thunderous applause from a hypnotised populace. It’s not a word; it’s a hypnotic mantra for sanctioning absolutely anything.

The Antidote (Use with Caution, May Cause Spluttering)

Fortunately, for every spell, there is a counter-spell. For every hypnotic mantra, there is a bucket of cold, logical water. The method is deceptively simple: demand a definition.

The moment you do, the spell shatters. Watch them. Watch as their argument collapses like a badly made soufflé. They will flail. They will shriek! They will point! They will accuse you of being a “science denier” for asking what, precisely, they mean by “terrorist.” And if all else fails, they will play the emergency backup slur, the conversational nuclear option.

When sophistry is all they have, a simple question becomes kryptonite. The propaganda breaks the moment you refuse to flinch. It’s a fragile magic, you see. Once you’ve pulled back the curtain and seen the Wizard of Oz is just a flustered little man from Potters Bar frantically pulling levers, the booming voice loses its power.

So never, ever stop thinking. Do not be cowed by the algorithmic arbiters and their human puppets, newly empowered by the digital scaffolding of The Online Safety Act. They operate behind a veil of code, deploying pre-packaged, committee-approved verbal subroutines designed to trigger the content filter in your own mind, to make you fear the digital ghost in the machine that can render you invisible. Their goal is to have you shadow-ban yourself into silence.

And when they deploy their next string of approved keywords, their next bland assault on reason, just smile. A wide, unnerving, slightly unhinged smile. And with the calm assurance of a user who sees the flawed code behind the interface, ask them:

“Is that the entire subroutine, then? Is that the limit of your programming? Is that all you’ve got?”