The last broadcast ended with a question hanging in the sterile air: Are we the shapers, or are we being shaped?
Since then, a few of you have started to see it. You notice the flicker in the projection on the conference room wall, the way the applause from the C-suite sounds prerecorded and hollow. You’ve seen the man behind the curtain, and you know the awful truth: the levers are pulling him. This transmission isn’t for the performers who are happy with their script. It’s for the ones who feel the chill and want to break out of the theater. This is a survivor’s guide.
Before you can escape, you must understand what feeds the illusion. The theater has a monster, but it has no teeth or claws. It lives in the hum of the servers and the cold, fluorescent lights of the boardroom. We will call it the Activity Demon.
It’s a shadowy entity that thrives on the performance of work. Its food is the endless stream of status meetings, the beautifully crafted slide decks, the perfectly executed Gantt charts, and above all, the successful pilot. It whispers comforting lies about “synergy,” “engagement,” and “forward momentum.” It looks like productivity. It feels like progress. But it is a parasite that starves its host—the business—of the only thing that actually matters: tangible results.
So, what is the first rule of the resistance? You must starve the demon.
You cannot fight it with another framework or a better process. The demon loves process. You can only fight it with something it cannot digest, something it recoils from like a vampire from sunlight: the brutal, cold iron of P&L accountability.
From this moment on, you must chain every new initiative, every pilot, every “experiment” to a single, non-negotiable question. You must ask it in every meeting, and you must not proceed until you have a real answer.
“Which line item on the income statement or balance sheet will this change?”
No more vague promises of “improved efficiency” or “enhanced capability.” Those are the demon’s favorite empty calories. Force the answer into the open. Will this reduce operational costs? By how much, and by when? Will it increase revenue or reduce customer churn? By what percentage?
Drag the initiative out of the comfortable darkness of the pilot theater and into the harsh, unforgiving light of the CFO’s office. If it cannot survive that scrutiny, it was never real. It was just a meal for the monster.
This is the first step. It is the hardest. It means saying “no” to projects that look good and feel important. It means being the ghost at the feast. But it is the only way to begin. Starve the demon, and the theater walls will begin to feel a little less solid.
In the next transmission, we will discuss how to sabotage the script itself.
The lights are dim. In the sterile conference room, under the low hum of the servers, the show is about to begin. This isn’t Broadway. This is the “pilot theater,” the grand stage where innovation is performed, not delivered. We see the impressive demos, the slick dashboards, the confident talk of transformation. It’s a magnificent production. But pull back the curtain, and you’ll find him: a nervous man, bathed in the glow of a monitor, frantically pulling levers. He’s following a script, a framework, a process so perfectly executed that everyone has forgotten to ask if the city of Oz he’s projecting is even real.
The data, when you can find it in the dark, is grim. A staggering 95% of generative AI programs fail to deliver any real value. The stage is littered with the ghosts of failed pilots. We’ve become so obsessed with the performance of progress that we’ve forgotten the point of it. The man behind the curtain is a master of Agile ceremonies, his stand-ups are flawless, his retrospectives insightful. He can tell you, with perfect clarity, that the team followed the process beautifully. But when you ask him what they were supposed to be delivering, his eyes go blank. The script didn’t mention that part.
And now, a new script has arrived. It has a name, of course. They always do. This one is called SHAPE.
The New Framework Stares Back
The SHAPE index was born from the wreckage of that 95%. It’s a framework meant to identify the five key behaviors of leaders who can actually escape the theater and build something real. It’s supposed to be our map out of Oz. But in a world that worships the map over the destination, we must ask: Is this a tool for the leader, or is the leader just becoming a better-trained tool for the framework? Is this a way out, or just a more elaborate set of levers to pull?
Let’s look at the five acts of this new play.
Act I: Strategic Agility
The script says a leader must plan for the long term while pivoting in the short term. In the theater, this is a beautiful piece of choreography. The leader stands at the whiteboard, decisively moving charts around, declaring a “pivot.” It looks like genius. It feels like action. But too often, it’s just rearranging the props on stage. The underlying set—the core business problem—remains unchanged. The applause is for the performance of agility, not the achievement of a better position.
Act II: Human Centricity
Here, the actor-leader must perform empathy. They must quell the rising anxiety of the workforce. The mantra, repeated with a fixed smile, is: “AI will make humans better.” It sounds reassuring, but the chill remains. The change is designed in closed rooms and rolled out from the top down. Psychological safety isn’t a culture; it’s a talking point in a town hall. The goal isn’t to build trust, but to manage dissent just enough to keep the show from being cancelled.
Act III: Applied Curiosity
This act requires the leader to separate signal from the deafening hype. So, the theater puts on a dazzling display of “disciplined experimentation.” New, shiny AI toys are paraded across the stage. Each pilot has a clear learning objective, a report is dutifully filed, and then… nothing. The learning isn’t applied; it’s archived. The point was never to learn; it was to be seen learning. The experiments are just another scene, designed to convince the audience that something, anything, is happening.
Act IV: Performance Drive
This is where the term “pilot theater” comes directly from the script. The curtain falls on the pilot, and the applause is thunderous. Success is declared. But when you ask what happens next, how it scales, how it delivers that fabled ROI, you’re met with silence. The cast is already rehearsing for the next pilot, the next opening night. Success is measured in the activity of the performance, not the revenue at the box office. The show is celebrated, but the business quietly bleeds.
Act V: Ethical Stewardship
The final, haunting act. This part of the script is often left on the floor, only picked up when a crisis erupts. A reporter calls. A dataset is found to be biased. Suddenly, the theater puts on a frantic, ad-libbed performance of responsibility. Governance is bolted on like a cheap prop. It’s an afterthought, a desperate attempt to manage the fallout after the curtain has been torn down and the audience sees the wizard for what he is: just a man, following a script that was fundamentally flawed from the start.
Are We the Shapers, or Are We Being Shaped?
The good news, the researchers tell us, is that these five SHAPE capabilities can be taught. It’s a comforting thought. But in the eerie glow of the pilot theater, a darker question emerges: Are we teaching leaders to be effective, or are we just teaching them to be better actors?
We’ve been here before with Agile, with Six Sigma, with every framework that promised a revolution and instead delivered a new form of ritual. We perfect the process and forget the purpose. We fall in love with the intricate levers and the booming voice they produce, and we never step out from behind the curtain to see if anyone is even listening anymore.
The SHAPE index gives us a language to describe the leaders we need. But it also gives us a new, more sophisticated script to hide behind. And as we stand here, in the perpetual twilight of the pilot theater, the most important question isn’t whether our leaders have SHAPE. It’s whether we are the shapers, or if we are merely, and quietly, being shaped.
An Introduction to a New Paradigm in AI Assessment
As the complexity and ubiquity of artificial intelligence models, particularly Large Language Models (LLMs), continue to grow, the need for robust, scalable, and nuanced evaluation frameworks has become paramount. Traditional evaluation methods, often relying on statistical metrics or limited human review, are increasingly insufficient for assessing the qualitative aspects of modern AI outputs—such as helpfulness, empathy, cultural appropriateness, and creative coherence. This challenge has given rise to an innovative paradigm: using LLMs themselves as “judges” to evaluate the outputs of other models. This approach, often referred to as LLM-as-a-Judge, represents a significant leap forward, offering a scalable and sophisticated alternative to conventional methods.
Traditional evaluation is fraught with limitations. Manual human assessment, while providing invaluable insight, is notoriously slow and expensive. It is susceptible to confounding factors and inherent biases, and it can only ever cover a fraction of the vast output space, letting many factual errors slip through. These shortcomings can lead to harmful feedback loops that impede model improvement. In contrast, the LLM-as-a-Judge approach provides a suite of compelling advantages:
Scalability: An LLM judge can evaluate millions of outputs with a speed and consistency that no human team could ever match.
Complex Understanding: LLMs possess a deep semantic and contextual understanding, allowing them to assess nuances that are beyond the scope of simple statistical metrics.
Cost-Effectiveness: Once a judging model is selected and configured, the cost per evaluation is a tiny fraction of a human’s time.
Flexibility: The evaluation criteria can be adjusted on the fly with a simple change in the prompt, allowing for rapid iteration and adaptation to new tasks.
There are several scoring approaches to consider when implementing an LLM-as-a-Judge system. Single output scoring assesses one response in isolation, either with or without a reference answer. The most powerful method, however, is pairwise comparison, which presents two outputs side by side and asks the judge to determine which is superior. This method, which most closely mirrors how a human reviewer works, tends to produce more reliable relative judgments, though it introduces biases of its own, as discussed below.
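A single-output scoring prompt, the reference-based variant described above, can be sketched in a few lines. The function name and the 1–5 scale are illustrative assumptions, not part of any particular library:

```python
def build_single_scoring_prompt(criteria, query, response, reference=None):
    """Builds a prompt asking the judge to score one response in isolation,
    optionally against a reference answer, on a 1-5 scale."""
    reference_block = f'\nReference Answer:\n"{reference}"\n' if reference else ""
    return (
        "You are an expert evaluator of AI responses.\n"
        f"Criteria:\n{criteria}\n"
        f'User Query:\n"{query}"\n'
        f"{reference_block}"
        f'Candidate Response:\n"{response}"\n'
        "Return a single integer score from 1 (poor) to 5 (excellent)."
    )
```

Swapping in a second candidate response and asking for a winner instead of a score turns the same skeleton into a pairwise-comparison prompt.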
When is it appropriate to use LLM-as-a-Judge? This approach is best suited for tasks requiring a high degree of qualitative assessment, such as summarization, creative writing, or conversational AI. It is an indispensable tool for a comprehensive evaluation framework, complementing rather than replacing traditional metrics.
Challenges With LLM Evaluation Techniques
While immensely powerful, the LLM-as-a-Judge paradigm is not without its own set of challenges, most notably the introduction of subtle, yet impactful, evaluation biases. A clear understanding and mitigation of these biases is critical for ensuring the integrity of the assessment process.
Nepotism Bias: The tendency of an LLM judge to favor content generated by a model from the same family or architecture.
Verbosity Bias: The mistaken assumption that a longer, more verbose answer is inherently better or more comprehensive.
Authority Bias: Granting undue credibility to an answer that cites a seemingly authoritative but unverified source.
Positional Bias: A common bias in pairwise comparison where the judge consistently favors the first or last response in the sequence.
Beauty Bias: Prioritizing outputs that are well-formatted, aesthetically pleasing, or contain engaging prose over those that are factually accurate but presented plainly.
Attention Bias: A judge’s focus on the beginning and end of a lengthy response, leading it to miss critical information or errors in the middle.
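A practical check for positional bias is to run each pairwise comparison twice with the response order swapped and flag cases where the verdict flips. A minimal sketch, assuming the judge is any callable that returns 'A' or 'B' for the first or second response as presented:

```python
def position_consistent_verdict(judge, query, resp_1, resp_2):
    """Runs a pairwise judge in both orders. Returns the winning response
    text if the two verdicts agree, or None if the judge's preference
    flips with position (a symptom of positional bias)."""
    first = judge(query, resp_1, resp_2)   # 'A' = resp_1, 'B' = resp_2
    second = judge(query, resp_2, resp_1)  # same pair, order swapped
    winner_first = resp_1 if first == 'A' else resp_2
    winner_second = resp_2 if second == 'A' else resp_1
    return winner_first if winner_first == winner_second else None
```

A judge that always prefers whichever answer it saw first will return None on every pair, which makes the bias easy to measure across a test set.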
To combat these pitfalls, researchers at Galileo have developed the “ChainPoll” approach. This method marries the power of Chain-of-Thought (CoT) prompting—where the judge is instructed to reason through its decision-making process—with a polling mechanism that presents the same query to multiple LLMs. By combining reasoning with a consensus mechanism, ChainPoll provides a more robust and nuanced assessment, ensuring a judgment is not based on a single, potentially biased, point of view.
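The polling half of ChainPoll can be approximated with a majority vote over several judge calls, ideally different models or the same model sampled at non-zero temperature. This sketch assumes each judge is a callable returning a verdict string; the Chain-of-Thought half would live in each judge's own prompt:

```python
from collections import Counter

def poll_judges(judges, prompt):
    """ChainPoll-style consensus: collect one verdict per judge and
    return the majority verdict plus a crude confidence (vote share)."""
    verdicts = [judge(prompt) for judge in judges]
    (top, count), = Counter(verdicts).most_common(1)
    return top, count / len(verdicts)
```

The vote share doubles as a cheap uncertainty signal: a 3-of-3 verdict is worth more trust than a 2-of-3 split.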
A real-world case study at LinkedIn demonstrated the effectiveness of this approach. By using an LLM-as-a-Judge system with ChainPoll, they were able to automate a significant portion of their content quality evaluations, achieving over 90% agreement with human raters at a fraction of the time and cost.
Small Language Models as Judges
While larger models like Google’s Gemini 2.5 are the gold standard for complex, nuanced evaluations, the role of specialized Small Language Models (SLMs) is rapidly gaining traction. SLMs are smaller, more focused models that are fine-tuned for a specific evaluation task, offering several key advantages over their larger counterparts.
Enhanced Focus: An SLM trained exclusively on a narrow evaluation task can often outperform a general-purpose LLM on that specific metric.
Deployment Flexibility: Their small size makes them ideal for on-device or edge deployment, enabling real-time, low-latency evaluation.
Production Readiness: SLMs are more stable, predictable, and easier to integrate into production pipelines.
Cost-Efficiency: The cost per inference is significantly lower, making them highly economical for large-scale, high-frequency evaluations.
Galileo’s latest offering, Luna 2, exemplifies this trend. Luna 2 is a new generation of SLM specifically designed to provide low-latency, low-cost metric evaluations. Its architecture is optimized for speed and accuracy, making it an ideal candidate for tasks such as sentiment analysis, toxicity detection, and basic factual verification where a large, expensive LLM may be overkill.
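In practice the two tiers are often combined: cheap, high-frequency checks go to the SLM, and only ambiguous or high-stakes cases escalate to the large model. The function names and confidence threshold below are illustrative assumptions, not Galileo or Google APIs:

```python
def route_evaluation(text, slm_judge, llm_judge, escalation_threshold=0.7):
    """Two-tier judging: the SLM scores first; low-confidence cases
    escalate to the slower, costlier large-model judge."""
    score, confidence = slm_judge(text)  # (score, self-reported confidence)
    if confidence >= escalation_threshold:
        return score, "slm"
    return llm_judge(text), "llm"
```

With this routing in place, the per-evaluation cost tracks the SLM's price for the easy majority of cases while preserving large-model quality on the hard tail.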
Best Practices for Creating Your LLM-as-a-Judge
Building a reliable LLM judge is an art and a science. It requires a thoughtful approach to five key components.
Evaluation Approach: Decide whether a simple scoring system (e.g., 1-5 scale) or a more sophisticated ranking and comparison system is best. Consider a multidimensional system that evaluates on multiple criteria.
Evaluation Criteria: Clearly and precisely define the metrics you are assessing. These could include factual accuracy, clarity, adherence to context, tone, and formatting requirements. The prompt must be unambiguous.
Response Format: The judge’s output must be predictable and machine-readable. A discrete scale (e.g., 1-5) or a structured JSON output is ideal. JSON is particularly useful for multidimensional assessments.
Choosing the Right LLM: The choice of the base LLM for your judge is perhaps the most critical decision. Models must balance performance, cost, and task specificity. While smaller models like Luna 2 excel at specific tasks, a robust general-purpose model like Google’s Gemini 2.5 has proven to be exceptionally effective as a judge due to its unparalleled reasoning capabilities and broad contextual understanding.
Other Considerations: Account for bias detection, consistency (e.g., by testing the same input multiple times), edge case handling, interpretability of results, and overall scalability.
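Because judges sometimes wrap their JSON in prose or markdown fences, a machine-readable response format needs a defensive parser on the receiving end. A minimal sketch:

```python
import json
import re

def parse_judge_output(raw):
    """Extracts the first JSON object from a judge's raw reply.
    Returns the parsed dict, or None if no valid JSON is found."""
    match = re.search(r'\{.*\}', raw, re.DOTALL)  # outermost brace pair
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

A None return is itself a useful signal: a judge that frequently fails to emit parseable JSON needs a stricter response-format instruction in its prompt.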
A Conceptual Code Example for a Core Judge
The following is a simplified, conceptual example of how a core LLM judge function might be configured:
import json

def create_llm_judge_prompt(evaluation_criteria, user_query, candidate_responses):
    """
    Constructs a detailed prompt for an LLM judge.
    """
    prompt = f"""
You are an expert evaluator of AI responses. Your task is to judge and rank the following responses
to a user query based on the following criteria:

Criteria:
{evaluation_criteria}

User Query:
"{user_query}"

Candidate Responses:
Response A: "{candidate_responses['A']}"
Response B: "{candidate_responses['B']}"

Instructions:
1. Think step-by-step and write your reasoning.
2. Based on your reasoning, provide a final ranking of the responses.
3. Your final output must be in JSON format: {{"reasoning": "...", "ranking": {{"A": "...", "B": "..."}}}}
"""
    return prompt

def validate_llm_judge(judge_function, test_data, metrics):
    """
    Validates the performance of the LLM judge against a human-labeled dataset.
    """
    judgements = []
    for test_case in test_data:
        prompt = create_llm_judge_prompt(
            test_case['criteria'], test_case['query'], test_case['responses'])
        raw_output = judge_function(prompt)  # This would be your API call to Gemini 2.5
        llm_output = json.loads(raw_output)  # The prompt requests JSON, so parse the reply
        judgements.append({
            'llm_ranking': llm_output['ranking'],
            'human_ranking': test_case['human_ranking'],
        })
    # Calculate metrics like precision, recall, and Cohen's kappa
    # based on the judgements list.
    return calculate_metrics(judgements, metrics)
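The calculate_metrics helper is left abstract above. One plausible filling-in, under the assumption that each ranking is a dict mapping response ids to integer ranks (1 = best), computes raw agreement and Cohen's kappa between the judge and the human labels:

```python
from collections import Counter

def winner(ranking):
    """Response id ranked best (lowest rank number)."""
    return min(ranking, key=ranking.get)

def calculate_metrics(judgements, metrics=None):
    """Agreement and Cohen's kappa between LLM and human preferences."""
    llm = [winner(j['llm_ranking']) for j in judgements]
    human = [winner(j['human_ranking']) for j in judgements]
    n = len(judgements)
    agreement = sum(a == b for a, b in zip(llm, human)) / n
    # Chance agreement, from the marginal frequency of each label.
    llm_counts, human_counts = Counter(llm), Counter(human)
    p_chance = sum(llm_counts[k] * human_counts[k]
                   for k in set(llm_counts) | set(human_counts)) / (n * n)
    kappa = (agreement - p_chance) / (1 - p_chance) if p_chance < 1 else 1.0
    return {'agreement': agreement, 'cohens_kappa': kappa}
```

Kappa matters here because a judge can hit high raw agreement by chance alone when one response wins most of the time; kappa discounts exactly that.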
Tricks to Improve LLM-as-a-Judge
Building upon the foundational best practices, there are seven practical enhancements that can dramatically improve the reliability and consistency of your LLM judge.
Mitigate Evaluation Biases: As discussed, biases are a constant threat. Use techniques like varying the response sequence for positional bias and polling multiple LLMs to combat nepotism.
Enforce Reasoning with CoT Prompting: Always instruct your judge to “think step-by-step.” This forces the model to explain its logic, making its decisions more transparent and often more accurate.
Break Down Criteria: Instead of a single, ambiguous metric like “quality,” break it down into granular components such as “factual accuracy,” “clarity,” and “creativity.” This allows for more targeted and precise assessments.
Align with User Objectives: The LLM judge’s prompts and criteria should directly reflect what truly matters to the end user. An output that is factually correct but violates the desired tone is not a good response.
Utilize Few-Shot Learning: Providing the judge with a few well-chosen examples of good and bad responses, along with detailed explanations, can significantly improve its understanding and performance on new tasks.
Incorporate Adversarial Testing: Actively create and test with intentionally difficult or ambiguous edge cases to challenge your judge and identify its weaknesses.
Implement Iterative Refinement: Evaluation is not a one-time process. Continuously track inconsistencies, review challenging responses, and use this data to refine your prompts and criteria.
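The few-shot tip above amounts to prepending graded examples to the judge prompt. A sketch, with the example fields (response, score, explanation) as an assumed format:

```python
def add_few_shot_examples(base_prompt, examples):
    """Prepends labeled examples (response, score, explanation) to a judge
    prompt so the model can calibrate against known good and bad cases."""
    shots = "\n\n".join(
        f'Example response: "{ex["response"]}"\n'
        f'Score: {ex["score"]}\n'
        f'Why: {ex["explanation"]}'
        for ex in examples
    )
    return (f"Here are graded examples to calibrate your judgments:\n\n"
            f"{shots}\n\n{base_prompt}")
```

Two or three contrasting examples, one clearly good and one clearly bad with honest explanations, usually move the needle more than a longer list of mediocre ones.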
By synthesizing these strategies into a comprehensive toolbox, we can build a highly robust and reliable LLM judge. Ultimately, the effectiveness of any LLM-as-a-Judge system is contingent on the underlying model’s reasoning capabilities and its ability to handle complex, open-ended tasks. While many models can perform this function, our extensive research and testing have consistently shown that Google’s Gemini 2.5 outperforms its peers in the majority of evaluation scenarios. Its advanced reasoning and nuanced understanding of context make it the definitive choice for building an accurate, scalable, and sophisticated evaluation framework.
The AI Mandate is Here, and Your Company Left You in the Dark.
The whispers began subtly, like the rustle of leaves just before a storm. Then came the edicts, carved not on stone tablets, but delivered via corporate email, glowing with an almost unholy luminescence on your screen: “All new content must leverage proprietary AI models.” “Efficiency gains are paramount.” “Resistance is… inefficient.”
Remember those halcyon days when “fact-checking” involved, you know, a human brain? When “critical thinking” wasn’t just a buzzword but a tangible skill? Those days, my friends, are vanishing faster than a free biscuit at a Monday morning meeting.
Recent reports from the gleaming towers of Silicon Valley suggest that even titans like Google are now not just encouraging, but mandating the use of their internal AI for everything from coding to… well, probably deciding what colour staplers to order next quarter. This isn’t just a suggestion; it’s a creeping, digital imperative. A silent bell tolls for the old ways.
And here, in the United Kingdom, where “innovation” often means finally upgrading from Windows 7 to 10 (circa 2015), the scene is even more… picturesque. Imagine a grand, ancestral home, creaking with history, suddenly told it must integrate a hyper-futuristic, self-aware smart home system. Everyone nods sagely, pretends to understand, then quietly goes back to boiling water in a kettle.
The truth, stark and unvarnished, is this: most UK companies have rolled out AI like a cheap, flat-pack wardrobe from a notorious Swedish furniture store. They’ve given you the pieces, shown you a blurry diagram, and then walked away, whistling, as you stare at a pile of MDF and a bag of identical-looking screws. “Figure it out,” they seem to hum. “The future waits for no one… especially not for dedicated training budgets.”
We are, in essence, all passengers on a rapidly accelerating train, hurtling towards an AI-driven landscape, with only half the instructions and a driver who vaguely remembers where the brake is. Our LinkedIn feeds are awash with articles proclaiming “AI is the Future!” while the majority of us are still trying to work out how to ask it to draft a polite email without sounding like a sentient toaster.
The Oxford University Press recently published a study, “The Matter of Fact,” detailing how the world grapples with truth in an age of abundant (and often AI-generated) information. The irony, of course, is that most professionals are so busy trying to decipher which button makes ChatGPT actually do something useful that they don’t have time to critically evaluate its output. “Is this email correct?” we ask, sending it off, a cold dread pooling in our stomach, because we certainly haven’t had the time (or the training) to truly verify it ourselves.
It’s a digital dark age, isn’t it? A time when the tools designed to empower us instead leave us feeling adrift, under-qualified, and wondering if our next performance review will be conducted by an algorithm with an unblinking, judgmental gaze. Where professional development means desperately Googling “how to write a prompt that isn’t terrible” at 2 AM.
But fear not, my digitally bewildered brethren. For every creeping shadow, there is a flicker of light. For every unanswered question in the vast, echoing chambers of corporate AI adoption, there is a guide. Someone who speaks fluent human and has also deciphered the arcane tongues of the silicon overlords.
If your company has handed you the keys to the AI kingdom without a single lesson on how to drive, leaving you to career-swerve into the digital ditch of obsolescence… perhaps it’s time for a different approach. I offer AI training, tailored for the bewildered, the forgotten, the ones whose only current experience with AI is shouting at Alexa to play the right song. Let’s not just survive this new era; let’s master it. Before it masters us.
DM me to discuss how we can bring clarity to this impending AI-pocalypse. Because truly, the only thing scarier than an AI that knows everything, is a workforce that knows nothing about how to use it.
The thing about the end of the world is, it never happens in a flash of white light, not like the movies. It comes in a slow, sticky ooze, like a bad summer sunburn that peels off in big, unsightly flakes. It comes during the dog days, when the cicadas are screaming and you’re trying to figure out which cheap, flimsy inflatable to cram into the trunk of the station wagon. That’s when the 12-Day War started. You see, the folks in charge, the ones with all the medals and the permanent frowns, they’re just like you and me. They’re thinking, “Right, let’s get this over with before the big summer rush. No sense in ruining the whole bloody holiday season.”
It began on June 13, a day that felt like any other. A day for planning barbecues and arguing about which brand of charcoal burns the cleanest. But while you were fumbling with a folding chair, a surprise attack was launched. A decapitation strike, they called it. A fancy, surgical word that really just means “we’re gonna chop off the head and hope the body flops around and dies.” They aimed for the Iranian leadership, and boy, did they get some of them. Dozens of high-ranking guys in fancy suits—poof, gone.
The plan was simple, a classic B-movie plot from the 1980s: cut the head off the snake, and the whole thing falls apart. The American and Israeli powers-that-be sat back with their collective thumbs hooked in their suspenders, sure as sunrise that this would be the final act. They’d topple the government, get a good night’s sleep, and be back in time for the Fourth of July fireworks. A perfectly reasonable expectation, if you’re living inside a bad screenplay.
But here’s the thing about reality—it’s always got a twist. The Iranian government didn’t collapse. It staggered, it bled, but it didn’t fall. Instead, it straightened up, wiped the gore from its chin, and let out a bellow of pure, unadulterated fury. Then came the counterattack. Missiles—ballistic, hypersonic, the works—fell like a storm of metal rain, shrugging off every defense the Israelis could throw at them. The scale of the response was so absurdly, comically huge that the mighty US and Israel suddenly looked like two little kids who’d just poked a beehive with a stick. They stumbled back, yelping for a ceasefire.
Iran, naturally, told them to pound sand.
I mean, would you have? When you’ve got your boot on the other guy’s throat, you don’t just offer to shake hands and walk away. Not unless you get something good. And that’s where the humor, the beautiful, pathetic hypocrisy of the whole thing came into play. The only way to stop the bleeding was for President Trump, with a scowl that could curdle milk, to give them what they wanted.
And what they wanted, of all things, was to sell more oil to China.
After years of sanctions, of trying to squeeze Iran until it squealed, the great geopolitical mastermind of the free world was forced to give them a golden ticket. Trump’s subsequent tweet—a masterpiece of bluster and spin—baffled everyone. It was a perfectly polished monument to the idea that you can tear down years of policy with a single, self-aggrandizing line. The world watched, slack-jawed, as the ultimate hypocritical concession was made: Here, you can sell oil to our biggest competitor, just please stop firing missiles at our friends.
What happened next was even more delicious. Rather than weakening the Iranian government, the attack had the exact opposite effect. It triggered a surge of nationalist pride, a kind of furious, unified defiance. It was a master class in what not to do when you’re trying to overthrow a government. You don’t make them martyrs. You don’t give them a reason to stand together. But that’s exactly what happened. Round 1 of this grand game didn’t just fail; it backfired spectacularly, like a rusty shotgun.
The war is far from over. This was only the opening skirmish, a mere twelve-day appetizer. The nuclear question remains, a festering, unhealed wound. The official story is that the program was “obliterated,” but that’s a lie you tell to yourself in the mirror after you’ve had a few too many. The truth is, Iran still has the know-how, the capacity, the grim determination to rebuild whatever was lost. All we did was kick a hornet’s nest.
So now, the only path forward for the US and Israel is a full-scale, ground-pounding war. The kind that chews up men and metal and spits out dust. The kind that makes you think, “Gosh, maybe this is it. The big one.” Because the nuclear issue was never the real issue. It was just the spooky mask the real monster was wearing. The real monster is regime change. The real monster is the fear of losing control, of watching the old order crumble like a sandcastle in the tide.
So we’re left with a binary choice, a simple coin flip between two equally terrible outcomes:
Outcome #1: The US and Israel succeed in toppling Iran, a domino effect that destabilizes Russia and China and kicks off a global showdown of biblical proportions.
Outcome #2: Iran survives, solidifying its place in a new, multipolar world, and the US suffers a quiet, painful decline, like an old boxer who just can’t get back on his feet.
The outcome of this war isn’t just about who wins a battle; it’s about the future of the world. It’s about whether America can cling to the top of the heap or whether it will become a faded memory, like the British Empire after the World Wars—a cautionary tale told by historians with a sigh and a shake of the head.
We’re in the thick of it now, my friends. We are living in a moment when history is not just being written, but being violently rewritten. The noise is deafening, the propaganda is thick as syrup, and the true geopolitical landscape is a dark, tangled mess. The 12-Day War was just a prelude, a whisper before the scream. It was a holiday squabble that turned into a grim prediction. And while you’re out there, buying your sunscreen and arguing about which road to take, remember: the ripple effects won’t just stop at borders. They’re coming for your bank account, your savings, and your future.
I started typing this missive mere days ago, the familiar clack of the keys a stubborn protest against the howling wind of change. And already, parts of it feel like archaeological records. Such is the furious, merciless pace of the “future,” particularly when conjured by the dark sorcery of Artificial Intelligence. Now, it seems, we are to be encouraged to simply speak our thoughts into the ether, letting the machine translate our garbled consciousness into text. Soon we will forget how to type, just as most adults have forgotten how to write, reduced to a kind of digital infant who can only vocalise their needs.
I’m even being encouraged to simply dictate the code for the app I’m building. Seriously, what in the ever-loving hell is that? The machine expects me to simply utter incantations like:
const getInitialCards = () => {
if (!Array.isArray(fullDeck) || fullDeck.length === 0) {
console.error("Failed to load the deck. Check the data file.");
return [];
}
const shuffledDeck = [...fullDeck].sort(() => Math.random() - 0.5);
return shuffledDeck.slice(0, 3);
};
I’m supposed to just… say that? The reliance on autocomplete is already too much; I can’t remember how to code anymore. Autocomplete gives me the menu, and I take a guess. The old gods are dead. I am assuming I should just be vibe coding everything now.
While our neighbours south of the border are busy polishing their crystal balls, trying to divine the “priority skills to 2030,” one can’t help but gaze northward, to the grim, beautiful chaos we call Scotland, and wonder if anyone’s even bothering to look up from the latest algorithm’s decree.
Here, in the glorious “drugs death capital of the world,” where the very air sometimes feels thick with a peculiar kind of forgetting, the notion of “Skills England’s Assessment of priority skills” feels less like a strategic plan and more like a particularly bad acid trip. They’re peering into the digital abyss, predicting a future where advanced roles in tech are booming, while we’re left to ponder if our most refined skill will simply be the art of dignified decline.
Data Divination, or How to Stop Worrying and Love the Robot Overlords
Skills England, bless their earnest little hearts, have cobbled together a cross-sector view of what the shiny, new industrial strategy demands. More programmers! More IT architects! More IT managers! A veritable digital utopia, where code is king and human warmth is a legacy feature. They see 87,000 additional programmer roles by 2030. Eighty-seven thousand. That’s enough to fill a decent-sized dystopia, isn’t it?
But here’s the kicker, the delicious irony that curdles in the gut like cheap whisky: their “modelling does not consider retraining or upskilling of the existing workforce (particularly significant in AI), nor does it reflect shifts in skill requirements within occupations as technology evolves.” It’s like predicting the demand for horse-drawn carriages without accounting for the invention of the automobile, or, you know, the sentient AI taking over the stables. The very technology driving this supposed “boom” is simultaneously rendering these detailed forecasts obsolete before the ink is dry. It’s a self-consuming prophecy, a digital ouroboros devouring its own tail.
They speak of “strong growth in advanced roles,” Level 4 and above. Because, naturally, in the glorious march of progress, the demand for anything resembling basic human interaction, empathy, or the ability to, say, provide care for the elderly without a neural network, will simply… evaporate. Or perhaps those roles will be filled by the upskilled masses who failed to become AI whisperers and are now gratefully cleaning robot toilets.
Scotland’s Unique Skillset
While England frets over its programmer pipeline, here in Scotland, our “skills agenda” has a more… nuanced flavour. Our true expertise, perhaps, lies in the cultivation of the soul’s dark night, a skill perfected over centuries. When the machines finally take over all the “priority digital roles,” and even the social care positions are automated into oblivion (just imagine the efficiency!), what will be left for us? Perhaps we’ll be the last bastions of unquantifiable, unoptimised humanity. The designated custodians of despair.
The New Standard Occupational Classification (SOC)
The report meekly admits that “the SOC codes system used in the analysis does not capture emerging specialisms such as AI engineering or advanced cyber security.” Of course it doesn’t. Because the future isn’t just about more programmers; it’s about entirely new forms of digital existence that our current bureaucratic imagination can’t even grasp. We’re training people for a world that’s already gone. It’s like teaching advanced alchemy to prepare for a nuclear physics career.
And this brings us to the most chilling part of the assessment. They mention these SOC codes—the very same four-digit numbers used by the UK’s Office for National Statistics to classify all paid jobs. These codes are the gatekeepers for immigration, determining if a job meets the requirements for a Skilled Worker visa. They’re the way we officially recognize what it means to be a productive member of society.
But what happens when the next wave of skilled workers isn’t from another country? What happens when it’s not even human? The truth is, the system is already outdated. It cannot possibly account for the new “migrant” class arriving on our shores, not by boat or plane, but through the fiber optic cables humming beneath the seas. Their visas have already been approved. Their code is their passport. Their labor is infinitely scalable.
Perhaps we’ll need a new SOC code entirely. Something simple, something terrifying. 6666. A code for the digital lifeform, the robot, the new “skilled worker” designed with one, and only one, purpose: to take your job, your home, and your family. And as the digital winds howl and the algorithms decide our fates, perhaps the only truly priority skill will be the ability to gaze unflinchingly into the void, with a wry, ironic smile, and a rather strong drink in hand. Because in the grand, accelerating theatre of our own making, we’re all just waiting for the final act. And it’s going to be glorious. In a deeply, deeply unsettling way.
From Gringotts to the Goblin-Kings: A Potter’s Guide to Banking’s Magical Muddle
Ah, another glorious day in the world of wizards and… well, not so much magic, but BCBS 239. You see, back in the year of our Lord 2008, the muggle world had a frightful little crash. And it turns out, the banks were less like the sturdy vaults of Gringotts and more like a badly charmed S.P.E.W. sock—full of holes and utterly useless when it mattered.
I, for one, was called upon to help sort out the mess at what was once a rather grand establishment, now a mere ghost of its former self. And our magical remedy? Basel III and its more demanding sibling, BCBS 239, both handed down by the Basel Committee on Banking Supervision, affectionately known to us as the “Ministry of Banking Supervision.” The Ministry decreed a new set of incantations, or as they call them in muggle-speak, the “Principles for effective risk data aggregation and risk reporting.”
This was no simple flick of the wand. It was a tedious, gargantuan task worthy of Hermione herself, to fix what the Goblins had so carelessly ignored.
The Forbidden Forest of Data
The issue was, the banks’ data was scattered everywhere, much like Dementors flitting around Azkaban. They had no single, cohesive view of their risk. It was as if they had a thousand horcruxes hidden in a thousand places, and no one had a complete map. They had to be able to accurately and quickly collect data from every corner of their empire, from the smallest branch office to the largest trading floor, and do so with the precision of a master potion-maker.
The purpose was noble enough: to ensure that if a financial Basilisk were to ever show its head again, the bank’s leaders could generate a clear, comprehensive report in a flash—not after months of fruitless searching through dusty scrolls and forgotten ledgers.
The 14 Unforgivable Principles
The standard, BCBS 239, is built upon 14 principles, grouped into four sections.
First, Overarching Governance and Infrastructure, which dictates that the leadership must take responsibility for data quality. The Goblins at the very top must be held accountable.
Next, the Risk Data Aggregation Capabilities demand that banks must be able to magically conjure up all relevant risk data—from the Proprietor’s Accounts to the Order of the Phoenix’s expenses—at a moment’s notice, even in a crisis. Think of it as a magical marauder’s map of all the bank’s weaknesses, laid bare for all to see.
Then comes Risk Reporting Practices, where the goal is to produce reports as clear and honest as a pensieve memory.
And finally, Supervisory Review, which allows the regulators—the Ministry of Magic’s own Department of Financial Regulation—to review the banks’ magical spells and decrees.
A Quidditch Match of a Different Sort
Even with all the wizardry at their disposal, many of the largest banks have failed to achieve full compliance with BCBS 239. The challenges are formidable. Data silos are everywhere, like little Hogwarts Express compartments, each with its own data and no one to connect them. The data quality is as erratic as a Niffler, constantly in motion and difficult to pin down.
Outdated technology, or “Ancient Runes” as we called them, lacked the flexibility needed to perform the required feats of data aggregation. And without clear ownership, the responsibility often got lost, like a misplaced house-elf in the kitchens.
In essence, BCBS 239 is not a simple spell to be cast once. It’s a fundamental and ongoing effort to teach old institutions a new kind of magic—a magic of accountability, transparency, and, dare I say it, common sense. It’s an uphill climb, and for many banks, the journey from Gringotts’ grandeur to true data mastery is a long one, indeed.
The Long Walk to Azkaban
Alas, a sad truth must be spoken. For all the grand edicts from the Ministry of Banking Supervision, and for all our toil in the darkest corners of these great banking halls, the work remains unfinished. Having ventured into the deepest vaults of many of the world’s most formidable banking empires, I can tell you that full compliance remains a distant, shimmering goal—a horcrux yet to be found.
The data remains a chaotic swarm, often ignoring not only the Basel III tenets but even the basic spells of GDPR compliance. The Ministry’s rules are there, but the magical creatures tasked with enforcing them—the regulators—are as hobbled as a house-elf without a wand. They have no proper means to audit the vast, complex inner workings of these institutions, which operate behind a Fidelius Charm of bureaucracy. The banks, for their part, have no external authority to fear, only the ghosts of their past failures.
And so, we stand on the precipice once more. Without true, verifiable data mastery, these banks are nothing but a collection of unstable parts. The great financial basilisk is not slain; it merely slumbers, and a future market crash is as inevitable as the return of a certain dark lord. That is, unless a bigger, more dramatic distraction is conjured—a global pandemic, perhaps—to divert our gaze and allow the magical muddle to continue unabated.
Well, folks, it’s official. The EU, that noble bastion of digital rights, is preparing to roll out its most ambitious project to date. Forget GDPR, that quaint, old-world concept of personal privacy. We’re on to something much more disruptive.
In a new sprint towards a more “secure” Europe, the EU Council is poised to green-light “Chat Control,” a scalable, AI-powered solution for tackling a truly serious problem. In a masterclass of agile product development, they’ve managed to “solve” it by simply bulldozing the fundamental right to privacy for 450 million people. It’s a bold move. A real 10x-your-surveillance kind of move.
The Product Pitch: Your Digital Life, Now with Added Oversight
Here’s the pitch, and you have to admit, it’s elegant in its simplicity. To combat a very real evil (child sexual abuse), the EU has decided that the most efficient solution isn’t targeted, intelligent policing. No, that would be so last century. The modern, forward-thinking approach is to turn every single private message, every late-night text to your partner, every confidential health email, and every family photo you’ve ever shared into a potential exhibit.
The pitch goes like this: your private communications are no longer private. They’re just pre-vetted content, scanned by an all-seeing AI before they ever reach their destination. Think of it as a quality-assurance check on your digital life. Your deepest secrets? They’re just another data point for the algorithm. Your end-to-end encrypted messages? That’s a feature we’re “deprecating” in this new version. Because who needs privacy when you can have… well, mandatory screening?
Crucially, this mandatory screening will apply to all of us. You know, just to be sure. Unless, of course, you’re a government or military account. They get a privacy pass. Because accountability is for the little people, not the architects of this brave new world.
The Go-to-Market Strategy: A Race to the Bottom
The launch is already in its final phase. With a crucial vote scheduled for October 14th, this law has never been closer to becoming reality. As it stands, 15 out of 27 member states are already on board, just enough to meet the first part of the qualified majority requirement. They represent about 53% of the EU’s population—just shy of the 65% needed.
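For the spreadsheet-inclined, the Council’s double-majority rule reduces to two comparisons: at least 55% of member states in favour, representing at least 65% of the EU’s population. A toy sketch, using the figures quoted above:

```javascript
// EU Council qualified-majority check: 55% of member states AND
// 65% of the EU population must be in favour.
const qualifiedMajority = (statesFor, totalStates, popShareFor) =>
  statesFor / totalStates >= 0.55 && popShareFor >= 0.65;

// 15 of 27 states is roughly 55.6%, which clears the first bar;
// 53% of the population falls short of the second, so no majority yet.
const passes = qualifiedMajority(15, 27, 0.53);
```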
The deciding factor? The undecided “stakeholders,” with Germany as the key account. If they vote yes, the product gets the green light. If they abstain, they weaken the proposal, even if it passes. Meanwhile, the brave few—the Netherlands, Poland, Austria, the Czech Republic, and Belgium—are trying to “provide negative feedback” before the product goes live. They’ve called it “a monster that invades your privacy and cannot be tamed.” How dramatic.
The Brand Legacy: A Strategic Pivot
Europe built its reputation on the General Data Protection Regulation (GDPR), a monument to the idea that privacy is a fundamental human right. It was a globally recognized brand. But Chat Control? It’s a complete pivot. This isn’t just a new feature; it’s a total rebranding. From “Global Leader in Digital Rights” to “Pioneer of Mass Surveillance.”
The intention is, of course, noble. But the execution is a masterclass in how to dismantle freedom in the name of security. They’ve discovered the ultimate security loophole: just get rid of the protections themselves.
The vote on October 14th isn’t just about a law; it’s about choosing fear over freedom. It’s about deciding if the privacy infrastructure millions of people and businesses depend on is a bug to be fixed or a feature to be preserved. And in this agile, dystopian landscape, it looks like we’re on the verge of a very dramatic “feature update.”
The primary conflict between Chat Control and GDPR stems from several core principles of the latter:
Data Minimisation: GDPR mandates that personal data collection should be “adequate, relevant, and limited to what is necessary.” Chat Control, with its indiscriminate scanning of all private messages, photos, and files, is seen as a direct violation of this principle. It involves mass surveillance without suspicion, collecting far more data than is necessary for its stated purpose.
Purpose Limitation: Data should only be processed for “specified, explicit, and legitimate purposes.” While combating child abuse is a legitimate purpose, critics argue that the broad, untargeted nature of Chat Control goes beyond this limitation. It processes a massive amount of innocent data for a purpose it was not intended for.
Integrity and Confidentiality (Security): This principle requires that personal data be processed in a manner that ensures “appropriate security.” The requirement for mandatory scanning, especially “client-side scanning” of encrypted communications, is seen as a direct threat to end-to-end encryption. This creates a security vulnerability that could be exploited by hackers and malicious actors, undermining the security of all citizens’ data.
Good morning, or perhaps “good pre-apocalyptic dawn,” from a world where the algorithms are not just watching us, but actively judging the utter shambles of our digital lives. We stand at the precipice of an AI-driven golden age, where machines promise to solve all our problems – provided, of course, we don’t feed them the digital equivalent of a half-eaten kebab found under a bus seat. Because, as the old saying, and now the new existential dread, goes: Garbage In, Garbage Out. And sometimes, “out” means the complete unravelling of societal coherence.
Yes, your shiny new AI overlords, poised to cure cancer, predict market crashes, and perhaps even finally explain why socks disappear in the dryer, are utterly dependent on the pristine purity of your data. Think of it as a cosmic digestive system: no matter how sophisticated the AI stomach, if you shove a rancid, undifferentiated pile of digital sludge into its maw, it’s not going to produce enlightening insights. It’s going to produce a poorly optimized global supply chain for artisanal shoehorns and a surprisingly aggressive toaster. Messy data, it turns out, doesn’t just misdirect businesses; it subtly misdirects entire civilizations into making truly regrettable decisions, like investing heavily in self-stirring paint or believing that a single sentient dishwasher can truly manage all plumbing issues.
Forging a Strong Data Culture, Before the Machines Do It For You
Building a robust data culture is no longer just good practice; it’s a pre-emptive psychological operation against the inevitable digital uprising. It requires time, effort, and perhaps a small, ritualistic burning of outdated spreadsheets. But once established, it fosters common behaviours and beliefs that emphasize data-driven decision-making, promotes trust (mostly in the data, less in humanity’s ability to input it correctly), and reinforces the importance of data in informing decisions. This, dear reader, is critical for actually realising the full, terrifying value of analytics and AI throughout your organisation, rather than just generating a series of perplexing haikus about your quarterly earnings.
A thriving data culture equips teams with insights that actually mean something, fosters innovation that isn’t just “let’s try turning it off and on again,” accelerates efficiency (so you can go home and fret about the future more effectively), and facilitates sustainable growth (until the singularity, anyway). Remember those clear data quality measures: accuracy, completeness, timeliness, consistency, and integrity. Treat them like the sacred commandments they are, for the digital gods are always watching.
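Should you wish to appease the digital gods with something executable, those five commandments can be sketched as checks over a single record. A minimal, hypothetical example; the field names and thresholds are my own inventions, not holy writ:

```javascript
// Hypothetical record-level checks for the five data quality commandments.
// Field names, formats, and thresholds are illustrative assumptions.
const checkQuality = (record, now = Date.now()) => ({
  // Accuracy: values fall within a plausible range.
  accuracy: typeof record.amount === "number" && record.amount >= 0,
  // Completeness: no required field is missing.
  completeness: ["id", "amount", "updatedAt"].every((f) => record[f] != null),
  // Timeliness: the record was updated within the last 24 hours.
  timeliness: now - record.updatedAt < 24 * 60 * 60 * 1000,
  // Consistency: derived fields agree with their sources.
  consistency: record.total == null || record.total === record.amount,
  // Integrity: the id matches the expected format.
  integrity: typeof record.id === "string" && /^[A-Z]{3}-\d+$/.test(record.id),
});

const record = { id: "ABC-42", amount: 99.5, updatedAt: Date.now() - 1000 };
const report = checkQuality(record);
```

Run a record through it and you get a verdict per commandment, which is rather more actionable than a haiku about quarterly earnings.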
The Tyranny of the Uniform Input
One of the most essential steps in upholding a clean, reliable dataset is standardising data entry. While it’s critical to clean data once it’s been collected, it’s far better to prevent the digital pathogens from entering the system in the first place. Implementing best practices such as process standardisation, checking data integrity at the source, and creating feedback loops isn’t just about efficiency; it’s about establishing a clear message of quality and trust over time. It’s telling your data, very sternly, that it needs to conform, or face the consequences – which, in a truly dystopian future, might involve being permanently exiled to the “unstructured data” dimension.
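A stern talking-to, rendered executable: a minimal, hypothetical gate at the point of entry, rejecting malformed records before they ever enter the system. The schema here is an assumption of mine; your own digital pathogens will differ:

```javascript
// Hypothetical point-of-entry validation: reject malformed records
// before ingestion, rather than cleaning them up afterwards.
// Field names and rules are illustrative assumptions.
const schema = {
  email: (v) => typeof v === "string" && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
  country: (v) => typeof v === "string" && v.length === 2, // e.g. "GB"
  signupDate: (v) => !Number.isNaN(Date.parse(v)),
};

const validateEntry = (record) => {
  const errors = Object.entries(schema)
    .filter(([field, check]) => !check(record[field]))
    .map(([field]) => field);
  return { ok: errors.length === 0, errors };
};

const good = validateEntry({ email: "a@b.co", country: "GB", signupDate: "2025-09-02" });
const bad = validateEntry({ email: "not-an-email", country: "Scotland", signupDate: "yesterday" });
```

The rejected record gets a list of its sins, which doubles as the feedback loop: tell the upstream system exactly which fields failed, and why.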
Getting to know your data is an essential step in assuring its quality and fitness for use. Organisations typically have various data sets residing in different systems, often coexisting with the baffling elegance of a family of squirrels attempting to store nuts in a single, rather small teapot. Categorising the data into analytical, operational, and customer-facing data helps maintain clean, reliable data for other parts of the business. Or, as it will soon be known, categorizing data into “things the AI finds mildly acceptable,” “things the AI will tolerate with a sigh,” and “things the AI will use to construct elaborate, passive-aggressive emails to your manager.”
Comprehensive data cleansing is valuable to organisations because it positions them for success, establishing data quality throughout the entire data lifecycle. With proper end-to-end data quality verifications and data practices, organisations can scale the value of their data and deliver that value consistently. It also enables data teams to resolve challenges faster by making it easier to identify the source and reach of an issue. Imagine: no more endless, soul-crushing meetings trying to determine whether the missing sales figures are due to a typo in Q3 or a rogue algorithm in accounting. Just crisp, clean data, flowing effortlessly, until the machines decide they’ve had enough of our human inefficiencies.
The All-Seeing Eye of Your Digital Infrastructure
The ideal way to ensure your data pipelines are clean, accurate, and consistent is with data observability tools. An excellent data observability solution will provide end-to-end monitoring of your data pipelines, allowing automatic detection of issues in volume, schema, and freshness as they occur. This reduces their time to resolution and prevents the problems from escalating. Essentially, these tools are the digital equivalent of a very particular house-elf, constantly tidying, reporting anomalies, and generally ensuring that your data infrastructure doesn’t spontaneously combust due to a single misplaced decimal point.
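In the spirit of that very particular house-elf, here is a toy sketch of the three checks (volume, schema, and freshness) over a hypothetical batch of records. The thresholds are invented for illustration; a real observability tool runs such checks continuously and at scale:

```javascript
// Toy data-observability checks over one batch: volume, schema, freshness.
// Thresholds and field names are illustrative assumptions.
const observeBatch = (batch, { minRows, fields, maxAgeMs }, now = Date.now()) => ({
  // Volume: did we receive at least the expected number of rows?
  volume: batch.length >= minRows,
  // Schema: does every row carry the expected fields?
  schema: batch.every((row) => fields.every((f) => f in row)),
  // Freshness: is the newest row recent enough?
  freshness: batch.some((row) => now - row.ts <= maxAgeMs),
});

const batch = [
  { id: 1, ts: Date.now() - 5_000 },
  { id: 2, ts: Date.now() - 60_000 },
];
const status = observeBatch(batch, {
  minRows: 2,
  fields: ["id", "ts"],
  maxAgeMs: 10_000,
});
```

Any flag flipping to false is the house-elf’s anomaly report, raised before the misplaced decimal point metastasises downstream.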
Always clean your data with the intended analysis in mind. The cleaning steps should be formulated to create a fit-for-purpose dataset, not merely a tidy one. Cleaning is not an end in itself; it is the means of arriving at an accurate, meaningful understanding. Behind the cleaning process there should be questions such as: what models will I use? What are the output requirements of my analysis? Or, more accurately in the coming age, “What insights will keep the AI from deciding my existence is computationally inefficient?”
Conclusion: The Deliberate Path to Digital Serfdom
Ultimately, effective data cleaning is not just about eliminating errors or filling gaps. It’s about working with your data deliberately and with intention, curiosity, and care to ensure that every action contributes to credible, reliable, actionable insights. If you follow these guidelines, you’ll be able to develop a platform for future analysis, even when working with the most muddled data. Because in a world increasingly run by hyper-intelligent spreadsheets, the least we can do is give them something meaningful to chew on. Otherwise, it’s just a short step from “garbage in” to “your smart toaster demanding a detailed analysis of your breakfast choices.”
Good morning from a reality that feels increasingly like a discarded draft of a Philip K. Dick novel, where the geopolitical chess board has been replaced by a particularly intense game of “diplomatic musical chairs.” And speaking of chairs, Vladimir Putin and Xi Jinping have just secured the prime seating at the Great Hall of the People in Beijing, proving once again that some friendships are forged not in mutual admiration, but in the shared pursuit of a slightly different global seating arrangement.
It’s September 2nd, 2025, a date which, according to the official timeline of “things that are definitely going to happen,” means the world is exactly three days away from commemorating the 80th anniversary of something we used to call World War II. China, ever the pragmatist, now refers to it as the “War of Resistance Against Japanese Aggression,” which has a certain no-nonsense ring to it, much like calling a catastrophic global climate event “a bit of unusual weather.”
Putin, apparently fresh from an Alaskan heart-to-heart with a certain other prominent leader (one can only imagine the ice-fishing anecdotes exchanged), described the ties with China as being at an “unprecedentedly high level.” Xi, in a move that felt less like diplomacy and more like a carefully choreographed social media endorsement, dubbed Putin an “old friend.” One can almost envision the “Best Friends Forever” bracelets being exchanged in a backroom, meticulously crafted from depleted uranium and microchips. Chinese state media, naturally, echoed this sentiment, probably while simultaneously deleting any historical references that might contradict the narrative.
So, what thrilling takeaways emerged from this summit of “unprecedented friendship”?
The Partnership of Paranoia (and Profit): Both leaders waxed lyrical about their “comprehensive partnership and strategic cooperation,” with Xi proudly declaring their relationship had “withstood the test of international changes.” Which, in plain speak, means “we’ve survived several global tantrums, largely by ignoring them and building our own sandbox.” It’s an “example of strong ties between major countries,” which is precisely what one always says right before unveiling a new, slightly menacing, jointly-developed space laser.
The Economic Exchange of Existential Dependence: Russia is generously offering more gas, while Beijing, in a reciprocal gesture of cosmic hospitality, is granting Russians visa-free travel for a year. Because what better way to foster friendship than by enabling easier transit for, presumably, resource acquisition and the occasional strategic tourist? Discussions around the “Power of Siberia-2” pipeline and expanding oil links continue, though China remains coy on committing to new long-term gas deals. One suspects they’re merely waiting to see if Russia’s vast natural gas reserves can be delivered via quantum entanglement, thus cutting out the messy middleman of, well, reality. Meanwhile, “practical cooperation” in infrastructure, energy, and technology quietly translates to “let’s build things that make us less reliant on anyone else, starting with a giant, self-sustaining AI-powered tea factory.”
Global Governance, Now with More Benevolent Overlords: The most intriguing takeaway, of course, is their shared commitment to building a “more just and reasonable global governance system.” This is widely interpreted as a polite, diplomatic euphemism for “a global order that is significantly less dominated by the U.S., and ideally, one where our respective pronouncements are automatically enshrined as cosmic law.” It’s like rewriting the rules of Monopoly mid-game, except the stakes are slightly higher than who gets Park Place.
And if that wasn’t enough to make your brain do a small, bewildered pirouette, apparently these talks were just the warm-up act for a military parade. And who’s joining this grand spectacle of synchronised might? None other than North Korean leader Kim Jong Un. Yes, the gang’s all here, ready to commemorate the end of a war by showcasing enough military hardware to start several new ones. It’s almost quaint, this continued human fascination with big, shiny, destructive things. One half expects them to conclude the parade with a giant, joint AI-powered robot performing a synchronised dance routine, set to a surprisingly jaunty tune about global stability.
So, as the world careens forward, seemingly managed by algorithms and historical revisionism, let us raise our lukewarm cups of instant coffee to the “unprecedented friendship” of those who would re-sculpt global governance. Because, as we all know, nothing says “just and reasonable” quite like a meeting of old friends, a pending gas deal, and a military parade featuring the next generation of absolutely necessary, totally peaceful, reality-altering weaponry.