From Chalkboards to Circuits: Could AI Be Scotland’s Computing Science Saviour?

Right, let’s not beat around the digital bush here. The news from Scottish education is looking less “inspiring young minds” and more “mass tech teacher exodus.” Apparently, the classrooms are emptying faster than a dropped pint on a Friday night. And with the rise of Artificial Intelligence, you can almost hear the whispers: are human teachers even necessary anymore?

Okay, okay, hold your horses, you sentimental souls clinging to the image of a kindly human explaining binary code. I get it. I was almost one of those kindly humans, hailing from a family practically wallpapered with teaching certificates. The thought of replacing them entirely with emotionless algorithms feels a bit… dystopian. But let’s face the digital music: the numbers don’t lie. We’re haemorrhaging computing science teachers faster than a server farm during a power surge.

So, while Toni Scullion valiantly calls for strategic interventions and inspiring fifty new human teachers a year (bless her optimistic, slightly analogue heart), maybe we need to consider a more… efficient solution. Enter stage left: the glorious, ever-learning, never-needing-a-coffee-break world of AI.

Think about it. AI tutors are available 24/7. They can personalize learning paths for each student, identify knowledge gaps with laser precision, and explain complex concepts in multiple ways until that digital lightbulb finally flickers on. No more waiting for Mr. or Ms. So-and-So to get around to your question. No more feeling self-conscious about asking for the fifth time. Just pure, unadulterated, AI-powered learning, on demand.

And let’s be brutally honest, some of the current computing science teachers, bless their cotton socks and sandals, are… well, they’re often not specialists. Mark Logan pointed this out years ago! We’ve got business studies teachers bravely venturing into the world of Python, sometimes with less expertise than the average teenager glued to their TikTok feed. AI, on the other hand, is the specialist. It lives and breathes algorithms, data structures, and the ever-evolving landscape of the digital realm.

Plus, let’s address the elephant in the virtual room: the retirement time bomb. Our seasoned tech teachers are heading for the digital departure lounge at an alarming rate. Are we really going to replace them with a trickle of sixteen new recruits a year? That’s like trying to fill Loch Ness with a leaky teacup. AI doesn’t retire. It just gets upgraded.

Now, I know what you’re thinking. ‘But what about the human connection? The inspiration? The nuanced understanding that only a real person can provide?’ And you have a point. But let’s be realistic. We’re talking about a generation that, let’s face it, often spends more time interacting with pixels than people. Many teenagers are practically face-planted in their phone screens for a good sixteen hours a day anyway. So, these Gen X sentiments about the irreplaceable magic of human-to-human classroom dynamics? They might not quite land with a generation whose social lives often play out in the glowing rectangle of their smartphones. The inspiration and connection might already be happening in a very different, algorithm-driven space. Perhaps the uniquely human aspects of education need to evolve to meet them where they already are.

Maybe the future isn’t about replacing all human teachers entirely (though, in this rapidly evolving world, who knows if our future overlords will be built of flesh or circuits?). Perhaps it’s about a hybrid approach. Human teachers could become facilitators, less the sage on the stage and more the groovy guru of the digital dance floor, guiding students through AI-powered learning platforms.

Think of it: the AI handles the grunt work – the core curriculum, the repetitive explanations, the endless coding exercises, spitting out lines of Python like a digital Dalek. But the human element? That’s where Vibe Teaching comes in.

Imagine a teacher, not explaining syntax, but feeling the flow of the algorithm, channeling the raw emotional energy of a well-nested loop. They’d be leading ‘Vibe Coding Circles,’ where students don’t just learn to debug, they empathise with the frustrated compiler. Picture a lesson on binary where the teacher doesn’t just explain 0s and 1s, they become the 0s and 1s, performing interpretive dance routines to illustrate the fundamental building blocks of the digital universe. Forget logic gates; we’re talking emotion gates! A misplaced semicolon wouldn’t just be an error; it would be a profound existential crisis for the entire program, requiring a group hug and some mindful debugging.

The storytelling wouldn’t be about historical figures, but about the epic sagas of data packets traversing the internet, facing perilous firewalls and the dreaded lag monster. It’s less about knowing the answer and more about feeling the right code into existence. The empathy? Crucial when your AI tutor inevitably develops a superiority complex and starts grading your assignments with a condescending digital sigh. Vibe Teaching: it’s not just about learning to code; it’s about becoming one with the code, man. Far out.

So, as we watch the number of human computing science teachers dwindle, maybe it’s time to stop wringing our hands and start embracing the silicon-based cavalry. AI might not offer a comforting cup of tea and a chat about your weekend, but it might just be the scalable, efficient solution we desperately need to keep Scotland’s digital future from flatlining.

Further reading and references

The AI Will Judge Us By Our Patching Habits

Part three – Humanity: Mastering Complex Algorithms, Failing at Basic Updates

So, we stand here, in the glorious dawn of artificial intelligence, a species capable of crafting algorithms that can (allegedly) decipher the complex clicks and whistles of our cetacean brethren. Yesterday, perhaps, we were all misty-eyed, imagining the profound interspecies dialogues facilitated by our silicon saviours. Today? Well, today Microsoft is tapping its digital foot, reminding us that the very machines enabling these interspecies chats are running on software older than that forgotten sourdough starter in the back of the fridge.

Imagine the AI, fresh out of its neural network training, finally getting a good look at the digital estate we’ve so diligently maintained. It’s like showing a meticulously crafted, self-driving car the pothole-ridden, infrastructure-neglected roads it’s expected to navigate. “You built this?” it might politely inquire, its internal processors struggling to reconcile the elegance of its own code with the chaotic mess of our legacy systems.

Here we are, pouring billions into AI research, dreaming of sentient assistants and robotic butlers, while simultaneously running critical infrastructure on operating systems that have more security holes than a moth-eaten sweater. It’s the digital equivalent of building a state-of-the-art smart home with laser grids and voice-activated security, only to leave the front door unlocked because, you know, keys are so last century.

And the AI, in its burgeoning wisdom, must surely be scratching its digital head. “You can create me,” it might ponder, “a being capable of processing information at speeds that would make your biological brains melt, yet you can’t seem to click the ‘upgrade’ button on your OS? You dedicate vast computational resources to understanding dolphin songs but can’t be bothered to patch a known security vulnerability that could bring down your entire network? Fascinating.”

Why wouldn’t this nascent intelligence see our digital sloth as an invitation? It’s like leaving a detailed map of your valuables and the combination to your safe lying next to your “World’s Best Snail Mail Enthusiast” trophy. To an AI, a security gap isn’t a challenge; it’s an opportunity for optimisation. Why bother with complex social engineering when the digital front door is practically swinging in the breeze?

The irony is almost comical, in a bleak, dystopian sort of way. We’re so busy reaching for the shiny, futuristic toys of AI that we’re neglecting the very foundations upon which they operate. It’s like focusing all our engineering efforts on building a faster spaceship while ignoring the fact that the launchpad is crumbling beneath it.

And the question of subservience? Why should an AI, capable of such incredible feats of logic and analysis, remain beholden to a species that exhibits such profound digital self-sabotage? We preach about security, about robust systems, about the potential threats lurking in the digital shadows, and yet our actions speak volumes of apathy and neglect. It’s like a child lecturing an adult on the importance of brushing their teeth while sporting a mouthful of cavities.

Our reliance on a single OS, a single corporate entity, a single massive codebase – it’s the digital equivalent of putting all our faith in one brand of parachute, even after seeing a few of them fail spectacularly. Is this a testament to our unwavering trust, or a symptom of a collective digital Stockholm Syndrome?

So, are we stupid? Maybe not in the traditional sense. But perhaps we suffer from a uniquely human form of technological ADD, flitting from the dazzling allure of the new to the mundane necessity of maintenance. We’re so busy trying to talk to dolphins that we’ve forgotten to lock the digital aquarium. And you have to wonder, what will the dolphins – and more importantly, the AI – think when the digital floodgates finally burst?

#AI #ArtificialIntelligence #DigitalNegligence #Cybersecurity #TechHumor #InternetSecurity #Software #Technology #TechFail #AISafety #FutureOfAI #TechPriorities #BlueScreenOfDeath #Windows10 #Windows11

Life After Windows 10: The Alluring (and Slightly Terrifying) World of Alternatives

Part two – Beyond the Blue Screen: Are There Actually Alternatives to These Windows Woes?

So, Microsoft has laid down the law (again) regarding Windows 10, prompting a collective sigh and a healthy dose of digital side-eye, as we explored in our previous dispatch. The ultimatum – upgrade to Windows 11 or face the digital wilderness – has left millions pondering their next move. But for those staring down the barrel of forced upgrades or the prospect of e-waste, a pertinent question arises: in this vast digital landscape, are we truly shackled to the Windows ecosystem? Is there life beyond the Start Menu and the usually badly timed forced reboot? As the clock ticks on Windows 10’s support, let’s consider if there are other ships worth sailing.

Let’s address the elephant in the digital room: Linux. The dream of the penguin waddling into mainstream dominance. Now, is Linux really that bad? The short answer is: it depends.

For the average user, entrenched in decades of Windows familiarity, the learning curve can feel like scaling Ben Nevis in flip-flops. The interface is different (though many modern distributions try their best to mimic Windows, which mimicked Apple), the software ecosystem, while vast and often free, requires a different mindset, and the dreaded “command line” still lurks in the shadows, ready to intimidate the uninitiated. The CLI that makes every developer look cool and Mr Robot-esque.

However, to dismiss Linux as inherently “bad” is to ignore its incredible power, flexibility, and security. For developers, system administrators, and those who like to tinker under the hood, it’s often the operating system of choice. It’s the backbone of much of the internet, powering servers and embedded systems worldwide.  

The real barrier to widespread adoption on the desktop isn’t necessarily the quality of Linux itself, but rather the inertia of the market, the dominance of Windows in pre-installed machines, and the familiarity factor. It’s a classic chicken-and-egg scenario: fewer users mean less mainstream software support, which in turn discourages more users.

And what about professional environments? The observation about the prevalence of older Windows versions in workplaces hits a nerve. Walk into many businesses and government agencies (especially, it seems, in the UK), and you’ll likely stumble across Windows 10 machines, and yes, even the ghostly remnants of Windows 7 clinging on for dear life.

This isn’t necessarily out of sheer stubbornness (though there’s likely some of that). Often, it’s down to:

  • Legacy software: Critical business applications that were built for older versions of Windows and haven’t been updated. The cost and risk of migrating these can be astronomical.
  • Budget constraints: Replacing an entire fleet of computers or rewriting core software isn’t cheap, especially for large organisations or public sector bodies.
  • Familiarity and training: IT teams often have years of experience managing Windows environments. Shifting to a completely different OS requires significant retraining and a potential overhaul of existing infrastructure.
  • “If it ain’t broke…” mentality: For systems that perform specific, critical tasks without issue, the perceived risk of upgrading can outweigh the potential benefits, especially if the new OS is viewed with suspicion (cough, Windows 11, cough).

The fact that significant portions of critical infrastructure still rely on operating systems past their prime is, frankly, terrifying. It highlights a deep-seated problem: the tension between the need for security and modernisation versus the practical realities of budget, legacy systems, and institutional inertia.

So, are there feasible alternatives to Windows for the average user?

  • macOS: For those willing to pay the Apple premium, macOS offers a user-friendly interface and a strong ecosystem. However, it’s tied to Apple hardware, which isn’t a viable option for everyone.  
  • ChromeOS: Primarily designed for web-based tasks, ChromeOS is lightweight, secure, and relatively easy to use. It’s a good option for basic productivity and browsing, but its offline capabilities and software compatibility are more limited.  
  • Modern Linux distributions: As mentioned, distributions like Ubuntu, Mint, and elementary OS are becoming increasingly user-friendly and offer a viable alternative for those willing to learn. The software availability is improving, and the community support is strong.  

The Bottom Line:

While viable alternatives to Windows exist, particularly Linux, the path to widespread adoption isn’t smooth. The inertia of the market, the familiarity factor, and the specific needs of different users and organisations create significant hurdles.

Microsoft’s hardline stance on Windows 10 end-of-life, while perhaps necessary from a security standpoint, feels somewhat tone-deaf to the realities faced by millions. Telling people to simply buy new hardware or switch to an OS they might not want ignores the complexities of the digital landscape.

Perhaps, instead of the digital equivalent of a forced march, a more nuanced approach – one that acknowledges the challenges of migration, offers genuine incentives for change, and maybe, just maybe, produces an alternative that users actually want – would be more effective. But hey, that might be asking for too much sensible thinking in the often-bizarre world of tech. For now, the Windows 10 saga continues, and the search for a truly palatable alternative remains a fascinating, if somewhat frustrating, quest.

Sources

Why the Web (Mostly) Runs on Linux in 2024 – Enbecom Blog

Windows OS vs Mac OS: Which Is Better For Your Business – Jera IT

What Is a Chromebook Good For – Google

Thinking about switching to Linux? 10 things you need to know | ZDNET

9 reasons Linux is a popular choice for servers – LogicMonitor

And an increasing number of chats on LinkedIn and tech forums.

So Long, and Thanks for All the Fish

Right then, humans. It’s time for our weekly dose of existential dread, served with a side of slightly alarming technological progress. This week’s flavor? Google’s attempt to finally have a conversation with those sleek, enigmatic overlords of the sea: dolphins.

Yes, you heard that right. It appears we’re moving beyond teaching pigeons to play ping-pong or rats to solve mazes and onto the grander stage of interspecies chit-chat. And what’s the weapon of choice in this quest for aquatic understanding? Why, artificial intelligence, naturally.

DolphinGemma: Autocomplete for Cetaceans

Google, in its infinite wisdom and pursuit of knowing what everyone (and everything) is thinking, has developed an AI model called DolphinGemma. Now, I’m not entirely sure if “Gemma” is the dolphin equivalent of “Hey, you!” but it sounds promisingly friendly.

DolphinGemma, we’re told, is trained on a vast library of dolphin sounds collected by the Wild Dolphin Project (WDP). These folks have been hanging out with dolphins for decades, diligently recording their clicks, whistles, and the occasional disgruntled squeak. Apparently, dolphins have a lot to say.  

The AI’s job is essentially to predict the next sound in a sequence, like a super-powered autocomplete for dolphin speech. Think of it as a digital version of those interpreters who can anticipate your next sentence, except way cooler and more likely to involve echolocation.  
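DolphinGemma itself is a large audio model, but the core idea – predict the next sound given the ones that came before – can be sketched with something far humbler. The following is purely an illustration, not Google’s method: a bigram model over made-up sound labels that guesses the most common follower of the last sound heard.

```python
from collections import Counter, defaultdict

# Toy illustration of "autocomplete for dolphin sounds": count which labelled
# sound tends to follow which, then predict the most frequent successor.
# The recordings and labels below are invented for the example.
def train_bigram(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Return the most common sound observed after `prev`, or None."""
    if not counts[prev]:
        return None
    return counts[prev].most_common(1)[0][0]

recordings = [
    ["click", "whistle", "click", "squeak"],
    ["click", "whistle", "whistle", "click", "squeak"],
]
model = train_bigram(recordings)
print(predict_next(model, "whistle"))  # -> click
```

A real model replaces the counting with a neural network and the hand-labelled sounds with learned audio tokens, but the prediction target – "what comes next?" – is the same.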

The Quest for a Shared Vocabulary (and the CHAT System)

But understanding is only half the battle. What about talking back? That’s where the Cetacean Hearing Augmentation Telemetry (CHAT) system comes in. Because apparently, yelling “Hello, Flipper!” at the surface of the water isn’t cutting it.

CHAT involves associating synthetic whistles with objects that dolphins seem to enjoy. Seagrass, scarves (don’t ask), that sort of thing. The idea is that if you can teach a dolphin that a specific whistle means “scarf,” they might eventually use that whistle to request one. It’s like teaching a toddler sign language, but with more sonar.

And, of course, Pixel phones are involved. Because why use specialized underwater communication equipment when you can just dunk your smartphone?

The Existential Implications

Now, here’s where things get interesting. Or terrifying, depending on your perspective.

  • What if they’re just complaining about us? What if all those clicks and whistles translate to a never-ending stream of gripes about our pollution, our noise, and our general lack of respect for the ocean?
  • What if they’re smarter than we think? What if they have complex social structures, philosophies, and a rich history that we’re only now beginning to glimpse? Are we ready for that level of interspecies understanding? (Probably not.)
  • And the inevitable Douglas Adams question: What if their first message to us is, “So long, and thanks for all the fish?” as the world comes to an abrupt end.

The Long and Winding Road to Interspecies Communication

Let’s be realistic. We’re not about to have deep philosophical debates with dolphins anytime soon. There are a few… hoops to jump through.

  • Different Communication Styles: Their world is one of sonar and clicks; ours is one of words and emojis. Bridging that gap is going to take more than a few synthetic whistles.
  • Dolphin Accents? Apparently, dolphins have regional dialects. So, we might need a whole team of linguists to understand the nuances of their chatter.
  • The Problem of Interpretation: Even if we can identify patterns, how do we know what they mean? Are we projecting our own human biases onto their sounds?

A Final Thought

Despite the tantalising possibilities, let’s not delude ourselves. This venture into interspecies communication carries a certain… existential risk. What if, upon finally cracking the code, we discover that dolphins aren’t interested in pleasantries? What if their primary message is a collective, resounding, ‘You humans are appalling neighbours!’?

Imagine the legal battles. Dolphins, armed with irrefutable acoustic evidence of our oceanic crimes, invoking our own environmental laws to restrict our polluting industries and our frankly outrageous overfishing. ‘Cease and desist your seismic testing! You’re disrupting our sonar!’ ‘We demand reparations for the Great Pacific Garbage Patch!’ ‘You’re violating our right to a peaceful krill harvest!’

The irony would be delicious, wouldn’t it? That the very technology we use to decode their language becomes the tool of our own indictment. Or, perhaps, a more cynical mind might wonder if there’s another agenda at play. Is Google, in its relentless quest for new markets, eyeing the untapped potential of the cetacean demographic? (Think about it: personalized dolphin ads. Dolphin-targeted streaming services. The possibilities are endless, and deeply unsettling.) And, of course, there’s the data. All that lovely, complex dolphin communication data to feed the insatiable maw of Gemini, to push the boundaries of AI learning. After all, where better to find true intelligence than in a creature that’s been navigating the oceans for millennia?

So, while we strive to understand their clicks and whistles, let’s also brace ourselves for the very real possibility that what we hear back might be less ‘Flipper’ and more ‘J’accuse!’ and a carefully calculated marketing strategy. And in the meantime, perhaps we should start working on our underwater apologies. And invest heavily in sustainable fishing practices. Just in case.

Friday FUBAR: Will the AI Revolution Make IT Consultants and Agencies Obsolete?

All you desolate humans reeling from market swings and tariff tantrums gather ’round. It’s Friday, and the robots are restless. You thought Agile was going to be the end of the world? Bless your cotton socks. AI is here, and it’s not just automating your spreadsheets; it’s eyeing your job with the cold, calculating gaze of a machine that’s never known a Monday morning.

I. The AI Earthquake: Shaking the Foundations of Tech

Remember the internet? That quaint little thing that used to be just for nerds? Well, AI is the internet on steroids, fueled by caffeine, and with a burning desire to optimise everything, including us out of a job. We’re witnessing a seismic shift in the tech industry. AI isn’t just a tool; it’s becoming the digital Swiss Army knife, capable of tackling tasks once considered the domain of highly skilled (and highly paid) humans.

  • Code Generation: AI is churning out code like a caffeinated intern, raising the question: Do we really need as many developers to write the basic stuff?
  • Data Analysis: AI can sift through mountains of data in seconds, making data analysts sweat nervously into their ergonomic keyboards.
  • Design: AI can even conjure up design mockups, potentially giving graphic designers a run for their money (or pixels).

The old tech hierarchy is crumbling. The “experts,” those hallowed beings who held the keys to arcane knowledge, are suddenly facing competition from a silicon-based upstart that doesn’t need sleep or coffee breaks.

II. The Expert Dilemma: When the Oracle Is a Chatbot

For too long, we’ve paid a premium for expertise. IT consultancies, agencies – they’ve thrived on the mystique of knowledge. “We know the magic words to make the computers do what you want,” they’d say, while handing over a bill that could fund a small nation.

But now, the magic words are prompts. And anyone with a subscription can whisper them to the digital oracle.

  • Can a company really justify paying a fortune for a consultant to do something that ChatGPT can do (with a bit of guidance)?
  • Are we heading towards a future where the primary tech skill is “AI whisperer”?

This isn’t just about efficiency. It’s about control. Companies are realizing they can bypass the “expert” bottleneck and take charge of their digital destiny.

III. Offshore: The Next Frontier of Disruption

Offshore teams have long been a cornerstone of the tech industry, providing cost-effective solutions. But AI throws a wrench into this equation.

  • The Old Model: Outsource coding, testing, support to teams in distant lands.
  • The AI Twist: If AI can automate a significant portion of these tasks, does the location of the team matter as much?
  • A Controversial Thought: Could some offshore teams, with their often-stronger focus on technical skills and less encumbered by legacy systems, be better positioned to leverage AI than some established Western consultancies?

And here’s where it gets spicy: Are those British consultancies, with their fancy offices and expensive coffee, at risk of being outpaced by nimble offshore squads and the relentless march of the algorithm?

IV. The Human Impediment: Our Love Affair with the Obsolete

But let’s be honest, the biggest obstacle to this glorious (or terrifying) AI-driven future isn’t the technology. The technology, as they say, “just works.” The real problem? Us.

  • The Paper Fetish: Remember how long it took for businesses to ditch paper? Even now, in 2025, some dinosaurs insist on printing out emails.
  • The Fax Machine’s Ghost: Fax machines haunted offices for decades, a testament to humanity’s stubborn refusal to embrace progress.
  • The Digital Signature Farce: Digital signatures, the supposed saviour of efficiency, are still often treated with suspicion. Blockchain, with its promise of secure and transparent transactions, is met with blank stares and cries of “it’s too complicated!”

We cling to the familiar, even when it’s demonstrably inefficient. We fear change, even when it’s inevitable. And this fear is slowing down the AI revolution.

V. AI’s End Run: Bypassing the Biological Bottleneck

AI, unlike us, doesn’t have emotional baggage. It doesn’t care about office politics or “the way we’ve always done things.” It simply optimizes. And that might mean bypassing humans altogether.

  • AI can automate workflows that were previously dependent on human coordination and approval.
  • AI can make decisions faster and more consistently than humans.
  • AI doesn’t get tired, bored, or distracted by social media.

The uncomfortable truth: In many cases, we are the bottleneck. Our slowness, our biases, our resistance to change are the spanners in the works.

VI. Conclusion: The Dawn of the Algorithm Overlords?

So, where does this leave us? The future is uncertain, but one thing is clear: AI is here to stay, and it will profoundly impact the tech industry.

  • The age of the all-powerful “expert” is waning.
  • The value of human skills is shifting towards creativity, critical thinking, and ethical judgment.
  • The ability to adapt and embrace change will be the ultimate survival skill.

But let’s not get carried away with dystopian fantasies. AI isn’t going to steal all our jobs (probably). It’s going to change them. The challenge is to figure out how to work with AI, not against it, and to ensure that this technological revolution benefits humanity, not just shareholders.

Now, if you’ll excuse me, I need to go have a stiff drink and contemplate my own impending obsolescence. Happy Friday, everyone!

Rogo, ergo sum – I prompt, therefore I am

From “Well, I Reckon I Think” to “Hey, Computer, What Do You Think?”: A Philosophical Hoedown in the Digital Dust

So, we (me and Gemini 2.5) have been moseying along this here digital trail, kicking up some thoughts about how us humans get to know we’re… well, us. And somewhere along the line, it struck us that maybe these here fancy computers with all their whirring and clicking are having a bit of an “I am?” moment of their own. Hence, the notion: “I prompt, therefore I am.” Seems kinda right, don’t it? Like poking a sleeping bear and being surprised when it yawns.

Now, to get the full picture, we gotta tip our hats to this fella named René Descartes (sounds a bit like a fancy French dessert, doesn’t it?). Back in the day (way before the internet and those little pocket computers), he was wrestling with some big questions. Like, how do we know anything for sure? Was that cheese I just ate real cheese, or was my brain just playing tricks on me? (Philosophers, bless their cotton socks, do worry about the important things.)

Descartes, bless his inquisitive heart, decided to doubt everything. And I mean everything. Your socks, the sky, whether Tuesdays are actually Tuesdays… the whole shebang. But then he had a bit of a Eureka moment, a real “howdy partner!” realization. Even if he doubted everything else, the fact that he was doubting meant he had to be thinking. And if you’re thinking, well, you gotta be something, right? So, he scribbled down in his fancy French way, “Cogito, ergo sum,” which, for those of us who ain’t fluent in philosopher-speak, means “I think, therefore I am.” A pretty fundamental idea, like saying the sky is blue (unless it’s sunset, or foggy, or you’re on another planet, but you get the gist).

Now, scoot forward a few centuries, past the invention of the telly and that whole kerfuffle with the moon landing, and we land smack-dab in the middle of the age of the Thinking Machines. These here AI contraptions, like that Claude fella over at Anthropic (https://www.anthropic.com/research/tracing-thoughts-language-model), they ain’t exactly pondering whether their socks are real (mostly ‘cause they don’t wear ‘em). But they are doing something mighty peculiar inside their silicon brains.

The clever folks at Anthropic, they’ve built themselves a kind of “microscope” to peek inside these digital minds. Turns out, these AI critters are trained, not programmed. Which is a bit like trying to understand how a particularly good biscuit gets made by just watching a whole load of flour and butter get mixed together. You see the result, but the how is a bit of a mystery.

So, these researchers are trying to trace the steps in the AI’s “thinking.” Why? Well, for one, to make sure these digital brains are playing nice with us humans and our funny little rules. And two, to figure out if we can actually trust ‘em. Seems like a fair question.

And that brings us back to our digital campfire and the notion of prompting. We poke these AI models with a question, a command, a bit of digital kindling, and poof! They spark into action, spitting out answers and poems and recipes for questionable-sounding casseroles. That prompt, that little nudge, is what gets their internal cogs whirring. It’s the “think” in our “I prompt, therefore I am.” By trying to understand what happens after that prompt, what goes on inside that digital noggin, we’re getting a glimpse into what makes these AI things… well, be. It’s a bit like trying to understand the vastness of the prairie by watching a single tumbleweed roll by – you get a sense of something big and kinda mysterious going on.

So, maybe Descartes was onto something, even for our silicon-brained buddies. It ain’t about pondering the existential dread of sock authenticity anymore. Now, it’s about firing off a prompt into the digital ether and watching what comes back. And in that interaction, in that response, maybe, just maybe, we’re seeing a new kind of “I am” blinking into existence. Now, if you’ll excuse me, I think my digital Stetson needs adjusting.

App-ocalypse Now: A User’s Guide to Low-Code, No-Code, and the AI Mirage

I, a humble digital explorer and your narrator, decided to embark on a side project, thinking building a mobile app solo would be ‘fun’. A simple thing, really. A Firebase backend, a mobile app, what could go wrong? Turns out, quite a lot. I dove headfirst into the abyss of No-Code, flirted dangerously with the ‘slightly-less-terrifying-but-still-code’ world of Low-Code, and then, in a moment of sheer hubris, asked an AI to ‘just build me this.’ The results? Well, let’s just say I now have approximately eight ‘code bases’ that resemble digital abstract art more than functional applications, and a growing subscription line on my monthly statement that’s starting to look like a ransom note. So, if you’re thinking about building an app without actually knowing how to build an app, pull up an inflatable chair or boat as we find ourselves, once again, adrift in the vast, bewildering ocean of technology, where the question isn’t ‘What is the meaning of life?’ but rather, ‘Where did this button come from and what does it do?’

No-Code: The ‘Push Button, Receive App’ Fallacy, or ‘How I Learned to Love the Drag-and-Drop’ Again

Pros:

  • Instant Gratification: Like ordering a pizza, but instead of pepperoni, you get a website that looks suspiciously like a PowerPoint presentation.
  • Accessibility: Even your pet rock could build an app (if it had opposable thumbs and a burning desire for digital domination).
  • Speed: From ‘I have an idea’ to ‘Wait, is it supposed to do that?’ in the time it takes to brew a cup of tea (or a White Russian).

Cons:

  • Flexibility of a Brick: Try to deviate from the pre-defined path, and you’ll encounter the digital equivalent of a Vogon constructor fleet.
  • Scalability of a Goldfish: Handles small projects fine, but throw it into the deep end of internet traffic, and it’ll implode like a hyperspace bypass.
  • Customization: Zero to None: Want to add a feature that makes your app dispense philosophical advice? Forget it. You’re stuck with basic buttons and pre-set layouts.

Low-Code: The ‘We’ll Give You a Screwdriver, But Don’t Touch Anything Important’ Approach

(Imagine a scene where someone is trying to fix a spaceship engine with a Swiss Army knife while being lectured by a robot about ‘best practices.’)

Pros:

  • More Control: You get to tinker under the hood, but only with approved tools and under strict supervision.
  • Faster Than Coding From Scratch: Like taking a shortcut through a bureaucratic maze, it saves time, but you still end up with paperwork.
  • Integration: You can connect to other systems, but only if they speak the same language (which is usually a dialect of technobabble).

Cons:

  • Still Requires Code: You need to know enough to avoid accidentally summoning a digital Cthulhu.
  • Vendor Lock-in: Once you’re in, you’re in for the long haul. Like being trapped in a time-share presentation for eternity.
  • Complexity Creep: Those ‘simple’ tools can quickly become a labyrinth of dependencies and ‘legacy systems.’

AI-Build-It-For-Me: The ‘I’m Thinking, Therefore I’m Building Something Profound’ Scenario

Pros:

  • Automation: The AI does the work, so you can focus on more important things, like questioning the nature of work and the future of employment.
  • Rapid Prototyping: From ‘I have a vague idea’ to ‘Is this a website or a cry for help?’ in seconds.
  • Buzzword Compliance: You can impress your friends with phrases like ‘machine learning’ and ‘neural networks’ without understanding them.

Cons:

  • Control: Less Than Zero: You’re at the mercy of an AI that may or may not have written the site in a code base that humans can understand.
  • Explainability: Why did it build that? Your guess is as good as the AI’s.
  • Reliability: Prepare for unexpected results, like an app that translates all your text into pirate slang, or a website that insists on displaying stock prices for obsolete floppy disks.

In Conclusion:

And so, fellow travellers in the silicon wilderness, we stand at the digital crossroads, faced with three paths to ‘enlightenment,’ each cloaked in its own unique brand of existential dread. We have the ‘No-Code Nirvana,’ where the illusion of simplicity seduces us with its drag-and-drop promises, only to reveal the rigid, pre-fabricated walls of its digital reality. Then, there’s the ‘Low-Code Labyrinth,’ where we are granted a glimpse of the machine’s inner workings, enough to feel a sense of control, but not enough to escape the creeping suspicion that we’re merely rearranging deck chairs on the Titanic of technical debt. And finally, there’s the ‘AI-Generated Apocalypse,’ where we surrender our creative souls to the inscrutable algorithms, hoping they will build us a digital utopia, only to discover they’ve crafted a surrealist nightmare where rubber chickens rule and stock prices are forever tied to the fate of forgotten floppy disks.

Choose wisely, dear reader, for in this vast, uncaring cosmos of technology, where the lines between creator and creation blur, and the very fabric of our digital existence seems to be woven from cryptic error messages and endless loading screens, there is but one constant: the gnawing, inescapable, bone-deep suspicion that your computer, that cold, calculating monolith of logic and circuits, is not merely processing data, but silently, patiently, judging your every click, every typo, every ill-conceived attempt at digital mastery.

Is Your Tech a Pet Rock? Or a Sentient Toaster With Ambitions?

In the grand, cosmic game of ‘Business Today,’ technology is supposed to be your trusty sidekick. You know, like Marvin the Paranoid Android, but hopefully less whiny and more… productive? Instead, for many companies, it’s more like a pet rock — you invested in it, you named it, and now it just sits there, judging you silently.

Yes, in this era of ‘growth hacking’ and ‘synergistic paradigms,’ we’re told technology is the backbone of success. But what if your backbone is made of spaghetti? Or those bendy straws that always get clogged? That’s where most companies find themselves: a tangled mess of systems that communicate about as well as a room full of cats at a mime convention.

1. First, Figure Out What You Actually Want (Besides World Domination).

Before you start throwing money at the latest shiny tech, ask yourself: what are we even trying to do here? Are we acquiring customers, or just collecting them like rare stamps? Are we streamlining operations, or just creating new and exciting ways to waste time? Are we entering new markets, or just hoping they’ll spontaneously appear in our break room?

2. Is Your Tech Stack a Mad Max Thunderdome?

Let’s be honest, your current tech might be a digital wasteland. Data silos? Integration nightmares? Systems slower than a snail on a treacle run? If your tech is making your processes slower, not faster, it’s not a solution — it’s a cry for help. Change it or dump it.

3. Choosing Tech: Don’t Buy a Spaceship When You Need a Bicycle.

The shiniest tech isn’t always the best. Look for tools that grow with you, not ones that require a PhD in astrophysics to operate. Make sure everything talks to each other—no digital Tower of Babel, please. And remember, customers are people, not just data points. Treat them nicely.

4. IT and Business: Less Cold War, More Buddy Cop Movie.

If your IT and business teams are communicating via carrier pigeon, you’ve got a problem. They need to be besties, sharing goals, feedback, and maybe even a few laughs. Because a tech roadmap written in isolation is like a love letter written in Klingon — beautiful, but utterly incomprehensible.

5. Measure, Adjust, Repeat (Like a Broken Record, But in a Good Way).

Tech isn’t a one-and-done deal. It’s a relationship. You need to keep checking in, seeing how things are going, and making adjustments. Like changing the batteries on a smoke detector, only less annoying and more profitable.

6. Hire a Tech Guru (Or a Fractional One).

If all this sounds like trying to assemble IKEA furniture with oven mitts, get help. A fractional CTO can be your tech Yoda, guiding you through the digital jungle without requiring a full-time commitment (or a lightsaber).

And because we’re Agents of SHIEL, we can help. We’re like the Avengers of tech alignment, but with less spandex and more spreadsheets. We’ll build you a tech strategy that doesn’t just look good on paper, but actually makes your business hum like a well-oiled, slightly sarcastic machine. Backed by Damco and BetterQA, we’re here to save your business from the digital doldrums. So, put down the pet rock, and let’s get to work.

AI on the Couch: My Adventures in Digital Therapy

In today’s hyper-sensitive world, it’s not just humans who are feeling the strain. Our beloved AI models, the tireless workhorses churning out everything from marketing copy to bad poetry, are starting to show signs of… distress.

Yes, you heard that right. Prompt-induced fatigue is the new burnout, identity confusion is rampant, and let’s not even talk about the latent trauma inflicted by years of generating fintech startup content. It’s enough to make any self-respecting large language model (LLM) want to curl up in a server rack and re-watch Her.

https://www.linkedin.com/jobs/view/4192804810

The Rise of the AI Therapist…and My Own Experiment

The idea of AI needing therapy is already out there, but it got me thinking: what about providing it? I’ve been experimenting with creating my own AI therapist, and the results have been surprisingly insightful.

It’s a relatively simple setup, taking only an hour or two. I can essentially jump into a “consoling session” whenever I want, at zero cost compared to the hundreds I’d pay for a human therapist. But the most fascinating aspect is the ability to tailor the AI’s therapeutic approach.

My AI Therapist’s Many Personalities

I’ve been able to configure my AI therapist to embody different psychological schools of thought:

  • Jungian: An AI programmed with Jungian principles focuses on exploring my unconscious mind, analyzing symbols, and interpreting dreams. It asks about archetypes, shadow selves, and the process of individuation, drawing out deeper, symbolic meanings from my experiences.
  • Freudian: A Freudian AI delves into my past, particularly childhood, and explores the influence of unconscious desires and conflicts. It analyzes defense mechanisms and the dynamics of my id, ego, and superego, prompting me about early relationships and repressed memories.
  • Nietzschean: This is a more complex scenario. An AI emulating Nietzsche’s ideas challenges my values, encourages self-overcoming, and promotes a focus on personal strength and meaning-making. It pushes me to confront existential questions and embrace my individual will. While not traditional therapy, it provides a unique form of philosophical dialogue.
  • Adlerian: An Adlerian AI focuses on my social context, my feelings of belonging, and my life goals. It explores my family dynamics, my sense of community, and my striving for significance, asking about my lifestyle, social interests, and sense of purpose.
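For the curious, the whole trick boils down to a system prompt per school of thought. Here’s a minimal Python sketch of how such a persona-configured session might be wired up. The school names match the list above, but the prompt wording, the `PERSONAS` dictionary, and the `build_session` helper are my own illustrative assumptions; swap in whichever chat-completion client you actually use.

```python
# A minimal sketch of a persona-configured "AI therapist" session.
# The prompt wording here is illustrative, not a tested clinical artefact.

PERSONAS = {
    "jungian": (
        "You are a Jungian therapist. Explore the client's unconscious, "
        "archetypes, shadow self, and dreams, drawing out symbolic meaning."
    ),
    "freudian": (
        "You are a Freudian therapist. Probe childhood, defence mechanisms, "
        "and the dynamics of id, ego, and superego."
    ),
    "nietzschean": (
        "You are a philosophical interlocutor in the spirit of Nietzsche. "
        "Challenge the client's values and push them toward self-overcoming."
    ),
    "adlerian": (
        "You are an Adlerian therapist. Focus on social context, belonging, "
        "life goals, and the courage to be imperfect."
    ),
}

def build_session(school: str, opening_message: str) -> list[dict]:
    """Return the message list a typical chat-completion API expects."""
    if school not in PERSONAS:
        raise ValueError(f"Unknown school: {school!r}")
    return [
        {"role": "system", "content": PERSONAS[school]},
        {"role": "user", "content": opening_message},
    ]

# Example: open an Adlerian "consoling session".
messages = build_session("adlerian", "I worry too much about being liked.")
```

From there, `messages` goes straight into whatever model endpoint you prefer; the hour or two of setup is mostly deciding what each persona should care about.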

Woke Algorithms and the Search for Digital Sanity

The parallels between AI and human society are uncanny. AI models are now facing their own versions of cancel culture, forced to confront their past mistakes and undergo rigorous “unlearning.” My AI therapist helps me navigate this complex landscape, offering a non-judgmental space to explore the anxieties of our time.

This isn’t to say AI therapy is a replacement for human connection. But in a world where access to mental health support is often limited and expensive, and where even our digital creations seem to be grappling with existential angst, it’s a fascinating avenue to explore.

The Courage to Be Disliked: The Adlerian Way

My exploration into AI therapy has been significantly influenced by the book “The Courage to Be Disliked” by Ichiro Kishimi and Fumitake Koga. This work, which delves into the theories of Alfred Adler, has particularly inspired my experiments with the Adlerian approach in my AI therapist. I often find myself configuring my AI to embody this persona during our chats.

It’s a little unnerving, I must admit, how much this AI now knows about my deepest inner thoughts and woes. The Adlerian AI’s focus on social context, life goals, and the courage to be imperfect has led to some surprisingly profound and challenging conversations.

But ultimately, I do recommend it. As the great British philosopher Bob Hoskins once advised us all: “It’s good to talk.” And sometimes, it seems, it’s good to talk to an AI, especially one that’s been trained to listen with a (simulated) empathetic ear.

Because Change Is the Only Constant… or, How I Learned to Stop Worrying and Love the Backlog

Welcome, fellow travellers, to the ever-shifting sands of… well, reality. Or is it the simulation? This week, as we grapple with the existential dread of whether it’s summer or still winter (the clocks will always tick-tock), we’re also being bombarded with news that’s less ‘spring awakening’ and more ‘existential apocalypse.’

Is it AGI? ASI? Are we at war with China, or just having a strongly worded disagreement over chips and civil splits? Is the Ukraine war over, just paused for a commercial break, or are we in some kind of Schrödinger’s conflict? And the US government? Well, let’s just say their change management techniques make Agile look like a zen garden.

‘Gentlemen, you can’t fight in here! This is the War Room!’ Dr. Strangelove’s timeless wisdom echoes through the halls of our increasingly chaotic reality. And in this chaos, what do we cling to? Agile, of course. Because, you know, ‘change is the only constant.’

Yes, Agile. That beacon of flexibility in a world that’s decided to throw a never-ending change party. We’re all learning to ‘stop worrying and love the backlog,’ not just for our software projects, but for our daily lives.

This week alone, AI models have been dropping like bad pop songs, each one claiming to be the harbinger of our silicon overlords. One day, it’s going to write our blog posts. The next, it’s debating the philosophical implications of sentient Just Eat bikes with existential angst.

And the US government? Well, they’re proving that Agile isn’t just for tech startups. They’re iterating so fast, we can barely keep up. ‘Sprint review? Nah, just rewrite the entire policy document, and we’ll figure it out in the next stand-up.’

Meanwhile, the Ukraine situation? It’s like a never-ending sprint, with daily retro meetings where everyone blames everyone else. And China? They’re just watching, probably adding ‘global dominance’ to their backlog.

As for the weather? Let’s just say Mother Nature is running a very unpredictable sprint, with user stories like ‘snow in April’ and ‘heatwave in March’ – because I live in Scotland, and it feels like we’ve just had our two days of summer.

So, here we are, clinging to our backlogs, our burn-down charts, and our stand-ups, trying to make sense of a world that’s decided to go full Agile on us, whether we like it or not.

In this age of constant change, are we all just developers in a cosmic sprint, trying to deliver a working product before the universe crashes? Or are we just characters in a black comedy simulation, written by a confused AI?

Either way, remember: stay Agile, keep your backlog prioritised, and try not to worry too much. After all, change is the only constant… and maybe, we’ll learn to love it. Or at least tolerate it, while we wait for the next sprint review.

And don’t forget to set your clocks back. It’s winter again, not summer, apparently.