March Madness: Quantum Leaps, AI Bans, and the Eternal Struggle Against Laziness (It’s a Season, Apparently)

Ah, March, my birth month. The month that’s basically a seasonal identity crisis. In the Northern Hemisphere, it’s spring! Birds are chirping, flowers are contemplating blooming. Down south? It’s autumn, leaves are falling, and pumpkin spice lattes are back on the menu. Way back in the day, the Romans were like, ‘Hey, let’s start the year now!’ Because why not? Time is a construct.

Speaking of constructs, consider quantum computing, which is basically time travel for nerds. China just dropped the Zuchongzhi 3.0, a quantum chip that’s apparently one quadrillion times faster than the best classical supercomputer – on one very specific benchmark, mind you. Yes, quadrillion. I had to Google that too. It’s basically like if your toaster could solve the meaning of life in the time it takes to burn your toast.

This chip is so fast, it made Google’s Sycamore (last month’s big deal) look like a dial-up modem. They ran a random circuit sampling benchmark, beat Google’s previous record, and everyone’s like, ‘Whoa, China’s winning the quantum race!’ Which, by the way, is a marathon, not a sprint. More like a marathon where everyone’s wearing jetpacks and occasionally tripping over their own shoelaces.

Now, while China’s busy building quantum toasters, the US is busy banning Chinese AI. DeepSeek, an AI startup, got the boot from all government devices. Apparently, they’re worried about data leaking to the Chinese Communist Party. Which, fair enough. Though I’m not sure what the difference is between data being leaked and data being outright stolen, which is what the yanks do.

DeepSeek’s AI models are apparently so good, they’re scaring everyone, including investors, who are now having panic attacks about Nvidia’s stock. Even Taiwan’s like, ‘Nope, not today, DeepSeek!’ And South Korea and Italy are hitting the pause button. It’s like a global AI cold war, but with more awkward silences and fewer nukes (hopefully).

And here’s the kicker: even the Chinese are worried! DeepSeek’s employees had to hand over their passports to prevent trade secrets from leaking. Maybe Chinese passports have an email function? It’s like a spy thriller, but with more lines of code and fewer martinis.

So, what’s the moral of this story? March is a wild month. Quantum computers are basically magic. AI is scaring everyone. And apparently, data privacy is like a hot potato, and everyone’s trying not to get burned. Also, don’t forget that time is a construct.

Oh, and if you’re feeling lazy, just remember, even quantum computers have to work hard. So get off your couch and do something productive. Or, you know, just watch cat videos. Whatever floats your boat.

The UK Workplace: Agile Illusion and the Rise of AI-Powered Efficiency

Speaking honestly, the world of work isn’t what it used to be. Remember when stability and routine were the golden tickets? When just turning up counted as doing the job. Those days are fading fast. Today, we’re navigating a landscape of constant change – technological advancements, shifting market trends, and, yes, even global pandemics. It’s a whirlwind, and the only way to stay afloat is to embrace adaptability.

We’ve seen the rise of remote work, the acceleration of digital transformation, and the increasing demand for skills that didn’t even exist two years ago. Meanwhile, an overpriced degree takes four years to complete. If you’re still clinging to outdated methods or resisting change, you’re likely to get left behind.

So let’s cut through the fluff: the UK workplace is stuck in a rut. Everyone’s talking about ‘adaptability,’ but in reality, there’s a gaping chasm between the buzzwords and actual practice. Agile? More like ‘fragile.’ We’re drowning in terminology, but the fundamental culture of British business remains stubbornly resistant to real change.

Laziness? Yes, I said it. A culture of complacency permeates far too many organisations. My recent contract was a prime example: an army of cooks, both from the consultancy and client sides, all stirring a pot that barely needed a simmer. Three React Native developers for a simple app? Four .NET developers to copy and paste a BFF (backend-for-frontend)? With a completely separate infrastructure team for a very basic integration? It was a circus of inefficiency.

While these legions of underutilised developers were busy pretending to be productive, I was building a working app using Windsurf by Codeium. And right now, Gemini is helping me create a serverless backend in Firebase. The contrast is stark, and it’s infuriating.

Here’s the truth: we’ve reached a tipping point. With the rapid advancement of AI, the traditional roles of developers are becoming increasingly redundant. I firmly believe that a skilled Business Analyst and Project Manager, armed with AI tools, are now all you need for a product build.

Imagine this: detailed requirements gathered through stakeholder interviews, translated into a prototype using AI. Employee workshops to refine the design. A final stakeholder sign-off. Then, a focus group of customers or end-users for a final review. A focused development phase, rigorous testing for non-functional requirements, and a release. Yes, there will be a month of rapid iterative re-releases as the product encounters the real world, but this is Agile in practice.

This isn’t just about efficiency; it’s about survival. The UK workplace needs a radical shake-up. We need to ditch the bloated teams and embrace the power of AI to streamline development. We need to stop paying lip service to Agile and start implementing it in a meaningful way.

The era of ‘cooks in the kitchen’ is over. It’s time for a revolution, and AI is leading the charge.

Call to Action:

Do you agree? Is the UK workplace lagging behind? Share your thoughts and experiences in the comments below. Let’s start a conversation.

From Trenches to Terminus: A Century of Warfare’s Chilling Evolution

A century. The span of a modern human lifetime, yet in the realm of warfare, it’s a chasm of unimaginable transformation. From the mud-soaked trenches of World War I to the sterile, algorithm-driven battlefields of today, the face of conflict has been irrevocably altered. In February, I spent a morning immersed in John Akomfrah’s ‘Mimesis: African Soldier’ exhibition at Glasgow’s Gallery of Modern Art, confronted by the visceral realities of a war fought with flesh and bone, a war where the majority of stories remain untold. Now, we face a future where war is waged by machines, where the human cost is both diminished and amplified in terrifying new ways.

The Echoes of WWI and Akomfrah’s “Mimesis”:

Akomfrah’s multi-screen installation is a haunting reminder of war’s human toll, especially for those whose sacrifices were systematically erased from history. The archival footage, the flowing water over forgotten faces, the montage of fragmented narratives – it all speaks to the chaos, the brutality, and the enduring trauma of conflict. WWI, with its trenches, its mustard gas, its sheer, senseless slaughter, was a war fought with rudimentary technology and an almost medieval disregard for human life. The images of African soldiers within ‘Mimesis’ force us to consider the colonial aspects of these wars, and the many who fought and died who were not given a voice. The experience left me with a profound sense of the weight of history, a history often obscured by the dominant narratives.

The Rise of the Machines:

Fast forward to today, and the battlefield is a landscape of drones, AI, and robotic dogs armed with rocket launchers. The recent Ministry of Defence trials, showcasing robot dogs defusing bombs and drones autonomously detecting threats, paint a starkly different picture. We’re told these advancements ‘minimise human exposure to danger,’ that they ‘enhance Explosive Ordnance Disposal capability.’ But what about the ethical implications? What about the dehumanisation of conflict?

These robotic dogs, these AI-driven drones, they’re not just tools; they’re symbols of a profound shift in how we wage war. China’s deployment of advanced robotic dogs, designed to ‘change the approach to military operations,’ underscores this reality. The ‘precision movements’ and ‘remote classification of threats’ touted by defence officials mask a chilling truth: we’re entering an era where machines make life-or-death decisions.

Juxtaposition and Reflection:

The stark contrast between the human-centric horrors of WWI, as depicted in Akomfrah’s work, and the cold, calculated efficiency of modern robotic warfare is deeply unsettling. Where once soldiers faced each other across no man’s land, now machines engage in silent, unseen battles. The human element, once the defining feature of war, is being systematically removed.

This isn’t just about technological advancement; it’s about a fundamental, unsettling shift in our relationship with conflict. The distance created by these technologies—the drones, the remote-controlled robots, the AI-driven targeting systems—allows us to detach, to view war as a series of data points and algorithms, almost like a high-stakes video game. In fact, some of the footage we see now, with its crisp, digital clarity and detached perspective, bears an uncanny resemblance to scenes from ‘Call of Duty.’ But while the on-screen action might feel like entertainment, the consequences – the lives lost, the communities destroyed – remain as devastatingly real as ever. The danger lies in this blurred line, where the visceral horror of war is replaced by the sterile, almost gamified experience, potentially desensitising us to the true cost of human conflict.

As we stand on the precipice of this new era, with growing global tensions, escalating trade conflicts, and the chilling spectre of nuclear weapons being openly discussed, the threat of a third world war looms larger than ever. Yet, amidst this existential dread, we seem more preoccupied with petty snipes at Trump and the fleeting triumphs of social media one-upmanship. It’s a surreal disconnect. We must ask ourselves: what does it truly mean to wage war in the age of AI, when the very fabric of our reality is being reshaped by algorithms and automation? Are we genuinely safer, or are we merely constructing new and more insidious forms of peril, where the line between virtual and real becomes dangerously blurred? Akomfrah’s art compels us to confront the ghosts of past conflicts, the human stories buried beneath the rubble of history. The robotic dogs, with their cold, mechanical efficiency, force us to confront a future where human agency is increasingly questioned. Both past and future demand that we grapple with the human cost of conflict, in all its evolving forms, while simultaneously challenging our collective capacity for distraction and denial.

From the mud-soaked trenches of World War I to the sterile, digital battlefields of today, warfare has undergone a radical transformation, a transformation that now feels less like a distant future and more like a chilling present. For forty years, we’ve joked about the Terminator, about Skynet, about the rise of the machines, dismissing it as mere science fiction. But as we witness the deployment of AI-driven robotic dogs and the increasing gamification of conflict, that once-fantastical vision suddenly feels disturbingly real. The human capacity for both creation and destruction remains a constant, but the tools at our disposal have changed dramatically. As we embrace the technological advancements that promise to reshape our world, we can no longer afford to be detached observers, scrolling through social media while global tensions escalate. We must confront the ethical dilemmas that haunt us, the stories that have been silenced, and the very real possibility that the future we once laughed about is now upon us. The future of warfare is not just about machines; it’s about the choices we make as humans, choices that will determine whether we become the masters of our technology or its victims.

Apple and Google: A Forbidden Love Story, with AI as the Matchmaker

Well, butter my biscuits and call me surprised! Apple, the company that practically invented the walled garden, has just invited Google, its long-standing frenemy, over for a playdate. And not just any playdate – an AI-powered, privacy-focused, game-changing kind of playdate.

Remember when Apple cozied up to OpenAI, and everyone assumed ChatGPT was going to be the belle of the Siri-ball? Turns out, Apple was playing the field, secretly testing both ChatGPT and Google’s Gemini AI. And guess who stole the show? Yep, Gemini. Apparently, it’s better at whispering sweet nothings into Siri’s ear, taking notes like a diligent personal assistant, and generally being the brains of the operation.

So, what’s in it for these tech titans?

Apple’s Angle:

  • Supercharged Siri: Let’s face it, Siri’s been needing a brain transplant for a while now. Gemini could be the upgrade that finally makes her a worthy contender against Alexa and Google Assistant.
  • Privacy Prowess: By keeping Gemini on-device, Apple reinforces its commitment to privacy, a major selling point for its users.
  • Strategic Power Play: This move gives Apple leverage in the AI game, potentially attracting developers eager to build for a platform with cutting-edge AI capabilities.

Google’s Gains:

  • iPhone Invasion: Millions of iPhones suddenly become potential Gemini playgrounds. That’s a massive user base for Google to tap into.
  • AI Dominance: This partnership solidifies Google’s position as a leader in the AI space, showing that even its rivals recognize the power of Gemini.
  • Data Goldmine (Maybe?): While Apple insists on on-device processing, Google might still glean valuable insights from anonymized usage patterns.

The Bigger Picture:

This unexpected alliance could shake up the entire tech landscape. Imagine a world where your iPhone understands your needs before you even ask, where your notes practically write themselves, and where privacy isn’t an afterthought but a core feature.

But let’s not get ahead of ourselves. There are still questions to be answered. How will this impact Apple’s relationship with OpenAI? Will Google play nice with Apple’s walled garden? And most importantly, will Siri finally stop misinterpreting our requests for pizza as a desire to hear the mating call of a Peruvian tree frog?

Only time will tell. But one thing’s for sure: this Apple-Google AI mashup is a plot twist no one saw coming. And it’s going to be a wild ride.

The Agile Phone Line is Cracking Up: Is it Time to Hang Up?

Ah, March 10th, 1876. A momentous date indeed, when Alexander Graham Bell first summoned Mr. Watson through the magic of the telephone. A groundbreaking invention that revolutionized communication and paved the way for countless innovations to come. But amidst our celebration of this technological milestone, let’s turn our attention to a more recent communication phenomenon: Agile.

Agile, that wondrous methodology that promised to streamline software development and banish the demons of waterfall projects, has become as ubiquitous as the telephone itself. Stand-up meetings, sprints, and scrum masters are now the lingua franca of the tech world, a symphony of buzzwords and acronyms that echo through the halls of countless software companies. But as we reflect on the legacy of the telephone and its evolution, perhaps it’s time to ask ourselves: Is Agile starting to sound a bit like a dial-up modem in an age of broadband?

Remember Skype? That once-beloved platform that connected us across continents, now destined for the digital graveyard on May 5th. Skype, like Agile, was once a revolutionary tool, but time and technology march on. Newer, shinier platforms have emerged, offering more features, better integration, and a smoother user experience. Could the same fate await Agile? With the rise of AI, machine learning, and automation, are we approaching a point where the Agile methodology, with its emphasis on human interaction and iterative development, becomes obsolete?

Perhaps the Agile zealots will scoff at such a notion, clinging to their scrum boards and burn-down charts like a security blanket. But the writing may be on the wall. As AI takes on more complex tasks and automation streamlines workflows, the need for constant human intervention and feedback loops might diminish. The Agile circus, with its daily stand-ups and endless retrospectives, could become a relic of a bygone era, a quaint reminder of a time when humans were still the dominant force in software development.

And speaking of communication, who could forget the ubiquitous “mute button” phenomenon? That awkward silence followed by a chorus of “You’re on mute!” has become a staple of virtual meetings, a testament to our collective struggle to adapt to the digital age. It’s a fitting metaphor for the challenges of communication in an Agile world, where information overload and constant interruptions can make it difficult to truly connect and collaborate.

So, as we raise a glass to Alexander Graham Bell and his telephonic triumph, let’s also take a moment to reflect on the future of Agile. Is it time to hang up on the old ways and embrace a new era of software development, one driven by AI, automation, and a more streamlined approach? Or can Agile adapt and evolve to remain relevant in this rapidly changing landscape? Only time will tell. But one thing is certain: the world of technology never stands still, and those who fail to keep pace risk being left behind, like a rotary phone in a smartphone world.

From Zero to Data Hero: My Google Data Analytics Journey

Just a few short months ago, the world of data analytics felt like a vast, uncharted ocean. Now, after completing Google’s Data Analytics Professional Certificate (or at least the 12+ modules that make up the learning path – more on that later!), I feel like I’ve charted a course and am confidently navigating those waters. It’s been an intense, exhilarating, and sometimes head-scratching journey, but one I wouldn’t trade for anything.

My adventure began in October 2024, and by February 2025 (this very week), I had conquered (most of) the learning path. Conquer is the right word, because it was definitely an intense learning curve. My early-2000s junior-dev SQL skills? Yeah, they got a serious dusting off. And my forgotten Python, which was starting to resemble ancient hieroglyphics? Well, let’s just say we’re on speaking terms again.

The modules covered a huge range of topics, from the foundational “Introduction to Data Analytics on Google Cloud” and “Google Cloud Computing Foundations” to more specialized areas like “Working with Gemini Models in BigQuery,” “Creating ML Models with BigQuery ML,” and “Preparing Data for ML APIs on Google Cloud.” (See the full list at the end of this post!) Each module built upon the previous one, creating a solid foundation for understanding the entire data analytics lifecycle.

But the real stars of the show for me were BigQuery and, especially, Looker Studio. I’ve dabbled with other data visualization tools in the past (mentioning no names… cough Microsoft cough Tableau cough), but Looker Studio blew me away. It’s intuitive, powerful, and just… fun to use. Seriously, I fell in love. The ease with which you can connect to data sources and create insightful dashboards is simply unmatched. It’s like having a superpower for data storytelling!

One of the biggest “aha!” moments for me was realizing the sheer power of data insights. Mining those hidden gems from large datasets is incredibly addictive. And the fact that Google makes it so easy to access public datasets through BigQuery? Game changer. It’s like having a data goldmine at your fingertips.
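To make that “data goldmine” a bit more concrete, the pattern is almost always the same: point a SQL aggregate query at a big table and let the engine do the heavy lifting. The sketch below uses Python’s built-in sqlite3 as a stand-in for BigQuery so it runs anywhere with no cloud account; the table and data are invented for illustration, but the group-and-rank SQL is exactly the kind of query you’d run against a BigQuery public dataset (such as the `bigquery-public-data.london_bicycles` tables) via the google-cloud-bigquery client.

```python
import sqlite3

# Tiny in-memory stand-in for a BigQuery public table.
# (Station names and durations are made up for the example.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cycle_hire (start_station TEXT, duration_sec INTEGER)")
conn.executemany(
    "INSERT INTO cycle_hire VALUES (?, ?)",
    [("Hyde Park", 1200), ("Hyde Park", 900), ("Waterloo", 600),
     ("Waterloo", 300), ("Waterloo", 450), ("Soho", 2400)],
)

# The classic aggregate-and-rank pattern: which stations see the most
# hires, and how long does an average ride last at each?
rows = conn.execute("""
    SELECT start_station,
           COUNT(*)          AS hires,
           AVG(duration_sec) AS avg_duration
    FROM cycle_hire
    GROUP BY start_station
    ORDER BY hires DESC
""").fetchall()

for station, hires, avg_duration in rows:
    print(f"{station}: {hires} hires, {avg_duration:.0f}s average")
```

Against BigQuery the only real difference is the client: you’d submit the same style of SQL with `bigquery.Client().query(...)` and iterate over the result rows, while the warehouse scans terabytes for you.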

This learning path has ignited a real passion within me. So much so that I’m now pursuing a Data Analysis Diploma, which I’m hoping to wrap up before June. And, because I apparently haven’t had enough learning, I’m also signing up for the Google Cloud Data Analytics Professional Certificate. I’m all in!

I have to say, the entire Google Cloud platform just feels so much more integrated and user-friendly compared to the Microsoft offerings I’ve used. Everything works together seamlessly, and the learning resources are top-notch. If you’re considering a career in data analytics, I would wholeheartedly recommend the Google path over other options.

I’m especially excited to dive deeper into the machine learning aspects. And the integration of Gemini? Genius! Having it as a code buddy has been a huge help, especially when I’m wrestling with a particularly tricky SQL query or trying to remember the correct syntax for a Python function. Seriously, it’s like having a data analytics guru by my side.

Stay tuned for future posts where I’ll be sharing more about my data analytics journey, including tips and tricks, project updates, and maybe even some data visualizations of my own!

Coursera offer the official course: https://www.coursera.org/professional-certificates/google-data-analytics – with this you get a recognised formal professional certificate.

Or jump into Google Cloud Skills Boost: https://www.cloudskillsboost.google/ – get yourself a Cloud account and get friendly with Gemini.

Modules Completed:

  • Work with Gemini Models in BigQuery
  • Analyzing and Visualizing Data in Looker Studio
  • BigQuery for Data Analysts
  • Boost Productivity with Gemini in BigQuery
  • Create ML Models with BigQuery ML
  • Derive Insights from BigQuery Data
  • Developing Data Models with LookML
  • Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud
  • Introduction to Data Analytics on Google Cloud
  • Manage Data Models in Looker
  • Prepare Data for Looker Dashboards and Reports
  • Prepare Data for ML APIs on Google Cloud

So Long, and Thanks for All the Algorithms (Probably)

The Guide Mark II says, “Don’t Panic,” but when it comes to the state of Artificial Intelligence, a mild sense of existential dread might be entirely appropriate. You see, it seems we’ve built this whole AI shebang on a foundation somewhat less stable than a Vogon poetry recital.

These Large Language Models (LLMs), with their knack for mimicking human conversation, consume energy with the same reckless abandon as a Vogon poet on a bender. Training these digital behemoths requires a financial outlay that would make a small planet declare bankruptcy, and their insatiable appetite for data has led to some, shall we say, ‘creative appropriation’ from artists and writers on a scale that would make even the most unscrupulous intergalactic trader blush.

But let’s assume, for a moment, that we solve the energy crisis and appease the creative souls whose work has been unceremoniously digitised. The question remains: are these LLMs actually intelligent? Or are they just glorified autocomplete programs with a penchant for plagiarism?

Microsoft’s Copilot, for instance, boasts “thousands of skills” and “infinite possibilities.” Yet, its showcase features involve summarising emails and sprucing up PowerPoint presentations. Useful, perhaps, for those who find intergalactic travel less taxing than composing a decent memo. But revolutionary? Hardly. It’s a bit like inventing the Babel fish to order takeout.

One can’t help but wonder if we’ve been somewhat misled by the term “artificial intelligence.” It conjures images of sentient computers pondering the meaning of life, not churning out marketing copy or suggesting slightly more efficient ways to organise spreadsheets.

Perhaps, like the Babel fish, the true marvel of AI lies in its ability to translate – not languages, but the vast sea of data into something vaguely resembling human comprehension. Or maybe, just maybe, we’re still searching for the ultimate question, while the answer, like 42, remains frustratingly elusive.

In the meantime, as we navigate this brave new world of algorithms and automation, it might be wise to keep a towel handy. You never know when you might need to hitch a ride off this increasingly perplexing planet.

Comparison to Crypto Mining Nonsense:

Both LLMs and crypto mining share a striking similarity: they are incredibly resource-intensive. Just as crypto mining requires vast amounts of electricity to solve complex mathematical problems and validate transactions, training LLMs demands enormous computational power and energy consumption.
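Those “complex mathematical problems” are less mysterious than they sound: proof-of-work mining is a brute-force search for a nonce whose hash falls below a difficulty target, repeated trillions of times per second across the network — which is exactly where the electricity goes. A minimal sketch (toy difficulty and made-up block data, purely for illustration):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 digest starts with
    `difficulty` hex zeros -- the core loop behind proof-of-work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# Each extra hex zero makes the search ~16x harder; real networks tune
# this so the whole planet's hardware finds one block every few minutes.
nonce, digest = mine("block with some transactions", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```

The analogy with LLM training is that both burn compute on a statistical search — mining searches for a lucky hash, training searches a loss landscape — and in both cases the energy bill scales with how hard you make the search.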

Furthermore, both have faced criticism for their environmental impact. Crypto mining has been blamed for contributing to carbon emissions and electronic waste, while LLMs raise concerns about their energy footprint and the sustainability of their development.

Another parallel lies in the questionable ethical practices surrounding both. Crypto mining has been associated with scams, fraud, and illicit activities, while LLMs have come under fire for their reliance on massive datasets often scraped from the internet without proper consent or attribution, raising concerns about copyright infringement and intellectual property theft.

In essence, both LLMs and crypto mining represent technological advancements with potentially transformative applications, but they also come with significant costs and ethical challenges that need to be addressed to ensure their responsible and sustainable development.