Apple and Google: A Forbidden Love Story, with AI as the Matchmaker

Well, butter my biscuits and call me surprised! Apple, the company that practically invented the walled garden, has just invited Google, its long-standing frenemy, over for a playdate. And not just any playdate – an AI-powered, privacy-focused, game-changing kind of playdate.

Remember when Apple cozied up to OpenAI, and everyone assumed ChatGPT was going to be the belle of the Siri-ball? Turns out, Apple was playing the field, secretly testing both ChatGPT and Google’s Gemini AI. And guess who stole the show? Yep, Gemini. Apparently, it’s better at whispering sweet nothings into Siri’s ear, taking notes like a diligent personal assistant, and generally being the brains of the operation.

So, what’s in it for these tech titans?

Apple’s Angle:

  • Supercharged Siri: Let’s face it, Siri’s been needing a brain transplant for a while now. Gemini could be the upgrade that finally makes her a worthy contender against Alexa and Google Assistant.
  • Privacy Prowess: By keeping Gemini on-device, Apple reinforces its commitment to privacy, a major selling point for its users.
  • Strategic Power Play: This move gives Apple leverage in the AI game, potentially attracting developers eager to build for a platform with cutting-edge AI capabilities.

Google’s Gains:

  • iPhone Invasion: Millions of iPhones suddenly become potential Gemini playgrounds. That’s a massive user base for Google to tap into.
  • AI Dominance: This partnership solidifies Google’s position as a leader in the AI space, showing that even its rivals recognize the power of Gemini.
  • Data Goldmine (Maybe?): While Apple insists on on-device processing, Google might still glean valuable insights from anonymized usage patterns.

The Bigger Picture:

This unexpected alliance could shake up the entire tech landscape. Imagine a world where your iPhone understands your needs before you even ask, where your notes practically write themselves, and where privacy isn’t an afterthought but a core feature.

But let’s not get ahead of ourselves. There are still questions to be answered. How will this impact Apple’s relationship with OpenAI? Will Google play nice with Apple’s walled garden? And most importantly, will Siri finally stop misinterpreting our requests for pizza as a desire to hear the mating call of a Peruvian tree frog?

Only time will tell. But one thing’s for sure: this Apple-Google AI mashup is a plot twist no one saw coming. And it’s going to be a wild ride.

The Agile Phone Line is Cracking Up: Is it Time to Hang Up?

Ah, March 10th, 1876. A momentous date indeed, when Alexander Graham Bell first summoned Mr. Watson through the magic of the telephone. A groundbreaking invention that revolutionized communication and paved the way for countless innovations to come. But amidst our celebration of this technological milestone, let’s turn our attention to a more recent communication phenomenon: Agile.

Agile, that wondrous methodology that promised to streamline software development and banish the demons of waterfall projects, has become as ubiquitous as the telephone itself. Stand-up meetings, sprints, and scrum masters are now the lingua franca of the tech world, a symphony of buzzwords and acronyms that echo through the halls of countless software companies. But as we reflect on the legacy of the telephone and its evolution, perhaps it’s time to ask ourselves: Is Agile starting to sound a bit like a dial-up modem in an age of broadband?

Remember Skype? That once-beloved platform that connected us across continents, now destined for the digital graveyard on May 5th. Skype, like Agile, was once a revolutionary tool, but time and technology march on. Newer, shinier platforms have emerged, offering more features, better integration, and a smoother user experience. Could the same fate await Agile? With the rise of AI, machine learning, and automation, are we approaching a point where the Agile methodology, with its emphasis on human interaction and iterative development, becomes obsolete?

Perhaps the Agile zealots will scoff at such a notion, clinging to their scrum boards and burn-down charts like a security blanket. But the writing may be on the wall. As AI takes on more complex tasks and automation streamlines workflows, the need for constant human intervention and feedback loops might diminish. The Agile circus, with its daily stand-ups and endless retrospectives, could become a relic of a bygone era, a quaint reminder of a time when humans were still the dominant force in software development.

And speaking of communication, who could forget the ubiquitous “mute button” phenomenon? That awkward silence followed by a chorus of “You’re on mute!” has become a staple of virtual meetings, a testament to our collective struggle to adapt to the digital age. It’s a fitting metaphor for the challenges of communication in an Agile world, where information overload and constant interruptions can make it difficult to truly connect and collaborate.

So, as we raise a glass to Alexander Graham Bell and his telephonic triumph, let’s also take a moment to reflect on the future of Agile. Is it time to hang up on the old ways and embrace a new era of software development, one driven by AI, automation, and a more streamlined approach? Or can Agile adapt and evolve to remain relevant in this rapidly changing landscape? Only time will tell. But one thing is certain: the world of technology never stands still, and those who fail to keep pace risk being left behind, like a rotary phone in a smartphone world.

Agile: My Love-Hate Relationship with Iteration

Iteration. The word itself conjures up images of spinning wheels, cyclical patterns, and that hamster in its never-ending quest for… well, whatever a hamster sees in those wheels. But “iteration” is more than just a fancy word for “doing something again and again.” It’s a fundamental concept that permeates our lives, from the mundane to the profound.

Think about your morning routine. Wake up, stumble to the bathroom, brush your teeth (hopefully), make coffee (definitely). That’s an iteration, a daily ritual repeated with minor variations. Or consider the changing seasons, the ebb and flow of tides, the endless cycle of birth, growth, decay, and renewal. Iteration is the rhythm of existence, the heartbeat of the universe.

In the world of art and creativity, iteration takes center stage. Painters rework their canvases, musicians refine their melodies, writers revise their manuscripts – all in pursuit of that elusive perfect expression. Each iteration builds upon the last, refining, reimagining, and ultimately transforming the original concept into something new and hopefully improved.

But let’s not get all misty-eyed about iteration. It can be a cruel mistress, a source of frustration, a never-ending loop of “almost, but not quite.” Think about that DIY project that seemed so simple at first but has now become a Frankensteinian monster of mismatched parts and questionable design choices. Or that recipe you’ve tried a dozen times, each attempt yielding a slightly different (disastrous) result. Iteration, in these moments, feels less like progress and more like a punishment for our hubris.

And if we stretch it into the political arena, iteration takes on a particularly cynical flavor. The UK, with its revolving door of prime ministers, its endless Brexit debates, and its uncanny ability to elect leaders who promise change but deliver more of the same, is a prime example. Each election cycle feels like an iteration of the last, a Groundhog Day of broken promises, partisan squabbles, and that nagging sense that no matter who’s in charge, nothing really changes. Even the emergence of new parties, with their fresh faces and bold manifestos, often seems to get sucked into the same iterative loop, their initial idealism slowly eroded by the realities of power and the entrenched political system. Iteration, in this context, feels less like progress and more like a depressing reminder of our collective inability to break free from the past.

And then there’s Agile. Ah, Agile. The methodology that puts iteration on a pedestal, enshrining it as the holy grail of software development. Sprints, stand-ups, retrospectives – all designed to facilitate that relentless cycle of build, measure, learn. And while the Agile evangelists wax lyrical about the beauty of iterative development, those of us in the trenches know the truth: iteration can be a messy, chaotic, and often frustrating process.

We love iteration for its ability to adapt to change, to embrace uncertainty, to deliver value incrementally. We hate it for the endless meetings, the ever-growing backlog, the constant pressure to “fail fast” (which, let’s be honest, doesn’t always feel so fast). We love it for the sense of progress, the satisfaction of seeing a product evolve. We hate it for the scope creep, the shifting priorities, the nagging feeling that we’re building the plane as we fly it.

But love it or hate it, iteration is the heart of Agile. It’s the engine that drives innovation, the fuel that powers progress. And while it may not always be pretty, it’s undeniably effective. So, embrace the iteration, my friends. Embrace the chaos. Embrace the uncertainty. And maybe, just maybe, you’ll find yourself falling in love with the process, even if it’s a slightly dysfunctional, love-hate kind of love.

From Zero to Data Hero: My Google Data Analytics Journey

Just a few short months ago, the world of data analytics felt like a vast, uncharted ocean. Now, after completing Google’s Data Analytics Professional Certificate (or at least the 12+ modules that make up the learning path – more on that later!), I feel like I’ve charted a course and am confidently navigating those waters. It’s been an intense, exhilarating, and sometimes head-scratching journey, but one I wouldn’t trade for anything.

My adventure began in October 2024, and by February 2025 (this week), I had conquered (most of) the learning path. Conquer is the right word, because it was definitely an intense learning curve. My early-2000s junior-dev SQL skills? Yeah, they got a serious dusting off. And my forgotten Python, which was starting to resemble ancient hieroglyphics? Well, let’s just say we’re on speaking terms again.

The modules covered a huge range of topics, from the foundational “Introduction to Data Analytics on Google Cloud” and “Google Cloud Computing Foundations” to more specialized areas like “Working with Gemini Models in BigQuery,” “Creating ML Models with BigQuery ML,” and “Preparing Data for ML APIs on Google Cloud.” (See the full list at the end of this post!) Each module built upon the previous one, creating a solid foundation for understanding the entire data analytics lifecycle.
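Just to give a flavour of what those BigQuery ML modules involve, here’s a minimal sketch of the kind of statement they walk you through – training a simple classification model entirely inside BigQuery, driven from Python. The dataset, table, and column names are hypothetical placeholders, and it assumes the google-cloud-bigquery client is installed and already authenticated against a project.

```python
# A minimal sketch of the kind of statement the BigQuery ML modules cover.
# `my_dataset`, `training_table`, and the column names are hypothetical
# placeholders -- swap in your own project's resources.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # assumes you are already authenticated (e.g. via gcloud)

train_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, churned
FROM `my_dataset.training_table`
"""

client.query(train_model_sql).result()  # blocks until training finishes
print("Trained model: my_dataset.churn_model")
```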

But the real stars of the show for me were BigQuery and, especially, Looker Studio. I’ve dabbled with other data visualization tools in the past (mentioning no names… cough Microsoft cough Tableau cough), but Looker Studio blew me away. It’s intuitive, powerful, and just… fun to use. Seriously, I fell in love. The ease with which you can connect to data sources and create insightful dashboards is simply unmatched. It’s like having a superpower for data storytelling!

One of the biggest “aha!” moments for me was realizing the sheer power of data insights. Mining those hidden gems from large datasets is incredibly addictive. And the fact that Google makes it so easy to access public datasets through BigQuery? Game changer. It’s like having a data goldmine at your fingertips.
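If you’ve never tried it, here’s roughly what that looks like in practice – a small example pulling the most common baby names registered in Texas from one of the free public datasets. It assumes the google-cloud-bigquery Python client and an authenticated Google Cloud project with the BigQuery API enabled.

```python
# Query a BigQuery public dataset: most common baby names registered in Texas.
# Assumes the google-cloud-bigquery client is installed and you are
# authenticated against a Google Cloud project with the BigQuery API enabled.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT name, SUM(number) AS total_people
FROM `bigquery-public-data.usa_names.usa_1910_2013`
WHERE state = 'TX'
GROUP BY name
ORDER BY total_people DESC
LIMIT 10
"""

for row in client.query(sql).result():
    print(f"{row.name}: {row.total_people}")
```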

This learning path has ignited a real passion within me. So much so that I’m now pursuing a Data Analysis Diploma, which I’m hoping to wrap up before June. And, because I apparently haven’t had enough learning, I’m also signing up for the Google Cloud Data Analytics Professional Certificate. I’m all in!

I have to say, the entire Google Cloud platform just feels so much more integrated and user-friendly compared to the Microsoft offerings I’ve used. Everything works together seamlessly, and the learning resources are top-notch. If you’re considering a career in data analytics, I would wholeheartedly recommend the Google path over other options.

I’m especially excited to dive deeper into the machine learning aspects. And the integration of Gemini? Genius! Having it as a code buddy has been a huge help, especially when I’m wrestling with a particularly tricky SQL query or trying to remember the correct syntax for a Python function. Seriously, it’s like having a data analytics guru by my side.

Stay tuned for future posts where I’ll be sharing more about my data analytics journey, including tips and tricks, project updates, and maybe even some data visualizations of my own!

Coursera do an official course: https://www.coursera.org/professional-certificates/google-data-analytics – with this one you get a recognised, formal professional certificate.

Or jump into Google Cloud Skills Boost: https://www.cloudskillsboost.google/ and get yourself a Cloud account and get friendly with Gemini.

Modules Completed:

  • Work with Gemini Models in BigQuery
  • Analyzing and Visualizing Data in Looker Studio
  • BigQuery for Data Analysts
  • Boost Productivity with Gemini in BigQuery
  • Create ML Models with BigQuery ML
  • Derive Insights from BigQuery Data
  • Developing Data Models with LookML
  • Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud
  • Introduction to Data Analytics on Google Cloud
  • Manage Data Models in Looker
  • Prepare Data for Looker Dashboards and Reports
  • Prepare Data for ML APIs on Google Cloud

So Long, and Thanks for All the Algorithms (Probably)

The Guide Mark II says, “Don’t Panic,” but when it comes to the state of Artificial Intelligence, a mild sense of existential dread might be entirely appropriate. You see, it seems we’ve built this whole AI shebang on a foundation somewhat less stable than a Vogon poetry recital.

These Large Language Models (LLMs), with their knack for mimicking human conversation, consume energy with the same reckless abandon as a Vogon poet on a bender. Training these digital behemoths requires a financial outlay that would make a small planet declare bankruptcy, and their insatiable appetite for data has led to some, shall we say, ‘creative appropriation’ from artists and writers on a scale that would make even the most unscrupulous intergalactic trader blush.

But let’s assume, for a moment, that we solve the energy crisis and appease the creative souls whose work has been unceremoniously digitised. The question remains: are these LLMs actually intelligent? Or are they just glorified autocomplete programs with a penchant for plagiarism?

Microsoft’s Copilot, for instance, boasts “thousands of skills” and “infinite possibilities.” Yet, its showcase features involve summarising emails and sprucing up PowerPoint presentations. Useful, perhaps, for those who find intergalactic travel less taxing than composing a decent memo. But revolutionary? Hardly. It’s a bit like inventing the Babel fish to order takeout.

One can’t help but wonder if we’ve been somewhat misled by the term “artificial intelligence.” It conjures images of sentient computers pondering the meaning of life, not churning out marketing copy or suggesting slightly more efficient ways to organise spreadsheets.

Perhaps, like the Babel fish, the true marvel of AI lies in its ability to translate – not languages, but the vast sea of data into something vaguely resembling human comprehension. Or maybe, just maybe, we’re still searching for the ultimate question, while the answer, like 42, remains frustratingly elusive.

In the meantime, as we navigate this brave new world of algorithms and automation, it might be wise to keep a towel handy. You never know when you might need to hitch a ride off this increasingly perplexing planet.

Comparison to Crypto Mining Nonsense:

Both LLMs and crypto mining share a striking similarity: they are incredibly resource-intensive. Just as crypto mining requires vast amounts of electricity to solve complex mathematical problems and validate transactions, training LLMs demands enormous computational power and energy consumption.

Furthermore, both have faced criticism for their environmental impact. Crypto mining has been blamed for contributing to carbon emissions and electronic waste, while LLMs raise concerns about their energy footprint and the sustainability of their development.

Another parallel lies in the questionable ethical practices surrounding both. Crypto mining has been associated with scams, fraud, and illicit activities, while LLMs have come under fire for their reliance on massive datasets often scraped from the internet without proper consent or attribution, raising concerns about copyright infringement and intellectual property theft.

In essence, both LLMs and crypto mining represent technological advancements with potentially transformative applications, but they also come with significant costs and ethical challenges that need to be addressed to ensure their responsible and sustainable development.

Wallace’s Beacon: A Monument Forged in National Pride

In the heart of the storied Scottish lands, a monument to the valor of William Wallace was conceived, its rise fueled by the rekindling of national pride. The call to build this towering tribute began in the bustling city of Glasgow, in the year 1851. Championed by the Reverend Charles Rogers and the steadfast William Burns, this noble endeavor sought to honor the memory of their nation’s hero.

Across the land, the people rallied, contributing their hard-earned coin to the cause. Even from distant shores, whispers of Wallace’s bravery reached the ears of foreign allies, including the valiant Italian leader, Giuseppe Garibaldi, who offered his support. The architect John Thomas Rochead, inspired by the grand style of the Victorian Gothic, envisioned a monument worthy of its purpose.

Upon the ancient volcanic crag of Abbey Craig, the first stone was set in 1861. The Duke of Atholl, esteemed Grand Master Mason of Scotland, bestowed this honor, his words echoing the resolve of a nation. From this very place, legend tells, Wallace himself surveyed the gathering English forces, moments before his legendary victory at Stirling Bridge.

Hewn from the earth’s own sandstone, the tower rose skyward, a testament to the enduring spirit of Scotland. Eight long years passed, each stone laid with unwavering dedication. At last, in 1869, the monument stood complete, its 67-meter peak a beacon of courage and freedom, forever etched upon the landscape.

More AI – images

Found time to play with some of the new AI platforms for generating images. There are so many, with new ones appearing every day, that I am finding it hard to keep up – and I have no idea how you judge which are good or bad. It seems we are jumping head first down this rabbit hole without any debate or pause.

drawit.art – basically you do a sketch, choose a style (street art, for example), and it will generate images.

I found this one particularly fun – the AI Comic Factory on huggingface.co – similar in principle to the first one, except you describe the image rather than sketch it, choose a “style” for it to render, and it creates a bunch of panels for you. Could you create a whole comic using it?

And inevitably there is bias in the current AI offerings, which missjourney.ai is trying to counter: “If you ask AI to visualize a professional, less than 20% are women. This is not ok. Visit missjourney.ai to support a gender-equal future.”

An AI alternative that creates artwork exclusively of women, with the aim of actively countering current biased image generators and ensuring we build inclusive digital realities – right from the start.
MissJourney marks the start of the year-long TEDxAmsterdam Women theme: Decoding the Future.

And finally Deep Dream, to which you can upload your own image and tweak it using many different parameters – the same base image with different modifiers and styles applied.

Artificial intelligence (AI) image generation is a rapidly developing field with the potential to revolutionize the way we create and consume images. AI image generators can produce realistic images from text descriptions, and they are becoming increasingly sophisticated and capable.

One of the most advanced AI image generators is Google’s Imagen. Imagen is still under development, but it can generate high-quality images that are difficult to distinguish from human-created ones. Imagen can be used to generate images from a wide range of text prompts, including images of people, animals, landscapes, and objects.

Google has not yet announced a public release date for Imagen, but it is expected in the next few months. Once it is released, it will be available to a wider range of users and is likely to have a significant impact on the field of AI image generation.

Using OpenAI’s API

I enrolled in this course in May, a time when access to OpenAI was limited and its commercial model was still under development. Hence, leveraging the API emerged as the most straightforward method to use the platform. Jose Portilla’s course on Udemy brilliantly introduces how to tap into the API, harnessing the prowess of OpenAI to craft intelligent Python-driven applications.

The influx of AI platforms and services last summer suggests that embedding AI models into new applications has become standard practice.

OpenAI’s API ranks among the most sophisticated artificial intelligence platforms today, offering a spectrum of capabilities, from natural language processing to computer vision. Using this API, developers can craft applications capable of understanding and interacting with human language, generating coherent text, performing sentiment analysis, and much more.

The course begins with a rundown of the OpenAI API basics, including account and access key setup using Python (a minimal example of such a call follows the project list below). Following this, learners embark on ten diverse projects, which include:

  • NLP to SQL: Here, you construct a POC that enables individuals to engage with a cached database and fetch details without any SQL knowledge.
  • Exam Creator: This involves the automated generation of a multiple-choice quiz, complete with an answer sheet and scoring mechanism. The focus here is on honing prompt engineering skills to format text outputs efficiently.
  • Automatic Recipe Creator: Based on user-input ingredients, this tool recommends recipes, complemented with DALL·E 2-generated imagery of the finished dish. This module particularly emphasizes understanding the various models as participants engage with the Completion API and Image API.
  • Automatic Blog Post Creator: This enlightening module teaches integration of the OpenAI API with a live webpage via GitHub Pages.
  • Sentiment Analysis Exercise: By sourcing posts from Reddit and employing the Completion API, students assess the sentiment of the content. Notably, many news platforms seem to block such practices, labeling them as “scraping.”
  • Auto Code Explainer: Though I now use Copilot daily, this module introduced me to the Codex model. It’s adept at crafting docstrings for Python functions, ensuring that every .py file comes back with comprehensive docstrings.
  • Translation Project: This module skims news from foreign languages, providing a concise English summary. A notable observation is the current model’s propensity to translate only to English. Users must also ensure they’re not infringing on site restrictions.
  • Chat-bot Fine-tuning: This pivotal tutorial unveils how one can refine existing models using specific datasets, enhancing output quality. By focusing on reducing token counts, learners gain insight into training data pricing, model utility, and cost-effectiveness. The module also underscores the rapid evolution of available models, urging students to consult OpenAI’s official documentation for the most recent updates.
  • Text Embedding: This segment was a challenge, mainly due to the intricate processes of converting text to N-dimensional vectors and understanding cosine similarity measurements (see the sketch after this list). However, the module proficiently guides through concepts like search, clustering, and recommendations. It even delves into the amusing phenomenon of “model hallucination” and offers strategies to counteract it via prompt engineering.
  • General Overview & The Whisper API: Concluding the course, these tutorials provide a holistic understanding of the OpenAI API and its history, along with an introduction to the Whisper API, a tool adept at converting speech to text.
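For anyone wondering what those “API basics” boil down to, the sketch below shows an authenticated call in the spirit of the NLP-to-SQL project. It uses today’s openai Python client (1.x) rather than the older interface the course was recorded with, the model name and prompts are purely illustrative, and it expects your key in the OPENAI_API_KEY environment variable.

```python
# A minimal, illustrative OpenAI API call (current openai>=1.0 client; the
# course itself used the older pre-1.0 interface). Model name and prompts
# are placeholders in the spirit of the NLP-to-SQL project.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You translate plain-English questions into SQL."},
        {"role": "user", "content": "Show the ten highest-spending customers."},
    ],
)

print(response.choices[0].message.content)
```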
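And because the text-embedding project was the part I wrestled with most, here is a rough sketch of the core idea from that bullet: embed two strings and compare them with cosine similarity. The model name and example strings are illustrative (the course used an earlier embedding model), and again it expects OPENAI_API_KEY to be set.

```python
# Sketch of the text-embedding idea: turn two strings into vectors and
# compare them with cosine similarity. Model name and strings are
# illustrative, not the course's exact code.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return result.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

v1 = embed("How do I reset my password?")
v2 = embed("I forgot my login credentials")
print(f"similarity: {cosine_similarity(v1, v2):.3f}")  # closer to 1.0 = more similar
```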

It’s noteworthy that most of the course material used the GPT-3.5 model. However, recent updates have introduced the more efficient gpt-3.5-turbo variant; additional information can be found in OpenAI’s model documentation.

The course adopts a project-centric approach, with each segment potentially forming the cornerstone of a startup idea. Given the surge in AI startups, one wonders if this course inspired some of them.

This journey unraveled the intricate “magic” and “engineering” behind AI, emphasizing the importance of prompt formulation. Participants grasp essential elements like API authentication, making API calls, and processing results. By the course’s conclusion, you’re equipped to employ the OpenAI API to develop AI-integrated solutions. Prior Python knowledge can be advantageous.

Has AI just taken my job?

The rise of artificial intelligence (AI) has been a hot topic of conversation in recent weeks. Some people believe that AI will eventually replace most jobs, while others believe that it will create new ones and endless opportunities.

One company that is at the forefront of the AI revolution is Spinach.io. Spinach.io is an AI-powered platform that helps teams run more efficient meetings. The platform uses AI to transcribe meetings, generate meeting notes, and identify key decisions and actions. It integrates with Zoom, Teams, Jira, Slack and more. You invite it to your meeting, it passively takes notes for you, and it spits them out to Slack – this demo explains it better: https://youtu.be/5Z5a-KCUcRY

So, what does this mean for the future of work? 

It is hard to say for sure. However, it is clear that AI is already having an impact on the workforce. For example, AI is being used to automate tasks in customer service, manufacturing, and healthcare. This is leading to job losses in some sectors, but it is also creating new jobs in others.

In the case of Spinach.io, the platform is likely to become a valuable tool for project managers or anyone managing teams, and that is maybe a better way to look at AI… as a tool. AI has already created a large number of new jobs and even spawned whole new platform businesses. For example, Spinach.io is hiring engineers, data scientists, and product managers to build and improve its platform. So there is definitely disruption coming for many industries, and human interactions will continue to change, but there are also opportunities and new experiences to be had.

So, while AI is likely to have an impact on the workforce, it is not clear that it will lead to widespread job losses. In fact, it is more likely that AI will create new jobs and opportunities if we embrace it.