From Zero to Data Hero: My Google Data Analytics Journey

Just a few short months ago, the world of data analytics felt like a vast, uncharted ocean. Now, after completing Google’s Data Analytics Professional Certificate (or at least the 12+ modules that make up the learning path – more on that later!), I feel like I’ve charted a course and am confidently navigating those waters. It’s been an intense, exhilarating, and sometimes head-scratching journey, but one I wouldn’t trade for anything.

My adventure began in October 2024, and by February 2025 (this very week), I had conquered (most of) the learning path. Conquer is the right word, because it was definitely an intense learning curve. My early-2000s junior-dev SQL skills? Yeah, they got a serious dusting off. And my forgotten Python, which was starting to resemble ancient hieroglyphics? Well, let’s just say we’re on speaking terms again.

The modules covered a huge range of topics, from the foundational “Introduction to Data Analytics on Google Cloud” and “Google Cloud Computing Foundations” to more specialized areas like “Working with Gemini Models in BigQuery,” “Creating ML Models with BigQuery ML,” and “Preparing Data for ML APIs on Google Cloud.” (See the full list at the end of this post!) Each module built upon the previous one, creating a solid foundation for understanding the entire data analytics lifecycle.

But the real stars of the show for me were BigQuery and, especially, Looker Studio. I’ve dabbled with other data visualization tools in the past (mentioning no names… cough Microsoft cough Tableau cough), but Looker Studio blew me away. It’s intuitive, powerful, and just… fun to use. Seriously, I fell in love. The ease with which you can connect to data sources and create insightful dashboards is simply unmatched. It’s like having a superpower for data storytelling!

One of the biggest “aha!” moments for me was realizing the sheer power of data insights. Mining those hidden gems from large datasets is incredibly addictive. And the fact that Google makes it so easy to access public datasets through BigQuery? Game changer. It’s like having a data goldmine at your fingertips.
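
If you want a feel for how low the barrier is, here’s a minimal sketch of querying one of those public datasets from Python. It assumes the google-cloud-bigquery client library and already-configured application-default credentials; the dataset is one of the well-known public ones Google hosts:

    # pip install google-cloud-bigquery
    # Assumes application-default credentials are set up
    # (e.g. via `gcloud auth application-default login`).
    from google.cloud import bigquery

    client = bigquery.Client()

    # One of Google's freely hosted public datasets.
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """

    # Run the query and print the five most common names.
    for row in client.query(query).result():
        print(f"{row.name}: {row.total}")

The query runs against Google’s copy of the data, so there’s nothing to download or load first – and last I checked, the monthly free tier covers a generous amount of query processing.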

This learning path has ignited a real passion within me. So much so that I’m now pursuing a Data Analysis Diploma, which I’m hoping to wrap up before June. And, because I apparently haven’t had enough learning, I’m also signing up for the Google Cloud Data Analytics Professional Certificate. I’m all in!

I have to say, the entire Google Cloud platform just feels so much more integrated and user-friendly compared to the Microsoft offerings I’ve used. Everything works together seamlessly, and the learning resources are top-notch. If you’re considering a career in data analytics, I would wholeheartedly recommend the Google path over other options.

I’m especially excited to dive deeper into the machine learning aspects. And the integration of Gemini? Genius! Having it as a code buddy has been a huge help, especially when I’m wrestling with a particularly tricky SQL query or trying to remember the correct syntax for a Python function. Seriously, it’s like having a data analytics guru by my side.

Stay tuned for future posts where I’ll be sharing more about my data analytics journey, including tips and tricks, project updates, and maybe even some data visualizations of my own!

Coursera offers the official course: https://www.coursera.org/professional-certificates/google-data-analytics – going this route earns you a recognised, formal professional certificate.

Or jump into Google Cloud Skills Boost: https://www.cloudskillsboost.google/ – get yourself a Cloud account and get friendly with Gemini.

Modules Completed:

  • Work with Gemini Models in BigQuery
  • Analyzing and Visualizing Data in Looker Studio
  • BigQuery for Data Analysts
  • Boost Productivity with Gemini in BigQuery
  • Create ML Models with BigQuery ML
  • Derive Insights from BigQuery Data
  • Developing Data Models with LookML
  • Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud
  • Introduction to Data Analytics on Google Cloud
  • Manage Data Models in Looker
  • Prepare Data for Looker Dashboards and Reports
  • Prepare Data for ML APIs on Google Cloud

So Long, and Thanks for All the Algorithms (Probably)

The Guide Mark II says, “Don’t Panic,” but when it comes to the state of Artificial Intelligence, a mild sense of existential dread might be entirely appropriate. You see, it seems we’ve built this whole AI shebang on a foundation somewhat less stable than a Vogon poetry recital.

These Large Language Models (LLMs), with their knack for mimicking human conversation, consume energy with the same reckless abandon as a Vogon poet on a bender. Training these digital behemoths requires a financial outlay that would make a small planet declare bankruptcy, and their insatiable appetite for data has led to some, shall we say, ‘creative appropriation’ from artists and writers on a scale that would make even the most unscrupulous intergalactic trader blush.

But let’s assume, for a moment, that we solve the energy crisis and appease the creative souls whose work has been unceremoniously digitised. The question remains: are these LLMs actually intelligent? Or are they just glorified autocomplete programs with a penchant for plagiarism?

Microsoft’s Copilot, for instance, boasts “thousands of skills” and “infinite possibilities.” Yet, its showcase features involve summarising emails and sprucing up PowerPoint presentations. Useful, perhaps, for those who find intergalactic travel less taxing than composing a decent memo. But revolutionary? Hardly. It’s a bit like inventing the Babel fish to order takeout.

One can’t help but wonder if we’ve been somewhat misled by the term “artificial intelligence.” It conjures images of sentient computers pondering the meaning of life, not churning out marketing copy or suggesting slightly more efficient ways to organise spreadsheets.

Perhaps, like the Babel fish, the true marvel of AI lies in its ability to translate – not languages, but the vast sea of data into something vaguely resembling human comprehension. Or maybe, just maybe, we’re still searching for the ultimate question, while the answer, like 42, remains frustratingly elusive.

In the meantime, as we navigate this brave new world of algorithms and automation, it might be wise to keep a towel handy. You never know when you might need to hitch a ride off this increasingly perplexing planet.

Comparison to Crypto Mining Nonsense:

Both LLMs and crypto mining share a striking similarity: they are incredibly resource-intensive. Just as crypto mining requires vast amounts of electricity to solve complex mathematical problems and validate transactions, training LLMs demands enormous computational power and energy consumption.

Furthermore, both have faced criticism for their environmental impact. Crypto mining has been blamed for contributing to carbon emissions and electronic waste, while LLMs raise concerns about their energy footprint and the sustainability of their development.

Another parallel lies in the questionable ethical practices surrounding both. Crypto mining has been associated with scams, fraud, and illicit activities, while LLMs have come under fire for their reliance on massive datasets often scraped from the internet without proper consent or attribution, raising concerns about copyright infringement and intellectual property theft.

In essence, both LLMs and crypto mining represent technological advancements with potentially transformative applications, but they also come with significant costs and ethical challenges that need to be addressed to ensure their responsible and sustainable development.

Wallace’s Beacon: A Monument Forged in National Pride

In the heart of the storied Scottish lands, a monument to the valor of William Wallace was conceived, its rise fueled by the rekindling of national pride. The call to build this towering tribute began in the bustling city of Glasgow, in the year 1851. Championed by the Reverend Charles Rogers and the steadfast William Burns, this noble endeavor sought to honor the memory of their nation’s hero.

Across the land, the people rallied, contributing their hard-earned coin to the cause. Even from distant shores, whispers of Wallace’s bravery reached the ears of foreign allies, including the valiant Italian leader, Giuseppe Garibaldi, who offered his support. The architect John Thomas Rochead, inspired by the grand style of the Victorian Gothic, envisioned a monument worthy of its purpose.

Upon the ancient volcanic crag of Abbey Craig, the first stone was set in 1861. The Duke of Atholl, esteemed Grand Master Mason of Scotland, bestowed this honor, his words echoing the resolve of a nation. From this very place, legend tells, Wallace himself surveyed the gathering English forces, moments before his legendary victory at Stirling Bridge.

Hewn from the earth’s own sandstone, the tower rose skyward, a testament to the enduring spirit of Scotland. Eight long years passed, each stone laid with unwavering dedication. At last, in 1869, the monument stood complete, its 67-meter peak a beacon of courage and freedom, forever etched upon the landscape.

More AI – images

Found time to play with some of the new AI platforms for generating images. There are so many, with new ones appearing every day, that I’m finding it hard to keep up – and I have no idea how you judge which are good or bad. It seems we are jumping head first down this rabbit hole without any debate or pause.

drawit.art – you basically do a sketch, choose a style (e.g. street art), and it will generate images.

I found this one particularly fun – the AI Comic Factory on huggingface.co. It works on a similar principle to the first, except you describe the image rather than sketch it, choose a “style” for it to render, and it creates a bunch of panels for you. Could you create a whole comic with it?

And inevitably there is bias in the current AI offerings which missjourney.ai is trying to counter “If you ask AI to visualize a professional, less than 20% are women. This is not ok. Visit missjourney.ai to support a gender-equal future.”

An AI alternative that creates artwork exclusively of women, with the aim of actively countering current biased image generators and ensuring we build inclusive digital realities – right from the start. MissJourney marks the start of the year-long TEDxAmsterdam Women theme: Decoding the Future.

And finally Deep Dream, which lets you upload your own image and tweak it using many different parameters – the same base image with different modifiers and styles applied.

Artificial intelligence (AI) image generation is a rapidly developing field with the potential to revolutionize the way we create and consume images. AI image generators can produce realistic images from text descriptions, and they are becoming increasingly sophisticated and capable.

One of the most advanced AI image generators currently available is Google’s Imagen. Imagen is still under development, but it can generate high-quality images that are difficult to distinguish from human-created ones. It can be used with a wide range of text prompts, including images of people, animals, landscapes, and objects.

Google has not yet announced a public release date for Imagen, but it is expected in the next few months. Once it becomes available to a wider range of users, it is likely to have a significant impact on the field of AI image generation.

Using OpenAI’s API

I enrolled in this course in May, a time when access to OpenAI was limited and its commercial model was still under development, so using the API was the most straightforward way to work with the platform. Jose Portilla’s course on Udemy brilliantly introduces how to tap into the API, harnessing the power of OpenAI to craft intelligent Python-driven applications.

The influx of AI platforms and services last summer suggests that embedding AI models into new applications has become standard practice.

OpenAI’s API ranks among the most sophisticated artificial intelligence platforms available today, offering a spectrum of capabilities, from natural language processing to image generation. Using this API, developers can craft applications capable of understanding and interacting with human language, generating coherent text, performing sentiment analysis, and much more.

The course starts with a rundown of the OpenAI API basics, including account and access key setup using Python. Following this, learners embark on ten diverse projects, which include:

  • NLP to SQL: Here, you construct a proof of concept that lets people query a cached database and fetch details without any SQL knowledge.
  • Exam Creator: This involves the automated generation of a multiple-choice quiz, complete with an answer sheet and scoring mechanism. The focus here is on honing prompt engineering skills to format text outputs efficiently.
  • Automatic Recipe Creator: Based on user-input ingredients, this tool recommends recipes, complemented with DALLE-2 generated imagery of the finished dish. This module particularly emphasizes understanding the various models as participants engage with the Completion API and Image API.
  • Automatic Blog Post Creator: This enlightening module teaches integration of the OpenAI API with a live webpage via GitHub Pages.
  • Sentiment Analysis Exercise: By sourcing posts from Reddit and employing the Completion API, students assess the sentiment of the content. Notably, many news platforms seem to block such practices, labeling them as “scraping.”
  • Auto Code Explainer: Though I now use Copilot daily, this module introduced me to the Codex model. It’s adept at crafting docstrings for Python functions, so every .py file comes back with comprehensive docstrings.
  • Translation Project: This module skims news articles in foreign languages and produces a concise English summary. A notable observation is the current model’s propensity to translate only into English. Users must also ensure they’re not infringing on site restrictions.
  • Chat-bot Fine-tuning: This pivotal tutorial unveils how one can refine existing models using specific datasets, enhancing output quality. By focusing on reducing token counts, learners gain insight into training data pricing, model utility, and cost-effectiveness. The module also underscores the rapid evolution of available models, urging students to consult OpenAI’s official documentation for the most recent updates.
  • Text Embedding: This segment was a challenge, mainly due to the intricate processes of converting text to N-dimensional vectors and understanding cosine similarity measurements (see the short sketch after this list). However, the module proficiently guides you through concepts like search, clustering, and recommendations. It even delves into the amusing phenomenon of “model hallucination” and offers strategies to counteract it via prompt engineering.
  • General Overview & The Whisper API: Concluding the course, these tutorials provide a holistic understanding of the OpenAI API and its history, along with an introduction to the Whisper API, a tool adept at converting speech to text.
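
Since text embedding was the part that stretched me most, here’s a tiny illustrative sketch of the cosine-similarity idea in plain numpy. The three-dimensional vectors are made up purely for illustration – real embeddings from the API have on the order of 1,500 dimensions:

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two vectors: 1.0 means same direction."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy stand-ins for embedding vectors (real ones are far longer).
    doc_a = [0.9, 0.1, 0.0]  # e.g. "recipe for tomato soup"
    doc_b = [0.8, 0.2, 0.1]  # e.g. "how to make gazpacho"
    doc_c = [0.0, 0.1, 0.9]  # e.g. "quarterly earnings report"

    print(cosine_similarity(doc_a, doc_b))  # high - similar topics
    print(cosine_similarity(doc_a, doc_c))  # low - unrelated topics

Search, clustering, and recommendations all boil down to this: embed everything, then rank by similarity.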

It’s noteworthy that most of the course material used the GPT-3.5 model. However, recent updates have introduced the more efficient -turbo variant. Additional information can be found in OpenAI’s official documentation.
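
For flavour, here’s roughly what a call looks like with the pre-1.0 openai Python library the course was built around (the API key and prompt are placeholders):

    # pip install openai  (the pre-1.0 library used throughout the course)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder - never hard-code real keys

    # Ask the -turbo model a question via the chat endpoint.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful data analytics tutor."},
            {"role": "user", "content": "Explain a SQL LEFT JOIN in one sentence."},
        ],
    )

    print(response["choices"][0]["message"]["content"])

Newer versions of the library have since changed this interface, so treat it as a snapshot of the course era rather than current best practice.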

The course adopts a project-centric approach, with each segment potentially forming the cornerstone of a startup idea. Given the surge in AI startups, one wonders if this course inspired some of them.

This journey unraveled the intricate “magic” and “engineering” behind AI, emphasizing the importance of prompt formulation. Participants grasp essential elements like API authentication, making API calls, and processing results. By the course’s conclusion, you’re equipped to employ the OpenAI API to develop AI-integrated solutions. Prior Python knowledge can be advantageous.

Has AI just taken my job?

The rise of artificial intelligence (AI) has been a hot topic of conversation in recent weeks. Some people believe that AI will eventually replace most jobs, while others believe that it will create new ones and endless opportunities.

One company at the forefront of the AI revolution is Spinach.io, an AI-powered platform that helps teams run more efficient meetings. The platform uses AI to transcribe meetings, generate meeting notes, and identify key decisions and actions. It integrates with Zoom, Teams, Jira, Slack, and more. You invite it to your meeting, it passively takes notes for you, and it posts them to Slack – this demo explains it better: https://youtu.be/5Z5a-KCUcRY

So, what does this mean for the future of work? 

It is hard to say for sure. However, it is clear that AI is already having an impact on the workforce. For example, AI is being used to automate tasks in customer service, manufacturing, and healthcare. This is leading to job losses in some sectors, but it is also creating new jobs in others.

In the case of Spinach.io, the platform is likely to become a valuable tool for project managers or anyone managing teams, and that is maybe a better way to look at AI… as a tool. AI has already created a large number of new jobs and even spawned whole new platform industries. For example, Spinach.io is hiring engineers, data scientists, and product managers to build and improve its platform. So there is definitely disruption coming for many industries, and human interactions will continue to change, but there are also opportunities and new experiences to be had.

So, while AI is likely to have an impact on the workforce, it is not clear that it will lead to widespread job losses. In fact, it is more likely that AI will create new jobs and opportunities if we embrace it.