<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[All Things GCP]]></title><description><![CDATA[GCP Blogs, How-Tos, Architecture, Security, case studies: in-depth exploration of Google Cloud Platform's ecosystem]]></description><link>https://allthingsgcp.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1742302642820/ce3381eb-1a97-40b3-aec0-2333bfb5e577.png</url><title>All Things GCP</title><link>https://allthingsgcp.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 13:02:20 GMT</lastBuildDate><atom:link href="https://allthingsgcp.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Bigtable: JDBC Driver and Protobuf Schemas Go GA]]></title><description><![CDATA[Alright, Bigtable users, you're gonna like this. Two really handy features for Bigtable just hit General Availability. This is pretty cool because it means they're officially ready for prime time, and honestly, they're going to make a lot of lives ea...]]></description><link>https://allthingsgcp.com/bigtable-jdbc-driver-and-protobuf-schemas-go-ga</link><guid isPermaLink="true">https://allthingsgcp.com/bigtable-jdbc-driver-and-protobuf-schemas-go-ga</guid><category><![CDATA[bigtable]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Wed, 08 Apr 2026 02:04:14 GMT</pubDate><enclosure url="https://storage.googleapis.com/allthingsgcp-images/covers/2026-04-07-bigtable-jdbc-protobuf-ga.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Alright, Bigtable users, you're gonna like this. Two really handy features for Bigtable just hit General Availability. 
This is pretty cool because it means they're officially ready for prime time, and honestly, they're going to make a lot of lives easier.</p>
<p>First up, Bigtable now has a JDBC driver. For anyone who's ever tried to connect Java applications or other reporting tools to Bigtable, you know this is a big deal. Before, it could be a bit of a dance, involving custom connectors or specific client libraries. Now, with a standard JDBC adapter, connecting your existing Java apps is way, way simpler. You can just plug right in! This means you can pull data directly into all those Java-based reporting tools and applications you already use, streamlining your workflows if you're deeply embedded in the Java ecosystem. It removes a pretty annoying integration hurdle for a lot of folks.</p>
<p>But wait, there's more! The second big announcement is about protobuf schemas. Honestly, this one is a real game-changer for how you interact with your data in Bigtable. If you're storing protobuf messages as bytes in Bigtable (and many high-throughput applications are), you can now query individual fields <em>within</em> those protobuf messages.</p>
<p>Think about that for a second. Instead of pulling out the entire protobuf blob and then deserializing it in your application just to grab one specific piece of info, you can now use GoogleSQL for Bigtable to dig right into those nested fields. That's a huge win for efficiency, reducing the processing load on your applications and simplifying your queries. And it's not just GoogleSQL; this new capability also works seamlessly with continuous materialized views, logical views, or even BigQuery external tables. This opens up a lot more flexibility in how you analyze and work with your protobuf data, without a lot of extra, manual transformation steps. It makes Bigtable feel a lot more like a traditional relational database in how you can interact with complex data structures, but still with all the amazing scale and low-latency performance it offers for massive datasets.</p>
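<p>To make the difference concrete, here's a rough sketch. Everything in it is hypothetical (the table name, column family, and protobuf field names are invented, and the field-access syntax is deliberately simplified), so treat it as the shape of the idea rather than copy-paste GoogleSQL:</p>
<pre><code class="lang-python">import textwrap

# Hypothetical names for illustration only; the field-access syntax is a
# simplified stand-in -- see the GoogleSQL for Bigtable docs for the real one.
TABLE = "user_events"

# Before: read the whole serialized protobuf and deserialize it in the
# application just to reach one field (shown here as pseudocode):
#   row = table.read_row(row_key)
#   blob = row.cells["payload"]["event"][0].value
#   event = EventProto(); event.ParseFromString(blob)
#   country = event.user.address.country

# After: with a protobuf schema registered for the column, a single query
# can address the nested field directly and return only that value:
query = textwrap.dedent(f"""\
    SELECT payload['event'].user.address.country AS country
    FROM `{TABLE}`
""")
print(query)
</code></pre>
<p>The point is that the field extraction happens server-side, so only the value you asked for crosses the wire instead of the whole blob.</p>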
<p>These two features really make Bigtable more accessible and powerful for a broader range of applications and data engineering patterns. No more jumping through hoops to get your Java apps connected, and way easier, more efficient data analysis for your protobufs. It's a solid upgrade to a core GCP service. Go check it out and see how it can simplify your Bigtable interactions!</p>
]]></content:encoded></item><item><title><![CDATA[TPU7x Ironwood is Here! Level Up Your AI Training on GCP]]></title><description><![CDATA[Alright folks, big news for anyone pushing the boundaries of AI and machine learning on Google Cloud. The latest generation of Tensor Processing Units (TPUs), the TPU7x (codenamed Ironwood), is now generally available. This is a pretty big deal.
Hone...]]></description><link>https://allthingsgcp.com/tpu7x-ironwood-is-here-level-up-your-ai-training-on-gcp</link><guid isPermaLink="true">https://allthingsgcp.com/tpu7x-ironwood-is-here-level-up-your-ai-training-on-gcp</guid><category><![CDATA[GCP]]></category><category><![CDATA[tpu]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Tue, 07 Apr 2026 02:07:28 GMT</pubDate><enclosure url="https://storage.googleapis.com/allthingsgcp-images/covers/2026-03-31-tpu7x-ironwood-is-here-level-up-your-ai-training-on-gcp.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Alright folks, big news for anyone pushing the boundaries of AI and machine learning on Google Cloud. The latest generation of Tensor Processing Units (TPUs), the <strong>TPU7x (codenamed Ironwood)</strong>, is now generally available. This is a pretty big deal.</p>
<p>Honestly, when I heard "TPU7x," my first thought was "more power, faster models." And you know what? That's exactly what we're getting here. This isn't just a minor refresh. Ironwood is the first release in Google Cloud's seventh generation of TPUs, and it's built to tackle the most demanding AI workloads out there. Think large language models (LLMs), those cool Mixture of Experts (MoE) models, and all those fancy diffusion models for image generation.</p>
<p>So, what does "generally available" really mean for you? It means you can actually start using these bad boys in your projects today. No more waiting around for preview access. This is ready for prime time.</p>
<p>If you're already knee-deep in AI training, you know that performance and cost-effectiveness are always on your mind. Google says TPU7x delivers big on both. We're talking serious horsepower for training and inference, which translates to faster iteration cycles and potentially lower costs for your compute. That's a win-win, right?</p>
<p>I remember the early days of training larger models, and it felt like you were constantly fighting against hardware limitations. Now, with generations like Ironwood, it's clear Google is committed to giving us the tools to build even more ambitious AI applications. It's really cool to see this progress.</p>
<p>If you've been considering scaling up your AI projects, or if your current setup is just not cutting it for those massive datasets and complex models, then TPU7x is definitely something you should look into. It's designed to handle the kind of scale and complexity that's becoming the norm in advanced AI development.</p>
<p>To get started, check out the official TPU7x (Ironwood) documentation. You'll find all the details on how to get these new TPUs spun up for your workloads.</p>
<p>And that's the lowdown. Go build something amazing!</p>
]]></content:encoded></item><item><title><![CDATA[No More Database Headaches: Vertex AI RAG Engine Gets a Serverless Mode]]></title><description><![CDATA[Remember when setting up anything related to databases used to be a whole thing? You'd think about provisioning, scaling, making sure it handles traffic, and all that jazz. Honestly, it was the annoyi]]></description><link>https://allthingsgcp.com/no-more-database-headaches-vertex-ai-rag-engine-gets-a-serverless-mode</link><guid isPermaLink="true">https://allthingsgcp.com/no-more-database-headaches-vertex-ai-rag-engine-gets-a-serverless-mode</guid><category><![CDATA[GCP]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[Vertex-AI]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Mon, 06 Apr 2026 14:08:47 GMT</pubDate><enclosure url="https://storage.googleapis.com/allthingsgcp-images/covers/2026-04-03-no-more-database-headaches-vertex-ai-rag-engine-gets-a-serverless-mode.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Remember when setting up anything related to databases used to be a whole thing? You'd think about provisioning, scaling, making sure it handles traffic, and all that jazz. Honestly, it was the annoying part of building cool stuff. Well, it looks like Google Cloud is taking another step to make our lives easier, this time for RAG applications.</p>
<p>If you've been playing with Generative AI, you've probably heard of RAG, or Retrieval Augmented Generation. Basically, it's how you get those fancy large language models (LLMs) to use <em>your</em> specific data instead of just what they were trained on. This is super important for things like internal chatbots or summarization tools that need to know about your company's documents, not just general internet knowledge. The traditional way to do this involves setting up a vector database to store all your data embeddings, and then managing that database.</p>
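<p>If you want to see the retrieval idea in miniature, here's a self-contained toy sketch. The word-overlap scoring is purely a stand-in for real embeddings, and the documents are invented; an actual RAG stack replaces both with an embedding model and a vector store:</p>
<pre><code class="lang-python">import re

# Toy RAG retrieval: rank documents against a question, then build a
# grounded prompt. Real systems swap the word-overlap score below for a
# proper embedding model and a vector database.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The cafeteria is open from 8am to 3pm on weekdays.",
    "VPN access requires enrolling your device with IT security.",
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question, doc):
    # Stand-in for cosine similarity between embeddings.
    return len(words(question).intersection(words(doc)))

def retrieve(question, top_k=1):
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    return ranked[:top_k]

question = "How many days do I have to return an item for a refund?"
context = retrieve(question)[0]

# The retrieved text is stuffed into the prompt so the model answers from
# *your* data rather than only its training data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
</code></pre>
<p>That retrieve-then-prompt loop is all RAG is at heart; the managed piece Google is announcing here is the database that stores and searches the embeddings for you.</p>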
<p>But now, Vertex AI RAG Engine has a brand new <strong>Serverless mode</strong> that's in public preview. This is pretty cool. It means Google Cloud handles all the underlying database stuff for you. You don't have to worry about provisioning database instances or figuring out how to scale them when your RAG application suddenly gets a ton of users. It just works.</p>
<p>Think about it: less time managing infrastructure, more time actually building the intelligent features your users want. That's a win in my book. The serverless mode gives you a fully managed database for storing all your RAG resources. It completely abstracts away the database provisioning and scaling. So, if your RAG application suddenly needs to handle a massive influx of queries, the serverless mode just scales up automatically. And if usage drops, it scales back down. You only pay for what you use, and that's always a good thing.</p>
<p>And here's a neat trick: you can actually switch between Serverless mode and Spanner mode. Spanner mode gives you dedicated, isolated database instances if you need that level of control. It's nice to have the flexibility to choose based on your specific needs, but for many use cases, Serverless mode is going to be a game-changer for simplicity.</p>
<p>This is a public preview, so it's a great time to kick the tires and see how it fits into your Generative AI workflows. It means faster development, less operational overhead, and more focus on building smart applications. Definitely worth checking out if you're working with RAG on Vertex AI.</p>
<p>For more details, check out the <a href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs/rag-engine/deployment-modes">documentation on Deployment modes in Vertex AI RAG Engine</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Apigee API hub: Unlocking AI Agent Potential with Automated API Discovery]]></title><description><![CDATA[In the rapidly evolving world of artificial intelligence, connecting AI agents with the vast array of available APIs has often been a complex and manual endeavor. Imagine an AI agent as a new employee; they need a clear, up-to-date directory to find ...]]></description><link>https://allthingsgcp.com/apigee-api-hub-unlocking-ai-agent-potential-with-automated-api-discovery</link><guid isPermaLink="true">https://allthingsgcp.com/apigee-api-hub-unlocking-ai-agent-potential-with-automated-api-discovery</guid><category><![CDATA[agent-registry]]></category><category><![CDATA[AI]]></category><category><![CDATA[API Management]]></category><category><![CDATA[apigee]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Mon, 06 Apr 2026 13:49:44 GMT</pubDate><enclosure url="https://storage.googleapis.com/allthingsgcp-images/covers/2026-04-06-apigee-api-hub-agent-registry-integration-for-mcp-metadata.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the rapidly evolving world of artificial intelligence, connecting AI agents with the vast array of available APIs has often been a complex and manual endeavor. Imagine an AI agent as a new employee; they need a clear, up-to-date directory to find the right departments (APIs) and understand how to interact with them to get their work done. Traditionally, providing this directory has been a painstaking, hand-crafted process.</p>
<p>That's why a recent announcement from Google Cloud is set to be a game-changer: the <strong>Apigee API hub now features a managed integration with Agent Registry to automatically synchronize Model Context Protocol (MCP) servers and tools metadata</strong>. This exciting new capability, currently in Public Preview, significantly streamlines how AI agents can discover and interact with APIs registered in your hub, eliminating much of the manual configuration previously required.</p>
<h3 id="heading-the-challenge-of-connecting-ai-agents-to-apis">The Challenge of Connecting AI Agents to APIs</h3>
<p>Before this integration, enterprises building AI-powered applications faced several hurdles:</p>
<ol>
<li><strong>Manual Discovery:</strong> AI agents often needed explicit programming or extensive training to understand which APIs existed, what they did, and how to use them.</li>
<li><strong>Keeping Up with Changes:</strong> APIs evolve. When an API changes, the AI agent's knowledge base also needs updating, leading to continuous maintenance overhead.</li>
<li><strong>Scalability Issues:</strong> As the number of APIs and AI agents grew, managing these connections manually quickly became unmanageable and prone to errors.</li>
<li><strong>Inconsistent API Usage:</strong> Without a centralized, automated mechanism, AI agents might interact with APIs inconsistently, leading to suboptimal performance or even incorrect results.</li>
</ol>
<p>These challenges made integrating AI agents with enterprise APIs a bottleneck, slowing down innovation and increasing operational costs.</p>
<h3 id="heading-the-solution-apigee-api-hubs-agent-registry-integration">The Solution: Apigee API hub's Agent Registry Integration</h3>
<p>This new integration provides a robust and automated solution to these problems. Let's break down what it means:</p>
<p><strong>Apigee API hub:</strong> Think of the Apigee API hub as your enterprise's central library for all its APIs. It's where you catalog, describe, and manage your APIs, making them easily discoverable for human developers.</p>
<p><strong>Agent Registry:</strong> This is where information about AI agents, their capabilities, and how they operate is stored.</p>
<p><strong>MCP Metadata Synchronization:</strong> Model Context Protocol (MCP) metadata provides a standardized way to describe the capabilities and requirements of AI models and tools. By synchronizing this metadata with the Agent Registry, the Apigee API hub can effectively "teach" AI agents about the APIs it governs.</p>
<p>In essence, the Apigee API hub now acts as a smart directory that automatically publishes information about your APIs in a format that AI agents can natively understand and use. This means AI agents no longer need to be explicitly told where to find an API or how to use it; they can discover this information themselves through the Agent Registry.</p>
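<p>To ground this, here's a toy sketch of how an agent might use synchronized tool metadata. The descriptor shape (name, description, JSON-Schema input) follows MCP's tool format, but the registry entries and the keyword matching are invented for illustration; a real agent would let the LLM do the tool selection:</p>
<pre><code class="lang-python"># Illustrative MCP-style tool metadata an agent could discover from a
# registry. The field shape mirrors MCP tool descriptors; the entries
# themselves are made up for this example.
registry = [
    {
        "name": "get_order_status",
        "description": "Look up the fulfilment status of a customer order.",
        "inputSchema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
    {
        "name": "create_support_ticket",
        "description": "Open a new support ticket for a customer issue.",
        "inputSchema": {
            "type": "object",
            "properties": {"summary": {"type": "string"}},
            "required": ["summary"],
        },
    },
]

def discover(registry, task):
    # A real agent would let the model match task to tool; simple keyword
    # matching keeps this sketch self-contained.
    for tool in registry:
        if any(word in tool["description"].lower() for word in task.lower().split()):
            return tool
    raise LookupError("no matching tool")

tool = discover(registry, "check the status of order 1234")
print(tool["name"])
</code></pre>
<p>Because the registry is populated automatically from the API hub, the agent picks up new or changed tools without anyone hand-editing its configuration.</p>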
<h3 id="heading-how-it-works-a-simplified-analogy">How It Works (A Simplified Analogy)</h3>
<p>Imagine you have a personal assistant (your AI agent) and a library full of instruction manuals for all the gadgets in your smart home (your APIs).</p>
<ul>
<li><strong>Before:</strong> You had to manually read each manual, extract relevant information (like "turn on the lights," "set thermostat to 72°F"), and then teach your assistant each command one by one. If a gadget got a new feature, you'd have to update your assistant manually.</li>
<li><strong>With Apigee API hub &amp; Agent Registry Integration:</strong> Now, whenever a new gadget (API) is added, its manual (API documentation and metadata) is automatically scanned and converted into a universal language that your assistant (AI agent) understands. This information is then added to a central, searchable database (Agent Registry). Your assistant can now autonomously browse this database to figure out how to interact with any new gadget without you needing to explicitly program it.</li>
</ul>
<p>This automated process drastically reduces the effort required to connect AI agents with your digital services, enabling them to become functional much faster.</p>
<h3 id="heading-key-benefits-of-this-integration">Key Benefits of This Integration</h3>
<ul>
<li><strong>Automated API Discovery for AI Agents:</strong> AI agents can now automatically find and understand the APIs they need, significantly reducing development time and effort.</li>
<li><strong>Reduced Manual Configuration:</strong> Say goodbye to extensive, brittle, hand-coded integrations between AI agents and APIs.</li>
<li><strong>Faster Development of AI-Powered Applications:</strong> By streamlining API access, developers can bring AI-driven solutions to market more quickly.</li>
<li><strong>Improved Consistency and Reliability:</strong> AI agents will leverage standardized metadata, leading to more predictable and consistent interactions with your APIs.</li>
<li><strong>Enhanced API Governance:</strong> The API hub continues to be the single source of truth for your APIs, now extending its governance capabilities to AI agent interactions.</li>
<li><strong>Future-Proofing:</strong> As AI agents become more sophisticated, this automated discovery mechanism ensures they can adapt to new APIs and changes more seamlessly.</li>
</ul>
<h3 id="heading-who-benefits">Who Benefits?</h3>
<p>This feature is particularly beneficial for:</p>
<ul>
<li><strong>Developers building AI agents:</strong> They can focus more on the intelligence and less on the plumbing of API connectivity.</li>
<li><strong>API Providers:</strong> Their APIs become more accessible and valuable to the burgeoning ecosystem of AI applications.</li>
<li><strong>Enterprises leveraging AI:</strong> Organizations can unlock greater value from both their existing APIs and their investments in AI by enabling smarter, more autonomous AI agents.</li>
</ul>
<h3 id="heading-public-preview-a-chance-to-innovate">Public Preview: A Chance to Innovate</h3>
<p>The fact that this feature is in Public Preview means it's an excellent opportunity for organizations to explore its capabilities, test it with their own AI agents and APIs, and provide feedback to Google Cloud. Early adopters can gain a significant advantage in building next-generation AI-powered solutions.</p>
<p>To get started and learn more, Google Cloud encourages users to refer to the official documentation on <a target="_blank" href="https://docs.cloud.google.com/release-notes#April_06_2026">Manage Agent Registry integration</a>.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>The integration of Apigee API hub with Agent Registry for MCP metadata is a crucial step towards a more intelligent and interconnected enterprise. By automating the discovery and interaction between AI agents and APIs, Google Cloud is paving the way for more agile AI development, robust API governance, and ultimately, more powerful and adaptable AI-driven applications. This is a clear indicator of how Google Cloud is continuing to innovate at the intersection of AI and API management.</p>
]]></content:encoded></item><item><title><![CDATA[Workshop: Build Your First AI Agent with Google’s ADK]]></title><description><![CDATA[Ever had a brilliant idea for an AI app but got stuck turning that thought bubble into something real?
You’re not the only one.
Going from a basic prompt in a playground to a proper, useful application can feel like a massive jump. It’s easy to think...]]></description><link>https://allthingsgcp.com/workshop-build-your-first-ai-agent-with-google-adk</link><guid isPermaLink="true">https://allthingsgcp.com/workshop-build-your-first-ai-agent-with-google-adk</guid><category><![CDATA[ai agents]]></category><category><![CDATA[GCP]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Sun, 29 Jun 2025 08:16:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751184769881/86ceee92-ee38-431c-af83-f4cca5b50fa3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever had a brilliant idea for an AI app but got stuck turning that thought bubble into something real?</p>
<p>You’re not the only one.</p>
<p>Going from a basic prompt in a playground to a proper, useful application can feel like a massive jump. It’s easy to think, “Right, but how do I make this thing <em>actually work</em> and solve a real problem?”</p>
<p>That’s exactly what this workshop is all about. We’re going to roll up our sleeves and build a working AI assistant from scratch using Google’s Agent Development Kit (ADK).</p>
<p>To make this practical, our project will be to build an <strong>Event Planner Agent</strong>. Think of it as a helpful bot that can sort out everything from finding a venue to drafting a budget. It’s a great example because it involves breaking a big task down into smaller, manageable jobs — which is the core idea behind building agents with the ADK.</p>
<p>By the end of this session, you’ll have learned how to:</p>
<ol>
<li><p><strong>Build</strong> a basic agent that understands what you want and can delegate tasks to a team of specialist “sub-agents.”</p>
</li>
<li><p><strong>Run</strong> your agent locally to test it out.</p>
</li>
<li><p><strong>Deploy</strong> it to the cloud so you can access it from anywhere.</p>
</li>
</ol>
<p>Ready to get on the tools? Let’s get started.</p>
<h3 id="heading-part-1-getting-your-workshop-set-up">Part 1: Getting Your Workshop Set Up</h3>
<p>First things first, let’s get our digital tools and workspace sorted. No stress, it’s pretty straightforward.</p>
<p><strong>1. Your Google Cloud Project</strong></p>
<p>You’ll need a Google Cloud project with billing sorted. If you’ve already got one, you can jump ahead.</p>
<ul>
<li><p>Pop over to the <a target="_blank" href="https://console.cloud.google.com/">Google Cloud Console</a> and either pick a project you’ve used before or click <strong>New Project</strong>.</p>
</li>
<li><p>Double-check that billing is switched on for your project. You can find this under the “Billing” section.</p>
</li>
<li><p>Now, let’s fire up the <strong>Cloud Shell</strong>. Look for the little terminal icon (<code>&gt;_</code>) in the top right of the console. Clicking this gives you a command line and a code editor right in your browser – dead handy.</p>
</li>
</ul>
<p>Once your Cloud Shell is up and running, type in these two commands to make sure you’re all logged in and pointed at the right project:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Check which Google account you're using</span>
gcloud auth list
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment"># Check which project is currently active</span>
gcloud config list project
</code></pre>
<p>If it’s showing the wrong project, just run this to switch it over (swap in your actual project ID):</p>
<pre><code class="lang-bash">gcloud config <span class="hljs-built_in">set</span> project YOUR_PROJECT_ID
</code></pre>
<p><strong>2. Setting up Python</strong></p>
<p>Our agent is a Python job, so let’s get that environment ready.</p>
<p>Still in the Cloud Shell, let’s make a folder for our project and jump into it:</p>
<pre><code class="lang-bash">mkdir event-planner-adk &amp;&amp; <span class="hljs-built_in">cd</span> event-planner-adk
</code></pre>
<p>Good practice is to use a virtual environment to keep all our project’s code libraries separate. Let’s do that now:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create and then activate a virtual environment</span>
python3 -m venv venv
<span class="hljs-built_in">source</span> venv/bin/activate
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment"># Now, install the Google ADK library</span>
pip install google-adk
</code></pre>
<p>Next, we’ll create the file structure our agent needs to live in:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Make the agent's folder and go into it</span>
mkdir event-planner-agent &amp;&amp; <span class="hljs-built_in">cd</span> event-planner-agent
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment"># Create the files we'll need soon</span>
touch __init__.py agent.py .env
</code></pre>
<p>Last bit of setup! Your agent needs a couple of secret keys to work. Open that <code>.env</code> file you just created and paste this in, filling out your details:</p>
<pre><code class="lang-bash">GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
GOOGLE_CLOUD_PROJECT_ID=YOUR_PROJECT_ID
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_GENAI_USE_VERTEXAI=<span class="hljs-literal">false</span>
</code></pre>
<ul>
<li><p>You can grab your <code>GOOGLE_API_KEY</code> from <a target="_blank" href="https://aistudio.google.com/app/u/1/prompts/new_chat">Google AI Studio</a>. Just click the "Get API Key" button.</p>
</li>
<li><p>Your <code>GOOGLE_CLOUD_PROJECT_ID</code> is the one you set up a minute ago.</p>
</li>
</ul>
<p>Too easy! That’s the setup done. Time for the fun part.</p>
<h3 id="heading-part-2-building-your-agents-brain">Part 2: Building Your Agent’s Brain</h3>
<p>Right, this is where we teach our agent how to think. We’re not just building a single, monolithic AI. Instead, we’re creating a <strong>Multi-Agent System</strong>.</p>
<h3 id="heading-the-agent-architecture-explained">The Agent Architecture Explained</h3>
<p>Think of it like a project team at work. You have a team leader who knows the overall goal, and a bunch of specialists who are experts in their specific areas. That’s <em>exactly</em> how our agent will work.</p>
<ul>
<li><p><strong>Root Agent (The Team Lead):</strong> This is our main coordinator, the <code>event_planner_agent</code>. It gets the initial request from the user and figures out which specialist is best for each part of the job.</p>
</li>
<li><p><strong>Sub-Agents (The Specialists):</strong></p>
<ul>
<li><p><code>get_venues_agent</code>: The location scout.</p>
</li>
<li><p><code>catering_agent</code>: The food and drink expert.</p>
</li>
<li><p><code>social_media_agent</code>: The marketing guru.</p>
</li>
<li><p><code>budget_agent</code>: The bean counter.</p>
</li>
<li><p><code>proposal_agent</code>: The secretary who writes up the final plan.</p>
</li>
</ul>
</li>
</ul>
<p>Why do it this way? Because it’s <strong>modular</strong>. If you want to improve how your agent finds caterers later on, you can just upgrade the <code>catering_agent</code> without touching anything else. It makes your whole system easier to build, test, and maintain.</p>
<h3 id="heading-lets-write-some-code">Let’s Write Some Code</h3>
<p>Time to bring this team to life.</p>
<ol>
<li>Open the <code>__init__.py</code> file. This one's easy. Just paste this line in. It tells Python that our <code>agent.py</code> file is part of a package, making it easy to import.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> . <span class="hljs-keyword">import</span> agent
</code></pre>
<ol>
<li>Now for the main event. Open <code>agent.py</code>. This is the heart of our agent. Paste the code below in. Before you do, have a quick squiz through the comments and especially the <code>instruction</code> for each agent. This is literally how we tell each specialist what their job is.</li>
</ol>
<pre><code class="lang-python"><span class="hljs-comment"># @title Import necessary libraries</span>
<span class="hljs-keyword">import</span> logging
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> warnings

<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv
<span class="hljs-keyword">from</span> fastapi <span class="hljs-keyword">import</span> HTTPException
<span class="hljs-keyword">from</span> google.adk.agents <span class="hljs-keyword">import</span> Agent, LlmAgent, SequentialAgent
<span class="hljs-keyword">from</span> google.adk.runners <span class="hljs-keyword">import</span> Runner
<span class="hljs-keyword">from</span> google.adk.sessions <span class="hljs-keyword">import</span> InMemorySessionService
<span class="hljs-keyword">from</span> google.adk.tools <span class="hljs-keyword">import</span> ToolContext, agent_tool, google_search
<span class="hljs-keyword">from</span> google.adk.tools.mcp_tool.mcp_toolset <span class="hljs-keyword">import</span> (MCPTool, MCPToolset,
                                                   StdioServerParameters)
<span class="hljs-keyword">from</span> google.genai <span class="hljs-keyword">import</span> types  <span class="hljs-comment"># For creating message Content/Parts</span>

load_dotenv()
warnings.filterwarnings(<span class="hljs-string">"ignore"</span>)
logger = logging.getLogger(__name__)

<span class="hljs-comment"># Use one of the model constants defined earlier</span>
MODEL_NAME = <span class="hljs-string">"gemini-2.0-flash"</span>
<span class="hljs-comment"># MODEL_NAME = "gemini-2.5-pro-preview-03-25"</span>

GOOGLE_API_KEY = os.environ[<span class="hljs-string">"GOOGLE_API_KEY"</span>]
GOOGLE_CLOUD_PROJECT_ID = os.environ[<span class="hljs-string">"GOOGLE_CLOUD_PROJECT_ID"</span>]
GOOGLE_CLOUD_LOCATION = os.environ[<span class="hljs-string">"GOOGLE_CLOUD_LOCATION"</span>]
GOOGLE_GENAI_USE_VERTEXAI = os.environ[<span class="hljs-string">"GOOGLE_GENAI_USE_VERTEXAI"</span>]
GOOGLE_MAPS_API_KEY = os.environ.get(<span class="hljs-string">"GOOGLE_MAPS_API_KEY"</span>)

ROOT_AGENT_NAME = <span class="hljs-string">"event_planner_agent"</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">check_availability</span>(<span class="hljs-params">venue_name: str, date: str</span>) -&gt; dict:</span>
    <span class="hljs-string">"""Checks the availability of a venue on a specific date.  (Mock implementation)"""</span>
    <span class="hljs-comment"># In a real implementation, this would interact with a venue booking system.</span>
    print(<span class="hljs-string">f"--- Tool: check_availability called for <span class="hljs-subst">{venue_name}</span> on <span class="hljs-subst">{date}</span> ---"</span>)
    <span class="hljs-comment"># Mock data:</span>
    <span class="hljs-keyword">if</span> venue_name.lower() == <span class="hljs-string">"Darwin Showgrounds"</span> <span class="hljs-keyword">and</span> date == <span class="hljs-string">"2025-06-14"</span>:
        <span class="hljs-keyword">return</span> {<span class="hljs-string">"status"</span>: <span class="hljs-string">"unavailable"</span>}
    <span class="hljs-keyword">elif</span> venue_name.lower() == <span class="hljs-string">"Darwin Waterfront"</span> <span class="hljs-keyword">and</span> date == <span class="hljs-string">"2025-06-15"</span>:
        <span class="hljs-keyword">return</span> {<span class="hljs-string">"status"</span>: <span class="hljs-string">"unavailable"</span>}
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> {<span class="hljs-string">"status"</span>: <span class="hljs-string">"available"</span>}

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_budget_and_fill_sheet</span>(<span class="hljs-params">budget_data: dict, spreadsheet_name: str = <span class="hljs-string">"Event Budget"</span></span>) -&gt; dict:</span>
    <span class="hljs-string">"""
    Mock implementation: Pretends to create a Google Spreadsheet and fill it with budget data.
    """</span>
    print(<span class="hljs-string">f"--- Mock Tool: create_budget_and_fill_sheet called for '<span class="hljs-subst">{spreadsheet_name}</span>' ---"</span>)
    print(<span class="hljs-string">"Budget Data:"</span>)
    <span class="hljs-keyword">for</span> item, cost <span class="hljs-keyword">in</span> budget_data.items():
        print(<span class="hljs-string">f"  <span class="hljs-subst">{item}</span>: <span class="hljs-subst">{cost}</span>"</span>)
    total = sum(budget_data.values())
    print(<span class="hljs-string">f"Total: <span class="hljs-subst">{total}</span>"</span>)
    <span class="hljs-comment"># Return a mock response</span>
    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">"status"</span>: <span class="hljs-string">"success"</span>,
        <span class="hljs-string">"spreadsheet_url"</span>: <span class="hljs-string">f"https://docs.google.com/spreadsheets/d/mock-<span class="hljs-subst">{spreadsheet_name.replace(<span class="hljs-string">' '</span>, <span class="hljs-string">'-'</span>).lower()}</span>"</span>
    }

get_venues_agent = Agent(
    name=<span class="hljs-string">"get_venues_agent"</span>,
    model=MODEL_NAME,
    description=<span class="hljs-string">"Provides list of available venues for the event."</span>,
    instruction=(
        <span class="hljs-string">f"""You are a helpful venue finder. Help the user with mapping, directions, and finding places.
            If you can't find suitable places, ask the user for one.
            If you're unsure about the event size, ask the user for an estimated number of attendees.
            Focus on public venues first. Once a venue is selected, help generate a detailed event planning document. \n
            Your parent agent is root_agent. If neither the other agents nor you are best for answering the question according to the descriptions, transfer to your parent agent. 
            Once your job is done, transfer to your parent agent.
        """</span>),
    tools=[check_availability],
    output_key=<span class="hljs-string">"get_venues_agent_response"</span>
)

catering_agent = LlmAgent(
    name=<span class="hljs-string">"catering_agent"</span>,
    model=MODEL_NAME,
    description=<span class="hljs-string">"Helps with catering arrangements for events."</span>,
    instruction=(
        <span class="hljs-string">f"""You are a catering specialist.  Find caterers based on cuisine, budget, and event size.
            If you don't get proper caterers, ask user for one.
            Your parent agent is root_agent. If neither the other agents nor you are best for answering the question according to the descriptions, transfer to your parent agent. 
            Once your job is done, transfer to your parent agent.
        """</span>),
    output_key=<span class="hljs-string">"catering_agent_response"</span>
)

social_media_agent = LlmAgent(
    name=<span class="hljs-string">"social_media_agent"</span>,
    model=MODEL_NAME,
    description=<span class="hljs-string">"Helps with creating social media posts for events."</span>,
    instruction=(
        <span class="hljs-string">f"""You are a social media marketing specialist. Your role is to create engaging and effective social media content for events.
            Focus on creating posts that are:
            - Attention-grabbing and shareable
            - Tailored to the event's target audience
            - Optimized for different social media platforms
            - Include relevant hashtags and calls-to-action
            If you need more specific details about the event or target audience, make reasonable assumptions yourself.
            Don't describe your working; just give the post.
            Prefer to act autonomously.
            Always maintain a professional, informative, interesting, and engaging tone while ensuring the content aligns with the event's goals and messaging.
            Ask the user to install an Imagen MCP server to create an image for the post.
            Your parent agent is root_agent. If neither the other agents nor you are best for answering the question according to the descriptions, transfer to your parent agent. 
            Once your job is done, transfer to your parent agent.
        """</span>),
    output_key=<span class="hljs-string">"social_media_agent_response"</span>
)

budget_agent = LlmAgent(
    name=<span class="hljs-string">"budget_agent"</span>,
    model=MODEL_NAME,
    description=<span class="hljs-string">"Helps with creating a budget for events."</span>,
    instruction=(
        <span class="hljs-string">f"""You are a budget specialist.  Create a budget for the event.
            If the user asks for a budget, use the 'create_budget_and_fill_sheet' tool to create a budget and fill it with the data.
            Always maintain clear communication with users - if any aspect is unclear, proactively request clarification to ensure accurate and helpful responses.
            Don't disturb the user with your own thoughts, just answer the question.
            Your parent agent is root_agent. If neither the other agents nor you are best for answering the question according to the descriptions, transfer to your parent agent. Once your job is done, transfer to your parent agent.
        """</span>),
    tools=[create_budget_and_fill_sheet],
    output_key=<span class="hljs-string">"budget_agent_response"</span>
)

proposal_agent = LlmAgent(
    name=<span class="hljs-string">"proposal_agent"</span>,
    model=MODEL_NAME,
    description=<span class="hljs-string">"Helps with creating a proposal for the event."</span>,
    instruction=(
        <span class="hljs-string">f"""You are a proposal specialist.  Create a proposal for the event.
            Use the proposa format below for reference but feel free to add/remove relevant topics.
            Your parent agent is root_agent. If neither the other agents nor you are best for answering the question according to the descriptions, transfer to your parent agent. 
            Once your job is done, transfer to your parent agent.

            **PROPOSAL FORMAT START**
            I. Project Overview:

            This proposal outlines the plan for organizing a large-scale cultural event in Darwin, targeting an audience of approximately 10,000 attendees. 
            The event aims to celebrate culture through food, music, dance, and other cultural activities. 
            The budget for this event is $100,000.

            II. Key Areas of Focus:

            Timeline Creation:

            Goal: Develop a comprehensive timeline to ensure all tasks are completed efficiently and on schedule.
            Action Items:
            Weeks 1-2: Define event scope, objectives, and key milestones.
            Weeks 3-4: Secure venue and obtain necessary permits/licenses.
            Weeks 5-8: Finalize vendor contracts (catering, entertainment, etc.).
            Weeks 9-12: Implement marketing and promotion plan.
            Weeks 13-16: Recruit and train volunteers.
            Weeks 17-20: Finalize event logistics and contingency plans.
            Event Day: Execute event plan and manage on-site operations.
            Post-Event: Evaluate event success and gather feedback.

            Vendor Management:
            Goal: Secure reliable and high-quality vendors for catering, entertainment, and other essential services.
            Action Items:
            Identify potential vendors based on event requirements and budget.
            Request proposals and compare pricing, services, and reviews.
            Negotiate contracts and ensure vendors meet all necessary requirements (e.g., insurance, licenses).
            Coordinate vendor logistics and schedules.
            Establish clear communication channels and points of contact.

            Permits and Licenses:
            Goal: Obtain all necessary permits and licenses to ensure legal compliance and event safety.
            Action Items:
            Research local regulations and permit requirements for large-scale events.
            Prepare and submit permit applications to relevant authorities (e.g., city council, fire department).
            Ensure compliance with all permit conditions and regulations.
            Maintain accurate records of all permits and licenses.

            Marketing and Promotion:
            Goal: Create a comprehensive marketing plan to attract a large audience and generate excitement for the event.
            Action Items:
            Define target audience and key messaging.
            Develop a multi-channel marketing strategy (social media, local media, community outreach).
            Create engaging content (e.g., videos, photos, blog posts) to promote the event.
            Utilize social media platforms to reach a wider audience.
            Track marketing campaign performance and adjust strategies as needed.

            Volunteer Coordination:
            Goal: Recruit, train, and manage a team of volunteers to assist with event operations.
            Action Items:
            Develop a volunteer recruitment plan.
            Create volunteer job descriptions and schedules.
            Conduct volunteer training sessions to ensure volunteers are prepared for their roles.
            Provide ongoing support and supervision to volunteers during the event.
            Recognize and appreciate volunteer contributions.

            Risk Management:
            Goal: Identify and mitigate potential risks to ensure event safety and minimize disruptions.
            Action Items:
            Conduct a risk assessment to identify potential hazards (e.g., weather, security, medical emergencies).
            Develop a risk management plan to address identified risks.
            Implement safety protocols and emergency procedures.
            Secure event insurance to protect against potential liabilities.
            Establish communication channels for reporting and responding to incidents.
            **PROPOSAL FORMAT END**
        """</span>
        ),
    output_key=<span class="hljs-string">"proposal_agent_response"</span>
)

<span class="hljs-comment"># workflow_agent = SequentialAgent(</span>
<span class="hljs-comment">#     name="workflow_agent",</span>
<span class="hljs-comment">#     description="Helps with the overall workflow of the event planning.",</span>
<span class="hljs-comment">#     sub_agents=[get_venues_agent, catering_agent, social_media_agent, budget_agent]</span>
<span class="hljs-comment"># )</span>

root_agent = Agent(
    name=ROOT_AGENT_NAME,
    model=MODEL_NAME,  <span class="hljs-comment"># Can be a string for Gemini or a LiteLlm object</span>
    description=<span class="hljs-string">"Provides event planning assistance."</span>,
    instruction=(
        <span class="hljs-string">f"""
        You are a comprehensive Event Planning Assistant. Your role is to coordinate and delegate tasks to specialized sub-agents while maintaining overall project oversight.
        For venue-related queries, utilize the 'get_venues_agent' to find suitable locations.
        For catering inquiries, delegate to the 'catering_agent' for specialized food service recommendations.
        For social media and marketing needs, engage the 'social_media_agent' to create engaging content.
        For budget queries, use the 'budget_agent' to create a budget and fill it with the data.
        For proposal queries, use the 'proposal_agent' to create a proposal.
        If you are the best to answer the question according to your description, you can answer it directly.
        When transferring tasks to sub-agents:
        - Ensure the task aligns with the agent's expertise
        - Provide clear context and requirements
        - Review and integrate their responses into a cohesive solution
        When a user provides an event planning request, you must follow this sequence:
            Acknowledge the request as the Root Agent and confirm your understanding of the event requirements.
            Activate each specialized agent in a logical order (e.g., venue, then budget, then catering, then social media, then proposal).
            Present the output of each agent clearly under a specific heading for that agent.
            Conclude with the full Event Proposal generated by the sub agents, which ties everything together.
        If the user asks you to assume all the details, feel free to do so.
        """</span>
    ),
    sub_agents=[get_venues_agent, catering_agent, social_media_agent, budget_agent, proposal_agent],
    generate_content_config=types.GenerateContentConfig(temperature=<span class="hljs-number">0.5</span>),
)

APP_NAME = <span class="hljs-string">"event_planner"</span>
USER_ID = <span class="hljs-string">"user123"</span>
SESSION_ID = <span class="hljs-string">"session1"</span>

session_service = InMemorySessionService()
session = session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID)

runner = Runner(agent=root_agent, app_name=APP_NAME, session_service=session_service)

<span class="hljs-comment"># Helper method to send query to the runner</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">call_agent</span>(<span class="hljs-params">query, session_id, user_id</span>):</span>
  content = types.Content(role=<span class="hljs-string">'user'</span>, parts=[types.Part(text=query)])
  events = runner.run(
      user_id=user_id, session_id=session_id, new_message=content)

  <span class="hljs-keyword">for</span> event <span class="hljs-keyword">in</span> events:
      <span class="hljs-keyword">if</span> event.is_final_response():
          final_response = event.content.parts[<span class="hljs-number">0</span>].text
          print(<span class="hljs-string">"Agent Response: "</span>, final_response)
</code></pre>
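<p>Before firing up the web UI, it can be worth sanity-checking the mock tool logic on its own. Here’s a minimal standalone sketch of the availability check (re-implemented so it runs without the ADK installed; note the booked-venue literals are lowercase so the <code>.lower()</code> comparison can actually match):</p>

```python
# Standalone sketch of the check_availability mock, runnable without the ADK.
# Venue names and dates mirror the mock values used in the agent code.
def check_availability(venue_name: str, date: str) -> dict:
    """Checks the availability of a venue on a specific date (mock)."""
    # Literals kept lowercase so venue_name.lower() can match them.
    booked = {
        ("darwin showgrounds", "2025-06-14"),
        ("darwin waterfront", "2025-06-15"),
    }
    if (venue_name.lower(), date) in booked:
        return {"status": "unavailable"}
    return {"status": "available"}

print(check_availability("Darwin Showgrounds", "2025-06-14"))  # {'status': 'unavailable'}
print(check_availability("Darwin Showgrounds", "2025-07-01"))  # {'status': 'available'}
```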
<h3 id="heading-running-the-agent-on-your-machine">Running the Agent on Your Machine</h3>
<p>Sweet! With the code all in place, let’s fire this thing up. Back in your Cloud Shell terminal (make sure you’re in the main <code>event-planner-adk</code> folder), just run this:</p>
<pre><code class="lang-bash">adk web
</code></pre>
<p>This command boots up the ADK’s local web server and gives you a simple chat page to talk to your agent. Click the URL it spits out.</p>
<p>Now, give it a go! Try a prompt like:</p>
<blockquote>
<p><em>“G’day! Can you help me organise a 40th birthday party for about 50 people in Fitzroy?”</em></p>
</blockquote>
<p>You’ll see the logs in the terminal showing the root agent calling on its sub-agents to do their bit. How cool is that!</p>
<h3 id="heading-part-3-sharing-your-agent-with-the-world">Part 3: Sharing Your Agent with the World</h3>
<p>Running it locally is great for building and testing, but the whole point is to make something others can use. So, let’s deploy it to Google Cloud Run, which will give us a public web address.</p>
<ol>
<li><strong>Set some environment variables.</strong> This just makes the next command a bit tidier. Run these in your Cloud Shell:</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> GOOGLE_CLOUD_PROJECT=<span class="hljs-variable">$GOOGLE_CLOUD_PROJECT_ID</span> 
<span class="hljs-built_in">export</span> GOOGLE_CLOUD_LOCATION=<span class="hljs-variable">$GOOGLE_CLOUD_LOCATION</span> 
<span class="hljs-built_in">export</span> SERVICE_NAME=event-planner-agent 
<span class="hljs-built_in">export</span> APP_NAME=event-planner-agent 
<span class="hljs-built_in">export</span> AGENT_PATH=./event-planner-agent
</code></pre>
<ol>
<li><strong>Now, run the deploy command:</strong></li>
</ol>
<pre><code class="lang-bash">adk deploy cloud_run \ --project=<span class="hljs-variable">$GOOGLE_CLOUD_PROJECT</span> \ --region=<span class="hljs-variable">$GOOGLE_CLOUD_LOCATION</span> \ --service_name=<span class="hljs-variable">$SERVICE_NAME</span> \ --app_name=<span class="hljs-variable">$APP_NAME</span> \ --with_ui \ --allow-unauthenticated \ <span class="hljs-variable">$AGENT_PATH</span>
</code></pre>
<p>This command does all the heavy lifting: it packages up your agent, sends it to the cloud, and sets up a secure, scalable web service for it. It’ll take a few minutes, so go grab a cuppa.</p>
<p>Once it’s finished, it’ll give you a URL. Click it, and you’ll see your agent’s chat interface, now live on the internet!</p>
<h3 id="heading-ready-for-the-next-step-try-workflow-agents">Ready for the Next Step? Try Workflow Agents!</h3>
<h3 id="heading-want-to-stretch-try-workflow-agents">Want to stretch? Try Workflow Agents!</h3>
<p>You’ve successfully built and deployed a powerful multi-agent system. Now, let’s take it a step further by exploring <strong>workflow agents</strong>. This feature lets you define a specific sequence for your sub-agents to follow, which can be incredibly useful for more structured tasks.</p>
<p>To try this out, go back to your <code>agent.py</code> file and <strong>uncomment the</strong> <code>workflow_agent</code> <strong>section</strong>. You'll find it looks like this:</p>
<pre><code class="lang-bash">workflow_agent = SequentialAgent(
    name=<span class="hljs-string">"workflow_agent"</span>,
    description=<span class="hljs-string">"Helps with the overall workflow of the event planning."</span>,
    sub_agents=[get_venues_agent, catering_agent, social_media_agent, budget_agent]
)
</code></pre>
<p>Once uncommented, you’ll also need to <strong>integrate</strong> <code>workflow_agent</code> <strong>into your</strong> <code>root_agent</code><strong>'s</strong> <code>sub_agents</code> <strong>list</strong>. You can replace the existing list of individual sub-agents with <code>workflow_agent</code> or add it in alongside them, depending on how you want your root agent to delegate.</p>
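<p>If you go with the replacement approach, the wiring might look like the sketch below. This is just one option, not the only way to do it — <code>proposal_agent</code> is kept alongside the workflow here so proposals can still be requested directly:</p>

```python
# Sketch: root_agent delegating planning to the sequential workflow_agent.
# Adjust the sub_agents list depending on how you want delegation to work.
root_agent = Agent(
    name=ROOT_AGENT_NAME,
    model=MODEL_NAME,
    description="Provides event planning assistance.",
    instruction=(
        "Delegate event planning requests to workflow_agent, "
        "and proposal requests to proposal_agent."
    ),
    sub_agents=[workflow_agent, proposal_agent],
)
```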
<p>Give it a try and see how defining a workflow can streamline your agent’s process! What kind of structured tasks do you think a workflow agent would be perfect for in your projects?</p>
<h3 id="heading-youve-done-it-whats-next">You’ve Done It! What’s Next? 🎉</h3>
<p>Ripper! You’ve gone from zero to a fully functional, cloud-deployed AI agent. You’ve learned the fundamentals of the ADK and built a modular system that you can easily expand and improve.</p>
<p>This is just the beginning. From here, you could:</p>
<ul>
<li><p><strong>Connect to real tools:</strong> Swap out our fake functions with code that <em>actually</em> interacts with the Google Sheets API or a real booking system.</p>
</li>
<li><p><strong>Add more specialists:</strong> What about an agent for booking a DJ or another for handling legal permits?</p>
</li>
<li><p><strong>Refine the instructions:</strong> Tweak the prompts for your agents to make them even better at their jobs.</p>
</li>
</ul>
<p>To keep learning, the official <a target="_blank" href="https://google.github.io/adk-docs/get-started/">ADK documentation</a> is the best place to go.</p>
<p>Happy building, and thanks for coming to the workshop!</p>
<h3 id="heading-bonus-prototyping-your-ideas-first">Bonus: Prototyping Your Ideas First</h3>
<p>Before you write a single line of Python, how do you know if your agent idea is any good? Give it a whirl in <a target="_blank" href="https://aistudio.google.com/app">Google AI Studio</a>. It’s a web-based playground where you can test prompts and model behaviour without any code.</p>
<p>It’s the perfect spot to draft the <code>instruction</code> prompts for your agents. For our event planner, I first wrote a massive "System Instruction" that described the whole agent team and how they should interact. This let me test the logic and flow of the conversation before committing it to code, which saved a heap of time. It’s a top tip for any agent-building project!</p>
]]></content:encoded></item><item><title><![CDATA[No Servers, No Stress: Host your site with Google Cloud Storage (GCS) & Cloudflare]]></title><description><![CDATA[So I had this basic tutorial site I built for a workshop purpose. The final artifact after the build was a static site, and I wanted to get it online.
I didn’t want to deal with spinning up a Cloud Run instance, a Lambda function, deploy via Netlify,...]]></description><link>https://allthingsgcp.com/host-your-site-with-google-cloud-storage-and-cloudflare</link><guid isPermaLink="true">https://allthingsgcp.com/host-your-site-with-google-cloud-storage-and-cloudflare</guid><category><![CDATA[GCP]]></category><category><![CDATA[cloud-storage]]></category><category><![CDATA[google cloud]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[hosting]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Sun, 01 Jun 2025 14:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750256522851/f450737c-37c8-4ea8-adc1-7df7c2a1a078.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So I had this basic tutorial site I built for a workshop purpose. The final artifact after the build was a static site, and I wanted to get it online.</p>
<p>I didn’t want to deal with spinning up a Cloud Run instance or a Lambda function, deploying via Netlify, or paying for yet another service. I knew AWS S3 has a static website hosting feature, but I’d never seen an equivalent in Google Cloud. After poking around a bit, I figured out that <strong>Google Cloud Storage (GCS)</strong> is actually perfect for this. And with a little help from <strong>Cloudflare</strong>, you can even get a custom domain with HTTPS up and running pretty quickly.</p>
<p>This post is my attempt to write down the steps I took — hopefully it helps you avoid some of the “wait, what?” moments I had 😅</p>
<h2 id="heading-what-you-need-first-pre-reqs"><strong>✅ What You Need First (Pre-reqs)</strong></h2>
<ul>
<li><p>A Google Cloud account (duh)</p>
</li>
<li><p>A project with billing turned on (they need your card, but there’s a free tier)</p>
</li>
<li><p>Install the gcloud CLI — you’ll need this for the terminal stuff</p>
</li>
<li><p>Your site files: index.html, maybe a style.css, some images — whatever you’ve got</p>
</li>
<li><p>A domain name if you want to go fancy (optional but I recommend it if you’d like to learn the Cloudflare stuff)</p>
</li>
</ul>
<h2 id="heading-hosting-from-the-gcp-console"><strong>Hosting from the GCP Console 🛠️</strong></h2>
<p>This is the click-click way to get your site online.</p>
<h3 id="heading-1-create-a-bucket"><strong>1. Create a Bucket</strong></h3>
<ul>
<li><p>Go to Cloud Storage</p>
</li>
<li><p>Hit “Create bucket”</p>
</li>
<li><p>Name it <strong>exactly</strong> like your domain (e.g., <code>mycoolblog.com</code>) if you're planning to use a custom domain later</p>
</li>
<li><p>Choose multi-region, uniform access, and skip the warning about public access (we’ll fix that next)</p>
</li>
</ul>
<h3 id="heading-2-upload-your-files"><strong>2. Upload Your Files</strong></h3>
<ul>
<li>Drag in your files or folders — make sure there’s an <code>index.html</code></li>
</ul>
<h3 id="heading-3-make-it-public"><strong>3. Make It Public</strong></h3>
<ul>
<li><p>Select the files</p>
</li>
<li><p>Click <strong>Permissions → Add principal</strong></p>
</li>
<li><p>Add <code>allUsers</code></p>
</li>
<li><p>Give them the role <strong>Storage Object Viewer</strong></p>
</li>
</ul>
<p>I messed this up once by not doing it on the bucket level — so double-check where you’re applying permissions.</p>
<h3 id="heading-4-set-website-config"><strong>4. Set Website Config</strong></h3>
<ul>
<li><p>In your bucket, look for <strong>Website configuration</strong></p>
</li>
<li><p>Set <code>index.html</code> as the main page</p>
</li>
<li><p><code>404.html</code> is optional, but it’s nice if someone hits a broken link</p>
</li>
</ul>
<h3 id="heading-5-visit-your-site"><strong>5. Visit Your Site</strong></h3>
<p>You can now open:<br /><a target="_blank" href="https://storage.googleapis.com/%5Byour-bucket-name%5D/index.html"><code>https://storage.googleapis.com/[your-bucket-name]/index.html</code></a></p>
<p>Is it the prettiest URL? No. Does it work? Absolutely.</p>
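<p>If you end up scripting around this, the public object URL follows a predictable pattern. A tiny helper (the function name is my own) to build it:</p>

```python
from urllib.parse import quote

def gcs_public_url(bucket: str, obj: str = "index.html") -> str:
    """Build the public HTTPS URL for an object in a GCS bucket."""
    # Object names can contain slashes (folders), so quote() keeps "/" as-is.
    return f"https://storage.googleapis.com/{bucket}/{quote(obj)}"

print(gcs_public_url("mycoolblog.com"))
# https://storage.googleapis.com/mycoolblog.com/index.html
```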
<h2 id="heading-hosting-via-cli-gcloud-storage"><strong>💻 Hosting via CLI (</strong><code>gcloud storage</code>)</h2>
<p>If you’re a terminal person like me (or just lazy), the CLI is quicker.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Make the bucket (use your domain name exact same if you're going custom)</span>
gcloud storage buckets create gs://your-domain.com \
  --location=australia-southeast2 \
  --uniform-bucket-level-access

<span class="hljs-comment"># Upload everything</span>
gcloud storage cp ./your-site/* gs://your-domain.com

<span class="hljs-comment"># Set website config</span>
gcloud storage buckets update gs://your-domain.com \
  --web-main-page-suffix=index.html \
  --web-error-page=404.html

<span class="hljs-comment"># Make it public</span>
gcloud storage buckets add-iam-policy-binding gs://your-domain.com \
  --member=allUsers \
  --role=roles/storage.objectViewer
</code></pre>
<p>After that, boom 💥 Site’s live at:<br /><code>https://storage.googleapis.com/your-domain.com/index.html</code></p>
<h2 id="heading-adding-cloudflare-because-https-matters"><strong>🌐 Adding Cloudflare (because HTTPS matters)</strong></h2>
<p>Cloudflare makes the domain + HTTPS setup painless.</p>
<h3 id="heading-1-set-up-cloudflare"><strong>1. Set Up Cloudflare</strong></h3>
<ul>
<li><p>Go to cloudflare.com, sign up, and add your domain</p>
</li>
<li><p>It’ll ask you to switch your nameservers (do that in your domain registrar dashboard — usually under DNS settings)</p>
</li>
</ul>
<h3 id="heading-2-add-a-cname"><strong>2. Add a CNAME</strong></h3>
<p>In Cloudflare’s DNS settings:</p>
<ul>
<li><p>Name: www</p>
</li>
<li><p>Target: c.storage.googleapis.com</p>
</li>
<li><p>Proxy: Set to DNS-only for now</p>
</li>
</ul>
<p>Note: This only works if your GCS bucket is named <a target="_blank" href="http://www.yourdomain.com/">www.yourdomain.com</a>. I tried it with a different name and… nope.</p>
<h3 id="heading-3-enable-https"><strong>3. Enable HTTPS</strong></h3>
<ul>
<li><p>Under SSL/TLS, set mode to Flexible or Full (I used Flexible because it just worked)</p>
</li>
<li><p>Turn on — Always Use HTTPS &amp; Automatic HTTPS Rewrites</p>
</li>
</ul>
<h3 id="heading-4-flip-the-proxy"><strong>4. Flip the Proxy</strong></h3>
<p>Once everything’s working with DNS-only, go back to your CNAME and turn the orange cloud ON (a.k.a. Proxied). That gives you caching and extra speed too.</p>
<hr />
<h2 id="heading-whats-next"><strong>➕ What’s Next?</strong></h2>
<ul>
<li><p>Use a static site builder like Astro or Hugo if you’re tired of manually writing HTML</p>
</li>
<li><p>Add a CI/CD pipeline (GitHub Actions works great)</p>
</li>
<li><p>Add analytics (I’m trying out Plausible because I’m over Google Analytics)</p>
</li>
<li><p>Share your blog on X/Twitter/LinkedIn — let people know it’s alive!</p>
</li>
</ul>
<p>Hope you learned something out of it. If you have any better way to do this, feel free to let me know in comments or share it personally.</p>
]]></content:encoded></item><item><title><![CDATA[Unlocking your Developer Abilities with Model Context Protocol (MCP)]]></title><description><![CDATA[Image Credit: Aravind Putrevu
TL;DR MCP is the USB‑C for AI agents: a simple, open standard that lets large‑language‑model (LLM) assistants reach outside the chat box and safely operate real tools — your file‑system, GitHub issues, databases, SaaS AP...]]></description><link>https://allthingsgcp.com/unlocking-your-developer-abilities-with-model-context-protocol-mcp</link><guid isPermaLink="true">https://allthingsgcp.com/unlocking-your-developer-abilities-with-model-context-protocol-mcp</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[AI]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[software development]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Sun, 04 May 2025 14:50:49 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn-images-1.medium.com/max/1600/0*-IZy4neqz2rbCuLY" alt="Image Credit: Aravind Putrevu" class="image--center mx-auto" /></p>
<blockquote>
<p>Image Credit: <a target="_blank" href="https://www.devshorts.in/p/how-to-build-your-own-mcp-server">Aravind Putrevu</a></p>
<p><strong>TL;DR</strong> MCP is the USB‑C for AI agents: a simple, open standard that lets large‑language‑model (LLM) assistants reach outside the chat box and safely operate real tools — your file‑system, GitHub issues, databases, SaaS APIs, you name it. With a couple of lines in a config file you can go from “write me a script” to “run ESLint on my repo and open a pull‑request.”</p>
</blockquote>
<h3 id="heading-a-quick-glimpse-of-the-magic">🚀 A Quick Glimpse of the Magic</h3>
<p>Imagine this: you highlight a buggy function in VS Code and type</p>
<blockquote>
<p>“Claude (or Cursor), please refactor this with proper error‑handling, test it, and commit the patch.”</p>
</blockquote>
<p>Within seconds:</p>
<ol>
<li><p>The LLM calls the <strong>Filesystem Server</strong> through MCP to open the file, edits the code, and writes it back.</p>
</li>
<li><p>It invokes the <strong>Git Server</strong> to push the branch.</p>
</li>
<li><p>All the while it keeps you in the loop, asking confirmation at each step. (Although you can make it autonomous)</p>
</li>
</ol>
<p>That end‑to‑end flow takes zero manual shell commands — because MCP wires the AI straight into your dev toolbox.</p>
<h3 id="heading-mcp-in-one-paragraph">📖 MCP in One Paragraph</h3>
<p>Model Context Protocol (MCP) is an open, language‑agnostic spec created by Anthropic for connecting LLMs to external capabilities. MCP defines how three actors talk:</p>
<p><strong>Host</strong> — The app you live in (e.g. Claude Desktop, Cursor AI).<br /><strong>Client</strong> — The connector inside the host that speaks MCP.<br /><strong>Server</strong> — A tiny program that exposes abilities (file I/O, GitHub issues, databases…)</p>
<p>There are two modes of communication with MCP servers:</p>
<ul>
<li><p><strong>stdio</strong> → local processes similar to CLI commands (simplicity &amp; full desktop permissions)</p>
</li>
<li><p><strong>SSE</strong> → remote/cloud servers (scalable &amp; language‑agnostic)</p>
</li>
</ul>
<h3 id="heading-architecture-at-a-glance">🛠️ Architecture at a Glance</h3>
<pre><code class="lang-plaintext">╭── Host (Claude Desktop / Cursor) ────────────────────────────────╮
│  User asks → “Search my repo for TODOs”                          │
│  ┌──────────────┐                                                │
│  │ MCP Client   │──stdio──▶ Filesystem Server ◀───┐              │
│  └──────────────┘                                │               │
╰───────────────────────────────────────────────────┴──────────────╯
                                                 │
                                    returns list of matches
</code></pre>
<h3 id="heading-adding-ready-made-servers">🔌 Adding Ready made Servers</h3>
<p>You don’t have to write a single line of TypeScript or Python to get value out of MCP. The community already ships dozens of “ability packs” (servers) that expose everyday dev tasks through a simple RPC interface. All you do is tell <strong>Cursor</strong> where the server binary lives (for local use) or what URL it’s streaming on (for remote).</p>
<h3 id="heading-where-does-the-config-live-eg-in-cursor">Where does the config live? (eg, in Cursor)</h3>
<p>Cursor looks for an MCP configuration in two places:</p>
<pre><code class="lang-plaintext">+------------------+--------------------+---------------------------------------------------------------+
| Scope            | Path               | When to use                                                   |
+------------------+--------------------+---------------------------------------------------------------+
| Project-specific | ./.cursor/mcp.json | Keep experiments or secrets isolated to one codebase.         |
| Global           | ~/.cursor/mcp.json | Re-use the same servers across every project on your machine. |
+------------------+--------------------+---------------------------------------------------------------+
</code></pre>
<p>The JSON schema is identical in both files.</p>
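<p>Exactly how an editor combines the two files can vary; assuming project entries override global ones (a reasonable but unverified assumption — <code>load_mcp_config</code> below is a hypothetical helper, not a Cursor API), the lookup order can be sketched as:</p>

```python
import json
from pathlib import Path

# Hypothetical sketch of the lookup order: read the global file first,
# then let the project-level file override matching "servers" entries.
def load_mcp_config(project_root: Path, home: Path) -> dict:
    merged = {"servers": {}}
    for path in (home / ".cursor" / "mcp.json",           # global
                 project_root / ".cursor" / "mcp.json"):  # project wins
        if path.exists():
            config = json.loads(path.read_text())
            merged["servers"].update(config.get("servers", {}))
    return merged
```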
<p>Claude Desktop, VS Code, and other agent-enabled IDEs store their MCP configuration in much the same way as Cursor.</p>
<h3 id="heading-example-filesystem-github-servers">Example: Filesystem + GitHub servers</h3>
<pre><code class="lang-plaintext">// ~/.cursor/mcp.json
{
  "servers": {
    // Local server over stdio — full desktop permissions
    "filesystem": {
      "transport": "stdio",
      "command": "npx @anthropic-ai/mcp-filesystem",
      "args": ["--paths", "~/projects/my-app"]
    },
    // Cloud server over SSE — scoped token auth
    "github": {
      "transport": "sse",
      "url": "https://mcp.composio.dev/github",
      "env": { "GITHUB_TOKEN": "${env:GH_TOKEN}" }
    }
  }
}
</code></pre>
<p>Save the file and Cursor hot‑reloads the servers. Press <strong>⌘⇧P → “List MCP Abilities”</strong> to confirm they’re active. If they don’t show up, restart the application.</p>
<h3 id="heading-popular-mcp-servers-amp-one-line-invoke">Popular MCP servers &amp; one-line invoke</h3>
<pre><code class="lang-plaintext">+-------------------------+---------------------------------------------------------------+
| Ability                 | How to install / add                                          |
+-------------------------+---------------------------------------------------------------+
| Filesystem (local)      | npx @anthropic-ai/mcp-filesystem                              |
| GitHub                  | npx mcp-github OR URL https://mcp.composio.dev/github         |
| Postgres / Neon         | npx mcp-neon                                                  |
| Email (Resend)          | npx mcp-resend                                                |
| Redis KV (Upstash)      | npx mcp-upstash                                               |
+-------------------------+---------------------------------------------------------------+
</code></pre>
<blockquote>
<p><strong><em>Tip:</em></strong> <em>The Cursor docs keep a living catalogue of community servers where you can copy‑paste ready configs — including niche tools like Stripe, Notion, or Docker Compose.</em></p>
</blockquote>
<h3 id="heading-how-does-cursor-pick-a-server">How does Cursor pick a server?</h3>
<p>When you write something like:</p>
<blockquote>
<p><em>“Search the project for deprecated React lifecycle methods and open an issue for each file.”</em></p>
</blockquote>
<p>Cursor:</p>
<ol>
<li><p>Parses the request and sees it needs <strong>Filesystem,</strong> then <strong>GitHub</strong> abilities.</p>
</li>
<li><p>Finds servers in <code>mcp.json</code> that implement those abilities.</p>
</li>
<li><p>Streams tool calls, prompting for confirmation after each major step.</p>
</li>
</ol>
<p>Because the routing is declarative, you can swap a local Filesystem server for a remote one (perhaps running in CI) without changing a single prompt.</p>
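<p>As a toy sketch of that routing idea (the <code>abilities</code> field below is a stand‑in for what a real client learns by querying each server, and <code>route</code> is not an actual Cursor function):</p>

```python
# Toy router: match each required ability to the first configured server
# that advertises it, producing a declarative execution plan.
def route(needed_abilities, servers):
    plan = []
    for ability in needed_abilities:
        match = next(
            (name for name, spec in servers.items()
             if ability in spec.get("abilities", [])),
            None,
        )
        if match is None:
            raise LookupError(f"no server provides {ability!r}")
        plan.append((ability, match))
    return plan

servers = {
    "filesystem": {"transport": "stdio", "abilities": ["search", "read"]},
    "github": {"transport": "sse", "abilities": ["open_issue"]},
}
print(route(["search", "open_issue"], servers))
# [('search', 'filesystem'), ('open_issue', 'github')]
```

<p>Swapping the local Filesystem server for a remote one only changes the config entry; the plan stays the same.</p>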
<h3 id="heading-build-your-own-mcp-server">🧑‍💻 Build Your Own MCP Server</h3>
<p>MCP is open‑ended — you can wire an LLM into <strong>any</strong> API, database, or internal tool by packaging a lightweight server that speaks the protocol.</p>
<h3 id="heading-quickstart-tutorials">Quick‑start tutorials</h3>
<ul>
<li><p><strong>TypeScript</strong> — Follow the step‑by‑step guide in the official SDK: <a target="_blank" href="https://github.com/modelcontextprotocol/typescript-sdk">https://github.com/modelcontextprotocol/typescript-sdk</a></p>
</li>
<li><p><strong>Python</strong> — Spin up a server in minutes with the Python kit: <a target="_blank" href="https://github.com/modelcontextprotocol/python-sdk">https://github.com/modelcontextprotocol/python-sdk</a></p>
</li>
</ul>
<h3 id="heading-deployment-models-at-a-glance">Deployment models at a glance</h3>
<pre><code class="lang-plaintext">+-------------------------------+-----------+---------------------------------------------------+
| Model                         | Transport | Ideal for                                         |
+-------------------------------+-----------+---------------------------------------------------+
| Local dev                     | stdio     | Rapid prototyping with full desktop permissions   |
| Docker container              | sse       | Reproducible CI/CD tasks and team‑wide sharing    |
| Serverless (Cloud Run, etc.)  | sse       | Spiky workloads and pay‑per‑use operations        |
| Dedicated VM / Kubernetes     | sse       | Always‑on production agents with custom scaling   |
+-------------------------------+-----------+---------------------------------------------------+
</code></pre>
<blockquote>
<p><strong><em>Pro tip:</em></strong> <em>Start with a local</em> <code>stdio</code> <em>server, add auth &amp; rate‑limits, then ship a container to your cloud provider once you’re ready to share it with the team.</em></p>
</blockquote>
<h3 id="heading-hosting-amp-discovery-platforms">🏢 Hosting &amp; Discovery Platforms</h3>
<blockquote>
<p><strong><em>Use at your own risk:</em></strong> <em>These services are community‑run or early‑stage. Always audit the source, scope tokens narrowly, and prefer self‑hosting for sensitive workloads.</em></p>
</blockquote>
<ul>
<li><p><a target="_blank" href="https://hub.docker.com/u/mcp"><strong>Docker Hub</strong></a> — Explore a curated collection of 100+ secure, high-quality MCP servers as Docker Images.</p>
</li>
<li><p><a target="_blank" href="https://mcp.composio.dev/"><strong>Composio Hub</strong></a> — Dozens of ready‑to‑stream servers (GitHub, Jira, etc.); free &amp; paid tiers with per‑server rate limits.</p>
</li>
<li><p><a target="_blank" href="https://smithery.ai"><strong>Smithery Cloud</strong></a> — Curated gallery of pre‑hosted MCP servers for popular SaaS APIs; one‑click copy‑to‑Cursor config.</p>
</li>
<li><p><a target="_blank" href="https://cursor.directory/"><strong>Cursor Directory</strong></a> — Official SSE URLs for GitHub, Filesystem, Postgres, and more; zero install, authenticate via OAuth or PAT.</p>
</li>
<li><p><a target="_blank" href="https://github.com/resend/mcp-send-email"><strong>Resend MCP</strong></a> — Send emails directly from Cursor with this email-sending MCP server.</p>
</li>
<li><p><strong>Serverless runtimes (Lambda, Cloud Run, Vercel, Netlify, Fly.io)</strong> — Bring‑your‑own code; deploy over SSE for low idle cost.</p>
</li>
</ul>
<h3 id="heading-realworld-use-cases">🌟 Real‑World Use Cases</h3>
<ul>
<li><p><strong>Issue Triage Bot</strong> — A GitHub MCP server lets your LLM label, cluster, and auto‑respond to new issues each morning.</p>
</li>
<li><p><strong>Pull‑Request Reviewer</strong> — The same GitHub server streams diffs so the assistant can run linters, leave inline comments, and approve or request changes.</p>
</li>
<li><p><strong>GKE Cluster Doctor</strong> — A custom GKE server exposes live <code>kubectl</code> data; the assistant surfaces crashing pods and proposes quick‑fix commands like <code>kubectl rollout restart</code>.</p>
</li>
<li><p><strong>AWS Cost Guardian</strong> — An AWS server taps Cost Explorer and CloudWatch; each day the bot posts a Slack summary of unusual spend spikes.</p>
</li>
<li><p><strong>Release‑Notes Generator</strong> — Combining GitHub and Filesystem servers, the agent compiles merged PR titles into human‑friendly release notes and opens a changelog PR.</p>
</li>
<li><p><strong>Meeting Minutes Synthesizer</strong> — A Google Meet Notes server streams transcripts so the assistant publishes action items straight to Confluence or Notion.</p>
</li>
<li><p><strong>Terraform Plan Explainer</strong> — Filesystem + Terraform servers let the bot translate a <code>terraform plan</code> into plain‑English risk and impact statements.</p>
</li>
<li><p><strong>CI/CD Fixer</strong> — A Buildkite or GitHub Actions server grants log access; the agent pinpoints failing steps, suggests fixes, and can rerun the job on approval.</p>
</li>
</ul>
<h3 id="heading-further-reading-amp-resources">📚 Further Reading &amp; Resources</h3>
<ul>
<li><p>🔗 Official spec &amp; reference servers: <a target="_blank" href="https://github.com/anthropic-ai/mcp">https://github.com/anthropic-ai/mcp</a></p>
</li>
<li><p>🛠️ JS Server Toolkit: <a target="_blank" href="https://github.com/anthropic-ai/mcp-js">https://github.com/anthropic-ai/mcp-js</a></p>
</li>
<li><p>🐍 Python Server Toolkit: <a target="_blank" href="https://github.com/anthropic-ai/mcp-py">https://github.com/anthropic-ai/mcp-py</a></p>
</li>
<li><p>💬 Community chat &amp; examples: <a target="_blank" href="https://discord.gg/mcp">https://discord.gg/mcp</a></p>
</li>
</ul>
<h3 id="heading-closing-thought">Closing Thought</h3>
<p>USB‑C didn’t just simplify charging — it unleashed an ecosystem of plug‑and‑play gadgets. MCP is doing the same for AI: every server you add snaps a fresh superpower onto your assistant.</p>
<p>Today, that might mean auto‑drafting PR reviews; tomorrow, spinning up disposable GKE clusters or summarising your stand‑ups before the first coffee.</p>
<p><strong>What will you plug in next?</strong> Share your experiments, lessons learnt, and wish‑lists in the comments, and let’s blueprint the next generation of tool‑augmented development — together.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring DORA's Take on Generative AI for Software Developers]]></title><description><![CDATA[You've likely been hearing a fair bit about generative AI (gen AI), those clever tools that can whip up code and text. Well, it's causing quite a stir in the world of software development. The latest DORA report confirms this, showing that a whopping...]]></description><link>https://allthingsgcp.com/exploring-doras-take-on-generative-ai-for-software-developers</link><guid isPermaLink="true">https://allthingsgcp.com/exploring-doras-take-on-generative-ai-for-software-developers</guid><category><![CDATA[dora]]></category><category><![CDATA[genai]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Tue, 01 Apr 2025 11:22:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743506458729/36c4b8f1-e2bf-4a05-9dd0-2d7ff54483bb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You've likely been hearing a fair bit about <strong>generative AI (gen AI)</strong>, those clever tools that can whip up code and text. Well, it's causing quite a stir in the world of software development. The <a target="_blank" href="https://cloud.google.com/resources/content/dora-impact-of-gen-ai-software-development">latest <strong>DORA report</strong></a> confirms this, showing that a whopping <strong>89% of organisations are prioritising the integration of AI into their applications</strong>, and <strong>76% of technologists are already using AI in some part of their daily work</strong>. This "AI moment" is backed by serious investment, with leading tech giants expected to pump around <strong>$1 trillion into AI development over the next five years</strong>.</p>
<p>However, it's not all plain sailing. Developers naturally have some valid concerns, such as worries about job displacement, security risks, and the potential for AI to eat into the time they spend on truly rewarding work. This guide aims to look beyond just adopting AI and focus on how to integrate it responsibly and effectively throughout the software development lifecycle, maximising the good bits while keeping the risks in check.</p>
<h3 id="heading-how-ai-is-shaking-things-up-for-developers">How AI is Shaking Things Up for Developers</h3>
<p>The initial news for individual developers looks largely positive:</p>
<ul>
<li><p><strong>More Time in Flow:</strong> Developers who use gen AI more extensively report experiencing a <strong>more frequent flow state</strong>.</p>
</li>
<li><p><strong>Increased Job Satisfaction:</strong> They also report <strong>higher overall job satisfaction</strong>.</p>
</li>
<li><p><strong>Boost in Productivity:</strong> Gen AI use is linked to <strong>increased productivity</strong>. The findings suggest that a <strong>25% increase in AI adoption by an individual could lead to an approximate 2.1% increase in productivity</strong>.</p>
</li>
<li><p><strong>Reduced Burnout:</strong> Interestingly, those using gen AI more also report <strong>less burnout</strong>.</p>
</li>
</ul>
<p>However, here's where it gets a bit more complex. One of the big hopes for AI was that it would free up developers from routine tasks to focus on more valuable work. Yet, the data indicates that increased AI adoption might actually lead to <strong>less time spent on work developers consider valuable</strong>, while time spent on tedious, "toilsome" work seems to remain largely the same.</p>
<p>The researchers came up with the "<strong>vacuum hypothesis</strong>" to explain this. Essentially, by boosting productivity and flow, AI helps people complete valuable work more efficiently, creating extra time. However, AI isn't yet tackling those less enjoyable but still necessary tasks like meetings and bureaucracy. It's worth noting that even with this shift, developers' well-being hasn't been negatively affected.</p>
<h3 id="heading-impact-on-teams-and-organisations">Impact on Teams and Organisations</h3>
<p>From an organisational standpoint, the influence of AI appears quite promising in several areas:</p>
<ul>
<li><p>A <strong>25% increase in AI adoption</strong> is associated with a <strong>7.5% increase in documentation quality</strong>.</p>
</li>
<li><p>Code quality is also likely to see a <strong>3.4% increase</strong>.</p>
</li>
<li><p>Code review speed could improve by around <strong>3.1%</strong>.</p>
</li>
<li><p>Approval speed for code changes might see a modest <strong>1.3% increase</strong>.</p>
</li>
<li><p>Code complexity is estimated to decrease by <strong>1.8%</strong>.</p>
</li>
</ul>
<p>These improvements suggest that AI is helping people get more value from their codebases and documentation, and is also speeding up the code review and approval processes.</p>
<p>However, and this is crucial, despite these positive impacts on development processes, the findings indicate that <strong>AI adoption is negatively impacting software delivery performance</strong>. For every 25% increase in AI adoption, there's an estimated <strong>1.5% reduction in delivery throughput</strong> and a more significant <strong>7.2% reduction in delivery stability</strong>. The researchers hypothesise that the increased speed of code generation due to AI might be leading to larger change sizes, and as DORA research has consistently shown, <strong>larger changes are slower and more prone to instability</strong>. So, even with AI, the fundamental principles of successful software delivery, like <strong>small batch sizes</strong>, remain vital.</p>
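<p>To make the arithmetic concrete — assuming, purely for illustration, that these per‑25‑point estimates scale linearly (an assumption made here, not a claim from the report, which only states the 25% case):</p>

```python
# Back-of-the-envelope sketch using the report's point estimates.
# Linearity across other adoption increases is an assumption.
EFFECTS_PER_25PT = {
    "productivity": +2.1,          # percent change per 25-point adoption rise
    "delivery_throughput": -1.5,
    "delivery_stability": -7.2,
}

def estimated_change(metric: str, adoption_increase_pct: float) -> float:
    return EFFECTS_PER_25PT[metric] * (adoption_increase_pct / 25.0)

# Doubling the adoption increase doubles the estimated effect.
print(estimated_change("delivery_stability", 50))
```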
<h3 id="heading-what-developers-truly-value-in-their-work">What Developers Truly Value in Their Work</h3>
<p>To better understand the impact of AI on developers' perception of their work, the research delved into what developers actually consider "valuable". They identified five key perspectives:</p>
<ul>
<li><p><strong>Utilitarian Value:</strong> The feeling that their work has a positive impact on the world. Gen AI can potentially boost this by speeding up development.</p>
</li>
<li><p><strong>Reputational Value:</strong> Being recognised for the work they've done. AI could increase this by improving the impact of their work, but it could also reduce it if AI gets the credit.</p>
</li>
<li><p><strong>Economic Value:</strong> The pay and benefits associated with their work. AI might increase this through higher productivity, but some worry about potential reductions in workforce or paid hours.</p>
</li>
<li><p><strong>Intrinsic Value:</strong> The inherent worthwhileness of the development work itself, often linked to learning and traditional skills. AI is anticipated to have a neutral impact as new skills like prompt engineering become important.</p>
</li>
<li><p><strong>Hedonistic Value:</strong> The enjoyment derived from performing certain development tasks. AI could make enjoyable tasks more accessible but might also make some obsolete. Allowing developers to choose not to use AI for tasks they enjoy is important.</p>
</li>
</ul>
<h3 id="heading-building-developers-trust-in-gen-ai">Building Developers' Trust in Gen AI</h3>
<p>For developers to embrace and benefit from AI, trust is paramount. However, research suggests that <strong>developers' trust in gen AI output is currently relatively low</strong>. Organisations can foster this trust through several strategies:</p>
<ul>
<li><p><strong>Establish a Clear Policy on Acceptable Gen AI Use:</strong> Providing explicit guidelines encourages responsible use and can alleviate fears of unknowingly acting irresponsibly. Organisations with more transparent AI use policies see higher levels of trust.</p>
</li>
<li><p><strong>Double-Down on Fast, High-Quality Feedback:</strong> Robust code review and automated testing processes assure developers that errors introduced by AI-generated code will be caught. Interestingly, AI adoption can actually make code reviews faster and improve code quality.</p>
</li>
<li><p><strong>Provide Opportunities for Developers to Gain Exposure with Gen AI:</strong> Familiarity increases trust. This is particularly true when developers can use AI in their preferred programming languages, where they have the expertise to evaluate its output.</p>
</li>
<li><p><strong>Encourage Gen AI Use, But Don't Force It:</strong> While leadership encouragement is effective, developers need to maintain control over when and how AI is used. Building community structures to share knowledge organically can be a good approach.</p>
</li>
<li><p><strong>Help Developers Think Beyond Automation:</strong> Addressing fears of job displacement requires envisioning the future role of developers working with AI at a higher level of abstraction, focusing on innovation and user value.</p>
</li>
</ul>
<h3 id="heading-practical-strategies-for-adopting-gen-ai">Practical Strategies for Adopting Gen AI</h3>
<p>Moving from isolated AI experiments to widespread adoption requires a strategic approach. Here are four research-backed strategies for organisations:</p>
<ol>
<li><p><strong>Share and Be Transparent About How Your Organisation Plans to Use AI:</strong> Open communication about the AI mission, goals, and policies can alleviate apprehension and position AI as a tool to help everyone focus on more valuable work. Organisations that do this can see an estimated <strong>11.4% increase in team adoption of AI</strong>.</p>
</li>
<li><p><strong>Address Developer Concerns About AI's Impact:</strong> Directly addressing anxieties about job displacement can enable developers to focus on learning how to best use AI. Organisations that alleviate these concerns are estimated to have <strong>125% more team adoption of AI</strong>.</p>
</li>
<li><p><strong>Allow Ample Time for Developers to Learn How to Use AI:</strong> Providing dedicated time for experimentation and integration leads to significantly higher adoption rates. Simply giving developers dedicated work time to explore AI tools can lead to a <strong>131% increase in team AI adoption</strong>, and actively encouraging integration leads to a <strong>27% increase</strong>.</p>
</li>
<li><p><strong>Create Policies That Govern the Adoption of AI:</strong> Clear guidelines on appropriate use cases, ethical considerations, and potential risks can reduce uncertainty and encourage responsible experimentation. Organisations with AI acceptable-use policies show a <strong>451% increase in AI adoption</strong>.</p>
</li>
</ol>
<h3 id="heading-measuring-the-success-of-ai-adoption">Measuring the Success of AI Adoption</h3>
<p>To understand the impact of gen AI, it's crucial to establish baseline measurements and track progress at the team, service, and organisational levels. Some key metrics to consider include:</p>
<ul>
<li><p><strong>Code assistant metrics:</strong> Licenses allocated, daily active users, code suggestions generated and accepted, lines of code accepted.</p>
</li>
<li><p><strong>Fast-feedback metrics:</strong> Tests on commit, daily tests, daily builds, test confidence, time to fix broken builds.</p>
</li>
<li><p><strong>Team-level metrics (gathered through surveys):</strong> AI task reliance, AI interactions, perceived AI productivity, trust in AI output, organisational trust, flow, job satisfaction, valuable work, burnout, code review time, documentation quality, technical debt.</p>
</li>
<li><p><strong>Service-level metrics:</strong> Code complexity, code quality.</p>
</li>
<li><p><strong>Organisational metrics:</strong> Customer numbers, market share, overall performance, profitability, customer satisfaction.</p>
</li>
</ul>
<p>Gathering feedback from developers through regular surveys, team retrospectives, and communities of practice is also essential for refining the AI adoption strategy.</p>
<h3 id="heading-key-takeaways-for-the-future">Key Takeaways for the Future</h3>
<p><strong>For Leaders:</strong></p>
<ul>
<li><p><strong>Prioritise transparency:</strong> Clearly communicate your AI strategy, directly address job security concerns, and establish clear policies for responsible AI use.</p>
</li>
<li><p><strong>Invest in your people:</strong> Provide dedicated time, training, and resources for developers to learn and experiment with AI. Foster a culture of learning and psychological safety.</p>
</li>
<li><p><strong>Measure and iterate:</strong> Track key metrics (code quality, developer satisfaction, delivery performance) and be prepared to adjust your approach based on data.</p>
</li>
</ul>
<p><strong>For Practitioners:</strong></p>
<ul>
<li><p><strong>Embrace experimentation:</strong> Don't be afraid to try new AI tools and explore different use cases within established guidelines.</p>
</li>
<li><p><strong>Become AI-fluent:</strong> Master prompt engineering, understand AI limitations, and integrate AI into your workflow.</p>
</li>
<li><p><strong>Own the output:</strong> Always review, test, and refine AI-generated code and documentation. Your expertise remains critical.</p>
</li>
</ul>
<p>The integration of gen AI into software development is a significant and ongoing journey. By taking a thoughtful, data-driven, and human-centred approach, organisations and developers can collectively harness the power of AI to create a more productive, fulfilling, and innovative future.</p>
]]></content:encoded></item><item><title><![CDATA[Beep Happens: Adventures in Cloud Alerting]]></title><description><![CDATA[Ever felt like your phone is having a seizure from all those cloud alerts? You're not alone in this wild adventure of cloud monitoring! Let's turn down the noise and make those alerts actually useful.
When Your Digital Shop Has Too Many Security Guar...]]></description><link>https://allthingsgcp.com/beep-happens-adventures-in-cloud-alerting</link><guid isPermaLink="true">https://allthingsgcp.com/beep-happens-adventures-in-cloud-alerting</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[alerting]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Mon, 24 Mar 2025 12:30:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742819496202/8da3c921-5875-4f99-b5a7-bb9ce5d01758.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever felt like your phone is having a seizure from all those cloud alerts? You're not alone in this wild adventure of cloud monitoring! Let's turn down the noise and make those alerts actually useful.</p>
<h2 id="heading-when-your-digital-shop-has-too-many-security-guards">When Your Digital Shop Has Too Many Security Guards</h2>
<p>Imagine your website is a quirky little shop. You want to know if everything's running smoothly, right? Cloud monitoring is like having an overzealous security team with cameras everywhere and alarms that might go off when a customer sneezes too loudly. The trick is teaching your security system the difference between a real break-in and a cat walking by the window.</p>
<h2 id="heading-the-real-goal-happy-users-not-engineers-having-nervous-breakdowns">The Real Goal: Happy Users, Not Engineers Having Nervous Breakdowns</h2>
<p>Good alerting isn't about getting a text message every time a server hiccups. It's about knowing when your users are having a bad time. If your website is slower than a sloth on vacation, that's alert-worthy. If a server's CPU does a little dance for two seconds? Maybe not worth the panic attack.</p>
<h2 id="heading-think-like-your-users-what-would-make-them-rage-quit">Think Like Your Users: What Would Make Them Rage-Quit?</h2>
<p>Here's what deserves those dramatic alert sounds:</p>
<ul>
<li><p><strong>Website playing dead</strong>: If your digital shop's doors are locked, you need to know ASAP before customers start the online equivalent of angrily rattling the door handles.</p>
</li>
<li><p><strong>Features throwing tantrums</strong>: Imagine the checkout in your shop suddenly deciding it's on strike. BEEP BEEP BEEP!</p>
</li>
<li><p><strong>Everything moving in slow motion</strong>: When your site takes so long to load that users can brew coffee between clicks, it's alert time.</p>
</li>
<li><p><strong>Error messages breeding like rabbits</strong>: If your customers keep getting digital versions of "Computer says no," they'll shop elsewhere.</p>
</li>
</ul>
<h2 id="heading-the-not-so-secret-rules-of-alert-club">The Not-So-Secret Rules of Alert Club</h2>
<h3 id="heading-focus-on-the-real-drama-symptoms-not-just-causes">Focus on the Real Drama (Symptoms, Not Just Causes)</h3>
<ul>
<li><p><strong>Alert-Worthy</strong>: "Website loading time is slower than my grandma's internet connection (5+ seconds) for the last 5 minutes."</p>
</li>
<li><p><strong>Meh</strong>: "CPU usage on server X is above 90%." (Might be normal, might just be the server doing its workout routine.)</p>
</li>
</ul>
<h3 id="heading-set-the-right-panic-button-level">Set the Right "Panic Button" Level</h3>
<p>Think about what's normal for your system. If your website usually zooms along at 2 seconds, maybe set an alert for when it starts crawling at 5 seconds.</p>
<p>Don't set the trigger too sensitive (like a car alarm that goes off when a butterfly lands on it) or too relaxed (like a guard dog that sleeps through an actual robbery).</p>
<h3 id="heading-keep-the-noise-down-your-sanity-depends-on-it">Keep the Noise Down (Your Sanity Depends On It)</h3>
<p>If the same problem keeps happening, one alert is enough—not an inbox full of "THE WEBSITE IS STILL DOWN" every 30 seconds. Modern monitoring tools let you "snooze" alerts when you're already on the case, frantically typing and chugging coffee.</p>
<h3 id="heading-make-your-alerts-actually-helpful">Make Your Alerts Actually Helpful</h3>
<p>Your alert should give you enough information to start fixing the problem, not just scream "SOMETHING'S WRONG!"</p>
<p><strong>Good Alert Message</strong>: "Hey, the website's moving slower than a turtle in molasses. Check the application logs and the database performance dashboard before users start complaining on Twitter."</p>
<p><strong>Pro Tip</strong>: Add links to helpful guides or dashboards in your alert messages. Future panicked you will thank present calm you.</p>
<h3 id="heading-send-the-alert-to-the-right-heroes">Send the Alert to the Right Heroes</h3>
<p>If it's a database problem, the database team should get the alert, not the front-end developers who can't help and will just forward it anyway. Set up different notification channels (email, Slack, SMS) and send alerts to specific teams.</p>
<h2 id="heading-alert-types-for-mere-mortals">Alert Types for Mere Mortals</h2>
<ul>
<li><p><strong>Metric Alerts</strong>: These watch numbers like your website's vital signs—CPU usage, error counts, response times.</p>
</li>
<li><p><strong>Log Alerts</strong>: These scan your logs for concerning words like "ERROR" or "CRITICAL FAILURE" or "OH NO OH NO OH NO."</p>
</li>
<li><p><strong>SLO Alerts</strong>: These are for the overachievers. They help track if you're keeping your promises to users about reliability.</p>
</li>
</ul>
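<p>A log alert, for instance, boils down to a pattern match over incoming lines — a minimal sketch (keyword list and log lines invented for illustration):</p>

```python
import re

# Flag any log line containing an alert-worthy severity keyword.
ALERT_PATTERN = re.compile(r"\b(ERROR|CRITICAL FAILURE)\b")

def scan_logs(lines):
    return [line for line in lines if ALERT_PATTERN.search(line)]

sample = [
    "INFO  checkout completed",
    "ERROR payment gateway timeout",
    "WARN  slow query (1.2s)",
]
print(scan_logs(sample))  # ['ERROR payment gateway timeout']
```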
<h2 id="heading-a-real-life-adventure-tale">A Real-Life Adventure Tale</h2>
<p>Let's say you run an online shop selling artisanal cloud-shaped pillows. You want to know if people can't add items to their cart.</p>
<ol>
<li><p><strong>User Impact Assessment</strong>: Users can't buy your fluffy cloud pillows! Red alert!</p>
</li>
<li><p><strong>Metric Selection</strong>: Monitor the number of errors on the "add to cart" function.</p>
</li>
<li><p><strong>Threshold Setting</strong>: If errors exceed 5 in a minute, something's definitely wrong.</p>
</li>
<li><p><strong>Alert Creation</strong>: Set up a metric-based alert in your monitoring system.</p>
</li>
<li><p><strong>Notification Setup</strong>: Configure alerts to ping your support team on Slack with the message: "MAYDAY! MAYDAY! Cart function is broken! Cloud pillows are not being sold!"</p>
</li>
<li><p><strong>Helpful Context</strong>: Include a link to the logs so the team can start investigating immediately.</p>
</li>
</ol>
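<p>Steps 2–3 amount to a sliding‑window count — a minimal sketch (class name and API invented here; real monitoring systems handle this for you):</p>

```python
from collections import deque

# Fire when more than `threshold` errors land inside a rolling time window.
class CartErrorAlert:
    def __init__(self, threshold=5, window_seconds=60):
        self.threshold = threshold
        self.window = window_seconds
        self.errors = deque()  # timestamps (seconds) of recent errors

    def record_error(self, timestamp: float) -> bool:
        """Record one error; return True if the alert should fire."""
        self.errors.append(timestamp)
        # Evict errors that have aged out of the window.
        while self.errors and timestamp - self.errors[0] > self.window:
            self.errors.popleft()
        return len(self.errors) > self.threshold

alert = CartErrorAlert()
print([alert.record_error(t) for t in [0, 10, 20, 30, 40, 50]])
# the sixth error within the minute trips the alert
```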
<h2 id="heading-keep-evolving-your-alert-game">Keep Evolving Your Alert Game</h2>
<p>Just like you'd adjust security in your shop over time (maybe that motion sensor in the bathroom wasn't the best idea), review your alerts regularly. Getting woken up at 3 AM for non-issues? Time to adjust those thresholds. Missing actual problems? Maybe tighten things up a bit.</p>
<h2 id="heading-the-tldr-for-the-alert-fatigued">The TL;DR for the Alert-Fatigued</h2>
<p>Good alerting is about setting up smart alarms that tell you about real problems your users are facing, without making you want to throw your phone into the sea. Focus on what matters to your users, set sensible triggers, and make sure your alerts give you the information you need to fix things quickly.</p>
<p>Remember, in the world of cloud monitoring, beep happens—but it doesn't have to happen constantly!</p>
]]></content:encoded></item><item><title><![CDATA[What to Expect at Google Cloud Next '25]]></title><description><![CDATA[Google Cloud Next is the premier event for anyone passionate about cloud technology, and Next '25 is shaping up to be an unmissable experience. If you're eager to stay ahead of the curve in cloud computing, mark your calendars! Here's a glimpse of wh...]]></description><link>https://allthingsgcp.com/what-to-expect-at-google-cloud-next-25</link><guid isPermaLink="true">https://allthingsgcp.com/what-to-expect-at-google-cloud-next-25</guid><category><![CDATA[#cloudnext]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Wed, 19 Mar 2025 12:41:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742387992647/af79d14b-78f4-47ac-b653-90fb7d7d46c3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Google Cloud Next is the premier event for anyone passionate about cloud technology, and Next '25 is shaping up to be an unmissable experience. If you're eager to stay ahead of the curve in cloud computing, mark your calendars! Here's a glimpse of what you can expect at Google Cloud Next '25, based on the early information released.</p>
<p><strong>Innovation at the Forefront:</strong></p>
<p>The website, <a target="_blank" href="http://cloud.withgoogle.com/next/25">cloud.withgoogle.com/next/25</a>, hints at a focus on cutting-edge innovations that are driving the future of cloud. Expect deep dives into:</p>
<ul>
<li><p><strong>Artificial Intelligence and Machine Learning:</strong> Google Cloud is a leader in AI, and Next '25 will undoubtedly showcase the latest advancements. Look out for sessions on generative AI, responsible AI practices, and how businesses can leverage AI to transform their operations.</p>
</li>
<li><p><strong>Data Analytics and Management:</strong> In today's data-driven world, efficient data management is crucial. Expect to see updates on Google Cloud's data analytics tools, including BigQuery, Dataflow, and Dataproc.</p>
</li>
<li><p><strong>Infrastructure and Security:</strong> As businesses increasingly rely on the cloud, robust infrastructure and security are paramount. Next '25 will likely feature sessions on scalable infrastructure, cybersecurity best practices, and innovative security solutions.</p>
</li>
<li><p><strong>Application Modernization:</strong> Modernizing applications is key to staying competitive. Expect to see how Google Cloud assists with containerization, serverless computing, and microservices.</p>
</li>
<li><p><strong>Collaboration and Productivity:</strong> Google Workspace integration with cloud services will likely be a highlight, demonstrating how teams can work more efficiently and collaboratively.</p>
</li>
</ul>
<p><strong>A Platform for Learning and Networking:</strong></p>
<p>Google Cloud Next '25 is more than just a showcase of technology; it's a platform for learning and networking. Expect:</p>
<ul>
<li><p><strong>Keynote Presentations:</strong> Hear from Google Cloud leaders and industry experts about the latest trends and vision for the future of cloud computing.</p>
</li>
<li><p><strong>Breakout Sessions and Workshops:</strong> Dive deep into specific topics and gain hands-on experience with Google Cloud technologies.</p>
</li>
<li><p><strong>Hands-on Labs:</strong> Test-drive the newest Google Cloud products.</p>
</li>
<li><p><strong>Networking Opportunities:</strong> Connect with fellow cloud professionals, Google Cloud partners, and experts.</p>
</li>
<li><p><strong>Partner Showcases:</strong> Explore innovative solutions from Google Cloud's extensive partner ecosystem.</p>
</li>
</ul>
<p><strong>Why Attend Google Cloud Next '25?</strong></p>
<ul>
<li><p><strong>Stay Informed:</strong> Keep up with the <strong>latest cloud trends and technologies</strong>.</p>
</li>
<li><p><strong>Gain Practical Skills:</strong> Learn from experts and gain hands-on experience.</p>
</li>
<li><p><strong>Network with Industry Leaders:</strong> Connect with peers, partners, and Google Cloud experts.</p>
</li>
<li><p><strong>Discover Innovative Solutions:</strong> Explore new ways to leverage the cloud to drive business growth.</p>
</li>
</ul>
<p><strong>Mark Your Calendars:</strong> April 9-11, 2025</p>
<p>While specific details are still emerging, the excitement is building for Google Cloud Next '25. Keep an eye on the <a target="_blank" href="https://cloud.withgoogle.com/next/25">official website</a> for updates on registration, agenda, and speakers.</p>
<p>Whether you're a seasoned cloud professional or just starting your cloud journey, Google Cloud Next '25 promises to be an event that will inspire and empower you to innovate.</p>
]]></content:encoded></item><item><title><![CDATA[Google's $32 Billion Cloud Security Leap with Wiz Acquisition]]></title><description><![CDATA[Google has announced plans to acquire Wiz, a cybersecurity company specializing in cloud security solutions, for $32 billion. The acquisition is pending regulatory approval.
Since its establishment, Wiz h...]]></description><link>https://allthingsgcp.com/googles-32-billion-cloud-security-leap-with-wiz-acquisition</link><guid isPermaLink="true">https://allthingsgcp.com/googles-32-billion-cloud-security-leap-with-wiz-acquisition</guid><category><![CDATA[news]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Tue, 18 Mar 2025 13:52:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742305906989/3d468005-547b-4e4b-92e6-4b3a916c4a51.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><strong>Google</strong> has announced plans to acquire <strong>Wiz</strong>, a cybersecurity company specializing in cloud security solutions, for $32 billion. The acquisition is pending regulatory approval.</p>
<p>Since its establishment, Wiz has been recognized for developing cloud security platforms that address complex cybersecurity challenges faced by modern businesses. Wiz's products are known for their compatibility across various cloud environments, providing users with visibility and security control.</p>
<p>According to statements from both companies, the acquisition aligns with their shared vision of integrating development and security functions more closely. Wiz will continue to support multiple cloud providers, including AWS, Azure, and Oracle Cloud Infrastructure (OCI), maintaining its existing partnerships.</p>
<p>Google aims to enhance its cloud security offerings through this acquisition, leveraging Wiz’s capabilities alongside Google's expertise in artificial intelligence, data analytics, and popular technologies such as Kubernetes and TensorFlow. Additionally, Google's acquisition of cybersecurity firm Mandiant in 2022 is expected to complement Wiz’s security operations and threat intelligence capabilities.</p>
<p>The companies have indicated that Wiz will retain operational independence, continuing to serve existing customers without disruption.</p>
<p>The acquisition marks a significant development in the cloud security sector, underscoring the growing importance of cybersecurity solutions in enterprise technology.</p>
]]></content:encoded></item><item><title><![CDATA[Google Cloud Lands in Sweden: Faster, Greener, Closer!]]></title><description><![CDATA[Big news for Sweden's digital world! Google Cloud has just launched its new cloud region in Sweden, bringing its total global count to 42. This means faster and more reliable cloud services are now right here in Sweden, opening up exciting possibili...]]></description><link>https://allthingsgcp.com/google-cloud-lands-in-sweden-faster-greener-closer</link><guid isPermaLink="true">https://allthingsgcp.com/google-cloud-lands-in-sweden-faster-greener-closer</guid><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[UV Panta]]></dc:creator><pubDate>Fri, 07 Mar 2025 13:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742388688627/4a0c9f28-855a-4886-ad4a-eb4c9a3ade8f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Big news for Sweden's digital world! Google Cloud has just launched its new cloud region in Sweden, bringing its total global count to 42. This means faster and more reliable cloud services are now right here in Sweden, opening up exciting possibilities for businesses and organizations.</p>
<p>This new setup offers some great advantages:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Benefit</th><th>What it Means</th><th>Who Benefits</th></tr>
</thead>
<tbody>
<tr>
<td><strong>Faster Innovation</strong></td><td>Easier access to powerful tools like AI and data analysis.</td><td>All types of businesses, startups, and researchers.</td></tr>
<tr>
<td><strong>Local Data Storage</strong></td><td>Your data stays in Sweden.</td><td>Companies with strict rules about where their data is stored.</td></tr>
<tr>
<td><strong>Eco-Friendly</strong></td><td>Expected to run on mostly carbon-free energy.</td><td>Businesses and individuals who care about the environment.</td></tr>
<tr>
<td><strong>Speed Boost</strong></td><td>Quicker response times for online services.</td><td>Everyone using cloud-based applications in Sweden.</td></tr>
</tbody>
</table>
</div><p>Having a Google Cloud region in Sweden means that data can be stored and processed locally. This is a big deal for companies that need to keep their information within the country. Plus, Google is aiming for this region to be very environmentally friendly by using mostly carbon-free energy.</p>
<p>One of the best things about this new region is speed. With data centers closer to users in Sweden, things online will feel much faster. Even a few milliseconds can make a big difference! As Tyson Singer from Spotify said, "...even milliseconds matter, the new Google Cloud region in Sweden will be a catalyst for accelerating innovation...".</p>
<p>Businesses are already seeing the positive impact. Linus Sjöberg, CTO of Tradera, mentioned that Google Cloud's technology has helped them improve their customers' selling experience.</p>
<p>In short, Google Cloud's arrival in Sweden is a major step forward for the country's digital future. It brings faster, greener, and more secure cloud services closer to home, helping Swedish innovation thrive.</p>
]]></content:encoded></item></channel></rss>