Looking to build intelligent agents with real-world capabilities? Use Google ADK to build agents that can reason, delegate, and respond dynamically. This Google ADK tutorial walks you through the steps to build conversational agents with Google ADK across different language models like Gemini and GPT. Whether you're exploring Google ADK for AI agents or curious about how to create AI agents using Google ADK, this hands-on guide will help you kickstart your journey into agentic development with ease and clarity.
What is the Agent Development Kit?
Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents. It can be used with popular LLMs and open-source generative AI tools and is designed to integrate tightly with the Google ecosystem and Gemini models. ADK makes it easy to get started with simple agents powered by Gemini models and Google AI tools while providing the control and structure needed for more complex agent architectures and orchestration.
Features of Google's Agent Development Kit
- Multi-Agent Architecture: Compose agents in parallel, sequential, or hierarchical workflows.
- Flexible Orchestration: Route tasks dynamically using LLM-powered workflows.
- Rich Tool Ecosystem: Use built-in, custom, and third-party tools seamlessly.
- Model-Agnostic: Supports Gemini, GPT-4o, Claude, Mistral, and more.
- Streaming Capabilities: Real-time streaming for text, audio, and video.
- Dev-Friendly Tooling: CLI, web UI, visual debugging, and evaluation tools.
- Memory & State Management: Built-in handling for session and long-term memory.
- Artifact Handling: Manage files, outputs, and binary data effortlessly.
- Smart Execution: Agents can execute code and handle multi-step planning.
- Flexible Deployment: Run locally, on Google Cloud (Vertex AI, Cloud Run), or via Docker.
Problem Statement
As AI systems evolve from single-purpose tools to collaborative, multi-agent ecosystems, developers need practical guidance on building and orchestrating intelligent agents that can communicate, delegate, and adapt. To bridge this gap, we'll build a Weather Bot Team, a multi-agent system capable of answering weather-related queries while also handling user interactions like greetings, farewells, and safe responses.
This hands-on project aims to demonstrate how to:
- Design a modular multi-agent system using Google's Agent Development Kit (ADK).
- Integrate multiple language models (e.g., Gemini, GPT, Claude) for task specialization.
- Implement intelligent task delegation across agents.
- Manage session memory for contextual continuity.
- Apply safety mechanisms through structured callbacks.
By solving this problem, you'll gain practical experience with ADK's architecture, orchestration, memory management, and safety best practices, laying the groundwork for more complex, real-world agentic applications.
You can refer to the provided Colab notebook to guide you through the hands-on implementation.
Proposed Workflow
Prerequisites
Before diving into the code, make sure you've completed the following setup steps:
1. Set up your Environment & Install ADK
Start by creating and activating a virtual environment to isolate your project dependencies:
# Create a virtual environment
python -m venv .venv
Now that the environment has been created, we can activate it using the following commands:
# Activate the environment
# macOS/Linux:
source .venv/bin/activate
# Windows CMD:
.venv\Scripts\activate.bat
# Windows PowerShell:
.venv\Scripts\Activate.ps1
Once your environment is activated, install the Google Agent Development Kit (ADK):
pip install google-adk
2. Obtain your API Keys
You'll need API keys to interact with the different AI models used in this tutorial: a Google (Gemini) API key, an OpenAI API key, and an Anthropic API key, as configured in the setup code below.
Steps to Build Your Weather App
Step 1: Setup and Installation
Install the required libraries for the project:
# Install Google ADK and LiteLLM
!pip install google-adk -q
!pip install litellm -q
Import libraries:
import os
import asyncio
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm  # For multi-model support
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai import types  # For creating message Content/Parts
import warnings

# Ignore all warnings
warnings.filterwarnings("ignore")

import logging
logging.basicConfig(level=logging.ERROR)
Set up the API Keys:
# Gemini API Key
os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_KEY"
# OpenAI API Key
os.environ['OPENAI_API_KEY'] = "YOUR_OPENAI_API_KEY"
# Anthropic API Key
os.environ['ANTHROPIC_API_KEY'] = "YOUR_ANTHROPIC_API_KEY"

print("API Keys Set:")
print(f"Google API Key set: {'Yes' if os.environ.get('GOOGLE_API_KEY') and os.environ['GOOGLE_API_KEY'] != 'YOUR_GOOGLE_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")
print(f"OpenAI API Key set: {'Yes' if os.environ.get('OPENAI_API_KEY') and os.environ['OPENAI_API_KEY'] != 'YOUR_OPENAI_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")
print(f"Anthropic API Key set: {'Yes' if os.environ.get('ANTHROPIC_API_KEY') and os.environ['ANTHROPIC_API_KEY'] != 'YOUR_ANTHROPIC_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")

# Configure ADK to use API keys directly (not Vertex AI for this multi-model setup)
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "False"
Define model constants for easier use:
MODEL_GEMINI_2_0_FLASH = "gemini-2.0-flash"
MODEL_GPT_4O = "openai/gpt-4o"
MODEL_CLAUDE_SONNET = "anthropic/claude-3-sonnet-20240229"
print("\nEnvironment configured.")
Step 2: Define Tools
In ADK, Tools are the functional building blocks that allow agents to go beyond just generating text. They're typically simple Python functions that can perform real actions, like fetching weather data, querying a database, or running calculations.
To start, we'll create a mock weather tool to simulate weather lookups. This lets us focus on the agent's structure without needing external APIs. Later, we can easily swap it for a real weather service.
Code:
def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city (e.g., "Mumbai", "Chennai", "Delhi").

    Returns:
        dict: A dictionary containing the weather information.
              Includes a 'status' key ('success' or 'error').
              If 'success', includes a 'report' key with weather details.
              If 'error', includes an 'error_message' key.
    """
    # Best Practice: Log tool execution for easier debugging
    print(f"--- Tool: get_weather called for city: {city} ---")
    city_normalized = city.lower().replace(" ", "")  # Basic input normalization

    mock_weather_db = {
        "delhi": {"status": "success", "report": "The weather in Delhi is sunny with a temperature of 35°C."},
        "mumbai": {"status": "success", "report": "It's humid in Mumbai with a temperature of 30°C."},
        "bangalore": {"status": "success", "report": "Bangalore is experiencing light showers and a temperature of 22°C."},
        "kolkata": {"status": "success", "report": "Kolkata is partly cloudy with a temperature of 29°C."},
        "chennai": {"status": "success", "report": "It's hot and humid in Chennai with a temperature of 33°C."},
    }

    if city_normalized in mock_weather_db:
        return mock_weather_db[city_normalized]
    else:
        return {"status": "error", "error_message": f"Sorry, I don't have weather information for '{city}'."}

# Example usage
print(get_weather("Mumbai"))
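The mock lookup above is all the rest of this tutorial needs. If you later want to attempt the swap to a real weather service mentioned earlier, the following sketch shows one possible drop-in replacement using the free Open-Meteo API. The endpoints, parameters, and response fields are taken from Open-Meteo's public documentation rather than from this tutorial, so verify them (and install the requests package) before relying on it.
import requests  # assumed to be installed separately (pip install requests)

def get_weather_live(city: str) -> dict:
    """Sketch of a real-API replacement for get_weather, returning the same dict shape."""
    try:
        # Resolve the city name to coordinates via Open-Meteo's geocoding endpoint
        geo = requests.get(
            "https://geocoding-api.open-meteo.com/v1/search",
            params={"name": city, "count": 1},
            timeout=10,
        ).json()
        if not geo.get("results"):
            return {"status": "error", "error_message": f"Could not find coordinates for '{city}'."}
        loc = geo["results"][0]

        # Fetch the current conditions for those coordinates
        wx = requests.get(
            "https://api.open-meteo.com/v1/forecast",
            params={"latitude": loc["latitude"], "longitude": loc["longitude"], "current_weather": True},
            timeout=10,
        ).json()
        temp = wx["current_weather"]["temperature"]
        return {"status": "success", "report": f"The current temperature in {city} is {temp}°C."}
    except Exception as exc:
        return {"status": "error", "error_message": f"Weather lookup failed: {exc}"}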
Step 3: Defining the Agent
In ADK, an Agent is the core component that manages the conversation flow, connecting the user, the LLM, and the tools it can use.
To define an agent, you'll configure a few essential parameters:
- name: A unique identifier for the agent (e.g., "weather_agent_v1").
- model: The LLM the agent will use (e.g., MODEL_GEMINI_2_0_FLASH).
- description: A short summary of what the agent does – crucial for collaboration and delegation in multi-agent systems.
- instruction: Detailed behavior guidelines for the LLM, defining its persona, goals, how to use tools, and how to handle edge cases.
- tools: A list of tool functions (like [get_weather]) the agent can invoke.
Code:
AGENT_MODEL = MODEL_GEMINI_2_0_FLASH  # Start with the Gemini model defined earlier

weather_agent = Agent(
    name="weather_agent_v1",
    model=AGENT_MODEL,
    description="Provides weather information for specific cities.",
    instruction="You are a helpful weather assistant. Your primary goal is to provide current weather reports. "
                "When the user asks for the weather in a specific city, "
                "you MUST use the 'get_weather' tool to find the information. "
                "Analyze the tool's response: if the status is 'error', inform the user politely about the error message. "
                "If the status is 'success', present the weather 'report' clearly and concisely to the user. "
                "Only use the tool when a city is mentioned for a weather request.",
    tools=[get_weather],
)

print(f"Agent '{weather_agent.name}' created using model '{AGENT_MODEL}'.")
Step 4: Set Up the Runner and Session Service
To handle conversations and run the agent effectively, we need two key components:
SessionService: This component keeps track of each user's conversation history and session state. A basic implementation called InMemorySessionService stores all data in memory, making it ideal for testing or lightweight apps. It logs every message exchanged in a session. (A persistent alternative is sketched after the code below.)
Runner: This acts as the brain of the system. It manages the entire interaction flow, taking in user input, passing it to the right agent, calling the LLM and any necessary tools, updating session data through the SessionService, and producing a stream of events that show what's happening during the interaction.
Code:
# @title Setup Session Service and Runner

# --- Session Management ---
# Key Concept: SessionService stores conversation history & state.
# InMemorySessionService is a simple, non-persistent storage for this tutorial.
session_service = InMemorySessionService()

# Define constants for identifying the interaction context
APP_NAME = "weathertutorial_app"
USER_ID = "user_1"
SESSION_ID = "session_001"

# Create the specific session where the conversation will happen
session = session_service.create_session(
    app_name=APP_NAME,
    user_id=USER_ID,
    session_id=SESSION_ID,
)
print(f"Session created: App='{APP_NAME}', User='{USER_ID}', Session='{SESSION_ID}'")

# --- Runner ---
# Key Concept: Runner orchestrates the agent execution loop.
runner = Runner(
    agent=weather_agent,
    app_name=APP_NAME,
    session_service=session_service
)
print(f"Runner created for agent '{runner.agent.name}'.")
Step 5: Interact with the Agent
We'll use ADK's asynchronous Runner to talk to our agent and get its response. Since LLM and tool calls can take time, handling them asynchronously ensures a smooth, non-blocking experience.
We'll create a helper function called call_agent_async that does the following:
- Accepts a user query as input
- Wraps it in ADK's required Content format
- Calls runner.run_async() with the session and message
- Iterates through the Event stream ADK returns and tracks each step (tool call, response, etc.)
- Detects and prints the final response using event.is_final_response()
Code:
# @title Define Agent Interaction Function
import asyncio
from google.genai import types  # For creating message Content/Parts

async def call_agent_async(query: str):
    """Sends a query to the agent and prints the final response."""
    print(f"\n>>> User Query: {query}")

    # Prepare the user's message in ADK format
    content = types.Content(role="user", parts=[types.Part(text=query)])

    final_response_text = "Agent did not produce a final response."  # Default

    # Key Concept: run_async executes the agent logic and yields Events.
    # We iterate through events to find the final answer.
    async for event in runner.run_async(user_id=USER_ID, session_id=SESSION_ID, new_message=content):
        # You can uncomment the line below to see *all* events during execution
        # print(f"  [Event] Author: {event.author}, Type: {type(event).__name__}, Final: {event.is_final_response()}, Content: {event.content}")

        # Key Concept: is_final_response() marks the concluding message for the turn.
        if event.is_final_response():
            if event.content and event.content.parts:
                # Assuming text response in the first part
                final_response_text = event.content.parts[0].text
            elif event.actions and event.actions.escalate:  # Handle potential errors/escalations
                final_response_text = f"Agent escalated: {event.error_message or 'No specific message.'}"
            # Add more checks here if needed (e.g., specific error codes)
            break  # Stop processing events once the final response is found

    print(f"<<< Agent Response: {final_response_text}")
Step 6: Run the Conversation
Now that everything's set up, it's time to put our agent to the test by sending a few sample queries.
We'll:
- Wrap the async calls inside a run_conversation() coroutine
- Use await to run the function
What to expect:
- The user queries will be printed
- When the agent uses a tool (like get_weather), you'll see logs like: --- Tool: get_weather called... ---
- The agent will return a final response, gracefully handling cases where data isn't available (e.g., for "Paris")
Code:
# @title Run the Initial Conversation

# We need an async function to await our interaction helper
async def run_conversation():
    await call_agent_async("What is the weather like in Mumbai?")
    await call_agent_async("How about Paris?")  # Expecting the tool's error message
    await call_agent_async("Tell me the weather in Chennai")

# Execute the conversation using await in an async context (like Colab/Jupyter)
await run_conversation()
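The top-level await above works in Colab or Jupyter because the notebook already provides a running event loop. If you run this file as a plain Python script instead, wrap the coroutine in asyncio.run(), which is standard-library Python rather than anything ADK-specific.
# For a standalone .py script (no running event loop), use asyncio.run instead of a bare await
if __name__ == "__main__":
    asyncio.run(run_conversation())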
Output:

Conclusion
Google's Agent Development Kit (ADK) enables developers to create intelligent, multi-agent systems that go beyond simple text generation. By building a weather bot, we covered key ADK concepts such as tool integration, agent orchestration, and session management, all while leveraging the power of Google's Gemini. From defining clear, descriptive docstrings for tools to orchestrating interactions through the Runner and SessionService, ADK provides the flexibility to build production-ready agents that can interact, learn, and adapt. Whether you're building chatbots, virtual assistants, or multi-agent ecosystems, ADK offers the tools to bring your vision to life.
Frequently Asked Questions
Q. What is Google ADK?
A. Google ADK is an open-source, modular framework for building, orchestrating, and deploying AI-powered agents, including both simple bots and complex multi-agent systems. It's designed for flexibility, scalability, and integration with major LLMs and Google's AI ecosystem.
Q. What are the key features of Google ADK?
A. ADK is multi-agent by design, allowing you to compose agents in parallel, sequential, or hierarchical workflows. It's model-agnostic, supports real-time streaming (text, audio, video), and comes with built-in tools for debugging, evaluation, and deployment across environments.
Q. Does ADK only work with Google's Gemini models?
A. While optimized for Google's Gemini models, ADK is model-flexible and can work with other popular LLMs such as GPT-4o, Claude, Mistral, and more, via integrations like LiteLLM.
Q. Can ADK build both conversational and non-conversational agents?
A. Yes, ADK is suitable for building both conversational agents (like chatbots) and non-conversational agents that handle complex workflows or automation tasks.
Q. How do I get started with Google ADK?
A. You can install ADK using pip (pip install google-adk), set up your Google Cloud project (if needed), and quickly build your first agent using Python. ADK provides a CLI and a web UI for local development, testing, and debugging.