7. Streaming and multi-turn
The HelloWorld example proves basic request/response and simple streaming. Real-world agents often need:
- rich streaming (status updates + artifact chunks),
- task persistence (so conversations can continue later),
- multi-turn interactions (the agent asks for missing information); a sketch of these event shapes follows this list.
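To make this concrete, the sketch below shows roughly what the two streamed event kinds and the “input required” signal look like on the wire. The field names (kind, taskId, contextId, status, lastChunk) follow the A2A JSON representation as of recent protocol versions, and the IDs and texts are purely illustrative, so treat it as an approximation rather than normative output.

```python
# Rough sketch of A2A streaming event payloads; field names and enum values
# follow the A2A JSON representation as commonly published and may differ
# between protocol versions. IDs and texts are illustrative placeholders.

# A status update emitted while the agent is still working:
status_update = {
    "kind": "status-update",
    "taskId": "task-123",
    "contextId": "ctx-456",
    "status": {
        "state": "working",
        "message": {
            "kind": "message",
            "role": "agent",
            "parts": [{"kind": "text", "text": "Looking up the exchange rate..."}],
        },
    },
    "final": False,
}

# An artifact chunk carrying (part of) the final result:
artifact_update = {
    "kind": "artifact-update",
    "taskId": "task-123",
    "contextId": "ctx-456",
    "artifact": {
        "artifactId": "result-1",
        "parts": [{"kind": "text", "text": "100 USD is about 92 EUR."}],
    },
    "lastChunk": True,
}

# Multi-turn: a status update with state "input-required" pauses the task
# until the client sends another message carrying the same taskId/contextId.
input_required = {
    "kind": "status-update",
    "taskId": "task-123",
    "contextId": "ctx-456",
    "status": {
        "state": "input-required",
        "message": {
            "kind": "message",
            "role": "agent",
            "parts": [{"kind": "text", "text": "Which currency should I convert to?"}],
        },
    },
    "final": True,
}
```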
A reference example: LangGraph “Currency Agent”
The a2a-samples repository includes a more advanced Python example (often called a currency agent) built with LangGraph/LangChain and a Gemini model.
Typical workflow:
- Create and export an API key (e.g. GOOGLE_API_KEY) for the LLM provider.
- Start the agent server.
- Run its test client to observe (a minimal streaming client sketch follows this list):
  - TaskStatusUpdateEvent updates while the agent works,
  - TaskArtifactUpdateEvent chunks for final results,
  - multi-turn flows where the task enters an “input required” state.
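To see what the test client does under the hood, the sketch below sends one streaming request directly and prints each event kind as it arrives. It assumes the agent listens on http://localhost:10000 and exposes the JSON-RPC message/stream method over Server-Sent Events; the port, method name, and field names depend on the sample and protocol version you are running.

```python
import asyncio
import json
import uuid

import httpx

AGENT_URL = "http://localhost:10000"  # assumed local address of the currency agent


async def stream_message(text: str) -> None:
    """Send a streaming message request and print each streamed event kind."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/stream",  # A2A streaming method name in recent spec versions
        "params": {
            "message": {
                "kind": "message",
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("POST", AGENT_URL, json=payload) as response:
            async for line in response.aiter_lines():
                # SSE frames arrive as "data: {...}" lines; skip comments and keep-alives.
                if not line.startswith("data:"):
                    continue
                event = json.loads(line[len("data:"):].strip())
                result = event.get("result", {})
                kind = result.get("kind")
                if kind == "status-update":
                    print("status:", result["status"]["state"])
                elif kind == "artifact-update":
                    print("artifact chunk:", result["artifact"]["parts"])
                else:
                    print("other event:", kind, result)


asyncio.run(stream_message("How much is 100 USD in EUR?"))
```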
What to learn from it
When you review that example code, focus on:
- how the executor translates agent progress into streamed task events,
- how task IDs and context IDs are used across turns,
- how “needs more input” is represented and continued by the client (a continuation sketch follows below).
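For the last point, the essential mechanism is that the follow-up message reuses the taskId and contextId of the paused task, which tells the server it is answering the “input required” prompt rather than starting a new task. The sketch below builds such a continuation request; the message/send method name, field names, and IDs are assumptions for illustration.

```python
import json
import uuid


def build_followup_request(task_id: str, context_id: str, answer: str) -> dict:
    """Build a JSON-RPC request that continues a task paused in "input-required".

    Reusing the original taskId and contextId is what links this message to the
    paused task instead of opening a new one. Method and field names follow the
    A2A JSON representation as understood here; check your spec/SDK version.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "kind": "message",
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "taskId": task_id,        # same task that asked for more input
                "contextId": context_id,  # same conversation context
                "parts": [{"kind": "text", "text": answer}],
            }
        },
    }


# Example: the agent asked which target currency to use.
request = build_followup_request("task-123", "ctx-456", "Convert it to EUR, please.")
print(json.dumps(request, indent=2))
```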