
Conversation

@lizzij (Member) commented Oct 29, 2025

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

1. Link to an existing issue (if applicable):

Testing Plan

_Please describe the tests that you ran to verify your changes. This is required for all PRs that are not small documentation or typo fixes._

Ran the unit tests via make; all tests completed successfully.

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Please include a summary of passed pytest results.
(screenshot of passing pytest results omitted)

Manual End-to-End (E2E) Tests:

  • Before: (screenshot omitted)
  • After: (screenshot omitted)

Please provide instructions on how to manually test your changes, including any necessary setup or configuration. Please provide logs or screenshots to help reviewers better understand the fix.

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

Add any other context or screenshots about the feature request here.

@gemini-code-assist

Summary of Changes

Hello @lizzij, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug where LLM instructions were being incorrectly inserted into conversation history, leading to failures in tool-use for certain models, particularly Anthropic's Claude. The changes introduce a more robust mechanism for instruction placement, ensuring that instructions respect semantic boundaries within the conversation flow. This prevents instructions from being interleaved with tool calls or user messages in a way that violates LLM provider requirements, thereby improving the reliability of agent interactions.

Highlights

  • Improved Instruction Insertion Logic: The logic for inserting instructions into LLM requests has been refined to prevent disruption of tool-use sequences and user turns, specifically addressing issues with models like Anthropic's Claude.
  • New Validation Function: A new helper function, is_valid_instruction_position, was introduced to semantically determine appropriate insertion points for instructions, ensuring they are not placed within user messages or tool-use blocks (see the illustrative sketch after this list).
  • Comprehensive Unit Tests: Extensive unit tests have been added to validate the updated instruction insertion logic across various conversation scenarios, including those involving function calls and different message roles.
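
The following is a hedged, illustrative sketch of what such a validation helper could look like. The name is_valid_instruction_position comes from the PR, but the signature, the stand-in message shapes, and the exact rules shown here are assumptions inferred from the description above, not the actual ADK implementation.

```python
# Hypothetical sketch only: SimpleNamespace stands in for the real
# google.genai content types, and the rules are inferred from the PR text.
from types import SimpleNamespace
from typing import Any, Sequence


def is_valid_instruction_position(contents: Sequence[Any], index: int) -> bool:
    """Return True if an instruction block may be inserted at `index`.

    Rules assumed from the PR description:
      * inserting at the start of the history is always allowed;
      * inserting right after a user message is not allowed;
      * inserting right after a message carrying a function call is not
        allowed, so the tool response stays adjacent to its tool call.
    """
    if index <= 0:
        return True
    previous = contents[index - 1]
    if previous.role == "user":
        return False
    if any(getattr(part, "function_call", None) for part in previous.parts):
        return False
    return True


# Minimal usage example with stand-in message objects.
history = [
    SimpleNamespace(role="user",
                    parts=[SimpleNamespace(text="What is the weather?", function_call=None)]),
    SimpleNamespace(role="model",
                    parts=[SimpleNamespace(text=None, function_call={"name": "get_weather"})]),
]

assert is_valid_instruction_position(history, 0)      # before the history: valid
assert not is_valid_instruction_position(history, 1)  # right after a user turn: invalid
assert not is_valid_instruction_position(history, 2)  # between tool call and its response: invalid
```
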
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in your IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot added the core label ([Component] This issue is related to the core interface and implementation) on Oct 29, 2025

@gemini-code-assist (bot) left a comment


Code Review

This pull request addresses a critical bug where instructions were incorrectly inserted between a tool call and its corresponding tool response, which violates the requirements for some LLM providers like Anthropic. The fix introduces a new helper function, is_valid_instruction_position, which correctly identifies valid insertion points by ensuring instructions are not placed after user messages or tool call requests. This prevents breaking the required adjacency of tool use and tool result blocks. The changes are well-implemented and are accompanied by a comprehensive set of new unit tests that validate the fix across various scenarios, including those with tool calls, multiple user messages, and empty content lists. The solution is robust and effectively resolves the described issue.
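
In the same spirit, here is a hedged sketch of how such scenarios might be exercised with pytest. It targets the hypothetical is_valid_instruction_position sketch shown earlier (imported here from an assumed local module named instruction_utils), not the actual test suite added in this pull request.

```python
# Illustrative pytest cases only; `instruction_utils` is an assumed module
# containing the is_valid_instruction_position sketch shown above.
from types import SimpleNamespace

import pytest

from instruction_utils import is_valid_instruction_position


def _msg(role, function_call=None, text="hi"):
    # Build a minimal stand-in message with a single part.
    return SimpleNamespace(role=role,
                           parts=[SimpleNamespace(text=text, function_call=function_call)])


@pytest.mark.parametrize(
    "history, index, expected",
    [
        ([], 0, True),                                                 # empty content list
        ([_msg("user")], 1, False),                                    # right after a user message
        ([_msg("user"), _msg("model", {"name": "tool"})], 2, False),   # right after a tool call
        ([_msg("user"), _msg("model")], 2, True),                      # after a plain model reply
    ],
)
def test_is_valid_instruction_position(history, index, expected):
    assert is_valid_instruction_position(history, index) is expected
```
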

@Jacksunwei added this to the 10/27 ADK Week milestone on Oct 30, 2025
@GWeale (Collaborator) commented Oct 30, 2025

Thank you for the fix; we had a previous fix for the same issue!

@GWeale closed this on Oct 30, 2025

Labels

core [Component] This issue is related to the core interface and implementation


Development

Successfully merging this pull request may close these issues.

Mixing static_instruction and instruction prevents tool use in Anthropic-family LLMs.

5 participants