Description
Describe the bug
At present, when multiple function responses are provided for the same function call in different events, AND those responses occur in parallel with responses for other function calls that do not correlate, the event re-writing logic can incorrectly include too many function responses, resulting in an error like:
{
  "error": {
    "code": 400,
    "message": "Please ensure that the number of function response parts is equal to the number of function call parts of the function call turn.",
    "status": "INVALID_ARGUMENT"
  }
}
To Reproduce
Steps to reproduce the behavior:
- Trigger a tool call for an async tool, or a tool that requires approval
- In another turn, trigger another tool call for an async tool, or a tool that requires approval
- Provide auth for both of the calls in one turn, resulting in new responses for both calls
The current implementation will include only one of the function calls in the rewritten contents, but since the matching event contains two responses and events are handled atomically, both responses end up in the same content.
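For illustration, the rewritten request ends up shaped roughly like the sketch below - one function call part in the model turn but two function response parts in the following turn. This is a minimal mock-up using the google-genai types; the tool names, arguments, and response payloads are invented, not taken from a real trace, and the real rewriting output will differ in detail.

```python
from google.genai import types

# Hypothetical shape of the contents produced in this scenario: the model
# turn has been reduced to a single function call, but the next turn still
# carries BOTH function responses because they arrived in one event.
contents = [
    types.Content(
        role="model",
        parts=[
            types.Part.from_function_call(
                name="approve_expense", args={"request_id": "a"}
            ),
        ],
    ),
    types.Content(
        role="user",
        parts=[
            # Response matching the call above.
            types.Part.from_function_response(
                name="approve_expense", response={"status": "approved"}
            ),
            # Stray response for a call from an earlier turn that is no
            # longer present, which triggers the 400 INVALID_ARGUMENT
            # shown above.
            types.Part.from_function_response(
                name="approve_refund", response={"status": "approved"}
            ),
        ],
    ),
]
```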
Expected behavior
The ideal solution to this scenario is somewhat complex and not entirely clear, but a few things should hold:
1. When processing the events into contents, the result should always be a valid model input.
2. If multiple function CALLS are present with the same ID, take the most recent as the only call.
3. If multiple function RESPONSES are present with the same ID, take the most recent as the only response.
4. If calls are present in parallel, moving one of them to satisfy the above should not automatically move the others.
5. If responses are present in parallel, moving one of them to satisfy the above should not automatically move the others.
6. If multiple calls would otherwise occur in sequence, they should be collected into a single parallel call.
7. If multiple responses would otherwise occur in sequence, they should be collected into a single parallel response.
Note: points 6 and 7 seem to suggest that models MUST support parallel function calling, and that this would break with models that don't. That is not in fact a requirement: the only way for the re-writing rules above to bring multiple responses together is if a series of function responses were ordered together with no model parts interleaved - i.e. parallel function responses must already be present for this to happen. Regarding parallel calls, since the above logic always brings a matching number of calls and responses into the preceding turn, parallel function calling requires parallel responses to already exist. The remaining case for parallel calls is unmatched calls, which should only occur naturally at the end of a conversation, and there they would also need to already be present to be returned as such.
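To make the de-duplication part of these rules (points 2 to 5) concrete, here is a rough, hypothetical sketch of such a pass. It operates on a simplified view of the history - plain dicts keyed by function-call ID rather than real ADK event objects - it does not attempt points 6 and 7 (collecting sequences into parallel groups), and it is not the solution proposed in #1582; it only illustrates the intended behaviour.

```python
from typing import Any

# Each event is reduced to two dicts keyed by function_call id: the call
# parts and the response parts it carries. Real ADK events hold genai
# Parts; plain dicts keep the sketch self-contained.
Event = dict[str, dict[str, Any]]


def dedupe_function_parts(events: list[Event]) -> list[Event]:
    """Keep only the most recent call/response per id (points 2 and 3).

    Parts that merely share an event with a dropped duplicate stay where
    they are (points 4 and 5): only the duplicate itself is removed.
    """
    last_call: dict[str, int] = {}
    last_response: dict[str, int] = {}
    for i, event in enumerate(events):
        for call_id in event.get("calls", {}):
            last_call[call_id] = i
        for call_id in event.get("responses", {}):
            last_response[call_id] = i

    rewritten: list[Event] = []
    for i, event in enumerate(events):
        kept: Event = {
            "calls": {
                cid: part
                for cid, part in event.get("calls", {}).items()
                if last_call[cid] == i
            },
            "responses": {
                cid: part
                for cid, part in event.get("responses", {}).items()
                if last_response[cid] == i
            },
        }
        if kept["calls"] or kept["responses"]:
            rewritten.append(kept)
    return rewritten


# A variant of the scenario from "To Reproduce": two approval calls in
# separate turns, an interim response for the first, then one event with
# final responses for both. Only the latest response for call_1 is kept,
# and call_2's response is left untouched even though it shares that event.
events = [
    {"calls": {"call_1": "approve_expense"}, "responses": {}},
    {"calls": {}, "responses": {"call_1": "pending"}},
    {"calls": {"call_2": "approve_refund"}, "responses": {}},
    {"calls": {}, "responses": {"call_1": "approved", "call_2": "approved"}},
]
print(dedupe_function_parts(events))
```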
Desktop (please complete the following information):
- OS: macOS
- Python version (python -V): 3.10
- ADK version (pip show google-adk): v1.4.2
Proposed solution
I have captured this in detail in #1582, along with a solution that covers all the scenarios I am aware of - but that may not be the core library maintainers' intended direction, so I am creating this issue as a place for more open discussion of solutions.