feat: confirmToolCall support — startToolCall with requireConfirmation + ToolRunnableCallable [JAR-9208] #703

JoshParkSJ wants to merge 4 commits into main from
Conversation
```python
"""
Walks compiled graph nodes once at runtime init. This is needed because coded agents
(create_agent) export a compiled graph as the only artifact — there's no side channel
to pass confirmation metadata from the build step to the runtime.
"""
```
Not a problem for low-code agents, but we might as well unify how we get the tools requiring confirmation.
```python
# Skip the first chunk — it's already assigned as current_message above,
# so accumulating it with itself would duplicate fields via string concat
# (e.g. tool name "search_web" becomes "search_websearch_web").
if (
    isinstance(self.current_message, AIMessageChunk)
    and self.current_message is not message
):
```
Is this what causes the duplicate tool name issue? I remember @shannonsuhendra wrote that this issue wasn't happening anymore.
If this change is risky, maybe we can move it to a different PR?
Updated the comment so it's clear this is a GPT-only bug.
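The guard above can be illustrated with a minimal, dependency-free sketch. `ToolChunk` and `StreamHandler` here are hypothetical stand-ins for `AIMessageChunk` accumulation, not the actual implementation; the point is only that streaming merges concatenate string fields, so merging the first chunk with itself doubles the tool name.

```python
from dataclasses import dataclass

@dataclass
class ToolChunk:
    name: str
    args: str

    def __add__(self, other: "ToolChunk") -> "ToolChunk":
        # Streaming merge: string fields concatenate, like chunk accumulation.
        return ToolChunk(self.name + other.name, self.args + other.args)

class StreamHandler:
    def __init__(self) -> None:
        self.current_message = None

    def on_chunk(self, message: ToolChunk) -> None:
        if self.current_message is None:
            self.current_message = message
        # Identity guard: the first chunk is already current_message, so
        # merging it with itself would yield "search_websearch_web".
        if self.current_message is not message:
            self.current_message = self.current_message + message

handler = StreamHandler()
first = ToolChunk(name="search_web", args='{"q":')
handler.on_chunk(first)                           # assigned, not self-merged
handler.on_chunk(ToolChunk(name="", args='"x"}'))
print(handler.current_message.name)  # search_web (not doubled)
```

Without the `is not` check, the first call would both assign `first` and merge it into itself, producing the doubled name described in the comment.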
```python
def __init__(self, *, func: Any, afunc: Any, name: str, tool: BaseTool):
    super().__init__(func=func, afunc=afunc, name=name)
    self.tool = tool
```
This part seems a little sketchy to me in terms of typing. It would be nice if the runtime.py code had the `tool` field typed, or could at least do an `if isinstance ...` check to get the right class typing.

Some questions:

- Any way we can add the fields needed by the `tool` to some metadata within `RunnableCallable`?
- At the very least, can we avoid `Any` for `func`, `afunc` in `ConversationalToolRunnableCallable`? But if we could do the first bullet it might be even better.

Would be nice to get a look from @cristipufu when the approach is finalized as well.
Unfortunately `RunnableCallable` is a langgraph class, not ours to touch. Fixed `func`/`afunc` and added the `if isinstance ...` check.
If you think `ConversationalToolRunnableCallable` is just ugly/excessive, then we can force-assign `node.tool` like this. Not sure if that's better though:

```python
node = RunnableCallable(func=_func, afunc=_afunc, name=tool_name)
tool = getattr(tool_node, "tool", None)
if isinstance(tool, BaseTool):
    node.tool = tool  # type: ignore[attr-defined]
return node
```

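The typed-subclass approach discussed above can be sketched end to end. This is a simplified, self-contained version with stand-in classes (`FakeTool`, and a pared-down `ConversationalToolRunnableCallable`) rather than the real langchain/langgraph types; the `require_confirmation` flag and `args_schema` field are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class FakeTool:  # stand-in for langchain's BaseTool
    name: str
    args_schema: dict
    require_confirmation: bool = False

@dataclass
class ConversationalToolRunnableCallable:  # pared-down stand-in
    name: str
    tool: FakeTool

def get_tool_confirmation_info(nodes: list[Any]) -> dict[str, dict]:
    """Walk graph nodes once and collect {tool_name: input_schema}."""
    info: dict[str, dict] = {}
    for node in nodes:
        # isinstance narrowing gives a typed .tool instead of getattr
        if isinstance(node, ConversationalToolRunnableCallable):
            if node.tool.require_confirmation:
                info[node.name] = node.tool.args_schema
    return info

nodes = [
    ConversationalToolRunnableCallable(
        "search_web",
        FakeTool("search_web", {"q": "string"}, require_confirmation=True),
    ),
    object(),  # unrelated graph node, skipped by the isinstance check
]
print(get_tool_confirmation_info(nodes))  # {'search_web': {'q': 'string'}}
```

The isinstance check is what makes the typed subclass pay off: unrelated nodes fall through without any `getattr`/`type: ignore` gymnastics.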
Changes

- `confirmToolCall` on `startToolCall`: `ConversationalToolRunnableCallable` keeps a `BaseTool` reference; the typed subclass preserves `.tool` so the runtime can inspect it at init time (so we know which tools need confirmation before hitting the tool node and can pass require confirmation on `startToolCall`).
- `_get_tool_confirmation_info` walks the compiled graph once, finds `ConversationalToolRunnableCallable` nodes, and builds a `{tool_name: input_schema}` lookup. This is needed because coded agents (`create_agent`) export a compiled graph as the only artifact — there's no side channel to pass confirmation metadata from the build step to the runtime.
- On an `AIMessage` tool call, emits `startToolCall` with `requireConfirmation: true` and `inputSchema`.
- Fixed an `AIMessageChunk` self-accumulation bug that doubled tool names on the first streamed chunk (only triggered with OpenAI, not Claude); `startToolCall` is now always emitted for confirmation tools.

Companion PRs
https://uipath.atlassian.net/browse/JAR-9208
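The emission step described in Changes can be sketched as follows. Only `requireConfirmation` and `inputSchema` come from this PR's description; the event shape and the `toolName`/`id` field names are assumptions for illustration, not the actual wire format.

```python
# Lookup built once by the init-time graph walk (illustrative content).
confirmation_info = {"search_web": {"q": "string"}}

def start_tool_call_event(tool_name: str, call_id: str) -> dict:
    event = {"type": "startToolCall", "toolName": tool_name, "id": call_id}
    schema = confirmation_info.get(tool_name)
    if schema is not None:
        # Tool needs user confirmation before execution.
        event["requireConfirmation"] = True
        event["inputSchema"] = schema
    return event

print(start_tool_call_event("search_web", "call_1"))  # confirmation fields set
print(start_tool_call_event("get_time", "call_2"))    # plain startToolCall
```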