[Security solution] Fix OpenAI token reporting #169156
Conversation
Pinging @elastic/security-threat-hunting (Team:Threat Hunting)
Pinging @elastic/security-solution (Team: SecuritySolution)
Response Ops changes LGTM
💚 Build Succeeded
```ts
// if not langchain, call execute action directly and return the response:
if (!request.body.assistantLangChain) {
  const result = await executeAction({ actions, request, connectorId });
  return response.ok({
    body: result,
  });
}
```
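For context, here is a hedged sketch of what an `executeAction` helper along these lines might do. The `ActionsClient` interface and the connector response shape below are assumptions for illustration, not the actual Kibana implementation. The idea is that the helper parses the connector's `{ message, usage }` result and returns only the message string, so the endpoint keeps its plain-string response shape while the usage object stays available to the action executor for token reporting.

```typescript
// All names below are hypothetical; the real Kibana actions client differs.
interface ExecuteResult {
  status: 'ok' | 'error';
  data?: { message: string; usage?: { total_tokens: number } };
}

interface ActionsClient {
  execute(params: { actionId: string; params: unknown }): Promise<ExecuteResult>;
}

async function executeAction(
  actions: ActionsClient,
  connectorId: string,
  body: unknown
): Promise<string> {
  const result = await actions.execute({ actionId: connectorId, params: body });
  if (result.status !== 'ok' || !result.data) {
    throw new Error('Action execution failed');
  }
  // Return only the message so the endpoint's string response is unchanged;
  // the usage object travels with the connector result for token reporting.
  return result.data.message;
}

// Stubbed client demonstrating the flow.
const stub: ActionsClient = {
  async execute() {
    return {
      status: 'ok' as const,
      data: { message: 'hi', usage: { total_tokens: 3 } },
    };
  },
};

executeAction(stub, 'my-connector-id', {}).then((msg) => console.log(msg));
```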
Did `assistantLangChain` need to be pushed server side for this fix? @andrew-goldstein intentionally kept the split in logic between calling the actions framework and the new langchain implementation at the client API layer, for separation of concerns. There's no intent to continue supporting an actions 'pass through' once this new API is validated (at least AFAIK), so unless this is somehow required for the fix, I'd be hesitant to push this logic into this route.
Looks like this is going to be necessary for the upcoming streaming work, so all good! 👍
Yes, I decided to move it now because I wanted to parse the response to continue returning a string from this endpoint, and I knew I'd be moving it for streaming anyway.
Checked out, tested locally, and code reviewed. Seeing token counts for OpenAI connectors in the event log, and the langchain code paths appear to be functioning as expected -- LGTM! 👍
LGTM
💔 All backports failed
Manual backport: to create the backport manually, run:
Questions? Please refer to the Backport tool documentation.
💚 All backports created successfully
Note: successful backport PRs will be merged automatically after passing CI. Questions? Please refer to the Backport tool documentation.
(cherry picked from commit 8284398)
# Conflicts:
# x-pack/packages/kbn-elastic-assistant/impl/assistant/api.test.tsx
# Backport

This will backport the following commits from `main` to `8.11`:
- [[Security solution] Fix OpenAI token reporting (#169156)](#169156)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: kibanamachine <[email protected]>
The OpenAI token reporting was inadvertently broken for the security assistant in this PR. When I implemented the `invokeAI` method, I returned the message as a string. However, I did not account for the fact that the action executor looks for the usage object to report those metrics. Instead of just a string, `invokeAI` should return an object like `{ message: string; usage: UsageObject }`.
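A minimal sketch of the corrected return shape. The `UsageObject` field names follow the OpenAI chat-completions `usage` object, and the `parseOpenAIResponse` helper is a hypothetical name for illustration, not the actual Kibana code:

```typescript
// Hypothetical shapes for illustration; field names follow the OpenAI
// chat-completions usage object, not necessarily Kibana's own types.
interface UsageObject {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

interface InvokeAIResult {
  message: string;
  usage: UsageObject;
}

// Instead of returning the message content as a bare string, invokeAI
// should also pass the usage object through for token reporting.
function parseOpenAIResponse(raw: {
  choices: { message: { content: string } }[];
  usage: UsageObject;
}): InvokeAIResult {
  return {
    message: raw.choices[0].message.content,
    usage: raw.usage,
  };
}

const result = parseOpenAIResponse({
  choices: [{ message: { content: 'Hello!' } }],
  usage: { prompt_tokens: 10, completion_tokens: 2, total_tokens: 12 },
});
console.log(result.message, result.usage.total_tokens);
```

With this shape, the action executor can read `result.usage` to log token counts while callers still get the message text from `result.message`.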