diff --git a/instrumentation/opentelemetry-instrumentation-openai/LICENSE b/instrumentation/opentelemetry-instrumentation-openai/LICENSE new file mode 100644 index 0000000000..261eeb9e9f --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. 
Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative 
Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/instrumentation/opentelemetry-instrumentation-openai/README.rst b/instrumentation/opentelemetry-instrumentation-openai/README.rst new file mode 100644 index 0000000000..7cabed5b16 --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/README.rst @@ -0,0 +1,26 @@ +OpenTelemetry OpenAI Instrumentation +==================================== + +|pypi| + +.. |pypi| image:: https://badge.fury.io/py/opentelemetry-instrumentation-openai.svg + :target: https://pypi.org/project/opentelemetry-instrumentation-openai/ + +This package provides instrumentation for the openai library and can be +enabled through ``trace_integration`` using the name 'OpenAI'. 
+ + +Installation +------------ + +:: + + pip install opentelemetry-instrumentation-openai + + +References +---------- +* `OpenTelemetry OpenAI Instrumentation `_ +* `OpenTelemetry Project `_ +* `OpenTelemetry Python Examples `_ + diff --git a/instrumentation/opentelemetry-instrumentation-openai/pyproject.toml b/instrumentation/opentelemetry-instrumentation-openai/pyproject.toml new file mode 100644 index 0000000000..bafb620577 --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/pyproject.toml @@ -0,0 +1,53 @@ +[build-system] +requires = ["hatchling"] +build-backend = "hatchling.build" + +[project] +name = "opentelemetry-instrumentation-openai" +dynamic = ["version"] +description = "OpenTelemetry OpenAI instrumentation" +readme = "README.rst" +license = "Apache-2.0" +requires-python = ">=3.8" +authors = [ + { name = "OpenTelemetry Authors", email = "cncf-opentelemetry-contributors@lists.cncf.io" }, +] +classifiers = [ + "Development Status :: 4 - Beta", + "Intended Audience :: Developers", + "License :: OSI Approved :: Apache Software License", + "Programming Language :: Python", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.8", + "Programming Language :: Python :: 3.9", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", +] +dependencies = [ + "opentelemetry-api ~= 1.12", + "opentelemetry-instrumentation == 0.48b0.dev", +] + +[project.optional-dependencies] +instruments = [ + "openai ~= 1.37.1", +] + +[project.entry-points.opentelemetry_instrumentor] +openai = "opentelemetry.instrumentation.openai:OpenAIInstrumentor" + +[project.urls] +Homepage = "https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-openai" + +[tool.hatch.version] +path = "src/opentelemetry/instrumentation/openai/version.py" + +[tool.hatch.build.targets.sdist] +include = [ + "/src", + "/tests", +] + 
+[tool.hatch.build.targets.wheel] +packages = ["src/opentelemetry"] diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/__init__.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/__init__.py new file mode 100644 index 0000000000..fe7d433ac1 --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/__init__.py @@ -0,0 +1,74 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +OpenAI client instrumentation supporting the `openai`_ library. It can be +enabled by using ``OpenAIInstrumentor``. + +.. _openai: https://pypi.org/project/openai/ + +Usage +----- + +.. 
code:: python + + from openai import OpenAI + from opentelemetry.instrumentation.openai import OpenAIInstrumentor + + OpenAIInstrumentor().instrument() + + client = OpenAI() + response = client.chat.completions.create( + model="gpt-4o-mini", + messages=[ + {"role": "user", "content": "Write a short poem on open telemetry."}, + ], + ) + +API +--- +""" + +import importlib.metadata +from typing import Collection + +from opentelemetry.instrumentation.instrumentor import BaseInstrumentor +from opentelemetry.instrumentation.openai.package import _instruments +from opentelemetry.trace import get_tracer +from wrapt import wrap_function_wrapper +from opentelemetry.instrumentation.openai.patch import ( + chat_completions_create, +) + + +class OpenAIInstrumentor(BaseInstrumentor): + + def instrumentation_dependencies(self) -> Collection[str]: + return _instruments + + def _instrument(self, **kwargs): + """Enable OpenAI instrumentation. + """ + tracer_provider = kwargs.get("tracer_provider") + tracer = get_tracer(__name__, "", tracer_provider) + version = importlib.metadata.version("openai") + + wrap_function_wrapper( + "openai.resources.chat.completions", + "Completions.create", + chat_completions_create("openai.chat.completions.create", version, tracer), + ) + + def _uninstrument(self, **kwargs): + pass diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/package.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/package.py new file mode 100644 index 0000000000..9dd45c3b43 --- --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/package.py @@ -0,0 +1,16 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +_instruments = ("openai ~= 1.37.1",) diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/patch.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/patch.py new file mode 100644 index 0000000000..7e57aea25f --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/patch.py @@ -0,0 +1,438 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import json +from opentelemetry import trace +from opentelemetry.trace import SpanKind, Span +from opentelemetry.trace.status import Status, StatusCode +from opentelemetry.trace.propagation import set_span_in_context +from openai._types import NOT_GIVEN +from .span_attributes import SpanAttributes, LLMSpanAttributes, Event +from .utils import estimate_tokens, silently_fail, extract_content, calculate_prompt_tokens + + +def chat_completions_create(original_method, version, tracer): + """Wrap the `create` method of the `ChatCompletion` class to trace it.""" + + def traced_method(wrapped, instance, args, kwargs): + llm_prompts = [] + for item in kwargs.get("messages", []): + tools = get_tool_calls(item) + if tools is not None: + tool_calls = [] + for tool_call in tools: + tool_call_dict = { + "id": tool_call.id if hasattr(tool_call, "id") else "", + "type": tool_call.type if hasattr(tool_call, "type") else "", + } + if hasattr(tool_call, "function"): + tool_call_dict["function"] = { + "name": ( + tool_call.function.name + if hasattr(tool_call.function, "name") + else "" + ), + "arguments": ( + tool_call.function.arguments + if hasattr(tool_call.function, "arguments") + else "" + ), + } + tool_calls.append(tool_call_dict) + llm_prompts.append(tool_calls) + else: + llm_prompts.append(item) + + span_attributes = { + **get_llm_request_attributes(kwargs, prompts=llm_prompts), + } + + attributes = LLMSpanAttributes(**span_attributes) + + span = tracer.start_span( + "openai.completion", + kind=SpanKind.CLIENT, + context=set_span_in_context(trace.get_current_span()), + ) + _set_input_attributes(span, kwargs, attributes) + + try: + result = wrapped(*args, **kwargs) + if is_streaming(kwargs): + prompt_tokens = 0 + for message in kwargs.get("messages", []): + prompt_tokens += calculate_prompt_tokens( + json.dumps(str(message)), kwargs.get("model") + ) + + if ( + kwargs.get("functions") is not None + and kwargs.get("functions") != NOT_GIVEN + ): + for function in 
kwargs.get("functions"): + prompt_tokens += calculate_prompt_tokens( + json.dumps(function), kwargs.get("model") + ) + + return StreamWrapper( + result, + span, + prompt_tokens, + function_call=kwargs.get("functions") is not None, + tool_calls=kwargs.get("tools") is not None, + ) + else: + _set_response_attributes(span, kwargs, result) + span.set_status(StatusCode.OK) + span.end() + return result + + except Exception as error: + span.record_exception(error) + span.set_status(Status(StatusCode.ERROR, str(error))) + span.end() + raise + + return traced_method + + +def get_tool_calls(item): + if isinstance(item, dict): + if "tool_calls" in item and item["tool_calls"] is not None: + return item["tool_calls"] + return None + + else: + if hasattr(item, "tool_calls") and item.tool_calls is not None: + return item.tool_calls + return None + + +@silently_fail +def _set_input_attributes(span, kwargs, attributes): + tools = [] + for field, value in attributes.model_dump(by_alias=True).items(): + set_span_attribute(span, field, value) + + if kwargs.get("functions") is not None and kwargs.get("functions") != NOT_GIVEN: + for function in kwargs.get("functions"): + tools.append(json.dumps({"type": "function", "function": function})) + + if kwargs.get("tools") is not None and kwargs.get("tools") != NOT_GIVEN: + tools.append(json.dumps(kwargs.get("tools"))) + + if tools: + set_span_attribute(span, SpanAttributes.LLM_TOOLS, json.dumps(tools)) + + +@silently_fail +def _set_response_attributes(span, kwargs, result): + set_span_attribute(span, SpanAttributes.LLM_RESPONSE_MODEL, result.model) + if hasattr(result, "choices") and result.choices is not None: + responses = [ + { + "role": ( + choice.message.role + if choice.message and choice.message.role + else "assistant" + ), + "content": extract_content(choice), + **( + {"content_filter_results": choice["content_filter_results"]} + if "content_filter_results" in choice + else {} + ), + } + for choice in result.choices + ] + 
set_event_completion(span, responses) + + if ( + hasattr(result, "system_fingerprint") + and result.system_fingerprint is not None + and result.system_fingerprint != NOT_GIVEN + ): + set_span_attribute( + span, + SpanAttributes.LLM_SYSTEM_FINGERPRINT, + result.system_fingerprint, + ) + # Get the usage + if hasattr(result, "usage") and result.usage is not None: + usage = result.usage + if usage is not None: + set_span_attribute( + span, + SpanAttributes.LLM_USAGE_PROMPT_TOKENS, + result.usage.prompt_tokens, + ) + set_span_attribute( + span, + SpanAttributes.LLM_USAGE_COMPLETION_TOKENS, + result.usage.completion_tokens, + ) + set_span_attribute( + span, + SpanAttributes.LLM_USAGE_TOTAL_TOKENS, + result.usage.total_tokens, + ) + + +def set_event_prompt(span: Span, prompt): + span.add_event( + name=SpanAttributes.LLM_CONTENT_PROMPT, + attributes={ + SpanAttributes.LLM_PROMPTS: prompt, + }, + ) + + +def set_span_attributes(span: Span, attributes: dict): + for field, value in attributes.model_dump(by_alias=True).items(): + set_span_attribute(span, field, value) + + +def set_event_completion(span: Span, result_content): + span.add_event( + name=SpanAttributes.LLM_CONTENT_COMPLETION, + attributes={ + SpanAttributes.LLM_COMPLETIONS: json.dumps(result_content), + }, + ) + + +def set_event_completion_chunk(span: Span, chunk): + span.add_event( + name=SpanAttributes.LLM_CONTENT_COMPLETION_CHUNK, + attributes={ + SpanAttributes.LLM_CONTENT_COMPLETION_CHUNK: json.dumps(chunk), + }, + ) + + +def set_span_attribute(span: Span, name, value): + if value is not None: + if value != "" and value != NOT_GIVEN: + if name == SpanAttributes.LLM_PROMPTS: + set_event_prompt(span, value) + else: + span.set_attribute(name, value) + return + + +def is_streaming(kwargs): + return not ( + kwargs.get("stream") is False + or kwargs.get("stream") is None + or kwargs.get("stream") == NOT_GIVEN + ) + + +def get_llm_request_attributes(kwargs, prompts=None, model=None, operation_name="chat"): + + user = 
kwargs.get("user", None) + if prompts is None: + prompts = ( + [{"role": user or "user", "content": kwargs.get("prompt")}] + if "prompt" in kwargs + else None + ) + top_k = ( + kwargs.get("n", None) + or kwargs.get("k", None) + or kwargs.get("top_k", None) + or kwargs.get("top_n", None) + ) + + top_p = kwargs.get("p", None) or kwargs.get("top_p", None) + tools = kwargs.get("tools", None) + return { + SpanAttributes.LLM_OPERATION_NAME: operation_name, + SpanAttributes.LLM_REQUEST_MODEL: model or kwargs.get("model"), + SpanAttributes.LLM_IS_STREAMING: kwargs.get("stream"), + SpanAttributes.LLM_REQUEST_TEMPERATURE: kwargs.get("temperature"), + SpanAttributes.LLM_TOP_K: top_k, + SpanAttributes.LLM_PROMPTS: json.dumps(prompts) if prompts else None, + SpanAttributes.LLM_USER: user, + SpanAttributes.LLM_REQUEST_TOP_P: top_p, + SpanAttributes.LLM_REQUEST_MAX_TOKENS: kwargs.get("max_tokens"), + SpanAttributes.LLM_SYSTEM_FINGERPRINT: kwargs.get("system_fingerprint"), + SpanAttributes.LLM_PRESENCE_PENALTY: kwargs.get("presence_penalty"), + SpanAttributes.LLM_FREQUENCY_PENALTY: kwargs.get("frequency_penalty"), + SpanAttributes.LLM_REQUEST_SEED: kwargs.get("seed"), + SpanAttributes.LLM_TOOLS: json.dumps(tools) if tools else None, + SpanAttributes.LLM_TOOL_CHOICE: kwargs.get("tool_choice"), + SpanAttributes.LLM_REQUEST_LOGPROPS: kwargs.get("logprobs"), + SpanAttributes.LLM_REQUEST_LOGITBIAS: kwargs.get("logit_bias"), + SpanAttributes.LLM_REQUEST_TOP_LOGPROPS: kwargs.get("top_logprobs"), + } + + +class StreamWrapper: + span: Span + + def __init__( + self, stream, span, prompt_tokens, function_call=False, tool_calls=False + ): + self.stream = stream + self.span = span + self.prompt_tokens = prompt_tokens + self.function_call = function_call + self.tool_calls = tool_calls + self.result_content = [] + self.completion_tokens = 0 + self._span_started = False + self.setup() + + def setup(self): + if not self._span_started: + self.span.add_event(Event.STREAM_START.value) + 
self._span_started = True + + def cleanup(self): + if self._span_started: + self.span.add_event(Event.STREAM_END.value) + set_span_attribute( + self.span, + SpanAttributes.LLM_USAGE_PROMPT_TOKENS, + self.prompt_tokens, + ) + set_span_attribute( + self.span, + SpanAttributes.LLM_USAGE_COMPLETION_TOKENS, + self.completion_tokens, + ) + set_span_attribute( + self.span, + SpanAttributes.LLM_USAGE_TOTAL_TOKENS, + self.prompt_tokens + self.completion_tokens, + ) + set_event_completion( + self.span, + [ + { + "role": "assistant", + "content": "".join(self.result_content), + } + ], + ) + + self.span.set_status(StatusCode.OK) + self.span.end() + self._span_started = False + + def __enter__(self): + self.setup() + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.cleanup() + + async def __aenter__(self): + self.setup() + return self + + async def __aexit__(self, exc_type, exc_val, exc_tb): + self.cleanup() + + def __iter__(self): + return self + + def __next__(self): + try: + chunk = next(self.stream) + self.process_chunk(chunk) + return chunk + except StopIteration: + self.cleanup() + raise + + def __aiter__(self): + return self + + async def __anext__(self): + try: + chunk = await self.stream.__anext__() + self.process_chunk(chunk) + return chunk + except StopAsyncIteration: + self.cleanup() + raise StopAsyncIteration + + def process_chunk(self, chunk): + if hasattr(chunk, "model") and chunk.model is not None: + set_span_attribute( + self.span, + SpanAttributes.LLM_RESPONSE_MODEL, + chunk.model, + ) + + if hasattr(chunk, "choices") and chunk.choices is not None: + content = [] + if not self.function_call and not self.tool_calls: + for choice in chunk.choices: + if choice.delta and choice.delta.content is not None: + token_counts = estimate_tokens(choice.delta.content) + self.completion_tokens += token_counts + content = [choice.delta.content] + elif self.function_call: + for choice in chunk.choices: + if ( + choice.delta + and choice.delta.function_call 
is not None + and choice.delta.function_call.arguments is not None + ): + token_counts = estimate_tokens( + choice.delta.function_call.arguments + ) + self.completion_tokens += token_counts + content = [choice.delta.function_call.arguments] + elif self.tool_calls: + for choice in chunk.choices: + if choice.delta and choice.delta.tool_calls is not None: + toolcalls = choice.delta.tool_calls + content = [] + for tool_call in toolcalls: + if ( + tool_call + and tool_call.function is not None + and tool_call.function.arguments is not None + ): + token_counts = estimate_tokens( + tool_call.function.arguments + ) + self.completion_tokens += token_counts + content.append(tool_call.function.arguments) + set_event_completion_chunk( + self.span, + "".join(content) if len(content) > 0 and content[0] is not None else "", + ) + if content: + self.result_content.append(content[0]) + + if hasattr(chunk, "text"): + token_counts = estimate_tokens(chunk.text) + self.completion_tokens += token_counts + content = [chunk.text] + set_event_completion_chunk( + self.span, + "".join(content) if len(content) > 0 and content[0] is not None else "", + ) + + if content: + self.result_content.append(content[0]) + + if hasattr(chunk, "usage_metadata"): + self.completion_tokens = chunk.usage_metadata.candidates_token_count + self.prompt_tokens = chunk.usage_metadata.prompt_token_count diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/span_attributes.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/span_attributes.py new file mode 100644 index 0000000000..d4c7d70ec4 --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/span_attributes.py @@ -0,0 +1,212 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import annotations +from enum import Enum +from typing import List, Optional +from pydantic import BaseModel, ConfigDict, Field + + +class SpanAttributes: + LLM_SYSTEM = "gen_ai.system" + LLM_OPERATION_NAME = "gen_ai.operation.name" + LLM_REQUEST_MODEL = "gen_ai.request.model" + LLM_REQUEST_MAX_TOKENS = "gen_ai.request.max_tokens" + LLM_REQUEST_TEMPERATURE = "gen_ai.request.temperature" + LLM_REQUEST_TOP_P = "gen_ai.request.top_p" + LLM_SYSTEM_FINGERPRINT = "gen_ai.system_fingerprint" + LLM_REQUEST_DOCUMENTS = "gen_ai.request.documents" + LLM_REQUEST_SEARCH_REQUIRED = "gen_ai.request.is_search_required" + LLM_PROMPTS = "gen_ai.prompt" + LLM_CONTENT_PROMPT = "gen_ai.content.prompt" + LLM_COMPLETIONS = "gen_ai.completion" + LLM_CONTENT_COMPLETION = "gen_ai.content.completion" + LLM_RESPONSE_MODEL = "gen_ai.response.model" + LLM_USAGE_COMPLETION_TOKENS = "gen_ai.usage.output_tokens" + LLM_USAGE_PROMPT_TOKENS = "gen_ai.usage.input_tokens" + LLM_USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens" + LLM_USAGE_TOKEN_TYPE = "gen_ai.usage.token_type" + LLM_USAGE_SEARCH_UNITS = "gen_ai.usage.search_units" + LLM_GENERATION_ID = "gen_ai.generation_id" + LLM_TOKEN_TYPE = "gen_ai.token.type" + LLM_RESPONSE_ID = "gen_ai.response_id" + LLM_URL = "url.full" + LLM_PATH = "url.path" + LLM_RESPONSE_FORMAT = "gen_ai.response.format" + LLM_IMAGE_SIZE = "gen_ai.image.size" + LLM_REQUEST_ENCODING_FORMATS = "gen_ai.request.encoding_formats" + LLM_REQUEST_DIMENSIONS = "gen_ai.request.dimensions" + LLM_REQUEST_SEED = "gen_ai.request.seed" + LLM_REQUEST_TOP_LOGPROBS 
= "gen_ai.request.top_logprobs" + LLM_REQUEST_LOGPROBS = "gen_ai.request.logprobs" + LLM_REQUEST_LOGITBIAS = "gen_ai.request.logit_bias" + LLM_REQUEST_TYPE = "gen_ai.request.type" + LLM_HEADERS = "gen_ai.headers" + LLM_USER = "gen_ai.user" + LLM_TOOLS = "gen_ai.request.tools" + LLM_TOOL_CHOICE = "gen_ai.request.tool_choice" + LLM_TOOL_RESULTS = "gen_ai.request.tool_results" + LLM_TOP_K = "gen_ai.request.top_k" + LLM_IS_STREAMING = "gen_ai.request.stream" + LLM_FREQUENCY_PENALTY = "gen_ai.request.frequency_penalty" + LLM_PRESENCE_PENALTY = "gen_ai.request.presence_penalty" + LLM_CHAT_STOP_SEQUENCES = "gen_ai.chat.stop_sequences" + LLM_REQUEST_FUNCTIONS = "gen_ai.request.functions" + LLM_REQUEST_REPETITION_PENALTY = "gen_ai.request.repetition_penalty" + LLM_RESPONSE_FINISH_REASON = "gen_ai.response.finish_reasons" + LLM_RESPONSE_STOP_REASON = "gen_ai.response.stop_reason" + LLM_CONTENT_COMPLETION_CHUNK = "gen_ai.completion.chunk" + + +class Event(Enum): + STREAM_START = "stream.start" + STREAM_OUTPUT = "stream.output" + STREAM_END = "stream.end" + RESPONSE = "response" + + +class LLMSpanAttributes(BaseModel): + model_config = ConfigDict(extra="allow") + gen_ai_operation_name: str = Field( + ..., + alias='gen_ai.operation.name', + description='The name of the operation being performed.', + ) + gen_ai_request_model: str = Field( + ..., + alias='gen_ai.request.model', + description='Model name from the input request', + ) + gen_ai_response_model: Optional[str] = Field( + None, alias='gen_ai.response.model', description='Model name from the response' + ) + gen_ai_request_temperature: Optional[float] = Field( + None, + alias='gen_ai.request.temperature', + description='Temperature value from the input request', + ) + gen_ai_request_logit_bias: Optional[str] = Field( + None, + alias='gen_ai.request.logit_bias', + description='Likelihood bias of the specified tokens in the input request.', + ) + gen_ai_request_logprobs: Optional[bool] = Field( + None, + 
alias='gen_ai.request.logprobs', + description='Whether to return log probabilities of the output tokens.', + ) + gen_ai_request_top_logprobs: Optional[int] = Field( + None, + alias='gen_ai.request.top_logprobs', + description='Integer between 0 and 5 specifying the number of most likely tokens to return.', + ) + gen_ai_request_top_p: Optional[float] = Field( + None, + alias='gen_ai.request.top_p', + description='Top P value from the input request', + ) + gen_ai_request_top_k: Optional[float] = Field( + None, + alias='gen_ai.request.top_k', + description='Top K results to return from the input request', + ) + gen_ai_user: Optional[str] = Field( + None, alias='gen_ai.user', description='User ID from the input request' + ) + gen_ai_prompt: Optional[str] = Field( + None, alias='gen_ai.prompt', description='Prompt text from the input request' + ) + gen_ai_completion: Optional[str] = Field( + None, + alias='gen_ai.completion', + description='Completion text from the response. This will be an array of json objects with the following format {"role": "", "content": ""}. Role can be one of the following values: [system, user, assistant, tool]', + ) + gen_ai_request_stream: Optional[bool] = Field( + None, + alias='gen_ai.request.stream', + description='Stream flag from the input request', + ) + gen_ai_request_encoding_formats: Optional[List[str]] = Field( + None, + alias='gen_ai.request.encoding_formats', + description="Encoding formats from the input request. 
Allowed values: ['float', 'int8','uint8', 'binary', 'ubinary', 'base64']", + ) + gen_ai_completion_chunk: Optional[str] = Field( + None, + alias='gen_ai.completion.chunk', + description='Chunk text from the response', + ) + gen_ai_response_finish_reasons: Optional[List[str]] = Field( + None, + alias='gen_ai.response.finish_reasons', + description='Array of reasons the model stopped generating tokens, corresponding to each generation received', + ) + gen_ai_system_fingerprint: Optional[str] = Field( + None, + alias='gen_ai.system_fingerprint', + description='System fingerprint of the system that generated the response', + ) + gen_ai_request_tool_choice: Optional[str] = Field( + None, + alias='gen_ai.request.tool_choice', + description='Tool choice from the input request', + ) + gen_ai_response_tool_calls: Optional[str] = Field( + None, + alias='gen_ai.response.tool_calls', + description='Array of tool calls from the response json stringified', + ) + gen_ai_request_max_tokens: Optional[float] = Field( + None, + alias='gen_ai.request.max_tokens', + description='The maximum number of tokens the LLM generates for a request.', + ) + gen_ai_usage_input_tokens: Optional[float] = Field( + None, + alias='gen_ai.usage.input_tokens', + description='The number of tokens used in the llm prompt.', + ) + gen_ai_usage_total_tokens: Optional[float] = Field( + None, + alias='gen_ai.usage.total_tokens', + description='The total number of tokens used in the llm request.', + ) + gen_ai_usage_output_tokens: Optional[float] = Field( + None, + alias='gen_ai.usage.output_tokens', + description='The number of tokens in the llm response.', + ) + gen_ai_request_seed: Optional[str] = Field( + None, alias='gen_ai.request.seed', description='Seed from the input request' + ) + gen_ai_request_frequency_penalty: Optional[float] = Field( + None, + alias='gen_ai.request.frequency_penalty', + description='Frequency penalty from the input request', + ) + gen_ai_request_presence_penalty: Optional[float] 
= Field( + None, + alias='gen_ai.request.presence_penalty', + description='Presence penalty from the input request', + ) + gen_ai_request_tools: Optional[str] = Field( + None, + alias='gen_ai.request.tools', + description='An array of tools from the input request json stringified', + ) + gen_ai_request_tool_results: Optional[str] = Field( + None, + alias='gen_ai.request.tool_results', + description='An array of tool results from the input request json stringified', + ) diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/utils.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/utils.py new file mode 100644 index 0000000000..1a824d463a --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/utils.py @@ -0,0 +1,117 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import logging +from tiktoken import get_encoding + + +def estimate_tokens_using_tiktoken(prompt, model): + """ + Estimate the number of tokens in a prompt using tiktoken.""" + encoding = get_encoding(model) + tokens = encoding.encode(prompt) + return len(tokens) + + +def estimate_tokens(prompt): + """ + Estimate the number of tokens in a prompt.""" + if prompt and len(prompt) > 0: + # Simplified token estimation: count the words. 
+ return len([word for word in prompt.split() if word]) + return 0 + + +TIKTOKEN_MODEL_MAPPING = { + "gpt-4": "cl100k_base", + "gpt-4-32k": "cl100k_base", + "gpt-4-0125-preview": "cl100k_base", + "gpt-4-1106-preview": "cl100k_base", + "gpt-4-1106-vision-preview": "cl100k_base", + "gpt-4o": "o200k_base", + "gpt-4o-mini": "o200k_base", +} + + +def silently_fail(func): + """ + A decorator that catches exceptions thrown by the decorated function and logs them as warnings. + """ + + logger = logging.getLogger(func.__module__) + + def wrapper(*args, **kwargs): + try: + return func(*args, **kwargs) + except Exception as exception: + logger.warning( + "Failed to execute %s, error: %s", func.__name__, str(exception) + ) + + return wrapper + + +def extract_content(choice): + # Check if choice.message exists and has a content attribute + if ( + hasattr(choice, "message") + and hasattr(choice.message, "content") + and choice.message.content is not None + ): + return choice.message.content + + # Check if choice.message has tool_calls and extract information accordingly + elif ( + hasattr(choice, "message") + and hasattr(choice.message, "tool_calls") + and choice.message.tool_calls is not None + ): + result = [ + { + "id": tool_call.id, + "type": tool_call.type, + "function": { + "name": tool_call.function.name, + "arguments": tool_call.function.arguments, + }, + } + for tool_call in choice.message.tool_calls + ] + return result + + # Check if choice.message has a function_call and extract information accordingly + elif ( + hasattr(choice, "message") + and hasattr(choice.message, "function_call") + and choice.message.function_call is not None + ): + return { + "name": choice.message.function_call.name, + "arguments": choice.message.function_call.arguments, + } + + # Return an empty string if none of the above conditions are met + else: + return "" + + +def calculate_prompt_tokens(prompt_content, model): + """ + Calculate the number of tokens in a prompt. 
If the model is supported by tiktoken, use it for the estimation. + """ + try: + tiktoken_model = TIKTOKEN_MODEL_MAPPING[model] + return estimate_tokens_using_tiktoken(prompt_content, tiktoken_model) + except Exception: + return estimate_tokens(prompt_content) # Fallback method diff --git a/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/version.py b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/version.py new file mode 100644 index 0000000000..021133daf1 --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/src/opentelemetry/instrumentation/openai/version.py @@ -0,0 +1,15 @@ +# Copyright The OpenTelemetry Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +__version__ = "0.0.1dev" diff --git a/instrumentation/opentelemetry-instrumentation-openai/test-requirements.txt b/instrumentation/opentelemetry-instrumentation-openai/test-requirements.txt new file mode 100644 index 0000000000..6b3d9ef39d --- /dev/null +++ b/instrumentation/opentelemetry-instrumentation-openai/test-requirements.txt @@ -0,0 +1,8 @@ +openai==1.37.1 +Deprecated==1.2.14 +importlib-metadata==6.11.0 +packaging==24.0 +pytest==7.4.4 +wrapt==1.16.0 +# -e opentelemetry-instrumentation +# -e instrumentation/opentelemetry-instrumentation-openai diff --git a/instrumentation/opentelemetry-instrumentation-openai/tests/__init__.py b/instrumentation/opentelemetry-instrumentation-openai/tests/__init__.py new file mode 100644 index 0000000000..e69de29bb2
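Reviewer note: the two pure-Python helpers in `utils.py` are easy to sanity-check without installing the package or tiktoken. The sketch below re-implements `estimate_tokens` (the whitespace word-count fallback) and the `silently_fail` decorator with the same bodies as the diff; nothing here imports the new package, and the `counted` / `boom` names are only for this demo.

```python
import logging


def estimate_tokens(prompt):
    """Fallback token estimate: count whitespace-separated words."""
    if prompt and len(prompt) > 0:
        return len([word for word in prompt.split() if word])
    return 0


def silently_fail(func):
    """Log exceptions raised by func as warnings instead of propagating them."""
    logger = logging.getLogger(func.__module__)

    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exception:
            logger.warning(
                "Failed to execute %s, error: %s", func.__name__, str(exception)
            )

    return wrapper


@silently_fail
def boom():
    # Instrumentation code must never break the instrumented application.
    raise ValueError("simulated instrumentation failure")


counted = estimate_tokens("Hello from   OpenTelemetry")  # 3 words
result = boom()  # warning logged; result is None because wrapper swallows the error
```

Note that because `calculate_prompt_tokens` catches every exception from the tiktoken path, a wrong encoding name in `TIKTOKEN_MODEL_MAPPING` does not raise but silently degrades to this word count, so the mapping values deserve test coverage.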