Semantic Conventions for GenAI events

Status: Experimental

GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as events.

Note: The Event API is experimental and not yet available in some languages. Check the spec-compliance matrix to see the implementation status in the corresponding language.

Instrumentations MAY capture inputs and outputs if and only if the application has enabled the collection of this data. This is for three primary reasons:

  1. Data privacy concerns. End users of GenAI applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
  2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some GenAI systems allow for extremely large context windows that end users may take full advantage of.
  3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.

Body fields that contain user input, model output, or other potentially sensitive and verbose data SHOULD NOT be captured by default.

Semantic conventions for individual systems which extend content events SHOULD document all additional body fields and specify whether they should be captured by default or require the application to opt in to capturing them.

Telemetry consumers SHOULD expect to receive unknown body fields.

Instrumentations SHOULD NOT capture undocumented body fields and MUST follow the documented defaults for known fields. Instrumentations MAY offer configuration options that allow users to disable events or to capture all fields.
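
For illustration, an instrumentation might gate content capture behind an explicit opt-in flag. The flag name and helper below are hypothetical; only the "opt-in, off by default" behavior comes from this convention. A minimal sketch in Python:

```python
import os

# Hypothetical opt-in flag; the convention only requires that sensitive body
# fields are NOT captured unless the application explicitly enables it.
CAPTURE_CONTENT = os.environ.get(
    "EXAMPLE_GENAI_CAPTURE_MESSAGE_CONTENT", "false"
).lower() == "true"

def user_message_body(content: str) -> dict:
    """Build a gen_ai.user.message body, honoring the opt-in default."""
    body = {}
    if CAPTURE_CONTENT:
        body["content"] = content  # Opt-In field
    return body
```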

Common attributes

The following attributes apply to all GenAI events.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| gen_ai.system | string | The Generative AI product as identified by the client or server instrumentation. [1] | openai | Recommended | Experimental |

[1]: The gen_ai.system attribute describes a family of GenAI models, with the specific model identified by the gen_ai.request.model and gen_ai.response.model attributes.

The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the gen_ai.system is set to openai based on the instrumentation’s best knowledge.

For a custom model, a custom friendly name SHOULD be used. If none of these options apply, the gen_ai.system SHOULD be set to _OTHER.

gen_ai.system has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| anthropic | Anthropic | Experimental |
| cohere | Cohere | Experimental |
| openai | OpenAI | Experimental |
| vertex_ai | Vertex AI | Experimental |
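
As a sketch of how an instrumentation might resolve this value, the helper below is illustrative; the set of well-known values and the fallback to _OTHER come from the rule above.

```python
# Well-known gen_ai.system values from the table above; anything else falls
# back to a custom friendly name or "_OTHER".
WELL_KNOWN_SYSTEMS = {"anthropic", "cohere", "openai", "vertex_ai"}

def resolve_gen_ai_system(client_hint: str) -> str:
    """Pick the gen_ai.system value based on the instrumentation's best knowledge."""
    value = (client_hint or "").lower()
    if value in WELL_KNOWN_SYSTEMS:
        return value
    # A custom friendly name MAY be used for custom models; "_OTHER" is the
    # final fallback when nothing applies.
    return value or "_OTHER"
```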

System event

This event describes the instructions passed to the GenAI model.

The event name MUST be gen_ai.system.message.

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| role | string | The actual role of the message author as passed in the message. | "system", "instructions" | Conditionally Required: if available and not equal to system |
| content | AnyValue | The contents of the system message. | "You're a friendly bot that answers questions about OpenTelemetry." | Opt-In |
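
A minimal sketch of assembling this event; emit_event stands in for whatever (experimental) Event API the language SDK exposes and is not a real function name.

```python
def system_message_event(content: str, role: str = "system",
                         capture_content: bool = False) -> tuple[str, dict, dict]:
    """Assemble the name, attributes, and body for a gen_ai.system.message event."""
    attributes = {"gen_ai.system": "openai"}
    body = {}
    if role != "system":   # Conditionally Required: only when not equal to "system"
        body["role"] = role
    if capture_content:    # Opt-In: off by default
        body["content"] = content
    return "gen_ai.system.message", attributes, body

# emit_event(*system_message_event(
#     "You're a friendly bot that answers questions about OpenTelemetry."))
```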

User event

This event describes the prompt message specified by the user.

The event name MUST be gen_ai.user.message.

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| role | string | The actual role of the message author as passed in the message. | "user", "customer" | Conditionally Required: if available and if not equal to user |
| content | AnyValue | The contents of the user message. | What telemetry is reported by OpenAI instrumentations? | Opt-In |

Assistant event

This event describes the assistant message.

The event name MUST be gen_ai.assistant.message.

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| role | string | The actual role of the message author as passed in the message. | "assistant", "bot" | Conditionally Required: if available and if not equal to assistant |
| content | AnyValue | The contents of the assistant message. | Spans, events, metrics defined by the GenAI semantic conventions. | Opt-In |
| tool_calls | ToolCall[] | The tool calls generated by the model, such as function calls. | [{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}}, "type":"function"}] | Conditionally Required: if available |

ToolCall object

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| id | string | The id of the tool call. | call_mszuSIzqtI65i1wAUOE8w5H4 | Required |
| type | string | The type of the tool. | function | Required |
| function | Function | The function that the model called. | {"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}} | Required |

Function object

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| name | string | The name of the function to call. | get_link_to_otel_semconv | Required |
| arguments | AnyValue | The arguments to pass to the function. | {"semconv": "gen_ai"} | Opt-In |
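
The nested ToolCall and Function objects map naturally onto plain dictionaries. The helpers below are illustrative, not part of any SDK; only the field names and requirement levels come from the tables above.

```python
def tool_call(call_id: str, name: str, arguments=None,
              capture_content: bool = False) -> dict:
    """Build one ToolCall object for the gen_ai.assistant.message body."""
    function = {"name": name}                      # Required
    if capture_content and arguments is not None:
        function["arguments"] = arguments          # Opt-In
    return {"id": call_id, "type": "function", "function": function}

def assistant_message_body(tool_calls: list[dict], content=None,
                           capture_content: bool = False) -> dict:
    """Build a gen_ai.assistant.message body."""
    body = {"tool_calls": tool_calls}              # Conditionally Required: if available
    if capture_content and content is not None:
        body["content"] = content                  # Opt-In
    return body

# assistant_message_body([tool_call("call_mszuSIzqtI65i1wAUOE8w5H4",
#                                   "get_link_to_otel_semconv",
#                                   {"semconv": "gen_ai"})])
```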

Tool event

This event describes the output of the tool or function submitted back to the model.

The event name MUST be gen_ai.tool.message.

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| role | string | The actual role of the message author as passed in the message. | "tool", "function" | Conditionally Required: if available and if not equal to tool |
| content | AnyValue | The contents of the tool message. | opentelemetry.io | Opt-In |
| id | string | Tool call that this message is responding to. | call_mszuSIzqtI65i1wAUOE8w5H4 | Required |
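
A short sketch of the corresponding body; note that the id stays in the body even when content capture is disabled. The helper name is hypothetical.

```python
def tool_message_body(call_id: str, content=None,
                      capture_content: bool = False) -> dict:
    """Build a gen_ai.tool.message body; the tool call id is Required either way."""
    body = {"id": call_id}
    if capture_content and content is not None:
        body["content"] = content  # Opt-In
    return body
```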

Choice event

This event describes an individual chat response (choice) generated by the model. If the GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.

When the response is streamed, instrumentations that report response events MUST reconstruct and report the full message and MUST NOT report individual chunks as events. If the request to the GenAI model fails with an error before content is received, the instrumentation SHOULD report an event with truncated content (if enabled). If finish_reason was not received, it MUST be set to error.

The event name MUST be gen_ai.choice.

Choice event body has the following fields:

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| finish_reason | string | The reason the model stopped generating tokens. | stop, tool_calls, content_filter | Required |
| index | int | The index of the choice in the list of choices. | 1 | Required |
| message | Message | GenAI response message. | {"content":"The OpenAI semantic conventions are available at opentelemetry.io"} | Recommended |

Message object

| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| role | string | The actual role of the message author as passed in the message. | "assistant", "bot" | Conditionally Required: if available and if not equal to assistant |
| content | AnyValue | The contents of the assistant message. | Spans, events, metrics defined by the GenAI semantic conventions. | Opt-In |
| tool_calls | ToolCall[] | The tool calls generated by the model, such as function calls. | [{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":"{\"semconv\":\"gen_ai\"}"}, "type":"function"}] | Conditionally Required: if available |
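
The streaming requirement above can be sketched as follows: chunks are accumulated into a single message, and finish_reason falls back to error if the stream fails before one is received. The chunk shape used here is an assumption for illustration only.

```python
def choice_event_body(chunks, capture_content: bool = False) -> dict:
    """Reconstruct one gen_ai.choice body from streamed chunks (illustrative chunk shape)."""
    content_parts, finish_reason, index = [], None, 0
    try:
        # Assumed chunk shape: {"index": 0, "delta": "...", "finish_reason": None}
        for chunk in chunks:
            index = chunk.get("index", index)
            if chunk.get("delta"):
                content_parts.append(chunk["delta"])
            if chunk.get("finish_reason"):
                finish_reason = chunk["finish_reason"]
    except Exception:
        # The stream failed; report whatever (truncated) content was received.
        pass
    message = {"content": "".join(content_parts)} if capture_content and content_parts else {}
    return {
        "index": index,
        # MUST be "error" if no finish_reason was received before the failure.
        "finish_reason": finish_reason or "error",
        "message": message,
    }
```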

Custom events

System-specific events that are not covered in this document SHOULD be documented in the corresponding Semantic Conventions extensions and SHOULD follow the gen_ai.{gen_ai.system}.* naming pattern.

Examples

Chat completion

This example covers the following scenario:

  • the user requests a chat completion from the OpenAI GPT-4 model for the following prompt:

    • System message: You're a friendly bot that answers questions about OpenTelemetry.
    • User message: How to instrument GenAI library with OTel?
  • The model responds with the message "Follow GenAI semantic conventions available at opentelemetry.io."

Span:

| Attribute name | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.system | "openai" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |

Events:

  1. gen_ai.system.message

     | Property | Value |
     |---|---|
     | gen_ai.system | "openai" |
     | Event body | {"content": "You're a friendly bot that answers questions about OpenTelemetry."} |

  2. gen_ai.user.message

     | Property | Value |
     |---|---|
     | gen_ai.system | "openai" |
     | Event body | {"content":"How to instrument GenAI library with OTel?"} |

  3. gen_ai.choice

     | Property | Value |
     |---|---|
     | gen_ai.system | "openai" |
     | Event body (with content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}} |
     | Event body (without content) | {"index":0,"finish_reason":"stop","message":{}} |
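
Putting this example together in code: the span attributes below use the standard OpenTelemetry tracing API, while the response dict shape is assumed and event emission is left to the (experimental) Event API rather than any concrete function.

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")  # illustrative scope name

def record_chat_completion(response: dict, capture_content: bool = False):
    """Record the span and assemble the events for the chat completion example above."""
    with tracer.start_as_current_span("chat gpt-4") as span:
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.request.model", "gpt-4")
        span.set_attribute("gen_ai.response.id", response["id"])        # assumed response shape
        span.set_attribute("gen_ai.response.model", response["model"])
        span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
        attributes = {"gen_ai.system": "openai"}
        events = [
            ("gen_ai.system.message", attributes,
             {"content": "You're a friendly bot that answers questions about OpenTelemetry."}
             if capture_content else {}),
            ("gen_ai.user.message", attributes,
             {"content": "How to instrument GenAI library with OTel?"} if capture_content else {}),
            ("gen_ai.choice", attributes,
             {"index": 0, "finish_reason": "stop",
              "message": {"content": response["text"]} if capture_content else {}}),
        ]
        return events  # emitting these is left to the (experimental) Event API of the SDK
```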

Tools

This example covers the following scenario:

  1. The application requests a chat completion from the OpenAI GPT-4 model and provides a function definition.

    • The application provides the following prompt:
      • User message: How to instrument GenAI library with OTel?
    • The application defines a tool (a function) named get_link_to_otel_semconv with a single string argument named semconv
  2. The model responds with a tool call request, which the application executes

  3. The application requests chat completion again, now with the tool execution result

Here’s the telemetry generated for each step in this scenario:

  1. Chat completion resulting in a tool call.

    | Attribute name | Value |
    |---|---|
    | Span name | "chat gpt-4" |
    | gen_ai.system | "openai" |
    | gen_ai.request.model | "gpt-4" |
    | gen_ai.request.max_tokens | 200 |
    | gen_ai.request.top_p | 1.0 |
    | gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
    | gen_ai.response.model | "gpt-4-0613" |
    | gen_ai.usage.output_tokens | 17 |
    | gen_ai.usage.input_tokens | 47 |
    | gen_ai.response.finish_reasons | ["tool_calls"] |

    Events parented to this span:

    • gen_ai.user.message (not reported when capturing content is disabled)

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body | {"content":"How to instrument GenAI library with OTel?"} |
    • gen_ai.choice

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body (with content) | {"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]}} |
      | Event body (without content) | {"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]}} |
  2. The application executes the tool call. The application may create a span for this, which is not covered by this semantic convention.

  3. Final chat completion call

    | Attribute name | Value |
    |---|---|
    | Span name | "chat gpt-4" |
    | gen_ai.system | "openai" |
    | gen_ai.request.model | "gpt-4" |
    | gen_ai.request.max_tokens | 200 |
    | gen_ai.request.top_p | 1.0 |
    | gen_ai.response.id | "chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl" |
    | gen_ai.response.model | "gpt-4-0613" |
    | gen_ai.usage.output_tokens | 52 |
    | gen_ai.usage.input_tokens | 47 |
    | gen_ai.response.finish_reasons | ["stop"] |

    Events parented to this span: (in this example, the event content matches the original messages, but applications may also drop messages or change their content)

    • gen_ai.user.message (not reported when capturing content is disabled)

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body | {"content":"How to instrument GenAI library with OTel?"} |
    • gen_ai.assistant.message

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body (content enabled) | {"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]} |
      | Event body (content not enabled) | {"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]} |
    • gen_ai.tool.message

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body (content enabled) | {"content":"opentelemetry.io/semconv/gen-ai","id":"call_VSPygqKTWdrhaFErNvMV18Yl"} |
      | Event body (content not enabled) | {"id":"call_VSPygqKTWdrhaFErNvMV18Yl"} |
    • gen_ai.choice

      | Property | Value |
      |---|---|
      | gen_ai.system | "openai" |
      | Event body (content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow OTel semconv available at opentelemetry.io/semconv/gen-ai"}} |
      | Event body (content not enabled) | {"index":0,"finish_reason":"stop","message":{}} |

Chat completion with multiple choices

This example covers the following scenario:

  • the user requests a chat completion with 2 choices from the OpenAI GPT-4 model for the following prompt:

    • System message: You're a friendly bot that answers questions about OpenTelemetry.
    • User message: How to instrument GenAI library with OTel?
  • The model responds with two choices

    • "Follow GenAI semantic conventions available at opentelemetry.io." message
    • "Use OpenAI instrumentation library." message

Span:

| Attribute name | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.system | "openai" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 77 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |

Events:

  1. gen_ai.system.message: the same as in the Chat Completion example

  2. gen_ai.user.message: the same as in the previous example

  3. gen_ai.choice

     | Property | Value |
     |---|---|
     | gen_ai.system | "openai" |
     | Event body (content enabled) | {"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}} |

  4. gen_ai.choice

     | Property | Value |
     |---|---|
     | gen_ai.system | "openai" |
     | Event body (content enabled) | {"index":1,"finish_reason":"stop","message":{"content":"Use OpenAI instrumentation library."}} |
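
A short sketch of the "one event per choice" rule, with each body carrying its own index; the shape of the choices argument is an assumption for illustration.

```python
def choice_events(choices, capture_content: bool = False):
    """Yield one gen_ai.choice body per returned choice, as required above."""
    for index, (finish_reason, text) in enumerate(choices):  # assumed (finish_reason, text) pairs
        yield {
            "index": index,
            "finish_reason": finish_reason,
            "message": {"content": text} if capture_content else {},
        }

# for body in choice_events(
#         [("stop", "Follow GenAI semantic conventions available at opentelemetry.io."),
#          ("stop", "Use OpenAI instrumentation library.")],
#         capture_content=True):
#     ...  # emit a gen_ai.choice event with gen_ai.system="openai" and this body
```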