# Semantic Conventions for GenAI operations

Status: Experimental

A request to a Generative AI model is modeled as a span in a trace.

Span kind: MUST always be CLIENT.

## Name

GenAI spans MUST follow the overall guidelines for span names. The span name SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}`. Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.
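For illustration, a minimal sketch of how an instrumentation might build the span name and kind with the OpenTelemetry Python API (the operation and model values are examples, not prescribed by this convention):

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

# Illustrative values; a real instrumentation reads these from the outgoing request.
operation_name = "chat"   # gen_ai.operation.name
request_model = "gpt-4"   # gen_ai.request.model

# Span name follows "{gen_ai.operation.name} {gen_ai.request.model}"; kind MUST be CLIENT.
with tracer.start_as_current_span(
    f"{operation_name} {request_model}",
    kind=trace.SpanKind.CLIENT,
) as span:
    span.set_attribute("gen_ai.operation.name", operation_name)
    span.set_attribute("gen_ai.request.model", request_model)
```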

## Configuration

Instrumentations for Generative AI clients MAY capture prompts and completions. Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions (see the sketch after this list). This is for three primary reasons:

  1. Data privacy concerns. End users of GenAI applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
  2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some GenAI systems allow for extremely large context windows that end users may take full advantage of.
  3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.
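How the opt-out is exposed is left to the instrumentation. One possible approach, sketched below, gates content capture behind an explicit flag; the environment variable name is hypothetical and not defined by this convention, and the event it records is the `gen_ai.content.prompt` event described later in this document:

```python
import os

# Hypothetical opt-in flag; prompt/completion capture stays off unless explicitly enabled.
CAPTURE_CONTENT = os.environ.get("EXAMPLE_GENAI_CAPTURE_CONTENT", "false").lower() == "true"

def maybe_record_prompt(span, prompt_json: str) -> None:
    """Record the prompt event only when content capture is enabled."""
    if CAPTURE_CONTENT:
        span.add_event("gen_ai.content.prompt", attributes={"gen_ai.prompt": prompt_json})
```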

## GenAI attributes

These attributes track input data and metadata for a request to a GenAI model. Each attribute represents a concept that is common to most Generative AI clients.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.operation.name` | string | The name of the operation being performed. [1] | `chat`; `text_completion` | Required | Experimental |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. [2] | `gpt-4` | Required | Experimental |
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [3] | `openai` | Required | Experimental |
| `error.type` | string | Describes a class of error the operation ended with. [4] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required if the operation ended in an error | Stable |
| `server.port` | int | GenAI server port. [5] | `80`; `8080`; `443` | Conditionally Required if `server.address` is set | Stable |
| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` | Recommended | Experimental |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` | Recommended | Experimental |
| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` | Recommended | Experimental |
| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` | Recommended | Experimental |
| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` | Recommended | Experimental |
| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` | Recommended | Experimental |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` | Recommended | Experimental |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` | Recommended | Experimental |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | Recommended | Experimental |
| `gen_ai.response.model` | string | The name of the model that generated the response. [6] | `gpt-4-0613` | Recommended | Experimental |
| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` | Recommended | Experimental |
| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | Recommended | Experimental |
| `server.address` | string | GenAI server address. [7] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Recommended | Stable |

[1]: If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

[2]: The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.

[3]: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.

[4]: The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library, the canonical name of the exception that occurred, or another low-cardinality error identifier. Instrumentations SHOULD document the list of errors they report.

[5]: When observed from the client side, and when communicating through an intermediary, server.port SHOULD represent the server port behind any intermediaries, for example proxies, if it’s available.

[6]: If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value must be the exact name of the model actually used. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.

[7]: When observed from the client side, and when communicating through an intermediary, server.address SHOULD represent the server address behind any intermediaries, for example proxies, if it’s available.

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | Stable |

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as OpenAI Chat API | Experimental |
| `text_completion` | Text completions operation such as OpenAI Completions API (Legacy) | Experimental |

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `cohere` | Cohere | Experimental |
| `openai` | OpenAI | Experimental |
| `vertex_ai` | Vertex AI | Experimental |
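Putting the attribute table and the well-known values together, a hedged sketch of recording request and response attributes on a chat span in Python follows; the attribute values are the illustrative examples from the table above, and a real instrumentation would copy them from the actual request and response objects:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

with tracer.start_as_current_span("chat gpt-4", kind=trace.SpanKind.CLIENT) as span:
    # Required attributes.
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.system", "openai")

    # Recommended request attributes, set when the client exposes them.
    span.set_attribute("gen_ai.request.max_tokens", 100)
    span.set_attribute("gen_ai.request.temperature", 0.0)
    span.set_attribute("server.address", "api.openai.com")
    span.set_attribute("server.port", 443)

    # ... call the model here ...

    # Recommended response attributes, set once the response is received.
    span.set_attribute("gen_ai.response.id", "chatcmpl-123")
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
    span.set_attribute("gen_ai.usage.input_tokens", 100)
    span.set_attribute("gen_ai.usage.output_tokens", 180)
```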

## Events

In the lifetime of a GenAI span, events for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.

The event name MUST be `gen_ai.content.prompt`.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.prompt` | string | The full prompt sent to the GenAI model. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | Conditionally Required if and only if corresponding event is enabled | Experimental |

[1]: It's RECOMMENDED to format prompts as a JSON string matching the OpenAI messages format.

The event name MUST be `gen_ai.content.completion`.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.completion` | string | The full response received from the GenAI model. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | Conditionally Required if and only if corresponding event is enabled | Experimental |

[1]: It's RECOMMENDED to format completions as a JSON string matching the OpenAI messages format.
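A hedged sketch of emitting both events, assuming content capture is enabled and reusing the `span` from the earlier attribute sketch; the JSON payloads are the illustrative examples from the tables above:

```python
import json

# Prompt and completion in the OpenAI messages format, serialized to JSON strings.
prompt = [{"role": "user", "content": "What is the capital of France?"}]
completion = [{"role": "assistant", "content": "The capital of France is Paris."}]

span.add_event("gen_ai.content.prompt",
               attributes={"gen_ai.prompt": json.dumps(prompt)})
span.add_event("gen_ai.content.completion",
               attributes={"gen_ai.completion": json.dumps(completion)})
```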