Semantic Conventions for GenAI agent and framework spans
Status: Development
Generative AI models can be trained to use tools to access real-time information or suggest a real-world action. For example, a model can leverage a database retrieval tool to access specific information, like a customer’s purchase history, so it can generate tailored shopping recommendations. Alternatively, based on a user’s query, a model can make various API calls to send an email response to a colleague or complete a financial transaction on your behalf. To do so, the model must not only have access to a set of external tools; it also needs the ability to plan and execute any task in a self-directed fashion. This combination of reasoning, logic, and access to external information, all connected to a Generative AI model, invokes the concept of an agent.
This document defines semantic conventions for the GenAI agent calls described in this whitepaper.
It MAY be applicable to agent operations that are performed by the GenAI framework locally.
The semantic conventions for GenAI agents extend and override the semantic conventions for GenAI Spans.
Spans
Create Agent Span
Describes GenAI agent creation and is usually applicable when working with remote agent services.
The gen_ai.operation.name SHOULD be create_agent.
The span name SHOULD be create_agent {gen_ai.agent.name}.
Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.
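As a non-normative illustration, the sketch below (using the OpenTelemetry Python API) shows how such a span might be named and its operation recorded; the agent name "Math Tutor" is a hypothetical application-provided value, and the span kind is an assumption rather than a requirement of this convention.

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

# Hypothetical application-provided agent name; used to build the span name
# following the "create_agent {gen_ai.agent.name}" convention.
agent_name = "Math Tutor"

with tracer.start_as_current_span(
    f"create_agent {agent_name}", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.operation.name", "create_agent")
    span.set_attribute("gen_ai.agent.name", agent_name)
```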
Attribute | Type | Description | Examples | Requirement Level | Stability |
---|---|---|---|---|---|
gen_ai.operation.name | string | The name of the operation being performed. [1] | chat ; text_completion ; embeddings | Required | |
gen_ai.system | string | The Generative AI product as identified by the client or server instrumentation. [2] | openai | Required | |
error.type | string | Describes a class of error the operation ended with. [3] | timeout ; java.net.UnknownHostException ; server_certificate_invalid ; 500 | Conditionally Required if the operation ended in an error | |
gen_ai.agent.description | string | Free-form description of the GenAI agent provided by the application. | Helps with math problems ; Generates fiction stories | Conditionally Required If provided by the application. | |
gen_ai.agent.id | string | The unique identifier of the GenAI agent. | asst_5j66UpCpwteGg4YSxUnt7lPY | Conditionally Required if applicable. | |
gen_ai.agent.name | string | Human-readable name of the GenAI agent provided by the application. | Math Tutor ; Fiction Writer | Conditionally Required If provided by the application. | |
gen_ai.output.type | string | Represents the content type requested by the client. [4] | text ; json ; image | Conditionally Required [5] | |
gen_ai.request.choice.count | int | The target number of candidate completions to return. | 3 | Conditionally Required if available, in the request, and !=1 | |
gen_ai.request.model | string | The name of the GenAI model a request is being made to. [6] | gpt-4 | Conditionally Required If provided by the application. | |
gen_ai.request.seed | int | Requests with the same seed value are more likely to return the same result. | 100 | Conditionally Required if applicable and if the request includes a seed | |
gen_ai.request.temperature | double | The temperature setting for the GenAI request. | 0.0 | Conditionally Required If provided by the application. | |
gen_ai.request.top_p | double | The top_p sampling setting for the GenAI request. | 1.0 | Conditionally Required If provided by the application. | |
server.port | int | GenAI server port. [7] | 80 ; 8080 ; 443 | Conditionally Required If server.address is set. | |
gen_ai.request.encoding_formats | string[] | The encoding formats requested in an embeddings operation, if specified. [8] | ["base64"] ; ["float", "binary"] | Recommended | |
gen_ai.request.frequency_penalty | double | The frequency penalty setting for the GenAI request. | 0.1 | Recommended | |
gen_ai.request.max_tokens | int | The maximum number of tokens the model generates for a request. | 100 | Recommended | |
gen_ai.request.presence_penalty | double | The presence penalty setting for the GenAI request. | 0.1 | Recommended | |
gen_ai.request.stop_sequences | string[] | List of sequences that the model will use to stop generating further tokens. | ["forest", "lived"] | Recommended | |
gen_ai.response.finish_reasons | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | ["stop"] ; ["stop", "length"] | Recommended | |
gen_ai.response.id | string | The unique identifier for the completion. | chatcmpl-123 | Recommended | |
gen_ai.response.model | string | The name of the model that generated the response. [9] | gpt-4-0613 | Recommended | |
gen_ai.usage.input_tokens | int | The number of tokens used in the GenAI input (prompt). | 100 | Recommended | |
gen_ai.usage.output_tokens | int | The number of tokens used in the GenAI response (completion). | 180 | Recommended | |
server.address | string | GenAI server address. [10] | example.com ; 10.1.2.80 ; /tmp/my.sock | Recommended |
[1] gen_ai.operation.name: If one of the predefined values applies, but the specific system uses a different name, it’s RECOMMENDED to document it in the semantic conventions for that GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
[2] gen_ai.system: The gen_ai.system describes a family of GenAI models, with the specific model identified by the gen_ai.request.model and gen_ai.response.model attributes. The actual GenAI product may differ from the one identified by the client. Multiple systems, including Azure OpenAI and Gemini, are accessible by OpenAI client libraries. In such cases, gen_ai.system is set to openai based on the instrumentation’s best knowledge, instead of the actual system. The server.address attribute may help identify the actual system in use when openai is reported (see the illustrative sketch after these notes). For custom models, a custom friendly name SHOULD be used. If none of these options apply, gen_ai.system SHOULD be set to _OTHER.
[3] error.type: The error.type SHOULD match the error code returned by the Generative AI provider or the client library, the canonical name of the exception that occurred, or another low-cardinality error identifier. Instrumentations SHOULD document the list of errors they report.
[4] gen_ai.output.type: This attribute SHOULD be used when the client requests output of a specific type. The model may return zero or more outputs of this type. This attribute specifies the output modality and not the actual output format. For example, if an image is requested, the actual output could be a URL pointing to an image file. Additional output format details may be recorded in the future in the gen_ai.output.{type}.* attributes.
[5] gen_ai.output.type: when applicable and if the request includes an output format.
[6] gen_ai.request.model: The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.
[7] server.port: When observed from the client side, and when communicating through an intermediary, server.port SHOULD represent the server port behind any intermediaries, for example proxies, if it’s available.
[8] gen_ai.request.encoding_formats: In some GenAI systems the encoding formats are called embedding types. Also, some GenAI systems only accept a single format per request.
[9] gen_ai.response.model: If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value must be the exact name of the model actually used. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.
[10] server.address: When observed from the client side, and when communicating through an intermediary, server.address SHOULD represent the server address behind any intermediaries, for example proxies, if it’s available.
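As a non-normative sketch of how the guidance above (notes [2], [6], and [10] in particular) might be applied when an agent is created through an OpenAI client library that is pointed at Azure OpenAI: the endpoint, model name, and agent metadata below are illustrative assumptions, not values mandated by this convention.

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

with tracer.start_as_current_span(
    "create_agent Math Tutor", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.operation.name", "create_agent")
    # The OpenAI client library is being used, so gen_ai.system is "openai"
    # even though the request is actually served by Azure OpenAI (note [2]).
    span.set_attribute("gen_ai.system", "openai")
    # server.address/server.port may help identify the actual backend (note [10]).
    span.set_attribute("server.address", "my-resource.openai.azure.com")  # hypothetical endpoint
    span.set_attribute("server.port", 443)
    # Conditionally required attributes, set because the application provided them.
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.agent.name", "Math Tutor")
    span.set_attribute("gen_ai.agent.description", "Helps with math problems")
    # Populated from the service response, if an agent identifier is returned.
    span.set_attribute("gen_ai.agent.id", "asst_5j66UpCpwteGg4YSxUnt7lPY")
```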
error.type has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
Value | Description | Stability |
---|---|---|
_OTHER | A fallback error value to be used when the instrumentation doesn’t define a custom value. |
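One possible way an instrumentation could map a client-side failure to error.type, consistent with note [3] above, is sketched below; the TimeoutError merely stands in for a real agent-creation call failing, and the status handling reflects common OpenTelemetry practice rather than a requirement of this convention.

```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("example.genai.instrumentation")

with tracer.start_as_current_span(
    "create_agent Math Tutor", kind=trace.SpanKind.CLIENT
) as span:
    span.set_attribute("gen_ai.operation.name", "create_agent")
    span.set_attribute("gen_ai.system", "openai")
    try:
        # Stand-in for the real agent-creation call, assumed to time out here.
        raise TimeoutError("agent service did not respond")
    except Exception as exc:
        # Use a provider error code or the exception's canonical name;
        # fall back to "_OTHER" when no low-cardinality identifier applies.
        span.set_attribute("error.type", type(exc).__qualname__)
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))
```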
gen_ai.operation.name has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
Value | Description | Stability |
---|---|---|
chat | Chat completion operation such as OpenAI Chat API | |
create_agent | Create GenAI agent | |
embeddings | Embeddings operation such as OpenAI Create embeddings API | |
execute_tool | Execute a tool | |
text_completion | Text completions operation such as OpenAI Completions API (Legacy) |
gen_ai.output.type has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
Value | Description | Stability |
---|---|---|
image | Image | |
json | JSON object with known or unknown schema | |
speech | Speech | |
text | Plain text |
gen_ai.system has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
Value | Description | Stability |
---|---|---|
anthropic | Anthropic | |
aws.bedrock | AWS Bedrock | |
az.ai.inference | Azure AI Inference | |
az.ai.openai | Azure OpenAI | |
cohere | Cohere | |
deepseek | DeepSeek | |
gemini | Gemini | |
groq | Groq | |
ibm.watsonx.ai | IBM Watsonx AI | |
mistral_ai | Mistral AI | |
openai | OpenAI | |
perplexity | Perplexity | |
vertex_ai | Vertex AI | |
xai | xAI |
Agent Execute Tool Span
If your agent uses tools, refer to the Execute Tool Span conventions.