# Semantic conventions for AWS Bedrock operations

Status: Development

> [!Warning]
>
> Existing GenAI instrumentations that are using v1.36.0 of this document (or prior):
>
> - SHOULD NOT change the version of the GenAI conventions that they emit by default. Conventions include, but are not limited to, attributes; metric, span, and event names; span kind; and unit of measure.
> - SHOULD introduce an environment variable `OTEL_SEMCONV_STABILITY_OPT_IN` as a comma-separated list of category-specific values. The list of values includes:
>   - `gen_ai_latest_experimental` - emit the latest experimental version of the GenAI conventions (supported by the instrumentation) and do not emit the old one (v1.36.0 or prior).
>   - The default behavior is to continue emitting whatever version of the GenAI conventions the instrumentation was emitting (v1.36.0 or prior).
>
> This transition plan will be updated to include the stable version before the GenAI conventions are marked as stable.
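As a non-normative illustration, the following sketch shows how an instrumentation might read the opt-in value; the helper name and parsing details are assumptions made for the example, not part of these conventions.

```python
import os

def use_latest_genai_conventions() -> bool:
    """Return True only when the user explicitly opted in to the latest
    experimental GenAI conventions via OTEL_SEMCONV_STABILITY_OPT_IN."""
    raw = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    values = {value.strip() for value in raw.split(",") if value.strip()}
    # Default: keep emitting the previously emitted conventions (v1.36.0 or prior).
    return "gen_ai_latest_experimental" in values
```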

## AWS Bedrock Spans

The Semantic Conventions for AWS Bedrock extend and override the semantic conventions for Gen AI Spans.

`gen_ai.provider.name` MUST be set to `"aws.bedrock"`.

These attributes track input data and metadata for a request to an AWS Bedrock model. The attributes include general Generative AI attributes and ones specific to AWS Bedrock.

Status: Development

Describes an AWS Bedrock operation span.

Span kind SHOULD be CLIENT.

Span status SHOULD follow the Recording Errors document.
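As a non-normative sketch, the following shows how a client-side instrumentation might record such a span in Python, using a few of the attributes from the table below. The span name format, the boto3 `converse` call, and the response field names are assumptions made for the example, not requirements of this document.

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind, StatusCode

tracer = trace.get_tracer("example-bedrock-instrumentation")

def traced_converse(bedrock_runtime, model_id, messages):
    # Span name "{operation} {model}" follows the general GenAI span conventions
    # (assumed here); span kind is CLIENT per this document.
    with tracer.start_as_current_span(f"chat {model_id}", kind=SpanKind.CLIENT) as span:
        span.set_attribute("gen_ai.provider.name", "aws.bedrock")
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.request.model", model_id)
        try:
            response = bedrock_runtime.converse(modelId=model_id, messages=messages)
        except Exception as exc:
            # Record the error class and span status per the Recording Errors document.
            span.set_attribute("error.type", type(exc).__qualname__)
            span.set_status(StatusCode.ERROR, str(exc))
            raise
        usage = response.get("usage", {})
        if "inputTokens" in usage:
            span.set_attribute("gen_ai.usage.input_tokens", usage["inputTokens"])
        if "outputTokens" in usage:
            span.set_attribute("gen_ai.usage.output_tokens", usage["outputTokens"])
        if "stopReason" in response:
            span.set_attribute("gen_ai.response.finish_reasons", [response["stopReason"]])
        return response
```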

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `aws.bedrock.guardrail.id` | string | The unique identifier of the AWS Bedrock Guardrail. A guardrail helps safeguard and prevent unwanted behavior from model responses or user messages. | `sgi5gkybzqak` | `Required` | Development |
| `gen_ai.operation.name` | string | The name of the operation being performed. [1] | `chat`; `generate_content`; `text_completion` | `Required` | Development |
| `gen_ai.provider.name` | string | The Generative AI provider as identified by the client or server instrumentation. [2] | `openai`; `gcp.gen_ai`; `gcp.vertex_ai` | `Required` | Development |
| `error.type` | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | `Conditionally Required` if the operation ended in an error | Stable |
| `gen_ai.conversation.id` | string | The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation. [4] | `conv_5j66UpCpwteGg4YSxUnt7lPY` | `Conditionally Required` when available | Development |
| `gen_ai.output.type` | string | Represents the content type requested by the client. [5] | `text`; `json`; `image` | `Conditionally Required` [6] | Development |
| `gen_ai.request.choice.count` | int | The target number of candidate completions to return. | `3` | `Conditionally Required` if available, in the request, and != 1 | Development |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. [7] | `gpt-4` | `Conditionally Required` if available | Development |
| `gen_ai.request.seed` | int | Requests with same seed value more likely to return same result. | `100` | `Conditionally Required` if applicable and if the request includes a seed | Development |
| `server.port` | int | GenAI server port. [8] | `80`; `8080`; `443` | `Conditionally Required` if `server.address` is set | Stable |
| `aws.bedrock.knowledge_base.id` | string | The unique identifier of the AWS Bedrock Knowledge base. A knowledge base is a bank of information that can be queried by models to generate more relevant responses and augment prompts. | `XFWUPB9PAW` | `Recommended` | Development |
| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` | `Recommended` | Development |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` | `Recommended` | Development |
| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` | `Recommended` | Development |
| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` | `Recommended` | Development |
| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` | `Recommended` | Development |
| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` | `Recommended` | Development |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` | `Recommended` | Development |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` | `Recommended` | Development |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` | Development |
| `gen_ai.response.model` | string | The name of the model that generated the response. [9] | `gpt-4-0613` | `Recommended` | Development |
| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` | `Recommended` | Development |
| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` | Development |
| `server.address` | string | GenAI server address. [10] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | `Recommended` | Stable |
| `gen_ai.input.messages` | any | The chat history provided to the model as an input. [11] | `[{"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]}, {"role": "assistant", "parts": [{"type": "tool_call", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "name": "get_weather", "arguments": {"location": "Paris"}}]}, {"role": "tool", "parts": [{"type": "tool_call_response", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "result": "rainy, 57°F"}]}]` | `Opt-In` | Development |
| `gen_ai.output.messages` | any | Messages returned by the model where each message represents a specific model response (choice, candidate). [12] | `[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]` | `Opt-In` | Development |
| `gen_ai.system_instructions` | any | The system message or instructions provided to the GenAI model separately from the chat history. [13] | `[{"type": "text", "content": "You are an Agent that greet users, always use greetings tool to respond"}]`; `[{"type": "text", "content": "You are a language translator."}, {"type": "text", "content": "Your mission is to translate text in English to French."}]` | `Opt-In` | Development |

[1] gen_ai.operation.name: If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

[2] gen_ai.provider.name: The attribute SHOULD be set based on the instrumentation’s best knowledge and may differ from the actual model provider.

Multiple providers, including Azure OpenAI, Gemini, and AI hosting platforms, are accessible using the OpenAI REST API and corresponding client libraries, but may proxy or host models from different providers.

The gen_ai.request.model, gen_ai.response.model, and server.address attributes may help identify the actual system in use.

The gen_ai.provider.name attribute acts as a discriminator that identifies the GenAI telemetry format flavor specific to that provider within GenAI semantic conventions. It SHOULD be set consistently with provider-specific attributes and signals. For example, GenAI spans, metrics, and events related to AWS Bedrock should have gen_ai.provider.name set to aws.bedrock and include the applicable aws.bedrock.* attributes; they are not expected to include openai.* attributes.

[3] error.type: The error.type SHOULD match the error code returned by the Generative AI provider or the client library, the canonical name of exception that occurred, or another low-cardinality error identifier. Instrumentations SHOULD document the list of errors they report.
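A hedged sketch of deriving a low-cardinality error.type value for Bedrock SDK errors follows; the use of botocore's ClientError and its error-code field is an assumption about the client library in use.

```python
from botocore.exceptions import ClientError

def error_type_for(exc: Exception) -> str:
    if isinstance(exc, ClientError):
        # AWS error codes such as "ThrottlingException" or "ValidationException"
        # are already low-cardinality identifiers.
        return exc.response.get("Error", {}).get("Code") or type(exc).__qualname__
    # Otherwise fall back to the canonical exception class name.
    return type(exc).__qualname__
```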

[4] gen_ai.conversation.id: Instrumentations SHOULD populate the conversation id when it is readily available for a given operation, for example when the instrumented client library or the GenAI backend maintains the conversation (session, thread) state.

Application developers that manage conversation history MAY add conversation id to GenAI and other spans or logs using custom span or log record processors or hooks provided by instrumentation libraries.

[5] gen_ai.output.type: This attribute SHOULD be used when the client requests output of a specific type. The model may return zero or more outputs of this type. This attribute specifies the output modality and not the actual output format. For example, if an image is requested, the actual output could be a URL pointing to an image file. Additional output format details may be recorded in the future in the gen_ai.output.{type}.* attributes.

[6] gen_ai.output.type: when applicable and if the request includes an output format.

[7] gen_ai.request.model: The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.

[8] server.port: When observed from the client side, and when communicating through an intermediary, server.port SHOULD represent the server port behind any intermediaries, for example proxies, if it’s available.

[9] gen_ai.response.model: If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value must be the exact name of the model actually used. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that’s been fine-tuned.

[10] server.address: When observed from the client side, and when communicating through an intermediary, server.address SHOULD represent the server address behind any intermediaries, for example proxies, if it’s available.

[11] gen_ai.input.messages: Instrumentations MUST follow Input messages JSON schema. When the attribute is recorded on events, it MUST be recorded in structured form. When recorded on spans, it MAY be recorded as a JSON string if structured format is not supported and SHOULD be recorded in structured form otherwise.

Messages MUST be provided in the order they were sent to the model. Instrumentations MAY provide a way for users to filter or truncate input messages.

> [!Warning]
> This attribute is likely to contain sensitive information including user/PII data.

See Recording content on attributes section for more details.
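A minimal, non-normative sketch of recording the chat history as a JSON string span attribute when the SDK in use cannot record it in structured form; the helper name, opt-in flag, and truncation limit are assumptions made for illustration.

```python
import json

def record_input_messages(span, messages, content_enabled=False, max_chars=16_384):
    # Opt-in only: message content is likely to contain sensitive/PII data.
    if not content_enabled:
        return
    serialized = json.dumps(messages, ensure_ascii=False)
    # Instrumentations MAY filter or truncate; a simple cut-off is shown here.
    span.set_attribute("gen_ai.input.messages", serialized[:max_chars])
```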

[12] gen_ai.output.messages: Instrumentations MUST follow Output messages JSON schema.

Each message represents a single output choice (candidate) generated by the model. Each message corresponds to exactly one generation (choice/candidate) and vice versa: one choice cannot be split across multiple messages, and one message cannot contain parts from multiple choices.

When the attribute is recorded on events, it MUST be recorded in structured form. When recorded on spans, it MAY be recorded as a JSON string if structured format is not supported and SHOULD be recorded in structured form otherwise.

Instrumentations MAY provide a way for users to filter or truncate output messages.

> [!Warning]
> This attribute is likely to contain sensitive information including user/PII data.

See Recording content on attributes section for more details.

[13] gen_ai.system_instructions: This attribute SHOULD be used when the corresponding provider or API allows system instructions or messages to be provided separately from the chat history.

Instructions that are part of the chat history SHOULD be recorded in the gen_ai.input.messages attribute instead.

Instrumentations MUST follow System instructions JSON schema.

When recorded on spans, it MAY be recorded as a JSON string if structured format is not supported and SHOULD be recorded in structured form otherwise.

Instrumentations MAY provide a way for users to filter or truncate system instructions.

> [!Warning]
> This attribute may contain sensitive information.

See Recording content on attributes section for more details.


error.type has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | Stable |

gen_ai.operation.name has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as OpenAI Chat API | Development |
| `create_agent` | Create GenAI agent | Development |
| `embeddings` | Embeddings operation such as OpenAI Create embeddings API | Development |
| `execute_tool` | Execute a tool | Development |
| `generate_content` | Multimodal content generation operation such as Gemini Generate Content | Development |
| `invoke_agent` | Invoke GenAI agent | Development |
| `text_completion` | Text completions operation such as OpenAI Completions API (Legacy) | Development |

gen_ai.output.type has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `image` | Image | Development |
| `json` | JSON object with known or unknown schema | Development |
| `speech` | Speech | Development |
| `text` | Plain text | Development |

gen_ai.provider.name has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Development |
| `aws.bedrock` | AWS Bedrock | Development |
| `azure.ai.inference` | Azure AI Inference | Development |
| `azure.ai.openai` | Azure OpenAI | Development |
| `cohere` | Cohere | Development |
| `deepseek` | DeepSeek | Development |
| `gcp.gemini` | Gemini [14] | Development |
| `gcp.gen_ai` | Any Google generative AI endpoint [15] | Development |
| `gcp.vertex_ai` | Vertex AI [16] | Development |
| `groq` | Groq | Development |
| `ibm.watsonx.ai` | IBM Watsonx AI | Development |
| `mistral_ai` | Mistral AI | Development |
| `openai` | OpenAI | Development |
| `perplexity` | Perplexity | Development |
| `x_ai` | xAI | Development |

[14]: Used when accessing the 'generativelanguage.googleapis.com' endpoint. Also known as the AI Studio API.

[15]: May be used when the specific backend is unknown.

[16]: Used when accessing the 'aiplatform.googleapis.com' endpoint.