Quick start
The OpenTelemetry Collector receives telemetry such as traces, metrics, and logs, processes it, and forwards it to one or more observability backends through its component pipeline.
This quick start demo creates a basic local setup. The goal is to show you how the Collector works, not to set up a production-ready environment.
In this guide, you will:
- Start a local instance of the OpenTelemetry Collector
- Generate trace data and send it to the Collector
- Check that the Collector receives and processes the data
By the end, you will have a simple pipeline running on your machine and a clearer idea of how the Collector fits into an observability stack. If you want more context before getting started, see the Collector overview.
Prerequisites
Before you begin, make sure your environment has the following tools installed:
- Docker or any compatible container runtime — used to run the Collector
- Go, one of the latest two minor versions — used to install the telemetry generator
- GOBIN environment variable set — ensures installed Go binaries are available in your PATH¹
If GOBIN isn’t set, run:
export GOBIN=${GOBIN:-$(go env GOPATH)/bin}
This guide uses bash commands. If you’re using a different shell, you might
need to adjust the command syntax.
Set up the environment
Pull the Docker image of the OpenTelemetry Collector core distribution:
docker pull otel/opentelemetry-collector:0.150.0
Install telemetrygen, which we’ll use to simulate a client that generates telemetry:
go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest
Generate and collect telemetry
Start the Collector:
docker run \
  -p 127.0.0.1:4317:4317 \
  -p 127.0.0.1:4318:4318 \
  -p 127.0.0.1:55679:55679 \
  otel/opentelemetry-collector:0.150.0 \
  2>&1 | tee collector-output.txt
The previous command runs the Collector locally and opens three ports:
- 4317 — OTLP over gRPC, the default for most SDKs
- 4318 — OTLP over HTTP, for clients that don’t support gRPC
- 55679 — ZPages, a built-in debug UI you can open in the browser
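The core distribution ships with a built-in default configuration, so no config file is needed for this demo. As a rough sketch (illustrative, not the exact file the image uses), an equivalent minimal configuration wiring these ports together looks like this:

```yaml
# Sketch of a minimal Collector configuration: an OTLP receiver on the
# two standard ports, a debug exporter that prints telemetry to the
# console, and the ZPages extension for the in-browser debug UI.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

extensions:
  zpages:
    endpoint: 0.0.0.0:55679

service:
  extensions: [zpages]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

If you later want to run the Collector with your own file, you can mount it into the container and pass it via the `--config` flag.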
In a separate terminal, generate some traces:
$GOBIN/telemetrygen traces --otlp-insecure --traces 3
You see output confirming the traces were sent:
2024-01-16T14:33:15.692-0500 INFO traces/worker.go:99 traces generated {"worker": 0, "traces": 3}
2024-01-16T14:33:15.692-0500 INFO traces/traces.go:58 stop the batch span processor
Back in the Collector terminal, you should see trace ingest activity similar to the following:
$ grep -E '^Span|(ID|Name|Kind|time|Status \w+)\s+:' ./collector-output.txt
Span #0
    Trace ID       : f30faffbde5fcf71432f89da1bf7bc14
    Parent ID      : 6f1ff7f9cf4ec1c7
    ID             : 8d1e820c1ac57337
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2024-01-16 14:13:54.585877 +0000 UTC
    End time       : 2024-01-16 14:13:54.586 +0000 UTC
    Status code    : Unset
    Status message :
Span #1
    Trace ID       : f30faffbde5fcf71432f89da1bf7bc14
    Parent ID      :
    ID             : 6f1ff7f9cf4ec1c7
    Name           : lets-go
    Kind           : Client
    Start time     : 2024-01-16 14:13:54.585877 +0000 UTC
    End time       : 2024-01-16 14:13:54.586 +0000 UTC
    Status code    : Unset
    Status message :
...
To explore the traces visually, open http://localhost:55679/debug/tracez in your browser and select one of the traces from the table.
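If the grep filter above looks opaque: it keeps the `Span #N` header lines plus any field line whose label ends in ID, Name, Kind, time, or Status before a colon, and drops everything else. A self-contained sketch of the same filter, using a fabricated sample file rather than real Collector output:

```shell
# Fabricated stand-in for collector-output.txt (sample lines only).
cat > sample-output.txt <<'EOF'
Span #0
    Trace ID       : f30faffbde5fcf71432f89da1bf7bc14
    ID             : 8d1e820c1ac57337
    Name           : okey-dokey
ResourceSpans #0
EOF
# Keeps "Span #N" headers and field lines matching the label pattern;
# the "ResourceSpans #0" line matches neither alternative and is dropped.
grep -E '^Span|(ID|Name|Kind|time|Status \w+)\s+:' sample-output.txt
```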
Press Control-C to stop the Collector.
Next steps
At this point, you’ve run the Collector locally and seen how it handles telemetry end to end. From here, you can start learning how it’s used in real setups:
- Configuration: Learn how the Collector’s config file works and how to connect it to a real backend like Jaeger or Prometheus.
- Deployment patterns: Understand the difference between running the Collector as an agent versus a gateway.
- Install the Collector: Explore installation options beyond Docker, including binaries and Kubernetes.
- Component registry: Browse available receivers, processors, and exporters to extend your pipeline.
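As a taste of the Configuration step, swapping the console output for a real backend is mostly a matter of changing the exporter. A hedged sketch, assuming a Jaeger instance (which accepts OTLP natively) reachable at the hypothetical address jaeger:4317:

```yaml
# Sketch: export traces over OTLP to a Jaeger backend instead of the
# console. "jaeger:4317" is an assumed address; adjust for your setup.
exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true  # demo only; enable TLS for production

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
```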
¹ For more information, see Your first program.