The complete guide to OpenTelemetry in Python

Published January 14, 2025


by Vadim Korolik

LaunchDarkly is an [open source](https://github.com/highlight/highlight) monitoring platform. If you’re interested in learning more, get started at [LaunchDarkly](https://launchdarkly.com).

OpenTelemetry is an important specification that defines how we send telemetry data to observability backends like LaunchDarkly, Grafana, and others. Because it is vendor-agnostic, the same instrumentation works with many observability backends. If you’re new to OpenTelemetry, you can learn more about it here.

Today, we’ll go through a complete guide to using OpenTelemetry in Python, including the high-level concepts as well as how to send traces and logs to your OpenTelemetry backend of choice.

Step One: Signals

In OpenTelemetry, a signal is a type of telemetry data that is collected and sent to an observability backend. There are three primary types of signals in OpenTelemetry: traces, metrics, and logs.

  • Traces: Traces represent the end-to-end journey of a request as it travels through various services and components in a distributed system. They are composed of spans, which are individual units of work within the trace. Traces help you understand the flow of requests and identify performance bottlenecks or errors in your system.

  • Metrics: Metrics are numerical measurements that provide insights into the performance and health of your system. They can include data such as request counts, error rates, and latency. Metrics are typically aggregated over time and used to monitor trends and set up alerts.

  • Logs: Logs are timestamped records of events that occur within your application. They provide detailed information that is helpful for monitoring, debugging and troubleshooting the application’s behavior. Logs can include various levels of severity, such as info, warning, and error.

In addition to traces, metrics and logs, there are other constructs that can inherit from a signal. For example, an error can be represented using traces as the underlying data type, providing context about where and why the error occurred. Similarly, sessions can be constructed using several signals, such as traces, metrics, and logs, to provide a holistic view of a user’s interaction with the system.

In the context of OpenTelemetry, signals are generated by instrumenting your systems (whether it’s a container, application, or service).

Components of Instrumentation

Beyond signals, there are several key components that make up the OpenTelemetry API you work with when instrumenting your application code.

These components are designed to be flexible and can be used for all signals. After these signals leave your application, they can pass through one or more OpenTelemetry collectors. LaunchDarkly hosts a cluster of collectors that you can send data to, but you can also choose to host your own. Let’s go through each of the components. You can refer to the diagram below for a visual representation of how they interact:

Diagram of the OpenTelemetry pipeline showing how providers, processors, and exporters interact to send telemetry data from an application to a collector.


Provider

A provider is the API entry point that holds the configuration for telemetry data. In the context of tracing, this would be a TracerProvider, and for logging, it would be a LoggerProvider. The provider is responsible for setting up the environment and ensuring that all necessary configurations are in place. This can include configuring a vendor-specific API key, or something as simple as setting the service name and environment.

For example, a TracerProvider could set up the resource attributes like service name and environment, and set the LaunchDarkly project id so that the traces are associated with your LaunchDarkly project.

Here’s a quick example of what this looks like in code:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource

provider = TracerProvider(resource=Resource.create(
    {
        "service.name": "my-service",
        "highlight.project_id": "<YOUR_PROJECT_ID>",
        "environment": "production",
    }
))

trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-service")
```

Processor

A processor defines any pre-processing that should be done on the created signals, such as batching, sampling, filtering or even enriching data. This is important because you may have specific needs on the machine that you’re sending data from that require customization. As a very simple example, a BatchSpanProcessor collects spans in batches and sends them to the exporter, which is more efficient than sending each span individually.

Here’s an example of how you might configure a BatchSpanProcessor:

```python
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The exporter is the first positional argument (named span_exporter)
processor = BatchSpanProcessor(
    OTLPSpanExporter(endpoint="https://otel.highlight.io:4317"),
    max_queue_size=1000,
    max_export_batch_size=100,
    schedule_delay_millis=1000,
)
```

As you can see, we’ve configured the processor to use a BatchSpanProcessor with an OTLPSpanExporter that sends the spans to the LaunchDarkly collector (more about this later). We’ve also configured the processor to batch the spans and send them to the exporter every second with a queue size of 1000.

Exporter

Finally, an exporter sends the telemetry data to the backend. This is where you configure the endpoint and any other necessary settings related to the backend you’re sending data to. For example, an OTLPSpanExporter would configure the endpoint and any necessary headers, while a ConsoleSpanExporter would simply print the spans to the console.

Here’s an example of how you might configure an OTLPSpanExporter:

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://otel.highlight.io:4317",
    insecure=True,
    headers={"foo": "bar"},
)
```

As you can see, we’ve configured the exporter to use the LaunchDarkly collector, and set the foo header to bar.

Instrumenting your application

Logging

Now that we’re familiar with the high-level concepts, let’s see how we can instrument our application to send logs to an OpenTelemetry backend. In this example, we’ll assume that we’re sending data to LaunchDarkly, but the same principles would apply to any other backend that supports OpenTelemetry.

First, let’s install the necessary packages:

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```

Next, we’ll need to set up the provider, processor, and exporter.

```python
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

service_name = "my-service"
environment = "production"
otel_endpoint = "https://otel.highlight.io:4317"

# Set up the logger provider with the resource
logger_provider = LoggerProvider(resource=Resource.create(
    {
        "service.name": service_name,
        "highlight.project_id": "<YOUR_PROJECT_ID>",
        "environment": environment,
    }
))
set_logger_provider(logger_provider)

# Configure the OTLP log exporter
exporter = OTLPLogExporter(endpoint=otel_endpoint, insecure=True)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Set up the logger
logger = logging.getLogger(service_name)
logger.setLevel(logging.DEBUG)

# Add the OpenTelemetry logging handler
handler = LoggingHandler(level=logging.DEBUG, logger_provider=logger_provider)
logger.addHandler(handler)
```

Tracing

Similar to logging, we can instrument our application to send traces to an OpenTelemetry backend. Let’s start by installing the necessary packages:

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```

Next, we’ll need to set up the provider, processor, and exporter.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Define the service name and environment
service_name = "my-service"
environment = "production"
otel_endpoint = "https://otel.highlight.io:4317"

# Create a resource with service name and highlight project ID
provider = TracerProvider(resource=Resource.create(
    {
        "service.name": service_name,
        "highlight.project_id": "<YOUR_PROJECT_ID>",
        "environment": environment,
    }
))
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=otel_endpoint, insecure=True))
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(service_name)
```

At this point, once you’ve instrumented your application to send logs and traces to an OpenTelemetry backend, you can use the tracer and logger objects to send data to your backend, like so:

```python
with tracer.start_as_current_span("example-span") as span:
    logger.info('hello, world!')
    span.set_attributes(
        {
            "category": "special",
            "rows_affected": 123
        }
    )
    logger.warning('whoa there', extra={'key': 'value'})
```

Metrics

Lastly, we can instrument our application to send metrics. In this example, we’ll send a simple count metric, but you can also send other types of metrics like histograms, gauges, and more (see the OpenTelemetry docs for more information).

First, we need to install the necessary OpenTelemetry packages for metrics:

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```

Next, we’ll need to set up the provider, processor, and exporter.

```python
from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource

# Export metrics to the collector on a periodic schedule
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint=otel_endpoint, insecure=True)
)

# Set up the meter provider with the resource
meter_provider = MeterProvider(resource=Resource.create(
    {
        "service.name": service_name,
        "highlight.project_id": "<YOUR_PROJECT_ID>",
        "environment": environment,
    }
), metric_readers=[reader])
metrics.set_meter_provider(meter_provider)
meter = metrics.get_meter("my-service")
counter = meter.create_counter("my-counter")
```

And lastly, we can add a value to the counter we created:

```python
counter.add(1)
```

Note that this is a simple example, and you can also create other types of metrics like histograms, gauges, and more. There’s also the option to use Observable Metric Objects, which allow for more complex metrics collection (like collecting CPU usage, memory usage, etc). Take a look at the OpenTelemetry docs for more information.

Auto-instrumentation & Middleware

Last but not least, in addition to manual instrumentation, OpenTelemetry also supports auto-instrumentation for popular libraries and frameworks. This allows you to collect telemetry data automatically without modifying your application code. For example, in Python, you could use the OpenTelemetry Distro SDK or the Zero Code Python setup to automatically instrument your application. The downside of these options, however, is that they require changing the way your application is run (and may affect your deployment strategy).
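As a rough sketch, the zero-code setup typically looks like this (package names per the OpenTelemetry docs; the service name and endpoint values are illustrative):

```shell
pip install opentelemetry-distro opentelemetry-exporter-otlp

# Detect installed libraries and install matching instrumentation packages
opentelemetry-bootstrap -a install

# Launch the app under the agent instead of invoking Python directly
OTEL_SERVICE_NAME=my-service \
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.highlight.io:4317 \
opentelemetry-instrument python main.py
```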

As a good alternative, you could use middleware to automatically instrument your application. Middleware is a layer of code that wraps each request as it passes through your web framework, and can be used to automatically collect telemetry data. For example, in a Python FastAPI application, you could write a simple middleware to wrap your application and automatically create a trace for each request, like so:

```python
from fastapi import Request, FastAPI
from opentelemetry import trace

# Assumes the tracer provider was configured as shown earlier
tracer = trace.get_tracer("my-service")

app = FastAPI()

@app.middleware("http")
async def trace_middleware(request: Request, call_next):
    with tracer.start_as_current_span(f"{request.method} {request.url.path}"):
        response = await call_next(request)
        return response

@app.get("/")
def read_root():
    return {"message": "Hello, World!"}
```

The great thing about middleware is that it doesn’t require that you change the way your application is run, and every time you write signals within each of your endpoints, you’ll automatically have traces and logs associated with that request.

Putting it all together

Let’s take the various pieces of the OpenTelemetry configuration and put them into a single file that can easily be imported by our application. Create the file o11y.py with the following contents:

```python
import logging
import os
import sys
from typing import Optional

from dotenv import load_dotenv

from opentelemetry import metrics, trace
from opentelemetry.sdk.metrics.export import AggregationTemporality
from opentelemetry.sdk.metrics import Counter, Histogram, UpDownCounter
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import ConsoleMetricExporter, PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Read configuration from .env before looking up environment variables
load_dotenv()

EXPORTER_OTLP_ENDPOINT = os.getenv("OTEL_ENDPOINT", "https://otel.highlight.io:4317")
print("OTEL Endpoint is: ", EXPORTER_OTLP_ENDPOINT)

HIGHLIGHT_PROJECT_ID = os.getenv("HIGHLIGHT_PROJECT_ID", "EMPTY")
print("HIGHLIGHT_PROJECT_ID is: ", HIGHLIGHT_PROJECT_ID)


def create_logger(service_name: str, environment: Optional[str] = "production", local_debug: bool = False) -> logging.Logger:
    if environment is None:
        environment = "production"
    commit = os.getenv("RENDER_GIT_COMMIT", "unknown")
    resource = Resource.create(
        {
            "service.name": service_name,
            "highlight.project_id": HIGHLIGHT_PROJECT_ID,
            "environment": environment,
            "commit": commit,
        }
    )

    logger_provider = LoggerProvider(resource=resource)
    set_logger_provider(logger_provider)

    exporter = OTLPLogExporter(endpoint=EXPORTER_OTLP_ENDPOINT, insecure=True) if not local_debug else ConsoleLogExporter()
    logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

    logger = logging.getLogger(service_name)
    logger.setLevel(logging.DEBUG)

    handler = LoggingHandler(level=logging.DEBUG, logger_provider=logger_provider)
    logger.addHandler(handler)

    # Add console handler for stdout
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.DEBUG)
    if commit:
        formatter = logging.Formatter('commit: ' + commit + ' - %(asctime)s - %(name)s - %(levelname)s - %(message)s')
    else:
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)

    return logger


def create_tracer(
    service_name: str,
    environment: Optional[str] = "production",
    local_debug: bool = False,
) -> trace.Tracer:
    if environment is None:
        environment = "production"
    commit = os.getenv("RENDER_GIT_COMMIT", "unknown")
    provider = TracerProvider(resource=Resource.create(
        {
            "service.name": service_name,
            "highlight.project_id": HIGHLIGHT_PROJECT_ID,
            "environment": environment,
            "commit": commit,
        }
    ))
    processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=EXPORTER_OTLP_ENDPOINT, insecure=True)) if not local_debug else BatchSpanProcessor(ConsoleSpanExporter())
    provider.add_span_processor(processor)
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer(service_name)

    return tracer


def get_meter(service_name: str, environment: Optional[str] = "production", local_debug: bool = False) -> metrics.Meter:
    if environment is None:
        environment = "production"
    commit = os.getenv("RENDER_GIT_COMMIT", "unknown")

    preferred_temporality: dict[type, AggregationTemporality] = {
        Counter: AggregationTemporality.DELTA,
        UpDownCounter: AggregationTemporality.DELTA,
        Histogram: AggregationTemporality.DELTA,
    }

    readers = [PeriodicExportingMetricReader(exporter=OTLPMetricExporter(endpoint=EXPORTER_OTLP_ENDPOINT, insecure=True, preferred_temporality=preferred_temporality))]
    if local_debug:
        readers.append(PeriodicExportingMetricReader(exporter=ConsoleMetricExporter(
            preferred_temporality=preferred_temporality
        ), export_interval_millis=1000))

    provider = MeterProvider(resource=Resource.create(
        {
            "service.name": service_name,
            "highlight.project_id": HIGHLIGHT_PROJECT_ID,
            "environment": environment,
            "commit": commit,
        }
    ), metric_readers=readers)
    metrics.set_meter_provider(provider)
    meter = metrics.get_meter(service_name)
    return meter
```

Now, let’s use this setup in our Flask app. In our app entrypoint main.py, we just need to set up the OpenTelemetry resources:

```python
import os
from o11y import create_logger, create_tracer, get_meter

# Initialize observability tools
service_name = "flask-backend"
logger = create_logger(service_name, os.getenv("ENVIRONMENT"))
tracer = create_tracer(service_name, os.getenv("ENVIRONMENT"))
meter = get_meter(service_name, os.getenv("ENVIRONMENT"))

histogram = meter.create_histogram("request_duration_histogram")
gauge = meter.create_gauge("request_duration_gauge")
counter = meter.create_counter("request_count")

logger.info("Starting the application")
```

See the complete example in our Python Flask OTel GitHub repository.

Conclusion

In this guide, we’ve gone through everything you need to use OpenTelemetry in Python, including the high-level concepts as well as how to send traces and logs to your OpenTelemetry backend of choice.

If you have any questions, please feel free to reach out to us on Twitter or Discord.