Performance and tracing
If you are seeing unexpectedly slow performance, you can gather a trace to understand which operations are performed during a deployment and which are the long poles. In most cases, the most time-consuming operations are the provisioning of one or more resources in your cloud provider. However, there may be cases where Pulumi itself is doing work that limits the performance of your deployments; this may indicate an opportunity to further improve the Pulumi deployment orchestration engine to achieve the maximum parallelism and performance possible for your cloud deployment.
Tracing
To collect a trace:
$ pulumi up --tracing=file:./up.trace
To view a trace locally using AppDash:
$ PULUMI_DEBUG_COMMANDS=1 pulumi view-trace ./up.trace
Displaying trace at http://localhost:8008
Pulumi also supports Zipkin-compatible tracing. To collect a trace to a local Jaeger server:
$ docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 16686:16686 \
-p 9411:9411 \
jaegertracing/all-in-one:1.22
$ pulumi up --tracing http://localhost:9411/api/v1/spans
To view the trace locally, navigate to the Jaeger UI at http://localhost:16686.
OpenTelemetry tracing
Pulumi also supports exporting traces using OpenTelemetry via the --otel-traces flag. This produces richer traces that can be viewed using any OpenTelemetry tooling.
To save traces as OTLP JSON to a local file (this is often the most convenient option if the traces are to be shared):
$ pulumi up --otel-traces file:///tmp/traces.json
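Once you have an OTLP JSON file, you can inspect it with any OpenTelemetry tooling, or with a short script. Below is a minimal sketch that lists the longest spans in such a file; it assumes the standard OTLP/JSON layout (`resourceSpans` → `scopeSpans` → `spans`), which the file Pulumi writes may not match exactly in every version, so treat the key names as assumptions to verify against your output.

```python
import json

def slowest_spans(path, top=5):
    """Return the (duration_ms, name) pairs for the longest spans in an
    OTLP JSON trace file, slowest first.

    Assumes the standard OTLP/JSON shape:
      resourceSpans -> scopeSpans -> spans, with start/end times in
      *UnixNano fields. Adjust the keys if your file differs.
    """
    with open(path) as f:
        data = json.load(f)
    spans = []
    for rs in data.get("resourceSpans", []):
        for ss in rs.get("scopeSpans", []):
            for span in ss.get("spans", []):
                duration_ms = (
                    int(span["endTimeUnixNano"]) - int(span["startTimeUnixNano"])
                ) / 1e6
                spans.append((duration_ms, span["name"]))
    return sorted(spans, reverse=True)[:top]

# Example usage (path is hypothetical):
#   for ms, name in slowest_spans("/tmp/traces.json"):
#       print(f"{ms:10.1f} ms  {name}")
```

This gives a quick answer to "what were the long poles?" without standing up a full tracing backend.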
To send traces to an OTLP-compatible backend (such as Jaeger, Grafana Tempo, or Honeycomb) via gRPC:
$ pulumi up --otel-traces grpc://localhost:4317
Under the hood, Pulumi starts a local OTLP receiver and sets PULUMI_OTEL_EXPORTER_OTLP_ENDPOINT for all child processes, so that spans from resource providers and language hosts are collected and forwarded to your endpoint. Legacy OpenTracing plugins are automatically bridged into the OTel trace.
Both --tracing and --otel-traces can be used at the same time.
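For example, to capture both an AppDash trace and an OTLP JSON trace in a single run (the file paths here are arbitrary):
$ pulumi up --tracing=file:./up.trace --otel-traces file:///tmp/traces.json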