Overview

OpenClaw is a self-hosted AI messaging gateway that bridges chat platforms (WhatsApp, Telegram, Discord, Slack, iMessage, etc.) to AI agents. OpenClaw natively exports OpenTelemetry traces — no TraceCtrl SDK needed. You just point its OTEL export at your TraceCtrl collector and spans flow automatically.

What Gets Traced

| Span | Type | What It Captures |
| --- | --- | --- |
| `openclaw.model.usage` | LLM | Model, provider, token counts (input/output/cache), session ID |
| `openclaw.message.processed` | Agent | Channel, outcome, chat ID, message ID, session ID |
| `openclaw.webhook.processed` | Tool | Channel, webhook handler, chat ID |
| `openclaw.webhook.error` | Tool (Error) | Channel, webhook, error details |
| `openclaw.session.stuck` | Alert | Session state, age, queue depth |

TraceCtrl automatically maps these to its agent/tool/LLM schema — each channel (WhatsApp, Telegram, etc.) becomes an agent node in the topology, and model usage becomes LLM edges.
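As a rough picture of what arrives at the collector, an `openclaw.model.usage` span might carry attributes like the following. This is an illustrative sketch only — the attribute keys and values below are assumptions based on the table above, not OpenClaw's documented span schema:

```python
# Hypothetical shape of an openclaw.model.usage span payload.
# Attribute key names are illustrative, not OpenClaw's actual schema.
span = {
    "name": "openclaw.model.usage",
    "attributes": {
        "model": "claude-sonnet-4-6",      # model used for the turn
        "provider": "anthropic",           # upstream LLM provider
        "tokens.input": 1200,              # prompt tokens
        "tokens.output": 340,              # completion tokens
        "tokens.cache": 800,               # cache-read tokens
        "session.id": "abc123",            # ties the call to a chat session
    },
}
```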

Setup

1. Enable the diagnostics-otel plugin

   ```shell
   openclaw plugins enable diagnostics-otel
   ```
2. Configure OTEL export

   Add to `~/.openclaw/openclaw.json`:

   ```json
   {
     "plugins": {
       "allow": ["diagnostics-otel"],
       "entries": {
         "diagnostics-otel": { "enabled": true }
       }
     },
     "diagnostics": {
       "enabled": true,
       "otel": {
         "enabled": true,
         "endpoint": "http://localhost:4318",
         "protocol": "http/protobuf",
         "serviceName": "openclaw-gateway",
         "traces": true,
         "metrics": false,
         "logs": false,
         "sampleRate": 1.0,
         "flushIntervalMs": 5000
       }
     }
   }
   ```
OpenClaw exports via OTLP/HTTP (port 4318), not gRPC (4317). TraceCtrl’s OTel Collector accepts both protocols.
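The port/protocol mismatch above is the most common misconfiguration, and it fails silently. A small pre-flight check can catch it before restarting; this is a sketch against the config layout shown above — the helper name and warning strings are ours, not part of OpenClaw:

```python
import json

def check_otel_config(raw: str) -> list[str]:
    """Return warnings for common mistakes in the diagnostics.otel block."""
    otel = json.loads(raw).get("diagnostics", {}).get("otel", {})
    warnings = []
    if not otel.get("enabled"):
        warnings.append("diagnostics.otel.enabled is false; nothing will be exported")
    if otel.get("endpoint", "").endswith(":4317"):
        warnings.append("endpoint uses gRPC port 4317; OpenClaw only speaks OTLP/HTTP on 4318")
    if otel.get("protocol") not in (None, "http/protobuf"):
        warnings.append("protocol should be http/protobuf; gRPC is not supported")
    if not otel.get("traces", False):
        warnings.append("traces is false; TraceCtrl views require trace export")
    return warnings
```

Run it against the JSON file before `openclaw gateway restart` and fix anything it flags.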
3. Restart the gateway

   ```shell
   openclaw gateway restart
   ```
4. Verify in TraceCtrl

   Spans appear within about 5 seconds (one `flushIntervalMs` cycle). Open the dashboard and select `openclaw-gateway` from the project dropdown.

   ```shell
   tracectrl doctor   # Verify the stack is running
   curl http://localhost:8000/api/v1/projects
   # Should include "openclaw-gateway"
   ```
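The curl check above can also be scripted. A minimal sketch, assuming the projects endpoint returns a JSON array of project names (the actual response shape may differ; parsing is split out so it can run without a live stack):

```python
import json
import urllib.request

def parse_projects(body: str) -> list[str]:
    """Parse a TraceCtrl projects response, assumed to be a JSON array of names."""
    return json.loads(body)

def gateway_registered(base_url: str = "http://localhost:8000") -> bool:
    """True if openclaw-gateway shows up in TraceCtrl's project list."""
    with urllib.request.urlopen(f"{base_url}/api/v1/projects") as resp:
        return "openclaw-gateway" in parse_projects(resp.read().decode())
```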
Recommended Settings

| Setting | Dev | Production | Why |
| --- | --- | --- | --- |
| `sampleRate` | 1.0 | 0.2 | Full tracing during setup, sample at scale |
| `flushIntervalMs` | 5000 | 60000 | 5s for near-real-time debugging, 60s to reduce overhead |
| `traces` | true | true | Required for all TraceCtrl views |
| `metrics` | false | false | Not processed by TraceCtrl yet (silently dropped) |
| `logs` | false | false | High volume, not processed yet |
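To get a feel for what `sampleRate: 0.2` means in practice: OpenClaw's sampler internals aren't documented here, but rate-based head sampling commonly amounts to an independent keep/drop coin flip per trace, which this sketch simulates:

```python
import random

def sampled_count(n: int, rate: float, seed: int = 0) -> int:
    """Simulate rate-based head sampling: count how many of n traces are kept."""
    rng = random.Random(seed)  # fixed seed for a reproducible simulation
    return sum(rng.random() < rate for _ in range(n))

# At rate 1.0 every trace is exported; at 0.2 roughly one in five survives,
# cutting export volume (and TraceCtrl ingest load) by about 80%.
```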

What You’ll See

Topology

Each chat channel becomes an agent node connected to the LLM models it uses:
```
[openclaw-whatsapp] ──webhook──→ [claude-sonnet-4-6]
[openclaw-telegram] ──webhook──→ [gpt-4o]
[openclaw-discord]  ──webhook──→ [gemini-2.5-pro]
```

Sessions

Each openclaw.message.processed span appears as a session row showing channel, outcome, duration, and nested LLM calls.

Risk

  • openclaw.session.stuck spans trigger risk alerts
  • openclaw.webhook.error spans tracked as error rates
  • Webhook error patterns surface in attack path analysis

Protocol Notes

  • OpenClaw supports OTLP/HTTP (protobuf) only — gRPC is not supported
  • The endpoint should be the HTTP port (:4318), not gRPC (:4317)
  • Metrics and logs are also exported if enabled, but TraceCtrl only processes traces currently — other signals are silently dropped with no errors
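If you want to sanity-check the transport path independently of OpenClaw, you can hand-build an export request against the same `/v1/traces` path the plugin uses. The OTLP/HTTP spec also defines a JSON encoding, which the sketch below uses for readability; the payload is a minimal, illustrative resource-spans envelope, not what OpenClaw actually emits:

```python
import json
import urllib.request

def build_export_request(endpoint: str, service_name: str) -> urllib.request.Request:
    """Build a minimal OTLP/HTTP (JSON-encoded) trace export request."""
    payload = {
        "resourceSpans": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": service_name},
            }]},
            "scopeSpans": [],  # empty: enough to exercise the endpoint
        }]
    }
    return urllib.request.Request(
        f"{endpoint}/v1/traces",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` against `http://localhost:4318` should return 200 from the collector if the transport path is healthy.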

Environment Variables

You can also configure OTEL export via environment variables instead of the JSON config file:
```shell
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME=openclaw-gateway
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```

Reference