Duration: 1.5–2 hours | Level: Intermediate | Format: Hands-on lab

Using an AI assistant?

Paste the TraceCtrl agent instructions into Claude Code, Cursor, or Copilot to get contextual help throughout this workshop.

Part 1: Introduction to TraceCtrl

What is TraceCtrl?

TraceCtrl is a security observability platform for agentic AI. Our tagline captures the workflow:
  1. See — Visualize your agent’s architecture, topology, and risk posture at a glance
  2. Trace — Trace every message, tool call, and model request through the full execution lifecycle
  3. Ctrl — Control and harden your agent’s configuration with automated scanning and remediation

Components

TraceCtrl has five components:
| Component | What it does | Technology |
|---|---|---|
| CLI | tracectrl scan, fix, doctor, setup | Python (pip installable) |
| Scanner | 33 security/ops/perf/compliance checks for OpenClaw | Python |
| OTel Collector | Receives spans from agents via OTLP | OpenTelemetry Collector |
| Engine | Processes spans, builds topology, serves API | FastAPI + ClickHouse |
| Dashboard | Visualizes scans, traces, topology | React + Vite + Cytoscape.js |

Architecture

AGENT CODE ──(OTLP gRPC :4317)──▶ OTEL COLLECTOR
OPENCLAW ──(OTLP HTTP :4318)──▶ OTEL COLLECTOR (:4317 / :4318)
OTEL COLLECTOR ──(native :9000)──▶ CLICKHOUSE
CLICKHOUSE ──▶ ENGINE API (:8000) ──▶ DASHBOARD (:3000)
tracectrl scan ──(POST findings)──▶ ENGINE API
tracectrl fix ──▶ auto-remediates OpenClaw config
How data flows:
  1. Your agent (or OpenClaw) exports OpenTelemetry spans to the OTel Collector
  2. The Collector writes spans to ClickHouse (column-oriented database optimized for traces)
  3. The Engine runs a pipeline every 60 seconds — reads new spans, builds agent inventory, discovers topology edges
  4. The Dashboard queries the Engine API to render trace trees, topology graphs, and scan reports
  5. The CLI Scanner analyzes OpenClaw config files and uploads findings to the Engine
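The topology step in item 3 can be pictured with a few lines of Python. This is an illustrative sketch, not the Engine's actual code: it shows how parent/child links between spans yield graph edges.

```python
# Illustrative sketch only: derive topology edges from span parent/child
# links. Each child span's node becomes a neighbor of its parent's node.
spans = [
    {"id": "a", "parent": None, "node": "bootcamp-agent"},  # agent run
    {"id": "b", "parent": "a", "node": "anthropic"},        # LLM call
    {"id": "c", "parent": "a", "node": "calculator"},       # tool call
]
by_id = {s["id"]: s for s in spans}
edges = sorted(
    (by_id[s["parent"]]["node"], s["node"])
    for s in spans
    if s["parent"] is not None
)
print(edges)  # [('bootcamp-agent', 'anthropic'), ('bootcamp-agent', 'calculator')]
```

The real pipeline reads spans from ClickHouse in batches, but the core idea is the same: edges fall out of span parentage, so no manual wiring is needed.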

What the Scanner Checks

33 checks across four categories:
| Category | Checks | Examples |
|---|---|---|
| Security (26) | Network, credentials, tools, sandbox, SSRF, auth | Gateway exposed? Plaintext API keys? Dangerous tools allowed? |
| Operational (2) | Model config, fallbacks | Primary model set? Fallback configured? |
| Performance (2) | Sub-agent limits | Timeouts configured? Concurrency bounded? |
| Compliance (3) | Retention, isolation, redaction | Session data purged? User contexts isolated? |
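To make the check format concrete, here is a minimal sketch of what a single check such as OC-NET-001 conceptually does. The function name and return shape are invented for illustration; this is not the scanner's real implementation.

```python
# Hypothetical sketch of one check, not the scanner's real code.
def check_gateway_bind(config: dict):
    """OC-NET-001, conceptually: flag a non-loopback gateway bind."""
    bind = config.get("gateway", {}).get("bind", "loopback")
    if bind != "loopback":
        return {"id": "OC-NET-001", "severity": "CRITICAL",
                "finding": f"Gateway bind is {bind!r} — exposed to network"}
    return None  # check passes

print(check_gateway_bind({"gateway": {"bind": "0.0.0.0"}}))
```

Each of the 33 checks follows this shape: read a slice of openclaw.json, compare against a safe baseline, and emit a finding with an ID and severity when the config deviates.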

Part 2: Prerequisites

Install all of the following before the bootcamp. Each tool has platform-specific instructions and a verification step at the end.

Required Software

Git

Git Downloads

Official Git downloads for all platforms.
# Option 1: Homebrew (recommended — no dialog popup)
brew install git

# Option 2: Xcode Command Line Tools
xcode-select --install
# A system dialog will appear — click Install and wait (~5 min).
Verify
git --version
Expected: git version 2.x.x
  • macOS: If you used xcode-select --install, the system dialog must fully complete before git is available. Do not close it early.
  • Windows: Run all commands inside your WSL2 (Ubuntu) terminal, not PowerShell or Command Prompt.

Python 3.10+

Python Downloads

Official Python downloads — choose 3.12 or later.
brew install python@3.12

# If brew warns about PATH, add it:
echo 'export PATH="$(brew --prefix python@3.12)/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
Verify
python3 --version
Expected: Python 3.10.x or higher
  • python vs python3: On Ubuntu and EC2, python may point to Python 2 or not exist. Always use python3 (or python3.12 on AL2023). All bootcamp scripts use python3.
  • pyenv users: Run pyenv version to confirm the active version — pyenv takes precedence over system Python.
  • EC2: If python3.12 is not available in dnf, python3.11 also satisfies the >= 3.10 requirement.

Node.js 22+

Node.js Downloads

Official Node.js downloads — choose the 22 LTS release.
# Option 1: Homebrew
brew install node

# Option 2: nvm (recommended if you manage multiple Node versions)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.zshrc
nvm install 22 && nvm use 22
Verify
node -v && npm -v
Expected: v22.x.x and 10.x.x
  • nvm not in PATH: After installing nvm, run source ~/.bashrc (Linux) or source ~/.zshrc (macOS) before nvm is usable. Opening a new terminal also works.
  • Old Node still active: If node -v shows v14 or v16 after installing nvm, run nvm use 22 to switch the active version in that shell.
  • Never use sudo npm install -g when using nvm — nvm installs Node in your home directory and does not need elevated permissions for global packages.

Docker

Docker Desktop

Docker Desktop for macOS and Windows. Docker Engine for Linux and EC2.
# Option 1: Homebrew Cask
brew install --cask docker
# Open Docker.app from /Applications after installing.

# Option 2: Download directly from docker.com/products/docker-desktop/
Docker Desktop must be running (whale icon in menu bar) before any docker command works. It needs ~4 GB of RAM.
Verify
docker --version && docker compose version
Expected: Docker version 27.x.x and Docker Compose version v2.x.x
  • macOS — daemon not running: Cannot connect to the Docker daemon means Docker Desktop is installed but not started. Open it from Applications and wait for the whale icon.
  • Linux — permission denied on socket: You added yourself to the docker group but haven’t started a new shell session. Run newgrp docker or log out and back in.
  • WSL2 — Docker Desktop not running on Windows: Docker commands in WSL2 proxy through Docker Desktop on the host. If Docker Desktop is closed on the Windows side, all docker commands inside WSL2 will fail.
  • EC2 — group change not in effect: usermod -aG docker only applies in new sessions. Run newgrp docker to avoid re-logging in.

OpenClaw

OpenClaw Documentation

Full installation guide and platform requirements.
# Option 1: npm (requires Node.js 22+ from the previous step)
npm install -g openclaw

# Option 2: Homebrew
brew install openclaw
Verify
openclaw --version
Expected: openclaw x.x.x
  • Permission error (EACCES): If npm install -g openclaw fails with permission denied, you are using a system Node install instead of nvm. Go back to the Node.js step and install via nvm — it does not require sudo for global packages.
  • Command not found after install: Run source ~/.bashrc (or open a new terminal) to update your PATH.
  • Node version: OpenClaw requires Node.js 22+. Confirm with node -v before installing.

Quick Verification

Paste this into your terminal to check all prerequisites at once:
echo "=== Bootcamp Prerequisites Check ==="

git --version >/dev/null 2>&1 && echo "✓ Git: $(git --version)" || echo "✗ Git MISSING"

if python3 -c "import sys; assert sys.version_info >= (3,10)" 2>/dev/null; then
  echo "✓ Python: $(python3 --version) (>= 3.10)"
else
  echo "✗ Python MISSING or < 3.10 (found: $(python3 --version 2>/dev/null || echo 'not found'))"
fi

if node -e "process.exit(parseInt(process.versions.node) >= 22 ? 0 : 1)" 2>/dev/null; then
  echo "✓ Node.js: $(node -v) (>= 22)"
else
  echo "✗ Node.js MISSING or < 22 (found: $(node -v 2>/dev/null || echo 'not found'))"
fi

docker info >/dev/null 2>&1 && echo "✓ Docker: $(docker --version)" || echo "✗ Docker MISSING or daemon not running"
docker compose version >/dev/null 2>&1 && echo "✓ Docker Compose: $(docker compose version)" || echo "✗ Docker Compose v2 MISSING"
openclaw --version >/dev/null 2>&1 && echo "✓ OpenClaw: $(openclaw --version)" || echo "✗ OpenClaw MISSING"

echo "====================================="

API Keys

You’ll need an API key for an LLM provider to power your OpenClaw agent. Pick one from the options below — Gemma via Google AI Studio is free and works well for the bootcamp.
| Provider | Free Tier | Sign Up |
|---|---|---|
| Google AI Studio (Gemma 3 / Gemini) | Yes — free API key, no credit card | aistudio.google.com |
| Anthropic (Claude) | No — requires billing | console.anthropic.com |
| OpenAI (GPT-4o) | No — requires billing | platform.openai.com |
| Groq (Llama, Mixtral) | Yes — free tier available | console.groq.com |
Recommended for bootcamp: Get a free Google AI Studio key. Go to aistudio.google.com, sign in with a Google account, and click Get API key. No credit card required.

Part 3: OpenClaw Setup

What is OpenClaw?

OpenClaw is a self-hosted AI agent gateway. It connects LLMs to messaging channels (Telegram, WhatsApp, Discord, Slack, WebChat) and manages agent configuration, tool access, session state, and model routing — all from a single JSON config file.

OpenClaw Documentation

Full reference: installation, configuration, plugins, channels, and tools.

Run the Setup Wizard

openclaw
The interactive wizard guides you through:
  1. Model provider — Select Anthropic and enter the API key
  2. Channel — Choose WebChat for the bootcamp (no external accounts needed)
  3. Identity — Name your agent (e.g., “Bootcamp Agent”)
API Key: Ask the facilitator for the Anthropic API key. It can also be found in the Notion document “LLM Providers (API Keys)”.

Understand the Configuration

After setup, your config lives at ~/.openclaw/openclaw.json (or a custom path if you chose one during the wizard). Not sure of your workspace path? Run this in your normal user shell (not sudo):
openclaw configure
The first line of output shows your workspace location. Copy the absolute path — for example /home/yourname/.openclaw — and use that wherever the docs reference ~/.openclaw.
~ changes under sudo: If you ever run a command as a sudoer or with sudo su, ~ resolves to /root/ — not your home directory. So ~/.openclaw silently becomes /root/.openclaw, which doesn’t exist. Always use the absolute path (e.g. /home/yourname/.openclaw/) when running any TraceCtrl command in an elevated shell. Run openclaw configure as your normal user to get it.
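You can watch this tilde behavior directly in Python, since ~ expands through the HOME environment variable (the paths below are examples):

```python
import os

# ~ expands via $HOME, so the same literal path means different
# things depending on which user's environment is active.
os.environ["HOME"] = "/home/yourname"
print(os.path.expanduser("~/.openclaw"))  # /home/yourname/.openclaw

# Under sudo/root, HOME is /root, so the path silently changes:
os.environ["HOME"] = "/root"
print(os.path.expanduser("~/.openclaw"))  # /root/.openclaw
```

Shells expand ~ the same way, which is why an elevated session quietly points every ~/.openclaw reference at root's home.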
Here are the key sections:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-20250514"
      }
    }
  },
  "channels": {
    "webchat": {
      "enabled": true,
      "dmPolicy": "open"
    }
  },
  "gateway": {
    "bind": "loopback",
    "port": 18789
  },
  "tools": {
    "profile": "full"
  }
}
| Section | What it controls |
|---|---|
| agents.defaults.model | Which LLM to use (primary + fallbacks) |
| channels | Messaging channels and their DM/group policies |
| gateway | Network binding, authentication, TLS |
| tools | Which tools the agent can access (full, coding, messaging, minimal) |
| session | Conversation scope (main, per-peer, per-channel-peer) and retention |
| logging | Log level (info, debug, trace) and sensitive data redaction |
| diagnostics | OpenTelemetry export for runtime traces |
| plugins | Which plugins are enabled (providers, channels, tools) |

OpenClaw Configuration Reference

Full configuration reference with all options.

Enable Diagnostics (OTEL Export)

For TraceCtrl to receive runtime traces from OpenClaw, enable the diagnostics-otel plugin. Edit ~/.openclaw/openclaw.json and add these sections:
{
  "plugins": {
    "allow": ["diagnostics-otel"]
  },
  "diagnostics": {
    "enabled": true,
    "otel": {
      "enabled": true,
      "endpoint": "http://localhost:4318",
      "protocol": "http/protobuf",
      "serviceName": "openclaw-gateway",
      "traces": true,
      "metrics": false,
      "logs": false,
      "sampleRate": 1.0,
      "flushIntervalMs": 5000
    }
  }
}
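If your openclaw.json already has other sections, merging this snippet by hand is error-prone. A recursive dict merge keeps your existing keys intact; the helper below is our own sketch, not an official OpenClaw or TraceCtrl tool:

```python
import json

def deep_merge(base: dict, extra: dict) -> dict:
    """Recursively merge `extra` into `base`, preserving unrelated keys."""
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

existing = {"gateway": {"bind": "loopback", "port": 18789}}
snippet = {
    "plugins": {"allow": ["diagnostics-otel"]},
    "diagnostics": {"enabled": True, "otel": {"enabled": True}},
}
merged = deep_merge(existing, snippet)
print(json.dumps(merged, indent=2))  # gateway section survives untouched
```

Point the same logic at your real config file (load with json.load, write back with json.dump) and the diagnostics sections land without clobbering anything else.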
Restart OpenClaw to apply:
openclaw gateway restart

Test Your Agent

Open the OpenClaw Control UI at http://localhost:18789 and send a test message via WebChat. If you set up Telegram, you can message your bot there instead.
Checkpoint: You can chat with your agent via WebChat or Telegram. OpenClaw is running with diagnostics enabled.

Part 4: TraceCtrl Setup

Clone and Install

# Clone the repository
git clone https://github.com/tracectrl/tracectrl.git
cd tracectrl

# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install the CLI and Scanner
pip install ./scanner
pip install ./sdk/tracectrl

# Verify
tracectrl version

⭐ Star TraceCtrl on GitHub

You just cloned it — if it looks useful, a star helps others find the project. Takes 2 seconds.

Start the Stack

tracectrl setup
This launches the interactive setup wizard, which guides you through configuration and starts the Docker containers.
  • Docker socket access: tracectrl setup starts Docker containers and needs access to the Docker socket. If you see permission denied while trying to connect to the Docker daemon socket, your user is not yet in the docker group — go back to the Docker install step and run newgrp docker (or log out and back in).
  • Do not run sudo tracectrl setup (or sudo su before running it). When you elevate to root, ~ resolves to /root/ instead of your home directory. Any path that was relative (like ~/.openclaw) will silently point to the wrong location and the TUI will fail to find your OpenClaw workspace.
Alternatively, you can start containers directly:
docker compose up -d
This starts four containers:
| Container | Port | Purpose |
|---|---|---|
| clickhouse | 8123, 9000 | Span storage (ClickHouse database) |
| otel-collector | 4317, 4318 | Receives OpenTelemetry spans |
| tracectrl-engine | 8000 | Intelligence Engine (API + pipeline) |
| tracectrl-ui | 3000 | Dashboard (React web app) |
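Before running the doctor, you can also probe the four published ports with a few lines of Python (an informal readiness check, not part of TraceCtrl; the port numbers come from the container table above):

```python
import socket

def is_up(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is accepting TCP connections on host:port."""
    with socket.socket() as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

PORTS = {"clickhouse": 8123, "otel-collector": 4318,
         "tracectrl-engine": 8000, "tracectrl-ui": 3000}

for name, port in PORTS.items():
    print(f"{name:17} :{port}  {'up' if is_up(port) else 'down'}")
```

A port showing "down" right after docker compose up usually just means the container is still starting; give it the ~30 seconds mentioned below.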
Wait ~30 seconds for all services to start, then verify:
tracectrl doctor
Expected output:
TraceCtrl Doctor

  Services:
    [OK]   Engine API (http://localhost:8000/api/v1/health)
    [OK]   Dashboard UI (http://localhost:3000)
    [OK]   OTel Collector (http://localhost:4318/v1/traces)

  Docker:
    [OK]   Docker is installed

  All checks passed.

Open the Dashboard

Open http://localhost:3000 in your browser. You’ll see the TraceCtrl dashboard — currently empty. We’ll populate it in the next sections.
Checkpoint: tracectrl doctor shows all green. Dashboard loads at localhost:3000.

Part 5: Static Scan & Risk Topology

Run the Scan

First, get your absolute workspace path — run this in your normal user shell (not as sudo):
openclaw configure
# The first line shows your workspace path, e.g. /home/yourname/.openclaw
Then run the scan using that absolute path:
tracectrl scan /home/yourname/.openclaw/
  • Do not use ~ if you are running tracectrl as a sudoer or in a sudo su session. In an elevated shell, ~ resolves to /root/ — not your home directory — so ~/.openclaw points to a path that doesn’t exist and the scan will fail silently.
  • Always use the full absolute path (e.g. /home/yourname/.openclaw/ or /Users/yourname/.openclaw/) when running scans from an elevated context. Run openclaw configure as your normal user to get the correct path.
The scanner reads your openclaw.json, runs 33 checks, and uploads results to the dashboard. Terminal output shows a severity summary, findings table, compound risk signals, and topology stats:
CRITICAL 5 · HIGH 14 · MEDIUM 14 · PASS 0

| Check ID | Severity | Finding |
|---|---|---|
| OC-NET-001 | CRITICAL | Gateway bind is non-loopback — exposed to network |
| OC-SEC-001 | CRITICAL | No auth on network-exposed gateway |
| OC-SEC-002 | CRITICAL | Dangerous flags active: browser.ssrfPolicy… |
| OC-TOOL-001 | CRITICAL | bash/exec tools enable arbitrary command execution |
| OC-TOOL-002 | CRITICAL | Wildcard tool permission — all tools permitted |
| OC-CRED-001 | HIGH | Plaintext keys found at: models.providers.vllm.apiKey |
| OC-SEC-004 | HIGH | sandbox.mode is “off” — no tool isolation |
| OC-SEC-005 | HIGH | Browser SSRF: dangerouslyAllowPrivateNetwork is true |
| OC-OPS-001 | HIGH | No primary model configured |

… 24 more findings (33 checks total)
Compound Risk Signals
[HIGH] COMPOUND-004 Plaintext credentials + public channel = credential exfiltration risk
Topology: 9 nodes · 8 edges

View in the Dashboard

Open http://localhost:3000/scan to see:
  1. Severity cards — CRITICAL / HIGH / MEDIUM / PASS counts
  2. Architecture risk topology — your OpenClaw architecture with risk-colored nodes
  3. Category sections — findings grouped by Security, Operational, Performance, Compliance

Reading the Topology Graph

| Node Color | Type |
|---|---|
| Teal | Ingress channels (Telegram, WebChat) |
| Blue | Agents |
| Purple | LLM Providers (Anthropic, OpenAI) |
| Green | Tools |
| Orange | Extensions/Plugins |
Nodes with red borders have CRITICAL findings. Orange borders = HIGH. Yellow = MEDIUM.

Exploring Findings

Expand each category section to see individual findings. Click any finding to see:
  • What was found — the specific misconfiguration
  • Why it matters — the security/operational rationale
  • How to fix it — step-by-step remediation

Auto-Fix Critical Findings

tracectrl fix /home/yourname/.openclaw/ --auto
This automatically remediates the most common findings:
| Finding | What the fix does |
|---|---|
| OC-NET-001 Gateway exposed | Sets gateway.bind = "loopback" |
| OC-TOOL-001 Dangerous tools | Removes bash, exec from allow lists |
| OC-TOOL-002 Wildcard tools | Removes * from tools.allow |
| OC-ING-001 Open DM policy | Sets dmPolicy = "pairing" |
| OC-PERS-001 Cron enabled | Sets cron.enabled = false |
| OC-LOG-002 Debug logging | Sets logging.level = "info" |
The CLI creates a .bak backup, applies fixes, re-scans, and uploads results:
  ✓ OC-NET-001: Set gateway.bind = "loopback"
  ✓ OC-TOOL-002: Removed "*" from tools.allow

  Before: 12 findings → After: 6 findings
  Results uploaded to engine.
Refresh the dashboard to see the updated report.

Manual Fixes

Some findings require manual remediation. Here are the most important ones:

Plaintext API Keys (OC-CRED-001)

Problem: API keys stored directly in openclaw.json. Fix: Replace with environment variable references:
// Before
"apiKey": "sk-ant-abc123..."

// After
"apiKey": "${ANTHROPIC_API_KEY}"
Then set the variable in your shell:
export ANTHROPIC_API_KEY="sk-ant-abc123..."
Or use openclaw configure to reconfigure the provider with an env var.
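Conceptually, the ${VAR} reference is replaced with the environment variable's value when the config is loaded. The sketch below illustrates that substitution; it is our assumption of the behavior, not OpenClaw's actual loader code:

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} references with the VAR environment variable's value."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), ""), value)

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"
print(expand_env("${ANTHROPIC_API_KEY}"))  # sk-ant-example
```

The security win is that the secret now lives only in the process environment, so the config file can be committed, backed up, or scanned without leaking the key.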

Browser SSRF Policy (OC-SEC-002)

Problem: browser.ssrfPolicy.dangerouslyAllowPrivateNetwork is true by default — the agent’s browser can reach your internal network. Fix: Add to openclaw.json:
{
  "browser": {
    "ssrfPolicy": {
      "dangerouslyAllowPrivateNetwork": false
    }
  }
}

Sandbox Not Enabled (OC-SEC-004)

Problem: Agent tools run directly on the host without isolation. Fix:
{
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "non-main"
      }
    }
  }
}

Session Scope Shared (OC-COMP-002)

Problem: All users share the same conversation context. Fix:
{
  "session": {
    "dmScope": "per-channel-peer"
  }
}

OpenClaw Security Guide

Full security configuration reference.

Re-scan After Manual Fixes

# Use your absolute workspace path (same one from the scan step above)
tracectrl scan /home/yourname/.openclaw/
Refresh the dashboard — your findings should be significantly reduced.
Checkpoint: 0 CRITICAL findings. Dashboard shows the updated scan report with risk topology.

Part 6: Strands Agent + Runtime Telemetry

What is Strands?

AWS Strands is an open-source SDK for building AI agents with tool use. TraceCtrl instruments Strands agents to capture every execution — agent runs, LLM calls, and tool invocations — as OpenTelemetry spans.

Install the Strands Instrumentation

From the TraceCtrl repo root:
pip install ./sdk/tracectrl-instrumentation-strands
pip install strands-agents strands-agents-tools

Create a Strands Agent

Create a file called my_agent.py:
import tracectrl
from tracectrl.instrumentation.strands import StrandsInstrumentor

# 1. Configure TraceCtrl — sends spans to the local OTel Collector
tracectrl.configure(service_name="bootcamp-agent")

# 2. Instrument Strands — wraps all agent/LLM/tool calls with span creation
StrandsInstrumentor().instrument()

# 3. Build your agent
from strands import Agent
from strands_tools import calculator

agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",
    tools=[calculator],
    system_prompt="You are a helpful assistant. Use the calculator tool for math."
)

# 4. Run it
response = agent("What is 42 * 17 + 389?")
print(response)
API Key: Set your credentials before running:
export AWS_PROFILE=default
# or
export ANTHROPIC_API_KEY="sk-ant-..."
Ask the facilitator for keys, or check the Notion document “LLM Providers (API Keys)”.

Run the Agent

python my_agent.py
You should see the agent respond with the calculation result.

View the Traces

Open http://localhost:3000/sessions. You’ll see a new trace for bootcamp-agent. Click the row to expand the trace tree:
  • Agent span — the top-level agent execution
  • LLM spans — each model call with input/output and token counts
  • Tool spans — calculator tool with arguments and result
Click any span to see its detail panel with timing, attributes, and input/output values.

View the Topology

Open http://localhost:3000/topology. The runtime topology graph now shows:
  • Agent node (bootcamp-agent) → connected to the LLM provider (Anthropic/Bedrock)
  • Agent node → connected to tool nodes (calculator)
This graph is built automatically from the trace data — no configuration needed.

Try More Complex Scenarios

Multi-tool Agent

from strands import Agent
from strands_tools import calculator, http_request

agent = Agent(
    model="us.anthropic.claude-sonnet-4-20250514-v1:0",
    tools=[calculator, http_request],
    system_prompt="You are a research assistant with calculator and web access."
)

response = agent("What is the population of Singapore divided by its area in km²?")
print(response)

Multiple Runs

Run the agent several times to build up session history:
questions = [
    "What is 2^10?",
    "Calculate the square root of 144",
    "What is 15% of 2500?",
]

for q in questions:
    print(f"\n> {q}")
    print(agent(q))
Each run creates a new session in the dashboard. The topology grows as more tools and models are observed.

What Gets Captured

| Span Type | Key Attributes |
|---|---|
| Agent | agent.name, openinference.span.kind=AGENT |
| LLM | llm.model_name, input.value, output.value, token counts |
| Tool | tool.name, tool.description, input/output, risk category |
| Session | tracectrl.session_id linking all spans in a conversation |
TraceCtrl’s TraceCtrlSpanProcessor automatically enriches every span with:
  • Tool risk category — one of 8 categories (filesystem, network, code_execution, data_store, web_browsing, communication, system, other)
  • Agent identity — framework detection and agent naming
  • Session correlation — links all spans from a single conversation
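As a rough sketch of the risk-categorization step: the mapping below is invented for illustration and will differ from TraceCtrl's real logic, but it shows the shape of the enrichment.

```python
# Invented mapping for illustration, not TraceCtrl's actual table.
RISK_CATEGORIES = {
    "calculator": "other",
    "http_request": "network",
    "bash": "code_execution",
    "file_read": "filesystem",
}

def enrich(attributes: dict) -> dict:
    """Attach a tool.risk_category attribute when a tool.name is present."""
    tool = attributes.get("tool.name")
    if tool:
        attributes["tool.risk_category"] = RISK_CATEGORIES.get(tool, "other")
    return attributes

print(enrich({"tool.name": "http_request"}))
# {'tool.name': 'http_request', 'tool.risk_category': 'network'}
```

Because the enrichment runs in the span processor, every exported span carries the category, and the dashboard can color tool nodes by risk without any per-agent configuration.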
Checkpoint: Sessions page shows traces with agent → LLM → tool spans. Topology page shows your agent connected to its tools and LLM provider.

Summary

What You Built

| Component | Status |
|---|---|
| TraceCtrl stack (Engine + Dashboard + Collector + ClickHouse) | Running via Docker |
| OpenClaw agent with WebChat | Configured and chatting |
| Security scan (33 checks) | Findings visible on dashboard |
| Auto-remediation | Critical findings fixed |
| Manual hardening | SSRF, sandbox, session isolation configured |
| Strands agent with TraceCtrl instrumentation | Traces and topology visible |

CLI Reference

tracectrl scan [path]          # Run 33-check security scan
tracectrl fix --auto           # Auto-fix + rescan + upload
tracectrl doctor               # Verify all services are healthy
tracectrl version              # Print version
tracectrl install-plugin       # Install the OpenClaw telemetry plugin

TraceCtrl GitHub

Source code and issues.

OpenClaw Docs

Full OpenClaw reference.

OpenTelemetry

The observability standard TraceCtrl is built on.

Strands Agents

AWS Strands agent framework.