Kibana Observability Team · Dev Workflow Proposal

tmux + Git Worktrees
for Multi-Branch Kibana Dev

A structured local development setup that eliminates context-switching overhead, lets multiple Kibana instances run simultaneously, and automates the full bootstrap → ES → Kibana startup sequence.

tmux · git worktrees · dev-start.sh · cursor-agent · auto port assignment · playwright-mcp

The Problem with Today's Setup

Before — tabs + manual switching
  • 5–10 terminal tabs with no structure
  • Switching branches = stop servers, stash, checkout, restart
  • No way to run two Kibana instances simultaneously
  • Context lost when terminal closes or machine restarts
  • Manual port management across sessions
  • cursor-agent has no awareness of which Kibana URL to use
  • Full startup sequence done manually every time
After — tmux + worktrees + dev-start.sh
  • Named sessions with structured windows per concern
  • Switch branches = Ctrl-a s, no restarts needed
  • kibana-main and kibana-feat run concurrently on different ports
  • Sessions survive terminal close, persist until machine reboot
  • Ports auto-assigned, never conflict
  • cursor-agent lives inside each session, scoped to that worktree
  • One command starts everything automatically

Sessions, Windows & Panes

There are two permanent sessions — kibana-feat and kibana-main — always running in the background. On top of that, you can spin up as many temporary sessions as you need with ~/dev-start.sh new <branch> — for PR reviews, hotfixes, testing a colleague's branch, etc. Kill them when done. Switch between any session instantly with Ctrl-a s.
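As a sketch, a session with the window layout described below could be built from plain tmux commands like these. The window names come from this doc; the function name, pane splits, and paths are illustrative assumptions, not the exact dev-start.sh implementation:

```shell
# Hypothetical sketch of session creation — dev-start.sh automates this.
create_session() {
  local session="$1" worktree="$2"
  # detached session whose first window is "servers"
  tmux new-session -d -s "$session" -c "$worktree" -n servers
  tmux split-window -h -t "$session:servers" -c "$worktree"   # left: ES · right: Kibana
  for win in cursor scripts git editor checks ftr; do
    tmux new-window -t "$session" -n "$win" -c "$worktree"
  done
  tmux split-window -h -t "$session:cursor" -c "$worktree"    # agent + spare shell
}
# Example: create_session kibana-feat ~/worktrees/slo-filters
```

The session stays alive in the background until you explicitly kill it, which is what makes Ctrl-a s switching instant.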

Design rationale

kibana-feat always runs on the default ports — Kibana :5601 and ES :9200 — because that's where most of your day-to-day development happens. No port memorization needed.

kibana-main is kept clean on :5602 / :9201, reserved for checking the latest state of main without interfering with your feature work.

Temporary sessions get ports auto-assigned from :5603+. Spin up as many as you want — they never conflict with each other or with the permanent sessions.
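The auto-assignment above could work something like this sketch — scan upward from the base port until nothing is listening. This is an assumed approach, not necessarily the exact dev-start.sh logic:

```shell
# Sketch: first free TCP port at or above the given base (assumed logic).
next_free_port() {
  local port=${1:-5603}
  # bash's /dev/tcp connect succeeds only if something is already listening
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}
```

Because each new session probes before binding, temporary sessions can't collide with the permanent ones on :5601/:5602 or with each other.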

When you finish a feature and move to a new one, run ~/dev-start.sh switch <new-branch>. Your new feature branch will be set up in kibana-feat, automatically configured on the default ports. You don't have to handle any of this yourself.

kibana-feat (permanent)
  0  servers   left: ES · right: Kibana
  1  cursor    left: cursor-agent · right: shell
  2  scripts   left: run-data synthetics · right: run-data slo
  3  git       single pane
  4  editor    vim, /etc/hosts, configs
  5  checks    eslint · type check · jest
  6  ftr       left: server · right: runner

kibana-main (permanent)
  same window layout as kibana-feat (windows 0–6)

kibana-<branch> (temporary)
  0  servers   left: ES · right: Kibana
  1  cursor    left: cursor-agent · right: shell
  2  scripts   left: run-data synthetics · right: run-data slo
  3  git       single pane
  4  editor    single pane
  ↳ lightweight, killed when done

The Servers Window (most important)

Terminal — kibana-feat   [0:servers 1:cursor 2:scripts 3:git 4:editor 5:checks 6:ftr]

left pane — ES + bootstrap
  $ kbn-start.sh feat-cluster \
      --kibana-port 5601 \
      --es-port 9200
  ▶ yarn kbn bootstrap...
  ▶ Starting ES...
  {"message":"started"}
  ✓ kbn/es setup complete
  → auto-firing Kibana in right pane

right pane — Kibana (auto-started)
  $ yarn start --no-base-path \
      --host kibana-feat.local \
      --port 5601
  Server running at http://kibana-feat.local:5601
  Kibana is now available

The Cursor Window (AI coding)

Terminal — kibana-feat   [0:servers 1:cursor 2:scripts 3:git 4:editor 5:checks 6:ftr]

left pane (~60%) — cursor-agent
  $ cursor-agent
  ▶ Starting cursor agent...
  Context: ~/worktrees/slo-filters
  Agent ready. How can I help?

right pane (~40%) — spare shell
  $ cd ~/worktrees/slo-filters
  # run commands the agent suggests, check output, git status, browse files

Multiple Branches, Simultaneously on Disk

Git worktrees allow multiple branches to be checked out at the same time in separate directories, all sharing a single .git folder. No stashing, no switching, no interrupting running servers.
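For reference, the raw git commands behind this — shown here against a throwaway repo so the snippet is runnable anywhere; dev-start.sh runs the equivalent against ~/kibana and ~/worktrees:

```shell
# Demonstrate the worktree mechanics in a scratch repo (paths are throwaway).
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.name=demo -c user.email=demo@local \
  commit -q --allow-empty -m "init"
# second checkout on a new branch, sharing the same .git object store:
git -C "$repo" worktree add -q -b slo-filters "$repo-slo-filters"
git -C "$repo" worktree list                          # both checkouts listed
git -C "$repo" worktree remove "$repo-slo-filters"    # clean removal when done
```

`git worktree remove` refuses to delete a dirty checkout, which is why the kill command can clean up safely.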

~/  (home directory)
├── kibana/                        ← main repo (main branch) · :5602 / :9201
│   ├── config/kibana.dev.yml        auto-generated · port 5602
│   └── .cursor/mcp.json             KIBANA_URL=http://kibana-main.local:5602
├── worktrees/
│   ├── slo-filters/               ← feat worktree (current feature) · :5601 / :9200
│   │   ├── config/kibana.dev.yml    auto-generated · port 5601
│   │   └── .cursor/mcp.json         KIBANA_URL=http://kibana-feat.local:5601
│   └── slo-crash/                 ← temporary worktree (new command) · :5603 / :9202
│       ├── config/kibana.dev.yml    auto-generated · port 5603
│       └── .cursor/mcp.json         KIBANA_URL=http://localhost:5603
└── .kibana-dev-state              tracks current feat branch + dir
Key insight

All worktrees share the same .git history — only working files differ. node_modules is separate per worktree (one-time yarn kbn bootstrap per worktree).

Port assignment
  • kibana-main → Kibana :5602 · ES :9201
  • kibana-feat → Kibana :5601 · ES :9200
  • kibana-<branch> → auto-assigned from :5603+ (created by new command)

The Auto-Start Sequence

Running ~/dev-start.sh triggers a fully automated startup. You never manually run bootstrap or start servers.

1. ~/dev-start.sh
   reads ~/.kibana-dev-state, creates all sessions
2. kbn-start.sh
   sent to the left pane of each servers window
3. nvm use
   switches to the correct Node version (reads .nvmrc)
4. yarn kbn bootstrap
   installs deps for this worktree
5. detect remote ES in kibana.dev.yml
   checks for a non-localhost ES URL (supports template + oblt-cli YAML formats)
   LOCAL  → yarn es snapshot --license trial
            starts ES, pipes to /tmp/es-*.log, tail watches for the trigger
   REMOTE → skip local ES entirely
            ES already running in the cloud (oblt-cli), Kibana reads the host from config
6. nvm use + yarn start (auto-fired)
   sent to the right pane — immediately for remote ES, after the trigger for local ES
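The local-ES branch of this sequence hinges on the log trigger. A rough sketch of that wait — the trigger string, log path, and tmux pane target are assumptions taken from this doc, not verified against kbn-start.sh:

```shell
# Sketch of the local-ES trigger wait (illustrative values).
ES_LOG=$(mktemp)                      # stand-in for /tmp/es-feat.log
TRIGGER='"message":"started"'
echo "$TRIGGER" >> "$ES_LOG"          # ES appends this line once it is up
# poll the log until the trigger line appears, then fire Kibana
until grep -q "$TRIGGER" "$ES_LOG" 2>/dev/null; do sleep 1; done
echo "ES ready"
# tmux send-keys -t kibana-feat:servers.1 'nvm use && yarn start' Enter
```

Anything that depends on a fixed log line is version-sensitive — see the edge-case note at the end about the trigger string going stale.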

Working with Remote ES (oblt-cli)

When you need to test against a real Elastic Cloud cluster (e.g. via oblt-cli), use the --remote flag. It reads credentials from ~/.kibana-remote-es.yml and generates the correct config automatically — no manual editing needed.

Usage

~/dev-start.sh switch <branch> --remote — switch to remote ES

~/dev-start.sh switch <branch> — switch back to local ES

Credentials are stored once in ~/.kibana-remote-es.yml (outside any git repo). Update it when your cluster expires.

Local ES (default)
elasticsearch.hosts:
  - "http://localhost:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "changeme"

kbn-start.sh starts ES locally, watches for trigger, then auto-fires Kibana.

Remote ES (oblt-cli / cloud)
elasticsearch:
  hosts: https://my-cluster.elastic.co:443
  username: kibana_system_user
  password: <from oblt-cli>

kbn-start.sh detects the remote URL, skips local ES, starts Kibana directly. Fleet config (agent policies, packages) is included with the Fleet ES output pointing to the remote cluster.

How detection works

kbn-start.sh greps kibana.dev.yml for http(s):// URLs, skipping any that contain localhost or 127.0.0.1. It handles both YAML formats:

Template format: - "http://..."   |   oblt-cli format: hosts: https://...
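A minimal sketch of that detection, assuming a grep pipeline roughly like this (the real kbn-start.sh pattern may differ):

```shell
# Return success if the config contains a non-local http(s) ES URL.
has_remote_es() {
  grep -v '^[[:space:]]*#' "$1" 2>/dev/null \
    | grep -Eo 'https?://[^"[:space:]]+' \
    | grep -vE 'localhost|127\.0\.0\.1' \
    | grep -q .
}
```

Both formats reduce to the same thing: the template's quoted list item and oblt-cli's bare `hosts:` value each yield a URL token, and the localhost filter decides local vs remote.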

Use --remote to switch to remote ES, or omit it to switch back to local — no manual config editing needed. Note that without --remote, switch preserves an existing kibana.dev.yml; delete the file and re-run to regenerate it from the template.

Data Ingestion (run-data.sh)

The scripts window has two panes pre-populated with ingestion commands. Just press Enter when Kibana is ready — the script waits automatically.
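The waiting behaviour could be as simple as polling Kibana's status endpoint. A sketch — /api/status is Kibana's standard status API, but the function name, default URL, and interval here are assumptions:

```shell
# Block until Kibana answers on its status endpoint (assumed polling logic).
wait_for_kibana() {
  local url=${1:-http://localhost:5601}
  until curl -fsS "$url/api/status" >/dev/null 2>&1; do
    sleep 5
  done
  echo "Kibana is up at $url"
}
```

Pre-populating the command and letting the script poll means you can hit Enter immediately after starting the session without watching the servers window.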

run-data.sh slo

Ingests SLO test data using data_forge.js with the fake_stack dataset. Reads ES host and credentials from kibana.dev.yml.

For remote ES: auto-reduces concurrency (1 worker, payload 1000) to avoid timeouts. Always uses the elastic superuser for data writes.

run-data.sh synthetics

Creates a synthetics private location with Fleet Server + Elastic Agent. Works on both local and remote ES.

On local ES, uses synthetics_private_location.js which starts Fleet Server, enrolls an agent, and creates the private location. On remote ES, orchestrates Docker containers directly (Fleet Server + elastic-agent-complete) with proper credentials, updates Fleet Server host to local Docker, and enrolls the agent — so monitors actually run instead of staying in "Pending".

Branch-Scoped Checks (run-checks.sh)

The checks window runs lint, typecheck, and jest scoped to files and plugins you've changed on the branch (compared to upstream/main via git merge-base).

Three panes, three commands

Top-left: run-checks.sh lint — eslint on changed .ts/.tsx/.js files

Top-right: run-checks.sh typecheck — tsc per changed plugin

Bottom: run-checks.sh jest — jest per changed plugin

Commands are pre-populated but not auto-run — press Enter in each pane when ready.

How This Looks Day-to-Day

1. Morning — start everything
   One command recreates all sessions if they don't exist, or reattaches to kibana-feat if they do.
   $ ~/dev-start.sh

2. Working on your feature
   Land in kibana-feat by default. All windows are ready. Kibana auto-starts once ES is up. Jump to the cursor window to run cursor-agent in your worktree context.
   Ctrl-a w  → jump to any window

3. Check main branch behavior
   Switch to kibana-main without touching your feature session. Both Kibana instances run concurrently on different ports.
   Ctrl-a s  → select kibana-main

4. Review a colleague's PR (when it needs to run locally)
   Spin up a lightweight session on its own port alongside your running kibana-feat. Reuses an existing worktree if already checked out. Kill it cleanly when done.
   $ ~/dev-start.sh new feat/colleague-branch
   $ ~/dev-start.sh kill colleague-branch

5. Starting a new feature (every few days/weeks)
   Replaces your kibana-feat branch. Kills the old session, removes the old worktree, creates a new one, regenerates config files, and rebuilds the session.
   $ ~/dev-start.sh switch feature/burn-rate-fix

6. End of day
   Just detach. Sessions keep running in the background. Tomorrow, reattach and everything is exactly where you left it.
   Ctrl-a d

Command Reference

dev-start.sh commands
  ./dev-start.sh                               start / attach all sessions
  ./dev-start.sh switch <branch>               switch feat to a new branch (local ES)
  ./dev-start.sh switch <branch> --remote      switch with remote ES (reads ~/.kibana-remote-es.yml)
  ./dev-start.sh new <branch>                  spin up a temporary session for any branch (PR review, hotfix, testing)
  ./dev-start.sh new <branch> --remote         temporary session with remote ES
  ./dev-start.sh new <branch> --full           create a full session
  ./dev-start.sh kill <branch>                 kill session + remove worktree
  ./dev-start.sh kill-all                      kill all kibana-* sessions
  ./dev-start.sh prune                         remove orphaned worktrees (no active tmux session) — interactive, pick one or all
  ./dev-start.sh list                          list sessions, ports, worktrees + mismatch warnings
  ./dev-start.sh sync [target] [--remote|--local]
                                               regenerate kibana.dev.yml — --remote switches to remote ES, --local back to local
  ./dev-start.sh status                        health check — ping ES + Kibana for all active sessions
  ./dev-start.sh restart main|feat|<branch>    restart ES + Kibana (auto-rebuilds session if panes are missing)
  ./dev-start.sh clean                         list ES data folders + sizes
  ./dev-start.sh clean main|feat|all           delete ES data to start fresh
  ./dev-start.sh renew                         refresh remote ES credentials — auto-creates cluster if destroyed
  ./dev-start.sh setup                         interactive config wizard — paths, ports, symlinks

helper scripts
  run-data.sh slo           ingest SLO fake_stack data via data_forge.js — waits for Kibana, reads creds from config
  run-data.sh synthetics    create synthetics private location with Fleet Server + Agent (local ES via script, remote ES via Docker containers)
  run-data.sh fleet-reset   wipe all Fleet state (signing keys, policies, private locations) — restart Kibana after
  run-checks.sh lint        eslint on changed .ts/.tsx/.js files (scoped to branch via merge-base)
  run-checks.sh typecheck   tsc per changed plugin (scoped to branch)
  run-checks.sh jest        jest per changed plugin (scoped to branch)

tmux shortcuts (prefix: Ctrl-a)
  Ctrl-a s        session switcher
  Ctrl-a w        window overview
  Ctrl-a [0-9]    jump to window by number
  Ctrl-a d        detach (sessions keep running)
  Ctrl-a [        scroll / copy mode
  Ctrl-a | / -    split pane vertical / horizontal

ports at a glance
  kibana-main       Kibana :5602 · ES :9201
  kibana-feat       Kibana :5601 · ES :9200
  kibana-<branch>   auto-assigned :5603+ · ES :9202+ — created by the new command

generated files per worktree
  config/kibana.dev.yml     generated from template with correct ports — preserved on switch if it already exists (supports remote ES via oblt-cli)
  .cursor/mcp.json          KIBANA_URL for playwright-mcp, scoped per worktree
  ~/.kibana-dev.conf        user config — paths, ports, overrides (generated by setup)
  ~/.kibana-dev-state       tracks current feat branch + path
  ~/.kibana-remote-es.yml   remote ES credentials for --remote — updated automatically by renew or manually from oblt-cli
  ~/Documents/Development/es_data/
                            ES data stored outside the Kibana repo — one folder per session, wipe with clean

Test Suite

Running tests
  ./tests/run-tests.sh             run all 86 tests across 4 suites
  ./tests/run-tests.sh config      config generation only
  ./tests/run-tests.sh detection   remote ES detection only
  ./tests/run-tests.sh run-data    run-data.sh parsing only
  ./tests/run-tests.sh arg         argument parsing only

Test suites
  test-config-generation   local + remote kibana.dev.yml generation, port substitution, placeholder removal, server block stripping, switching between local/remote (24 tests)
  test-es-detection        grep pattern matching for both YAML formats (template + oblt-cli), localhost/127.0.0.1 filtering, commented lines, edge cases (15 tests)
  test-run-data            credential parsing, remote detection, concurrency reduction, password defaults, synthetics guard (18 tests)
  test-arg-parsing         kbn-start.sh flag parsing, switch/new flag parsing, port reservation logic (29 tests)
For contributors

Tests are pure bash — no external dependencies. Each file creates temp directories, generates configs, and validates expected behaviour. Run the suite before submitting changes to verify nothing breaks.

Known Behaviours & Edge Cases

Things to be aware of
  • Branch names with dots (e.g. 9.3) — tmux interprets . as a session:window separator. The script converts dots to hyphens in the session name (kibana-9-3), but the worktree folder keeps the original name (worktrees/9.3).
  • kibana.dev.yml not regenerated on new — if a worktree already has the file with wrong ports, the script skips it. Run dev-start.sh list to see port mismatch warnings, then delete the file and re-run.
  • Sessions lost on Mac reboot — tmux survives terminal close but not reboots. Run ~/dev-start.sh each morning to recreate. Consider tmux-resurrect for auto-restore.
  • ES trigger string may be outdated — kbn-start.sh watches for a specific string in the ES log to know when to fire Kibana. If the ES version changes and the string changes with it, auto-start won't fire. Check /tmp/es-*.log if Kibana never starts.
  • Port mismatch after new → switch — if you ran new on a branch and then switch to the same branch, kibana.dev.yml will still have the temporary port (5603+), not the feat port (5601). Delete the file and rerun switch.
  • Remote ES: kibana_system_user can't write data — the oblt-cli service account doesn't have permissions to write to data indices. run-data.sh automatically uses the elastic superuser (same password) for ingestion. If you get 401 errors with data_forge, check that the elastic user has the same password as configured in your yml.
  • encryptedSavedObjects errors with remote ES — expected when connecting to a remote cluster. Alerts created by a different Kibana instance were encrypted with a different key. These errors are harmless and don't affect new rules you create locally.
  • Fleet "Cannot read existing Message Signing Key pair" — the remote ES has Fleet signing keys encrypted by a different Kibana. Fleet preconfiguration (agent policies from kibana.dev.yml) will silently fail. Fix: run-data fleet-reset then restart Kibana.
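The first point above amounts to a single substitution. A sketch of the assumed conversion:

```shell
# tmux treats "." as session:window syntax, so dots become hyphens in the
# session name while the worktree path keeps the original branch name.
branch="9.3"
session="kibana-${branch//./-}"       # → kibana-9-3
worktree="$HOME/worktrees/$branch"    # → ~/worktrees/9.3
echo "$session"
```

Keep this asymmetry in mind when targeting sessions manually with tmux -t: the session name and the directory name differ for dotted branches.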