A structured local development setup that eliminates context-switching overhead, lets multiple Kibana instances run simultaneously, and automates the full bootstrap → ES → Kibana startup sequence.
There are two permanent sessions — kibana-feat and kibana-main — always running in the background. On top of that, you can spin up as many temporary sessions as you need with ~/dev-start.sh new <branch> — for PR reviews, hotfixes, testing a colleague's branch, and so on. Kill them when done, and switch between any session instantly with Ctrl-a s.
kibana-feat always runs on the default ports — Kibana :5601 and ES :9200 — because that's where most of your day-to-day development happens. No port memorization needed.
kibana-main is kept clean on :5602 / :9201, reserved for checking the latest state of main without interfering with your feature work.
Temporary sessions get ports auto-assigned from :5603+. Spin up as many as you want — they never conflict with each other or with the permanent sessions.
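The auto-assignment can be sketched as a first-free-port scan. This is illustrative only: the real dev-start.sh may work differently, the function name `next_free_port` is hypothetical, and the ES-port offset shown (Kibana 5601 ↔ ES 9200, 5602 ↔ 9201, …) is inferred from the ports listed above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of temporary-session port assignment:
# scan upward from 5603 until a port nobody is listening on is found.
next_free_port() {
  local port=5603
  while lsof -iTCP:"$port" -sTCP:LISTEN >/dev/null 2>&1; do
    port=$((port + 1))
  done
  echo "$port"
}

KIBANA_PORT=$(next_free_port)
ES_PORT=$((KIBANA_PORT + 3599))   # keeps the 5601->9200 pairing for higher ports
echo "kibana=$KIBANA_PORT es=$ES_PORT"
```

Because each session only claims the first free port at startup, temporary sessions never collide with each other or with the two permanent ones.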
When you finish a feature and move to a new one, run ~/dev-start.sh switch <new-branch>. Your new feature branch will be set up in kibana-feat, automatically configured on the default ports. You don't have to handle any of this yourself.
Git worktrees allow multiple branches to be checked out at the same time in separate directories, all sharing a single .git folder. No stashing, no switching, no interrupting running servers.
All worktrees share the same .git history — only working files differ. node_modules is separate per worktree (one-time yarn kbn bootstrap per worktree).
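The worktree mechanics above come straight from stock git. A self-contained demo (temp repo; all paths are illustrative, and the per-worktree `yarn kbn bootstrap` step is omitted since it needs a real Kibana checkout):

```shell
#!/usr/bin/env bash
# Demo: two branches checked out simultaneously, sharing one .git store.
set -euo pipefail
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

# Check out a second branch in a separate directory — no stash, no switch:
git branch my-feature
git worktree add -q "$repo-feature" my-feature

# Both checkouts appear here; in the real setup you'd now run
# `yarn kbn bootstrap` once inside the new worktree.
git worktree list
```

Note that the added worktree contains a small `.git` *file* pointing back at the main repository, which is why history and objects are shared while working files stay independent.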
Running ~/dev-start.sh triggers a fully automated startup. You never manually run bootstrap or start servers.
When you need to test against a real Elastic Cloud cluster (e.g. via oblt-cli), use the --remote flag. It reads credentials from ~/.kibana-remote-es.yml and generates the correct config automatically — no manual editing needed.
- `~/dev-start.sh switch <branch> --remote` — switch to remote ES
- `~/dev-start.sh switch <branch>` — switch back to local ES
Credentials are stored once in ~/.kibana-remote-es.yml (outside any git repo). Update it when your cluster expires.
```yaml
elasticsearch.hosts:
  - "http://localhost:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "changeme"
```
kbn-start.sh starts ES locally, watches for trigger, then auto-fires Kibana.
```yaml
elasticsearch:
  hosts: https://my-cluster.elastic.co:443
  username: kibana_system_user
  password: <from oblt-cli>
```
kbn-start.sh detects the remote URL, skips local ES, starts Kibana directly. Fleet config (agent policies, packages) is included with the Fleet ES output pointing to the remote cluster.
kbn-start.sh greps kibana.dev.yml for http(s):// URLs, skipping any that contain localhost or 127.0.0.1. It handles both YAML formats:
- Template format: `- "http://..."`
- oblt-cli format: `hosts: https://...`
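The detection can be sketched roughly as below. This is an assumption about the shape of the logic, not the actual kbn-start.sh code; the function name `remote_es_url` and the exact patterns are illustrative.

```shell
# Sketch: find the first non-local ES URL in kibana.dev.yml.
# Handles both YAML formats, ignores commented lines, and filters
# out localhost / 127.0.0.1 so local configs yield no match.
remote_es_url() {
  local cfg=$1
  grep -v '^[[:space:]]*#' "$cfg" \
    | grep -Eo 'https?://[^" ]+' \
    | grep -vE 'localhost|127\.0\.0\.1' \
    | head -n1
}
```

If the function prints a URL, the script would skip starting local ES and launch Kibana directly; an empty result means local mode.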
Use --remote to switch to remote ES or omit it to switch back to local — no manual config editing needed. Without --remote, switch preserves an existing kibana.dev.yml. Delete the file and re-run to regenerate from template.
The scripts window has two panes pre-populated with ingestion commands. Just press Enter when Kibana is ready — the script waits automatically.
Ingests SLO test data using data_forge.js with the fake_stack dataset. Reads ES host and credentials from kibana.dev.yml.
For remote ES: auto-reduces concurrency (1 worker, payload 1000) to avoid timeouts. Always uses the elastic superuser for data writes.
Creates a synthetics private location with Fleet Server + Elastic Agent. Works on both local and remote ES.
On local ES, uses synthetics_private_location.js which starts Fleet Server, enrolls an agent, and creates the private location. On remote ES, orchestrates Docker containers directly (Fleet Server + elastic-agent-complete) with proper credentials, updates Fleet Server host to local Docker, and enrolls the agent — so monitors actually run instead of staying in "Pending".
The checks window runs lint, typecheck, and jest scoped to files and plugins you've changed on the branch (compared to upstream/main via git merge-base).
- Top-left: `run-checks.sh lint` — eslint on changed .ts/.tsx/.js files
- Top-right: `run-checks.sh typecheck` — tsc per changed plugin
- Bottom: `run-checks.sh jest` — jest per changed plugin
Commands are pre-populated but not auto-run — press Enter in each pane when ready.
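The "scoped to what changed" behaviour can be approximated with plain git. This is a sketch under stated assumptions: the real run-checks.sh may differ, `upstream/main` must exist as a ref, and the `x-pack/plugins/<name>/...` layout is assumed for plugin detection.

```shell
#!/usr/bin/env bash
# Sketch of run-checks.sh scoping (not the actual script).

# Files changed since the branch diverged from the given base ref:
changed_files() {
  git diff --name-only "$(git merge-base "$1" HEAD)"
}

# Plugins touched by those changes, assuming an
# x-pack/plugins/<name>/... directory layout:
changed_plugins() {
  changed_files "$1" | grep -oE 'x-pack/plugins/[^/]+' | sort -u
}

# Usage, with upstream/main as the base ref as in the document:
#   changed_files upstream/main | grep -E '\.(ts|tsx|js)$'   # lint scope
#   changed_plugins upstream/main                            # typecheck / jest scope
```

Using `git merge-base` rather than diffing against the tip of main keeps the scope limited to your own commits even when main has moved ahead.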
Use `--remote` to switch to remote ES, `--local` to switch back.

| Test file | Coverage |
| --- | --- |
| test-config-generation | Local + remote kibana.dev.yml generation, port substitution, placeholder removal, server block stripping, switching between local/remote (24 tests) |
| test-es-detection | Grep pattern matching for both YAML formats (template + oblt-cli), localhost/127.0.0.1 filtering, commented lines, edge cases (15 tests) |
| test-run-data | Credential parsing, remote detection, concurrency reduction, password defaults, synthetics guard (18 tests) |
| test-arg-parsing | kbn-start.sh flag parsing, switch/new flag parsing, port reservation logic (29 tests) |
Tests are pure bash — no external dependencies. Each file creates temp directories, generates configs, and validates expected behaviour. Run the suite before submitting changes to verify nothing breaks.
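The pure-bash test pattern looks roughly like this (illustrative only; the actual test files, helper names, and assertions differ):

```shell
#!/usr/bin/env bash
# Minimal version of the suite's pattern: temp dir, generate a config,
# assert on the result. No frameworks, just exit codes and counters.
set -euo pipefail

pass=0; fail=0
assert_eq() {  # assert_eq <description> <expected> <actual>
  if [ "$2" = "$3" ]; then
    pass=$((pass + 1))
  else
    fail=$((fail + 1)); echo "FAIL: $1 (expected '$2', got '$3')"
  fi
}

tmp=$(mktemp -d); trap 'rm -rf "$tmp"' EXIT

# "Generate" a config the way a template-substitution step might:
sed 's/KIBANA_PORT/5603/' > "$tmp/kibana.dev.yml" <<'EOF'
server.port: KIBANA_PORT
EOF

assert_eq "port substituted" "server.port: 5603" "$(cat "$tmp/kibana.dev.yml")"
echo "passed=$pass failed=$fail"
```

Each real test file follows this shape, which is why the suite needs nothing beyond bash and standard Unix tools.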
- Branch names containing dots (e.g. 9.3) — tmux interprets `.` as a session:window separator. The script converts dots to hyphens in the session name (`kibana-9-3`), but the worktree folder keeps the original name (`worktrees/9.3`).
- kibana.dev.yml not regenerated on `new` — if a worktree already has the file with wrong ports, the script skips it. Run `dev-start.sh list` to see port mismatch warnings, then delete the file and re-run.
- Sessions don't survive a reboot — run `~/dev-start.sh` each morning to recreate them. Consider tmux-resurrect for auto-restore.
- Kibana never auto-starts — `kbn-start.sh` watches for a specific string in the ES log to know when to fire Kibana. If the ES version changes and that string changes, auto-start won't fire. Check `/tmp/es-*.log` if Kibana never starts.
- `new` → `switch` on the same branch — if you ran `new` on a branch and then `switch` to it, `kibana.dev.yml` will have the temporary port (5603+), not the feat port (5601). Delete the file and re-run `switch`.
- `kibana_system_user` can't write data — the oblt-cli service account doesn't have permission to write to data indices. `run-data.sh` automatically uses the `elastic` superuser (same password) for ingestion. If you get 401 errors with data_forge, check that the `elastic` user has the same password as configured in your yml.
- Stale Fleet configuration (kibana.dev.yml) will silently fail. Fix: `run-data fleet-reset`, then restart Kibana.