# CLI Reference
Run, compare, monitor, and report on UX tests directly from your terminal or CI pipeline.
## Installation

Install globally to use simutest as a system command, or use npx to run without installing:

```bash
# Global install
npm install -g @simutest/cli

# Or run without installing
npx simutest <command>
```

Set the `SIMUTEST_API_KEY` environment variable or pass `--api-key` with each command.
## simutest run

Run a UX test against a URL or using a YAML configuration file.

```bash
# Minimal usage
simutest run --url http://localhost:3000 --task "Find the pricing page"

# Full options
simutest run \
  --url http://localhost:3000 \
  --task "Sign up for a free trial" \
  --sessions 100 \
  --model claude-sonnet \
  --viewport mobile \
  --config simutest.yaml
```

| Flag | Description |
|---|---|
| `--url` | URL of the page to test |
| `--task` | Natural language task for agents to complete |
| `--sessions` | Number of simulated sessions (default: 100) |
| `--model` | AI model to use (`claude-sonnet` \| `claude-opus`) |
| `--viewport` | Viewport size (`mobile` \| `desktop` \| `tablet`) |
| `--config` | Path to a YAML config file (e.g. `simutest.yaml`) |
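For reference, a config file passed via `--config` might look like the sketch below. The field names (`url`, `task`, `sessions`, `model`, `viewport`) are assumed to mirror the CLI flags one-to-one; this is an illustrative guess, not a documented schema.

```yaml
# simutest.yaml — hypothetical sketch; field names assume a 1:1 mapping to CLI flags
url: http://localhost:3000
task: "Sign up for a free trial"
sessions: 100
model: claude-sonnet
viewport: mobile
```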
You can also run all tests defined in a YAML config file:

```bash
# Run tests defined in a YAML config file
simutest run --config simutest.yaml

# Override YAML defaults from the command line
simutest run --config simutest.yaml --sessions 50 --viewport desktop
```

## simutest compare
Run the same task against two URLs and compare UX scores side by side. Ideal for A/B test validation.
```bash
# Compare two variants of a page
simutest compare \
  --url-a http://localhost:3000/landing-v1 \
  --url-b http://localhost:3000/landing-v2 \
  --task "Find and click the Get Started button" \
  --sessions 100

# Output
# Variant A score: 6.8
# Variant B score: 7.5
# Winner: B (+0.7)
```

| Flag | Description |
|---|---|
| `--url-a` | URL of variant A |
| `--url-b` | URL of variant B |
| `--task` | Task to complete on both variants |
| `--sessions` | Number of sessions per variant (default: 100) |
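If you want to reproduce the winner line from the two scores in your own tooling, the logic is a one-liner. The Python sketch below mirrors the sample output format; it is downstream scripting, not part of the CLI:

```python
def compare_scores(score_a: float, score_b: float) -> str:
    """Pick the higher-scoring variant and report the margin,
    mirroring the 'Winner: B (+0.7)' line in simutest compare output."""
    if score_a == score_b:
        return "Tie"
    winner = "A" if score_a > score_b else "B"
    delta = abs(score_b - score_a)
    return f"Winner: {winner} (+{delta:.1f})"

print(compare_scores(6.8, 7.5))  # → Winner: B (+0.7)
```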
## simutest status

Check the progress of a running or recently completed test using its test ID.

```bash
# Check the status of a running test
simutest status --test-id test_abc123

# Output
# Test: test_abc123
# Status: running
# Progress: 64/100 sessions completed
# Estimated completion: ~2 min
```

| Flag | Description |
|---|---|
| `--test-id` | The test ID returned by `simutest run` |
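If you need to script against the human-readable status output, the progress line can be parsed as in this Python sketch. The line format is taken from the sample output above; where available, the `--json` global flag is a more robust option than scraping text:

```python
import re

def parse_progress(line: str) -> float:
    """Extract completion percentage from a status line like
    'Progress: 64/100 sessions completed'."""
    match = re.search(r"Progress:\s*(\d+)/(\d+)", line)
    if not match:
        raise ValueError(f"unrecognized progress line: {line!r}")
    done, total = int(match.group(1)), int(match.group(2))
    return 100.0 * done / total

print(parse_progress("Progress: 64/100 sessions completed"))  # → 64.0
```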
## simutest report

Generate a formatted report for a completed test. Supports JSON, HTML, and Markdown output formats.

```bash
# Print summary to stdout (default)
simutest report --test-id test_abc123

# Export as JSON
simutest report --test-id test_abc123 --format json --output ./report.json

# Export as HTML
simutest report --test-id test_abc123 --format html --output ./report.html

# Export as Markdown
simutest report --test-id test_abc123 --format markdown --output ./report.md
```

| Flag | Description |
|---|---|
| `--test-id` | The test ID to generate a report for |
| `--format` | Output format: `text` \| `json` \| `html` \| `markdown` (default: `text`) |
| `--output` | File path to write the report (default: stdout) |
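A JSON report exported with `--format json` can be post-processed in any language. The Python sketch below assumes the report contains the `summary.overall_score` field used with the `--json` flag elsewhere in this reference; the rest of the report schema is not documented here:

```python
import json

def overall_score(report_path: str) -> float:
    """Read a simutest JSON report and return its overall score.

    Assumes the same summary.overall_score shape used with the
    --json global flag; adjust the keys if your report differs.
    """
    with open(report_path) as f:
        report = json.load(f)
    return float(report["summary"]["overall_score"])
```

For example, `overall_score("./report.json")` after the JSON export command above.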
## Global Options

These options apply to all commands:

| Flag | Description |
|---|---|
| `--api-key` | SimuTest API key (overrides the `SIMUTEST_API_KEY` env var) |
| `--json` | Output raw JSON results to stdout (useful for scripting) |
| `--verbose` | Print detailed session-level output and thinking traces |
| `--quiet` | Suppress all output except errors and the final score |
Use `--json` in CI pipelines to pipe results to downstream tools:

```bash
# Output results as JSON (useful for CI scripting)
simutest run \
  --url http://localhost:3000 \
  --task "Complete checkout" \
  --json | jq '.summary.overall_score'
```

Full CI example using GitHub Actions:
```yaml
# .github/workflows/ux-test.yml
- name: Run UX tests
  env:
    SIMUTEST_API_KEY: ${{ secrets.SIMUTEST_API_KEY }}
  run: |
    npx simutest run \
      --config simutest.yaml \
      --json > results.json
    score=$(jq '.summary.overall_score' results.json)
    echo "UX Score: $score"
    # Fail the build if score drops below threshold
    python3 -c "import sys; sys.exit(0 if float('$score') >= 7.0 else 1)"
```
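If the inline `python3 -c` gate is reused across jobs, it can be factored into a small script. This sketch implements the same pass/fail logic, keeping the 7.0 threshold from the example above:

```python
import sys

def gate(score: float, threshold: float = 7.0) -> int:
    """Return a process exit code: 0 if the UX score meets the
    threshold, 1 otherwise (non-zero fails the CI step)."""
    return 0 if score >= threshold else 1

if __name__ == "__main__":
    # e.g. python gate.py "$score"
    sys.exit(gate(float(sys.argv[1])))
```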