Documentation

CLI Reference

Run, compare, monitor, and report on UX tests directly from your terminal or CI pipeline.

Installation

Install globally to use simutest as a system command, or use npx to run without installing:

# Global install
npm install -g @simutest/cli

# Or run without installing
npx simutest <command>

Set the SIMUTEST_API_KEY environment variable or pass --api-key with each command.
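For example, the environment variable can be exported once per shell session (the key value below is a placeholder, not a real key):

```shell
# Set the API key once per shell session (replace with your actual key)
export SIMUTEST_API_KEY="st_your_key_here"

# Confirm it is set before running commands
echo "$SIMUTEST_API_KEY"
```

In CI, prefer injecting the key from your secrets store rather than hardcoding it in scripts.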

simutest run

Run a UX test against a URL, or define one or more tests in a YAML configuration file.

# Minimal usage
simutest run --url http://localhost:3000 --task "Find the pricing page"

# Full options
simutest run \
  --url http://localhost:3000 \
  --task "Sign up for a free trial" \
  --sessions 100 \
  --model claude-sonnet \
  --viewport mobile \
  --config simutest.yaml
Flag         Description
--url        URL of the page to test
--task       Natural language task for agents to complete
--sessions   Number of simulated sessions (default: 100)
--model      AI model to use (claude-sonnet | claude-opus)
--viewport   Viewport size (mobile | desktop | tablet)
--config     Path to a YAML config file (e.g. simutest.yaml)

You can also run all tests defined in a YAML config file:

# Run tests defined in a YAML config file
simutest run --config simutest.yaml

# Override YAML defaults from the command line
simutest run --config simutest.yaml --sessions 50 --viewport desktop
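A config file might look like the sketch below. The key names simply mirror the CLI flags documented above; the exact schema is an assumption, so check your own generated config:

```yaml
# simutest.yaml — hypothetical schema mirroring the CLI flags above;
# the exact key names are an assumption.
url: http://localhost:3000
task: "Sign up for a free trial"
sessions: 100
model: claude-sonnet
viewport: mobile
```

Flags passed on the command line take precedence over values in the file, as shown in the override example above.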

simutest compare

Run the same task against two URLs and compare UX scores side by side. Ideal for A/B test validation.

# Compare two variants of a page
simutest compare \
  --url-a http://localhost:3000/landing-v1 \
  --url-b http://localhost:3000/landing-v2 \
  --task "Find and click the Get Started button" \
  --sessions 100

# Output
# Variant A score: 6.8
# Variant B score: 7.5
# Winner: B (+0.7)
Flag         Description
--url-a      URL of variant A
--url-b      URL of variant B
--task       Task to complete on both variants
--sessions   Number of sessions per variant (default: 100)
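Combined with the global --json flag, the comparison can be scripted with jq. The JSON shape below is hypothetical (the field names `variant_a`, `variant_b`, and `score` are assumptions, not documented output):

```shell
# Hypothetical compare output; the field names are assumptions.
results='{"variant_a":{"score":6.8},"variant_b":{"score":7.5}}'

# Pick the winner by comparing scores
winner=$(echo "$results" | jq -r 'if .variant_b.score > .variant_a.score then "B" else "A" end')
echo "Winner: $winner"
```

This is useful for gating deploys on the winning variant in CI.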

simutest status

Check the progress of a running or recently completed test using its test ID.

# Check the status of a running test
simutest status --test-id test_abc123

# Output
# Test: test_abc123
# Status: running
# Progress: 64/100 sessions completed
# Estimated completion: ~2 min
Flag         Description
--test-id    The test ID returned by simutest run

simutest report

Generate a formatted report for a completed test. Supports JSON, HTML, and Markdown output formats.

# Print summary to stdout (default)
simutest report --test-id test_abc123

# Export as JSON
simutest report --test-id test_abc123 --format json --output ./report.json

# Export as HTML
simutest report --test-id test_abc123 --format html --output ./report.html

# Export as Markdown
simutest report --test-id test_abc123 --format markdown --output ./report.md
Flag         Description
--test-id    The test ID to generate a report for
--format     Output format: text | json | html | markdown (default: text)
--output     File path to write the report (default: stdout)

Global Options

These options apply to all commands:

Flag         Description
--api-key    SimuTest API key (overrides SIMUTEST_API_KEY env var)
--json       Output raw JSON results to stdout (useful for scripting)
--verbose    Print detailed session-level output and thinking traces
--quiet      Suppress all output except errors and the final score

Use --json in CI pipelines to pipe results to downstream tools:

# Output results as JSON (useful for CI scripting)
simutest run \
  --url http://localhost:3000 \
  --task "Complete checkout" \
  --json | jq '.summary.overall_score'

Full CI example using GitHub Actions:

# .github/workflows/ux-test.yml
- name: Run UX tests
  env:
    SIMUTEST_API_KEY: ${{ secrets.SIMUTEST_API_KEY }}
  run: |
    npx simutest run \
      --config simutest.yaml \
      --json > results.json

    score=$(jq '.summary.overall_score' results.json)
    echo "UX Score: $score"

    # Fail the build if score drops below threshold
    python3 -c "import sys; sys.exit(0 if float('$score') >= 7.0 else 1)"
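If Python is not available on the runner, the same threshold gate can be sketched with awk, which is present on virtually all CI images:

```shell
# Fail the step when the score drops below the 7.0 threshold
# (awk exits 0 on pass, 1 on fail, so the step fails naturally)
score=7.4
echo "$score" | awk '{ exit ($1 >= 7.0 ? 0 : 1) }' && echo "PASS" || echo "FAIL"
```

The 7.0 threshold here matches the workflow example above; adjust it to your own quality bar.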