- Description
- Example Workflows
- Installation
- Shell Completion
- Environment
- Command: `api`
- Commands: `junit-upload`, `playwright-json-upload`, `allure-upload`
- Test Report Requirements
- Development
## Description

QAS CLI (`qasphere`) is the official command-line interface for QA Sphere. It exposes the full QA Sphere public API and supports the following use cases:

- **Ad-hoc terminal use** — run one-off `qasphere api <resource> <action>` commands to inspect or change QA Sphere state. Every command prints JSON to stdout for easy inspection or piping into tools like `jq`. See Command: `api`.
- **Scripts and CI/CD automation** — orchestrate QA Sphere projects, folders, test cases, milestones, runs, and results from shell scripts and CI pipelines. The same `api` commands that work for one-off use compose cleanly into automated flows. See Example Workflows.
- **Test result uploads** — upload JUnit XML, Playwright JSON, and Allure result directories at the end of an automated test run. The CLI matches test case markers to QA Sphere test cases and attaches files. See Commands: `junit-upload`, `playwright-json-upload`, `allure-upload`.
- **AI agents** — the project ships a SKILL.md so coding agents (Claude Code, Cursor, etc.) can operate QA Sphere on the user's behalf — listing projects, authoring test cases, opening and closing runs, recording results, and more. See AI Agent Skill.
## Example Workflows

The examples below show what end-to-end automation with `qas-cli` looks like. They assume `QAS_URL` and `QAS_TOKEN` are configured (see Environment) and use `jq` for JSON parsing. For the full list of resources and actions, see the API Command Tree.
1. **Open a run, post results, and close it** — useful when results come from a tool that doesn't produce JUnit/Playwright/Allure output:

   ```shell
   RUN_ID=$(qasphere api runs create --project-code PRJ \
     --title "Smoke $(date +%Y-%m-%d)" --type static \
     --query-plans '[{"tcaseIds": ["abc123", "def456"]}]' | jq -r '.id')
   qasphere api results batch-create --project-code PRJ --run-id "$RUN_ID" \
     --items '[{"tcaseId": "abc123", "status": "passed"}, {"tcaseId": "def456", "status": "failed", "comment": "timeout on /cart"}]'
   qasphere api runs close --project-code PRJ --run-id "$RUN_ID"
   ```

2. **Run progress report** — list open runs with their pass/fail/open counts:

   ```shell
   qasphere api runs list --project-code PRJ --closed false \
     | jq '.[] | {title, passed: .statusCounts.passed, failed: .statusCounts.failed, open: .statusCounts.open, total: .statusCounts.all}'
   ```

3. **Pair the API with the result uploader** — pre-create a run with custom metadata in CI, then upload automation results into it:

   ```shell
   RUN_ID=$(qasphere api runs create --project-code PRJ \
     --title "CI build $BUILD_NUMBER" --type static \
     --query-plans '[{"tcaseIds": ["abc123"]}]' \
     --milestone-id 7 | jq -r '.id')
   qasphere junit-upload -r "$QAS_URL/project/PRJ/run/$RUN_ID" ./test-results.xml
   ```

4. **AI agent workflows — reports and visualizations** — once the skill is registered with a coding agent (Claude Code, Cursor, etc.), the agent can drive ad-hoc reporting, dashboards, and charts in natural language. Under the hood it calls `qasphere api ...` to fetch the data, then renders it however the conversation needs (Markdown tables, matplotlib/Plotly charts, HTML/SVG, CSV exports, etc.). Example prompts:

   - "Plot the pass rate of the last 20 closed runs in project PRJ as a line chart." — agent calls `qasphere api runs list --project-code PRJ --closed true`, computes `passed / all` per run from `statusCounts`, renders the chart.
   - "Which folders in PRJ have the most failing tests this week?" — agent combines `runs list`, `runs test-cases list`, and `folders list` to aggregate failures by folder.
   - "Generate a markdown summary of run 42 with the pass/fail breakdown and a list of failing test case titles." — agent calls `runs test-cases list --project-code PRJ --run-id 42` and formats the output.
Because the agent reads the same `--help` output a human would, you don't need to pre-script the queries — describe the report you want and let the agent compose the API calls.
## Installation

Requires Node.js version 18.0.0 or higher.

### Via npx

Simply run `npx qas-cli`. On first use, you'll need to agree to download the package. You can use `npx qas-cli` in all contexts instead of the `qasphere` command.

Verify installation: `npx qas-cli --version`

Note: npx caches packages. To ensure the latest version, clear the cache with `npx clear-npx-cache`.

### Via npm (global)

```shell
npm install -g qas-cli
```

Verify installation: `qasphere --version`

Update: run `npm update -g qas-cli` to get the latest version.
## Shell Completion

The CLI supports shell completion for commands and options. To enable it, append the completion script to your shell profile:

Zsh:

```shell
qasphere completion >> ~/.zshrc
```

Bash:

```shell
qasphere completion >> ~/.bashrc
```

Then restart your shell or source the profile (e.g., `source ~/.zshrc`). After that, pressing Tab will autocomplete commands and options.
## Environment

The CLI requires the following variables to be defined:

- `QAS_TOKEN` - QA Sphere API token (see the docs if you need help generating one)
- `QAS_URL` - Base URL of your QA Sphere instance (e.g., `https://qas.eu2.qasphere.com`)

These variables can be defined:

- as environment variables
- in a `.env` file in the current working directory
- in a special `.qaspherecli` configuration file in your project directory (or any parent directory)
Example `.qaspherecli`:

```shell
# .qaspherecli
QAS_TOKEN=your_token
QAS_URL=https://qas.eu1.qasphere.com

# Example with real values:
# QAS_TOKEN=qas.1CKCEtest_JYyckc3zYtest.dhhjYY3BYEoQH41e62itest
# QAS_URL=https://qas.eu1.qasphere.com
```

## Command: `api`

The `api` command provides direct access to the QA Sphere public API from the command line. It outputs JSON to stdout for easy scripting and piping.
```shell
qasphere api <resource> <action> [options]
```
### API Command Tree

```
qasphere api
├── audit-logs
│   └── list  # List audit log entries
├── custom-fields
│   └── list --project-code  # List custom fields
├── files
│   └── upload --file  # Upload a file attachment
├── folders
│   ├── list --project-code  # List folders
│   └── bulk-create --project-code --folders  # Create/update folders
├── milestones
│   ├── list --project-code  # List milestones
│   └── create --project-code --title  # Create milestone
├── projects
│   ├── list  # List all projects
│   ├── get --project-code  # Get project by code
│   └── create --code --title  # Create project
├── requirements
│   └── list --project-code  # List requirements
├── results
│   ├── create --project-code --run-id --tcase-id --status  # Create result
│   └── batch-create --project-code --run-id --items  # Batch create results
├── runs
│   ├── create --project-code --title --type --query-plans  # Create run
│   ├── list --project-code  # List runs
│   ├── clone --project-code --run-id --title  # Clone run
│   ├── close --project-code --run-id  # Close run
│   └── test-cases
│       ├── list --project-code --run-id  # List test cases in run
│       └── get --project-code --run-id --tcase-id  # Get test case in run
├── settings
│   ├── list-statuses  # List result statuses
│   └── update-statuses --statuses  # Update custom statuses
├── shared-preconditions
│   ├── list --project-code  # List shared preconditions
│   └── get --project-code --id  # Get shared precondition
├── shared-steps
│   ├── list --project-code  # List shared steps
│   └── get --project-code --id  # Get shared step
├── tags
│   └── list --project-code  # List tags
├── test-cases
│   ├── list --project-code  # List test cases
│   ├── get --project-code --tcase-id  # Get test case
│   ├── count --project-code  # Count test cases
│   ├── create --project-code --body  # Create test case
│   └── update --project-code --tcase-id --body  # Update test case
├── test-plans
│   └── create --project-code --body  # Create test plan
└── users
    └── list  # List all users
```
Note: `qasphere api files upload --file ...` uses the public batch upload endpoint internally and returns the first uploaded file from that response.
## Commands: `junit-upload`, `playwright-json-upload`, `allure-upload`

The `junit-upload`, `playwright-json-upload`, and `allure-upload` commands upload test results to QA Sphere.

There are two modes for uploading results:

- Upload to an existing test run by specifying its URL via the `--run-url` flag
- Create a new test run and upload results to it (when the `--run-url` flag is not specified)
Options:

- `<files..>`/`<directories..>` - Input paths. Use report files for `junit-upload` and `playwright-json-upload`, and Allure results directories for `allure-upload`
- `-r`/`--run-url` - Upload results to an existing test run
- `--project-code`, `--run-name`, `--create-tcases` - Create a new test run and upload results to it
- `--project-code` - Project code for creating a new test run. It can also be auto-detected from test case markers in the results, but this is not fully reliable, so it is recommended to specify the project code explicitly
- `--run-name` - Optional name template for creating a new test run. It supports `{env:VAR}`, `{YYYY}`, `{YY}`, `{MM}`, `{MMM}`, `{DD}`, `{HH}`, `{hh}`, `{mm}`, `{ss}`, `{AMPM}` placeholders (default: `Automated test run - {MMM} {DD}, {YYYY}, {hh}:{mm}:{ss} {AMPM}`)
- `--create-tcases` - Automatically create test cases in QA Sphere for results that don't have valid test case markers. A mapping file (`qasphere-automapping-YYYYMMDD-HHmmss.txt`) is generated showing the sequence numbers assigned to each new test case (default: `false`)
- `--attachments` - Try to detect and upload any attachments with the test result
- `--force` - Ignore API request errors, invalid or duplicate test case mappings, or attachments
- `--ignore-unmatched` - Suppress individual unmatched test messages, show summary only
- `--skip-report-stdout` - Control when to skip stdout blocks from the test report (choices: `on-success`, `never`; default: `never`)
- `--skip-report-stderr` - Control when to skip stderr blocks from the test report (choices: `on-success`, `never`; default: `never`)
- `-h`, `--help` - Show command help
The `--run-name` option supports the following placeholders:

- `{env:VARIABLE_NAME}` - Environment variables (e.g., `{env:BUILD_NUMBER}`, `{env:CI_COMMIT_SHA}`)
- `{YYYY}` - 4-digit year
- `{YY}` - 2-digit year
- `{MMM}` - 3-letter month (e.g., Jan, Feb, Mar)
- `{MM}` - 2-digit month
- `{DD}` - 2-digit day
- `{HH}` - 2-digit hour in 24-hour format
- `{hh}` - 2-digit hour in 12-hour format
- `{mm}` - 2-digit minute
- `{ss}` - 2-digit second
- `{AMPM}` - AM/PM indicator

Note: The `--run-name` option is only used when creating new test runs (i.e., when `--run-url` is not specified).
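Outside the CLI, you can preview roughly what the default template expands to by mapping the placeholders onto `date(1)` format specifiers. This is a local shell sketch only; `qasphere` performs the expansion internally:

```shell
# Map the default template "Automated test run - {MMM} {DD}, {YYYY}, {hh}:{mm}:{ss} {AMPM}"
# to date(1) specifiers: {MMM}=%b {DD}=%d {YYYY}=%Y {hh}=%I {mm}=%M {ss}=%S {AMPM}=%p.
# Sketch only -- %b and %p are locale-dependent, and the CLI does not shell out to date.
RUN_NAME="Automated test run - $(date '+%b %d, %Y, %I:%M:%S %p')"
echo "$RUN_NAME"
```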
Ensure the required environment variables are defined before running these commands.

### Examples

Note: The following examples use `junit-upload`, but you can replace it with `playwright-json-upload` and adjust the file extension from `.xml` to `.json` to upload Playwright JSON reports instead.
- Upload to an existing test run:

  ```shell
  qasphere junit-upload -r https://qas.eu1.qasphere.com/project/P1/run/23 ./test-results.xml
  ```

- Create a new test run with the default name template and upload results:

  ```shell
  qasphere junit-upload ./test-results.xml
  ```

  The project code is detected from test case markers in the results.

- Create a new test run with a name template without any placeholders and upload results:

  ```shell
  qasphere junit-upload --project-code P1 --run-name "v1.4.4-rc5" ./test-results.xml
  ```

- Create a new test run with a name template using environment variables and date placeholders and upload results:

  ```shell
  qasphere junit-upload --project-code P1 --run-name "CI Build {env:BUILD_NUMBER} - {YYYY}-{MM}-{DD}" ./test-results.xml
  ```

  If the `BUILD_NUMBER` environment variable is set to `v1.4.4-rc5` and today's date is January 1, 2025, the run would be named "CI Build v1.4.4-rc5 - 2025-01-01".

- Create a new test run with a name template using date/time placeholders, create test cases for results without valid markers, and upload results:

  ```shell
  qasphere junit-upload --project-code P1 --run-name "Nightly Tests {YYYY}/{MM}/{DD} {HH}:{mm}" --create-tcases ./test-results.xml
  ```

  If the current time is 10:34 PM on January 1, 2025, the run would be named "Nightly Tests 2025/01/01 22:34". This also creates new test cases in QA Sphere for any results that don't have a valid test case marker. A mapping file (`qasphere-automapping-YYYYMMDD-HHmmss.txt`) is generated showing the sequence numbers assigned to each newly created test case. Update your test cases to include the markers in the name for future uploads.

- Upload results with attachments:

  ```shell
  qasphere junit-upload --attachments ./test1.xml
  ```

- Force upload even with missing test cases or attachments:

  ```shell
  qasphere junit-upload --force ./test-results.xml
  ```

- Suppress unmatched test messages (useful during gradual test case linking):

  ```shell
  qasphere junit-upload --ignore-unmatched ./test-results.xml
  ```

  This shows only a summary like "Skipped 5 unmatched tests" instead of individual error messages for each unmatched test.

- Skip stdout for passed tests to reduce result payload size:

  ```shell
  qasphere junit-upload --skip-report-stdout on-success ./test-results.xml
  ```

  This excludes stdout from passed tests while still including it for failed, blocked, or skipped tests.

- Skip both stdout and stderr for passed tests:

  ```shell
  qasphere junit-upload --skip-report-stdout on-success --skip-report-stderr on-success ./test-results.xml
  ```

  This is useful when you have verbose logging in tests but only want to see output for failures.

- Upload Allure results from a directory:

  ```shell
  qasphere allure-upload -r https://qas.eu1.qasphere.com/project/P1/run/23 ./allure-results
  ```

- Continue an Allure upload when some `*-result.json` files are malformed (skip invalid files):

  ```shell
  qasphere allure-upload --force -r https://qas.eu1.qasphere.com/project/P1/run/23 ./allure-results
  ```
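In CI, an upload step typically runs right after the tests, even when they fail. As a sketch (assuming GitHub Actions and repository secrets named `QAS_URL` and `QAS_TOKEN`; adapt the paths and project code to your pipeline):

```yaml
# Hypothetical GitHub Actions step; secret names, project code, and paths are assumptions.
- name: Upload results to QA Sphere
  if: always()   # upload even when the test step failed
  env:
    QAS_URL: ${{ secrets.QAS_URL }}
    QAS_TOKEN: ${{ secrets.QAS_TOKEN }}
  run: npx qas-cli junit-upload --project-code P1 ./test-results.xml
```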
## Test Report Requirements

The QAS CLI maps test results from your reports (JUnit XML, Playwright JSON, or Allure) to corresponding test cases in QA Sphere. If a test result lacks a valid marker/reference, or multiple results resolve to the same run test case, the CLI displays an error unless you use `--create-tcases` to automatically create test cases, or `--ignore-unmatched`/`--force` to bypass the mapping issue.

### JUnit XML

Test case names in JUnit XML reports must include a QA Sphere test case marker. The following marker formats are supported (checked in order):

**Format 1: Hyphenated marker**

Format: `PROJECT-SEQUENCE`, where `PROJECT` is your QA Sphere project code and `SEQUENCE` is the test case sequence number (minimum 3 digits, zero-padded if needed). The marker can appear anywhere in the test name and is matched case-insensitively.
Examples:

- `PRJ-002: Login with valid credentials`
- `Login with invalid credentials: PRJ-1312`
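To sanity-check your own report names, the hyphenated marker can be approximated with a regular expression. This is a sketch of the pattern described above, not the CLI's actual matcher:

```shell
# Extract a PROJECT-SEQUENCE marker (project code, hyphen, 3+ digits) from a
# test name, case-insensitively. Approximation only; CLI internals may differ.
name="Login with invalid credentials: PRJ-1312"
marker=$(printf '%s\n' "$name" | grep -oiE '[A-Z][A-Z0-9]*-[0-9]{3,}' | head -n1)
echo "$marker"   # PRJ-1312
```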
**Format 2: Hyphenless marker with underscores**

For languages where test names are function identifiers and hyphens are not allowed, the CLI supports hyphenless markers separated by underscores. The test name must start with `test` (case-insensitive).

Examples (pytest):

- `test_prj002_login_with_valid_credentials`
- `test_login_with_invalid_credentials_prj1312`
**Format 3: Hyphenless CamelCase marker**

For CamelCase test function names, the CLI detects markers at the start (immediately after the `Test` prefix) or at the end of the name. The test name must start with `Test` (case-insensitive).

Examples (Go):

- `TestPrj002LoginWithValidCredentials` (marker at start)
- `TestLoginWithValidCredentialsPrj1312` (marker at end)

Note: Hyphenless matching (formats 2 and 3) is only available for `junit-upload`. For `playwright-json-upload`, only the hyphenated format is supported (or test annotations, see below).
### Playwright JSON

Playwright JSON reports support two methods for referencing test cases (checked in order):

1. **Test Annotations (Recommended)** - Add a test annotation with:

   - `type`: `"test case"` (case-insensitive)
   - `description`: Full QA Sphere test case URL

   ```ts
   test(
     'user login',
     {
       annotation: {
         type: 'test case',
         description: 'https://qas.eu1.qasphere.com/project/PRJ/tcase/123',
       },
     },
     async ({ page }) => {
       // test code
     }
   )
   ```

2. **Hyphenated marker in name** - Include the `PROJECT-SEQUENCE` marker in the test name (same format as JUnit XML format 1). Hyphenless markers are not supported for Playwright JSON.
### Allure

Allure results use one `*-result.json` file per test in a results directory. `allure-upload` matches test cases using:

- **TMS links (Recommended)** - `links[]` entries with:
  - `type`: `"tms"`
  - `url`: QA Sphere test case URL, e.g. `https://qas.eu1.qasphere.com/project/PRJ/tcase/123`
- **TMS link name fallback** - If `url` is not a QA Sphere URL, a marker in `links[].name` is used (for example `PRJ-123`)
- **Test case marker in name** - A marker in the `name` field (same `PROJECT-SEQUENCE` format as JUnit XML)

Only Allure JSON result files (`*-result.json`) are supported. Legacy Allure 1 XML files are ignored.
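For reference, a minimal `*-result.json` that would match via a TMS link might look like the following (a sketch based on the Allure 2 JSON result format; the test name and tcase URL are placeholders):

```json
{
  "name": "user login",
  "status": "passed",
  "links": [
    {
      "type": "tms",
      "url": "https://qas.eu1.qasphere.com/project/PRJ/tcase/123"
    }
  ]
}
```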
### Run-Level Logs

The CLI automatically detects global or suite-level failures and uploads them as run-level logs to QA Sphere. These failures are typically caused by setup/teardown issues that aren't tied to specific test cases.

- **JUnit XML**: Suite-level `<system-err>` elements and empty-name `<testcase>` entries with `<error>` or `<failure>` (synthetic entries from setup/teardown failures, e.g., Maven Surefire) are extracted as run-level logs.
- **Playwright JSON**: Top-level `errors` array entries (global setup/teardown failures) are extracted as run-level logs.
- **Allure**: Failed or broken `befores`/`afters` fixtures in `*-container.json` files (e.g., session/module-level setup/teardown failures from pytest) are extracted as run-level logs.
## AI Agent Skill

`qas-cli` includes a SKILL.md file that enables AI coding agents (e.g., Claude Code, Cursor) to use the CLI effectively. To add this skill to your agent:

```shell
npx skills add Hypersequent/qas-cli
```

The skill provides the agent with full documentation of the CLI commands, options, and conventions. See skills for more details.
## Development

- Install and build: `npm install && npm run build && npm link`
- Get a test account at qasphere.com (includes a demo project)
- Configure `.qaspherecli` with credentials
- Test with sample reports from bistro-e2e

Tests: `npm test` (Vitest) and `cd mnode-test && ./docker-test.sh` (Node.js 18+ compatibility)