Test plans

A test plan is a curated list of test cases that defines what needs to be tested for a specific goal — a release, a sprint, a feature sign-off. You build it in Qase by selecting the cases that matter, and optionally assigning team members to each one.

The reason test plans matter for automated reporting: they're the bridge between your manual testers and your automated suite. A single plan can contain both.


The problem plans solve

Most teams have a mix of manual and automated tests. The manual ones live in Qase — exploratory tests, UX walkthroughs, edge cases that aren't worth automating. The automated ones live in your repo — regression suites, API checks, integration tests. Without a plan, these two worlds report separately. Your manual testers submit results through the Qase UI. Your CI pipeline submits results through the reporter. Two different runs, two different dashboards, no unified picture.

A test plan brings them together. You create one plan that includes both manual and automated cases. When you start a run from that plan, everyone — manual testers and your CI pipeline — reports into the same run. One dashboard, one completion percentage, one answer to "are we ready to ship?"


How it works

[Diagram: a test plan (curated cases defined in Qase) is fetched by the reporter in CI, with plan.id set in its config. The reporter creates and pushes to a test run linked to the plan. Automated cases arrive with pass/fail results; manual cases remain untested, awaiting QA. Manual QA works through the untested cases in the Qase UI, and the run closes when every case has a result.]

  1. Create a plan in Qase — pick the cases, assign team members to manual cases if you want.
  2. Start a run from the plan — Qase creates a run pre-populated with the plan's cases.
  3. Your reporter sends automated results to the same run — using testops.run.id set to the run that was created from the plan (see the config sketch after this list).
  4. Manual testers work through their cases in the Qase UI — clicking pass, fail, blocked.
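For step 3, the run ID goes in the same config structure shown later for plans — a minimal sketch, where 123 stands in for the run Qase created from the plan:

{
  "testops": {
    "run": {
      "id": 123
    }
  }
}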

The run dashboard shows everything in one place: which cases are done, which are still untested, who's assigned to what, and the overall pass/fail breakdown. It doesn't matter whether a result came from a human clicking a button or a CI job sending an API call.
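That last point is visible at the API level. Here's a minimal Python sketch of a single result submitted to Qase's v1 result endpoint — the project code, run ID, case ID, and token variable name are all placeholders, and in practice your reporter makes this call for you:

import os
import requests

# Placeholders throughout: project code "DEMO", run 123, case 42.
response = requests.post(
    "https://api.qase.io/v1/result/DEMO/123",
    headers={"Token": os.environ["QASE_API_TOKEN"]},  # assumed env var name
    json={"case_id": 42, "status": "passed"},
)
response.raise_for_status()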


Selective execution

Some reporters can go further. If you set testops.plan.id in your config, the reporter fetches the plan's case list from Qase and filters your test suite to only run the tests that are in the plan. Tests not in the plan are skipped entirely — they don't execute.

This is supported in Java (TestNG, JUnit 5), Python (pytest), and Robot Framework. In these frameworks, the reporter intercepts the test collection phase and removes tests whose Qase IDs aren't in the plan.

To enable it, set the plan ID in your reporter config:

{
  "testops": {
    "plan": {
      "id": 42
    }
  }
}

Or as an environment variable:

QASE_TESTOPS_PLAN_ID=42

Important: Selective execution only works for tests that have a Qase ID linked. Unlinked tests are excluded from the filtered run because the reporter has no way to match them to the plan.
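To make the mechanism concrete, here's a minimal pytest sketch of collection-time filtering. It illustrates the technique described above — it is not the Qase reporter's source, and both the qase_id marker name and the fetch_plan_case_ids helper are assumptions:

# conftest.py — a sketch of collection-time filtering.
import os

def fetch_plan_case_ids(plan_id):
    # Hypothetical helper: the real reporter calls the Qase API here
    # and returns the set of case IDs contained in the plan.
    return {1, 2, 3}

def pytest_collection_modifyitems(config, items):
    plan_id = os.environ.get("QASE_TESTOPS_PLAN_ID")
    if not plan_id:
        return  # no plan configured: run everything as usual
    plan_case_ids = fetch_plan_case_ids(int(plan_id))
    selected, deselected = [], []
    for item in items:
        # Assume linked tests carry their Qase ID as a marker,
        # e.g. @pytest.mark.qase_id(3); unlinked tests have none
        # and are dropped, matching the note above.
        marker = item.get_closest_marker("qase_id")
        case_id = marker.args[0] if marker and marker.args else None
        (selected if case_id in plan_case_ids else deselected).append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected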

Not all frameworks support selective execution. JavaScript reporters, C#, Go, PHP, and Kotlin report results to a plan-linked run but don't filter the test collection. All your tests still execute — the plan association is metadata on the run, not a filter on execution.


Two ways to use plans with reporters

  • Report to a plan-linked run — config: testops.run.id = the run created from the plan. The reporter sends results to an existing run. All tests execute. Manual + automated results merge in one dashboard.
  • Selective execution — config: testops.plan.id = the plan ID. The reporter fetches the plan, filters tests, creates a new run linked to the plan, and only runs matching tests.

The first approach is more common — especially when you have manual testers working alongside automation. You create the run from the plan in Qase, hand the run ID to your CI job, and both sides report into the same place.
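In practice, "hand the run ID to your CI job" is often just an environment variable on the test command — following the same naming pattern as QASE_TESTOPS_PLAN_ID above, with 123 standing in for the run created from the plan and pytest for whatever runs your suite:

QASE_TESTOPS_RUN_ID=123 pytest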

The second approach is useful when you want CI to run exactly the tests in a plan — nothing more, nothing less. This is common for targeted regression: "run only the cases we've flagged for this release."


When to use plans

  • Release sign-off — "These 200 cases must pass before we ship. 150 are automated, 50 are manual. One plan, one run, one dashboard."
  • Sprint testing — "This sprint touches auth and payments. Here's a plan with the relevant cases. QA owns the manual ones, CI handles the rest."
  • Compliance or audit — "We need to prove these specific scenarios were tested. The plan is the checklist, the run is the evidence."

If you're a small team running only automated tests, you may not need plans at all — the reporter creates runs automatically and that's enough. Plans become valuable when you need to coordinate manual and automated testing, or when you need to define exactly which tests should run for a specific purpose.