
SWE.6 Purpose, Scope & Outcomes

Official Purpose Statement (ASPICE v3.1): "The purpose of the Software Qualification Testing Process is to ensure that the integrated software is tested to provide evidence for the compliance of the software with the software requirements."

SWE.6 closes the V-model. It is the mirror of SWE.1: where SWE.1 defined the software requirements, SWE.6 verifies that the fully integrated software meets every one of those requirements. The output of SWE.6 - a Software Qualification Test Report - is the primary document an OEM uses to accept a software release. It is the most externally visible ASPICE process artifact.

📋 Learning Objectives

  • Describe the distinction between SWE.6 qualification testing and SWE.5 integration testing
  • Develop qualification test cases that are traceable to SWE.1 requirements and have quantified pass/fail criteria
  • Produce a Software Qualification Test Report that satisfies ASPICE BP requirements
  • Manage regression testing and requirements coverage at the qualification level
  • Close the full traceability chain: SYS.2 → SWE.1 → SWE.2 → SWE.3 → SWE.4 → SWE.5 → SWE.6

SWE.6 Process Outcomes

| Outcome | Statement | Assessed Via |
|---|---|---|
| O1 | A qualification test strategy and test cases are developed | Software Test Plan + Software Qualification Test Specification with test cases per SWE.1 requirement |
| O2 | SW qualification tests are performed and results are documented | Software Qualification Test Report with pass/fail per test case, version, environment, date |
| O3 | Consistency and bidirectional traceability between SW requirements and test cases | Full traceability closure: every SWE.1 requirement covered by ≥1 test case; every test case traces to ≥1 requirement |
| O4 | Regression testing ensures continued compliance after changes | Regression test execution records for each software release or change |

Base Practices & Qualification Testing

| BP | Name | What Assessors Check | Work Products | Common Failure |
|---|---|---|---|---|
| BP1 | Develop SW qualification test strategy | Defines: test scope (all SWE.1 requirements), test environment (HIL, target ECU, or SIL with documented limitations), tools used, test personnel roles, and entry/exit criteria (e.g., all P1 failures resolved; ≤3 P2 open with waiver). The SW version under test must be a CM-controlled build. | Software Test Plan / Qualification Test Strategy | No formal test strategy; entry/exit criteria not defined; test environment not documented |
| BP2 | Develop SW qualification test cases | One test case per SWE.1 requirement (minimum). Each test case: unique TC-ID, precondition (ECU state, bus configuration), stimulus (CAN message, sensor input, diagnostic service), expected response (output value with tolerance, DTC set/clear, CAN message response time), pass/fail criterion (quantified, not "behaves correctly"). Edge cases and error conditions covered, not just nominal. | Software Qualification Test Specification (SQTS) | Test cases written at feature level without tracing to specific requirement IDs; pass criteria are vague; error injection cases absent |
| BP3 | Perform SW qualification testing | Tests executed on the SW version recorded in the test report. Each test case result: pass/fail, actual observed output, deviation notes. Tester and date recorded. Failures logged in defect tracker (SUP.9). A "Blocked" status is explained and risk-assessed. | Software Qualification Test Report (SQTR) | Test report shows only "PASS" without actual observed values; failures noted but not logged in defect tracker; test report version does not match SW version tested |
| BP4 | Ensure SW requirements coverage | Every SWE.1 requirement has at least one test case in the SQTS. Requirements coverage matrix is produced and shows ≥95% coverage (justified gaps for requirements verified by analysis, inspection, or deferred to SYS.5). ASIL-rated requirements have full coverage; no justified gaps allowed for ASIL-B/C/D requirements. | Requirements coverage matrix (SRS-ID → TC-ID); coverage metric per ASIL level | Coverage matrix not produced; coverage assumed based on "we tested all features"; ASIL-rated requirements mixed with QM without separate coverage tracking |
| BP5 | Ensure regression testing | For every software change (CR-driven or defect fix), affected test cases are identified and re-executed. Regression strategy defines how to determine test scope for a given change (full regression vs. risk-based subset). Regression results recorded per change event. | Regression test execution records per software release; change impact analysis for test scope | Full regression only run at final release; partial releases tested with no documented rationale for test scope selection |
| BP6 | Establish bidirectional traceability | This BP closes the entire V-model traceability chain. Assessors will do a full chain walk: pick a SWE.1 requirement → find the SWE.6 test case → verify it is executed in the SQTR → find the result → trace back to the SWE.2 component → SWE.3 detailed design → SWE.4 unit test. All links must be present and current. | Final integrated traceability matrix: SYS.2 → SWE.1 → SWE.2 → SWE.3 → SWE.4 TC → SWE.6 TC | Traceability chain exists for SWE.1–SWE.2 but breaks between SWE.5 and SWE.6; qualification test cases not linked back to SWE.1 requirement IDs in the test report |
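The coverage and traceability checks in BP4 and BP6 lend themselves to automation. A minimal sketch, assuming requirements and test cases have been exported from the ALM tool as simple dicts (the IDs, field names, and data shapes below are illustrative, not from any specific tool):

```python
# Hypothetical traceability check: find SWE.1 requirements with no test
# case (BP4 gap) and test cases with no valid requirement link (BP6 gap).

def check_traceability(requirements, test_cases):
    """Return uncovered requirement IDs, orphan TC IDs, and coverage %."""
    req_ids = {r["id"] for r in requirements}
    covered = set()
    orphans = []
    for tc in test_cases:
        traced = set(tc["traces_to"]) & req_ids
        if not traced:
            orphans.append(tc["id"])       # TC with no valid SWE.1 link
        covered |= traced
    uncovered = sorted(req_ids - covered)  # requirements with no TC
    coverage = 100.0 * len(covered) / len(req_ids) if req_ids else 0.0
    return uncovered, orphans, coverage

reqs = [{"id": "SRS-001"}, {"id": "SRS-002"}, {"id": "SRS-003"}]
tcs = [
    {"id": "TC-101", "traces_to": ["SRS-001"]},
    {"id": "TC-102", "traces_to": ["SRS-002", "SRS-001"]},
    {"id": "TC-999", "traces_to": ["SRS-XXX"]},  # broken link
]
uncovered, orphans, coverage = check_traceability(reqs, tcs)
print(uncovered, orphans, round(coverage, 1))
# → ['SRS-003'] ['TC-999'] 66.7
```

Running a check like this in CI on every export turns the assessor's chain walk from a release-time surprise into a continuously monitored metric.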

Software Qualification Test Report - What It Must Contain

The SQTR is the most OEM-visible document in the ASPICE process. It accompanies every software release. A compliant SQTR must include:

| Section | Required Content | ASPICE Link |
|---|---|---|
| Cover / Header | SW version tested (including Git hash or build ID), test report version, test environment version, date range of testing, test team (names and roles) | GP 2.2.3, GP 2.2.4 |
| Test Configuration | ECU HW revision, diagnostic tool version, HIL/SIL configuration version, calibration file version, bus configuration (CAPL version, .dbc version). Must be reproducible: someone else can recreate the exact test environment. | BP1, GP 2.2.2 |
| Test Summary | Total TCs: N. Passed: X. Failed: Y. Blocked: Z. Requirements coverage: NN%. Open defects: list with severity and waiver status. | BP3, BP4 |
| Test Case Results | Per-TC row: TC-ID, SRS-ID traced, test description, precondition, stimulus, expected output (from SQTS), actual output observed, result (PASS/FAIL/BLOCKED), defect reference if FAIL. | BP2, BP3, BP6 |
| Requirements Coverage Matrix | Table or matrix: SRS-ID → TC-ID(s) → Result. Color-coded by coverage (green=covered+passed, red=covered+failed, yellow=partial). ASIL classification per requirement. | BP4 |
| Open Issues / Waivers | For each open failure: defect ID, severity, root cause (if known), risk assessment, waiver approval (signature or tracking record), planned fix version. | BP3, MAN.5 interface |
| Regression Delta | If this is not the first test run: which TCs were rerun vs. previous version, what changed, what new failures or regressions were introduced. | BP5 |
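The Test Summary counts should be derived from the per-TC result rows, not maintained by hand, so they can never drift out of sync. A minimal sketch, assuming per-TC results are available as dicts with illustrative field names:

```python
# Hypothetical sketch: compute the SQTR "Test Summary" counts
# (Total / Passed / Failed / Blocked) from the per-TC result rows.
from collections import Counter

def summarize(results):
    counts = Counter(r["result"] for r in results)
    return {
        "total": len(results),
        "passed": counts.get("PASS", 0),
        "failed": counts.get("FAIL", 0),
        "blocked": counts.get("BLOCKED", 0),
    }

rows = [
    {"tc": "TC-101", "result": "PASS"},
    {"tc": "TC-102", "result": "FAIL", "defect": "DEF-42"},
    {"tc": "TC-103", "result": "BLOCKED"},
    {"tc": "TC-104", "result": "PASS"},
]
print(summarize(rows))
# → {'total': 4, 'passed': 2, 'failed': 1, 'blocked': 1}
```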

Test Environment Documentation (Critical)

A test result is only valid if it can be reproduced. ASPICE does not explicitly require reproducibility, but assessors will probe: "If this test failed in production and the customer asked you to reproduce it, could you?" If the answer is "we'd need to find the same bench setup, rebuild the software, and hope the calibration was the same," the test environment is not adequately documented. The SQTR, or a referenced test environment document, must record:

  • ECU hardware version (PCB revision, populated components)
  • ECU software version (build ID or Git hash - not just version string)
  • Test harness version (HIL model, signal injection scripts, CAN database version)
  • Calibration file version (with parameter values for safety-critical parameters)
  • Measurement and diagnostic tools (CANoe version + workspace file, ODX/PDX version, Vector configuration)
  • Test execution environment (OS, hardware clock accuracy if timing tests performed)
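One practical way to capture the items above is a machine-readable environment manifest generated at test start and attached to the SQTR. A minimal sketch, assuming the field names and version strings below (all illustrative, not mandated by ASPICE):

```python
# Hypothetical environment manifest for the SQTR. Field names and the
# example values are illustrative; real projects would pull these from
# the CM system and tool installations at test start.
import json
from dataclasses import dataclass, asdict

@dataclass
class TestEnvironment:
    ecu_hw_revision: str      # PCB revision, populated components
    sw_build_id: str          # Git hash / CI build ID, not a version string
    hil_model_version: str    # test harness / HIL model
    can_db_version: str       # .dbc version
    calibration_version: str  # calibration file version
    canoe_version: str        # measurement/diagnostic tooling

env = TestEnvironment(
    ecu_hw_revision="PCB rev C",
    sw_build_id="git:3fa1b2c / CI#1042",
    hil_model_version="HIL-2.4.1",
    can_db_version="veh_bus_v17.dbc",
    calibration_version="CAL-0.9.3",
    canoe_version="CANoe 16 SP4",
)
print(json.dumps(asdict(env), indent=2))
```

Because the manifest is generated, not typed, it answers the assessor's reproducibility probe with a file rather than a recollection.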

SWE.6 Findings, Traceability Closure & CL2 Checklist

| # | Finding | Fix |
|---|---|---|
| 1 | Requirements coverage <100%: some SWE.1 requirements have no test case | Every SWE.1 requirement must have a test case. If a requirement cannot be tested at SW level (e.g., system-level behavior only observable at SYS.5), document this explicitly with the alternative verification method and reference the SYS.5 test case that covers it. |
| 2 | Test report shows PASS but no actual observed values - only "as expected" | Record quantified actual values: "Expected: voltage 4.9–5.1V; Observed: 5.03V. PASS." This is the only way an OEM or assessor can evaluate whether the pass/fail criterion was correctly applied. |
| 3 | SW version in test report does not match SW version delivered to customer | SQTR must reference the exact binary that was qualified. Apply a build tag (Git tag + CI build number) that is locked before testing begins and cannot change during the test cycle. Build from the tagged commit to produce the delivery binary. |
| 4 | Open failures waived without documented risk assessment or approval | Every waiver requires: defect ID, severity classification, root cause or workaround, explicit risk acceptance signature from authorized role (project lead + safety manager for ASIL items), planned fix version. |
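The fix for finding 2 means the verdict and the observed value are recorded together, with the verdict computed from a quantified criterion. A minimal sketch (the TC-IDs, tolerance band, and result-string format are illustrative):

```python
# Hypothetical sketch: evaluate a quantified pass/fail criterion and
# emit a result line that records the actual observed value, so the
# verdict can be independently re-checked from the report alone.

def evaluate(tc_id, expected_min, expected_max, observed, unit):
    verdict = "PASS" if expected_min <= observed <= expected_max else "FAIL"
    return (f"{tc_id}: Expected: {expected_min}-{expected_max}{unit}; "
            f"Observed: {observed}{unit}. {verdict}")

print(evaluate("TC-101", 4.9, 5.1, 5.03, "V"))
# → TC-101: Expected: 4.9-5.1V; Observed: 5.03V. PASS
print(evaluate("TC-102", 4.9, 5.1, 5.23, "V"))
# → TC-102: Expected: 4.9-5.1V; Observed: 5.23V. FAIL
```

A report row written this way also survives a re-audit: if the tolerance band in the SQTS is later corrected, the recorded observation can be re-judged without rerunning the test.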

✅ SWE.6 CL2 Readiness Checklist

  • ✅ Software Test Plan / Qualification Test Strategy with scope, environment, tools, entry/exit criteria
  • ✅ SQTS: test cases per SWE.1 requirement with quantified preconditions, stimuli, and pass/fail criteria
  • ✅ SQTR: SW version, test environment, per-TC results with actual observed values, coverage summary
  • ✅ Requirements coverage matrix: 100% of SWE.1 requirements covered; ASIL requirements fully covered
  • ✅ Open failures: logged in SUP.9, risk-assessed, waivers approved by authorized role
  • ✅ Regression: test scope per change documented; regression results per release recorded
  • ✅ Traceability chain closed: SYS.2 → SWE.1 → SWE.6 TC in both directions; full V-model chain traversable

What's Next

Continue to SYS.1–SYS.5: System Engineering Processes, where the system-level V-model is addressed: how stakeholder requirements become system technical requirements, how HW/SW architecture partition is managed, and how system integration and qualification testing work above the software level.
