
SWE.4 Purpose, Scope & Outcomes

Official Purpose Statement (ASPICE v3.1): "The purpose of the Software Unit Verification Process is to verify software units to provide evidence for the compliance of the software units with the software detailed design and with non-functional software requirements."

SWE.4 is the first process on the right leg of the V-model and is the mirror of SWE.3. It verifies that what was built matches what was designed. SWE.4 is not integration testing (that is SWE.5) - it verifies individual software units in isolation. The key words in the purpose statement are "compliance with the software detailed design" - not with SWE.1 requirements. Requirements coverage is verified at SWE.6; design compliance is SWE.4's job.

📋 Learning Objectives

  • Explain the distinctions among unit verification (SWE.4), integration testing (SWE.5), and qualification testing (SWE.6)
  • Describe the SWE.4 Base Practices and required work products
  • Specify test cases at the unit level: preconditions, inputs, expected outputs, pass/fail criteria
  • Apply and report code coverage metrics (statement, branch, MC/DC) correctly
  • Manage unit test regression and defects through to closure

SWE.4 Process Outcomes

| Outcome | Statement | Assessed Via |
|---|---|---|
| O1 | A unit test strategy and test cases are defined | Unit test plan or strategy document; test cases per unit with preconditions, inputs, expected outputs |
| O2 | Unit tests are performed and results are documented | Test execution records showing pass/fail per test case per unit, with tool/version/date |
| O3 | Consistency and bidirectional traceability between SWE.3 detailed design and unit tests | Test case → detailed design element trace; all design decisions covered by at least one test case |
| O4 | Regression testing applied to maintain compliance | Regression test suite exists; re-run on each change to the unit; results recorded |

Base Practices & Unit Testing in Practice

| BP | Name | What Assessors Check | Work Products | Common Failure |
|---|---|---|---|---|
| BP1 | Define unit test strategy | A documented strategy covering: test scope (which units, which test types), test environment (target vs. host-based), coverage targets (per ASIL), tools used, entry/exit criteria for the unit testing phase | Unit Test Strategy / Plan document | No strategy document; test cases written ad hoc by individual developers without common criteria |
| BP2 | Develop unit test cases | Test cases address: normal operation (typical inputs), boundary values (min, max, just outside range), error conditions (NULL pointer, timeout, invalid state), and performance/timing requirements. Each test case has: ID, precondition, input data, expected output, pass/fail criterion. Test cases trace back to the SWE.3 detailed design. | Unit test specification (test case table per unit) | Test cases exist but cover only the happy path; no boundary or fault-injection tests; no explicit expected output - just "no crash" |
| BP3 | Perform unit testing | Tests are executed (on target or on host using a unit test framework - CUnit, Unity, GoogleTest, VectorCAST). Results are recorded: passed/failed/blocked per test case, per unit, per tool run. All failures are investigated - a "failed" test case left unexplained is a Major Finding. | Unit test execution report (automated or manual); test log with timestamp, version, results per TC | Test execution done informally; results exist only as a developer's memory or a Jira comment; no structured test report |
| BP4 | Measure test coverage | Code coverage is measured with a coverage tool (VectorCAST, Tessy, Bullseye, LDRA). Coverage targets are defined in the strategy (typically: statement coverage 100% for QM, branch coverage 100% for ASIL-A/B, MC/DC 100% for ASIL-C/D). A coverage report is produced. Uncovered code is analyzed: dead code identified and justified, or additional tests written. | Coverage report (from tool); coverage vs. target comparison; dead code analysis | Coverage measured but no target defined; 72% branch coverage with no analysis of what the remaining 28% represents |
| BP5 | Ensure regression testing | A regression test suite (all passing unit tests) is rerun whenever source code changes. Regression failures are investigated before the change is accepted. CI/CD automation of regression is best practice but not mandated - a documented manual regression protocol also satisfies this BP. | Regression test execution records per code change event; CI pipeline configuration if automated | Regression tests exist but are run only before major releases, not on every change; regressions discovered late in integration |
| BP6 | Establish bidirectional traceability | Every unit test case traces to at least one SWE.3 detailed design element (function, data structure, algorithm step). Every SWE.3 design element with testable behavior has at least one test case. In addition, traceability extends upward through SWE.3 to the SWE.2 component and the SWE.1 requirement. | Extended traceability matrix: SRS-ID → SAD Component → SDDD element → Test Case ID | Test cases exist but traceability to the SDDD is missing; test suite structured by file, not by design element |

Code Coverage - ASIL-Based Targets

| ASIL Level | Required Coverage Metric | Target | Tool Support |
|---|---|---|---|
| QM | Statement Coverage | 100% (or justification for uncovered) | Any coverage tool |
| ASIL-A | Statement + Branch Coverage | 100% branch | VectorCAST, Tessy, LDRA, Bullseye |
| ASIL-B | Statement + Branch Coverage | 100% branch | VectorCAST, Tessy, LDRA, Bullseye |
| ASIL-C | Branch + MC/DC Coverage | 100% MC/DC | VectorCAST, LDRA (tool qualification per ISO 26262-8 typically required) |
| ASIL-D | MC/DC Coverage | 100% MC/DC | VectorCAST, LDRA (tool qualification required per IEC 61508 / ISO 26262-8) |

MC/DC (Modified Condition/Decision Coverage) requires that each condition in a compound boolean expression is shown to independently affect the decision outcome. This is significantly more rigorous than branch coverage and requires a minimum of N + 1 test cases for a decision with N conditions (full multiple-condition coverage would need up to 2^N). For ASIL-C/D code, MC/DC coverage analysis is reviewed by the safety assessor as part of the Functional Safety Audit - ASPICE SWE.4 evidence and ISO 26262 Part 6 evidence overlap here.

SWE.4 Findings & CL2 Readiness

| # | Finding | Fix |
|---|---|---|
| 1 | No unit test strategy: tests exist but with no documented scope, coverage targets, tools, or entry/exit criteria | Write a two-page Unit Test Strategy document before testing begins. It need not be long - just explicit about what is tested, on what platform, to what coverage target, with what tool. |
| 2 | Test cases have no expected output: "test passes if no crash or ASSERT" | Every test case must state a quantified expected output. Use test assertions: `ASSERT_EQ(DoorSensor_GetState(0), DOOR_CLOSED)`. The expected value must be derivable from the SWE.3 design, not from running the code. |
| 3 | Coverage measured but gap not analyzed: "78% branch coverage - OK" | Every uncovered branch must be classified: dead code (provably unreachable - document why), defensive code (reachable only via hardware fault - acceptable with justification), or untested (test case missing - fix required). |
| 4 | Regression suite not run on changes: unit tests run once, then never again until integration fails | Gate merge-to-main on passing unit tests in CI. Even a basic Jenkins/GitHub Actions pipeline running Unity/GoogleTest on each commit satisfies this BP and prevents regression-induced integration failures. |

✅ SWE.4 CL2 Readiness Checklist

  • ✅ Unit Test Strategy document: scope, coverage targets per ASIL, tools, entry/exit criteria
  • ✅ Test cases: ID, precondition, input, expected output (quantified), pass/fail criterion, trace to SDDD
  • ✅ Coverage report from tool: meets ASIL-based target; uncovered code classified and justified
  • ✅ Test execution report: all TCs run, pass/fail recorded, failures investigated and resolved or deferred with justification
  • ✅ Regression suite runs on every code change (automated preferred; documented manual process acceptable)
  • ✅ Traceability: SDDD element → Test Case(s) bidirectional

What's Next

Continue to SWE.5 - Software Integration & Integration Testing, where individually verified units are assembled into integrated software and tested against the interfaces defined in SWE.2.
