
SWE.5 Purpose, Scope & Outcomes

Official Purpose Statement (ASPICE v3.1): "The purpose of the Software Integration and Integration Testing Process is to integrate the software units and software components until the integrated software is obtained and to ensure that the software units and software components are tested in accordance with the software architectural design."

SWE.5 is the mirror process of SWE.2. Where SWE.2 defines the architectural design and component interfaces, SWE.5 verifies that those interfaces work correctly when real software components are assembled together. The test focus shifts from "does this unit work in isolation?" (SWE.4) to "do these components work together as the architecture intended?" (SWE.5).

📋 Learning Objectives

  • Define the integration order and integration levels for a multi-component ECU software stack
  • Develop integration test cases derived from SWE.2 interface specifications
  • Execute integration tests and document results in ASPICE-compliant form
  • Distinguish between SWE.4, SWE.5, and SWE.6 test scope boundaries
  • Manage integration test defects through the SWE.5/SUP.9 process

The SWE.5 Scope Boundary

| Process | What Is Tested | Verification Basis | Environment |
|---|---|---|---|
| SWE.4 | Individual SW units (functions, modules) | SWE.3 Detailed Design | Host-based (PC) unit test framework; target optional |
| SWE.5 | SW component interactions and interfaces | SWE.2 Software Architecture | Target ECU or SIL (Software-in-the-Loop) |
| SWE.6 | Fully integrated software against customer requirements | SWE.1 Software Requirements | Target ECU, HIL (Hardware-in-the-Loop) |

Base Practices & Integration Strategy

| BP | Name | What Assessors Check | Work Products | Common Failure |
|---|---|---|---|---|
| BP1 | Define integration strategy | The integration order is documented and justified. Which components are integrated first? What stubs/drivers are used for not-yet-available components? What defines an "integration level"? What are entry and exit criteria for each level? | Software Integration Plan / Strategy document with integration levels, order, stubs plan | Integration done ad hoc; components assembled all at once ("big bang integration"), making failure isolation impossible |
| BP2 | Develop integration test cases | Test cases target interface behavior: correct data exchange between components, correct handling of interface error conditions, timing of inter-component calls, shared resource access. Test cases are derived from SWE.2 interface specifications - not from requirement text (that is SWE.6's job). | Integration test specification (ITS) with test cases per interface; trace from test case to SAD interface | Integration test cases are just SWE.6 tests re-run at an earlier stage; no distinct test cases targeting interface contracts |
| BP3 | Perform software integration | Software components are assembled step-by-step according to the integration plan. The integration build is reproducible: a defined source version + configuration produces a known binary. Integration events are logged (what was integrated, when, what version). | Integration build records; version-labeled integrated SW binary; integration log per level | Integration builds are ad hoc; no reproducible build process; "works on my machine" - integration environment not documented |
| BP4 | Perform integration testing | Integration tests are executed on the target or SIL. Results are recorded per test case (pass/fail), with tool/version/date. Failures are logged in the defect tracking system (SUP.9 interface). All failures are dispositioned before SWE.6 entry. | Integration test execution report; defect records linked to integration failures | Integration tests run but not recorded; "it worked, we moved on" - no evidence the tests were actually executed |
| BP5 | Ensure regression testing | When a component changes after integration testing has started, regression tests are rerun for all interfaces that component participates in. The regression suite for each integration level is maintained and traceable to the interface specifications. | Regression test records per integration event | Regression only for SWE.6 (qualification); integration-level regression not tracked separately |
| BP6 | Establish bidirectional traceability | Every integration test case traces to at least one SWE.2 interface or architectural element. Every SWE.2 interface has at least one test case covering it. | Extended traceability matrix: SAD interface → ITS test case | SWE.5 test cases exist but are not linked to the architecture - traceability chain broken at the SWE.2/SWE.5 boundary |
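The BP6 check is mechanical enough to automate. Below is a minimal sketch of a bidirectional traceability audit; the data shapes and all IDs (`IF-…`, `ITC-…`) are illustrative assumptions - in practice these mappings would be exported from the architecture tool and the test management tool.

```python
# Bidirectional traceability check: SWE.2 interfaces vs. SWE.5 test cases.
# IDs and data layout are hypothetical example data, not a mandated format.

# SAD interface ID -> short description (assumed export from the architecture tool)
sad_interfaces = {
    "IF-001": "CAN Rx signal routing",
    "IF-002": "NVM read/write service",
    "IF-003": "Diagnostic session control",
}

# ITS test case ID -> SAD interfaces it covers (assumed export from the test tool)
its_trace = {
    "ITC-010": ["IF-001"],
    "ITC-011": ["IF-001", "IF-002"],
    "ITC-020": [],  # orphan: traces to no interface
}

def check_traceability(interfaces, trace):
    """Return (uncovered interfaces, orphan test cases); both must be empty for BP6."""
    covered = {iface for ifaces in trace.values() for iface in ifaces}
    uncovered = sorted(set(interfaces) - covered)
    orphans = sorted(tc for tc, ifaces in trace.items() if not ifaces)
    return uncovered, orphans

uncovered, orphans = check_traceability(sad_interfaces, its_trace)
print("Interfaces without a test case:", uncovered)  # -> ['IF-003']
print("Test cases without an interface:", orphans)   # -> ['ITC-020']
```

Run against the real exports, a non-empty result on either side is exactly the broken SWE.2/SWE.5 link assessors look for.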

Integration Strategies: Bottom-Up vs. Top-Down vs. Sandwich

The integration strategy defines the order in which components are assembled. ASPICE does not mandate a specific strategy but requires the chosen strategy to be documented and justified.

| Strategy | Order | Stub/Driver Needs | Best For | ASPICE Consideration |
|---|---|---|---|---|
| Bottom-Up | Lowest-layer drivers first, application last | Test drivers needed to call lower modules | Well-defined hardware abstraction layers (AUTOSAR MCAL → BSW → Application) | Easy to trace: MCAL verified first, then BSW modules using real MCAL, then application using real BSW |
| Top-Down | Application logic first, stubs replace lower layers | Stubs for every lower module called | Projects where requirements are clear and HW delivery is delayed | Stub management requires documentation: what stubs were used, which SW version they simulate |
| Sandwich (Hybrid) | Critical-path components first, expand outward | Both stubs and drivers as needed | Complex AUTOSAR ECUs with parallel development teams | Most common in industry; requires an explicit integration order table and stub catalog in the integration plan |
| Big Bang | All components integrated simultaneously | None - but fault isolation is impossible | Small projects (<5 components) with stable interfaces | Weakly justified unless the project scope truly warrants it; assessors will probe for integration fault isolation capability |
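For a bottom-up strategy, the integration order in the plan is essentially a topological sort of the component dependency graph: anything a component calls must be integrated before it. A sketch, using invented AUTOSAR-style component names as example data:

```python
# Derive a bottom-up integration order from component dependencies.
# "A depends on B" => B is integrated (and tested) before A.
# Component names are illustrative, not from a real project.
from graphlib import TopologicalSorter

# component -> set of lower-layer components it calls
deps = {
    "App_DoorControl": {"Bsw_Com", "Bsw_NvM"},
    "Bsw_Com":         {"Mcal_Can"},
    "Bsw_NvM":         {"Mcal_Fls"},
    "Mcal_Can":        set(),
    "Mcal_Fls":        set(),
}

# static_order() yields predecessors first: MCAL, then BSW, then application.
# A dependency cycle (an architecture smell in its own right) raises CycleError.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The resulting list maps directly onto integration levels (IL1: MCAL, IL2: BSW, IL3: Application) and gives BP1's "documented and justified order" a checkable basis.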

SWE.5 Findings & CL2 Readiness

| # | Finding | Fix |
|---|---|---|
| 1 | No distinct integration test cases - SWE.6 qualification tests used as integration tests | Integration tests must target interface behavior from the SWE.2 architecture, not requirement text from SWE.1. Create an Integration Test Specification separate from the Qualification Test Specification. |
| 2 | Integration strategy not documented: components assembled as they become available, no defined order | Write an Integration Plan before SWE.5 begins. One page with: integration levels (IL1: MCAL, IL2: BSW, IL3: Application), entry criteria per level, stub/driver list, responsible engineer. |
| 3 | Integration test results not recorded: "we ran it and it worked" | Every integration test execution must produce a record with: test case IDs executed, pass/fail per TC, tool/environment version, date, engineer. A structured spreadsheet works, as does a test management tool (TestRail, qTest, HP ALM). |
| 4 | Integration failures not linked to defect tracking: failures fixed directly in code without a defect record | Every integration test failure must generate a defect in the problem tracking system (SUP.9). The defect ID is referenced in the integration test result record, and closure of the defect requires re-test evidence. |
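Findings 3 and 4 together define a minimal record schema that can be linted automatically before SWE.6 entry. A sketch, with an assumed field layout and invented IDs:

```python
# Audit integration test records for the two classic findings:
# (3) missing execution evidence and (4) failures without a SUP.9 defect link.
# The record schema below is an assumed minimal example, not a mandated format.
results = [
    {"tc": "ITC-010", "verdict": "pass", "env": "SIL v2.3", "date": "2024-05-06", "defect": None},
    {"tc": "ITC-011", "verdict": "fail", "env": "SIL v2.3", "date": "2024-05-06", "defect": "DEF-2041"},
    {"tc": "ITC-012", "verdict": "fail", "env": "SIL v2.3", "date": "2024-05-06", "defect": None},
]

# Finding 4: every failed TC must reference a defect record
unlinked_failures = [r["tc"] for r in results
                     if r["verdict"] == "fail" and not r["defect"]]

# Finding 3: every record must carry its evidence fields
incomplete = [r["tc"] for r in results
              if not all(r.get(k) for k in ("verdict", "env", "date"))]

print("Failures without a defect record:", unlinked_failures)  # -> ['ITC-012']
print("Records missing evidence fields:", incomplete)          # -> []
```

Both lists must be empty before the integrated software is handed to SWE.6.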

✅ SWE.5 CL2 Readiness Checklist

  • ✅ Integration Strategy document: integration levels, order, entry/exit criteria, stub/driver catalog
  • ✅ Integration Test Specification: test cases with trace to SWE.2 interfaces; distinct from SWE.6 qualification tests
  • ✅ Integration build records: reproducible build process; version-labeled binary per integration level
  • ✅ Integration test execution report: all TCs run, results recorded, failures linked to defects (SUP.9)
  • ✅ Regression: integration tests re-run on component changes; results recorded
  • ✅ Traceability: SAD interface → Integration Test Case bidirectional
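The regression item in the checklist follows BP5's rule: when a component changes, rerun the tests for every interface that component participates in. With the trace matrix in machine-readable form, suite selection is a two-step set intersection. A sketch with invented example data:

```python
# Select the integration-level regression suite for a changed component (BP5).
# Mappings are assumed exports from the trace matrix; names are illustrative.

# SAD interface -> components participating in it
iface_components = {
    "IF-001": {"Bsw_Com", "App_DoorControl"},
    "IF-002": {"Bsw_NvM", "App_DoorControl"},
}
# integration test case -> interfaces it exercises
tc_ifaces = {
    "ITC-010": {"IF-001"},
    "ITC-011": {"IF-002"},
}

def regression_suite(changed_component):
    """All integration TCs touching any interface the changed component is part of."""
    affected = {i for i, comps in iface_components.items()
                if changed_component in comps}
    return sorted(tc for tc, ifs in tc_ifaces.items() if ifs & affected)

print(regression_suite("Bsw_NvM"))          # NvM touches IF-002 only
print(regression_suite("App_DoorControl"))  # application touches both interfaces
```

The selected suite and its results, recorded per integration event, are exactly the "Regression test records" work product in the BP5 row above.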

What's Next

Continue to SWE.6 - Software Qualification Testing, the final process on the right leg of the V-model, which closes the traceability loop back to SWE.1 requirements and produces the evidence required for software release approval.
