
What Is the Process Reference Model?

The Process Reference Model (PRM) is the foundational layer of Automotive SPICE. It defines the complete catalog of processes relevant to automotive embedded software development - describing what each process is, what its purpose is, and what outcomes it must produce. The PRM does not describe how to assess those processes (that is the PAM's job) and does not prescribe implementation details.

Technically, ASPICE's PRM is an instance of the ISO/IEC 33004 definition of a PRM: a set of process descriptions that collectively describe the processes within a defined domain (in this case, automotive ECU software development). Every process in the PRM has:

  • A unique process ID (e.g., SWE.1, SUP.8)
  • A purpose statement - one sentence defining what the process achieves
  • A list of process outcomes - observable results that, when achieved, confirm the process is being performed

📋 Learning Objectives

  • Name every process group in the ASPICE PRM and its abbreviation
  • Recite the purpose of each SWE and SYS process from memory - assessors will ask
  • Explain the distinction between a process outcome (PRM level) and a base practice (PAM level)
  • Identify which processes are in the default HIS assessment scope and why
  • Map development V-model phases to their corresponding ASPICE processes

PRM vs PAM: The Critical Distinction

Engineers who are new to ASPICE consistently confuse these two layers. The relationship is: the PAM implements the PRM. The PRM defines what must happen; the PAM defines how to gather evidence that it did happen.

| PRM | PAM |
| --- | --- |
| Defines process purposes and outcomes | Defines Base Practices, Work Products, Generic Practices |
| Used for process definition and improvement | Used for assessment and audit |
| Relatively stable across versions | Updated more frequently to reflect industry experience |
| Derived from ISO/IEC 12207 and 15288 | ASPICE-specific - not part of ISO lifecycle standards |
| Assessors validate outcomes are achieved | Assessors collect indicators (BPs, WPs) as evidence |

In practice: when you design your internal process framework, align to the PRM. When you prepare for an audit, study the PAM. Both use the same process IDs, but they operate at different levels of abstraction.

Process Groups & Process IDs

ASPICE v3.1 defines 32 processes across eight process groups; the six shown below are the ones most relevant to ECU supplier work (the remaining two, SPL and PIM, are rarely in assessment scope). ASPICE v4.0 adds a Hardware Engineering (HWE) group. Each group has a three-letter prefix that appears in every process ID within that group.

| Prefix | Process Group | Processes (v3.1) | Scope |
| --- | --- | --- | --- |
| SWE | Software Engineering | 6 (SWE.1–SWE.6) | All software development activities from requirements to qualification testing |
| SYS | System Engineering | 5 (SYS.1–SYS.5) | System-level requirements, architecture, integration and testing |
| SUP | Supporting Processes | 7 (SUP.1, SUP.2, SUP.4, SUP.7–SUP.10) | Quality assurance, verification, joint reviews, documentation, configuration management, problem resolution, change management |
| MAN | Management Processes | 3 (MAN.3, MAN.5, MAN.6) | Project management, risk management, measurement |
| ACQ | Acquisition Processes | 7 (ACQ.3, ACQ.4, ACQ.11–ACQ.15) | Contract agreement, supplier monitoring, acquisition requirements, supplier qualification |
| REU | Reuse Processes | 1 (REU.2) | Reuse program management - rarely assessed in standard audits |
| HWE | Hardware Engineering | 4 (HWE.1–HWE.4), v4.0 only | Hardware requirements, design, integration testing, qualification testing |

Process Numbering Logic

Process numbers within a group follow the V-model development flow. For SWE and SYS, the numbering intentionally mirrors the left and right legs of the V: SWE.1 (requirements) pairs with SWE.6 (qualification testing); SWE.2 (architecture) pairs with SWE.5 (integration testing); SWE.3 (detailed design + unit construction) pairs with SWE.4 (unit verification). The same logic applies to SYS.2 pairing with SYS.5, and SYS.3 pairing with SYS.4.

This is not cosmetic - it directly informs the traceability requirement. An assessor checking SWE.4 will expect to find test cases traceable back to SWE.3 detailed design decisions. The numbering encodes the expected bidirectional traceability chain.
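The pairing is regular enough to encode directly - a minimal sketch (the mapping itself is taken from the text above; the helper function is illustrative, not part of any ASPICE tooling):

```python
# V-model pairs encoded by SWE/SYS process numbering (from the pairing above).
V_MODEL_PAIRS = {
    "SWE.1": "SWE.6",  # requirements <-> qualification testing
    "SWE.2": "SWE.5",  # architecture <-> integration testing
    "SWE.3": "SWE.4",  # detailed design & units <-> unit verification
    "SYS.2": "SYS.5",  # system requirements <-> system qualification
    "SYS.3": "SYS.4",  # system architecture <-> system integration
}

def v_model_partner(process_id: str) -> str:
    """Return the V-model counterpart of a SWE/SYS process ID."""
    reverse = {right: left for left, right in V_MODEL_PAIRS.items()}
    partner = V_MODEL_PAIRS.get(process_id) or reverse.get(process_id)
    if partner is None:
        raise KeyError(f"no V-model pair defined for {process_id}")
    return partner
```

`v_model_partner("SWE.2")` returns `"SWE.5"` - exactly the lookup an assessor performs mentally when deciding which test process to cross-check against an architecture finding.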

The SWE Process Chain (SWE.1–SWE.6)

The SWE processes are the core of every ASPICE supplier audit. Together they cover the full software development lifecycle from customer requirements to final qualification testing. Every SWE process below is described with its official purpose statement, its key outcomes, and what evidence assessors will look for.

| Process | Name | Purpose (verbatim) | Key Outcomes |
| --- | --- | --- | --- |
| SWE.1 | Software Requirements Analysis | "The purpose of the Software Requirements Analysis Process is to transform the software related parts of the system requirements into a set of software requirements." | Software requirements are defined, consistent with system requirements, prioritized, and approved. Each requirement has a unique identifier. Bidirectional traceability to SYS.2 system requirements exists. |
| SWE.2 | Software Architectural Design | "The purpose of the Software Architectural Design Process is to establish an architectural design and to identify which software requirements are to be allocated to which elements of the software." | Software is decomposed into SW components with defined interfaces. Requirements are allocated to components. Static and dynamic views of the architecture are documented. Bidirectional traceability to SWE.1 requirements exists. |
| SWE.3 | Software Detailed Design and Unit Construction | "The purpose of the Software Detailed Design and Unit Construction Process is to provide an evaluated detailed design for the software units and to produce software units." | Detailed design for each SW unit (sufficient to guide implementation). Unit source code produced. Traceability to SWE.2 architectural components. Static analysis applied to source code. |
| SWE.4 | Software Unit Verification | "The purpose of the Software Unit Verification Process is to verify software units to provide evidence for the compliance of the software units with the software detailed design and with non-functional software requirements." | Unit test strategy, test cases, and test procedures defined. Tests executed and results recorded. Coverage targets (statement, branch, MC/DC depending on ASIL) measured. Defects logged and resolved. |
| SWE.5 | Software Integration and Integration Testing | "The purpose of the Software Integration and Integration Testing Process is to integrate the software units and software components until the integrated software is obtained and to ensure that the software units and software components are tested in accordance with the software architectural design." | Integration order defined. Integration test cases traceable to SWE.2 architecture interfaces. Test results recorded. Regression test suite maintained. Integrated software produced. |
| SWE.6 | Software Qualification Testing | "The purpose of the Software Qualification Testing Process is to ensure that the integrated software is tested to provide evidence for the compliance of the software with the software requirements." | Qualification test cases traceable to SWE.1 requirements (closing the traceability loop). Test results demonstrate coverage of requirements. Regression testing performed. Software qualification test report produced. |

The Traceability Chain: SWE.1 → SWE.6

The single most assessed aspect of the entire SWE chain is bidirectional traceability. Assessors do not just check that requirements and tests exist - they follow the chain in both directions. A typical traceability check during an assessment looks like this:

  1. Take a random sample of 3–5 SWE.1 software requirements
  2. Verify each has a unique ID and is traceable upward to a SYS.2 system requirement
  3. Verify each is traceable downward to at least one SWE.2 architectural component that realizes it
  4. Follow to SWE.3: does the detailed design of that component show how the requirement is implemented?
  5. Follow to SWE.4: does a unit test case exist with explicit traceability back to this requirement or detailed design element?
  6. Follow to SWE.6: does a qualification test case cover this requirement? Is it marked as "passed" in the test report?

If any link in this chain is broken - a requirement with no corresponding test, a test with no traced requirement, an architectural component with no implementing unit - the assessor will flag a weakness or finding at the relevant process. This is the #1 source of CL1 failures in real assessments.
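The sampling walk above can be partly automated once trace links are exported from the requirements tool. A minimal sketch of the idea, assuming a simple in-memory link model (the `TraceItem` layout, the `kind` labels, and the field names are illustrative assumptions, not any specific tool's export format):

```python
from dataclasses import dataclass, field

@dataclass
class TraceItem:
    """One artifact in the trace graph: a requirement, design element, or test."""
    item_id: str
    kind: str                                      # e.g. "SYS.2-req", "SWE.1-req", "SWE.2-comp", "SWE.6-test"
    links: set[str] = field(default_factory=set)   # IDs this item traces to

def check_sw_requirement(items: dict[str, TraceItem], req_id: str) -> list[str]:
    """Flag broken links for one SWE.1 requirement, mirroring the assessor's walk."""
    findings = []
    req = items[req_id]
    def linked(kind: str) -> list[TraceItem]:
        return [items[i] for i in req.links if items[i].kind == kind]
    if not linked("SYS.2-req"):
        findings.append(f"{req_id}: no upward trace to a SYS.2 system requirement")
    if not linked("SWE.2-comp"):
        findings.append(f"{req_id}: no downward trace to an SWE.2 component")
    if not linked("SWE.6-test"):
        findings.append(f"{req_id}: no SWE.6 qualification test covers it")
    return findings
```

A real checker would also walk the reverse direction (test case back to requirement) and cover the SWE.3/SWE.4 links; the point is that every step of the assessor's walk is mechanically checkable, which is what keeps a large trace set auditable.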

⚠️ Traceability Tool Reality

Excel-based traceability matrices can satisfy ASPICE for small projects, but they become unmaintainable above ~500 requirements. Tools like IBM DOORS, Polarion, Jama Connect, or even Jira + requirements plugins are used in production. Whatever tool you use, the key is that every link is recorded, maintained, and auditable - not just that coverage numbers look good on paper.

System Engineering: SYS.1–SYS.5

The SYS processes sit above the SWE chain in the V-model. They handle the system-level requirements that get decomposed into software requirements (SWE.1), hardware requirements (HWE.1), and mechanical requirements. In an ECU supplier context, SYS.1 captures the OEM's vehicle-level requirements; SYS.2 refines them into a system technical specification; SYS.3 designs the system architecture that partitions functions across HW and SW.

| Process | Name | Purpose Summary | Output to SWE? |
| --- | --- | --- | --- |
| SYS.1 | Requirements Elicitation | Gather and define stakeholder (OEM) requirements for the system. Understand operational context, constraints, and interfaces. | Indirectly - stakeholder requirements flow into SYS.2 |
| SYS.2 | System Requirements Analysis | Transform stakeholder requirements into a defined set of system technical requirements. Assign ASIL ratings. Define measurable acceptance criteria. | Direct - SYS.2 system requirements are the mandatory input to SWE.1 |
| SYS.3 | System Architectural Design | Define the system architecture: partition functions to SW, HW, and mechanical elements. Define inter-element interfaces. | Direct - SW allocation from SYS.3 feeds SWE.2 architectural design |
| SYS.4 | System Integration and Integration Testing | Integrate SW, HW, and mechanical elements. Test the integrated system against SYS.3 architectural interfaces. | Receives - SWE.6-qualified software is input to SYS.4 |
| SYS.5 | System Qualification Testing | Test the complete system against SYS.2 system requirements. Produce the system qualification test report. | Receives - output of SYS.4 integration is input to SYS.5 |

Where SYS Scope Ends for Tier-1 Suppliers

A frequent question in supplier assessments is: "Which SYS processes are actually in scope for us?" The answer depends on the supplier's contractual role. Most Tier-1 ECU suppliers are responsible for SYS.2 (system requirements - for them, the OEM input translated into a system technical specification, often called an FSD or STS) and SYS.3 (the HW/SW architecture partition). SYS.4 and SYS.5 may be shared with or delegated to the OEM if the OEM runs vehicle-level integration.

For software-only suppliers (pure SW development with no HW responsibility), SYS.1 and SYS.3 are often out of scope - the supplier receives an already-architected system design. Always clarify scope boundaries in the project-specific Process Assessment Framework before an audit begins.

Supporting Processes: SUP, MAN, ACQ

The SUP, MAN, and ACQ processes are the "infrastructure" that enables the engineering processes to function at higher capability levels. They are often underestimated by teams focused on engineering - and that is why they are consistently the lowest-scoring processes in supplier audits.

Key SUP Processes in HIS Scope

| Process | Name | What It Requires in Practice |
| --- | --- | --- |
| SUP.1 | Quality Assurance | An independent QA function that audits process compliance and product quality - not the project team self-certifying. Evidence: QA audit records, QA reports to management, nonconformance tracking. Independence is a hard requirement - the project lead cannot be the QA auditor. |
| SUP.8 | Configuration Management | All work products under version control with unique identifiers. Baselines established at defined milestones (e.g., after SWE.1 review, before release). Change history auditable. Build reproducibility demonstrable. Evidence: CM plan, version control logs, baseline records, build manifests. |
| SUP.9 | Problem Resolution Management | All defects and problems logged, classified, analyzed, resolved, and closed with formal tracking. Evidence: defect tracking system records showing the lifecycle from detection to closure, root cause analysis for high-severity issues, trend metrics. |
| SUP.10 | Change Request Management | All changes to baselined work products initiated through a formal change request (CR) process. Impact analysis performed. CRs approved by authorized personnel before implementation. Evidence: CR records in a tracking tool, impact analysis documents, approval records. |

Key MAN Processes in HIS Scope

| Process | Name | What Assessors Verify |
| --- | --- | --- |
| MAN.3 | Project Management | Project plan exists, is updated, and is actually used to manage the project. Schedule, milestones, resource allocation, and effort estimates documented and tracked. Deviations from plan analyzed and corrective actions taken. Status communicated to stakeholders. |
| MAN.5 | Risk Management | Risk register maintained. Risks assessed for probability and impact. Mitigation actions tracked. Risk review meetings documented. This is not a one-time exercise at project start - ongoing updates are expected. |
| MAN.6 | Measurement | Project-specific measurement objectives defined. Quantitative data collected and analyzed. Metrics used for project management decisions (not just reported and ignored). Common metrics: requirements stability index, defect discovery rates, test coverage trends, review effectiveness. |

🔍 The MAN.3 Trap

The most common MAN.3 finding: the project plan exists and is detailed, but there is no evidence it was ever used for management decisions. Assessors will ask: "Show me a status meeting where you compared actuals to plan and made a decision based on that comparison." If you cannot show this, the plan is a decoration - and assessors know it.

ACQ.4 - Supplier Monitoring

ACQ.4 (Supplier Monitoring) is in HIS scope for companies that sub-supply to other suppliers. If a Tier-1 purchases software components from a Tier-2 (e.g., an MCAL layer from a silicon vendor, or a safety library from a functional safety supplier), the Tier-1 must demonstrate that they monitor and control that supplier's process quality - not just the end product quality. Evidence includes: supplier assessment records, regular progress reviews, escalation records for supplier deviations.

HIS Default Scope & Why It Exists

The HIS default assessment scope is the set of processes that the founding OEMs of the Hersteller Initiative Software (HIS) agreed represents the minimum meaningful scope for ECU software supplier audits. It is not mandated by the ASPICE standard itself - it is an OEM industry agreement that became the de facto norm.

| Process | In HIS Scope | Rationale |
| --- | --- | --- |
| SWE.1–SWE.6 | ✅ All 6 | Core software development chain - always in scope |
| SYS.1 | ❌ | Customer (OEM) responsibility in most supplier relationships |
| SYS.2 | ✅ | Supplier must demonstrate they can translate system requirements into software requirements |
| SYS.3 | ✅ | HW/SW architecture partition is supplier responsibility |
| SYS.4 | ✅ | HW/SW integration is supplier responsibility |
| SYS.5 | ✅ | System qualification is supplier responsibility |
| SUP.1 | ✅ | Independent QA is non-negotiable for safety-relevant software |
| SUP.8 | ✅ | Configuration management is the prerequisite for reproducible builds and auditable change history |
| SUP.9 | Context-dependent | Often rolled into SUP.10 scope; OEM-specific decision |
| SUP.10 | ✅ | Change request management protects baseline integrity |
| MAN.3 | ✅ | Project management is required to demonstrate CL2 managed processes |
| MAN.5, MAN.6 | Context-dependent | Increasingly required by OEMs for CL3 assessments; not always in base HIS scope |

Scope Expansion and Compression

OEMs increasingly expand the HIS default scope for high-risk or safety-relevant projects. Common additions: SUP.9 (Problem Resolution), MAN.5 (Risk Management), and for AUTOSAR-based projects, implicitly the full SYS chain. Some OEMs also define project-specific scopes that exclude processes when the supplier has a clearly bounded responsibility (e.g., software-only deliverable with no system integration role).

Always read the Supplier Quality Agreement (SQA) or Assessment Contract before preparing for an audit. The scope should be written in the agreement - if it is not, clarify it in writing before the assessment kicks off. Scope ambiguity discovered mid-assessment is a project risk.
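A useful pre-audit sanity check is to diff the scope written in the agreement against the HIS default. A minimal sketch - `HIS_DEFAULT` mirrors the scope as this chapter describes it and is this article's reading, not normative text:

```python
# HIS default scope as described in this chapter (article's reading, not normative).
HIS_DEFAULT = {
    "SWE.1", "SWE.2", "SWE.3", "SWE.4", "SWE.5", "SWE.6",
    "SYS.2", "SYS.3", "SYS.4", "SYS.5",
    "SUP.1", "SUP.8", "SUP.10",
    "MAN.3",
}

def scope_gaps(agreed_scope: set[str]) -> dict[str, set[str]]:
    """Compare a project's agreed assessment scope against the HIS default."""
    return {
        "missing_vs_his": HIS_DEFAULT - agreed_scope,  # in HIS default but excluded
        "additions": agreed_scope - HIS_DEFAULT,       # OEM-specific extensions
    }
```

For a software-only supplier whose agreement excludes SYS.4, `missing_vs_his` surfaces that exclusion so it can be confirmed in writing up front rather than discovered mid-assessment.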

Process Outcomes vs Base Practices

This distinction is essential and consistently misunderstood. Process Outcomes belong to the PRM. Base Practices belong to the PAM. They describe the same process from different angles.

Example: SWE.1 Outcomes vs SWE.1 Base Practices

The SWE.1 PRM defines these process outcomes (abbreviated):

  • SWE.1.O1: Software requirements are defined
  • SWE.1.O2: Software requirements are consistent with system requirements
  • SWE.1.O3: Software requirements are evaluated for testability
  • SWE.1.O4: Software requirements are prioritized
  • SWE.1.O5: The impact of proposed changes to software requirements is analyzed
  • SWE.1.O6: Software requirements are agreed and communicated to all affected parties

The SWE.1 PAM then operationalizes these outcomes into Base Practices (actual activities) and Work Products (tangible evidence). For example:

| Base Practice | Description | Covers Outcome(s) | Expected Work Product |
| --- | --- | --- | --- |
| SWE.1.BP1 | Specify SW requirements. Define and document the software requirements including functional and non-functional requirements. | O1, O2 | Software Requirements Specification (SRS) |
| SWE.1.BP2 | Structure software requirements. Provide an identification and structure to the software requirements to support consistency, testability, and traceability. | O1, O3 | SRS with unique IDs and structure attributes |
| SWE.1.BP3 | Analyze software requirements. Analyze the software requirements including completeness, consistency, feasibility, and correctness. | O2, O3 | Review records, analysis results |
| SWE.1.BP4 | Analyze the impact on the operating environment. Understand and document the effects of software requirements on other system elements and external systems. | O2 | Interface analysis document, updated SRS |
| SWE.1.BP5 | Develop verification criteria. Identify how each software requirement will be verified - test, analysis, inspection, demonstration. | O3 | Verification method matrix or annotated SRS |
| SWE.1.BP6 | Ensure consistency and establish bidirectional traceability. Trace each software requirement to the system requirement(s) it realizes. | O2, O5 | Traceability matrix (SRS ↔ System Req) |
| SWE.1.BP7 | Identify the content of the software product release notes. Specify which requirements will be covered by each software release. | O6 | Release planning documents, release notes template |
| SWE.1.BP8 | Ensure agreement and communicate requirements. Confirm requirements are agreed with stakeholders and distributed to all relevant parties. | O6 | Review sign-off records, distribution records |
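The BP-to-outcome mapping in the table can itself be checked mechanically: every outcome should be addressed by at least one base practice. A sketch over the abbreviated mapping above (the O1–O6 labels follow this article's shorthand, not the standard's wording):

```python
# Outcome coverage claimed by each SWE.1 base practice (from the table above).
BP_COVERS = {
    "BP1": {"O1", "O2"},
    "BP2": {"O1", "O3"},
    "BP3": {"O2", "O3"},
    "BP4": {"O2"},
    "BP5": {"O3"},
    "BP6": {"O2", "O5"},
    "BP7": {"O6"},
    "BP8": {"O6"},
}
ALL_OUTCOMES = {"O1", "O2", "O3", "O4", "O5", "O6"}

def uncovered_outcomes(bp_covers: dict[str, set[str]]) -> set[str]:
    """Outcomes not addressed by any base practice in the mapping."""
    covered = set().union(*bp_covers.values())
    return ALL_OUTCOMES - covered
```

Run on the table's mapping, this flags O4 (prioritization), which the abbreviated table does not map to any BP - in a full process definition such gaps would need to be resolved or justified.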

💡 How Assessors Use This

During an SWE.1 assessment interview, an assessor will walk through each BP and ask for evidence. They might say: "Show me how you implement BP6 - bidirectional traceability from your software requirements to your system requirements." If your SRS has requirement IDs but no explicit link back to the system requirement document, you have partially achieved BP6 at best. In ASPICE notation the BP indicator is rated P (Partially), which drags down the PA 1.1 rating - and a PA 1.1 rating below Largely achieved means SWE.1 does not reach CL1.

Summary & Key Takeaways

✅ Key Takeaways

  • The PRM defines what processes exist (purpose + outcomes). The PAM defines how to assess them (BPs + WPs + GPs).
  • Process groups in v3.1: SWE, SYS, SUP, MAN, ACQ, REU (plus the rarely assessed SPL and PIM); v4.0 adds HWE. Memorize the prefix, group name, and rough process count.
  • SWE.1–SWE.6 form a V-model chain: SWE.1↔SWE.6, SWE.2↔SWE.5, SWE.3↔SWE.4. Each pair requires bidirectional traceability across the V.
  • The HIS default scope = SWE.1–6, SYS.2–5, SUP.1, SUP.8, SUP.10, MAN.3. Know which processes are in and out and why.
  • SUP and MAN processes are consistently the weakest in supplier audits - they are not optional scaffolding, they are what enables CL2+.
  • Always check your SQA for the exact scope, version, and Capability Level targets before preparing for an assessment.

PRM Quick Reference: All HIS-Scope Processes at a Glance

| ID | Name | V-Model Position | Pair |
| --- | --- | --- | --- |
| SYS.2 | System Requirements Analysis | Left - System level | SYS.5 |
| SYS.3 | System Architectural Design | Left - System level | SYS.4 |
| SYS.4 | System Integration & Testing | Right - System level | SYS.3 |
| SYS.5 | System Qualification Testing | Right - System level | SYS.2 |
| SWE.1 | SW Requirements Analysis | Left - SW level | SWE.6 |
| SWE.2 | SW Architectural Design | Left - SW level | SWE.5 |
| SWE.3 | SW Detailed Design & Unit Construction | Left - SW unit level | SWE.4 |
| SWE.4 | SW Unit Verification | Right - SW unit level | SWE.3 |
| SWE.5 | SW Integration & Integration Testing | Right - SW level | SWE.2 |
| SWE.6 | SW Qualification Testing | Right - SW level | SWE.1 |
| SUP.1 | Quality Assurance | Cross-cutting | - |
| SUP.8 | Configuration Management | Cross-cutting | - |
| SUP.10 | Change Request Management | Cross-cutting | - |
| MAN.3 | Project Management | Cross-cutting | - |

What's Next

The next chapter covers Capability Levels 0–5 in depth - the measurement framework that turns the PRM process catalog into quantified ratings. You will learn what PA 1.1 through PA 5.2 require, how Generic Practices escalate capability requirements at each level, and what the N/P/L/F ratings - "Largely achieved" versus "Fully achieved" in particular - mean in an actual assessment interview and for organizational process maturity.
