
BP vs GP: The Two Indicator Types

Every ASPICE assessment is built on two types of indicators defined in the PAM: Base Practices (BPs) and Generic Practices (GPs). Understanding the difference is essential: assessors treat them differently, they serve different purposes, and failing one rather than the other has different implications for your Capability Level rating.

| | Base Practice (BP) | Generic Practice (GP) |
|---|---|---|
| What it is | An activity specific to a particular process (e.g., SWE.1.BP6: establish traceability) | An activity that applies identically to every process at a given capability level (e.g., GP 2.1.2: plan the process) |
| Assessed at | PA 1.1 - contributes to CL1 | PA 2.1, 2.2, 3.1, 3.2 - contributes to CL2, CL3 |
| Process-specific? | Yes - each process has its own BP set | No - the same GPs apply to SWE.1, SWE.2, MAN.3, etc. |
| Work products produced | Engineering WPs (SRS, architecture doc, test cases) | Management WPs (plans, review records, CM baselines) |
| If not met | PA 1.1 is rated P or N → CL1 not achieved | PA 2.1/2.2 rated P or N → CL2 not achieved |

📋 Learning Objectives

  • Recite from memory the BPs for SWE.1 through SWE.4 with their descriptions
  • Explain the difference between "defined" and "communicated" in SWE.1.BP8
  • Identify which BPs are the most frequently failed in OEM assessments
  • Apply the GP 2.2.4 review requirement to a concrete project scenario
  • Distinguish sufficient from insufficient evidence for each practice type

SWE.1 BPs: Software Requirements Analysis

SWE.1 has 8 Base Practices in v3.1. These must all be performed (rated L or F in aggregate) for PA 1.1 to be Fully achieved. Below is each BP with its official description, what it means in practice, the expected work product, and the most common failure mode seen in real assessments.

| BP | Name | Practice Description | Expected Evidence | Common Failure |
|---|---|---|---|---|
| SWE.1.BP1 | Specify software requirements | Define and document each software requirement. Each requirement must be uniquely identified, correct, verifiable, and traceable. | Software Requirements Specification (SRS) with numbered requirements and attributes (priority, type, source) | Requirements stated as design decisions ("the system shall use a 16-bit counter") rather than true requirements ("the system shall count events up to 65535") |
| SWE.1.BP2 | Structure software requirements | Organize requirements with unique IDs and a logical structure (chapters, groups) that supports navigation, review, and change management. | SRS with hierarchical numbering, requirement IDs (e.g., SRS-FEAT-0042), and a consistent naming convention | Flat list of requirements with sequential numbers only, no grouping by function or feature area - makes impact analysis impractical |
| SWE.1.BP3 | Analyze software requirements | Evaluate requirements for completeness, consistency, technical feasibility, and correctness. Document analysis results and resolve identified issues. | Requirements review record with analysis checklist results, or documented requirements review meeting minutes with an issues list | Review is done mentally but not recorded; or only a "sign-off" page exists without any issue log showing what was actually analyzed |
| SWE.1.BP4 | Analyze impact on operating environment | Identify and document how software requirements affect other system elements - hardware, other ECUs, vehicle networks, external systems. | Interface analysis section in the SRS, or a separate Interface Control Document (ICD) showing affected interfaces | SRS treats the software in isolation, with no analysis of how requirements drive hardware sizing, bus load, or interaction with adjacent ECUs |
| SWE.1.BP5 | Develop verification criteria | For each requirement, identify how it will be verified: test, analysis, inspection, or demonstration. Define acceptance criteria specific enough to make the verification objective. | Verification method column in the SRS, or a separate verification cross-reference matrix (VCRM). Acceptance criteria must be measurable, not just "the function works correctly." | Verification method defined generically ("test") without acceptance criteria; or no verification method defined at all for non-functional requirements like response time or memory usage |
| SWE.1.BP6 | Ensure consistency and establish bidirectional traceability | Every SWE.1 requirement must trace upward to a SYS.2 system requirement that justifies its existence. Every SYS.2 system requirement must trace downward to at least one SWE.1 requirement that realizes it. Both directions must be documented. | Traceability matrix (spreadsheet, DOORS links, Polarion associations) showing SRS ID ↔ System Req ID mappings. Coverage report showing all SYS.2 requirements are covered. | SRS requirements have source annotations ("from STS §3.2.1") but no formal machine-readable trace link; the assessor cannot verify coverage without manual inspection. Or: traceability exists but 15–20% of requirements are unmapped - rated P, not F. |
| SWE.1.BP7 | Identify the content of the software product release notes | Specify which requirements will be covered in which software release. Provide input to release planning so that stakeholders know what will be delivered and when. | Release plan or sprint plan showing requirement allocation to releases; requirements marked with a "target release" attribute | Release planning exists as a schedule, but requirements are not explicitly tagged to releases; the assessor cannot determine which requirements are in scope for the release being assessed |
| SWE.1.BP8 | Ensure agreement and communicate requirements | Requirements must be formally agreed with all affected parties (OEM, system engineers, architects, test team) and distributed. "Communicated" means acknowledged receipt, not just emailed. | Review sign-off page signed by relevant stakeholders; distribution records; meeting minutes from the requirements review with external parties attending | "We sent the SRS to the customer" without documented acknowledgment or formal approval. Internal distribution also missing - test engineers never formally received the SRS they are supposed to use. |
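The bidirectional coverage requirement of SWE.1.BP6 can be checked mechanically. The sketch below assumes traceability has been exported as (system requirement, software requirement) ID pairs; the IDs, export format, and function names are illustrative, not taken from any particular tool.

```python
# Minimal sketch of a bidirectional coverage check for SWE.1.BP6.
# IDs and the link export format are invented for illustration.

def coverage_gaps(sys_reqs, sw_reqs, links):
    """Return SYS.2 requirements with no downward link and
    SWE.1 requirements with no upward link."""
    linked_sys = {s for s, _ in links}
    linked_sw = {w for _, w in links}
    return sorted(set(sys_reqs) - linked_sys), sorted(set(sw_reqs) - linked_sw)

sys_reqs = ["SYS-001", "SYS-002", "SYS-003"]
sw_reqs = ["SRS-0001", "SRS-0002", "SRS-0003"]
links = [("SYS-001", "SRS-0001"), ("SYS-002", "SRS-0002"), ("SYS-002", "SRS-0003")]

uncovered_sys, orphan_sw = coverage_gaps(sys_reqs, sw_reqs, links)
print(uncovered_sys)  # ['SYS-003'] - SYS requirement with no realizing SW requirement
print(orphan_sw)      # [] - every SW requirement traces upward
```

A non-empty result in either direction is exactly the "partial coverage" situation that pushes the BP6 rating from F down to P.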

SWE.2 BPs: Software Architectural Design

SWE.2 has 7 Base Practices. The architecture is the central artifact that connects requirements to implementation and enables integration testing. Weak SWE.2 execution is one of the leading causes of costly integration failures in ECU development.

| BP | Description | Key Evidence | Most Common Gap |
|---|---|---|---|
| SWE.2.BP1 | Develop software architectural design. Decompose software into components with defined responsibilities, interfaces, and dependencies. | Software Architecture Description (SAD): component diagram, component responsibilities, static and dynamic views | Architecture exists as a mental model or informal diagram; nothing detailed enough to use as a basis for SWE.3 detailed design |
| SWE.2.BP2 | Allocate software requirements to components. Every SWE.1 requirement is assigned to at least one architectural component responsible for its realization. | Requirements allocation matrix: SRS ID → Component ID. This is the downward link of SWE.1 traceability. | Architecture described but no formal allocation - "obviously the feature manager handles that" is not documented evidence |
| SWE.2.BP3 | Define interfaces of software components. Specify the API or interaction protocol for each inter-component interface - data types, parameter lists, preconditions, error behavior. | Interface specification section in the SAD, or a separate Interface Definition Document (IDD). For AUTOSAR: port interfaces in the system description. | Interfaces named but not specified - "ComponentA sends data to ComponentB" without defining what data, in what format, at what rate, and what happens on error |
| SWE.2.BP4 | Describe dynamic behavior. Document how components interact at runtime: sequence diagrams, state machines, task scheduling, timing constraints. | Sequence diagrams for key use cases; state machine diagrams for state-dependent components; task timing analysis | Only static structure shown; no runtime behavior described - makes SWE.5 integration test planning very difficult |
| SWE.2.BP5 | Evaluate software architectural design. Review the architecture against requirements (completeness), technical constraints (feasibility), and non-functional requirements (performance, memory, AUTOSAR compatibility). | Architecture review record: who attended, what was checked, what issues were found, disposition of each issue | Architecture presented to the team but no formal review record; or review done by the architect alone with no independent verification |
| SWE.2.BP6 | Ensure consistency and establish bidirectional traceability. Every SWE.2 component must trace to the SWE.1 requirements it realizes. Every SWE.1 requirement must trace to a component. | Updated traceability matrix extending SRS → Architecture. Coverage report confirming all requirements are allocated. | Same as SWE.1.BP6 - partial coverage, especially for non-functional requirements, which often have no explicit architecture component assigned |
| SWE.2.BP7 | Communicate the architectural design. The architecture must be formally reviewed and distributed to all parties who will use it: developers (SWE.3), testers (SWE.5), system engineers (SYS.4). | Distribution record; meeting minutes from the architecture review showing developer attendance; architecture baseline under CM | Architecture shared informally via chat/email; no formal distribution list; no acknowledgment from receiving parties |

SWE.3 BPs: Detailed Design & Unit Construction

SWE.3 is the bridge between architecture and code. Its BPs require both documentation of detailed design decisions AND the production of the actual source code. Both must exist: detailed design without code fails SWE.3, and so does code without a documented design.

| BP | Description | Key Evidence | Most Common Gap |
|---|---|---|---|
| SWE.3.BP1 | Develop detailed design for each software unit. Document the design of each unit at a level of detail sufficient to guide implementation - data structures, algorithms, function signatures, error handling logic. | Low-Level Design (LLD) or Detailed Design Document (DDD) per unit or per component. For Simulink-based development: model with sufficient block-level documentation. | Architecture documentation exists but the LLD is missing; developers code directly from the architecture without a detailed design step - common in agile teams and small projects |
| SWE.3.BP2 | Define interfaces of software units. For each unit, specify the exact API: function names, parameter types, return values, pre/postconditions. | Header files with full documentation (Doxygen-style); function specification tables in the LLD | Interfaces documented at the architectural level (SWE.2) but not at the unit level - the assessor needs unit-level specificity |
| SWE.3.BP3 | Describe dynamic behavior of each software unit. Document how units behave at runtime, including state machines, timer behavior, interrupt handling, and resource contention. | State machine diagrams per unit, flow charts for complex algorithms, timing diagrams for interrupt-driven code | Simple units (pure functions) often lack any runtime description; complex state-machine-based units rely entirely on code readability |
| SWE.3.BP4 | Evaluate detailed design. Review the detailed design for correctness, completeness, consistency with the architecture, and adherence to coding guidelines (before coding starts, not after). | LLD review records; static analysis results run on the design (if model-based); peer review records | Reviews done post-coding ("we reviewed the code, not the design") - the design never existed as a separate reviewable artifact |
| SWE.3.BP5 | Implement each software unit. Produce source code conforming to the detailed design and to the project's coding standards. | Source code in version control, traceable to the LLD. Coding standard compliance evidence (static analysis tool report - MISRA, AUTOSAR C++ guidelines, etc.) | Code exists but no coding standard was applied; or a coding standard was applied but static analysis results were not documented/archived |
| SWE.3.BP6 | Establish bidirectional traceability. Each SW unit traces to the architectural component it implements (SWE.2) and the detailed design that describes it (SWE.3.BP1). | Traceability matrix extending SWE.2 components → SWE.3 units. Code file headers referencing LLD sections. | Architecture components are granular but code modules are not mapped - "the BSW layer implements the SWE.2 layer" without component-to-unit granularity |

SWE.4 / SWE.5 / SWE.6: Testing Processes

SWE.4 - Software Unit Verification: Critical BPs

| BP | Description | Most Assessed Evidence |
|---|---|---|
| SWE.4.BP1 | Develop unit test cases and procedures based on the SWE.3 detailed design. Tests must be traceable to the unit design, not just to requirements. | Unit test plan; test case specifications with explicit SWE.3 traceability links |
| SWE.4.BP2 | Select and apply code coverage criteria appropriate to the safety level. For non-safety software: statement + branch coverage as a minimum. Under ISO 26262, branch coverage is highly recommended from ASIL B, and MC/DC at ASIL D. | Coverage measurement report from the testing framework (LDRA, VectorCAST, GoogleTest + gcov). The report must show per-unit coverage vs. target. |
| SWE.4.BP3 | Execute unit tests and record results. All test executions must be logged with pass/fail, the configuration used, and tester identity. | Automated test execution logs with timestamps, or manual test result sheets with tester sign-off |
| SWE.4.BP4 | Ensure consistency and establish bidirectional traceability. Test cases trace to detailed design elements. Every design element is covered by at least one test case. | SWE.4 traceability matrix: Test Case ID ↔ SWE.3 unit/function ID |
| SWE.4.BP5 | Ensure that all open defects are addressed before proceeding. Defects found in unit testing are logged, analyzed, fixed, and re-tested. | Defect tracking records showing all critical/high defects resolved, or formally deferred with justification, before SWE.5 entry |
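The "per-unit coverage vs. target" evidence for SWE.4.BP2 amounts to comparing each unit's measured coverage against the project's declared criteria. A minimal sketch, assuming the coverage tool's report has been exported to a simple per-unit structure; the unit names, metric names, and target percentages are invented examples, not values mandated by any standard:

```python
# Sketch: per-unit coverage check against declared targets (SWE.4.BP2).
# Targets and report contents are illustrative assumptions.

TARGETS = {"statement": 100.0, "branch": 100.0, "mcdc": 90.0}  # example project targets

def failing_units(report, targets=TARGETS):
    """Return (unit, metric, value, target) tuples where measured coverage misses the target."""
    misses = []
    for unit, metrics in report.items():
        for metric, target in targets.items():
            value = metrics.get(metric, 0.0)  # missing metric counts as 0% covered
            if value < target:
                misses.append((unit, metric, value, target))
    return misses

report = {
    "crc_calc":  {"statement": 100.0, "branch": 100.0, "mcdc": 95.0},
    "state_mgr": {"statement": 100.0, "branch": 92.3,  "mcdc": 81.0},
}
for unit, metric, value, target in failing_units(report):
    print(f"{unit}: {metric} {value}% < target {target}%")
```

This is the check an assessor performs by eye on the coverage report: not "is coverage measured?" but "does every unit meet the criteria the project itself selected?"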

SWE.5 - Software Integration Testing: Critical BPs

| BP | Description | Most Assessed Evidence |
|---|---|---|
| SWE.5.BP1 | Develop software integration strategy. Define the order of component integration and the rationale (bottom-up, feature-based, or risk-driven). | Integration strategy document, or an integration plan section with integration order and dependency graph |
| SWE.5.BP2 | Develop integration test cases traceable to SWE.2 interface specifications. Tests verify that components interact correctly at their defined interfaces. | Integration test cases referencing SWE.2 interface IDs. Tests for normal behavior, boundary conditions, and error injection at interfaces. |
| SWE.5.BP3 | Execute integration tests and record results. Regression tests must be executed after each integration increment. | Integration test execution log; regression test results per build/increment |
| SWE.5.BP4 | Ensure bidirectional traceability: integration test cases ↔ SWE.2 interfaces and ↔ SWE.1 requirements (via the SWE.2 allocation). | Integration test traceability matrix; test coverage report against architecture interfaces |

SWE.6 - Software Qualification Testing: Critical BPs

| BP | Description | Most Assessed Evidence |
|---|---|---|
| SWE.6.BP1 | Develop qualification test cases from SWE.1 requirements (closing the V-model loop). Every SWE.1 requirement must be covered by at least one qualification test case. | Qualification test specification with explicit SWE.1 requirement traceability. 100% requirements coverage required (or a formally approved deviation). |
| SWE.6.BP2 | Execute qualification tests and record results. Test results must be archived and associated with the specific software version tested. | Qualification test report: version under test, test date, pass/fail per test case, tester, environment configuration |
| SWE.6.BP3 | Ensure regression testing on change. Any change after a baseline requires regression test execution to demonstrate no unintended impacts. | Regression test execution records triggered by change requests; traceability from CR to affected test cases to re-executed results |
| SWE.6.BP4 | Ensure bidirectional traceability: qualification test cases ↔ SWE.1 requirements (closing the complete chain: OEM requirement → SYS req → SW req → test). | End-to-end traceability report. The assessor will spot-check 3–5 requirements and follow the full chain. |
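The spot-check described for SWE.6.BP4 is a walk along the trace links, stage by stage, until the chain either closes at a test case or breaks. A sketch of that walk, assuming links are exported as a lookup from (current ID, next stage) to the linked ID; the stage names and IDs are invented for illustration:

```python
# Sketch of the assessor's spot-check (SWE.6.BP4): follow one requirement
# through the full chain. Stage names and IDs are illustrative assumptions.

CHAIN = ["oem", "sys", "swe1", "swe2", "swe3", "test"]

def follow(links, start):
    """Walk a requirement through each stage; return the path and the first broken hop."""
    path, current = [start], start
    for stage in CHAIN[1:]:
        current = links.get((current, stage))
        if current is None:
            return path, stage  # chain broken entering this stage
        path.append(current)
    return path, None  # chain fully closed

links = {
    ("OEM-7", "sys"): "SYS-012", ("SYS-012", "swe1"): "SRS-0042",
    ("SRS-0042", "swe2"): "COMP-Motor", ("COMP-Motor", "swe3"): "unit_pwm",
    # no qualification test linked for this requirement -> chain breaks at "test"
}
print(follow(links, "OEM-7"))
```

A chain that breaks at any hop, for any sampled requirement, is exactly the finding an assessor records against the traceability BP of the stage where the walk stopped.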

SUP Process BPs: QA, CM, Change Management

SUP.1 - Quality Assurance: Most Assessed BPs

| BP | Description | What Assessors Look For |
|---|---|---|
| SUP.1.BP1 | Develop a QA plan. Document what will be audited, when, by whom, and what criteria will be used. | QA plan for the specific project - not a company-generic QA procedure; project-specific audit schedule with dates and responsible persons |
| SUP.1.BP2 | Assure product quality by verifying that work products comply with defined requirements and standards. | QA audit records showing which work products were reviewed against which criteria; nonconformance reports (NCRs) for findings |
| SUP.1.BP3 | Assure process quality by verifying that development activities conform to defined processes. | Process audit records (distinct from product audits); evidence that QA checked process adherence, not just document existence |
| SUP.1.BP4 | Ensure independence: QA activities must be performed by someone not directly responsible for the activities being audited. | QA auditor is not the project manager or the lead engineer of the process being audited. Org chart or declaration of independence. |
| SUP.1.BP5 | Escalate QA findings that cannot be resolved at project level to appropriate management. | Escalation records showing at least one NCR was escalated (or documented evidence that all NCRs were resolved at project level) |

SUP.8 - Configuration Management: Most Assessed BPs

| BP | Description | What Assessors Look For |
|---|---|---|
| SUP.8.BP1 | Develop a CM plan covering all work products under CM control, the CM tools to be used, and the baseline strategy. | Project CM plan (not just a company CM procedure) - specifying which documents, code, and tools are under CM for this project |
| SUP.8.BP2 | Identify configuration items (CIs). Every work product under CM must be uniquely identified. | CI list: SRS v1.2, Architecture v3.0, Software release 2.4.1 - all uniquely identified. Tool configurations (compiler flags, linker scripts) must also be CIs. |
| SUP.8.BP3 | Establish baselines at defined points. A baseline is a snapshot of a set of CIs at a defined milestone - it is frozen and can only be changed through a formal change request. | Baseline records: date, included CIs and versions, authorized by whom. At minimum: requirements baseline, architecture baseline, release baseline. |
| SUP.8.BP4 | Control changes to CIs. After baselining, any change must go through a formal process (typically the SUP.10 CR process) - ad-hoc edits are not permitted. | Evidence that changes to baselined WPs were initiated via CRs; no unauthorized versions floating around |
| SUP.8.BP5 | Report status of CIs. Teams and stakeholders must be able to determine the current state of all CIs at any time. | CM status reports, or access to a CM tool showing current version, last change date, and last change author for all CIs |

SUP.10 - Change Request Management: Most Assessed BPs

| BP | Description | What Assessors Look For |
|---|---|---|
| SUP.10.BP1 | Identify and record change requests. All changes to baselined work products must be initiated as formal CRs - one CR per change. | CR records in a tracking tool (JIRA, GitHub Issues, a dedicated CR system); CRs for requirements changes, architecture updates, and software fixes |
| SUP.10.BP2 | Analyze and assess change requests. Before approving a CR, assess its technical impact, cost, schedule impact, and risk. | Impact analysis field in the CR record; approval records showing who authorized the change based on the impact analysis |
| SUP.10.BP3 | Approve change requests before implementation. Only authorized personnel can approve CRs - not the person who submitted them. | Two-person approval: requester ≠ approver. Change board records if applicable. |
| SUP.10.BP4 | Track change requests to closure. CRs may be closed only when the change has been implemented, tested, and the relevant work products re-baselined. | CR closure records showing: implementation done, test result attached, WP version updated, baseline updated, CR status = Closed |
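The closure conditions listed for SUP.10.BP4 (and the independent approval from SUP.10.BP3) can be expressed as a simple pre-closure gate. A sketch under assumed field names (`implemented`, `test_result`, `baseline_updated`, `requester`, `approver`) - real CR tools use their own schemas:

```python
# Sketch of a pre-closure gate for change requests (SUP.10.BP3/BP4).
# All field names are illustrative assumptions, not a real tool's schema.

def may_close(cr):
    """A CR may be closed only with implementation, test evidence,
    re-baselining, and an approver distinct from the requester."""
    checks = {
        "implemented": cr.get("implemented", False),
        "test_attached": cr.get("test_result") == "pass",
        "rebaselined": cr.get("baseline_updated", False),
        "independent_approval": cr.get("approver") not in (None, cr.get("requester")),
    }
    return all(checks.values()), [name for name, ok in checks.items() if not ok]

cr = {"implemented": True, "test_result": "pass",
      "baseline_updated": False, "requester": "dev1", "approver": "dev1"}
print(may_close(cr))  # (False, ['rebaselined', 'independent_approval'])
```

A CR closed despite failing such a gate is precisely the "CR status = Closed without re-baselining" finding that assessors sample for.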

Generic Practice Catalog (CL2–CL3)

Generic Practices are process-independent. The exact same GP 2.1.2 ("plan the performance of the process") applies to SWE.1, SWE.2, SUP.1, MAN.3, and every other process in scope. For a project targeting CL2 across 11 processes, all 10 GPs at CL2 (6 from PA 2.1 + 4 from PA 2.2) must be evidenced for each process separately.

| PA | GP | Statement | Concrete Evidence Required |
|---|---|---|---|
| 2.1 | GP 2.1.1 | Identify performance objectives | Project plan or quality plan contains explicit objectives for this process (not a generic "we will do SWE.1") |
| 2.1 | GP 2.1.2 | Plan process performance | Scheduled activities in the project plan with dependencies, milestones, and assigned resources |
| 2.1 | GP 2.1.3 | Monitor and adjust | Status tracking artifacts (reports, meeting minutes) with actuals vs. plan and documented corrective actions for deviations |
| 2.1 | GP 2.1.4 | Define responsibilities | RACI matrix, role definitions, project charter. Each process activity has an identified responsible person. |
| 2.1 | GP 2.1.5 | Identify and allocate resources | Resource plan: people, tools, lab time. Training records for specialized tools or processes. |
| 2.1 | GP 2.1.6 | Manage interfaces | Communication plan or interface agreement between teams; meeting records showing cross-team coordination |
| 2.2 | GP 2.2.1 | Define WP requirements | Templates, content checklists, or review criteria that define what the work product must contain before it is acceptable |
| 2.2 | GP 2.2.2 | Define documentation and control requirements | CM plan specifying version control, naming conventions, storage location, and baseline rules for this WP |
| 2.2 | GP 2.2.3 | Identify and control work products | Work products under CM with unique IDs, stored in version control, history traceable |
| 2.2 | GP 2.2.4 | Review and adjust work products | Formal review records: version reviewed, attendees, issues found, disposition of each issue, updated WP |
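Because GP 2.2.4 is the most frequently failed GP, it is worth making the required review-record fields explicit and checkable. A minimal sketch, assuming records are kept as simple dictionaries; the field names mirror the evidence listed in the table above but are assumptions, not PAM-mandated terminology:

```python
# Sketch of a completeness check for review records (GP 2.2.4).
# Field names are illustrative assumptions, not PAM terminology.

REQUIRED_FIELDS = ["document_id", "version", "review_date", "reviewers", "issues"]

def record_gaps(record):
    """Return (missing required fields, issue IDs lacking a disposition)."""
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    # every issue must carry a disposition for the WP to count as "adjusted"
    open_issues = [i["id"] for i in record.get("issues", []) if not i.get("disposition")]
    return missing, open_issues

record = {
    "document_id": "SRS", "version": "1.2", "review_date": "2024-03-01",
    "reviewers": ["QA", "Architect"],
    "issues": [{"id": "I-1", "disposition": "fixed in v1.3"}, {"id": "I-2"}],
}
print(record_gaps(record))  # ([], ['I-2']) - I-2 has no documented disposition
```

A record that fails this check is exactly the "sign-off page without an issue log" pattern that assessors rate insufficient.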

⚠️ The "Generic" Trap

GP 2.1.2 says "plan the performance of the process." This means the SWE.1 process activities must be explicitly scheduled in the project plan - not just "software development" as one task. An assessor will ask: "Show me where SWE.1 requirements analysis is scheduled in your plan, what the milestones are, and who is responsible." A vague project schedule with only high-level phases will fail GP 2.1.2 for every process it covers.

Evidence Quality: Sufficient vs Insufficient

Understanding what counts as sufficient evidence is the most practically valuable skill for ASPICE preparation. The PAM defines indicators but does not give worked examples - that knowledge comes from assessment experience and OEM guidance. The table below summarizes the most common sufficiency judgments from real assessments.

| BP/GP | Insufficient Evidence | Sufficient Evidence |
|---|---|---|
| SWE.1.BP6 - Traceability | SRS with a "Source" column filled with free text ("from customer meeting"); no machine-traceable link | DOORS/Polarion bidirectional link set; or a spreadsheet with SRS ID, SYS-REQ ID, and a coverage report showing 100% of SYS-REQ items linked |
| SWE.1.BP5 - Verification criteria | "Verification method: Test" for every requirement, no acceptance criteria | Acceptance criteria specified per requirement, e.g., "Req-042: response time ≤ 20 ms measured at the ECU CAN interface under maximum bus load" |
| GP 2.2.4 - Review records | Email thread saying "looks good, approved"; signed cover page with no issue list | Structured review record: document ID, version, review date, reviewer names, checklist used, issue list with severity and status, updated document reference |
| SUP.8.BP3 - Baselines | "We have version control in Git" without defined baseline events or baseline manifests | CM plan defining baseline events (e.g., "SRS baseline at SWE.1 completion/gate review"); tagged Git releases with a manifest; baseline approval record |
| GP 2.1.3 for MAN.3 - Monitoring | Weekly status calls mentioned in the meeting schedule | Meeting minutes from status calls showing: planned vs. actual progress comparison, identified deviations, documented corrective actions, action owner, and closure date |
| SUP.1.BP4 - Independence | QA audits performed by the test lead who is also a developer on the project | QA audits performed by a person with no direct development responsibility; documented auditor role separate from the project team; or an external QA auditor |

Summary & Key Takeaways

✅ Key Takeaways

  • BPs are process-specific indicators for PA 1.1 (CL1). GPs are process-independent indicators for PA 2.1, 2.2, 3.1, 3.2 (CL2+).
  • SWE.1.BP6 (bidirectional traceability) is the most frequently failed BP in real assessments - partial coverage or informal links do not satisfy it.
  • GP 2.2.4 (review records) is the most frequently failed GP - review meetings without structured issue logs and disposition records do not satisfy it.
  • SWE.3 requires BOTH a detailed design document AND code - neither alone satisfies the process.
  • The complete traceability chain - OEM requirement → SYS.2 → SWE.1 → SWE.2 → SWE.3 → SWE.4 → SWE.5 → SWE.6 - must be machine-traceable (tool-supported links, not free-text annotations) for serious projects.
  • SUP.1 independence is non-negotiable - no amount of process knowledge compensates for an auditor who is on the development team.

What's Next

The next chapter, Assessment Method & Ratings, covers how assessors structure an assessment: from assessment planning through the opening meeting, interviews, evidence sampling, finding classification, and rating production. Understanding the assessor's method is the best preparation for being assessed.
