
ASPICE Process Assessment

31 questions on the ASPICE process assessment model, SWE/SYS processes, capability levels, and assessment preparation.

1
What is Automotive SPICE (ASPICE)?
Answer
Automotive SPICE is a process assessment model specifically adapted for the automotive industry, based on ISO/IEC 15504 (SPICE), which has since been superseded by the ISO/IEC 330xx series. It evaluates the capability of software and system development processes at supplier organizations. ASPICE defines a Process Reference Model (PRM) with process descriptions and a Process Assessment Model (PAM) with capability indicators. It is the de facto standard for process maturity in automotive OEM-supplier relationships.
2
What are the ASPICE capability levels?
Answer
Level 0 - Incomplete: process not implemented or fails to achieve its purpose. Level 1 - Performed: process achieves its outcomes but not managed. Level 2 - Managed: process is planned, monitored, and adjusted; work products are managed. Level 3 - Established: process uses a defined standard process tailored for the project. Level 4 - Predictable: process operates with quantitative management. Level 5 - Innovating: process continuously improved through innovation. Most OEM requirements target Level 2 or 3.
3
What are the key process categories in ASPICE?
Answer
Primary Life Cycle Processes: SYS (System Engineering - SYS.1-SYS.5), SWE (Software Engineering - SWE.1-SWE.6), ACQ (Acquisition - ACQ.4). Supporting Processes: SUP.1 (Quality Assurance), SUP.8 (Configuration Management), SUP.9 (Problem Resolution), SUP.10 (Change Request Management). Management Processes: MAN.3 (Project Management). Organizational Processes: not typically assessed in projects.
4
What is the ASPICE V-model and how does it map to processes?
Answer
The V-model maps ASPICE processes to development phases. Left side (top-down): SYS.2 (System Requirements Analysis) → SYS.3 (System Architectural Design) → SWE.1/SWE.2 (Software Requirements/Architecture) → SWE.3 (Software Detailed Design and Unit Construction). Right side (bottom-up): SWE.4 (Software Unit Verification) → SWE.5 (Software Integration and Integration Test) → SWE.6 (Software Qualification Test) → SYS.4 (System Integration and Integration Test) → SYS.5 (System Qualification Test). Each level on the left is verified by its counterpart on the right, with bidirectional traceability between them.
5
What is SWE.1 (Software Requirements Analysis)?
Answer
SWE.1 covers the specification of software requirements from system requirements. Key base practices: specify software requirements (functional, non-functional, interface), analyze requirements for correctness/consistency/feasibility, evaluate impact on operating environment, establish traceability to system requirements, ensure consistency with system architecture, communicate and agree on requirements. Key work products: Software Requirements Specification (SRS), traceability matrix.
6
What is SWE.2 (Software Architectural Design)?
Answer
SWE.2 transforms software requirements into an architecture. Base practices: develop software architecture describing static structure and dynamic behavior, define interfaces between components, assess architecture against requirements, verify resource consumption estimates, establish traceability between requirements and architecture elements. Work products: Software Architecture Document, interface specifications. Architecture must demonstrate how requirements are allocated to components.
7
What is SWE.3 (Software Detailed Design and Unit Construction)?
Answer
SWE.3 covers the detailed design of software units and their implementation. Base practices: develop a detailed design for each software unit, define interfaces at the unit level, implement the units (coding), establish traceability from detailed design to architecture. Key outputs: detailed design documents, source code. Coding guidelines (e.g., MISRA C) must be applied. The design must be sufficient to implement and verify each unit independently.
8
What is SWE.4 (Software Unit Verification)?
Answer
SWE.4 verifies that software units meet their detailed design and requirements. Base practices: develop unit verification strategy, develop test cases from detailed design and requirements, verify units (execute tests), establish bidirectional traceability between test cases and design elements, document test results. Coverage metrics (statement, branch, MC/DC) must be achieved. Methods: unit testing, static analysis, code review.
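The MC/DC criterion mentioned above requires showing that each condition independently affects the decision outcome. A minimal Python sketch of that idea, for the illustrative decision `a and b` (the function and vectors are invented for the example):

```python
from itertools import product

def decision(a, b):
    # Example decision with two conditions; illustrative only.
    return a and b

# All input vectors for the two boolean conditions.
vectors = list(product([False, True], repeat=2))

def independence_pairs(cond_index):
    """Pairs of vectors that differ only in one condition and flip the
    decision outcome -- the evidence MC/DC requires per condition."""
    pairs = []
    for v1 in vectors:
        v2 = list(v1)
        v2[cond_index] = not v2[cond_index]
        v2 = tuple(v2)
        if decision(*v1) != decision(*v2):
            pairs.append((v1, v2))
    return pairs

# Condition a is shown independent by (True, True) vs (False, True);
# condition b by (True, True) vs (True, False).
```

For `a and b`, the three vectors (T,T), (F,T), (T,F) therefore suffice for MC/DC — in general n+1 tests for n conditions, versus 2^n for full combinatorial coverage.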
9
What is SWE.5 (Software Integration and Integration Test)?
Answer
SWE.5 covers integrating software units into larger components and verifying their interaction. Base practices: develop integration strategy (build plan), integrate software units according to strategy, develop integration test cases from architecture, execute integration tests, establish traceability. Verification focuses on: interface correctness, data flow, timing behavior, and resource usage. Both bottom-up and incremental integration are valid approaches.
10
What is SWE.6 (Software Qualification Test)?
Answer
SWE.6 validates that the integrated software meets its requirements in the target environment. Base practices: develop qualification test strategy, derive test cases from software requirements (SWE.1), execute tests on target hardware (or representative environment), evaluate results against pass/fail criteria, establish traceability to requirements. Qualification testing confirms the software is ready for system integration. It is the right-side counterpart to SWE.1.
11
What is the difference between SYS.4 and SYS.5?
Answer
SYS.4 (System Integration and Integration Test) verifies that system components work correctly together - focuses on interfaces, data exchange, and interaction between hardware, software, and mechanical components. SYS.5 (System Qualification Test) validates that the complete integrated system meets the system requirements in the target environment or vehicle. SYS.4 is about component interaction; SYS.5 is about overall system behavior.
12
What is bidirectional traceability and why is it important?
Answer
Bidirectional traceability means every requirement links forward to design/implementation/test elements AND every design/test element links back to requirements. Forward trace: ensures all requirements are implemented and tested (completeness). Backward trace: ensures all code and tests have a justification (no unnecessary work). Required at every V-model level. Tools: IBM DOORS, Polarion, Jama. Traceability gaps are common ASPICE findings.
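The forward and backward checks described above can be sketched in a few lines of Python. The dict layout and IDs here are invented for illustration — real projects export this data from a tool such as DOORS or Polarion:

```python
# Requirements under trace.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Test case -> requirements it claims to verify (backward links).
test_links = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-2"},
    "TC-3": set(),  # orphan: no backward trace
}

covered = set().union(*test_links.values())
forward_gaps = requirements - covered                              # requirements never tested
orphan_tests = [tc for tc, reqs in test_links.items() if not reqs] # tests with no justification
dangling = covered - requirements                                  # links to unknown IDs

print("Untested requirements:", sorted(forward_gaps))  # ['REQ-3']
print("Orphan tests:", orphan_tests)                   # ['TC-3']
```

The same check applies at every V-model level (requirement↔architecture, architecture↔test, etc.); assessors look for exactly these two gap types.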
13
What is the difference between consistency and traceability?
Answer
Traceability: explicit links between work products across V-model levels (requirement → design → code → test). Consistency: agreement between work products at the same level or across levels (the architecture document matches what's in the code; the requirement text aligns with the test case description). Both are required by ASPICE. Traceability ensures completeness; consistency ensures correctness. Tools and reviews address both.
14
What is SUP.8 (Configuration Management)?
Answer
SUP.8 manages all work products and their versions. Base practices: establish a CM strategy, identify configuration items (code, documents, tools, test artifacts), manage changes with baselines, track configuration status, verify configuration integrity. Key outcomes: all work products are version-controlled, baselines are established at milestones, changes are traceable, and the correct versions can be reproduced. Tools: Git, SVN, Dimensions.
15
What is SUP.9 (Problem Resolution Management)?
Answer
SUP.9 ensures that problems (bugs, defects, issues) discovered during development are systematically identified, analyzed, resolved, and tracked to closure. Base practices: identify and record problems, analyze root cause, implement resolution, track to closure, analyze trends. Problem reports must link to affected work products. Metrics: open/closed ratio, resolution time, recurrence rate. Tools: Jira, Bugzilla, Redmine.
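The metrics named above (open/closed ratio, resolution time) are easy to derive from exported problem records. This sketch assumes a simple dict layout, not an actual Jira or Bugzilla schema:

```python
from datetime import date

# Illustrative problem records; field names are assumptions.
problems = [
    {"id": "PR-1", "opened": date(2024, 1, 5), "closed": date(2024, 1, 12)},
    {"id": "PR-2", "opened": date(2024, 1, 8), "closed": None},
    {"id": "PR-3", "opened": date(2024, 1, 9), "closed": date(2024, 1, 30)},
]

open_count = sum(1 for p in problems if p["closed"] is None)
closed = [p for p in problems if p["closed"] is not None]
avg_resolution_days = sum((p["closed"] - p["opened"]).days for p in closed) / len(closed)

print(f"open/closed: {open_count}/{len(closed)}")         # open/closed: 1/2
print(f"avg resolution: {avg_resolution_days:.1f} days")  # avg resolution: 14.0 days
```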
16
What is SUP.10 (Change Request Management)?
Answer
SUP.10 manages change requests throughout the project. Base practices: identify and record change requests, analyze impact (on requirements, design, test, schedule, cost), approve/reject through defined authority, implement approved changes, track to closure. Change requests must be linked to affected work products and trigger appropriate re-verification. Change control boards (CCBs) typically approve changes.
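Impact analysis over trace links amounts to reachability in the trace graph: everything downstream of the changed artifact is a candidate for re-verification. A hedged sketch with invented artifact IDs:

```python
# Hypothetical trace graph: artifact -> downstream artifacts it feeds.
links = {
    "REQ-2": ["ARCH-B"],
    "ARCH-B": ["UNIT-3", "UNIT-4"],
    "UNIT-3": ["TC-7"],
    "UNIT-4": ["TC-8"],
}

def impact_set(artifact):
    """All downstream artifacts reachable from the changed one."""
    seen, stack = set(), [artifact]
    while stack:
        for nxt in links.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(impact_set("REQ-2")))  # ['ARCH-B', 'TC-7', 'TC-8', 'UNIT-3', 'UNIT-4']
```

In practice this is why complete bidirectional traceability matters: without the links, the impact set cannot be computed and re-verification scope becomes guesswork.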
17
What is MAN.3 (Project Management)?
Answer
MAN.3 covers project planning, monitoring, and control. Base practices: define project scope and objectives, estimate resources and schedule, define work breakdown structure, establish project plan, monitor progress against plan, take corrective actions, manage risks, and manage stakeholders. Key work products: project plan, schedule, risk register, status reports. ASPICE evaluates whether the project is systematically managed.
18
What is SUP.1 (Quality Assurance)?
Answer
SUP.1 ensures that work products and processes comply with plans and standards. Base practices: establish QA strategy, verify work product compliance (reviews, audits), verify process compliance (process audits), identify and resolve non-conformances, and escalate unresolved issues. QA must be independent from development. Work products: QA plan, audit reports, non-conformance records. QA covers both product quality and process adherence.
19
How does an ASPICE assessment work?
Answer
Assessment process: 1) Preparation - define scope (processes, projects), select assessment team. 2) Assessment execution - interview project participants, review work products, observe practices. 3) Rating - assessors rate each practice indicator and derive capability levels. 4) Reporting - assessment report with findings, ratings, and improvement recommendations. A lead assessor must be intacs-certified. Assessments typically take 3-5 days for a team of 2-4 assessors.
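Rating in step 3 uses the four-point N/P/L/F scale defined in ISO/IEC 33020. A small sketch of the threshold mapping (percentages per the standard's rating scale):

```python
def nplf(achievement_pct):
    """Map a practice/process-attribute achievement percentage
    to the N/P/L/F rating scale of ISO/IEC 33020."""
    if achievement_pct <= 15:
        return "N"  # Not achieved (0-15%)
    if achievement_pct <= 50:
        return "P"  # Partially achieved (>15-50%)
    if achievement_pct <= 85:
        return "L"  # Largely achieved (>50-85%)
    return "F"      # Fully achieved (>85-100%)
```

A capability level counts as achieved when its process attributes are rated at least Largely (L) and all lower levels' attributes are rated Fully (F).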
20
What are base practices and generic practices?
Answer
Base Practices (BP) are specific activities for each process (e.g., SWE.1.BP1: Specify software requirements). They determine Level 1 achievement. Generic Practices (GP) apply to all processes and determine Levels 2-5: GP 2.1 (Performance Management - planning), GP 2.2 (Work Product Management - version control), etc. Level 2 requires demonstrating management of the process; Level 3 requires a defined standard process.
21
What is the difference between Level 1 and Level 2?
Answer
Level 1 (Performed): the process achieves its purpose - outcomes are produced (requirements exist, tests are run). BUT the process may be ad-hoc, undocumented, dependent on individuals. Level 2 (Managed): the process is planned, monitored, and adjusted. Work products are reviewed, versioned, and managed. Resources are allocated. The process is repeatable even if key people change. Level 2 adds process discipline on top of technical execution.
22
What is the difference between Level 2 and Level 3?
Answer
Level 2 (Managed): each project may implement the process differently - what matters is that it's planned and managed. Level 3 (Established): a standard process is defined at the organizational level, and projects tailor it for their specific needs. Level 3 requires: documented standard process, tailoring guidelines, process measurements collected, and process deployment. Level 3 demonstrates organizational learning across projects.
23
What are common ASPICE assessment findings?
Answer
Common gaps: insufficient bidirectional traceability (especially architecture↔test), missing or incomplete interface specifications, inadequate verification strategy (no coverage targets), poor change impact analysis, missing non-functional requirements, inconsistency between documents and actual implementation, insufficient review evidence, and weak configuration management (missing baselines). Most organizations struggle with traceability completeness.
24
How do you prepare for an ASPICE assessment?
Answer
Preparation steps: 1) Gap analysis - self-assess current practices against ASPICE requirements. 2) Process improvement - close identified gaps. 3) Work product review - ensure all required documents exist and are consistent. 4) Traceability check - verify complete bidirectional traceability. 5) Interview preparation - project members understand their process and can explain decisions. 6) Tool readiness - demonstrate tools support the process. 7) Mock assessment - practice with internal assessors.
25
What tools support ASPICE compliance?
Answer
Requirements management: IBM DOORS, Polarion, Jama Connect. Architecture: Enterprise Architect, Rhapsody. Configuration management: Git, SVN, ClearCase. Test management: VectorCAST, Tessy, LDRA, HP ALM. Project management: Jira, MS Project, Planview. Traceability: built-in tool features or dedicated (Yakindu Traceability). Document management: Confluence, SharePoint. The toolchain must demonstrate integrated traceability across all V-model levels.
26
What is the relationship between ASPICE and ISO 26262?
Answer
Both are required for automotive development but focus on different aspects. ASPICE evaluates process quality (how well you develop). ISO 26262 evaluates functional safety (what you develop for safety). They complement each other: ASPICE's SWE processes align with ISO 26262 Part 6 software development. Many work products serve both (requirements documents, test reports). A mature ASPICE Level 2/3 organization is better positioned for ISO 26262 compliance.
27
What is intacs and how does assessor certification work?
Answer
intacs (International Assessor Certification Scheme) certifies ASPICE assessors. Levels: Provisional Assessor (completed training), Competent Assessor (passed exam + supervised assessments), and Principal Assessor (extensive experience). Only certified assessors can lead formal assessments. Training covers: process reference model, assessment methodology, rating framework, and interview techniques. Certification requires ongoing maintenance through continued assessment activity.
28
How does ASPICE handle agile development?
Answer
ASPICE v3.1 and v4.0 are methodology-neutral - they define what outcomes to achieve, not how. Agile teams must still produce: requirements (as user stories with acceptance criteria), architecture (even if evolving), traceability (story → code → test), verification evidence (automated tests), and configuration management. The challenge is maintaining documentation and traceability in rapid iterations. Many teams use automated traceability tools to bridge agile and ASPICE.
29
What changed in ASPICE v4.0?
Answer
ASPICE v4.0 (published 2023) introduced: new process groups for hardware engineering (HWE) and machine learning engineering (MLE), updated system and software engineering processes, improved alignment with ISO 26262 and ISO 21434, refined generic practice indicators, and better support for agile and model-based development. The process reference model was restructured, and the assessment methodology was updated for more consistent ratings across assessors. Cybersecurity-specific processes remain in the separate Automotive SPICE for Cybersecurity PAM.
30
What is a Process Improvement Plan (PIP)?
Answer
A PIP is an action plan derived from ASPICE assessment findings. It includes: identified gaps (with severity/priority), improvement actions for each gap, responsible persons, target dates, and success criteria. PIPs are typically agreed between OEM and supplier after an assessment. Progress is reviewed periodically. Common approach: address highest-impact gaps first, implement process changes, train teams, then verify through internal assessments before the next formal assessment.
31
How does ACQ.4 (Supplier Monitoring) work?
Answer
ACQ.4 evaluates how the acquiring organization (OEM or Tier-1) monitors its suppliers. Base practices: agree on monitoring activities and milestones, conduct technical and management reviews, monitor supplier progress against plan, jointly resolve issues, and manage changes with the supplier. Work products: supplier progress reports, review meeting minutes. OEMs typically require periodic ASPICE assessments of suppliers as part of ACQ.4.