
Exercise Overview & Setup

This hands-on exercise simulates a real ASPICE joint assessment at CL2 target scope. You will play two roles alternately - supplier representative and ASPICE assessor - to experience the assessment from both sides. The exercise is structured around a fictional but realistic ECU project: an ABS (Anti-lock Braking System) software development at a Tier-1 automotive supplier.

📋 Exercise Objectives

  • Conduct a simulated SWE.1 and SWE.2 interview from both assessor and supplier perspectives
  • Review a set of provided work products and identify Findings, Weaknesses, and Strengths
  • Apply the N/P/L/F rating scale to a real evidence set and produce PA ratings
  • Classify observations using the correct finding taxonomy
  • Practice the evidence navigation skills that determine assessment outcomes

What You Need

To complete this exercise you need: the ASPICE PAM v3.1 document (available from intacs.eu), a printed or digital copy of the work product excerpts below, and the rating worksheet at the end of this chapter. Estimated time: 60–90 minutes for the full exercise; individual sections can be done independently.

Assessment Context

| Field | Value |
| --- | --- |
| Project Name | ABS-ECU SW Development v2.3 |
| Supplier | Fictional: AutoSoft GmbH, Munich |
| OEM Customer | Fictional: PremiumCar AG |
| Assessment Type | Joint Assessment (OEM-led) |
| PAM Version | ASPICE v3.1 |
| Target CL | CL2 for all HIS-scope processes |
| In-Scope Processes | SWE.1, SWE.2, SWE.3, SWE.4, SWE.5, SWE.6, SYS.2, SYS.3, SYS.4, SYS.5, SUP.1, SUP.8, SUP.10, MAN.3 |
| Project Phase at Assessment | Software Integration Testing complete; System Qualification ongoing |
| Team Size | 12 SW engineers, 2 QA engineers, 1 project manager |

Project Under Assessment: ABS ECU

Project Background

AutoSoft GmbH is developing the software for a new ABS ECU platform for PremiumCar AG's next-generation SUV. The software runs on an Infineon AURIX TC387 microcontroller and is built on an AUTOSAR Classic BSW stack (EB Tresos). The application software handles wheel speed sensor processing, slip ratio calculation, hydraulic control unit valve actuation, and CAN communication with the ESP and instrument cluster ECUs.
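The slip ratio calculation mentioned above is the core control input for an ABS. As background for the assessment scenario, a minimal sketch of the standard longitudinal slip formula (λ = (v_vehicle − ω·r) / v_vehicle) is shown below; the function name and parameter choices are illustrative, not taken from the fictional AutoSoft codebase.

```python
def slip_ratio(v_vehicle_mps: float, wheel_speed_radps: float, wheel_radius_m: float) -> float:
    """Longitudinal slip ratio: lambda = (v_vehicle - omega * r) / v_vehicle.

    A locked wheel (omega = 0) yields 1.0; a free-rolling wheel yields 0.0.
    """
    if v_vehicle_mps <= 0.0:
        return 0.0  # avoid division by zero at standstill
    return (v_vehicle_mps - wheel_speed_radps * wheel_radius_m) / v_vehicle_mps
```

In a real ECU this runs per wheel on fixed-point or saturated arithmetic; the float version here is only for illustrating the requirement content behind SRS items like SRS-042.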

The project has been running for 18 months. This is the first ASPICE assessment of this project. AutoSoft has an ISO 9001 QMS but no prior ASPICE assessments. The QA team has recently completed an intacs ASPICE Foundation training course.

Available Work Products (Excerpts Provided in This Exercise)

| ID | Work Product | Version | Status | Location |
| --- | --- | --- | --- | --- |
| SRS-001 | Software Requirements Specification | v2.1 | Approved | SharePoint / ABS_Project / Requirements |
| SAD-001 | Software Architectural Design | v1.8 | Under Review | SharePoint / ABS_Project / Architecture |
| SDD-001 | Software Detailed Design (selected components) | v1.3 | Draft | Confluence wiki page |
| UTR-001 | Unit Test Report (selected modules) | v1.0 | Final | Jenkins CI - last build report |
| TRM-001 | Traceability Matrix (SRS ↔ SYS.2 ↔ SWE.6) | v1.5 | Approved | Excel - SharePoint / ABS_Project / Traceability |
| RVW-SRS-001 | SRS Review Record | - | Closed | SharePoint / ABS_Project / Reviews |
| PJP-001 | Project Management Plan | v2.0 | Approved | SharePoint / ABS_Project / Management |
| QAP-001 | Quality Assurance Plan | v1.2 | Approved | SharePoint / ABS_Project / QA |
| QAR-001 to QAR-004 | QA Audit Reports (4 audits conducted) | - | Final | SharePoint / ABS_Project / QA |
| PSM-001 to PSM-008 | Project Status Meeting Minutes (8 meetings) | - | Final | SharePoint / ABS_Project / Management |

Interview Simulation: SWE.1 + SWE.2

Below is a realistic ASPICE assessment interview transcript for SWE.1 and SWE.2. The assessor is playing a Competent Assessor (CA) from an independent firm. The interviewees are the Requirements Engineer (RE) and Software Architect (SA) from AutoSoft. Read through the transcript, then answer the analysis questions at the end of each exchange.

SWE.1 Interview Segment

🎭 Interview Transcript - SWE.1

Assessor: "Walk me through how a new software requirement from PremiumCar ends up in your SRS. What's the first step?"

RE: "When we receive a customer specification update - typically the System Technical Specification from PremiumCar - we run a delta analysis against the current SRS. Our requirements engineers review the changed sections of the STS, write or update the corresponding SRS requirements, give them IDs in our numbering scheme, and submit for internal review."

Assessor: "How do you structure the link from an SRS requirement back to the STS? Can you show me an example?"

RE: [Navigates to SRS-001 in SharePoint] "Here - requirement SRS-042 is the wheel speed sensor signal filtering requirement. You can see in the 'Source' column it links to STS-PremCar-AWD-107. That's the customer STS item."

Assessor: "Good. And in the other direction - from STS-PremCar-AWD-107, can you find all the SRS requirements it decomposes into?"

RE: [Pauses, switches to Traceability Matrix TRM-001] "Yes, if I filter this matrix by STS-107, I get SRS-042, SRS-043, and SRS-044. Those are the three software requirements that decompose that system requirement."

Assessor: "I see SRS-043 has no entry in the 'Verification Method' column. Is that intentional?"

RE: "Hmm, that's missing. SRS-043 is a performance requirement - it should have been marked as 'Test.' I'll note that as an action."

Assessor: "Let me pick a few requirements randomly. SRS-089, SRS-156, SRS-201 - can you show me the upstream link for each?"

RE: [Navigates through TRM-001] "SRS-089 links to STS-PremCar-AWD-198. SRS-156... [pause] ...I'm not finding an entry for SRS-156 in the traceability matrix. Let me check the SRS directly." [Opens SRS] "It's there in section 4.2, but it doesn't have a source entry. This might be an internal requirement we added during architecture - those sometimes don't make it back into the matrix."

Assessor: "SRS-201?"

RE: "SRS-201 links to STS-PremCar-AWD-244."

❓ Analysis Questions - SWE.1 Segment (Answer Before Reading Debrief)

  1. What Base Practice(s) does the 'Verification Method' gap in SRS-043 affect? Is this a Finding or a Weakness? Why?
  2. SRS-156 has no upstream traceability link. What BP does this violate? How would you rate this - N, P, L, or F - for BP6, given that 2 out of 3 sampled requirements have upstream links?
  3. The interviewee demonstrated downstream traceability (STS → SRS) and upstream traceability (SRS → STS) using TRM-001. Which BPs does this satisfy? What GP (if any) does the traceability matrix serve?
  4. What follow-up questions would you ask next if you were the assessor?
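The upstream and downstream lookups the assessor walked through in TRM-001 amount to two simple queries over a mapping. The sketch below models the matrix as a hypothetical in-memory dictionary (the structure and helper names are illustrative, not the project's actual Excel layout), using the requirement IDs sampled in the transcript:

```python
# Hypothetical model of TRM-001: SRS id -> upstream STS id.
# SRS-156 is absent, mirroring the gap found in the interview.
trace_matrix = {
    "SRS-042": "STS-PremCar-AWD-107",
    "SRS-089": "STS-PremCar-AWD-198",
    "SRS-201": "STS-PremCar-AWD-244",
}
srs_requirements = ["SRS-042", "SRS-089", "SRS-156", "SRS-201"]

def find_orphans(srs_ids, matrix):
    """Requirements with no upstream link - candidates for a BP6 finding."""
    return [rid for rid in srs_ids if rid not in matrix or matrix[rid] is None]

def downstream(sts_id, matrix):
    """Reverse lookup: all SRS requirements decomposing a given STS item."""
    return sorted(rid for rid, src in matrix.items() if src == sts_id)
```

Both directions being answerable from the same artifact is exactly what "bidirectional" traceability means in practice; an orphan list like `find_orphans` returns is the kind of completeness check a supplier should run before the assessor does.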

SWE.2 Interview Segment

🎭 Interview Transcript - SWE.2

Assessor: "Please walk me through the architecture of the ABS software. What are the main components and how did you decide on this decomposition?"

SA: [Opens SAD-001] "We have four main software components: WheelSpeedProcessor, SlipController, HCUActuator, and CANInterface. The decomposition follows functional boundaries - sensor processing, control logic, actuation, and communication are each in their own component. This came from the function allocation in the SYS.3 system architecture document."

Assessor: "Show me the interface specification between SlipController and HCUActuator."

SA: [Navigates to SAD section 3.2] "Here - SlipController provides a sender port 'ValveCommands' of type SlipCtrl_ValveCmd_Type, and HCUActuator has a receiver port with the same data type. The data type definition is in the ARXML and the interface is specified in section 3.2.4."

Assessor: "And SAD-001 is version 1.8 - it's marked 'Under Review.' Is there a previous approved version?"

SA: "v1.7 is the last approved baseline. v1.8 adds the multi-wheel event handling that came from a late customer requirement change. The review is in progress - we expect it closed by end of next week."

Assessor: "Who is conducting the review of v1.8?"

SA: "It's a peer review - myself, the lead systems engineer, and the safety manager."

Assessor: "Do you have a review record for v1.7?"

SA: "Yes." [Opens RVW-SAD-001 in SharePoint - a spreadsheet with 23 review issues listed, dates, and disposition status]

Assessor: "Issue #14 - 'Interface description of CANInterface RX buffer size missing' - is marked 'Open.' Is this still open?"

SA: "That was resolved in v1.8 actually - the buffer size is now specified in section 4.1.3. The review record wasn't updated when we closed it."

Assessor: "From SWE.1, I saw requirement SRS-042 (wheel speed filtering). Which architectural component implements it?"

SA: "WheelSpeedProcessor." [Navigates to allocation table in SAD] "Here in Table 2 - SRS-042 is allocated to WheelSpeedProcessor, component WS_Filter."

Assessor: "SRS-156 - the one without upstream traceability we noted in SWE.1 - can you find it in the allocation table?"

SA: [Searches] "No, SRS-156 is not in the allocation table."

❓ Analysis Questions - SWE.2 Segment (Answer Before Reading Debrief)

  1. Review issue #14 is marked Open but was resolved in v1.8. What BP and GP does the failure to close this record in the review log affect? Is this a Finding or Weakness?
  2. The SAD is currently "Under Review" at v1.8 but the approved version is v1.7. What does this mean for PA 2.2 evidence for SWE.2? Can the assessor accept v1.8 as evidence?
  3. SRS-156 has no upstream trace (found in SWE.1) AND no allocation to an architectural component (found in SWE.2). What does this cross-process issue tell you about the systemic traceability health of the project?
  4. Rate SWE.2 PA 1.1 based on what you have seen so far. What additional information would you need before finalizing the rating?

Evidence Review Exercises

For each evidence excerpt below, determine: (1) which BP or GP it serves as evidence for, (2) how it would be rated (N/P/L/F), and (3) whether any gaps would constitute a Finding or Weakness. Work through each independently before checking the debrief section.

Exercise A - Project Status Meeting Minutes (PSM-005)

📄 Excerpt: Project Status Meeting - ABS ECU SW - Week 34

Date: 2024-08-22 | Attendees: PM (chair), RE Lead, SW Architect, QA Lead

Agenda item 2 - Schedule Status:

SWE.1 baselining completed 2024-08-15, 3 days late (planned 2024-08-12). Root cause: late receipt of updated STS v2.1 from PremiumCar on 2024-08-09. No impact on downstream SWE.2 milestone (planned 2024-09-05).

SWE.2 architecture review scheduled for 2024-09-01 - on track.

Agenda item 3 - Open Actions:

Action #AR-034 (open from Week 32): Update CANInterface port specification in SAD - Owner: SA, Due: 2024-09-05, Status: In Progress.

Action #AR-036 (new): Complete SRS-156 upstream traceability - Owner: RE Lead, Due: 2024-08-29, Status: Open.

Agenda item 4 - Risk Review:

Risk R-007 (STS stability from customer) elevated from Medium to High. Mitigation: formal change request process with PremiumCar, weekly STS change notification emails. PM to raise with PremiumCar PM on next joint call.

❓ Questions - PSM-005

  1. Which GP(s) does this meeting minute satisfy? Be specific about which AO within the GP is addressed.
  2. The meeting shows a schedule deviation with documented root cause and a no-impact analysis. Does this satisfy GP 2.1.3 (monitor performance and adjust plans)? What would be needed to make the evidence stronger?
  3. Action AR-036 addresses the SRS-156 traceability gap. Does the existence of this action in the meeting minutes change your rating of SWE.1 BP6? Why or why not?

Exercise B - QA Audit Report (QAR-003)

📄 Excerpt: QA Audit Report #3 - SWE.1 Compliance Audit

Audit Date: 2024-07-18 | Auditor: Maria Fischer, QA Engineer | Project: ABS ECU SW

Scope: SWE.1 process compliance - requirements specification and traceability

Findings:

NC-001: 8 requirements (SRS-156, SRS-201, SRS-245, SRS-246, SRS-289, SRS-301, SRS-302, SRS-303) have no upstream link to STS. Severity: Major. Due Date: 2024-08-30. Owner: RE Lead.

OBS-001: SRS section 5 (timing requirements) uses non-measurable language ("shall respond quickly"). Recommend adding quantitative acceptance criteria. Severity: Observation.

Auditor Independence: Maria Fischer is a QA Engineer in the ABS project QA team. She reports to the QA Manager, not the project manager. No development responsibilities.

❓ Questions - QAR-003

  1. NC-001 was raised on 2024-07-18 covering 8 requirements with missing upstream traceability, including SRS-156. The status meeting PSM-005 (2024-08-22) shows only SRS-156 with an open action. What does this suggest about NC-001 closure? How would you investigate further?
  2. Is Maria Fischer sufficiently independent for SUP.1? The ASPICE PAM requires that QA auditors be "independent of the project" - what does that mean specifically, and does this structure satisfy it?
  3. OBS-001 (non-measurable timing requirements) - which SWE.1 BP does this relate to? Is it a Finding or a Weakness from an ASPICE perspective?

Rating Consolidation Exercise

Based on all the evidence presented above (interview transcripts, evidence excerpts, and your analysis), complete the following rating worksheet for SWE.1 and SWE.2. Rate each BP independently, then derive the PA 1.1 rating, then determine if CL2 evidence would need to be collected (answer: yes for PA 2.1 and PA 2.2 - but note what evidence you already have).

SWE.1 BP Rating Worksheet

| BP | Description | Evidence Found | Your Rating | Rationale |
| --- | --- | --- | --- | --- |
| SWE.1.BP1 | Specify software requirements | SRS-001 v2.1, approved, content confirmed in interview | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP2 | Structure software requirements | SRS has unique IDs, structured sections | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP3 | Analyze software requirements | Review record RVW-SRS-001 exists; review conducted | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP4 | Analyze the impact on the operating environment | Not directly explored in interview - assessor did not check | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP5 | Develop verification criteria | SRS-043 missing verification method; other sampled requirements had methods | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP6 | Establish bidirectional traceability | 2/3 sampled requirements had upstream links; SRS-156 missing; NC-001 documents 8 missing links | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP7 | Ensure consistency | Not explored directly; partially evidenced by the traceability demonstration | [Rate: N/P/L/F] | [Your rationale] |
| SWE.1.BP8 | Communicate agreed software requirements | SRS v2.1 approval evidenced by signature block in SharePoint | [Rate: N/P/L/F] | [Your rationale] |

After completing the BP ratings, answer: What would you rate PA 1.1 for SWE.1 - and would you rate it F (allowing CL2 to be checked) or only L (capping at CL1)?
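The F-versus-L question above follows directly from the ISO 33020 aggregation rule: CL1 requires PA 1.1 at L or better, while CL2 additionally requires PA 1.1 at F and PA 2.1/PA 2.2 each at L or better. A minimal sketch of that rule (function and dictionary names are my own, not PAM terminology):

```python
# Ordinal value of each rating on the N/P/L/F scale.
ORDER = {"N": 0, "P": 1, "L": 2, "F": 3}

def achieved_level(pa_ratings: dict) -> int:
    """Achieved capability level (0, 1, or 2) per the ISO 33020 rule:
    CL1 needs PA 1.1 >= L; CL2 needs PA 1.1 = F and PA 2.1, PA 2.2 >= L."""
    if ORDER[pa_ratings.get("PA 1.1", "N")] < ORDER["L"]:
        return 0  # PA 1.1 at N or P: CL1 not achieved
    if (pa_ratings.get("PA 1.1") == "F"
            and ORDER[pa_ratings.get("PA 2.1", "N")] >= ORDER["L"]
            and ORDER[pa_ratings.get("PA 2.2", "N")] >= ORDER["L"]):
        return 2
    return 1  # CL1 achieved, but PA 1.1 = L (or a PA 2.x gap) caps at CL1
```

Note how an L on PA 1.1 still achieves CL1 but permanently blocks CL2, which is why the BP6 finding is so consequential for this assessment's CL2 target.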

CL2 Evidence Check: PA 2.1 and PA 2.2 for SWE.1

| GP | Evidence Available From Exercises Above | Sufficient for L or F? |
| --- | --- | --- |
| GP 2.1.1 - Process objectives defined | QAP-001 mentions SWE.1 objectives; project plan references SRS milestone M2 | [Y/N - justify] |
| GP 2.1.2 - Process performance planned | PJP-001 has SWE.1 milestones; schedule in project plan | [Y/N - justify] |
| GP 2.1.3 - Monitor and adjust | PSM-005 shows schedule tracking with actuals vs. plan; 8 meeting minutes total | [Y/N - justify] |
| GP 2.2.1 - WP requirements defined | QAP-001 references SRS template; review checklist for SRS exists | [Y/N - justify] |
| GP 2.2.2/2.2.3 - WP under CM control | SRS-001 in SharePoint with version history; v2.1 is current approved baseline | [Y/N - justify] |
| GP 2.2.4 - WP reviewed with records | RVW-SRS-001 exists with issue list and dispositions | [Y/N - justify] |

Finding Classification Drill

For each observation below, classify it as: Finding (significant gap - drives N or P rating), Weakness (minor gap - allows L rating), or Strength. Then identify the specific BP/GP/PA it affects and the CL consequence.

| # | Observation | Your Classification | BP/GP/PA Affected | CL Consequence |
| --- | --- | --- | --- | --- |
| 1 | SRS-156 has no upstream link to STS; this was raised as NC-001 in QA audit 5 weeks ago; action AR-036 is open but past due date | [F / W / S] | [Identify] | [What CL impact?] |
| 2 | SAD review record RVW-SAD-001 has issue #14 marked Open but the issue was resolved in v1.8; the record was not updated | [F / W / S] | [Identify] | [What CL impact?] |
| 3 | The project uses automated CI/CD with Jenkins; unit test execution is automated, coverage reports generated per build, results archived in Jenkins with full build history | [F / W / S] | [Identify] | [What CL impact?] |
| 4 | Project status meetings have 8 sets of minutes with actuals vs. planned comparison and corrective actions documented | [F / W / S] | [Identify] | [What CL impact?] |
| 5 | QA auditor Maria Fischer is in the project QA team but has no development responsibilities; she reports to QA Manager, not PM | [F / W / S] | [Identify] | [What CL impact?] |
| 6 | SRS-043 is missing the verification method entry; all other sampled requirements (11 out of 12 checked) have verification methods defined | [F / W / S] | [Identify] | [What CL impact?] |
| 7 | Software Detailed Design for the CANInterface component exists only as a Confluence wiki page with no version control, no formal ID, and no review record | [F / W / S] | [Identify] | [What CL impact?] |

Debrief & Model Answers

SWE.1 Interview - Model Answers

✅ Q1: SRS-043 missing verification method

BP affected: SWE.1.BP5 (Develop verification criteria) - the verification method is explicitly required by BP5. Classification: Weakness, not a Finding, because 11 of the 12 sampled requirements have verification methods defined. One gap in an otherwise well-implemented BP is a minor weakness: it keeps BP5 from a clean F but does not by itself drive the rating down to P. If further sampling revealed a higher proportion of missing methods (say 20% or more), reclassify it as a Finding.
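The percentage thresholds used in this kind of judgment come from the ISO 33020 achievement bands: N ≤ 15%, P > 15-50%, L > 50-85%, F > 85%. A minimal sketch of the band mapping (the function name is mine; note that sampled coverage is only one input to the assessor's overall achievement judgment, not a mechanical rating):

```python
def nplf(achievement_pct: float) -> str:
    """Map an achievement percentage to the ISO 33020 rating bands:
    N: 0-15%, P: >15-50%, L: >50-85%, F: >85-100%."""
    if achievement_pct <= 15:
        return "N"
    if achievement_pct <= 50:
        return "P"
    if achievement_pct <= 85:
        return "L"
    return "F"
```

Applied naively, 11/12 ≈ 92% would land in the F band; the model answer is more conservative because a confirmed gap caps the indicator below a clean F. The bands frame the judgment, they do not replace it.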

✅ Q2: SRS-156 missing upstream traceability

BP affected: SWE.1.BP6 (bidirectional traceability) - the most critical BP in SWE.1. With the NC-001 context, this is a Finding. NC-001 documents 8 requirements without upstream traceability, so this is a systemic issue, not an isolated case: 2 of the 3 requirements sampled in the interview were correct, but the QA audit found 8 more gaps. The scope of the gap across the full requirements set is material. PA 1.1 would be rated P (Partially achieved), not L, because BP6 - a core outcome driver - has a significant coverage gap. BP6 = P → PA 1.1 cannot be F → CL1 not achieved → the CL2 check does not proceed.

✅ Q3: BPs served by traceability matrix demonstration

Demonstrating both directions (STS → SRS and SRS → STS) in TRM-001 satisfies BP6 for the links that exist. The traceability matrix also serves GP 2.2.3 (work products identified, documented, and controlled): it is held in SharePoint with version control. The matrix does not directly serve a process-management GP, but its existence and quality as a work product are evidence for the PA 2.2 (Work Product Management) indicators.

SWE.2 Interview - Model Answers

✅ Q1: Review issue #14 marked Open but resolved

BP/GP affected: GP 2.2.4 (Review and adjust work products) - specifically, the requirement to document review findings and their disposition. If a review issue is resolved but the record is not updated, the review record does not accurately reflect the final disposition. Classification: Weakness - the review was done, the issue was resolved, the work product was updated (v1.8 added the buffer spec). The administrative failure to update the review log is a minor discipline gap, not a material process failure. The assessor would note it as a weakness and recommend updating the record during the assessment.

✅ Q2: SAD at v1.8 "Under Review" - what version counts?

For PA 2.2 evidence, the assessor can use the approved baseline (v1.7) as the primary evidence object. v1.8 is a work-in-progress and cannot be treated as a controlled baseline. The assessor would assess v1.7 for GP 2.2.3/2.2.4 compliance (it has an approved review record). The existence of v1.8 under review at the time of the assessment is a normal development state - it is not a finding unless v1.8 has been used to implement changes that are not traceable back through v1.7.

✅ Q3: SRS-156 no upstream trace AND no architectural allocation

This cross-process correlation is exactly what skilled assessors look for. SRS-156 appears in the SRS but has: no upstream link to a customer/system requirement, no architectural component allocation in the SWE.2 allocation table. This means the requirement is either an internal architecture constraint (which should be explicitly labeled as such) or a forgotten requirement that slipped through the requirements intake process. Either way, it is evidence of a systemic traceability process gap - not just a single record error. This strengthens the Finding classification for SWE.1 BP6 and adds a Finding for SWE.2 BP2 (requirements allocation).

Finding Classification - Model Answers

| # | Observation | Classification | BP/GP/PA | CL Consequence |
| --- | --- | --- | --- | --- |
| 1 | SRS-156 upstream traceability gap - systemic (8 items, NC-001 open, past due) | Finding | SWE.1.BP6 → PA 1.1 = P | PA 1.1 = P → CL1 not achieved; SWE.1 remains at CL0 |
| 2 | Review record issue #14 not updated to Closed after resolution in v1.8 | Weakness | GP 2.2.4 → PA 2.2 | PA 2.2 rated L (not F); CL2 achievable with L |
| 3 | Automated CI with Jenkins, test execution automated, coverage per build, history archived | Strength | SWE.4 BP2, BP3 → PA 1.1 | Positive contribution - supports F rating for relevant BPs |
| 4 | 8 status meeting minutes with actuals vs. planned and documented corrective actions | Strength | GP 2.1.3 → PA 2.1 | Strong evidence for PA 2.1 F rating across assessed processes |
| 5 | QA auditor in project QA team, no dev responsibilities, reports to QA Manager | Weakness (borderline) | SUP.1 (independence) | ASPICE requires independence from "the project" - being inside the project team is a weakness. Full independence means an auditor from a separate organizational unit. Some assessors accept this structure; others flag it as a Finding. Typical outcome: Weakness with a recommendation to move to org-unit-level independence. |
| 6 | SRS-043 missing verification method; 11/12 correctly have methods | Weakness | SWE.1.BP5 → PA 1.1 | BP5 indicator rated L - minor gap; does not by itself prevent CL1 |
| 7 | SDD for CANInterface: Confluence wiki, no version control, no ID, no review record | Finding | SWE.3 PA 1.1 + GP 2.2.2, 2.2.3, 2.2.4 | SWE.3 PA 1.1 affected if detailed design is incomplete for some components; PA 2.2 indicators all fail for this WP - CL2 for SWE.3 not achievable while this WP is uncontrolled |

✅ Key Learning: The Cross-Process Pattern

Notice how SRS-156 appeared as a clue in SWE.1 (no upstream trace), reappeared as a gap in SWE.2 (no architectural allocation), and was already flagged by QA in SUP.1 (NC-001). A skilled assessor connects these dots across processes to identify systemic issues rather than treating each observation in isolation. When you prepare for an assessment, think about your evidence across the full traceability chain - a gap that appears small in one process is often the visible tip of a larger systemic issue.
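The dot-connecting described above can be thought of as a join across independent evidence sources: a requirement that fails in two or more of them is a systemic-issue candidate rather than a record error. The sketch below models this with hypothetical excerpts of the three evidence sets from the exercise (data values and function names are illustrative only):

```python
# Hypothetical excerpts of the three evidence sources discussed above.
srs_ids = {"SRS-042", "SRS-043", "SRS-044", "SRS-156"}
upstream = {"SRS-042": "STS-107", "SRS-043": "STS-107", "SRS-044": "STS-107"}
allocation = {"SRS-042": "WheelSpeedProcessor", "SRS-043": "SlipController",
              "SRS-044": "SlipController"}
qa_nonconformities = {"SRS-156"}  # items named in NC-001

def systemic_candidates(srs, up, alloc, qa):
    """Requirements that fail in two or more independent evidence sources."""
    hits = {}
    for rid in srs:
        signals = []
        if rid not in up:
            signals.append("no upstream trace (SWE.1)")
        if rid not in alloc:
            signals.append("no architectural allocation (SWE.2)")
        if rid in qa:
            signals.append("open QA nonconformity (SUP.1)")
        if len(signals) >= 2:
            hits[rid] = signals
    return hits
```

Running this over the exercise data surfaces SRS-156 with all three signals - precisely the cross-process pattern an assessor pieces together from interviews and records.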

What's Next

Continue to Hands-On: Work Product Templates where you will complete and review actual ASPICE-compliant work product templates for SWE.1–SWE.3, applying the BP and WPC knowledge from this chapter to create evidence that would satisfy an assessor.
