
What Are Assessment Findings?

An assessment finding is a documented observation made by the assessment team when evidence does not fully satisfy one or more Base Practice (BP) or Process Attribute (PA) indicators. Findings are not binary pass/fail - they are ratings (N/P/L/F) applied to specific indicators, with the finding statement explaining the gap.

An ASPICE assessment documents two categories of findings:

  • Strength - something the assessed organization does particularly well. Documented but does not change ratings.
  • Weakness - a gap between evidence and indicator requirement that prevents a BP or PA from being rated Largely or Fully achieved. Weaknesses are the primary driver of CL ratings.

This lesson presents the findings most frequently observed in automotive ECU supplier assessments worldwide - based on published PISA data, intacs community reports, and typical patterns from actual supplier audits. Understanding these patterns before an assessment is the most efficient preparation possible.

📋 Learning Objectives

  • Name the top 3 findings for SWE.1, SWE.2, SWE.4, SUP.1, and MAN.3
  • Explain why specific findings recur across organizations regardless of size or maturity
  • Map findings to their root causes and identify actionable resolutions
  • Predict which findings will drop a process from CL2 to CL1 vs which allow CL2 despite a weakness

Top SWE Process Findings

SWE.1 - Software Requirements Analysis

| Finding | BP | Root Cause | Resolution |
| --- | --- | --- | --- |
| Requirements not uniquely identified - IDs missing, duplicated, or changed between versions without tracking. | SWE.1.BP2 | Requirements managed in Word/Excel without tool discipline; authors add rows without assigning IDs. | Mandate unique IDs in your SRS template. Use DOORS, Polarion, Jama, or similar, and automate ID generation. Never allow a requirement without an ID into a baselined document. |
| No bidirectional traceability to system requirements - SWE.1 requirements exist but have no explicit link to SYS.2 requirements. | SWE.1.BP6 | The SW team writes requirements from reading the OEM TS directly, without referencing the intermediate SYS.2 document or creating traceability links. | Require every SWE.1 requirement to carry a "derived from" field referencing at least one SYS.2 requirement ID. Enforce this in the requirements tool and make traceability generation part of the SWE.1 review criteria. |
| Requirements not verifiable / acceptance criteria missing - requirements say "shall be fast" or "shall support multiple modes" without measurable criteria. | SWE.1.BP3, SWE.1.BP5 | Requirements written by system architects without test engineer input; no formal review step checks verifiability. | Add a "verification method" and an "acceptance criterion" column to every requirement and run a dedicated verifiability review. Flag any requirement with an empty acceptance criterion as incomplete - it cannot be baselined. |
| Requirements not prioritized - all requirements treated equally; no priority or release assignment exists. | SWE.1.BP4 (v3.1) / SWE.1.BP7 | Project teams assume all requirements must be delivered in the first release; release planning is done verbally or in Jira stories without linking back to the requirements. | Add a priority field (e.g., Must/Should/Could or release version) to every SWE.1 requirement. Ensure the requirements tool export shows which requirements are targeted for which release milestone. |
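The SWE.1 resolutions above are all checkable mechanically. As a minimal sketch, assuming the requirements tool can export items as dicts (the field names `id`, `derived_from`, `acceptance_criterion`, and `priority` are illustrative, not a real tool schema), a pre-baseline hygiene check might look like this:

```python
# Hypothetical SWE.1 hygiene check over an RM-tool export. Field names
# are assumptions for illustration, not a real DOORS/Polarion/Jama schema.

def check_swe1_requirements(requirements):
    """Return (req_id, weakness) pairs for the recurring SWE.1 findings."""
    findings = []
    seen_ids = set()
    for req in requirements:
        rid = req.get("id")
        if not rid:
            findings.append(("<missing>", "requirement without a unique ID"))
            continue
        if rid in seen_ids:
            findings.append((rid, "duplicate requirement ID"))
        seen_ids.add(rid)
        if not req.get("derived_from"):          # upstream trace (BP6)
            findings.append((rid, "no trace to a SYS.2 requirement"))
        if not req.get("acceptance_criterion"):  # verifiability (BP3/BP5)
            findings.append((rid, "acceptance criterion missing"))
        if not req.get("priority"):              # prioritization
            findings.append((rid, "no priority / release assignment"))
    return findings

reqs = [
    {"id": "SW-001", "derived_from": "SYS-010",
     "acceptance_criterion": "response time <= 10 ms", "priority": "Must"},
    {"id": "SW-002", "derived_from": "",
     "acceptance_criterion": "", "priority": "Should"},
]
for rid, weakness in check_swe1_requirements(reqs):
    print(f"{rid}: {weakness}")
# -> SW-002: no trace to a SYS.2 requirement
# -> SW-002: acceptance criterion missing
```

Running a check like this as part of the baseline gate turns the review criterion from a manual checklist item into an enforced rule.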

SWE.2 - Software Architectural Design

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| Architecture documented only as a static block diagram - no dynamic view (no sequence diagrams, no timing diagrams, no data flow across component boundaries). | Teams document "what" components exist but not "how" they interact at runtime; dynamic behavior is implied by the code, not the design. | Require at least one sequence diagram per key use case (normal operation, error recovery, startup/shutdown). Use PlantUML or equivalent in the architecture document. Assessors specifically ask for dynamic views. |
| Requirements not allocated to components - the SWE.2 architecture design exists, but there is no explicit mapping showing which component realizes which SWE.1 requirement. | Architecture designed organically; requirement allocation assumed to be "obvious" from the design. | Create a requirement-to-component allocation table in the architecture document. This is the key evidence for SWE.2.BP3 and also serves as the starting point for SWE.5 integration test planning. |
| Interfaces between SW components not formally specified - only internal interfaces exist; communication across component boundaries is undocumented. | AUTOSAR teams assume that RTE-generated code documents the interfaces. However, AUTOSAR port definitions are not equivalent to an interface specification document unless explicitly referenced. | Document all cross-component interfaces: data elements, direction, type, update rate, error semantics. For AUTOSAR projects, reference the ARXML signal definitions. For non-AUTOSAR, write interface header files or API documents. |
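The requirement-to-component allocation table lends itself to the same kind of automated gap check. A minimal sketch, assuming the table is exported as a mapping of component name to realized SWE.1 requirement IDs (all names and IDs below are invented for illustration):

```python
# Hypothetical check of a SWE.2.BP3 allocation table: every requirement must
# land in at least one component, and no component should realize nothing.

def allocation_gaps(requirement_ids, allocation):
    """Return requirements with no component and components with no requirement."""
    allocated = {rid for rids in allocation.values() for rid in rids}
    unallocated = sorted(set(requirement_ids) - allocated)
    idle_components = sorted(c for c, rids in allocation.items() if not rids)
    return unallocated, idle_components

requirements = ["SW-001", "SW-002", "SW-003"]
allocation = {
    "ComSignalRouter": ["SW-001"],
    "DiagnosticManager": ["SW-003"],
    "WatchdogHandler": [],          # component with no allocated requirement
}
print(allocation_gaps(requirements, allocation))
# -> (['SW-002'], ['WatchdogHandler'])
```

Both outputs are findings in waiting: an unallocated requirement has no realizing component, and an idle component has no design rationale traceable to a requirement.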

SWE.4 - Software Unit Verification

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| No unit test strategy document - tests exist, but no documented strategy defines coverage targets, test techniques, pass/fail criteria, and environment. | Teams jump directly to writing unit tests without first defining the testing strategy; the strategy lives only in tribal knowledge. | Write a SW Unit Verification Strategy (often part of the broader Software Test Plan). Define: coverage metric (statement, branch, MC/DC for ASIL), tools (GoogleTest, VectorCAST, LDRA), exclusion criteria (generated code, third-party), and the review process for test cases. |
| Test cases not traceable to detailed design or requirements - tests exist as files in a repo, but there is no link between a test case and the SW unit design element it verifies. | Test engineers write tests based on reading the code, not the design. The link is implied by proximity (test file in the same folder), not explicit traceability. | Add a "tests" or "verifies" attribute to each test case or test suite, referencing the detailed design element or requirement ID. Many unit test frameworks (e.g., GoogleTest, LDRA) support test case metadata that can carry these links. |
| Coverage measured but targets not met and not addressed - coverage reports show 60% branch coverage against an 80% target, but there is no record of why the gap exists or what was done about it. | Coverage targets are set in the test plan but never enforced; test execution reports are generated but not reviewed by anyone with authority to act on gaps. | Include coverage report review as a mandatory gate in the development process. For every coverage gap: either add a test case, or document and justify the exclusion (generated code, defensive code, hardware-dependent branch). Both are acceptable - silence is not. |
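The "both are acceptable - silence is not" rule can be encoded directly in the coverage gate. A sketch under assumed data shapes (the unit names, percentages, and justification text below are invented, not a real coverage-tool API):

```python
# Hypothetical coverage gate: a unit below target fails the gate unless it
# carries a documented justification. Data shapes are illustrative.

def coverage_gate(coverage, target_pct, justifications):
    """Return units that fail the gate: below target AND without justification."""
    return sorted(
        unit for unit, pct in coverage.items()
        if pct < target_pct and unit not in justifications
    )

branch_coverage = {"can_driver": 92.0, "nvm_manager": 61.5, "rte_stub": 30.0}
justified = {"rte_stub": "generated code, excluded per test plan sect. 4.2"}

print(coverage_gate(branch_coverage, 80.0, justified))
# -> ['nvm_manager']  (rte_stub is below target but justified)
```

A gate like this produces exactly the record the assessor asks for: either the gap was closed, or its acceptance was an explicit, attributable decision.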

SWE.5 - Software Integration and Integration Testing

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| No integration order defined - software components are integrated ad hoc; no integration sequence rationale is documented. | Integration treated as "just build the system" rather than a process requiring a planned strategy. | Define and document the integration order in the integration plan. Reference the SWE.2 architecture to justify the order (bottom-up, top-down, or backbone integration) and show the rationale for the chosen sequence. |
| Integration test cases not traceable to architecture interfaces - integration tests exist but reference requirements (SWE.1) rather than architecture interfaces (SWE.2). | Integration and qualification testing scripts are maintained as a single test suite without distinguishing which tests verify interfaces (SWE.5) vs requirements (SWE.6). | Tag or separate test cases by type: integration (verifying component interfaces per SWE.2) vs qualification (verifying requirements per SWE.1). Both can share the same test framework but must be distinguishable in reporting. |

Top SUP & MAN Findings

SUP.1 - Quality Assurance

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| QA performed by the project team, not independently - the SWE.1 author also performs the SWE.1 process audit; no independence exists. | Small teams assign QA tasks to whoever has capacity; the independence requirement is underestimated. | QA must be organizationally independent of the project team. Assign a QA engineer from a different team/department or a central quality function, and document the independence in the QA plan. The QA auditor cannot report to the project manager. |
| QA audit records not project-specific - generic QA checklists exist but are not applied to the specific project being assessed; no audit records reference the assessed project's artifacts. | The QA plan is a template; nobody fills in the actual audit results for the specific project instance. | QA audit records must reference specific documents (document name, version, date reviewed) and findings for the specific project. Generic "process compliant: yes" entries are not sufficient evidence. |

SUP.8 - Configuration Management

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| Not all work products under CM control - source code is in Git, but the requirements spec, architecture document, and test cases live in SharePoint or email attachments without versioning. | CM seen as a developer responsibility (source code only); process documents are managed informally. | All baselined work products (SRS, architecture doc, test plans, test reports, QA records) must be in a CM-controlled location with version history. This does not require Git for Word documents - SharePoint with versioning enabled, Confluence with page history, or any system that provides an auditable version trail is acceptable. |
| Baselines not established at required milestones - no baseline at SWE.1 completion, at design review, or before release; the version is "whatever is in the repo." | Baseline management seen as overhead; no process defines when baselines are required. | Define baseline points in the development process (e.g., after each phase review, before any integration activity begins, before any delivery). Record baseline names, content lists, and dates. Git tags or release branches serve this purpose for code; a CM baseline record document serves it for collections of work products. |
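The CM baseline record mentioned above need not be elaborate. A sketch, assuming each work product can be identified by a document name and version string (the field names and document names are illustrative):

```python
# Hypothetical CM baseline record for a collection of work products.
# The checksum gives the record a tamper-evident fingerprint.

import datetime
import hashlib
import json

def baseline_record(name, work_products):
    """work_products: document name -> version. Returns an auditable record."""
    fingerprint = hashlib.sha256(
        json.dumps(work_products, sort_keys=True).encode()
    ).hexdigest()
    return {
        "baseline": name,
        "date": datetime.date.today().isoformat(),
        "items": dict(work_products),
        "checksum": fingerprint,   # detects later, unrecorded changes
    }

record = baseline_record("BL-SWE1-complete", {
    "SRS": "v2.3", "SW Architecture": "v1.1", "SW Test Plan": "v1.0",
})
print(record["baseline"], len(record["items"]), "items")
# -> BL-SWE1-complete 3 items
```

Stored in the CM system itself, such a record answers the assessor's question "what exactly was in the baseline at SWE.1 completion?" with a dated, verifiable content list.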

MAN.3 - Project Management

| Finding | Root Cause | Resolution |
| --- | --- | --- |
| Project plan exists but is not used for tracking - a detailed Gantt chart was created at kickoff, but status meetings do not reference it, actuals are not recorded, and the plan is never updated. | Project planning treated as a documentation exercise for the customer kick-off presentation, not as a management tool. | Status meeting minutes must reference the plan explicitly: "planned vs actual for milestone X, deviation: Y days, corrective action: Z." The plan must show update timestamps. Assessors will ask for the plan and the last 3 status meeting minutes and check for alignment. |
| No evidence of corrective actions when deviations occur - the plan shows a milestone was missed, but no record of analysis or corrective action exists. | Project managers deal with deviations informally; verbal agreements to adjust scope or timeline are not documented. | Every significant plan deviation must generate a documented corrective action: what was done, what was decided, who approved it. This does not need to be elaborate - a dated entry in the status log is sufficient if it records the decision and its rationale. |
| No resource plan - a schedule exists (milestones) but no effort estimates, role allocations, or resource tracking. | Resource management handled in a separate HR system not linked to the project plan. | The project plan must include at minimum: role/person assignments to planned tasks, effort estimates, and actual effort tracking. This can be as simple as a spreadsheet - but it must exist and be updated. |

Systemic Finding Patterns

Across thousands of ASPICE assessments, certain finding patterns recur regardless of organization size, country, or technology domain. These are structural failures rooted in process design, not execution failures rooted in individual mistakes.

Pattern 1: The Documentation Lag

Work is done correctly - design decisions made, tests written, architecture evolved - but the documentation lags the actual work by weeks or months. At assessment time, the documents either don't exist for the assessed phase, or they describe a different state of the system than what was actually built. This creates findings at every SWE process simultaneously. Root cause: documentation is not integrated into the workflow, it is an afterthought. Resolution: treat work products as Definition-of-Done criteria for each process phase - you cannot close a SWE.1 sprint without an updated, reviewed SRS.

Pattern 2: The Tool Silo

Requirements are in DOORS. Architecture is in Visio. Tests are in LDRA. Defects are in Jira. Project plans are in MS Project. Each tool is used well in isolation, but there are no links between them - traceability exists within tools but not across them. An assessor asking "show me the test cases that verify requirement R-123" cannot be answered by any single tool view. Resolution: invest in cross-tool traceability. Modern ALM platforms (PTC Integrity, Polarion ALM, IBM ELM) support cross-artifact linking. Alternatively, maintain an explicit traceability matrix in a controlled document that bridges tool boundaries.
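Where a cross-linking ALM platform is not available, the bridging traceability matrix can be built by merging the per-tool exports. A sketch under assumed export shapes (the requirement, design, and test IDs below are invented; real exports from DOORS, the design tool, and the test tool would need adapting):

```python
# Hypothetical cross-tool bridge: merge requirement->design links from the
# RM tool with design->test links from the test tool into one matrix.

def traceability_matrix(req_to_design, design_to_test):
    """Answer "which test cases verify requirement X?" across tool boundaries."""
    return {
        req: sorted({t for d in designs for t in design_to_test.get(d, [])})
        for req, designs in req_to_design.items()
    }

req_to_design = {"R-123": ["D-7", "D-9"], "R-124": ["D-9"]}
design_to_test = {"D-7": ["TC-041"], "D-9": ["TC-042", "TC-043"]}

matrix = traceability_matrix(req_to_design, design_to_test)
print(matrix["R-123"])
# -> ['TC-041', 'TC-042', 'TC-043']
```

Regenerating and baselining this matrix at each milestone turns the assessor's "show me the test cases that verify requirement R-123" from an unanswerable question into a single lookup.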

Pattern 3: The Generic Practices Gap

Organizations achieve CL1 (processes are performed, BPs have evidence) but fail CL2 because the Generic Practices (PA 2.1 - Performance Management; PA 2.2 - Work Product Management) are not in place. CL1 evidence shows the work was done. CL2 evidence shows the work was planned, tracked, and the work products were managed (versioned, reviewed, baselined). The most common CL2 failure: no evidence that SWE processes were monitored against a plan. Assessors ask: "Where in your status reports does it show whether SWE.1 was on schedule?" If the answer is "it is tracked in Jira stories," show the Jira burndown tied to the SWE.1 milestone. If that link does not exist, CL2 PA 2.1 will be rated P (Partially).

How Findings Impact CL Ratings

Not all findings have equal impact. A weakness in a minor BP may leave the process at CL1 with a "Largely achieved" rating. A weakness in a core BP or a PA 2.x attribute can prevent CL1 or CL2 achievement entirely.

| Finding Severity | Rating Impact | Example |
| --- | --- | --- |
| BP rated N (Not achieved) | PA 1.1 is rated N or P, leaving the process at CL0; CL1 is not achieved. | SWE.1 has no Software Requirements Specification at all - BP1 rated N; the process cannot achieve CL1. |
| BP rated P (Partially achieved) | PA 1.1 is rated P or L depending on how many BPs are affected; the process may still achieve CL1 if PA 1.1 aggregates to L (>50-85%). | SWE.1.BP6 (traceability) rated P - only 40% of requirements have upward traces. Combined with other BPs fully achieved, PA 1.1 may still reach L. |
| GP 2.1 rated P (performance not managed) | PA 2.1 rated P; CL2 is not achieved for this process regardless of how well the BPs are performed. | SWE.1 work is planned in the project plan but progress is never tracked - no status monitoring evidence. PA 2.1 rated P → CL1 achieved, CL2 not achieved. |
| GP 2.2 rated P (work products not managed) | PA 2.2 rated P; CL2 not achieved. | The SRS exists but has no version history and was never formally reviewed. PA 2.2 rated P → CL1 only. |
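The N/P/L/F thresholds behind these impacts follow the ISO/IEC 33020 rating scale used by Automotive SPICE. A small helper makes the bands concrete; note that aggregating individual BP ratings into an achievement percentage involves assessor judgment, which this sketch deliberately simplifies away:

```python
# N/P/L/F bands per the ISO/IEC 33020 rating scale (percent achievement of
# a process attribute). Aggregation into the percentage is assessor judgment.

def rate_attribute(achievement_pct):
    """Map a process-attribute achievement percentage to the N/P/L/F scale."""
    if achievement_pct <= 15:
        return "N"   # Not achieved: 0-15%
    if achievement_pct <= 50:
        return "P"   # Partially achieved: >15-50%
    if achievement_pct <= 85:
        return "L"   # Largely achieved: >50-85%
    return "F"       # Fully achieved: >85-100%

# CL1 requires PA 1.1 at L or F; CL2 additionally requires PA 1.1 at F
# and PA 2.1/PA 2.2 at L or F.
print(rate_attribute(40), rate_attribute(70), rate_attribute(90))
# -> P L F
```

This is why the SWE.1.BP6 example above can still reach CL1: one P-rated BP may leave the aggregated PA 1.1 achievement above 50%, landing it in the L band.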

🔍 The CL2 Blocker Most Teams Miss

Many teams focus all their preparation energy on Base Practices (CL1 evidence) and arrive at the assessment with well-documented processes, only to fail CL2 because PA 2.1 and PA 2.2 evidence is absent. PA 2.1 requires that each process has defined objectives, plans with schedule and resources, and monitored progress with corrective actions. PA 2.2 requires that work products are identified, version-controlled, reviewed, and baselined. These are process management requirements, not engineering requirements - and they are equally mandatory for CL2.

Resolution Strategies

Prioritizing Findings for Action

After an assessment, the organization receives a findings list that may contain 20–60 individual observations across all processes. Not all can be addressed simultaneously. Prioritize by:

  1. CL impact - fix the findings that prevent CL1 achievement first (N-rated BPs), then findings blocking CL2 (PA 2.1/2.2 gaps), then findings that improve quality but don't change ratings
  2. Cross-cutting impact - a traceability process gap affects SWE.1, SWE.2, SWE.5, SWE.6, SYS.2, SYS.5 simultaneously. One structural fix resolves multiple findings.
  3. OEM re-assessment timeline - if a re-assessment is scheduled in 6 months, focus on findings that can realistically be resolved with evidence by then. A new process introduced 2 weeks before the re-assessment will have no credible evidence.

The Improvement Action Plan (IAP)

Most OEMs require a formal Improvement Action Plan (IAP) within 4–6 weeks of the assessment closing. The IAP must address every weakness finding with: a description of the planned corrective action, the owner, the target date, and measurable completion criteria. Assessors at the re-assessment will check IAP execution - if an action was planned but not completed, or completed without evidence, it will be re-rated.

Structure your IAP with the same process-level grouping as the assessment findings. One root cause may address multiple findings - call that out explicitly to demonstrate systemic thinking rather than point solutions.

Summary & Key Takeaways

✅ Key Takeaways

  • The #1 recurring finding across all SWE processes: broken traceability - requirements with no upstream or downstream links. Fix your traceability discipline before anything else.
  • SWE.1 top findings: missing unique requirement IDs, no upstream trace to SYS.2, requirements not verifiable (no acceptance criteria).
  • SWE.4 top findings: no unit test strategy document, test cases not traced to detailed design, coverage gaps not addressed.
  • SUP.1 top finding: lack of QA independence - the project team cannot audit itself.
  • MAN.3 top finding: project plan exists but is not used for tracking - no actuals, no deviation records, no corrective actions.
  • The most common CL2 failure is not a missing BP - it is missing PA 2.1 (performance management) and PA 2.2 (work product management) evidence.
  • Prioritize findings by CL impact, then cross-cutting impact, then re-assessment timeline. One structural fix (e.g., a traceability tool rollout) can close multiple findings simultaneously.

What's Next

Continue to Evidence Collection Strategies to learn how to systematically gather, organize, and present the evidence package that maximizes your assessment ratings - including which document types have highest assessor credibility and how to handle gaps in retrospective evidence.
