
Assessment Types: Internal, External, Joint

ASPICE assessments come in three forms, and understanding the differences matters for how you prepare, what authority the results carry, and who participates.

📋 Learning Objectives

  • Distinguish the three assessment types and when each is appropriate
  • Walk through the 6-phase assessment lifecycle step by step
  • Explain what happens in each type of interview and how evidence is sampled
  • Classify observations correctly (Weakness vs. Finding) and understand what each means for the CL rating
  • Describe the rating aggregation algorithm from raw indicator ratings to a final CL score
  • Build a realistic supplier preparation strategy for a CL2 target assessment
| Type | Who Conducts It | Purpose | Output Authority |
|---|---|---|---|
| Internal Assessment (Self-Assessment) | The supplier organization itself, using trained internal assessors or process engineers | Process improvement, readiness check before an external assessment, gap analysis | No external authority - for internal use only. OEMs do not accept self-assessment results as evidence of CL achievement. |
| External Assessment (3rd Party) | intacs-certified assessors from an independent consulting organization, not affiliated with the supplier or OEM | Independent validation of capability level for compliance reporting to OEM customers | Highest authority - accepted by all major OEMs as independent evidence. The assessment report is signed by a Competent or Principal Assessor. |
| Joint Assessment (OEM-Led) | OEM quality engineers and/or OEM-engaged assessors, conducted at the supplier site; the supplier participates but the OEM controls the assessment | OEM supplier qualification, new program authorization, performance monitoring | OEM-specific authority - results feed into the OEM's supplier scorecard and often trigger contractual obligations (e.g., an improvement plan required within 90 days if CL < target). |

Conformant vs. Non-Conformant Assessments

ISO 33002 defines requirements for a "conformant assessment." A conformant assessment must: use a recognized PAM (like ASPICE v3.1 or v4.0), be conducted by a qualified lead assessor, follow a defined assessment process, and produce documented assessment outputs. Only conformant assessments produce results that can be formally reported against the ASPICE standard. Many "ASPICE gap analyses" conducted by consulting firms are non-conformant by design - they provide useful input but cannot be reported as an ASPICE assessment result.

Assessment Lifecycle: 6 Phases

A formal ASPICE assessment follows a structured lifecycle. Whether it is an external or joint assessment, the phases are largely standardized across the industry. Below is the full lifecycle with the key activities, decisions, and outputs at each phase.

| Phase | Name | Key Activities | Outputs |
|---|---|---|---|
| 1 | Assessment Planning | Define assessment scope (processes, projects, organizational units). Agree on PAM version and target Capability Levels. Select the assessment team. Schedule interviews and site visits. Produce the Assessment Input (AI) document. | Assessment Input (AI) document signed by both parties; assessment schedule; NDA/confidentiality agreement |
| 2 | Document Review / Pre-Assessment | Supplier sends key work products to the assessor team in advance. Assessors review for completeness and prepare targeted interview questions. Identify obvious gaps before the on-site visit. | Assessor preparation notes; preliminary observation log; document review checklist |
| 3 | Opening Meeting | Assessor leads a kickoff meeting at the supplier site. Confirm scope, schedule, and interview participants. Establish communication rules. Supplier presents a project overview. | Confirmed assessment plan; interview schedule with named participants; scope confirmation |
| 4 | Data Collection (Interviews + Evidence Review) | Structured interviews with process owners, engineers, and project managers. Evidence review of work products (document spot-checks, tool walk-throughs). Multiple rounds per process. | Interview notes; evidence log; preliminary indicator ratings per BP/GP; observation records |
| 5 | Data Consolidation & Rating | Assessment team meets to consolidate findings, reconcile conflicting evidence, apply the N/P/L/F scale to each PA, and produce the final CL ratings. Classify all observations as Strengths, Weaknesses, or Findings. | Rating spreadsheet; finding list with evidence references; draft assessment report |
| 6 | Closing Meeting & Report | Assessor presents results to supplier management. Each finding is explained and its evidence discussed. Ratings are confirmed and the supplier signs an acknowledgment. Final assessment report produced within the agreed timeframe (typically 10–15 business days). | Assessment Output (AO) document; signed acknowledgment; improvement recommendations (if applicable) |

Typical Assessment Duration

Duration depends on scope. A standard HIS-scope assessment (11 processes, 1 project) runs 3–4 days on-site for the interviews and evidence review, preceded by 0.5–1 day of document pre-review. Larger scopes (multiple projects, extended process lists) can run 5–7 days. Virtual assessments (post-COVID norm) run similarly but require advance sharing of all work products via secure file transfer.

Interview Structure & Evidence Sampling

The interview is the core data collection mechanism in an ASPICE assessment. Understanding how assessors structure interviews lets you prepare the right people and have the right evidence ready.

Interview Structure

A typical process interview (e.g., for SWE.1 + SWE.2) lasts 2–3 hours and involves:

  • Interviewees: Requirements engineer (for SWE.1), software architect (for SWE.2), optionally the project manager or QA lead
  • Format: The assessor asks open questions about the process; the interviewee explains and demonstrates with evidence. The assessor is not working through a checklist - they are building a mental model of how the process actually works.
  • Evidence presentation: Interviewees should have all work products accessible (ideally on screen and navigable by the assessor). The assessor will ask to see specific documents, jump to specific sections, and follow traceability links.

Typical Interview Questions by Process

| Process | Typical Assessor Question | What They Are Really Testing |
|---|---|---|
| SWE.1 | "Walk me through how a new requirement from the customer ends up in your SRS." | Is there a defined intake process? Is traceability established from the point of receipt? |
| SWE.1 | "Show me the traceability from requirement SRS-042 back to the customer specification." | BP6: Is bidirectional traceability actually implemented, not just claimed? |
| SWE.2 | "How did you decide which components are in the architecture? Show me the allocation." | BP2: Are requirements formally allocated, or is allocation informal expertise? |
| SWE.2 | "Show me the interface specification between the Feature Manager and the Communication Handler." | BP3: Are interfaces specified at the required level of detail? |
| SWE.4 | "What coverage target do you use for unit tests and how do you measure it?" | BP2: Is a defined coverage criterion applied, and is it measured with tool support? |
| SUP.1 | "Who performed the last QA audit of SWE.1 and what was their relationship to the project?" | BP4: Independence - is the auditor truly independent? |
| MAN.3 | "Show me the last project status meeting where you compared actuals to plan." | GP 2.1.3: Is monitoring actually performed and documented? |

Evidence Sampling Method

Assessors do not review every document end-to-end. They use targeted sampling:

  • Random sample: Pick 3–5 requirements and follow their traceability chain top-to-bottom. If these are broken, the assessor infers systematic weakness across the population.
  • Boundary sampling: Look for requirements that are close to the edge of completeness - partially written, recently added, or recently changed. These are most likely to have traceability gaps.
  • Risk-driven selection: For safety-critical features or recently changed components, deeper review. Well-established stable features get less attention.
  • Cross-reference check: Take a test case from SWE.6 and trace it backwards to find its SWE.1 source requirement. If the trace cannot be followed, it is a finding regardless of what the forward trace shows.
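
The cross-reference check is the easiest of these to rehearse before an assessor performs it by hand. Below is a minimal sketch of a backward trace walk in Python, assuming trace links have been exported as plain ID-to-ID mappings; the IDs, the two dictionaries, and the backward_trace helper are all illustrative, not the API of any real ALM tool.

```python
import random

# Trace links exported as {child_id: parent_id}; None marks a broken link.
# All mappings and IDs here are invented for illustration.
test_to_req = {"TC-101": "SRS-042", "TC-102": "SRS-017", "TC-103": None}
req_to_sysreq = {"SRS-042": "SYS-009", "SRS-017": None}

def backward_trace(test_id: str) -> list[str]:
    """Follow a SWE.6 test case back through its SWE.1 requirement to SYS.2."""
    chain = [test_id]
    req = test_to_req.get(test_id)
    if req is None:
        return chain  # broken: the test case has no requirement link
    chain.append(req)
    sysreq = req_to_sysreq.get(req)
    if sysreq is not None:
        chain.append(sysreq)
    return chain

# Sample a few test cases at random, the way an assessor would.
for tc in random.sample(sorted(test_to_req), k=2):
    chain = backward_trace(tc)
    status = "OK" if len(chain) == 3 else "BROKEN"
    print(f"{tc}: {' -> '.join(chain)} [{status}]")
```

A complete chain (test case → requirement → system requirement) prints OK; anything shorter is exactly the kind of broken trace the assessor records as an observation.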

⚠️ Never Present Evidence You Cannot Navigate

A common mistake in assessments: presenting a 400-page SRS to the assessor but being unable to quickly demonstrate a specific trace link when asked. Assessors treat inability to demonstrate evidence as evidence of absence. Before an assessment, every interviewee should practice navigating to key work product elements in under 30 seconds - traceability links, review records, version history, coverage reports. Fumbling with tool access during the interview creates a negative impression that can influence borderline ratings.

Finding Classification: Weakness, Finding, Strength

During data consolidation, assessors classify every observation into one of three categories. The classification directly drives the N/P/L/F rating and the final CL outcome.

| Classification | Definition | Impact on Rating | Examples |
|---|---|---|---|
| Strength | An observed practice that exceeds requirements - the process is performed exceptionally well in a specific area | Contributes positively to F (Fully Achieved) ratings; supports higher-end scores in borderline cases | "The team uses automated bidirectional traceability with real-time coverage reporting in Polarion, with daily coverage dashboards reviewed in sprint reviews" |
| Weakness | An area where the process does not fully meet requirements, but the gap is minor - it does not prevent the purpose from being achieved | Can still allow L (Largely Achieved) - the process passes but with noted improvement areas | "Traceability links exist for all requirements but the coverage report is manually generated quarterly, creating a risk of stale data"; "Review records include issues but closure status is not consistently documented" |
| Finding | An observed gap significant enough to prevent the process from achieving its purpose or a Process Attribute outcome | Drives an N or P rating for the relevant indicator; a PA 1.1 finding prevents CL1 achievement, a PA 2.x finding prevents CL2 achievement | "No bidirectional traceability exists from SWE.1 requirements to SYS.2 system requirements - the SRS references source chapters but has no machine-traceable links, and ~40% of SYS.2 items have no corresponding SWE.1 requirement" |

Finding Severity Classification (OEM-Specific)

Some OEMs use a severity scale for findings. BMW Group, for example, classifies findings as Critical, Major, or Minor. A Critical finding in any in-scope process can prevent supplier qualification regardless of overall CL ratings. Common critical finding triggers:

  • No evidence of any review for a safety-relevant work product (SRS, architecture, test plan)
  • SUP.1 independence requirement violated (QA performed by the project lead)
  • No configuration management for released software (code not under version control)
  • Traceability completely absent between test cases and requirements in SWE.6

Rating Production: From Indicators to CL

The final CL ratings are produced through a defined algorithm. Understanding this algorithm helps you predict your assessment outcome based on your evidence inventory.

Step-by-Step Rating Algorithm

  1. Rate each indicator (BP or GP) against the evidence collected during data collection. Each indicator gets an implicit N/P/L/F judgment.
  2. Aggregate to Process Attribute level: PA 1.1 is rated N/P/L/F based on the aggregate achievement of all BPs for the process. If most BPs are F but one critical BP (e.g., SWE.1.BP6 traceability) is P, PA 1.1 may be rated L (not F). The assessor's judgment governs the aggregation - there is no simple arithmetic average.
  3. Apply CL achievement rules: Check whether the PA ratings satisfy the CL achievement conditions (see the Capability Levels chapter). A PA at the target level rated L or F satisfies that level; every PA below the target level must be rated F.
  4. Produce the CL rating per process: The final CL is the highest level where all CL achievement conditions are met.
  5. Aggregate to profile report: Produce a process profile showing the CL and all PA ratings for every assessed process. This is the primary deliverable of the assessment.
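
Steps 3–5 amount to a mechanical rule check once the PA ratings exist. Below is a minimal sketch of that check, assuming the standard achievement conditions (every PA at the target level rated at least L, every PA below it rated F); step 2 is deliberately omitted because BP-to-PA aggregation is assessor judgment, not arithmetic. The PA_BY_LEVEL mapping and the input format are illustrative assumptions, not a real tool's data model.

```python
# N/P/L/F ratings ordered from not achieved to fully achieved.
RATING_ORDER = {"N": 0, "P": 1, "L": 2, "F": 3}

# Which process attributes belong to which capability level (CL1-CL3 shown).
PA_BY_LEVEL = {
    1: ["PA 1.1"],
    2: ["PA 2.1", "PA 2.2"],
    3: ["PA 3.1", "PA 3.2"],
}

def capability_level(pa_ratings: dict[str, str]) -> int:
    """Highest CL whose conditions hold: all lower-level PAs rated F,
    all PAs at the level itself rated at least L. Unrated PAs count as N."""
    achieved = 0
    for level in sorted(PA_BY_LEVEL):
        at_level = PA_BY_LEVEL[level]
        below = [pa for lv in PA_BY_LEVEL if lv < level for pa in PA_BY_LEVEL[lv]]
        if not all(RATING_ORDER.get(pa_ratings.get(pa, "N"), 0) >= RATING_ORDER["L"]
                   for pa in at_level):
            break
        if not all(pa_ratings.get(pa) == "F" for pa in below):
            break
        achieved = level
    return achieved

# Cases from the example profile in the next section:
print(capability_level({"PA 1.1": "F", "PA 2.1": "L", "PA 2.2": "L"}))  # SWE.1 -> 2
print(capability_level({"PA 1.1": "F", "PA 2.1": "L", "PA 2.2": "P"}))  # SWE.2 -> 1
print(capability_level({"PA 1.1": "L"}))                                # SWE.3 -> 1
```

The printed cases reproduce the "CL Achieved" column of the example profile below.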

The Assessment Output Profile: What It Looks Like

The assessment output for a typical project might look like this:

| Process | PA 1.1 | PA 2.1 | PA 2.2 | CL Achieved | Target CL | Status |
|---|---|---|---|---|---|---|
| SWE.1 | F | L | L | 2 | 2 | ✅ Met |
| SWE.2 | F | L | P | 1 | 2 | ❌ Not Met (PA 2.2 = P) |
| SWE.3 | L | - | - | 1 | 2 | ❌ Not Met (PA 1.1 = L, not F) |
| SWE.4 | F | F | F | 2 | 2 | ✅ Met |
| SUP.1 | F | L | L | 2 | 2 | ✅ Met |
| MAN.3 | F | P | L | 1 | 2 | ❌ Not Met (PA 2.1 = P) |

In this example, SWE.3 and MAN.3 are the priority improvement areas. SWE.3's PA 1.1 being L (not F) means the detailed design process is largely but not fully implemented - BPs are mostly present but with notable gaps, so the process stops at CL1 against a CL2 target. MAN.3's PA 2.1 = P means project management planning exists on paper but monitoring is not documented.

💡 Typical First-Assessment Profile

In a supplier's first formal ASPICE assessment, it is very common to see: SWE.1–SWE.2 at CL1 or borderline CL2, SWE.3–SWE.4 at CL1 (design documentation and unit testing are consistently weak), SUP.8 at CL1 with PA 1.1 only L (CM exists but baselines are informal), and MAN.3 at CL1 (planning exists but monitoring is undocumented). Understanding this typical profile helps a new project target its preparation efforts on the highest-impact gaps.

Assessor Competency & intacs Certification

Knowing the assessor certification structure helps you understand what kind of assessor you are working with and what authority their results carry.

| Level | Title | Requirements | Authority |
|---|---|---|---|
| PA | Provisional Assessor | Passed the intacs foundation course; participates as a team member under a Competent or Principal lead | Can participate in assessments but cannot lead. Cannot sign the assessment output as lead. |
| CA | Competent Assessor | Performed ≥ 5 assessments under supervision; passed the intacs examinations; peer-reviewed by a Principal Assessor | Can lead conformant assessments. Signs the assessment output. Results are reportable externally. |
| PRA | Principal Assessor | Long track record as a CA; demonstrated calibration quality; peer-reviewed by other PRAs; approved by the intacs board | Can lead assessments, train CAs, and certify new assessors. Highest authority level. |

Assessor Calibration

A known concern with any subjective rating system is assessor-to-assessor variability. The ASPICE community addresses this through intacs-managed calibration workshops, where assessors rate the same evidence set independently and compare results. Significant divergence leads to additional training or re-certification. In practice, experienced assessors converge closely on CL ratings, but individual BP ratings can vary ± one notch (P vs. L) depending on the evidence weighting approach.

As a supplier, it is reasonable to respectfully challenge a specific rating during the closing meeting if you have evidence that the assessor did not consider. This is not adversarial - it is part of the process. An assessor who is confident in their rating will explain their evidence basis; if you can show evidence they missed, the rating can be revised before the final report is issued.

Preparation Strategy for Suppliers

A structured 8–12 week preparation plan for an external ASPICE assessment targeting CL2 across the HIS scope:

| Week | Activity | Output |
|---|---|---|
| 1–2 | Internal self-assessment / gap analysis. Walk through every BP and GP for every in-scope process. Rate current evidence availability. Identify gaps. | Gap analysis spreadsheet: BP/GP × Evidence Available × Gap Description × Priority |
| 3–4 | Evidence inventory. Locate all existing work products. Verify they are in CM, versioned, and accessible. For missing WPs, assign owners and deadlines. | Evidence inventory list with document ID, version, location, CM status, review status |
| 5–6 | Gap closure - high-priority items. Address Findings (not just Weaknesses) first. Typically: establish review records, complete traceability links, set up CM baselines. | Updated work products, review records, CM baseline manifest |
| 7–8 | Mock interviews. Role-play assessment interviews with process owners. Practice presenting evidence quickly. Identify where interviewees struggle to answer questions. | Mock interview feedback, evidence navigation practice |
| 9–10 | Documentation cleanup. Ensure all work products meet template requirements, have correct IDs, and are under CM. Close any open review findings. | Final evidence package ready for assessor pre-review |
| 11–12 | Pre-assessment document submission. Send agreed work products to the assessors. Address any questions or clarifications. Brief interviewees on the assessment schedule. | Assessor pre-review materials; interviewee briefing notes |
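
To make the week 1–2 output concrete, here is one possible shape for the gap analysis as plain data - a sketch only, where every row, indicator, and priority is an invented example rather than guidance from the standard.

```python
from collections import Counter

# Columns mirror the week 1-2 output: process, BP/GP, evidence available,
# gap description, priority. Every row is an invented example.
gap_rows = [
    ("SWE.1", "BP6",      "partial", "trace links missing for changed reqs",        "high"),
    ("SWE.4", "BP2",      "no",      "no documented unit test coverage target",     "high"),
    ("SUP.8", "BP7",      "partial", "baselines created ad hoc, not at milestones", "medium"),
    ("MAN.3", "GP 2.1.3", "no",      "status meetings held but not minuted",        "high"),
]

# Size the week 5-6 closure effort by priority.
print(dict(Counter(row[4] for row in gap_rows)))  # {'high': 3, 'medium': 1}

# Emit the closure backlog, high-priority items first.
for proc, indicator, evidence, gap, prio in sorted(gap_rows, key=lambda r: r[4] != "high"):
    print(f"[{prio:>6}] {proc} {indicator}: {gap} (evidence: {evidence})")
```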

The Three Most Impactful Preparation Actions

If time is limited, prioritize these three actions above all others:

  1. Implement formal review records (GP 2.2.4) for all key work products. Even a simple spreadsheet (Document, Version, Date, Reviewers, Issues Found, Disposition) will satisfy this GP. This single action can move multiple processes from CL1 to CL2 eligibility.
  2. Establish machine-traceable bidirectional traceability (SWE.1.BP6 through SWE.6.BP4). If you are not using a requirements management tool, set up a structured spreadsheet with explicit ID-to-ID links and generate a coverage report showing all requirements are covered in both directions (see the sketch after this list).
  3. Document project status tracking (GP 2.1.3). Create meeting minutes for at least the last 3–4 project status meetings showing planned vs. actual comparison and documented corrective actions. This fixes the most common MAN.3 CL2 failure.
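
For action 2, the bidirectional coverage report is simple enough to generate without a dedicated tool. A minimal sketch, assuming the forward links are kept as a requirement-to-test-case mapping; all IDs are invented for illustration.

```python
# Forward links: SWE.1 requirement -> SWE.6 test cases. IDs are invented.
forward_links = {
    "SRS-001": ["TC-01"],
    "SRS-002": ["TC-02", "TC-03"],
    "SRS-003": [],            # gap: requirement with no test case
}
all_tests = {"TC-01", "TC-02", "TC-03", "TC-99"}  # TC-99 is an orphan

# Forward coverage: every requirement needs at least one test case.
uncovered_reqs = sorted(r for r, tcs in forward_links.items() if not tcs)

# Backward coverage: every test case must trace to some requirement.
traced_tests = {tc for tcs in forward_links.values() for tc in tcs}
orphan_tests = sorted(all_tests - traced_tests)

print(f"Requirements without tests: {uncovered_reqs}")  # ['SRS-003']
print(f"Tests without requirements: {orphan_tests}")    # ['TC-99']
```

Either output list being non-empty is exactly the trace gap an assessor's cross-reference check would surface.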

Summary & Key Takeaways

✅ Key Takeaways

  • Three assessment types: Internal (no external authority), External/3rd Party (highest authority), Joint/OEM-led (feeds supplier scorecard).
  • The assessment lifecycle has 6 phases: Planning → Document Review → Opening → Data Collection → Consolidation → Closing. Know what happens at each phase and what you must prepare for each.
  • Assessors use targeted sampling - random samples, boundary cases, and cross-reference checks. Being unable to navigate evidence under pressure is treated as evidence of absence.
  • Findings (significant gaps) drive N/P ratings and prevent CL achievement. Weaknesses (minor gaps) allow L ratings. Strengths contribute to F ratings.
  • You can challenge assessor ratings during the closing meeting with evidence - this is expected and professional, not adversarial.
  • Three highest-impact preparation actions: formal review records, machine-traceable bidirectional traceability, and documented project status monitoring.

What's Next

The next chapter covers Evidence Collection Strategies - a practical, process-by-process guide to building an evidence package that satisfies every HIS-scope BP and GP: what to collect, how to organize it as a work product index, and how to present it efficiently to an assessor.
