
Exercise 1: Classify Hardware Elements for AEB Radar

AEB Radar System Hardware Elements
  Radar sensor module (integrated):
  ├── RF frontend (77 GHz): transmit + receive array
  ├── DSP ASIC: beamforming, object detection
  ├── CAN transceiver: sends object list every 20ms
  └── Power supply: 5V regulator from 12V vehicle supply

  AEB Domain Controller ECU:
  ├── Main MCU (NXP S32G, dual-core lockstep)
  │   ├── Core pair A+B in lockstep
  │   ├── Flash (ECC protected)
  │   └── ADC (for external sensor inputs)
  ├── Safety watchdog IC (external: TPS65381)
  ├── CAN transceivers (radar, brake ECU)
  └── 12V to 5V/3.3V power supplies

  Safety mechanisms:
  ├── CAN timeout monitor: detects radar silence > 50ms
  ├── Object plausibility: validates object data range
  ├── Lockstep comparison: detects MCU core mismatch
  ├── Window watchdog: detects task timing violation
  └── Supply voltage monitor: detects under/over-voltage
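
Worked classification example (a minimal sketch using the values assumed in Exercise 2 below): a total failure of the radar RF frontend stops the 20 ms object list, the CAN timeout monitor flags the silence, and only the undetected fraction remains dangerous.

# One element from the tree above: radar RF frontend total failure
fit = 40                   # failure rate in FIT (failures per 10^9 hours)
dc = 0.97                  # diagnostic coverage of the CAN timeout monitor
residual = fit * (1 - dc)  # undetected (residual) share of the failure rate
print(f"Residual failure rate: {residual:.1f} FIT")  # -> 1.2 FIT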

Exercise 2: SPFM/LFM Calculation
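
Before filling in the table, it helps to see the simplified metric formulas the worksheet implements (a reduced model: the full ISO 26262-5 definitions use the total failure rate of all safety-related elements in the denominators):

$$\mathrm{SPFM} = 1 - \frac{\sum_{\mathrm{SPF}} \lambda\,(1 - DC) + \sum_{\mathrm{MPF,R}} \lambda\,(1 - DC)}{\sum_{\mathrm{SPF}} \lambda + \sum_{\mathrm{MPF,R}} \lambda} \qquad \mathrm{LFM} = 1 - \frac{\sum_{\mathrm{MPF,L}} \lambda\,(1 - DC)}{\sum_{\mathrm{MPF,L}} \lambda}$$

$$\mathrm{PMHF} \approx \lambda_{\mathrm{SPF,residual}} + \lambda_{\mathrm{MPF,L,residual}} \quad [\mathrm{FIT}]$$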

aeb_metrics.py
#!/usr/bin/env python3
# AEB radar hardware metrics — complete the table and run calculation

# Your task: classify each element, assign FIT and DC, run spfm_worksheet.py

aeb_elements = [
    # TODO: complete DC values based on safety mechanisms listed above
    # Hint: CAN timeout monitor DC for radar silence = 97% (detects sustained comm loss)
    # Hint: Lockstep DC for MCU core fault = 99%
    # Hint: Window watchdog DC = 99% (if properly sized)
    # Hint: Supply voltage monitor DC = 97%
    # Hint: Object plausibility check DC = 90% (range check only)

    {"name": "Radar RF frontend total failure", "FIT": 40,  "class": "SPF",   "DC": 0.97},  # CAN timeout detects
    {"name": "Radar CAN transceiver bus-off",   "FIT": 8,   "class": "SPF",   "DC": 0.97},  # CAN timeout
    {"name": "Radar object data corruption",    "FIT": 5,   "class": "SPF",   "DC": 0.90},  # plausibility check
    {"name": "MCU lockstep core mismatch",      "FIT": 15,  "class": "SPF",   "DC": 0.99},  # lockstep HW
    {"name": "MCU flash ECC double-bit",        "FIT": 2,   "class": "SPF",   "DC": 1.00},  # ECC DED
    {"name": "MCU flash ECC single-bit",        "FIT": 8,   "class": "SF",    "DC": 1.00},  # safe: corrected
    {"name": "AEB ECU 5V supply",               "FIT": 30,  "class": "SPF",   "DC": 0.97},  # supply monitor
    {"name": "CAN transceiver (brake ECU)",     "FIT": 8,   "class": "SPF",   "DC": 0.90},  # plausibility
    {"name": "External watchdog IC",            "FIT": 12,  "class": "MPF_L", "DC": 0.70},  # periodic test
    {"name": "Safety monitor SW (latent bug)",  "FIT": 10,  "class": "MPF_L", "DC": 0.60},  # periodic self-test
]

# Run the calculation using the same logic as spfm_worksheet.py.
# Safe faults ("SF") intentionally fall through the if/elif chain: this
# simplified model excludes them from every sum.
spf_total = spf_covered = mpf_r_total = mpf_r_covered = mpf_l_total = mpf_l_covered = 0
for e in aeb_elements:
    covered = e["FIT"] * e["DC"]
    if e["class"] == "SPF":    spf_total += e["FIT"]; spf_covered += covered
    elif e["class"] == "MPF_R": mpf_r_total += e["FIT"]; mpf_r_covered += covered
    elif e["class"] == "MPF_L": mpf_l_total += e["FIT"]; mpf_l_covered += covered

spf_denom = spf_total + mpf_r_total
spf_resid = (spf_total - spf_covered) + (mpf_r_total - mpf_r_covered)
mpf_l_resid = mpf_l_total - mpf_l_covered

SPFM = (1 - spf_resid / spf_denom) if spf_denom else 1.0
LFM  = (1 - mpf_l_resid / mpf_l_total) if mpf_l_total else 1.0
PMHF = spf_resid + mpf_l_resid  # simplified sum of residual FIT; the full ISO 26262 PMHF adds exposure-time terms for dual-point faults

print(f"SPFM={SPFM*100:.1f}% LFM={LFM*100:.1f}% PMHF={PMHF:.2f} FIT")
print("Exercise: if any metric fails, identify which element to improve")
print("Hint: improve DC of 'Safety monitor SW (latent bug)' from 60% to 90%")
print("      by adding more frequent periodic self-test (every 1 hour vs 4 hours)")

Summary

The hands-on calculation exercise demonstrates the iterative nature of hardware metric analysis: compute metrics, identify failing elements, improve diagnostic coverage or architecture, recompute. In practice, the hardware metrics calculation is not done once — it is maintained as a living spreadsheet (or tool output) that is updated whenever the hardware architecture changes, a new DC measurement becomes available from fault injection testing, or a new FIT data source is identified. The metrics report is reviewed at every design milestone and is a mandatory input to the functional safety assessment.

🔬 Deep Dive — Core Concepts Expanded

This section builds on the foundational concepts covered above with additional technical depth, edge cases, and configuration nuances. On production ECU projects, these details are among the most common causes of integration delays and late-phase defects.

Key principles to reinforce:

  • Configuration over coding: In AUTOSAR and automotive middleware environments, correctness is largely determined by ARXML configuration, not application code. A correctly implemented algorithm can produce wrong results due to a single misconfigured parameter.
  • Traceability as a first-class concern: Every configuration decision should be traceable to a requirement, safety goal, or architecture decision. Undocumented configuration choices are a common source of regression defects when ECUs are updated; a toy traceability check is sketched after this list.
  • Cross-module dependencies: In tightly integrated automotive software stacks, changing one module's configuration often requires corresponding updates in dependent modules. Always perform a dependency impact analysis before submitting configuration changes.
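
As an illustration of the traceability principle (all parameter names, values, and requirement IDs below are hypothetical, not from a real project), configuration rationale can be kept as data and checked mechanically:

# Hypothetical configuration table: every parameter carries a requirement
# trace and a rationale, so a review script can flag undocumented choices.
config = [
    {"param": "CanTimeoutMs",  "value": 50,  "req": "SR-042",
     "rationale": "Radar silence > 50 ms must trigger degradation"},
    {"param": "WdgWindowMs",   "value": 10,  "req": "SR-077",
     "rationale": "Window sized to the 10 ms task period"},
    {"param": "AdcRefVoltage", "value": 3.3, "req": None, "rationale": None},
]

untraced = [c["param"] for c in config if not c["req"] or not c["rationale"]]
if untraced:
    print("Parameters missing a requirement trace or rationale:", untraced)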

🏭 How This Topic Appears in Production Projects

  • Project integration phase: The concepts covered in this lesson are most commonly encountered during ECU integration testing — when multiple software components from different teams are combined for the first time. Issues that were invisible in unit tests frequently surface at this stage.
  • Supplier/OEM interface: This topic frequently appears in technical discussions between Tier-1 ECU suppliers and OEM system integrators. Engineers who can speak fluently about these details earn credibility and are often brought into critical design reviews.
  • Automotive tool ecosystem: Vector CANoe/CANalyzer, dSPACE tools, and ETAS INCA are the standard tools used to validate and measure the correct behaviour of the systems described in this lesson. Familiarity with these tools alongside the conceptual knowledge dramatically accelerates debugging in real projects.

⚠️ Common Mistakes and How to Avoid Them

  1. Assuming default configuration is correct: Automotive software tools ship with default configurations that are designed to compile and link, not to meet project-specific requirements. Every configuration parameter needs to be consciously set. 'It compiled' is not the same as 'it is correctly configured'.
  2. Skipping documentation of configuration rationale: In a 3-year ECU project with team turnover, undocumented configuration choices become tribal knowledge that disappears when engineers leave. Document why a parameter is set to a specific value, not just what it is set to.
  3. Testing only the happy path: Automotive ECUs must behave correctly under fault conditions, voltage variations, and communication errors. Always test the error handling paths as rigorously as the nominal operation. Many production escapes originate in untested error branches. A sketch of error-branch tests follows this list.
  4. Version mismatches between teams: In a multi-team project, the BSW team, SWC team, and system integration team may use different versions of the same ARXML file. Version management of all ARXML files in a shared repository is mandatory, not optional.
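
Building on mistake 3, the sketch below shows what exercising error branches looks like for the object plausibility check from Exercise 1 (the range limits and function name are illustrative assumptions, not values from a real radar):

def plausible(obj):
    """Toy range check on one radar object: 0-250 m distance, |v| <= 70 m/s."""
    return 0.0 <= obj["dist_m"] <= 250.0 and abs(obj["speed_mps"]) <= 70.0

# Happy path
assert plausible({"dist_m": 42.0, "speed_mps": -3.5})

# Error branches: corrupted object data must be rejected, not merely assumed
# plausible because the nominal case works.
assert not plausible({"dist_m": -1.0,   "speed_mps": 0.0})    # negative distance
assert not plausible({"dist_m": 3000.0, "speed_mps": 0.0})    # beyond sensor range
assert not plausible({"dist_m": 10.0,   "speed_mps": 500.0})  # impossible speed
print("Plausibility error-path tests passed")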

📊 Industry Note

Engineers who master both the theoretical concepts and the practical toolchain skills covered in this course are among the most sought-after professionals in the automotive software industry. The combination of AUTOSAR standards knowledge, safety engineering understanding, and hands-on configuration experience commands premium salaries at OEMs and Tier-1 suppliers globally.
