
Remote Debug Architecture

CI/CD Hardware-in-the-Loop Debug Architecture
  GitLab CI Runner (Linux VM)
  ├── Job: cmake build → ELF produced
  ├── Job: TRACE32 HIL test
  │    └── SSH to Debug Server VM
  └── Job: test report upload

  Debug Server VM (Windows or Linux)
  ├── TRACE32 PowerView running headless (no display)
  ├── TCP API port 20000 listening
  ├── USB-connected LA-7780 probe → cable → ECU board on bench
  └── Python test runner: pytest + lauterbach-trace32-rcl

  Test execution:
  1. pytest calls t32.connect() → TCP connection to TRACE32
  2. t32.cmd("Data.LOAD.Elf new_build.elf")
  3. t32.cmd("SYStem.Reset") → t32.cmd("Go main") → t32.wait()
  4. t32.fnc.var_value("g_testResult") → assert expected
  5. t32.disconnect(); report pass/fail to GitLab
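
On the debug-server side, the TCP API port that the test runner connects to is opened through the TRACE32 configuration file. The sketch below shows one way to generate such a config and launch PowerView from Python. It is a sketch under assumptions: Linux install paths, the ID value, and the t32mtc executable (the TriCore front-end, matching the TC397 target) are placeholders for this bench, and headless/screen-off options vary by TRACE32 version, so check your installation's documentation.

start_debug_server.py (hypothetical)
#!/usr/bin/env python3
# Sketch: start a TRACE32 instance with its RCL TCP port open so the
# test harness can attach. Paths and executable name are assumptions.
import subprocess
import tempfile

T32_EXE = "/opt/t32/bin/pc_linux64/t32mtc"   # TriCore PowerView (assumed path)

CONFIG = """\
OS=
ID=T32_HIL
TMP=/tmp
SYS=/opt/t32

PBI=
USB

RCL=NETTCP
PORT=20000
"""

def start_debug_server() -> subprocess.Popen:
    cfg = tempfile.NamedTemporaryFile("w", suffix=".t32", delete=False)
    cfg.write(CONFIG)
    cfg.close()
    # -c selects the config file; the process serves API calls until killed
    return subprocess.Popen([T32_EXE, "-c", cfg.name])

if __name__ == "__main__":
    start_debug_server().wait()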

CI/CD Pipeline Integration

gitlab-ci-hil.yml
# GitLab CI: Hardware-in-the-Loop debug regression tests

stages: [build, hil_test, report]

build:
  stage: build
  script:
    - cmake -B build -DCMAKE_BUILD_TYPE=Debug
    - cmake --build build --target app.elf
  artifacts:
    paths: [build/app.elf]
    expire_in: 1h

hil_regression:
  stage: hil_test
  tags: [hw-ecub-tc397]          # GitLab runner with physical ECU attached
  script:
    - python3 -m pytest tests/hil/ -v --tb=short --junitxml=hil_results.xml
    - python3 tests/hil/t32_regression.py build/app.elf
  artifacts:
    reports:
      junit: hil_results.xml
    paths: [hil_results.xml, trace_captures/]
  timeout: 15m
  allow_failure: false            # HIL test failure blocks merge

report:
  stage: report
  script:
    - python3 ci/generate_debug_report.py hil_results.xml
  when: always
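
The ci/generate_debug_report.py helper referenced above is not shown in this lesson; a minimal version only needs to parse the JUnit XML that pytest wrote via --junitxml. The following is a hypothetical standard-library sketch: the testsuite/testcase tags and the tests/failures/errors attributes follow the JUnit XML format pytest emits, and merge gating still happens in the hil_test stage, so the report only summarizes.

generate_debug_report.py (hypothetical)
#!/usr/bin/env python3
# Hypothetical ci/generate_debug_report.py: summarize pytest's JUnit XML
# output in the job log. Merge gating is done by the hil_test stage
# (allow_failure: false), so this script only reports.
import sys
import xml.etree.ElementTree as ET

def summarize(junit_path: str) -> None:
    root = ET.parse(junit_path).getroot()
    # pytest nests <testsuite> elements under a <testsuites> root
    for suite in root.iter("testsuite"):
        total = int(suite.get("tests", 0))
        bad = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        print(f"{suite.get('name', 'suite')}: {total - bad}/{total} passed")
        for case in suite.iter("testcase"):
            if case.find("failure") is not None or case.find("error") is not None:
                print(f"  FAIL {case.get('classname')}::{case.get('name')}")

if __name__ == "__main__":
    summarize(sys.argv[1])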

Python T32 API Test Harness

t32_regression.py
#!/usr/bin/env python3
import lauterbach.trace32.rcl as t32
import sys, json

TEST_CASES = [
    {"id":"HC-001","name":"CAN RX count non-zero after 1s","var":"g_canRxCount",
     "op":"gt","expected":0,"run_to":"Os_StartOS","wait_ms":1000},
    {"id":"HC-002","name":"Speed conversion accuracy","var":"g_speed_kmh",
     "op":"near","expected":100.0,"tolerance":0.5,
     "setup":{"var":"g_speedRaw","val":2778},"run_to":"App_SpeedFilter"},
    {"id":"HC-003","name":"HardFault not triggered","var":"g_faultContext.magic",
     "op":"ne","expected":0xFADE1234,"run_to":"","wait_ms":5000},
]

def run_tests(elf_path: str) -> dict:
    api = t32.connect(host="localhost", port=20000)
    api.cmd(f'Data.LOAD.Elf "{elf_path}" /RELPATH')
    api.cmd("SYStem.Reset")

    results = {"passed": 0, "failed": 0, "details": []}
    for tc in TEST_CASES:
        api.cmd("SYStem.Reset")
        if "setup" in tc:
            api.cmd(f"Var.SET {tc['setup']['var']} {tc['setup']['val']}")
        if tc["run_to"]:
            api.cmd(f"Break.Set {tc['run_to']} /Program")
            api.cmd("Go"); api.wait(3000)
        if tc.get("wait_ms"):
            api.cmd("Go"); api.wait(tc["wait_ms"]); api.cmd("Break")

        val = api.fnc.var_value(tc["var"])
        if tc["op"] == "gt":     ok = val > tc["expected"]
        elif tc["op"] == "ne":   ok = val != tc["expected"]
        elif tc["op"] == "near": ok = abs(val - tc["expected"]) <= tc["tolerance"]

        status = "PASS" if ok else "FAIL"
        results["passed" if ok else "failed"] += 1
        results["details"].append({"id":tc["id"],"status":status,"value":val})
        print(f"[{status}] {tc['id']}: {tc['name']} (got {val})")

    api.disconnect()
    return results

if __name__ == "__main__":
    r = run_tests(sys.argv[1])
    with open("hil_results.json", "w") as f:   # write results, closing the file explicitly
        json.dump(r, f, indent=2)
    sys.exit(0 if r["failed"] == 0 else 1)
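
The standalone runner above opens its own connection per invocation. For the pytest suite in tests/hil/, a session-scoped fixture can share one connection across all tests; the sketch below is a hypothetical tests/hil/conftest.py that reuses only the calls already shown (connect, cmd, wait, fnc.var_value, disconnect).

conftest.py (hypothetical)
# Hypothetical tests/hil/conftest.py: one TRACE32 connection per pytest
# session, with a per-test reset so tests stay independent.
import pytest
import lauterbach.trace32.rcl as t32

@pytest.fixture(scope="session")
def api():
    dbg = t32.connect(host="localhost", port=20000)
    yield dbg
    dbg.disconnect()

@pytest.fixture()
def target(api):
    api.cmd("SYStem.Reset")        # fresh target state for every test
    api.cmd("Break.Delete /ALL")   # no leftover breakpoints
    return api

# In a test module (e.g. tests/hil/test_can.py), mirroring HC-001:
def test_can_rx_nonzero(target):
    target.cmd("Go"); target.wait(1000); target.cmd("Break")
    assert target.fnc.var_value("g_canRxCount") > 0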

Summary

Hardware-in-the-loop debug regression testing closes the gap between software-only unit tests and full vehicle integration testing: real firmware runs on real hardware, exercised by the T32 Python API, with results reported back to GitLab CI. A dedicated GitLab runner tagged with the physical ECU label ensures tests are scheduled onto the machine with the board attached. The most important CI gate is that zero HardFaults occur during a 5-second free-run (test HC-003), catching memory corruption regressions before merge.

🔬 Deep Dive — Core Concepts Expanded

This section builds on the foundational concepts covered above with additional technical depth, edge cases, and configuration nuances that separate competent engineers from experts. When working on production ECU projects, the details covered here are the ones most commonly responsible for integration delays and late-phase defects.

Key principles to reinforce:

  • Configuration over coding: In AUTOSAR and automotive middleware environments, correctness is largely determined by ARXML configuration, not application code. A correctly implemented algorithm can produce wrong results due to a single misconfigured parameter.
  • Traceability as a first-class concern: Every configuration decision should be traceable to a requirement, safety goal, or architecture decision. Undocumented configuration choices are a common source of regression defects when ECUs are updated.
  • Cross-module dependencies: In tightly integrated automotive software stacks, changing one module's configuration often requires corresponding updates in dependent modules. Always perform a dependency impact analysis before submitting configuration changes.

🏭 How This Topic Appears in Production Projects

  • Project integration phase: The concepts covered in this lesson are most commonly encountered during ECU integration testing — when multiple software components from different teams are combined for the first time. Issues that were invisible in unit tests frequently surface at this stage.
  • Supplier/OEM interface: This topic frequently appears in technical discussions between Tier-1 ECU suppliers and OEM system integrators. Engineers who can speak fluently about these details earn credibility and are often brought into critical design review meetings.
  • Automotive tool ecosystem: Vector CANoe/CANalyzer, dSPACE tools, and ETAS INCA are the standard tools used to validate and measure the correct behaviour of the systems described in this lesson. Familiarity with these tools alongside the conceptual knowledge dramatically accelerates debugging in real projects.

⚠️ Common Mistakes and How to Avoid Them

  1. Assuming default configuration is correct: Automotive software tools ship with default configurations that are designed to compile and link, not to meet project-specific requirements. Every configuration parameter needs to be consciously set. 'It compiled' is not the same as 'it is correctly configured'.
  2. Skipping documentation of configuration rationale: In a 3-year ECU project with team turnover, undocumented configuration choices become tribal knowledge that disappears when engineers leave. Document why a parameter is set to a specific value, not just what it is set to.
  3. Testing only the happy path: Automotive ECUs must behave correctly under fault conditions, voltage variations, and communication errors. Always test the error handling paths as rigorously as the nominal operation. Many production escapes originate in untested error branches.
  4. Version mismatches between teams: In a multi-team project, the BSW team, SWC team, and system integration team may use different versions of the same ARXML file. Version management of all ARXML files in a shared repository is mandatory, not optional.

📊 Industry Note

Engineers who master both the theoretical concepts and the practical toolchain skills covered in this course are among the most sought-after professionals in the automotive software industry. The combination of AUTOSAR standards knowledge, safety engineering understanding, and hands-on configuration experience commands premium salaries at OEMs and Tier-1 suppliers globally.
