# GitLab CI: security gates -- every merge request runs all stages
stages: [sast, dependency_check, secret_scan, binary_hardening, misra, fuzzing]

# Stage 1: SAST -- Coverity on changed C files; block on new findings with CVSS >= 7.0
coverity_scan:
  stage: sast
  script:
    - cov-build --dir cov-int make
    - cov-analyze --dir cov-int --all
    - cov-format-errors --dir cov-int --json-output-v7 coverity.json
    - python3 ci/check_coverity.py --max-cvss 7.0 coverity.json
  rules: [{changes: ["src/**/*.c", "src/**/*.h"]}]

# Stage 2: SBOM vs NVD CVE database
sbom_cve_check:
  stage: dependency_check
  script:
    - dependency-check --project ECU --scan src/ --format JSON --out dep.json
    - python3 ci/check_cves.py dep.json --max-cvss 7.0 --fail-on-new

# Stage 3: Secret scanning -- no hardcoded keys/credentials
secret_scan:
  stage: secret_scan
  script:
    - trufflehog git file://. --json --no-history > secrets.json
    - python3 ci/fail_on_secrets.py secrets.json

# Stage 4: Binary hardening -- fail on missing stack canary or NX
binary_hardening:
  stage: binary_hardening
  script:
    - make all
    - checksec --file build/ecu.elf --json | python3 ci/check_hardening.py

# Stage 5: MISRA check -- fail on new Required violations
misra_check:
  stage: misra
  script:
    - pclint-plus -u src/ -o lint.xml
    - python3 ci/check_misra.py lint.xml --fail-on-required

# Stage 6: Automated UDS fuzzer -- zero new crashes required
uds_fuzzing:
  stage: fuzzing
  script:
    - python3 ci/uds_fuzzer_ci.py --iterations 5000 --interface vcan0
    - python3 ci/assert_zero_crashes.py fuzzer_results.json

CI/CD Pipeline Security Gates
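Each gate delegates its pass/fail decision to a small helper script under ci/. As one illustration, here is a minimal sketch of the binary-hardening gate's core check. The checksec JSON schema differs between versions, so the "canary"/"nx" field names and "yes" values assumed below must be adapted to your tool's actual output.

```python
# Sketch of the policy check inside ci/check_hardening.py (hypothetical helper).
# NOTE: checksec's JSON schema varies by version -- the "canary"/"nx" keys and
# "yes" values here are assumptions; adjust them to your checksec output.
REQUIRED = {"canary": "yes", "nx": "yes"}

def missing_protections(report: dict) -> list:
    """Return 'binary: protection' strings for every check that fails policy."""
    failures = []
    for binary, props in report.items():
        for key, expected in REQUIRED.items():
            if str(props.get(key, "")).lower() != expected:
                failures.append(f"{binary}: {key}")
    return failures

# Demo on a hand-written sample (hypothetical checksec-style output).
# The real script would json.load(sys.stdin) and sys.exit(1) on any failure.
sample = {"build/ecu.elf": {"canary": "yes", "nx": "no", "pie": "yes"}}
print(missing_protections(sample))  # -> ['build/ecu.elf: nx']
```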
Automated UDS Fuzzing Stage
#!/usr/bin/env python3
# CI UDS fuzzing: 5000 iterations, crash detection via CAN heartbeat
import can, random, time, json, sys

def run_ci_fuzzing(iface="vcan0", iterations=5000, heartbeat_id=0x3C5):
    bus = can.interface.Bus(iface, bustype="socketcan")
    crashes = []
    for i in range(iterations):
        sid = random.randint(0x00, 0xFF)
        # Oversized lengths are clamped below: a classic CAN frame carries at
        # most 8 data bytes (1 SID byte + up to 7 random payload bytes here).
        length = random.choice([0, 1, 7, 8, 255, random.randint(0, 4095)])
        payload = bytes([sid]) + random.randbytes(min(length, 7))  # randbytes needs Python 3.9+
        bus.send(can.Message(arbitration_id=0x7DF, data=payload[:8], is_extended_id=False))
        time.sleep(0.005)
        if i % 100 == 0:
            # Check ECU heartbeat (100 ms expected period)
            deadline = time.time() + 0.5
            alive = False
            while time.time() < deadline:
                msg = bus.recv(timeout=0.05)
                if msg and msg.arbitration_id == heartbeat_id:
                    alive = True
                    break
            if not alive:
                crashes.append({"iteration": i, "payload": payload.hex()})
                print(f"CRASH detected at iteration {i}")
    results = {"iterations": iterations, "crashes": crashes}
    with open("fuzzer_results.json", "w") as f:
        json.dump(results, f, indent=2)
    print(f"Fuzzing complete: {iterations} iterations, {len(crashes)} crashes")
    bus.shutdown()
    return crashes

if __name__ == "__main__":
    crashes = run_ci_fuzzing()
    if crashes:
        sys.exit(1)  # fail CI on any crash

Software Bill of Materials (SBOM) Management
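The load_sbom helper below consumes SPDX JSON. For orientation, here is a minimal fragment of the shape it expects; the field names follow the SPDX 2.x JSON schema, but the package names and versions are invented for illustration.

```python
# Minimal SPDX-style document (invented component values) of the shape
# load_sbom expects: a top-level "packages" list whose entries carry
# "name", "versionInfo", and optionally "supplier".
spdx_doc = {
    "spdxVersion": "SPDX-2.3",
    "name": "ECU-firmware-SBOM",
    "packages": [
        {"name": "mbedtls", "versionInfo": "2.28.3", "supplier": "Organization: ARM"},
        {"name": "lwip", "versionInfo": "2.1.3"},
    ],
}

# The same extraction load_sbom performs, inlined on the dict above:
components = [
    {"name": p["name"], "version": p["versionInfo"], "supplier": p.get("supplier", "")}
    for p in spdx_doc.get("packages", [])
]
print(components[0])  # -> {'name': 'mbedtls', 'version': '2.28.3', 'supplier': 'Organization: ARM'}
```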
#!/usr/bin/env python3
# SBOM vs OSV/NVD CVE correlation -- nightly job; new CVSS >= 7.0 creates Jira ticket
import json, requests
from datetime import date, timedelta

def load_sbom(spdx_path: str) -> list:
    with open(spdx_path) as f:
        spdx = json.load(f)
    return [{"name": p["name"], "version": p["versionInfo"],
             "supplier": p.get("supplier", "")}
            for p in spdx.get("packages", [])]

def check_osv(package_name: str, version: str, ecosystem: str = "OSS-Fuzz") -> list:
    # Query the OSV.dev API for known CVEs in package + version. The ecosystem
    # must match the one OSV tracks the package under (e.g. "OSS-Fuzz" for many
    # C/C++ projects) -- set it per component.
    resp = requests.post("https://api.osv.dev/v1/query",
                         json={"package": {"name": package_name, "ecosystem": ecosystem},
                               "version": version}, timeout=10)
    if resp.status_code == 200:
        return resp.json().get("vulns", [])
    return []

def create_jira_ticket(cve_id: str, component: str, cvss: float, sla_days: int):
    due_date = date.today() + timedelta(days=sla_days)
    print(f"JIRA: {cve_id} [{component}] CVSS {cvss} -- due {due_date} (+{sla_days}d)")

if __name__ == "__main__":
    # Load SBOM, check each component, file tickets for CVSS >= 7.0
    sbom = load_sbom("build/sbom.spdx.json")
    for component in sbom:
        vulns = check_osv(component["name"], component["version"])
        for v in vulns:
            # Where the numeric CVSS score lives varies by advisory source;
            # this assumes the exporting tool placed it in database_specific.
            cvss = float(v.get("database_specific", {}).get("severity_cvss_v3", 0) or 0)
            if cvss >= 7.0:
                create_jira_ticket(v["id"], component["name"], cvss,
                                   30 if cvss >= 9.0 else 90)

Security Regression Test Suite
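The suite below replays entries from security_crashes.json, but the lesson does not show that file's format. A plausible entry shape is sketched here: the payload hex string matches what the fuzzer records, while the id and source fields are assumptions for traceability back to the originating pentest or fuzzing run.

```python
import json

# Hypothetical security_crashes.json contents -- each entry pairs a hex CAN
# payload with provenance metadata (the "id"/"source" fields are assumed).
known_crashes = [
    {"id": "FUZZ-2024-0017", "source": "uds_fuzzer", "payload": "2e01f7deadbeef"},
    {"id": "PENTEST-0042", "source": "pentest", "payload": "1003aaaaaaaaaaaa"},
]
with open("security_crashes.json", "w") as f:
    json.dump(known_crashes, f, indent=2)

# Round-trip check: the regression runner's bytes.fromhex() parse must succeed.
loaded = json.load(open("security_crashes.json"))
payload = bytes.fromhex(loaded[0]["payload"])
print(len(payload))  # -> 7 (7 data bytes, within the 8-byte classic CAN limit)
```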
#!/usr/bin/env python3
# Security regression: replay known-crash payloads against new firmware build
# If a previously-fixed payload now causes no crash → regression test PASSES
# If a previously-fixed payload still causes a crash → regression FAILS (re-introduced bug)
import can, json, time, sys

KNOWN_CRASH_DB = "security_crashes.json"  # built up from pentest + fuzzing history

def run_regression(iface="vcan0") -> dict:
    with open(KNOWN_CRASH_DB) as f:
        known_crashes = json.load(f)
    bus = can.interface.Bus(iface, bustype="socketcan")
    results = {"passed": [], "failed": []}
    for crash in known_crashes:
        payload = bytes.fromhex(crash["payload"])
        frame = can.Message(arbitration_id=0x7DF, data=payload[:8], is_extended_id=False)
        bus.send(frame)
        time.sleep(0.05)
        # Check if ECU still responds (heartbeat within 500 ms)
        deadline = time.time() + 0.5
        alive = False
        while time.time() < deadline:
            msg = bus.recv(timeout=0.05)
            if msg and msg.arbitration_id == 0x3C5:
                alive = True
                break
        test_id = crash.get("id", crash["payload"][:8])
        if alive:
            results["passed"].append(test_id)   # bug fixed → ECU survives
        else:
            results["failed"].append(test_id)   # regression! bug re-introduced
            print(f"REGRESSION: {test_id} -- previously-fixed crash re-introduced!")
    bus.shutdown()
    print(f"Regression: {len(results['passed'])} passed, {len(results['failed'])} failed")
    return results

if __name__ == "__main__":
    r = run_regression()
    if r["failed"]:
        sys.exit(1)  # block release on regression

Summary
The CI security pipeline catches vulnerabilities in minutes rather than weeks: SAST on every merge request, SBOM-CVE correlation nightly, secret scanning on every commit, binary hardening verification on every build, and 5,000-iteration automated UDS fuzzing before merge. The security regression suite ensures previously fixed vulnerabilities stay fixed -- every pentest finding and fuzzing crash becomes a regression test payload. Together these gates make it structurally very difficult to ship a known-vulnerable or poorly hardened firmware binary.
🔬 Deep Dive — Core Concepts Expanded
This section builds on the foundational concepts covered above with additional technical depth, edge cases, and configuration nuances that separate competent engineers from experts. When working on production ECU projects, the details covered here are the ones most commonly responsible for integration delays and late-phase defects.
Key principles to reinforce:
- Configuration over coding: In AUTOSAR and automotive middleware environments, correctness is largely determined by ARXML configuration, not application code. A correctly implemented algorithm can produce wrong results due to a single misconfigured parameter.
- Traceability as a first-class concern: Every configuration decision should be traceable to a requirement, safety goal, or architecture decision. Undocumented configuration choices are a common source of regression defects when ECUs are updated.
- Cross-module dependencies: In tightly integrated automotive software stacks, changing one module's configuration often requires corresponding updates in dependent modules. Always perform a dependency impact analysis before submitting configuration changes.
🏭 How This Topic Appears in Production Projects
- Project integration phase: The concepts covered in this lesson are most commonly encountered during ECU integration testing — when multiple software components from different teams are combined for the first time. Issues that were invisible in unit tests frequently surface at this stage.
- Supplier/OEM interface: This topic frequently appears in technical discussions between Tier-1 ECU suppliers and OEM system integrators. Engineers who can speak fluently about these details earn credibility and are often brought into critical design review meetings.
- Automotive tool ecosystem: Vector CANoe/CANalyzer, dSPACE tools, and ETAS INCA are the standard tools used to validate and measure the correct behaviour of the systems described in this lesson. Familiarity with these tools alongside the conceptual knowledge dramatically accelerates debugging in real projects.
⚠️ Common Mistakes and How to Avoid Them
- Assuming default configuration is correct: Automotive software tools ship with default configurations that are designed to compile and link, not to meet project-specific requirements. Every configuration parameter needs to be consciously set. 'It compiled' is not the same as 'it is correctly configured'.
- Skipping documentation of configuration rationale: In a 3-year ECU project with team turnover, undocumented configuration choices become tribal knowledge that disappears when engineers leave. Document why a parameter is set to a specific value, not just what it is set to.
- Testing only the happy path: Automotive ECUs must behave correctly under fault conditions, voltage variations, and communication errors. Always test the error handling paths as rigorously as the nominal operation. Many production escapes originate in untested error branches.
- Version mismatches between teams: In a multi-team project, the BSW team, SWC team, and system integration team may use different versions of the same ARXML file. Version management of all ARXML files in a shared repository is mandatory, not optional.
📊 Industry Note
Engineers who master both the theoretical concepts and the practical toolchain skills covered in this course are among the most sought-after professionals in the automotive software industry. The combination of AUTOSAR standards knowledge, safety engineering understanding, and hands-on configuration experience commands premium salaries at OEMs and Tier-1 suppliers globally.