Communication

Vehicle Networks

Comprehensive deep dive into automotive communication protocols - CAN, CAN-FD, LIN, FlexRay, and SOME/IP. Understand frame structures, arbitration, error handling, and gateway architectures.

33 chapters
25.0 hrs reading
6 modules

Overview

Automotive communication networks are the nervous system of every vehicle. This course provides exhaustive coverage of all major in-vehicle protocols, from the ubiquitous CAN bus to modern SOME/IP service-oriented communication.

You'll learn not just the theory of each protocol, but how to analyze real bus traffic, design communication matrices, configure gateways, and debug network issues using industry-standard tools like Vector CANoe and CANalyzer.

The course concludes with network architecture design - how OEMs plan the complete communication topology for a vehicle, including domain-based and zone-based architectures.

Course Modules

1
CAN Protocol Deep Dive
6 chapters • 4.5 hrs reading
CAN 2.0 Frame Structure & Bit Encoding (FREE PREVIEW) 45 min read
▸ Standard (11-bit ID) vs extended (29-bit ID) frame: SOF + arbitration field (ID + RTR/SRR) + control field (IDE, r0, DLC 0–8) + data field (0–8 bytes) + CRC(15+DEL) + ACK slot + EOF(7) + IFS(3) - total 47–111 bit times at nominal rate
▸ NRZ encoding with bit stuffing: after 5 consecutive same-polarity bits a complementary stuff bit is inserted; receiver strips stuff bits; more than 5 consecutive same-polarity bits in a data/CRC field = Stuff Error → node transmits Active Error Flag
▸ Dominant (0) vs recessive (1) levels: wired-AND bus; dominant wins arbitration; ISO 11898-2 physical layer: CANH–CANL differential ≥ 1.5 V for dominant, ≤ 0.5 V differential for recessive; max bus length at 1 Mbps ≈ 40 m (propagation delay constraint)
▸ Frame types: Data frame (carries payload), Remote frame (RTR=1, no payload, requests transmission from another node), Error frame (6-bit Active/Passive Error Flag + 8-bit error delimiter), Overload frame (delays the next frame) - each type triggers specific node state machine transitions
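The stuffing rule above can be sketched in a few lines of Python - a toy model (function names are this sketch's own, not production CAN stack code) showing how stuff bits are inserted and stripped, and how a sixth identical bit surfaces as a Stuff Error:

```python
def stuff(bits):
    """Insert a complementary stuff bit after every 5 identical bits."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            out.append(1 - b)              # complementary stuff bit
            run_bit, run_len = 1 - b, 1
    return out

def destuff(bits):
    """Strip stuff bits; 6 identical consecutive bits = Stuff Error."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        if run_len == 5:                   # next bit must be a stuff bit
            if b == run_bit:
                raise ValueError("Stuff Error: 6 identical bits on the bus")
            run_bit, run_len = b, 1        # drop the stuff bit, start new run
            continue
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
    return out
```

Round-tripping any bit pattern through `stuff` and `destuff` returns the original, while a raw run of six identical bits raises - exactly the receiver behaviour described above.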
Arbitration - Priority & Bus Access (FREE PREVIEW) 40 min read
▸ Non-destructive bitwise arbitration: each transmitter monitors its own transmitted bit; recessive transmitted but dominant received → lost arbitration → node immediately backs off to receiver mode; lowest numeric CAN ID wins = highest priority; all arbitration in-frame with no collision destruction
▸ 11-bit ID gives 2048 unique identifiers; 29-bit extended ID gives 536 million; mixed-mode networks: SRR bit in extended frame must be recessive - any 11-bit standard-ID frame with ID 0x000 always beats any extended frame regardless of extended ID value
▸ Priority inversion risk: a high-priority message arriving mid-transmission must wait until the current frame plus IFS (3 bits) completes; worst-case blocking = one maximum-length frame - a DLC=8 standard frame is 111 bits plus up to ~24 stuff bits ≈ 135 µs at 1 Mbps; priority assignment analysis required for real-time systems
▸ CAN bus load budget: sustained utilisation above roughly 30–40% degrades real-time performance due to queuing delays; design targets are typically < 40% average, < 60% peak; network matrix analysis in CANdb++ or BUSMASTER verifies worst-case latency per message before hardware build
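The wired-AND arbitration rule is simple enough to simulate - a toy model (name hypothetical) assuming all contenders start at the same SOF with unique IDs:

```python
def arbitrate(ids, id_bits=11):
    """Bitwise arbitration: the bus level is the AND of all transmitted
    bits (dominant 0 wins); a node sending recessive 1 but reading
    dominant 0 loses and backs off. MSB is transmitted first."""
    contenders = set(ids)
    for bit in range(id_bits - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)   # wired-AND
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
    (winner,) = contenders                              # unique IDs assumed
    return winner
```

The lowest numeric ID survives every round, which is exactly why low IDs mean high priority.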
Error Detection & Fault Confinement 50 min read
▸ Five error detection mechanisms: bit monitoring (node reads back its own transmitted bit), bit stuffing violation, CRC check (15-bit polynomial computed over SOF through the end of the data field), frame check (fixed-format fields must contain only legal bit patterns), ACK check (a dominant ACK must be received in the ACK slot)
▸ Error counters: TEC (Transmit Error Counter) incremented +8 on transmit error, decremented −1 on successful frame; REC (Receive Error Counter) incremented +1 on receive error, decremented −1; counters maintained independently per node
▸ Node states: Error Active (TEC/REC < 128 - sends 6-dominant Active Error Flag), Error Passive (TEC or REC ≥ 128 - sends 6-recessive Passive Error Flag; does not disrupt bus), Bus Off (TEC > 255 - node disconnects; reintegrates after 128 × 11 consecutive recessive bits)
▸ Fault confinement in practice: bad transceiver, EMC noise, or software bug incrementing TEC → node auto-isolates in Bus Off before permanent bus disruption; CANalyzer TEC/REC monitoring shows node approaching Error Passive state before Bus Off event occurs
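The fault-confinement rules translate directly into a small state machine - sketched here with the +8/−1 increments from this chapter; the class and method names are illustrative, not a real driver API:

```python
class FaultConfinement:
    """TEC/REC bookkeeping per the ISO 11898-1 fault confinement rules."""
    def __init__(self):
        self.tec, self.rec, self.bus_off = 0, 0, False

    @property
    def state(self):
        if self.bus_off:
            return "Bus Off"
        if self.tec >= 128 or self.rec >= 128:
            return "Error Passive"
        return "Error Active"

    def on_tx_error(self):
        self.tec += 8
        if self.tec > 255:
            self.bus_off = True            # node disconnects from the bus

    def on_tx_success(self):
        self.tec = max(0, self.tec - 1)

    def on_rx_error(self):
        self.rec += 1

    def on_rx_success(self):
        self.rec = max(0, self.rec - 1)
```

Sixteen consecutive transmit errors (16 × 8 = 128) already push a node into Error Passive; thirty-two push TEC past 255 and into Bus Off - the auto-isolation behaviour described above.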
Bit Timing & Baud Rate Calculation 45 min read
▸ Time quantum duration: tq = prescaler / CAN clock; bit time = Sync_Seg(1 tq) + Prop_Seg(1–8 tq) + Phase_Seg1(1–8 tq) + Phase_Seg2(1–8 tq); baud rate = 1 / (tq per bit × tq duration); example: 40 MHz clock with prescaler=4 → 100 ns tq; 10 tq × 100 ns = 1 µs bit time = 1 Mbps
▸ Sample point position: end of Phase_Seg1; 75–87.5% of bit time commonly recommended (CANopen specifies 87.5%); 80% = 8 tq of a 10 tq bit time; sample point set too early → affected by propagation delay; set too late → reduced resynchronisation margin
▸ Resynchronisation: hard sync on SOF falling edge; resync adjusts Phase_Seg1 or Phase_Seg2 within SJW (Synchronisation Jump Width, 1–4 tq) to compensate for oscillator frequency deviation; all nodes must use identical timing parameters - mismatch causes intermittent errors at high bus loads
▸ Calculator tools: Vector Hardware Configurator, CANdb++ bit-timing dialog, online Bittiming Calculator - input MCU CAN clock and target baud rate; output optimal prescaler + segment values; validate oscillator tolerance against the CiA conditions, e.g. Δf ≤ min(Phase_Seg1, Phase_Seg2) / (2 × (13 × bit time − Phase_Seg2))
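A brute-force version of such a calculator fits in a few lines - a sketch assuming integer prescalers 1–64 and 8–25 tq per bit, not a replacement for the vendor tools above:

```python
def can_bit_timing(f_clk_hz, target_bps, sample_point=0.80):
    """Search prescaler + segment values for a target bit rate.
    bit time = Sync_Seg(1) + seg1(Prop+Phase1) + seg2(Phase2) quanta."""
    best = None
    for prescaler in range(1, 65):
        tq_per_bit, rem = divmod(f_clk_hz, prescaler * target_bps)
        if rem or not 8 <= tq_per_bit <= 25:
            continue                       # bit rate not exactly reachable
        seg2 = max(1, round(tq_per_bit * (1 - sample_point)))
        seg1 = tq_per_bit - 1 - seg2
        sp = (tq_per_bit - seg2) / tq_per_bit
        err = abs(sp - sample_point)
        if best is None or err < best[0]:
            best = (err, {"prescaler": prescaler, "seg1": seg1,
                          "seg2": seg2, "sample_point": sp})
    if best is None:
        raise ValueError("target bit rate not reachable from this clock")
    return best[1]
```

For a 40 MHz clock and 1 Mbps, the search lands on a configuration whose sample point sits at exactly 80% - the same arithmetic the vendor dialogs perform.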
CAN Database (DBC) Files 40 min read
▸ DBC file structure: VERSION header, NS_ (new symbols), BS_ (baud rate), BU_ (node list), BO_ (message definitions with CAN ID + DLC + transmitter node), SG_ (signal definitions), BA_DEF_ (attribute definitions), BA_ (attribute values), CM_ (comment strings)
▸ Signal definition syntax: SG_ EngineSpeed : 0|16@1+ (0.1,0) [0|6553.5] "rpm" ECU2 - bit start position=0, length=16 bits, little-endian (@1), unsigned (+), factor=0.1, offset=0, min/max limits, unit string, receiver node list
▸ Attribute extensions: BA_DEF_ BO_ "GenMsgCycleTime" INT 10 1000; BA_ "GenMsgCycleTime" BO_ 256 100; (DBC files use decimal CAN IDs - 256 = 0x100) - adds cycle time metadata per message; consumed by CANoe/CANalyzer for busload simulation and schedule compliance checking in test nodes
▸ DBC tooling: Vector CANdb++, PEAK PCAN-Symbol Editor, Python cantools library for parse+decode scripts; DBC → Simulink via MATLAB Vehicle Network Toolbox; DBC diff in CI pipeline detects accidental CAN ID or signal bit-position changes between software releases
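In practice the cantools library mentioned above does this job; to make the SG_ grammar concrete, here is a minimal stdlib-only sketch (regex and field names are this sketch's own) that parses the exact signal line shown and decodes a little-endian unsigned signal:

```python
import re

# Matches: SG_ <name> : <start>|<len>@<order><sign> (<factor>,<offset>) [min|max] "unit" receivers
SG_RE = re.compile(r'SG_ (\w+) : (\d+)\|(\d+)@(\d)([+-]) '
                   r'\(([^,]+),([^)]+)\) \[([^|]+)\|([^\]]+)\] "([^"]*)"\s*(.*)')

def parse_signal(line):
    m = SG_RE.match(line.strip())
    if not m:
        raise ValueError("not a SG_ line")
    name, start, length, order, sign, factor, offset, lo, hi, unit, rx = m.groups()
    return {"name": name, "start": int(start), "length": int(length),
            "little_endian": order == "1", "signed": sign == "-",
            "factor": float(factor), "offset": float(offset),
            "min": float(lo), "max": float(hi), "unit": unit,
            "receivers": rx.split(",")}

def decode(sig, data):
    """Decode a little-endian unsigned signal from the frame payload
    (this sketch handles only that case)."""
    raw = int.from_bytes(data, "little") >> sig["start"]
    raw &= (1 << sig["length"]) - 1
    return raw * sig["factor"] + sig["offset"]
```

A raw value of 3000 with factor 0.1 decodes to 300.0 rpm - the same factor/offset scaling the trace window applies.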
Hands-On: CAN Bus Analysis with CANalyzer 55 min read
▸ CANalyzer setup: New configuration → Insert Networks Block → CAN, set bit-rate 500 kbps; assign Vector VN1610 or PEAK USB adapter; attach .dbc via CAN → Properties → Channel-specific database; click Start → Trace window shows live decoded frames
▸ Trace window analysis: CAN ID column, DLC, decoded signal values; set trigger "On Error Frame" to highlight bus error bursts; right-click a message ID → Add to Symbol Window to display physical-unit signals using the DBC factor/offset scaling
▸ CAPL scripting: on message 0x100 { write("EngineSpeed = %.1f rpm", this.EngineSpeed); } - live signal print to Write window; stimulus CAPL: declare message 0x200 engineCmd; then output(engineCmd); simulates an absent ECU response for component-level test without a full vehicle
▸ Bus load measurement: Statistics window shows instantaneous and average utilisation %; inject periodic messages via Simulation Setup to reproduce production bus load; verify no error frames appear when load > 70%; check TEC/REC counters remain at 0 before network sign-off
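The Statistics-window figure can be cross-checked by hand - a rough estimator (function names hypothetical) assuming standard 11-bit data frames and a worst-case bound on stuff bits:

```python
def frame_bits(dlc, worst_case=True):
    """Standard data frame: 47 bit times (SOF..EOF + IFS) + 8 per data
    byte, plus an upper bound on stuff bits over the stuffable
    34 + 8*DLC bits (one stuff bit per 4 bits at worst)."""
    bits = 47 + 8 * dlc
    if worst_case:
        bits += (34 + 8 * dlc - 1) // 4
    return bits

def bus_load(messages, bitrate_bps):
    """messages: iterable of (dlc, cycle_time_s); returns utilisation 0..1."""
    return sum(frame_bits(dlc) / (bitrate_bps * cycle)
               for dlc, cycle in messages)
```

Ten 8-byte messages on 10 ms cycles already consume about 27% of a 500 kbps bus - a quick sanity check against the budget figures above.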
2
CAN-FD & Advanced CAN
6 chapters • 4.5 hrs reading
CAN-FD - Extended Data Fields 45 min read
▸ CAN-FD frame differences: new FDF bit (formerly reserved r1 = 1 signals FD frame), BRS bit (Bit Rate Switch - enables data-phase speed increase), ESI bit (Error State Indicator - signals Error Passive transmitter), DLC codes 9–15 map to 12/16/20/24/32/48/64 byte payloads
▸ CRC enhancement: 17-bit CRC for DLC ≤ 10; 21-bit CRC for DLC > 10 (vs 15-bit in CAN 2.0); a stuff-bit-count field transmitted ahead of the CRC sequence improves error detection coverage for longer frames at higher data rates; CRC delimiter behaviour also differs from classical CAN
▸ Backward compatibility: CAN-FD nodes receive legacy CAN 2.0B frames correctly; a CAN 2.0B node interprets the FDF bit as a protocol error and sends an error frame - FD networks must be upgraded node-by-node or FD traffic isolated in a dedicated network segment
▸ AUTOSAR impact: IPDUM can pack multiple signals into one 64-byte FD frame reducing fragmentation; CanIf CanIfCtrlDrvCfg must set CAN_FD mode; XCP MAX_CTO can be 64 bytes over CAN-FD; ComM must be configured for CAN-FD bus type; PduR FD PDUs use CANFD_DLC encoding
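The non-linear DLC mapping above is a common source of off-by-padding bugs; a small lookup (helper names are this sketch's own) makes it explicit:

```python
# DLC 0-8 map 1:1; DLC 9-15 map to the stepped CAN-FD payload sizes
FD_DLC_TO_LEN = {**{i: i for i in range(9)},
                 9: 12, 10: 16, 11: 20, 12: 24, 13: 32, 14: 48, 15: 64}

def len_to_dlc(n):
    """Smallest CAN-FD DLC whose capacity holds n bytes; the gap between
    n and the capacity must be padded by the sender."""
    for dlc, cap in FD_DLC_TO_LEN.items():     # dict preserves order
        if cap >= n:
            return dlc
    raise ValueError("> 64 bytes needs transport-layer segmentation")
```

A 13-byte payload, for example, cannot be sent exactly: it is carried in a 16-byte frame (DLC 10) with 3 padding bytes.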
CAN-FD Bit Rate Switching 40 min read
▸ Dual bit-rate mechanism: arbitration phase at nominal rate (500 kbps or 1 Mbps); BRS bit triggers switch to data rate (typically 2–5 Mbps) immediately after BRS; bus returns to nominal before CRC delimiter; clean reflections required during high-speed data phase - unterminated stub lengths must stay in the tens-of-centimetres range at 5 Mbps
▸ Transceiver requirements: TJA1044 for 2 Mbps; TJA1463 or SIT1044 for 5 Mbps - must meet ISO 11898-2:2016 CAN-FD transceiver spec; termination network accounts for signal integrity at high data rate; PCB trace impedance must be 120Ω matched end-to-end
▸ Separate timing registers: MCU CAN controller has data-phase prescaler + Phase_Seg1_D / Phase_Seg2_D / SJW_D independent from arbitration phase; data-phase sample point typically 70–80% (earlier than the arbitration phase because propagation delay no longer dominates within the shorter data-phase bit)
▸ Oscilloscope verification: probe CAN bus during BRS transition - dominant bit followed by visible bus speed increase; check eye diagram at data rate; PEAK PCAN-FD USB or Vector VN1610 with FD license decodes dual-rate frames in CANalyzer trace window showing both phases
CAN XL - Next Generation 35 min read
▸ CAN XL improvements over CAN-FD: payload up to 2048 bytes, data rate up to 20 Mbps, SDT (Service Data Type) field for protocol multiplexing (SOME/IP, Ethernet frames tunnelled over CAN XL), VCID (Virtual Channel ID) for virtual network segments on one physical bus
▸ Physical layer differences: CAN XL uses PWM signalling with new transceiver (TJA1153) and different voltage levels - not backward compatible with CAN-FD or CAN 2.0B at physical layer; requires dedicated CAN XL bus segment or active gateway node between CAN XL and legacy CAN
▸ Tunnelling use case: backbone CAN XL segment connects zone controllers; Ethernet SOME/IP frames tunnelled over CAN XL to zone nodes lacking Ethernet transceiver - eliminates 100BASE-T1 PHY cost in simple zone ECUs while enabling service-oriented data exchange
▸ AUTOSAR integration status: CAN XL MCAL driver module standardised in AUTOSAR R22-11+; CAN XL PDU Router support for SDT-based protocol multiplexing; Vector vCANconf and EB tresos CAN XL stack components under active development as of 2024
Network Management (NM) over CAN 45 min read
▸ AUTOSAR CanNm: each ECU periodically transmits its NM PDU (CAN ID = NM base ID + node NM ID) containing CBV (Control Bit Vector) and NID (Node ID); the message cycle timer (typically 100 ms–1 s) keeps the network awake as long as any node requests Network mode
▸ NM state machine: Bus-Sleep → Repeat Message (sends NM PDUs at CanNmMsgCycleTime for the CanNmRepeatMessageTime duration) → Normal Operation (optional passive mode) → Prepare Bus-Sleep (entered after CanNmTimeoutTime of NM silence; waits CanNmWaitBusSleepTime) → Bus-Sleep (CAN transceiver low-power standby mode)
▸ Partial Networking (PN): NM PDU contains PNI field with PN cluster bitmask; nodes compare PNI against their configured PN filter mask (CanNmPnFilterMaskByte) - non-matching nodes can sleep; selective wake-up via CAN transceiver WUF filter (TJA1145 PN-capable transceiver)
▸ ComM integration: ComM calls Nm_NetworkRequest() to request and Nm_NetworkRelease() to release network; NM callback Nm_NetworkMode() notifies ComM when network enters Normal Operation - ComM then allows COM to send application PDUs; CanSM bridges CAN bus-off recovery to NM state
Transport Protocols (ISO 15765) 50 min read
▸ ISO 15765-2 (ISO-TP) frame types: Single Frame (PCI = 0x0N, payload ≤ 7 bytes), First Frame (PCI = 0x1N, 12-bit length for 8–4095 byte messages), Consecutive Frame (PCI = 0x2N, SN 0–15, 7 bytes per CF), Flow Control (PCI = 0x3N - FS: 0=ContinueToSend, 1=Wait, 2=Overflow)
▸ Flow Control parameters: BS (Block Size - 0 = no FC needed until complete; N = send N CFs then wait for another FC), STmin (separation time between CFs - 0x00–0x7F = 0–127 ms; 0xF1–0xF9 = 100–900 µs for high-speed UDS on CAN-FD)
▸ Timeouts: N_Bs = sender waits max N_Bs ms for Flow Control after First Frame (typical 1000 ms); N_Cr = receiver waits max N_Cr ms for next Consecutive Frame (typical 150 ms); expiry → error indication to upper layer (AUTOSAR CanTp_RxIndication with E_NOT_OK)
▸ AUTOSAR integration: PduR routes CanTp SDUs; CanTp implements ISO 15765 segmentation; Dcm UDS services (0x10, 0x22, 0x2E, 0x31) segment through CanTp; N_Bs/N_Cr/STmin configured in CanTpRxNSduRef; tune carefully to avoid OBD scan-tool incompatibility (ISO 15031-3 timing expectations)
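The four PCI types compose into a simple segmentation routine - a sketch for classic CAN normal addressing (flow-control handling and padding omitted; function name hypothetical):

```python
def isotp_segment(payload):
    """Split a payload into ISO 15765-2 frames for 8-byte classic CAN."""
    n = len(payload)
    if n <= 7:
        return [bytes([0x00 | n]) + payload]               # Single Frame
    if n > 4095:
        raise ValueError("12-bit length field limits messages to 4095 bytes")
    frames = [bytes([0x10 | (n >> 8), n & 0xFF]) + payload[:6]]  # First Frame
    sn, pos = 1, 6
    while pos < n:
        frames.append(bytes([0x20 | sn]) + payload[pos:pos + 7])  # Consecutive
        sn, pos = (sn + 1) & 0x0F, pos + 7                 # SN wraps 0-15
    return frames
```

A 20-byte message becomes one First Frame (6 payload bytes) plus two Consecutive Frames - the receiver's Flow Control between FF and the CFs is the part this sketch leaves out.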
Hands-On: CAN-FD Communication Setup 55 min read
▸ Vector VN1610 FD configuration in CANoe: assign Channel 1 as CAN FD, nominal 500 kbps (prescaler=8, Prop+Ph1=12, Ph2=3, SJW=3 @ 64 MHz clock → 16 tq per bit), data 2000 kbps (prescaler=2, Prop+Ph1=11, Ph2=4, SJW=4 → 16 tq per bit); verify both timing readouts match the TJA1044 transceiver spec
▸ ARXML / FIBEX network description import: import CAN-FD .arxml from DaVinci or EB tresos into CANoe; signal definitions contain FRAME_TRIGGERED + PDU_TO_FRAME_MAPPING; verify DLC=20 frames appear correctly decoded in trace window with physical signal values
▸ Bus stress test: CANoe Simulation Setup → inject 90% CAN-FD bus load; monitor TEC/REC via Diagnostics panel; confirm zero error frames; introduce 120 Ω termination mismatch and observe signal integrity degradation at 5 Mbps data rate with oscilloscope eye diagram
▸ AUTOSAR CanIf verification: CanIfTxPduCfg.CanIfTxPduCanId must match the DBC/FIBEX CAN ID; CanIfCtrlDrvCfgRef points to a CanControllerBaudrateConfig with matching FD prescaler settings; CanIfRxPduCfg acceptance filter mask; mismatched filter → RxIndication never called, verified by adding a DLT log on the Indication side
3
LIN Protocol
5 chapters • 3.2 hrs reading
LIN Architecture & Master/Slave Concept 40 min read
▸ LIN physical layer: single-wire, 12 V supply; dominant = 0 V (grounded), recessive = 12 V (pulled up); master node: 1 kΩ pull-up to Vbat + serial interface; slave nodes: internal ~30 kΩ pull-up; max 16 nodes, max 40 m bus length; speeds 1.2 / 9.6 / 10.4 / 19.2 kbps
▸ Master/slave roles: master generates Break field (≥13 dominant bits) + Sync byte (0x55) for slave auto-baud detection; master transmits Protected Identifier byte initiating each slot; only the designated publisher slave (or master) responds with data + checksum - no collision possible
▸ Schedule table: master cycles through frame slots at fixed intervals - e.g., 10 ms cycle with motor_position every 10 ms, status_report every 20 ms; LIN spec defines Unconditional, Event-triggered, Sporadic, and Diagnostic frame slot types; schedule table stored in LDF file
▸ AUTOSAR LIN driver: LIN_SendFrame() triggers header + response transmission; Lin_GetStatus() polls completion; LIN MCAL maps to hardware UART with LIN break generation; LinIf (LIN Interface) manages schedule table execution at SchM-triggered period; LIN SBC (e.g., NXP MC33662) integrates slave transceiver + regulator
LIN Frame Structure & Schedule Tables 45 min read
▸ LIN frame anatomy: Break field (≥13 dominant bits + 1 recessive delimiter), Sync byte (0x55 = alternating bits, slave uses edges for baud sync), Protected Identifier byte (6-bit frame ID + parity bits P0/P1), Response = 1–8 data bytes + 1 checksum byte; header ≈ 34 nominal bit times, response = (N+1)×10 bit times - 54–124 bit times total before the inter-byte space the spec allows
▸ Protected Identifier parity: P0 = ID[0] XOR ID[1] XOR ID[2] XOR ID[4]; P1 = NOT(ID[1] XOR ID[3] XOR ID[4] XOR ID[5]); diagnostic frame IDs are fixed: 0x3C (master request, PID = 0x3C) and 0x3D (slave response, PID = 0x7D); PID parity mismatch → slave ignores the header
▸ Checksum types: Classic (sum with carry over data bytes only, inverted - LIN 1.x); Enhanced (sum with carry over PID + data bytes, inverted - LIN 2.x, all frames except diagnostic) - engineer must configure the slave checksum type to match ECU firmware; mismatch → no valid response, LIN trace shows a checksum error flag
▸ Schedule table timing: slot duration ≥ THEAD (header) + N_data × T_byte + T_response_space; all slots must fit within cycle period; AUTOSAR LinIf LIN_SCHEDULE_TABLE_TYPE switches between Normal and Diagnostic schedule tables dynamically during UDS diagnostic session
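The parity and checksum rules above are easy to verify in code - a sketch using the chapter's XOR equations and the "sum with carry, then invert" checksum (helper names are this sketch's own):

```python
def lin_pid(frame_id):
    """Protected Identifier: 6-bit frame ID + P0 (bit 6) + P1 (bit 7)."""
    b = [(frame_id >> i) & 1 for i in range(6)]
    p0 = b[0] ^ b[1] ^ b[2] ^ b[4]
    p1 = 1 ^ (b[1] ^ b[3] ^ b[4] ^ b[5])       # inverted parity
    return frame_id | (p0 << 6) | (p1 << 7)

def lin_checksum(data, pid=None):
    """Classic = data bytes only; enhanced = PID + data bytes.
    Sum with carry (carry folded back into the LSB), then inverted."""
    s = pid if pid is not None else 0
    for byte in data:
        s += byte
        if s > 0xFF:
            s -= 0xFF                          # fold carry back in
    return ~s & 0xFF
```

Running `lin_pid` over the diagnostic IDs shows why 0x3C keeps its value on the wire (both parity bits compute to 0) while 0x3D is transmitted as PID 0x7D.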
LIN Diagnostics & Node Configuration 35 min read
▸ LIN diagnostic frames: master request frame (PID 0x3C, 8 bytes) carries UDS service request; slave response frame (PID 0x3D, 8 bytes) carries response; supports UDS 0x22 ReadByIdentifier, 0x2E WriteByIdentifier, 0x10 DiagnosticSessionControl - all routed through LIN Transport Protocol
▸ Node Configuration (NC) services: LIN 2.x SID 0xB0 AssignNAD (change slave NAD address), SID 0xB2 ReadByIdentifier, SID 0xB3 ConditionalChangeNAD, SID 0xB4 DataDump, SID 0xB7 AssignFrameIdentifierRange (maps frame PIDs to slave frames) - used by the master to commission blank slave nodes at end-of-line
▸ AUTOSAR LINTp: LIN_TP_PCI_SF (single frame), LIN_TP_PCI_FF/CF for multi-frame UDS messages across LIN diagnostic slots; Dcm routes diagnostic sessions to LIN slaves via LINTp PDU routing; LIN_TP_NAs timeout (master abandon) and LIN_TP_NCr timeout (slave response) tuned per LDF slot timing
▸ Factory commissioning: EOL tester sends AssignNAD from initial address 0x7F to assigned NAD (e.g., 0x01); slave stores NAD in EEPROM; verification: ReadByIdentifier (SID 0xB2, identifier 0x00) reads Supplier ID + Function ID + Variant - must match the LDF product_id block; failure = rework or slave replacement
LIN Description Files (LDF) 30 min read
▸ LDF file structure: LIN_description_file header, LIN_protocol_version, LIN_language_version, LIN_speed (baud rate in bps), Nodes block (master with timebase + jitter; each slave with NAD, product_id, response_error_signal), Signals block, Frames block, Schedule_tables block
▸ Signal encoding: signal_encoding_type block defines value-to-physical mapping - physical_range (factor, offset, min, max, unit) or logical_value (enum: 0="Closed", 1="Open") - equivalent to COMPU_METHOD in A2L; consumed by Vector LINdb++ and CANoe for decoded trace display
▸ Frame publisher assignment: each unconditional_frame has one publisher (master or one specific slave); only the publisher node responds to header; CANalyzer LIN trace shows "No Response Timeout" if publisher slave is absent or NAD mismatch prevents it from recognising its frame ID
▸ LDF tooling and validation: Vector LINdb++ LDF Editor, ETAS ISOLAR-A LDF import, AUTOSAR LIN ARXML generator in DaVinci; LDF validation checks that sum of all slot durations ≤ schedule table cycle period; exported LDF drives AUTOSAR LinIf module initialisation via generated LIN driver config
Hands-On: LIN Network Configuration 45 min read
▸ CANoe LIN setup: Insert Networks → LIN, assign VN1610 Channel 2 as LIN master at 19.2 kbps; import .ldf; verify node list shows master + all slaves with correct NAD values; Start → trace window displays "No Response Timeout" flags for absent slave nodes
▸ AUTOSAR LIN driver configuration: LIN_CHANNEL_0 BAUD_RATE=19200, LIN_HW_PIN=LINPHY_0; LinIf schedule table: SLOT_0 FrameId=0x01, SlotDelay=10ms, publisher=Slave2; SLOT_1 FrameId=0x3C (diagnostic), SlotDelay=5ms; active schedule switched by LinIf_ScheduleRequest() call from Dcm
▸ Response error handling: slave signals response_error bit in its status frame when checksum or framing error detected; master reads status frame periodically; AUTOSAR LinIf raises LIN_E_RESPONSE_ERROR DEM event; persistent errors indicate slave EEPROM corruption or transceiver fault
▸ End-of-line NAD assignment via CAPL: linMasterSendDiag(0x7F, {0xB0, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00}); AssignNAD from 0x7F to 0x01; verify: ReadByIdentifier (SID 0xB2, identifier 0x00) returns a Supplier ID matching the LDF product_id; log PASS/FAIL for each slave to the EOL test report
4
FlexRay Protocol
5 chapters • 4.0 hrs reading
FlexRay TDMA Architecture 50 min read
▸ TDMA fundamentals: bus time divided into fixed communication cycles; each cycle has a static segment (TDMA - each slot assigned to exactly one node globally) and an optional dynamic segment (FTDMA - minislot auction); clock synchronised globally across all nodes via designated sync frames
▸ Communication cycle parameters: gdCycle (cycle duration, typical 1–5 ms), gNumberOfStaticSlots, gdStaticSlot (duration per static slot in microticks), gPayloadLengthStatic (max 127 words = 254 bytes per static slot), gNumberOfMinislots (dynamic segment size)
▸ Physical layer: two independent channels (Channel A + B) each at 10 Mbps - active star topology or passive bus; Channel A+B redundancy provides fault tolerance for safety-critical functions; TJA1080 active star coupler regenerates signals between bus branches and monitors bus health
▸ Node architecture: FlexRay CC (Communication Controller) handles bit-level protocol; Bus Driver (TJA1080 BD) handles physical signalling; Host (MCU) controls CC via CHI (Controller Host Interface); Infineon AURIX TC3x family integrates FlexRay CC on-chip alongside multiple CAN-FD controllers
Static & Dynamic Segments 45 min read
▸ Static segment: each slot globally pre-assigned - Node A always transmits in slot 1, Node B in slot 2, etc.; frame ID = slot number; all nodes receive simultaneously on both channels; worst-case message latency = one communication cycle - deterministic guarantee for safety-critical data
▸ Dynamic segment (FTDMA): divided into minislots; if no node transmits, minislot counter advances; node with frame ID matching current minislot count may start a dynamic frame; variable-length dynamic frames; not all dynamic frames guaranteed to fit every cycle under heavy bandwidth load
▸ Symbol window and NIT: after static+dynamic segments, a short symbol window for CAS (Collision Avoidance Symbol) and WUS (Wakeup Symbol); followed by Network Idle Time (NIT) providing the synchronisation correction window where nodes apply clock rate and offset corrections
▸ Slot allocation design: safety-critical control (steering torque, brake demand) in static segment for deterministic guaranteed delivery; high-bandwidth calibration data (XCP on FlexRay) in dynamic segment to avoid reserving precious static bandwidth; signal matrix spreadsheet maps each AUTOSAR I-PDU to FlexRay slot
Clock Synchronization Mechanism 40 min read
▸ FlexRay global time: sync frames (static slot frames with SYNC bit = 1, no payload) transmitted by designated sync nodes; every receiving node measures actual arrival time vs expected slot boundary; these offset measurements feed the synchronisation algorithm each cycle
▸ Fault-Tolerant Midpoint (FTM) algorithm: node collects offsets from ≥ gMinSyncNodeCount sync frames; sorts measurements; discards highest and lowest (fault tolerance against one faulty sync node); computes midpoint; applies rate and offset corrections during NIT phase
▸ Microtick and macrotick: microtick = finest time unit (nanoseconds, derived from node oscillator); macrotick = N × microticks (configurable per cluster); global FlexRay time expressed in (cycle number, macrotick offset); StbM (AUTOSAR Synchronized Time-Base Manager) exposes FlexRay time to application layer
▸ Synchronisation parameters: gOffsetCorrectionMax (max offset correction per NIT, typically 100 µs), gRateCorrectionMax (max rate correction); oscillator stability spec (± 50 ppm typical for automotive-grade quartz) must be compatible with these bounds; exceed bounds → sync failure → cold start required
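The fault-tolerant midpoint step is compact enough to write out - a sketch (function name hypothetical) using the spec's trimming rule, where k depends on how many sync-frame measurements were collected (k = 0 for ≤ 2, 1 for 3–7, 2 for ≥ 8):

```python
def ftm(offsets):
    """Fault-Tolerant Midpoint: sort the offset measurements, discard the
    k largest and k smallest, return the midpoint of the remaining extremes."""
    m = sorted(offsets)
    k = 0 if len(m) <= 2 else (1 if len(m) <= 7 else 2)
    if k:
        m = m[k:-k]
    return (m[0] + m[-1]) / 2
```

A single faulty sync node reporting a wild offset is simply discarded: ftm([-2, 0, 1, 50]) yields 0.5 instead of being dragged toward 50 - the one-fault tolerance described above.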
FlexRay Communication Cycle Design 45 min read
▸ Cycle design process: list all periodic messages with rates (1/2/5/10 ms) and payload sizes; assign static slots to messages; messages faster than gdCycle use multiple slots per cycle (e.g., 2 slots for a 2× gdCycle rate message); Vector vFlexRay or EB tresos FrPlugin assists slot allocation
▸ Payload length constraint: all static slots in a cluster share the same gPayloadLengthStatic - determined by the largest required payload; PDU multiplexing (FrIf I-PDU multiplexing) packs multiple small signals into one slot to avoid wasting bandwidth on oversized slots
▸ Startup and wakeup sequences: startup - first cold-start node sends CAS symbol and begins slot counting; cold-start integration completes when ≥ 2 nodes synchronised; wakeup - WUS symbol on Channel A, 50 ms observation phase, then integration; EcuM_WakeupSourceStatusType includes FlexRay wakeup source mask
▸ AUTOSAR FrIf and FrTp: FrIf (FlexRay Interface) maps AUTOSAR I-PDUs to FlexRay frame + slot + cycle offset; FrTp segments large PDUs (e.g., OTA data) across multiple cycles for diagnostic use; FrNm provides FlexRay network management; ComM controls transition to FlexRay Ready-Sleep and Bus-Sleep via FrNm votes
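The slot-count arithmetic from the design process can be sketched as follows - a hypothetical helper following the chapter's rule that a message at twice the cycle rate gets two slots per cycle, while slower messages reuse one slot every Nth cycle (cycle multiplexing):

```python
import math

def static_allocation(period_ms, gd_cycle_ms):
    """Return (slots_per_cycle, cycle_repetition) for a periodic message."""
    if period_ms < gd_cycle_ms:                    # faster than the cycle
        return math.ceil(gd_cycle_ms / period_ms), 1
    return 1, max(1, round(period_ms / gd_cycle_ms))
```

With gdCycle = 2 ms, a 1 ms message needs 2 slots in every cycle, while a 10 ms message occupies 1 slot transmitted in every 5th cycle.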
Hands-On: FlexRay Schedule Configuration 55 min read
▸ Vector vFlexRay cluster definition: gdCycle=2ms, gNumberOfStaticSlots=50, gdStaticSlot=2500ns, gPayloadLengthStatic=12 words, data rate=10 Mbps; define nodes - ECU_A in slots 1+2 as cold-start sync node, ECU_B in slot 3; export FIBEX for CANoe import
▸ CANoe FlexRay simulation: Insert Networks → FlexRay, import FIBEX; assign VN7600 Channel A and B; Start Measurement → verify "Synchronized" state in FlexRay Statistics window; slot monitor shows green for active slots, red for empty/collision slots
▸ Slot timing verification: FlexRay trace window shows cycle number, slot ID, channel, CRC, payload bytes; oscilloscope probe on TXD/RXD pins verifies correct bus-idle between frames and slot boundary alignment; active star monitor port confirms consistent signal levels on all branches
▸ Adding a new message exercise: assign steering_angle (16 bytes, 5 ms rate) to two static slots per 10 ms cycle; verify no slot conflict with existing nodes in vFlexRay slot matrix; re-export FIBEX; confirm steering_angle PDU appears every 5 ms in CANoe trace with correct 16-byte payload
5
SOME/IP & Service-Oriented Communication
6 chapters • 4.2 hrs reading
SOME/IP Protocol Fundamentals 45 min read
▸ SOME/IP 8-byte fixed header: Service ID (2B), Method/Event ID (2B), Length (4B - excludes first 8 bytes), Client ID (2B), Session ID (2B), Protocol Version (1B = 0x01), Interface Version (1B), Message Type (REQUEST=0x00 / NOTIFICATION=0x02 / RESPONSE=0x80 / ERROR=0x81), Return Code (1B)
▸ Transport binding: SOME/IP events over UDP (fire-and-forget, multicast possible); methods use TCP for reliable request/response; SD runs on UDP port 30490; application services use ports defined in the Service Instance Manifest; a single UDP datagram limits the payload to ≤ 65507 bytes (use SOME/IP-TP for larger)
▸ Service Instance concept: Service Interface (in ARXML) deployed as a Service Instance with unique Service ID + Instance ID at a fixed or discovered IP:port; Client creates Proxy to call methods or subscribe to events; Server provides Skeleton with handler implementations
▸ GENIVI vsomeip stack: widely used open-source SOME/IP implementation for Linux-based ECUs; vsomeip.json configures service ID, instance ID, unicast IP, reliable/unreliable ports; AUTOSAR Adaptive ara::com wraps vsomeip via binding layer; Classic ECUs use SoAd + SOME/IP Transformer in the AUTOSAR BSW stack
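The 16-byte on-wire header (4-byte Message ID + 4-byte Length, then the 8 bytes counted by Length) packs cleanly with struct - a sketch using the field layout from this chapter; the function name and Interface Version value are this sketch's assumptions:

```python
import struct

REQUEST, NOTIFICATION, RESPONSE, ERROR = 0x00, 0x02, 0x80, 0x81

def someip_header(service_id, method_id, payload_len,
                  client_id, session_id, msg_type, return_code=0x00):
    """Big-endian SOME/IP header; Length counts everything after the
    Length field: 8 trailing header bytes + the payload."""
    return struct.pack(">HHIHHBBBB",
                       service_id, method_id,
                       8 + payload_len,           # Length field
                       client_id, session_id,
                       0x01,                      # Protocol Version
                       0x01,                      # Interface Version (assumed)
                       msg_type, return_code)
```

A 4-byte request to Service 0x1234 / Method 0x0001 yields a 16-byte header whose Length field reads 12 - the detail that trips up most hand-rolled dissectors.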
Service Discovery (SD) Protocol 40 min read
▸ SOME/IP-SD message: SD-specific header (Flags byte: Reboot, Unicast, EIDC bits; 3 reserved bytes) + Entry Array (Service Entry / Eventgroup Entry) + Options Array (IPv4/IPv6 endpoint options with IP address, transport, port) - all within a SOME/IP payload
▸ OfferService FSM: server powers up → Initial Wait (random delay 10–500 ms to prevent SD storm) → Repetition Phase (sends OfferService at RepetitionBaseDelay × 2^n, up to RepetitionMaxCount) → Main Phase (periodic OfferService at SdCycleOfferServiceDelay, e.g., 1000 ms)
▸ Eventgroup Subscribe/Notify: client sends SubscribeEventgroup (Entry Type=0x06) with Eventgroup ID and TTL; server responds with SubscribeEventgroupAck (Type=0x07); server then sends NOTIFICATION events to subscriber's IP:port; StopSubscribeEventgroup (TTL=0) cancels subscription
▸ SD timing tuning: SdServerServiceInstanceConfig → InitialDelayMinValue/MaxValue (randomised to spread startup SD traffic), RequestResponseDelay, OfferCyclicDelay; over-aggressive OfferService intervals waste bandwidth; too-slow discovery increases startup latency for safety functions - balance per OEM requirement
Events, Methods, and Fields 35 min read
▸ Event: server sends unsolicited NOTIFICATION (Message Type=0x02) to all subscribers; one-way no-response; used for periodic data (vehicle_speed every 10 ms), state changes (gear_engaged), threshold alerts; client subscribes via SD SubscribeEventgroup before events are sent
▸ Method (Request/Response): client sends REQUEST (0x00) with incrementing SessionID; server processes and returns RESPONSE (0x80) with same SessionID for client demultiplexing; configurable timeout; used for remote procedure calls (GetDiagInfo, TriggerDTCClear, RequestSoftwareUpdate)
▸ Field: combination of getter method, setter method, and change notifier event; client calls Getter to read current value, Setter to write; Notifier pushes changes when field value changes; ara::com Field API: field.Get().get() (getter), field.Set(newVal).get() (setter), field.Subscribe(N) + GetNewSamples() (notifier)
▸ QoS and safety: UDP events fire-and-forget (no reliability); TCP methods use socket ACK; for safety-relevant events combine with E2E Protection transformer - E2E CRC+counter appended to SOME/IP payload; DataID CRC validates authenticity; counter freshness detects replay attacks (E2E P07 profile for Ethernet)
Serialization & Deserialization 40 min read
▸ SOME/IP wire format: data types serialised in big-endian (network byte order) by default; uint8/uint16/uint32/uint64/sint variants per PRS_SOMEIP_00077; strings: UTF-8 with 4-byte length prefix; arrays: 4-byte length prefix + elements; structs: fields serialised in declaration order
▸ AUTOSAR transformer chain: SOME/IP Transformer converts AUTOSAR I-PDU signals to/from SOME/IP wire format; E2E Transformer → SOME/IP Transformer → SoAd → UDP socket; configured via ServiceInterface + DataElement + SomeipTransformationProps ARXML elements
▸ Endianness per field: SomeipDataPrototype.SomeipTransformationProps.byteOrder allows field-level endianness override - e.g., uint16 fields in little-endian when interfacing with legacy CAN signals; mixed-endian structs require explicit transformer config per field
▸ Deserialization error handling: Length field mismatch → DESERIALIZATION_FAILED; array element count exceeding MAX_ELEMENTS → message dropped; malformed UTF-8 → error code returned; E2E Transformer CRC/counter failure → E2E_STATUS_ERROR → application ProxyEvent enters Error state via GetSubscriptionState()
SOME/IP-TP for Large Payloads 35 min read
▸ SOME/IP-TP purpose: base SOME/IP limited to single UDP datagram (≤ 65507 byte payload); SOME/IP-TP enables segmentation up to 4 GB; used for camera image transfer, FOTA software packages, large UDS ReadMemoryByAddress responses over Ethernet
▸ Segment structure: standard SOME/IP 8-byte header + TP header (4 bytes: Offset[31:4] in 16-byte units | Reserved[3:1] | More_Seg[0]); More_Seg=1 indicates more segments follow; final segment has More_Seg=0; Offset allows out-of-order reassembly
▸ Reassembly: receiver collects segments ordered by Offset, waits for More_Seg=0 last segment, checks for gaps; SOMEIPTP_RX_TIMEOUT triggered if all segments not received within configured window; gap detection: missing Offset → SOMEIPTP_E_INCOMPLETE error to upper layer
▸ AUTOSAR SomeIpTp configuration: SomeIpTpConfig.SomeIpTpChannel → SomeIpTpTxPdu + SomeIpTpRxPdu; SomeIpTpSeparationCycleMs (inter-segment delay prevents receiver buffer overflow); SomeIpTpMaxSegmentLength (must fit in UDP MTU, typically ≤ 1400 bytes with VLAN + IP + UDP overhead)
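The 4-byte TP header bit layout can be pinned down in code - a sketch (helper names hypothetical) enforcing the 16-byte offset granularity implied by the Offset[31:4] field:

```python
def someip_tp_header(offset_bytes, more_segments):
    """Offset[31:4] in 16-byte units, bits 3..1 reserved, More_Seg = bit 0."""
    if offset_bytes % 16:
        raise ValueError("segment offsets must be multiples of 16 bytes")
    word = ((offset_bytes // 16) << 4) | (1 if more_segments else 0)
    return word.to_bytes(4, "big")

def parse_tp_header(raw):
    """Return (offset_bytes, more_segments) from a 4-byte TP header."""
    word = int.from_bytes(raw, "big")
    return (word >> 4) * 16, bool(word & 1)
```

Segments of, say, 1392 bytes (a multiple of 16 fitting a ≤ 1400-byte segment budget) then chain as offsets 0, 1392, 2784, … with More_Seg = 1 on every segment except the last.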
Hands-On: SOME/IP Service Implementation 60 min read
▸ vsomeip configuration: vsomeip.json defines unicast IP, service_id=0x1234, instance_id=0x5678, method_id=0x0001, reliable port 30501; Wireshark SOME/IP dissector (Analyze → Enabled Protocols → SOME/IP) decodes Service ID, Method ID, Message Type, Return Code in human-readable form
▸ CANoe .NET / CAPL scripted test: SomeIpService.CreateClient(0x1234, 0x5678); client.CallMethod(0x0001, payload); assert the RESPONSE arrives within 100 ms with Return Code = 0x00 (E_OK); log shows the Session ID incrementing per call, confirming no socket reconnect
▸ AUTOSAR Adaptive proxy generation: ARXML ServiceInterface → arxml2src generates MethodProxy/EventProxy/FieldProxy C++ code; link against ara::com vsomeip binding; deploy with Service Instance Manifest declaring service instance ID + transport + port; run on QNX or Linux AGL
▸ Debugging checklist: Wireshark shows no SD OfferService → server not started or unicast IP mismatch in vsomeip.json; SubscribeEventgroupAck missing → server ServiceInstance not deployed; NOTIFICATION never arrives → EventGroup ID mismatch client vs server ARXML; Session ID not incrementing → TCP socket re-establishing each call
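The fields the Wireshark dissector decodes (Service ID, Method ID, Message Type, Return Code) live in the standard 16-byte SOME/IP header. A minimal sketch of building one, using the same example IDs as the vsomeip config above; the function name and constants are illustrative, not a real library API:

```python
import struct

REQUEST, RESPONSE = 0x00, 0x80   # SOME/IP message types
E_OK = 0x00                      # return code

def build_someip(service, method, client, session, msg_type, ret_code,
                 iface_version, payload: bytes) -> bytes:
    """Build a SOME/IP message: 16-byte header + payload. The Length field
    counts everything after itself (Request ID onward) = 8 + len(payload)."""
    return struct.pack(">HHIHHBBBB",
                       service, method,          # Message ID
                       8 + len(payload),         # Length
                       client, session,          # Request ID
                       0x01,                     # protocol version
                       iface_version,
                       msg_type, ret_code) + payload

msg = build_someip(0x1234, 0x0001, 0x0001, 0x0001, REQUEST, E_OK, 0x01, b"\xde\xad")
assert len(msg) == 18                                  # 16-byte header + 2 payload
```

Checking the Length field against the actual byte count is a quick first sanity test when the dissector flags a frame as malformed.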
6
Network Architecture & Gateway Design
5 chapters • 4.0 hrs reading
Vehicle Network Topology Design 45 min read
▸ Domain-based topology (legacy): separate high-speed CAN buses per domain (powertrain at 500 kbps, chassis at 500 kbps, body at 125 kbps); central gateway ECU interconnects domains; 70–100+ ECUs typical; OBD connector accesses all domains via gateway diagnostic routing
▸ Bandwidth budget per bus: CAN 500 kbps target < 40% average utilisation; FlexRay 10 Mbps for safety-critical chassis/ADAS; Automotive Ethernet 100BASE-T1 for camera streams (30 Mbps × 8 cameras ≈ 240 Mbps aggregate, spread across multiple 100 Mbps links); routing matrix specifies which signals cross domain boundaries
▸ Topology design tools: Vector PREEvision for network topology, signal routing matrix, bus load analysis, harness length estimation; ETAS EHANDBOOK for live topology documentation; AUTOSAR SystemTemplate ARXML captures I-Signal → I-PDU → Frame → Cluster → Channel hierarchy in machine-readable form
▸ Zonal architecture trend: physical grouping (Front, Rear, Roof zones) replaces functional domain grouping; Zone ECU aggregates all actuators/sensors in its physical zone regardless of function; Gigabit Ethernet backbone connects zone ECUs to central Vehicle Computer; reduces wire harness length by ~30% vs domain topology
Gateway ECU Architecture 50 min read
▸ Gateway routing types: signal gateway (unpack signal from source I-PDU → convert + repack into target I-PDU; handles scaling, data type change); PDU gateway (forward PDU bytes unchanged; no signal decode; for same-format cross-bus routing); frame gateway (raw CAN frame relay; lowest latency)
▸ AUTOSAR PduR routing path: COM → PduR (routing table: PduRSrcPdu → PduRDestPdu) → CanIf/EthIf → bus → CanIf/EthIf → PduR → COM → destination; one PduRSrcPdu can fan out to multiple PduRDestPdus (replication); typical gateway latency = 0–2 CAN cycles (decode + routing + re-encode)
▸ CAN-to-Ethernet gateway: CAN message bursts absorbed by ring buffer; DLT logger records all forwarded messages on Ethernet side; AUTOSAR GW-ECU runs EcuC, CanSM, EthSM, PduR, COM, ComM, NM; SOME/IP transformer converts routed COM signals into SOME/IP service events
▸ Latency budget analysis: safety-relevant signals (wheel speed, steering torque) require bounded end-to-end latency; the AUTOSAR TIMEX model defines the latency constraint from the source ECU's sensor task to the actuator task; worst-case analysis must show that the gateway task period plus routing delay fit within the allocated timing budget
Domain vs Zone-Based Architectures 40 min read
▸ Domain architecture: functional grouping (Powertrain, Chassis, Body, ADAS, Infotainment) - dedicated Domain Control Unit (DCU) per domain; ECUs within domain communicate on dedicated CAN/FlexRay bus; DCU acts as domain master and inter-domain gateway; BMW F-series and Toyota pre-2020 vehicles use this model
▸ Zone architecture: physical grouping by vehicle location (Front-Left, Front-Right, Rear, Roof zones) - Zone ECU aggregates all actuators/sensors in its physical zone regardless of function; Gigabit Ethernet backbone connects zones to central Vehicle Compute Module; Tesla Model 3 and VW MEB+ platforms use zonal approach
▸ Zone architecture advantages: shorter wiring harness (actuators wire to nearest zone ECU); reduced ECU count (zone ECU replaces 5–10 domain ECUs); OTA update of one zone ECU updates all peripherals in that zone; new vehicle feature = new software deployment, not a new hardware ECU
▸ Migration challenges: domain-specific ECU suppliers deliver proprietary CAN interfaces that must be translated to SOME/IP in the zone architecture; AUTOSAR Classic ECUs in zone connect via gateway to Adaptive zone controller - hybrid Classic/Adaptive architecture common in 2022–2028 vehicle programs
Network Security & SecOC 45 min read
▸ AUTOSAR SecOC (Secure On-Board Communication): sender appends MAC (Message Authentication Code, 4–8 bytes) + Freshness Counter (2–4 bytes) to CAN/Ethernet PDU; receiver verifies MAC using shared symmetric key; detects message injection, replay, and spoofing attacks on the bus
▸ MAC algorithms: AES-128-CMAC (most common, 16-byte full MAC truncated to 4–8 bytes for CAN bandwidth); HMAC-SHA256 for Ethernet; key stored in HSM (Hardware Security Module, e.g., Infineon SHE+ / NXP SE050) - key never exposed to main MCU core; MAC computation offloaded to HSM via SHE API
▸ Freshness Value Manager (FVM): AUTOSAR FVM module maintains a per-PDU counter; only the low-order bits (typically 4–8) of the freshness value are transmitted in the PDU to save bandwidth; the upper part is maintained locally, e.g. as an NvM-stored trip counter; the receiver reconstructs the full counter from the received low bits + stored upper part; mismatch > threshold → SecOC_VerificationStatus_FAILED
▸ Gateway SecOC impact: a signal gateway must re-MAC forwarded messages - verify with the source-bus key, then recompute with the destination-bus key; latency = MAC computation time (50–200 µs with AES-CMAC on HSM); a PDU gateway cannot re-MAC (it lacks the keys) - its interface must be classified as trusted; OEM key injection at end-of-line (EOL) is a security-critical manufacturing step
Hands-On: Multi-Bus Gateway Project 65 min read
▸ Gateway AUTOSAR configuration: CanSM on 3 CAN buses (CAN0 powertrain 500 kbps, CAN1 body 125 kbps, CAN2 diagnostic 500 kbps) + EthSM on Ethernet; PduR routing table: VehicleSpeed CAN0 → CAN1 (signal gateway, factor=1.0, offset=0.0), EngineStatus CAN0 → Ethernet SOME/IP notification event
▸ DaVinci routing configuration: PduRRoutingPath_VehicleSpeed - Source=Can0_VehicleSpeed_Pdu (8-byte DLC) → Destination=Can1_VehicleSpeed_Pdu (4-byte DLC, with signal extraction); verify the routing path reference chain CanIf → PduR → COM in the graphical Signal Routing Matrix view
▸ CANoe multi-bus simulation: CAN A (powertrain), CAN B (body), Ethernet (100BASE-T1 simulated); inject VehicleSpeed on CAN A at 100 Hz; verify CAN B shows forwarded VehicleSpeed with ≤ 1 ms latency (one CAN cycle boundary); Ethernet trace shows SOME/IP NOTIFICATION with same VehicleSpeed value
▸ Debug approach: CANoe BusStatistics confirms each bus at expected utilisation; global Trace window filtered on Signal "VehicleSpeed" shows three appearances (CAN0 source, CAN1 forwarded, Ethernet notification) with timestamps; delta between CAN0 and Ethernet notification must be < 5 ms per system requirement
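The latency checks in the debug approach above can be automated once the trace is exported. A minimal sketch, assuming a hypothetical export format of (timestamp, bus, signal, value) tuples; the thresholds come from the stated requirements (CAN0 → CAN1 ≤ 1 ms, CAN0 → Ethernet < 5 ms).

```python
# Hypothetical trace rows exported from the analysis tool
trace = [
    (1.0000, "CAN0", "VehicleSpeed", 120.5),   # source on powertrain CAN
    (1.0007, "CAN1", "VehicleSpeed", 120.5),   # forwarded within one CAN cycle
    (1.0031, "ETH",  "VehicleSpeed", 120.5),   # SOME/IP NOTIFICATION
]

def check_gateway_latency(trace, max_can_s=0.001, max_eth_s=0.005):
    """Compare forwarding deltas against the system requirement:
    CAN0 -> CAN1 within 1 ms, CAN0 -> Ethernet within 5 ms."""
    t = {bus: ts for ts, bus, sig, _ in trace if sig == "VehicleSpeed"}
    return (t["CAN1"] - t["CAN0"] <= max_can_s) and (t["ETH"] - t["CAN0"] < max_eth_s)

assert check_gateway_latency(trace)
```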

What You'll Learn

Analyze and debug CAN, CAN-FD, LIN, and FlexRay bus traffic
Design communication matrices and DBC/ARXML databases
Configure network management and transport protocols
Implement SOME/IP services for Adaptive ECUs
Design vehicle network architectures and gateways
Apply network security measures including SecOC

Prerequisites

Basic electronics knowledge (voltage levels, digital signals)
Understanding of binary/hexadecimal number systems
Familiarity with automotive ECU concepts

This course includes:

33 detailed documentation chapters
Downloadable resources
Searchable text documentation
Code snippets & technical diagrams
Hands-on exercises
Lifetime access
Certificate of completion