ISO 26262 Part 4 System-Level Development: Technical Safety Concept, Architecture, and Safety Requirements Explained


Hello, automotive engineers and system safety professionals! Welcome to the fourth deep-dive post in our comprehensive ISO 26262 series at PiEmbSysTech. In this article, we will explore ISO 26262 Part 4 – Product Development at the System Level, the part where the abstract safety goals and functional safety requirements from the concept phase (Part 3) are transformed into a concrete, implementable system design.

Part 4 is the critical engineering bridge in the V-model. On the left side of the “V,” it takes the functional safety concept and refines it into a technical safety concept with detailed technical safety requirements, a system architectural design, and a hardware-software interface (HSI) specification. On the right side, it defines the requirements for system integration testing and vehicle-level safety validation. If Part 3 answered “what safety must we achieve?” then Part 4 answers “how will our system architecture achieve it?”

Let us explore every clause, every concept, and every practical consideration of this pivotal part of the standard.

ISO 26262 Part 4 System-Level Development Table of Contents

  1. What is ISO 26262 Part 4 and Where Does It Fit?
  2. Structure of Part 4 – The Four Normative Clauses
  3. Initiation of Product Development at the System Level (Clause 5)
  4. Technical Safety Requirements Specification (Clause 6)
  5. Deriving TSRs from Functional Safety Requirements
  6. What Technical Safety Requirements Must Address
  7. The Technical Safety Concept (Clause 6) – Design Response to Safety Goals
  8. System Architectural Design – Building the Safety Architecture
  9. Safety Mechanisms at the System Level
  10. Hardware-Software Interface (HSI) Specification
  11. What the HSI Must Contain
  12. System-Level Safety Analysis – FMEA, FTA, and DFA
  13. Specification of Hardware Metric Targets (SPFM, LFM, PMHF)
  14. System Integration and Testing (Clause 7)
  15. Integration Testing Strategy and Methods
  16. Safety Validation (Clause 8) – Vehicle-Level Evidence
  17. Validation Methods and Acceptance Criteria
  18. Key Work Products of Part 4
  19. Part 4 in the V-Model – Connecting Left and Right Sides
  20. Common Mistakes and How to Avoid Them
  21. Frequently Asked Questions
  22. Conclusion

1. What is ISO 26262 Part 4 and Where Does It Fit?

ISO 26262 Part 4: Product Development at the System Level specifies the requirements for developing the system architecture, defining technical safety requirements, performing system-level safety analyses, integrating hardware and software, and validating the item at the vehicle level. It occupies the central position in the V-model – the point where abstract safety concepts become concrete engineering specifications, and where the outputs of hardware development (Part 5) and software development (Part 6) are brought together and tested as an integrated system.

Part 4 serves as the “systems engineering hub” of ISO 26262. It receives inputs from the concept phase (Part 3) – specifically the functional safety requirements and the functional safety concept – and produces outputs that feed directly into hardware development (Part 5) and software development (Part 6). On the verification side, Part 4 defines the integration testing that validates the combined hardware-software system and the vehicle-level safety validation that confirms the original safety goals are achieved in the real operational environment.

This dual role – defining the design and defining the test – is a direct reflection of the V-model philosophy: for every specification on the left side, there must be a corresponding verification activity on the right side, with clear traceability between the two.

2. Structure of Part 4 – The Four Normative Clauses

ISO 26262-4:2018 is organized into the following normative clauses:

Clause 5: General Topics for Product Development at the System Level – Covers the initiation of system-level development, including prerequisites, the transition from the concept phase, and the overall objectives.

Clause 6: Specification of the Technical Safety Concept – The core design clause. Covers the specification of technical safety requirements (TSRs), the system architectural design, the technical safety concept, the hardware-software interface (HSI) specification, and the specification of target values for hardware architectural metrics.

Clause 7: System and Item Integration and Testing – Covers the integration of hardware and software elements into the complete system, the testing of the integrated system against its technical safety requirements, and the integration of the item into the vehicle.

Clause 8: Safety Validation – Covers the vehicle-level validation that confirms the safety goals are achieved by the implemented item in its real operational environment.

Additionally, Annex A provides the overview of objectives, prerequisites, and work products, and Annex B provides detailed guidance on the contents of the HSI specification.

3. Initiation of Product Development at the System Level (Clause 5)

Before system-level development can begin, certain prerequisites from the concept phase must be in place. The functional safety concept from Part 3 – including the safety goals, their ASIL classifications, and the functional safety requirements (FSRs) – must be complete and verified. The safety plan must define the activities, methods, and resources for system-level development. The system-level portion of the safety plan may be a sub-plan within the overall safety plan defined in Part 2.

Clause 5 establishes the overall objectives for system-level development: refining the functional safety requirements into technical safety requirements that are sufficiently detailed for allocation to hardware and software elements; developing a system architectural design and a technical safety concept that satisfy the safety requirements while not conflicting with non-safety requirements; verifying that the technical safety requirements achieve functional safety at the system level and are consistent with the functional safety requirements; integrating and testing the system against the technical safety requirements; and validating at the vehicle level that the safety goals are achieved.

An important principle established in Clause 5 is that technical safety and non-safety requirements shall not contradict each other. The system design must satisfy both the functional (non-safety) performance requirements and the safety requirements simultaneously. If conflicts arise, they must be resolved through design trade-offs, and any impact on safety must be evaluated and documented.

4. Technical Safety Requirements Specification (Clause 6)

The technical safety requirements (TSRs) are the heart of Part 4. They represent the refinement of the abstract functional safety requirements (FSRs) from Part 3 into concrete, implementable specifications that can be allocated to specific hardware components and software modules within the system architecture. If FSRs describe what the system must do in safety terms, TSRs describe how the system will do it at a technical level.

Each TSR inherits the ASIL of the FSR from which it is derived (unless ASIL decomposition is applied per Part 9). TSRs are specified at a level of detail that is sufficient for direct allocation to hardware elements (feeding into Part 5) and software elements (feeding into Part 6). They must be verifiable – it must be possible to design a test or analysis that objectively determines whether each TSR is met.

5. Deriving TSRs from Functional Safety Requirements

The derivation of TSRs from FSRs is an engineering refinement process that adds technical specificity while maintaining traceability. Each FSR is analyzed in the context of the emerging system architecture, and one or more TSRs are specified to implement it.

For example, consider an FSR from our EPS example: “The EPS system shall detect loss of steering torque sensor signal within 10 ms.” This FSR might be refined into several TSRs. A hardware TSR could specify: “The EPS ECU analog input circuit shall include a signal range monitoring circuit that generates a fault indication when the torque sensor signal is outside the range of 0.25V to 4.75V.” A software TSR could specify: “The EPS application software shall read the torque sensor ADC value every 1 ms and set the ‘torque sensor fault’ flag if two consecutive readings are outside the valid range (approximately 205 to 3890 counts for a 12-bit ADC referenced to 5V).” Another software TSR could specify: “Upon setting the ‘torque sensor fault’ flag, the EPS safety monitor software shall initiate the safe state transition within 5 ms.”

Notice how the abstract FSR (“detect loss within 10 ms”) has been refined into specific hardware and software behaviors with concrete parameters (voltage ranges, ADC values, timing budgets, number of consecutive readings). This refinement is the essence of the TSR derivation process. The total fault detection time (1 ms sampling + 2 ms for two consecutive readings + 5 ms for safe state initiation = 8 ms) fits within the 10 ms FSR requirement, with 2 ms margin.
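The range-check-with-debounce logic in the software TSRs above can be sketched in C. This is an illustrative sketch only – all names (`torque_sensor_monitor`, `torque_sensor_fault`) and the count thresholds (derived from the 0.25V–4.75V window on a 5V-referenced 12-bit ADC) are assumptions for the example, not values from the standard.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative thresholds corresponding to the assumed valid voltage
 * window (0.25 V to 4.75 V on a 5 V-referenced 12-bit ADC). */
#define TORQUE_ADC_MIN        205u
#define TORQUE_ADC_MAX        3890u
#define FAULT_DEBOUNCE_COUNT  2u    /* two consecutive out-of-range reads */

uint8_t out_of_range_count = 0u;
bool torque_sensor_fault = false;   /* triggers safe-state transition */

/* Called every 1 ms from the task scheduler, per the example TSR. */
void torque_sensor_monitor(uint16_t adc_counts)
{
    if (adc_counts < TORQUE_ADC_MIN || adc_counts > TORQUE_ADC_MAX) {
        if (++out_of_range_count >= FAULT_DEBOUNCE_COUNT) {
            torque_sensor_fault = true;
        }
    } else {
        out_of_range_count = 0u;    /* healthy reading resets the debounce */
    }
}
```

With a 1 ms sampling period, the debounce of two consecutive readings contributes the 2 ms detection term in the timing budget above.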

6. What Technical Safety Requirements Must Address

ISO 26262-4:2018, Clause 6 specifies that the TSRs shall address several critical aspects of the system design:

Safety mechanisms for fault detection and control: TSRs must specify the safety mechanisms that detect faults within the system itself and in external interfaces. This includes specifying what is monitored, how it is monitored, the diagnostic coverage expected, the detection time, and the response upon detection. These mechanisms form the diagnostic architecture of the system.

Safety mechanisms for external fault detection: TSRs must also address the detection and control of faults in external elements that the item depends on – such as faults in sensor inputs from other ECUs, communication bus errors, or power supply anomalies.

Transition to and maintenance of safe states: TSRs must specify the detailed mechanism for transitioning the system to the defined safe states when faults are detected, including the timing of the transition, the sequence of actions, and the conditions for maintaining the safe state once entered.

Warning and degradation strategy implementation: TSRs must specify how the warning and degradation concepts from the functional safety concept are technically implemented – which warning indicators are activated, when, in what sequence, and what degraded operating modes are available.

Prevention of latent faults: TSRs must address mechanisms to prevent safety-critical faults from remaining latent (undetected). This includes specifying start-up tests (pre-drive checks), periodic online diagnostic tests, and shut-down tests (post-drive checks) that detect faults in safety mechanisms and redundant elements.

Arbitration logic: When multiple control requests from different safety mechanisms or safety functions may be issued simultaneously, TSRs must specify the arbitration logic that determines which request takes priority.

Operating modes and transitions: TSRs must define the system’s operating modes (normal, degraded, safe state, off) and the conditions and transitions between them.

Timing constraints: All timing-critical TSRs must include explicit timing parameters derived from the FTTI budgets established in the functional safety concept.
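The operating modes and arbitration described above can be captured as a small state machine. The following C sketch is a minimal illustration under assumed mode names and fault categories – a real TSR set would define many more transitions and conditions.

```c
#include <stdbool.h>

/* Illustrative operating modes (normal, degraded, safe state, off). */
typedef enum { MODE_OFF, MODE_NORMAL, MODE_DEGRADED, MODE_SAFE_STATE } op_mode_t;

typedef struct {
    bool critical_fault;   /* e.g. torque sensor loss, motor overcurrent */
    bool degraded_fault;   /* e.g. loss of vehicle-speed signal over CAN */
    bool ignition_on;
} system_status_t;

/* Transition function, evaluated each cycle. Ordering implements a simple
 * arbitration: critical faults override degraded faults, and the safe
 * state is maintained once entered until the next ignition cycle. */
op_mode_t next_mode(op_mode_t current, const system_status_t *s)
{
    if (!s->ignition_on)            return MODE_OFF;
    if (current == MODE_SAFE_STATE) return MODE_SAFE_STATE;
    if (s->critical_fault)          return MODE_SAFE_STATE;
    if (s->degraded_fault)          return MODE_DEGRADED;
    return MODE_NORMAL;
}
```

The priority ordering in the `if` chain is the arbitration logic in miniature: when multiple requests coincide, the most restrictive mode wins.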

7. The Technical Safety Concept (Clause 6) – Design Response to Safety Goals

The technical safety concept is the comprehensive documentation of how the technical safety requirements are implemented within the system architectural design. It is the system-level design solution for achieving functional safety. While the functional safety concept (Part 3) described the safety approach at a functional, implementation-independent level, the technical safety concept describes the concrete technical implementation.

The technical safety concept brings together the complete set of TSRs, the system architectural design showing all safety-related elements and their interconnections, the allocation of each TSR to specific elements (hardware components, software modules, or combinations), the definition of all safety mechanisms and their operating characteristics, the diagnostic architecture (what is diagnosed, how, when, and with what coverage), the safe state management strategy (including all operating mode transitions), the timing analysis showing that all safety mechanisms operate within the FTTI budgets, and the rationale for why the chosen architecture and mechanisms are sufficient to achieve the safety goals.

The technical safety concept is one of the most important work products in the entire ISO 26262 lifecycle because it is the single document that connects the safety goals (what must be achieved) to the system design (how it will be achieved). It is the primary input to hardware development (Part 5) and software development (Part 6), and it is the primary reference against which the system integration test results are evaluated.

8. System Architectural Design – Building the Safety Architecture

The system architectural design describes the structure of the system in terms of its elements, their interfaces, and their interactions. For a safety-related system, the architecture must be designed not only to deliver the intended functionality but also to provide adequate safety mechanisms, diagnostic capabilities, and fault tolerance.

Key architectural design principles for safety include modularity (dividing the system into well-defined, loosely coupled modules to contain fault propagation), encapsulation (isolating safety-critical elements from non-safety elements to achieve freedom from interference), hierarchical design (organizing the architecture in layers with clear interfaces between levels), and redundancy (providing backup elements or channels that can maintain safety-critical functions in the presence of single faults).

The system architecture for our EPS example might include the main application processor (running the EPS control algorithm and safety monitoring software), a safety monitoring co-processor or hardware safety monitor (providing independent diagnostic supervision), the motor drive power stage (H-bridge with current monitoring), dual or redundant sensor interfaces (torque sensor, motor position sensor), a CAN communication interface (for vehicle speed and ESC commands), a power supply with monitoring and protection circuits, and a motor relay or power disconnect (for safe state activation – de-energizing the motor in case of a detected critical fault).

Each architectural element is assigned the TSRs that it must implement, and the ASIL of each element is determined by the highest ASIL of the TSRs allocated to it. This allocation creates the specification input for Part 5 (hardware safety requirements) and Part 6 (software safety requirements).

9. Safety Mechanisms at the System Level

ISO 26262 Part 4 requires the definition of safety mechanisms that detect faults, control failures, or mitigate their effects to achieve or maintain a safe state. At the system level, safety mechanisms typically operate across hardware and software boundaries and address the system’s overall diagnostic and protection strategy.

Common system-level safety mechanisms include the following:

Input plausibility checks: Comparing sensor readings against expected ranges or against redundant sensor values.

Output monitoring: Verifying that actuator commands produce the expected physical response – for EPS, comparing the commanded motor current with the measured motor current.

Watchdog supervision: An independent hardware watchdog timer monitors the software execution and triggers a safe state if the software fails to service the watchdog within the expected time.

Program flow monitoring: Verifying that the software executes in the correct sequence and timing.

Communication monitoring: Detecting lost, corrupted, or delayed CAN messages through alive counters, CRC checks, and timeout monitoring.

Power supply monitoring: Detecting undervoltage, overvoltage, or power supply failures.

Safe state activation: The mechanism that physically transitions the system to the safe state – for EPS, this might involve opening the motor relay to de-energize the motor, combined with activating the warning lamp.

For each safety mechanism, the TSR must specify what fault or failure it detects, the expected diagnostic coverage (low, medium, or high), the detection time, the response upon detection, and any dependencies on other system elements.
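As an illustration of communication monitoring, the C sketch below combines an alive (rolling) counter check with timeout supervision for one CAN frame. The names, the 4-bit counter width, and the 50 ms timeout are assumptions for the sketch; CRC checking is assumed to be done by the transport layer.

```c
#include <stdbool.h>
#include <stdint.h>

#define MSG_TIMEOUT_MS  50u   /* assumed cycle-time budget for the frame */

typedef struct {
    uint8_t  last_alive;      /* last received 4-bit alive counter */
    uint32_t last_rx_time_ms; /* timestamp of last reception */
    bool     fault;
} can_monitor_t;

/* Call on every reception of the monitored frame. A repeated or skipped
 * alive counter indicates a lost, duplicated, or stale message. */
void can_monitor_on_rx(can_monitor_t *m, uint8_t alive, uint32_t now_ms)
{
    uint8_t expected = (uint8_t)((m->last_alive + 1u) & 0x0Fu);
    if (alive != expected) {
        m->fault = true;
    }
    m->last_alive = alive;
    m->last_rx_time_ms = now_ms;
}

/* Call periodically (e.g. every 10 ms) to detect a silent sender. */
void can_monitor_on_tick(can_monitor_t *m, uint32_t now_ms)
{
    if ((now_ms - m->last_rx_time_ms) > MSG_TIMEOUT_MS) {
        m->fault = true;      /* message timeout */
    }
}
```

In production designs this pattern is typically standardized (for example via end-to-end protection profiles) rather than hand-written per message.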

10. Hardware-Software Interface (HSI) Specification

The Hardware-Software Interface (HSI) specification is a unique and critical work product of Part 4 that defines the interface between hardware and software elements within the system. It describes the hardware features and resources that the software relies upon, and the software functions that control or depend on hardware capabilities. The HSI ensures that hardware and software development teams have a clear, agreed-upon interface specification that prevents integration issues and safety gaps.

In traditional automotive development, hardware and software teams often work in silos, leading to interface mismatches, timing conflicts, and unaddressed safety responsibilities. The HSI specification explicitly addresses this risk by documenting every interaction point between hardware and software before detailed development begins.

11. What the HSI Must Contain

ISO 26262-4:2018, Annex B provides detailed guidance on the HSI contents. The HSI shall include:

Hardware devices controlled by software: A list of all hardware peripherals, registers, and devices that the software directly controls – including operating modes (default, initialization, test, advanced), configuration parameters (gain control, filter settings, clock prescaler values), and the expected behavior of the hardware in each mode.

Hardware resources supporting software execution: The hardware platform resources that the software depends on — including CPU architecture, clock frequencies, memory mapping (RAM, ROM, flash addresses and sizes), allocation of registers, timers, interrupt vectors, I/O ports, and DMA channels.

Hardware features ensuring independence: The hardware mechanisms that support freedom from interference between software elements – such as memory protection units (MPUs), memory management units (MMUs), privilege levels, and hardware partitioning features.

Communication interfaces: Specifications of the hardware communication interfaces (CAN, SPI, UART, etc.) including their operating modes (master, slave, baud rates), buffer sizes, and error detection capabilities.

Timing constraints: The timing requirements derived from the technical safety concept – including the minimum and maximum execution times for safety-critical software functions, interrupt latencies, and communication cycle times.

Diagnostic hardware features: Hardware-provided diagnostic capabilities – such as built-in self-test (BIST), error correction codes (ECC) on memories, watchdog timer interfaces, voltage monitoring outputs, current sensing capabilities, and overcurrent or overtemperature protection circuits.

Hardware characteristics relevant to software design: Analog-to-digital converter resolution and accuracy, signal conditioning characteristics, PWM resolution and frequency ranges, and any hardware quirks or errata that the software must account for.

The HSI is a bidirectional specification – it defines what hardware provides to software and what software expects from hardware. Both hardware and software development teams must agree on the HSI before proceeding to detailed design in Parts 5 and 6.
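Parts of an agreed HSI often end up reflected in shared header files. The fragment below is a purely hypothetical excerpt showing how memory mapping and a watchdog interface from the HSI might be captured – every address, size, register layout, and key value here is invented for illustration and would come from the chosen MCU in a real project.

```c
#include <stdint.h>

/* --- Memory mapping agreed between hardware and software teams --- */
#define APP_FLASH_BASE   0x08000000u
#define APP_FLASH_SIZE   (512u * 1024u)
#define SAFETY_RAM_BASE  0x20000000u        /* ECC-protected region */
#define SAFETY_RAM_SIZE  (64u * 1024u)

/* --- Window-watchdog interface relied on by the safety monitor --- */
typedef struct {
    volatile uint32_t CTRL;     /* enable / window configuration */
    volatile uint32_t SERVICE;  /* software writes the service key here */
    volatile uint32_t STATUS;   /* timeout and bad-service flags */
} wdg_regs_t;

#define WDG_SERVICE_KEY  0xA5A5A5A5u
#define WDG_WINDOW_MS    10u    /* software must service within this window */
```

Capturing the HSI in a form both teams compile against makes mismatches visible at build time rather than at integration.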

12. System-Level Safety Analysis – FMEA, FTA, and DFA

ISO 26262 Part 4 requires system-level safety analyses to verify that the system architectural design and the technical safety concept adequately address the safety goals. The standard references several analysis methods:

Failure Mode and Effect Analysis (FMEA): A systematic, bottom-up analysis that starts from individual component failure modes and traces their effects through the system hierarchy to determine whether they can lead to safety goal violations. System-level FMEA examines each element in the system architecture, identifies its possible failure modes, assesses the effect of each failure on the system’s safety functions, and evaluates whether existing safety mechanisms provide adequate detection and mitigation. FMEA is particularly useful for identifying single-point failures and ensuring that all safety-critical failure paths have been addressed by safety mechanisms.

Fault Tree Analysis (FTA): A top-down, deductive analysis that starts from an undesired top-level event (typically the violation of a safety goal) and systematically identifies all possible combinations of lower-level faults that could cause it. FTA uses Boolean logic gates (AND, OR) to model fault propagation. It is particularly powerful for assessing the effectiveness of redundant architectures – an AND gate represents a situation where multiple independent faults must coincide to violate the safety goal, demonstrating the protection provided by redundancy. FTA can also be quantified by assigning failure rate data to the basic events, enabling the calculation of the top-event probability for comparison with the PMHF target.
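The quantification of a fault tree can be sketched in a few lines of C, assuming independent basic events. The tree and all probabilities below are illustrative assumptions (not values from the standard): the safety goal is violated if the primary torque path AND its monitor both fail (an AND gate modeling redundancy), OR a common power fault defeats both at once.

```c
/* Standard gate formulas for independent basic events. */
double gate_and(double a, double b) { return a * b; }
double gate_or(double a, double b)  { return 1.0 - (1.0 - a) * (1.0 - b); }

/* Illustrative top-event computation (per-hour probabilities). */
double eps_top_event_probability(void)
{
    double primary_fault = 1.0e-5;  /* assumed basic-event probabilities */
    double monitor_fault = 1.0e-4;
    double power_fault   = 1.0e-9;

    double undetected = gate_and(primary_fault, monitor_fault); /* 1e-9 */
    return gate_or(undetected, power_fault);                    /* ~2e-9 */
}
```

The AND gate is what makes the redundant architecture pay off: the combined probability (1e-9) is far below either basic event alone, and the result can be compared directly against a PMHF target.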

Dependent Failure Analysis (DFA): Required by Part 9, DFA examines potential dependent failures – common cause failures (a single root cause affecting multiple independent elements), cascading failures (the failure of one element causing the failure of another), and common mode failures (different elements failing in the same way due to a shared external influence). DFA is critical for validating the independence assumptions that underpin redundant architectures and ASIL decomposition.

The choice and rigor of analysis methods depend on the ASIL. For ASIL D, both inductive (FMEA) and deductive (FTA) methods are highly recommended. For lower ASILs, the requirements are less stringent, but at least one systematic analysis method should be applied. The results of these analyses may trigger revisions to the system architecture, the addition of safety mechanisms, or refinements to the TSRs.

13. Specification of Hardware Metric Targets (SPFM, LFM, PMHF)

A critical output of Part 4 is the specification of target values for hardware architectural metrics – the quantitative targets that the hardware design (Part 5) must achieve. These targets are derived from the safety goals and their ASILs and include the Single-Point Fault Metric (SPFM) target (≥99% for ASIL D, ≥97% for ASIL C, ≥90% for ASIL B), the Latent Fault Metric (LFM) target (≥90% for ASIL D, ≥80% for ASIL C, ≥60% for ASIL B), and the Probabilistic Metric for random Hardware Failures (PMHF) target (<10⁻⁸/h for ASIL D, <10⁻⁷/h for ASIL C and B).

At the system level, these targets are defined for the overall item and may be further decomposed and allocated to subsystems or individual components. For example, if the item’s overall PMHF budget is 10⁻⁸/h for an ASIL D safety goal, and the system consists of three major subsystems (ECU, motor drive, and sensor interface), the PMHF budget must be distributed among them such that the sum of their individual contributions does not exceed the total target. This budget allocation is a key systems engineering activity that requires careful coordination between the system safety team and the hardware development teams.
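The budget-allocation arithmetic above is simple but easy to get wrong when subsystems are designed independently. A minimal sketch of the check, with illustrative contribution figures (not values from the standard):

```c
/* Item-level PMHF target for an assumed ASIL D safety goal, in 1/h. */
#define PMHF_TARGET_PER_HOUR  1.0e-8

typedef struct {
    double ecu;            /* ECU contribution, 1/h */
    double motor_drive;    /* motor drive contribution, 1/h */
    double sensor_iface;   /* sensor interface contribution, 1/h */
} pmhf_allocation_t;

/* Returns 1 if the sum of subsystem contributions respects the item
 * budget, 0 otherwise. */
int pmhf_budget_ok(const pmhf_allocation_t *a)
{
    double total = a->ecu + a->motor_drive + a->sensor_iface;
    return total <= PMHF_TARGET_PER_HOUR;
}
```

For example, allocations of 4.0e-9, 3.5e-9, and 2.0e-9 per hour sum to 9.5e-9/h and fit within the 1e-8/h target, leaving a small margin for late design changes.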

14. System Integration and Testing (Clause 7)

System integration and testing is the right side of the V-model at the system level. After hardware (Part 5) and software (Part 6) have been developed and individually tested, they are brought together in the integrated system and tested against the technical safety requirements defined in the technical safety concept.

The integration process is typically performed in stages. Hardware-software integration combines the developed software with the hardware platform on which it will execute, verifying that the software correctly interfaces with the hardware as specified in the HSI. System integration combines all system elements (ECU, sensors, actuators, communication interfaces) into the complete system, verifying that all elements interact correctly. Item integration integrates the complete system into the vehicle (or a vehicle-representative test environment), verifying that the item correctly interfaces with other vehicle systems and the vehicle-level environment.

15. Integration Testing Strategy and Methods

The integration test specification must define the test cases, test procedures, acceptance criteria, and the test environment for each level of integration. The standard recommends several testing methods, with the required rigor depending on the ASIL:

Requirements-based testing: Deriving test cases directly from the TSRs to verify that each requirement is met. This is mandatory at all ASIL levels and forms the core of integration testing.

Fault injection testing: Deliberately introducing faults into the system (through hardware fault injection, software fault simulation, or communication error injection) to verify that safety mechanisms detect the faults and the system transitions to the correct safe state within the specified time. Fault injection is highly recommended for ASIL C and D and is one of the most powerful methods for validating the effectiveness of the safety architecture.

Back-to-back testing: Comparing the behavior of the integrated system against a reference model (such as a simulation model used during development) to identify discrepancies. This is recommended for higher ASIL levels.

Stress testing and robustness testing: Testing the system under boundary conditions, overload conditions, and environmental extremes (temperature, voltage, electromagnetic interference) to verify that it maintains functional safety under adverse conditions.

All integration test results must be documented in a system integration test report, with clear traceability between each test case and the TSR it verifies. Any failures or anomalies discovered during integration testing must be analyzed, documented, and resolved before proceeding to safety validation.
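The shape of a fault injection test can be illustrated in C. The sketch below is self-contained and hypothetical: a stub monitor stands in for the real diagnostic software, the test forces the sensor input to a railed-high ADC value, and it counts the 1 ms cycles until the safe state is requested, to be checked against an assumed 10 ms detection budget.

```c
#include <stdbool.h>
#include <stdint.h>

bool safe_state_requested = false;
uint8_t bad_reads = 0u;

/* Stub monitor (stand-in for the real diagnostic software): range check
 * with a two-sample debounce, executed once per 1 ms cycle. */
void monitor_step(uint16_t sensor_counts)
{
    if (sensor_counts < 205u || sensor_counts > 3890u) {
        if (++bad_reads >= 2u) safe_state_requested = true;
    } else {
        bad_reads = 0u;
    }
}

/* Fault injection test: inject a stuck-at-high sensor value and return
 * the number of cycles until the safe state is requested, or -1 if the
 * 10-cycle (10 ms) budget is exceeded. */
int cycles_to_safe_state(void)
{
    int cycle = 0;
    while (!safe_state_requested && cycle < 10) {
        monitor_step(4095u);   /* injected fault: ADC railed high */
        cycle++;
    }
    return safe_state_requested ? cycle : -1;
}
```

Real fault injection campaigns do this at the hardware pin, bus, or memory level rather than in a stub, but the acceptance criterion is the same: the safe state must be reached within the specified detection time.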

16. Safety Validation (Clause 8) – Vehicle-Level Evidence

Safety validation is the final, highest-level verification activity in Part 4. Unlike integration testing (which verifies the system against the TSRs), safety validation confirms that the original safety goals are achieved by the implemented item in its actual or representative vehicle-level operational environment. Validation answers the ultimate question: “Does the vehicle, with this item installed, achieve an absence of unreasonable risk from malfunctioning E/E systems?”

Safety validation is typically performed at the vehicle level – on a prototype vehicle, a test mule, or in a hardware-in-the-loop (HIL) environment that faithfully represents the vehicle-level context. The validation considers the effectiveness of the safety concept under realistic conditions, the robustness of the item against real-world environmental conditions and operational scenarios, the effectiveness of safety mechanisms in detecting and controlling faults, the correct transition to and maintenance of safe states, and the adequacy of the driver warning and degradation strategy (including the driver’s ability to perceive warnings and take appropriate action).

17. Validation Methods and Acceptance Criteria

ISO 26262-4:2018, Clause 8 specifies that the safety validation shall include validation of the safety goals, evaluation of the effectiveness of safety measures in controlling random hardware failures and systematic failures, and assessment of the item’s behavior at the vehicle level under normal and fault conditions.

Common validation methods include vehicle-level fault injection (injecting faults at the system interfaces and observing the vehicle-level response), drive testing (operating the vehicle under representative driving conditions to confirm correct behavior), HIL simulation with vehicle model (using a hardware-in-the-loop simulator with a validated vehicle dynamics model to test scenarios that cannot safely be tested on a real vehicle), and expert assessment (engineering judgment by qualified experts evaluating the safety case evidence).

The results are documented in a safety validation report, which provides the final evidence that the safety goals are achieved. This report is a critical input to the safety case and supports the release for production decision (Part 2).

18. Key Work Products of Part 4

Part 4 produces the following essential work products: the technical safety concept (including all TSRs, system architecture, safety mechanism specifications, and diagnostic architecture), the system architectural design specification, the hardware-software interface (HSI) specification, the specification of hardware metric targets (SPFM, LFM, PMHF target values and their allocation), system-level safety analysis reports (FMEA, FTA, DFA results), the system integration test specification and report, the item integration test specification and report, the safety validation specification and report, and verification reports for all above work products. These work products collectively form the system-level evidence in the safety case.

19. Part 4 in the V-Model – Connecting Left and Right Sides

Part 4 occupies a unique position in the ISO 26262 V-model because it spans both sides of the “V.” On the left (design) side, it defines the system architecture and TSRs that feed into hardware design (Part 5) and software design (Part 6). On the right (verification) side, it defines the system integration testing and safety validation that verify the outputs of Parts 5 and 6 when they are brought together.

The V-model traceability at the Part 4 level works as follows: each TSR defined during system design must have a corresponding integration test case that verifies it. Each safety goal from Part 3 must have a corresponding validation test or assessment in the safety validation. This end-to-end traceability – from safety goal through TSR to test case – is fundamental to demonstrating that the safety lifecycle has been correctly executed.

Part 4 also provides the critical handoff specifications to Parts 5 and 6: the hardware safety requirements (derived from TSRs allocated to hardware elements) become the input to Part 5, the software safety requirements (derived from TSRs allocated to software elements) become the input to Part 6, and the HSI specification serves as the agreed contract between hardware and software development teams.

20. Common Mistakes and How to Avoid Them

Mistake 1: Incomplete HSI specification. The HSI is frequently underspecified, leading to integration problems when hardware and software teams discover mismatches late in development. Invest the time to make the HSI comprehensive — include all hardware resources, timing constraints, diagnostic features, and configuration parameters that the software depends on.

Mistake 2: TSRs that are not verifiable. Technical safety requirements must be written in a way that enables objective testing. A TSR like “the system shall respond quickly to faults” is not verifiable. A TSR like “the system shall detect motor overcurrent conditions exceeding 25A and initiate safe state transition within 5 ms” is verifiable. Use specific, measurable parameters.

Mistake 3: Skipping deductive analysis (FTA). Many teams rely solely on FMEA (inductive analysis) and neglect FTA (deductive analysis). Both approaches complement each other – FMEA identifies what each failure does to the system, while FTA identifies all the ways a safety goal can be violated. For ASIL C and D, both are highly recommended.

Mistake 4: Insufficient fault injection testing. Integration testing based solely on requirements-based testing (verifying normal behavior) is insufficient. Safety mechanisms must be validated through fault injection – deliberately introducing faults and verifying the system’s response. Without fault injection, there is no evidence that the safety mechanisms actually work when they are needed.

Mistake 5: Not allocating PMHF budgets to subsystems early enough. If the system-level PMHF budget is not allocated to subsystems at the beginning of hardware development, the hardware teams may independently design components that individually meet “reasonable” targets but collectively exceed the system-level budget. Allocate PMHF budgets as part of the TSR specification, before detailed hardware design begins.

Mistake 6: Treating safety validation as a formality. Safety validation at the vehicle level is not a rubber-stamp exercise. It must genuinely confirm that the safety goals are met under realistic conditions. Use real vehicles or high-fidelity HIL environments, perform vehicle-level fault injection, and involve engineers with deep domain expertise in the validation team.

21. Frequently Asked Questions

Q1: What is the difference between the functional safety concept (Part 3) and the technical safety concept (Part 4)?

The functional safety concept describes the safety approach at a functional, implementation-independent level — it defines what must be achieved without specifying concrete technical solutions. The technical safety concept describes the concrete technical implementation – it specifies the exact hardware and software mechanisms, timing parameters, diagnostic coverage, and architectural design that will achieve the functional safety requirements. The functional safety concept is the “what”; the technical safety concept is the “how.”

Q2: Is the HSI specification a hardware document or a software document?

Neither – it is a system-level document that serves as the interface contract between hardware and software development. Both teams contribute to and agree on its contents. The HSI is owned by the system engineering team and is maintained under configuration management as a Part 4 work product.

Q3: When should system-level FMEA and FTA be performed?

They should be performed during the development of the technical safety concept and system architectural design — before detailed hardware and software design begins. The results inform and validate the architectural design decisions. If the analysis reveals inadequate safety coverage, the architecture must be revised. Performing these analyses too late (after detailed design is complete) defeats their purpose.

Q4: Can integration testing be performed on a HIL simulator instead of real hardware?

Yes, HIL testing is an accepted and widely used method for system integration testing. However, the HIL environment must faithfully represent the real hardware behavior, particularly for safety-critical aspects like timing, analog signal characteristics, and fault responses. Some testing – particularly vehicle-level safety validation – may require real vehicle testing to complement HIL results.

Q5: What happens if safety validation reveals that a safety goal is not met?

If safety validation identifies that a safety goal is not achieved, development must iterate: the root cause must be identified, the technical safety concept or system architecture must be revised, the changes must be implemented and retested, and validation must be repeated. The release for production cannot proceed until all safety goals are demonstrably achieved.

Q6: How does Part 4 interact with AUTOSAR?

AUTOSAR provides a standardized software architecture that directly supports many Part 4 requirements. The AUTOSAR layered architecture facilitates freedom from interference between software components of different ASILs. AUTOSAR communication mechanisms (like E2E protection) support safety-related data exchange. The AUTOSAR BSW (Basic Software) provides infrastructure for diagnostics, watchdog supervision, and safe state management. The technical safety concept should leverage AUTOSAR capabilities where applicable. At PiEmbSysTech, we cover AUTOSAR architecture in detail.

22. Conclusion

ISO 26262 Part 4 – Product Development at the System Level is the engineering engine of the functional safety lifecycle. It transforms abstract safety concepts into concrete system designs, defines the critical interface between hardware and software through the HSI, employs systematic safety analyses (FMEA, FTA, DFA) to validate the architecture, specifies quantitative hardware metric targets, and verifies the entire system through integration testing and vehicle-level safety validation.

Mastering Part 4 requires a systems engineering mindset – the ability to see the big picture while managing the details, to coordinate between hardware and software teams, and to ensure that every safety goal traces through the architecture to a verifiable test. The technical safety concept, the HSI specification, and the system-level safety analysis reports are among the most technically demanding and valuable work products in the entire ISO 26262 standard.

This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. Next in our series: ISO 26262 Part 5 – Product Development at the Hardware Level, where we will dive deep into hardware architectural design, FMEDA analysis, SPFM and LFM calculation, PMHF evaluation, and hardware integration testing. Be sure to review our earlier posts on Part 1Part 2, and Part 3.

Stay safe. Stay systematic. Keep engineering the future.

– The PiEmbSysTech Team


Discover more from PiEmbSysTech - Embedded Systems & VLSI Lab

Subscribe to get the latest posts sent to your email.

Leave a ReplyCancel reply

Discover more from PiEmbSysTech - Embedded Systems & VLSI Lab

Subscribe now to keep reading and get access to the full archive.

Continue reading

Exit mobile version