ISO 26262 Part 5 Hardware Development: SPFM, LFM, PMHF, FMEDA & Diagnostic Coverage Explained

Hello, hardware safety engineers and automotive electronics professionals! Welcome to the fifth deep-dive post in our comprehensive ISO 26262 series at PiEmbSysTech. In this article, we will explore ISO 26262 Part 5 Hardware Development at the Hardware Level, the part that deals with the design, analysis, and verification of safety-related hardware components.

[Infographic: ISO 26262 Part 5 hardware development – SPFM, LFM, PMHF metrics, FMEDA analysis, diagnostic coverage, and fault classification in automotive functional safety]

Part 5 is where functional safety becomes deeply quantitative. While Parts 3 and 4 dealt with conceptual and system-level safety, Part 5 requires hardware engineers to perform detailed Failure Mode Effects and Diagnostic Analysis (FMEDA), calculate hardware architectural metrics – the Single-Point Fault Metric (SPFM), the Latent Fault Metric (LFM), and the Probabilistic Metric for random Hardware Failures (PMHF) – classify every fault in the hardware architecture, and demonstrate through rigorous analysis that random hardware failures are adequately controlled.

For hardware engineers accustomed to reliability analysis and component selection, Part 5 introduces a safety-specific perspective that goes beyond traditional reliability engineering. It is not enough for the hardware to be reliable – it must be demonstrably safe, with quantified evidence that safety mechanisms provide adequate coverage against random hardware failures. Let us explore every aspect in depth.

ISO 26262 Part 5 Hardware Development Table of Contents

  1. What is ISO 26262 Part 5 and What Does It Cover?
  2. Structure of Part 5 – The Key Clauses
  3. Hardware Safety Requirements Specification (Clause 6)
  4. Hardware Design and Safety Mechanism Design (Clause 7)
  5. Hardware Design Principles for Functional Safety
  6. Common Hardware Safety Mechanisms
  7. Fault Classification in Hardware – The Complete Picture
  8. Failure Rates, FIT Rates, and Failure Mode Distributions
  9. Diagnostic Coverage – Low, Medium, and High
  10. FMEDA – The Core Analysis Method for Part 5
  11. FMEDA Process: Step-by-Step
  12. Single-Point Fault Metric (SPFM) – Formula and Calculation
  13. Latent Fault Metric (LFM) – Formula and Calculation
  14. PMHF – Probabilistic Metric for random Hardware Failures
  15. Hardware Metric Target Values by ASIL Level
  16. Worked Example: Simplified FMEDA and Metric Calculation
  17. Hardware Element Classes – Class I, II, and III
  18. Hardware Integration and Testing (Clause 10)
  19. Fault Injection Testing for Hardware
  20. Key Work Products of Part 5
  21. Common Mistakes and How to Avoid Them
  22. Frequently Asked Questions
  23. Conclusion

1. What is ISO 26262 Part 5 and What Does It Cover?

ISO 26262 Part 5: Product Development at the Hardware Level specifies the requirements for the development of hardware elements that implement safety-related functions in automotive E/E systems. It covers the entire hardware development process from specifying hardware safety requirements through hardware design, safety analysis, evaluation of hardware architectural metrics, evaluation of safety goal violations due to random hardware failures, and hardware integration testing.

Part 5 addresses the fundamental challenge of random hardware failures – failures that occur unpredictably due to physical degradation of semiconductor devices, passive components, connectors, and other hardware elements. Unlike systematic failures (which are addressed through rigorous development processes), random hardware failures cannot be prevented through process quality alone. They must be detected, controlled, or mitigated through safety mechanisms built into the hardware architecture, and the effectiveness of these mechanisms must be quantitatively demonstrated through the hardware architectural metrics.

Part 5 receives its primary inputs from Part 4 – specifically the hardware safety requirements derived from the technical safety requirements, the HSI specification, and the hardware metric target values (SPFM, LFM, PMHF targets). Its outputs feed back to Part 4 for system integration testing and also inform the overall safety case.

2. Structure of Part 5 – The Key Clauses

ISO 26262-5:2018 contains the following key technical clauses:

Clause 6: Specification of Hardware Safety Requirements – Derives and specifies the hardware-specific safety requirements from the TSRs allocated to hardware in Part 4.

Clause 7: Hardware Design – Specifies requirements for the hardware architectural design and detailed design, including the design of safety mechanisms.

Clause 8: Evaluation of the Hardware Architectural Metrics – Specifies the SPFM and LFM calculation requirements.

Clause 9: Evaluation of Safety Goal Violations due to Random Hardware Failures – Specifies the PMHF evaluation requirements.

Clause 10: Hardware Integration and Testing – Specifies the integration and test requirements for hardware elements.

3. Hardware Safety Requirements Specification (Clause 6)

The hardware safety requirements (HSRs) are derived from the technical safety requirements (TSRs) allocated to hardware elements in the technical safety concept (Part 4). HSRs refine the system-level TSRs to include hardware-specific design constraints and implementation details.

The HSR specification includes the safety mechanisms and their attributes (such as the diagnostic test interval, the required diagnostic coverage, and the fault handling time), the non-safety requirements that may affect safety (such as intended functionality requirements), the specification of target values for hardware architectural metrics (SPFM, LFM, and PMHF targets inherited from Part 4), the hardware design verification criteria (including environmental conditions, operational profile, and derating guidelines), and the identification of safety-related special characteristics that must be controlled during production.

Each HSR inherits the ASIL of the TSR from which it is derived. The hardware design must satisfy all HSRs while also achieving the quantitative hardware metric targets.

4. Hardware Design and Safety Mechanism Design (Clause 7)

Clause 7 addresses the hardware architectural design and the detailed hardware design. The hardware architecture must implement all hardware safety requirements, including the required safety mechanisms, and must be designed to facilitate the achievement of the hardware metric targets.

The hardware design process is iterative: the initial architecture is analyzed through FMEDA, the metrics are calculated, and if the targets are not met, the architecture is revised – typically by adding or improving safety mechanisms, increasing diagnostic coverage, or modifying the architecture to reduce single-point and latent fault paths. This iterative process continues until the hardware architecture demonstrably meets all metric targets.

5. Hardware Design Principles for Functional Safety

ISO 26262 Part 5 recommends several hardware design principles that support functional safety. Adequate diagnostics means that every safety-relevant hardware element should have associated diagnostic mechanisms that can detect its failure modes with sufficient coverage. Redundancy where needed means that for higher ASIL levels (especially ASIL D), critical hardware functions may need redundant implementations (dual sensors, dual processing channels, lockstep processors) to ensure that a single hardware fault cannot violate a safety goal. Derating means operating components well within their absolute maximum ratings to reduce the probability of random failures. Avoidance of single-point failures means designing the architecture so that no single hardware fault can directly violate a safety goal without being detected by a safety mechanism. Robustness against environmental stress means designing for the full range of automotive environmental conditions including temperature cycling, vibration, humidity, electromagnetic interference, and supply voltage variations.

6. Common Hardware Safety Mechanisms

Part 5 requires the implementation of safety mechanisms that detect and control random hardware failures. Common hardware-level safety mechanisms include voltage monitoring circuits (detecting overvoltage, undervoltage, or supply anomalies), current monitoring and overcurrent protection (detecting actuator faults or short circuits), temperature monitoring (detecting overtemperature conditions), watchdog timers (independent hardware timers that trigger safe state if software fails to respond), memory ECC (Error Correcting Code) (detecting and correcting single-bit errors in RAM and Flash memory), memory BIST (Built-In Self-Test) (startup and periodic tests of memory integrity), lockstep CPU comparison (running two identical CPU cores in parallel and comparing outputs to detect processing errors), ADC monitoring and cross-checking (verifying sensor readings through range checks, redundant readings, or comparison with independent sensors), output feedback monitoring (reading back the actual output state and comparing with the commanded state), and clock monitoring and plausibility (verifying that system clocks operate within their specified tolerance).

Each safety mechanism contributes to the overall diagnostic coverage of the hardware architecture. The more failure modes that are covered by safety mechanisms (and the higher the coverage of each mechanism), the better the SPFM, LFM, and PMHF metrics will be.
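As a toy illustration of how two of these mechanisms – a range check and a redundant-channel cross-check – might combine in monitoring software. The function name, thresholds, and units are our own choices for the example, not prescribed by the standard:

```python
def sensor_plausible(primary: float, redundant: float,
                     lo: float, hi: float, max_delta: float) -> bool:
    """Range check plus redundant-channel cross-check (simplified sketch).

    Returns False (fault detected) if either reading is out of range or
    the two channels disagree by more than max_delta.
    """
    in_range = lo <= primary <= hi and lo <= redundant <= hi
    agree = abs(primary - redundant) <= max_delta
    return in_range and agree

# Hypothetical torque-sensor readings in N·m:
print(sensor_plausible(3.1, 3.2, lo=-10, hi=10, max_delta=0.5))  # True – plausible
print(sensor_plausible(3.1, 7.9, lo=-10, hi=10, max_delta=0.5))  # False – channels disagree
```

In a real ECU, a failed plausibility check would typically latch a fault, signal the safety supervisor, and contribute to the diagnostic coverage claimed for the sensor's failure modes in the FMEDA.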

7. Fault Classification in Hardware – The Complete Picture

Understanding fault classification is essential for performing FMEDA and calculating hardware metrics. As detailed in our Part 1 vocabulary article, ISO 26262 classifies hardware faults as follows:

Safe Faults (λS): Faults that do not significantly increase the probability of violating a safety goal. These faults are benign from a safety perspective and do not require safety mechanisms. Example: a fault in a hardware register used only for non-safety diagnostic logging.

Single-Point Faults (λSPF): Faults in safety-related elements that are NOT covered by any safety mechanism and that can directly violate a safety goal on their own. These are the most dangerous faults. The entire SPFM metric is designed to minimize them.

Residual Faults (λRF): The portion of faults in safety-related elements that are partially covered by a safety mechanism but where the coverage is not 100%. The uncovered portion constitutes residual faults. If a safety mechanism has 90% diagnostic coverage for a failure mode, the remaining 10% is the residual fault rate.

Detected Multiple-Point Faults (λMPF,D): Faults in safety-related elements (including safety mechanism elements) that are covered by safety mechanisms — meaning they are detected before they can combine with another fault to violate a safety goal.

Perceived Multiple-Point Faults (λMPF,P): Faults that are perceived by the driver (through warning indicators or noticeable degradation) within the multiple-point fault detection interval.

Latent Multiple-Point Faults (λMPF,L): Faults that are NOT detected by any safety mechanism and NOT perceived by the driver. These faults remain hidden and can silently degrade the system’s safety capability.

The total failure rate of safety-related hardware is: λ = λS + λSPF + λRF + λMPF,D + λMPF,P + λMPF,L
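The six-way split above can be captured in a small data structure. This is an illustrative Python sketch – the class and field names are ours, not from the standard:

```python
from dataclasses import dataclass

@dataclass
class ElementFailureRates:
    """Failure-rate split (in FIT) for one safety-related hardware element."""
    safe: float    # λS     – safe faults
    spf: float     # λSPF   – single-point faults
    rf: float      # λRF    – residual faults
    mpf_d: float   # λMPF,D – detected multiple-point faults
    mpf_p: float   # λMPF,P – perceived multiple-point faults
    mpf_l: float   # λMPF,L – latent multiple-point faults

    @property
    def total(self) -> float:
        """λ = λS + λSPF + λRF + λMPF,D + λMPF,P + λMPF,L"""
        return (self.safe + self.spf + self.rf
                + self.mpf_d + self.mpf_p + self.mpf_l)

# A hypothetical sensor IC with 20 FIT total:
sensor = ElementFailureRates(safe=5, spf=0, rf=0.3, mpf_d=13.7, mpf_p=0, mpf_l=1)
print(sensor.total)  # 20.0 FIT
```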

8. Failure Rates, FIT Rates, and Failure Mode Distributions

Failure rates are the fundamental quantitative input to all hardware metric calculations. In ISO 26262, failure rates are assumed to be constant over the device’s operational lifetime and are expressed in units of FIT (Failures In Time), where 1 FIT = 1 failure per 10⁹ device-hours of operation. Equivalently, 1 FIT corresponds to a mean time to failure (MTTF) of approximately 114,155 years.
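The FIT-to-MTTF arithmetic above is easy to verify with a couple of helper functions (a minimal sketch, using the common 8760 hours-per-year convention):

```python
HOURS_PER_YEAR = 8760  # 24 h × 365 days, the usual rule-of-thumb convention

def fit_to_failures_per_hour(fit: float) -> float:
    """1 FIT = 1 failure per 10^9 device-hours."""
    return fit * 1e-9

def fit_to_mttf_years(fit: float) -> float:
    """MTTF in years for a constant failure rate given in FIT."""
    return 1e9 / (fit * HOURS_PER_YEAR)

print(round(fit_to_mttf_years(1)))   # 114155 years for a 1 FIT device
print(fit_to_failures_per_hour(10))  # 1e-08 per hour – the ASIL D PMHF target
```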

Failure rate data can be obtained from several sources: industry standard reliability databases (such as IEC TR 62380, SN 29500, or MIL-HDBK-217), semiconductor manufacturer datasheets and reliability reports, field failure data from predecessor products, and accelerated life testing results. The choice of failure rate source should be justified and documented.

Failure mode distributions describe how the total failure rate of a component is distributed among its possible failure modes. For example, a resistor might have failure modes of “open” (70% of total failure rate) and “short” (30% of total failure rate). For integrated circuits and microcontrollers, failure mode distributions are more complex, often involving dozens of failure modes distributed across functional blocks (CPU, memory, ADC, communication peripherals, etc.). Semiconductor manufacturers that develop products for ISO 26262 compliance typically provide detailed FMEDA data including failure mode distributions for their devices as part of their safety documentation package.
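Applying a failure mode distribution is simple bookkeeping: each mode's failure rate is the component's total rate times the mode's fraction. A minimal sketch (the function name is ours):

```python
def split_by_modes(total_fit: float, distribution: dict) -> dict:
    """Distribute a component's total failure rate over its failure modes.

    distribution maps mode name -> fraction of the total failure rate;
    the fractions must sum to 1.
    """
    if abs(sum(distribution.values()) - 1.0) > 1e-9:
        raise ValueError("failure mode fractions must sum to 1")
    return {mode: total_fit * frac for mode, frac in distribution.items()}

# The resistor example from the text: a hypothetical 2 FIT resistor,
# 70 % open and 30 % short.
print(split_by_modes(2.0, {"open": 0.70, "short": 0.30}))
# {'open': 1.4, 'short': 0.6} (up to floating-point rounding)
```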

9. Diagnostic Coverage – Low, Medium, and High

Diagnostic coverage (DC) is the proportion of the failure rate of a failure mode that is detected or controlled by the implemented safety mechanism. ISO 26262 classifies diagnostic coverage into three categories:

Low DC: ≥60% – The safety mechanism detects at least 60% of the fault’s failure rate. Examples include basic range checks on sensor inputs or simple plausibility checks that catch only the most obvious failures.

Medium DC: ≥90% – The safety mechanism detects at least 90% of the fault’s failure rate. Examples include comparison of redundant sensor readings, CRC checks on communication data, or voltage monitoring with moderate accuracy.

High DC: ≥99% – The safety mechanism detects at least 99% of the fault’s failure rate. Examples include lockstep CPU comparison, complete memory ECC with periodic scrubbing, or full output readback with precise comparison. Achieving high DC typically requires dedicated safety mechanisms specifically designed for fault detection.

The diagnostic coverage achieved for each failure mode is a critical input to the FMEDA and directly affects the SPFM and LFM calculations. Higher diagnostic coverage means fewer residual faults (improving SPFM) and fewer latent faults (improving LFM).
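The relationship between coverage and residual rate is a one-line calculation: the uncovered portion of a failure mode is its rate times (1 − DC). A minimal sketch (the function name is ours):

```python
def residual_fit(lambda_mode_fit: float, diagnostic_coverage: float) -> float:
    """Uncovered (residual) portion of a failure mode's rate, in FIT.

    diagnostic_coverage is a fraction, e.g. 0.90 for 'medium' DC.
    """
    if not 0.0 <= diagnostic_coverage <= 1.0:
        raise ValueError("diagnostic coverage must be between 0 and 1")
    return lambda_mode_fit * (1.0 - diagnostic_coverage)

# A 10 FIT failure mode covered by a medium-DC (90 %) mechanism leaves
# 1 FIT as residual; a high-DC (99 %) mechanism leaves only 0.1 FIT.
print(round(residual_fit(10.0, 0.90), 3))  # 1.0
print(round(residual_fit(10.0, 0.99), 3))  # 0.1
```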

10. FMEDA – The Core Analysis Method for Part 5

Failure Mode Effects and Diagnostic Analysis (FMEDA) is the primary analysis method used to evaluate the hardware architectural metrics in ISO 26262 Part 5. FMEDA extends traditional FMEA by adding two critical dimensions: failure rate data (quantitative failure probability for each component and failure mode) and diagnostic coverage assessment (the effectiveness of safety mechanisms in detecting each failure mode).

The output of an FMEDA is a comprehensive table that, for every hardware element in the safety-related architecture, lists each possible failure mode, the failure rate associated with that mode, the effect of the failure on safety goals, the classification of the fault (safe, single-point, residual, detected multiple-point, perceived multiple-point, or latent multiple-point), the safety mechanism(s) that cover the failure mode, and the diagnostic coverage achieved. From this table, the hardware architectural metrics (SPFM, LFM) and the PMHF can be calculated directly.

11. FMEDA Process: Step-by-Step

The FMEDA process follows these systematic steps:

Step 1 – List all hardware elements: Identify every hardware component (down to the hardware part level) in the safety-related portion of the architecture. This includes microcontrollers, sensors, power management ICs, passive components, connectors, and PCB traces.

Step 2 – Assign failure rates: For each hardware element, obtain the total failure rate (in FIT) from reliability databases, manufacturer data, or field experience.

Step 3 – Identify failure modes and distributions: For each hardware element, list all possible failure modes and their percentage distribution of the total failure rate.

Step 4 – Determine the effect of each failure mode on safety goals: For each failure mode, analyze whether it can contribute to the violation of a safety goal. If it cannot (because it only affects non-safety functions), classify it as a safe fault.

Step 5 – Identify safety mechanisms and diagnostic coverage: For each safety-relevant failure mode, identify which safety mechanism(s) detect or control it, and assign the diagnostic coverage (low, medium, or high – or a specific percentage value) based on the safety mechanism’s capability.

Step 6 – Classify each fault: Based on the effect on safety goals and the diagnostic coverage, classify each failure mode contribution into the appropriate fault category (safe, single-point, residual, detected/perceived/latent multiple-point).

Step 7 – Calculate the metrics: Sum the failure rates in each category across all hardware elements and calculate SPFM, LFM, and PMHF using the standard formulas.

Step 8 – Compare with targets: Compare the calculated metrics with the ASIL-dependent target values. If any target is not met, iterate the hardware design – add safety mechanisms, increase diagnostic coverage, or modify the architecture — and repeat the analysis.
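The classification logic of Steps 4–6 can be sketched in code. This is a deliberately simplified illustration – perceived multiple-point faults are omitted, and the function and category names are ours, not the standard’s:

```python
def classify_failure_mode(lambda_fit: float, safety_relevant: bool,
                          violates_goal_directly: bool, dc: float) -> dict:
    """Split one failure mode's rate (FIT) into FMEDA fault categories.

    Simplified sketch of FMEDA Steps 4-6; perceived multiple-point
    faults are omitted for brevity.
    """
    zero = {"safe": 0.0, "spf": 0.0, "rf": 0.0, "mpf_d": 0.0, "mpf_l": 0.0}
    if not safety_relevant:
        return {**zero, "safe": lambda_fit}            # Step 4: safe fault
    covered = lambda_fit * dc
    uncovered = lambda_fit - covered
    if violates_goal_directly:
        if dc == 0.0:                                  # no safety mechanism at all
            return {**zero, "spf": uncovered}          # single-point fault
        return {**zero, "rf": uncovered, "mpf_d": covered}  # residual + detected
    # Needs a second independent fault to violate the goal: multiple-point;
    # the detected portion is MPF,D and the undetected portion is latent.
    return {**zero, "mpf_d": covered, "mpf_l": uncovered}

# A 10 FIT mode that could directly violate the goal, covered at 90 % DC:
# 9 FIT are classified as detected MPF, 1 FIT remains residual.
```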

12. Single-Point Fault Metric (SPFM) – Formula and Calculation

The Single-Point Fault Metric (SPFM) quantifies the effectiveness of the hardware architecture against single-point faults and residual faults – the faults that can individually violate a safety goal without requiring a second independent fault. A high SPFM means that the architecture has very few uncovered safety-critical fault paths.

The SPFM formula is:

SPFM = 1 − ( ΣλSPF + ΣλRF ) / Σλsafety-related

Where: ΣλSPF is the sum of all single-point fault rates across all safety-related hardware elements. ΣλRF is the sum of all residual fault rates (the uncovered portions of partially covered faults). Σλsafety-related is the total failure rate of all safety-related hardware elements (excluding safe faults).

In simpler terms, SPFM measures the proportion of safety-related failures that are covered by safety mechanisms, leaving only a small fraction as uncovered single-point or residual faults. The numerator in the fraction represents the “bad” failures (those not caught by safety mechanisms), and the denominator represents all safety-related failures. Subtracting from 1 gives the percentage of “covered” failures.
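The formula is a one-liner in code; the numbers below are taken from the worked example in section 16:

```python
def spfm(sum_spf: float, sum_rf: float, sum_safety_related: float) -> float:
    """SPFM = 1 - (ΣλSPF + ΣλRF) / Σλ(safety-related), as a fraction."""
    return 1.0 - (sum_spf + sum_rf) / sum_safety_related

# Worked-example inputs: ΣλSPF = 0.2 FIT, ΣλRF = 1.82 FIT, Σλ = 145 FIT
print(round(100 * spfm(0.2, 1.82, 145.0), 2))  # 98.61 – misses the 99 % ASIL D target
```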

13. Latent Fault Metric (LFM) – Formula and Calculation

The Latent Fault Metric (LFM) quantifies the effectiveness of the hardware architecture against latent faults – multiple-point faults that remain undetected by safety mechanisms and unperceived by the driver. Latent faults are dangerous because they silently degrade the safety capability of the system: if a safety mechanism itself has a latent fault, and then the element it monitors also fails, the combination can violate a safety goal.

The LFM formula is:

LFM = 1 − ( ΣλMPF,L ) / ( Σλsafety-related − ΣλSPF − ΣλRF )

Where: ΣλMPF,L is the sum of all latent multiple-point fault rates. The denominator represents all multiple-point faults (total safety-related failures minus single-point and residual faults). LFM measures what proportion of multiple-point faults are detected (not latent).

To improve LFM, the hardware design must include mechanisms to detect faults in safety mechanisms themselves — such as startup BIST (Built-In Self-Test), periodic online diagnostics, and end-of-driving-cycle checks. These “diagnostics of the diagnostics” ensure that safety mechanisms are themselves monitored for failures.
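As with SPFM, the LFM formula translates directly into code; the inputs below are taken from the worked example in section 16:

```python
def lfm(sum_mpf_latent: float, sum_safety_related: float,
        sum_spf: float, sum_rf: float) -> float:
    """LFM = 1 - ΣλMPF,L / (Σλ(safety-related) - ΣλSPF - ΣλRF), as a fraction."""
    return 1.0 - sum_mpf_latent / (sum_safety_related - sum_spf - sum_rf)

# Worked-example inputs: ΣλMPF,L = 8.5 FIT, Σλ = 145 FIT, ΣλSPF = 0.2, ΣλRF = 1.82
print(round(100 * lfm(8.5, 145.0, 0.2, 1.82), 2))  # 94.06 – meets the 90 % ASIL D target
```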

14. PMHF – Probabilistic Metric for random Hardware Failures

The Probabilistic Metric for random Hardware Failures (PMHF) is an absolute metric (expressed in probability per hour, or equivalently in FIT) that evaluates whether the overall residual risk of safety goal violations due to random hardware failures is sufficiently low. Unlike SPFM and LFM (which are relative, dimensionless percentages), PMHF gives an absolute probability that quantifies the actual risk level.

ISO 26262 provides two methods for PMHF evaluation. The first is the probabilistic approach, which uses quantitative methods such as quantified Fault Tree Analysis (FTA) to calculate the overall probability of safety goal violation due to random hardware failures, considering both single-point failures and dual-point (latent + subsequent) failure combinations. The second is the individual evaluation of each fault (cut-set analysis), which individually evaluates each residual fault, single-point fault, and dual-point failure that leads to safety goal violation.

A simplified approximation of PMHF (often used as a conservative first estimate) is:

PMHF ≈ ΣλSPF + ΣλRF + (dual-point contribution)

The dual-point contribution depends on the product of latent fault rates and the exposure duration (the time during which a latent fault could combine with a second fault). The exact calculation requires careful consideration of diagnostic test intervals, multiple-point fault detection intervals, and the operating profile of the vehicle.

For ASIL D, the PMHF target is less than 10⁻⁸ per hour (equivalent to less than 10 FIT). This is an extremely stringent target that typically requires extensive safety mechanisms, high diagnostic coverage, and often redundant hardware architectures to achieve.
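The simplified approximation can be checked numerically. The sketch below mirrors the formula above; the dual-point term is left as an input because it must come from a proper analysis (quantified FTA or cut sets), not a formula this simple:

```python
FIT_PER_HOUR = 1e-9  # 1 FIT = 10^-9 failures per hour

def pmhf_simplified_fit(sum_spf: float, sum_rf: float,
                        dual_point_fit: float = 0.0) -> float:
    """First estimate: PMHF ≈ ΣλSPF + ΣλRF (+ dual-point contribution), in FIT.

    The dual-point term defaults to zero here and must be supplied from a
    separate quantitative analysis for a complete evaluation.
    """
    return sum_spf + sum_rf + dual_point_fit

pmhf_fit = pmhf_simplified_fit(0.2, 1.82)   # worked-example numbers from section 16
print(round(pmhf_fit, 2))                   # 2.02 FIT
print(pmhf_fit * FIT_PER_HOUR < 1e-8)       # True – under the ASIL D 10^-8/h target
```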

15. Hardware Metric Target Values by ASIL Level

Metric | ASIL A      | ASIL B             | ASIL C             | ASIL D
SPFM   | Not defined | ≥ 90%              | ≥ 97%              | ≥ 99%
LFM    | Not defined | ≥ 60%              | ≥ 80%              | ≥ 90%
PMHF   | Not defined | < 10⁻⁷/h (100 FIT) | < 10⁻⁷/h (100 FIT) | < 10⁻⁸/h (10 FIT)

Note: SPFM and LFM evaluation is mandatory for ASIL C and D, and recommended for ASIL B. PMHF evaluation is mandatory for ASIL C and D, and recommended for ASIL B. For ASIL A, no specific hardware metric targets are defined — standard quality processes are considered sufficient.
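A pass/fail check against these targets is a simple lookup; this sketch encodes the target values from the table above (the dictionary layout is ours):

```python
# ASIL-dependent metric targets; ASIL A has no defined hardware metric targets.
TARGETS = {
    "B": {"spfm": 0.90, "lfm": 0.60, "pmhf_fit": 100.0},
    "C": {"spfm": 0.97, "lfm": 0.80, "pmhf_fit": 100.0},
    "D": {"spfm": 0.99, "lfm": 0.90, "pmhf_fit": 10.0},
}

def metrics_pass(asil: str, spfm: float, lfm: float, pmhf_fit: float) -> dict:
    """Compare calculated metrics (fractions, FIT) against the ASIL targets."""
    if asil == "A":
        return {"spfm": True, "lfm": True, "pmhf": True}  # no targets defined
    t = TARGETS[asil]
    return {"spfm": spfm >= t["spfm"], "lfm": lfm >= t["lfm"],
            "pmhf": pmhf_fit < t["pmhf_fit"]}

# The worked example from section 16, evaluated against ASIL D:
print(metrics_pass("D", spfm=0.9861, lfm=0.9406, pmhf_fit=2.02))
# {'spfm': False, 'lfm': True, 'pmhf': True} – only the SPFM target is missed
```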

16. Worked Example: Simplified FMEDA and Metric Calculation

Consider a simplified EPS ECU subsystem with the following safety-related hardware elements contributing to one safety goal (ASIL D):

Element            | λ Total (FIT) | λ Safe | λ SPF | λ RF | λ MPF,D | λ MPF,L
MCU (lockstep)     | 100           | 30     | 0     | 0.70 | 65.30   | 4.0
Torque Sensor IC   | 20            | 5      | 0     | 0.30 | 13.70   | 1.0
Motor Driver IC    | 50            | 10     | 0     | 0.40 | 37.60   | 2.0
Power Supply IC    | 15            | 3      | 0     | 0.12 | 11.38   | 0.5
Passive Components | 15            | 7      | 0.2   | 0.30 | 6.50    | 1.0
TOTALS             | 200           | 55     | 0.2   | 1.82 | 134.48  | 8.5

Calculations:

λsafety-related = λtotal − λsafe = 200 − 55 = 145 FIT

SPFM = 1 − (0.2 + 1.82) / 145 = 1 − 2.02/145 = 1 − 0.0139 = 98.61%

This does NOT meet the ASIL D target of ≥99%. The hardware team would need to improve safety mechanisms for the passive components and reduce residual fault rates to achieve the target. Adding diagnostic coverage to the passive components (e.g., output feedback monitoring) could convert SPF and RF contributions into detected MPF, improving the SPFM.

LFM = 1 − 8.5 / (145 − 0.2 − 1.82) = 1 − 8.5/142.98 = 1 − 0.0594 = 94.06%

This meets the ASIL D target of ≥90%. ✓

PMHF ≈ λSPF + λRF = 0.2 + 1.82 = 2.02 FIT (simplified, excluding dual-point contribution)

This meets the ASIL D target of <10 FIT. ✓ (Though the dual-point contribution must also be evaluated for a complete PMHF.)

This example illustrates how a single metric shortfall (SPFM at 98.61% vs. 99% target) drives design iteration. The hardware team must investigate where the 0.39% gap comes from and add targeted safety mechanisms to close it.
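The whole worked example can be reproduced in a few lines of Python, which makes it easy to explore how adding coverage would close the SPFM gap (the row layout is ours; the numbers are the table values from this section):

```python
# (element, λsafe, λSPF, λRF, λMPF_D, λMPF_L) in FIT, per the example table
rows = [
    ("MCU (lockstep)",     30, 0.0, 0.70, 65.30, 4.0),
    ("Torque Sensor IC",    5, 0.0, 0.30, 13.70, 1.0),
    ("Motor Driver IC",    10, 0.0, 0.40, 37.60, 2.0),
    ("Power Supply IC",     3, 0.0, 0.12, 11.38, 0.5),
    ("Passive Components",  7, 0.2, 0.30,  6.50, 1.0),
]

safe  = sum(r[1] for r in rows)   # 55 FIT
spf   = sum(r[2] for r in rows)   # 0.2 FIT
rf    = sum(r[3] for r in rows)   # 1.82 FIT
mpf_d = sum(r[4] for r in rows)   # 134.48 FIT
mpf_l = sum(r[5] for r in rows)   # 8.5 FIT

total = safe + spf + rf + mpf_d + mpf_l   # 200 FIT
sr    = total - safe                      # 145 FIT safety-related

spfm = 1 - (spf + rf) / sr                # single-point fault metric
lfm  = 1 - mpf_l / (sr - spf - rf)        # latent fault metric
pmhf = spf + rf                           # simplified PMHF, in FIT

print(f"SPFM = {100*spfm:.2f} %  (ASIL D target >= 99 %)")  # 98.61 % – fails
print(f"LFM  = {100*lfm:.2f} %  (ASIL D target >= 90 %)")   # 94.06 % – passes
print(f"PMHF ~ {pmhf:.2f} FIT  (ASIL D target < 10 FIT)")   # 2.02 FIT – passes
```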

17. Hardware Element Classes – Class I, II, and III

ISO 26262 defines three classes of hardware elements based on their complexity and the availability of safety-related documentation:

Class I: Elements with few or no states that can be analyzed from a safety perspective without knowledge of implementation details, and with no internal safety mechanisms. Examples: resistors, capacitors, transistors, simple logic gates, LDO regulators. Class I elements are evaluated as part of the larger system context, not individually.

Class II: Elements with few states analyzable from a safety perspective, possibly with no internal safety mechanisms, but with documentation supporting systematic fault assumptions. Examples: operational amplifiers, data converters, DC-DC converters, CAN transceivers. Class II elements require an evaluation plan supported by analysis and testing.

Class III: Elements with complex states, typically with internal safety mechanisms and comprehensive safety documentation. Examples: automotive microcontrollers with built-in safety features (lockstep cores, ECC, BIST), system basis chips (SBCs), complex sensor ICs with internal diagnostics. Class III elements are typically developed with full ISO 26262 compliance by the semiconductor manufacturer and provided with detailed FMEDA data, safety manuals, and safety documentation packages.

18. Hardware Integration and Testing (Clause 10)

After hardware design and fabrication, the hardware elements must be integrated and tested to verify that they meet the hardware safety requirements. Hardware integration testing includes verifying the correct operation of all safety mechanisms under normal and fault conditions, verifying that hardware diagnostic coverage assumptions from the FMEDA are achievable in practice, testing boundary conditions and environmental stress conditions, and performing electromagnetic compatibility (EMC) testing to ensure that the hardware maintains functional safety under electromagnetic interference.

The hardware integration test plan, test specification, and test report are key work products that provide evidence for the safety case.

19. Fault Injection Testing for Hardware

Fault injection testing is a critical verification activity for Part 5. Physical faults are injected into the hardware (by manipulating signals, shorting pins, disconnecting sensors, introducing supply voltage anomalies, etc.) to verify that the safety mechanisms detect the faults and the system transitions to the correct safe state within the specified FTTI. Fault injection validates the diagnostic coverage assumptions made in the FMEDA and provides empirical evidence that the safety architecture works as designed.

For each FMEDA failure mode that claims diagnostic coverage by a safety mechanism, a corresponding fault injection test should be performed to confirm that the safety mechanism actually detects the injected fault and triggers the correct response. Any discrepancy between the FMEDA assumptions and the fault injection results must be investigated and resolved – either by correcting the safety mechanism implementation or by revising the FMEDA diagnostic coverage claims.

20. Key Work Products of Part 5

Part 5 produces the following essential work products: hardware safety requirements specification, hardware design specification (architectural and detailed design), FMEDA report (the comprehensive fault classification and metric calculation), hardware architectural metrics evaluation report (SPFM and LFM results vs. targets), PMHF evaluation report (probabilistic failure metric results vs. targets), hardware integration test specification and report, fault injection test report, and hardware verification report.

21. Common Mistakes and How to Avoid Them

Mistake 1: Using unrealistic failure rate data. Using overly optimistic failure rates leads to artificially good metrics that do not reflect the actual safety of the product. Use conservative, well-sourced failure rate data from recognized databases. When data uncertainty exists, use the higher (more conservative) estimate.

Mistake 2: Overestimating diagnostic coverage. Claiming high diagnostic coverage without rigorous justification is a common pitfall that assessors specifically target. Every DC claim must be supported by a clear technical rationale explaining how the safety mechanism detects the failure mode and what percentage of the failure mode’s manifestations are actually caught. Fault injection testing provides the strongest evidence.

Mistake 3: Classifying faults as “safe” without justification. Incorrectly classifying a safety-relevant fault as “safe” artificially improves the metrics. Every safe fault classification must be rigorously justified – explaining why the fault cannot contribute to any safety goal violation under any operational condition.

Mistake 4: Not considering common cause failures between safety mechanisms and monitored elements. If a safety mechanism shares the same power supply, clock, or communication bus as the element it monitors, a single root cause (e.g., power supply failure) could disable both simultaneously. DFA (Part 9) must verify the independence of safety mechanisms from the elements they protect.

Mistake 5: Performing FMEDA too late in the design cycle. FMEDA should be started early – during the hardware architectural design phase — and refined iteratively as the design matures. Performing FMEDA only after the design is complete and fabricated removes the opportunity to improve the architecture if metrics fall short.

Mistake 6: Neglecting the dual-point contribution to PMHF. The simplified PMHF approximation (SPF + RF only) is useful for early estimation but can be non-conservative. A complete PMHF evaluation must consider dual-point failure combinations, especially for architectures with significant latent fault exposure.

22. Frequently Asked Questions

Q1: Is FMEDA the only acceptable method for calculating hardware metrics?

No. ISO 26262 allows the use of FMEDA, quantified FTA, or other equivalent quantitative methods. FMEDA is the most widely used because it provides a structured, component-by-component analysis that directly yields all the inputs needed for SPFM, LFM, and PMHF calculation. Quantified FTA is particularly valuable for complex architectures with multiple redundancy paths.

Q2: Where do I get failure rate data for automotive ICs?

Automotive semiconductor manufacturers (such as NXP, Infineon, Texas Instruments, Renesas, STMicroelectronics) typically provide FMEDA data, failure mode distributions, and FIT rate information for their ISO 26262-compliant products through safety documentation packages, often under NDA. Industry databases like IEC TR 62380 and SN 29500 provide generic failure rate data for standard component types.

Q3: What happens if I cannot meet the PMHF target?

If the PMHF target cannot be met with the current architecture, the hardware team must iterate the design: add additional safety mechanisms to improve diagnostic coverage, introduce architectural redundancy (e.g., dual-channel design), replace components with more reliable alternatives, or request ASIL decomposition (Part 9) to distribute the safety requirement across independent elements with lower individual ASIL targets.

Q4: Do passive components (resistors, capacitors) need to be included in the FMEDA?

Yes, if they are in the safety-related signal path. Every hardware element that can influence a safety-critical function must be included. Passive components often have relatively low individual failure rates, but their cumulative contribution across dozens or hundreds of components can be significant. They are typically Class I elements analyzed in the system context.

Q5: How does Part 11 (Semiconductors) relate to Part 5?

Part 11 provides additional guidance specifically for semiconductor manufacturers and silicon IP providers on how to apply Part 5’s requirements to the unique challenges of IC-level analysis. It addresses semiconductor-specific failure modes, on-chip safety mechanisms (like lockstep cores, ECC, BIST), and the collaboration model between IC vendors and ECU-level integrators. When using ISO 26262-compliant semiconductors, the ECU integrator can leverage the IC manufacturer’s FMEDA data as input to their system-level analysis.

23. Conclusion

ISO 26262 Part 5 – Product Development at the Hardware Level is where functional safety becomes deeply quantitative. Through FMEDA analysis, fault classification, and the calculation of SPFM, LFM, and PMHF, hardware engineers must demonstrate with numerical evidence that random hardware failures are adequately detected, controlled, or mitigated by the safety architecture. The iterative process of designing safety mechanisms, analyzing their effectiveness through FMEDA, and refining the design until metric targets are met is the core engineering challenge of Part 5.

Mastering Part 5 requires a unique combination of skills: deep hardware design expertise, understanding of component reliability and failure physics, proficiency in systematic safety analysis methods (FMEDA, FTA), and the ability to design creative, effective safety mechanisms that achieve the required diagnostic coverage. Engineers who can do all of this while keeping the hardware architecture cost-effective and producible are among the most valuable professionals in the automotive electronics industry.

This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. Next in our series: ISO 26262 Part 6 – Product Development at the Software Level, where we will explore software architectural design, MISRA C/C++ coding standards, unit testing with structural coverage (MC/DC), and software verification. Be sure to also review our earlier posts on Part 1, Part 2, Part 3, and Part 4.

Stay safe. Stay quantitative. Keep engineering the future.

– The PiEmbSysTech Team

