ISO 26262 Part 11 Semiconductors & Chip-Level Safety Explained
Hello, semiconductor engineers, SoC architects, and automotive IC designers! Welcome to the eleventh deep-dive post in our comprehensive ISO 26262 series at PiEmbSysTech. In this article, we will explore ISO 26262 Part 11 – Guidelines on Application of ISO 26262 to Semiconductors, the part introduced in the 2018 second edition to provide dedicated guidance for developing functionally safe semiconductor devices for automotive applications.

Modern automotive systems depend heavily on complex semiconductors – from advanced microcontrollers with lockstep CPU cores and built-in safety mechanisms, to high-performance SoCs (System-on-Chips) powering ADAS and autonomous driving, to precision analog ICs and MEMS sensors providing the sensory inputs for safety-critical functions. As automotive electronics evolve toward higher levels of automation, the safety of these semiconductor devices becomes increasingly critical. Part 11 provides the interpretation framework that semiconductor manufacturers need to apply ISO 26262 at the silicon level.
1. What is ISO 26262 Part 11 and Why Was It Introduced?
ISO 26262 Part 11: Guidelines on Application of ISO 26262 to Semiconductors was introduced in the 2018 second edition to address the growing complexity and criticality of semiconductor devices in automotive safety systems. The first edition (2011) did not include dedicated semiconductor guidance, leaving IC designers and IP providers to interpret the general hardware requirements of Part 5 for the unique challenges of chip-level design.
This gap was significant because semiconductor design has unique characteristics that differentiate it from traditional PCB-level hardware design. Failure modes at the transistor and gate level are fundamentally different from failure modes at the component level. On-chip safety mechanisms (ECC, lockstep, BIST) operate at a granularity that Part 5 alone did not adequately address. The semiconductor supply chain involves multiple layers of IP providers and integrators, requiring a specific collaboration framework. Failure rate estimation for ICs involves complex models that account for die size, process technology, packaging, and operating conditions. Transient faults (soft errors) caused by cosmic rays and alpha particles are a semiconductor-specific phenomenon that requires dedicated analysis.
Part 11, at approximately 185 pages, is one of the most substantial parts of the standard. It provides the interpretive bridge between the general hardware requirements of Part 5 and the specific realities of semiconductor design and verification.
2. Informative Status and Relationship with Normative Parts
Like Part 10, Part 11 has an informative (non-normative) character. It contains possible interpretations of the normative parts (primarily Part 5) with respect to semiconductor development. The standard explicitly states that its content “is not exhaustive with regard to possible interpretations” – meaning other valid interpretations may exist. However, in practice, Part 11 has become the de facto reference for semiconductor functional safety in the automotive industry, and assessors routinely use it as the benchmark for evaluating IC-level compliance.
The normative requirements that semiconductor developers must satisfy come from Parts 2 through 9 – primarily Part 5 (hardware development) for the technical requirements, Part 8 (supporting processes) for tool qualification and configuration management, and Part 9 for dependent failure analysis. Part 11 provides guidance on how to apply those normative requirements to the specific context of semiconductor development.
3. Structure and Key Topics of Part 11
Part 11 is organized into the following major sections:
- General semiconductor guidance, including decomposition of ICs into hardware parts, sub-parts, and elementary sub-parts
- Failure rate estimation, using IEC TR 62380, SN 29500, and other models
- Soft error analysis – transient faults from cosmic rays and alpha particles
- Guidance for specific semiconductor technologies: digital components (microcontrollers, memories, logic), analog and mixed-signal components, programmable logic devices (PLDs/FPGAs), multi-core processors, and sensors and transducers (including MEMS)
- The silicon IP framework: IP providers, integrators, assumptions of use, and safety element out of context for IP
- FMEDA at the semiconductor level
- Fault injection techniques for IC verification
- The semiconductor safety manual, the key deliverable from IC vendor to system integrator
4. IC Decomposition – Hardware Parts, Sub-Parts, and Elementary Sub-Parts
Part 11 provides detailed guidance on how to decompose an integrated circuit into the hierarchical levels used for safety analysis. This decomposition extends the general hardware hierarchy from Part 1 to the chip level:
Hardware Part: At the IC level, a hardware part is a functionally distinct block within the chip that can be independently analyzed. Examples include the CPU core, a memory array (Flash, SRAM), an ADC module, a communication peripheral (CAN, SPI), or a timer block. Each hardware part has an identifiable failure rate contribution.
Hardware Sub-Part: A sub-part is a logically distinguishable portion of a hardware part. For example, within the CPU hardware part, the ALU (Arithmetic Logic Unit), the register bank, the instruction decoder, and the interrupt controller are sub-parts. Sub-parts are the level at which failure modes are typically identified and diagnostic coverage is assessed in the FMEDA.
Elementary Sub-Part: The finest level of decomposition – individual functional elements such as a single flip-flop, a logic cone, a single memory cell, or a specific analog circuit block. Elementary sub-parts are relevant for detailed fault injection analysis.
This hierarchical decomposition enables the systematic FMEDA analysis required by Part 5 to be applied at the IC level, with failure rates distributed across hardware parts based on their relative die area, transistor count, or other appropriate allocation methods.
5. SEooC Framework for Semiconductor Development
Semiconductors are almost always developed as a Safety Element out of Context (SEooC). This is because IC designers typically do not know the specific vehicle or ECU system in which their chip will be used. The chip may be designed for a broad range of automotive applications, each with different safety goals, different ASIL levels, and different system architectures.
Under the SEooC framework, the semiconductor manufacturer develops the IC based on documented Assumptions of Use (AoU) – assumptions about the safety requirements, the intended safety concept, the expected safety mechanisms (both on-chip and external), and the operating conditions that the chip will encounter when integrated into the final system. The system integrator (typically the Tier-1 ECU supplier or OEM) must validate these assumptions during system integration.
The AoU document is a critical interface between the IC vendor and the system integrator. It must clearly specify what safety mechanisms the chip provides internally (and their diagnostic coverage), what external safety mechanisms the system integrator must provide (such as external voltage monitoring, external clock supervision, or software-based diagnostics), the operating conditions under which the IC’s safety claims are valid, and any configuration or initialization steps required to enable the on-chip safety features.
6. Silicon IP Providers and Integrators – The Collaboration Model
Modern SoCs are assembled from dozens or hundreds of silicon IP blocks sourced from multiple IP providers – processor cores (ARM, RISC-V), memory controllers, communication interfaces (PCIe, CAN, Ethernet), analog subsystems, and more. Part 11 provides a collaboration framework for this multi-tier IP supply chain.
The framework distinguishes between IP providers (who develop and verify individual IP blocks, typically as SEooC) and IP integrators (who assemble IP blocks into the complete SoC, validating the assumptions of use and performing system-level safety analysis). Each IP provider delivers safety work products for their block – including FMEDA data, safety manuals, DFA analysis, and assumption of use documentation. The integrator combines these inputs with their own integration-level analysis to produce the SoC-level safety evidence.
Part 11 addresses the practical challenges of this model, including how to handle IP blocks with different ASIL capabilities being integrated into the same die, how to perform dependent failure analysis between IP blocks that share on-chip resources (bus fabric, clock trees, power domains), and how to allocate the system-level PMHF budget across multiple IP blocks.
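To make the PMHF budget allocation concrete, here is a minimal sketch of the bookkeeping involved: summing the residual (post-safety-mechanism) failure rate contributions allocated to each IP block and checking them against the ASIL D system target of 10 FIT. The block names and all FIT values are hypothetical placeholders, not data from any real device.

```python
# Illustrative PMHF budget check across on-chip IP blocks.
# All FIT values below are hypothetical placeholders.

ASIL_D_PMHF_TARGET_FIT = 10.0  # ISO 26262-5 target: < 10^-8/h = 10 FIT

# Residual (post-safety-mechanism) contribution allocated to each IP block
ip_block_residual_fit = {
    "cpu_lockstep_cluster": 1.2,
    "sram_with_ecc": 0.8,
    "flash_with_ecc": 0.6,
    "can_controller": 0.5,
    "bus_fabric": 0.4,
    "pll_and_clock_monitor": 0.3,
}

total_fit = sum(ip_block_residual_fit.values())
margin = ASIL_D_PMHF_TARGET_FIT - total_fit

print(f"Total residual contribution: {total_fit:.2f} FIT")
print(f"Budget margin vs ASIL D target: {margin:.2f} FIT")
assert total_fit < ASIL_D_PMHF_TARGET_FIT, "PMHF budget exceeded"
```

In a real SoC project this table is derived from each IP provider's FMEDA, and the integrator keeps margin for integration-level contributions (bus fabric, clock and power distribution) that no single IP block accounts for.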
7. Failure Rates for Semiconductor Devices
Failure rate estimation for ICs is a critical input to the FMEDA and hardware metric calculations. Part 11 discusses the use of industry-standard failure rate models, including IEC TR 62380 (which was the primary reference during Part 11’s development), SN 29500 (the German Siemens standard), and FIDES (the French reliability prediction model). Each model calculates IC failure rates based on parameters such as die complexity (transistor count or die area), package type, number of pins, operating temperature, duty cycle, and environmental stress factors.
Part 11 emphasizes that these failure rate predictions focus on random hardware failures and explicitly exclude systematic failure modes (such as electrostatic discharge damage or electrical overstress, which are addressed through process controls and design guidelines). The base failure rate must be reported before the application of safety mechanisms – meaning the raw failure rate of the silicon, not the residual rate after ECC, lockstep, or other safety mechanisms are applied. This ensures transparency and allows the system integrator to correctly account for the chip’s contribution to the overall system PMHF budget.
Part 11 also addresses the allocation of the total IC failure rate to individual hardware parts. A common approach is die-area-based allocation – distributing the total failure rate proportionally to the relative die area occupied by each hardware part. This is a practical approximation, but Part 11 notes that more refined allocation methods may be used when specific failure data is available for different circuit types (e.g., different failure rates per unit area for SRAM vs. logic vs. analog circuits).
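The die-area-based allocation described above amounts to a simple proportional split. The sketch below illustrates the arithmetic with invented numbers (the 100 FIT base rate, the part names, and the areas are all hypothetical):

```python
# Hypothetical sketch of die-area-based failure rate allocation.
# The total base failure rate (e.g. from IEC TR 62380 or SN 29500)
# is distributed proportionally to each hardware part's die area.

total_base_fit = 100.0    # raw failure rate of the die, in FIT (illustrative)

die_area_mm2 = {          # hardware part -> occupied die area (illustrative)
    "cpu_core": 4.0,
    "sram": 10.0,
    "flash": 20.0,
    "analog": 6.0,
    "peripherals": 10.0,
}

total_area = sum(die_area_mm2.values())
part_fit = {part: total_base_fit * area / total_area
            for part, area in die_area_mm2.items()}

for part, fit in part_fit.items():
    print(f"{part:12s} {fit:6.1f} FIT")
```

A refined version would replace the single proportionality constant with per-circuit-type rates (FIT per mm² for SRAM vs. logic vs. analog), exactly as Part 11 suggests when such data is available.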
8. Soft Errors and Transient Faults in Semiconductors
Soft errors (also called single-event upsets or SEUs) are transient faults caused by ionizing radiation – primarily cosmic ray neutrons at ground level and alpha particles from packaging materials. Unlike permanent random failures, soft errors do not physically damage the silicon – they temporarily flip the state of a storage element (such as a memory bit or a flip-flop), potentially causing an incorrect computation or data corruption.
Part 11 provides specific guidance on handling soft errors in the context of ISO 26262. The soft error rate (SER) must be considered as part of the overall failure rate for the chip, and safety mechanisms must be designed to detect and correct soft errors in safety-critical data paths. Common countermeasures include ECC on memories (correcting single-bit errors, detecting double-bit errors), triple modular redundancy (TMR) on critical flip-flops, lockstep processing (detecting soft errors in the CPU data path by comparing outputs of two identical cores), and periodic memory scrubbing (proactively reading and re-writing memory contents to detect and correct accumulated soft errors before they cause problems).
Part 11 notes that soft error rates are technology-dependent – they generally increase with smaller process geometries and lower supply voltages, as the critical charge needed to flip a storage element decreases. This trend makes soft error management increasingly important for advanced automotive SoCs built on leading-edge process nodes.
9. On-Chip Safety Mechanisms – The Semiconductor Safety Toolkit
Part 11 provides extensive guidance on the on-chip safety mechanisms that semiconductor designers use to detect, correct, and mitigate random hardware failures. These mechanisms are the chip-level implementation of the safety architecture concepts defined in Parts 4 and 5. The effectiveness of these mechanisms directly determines the chip’s SPFM, LFM, and PMHF contributions.
10. ECC (Error Correction Code) for Memories
Error Correction Code (ECC) is the most widely used on-chip safety mechanism for memories (SRAM, Flash, DRAM). ECC adds redundant parity bits to each stored data word, enabling the detection and correction of errors. The most common implementation is SECDED (Single Error Correction, Double Error Detection), which corrects any single-bit error in a data word and detects (but cannot correct) any two-bit error. This provides high diagnostic coverage for memory failures and soft errors. More advanced ECC schemes (such as BCH codes) can correct multiple-bit errors, providing even higher coverage. ECC is typically applied to internal SRAM, Flash memory, cache memories, register files, and sometimes to internal bus data transfers. Part 11 discusses the diagnostic coverage that can be claimed for different ECC implementations and the considerations for ECC on different memory types.
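To see how SECDED works mechanically, here is an educational sketch of an extended Hamming code for an 8-bit word: 4 Hamming check bits plus one overall parity bit, 13 bits total. Real ECC is combinational hardware operating on wider words; this Python model only illustrates the encode/decode logic.

```python
# Educational SECDED (extended Hamming) sketch for an 8-bit data word.
# Index 0 holds the overall parity bit; indices 1..12 are Hamming
# positions, with check bits at the power-of-two positions 1, 2, 4, 8.

DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # positions holding data bits

def encode(data):
    """Encode 8-bit data into a 13-bit SECDED codeword (list of bits)."""
    assert 0 <= data < 256
    code = [0] * 13
    for i, pos in enumerate(DATA_POS):
        code[pos] = (data >> i) & 1
    for p in (1, 2, 4, 8):               # each check bit covers positions i with i & p
        parity = 0
        for i in range(1, 13):
            if i & p and i != p:
                parity ^= code[i]
        code[p] = parity
    code[0] = sum(code) % 2              # overall (even) parity over all 13 bits
    return code

def decode(code):
    """Return (data, status): status is 'ok', 'corrected', or 'detected'."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for i in range(1, 13):
            if i & p:
                parity ^= code[i]
        if parity:
            syndrome |= p                # syndrome = position of a single-bit error
    overall = sum(code) % 2
    code = list(code)
    if syndrome and overall:             # single-bit error: correct it
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome and not overall:       # two-bit error: detect only
        return None, "detected"
    elif overall:                        # error in the overall parity bit itself
        status = "corrected"
    else:
        status = "ok"
    data = sum(code[pos] << i for i, pos in enumerate(DATA_POS))
    return data, status
```

For example, flipping any one of the 13 bits of `encode(0xA5)` still decodes to `0xA5` with status `"corrected"`, while flipping two bits yields `"detected"` — exactly the SECDED behavior described above.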
11. Lockstep CPU Cores
Lockstep is a safety mechanism where two identical CPU cores execute the same instructions in parallel, and a hardware comparator checks their outputs on every clock cycle. Any discrepancy indicates a fault in one of the cores, triggering a fault notification. Lockstep provides very high diagnostic coverage (typically claimed at 99% or higher) for the CPU processing path – including the ALU, registers, instruction decoder, and pipeline. It is the primary safety mechanism for achieving ASIL D in microcontroller-based systems.
Part 11 discusses the considerations for lockstep implementation, including the handling of the comparator itself (which is a single point of failure if not independently monitored), the handling of cores that operate with a time delay (delayed lockstep, where one core runs a few clock cycles behind the other to improve tolerance against common-cause transient faults), and the detection latency and response time when a mismatch is detected.
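The delayed-lockstep idea can be sketched as a toy model. This is a hypothetical Python-level illustration (nothing like real RTL): the checker core runs the same deterministic step function a fixed number of cycles behind the main core, and a comparator checks the delayed results against a FIFO of the main core's earlier outputs.

```python
# Conceptual sketch of delayed lockstep comparison (illustrative only).
from collections import deque

DELAY = 2  # checker core runs 2 cycles behind the main core

def step(state, inp):
    """Toy 'CPU' step: the same deterministic function runs on both cores."""
    return (state * 31 + inp) & 0xFFFFFFFF

def run_lockstep(inputs, fault_at=None):
    """Return the cycle at which a mismatch is flagged, or None."""
    main_state = checker_state = 0
    fifo = deque()                       # main core outputs awaiting comparison
    pending = deque()                    # inputs not yet consumed by the checker
    for cycle, inp in enumerate(inputs):
        main_state = step(main_state, inp)
        out = main_state
        if fault_at is not None and cycle == fault_at:
            out ^= 0x1                   # inject a transient single-bit fault
        fifo.append(out)
        pending.append(inp)
        if cycle >= DELAY:               # checker lags by DELAY cycles
            checker_state = step(checker_state, pending.popleft())
            if fifo.popleft() != checker_state:
                return cycle             # comparator flags the mismatch
    return None
```

Note the detection latency: a fault injected at cycle 5 is flagged at cycle 5 + DELAY, which is the trade-off delayed lockstep accepts in exchange for better tolerance against common-cause transients hitting both cores in the same cycle.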
12. BIST (Built-In Self-Test)
Built-In Self-Test (BIST) mechanisms are on-chip test circuits that autonomously test specific hardware blocks without external test equipment. BIST is used for startup testing (power-on self-test / POST), periodic online testing during operation, and shutdown testing (end-of-driving-cycle tests). Common BIST implementations include memory BIST (MBIST) for testing SRAM and Flash arrays using march test algorithms, logic BIST (LBIST) for testing combinational and sequential logic using pseudo-random test patterns and signature analysis, and analog BIST for testing ADCs, DACs, and other analog blocks using internally generated test stimuli. BIST is critical for detecting latent faults – faults in safety mechanisms or redundant elements that could remain hidden until a second fault occurs. By periodically running BIST, latent faults are detected and flagged, improving the LFM and preventing the accumulation of undetected degradation.
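The march test algorithms used by MBIST can be illustrated with a simplified software model. Real MBIST runs in dedicated on-chip hardware; the sketch below (a March C- style sequence over a simulated memory with an optional stuck-at fault) only shows the read/write pattern that exposes such faults.

```python
# Simplified MBIST sketch: a March C- style test over a simulated memory.

class FaultyMemory:
    """Word-wide memory with an optional stuck-at fault on one bit."""
    def __init__(self, size, stuck_addr=None, stuck_bit=0, stuck_val=0):
        self.cells = [0] * size
        self.stuck = (stuck_addr, stuck_bit, stuck_val)

    def write(self, addr, value):
        self.cells[addr] = value
        self._apply_fault(addr)

    def read(self, addr):
        self._apply_fault(addr)
        return self.cells[addr]

    def _apply_fault(self, addr):
        s_addr, s_bit, s_val = self.stuck
        if addr == s_addr:
            if s_val:
                self.cells[addr] |= (1 << s_bit)    # stuck-at-1
            else:
                self.cells[addr] &= ~(1 << s_bit)   # stuck-at-0

def march_c_minus(mem, size, width=8):
    """Return True if the memory passes, False if a fault is detected."""
    ones = (1 << width) - 1
    up, down = range(size), range(size - 1, -1, -1)
    for addr in up:                      # up(w0)
        mem.write(addr, 0)
    for order, (expect, write) in [(up, (0, ones)), (up, (ones, 0)),
                                   (down, (0, ones)), (down, (ones, 0))]:
        for addr in order:               # up(r0,w1) up(r1,w0) down(r0,w1) down(r1,w0)
            if mem.read(addr) != expect:
                return False
            mem.write(addr, write)
    for addr in up:                      # up(r0)
        if mem.read(addr) != 0:
            return False
    return True
```

Running this against a fault-free memory passes; planting a stuck-at-1 fault on any bit is caught by the first read-0 march element, since the cell can never hold the 0 that was just written.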
13. Other On-Chip Safety Mechanisms
Beyond ECC, lockstep, and BIST, Part 11 discusses numerous other on-chip safety mechanisms: parity protection on internal buses and registers (simpler than ECC, providing error detection without correction), protocol checkers on internal bus fabrics (verifying that transactions comply with the bus protocol), end-to-end data protection (CRC or checksum on data transfers between IP blocks), watchdog timers (on-chip hardware timers that monitor software execution), clock monitoring and redundancy (on-chip oscillators and PLL monitors that detect clock failures), voltage monitoring (on-chip supply voltage sensors that detect undervoltage or overvoltage), temperature monitoring (on-chip thermal sensors that detect overtemperature), and redundant signal paths (duplicate paths for critical control signals with comparison logic).
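End-to-end data protection, mentioned above, can be sketched with a few lines of Python. This uses CRC-32 from the standard library purely for illustration; real on-chip E2E schemes often use shorter CRCs plus sequence counters (as in the AUTOSAR E2E profiles), implemented in hardware or driver software.

```python
# Illustrative end-to-end protection: append a CRC to a payload
# transferred between two on-chip blocks, verify it at the receiver.
import zlib

def protect(payload: bytes) -> bytes:
    """Sender side: append a 4-byte CRC-32 to the payload."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(frame: bytes):
    """Receiver side: return the payload if the CRC matches, else None."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received:
        return None                      # corruption detected on the transfer
    return payload

frame = protect(b"wheel_speed=42")
assert check(frame) == b"wheel_speed=42"

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert check(corrupted) is None
```

The key property is that the check covers the whole transfer path (bus fabric, DMA, intermediate buffers), not just a single link, which is what distinguishes end-to-end protection from per-hop parity.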
Each mechanism contributes a specific level of diagnostic coverage that is documented in the FMEDA. The combination of all on-chip safety mechanisms determines the overall SPFM, LFM, and PMHF for the chip.
14. FMEDA for Semiconductors – Chip-Level Analysis
The FMEDA at the semiconductor level follows the same general methodology as described in Part 5, but with chip-specific considerations. The IC is decomposed into hardware parts and sub-parts, each assigned a portion of the total failure rate. For each sub-part, the possible failure modes are identified (stuck-at-0, stuck-at-1, bridging, open, delay, etc.), and each failure mode is classified as safe, single-point, residual, detected multiple-point, perceived multiple-point, or latent multiple-point – based on its effect on safety goals and the diagnostic coverage provided by the on-chip safety mechanisms.
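Once each failure mode is classified, the hardware architectural metrics follow from the FMEDA sums defined in ISO 26262-5. The sketch below shows the arithmetic with hypothetical FIT values (the sub-part names and numbers are invented for illustration):

```python
# Sketch of SPFM/LFM computation from FMEDA fault classes, following
# the ISO 26262-5 formulas. All FIT values are hypothetical.

fmeda_rows = [
    # (sub-part, total FIT, single-point + residual FIT, latent multi-point FIT)
    ("alu",           5.0, 0.05, 0.10),
    ("register_bank", 4.0, 0.04, 0.08),
    ("sram_array",   20.0, 0.10, 0.40),
    ("can_periph",    6.0, 0.10, 0.20),
]

lam_total = sum(r[1] for r in fmeda_rows)
lam_spf_rf = sum(r[2] for r in fmeda_rows)
lam_mpf_latent = sum(r[3] for r in fmeda_rows)

spfm = 1 - lam_spf_rf / lam_total                    # single-point fault metric
lfm = 1 - lam_mpf_latent / (lam_total - lam_spf_rf)  # latent fault metric

print(f"SPFM = {spfm:.2%}  (ASIL D target >= 99%)")
print(f"LFM  = {lfm:.2%}  (ASIL D target >= 90%)")
```

In a semiconductor FMEDA the single-point/residual and latent columns are themselves derived from the diagnostic coverage of the on-chip safety mechanisms, which is why the fault injection evidence discussed in the next section matters so much.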
Part 11 provides specific guidance on FMEDA considerations for semiconductors, including how to handle failure modes that are technology-specific (e.g., gate oxide breakdown, electromigration, hot carrier injection), how to estimate diagnostic coverage for on-chip safety mechanisms (with guidance on justifying coverage claims through fault injection), how to account for soft errors in the FMEDA, and how to present the FMEDA results in the semiconductor safety manual for use by the system integrator.
15. Fault Injection at the Semiconductor Level
Fault injection is the primary method for validating the diagnostic coverage claims made in the semiconductor FMEDA. At the IC level, fault injection is typically performed through simulation – injecting faults into the RTL (Register Transfer Level) design model and observing whether the on-chip safety mechanisms detect them.
Part 11 discusses several fault injection approaches. RTL fault simulation injects stuck-at, bit-flip, or other fault models into the design netlist and simulates the system’s response. Gate-level fault simulation provides higher fidelity by injecting faults at the synthesized gate-level netlist. Physical fault injection on silicon (using techniques like laser fault injection or voltage glitching) provides the ultimate validation on actual hardware but is more limited in coverage. Statistical fault injection uses random sampling of fault locations and types when exhaustive injection is impractical for large designs.
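The statistical fault injection idea can be illustrated with a toy model: randomly sample single-bit flips in a 17-bit frame where only the 8-bit data field and its parity bit are protected, then estimate the diagnostic coverage of the parity check from the sample. This is purely illustrative, not a real IC verification flow.

```python
# Toy statistical fault injection campaign (illustrative only).
import random

def transfer(data: int, ctrl: int, flip_bit=None):
    """Frame: bits 0-7 data, bit 8 even parity over data, bits 9-16 unprotected
    control. Return True if the injected fault is detected by the parity check."""
    parity = bin(data).count("1") & 1
    frame = data | (parity << 8) | (ctrl << 9)
    if flip_bit is not None:
        frame ^= (1 << flip_bit)          # inject a single transient bit flip
    rx_data, rx_p = frame & 0xFF, (frame >> 8) & 1
    return (bin(rx_data).count("1") & 1) != rx_p

random.seed(1)
samples = 10_000
hits = sum(transfer(random.randrange(256), random.randrange(256),
                    flip_bit=random.randrange(17))
           for _ in range(samples))

coverage = hits / samples                 # expected near 9/17, about 53%
print(f"Estimated diagnostic coverage: {coverage:.1%}")
```

The estimate converges toward the true coverage (here 9/17, since faults in the unprotected control field escape detection) as the sample count grows; real campaigns additionally weight samples by the failure rates of the sampled fault locations.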
The results of fault injection campaigns provide quantitative evidence for diagnostic coverage claims and are a key deliverable in the semiconductor safety documentation package.
16. Guidance for Digital Components
Part 11 provides detailed guidance for digital semiconductor components, including microcontrollers, DSPs, and digital logic blocks. This includes definitions and guidance on fault models for digital circuits (stuck-at, bridging, transition delay, stuck-open), failure mode analysis for common digital blocks (CPU, memory, communication peripherals, timers, DMA controllers), diagnostic coverage estimation for digital safety mechanisms (lockstep, ECC, parity, protocol checkers), and verification through fault injection simulation at RTL and gate levels.
17. Guidance for Analog and Mixed-Signal Components
Part 11 also addresses analog and mixed-signal semiconductor components, which present unique challenges for safety analysis due to their continuous (rather than discrete) behavior. The guidance covers how to decompose analog devices into analyzable blocks, failure modes specific to analog circuits (drift, offset, gain error, oscillation, saturation), the concept of Analog Single Event Transients (ASETs) – transient disturbances in analog circuits caused by ionizing radiation, and guidance on under-voltage and over-voltage diagnostics for analog supervision circuits. Analog safety analysis is generally more challenging than digital analysis because failure modes are often gradual (drift) rather than binary (stuck-at), and the boundary between normal parameter variation and a safety-relevant failure can be difficult to define.
18. PLDs, Multi-Core Processors, and Sensors
Part 11 provides specific guidance for Programmable Logic Devices (PLDs/FPGAs), covering the unique considerations for configurable semiconductor devices – including the distinction between the PLD manufacturer’s responsibilities and the PLD user’s responsibilities, and the handling of configuration memory (which, if corrupted, can change the fundamental behavior of the device). For multi-core processors, Part 11 addresses the allocation of safety requirements to individual cores, freedom from interference between cores sharing on-chip resources (cache, bus fabric, memory controller), and the use of multi-core lockstep versus heterogeneous multi-core architectures. For sensors and transducers (including MEMS), Part 11 addresses sensor-specific failure modes, transducer failure modes, and production process considerations for MEMS devices.
19. The Semiconductor Safety Manual
The semiconductor safety manual is the primary deliverable from the IC vendor to the system integrator. It is the document that enables the system integrator to correctly use the semiconductor device in a safety-related application. The safety manual must include the IC’s safety capabilities (ASIL capability, supported safety mechanisms), the FMEDA results (failure mode distribution, diagnostic coverage, SPFM/LFM contributions), the assumptions of use (the conditions under which the safety claims are valid), the external safety mechanisms that the system integrator must provide, the configuration and initialization requirements for enabling safety features, the operating conditions and derating requirements, and any known limitations or errata that affect safety. The safety manual is typically provided under NDA and is complemented by the FMEDA report, the DFA report, and (for TÜV-certified products) the third-party assessment certificate.
20. Key Work Products and Deliverables
Part 11 semiconductor developments typically produce the following deliverables: the semiconductor safety manual, the FMEDA report (detailed fault classification and metric calculations for the IC), the DFA report (analysis of dependent failures between on-chip blocks), the fault injection campaign results (validation evidence for diagnostic coverage claims), the assumptions of use document (for SEooC development), the safety analysis reports (FTA, FMEA at the IC level), and (when applicable) a third-party assessment certificate (typically from TÜV SÜD, TÜV Rheinland, or SGS). Major automotive semiconductor manufacturers (Infineon, NXP, Renesas, Texas Instruments, STMicroelectronics, Microchip) provide these deliverables as part of their standard safety documentation packages for automotive-grade products.
21. Common Mistakes and How to Avoid Them
Mistake 1: Treating Part 11 as optional because it is informative. While Part 11 is informative, the underlying Part 5 requirements for hardware metric evaluation are normative and apply to semiconductors. Part 11 provides the accepted interpretation of how to apply Part 5 at the chip level. Ignoring Part 11 while still claiming Part 5 compliance for semiconductor elements will be challenged by assessors.
Mistake 2: Using IC vendor’s FMEDA data without validating assumptions of use. The semiconductor vendor’s FMEDA results are valid only under the documented assumptions of use. If the system integrator uses the chip in a different configuration, at different operating conditions, or without providing the specified external safety mechanisms, the FMEDA results do not apply. Always validate the AoU against your actual system design.
Mistake 3: Claiming high diagnostic coverage without fault injection evidence. Theoretical analysis can estimate diagnostic coverage, but the semiconductor industry and assessors increasingly expect fault injection simulation to validate these claims. A lockstep mechanism that theoretically provides 99% coverage must be validated through fault injection to confirm that the coverage is actually achieved in the implemented design.
Mistake 4: Ignoring soft errors. Soft errors are a significant contributor to the overall failure rate, especially in advanced process nodes. Not including soft error rates in the FMEDA, or not designing safety mechanisms to detect soft errors, can lead to underestimation of the PMHF and inadequate safety.
Mistake 5: Not considering freedom from interference between on-chip IP blocks. IP blocks sharing the same die, power domain, clock tree, and bus fabric have significant coupling factors. A thorough DFA at the semiconductor level, evaluating these shared on-chip resources, is essential for validating independence claims.
22. Frequently Asked Questions
Q1: Does every semiconductor used in a safety-related system need to comply with Part 11?
Part 11 provides guidance primarily for complex semiconductor devices (microcontrollers, SoCs, sensor ICs) that contain safety-related functions or safety mechanisms. Simple passive components (resistors, capacitors) and simple discrete semiconductors (transistors, diodes) are analyzed at the system level under Part 5 as Class I hardware elements and do not require Part 11 analysis.
Q2: What is the difference between AEC-Q100 and ISO 26262 for semiconductors?
AEC-Q100 is a reliability qualification standard focused on stress testing (temperature cycling, high-temperature operating life, moisture sensitivity, etc.) to ensure the physical durability of the IC. ISO 26262 is a functional safety standard focused on ensuring the IC behaves safely when faults occur. They are complementary – AEC-Q100 qualification is necessary (to ensure the IC is reliable) but not sufficient (it does not address functional safety architecture, diagnostic coverage, or safety mechanisms). Both are required for safety-critical automotive semiconductors.
Q3: Can a semiconductor be certified to a specific ASIL?
Semiconductors developed as SEooC are assessed for an ASIL capability rather than certified to a specific ASIL. The IC vendor demonstrates that the chip has been developed with systematic processes sufficient for a target ASIL and provides FMEDA data showing the chip’s random hardware failure contributions. The actual ASIL compliance of the complete system is determined at the system level by the integrator, considering the chip’s contributions alongside all other system elements.
Q4: How do I obtain FMEDA data from semiconductor vendors?
Major automotive semiconductor vendors provide FMEDA data, safety manuals, and safety documentation packages for their ISO 26262-compliant products. These are typically available under NDA through the vendor’s automotive or functional safety support channels. Some vendors also provide safety documentation through their Development Interface Agreement (DIA) process with Tier-1 and OEM customers.
Q5: Is fault injection simulation mandatory?
Part 11 is informative, so it does not create mandatory requirements. However, the normative Part 5 requires evaluation of hardware architectural metrics, and fault injection is the most widely accepted method for validating diagnostic coverage claims at the semiconductor level. In practice, assessors and OEMs expect fault injection evidence for complex semiconductor safety mechanisms, particularly for ASIL C and D.
23. Conclusion
ISO 26262 Part 11 – Guidelines on Application of ISO 26262 to Semiconductors fills a critical gap in the standard by providing the interpretive framework that semiconductor designers, IP providers, and SoC integrators need to develop functionally safe automotive ICs. From the hierarchical decomposition of ICs into hardware parts and sub-parts, through the design and validation of on-chip safety mechanisms (ECC, lockstep, BIST), to the collaboration model between IP providers and integrators, Part 11 addresses the unique challenges of chip-level functional safety with depth and specificity.
As automotive electronics evolve toward higher levels of automation, more complex SoC architectures, and advanced process nodes, the importance of Part 11 will only grow. Semiconductor engineers who master this part of the standard will be at the forefront of the automotive industry’s most critical safety challenge: ensuring that the silicon at the heart of every safety-critical system is demonstrably safe.
This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. Next in our series: ISO 26262 Part 12 – Adaptation for Motorcycles. Be sure to review our earlier posts on Part 1 through Part 10.
Stay safe. Stay silicon-smart. Keep engineering the future.
– The PiEmbSysTech Team