ISO 26262 Part 6 Software Development: MISRA, MCDC, Software Safety Requirements & Verification Explained
Hello, embedded software engineers and automotive safety developers! Welcome to the sixth deep-dive post in our comprehensive ISO 26262 series at PiEmbSysTech. In this article, we will explore ISO 26262 Part 6 – Product Development at the Software Level, the part that defines how safety-related automotive software must be designed, implemented, tested, and verified.
In modern vehicles, software controls an enormous range of safety-critical functions – from electronic braking and power steering to airbag deployment, engine management, and autonomous driving perception. A single software defect in the wrong place can have catastrophic consequences. Part 6 provides the systematic framework to ensure that safety-related software is developed with the rigor proportional to its safety criticality, using proven methods such as MISRA C/C++ coding standards, structural code coverage analysis (including MC/DC), requirements-based unit testing, and formal software architectural design.
If Part 5 focused on ensuring that hardware random failures are controlled through quantitative metrics, Part 6 focuses on ensuring that systematic software failures are avoided through disciplined development processes, rigorous static and dynamic analysis, and comprehensive testing. Let us explore every aspect.
ISO 26262 Part 6 Software Development Table of Contents
- What is ISO 26262 Part 6 and Why Software Safety Matters
- Structure of Part 6 – The Key Clauses
- Software Safety Requirements Specification (Clause 6)
- Software Architectural Design (Clause 7)
- Software Architectural Design Principles for Safety
- Architectural Notations and ASIL-Dependent Methods
- Software Unit Design and Implementation (Clause 8)
- Coding Guidelines – MISRA C, MISRA C++, and Beyond
- MISRA C/C++ in Detail – The Automotive Coding Standard
- Software Design Principles from Part 6 Tables
- Software Unit Verification (Clause 9)
- Unit Testing Methods and ASIL-Dependent Requirements
- Structural Code Coverage – Statement, Branch, and MC/DC
- MC/DC (Modified Condition/Decision Coverage) Explained
- Software Integration and Testing (Clause 10)
- Verification of Software Safety Requirements (Clause 11)
- Static Analysis – Finding Defects Without Execution
- Model-Based Development for ISO 26262
- Software Tool Qualification (Part 8 Reference)
- Key Work Products of Part 6
- Complete ASIL-Dependent Methods and Coverage Summary
- Common Mistakes and How to Avoid Them
- Frequently Asked Questions
- Conclusion
1. What is ISO 26262 Part 6 and Why Software Safety Matters
ISO 26262 Part 6: Product Development at the Software Level specifies the requirements for the development of safety-related software in automotive E/E systems. It covers the complete software development lifecycle within the V-model – from specifying software safety requirements, through software architectural design, unit design and implementation, unit verification, software integration testing, and verification of software safety requirements at the integrated software level.
Part 6 addresses systematic failures in software – defects introduced through design errors, coding mistakes, specification ambiguities, or inadequate testing. Unlike hardware random failures (addressed by Part 5’s quantitative metrics), software failures are deterministic: the same input to the same code under the same conditions will always produce the same (possibly wrong) output. Software does not wear out or degrade – it either has a defect or it does not. Therefore, the approach to software safety is fundamentally different from hardware safety: rather than calculating probabilistic failure metrics, Part 6 prescribes progressively rigorous development methods, coding standards, testing techniques, and coverage criteria scaled to the ASIL of the component.
The higher the ASIL, the more rigorous the methods that must be applied. For ASIL D software, the standard requires the most stringent coding standards, the most comprehensive testing, and the most demanding structural coverage criteria – including Modified Condition/Decision Coverage (MC/DC), which is one of the most labor-intensive and expensive verification activities in the entire ISO 26262 lifecycle.
2. Structure of Part 6 – The Key Clauses
ISO 26262-6:2018 contains the following normative clauses:
Clause 5: General Topics for Software Development – Initiation of software development, including modeling and coding guidelines selection.
Clause 6: Specification of Software Safety Requirements – Derivation of software safety requirements from the TSRs and HSI.
Clause 7: Software Architectural Design – Design of the software architecture with safety-relevant properties.
Clause 8: Software Unit Design and Implementation – Detailed design and coding of individual software units.
Clause 9: Software Unit Verification – Testing and analysis of individual software units.
Clause 10: Software Integration and Testing – Integration of software units and testing at the integrated software level.
Clause 11: Verification of Software Safety Requirements – Verification that the complete integrated software meets all software safety requirements.
3. Software Safety Requirements Specification (Clause 6)
The software safety requirements (SSRs) are derived from the technical safety requirements (TSRs) allocated to software elements in the technical safety concept (Part 4) and from the hardware-software interface (HSI) specification. SSRs define what the software must do to contribute to the achievement of the safety goals.
SSRs must address the software’s contribution to safety mechanisms (fault detection, safe state transition, warning management), the software’s interfaces with hardware elements as defined in the HSI, timing and performance requirements (execution time budgets, response times, deadline constraints), operating mode management (normal, degraded, safe state transitions), data integrity and plausibility checking, and self-monitoring and diagnostic capabilities. Each SSR inherits the ASIL of the TSR from which it is derived. SSRs must be written in a manner that is unambiguous, verifiable, and traceable back to the TSRs and ultimately to the safety goals.
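To make this concrete, here is a minimal C sketch of how a plausibility-checking SSR might be implemented at the unit level. The requirement wording, the thresholds, and the function name are hypothetical illustrations, not values taken from the standard:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical SSR: "The software shall flag the vehicle-speed signal as
 * implausible if it exceeds 350 km/h or changes by more than 50 km/h
 * between two consecutive samples." Thresholds are illustrative only. */
#define SPEED_MAX_KMH   350U
#define SPEED_MAX_DELTA  50U

bool speed_is_plausible(uint16_t speed_kmh, uint16_t prev_speed_kmh)
{
    bool plausible = true;

    if (speed_kmh > SPEED_MAX_KMH) {
        plausible = false;                      /* outside physical range */
    } else {
        /* Absolute difference without signed arithmetic. */
        uint16_t delta = (speed_kmh > prev_speed_kmh)
                             ? (uint16_t)(speed_kmh - prev_speed_kmh)
                             : (uint16_t)(prev_speed_kmh - speed_kmh);
        if (delta > SPEED_MAX_DELTA) {
            plausible = false;                  /* implausible jump */
        }
    }
    return plausible;
}
```

Note how the requirement is directly verifiable: each threshold in the SSR maps to a testable branch, which supports the traceability from requirement to test case that Part 6 demands.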
4. Software Architectural Design (Clause 7)
The software architectural design defines the structure of the safety-related software in terms of software components, their interfaces, their interactions, and their allocation to hardware resources. The architecture must be designed to satisfy all software safety requirements while supporting key safety properties such as modularity, encapsulation, and freedom from interference.
The software architecture describes the hierarchical decomposition of the software into components and units, the static structure (how components are organized and connected), the dynamic behavior (how components interact at runtime, including task scheduling, interrupt handling, and communication), the resource allocation (memory mapping, stack allocation, timing budget allocation to tasks), and the mechanisms that support freedom from interference between elements of different ASILs (memory partitioning, temporal protection, communication firewalls).
The software architectural design must be verified to confirm that it is consistent with the software safety requirements, that it supports the required safety mechanisms, and that the architectural properties (modularity, coupling, cohesion) are adequate for the target ASIL.
5. Software Architectural Design Principles for Safety
ISO 26262 Part 6, Table 3 (in the 2018 edition) specifies recommended software architectural design principles, with the recommendation strength varying by ASIL. Key principles include:
Hierarchical structure: The software should be organized in a clear hierarchical decomposition with well-defined layers and interfaces. This improves understanding, maintainability, and testability.
Restricted size of software components: Components should be small enough to be comprehensible and testable. Large, monolithic software modules are difficult to analyze, review, and test, and are more likely to contain latent defects.
Restricted size of interfaces: Interfaces between components should be minimized. The more data that flows between components, the more potential there is for interface errors and unintended interactions.
Strong cohesion within components: Each software component should have a single, well-defined purpose. High cohesion means that all the internal elements of a component work together toward the same goal.
Loose coupling between components: The dependencies between software components should be minimized. Loose coupling reduces the risk that a defect in one component will propagate to other components.
Appropriate scheduling properties: The scheduling of software tasks must guarantee that safety-critical functions meet their timing deadlines. Priority-based preemptive scheduling is commonly used, with safety-critical tasks assigned the highest priorities.
Freedom from interference: When software elements of different ASILs coexist on the same processor, mechanisms must be in place to ensure that lower-ASIL or QM software cannot interfere with higher-ASIL software. This is typically achieved through memory protection (MPU/MMU), temporal protection (timing monitoring), and communication protection (data validation at ASIL boundaries). AUTOSAR OS and its memory partitioning features directly support this requirement.
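The communication-protection aspect can be sketched in C as a checksum-plus-alive-counter check at the ASIL boundary, loosely modeled on the pattern used by AUTOSAR E2E profiles. The struct layout, the XOR checksum, and the function names below are illustrative assumptions; real projects use the qualified E2E library rather than hand-rolled checks:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative message crossing an ASIL boundary. */
typedef struct {
    uint8_t  counter;   /* incremented by the sender on each transmission */
    uint8_t  checksum;  /* simple XOR checksum over counter + payload     */
    uint16_t payload;
} SafetyMsg;

static uint8_t calc_checksum(const SafetyMsg *msg)
{
    return (uint8_t)(msg->counter ^ (uint8_t)(msg->payload & 0xFFU)
                                  ^ (uint8_t)(msg->payload >> 8));
}

/* Receiver-side validation: accept only if the checksum matches and the
 * alive counter advanced by exactly one since the last valid message. */
bool msg_is_valid(const SafetyMsg *msg, uint8_t last_counter)
{
    bool valid = false;

    if (calc_checksum(msg) == msg->checksum) {
        if (msg->counter == (uint8_t)(last_counter + 1U)) {
            valid = true;   /* data intact and not stale/repeated */
        }
    }
    return valid;
}
```

The alive counter detects lost, repeated, or stale messages; the checksum detects corruption. Together they let the higher-ASIL receiver distrust data arriving from a lower-ASIL or QM partition.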
6. Architectural Notations and ASIL-Dependent Methods
The standard recommends the use of specific notations for documenting the software architecture, with the formality of the notation increasing with the ASIL. For ASIL A and B, informal notations (natural language descriptions with block diagrams) are generally acceptable. For ASIL C, semi-formal notations are highly recommended – these include standard graphical notations such as UML (Unified Modeling Language), SysML, or AUTOSAR’s architectural description formats. For ASIL D, semi-formal notations are highly recommended, and formal notations (mathematically rigorous specifications such as Z notation or state machines with formal semantics) are recommended for the most critical design elements.
The use of model-based development (where the software architecture and behavior are described in models using tools like MATLAB/Simulink, Stateflow, or TargetLink) is increasingly common and is explicitly supported by the 2018 edition of the standard. When model-based development is used, the models serve as the architectural and detailed design documentation, and code can be automatically generated from the models.
7. Software Unit Design and Implementation (Clause 8)
Software unit design and implementation is where the code is actually written. Clause 8 specifies the requirements for the detailed design of individual software units and for their implementation in source code. The standard provides extensive guidance through its tables, which list design principles and implementation practices recommended at each ASIL level.
The detailed design of each software unit must specify the unit’s functionality, its interfaces (inputs, outputs, parameters), its internal data structures and algorithms, its error handling behavior, and its timing characteristics. The implementation must adhere to the selected coding guidelines (MISRA C/C++ or equivalent) and must apply the design principles listed in the standard’s tables.
8. Coding Guidelines – MISRA C, MISRA C++, and Beyond
Coding guidelines are one of the most important systematic fault avoidance measures in Part 6. The standard (Clause 5.4.7 and Table 1) specifies topics that modeling and coding guidelines must cover, including enforcement of low complexity, use of language subsets (e.g., MISRA C), use of strong typing, defensive programming techniques, use of naming conventions, and avoidance of implementation-defined and undefined behavior.
The standard explicitly references MISRA C and MISRA C++ as appropriate language subsets for safety-related automotive software development. In practice, MISRA compliance is effectively mandatory for ISO 26262 projects – it is the most widely used and recognized coding standard in the automotive industry, and assessors and OEMs universally expect its application.
9. MISRA C/C++ in Detail – The Automotive Coding Standard
MISRA (Motor Industry Software Reliability Association) coding guidelines are a set of rules and directives designed to promote the safe, reliable, and portable use of the C and C++ programming languages in safety-critical and security-critical systems. Originally developed by the UK Motor Industry Research Association in the 1990s, MISRA has become the globally recognized standard for C/C++ coding in automotive embedded systems.
MISRA C:2023 (Current Edition)
The current edition of MISRA C (published 2023) contains approximately 175 guidelines categorized as Mandatory rules (must always be complied with – no deviations permitted), Required rules (must be complied with unless a formal deviation is documented with justification), and Advisory rules (should be complied with where practical, but formal deviations are not required). The guidelines cover topics such as avoidance of undefined and implementation-defined behavior in the C language, restrictions on pointer arithmetic and dynamic memory allocation, restrictions on recursion (which complicates worst-case execution time analysis), requirements for initialization of variables, restrictions on complex expressions and side effects, and requirements for explicit type conversions.
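To give a flavor of what these guidelines mean in practice, here is a short C sketch written in the spirit of several MISRA themes — fixed-width types, explicit widening and narrowing conversions, a `default` in every `switch`, and a single exit point. Rule numbers are deliberately omitted (consult the MISRA C document itself), and the functions are hypothetical:

```c
#include <stdint.h>

/* Fixed-width types instead of plain int/char improve portability. */
uint16_t scale_raw_adc(uint16_t raw)
{
    uint32_t scaled;

    /* Widen explicitly before arithmetic so the intermediate product
     * cannot overflow the operand type (no implicit-promotion traps). */
    scaled = ((uint32_t)raw * 5000U) / 1023U;

    /* Explicit narrowing cast documents the intentional truncation. */
    return (uint16_t)scaled;
}

typedef enum { MODE_NORMAL, MODE_DEGRADED, MODE_SAFE } OpMode;

uint8_t mode_priority(OpMode mode)
{
    uint8_t prio;

    switch (mode) {
    case MODE_NORMAL:   prio = 0U; break;
    case MODE_DEGRADED: prio = 1U; break;
    case MODE_SAFE:     prio = 2U; break;
    default:            prio = 2U; break;  /* every switch has a default */
    }
    return prio;                           /* single exit point */
}
```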
MISRA C++:2023
For projects using C++, MISRA C++:2023 provides equivalent guidelines tailored to the C++ language. It addresses C++-specific concerns such as safe use of inheritance and polymorphism, restrictions on exception handling in safety-critical contexts, safe use of templates and generic programming, and avoidance of features with unpredictable timing behavior.
MISRA Compliance Framework
Achieving MISRA compliance is not simply about fixing every violation found by a static analysis tool. The MISRA Compliance:2020 framework defines a structured approach: organizations create a Guideline Re-categorization Plan (GRP) that maps each MISRA rule to the appropriate category for their project, a Guideline Enforcement Plan (GEP) that specifies how each rule will be enforced (static analysis tool, code review, etc.), and a Deviations process for documenting and justifying any non-compliance with required rules. This framework is widely accepted by OEMs and assessors as the proper approach to demonstrating MISRA compliance in an ISO 26262 context.
10. Software Design Principles from Part 6 Tables
Part 6, Table 6 (2018 edition) lists design principles for software unit design and implementation with ASIL-dependent recommendations. Key principles include ensuring correct order of execution, consistency of interfaces between units, correctness of data flow and control flow, simplicity and readability of the code, robustness through defensive programming (validating input parameters, checking return values, handling unexpected conditions), and avoidance of unnecessary complexity (minimizing cyclomatic complexity, limiting nesting depth, restricting function length).
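The defensive-programming principle can be sketched as follows: validate every input, provide a defined fallback, and report errors through an explicit status code that the caller is obliged to check. The status enum, thresholds, and function name are illustrative assumptions:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { STATUS_OK, STATUS_NULL_PTR, STATUS_OUT_OF_RANGE } Status;

#define TEMP_MIN_C  (-40)
#define TEMP_MAX_C  150

Status clamp_temperature(int16_t raw_c, int16_t *out_c)
{
    Status st = STATUS_OK;

    if (out_c == NULL) {                        /* reject null output ptr */
        st = STATUS_NULL_PTR;
    } else if ((raw_c < TEMP_MIN_C) || (raw_c > TEMP_MAX_C)) {
        *out_c = 0;                             /* defined fallback value */
        st = STATUS_OUT_OF_RANGE;
    } else {
        *out_c = raw_c;                         /* valid reading */
    }
    return st;
}
```

Every abnormal path is explicit and testable, which also makes the function easy to drive to full structural coverage during unit verification.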
Static analysis tools configured with MISRA rules and software metrics (such as the HIS metrics suite commonly used in the automotive industry) provide automated enforcement of many of these principles, generating quantitative evidence that can be included in the safety case.
11. Software Unit Verification (Clause 9)
Software unit verification is the process of testing and analyzing individual software units to confirm that they correctly implement their detailed design specifications and that they are free from defects. Clause 9 specifies the methods for unit verification, with the required rigor depending on the ASIL.
Unit verification includes both static methods (analysis of the source code without executing it) and dynamic methods (executing the code with specific test cases and checking the outputs). The standard’s tables recommend specific methods at each ASIL level:
Static analysis methods: Walk-throughs and inspections of the source code, static code analysis (using tools like Polyspace, PC-lint, LDRA, Coverity, or QA-MISRA), control flow analysis, and data flow analysis. Static analysis is recommended at all ASIL levels and is highly recommended for ASIL C and D.
Dynamic testing methods: Requirements-based testing (deriving test cases from the unit’s requirements specification), equivalence class partitioning (dividing the input domain into classes of equivalent behavior and testing at least one value from each class), boundary value analysis (testing at the boundaries of equivalence classes, where defects are most likely to lurk), and error guessing (creating test cases based on the tester’s experience with common defect patterns). Requirements-based testing is highly recommended at all ASIL levels; boundary value analysis is highly recommended for ASIL B and above.
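Equivalence-class and boundary-value thinking can be illustrated with a hypothetical voltage-window check (the limits and function name are invented for illustration). The input domain splits into three equivalence classes — below the window, inside it, and above it — and the interesting test values sit on either side of each boundary:

```c
#include <stdbool.h>
#include <stdint.h>

/* Function under test (illustrative): is the supply voltage within the
 * valid operating window of 9.0 V .. 16.0 V? */
#define V_MIN_MV  9000U
#define V_MAX_MV 16000U

bool voltage_in_window(uint16_t mv)
{
    return (mv >= V_MIN_MV) && (mv <= V_MAX_MV);
}

/* Boundary-value test set derived from the two boundaries:
 *   8999 -> false (just below), 9000 -> true (lower bound),
 *  16000 -> true (upper bound), 16001 -> false (just above).
 * Four tests cover all three equivalence classes and both boundaries. */
```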
12. Unit Testing Methods and ASIL-Dependent Requirements
Part 6, Table 7 (2018 edition) specifies the methods for software unit testing at each ASIL level. The following table summarizes the key recommendations:
| Method | ASIL A | ASIL B | ASIL C | ASIL D |
|---|---|---|---|---|
| Requirements-based testing | ++ | ++ | ++ | ++ |
| Interface testing | + | ++ | ++ | ++ |
| Equivalence classes & boundary values | + | ++ | ++ | ++ |
| Error guessing based on knowledge or experience | + | + | + | + |
Legend: ++ = Highly recommended, + = Recommended, o = No recommendation
13. Structural Code Coverage – Statement, Branch, and MC/DC
Structural code coverage is a measure of how thoroughly the test cases exercise the source code. It is a critical metric in ISO 26262 Part 6 that provides evidence of testing completeness. The standard defines three progressively more demanding levels of structural coverage, with the required level depending on the ASIL:
| Coverage Level | ASIL A | ASIL B | ASIL C | ASIL D |
|---|---|---|---|---|
| Statement Coverage (C0) | ++ | ++ | + | + |
| Branch Coverage (C1) | + | ++ | ++ | ++ |
| MC/DC Coverage | + | + | + | ++ |
Legend: ++ = Highly recommended, + = Recommended
Statement Coverage (C0): The most basic level. It requires that every executable statement in the source code is executed at least once during testing. Statement coverage identifies dead code (code that is never executed) but does not verify that all decision branches are exercised.
Branch Coverage (C1): Also known as decision coverage. It requires that every branch of every decision point (if-else, switch-case, while, for) is exercised at least once. Branch coverage is more thorough than statement coverage because it verifies both the true and false outcomes of every decision. Branch coverage subsumes statement coverage.
Modified Condition/Decision Coverage (MC/DC): The most demanding level. It requires that every condition within a decision is shown to independently affect the outcome of that decision. MC/DC subsumes branch coverage and provides evidence that the logic of complex boolean expressions has been thoroughly tested. MC/DC is highly recommended for ASIL D and is one of the most labor-intensive testing requirements in the standard.
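The gap between statement and branch coverage shows up already in a one-line guard. The illustrative function below reaches 100% statement coverage with a single test case, yet that test never exercises the implicit false branch of the `if` — only branch (C1) coverage exposes the untested path:

```c
#include <stdint.h>

/* Illustrative saturation function. A single test with value > limit
 * executes every statement, but the value <= limit path (the implicit
 * false branch of the if) is only counted by branch coverage. */
int32_t apply_limit(int32_t value, int32_t limit)
{
    if (value > limit) {
        value = limit;      /* covered by the value > limit test */
    }
    return value;           /* reached on both paths */
}
```

A second test with `value <= limit` is needed to close branch coverage, which is exactly why C1 subsumes C0 but not vice versa.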
14. MC/DC (Modified Condition/Decision Coverage) Explained
MC/DC is a structural coverage criterion that originated in the aviation industry (DO-178B, carried forward into DO-178C) and was adopted by ISO 26262 for the highest ASIL levels. It is the gold standard of structural coverage for safety-critical software.
A decision is a boolean expression that controls the flow of the program (e.g., the expression in an if-statement). A condition is an atomic boolean sub-expression within a decision. For example, in the decision if (A && B || C), there are three conditions: A, B, and C.
MC/DC requires that for each condition in a decision, there exists a pair of test cases where that condition changes value (true ↔ false), all other conditions remain unchanged, and the overall decision outcome also changes. This demonstrates that each condition independently affects the decision — meaning the testing has proven that the logic is implemented correctly for each individual condition, not just for the overall decision.
For a decision with N conditions, exhaustive multiple condition coverage (testing every combination of condition values) would require 2^N test cases – exponential growth. MC/DC limits this to approximately N+1 test cases while still providing strong evidence of correct logic implementation. This is the sense in which the criterion is “modified”: it relaxes exhaustive combination testing to something practically achievable while retaining most of its defect-detection power.
Practical Example: Consider the decision if (speed > 100 && brake_pressed == false). This decision has two conditions: (speed > 100) and (brake_pressed == false). MC/DC requires at minimum three test cases: Test 1 – speed=120, brake_pressed=false → decision TRUE. Test 2 – speed=80, brake_pressed=false → decision FALSE (speed condition changed, brake unchanged, decision changed). Test 3 – speed=120, brake_pressed=true → decision FALSE (brake condition changed, speed unchanged, decision changed). These three test cases prove that each condition independently affects the decision outcome.
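The worked example above can be written out as a testable C function, with its three MC/DC test cases recorded alongside it:

```c
#include <stdbool.h>
#include <stdint.h>

/* The decision from the worked example, as a testable function.
 * Two conditions: (speed > 100) and (brake_pressed == false). */
bool overspeed_warning(uint16_t speed, bool brake_pressed)
{
    return (speed > 100U) && (brake_pressed == false);
}

/* MC/DC test set (N+1 = 3 tests for N = 2 conditions):
 *   T1: speed=120, brake=false -> true   (baseline)
 *   T2: speed= 80, brake=false -> false  (only speed flipped; outcome flipped)
 *   T3: speed=120, brake=true  -> false  (only brake flipped; outcome flipped)
 * T1/T2 prove the speed condition's independent effect; T1/T3 prove the
 * brake condition's independent effect. */
```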
Achieving 100% MC/DC coverage across an entire safety-critical software codebase is a significant engineering and cost investment. Automated test generation tools (such as Parasoft C/C++test, VectorCAST, Tessy, or Cantata) are virtually essential for achieving this efficiently, and these tools must themselves be qualified according to Part 8’s tool qualification requirements.
15. Software Integration and Testing (Clause 10)
Software integration testing verifies the correct interaction between software components when they are combined. While unit testing verifies individual units in isolation, integration testing focuses on the interfaces and interactions between units – data flow across interfaces, correct sequencing of function calls, shared resource management, and inter-component communication.
The integration testing strategy should test the correct data exchange between components, the correct sequencing and timing of interactions, the correct handling of error conditions at interface boundaries, and the correct behavior of the integrated software under nominal and fault conditions. The methods for software integration testing are specified in Part 6, Table 10, and include requirements-based testing, interface testing, fault injection testing, and resource usage evaluation. Structural coverage at the integration level (function coverage, call coverage) is also recommended for higher ASIL levels.
16. Verification of Software Safety Requirements (Clause 11)
Clause 11 addresses the verification of the complete, integrated software against the software safety requirements specification. This is the highest level of software-only testing – verifying that the fully integrated software (before hardware integration) meets all its SSRs. Testing at this level is performed on the target hardware platform or on a representative simulation environment. The standard recommends requirements-based testing, performance testing (verifying timing and resource usage), and robustness testing (verifying behavior under abnormal or boundary conditions).
17. Static Analysis – Finding Defects Without Execution
Static analysis is the examination of source code without executing it. It is a highly effective method for finding certain categories of defects early in the development process – often during or immediately after coding, before the code has even been compiled and linked. ISO 26262 Part 6 recommends static analysis at all ASIL levels, with increasing emphasis for ASIL C and D.
Types of static analysis relevant to Part 6 include coding standard compliance checking (MISRA C/C++ rule enforcement), data flow analysis (detecting uninitialized variables, unused variables, and data dependency issues), control flow analysis (detecting unreachable code, infinite loops, and missing return paths), semantic analysis (detecting potential runtime errors such as division by zero, null pointer dereference, array bounds overflow, and integer overflow without executing the code), and software metrics analysis (calculating complexity metrics, coupling metrics, and cohesion metrics to verify architectural design principles).
Tools like Polyspace (MathWorks), Coverity (Black Duck), PC-lint, LDRA, QA-MISRA, and Klocwork are widely used in the automotive industry for static analysis. Many of these tools are certified by third-party assessment bodies (such as TÜV SÜD) for use in ISO 26262 projects, simplifying the tool qualification process.
18. Model-Based Development for ISO 26262
Model-based development (MBD) is an approach where software functionality is specified and designed using graphical models (such as Simulink/Stateflow models) rather than directly in source code. Code is then automatically generated from the models using qualified code generators (such as Embedded Coder or TargetLink). The 2018 edition of ISO 26262 explicitly supports MBD and provides guidance on its application.
When MBD is used, the models serve as the software architectural design and unit design documentation. Verification activities can be performed at the model level (model-in-the-loop testing, software-in-the-loop testing) in addition to or instead of code-level verification. The code generator itself becomes a software tool that must be qualified according to Part 8 – typically at a high Tool Confidence Level, since errors in code generation could introduce systematic failures into the safety-related software.
MBD offers several advantages for ISO 26262 compliance: models provide a semi-formal or formal representation of the software design (satisfying the architectural notation requirements for higher ASILs), automatic code generation eliminates a class of manual coding errors, and model-level testing can be performed early in the development process before the target hardware is available. However, MBD also introduces challenges: the models themselves must be verified, the code generator must be qualified, and the generated code must still achieve the required structural coverage targets.
19. Software Tool Qualification (Part 8 Reference)
All software tools used in the development, testing, and verification of safety-related software must be evaluated for their potential impact on functional safety. ISO 26262 Part 8, Clause 11 defines the Tool Confidence Level (TCL) classification and the corresponding qualification requirements.
Tools are classified based on two factors: the Tool Impact (TI) – whether a tool malfunction can introduce errors into, or fail to detect errors in, the safety-related work products (TI1 if it cannot, TI2 if it can) – and the Tool error Detection (TD) – the confidence that such a malfunction will be prevented or detected by other measures (TD1 high, TD2 medium, TD3 low). The combination of TI and TD determines the TCL: TI1 always yields TCL1 (lowest qualification effort), while TI2 yields TCL1, TCL2, or TCL3 for TD1, TD2, and TD3 respectively. A code generator used for ASIL D software, for example, would typically be classified as TI2 (it can introduce errors) with TD3 (errors in the generated code are unlikely to be systematically detected downstream), resulting in TCL3 and requiring the most extensive qualification. A static analysis tool used to verify coding standard compliance might be classified as TI2 (it can fail to detect violations) with TD1 (manual code review provides an alternative detection path), resulting in TCL1.
Tool qualification methods include increased confidence from use (demonstrating extensive, successful use of the tool in similar contexts), evaluation of the tool development process, and validation of the tool through a dedicated test suite. Many tool vendors provide pre-built Tool Qualification Support Kits (QSKs) that include test suites, qualification reports, and safety manuals specifically designed to simplify the ISO 26262 qualification process for their customers.
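The TI/TD-to-TCL determination is small enough to write down directly. The sketch below encodes the decision logic of Part 8, Clause 11; the enum and function names are illustrative, not from the standard:

```c
/* TCL determination per ISO 26262-8, Clause 11:
 *   TI1            -> TCL1 (no relevant safety impact)
 *   TI2 + TD1/2/3  -> TCL1 / TCL2 / TCL3 respectively. */
typedef enum { TI1 = 1, TI2 = 2 } ToolImpact;
typedef enum { TD1 = 1, TD2 = 2, TD3 = 3 } ToolErrorDetection;

int tool_confidence_level(ToolImpact ti, ToolErrorDetection td)
{
    int tcl;

    if (ti == TI1) {
        tcl = 1;                    /* no impact: TCL1 regardless of TD */
    } else {
        switch (td) {
        case TD1: tcl = 1; break;   /* high detection confidence       */
        case TD2: tcl = 2; break;   /* medium detection confidence     */
        default:  tcl = 3; break;   /* low: full qualification needed  */
        }
    }
    return tcl;
}
```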
20. Key Work Products of Part 6
Part 6 produces the following essential work products: software safety requirements specification, software architectural design specification, software unit design specifications, source code (with MISRA compliance reports), software unit verification report (including unit test results and structural coverage reports), software integration test specification and report, verification report for software safety requirements, static analysis reports (coding standard violations, data flow/control flow analysis results), and software verification plan. All work products must maintain bidirectional traceability: from software safety requirements down to source code and test cases, and from test results back up to the requirements they verify.
21. Complete ASIL-Dependent Methods and Coverage Summary
The following summary captures the overall progression of software development rigor across ASIL levels:
ASIL A: Coding guidelines applied, statement coverage and requirements-based unit testing highly recommended. The overall approach resembles good software engineering practice with added documentation and traceability.
ASIL B: MISRA C/C++ compliance highly recommended, branch coverage highly recommended, requirements-based testing with boundary value analysis highly recommended. Semi-formal architectural notation recommended. Increased emphasis on static analysis.
ASIL C: MISRA C/C++ compliance highly recommended, branch coverage highly recommended (with MC/DC recommended), all major testing methods highly recommended. Semi-formal architectural notation highly recommended. Comprehensive static analysis including semantic analysis highly recommended.
ASIL D: Everything highly recommended at ASIL C remains at ASIL D, with the addition that formal methods are recommended for the most critical design elements, MC/DC coverage is the expected standard, and the overall verification effort reaches its maximum. The cost and effort difference between ASIL C and ASIL D software development is significant, primarily driven by the MC/DC coverage requirement and the more stringent independence requirements for confirmation reviews.
22. Common Mistakes and How to Avoid Them
Mistake 1: Treating MISRA compliance as a checkbox exercise. Simply running a static analysis tool and generating a report is insufficient. MISRA compliance requires a documented Guideline Re-categorization Plan, a systematic approach to handling deviations, and regular code reviews. Assessors will examine the substance of your compliance process, not just the tool output.
Mistake 2: Achieving structural coverage through artificial test cases. Adding test cases solely to increase coverage numbers, without testing meaningful requirements-driven scenarios, produces coverage statistics without genuine safety value. Structural coverage should complement requirements-based testing, not replace it. If requirements-based tests do not achieve the target coverage, it often indicates that the requirements are incomplete or that there is dead code in the implementation.
Mistake 3: Not qualifying software tools. All tools used in the development and verification of safety-related software must be evaluated for their TCL and qualified accordingly. Using unqualified tools – especially for critical activities like code generation, static analysis, and unit testing – is a significant non-conformity that assessors will flag.
Mistake 4: Insufficient freedom from interference analysis. When safety-critical and non-safety software share the same processor, simply relying on “good coding practices” for isolation is inadequate. Hardware-enforced memory protection (MPU/MMU), temporal monitoring, and communication data validation are needed, and their effectiveness must be demonstrated through analysis and testing.
Mistake 5: Neglecting software integration testing. Many teams invest heavily in unit testing but underinvest in integration testing. Integration defects – timing issues, race conditions, interface mismatches, shared resource conflicts – are among the most common and most dangerous software defects. Integration testing should receive attention proportional to the architectural complexity of the software.
Mistake 6: Starting MC/DC analysis too late. MC/DC coverage achievement should be planned from the beginning of the software design process, not attempted as a late-stage activity. Software designed with testability in mind (low complexity, small functions, limited boolean expression depth) is much easier to achieve MC/DC coverage for than complex, deeply nested code written without coverage considerations.
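A small sketch shows why boolean expression depth drives MC/DC cost. For the hypothetical decision below (names invented for illustration), MC/DC requires four test cases, each demonstrating that one condition independently flips the outcome; every extra nested condition multiplies this burden:

```c
#include <stdbool.h>

/* Hypothetical decision: request braking if the pedal is pressed AND
 * the vehicle is moving, OR an emergency flag is set. */
bool brake_request(bool pedal, bool moving, bool emergency)
{
    return (pedal && moving) || emergency;
}

/* A minimal MC/DC test set (4 cases vs 8 for exhaustive testing):
 *   pedal moving emergency -> result
 *   T     T      F         -> T   (baseline true)
 *   F     T      F         -> F   (pedal independently flips outcome)
 *   T     F      F         -> F   (moving independently flips outcome)
 *   T     F      T         -> T   (emergency independently flips outcome)
 */
```

Code written with shallow, well-factored decisions like this one reaches MC/DC targets with a handful of requirements-based tests; deeply nested conditions written without coverage in mind do not.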
23. Frequently Asked Questions
Q1: Is MISRA C compliance legally mandatory for ISO 26262?
MISRA C is not legally mandatory, and ISO 26262 does not explicitly require MISRA C – it recommends the use of “language subsets” and references MISRA as an appropriate example. However, in practice, MISRA C/C++ is universally expected by OEMs, assessors, and the industry. Not using MISRA (or an equivalent coding standard) would require strong justification and would likely be challenged during any functional safety assessment.
Q2: Do I need 100% MC/DC coverage for ASIL D?
The standard highly recommends MC/DC coverage for ASIL D but does not specify a mandatory percentage. However, any uncovered code must be analyzed and justified – either as unreachable code (dead code that should be removed) or as code that is adequately covered by other means. In practice, most OEMs and assessors expect coverage approaching 100%, with documented justification for any gaps.
Q3: Can I use C++ for ASIL D software?
Yes, C++ can be used for safety-related software development at any ASIL, provided that an appropriate language subset is applied (such as MISRA C++:2023 or AUTOSAR C++14) and that the specific risks of C++ features (dynamic memory allocation, exceptions, complex templates) are adequately managed. Many AUTOSAR Adaptive Platform components, for example, are developed in C++ at high ASIL levels.
Q4: What is the relationship between Part 6 and AUTOSAR?
AUTOSAR provides a standardized software architecture and platform that directly supports many Part 6 requirements. The AUTOSAR OS provides memory partitioning and timing protection for freedom from interference. The AUTOSAR communication stack provides E2E (end-to-end) data protection. AUTOSAR BSW modules are developed following MISRA guidelines, and the AUTOSAR methodology facilitates requirements traceability and configuration management. Using AUTOSAR does not automatically guarantee ISO 26262 compliance, but it provides a strong architectural foundation. At PiEmbSysTech, we cover AUTOSAR architecture and Adaptive AUTOSAR in detail.
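The idea behind E2E protection can be sketched in a few lines: a CRC and a rolling counter travel with the payload, so the receiver detects both corruption and lost or repeated messages. This is a simplified illustration only, not the AUTOSAR E2E library API; the real E2E profiles fix the CRC polynomial and start value, include a data ID in the CRC, and define precise counter-delta semantics:

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified E2E-style message: CRC and counter guard the payload. */
typedef struct {
    uint8_t  crc;     /* CRC-8 over counter + payload bytes */
    uint8_t  counter; /* increments with every transmission */
    uint16_t payload; /* protected signal value */
} E2eMsg;

/* CRC-8 with polynomial 0x1D (illustrative choice). */
static uint8_t crc8(const uint8_t *data, uint32_t len)
{
    uint8_t crc = 0xFFU;
    for (uint32_t i = 0U; i < len; i++) {
        crc ^= data[i];
        for (uint8_t bit = 0U; bit < 8U; bit++) {
            crc = (crc & 0x80U) ? (uint8_t)((crc << 1) ^ 0x1DU)
                                : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

/* Sender side: stamp the message with counter and CRC. */
void e2e_protect(E2eMsg *msg, uint16_t value, uint8_t counter)
{
    uint8_t buf[3];
    msg->payload = value;
    msg->counter = counter;
    buf[0] = counter;
    buf[1] = (uint8_t)(value & 0xFFU);
    buf[2] = (uint8_t)(value >> 8);
    msg->crc = crc8(buf, 3U);
}

/* Receiver side: verify the CRC and that the counter advanced by one. */
bool e2e_check(const E2eMsg *msg, uint8_t last_counter)
{
    uint8_t buf[3];
    buf[0] = msg->counter;
    buf[1] = (uint8_t)(msg->payload & 0xFFU);
    buf[2] = (uint8_t)(msg->payload >> 8);
    return (crc8(buf, 3U) == msg->crc)
        && (msg->counter == (uint8_t)(last_counter + 1U));
}
```

In a real AUTOSAR system, these checks are generated and configured per signal group via the E2E protection wrapper rather than written by hand.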
Q5: How does model-based development affect structural coverage requirements?
When code is auto-generated from models, structural coverage can be measured at either the model level or the code level (or both). If coverage is measured only at the model level, additional evidence must be provided that the code generator preserves the structural properties of the model. The most robust approach is to measure coverage at both levels – model-level coverage during model-in-the-loop testing and code-level coverage during software-in-the-loop or hardware-in-the-loop testing.
24. Conclusion
ISO 26262 Part 6 – Product Development at the Software Level is where the majority of the effort and cost in modern automotive safety development is concentrated. With software controlling an ever-increasing proportion of vehicle functions – from basic engine management to Level 4 autonomous driving – the rigor applied to software development directly determines the safety of the vehicle.
The combination of MISRA-compliant coding, structured architectural design, systematic unit and integration testing, and demanding structural coverage criteria (culminating in MC/DC for ASIL D) creates a comprehensive framework for avoiding systematic software failures. While the effort required – especially for ASIL D – is substantial, the methods are proven and the tools are mature. Organizations that invest in building the right processes, training the right people, and selecting the right tools can achieve ISO 26262 software compliance efficiently and effectively.
This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. Next in our series: ISO 26262 Part 7 – Production, Operation, Service & Decommissioning. Be sure to review our earlier posts on Part 1, Part 2, Part 3, Part 4, and Part 5.
Stay safe. Stay disciplined. Keep engineering the future.
– The PiEmbSysTech Team