ISO 26262 Part 8 Supporting Processes: Tool Qualification (TCL), Configuration Management & Safety Support Explained
Hello, automotive safety engineers and development tool specialists! Welcome to the eighth deep-dive post in our comprehensive ISO 26262 series at PiEmbSysTech. In this article, we will explore ISO 26262 Part 8 – Supporting Processes, the part that defines the cross-cutting processes that run in parallel with and support all technical development activities throughout the safety lifecycle.
Part 8 is often described as the “backbone” of ISO 26262 compliance. While Parts 3 through 7 define what to develop and how to verify it, Part 8 defines the essential infrastructure processes that ensure the entire development effort is traceable, controlled, documented, and performed with qualified tools. Without robust Part 8 processes, even the most technically excellent development work cannot be demonstrated to be compliant – because the evidence cannot be traced, the configurations cannot be verified, and the tools cannot be trusted.
The most widely discussed topic in Part 8 is software tool qualification – the process of determining whether the tools used in safety-related development are trustworthy enough for their intended purpose. This single clause (Clause 11) has generated more industry discussion, confusion, and cost than perhaps any other single requirement in the standard. We will demystify it completely in this article.
1. What is ISO 26262 Part 8 and Why is It Essential?
ISO 26262 Part 8: Supporting Processes specifies the requirements for cross-cutting processes that support the entire safety lifecycle. These processes are not specific to any single phase or technical domain – they operate continuously throughout concept, system, hardware, software, and production phases, providing the infrastructure that ensures all safety activities are planned, executed, documented, controlled, and verifiable.
Without Part 8, the safety case falls apart. Requirements cannot be traced if there is no requirements management process. Design changes cannot be controlled if there is no change management process. Work products cannot be trusted if there is no configuration management process. Test results cannot be relied upon if the testing tools have not been qualified. Part 8 provides the connective tissue that holds the entire ISO 26262 framework together.
2. Structure of Part 8 – The Supporting Process Clauses
ISO 26262-8:2018 contains the following major normative clauses:
Clause 5: Interfaces within Distributed Developments – Development Interface Agreements (DIAs) for managing distributed development across the supply chain.
Clause 6: Specification and Management of Safety Requirements – Requirements management processes ensuring traceability, completeness, and consistency of all safety requirements.
Clause 7: Configuration Management – Controlling the configuration of all safety-related work products throughout the lifecycle.
Clause 8: Change Management – Managing changes to safety-related items and elements through a controlled process with safety impact analysis.
Clause 9: Verification – General requirements for verification activities across all parts of the standard.
Clause 10: Documentation – Requirements for the creation, maintenance, and management of safety documentation.
Clause 11: Confidence in the Use of Software Tools – The tool classification and qualification process (the most extensively discussed clause in Part 8).
Clause 12: Qualification of Software Components – Qualifying pre-existing software components for reuse in safety-related applications.
Clause 13: Evaluation of Hardware Elements – Evaluating hardware elements not developed according to ISO 26262 for use in safety-related systems.
Clause 14: Proven in Use Argument – Using field experience evidence to argue for the safety of previously developed elements.
3. Distributed Development and the Development Interface Agreement (DIA)
In the automotive supply chain, safety-related items are rarely developed by a single organization. An OEM defines the vehicle-level safety requirements, a Tier-1 supplier develops the ECU system, semiconductor vendors provide the microcontrollers and sensors, and software tool vendors provide the development and testing tools. Distributed development is the norm, not the exception.
The Development Interface Agreement (DIA) is the contractual document that defines the responsibilities, deliverables, and interfaces between cooperating parties in a distributed development. The DIA must specify which party is responsible for which safety lifecycle activities, which work products are exchanged between the parties (and in what format, at what level of detail), the ASIL of the safety requirements allocated to each party, the assumptions and constraints that each party is working under, the methods and tools to be used, and the confirmation measures (reviews, audits, assessments) and their allocation between parties.
A well-defined DIA prevents the dangerous situation where both parties assume the other is responsible for a critical safety activity – resulting in the activity being performed by neither. It is the single most important document for managing functional safety across the supply chain.
4. Requirements Management (Clause 6)
Requirements management ensures that all safety requirements – from safety goals through functional safety requirements, technical safety requirements, hardware safety requirements, and software safety requirements – are specified correctly, maintained consistently, and traced bidirectionally throughout the development lifecycle.
Key requirements management activities include maintaining bidirectional traceability from every safety requirement to its source (the parent requirement or safety goal from which it was derived), to its implementation (the hardware or software element that implements it), and to its verification (the test case or analysis that confirms it is met). This end-to-end traceability is a fundamental ISO 26262 compliance requirement and is typically managed using dedicated requirements management tools such as IBM DOORS, Polarion, Jama Connect, or Codebeamer.
Requirements must be checked for completeness (every safety goal has been refined into lower-level requirements, and every requirement has been allocated and verified), consistency (no contradictions between requirements at the same or different levels), verifiability (every requirement can be objectively tested or analyzed), and unambiguity (every requirement has a single, clear interpretation).
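Parts of these checks can be automated. The following is a minimal sketch of a consistency pass over an exported requirements set; the data model, field names, and requirement IDs are illustrative inventions, not the export format of any particular tool:

```python
# Illustrative consistency checks over an exported requirements set.
# Field names ("parent", "verified_by") and IDs are hypothetical.

requirements = [
    {"id": "SG-01",  "parent": None,     "verified_by": ["TC-100"]},
    {"id": "FSR-01", "parent": "SG-01",  "verified_by": ["TC-101"]},
    {"id": "TSR-01", "parent": "FSR-01", "verified_by": []},          # not yet verified
    {"id": "TSR-02", "parent": "FSR-99", "verified_by": ["TC-102"]},  # broken parent link
]

def check_traceability(reqs):
    """Return (requirements with broken upward links, unverified requirements)."""
    known_ids = {r["id"] for r in reqs}
    broken_links = [r["id"] for r in reqs
                    if r["parent"] is not None and r["parent"] not in known_ids]
    unverified = [r["id"] for r in reqs if not r["verified_by"]]
    return broken_links, unverified

broken, unverified = check_traceability(requirements)
print("Broken parent links:", broken)          # ['TSR-02']
print("Unverified requirements:", unverified)  # ['TSR-01']
```

In a real project such checks would run against the requirements tool's export or API on every baseline, so that broken traceability is found continuously rather than during the safety assessment.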
5. Configuration Management (Clause 7)
Configuration management (CM) is the process of maintaining the integrity and traceability of all safety-related work products throughout the lifecycle. It ensures that every version of every document, source file, design artifact, test result, and analysis report is uniquely identified, controlled, and retrievable.
Configuration management must cover all work products referenced in the safety case, all source code and build configurations for safety-related software, all hardware design files (schematics, PCB layouts, BOMs), all test specifications, test scripts, and test results, and the safety plan, safety case, and all confirmation measure reports. The CM system must support version control, baseline management (creating named configurations of related work products at specific milestones), change history tracking, and access control. For software, version control systems like Git or SVN, combined with build management systems that ensure reproducible builds from baselined source code, are standard practice.
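The core idea behind a verifiable baseline can be sketched in a few lines: record a cryptographic hash of every work product at the milestone, then recompute the hashes later to prove nothing has drifted. A real CM system (Git tags, PLM baselines) does this far more robustly; this sketch only illustrates the principle, and the file names used with it are hypothetical:

```python
import hashlib
from pathlib import Path

def create_baseline(name, files):
    """Record a SHA-256 digest per work product so the baseline can
    later be verified bit-for-bit."""
    manifest = {str(f): hashlib.sha256(Path(f).read_bytes()).hexdigest()
                for f in files}
    return {"baseline": name, "items": manifest}

def verify_baseline(baseline):
    """Return the files whose current content no longer matches the
    recorded baseline digest."""
    return [f for f, digest in baseline["items"].items()
            if hashlib.sha256(Path(f).read_bytes()).hexdigest() != digest]
```

Any non-empty result from `verify_baseline` means the named configuration is no longer the one that was reviewed and released, which is exactly the situation configuration management exists to detect.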
6. Change Management (Clause 8)
Change management is the controlled process for evaluating, approving, implementing, and verifying changes to safety-related items and elements. Every change – whether it originates from a defect correction, a customer requirement change, a component obsolescence, a field issue, or a design optimization – must be subjected to a safety impact analysis before it is approved.
The change management process must identify and document the proposed change, analyze the potential impact of the change on functional safety (including impact on safety requirements, safety analyses, hardware metrics, and the safety case), classify the change based on its safety impact (ranging from no impact to significant impact requiring re-analysis and re-testing), obtain appropriate approval before implementation (with the safety manager’s involvement for safety-impacting changes), implement the change with proper configuration management, verify the change through testing and analysis proportional to its impact, and update all affected work products (including safety analyses, test results, and the safety case).
Change management is closely linked to Part 7’s post-SOP modification requirements – changes after the start of production follow the same fundamental process but with additional considerations for field impact and production line changes.
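The gating logic in the change process above can be sketched as follows; the impact classes and approver roles are simplified placeholders, not terms defined by the standard:

```python
# Illustrative change-gating sketch. Impact classes and the roles
# required to approve each class are hypothetical simplifications.

IMPACT_APPROVERS = {
    "no_impact":   {"change_board"},
    "minor":       {"change_board", "safety_manager"},
    "significant": {"change_board", "safety_manager", "safety_assessor"},
}

def may_implement(change):
    """A change may be implemented only after the safety impact analysis
    is documented and every role required for its impact class has approved."""
    if not change.get("impact_analysis_done"):
        return False
    required = IMPACT_APPROVERS[change["impact_class"]]
    return required.issubset(change["approvals"])

cr = {"id": "CR-042", "impact_class": "significant",
      "impact_analysis_done": True,
      "approvals": {"change_board", "safety_manager"}}
print(may_implement(cr))  # False: safety assessor approval still missing
```

The point of the sketch is the ordering: impact analysis first, classification-dependent approval second, implementation only after both.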
7. Verification (Clause 9)
Clause 9 provides the general requirements for verification activities referenced throughout the standard. Verification is the process of providing evidence that a work product of a particular lifecycle phase fulfils the requirements of the previous phase. It answers the question: “Did we build the product right?”
Verification methods include reviews (formal and informal examination of work products by qualified reviewers), analysis (systematic evaluation using analytical methods such as FMEA, FTA, or static code analysis), simulation (executing models to verify behavior), and testing (executing the actual hardware or software and comparing results against expected outcomes). The required verification methods and their rigor depend on the ASIL of the element being verified.
A verification review of a work product shall be performed by a person or persons different from the author(s) of the work product. This independence requirement ensures objectivity and increases the probability of detecting errors that the author may have overlooked. The standard specifies that the verification shall include checks for completeness, correctness, consistency, and feasibility of the work product against its requirements.
8. Documentation Management (Clause 10)
Documentation management ensures that all safety-related documentation is created, maintained, and managed according to defined standards. ISO 26262 is fundamentally an evidence-based standard – compliance is demonstrated through work products (documents, reports, analyses, test results). If the documentation is incomplete, inconsistent, or uncontrolled, the safety case cannot be established.
Documentation requirements include clear identification and versioning of all documents, controlled review and approval processes, defined retention periods (safety documentation must be retained for the expected operational lifetime of the vehicle plus a defined period), controlled distribution and access, and protection against unauthorized modification or loss.
9. Software Tool Qualification – The Complete Guide (Clause 11)
Software tool qualification is the most extensively discussed and most frequently misunderstood topic in Part 8. Clause 11 specifies the process for determining whether software tools used in the development of safety-related systems are sufficiently trustworthy for their intended purpose.
The fundamental question is: “If this tool malfunctions, could it introduce an error into the safety-related product or fail to detect an error?” If the answer is yes, and if there are no adequate external measures to catch the tool’s mistake, then the tool requires formal qualification.
The tool qualification process follows three steps: Step 1 – Tool Classification (determining the Tool Impact and Tool Error Detection to derive the Tool Confidence Level), Step 2 – TCL Determination (using the TI × TD matrix to assign the TCL), and Step 3 – Tool Qualification (applying one or more of the four defined qualification methods if TCL2 or TCL3 is assigned).
10. Step 1: Tool Classification – TI and TD Determination
The first step in tool qualification is tool classification, which evaluates two independent dimensions: the Tool Impact (TI) – the potential consequences of a tool malfunction – and the Tool Error Detection (TD) – the likelihood that a tool-induced error would be caught by subsequent development activities.
Tool classification is performed on a per-use-case basis, not on the tool as a whole. A single tool may have multiple use cases with different TI and TD classifications. The highest TCL across all use cases determines the overall qualification requirement for the tool.
11. Tool Impact (TI) – TI1 vs TI2 with Examples
Tool Impact (TI) evaluates whether a malfunction of the tool can introduce or fail to detect errors in a safety-related item or element. Two levels are defined:
TI1 (No direct impact): A malfunction of the tool cannot introduce errors into the safety-related output, and a malfunction cannot fail to detect safety-relevant errors. Examples: a project management tool (like JIRA) used to track tasks – its malfunction does not introduce errors into the product code. A text editor used to write documentation – the editor cannot introduce errors into the compiled software. A defect tracking tool used for issue management.
TI2 (Direct impact possible): A malfunction of the tool can introduce errors into the safety-related output, or a malfunction can fail to detect errors. Examples: a compiler or code generator – if it malfunctions, it could generate incorrect object code from correct source code. A static analysis tool – if it malfunctions, it could fail to detect a MISRA violation or a potential runtime error. A unit testing tool – if it malfunctions, it could produce false passing test results. A requirements management tool – if it malfunctions, it could lose traceability links or corrupt requirement data.
If TI = TI1, the tool is immediately classified as TCL1, and no further qualification is required. Only tools with TI = TI2 proceed to the TD evaluation.
12. Tool Error Detection (TD) – TD1, TD2, TD3 with Examples
For tools classified as TI2, the next step evaluates the Tool Error Detection (TD) – the confidence that errors introduced or missed by the tool will be detected by subsequent development activities (either tool-internal measures or tool-external measures in the development process). Three levels are defined:
TD1 (High confidence in detection): There is high confidence that a tool malfunction producing erroneous output will be detected by subsequent process steps. Example: a code generator whose output (source code) is subsequently verified through code review, static analysis, and unit testing – these subsequent activities provide high confidence that code generation errors would be caught. Another example: a static analysis tool whose findings are supplemented by independent dynamic testing – if the static tool misses a defect, the dynamic testing is likely to catch it.
TD2 (Medium confidence in detection): There is medium confidence in detection. Example: a static analysis tool that is the primary (but not only) means of verifying compliance with a coding standard – some checks may also be performed during code review, but the coverage is not comprehensive enough for TD1.
TD3 (Low confidence in detection): There is low confidence that errors introduced or missed by the tool would be caught. Example: a code generator whose generated code is not independently verified (no code review, no static analysis of generated code, no separate testing of generated code). Another example: a test automation tool that is the sole source of test execution evidence — if the tool reports a false pass, there is no other mechanism to detect the error.
13. Step 2: TCL Determination – The TI × TD Matrix
The Tool Confidence Level (TCL) is determined by combining TI and TD using the following matrix:
|   | TI1 | TI2 |
|---|---|---|
| TD1 | TCL1 | TCL1 |
| TD2 | TCL1 | TCL2 |
| TD3 | TCL1 | TCL3 |
TCL1 (Green): No specific tool qualification required. Standard quality processes are sufficient. This applies when TI = TI1 (regardless of TD), or when TI = TI2 but TD = TD1 (high confidence that tool errors will be caught by subsequent activities).
TCL2 (Yellow): Medium tool confidence level. Tool qualification is required using at least one of the four defined methods. Applies when TI = TI2 and TD = TD2.
TCL3 (Red): Highest tool confidence level. Tool qualification is required with the most stringent methods. Applies when TI = TI2 and TD = TD3. This is the most expensive and time-consuming qualification scenario.
A critical insight: you can often reduce the TCL by improving TD – adding verification activities downstream of the tool that increase the confidence of detecting tool errors. For example, if a code generator is classified as TI2/TD3 (TCL3), adding independent static analysis and unit testing of the generated code can improve TD to TD1, reducing the classification to TI2/TD1 (TCL1) and eliminating the need for formal tool qualification entirely. This is a widely used and legitimate strategy for managing tool qualification costs.
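The matrix above is small enough to write down directly as a lookup table. The sketch below encodes it and then replays the code-generator scenario from the previous paragraph, showing how improving TD changes the outcome:

```python
# The TI x TD matrix from ISO 26262-8, Clause 11, as a lookup table.

TCL_MATRIX = {
    ("TI1", "TD1"): "TCL1", ("TI1", "TD2"): "TCL1", ("TI1", "TD3"): "TCL1",
    ("TI2", "TD1"): "TCL1", ("TI2", "TD2"): "TCL2", ("TI2", "TD3"): "TCL3",
}

def determine_tcl(ti, td):
    """Return the Tool Confidence Level for a given TI/TD classification."""
    return TCL_MATRIX[(ti, td)]

# A code generator whose output is not independently verified:
print(determine_tcl("TI2", "TD3"))  # TCL3 -- formal qualification required

# The same tool after adding static analysis and unit testing of the
# generated code, raising error-detection confidence to TD1:
print(determine_tcl("TI2", "TD1"))  # TCL1 -- no formal qualification required
```

The table makes the asymmetry visible: TI1 yields TCL1 regardless of TD, so the TD evaluation only matters for TI2 tools.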
14. Step 3: Tool Qualification Methods (1a, 1b, 1c, 1d)
For tools classified as TCL2 or TCL3, ISO 26262 Part 8, Clause 11.4.6 defines four methods for performing tool qualification:
Method 1a – Increased Confidence from Use: Demonstrating that the tool has been used successfully in previous projects under similar conditions, with similar use cases, and in the same version – with documented evidence that no tool malfunctions affected safety-related outputs. This method is easiest to apply but has limitations: it requires evidence from prior use of the same tool version in a similar context, which may not exist for new tools or new versions.
Method 1b – Evaluation of the Tool Development Process: Demonstrating that the tool was developed following a quality-assured software development process (such as one conforming to ISO 9001 or a recognized software development standard). This method requires access to information about the tool vendor’s development processes, which may be available through tool vendor documentation or third-party audits. This method is more applicable to COTS (commercial off-the-shelf) tools where the vendor can provide process evidence.
Method 1c – Validation of the Software Tool: Performing dedicated validation testing of the tool – executing a defined test suite that exercises the tool’s functions relevant to its safety-critical use cases and verifying that the tool produces correct outputs. This is the most direct method and is highly recommended for TCL3 at ASIL D. Many tool vendors provide pre-built Tool Qualification Support Kits (QSKs) that include validation test suites, test procedures, and report templates specifically designed for ISO 26262 tool qualification.
Method 1d – Development in Accordance with a Safety Standard: Demonstrating that the tool itself was developed in compliance with a relevant safety standard (such as IEC 61508 for general functional safety, or ISO 26262 itself). This is the most rigorous method and provides the highest confidence, but it is also the most expensive and typically only applied by major EDA and tool vendors for their flagship safety-critical products.
15. Selecting the Right Qualification Method by TCL and ASIL
ISO 26262 Part 8, Tables 4 and 5 provide ASIL-dependent recommendations for which qualification methods are appropriate for each TCL level. The following summary captures the key guidance:
For TCL2: Methods 1a (increased confidence from use) and 1c (validation) are recommended or highly recommended depending on the ASIL. Method 1b (development process evaluation) is recommended for lower ASILs. Method 1d is recommended but not commonly applied at TCL2.
For TCL3: Method 1c (validation) is highly recommended for all ASILs and is effectively the expected approach for the highest-criticality tool qualification scenarios. Method 1d is highly recommended for ASIL C and D. Methods 1a and 1b may be used as complementary evidence but are generally insufficient as standalone methods at TCL3 for higher ASILs.
In practice, Method 1c (validation) is the most widely used qualification method in the automotive industry because it provides direct, objective evidence of tool correctness through testing. Tool vendors increasingly provide TÜV-certified Tool Qualification Support Kits that implement Method 1c for their customers, significantly reducing the qualification effort.
16. Practical Tool Qualification Examples
Let us walk through several practical examples to illustrate how the tool qualification process works in real projects:
Example 1: C Compiler (e.g., GCC, Green Hills, IAR) – TI = TI2 (compiler malfunction can generate incorrect object code from correct source). If the generated object code is verified through unit testing with structural coverage and the test results are validated on the target hardware, TD = TD1. Result: TCL1 – no formal qualification required. If code is not independently tested, TD = TD3, resulting in TCL3 – extensive qualification required.
Example 2: MISRA Static Analysis Tool (e.g., Polyspace, PC-lint, QA-MISRA) – TI = TI2 (tool malfunction could fail to detect a coding standard violation). If the static analysis is supplemented by code reviews that also check for MISRA compliance, TD = TD2. Result: TCL2 – qualification required (Method 1c recommended). If the tool is the sole means of MISRA verification, TD = TD3, resulting in TCL3.
Example 3: Unit Testing Tool (e.g., VectorCAST, Cantata, Tessy) – TI = TI2 (tool malfunction could report false passing results). If test results are supplemented by independent code review and integration testing that would catch the same defects, TD = TD1 or TD2. Result: TCL1 or TCL2. Many testing tool vendors provide TÜV-certified qualification kits specifically for this scenario.
Example 4: Requirements Management Tool (e.g., IBM DOORS, Polarion) – TI = TI2 (tool malfunction could corrupt traceability data). If traceability is periodically verified through manual reviews or automated consistency checks, TD = TD1 or TD2. Result: TCL1 or TCL2.
Example 5: Project Management Tool (e.g., JIRA) – TI = TI1 (tool malfunction cannot introduce errors into the safety-related product). Result: TCL1 – no qualification required regardless of TD.
Example 6: Simulink/Embedded Coder (Model-Based Code Generation) – TI = TI2 (code generator malfunction can produce incorrect safety-related code). If generated code is verified through back-to-back testing, SIL testing, and MC/DC coverage analysis, TD = TD1. Result: TCL1. MathWorks provides the IEC Certification Kit, which supports Methods 1c and 1d for Simulink and Embedded Coder qualification.
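Because classification is per use case, the overall qualification requirement for a tool is the highest TCL among its use cases. The following sketch illustrates this for a single hypothetical code-generation tool; the use cases and their TI/TD assignments are invented for illustration:

```python
# Per-use-case classification for one hypothetical tool; the overall
# requirement is the maximum TCL across use cases.

TCL_MATRIX = {
    ("TI1", "TD1"): "TCL1", ("TI1", "TD2"): "TCL1", ("TI1", "TD3"): "TCL1",
    ("TI2", "TD1"): "TCL1", ("TI2", "TD2"): "TCL2", ("TI2", "TD3"): "TCL3",
}
TCL_ORDER = {"TCL1": 1, "TCL2": 2, "TCL3": 3}

use_cases = [
    # (use case,                     TI,    TD)
    ("Generate production code",    "TI2", "TD1"),  # output is unit-tested
    ("Generate calibration data",   "TI2", "TD2"),  # only partial downstream checks
    ("Produce requirements report", "TI1", "TD3"),  # no impact on the product
]

per_use_case = {uc: TCL_MATRIX[(ti, td)] for uc, ti, td in use_cases}
overall = max(per_use_case.values(), key=TCL_ORDER.get)
print(per_use_case)
print("Overall qualification requirement:", overall)  # TCL2
```

Here the calibration-data use case alone drives the tool to TCL2, even though the headline use case (production code generation) lands at TCL1 thanks to its downstream verification.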
17. Qualification of Software Components (Clause 12)
Clause 12 addresses the qualification of pre-existing software components for reuse in safety-related applications. This applies to COTS (Commercial Off-The-Shelf) software libraries, open-source software components, in-house software components originally developed for non-safety or different-ASIL applications, and third-party software components from suppliers.
The qualification process requires evaluating whether the software component meets the safety requirements for its intended use, including analysis of the component’s development process and documentation, analysis of the component’s failure modes and their potential impact on safety, testing of the component against the safety requirements allocated to it, and assessment of the adequacy of the component’s existing verification evidence. The rigor of the qualification depends on the ASIL of the safety requirement allocated to the component. For ASIL D applications, reusing unqualified COTS components in safety-critical functions is extremely difficult to justify.
18. Evaluation of Hardware Elements (Clause 13)
Clause 13 addresses the evaluation of hardware elements that were not developed in compliance with ISO 26262 but are intended for use in safety-related systems. This applies to commercially available semiconductor devices, passive components, and other hardware elements that were developed according to general automotive quality standards (such as AEC-Q100 for ICs) but not specifically for ISO 26262.
The evaluation must determine whether the hardware element is suitable for its intended safety-related application, considering its failure modes, failure rates, diagnostic capabilities, and any assumptions made in the system-level safety analysis. Hardware element evaluation ties directly to the hardware element classification system (Class I, II, III) described in Part 5.
19. Proven in Use Argument (Clause 14)
The proven in use argument allows an organization to use evidence from field operation to argue that a previously developed element (hardware or software) meets safety requirements without repeating the full ISO 26262 development process. This clause provides a structured framework for making this argument credibly.
To support a proven in use argument, the organization must demonstrate that the element has operated in a comparable context (similar operational conditions, similar interfaces, similar environmental stresses) for a sufficient period with sufficient volume (enough vehicle-years of operation to provide statistical confidence), that field failure data has been systematically collected and analyzed, that no safety-relevant failures have been identified (or that any identified failures have been analyzed and resolved), and that the element has not been modified since the field experience was collected (or that any modifications are minor and their impact has been evaluated).
The proven in use argument is particularly valuable for components with long production histories and extensive field data — such as mature microcontroller families, established sensor technologies, or well-proven software libraries. However, it requires rigorous documentation and is subject to scrutiny during functional safety assessments.
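To get a feel for the "sufficient volume" requirement, a common back-of-the-envelope model assumes a constant failure rate (exponential model) and zero observed failures, giving a one-sided confidence bound of T ≥ −ln(1 − CL) / λ_target failure-free operating hours. The standard's own observation-period requirements are more detailed; this sketch only illustrates why large fleets and long service histories are needed:

```python
import math

def required_operating_hours(target_failure_rate, confidence=0.7):
    """Cumulative failure-free operating hours needed to claim the target
    failure rate at the given one-sided confidence level, assuming a
    constant-rate (exponential) model and zero observed failures:
    T >= -ln(1 - CL) / lambda_target."""
    return -math.log(1.0 - confidence) / target_failure_rate

# e.g. demonstrating 1e-8 failures/hour (the order of magnitude of
# strict hardware failure-rate targets) at 70% confidence:
hours = required_operating_hours(1e-8, 0.7)
print(f"{hours:.3g} failure-free operating hours")  # ~1.2e8 hours
```

Roughly 120 million failure-free operating hours for a single such claim explains why proven in use arguments are realistic only for components with genuinely large field populations.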
20. Key Work Products of Part 8
Part 8 produces the following essential work products: Development Interface Agreement (DIA) for each distributed development relationship, requirements traceability matrix (bidirectional traceability from safety goals to implementation and verification), configuration management records (baselines, version histories, change logs), change management records (change requests, impact analyses, approval records, verification evidence), verification reports (review records, analysis reports, test reports for each verified work product), tool classification reports (TI/TD assessment and TCL determination for each tool), tool qualification reports (qualification method selection, execution evidence, and results for TCL2/TCL3 tools), software component qualification reports, hardware element evaluation reports, and proven in use documentation.
21. Common Mistakes and How to Avoid Them
Mistake 1: Assuming TCL1 means no work is needed. While TCL1 does not require formal tool qualification methods, the tool classification analysis itself must still be performed and documented. Assessors will ask for the tool classification report and the rationale for the TI and TD assignments.
Mistake 2: Performing tool classification at the tool level instead of the use-case level. The standard requires classification per use case – a tool may have TCL1 for some use cases and TCL3 for others. Classifying at the tool level (assigning a single TCL to the entire tool) can result in either over-qualification (wasting resources) or under-qualification (missing critical use cases).
Mistake 3: Relying solely on vendor TÜV certificates without understanding the scope. A vendor’s TÜV certificate covers a specific tool version, specific use cases, and specific operating conditions. If your usage differs from the certified scope – different version, different compiler target, different operating system – the certificate may not apply directly to your project. Always verify that the certification scope matches your actual usage context.
Mistake 4: Neglecting to plan tool qualification early. Tool qualification can take weeks or months, especially for TCL3 scenarios. Discovering late in the project that a critical tool requires qualification can cause significant schedule delays. Perform tool classification during project planning and schedule qualification activities accordingly.
Mistake 5: Inadequate requirements traceability. Incomplete or broken traceability chains are one of the most common findings during functional safety assessments. Invest in proper requirements management tooling and processes from the start. Retrospectively establishing traceability across thousands of requirements is far more expensive than maintaining it incrementally during development.
Mistake 6: Treating change management as bureaucracy rather than safety protection. The change management process exists to prevent uncontrolled changes from compromising safety. Organizations that view it as bureaucratic overhead tend to circumvent it – until a poorly managed change causes a field issue. Safety impact analysis for every change is not optional overhead; it is essential risk management.
22. Frequently Asked Questions
Q1: Do I need to qualify every tool used in the project?
No. You need to classify every tool (determining TI and TD) and document the classification. Only tools with TCL2 or TCL3 require formal qualification. Tools with TI1 or with TI2/TD1 are classified as TCL1 and require no further qualification. In practice, many tools end up at TCL1 after proper classification.
Q2: Can a tool vendor’s TÜV certificate replace project-specific tool qualification?
A vendor’s certificate can significantly simplify your qualification effort, but it does not entirely replace project-specific evaluation. You must verify that the certificate covers the specific tool version, use cases, and operating environment relevant to your project. The certificate can serve as evidence supporting Method 1c or 1d, but you still need to document your project-specific tool classification analysis.
Q3: What is the relationship between tool qualification and ASIL?
The TCL is determined independently of the ASIL (it depends on TI and TD). However, the selection and rigor of the qualification methods depend on the ASIL of the item being developed. For a tool classified as TCL3, the required qualification effort is greater when used in an ASIL D project than in an ASIL B project. The standard’s Tables 4 and 5 provide the specific ASIL-dependent recommendations.
Q4: How often must tool qualification be repeated?
Tool qualification must be re-evaluated when the tool version changes (new releases may introduce new bugs or change behavior), when the tool’s use cases change, when the development environment changes (different target hardware, different OS, different compiler version), or when field experience reveals tool malfunctions. Many organizations establish a tool management process that tracks tool versions and triggers re-evaluation when changes occur.
Q5: Does an open-source tool require the same qualification as a commercial tool?
Yes. The tool qualification requirements apply regardless of whether the tool is commercial, open-source, or internally developed. An open-source compiler or test framework used in safety-critical development requires the same TI/TD classification and (if applicable) the same qualification methods as a commercial tool. The challenge with open-source tools is that Method 1b (development process evaluation) may be more difficult to apply due to limited documentation of the development process.
Q6: What is the difference between tool qualification (Part 8, Clause 11) and software component qualification (Part 8, Clause 12)?
Tool qualification applies to development and verification tools – software used in the development process but not part of the final product (compilers, test tools, static analyzers, requirements management tools). Software component qualification applies to software components that become part of the final safety-related product (COTS libraries, third-party driver software, reused software modules). They have different objectives and different evaluation criteria.
23. Conclusion
ISO 26262 Part 8 – Supporting Processes provides the essential infrastructure that makes the entire functional safety framework operational. From requirements traceability to configuration management, from change control to documentation, and from tool qualification to component qualification – these processes ensure that safety development work is controlled, verifiable, and trustworthy.
The tool qualification process, while complex, follows a logical and structured approach: classify the tool based on its potential impact and the detectability of its errors, determine the required confidence level, and apply appropriate qualification methods proportional to the risk. By understanding and strategically managing TI and TD classifications, organizations can often reduce tool qualification costs significantly – for example, by adding downstream verification activities that increase TD and reduce the TCL.
This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. Next in our series: ISO 26262 Part 9 – ASIL-Oriented and Safety-Oriented Analyses. Be sure to review our earlier posts on Part 1 through Part 7.
Stay safe. Stay traceable. Keep engineering the future.
– The PiEmbSysTech Team