Proven in Use Argument in ISO 26262: Field Data, Observation Period & ASIL Criteria Explained
Hello, automotive safety engineers and project managers! Many automotive systems in production today were designed before ISO 26262 was published – yet they have been operating safely in millions of vehicles for years. How can these legacy components be reused in new ISO 26262-compliant projects without a full re-development? The answer is the Proven in Use Argument, defined in ISO 26262 Part 8, Clause 14.

In this comprehensive guide at PiEmbSysTech, we will explain what proven in use means, what criteria must be met, how field data is evaluated, what the observation period requirements are, and how proven in use compares to other reuse approaches like SEooC and software component qualification. Let us begin.
1. What is the Proven in Use Argument?
The Proven in Use Argument is an alternative compliance path in ISO 26262 that allows an existing hardware element, software element, or complete system – which was not originally developed according to ISO 26262 – to be used in a safety-relevant application. The argument relies on field experience data (service history) to demonstrate that the element has operated safely and reliably in comparable conditions for a sufficient volume and duration.
In essence, the proven in use argument says: “This component was not developed to ISO 26262, but it has been deployed in millions of vehicles for years without safety-relevant failures – this field track record provides sufficient evidence of its suitability for our safety application.”
Proven in use is not a blanket exemption from ISO 26262 – it is a structured, evidence-based argument with specific criteria that must be met. It allows tailoring of the safety lifecycle activities based on the strength of the field experience evidence.
2. Why Proven in Use Exists – The Legacy Component Problem
When ISO 26262 was first published in 2011, millions of vehicles were already on the road with E/E systems developed before the standard existed — systems that had been operating safely for years. Requiring a complete re-development of these proven systems to ISO 26262 would be economically impractical and technically unnecessary if their field track record demonstrates adequate safety. Proven in use provides the path for these legacy components to be reused in new ISO 26262-compliant projects without full re-development, while still providing structured safety evidence.
Common candidates for proven in use include mature ECU hardware designs with extensive field history, established software modules (communication stacks, diagnostic routines) with millions of operating hours, sensor assemblies with proven reliability data, and actuator components with well-documented field performance.
3. Where Proven in Use Appears in ISO 26262
Part 8, Clause 14: The primary normative clause defining the proven in use argument – criteria, process, field data requirements, and work products.
Part 2, Clause 6.4.5: Allows tailoring of safety activities based on a proven in use argument – specifically, if a safety activity is tailored as a result of a proven in use argument, the tailoring must comply with Part 8 Clause 14.
Part 2, Clause 6.4.12: The safety plan must include provisions for proven in use arguments when applicable.
Part 11, Clause 5.2: Provides specific guidance for semiconductor components using the proven in use argument – particularly relevant for established silicon products being introduced into safety applications.
4. Defining the Candidate for Proven in Use
The candidate is the specific hardware element, software element, or system for which the proven in use argument is being made. The candidate definition must include a precise description of the element (version, configuration, part number), the function(s) the element performs, the conditions of use in the field application (operating environment, duty cycle, interfaces), and the conditions of use in the target application (the new ISO 26262-compliant project). The candidate’s conditions of use in the target application must be identical to or have a very high degree of commonality with the conditions under which the field experience was gained. If the target application conditions differ significantly from the field conditions, the proven in use argument may be weakened or invalidated.
5. The Two Essential Criteria
ISO 26262 Part 8, Clause 14 requires two essential criteria to be satisfied for a valid proven in use argument:
Criterion 1 – Relevance of the field data: The field experience data must be relevant to the target application – meaning the conditions of use (operating environment, functional scope, interfaces, duty cycle) in the field application must be sufficiently similar to the target application. If the candidate was used as a QM component in the field but is now intended for an ASIL D safety function, the field data must still be relevant to the safety-related aspects of the candidate’s operation.
Criterion 2 – Changes since the observation period: Any changes to the candidate since the beginning of the observation period must be evaluated for their impact on the proven in use argument. Changes that affect the candidate’s safety-relevant behavior may invalidate the argument or require re-establishment of the field experience evidence. This includes hardware design changes (PCB layout, component substitutions), software changes (bug fixes, feature additions, configuration changes), manufacturing process changes, and changes in the operating environment or conditions of use.
6. Field Data Requirements – Relevance and Quality
The strength of the proven in use argument depends entirely on the quality and relevance of the field data. ISO 26262 requires that the field data include the total number of deployed units and the total accumulated operating hours (or kilometers), the observation period duration, the number and nature of observed failures (distinguishing safety-relevant failures from non-safety-relevant failures), the conditions of use during the observation period (operating environment, duty cycle, vehicle types), and evidence that the field monitoring program was effective in capturing safety-relevant failures (not just warranty claims – field monitoring must be systematic enough to detect failures that could lead to safety goal violations).
The data source must be credible — warranty claim databases, field failure return analysis records, dealer service reports, and OEM quality databases are typical sources. Self-reported data without independent verification may not be accepted by assessors.
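The aggregation described above can be sketched in code. The record fields and source names below are illustrative assumptions, not terms defined by ISO 26262 – the point is that safety-relevant failures are tracked separately from other failures, and that service hours are accumulated across all data sources:

```python
from dataclasses import dataclass

@dataclass
class FieldRecord:
    """One field data source for the proven in use analysis (illustrative)."""
    source: str                    # e.g. "warranty DB", "field return analysis"
    units_deployed: int            # units covered by this data source
    hours_per_unit: float          # average accumulated operating hours per unit
    safety_relevant_failures: int  # failures relevant to the safety goals
    other_failures: int            # non-safety-relevant (excluded from the rate)

    @property
    def service_hours(self) -> float:
        return self.units_deployed * self.hours_per_unit

def observed_failure_rate(records: list[FieldRecord]) -> float:
    """Safety-relevant failures per accumulated operating hour, all sources."""
    total_hours = sum(r.service_hours for r in records)
    total_failures = sum(r.safety_relevant_failures for r in records)
    return total_failures / total_hours

# Hypothetical candidate with two data sources and no safety-relevant failures
records = [
    FieldRecord("warranty DB", 1_200_000, 2_500.0, 0, 41),
    FieldRecord("field returns", 300_000, 3_000.0, 0, 7),
]
print(f"total service hours: {sum(r.service_hours for r in records):.3e}")
print(f"observed rate: {observed_failure_rate(records):.2e} /h")
```

Note that the 41 and 7 non-safety-relevant failures do not enter the rate – only failures that could lead to a safety goal violation count against the incidence target.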
7. Observation Period and Volume Requirements
ISO 26262 Part 8, Clause 14 specifies minimum observation periods and volume requirements that depend on the target ASIL. The requirements are expressed in terms of total service hours (operating hours accumulated across all deployed units) and the maximum acceptable incidence rate (safety-relevant failures per operating hour).
The mathematical basis is the chi-squared distribution: it is used to compute the total service hours required to demonstrate, with a specified confidence level, that the failure rate is below the target value. The formula from ISO 26262-8:2018 Clause 14 is:
t_service = χ²(2f+2, 1−p) / (2 × λ_target)
Where: t_service is the required total service hours, f is the number of observed failures (ideally 0), p is the confidence level (typically 0.95 or 95%), and λ_target is the maximum acceptable failure rate (aligned with the PMHF target for the ASIL). Here χ²(2f+2, 1−p) denotes the chi-squared quantile with 2f+2 degrees of freedom and upper-tail probability 1−p – for f = 0 and p = 0.95 it evaluates to approximately 5.99.
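The formula can be evaluated directly with SciPy's chi-squared quantile function (assumed available; the function name and example values below are illustrative):

```python
from scipy.stats import chi2

def required_service_hours(failures: int, confidence: float,
                           lam_target: float) -> float:
    """Required total service hours per ISO 26262-8 Clause 14.

    chi2.ppf(confidence, 2f+2) is the quantile with upper-tail probability
    1 - confidence, matching the chi-squared(2f+2, 1-p) term in the formula.
    """
    return chi2.ppf(confidence, 2 * failures + 2) / (2.0 * lam_target)

# ASIL D illustration: zero observed failures, 95% confidence,
# 10 FIT target (1e-8 failures per hour)
t = required_service_hours(failures=0, confidence=0.95, lam_target=1e-8)
print(f"{t:.3e} required service hours")  # roughly 3.0e8 hours
```

With zero observed failures the 2-degree-of-freedom quantile is about 5.99, so the required service hours are roughly 3/λ_target – a quick rule of thumb for sizing a proven in use campaign.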
In practical terms, achieving ASIL D proven in use typically requires approximately 5 million or more component units in the field with several years of operation and zero safety-relevant failures – a very demanding requirement that limits proven in use to high-volume, well-established components.
8. Incidence Rate KPIs by ASIL
The target incidence rate (acceptable safety-relevant failure rate) is aligned with the PMHF targets for the target ASIL:
| Target ASIL | Target Failure Rate | Approximate Volume × Years Needed |
|---|---|---|
| ASIL B | <10⁻⁷ /h (100 FIT) | ~300K units × 3 years |
| ASIL C | <10⁻⁷ /h (100 FIT) | ~300K units × 3 years |
| ASIL D | <10⁻⁸ /h (10 FIT) | ~5M units × 4–6 years |
These are approximate values assuming zero observed failures and 95% confidence. Actual requirements depend on operating hours per unit and the specific statistical calculation.
9. Change Management – What Invalidates Proven in Use?
The proven in use argument is valid only for the specific version and configuration of the candidate that was in the field during the observation period. Changes that can invalidate the argument include: hardware component substitutions (even “equivalent” replacements may have different failure modes), PCB layout changes (affecting thermal behavior, EMC characteristics), software updates (bug fixes that change execution paths, feature additions), manufacturing process changes (new production line, new solder profile, different test coverage), and changes in operating conditions (different vehicle platform with different thermal or vibration environment).
When a change is made to a proven in use candidate, an impact analysis must be performed to determine whether the change affects the safety-relevant behavior. If the change is significant, the observation period may need to be restarted, or the changed element may need to be developed/qualified using other ISO 26262 compliance paths (SEooC, qualification, or full Part 5/6 development).
10. Field Monitoring Program Requirements
A critical prerequisite for the proven in use argument is an effective field monitoring program (as defined in ISO 26262-8:2018 Clause 14.4.5.3) that systematically collects and analyzes field failure data. The field monitoring program must be designed to capture safety-relevant failures, not just warranty claims or customer complaints. Many failures that are safety-relevant may be detected by on-board diagnostics (DTC codes), reported during dealer service visits, or identified through systematic field return analysis – but only if the monitoring program is designed to flag these events. A field monitoring program that only tracks warranty cost but does not systematically identify safety-relevant failure modes does not satisfy the proven in use requirements.
11. The Proven in Use Process – Step by Step
Step 1 – Define the candidate: Precisely identify the element (version, configuration, part number), its function, and its conditions of use in the field.
Step 2 – Define the target application: Describe how the candidate will be used in the new ISO 26262-compliant project, including the target ASIL, the safety requirements, and the operating conditions.
Step 3 – Evaluate relevance (Criterion 1): Assess whether the field conditions are sufficiently similar to the target application conditions. Document the comparison and any differences.
Step 4 – Evaluate changes (Criterion 2): Identify any changes to the candidate since the beginning of the observation period. Perform impact analysis for each change.
Step 5 – Collect and analyze field data: Gather the field experience data – total units deployed, total operating hours, number and nature of observed failures. Calculate the achieved failure rate and compare against the target ASIL’s PMHF requirement.
Step 6 – Perform the statistical evaluation: Using the chi-squared formula, determine whether the accumulated field experience is sufficient to demonstrate the required failure rate at the specified confidence level.
Step 7 – Document the proven in use analysis report: Compile all evidence into the proven in use analysis report – a key work product for the safety case.
Step 8 – Plan the safety activities: Based on the proven in use credit, determine which safety lifecycle activities can be tailored and which must still be performed for the target project. Include the proven in use aspects in the safety plan.
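Steps 5 and 6 above can be sketched as a single sufficiency check. This is a minimal illustration with invented numbers, assuming SciPy is available; a real evaluation would also document the relevance and change analyses from Steps 3 and 4:

```python
from scipy.stats import chi2

def proven_in_use_sufficient(units: int, hours_per_unit: float,
                             observed_failures: int, lam_target: float,
                             confidence: float = 0.95) -> bool:
    """Step 5: accumulate service hours from the field data.
    Step 6: compare against the chi-squared requirement
            t_service = chi2(2f+2 dof, confidence quantile) / (2 * lambda_target)."""
    accumulated = units * hours_per_unit
    required = chi2.ppf(confidence, 2 * observed_failures + 2) / (2 * lam_target)
    return accumulated >= required

# Hypothetical ASIL B candidate: 400k units, 2500 h each,
# zero safety-relevant failures, 100 FIT target (1e-7 /h)
print(proven_in_use_sufficient(400_000, 2_500.0, 0, 1e-7))
```

A passing check only closes the statistical part of the argument – the relevance of the field conditions (Criterion 1) and the absence of invalidating changes (Criterion 2) must still be demonstrated separately.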
12. Work Products for Proven in Use
ISO 26262 requires the following work products for a proven in use argument: the proven in use aspects in the safety plan (how the proven in use argument is used to tailor safety activities), the description of the candidate (precise identification, function, conditions of use), the proven in use analysis report (field data, statistical evaluation, relevance assessment, change impact analysis), and the field monitoring evidence (data sources, monitoring program description, failure records).
13. Limitations and Restrictions
Proven in use has several important limitations: it only addresses random and systematic failures that manifest during operation – it does not address failures related to aging or wear-out mechanisms that have not yet manifested during the observation period. The argument requires very high field volumes (especially for ASIL D), which limits its applicability to high-volume components. Changes to the candidate can invalidate the argument, requiring re-evaluation or alternative compliance paths. The field monitoring program must be effective — passive warranty tracking alone is insufficient. The argument only covers the candidate as used in comparable conditions — if the target application subjects the candidate to different stresses (higher temperature, different duty cycle), the field data may not be relevant. Additionally, proven in use provides a lower level of safety assurance than full ISO 26262 development because it relies on statistical inference rather than systematic development processes.
14. Proven in Use vs SEooC vs Qualification – Comparison
| Aspect | Proven in Use (Part 8 Cl.14) | SEooC (Part 10 Cl.9) | Qualification (Part 8 Cl.12) |
|---|---|---|---|
| Developed to ISO 26262? | No (pre-existing) | Yes (based on assumptions) | No (qualified through additional evidence) |
| Evidence basis | Field experience data (statistical) | Development process evidence (systematic) | Additional analysis and testing evidence |
| Applies to | HW, SW, or system in the field | HW, SW, or system (new development) | SW components only |
| Volume required | Very high (millions of units for ASIL D) | Not applicable | Not applicable |
| Best for | Legacy high-volume components with extensive field history | New components developed for reuse across multiple applications | COTS software not originally developed to ISO 26262 |
| Assurance level | Lower (statistical inference) | Higher (systematic process compliance) | Medium (additional evidence supplements existing development) |
15. Proven in Use for Semiconductors
Part 11 addresses proven in use for semiconductor components specifically. For semiconductor IP providers and integrators, the proven in use argument can provide a means to demonstrate that an existing IP design is appropriate for a safety application. However, the conditions for semiconductor proven in use are restrictive – the IP must have been in the field in comparable operating conditions, with sufficient volume and duration, and with an effective field monitoring program that captures semiconductor-specific failure modes. Given the typical time-to-market pressures in the semiconductor industry, many safety experts recommend full ISO 26262 development (as SEooC) over proven in use for new safety semiconductor products, as the development approach provides stronger and more sustainable safety assurance.
16. Common Mistakes and How to Avoid Them
Mistake 1: Using warranty data alone as field evidence. Warranty claims capture customer-visible failures, not all safety-relevant failures. Many safety-relevant failure modes are detected by on-board diagnostics and handled by the safety mechanisms without the driver ever noticing. The field monitoring program must capture these events.
Mistake 2: Assuming “years in the field” equals “proven in use.” Duration alone is insufficient – the total accumulated operating hours across all deployed units, the failure rate calculation, and the statistical confidence level must all meet the standard’s requirements.
Mistake 3: Ignoring changes to the candidate. Even “minor” changes (component substitutions, software patches, manufacturing process adjustments) can invalidate the proven in use argument. Every change must be evaluated for safety impact.
Mistake 4: Claiming proven in use for ASIL D without sufficient volume. ASIL D requires approximately 5 million units with 4–6 years of field operation and zero safety-relevant failures. Many components do not have this volume of field experience, making proven in use impractical for ASIL D.
Mistake 5: Not documenting the conditions of use comparison. The relevance of the field data to the target application must be explicitly documented – showing that the field conditions (temperature, duty cycle, interfaces, vehicle type) are sufficiently similar to the target conditions.
Mistake 6: Using proven in use as a shortcut to avoid ISO 26262 development. Proven in use is not a loophole – it is a structured, evidence-based alternative that may require significant effort in field data collection, statistical analysis, and documentation. In many cases, developing the component as SEooC may actually be more efficient and provide stronger safety assurance.
17. Frequently Asked Questions
Q1: Can proven in use be applied to software?
Yes. ISO 26262 Part 8 Clause 14 applies to hardware, software, and complete systems. However, software proven in use is particularly challenging because software failures are systematic (deterministic) – a software bug that has not manifested during the observation period still exists in the code and could be triggered by a previously unencountered input combination. This makes the statistical argument less robust for software than for hardware (where random failures are genuinely probabilistic).
Q2: Can proven in use be combined with other compliance approaches?
Yes. The proven in use argument can cover some aspects of the candidate (e.g., the hardware random failure performance) while other aspects (e.g., systematic failure avoidance for a modified software module) are addressed through ISO 26262 development methods. The safety plan should clearly document which aspects are covered by proven in use and which require other compliance approaches.
Q3: Does a TÜV certificate for a component provide proven in use credit?
No. A TÜV functional safety certificate typically confirms that the component was developed according to ISO 26262 (e.g., as SEooC) – not that it has proven in use field evidence. Proven in use and development compliance are different paths to ISO 26262 compliance.
Q4: What if safety-relevant failures are observed during the observation period?
If one or more safety-relevant failures are observed, the statistical calculation still works – but the required observation period becomes longer (the chi-squared formula accounts for the number of observed failures). However, some safety experts take the position that any observed systematic failure should be corrected (root cause addressed) and the observation period restarted, since systematic failures are deterministic and their presence indicates a design deficiency.
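The lengthening effect described above can be quantified directly: each observed failure adds two degrees of freedom to the chi-squared term, growing the required service hours by a fixed factor (sketch assuming SciPy; factors shown are standard chi-squared table values):

```python
from scipy.stats import chi2

# Growth in required service hours relative to the zero-failure case,
# at 95% confidence: quantile(2f+2 dof) / quantile(2 dof)
base = chi2.ppf(0.95, 2)  # f = 0: approx 5.99
for f in range(4):
    quantile = chi2.ppf(0.95, 2 * f + 2)
    print(f"f={f}: required hours grow by factor {quantile / base:.2f}")
    # factors approx 1.00, 1.58, 2.10, 2.59
```

So a single observed safety-relevant failure increases the required field evidence by roughly 58% – before even considering the stricter view that a systematic failure should trigger a root-cause fix and a restarted observation period.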
Q5: Is proven in use suitable as a long-term strategy?
Generally, no. Proven in use is best suited as a transitional measure for legacy components. For new developments, ISO 26262 compliance through systematic development (Parts 4–6) or SEooC provides stronger, more sustainable safety assurance. The automotive industry is increasingly moving toward full ISO 26262 development even for components that could potentially claim proven in use.
18. Conclusion
The Proven in Use Argument provides a valuable compliance path for integrating well-established, high-volume legacy components into new ISO 26262-compliant projects. By leveraging extensive field experience data to demonstrate that the component has operated safely and reliably, the proven in use argument avoids the need for full re-development while still providing structured safety evidence. However, the stringent requirements for field data volume, relevance, change management, and effective field monitoring mean that proven in use is not a simple shortcut – it is a rigorous, evidence-based argument that requires significant effort to establish and maintain.
This article is part of our comprehensive ISO 26262 series at PiEmbSysTech. For related reuse approaches, see SEooC Practical Guide and Part 8 – Supporting Processes.
Stay safe. Stay field-proven. Keep engineering the future.
— The PiEmbSysTech Team