REGULATION & SAFETY - AUTOMOTIVE ISO 26262
One of the long-standing issues in the development of autonomous vehicles is that of functional safety. The increasing complexity in the automotive industry has resulted in a drive towards the provision of safety-compliant systems.

Modern cars can consist of hundreds of ECUs and millions of lines of software code, with ADAS a precursor to much more complex self-driving systems. The goal of ISO 26262 is to provide a unifying standard for all automotive E/E systems.

To recap, ISO 26262 uses a system of steps to manage functional safety and regulate product development at the system, hardware, and software levels. The standard provides regulations and recommendations throughout the product development process, from conceptual development through decommissioning. It details how to assign acceptable risk levels to systems and components and how to document the overall testing process. In general, ISO 26262:

• Provides an automotive safety lifecycle and supports tailoring the necessary activities during the lifecycle phases.

• Provides an automotive specific risk-based approach for determining risk classes.

• Uses Automotive Safety Integrity Levels (ASILs) to specify the item’s necessary safety requirements for achieving an acceptable residual risk (illustrated in the sketch after this list).

• Provides requirements for validation and confirmation measures to ensure that a sufficient and acceptable level of safety is achieved.
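
As a rough illustration of how ASILs are derived: the risk graph in ISO 26262-3 combines Severity (S1-S3), Exposure (E1-E4), and Controllability (C1-C3) classes into an ASIL from A (lowest) to D (highest), with QM (quality management) below that. The C sketch below encodes that table as a simple sum rule; the example hazard and the encoding are illustrative only, and the standard itself remains the normative reference.

    /* Sketch of ASIL determination per the ISO 26262-3 risk graph.
     * With S in 1..3, E in 1..4 and C in 1..3, a combined score of
     * 7, 8, 9 or 10 corresponds to ASIL A, B, C or D respectively;
     * anything lower is QM (no ASIL required). */
    #include <stdio.h>

    static const char *asil(int s, int e, int c)
    {
        static const char *levels[] = { "ASIL A", "ASIL B", "ASIL C", "ASIL D" };
        int score = s + e + c;
        return (score >= 7) ? levels[score - 7] : "QM";
    }

    int main(void)
    {
        /* Illustrative hazard: unintended full braking at highway speed. */
        printf("%s\n", asil(3, 4, 3));   /* S3 + E4 + C3 -> ASIL D */
        printf("%s\n", asil(1, 2, 2));   /* S1 + E2 + C2 -> QM     */
        return 0;
    }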

Beyond ISO 26262, a new standard has been created (ISO 21448), described as ‘Safety of the Intended Functionality’, and designed to complement the existing standard.


ISO 21448 - SAFETY OF THE INTENDED FUNCTIONALITY (SOTIF)

ISO defines SOTIF in the following terms: ‘The absence of unreasonable risk due to hazards resulting from functional insufficiencies of the intended functionality or by reasonably foreseeable misuse by persons is referred to as the Safety of the Intended Functionality (SOTIF).’

SOTIF has been developed to mitigate unreasonable risks for autonomous vehicles and ADAS where systems encounter problems on the road, even in cases where the relevant hardware and software have not malfunctioned. Systems that are considered safe because they meet the necessary requirements of ISO 26262 could still fail in certain real-world scenarios. This could be due to a number of reasons, such as limits on performance due to inadequate sensor configuration, unexpected changes in the environment, misuse of functions by the driver of the vehicle, or the inability of AI-based systems to accurately interpret the situation and operate safely.

SOTIF can be conceptualized as a framework for identifying hazardous conditions, and a method for verifying and validating system behaviour until an acceptable level of risk is reached. However, it is difficult to identify unknown and unsafe areas of operation, or to quantify whether all edge cases have been accounted for. To this end, SOTIF calls for simulation for reasons of practicality, as sketched below.
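
A minimal sketch of such a simulation loop is shown below. The scenario parameters, the toy sensor model, and the hazard predicate are all invented for illustration; a real SOTIF campaign would draw on validated scenario catalogues rather than two random variables.

    /* Monte Carlo sketch of scenario-based SOTIF evaluation: sample a
     * parameter space and estimate how often the intended function would
     * behave hazardously even though no hardware or software fault occurs. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy behaviour model: detection fails when visibility is low relative
     * to object distance; a missed detection at short range is hazardous. */
    static int hazardous(double distance_m, double visibility)
    {
        int detected = (visibility * 100.0) > distance_m;
        return !detected && distance_m < 40.0;  /* too close to brake in time */
    }

    int main(void)
    {
        const int trials = 1000000;
        int hazards = 0;
        srand(42);
        for (int i = 0; i < trials; i++) {
            double distance   = 5.0 + 95.0 * rand() / RAND_MAX;  /* 5..100 m */
            double visibility = (double)rand() / RAND_MAX;       /* 0..1     */
            hazards += hazardous(distance, visibility);
        }
        printf("estimated hazardous-scenario rate: %g\n",
               (double)hazards / trials);
        return 0;
    }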

FUNCTIONAL SAFETY IN DESIGN AND DEVELOPMENT

While functional safety requirements have traditionally been managed by manufacturers and system providers, the increasing complexity of the electronics involved has resulted in the propagation of functional safety throughout the supply chain. ISO 26262 specifies recommendations to ensure functional safety throughout the product development lifecycle, at system, hardware, and software levels.

In terms of hardware, ISO 26262 classifies malfunctions of E/E components into two types of failure:

• Systematic failures: These represent failures in an item or function that are induced in a deterministic way during development, manufacturing, or maintenance. They are typically due to process causes and can be tackled by a change to the design, the manufacturing process, operational procedures, documentation, or other relevant factors. Typical requirements are tracking and traceability, and the methods and expectations are captured in the ISO 26262 functional safety management activities.

• Random hardware failures: These appear during the lifetime of a hardware element and emanate from random defects inherent in the process or in usage conditions. They can be classified into permanent faults, such as stuck-at faults, and transient faults, such as single-event upsets. Random failures can be addressed during the design and verification of the hardware/software system by introducing safety mechanisms that enable the architecture to detect and correct malfunctions.

A safety mechanism, in the context of ISO 26262, is a technical solution implemented by E/E functions or elements, or by other technologies, to detect faults or control failures to achieve or maintain a safe state.


Such safety mechanisms include error correction code (ECC), cyclic redundancy check (CRC), hardware redundancy, and built-in self-test (BIST).
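
As a minimal sketch of such a mechanism, the following code implements a CRC-8 check using the SAE J1850 polynomial (0x1D), a variant widely used in automotive communication; the frame layout and the safe-state handler are invented for illustration.

    /* CRC as a safety mechanism: the sender appends a checksum, the
     * receiver recomputes it, and any mismatch triggers a transition
     * to a safe state instead of acting on corrupted data. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    static uint8_t crc8_j1850(const uint8_t *data, size_t len)
    {
        uint8_t crc = 0xFF;                  /* initial value per SAE J1850 */
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x1D)
                                   : (uint8_t)(crc << 1);
        }
        return crc ^ 0xFF;                   /* final XOR per SAE J1850 */
    }

    /* Placeholder: e.g. disable the actuator and signal the fault. */
    static void enter_safe_state(void) { puts("entering safe state"); }

    int main(void)
    {
        uint8_t frame[5] = { 0x12, 0x34, 0x56, 0x78, 0x00 };
        frame[4] = crc8_j1850(frame, 4);         /* sender appends CRC  */

        frame[1] ^= 0x01;                        /* inject a bit flip   */
        if (crc8_j1850(frame, 4) != frame[4])    /* receiver re-checks  */
            enter_safe_state();
        return 0;
    }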

The effectiveness of a safety mechanism in detecting random failures is measured by three metrics, which capture the coverage of single-point and latent faults as well as the overall likelihood of residual risk. These three metrics are the measure of functional safety for hardware components as per ISO 26262 (summarized in the formulas after this list):

• Single-point fault metric: This metric reflects the robustness of an item or function to single-point faults, either by design or by coverage from safety mechanisms.

• Latent fault metric: This metric reflects the robustness of an item or function against latent faults, either by design, by fault coverage via safety mechanisms, or by the driver’s recognition of a fault’s existence before the violation of a safety goal.

• Probabilistic metric of hardware failures: This metric provides rationale that the residual risk of a safety goal violation due to random hardware failures is sufficiently low.
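
For reference, the single-point fault metric (SPFM) and latent fault metric (LFM) are commonly written as follows, where \lambda is the failure rate of a safety-related hardware element, the subscripts denote single-point (SPF), residual (RF), and latent multiple-point (MPF,latent) faults, and the sums run over all safety-related hardware elements. The target values usually quoted are SPFM of at least 90 %, 97 %, and 99 % and LFM of at least 60 %, 80 %, and 90 % for ASIL B, C, and D respectively, but the standard itself is the normative source:

    \mathrm{SPFM} = 1 - \frac{\sum \left( \lambda_{\mathrm{SPF}} + \lambda_{\mathrm{RF}} \right)}{\sum \lambda}
    \qquad
    \mathrm{LFM} = 1 - \frac{\sum \lambda_{\mathrm{MPF,latent}}}{\sum \left( \lambda - \lambda_{\mathrm{SPF}} - \lambda_{\mathrm{RF}} \right)}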

In terms of software, systematic failures typically occur due to human errors during the different phases of the product development lifecycle. They can often be traced back to a root cause and corrected. Such errors include:

• Requirement specification and communication: This phase is one of the largest sources of software error, and errors here commonly occur in two scenarios: when the software executes ‘correctly’ according to the developer’s understanding of the requirement, but the requirement proves to be inaccurately defined within the scope of the system; or when the requirement was simply misunderstood by the software developer.

• Software design and coding errors: This type of error can occur due to poorly structured embedded software code, or a variety of mistakes such as timing errors, incorrect queries, syntax errors, algorithm errors, lack of self-tests, or failed error handling.

• Errors due to software changes: These errors may occur when there are changes to the developed software which introduce unanticipated errors, or where there is a failure of the configuration control process.

• Errors due to inadequate testing: Software may appear to pass the testing criteria, yet fail to perform the required task during actual execution. This can be defined as a testing failure and may occur when safety-critical test coverage is inadequate.

• Errors in timing: Such errors may occur when software performs the correct function but at the wrong time or under inappropriate conditions (a sketch of defensive checks against these failure modes follows this list).
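
In practice, many of these failure modes are countered with defensive runtime checks. The C sketch below shows two such patterns, with invented names and limits: a plausibility check that rejects an impossible sensor value, and a deadline check that treats a late but otherwise correct result as a fault.

    /* Two defensive checks: a range/plausibility check on a sensor
     * reading, and a deadline check for the timing errors described
     * above, where the right result arrives at the wrong time. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SPEED_MAX_KPH  300.0   /* illustrative plausibility limit */
    #define DEADLINE_MS      10u   /* illustrative timing budget      */

    static bool plausible_speed(double kph)
    {
        return kph >= 0.0 && kph <= SPEED_MAX_KPH;
    }

    static bool within_deadline(uint32_t start_ms, uint32_t end_ms)
    {
        return (end_ms - start_ms) <= DEADLINE_MS;
    }

    int main(void)
    {
        double   sensor_kph = 421.7;   /* corrupted or mis-scaled reading */
        uint32_t t0 = 100, t1 = 115;   /* task ran from 100 ms to 115 ms  */

        if (!plausible_speed(sensor_kph) || !within_deadline(t0, t1))
            puts("fault detected: fall back to last known good value");
        return 0;
    }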

The verification and validation of both hardware and software components and systems developed for autonomous driving is critical, particularly as artificial intelligence is increasingly used to make decisions. Decisions made by AI systems are only as good as the data they are provided with.


TRAINING AI TO MAKE ETHICAL DECISIONS

The ethical considerations of autonomous vehicles have become a hot topic. Various scenarios can be imagined in which an AI system will have to choose a course of action where all of the options could result in a crash and potential injury. It is an extremely difficult conundrum for developers and engineers to grapple with when it comes to machine learning and ‘teaching’ AI how to act.

A 2018 IBM paper entitled ‘Everyday Ethics for Artificial Intelligence’ outlined five key areas of ethical focus in a ‘practical guide for designers and developers’:

• Accountability: Human judgement plays a role throughout a seemingly objective system of logical decisions. It is humans who write algorithms, who define success or failure, who make decisions about the uses of systems, and who may be affected by a system’s outcomes. Every person involved in the creation of AI at any step is accountable for considering the system’s impact in the world, as are the companies invested in its development.

• Value Alignment: AI works alongside diverse human interests. People make decisions based on any number of contextual factors, including their experiences, memories, upbringing, and cultural norms. These factors give us a fundamental understanding of “right and wrong” in a wide range of contexts. Today’s AI systems do not have such experiences to draw upon, so it is the job of designers and developers to collaborate with each other in order to ensure consideration of existing values.

• Explainability: As an AI increases in capabilities and achieves a greater range of impact, its decision-making process should be explainable in terms people understand. Explainability is key for users interacting with AI to understand the AI’s conclusions and recommendations.

• Fairness: AI provides deeper insight into our personal lives when interacting with our sensitive data. As humans are inherently vulnerable to biases, and are responsible for building AI, there are chances for human bias to be embedded in the systems we create. It is the role of the team to minimize algorithmic bias through ongoing research and data collection which is representative of a diverse population.

• User data rights: Research shows that the general public finds it very important to be in control of their own information, and unacceptable for companies to share information about them without permission. AI should be fully compliant with data protection regulations and must be designed to protect user data and preserve the user’s power over access and uses.


SUMMARY

Functional safety in automotive development continues to broaden in scope. Such is the complexity of today’s systems that functional safety is no longer controlled only at a high level, but also at component level throughout the supply chain. The demands will only increase as autonomous vehicle development comes to rely ever more on artificial intelligence, and the functional safety of AI opens up questions of an ethical nature which tie into functional safety on a moral as well as a technological level.

Sources:

https://www.iso.org/standard/70939.html

https://www.ni.com/en-us/innovations/white-papers/11/what-is-the-iso-26262-functional-safety-standard-.html

https://www.cadence.com/content/dam/cadence-www/global/en_US/documents/solutions/automotive-functional-safety-wp.pdf

https://www.nvidia.com/content/dam/en-zz/Solutions/self-driving-cars/safety-report/auto-print-safety-report-pdf-v16.5%20(1).pdf

https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
