Norwegian University of Science and Technology

Faculty of Social Sciences and Technology Management

Department of Industrial Economics and Technology Management

Master Thesis

PARTIAL AND IMPERFECT TESTING OF

SAFETY INSTRUMENTED FUNCTIONS

Hanne Rolén

June 2007


MASTER THESIS

Spring semester 2007

Student: Hanne Rolén

Department of Industrial Economics and Technology Management

DECLARATION

I hereby declare upon my honour and conscience that I have carried out the above thesis myself and without any unlawful assistance.

Lysaker, 8 June 2007

____________________________________________________________

Signature

In accordance with § 20 of the Regulations on studies at NTNU, the thesis with drawings etc. becomes the property of NTNU. The work, or results from it, may therefore not be used for other purposes without the agreement of the interested parties.


PREFACE

This master thesis was written over 20 weeks during spring 2007 as the final work performed by Hanne Rolén at the Norwegian University of Science and Technology (NTNU). The thesis is written within the Department of Industrial Economics and Technology Management, study field Health, Environment and Safety. It is also closely related to the Department of Industrial Production and Quality, as the thesis is within the field of technical safety. The thesis is written in close cooperation with Aker Kværner Subsea.

The intended audience is readers with knowledge of reliability theory, and it is recommended that the reader is familiar with the concepts described in the book “System Reliability Theory” by Rausand and Høyland (second edition, 2004).

I would like to thank my colleagues at Aker Kværner Subsea for their support throughout the semester, and especially Thor Ketil Hallan as my supervisor there. Furthermore, I would like to thank Ring-O, Lars Bak (Lilleaker Consulting), and Luciano Sanguineti and Enrico Sanguineti at ATV for giving me the necessary practical understanding of valves. Finally, thanks to Mary Ann Lundteigen (NTNU) for good discussions and to Marvin Rausand (supervisor at NTNU) for important feedback and input throughout the thesis.

Lysaker, 8th of June 2007

Hanne Rolén


SUMMARY

In order to avoid the substantial hardware costs of building platforms, moving petroleum production facilities subsea is becoming a popular solution. Fields can be remotely operated, and stand-alone fields that would not be profitable to develop separately can be tied in to a shared pipeline/riser, saving expenses. Safety instrumented systems are implemented to reduce or eliminate the unacceptable risk associated with such production, and the safety integrity level is a common requirement describing the safety availability of the equipment. When analysing the ability of the safety functions to perform when needed, it is important to evaluate the assumptions that form the basis for the calculations. The author has in particular assessed the assumption that a component is “as good as new” after each proof test, meaning that its unavailability is reduced to zero.
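The “as good as new” assumption underlies the standard low-demand approximation for the average probability of failure on demand: the unavailability grows linearly with the dangerous undetected failure rate and is reset to zero at every proof test. A minimal sketch (the failure rate and test interval below are illustrative values, not taken from the thesis):

```python
# Average PFD for a single component under the "as good as new" assumption:
# the unavailability is reset to zero at each proof test, giving the
# standard low-demand approximation PFD_avg ~= lambda_DU * tau / 2.

def pfd_avg(lambda_du: float, tau_hours: float) -> float:
    """Approximate average PFD for proof-test interval tau (in hours)."""
    return lambda_du * tau_hours / 2.0

# Illustrative example: lambda_DU = 2.0e-6 per hour, annual proof test.
print(pfd_avg(2.0e-6, 8760.0))  # -> 0.00876
```

The whole thesis question is what happens to this simple picture when the reset to zero is only partial.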

The reasons for imperfect tests may be related to the five M-factors: method, machine, milieu, man-power and material. The potential effects of imperfect tests have been analysed through different case studies. SINTEF has proposed a method for including systematic failures in the calculation of the probability of failure on demand (PFD) by adding a constant value called PTIF (probability of test independent failures) in the PDS method. The author has proposed a method for quantifying the PFD impact of an imperfect test due to non-testable random hardware failures. The case results indicate that the PFD impact of imperfect testing of hardware failures is far more significant than the PDS addition for systematic failures.
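The contrast between the two approaches can be sketched numerically. The model below is a simplified illustration built from the summary's description, not the author's actual method: the PDS approach adds a constant PTIF on top of the PFD, while the hardware approach assumes a fraction (1 − θ) of λDU is never revealed by proof tests and therefore accumulates over the whole service life instead of being renewed each test interval. All parameter values are invented.

```python
# Hypothetical comparison of two ways to account for imperfect proof tests
# (illustrative model and parameter values, not the thesis's actual data):
#  - PDS approach: add a constant test-independent contribution P_TIF.
#  - Hardware approach: a non-testable fraction (1 - theta) of lambda_DU
#    accumulates over the service life T instead of being reset each test.

def csu_pds(lambda_du, tau, p_tif):
    """PDS-style critical safety unavailability: PFD plus a constant P_TIF."""
    return lambda_du * tau / 2.0 + p_tif

def pfd_imperfect(lambda_du, tau, theta, lifetime):
    """Testable part renewed each tau; non-testable part grows over lifetime."""
    return (theta * lambda_du * tau / 2.0
            + (1 - theta) * lambda_du * lifetime / 2.0)

lam, tau, life = 1.0e-6, 8760.0, 20 * 8760.0   # annual test, 20-year life
print(csu_pds(lam, tau, p_tif=1.0e-4))          # constant addition
print(pfd_imperfect(lam, tau, theta=0.9, lifetime=life))
```

Even with 90 % of the failure rate testable, the accumulating non-testable part dominates in this toy model, which is consistent with the summary's conclusion that the hardware contribution is the more significant one.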

Implementing partial stroke testing makes it possible to reveal failure modes that previously could only be detected through tests requiring a process shutdown. A successful implementation may improve the safety integrity level rating of the system. The use of partial stroke testing in subsea petroleum production has so far not been common, and several of the arguments for and against implementing it are assessed.

It has been argued that partial stroke testing increases the spurious trip rate, as it is likely that a valve that starts to move will continue to the closed position. The likely causes of such an event were placed in a Bayesian belief network, which demonstrated the need for implementing the right equipment. New devices such as smart positioners and digital valve controllers have been introduced for the purpose of partial stroke testing, reducing human interference in the test and thus reducing the causes of spurious trips.

Partial stroke testing may be implemented in order to justify extended proof test intervals. As common cause failures are those failures that occur within the same proof test interval, an extension of the interval could imply that more failures are classified as common cause failures (Rausand, 2007). In such situations, it should be discussed whether the β-factor should be increased to reflect the PFD impact this may have.
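The sensitivity of the PFD to the β-factor can be seen from the standard simplified formula for a 1oo2 voted pair (dangerous undetected failures only, no repair time), as given in IEC 61508-6. The sketch below uses illustrative values; it is not a calculation from the thesis:

```python
# 1oo2 voted pair with common cause failures via the beta-factor model,
# simplified from IEC 61508-6 (dangerous undetected failures only):
#   PFD_1oo2 ~= ((1 - beta) * lambda_DU * tau)**2 / 3
#               + beta * lambda_DU * tau / 2
# Illustrative values only. Raising beta (e.g. because interval extension
# reclassifies more failures as common cause) raises the PFD.

def pfd_1oo2(lambda_du, tau, beta):
    independent = ((1 - beta) * lambda_du * tau) ** 2 / 3.0
    common_cause = beta * lambda_du * tau / 2.0
    return independent + common_cause

lam, tau = 1.0e-6, 8760.0
print(pfd_1oo2(lam, tau, beta=0.02))
print(pfd_1oo2(lam, tau, beta=0.05))  # higher beta -> higher PFD
```

The common cause term is linear in β, so for redundant architectures it quickly dominates the independent contribution.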

Another argument for implementing partial stroke testing has been the opportunity to reduce the hardware fault tolerance (implying cost savings), since the safe failure fraction is increased by detecting more dangerous undetected failures and converting them to dangerous detected failures. As partial stroke testing does not fulfil the criteria for a diagnostic test, it is argued that it should not be used to affect the safe failure fraction, and hence cannot be an argument for a reduction in the hardware fault tolerance (McCrea-Steele, 2006).
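The mechanism behind this argument follows directly from the definition of the safe failure fraction in IEC 61508-2: the share of the total failure rate that is either safe or dangerous-but-detected. The sketch below uses invented failure rates to show why crediting PST as diagnostic coverage would matter:

```python
# Safe failure fraction (SFF) per IEC 61508-2:
#   SFF = (lambda_S + lambda_DD) / (lambda_S + lambda_DD + lambda_DU)
# Failure rates below are invented for illustration. Crediting a test as
# "diagnostic" moves dangerous undetected (DU) failures into the detected
# (DD) class and so raises the SFF -- which is exactly the step
# McCrea-Steele (2006) argues PST does not justify.

def sff(lambda_s, lambda_dd, lambda_du):
    return (lambda_s + lambda_dd) / (lambda_s + lambda_dd + lambda_du)

lam_s, lam_dd, lam_du = 2.0e-6, 1.0e-6, 2.0e-6
print(sff(lam_s, lam_dd, lam_du))  # ~0.6

# If PST were (incorrectly, per the argument above) credited with
# detecting 62 % of the DU failures:
moved = 0.62 * lam_du
print(sff(lam_s, lam_dd + moved, lam_du - moved))  # ~0.85
```

A higher SFF places the subsystem in a milder row of the hardware fault tolerance tables, which is why the classification of PST matters economically.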

Based on a failure mode assessment of gate valves and OREDA data, the author has proposed a tentative partial stroke testing coverage factor of 62%. This result is in accordance with former research. The partial stroke testing coverage for the dangerous failure modes fail to close, leakage in closed position, delayed operation and external leakage in closed position could not be justified quantitatively, as the production companies do not provide such detailed information. The coverage may differ depending on valve type, design and production environment.
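The structure of such a coverage estimate can be sketched as the fraction of the total dangerous undetected failure rate belonging to failure modes a partial stroke can reveal. The failure-mode split and rates below are invented for illustration; they are not the OREDA data or the detectability assignments used in the thesis:

```python
# Hedged sketch of a PST coverage estimate: the share of the total
# dangerous failure rate attributable to failure modes detectable by a
# partial stroke. Rates (per 1e6 hours) and detectability flags are
# hypothetical, not the thesis's data.

failure_modes = {
    # mode: (rate, detectable by PST?)
    "fail to close":              (1.0, True),
    "delayed operation":          (0.4, True),
    "leakage in closed position": (0.7, False),  # needs full closure test
    "external leakage":           (0.2, False),
}

total = sum(rate for rate, _ in failure_modes.values())
detectable = sum(rate for rate, det in failure_modes.values() if det)
coverage = detectable / total
print(f"PST coverage ~ {coverage:.0%}")
```

The estimate is only as good as the failure-mode split, which is why the thesis flags the lack of mode-level data from the production companies as the main source of uncertainty.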

In particular for components with higher failure rates, from λDU = 1.0 · 10⁻⁶ hours⁻¹ and above, investing in partial stroke testing can be recommended. Achieving the exact partial stroke testing coverage is less important than the test frequency: carrying out the tests more often yields a greater positive PFD impact than improving the coverage by an additional 10%.

On the other hand, a reduction of the non-testable part by 10% yields a greater improvement of the PFD than obtaining both a higher partial stroke testing coverage and shorter test intervals. Hence the focus should be on diminishing the reasons why a test may be unsuccessful. The author performed a case study of the Morvin HIPPS (high integrity pressure protection system) that confirms this outcome. Ignoring the estimation of non-testable failures yields inaccurate PFD results and an inaccurate rating of the safety integrity level. As the use of safety instrumented systems is becoming the common approach for reducing risk in the petroleum production industry, it is important to improve the quality of these calculations.


INDEX

PREFACE 9
SUMMARY 11
INDEX 13
LIST OF TABLES 15
LIST OF FIGURES 15
TERMS AND ABBREVIATIONS 16

1 INTRODUCTION 17
1.1 BACKGROUND 17
1.2 OBJECTIVES 17
1.3 DELIMITATIONS 17
1.4 SCIENTIFIC APPROACH 18
1.5 STRUCTURE OF THE REPORT 19

2 THEORETICAL FRAMEWORK 20
2.1 STANDARDS AND GUIDELINES 21
2.1.1 IEC 61508 & IEC 61511 21
2.1.2 OLF 070 GUIDELINE 23
2.2 RELIABILITY DATA SOURCES 23
2.2.1 OREDA 23
2.2.2 PDS 23
2.2.3 EXIDA 24
2.3 SAFETY INSTRUMENTED FUNCTIONS 25
2.3.1 MAIN PRINCIPLES 25
2.3.2 AVAILABILITY OF SIF 28
2.3.3 SIS REQUIREMENTS 31
2.4 SIS APPLIED IN SUBSEA XMT 34
2.5 TESTING OF THE SIS’ ABILITY TO PERFORM THE SIF 37

3 IMPERFECT TESTING 39
3.1 CAUSES FOR AN IMPERFECT TEST 39
3.2 EFFECTS OF AN IMPERFECT TEST 41
3.2.1 CASE A, CONSTANT PFD ADDITION 43
3.2.2 CASE B, INCREASING PFD ADDITION 46
3.2.3 CASE C, DECREASING PFD 52
3.2.4 COMMENTS TO THE IMPERFECT TEST CASES 54

4 PARTIAL STROKE TESTING 55
4.1 MAIN PRINCIPLES AND CONCEPTS 55
4.2 ADVANTAGES AND DISADVANTAGES 57
4.3 PST COVERAGE FACTOR 58
4.4 CORRELATION PST AND SPURIOUS TRIPS 62
4.5 INFLUENCING FACTORS FOR PST CONTRIBUTION FOR A SIS 63
4.6 PST IMPACT ONTO THE SIL 65

5 DISCUSSION 69
5.1 QUALITY OF THE RELIABILITY ASSESSMENT 69
5.2 UNCERTAINTY REGARDING THE RESULTS 71
5.3 RECOMMENDATIONS FOR FURTHER WORK 71

6 CASE STUDY 72
6.1 INTRODUCTION CASE STUDY; MORVIN 72
6.2 REQUIREMENTS FROM CUSTOMER 72
6.3 HIPPS 74
6.3.1 HIPPS TESTING 75
6.4 SIL RATING 76

7 CONCLUDING REMARKS 79

REFERENCES 81
ANNEX A, XMT 84


LIST OF TABLES

TABLE 1, SIL FOR LOW AND HIGH DEMAND MODE OF OPERATION (IEC 61508-1, 2002) 26
TABLE 2, SIL FOR TYPE A SUBSYSTEM (IEC 61508-2) 32
TABLE 3, SIL FOR TYPE B SUBSYSTEM (IEC 61508-2) 32
TABLE 4, DATA FOR THE SYSTEM TEST EXAMPLE 41
TABLE 5, UNAVAILABILITY AT TIME T OF A SINGLE COMPONENT UNDER IMPERFECT TEST CONDITIONS 48
TABLE 6, PFD AVERAGE DIFFERENCES BETWEEN PERFECT AND IMPERFECT TESTS 49
TABLE 7, MATRIX FOR SIL RATING SENSITIVITY DUE TO IMPERFECT TESTING 50
TABLE 8, DANGEROUS FAILURE MODES AND TEST STRATEGY FOR A SAFETY GATE VALVE (ADAPTED FROM SUMMERS & ZACHARY 2000A, MCCREA-STEELE 2006, KOP 2002, BAK 2007 AND ATV 2007) 59
TABLE 9, RELIABILITY DATA AS BASIS FOR PST COVERAGE ESTIMATION (ADAPTED FROM LUNDTEIGEN & RAUSAND, 2007) 61
TABLE 10, PFD RELATED TO DIVERSE PST COVERAGES, TEST INTERVALS AND (IM)PERFECT TESTING 68
TABLE 11, MORVIN HIPPS REQUIREMENTS (STATOIL, 2007A) 73
TABLE 12, MORVIN HIPPS CASE DATA 76

LIST OF FIGURES

FIGURE 1, IEC 61508 SAFETY LIFECYCLE (IEC 61508) 22
FIGURE 2, SKETCH OF A SIMPLE SIS (RAUSAND & HØYLAND, 2004) 25
FIGURE 3, ALLOCATION OF SIL (IEC 61508-1) 27
FIGURE 4, RISK REDUCTION (IEC 61508) 27
FIGURE 5, FAILURE MODE CLASSIFICATION (IEC 61508) 28
FIGURE 6, FRACTIONS OF DIFFERENT TYPES OF FAILURES FOR A SYSTEM WITH TWO COMPONENTS 30
FIGURE 7, SIS DESIGN REQUIREMENTS 31
FIGURE 8, WELLHEAD AND XMT (OREDA, 2002) 34
FIGURE 9, HORIZONTAL XMT (AKS, 2007) 35
FIGURE 10, GATE VALVE WITH ACTUATOR (RING-O, 2007) 36
FIGURE 11, UP AND DOWN TIME RELATED TO TESTS 38
FIGURE 12, CAUSES FOR IMPERFECT TESTING OF SUBSEA SAFETY VALVES 40
FIGURE 13, RELIABILITY BLOCK DIAGRAM OF A SIMPLE SIS 41
FIGURE 14, CONTRIBUTION TO UNAVAILABILITY (PDS METHOD, 2006) 43
FIGURE 15, SKETCH OF THE PFD IMPACT WITH PTIF ADDITION (CASE A) 44
FIGURE 16, UNAVAILABILITY UNDER IMPERFECT TEST CONDITION CASE A 44
FIGURE 17, SKETCH OF THE PFD IMPACT WITH IMPERFECT TEST ADDITION (CASE B) 46
FIGURE 18, SERIES STRUCTURE WHEN IMPERFECT TEST OF A COMPONENT 46
FIGURE 19, UNAVAILABILITY UNDER IMPERFECT TEST CONDITION CASE B 47
FIGURE 20, UNAVAILABILITY FOR DIFFERENT FAILURE RATES UNDER IMPERFECT TESTING 49
FIGURE 21, THE M-FACTORS’ CONTRIBUTION TO THE IMPERFECT TEST ADDITION 50
FIGURE 22, UNAVAILABILITY WITH DECREASING PTIF ADDITION (CASE C1) 52
FIGURE 23, UNAVAILABILITY WITH DECREASING IMPERFECT TEST ADDITION (CASE C2) 53
FIGURE 24, PFD RESULTS FROM CASE STUDIES ON IMPERFECT TESTING 54
FIGURE 25, PST IMPACT ON THE PFD (LUNDTEIGEN & RAUSAND, 2007) 55
FIGURE 26, SIMPLE SIS WITH PST IMPLEMENTATION (ADAPTED FROM MCCREA-STEELE, 2006) 56
FIGURE 27, OVERVIEW OF RELEVANT FAILURE RATES (LUNDTEIGEN & RAUSAND, 2007) 58
FIGURE 28, BAYESIAN BELIEF NETWORK FOR ST DURING PST 62
FIGURE 29, UNAVAILABILITY WITH PST 65
FIGURE 30, UNAVAILABILITY WITH PST AND PTIF ADDITION 66
FIGURE 31, UNAVAILABILITY WITH PST AND IMPERFECT TESTING 67
FIGURE 32, HIPPS SCHEMATIC (KOP, 2004) 74
FIGURE 33, HIPPS RELIABILITY BLOCK DIAGRAM FOR MORVIN FIELD DEVELOPMENT 76
FIGURE 34, PFD RESULTS FOR THE DIFFERENT CALCULATION APPROACHES 78


TERMS AND ABBREVIATIONS

ALARP As low as is reasonably practicable
CCF Common Cause Failures
CSU Critical Safety Unavailability
DC Diagnostic Coverage
DD Dangerous Detected
DU Dangerous Undetected
DOP Delayed Operation
E/E/PE(S) Electrical/Electronic/Programmable Electronic (System)
ELP External Leakage in closed Position
ESD Emergency Shut Down
FT Function Test
FTC Fail To Close
FME(C)A Failure Mode, Effect and (Criticality) Analysis
HAZOP Hazard and Operability Analysis
HFT Hardware Fault Tolerance
HIPPS High Integrity Pressure Protection System
HP/HT High Pressure/High Temperature
HSE Health, Safety and Environment
IEC International Electrotechnical Commission
ISO International Organization for Standardization
KOP Kværner Oilfield Products
LCP Leakage in Closed Position
OLF The Norwegian Oil Industry Association
OREDA Offshore Reliability Data
PDS Reliability of computer-based safety systems (Norwegian acronym)
PFD Probability of Failure on Demand
PMV Production Master Valve
PWV Production Wing Valve
PST Partial Stroke Testing
PT Proof Test
RBD Reliability Block Diagram
ROV Remotely Operated Vehicle
SCSSV Surface Controlled Subsurface Safety Valve
SD Safe Detected
SFF Safe Failure Fraction
SIF Safety Instrumented Function
SIL Safety Integrity Level
SIS Safety Instrumented System
SRS Safety Requirement Specification
ST Spurious Trip
SU Safe Undetected
XMT X-mas Tree


1 Introduction

1.1 Background

As safety instrumented systems are becoming increasingly important within petroleum production, there is a need for a good understanding of the assumptions and simplifications that form the basis for the reliability assessment. Reliability calculations of subsea production systems are usually based on the IEC 61508 (2002) approach, utilizing OREDA (2002) data. A basic assumption is that after test and repair the system is “as good as new”, meaning that the system unavailability is reduced to zero. As the demand for continuous production pushes the test intervals ever longer, it is increasingly important to study these aspects more profoundly. Are the safety levels as good as claimed? Does the calculation method need to change in order to reflect reality?

The topic of this thesis has been developed in cooperation with Marvin Rausand at NTNU and Thor Ketil Hallan at Aker Kværner Subsea, as it is of interest to both parties. The intended audience is readers with knowledge of reliability theory, and it is recommended that the reader is familiar with the concepts described in the book “System Reliability Theory” by Rausand and Høyland (second edition, 2004). Basic knowledge of subsea petroleum production is also an advantage.

1.2 Objectives

The main objectives have been to:

• Study the IEC 61508 (2002) and IEC 61511 (2004) standards and the OLF 070 guideline (2004)

• Describe the causes of imperfect tests

• Estimate the impact of imperfect tests on the probability of failure on demand

• Describe partial stroke testing (PST)

• Estimate the impact of PST on the probability of failure on demand

• Perform a case study from one of Aker Kværner Subsea’s actual projects

1.3 Delimitations

The time scope for carrying out the master thesis is set to 20 weeks; hence it has been necessary to choose more specific topics to assess. Since the author has been present at Aker Kværner Subsea during the thesis period, the focus is on reliability in subsea petroleum production. Because of the author’s special interest in safety topics, the reliability assessment is limited to the safety reliability (availability), excluding the production reliability.

The purpose of the case study is to relate the results to an actual field. The main weight of the thesis lies in the assessment of PST and imperfect testing.


1.4 Scientific approach

The information gathering process has mainly consisted of literature research and feedback from experts. In order to learn about subsea systems in general, and Aker Kværner Subsea products in particular, the author has been present at the Aker Kværner Subsea facilities during the whole semester, getting input from the engineers day by day. This has been significant for the progress of the thesis. The author’s knowledge of subsea systems was limited prior to the thesis start-up, and consequently a substantial effort was put into becoming familiar with the topic. The meetings with valve suppliers and external professionals with field experience gave valuable insight into subsea petroleum production.

As the topic of the thesis is closely connected to the OLF 070 guideline (2004) and the IEC 61508 (2002) and IEC 61511 (2004) standards, it was natural to study these. Furthermore, an extensive literature search has been carried out, as the concept of partial stroke testing is fairly new and imperfect testing is hardly discussed in the literature. In this search, Engineering Village and ScienceDirect were of great importance, in addition to the recommendations from my supervisor. The search engine Google has also been utilized to find additional information. The IEC 61508/61511 standards hardly mention imperfect testing of SIS, while the PDS method (2006) takes a different approach from the standards, making it interesting to develop new concepts and approaches to testing. The progress and quality of the work has been assured through discussions and feedback from my supervisor and other key personnel at the university, as well as colleagues at Aker Kværner Subsea.


1.5 Structure of the report

The master thesis is structured as described below:

Chapter 1, Introduction: Presentation of the background of the master thesis topic, its objectives, delimitations and scientific approach.

Chapter 2, Theoretical framework: Introduction to IEC 61508/61511, the OLF 070 guideline (2004), the PDS method (2006), OREDA (2002) and exida (2003). Description of basic concepts: SIF, SIS, safety unavailability contributors, failure classification, failure modes and testing. Briefly about subsea X-mas trees and safety valves.

Chapter 3, Imperfect testing: Imperfect testing is defined, and its possible causes described. Three cases are developed to illustrate possible effects of imperfect testing. A new method for quantifying imperfect tests is proposed.

Chapter 4, Partial stroke testing: Description of partial stroke testing and its advantages and disadvantages. Assessment of the rationale behind the partial stroke testing coverage factor. The correlation between partial stroke testing and spurious trips, and other factors influencing the partial stroke testing contribution, are described. Partial stroke testing is applied to the case results from chapter 3.

Chapter 5, Discussion: Discussion of the results on imperfect and partial stroke testing, the credibility of the data utilized and the need for further work.

Chapter 6, Case study: The theories from the former chapters are applied to a real-life system: the Morvin field development.

Chapter 7, Concluding remarks: Concluding remarks regarding imperfect testing and partial stroke testing.


2 Theoretical framework

Subsea equipment is becoming the typical solution for the offshore petroleum industry as production moves to deeper and more demanding areas. A substantial increase in subsea oil and gas production is expected in the years to come (Subseazone, 2007). At the same time, there has been growing concern regarding health, safety and environment (HSE) among the general public and governments in recent years, which has led to stricter legislation within the field. Together with the high costs of subsea intervention, this gives the oil companies incentives to achieve a high level of safety and reliability in their systems.

Safety instrumented systems (SIS) have become ever more common as a measure for reducing risk. A SIS is designed to prevent, or mitigate, hazardous events that could harm the system it is implemented to protect. Examples of hazards related to subsea production are topside blowouts (possible personnel fatalities and material damage) and leakage to water (environmental danger). One SIS can perform one or several safety instrumented functions (SIF).

With this increased dependence on SIS to mitigate risk, it is crucial to be aware of the assumptions and simplifications that form the basis for the reliability calculations. In the following, short introductions to the important standards and guidelines within the field are given, as well as a more thorough description of SIF.


2.1 Standards and guidelines

The IEC 61508 (2002) standard and the more process-specific IEC 61511 (2004) standard are safety standards that state requirements for the use of SIF. OLF has developed a guideline (2004) for the implementation of the two standards.

2.1.1 IEC 61508 & IEC 61511

IEC 61508 shall be applied whenever “there is a possibility that E/E/PE technologies might be used, (…) so that the functional safety requirements for any E/E/PE safety-related systems are determined in a methodical, risk-based manner” (IEC, 2002). If the hardware device has already been proven in use, IEC 61511 can be followed, as this standard focuses on the integration of such components. Note that E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related systems are referred to as SIS (safety instrumented systems) throughout this thesis.

IEC 61508-6 states that “the overall goal is to ensure that plant and equipment can be safely automated. A key objective of this standard is to prevent failures of control systems triggering other events, which in turn could lead to danger, and (to prevent) undetected failures in protection systems, making the systems unavailable when needed for a safety action”.

The IEC 61508 standard is divided into the following seven parts:

Part 1, General requirements: Specifies the requirements that are applicable to all parts. Introduces the safety life cycle perspective as the technical framework for the standard.

Part 2, Requirements for electrical/electronic/programmable electronic safety-related systems: Provides additional and more specific requirements for the hardware than the first part. Specifies the requirements for activities in the design and manufacturing phase.

Part 3, Software requirements: Provides additional and more specific requirements for the design and development of the software than the first part.

Part 4, Definitions and abbreviations: Lists all the definitions and abbreviations used throughout the standard.

Part 5, Examples of methods for the determination of safety integrity levels: Describes the underlying concepts of risk and gives methods for determining safety integrity levels.

Part 6, Guidelines on the application of IEC 61508-2 and IEC 61508-3: Gives a guideline for the application of part 2 and part 3 with examples and methods.

Part 7, Overview of techniques and measures: Gives an overview of techniques and measures for control of hardware failures and for avoidance and control of systematic failures.


The IEC 61508 has a lifecycle approach for the SIS, as presented in Figure 1. Following the steps in the lifecycle ensures that the SIF is achieved through a systematic approach to all the necessary activities.

Figure 1, IEC 61508 Safety lifecycle (IEC 61508)

The system design complies with the IEC 61508 and IEC 61511 standards when the company fulfils the requirements related to:

• Management of functional safety
• Safety lifecycle requirements
• Verification
• Process hazard and risk analysis
• Allocation of safety functions to protection layers
• SIS safety requirements specification
• SIS design and engineering
• Factory acceptance testing (FAT)
• SIS installation and commissioning
• Requirements for application software, including selection criteria for utility software
• SIS safety validation
• SIS operation and maintenance
• SIS modification
• SIS decommissioning
• Information and documentation requirements

The calculation of the reliability of the SIF is only a small part of IEC 61508 compliance. Some of the assumptions and simplifications made in the IEC standard related to reliability are assessed in this thesis. Lundteigen & Rausand (2006) stated: “The standards are not prescriptive, which gives room for different interpretations, and hence opens up for new methods, approaches and technology”.

The implications of the IEC 61508 standard for the SIS are described in more detail throughout chapter 2.


2.1.2 OLF 070 guideline

The purpose of the OLF 070 guideline (2004) is to provide a simplified guideline for the application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry. Note that while the standards are risk-based, meaning that the users have to determine the risks related to the system and on this basis state the required SIL of the SIF, the OLF 070 guideline provides minimum SIL requirements for the most common SIFs. This gives the guideline a different approach to assessing the SIS than intended in the IEC 61508 standard.

2.2 Reliability data sources

2.2.1 OREDA

OREDA (Offshore REliability DAta) (2002) collects and exchanges reliability data among the participating companies and acts as the forum for co-ordination and management of reliability data collection within the oil and gas industry (OREDA, 2007). It was initiated in 1981 and has issued four public editions of a Reliability Data Handbook (1984, -92, -97, -02). In OREDA a failure is classified as either critical, degraded or incipient.

• Critical failure: “a failure that causes immediate and complete loss of a system’s capability of providing its outputs”.

• Degraded failure: “a failure which is not critical, but which prevents the system from providing its outputs within specifications. Such a failure would usually, but not necessarily, be gradual or partial, and may develop into a critical failure in time”.

• Incipient failure: “a failure which does not immediately cause loss of a system’s capability of providing its output, but which, if not attended to, could result in a critical or degraded failure in the near future”.

Each failure is assigned to one of these severity classes independently of the failure mode and failure cause, meaning that a “leakage in closed position” failure mode can be found listed both as a critical and as a degraded failure.

2.2.2 PDS

PDS is the Norwegian abbreviation for “reliability of computer-based safety systems”. SINTEF is the author of both a PDS Method Handbook and a PDS Data Handbook. The PDS approach is described in the former, while the latter contains data dossiers for different components. These are based on OREDA, but the project group has adjusted the figures through expert judgement. The PDS method is in line with the main principles in the IEC 61508 standard, except for a somewhat different approach regarding failure classification, modelling of common cause failures and the treatment of systematic failures (PDS Method, 2006). Of special relevance for this thesis are the quantification of systematic failures, called PTIF (test independent failures, TIF), and the concept of Critical Safety Unavailability (CSU), which in addition to the IEC 61508 approach to PFD calculation also includes downtime due to test and repair.


2.2.3 Exida

Exida (excellence in dependable automation) provides reliability data for use in the process and machinery industries. The numbers are based on FMEDA data or exida comprehensive analysis with data from OREDA, PDS etc. The main reliability concepts and failure classifications correspond to a great extent to those described in IEC 61508.


2.3 Safety Instrumented Functions

Within every industry there is a need to apply hazard identification methods such as HAZID, HAZOP and FMEA in order to determine the need for extra safety measures. Various measures may be implemented, among them the introduction of SIFs. This concept is thoroughly described in the following.

2.3.1 Main principles

SIFs are functions whose purpose is to achieve or maintain a safe state for the system. The Norwegian “Facilities regulations” § 7 state that a SIF has several purposes: it should detect abnormal circumstances, it should prevent such abnormal situations from escalating into a dangerous state, and it should mitigate damage in case of accidents. Lundteigen & Rausand (2006) describe the SIF even more simply: detect – decide – act. There are two concepts regarding SIFs: “safety function requirements” and “safety integrity requirements”. While the former specifies what the SIF should do, the latter concerns the likelihood that the SIS is able to perform the specific SIF satisfactorily within a stated period of time (IEC, 2002).

Any system, implemented in any technology, which carries out a SIF is a SIS (IEC, 2002). A SIS covers all parts of the system that are required to carry out a SIF, and may for example consist of the following subsystems: sensors, control logic and communication systems, final actuators, and the critical actions of a human operator (IEC, 2002). In this thesis the actuating items are synonymous with safety valves. Figure 2 is an illustration of a simple SIS.

Figure 2, Sketch of a simple SIS (Rausand & Høyland, 2004)

Examples of SIS include the emergency shut-down system in a hazardous chemical process plant, automobile indicator lights, anti-lock braking and engine-management systems, and remote monitoring, operation or programming of a network-enabled process plant (adapted from Rausand & Høyland, 2004).

The safety integrity level (SIL) specifies the safety integrity requirements of the SIF to be allocated to the SIS. It states the probability that the SIS will fail to perform the requested SIF upon demand, often referred to as the PFD (probability of failure on demand). The PFD may be interpreted in two ways: as the probability that the system will be in a dangerous failure mode upon demand, or as the fraction of time the system is in a dangerous failure mode and does not work as a SIF. In order to attain the requested SIL it is also required to avoid and


control systematic failures and to select the hardware configuration within the architectural

constraints. These requirements are further described in section 2.3.3.

The IEC 61508 standard divides the SIL into four levels, where the highest SIL rating states the lowest probability that the SIS will fail to perform the required SIF. Depending on whether the mode of operation is low demand or high/continuous demand, the ranges of the levels differ, as shown in Table 1. Low demand mode embraces systems “where the frequency of demands for operation made on a safety related system is no greater than one per year and no greater than twice the proof-test frequency” (IEC 61508, 2002); otherwise the system is classified as high demand. An example of a low demand application in subsea production is the safety valve, where the valve remains static until a demand occurs. An application in high demand mode can for example be the brake system in a car.

Table 1, SIL for low and high demand mode of operation (IEC 61508-1, 2002)

Safety integrity level    Low demand mode of operation             High demand or continuous mode of operation
                          (average probability of failure to        (probability of a dangerous
                          perform its design function on demand)    failure per hour)

4                         ≥ 10^-5 to < 10^-4                       ≥ 10^-9 to < 10^-8
3                         ≥ 10^-4 to < 10^-3                       ≥ 10^-8 to < 10^-7
2                         ≥ 10^-3 to < 10^-2                       ≥ 10^-7 to < 10^-6
1                         ≥ 10^-2 to < 10^-1                       ≥ 10^-6 to < 10^-5

Note that one SIS may perform several SIFs, and that the reliability assessments are done for each SIF and not for the SIS. The SIL rating is often required to be within the midpoint of each level to be considered good enough, meaning that the PFD has to be less than 0.5·10^-3 to meet the SIL 3 rating.
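The low demand bands in Table 1, together with the midpoint rule of thumb described above, translate directly into a small lookup. A minimal sketch (the midpoint criterion is a common industry practice as stated in the text, not a requirement of the standard itself):

```python
def sil_low_demand(pfd):
    """Return the SIL corresponding to an average PFD in low demand
    mode (IEC 61508-1), or 0 if the PFD falls outside all SIL bands."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (low, high) in bands.items():
        if low <= pfd < high:
            return sil
    return 0

def meets_midpoint(pfd, sil):
    """The stricter 'midpoint' criterion: e.g. PFD < 0.5e-3 for SIL 3."""
    upper = 10.0 ** (-sil)   # upper PFD bound of the SIL band
    return pfd < 0.5 * upper
```

For example, a PFD of 5·10^-4 falls in the SIL 3 band, but only just fails the midpoint criterion.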

Two qualitative methods to determine the required SIL are presented in the IEC 61508 standard: the risk graph and the hazardous event severity matrix. This is basically done by assessing the probabilities, frequencies and consequences of certain events, whereupon the need for risk reduction by a SIF is decided.

In order to realize the SIL target it is necessary to allocate the safety integrity requirements to each SIF, and subsequently obtain the design requirements for the SIS, as shown in Figure 3. Note that each of the sub-elements (sensors, logic and final elements) has to achieve the required SIL rating in order to achieve the overall SIL requirement. The oil companies often require a certain SIL rating of the equipment they purchase, making it the manufacturers’ responsibility, or challenge, to design the system to meet the requirements.


Figure 3, Allocation of SIL (IEC 61508-1)

The IEC 61508 standard, the Norwegian “Activities regulations” § 1-2 and the Haddon energy model (1973) all share the same philosophy regarding barriers. In order to reduce risk, the priority should be to apply measures in the design, as this is the best way to eliminate the hazards. If this does not reduce the risk to the tolerable region (cf. the ALARP principle as described in IEC 61508-5 and the Norwegian “Framework regulations” § 9), barriers are introduced in order to prevent or mitigate impact on people, the environment and/or material assets. A SIS can be such a barrier, as shown in Figure 4. Risk should be reduced further than strictly necessary to come within the acceptable region, as long as this is economically reasonable.

Figure 4, Risk reduction (IEC 61508)

Note that introducing a SIS is only one measure among others. It is equally important to

introduce other barriers.


2.3.2 Availability of SIF

Dangerous and safe failures

If the SIS fails to perform the intended SIF, the system is brought to a fault state. The failure that causes the fault can be either dangerous or safe. A dangerous failure is defined as a “failure which has the potential to put the safety-related system in a hazardous or fail-to-function state”. Some of these failures can be revealed at an early stage through testing, or by coincidence by personnel, while others remain undetected. A distinction is made between dangerous detected and dangerous undetected failures, and between safe detected and safe undetected failures, as illustrated in Figure 5.

Figure 5, Failure mode classification (IEC 61508)

Examples of dangerous detected failures are those revealed by diagnostic testing. A dangerous undetected failure, on the other hand, is a failure not revealed before a proof test or a demand. These failures are important to discover as soon as possible. A representative safe failure is a spurious trip, for example that the safety valve closes without a real demand. Note that classifying a failure according to these classes is not always straightforward and may easily be interpreted differently among users.

Random hardware failures and systematic failures

Another way of classifying the failures is to differentiate between random hardware (physical) failures due to aging and stress, and systematic (non-physical) failures due to design and interaction (adapted from IEC 61508):

• Random hardware failures: Failures occurring at random times, resulting from a variety of degradation mechanisms in the hardware. Usually, only degradation mechanisms arising from conditions within the design envelope (natural conditions) are considered random hardware failures. System failure due to such failures can be quantified with reasonable accuracy.

• Systematic failures: Failures that are related in a deterministic way to a certain cause, which can only be eliminated by modification of the design, manufacturing process, operational procedures, documentation or other relevant factors. Design faults and maintenance procedure deficiencies are examples of causes that may lead to systematic failures. System failure due to systematic failures cannot easily be predicted.


It is of great interest to assess the unavailability of the safety system. Only the dangerous undetected (DU) failures form the basis for the PFD calculation. With τ as the proof test interval and F(t) as the distribution function of the time T to DU failure, the safety unavailability Ā(t) and the PFD are given by:

Ā(t) = Pr(a DU failure has occurred at, or before, time t) = Pr(T ≤ t) = F(t)

PFD = (1/τ) ∫₀^τ Ā(t) dt = (1/τ) ∫₀^τ F(t) dt
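For an exponentially distributed time to DU failure, F(t) = 1 − exp(−λ_DU·t), this integral can be evaluated numerically and compared with the familiar first-order approximation PFD ≈ λ_DU·τ/2. A minimal sketch in Python, where the failure rate and the one-year proof test interval are illustrative assumptions, not figures from this thesis:

```python
import math

def pfd_avg(lambda_du, tau, steps=10_000):
    """Average PFD over one proof test interval tau, found by numerically
    integrating the unavailability A(t) = F(t) = 1 - exp(-lambda_du * t)
    with the midpoint rule."""
    dt = tau / steps
    area = sum(1.0 - math.exp(-lambda_du * (i + 0.5) * dt) for i in range(steps))
    return area * dt / tau

lambda_du = 5e-7    # assumed dangerous undetected failure rate per hour
tau = 8760.0        # proof test interval: one year, in hours

pfd = pfd_avg(lambda_du, tau)
approx = lambda_du * tau / 2   # widely used first-order approximation
```

For small λ_DU·τ the two results agree closely, which is why the linear approximation is so common in practice.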

Calculation of safety unavailability is further explained in section 2.3.3, while detection of failures through testing is treated in section 2.5 and further discussed in chapters 3 and 4.

Failure modes

The failure mode describes the various abnormal states of an equipment unit, and the

possible transition from correct to incorrect state (OREDA, 2002). The main failure modes of

the safety valve are (adapted from Lundteigen & Rausand 2007 and Rausand & Høyland,

2004):

• FTC = Fail to close on demand

• LCP = Leakage (through the valve) in closed position

• ELP = External leakage in closed position

• DOP = Delayed operation

• ST = Spurious trip

• FTO = Fail to open on command

When valves are designed to stop the flow (as emergency shutdown valves are), the FTC and LCP failure modes can be classified as dangerous failures, since the purpose of the valves is not fulfilled (Rausand & Høyland, 2004). The valves are designed to close within a specified amount of time, and if this demand is not fulfilled the failure is classified as a DOP failure mode, which is also dangerous. An ELP failure, a leakage to the exterior when the valve is in closed position, is likewise classified as dangerous.

As already mentioned, these failure modes can be classified both as critical and as degraded failures in OREDA, leaving it up to the user how to interpret the data. From a safety perspective, the ST and FTO failure modes do not imply danger, since for safety valves these failure modes correspond to the safe position (closed).

Common cause failures and spurious trips

In order to make a system more failure resistant, a common approach is to introduce

redundancy. There are two aspects that may reduce this benefit; spurious trips and common

cause failures. Redundancy for safety valves is often obtained by placing two valves in series,

meaning that if one valve fails to close, the other can shut down the system instead. As it

takes only one valve to close down the system, redundancy could lead to a higher spurious

trip rate. For sensors this can be solved, or at least minimized, by introducing voting. Voting

is when 2 sensors out of 3 sensors are programmed to give an order to close the valves, thus

removing the possibility that one single sensor can command the valve to close. A similar

solution is not possible for valve.
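The 2oo3 voting logic can be sketched as a simple function: a single spuriously tripping sensor cannot close the valves, while any two genuine demands can.

```python
def vote_2oo3(s1: bool, s2: bool, s3: bool) -> bool:
    """2oo3 voting: issue the shutdown command only when at least two
    of the three sensors report a demand (booleans sum as 0/1)."""
    return (s1 + s2 + s3) >= 2
```

This is how voting trades a small loss of sensitivity for robustness against single-sensor spurious trips.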


Common cause failures (CCF) may bring down both safety valves at the same time, reducing the potential positive effect of introducing redundancy to the system. By the IEC 61508 definition, a common cause failure is a “failure, which is the result of one or more events, causing coincident failures of two or more separate channels in a multiple channel system, leading to system failure”. The β-factor model is one way of describing common cause failures quantitatively; the β-factor gives the fraction of common cause failures among all failures of a component:

Pr (Common cause failure | Failure) = β

This is illustrated in Figure 6. Rausand & Høyland (2004) give more details about the β-factor model and other alternative models.

Figure 6, Fractions of different types of failures for a system with two components
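Under the β-factor model, a component's dangerous undetected failure rate λ_DU splits into an independent part (1 − β)·λ_DU and a common cause part β·λ_DU. For a 1oo2 valve pair, a standard first-order approximation (cf. Rausand & Høyland, 2004) adds the independent double-failure contribution ((1 − β)·λ_DU·τ)²/3 to the common cause contribution β·λ_DU·τ/2, which behaves like a single component. A sketch with assumed, illustrative figures:

```python
def pfd_1oo2_beta(lambda_du, tau, beta):
    """First-order average PFD of a 1oo2 configuration under the
    beta-factor CCF model: independent double-failure term plus the
    common cause term, which acts like a single channel."""
    lam_ind = (1.0 - beta) * lambda_du   # independent failure rate
    lam_ccf = beta * lambda_du           # common cause failure rate
    return (lam_ind * tau) ** 2 / 3.0 + lam_ccf * tau / 2.0

# Assumed rate of 5e-7 per hour, one-year proof test interval:
pfd_beta_10 = pfd_1oo2_beta(5e-7, 8760.0, beta=0.10)
pfd_beta_01 = pfd_1oo2_beta(5e-7, 8760.0, beta=0.01)
```

Even a modest β dominates the result, which is why redundancy alone cannot compensate for susceptibility to common cause failures.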

Goble (2003) states that three principles should be followed in order to avoid CCF:

1. Reduce the chance of a common stress – physical separation and electrical separation of redundant units.

2. Respond differently to a common stress – redundant units should use diverse technology/mechanisms.

3. Increase the strength against all failures.

A method for estimating the common cause beta factor is provided in IEC 61508-6, or the

maximum values given in table D.4 in IEC 61508-6 can be used directly.


2.3.3 SIS requirements

There are several requirements in IEC 61508 related to the design of a SIS. As illustrated in Figure 7, these requirements can be classified as related to hardware safety integrity, the system behaviour on detection of a fault, and systematic safety integrity. Safety integrity is the probability that the SIS performs the required SIF under all the stated conditions within a stated period of time (IEC 61508).

Figure 7, SIS design requirements

Hardware safety integrity

In order to protect against random hardware failures there are certain architectural constraints that limit the designer’s freedom in how the hardware may be configured (Lundteigen and Rausand, 2006). Two concepts are related to the architectural constraints: safe failure fraction (SFF) and hardware fault tolerance (HFT). The SFF can be interpreted in two ways: one is as the fraction of failures considered safe versus the total failure rate; the other is as the fraction of failures not leading to a dangerous failure of the SIF (op. cit.).

SFF = (Σλ_S + Σλ_DD) / λ_Tot = (Σλ_S + Σλ_DD) / (Σλ_S + Σλ_DD + Σλ_DU)

where:

λ_S = rate of safe failures
λ_DD = rate of dangerous detected failures
λ_DU = rate of dangerous undetected failures
λ_Tot = total rate of dangerous and safe failures

Consequently, the SFF can be increased by detecting more of the dangerous undetected failures and thus reclassifying them as dangerous detected.
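The SFF formula above is a direct ratio of failure rates. A minimal sketch, with assumed illustrative rates (per hour):

```python
def safe_failure_fraction(lambda_s, lambda_dd, lambda_du):
    """SFF = (safe + dangerous detected) / (safe + dangerous detected
    + dangerous undetected), per the formula above."""
    return (lambda_s + lambda_dd) / (lambda_s + lambda_dd + lambda_du)

# Assumed rates: 3e-7 safe, 4e-7 dangerous detected, 3e-7 dangerous undetected
sff = safe_failure_fraction(lambda_s=3e-7, lambda_dd=4e-7, lambda_du=3e-7)
```

Moving part of λ_DU into λ_DD (e.g. by adding diagnostics) raises the SFF, as noted above.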

The HFT is a measure of how many of the components can be lost without losing the property of being a safety function. 1oo2 and 2oo3 architectures have an HFT of 1, while a 1oo3 system has an HFT of 2.

The highest SIL that can be claimed for a safety function is limited by the HFT and the SFF of

the subsystems that carry out the safety function (IEC 61508-2). It is differentiated between

type A and type B subsystems. Simplified, a subsystem is regarded as type B when it consists of one or more components with uncertainty regarding the failure data/modes, or uncertainty about its behaviour in a fault mode; otherwise the subsystem is of type A. A safety valve is normally defined as a type A subsystem (Lundteigen and Rausand, 2007).

Table 2 gives the attainable SIL rating under these constraints for type A subsystems.

Table 2, SIL for type A subsystem (IEC 61508-2)

                         Hardware fault tolerance
Safe failure fraction    0        1        2
< 60%                    SIL 1    SIL 2    SIL 3
60% - < 90%              SIL 2    SIL 3    SIL 4
90% - < 99%              SIL 3    SIL 4    SIL 4
≥ 99%                    SIL 3    SIL 4    SIL 4

The attainable SIL rating for type B subsystems is given in Table 3.

Table 3, SIL for type B subsystem (IEC 61508-2)

                         Hardware fault tolerance
Safe failure fraction    0              1        2
< 60%                    Not allowed    SIL 1    SIL 2
60% - < 90%              SIL 1          SIL 2    SIL 3
90% - < 99%              SIL 2          SIL 3    SIL 4
≥ 99%                    SIL 3          SIL 4    SIL 4
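Tables 2 and 3 can be encoded as a small lookup over the SFF band and the HFT. A sketch, where 0 denotes "not allowed":

```python
# Rows indexed by HFT (0, 1, 2); columns by SFF band
# (<60 %, 60-90 %, 90-99 %, >=99 %). 0 means "not allowed".
MAX_SIL = {
    "A": {0: (1, 2, 3, 3), 1: (2, 3, 4, 4), 2: (3, 4, 4, 4)},
    "B": {0: (0, 1, 2, 3), 1: (1, 2, 3, 4), 2: (2, 3, 4, 4)},
}

def sff_band(sff):
    """Index of the SFF band a given fraction falls into."""
    if sff < 0.60:
        return 0
    if sff < 0.90:
        return 1
    if sff < 0.99:
        return 2
    return 3

def max_sil(sff, hft, subsystem_type="A"):
    """Highest SIL claimable for a subsystem given its SFF and HFT,
    per the IEC 61508-2 architectural constraints (Tables 2 and 3)."""
    return MAX_SIL[subsystem_type][hft][sff_band(sff)]
```

For example, a type A safety valve with an SFF of 75 % and one redundant valve (HFT 1) can claim at most SIL 3.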

Because of the difficulties of achieving and maintaining a SIL 4 throughout the safety

lifecycle, applications which require the use of a single SIF with SIL 4 should be avoided

where reasonably practicable (IEC 61511-1).

Related to hardware safety integrity there are also requirements for the PFD. The calculations should take into account the system architecture, dangerous failures undetected/detected by diagnostic tests, susceptibility to common cause failures, diagnostic coverage, test intervals, repair times etc. The calculations should be done for each sub-element, which gives the following formula for the SIS in Figure 2:

PFD_SYS = PFD_D + PFD_L + PFD_AI

where the subscripts denote the detectors, the logic solver and the actuating items, and where each term is given by PFD = (1/τ) ∫₀^τ Ā(t) dt = (1/τ) ∫₀^τ F(t) dt, as presented in the last section.
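The summation over sub-elements, combined with a SIL target, can be sketched with hypothetical subsystem values (the figures below are illustrative assumptions only):

```python
# Hypothetical per-subsystem PFDs for the SIS in Figure 2:
pfd_detectors = 1.0e-4
pfd_logic = 2.0e-5
pfd_actuating_items = 3.5e-4

# System PFD is the sum of the subsystem contributions
pfd_sys = pfd_detectors + pfd_logic + pfd_actuating_items

# SIL 3 requires a PFD in [1e-4, 1e-3) in low demand mode (Table 1)
in_sil3_band = 1e-4 <= pfd_sys < 1e-3
```

Note how the actuating items dominate the sum; this matches the observation in section 2.5 that the final elements are the greatest PFD contributor.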

System behaviour on detection of fault

The requirements for the system behaviour on detection of a fault are to specify an action to achieve or maintain a safe state, or to assure safe operation while repairs are carried out.


Systematic safety integrity

Systematic safety integrity is related to evidence of proven in use, avoidance of failures and control of such failures. Evidence of proven in use is basically adequate documentation that the likelihood of failure of the subsystem in the SIS is low enough to achieve the required SIL for the SIF.

The requirements for avoidance of failures embrace the measures for preventing the introduction of faults during design and development of the SIS hardware. These requirements only have to be applied to systems that have not yet been proven in use.

The requirements for control of systematic failures emphasize that the design process shall make the SIS tolerant against residual design faults in the hardware and software, environmental stresses, and mistakes made by the operator of the equipment under control. Maintainability and testability shall be considered already in the design and development phase. Annexes A and B of IEC 61508-2 give techniques for avoiding and controlling systematic failures.


2.4 SIS applied in subsea XMT

An example of a SIS in subsea installations is the safety valves in the X-mas Tree (XMT). The XMT itself is regarded as a secondary well barrier, while the surface controlled subsurface safety valve (SCSSV) is regarded as one of the primary well barriers.

The XMT is placed onto a wellhead on the seabed. Basically, the XMT consists of a range of valves and measurement instruments. Its function is to be a connection point between the well and the flowlines, to provide the possibility of shutting down the well in case of emergency, to guide the flow and to provide facilities to control the well. The XMT can lead the flow directly, or indirectly through a manifold, onshore/topside. Figure 8 shows a schematic of an XMT and wellhead. Note the location of the Production Master Valve (PMV) and the Production Wing Valve (PWV), as these are considered the most important valves to close in an emergency situation. They are typically safety gate valves. Testing of these valves is discussed in chapters 3 and 4.

Figure 8, Wellhead and XMT (OREDA, 2002)


Two common types of XMT are the conventional (dual bore) and the horizontal. One of the main advantages of choosing a conventional XMT is that the tree can be retrieved without removing the tubing hanger and the tubing (Sangesland, 2007). In a horizontal XMT the diameter of the tubing can be larger, and the tubing can be replaced without retrieving the tree; the stack-up height of the XMT including the BOP may otherwise be difficult to handle on a conventional drilling vessel. On the other hand, retrieval of a horizontal tree implies retrieval of the tubing. Illustrations of both the horizontal and the conventional tree are given in Annex A. Figure 9 shows an example of a horizontal XMT.

Figure 9, Horizontal XMT (AKS, 2007)

The gate valve is normally preferred as a safety valve over the ball valve, which requires a rotational force in addition to vertical movement. Gate valves normally have a lower internal leakage and a shorter stem travel. Closing is assisted by pressure in the bore cavity pushing the stem out of the valve cavity, and the actuator spring is designed to close the valve even without pressure in the system. In Figure 10 a gate valve is shown in closed (left) and open (right) position. Note that this is only one of many solutions.

Figure 10, Gate valve with actuator (Ring-O, 2007)

Note that for fail-safe gate valves the hole in the gate is in the upper part, so when the valve is de-energized the gate shifts upwards and the valve closes. The valve will close whenever loss of electric and/or hydraulic power is detected.


2.5 Testing of the SIS’ ability to perform the SIF

IEC 61508 stresses the importance of considering testability already in the design phase of a system. In order to maintain the required SIL it is necessary to perform tests to assure that the equipment is working as desired. If safety system devices are not tested, the dangerous failures reveal themselves when a process demand occurs, often resulting in the unsafe event that the safety system was designed to prevent (Summers, 2000b). Such tests are performed both before and during installation, but the testing during the production life is of great importance. Several methods and philosophies exist on this matter today.

A proof test is a “periodic test performed to detect failures in a safety-related system so that, if necessary, the system can be restored to an “as new” condition or as close as practical to this condition” (IEC 61508, 2002). As a proof test requires production shut-down, other measures have been introduced that offer online testing: diagnostic tests and partial stroke tests. The logic solver in the SIS is often programmable and may carry out diagnostic self-testing during operation. This is done by the logic solver sending frequent signals to the detectors and to the actuating items, and comparing the responses with predefined values (Rausand & Høyland, 2004). Since there is no explicit definition of diagnostic testing in IEC 61508, the interpretation of Velten-Philipp and Houtermans (2004) is used: “a test is a diagnostic test if it fulfils the following three criteria:

1. It is carried out automatically (…) and frequently (…).

2. The test is used to find failures that can prevent the safety function from being available.

3. The system automatically acts upon the results of the test.”

Partial stroke testing (PST) is, when implemented, normally applied to valves. The test is conducted by simply stroking the valve to check that it is not stuck, thus revealing hidden dangerous failures. It is not done as frequently as diagnostic testing, and depending on the chosen system it is not necessarily performed automatically. Hence PST does not fulfil the criteria for being classified as a diagnostic test.

A function test is not defined by the IEC 61508/61511 standards, but for a valve it often implies a full stroke test. This can be interpreted as the function test simply confirming that the valve can close, not whether it seals. Since the standard defines proof tests as a measure to restore the system to an as-new condition, reducing the unavailability to zero, it must be assumed that such tests embrace a wider range of test methods in order to be capable of discovering all the failure modes. Note that the literature seldom distinguishes clearly between proof testing and function testing.

The proof testing is only done at certain intervals since it demands a full shutdown of the production. Yet this test is important because it reveals some failure modes that cannot be detected through diagnostic self-testing or PST. The diagnostic coverage (DC), which is the fraction of dangerous failures discovered by diagnostic testing relative to the total number of dangerous failures (adapted from IEC 61508), may differ depending on the system

Partial and imperfect testing of SIF – Theoretical framework – 38

in question and the chosen test approach. The proof test itself may, however, also be incomplete and may be considered an imperfect test. This is further elaborated in the next chapter. Figure 11 shows the relationship between the time concepts and the tests. Rausand & Høyland (2004) give a thorough description of these concepts.

A – The system is taken down in order to perform a full proof test.

B – The system is already down when the full proof test is performed and reveals the dangerous undetected failure.

C – Diagnostic or partial testing reveals the failure before the scheduled proof test, thus reducing the time of the undetected dangerous failure.

Figure 11, Up and down time related to tests

A high level of diagnostic coverage has been developed for the sensors and logic, and with redundancy their contribution to the PFD has been reduced (Metso Automation, 2002), leaving the actuating items/final elements as the greatest contributor.

Because of the disturbance testing imposes on the production, and the risks associated with the testing itself and with restarting after the test is finalised, it is preferred to test as seldom as possible. Hence there is a need to optimise the test intervals to maintain both safety and production interests, that is, to assure as high safety availability as possible without introducing additional production downtime. The causes and consequences of imperfect testing and of partial stroke testing are further discussed in chapters 3 and 4.


3 Imperfect testing

In the IEC 61508 standard it is assumed that after a proof test the component is “as good as new”. For the proof test to be fully effective, it is necessary to detect 100% of all dangerous failures, reducing the unavailability to zero. This may not be feasible. An imperfect test situation may be defined as a situation where the test does not discover all dangerous failures, so that some component unavailability remains. It can be claimed that there are two possible classifications of an imperfect test situation:

1. The test does not cover all possible failures – inadequate test method.

2. The test does not detect all the failures – unsuccessful test.

Hence the function test, the PST and the diagnostic test can all be classified as imperfect tests since they do not cover all failure modes, while a proof test may be imperfect due to unsuccessful testing.

Since the focus in this thesis is on testing, it is assumed that as long as all failures are discovered they can be repaired to an “as good as new” condition. Analogous to the definition of imperfect testing in this thesis, imperfect repair can be defined as the situation where the fault is not repaired perfectly, where it is chosen not to repair the failure, or where an adequate method for repairing the component is lacking. This can be the case when, for example, a leakage is considered minimal, or when the repair of a somewhat delayed operational time (DOP) is postponed until it is more significant. Rausand & Høyland (2004) give an introduction to imperfect repair processes.

This uncertainty related to test quality is not included in the reliability calculations, and it is neither discussed much in the IEC 61508/61511 standards nor in the literature in general. IEC 61508-6 briefly mentions the effects of a non-perfect proof test in annex A (informative only). The topic is elaborated in the following, covering both the possible causes of imperfect testing and its impact on the PFD.

3.1 Causes for an imperfect test

The PDS method (2006) claims that there are three main contributors to loss of safety: unavailability due to dangerous undetected failures (random hardware failures), unavailability due to systematic failures, and unavailability due to known or planned downtime.

Planned downtime is of no significance in this context, but it is of great interest to assess the reasons why random hardware failures and systematic failures are not discovered through testing. One reason why failures are not discovered could be that the instruments needed to confirm the test do not exist. Another reason could be that the company wants to avoid putting stress on the system; thus, instead of slamming the SCSSV shut as in a real demand situation, the test could be performed as a controlled closure by closing the PWV in order to


create a static well. The fishbone diagram in Figure 12 gives additional causes for imperfect

testing of safety valves subsea.

Figure 12, Causes for imperfect testing of subsea safety valves

As described in Figure 12, the reasons for imperfect testing can be related to the attributes methods, materials, machines, milieu and manpower. The attribute ‘materials’ covers the test equipment, ‘methods’ the procedures and formalities around the testing, ‘machines’ the subsea system itself, ‘milieu’ the context of the system, and ‘manpower’ the managers and workers conducting the tests. As illustrated, it is obvious that human interference is an important cause of imperfect testing.

There are no data collected on the proportion of tests that can be claimed to be imperfect. A possible method for estimating the contribution of each of the M-factors described above is proposed in section 3.2.2.


3.2 Effects of an imperfect test

An imperfect test influences the estimation of the PFD since the overall unavailability of the system will be higher than when perfect testing is assumed. Three different interpretations of this situation are described in the following. Depending on the assumptions made, the PFD addition can be expected to be constant over the interval, to increase continually, or to decrease.

• Case A – a constant PFD addition (due to systematic failures not revealed)

• Case B – an increasing PFD addition (due to random hardware failures not revealed)

• Case C – a decreasing PFD addition

Case C depends on which approach is chosen as a basis, case A or case B, and is therefore split into C1 and C2. Using the simple SIS shown in Figure 2, with failure rates from the OLF 070 guideline (2004) as stated in Table 4, the impact of imperfect tests on the PFD is estimated for the three approaches. The reliability block diagram is a good tool for describing the system. Note that the logic has a 1oo2 configuration, the sensors have 2oo3 redundancy, and the final elements have 1oo2 redundancy.

Figure 13, Reliability Block Diagram of a simple SIS

Table 4, Data for the system test example

| Component      | Failure rate λDU (hours⁻¹) (OLF 070) | CCF β-factor (PDS Data, 2006) | Test interval τ (hours) | PFD under perfect testing assumptions |
|----------------|--------------------------------------|-------------------------------|-------------------------|---------------------------------------|
| Sensors (2oo3) | 3.0·10⁻⁷                             | 3%                            | 8760                    | 3.94·10⁻⁵                             |
| Logic (1oo2)   | 1.0·10⁻⁷                             | 2%                            | 8760                    | 9.01·10⁻⁶                             |
| Valves (1oo2)  | 1.0·10⁻⁷                             | 2%                            | 8760                    | 9.01·10⁻⁶                             |
| System         |                                      |                               |                         | 5.74·10⁻⁵                             |

This result corresponds to a SIL 4 classification for the system. No diagnostic testing is assumed for the sensors or logic in this example. The time required to test and repair the items is considered negligible, which may make the calculated result look better than genuine field results.


Note that HFT and SFF requirements are not considered in the case examples; only the PFD requirement for the SIL rating is assessed.
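The PFD-to-SIL mapping used throughout these examples follows the IEC 61508 low-demand bands. A minimal sketch (the function name is illustrative, not from the thesis):

```python
def sil_from_pfd(pfd_avg: float) -> int:
    """Map an average PFD (low-demand mode) to its IEC 61508 SIL band.

    SIL 4: 1e-5 <= PFD < 1e-4, SIL 3: 1e-4 <= PFD < 1e-3,
    SIL 2: 1e-3 <= PFD < 1e-2, SIL 1: 1e-2 <= PFD < 1e-1.
    Returns 0 when PFD >= 1e-1 (no SIL achieved).
    """
    if pfd_avg < 1e-5:
        raise ValueError("PFD below 1e-5 is outside the IEC 61508 low-demand table")
    for sil, low, high in ((4, 1e-5, 1e-4), (3, 1e-4, 1e-3),
                           (2, 1e-3, 1e-2), (1, 1e-2, 1e-1)):
        if low <= pfd_avg < high:
            return sil
    return 0


print(sil_from_pfd(5.74e-5))  # the system of Table 4 -> 4
```

This reproduces the SIL 4 classification of the perfectly tested system and is reused conceptually in the case examples below.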

The PFD impact of an imperfect test depends on the type of failure that remains undetected. Only the dangerous undetected failures should be included in the calculations. Using the simplified equations for the PFDs, the PFD for the system is:

PFD_SYS = PFD_D + PFD_L + PFD_A

PFD_SYS = PFD_2oo3 + PFD_1oo2 + PFD_1oo2

PFD_SYS ≈ [((1−β)·λDU·τ)² + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2]

where β = common cause failure factor, τ = test interval and λDU = dangerous undetected failure rate.

These simplified equations can be used when λDU·τ is small (< 0.1), and they are often used in practical calculations. The approximation is conservative, which means that the approximated value is always greater than the correct value (Rausand & Høyland, 2004).
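The 1oo2 contribution can be checked numerically. A minimal Python sketch of the simplified 1oo2 formula (the function name is illustrative):

```python
def pfd_1oo2(lambda_du: float, tau: float, beta: float) -> float:
    """Simplified average PFD of a 1oo2 group with common cause failures:

        PFD ~ ((1 - beta) * lambda_DU * tau)**2 / 3 + beta * lambda_DU * tau / 2

    Valid when lambda_DU * tau is small (< 0.1); the approximation is conservative.
    """
    independent = ((1 - beta) * lambda_du * tau) ** 2 / 3
    common_cause = beta * lambda_du * tau / 2
    return independent + common_cause


# Logic (and valve) group from Table 4: lambda_DU = 1.0e-7 per hour,
# tau = 8760 h, beta = 2%
print(pfd_1oo2(1.0e-7, 8760, 0.02))  # ~9.01e-6, as in Table 4
```

As expected, the β term dominates: the common cause contribution (8.76·10⁻⁶) is far larger than the independent term (≈ 2.5·10⁻⁷).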

In the calculations of the system test example in the next sections, the PFD is calculated for 20 years because the XMTs are often required to have a life span of at least 20 years. It is assumed that the XMT is not retrieved and overhauled during this period. In order to estimate the impact of imperfect testing of the valves on the PFD of the test system, it is necessary to assess the valve’s PFD first and then use the result in the system test example at the end of each section.


3.2.1 Case A, constant PFD addition

The PDS interpretation of imperfect testing is covered through the concept of Critical Safety Unavailability (CSU). The CSU consists of both the PFD and the contribution from (systematic) Test Independent Failures (TIF). The PDS method is in line with IEC 61508, but differs regarding the quantification of systematic failures. This contribution is not quantified in the IEC 61508 standard, as it may vary from application to application, but it is argued in the PDS Handbook (2006) that there are several reasons why it should be included.

One reason is to reflect the actual risk related to the operation; another is that failure rates are based on historic data, so the rates already include the systematic failures to a certain extent. A third argument is that the systematic failures may be the dominant contributor and should not be excluded, and finally the measures against systematic failures should be reflected in the quantitative failure rate estimate. This approach is supported by the OLF 070 guideline (2004). As illustrated in Figure 14, downtime due to testing and repair (DTUT and DTUR) also contributes to the safety unavailability, but these do not contribute to the CSU.

Figure 14, Contribution to unavailability (PDS method, 2006)

The definition of PTIF is “the probability that the module/system will fail to carry out its intended function due to a (latent) systematic failure not detectable by functional testing (therefore the name ‘test independent failure’)”. The PTIF is assumed to be constant throughout the lifetime, and for extended testing (proof testing) of the valves a value of PTIF = 1.0·10⁻⁵ is suggested. PDS does not give any details for this choice of value. The difficulty of detecting systematic failures is an example of an imperfect test due to an inadequate test method. This gives the following equation:

CSU = PFD + PTIF

This situation compared with the IEC 61508 approach is illustrated in Figure 15.


Figure 15, Sketch of the PFD impact with PTIF addition (case A)

With failure rate λDU = 1.0·10⁻⁷ hours⁻¹ and PTIF = 1.0·10⁻⁵, the unavailability of one component over a life cycle of 20 years is illustrated in Figure 16.

[Plot: unavailability Ā(t) versus time (0–163,000 hours), showing the unavailability curve, the SIL 2 limit, the PFD average and the PFD average under perfect testing.]

Figure 16, Unavailability under imperfect test condition case A

For this exact example the PFD for the imperfect test situation is PFD = 4.48·10⁻⁴, while a perfect test yields PFD = 4.38·10⁻⁴, a difference of 1.0·10⁻⁵, which is the PTIF addition. This addition does not lead to a change of the SIL rating for the component, as the PTIF is small.
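The single-component case A numbers follow directly from the average PFD formula plus the constant TIF term; a minimal sketch with the values assumed above:

```python
# Case A: constant PTIF addition on top of the average PFD of a single
# component tested every tau hours (single-channel, no redundancy).
lambda_du = 1.0e-7   # dangerous undetected failure rate, per hour
tau = 8760.0         # proof test interval, hours
p_tif = 1.0e-5       # test independent failure probability (PDS suggestion)

pfd_perfect = lambda_du * tau / 2    # average PFD, perfect testing ~4.38e-4
csu = pfd_perfect + p_tif            # critical safety unavailability ~4.48e-4
print(pfd_perfect, csu)
```

Both values fall inside the same SIL band, which is why the constant addition does not change the rating here.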


Result system test example

With the data given in Table 4, the PFD of the test system is calculated below. Note that with a 1oo2 configuration for the safety valves, the TIF contribution becomes β·PTIF; C1oo2 is the PDS configuration factor for the valve configuration.

PFD_SYS_PDS = CSU = PFD_SYS + C1oo2·β·PTIF

PFD_SYS_PDS = [((1−β)·λDU·τ)² + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + C1oo2·β·PTIF

PFD_SYS_PDS = 5.74·10⁻⁵ + 1.0·0.02·1.0·10⁻⁴ = 5.94·10⁻⁵

This result differs only marginally from the perfect testing situation since the PTIF addition is relatively small. It does not lead to a change in the SIL rating.


3.2.2 Case B, increasing PFD addition

The second interpretation of imperfect testing is that there is an increasing PFD addition over the life span of the component due to unsuccessful proof testing, meaning that the unavailability is not reduced to zero after the test is conducted. This leads to a shift upwards for the PFD, since the overall unavailability of the system will be higher than when perfect testing is assumed, as illustrated in Figure 17. Note that the unavailability starts at zero, as the initial condition of the component is assumed to be perfect.

Figure 17, Sketch of the PFD impact with imperfect test addition (case B)

As a basis for calculating the PFD impact of imperfect testing, a component is divided into a series structure where one part is “non-testable” and the remaining part is “testable”. In order to function, both parts of the component have to function.

Figure 18, Series structure of a component under imperfect testing

The dangerous undetected failure rate is split into two parts depending on the imperfect test fraction, here named α:

λDU = λDU-NT + λDU-T

λDU-NT = α·λDU

λDU-T = (1−α)·λDU

When the test interval for the testable part is τ = 8760 hours, and τNT = 175,200 hours corresponds to the component life span of 20 years, the PFD for the component is described by:


PFD = (1−α)·λDU·τ/2 + α·λDU·τNT/2

As the relation between the PFD and the test interval is linear, a shorter test interval leads to a smaller PFD. Because of the implications of proof testing, shortening the test intervals is not a desirable solution for achieving the required SIL. For this reason the test interval is held fixed throughout the calculations.
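The split-rate model can be sketched in a few lines of Python (the function name is illustrative); with α = 20% it reproduces the average PFD used in the example below:

```python
def pfd_imperfect(lambda_du: float, tau: float, tau_nt: float, alpha: float) -> float:
    """Average PFD of a component where a fraction alpha of lambda_DU is
    non-testable (renewed only at the end of life, tau_nt) and the remaining
    (1 - alpha) is renewed at every proof test (interval tau):

        PFD = (1 - alpha) * lambda_DU * tau / 2 + alpha * lambda_DU * tau_nt / 2
    """
    return (1 - alpha) * lambda_du * tau / 2 + alpha * lambda_du * tau_nt / 2


# alpha = 20%, lambda_DU = 1.0e-7 per hour, tau = 8760 h, 20 year life span
print(pfd_imperfect(1.0e-7, 8760, 20 * 8760, 0.20))  # ~2.10e-3
```

Setting alpha to 0 recovers the perfect-test average λDU·τ/2 = 4.38·10⁻⁴.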

In Figure 19 a sketch is drawn for the failure rate λDU = 1.0·10⁻⁷ hours⁻¹ over 20 years. It is assumed that the non-testable part is α = 20%. This means that 80% of the failure rate contribution is set to zero after every proof test, while the remaining 20% accumulates over the whole interval. As shown in Figure 19, the non-testable part increasingly dominates the testable part of the system.

[Plot: unavailability Ā(t) versus time over the 20 year life (0–175,000 hours), showing the unavailability curve, the SIL 2 limit, the unavailability under perfect testing, the PFD average and the PFD average under perfect testing.]

Figure 19, Unavailability under imperfect test condition case B

The equation for this situation is as follows:

PFD ≈ α·λDU·τNT/2 + (1−α)·λDU·τPT/2

PFD ≈ 0.2·1.0·10⁻⁷·20·8760/2 + (1−0.2)·1.0·10⁻⁷·8760/2 = 2.10·10⁻³

Note that the PFD average is actually an average of averages, in order to illustrate the possible change in the SIL rating. For this exact example the PFD for the imperfect test situation is PFD = 2.10·10⁻³, while a perfect test yields PFD = 4.38·10⁻⁴, which gives a difference of 1.66·10⁻³.

The PFD impact of different combinations of the failure rate and the percentage non-testable is given in Table 5. The unavailability is calculated for different failure rates, and the range from λDU = 1.0·10⁻⁸ hours⁻¹ to λDU = 1.0·10⁻⁶ hours⁻¹ is chosen as this interval reflects the failure rate of a PMV in a subsea XMT. The non-testable part ranges from 10% to 90% in the calculations. For convenience it is assumed that the same failures remain undetected during the whole lifetime. To illustrate the PFD development, the accumulated PFD is shown for years 1, 10 and 20.

Table 5, Unavailability at time t of a single component under imperfect test conditions

| % non-testable | Year | λDU = 1.0·10⁻⁸ hours⁻¹ | λDU = 1.0·10⁻⁷ hours⁻¹ | λDU = 1.0·10⁻⁶ hours⁻¹ |
|----------------|------|------------------------|------------------------|------------------------|
| 0 %            | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 8.76·10⁻³              |
|                | 10   | 8.70·10⁻⁵              | 8.70·10⁻⁴              | 8.66·10⁻³              |
|                | 20   | 8.70·10⁻⁵              | 8.70·10⁻⁴              | 8.66·10⁻³              |
| 10 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 9.70·10⁻⁴              |
|                | 10   | 1.66·10⁻⁴              | 1.66·10⁻³              | 1.66·10⁻²              |
|                | 20   | 2.54·10⁻⁴              | 2.54·10⁻³              | 2.53·10⁻²              |
| 20 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 1.84·10⁻³              |
|                | 10   | 2.46·10⁻⁴              | 2.46·10⁻³              | 2.44·10⁻²              |
|                | 20   | 4.22·10⁻⁴              | 4.21·10⁻³              | 4.15·10⁻²              |
| 30 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 2.70·10⁻³              |
|                | 10   | 3.25·10⁻⁴              | 3.25·10⁻³              | 3.21·10⁻²              |
|                | 20   | 5.89·10⁻⁴              | 5.88·10⁻³              | 5.75·10⁻²              |
| 40 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 3.57·10⁻³              |
|                | 10   | 4.05·10⁻⁴              | 4.04·10⁻³              | 3.98·10⁻²              |
|                | 20   | 7.56·10⁻⁴              | 7.54·10⁻³              | 7.32·10⁻²              |
| 50 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 4.44·10⁻³              |
|                | 10   | 4.84·10⁻⁴              | 4.83·10⁻³              | 4.74·10⁻²              |
|                | 20   | 9.24·10⁻⁴              | 9.20·10⁻³              | 8.86·10⁻²              |
| 60 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 8.78·10⁻³              |
|                | 10   | 5.63·10⁻⁴              | 5.62·10⁻³              | 5.50·10⁻²              |
|                | 20   | 1.09·10⁻³              | 1.09·10⁻²              | 1.04·10⁻¹              |
| 70 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 8.78·10⁻³              |
|                | 10   | 6.43·10⁻⁴              | 6.41·10⁻³              | 6.24·10⁻²              |
|                | 20   | 1.26·10⁻³              | 1.25·10⁻²              | 1.19·10⁻¹              |
| 80 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 8.77·10⁻³              |
|                | 10   | 7.22·10⁻⁴              | 7.20·10⁻³              | 6.98·10⁻²              |
|                | 20   | 1.43·10⁻³              | 1.42·10⁻²              | 1.33·10⁻¹              |
| 90 %           | 1    | 8.80·10⁻⁵              | 8.80·10⁻⁴              | 8.77·10⁻³              |
|                | 10   | 8.01·10⁻⁴              | 7.99·10⁻³              | 7.71·10⁻²              |
|                | 20   | 1.59·10⁻³              | 1.58·10⁻²              | 1.47·10⁻¹              |

In Table 6 the average differences between the perfect testing results and the imperfect test situation are given. The imperfect test situation yields higher average PFDs than perfect testing.


Table 6, PFD average differences between perfect and imperfect tests

| % Non-testable     | λDU = 1.0·10⁻⁸ hours⁻¹ | λDU = 1.0·10⁻⁷ hours⁻¹ | λDU = 1.0·10⁻⁶ hours⁻¹ |
|--------------------|------------------------|------------------------|------------------------|
| 10                 | 8.32·10⁻⁵              | 8.32·10⁻⁴              | 8.32·10⁻³              |
| 20                 | 1.66·10⁻⁴              | 1.66·10⁻³              | 1.66·10⁻²              |
| 30                 | 2.50·10⁻⁴              | 2.50·10⁻³              | 2.50·10⁻²              |
| 40                 | 3.33·10⁻⁴              | 3.33·10⁻³              | 3.33·10⁻²              |
| 50                 | 4.16·10⁻⁴              | 4.16·10⁻³              | 4.16·10⁻²              |
| 60                 | 4.99·10⁻⁴              | 4.99·10⁻³              | 4.99·10⁻²              |
| 70                 | 5.83·10⁻⁴              | 5.83·10⁻³              | 5.83·10⁻²              |
| 80                 | 6.66·10⁻⁴              | 6.66·10⁻³              | 6.66·10⁻²              |
| 90                 | 7.49·10⁻⁴              | 7.49·10⁻³              | 7.49·10⁻²              |
| Average (PFDdiff)  | 4.16·10⁻⁴              | 4.16·10⁻³              | 4.16·10⁻²              |
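The entries in Table 6 follow from the split-rate model: the average addition reduces to α·λDU·(τNT − τ)/2, since the testable part contributes the same average in both situations. A minimal sketch (function name is illustrative):

```python
def pfd_diff(alpha: float, lambda_du: float,
             tau: float = 8760.0, tau_nt: float = 20 * 8760.0) -> float:
    """Average PFD addition caused by a non-testable fraction alpha:
    the difference between the imperfect and the perfect test situation,

        PFDdiff = alpha * lambda_DU * (tau_NT - tau) / 2
    """
    return alpha * lambda_du * (tau_nt - tau) / 2


for alpha in (0.1, 0.2, 0.5, 0.9):
    # lambda_DU = 1.0e-7 per hour column: ~8.32e-4, 1.66e-3, 4.16e-3, 7.49e-3
    print(alpha, pfd_diff(alpha, 1.0e-7))
```

The linearity in α is also why the table's bottom row (the average over 10-90%) equals the 50% row.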

As illustrated in Figure 20, the impact grows as the failure rate gets higher. For a component with a failure rate of λDU = 1.0·10⁻⁶ hours⁻¹, a high percentage of non-testable failures could potentially lead to a change of the SIL rating, as the result tends towards the outer limit of the classification. Often the client requires the SIL to be in the midpoint of the range.

[Plot: PFDdiff versus % non-testable (0–90%) for λDU = 10⁻⁸, 10⁻⁷ and 10⁻⁶ hours⁻¹.]

Figure 20, Unavailability for different failure rates under imperfect testing

Based on the PFD average differences given in Table 6, special care should be taken in cases with high failure rates for the valves, while for the lower failure rates the impact is not considered critical if the SIL requirement is low. If imperfect tests are to be included in the calculations, Table 7 indicates when this topic should be given attention.


Table 7, Matrix for SIL rating sensitivity due to imperfect testing

| SIL | PFDdiff = 4.16·10⁻⁴ (λDU = 1.0·10⁻⁸ hours⁻¹) | PFDdiff = 4.16·10⁻³ (λDU = 1.0·10⁻⁷ hours⁻¹) | PFDdiff = 4.16·10⁻² (λDU = 1.0·10⁻⁶ hours⁻¹) |
|-----|----------------------------------------------|----------------------------------------------|----------------------------------------------|
| 4   | Red                                          | Red                                          | Red                                          |
| 3   | Yellow                                       | Red                                          | Red                                          |
| 2   | Green                                        | Yellow                                       | Red                                          |
| 1   | Green                                        | Green                                        | Yellow                                       |

Red: the inclusion of the imperfect test addition leads to a change of SIL rating.

Yellow: depending on the exact PFD calculation, the inclusion of the imperfect test addition could lead to a change of SIL rating.

Green: the inclusion of the imperfect test addition will not have an impact on the SIL rating.

As discussed at the beginning of this chapter, it may be hard to assess the exact percentage that remains untested after a proof test; hence, using the imperfect test additions as proposed in Table 7 ensures that conservative estimates are made.

Note that these imperfect test additions apply to one component only, and for other architectures they need to be modified. For the system in question, where the valves have a 1oo2 configuration, the PFDdiff should be modified analogously to the PDS method (2006):

PFDdiff_1oo2 = β·PFDdiff

However, it has been argued (Bak, 2007) that such constant factors are not motivating from a safety designer’s point of view, since improvements in the design will never lead to a quantitative change. A simple solution is to perform a weighting of each of the M-factors described in section 3.1 and then adjust the imperfect test additions in Table 7 for each specific case.

Figure 21, The M-factors’ contribution to the imperfect test addition


The contribution of each of the M-factors can be estimated by:

• 0 = 0 % non-testable

• 0.5 = 25 % non-testable

• 1.0 = 50 % non-testable

• 1.5 = 75 % non-testable

• 2.0 = 100 % non-testable

Pimp, the probability of imperfect tests, is then calculated from the average of the contributors:

Pimp = [(M1 + M2 + M3 + M4 + M5)/5]·PFDdiff

Determining the contributing factors should be done by a multidisciplinary team, analogously to other risk analyses. To facilitate the estimation, more detailed questions should be elaborated to reflect the possible impact of each of the M-factors.

Result system test example

All the M-factors are set to 1.0 in this example except the factor ‘methods’. It is assumed that the valves are tested by a function test, meaning that there is no testing for internal and external leakages. The fraction of LCP and ELP failure modes of the total dangerous failure modes gives a non-testable part of 26% (based on OREDA data as shown in Table 9), which corresponds to 0.5 for the M-factor.

Pimp = [(1 + 1 + 1 + 1 + 0.5)/5]·4.16·10⁻³ = 3.74·10⁻³

Using the simplified equations for calculating the PFD as described in the introduction of this chapter, and adding the imperfect test contribution as calculated, the PFD for the system is:

PFD_SYS = [((1−β)·λDU·τ)² + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + β·Pimp

PFD_SYS = 5.74·10⁻⁵ + 0.02·3.74·10⁻³ = 1.32·10⁻⁴

This result is higher than the original PFD, and the SIL rating, when only the PFD is considered, changes to SIL 3, one level lower.
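The M-factor weighting and the resulting system PFD can be sketched in a few lines (variable names are illustrative; the 5.74·10⁻⁵ base PFD is taken from Table 4):

```python
# M-factor weighting on a 0-2 scale, where 1.0 corresponds to 50% non-testable.
# 'methods' = 0.5 reflects the ~26% non-testable LCP/ELP failure modes.
m_factors = {"methods": 0.5, "materials": 1.0, "machines": 1.0,
             "milieu": 1.0, "manpower": 1.0}

pfd_diff_avg = 4.16e-3    # from Table 6, lambda_DU = 1.0e-7 per hour
beta_valves = 0.02        # CCF beta factor for the 1oo2 valves
pfd_sys_perfect = 5.74e-5 # system PFD under perfect testing (Table 4)

p_imp = sum(m_factors.values()) / 5 * pfd_diff_avg   # ~3.74e-3
pfd_sys = pfd_sys_perfect + beta_valves * p_imp      # ~1.32e-4 -> SIL 3
print(p_imp, pfd_sys)
```

The addition of roughly 7.5·10⁻⁵ pushes the system PFD above the 1.0·10⁻⁴ SIL 4 limit, which is the one-level drop noted above.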


3.2.3 Case C, decreasing PFD addition

Another approach to the assessment of imperfect tests is to let the initial PFD addition decrease over time. The argument is that failures that originally were impossible to detect become so obvious over time that they are discovered, or that they are discovered incidentally by the operators. Depending on whether case A or case B is used as a basis, this situation has two interpretations, named C1 and C2 respectively.

Case C1, decreasing PTIF addition

In the PDS method, a TIF addition is assumed from the start of the component lifetime. But over time it can be presumed that more and more of the systematic failures will be revealed, leading to a decreasing TIF addition and PFDavg. This situation is sketched in Figure 22 for a lifetime of 20 years, where the failure rate is λDU = 1.0·10⁻⁷ hours⁻¹ and PTIF = 1.0·10⁻⁵. In the example the PTIF is assumed to decrease by 1.0·10⁻⁶ each year, a value chosen only for the benefit of the example.

[Plot: unavailability Ā(t) versus time (0–163,000 hours), showing the unavailability curve, the SIL 2 limit, the PFD average and the PFD average under perfect testing.]

Figure 22, Unavailability with decreasing PTIF addition (case C1)

The impact on the PFD for a single component is hardly traceable. The PFD for the imperfect test situation is PFD = 4.41·10⁻⁴, while a perfect test yields PFD = 4.38·10⁻⁴, a difference of 3.0·10⁻⁶.
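One way to see where an average addition of this order comes from is to time-average the linearly decreasing PTIF over the component life. A rough yearly discretisation (an assumption of this sketch, not the thesis's exact scheme):

```python
# Case C1 sketch: PTIF starts at 1.0e-5 and is reduced by 1.0e-6 each year,
# floored at zero, over a 20 year life; the average over the life span gives
# the effective constant addition to the PFD.
p_tif_0, yearly_drop, years = 1.0e-5, 1.0e-6, 20

values = [max(p_tif_0 - yearly_drop * y, 0.0) for y in range(years)]
avg_p_tif = sum(values) / years
print(avg_p_tif)  # ~2.75e-6 with this discretisation
```

The exact figure depends on the discretisation chosen, but it lands at the same order of magnitude as the 3.0·10⁻⁶ difference found for the single component.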

Result system test example

Using the same approach as in case A, but with the decreased PTIF value, gives the following result for the test system example:

PFD_SYS = [((1−β)·λDU·τ)² + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + β·PTIF

PFD_SYS = 5.74·10⁻⁵ + 0.02·3.0·10⁻⁶ = 5.75·10⁻⁵

This result is even lower than case A, and does not lead to a change of the SIL-rating.

Case C2, decreasing imperfect test addition

The non-testable part (as described in case B) of the random hardware failures is not discovered by tests, but a realistic assumption may be that over time the failures become so obvious that they are discovered. In Figure 23 a sketch is made for a component with failure rate λDU = 1.0·10⁻⁷ hours⁻¹, where it is assumed that the non-testable part is α = 20% over a lifetime of 20 years. For the convenience of the example, the non-testable part is assumed to diminish and become testable at a rate of 1% per year.

[Plot: unavailability Ā(t) versus time (0–163,000 hours), showing the unavailability curve, the SIL 2 limit, the PFD average and the PFD average under perfect testing.]

Figure 23, Unavailability with decreasing imperfect test addition (case C2)

The equation for one component in this situation is given below:

PFD ≈ α·λDU·τNT/2 + (1−α)·λDU·τPT/2 ≈ 3.85·10⁻³

For this exact example the PFD for the imperfect test situation is PFD = 3.85·10⁻³, while a perfect test yields PFD = 4.38·10⁻⁴, which gives a difference of 3.41·10⁻³ for the component.

Result system test example

Using the same approach as in case B, but with Pimp = 0.9·3.41·10⁻³ since the failure distribution is different, gives the following result for the system test example:

PFD_SYS = [((1−β)·λDU·τ)² + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + [((1−β)·λDU·τ)²/3 + β·λDU·τ/2] + β·Pimp

PFD_SYS = 5.74·10⁻⁵ + 0.02·0.9·3.41·10⁻³ = 1.19·10⁻⁴

As in case B, this leads to a change of the SIL rating, to SIL 3, considering the PFD requirement only.

3.2.4 Comments to the imperfect test cases

In this chapter several interpretations of imperfect testing have been proposed. The PDS (2006) perspective on systematic failures was described in case A, while the influence of undetectable random hardware failures was assessed in case B. Case C gave an alternative approach to the prior ones, where the additions were assumed to diminish over time. The PFD results from the cases are gathered in Figure 24.

[Chart: PFD (0 to 1.4·10⁻⁴) for the base case and cases A, B, C1 and C2.]

Figure 24, PFD results from case studies on imperfect testing

The case B and case C2 PFD values are considerably higher than the base case, and they are the only ones that lead to a shift of the SIL rating. The PDS approach to systematic failures in case A and case C1 hardly has an impact on the PFD. The results illustrate the importance of designing a testable system, and the impact a non-testable part may have if it is not taken into consideration.

There may be several difficulties in applying the methods described in this chapter. One topic that has been debated is the challenge of quantifying the PTIF value correctly to reflect the hidden systematic failures. Further, a detailed approach should be developed for deciding the contribution of each of the M-factors proposed in case B.

Another aspect that should be taken into consideration is that the PFD is normally based on average values. For increasing unavailability, as shown in case B, there is a substantial difference between the PFD for the first and the last proof test interval, as illustrated by the unavailability sketches. This makes it interesting to assess the PFD maximum values in addition to the traditional PFD average. For high failure rates the imperfect test addition could potentially lead to a further decrease of the SIL rating for the system at the end of its life span.


4 Partial stroke testing

Partial stroke testing (PST) has been introduced in order to reveal failure modes that were previously only detectable through tests requiring process shutdown (Lundteigen and Rausand, 2007). As indicated by its name, PST involves moving the valve only partially in order to confirm that the valve is not stuck. It is assumed that if the valve is able to move, it will most likely continue to the fully closed position in a real demand situation. PST can be categorized as an imperfect test as described in chapter 3. Even so, PST is implemented in some industries because of its positive contribution in revealing some of the failure modes earlier than the scheduled proof tests, or as a measure for extending the proof test interval. PST is not yet a common approach in subsea petroleum production; one reason might be the difficulty of detecting that the valve actually moves. New technology such as smart positioners is now being introduced to the market, giving incentives to assess the advantages and disadvantages for subsea equipment more profoundly.

4.1 Main principles and concepts

A successful implementation of PST into a SIS has several advantages. It may improve safety, as failures are detected at an earlier stage than when only proof testing is conducted. It is possible to maintain the same SIL rating even with longer proof test intervals. This is of special importance when there is high risk related to the testing. It is even claimed that it is possible to take out a component of the architecture and still maintain the requested SIL. The PFD impact of PST is illustrated in Figure 25.

Figure 25, PST impact on the PFD (Lundteigen & Rausand, 2007)

There are three basic types of partial stroke test equipment; mechanical limiting, position

control and solenoids (Summers & Zachary, 2000a):

Mechanical limiting: Requires manual interaction and visual inspection of valve

movement which is obviously not practical to incorporate in subsea systems.


Position control: Enables detection of how far the valve has moved. This requires

additional hardware to be installed, and a system for collecting the test information, making

the cost a major drawback.

Solenoids: The test is conducted by pulsing the solenoid, and the preset valve travel is

confirmed by a limit switch or position transmitter, allowing for automatic documentation of

test status.

The solenoid may either be integrated with the SIS, or it may be a separate PST package

(Lundteigen and Rausand, 2007). The SIS sketch in Figure 26 is an illustration of a solution

with both position control and solenoid (adapted from Beurden & Amkreutz, 2001).

Figure 26, Simple SIS with PST implementation (adapted from McCrea-Steele, 2006)

In subsea petroleum production PST has been implemented in the Kristin field for testing the

High Integrity Pressure Protection System (HIPPS) (Lundteigen and Rausand, 2007).


4.2 Advantages and disadvantages

Some of the advantages and disadvantages of PST are described below (based on Lundteigen & Rausand (2007) and McCrea-Steele (2005, 2006)):

Advantages

• Reduced wear of the valve seat area

o Since the valve is less frequently brought to a closed position

• Reduced probability of sticking seals

o Due to more frequent operation of the valve

• Valve is available during the test period

o If properly designed

• Reduced operational disturbances due to testing

o If properly designed

Disadvantages

• Tests only a portion of the DU failures

• Increased wear

o Due to more frequent operation

• More complex system

o Due to added software and hardware

• Potentially increased spurious trip rate

o Since the valve may continue to fail safe position instead of returning to the

initial position

• Potentially converts the valve to a type B complex subcomponent

o Due to the extra components installed

In addition to this there are some problems related to estimating the coverage when using PST. The measuring devices used to confirm that the test was successful may introduce failures themselves. Besides, there are not many methods that can verify with certainty that the PST actually moved the valve; very often this is only assumed, for example on the basis that the hydraulics were bled off. These topics are discussed throughout the next chapters.


4.3 PST coverage factor

The dangerous failure rate can be split up as shown in Figure 27, consisting of diagnostic testing, partial stroke testing and proof testing. This gives the PFDavg equation:

PFDavg = PFDavg_PT + PFDavg_PST + PFDavg_diagnostic

The PFDavg_diagnostic hardly contributes to the PFDavg since it is very small, and it is therefore not included in the calculations.

Figure 27, Overview of relevant failure rates (Lundteigen & Rausand, 2007)

It is necessary to estimate the coverage of PST in order to optimize the proof test intervals or to determine if a higher SIL rating can be obtained. Lundteigen & Rausand (2007) define the PST coverage as the fraction of dangerous undetected failures detected by PST relative to the total number of dangerous undetected failures:

θ_PST = λ_DU,PST / λ_DU

The PST coverage can be given two interpretations:

1. The mean fraction of dangerous undetected failures that are detected by PST among

all dangerous undetected failures.

2. The probability that a dangerous undetected failure is detected by the PST once a

dangerous undetected failure is present.

The failure rates expressed in terms of λ_D are (op.cit.):

λ_DD = θ_DC · λ_D

λ_DU,PST = (1 − θ_DC) · θ_PST · λ_D

λ_DU,PT = (1 − θ_DC) · (1 − θ_PST) · λ_D

where θ_DC is the diagnostic coverage (as explained in section 2.5):

θ_DC = λ_DD / λ_D
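These splitting formulas can be sketched numerically; the coverage values below are illustrative assumptions, not data from the thesis:

```python
# Split a total dangerous failure rate lambda_D into the parts covered by
# diagnostics, PST and proof testing (relationships from Lundteigen &
# Rausand, 2007). All numeric values are illustrative assumptions.
lambda_D = 3.0e-7      # total dangerous failure rate [per hour]
theta_DC = 0.30        # diagnostic coverage (assumed)
theta_PST = 0.60       # PST coverage (assumed)

lambda_DD = theta_DC * lambda_D                             # detected by diagnostics
lambda_DU_PST = (1 - theta_DC) * theta_PST * lambda_D       # revealed by PST
lambda_DU_PT = (1 - theta_DC) * (1 - theta_PST) * lambda_D  # left for the proof test

# The three parts must add up to the total dangerous failure rate.
assert abs(lambda_DD + lambda_DU_PST + lambda_DU_PT - lambda_D) < 1e-15
```

The assertion makes the consistency requirement explicit: however the coverages are chosen, the three failure-rate parts always sum back to λ_D.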

Summers (1998) put emphasis on the importance of being plant specific when the PST

coverage is assessed, as the valve and exposure environment may differ greatly from case to

case. She also claims that “credit for partial stroking in the quantitative verification of a SIL

should be considered only when the process service is clean and tight shutoff is not required”.


In Table 8 the reasons for the different failure modes are described and connected to the test

strategy they are assumed to be revealed by.

Table 8, Dangerous failure modes and test strategy for a safety gate valve (adapted from

Summers & Zachary 2000a, McCrea-Steele 2006, KOP 2002, Bak 2007 and ATV 2007)

Failure descriptor                                                Failure mode                              Test strategy
Spring cap plate is jammed                                        Fail To Close (FTC)                       FT/PT
Too high test pressure that leads to deformation                  Fail To Close (FTC)                       PST/FT/PT
Failure in closed compensation system                             Fail To Close (FTC)                       FT/PT
Hydraulic line is blocked                                         Fail To Close (FTC)                       PST/FT/PT
Formation of hydrates in bonnet cavity during shutdown            Fail To Close (FTC)                       PST/FT/PT
Foreign object/debris in cavity                                   Fail To Close (FTC)                       FT/PT
Valve seal is seized due to temperature changes                   Fail To Close (FTC)                       PST/FT/PT
One of the two springs breaks and jams                            Fail To Close (FTC)                       FT/PT
Valve stem sticks                                                 Fail To Close (FTC)                       PST/FT/PT
Valve is not fully set back/fully disconnected
after ROV intervention                                            Fail To Close (FTC)                       FT/PT
Valve seat is scarred                                             Leakage in closed position (LCP)          PT
Valve seat contains debris                                        Leakage in closed position (LCP)          PT
Valve seat plugged due to deposition or polymerization            Leakage in closed position (LCP)          PT
Foreign object/debris in cavity                                   Delayed operation (DOP)                   FT or PST with speed of travel test
Hydraulic line to actuator choked                                 Delayed operation (DOP)                   FT or PST with speed of travel test
Due to high friction the valve may delay/be
prevented from fully closing                                      Delayed operation (DOP)                   PT (PST in low pressure fields)
Valve seal is damaged                                             External leakage in closed position (ELP) PT

PST: Partial Stroke Test    PT: Proof Test    FT: Function test (full stroke, no leakage test)

It is the safety gate valve that forms the basis for the assessment of the failure modes in Table 8. The table does not cover all possible failure descriptors for all different valve designs, but reflects the design of the valve producer ATV to a high degree (ATV, 2007). Note that there is no reason why moving the valve 20% should lead to a build-up of corrosion/scaling that eventually would lead to a fail-to-close failure (Bak, 2007); hence it is not included as a failure descriptor.

Note that several assumptions have been made as a basis for the PST coverage estimation:

• Both the critical and degraded failure rates from OREDA are included in the calculations, as the practical distinction between the two may be vague (adapted from Lundteigen & Rausand, 2007).

• Whether it is FTC or FTO that is safety critical for the system cannot be read from the OREDA data, but to avoid counting the contribution twice, the FTC rate was used throughout the calculations.

• Only the failure modes DOP, ELP, FTC and LCP are considered dangerous and relevant for testing of a safety system; hence only these failure modes are included in the calculations.

• Since there is little reliability data for subsea systems in particular, the available data for topside and subsea is merged. This may not be accurate, as there are different requirements related to subsea equipment than to topside equipment. Even so, the inner environment is considered equal for both subsea and topside valves, and the mix may thus be justified.

• Only the latest OREDA edition (2002) is utilized. It may be discussed whether the earlier OREDA editions should be used or not, since the design improves continuously as more operational experience is attained. Hence the old versions may not reflect the failure rates of new equipment realistically.

Leakage in closed position cannot be detected by PST, as the valve needs to be fully closed. Neither is external leakage in closed position assumed to be discovered by PST, as the pressure difference over the valve may be minimal for a high pressure field. The failure mode itself is very unlikely to occur for valves with backseat, as the leakage is only possible during transition. Consequently both of these coverages are set to 0%. It is likely that DOP can be discovered by PST if a speed of travel detector is installed. For the failure descriptor "due to high friction the valve may delay/be prevented from fully close", the PST will probably not detect problems with closing the very last part. Regarding FTC, it may be discussed whether the assumption that the valve will continue to a closed position if it can start to move is realistic.

With the coverage estimated on the failure descriptors in Table 8 only, and on the basis of the assumptions described above, a tentative estimate of the PST coverage for each failure mode may be set to:

FTC – 80%

LCP – 0%

DOP – 90%

ELP – 0%

The PST coverage factor is estimated by collecting the relevant failure modes from OREDA (2002) as shown in Table 9. Only the valves used for ESD, control and safety purposes are chosen. Then the fraction of failures that can be discovered by PST is calculated as shown below:

θ_PST = (θ_PST,FTC · N_FTC + θ_PST,DOP · N_DOP) / N_total

where N_FTC and N_DOP are the numbers of FTC and DOP failures, and N_total is the total number of critical failures.

Table 9, Reliability data as basis for PST coverage estimation (adapted from Lundteigen

& Rausand, 2007)

Failure data (OREDA 2002)                                      FTC    LCP    DOP    ELP    Total
Subsea – Common component process isolation valves (p. 806)      3      2      0      1        6
Subsea – Manifold (p. 818)                                       1      2      0      1        4
Topside – Control & safety equipment (p. 568)                  134     40     30     44      248
Topside – Control & safety equipment (p. 575)                   61      1     21     10       93
Topside – Control & safety equipment (p. 607)                   34     22      4      9       69
Topside – Control & safety equipment (p. 581)                    3      0      2      0        5
Topside – Control & safety equipment (p. 689)                   42      0      5      2       49
Topside – Control & safety equipment (p. 695)                   21      0      1      1       23
Topside – ESD/PSD Ball valves (p. 706)                          11      0      6      3       20
Topside – ESD/PSD Gate valves (p. 717)                           7      0      2      0        9
Total                                                          317     67     71     71      526

When only the data from the latest OREDA edition (2002) is used, and with the PST coverage for the different failure modes as estimated above, the PST coverage is 62%. With more optimistic coverages of the dangerous failure modes, e.g. both FTC and DOP set to 95%, the PST coverage is estimated to be 72%. On the contrary, a more pessimistic approach, with both FTC and DOP set to 50%, yields an estimated PST coverage of 38%.
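As a cross-check, the coverage estimate can be recomputed from the totals of Table 9 together with the tentative per-mode coverages; small deviations from the reported 62% presumably stem from rounding in the underlying data:

```python
# PST coverage from the OREDA (2002) failure counts (totals of Table 9)
# combined with the tentative per-failure-mode coverages:
# FTC 80 %, DOP 90 %, LCP/ELP 0 %.
counts = {"FTC": 317, "LCP": 67, "DOP": 71, "ELP": 71}
coverage = {"FTC": 0.80, "LCP": 0.0, "DOP": 0.90, "ELP": 0.0}

total = sum(counts.values())  # 526 critical failures in total
theta_pst = sum(coverage[m] * counts[m] for m in counts) / total
print(round(theta_pst, 2))    # ~0.60, in the same range as the reported 62 %
```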

Summers & Zachary (2000a) proposed a PST coverage of 70% and Lundteigen & Rausand

(2007) estimated a PST coverage for the Kristin HIPPS valves of 62%, hence the result is in

about the same range as former research.


4.4 Correlation between PST and spurious trips

It has been claimed that PST leads to a higher degree of spurious trips (ST) (Willem-Jan, N. & Rens, W. 2005, McCrea-Steele 2006). A recurring argument is that if the valve starts to move, it will easily continue to close instead of returning to its open position. In Figure 28 a Bayesian belief network illustrates the possible causes for ST during PST. Bayesian belief networks can be used instead of fault trees and cause and effect diagrams to illustrate the relationship between a system failure and its contributing factors (adapted from Rausand & Høyland, 2004).

The result depends highly on the chosen system, whether it is automatic, semi-automatic or manual. For the automatic system, the organisational and human contributors are designed away. In the figure the contributors to an ST for the manual solution are shown with solid lines, while the contributors relevant also for the automatic system are dashed.

Figure 28, Bayesian belief network for ST during PST

From the figure it can be concluded that by implementing an automatic system for

conducting the PST, many of the potential reasons for ST can be designed away.


4.5 Influencing factors for the PST contribution to a SIS

Based on the formulas described in section 4.3, the PFDavg of a system with PST implemented can be given by:

PFD ≈ PFD_PT + PFD_PST + PFD_DT

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2 + λ_DD · τ_DT / 2

The proportion of failures that is not detected through PST is left to be discovered through proof testing, which is assumed to be perfect. This may not be a realistic assumption, as discussed in chapter 3, and is further assessed in section 4.6. A system with PST implemented may be influenced by more factors than those normally included in PFD calculations. Some of the simplifications and assumptions are briefly discussed in the following.

Increase of β-factor

When one of the two valves in the same line fails due to corrosion, there is a high probability that, given the same body materials and process conditions, the other valve will fail as well (metso, 2002).

To be classified as a CCF, the failures have to occur within a short time interval, in practice within the same proof test interval. Hence an extension of the test interval might imply that the β-factor should be incremented over time (Rausand, 2007).

Reasons for this are:

1. Longer time intervals that several components can fail within.

2. Preventive maintenance initiated because of findings on one component will not be

performed as often as with shorter proof test intervals.

If PST is used to extend the proof test intervals, this should lead to an assessment of the β-

factor to reflect the PFD impact this may have.

SFF calculation

An argument for implementing PST is the potential improvement of the SFF and consequently the possibility to reduce the hardware fault tolerance requirement. This would be obtained by the PST converting part of the dangerous undetected failures in the denominator into dangerous detected failures, which appear in the numerator:

SFF = (λ_SD + λ_SU + λ_DD) / (λ_SD + λ_SU + λ_DD + λ_DU)

As shown in Table 2, an improvement from SFF < 60% to 60% ≤ SFF < 90% enables a reduction of the hardware fault tolerance by 1 while still maintaining the SIL 2 rating. λ_DD refers to the failures discovered by the automatic diagnostic tests. As PST does not fulfil the criteria for functioning as a diagnostic test, the PST should not be used to affect the SFF, and hence cannot be an argument for a reduction in the HFT (McCrea-Steele, 2006).
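To illustrate why this matters, the SFF shift can be computed with hypothetical failure rates (all numeric values below are assumptions for illustration, not data from the thesis):

```python
def sff(l_sd, l_su, l_dd, l_du):
    """Safe failure fraction as defined in IEC 61508."""
    return (l_sd + l_su + l_dd) / (l_sd + l_su + l_dd + l_du)

# Hypothetical failure rates [per hour]
l_sd, l_su, l_dd, l_du = 1.0e-7, 1.0e-7, 0.0, 3.0e-7

print(f"{sff(l_sd, l_su, l_dd, l_du):.0%}")  # prints 40%: below the 60 % threshold

# If a 60 % PST coverage were (incorrectly) credited as diagnostic coverage,
# 60 % of lambda_DU would move into lambda_DD:
l_dd2 = l_dd + 0.6 * l_du
l_du2 = 0.4 * l_du
print(f"{sff(l_sd, l_su, l_dd2, l_du2):.0%}")  # prints 76%: crosses into 60-90 %
```

The jump from 40% to 76% is exactly the kind of hardware fault tolerance credit that McCrea-Steele (2006) argues PST should not be allowed to give.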


Choice of position indicator

Different types of position indicators may influence the reliability and efficiency impact of the PST differently. Mechanical position indicators have been utilized topside and are considered to be rather unreliable, as they can easily be moved from their actual position, for example by someone stepping onto them. Erroneous indication of position will not prevent the valve from closing, but it is critical for safety when the indicator shows "closed" while the valve is actually "open", and when the correct sequence of valve closure is crucial. Without a correct indication of the position, the testing will be of no value, as the actual position is not confirmed.

Mechanical position indicators subsea can be read by ROVs only; hence there is a need for other instrumentation. Today smart microprocessor-based devices have been developed by a series of vendors (McCrea-Steele, 2006). Implementation of such components is normally not a standard delivery (ATV, 2007), but they can easily be incorporated in the design.

Together with the mechanical indicator, the electronic indicator offers redundancy for reading off the position "open" or "closed". Some indicators can measure the exact degree "closed", which is of high relevance for PST. McCrea-Steele (2006) claims that supplementary smart programmable equipment may introduce additional undetected failures. On the other hand, the implementation of smart positioners can design away the need for human interference when conducting a PST, reducing the number of reasons for ST substantially, as shown in section 4.4. Furthermore, the electronic indicators can be continuously monitored and the operability readily observed (Summers, 2000b). Digital valve controllers monitor performance trends, enabling the detection of failures long before they prevent the system from functioning (Ali, 2004). This is done by comparing the current test with those performed in the past, also referred to as a signature test.

Solenoid configuration

Summers & Zachary (2002) recommend installing redundant solenoids to prevent ST. They claim that this yields substantial savings through the avoidance of lost production. For subsea equipment there are several other topics to take into consideration if the solenoids are to be made redundant. The solenoids are placed in the control pod on the side of the XMT, enabling retrieval of the whole module if necessary. For installation purposes it is important that the equipment is balanced, so if the weight of the control pod is increased, this demands a balancing weight on the opposite side of the XMT. This impacts both the cost and the handling of the equipment. A special vessel capable of doing heavier lifts might be required, and the installation becomes more weather-dependent. For the control pod installed on the manifold this is less important, as the weight already requires a special lifting vessel. Redundancy for the solenoids should be considered as a possible solution for the manifold.


4.6 PST impact on the SIL

The PFD impact of a single component when implementing PST is illustrated in Figure 29. To better illustrate the potential effect of PST, a failure rate of λ_DU = 3.0·10⁻⁷ hours⁻¹ is chosen for the component. The PST coverage is set to θ_PST = 60% and the test interval is τ_PST = 1 week.

[Figure data omitted: plot of the unavailability Ā(t) versus time (0–164,400 hours), with curves for the instantaneous unavailability, the proof test PFD average, the overall PFD average and the SIL 2 limit.]

Figure 29, Unavailability with PST

The equation for this situation is as follows:

PFD ≈ PFD_PT + PFD_PST

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2

PFD ≈ (1 − 0.6) · 3.0·10⁻⁷ · 8760/2 + 0.6 · 3.0·10⁻⁷ · 182.5/2 ≈ 5.42·10⁻⁴

The implementation of PST would in this case lead to a change of SIL rating from SIL 2 to SIL 3 for the single component, since the PFD is reduced from the previous PFD = 1.31·10⁻³ to PFD = 5.4·10⁻⁴.
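The single-component calculation can be repeated as a short script (a sketch using the same values as the example: λ_DU = 3.0·10⁻⁷ per hour, θ_PST = 60%, τ_PT = 8760 h and τ_PST = 182.5 h):

```python
def pfd_with_pst(lam_du, theta_pst, tau_pt, tau_pst):
    """PFDavg with PST: the uncovered part of lambda_DU waits for the proof
    test, while the covered part is renewed at every partial stroke test."""
    return (1 - theta_pst) * lam_du * tau_pt / 2 + theta_pst * lam_du * tau_pst / 2

lam_du = 3.0e-7                 # dangerous undetected failure rate [per hour]
tau_pt, tau_pst = 8760.0, 182.5  # proof test and PST intervals [hours]

print(f"{lam_du * tau_pt / 2:.2e}")                         # 1.31e-03 (PT only, SIL 2)
print(f"{pfd_with_pst(lam_du, 0.6, tau_pt, tau_pst):.2e}")  # 5.42e-04 (with PST, SIL 3)
```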

In the following, PST is applied to case A and case B as described in chapter 3.


Case A, systematic failures

The PDS method proposes a PTIF addition of 1.0·10⁻³ when PST is implemented. In Figure 30 the PTIF additions for both PT and PST are included. The failure rate for the component is λ_DU = 1.0·10⁻⁷ hours⁻¹, the PST coverage is set to θ_PST = 60% and the test interval is τ_PST = 1 week.

[Figure data omitted: plot of the unavailability Ā(t) versus time (0–164,400 hours), with curves for the instantaneous unavailability, the proof test PFD average, the overall PFD average and the SIL 2 limit.]

Figure 30, Unavailability with PST and PTIF addition

The equation for this situation is as follows:

PFD ≈ PFD_PT + PFD_PST + P_TIF

PFD ≈ (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2 + P_TIF,PT + P_TIF,PST

PFD ≈ (1 − 0.6) · 1.0·10⁻⁷ · 8760/2 + 0.6 · 1.0·10⁻⁷ · 182.5/2 + 1.0·10⁻⁵ + 1.0·10⁻³ ≈ 1.19·10⁻³

As the PFD when only proof testing the component is PFD = 4.38·10⁻⁴, implementing PST leads to a worse PFD result when the PTIF additions proposed by PDS are used. This is the case when the PTIF is significantly greater than the failure rate. Hence, for a safety valve with adequately low failure rate, the PDS method does not support the implementation of PST. But with two valves in series, the PST contribution will be positive, since the common cause addition is 'Addition = β · P_TIF', which also implies that for small β-values the PTIF addition becomes very small.
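The case A figures can be checked with a short script; the split of the PTIF additions used here (1.0·10⁻⁵ for the proof test, 1.0·10⁻³ for the PST) is inferred from the calculation and should be verified against the PDS handbook:

```python
# Case A: PDS test-independent failure probabilities (PTIF) added on top of
# the PT/PST terms. The PTIF split between PT and PST is an assumption
# inferred from the worked example, not a confirmed PDS table value.
lam_du, theta, tau_pt, tau_pst = 1.0e-7, 0.6, 8760.0, 182.5
p_tif_pt, p_tif_pst = 1.0e-5, 1.0e-3

pfd_pt_only = lam_du * tau_pt / 2
pfd = ((1 - theta) * lam_du * tau_pt / 2 + theta * lam_du * tau_pst / 2
       + p_tif_pt + p_tif_pst)
print(f"{pfd_pt_only:.2e} -> {pfd:.2e}")  # 4.38e-04 -> 1.19e-03: PST makes it worse
```

The dominant term is the PST PTIF itself, which is why the method penalizes PST for components with sufficiently low failure rates.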


Case B, random hardware failures

Figure 31 illustrates the unavailability of a component with failure rate λ_DU = 1.0·10⁻⁷ hours⁻¹. The PST coverage is set to θ_PST = 60% and the test interval is τ_PST = 1 week. The non-testable part is assumed to be α = 20%, and it is assumed that the non-testable part is equal for both the PT and the PST.

[Figure data omitted: plot of the unavailability Ā(t) versus time (0–164,400 hours), with curves for the instantaneous unavailability, the proof test PFD average, the overall PFD average and the SIL 2 limit.]

Figure 31, Unavailability with PST and imperfect testing

The equation for this situation is as follows:

PFD ≈ PFD_PT + PFD_PST

PFD ≈ α · λ_DU · τ_NT / 2 + (1 − α) · (1 − θ_PST) · λ_DU · τ_PT / 2 + θ_PST · λ_DU · τ_PST / 2

PFD ≈ 0.2 · 1.0·10⁻⁷ · 20·8760/2 + (1 − 0.2) · (1 − 0.6) · 1.0·10⁻⁷ · 8760/2 + 0.6 · 1.0·10⁻⁷ · 182.5/2 ≈ 1.90·10⁻³

The insertion of the non-testable part and PST leads to a change of SIL rating from SIL 3 to SIL 2, since the PFD increases from PFD = 4.38·10⁻⁴ (only proof testing conducted) to PFD = 1.90·10⁻³.

The impact on the PFD of a component with failure rate λ_DU = 1.0·10⁻⁷ hours⁻¹, where 20% is assumed non-testable, is assessed in Table 10. The table shows the results for both the imperfect test situation and perfect testing, for diverse PST coverages and intervals.


Table 10, PFD related to diverse PST coverages, test intervals and (im)perfect testing

                   PST coverage 50 %             PST coverage 60 %
                   Imperfect     Perfect         Imperfect     Perfect
τPST = 1 week      1.94·10⁻³     2.23·10⁻⁴       1.91·10⁻³     1.80·10⁻⁴
τPST = 1 month     1.95·10⁻³     2.38·10⁻⁴       1.92·10⁻³     1.98·10⁻⁴
τPST = 3 months    1.98·10⁻³     2.71·10⁻⁴       1.95·10⁻³     2.38·10⁻⁴

                   PST coverage 70 %             PST coverage 80 %
                   Imperfect     Perfect         Imperfect     Perfect
τPST = 1 week      1.87·10⁻³     1.37·10⁻⁴       1.84·10⁻³     9.50·10⁻⁵
τPST = 1 month     1.88·10⁻³     1.58·10⁻⁴       1.86·10⁻³     1.19·10⁻⁴
τPST = 3 months    1.93·10⁻³     2.05·10⁻⁴       1.90·10⁻³     1.72·10⁻⁴

Without PST (proof testing only): 4.36·10⁻⁴

Considering only the perfect test situation, it can be claimed that there are relatively small differences between the diverse PST coverages and the different test intervals. A change of the failure rate to λ_DU = 1.0·10⁻⁶ hours⁻¹ showed that it is more important to conduct the PST often than to assess the exact PST coverage. A shorter test interval could potentially lead to a change of the SIL rating of the component, while a higher PST coverage hardly has an impact. Depending on the situation, the improved PFD from PST implementation can be obtained by either improving the PST coverage or shortening the PST test interval (or both).

It is clear that an imperfect test gives a greater negative impact on the PFD than the positive PST contribution; hence the priority should be on reducing the non-testable part. A reduction of the non-testable part by 10% gives a greater improvement of the PFD than a component with 80% PST coverage and a test interval of 1 week obtains.
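The case B pattern behind Table 10 can be sketched with a small parameter sweep; the interval lengths in hours are assumptions (182.5 h matches the "one week" value used in the worked examples), so the results land close to, but not exactly on, the table entries:

```python
def pfd_imperfect_pst(lam_du, alpha, theta, tau_pt, tau_pst, tau_nt):
    """Case B PFDavg: the non-testable fraction alpha is only renewed after
    tau_nt (the life span); the rest is split between proof test and PST."""
    return (alpha * lam_du * tau_nt / 2
            + (1 - alpha) * (1 - theta) * lam_du * tau_pt / 2
            + theta * lam_du * tau_pst / 2)

lam_du, alpha = 1.0e-7, 0.2
tau_pt, tau_nt = 8760.0, 20 * 8760.0  # 1 year proof test, 20 year life span

# Sweep PST coverages and intervals (interval lengths are assumed values)
for theta in (0.5, 0.6, 0.7, 0.8):
    for tau_pst in (182.5, 730.0, 2190.0):
        pfd = pfd_imperfect_pst(lam_du, alpha, theta, tau_pt, tau_pst, tau_nt)
        print(f"coverage {theta:.0%}, tau_PST {tau_pst:>6} h: PFD = {pfd:.2e}")

# The non-testable term alpha*lam_du*tau_nt/2 = 1.75e-3 alone dominates every
# entry, which is why reducing alpha helps more than tuning the PST parameters.
```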


5 Discussion

5.1 Quality of the reliability assessment

As discussed in chapter 3 there are many reasons for imperfect testing. In this thesis they have been classified according to the five M-factors: method, machine, milieu, man-power and material. From the results of the case studies conducted on the topic, it is obvious that it should be of great interest to assess the proportion of non-testable failures. Even a relatively low proportion of non-testable failures has a significant impact on the PFD and hence on the SIL rating of the system.

The assessment of imperfect testing is of special importance for valves with higher failure rates, meaning from λ_DU = 1.0·10⁻⁶ hours⁻¹ and upwards. If the reliability of the safety valves continues to improve so that adequately low failure rates are obtained, the imperfect test assessment may become obsolete, as the PFD impact would be insignificant. This should however not be used as an excuse for not assessing the matter now.

Since it can be difficult to obtain the exact proportion of non-testable failures for the system, a method facilitating the estimation has been proposed. It can be claimed that introducing such an imperfect test addition adds further uncertainty to the results. However, not assessing it would be to step backwards from the customarily conservative estimation approach by ignoring the possible impact of imperfect tests.

The assessment of the PST implementation shows that for higher failure rates, from λ_DU = 1.0·10⁻⁶ hours⁻¹ and above, it is more important to conduct the PST often than to achieve a higher PST coverage. The reason is that the positive PFD impact of carrying out the tests more often is greater than that of improving the coverage by an additional 10%. This makes the difficulty of deciding the exact PST coverage somewhat less important, as the assumptions taken do not have a great impact.

On the contrary, a reduction of the non-testable part by 10% gives a greater improvement of the PFD than obtaining both a higher PST coverage and shorter test intervals. Hence the focus should be on diminishing the factors that lead to unsuccessful tests.

If PST is implemented in order to extend the proof test interval, it might be necessary to change the β-factor (Rausand, 2007). As CCFs are those failures that happen within the same proof test interval, an extension of the interval could lead to more failures being classified as CCFs. Analogously, in the imperfect test situation the period for the non-testable part is longer, and it is likely that several components would fail within the same period. Hence it should be discussed whether the β-factor should be incremented to realistically reflect the PFD impact.


The SIL rating is not a static measure. The PFD is greatly influenced by the operator companies during the production lifetime. The equipment can be delivered in excellent condition and with the opportunity to check and validate the system, but the operational philosophy is significant. As mentioned, the operational philosophy may be to minimize the stress on the system by not conducting the tests in a realistic manner, e.g. conducting function tests instead of proof tests (which imply leakage tests). A leakage test may lead to degradation of the components, which results in higher failure rates and consequently a higher PFD. This makes it interesting to optimize the function test, PST and proof test intervals against the possible degradation of the system if proof tests were conducted. For some situations it may be worthwhile to include the imperfect test addition instead of degrading the components by proof testing.


5.2 Uncertainty regarding the results

The author had very limited statistical material, as the OREDA handbook and the PDS data handbook were the only reliability data sources. Because of the lack of experience with subsea equipment, both topside and subsea data had to be used to obtain enough material; the data may not be accurate, as the requirements for subsea equipment differ from those for topside equipment. The inner environment is considered equal for subsea and topside valves, so it can be argued that the numbers may be used for illustrative purposes.

Nor are the OREDA handbooks detailed enough regarding the different failure descriptors. Attempts to obtain more detailed data from the operator company did not succeed, and the author was thus limited to qualitative adjustments of the PST coverage of the dangerous failure modes.

The models developed are fairly simple. More advanced approaches may give results with higher accuracy than obtained in this thesis. The approximation formulas have been utilized in the calculation of the PFD; more accurate results may be achieved by using the exact equations.

In order to evaluate the assumption of perfect testing, additional assumptions were needed to simplify the problem. One example is the assumption that the same failures remain non-testable throughout the life span of the component. Furthermore, the theories have been applied to the valves only; different results could have been achieved by including the logic and sensors as well.

5.3 Recommendations for further work

Throughout the work with the thesis, several topics were discovered that would be interesting to assess more profoundly:

• Assess more perspectives/interpretations of imperfect testing. Only a few alternatives were included in this thesis in order to illustrate the impact on the PFD. Improved models may increase the accuracy of the results.

• Calculations for imperfect tests have been conducted on a single component only; other architectures such as 1oo2, 2oo3, etc. should be analyzed more profoundly.

• The PST impact on the CCF can be analyzed further. A possible increase of the β-value for extended proof test intervals should be investigated.

• The method for estimating the contributions from the M-factors should be developed further. A detailed questionnaire would enable an easy approach for estimating a conservative yet realistic imperfect test addition.

• The PFD impact of imperfect testing should also be assessed for the logic and sensors. For illustrative reasons, the focus in this thesis has been on the safety valves only.


6 Case study

The topics discussed throughout the thesis are in this section applied to a genuine field development. The Morvin field has been chosen for this purpose, and special attention has been given to the HIPPS. To date, the contract for Morvin has not yet been awarded; hence this case study is based on the initial concept studies done by AKS and the first drafts done by Statoil.

6.1 Introduction to the case study: Morvin

The Morvin field is part of the Halten West Unit, PL134B, and is situated in block 6506/11, north of Kristin and east of Smørbukk. The reservoir pressure is 818 bar and the reservoir temperature 162 °C; the field will thus be developed as a subsea HP/HT (high pressure/high temperature) field.

The ownership interests per February 2007:

• Statoil ASA 50 %

• Norsk Hydro ASA 14 %

• ENI AS 30 %

• Total 6 %

Statoil is the responsible company for the development phase.

6.2 Requirements from the customer

The main structures are two templates, two manifolds and four X-mas trees. The field will be produced from two 4-slot templates with two wells on each, tied back to Åsgard B through a 10.5” inner diameter pipeline (Statoil, 2007c). Since it is an HP/HT field, a HIPPS (high integrity pressure protection system) will be installed on the manifold, enabling the pipeline and riser to be designed for 390 bar while the shut-in pressure is 715 bar.

Statoil defines safety requirements for the emergency shutdown, the process shutdown and the HIPPS specifically (Statoil, 2007a). The HIPPS is a kind of SIS and must comply with the IEC 61508 standard. The safety requirements given by Statoil for the HIPPS are quoted in Table 11.

Note that the closing time for the valves should be calculated for each specific field, as the

required time to close may vary (Patni & Davalath, 2005).


Table 11, Morvin HIPPS requirements (Statoil, 2007a)

• Definition of safety function: closing valves upon high pressure in the production header.

• Definition of functional limits: the function includes the subsea pressure transmitters, logic and valves.

• Equipment under control (EUC): the EUC is defined as the flowline and riser.

• Safe state of the function: safe state is when one of the valves is closed.

• SIL requirement: SIL 3; the PFD shall be less than 5.0·10⁻⁴. PFD allocation: initiator < 35 %, logic < 15 %, final element < 50 %.

• Max allowed response time: closing time shall be less than 13 seconds, including signal polling time.

• Other performance measures: internal leak shall be 0 kg/s at FAT.

• Operational requirements: the status/position of all safety critical components in the function shall be available at any time.
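The SIL requirement and its subsystem allocation translate directly into PFD budgets per subsystem. A minimal sketch, assuming the SIL 3 target of 5.0·10⁻⁴ and the allocation shares quoted in Table 11:

```python
# PFD budget per subsystem from the overall target and the allocation
# shares in Table 11 (target value as reconstructed from the thesis).
PFD_TARGET = 5.0e-4  # SIL 3 requirement for the HIPPS function

ALLOCATION = {"initiator": 0.35, "logic": 0.15, "final element": 0.50}

budgets = {part: share * PFD_TARGET for part, share in ALLOCATION.items()}
for part, budget in budgets.items():
    print(f"{part}: PFD < {budget:.2e}")
```

Any candidate design then has to demonstrate, per subsystem, a PFD below its share of the overall budget.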

Along with the requirements directed at the physical structures, there are several other requirements related to activities, testing and documentation. Regarding safety and reliability analyses, the following is required as a minimum:

• HAZIDs; including hazard register, according to ISO17776

• HAZOPs; Hazard and Operability analyses

• SAZOPs; Safety and Operability analyses

• FMECAs; Failure Mode Effect and Criticality Analyses

• RAMs; Reliability, Availability and Maintainability Analyses

• Uncertainty and risk analyses, including Quantitative Risk Analyses

• Documentation and analyses of SIS as required by IEC 61508 or IEC 61511, as interpreted in the OLF 070 guideline.

The documentation requirements for compliance with the IEC 61508 standard are specified as follows (Statoil, 2007b):

• Safety Requirements Specification (SRS)

• Safety Analysis Report (SAR)

• Documentation of SIL

• Proven in use

The documentation requirements show the emphasis put upon a systematic approach to assessing the risks associated with the system. Note that the requirements quoted here are only those directly related to safety and reliability.


6.3 HIPPS

The HIPPS may be installed topside or subsea, on an X-mas tree, manifold or pipeline end terminal. The HIPPS provides a pressure break between the subsea systems rated to full shut-in pressure and the flowline and riser rated to a lower pressure (Patni & Davalath, 2005). An example of a HIPPS schematic is given in Figure 32 (KOP, 2004); as shown, the HIPPS basically consists of two safety valves in series and redundant pressure transmitters. One HIPPS configuration is normally placed on each header on the manifold, securing redundancy for the function.

Figure 32, HIPPS schematic (KOP, 2004)

There are several advantages of implementing HIPPS subsea, among others (adapted from Patni & Davalath, 2005):

• Lower installation cost of flowlines and risers due to lighter components

• Reduced cost of HP/HT risers

• The larger-bore flowlines facilitate early payback of the installation cost due to a higher flow of oil and gas

• The larger-bore flowlines facilitate a higher flow and thus a higher temperature, which is positive for the cool-down time and the danger of creating hydrates

• The larger flow area allows the field to be abandoned at a lower reservoir pressure


6.3.1 HIPPS testing

The testing methods described below are adapted from those used for the HIPPS valves on the Kristin field (2003).

Proposed method for FT (no leakage test) of HIPPS valves

Removing power from one pressure sensor produces a high pressure reading that contributes to the 2oo4 voting. Removing the power from an additional pressure sensor causes the valves to close. To test the sensor function, fluid can be injected in order to reach the trip point. If the sensors vote ‘high pressure’, the function is confirmed. In this test all the components are tested: sensors, logic and final element.

Proposed method for leakage test of HIPPS valves

All wells have to be closed in order to perform this test. The test is conducted by injecting fixed volumes of methanol to a preset pressure; the decay in pressure is then monitored to determine the methanol leakage rate. This leakage can be converted to the equivalent gas/petroleum leakage.

Proposed method for PST of HIPPS valves

The test is initiated by the safety and automation system, which sends a PST command to the selected HIPPS valve. The solenoid is de-energized, bleeding off the hydraulic fluid and causing the HIPPS valve to move towards the closed position. After a pre-set time the solenoid is re-energized, hydraulic pressure is restored and the valve returns to the open position. In this test only the final element is tested: solenoid, control valve, actuator, valve and position indicator.


6.4 SIL rating

The reliability block diagram for the HIPPS is illustrated in Figure 33.

[Diagram: the four pressure sensors in a 2oo4 voting structure, logic units 1 and 2 in parallel, and valves 1 and 2 in parallel]

Figure 33, HIPPS reliability block diagram for the Morvin field development

Assuming the same results as the FMEA performed for the Kristin project (AKS, 2002), the data for the HIPPS valve are as described in Table 12. The failure rate includes all the final elements, such as the actuator and solenoid. The sensor and logic failure data are taken from a topside HIPPS example in the PDS data handbook (2006).

Table 12, Morvin HIPPS case data

• Sensors (2oo4): λDU = 3.0·10⁻⁷ per hour (PDS Data, 2006), β = 3 % (PDS Data, 2006), SFF < 60 %, test interval τ = 8760 hours

• Logic (1oo2): λDU = 1.0·10⁻⁷ per hour (PDS Data, 2006), β = 2 % (PDS Data, 2006), SFF ≥ 99 %, test interval τ = 8760 hours

• Valves (1oo2): λDU = 1.01·10⁻⁶ per hour, β = 2 % (PDS Data, 2006), SFF ≥ 60 %, test interval τ = 8760 hours

It is assumed that the equipment complies with the SFF requirement given by Statoil. Considering the HFT and SFF requirements in Table 2, the 2oo4 voting on the sensors enables a SIL 3 rating, the logic enables a SIL 4 rating and the final elements allow a SIL 3 rating. Hence the system complies with the SIL 3 requirement.


1. Proof testing of the system

Using the simplified equations as before, the following PFD is obtained when only proof testing is conducted on the system:

PFD_SYS = PFD_I + PFD_L + PFD_FE

PFD_SYS = PFD_2oo4 + PFD_1oo2 + PFD_1oo2

PFD_SYS = [(1-β)·λDU·τ]³ + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2

PFD_SYS = 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ = 1.61·10⁻⁴

The PFD corresponds to a SIL 3 rating.
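As a cross-check, the simplified β-factor approximations can be evaluated with the Table 12 data. A minimal sketch (the 1oo2 and 2oo4 formula forms as reconstructed here):

```python
# Simplified beta-factor PFD approximations for the HIPPS subsystems
# (data from Table 12: failure rates per hour, tau in hours).

def pfd_1oo2(lam_du: float, beta: float, tau: float) -> float:
    # independent double failure + common cause contribution
    return ((1 - beta) * lam_du * tau) ** 2 / 3 + beta * lam_du * tau / 2

def pfd_2oo4(lam_du: float, beta: float, tau: float) -> float:
    # independent triple failure + common cause contribution
    return ((1 - beta) * lam_du * tau) ** 3 + beta * lam_du * tau / 2

TAU = 8760  # proof test interval, one year in hours

pfd_sys = (pfd_2oo4(3.0e-7, 0.03, TAU)      # pressure sensors
           + pfd_1oo2(1.0e-7, 0.02, TAU)    # logic units
           + pfd_1oo2(1.01e-6, 0.02, TAU))  # HIPPS valves
print(f"PFD_SYS = {pfd_sys:.2e}")  # ~1.6e-4, i.e. within the SIL 3 band
```

Note how the common cause terms β·λDU·τ/2 dominate each redundant subsystem, which is why the β-factor discussion matters so much for the final result.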

2. Case A, PTIF addition

Corresponding to the PDS method as explained in section 3.2.1:

PFD_SYS_PDS = CSU_SYS = PFD_SYS + P_TIF contribution

PFD_SYS_PDS = [(1-β)·λDU·τ]³ + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2 + β·P_TIF

PFD_SYS_PDS = 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ + 0.02·1.0·10⁻⁵ = 1.62·10⁻⁴

3. Case B, imperfect test addition Pimp

The PFD_diff is added corresponding to case B in section 3.2.2, with the same P_imp value:

PFD_SYS = PFD_SYS + PFD_diff

PFD_SYS = [(1-β)·λDU·τ]³ + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2 + [(1-β)·λDU·τ]²/3 + β·λDU·τ/2 + β·P_imp

PFD_SYS = 3.94·10⁻⁵ + 9.01·10⁻⁶ + 1.13·10⁻⁴ + 0.02·3.74·10⁻² = 9.09·10⁻⁴

4. Implementation of PST

Assuming a PST interval of τ_PST = 730 hours (monthly testing) and a PST coverage of θ_PST = 60 %:

PFD ≈ PFD_PT + PFD_PST

PFD_SYS = [(1-β)·λDU·τ_PT]³ + β·λDU·τ_PT/2 + [(1-β)·λDU·τ_PT]²/3 + β·λDU·τ_PT/2
+ (1-θ_PST)·([(1-β)·λDU·τ_PT]²/3 + β·λDU·τ_PT/2)
+ θ_PST·([(1-β)·λDU·τ_PST]²/3 + β·λDU·τ_PST/2)

PFD_SYS = 3.94·10⁻⁵ + 9.01·10⁻⁶ + 4.56·10⁻⁵ + 4.53·10⁻⁶ = 9.85·10⁻⁵
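The valve contribution above can be sketched in code: the PST-covered fraction θ of the dangerous undetected failures is effectively tested every τ_PST, while the remainder waits for the full proof test (same 1oo2 approximation as before; data from Table 12):

```python
# Valve (1oo2) PFD with partial stroke testing: the fraction theta of
# dangerous undetected failures is revealed at each PST (tau_pst), the
# rest only at the full proof test (tau_pt).

def pfd_1oo2(lam_du: float, beta: float, tau: float) -> float:
    return ((1 - beta) * lam_du * tau) ** 2 / 3 + beta * lam_du * tau / 2

LAM_DU, BETA = 1.01e-6, 0.02          # HIPPS valve data, Table 12
TAU_PT, TAU_PST, THETA = 8760, 730, 0.60

pfd_valves = ((1 - THETA) * pfd_1oo2(LAM_DU, BETA, TAU_PT)
              + THETA * pfd_1oo2(LAM_DU, BETA, TAU_PST))
print(f"valves with PST: {pfd_valves:.2e}")  # ~5e-5, versus ~1.1e-4 without PST
```

The improvement is bounded by the non-covered fraction (1-θ), which still sees the full one-year interval: this is why increasing the PST frequency alone cannot push the valve PFD below (1-θ) times its proof-test value.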


5. PFD for HIPPS including PST, PTIF and Pimp

Including all the cases described above yields:

PFD ≈ PFD_PT + PFD_PST

PFD_SYS = [(1-β)·λDU·τ_PT]³ + β·λDU·τ_PT/2 + [(1-β)·λDU·τ_PT]²/3 + β·λDU·τ_PT/2
+ (1-θ_PST)·([(1-β)·λDU·τ_PT]²/3 + β·λDU·τ_PT/2)
+ θ_PST·([(1-β)·λDU·τ_PST]²/3 + β·λDU·τ_PST/2)
+ β·P_TIF + β·P_imp

PFD_SYS = 3.94·10⁻⁵ + 9.01·10⁻⁶ + 4.56·10⁻⁵ + 4.53·10⁻⁶ + 0.02·1.0·10⁻³ + 0.02·3.74·10⁻² = 8.67·10⁻⁴
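The stacked contributions of case 5 can be summarized numerically; a sketch using the intermediate results above, with the P_TIF and P_imp values as read from this case:

```python
# Case 5: proof testing + PST for the valves, plus the test-independent
# (P_TIF) and imperfect-test (P_imp) additions, each weighted by beta.
BETA = 0.02
PFD_PT_PST = 9.85e-5   # case 4 result: proof test + PST
P_TIF = 1.0e-3         # test-independent failure probability (case value)
P_IMP = 3.74e-2        # non-testable (imperfect test) probability (case value)

pfd_all = PFD_PT_PST + BETA * P_TIF + BETA * P_IMP
print(f"PFD with all contributions = {pfd_all:.2e}")
# The beta * P_imp term dominates: still SIL 3, but close to the 1e-3 limit.
```

The sketch makes the ranking explicit: the imperfect-test addition β·P_imp is an order of magnitude larger than everything the PST buys back.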

Comments to the results

The results are gathered in Figure 34, and all the approaches yield a SIL 3 rating. The PST could potentially increase the rating to SIL 4, but the other SIL requirements, SFF and HFT, would then have to be improved to the SIL 4 level as well.

[Bar chart omitted: PFD on a 0 to 0.001 scale for the five calculation approaches 1. PT, 2. P_TIF, 3. P_imp, 4. PST and 5. All, Morvin HIPPS case]

Figure 34, PFD results for the different calculation approaches

It is evident that special attention should be paid to discovering the non-testable part of the system, as this has a great impact on the PFD. Introducing PST as a means to decrease the PFD has practically no impact when P_TIF and P_imp are included in the calculations.


7 Concluding remarks

SIL rating is a common requirement for subsea petroleum systems, making it interesting to evaluate the assumptions that form the basis for the calculations. The assumption that a component is “as good as new” after each proof test, meaning that the unavailability of the component is reduced to zero, has been the subject of assessment. The effect of an imperfect test, one where the unavailability is not reduced to zero, has not been discussed to any great extent in the literature. Hence the author has aimed to define and analyze the effect of imperfect testing.

An imperfect test was classified according to two dimensions:

1. The test does not cover all possible failures – inadequate test method.

2. The test does not detect all the failures – unsuccessful test.

The reasons for imperfect tests were related to the five M-factors: method, machine, milieu, man-power and material. It has been shown that the PFD impact of imperfect tests can be significant. While the PDS-proposed P_TIF value hardly makes any impact, an imperfect test with a high proportion of non-testable failures proved capable of changing the SIL rating of the system. As it may be difficult to determine the exact percentage that is non-testable for a system, a method based on the M-factors was proposed to facilitate such estimation.

PST has been introduced in order to reveal failure modes that could previously only be revealed through tests requiring process shutdown. A successful implementation may improve the SIL rating of the system. The use of PST in subsea petroleum production has so far not been common. Several of the arguments for and against implementing PST in subsea equipment have been assessed.

A tentative PST coverage factor was set to 62 %, based on a failure mode assessment of gate valves and OREDA data. The result is in accordance with former research. The PST coverage for the dangerous failure modes FTC, LCP, DOP and ELP could not be justified quantitatively, as the production companies do not provide such detailed information. The coverage may differ depending on the valve type in question, its design and the production environment.

It has been argued that PST leads to an increase in the ST rate, assuming that if the valve starts to move it will continue to the closed position. The likely reasons for such an event were assessed in a Bayesian belief network, which demonstrated the need for the right equipment. New devices such as smart positioners and digital valve controllers have been introduced for the purpose of PST, reducing the human interference in PST and thus the reasons for ST.

PST is by some implemented in order to justify extended proof test intervals. As CCF are those failures that happen within the same proof test interval, an extension of the interval could lead to more failures being classified as CCF. In such situations, it should be discussed


whether the β-factor should be increased to realistically reflect the PFD impact this may have.

Another argument for implementing PST has been the opportunity to reduce the HFT, since the SFF is increased by detecting more dangerous undetected failures and converting them into dangerous detected failures. As PST does not fulfil the criteria for being a diagnostic test, it has been argued that PST should not be used to affect the SFF and hence cannot be an argument for a reduction in the HFT (McCrea-Steele, 2006).

Especially for components with higher failure rates, from λDU = 1.0·10⁻⁶ per hour and above, investing in PST can be recommended. The case studies showed that achieving the exact PST coverage was less important than the test frequency: the positive PFD impact was greater when the tests were carried out more often than when the coverage was improved by an additional 10 %. On the contrary, reducing the non-testable part by 10 % gave a greater improvement of the PFD than obtaining both higher PST coverage and shorter test intervals. Hence the focus should be on diminishing the reasons why a test may be unsuccessful.

Throughout the thesis it has become evident that the assumptions behind reliability calculations need to be assessed more closely. The imperfect test case showed that ignoring the estimation of non-testable failures could lead to an inaccurate PFD result. As the use of SIS develops into one of the standard methods for reducing risk in petroleum production, improving the quality of these calculations is highly relevant.


REFERENCES

Articles and textbooks

Ali, R. 2004. Problems, concerns and possible solutions for testing (and diagnostic

coverage) of final control element of SIF loops. FIELDVUE Business, USA.

Bak, L. 2007. Personal communication 2nd of May 2007. Sandvika, Norway.

Beurden, I. & Amkreutz, R. 2003. The effects of partial valve stroke testing on SIL level.

exida.com.

Goble, W.M. 2003. Estimating the Common Cause Beta Factor. Exida.com, USA.

Haddon, W. Jr. 1973. Energy damage and the ten countermeasure strategies. Human

Factors, 15(4):355-66.

Hovden, J., 2003. Theory formations about the “Risk Society”. NoFS XV, Karlstad, Sweden.

Lundteigen, M. and Rausand, M., 2006. Assessment of Hardware Safety Integrity

Requirements. Proceedings of the 30th ESReDA Seminar. NTNU,Trondheim-Norway.

Lundteigen, M. and Rausand, M. 2007. The effect of partial stroke testing on the reliability

of safety valves. NTNU, Trondheim-Norway.

McCrea-Steele, R. 2005. Partial Stroke Testing. Implementing for the Right Reasons. Paper

at ISA EXPO 2005, Chicago.

McCrea-Steele, R. 2006. Partial Stroke Testing. The Good, the Bad and the Ugly. Premier

Consulting Services, USA.

Metso automation, 2002. Comparison between testing methodologies to achieve the

required SIL level. Application report 2726/01/02.

Rausand, M. 2007. Personal communication 10th of April 2007. Trondheim, Norway.

Rausand, M. and Høyland, A., 2004. System Reliability Theory. Second edition. John Wiley

& Sons, Inc., Hoboken, New Jersey.

Sangesland, S. 2007. Drilling and completion of subsea wells. Course compendia, NTNU,

Trondheim.

Sanguineti, L. & Sanguineti, E. 2007. Personal communication 4th of May 2007. ATV,

Colico, Italy.


Subseazone, 2007. World Subsea Production Capex. Internet:

http://www.subseazone.com/zones/subsea_home.aspx

Summers, 1998. Valve safety concerns. Letters, InTech May 1998. Internet:

http://findarticles.com/p/articles/mi_qa3739/is_199805/ai_n8800478

Summers, A. Zachary, B. 2000a. Partial-stroke testing of safety block valves. SIS-TECH

Solutions, Published in Control Engineering November 1, 2000. Internet:

http://www.controleng.com/article/CA190350.html

Summers, A. 2000b. High Integrity Pressure Protection Systems (HIPPS). SIS-TECH

Solutions, Published in Chemical Engineering Progress November, 2000.

Summers, A. & Zachary, B. 2002. Improve Facility SIS Performance and Reliability. SIS-

TECH Solutions, published in Hydrocarbon Processing, vol. 81, number 10, pp. 71-74

October 2002.

Velten-Philipp, W. and Houtermans, M. 2004. The effect of diagnostic and periodic testing

on the reliability of safety systems. Software and Information Technology (ASI),

Cologne-Germany.

Willem-Jan, N. & Rens, W. 2005. Partial Stroking on fast acting applications. Mokveld

Valves, Gouda, The Netherlands.

Standards & Guidelines

Activities regulations, 2002. The Petroleum Safety Authority Norway (PSA).

exida, 2003. Safety equipment reliability handbook. exida.com. L.L.C. PA, USA.

Facilities regulations, 2001. The Petroleum Safety Authority Norway (PSA).

Framework regulations, 2001. The Petroleum Safety Authority Norway (PSA).

IEC, 2002. Functional safety and IEC 61508. A basic guide.

IEC 61508 Standard, 2002. Functional safety of electrical/electronic/programmable

electronic safety-related-systems. Part 1-7.

IEC 61511 Standard, 2004. Functional safety. Safety instrumented systems for the process

industry sector. Part 1-3.


OLF 070, 2004. Application of IEC 61508 and IEC 61511 in the Norwegian Petroleum

Industry. OLF, Rev. 02, 10.29.2004.

OREDA, 2002. Offshore Reliability Data. 4th Edition. SINTEF, Trondheim-Norway.

OREDA, 2007. OREDA homepage. Internet: http://www.sintef.no/static/tl/projects/oreda/

PDS Data Handbook, 2006. Reliability Data for Safety Instrumented Systems. 2006

Edition. SINTEF, Trondheim-Norway.

PDS Method Handbook, 2006. Reliability Prediction Method for Safety Instrumented

Systems. 2006 Edition. SINTEF, Trondheim-Norway.

Other documents

AKS, 2007. Dalia X-mas Tree. Internal document Aker Kværner Subsea.

KOP, 2005. IEC 61508 / IEC 61511 - SIL course, module 1. Presentation for training purposes.

KOP, 2004. “There is something about Kristin”. Presentation at Society for Underwater

Technology.

KOP, 2003. Safety requirement specification, system 18, HIPPS. Doc. Number 22-KC0005-

02

KOP, 2002. FMEA report – HIPPS valve and actuator. Doc. Number C074-KOP-S-RA-0002

Ring-O, 2007. Single acting FSC actuator for 5”1/8 gate valve. Internal document Ring-O

Valves, Colico - Italy.

Statoil, 2007a. “Morvin – Safety Requirement Specification.” Doc. number TR2250.

Statoil, 2007b. IEC61508/61511 Compliance. Doc.number TR2249

Statoil, 2007c. Delivery of subsea production system. Scope of work. Frame agreement

no.4600004645.


ANNEX A, XMT

[Figure: Horizontal XMT (Sangesland, 2007)]

[Figure: Conventional XMT (Sangesland, 2007)]