2008 Annual Reliability and Maintainability Symposium (RAMS), Las Vegas, NV, USA, January 28–31, 2008

A Simple Classification Approach to Build a Bathtub

Brian Douglas Haan, Director of Research / Principal Researcher, SELF

Key Words: Reliability Data Analysis, Mixed-Weibull

ABSTRACT

The notional bathtub curve is often cited to describe how a device’s failure rate may change with age. Modeling the bathtub curve or other undulating function to capture the reliability-centric phases of life can be accomplished using the mixed-Weibull distribution. Unfortunately, fitting failure data directly to the mixed-Weibull distribution typically requires an assumption of the number of subpopulations within the distribution and difficult computations that often require the use of complex algorithms. The fitting approach described in this paper provides a tactic that can perform the fit without assuming a set number of subpopulations and can be implemented in a basic spreadsheet.

This paper begins with a brief examination of a common mixed-Weibull form. It is observed that the likelihood function of this form implicitly handles the data in aggregate – ironically not a mixture. This can be addressed with a modest adjustment but at the cost of greatly increasing the number of parameters that must be considered to fit the distribution.

Two separate derivations of the introduced approach are outlined. The first originates within an Artificial-Life framework used for constructing reliability models. Processes within this framework are taken to a conceptual limit. Addressing the computational time issues that result yields the presented approach. Because the Artificial-Life Framework tactic is still largely unproven, a second derivation based on the well-established k-means clustering algorithm is provided as an alternative. Because k-means clustering algorithms are well known, their behavior provides predictions of the behavior of the approach being introduced.

The mechanics of the approach are outlined and detailed using sample data. One simple sample set demonstrates the mechanics while a second, more contextually rich set of data illustrates a more realistic application and behavior of the approach. In each, individual reliability data are classified and subpopulations emerge to quickly estimate parameters for a mixed-Weibull distribution. Performance characteristics are noted to be very similar to the k-means algorithm. Termination requires little iteration, so even very complex mixtures can be assessed quickly. As predicted by its k-means derivation, the approach is mildly chaotic, so multiple trials may yield better solutions. Fortunately, speed and ease of implementation compensate for this shortcoming. Additionally, repeated application of the method on a set of data is shown to yield a discrete probabilistic estimate of the number of subpopulations contained within a dataset. The approach is found to be a convenient addition to the reliability analyst’s toolbox.

1 INTRODUCTION – THE MIXED-WEIBULL

The flexibility of the Weibull distribution is well known in the reliability discipline. Because this single distribution can often be successfully applied to model subjects that have increasing, decreasing, or constant failure rates, it is well suited to characterize a broad spectrum of failure modes. Mixing multiple Weibull functions together to form the mixed-Weibull extends this distribution’s application to characterize failure rates that fall and rise as the device passes through the various phases of its life. This non-monotonic behavior is popularly described using the notional bathtub curve but may just as legitimately be described by some other function that undulates over time. By ganging multiple Weibull distributions, even the most slithery functions can often be tamed.

In practice, application of the mixed-Weibull to model an undulating failure rate directly from failure data is not without challenges. First, the analyst is typically required to decide how many Weibull functions to include in the mix before fitting the data. Additionally, fitting data to the mixed-Weibull form often requires computationally involved algorithms that set practical limits on the number of subpopulations that may be considered.

1.1 The Common Mixed-Weibull Form

The mixed-Weibull function is expressed as the superposition of multiple weighted Weibull functions. Each Weibull function suggests a different subpopulation with the accompanying weight, P, indicating the percentage of the overall population being represented. If considering a mixture of 2-parameter Weibull functions, each subpopulation will be described by three parameters identifying the relative size of the subpopulation and the shape and scale of its characteristic reliability function. Written as a failure density function, a mixed-Weibull consisting of N_Sub subpopulations can be expressed as in Equation 1.

f(t) = \sum_{i=1}^{N_{Sub}} P_i \, \frac{\beta_i}{\eta_i} \left( \frac{t}{\eta_i} \right)^{\beta_i - 1} e^{-\left( t / \eta_i \right)^{\beta_i}}    (1)
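As a concrete reference, Equation 1 transcribes directly into a short function. This is a minimal sketch in Python; the function name and parameter values are illustrative, not taken from the paper:

```python
import math

def mixed_weibull_pdf(t, portions, shapes, scales):
    """Equation 1: f(t) = sum_i P_i * (beta_i/eta_i) * (t/eta_i)**(beta_i-1) * exp(-(t/eta_i)**beta_i)."""
    f = 0.0
    for P, beta, eta in zip(portions, shapes, scales):
        f += P * (beta / eta) * (t / eta) ** (beta - 1.0) * math.exp(-((t / eta) ** beta))
    return f

# Example: a two-subpopulation mixture (infant mortality plus wear-out).
print(mixed_weibull_pdf(5.0, portions=[0.3, 0.7], shapes=[0.5, 3.0], scales=[2.0, 100.0]))
```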


To estimate the shape, scale, and portion parameters for Equation 1, the log-likelihood function (Equation 2) may be used. (Note: For legibility, Equation 2 omits terms for suspended and interval data. Consideration of these terms does not change the conversation.) Reviewing the log-likelihood function reveals that the portion parameter P_i is applied uniformly to every failure point T_j. This subtlety disconnects from reality if the subpopulations are intended to model phases of life. A sample that fails early in life (for example) is much more likely to belong to a short-lived subpopulation than a longer-lived one. To constrain data into phases of life, this likelihood function needs to be adjusted.

\Lambda = \sum_{j=1}^{N} \ln \left[ \sum_{i=1}^{N_{Sub}} P_i \, \frac{\beta_i}{\eta_i} \left( \frac{T_j}{\eta_i} \right)^{\beta_i - 1} e^{-\left( T_j / \eta_i \right)^{\beta_i}} \right]    (2)
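A direct transcription of Equation 2 into code makes the aggregate handling visible: each portion P_i weights the density at every failure time identically. A minimal sketch (suspensions and interval data omitted, as in the paper; names are illustrative):

```python
import math

def mixture_log_likelihood(times, portions, shapes, scales):
    """Equation 2: sum over failure times of the log of the mixture density."""
    total = 0.0
    for T in times:
        # Every P_i weights the density at T identically, regardless of
        # whether T looks like an early-life or a wear-out failure.
        mix = sum(P * (b / e) * (T / e) ** (b - 1.0) * math.exp(-((T / e) ** b))
                  for P, b, e in zip(portions, shapes, scales))
        total += math.log(mix)
    return total
```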

1.2 A Modestly Altered Mixed-Weibull Form

The alteration required to allow the likelihood function to constrain the failure data to phase-of-life comes by adding the capability to focus data within a particular phase-of-life into specific subpopulations. This can be accomplished by simply making the weighting factor P dependent on both the sub-population and the time associated with the individual data point to be fitted. Notationally, this may be accomplished by adding a subscript to P to allow its value to change given different data (Equation 3). In this form the total allocation of a failure point across all populations must be unity, as shown in Equation 4. The portion variable aggregated across all failure data can now be determined from Equation 5.

\Lambda = \sum_{j=1}^{N} \ln \left[ \sum_{i=1}^{N_{Sub}} P_{i,j} \, \frac{\beta_i}{\eta_i} \left( \frac{T_j}{\eta_i} \right)^{\beta_i - 1} e^{-\left( T_j / \eta_i \right)^{\beta_i}} \right]    (3)

\sum_{i=1}^{N_{Sub}} P_{i,j} = 1    (4)

P_i = \frac{1}{N} \sum_{j=1}^{N} P_{i,j}    (5)
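A small sketch of Equations 4 and 5 as operations on the portion matrix may help fix the notation; the matrix values are illustrative:

```python
# Each column j of the portion matrix allocates failure point j across the
# subpopulations and must sum to one (Equation 4); the aggregate portion of
# subpopulation i is its row mean (Equation 5). Matrix values are illustrative.
P = [
    [1.0, 1.0, 0.0, 0.0],  # subpopulation 1's allocation of points 1..4
    [0.0, 0.0, 1.0, 1.0],  # subpopulation 2's allocation of points 1..4
]
N = len(P[0])
assert all(abs(sum(row[j] for row in P) - 1.0) < 1e-9 for j in range(N))  # Eq. 4
portions = [sum(row) / N for row in P]                                    # Eq. 5
print(portions)  # [0.5, 0.5]
```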

The notational similarity between Equation 2 and Equation 3 belies the impact of the change. The number of parameters to be determined when maximizing Equation 2 is 3N_Sub. The number of parameters to be determined during the fitting process now tops out at (N+2)N_Sub. For example, with the N = 79 failure points and sixteen candidate subpopulations used in Illustration 2 below, Equation 2 involves 48 parameters while Equation 3 tops out at 1,296. This greatly increased number of parameters suggests that Maximum Likelihood Estimation (MLE) and other common tactics will not provide practical solutions for Equation 3. An alternative tactic is required.

2 BUILDING THE DATA CLASSIFICATION METHOD

2.1 The Classification Strategy

This discussion assumes a situation where the analyst wishes to model all phases of the subject’s life directly from life data, without the benefit of any additional information about the composition within the failure data. If it is briefly assumed that the analyst also has failure-analysis information, then the analyst has a new option for building the mixed-Weibull. The analyst can first use the failure-analysis information to classify the data. Each class can then be independently fit to a failure distribution. Once all classes have been individually fit, they can be combined into a mixed distribution. The derivations in the following sections will apply the same strategy but will classify data based on the time-to-failure data alone, assuming that circumstance does not provide the analyst an opportunity to perform a failure analysis.

2.2 The Artificial-Life Tactic to Classify Life Data

There is, in fact, an existing tactical solution to finding parameters to fit data to Equation 3. By using what was termed an Artificial-Life framework, it has been demonstrated that classes encoded within failure data can be feasibly extracted from life data alone [1]. The approach relies on massive parallelism and the interaction of competition, adaptation, and mutation processes to equitably portion out failure data to multiple agents. Over multiple iterations the agents in this process coalesce around signatures encoded by subpopulations within the data, allowing the subpopulations to be identified and defined.

The solution provided by the artificial-life approach is far from concise. The verbose nature is rooted in its design to equitably portion the failure data to all agents used in the framework. In terms of Equation 3, N_Sub would relate to the number of defined agents and is quite large (128 in the cited reference – anecdotally, frameworks may have tens of thousands of agents!). The framework causes Equation 5 to approximate the reciprocal of the total number of agents used in the computation (Equation 6). As a consequence, a single real subpopulation in the data must be modeled by multiple agents sharing the same shape and scale parameters.

P_i = \frac{1}{N} \sum_{j=1}^{N} P_{i,j} \approx \frac{1}{N_{Sub}}    (6)

Concise classification requires a break in the equity detailed in Equation 6. This can be accomplished by making individual data points indivisible and placing the artificial-life framework into a “winner-take-all” situation. This situation constrains the elements of the matrix P to either unity or zero (Equation 7).

P_{i,j} \in \{0, 1\}    (7)

2.3 Defining a New Method based on Winner-Take-All

The first derivation of this study is based on forcing the constraint of Equation 7 onto an artificial-life framework. Briefly, the “winner-take-all” condition brings the framework to a limit where its processes collapse. Using the Artificial-Life Framework vernacular, agents cease to exchange local information, causing the adaptation process to stop. The structures that allowed agents to share success cease and place the computational system in a random search. In time the system reduces to a condition where a few agents possess very tight holds on significant subsets of data, leaving the majority of agents languishing in irrelevance. Considering only the successful agents in the end condition allows for a more concise description but requires considerable time to compute. If operating under the constraint of Equation 7, the Artificial-Life Framework processes need to be recast to operate more effectively.

In “Winner-Take-All” the data exchange process sublimates. Encoding agent characteristics for data exchange is therefore of no use and is discarded. Agents can now be directly defined by a shape and scale parameter. With direct access to these parameters the adaptation and mutation processes may also be discarded in favor of a direct fit to their claimed data.

Normally the massive parallelism in the cooperative system accelerates the search in solution space. In “Winner-Take-All” the benefits of massive parallelism vanish. Agents unable to state minimal claims slow the computational engine and may now be dropped to allow distribution of the data to more potent agents.

Incorporating the described changes yields a method for classifying data to fit into a mixed-Weibull. Dropping the artificial-life vernacular, the method is defined by the following steps (Figure 1); a brief code sketch follows the list.

1. Define initial estimates of shape and scale parameters for some large number of subpopulations.

2. Assign each data point to the subpopulation that predicts that point with highest likelihood.

3. Update shape and scale of each subpopulation based on its claimed data.

4. Drop subpopulations that fail to meet minimal criteria for fitting.

5. Repeat steps 2 through 4 until the system stagnates.
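A minimal Python sketch of these five steps, assuming 2-parameter Weibull fits by median-rank regression (the basic rank regression used in Illustration 1 below). The function names, initialization ranges, minimum-claim threshold, and iteration cap are illustrative assumptions, not values from the paper:

```python
import math
import random

def rank_regression_fit(times):
    """Fit a 2-parameter Weibull (beta, eta) by median-rank regression."""
    ts = sorted(times)
    n = len(ts)
    xs = [math.log(t) for t in ts]
    # Benard's median-rank approximation: F_i ~ (i - 0.3) / (n + 0.4)
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    if sxx == 0.0:                      # degenerate cluster (identical times)
        return 1.0, ts[0]
    beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    eta = math.exp(xbar - ybar / beta)  # intercept = -beta * ln(eta)
    return beta, eta

def weibull_log_like(t, beta, eta):
    """Log of the 2-parameter Weibull density at time t (t > 0)."""
    return math.log(beta / eta) + (beta - 1.0) * math.log(t / eta) - (t / eta) ** beta

def classify(times, n_start=16, min_claim=3, max_iters=50, rng=None):
    """Steps 1-5 of the method; returns [(portion, beta, eta), ...]."""
    rng = rng or random.Random(0)
    # Step 1: random initial shape/scale estimates for many subpopulations.
    subs = [(rng.uniform(0.5, 4.0), rng.uniform(min(times), max(times)))
            for _ in range(n_start)]
    prev = None
    for _ in range(max_iters):
        # Step 2: winner-take-all assignment (Equation 7) by highest likelihood.
        claims = [[] for _ in subs]
        for t in times:
            best = max(range(len(subs)), key=lambda i: weibull_log_like(t, *subs[i]))
            claims[best].append(t)
        # Step 4: drop subpopulations claiming too few points to support a fit.
        survivors = [c for c in claims if len(c) >= min_claim]
        if not survivors:               # safety net: keep the largest claimant
            survivors = [max(claims, key=len)]
        # Step 5: terminate once the assignments stagnate.
        state = tuple(tuple(sorted(c)) for c in survivors)
        if state == prev:
            break
        prev = state
        # Step 3: refit shape and scale of each surviving subpopulation.
        subs = [rank_regression_fit(c) for c in survivors]
    portions = [len(c) / len(times) for c in survivors]
    return [(p, b, e) for p, (b, e) in zip(portions, subs)]

# Illustrative usage on ten made-up failure times (not the paper's data):
data = [0.5, 0.9, 1.3, 2.1, 3.0, 40.0, 55.0, 63.0, 71.0, 90.0]
for p, beta, eta in classify(data, n_start=4):
    print(f"P = {p:.2f}, beta = {beta:.2f}, eta = {eta:.1f}")
```

Every operation in the loop is a sort, a sum, or an argmax, which is what makes a basic spreadsheet implementation plausible.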

2.4 The K-Means Inspired Tactic

The operational theory behind the Artificial-Life Framework for constructing reliability models is still under development. It may then be premature to base an analytical tactic solely on this foundation. An alternative derivation can, however, be supplied. One simple algorithm that has been in general application for more than 40 years is the k-means clustering algorithm [2]. The basic tactic behind k-means consists of the following steps:

1. Randomly place k points into the space to be clustered. Each of these points represents a centroid of a cluster.

2. Assign each data point to the group with the nearest centroid.

3. Recalculate the centroid of each group based on the revised assignment.

4. Repeat steps 2 and 3 until termination.

The function that drives the k-means is the distance function (i.e., a squared-error function) applied in step 2. To make this approach more reliability-centric, the distance function may be substituted with a Weibull likelihood function. This substitution has consequences. Shape and scale parameters will now be computed instead of a basic centroid. To ensure each subpopulation has adequate data to support a fit, a step that drops groups failing to claim an adequate number of data points is inserted between steps 2 and 3.
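The substitution can be made concrete with a small sketch: the only change to the k-means assignment step is the scoring function, which moves from squared distance to a point centroid to the log of the Weibull density under a (shape, scale) pair. Names and values below are illustrative assumptions:

```python
import math

def assign_kmeans(t, centroids):
    # Standard k-means step 2: minimize squared distance to a centroid.
    return min(range(len(centroids)), key=lambda i: (t - centroids[i]) ** 2)

def assign_weibull(t, params):
    # Reliability-centric variant: each "centroid" is a (beta, eta) pair and
    # the score is the Weibull log-likelihood, maximized rather than minimized.
    def loglike(i):
        beta, eta = params[i]
        return math.log(beta / eta) + (beta - 1.0) * math.log(t / eta) - (t / eta) ** beta
    return max(range(len(params)), key=loglike)

print(assign_kmeans(5.0, [1.0, 80.0]))                 # 0: nearest centroid
print(assign_weibull(5.0, [(0.8, 2.0), (3.0, 90.0)]))  # 0: most likely subpopulation
```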

Applying these reliability-centric substitutions to the basic k-means results in a procedure identical to that derived in the previous section from the Artificial-Life framework. This provides a degree of validation but also provides an additional benefit. The strengths and weaknesses of the k-means algorithm are very well known. A reliability classification algorithm founded on the same tactics as the k-means is likely to share similar characteristics (Table 1).

Table 1 – Some Strengths and Weaknesses of K-Means

Strengths:
o Simple implementation
o Very fast
o Procedure always terminates

Weaknesses:
o Mildly chaotic – results depend on starting conditions
o Groupings are often sub-optimal

3 THE ILLUSTRATED PROCESS

3.1 Illustration Cases

Two sample cases are now provided to illustrate the mechanics and performance of the classification method. The first illustration narrates the mechanics of the process as it is applied to a small set of data. This does not reflect the intended application of the method, but it provides a demonstration of the operational concept. The second illustration examines the behavior and output of the method using a richer data set.

3.2 Illustration 1 – Demonstration of Mechanics

This example will consider the process being applied to a set of ten time-to-failure data points. The narrative is illustrated in Figure 2. To begin the first iteration, points are randomly assigned to one of four subpopulations. This assignment provides an initial (random) guess of the values of P_{i,j} meeting the constraint of Equation 7.

[Figure 1 – Basic process as derived from either modification of a constrained A-Life Framework or alteration of the k-means clustering algorithm. Flowchart: START: randomly assign data to sub-populations → fit data for each sub-population based on assignment → drop sub-populations with too few data points → reassign data based on likelihood functions of previous fits → repeat if data assignments changed; otherwise STOP: one mixed-Weibull solution is now available.]


Weibull parameters are then determined for each subpopulation using basic rank-regression. Assignments are noted in Figure 2 by the “+”, “x”, “empty-triangle”, and “solid-square” symbols.

The likelihood functions determined in the first iteration are now used to reassign the data. As shown, the “+” subpopulation has the greatest likelihood until approximately time 8. After that time the “empty-triangle” population becomes most likely. Each data point is now reassigned to these two subpopulations accordingly. This reallocation determines the next iteration of the portion matrix P_{i,j}. Because the “solid-square” and “x” subpopulations are unable to claim a minimum number of data points, they are dropped from the process.

The method now begins its second iteration with the remaining subpopulations and repeats the process. In this second iteration it can be seen that the likelihood functions have broadened and have distributed their focus across the life of the sample data. The “+” population has selected a specialized focus on the early-life failures while the “empty-triangle” has settled on a broader focus on the remainder of the population. These likelihood functions now cross at approximately time 9. Reassigning failure data based on these likelihood functions has no effect on the allocation, so the method terminates. For reference, the resulting mixed-Weibull at each iteration is imposed on the cumulative failure charts shown in Figure 2.

3.3 Illustration 2 – Demonstration of Application

A more realistic demonstration of this approach requires a richer set of data. To create a data set for this illustration, a sampled set of published mission profiles [3] of historical deep-space and planetary probes was examined. The duration of seventy-nine (79) selected missions was determined based on the profiles and used as the life data. The data represents an eclectic mix of missions of many different vintages, countries of origin, and mission objectives. Although it would be inappropriate to apply this illustration as a model of space-exploration reliability, this contextually rich set of data should illustrate how the method’s categorization might be applied in the early stages of an analysis where the underlying data is expected to consist of a mixture.

For this examination the method will begin with sixteen sub-populations and will drop subpopulations unable to retain at least 5 data points. Given the expectation that the approach will prove mildly chaotic, it will be repeated multiple times to allow the opportunity to examine this behavior.

The first characteristic examined will be the number of iterations required to terminate the process. If little iteration is required for each trial, then the approach will be quick enough to accommodate multiple trials in an analytical setting.

[Figure 2 – Illustration of the Mechanics of the Method. In the first iteration, data points are randomly assigned to subpopulations and likelihood functions are computed. The second iteration then reassigns data to subpopulations based on the likelihood functions and repeats the process.]


Given its similarity to k-means, quick termination is expected. Repeated application to the sample data demonstrates this characteristic (histogram plotted in Figure 3). The majority (62.5%) of the applications required between three and five iterations to terminate. Roughly 5% of the trials required only a single iteration to terminate, and no trial required more than eight iterations.

As discussed in the introduction, the most common techniques for fitting data to the mixed-Weibull require the analyst to assume the number of sub-populations present before estimating parameters. In contrast, this approach requires the analyst to set only an upper limit on the number of sub-populations and the minimum criteria for retaining a subpopulation. As a result, if run multiple times, this approach builds a bounded discrete probability density function of the expected number of subpopulations (Figure 4) that can mutually exist and simultaneously meet the minimal criteria for definition in the given data set.
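A sketch of how repeated trials yield that discrete estimate, reusing the classify() sketch from Section 2.3; the trial count and seeding scheme are illustrative assumptions:

```python
from collections import Counter
import random

def subpopulation_histogram(times, trials=100, n_start=16, min_claim=5):
    """Tally how many subpopulations survive across repeated random restarts."""
    counts = Counter()
    for seed in range(trials):
        mixture = classify(times, n_start=n_start, min_claim=min_claim,
                           rng=random.Random(seed))
        counts[len(mixture)] += 1
    # Convert counts to the discrete probability estimate plotted in Figure 4.
    return {k: v / trials for k, v in sorted(counts.items())}
```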

Because the approach generates multiple solutions, the analyst has the freedom to select whichever solution appears most appropriate. If simply selecting the trial with the least error, the resulting bathtub curve consists of a mixture of six sub-populations and is shown in Figure 5 (plotted on log-log axes for clarity).

Like the common approaches of fitting data to the mixed-Weibull distribution, this model consists principally of a predictive component. There is no basis presented here to claim that the approach provides a true descriptive modeling component but, as an aside, a review of the underlying data does suggest some descriptive meaning behind these populations. With a shape factor less than unity (β=0.31) and a very small scale factor (η=0.022 days), the first subpopulation (P=6.33%) is a clear example of infant mortality. The population happens to consist entirely of launch failures and catastrophically failed orbital insertions. The second population (β=1.06, η=4.72 days, P=10.13%) consists of “random” failures during the initial on-orbit check-out of the spacecraft – typically loss after commanding a subsystem to prepare the craft for operations. The remaining populations generally consist of missions that were terminated by mission design and not attributed to any specific technical failure. Subpopulation 3 (β=1.68, η=126 days, P=29.11%) consists largely of fly-by missions to the inner planets of the late 1960s and early 1970s (e.g. Mariner 5-7). Subpopulation 4 (β=3.71, η=440 days, P=15.19%) likewise consists of missions to other inner planets but of later vintage (e.g. Vega 1, 2). The peak of the distribution describing subpopulation 5 (β=2.72, η=1850 days, P=21.52%) seems to target a set of Martian missions (e.g. Viking, Nozomi) but is otherwise the most eclectic mix. Population 6 (β=2.38, η=9370 days, P=17.72%) consists entirely of the oldest probes with extended missions to study the sun or journey beyond the solar system (e.g. Pioneers and Voyagers). Although more predictive in construction, the evidenced linkage may suggest a limited capacity for the developed approach to provide a descriptive modeling component.

[Figure 3 – Iterations Required for Termination (histogram: percentage of trials versus iterations required, 1 through 8).]

[Figure 4 – Expected Number of Subpopulations in Sample Data (histogram: percentage of trials versus predicted number of sub-populations, 4 through 7).]

[Figure 5 – Lowest-Error Bathtub Curve Generated From Sample Data (log-log plot of failure rate in failures/day versus time in days).]


4 SUMMARY AND CONCLUSIONS

An approach that allows classification directly from failure data to construct bathtub (or similar) curves was presented. The approach was derived by two separate routes. The first derivation is based on operating an Artificial-Life Framework at a conceptual limit and then modifying it for improved performance in computational time. Because the theory behind the Artificial-Life Framework is still immature and underdeveloped, an alternative derivation is provided based on the well-established k-means clustering algorithm. The k-means class of algorithms is more than 40 years old and well studied. The known behavior of the k-means algorithm predicts the characteristic behavior of the proposed classification method. These predictions were confirmed when the method was applied to a contextually rich set of sample data.

There are two principal advantages provided by the classification approach that are not generally available in other methods for fitting data directly to the mixed-Weibull distribution. First, no assumption of the number of subpopulations contained in a mixture of data is required prior to fitting the data. If run multiple times, the approach builds a probabilistic distribution to model the likely number of subpopulations in a dataset. Second, the computation is very fast and simple. As predicted by the k-means algorithm, the approach always terminates and requires little iteration to do so. The individual computations are very simple and can be implemented in a basic spreadsheet.

Also predicted by k-means is the principal disadvantage of the approach. The approach is mildly chaotic, with results depending on the initial, random assignment of data to subpopulations. Results are likely to be sub-optimal. Like the k-means algorithm, however, the simplicity and speed of the approach allow multiple trials to be performed quickly. Although not optimal, the mixed-Weibull functions generated by this approach are likely to be adequate in the context of most reliability analyses.

The approach is not intended to replace descriptive analysis and thoughtful consideration of failure data. Given its speed, ease of implementation, and anecdotal demonstration of extracting meaningful data classifications, the method provides a very convenient supplement to a reliability analyst’s toolbox.

REFERENCES

1. Haan, B.D., “A Feasibility Study of the Application of an Artificial Life Framework for Constructing Reliability Models,” Proceedings of the Annual Reliability and Maintainability Symposium, January 2007.

2. MacQueen, J.B., “Some Methods for Classification and Analysis of Multivariate Observations,” Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, 1:281-297, 1967.

3. Siddiqi, A.A., “Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000,” NASA Monographs in Aerospace History No. 24, June 2002.

BIOGRAPHY

Brian Douglas Haan Director of Research / Principal Researcher SELF P.O. Box 20846 Rochester, New York 14602

e-mail: [email protected] Brian Haan has over a decade of reliability engineering, fault-design, modeling and simulation design, conceptual operations engineering and risk management experience. This experience spans the domains of space systems, telecom, transportation, and industrial systems for both industrial and governmental customers. He has worked in the roles of reliability engineering department manager, reliability engineer, risk analyst, modeling and simulations architect, and conceptual operations engineer. Presently he serves as Executive Director of Research and Principal Researcher for an unincorporated reliability sciences research directorate.