Functional Test Coverage Assessment Project Board Test Workshop 2009


Page 1: Functional Test Coverage Assessment Project Board Test Workshop 2009

iNEMI Confidential for member organizations only

Functional Test Coverage Assessment Project

Board Test Workshop 2009

Project Chair: Tony Taylor, Intel

9/17/09

Page 2: Functional Test Coverage Assessment Project Board Test Workshop 2009


Background

• Functional test coverage assessment methods lack consistency throughout the test industry due to:
– Wide variation of test equipment
– Differing product software environments
– Natural obscurity of product hardware beneath a product’s operating system

• Shrinking form factors and increasing component density result in the loss of test point access, which compromises the effectiveness of ICT/MDA*

• More emphasis is being placed on the defect coverage capability of functional test, as well as the need to overlay functional test coverage with other test stage coverage

• The purpose of this project is to create a functional test coverage assessment method that enables consistency of test coverage assessment between different:
– Test Environments
– Test Revisions
– Assessors

* In-Circuit Tester / Manufacturing Defect Analyzer


Page 3: Functional Test Coverage Assessment Project Board Test Workshop 2009


Participants


Page 4: Functional Test Coverage Assessment Project Board Test Workshop 2009


Contributors

• Agilent: Ken Parker
• Asset: Arden Bjerkeli
• Boston Scientific: Jim Meyer, Max Cortner
• Cisco: Thomas Wong
• Dell: Brian McGonagle
• HP: Carlos Michel
• Huawei: Victor Chen
• Intel: Tony Taylor

Page 5: Functional Test Coverage Assessment Project Board Test Workshop 2009


Phase 1

Uses of Coverage Information

Page 6: Functional Test Coverage Assessment Project Board Test Workshop 2009


Who are the consumers of test coverage information?

Uses of Coverage Information


• Test: Scope the extent of test development, provide input for understanding the risk of coverage gaps, communicate test development progress, and understand the cost of test

• Designer: Plan DFT in designs, understand the impact of design decisions on the test and manufacturing process, or root-cause product issues

• Quality: Provides input that helps determine product quality risks, or to monitor quality improvement efforts

• Management (Test, Product, Marketing, Quality): Understand test development progress, test equipment requirements, and input that helps in understanding manufacturing risks

• Business Partner: OEMs, ODMs, CMs, design partners, and manufacturing partners, to communicate requirements, expectations, and scope of work

• Customer, Stakeholder, or Regulatory Organization: Know that a consistent and robust assessment process is utilized

Page 7: Functional Test Coverage Assessment Project Board Test Workshop 2009


What are the uses of test coverage information?

Uses of Coverage Information


• Line layout planning, test equipment selection and end-to-end test strategy

• Predictive coverage generation to indicate what the final test coverage will be at the end of the test development effort

• Incremental test development tracking to monitor test development progress

• Determining whether additional test development is approaching the point of diminishing returns

• Establishing coverage baselines with external partners, such as contract manufacturers or ODMs

• Providing a metric for quality goals and continual improvement (assuming that a higher coverage rating correlates to higher product quality)

• Enabling adaptive test strategies in response to disruptive technology changes

Page 8: Functional Test Coverage Assessment Project Board Test Workshop 2009


How simple or complicated does test coverage information need to be?

Uses of Coverage Information


• It depends on who is using it
• It depends on what the information is used for
• Examples of coverage summary data:

– Percentage of feature coverage can be a simple overview to grossly describe feature coverage or test development progress. Useful for people who do not need deep coverage detail

– Score of functional-only coverage

– Score of structural defect coverage

– Combined score of structural and functional

– Indication of confidence of the score, e.g., were the scores derived from defect injection or detecting actual defects in the manufacturing environment? Or were the scores assigned by looking over the test, the product spec and the schematic?

– Identification of coverage gaps or opportunities for improvement
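
To make these summary items concrete, here is a minimal sketch of how a coverage summary record could be represented in code. The field names and example values are illustrative assumptions, not something defined by the project.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical record mirroring the summary items listed above (illustrative field names).
@dataclass
class CoverageSummary:
    feature_coverage_pct: float        # simple percentage overview of feature coverage
    functional_score: Optional[float]  # functional-only (FAM) score
    structural_score: Optional[float]  # structural defect (PCOLA/SOQ) score
    combined_score: Optional[float]    # combined structural + functional score
    confidence_basis: str              # e.g. "paper study", "observation", "defect injection"
    coverage_gaps: List[str]           # identified gaps / opportunities for improvement

# Example: a functional-only assessment based on a paper study.
summary = CoverageSummary(
    feature_coverage_pct=87.5,
    functional_score=0.58,
    structural_score=None,
    combined_score=None,
    confidence_basis="paper study",
    coverage_gaps=["no measurement capability on CPU socket"],
)
print(summary)
```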

Page 9: Functional Test Coverage Assessment Project Board Test Workshop 2009

iNEMI Confidential for member organizations only

Phase 2

Defect Categorization

Page 10: Functional Test Coverage Assessment Project Board Test Workshop 2009


Defect Categorization: Overview of the Assessment Method

Example of Component Scoring Guidelines

P, Presence: Does the test determine the presence of the part? Will a test fail if the part is not present?
– Present = 1.00
– Not Present = 0.00

C, Correct: Does the test determine that the part is correct by a distinct identifier? Is this identifier unique to that part, or are there multiple SKUs of the part that cannot be uniquely identified by an identifier?
– Part is identified to be uniquely correct = 1.00
– Part is determined to be correct, but the identifier could belong to an improper part SKU = 0.50
– Part cannot be identified = 0.00

O, Orientation: Inferred when the part is functional during test and operation can only happen if the part is oriented properly or polarity is proper
– Part can be determined to be oriented properly = 1.00
– Part cannot be determined to be oriented properly = 0.00

L, Live: Is the part electrically functional for basic activity?
– Part is live = 1.00
– Part is not live = 0.00

A, Alignment: Cannot be determined at FT, other than inferring the part is aligned properly enough to be Live (includes upside-down or billboard)

– No score at FT. A = 0.00


Page 11: Functional Test Coverage Assessment Project Board Test Workshop 2009


Defect Categorization: Overview of the Assessment Method

Example of Solder Quality Scoring Guidelines

S, Shorts: Can shorts within a shorting radius be detected?
– All shorts within the determined shorting radius will result in a test failure = 1.00 (rule of thumb: adjacent pins on parts and connectors are considered within a shorting radius of 0.1”)
– Not all shorts within the shorting radius will result in a test failure = 0.50
– No shorts within the shorting radius will result in a test failure = 0.00

O, Opens: If there is an open on the pin/trace, will there be a test failure?
– Open results in a test failure = 1.00
– Open does not result in a test failure = 0.00

Q, Solder Quality: Cannot be determined at FT, other than inferring the solder quality is sufficient for the part to be Live
– No score at FT

Page 12: Functional Test Coverage Assessment Project Board Test Workshop 2009


Defect Categorization: Overview of the Assessment Method

Example of Functional Scoring Guidelines

F, Feature: Can the presence or absence of a feature be determined at FT? Any sub-features should be entered as separate features to assess.
– Feature presence or absence can be determined at FT = 1.00
– Feature presence or absence cannot be determined at FT = 0.00

A, At-speed: Can the pin/interface/feature be tested at a minimum subset of speeds or at known breakpoints?
– Tested at min/mid/max speeds = 1.00 (if there are only one or two speeds and they are tested, assign the full score)
– Tested at a subset of min/mid/max = 0.50
– Not tested = 0.00

M, Measurement: Can a measurement be taken that confirms performance of the element (voltage, current, CRC, BERR, counts, etc.)?
– Measurement taken that confirms proper operation and board elements (e.g., resistor values, capacitor values, bit integrity in the presence of aggressor signals, etc.) = 1.00
– No measurement taken = 0.00
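
Because the guidelines on the last three slides are a fixed rubric, they can be captured directly as data. The sketch below encodes the example score values for the component (PCOLA), connection (SOQ), and functional (FAM) categories; the dictionary structure and outcome labels are my own illustration, not part of the project deliverable.

```python
# Illustrative encoding of the example scoring guidelines (score values taken from the slides).

PCOLA_GUIDELINES = {  # component-level defect categories
    "P": {"presence detected (test fails if part missing)": 1.00, "presence not detected": 0.00},
    "C": {"uniquely identified as correct": 1.00,
          "correct, but identifier could match another SKU": 0.50,
          "cannot be identified": 0.00},
    "O": {"orientation/polarity inferred from operation": 1.00, "orientation not determinable": 0.00},
    "L": {"part is live": 1.00, "part is not live": 0.00},
    "A": {"alignment not assessable at FT": 0.00},
}

SOQ_GUIDELINES = {  # connection-level defect categories
    "S": {"all shorts within radius cause a test failure": 1.00,
          "some shorts within radius cause a test failure": 0.50,
          "no shorts within radius cause a test failure": 0.00},
    "O": {"open causes a test failure": 1.00, "open does not cause a test failure": 0.00},
    "Q": {"solder quality not assessable at FT": 0.00},
}

FAM_GUIDELINES = {  # feature-level functional categories
    "F": {"presence/absence of feature determined at FT": 1.00, "not determined at FT": 0.00},
    "A": {"tested at min/mid/max speeds": 1.00, "tested at a subset of speeds": 0.50, "not tested": 0.00},
    "M": {"measurement confirms proper operation": 1.00, "no measurement taken": 0.00},
}

# Example lookup: a connection where only some shorts inside the shorting radius would be caught.
print(SOQ_GUIDELINES["S"]["some shorts within radius cause a test failure"])  # 0.50
```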

Page 13: Functional Test Coverage Assessment Project Board Test Workshop 2009


Phase 3

Assessment / Scoring Details

Page 14: Functional Test Coverage Assessment Project Board Test Workshop 2009


Using a CPU socket as a feature to assess, the following scores were assigned during assessment…

Functional Assessment


Component   Critical   F      Fcm    A      Acm    M      Mcm
Socket      0          0.75   1.00   1.00   1.00   0.00   0.00

– In this case we are doing a Feature, At-Speed and Measurement assessment of the socket

– The socket is assigned a Critical value of “0”, meaning it is in the most critical category. The assignment of values to the Critical field allows summary sorting according to the assigned critical category. There is no limit to the number of Critical categories an assessor could create; the value “0” is reserved for the most important Critical category so that the numbering of progressively lesser categories can increment upward as needed

– The Feature, or F, score is 0.75 because not every pin on the socket is testable, although the vast majority are, and the ones necessary for proper operation are tested

– Feature Confidence Margin, or Fcm, score is 1.0. The tests utilized have been through many product generations and the coverage score assigned is well understood

– At-Speed, or A, score is 1.00. Considering that processors can run at different speeds using the same socket, but only one speed is used during functional test, it might be assumed from the scoring guidelines provided by the project team that the At-Speed score should be 0.50. But the processor is not part of the testable system; it is gold hardware that is removed before the board is shipped. So if the socket is tested with the maximum-speed processor, it can receive the full score

– The At-Speed confidence margin, or Acm, is full score

– There is no direct Measurement Capability in the functional test for the socket, so no score is assigned for M or Mcm
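
A minimal sketch of how an assessment row like the one above might be stored and then sorted by critical category (0 = most critical). The class and field names, and the second row, are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical container for one FAM assessment row (illustrative names).
@dataclass
class FeatureAssessment:
    component: str
    critical: int   # 0 is reserved for the most critical category; larger values are less critical
    f: float        # Feature score
    fcm: float      # Feature confidence margin
    a: float        # At-speed score
    acm: float      # At-speed confidence margin
    m: float        # Measurement score
    mcm: float      # Measurement confidence margin

rows = [
    FeatureAssessment("Socket", 0, 0.75, 1.00, 1.00, 1.00, 0.00, 0.00),        # row from the slide
    FeatureAssessment("Debug header", 2, 1.00, 0.50, 0.00, 0.00, 0.00, 0.00),  # made-up example row
]

# Summary sorting by the assigned critical category, most critical first.
for row in sorted(rows, key=lambda r: r.critical):
    print(row.component, row.critical)
```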

Page 15: Functional Test Coverage Assessment Project Board Test Workshop 2009


The FAM score for this individual feature is determined as follows

Functional Assessment


Component   Critical   F      Fcm    A      Acm    M      Mcm
Socket      0          0.75   1.00   1.00   1.00   0.00   0.00

• Using an equal weighting of 0.333 for each functional element of FAM, the form of the calculation is:

FAM = (0.333 × F × Fcm) + (0.333 × A × Acm) + (0.333 × M × Mcm)

Or using the values from the table…

FAM = (0.333 × 0.75 × 1) + (0.333 × 1 × 1) + (0.333 × 0 × 0) = 0.58

• There are many features that do not have a feasible test that provides measurement capability. It may be desirable to produce a FA-only score, using weighting factors of 0.50 instead of 0.333, so the calculation would be…

FA = (0.5 × F × Fcm) + (0.5 × A × Acm) = 0.88
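
A minimal sketch of the two calculations above, using the socket row values. The weighted-sum form follows the slide; the helper function name is my own.

```python
# Socket row values from the table above.
F, Fcm = 0.75, 1.00
A, Acm = 1.00, 1.00
M, Mcm = 0.00, 0.00

def weighted_score(terms):
    """Sum weight * score * confidence_margin over (weight, score, cm) terms."""
    return sum(w * s * cm for w, s, cm in terms)

# FAM: equal weighting of 0.333 for each functional element.
fam = weighted_score([(0.333, F, Fcm), (0.333, A, Acm), (0.333, M, Mcm)])
print(round(fam, 2))  # 0.58

# FA-only: drop Measurement and re-weight Feature and At-speed at 0.50 each.
fa = weighted_score([(0.5, F, Fcm), (0.5, A, Acm)])
print(round(fa, 2))  # 0.88
```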

Page 16: Functional Test Coverage Assessment Project Board Test Workshop 2009


The PCOLA score for this individual component is determined as follows

PCOLA Assessment


Component   Critical   P      Pcm    C      Ccm    O      Ocm    L      Lcm    A      Acm
u9          0          1.00   1.00   1.00   1.00   0.00   0.00   1.00   1.00   0.00   0.00

• Using weighting commonly used for ICT scoring, the form of the calculation is:

PCOLA = (0.35 × P × Pcm) + (0.25 × C × Ccm) + (0.25 × O × Ocm) + (0.15 × L × Lcm) + (0.2 × A × Acm)

Or using the values from the table…

PCOLA = (0.35 × 1 × 1) + (0.25 × 1 × 1) + (0.25 × 0 × 0) + (0.15 × 1 × 1) + (0.2 × 0 × 0) = 0.75

• Similarly, SOQ scores can be calculated to assess connectivity coverage
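
The PCOLA score follows the same weighted-sum form. Below is a self-contained sketch using the u9 row and the weights printed above; actual ICT weightings may vary by organization.

```python
# u9 row values from the table above.
P, Pcm = 1.00, 1.00
C, Ccm = 1.00, 1.00
O, Ocm = 0.00, 0.00
L, Lcm = 1.00, 1.00
A, Acm = 0.00, 0.00

# Weighted sum of score x confidence margin, using the weighting shown on the slide.
pcola = (0.35 * P * Pcm) + (0.25 * C * Ccm) + (0.25 * O * Ocm) + (0.15 * L * Lcm) + (0.2 * A * Acm)
print(round(pcola, 2))  # 0.75 with these example weights
```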

Page 17: Functional Test Coverage Assessment Project Board Test Workshop 2009


Scoring Details


• After all PCOLA/SOQ/FAM elements are scored, a summary report can be generated

• The summary scores consist of:
– Feature coverage – a simple sum of all feature column scores divided by the number of feature entries

– FAM Score – an average of the FAM scores for every feature

– PCOLA scores – an average of the PCOLA for every component

– SOQ Score – an average of the SOQ score for each connection (connections can be pins, but also includes press-fit connections, ground lugs, BGA balls, chassis grounding contact, etc.)

– A total score that combines PCOLA/SOQ/FAM into a single score, in this case with equal weighting for each element of 0.333

– All scores are scaled by 100,000. This is to remove the inevitable association of coverage with percent coverage. If a percent coverage is desired, the simple Feature Coverage Percentage should suffice
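
A minimal sketch of the summary roll-up described above, computed from per-item scores. The input lists are made-up values; only the formulas (the averages, the simple feature coverage percentage, the equal 0.333 weighting, and the 100,000 scaling) come from the slide.

```python
# Made-up per-item scores standing in for a full assessment spreadsheet.
feature_scores = [1.0, 1.0, 0.0, 1.0]      # Feature (F) column, one entry per feature
fam_scores     = [0.58, 0.90, 0.33, 1.00]  # FAM score per feature
pcola_scores   = [0.75, 0.85, 0.60]        # PCOLA score per component
soq_scores     = [1.00, 0.50, 0.75, 0.25]  # SOQ score per connection (pins, press-fits, BGA balls, ...)

def average(values):
    return sum(values) / len(values)

# Feature coverage: sum of the feature column divided by the number of feature entries.
feature_coverage_pct = 100.0 * sum(feature_scores) / len(feature_scores)

fam_avg, pcola_avg, soq_avg = average(fam_scores), average(pcola_scores), average(soq_scores)

# Total score: PCOLA/SOQ/FAM combined with equal weighting of 0.333, scaled by 100,000
# so the number is not mistaken for a percentage.
total = 100_000 * (0.333 * pcola_avg + 0.333 * soq_avg + 0.333 * fam_avg)

print(f"Feature coverage: {feature_coverage_pct:.1f}%")
print(f"FAM: {100_000 * fam_avg:.0f}  PCOLA: {100_000 * pcola_avg:.0f}  SOQ: {100_000 * soq_avg:.0f}")
print(f"Total score: {total:.0f}")
```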

Page 18: Functional Test Coverage Assessment Project Board Test Workshop 2009


Scoring Details


• In addition, the summary table provides an indicator of where potential improvement is available

• The table reveals what functional test is good at (see FAM and confidence margin scores), and what functional test is bad at (see Orientation, Alignment, Quality and confidence margin scores)

• For example, the PCOLA Alignment value of 45 in cell C24 indicates that 45 components assessed were assigned a value less than a full value of 1. The next column in D24 indicates that 100% of the components assessed received an Alignment score less than 1.

• This indicates that there is a large opportunity to improve defect coverage by detecting alignment at functional test… if only it could!
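
As a sketch of the gap analysis being described, the count and percentage can be computed directly from the per-component Alignment scores; the list below is illustrative, not the actual spreadsheet data.

```python
# Illustrative Alignment (A) scores for the assessed components; at functional test these
# are typically 0.00 because alignment cannot be observed directly.
alignment_scores = [0.0] * 45  # 45 assessed components, none with a full score

below_full = sum(1 for a in alignment_scores if a < 1.0)
fraction = below_full / len(alignment_scores)

print(below_full)         # 45 components received an Alignment score less than 1
print(f"{fraction:.0%}")  # 100% of the assessed components
```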

Page 19: Functional Test Coverage Assessment Project Board Test Workshop 2009


Scoring Details


• In the summary report displayed, critical feature coverage is tallied separately from the total summary data

• In this case, critical feature coverage in cell H3 indicates the functional test package assessed achieves 99% coverage of all features flagged as critical

Note: This assessment was a Functional-only assessment, meaning the structural coverage of the functional test package was not assessed. This could be because there was excellent structural coverage provided in previous test stages so no assessment of structural coverage was necessary

Page 20: Functional Test Coverage Assessment Project Board Test Workshop 2009


Scoring Details

Confidence Margin Scores

• Confidence margin indicates how sure the assessor is that the assigned coverage score is correct

• There are three methods that can be used to assess functional test coverage:

1. A Paper Study is carried out by reviewing the tests in the test package, reviewing the product schematic, discussing operation with the design engineers, etc.
• This assessment method is ideal for a product that is still in the design stage, when there is no product to actually observe or defect-inject; in that case it is a predictive assessment
• This is the quickest assessment to perform, but it also has the least assurance of being correct, so it receives the lowest confidence margin score of 0.25

2. Assessment by observation is carried out by running tests on the actual product to ensure that test patterns run on the actual interfaces. This provides an additional level of confidence that the test is executing on the UUT, but still does not guarantee that a defect will be detected. For this reason, assessment by observation is assigned a confidence margin score of 0.5

3. Assessment by defect injection provides much more assurance that a specific defect will be detected. This type of assessment can be assigned a full confidence margin score of 1.0
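
Since each assessment method maps to a fixed confidence margin, the value can simply be looked up; a minimal sketch with illustrative names follows.

```python
# Confidence margin assigned to each assessment method (values from the slide).
CONFIDENCE_MARGIN = {
    "paper study": 0.25,
    "observation": 0.50,
    "defect injection": 1.00,
}

def apply_confidence(score, method):
    """Scale a raw coverage score by the confidence margin of the assessment method used."""
    return score * CONFIDENCE_MARGIN[method]

# Example: a Feature score of 1.0 backed only by a paper study contributes 0.25 to the weighted sum.
print(apply_confidence(1.0, "paper study"))  # 0.25
```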

Page 21: Functional Test Coverage Assessment Project Board Test Workshop 2009


Scoring Details

Confidence Margin Scores

• The three assessment methods can represent the life cycle of test development:

1. Paper Study: Before the product design is complete, a predictive assessment can be performed to plan test development activity and provide a predictive benchmark of final coverage

2. Observation: When actual product is available to run tests on, each newly developed test can be run to verify test operation by observing with test equipment (oscilloscope, logic analyzer, etc.)

3. Defect Injection: Defect injection can be performed on the product to confirm defect detection, or early production builds can be used to verify that the tests detect actual, naturally occurring defects. As the product matures and more production data is available, the confidence margin of the tests can increase

• When the same tests are used for subsequent product generations, the confidence margin of the tests on the new product can be higher

Page 22: Functional Test Coverage Assessment Project Board Test Workshop 2009


Potential Follow-on Activities


Page 23: Functional Test Coverage Assessment Project Board Test Workshop 2009


Potential Follow-on Activities (1 of 3)

• Assessment of protocol-specific coverage… coverage for free
– For example, PCIe: what coverage is guaranteed through the training procedure?
– What test capability can be invoked in the protocol at functional test, and what defect coverage does it provide?

– Enumeration, training, initialization… what’s the guaranteed coverage on standard interfaces?

– These should be portable blocks of pre-scored coverage

• Assessment of the boot process… more coverage for free before your OS is even ready


[Figure: test flow showing Boot, Init/Training/Enum, and BIST/Diag coverage leading up to FT Tests]

Page 24: Functional Test Coverage Assessment Project Board Test Workshop 2009


Potential Follow-on Activities (2 of 3)

• System stress is another capability of functional test, a factor of multiple concurrent system operation (processor, memory access, multi-core, multi-thread, maximum throughput, mass data transfer, etc.)
– Can system stress be assessed objectively and in a consistent manner for widely diverse products?

(Premise of the original functional categories of FAIM, in which I = In-parallel operation or the capability to generate system stress with simultaneous subsystem operation)


Page 25: Functional Test Coverage Assessment Project Board Test Workshop 2009


Potential Follow-on Activities (3 of 3)

• Any SW tools for functional assessment? Automated scoring process to eliminate any errors due to manual spreadsheet entry? Assessment GUI for data entry?

• Functional test diagnostics improvement
– One weakness of functional test is that tests usually operate over the product operating system
– SW layers intentionally obscure the hardware, decreasing the availability of diagnostic data upon failure
• A computer blue screen of death is an example: the operating system crashes, but few intelligible details are known concerning the root cause


Page 26: Functional Test Coverage Assessment Project Board Test Workshop 2009


www.inemi.org

Email contacts:

Jim McElroy

[email protected]

Bob Pfahl

[email protected]