NOT MEASUREMENT SENSITIVE

MIL-HDBK-344A
16 August 1993
Superseding DOD-HDBK-344(USAF), 20 October 1986

MILITARY HANDBOOK

ENVIRONMENTAL STRESS SCREENING (ESS) OF ELECTRONIC EQUIPMENT

AMSC N/A    FSC RELI
DISTRIBUTION STATEMENT A: Approved for public release; distribution unlimited.

DEPARTMENT OF DEFENSE, WASHINGTON DC 20301
ENVIRONMENTAL STRESS SCREENING (ESS) OF ELECTRONIC EQUIPMENT

This standardization handbook was developed by the Department of Defense with the assistance of the military departments, federal agencies, and industry. Every effort has been made to reflect the latest information on Environmental Stress Screening procedures. It is the intent to review this handbook periodically to ensure its completeness and currency. Beneficial comments (recommendations, additions, deletions) and any pertinent data which may be of use in improving this document should be addressed to: Commander, Rome Laboratory, AFMC, ATTN: ERSS, Griffiss Air Force Base, New York 13441-4508, by using the self-addressed Standardization Document Improvement Proposal (DD Form 1426) appearing at the end of this document or by letter.

FOREWORD

1. This Handbook provides techniques for planning and evaluating Environmental Stress Screening (ESS) programs. The guidance contained herein departs from other approaches to ESS in that quantitative methods are used to plan and control both the cost and effectiveness of ESS programs. Handbook procedures and methodology were developed under Rome Laboratory contractual and in-house studies. Contractual efforts were performed by the Hughes Aircraft Company of Fullerton, California, under the direction of Mr. A.E. Saari, and by Litton Systems Canada Limited of Toronto, Ontario, under the direction of Mr. R.A. Pepperall. The Handbook includes the guidance contained in the R&M 2000 ESS Policy Letter dated 25 Jun 86.

2. Environmental Stress Screening (ESS) programs, which are applied during the development and production phases, can yield significant improvements in field reliability and reductions in field maintenance costs. Application during development can reap significant savings in test time and costs as a result of eliminating or reducing the number of latent defects prior to qualification tests. The benefits for the manufacturer include: a high degree of visibility as to the sources of reliability problems in the product or process, better control of rework costs, and the opportunity to determine corrective actions which eliminate the sources of reliability problems from the product or process.

3. There are various approaches associated with the application of stress screens. Regardless of the approach used, the fundamental objective of ESS remains the same, i.e., to remove latent defects from the product prior to field delivery. The quantitative methods contained in this Handbook extend this objective by focusing on the defects which remain in the product at delivery and their impact on field reliability. The goal of ESS programs thus becomes to reduce the latent defect population, at delivery, to a level which is consistent with the reliability requirements for the product. Reduction of the latent defect population in a production lot of electronic equipment is accomplished by:

a. Use of ESS to precipitate flaws in the assembled hardware to a detectable level, coupled with the use of thorough tests to facilitate their detection and removal.

b. Use of ESS results to isolate and detect failure causes, followed by determination of appropriate corrective actions.
Effective corrective actions eliminate the source (cause) of the defect from the process or product, thereby improving manufacturing process capability.

4. General guidelines and supporting rationale in Section 4 and detailed guidelines in Section 5 provide the user with the procedures needed to plan, monitor and control the screening process so that quantitative goals can be achieved cost effectively. The six detailed procedures of Section 5 are entitled:

Procedure A - Optimizing Screen Selection and Placement
Procedure B - Estimating Defect Density
Procedure C - Estimating Screening Strength
Procedure D - Refining Estimates of Defect Density and Screening Strength
Procedure E - Monitor and Control
Procedure F - Product Reliability Verification Test (PRVT)

5. It should be noted that it is not possible to eliminate all defects from the hardware through stress screening. The vast majority of parts in the hardware will never fail throughout the life of the product. However, some fraction of the parts contain gross latent defects, tend to fail early, and thus dominate the reliability of fielded products during early life. The objective is to remove as many of the gross defects from the hardware as is technically and economically feasible so as to achieve the designed-in or required reliability. The Handbook implements these objectives through use of controls on the latent defects present in the hardware at assembly, the costs to precipitate and remove them, and the assurance needed that latent defects remaining in the hardware at delivery will allow reliability objectives to be achieved.

6. The procedures provided in the Handbook are an important aspect of a manufacturer's TQM program and philosophy. The procedures quantify some elements of customer satisfaction that are measured by cost and reliability and reflect these as factory goals and requirements that are thus meaningfully and directly related to the customer's measures of satisfaction. These factory requirements apply to all levels, from the procurement of parts and materials from vendors through all factory processes and tests, and affect both management and design philosophies. The procedures also provide management and working level groups with quantitative feedback on their performance compared with requirements and goals for continuous improvement. If problem areas or deficiencies are identified, the procedures help analyze options for defect control or prevention.

7. This revision to MIL-HDBK-344A provides the following changes, based upon a recently completed study, reference AL-TR-91-300, Vol. I, "Evaluation of Quantitative Environmental Stress Screening (ESS) Methods". The changes do not affect the basic concepts and methodology of the handbook.

a. Incoming defects per system are calculated in a manner slightly different from the original handbook. The complexity of a system is described by the number of items in various type/reliability grade categories. The defects per system are then calculated by multiplying each of these complexity values by the corresponding defect density for each category. Workmanship complexity and its defects are determined based upon the MIL-STD-2000 assembly and solder complexity numbers. This change was made to improve the accuracy of the estimated workmanship defects. The defect population (i.e., parts and workmanship) is apportioned into separate populations that are sensitive to Random Vibration (RV) and Temperature Cycling (TC) stresses.
ESS calculations are subsequently performed on these separate populations. This change was made to improve modeling accuracy and to ensure a proper balance of RV and TC screens. The defects are determined relative to the R&M 2000 stress levels. These stress levels are defined to be the reference or baseline stress levels. Defect densities for other factory ESS stress levels are determined by multiplying the reference values by an appropriate Stress Adjustment Factor (SAF). The values of field defects under different operating environments are calculated using the defect densities for that environment, e.g., AIF, etc.

b. The calculations of defects removed and defects remaining are also similar to the existing handbook in that the defects removed are calculated by multiplying the system (or assembly) defect density by the applicable screening strength. The recommended changes affect the procedure as follows:

i) The defects removed by screening are calculated relative to the baseline stress. The actual defects removed are then calculated by multiplying the removals by an appropriate stress adjustment factor.

ii) The terminology was changed from Test Strength = Screening Strength x Detection Efficiency to Screening Strength = Precipitation Efficiency x Detection Efficiency. This change was made to make the terminology more consistent and descriptive.

iii) Precipitation efficiency is determined using the same equations as those used to produce the values found in previous handbook tables (DOD-HDBK-344). The precipitation efficiency for RV, however, was modified to include an axis sensitivity factor. This change was made to improve modeling accuracy based on the axis sensitivity observed in the study.

iv) The stress parameters, e.g., Grms, temperature transition rate, etc., are defined relative to the unit under test and not the environmental chambers. The requirement for thermal and vibration surveys to determine appropriate values was also added. (Consistent with this change, the stress level in the precipitation efficiency equation may need to be rescaled.)

v) The requirement to calculate the damage factors due to the ESS was added to ensure that the ESS stress levels and duration are not destructive and do not consume a significant portion of the useful fatigue life.

c. Further changes and refinements concerned the data analysis, Statistical Process Control (SPC) procedures and the requirements for the Failure Free Acceptance Test (FFAT).

i) The procedures were modified to encourage the maximum use of observed data. Initial estimates of defect density and screening strength are made using the handbook/industry data base; however, these estimates are subsequently refined by the user based on the actual data. The methodology provided to enable the user to measure the ESS parameters (e.g., defect density, screening strength, etc.) is based on a curve fitting solution to the general ESS mathematical expression developed in Appendix A. These changes were made to eliminate the need for highly accurate data in the handbook.

ii) For analysis and modeling purposes defects are segregated into errors and defects, with defects being further subdivided into latent and patent defects. Since it is precipitated latent defects that determine the reliability in the field, it is important to distinguish between errors and defects.
Although the user must minimize and control errors, improvements in these areas do not necessarily reduce latent defects nor improve reliability.

iii) The SPC control charts used for monitoring purposes were modified to show requirements that are based on and directly related to the customer's reliability requirements. In addition, the process mean is determined using regression analysis, since the mean is expected to change as a result of corrective actions and continuous improvement. A modified form of PARETO charting is also recommended to help identify problems requiring analysis. The modification to the PARETO is to not only compare on the basis of frequency of occurrence but to relate the frequency to that expected based on the unit's complexity and the ESS predictions.

iv) The mathematical expression described in Appendix A is used to relate remaining defects (at ESS stress levels) to field reliability. This relationship requires prior knowledge of the average time constant in the field. Alternatively, if the actual stress levels are known, the precipitation efficiency equations can be directly applied. With either method, the original estimates are to be refined based on actual data.

v) The requirement for a failure free acceptance test (FFAT) was eliminated and replaced with an SPC program to measure and control remaining defects. The FFAT requirement was considered to be potentially damaging and uneconomical, and tended to be contrary to ESS and the handbook philosophy of defect elimination and control. A minimum verification test is used, however, so that ESS cannot be entirely eliminated and tests remain in place to collect SPC data.

CONTENTS

1. INTRODUCTION
1.1 Purpose
1.2 Application
1.3 General
1.3.1 What Is ESS?
1.3.2 Organization of the Handbook
1.3.3 Development and Production Phase Reliability Assurance
1.3.4 ESS Application and the Quantitative Approach
1.3.4.1 The Quantitative Approach
1.3.5 Benefits of a Quantitative Approach
1.3.6 Process Capability and Defect Density
2. REFERENCED DOCUMENTS
2.1 Government Documents
2.2 Non Government Documents
2.2.1 Other Non Government Documents
3. DEFINITIONS AND ACRONYMS
3.1 Definitions
3.2 Acronyms/Abbreviations
3.2.1 Acronyms Used In Procedure B Of Section 5
3.2.2 Other Acronyms
4. GENERAL GUIDELINES
4.1 Contractual Aspects of ESS
4.2 Relation of ESS to MIL-STD-785 Reliability Program Tasks
4.3 Subcontractor and Supplier Stress Screening
4.3.1 Screening of Spares
4.4 Planning a Stress Screening Program
4.4.1 Preparation of ESS Plans
4.4.1.1 Development Phase Plan
4.4.1.2 Production Phase Plan
4.4.2 Establishing Objectives/Goals
4.4.3 Obtaining Planning Estimates of Defect Density
4.4.3.1 Latent vs. Patent Defects
4.4.3.2 Categories of Defects
4.4.3.2.1 Screenable Latent Defects and the Field Stress Environment
4.4.3.3 Factors Which Impact Defect Density
4.4.3.3.1 Part vs. Assembly Defect Density
4.4.3.3.2 Part Level vs. Assembly Level Screening
4.4.3.3.3 Air Force R&M 2000 ESS Policy - Part Fraction Defective
4.4.3.3.4 Process Maturity and Defects
4.4.3.3.5 Packaging Density
4.4.4 Screen Selection and Placement
4.4.4.1 Precipitation Efficiency
4.4.4.1.1 Screen Parameters
4.4.4.1.2 Design Limits
4.4.4.1.3 Guidelines for Initial Screen Selection and Placement
4.4.4.1.4 R&M 2000 ESS Initial Regimen
4.4.4.2 Detection Efficiency
4.4.4.2.1 Determining Detection Efficiency
4.4.4.2.2 Power-On Testing vs. Power-Off
4.4.4.2.3 Pre/Post Screen Testing and Screening Strength
4.4.4.2.4 Production Phase - Refining Estimates From Fallout Observation
4.5 Production Phase - Monitoring, Evaluation and Control
4.5.1 Data Collection
4.5.2 Failure Classification
4.5.3 Preliminary Analysis of Fallout Data
4.5.4 Analysis of Screen Fallout Data
4.5.4.1 Use of the Mathematical Model to Evaluate Screening Results
4.5.4.2 Use of the Chance Defective Exponential Model to Evaluate Screening Results
4.5.4.3 Product Reliability Verification Test (PRVT)
4.6 Costs of ESS vs. Productivity Improvement
4.6.1 Facilities and Costs
5. DETAILED GUIDELINES
5.1 ESS Implementation Procedures
5.2 Procedure A - Optimizing Screen Selection and Placement
5.2.1 Objective
5.2.2 Methodology
5.2.3 Procedure Steps
5.3 Procedure B - Estimating Defect Density
5.4 Procedure C - Estimating Screening Strength
5.4.1 Objective
5.4.2 Methodology
5.4.3 Procedure Steps
5.5 Procedure D - Refining Estimates of Defect Density and Screening Strength
5.5.1 Objective
5.5.2 Methodology
5.5.3 Procedure Steps
5.6 Procedure E - Monitor and Control
5.6.1 Objective
5.6.2 Methodology
5.6.3 Procedure Steps
5.7 Procedure F - Product Reliability Verification Test (PRVT)
5.7.1 Objective
5.7.2 Methodology
5.7.3 Procedure Steps
APPENDIX A - Stress Screening Mathematical Model
10. General
20. Reference Documents
30. Definitions and Acronyms
40. General Mathematical Relations
40.1 Defect Density
40.2 Precipitation Efficiency
40.3 Detection Efficiency
40.4 Screening Strength
40.5 Yield
40.6 Remaining/Removed Defects
40.7 Chance Defective Exponential Model
40.8 Relating DR to Field Reliability and Failure Rate
APPENDIX B - Product Reliability Verification Test
10. General
20. Reference Documents
30. Definitions and Acronyms
40. General Mathematical Relations
40.1 Derivation
APPENDIX C - Fault Coverage Data

LIST OF TABLES

Remaining Defect Density Goals
Defect Types & Density vs. Process Maturity
Assembly Defect Types Precipitated by Thermal & Vibration Screens
Guidelines for Initial Screen Selection and Placement
R&M 2000 Environmental Stress Screening Initial Regimen
Baseline Stress Defect Density Vectors (PPM)
Microelectronic Devices Defect Density (in PPM) for Various Environments
Transistor Devices Defect Density (in PPM) for Various Environments
Diode Devices Defect Density (in PPM) for Various Environments
Resistor Devices Defect Density (in PPM) for Various Environments
Capacitor Defect Density (in PPM) for Various Environments
Inductor Defect Density (in PPM) for Various Environments
Rotating Devices Defect Density (in PPM) for Various Environments
Relay Defect Density (in PPM) for Various Environments
Switch Defect Density (in PPM) for Various Environments
Connector Defect Density (in PPM) for Various Environments
PWB Defect Density (in PPM) for Various Environments
Manufacturing Characteristics (in PPM) for Various Environments
Precipitation Efficiency Factors - Random Vibration Screens
Precipitation Efficiency Factors - Temperature Cycling Screens
Precipitation Efficiency Factors - Swept Sine Vibration Screens
Precipitation Efficiency Factors - Constant Temperature Screens
Comparison of Actuals: Planned Defect Density and Screening Strength Values
Fault Coverage vs. Test Types
Fault Coverage for Automatic Test Systems
Fault Detection for a 1000 PCB Lot Size

LIST OF FIGURES

Cross Reference of ESS Program Sequence to Handbook Procedures
Mathematical Model of an ESS Program
The Quantitative Problem
Stress Screening and Variable Relationships
Defect Categories & Product Life Failures
Fraction of Defective Assemblies vs. Remaining Part Fraction Defective
Temperature Cycling Data Fitted to the Chance Defective Exponential Model
Sample Multi-Level ESS Flow Diagram
Sample ESS Test Flow Diagram
Key to Figure 5.2
System Breakdown Chart
Unit Breakdown to Assembly Level
Sample Assembly Complexity Vector
Template to Create Complexity Vector
Sample System Complexity Matrix
Template to Create System Complexity Matrix
Sample Curve Fitting Analysis
Expected Form of Hand Plotted Defect Distribution
Breakdown of Defect Distribution Curve
Sample SPC Chart
Sample PARETO Chart
Field Failure Rate vs. Defect Density

1. INTRODUCTION

1.1 Purpose. This Handbook provides uniform procedures, methods and techniques for planning, monitoring and controlling the cost effectiveness of ESS programs for electronic equipment. It is intended to support the requirements of MIL-STD-785, Task 301, "Environmental Stress Screening", and/or MIL-STD-781, Task 401, "Environmental Stress Screening", and to implement Air Force R&M 2000 ESS recommendations and guidelines.

1.2 Application. The Handbook is intended for use by procuring activities and contractors during development and production. It is not intended that the Handbook procedures and techniques be used in a cookbook fashion. Knowledge of the equipment and the manufacturing process is essential for a properly planned and tailored ESS program. The data base needed for a systematic approach to ESS application is not fully developed. Use of the Handbook by Government procuring agencies and equipment manufacturers will foster the development of an improved and broader data base.

1.3 General. A properly applied ESS program can significantly impact the quality and reliability of electronic products delivered to the Government. ESS is interrelated with the requirements set forth in MIL-Q-9858, MIL-STD-785, MIL-STD-781, and MIL-HDBK-781. Quality Control is a manufacturing function and Reliability Engineering is a design function. Although the Quality and Reliability disciplines are related, in practice they are conducted as separate programs without common objectives. The Handbook uses the ESS program as a means for integrating Quality Control and Reliability Engineering tasks so as to assure achievement of reliability objectives during manufacture. Supporting software is available from Rome Laboratory that fully automates the detailed manual procedures contained herein.

1.3.1 What Is ESS? ESS is a process, or series of processes, in which environmental stimuli such as rapid thermal cycling and random vibration are applied to electronic items in order to precipitate latent defects to early failure. An equally important and inseparable aspect of the screening process is the testing which is done as part of the screen, so as to detect and properly identify the defects which have been precipitated to failure. The precipitation and testing process is basically a search for defects. Manufacturing techniques for modern electronic hardware consist of hundreds of individual operations and processes through which defects can be introduced into the product.
Many of the defects can be detected without the need for stress screens by use of visual inspections, functional tests and other conventional quality assurance procedures. Such defects are termed errors and are a subset of patent defects. A small percentage of latent defects remain undetected by obvious means and, if not removed in the factory, will eventually manifest as early life failures during product use. The inability to find latent defects by obvious means is a consequence of the increased complexity of modern electronic products and the processes which are used in their manufacture. ESS is the vehicle by which latent defects are accelerated to early failure in the factory. ESS can thus be viewed as an extension of the quality control inspection and testing process.

1.3.2 Organization of the Handbook. The Introduction (Section 1) outlines the purpose of the Handbook and provides general introductory remarks pertaining to the quantitative approach to ESS. Section 2 lists applicable references and Section 3 defines terms and acronyms used. Section 4 contains general guidelines and provides the rationale and background for the detailed guidelines. Section 5 contains the detailed guidelines, which are organized according to the sequence of events to be undertaken by the contractor in planning, monitoring and controlling a screening program. The detailed procedures are entitled:

Procedure A - Optimizing Screen Selection and Placement
Procedure B - Estimating Defect Density
Procedure C - Estimating Screening Strength
Procedure D - Refining Estimates of Defect Density and Screening Strength
Procedure E - Monitor and Control
Procedure F - Product Reliability Verification Test

Appendix A contains the mathematical relations and model descriptions used in the Handbook. A review of Appendix A will help the interested reader in gaining a quick understanding of the rationale and methodology of the Handbook. Appendix B provides the mathematical foundation for the Product Reliability Verification Test. Figure 1.1 shows the sequence of application of the various tasks contained in the Handbook and cross-references them to the applicable procedures of the Handbook.

Figure 1.1: Cross Reference of ESS Program Sequence to Handbook Procedures. (The figure shows the program sequence: Establish ESS Goals (Procedure A); Generate or Refine Estimates of DIN (Procedures B and D); Generate or Refine Estimates of SS (Procedures C and D); Screen Selection and Placement (Procedure A); Optimization for Cost (Procedure A); Cost Analysis (Procedure A); Fallout Analysis (Procedure D); Monitor and Control (Procedure E); Product Reliability Verification Test (Procedure F).)

The product development phase is used to experiment with stress screens, to refine the estimates of ESS parameters (DIN, SS) and to define and plan a cost effective screening program for the production phase. The incoming latent defect density is estimated (Procedure B) and screens are selectively placed at various assembly levels to develop a plan for achieving quantitative ESS goals cost-effectively (Procedure A). The ESS plan for the development phase should be submitted as part of the Reliability Program Plan (paragraph 4.4.1). An ESS plan for the production phase is submitted based upon the experimentation and analyses of cost-effectiveness (paragraph 4.4.1). After the screening program is implemented during production, the fallout from the screens is used to evaluate the screening process and to establish whether ESS program objectives are being achieved (Procedures D and E).
Figure 1.2 shows the detailed mathematical model upon which the ESS program is based. The details will be explained as the reader continues. The mathematical model can be represented by:

DREMOVED = DE*DPAT + DE*DLAT*[1 - EXP(-kt)] + DE*CFR*t     (A-9)

where:
DE = Detection Efficiency
k = Stress Constant
DPAT = Patent Defects
t = Stress Duration
DLAT = Latent Defects
CFR = Constant Failure Rate

Figure 1.2: Mathematical Model of an ESS Program
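As an informal illustration of equation (A-9), and not part of the handbook itself, the short Python sketch below evaluates the expected fallout from a single screen. The function name and the numeric inputs are hypothetical, chosen only to show how the terms of the equation combine.

```python
import math

def defects_removed(d_pat, d_lat, de, k, t, cfr=0.0):
    """Equation (A-9): expected defects removed by a screen of duration t.

    d_pat -- patent defect density entering the screen (defects per item)
    d_lat -- latent defect density entering the screen (defects per item)
    de    -- detection efficiency of the accompanying test (0 to 1)
    k     -- precipitation stress constant for the screen (1/hour)
    t     -- stress duration (hours)
    cfr   -- constant failure rate observed during the screen (failures/hour)
    """
    precipitated = d_lat * (1.0 - math.exp(-k * t))  # latent defects made patent
    return de * (d_pat + precipitated + cfr * t)

# Hypothetical planning values, not taken from the handbook tables:
print(round(defects_removed(d_pat=0.05, d_lat=0.30, de=0.90, k=0.02, t=40.0), 3))
```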
A Product Reliability Verification Test is performed, and the results are used in conjunction with data from the entire factory ESS program to provide assurance that quantitative objectives have been achieved prior to delivery to the customer (Procedure F). The quantitative goals for the screening program should be established in accordance with the methods outlined in Procedure A.

1.3.3 Development and Production Phase Reliability Assurance. ESS is not a substitute for a sound reliability program conducted during the design and development phases. The inherent reliability of the product is driven primarily by the design. However, without a viable reliability assurance program during production, the reliability which is designed into the product can be seriously degraded. An equipment will eventually pass a MIL-STD-781 reliability demonstration test, either during development or on a sample basis during production. A single equipment passing the MIL-STD-781 test does not imply that all other equipment in the production lot have the same reliability. A relatively few latent defects, contained in various equipment in the lot, can significantly reduce the field reliability, especially for equipment with high reliability requirements. A production reliability assurance program which complements the design/development reliability program is therefore essential to achieving reliability objectives. A properly planned, monitored and controlled stress screening program, structured as part of a production reliability assurance program, is the vehicle through which product reliability in manufacture can be maintained. The identification and prevention of defect causes through ESS and analysis reduces defect densities for production. This information also provides feedback to a lessons-learned data base to avoid similar deficiencies on subsequent designs or changes. The procedures are oriented toward achieving reliability objectives through use of quantitative methods for stress screening and production reliability assurance.

1.3.4 ESS Application and the Quantitative Approach. Historically there have been two basic approaches to the application of stress screens. In one approach, the Government explicitly specifies the screens and screening parameters to be used at various assembly levels. Failure-free periods are sometimes attached to the screens, as acceptance requirements, in order to provide assurance that the product is reasonably free of defects. Another approach is to have the contractor propose a screening program which is tailored to the product and is subject to the approval of the procuring activity. Although the latter approach is preferred, neither approach is adequate, since explicit objectives and the relations between the screening program and quantitative reliability requirements are not always defined. Costs are also uncontrolled, because some of the screens might be more efficiently performed at lower assembly levels, where rework costs are lower. In addition, screening levels may far exceed the design limits of the product and result in damage to the equipment.

There are several unknowns associated with the application of stress screens. How effective are the screens? What is considered acceptable or unacceptable fallout from a screen? How does the quantity of defects remaining in the equipment after delivery to the customer impact field reliability? The aforementioned ESS approaches do not fully address these questions. For example, if the screen fallout is "low", it is not known whether the equipment is "good" (i.e., defect-free) or whether the screen is not effective. On the other hand, if the fallout is "high", it is not known whether the incoming defect levels are inordinately high or whether the screen might be causing non-defectives to fail.

Screens and tests are not perfect. At each stage of manufacture where screens and tests might be applied, from device level to the final system level, escapes to the next assembly stage occur, and new opportunities for introducing defects are created. The number of latent defects which remain in the product at delivery, and their impact on field reliability, is however the primary concern.

1.3.4.1 The Quantitative Approach. The use of a quantitative approach to stress screening requires that the initial part latent defect levels, the defect levels introduced during manufacture of the product, the effectiveness of the screens, and reasonably acceptable values for the number of latent defects which remain and escape into the field be addressed. Figures 1.3 and 1.4 illustrate the quantitative aspects of stress screening. When a quantitative approach to stress screening is used, the key variables of interest are the average number of defects per product which enter the screen (DIN, comprised of latent defects (DLAT) and patent defects (DPAT and E)), the screening strength (SS), which is the product of Precipitation Efficiency (PE) and Detection Efficiency (DE), and the average number of defects per product which escape the screen/test (DREMAINING). Figure 1.4 shows the relationships between these stress screening variables.

Figure 1.3: The Quantitative Problem. (The figure poses the questions: How many incoming latent part defects? How many latent manufacturing (workmanship/process) defects? How effective are the stress screens? How many remaining latent defects? What is the impact on field reliability? How cost effective is the program?)

Figure 1.4: Stress Screening and Variable Relationships. The figure depicts a screen/test stage with screening strength SS = PE*DE, stress constant k and stress duration t, and shows the following relationships. Incoming latent defect density DLAT is precipitated with efficiency PE = 1 - EXP(-kt); the unprecipitated portion, DLAT*EXP(-kt), passes through as outgoing latent defect density. Incoming patent defect density DPAT, together with the precipitated latent defects PE*DLAT, is subject to detection and removal with efficiency DE; the undetected portion, (1 - DE)*(DPAT + PE*DLAT), passes through as outgoing patent defect density. Incoming error density E is subject to detection and removal with efficiency DE'; errors removed = DE'*E and outgoing error density = (1 - DE')*E. Thus:

DREMAINING = (1 - DE)*DPAT + (1 - DE)*DLAT*[1 - EXP(-kt)] + DLAT*EXP(-kt)
DREMOVED = DE*(DPAT + PE*DLAT)
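To make the Figure 1.4 bookkeeping concrete, the following is a minimal sketch added for illustration only (it is not handbook material): it propagates incoming latent and patent defect densities through one screen/test stage using PE = 1 - exp(-kt) and SS = PE*DE. The function name and the example numbers are assumptions.

```python
import math

def screen_pass(d_lat_in, d_pat_in, k, t, de):
    """Propagate defect densities through one screen/test stage (per Figure 1.4)."""
    pe = 1.0 - math.exp(-k * t)               # precipitation efficiency
    precipitated = pe * d_lat_in              # latent defects transformed to patent
    d_lat_out = d_lat_in - precipitated       # unprecipitated latent defects escape
    removed = de * (d_pat_in + precipitated)  # detected and removed by the test
    d_pat_out = (1.0 - de) * (d_pat_in + precipitated)
    return {"PE": pe, "SS": pe * de, "D_REMOVED": removed,
            "D_LAT_out": d_lat_out, "D_PAT_out": d_pat_out,
            "D_REMAINING": d_lat_out + d_pat_out}

# Hypothetical assembly-level example (values are illustrative only):
print(screen_pass(d_lat_in=0.30, d_pat_in=0.05, k=0.02, t=40.0, de=0.90))
```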
The number of defects remaining in the production lot at delivery is a function of three key factors:

a. The quantity of design, part and manufacturing (workmanship and process) defects which initially reside in the hardware prior to assembly level screening.

b. The capability of the environmental stress to precipitate flaws in assemblies to a detectable level.

c. The thoroughness of the testing which is done, either during or after the screen, to assure detection of the defects precipitated to failure by the screens, and the ability to fault isolate and remove the defect without introducing new flaws.

None of the three factors which impact the reliability of delivered products is known with certainty. Without a basic knowledge of their quantitative value, however, effective screening programs cannot be properly planned and controlled. The procedures in the Handbook are directed at obtaining both preliminary planning and measured estimates of the three factors in order to plan, monitor and control the screening process. Experience data gathered from previous screening programs, screening experiments conducted during the development phase, and use of the Handbook procedures provide the methodology and information needed to plan and conduct effective screening programs.

Once a screening program is implemented during production, the results must be monitored and appropriate changes made in the screening regimen to assure that goals on remaining defects are achieved. The basic mechanism for assuring control is to compare the screening results with established goals so as to determine the need for corrective actions. For example, corrective actions might be accomplished by increasing precipitation or detection efficiencies so that more defects can be precipitated and detected, or by reducing incoming defect quantities through improved process controls. Changes which reduce or eliminate screening at some levels of assembly can also be made to reduce costs, when it is found that the screens are ineffective or unnecessary.

1.3.5 Benefits of a Quantitative Approach. A quantitative approach to stress screening enables the establishment of explicit quantitative objectives and provides a basis for planning, monitoring and controlling the screening process to meet those objectives. A quantitative approach also facilitates Government and contractor communication on the status of the screening process and on the progress being made toward achieving objectives. Coupled with a good Failure Reporting, Analysis and Corrective Action System (FRACAS), the quantitative approach also provides a more focused emphasis on the sources of latent reliability problems in the product or process, as well as better control of costs.

1.3.6 Process Capability and Defect Density. The use of a quantitative approach to stress screening requires addressing the capability of the manufacturing process to produce products which are reasonably free of defects. Defects are introduced into a lot of manufactured products through repeated assembly, handling and testing operations. The average number of defects per product (defect density) varies as a function of the degree of control which is exercised over the manufacturing process and the process capability. The ESS program addresses the questions: What is the process capability? What must the process capability be in order to meet quantitative reliability objectives? What improvements and changes are required to achieve the reliability objectives at optimum cost?
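As a simple illustration of the process-capability questions above, an observed defect density can be compared against a remaining-defect goal. The count-based estimate sketched below is only a first-cut indicator added for this discussion; it is not a handbook procedure (Procedure D provides the actual curve-fitting methodology), and the function and figures are hypothetical.

```python
def observed_defect_density(verified_defects, units_screened):
    """First-cut observed defect density Do: verified defects per unit screened."""
    return verified_defects / units_screened

# Hypothetical fallout: 42 verified defects found while screening 120 units
d_obs = observed_defect_density(42, 120)
print(f"Do = {d_obs:.2f} defects per unit")
```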
2. REFERENCED DOCUMENTS

The documents cited in this section are for guidance and information.

2.1 Government Documents.

SPECIFICATIONS
MIL-Q-9858 - Quality Program Requirements

STANDARDS
MIL-STD-721 - Definitions of Terms for Reliability and Maintainability
MIL-STD-781 - Reliability Testing for Engineering Development, Qualification, and Production
MIL-STD-785 - Reliability Program for Systems and Equipment Development and Production
MIL-STD-883 - Test Methods and Procedures for Microelectronics
MIL-STD-2000 - Standard Requirements for Soldered Electrical and Electronic Assemblies
MIL-STD-2155 - Failure Reporting, Analysis and Corrective Action System

HANDBOOKS
MIL-HDBK-217 - Reliability Prediction of Electronic Equipment
MIL-HDBK-781 - Reliability Test Methods, Plans, and Environments for Engineering Development, Qualification, and Production
MIL-HDBK-338 - Electronic Reliability Design Handbook

PUBLICATIONS

Air Force
AFP 800-7 - USAF R&M 2000 Process
AFWAL-TR-80-3086 - Environmental Burn-In Effectiveness
RADC-TR-82-87 - Stress Screening of Electronic Hardware
RADC-TR-86-198 - RADC Guide to Environmental Stress Screening
RADC-TR-86-149 - Environmental Stress Screening
RADC-TR-87-225 - Improved Operational Readiness Through Environmental Stress Screening
RADC-TR-90-269 - Quantitative Reliability Growth Factors for Environmental Stress Screening
AL-TR-91-300, Vol I - Evaluation of Quantitative Environmental Stress Screening Methods
AL-TR-91-300, Vol II - DOD-HDBK-344 Software Users Manual
Sacramento Air Logistics Center ESS Handbook

Army
AMC Reg 702-25 - Army Materiel Command Environmental Stress Screening Program

Navy
NAVMAT P-9492 - Navy Manufacturing Screening Program
NAVSO P-6071 - Best Practices Handbook
TE000-AB-GTP-020A - Environmental Stress Screening Requirements and Application Manual for Navy Electronic Equipment

DoD
DoD 4245.7-M - Transition From Development To Production
TBD - Tri-Service Environmental Stress Screening Guidelines

Copies of specifications, standards, handbooks, drawings, and publications required by contractors in connection with specific acquisition functions should be obtained from the contracting activity or as directed by the contracting officer. Single copies are also available (without charge) upon written request to:

Standardization Document Order Desk
700 Robbins Ave.
Philadelphia PA 19111-5094
(215) 697-2667

2.2 Non Government Documents.

Institute of Environmental Sciences (IES)
Environmental Stress Screening Guidelines, 1981
Environmental Stress Screening Guidelines for Assemblies, Sep 84
Environmental Stress Screening Guidelines for Assemblies, Mar 90
Environmental Stress Screening Guidelines for Parts

(Application for copies should be addressed to the Institute of Environmental Sciences, 940 East Northwest Highway, Mt. Prospect IL 60056-3444.)

Electronic Industries Association (EIA)
Interim Standard No. 18 - Lot Acceptance Procedure for Verifying Compliance with the Specified Quality Level (SQL) in PPM

(Application for copies should be addressed to the Electronic Industries Association, 2001 Eye Street NW, Washington DC 20006-5009.)

2.2.1 Other Non Government Documents.

Fertig, K.W., and Murthy, V.K., "Models for Reliability Growth During Burn-in", Proceedings of the 1978 Annual R&M Symposium, pp. 504-509.

Bateson, J.T., "Board Test Strategies - Production Testing in the Factory of the Future", Test and Measurement World, pp. 118-129, Dec 84.

Kube, F., and Hirschberger, G., "An Investigation to Determine Effective Equipment Acceptance Test Methods", Grumman Aerospace Corporation, Report No. ADR 14-04-73, Apr 73.

Brownlee, K.A. (1960), Statistical Theory and Methodology in Science and Engineering, New York: John Wiley and Sons.
Crandall, Random Vibration, John Wiley and Sons.

Engelmaier, Effects of Power Cycling in LCC, Bell Laboratories, NJ.

Quinn, J.J., "How To Implement DoD-HDBK-344 For New And Existing Raytheon Environmental Stress Screening (ESS) Programs", June 1991.

(Non government documents are generally available for reference from libraries. They are also distributed among non government standards bodies and using Federal agencies.)

3. DEFINITIONS AND ACRONYMS

3.1 Definitions. Definitions applicable to this Handbook are:

Assembly/Module - A number of parts joined together to perform a specific function and capable of disassembly, for example a printed circuit board; an assembly of parts designed to function in conjunction with similar or different modules when assembled into a unit (e.g., power supply module, memory module).

Baseline Stress - Factory ESS stress levels consistent with R&M 2000 guidelines, i.e., 6 Grms, 2°C/min, measured at the unit under test.

Chamber - Cabinet in which hardware is placed in order to apply stress to a hardware item.

Defect Density - Average number of latent defects per item. Symbols used: DIN, DOUT, DREMAINING and DO for incoming, outgoing, remaining and observed defect density, respectively.

Detectable Failure - A failure that can be detected with 100% detection efficiency.

Detection Efficiency - A measure of the capability of detecting a patent defect. Symbol is DE.

Error - Class of patent defect resulting from assembly and/or test correlation errors. Errors do not require environmental stress for precipitation or detection.

Escapes - The incoming defect density which is not detected by a screen and test and which is passed on to the next level.

Failure-Free Period - A contiguous period of time during which an item is to operate without the occurrence of a failure while under environmental stress.

Failure Rate - The total number of failures within an item population, divided by the total number of life units expended by that population during a particular measurement interval under stated conditions. Symbol is λ. A reliability measure related to MTBF.

Fallout - Failures observed during, or immediately after, and attributed to stress screens. Symbol is F. Sometimes used to mean defects removed, symbol DREMOVED.

Fault Coverage - In a given piece of equipment, the ratio of faults which are detectable to faults present.

Latent Defect - An inherent or induced weakness, not detectable by ordinary means, which will either be precipitated to early failure under environmental stress screening conditions or eventually fail in the intended use environment. Symbol is DLAT.

Part - Any identifiable item within the product which can be removed or repaired (e.g., discrete semiconductor, resistor, IC, solder joint, connector).

Part Fraction Defective - The number of defective parts contained in a part population divided by the total number of parts in the population, expressed in Parts Per Million (PPM). See also defect density.

Patent Defect - An inherent or induced weakness which can be detected by inspection, functional test, or other defined means. Symbol is DPAT. In this procedure, DPAT refers to a precipitated latent defect. See also error.

Precipitation (of Defects) - The process of transforming a latent defect into a patent defect through the application of stress screens.

Precipitation Efficiency - A measure of the capability of a screen to precipitate latent defects to failure. Symbol is PE.

Production Lot - A group of items manufactured under essentially the same conditions and processes.

Product Reliability Verification Test - A test to provide confidence that field reliability will be achieved.

Screenable Latent Defect - A latent defect which is accelerated to failure by a screen and then detected by test.

Screen Parameters - Parameters which relate to screening strength (e.g., vibration G-levels, temperature rate of change and time duration).

Screening Experiments - Stress screening applied to preproduction equipment in order to derive data, such as screen parameters, for planning the overall ESS program.

Screening Regimen - A combination of stress screens applied to an equipment, identified in the order of application (i.e., assembly, unit and system screens).

Screening Strength - The probability that a specific screen will precipitate a latent defect to failure and detect it by test, given that a latent defect susceptible to the screen is present. It is the product of precipitation efficiency and detection efficiency. Symbol is SS.

Selection and Placement - The process of systematically selecting the most effective stress screens and placing them at the appropriate levels of assembly.

Stress Adjustment Factor - The ratio of the incoming defect density at the anticipated field stress level to the incoming defect density at the baseline stress level.

Stress Screening - The process of applying mechanical, electrical and/or thermal stresses to an equipment item for the purpose of precipitating latent part and workmanship defects to early failure.

System/Equipment - A group of units interconnected or assembled to perform some overall electronic function (e.g., electronic flight control system, communications system).

Thermal Survey - The measurement of thermal response characteristics at points of interest within an equipment when temperature extremes are applied to the equipment.

Unit - A self-contained collection of parts and/or assemblies within one package, performing a specific function or group of functions, and removable as a single package from an operating system (e.g., autopilot computer, VHF communications transmitter).

Vibration Survey - The measurement of vibration response characteristics at points of interest within an equipment when vibration excitation is applied to the equipment.

Yield - The probability that an equipment will pass a screen or test without failure.

3.2 Acronyms/Abbreviations

3.2.1 Acronyms Used In Procedure B Of Section 5

AIC - Airborne Inhabited Cargo
AIF - Airborne Inhabited Fighter
AUC - Airborne Uninhabited Cargo
AUF - Airborne Uninhabited Fighter
ARW - Airborne Rotary Wing
CL - Cannon Launch
GB - Ground Benign
GF - Ground Fixed
GM - Ground Mobile
MF - Missile Flight
ML - Missile Launch
NS - Naval Sheltered
NU - Naval Unsheltered
SF - Space Flight

3.2.2 Other Acronyms

AOQL - Average Outgoing Quality Limit
ATP - Acceptance Test Procedure
BIT - Built In Test
CDE - Chance Defective Exponential
CFR - Constant Failure Rate
CND - Cannot Duplicate
D - Defect Density
DE - Detection Efficiency
DOD - Department of Defense
ESD/EOS - Electrostatic Discharge/Electrical Overstress
ESS - Environmental Stress Screening
FBT - Functional Board Tester
FL - Fault Location
FMEA - Failure Mode & Effect Analysis
FR - Failure Rate
FRACAS - Failure Reporting and Corrective Action System
FY - Fiscal Year
Hz - Hertz
IC - Integrated Circuit
ICA - In-Circuit Analyzer
ICT - In-Circuit Tester
IES - Institute of Environmental Sciences
k - Stress Constant
LBS - Loaded Board Shorts
LRM - Line Replaceable Module
LRU - Line Replaceable Unit
LSI - Large Scale Integration
LTPD - Lot Tolerance Percent Defective
MLE - Maximum Likelihood Estimate
MS - Mechanical Shock
MSI - Medium Scale Integration
MTBF - Mean Time Between Failures
N - Number of Standard Deviations
n - Sample or Lot Size
NFF - No Fault Found
OEM - Original Equipment Manufacturer
PE - Precipitation Efficiency
PEP - Production Engineering Phase
PCB - Printed Circuit Board
PPM - Parts Per Million
PRVT - Product Reliability Verification Test
PWA - Printed Wiring Assembly
PM - Performance Monitoring
R - Range
R&M - Reliability & Maintainability
RMS - Root Mean Square
RTOK - Retest OK
RV - Random Vibration
SAF - Stress Adjustment Factor
SRU - Shop Replaceable Unit
SS - Screen Strength
SQL - Specified Quality Level
SPC - Statistical Process Control
t - Stress Duration
TAAF - Test, Analyze & Fix
TC - Temperature Cycling
Temp - Temperature
TMAX - Maximum Temperature
THB - Time-Temperature-Humidity-Bias
TMIN - Minimum Temperature
TQM - Total Quality Management
UTV - Unable To Verify

4. GENERAL GUIDELINES

4.1 Contractual Aspects of ESS. ESS must remain an adaptive process so that the screening regimen can be changed to improve cost-effectiveness. Contract provisions for ESS programs should have flexibility to effect necessary modifications of stress screens. During the initial stages of production, more severe stress screens may be required. As the product and process mature, the screens may require adjustment, such as by reducing the number of temperature cycles or the number of axes of vibration, or by eliminating unnecessary screens. In early production, a number of unknowns preclude adoption of optimum stress screening. Some of the more significant unknowns are:

a. Residual design deficiencies
b. Manufacturing planning errors
c. Worker training
d. New suppliers
e. Latent defects in new part lots
f. New process capability
g. Precipitation Efficiency
h. Detection Efficiency

The stress screening program, even if carefully planned, may produce unexpected results which should be addressed through modification of the screens, hardware, or processes. The principle of adaptive screening is to adjust the screens on the basis of observed screening results so that the screens are always most cost effective while meeting ESS program goals. Contract terms should be flexible enough to permit modification of screens or screen parameters when such modification can be shown to be beneficial. In long term production, the quantity and distribution of latent defects change with time, and therefore contract terms should contain provisions for periodically reassessing the individual screens and the overall screening program. The overriding criterion for change should be the most cost effective achievement of objectives. Contracting arrangements should be made which permit such changes without having to resort to extensive renegotiation.

4.2 Relation of ESS to MIL-STD-785 Reliability Program Tasks. Planning an ESS program for the production phase is interrelated with many of the MIL-STD-785 reliability program tasks which are required to be performed during development and production. Every effort should be made to integrate the knowledge gained from MIL-STD-785 tasks into the planning of an ESS program for production.
MIL-STD-785 reliability program tasks which have a particular bearing on ESS planning include: Reliability Prediction (Task 203), Reliability Allocation (Task 202), Qualification Tests (Task 303), Parts Program (Task 207), Failure Reporting, Analysis and Corrective Action System (Task 104), Failure Modes, Effects and Criticality Analysis (Task 204), Reliability Growth Testing (Task 302), and of course ESS (Task 301). Proper screen selection and placement is highly dependent on the reliability and stress design characteristics of the equipment. Information derived from reliability program tasks, such as predicted and demonstrated failure rates, quality level of parts, number and type of nonstandard and MIL-parts, number and type of interconnections, design capability, field stress environments, and critical items, should be used in structuring an ESS program for production.

4.3 Subcontractor and Supplier Stress Screening. Items which are furnished by subcontractors or other equipment suppliers may require stress screening. There are several distinct advantages for the subcontractor or supplier to perform the stress screening rather than the prime contractor:

a. Subcontractor/supplier concern for yield can be translated to profits, which may force process improvements to minimize latent defects.

b. Screening at receiving inspection/test by the prime contractor may involve returning defective items to the subcontractor/supplier and result in shortages and schedule slippages.

c. Performing the additional screen can introduce latent defects due to handling, e.g., mechanical and ESD damage and electrical overstress.

d. Special stress screening facilities and test equipment do not have to be purchased, supported and operated by the prime contractor.

The procedures and methodology contained in the Handbook can be imposed on the subcontractor/supplier. To assure that the subcontractor/supplier is able to perform the tasks required by the Handbook, the intent must be made known prior to production. In this manner, the subcontractor/supplier can prepare a screening plan, acquire the necessary capability, or arrange for an external laboratory to perform the screening.

4.3.1 Screening of Spares. Spares should be subjected to a screening regimen equivalent to that used for the production hardware. Spares are either manufactured on the same production line or are produced separately to the same specifications as the production hardware. The spares are most often an LRU or SRU and consequently may not receive the exposure to additional screening at higher assembly levels that non-spare items might receive. Quantitative ESS goals for the system should be allocated down to the spare item. The procedures of Section 5 can be used to ensure that defect density for the spares does not exceed allocated goals. A costly and less desirable alternative would be to screen and test all spares in a mock-up configuration for the system. As a word of caution, there are times when spare orders are placed long after the original production run has been completed. As a consequence, the production ESS facilities may not be available. This may lead to a requirement to develop a "new" ESS process that utilizes new or existing facilities. Also, given the potential time lag between the actual production phase and the manufacturing of the spares, processes that were in control for production may be out of control for the spares. In such situations it is not recommended to blindly rely on the original production screening regimen.
4.4 Planning a Stress Screening Program. Planning a stress screening program must begin early in the design phase to ensure that the equipment can withstand the necessary ESS stress levels. The success of a stress screening program is strongly dependent on knowledge of the product and the processes to be used in manufacture. The following must be kept in mind when planning a stress screening program using quantitative methods:

a. The defects which can potentially reside in the product, and the effectiveness of screens in precipitating the defects to failure (and then detecting them), are not known with certainty. By comparing planned estimates for defect fallout with actual screen fallout, the screening process can be refined and/or the manufacturing process improved to achieve the desired goal of a highly reliable product.

b. Experience data on equipment similar in composition, construction and degree of maturity can provide very useful data for planning purposes. Information derived from the following sources should be used in planning an ESS program for production:

(1) Identification of hardware items (parts, assemblies) which have exhibited a high incidence of latent defectives on other programs.
(2) Identification of suppliers/vendors whose products have indicated high defect levels.
(3) Qualification test results.
(4) Supplier acceptance test results.
(5) Part receiving inspection, test and screening results.
(6) Screening and test records for previous programs.
(7) Reliability growth test results.
(8) Field failure data.

c. A viable screening program must be dynamic, i.e., the screening process must be continuously monitored to ensure that it is both technically and cost effective. Changes to the screening process should be made, as necessary, based on analysis of screening fallout data and failure analysis so that quantitative screening objectives can be achieved.

d. The basic questions which must be addressed in planning a stress screening program are:

(1) What are the quantitative objectives of the program?
(2) What are the stress screens to be used, and at what level of assembly should the screens be placed to achieve the desired objectives?
(3) What are the costs associated with each of the possible alternative screening sequences, and how can the screening program be made cost effective?
(4) How will one know if the screening program is proceeding according to plan? What assurances can be provided that program objectives have been achieved?
(5) What corrective actions must be taken to achieve desired screening program goals if the screening fallout data indicate significant departures from the planned program?

e. An ESS program for the production phase should include the following major tasks:

(1) Preparation of ESS Plan
(2) Establish Objectives/Goals
(3) Obtain Planning Estimates of Defect Density
(4) Selection and Placement of Screens to Optimize Cost

A discussion of each of these major tasks, which includes background, rationale and general guidelines for use of the detailed procedures, is contained in 4.4.1 through 4.4.5.

4.4.1 Preparation of ESS Plans. The contractor should prepare ESS plans for both the development and production phases. The purpose of the development phase plan is to describe the proposed application of ESS during development and production and to refine the estimated values of DIN and SS.
Use of the procedures contained in the Handbook, in conjunction with stress screen experimentation on pre-production prototype equipment (if cost effective), can provide invaluable data for planning. Estimates of the type and quantity of defects likely to be present in the hardware can be evaluated against experimental data. Screens can be designed, based upon engineering evaluation, which provide the desired stress stimulation for suspected defects in the hardware. Test specifications can also be evaluated to ensure that possible failure modes, arising from various defect types and sources, can be detected by the tests performed either during or following the screens. Integration of the results from the MIL-STD-785 reliability program tasks can also be effectively accomplished. Early fallout from screens provides the maximum amount of information on likely defect sources, process capability, and design limitations. Corrective actions taken as a result of screen experimentation during development can aid significantly in stabilizing the process for production. The development phase and production phase ESS plans should be submitted for approval by the procuring activity prior to production.

4.4.1.1 Development Phase Plan. The development phase plan should include the following:

a. Identification of the reliability requirements for the product and the quantitative goals for the ESS program.

b. Identification of the equipment to be screened and the respective production quantities.

c. Description of the initial screens which will be applied and the screening experiments which will be conducted (if experimentation is necessary and cost effective).

d. Description of the data collection and analysis program which will be used. A Failure Reporting, Analysis and Corrective Action System (FRACAS) should be in place and operating.
The remaining latent and patent defects determine the field reliability according to the following expression:

    Average Failure Rate in Field = (Total failures in time T) / T
      = Sum over all environments of { (1-DE)*DPAT + (1-DE)*DLAT*SAF*[1 - exp(-kT)] + CFR*T } / T

where
    DE           = detection efficiency
    (1-DE)*DPAT  = remaining patent defects
    (1-DE)*DLAT  = remaining latent defects
    SAF          = stress adjustment factor
    k            = precipitation stress constant
    CFR          = constant failure rate

Using this relationship, the required field failure rate can be used to determine the requirements for remaining defect density, and consequently used to establish goals and requirements for all integration and test levels, from incoming defect densities for parts through to final equipment testing. An example relating various values of DREMAINING to the field MTBF is shown in Table 4.1 for an assumed field precipitation rate k. A computational sketch of the expression above follows Table 4.1.

4.4.3 Obtaining Planning Estimates of Defect Density. The design of a stress screening program requires knowledge of the quantity and type of latent defects which are likely to reside in the hardware prior to assembly level screening. The defect density tables contained in Procedure B of Section 5 are used to obtain planning estimates of defect density. Values in the tables are based upon studies of historical defect data from the factory and field for several part types. Extrapolations to other part types and field environments were made based upon correlations to MIL-HDBK-217 quality level and field environment factors. Study results and methodology are contained in RADC-TR-86-149. Procedure D provides the methodology that allows the user to refine these estimates based on experience data.

Table 4.1: Remaining Defect Density Goals (DREMAINING)

    Failure Rate (Failures/Hour)    MTBF (Hrs)    DREMAINING (At Field Stress)
    0.009516                        105           10
    0.000951                        1,051         1
    0.000475                        2,102         0.5
    0.000190                        5,254         0.2
    0.000095                        10,508        0.1
    0.000047                        21,017        0.05
    0.000019                        52,542        0.02
    0.000009                        105,083       0.01
    0.000001                        1,050,893     0.001
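The following sketch (Python) is a minimal illustration of how the field failure rate expression of 4.4.2 can be evaluated for a single environment. It is not part of the handbook procedures, and the numerical inputs are assumed example values rather than values taken from Table 4.1 or from Procedures A through E.

    # Illustrative sketch: evaluating the field failure rate expression of 4.4.2
    # for a single environment.  All input values below are assumed examples.
    import math

    def average_field_failure_rate(DE, DPAT, DLAT, SAF, k, CFR, T):
        # (1-DE)*DPAT           remaining patent defects per item
        # (1-DE)*DLAT*SAF       remaining latent defects per item, adjusted to field stress
        # k                     precipitation stress constant at field stress (per hour)
        # CFR                   constant (inherent) failure rate (per hour)
        # T                     field operating period (hours)
        failures = ((1 - DE) * DPAT
                    + (1 - DE) * DLAT * SAF * (1 - math.exp(-k * T))
                    + CFR * T)
        return failures / T

    rate = average_field_failure_rate(DE=0.95, DPAT=0.05, DLAT=0.5,
                                      SAF=0.6, k=0.001, CFR=1e-5, T=5000.0)
    print("average failure rate: %.6f /hr, MTBF: %.0f hr" % (rate, 1.0 / rate))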
4.4.3.1 Latent vs Patent Defects. A common understanding of the nature of the defects which the screening program should be designed to precipitate is essential for proper planning. The factors which impact incoming defect density, and the rationale for the procedures used in obtaining planning estimates of defect density, should also be understood. For ESS purposes, defects can be categorized into two types, latent and patent. A latent defect is characterized as an inherent or induced weakness or flaw, with some residual strength, that will manifest itself as a failure at some time in the future when exposed to normally encountered stress (electrical, mechanical, thermal, or chemical). Latent defects cannot be detected until precipitated as a patent defect. For simplicity, a defect with no residual strength but requiring stress concurrent with testing to be detectable can also be considered to be a latent defect until it is detected. Some examples of latent defects are:

(1) Parts
    (a) Partial damage through electrical overstress or electrostatic discharge
    (b) Partial physical damage during handling
    (c) Material or process induced hidden flaws
    (d) Damage inflicted during soldering operations (excessive heat)

(2) Interconnections
    (a) Cold solder joint
    (b) Inadequate/excessive solder
    (c) Broken wire strands
    (d) Insulation damage
    (e) Loose screw termination
    (f) Improper crimp
    (g) Unseated connector contact
    (h) Cracked etch
    (i) Poor contact termination
    (j) Inadequate wire stress relief

A patent defect is a defect that is detectable in its present form and has two subcategories, error and precipitated latent. An error is a defect caused by workmanship or test correlation. Errors are preventable and should not occur, whereas patent defects due to precipitated latent defects are only preventable to the limits of the state of the art in equipment and technology. Errors can be readily monitored using conventional SPC techniques and can be removed by simple testing or inspection without the need for ESS or environmental stress. Errors are introduced into the product during fabrication and assembly, and pass through various assembly stages until they are detected by a test or inspection of sufficient thoroughness and subsequently eliminated from the product. When good quality control test and inspection procedures are applied, all but the most subtle errors should be detected and eliminated prior to shipment. Some examples of errors are:

(1) Parts
    (a) Broken or damaged in handling
    (b) Wrong part installed
    (c) Correct part installed incorrectly
    (d) Missing parts
    (e) Electrical test correlation and tolerancing

(2) Interconnections
    (a) Incorrect wire termination
    (b) Open wire due to handling damage
    (c) Wire short to ground due to misrouting or insulation damage
    (d) Missing wire
    (e) Open etch on printed wiring board
    (f) Open plated-through hole
    (g) Shorted etch
    (h) Solder bridge
    (i) Loose wire strand

A precipitated latent defect is a latent flaw that has been transformed into a patent defect by exposure to stress over time. Since detection efficiency is not 100%, some precipitated latent flaws, and errors, will escape to the field as undetected defects. It is thus important to address the aspects of precipitation and detection separately, and also to distinguish and separately monitor errors and precipitated latent flaws. For simplicity, the Handbook shall use the term patent defect to denote a precipitated latent defect.

4.4.3.2 Categories of Defects. The majority of parts and connections within an electronic equipment will never fail over the product's lifetime and are thus "good". The failures which occur during product life are traceable to design or externally induced causes, or to latent defects which were introduced into the product during manufacture. Such defects, if not eliminated from the product in the factory, will result in premature or early-life failures in the field. Not all latent defects, however, are screenable, i.e., capable of being eliminated from the equipment in the factory by use of stress screens. It is only those latent defects whose failure threshold can be accelerated by the stresses imposed by the screens which are screenable. It is the screenable early life failures which the stress screening program must be designed to remove. Figure 4.1 illustrates the categories of defects and their relationship to product life failures.

Figure 4.1: Defect Categories and Product Life Failures. (The figure distinguishes manufacturing defects in parts, boards and interconnections, which are screenable and appear as early product life failures, from design and externally induced defects, which are not screenable and account for all other product life failures.)
4.4.3.2.1 Screenable Latent Defects and the Field Stress Environment. The notion of screenable latent defects must be further examined to fully understand the rationale used for the procedures contained in the handbook. The population of latent defects within newly manufactured electronic items can be viewed as a continuum which ranges from minor defects of small size to major defects of large size. However, it is important to note a somewhat controversial point, i.e., given the same manufacturing process, the number of latent defects which may reside in the hardware will differ depending upon the operating environment and stress levels to which the equipment will be exposed. The stress/time to which a latent defect is exposed will determine its failure threshold and time-to-failure. The probability of a latent defect's failure threshold being exceeded is much higher in a harsh environment than in a more benign environment. Obtaining an initial estimate of defect density for an equipment must therefore take into consideration the field operating environment to which the equipment will be exposed during product life.

Since the operating environmental stress levels are different from, and less than, the factory ESS levels, the field defect density estimate is not directly applicable to the factory ESS program. Further, the producer must design, assess, and monitor the ESS program based upon analysis of factory fallout data and causes. Some method must thus be provided to relate defect density in the field to the factory defect density. This is accomplished by including a stress adjustment factor (SAF) in the model, where

    Stress Adjustment Factor (SAF) = Defect Density (Field Stress) / Defect Density (Factory Stress)

The application and measurement of the SAF are described in Procedures B and E respectively of Section 5.
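As a small illustration of the direction of the adjustment, the sketch below applies the SAF definition above to translate a field-stress defect density estimate into the equivalent value at factory ESS stress. The numerical values are assumed and are not taken from the handbook procedures.

    # Illustrative sketch: applying the stress adjustment factor (SAF).
    # Since SAF = defect density (field stress) / defect density (factory stress),
    # a field-based estimate is divided by SAF to express it at factory ESS stress.
    def field_to_factory_density(d_field, saf):
        return d_field / saf

    d_field = 0.12   # assumed latent defects per item observable at field stress
    saf = 0.6        # assumed stress adjustment factor
    print("equivalent factory-stress defect density:", field_to_factory_density(d_field, saf))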
4.4.3.3 Factors Which Impact Defect Density. The quantity and type of defects which are introduced into a product are dependent upon several factors. The first six factors listed below are related to product or program characteristics over which the manufacturing function within a company has little control. The last two factors are related to the manufacturing process, over which the manufacturing function has direct control.

a. Complexity - The quantity and type of parts and interconnections used in the product affect defect density. Increased complexity creates more opportunities for defects.

b. Part Quality Level/Grade - The quality levels of parts are established by MIL-STD part screening requirements. The number of defects which remain in a lot of screened parts is determined by the type and extent of screening and testing to which the parts are subjected under MIL-STD screening requirements.

c. Stress Environment - The stress conditions to which the equipment will be exposed will affect the proportion of defects which should be screened from the product. A defect may be precipitated to early failure in a harsh field operating environment, but may survive product life in a benign field environment.

d. Process Maturity - New production requires time to identify and correct planning and process problems, train personnel, and establish vendor and process controls. Maturity is dependent on volume and time. Low production volume over a long period would result in a low maturity rate and will thus impact defect density.

e. Packaging Density - Electronic assemblies with high part and wiring density are more susceptible to process, workmanship and temperature induced defects due to smaller error margins, increased rework difficulty and thermal control problems.

f. Concurrent Engineering - Proper design analysis and assessment, and application of concurrent engineering principles during the design stage, will tend to ensure a reliable and producible product and thus reduce the latent (and error) defect densities. Durability analyses will also ensure that the design can withstand the stresses of ESS.

The following factors are under the direct control of the manufacturing function. The degree of control exercised will impact defect density.

g. Manufacturing Process Controls - Good process controls will tend to reduce the number of defects which are introduced into the product.

h. Workmanship Quality Standards - Stringent and properly enforced workmanship quality standards will enhance the reliability of the product through reduced introduction of workmanship defects into the product.

4.4.3.3.1 Part vs Assembly Defect Density. The part defect density can have a significant impact on the assembly defect density, depending upon the number of parts contained in the assembly. The Poisson approximation is used in Figure 4.2 to illustrate the expected assembly defect density as a function of the remaining part defect density and the number of parts per assembly. As can be noted, relatively small values of part defect density result in large values of assembly defect density, depending upon the number of parts contained in the assembly. As an example, for a 150 part assembly containing parts with a defect density of .01 (10,000 PPM), the assembly defect density is 1.5. In terms of yield, only about 22%, i.e. exp(-1.5), of such assemblies, when subjected to first assembly test, would pass without failure. It is quite obvious that the part defect density must be much better than .01 if the costs of rework, retesting and handling of the assemblies are to be avoided. The questions answered by the ESS methodology and procedures in this handbook are: How much better must the remaining part defect density be? What level of part defect density is needed for delivered systems? Can such levels be achieved?

Figure 4.2: Fraction of Defective Assemblies vs Remaining Part Fraction Defective.
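The following sketch reproduces the 150-part example of 4.4.3.3.1 using the Poisson approximation described above; it is illustrative only and is not part of Procedure B.

    # Sketch of the Poisson approximation used in 4.4.3.3.1: expected assembly
    # defect density and first-test yield as a function of part fraction defective.
    import math

    def assembly_defect_density(parts_per_assembly, part_fraction_defective):
        # Expected number of part-related latent defects per assembly.
        return parts_per_assembly * part_fraction_defective

    def first_test_yield(defect_density):
        # Poisson probability of zero defects in an assembly.
        return math.exp(-defect_density)

    # The 150-part example from the text: 0.01 (10,000 PPM) part fraction defective.
    d_assembly = assembly_defect_density(150, 0.01)
    print("assembly defect density:", d_assembly)                              # 1.5
    print("first-test yield: %.0f%%" % (100 * first_test_yield(d_assembly)))   # about 22%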
4.4.3.3.2 Part Level vs Assembly Level Screening. Screening at the part level may be a cost effective alternative for eliminating defects prior to the parts being assembled into the production hardware. A population of parts, even those procured to high quality levels, may appear to contain high defect density levels. For example, microelectronic devices procured to the quality requirements of MIL-STD-883 receive 100% final electrical testing by the part vendor. Nonetheless, one manufacturer has found that about 1%, and as much as 4%, of the parts will not pass a similar electrical test performed at the OEM receiving inspection. There are several possible reasons for this, including:

    the seller's and buyer's tests are different
    seller testing errors
    buyer testing errors
    device damage or degradation in handling
    inspection and sorting errors
    latent defects

General awareness of this problem in industry has resulted in improvements in part quality and reliability. For example, results reported in the Integrated Circuit Screening Report published by the IES in November 1988 indicated a significant improvement for microcircuits, and revealed that the additional handling involved in the rescreening process was actually introducing more defects than were being screened out. Nonetheless, it should be noted that the foregoing discussion addresses errors only and must be extended to include latent defects, and that it is primarily latent defects that escape to the field and degrade early life reliability. The requirement for parts rescreening should not be mandated and should only be imposed as determined to be necessary by the implementation of the Handbook.

Screening at the assembly level is also a means of finding and eliminating part defects from the hardware. The part fallout from early screening at the assembly level can provide much of the information needed for resolving such uncertainties and taking corrective action. There are always uncertainties as to whether the part defects which are found during assembly level screening are escapes from part level screens or whether they are newly introduced defects due to handling, test and assembly operations. A thorough failure analysis of the fallout from assembly level screening can help in determining defect causes and the types of screens which should be used.

4.4.3.3.3 Air Force R&M 2000 ESS Policy - Part Fraction Defective. Air Force R&M 2000 ESS studies recommend that the manufacturing process begin with piece parts having a remaining part fraction defective below 1000 PPM by FY87 and below 100 PPM by FY90. Procedure D of Section 5 and ESS results are used in the Handbook procedures to evaluate the achievement of these goals. However, the prescribed requirement of a 100 PPM defect level for parts may not be adequate for achieving the required reliability. The actual requirements should be determined using Procedure A and may tighten or relax the R&M 2000 levels. The R&M 2000 levels should also be interpreted as being applicable to both latent and patent defects, where the patent defects include errors due to electrical testing, test correlation, specification discrepancies, etc.

4.4.3.3.4 Process Maturity and Defects. The maturity of both the product design and the manufacturing process can significantly impact the quantity and type of defects which can reside in the hardware. The data shown in Table 4.2 represent experience on several large development and production projects. As the data illustrate, the proportions of failures in a product which are traceable to design, part or manufacturing causes can differ substantially, depending upon the stage of maturity of the product and the manufacturing process. During the development phase, the major contributor to product failure is design (50%), while parts may account for 20% of the failures. Unfortunately, design problems can still be present in the product when stress screens are being conducted during production. The proportion of failures in a product attributable to design would be expected to decrease as the process matures. The overall defect density in the product would also be expected to decrease as the process matures. Maturity of the product and process should be taken into account when planning estimates of defect density are being determined in accordance with Procedure B of Section 5.
In such cases, the user may decide to use Procedure D to modify the defect density values in Tables 5.2 through 5.13 of Procedure B either upward or downward, depending upon past experience and assessments of maturity. With an emphasis on TQM and concurrent engineering, more thorough design analysis and assessment should be performed during the design stage to prevent design problems during production. A high incidence of design problems during initial production provides valuable feedback on the efficacy of the concurrent engineering program.

Table 4.2: Defect Types & Density vs Process Maturity

                          Defect Type Distribution (percent)
    Maturity              Design    Manufacturing    Parts       Defect Density
    Development           40-60     20-40            10-30       High
    Early Production      20-40     30-50            20-40       Moderate
    Late Production       5-15      20-30            60-70       Low

4.4.3.3.5 Packaging Density. Assemblies with high part and wiring density, relative to the assembly manufacturing technology, are more likely to contain both patent (error) and latent defects because of the proximity of devices and interconnections contained within a small volume. The effects of poor heat dissipation in densely packaged electronic assemblies can accelerate latent defects to early failure. Difficulties in initially assembling or reworking the hardware can also make such assemblies more defect prone. Procedure B in Section 5, for estimating defect density, thus includes a packaging density factor. This factor should be continually monitored and refined using Procedure D of Section 5.

4.4.4 Screen Selection and Placement. Planning a stress screening program requires the selection and placement of appropriate screens at various levels of assembly so as to achieve a cost effective screening program. Listed below are the factors which affect screen selection and placement. The factors are discussed in more detail in the following paragraphs; a computational sketch showing how the first three factors combine follows this list.

a. Screening strength - The product of precipitation efficiency and detection efficiency; determines the capability for removing defects.

b. Precipitation efficiency - Prior knowledge of the effectiveness of the screens in precipitating defects to failure.

c. Detection efficiency - The tests which can be economically and feasibly used to detect defects which have been precipitated to failure by the screens.

d. Thermal and vibration response characteristics - The structural, thermal and material properties of the items to be screened and their response to applied stress.

e. Design limits - The environmental stress design limits of the items to be screened.

f. Facilities - The screening, test and instrumentation facilities available to the manufacturer to perform screening and test operations.

g. Costs - The costs to achieve screening program goals on remaining defect density.

h. Product Reliability Verification Test (PRVT) - The use of a PRVT as an integral part of an ESS program to provide confidence that field reliability will be achieved.
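The sketch below shows how the first three factors combine. It uses the definition SS = PE x DE from 4.4.4.1 and treats screening strength as the fraction of incoming latent defects removed, consistent with the definition of screening strength in 4.4.4.2.3; the numerical inputs are assumed example values.

    # Sketch: combining precipitation efficiency (PE) and detection efficiency (DE)
    # into screening strength (SS), and the resulting remaining defect density for
    # a single screen.  All numerical inputs are assumed examples.
    def screening_strength(pe, de):
        return pe * de

    def remaining_defect_density(d_in, ss):
        # SS interpreted as the fraction of incoming latent defects removed.
        return d_in * (1.0 - ss)

    d_in = 0.8                                   # assumed incoming latent defects per unit
    ss = screening_strength(pe=0.90, de=0.85)    # assumed PE and DE values
    print("screening strength:", ss)
    print("defects removed per unit:", d_in * ss)
    print("remaining defects per unit:", remaining_defect_density(d_in, ss))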
4.4.4.1 Precipitation Efficiency. Precipitation efficiency is defined as the probability that a screen will precipitate a defect to a detectable state, given that a defect susceptible to the screen stress is present. Screening strength is defined as the precipitation efficiency multiplied by the probability that the defect will be detected and removed (i.e., the detection efficiency). A basic premise of stress screening is that under specific screening stresses applied over time, the failure rates of defectives are accelerated from that which would occur under normal field operating stress conditions. By subjecting electronic items to accelerated stresses, i.e. rapid temperature cycling and random vibration, latent defects are thus precipitated to early failure. More severe stresses will tend to accelerate failure mechanisms and the rate of defect failure. For example, the failure rate of a latent defect increases with more rapid rates of temperature change and larger temperature extremes. The precipitation efficiency (and hence screening strength) of a random vibration screen increases as a function of the level and duration of the applied excitation.

Stress screens are not all equally effective in transforming latent defects into detectable failures. Table 4.3 provides a listing of latent defect types and the screens believed to be effective in precipitating them to failure. Table 4.3 may be used as an aid in the selection of a screen type when prior knowledge of workmanship or part defects for similar assemblies is not available.

Table 4.3: Assembly Defect Types Precipitated by Thermal & Vibration Screens. Defect types addressed (each rated for thermal and for vibration screens): defective part; broken part; improperly installed part; solder connection; PCB etch, shorts and opens; loose contact; wire insulation; loose wire termination; improper crimp or mate; contamination; debris; loose hardware; chafed or pinched wires; parameter drift; hermetic seal failure; adjacent board/part shorting. (Reference: RADC-TR-82-87)

Table 4.3 indicates that vibration screens are generally more effective for loose contacts, debris and loose hardware, while temperature cycling screens are not effective for these. Thermal screens are generally more effective for part parameter drift, contamination and improper crimp or mating type defects, while vibration screens are not. For the other defect classes listed in the table, both thermal and vibration screens are effective, but the relative degree of effectiveness of one screen type over the other is not precisely known. These are some of the uncertainties which must be dealt with in planning a screening program. Historically, on average, 20% of the defects are found to be responsive to vibration screens and 80% to temperature cycling screens (Reference: IES Environmental Stress Screening Guidelines for Assemblies).

To improve the modeling accuracy and to ensure a proper balance between thermal and vibration screens, it is recommended that the defect population be segregated into Random Vibration (RV) sensitive defects and Temperature Cycle (TC) sensitive defects. If necessary, a population responsive to either TC or RV can also be included in the model.
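As a small illustration of the recommended segregation, the sketch below splits an incoming defect density estimate into TC-sensitive and RV-sensitive populations using the 80%/20% historical averages quoted above as default planning fractions; the incoming density value is assumed.

    # Sketch: segregating an incoming defect density estimate into temperature
    # cycle (TC) sensitive and random vibration (RV) sensitive populations, using
    # the 80%/20% historical averages quoted above as default fractions.
    def split_defect_population(d_in, tc_fraction=0.80, rv_fraction=0.20):
        return {"TC": d_in * tc_fraction, "RV": d_in * rv_fraction}

    # Assumed incoming defect density of 0.5 defects per unit, for illustration.
    print(split_defect_population(0.5))    # {'TC': 0.4, 'RV': 0.1}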
4.4.4.1.1 Screen Parameters. Precipitation efficiency is a function of specific screen stresses (parameters) and the time duration of the stress application. Equations provided in Procedure C of Section 5 provide values for precipitation efficiency as a function of relevant screening parameters. It should be noted that these parameters pertain to the unit under test and not the chamber, etc. Vibrational characteristics of the equipment (e.g. resonances, transmissibility, etc.) and the various thermal conductivities and masses must be considered. All assembled hardware consists of many paths along which a stress might be transmitted. The selection of screening parameters and methods of stress application must be suited to the stress transmission characteristics of the hardware design. As a part of the screen selection and placement process, in which thermal or vibration screens are to be used, a stress response survey of the item to be screened should be performed. This may require simulations and/or surveys conducted on the actual or similar hardware. Care should be exercised to ensure that hardware responses are large enough to generate an effective screen while not exceeding hardware design capability. Environmental stresses should be applied to the hardware and the response of critical hardware elements measured to determine whether maximum or minimum temperature limits are being exceeded, and whether suspected defect sites (parts, interconnections, etc.) are responsive to the screen stress. In addition, normal design provisions for isolating the hardware from stress, such as the use of shock mounting, vibration isolators or cooling air, should also be evaluated. Application of environmental stress screening in such instances may require bypassing the normal stress isolation provisions or may dictate the need for screening at lower assembly levels which do not include the stress isolation design features.

Temperature cycle, constant temperature, random and swept-sine screening parameters are defined as follows (a computational sketch using the thermal cycle parameters follows the definitions):

a. Thermal cycle screen parameters

(1) Maximum temperature (Tmax) - The maximum temperature to which the screened item will be exposed. This should not exceed the lowest of the maximum ratings of all the parts and materials comprising the item. Note that non-operating temperature ratings for parts are higher than operating ratings.

(2) Minimum temperature (Tmin) - The minimum temperature to which the screened item will be exposed. This should not exceed the highest of the minimum ratings of all the parts and materials comprising the assembly. Note: Tmax and Tmin must be carefully selected, either through analytical means or a thermal survey.

(3) Range (R) - The range is the difference between the maximum and minimum applied external (chamber) temperature (Tmax - Tmin). Temperatures are expressed in °C. Care should be taken when Tmin is negative not to subtract incorrectly and arrive at an erroneously small computed temperature range.

(4) Temperature rate of change (Ṫ) - This parameter is the average rate of change of the temperature of the item to be screened as it transitions between Tmax and Tmin, and is given by

    Ṫ = [ (Tmax - Tmin)/t1 + (Tmax - Tmin)/t2 ] / 2

where t1 is the transition time from Tmin to Tmax in minutes and t2 is the transition time from Tmax to Tmin in minutes.

(5) Dwell - Maintaining the hardware temperature constant, once it has reached the maximum (or minimum) temperature, is referred to as dwell. The duration of the dwell is a function of differences in the thermal mass of the items being screened.

(6) Number of cycles - The number of transitions between temperature extremes (Tmax or Tmin) divided by two.

b. Constant temperature screen parameters

(1) Temperature delta (ΔT) - The absolute value of the difference between the hardware temperature and 25°C: ΔT = |T - 25°C|, where T is the hardware temperature.

(2) Duration - The time period over which the temperature is applied to the item being screened, in hours, after the hardware has reached thermal equilibrium.

c. Vibration screen parameters

(1) Grms level for random vibration - The rms value of the applied power spectral density observed by the hardware, including resonance and transmissibility effects.
(2) Spectrum shape for random vibration - The shape taken by the range of frequencies in the frequency spectrum.

(3) G-level for swept-sine vibration - The constant rms acceleration applied to the equipment being screened throughout the frequency range above 40 Hz. The g-level below 40 Hz may be less.

(4) Sweep rate for swept-sine vibration - The rate at which the "forcing" frequency is varied through a range of frequencies.

(5) Duration - The time period over which the vibration excitation is applied to the item being screened, in minutes.

(6) Axes of vibration - This can be a single axis or multiple axes, depending on the sensitivity of defects to particular axial inputs.
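The sketch referenced above computes the thermal cycle range and average rate of change defined in item a; the example temperatures and transition times are assumed and do not represent recommended screen settings.

    # Sketch: computing the thermal cycle screen parameters defined in item a above.
    def temperature_range(t_max, t_min):
        # Range R in °C; subtracting a negative Tmin correctly enlarges the range.
        return t_max - t_min

    def average_rate_of_change(t_max, t_min, t1_minutes, t2_minutes):
        # Average of the heat-up and cool-down rates, in °C per minute.
        up = (t_max - t_min) / t1_minutes      # Tmin -> Tmax transition
        down = (t_max - t_min) / t2_minutes    # Tmax -> Tmin transition
        return (up + down) / 2.0

    # Assumed example: -54 °C to +71 °C with 25 and 31 minute transitions.
    print("R =", temperature_range(71, -54), "°C")                          # 125
    print("rate = %.1f °C/min" % average_rate_of_change(71, -54, 25, 31))   # about 4.5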
4.4.4.1.2 Design Limits. The use of screen parameters which impose stresses exceeding the design limits of the product is not recommended. Effective screening programs can be developed without having to resort to stresses which exceed the design capability of the hardware. Criteria for judging how much the design limits can be safely exceeded, without causing damage to the product, are non-existent or at least arbitrary. However, to permit reasonably high ESS stress levels, it is important that the equipment be designed for ESS, and thus the ESS program and required stress levels should be determined concurrently during the design stage. Designing equipment for ESS means that the design should develop such that individual assemblies have similar response characteristics. This should be done so that no one subassembly will dictate the screening levels for the other subassemblies. Using the procedures contained in the handbook, the manufacturer can focus on those items in which defects are most likely to reside in the hardware and determine safe screening levels, within appropriate cost constraints, for precipitating them to failure. The procedures take into account the increased effects with increased factory stress level and also require a fatigue life study to ensure that useful operating life has not been impacted by the amount or level of ESS.

4.4.4.1.3 Guidelines for Initial Screen Selection and Placement. The development phase ESS program is intended to expose various defect types and causes and to obtain factory data to calculate and refine the planning estimates of DIN and SS that were based on handbook and industry data. Additional ESS beyond that intended for production may be required to improve the estimate accuracy. An initial screening regimen should be selected for experimental use during the development phase in conjunction with the use of the handbook procedures. Table 4.4 is recommended as an aid in selecting and placing screens for a starting regimen. R&M 2000 ESS studies recommend the screen types, parameters and placements outlined in Table 4.5 as an initial regimen. The screens contained in Table 4.5 have high precipitation efficiency. After sufficient fallout has been observed, the screening regimen may be reduced. The R&M 2000 guidelines thus represent initial values for consideration during the development phase and can be reduced for production based on the planning and analysis procedures outlined in Procedures A and D.

4.4.4.2 Detection Efficiency. Detection efficiency is a measure of the ability to detect and remove patent defects. Detection efficiency includes factors representing fault coverage, the requirement for concurrent stress, the test duration, and the diagnostics and rework capabilities for removing the defect. Detection efficiency is expressed as the ratio of patent defects detected (and removed) by a defined test procedure to the total possible number of patent defects. While stress screens may be effective in precipitating a latent defect into a detectable failure, removal of the failed condition is dependent on the capability of the test procedures used to detect and localize the failure. Care should be taken to ensure that tests have detection efficiencies as high as is technically and economically achievable. The screens may otherwise precipitate defects to failure which go undetected by post-screen tests. Modern electronic equipment comprised of microprocessors, large memory and LSI devices may contain defects so subtle that only the most thorough of tests can detect them. High screening strengths at lower levels of assembly may not always be easily accomplished because of low detection efficiency. The difficulty in accurately simulating functional interfaces, or the inability to establish meaningful acceptance criteria, may make the development of tests with high detection efficiency at the assembly level difficult and costly. A certain percentage of defects may only be detectable at the unit/system level, when all or a majority of the system components are connected and operating as a system. Analysis and quantification of detection efficiencies should be an integral part of the planning for a screening program.

4.4.4.2.1 Determining Detection Efficiency. Detection efficiency is determined as the product of factors that represent the following considerations (a sketch illustrating the product of these factors follows this subsection):

(a) The probability of observing and detecting a patent defect. This includes the probability of detection and the probability of occurrence. Consideration must also be given to the extent that the tests and limits being used represent all application requirements for functional and parametric performance. The detection of intermittent and/or situation sensitive defects may also require extended test times and may be modeled using a Poisson distribution.

(b) The requirement for concurrent stress. Many of the latent flaws precipitated to failure by ESS can only be detected when stress is applied during the test.

(c) The probability of isolating and then removing the defect without creating an additional defect.

On some system procurements the probability of detection is a specified parameter for built-in-test (BIT), performance monitoring (PM) and fault location (FL) capability requirements. When the required BIT or PM/FL capability is used to verify performance of an item being screened, the actual values of fault coverage should be used in conjunction with the factors defined above and in Procedure C. On other system procurements, requirements to perform a failure modes and effects analysis (FMEA) are specified in the contract. In such cases, the FMEA should be used to estimate the fault coverage for a given test design. When FMEA or BIT fault detection requirements are not specified in the contract, estimates of fault coverage should be made based upon experience data. Appendix C provides values of fault coverage for various tests which may be applied with stress screens. The values in the table were derived by production and engineering test personnel from a large DOD electronic system manufacturer (Reference: RADC-TR-82-87).
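The sketch below illustrates the product form described in 4.4.4.2.1. The three factor values are assumed placeholders corresponding to considerations (a) through (c); they are not values from Procedure C or Appendix C.

    # Sketch: detection efficiency as the product of the three factors defined in
    # (a) through (c) above.  Factor values are assumed placeholders only.
    def detection_efficiency(p_observe, p_concurrent_stress, p_isolate_remove):
        return p_observe * p_concurrent_stress * p_isolate_remove

    de = detection_efficiency(p_observe=0.90,            # observing/detecting the patent defect
                              p_concurrent_stress=0.85,  # fraction detectable with stress applied
                              p_isolate_remove=0.95)     # isolation and rework without a new defect
    print("detection efficiency: %.3f" % de)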
Table 4.4: Guidelines for Initial Screen Selection and Placement

Screen selection. Temperature cycle, constant temperature, random vibration and swept-sine screens are rated at each level of assembly as E - Effective, M - Marginally Effective, or N - Not Effective, with the following notes:

    Note 1. Particularly if power is applied and performance is monitored at temperature extremes.
    Note 2. Effective where assemblies contain complex devices (RAMs, microprocessors, hybrids, etc.).
    Note 3. Effectiveness highly dependent on assembly structure; not effective for small stiff PWAs.

Screen placement.

    Assembly level
        Advantages: cost per flaw precipitated is lowest (unpowered screens); small size permits batch screening; low thermal mass allows high rates of temperature change; a temperature range greater than the operating range is allowable during screen.
        Disadvantages: test detection efficiency is relatively low; test equipment cost for powered screens is high.

    Unit level
        Advantages: higher test detection efficiency; assembly interconnections (e.g. wiring, backplane) are screened.
        Disadvantages: cost per flaw significantly higher than at assembly level; temperature range reduced from assembly level; difficult and costly to test at temperature extremes.

    System level
        Advantages: all potential sources of flaws are screened; unit interoperability flaws are detected; high test detection efficiency.
        Disadvantages: requires costly facilities; mass precludes use of effective vibration screens, or makes their use costly; cost per flaw is highest.

Table 4.5: R&M 2000 Environmental Stress Screening Initial Regimen

                                          Assemblies (Printed Wiring     Equipment or Unit
                                          Assemblies) (SRU)*             (LRU/LRM)*
    Temperature Range (Minimum)           -54°C to +85°C                 -54°C to +71°C   (See Note 1)
    Temperature Rate of Change
      (Minimum, chamber air temperature)  30°C/Minute                    5°C/Minute       (See Note 2)
    Temperature Dwell Duration            Until stabilization            Until stabilization (See Note 3)
    Temperature Cycles (Minimum)          10                             10
    Power On/Equipment Operating          (See Note 5)
    Equipment Monitoring                  (See Note 6)
    Electrical Testing After Screen       Yes, at ambient temperature

    Quasi-Random Vibration (See Note 7)
    Spectral Density                      -
    Frequency Limits                      100 - 1000 Hz
    Axes Stimulated (Serially or
      Concurrently)                       3
    Duration of Vibration (Minimum)       Stimulated serially: 10 minutes/axis
                                          Stimulated concurrently: 10 minutes
    Power On/Equipment Operation          (See Note 5)
    Equipment Monitoring                  (See Note 6)

    * SRU - Shop Replaceable Unit; LRU - Line Replaceable Unit; LRM - Line Replaceable Module.

4.4.4.2.2 Power-On Testing vs Power-Off. Application of power, exercising, and monitoring equipment performance continuously during the screen will greatly enhance detection efficiency. Subtle faults, such as contact intermittents or temperature sensitive parts, can only be detected with powered and monitored screens. With the increased complexity of modern electronics, fault sites may be confined to smaller areas and fault symptoms may appear only during certain tests or under a special set of external conditions. As a result, a greater incidence of "Cannot Duplicate" (CND), "No Fault Found" (NFF), "Retest OK" (RTOK) and similar intermittent or transient phenomena can occur. Patent defects which have been precipitated to failure by stress screens can be categorized into three general types:

a. Type 1 - Physical defects transformed from an inherent weakness to a hard failure by the stress screen.

b. Type 2 - Physical defects that manifest as failures only while under thermal or mechanical stress (e.g., an intermittent caused by a cold solder joint).
c. Type 3 - Functional defects that manifest as performance failures or anomalies only while under thermal or mechanical stress (e.g., timing problems).

Type 1 defects are readily detected by post-screen tests of sufficient thoroughness. Type 2 and Type 3 defects require thorough and continuously monitored tests so that they can be detected. Type 3 defects, which include problems such as timing, part parameter drift with temperature, or tolerance build-up, can only be detected with powered and monitored tests. Type 2 and Type 3 defects can comprise 50%, and as much as 80%, of the latent defects present in the hardware (Reference: RADC-TR-86-149).

Developing tests and test strategies for use with stress screens, and estimating their detection efficiency, is a vitally important activity in planning a stress screening program. The use of tests with high detection efficiency is of equal importance to using screens with high precipitation efficiency in structuring a screening program for production.

4.4.4.2.3 Pre/Post Screen Testing and Screening Strength. In order to experimentally determine screening strength, the following conditions are required:

a. The items subjected to stress screening must be tested thoroughly before the stress screen to assure that no detectable failures remain at the start of stress screening. When testing is not performed prior to stress screening, it is not known whether patent defects were present, which could have been detected without stress screening, or whether latent defects were precipitated by the stress screen.

b. The items subjected to stress screening must be powered and exercised. Performance must be continuously monitored to assure that stress-dependent defects (e.g. intermittents, temperature and timing sensitive faults) are detected.

c. The items subjected to screening must be tested using the same test(s) both before and after the stress screen to assure that the failures detected are a result of the stresses imposed.

d. Data must be collected on defect fallout after the stress screen (i.e., during subsequent stress screens, tests, or early field operation) to obtain an estimate of the number of defects which were initially present.

When such data are available, and assuming perfect tests, the screening strength can be determined from the observed fallout from the screen and the number of defects initially present, i.e.,

    Screening Strength = Fallout / Number of Initial Latent Defects

However, the total number of latent defects cannot be determined until extensive field data are available. We are thus compelled to use a modeling approach, where screening strength is based upon estimates derived from a combination of the actual screening program data, experiments, and the published literature. The precipitation efficiency models and values used in the handbook tables of Procedure C in Section 5 were developed using such an approach. The results and methodology used for those studies are contained in RADC-TR-82-87 and RADC-TR-86-149. Additional information is also provided in AFWAL-TR-80-3086 and ADR 14-04-73. As more experience data on stress screening are gathered, the screening strength estimates will be refined and improved.

4.4.4.2.4 Production Phase - Refining Estimates From Fallout Observation. The analysis methodology provided in Procedure D is based upon curve fitting actual data to determine the latent and patent defect components. Defects present before screening appear as the DPAT term and defects precipitated and detected by the screen appear as the DLAT term. This approach, however, requires a sufficient number of data points throughout the screen.
If changes take place during production, such as in an assembly or fabrication process, personnel or production flow, then the defect density (both latent and patent) is likely to change and affect the fallout observed during screening; this will be apparent using the monitoring and control procedures of Procedure E. Under long term production, process improvements and other corrective actions taken as a result of the screening process are likely to change the quantity and distribution of latent defects present in the hardware.

4.5 Monitoring and Control. Once a screening program is implemented during the production phase, the screen fallout data and the screening process must be monitored and controlled to assure that program objectives are achieved. For an effective monitoring and control program, the field reliability requirements should be directly related to goals and requirements for parts, processes, materials and assemblies for all factory integration and ESS test levels. The procedures for establishing these requirements and for monitor and control are provided in Procedures A and E respectively. Use of a Failure Reporting, Analysis, and Corrective Action System (FRACAS) should be an integral part of production phase monitoring and control tasks. The fallout from the screening process provides the necessary visibility regarding the sources of defects in the product and the manufacturing process. Finding defects, determining their root causes, and ensuring that the sources of the defects are eliminated from either the process or product is the basic mechanism by which process capability is improved.

Analyses of screen fallout data must be performed with specific objectives in mind. Well-defined monitoring, evaluation and control task objectives will ensure that the proper data are collected, classified and correctly analyzed to meet those objectives. The objectives of the monitoring, evaluation and control tasks are to establish assurance that remaining defect density and reliability goals are achieved through implementing improvements in manufacturing, screening and test process capability. Manufacturing process capability is improved through taking corrective actions which reduce the number of defects that are introduced into the product. Screening process capability is improved by increasing both the precipitation efficiency of screens (by ensuring that potential sites for defects in the product are being adequately stimulated) and the detection efficiency.

Another goal of monitoring and control is related to cost effectiveness. The initial screening program might have been based upon planning estimates which were overly pessimistic. Corrective actions might also have been taken during production to reduce the number of defects introduced into the product. In either case, if the screening program is continued as planned, more screening than is necessary results, which impacts both cost and schedule. Decisions must be made on how to reduce the screening regimen. In a sense, the goal of ESS and the monitoring and control tasks is to make the screening program unnecessary (except for that limiting value required for PRVT).

4.5.1 Data Collection. The importance of timely and accurate data collection to achieving screening program objectives cannot be overemphasized. The data elements listed below should be collected during the conduct of the screening program. Some of the data elements become available directly as observed events from the screening process.
Other data elements will become available only after analysis of the failures and failure data, or after a batch of items has been exposed to screening.

a. Identification of the items exposed to the screen/test, e.g., description, part number, revision, and serial number.
b. Number of like items exposed to the screen/test.
c. Number of like items passed/failed the screen/test.
d. Date of test.
e. Test station or equivalent.
f. Type and number of defects found, in conjunction with the number of items exposed, passed and failed (data elements b and c).
g. Description of the type of defect found (part, workmanship/process, design).
h. Identification of the part or interconnection site where the defect was found.
i. Identification of the assembly level or manufacturing process operation where the defect was introduced.
j. Screen conditions under which the defect was found (e.g., high temperature, vertical axis of vibration, etc.).
k. Time-to-failure relative to the start of the screen.
l. Failure analysis results which identify the root cause of the defect.
m. Corrective action taken to eliminate the cause of the defect from the product and/or process.

Data elements l and m may only be available if trends, as identified by the SPC monitoring and control methodology, warrant detailed root cause analysis and corrective action.

4.5.2 Failure Classification. In order to establish a basis for the analysis of the screening fallout data, the failures must be properly classified. The following classification scheme is recommended:

a. Part defect - A failure or malfunction which is attributable to a basic weakness or flaw in a part (diode, transistor, microcircuit, etc.). Subcategories may include electrical, electronic, and mechanical.

b. Manufacturing defect - A failure or malfunction attributable to workmanship or to the manufacturing process (cold solder joint, cracked etch, broken wire strands, etc.). Subcategories may include assembly, process, and handling.

c. Design failure - A failure or malfunction attributable to a design deficiency. Note that electrical or thermal overstress failures due to inadequate derating are design problems. Subcategories include hardware and software.

d. Externally induced failure - A failure attributable to external influences such as prime power disturbances, test equipment, instrumentation malfunctions or test personnel.

e. Dependent failure - A failure which is caused by the failure of another associated item which failed independently.

f. Unknown cause failure - An independent failure which requires repair and rework but which cannot be classified into any of the above categories. An intermittent failure that recurs infrequently would be an unknown cause. Subcategories include verified and not verified.

g. Unable to verify (UTV), retest OK (RTOK), and No Fault Found (NFF) classifications describe conditions where an anomaly during testing could not be reproduced.

4.5.3 Preliminary Analysis of Fallout Data. A preliminary analysis of the fallout data should be performed to ensure that failure causes are properly established and to categorize the failures so that more detailed analysis related to the ESS program objectives can be performed.

a. All failures traceable to part, board and interconnection defects, which are precipitated and detected by a screen/test, should be considered to be latent defects, provided that pre-screen testing was performed. These data should be used for monitor and control purposes.
b. A predominance of design problems discovered during production screening operations is a matter of serious concern. Every effort should be made to determine corrective actions for design problems very early in production. It does no good to speculate that the design problems should have been eliminated from the hardware during the development stage. Stress screening, on a 100% basis, is an expensive and time consuming method for finding design problems. If the fallout from screening indicates persistent evidence of design problems, methods other than 100% stress screening should be used. Reliability growth and Test-Analyze-And-Fix (TAAF) techniques are recommended.

c. Special attention should be given to unknown cause failures. Sufficient investigation should be made to establish that an intermittent condition does not exist. The number of failures classified as "unknown cause" should be kept to a minimum. Every effort should be made to correlate the failure circumstance data with other similar failure incidents, as well as to use failure analysis, so as to establish the cause of failure. The number of "unknown cause" and/or "unable to verify" classifications should be used in assessing the detection efficiency.

d. Analyses of induced failures should be performed to determine necessary corrective actions. The detailed analyses would typically be performed if the established goals and requirements are not being achieved, either for parts, materials and processes or for assemblies at the various ESS levels.

4.5.4 Analysis of Screen Fallout Data. The analysis of screening fallout data is directed toward evaluating the screening process so as to achieve screening program goals on remaining defect density, DREMAINING. Yield goals are achieved both by improving manufacturing process capability through corrective action and by improving the screening and test process capability when that is found to be needed. Manufacturing, screening and test process capability will determine the remaining defect density. The capability of these processes is measured and controlled by use of two important quantities, the incoming defect density (DIN) and the screening strength (SS). Neither of these quantities is directly observable as a result of the screening process. The only observable statistic is the fallout from the screen/test, from which inferences regarding DIN and SS must be drawn. The basic approach used in Procedure D of Section 5 is to obtain estimates of DIN and SS using the screen fallout data and to statistically compare the observed data against the planning estimates. Based upon the comparisons, corrective actions are determined to eliminate the source of the defect from the process and/or to change the screens so as to achieve stated objectives.

Two complementary procedures are presented in Procedures D and E for performing monitoring and analysis tasks. Procedure D uses curve fitting techniques, applied to the mathematical model, to estimate DIN and SS. Procedure E uses quality control charts (SPC and Pareto) for monitoring and control. The use of control charts for defect control is a standard technique. The control charts (SPC and Pareto) used in Procedure E are based upon the Poisson probability distribution, i.e.,

    P(x) = (D^x)(e^-D) / x!

where
    D    = defect density
    x    = number of defects in an item
    P(x) = probability of x defects in an item

The mean of the Poisson distribution is D and the standard deviation is the square root of D. The primary purpose of the control chart technique is to establish baselines against which the process can be monitored and by which out-of-control conditions can be identified. Because of varying conditions, for example improving defect density, the actual defect density D is determined using regression analysis. This value is then used to determine the expected statistical variation due to sample size, D ± n*sqrt(D/N), where n is the number of standard deviations, typically 3, and N is the sample or lot size. Defect density is calculated using the fallout data and compared against the control chart baselines. Part and workmanship (process) problems are rank ordered, with consideration for the expected defects based on complexity, etc., and analyses are performed and corrective actions taken to eliminate the source of the defects from the product. Procedure E of Section 5 contains the detailed methodology for implementing the control chart technique.
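The sketch below illustrates the Poisson-based control limits described above for one lot; the defect density, lot size and observed fallout are assumed example values, and the sketch does not reproduce the detailed methodology of Procedure E.

    # Sketch: Poisson-based control limits for defects per item, as described above.
    # Center line D is the (regression-estimated) defect density; limits are
    # D +/- n*sqrt(D/N) for a lot of N items, with n typically 3.
    import math

    def control_limits(d_center, lot_size, n_sigma=3):
        half_width = n_sigma * math.sqrt(d_center / lot_size)
        return max(0.0, d_center - half_width), d_center + half_width

    # Assumed example: expected 0.4 defects per item, lot of 50 items, 23 defects observed.
    lcl, ucl = control_limits(0.4, 50)
    observed = 23 / 50.0
    print("LCL=%.3f  UCL=%.3f  observed=%.3f" % (lcl, ucl, observed))
    print("out of control" if not (lcl <= observed <= ucl) else "in control")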
4.5.4.1 Use of the Mathematical Model to Evaluate Screening Results. Appendix A provides a description of the stress screening mathematical model. The factory fallout data (expressed as defects per system) can be curve fitted to the expression developed therein (for DREMOVED) so as to obtain estimates of the model parameters. Parameters which can be determined using this method are DIN, SS (comprising PE and DE terms), the constant failure rate (CFR) and SAF, a stress adjustment factor relating defect levels at field stress to factory stress.

4.5.4.2 Use of the Chance Defective Exponential (CDE) Model to Evaluate Screening Results. The defect distribution for both factory and field stress environments has been empirically determined to be represented by the following expression:

    DREMOVED = DE*[DPAT + DLAT*(1 - exp(-kt)) + CFR*t]

where DPAT represents the patent defects, DLAT represents the latent defects, t the stress duration (e.g. time, cycles, etc.), k the precipitation stress constant, CFR the constant failure rate, and DE the detection efficiency, which is 1 for the field.

The CDE model developed by Fertig and Murthy, discussed in a paper contained in the 1978 Annual R&M Symposium proceedings, provides a possible explanation for this observed relationship. Regardless of the true derivation, the empirical results have been found to be sufficiently accurate for the purposes of this handbook. Inaccuracies either in the modelling and/or the estimated parameters are initially addressed using design margins, and are addressed during the production phase through the use of actual factory and field data to refine the estimates. The observed fallout data can be fitted to the model to obtain estimates of the model parameters. The parameters of the model provide estimates of the incoming defect density (DIN), the screening strength (SS, PE, DE), the limiting failure rate of the equipment (CFR) and the stress adjustment factor (SAF).

Figure 4.3 is an extract from a study report which shows a histogram of the screen fallout from a 12 cycle, -54°C to +71°C temperature cycle screen. The fallout per cycle is used to obtain maximum likelihood estimates (MLE) for the parameters of the CDE model. As Figure 4.3 shows, the CDE model parameters estimated by the MLE procedure are: an incoming defect density (DIN) equal to .1542 defects per item and a defect failure rate (k) equal to .1485 failures per hour (which corresponds to a screening strength of .95), with a value of .0032 for the limiting failure rate (CFR).
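The sketch below evaluates the CDE expression and recovers two of its parameters from cumulative fallout data by a coarse least-squares search. The "observed" values are synthetic illustrations, not the Figure 4.3 study data, and the grid search is not the MLE fitting procedure of Procedure D; DE, DPAT and CFR are held fixed to keep the search two-dimensional.

    # Sketch: the CDE expression for cumulative removed defects, with a coarse
    # least-squares search for (DLAT, k).  Data and search ranges are synthetic.
    import math

    def d_removed(t, de, dpat, dlat, k, cfr):
        return de * (dpat + dlat * (1.0 - math.exp(-k * t)) + cfr * t)

    # Assumed cumulative fallout per item after each of 12 cycles (synthetic).
    cycles = list(range(1, 13))
    observed = [0.04, 0.07, 0.095, 0.11, 0.12, 0.128,
                0.133, 0.137, 0.139, 0.141, 0.143, 0.144]

    def sse(dlat, k):
        # Sum of squared errors with DE=1.0, DPAT=0.01 and CFR=0 held fixed.
        return sum((d_removed(t, 1.0, 0.01, dlat, k, 0.0) - y) ** 2
                   for t, y in zip(cycles, observed))

    best = min(((sse(dl / 100.0, kk / 100.0), dl / 100.0, kk / 100.0)
                for dl in range(5, 31) for kk in range(5, 101)),
               key=lambda r: r[0])
    print("fitted DLAT=%.2f  k=%.2f per cycle  (SSE=%.5f)" % (best[1], best[2], best[0]))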
4.5.4.3 Product Reliability Verification Test (PRVT). The use of a PRVT segment as part of an ESS program is intended to provide confidence that field reliability will be achieved and to help identify out-of-control conditions that could otherwise be missed. As defect density is improved, ESS can be reduced to optimize cost without impacting field reliability. However, ESS cannot be completely eliminated, since some portion is required to allow reliability to be assessed. PRVT is that portion of ESS retained for this purpose.

Assessments of reliability should be made on the basis of the performance of the collective population. The PRVT segment should be implemented on a first pass yield basis (first pass yield being defined as the number of systems completing the PRVT segment with no failures divided by the total number of systems submitted the first time). If the first pass yield requirements are not achieved, corrective actions must be taken that address the entire population. Appendix B provides the mathematical derivation of the PRVT methods contained in the handbook. Procedure F in Section 5 contains the detailed procedures for incorporating the PRVT segment.

Note that a failure free requirement for any part of ESS or PRVT is not recommended. If requirements (e.g., PRVT yield) are not being achieved and defects are randomly distributed, then the overall defect density is too high and action must be taken that affects the entire population. Requiring one particular piece of equipment to pass a sequence of tests failure free does not substantially improve the reliability of the population. The failed item, however, must undergo sufficient confidence testing subsequent to rework to ensure that the fault has been eliminated.
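As a small illustration of the first pass yield definition above, the sketch below computes the yield for an assumed set of counts and compares it to an assumed requirement; neither value is taken from Procedure F or Appendix B.

    # Sketch: PRVT first pass yield as defined above -- systems completing the
    # PRVT segment with no failures divided by total systems submitted the first time.
    def first_pass_yield(passed_no_failure, submitted_first_time):
        return passed_no_failure / submitted_first_time

    # Assumed example counts and an assumed yield requirement, for illustration.
    fpy = first_pass_yield(passed_no_failure=42, submitted_first_time=50)
    requirement = 0.80
    print("first pass yield: %.2f" % fpy)
    print("meets requirement" if fpy >= requirement else
          "corrective action addressing the entire population is required")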
4.6 Costs of ESS vs Productivity Improvement. The costs of conducting a screening program during the production phase can be high. To a large extent, the costs can be offset by the increased productivity which results through proper screen selection and placement. Screening at the lowest possible level of assembly will almost always be the least costly alternative in terms of rework costs. The time and effort required to test, troubleshoot and repair items increases by at least an order of magnitude at each subsequent level of assembly. Significant cost savings or avoidance can accrue to the manufacturer by analyzing the cost benefits of various screen selection and placement alternatives and by striving to find defects at the lowest possible level of assembly. The fixed and recurring costs to screen, instrument, and test the hardware at lower assembly levels (especially with power applied) can possibly negate any benefit from lower rework costs. It is imperative that the optimum ESS program be determined for each equipment type. Cost savings to the Government will result through improved field reliability and corresponding reductions in field repair costs. The benefits of a properly conducted ESS program to the Government go beyond field repair costs alone. Improved reliability during early life will also reduce over-buying of spares, since estimates of required spare quantities are based upon early life field performance. The opportunity for introducing new defect sources into the hardware during field maintenance and handling is also reduced.

There should, however, be controls and constraints on the cost of conducting a screening program. Situations can arise where the costs of conducting a screening program far outweigh any benefits which may be derived. For example, for low complexity items the number of screenable defects which are likely to be present in the hardware may be relatively small. Conducting a full-scale screening program in such cases can result in very high costs per defect eliminated. Costs of $10K to $15K per defect eliminated may be justified for equipment used in critical missions with very high reliability requirements. On the other hand, such costs may be difficult to justify if the equipment is used in non-critical missions and if the costs of field maintenance are not severely affected by not screening. Each case where a stress screening program is under consideration must be judged individually as to the cost benefits to be derived from stress screening, and optimized on a combined user-producer cost basis. Procedure A in Section 5 is used to determine the cost effectiveness of ESS programs.

4.6.1 Facilities and Costs. The facilities that the manufacturer has available for screening, instrumenting and testing the product affect screen selection and placement. A manufacturer may not have random vibration facilities or automatic test systems which can be used for the stress screening program. In such cases, the manufacturer may decide to impose less severe stresses for a longer duration or to use less expensive alternatives such as those described in NAVMAT P-9492. The costs to purchase expensive screening or test equipment and perform screens at a given level of assembly may not be warranted in terms of the number of defects which are likely to be found. The screening and test facilities which the manufacturer has available must be addressed in preparing the screening program plan and in the screen selection and placement process. Costs versus the benefits to be derived from screening should be addressed.
