NDT.net - Sep 2001, Vol. 06 No. 09
The objective of the modular approach for measuring the reliability of NDE is to provide a validated testing system that fulfills the requirements of the client in the most efficient and cost-effective manner. This capability is especially important where expensive statistical tests are not possible. In developing this concept we divide a system into appropriate sub-modules and evaluate the discrete reliability of each. The knowledge gained within each of the modules allows an optimization of the total system. The reliability of the total system is then determined by joining the individual reliabilities of the modules, including their possible correlation. To facilitate an understanding of the mechanics, the background of reliability analysis is briefly introduced. We then consider how to properly characterize the reliability of an NDE component. Finally, our first, very simple attempt to join the modules is described.
The reliability measurement of a system should describe the degree to which the system is capable of fulfilling the required task. In the case of NDE this relates to the detection of flaws or material conditions. The measurement is accomplished by comparing the real status of a component with a statistically significant set of test data. Under field conditions, and with the requirement for realistic test specimens, this process is often very expensive or even impossible. To address the need for an efficient and effective approach, we propose to cut the NDE system into main modules and to assess the reliability of each module in a manner appropriate to the nature of each module's information. As a tool for joining the individual reliability contributions into a system reliability value, we propose the mathematics of reliability analysis of systems, e.g. fault tree analysis.
The NDE system often consists of a signal chain: a ray/wave source, the interaction with the material and the inherent defects of the component under investigation, and the detector response, which is evaluated by a human being or an automated evaluation system.
The modular approach facilitates direct integration of the American-European Workshop reliability formula. The expression defines a total reliability R, which consists of: an intrinsic capability IC, describing the physics and basic capability of the devices; factors of industrial application AP, such as restricted access in the field; and finally the human factors HF. In the first example we compose the POD of radiographic crack testing from a modeling calculation for IC and statistical test results for AP and HF. In the second example we decompose the total POD of automated ultrasonic testing to investigate the influence of the AP factor "flaw shape". The third example deals with radiographic wall thickness measurement of insulated tubes. The wall thickness is determined by evaluating the optical density (shadow technique) on radiographic film in two ways: manual evaluation of the radiographic films by human inspectors, and digitized, automated evaluation by image processing software. The parameters of the main influencing modules are optimized in a way that exploits both the results of modeling calculations and sets of experiments. The integral system is assessed by statistical evaluation of measurement deviations.
In reliability analysis [1, 2], systems of components with known failure probabilities are investigated. The system failure is modeled as a logical function of the component failures. The probability of system failure can then be calculated with the help of elementary probability theory. For this task, additional assumptions about independence, or knowledge of the failure probabilities of certain subsystems, are necessary.
Illustration
System S composed of components K_{1} to K_{5}:
Fig 1: Illustration of a system of components.
The diagram in figure 1 means that system S does not fail if at least one of the three alternative paths

K_{1}, K_{2}, K_{5}   or   K_{1}, K_{3}, K_{5}   or   K_{4}, K_{5}

has no failing component. In set notation,

S = ( ( K_{1} ∩ ( K_{2} ∪ K_{3} ) ) ∪ K_{4} ) ∩ K_{5}

where '∪' means union (logical 'or') and '∩' means intersection (logical 'and'). K_{i} is used as a symbol for the statement 'the component K_{i} does not fail'. (In probability theory, sets called 'events' are considered instead of statements. An event consists of all situations for which the statement is true, so that the two notions are equivalent.) Rewriting the expression for S according to the rules of set algebra in the form

S = ( K_{1} ∩ K_{2} ∩ K_{5} ) ∪ ( K_{1} ∩ K_{3} ∩ K_{5} ) ∪ ( K_{4} ∩ K_{5} )

the three alternative paths appear.
Failure of the system is given by the set-theoretic complement (or, with the notion of statements in mind, by logical negation) of the preceding expression:
S^{c} = ( K_{1} ∩ K_{2} ∩ K_{5} )^{c} ∩ ( K_{1} ∩ K_{3} ∩ K_{5} )^{c} ∩ ( K_{4} ∩ K_{5} )^{c}
      = ( K_{1}^{c} ∪ K_{2}^{c} ∪ K_{5}^{c} ) ∩ ( K_{1}^{c} ∪ K_{3}^{c} ∪ K_{5}^{c} ) ∩ ( K_{4}^{c} ∪ K_{5}^{c} )
i.e., S fails if all three paths are 'blocked' by at least one failing component. (K_{i}^{c} means that K_{i} fails.) The probability of the event K_{i} ('component K_{i} does not fail') is written as P(K_{i}) = p_{i}, so that the failure probability is P(K_{i}^{c}) = 1 - p_{i}. Assuming that all components fail independently of each other (e.g. component K_{3} does not work more reliably than usual just because K_{2} fails), the probability for the system S to work correctly can be calculated:
P(S) = P( ( ( K_{1} ∩ ( K_{2} ∪ K_{3} ) ) ∪ K_{4} ) ∩ K_{5} )
     = P( ( K_{1} ∩ ( K_{2} ∪ K_{3} ) ) ∪ K_{4} ) P( K_{5} )
     = ( P( K_{1} ∩ ( K_{2} ∪ K_{3} ) ) + P( K_{4} ) - P( K_{1} ∩ ( K_{2} ∪ K_{3} ) ∩ K_{4} ) ) p_{5}
     = ( P( K_{1} ) P( K_{2} ∪ K_{3} ) + p_{4} - P( K_{1} ) P( K_{2} ∪ K_{3} ) P( K_{4} ) ) p_{5}
     = ( p_{1} ( P( K_{2} ) + P( K_{3} ) - P( K_{2} ∩ K_{3} ) ) ( 1 - p_{4} ) + p_{4} ) p_{5}
     = ( p_{1} ( p_{2} + p_{3} - p_{2} p_{3} ) ( 1 - p_{4} ) + p_{4} ) p_{5}
Hence the failure probability for S is P(S^{c}) = 1 - P(S). If certain subsystems S_{i}, S_{j} of S do not fail independently of each other, the equality P( S_{i} ∩ S_{j} ) = P( S_{i} ) P( S_{j} ) used in the calculation above no longer holds. The calculation then becomes more complex, because additional knowledge (or assumptions) about 'combined probabilities' is necessary.
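The derivation above can be checked numerically. The following sketch, with illustrative component probabilities p_1 to p_5 (not values from the paper), enumerates all 2^5 component states of the system in Fig. 1 and compares the exact P(S) with the closed-form expression derived above:

```python
from itertools import product

# Illustrative component survival probabilities (not values from the paper)
p = {1: 0.95, 2: 0.90, 3: 0.85, 4: 0.80, 5: 0.99}

def system_works(up):
    # Structure of Fig. 1: S = ((K1 and (K2 or K3)) or K4) and K5
    return ((up[1] and (up[2] or up[3])) or up[4]) and up[5]

# Exact P(S) by enumerating all 2^5 component up/down states
p_s = 0.0
for states in product([True, False], repeat=5):
    up = dict(zip(range(1, 6), states))
    prob = 1.0
    for i in range(1, 6):
        prob *= p[i] if up[i] else 1 - p[i]
    if system_works(up):
        p_s += prob

# Closed-form result from the derivation in the text
closed = (p[1] * (p[2] + p[3] - p[2] * p[3]) * (1 - p[4]) + p[4]) * p[5]
assert abs(p_s - closed) < 1e-12
print(f"P(S) = {p_s:.6f}")
```

Brute-force enumeration is feasible only for small systems; fault tree analysis provides the systematic route for larger ones.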
Key techniques for this method of reliability assessment, which was developed during nuclear power plant risk studies in the 1960s, are 'fault tree analysis' and 'failure mode and effects analysis' (FMEA).
In non-destructive testing the 'probability of detection' (POD) has been proposed as one characteristic measure of reliability. The meaning of this term has changed over the years. In the field of radiolocation (SONAR), where the term has its origins, 'detection' is used in the same sense as 'indication' or 'alarm' in NDE. In radiolocation even a false alarm is a detection, and POD is the probability of detection in a standard environment in which an object to be indicated exists in exactly one half of the tested situations. In NDE, however, 'detection' means the indication of an existing flaw, so that POD is the (conditional) probability of a true positive indication. This has the advantage that no standard environment is necessary, and the disadvantage that the POD contains no information about the chance of false alarms. In any case it is assumed that indications of more than one flaw are independent, i.e. the probability that both of two given flaws in a specimen are detected is POD^{2}, and the probability that at least one of them is missed is 1 - POD^{2}.

POD as a characteristic quantity of an NDE method can be estimated by applying the method to a representative sample of specimens, each containing one or more flaws. The ratio of detected flaws to existing flaws is an estimate of the POD. For NDT methods that are built from certain components, the question may arise whether the POD can be calculated from 'component probabilities' rather than being estimated from experiments. The intention of the authors is to make a first proposal for a solution to this problem, along the lines of fault tree analysis as an application of elementary probability as discussed above. For this purpose it is necessary to decompose the event 'flaw detected' into a logical combination of events, where each event refers to a single component only.
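The estimation of POD from a hit/miss experiment, and the two-flaw probabilities under the independence assumption, can be sketched as follows; the flaw record is invented for the example:

```python
# Invented hit/miss record: 1 = flaw detected, 0 = flaw missed
hits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]

# The ratio of detected flaws to existing flaws estimates the POD
pod = sum(hits) / len(hits)

# Under the text's independence assumption, for two flaws in one specimen:
p_both_detected = pod ** 2            # both flaws detected
p_miss_at_least_one = 1 - pod ** 2    # at least one flaw missed

print(f"POD = {pod:.2f}, both detected = {p_both_detected:.2f}, "
      f"at least one missed = {p_miss_at_least_one:.2f}")
```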
Integration of subsystems that contribute uncertainties in the form of standard deviations (measuring on a continuous scale) rather than in the form of probabilities (testing or classification) is also possible, but more complex.
We distinguish here between modules contributing to pure measurement systems (quantitative tests) and those contributing to detection systems (qualitative tests).
Fig 2: Illustration of a set-up for a test in terms of a measurement.
(1)
An illustration of this task is shown in figure 3. The specification of the task should be in accordance with the requirements of the client and should relate to the allowed risk for the component and the economics of false repairs.
Fig 3: Typical configuration for a detection task.
The proposed characterization of the reliability will be in terms of POD and PFA as follows:
These values of POD and PFA depend on the detection and recording thresholds, the target flaw size, and also the component noise levels.
We consider the NDE reliability formula formulated at the American-European workshop as a general expression:
When we wish to apply the above conceptual definition, we must first ask: "What is the appropriate mathematical formulation for combining the different contributions according to the reliability analysis described above?" Again we have to distinguish between quantitative and qualitative detection tasks.
(2)
the combination consists of a simple multiplication of probabilities.
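The multiplicative combination can be sketched as follows, assuming independent modules and hypothetical logistic POD curves; the curve shapes and parameters are illustrative, not taken from the paper:

```python
import math

# Hypothetical logistic POD curves per module, as functions of flaw size a (mm)
def pod_ic(a):   # intrinsic capability (physics / modeling)
    return 1 / (1 + math.exp(-(a - 0.5) / 0.10))

def pod_ap(a):   # application factors (field conditions)
    return 1 / (1 + math.exp(-(a - 0.8) / 0.20))

def pod_hf(a):   # human factors
    return 1 / (1 + math.exp(-(a - 0.6) / 0.15))

def pod_total(a):
    # Independent modules combine by simple multiplication of probabilities
    return pod_ic(a) * pod_ap(a) * pod_hf(a)

for a in (0.5, 1.0, 2.0):
    print(f"a = {a:.1f} mm -> total POD = {pod_total(a):.3f}")
```

Because each factor is at most 1, the combined POD can never exceed the POD of the weakest module.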
For more complicated cases of interrelationship between the factors
(3)
where F is a mathematical function to be determined according to the reliability theory of systems [1, 2], which works similarly to the fault tree analysis described above.
Because we treat realistic industrial examples here, we put special focus on AP. Following Matthew J. Golis, "What is an Application Parameter?" [3], we can define four categories of application parameters:
These categories cover the different physical locations at which the application parameters act. The following differentiation according to types considers the recognition point of view:
First Example
The NDE task consists of the detection of thermally induced cracks in welds of ferritic tubes (nuclear power plants). As the metric for each module we use the POD as a function of crack depth. In the IC function we consider the influence of the physics of the X-ray penetration and the creation of the crack image, with a minimum contrast of 0.01 O.D., for notch-like idealized cracks. The probability for IC is determined by a modeling calculation and shown in the left part of Fig.5. As AP+HF we investigated the capability of human inspectors to detect the images of naturally shaped cracks on the radiographic film. Figure 4 shows the concept of sharing this task, with the corresponding probability factors. The probability belonging to AP+HF was determined experimentally by counting the hit/miss rates. Because all the experimental cracks had a depth greater than 4% of the wall thickness, and POD(IC) reaches 1 below 4%, the whole POD is represented by POD(AP + HF).
Fig 4: Assignment of the modules for example 1.
Fig 5: Results for the PODs for the modules of example 1.
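The argument that the total POD reduces to POD(AP + HF) once POD(IC) saturates can be sketched as follows; the curve shapes below are invented stand-ins, not the measured data of Fig. 5:

```python
# Invented stand-in curves; depth is crack depth in % of wall thickness
def pod_ic(depth):
    # Modeled intrinsic capability: saturates at 1 from 4% wall thickness on
    return 1.0 if depth >= 4.0 else depth / 4.0

def pod_ap_hf(depth):
    # Hit/miss based POD of human film evaluation (invented curve)
    return min(1.0, 0.2 * depth)

def pod_total(depth):
    # Total POD as the product of the module PODs
    return pod_ic(depth) * pod_ap_hf(depth)

# All experimental cracks were deeper than 4% wall thickness, where
# POD(IC) = 1, so the total POD equals POD(AP + HF) there:
assert pod_total(4.5) == pod_ap_hf(4.5)
```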
Second Example
The task here is the detection of porosity in electron beam welds of Ti-alloy aircraft engine parts using automated ultrasonic testing with focused probes. The application factor is the "naturally occurring flaw shape" of the porosity, per the category 3 definition. The detailed approach is described in [4]. Fig.6 shows the scheme of the investigation.
Fig 6: Scheme of modularization for example 2.
Fig 7: POD for sphere bottom bore holes.
Fig 8: POD for the module "flaw shape".
Fig 9: POD for naturally occurring pore holes.
Sphere bottom bore holes were used as the ideal flaw. The POD as a function of sphere diameter is presented in Fig.7: we see a fast-rising POD, whereas the POD for the naturally occurring pores (Fig.9) rises much more slowly. The shape of the pores seems to influence the POD strongly, something that had not been physically clear up to now. We can now formally define the application factor influence by dividing the latter POD by the former, which is shown in Fig.8. As a concept, this factor could be used to scale other sphere bottom hole values to realistic ones.
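The division of the two POD curves can be sketched as follows; the values are invented stand-ins for the measured curves of Figs. 7 and 9:

```python
# Invented POD values on a common flaw-size grid (mm); stand-ins for the
# measured curves of Figs. 7 and 9
sizes       = [0.2, 0.4, 0.6, 0.8, 1.0]
pod_ideal   = [0.30, 0.80, 0.97, 0.99, 1.00]  # sphere bottom bore holes
pod_natural = [0.05, 0.25, 0.55, 0.75, 0.90]  # naturally occurring pores

# Application factor "flaw shape": POD(natural) / POD(ideal) at each size
flaw_shape = [n / i for n, i in zip(pod_natural, pod_ideal)]

# The factor rescales bore-hole PODs toward realistic values, as proposed
pod_rescaled = [f * i for f, i in zip(flaw_shape, pod_ideal)]
assert all(abs(r - n) < 1e-12 for r, n in zip(pod_rescaled, pod_natural))
```

The division is only defined where the ideal-flaw POD is nonzero, so in practice the factor is usable above the steep rise of the bore-hole curve.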
Third Example
The NDE task consists of the wall thickness determination of insulated tubes using the radiographic shadow technique and automated evaluation of the digitized radiographic image.
Fig 12: Results for the uncertainty in wall thickness measurement for different insulation radii.
The application factor considered here is the variation in the insulation radius and corresponds to category 4, the geometrical shape of the component. The metric is the uncertainty in the wall thickness. Fig.10 illustrates the different parameters. Fig.11 shows the geometrical set-up and the mathematical formula used for the wall thickness determination. The error propagation law was applied to that formula to investigate the influence of the application factor "insulation radius variation" on the uncertainty in wall thickness measurement. Fig.12 shows the results for the uncertainty in wall thickness measurement as a two-dimensional function of the tube radius and the wall thickness for selected combinations of parameters, especially those with different insulation radii. Fig.13 shows the scheme of modules used for the optimization.
Fig 10: Scheme of modules for example 3.
Fig 11: Wall thickness formula.
Fig 13: Scheme of modules for optimization.
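The application of the error propagation law can be sketched as follows. The paper's exact projection formula is in Fig.11 and is not reproduced here, so a simplified stand-in wall-thickness formula is used; the variable names and tolerances are assumptions for illustration:

```python
import math

# Stand-in formula: w = (d_outer - d_inner) / (2 * M), with projected
# outer/inner shadow diameters and magnification M (assumed names)
def wall_thickness(d_outer, d_inner, m):
    return (d_outer - d_inner) / (2 * m)

def wall_thickness_sigma(d_outer, d_inner, m, s_outer, s_inner, s_m):
    # Gaussian error propagation: sigma_w^2 = sum_i (dw/dx_i)^2 * sigma_i^2
    dw_do = 1 / (2 * m)
    dw_di = -1 / (2 * m)
    dw_dm = -(d_outer - d_inner) / (2 * m ** 2)
    return math.sqrt((dw_do * s_outer) ** 2 +
                     (dw_di * s_inner) ** 2 +
                     (dw_dm * s_m) ** 2)

w = wall_thickness(60.0, 50.0, 1.2)
s = wall_thickness_sigma(60.0, 50.0, 1.2, 0.1, 0.1, 0.02)
print(f"w = {w:.3f} mm +/- {s:.3f} mm")
```

Varying the input tolerances, e.g. the term standing in for the insulation radius variation, shows directly how much each application parameter contributes to the total uncertainty.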
The results of different model calculations and physical reasoning were then utilized to optimize the whole testing system:
The optimization proceeds in several cycles. One intermediate optimization result is shown below in terms of the measurement deviations of the optimized system, using 16-bit film digitization and an automated, computerized wall thickness evaluation system. This result is compared with the original visual evaluation by an experienced human inspector. We can recognize the beginning of an improvement.
Fig 14: Visual evaluation of the film images on a light box by an experienced human inspector.
Fig 15: Automated computerized evaluation of 16-bit digitized film images.
The modular approach for evaluating the reliability of NDE systems reduces the amount of data collection required in physical demonstrations. This is accomplished by replacing some of this information with other knowledge about the system, e.g. modeling calculations of the intrinsic capability. The example studies demonstrate the feasibility of this approach for practical application. In addition, the modular approach gives the NDE reliability specialist the opportunity to study principal influences, which enables optimization of the NDE system.
The authors wish to express appreciation to Lloyd Schaefer of Siemens Power Generation for the helpful discussion and English editing of the paper.
© NDT.net - info@ndt.net