|NDT.net - December 2002, Vol. 7 No.12|
During the second European-American Workshop on NDE Reliability, held in September 1999 in Boulder, the term NDE reliability was defined as the degree to which an NDT system is capable of achieving its purpose regarding detection, characterization and false calls. The most common, but also most expensive, way to determine this degree of capability in, e.g., defect detection is a performance demonstration using realistic test samples, counting the correct detections and false calls. For a more efficient reliability evaluation we propose to decompose the system into main modules, e.g. according to the reliability formula set up at the first workshop: into f(IC), a function of the intrinsic capability IC, determined by the physics and the technique of the NDE method and representing an ideal upper bound of the reliability; g(AP), a function of the industrial application factor AP, such as surface state or limited access to a component, which in general diminishes the ideal capability; and h(HF), a function of the human factor HF, which in general also diminishes the ideal capability. The workshop proposed a rather plain mathematical shape for the formula, R = f(IC) · g(AP) · h(HF), which should be considered merely as a philosophical expression and not as an exact mathematical formula to be applied to the evaluation of, e.g., POD data. We propose to decompose the NDE system into modules in terms of functions of IC, AP and HF if appropriate, or additional factors if necessary, and then to analyze the mutual relationships of the terms via fault tree analysis. Finally, the total reliability of the system is composed from the reliabilities of the subsystems via the rules of statistical systems theory.
First trials of this approach to data analysis are presented via examples from NDE systems in the aerospace industry.
The reliability measurement of a system should describe the degree to which the system is capable of fulfilling the required task; in the case of NDE this relates to the detection of flaws or material conditions. This measurement is accomplished by comparing the real status of a component with a statistically significant set of test data. Under field conditions, and with the requirement for realistic test specimens, this process is often very expensive or even impossible. To address the need for an efficient and effective approach, we propose to decompose the NDE system into main modules and to assess the reliability of each module in a manner appropriate to the nature of that module's information. As a tool for joining the individual reliability contributions into a system reliability value, we propose the mathematics of system reliability analysis, e.g. fault tree analysis.
An NDE system often consists of a signal chain: a ray/wave source, the interaction with the material and the inherent defects of the component under investigation, and the detector response, which is evaluated by a human being or an automated evaluation system.
The modular approach facilitates direct integration of the American-European Workshop reliability formula. The expression defines a total reliability R, which consists of an intrinsic capability IC, describing the physics and basic capability of the devices; factors of industrial application AP, such as restricted access in the field; and finally the human factors HF. In the first example we decompose the total POD of automated ultrasonic testing, investigating the influence of the AP factor flaw shape. The second example deals with mechanized radiographic testing using multi-angle incidence and a reconstruction, where variations in AP and HF are considered. The third example shows consideration of all three factors IC, AP and HF for eddy current testing of gas turbine engine parts, and their stepwise improvement leading to a better POD result.
In reliability analysis [1, 2], systems of components are investigated where the components have known failure probabilities. The system failure is modeled as a logical function of the component failures. The probability of system failure can then be calculated with the help of elementary probability theory.
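As a minimal numerical illustration of this calculus, the following sketch combines component failure probabilities through AND/OR gates, as in an elementary fault tree. All numbers and the two-detector layout are hypothetical, and statistical independence of the component failures is assumed throughout:

```python
# Elementary fault-tree arithmetic (hypothetical modules, independence assumed).

def p_or(*probs):
    """OR gate: the subsystem fails if ANY input component fails."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)   # probability that no input fails
    return 1.0 - p_none

def p_and(*probs):
    """AND gate: the subsystem fails only if ALL input components fail."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Example: two redundant detectors (both must fail) feeding one evaluator
# (whose failure alone already fails the system).
p_detector = 0.05    # assumed failure probability of each detector
p_evaluator = 0.02   # assumed failure probability of the evaluation stage
p_system = p_or(p_and(p_detector, p_detector), p_evaluator)
print(p_system)
```

The same two gate functions suffice to evaluate any fault tree built from independent basic events, by applying them bottom-up through the tree.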
In non-destructive testing, 'probability of detection' (POD) has been proposed as one characteristic measure of reliability. The meaning of this term has changed over the years. In the field of radiolocation (SONAR), where the term has its origins, 'detection' is used in the same sense as 'indication' or 'alarm' in NDE. In radiolocation even a false alarm is a detection, and POD is the probability of detection in a standard environment, in which an object to be indicated exists in exactly one half of the tested situations. In NDE, however, 'detection' means the indication of an existing flaw, so that POD is the (conditional) probability of a true positive indication. This has the advantage that no standard environment is necessary, and the disadvantage that the POD contains no information about the chance of false alarms. In any case it is assumed that indications of more than one flaw are independent, i.e. the probability that both of two given flaws in a specimen are detected is POD², and the probability of missing at least one of them is 1 − POD². POD as a characteristic quantity of an NDE method can be estimated by applying the method to a representative sample of specimens, each containing one or more flaws; the ratio of detected flaws to existing flaws is an estimate of the POD. When using NDT methods that are built from distinct components, the question may become relevant whether the POD can be calculated from certain 'component probabilities' rather than being estimated from experiments. The intention of the authors is to make a first proposal for a solution of this problem, along the lines of fault tree analysis as an application of elementary probability as discussed above. For this purpose it is necessary to decompose the event 'flaw detected' into a logical combination of events, where each event refers to a single component only.
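The estimation rule and the independence assumption described here can be written out in a few lines; the trial counts below are purely illustrative:

```python
# POD estimate from a flawed-specimen trial (illustrative numbers only).
existing_flaws = 120   # flaws present in the representative specimen set
detected_flaws = 102   # flaws correctly indicated by the method
pod = detected_flaws / existing_flaws
print(f"POD estimate: {pod:.3f}")

# Consequences of the independence assumption for two flaws in one specimen:
p_both_detected = pod ** 2          # both flaws indicated
p_miss_at_least_one = 1.0 - pod ** 2  # at least one flaw missed
print(p_both_detected, p_miss_at_least_one)
```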
Integration of subsystems that contribute uncertainties in the form of standard deviations (measuring on a continuous scale), rather than in the form of probabilities (testing or classification), is also possible but more complex.
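For the simplest such case, uncorrelated module uncertainties combine by root-sum-of-squares, as in the GUM's law of propagation for uncorrelated inputs with unit sensitivity coefficients. The module values below are hypothetical:

```python
import math

# Hypothetical standard uncertainties contributed by three modules (in mm),
# assumed uncorrelated with unit sensitivity coefficients.
u_modules = [0.10, 0.05, 0.02]

# Root-sum-of-squares combination of the module standard deviations.
u_combined = math.sqrt(sum(u * u for u in u_modules))
print(f"combined standard uncertainty: {u_combined:.4f} mm")
```

Correlated modules, or modules entering through a non-linear model, require the full GUM treatment with sensitivity coefficients and covariances.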
We distinguish here between modules contributing to pure measurement systems (Quantitative tests) and those contributing to detection systems (Qualitative tests).
|Fig 1: Illustration of a set up for a test in terms of a measurement.|
The task of the test is to measure a physical quantity, such as the wall thickness depicted in figure 1. The new ISO/IEC 17025 standard on the competence of testing laboratories proposes to use the uncertainty of measurement to characterize the reliability of the testing system. The general guidelines for this evaluation can be found in the GUM (the ISO Guide to the Expression of Uncertainty in Measurement). For simple cases this is just the standard deviation determined from a series of experiments.
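For such a simple case, a Type A evaluation in the sense of the GUM reduces to the experimental standard deviation of repeated readings; the wall-thickness values below are invented for illustration:

```python
import statistics

# Hypothetical repeated wall-thickness readings (mm) on the same location.
readings = [4.98, 5.02, 5.01, 4.99, 5.00, 5.03, 4.97, 5.00]

n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)   # experimental standard deviation (n-1 in the denominator)
u = s / math_sqrt if False else s / n ** 0.5  # standard uncertainty of the mean
print(f"mean = {mean:.3f} mm, s = {s:.4f} mm, u(mean) = {u:.4f} mm")
```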
This task is often more difficult, and is often a combination of two parts (a and b):
An illustration of this task is shown in figure 2. The specification of this task should be in accordance with the requirements of the client, and should relate to the allowed risk for the component and the economics of false repairs.
|Fig 2: Typical configuration for a detection task.|
The proposed characterization of the reliability will be in terms of POD and PFA as follows:
These values of POD and PFA depend on the detection and recording thresholds, the target flaw size, and also the component noise levels.
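This joint dependence on the threshold can be sketched numerically. Purely for illustration, assume Gaussian amplitude distributions for the component noise and for the response to the target flaw size (all parameters hypothetical):

```python
from statistics import NormalDist

# Hypothetical amplitude distributions on a common signal scale.
noise = NormalDist(mu=1.0, sigma=0.3)    # component noise amplitude
signal = NormalDist(mu=2.0, sigma=0.4)   # response to the target flaw size

def pod_pfa(threshold):
    """POD and PFA for a given detection threshold."""
    pod = 1.0 - signal.cdf(threshold)    # flaw response exceeds the threshold
    pfa = 1.0 - noise.cdf(threshold)     # noise alone exceeds the threshold
    return pod, pfa

for t in (1.3, 1.6, 1.9):
    pod, pfa = pod_pfa(t)
    print(f"threshold {t}: POD = {pod:.3f}, PFA = {pfa:.3f}")
```

Raising the threshold lowers the false-call rate but also the POD, which is why both quantities must be reported together for a stated target flaw size.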
We consider the NDE reliability formula created at the European-American workshop as a general expression:

R = f(IC) · g(AP) · h(HF)

When we wish to apply this conceptual definition we must first ask: what is the appropriate mathematical formulation for combining the different contributions according to the reliability analysis described above? Again we have to distinguish between quantitative and qualitative detection tasks. In the simplest case, the combination consists of a simple multiplication of probabilities. For more complicated cases of interrelationship between the factors,

R = F(IC, AP, HF)

where F is a mathematical function to be determined according to the Reliability Theory of Systems [1, 2], which works similarly to the fault tree analysis described above.
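A hedged numerical reading of the simplest, multiplicative case looks as follows; all three factor values are hypothetical and the product form is valid only when the factors act independently:

```python
# Multiplicative reading of R = f(IC) * g(AP) * h(HF), hypothetical values,
# independence of the three factors assumed.
f_ic = 0.98   # intrinsic capability of the physics and technique
g_ap = 0.90   # derating by application factors (access, surface state, ...)
h_hf = 0.95   # derating by human factors
R = f_ic * g_ap * h_hf
print(f"R = {R:.4f}")
```

When the factors interact, the product is replaced by the system function F determined from the fault tree, as in the gate arithmetic sketched earlier.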
Because we treat three realistic industrial examples here, we place special focus on AP.
According to Matthew J. Golis ("What is an Application Parameter?"), we can define four categories of application parameters:
These categories cover the different physical locations at which the application parameters act. The following differentiation according to type considers the recognition point of view:
The task here is the detection of porosity in electron beam welds of Ti-alloy aircraft engine parts, using automated ultrasonic testing with focussed probes. The application factor consists of the naturally occurring flaw shape of the porosity, per the category 3 definition. The detailed approach is described in . Figure 3 shows the scheme of the investigation.
Sphere-bottom bore holes were used as the ideal flaw. The POD as a function of sphere diameter is presented in figure 4: we see a fast-rising POD, whereas the POD for the naturally occurring pores (figure 6) rises much more slowly. The shape of the pores seems to influence the POD strongly, which up to now has not been physically explained. We can now formally define the application-factor influence by dividing the latter POD by the former, as shown in figure 5. As a concept, this factor could be used to scale other sphere-bottom-hole values to realistic ones.
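The ratio construction described here can be sketched as follows; the size grid and all POD values are invented for illustration, not taken from the figures:

```python
# Empirical AP factor as the ratio of two POD curves (hypothetical values).
sizes_mm = [0.4, 0.6, 0.8, 1.0]
pod_ideal = [0.60, 0.90, 0.98, 1.00]   # sphere-bottom bore holes (ideal flaw)
pod_pores = [0.15, 0.40, 0.65, 0.85]   # naturally occurring porosity

# Pointwise ratio: the "flaw shape" module factor at each size.
ap_factor = [p / q for p, q in zip(pod_pores, pod_ideal)]
for size, f in zip(sizes_mm, ap_factor):
    print(f"{size} mm: AP factor = {f:.3f}")
```

Conceptually, multiplying an ideal-reflector POD curve measured on another component by this factor would yield a scaled, more realistic POD estimate, provided the flaw-shape influence transfers between the components.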
|Fig 4: POD for sphere bottom bore holes.||Fig 5: POD for the module flaw shape.|
|Fig 6: POD for naturally occurring pore holes.|
An inspection system consisting of a 240 kV X-ray tube of the flat type, a double line camera and a driving system (figure 7) permits linewise scanning (figure 8) of circular welded seams on a pipe in the vertical direction. The X-ray tube and the camera, positioned opposite each other across the pipe, rotate synchronously and with a constant angular orientation around the pipe. For laminography and tomography applications, it is necessary to take exposures under different angles. This is achieved using an additional drive that moves the X-ray tube parallel to the axis of the pipe.
|Fig 7: Set-up of the Experiment.|
|Fig 8: Principle of the inspection of circular welding seams by linewise scanning.|
The automated system for X-ray inspection of the circular welding seams assures confidence in the probability of detection (POD) values (figure 9) for wide as well as narrow notches. Variance introduced by the AP and HF factors limits the POD to 95% and 75%, respectively, for the notches shown in figures 10-12.
|Fig 9: POD vs. angle of exposure for different widths of the notches.|
|Fig 10: Drawing of the test sample (plate with notches).||Fig 11: Reconstructed cross section through the test plate from multi-angle projections.|
|Fig 12: Photograph of the test plate.|
AP factors that cause variation include non-ideal geometrical adjustment and calibration of the "X-ray tube -- line camera" system. The most important HF factor is the uncertainty in recognition and evaluation of the signal.
The scanning system described is currently being rebuilt (equipped with a new line camera), so not all planned measurements could be taken. After completion of the rebuild, all measurements will be repeated and re-evaluated. Finally, it is planned to scan a real pipe with cracks and to compare the measurement results with the real characteristics of the test object.
A legacy eddy current inspection system for a turbine disk blade attachment slot was assessed for its reliability (R) following field failure experience involving defect sizes above the assumed detection threshold. Figures 13-14 depict the typical disk and the inspection region of interest.
|Fig 13: Failure & Inspection history.||Fig 14: Crack type & orientation.|
A gage R&R (repeatability and reproducibility) study of the calibration and thresholding practice was performed under a six-sigma study program for the inspection protocol shown in figures 15-17, carried out manually by a population of certified eddy current inspectors. The recorded data acquisition speeds (frequency response) and calibration acceptance values were found to vary by a factor of 24 and by 38%, respectively, as shown in figure 18. These effects resulted in randomized interpretation criteria and an incoherent â vs. a relationship, depicted in figures 19-20. The associated POD is depicted as 'Actual' versus the assumed 'Target' performance level in figure 21.
|Fig 15: Probe variability impact: Reduced quality of input signal.||Fig 16: Instrument variability impact: Diminished clarity & reproducibility of signal.|
|Fig 17: Variation (low P+R) in A-scan or Lissajous-loop displays produced by hand scans on a variety of equipment resolutions.|
|Fig 18: Variability impact: feature discriminator setting & decision threshold variation.||Fig 19: Variability impact: Random accept/reject decisions.|
|Fig 20: Lacks flawsize/EC relationship; Poor crack/notch assumptions.||Fig 21: POD assumed on target; Deterministic basis derived from Cal.|
To control the exam calibration and acquisition variability, a mechanized acquisition system was designed and constructed as shown in figure 22. To maximize the information matrix, an array probe collection scheme was included to replace the single stream shown in figure 17, with the amplitude & spatial response depicted in figure 23.
|Fig 22: Mechanized Exam System Design.|
|Fig 23: Array vs. single coil.|
A lab and field assessment was performed to collect and measure the AP and residual HF factors in the automated acquisition and analysis prototype system, with the coherent â vs. a relationship and on-target R (POD+CL) depicted in figure 24.
|Fig 24: Lab + Field measurements to assure Xfer function & valid POD.|
The modular approach to evaluating the reliability of NDE systems reduces the amount of data collection required in physical demonstrations. This is accomplished by replacing some of this information with other knowledge about the system, e.g. modeling calculations of the intrinsic capability. The example studies demonstrate the feasibility of this approach for practical application. In addition, the modular approach gives the NDE reliability specialist the opportunity to study the principal influences, which enables optimization of the NDE system.
The authors wish to express appreciation to Lloyd Schaefer of PNDE for the helpful industrial examples, technical discussions and English editing of the paper.