AUT on Girth welds, accuracy, precision and tolerance
Paul and Gerry have revived my interest in the old topic of quantifying our accuracy of sizing of flaws in AUT on girth weld inspection projects.
As Paul pointed out, the ability of any system to reproduce amplitude responses on notches, or on flaws that have all the characteristics of notches, should be fairly predictable. But real flaws, as Gerry points out, are rarely so ideal, and their amplitude responses are therefore less predictable.
Because real flaws are "unpredictable", variations can occur both in amplitude and even in temporal aspects, as when we try to discriminate tip signals and must rely on operator "judgement" where the signal and phase separations are small.
The zone methods we have used on REAL flaws in pipeline girth weld AUT have provided some degree of accuracy, and that accuracy is constantly being "debated". Perhaps the methods of assessing this accuracy should ALSO be debated.
In DNV OS-F101, random and systematic deviations (standard deviation and mean?) are to be determined. The details of how these values are used are not described in the qualification document... just that they must be determined.
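The two components can be illustrated with a short sketch. The data below are hypothetical, and the split into a systematic (mean) and random (standard deviation) component is one common reading of the DNV requirement, not a prescription from the standard itself:

```python
# Illustrative split of sizing error into systematic (mean) and
# random (standard deviation) components, using hypothetical data.
import statistics

true_heights = [2.0, 3.5, 1.2, 4.0, 2.8]   # mm, hypothetical "true" flaw heights
ut_heights   = [2.4, 3.1, 1.8, 4.5, 3.0]   # mm, hypothetical UT estimates

# Signed sizing errors: positive means oversizing
errors = [u - t for u, t in zip(ut_heights, true_heights)]

systematic = statistics.mean(errors)   # bias of the measurements
random_err = statistics.stdev(errors)  # scatter about that bias

print(f"systematic (mean) error: {systematic:+.2f} mm")
print(f"random (std dev) error:  {random_err:.2f} mm")
```

A strong systematic component can in principle be corrected out of the estimates; the random component cannot, which is why the two are worth reporting separately.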
Charlesworth and Temple describe TOFD sizing errors and usually refer to the standard deviation.
API 1104 19th edition requires that the sizing error be determined but it makes no description of how this is to be done.
It is interesting that a similar requirement has been imposed in the Nuclear industry. ASME Section XI, in Appendix VIII, sets out Performance Demonstration Requirements. In VIII-3120, sizing requirements are imposed by comparing UT-estimated flaw sizes to the true sizes. Here ASME requires that an RMS error be determined.
For ferritic pipe welds ASME requires:
(a) The RMS error of the flaw lengths estimated by ultrasonics, as compared with the true lengths, shall not exceed 0.75 in. (19 mm);
(b) The RMS error of the flaw depths estimated by ultrasonics, as compared with the true depths, shall not exceed 0.125 in. (3.2 mm).
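The RMS calculation itself is straightforward. Here is a minimal sketch of the depth check, using hypothetical flaw data against the 3.2 mm limit quoted above:

```python
# Sketch of the ASME Section XI, Appendix VIII (VIII-3120) RMS
# sizing-error check for flaw depth. Flaw data are hypothetical.
import math

true_depths = [2.5, 4.0, 1.5, 3.0]   # mm, hypothetical true flaw depths
ut_depths   = [3.0, 3.2, 2.5, 3.4]   # mm, hypothetical UT estimates

# Root-mean-square of the depth errors
rms = math.sqrt(sum((u - t) ** 2 for u, t in zip(ut_depths, true_depths))
                / len(true_depths))

print(f"RMS depth error: {rms:.2f} mm")
print("within the 3.2 mm depth limit" if rms <= 3.2 else "exceeds the limit")
```

Note that, unlike the mean/standard-deviation split, a single RMS value mixes bias and scatter into one number.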
The ASME treatment seems well suited to the engineering application of NDT results: Fitness-for-Service can be calculated with the normal "safety allowances" made by fracture mechanics engineers, provided the NDT can assure a minimum accuracy tolerance.
But the pipeline application of NDT results seems to be different. Instead of requiring a minimum accuracy tolerance to meet the tolerances already assumed by the engineering safety factors, the sizing errors we have been deriving are added directly to the flaw size estimates. If no other safety factor has been incorporated, this is of course needed.
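The practice described above amounts to something like the following sketch, where all numbers are hypothetical and chosen only to show the effect of adding the error allowance to the estimate:

```python
# Hypothetical illustration of adding a derived sizing error to the
# UT estimate before comparing against an ECA acceptance limit.
measured_height = 2.0   # mm, UT height estimate (hypothetical)
sizing_error    = 1.0   # mm, derived error allowance (hypothetical)
eca_limit       = 2.5   # mm, ECA acceptable flaw height (hypothetical)

assessed = measured_height + sizing_error
verdict = "reject" if assessed > eca_limit else "accept"
print(f"assessed height: {assessed:.1f} mm -> {verdict}")
```

A flaw whose raw estimate sits comfortably inside the ECA limit is rejected once the error is stacked on top, which is exactly the double-conservatism question raised here: if the ECA limit already contains safety factors, adding the full sizing error on top of it tightens the criterion twice.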
But on several projects I have seen ECA-based acceptance criteria that are more restrictive than radiographic workmanship criteria. No one has bothered to explain how, with flaw-height estimates accurate to nearly a millimetre, the acceptable length could be less than what would be allowed by a radiographic inspection, where NO height estimate is available. Radiography has also been shown to be less effective at even detecting planar flaws in these welds, so it may well have missed the small flaw that UT detected and was required to reject.
So, anyone with thoughts or comments on:
1. What is the most informative statistic in flaw sizing estimates?
2. How is it best determined?
3. How is it to be used in acceptance criteria?
4. Should a minimum error be required to be proven, given that other safety factors are already calculated?
5. Should the error be simply added to all the other safety factors?
Input from the engineering community, both those who impose the requirements for NDT accuracy tolerances and those who have contributed to how those tolerances are used in determining acceptance criteria for ECAs, would be much appreciated.