Ultrasonic testing of spot welds
(This is the completed version of the message posted 2 days ago.)
The following is quoted from the paper on testing of autobody spotwelds.
'A modified ultrasonic testing method clearly detects stick welds and
cold welds, this is not possible with normal ultrasonic methods.
When testing under production conditions, 98% of the welds tested are
correctly evaluated, i.e. values obtained from the ultrasonic test
satisfactorily corresponded to the results from the teardown test.'
The link to the paper is: http://www.ndt.net/article/0498/spotw/spotw.htm
The above quoted statement would be true of most testing of autobody
spotwelds, for the following reason:
Body-welding performance is typically very high and can reach 98% or
more. An operator who simply calls all welds good can therefore only be
wrong 2% of the time. A report that claims 98% correlation or 98%
reliability misleads readers into believing the testing system is capable
of accurately detecting both good and bad welds (with one calibration
setting). I have read detailed reports that do just as outlined above:
they report 98% accuracy, but 98% of the items (welds) in the test
population were good. I could claim the same accuracy and be correct by
simply reporting that all welds are good!
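The base-rate trap described above can be sketched with assumed numbers (a hypothetical population of 980 good and 20 bad welds, chosen only to match the 98% figure):

```python
# Hypothetical illustration: a "tester" that accepts every weld
# in a population that is 98% good still scores 98% accuracy,
# while missing 100% of the bad welds (100% BETA error).
good = 980   # assumed count of good welds
bad = 20     # assumed count of bad welds

# Strategy: call everything good.
correct = good                      # only the good welds are judged correctly
accuracy = correct / (good + bad)   # fraction of all welds judged correctly
beta = bad / bad                    # fraction of bad welds accepted

print(f"accuracy = {accuracy:.0%}, beta error = {beta:.0%}")
```

This shows why a headline accuracy figure is meaningless without the error breakdown: the accuracy equals the proportion of good welds, not the discriminating power of the test.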
The way to present reliability data is to include the ALPHA and BETA
error data along with the claim of reliability for the test system. As
mentioned in other articles, the way to report could be:
(1 - ALPHA) * (1 - BETA) = TEST RELIABILITY
ALPHA is the proportion of good called bad
BETA is the proportion of bad called good
I believe that a standard method of reporting, or at least the
computational method used to establish reliability, should be included in
any reliability claim for a test system.
Is there a standard with which you expect data to be reported?
Would you please comment on the appropriate measure of a test system's
reliability?
My concern is that the measurement system should reject bad items and
not reject good items. Any checking system loses effectiveness when it
begins to reject good or accept bad items. My present requirement for
testing is 3% alpha (max) error and 15% beta (max) error (without
re-calibrating the system). Using the above 3% and 15% requirements,
the system only has to be about 82% reliable.