CT Measurement Realization Process: The Need for Advanced Correction Methods

The use of computed tomography as a dimensional metrology technique has increased steadily, despite the fact that uncertainty has rarely been determined and expressed for measurements of production parts. This situation may be attributed in part to a limited way of employing the uncertainty concept. From this perspective, the conceptual integration of the product (and measurement) realization process with the uncertainty approach is first emphasized in this paper. The reasoning behind it is to employ different methods for uncertainty evaluation in a rational sense, where the accuracy required of the uncertainty evaluation process drives the method selection. For critical measurement cases, which can be identified within the proposed concept, the need for advanced correction methods, using master part reference measurements for point-based compensation, is presented, tested and justified for GD&T characteristics.


Introduction
Computed tomography (CT) for dimensional metrology has been gradually integrated into the industrial quality control loop; the possibility of fully inspecting complex-shaped objects without damaging them or imposing sensor accessibility restrictions for tiny or hidden features is a key attribute of CT as a dimensional measurement technique. The typical design of metrology CT scanners comprises an X-ray cone-beam source at one end and a flat panel detector at the other. Between them, a turntable for stepwise rotation of the test object is placed, which can be moved linearly along the magnification axis. CT relies on the attenuation of X-rays as they propagate through the test object. For different beam directions, the intensity distribution of the remaining radiation is detected and stored as a grey-value image. The resulting planar images, covering a complete rotation of the test object, are mathematically processed to reconstruct the voxel dataset. Subsequent segmentation of the voxel dataset allows the object surfaces to be distinguished; finally, dimensional analyses can be performed. Because of the complex CT measurement chain, many influencing quantities and factors may affect the results of CT-based measurements. They may be related to the X-ray source (e.g. photon energy, focal spot size), detector (e.g. sensitivity, pixel size, exposure time), test object (e.g. material, geometry), kinematics (e.g. magnification axis and turntable accuracy), and further image and mathematical processing (e.g. image filtering, artifact reduction, surface determination, tolerance analysis) [1].
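To make the attenuation principle concrete, the following minimal Python sketch applies the Beer-Lambert law for a monochromatic beam traversing a single homogeneous material; the attenuation coefficient and path lengths are hypothetical values chosen only for illustration, not data from this paper.

```python
import numpy as np

# Beer-Lambert attenuation sketch: detected intensity I for a monochromatic
# X-ray beam of incident intensity I0 traversing a homogeneous material of
# linear attenuation coefficient mu over a path length t. All values are
# hypothetical and for illustration only.
I0 = 1.0                            # normalized incident intensity
mu = 0.05                           # attenuation coefficient, 1/mm (illustrative)
t = np.linspace(0.0, 100.0, 6)      # path lengths through the part, mm

I = I0 * np.exp(-mu * t)            # remaining intensity reaching the detector
for ti, Ii in zip(t, I):
    print(f"t = {ti:5.1f} mm -> I/I0 = {Ii:.3f}")
```

The detector stores such intensity values as a grey-value image for every angular step of the turntable; polychromatic spectra and scattering, neglected here, are among the influence factors listed above.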
This complicated measurement error behavior has created difficulties in assessing uncertainties and tracing CT measurement results back to the unit of length. Several authors have addressed: the difficulties in reporting the uncertainty associated with CT results [2], the use of the experimental method described in ISO 15530-3 to evaluate the uncertainty [3][4][5][6], the development of virtual CT simulation platforms to numerically compute the uncertainty [7], and the effect of varying specific CT influence factors on the measurement result [8][9][10][11][12]. In practice, however, uncertainty has rarely been determined and reported for measurements of production parts, which may also be explained by the restricted way in which the uncertainty concept is employed. In this sense, the conceptual integration of the product and measurement realization process with CT uncertainty is outlined in this paper. The reasoning is to use different uncertainty evaluation techniques in a rational manner, where the accuracy required of the uncertainty estimate drives the method selection. Within the proposed idea, the need for advanced correction methods, using master part reference measurements for point-based compensation, is presented, tested and justified for different types of GD&T callouts.

Measurement purpose
Measurements may serve many distinct purposes and impose particular requirements in industry. From product concept approval to product launch, measurements may be used for research and development (R&D) purposes, and not only for production purposes. Production-driven measurements require sufficient accuracy for checking product quality, need to be quick so as not to create production bottlenecks, and must be simple enough for easy operation. R&D-driven measurements are generally more time-consuming and complex, and their performance needs to be known and optimized in order to diminish the impact of measurement error on product characterization. Since measurement is a sequence of interrelated processes, by analogy with the product development lifecycle, the expression measurement process realization can be coined and the following two macro activities defined: (a) measurement planning, which embodies strategic operations that may guide measurement equipment definition, installation requirements and measuring protocol preparation, and should give rise to a solution capable of satisfying metrological and operational needs; (b) measurement execution, which involves measuring prototypes, first articles and production parts for tolerance compliance and/or manufacturing process control, and may induce measurement changes to correct errors verified in early phases.
Considering the measurement process realization just outlined, at least the following three conceptual dimensions should be taken into account to provide a useful uncertainty evaluation solution to the metrologist: (a) the purpose of the uncertainty evaluation, (b) the criticality of the measurement application, and (c) the singularity of the measurement case. The first relates to the phase of the product realization cycle; the second, to the phase of the measurement process realization cycle; the third, to the intrinsic value of the product or feature and the relative capability of the measurement process. Figure 1 depicts this idea and indicates that the accuracy of a given uncertainty estimate, i.e., how close the estimated uncertainty is to the hypothetical true uncertainty value (true uncertainty as defined in ISO 14253-2, item 3.7), should be evaluated considering the dimensions defined above. It is worth mentioning that similar reasoning may be observed in subclause 7.2.2 of ISO 10012, under the following terms: "… the effort devoted to determining and recording uncertainties of measurements should be commensurate with the importance of the measurement results to the quality of the product …", and in the Guide to the Expression of Uncertainty in Measurement (GUM), subclause 3.4.1: "… a measurement can be modelled mathematically to the degree imposed by the required accuracy of the measurement." Based on Figure 1, one can infer that the accuracy expected of the uncertainty estimate in the measurement planning phase would be lower, since only general information would likely be available; that for a barely capable measurement process the uncertainty estimate should be more accurate; and so on. That means the best method for evaluating the uncertainty for a particular measuring scenario would not necessarily be a one-size-fits-all approach.

Concept application for CT metrology
The good X-ray penetrability of plastic samples makes CT particularly appropriate for checking injection-molded plastic parts, which typically demand linear tolerance intervals from 0.2 mm (for lengths up to 6 mm) to 0.6 mm (for lengths up to 120 mm). This is a typical scenario in which lower photon energy is necessary and a smaller focal spot size is obtained, resulting in lower geometric unsharpness and sufficient accuracy for direct CT measurements (right side of Figure 1). For example, considering the findings of an exploratory study with modular test pieces made of POM, Baldo et al. [13] proposed an empirical error band of ±0.03 mm that contains the biases for dimensional measurements influenced by edge detection (e.g. features of size). Later, for other plastic parts with some degree of similarity, the authors observed that the estimated biases for each dimensional feature lay within the proposed error band [14][15].
This means that such a general uncertainty statement would be sufficient for situations where the accuracy expected of the measurement uncertainty is not critical. On the other hand, metal parts manufactured through, for instance, investment casting, die casting, and laser sintering would demand more accurate dimensional measurements, for which the CT measurement capability might not be sufficient due to increased photon energy and focal spot size, severe beam hardening artifacts, ambiguous edge determination, etc. For such measurement cases, the arrow of Figure 1 should be displaced to the left end, and the point-based compensation approach introduced by Baldo et al. [16] for a single GD&T tolerance (flatness) would be a candidate for delivering the required level of accuracy.

Point-based compensation approach
GD&T evaluation programs have been introduced into dimensional metrology practice in recent years to deal with the methods divergence issue, a term coined by Brown [17], and consequently to make tolerance verification consistent and nearly independent of metrology personnel decisions. In general, these programs use point clouds from test object measurements performed on any measurement device, merge the points with the corresponding nominal CAD model, and automatically evaluate the tolerance callouts in compliance with GD&T standards (e.g. ISO 1101 and/or ASME Y14.5). For CT metrology, the typical input data of standalone GD&T evaluation programs is the point cloud obtained from the detected surface after segmentation. The point-based compensation approach utilizes this extracted point cloud and relies on two foundations:
1. A measurement technology more accurate than CT and traceable to the SI unit of length, such as a conventional coordinate measuring machine (CMM), labeled here the R-CMS (Reference Coordinate Measuring System). This is a very important assumption, though quite reasonable considering that CMMs are appropriate for most GD&T-based inspection processes.
2. A consistent CT output, labeled the L-CT (Line Computed Tomography), i.e., one with a constant measurement bias at each specific point location under repeatability conditions. Repeatability conditions mean keeping most of the influence quantities and factors summarized in the first section constant or within reasonable limits.
The point-based compensation approach then makes it possible to separate the systematic effect stemming from the manufacturing process (e.g. the difference between the nominal model and the actual part geometry, estimated from the calibrated master part) from the imperfections of the measurement process, which are contained in the compensation vector created for each measurement point. Please refer to Baldo et al. [16] for a complete description of the four macro operations underlying the method experimentally explored in this paper.
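Under notation assumed here for clarity (the source uses none), the compensation can be summarized as follows: for each calibrated point \( p_i^{\mathrm{R\text{-}CMS}} \) of the master ('lab') part, the per-point systematic error and its subsequent application to a paired 'shop' point read

\[ \delta_i = \bar{p}_i^{\,\mathrm{L\text{-}CT}} - p_i^{\,\mathrm{R\text{-}CMS}}, \qquad \hat{p}_i = p_i^{\,\mathrm{L\text{-}CT,\,shop}} - \delta_i , \]

where \( \bar{p}_i^{\,\mathrm{L\text{-}CT}} \) denotes the weighted mean of the L-CT points correlated with \( p_i^{\mathrm{R\text{-}CMS}} \) on the master part, as detailed in the results section below.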

Experimental measurement design and results
To study the effect of the point-based compensation approach on dimensional and geometrical tolerance verification, the prismatic part illustrated in Figure 2 was selected. The illustration exhibits the three datum planes and the tolerances assigned to some of the part geometries, which may metrologically profit from the proposed point-based compensation approach. Particularly relevant for CT-based measurements, the test part material is aluminum, featuring a nominal density of 2700 kg/m³ and a body diagonal of approximately 100 mm. Two test parts were considered in the experimental investigation, one labelled the 'lab' part and the other the 'shop' part.

Measurement setup
The intrinsic characteristics of both test parts were first calibrated on a Carl Zeiss PRISMO ultra CMM housed in a temperature-controlled laboratory kept at (20.0 ± 0.5) °C, the R-CMS. The R-CMS point clouds were generated in scanning mode, and the single point uncertainty was estimated using the Virtual CMM software output and expert judgment. Data evaluation was entirely executed in the KOTEM SmartProfile 5.0 analysis software using the reference point clouds, with the least-squares method chosen to associate ideal geometries (e.g. planes, cylinders) with the sampled points. The test parts were then scanned on a Carl Zeiss METROTOM 1500 CT, the L-CT, equipped with a 225 kV micro-focus X-ray tube and a 2048 × 2048 px flat panel detector. The metrology CT system is housed in a temperature-controlled room kept at (20 ± 2) °C. To scan the test part, the magnification axis was positioned to project it onto the maximum possible area of the flat panel detector at 1024 × 1024 px resolution (binning mode). The X-ray tube voltage was set high enough to avoid complete beam extinction (220 kV with a 3 mm Cu prefilter), and the detector integration time was set to a value convenient for improving accuracy (4 s). The source current was set to enhance image contrast and brightness (500 µA). This way, the spatial resolution remained limited by the focal spot size of 110 µm and not by the resulting voxel size of 146 µm. The number of angular steps was selected to be about twice the number of pixels covered by the shadow of the test part in the projection (1500 projections). The L-CT's built-in beam hardening reduction function was enabled during reconstruction. To determine the surface from the voxel dataset of the mono-material part, the standard 'iso-50%' threshold value was applied globally. The 'lab' part was scanned three times and the 'shop' part twice on the L-CT, both under repeatability conditions.
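For readers unfamiliar with the 'iso-50%' rule, the following minimal Python sketch illustrates one common way to derive it from a grey-value histogram: the threshold is placed halfway between the background (air) and material grey-value peaks. The crude peak search below is an assumption made for illustration, not the implementation used by the scanner software.

```python
import numpy as np

# Minimal 'iso-50%' threshold sketch for a mono-material voxel dataset:
# the surface threshold lies halfway between the background and material
# grey-value peaks of the histogram. Peak detection here is deliberately
# simple and assumes the two peaks fall in opposite halves of the range.
def iso50_threshold(voxels: np.ndarray, bins: int = 256) -> float:
    hist, edges = np.histogram(voxels.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    g_air = centers[np.argmax(hist[: bins // 2])]            # background peak
    g_material = centers[bins // 2 + np.argmax(hist[bins // 2:])]  # material peak
    return 0.5 * (g_air + g_material)                        # iso-50% value

# Example on synthetic data: two Gaussian grey-value populations.
rng = np.random.default_rng(0)
vox = np.concatenate([rng.normal(50, 5, 10_000), rng.normal(200, 10, 10_000)])
print(round(iso50_threshold(vox), 1))  # approximately (50 + 200) / 2 = 125
```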

Experimental results
The main outcomes of applying the point-based compensation approach are explored in this subsection for the tolerance types exhibited in Figure 2, namely: linear distances between features, features of size, flatness, hole axis location and hole cylindricity. Before running the point compensation, the R-CMS evaluation results for the tolerance callouts were generated using ca. n = 4000 (yellow) points sampled on the 'shop' part, which are exhibited in Figure 3a, and the L-CT original dataset results, with more than half a million (yellow) points, were calculated, as illustrated in Figure 3b (first measurement of the 'shop' part). The 'lab' part datasets obtained on the R-CMS and on the L-CT (three measurement cycles) were used to generate the point compensation vector as detailed by Baldo et al. [16]. The point-compensation algorithm correlates the very dense L-CT dataset with the R-CMS dataset by creating a virtual circle of a given radius centered on each R-CMS point and computing the weighted average of the L-CT points enclosed by the circle; it then filters out the uncorrelated points and calculates the difference between each L-CT mean point and the corresponding R-CMS point. The resulting point compensation vector (representing the L-CT task-specific error for each single point) embodies all the actual, unknown systematic effects of the L-CT relative to the R-CMS. Figure 3c shows the L-CT results of the 'shop' part (first measurement run) using the n paired (yellow) points before compensation. Figure 3d, in turn, presents the L-CT results after applying the compensation vector to the n paired (yellow) points. Comparing the L-CT original dataset (Figure 3b) with the L-CT filtered results (Figure 3c) shows no significant loss of information after the point cloud density is substantially reduced, which by itself indicates an effective contribution of the proposed method, as considerably less computational effort is required to handle the filtered point cloud.
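The following Python sketch outlines the correlation-and-differencing step just described, under simplifying assumptions: both point clouds are already registered in a common coordinate frame, the 'virtual circle' is approximated by a fixed-radius spherical neighbourhood, and the weights decay linearly with distance. All names and parameters are illustrative; this is not the implementation of Baldo et al. [16].

```python
import numpy as np
from scipy.spatial import cKDTree

def compensation_vectors(rcms_pts, lct_pts, radius, min_neighbours=5):
    """Sketch: per-point L-CT error relative to the R-CMS reference."""
    tree = cKDTree(lct_pts)                    # dense 'lab' L-CT cloud, (M, 3)
    deltas = np.full(rcms_pts.shape, np.nan)   # one vector per R-CMS point
    for i, p in enumerate(rcms_pts):           # sparse R-CMS cloud, (N, 3)
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbours:          # filter out uncorrelated points
            continue
        nbrs = lct_pts[idx]
        d = np.linalg.norm(nbrs - p, axis=1)
        w = 1.0 - d / radius                   # simple distance-based weighting
        mean_pt = (w[:, None] * nbrs).sum(axis=0) / w.sum()
        deltas[i] = mean_pt - p                # task-specific error at this point
    return deltas

def compensate(paired_lct_pts, deltas):
    # Subtract the per-point systematic error from the paired 'shop' points;
    # rows left as NaN (uncorrelated points) should be dropped beforehand.
    return paired_lct_pts - deltas
```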
Table 1 summarizes the measurement results for the L-CT after point reduction (L-CT-r) and after point compensation (L-CT-c). It reports the biases for both scenarios against the reference values (bias1 = L-CT-r − R-CMS and bias2 = L-CT-c − R-CMS) and the normalized errors En [18] (En1 relates to bias1 and En2 to bias2), taking into consideration the expanded measurement uncertainty U, which, for features of size, linear distances and hole axis position, also includes the temperature variation observed during CT scanning. One can observe in Table 1 that the En1 values (those related to the L-CT reduced dataset) were not satisfactory (most of them are larger than 1). After applying the compensation vector to the 'shop' part datasets, good agreement was evidenced between the R-CMS and L-CT values (the En2 values are less than 1 for all features under scrutiny). This perceptible accuracy improvement represents the major contribution of the proposed point-based compensation approach. The substantial reduction of the systematic deviations observed after merging homogeneous data (i.e., the point clouds of the R-CMS and L-CT) is a step forward in broadening the industrial application of CT, as the operational advantages of the technology can be extended to measurement cases originally not capable of providing the required accuracy. In fact, after applying the compensation vector, features of size and other dimensional and geometrical characteristics could be measured with an expanded uncertainty U down to 0.01 mm (k = 2). It is important to mention that this uncertainty value is valid only as long as the 'lab' test part used to create the point compensation vector remains representative of the CT measurement process.
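For reference, the normalized error statistic used above follows its usual definition [18]; with \( y \) denoting a measured value and \( U \) the corresponding expanded uncertainty (k = 2), each \( E_n \) compares an L-CT result against its R-CMS reference:

\[ E_n = \frac{y_{\mathrm{L\text{-}CT}} - y_{\mathrm{R\text{-}CMS}}}{\sqrt{U_{\mathrm{L\text{-}CT}}^2 + U_{\mathrm{R\text{-}CMS}}^2}} , \]

with \( |E_n| \le 1 \) indicating agreement within the stated uncertainties.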

Concluding remarks
In this paper, the conceptual integration of the product and measurement realization process with the uncertainty approach was delineated, which is in consonance with the tenets of the GUM. The concept was briefly explained for a practical case where the accuracy required of the uncertainty estimate was not critical, and then the point-based compensation approach was introduced and explored for cases where more accuracy would be needed in the uncertainty evaluation. The compensation approach relies on more accurate measurements performed, e.g., on classical coordinate measuring machines, which allows the massive CT dataset to be intelligently filtered down to a reduced number of points with no significant loss of information for dimensional metrology. By compensating this reduced CT dataset, the task-specific measurement accuracy can be enhanced and traceability to the SI unit of length can be guaranteed. In fact, from the experimental tests, a well-defined bias could be observed for the CT datasets (point compensation is thus worthwhile), but with a random variation that cannot be neglected in the uncertainty determination.
The experimental findings have shown good agreement between the R-CMS and L-CT datasets after R-CMS-based point reduction and compensation, which emulates the situation in which the actual feature shape remains stable over time. Nevertheless, whenever the surface signature is not resolved by the measurement technology (e.g. when evaluating voxel matrices of denser materials), a new compensation vector would be required if the actual part shape changed significantly (e.g. due to machine tool wear or changes in cutting conditions). For this, a new reference part would need to be defined and calibrated. Another possibility in this case would be to include an additional uncertainty component to account for this effect, even though this solution would result in larger uncertainties.
Last but not least, the point-based compensation approach described in this paper offers the potential to reduce the measurement uncertainty down to the level of the CT measurement process repeatability for features that can be calibrated on the R-CMS and measured on the L-CT using a particular operator-selected measuring protocol, thus preserving the advantages of CT technology for dimensional metrology, such as the independence of measurement time from the number of features. The application spectrum of CT for dimensional metrology can therefore be extended to situations that demand more accurate measurements (arrow at the left end of Figure 1), which would not be feasible with the original CT output.

Figure 1: Complete concept proposed, from the measurement scenario to the accuracy required of the CT uncertainty

Figure 2: Illustrative image of the prismatic part indicating the assigned feature tolerances and the respective datums

Figure 3: Point-based compensation approach flow applied to some tolerances: (a) R-CMS measurement results; (b) L-CT original measurement results; (c) L-CT filtered results; (d) L-CT filtered results after compensation

Table 1: R-CMS, L-CT after point reduction and L-CT after point compensation values, biases and normalized errors