|NDT.net Jan 2006 Vol. 11 No.1|
Life-Cycle Prolongation of Civil-Engineering Structures via Monitoring
Udo Peil, Institute for Steel Structures, Technical University at Braunschweig, 38302 Braunschweig, Beethovenstr. 51, Germany, email@example.com
ABSTRACT
The prediction of a realistic life cycle and the prolongation of the service life are important tasks for reducing the future costs of civil engineering structures. The precise assessment of the life cycle will become an important challenge. This paper first gives an overview of the possibilities for assessing the state of the structure (the anamnesis), which must precede every life cycle determination. Subsequently, new ways and possibilities of precise life cycle determination are presented. The methods are developed at the Collaborative Research Centre "Monitoring of Structures" at the Technical University at Braunschweig (Germany).
LIFE CYCLE ASSESSMENT BY MEANS OF MONITORING
General overview
Figure 2 shows a flow chart of life cycle assessment by means of SHM. When existing structures are to be investigated, a fundamental difference arises compared with SHM measures used in mechanical or aerospace engineering: a conscientious check-up of the state of the structure and its accumulated damage (the anamnesis) is required. The flow chart is explained in more detail in the following.
Anamnesis:
At the beginning of any monitoring measure, a conscientious check-up of the state of the structure and its accumulated damage is indispensable. This first inspection should proceed from an overall examination to a more detailed examination of the details; one should not focus only on visible damage or faults but also on symptoms and damage indicators, such as local changes in the color of coatings. Figure 3, for example, shows a crescent-shaped brighter zone surrounding a bolt. The material check showed that the wrong material had been used for the plate, so severe plastic deformations occurred. A simple rule: inspection is good, thinking in parallel is better.
During this first inspection, doubts can be cleared up or increased, and suggestions can be made for further action. If one decides to perform a precise anamnesis, the global system geometry and the local geometry must first be determined for the subsequent modeling of the structure. Very old structures are often problematic because their technical documents, such as drawings and calculations, are not available: the geometry has to be determined on site. Different methods are available: conventional measurement by hand, photogrammetry, tachymetry and, very efficiently, laser scanning. A laser scanner measures the horizontal and vertical angles and the distance using rotating reflectors. The measurement speed is high: up to 800,000 points in space can be sampled per second. The accuracy depends on the distance; values between 1 mm and 0.1 mm are possible. In addition, the color of the measured point can be recorded. The cloud of points then looks like a photo, see figure 4.
The next step is the determination of the material parameters. Non-destructive testing techniques are suitable only for rough estimates. The ultimate strength of steel, for example, is correlated with the hardness, which can easily be measured on site. The ultimate strength of concrete can be determined using the rebound hammer. Modern testing devices come with integrated mini-computers which determine the mean and rms values of a series of measurements. Both methods give only very rough estimates. If more precise material parameters are needed, drill cores must be taken from less strained areas near the weak point. The size of the drill core is a problem: desirably small cores show a remarkable size effect, since small local defects such as pores, slag inclusions or concrete aggregates already have a disproportionate influence on the behavior of the specimen. Concrete drill cores usually have diameters of three to four times the maximum aggregate size (see DIN 1048); steel drill cores with a diameter of 50 mm to 60 mm represent a good compromise. Figure 5 (left) shows the drilling of the core and (right) the different specimens which can be taken out of the core: a tension test specimen to determine the stress-strain diagram, an ISO-V specimen to determine the notch toughness, and a modified CT specimen to determine fracture mechanics parameters. The remnant can be used for the determination of the chemical composition of the material.
The miniaturized specimens must be tested in a special test rig designed for small forces, since commonly used material-testing rigs are not able to control the small forces needed.
Now we are in a position to model the structure: geometry, material and loads are known, and the local stresses and strains can be determined. This must be done with high accuracy, because an oversimplified model of the structure can hide or fake weak points [2,11]. A typical example of wrong modeling is a two-dimensional system for a simple continuous bridge. Due to the three-dimensional load-carrying behavior, the cross-girders show high local stresses at the clamped connections to the main girder, which often lead to cracks, see figure 6.
Experience teaches that the Finite Element Method (FEM) gives the best results. Figure 7 shows, for example, the response of an old cast iron bridge (see figure 5) calculated by FEM. The weak point is a very local stress concentration at a clamped diagonal connection. This point shows the maximum stress of all details. It is a typical weak point, a critical detail.
The material state near the critical weak points should then be checked by non-destructive techniques (NDT), e.g. ultrasonic, X-ray or magnetic tests.
In many cases the result of this first-step evaluation shows that no further measures need to be taken, or that the structure should be preserved. If the result is unfavorable with respect to the expected life cycle, monitoring measures together with adaptive prognosis models can help considerably. The life cycle determination might also be improved using more sophisticated but conventional methods.
Environmental load and response assessment
Before we follow the flow chart in Figure 1 further, some information is given on how to determine the environmental load on the structure. If the investigated structure already exists and no monitored time histories from the past are available, the time history must be estimated or generated by a model. The generating method consists of two procedures: a load generating procedure and a strain generating procedure.
Loads are usually generated on the basis of statistical distributions. Typical vehicle sequences (clustering of trucks etc.) and typical temporal distances between vehicles must be taken into account. The vehicle statistics (load distribution, clusters, temporal vehicle distances) depend on the type of road (highway, expressway, country road) and the local situation. If available, the statistics of a certain road type can also be used for other similar cases. The performed investigation can be applied to different types of roads and local situations in order to compile statistics in a catalogue, which can be used in the future as a statistical repository for the large number of existing bridges in different local situations.
Traffic data and axle loads are usually known to the traffic authorities for different regions. Nowadays, WIM (weigh-in-motion) data are also available for many structures [10, 11]. If measurements are not available (which is the normal case), estimated load probability distribution functions should be used.
Figure 8 shows the density distributions of strain data continuously measured at a cross girder of a highway bridge. Strain peaks caused by different vehicles are converted into metric tons, classified, and plotted in a histogram. The histogram is fitted by four density functions, each assigned to a vehicle type. Density function 1 (shown truncated in Figure 8) describes the probability of occurrence of cars, function 2 that of light or empty trucks, and functions 3 and 4 those of ordinary and heavy trucks.
In order to take typical load sequences and typical temporal distances between single loads into account, the measured time history is classified into four groups which represent the four vehicle types mentioned above. The borders of the class assigned to a vehicle type are determined using discriminant analysis. The borders correspond to the intersection points of the density functions plotted in Fig. 8, resulting in 3, 17 and 28.5 metric tons (Fig. 9). This procedure yields a stream consisting of the numerals 1 to 4 (Fig. 9, numbered half circles).
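As a minimal sketch, this classification step can be reproduced in a few lines. The class borders are the intersection values quoted in the text (3, 17 and 28.5 t); the function name is illustrative, not taken from the paper:

```python
import numpy as np

# Class borders in metric tons, from the intersection points of the
# fitted density functions (Fig. 9): cars / light / ordinary / heavy trucks.
BORDERS_T = [3.0, 17.0, 28.5]

def classify_vehicles(weights_t):
    """Map measured vehicle weights (metric tons) to vehicle types 1..4."""
    # np.digitize returns interval indices 0..3; shift to types 1..4.
    return np.digitize(weights_t, BORDERS_T) + 1

stream = classify_vehicles([1.2, 25.0, 40.0, 10.5])
print(stream.tolist())  # [1, 3, 4, 2]
```

The resulting stream of numerals 1 to 4 is exactly the kind of symbol sequence from which the sequence and distance statistics below are compiled.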
Temporal and sequential data of this stream are representative of loading scenarios of the structure, which depend on the location of the structure (e.g. an uphill road) and on human driving behavior. Fig. 10 shows the density functions of the temporal distances in seconds between two successive vehicles, sorted in a matrix; the temporal distances are shown at the upper margin. The row of a matrix element indicates the type of the current vehicle, the column that of the subsequent one. Element (4,1), for example, indicates that cars (type 1) wait impatiently for an opportunity to overtake trucks, because the modal value of the distribution of this element is remarkably low.
To identify typical sequences, three successive vehicles are always considered: the first two vehicles are used to express the occurrence probability of the third one. The distributions of the sequences are stored in the matrix of Fig. 11. The row i of the matrix indicates the first vehicle of the current sequence, the column j the second (the most recent) one. The histogram of element (i, j) shows the probability of occurrence of the next vehicle of that sequence. It is obvious that the probability of successive trucks is high (see the high mode of the distributions in elements with a column number greater than 1).
This statistical input information is now used to generate a synthetic time history of the traffic of the past using the Monte Carlo method. A random generator produces vehicle types, always taking the most recently generated sequence into account on the basis of the densities shown in Fig. 11. Temporal distances are chosen in the same manner according to the densities in Fig. 10. The actual load of a vehicle is then determined from the corresponding density of the forecast vehicle type.
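The generation loop can be sketched as follows. All numerical values here (conditional type probabilities, mean gaps, load means and standard deviations) are illustrative placeholders; in the real procedure they come from the measured matrices of Figs. 8, 10 and 11:

```python
import random

# Illustrative placeholders -- the real densities come from Figs. 8, 10, 11.
P_NEXT = {(i, j): [0.55, 0.15, 0.15, 0.15]
          for i in range(1, 5) for j in range(1, 5)}
P_NEXT[(4, 4)] = [0.10, 0.10, 0.30, 0.50]   # trucks tend to follow trucks
GAP_MEAN_S = {(i, j): 8.0 for i in range(1, 5) for j in range(1, 5)}
GAP_MEAN_S[(4, 1)] = 2.0                     # cars tailgating trucks
LOAD_T = {1: (1.5, 0.5), 2: (10.0, 3.0), 3: (22.0, 3.0), 4: (35.0, 4.0)}

def generate_traffic(n, seed=0):
    """Monte Carlo stream of (arrival time [s], vehicle type, load [t])."""
    rng = random.Random(seed)
    t, prev, cur = 0.0, 1, 1
    out = []
    for _ in range(n):
        # Next type conditioned on the two most recent vehicles (Fig. 11).
        nxt = rng.choices([1, 2, 3, 4], weights=P_NEXT[(prev, cur)])[0]
        # Temporal distance drawn per type pair (Fig. 10; exponential here).
        t += rng.expovariate(1.0 / GAP_MEAN_S[(cur, nxt)])
        # Load drawn from the density of the forecast type (Fig. 8).
        mu, sigma = LOAD_T[nxt]
        out.append((t, nxt, max(0.1, rng.gauss(mu, sigma))))
        prev, cur = cur, nxt
    return out

for arrival, vtype, load in generate_traffic(5):
    print(f"t={arrival:7.1f}s  type={vtype}  load={load:5.1f} t")
```

The exponential gap model is a simplifying assumption for the sketch; the paper uses the measured gap densities directly.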
The strains in the structure must be determined by means of a precise model, preferably a FE-Model. Heavy structures may be investigated purely statically, excluding dynamic effects. Moving loads on slender structures like bridges, however, may cause remarkable additional dynamic effects.
An additional dynamic influence arises from the roughness of the road surface (the pavement). This influence is taken into account by measuring different pavement states and describing them by means of a Gaussian, stationary, ergodic process characterized by a power spectral density function (PSD). For use in the presented simulation, a discrete realization of the roughness function is needed. It can be generated from a finite Fourier series with random phases, where the amplitude of each term is determined in accordance with the chosen PSD.
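A minimal sketch of this spectral synthesis is given below. The PSD form and its parameters are assumptions for illustration (an ISO 8608-like power law, not the paper's measured spectra); the technique is the stated one, a sum of cosines with PSD-determined amplitudes and uniformly random phases:

```python
import numpy as np

def roughness_profile(length_m, dx, phi0=1e-5, n0=0.1, seed=0):
    """One realization of a road roughness profile from a one-sided PSD
    Phi(n) = phi0 * (n / n0)**-2  [m^3]  (illustrative ISO-8608-like form),
    synthesized as a finite Fourier series with random phases."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)          # longitudinal coordinate [m]
    n = np.linspace(0.01, 10.0, 500)          # spatial frequencies [cycles/m]
    dn = n[1] - n[0]
    phi = phi0 * (n / n0) ** -2               # PSD ordinates
    amp = np.sqrt(2.0 * phi * dn)             # harmonic amplitudes from PSD
    phase = rng.uniform(0.0, 2.0 * np.pi, n.size)
    # r(x) = sum_k amp_k * cos(2*pi*n_k*x + phase_k)
    r = (amp[None, :] * np.cos(2.0 * np.pi * x[:, None] * n[None, :]
                               + phase[None, :])).sum(axis=1)
    return x, r

x, r = roughness_profile(200.0, 0.25)
print(r.shape, float(r.std()))
```

Each new seed yields a different realization with the same spectral content, which is exactly what the simulation needs for repeated traffic runs.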
To account for these dynamic effects, the structure is modeled by the FEM (see Fig. 12). In a preceding modal analysis, eigenvalues and eigenvectors are extracted as input for the iterative solution scheme. The different types of vehicles are idealized as damped two-mass systems for each wheel. They consist of a wheel mass, a mass containing the corresponding part of the vehicle body and payload, a spring and damper between both, and another spring representing the tire tangent stiffness (Fig. 13).
Because the system changes at each time step, the submodels (bridge and vehicles) have to be treated separately. The coupling forces of both systems are calculated, and iterations must be performed until the Euclidean norm of the interaction forces falls below a given threshold. The solution of this iteration is saved for each time step in modal coordinates. Afterwards, displacements, velocities, accelerations or stresses can be calculated from these data for nodes of particular interest.
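The essence of this substructure coupling can be illustrated with a deliberately tiny toy version: a single static degree of freedom per substructure instead of the full modal bridge and two-mass vehicle models. All stiffness values and names are hypothetical; only the iteration pattern (exchange interaction forces until their change is small) mirrors the scheme described above:

```python
def coupled_step(f_ext, k_bridge, k_tire, tol=1e-9, max_iter=100):
    """One illustrative coupling iteration between two substructures:
    the bridge deflects under the wheel force, and the tire force in turn
    relaxes with the bridge deflection. Iterate until the change of the
    interaction force falls below a threshold (here a scalar norm)."""
    f = f_ext                       # initial guess: static wheel load
    u = 0.0
    for _ in range(max_iter):
        u = f / k_bridge            # bridge deflection under coupling force
        f_new = f_ext - k_tire * u  # tire force reduced by the deflection
        if abs(f_new - f) < tol:
            return u, f_new
        f = f_new
    return u, f

# Hypothetical numbers: 50 kN wheel load, stiff bridge, softer tire.
u, f = coupled_step(5e4, 1e8, 1e6)
print(f"u = {u:.3e} m, coupling force = {f:.2f} N")
```

In the real scheme the same fixed-point idea runs per time step on the full dynamic submodels, with the Euclidean norm taken over all interaction force components; convergence of this toy version requires k_tire < k_bridge, which is the usual situation.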
PROBABILISTIC ASSESSMENT OF CRITICAL WEAK POINTS
Independent of the type of structure to be monitored, the assessment of critical weak points is one of the most important steps in the procedure. Finding the critical points of the structure is one of the main tasks in monitoring. Weak points or weak spots are areas of the structure which are prone to damage or where possible damage causes intolerable consequences. The weak points of older structures, usually designed with very different safety levels, are normally well known and can be determined from existing structural calculations or by experience.
New structures, however, show an equally distributed safety level over a high number of critical details. Because every additional monitored detail increases the cost, probabilistic methods can be used to detect the critical details, leading to a smaller number of measuring points.
The procedure for the reliability-oriented determination of weak points classifies critical weak points as those which contribute the largest part to the overall failure probability of the structure. These points must be monitored. To determine the failure probability, a description of the possible fault paths of the structure is required.
The procedure can be described by a simple example: a concrete beam under corrosive attack. The beam is subjected to a trapezoidal bending moment and has a rectangular cross section with four eccentric reinforcement bars. It is exposed to a chloride attack which slowly seeps into the cross section and reduces the cross section of the reinforcement bars due to increasing corrosion. The goal is the determination of the weak points and thus of the optimal sensor arrangement for measuring strains, humidity and pH value.
A reinforcement bar breaks if the actual tension force is higher than the tension force that can be activated by the corroded reinforcement bars (effects of stress corrosion are not taken into account in this example).
Since geometry and material parameters are known, 13 random variables remain, and their influence must be investigated. They are summarized together with their appropriate stochastic models.
Figure 14 shows the corresponding fault tree. The possible fault paths can be assessed systematically, even by a computer program (Figure 15). The usual use of fault trees, which is standard in the design of complex and high-risk structures, assumes that the basic random variables are statistically independent. Unfortunately, this assumption does not hold in most cases. Thus a complete probabilistic investigation of all fault paths by means of the First Order Reliability Method (FORM) must be performed.
The FORM calculations are based on the general definition of the limit state equation

G = R - S    (1)

with R as the resistance quantity and S as the action quantity of the system. R and S are both stochastic variables (basic variables). Safety is guaranteed when the resistance is greater than the action, while failure occurs when G < 0. The failure probability can then be written as

Pf = P(G < 0) = P(R < S).    (2)
One method for the determination of failure probabilities is the First Order Reliability Method (FORM). This method was developed by Hasofer/Lind for normally distributed variables and later extended to arbitrarily distributed parameters by Rackwitz/Fiessler. For the explanation of this method, it is assumed that R and S in Equation (1) are uncorrelated and normally distributed.
When the failure probability is calculated according to FORM, the basic variables, and thus the limit state equation, are transformed into standard normal space by the transformation equation. The procedure then follows the classical First Order Reliability Method.
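For the special case just described (G = R - S with uncorrelated normal R and S), the safety index and failure probability have a closed form, beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and Pf = Phi(-beta). A minimal sketch with hypothetical input values:

```python
from math import erf, sqrt

def form_normal(mu_r, sig_r, mu_s, sig_s):
    """Safety index beta and failure probability Pf for G = R - S with
    uncorrelated, normally distributed R and S (Hasofer/Lind case)."""
    beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2
    pf = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
    return beta, pf

# Hypothetical resistance and action statistics, just for illustration.
beta, pf = form_normal(mu_r=60.0, sig_r=6.0, mu_s=40.0, sig_s=8.0)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")  # beta = 2.00, Pf = 2.28e-02
```

For arbitrarily distributed or correlated variables, the Rackwitz/Fiessler transformation replaces this closed form with an iterative search for the design point.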
The calculation for our example results in a probability of occurrence, described by the safety index β. The safety index is normally time dependent. In our case path 1 is the most critical one; the safety indices of the other paths are larger than 36, so their probabilities of occurrence are very much lower than that of path 1. Path 1 should therefore be monitored first (Figure 16).
Another example, a steel structure with fatigue damage, demonstrates the use of the probabilistic method in a different field. The procedure has now been upgraded to perform the overall analysis of a fatigue-endangered structure automatically.
DECISION ABOUT THE MONITORING STRATEGY
The decision about the most effective monitoring strategy depends on both the demands and the knowledge of the structure. If the knowledge is incomplete, a prognosis of the life cycle using an adaptive model is useless, because the input parameters of the model are more or less unknown. Monitoring can then be used to observe threshold values of defined damages or damage symptoms of details which are essential for the structure, such as limit stresses or strains, cracks, crack lengths, crack length rates, corrosion depth, etc.
If the knowledge about the structure is more complete, thanks to a conscientious anamnesis and good knowledge of the former environmental impacts and loads, a life cycle prognosis can be modeled. The model should be adaptive, i.e. its input parameters should be measured directly on site. The model input parameters can then be continuously updated and thus give the best forecast. Adaptive models can be the same as conventional models, but they can also be much simpler, because conventional models must take many more external and internal variables into account to give a sufficiently reliable prognosis of the life cycle. In [3, 4] an adaptive procedure is described which no longer needs any model.
MEASURES AS A RESULT OF MONITORING
Depending on the result of the monitoring measure, a decision can be made about the future of the structure. Possible measures could be:
The expected size of the damage furthermore depends on the statistical values of the strength at the site of the damage, i.e. the mean value, the rms value and possibly higher statistical moments. In addition, the probability with which the damage could have been detected at all must be known. Small cracks will be detected with lower probability than large ones. Data on damage detection probability are generally very rare. An exception is the detection of cracks in steel or alloy structures. A model of the probability of detection based on an exponential distribution shows good agreement with experience from usual inspection procedures. The probability of detection (POD) is given by:
where a_det is the detectable crack length, a_det,min is the smallest crack length which can be detected using the chosen inspection method, and λ is a calibration factor depending on the crack length. Figure 17 shows typical values.
With these values it is possible to describe the detection of a crack at the inspection time t_insp by comparing the detectable crack length a_det with the expected crack length ã(t_insp).
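As a sketch of this detection check, the exponential POD model described above can be written out as follows. The exact formula of the paper is not reproduced here; this is the common exponential form matching the stated ingredients (threshold a_det,min, calibration factor λ), and the inspection outcome is simulated by a uniform random draw:

```python
from math import exp

def pod(a_det, a_det_min, lam):
    """Probability of detecting a crack of length a_det under the assumed
    exponential POD model; zero below the smallest detectable length."""
    if a_det <= a_det_min:
        return 0.0
    return 1.0 - exp(-lam * (a_det - a_det_min))

def detected(a_expected, a_det_min, lam, u):
    """A crack with expected length a_expected at inspection time t_insp
    counts as found if a uniform random draw u falls below its POD."""
    return u < pod(a_expected, a_det_min, lam)

print(pod(0.5, 1.0, 0.8))            # 0.0 -- below detectability threshold
print(round(pod(5.0, 1.0, 0.8), 3))  # 0.959
```

All crack lengths and the calibration factor here are hypothetical; in practice they are taken from POD curves such as those in Figure 17 for the chosen inspection method.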
With this, inspection intervals for different failure paths can be determined; different weak points will yield different inspection intervals. A strategy coupling various repair measures is given in . The decision about the future of the structure must include not only technical and economic aspects [13,14,15] but also the associated legal background. Building regulations and laws, public laws and product liability can easily become problems which result in the abandonment of the decision to repair a structure, causing the structure to be demolished.
ACKNOWLEDGEMENTS
The financial support of the Deutsche Forschungsgemeinschaft DFG (German Science Foundation) within the framework of the Collaborative Research Centre SFB 477 "Monitoring of Structures" is gratefully acknowledged.