Abstract
Structural nonlinearities are often spatially localized, such as joints and interfaces, localized damage, or isolated connections, in an otherwise linearly behaving system. Quinn and Brink (2021, “Global System Reduction Order Modeling for Localized Feature Inclusion,” ASME J. Vib. Acoust., 143(4), p. 041006) modeled this localized nonlinearity as a deviatoric force component. In previous work (Najera-Flores, D. A., Quinn, D. D., Garland, A., Vlachas, K., Chatzi, E., and Todd, M. D., 2023, “A Structure-Preserving Machine Learning Framework for Accurate Prediction of Structural Dynamics for Systems With Isolated Nonlinearities”), the authors proposed a physics-informed machine learning framework to determine the deviatoric force from measurements obtained only at the boundary of the nonlinear region, assuming a noise-free environment. However, in real experimental applications, the data are expected to contain noise from a variety of sources. In this work, we explore the sensitivity of the trained network by comparing the network responses when trained on deterministic (“noise-free”) model data and on model data with additive noise (“noisy”). Because the neural network does not yield a closed-form transformation from the input distribution to the response distribution, we leverage conformal sets to characterize this sensitivity. Under the exchangeability assumption of conformal prediction, we build distribution-free prediction intervals for the responses of the networks trained on both the clean and noisy training sets. This work explores the application of conformal sets for uncertainty quantification of a deterministic structure-preserving neural network and its deployment in a structural health monitoring framework to detect deviations from a baseline state based on noisy measurements.
1 Introduction
Structural health monitoring (SHM) systems are key enabling capabilities that inform reliable and robust operations of modern structures, which are typically exposed to environments that may cause damage or changes in the system. These SHM systems seek to provide actionable information about both the current health state of the system and future limit states, but they require access to a digital twin that is able to provide response predictions in near real-time. These digital twins are computational models of engineering systems or components that are faithful representations of the deployed system in the field [1–4]. Digital twins require continuous updates informed by data obtained from sensor systems embedded in the physical systems [5,6]. Since they are meant to operate in real-time aided by continuous data streams, they offer an enhanced ability to monitor the evolving health of a system and predict the future response to both nominal operating conditions and unexpected excursions in loading environments [7–9].
To be truly effective and useful in SHM systems, digital twins need to be able to account for nonlinearities arising as a result of damage or changes to the structure. A wide range of nonlinearities present within structural systems are confined to localized regions of the structure [10]. Examples include nonlinearities introduced by joints and interfaces, localized damage, or isolated regions with nonlinear connections. As such, the majority of the system may be characterized as “linear” or otherwise known (in the sense that other nonlinearities are known); however, the overall system response is nonetheless significantly influenced by the presence of the isolated nonlinearities, and accurate models must resolve these regions of influence. In such cases, the linear modes of the system still provide an adequate description of the majority of the structure external to the nonlinear regions, but the coupling between these modes arising from the nonlinearities may no longer be neglected. For systems with isolated nonlinearities, Quinn and Brink [11] formulated the equations of motion in such a way as to isolate the effect of the nonlinearities on the underlying linear structure. In that formulation, the effect of the nonlinearities is introduced through a deviatoric force acting on the ideal linear system, representing only the contribution from the nonlinearities and localized to the boundary of the nonlinear region. As a result, the global modes of the linear structure continue to serve as a framework for the development of reduced order models, while only the isolated nonlinear regions need to be resolved in an associated nonlinear system whose domain covers only the nonlinear region [11]. Furthermore, the authors previously developed a structure-preserving approach to model these deviatoric forces with machine learning in a model-agnostic way.
The proposed approach replaced the nonlinear deviatoric component with a trained neural network embedded in a structure-preserving architecture [12].
To provide information that is actionable, SHM systems need to deal with the uncertainty of the predictions provided by the digital twin system in the presence of noise from the measurements that serve as inputs to the model. The previous work by the authors [12] considered only noise-free data. However, in real experimental applications, the data are expected to contain noise, which motivates this study of its effect on the ML system predictions. The authors of Ref. [13] summarized a variety of ways to perform uncertainty quantification in ML systems, including Monte Carlo dropout [14], Bayesian neural networks [15], neural network ensembles [16], and spectral-normalized neural Gaussian processes [17], among others. However, all of these methods require special treatment of the neural network architecture through the inclusion of specific layer types or modified training schemes. For cases where a deterministic neural network is available, an approach to characterizing the observational uncertainty (e.g., from measurement noise in the inputs) in an ML system is through conformal predictions [18].
While traditional estimators such as the jackknife estimator capture measurement uncertainty, conformal prediction methods are able to additionally capture model bias [19]. Here, the jackknife+ estimator is used to construct measures of uncertainty for the damage parameter at each snapshot. In this work, we illustrate a general structure with an isolated nonlinearity formulation, present a structure-preserving machine learning methodology for estimating the deviatoric force at the boundary, and apply conformal methods to measure the uncertainties about the estimated deviatoric forces. We demonstrate how these uncertainty measures can be leveraged to identify changes in the structure in the presence of noise through hypothesis testing. The paper is organized as follows. Section 2 presents a summary of previous work and the enhancements to existing methods in this paper. Section 3 illustrates the application of the proposed approach on a numerical example while Sec. 4 provides conclusions and discussion of future work opportunities.
2 Background and Methodology
This paper leverages a structure-preserving machine learning method developed by the authors in Ref. [12]. We briefly describe the methodology here. Sections 2.1 and 2.2 summarize previous work that forms the foundation for this paper while Sec. 2.3 describes the statistical inference methods used to quantify uncertainty in the model.
2.1 Isolated Nonlinearity Formulation.
where represents the response of the ideal system (i.e., in the absence of a deviatoric force) and is the deviatoric response defined as . Furthermore, the subscripts represent the following regions:
c, the DOFs in that are not coupled to the interface DOFs;
α, the DOFs in that are coupled to the interface DOFs;
β, the DOFs in that are coupled to the interface DOFs; and
n, the DOFs in that are not coupled to the interface DOFs.
The advantage of this formulation is that the deviatoric force is expressed as a function of which is available at the boundary of the isolated region. This formulation avoids the need to have access to the interior of the isolated region which may not be physically accessible (e.g., a joint). This work assumes that is obtained by measuring the response at the boundary (e.g., with sensors) and that these measurements may be noisy.
2.2 Structure-Preserving Machine Learning.
where corresponds to the difference between the measured and ideal response at the boundary of the isolated region. The coefficients λ and γ are learned during training, K is the polynomial degree chosen, and the function represents the MLP. More details are provided in the original articles [12,21]. A diagram illustrating the neural network architecture is shown in Fig. 2.
where is a short-hand form for Eq. (12), for any set of initial conditions or forcing function. It should be noted that the deviatoric force is not directly used for training. Instead, the residual of the equation of motion of the system is used to train the network. This formulation enforces preservation of the underlying geometric structure of the dynamic system.
2.3 Statistical Inference on Deviatoric Forces.
In practice, the responses z are sampled from some nonbifurcated (unimodal) distribution that induces a distribution on , denoted as . Characterizing allows for building prediction intervals on functions of samples from . If the distribution on z is known, it is possible to obtain the distribution of through the transformation defined in Eq. (15). Commonly, the distribution on z is unknown, leading to a nonparametric characterization of .
for some . The bias reduction of Eq. (18) relative to Eq. (17) makes Eq. (16) a desirable estimator.
Defining in Eq. (16), the estimation process follows as responses of the neural network in Eq. (15). To this end, Eq. (16) assumes that the true response of is the parameter θ. For different training sets, or cross-validated data subsets, the value of θ varies from sample to sample. This induces a machine-learning model bias into the estimation process. Therefore, a conformal prediction method is applied to capture and correct for this bias, much as the jackknife estimator captures the bias in the traditional nonparametric setting [19].
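As a concrete sketch of the jackknife+ construction, the snippet below builds a prediction interval from leave-one-out residuals. A linear least-squares model stands in for the neural network, and the function name, toy data, and quantile handling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def jackknife_plus_interval(X, y, x_test, alpha=0.05):
    """Jackknife+ prediction interval with a linear least-squares
    base learner standing in for the trained network."""
    n = len(y)
    lo, hi = [], []
    for i in range(n):
        mask = np.arange(n) != i
        # leave-one-out fit of the base learner
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        r_i = abs(y[i] - X[i] @ coef)   # leave-one-out residual
        mu_i = x_test @ coef            # leave-one-out prediction at test point
        lo.append(mu_i - r_i)
        hi.append(mu_i + r_i)
    # (1 - alpha) empirical quantiles over the n leave-one-out values
    q = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)
    return np.quantile(lo, 1 - q), np.quantile(hi, q)

# toy regression data: y = 1 + 2 x + noise
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
lo, hi = jackknife_plus_interval(X, y, np.array([1.0, 0.5]))
```

Because each leave-one-out model contributes both its own prediction and its own residual, the resulting interval accounts for model variability across training subsets as well as observational noise, which is the bias-capturing property exploited here.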
3 Results
This section describes an example problem and demonstrates the proposed approach for defining predictive intervals and detecting domain shifts through hypothesis testing.
3.1 Example Description.
where is the modal transformation matrix.
following the regularized formulation presented in Ref. [12].
with , and ρ represents the level of hysteresis included. For the data generated for training the network, .
3.2 Noise Model Description.
where ν is the noise factor, which can also be interpreted as the reciprocal of the signal-to-noise ratio since it represents the ratio of the noise variance to the signal variance.
and similarly for the deviatoric velocities, .
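This additive noise model can be sketched as follows; the function name and the sinusoidal stand-in signal are illustrative assumptions, with the noise variance set to a factor ν of the signal variance, consistent with the noise-factor definition above:

```python
import numpy as np

def add_noise(signal, nu, rng):
    """Additive white Gaussian noise whose variance is a factor `nu`
    of the signal variance (nu = 1/SNR in the variance sense).
    Illustrative sketch, not the authors' implementation."""
    sigma = np.sqrt(nu * np.var(signal))
    return signal + rng.normal(0.0, sigma, size=signal.shape)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5 * t)           # stand-in boundary response
noisy = add_noise(clean, nu=0.01, rng=rng)  # 1% noise factor
```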
3.3 Effect of Noise on Model Performance.
The noise observed in the inputs is propagated through the trained neural network when evaluating the deviatoric force term. As a result, the outputs are noisy as well. Because the neural network does not yield a closed-form transformation from the input distribution to the response distribution, we leverage conformal sets to characterize this sensitivity, which enables the definition of predictive confidence intervals. We start by assessing the network's performance when trained with noisy and clean (i.e., noise-free) data. To this end, the network was trained with one simulated realization using the rod model. Random noise was added to the response as described in Eq. (31) with .
For the following results, the trained networks were evaluated with a different realization (i.e., with different initial conditions than those used for training). The deviatoric force was modeled as described in Eq. (28), where only linear terms were included in the polynomial expansion (based on prior knowledge of how the interface displacements relate to the forces), and the MLP consisted of five hidden layers with Swish activation functions and the following number of units: , which were determined from a hyperparameter grid search. The output layer had a linear activation function. The model was trained using the Adam optimizer [25] with a learning rate of for 65,000 epochs, which took around two hours on a single graphics processing unit. The neural network was implemented using the Jax [26] and Flax [27] packages. Figure 3 illustrates the predicted deviatoric force across the interface when given noisy inputs. Both the network trained with noisy data and the network trained with clean data predict very similar outputs when confronted with noisy data during inference. This result is illustrated in Figs. 4 and 5. As shown, the distributions predicted by the two networks are indistinguishable from each other, which indicates that the network is robust to the presence of noise (at least additive white noise). This result may be due to the structure-preserving constraints in the network, which have a regularization effect. It should be noted that the network trained with noisy data required twice as many training epochs to reach the same level of accuracy as the network trained with clean data.
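For illustration, the forward pass of such a Swish MLP can be sketched in plain NumPy; the layer widths below are hypothetical placeholders rather than the grid-search values, and the code is a stand-in for the Flax implementation:

```python
import numpy as np

def swish(x):
    # Swish / SiLU activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def mlp_forward(z, params):
    """Forward pass of a five-hidden-layer Swish MLP with a linear
    output layer, mirroring the architecture described in the text."""
    h = z
    for W, b in params[:-1]:
        h = swish(h @ W + b)   # hidden layers with Swish
    W, b = params[-1]
    return h @ W + b           # linear output layer

# hypothetical layer widths: input, five hidden layers, output
rng = np.random.default_rng(0)
widths = [4, 32, 32, 32, 32, 32, 2]
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]
f_dev = mlp_forward(rng.normal(size=(10, 4)), params)  # batch of 10 inputs
```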
To further evaluate the robustness of the network to noise, the network trained with clean data was used inside a time integration loop and random noise (at 1% level) was added at every integration step. Figure 6 illustrates the effect of the noise on the integrated response. As shown, the mean response can still be recovered even when noise is added during integration. While the effect of error accumulation is evident in these plots, the response did not diverge for the time range that was considered.
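The noise-in-the-loop integration can be illustrated with a toy example; the damped linear oscillator, forward-Euler scheme, and instantaneous noise scaling below are simplifications standing in for the rod model and the network-in-the-loop integrator:

```python
import numpy as np

def integrate_with_noise(nu, steps=2000, dt=1e-3, rng=None):
    """Forward-Euler integration of a damped linear oscillator with
    noise injected into the state measurement at every step; a toy
    stand-in for the rod model with the trained network in the loop."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, v = 1.0, 0.0
    hist = np.empty(steps)
    for k in range(steps):
        # noisy "measurement" of the displacement feeds the force term;
        # noise is scaled to the instantaneous response level
        x_meas = x + np.sqrt(nu) * abs(x) * rng.normal()
        a = -100.0 * x_meas - 0.5 * v   # restoring + damping forces
        v += a * dt
        x += v * dt
        hist[k] = x
    return hist

clean = integrate_with_noise(0.0)
# ensemble mean over independent noisy runs at a 1% noise factor
noisy_mean = np.mean([integrate_with_noise(0.01, rng=np.random.default_rng(s))
                      for s in range(20)], axis=0)
```

In this sketch, as in the results above, the per-step noise perturbs individual trajectories but the ensemble-mean response remains close to the noise-free integration over the simulated window.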
3.4 Prediction Intervals and Damage Detection.
The next step is to define predictive intervals for the network using the conformal regression approach outlined in Sec. 2.3. To this end, the Python package MAPIE [28] was used to define 95% confidence intervals for the trained models using the jackknife+ method. The intervals are obtained by fitting a conformal regression model to a “calibration” set, but the results presented here are for a different realization (i.e., different initial conditions). Figures 7–9 illustrate the confidence intervals obtained for noise levels of 5%, 10%, and 20%, respectively. As illustrated, the predicted confidence intervals provide reasonable coverage of the data (i.e., roughly 95% of the data is covered by the interval).
Now we test whether these predictive intervals can be informative in determining whether damage is present in the structure based on response measurements at the boundary of the isolated region. To simulate damage, the parameter ρ that controls the level of hysteresis in the structure is set to 0, 1, 10, or 20. These levels define the baseline, low damage, medium damage, and high damage cases, respectively. These cases were simulated and compared to the predicted response from the neural network that had been trained with noisy data from the baseline model. This process was repeated for all three levels of damage and the three noise levels previously considered. For example, Fig. 10 corresponds to low-level damage and 5% noise. In this case, it is evident that there has been a shift in the system response, as the response of the damaged system is now offset from the mean of the distribution. However, when more noise is added, as illustrated in Fig. 11, this shift becomes less apparent. In contrast, high-level damage is apparent even in the presence of 20% noise, as shown in Fig. 12. These results illustrate the challenge of distinguishing meaningful domain shifts in the presence of noise.
The next step is to propagate these intervals through the dynamic process via time integration to obtain the response across the isolated region. For the following examples, a noise factor of 1% was used in order to avoid numerical convergence problems during time integration. For conciseness, only the velocity across the isolated region is shown. Figure 13 shows the time histories of the velocity and their corresponding power spectral density (PSD). The three levels of damage (blue) are plotted alongside the baseline response (black). As shown, it is hard to distinguish between the cases from visual inspection, which motivates a more quantitative assessment. Another way to inspect the data is to look at the histograms of the samples at different time or frequency points. Figure 14 illustrates the sample distributions at different points in time from the time histories, and Fig. 15 shows the histograms at different frequencies from the PSDs. As shown, the distributions exhibit heteroscedastic behavior and, as a result, the trends observed in the histograms are not constant through time or frequency.
To further understand how the distributions vary through time and frequency, a Kolmogorov–Smirnov (KS) two-sample test was performed to compare the damaged cases to the baseline using the implementation in the SciPy package [29]. In this test, the null hypothesis (H0) is that the two samples (i.e., baseline and damaged) are from the same continuous distribution, while the alternative hypothesis (H1) is that they are from two different continuous distributions. The physical interpretation of failing to reject the null hypothesis is that there is no evidence of damage (i.e., the two datasets are consistent with a “healthy” baseline state). Here, we use a significance level of 5%, so p-values larger than 0.05 indicate insufficient evidence to distinguish the response from the baseline state, while p-values smaller than 0.05 indicate the presence of damage. Figure 16 illustrates the p-values obtained from the KS test as a function of time (left) and frequency (right). As shown, the statistical significance of the test depends on the damage level, which is not surprising: as shown in the histograms in Figs. 14 and 15, the shift in the empirical distributions becomes more evident as the damage level increases. Moreover, the peaks in the p-values from the time history appear to correspond to zero-crossings, where differences in the response may not be evident. On the frequency side, the test appears to be more effective at detecting the damaged cases at higher frequencies. This result implies that the type of damage modeled (i.e., hysteresis) may have a more pronounced effect on the higher frequency content. The time history p-values are more consistent through time because the high-frequency response is distributed across time. Finally, to combine the two concepts, the short-time Fourier transform (STFT) was used to compute the change in frequency as a function of time, and the KS test was performed at each frequency and time cell.
These results are plotted in Fig. 17, where the upper limit of the color bar was set to so that yellow cells indicate that the null hypothesis was not rejected (i.e., no different from the baseline). These results are consistent with the PSD and time history results in that the level 1 (low) damage is hard to detect with the KS test and the damage is not evident in the lower frequency response. A similar analysis was performed with the Cramér–von Mises (CVM) test [30,31] to verify that similar trends were observed. The p-value as a function of time and frequency is plotted in Fig. 18. The CVM test exhibits lower sensitivity to the domain shift in the STFT, as illustrated by the large yellow regions in Fig. 18, which indicate cells where the null hypothesis was not rejected. However, the number of cells for which the CVM test rejects the null hypothesis (i.e., blue regions) increases as the damage level increases.
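As a self-contained illustration of the two-sample testing used above, the snippet below applies SciPy's ks_2samp to synthetic baseline and shifted samples; the Gaussian draws and the size of the mean shift are illustrative assumptions, not the rod-model responses:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, size=500)  # stand-in healthy-state samples
damaged = rng.normal(0.5, 1.0, size=500)   # stand-in damaged-state samples (mean shift)

# H0: both samples come from the same continuous distribution
result = ks_2samp(baseline, damaged)
reject_h0 = result.pvalue < 0.05  # damage indicated at the 5% significance level
```

In the STFT analysis above, the same test is simply repeated for the samples falling in each time-frequency cell, and the resulting p-value map is thresholded at the significance level.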
4 Conclusions
This paper presented a framework for uncertainty quantification of neural network response predictions through the definition of predictive intervals using conformal regression. The proposed approach was developed in the context of a structure-preserving neural network that has an embedded isolated nonlinearity formulation to provide physical constraints to the problem. As this model is intended to be used in an online structural health monitoring system, the performance of the models in the presence of measurement noise was evaluated. It was shown that the trained models were not significantly affected by noise in the inputs. This noise was propagated through the machine learning model and through the time integration of the dynamic system. Conformal regression was used to define a predictive interval of the integrated response that was used to assess the presence of damage (in the form of hysteresis) through hypothesis testing. Future work will explore other sources of uncertainty (such as epistemic uncertainty) and will apply the proposed approach to experimental cases.
Acknowledgment
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-NA-0003525.
Funding Data
Sandia National Laboratories (Contract No. DE-NA-0003525; Funder ID: 10.13039/100006234).
Data Availability Statement
The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.