Deep Operator Network (DeepONet), a recently introduced deep learning operator network, approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to solution functions, in contrast to classical neural networks (NNs), which need retraining for every new set of parametric inputs. In this work, we extend the classical DeepONet formulation by introducing recurrent neural networks (RNNs) in the branch network, yielding the so-called sequential DeepONets (S-DeepONets), which allow accurate solution predictions over the entire domain for parametric and time-dependent loading histories. We demonstrate the generality and exceptional accuracy of this novel formulation on highly nonlinear thermal solidification and plastic deformation use cases with random thermal and mechanical loading histories. We show that, once properly trained, an S-DeepONet can accurately predict the final solutions over the entire domain for arbitrary loading histories without additional training, and is several orders of magnitude more computationally efficient than the finite element method.
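As a minimal sketch of the architecture described above (the dimensions, random weights, and vanilla-RNN branch are illustrative assumptions, not the paper's exact configuration): a branch RNN encodes the loading history, a trunk MLP encodes a query coordinate, and the prediction is the dot product of the two embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: load-history length, RNN width, latent dimension p
seq_len, hidden, p = 20, 16, 8

# Branch: a minimal vanilla RNN that encodes the loading history u(t_1..t_T)
Wx = rng.normal(0, 0.1, (1, hidden))
Wh = rng.normal(0, 0.1, (hidden, hidden))
Wb = rng.normal(0, 0.1, (hidden, p))

def branch(u):                     # u: (seq_len,) scalar load history
    h = np.zeros(hidden)
    for ut in u:                   # unroll the RNN over the history
        h = np.tanh(ut * Wx[0] + h @ Wh)
    return h @ Wb                  # p latent coefficients

# Trunk: an MLP evaluated at a spatial query point x
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, p)); b2 = np.zeros(p)

def trunk(x):                      # x: (2,) spatial coordinates
    return np.tanh(x @ W1 + b1) @ W2 + b2

def s_deeponet(u, x):
    # DeepONet output = dot product of branch and trunk embeddings
    return float(branch(u) @ trunk(x))

u = np.sin(np.linspace(0.0, 3.0, seq_len))   # an arbitrary loading history
y = s_deeponet(u, np.array([0.5, 0.5]))
```

Training would fit the branch and trunk weights to simulation data; here they are random, so only the shapes and data flow are meaningful.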
This paper explores the possibility of using the novel Deep Operator Networks (DeepONets) for forward analysis of numerically intensive and challenging multiphysics designs and optimizations of advanced materials and processes. As an important step toward that goal, DeepONets were devised and trained on GPUs to solve the Poisson equation (heat-conduction equation) with a spatially variable heat source, as well as highly nonlinear stress distributions under plastic deformation with variable loads and material properties. Because DeepONet can learn the parametric solutions of various phenomena and processes in science and engineering, it was found that a properly trained DeepONet can instantly and accurately infer thermal and mechanical solutions for new parametric inputs without retraining or transfer learning, several orders of magnitude faster than classical numerical methods.
In hydrology, projected climate change impact assessment studies typically rely on ensembles of downscaled climate model outputs. Because of large modeling uncertainties, the ensembles are often averaged to provide a basis for studying the effects of climate change. A key issue when analyzing averages of a climate model ensemble is whether to weight all models in the ensemble equally, often referred to as the equal‐weights or unweighted approach, or to use a weighted approach, in which, in general, each model receives a different weight. Many studies have advocated for the latter, based on the assumption that models that are better at simulating the past, that is, models with higher hindcast accuracy, will give more accurate forecasts for the future and thus should receive higher weights. To examine this issue, observed and modeled daily precipitation frequency (PF) estimates were analyzed for three urban areas in the United States: Boston, Massachusetts; Houston, Texas; and Chicago, Illinois. The comparison used the raw output of 24 Coupled Model Intercomparison Project Phase 5 (CMIP5) models. The PFs from these models were compared with the observed PFs for a specific historical training period to determine model weights for each area. The unweighted and weighted averaged model PFs from a more recent testing period were then compared with their corresponding observed PFs to determine whether the weights improved the estimates. These comparisons showed that the weighted averages were closer to the observed values than the unweighted averages in nearly all cases. The study also demonstrated how weights can help reduce model spread in future climate projections by comparing the unweighted and weighted ensemble standard deviations in these projections. In all studied scenarios, the weights reduced the standard deviations relative to the equal‐weights approach.
Finally, an analysis of the results' sensitivity to the areal reduction factor used to allow comparisons between point station measurements and grid‐box averages is provided.
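The hindcast-based weighting described above can be sketched as follows (the error values, projections, and inverse-error weighting rule are hypothetical illustrations; the study's actual weighting scheme may differ):

```python
import numpy as np

# Hypothetical hindcast errors |model PF - observed PF| for 5 models,
# and their (equally hypothetical) future PF projections in mm/day
errors = np.array([0.8, 0.3, 1.2, 0.5, 0.9])
proj = np.array([41.2, 38.7, 45.1, 39.9, 43.0])

# Skill-based weights: inverse hindcast error, normalized to sum to 1
# (one common choice; other skill scores lead to different weights)
w = (1.0 / errors) / np.sum(1.0 / errors)

unweighted_mean = proj.mean()
weighted_mean = np.sum(w * proj)

# Ensemble spread: standard deviation about the respective mean
unweighted_std = proj.std()
weighted_std = np.sqrt(np.sum(w * (proj - weighted_mean) ** 2))
```

With these illustrative numbers, the weighted spread comes out smaller than the unweighted one, mirroring the spread reduction the study reports; with other data the weights need not reduce the spread.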
A graph convolutional network (GCN) is employed in the deep energy method (DEM) model to solve the momentum balance equation in three‐dimensional space for the deformation of linear elastic and hyperelastic materials, owing to its ability to handle irregular domains, an advantage over the traditional DEM based on a multilayer perceptron (MLP) network. The method's accuracy and solution time are compared to those of the MLP-based DEM model. Through numerical examples, we demonstrate that the GCN‐based model delivers similar accuracy with a shorter run time. Two spatial gradient computation techniques, one based on automatic differentiation (AD) and the other on shape function (SF) gradients, are also assessed. We provide a simple example to demonstrate the strain localization instability associated with AD‐based gradient computation and show, through four numerical examples, that the instability also arises in more general cases. The SF‐based gradient computation is shown to be more robust, delivering accurate solutions even at severe deformations. Therefore, the combination of the GCN‐based DEM model and SF‐based gradient computation is a promising candidate for solving problems involving severe material and geometric nonlinearities.
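A one-dimensional illustration of the SF-based gradient idea (a generic linear-element strain computation, not the paper's 3D implementation): the strain is obtained from nodal values through fixed shape-function gradients, rather than by automatic differentiation of the network output.

```python
import numpy as np

def sf_strain(u_nodes, h):
    """Strain in a 2-node 1D linear element of length h from nodal
    displacements, via the constant shape-function gradient matrix B.
    (Minimal 1D sketch; a 3D element uses a full B matrix per
    integration point.)"""
    B = np.array([-1.0, 1.0]) / h   # dN/dx for linear shape functions
    return B @ u_nodes              # constant strain in the element

# Element of length 0.1 with right node displaced by 0.01 -> strain 0.1
eps = sf_strain(np.array([0.0, 0.01]), h=0.1)
```

Because B depends only on the mesh, this gradient is fixed during training, which is one way to see why it avoids the AD-related localization instability discussed above.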
This paper explores the possibilities of applying physics-informed neural networks (PINNs) in topology optimization (TO) by introducing a fully self-supervised TO framework based on PINNs. The framework solves the forward elasticity problem with the deep energy method (DEM). Instead of training a separate neural network to update the density distribution, we leverage the fact that the compliance minimization problem is self-adjoint to express the element sensitivity directly in terms of the displacement field from the DEM model; thus, no additional neural network is needed for the inverse problem. The method of moving asymptotes is used as the optimizer for updating the density distribution. The implementation of Neumann, Dirichlet, and periodic boundary conditions is described in the context of the DEM model. Three numerical examples are presented to demonstrate the framework's capabilities: (i) compliance minimization in 2D under different geometries and loadings, (ii) compliance minimization in 3D, and (iii) maximization of the homogenized shear modulus to design 2D metamaterial unit cells. The results show that the optimized designs from the DEM-based framework are comparable to those generated by the finite element method and shed light on a new way of integrating PINN-based simulation methods into classical computational mechanics problems.
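The self-adjoint shortcut mentioned above can be sketched with the standard SIMP sensitivity (a textbook form with hypothetical numbers; the paper's exact expressions may differ): the compliance sensitivity follows directly from the displacement field, with no adjoint solve and no extra network.

```python
import numpy as np

def compliance_sensitivity(rho, penal, Ke, u_elems):
    """Standard SIMP sensitivity for compliance minimization:
        dC/drho_e = -penal * rho_e**(penal-1) * u_e^T K_e u_e
    rho: (n_elem,) densities; Ke: (ndof, ndof) element stiffness;
    u_elems: (n_elem, ndof) element displacement vectors."""
    strain_energy = np.einsum('ei,ij,ej->e', u_elems, Ke, u_elems)
    return -penal * rho ** (penal - 1) * strain_energy

# Tiny illustrative data (hypothetical, not from the paper)
Ke = np.array([[ 2.0, -1.0],
               [-1.0,  2.0]])
rho = np.array([0.5, 1.0])
u = np.array([[0.1, 0.0],
              [0.2, 0.1]])
dC = compliance_sensitivity(rho, penal=3.0, Ke=Ke, u_elems=u)
```

The sensitivities are non-positive (adding material never increases compliance), which is what an optimizer such as the method of moving asymptotes consumes at each density update.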
Using recent advancements in high-performance computing data assimilation to combine satellite InSAR data with numerical models, the prolonged unrest of the Sierra Negra volcano in the Galápagos was tracked to provide a fortuitous, but successful, forecast 5 months in advance of the 26 June 2018 eruption. Subsequent numerical simulations reveal that the evolution of the stress state in the host rock surrounding the Sierra Negra magma system likely controlled eruption timing. While changes in magma reservoir pressure remained modest (<15 MPa), modeled widespread Mohr-Coulomb failure is coincident with the timing of the 26 June 2018 moment magnitude 5.4 earthquake and subsequent eruption. Coulomb stress transfer models suggest that the faulting event triggered the 2018 eruption by encouraging tensile failure along the northern portion of the caldera. These findings provide a critical framework for understanding Sierra Negra’s eruption cycles and evaluating the potential and timing of future eruptions.
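The Coulomb stress transfer argument above rests on the standard Coulomb failure stress change; a minimal sketch (the friction coefficient and stress changes below are hypothetical illustrations, not the study's modeled values):

```python
def delta_cfs(d_shear, d_normal_eff, mu_eff=0.4):
    """Standard Coulomb failure stress change on a receiver fault:
        dCFS = d_tau + mu_eff * d_sigma_n
    d_shear: shear stress change (MPa, positive in the slip direction);
    d_normal_eff: effective normal stress change (MPa, positive = unclamping);
    mu_eff: effective friction coefficient (hypothetical value).
    Positive dCFS brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal_eff

dcfs = delta_cfs(0.3, 0.5)   # hypothetical stress changes in MPa
```

In the scenario described above, a positive dCFS on the northern caldera faults from the magnitude 5.4 earthquake would encourage the tensile failure that opened the eruption pathway.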
Physics‐informed neural networks (PINNs) have attracted growing interest, in particular for solving the partial differential equations governing various physical phenomena. However, PINN models suffer from several issues and can fail to provide accurate solutions in many scenarios. We discuss a few of these challenges and the techniques, such as the use of the Fourier transform, that can resolve them. This paper proposes and develops a PINN model that combines the residuals of the strong form and the potential energy, yielding many loss terms that contribute to the loss function to be minimized. Hence, we propose using the coefficient of variation weighting scheme to dynamically and adaptively assign the weight of each loss term in the loss function. The developed PINN model is standalone and meshfree; in other words, it can accurately capture the mechanical response without requiring any labeled data. Although the framework can be used for many solid mechanics problems, we focus on three‐dimensional (3D) hyperelasticity, where we consider two hyperelastic models. Once the model is trained, the response can be obtained almost instantly at any point in the physical domain, given its spatial coordinates. We demonstrate the framework's performance by solving different problems with various boundary conditions.
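A minimal sketch of a coefficient-of-variation weighting scheme of the kind described above (a simplified, generic variant; the paper's exact formulation, history window, and normalization may differ):

```python
import numpy as np

def cov_weights(loss_history):
    """Weight each loss term in proportion to the coefficient of
    variation (std / mean) of its recent loss history, so terms whose
    loss still fluctuates strongly receive more emphasis.
    loss_history: (steps, n_terms) array of per-term loss values."""
    h = np.asarray(loss_history)
    cov = h.std(axis=0) / (h.mean(axis=0) + 1e-12)  # avoid divide-by-zero
    return cov / cov.sum()                          # normalize to sum to 1

# Hypothetical histories for 3 loss terms, e.g. strong-form residual,
# potential energy, and boundary conditions (values are illustrative)
history = np.array([[1.0, 0.50, 0.20],
                    [0.8, 0.49, 0.10],
                    [0.5, 0.51, 0.05]])
w = cov_weights(history)
total_loss = np.dot(w, history[-1])   # weighted loss at the current step
```

Here the nearly constant second term gets the smallest weight while the still-decreasing terms dominate, which is the adaptive behavior the scheme is meant to provide.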
Iterative methods are widely used for solving sparse linear systems of equations and eigenvalue problems, and their performance depends on the conditioning of the linear systems. This work explores factors that affect the conditioning of the discretized system, including material heterogeneity, different constitutive characteristics, and element sizes, and reveals the dependencies between solver performance and the conditioning of the linear systems. The results show that multiple materials can alter the eigenvalue distribution significantly, and that lowering Young's modulus results in higher condition numbers but has less effect on the spectral range; additionally, the condition number scales approximately with the reciprocal square of the element size. These entangled effects, along with the chosen preconditioners, mean that there is no simple monotonically increasing dependency between condition number and solution time, except under specific conditions. It is hoped that this work will provide a better understanding of the behavior of iterative sparse linear solvers in similar structural problems.
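The reciprocal-square relation between element size and condition number can be checked on the simplest possible case, the 1D linear-finite-element stiffness matrix (an illustrative toy, not the structural problems studied in the work):

```python
import numpy as np

def stiffness_cond(n):
    """Condition number of the 1D linear-FE stiffness matrix
    (tridiagonal [-1, 2, -1] / h) for n interior nodes on a unit
    interval with homogeneous Dirichlet boundary conditions."""
    h = 1.0 / (n + 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    return np.linalg.cond(K)

# n = 20 interior nodes gives h = 1/21; n = 41 gives h = 1/42, i.e. h/2.
c_coarse = stiffness_cond(20)
c_fine = stiffness_cond(41)
ratio = c_fine / c_coarse   # close to 4, consistent with kappa ~ 1/h**2
```

Halving the element size roughly quadruples the condition number, which is the scaling the abstract refers to; real 3D structural systems add the material and constitutive effects discussed above on top of this mesh-size dependence.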
Solidifying steel exhibits highly nonlinear thermo-mechanical behavior that depends on the loading history, temperature, and metallurgical phase fractions (liquid, ferrite, and austenite). Numerical modeling with a computationally challenging multiphysics approach is performed on high-performance computing resources to generate sufficient training and testing data for subsequent deep learning. We demonstrate how innovative sequence deep learning methods can learn from multiphysics modeling data of a solidifying slice traveling through a continuous caster and correctly and instantly capture the complex history- and temperature-dependent phenomena in test samples never seen by the deep learning networks.