This article introduces a computational design framework for obtaining three-dimensional (3D) periodic elastoplastic architected materials with enhanced performance under uniaxial or shear strain. A nonlinear finite element model accounting for plastic deformation is developed, in which a Lagrange multiplier approach is used to impose periodicity constraints. The analysis assumes that the material obeys a von Mises plasticity model with linear isotropic hardening. The finite element model is combined with a corresponding path-dependent adjoint sensitivity formulation, which is derived analytically. The optimization problem is parametrized using the solid isotropic material penalization (SIMP) method. Designs are optimized for either end compliance or toughness for a given prescribed displacement. The framework produces materials with enhanced performance through much better utilization of the elastoplastic material. Several 3D examples demonstrate the effectiveness of the mathematical framework.
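To make the constitutive assumption concrete, the sketch below implements a standard radial-return update for small-strain von Mises plasticity with linear isotropic hardening, the material model named above; the material constants, function names, and the small-strain tensor form are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def radial_return(eps_new, eps_p_old, alpha_old,
                  E=200e3, nu=0.3, sigma_y=250.0, H=1e3):
    """One radial-return update for small-strain von Mises plasticity with
    linear isotropic hardening (inputs are 3x3 strain tensors).
    Material constants are illustrative placeholders."""
    mu  = E / (2.0 * (1.0 + nu))                     # shear modulus
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))   # Lame parameter
    eye = np.eye(3)

    # elastic trial state
    eps_e_trial = eps_new - eps_p_old
    sigma_trial = lam * np.trace(eps_e_trial) * eye + 2.0 * mu * eps_e_trial
    s_trial = sigma_trial - np.trace(sigma_trial) / 3.0 * eye  # deviatoric part
    norm_s  = np.linalg.norm(s_trial)

    # yield check against the current hardened yield surface
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_old)
    if f_trial <= 0.0:                               # elastic step
        return sigma_trial, eps_p_old, alpha_old

    # plastic corrector (radial return)
    dgamma = f_trial / (2.0 * mu + 2.0 * H / 3.0)
    n_dir  = s_trial / norm_s
    sigma  = sigma_trial - 2.0 * mu * dgamma * n_dir
    eps_p  = eps_p_old + dgamma * n_dir
    alpha  = alpha_old + np.sqrt(2.0 / 3.0) * dgamma
    return sigma, eps_p, alpha
```

In a SIMP-parametrized design, the elastic constants entering such an update would additionally be interpolated by the penalized density variable.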
Deep learning (DL) and the collocation method are merged and used to solve partial differential equations (PDEs) describing the deformation of structures. We consider different types of material behavior: linear elasticity, hyperelasticity (neo-Hookean) with large deformation, and von Mises plasticity with isotropic and kinematic hardening. The performance of this deep collocation method (DCM) depends on the architecture of the neural network and the corresponding hyperparameters. The presented DCM is meshfree and avoids any spatial discretization, which is usually needed for the finite element method (FEM). We show that the DCM can capture the response qualitatively and quantitatively, without the need for any data generation using other numerical methods such as the FEM. Data generation is usually the main bottleneck in most data-driven models. The DL model is trained to learn the network parameters that yield accurate approximate solutions. Once the model is properly trained, solutions can be obtained almost instantly at any point in the domain, given its spatial coordinates. Therefore, the DCM is potentially a promising standalone technique for solving the PDEs involved in the deformation of materials and structural systems, as well as other physical phenomena.
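As a minimal sketch of the deep collocation idea, the snippet below trains a small fully connected network to satisfy the strong form of a 1D linear-elastic bar problem at collocation points; the specific bar problem, network size, and equal penalty weighting of the boundary terms are illustrative assumptions, not the paper's setup.

```python
import torch

# Deep collocation sketch for a 1D linear-elastic bar: EA*u''(x) + b = 0 on (0, 1),
# with u(0) = 0 and EA*u'(1) = t_bar.
EA, b, t_bar = 1.0, 1.0, 0.5

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x  = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)  # collocation points
x0 = torch.zeros(1, 1)                                                  # fixed end
x1 = torch.ones(1, 1).requires_grad_(True)                              # loaded end

def grad(y, z):
    # derivative of y with respect to z, keeping the graph for higher derivatives
    return torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]

for it in range(5000):
    u   = net(x)
    du  = grad(u, x)
    d2u = grad(du, x)
    pde_residual = EA * d2u + b                     # strong-form residual
    traction_res = EA * grad(net(x1), x1) - t_bar   # Neumann condition at x = 1
    loss = (pde_residual ** 2).mean() + (net(x0) ** 2).mean() + (traction_res ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss is built from the PDE residual and boundary terms alone, no reference FEM data are required, which mirrors the data-free character of the DCM described above.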
Fluid-Structure Interaction (FSI) simulations have applications across a wide range of engineering areas. One popular technique for solving FSI problems is the Arbitrary Lagrangian-Eulerian (ALE) method. Both the academic and industrial communities have developed codes implementing the ALE method. One of them is Alya, a Finite Element Method (FEM) based code developed at the Barcelona Supercomputing Center (BSC). By analyzing a simplified artery case and comparing against a commercial Finite Volume Method (FVM) based code, this paper discusses the mathematical background of the solvers for the fluid and solid domains and carries out verification work on Alya's FSI capability. The results show that while both codes provide comparable FSI results, Alya exhibits better robustness due to its Subgrid Scale (SGS) technique for stabilizing the convective term and the subsequent numerical treatments. This code thus opens the door to more extensive use of higher-fidelity finite element based FSI methods in the future.
Among advanced manufacturing techniques for Fiber-Reinforced Polymer-matrix Composites (FRPCs), which are critical for the aerospace, marine, automotive, and energy industries, Frontal Polymerization (FP) has recently been proposed to save orders of magnitude in time and energy. However, the cure kinetics of the matrix phase, usually a thermosetting polymer, complicates the design and control of the process. Here, we develop a deep learning model, ChemNet, to solve an inverse problem: predicting and optimizing the cure kinetics parameters of thermosetting FRPCs for a desired fabrication strategy. ChemNet consists of a fully connected, feedforward, 9-layer deep neural network trained on one million examples, and predicts the activation energy and reaction enthalpy given front characteristics such as speed and maximum temperature. ChemNet provides highly accurate predictions as measured by the mean square error (MSE) and maximum absolute error metrics. The MSE of ChemNet attains values of 1E-4 on the training set and 2E-4 on the test set.
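A hedged sketch of a ChemNet-style architecture is given below: a fully connected, 9-layer feedforward network mapping the two front characteristics (front speed and maximum temperature) to the two cure-kinetics targets (activation energy and reaction enthalpy). Layer widths, activation functions, and optimizer settings are assumptions; only the depth, inputs, outputs, and MSE loss follow the description above.

```python
import torch

# ChemNet-style surrogate: (front_speed, T_max) -> (activation energy, reaction enthalpy).
# Hidden width, ReLU activation, and Adam settings are illustrative assumptions.
width_in, width_hidden, width_out, n_layers = 2, 64, 2, 9
dims = [width_in] + [width_hidden] * (n_layers - 1) + [width_out]

layers = []
for i in range(len(dims) - 1):
    layers.append(torch.nn.Linear(dims[i], dims[i + 1]))
    if i < len(dims) - 2:
        layers.append(torch.nn.ReLU())
chemnet = torch.nn.Sequential(*layers)

loss_fn = torch.nn.MSELoss()                      # the reported training metric is MSE
opt = torch.optim.Adam(chemnet.parameters(), lr=1e-3)
# a training loop over (front_speed, T_max) -> (E_a, delta_H) example pairs would go here
```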
LS-DYNA is a well-known multiphysics code with both explicit and implicit time stepping capabilities. Implicit simulations rely heavily on sparse matrix computations, in particular direct solvers, and are notoriously much harder to scale than explicit simulations. In this paper, we investigate the scalability challenges of the implicit structural mode of LS-DYNA. In particular, we focus on linear constraint analysis, sparse matrix reordering, symbolic factorization, and numerical factorization. Our problem of choice for this study is a thermomechanical simulation of jet engine models built by Rolls-Royce with up to 200 million degrees of freedom, or equations. The models are used for engine performance analysis and design optimization, in particular optimization of tip clearances in the compressor and turbine sections of the engine. We present results using as many as 131,072 cores on the Blue Waters Cray XE6/XK7 supercomputer at NCSA and the Titan Cray XK7 supercomputer at OLCF. Since the main focus is on general linear algebra problems, this work is of interest to all linear algebra practitioners, not only developers of implicit finite element codes.
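The toy example below (plain SciPy, not LS-DYNA code) illustrates one of the bottlenecks discussed above: the effect of a fill-reducing reordering on the sparsity of the factors produced by a direct sparse solver. The matrix is a random symmetric placeholder, not an engine model.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Factor the same sparse matrix with and without a COLAMD column permutation
# and compare the fill-in of the resulting LU factors.
n = 2000
A = sp.random(n, n, density=1e-3, format="csc", random_state=0)
A = (A + A.T + n * sp.identity(n, format="csc")).tocsc()   # symmetric, diagonally dominant

lu_natural = spla.splu(A, permc_spec="NATURAL")   # no fill-reducing ordering
lu_colamd  = spla.splu(A, permc_spec="COLAMD")    # fill-reducing ordering
print("nnz(A)           :", A.nnz)
print("fill, no ordering:", lu_natural.L.nnz + lu_natural.U.nnz)
print("fill, COLAMD     :", lu_colamd.L.nnz + lu_colamd.U.nnz)

b = np.ones(n)
x = lu_colamd.solve(b)                            # numerical factorization reused for solves
```

At the scale of hundreds of millions of equations, the reordering, symbolic, and numerical factorization phases dominate the implicit solve and must themselves be parallelized.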
Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design the algorithms and computing approaches needed to enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion dollar industry and play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for the computational grand challenges brought about by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight and to enable a systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article we present a summary of recent developments in this field and describe specific advances that the authors are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.
Ensemble based data assimilation approaches, such as the Ensemble Kalman Filter (EnKF), have been widely and successfully implemented to combine observations with dynamic models to investigate the evolution of a system's state. Such inversions are powerful tools for providing forecasts as well as "hindcasting" events such as volcanic eruptions to investigate source parameters and triggering mechanisms. In this study, a high performance computing (HPC) adaptation of the EnKF is used to assimilate ground deformation observations from interferometric synthetic-aperture radar (InSAR) into high-fidelity, multiphysics finite element models to evaluate the prolonged unrest and June 26, 2018 eruption of Sierra Negra volcano, Galápagos. The stability of the Sierra Negra magma system is evaluated at each time step by estimating variations in reservoir overpressure, Mohr-Coulomb failure in the host rock, and tensile stress and failure along the reservoir boundary. The deformation of Sierra Negra is tracked over a decade, during which almost 5 meters of surface uplift has been recorded. The EnKF reveals that the evolution of the stress state in the host rock surrounding the Sierra Negra magma reservoir likely controlled the timing of the eruption. While increases in magma reservoir overpressure remained modest (< 10 MPa) throughout the data assimilation time period, significant Mohr-Coulomb failure is indicated in the lead-up to the eruption, coincident with increased seismicity along both trapdoor faults within Sierra Negra's caldera and along the caldera's ring faults. During the final stages of pre-eruptive unrest, the EnKF models indicate limited tensile failure, with no tensile failure along the northern portion of the magma system where the eruption commenced. Most strikingly, model calculations of significant through-going Mohr-Coulomb failure correspond in space and time with a Mw 5.4 earthquake recorded in the hours preceding the 2018 eruption. Subsequent stress modeling implicates the Mw 5.4 earthquake along the southern intra-caldera trapdoor fault as the potential catalyst for tensile failure and dike initiation along the reservoir to the north. In conclusion, the volcano EnKF approach successfully tracked the evolving stability of Sierra Negra, indicating great potential for future forecasting efforts.
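For reference, a generic stochastic EnKF analysis step (perturbed observations) is sketched below; it shows the textbook form of the ensemble update and is not the HPC implementation employed in the study.

```python
import numpy as np

def enkf_analysis(X_f, d_obs, H, R):
    """One stochastic EnKF analysis step with perturbed observations.
    X_f   : (n_state, n_ens) forecast ensemble, e.g. finite element model state/parameters
    d_obs : (n_obs,) observation vector, e.g. InSAR line-of-sight displacements
    H     : (n_obs, n_state) linearized observation operator
    R     : (n_obs, n_obs) observation-error covariance
    Generic textbook form; a sketch only."""
    n_state, n_ens = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)          # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)                 # P_f H^T from the ensemble
    S = H @ P_HT + R                                   # innovation covariance
    K = P_HT @ np.linalg.inv(S)                        # Kalman gain
    D = d_obs[:, None] + np.random.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T         # perturbed observations
    return X_f + K @ (D - H @ X_f)                     # analysis ensemble
```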
The field of optimal design of linear elastic structures has seen many exciting successes that resulted in new architected materials and designs. With the availability of cloud computing, including high-performance computing, machine learning, and simulation, searching for optimal nonlinear structures is now within reach. In this study, we develop two convolutional neural network models to predict optimized designs for a given set of boundary conditions, loads, and volume constraints. The first model addresses materials with a linear elastic response, while the second addresses a hyperelastic response in which both material and geometric nonlinearities are involved. For the nonlinear elastic case, the neo-Hookean model is utilized. For this purpose, we generate datasets, composed of optimized designs paired with the corresponding boundary conditions, loads, and constraints, using a topology optimization framework to train and validate both models. The developed models are capable of accurately predicting the optimized designs without requiring an iterative scheme and with negligible computational time. The suggested pipeline can be generalized to other nonlinear mechanics scenarios and design domains.
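The sketch below shows one plausible encoder-decoder layout for such a model: problem data are encoded as image channels (e.g., boundary-condition mask, load components, volume-fraction field) and the network outputs a density field in [0, 1]. The channel encoding and layer sizes are illustrative assumptions and may differ from the architectures developed in the study.

```python
import torch

class TopOptCNN(torch.nn.Module):
    """Encoder-decoder CNN mapping encoded problem data to a predicted density field."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, 32, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 64, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, stride=2, padding=1), torch.nn.ReLU(),
        )
        self.decoder = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 1, 3, padding=1), torch.nn.Sigmoid(),  # density in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TopOptCNN()
x = torch.rand(8, 4, 64, 64)   # batch of encoded boundary-condition/load/volume inputs
rho = model(x)                 # predicted 64x64 density fields
```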
Model extrapolation to unseen flows is one of the biggest challenges facing data-driven turbulence modeling, especially for models with high-dimensional inputs that involve many flow features. In this study we review previous efforts on data-driven Reynolds-Averaged Navier-Stokes (RANS) turbulence modeling and model extrapolation, with a main focus on the popular methods used in the field of transfer learning. Several potential metrics to measure the dissimilarity between training flows and testing flows are examined. Different Machine Learning (ML) models are compared to understand how the capacity or complexity of a model affects its behavior in the face of dataset shift. Data preprocessing schemes that are robust to covariate shift, such as normalization, transformation, and importance-reweighted likelihood, are studied to understand whether it is possible to find projections of the data that attenuate the differences between the training and test distributions while preserving predictability. Three metrics are proposed to assess the dissimilarity between the training and testing datasets. To attenuate this dissimilarity, a distribution matching framework is used to align the statistics of the distributions. These modifications also allow the regression tasks to achieve better accuracy when forecasting under-represented extreme values of the target variable. These findings are useful for future ML-based turbulence models to evaluate their predictability and provide guidance for systematically generating a diversified high-fidelity simulation database.
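As a generic illustration of measuring train/test dissimilarity on flow features, the snippet below computes a squared maximum mean discrepancy (MMD) with an RBF kernel between two feature samples; this is a common distribution distance and is not necessarily one of the three metrics proposed in the study.

```python
import numpy as np

def mmd_rbf(X_train, X_test, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel between two sets of
    flow-feature samples (rows = samples, columns = features)."""
    def kernel(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    return (kernel(X_train, X_train).mean()
            + kernel(X_test, X_test).mean()
            - 2.0 * kernel(X_train, X_test).mean())

# example: features from a training flow vs. a covariate-shifted testing flow
rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(500, 5))
X_te = rng.normal(0.5, 1.2, size=(500, 5))
print("MMD^2 =", mmd_rbf(X_tr, X_te))
```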