Agent-Based Simulations, Adaptive Algorithms and Solvers (ABS-AAS) Session 1
Time and Date: 10:15 - 11:55 on 13th June 2019
Room: 0.4
Chair: Maciej Paszynski
118 | Refined Isogeometric Analysis (rIGA) for resistivity well-logging problems [abstract] Abstract: Resistivity well logging characterizes the geological formation around a borehole by measuring the electrical resistivity. In logging-while-drilling techniques, real-time imaging of the well surroundings is essential to correct the well trajectory on the fly for geosteering purposes. Thus, we require numerical methods that rapidly solve Maxwell's equations.
In this work, we study the main features and limitations of rIGA for solving borehole resistivity problems. We apply the rIGA method to approximate the 3D electromagnetic fields that result from solving Maxwell's equations through a 2.5D formulation. We use a spline-based generalization of an H(curl) x H^1 functional space. In particular, we use the H(curl) spaces previously introduced by Buffa et al. to build the high-continuity curl-conforming space discretization (a minimal sketch of the rIGA continuity pattern follows this entry). |
Daniel Garcia Lozano, David Pardo and Victor Calo |
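A minimal 1D sketch of the rIGA idea, assuming uniform elements and a hypothetical block size (not the authors' implementation): standard maximum-continuity IGA knot vectors are weakened to C^0 continuity at selected block interfaces by repeating knots, which is what distinguishes rIGA discretizations.

import numpy as np

def riga_knot_vector(num_elems, degree, block_size):
    """Open knot vector on [0, 1]: maximal C^{p-1} continuity inside blocks
    of `block_size` elements, reduced to C^0 at block interfaces, mimicking
    the continuity pattern of refined Isogeometric Analysis (rIGA)."""
    knots = [0.0] * (degree + 1)                      # clamped start
    for i in range(1, num_elems):
        xi = i / num_elems
        mult = degree if i % block_size == 0 else 1   # repeat knot p times at separators
        knots.extend([xi] * mult)
    knots += [1.0] * (degree + 1)                     # clamped end
    return np.array(knots)

# Example: 16 cubic elements with C^0 separators every 4 elements
print(riga_knot_vector(16, 3, 4))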
129 | A Painless Automatic hp-Adaptive Strategy for Elliptic 1D and 2D Problems [abstract] Abstract: Despite the existence of several hp-adaptive algorithms in the literature (e.g., [1]), very few are used in an industrial context due to their high implementation complexity, their computational cost, or both. This occurs mainly because of two limitations associated with hp-adaptive methods: (1) the data structures needed to support hp-refined meshes are often complex, and (2) the design of a robust automatic hp-adaptive strategy is challenging.
To overcome limitation (1), we adopt the multi-level approach of D'Angella et al. [2]. This method handles hanging nodes via a multi-level technique that makes extensive use of Dirichlet nodes.
Our main contribution in this work addresses limitation (2) by introducing a novel automatic hp-adaptive strategy. To that end, we have developed a simple energy-based coarsening approach that takes advantage of the hierarchical structure of the basis functions. Given any grid, the main idea consists in detecting the unknowns that contribute least to the energy norm and removing them (see the sketch after this entry). Once a sufficient level of unrefinement is achieved, a global h-, p-, or any other type of refinement can be performed.
We tested and analyzed our algorithm on one-dimensional (1D) and two-dimensional (2D) benchmark cases. In this presentation, we shall illustrate the main advantages and limitations of the proposed hp-adaptive strategy.
References:
1. L. Demkowicz. Computing with hp-adaptive finite elements. Vol. 1. One and two dimensional elliptic and Maxwell problems. Applied Mathematics and Nonlinear Science Series. Chapman & Hall/CRC, Boca Raton, FL, 2007. ISBN 978-1-58488-671-6; 1-58488-671-4.
2. D. D’Angella, N. Zander, S. Kollmannsberger, F. Frischmann, E. Rank, A. Schröder, and A. Reali. Multi-level hp-adaptivity and explicit error estimation. Advanced Modeling and Simulation in Engineering Sciences, 3(1):33, 2016. ISSN 2213-7467. |
Vincent Darrigrand, David Pardo, Théophile Chaumont-Frelet, Ignacio Gómez-Revuelto and Luis Emilio Garcia-Castillo |
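A minimal sketch of the energy-based coarsening idea described above, under the simplifying assumption that each degree of freedom's share of the energy norm ||u||_E^2 = u^T K u can be estimated from the stiffness matrix K and the current solution u; the helper below is illustrative, not the authors' implementation.

import numpy as np

def mark_for_unrefinement(K, u, fraction=0.1):
    """Rank degrees of freedom by their approximate share of the energy
    norm ||u||_E^2 = u^T K u and return the indices of the `fraction`
    that contribute least, i.e. the candidates for removal (coarsening)."""
    contribution = np.abs(u * (K @ u))     # per-DOF share of u^T K u
    order = np.argsort(contribution)       # smallest contributions first
    n_remove = int(fraction * len(u))
    return order[:n_remove]

# Toy example: 1D Laplacian stiffness matrix and a smooth solution
n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u = np.sin(np.linspace(0.0, np.pi, n))
print(mark_for_unrefinement(K, u, fraction=0.2))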
147 | Fast isogeometric Cahn-Hilliard equations solver with web-based interface [abstract] Abstract: We present a framework to run Cahn-Hilliard simulations from a web interface. We use Jenkins, a popular Continuous Integration tool. This setup allows launching computations from any machine, without the need to maintain a connection to the computational environment. Moreover, upon job completion the results are automatically post-processed and stored for future retrieval, in the form of a sequence of bitmaps and a video illustrating the simulation. We expose the mobility and chemical potential functions of the Cahn-Hilliard equation through the interface, allowing for several numerical applications. The discretization is performed with isogeometric analysis, and it is parameterized by the number of time steps, the time step size, the mesh dimensions, and the order of the B-splines (a sketch of triggering such a parameterized job follows this entry). The interface is linked with the fast alternating direction semi-implicit solver [1], resulting in a linear computational cost of the simulation. |
Maciej Paszynski, Grzegorz Gurgul, Danuta Szeliga, Marcin Łoś, Vladimir Puzyrev and Victor Calo |
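A minimal sketch of how a parameterized simulation job could be launched remotely through Jenkins' standard REST endpoint for parameterized builds; the job name, parameter names, and credentials below are hypothetical placeholders, not the authors' actual setup.

import requests

# Hypothetical Jenkins instance, job, and parameter names; adapt to the real setup.
JENKINS_URL = "https://jenkins.example.org"
JOB = "cahn-hilliard-simulation"

params = {
    "TIME_STEPS": 1000,   # number of time steps
    "DT": 1e-7,           # time step size
    "MESH_X": 256,        # mesh dimensions
    "MESH_Y": 256,
    "SPLINE_ORDER": 2,    # order of the B-splines
}

# Standard Jenkins endpoint for queuing a parameterized build
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
    params=params,
    auth=("user", "api-token"),   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print("Build queued:", resp.headers.get("Location"))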
159 | Low-frequency Upscaling of Effective Velocities in Heterogeneous Rocks [abstract] Abstract: We want to estimate the effective (homogenized) compressional velocity of a highly heterogeneous porous rock at low frequencies. To achieve this goal, it is necessary to virtually repeat the rock domain several times until it becomes at least two wavelengths long. Otherwise, boundary conditions (e.g., a PML) pollute the estimated effective velocity. Due to this requirement on the computational domain size, traditional conforming (fitting) finite element grids result in an enormous number of elements that cannot be simulated with today's computers. To overcome this problem, we consider non-fitting meshes, in which each finite element includes highly discontinuous material properties. To maintain accuracy in this scenario, we show that it is sufficient to perform exact integration (illustrated in the sketch after this entry). Since this operation is also computationally expensive for such large domains, we precompute the element stiffness matrices. The presence of a PML makes the implementation of this precomputation step more challenging.
In this presentation, we illustrate the main challenges of solving this upscaling/homogenization problem, which is of great interest to the oil & gas industry, and we detail the computational techniques employed to overcome them.
The performance of the proposed method is also showcased with different numerical experiments. |
Ángel Javier Omella, Magdalena Strugaru, Julen Álvarez-Aramberri, Vincent Darrigrand, David Pardo, Héctor González and Carlos Santos |
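A minimal 1D illustration of exact integration of an element whose material coefficient jumps inside it, the situation that arises with non-fitting meshes; the actual work involves 3D elements, PMLs, and precomputed matrices, so this is only a conceptual sketch.

import numpy as np

def element_stiffness_exact(h, s, c_left, c_right):
    """Exact stiffness matrix of a 1D linear element of length h whose
    material coefficient jumps from c_left to c_right at x = s (0 < s < h).
    The shape-function derivatives are constant (+-1/h), so splitting the
    integral at the discontinuity integrates the coefficient exactly."""
    c_avg = (c_left * s + c_right * (h - s)) / h      # exact average of c(x)
    return (c_avg / h) * np.array([[1.0, -1.0],
                                   [-1.0, 1.0]])

# Example: element of length 1 with a strong material contrast at x = 0.3
print(element_stiffness_exact(1.0, 0.3, 1.0, 100.0))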
171 | Distributed Memory Parallel Implementation of Agent Based Economic Models [abstract] Abstract: We present a Distributed Memory Parallel (DMP) implementation of agent-based economic models, which facilitates large-scale simulations with millions of agents. A major obstacle to a scalable DMP implementation is distributing the agents among MPI processes in a balanced manner, while making all the topological graphs over which the agents interact available at minimum communication cost. We balance the computational workload among MPI processes by partitioning a representative employer-employee interaction graph (see the sketch after this entry), while all the other interaction graphs are made available at negligible communication cost by mimicking the organization of real-world economic entities. Cache-friendly and memory-efficient algorithms and data structures are proposed to improve runtime and parallel scalability, and their effectiveness is demonstrated. We demonstrate that the current implementation is capable of simulating 1:1 scale models of medium-sized countries. |
Maddegedara Lalith, Amit Gill, Sebastian Poledna, Muneo Hori, Inoue Hikaru, Noda Tomoyuki, Toda Koyo and Tsuyoshi Ichimura |
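A minimal sketch of the balanced distribution step described above, assuming the pymetis binding to the METIS graph partitioner and a toy interaction graph; the partitioner choice, function names, and data are illustrative, not the authors'.

import pymetis  # Python binding to METIS; one possible choice of partitioner

def distribute_agents(adjacency, num_ranks):
    """Partition a representative interaction graph (e.g. employer-employee)
    into `num_ranks` balanced parts with a small edge cut, and return the
    MPI rank assigned to each agent (graph vertex)."""
    _edge_cut, membership = pymetis.part_graph(num_ranks, adjacency=adjacency)
    return membership

# Toy graph: 6 agents, adjacency lists of interaction partners
adjacency = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(distribute_agents(adjacency, num_ranks=2))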