ICCS 2015 Main Track (MT) Session 13
Time and Date: 14:10 - 15:50 on 2nd June 2015
Room: V101
Chair: Witold Dzwinel
712 | Collaborative Knowledge Fusion by Ad-Hoc Information Distribution in Crowds [abstract] Abstract: We study situations (such as a city festival) where, during a phone signal outage, cell phones can communicate opportunistically (for instance, using WiFi or Bluetooth), and we want to understand and control how information spreads. A particular question is how to prevent false information from spreading while facilitating the spread of useful (true) information. We introduce collaborative knowledge fusion as the operation by which individual, local knowledge claims are "merged". Such fusion events are local, i.e. they happen upon the physical meetings of knowledge providers. We evaluate different methods for collaborative knowledge fusion and study the conditions for, and tradeoffs of, convergence to a globally true knowledge state under various conditions. |
George Kampis, Paul Lukowicz |
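The fusion operator itself is not specified in the abstract; as a purely illustrative sketch, assuming claims are (value, confidence) pairs and that the higher-confidence claim wins on conflict, a pairwise merge plus a deterministic meeting schedule might look like:

```python
def fuse(a, b):
    """Merge two agents' claim sets (hypothetical rule: on conflict,
    keep the claim with the higher confidence)."""
    merged = dict(a)
    for claim, (value, conf) in b.items():
        if claim not in merged or conf > merged[claim][1]:
            merged[claim] = (value, conf)
    return merged

def sweep(agents):
    """One deterministic sweep of pairwise 'meetings' along a line of
    agents -- a stand-in for physical co-location events in a crowd."""
    for i in range(len(agents) - 1):
        merged = fuse(agents[i], agents[i + 1])
        agents[i], agents[i + 1] = dict(merged), dict(merged)

# One agent holds the true claim with high confidence; another a false rumor.
agents = [{} for _ in range(20)]
agents[0] = {"exit_open": (True, 0.9)}
agents[5] = {"exit_open": (False, 0.4)}
sweep(agents)
informed = sum(1 for a in agents if a.get("exit_open") == (True, 0.9))
print(informed)  # all 20 agents converge to the true, high-confidence claim
```

With this particular merge rule the true claim dominates wherever the two meet, which is one concrete instance of the convergence-to-truth question the paper studies; the actual fusion methods evaluated in the paper may differ.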
220 | Modeling Deflagration in Energetic Materials using the Uintah Computational Framework [abstract] Abstract: Predictive computer simulations of large-scale deflagration and detonation depend on the availability of robust reaction models embedded in a computational framework capable of running on massively parallel computer architectures. We have been developing such models in the Uintah Computational Framework, which is capable of scaling up to 512k cores. Our particular interest is in predicting the deflagration-to-detonation transition (DDT) for accident scenarios involving large numbers of energetic devices; the 2005 truck explosion in Spanish Fork Canyon, UT is a prototypical example. Our current reaction model adapts components from Ward, Son and Brewster to describe the effects of pressure and initial temperature on deflagration, from Berghout et al. for burning in cracks in damaged explosives, and from Souers for describing fully developed detonation. The reaction model has been subjected to extensive validation against experimental tests. Current efforts are focused on the effects of coarsening the computational grid on multiple aspects of deflagration and the transition to detonation. |
Jacqueline Beckvermit, Todd Harman, Andrew Bezdjian, Charles Wight |
237 | Fast Equilibration of Coarse-Grained Polymeric Liquids [abstract] Abstract: The study of macromolecular systems may require large computer simulations that are too time consuming and resource intensive to execute in full atomic detail. The integral equation coarse-graining approach by Guenza and co-workers enables the exploration of longer time and spatial scales without sacrificing thermodynamic consistency, by approximating collections of atoms using analytically-derived soft-sphere potentials. Because coarse-grained (CG) characterizations evolve polymer systems far more efficiently than the corresponding united atom (UA) descriptions, we can feasibly equilibrate a CG system to a reasonable geometry, then transform back to the UA description for a more complete equilibration. Automating the transformation between the two different representations simultaneously exploits CG efficiency and UA accuracy. By iteratively mapping back and forth between CG and UA, we can quickly guide the simulation towards a configuration that would have taken many more time steps within the UA representation alone. Accomplishing this feat requires a diligent workflow for managing input/output coordinate data between the different steps, deriving the potential at runtime, and inspecting convergence. In this paper, we present a lightweight workflow environment that accomplishes such fast equilibration without user intervention. The workflow supports automated mapping between the CG and UA descriptions in an iterative, scalable, and customizable manner. We describe this technique, examine its feasibility, and analyze its correctness. |
David Ozog, Jay McCarty, Grant Gossett, Allen Malony and Marina Guenza |
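The iterative CG/UA loop described above can be sketched as a driver program. The physics below is stubbed out with toy one-dimensional "coordinates" (block means stand in for coarse-graining, a relaxation step stands in for CG equilibration), so only the workflow control logic, coarse-grain, equilibrate, back-map, and test convergence, is real; all function names and the convergence criterion are hypothetical:

```python
def coarse_grain(ua_config, k=4):
    """Map united-atom coordinates to CG sites (toy: block means of k atoms)."""
    return [sum(ua_config[i:i + k]) / k for i in range(0, len(ua_config), k)]

def equilibrate_cg(cg_config):
    """Cheap CG relaxation step (toy: pull every bead toward the mean)."""
    m = sum(cg_config) / len(cg_config)
    return [x + 0.5 * (m - x) for x in cg_config]

def back_map(cg_config, ua_config, k=4):
    """Re-insert atomistic detail around the relaxed CG sites (toy: shift
    each block so its mean matches the new bead position)."""
    out = []
    for bead, i in zip(cg_config, range(0, len(ua_config), k)):
        block = ua_config[i:i + k]
        bm = sum(block) / k
        out.extend(x - bm + bead for x in block)
    return out

def converged(prev, curr, tol=1e-3):
    return max(abs(a - b) for a, b in zip(prev, curr)) < tol

ua = [float(i % 7) for i in range(32)]  # toy starting "configuration"
for step in range(100):
    cg = equilibrate_cg(coarse_grain(ua))
    new_ua = back_map(cg, ua)
    if converged(ua, new_ua):
        break
    ua = new_ua
print(step)
```

The point of the sketch is the managed round trip: each iteration hands coordinates from the UA representation to the CG one and back, checking convergence at runtime, which is the automation the paper's workflow environment provides (with real MD engines and analytically derived CG potentials in place of the stubs).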
392 | Massively Parallel Simulations of Hemodynamics in the Human Vasculature [abstract] Abstract: We present a computational model of three-dimensional, unsteady hemodynamics within the primary large arteries of the human body, run on 1,572,864 cores of the IBM Blue Gene/Q. Models of large regions of the circulatory system are needed to study the impact of local factors on global hemodynamics and to inform next generation drug delivery mechanisms. The HARVEY code successfully addresses key challenges that can hinder effective solution of image-based hemodynamics on contemporary supercomputers, such as limited memory capacity and bandwidth, the need for flexible load balancing, and scalability. This work is the first demonstration of large (> 500 cm) fluid dynamics simulations of the circulatory system modeled at resolutions as high as 10 μm. |
Amanda Randles, Erik W. Draeger and Peter E. Bailey |
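To give a sense of the problem size such a resolution implies, a back-of-the-envelope lattice-site count (the 100 cm³ fluid volume below is a hypothetical figure for illustration, not taken from the paper):

```python
def lattice_sites(volume_cm3, dx_um):
    """Number of grid sites needed to cover a fluid volume at spacing dx."""
    dx_cm = dx_um * 1e-4  # 1 um = 1e-4 cm
    return volume_cm3 / dx_cm ** 3

sites = lattice_sites(100.0, 10.0)  # hypothetical 100 cm^3 at 10 um spacing
print(f"{sites:.0e}")               # 1e+11 sites
```

On the order of 10^11 sites per 100 cm³ is what makes the memory-capacity and load-balancing challenges named in the abstract binding constraints at this resolution.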
402 | Parallel performance of an IB-LBM suspension simulation framework [abstract] Abstract: We present performance results from ficsion, a general-purpose parallel suspension solver employing the immersed-boundary lattice-Boltzmann method (IB-LBM). ficsion is built on top of the open-source LBM framework Palabos, making use of its data structures and their inherent parallelism. We briefly describe the implementation and present weak and strong scaling results for simulations of dense red blood cell suspensions. Despite the model's complexity, the simulations demonstrate fairly good, close-to-linear scaling in both the weak and strong scaling scenarios. |
Lampros Mountrakis, Eric Lorenz, Orestis Malaspinas, Saad Alowayyed, Bastien Chopard and Alfons G. Hoekstra |
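Scaling claims like these are usually quantified as parallel efficiency; a small helper with made-up timings (the numbers are illustrative, not results from the paper):

```python
def strong_efficiency(t1, tp, p):
    """Strong scaling: fixed total problem size; ideal runtime is t1/p."""
    return t1 / (p * tp)

def weak_efficiency(t1, tp):
    """Weak scaling: fixed work per core; ideal runtime is constant."""
    return t1 / tp

# Hypothetical timings in seconds, for illustration only.
print(round(strong_efficiency(100.0, 14.0, 8), 3))  # 0.893
print(round(weak_efficiency(100.0, 112.0), 3))      # 0.893
```

An efficiency near 1.0 on both metrics is what "close to linear scaling" means quantitatively; in practice the immersed-boundary coupling adds communication that pushes both below the ideal.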