ICCS 2015 Main Track (MT) Session 2
Time and Date: 14:30 - 16:10 on 1st June 2015
Room: M101
Chair: Markus Towara
305 | A Neural Network Embedded System for Real-Time Estimation of Muscle Forces [abstract] Abstract: This work documents progress towards an embedded solution for assessing muscle forces during cycling. The core of the study is the adaptation of an inverse biomechanical model to a real-time paradigm. The model is well suited for real-time applications since all of its optimization problems are solved through a direct neural estimator. The real-time version of the model was implemented on an embedded microcontroller platform to profile code performance and precision degradation, using different numerical techniques to balance speed and accuracy in an environment with limited computational resources. |
Gabriele Maria Lozito, Maurizio Schmid, Silvia Conforto, Francesco Riganti Fulginei, Daniele Bibbo |
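The key idea above, replacing run-time optimization with a direct neural estimator, can be pictured with a minimal sketch. The network shape, the input features (e.g. EMG and kinematic samples) and the tanh activation below are assumptions for illustration, not the authors' actual model; on a microcontroller the same forward pass would typically be re-implemented in fixed-point arithmetic.

```python
import numpy as np

def neural_estimate(x, W1, b1, W2, b2):
    """Single forward pass mapping a feature vector x to estimated muscle
    forces, with no iterative optimization at run time."""
    h = np.tanh(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2         # linear output: one estimate per muscle

# Hypothetical dimensions: 8 input features, 16 hidden units, 4 muscles.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)
print(neural_estimate(rng.standard_normal(8), W1, b1, W2, b2))
```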
366 | Towards Scalability and Data Skew Handling in GroupBy-Joins using MapReduce Model [abstract] Abstract: For over a decade, MapReduce has been the leading programming model for parallel, massive processing of large volumes of data, driven by the development of frameworks such as Spark, Pig and Hive that facilitate data analysis on large-scale systems. However, these frameworks remain vulnerable to communication costs, data skew and task imbalance, which can have a devastating effect on performance and scalability, particularly when processing GroupBy-Join queries over large datasets. In this paper, we present a new GroupBy-Join algorithm that considerably reduces communication costs while avoiding data skew effects. A cost analysis of this algorithm shows that our approach is insensitive to data skew and ensures balanced processing during all stages of the GroupBy-Join computation, even for highly skewed data. These results have been confirmed by a series of experiments. |
Mohamad Al Hajj Hassan, Mostafa Bamha |
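For context, a plain repartition GroupBy-Join expressed as map and reduce functions is sketched below for a query of the form SELECT R.k, SUM(S.v) FROM R JOIN S ON R.k = S.k GROUP BY R.k. The query shape is an assumption, and this is only the communication-heavy baseline that such work improves on; the paper's skew-resistant algorithm itself is not reproduced here.

```python
from collections import defaultdict

def map_r(record):                 # R-side record: (k, payload)
    k, payload = record
    yield k, ('R', payload)

def map_s(record):                 # S-side record: (k, v)
    k, v = record
    yield k, ('S', v)

def reduce_group(key, values):
    r_count = sum(1 for tag, _ in values if tag == 'R')
    s_vals = [v for tag, v in values if tag == 'S']
    if r_count and s_vals:
        # In the join, every S value is duplicated once per matching R tuple,
        # so SUM(S.v) over the joined rows equals r_count * sum(s_vals).
        yield key, r_count * sum(s_vals)

def run(r_table, s_table):
    groups = defaultdict(list)     # simulated shuffle by join key
    for rec in r_table:
        for k, v in map_r(rec):
            groups[k].append(v)
    for rec in s_table:
        for k, v in map_s(rec):
            groups[k].append(v)
    return [kv for k, vals in groups.items() for kv in reduce_group(k, vals)]

print(run([(1, 'a'), (1, 'b'), (2, 'c')], [(1, 10), (1, 5), (3, 7)]))  # [(1, 30)]
```

In this baseline, a single popular key sends all of its R and S tuples to one reducer, which is exactly the skew and imbalance problem the proposed algorithm is designed to avoid.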
452 | MREv: an Automatic MapReduce Evaluation Tool for Big Data Workloads [abstract] Abstract: The popularity of Big Data computing models like MapReduce has led to the emergence of many frameworks oriented towards High Performance Computing (HPC) systems. The suitability of each framework for a particular use case depends on its design and implementation, the underlying system resources and the type of application to be run. Therefore, selecting the appropriate framework generally involves running multiple experiments in order to assess performance, scalability and resource efficiency. This work studies the main issues of this evaluation and proposes a new MapReduce Evaluator (MREv) tool, which unifies the configuration of the frameworks, eases the task of collecting results and generates resource utilization statistics. Moreover, a practical use case is described, including examples of the experimental results provided by the tool. MREv is available for download at http://mrev.des.udc.es. |
Jorge Veiga, Roberto R. Expósito, Guillermo L. Taboada, Juan Touriño |
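The kind of automation described above can be pictured with a small driver that runs the same workload under several frameworks with a unified configuration and records elapsed time per run. The framework launch commands, the workload and the CSV layout below are purely hypothetical; MREv's actual interface and configuration are documented at http://mrev.des.udc.es.

```python
import csv, subprocess, time

WORKLOAD = ["wordcount", "/data/input", "/data/output"]      # hypothetical workload
FRAMEWORKS = {                                               # hypothetical launch commands
    "hadoop": ["hadoop", "jar", "benchmarks.jar", *WORKLOAD],
    "spark":  ["spark-submit", "benchmarks.py", *WORKLOAD],
}

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["framework", "elapsed_s", "returncode"])
    for name, cmd in FRAMEWORKS.items():
        start = time.time()
        rc = subprocess.call(cmd)                            # run the workload
        writer.writerow([name, round(time.time() - start, 2), rc])
```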
604 | Load-Balancing for Large Scale Situated Agent-Based Simulations [abstract] Abstract: In large-scale agent-based simulations, memory and computational power requirements can increase dramatically with the number of agents and interactions. To simulate millions of agents, distributing the simulator over a computer network is promising, but it raises issues such as agent allocation and load balancing between machines. In this paper, we study how to automatically balance the load between machines in large-scale situations. We evaluate the performance of two different applications under two different distribution approaches, and our experimental results show that, for some applications, one distribution approach balances the load between machines automatically and achieves higher performance in large-scale simulations than the other. |
Omar Rihawi, Yann Secq, Philippe Mathieu |
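A minimal sketch of one dynamic load-balancing step between machines follows: agents are migrated greedily from the most loaded node to the least loaded one until the imbalance falls under a tolerance. The migration policy and the pure agent-count load metric are illustrative assumptions, not the distribution approaches compared in the paper.

```python
def rebalance(loads, tolerance=0.05):
    """loads: dict machine_id -> number of hosted agents (used as load metric).
    Returns a list of (src, dst, n_agents) migration orders and updates loads."""
    migrations = []
    mean = sum(loads.values()) / len(loads)
    while True:
        src = max(loads, key=loads.get)           # most loaded machine
        dst = min(loads, key=loads.get)           # least loaded machine
        n = (loads[src] - loads[dst]) // 2        # move half the difference
        if loads[src] - loads[dst] <= tolerance * mean or n == 0:
            return migrations
        loads[src] -= n
        loads[dst] += n
        migrations.append((src, dst, n))

print(rebalance({"m1": 1200, "m2": 400, "m3": 350}))
```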
669 | Changing CPU Frequency in CoMD Proxy Application Offloaded to Intel Xeon Phi Co-processors [abstract] Abstract: Obtaining exascale performance is a challenge. Although today's technology features hardware with very high levels of concurrency, exascale performance is primarily limited by energy consumption. This limitation has led to the use of GPUs and specialized hardware such as many integrated core (MIC) co-processors and FPGAs for computation acceleration. The Intel Xeon Phi co-processor, built upon the MIC architecture, features many low-frequency, energy-efficient cores. Applications, even those which do not saturate the large vector processing unit in each core, may benefit from the energy-efficient hardware and software of the Xeon Phi. This work explores the energy savings of applications which have not been optimized for the co-processor. Dynamic voltage and frequency scaling (DVFS) is often used to reduce energy consumption during portions of the execution where performance is least likely to be affected. This work investigates the impact on energy and performance when DVFS is applied to the CPU during MIC-offloaded sections (i.e., code segments to be processed on the co-processor). Experiments conducted on the molecular dynamics proxy application CoMD show that as much as 14% of the energy may be saved if two Xeon Phis are used. When DVFS is applied to the host CPU, energy savings as high as 9% are obtained in addition to the 8% saved from reducing the link-cell count. |
Gary Lawson, Masha Sosonkina, Yuzhong Shen |
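The DVFS idea investigated here can be sketched as lowering the host CPU frequency just before the offloaded section starts and restoring it afterwards. Writing to the cpufreq sysfs files requires root privileges and the 'userspace' scaling governor, and the CoMD binary name, its flags and the frequency levels below are assumptions for illustration, not the paper's experimental setup.

```python
import glob, subprocess, time

def set_host_freq(khz):
    """Pin every host core to a fixed frequency via cpufreq sysfs
    (requires root and the 'userspace' scaling governor)."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_setspeed"):
        with open(path, "w") as f:
            f.write(str(khz))

LOW_KHZ, HIGH_KHZ = 1_200_000, 2_600_000         # hypothetical frequency levels

set_host_freq(LOW_KHZ)                           # host mostly waits during the offload
start = time.time()
subprocess.call(["./CoMD-offload", "-x", "40", "-y", "40", "-z", "40"])  # hypothetical binary/flags
print("offloaded section took %.1f s" % (time.time() - start))
set_host_freq(HIGH_KHZ)                          # restore nominal frequency
```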