ICCS 2017 Main Track (MT) Session 7
Time and Date: 13:25 - 15:05 on 14th June 2017
Room: HG F 30
Chair: Ming Xu
424 | Efficient Simulation of Financial Stress Testing Scenarios with Suppes-Bayes Causal Networks [abstract] Abstract: The most recent financial upheavals have cast doubt on the adequacy of some of the conventional quantitative risk management strategies, such as VaR (Value at Risk), in many common situations. Consequently, there has been an increasing need for verisimilar financial stress testing, namely the simulation and analysis of financial portfolios in extreme, albeit rare, scenarios. Unlike conventional risk management, which exploits statistical correlations among financial instruments, here we focus our analysis on the notion of probabilistic causation, which is embodied by Suppes-Bayes Causal Networks (SBCNs). SBCNs are probabilistic graphical models with many attractive features in terms of more accurate causal analysis for generating financial stress scenarios.
In this paper, we present a novel approach for conducting stress testing of financial portfolios based on SBCNs in combination with classical machine learning classification tools. The resulting method is shown to be capable of correctly discovering the causal relationships among the financial factors that affect the portfolios and, thus, of simulating stress testing scenarios with higher accuracy and lower computational complexity than conventional Monte Carlo simulations. |
Gelin Gao, Bud Mishra and Daniele Ramazzotti |
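As a rough illustration of the approach described in this abstract, the sketch below samples stress scenarios from a small hand-specified causal DAG over binary risk factors and trains a classifier to flag stressed portfolios. All node names, conditional probabilities, and the stress-labeling rule are hypothetical, and the SBCN structure-learning step is omitted entirely; this is a minimal sketch, not the authors' implementation.

```python
# Minimal sketch: draw scenarios from a fixed causal DAG over binary risk
# factors, then fit a classifier mapping scenarios to a portfolio outcome.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy causal graph (hypothetical): rates -> credit_spread -> defaults.
parents = {"rates": [], "credit_spread": ["rates"], "defaults": ["credit_spread"]}
# Conditional probabilities P(node = 1 | parent state), hypothetical values.
cpt = {
    "rates": {(): 0.3},
    "credit_spread": {(0,): 0.1, (1,): 0.7},
    "defaults": {(0,): 0.05, (1,): 0.6},
}

def sample_scenario():
    state = {}
    for node in ["rates", "credit_spread", "defaults"]:  # topological order
        key = tuple(state[p] for p in parents[node])
        state[node] = int(rng.random() < cpt[node][key])
    return state

# Simulate scenarios; a toy rule stands in for actual portfolio repricing.
scenarios = [sample_scenario() for _ in range(5000)]
X = np.array([[s["rates"], s["credit_spread"], s["defaults"]] for s in scenarios])
y = (X.sum(axis=1) >= 2).astype(int)  # "stressed" if two or more factors fire

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("P(stress | all factors shocked):", clf.predict_proba([[1, 1, 1]])[0, 1])
```

Sampling from the causal graph rather than from a joint correlation matrix is what lets rare-but-plausible scenario chains appear far more often than in a plain Monte Carlo sweep.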
531 | Simultaneous Prediction of Wind Speed and Direction by Evolutionary Fuzzy Rule Forest [abstract] Abstract: An accurate estimate of wind speed and direction is important for many application domains, including weather prediction, smart grids, and traffic management. These two environmental variables depend on a number of factors and are linked together. Evolutionary fuzzy rules, based on fuzzy information retrieval and genetic programming, have been used to solve a variety of real-world regression and classification tasks. They were, however, limited to estimating only one variable per model. In this work, we introduce an extended version of this predictor that facilitates the artificial evolution of forests of fuzzy rules. In this way, multiple variables can be predicted by a single model that can capture complex relations between input and output variables. The usefulness of the proposed concept is demonstrated by the evolution of forests of fuzzy rules for simultaneous wind speed and direction prediction. |
Pavel Kromer and Jan Platos |
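A minimal sketch of the underlying idea follows, assuming triangular membership functions, a negative-MSE fitness, synthetic data, and a simple truncation-selection-plus-mutation loop; the actual evolutionary fuzzy rule forest evolved by genetic programming is considerably richer.

```python
# Sketch: evolve small "forests" of fuzzy rules that jointly predict two
# linked outputs (wind speed and direction) from shared inputs.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two inputs, two linked outputs.
X = rng.uniform(0, 1, size=(200, 2))
Y = np.column_stack([10 * X[:, 0] + 2 * X[:, 1],   # "speed"
                     180 * X[:, 1]])               # "direction"

def triangular(x, a, b, c):
    # Triangular fuzzy membership degree of x in the set (a, b, c).
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def predict(genome, X):
    # One rule per output; each rule is a membership-weighted linear consequent.
    cols = []
    for a, b, c, w0, w1, bias in genome:           # genome rows: 6 parameters
        mu = triangular(X[:, 0], a, b, c)          # rule firing strength
        cols.append(mu * (w0 * X[:, 0] + w1 * X[:, 1]) + bias)
    return np.column_stack(cols)

def fitness(genome):
    return -np.mean((predict(genome, X) - Y) ** 2)  # negative MSE

# Population of two-rule forests; elites survive and spawn mutated offspring.
pop = [rng.uniform(-1.0, 20.0, size=(2, 6)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:10]
    pop = elites + [e + rng.normal(0, 0.5, e.shape) for e in elites for _ in range(2)]

best = max(pop, key=fitness)
print("best fitness (negative MSE):", fitness(best))
```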
557 | Performance Improvement of Stencil Computations for Multi-core Architectures based on Machine Learning [abstract] Abstract: Stencil computations are the basis for solving many problems related to Partial Differential Equations (PDEs). Obtaining the best performance with such numerical kernels is a major issue, as many critical parameters (architectural features, compiler flags, memory policies, multithreading strategies) must be finely tuned. In this context, auto-tuning methods have been used extensively in recent years to improve overall performance. However, the complexity of current architectures and the large number of optimizations to consider reduce the efficiency of this approach. This paper focuses on the use of machine learning to predict the performance of PDE kernels on multicore architectures. Low-level hardware counters (e.g. cache misses and TLB misses) gathered from a limited number of executions are used to build our predictive model. We have considered two different kernels (7-point Jacobi and a seismic equation) to demonstrate the effectiveness of our approach. Our results show that performance can be predicted and that the best input configuration for stencil problems can be obtained from simulations of hardware counters and performance measurements. |
Victor Martinez, Fabrice Dupros, Márcio Castro and Philippe Navaux |
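The pipeline this abstract describes can be sketched as follows. The counter values here are synthetic stand-ins (in practice they would come from hardware counters such as PAPI_L2_TCM or PAPI_TLB_DM, or from perf events), the runtime model is a toy, and a random forest regressor is an assumption rather than the authors' stated model.

```python
# Sketch: train a regressor on counters from a few executions, then rank all
# candidate tuning configurations by predicted runtime.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Candidate configurations: (threads, block size), plus measured counters.
configs = np.array([(t, b) for t in (1, 2, 4, 8, 16) for b in (16, 32, 64, 128)])
cache_misses = rng.uniform(1e5, 1e7, len(configs))   # stand-in for PAPI_L2_TCM
tlb_misses = rng.uniform(1e3, 1e5, len(configs))     # stand-in for PAPI_TLB_DM
X = np.column_stack([configs, cache_misses, tlb_misses])

# Synthetic runtime: dominated by misses, improved by more threads (toy model).
runtime = cache_misses / 1e6 + tlb_misses / 1e4 + 10.0 / configs[:, 0]

# Train on a limited subset of executions, as in the paper's setting.
train = rng.choice(len(X), size=12, replace=False)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train], runtime[train])

# Predict over the full configuration space and pick the best candidate.
pred = model.predict(X)
best = np.argmin(pred)
print("predicted-best (threads, block):", configs[best], "runtime:", pred[best])
```

The payoff is that only the small training subset has to be executed and profiled; the remaining configurations are ranked by the model instead of being measured exhaustively.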
321 | Distributed training strategies for a computer vision deep learning algorithm on a distributed GPU cluster [abstract] Abstract: Deep learning algorithms base their success on building high-capacity models with millions of parameters that are tuned in a data-driven fashion. These models are trained by processing millions of examples, so the development of more accurate algorithms is usually limited by the throughput of the computing devices on which they are trained. In this work, we explore how the training of a state-of-the-art neural network for computer vision can be parallelized on a distributed GPU cluster. The effect of distributing the training process is addressed from two different points of view. First, the scalability of the task and its performance in the distributed setting are analyzed. Second, the impact of distributed training methods on the final accuracy of the models is studied. |
Víctor Campos, Francesc Sastre, Maurici Yagües, Míriam Bellver, Xavier Giró-I-Nieto and Jordi Torres |
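For intuition about the kind of parallelization studied here, below is a minimal NumPy sketch of synchronous data-parallel SGD, the standard strategy in this setting, with the cluster's workers simulated in a single process on a toy linear model. The paper's experiments, of course, involve a real network on real GPUs with a real interconnect.

```python
# Sketch: each "worker" computes a gradient on its shard of the global batch;
# the averaged gradient (the all-reduce step) updates the shared weights.
import numpy as np

rng = np.random.default_rng(3)

# Toy linear regression problem standing in for a deep network.
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(4096, 2))
y = X @ true_w + 0.01 * rng.normal(size=4096)

w = np.zeros(2)
n_workers, lr, global_batch = 4, 0.1, 256

for step in range(200):
    idx = rng.choice(len(X), global_batch, replace=False)
    shards = np.array_split(idx, n_workers)          # scatter the batch
    grads = []
    for shard in shards:                             # each worker, locally
        Xb, yb = X[shard], y[shard]
        grads.append(2 * Xb.T @ (Xb @ w - yb) / len(shard))
    w -= lr * np.mean(grads, axis=0)                 # all-reduce: average grads

print("learned weights:", w)  # should approach [2, -1]
```

Averaging gradients across workers makes each update equivalent to one large-batch step, which is precisely why the paper examines the impact of distribution on final model accuracy, not just on throughput.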