Solving Problems with Uncertainties (SPU) Session 1
Time and Date: 16:20 - 18:00 on 11th June 2014
Room: Tully III
Chair: Vassil Alexandrov
37 | Wind field uncertainty in forest fire propagation prediction [abstract] Abstract: Forest fires are a significant problem, especially in Mediterranean countries. To fight these hazards, an accurate prediction of their evolution is needed beforehand, so propagation models have been developed to determine the expected evolution of a forest fire. Such propagation models require input parameters to produce the predictions, and these parameters must be as accurate as possible in order to provide a prediction adjusted to the actual fire behavior. However, in many cases the information concerning the values of the input parameters is obtained by indirect measurements, and such indirect estimations imply a degree of uncertainty in the parameter values. This problem is especially significant for parameters that have a spatial distribution or variation, such as wind. The wind provided by a global weather forecast model, or measured at a meteorological station at some particular point, is modified by the topography of the terrain and has a different value at every point of the terrain. To estimate the wind speed and direction at each point of the terrain, it is necessary to apply a wind field model that determines those values depending on the terrain topography. WindNinja is a wind field simulator that provides an estimate of wind direction and wind speed at each point of the terrain given a meteorological wind. However, the calculation of the wind field takes considerable time when the map is large (30x30 km) and the resolution is high (30x30 meters). This time penalizes the prediction of forest fire spread and may make effective prediction of fire spread with a wind field impractical. Moreover, the data structures needed to calculate the wind field of a large map require a large amount of memory that may not be available on a single node of a current system. To reduce the computation time of the wind field, a data partitioning method has been applied: the wind field is calculated in parallel on each part of the map, and the wind fields of the different parts are then joined to form the global wind field. Furthermore, by partitioning the terrain map, the data structures necessary to resolve the wind field in each part are reduced significantly and can be stored in the memory of a single node of a current parallel system. Therefore, the existing nodes can perform computation in parallel with data that fit the memory capacity of each node. However, the calculation of the wind field is a complex problem with border effects: the wind direction and speed at points close to the border of each part may differ from the values that would be obtained far from the border, for example if the wind field were calculated over the single complete map. To solve this problem, it is necessary to include a degree of overlap among the map parts, so that each part is extended by a margin of cells beyond the cells it actually contributes. The overall wind field is then obtained by discarding the overlapping margins calculated for each part and aggregating the remaining fields. Including an overlap in each part increases the execution time, but reduces the variation in the wind field. 
The methodology has been tested with several terrain maps, and it was found that parts of 400x400 cells with an overlap of 50 cells per side provide a reasonable execution time (150 sec) with virtually no variation with respect to the wind field obtained with a global map. With this type of partitioning, each process solves an effective part of a map of 300x300 cells. |
Gemma Sanjuan, Carlos Brun, Tomas Margalef, Ana Cortes |
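The overlapping-part decomposition described in the abstract above can be illustrated with a minimal sketch (this is not the authors' code): the map is split into effective 300x300-cell parts, each part is solved with an extra 50-cell margin on every side (giving the 400x400-cell parts mentioned in the abstract), and the margins are discarded before the per-part results are joined. The function solve_wind_field is a hypothetical placeholder for a per-part wind field computation such as WindNinja; in the paper each part would be solved in parallel on a separate node, whereas the loop below runs sequentially.

```python
import numpy as np

TILE = 300     # effective part size in cells (400x400 parts minus a 50-cell margin per side)
OVERLAP = 50   # overlap margin per side, in cells

def solve_wind_field(elevation_part):
    """Hypothetical placeholder for a per-part wind field solver such as WindNinja."""
    return np.zeros_like(elevation_part)  # stand-in result (e.g. wind speed per cell)

def partitioned_wind_field(elevation):
    rows, cols = elevation.shape
    result = np.empty_like(elevation)
    for r0 in range(0, rows, TILE):
        for c0 in range(0, cols, TILE):
            # Extend the part by the overlap margin, clipped at the map border.
            r_lo, r_hi = max(0, r0 - OVERLAP), min(rows, r0 + TILE + OVERLAP)
            c_lo, c_hi = max(0, c0 - OVERLAP), min(cols, c0 + TILE + OVERLAP)
            part_field = solve_wind_field(elevation[r_lo:r_hi, c_lo:c_hi])
            # Discard the margins and keep only the effective part of the result.
            r1, c1 = min(rows, r0 + TILE), min(cols, c0 + TILE)
            result[r0:r1, c0:c1] = part_field[r0 - r_lo:r1 - r_lo, c0 - c_lo:c1 - c_lo]
    return result

# Usage on a synthetic 1000x1000-cell elevation map.
field = partitioned_wind_field(np.zeros((1000, 1000)))
```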
307 | A Framework for Evaluating Skyline Query over Uncertain Autonomous Databases [abstract] Abstract: The idea of the skyline query is to find a set of objects that is preferred in all dimensions. While this concept is easily applicable to a certain and complete database, when it comes to integrating databases that each represent data differently in the same dimension, it is difficult to determine the dominance relation between the underlying data. In this paper, we propose a framework, SkyQUD, to efficiently compute the skyline probability of datasets with uncertain dimensions. We explore the effects of uncertain dimensions on the dominance relation and propose a framework that is able to support skyline queries on this type of dataset. |
Nurul Husna Mohd Saad, Hamidah Ibrahim, Ali Amer Alwan, Fatimah Sidi, Razali Yaakob |
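As background for the dominance relation the abstract refers to, the sketch below shows a naive skyline computation over exact (certain) data; it does not reproduce the probabilistic machinery of the SkyQUD framework, and it assumes smaller values are preferred in every dimension.

```python
def dominates(a, b):
    """a dominates b if a is no worse in all dimensions and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in points if q is not p):
            continue  # p is dominated, so it cannot be in the skyline
        result.append(p)
    return result

# Example with (price, distance): (100, 2) dominates (120, 3) but not (90, 5).
print(skyline([(100, 2), (120, 3), (90, 5)]))  # -> [(100, 2), (90, 5)]
```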
253 | Efficient Data Structures for Risk Modelling in Portfolios of Catastrophic Risk Using MapReduce [abstract] Abstract: The QuPARA Risk Analysis Framework [IEEEbigdata] is an analytical framework implemented using MapReduce and designed to answer a wide variety of complex risk analysis queries on massive portfolios of catastrophic risk contracts. In this paper, we present data structure improvements that greatly accelerate QuPARA's computation of Exceedance Probability (EP) curves with secondary uncertainty. |
Andrew Rau-Chaplin, Zhimin Yao, Norbert Zeh |
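For readers unfamiliar with Exceedance Probability curves, the sketch below shows a simple empirical EP estimate from simulated annual portfolio losses. The loss figures and the function name ep_curve are hypothetical, and the MapReduce data structures that the paper actually improves are not shown.

```python
import numpy as np

def ep_curve(annual_losses):
    """Return (loss levels, exceedance probabilities), sorted by increasing loss."""
    losses = np.sort(np.asarray(annual_losses))
    n = len(losses)
    # P(loss >= losses[i]) estimated as the fraction of trial years at or above that level.
    exceedance = (n - np.arange(n)) / n
    return losses, exceedance

# Hypothetical example with five simulated trial years of aggregate portfolio loss.
levels, probs = ep_curve([1.2e6, 0.4e6, 7.5e6, 2.3e6, 0.9e6])
for lvl, p in zip(levels, probs):
    print(f"P(loss >= {lvl:,.0f}) = {p:.2f}")
```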
40 | Argumentation Approach and Learning Methods in Intelligent Decision Support Systems in the Presence of Inconsistent Data [abstract] Abstract: This paper describes methods and algorithms for working with inconsistent data in intelligent decision support systems. An argumentation approach and the application of rough sets to generalization problems are considered. Methods for finding conflicts and a generalization algorithm based on rough sets are proposed. Noise models in the generalization algorithm are examined, and experimental results are presented. A solution to some problems that are not solvable in classical logic is also given. |
Vadim N. Vagin, Marina Fomina, Oleg Morosin |
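A minimal sketch of the rough-set lower and upper approximations that underlie generalization over inconsistent data may help: objects that are indiscernible on the chosen attributes but carry conflicting class labels fall into the boundary region. The attribute names and data below are hypothetical, and the paper's own generalization algorithm and argumentation machinery are not reproduced.

```python
from collections import defaultdict

def approximations(objects, attributes, target_class):
    """Return (lower, upper) approximations of target_class w.r.t. the given attributes."""
    # Group objects into indiscernibility classes by their attribute values.
    blocks = defaultdict(list)
    for obj in objects:
        blocks[tuple(obj[a] for a in attributes)].append(obj)
    lower, upper = [], []
    for block in blocks.values():
        labels = {obj["class"] for obj in block}
        if labels == {target_class}:
            lower.extend(block)   # block lies entirely inside the concept
        if target_class in labels:
            upper.extend(block)   # block at least intersects the concept
    return lower, upper

# Two objects with identical attribute values but different labels create a conflict.
data = [
    {"a1": "high", "a2": "yes", "class": "positive"},
    {"a1": "high", "a2": "yes", "class": "negative"},  # inconsistent with the first
    {"a1": "low",  "a2": "no",  "class": "negative"},
]
low, up = approximations(data, ["a1", "a2"], "positive")
print(len(low), len(up))  # -> 0 2: the conflicting pair lands only in the upper approximation
```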
365 | Enhancing Monte Carlo Preconditioning Methods for Matrix Computations [abstract] Abstract: An enhanced version of a stochastic SParse Approximate Inverse (SPAI) preconditioner for general matrices is presented. This method is contrasted with the standard deterministic preconditioners computed by deterministic SPAI and its further optimized parallel variant, the Modified SParse Approximate Inverse preconditioner (MSPAI). We present a Monte Carlo preconditioner that relies on Markov Chain Monte Carlo (MCMC) methods to compute a rough matrix inverse first, which is then improved by an iterative filter process and a parallel refinement to enhance the accuracy of the preconditioner. Monte Carlo methods quantify the uncertainties by enabling us to estimate the non-zero elements of the inverse matrix with a given precision and a certain probability. The advantage of this approach is that we use sparse Monte Carlo matrix inversion whose complexity is linear in the size of the matrix. The behaviour of the proposed algorithm is studied, and its performance is measured and compared with that of MSPAI. |
Janko Strassburg, Vassil Alexandrov |
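To illustrate the stochastic inversion idea behind such preconditioners, the sketch below gives a basic Ulam-von Neumann Monte Carlo estimator of one row of the inverse via the Neumann series A^{-1} = sum_k (I - A)^k, assuming that series converges. The paper's iterative filtering, parallel refinement, and sparse implementation are not reproduced, and the function name mc_inverse_row is hypothetical.

```python
import numpy as np

def mc_inverse_row(A, i, n_walks=5_000, max_steps=30, rng=None):
    """Estimate row i of A^{-1} by random walks on C = I - A (Neumann series sampling)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    C = np.eye(n) - A
    absC = np.abs(C)
    row_sums = absC.sum(axis=1)
    estimate = np.zeros(n)
    for _ in range(n_walks):
        state, weight = i, 1.0
        estimate[state] += weight          # k = 0 term of the series
        for _ in range(max_steps):
            if row_sums[state] == 0:       # nowhere to go: the walk terminates
                break
            probs = absC[state] / row_sums[state]
            nxt = rng.choice(n, p=probs)
            weight *= C[state, nxt] / probs[nxt]
            state = nxt
            estimate[state] += weight      # contribution to the k-th series term
    return estimate / n_walks

# Diagonally dominant test matrix; compare the estimate against the exact inverse row.
A = np.array([[2.0, 0.5, 0.0],
              [0.3, 2.5, 0.4],
              [0.0, 0.2, 3.0]]) / 3.0
print(mc_inverse_row(A, 0, rng=0))
print(np.linalg.inv(A)[0])
```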