Solving Problems with Uncertainties (SPU) Session 3
Time and Date: 14:20 - 16:00 on 14th June 2019
Room: 0.6
Chair: Vassil Alexandrov
467 | On the estimation of the accuracy of numerical solutions in CFD problems Abstract: The task of assessing accuracy in the mathematical modeling of gas-dynamic processes is of utmost importance and relevance. Modern software packages include a large number of models, numerical methods and algorithms that make it possible to solve most current CFD problems. However, the issue of obtaining a reliable solution in the absence of experimental data or any reference solution remains open. The paper provides a brief overview of some useful approaches to this problem, including a multi-model approach, the study of an ensemble of solutions, and the construction of a generalized numerical experiment. |
Alexander Bondarev |
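A minimal sketch of the ensemble-of-solutions idea mentioned in abstract 467: given several numerical solutions of the same problem obtained with different models or schemes, the pointwise spread across the ensemble can serve as a crude accuracy indicator. The toy fields and the choice of standard deviation as the spread metric are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical ensemble: the same 1-D flow quantity computed with three
# different models / numerical schemes on a common grid.
x = np.linspace(0.0, 1.0, 101)
ensemble = np.stack([
    np.sin(2 * np.pi * x),                      # model A
    np.sin(2 * np.pi * x) + 0.02 * x,           # model B (slight drift)
    np.sin(2 * np.pi * x) - 0.01 * np.cos(x),   # model C (small bias)
])

# Ensemble mean as a surrogate reference solution and the pointwise
# standard deviation as an accuracy indicator in the absence of
# experimental data or a reference solution.
mean_solution = ensemble.mean(axis=0)
spread = ensemble.std(axis=0)

print(f"max pointwise spread : {spread.max():.4f}")
print(f"mean pointwise spread: {spread.mean():.4f}")
```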
499 | "Why did you do that?" Explaining black box models with Inductive Synthesis [abstract] Abstract: By their nature, the composition of black box models is opaque. This makes the ability to generate explanations for the response to stimuli challenging. The importance of explaining black box models has become increasingly important given the prevalence of AI and ML systems and the need to build legal and regulatory frameworks around them. Such explanations can also increase trust in these uncertain systems. In our paper we present RICE, a method for generating explanations of the behaviour of black box models by (1) probing a model to extract model output examples using sensitivity analysis; and (2) applying CNPInduce, a method for inductive logic program synthesis, to generate logic programs based on critical input-output pairs, and (3) interpreting the target program as a human-readable explanation. We demonstrate the application of our method by generating explanations of an artificial neural network trained to follow simple traffic rules in a hypothetical self-driving car simulation. We conclude with a discussion on the scalability and usability of our approach and its potential applications to explanation-critical scenarios. |
Gorkem Pacaci, David Johnson, Steve McKeever and Andreas Hamfelt |
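A minimal sketch of the probing step described in abstract 499: perturb the inputs of a black-box model and keep the input-output pairs where the decision flips, as candidates for the synthesis step. The stand-in traffic-rule model, the perturbation grid and the "critical pair" criterion are assumptions for illustration; the paper's actual pipeline uses sensitivity analysis together with CNPInduce, which is not reproduced here.

```python
def black_box(speed: float, light_is_red: bool) -> str:
    """Stand-in for a trained model that follows simple traffic rules."""
    if light_is_red or speed > 50.0:
        return "brake"
    return "drive"


def probe(model, speeds, lights):
    """Collect input-output pairs where the decision flips between
    neighbouring speed values -- a crude stand-in for sensitivity analysis."""
    critical = []
    for light in lights:
        outputs = [(s, model(s, light)) for s in speeds]
        for (s1, o1), (s2, o2) in zip(outputs, outputs[1:]):
            if o1 != o2:
                critical.extend([((s1, light), o1), ((s2, light), o2)])
    return critical


# Critical pairs like these would be handed to a program synthesiser.
pairs = probe(black_box,
              speeds=[5.0 * k for k in range(21)],   # 0 .. 100
              lights=[False, True])
for inputs, decision in pairs:
    print(inputs, "->", decision)
```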
510 | Predictive Analytics with Factor Variance Association Abstract: Predictive Factor Variance Association (PFVA) is a machine learning algorithm that solves the multiclass problem. A set of feature samples is provided together with a set of target classes: if a sample belongs to a class, the corresponding column is marked as one, and zero otherwise. PFVA carries out Singular Value Decomposition on the standardized samples, creating orthogonal linear combinations of the variables called factors. For each linear combination, probabilities are estimated for a target class. A least-squares curve-fitting model is then used to compute the probability that a particular sample belongs to a class. PFVA can also give regression-based predictions for quantitative dependent variables and carry out clustering of samples. The main advantage of our technique is a clear mathematical foundation using well-known concepts of linear algebra and probability. |
Raul Ramirez-Velarde, Laura Hervert-Escobar and Neil Hernandez-Gress |
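A minimal sketch of the pipeline outlined in abstract 510: standardize the samples, take the SVD to obtain orthogonal factors, then fit a least-squares model from factor scores to class-indicator columns and read the clipped fitted values as class probabilities. The toy data, the retention of all factors and the plain linear least-squares fit are assumptions for illustration; the paper's per-factor probability estimation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 4 features, 2 classes encoded as 0/1 columns.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
Y = np.column_stack([y, 1.0 - y])                # one indicator column per class

# 1) Standardize the samples.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# 2) SVD of the standardized samples; U * S gives the factor scores
#    (orthogonal linear combinations of the variables).
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U * S                                  # shape (100, 4)

# 3) Least-squares fit from factor scores (plus intercept) to the indicators.
A = np.column_stack([np.ones(len(factors)), factors])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# 4) Clipped fitted values read as class probabilities.
probs = np.clip(A @ coef, 0.0, 1.0)
pred = probs.argmax(axis=1)
print("training accuracy:", (pred == Y.argmax(axis=1)).mean())
```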
536 | Integration of ontological engineering and machine learning methods to reduce uncertainties in health risk assessment and recommendation systems Abstract: This research presents an approach that integrates the best of ontology engineering and machine learning methods in order to reduce some types of uncertainties in health risk assessment and to improve the explainability of decision-making systems. The proposed approach is based on an ontological knowledge base for health risk assessment that takes into account medical, genetic, environmental and lifestyle factors. To automate the development of the knowledge base, we propose integrating traditional knowledge engineering methods with a machine learning approach that uses the collaborative knowledge base Freebase. We also propose using a text mining method based on lexico-syntactic patterns, both inherited from existing pattern sets and created by ourselves. Moreover, we use ontology engineering methods to explain machine learning results, unsupervised methods in particular.
In the paper we present case studies showing original methods and approaches to solving problems with some kinds of uncertainties in biomedical decision-making systems within the development of the BioGenom2.0 platform. Because the platform uses an ontology-driven reasoner, there is no need to change the source code in order to tackle health risk assessment challenges with various knowledge bases focused on medical, genetic and other aspects. |
Svetlana Chuprina and Taisiya Kostareva |
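A minimal sketch of text mining with a Hearst-style lexico-syntactic pattern of the kind mentioned in abstract 536, used to extract candidate is-a relations for a knowledge base. The example sentence and the single hand-tailored pattern are illustrative assumptions; the paper combines several pattern sets with ontology engineering and Freebase, which is not reproduced here.

```python
import re

# One lexico-syntactic pattern: "<hypernym> such as <hyponym>, <hyponym> and <hyponym>".
# The hypernym shape "<word> factors" is simplified for this toy example.
PATTERN = re.compile(r"(?P<hypernym>\w+ factors) such as (?P<hyponyms>[^.;]+)")

text = ("The risk model considers lifestyle factors such as smoking, poor diet "
        "and physical inactivity; it also considers genetic factors such as "
        "inherited mutations.")

for match in PATTERN.finditer(text):
    hypernym = match.group("hypernym")
    for hyponym in re.split(r",\s*|\s+and\s+", match.group("hyponyms")):
        # Candidate is-a relations; in practice these would be reviewed
        # before being added to the ontological knowledge base.
        print(f"{hyponym.strip()} IS-A {hypernym}")
```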