AUTHOR

Oleksandra Vitko

Causality-Aware Convolutional Neural Networks for Advanced Image Classification and Generation

Smart manufacturing uses emerging deep learning models, particularly Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), for various industrial diagnostics tasks based on image data, e.g., classification, detection, recognition, prediction, synthetic data generation, and security. Despite being effective for these objectives, most current deep learning models lack interpretability and explainability. They can discover features hidden within the input data together with their mutual co-occurrence. However, they are weak at discovering and making explicit the hidden causalities between the features, which could be the reason behind the parti…

research product

Learning Bayesian Metanetworks from Data with Multilevel Uncertainty

Managing knowledge by maintaining it according to a dynamic context is among the basic abilities of a knowledge-based system. The two main approaches to managing context in Bayesian networks are the introduction of contextual (in)dependence and Bayesian multinets. We present one possible implementation of a context-sensitive Bayesian multinet, the Bayesian Metanetwork, which implies that the interoperability between component Bayesian networks (valid in different contexts) can also be modelled by another Bayesian network. The general concepts and two kinds of such Metanetwork models are considered. The main focus of this paper is the learning procedure for Bayesian Metanetworks.
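
The following is a minimal sketch, not the paper's learning procedure, of how a two-level model of this kind could be estimated from data by counting: a meta-level context variable selects which component conditional probability table is valid, and each component table is learned only from the records observed in that context. The variable names and toy data are hypothetical.

```python
# A minimal sketch (not the authors' algorithm) of learning a two-level
# "metanetwork": a meta-level variable C (context) selects which component
# network, i.e. which conditional probability table P(Y | X), is valid.
# Variable names (C, X, Y) and the toy data are hypothetical.
from collections import Counter, defaultdict

def learn_metanetwork(records, alpha=1.0):
    """records: iterable of (context, x, y) tuples.
    Returns P(C) and, for each context, a component CPT P(Y | X),
    both estimated by smoothed maximum-likelihood counting."""
    context_counts = Counter()
    cpt_counts = defaultdict(Counter)          # (context, x) -> Counter over y
    y_values = set()
    for c, x, y in records:
        context_counts[c] += 1
        cpt_counts[(c, x)][y] += 1
        y_values.add(y)

    total = sum(context_counts.values())
    p_context = {c: n / total for c, n in context_counts.items()}

    def p_y_given_x(c, x, y):
        counts = cpt_counts[(c, x)]
        denom = sum(counts.values()) + alpha * len(y_values)
        return (counts[y] + alpha) / denom     # Laplace-smoothed estimate

    return p_context, p_y_given_x

# Toy usage: the context determines which dependence of Y on X applies.
data = [("night", "sensor_hi", "fault"), ("night", "sensor_hi", "fault"),
        ("day", "sensor_hi", "ok"), ("day", "sensor_lo", "ok")]
p_c, p_y = learn_metanetwork(data)
# Predictive P(Y | X) marginalises the meta level over contexts.
p_fault = sum(p_c[c] * p_y(c, "sensor_hi", "fault") for c in p_c)
print(round(p_fault, 3))
```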

research product

Explainable AI for Industry 4.0 : Semantic Representation of Deep Learning Models

Artificial Intelligence is an important asset of Industry 4.0. Recent advances in machine learning, and particularly in deep learning, enable qualitative change within industrial processes, applications, systems and products. However, there is an important challenge related to the explainability of (and, therefore, trust in) the decisions made by deep learning models (aka black boxes), and to their poor capacity for being integrated with each other. Explainable artificial intelligence is needed instead, but without loss of effectiveness of the deep learning models. In this paper we present a transformation technique between black-box models and explainable (as well as interoperable) …

research product

Bayesian metanetworks for modelling user preferences in mobile environment

The problem of profiling and filtering is particularly important for mobile information systems, where wireless network traffic and the mobile terminal's size are limited compared to Internet access from a PC. Dealing with uncertainty in this area is crucial, and many researchers apply various probabilistic models. The main contribution of this paper is a multilevel probabilistic model (the Bayesian Metanetwork), which is an extension of traditional Bayesian networks. The extra level(s) in the Metanetwork are used to select the appropriate substructure from the basic network level based on contextual features from the user's profile (e.g. the user's location). Two models of the Metanetwork are consi…
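
As an illustration of the selection mechanism described above, here is a minimal sketch with hypothetical preference tables: a contextual feature from the user's profile (location) selects the valid preference substructure, which is then used to filter content items for the mobile terminal. It is not the model from the paper.

```python
# A minimal sketch (hypothetical tables, not from the paper) of context-driven
# selection: a contextual feature from the user's profile, here location,
# picks which preference substructure P(interested | category) is valid,
# and that substructure is then used to filter content for the mobile terminal.

# Component substructures, one per value of the meta-level context variable.
PREFERENCE_CPT = {
    "office": {"news": 0.7, "music": 0.2, "sports": 0.4},
    "home":   {"news": 0.3, "music": 0.8, "sports": 0.6},
}

def filter_items(location, items, threshold=0.5):
    """Select the substructure matching the context and keep only items
    whose estimated preference probability reaches the threshold."""
    cpt = PREFERENCE_CPT[location]             # context-driven selection
    return [(item, cat) for item, cat in items if cpt.get(cat, 0.0) >= threshold]

inbox = [("budget report", "news"), ("new album", "music"), ("match recap", "sports")]
print(filter_items("office", inbox))   # [('budget report', 'news')]
print(filter_items("home", inbox))     # [('new album', 'music'), ('match recap', 'sports')]
```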

research product