Search results for "interpretability"
Showing 10 of 32 documents
Understanding deep learning in land use classification based on Sentinel-2 time series
2020
The use of deep learning (DL) approaches for the analysis of remote sensing (RS) data is rapidly increasing. DL techniques have provided excellent results in applications ranging from parameter estimation to image classification and anomaly detection. Although the vast majority of studies report precision indicators, there is a lack of studies dealing with the interpretability of the predictions. This shortcoming hampers wider adoption of DL approaches by the broader user community, as the models' decisions are not accountable. In applications that involve the management of public budgets or policy compliance, better interpretability of predictions is strictly required. This work aims …
2021
Classification approaches from which logical rules can be extracted, such as decision trees, are often considered more interpretable than neural networks. Logical rules are also comparatively easy to verify against any possible input, which matters in systems that aim to ensure the correct operation of a given model. However, for high-dimensional input data such as images, the individual symbols, i.e. pixels, are not easily interpretable, so rule-based approaches are not typically used for this kind of data. We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN), and w…
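Where the snippet cuts off, the core construction is already visible: a thresholded convolutional filter over a binarized image is exactly a logical AND over the pixels of a patch. A minimal sketch of that equivalence, with a hypothetical filter and toy data, not the paper's extraction method:

```python
import numpy as np

# Hypothetical 3x3 binary filter: the rule "fires" only when every
# marked pixel in the patch is on, i.e. a logical AND over pixels.
RULE = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]])  # illustrative "diagonal stroke" pattern

def rule_fires(patch: np.ndarray) -> bool:
    """AND-rule: true iff all pixels required by RULE are set."""
    return bool(np.all(patch[RULE == 1] == 1))

def convolve_rule(img: np.ndarray) -> np.ndarray:
    """Evaluate the rule at every 3x3 location of a binary image.

    Equivalent to thresholding a convolution with RULE at RULE.sum():
    the filter response peaks exactly when the AND holds.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=bool)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = rule_fires(img[i:i + 3, j:j + 3])
    return out

img = (np.random.rand(8, 8) > 0.5).astype(int)  # toy binary image
print(convolve_rule(img).any())  # does the rule fire anywhere?
```

The appeal of the construction is that every location where the thresholded filter fires can be read off as a verifiable propositional statement about concrete pixels.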
Crop Yield Estimation and Interpretability With Gaussian Processes
2021
This work introduces the use of Gaussian processes (GPs) for the estimation and understanding of crop development and yield using multisensor satellite observations and meteorological data. The proposed methodology combines synergistic information on canopy greenness, biomass, soil, and plant water content from optical and microwave sensors with the atmospheric variables typically measured at meteorological stations. A composite covariance is used in the GP model to account for varying scales and nonstationary, nonlinear processes. The GP model reports noticeable gains in terms of accuracy with respect to other machine learning approaches for the estimation of corn, wheat, and soybean …
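A minimal sketch of a composite-covariance GP regression, assuming scikit-learn and toy stand-in features; the paper's actual covariance and data are richer:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Toy stand-ins: rows are regions, columns are (greenness, biomass,
# soil moisture, temperature). Hypothetical, not the paper's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([0.8, 0.5, 0.3, -0.2]) + 0.1 * rng.normal(size=50)

# Composite covariance: an ARD RBF (one length scale per input, so
# optical and meteorological variables can vary on different scales)
# plus a white-noise term.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gp.predict(X[:5], return_std=True)  # predictions with uncertainty
print(gp.kernel_)  # fitted per-variable length scales
```

The learned per-dimension length scales give a first-order read on which covariates the model actually uses, which is the kind of interpretability the abstract points to.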
Interpretability of Recurrent Neural Networks in Remote Sensing
2020
In this work we propose the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks on multivariate time series of satellite data for crop yield estimation. Recurrent nets exploit the temporal dimension efficiently, but interpretability is hampered by the typically overparameterized models. The focus of the study is to understand LSTM models by looking at the distribution of hidden units, the impact of increasing network complexity, and the relative importance of the input covariates. We extracted time series of three variables describing the soil-vegetation status in agroecosystems (soil moisture, VOD, and EVI) from optical and microwave satellites, as well as available in si…
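A minimal sketch of one probe the abstract mentions, the relative importance of input covariates, implemented here as permutation importance on a toy PyTorch LSTM; the model, weights, and data are hypothetical:

```python
import torch
import torch.nn as nn

class YieldLSTM(nn.Module):
    def __init__(self, n_vars=3, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                    # x: (batch, time, vars)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

VARS = ["soil_moisture", "VOD", "EVI"]       # the three covariates above
model = YieldLSTM()                          # assume trained weights here
X = torch.randn(64, 24, 3)                  # toy: 64 series, 24 time steps
y = torch.randn(64)                          # toy yield targets

def mse(m, x):
    with torch.no_grad():
        return nn.functional.mse_loss(m(x), y).item()

base = mse(model, X)
for v, name in enumerate(VARS):
    Xp = X.clone()
    Xp[:, :, v] = Xp[torch.randperm(len(X)), :, v]  # shuffle one covariate
    print(name, mse(model, Xp) - base)       # error increase ~ importance
```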
Comparison and interpretability of the available urticaria activity scores
2017
The urticaria activity score (UAS) is the gold standard for assessing disease activity in patients with chronic spontaneous urticaria (CSU). Two different versions, the UAS7 and UAS7TD, are currently used in clinical trials and routine care. To compare both versions and to obtain data on their interpretability, 130 CSU patients completed both versions and globally rated their disease activity as none, mild, moderate, or severe. UAS7 and UAS7TD values correlated strongly (r = .90, P < .001). Interquartile ranges for UAS7 and UAS7TD were 11-20 and 10-24 for mild, 16-30 and 16-32 for moderate, and 27-37 and 28-40 for severe CSU. UAS7 values were slightly, but significantly, lower as compare…
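The reported statistics (Pearson correlation, interquartile ranges per severity band) are simple to reproduce; a sketch on simulated paired scores, not the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
uas7 = rng.integers(0, 43, size=130)                        # UAS7 range is 0-42
uas7_td = np.clip(uas7 + rng.integers(-4, 5, 130), 0, 42)   # toy paired scores

r, p = pearsonr(uas7, uas7_td)                   # the study reports r = .90
q1, q3 = np.percentile(uas7[uas7 >= 16], [25, 75])  # IQR within one band
print(f"r={r:.2f}, p={p:.3g}, IQR=({q1:.0f}, {q3:.0f})")
```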
Unbiased sensitivity analysis and pruning techniques in neural networks for surface ozone modelling
2005
This paper presents the use of artificial neural networks (ANNs) for surface ozone modelling. Due to the usually non-linear nature of problems in ecology, the use of ANNs has become common practice in this field. Nevertheless, few efforts have been made to acquire knowledge about the problems by analysing the useful, but often complex, input–output mapping performed by these models. In fact, researchers are not only interested in accurate methods but also in understandable models. In the present paper, we propose a methodology to extract the governing rules of a trained ANN which, in turn, yields simplified models by using unbiased sensitivity and pruning techniques. Our propos…
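A minimal sketch of the two ingredients the title names, input sensitivity analysis and weight pruning, on a toy feed-forward net; the paper's "unbiased" procedure is more involved:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(8, 5))          # toy "trained" first-layer weights
w2 = rng.normal(size=8)               # toy "trained" output weights
f = lambda x: w2 @ np.tanh(W1 @ x)    # scalar network output

# Sensitivity of the output to each input: mean absolute central
# difference under small perturbations, averaged over sample inputs.
X = rng.normal(size=(100, 5))
eps = 1e-3
sens = np.zeros(5)
for x in X:
    for i in range(5):
        d = np.zeros(5); d[i] = eps
        sens[i] += abs(f(x + d) - f(x - d)) / (2 * eps)
sens /= len(X)
print("input sensitivities:", sens.round(3))

# Pruning: zero out the weakest first-layer connections, yielding a
# simpler and easier-to-inspect model.
thresh = np.quantile(np.abs(W1), 0.3)
W1[np.abs(W1) < thresh] = 0.0         # drop the weakest 30% of weights
```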
Intrusion Detection with Interpretable Rules Generated Using the Tsetlin Machine
2020
The rapid deployment of information and communication technologies and internet-based services has made anomaly-based network intrusion detection increasingly important for safeguarding systems from novel attack vectors. To date, various machine learning mechanisms have been considered for building intrusion detection systems. However, achieving an acceptable level of classification accuracy while preserving the interpretability of the classification has always been a challenge. In this paper, we propose an efficient anomaly-based intrusion detection mechanism based on the Tsetlin Machine (TM). We have evaluated the proposed mechanism over the Knowledge Discovery and Data Mining 1999 (KDD’99) …
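The interpretability claim rests on TM clauses being plain conjunctions over binarized features. A minimal sketch of how such a learned rule set reads at inference time; the rules and feature names are hypothetical, not learned from KDD'99:

```python
# Each clause is an AND over binarized features (a literal or its negation).
# A hypothetical learned rule set for the class "attack":
CLAUSES = [
    {"duration_high": True, "num_failed_logins_high": True},
    {"src_bytes_high": True, "logged_in": False},
]

def clause_fires(clause, sample):
    return all(sample[feat] == want for feat, want in clause.items())

def classify(sample):
    votes = sum(clause_fires(c, sample) for c in CLAUSES)
    return "attack" if votes >= 1 else "normal"

conn = {"duration_high": True, "num_failed_logins_high": True,
        "src_bytes_high": False, "logged_in": True}
print(classify(conn))                          # 'attack', and we can say why:
print([c for c in CLAUSES if clause_fires(c, conn)])
```

A real TM additionally subtracts votes from negative-polarity clauses; the point here is only that every decision traces back to explicit conjunctions.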
Spanish Adaptation of the Inventory Brief Child Abuse Potential and the Protective Factors Survey
2021
Child maltreatment is a public health problem with different consequences depending on the form of abuse. Measuring risk and protective factors has been fertile ground for research, yet often without instruments with sufficient evidence of validity. The aim of this study was to gather evidence of the validity and reliability of the Inventory Brief Child Abuse Potential (IBCAP) and the Protective Factors Survey (PFS) in the Mexican population. The instruments were translated into Spanish. In a non-probabilistic sample of 200 participants, the 7-factor model for the IBCAP [comparative fit index (CFI) = 0.984; root mean square error of approximation (RMSEA) = 0.067] and the 4-factor model for the PFS…
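For reference, the two fit indices quoted (CFI and RMSEA) are simple functions of the model and baseline chi-square statistics; a sketch with hypothetical inputs, not the study's values:

```python
def rmsea(chi2, df, n):
    """Root mean square error of approximation (sample version)."""
    return (max(chi2 - df, 0) / (df * (n - 1))) ** 0.5

def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index relative to the baseline (null) model."""
    d_model = max(chi2 - df, 0)
    d_null = max(chi2_null - df_null, d_model)
    return 1 - d_model / d_null if d_null else 1.0

# Hypothetical chi-square statistics for a model fit on N = 200:
print(round(cfi(210.0, 168, 2600.0, 210), 3))   # CFI
print(round(rmsea(210.0, 168, 200), 3))         # RMSEA
```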
Enhancing Attention’s Explanation Using Interpretable Tsetlin Machine
2022
Explainability is one of the key factors in Natural Language Processing (NLP), especially for legal documents, medical diagnosis, and clinical text. The attention mechanism has recently been a popular choice for such explainability, as it estimates the relative importance of input units. Recent research has revealed, however, that attention tends to misidentify irrelevant input units as important. This is because language representation layers are initialized with pre-trained word embeddings that are not context-dependent. Such a lack of context-dependent knowledge in the initial layer makes it difficult for the model to concentrate on the important aspects of the input. Usually, th…
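A minimal sketch of the attention-as-explanation setup the abstract critiques: importance is read off softmax attention weights over token representations. The embeddings and query vector are toy values, and deliberately non-contextual:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(3)
tokens = ["the", "drug", "caused", "severe", "rash"]
E = rng.normal(size=(5, 8))          # toy static token embeddings
q = rng.normal(size=8)               # toy query vector

attn = softmax(E @ q / np.sqrt(8))   # scaled dot-product attention weights
for t, a in sorted(zip(tokens, attn), key=lambda p: -p[1]):
    print(f"{t:8s} {a:.2f}")         # this ranking is the 'explanation'
```

Because the embeddings carry no context, nothing prevents high weight from landing on an irrelevant token, which is the failure mode the paper addresses.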
Convolutional Regression Tsetlin Machine: An Interpretable Approach to Convolutional Regression
2021
The Convolutional Tsetlin Machine (CTM), a variant of the Tsetlin Machine (TM), represents patterns as straightforward AND-rules to address the high computational complexity and lack of interpretability of Convolutional Neural Networks (CNNs). The CTM has shown competitive performance on the MNIST, Fashion-MNIST, and Kuzushiji-MNIST pattern classification benchmarks, both in terms of accuracy and memory footprint. In this paper, we propose the Convolutional Regression Tsetlin Machine (C-RTM), which extends the CTM to support continuous-output problems in image analysis. C-RTM identifies patterns in images using the convolution operation, as in the CTM, and then maps the identified patterns into a real…
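A minimal sketch of the regression step described, counting convolutional AND-rule matches and mapping the vote count to a continuous output; the patterns and output scaling are hypothetical simplifications:

```python
import numpy as np

# Hypothetical learned 2x2 binary patterns (convolutional AND-rules).
PATTERNS = [np.array([[1, 1], [0, 0]]),
            np.array([[1, 0], [1, 0]])]

def count_matches(img, pat):
    """Number of image locations where the AND-rule holds."""
    h, w = img.shape
    ph, pw = pat.shape
    hits = 0
    for i in range(h - ph + 1):
        for j in range(w - pw + 1):
            patch = img[i:i + ph, j:j + pw]
            hits += np.all(patch[pat == 1] == 1)
    return hits

def c_rtm_predict(img, y_max=10.0, t=8):
    # Total clause votes, clipped and rescaled to the output range:
    # the regression-TM idea of mapping votes to a real value.
    votes = sum(count_matches(img, p) for p in PATTERNS)
    return min(votes, t) / t * y_max

img = (np.random.rand(6, 6) > 0.5).astype(int)   # toy binary image
print(c_rtm_predict(img))
```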