Search results for "interpretability"

Showing 10 of 32 documents

Understanding deep learning in land use classification based on Sentinel-2 time series

2020

Abstract: The use of deep learning (DL) approaches for the analysis of remote sensing (RS) data is rapidly increasing. DL techniques have provided excellent results in applications ranging from parameter estimation to image classification and anomaly detection. Although the vast majority of studies report precision indicators, there is a lack of studies dealing with the interpretability of the predictions. This shortcoming hampers wider adoption of DL approaches by the user community, as the model's decisions are not accountable. In applications that involve the management of public budgets or policy compliance, better interpretability of predictions is strictly required. This work aims …

Subjects: interpretability; deep learning; land use; contextual image classification; anomaly detection; climate-change policy; Common Agricultural Policy; agroecology; data science. Published in: Scientific Reports

2021

Classification approaches that allow the extraction of logical rules, such as decision trees, are often considered more interpretable than neural networks. Moreover, logical rules are comparatively easy to verify against any possible input, an important property for systems that aim to ensure the correct operation of a given model. However, for high-dimensional input data such as images, the individual symbols, i.e. pixels, are not easily interpretable. Therefore, rule-based approaches are not typically used for this kind of high-dimensional data. We introduce the concept of first-order convolutional rules, which are logical rules that can be extracted using a convolutional neural network (CNN), and w…
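As a hedged illustration of the idea only (not the paper's actual extraction algorithm), a convolutional rule can be pictured as a conjunction of pixel literals evaluated over a sliding window, firing if the conjunction holds at any window position. The helper name and literal sets below are hypothetical:

```python
import numpy as np

def rule_fires(image, positive, negative, k=2):
    """Evaluate a toy convolutional AND-rule on a binarized image.

    The rule is a conjunction of pixel literals over a k x k window:
    `positive` lists window offsets that must be 1, `negative` lists
    offsets that must be 0. Like a convolution, the rule is slid over
    the image and fires if the conjunction holds at any position.
    """
    h, w = image.shape
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = image[i:i + k, j:j + k]
            if all(patch[r, c] == 1 for r, c in positive) and \
               all(patch[r, c] == 0 for r, c in negative):
                return True
    return False

# A rule looking for a vertical 2-pixel bar with an empty right column.
img = np.array([[0, 1, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
print(rule_fires(img, positive=[(0, 0), (1, 0)], negative=[(0, 1), (1, 1)]))
```

The interpretability argument is that each fired rule names the exact pixel configuration responsible for the decision, which a decision path in a tree over raw pixels cannot do for translated patterns.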

Subjects: interpretability; artificial neural network; convolutional neural network; decision tree; pattern recognition; local search (optimization); curse of dimensionality. Published in: Frontiers in Artificial Intelligence

Crop Yield Estimation and Interpretability With Gaussian Processes

2021

This work introduces the use of Gaussian processes (GPs) for the estimation and understanding of crop development and yield using multisensor satellite observations and meteorological data. The proposed methodology combines synergistic information on canopy greenness, biomass, soil, and plant water content from optical and microwave sensors with the atmospheric variables typically measured at meteorological stations. A composite covariance is used in the GP model to account for varying scales and nonstationary, nonlinear processes. The GP model reports noticeable gains in accuracy with respect to other machine learning approaches for the estimation of corn, wheat, and soybean …
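A minimal NumPy sketch of the core idea, assuming a composite covariance built as the sum of two RBF kernels with different length scales (the actual covariance, sensors, and data in the paper differ); the standard GP posterior mean is computed on synthetic observations:

```python
import numpy as np

def rbf(x1, x2, length, var):
    # Squared-exponential kernel between two 1-D input vectors.
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def composite_k(x1, x2):
    # Sum of a slow seasonal-scale and a fast weather-scale kernel: a
    # simple stand-in for a covariance mixing processes at different
    # temporal scales.
    return rbf(x1, x2, length=30.0, var=1.0) + rbf(x1, x2, length=5.0, var=0.3)

rng = np.random.default_rng(0)
x_train = np.linspace(0, 100, 40)                      # e.g. day of season
y_train = np.sin(x_train / 15.0) + 0.1 * rng.standard_normal(40)

noise = 0.01
K = composite_k(x_train, x_train) + noise * np.eye(40)
alpha = np.linalg.solve(K, y_train)

x_test = np.array([50.0])
mu = composite_k(x_test, x_train) @ alpha              # GP posterior mean
print(mu[0])
```

Because the kernel is a sum, each component's contribution to the prediction can be inspected separately, which is one route to the "understanding" the abstract refers to.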

Subjects: interpretability; crop yield; estimation; Gaussian process; agricultural productivity; stochastic processes; statistics. Published in: IEEE Geoscience and Remote Sensing Letters

Interpretability of Recurrent Neural Networks in Remote Sensing

2020

In this work we propose the use of Long Short-Term Memory (LSTM) recurrent neural networks on multivariate time series of satellite data for crop yield estimation. Recurrent nets exploit the temporal dimension efficiently, but interpretability is hampered by the typically overparameterized models. The focus of the study is to understand LSTM models by looking at the distribution of the hidden units, the impact of increasing network complexity, and the relative importance of the input covariates. We extracted time series of three variables describing the soil-vegetation status in agroecosystems (soil moisture, VOD and EVI) from optical and microwave satellites, as well as available in si…
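One common way to estimate the relative importance of input covariates is permutation importance. The sketch below uses a simple fixed surrogate function in place of a trained LSTM, and the variable names and synthetic data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic covariates standing in for soil moisture, VOD and EVI;
# yield depends strongly on the first, weakly on the second, not at
# all on the third.
X = rng.standard_normal((500, 3))
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(500)

def model(X):
    # Stand-in for a trained model's yield prediction.
    return 2.0 * X[:, 0] + 0.3 * X[:, 1]

baseline = np.mean((model(X) - y) ** 2)
importance = {}
for j, name in enumerate(["soil_moisture", "VOD", "EVI"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the covariate's link to y
    importance[name] = np.mean((model(Xp) - y) ** 2) - baseline

print(importance)  # soil_moisture >> VOD > EVI
```

The increase in error after shuffling a covariate quantifies how much the model relies on it, independently of the model's internal architecture.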

Subjects: interpretability; multivariate statistics; time series; recurrent neural network; network complexity; redundancy; relevance; water content; data mining. Published in: IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium

Comparison and interpretability of the available urticaria activity scores

2017

The urticaria activity score (UAS) is the gold standard for assessing disease activity in patients with chronic spontaneous urticaria (CSU). Two different versions, the UAS7 and UAS7TD, are currently used in clinical trials and routine care. To compare both versions and to obtain data on their interpretability, 130 CSU patients applied both versions and globally rated their disease activity as none, mild, moderate, or severe. UAS7 and UAS7TD values correlated strongly (r = .90, P < .001). Interquartile ranges of UAS7 and UAS7TD values for mild, moderate, and severe CSU were 11-20 and 10-24, 16-30 and 16-32, and 27-37 and 28-40, respectively. UAS7 values were slightly, but significantly, lower as compare…
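For readers unfamiliar with the score: the UAS7 is computed from patient diaries, where each day's score is the sum of a wheal score (0-3) and an itch severity score (0-3), summed over seven consecutive days for a 0-42 range. A minimal sketch of that calculation (the diary values are invented):

```python
def uas7(daily_wheals, daily_itch):
    """UAS7: sum over 7 consecutive days of the daily urticaria activity
    score, itself the sum of a wheal score (0-3) and an itch score (0-3).
    The total therefore ranges from 0 to 42."""
    assert len(daily_wheals) == len(daily_itch) == 7
    for w, i in zip(daily_wheals, daily_itch):
        assert 0 <= w <= 3 and 0 <= i <= 3
    return sum(w + i for w, i in zip(daily_wheals, daily_itch))

# One week of diary entries for a hypothetical patient.
print(uas7([2, 2, 1, 3, 2, 1, 0], [3, 2, 2, 3, 1, 1, 1]))  # → 24
```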

Subjects: interpretability; urticaria; severity of illness index; disease activity; mean difference; interquartile range; outcome assessment (health care); reproducibility of results; clinical trial; biomarkers. Published in: Allergy

Unbiased sensitivity analysis and pruning techniques in neural networks for surface ozone modelling

2005

Abstract: This paper presents the use of artificial neural networks (ANNs) for surface ozone modelling. Due to the usually non-linear nature of problems in ecology, the use of ANNs has become common practice in this field. Nevertheless, few efforts have been made to acquire knowledge about the problems by analysing the useful, but often complex, input–output mapping performed by these models. In fact, researchers are interested not only in accurate methods but also in understandable models. In the present paper, we propose a methodology to extract the governing rules of trained ANNs which, in turn, yields simplified models by using unbiased sensitivity and pruning techniques. Our propos…
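A toy version of perturbation-based sensitivity analysis, assuming a tiny fixed network in place of a trained ANN (weights, sizes, and threshold are all hypothetical, not the paper's method): inputs whose mean absolute output derivative is negligible are candidates for pruning.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny "trained" network with fixed weights; the 4th input is
# irrelevant by construction (all its outgoing weights are zero).
W1 = np.array([[ 0.5, -0.4,  0.3,  0.2],
               [-0.3,  0.6, -0.2,  0.4],
               [ 0.2, -0.5,  0.4, -0.3],
               [ 0.0,  0.0,  0.0,  0.0]])
W2 = np.array([0.7, -0.6, 0.5, 0.4])

def net(X):
    return np.tanh(X @ W1) @ W2

X = rng.standard_normal((200, 4))
eps = 1e-4
sensitivity = np.zeros(4)
for j in range(4):
    Xp, Xm = X.copy(), X.copy()
    Xp[:, j] += eps
    Xm[:, j] -= eps
    # Mean absolute central-difference derivative of the output w.r.t. input j.
    sensitivity[j] = np.mean(np.abs(net(Xp) - net(Xm)) / (2 * eps))

# Prune inputs whose sensitivity is below 5% of the largest one.
pruned = [j for j in range(4) if sensitivity[j] < 0.05 * sensitivity.max()]
print(sensitivity.round(3), "prune inputs:", pruned)
```

Averaging the derivative over a sample of inputs, rather than evaluating it at a single point, is what keeps the estimate from being biased by one operating regime of the non-linear mapping.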

Subjects: interpretability; artificial neural network; surface ozone; tropospheric ozone; non-linear models; sensitivity analysis; pruning; ecological modelling. Published in: Ecological Modelling

Intrusion Detection with Interpretable Rules Generated Using the Tsetlin Machine

2020

The rapid deployment of information and communication technologies and internet-based services has made anomaly-based network intrusion detection ever more important for safeguarding systems from novel attack vectors. To date, various machine learning mechanisms have been considered for building intrusion detection systems. However, achieving an acceptable level of classification accuracy while preserving the interpretability of the classification has always been a challenge. In this paper, we propose an efficient anomaly-based intrusion detection mechanism based on the Tsetlin Machine (TM). We have evaluated the proposed mechanism on the Knowledge Discovery and Data Mining 1999 (KDD’99) …
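To illustrate why TM rules are considered interpretable (a sketch, not the paper's learned clauses): a clause is a conjunction of literals over binarized features, and the class is decided by a majority vote of positive and negative clauses. The feature names and clauses below are hypothetical:

```python
def clause_fires(x, include_pos, include_neg):
    """A Tsetlin Machine clause is a conjunction of literals: selected
    features that must be 1 (include_pos) and selected features that
    must be 0 (include_neg)."""
    return all(x[k] == 1 for k in include_pos) and \
           all(x[k] == 0 for k in include_neg)

def classify(x, positive_clauses, negative_clauses):
    # Majority vote: clauses voting for the class minus clauses against.
    votes = sum(clause_fires(x, *c) for c in positive_clauses) \
          - sum(clause_fires(x, *c) for c in negative_clauses)
    return 1 if votes > 0 else 0

# Hypothetical binarized connection features:
# [many_failed_logins, root_shell, plain_http]
positive = [([0], [2]), ([1], [])]   # clauses voting "intrusion"
negative = [([2], [0])]              # clause voting "normal"
print(classify([1, 0, 0], positive, negative))  # → 1 (intrusion)
print(classify([0, 0, 1], positive, negative))  # → 0 (normal)
```

Unlike a neural network's weights, each clause can be read directly as an if-then statement over named features, which is the interpretability property the abstract emphasizes.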

Subjects: interpretability; intrusion detection system; machine learning; artificial neural network; decision tree; random forest; support vector machine; statistical classification; knowledge extraction. Published in: 2020 IEEE Symposium Series on Computational Intelligence (SSCI)

Spanish Adaptation of the Inventory Brief Child Abuse Potential and the Protective Factors Survey

2021

Child maltreatment is a public health problem with different consequences depending on the form of abuse. Measuring risk and protective factors has been fertile ground for research, yet often without instruments with sufficient evidence of validity. The aim of the study was to gather evidence of the validity and reliability of the Inventory Brief Child Abuse Potential (IBCAP) and the Protective Factors Survey (PFS) in the Mexican population. The instruments were translated into Spanish. In a non-probabilistic sample of 200 participants, the 7-factor model for the IBCAP [comparative fit index (CFI) = 0.984; root mean square error of approximation (RMSEA) = 0.067] and the 4-factor model for the PFS…
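The reported fit indices follow standard formulas from confirmatory factor analysis; the sketch below implements them with hypothetical chi-square statistics that are not taken from the study:

```python
import math

def rmsea(chi2, df, n):
    # Root mean square error of approximation from the model chi-square,
    # degrees of freedom and sample size; values <= .06-.08 are usually
    # read as acceptable fit.
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    # Comparative fit index of the model relative to the baseline
    # (independence) model; values >= .95 indicate good fit.
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Hypothetical statistics for a sample of n = 200, as in the study design.
print(round(rmsea(chi2=28.4, df=14, n=200), 3))
print(round(cfi(chi2_m=28.4, df_m=14, chi2_b=950.0, df_b=21), 3))
```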

Subjects: interpretability; child abuse; public health; reliability; validity evidence; discriminant validity; structural equation modeling; protective and risk factors; norms and interpretation of test scores. Published in: Frontiers in Psychology

Enhancing Attention’s Explanation Using Interpretable Tsetlin Machine

2022

Explainability is one of the key factors in Natural Language Processing (NLP), especially for legal documents, medical diagnosis, and clinical text. The attention mechanism has recently been a popular choice for such explainability, as it estimates the relative importance of input units. Recent research has revealed, however, that such mechanisms tend to misidentify irrelevant input units when explaining them. This is due to the fact that language representation layers are initialized with pre-trained word embeddings that are not context-dependent. Such a lack of context-dependent knowledge in the initial layer makes it difficult for the model to concentrate on the important aspects of the input. Usually, th…
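Attention-based importance estimates are typically read off the softmax-normalized alignment scores; a minimal sketch with hypothetical tokens and scores (not the paper's model):

```python
import numpy as np

def attention_weights(scores):
    # Softmax turns raw alignment scores into a distribution that is
    # often read as the relative importance of the input tokens.
    e = np.exp(scores - scores.max())
    return e / e.sum()

tokens = ["the", "drug", "caused", "severe", "rash"]
scores = np.array([0.1, 2.0, 1.0, 2.5, 3.0])   # hypothetical alignment scores
w = attention_weights(scores)
for t, wi in zip(tokens, w):
    print(f"{t:>7s}  {wi:.3f}")
```

The abstract's criticism is precisely that these weights can concentrate on the wrong tokens when the underlying embeddings lack context, which motivates the rule-based alternative.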

Subjects: NLP; interpretability; explainability; Tsetlin Machine; Bi-GRUs; attention; computational mathematics; numerical analysis; theoretical computer science; information and communication science.

Convolutional Regression Tsetlin Machine: An Interpretable Approach to Convolutional Regression

2021

The Convolutional Tsetlin Machine (CTM), a variant of the Tsetlin Machine (TM), represents patterns as straightforward AND-rules to address the high computational complexity and the lack of interpretability of Convolutional Neural Networks (CNNs). The CTM has shown competitive performance on the MNIST, Fashion-MNIST, and Kuzushiji-MNIST pattern classification benchmarks, both in terms of accuracy and memory footprint. In this paper, we propose the Convolutional Regression Tsetlin Machine (C-RTM), which extends the CTM to support continuous-output problems in image analysis. C-RTM identifies patterns in images using the convolution operation, as in the CTM, and then maps the identified patterns into a real…
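A hedged sketch of the regression idea, assuming clause votes are mapped linearly onto the continuous output range (the patterns, image, and mapping below are hypothetical illustrations, not learned C-RTM clauses):

```python
import numpy as np

def clause_matches(image, pattern, k=2):
    # A convolutional clause fires if its k x k binary pattern appears
    # anywhere in the image (the convolution step).
    h, w = image.shape
    return any(np.array_equal(image[i:i + k, j:j + k], pattern)
               for i in range(h - k + 1) for j in range(w - k + 1))

def regress(image, clauses, y_max):
    # Map the summed clause votes linearly into the output range [0, y_max].
    votes = sum(clause_matches(image, p) for p in clauses)
    return votes / len(clauses) * y_max

clauses = [np.array([[1, 1], [0, 0]]),   # hypothetical patch patterns
           np.array([[1, 0], [1, 0]]),
           np.array([[0, 0], [0, 0]])]
img = np.array([[1, 1, 0],
                [0, 0, 0],
                [1, 0, 0]])
print(regress(img, clauses, y_max=9.0))
```

Because the output is just a count of matched human-readable patterns, each prediction can be decomposed into the rules that contributed to it.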

Subjects: interpretability; computational complexity theory; memory footprint; pattern recognition; convolutional neural network; regression; convolution; MNIST database; noise. Published in: 2021 6th International Conference on Machine Learning Technologies