Search results for "bayesian"

Showing 10 of 604 documents

A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion

2015

The focus of the current study is to compare data fusion methods applied to sensors with medium- and high-spatial resolutions. Two documented methods are applied, the spatial and temporal adaptive reflectance fusion model (STARFM) and an unmixing-based method which proposes a Bayesian formulation to incorporate prior spectral information. Furthermore, the strengths of both algorithms are combined in a novel data fusion method: the Spatial and Temporal Reflectance Unmixing Model (STRUM). The potential of each method is demonstrated using simulation imagery and Landsat and MODIS imagery. The theoretical basis of the algorithms causes STARFM and STRUM to produce Landsat-like reflecta…
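As a rough illustration of the temporal-change idea that STARFM-style fusion builds on, the sketch below predicts a fine-resolution image at the second date from the fine image at the first date plus the coarse-resolution change between dates. The array names and toy data are hypothetical, and the real STARFM additionally weights spectrally, temporally and spatially similar neighbours.

```python
import numpy as np

def starfm_like_prediction(landsat_t1, modis_t1, modis_t2):
    """Core temporal-change idea behind STARFM-style fusion: the fine image at
    date 2 is predicted from the fine image at date 1 plus the coarse-resolution
    change between the two dates.  Assumes the MODIS images are already resampled
    to the Landsat grid and radiometrically normalised; the real STARFM also
    weights neighbouring pixels by spectral, temporal and spatial similarity."""
    return landsat_t1 + (modis_t2 - modis_t1)

# Toy usage with random reflectance-like arrays (hypothetical data).
rng = np.random.default_rng(0)
landsat_t1 = rng.uniform(0.0, 0.4, size=(100, 100))
modis_t1 = landsat_t1 + rng.normal(0.0, 0.01, size=(100, 100))  # coarse view, date 1
modis_t2 = modis_t1 + 0.05                                      # simulated change by date 2
landsat_t2_pred = starfm_like_prediction(landsat_t1, modis_t1, modis_t2)
```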

Computer science; Bayesian formulation; Spatial ecology; Soil Science; Geology; Computers in Earth Sciences; Sensor fusion; Focus (optics); Reflectivity; Algorithm; Normalized Difference Vegetation Index; Remote sensing; Remote Sensing of Environment

Bayesian inference in Markovian queues

1994

This paper is concerned with the Bayesian analysis of general queues with Poisson input and exponential service times. The joint posterior distribution of the arrival rate and the individual service rate is obtained from a sample consisting of n observations of the interarrival process and m complete service times. The posterior distribution of the traffic intensity in M/M/c is also obtained, and the statistical analysis of the ergodic condition from a decision point of view is discussed.
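A minimal sketch of a conjugate Bayesian analysis in this spirit, assuming Gamma priors on the arrival and service rates (hypothetical hyperparameters, not necessarily the paper's exact formulation): the posteriors are again Gamma, and the posterior of the traffic intensity and of the ergodic condition can be approximated by Monte Carlo.

```python
import numpy as np
from scipy import stats

def posterior_traffic_intensity(interarrivals, services, c=1,
                                a0=1.0, b0=1.0, c0=1.0, d0=1.0,
                                n_draws=10_000, seed=0):
    """Conjugate sketch for an M/M/c queue: Gamma(a0, b0) and Gamma(c0, d0)
    priors on the arrival rate lambda and service rate mu give Gamma posteriors;
    the posterior of rho = lambda / (c * mu) is approximated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    n, m = len(interarrivals), len(services)
    lam = stats.gamma(a0 + n, scale=1.0 / (b0 + np.sum(interarrivals))).rvs(n_draws, random_state=rng)
    mu = stats.gamma(c0 + m, scale=1.0 / (d0 + np.sum(services))).rvs(n_draws, random_state=rng)
    rho = lam / (c * mu)
    return rho, np.mean(rho < 1.0)   # posterior draws and P(ergodic condition holds)

# Toy data: true arrival rate 0.8, true service rate 1.0.
rng = np.random.default_rng(1)
rho_draws, p_ergodic = posterior_traffic_intensity(rng.exponential(1 / 0.8, 50),
                                                   rng.exponential(1.0, 50))
```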

Computer science; Bayesian probability; Ergodicity; Posterior probability; Management Science and Operations Research; Bayesian inference; Poisson distribution; Computer Science Applications; Exponential function; Traffic intensity; Computational Theory and Mathematics; Statistics; Applied mathematics; Ergodic theory; Queueing Systems

Efficient linear fusion of partial estimators

2018

Abstract Many signal processing applications require performing statistical inference on large datasets, where computational and/or memory restrictions become an issue. In this big data setting, computing an exact global centralized estimator is often either unfeasible or impractical. Hence, several authors have considered distributed inference approaches, where the data are divided among multiple workers (cores, machines or a combination of both). The computations are then performed in parallel and the resulting partial estimators are finally combined to approximate the intractable global estimator. In this paper, we focus on the scenario where no communication exists among the workers, de…
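The sketch below shows the simplest instance of this kind of linear fusion: the data are split among hypothetical workers, each returns a partial estimator and an estimate of its variance, and the central node combines them with precision-proportional weights. The paper studies optimal fusion rules in an MMSE sense; this is only the familiar special case.

```python
import numpy as np

def fuse_partial_estimators(estimates, variances):
    """Linear fusion sketch: combine partial estimators with weights inversely
    proportional to their (estimated) variances."""
    w = 1.0 / np.asarray(variances)
    w /= w.sum()
    return np.dot(w, estimates)

# Toy usage: split a large sample among 4 hypothetical workers.
rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=2.0, size=100_000)
chunks = np.array_split(data, 4)
estimates = [c.mean() for c in chunks]                 # partial estimators
variances = [c.var(ddof=1) / len(c) for c in chunks]   # squared standard errors
global_estimate = fuse_partial_estimators(estimates, variances)
```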

Computer science; Bayesian probability; Inference; Asymptotic distribution; Artificial Intelligence; Statistical inference; Fusion rules; Electrical and Electronic Engineering; Minimum mean square error; Applied Mathematics; Constrained optimization; Estimator; Computational Theory and Mathematics; Signal Processing; Computer Vision and Pattern Recognition; Statistics, Probability and Uncertainty; Signal and Image Processing; Algorithm; Digital Signal Processing

Adaptive Importance Sampling: The past, the present, and the future

2017

A fundamental problem in signal processing is the estimation of unknown parameters or functions from noisy observations. Important examples include localization of objects in wireless sensor networks [1] and the Internet of Things [2]; multiple source reconstruction from electroencephalograms [3]; estimation of power spectral density for speech enhancement [4]; or inference in genomic signal processing [5]. Within the Bayesian signal processing framework, these problems are addressed by constructing posterior probability distributions of the unknowns. The posteriors combine optimally all of the information about the unknowns in the observations with the information that is present in their …
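A minimal adaptive importance sampling sketch in this framework, using a single Gaussian proposal whose mean and covariance are re-fitted to the weighted samples after every iteration (a population-Monte-Carlo-style adaptation); the survey covers many more elaborate schemes, and the target below is a hypothetical toy posterior.

```python
import numpy as np
from scipy import stats

def adaptive_importance_sampling(log_target, dim, n_iters=20, n_samples=500, seed=0):
    """Adaptive importance sampling with one Gaussian proposal: at each
    iteration, draw samples, compute importance weights against the
    (unnormalised) target, and re-fit the proposal mean and covariance."""
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)
    for _ in range(n_iters):
        proposal = stats.multivariate_normal(mean, cov)
        x = proposal.rvs(size=n_samples, random_state=rng).reshape(n_samples, dim)
        log_w = log_target(x) - proposal.logpdf(x)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mean = w @ x                                              # adapt location
        cov = (x - mean).T @ ((x - mean) * w[:, None]) + 1e-6 * np.eye(dim)  # adapt scale
    return x, w, mean, cov

# Toy target: unnormalised 2-D Gaussian posterior centred at (2, -1).
log_target = lambda x: -0.5 * np.sum((x - np.array([2.0, -1.0]))**2, axis=1)
samples, weights, post_mean, post_cov = adaptive_importance_sampling(log_target, dim=2)
```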

Computer science; Bayesian probability; Posterior probability; Inference; Machine learning; Multidimensional signal processing; Prior probability; Electrical and Electronic Engineering; Applied Mathematics; Approximate inference; Signal Processing; Probability distribution; Artificial intelligence; Signal and Image Processing; Algorithm; Importance sampling

A New Simple Computational Method of Simultaneous Constructing and Comparing Confidence Intervals of Shortest Length and Equal Tails for Making Effic…

2021

A confidence interval is a range of values that provides the user with useful information about how accurately a statistic estimates a parameter. In the present paper, a new, simple computational method is proposed for simultaneously constructing and comparing confidence intervals of shortest length and equal tails in order to make efficient decisions under parametric uncertainty. This unified computational method provides intervals in several situations that previously required separate analysis using more advanced methods and tables for numerical solutions. In contrast to the Bayesian approach, the proposed approach does not depend on the choice of priors and is a novelty in the theory of st…
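As a concrete example of the kind of pivot-based construction being unified, the sketch below compares equal-tails and shortest-length confidence intervals for a normal variance built from the chi-square pivotal quantity (n-1)S²/σ². This is a standard textbook case, not the paper's own method.

```python
import numpy as np
from scipy import stats, optimize

def variance_ci(sample, alpha=0.05, shortest=True):
    """Equal-tails vs. shortest-length confidence interval for a normal
    variance, based on the pivot (n-1)S^2/sigma^2 ~ chi-square(n-1)."""
    n = len(sample)
    s2 = np.var(sample, ddof=1)
    chi2 = stats.chi2(df=n - 1)
    if not shortest:
        a, b = chi2.ppf(alpha / 2), chi2.ppf(1 - alpha / 2)       # equal tails
    else:
        # Minimise the length 1/a - 1/b subject to F(b) - F(a) = 1 - alpha.
        def length(a):
            b = chi2.ppf(min(chi2.cdf(a) + 1 - alpha, 1.0))
            return 1.0 / a - 1.0 / b
        res = optimize.minimize_scalar(length, bounds=(1e-6, chi2.ppf(alpha)),
                                       method="bounded")
        a = res.x
        b = chi2.ppf(min(chi2.cdf(a) + 1 - alpha, 1.0))
    return (n - 1) * s2 / b, (n - 1) * s2 / a

# Toy usage on simulated data.
rng = np.random.default_rng(3)
sample = rng.normal(0.0, 2.0, size=30)
print(variance_ci(sample, shortest=False), variance_ci(sample, shortest=True))
```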

Computer science; Bayesian probability; Prior probability; Probability distribution; Quantile function; Pivotal quantity; Algorithm; Confidence interval; Parametric statistics; Quantile

A Bayesian unified framework for risk estimation and cluster identification in small area health data analysis.

2020

Many statistical models have been proposed to analyse small area disease data with the aim of describing spatial variation in disease risk. In this paper, we propose a Bayesian hierarchical model that simultaneously allows for risk estimation and cluster identification. Our model formulation assumes that there is an unknown number of risk classes and small areas are assigned to a risk class by means of independent allocation variables. Therefore, areas within each cluster are assumed to share a common risk but they may be geographically separated. The posterior distribution of the parameter representing the number of risk classes is estimated using a novel procedure that combines its prior …
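A stripped-down sketch of the allocation idea: each small area is assigned to a risk class by an allocation variable, and areas in the same class share a relative risk. Here the number of classes is fixed and a simple Gibbs sampler with Gamma-Poisson conjugacy and hypothetical priors is used; the paper treats the number of classes as unknown and estimates its posterior with a dedicated procedure.

```python
import numpy as np

def gibbs_risk_classes(y, E, K=3, n_iter=2000, a=1.0, b=1.0, seed=0):
    """Fixed-K sketch: y_i ~ Poisson(E_i * theta_{z_i}), allocation z_i, shared
    class risks theta_k with Gamma(a, b) priors, Dirichlet(1) class weights."""
    rng = np.random.default_rng(seed)
    y, E = np.asarray(y), np.asarray(E)
    n = len(y)
    z = rng.integers(0, K, size=n)             # initial allocations
    theta = rng.gamma(a, 1.0 / b, size=K)      # initial relative risks
    for _ in range(n_iter):
        # 1) Class weights from their Dirichlet posterior.
        pi = rng.dirichlet(1.0 + np.bincount(z, minlength=K))
        # 2) Allocation variable of each area (Poisson likelihood up to a constant).
        log_p = (np.log(pi)[None, :] + y[:, None] * np.log(theta)[None, :]
                 - E[:, None] * theta[None, :])
        p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=p_i) for p_i in p])
        # 3) Shared risk of each class (Gamma-Poisson conjugacy).
        for k in range(K):
            in_k = (z == k)
            theta[k] = rng.gamma(a + y[in_k].sum(), 1.0 / (b + E[in_k].sum()))
    return z, theta

# Toy usage: 50 areas with expected counts E_i and two true risk levels.
rng = np.random.default_rng(4)
E = rng.uniform(5, 20, size=50)
true_risk = np.where(rng.random(50) < 0.3, 2.0, 0.8)
y = rng.poisson(E * true_risk)
z_hat, theta_hat = gibbs_risk_classes(y, E)
```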

Computer science; Epidemiology; Pathology and Laboratory Medicine; Geographical locations; Chickenpox; Mathematical and Statistical Techniques; Statistics; Medicine and Health Sciences; Public and Occupational Health; Multidisciplinary; Simulation and Modeling; Europe; Identification (information); Medical Microbiology; Small-Area Analysis; Viral Pathogens; Viruses; Physical Sciences; Medicine; Pathogens; Algorithms; Research Article; Herpesviruses; Science; Bayesian probability; Posterior probability; Bayesian Method; Disease Surveillance; Disease cluster; Research and Analysis Methods; Risk Assessment; Microbiology; Varicella Zoster Virus; Risk class; Prior probability; Covariate; Bayesian hierarchical modeling; Humans; European Union; Microbial Pathogens; Biology and life sciences; Organisms; Statistical model; Bayes Theorem; Probability Theory; Probability Distribution; Marginal likelihood; Convolution; Spain; People and places; DNA viruses; Mathematical Functions; Mathematics; PLoS ONE

Temporal Binding in Multisensory and Motor-Sensory Contexts: Toward a Unified Model

2021

Our senses receive a manifold of sensory signals at any given moment in our daily lives. For a coherent and unified representation of information and precise motor control, our brain needs to temporally bind the signals emanating from a common causal event and segregate others. Traditionally, different mechanisms were proposed for the temporal binding phenomenon in multisensory and motor-sensory contexts. This paper reviews the literature on the temporal binding phenomenon in both multisensory and motor-sensory contexts and suggests future research directions for advancing the field. Moreover, by critically evaluating the recent literature, this paper suggests that common computational prin…

Computer science; Mini Review; Event (relativity); Sensory system; Behavioral Neuroscience; temporal binding; Phenomenon; causal inference; motor-sensory; Bayesian models; Biological Psychiatry; Uncertainty reduction theory; Cognitive science; Representation (systemics); Motor control; Human Neuroscience; Unified Model; multisensory; Psychiatry and Mental health; Neuropsychology and Physiological Psychology; Neurology; precision; Frontiers in Human Neuroscience

Distributed Particle Metropolis-Hastings Schemes

2018

We introduce a Particle Metropolis-Hastings algorithm driven by several parallel particle filters. The communication with the central node requires the transmission of only a set of weighted samples, one per filter. Furthermore, the marginal version of this scheme, the Distributed Particle Marginal Metropolis-Hastings (DPMMH) method, is also presented. DPMMH can be used for making inference on both dynamic and static variables of interest. Ergodicity is guaranteed, and numerical simulations show the advantages of the novel schemes.
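A minimal sketch of the structure of such a scheme for a toy state-space model: several parallel bootstrap particle filters each return a likelihood estimate for the proposed static parameter, and the central node combines them (here by averaging the likelihood estimates) inside the Metropolis-Hastings accept/reject step. The model, the flat prior, and the way the filters' outputs are combined are illustrative assumptions, not the paper's exact DPMH/DPMMH schemes.

```python
import numpy as np

def bootstrap_pf_loglik(y, a, n_particles=200, sx=1.0, sy=1.0, rng=None):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, sx^2),
    y_t = x_t + N(0, sy^2); returns an unbiased estimate of log p(y | a)."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(0.0, 1.0, n_particles)
    loglik = 0.0
    for yt in y:
        x = a * x + rng.normal(0.0, sx, n_particles)              # propagate
        logw = -0.5 * ((yt - x) / sy) ** 2                        # weight
        loglik += np.log(np.mean(np.exp(logw)) / (np.sqrt(2 * np.pi) * sy))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)                  # resample
    return loglik

def combined_loglik(y, a, n_filters, rng):
    """Average the likelihood estimates of the parallel filters (log-sum-exp)."""
    lls = np.array([bootstrap_pf_loglik(y, a, rng=rng) for _ in range(n_filters)])
    return np.logaddexp.reduce(lls) - np.log(n_filters)

def distributed_pmh(y, n_iters=500, n_filters=4, step=0.1, seed=0):
    """Pseudo-marginal MH on the static parameter a, driven by parallel filters."""
    rng = np.random.default_rng(seed)
    a = 0.0
    loglik = combined_loglik(y, a, n_filters, rng)
    chain = []
    for _ in range(n_iters):
        a_prop = a + step * rng.normal()
        loglik_prop = combined_loglik(y, a_prop, n_filters, rng)
        if np.log(rng.random()) < loglik_prop - loglik:           # flat prior on a
            a, loglik = a_prop, loglik_prop
        chain.append(a)
    return np.array(chain)

# Toy usage: simulate data from the model with a = 0.7.
rng = np.random.default_rng(5)
x, y = 0.0, []
for _ in range(50):
    x = 0.7 * x + rng.normal()
    y.append(x + rng.normal())
chain = distributed_pmh(np.array(y))
```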

Computer science; Monte Carlo method; Ergodicity; Filter (signal processing); Bayesian inference; Set (abstract data type); Metropolis–Hastings algorithm; Transmission (telecommunications); Particle filter; Signal and Image Processing; Algorithm; 2018 IEEE Statistical Signal Processing Workshop (SSP)

Group Metropolis Sampling

2017

Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. Two well-known classes of MC methods are the Importance Sampling (IS) techniques and the Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce the Group Importance Sampling (GIS) framework where different sets of weighted samples are properly summarized with one summary particle and one summary weight. GIS facilitates the design of novel efficient MC techniques. For instance, we present the Group Metropolis Sampling (GMS) algorithm which produces a Markov chain of sets of weighted samples. GMS in general outperforms other multiple try schemes…
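A sketch in the spirit of GIS and GMS: each group of weighted samples is summarized by one summary particle (resampled within the group) and one summary weight (the average unnormalized weight), and a Markov chain over groups accepts or rejects a newly proposed group by comparing summary weights. The fixed Gaussian proposal and the toy target are assumptions, and the actual GMS algorithm keeps all weighted samples of each accepted group for estimation.

```python
import numpy as np
from scipy import stats

def sample_group(log_target, proposal, n_per_group, rng):
    """Draw one group of importance samples and summarise it: one summary
    particle resampled within the group and one summary log-weight."""
    x = np.atleast_2d(proposal.rvs(size=n_per_group, random_state=rng))
    logw = log_target(x) - proposal.logpdf(x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particle = x[rng.choice(len(x), p=w)]
    log_summary_weight = np.logaddexp.reduce(logw) - np.log(n_per_group)
    return particle, log_summary_weight

def group_metropolis(log_target, dim=2, n_iters=2000, n_per_group=20, seed=0):
    """Markov chain over groups: a proposed group replaces the current one
    with probability min(1, W_new / W_old), using a fixed Gaussian proposal."""
    rng = np.random.default_rng(seed)
    proposal = stats.multivariate_normal(np.zeros(dim), 4.0 * np.eye(dim))
    particle, logw = sample_group(log_target, proposal, n_per_group, rng)
    chain = []
    for _ in range(n_iters):
        new_particle, new_logw = sample_group(log_target, proposal, n_per_group, rng)
        if np.log(rng.random()) < new_logw - logw:    # accept the new group
            particle, logw = new_particle, new_logw
        chain.append(particle)
    return np.array(chain)

# Toy usage: unnormalised 2-D Gaussian target centred at (1, -2).
log_target = lambda x: -0.5 * np.sum((x - np.array([1.0, -2.0]))**2, axis=1)
chain = group_metropolis(log_target)
```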

Computer science; Monte Carlo method; Markov process; Slice sampling; Probability density function; Multiple-try Metropolis; Bayesian inference; Machine learning; Hybrid Monte Carlo; Markov chain; Rejection sampling; Sampling (statistics); Markov chain Monte Carlo; Metropolis–Hastings algorithm; Monte Carlo method in statistical physics; Monte Carlo integration; Artificial intelligence; Particle filter; Signal and Image Processing; Algorithm; Importance sampling; Monte Carlo molecular modeling

Additive noise and multiplicative bias as disclosure limitation techniques for continuous microdata: A simulation study

2004

This paper focuses on a combination of two disclosure limitation techniques, additive noise and multiplicative bias, and studies their efficacy in protecting the confidentiality of continuous microdata. A Bayesian intruder model is extensively simulated in order to assess the performance of these disclosure limitation techniques as a function of key parameters such as the variability among profiles in the original data, the amount of the users' prior information, and the amount of bias and noise introduced into the data. The results of the simulation offer insight into the degree of vulnerability of data on continuous random variables and suggest some guidelines for effective protection measures.
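The masking mechanism itself is simple to sketch: each continuous value is multiplied by a bias factor and perturbed by additive noise, x_masked = b·x + e. The particular distributions chosen below (lognormal bias around 1, Gaussian noise scaled to the data) are illustrative assumptions, not the paper's exact simulation settings.

```python
import numpy as np

def mask_microdata(x, bias_sd=0.05, noise_sd=None, seed=0):
    """Mask continuous microdata with multiplicative bias and additive noise:
    x_masked = b * x + e, with b lognormal around 1 and e Gaussian."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    if noise_sd is None:
        noise_sd = 0.1 * x.std()                                  # scale noise to the data
    b = rng.lognormal(mean=0.0, sigma=bias_sd, size=x.shape)      # multiplicative bias
    e = rng.normal(0.0, noise_sd, size=x.shape)                   # additive noise
    return b * x + e

# Toy usage: protect a column of simulated incomes.
rng = np.random.default_rng(6)
income = rng.lognormal(mean=10.0, sigma=0.5, size=1000)
income_masked = mask_microdata(income)
```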

Computer science; Multiplicative function; Bayesian probability; General Engineering; Computer Science Applications; Original data; Computational Mathematics; Microdata (HTML); Simulated data; Confidentiality; Data mining; Random variable; Prior information