Search results for " Marko"

Showing 10 of 201 documents

Gesture Modeling by Hanklet-Based Hidden Markov Model

2015

In this paper, we propose a novel approach to gesture modeling. We aim to decompose a gesture into sub-trajectories that are the output of a sequence of atomic linear time invariant (LTI) systems, and we use a Hidden Markov Model to model the transitions from one LTI system to another. For this purpose, we represent the human body motion in a temporal window as a set of body joint trajectories that we assume to be the output of an LTI system. We describe the set of trajectories in a temporal window by the corresponding Hankel matrix (Hanklet), which embeds the observability matrix of the LTI system that produced it. We train a set of HMMs (one for each gesture class) with a discriminative a…
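
Illustrative sketch (not the authors' implementation): building a Hanklet from a window of joint trajectories, assuming the window is stored as a NumPy array of shape (T, d) with T frames of a d-dimensional joint-position vector; the scale normalisation and parameters are assumptions for the example.

    import numpy as np

    def hanklet(window, num_block_rows):
        """Block Hankel matrix of a trajectory window of shape (T, d)."""
        T, d = window.shape
        num_cols = T - num_block_rows + 1
        if num_cols < 1:
            raise ValueError("window too short for the requested number of block rows")
        # Stack time-shifted copies of the window: block row i holds frames i .. i+num_cols-1.
        blocks = [window[i:i + num_cols].T for i in range(num_block_rows)]
        H = np.vstack(blocks)                    # shape (num_block_rows * d, num_cols)
        return H / (np.linalg.norm(H) + 1e-12)   # scale-normalise so windows are comparable

    # Toy usage: a 20-frame window of 3 joint coordinates.
    rng = np.random.default_rng(0)
    H = hanklet(rng.standard_normal((20, 3)), num_block_rows=5)
    print(H.shape)   # (15, 16)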

Keywords: Conditional random field; Kinect; Computer science; Maximum-entropy Markov model; Action classification; Action recognition; Gesture recognition; Hankel matrix; Hidden Markov model; Markov model; LTI system theory; Observability; Artificial intelligence; Algorithm; Skeleton

Harmony perception and regularity of spike trains in a simple auditory model

2013

A probabilistic approach for investigating the phenomena of dissonance and consonance in a simple auditory sensory model, composed of two sensory neurons and one interneuron, is presented. We calculated the interneuron’s firing statistics, that is, the interspike interval statistics of the spike train at the output of the interneuron, for consonant and dissonant inputs in the presence of additional "noise", representing random signals from other, nearby neurons and from the environment. We find that blurry interspike interval distributions (ISIDs) characterize dissonant chords, while quite regular ISIDs characterize consonant chords. The informational entropy of the non-Markov spike train …
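
Illustrative sketch only (not the paper's model): histogramming interspike intervals and computing their Shannon entropy, assuming spike times are given as a 1-D array in seconds; the bin count and the toy spike trains are made up.

    import numpy as np

    def isi_entropy(spike_times, n_bins=50):
        """Shannon entropy (bits) of the interspike-interval distribution (ISID)."""
        isi = np.diff(np.sort(spike_times))
        counts, _ = np.histogram(isi, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]                      # drop empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    # A regular (consonant-like) train gives a sharp ISID and low entropy,
    # a jittered (dissonant-like) train gives a blurry ISID and higher entropy.
    regular = np.arange(0.0, 1.0, 0.01)
    jittered = regular + np.random.default_rng(1).normal(0.0, 0.004, regular.size)
    print(isi_entropy(regular), isi_entropy(jittered))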

Keywords: Consonant; Interneuron; Speech recognition; Spike train; Sensory system; Consonance and dissonance; Sound perception; Settore FIS/03 - Fisica Della Materia; Auditory system; consonant and dissonant accords; environmental noise; hidden Markov chain; informational entropy; regularity; Perception; Mathematics
Published in: AIP Conference Proceedings

Income distribution dynamics: monotone Markov chains make light work

1995

This paper considers some aspects of the dynamics of income distributions by employing a simple Markov chain model of income mobility. The main motivation of the paper is to introduce the techniques of “monotone” Markov chains to this field. The transition matrix of a discrete Markov chain is called monotone if each row stochastically dominates the row above it. It will be shown that by embedding the dynamics of the income distribution in a monotone Markov chain, a number of interesting results may be obtained in a straightforward and intuitive fashion.
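
A small sketch of the monotonicity condition described above, with a hypothetical 3-class (low/middle/high income) mobility matrix; the dominance check and the stationary-distribution computation are generic, not taken from the paper.

    import numpy as np

    def is_monotone(P):
        """Each row stochastically dominates the row above it: the cumulative
        sums must be (weakly) decreasing as we move down the rows."""
        C = np.cumsum(P, axis=1)
        return bool(np.all(C[1:] <= C[:-1] + 1e-12))

    def stationary(P):
        """Long-run income distribution: left eigenvector of P for eigenvalue 1
        (entries share one sign for an irreducible chain, so normalising works)."""
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        return pi / pi.sum()

    # Hypothetical transition matrix between income classes (rows sum to 1).
    P = np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])
    print(is_monotone(P), stationary(P))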

Keywords: Continuous-time Markov chain; Economics and Econometrics; Mathematical optimization; Markov kernel; Markov chain mixing time; Markov chain; Variable-order Markov model; Applied mathematics; Markov property; Examples of Markov chains; Markov model; Social Sciences (miscellaneous); Mathematics
Published in: Social Choice and Welfare

Convergence of Markovian Stochastic Approximation with discontinuous dynamics

2016

This paper is devoted to the convergence analysis of stochastic approximation algorithms of the form $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}(X_{n+1})$, where $\{\theta_n, n \in \mathbb{N}\}$ is an $\mathbb{R}^d$-valued sequence, $\{\gamma_n, n \in \mathbb{N}\}$ is a deterministic stepsize sequence, and $\{X_n, n \in \mathbb{N}\}$ is a controlled Markov chain. We study the convergence under weak assumptions on the smoothness in $\theta$ of the function $\theta \mapsto H_{\theta}(x)$. It is usually assumed that this function is continuous for any $x$; in this work, we relax this condition. Our results are illustrated by c…
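
A toy sketch of the recursion above (illustrative only): $H_\theta$ is deliberately discontinuous in $\theta$, and $\{X_n\}$ is an AR(1) chain; in this simplified example the chain does not actually depend on $\theta$, which is the simplest special case of a controlled chain.

    import numpy as np

    rng = np.random.default_rng(0)

    def H(theta, x):
        # sign() makes theta -> H_theta(x) discontinuous; the mean field
        # E[H_theta(X)] vanishes at the median of the chain's stationary law.
        return np.sign(x - theta)

    theta, x = 0.0, 0.0
    for n in range(1, 20001):
        x = 0.5 * x + 1.0 + 0.1 * rng.standard_normal()   # AR(1) Markov chain, stationary mean 2
        gamma = 1.0 / n                                    # deterministic stepsizes
        theta = theta + gamma * H(theta, x)
    print(theta)   # close to 2.0, the median of the stationary distribution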

Keywords: Control and Optimization; Stochastic approximation; Markov process; Statistics Theory (math.ST); discontinuous dynamics; Combinatorics; Convergence; state-dependent noise; controlled Markov chain; Markovian stochastic approximation; Sequence; Function (mathematics); Applied Mathematics; Mathematics; 62L20

Probabilistic techniques for bridging the semantic gap in schema alignment

Connecting pieces of information from heterogeneous sources sharing the same domain is an open challenge in the Semantic Web, Big Data and business communities. The main problem in this research area is to bridge the expressiveness gap between relational databases and ontologies. In general, an ontology is more expressive and captures more of the semantic information behind data than a relational database does. On the other hand, databases are the most commonly used persistent storage systems, and they grant benefits such as security and data integrity, but they need to be managed by expert users. The problem is especially significant when enterprise or corporate ontologies are used to share information…

Keywords: Data Integration; OWL Ontology; Database; Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni; Schema Matching; Entity-Relation Diagram; Hidden Markov Model

Hidden Markov random field model and Broyden–Fletcher–Goldfarb–Shanno algorithm for brain image segmentation

2018

Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation have become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists of dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. …
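
Not the authors' pipeline; a minimal sketch of handing a segmentation-style energy (data fidelity plus an MRF-like smoothness prior) to SciPy's BFGS optimiser, on a made-up 1-D "image" with two regions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    # Toy 1-D "image": two homogeneous regions plus noise.
    y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)
    beta = 2.0   # weight of the pairwise smoothness prior

    def energy(u):
        data_term = np.sum((u - y) ** 2)        # fidelity to the observed intensities
        smooth_term = np.sum(np.diff(u) ** 2)   # penalise jumps between neighbouring pixels
        return data_term + beta * smooth_term

    res = minimize(energy, x0=y.copy(), method="BFGS")
    labels = (res.x > 0.5).astype(int)          # threshold the smoothed field into two classes
    print(res.success, labels[:3], labels[-3:])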

Keywords: Dice coefficient criterion; Computer science; Brain image segmentation; MR images; Artificial Intelligence; Theoretical Computer Science; Segmentation; Brain magnetic resonance imaging; Hidden Markov model; Random field; Broyden-Fletcher-Goldfarb-Shanno algorithm; Pattern recognition; Image segmentation; hidden Markov random field; Minimization; Homogeneous; Automatic segmentation; Software
Published in: Journal of Experimental & Theoretical Artificial Intelligence

The Pianigiani-Yorke measure for topological Markov chains

1997

We prove the existence of a Pianigiani-Yorke measure for a Markovian factor of a topological Markov chain. This measure induces a Gibbs measure in the limit set. The proof uses the contraction properties of the Ruelle-Perron-Frobenius operator.
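
For reference, the Ruelle-Perron-Frobenius (transfer) operator mentioned above is standardly defined, for the shift $\sigma$ of the topological Markov chain and a potential $\varphi$, by

    $(\mathcal{L}_\varphi f)(x) = \sum_{y \,:\, \sigma y = x} e^{\varphi(y)} f(y)$,

and its contraction properties are typically established on a suitable cone or norm of functions; this is background notation, not a statement of the paper's specific construction.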

Keywords: Discrete mathematics; Dynamical Systems; Markov chain mixing time; Markov chain; General Mathematics; Markov process; Partition function (mathematics); Topology; Harris chain; Chaotic Dynamics; Balance equation; Examples of Markov chains; Gibbs measure; Mathematics
Published in: Israel Journal of Mathematics

Context Trees, Variable Length Markov Chains and Dynamical Sources

2012

Infinite random sequences of letters can be viewed as stochastic chains or as strings produced by a source, in the sense of information theory. The relationship between Variable Length Markov Chains (VLMC) and probabilistic dynamical sources is studied. We establish a probabilistic framework for context trees and VLMCs and prove that any VLMC is a dynamical source for which we explicitly build the mapping. For two examples, the "comb" and the "bamboo blossom", we find a necessary and sufficient condition for the existence and uniqueness of a stationary probability measure for the VLMC. These two examples are detailed in order to provide the associated Dirichlet series as well as the genera…
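
A small sketch of a variable length Markov chain driven by a finite-depth, comb-shaped context tree over {0, 1} (inspired by, but not taken from, the paper's "comb" example); the transition probabilities are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Leaves of a depth-4 "comb" context tree (contexts written with the most
    # recent symbol last); values are the probability that the next symbol is 1.
    contexts = {
        "1":    0.3,   # last symbol was 1
        "10":   0.6,   # last symbol 0, preceded by 1
        "100":  0.7,
        "1000": 0.8,
        "0000": 0.9,   # at least four 0s in a row
    }

    def next_prob(history):
        """Longest-suffix lookup in the context tree: the defining rule of a VLMC."""
        for length in range(min(len(history), 4), 0, -1):
            if history[-length:] in contexts:
                return contexts[history[-length:]]
        return 0.5     # short histories that reach no leaf fall back to the root

    history = "1000"   # seed with enough past to always hit a leaf
    for _ in range(40):
        history += "1" if rng.random() < next_prob(history) else "0"
    print(history)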

Keywords: Discrete mathematics; Pure mathematics; Stationary distribution; Markov chain; Probabilistic dynamical sources; Probabilistic logic; Context (language use); Information theory; Variable length Markov chains; Measure (mathematics); Occurrences of words; Uniqueness; Dynamical systems of the interval; Dirichlet series; Mathematics

Conjugate unstable manifolds and their underlying geometrized Markov partitions

2000

Conjugate unstable manifolds of saturated hyperbolic sets of Smale diffeomorphisms are characterized in terms of the combinatorics of their geometrized Markov partitions. As a consequence, the relationship between the local and the global points of view is also made explicit.

Keywords: Discrete mathematics; Smale diffeomorphisms; Dynamical Systems; Markov chain; Invariant manifolds; Geometrized Markov partitions; Geometry and Topology; Symplectic Geometry; Geometric Topology; Conjugate; Mathematics
Published in: Topology and its Applications

Quantitative convergence rates for subgeometric Markov chains

2015

We provide explicit expressions for the constants involved in the characterisation of ergodicity of subgeometric Markov chains. The constants are determined in terms of those appearing in the assumed drift and one-step minorisation conditions. The results are fundamental for the study of some algorithms where uniform bounds for these constants are needed for a family of Markov kernels. Our results also accommodate some classes of inhomogeneous chains.
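
For orientation, the drift and one-step minorisation conditions referred to above are usually stated along the following lines (a standard formulation, not necessarily the paper's exact assumptions): there exist a set $C$, a function $V \ge 1$, a concave increasing function $\phi$, a constant $b < \infty$, an $\varepsilon > 0$ and a probability measure $\nu$ such that

    $PV(x) \le V(x) - \phi(V(x)) + b\,\mathbf{1}_C(x)$ for all $x$,   and   $P(x, \cdot) \ge \varepsilon\, \nu(\cdot)$ for all $x \in C$;

the explicit constants are then expressed in terms of the quantities $\phi$, $b$, $\varepsilon$ and $V$ appearing here.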

Keywords: Discrete mathematics; Statistics and Probability; Markov chain mixing time; Markov chain; Variable-order Markov model; General Mathematics; Ergodicity; Inhomogeneous; Polynomial ergodicity; Subgeometric ergodicity; Convergence; Examples of Markov chains; Statistical physics; Statistics, Probability and Uncertainty; Mathematics; 60J05; 60J22