Search results for "Markov proce"
Showing 10 of 147 documents
The Pianigiani-Yorke measure for topological Markov chains
1997
We prove the existence of a Pianigiani-Yorke measure for a Markovian factor of a topological Markov chain. This measure induces a Gibbs measure in the limit set. The proof uses the contraction properties of the Ruelle-Perron-Frobenius operator.
System-environment correlations and Markovian embedding of quantum non-Markovian dynamics
2018
We study the dynamics of a quantum system whose interaction with an environment is described by a collision model, i.e. the open dynamics is modelled through sequences of unitary interactions between the system and the individual constituents of the environment, termed "ancillas", which are subsequently traced out. In this setting non-Markovianity is introduced by allowing for additional unitary interactions between the ancillas. For this model, we identify the relevant system-environment correlations that lead to a non-Markovian evolution. Through an equivalent picture of the open dynamics, we introduce the notion of "memory depth" where these correlations are established between the syste…
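As a rough illustration of the collision-model picture described in this abstract, here is a minimal Python sketch of the memoryless (Markovian) case, assuming a single system qubit, qubit ancillas prepared in a maximally mixed state, and a partial-swap coupling; the unitary and coupling strength are illustrative choices, not the paper's model (which adds ancilla-ancilla interactions to generate memory).

```python
import numpy as np

def partial_trace_ancilla(rho_sa):
    """Trace out the second qubit of a 4x4 joint density matrix."""
    rho = rho_sa.reshape(2, 2, 2, 2)        # indices: s, a, s', a'
    return np.trace(rho, axis1=1, axis2=3)  # contract the ancilla indices

# Partial-swap collision unitary U = cos(g) 1 + i sin(g) SWAP (illustrative choice)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
g = 0.3
U = np.cos(g) * np.eye(4) + 1j * np.sin(g) * SWAP

rho_s = np.array([[1, 0], [0, 0]], dtype=complex)      # system starts in |0><0|
rho_a = np.array([[0.5, 0], [0, 0.5]], dtype=complex)  # maximally mixed ancillas

for _ in range(50):                        # one "collision" per fresh ancilla
    rho_sa = np.kron(rho_s, rho_a)         # fresh, uncorrelated ancilla
    rho_sa = U @ rho_sa @ U.conj().T       # joint unitary interaction
    rho_s = partial_trace_ancilla(rho_sa)  # discard the ancilla

print(np.round(rho_s, 4))                  # system relaxes toward the ancilla state
```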
Emulation of n-photon Jaynes-Cummings and anti-Jaynes-Cummings models via parametric modulation of a cyclic qutrit
2019
We study a circuit QED setup involving a single cavity mode and a cyclic qutrit whose parameters are time modulated externally. It is shown that in the dispersive regime this system behaves as a versatile platform to implement effective $n$-photon Jaynes-Cummings (JC) and anti-Jaynes-Cummings (AJC) models by suitably setting the modulation frequency. The atomic levels and the cavity Fock states involved in the effective Hamiltonians can be controlled through adjustment of the system parameters, and different JC and AJC interactions can be implemented simultaneously using multitone modulations. Moreover, one can implement some models that go beyond simple JC and AJC-like interaction, such as…
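For orientation, the bare n-photon Jaynes-Cummings and anti-Jaynes-Cummings couplings can be written down directly in a truncated Fock space. The numpy sketch below does that for a two-level atom; it does not reproduce the paper's qutrit modulation scheme, and the truncation N, coupling g, and photon number n are arbitrary illustrative values.

```python
import numpy as np

N = 10                                     # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator, a|n> = sqrt(n)|n-1>
sm = np.array([[0, 1], [0, 0]])            # sigma_minus = |g><e| (basis |g>, |e>)
sp = sm.T                                  # sigma_plus  = |e><g|

def coupling(n, g):
    an = np.linalg.matrix_power(a, n)      # n-photon lowering operator a^n
    jc  = g * (np.kron(an, sp) + np.kron(an.conj().T, sm))  # a^n sigma+ + h.c.
    ajc = g * (np.kron(an, sm) + np.kron(an.conj().T, sp))  # a^n sigma- + h.c.
    return jc, ajc

H_jc, H_ajc = coupling(n=2, g=0.05)        # two-photon JC and anti-JC couplings
print(np.allclose(H_jc, H_jc.conj().T))    # both couplings are Hermitian
```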
Recycling Gibbs sampling
2017
Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning and statistics. The key point for the successful application of the Gibbs sampler is the ability to draw samples from the full-conditional probability density functions efficiently. In the general case this is not possible, so in order to speed up the convergence of the chain, it is required to generate auxiliary samples. However, such intermediate information is finally disregarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency with no extra cost. Theoretical and exhaustive numerical co…
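A minimal sketch of the recycling idea, assuming a toy bivariate Gaussian target whose full conditionals are known in closed form: the intermediate state produced after each coordinate update is kept in the estimator instead of being discarded. The target and the moment being estimated are illustrative, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_sweeps = 0.8, 5000
x, y = 0.0, 0.0
standard, recycled = [], []

for _ in range(n_sweeps):
    # Full conditionals of a standard bivariate Gaussian with correlation rho:
    # x | y ~ N(rho*y, 1 - rho^2),   y | x ~ N(rho*x, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    recycled.append((x, y))          # intermediate state, normally thrown away
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    recycled.append((x, y))          # state after the complete sweep
    standard.append((x, y))          # standard Gibbs keeps only this one

# Both estimators target E[x*y] = rho; recycling reuses the intermediate draws
# at no extra sampling cost.
est = lambda samples: np.mean([u * v for u, v in samples])
print(est(standard), est(recycled))
```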
A Conclusive Analysis of the Finite-Time Behavior of the Discretized Pursuit Learning Automaton
2019
This paper deals with the finite-time analysis (FTA) of learning automata (LA), which is a topic for which very little work has been reported in the literature. This is as opposed to the asymptotic steady-state analysis for which there are, probabl…
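For context, here is a hedged sketch of a discretized pursuit reward-inaction learning automaton interacting with a stationary random environment; the reward probabilities, resolution parameter N, and horizon are illustrative choices, not the paper's finite-time setting.

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([0.5, 0.8, 0.6, 0.3])      # unknown reward probabilities (environment)
r, N = len(d), 100                      # number of actions, resolution parameter
delta = 1.0 / (r * N)                   # smallest allowed probability step
p = np.full(r, 1.0 / r)                 # action-probability vector
d_hat = np.zeros(r)                     # running reward estimates
counts = np.zeros(r)

for t in range(20000):
    i = rng.choice(r, p=p)              # pick an action
    reward = rng.random() < d[i]        # Bernoulli feedback from the environment
    counts[i] += 1
    d_hat[i] += (reward - d_hat[i]) / counts[i]   # update the running estimate

    if reward:                          # reward-inaction: move only on reward
        best = np.argmax(d_hat)         # "pursue" the currently best estimate
        others = [j for j in range(r) if j != best]
        p[others] = np.maximum(p[others] - delta, 0.0)
        p[best] = 1.0 - p[others].sum()

print(np.argmax(p), p.round(3))         # probability mass concentrates on one action
```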
Convergence of Markovian Stochastic Approximation with discontinuous dynamics
2016
This paper is devoted to the convergence analysis of stochastic approximation algorithms of the form $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}({X_{n+1}})$, where ${\left\{ {\theta}_n, n \in {\mathbb{N}} \right\}}$ is an ${\mathbb{R}}^d$-valued sequence, ${\left\{ {\gamma}_n, n \in {\mathbb{N}} \right\}}$ is a deterministic stepsize sequence, and ${\left\{ {X}_n, n \in {\mathbb{N}} \right\}}$ is a controlled Markov chain. We study the convergence under weak assumptions on smoothness-in-$\theta$ of the function $\theta \mapsto H_{\theta}({x})$. It is usually assumed that this function is continuous for any $x$; in this work, we relax this condition. Our results are illustrated by c…
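A minimal sketch of this recursion with an $H$ that is discontinuous in $\theta$ is quantile tracking, $H_{\theta}(x) = \alpha - \mathbb{1}\{x \le \theta\}$, driven here by a simple AR(1) Markov chain. For simplicity the chain below does not depend on $\theta$, unlike the controlled chains studied in the paper, and the step sizes are an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.9                    # target quantile level
theta, x = 0.0, 0.0

for n in range(1, 200001):
    x = 0.5 * x + rng.normal()             # Markov chain X_{n+1} from an AR(1) kernel
    H = alpha - (x <= theta)               # H_theta(x) = alpha - 1{x <= theta}, discontinuous in theta
    theta += H / n**0.7                    # gamma_n = n^{-0.7}: sum = inf, sum of squares < inf

# The stationary law is N(0, 1/(1 - 0.25)); its 0.9-quantile is about 1.48.
print(theta)
```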
Fluctuation theorems for non-Markovian quantum processes
2013
Exploiting previous results on Markovian dynamics and fluctuation theorems, we study the consequences of memory effects on single realizations of nonequilibrium processes within an open system approach. The entropy production along single trajectories for forward and backward processes is obtained with the help of a recently proposed classical-like non-Markovian stochastic unravelling, which is demonstrated to lead to a correction of the standard entropic fluctuation theorem. This correction is interpreted as resulting from the interplay between the information extracted from the system through measurements and the flow of information from the environment to the open system: Due to memory e…
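For reference, the standard Markovian (classical) integral fluctuation theorem against which the paper's correction is measured can be checked numerically for a toy three-state chain. The sketch below performs that classical check; it is not the paper's non-Markovian quantum unravelling, and the transition matrix is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)
T = np.array([[0.2, 0.5, 0.3],          # row-stochastic transition matrix with a
              [0.3, 0.2, 0.5],          # cyclic bias, so detailed balance fails
              [0.5, 0.3, 0.2]])
pi = np.full(3, 1 / 3)                  # T is doubly stochastic: uniform stationary law

n_steps, n_traj = 10, 20000
sigmas = np.empty(n_traj)
for k in range(n_traj):
    x = rng.choice(3, p=pi)             # start in the stationary distribution
    s = 0.0
    for _ in range(n_steps):
        y = rng.choice(3, p=T[x])
        s += np.log(pi[x] * T[x, y] / (pi[y] * T[y, x]))  # entropy produced by this jump
        x = y
    sigmas[k] = s

print(sigmas.mean())                    # positive on average (second law)
print(np.exp(-sigmas).mean())           # close to 1 (integral fluctuation theorem)
```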
On risk sensitive control of regular step Markov processes
2001
A Bayesian analysis of a queueing system with unlimited service
1997
A queueing system occurs when “customers” arrive at some facility requiring a certain type of “service” provided by the “servers”. Both the arrival pattern and the service requirements are usually taken to be random. If all the servers are busy when customers arrive, they usually wait in line to get served. Queues pose a number of mathematical challenges and have mainly been approached from a probability point of view; statistical analyses are very scarce. In this paper we present a Bayesian analysis of a Markovian queue in which customers are immediately served upon arrival, and hence no waiting lines form. Emergency and self-service facilities provide many examples. Techni…
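A minimal sketch of the conjugate Bayesian updates such an analysis rests on, assuming exponential interarrival and service times with Gamma priors on the two rates; the prior hyperparameters and the simulated data are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
lam_true, mu_true, n = 3.0, 1.5, 200
interarrival = rng.exponential(1 / lam_true, n)   # observed interarrival times
service = rng.exponential(1 / mu_true, n)         # observed service times

# Gamma(a0, b0) priors on the arrival rate lambda and the service rate mu are
# conjugate to the exponential likelihoods: the posterior is Gamma(a0 + n, b0 + sum).
a0, b0 = 1.0, 1.0
lam_draws = rng.gamma(a0 + n, 1 / (b0 + interarrival.sum()), 10000)
mu_draws = rng.gamma(a0 + n, 1 / (b0 + service.sum()), 10000)

# In the stationary M/M/infinity queue the number of busy servers is Poisson
# with mean lambda/mu, so posterior draws of that mean come for free.
rho_draws = lam_draws / mu_draws
print(lam_draws.mean(), mu_draws.mean(), np.percentile(rho_draws, [2.5, 50, 97.5]))
```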
On the derivation of a linear Boltzmann equation from a periodic lattice gas
2004
We consider the problem of deriving the linear Boltzmann equation from the Lorentz process with hard-sphere obstacles. In a suitable limit (the Boltzmann-Grad limit), it has been proved that the linear Boltzmann equation can be obtained when the positions of the obstacles are Poisson distributed, whereas the validation fails, even for the "correct" ratio between obstacle size and lattice parameter, when the obstacles are placed on a purely periodic lattice, because of the existence of very long free trajectories. Here we validate the linear Boltzmann equation, in the limit when the scatterers' radius $\epsilon$ vanishes, for a family of Lorentz processes such that the obstacles have a random distributio…