Search results for "Probabilistic"
Showing 10 of 380 documents
Minimal forbidden words and factor automata
1998
Let L(M) be the (factorial) language avoiding a given antifactorial language M. We design an automaton accepting L(M) and built from the language M. The construction is effective if M is finite. If M is the set of minimal forbidden words of a single word v, the automaton turns out to be the factor automaton of v (the minimal automaton accepting the set of factors of v). We also give an algorithm that builds the trie of M from the factor automaton of a single word. It yields a non-trivial upper bound on the number of minimal forbidden words of a word.
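As a minimal illustration of the underlying notion (not the paper's automaton construction), the sketch below enumerates the minimal forbidden words of a single word v directly from the definition: w is forbidden if it is not a factor of v, and minimal if every proper factor of w is a factor of v.

```python
# Illustrative brute-force computation of minimal forbidden words of a word v.
# This is NOT the automaton-based construction of the paper; it only
# demonstrates the definition on small examples over a fixed alphabet.

def factors(v):
    """All factors (substrings) of v, including the empty word."""
    return {v[i:j] for i in range(len(v) + 1) for j in range(i, len(v) + 1)}

def minimal_forbidden_words(v, alphabet):
    F = factors(v)
    mfw = set()
    # Any minimal forbidden word of length >= 2 has the form a.u.b where the
    # inner part u is a factor of v, so it suffices to extend factors of v by
    # one letter on each side and test the two maximal proper factors.
    for u in F:
        for a in alphabet:
            for b in alphabet:
                w = a + u + b
                if w not in F and w[1:] in F and w[:-1] in F:
                    mfw.add(w)
    # Length-1 forbidden words: letters of the alphabet not occurring in v.
    mfw |= {a for a in alphabet if a not in F}
    return mfw

if __name__ == "__main__":
    print(sorted(minimal_forbidden_words("abbab", "ab")))
```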
Probabilistic semantics for categorical syllogisms of Figure II
2018
A coherence-based probability semantics for categorical syllogisms of Figure I, which have transitive structures, has been proposed recently (Gilio, Pfeifer, & Sanfilippo [15]). We extend this work by studying Figure II under coherence. Camestres is an example of a Figure II syllogism: from Every P is M and No S is M infer No S is P. We interpret these sentences by suitable conditional probability assessments. Since the probabilistic inference of \(\bar{P}|S\) from the premise set \(\{M|P,\bar{M}|S\}\) is not informative, we add \(p(S|(S \vee P))>0\) as a probabilistic constraint (i.e., an “existential import assumption”) to obtain probabilistic informativeness. We show how to propagate the…
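As a point of reference (not the paper's coherence-based propagation, which handles interval-valued assessments), the following sketch works out the limiting case in which both premises are assessed with probability 1 and the conditioning events are assumed to have positive probability:

```latex
% Limiting case of Camestres with certain premises (illustrative only),
% assuming p(P) > 0 and p(S) > 0:
\begin{align*}
p(M \mid P) = 1 \;&\Rightarrow\; p(P \wedge \bar{M}) = 0,\\
p(\bar{M} \mid S) = 1 \;&\Rightarrow\; p(S \wedge M) = 0,\\
&\Rightarrow\; p(S \wedge P) = p(S \wedge P \wedge M) + p(S \wedge P \wedge \bar{M}) = 0,\\
&\Rightarrow\; p(\bar{P} \mid S) = 1 - \frac{p(S \wedge P)}{p(S)} = 1 .
\end{align*}
```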
2021
Reliable patient-specific ventricular repolarization times (RTs) can identify regions of functional block or afterdepolarizations, indicating arrhythmogenic cardiac tissue and the risk of sudden cardiac death. Unipolar electrograms (UEs) record electric potentials, and the Wyatt method has been shown to be accurate for estimating RT from a UE. High-pass filtering is an important step in processing UEs; however, it is known to distort the T-wave phase of the UE, which may compromise the accuracy of the Wyatt method. The aim of this study was to examine the effects of high-pass filtering and to improve RT estimates derived from filtered UEs. We first generated a comprehensive set of UE…
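For readers unfamiliar with the Wyatt method, the sketch below estimates RT as the time of maximum upslope (dV/dt) within the T-wave window of a unipolar electrogram, after an optional high-pass step; the window bounds, filter settings, and toy signal are illustrative placeholders, not those of the study.

```python
# Illustrative RT estimation from a unipolar electrogram (UE) via the Wyatt
# method: RT is taken as the time of maximum dV/dt within the T-wave window.
# Filter order/cutoff and window bounds are placeholder values.
import numpy as np
from scipy.signal import butter, filtfilt

def high_pass(ue, fs, cutoff_hz=0.5, order=2):
    """Zero-phase high-pass filter (may distort T-wave morphology)."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="highpass")
    return filtfilt(b, a, ue)

def wyatt_rt(ue, fs, t_window=(0.15, 0.45)):
    """Repolarization time = argmax of dV/dt inside the T-wave window (seconds)."""
    t = np.arange(len(ue)) / fs
    dvdt = np.gradient(ue, t)
    idx = np.flatnonzero((t >= t_window[0]) & (t <= t_window[1]))
    return t[idx[np.argmax(dvdt[idx])]]

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 0.6, 1 / fs)
    # Toy UE: a negative QRS deflection followed by a positive T wave.
    ue = -np.exp(-((t - 0.05) ** 2) / 2e-4) + 0.4 * np.exp(-((t - 0.30) ** 2) / 2e-3)
    print("RT (s):", wyatt_rt(high_pass(ue, fs), fs))
```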
A Probabilistic Classification Procedure Based on Response Time Analysis Towards a Quick Pre-Diagnosis of Student's Attention Deficit
2019
A classification methodology based on an experimental study is proposed towards a fast pre-diagnosis of attention deficit. Our sample consisted of school-aged children between 8 and 12 years old from Valencia, Spain. The study was based on the response time (RT) to visual stimuli in computerized tasks. The RTs from answering consecutive questions usually follow an ex-Gaussian distribution. We seek to propose a simple automatic classification scheme for children based on the most recent evidence of the relationship between RTs and ADHD. Specifically, the prevalence percentage and reported evidence for RTs in relation to ADHD or to attention deficit symptoms were t…
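As background on the ex-Gaussian step, the sketch below fits an exponentially modified Gaussian to simulated RT data with scipy; the simulated data and the screening threshold on the tail parameter are hypothetical, not the study's criteria.

```python
# Illustrative ex-Gaussian fit of response times (RTs). scipy parameterizes the
# exponentially modified Gaussian as exponnorm(K, loc=mu, scale=sigma), with
# tail parameter tau = K * sigma. The flagging threshold below is hypothetical.
import numpy as np
from scipy.stats import exponnorm

def fit_ex_gaussian(rts):
    K, mu, sigma = exponnorm.fit(rts)
    return mu, sigma, K * sigma  # (mu, sigma, tau)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated RTs in seconds: Gaussian component plus exponential tail.
    rts = rng.normal(0.45, 0.08, 500) + rng.exponential(0.20, 500)
    mu, sigma, tau = fit_ex_gaussian(rts)
    print(f"mu={mu:.3f}s sigma={sigma:.3f}s tau={tau:.3f}s")
    # Hypothetical screening rule: a heavy exponential tail (large tau) has
    # been linked to attentional lapses in the RT literature.
    print("flag for follow-up:", tau > 0.25)
```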
CN2-R: Faster CN2 with randomly generated complexes
2011
Among rule induction algorithms, the classic CN2 is still one of the most popular; the large number of enhancements and improvements proposed for it attests to this. Despite the growth in computing capacity since the algorithm was proposed, resource demand remains one of its main issues. The proposed modification, CN2-R, substitutes the star concept of the original algorithm with a technique of randomly generated complexes in order to substantially improve running times without a significant loss in accuracy.
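As a toy illustration of the general idea (randomly generated conjunctions of attribute tests scored by coverage and accuracy), and not the CN2-R algorithm itself, a minimal sketch might look like this:

```python
# Toy sketch of rule induction with randomly generated complexes
# (conjunctions of attribute=value selectors), in the spirit of replacing
# CN2's star search with random candidate generation. Not the CN2-R algorithm.
import random

def random_complex(attributes, max_selectors=2, rng=random):
    """Pick up to max_selectors attribute=value tests at random."""
    chosen = rng.sample(list(attributes), k=rng.randint(1, max_selectors))
    return {a: rng.choice(attributes[a]) for a in chosen}

def covers(complex_, example):
    return all(example[a] == v for a, v in complex_.items())

def evaluate(complex_, data, target_class):
    covered = [ex for ex in data if covers(complex_, ex)]
    if not covered:
        return 0.0, 0
    accuracy = sum(ex["class"] == target_class for ex in covered) / len(covered)
    return accuracy, len(covered)

if __name__ == "__main__":
    attributes = {"outlook": ["sunny", "rain"], "wind": ["weak", "strong"]}
    data = [
        {"outlook": "sunny", "wind": "weak", "class": "play"},
        {"outlook": "sunny", "wind": "strong", "class": "play"},
        {"outlook": "rain", "wind": "strong", "class": "stay"},
        {"outlook": "rain", "wind": "weak", "class": "play"},
    ]
    rng = random.Random(42)
    candidates = [random_complex(attributes, rng=rng) for _ in range(20)]
    best = max(candidates, key=lambda c: evaluate(c, data, "play"))
    print("best complex:", best, evaluate(best, data, "play"))
```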
Estimation of peak capacity based on peak simulation.
2018
Peak capacity (PC) is a key concept in chromatographic analysis, nowadays of great importance for characterising complex separations and as a criterion for finding the most promising conditions. A theoretical expression for PC estimation can be easily deduced in isocratic elution, provided that the column plate count is assumed constant for all analytes. In gradient elution, the complex dependence of peak width on the gradient program implies that an integral equation has to be solved, which is only possible in a limited number of situations. In 2005, Uwe Neue developed a comprehensive theory for the calculation of PC in gradient elution, which is only valid for certain situations: single linear …
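For reference, the isocratic expression alluded to above is commonly written in the following textbook form, assuming a constant plate count N and unit resolution between adjacent peaks, with t_1 and t_n the retention times of the first and last eluting peaks:

```latex
% Classic isocratic peak capacity (constant plate count N, unit resolution):
n_c = 1 + \frac{\sqrt{N}}{4}\,\ln\!\left(\frac{t_n}{t_1}\right)
```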
A probabilistic approach to radiant field modeling in dense particulate systems
2016
Radiant field distribution is an important modeling issue in many systems of practical interest, such as photo-bioreactors for algae growth and heterogeneous photo-catalytic reactors for water detoxification. In this work, a simple radiant field model, suitable for dispersed systems showing particle size distributions, is proposed for both dilute and dense two-phase systems. Its main features are: (i) only physical, independently assessable parameters are involved and (ii) its simplicity allows a closed-form solution, which makes it suitable for inclusion in a complete photo-reactor model, where kinetic and fluid dynamic sub-models also play a role. A similar model can be derived by making us…
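As a generic point of comparison only (not the model proposed in this work), a minimal radiant-field description for a particulate suspension is exponential attenuation of the incident flux, with an extinction coefficient built from a discretized particle size distribution; the efficiency factor and size distribution below are placeholder values.

```python
# Generic (NOT the paper's) illustration: exponential attenuation of radiant
# flux in a particulate suspension, with an extinction coefficient computed
# from a discretized particle size distribution.
import numpy as np

def extinction_coefficient(diameters_m, number_densities_m3, q_ext=2.0):
    """beta = sum over size classes of Q_ext * (pi d^2 / 4) * N_i  [1/m]."""
    cross_sections = q_ext * np.pi * diameters_m**2 / 4.0
    return float(np.sum(cross_sections * number_densities_m3))

def radiant_flux(depth_m, incident_flux, beta):
    """Beer-Lambert-type decay of flux with depth."""
    return incident_flux * np.exp(-beta * depth_m)

if __name__ == "__main__":
    d = np.array([5e-6, 10e-6, 20e-6])   # particle diameters (m), placeholder
    n = np.array([5e11, 2e11, 5e10])     # number densities (1/m^3), placeholder
    beta = extinction_coefficient(d, n)
    print("beta =", beta, "1/m")
    print("flux at 1 cm:", radiant_flux(0.01, 100.0, beta), "W/m^2")
```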
Multiple Mean Models of Statistical Shape and Probability Priors for Automatic Prostate Segmentation
2011
Low contrast of the prostate gland, heterogeneous intensity distribution inside the prostate region, imaging artifacts like shadow regions and speckle, and significant variations in prostate shape, size and inter-dataset contrast in Trans Rectal Ultrasound (TRUS) images challenge computer-aided automatic or semi-automatic segmentation of the prostate. In this paper, we propose a probabilistic framework for automatic initialization and propagation of multiple mean parametric models derived from principal component analysis of shape and posterior probability information of the prostate region to segment the prostate. Unlike traditional statistical models of shape and int…
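As background on the statistical shape component only (the landmark data and number of modes below are made up, and this is not the paper's multiple-mean framework), a point-distribution model built from PCA of aligned landmark sets can be sketched as follows:

```python
# Illustrative point-distribution shape model: PCA over aligned landmark sets.
# Shapes are (n_landmarks x 2) arrays flattened to vectors; the synthetic data
# and number of retained modes are placeholders.
import numpy as np

def build_shape_model(shapes, n_modes=2):
    X = np.stack([s.ravel() for s in shapes])   # (n_shapes, 2*n_landmarks)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvecs = Vt[:n_modes]                      # principal modes of variation
    eigvals = (S[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, eigvecs, eigvals

def synthesize(mean, eigvecs, b):
    """New shape from mode weights b (typically |b_i| <= 3*sqrt(eigval_i))."""
    return (mean + b @ eigvecs).reshape(-1, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
    base = np.c_[np.cos(angles), np.sin(angles)]
    shapes = [base * (1 + 0.1 * rng.standard_normal()) for _ in range(15)]
    mean, vecs, vals = build_shape_model(shapes)
    print("mode variances:", vals)
    print(synthesize(mean, vecs, np.array([np.sqrt(vals[0]), 0.0]))[:3])
```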
Reasoning with Vague Spatial Information from Upper Mesopotamia (2000BC)
2015
Concepts such as near, far, south of, etc., are by their very nature vague. However, they are quite common in human language. In the case of historical records, these concepts are often the only source of information regarding the position of ancient places whose exact location has been lost. In our research, we use digitized written records from Upper Mesopotamia (2000BC) from the HIGEOMES project. Our goal is to provide a better understanding of the location of places, based on the analysis of spatial statements. In our approach, we analyse cardinal statements between places with known location. Using this information we construct a probabilistic function representing t…
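One way to give a cardinal statement a probabilistic reading, shown purely as an illustration and not as the HIGEOMES approach, is to score the bearing between two places under a directional density; the von Mises model and concentration parameter below are assumptions.

```python
# Illustrative (not the project's) probabilistic reading of a cardinal
# statement: "B is north of A" modeled as a von Mises density over the bearing
# from A to B, centered on due north. Concentration kappa is a placeholder.
import numpy as np
from scipy.stats import vonmises

def bearing(a, b):
    """Bearing from a to b in radians, 0 = north, clockwise (flat-map approx)."""
    (x1, y1), (x2, y2) = a, b
    return np.arctan2(x2 - x1, y2 - y1)

def north_of_likelihood(a, b, kappa=2.0):
    """Density of the observed bearing under a 'north of' statement."""
    return vonmises.pdf(bearing(a, b), kappa, loc=0.0)

if __name__ == "__main__":
    a = (0.0, 0.0)
    candidates = {"due north": (0.0, 10.0), "north-east": (7.0, 7.0), "south": (0.0, -10.0)}
    for name, b in candidates.items():
        print(f"{name:>10}: likelihood {north_of_likelihood(a, b):.3f}")
```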
Discovering human mobility from mobile data: probabilistic models and learning algorithms
2020
Smartphone usage data can be used to study human indoor and outdoor mobility. In our work, we investigate both aspects by proposing machine learning-based algorithms adapted to the different information sources that can be collected. In terms of outdoor mobility, we use the collected GPS coordinate data to discover the daily mobility patterns of the users. To this end, we propose an automatic clustering algorithm using the Dirichlet process Gaussian mixture model (DPGMM) so as to cluster the daily GPS trajectories. This clustering method is based on estimating probability densities of the trajectories, which alleviates the problems caused by data noise. By contrast, we utilize the collecte…
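For readers unfamiliar with Dirichlet-process mixtures, the sketch below clusters synthetic daily GPS trajectories with scikit-learn's BayesianGaussianMixture; the crude fixed-length trajectory features and synthetic data are illustrative, not the thesis's pipeline.

```python
# Illustrative Dirichlet-process Gaussian mixture clustering of daily GPS
# trajectories, reduced here to simple fixed-length feature vectors.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def trajectory_features(traj):
    """Crude fixed-length summary: start, end, and centroid of (lat, lon) points."""
    traj = np.asarray(traj)
    return np.concatenate([traj[0], traj[-1], traj.mean(axis=0)])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Two synthetic daily patterns (home->work, home->gym) with GPS noise.
    home, work, gym = np.array([48.85, 2.35]), np.array([48.87, 2.30]), np.array([48.83, 2.37])
    trajs = [np.linspace(home, work, 10) + rng.normal(0, 1e-3, (10, 2)) for _ in range(15)]
    trajs += [np.linspace(home, gym, 10) + rng.normal(0, 1e-3, (10, 2)) for _ in range(15)]
    X = np.stack([trajectory_features(t) for t in trajs])
    dpgmm = BayesianGaussianMixture(
        n_components=10,  # upper bound; unused components get near-zero weight
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        random_state=0,
    ).fit(X)
    print("cluster assignments:", dpgmm.predict(X))
```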