Search results for "learning"
Showing 10 of 5299 documents
Computational issues in fitting joint frailty models for recurrent events with an associated terminal event.
2020
Abstract Background and objective: Joint frailty regression models are intended for the analysis of recurrent event times in the presence of informative drop-outs. They have been proposed for clinical trials to estimate the effect of a treatment on the rate of recurrent heart failure hospitalisations in the presence of drop-outs due to cardiovascular death. Whereas an R software package for fitting joint frailty models is available, some technical issues have to be solved in order to use SAS® software, which is required in the regulatory environment of clinical trials. Methods: First, we demonstrate how to solve these issues by deriving proper likelihood decompositions, in particular fo…
Automatic left ventricle volume calculation with explainability through a deep learning weak-supervision methodology
2021
Background and objective: Magnetic resonance imaging is the most reliable imaging technique for assessing the heart. More specifically, analysis of the left ventricle is of great importance, as the main pathologies directly affect this region. In order to characterize the left ventricle, it is necessary to extract its volume. In this work we present a neural network architecture that is capable of directly estimating the left ventricle volume in short-axis cine magnetic resonance imaging at the end-diastolic frame and providing a segmentation of the region on which the volume calculation is based, thus offering explainability for the estimated value. Methods: The network was des…
A framework for modelling the biomechanical behaviour of the human liver during breathing in real time using machine learning
2017
Progress in biomechanical modelling of human soft tissue is the basis for the development of new clinical applications capable of improving the diagnosis and treatment of some diseases (e.g. cancer), as well as the surgical planning and guidance of some interventions. The finite element method (FEM) is one of the most popular techniques used to predict the deformation of human soft tissue due to its high accuracy. However, FEM has a high associated computational cost, which makes its integration into real-time computer-aided surgery systems difficult. An alternative for simulating the mechanical behaviour of human organs in real time comes from the use of machine learning (ML) techniq…
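The offline-training / online-evaluation idea this abstract describes can be sketched with a toy surrogate. The FEM stand-in function, its quadratic form, and the polynomial surrogate model are all assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive offline FEM solver: displacement
# as a nonlinear function of applied pressure (an assumption, for illustration).
def fem_displacement(p):
    return 0.5 * p + 0.2 * p**2

# Offline phase: sample the expensive solver to build a training set.
p_train = np.linspace(0, 1, 50)
u_train = fem_displacement(p_train) + 0.001 * rng.normal(size=50)

# Train a cheap surrogate (here a simple polynomial fit standing in for ML).
coeffs = np.polyfit(p_train, u_train, deg=3)

# Online phase: evaluating the surrogate is fast enough for real time.
u_fast = np.polyval(coeffs, 0.5)
print(abs(u_fast - fem_displacement(0.5)) < 0.01)
```

The design point is the split itself: the costly FEM runs happen once, offline, and only the cheap learned model is evaluated inside the real-time loop.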
Bag-of-word based brand recognition using Markov Clustering Algorithm for codebook generation
2015
In order to address the issue of online counterfeiting, it is necessary to use automatic tools that analyze the large amount of information available on the Internet. Analysis methods that extract information about the content of images are very promising for this purpose. In this paper, a method that automatically extracts the brand of objects in images is proposed. The method does not explicitly search for text or logos; this information is implicitly included in the Bag-of-Words representation. In the Bag-of-Words paradigm, visual features are clustered to create the visual words. Despite its shortcomings, k-means is the most widely used algorithm. With k-mea…
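The k-means codebook step the abstract refers to (the baseline its Markov Clustering approach replaces) can be sketched as follows; the descriptor dimensionality and codebook size here are arbitrary assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(features, k, n_iter=20):
    # Plain Lloyd's algorithm: the standard codebook-generation baseline.
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, codebook):
    # Assign each local descriptor to its nearest visual word and count.
    d = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return hist / hist.sum()

# Toy "descriptors" standing in for SIFT-like local features (an assumption).
feats = rng.normal(size=(200, 8))
codebook = kmeans(feats, k=16)
h = bow_histogram(feats, codebook)
print(h.shape, round(float(h.sum()), 1))
```

An image is then represented by its normalized histogram `h` over the visual words, regardless of which clustering algorithm produced the codebook.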
A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover
2014
Quantitative methods for mapping sub-pixel land cover fractions are gaining increasing attention, particularly with regard to upcoming hyperspectral satellite missions. We evaluated five advanced regression algorithms combined with synthetically mixed training data for quantifying urban land cover from HyMap data at 3.6 and 9 m spatial resolution. Methods included support vector regression (SVR), kernel ridge regression (KRR), artificial neural networks (NN), random forest regression (RFR) and partial least squares regression (PLSR). Our experiments demonstrate that both kernel methods SVR and KRR yield high accuracies for mapping complex urban surface types, i.e., rooftops, pavements, gras…
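Of the five algorithms compared, kernel ridge regression has a convenient closed form, sketched here on a toy 1-D problem; the RBF kernel, the hyperparameters, and the synthetic data are illustrative assumptions, not the HyMap setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel, a typical choice for kernel methods like KRR/SVR.
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_test, lam=1e-2, gamma=1.0):
    # Closed-form kernel ridge regression: alpha = (K + lam*I)^{-1} y,
    # prediction = k(x_test, X) @ alpha.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf_kernel(X_test, X, gamma) @ alpha

# Toy stand-in for the regression task: 1-D noisy sine data.
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
X_test = np.linspace(-3, 3, 50)[:, None]
pred = krr_fit_predict(X, y, X_test)
rmse = np.sqrt(np.mean((pred - np.sin(X_test[:, 0])) ** 2))
print(rmse < 0.2)
```

The regularization weight `lam` plays the same variance-control role as the margin parameters in SVR, which is one reason the two kernel methods often behave similarly in practice.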
Relation between adaptive learning actions and profiles of MOOCs users
2016
The overcrowding and the heterogeneity of participants' profiles in a Massive Open Online Course (MOOC) are among the main causes of high dropout rates. International reports and research works point to personalized learning as an important way to improve learning in any educational context. Information and communication technologies help to bring adaptive techniques into education through online courses. The specific characteristics of MOOCs point to the need to implement adaptive methodologies in MOOCs to increase completion rates. This work presents a statistical analysis to find out in what aspects the condition of adaptivity, defined by the construct, is a preference of…
Adaptive Population Importance Samplers: A General Perspective
2016
Importance sampling (IS) is a well-known Monte Carlo method, widely used to approximate a distribution of interest using a random measure composed of a set of weighted samples generated from another proposal density. Since the performance of the algorithm depends on the mismatch between the target and the proposal densities, a set of proposals is often iteratively adapted in order to reduce the variance of the resulting estimator. In this paper, we review several well-known adaptive population importance samplers, providing a unified common framework and classifying them according to the nature of their estimation and adaptive procedures. Furthermore, we interpret the underlying motivation …
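The basic self-normalized importance sampling estimator that adaptive population schemes build on can be sketched as follows; the Gaussian target and proposal here are assumptions chosen for illustration, not any particular sampler from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of the target: a standard normal.
    return -0.5 * x**2

def is_estimate(n=10_000, scale=2.0):
    # Draw from a wider Gaussian proposal q(x) = N(0, scale^2).
    x = rng.normal(0.0, scale, size=n)
    log_q = -0.5 * (x / scale) ** 2 - np.log(scale)
    # Importance weights w = pi(x) / q(x), computed in log space.
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max())   # subtract the max for stability
    w /= w.sum()                      # self-normalize
    return np.sum(w * x**2)           # estimate E[x^2] = 1 under the target

print(is_estimate())
```

The estimator's variance grows with the mismatch between target and proposal, which is exactly what the adaptive schemes reviewed in the paper try to reduce by iteratively moving the proposals.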
Time in Associative Learning: A Review on Temporal Maps
2021
The ability to recall the timing of events is a crucial aspect of associative learning. Yet traditional theories of associative learning have often overlooked the role of time in learning associations and shaping the behavioral outcome, addressing temporal learning as an independent and parallel process. The Temporal Coding Hypothesis is an attempt to bring together the associative and non-associative aspects of learning. This account proposes temporal maps, a representation that encodes several aspects of a learned association but attaches considerable importance to the temporal aspect. A temporal map helps an agent make inferences about missing information by applying an integration mechan…
Group Metropolis Sampling
2017
Monte Carlo (MC) methods are widely used for Bayesian inference and optimization in statistics, signal processing and machine learning. Two well-known classes of MC methods are the Importance Sampling (IS) techniques and the Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce the Group Importance Sampling (GIS) framework, in which different sets of weighted samples are properly summarized with one summary particle and one summary weight. GIS facilitates the design of novel efficient MC techniques. For instance, we present the Group Metropolis Sampling (GMS) algorithm, which produces a Markov chain of sets of weighted samples. GMS in general outperforms other multiple try schemes…
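A rough sketch of the summarization idea, one summary particle and one summary weight per set, might look like this. The specific choices (resampling the summary particle in proportion to the weights, summing the weights) are assumptions for illustration, not necessarily the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def summarize(samples, weights):
    # Compress a set of weighted samples into a single summary particle
    # and a single summary weight. Here the particle is resampled with
    # probability proportional to its weight, and the summary weight is
    # the total weight of the set (an illustrative sketch).
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(samples), p=w / w.sum())
    return samples[idx], w.sum()

particles = np.array([0.0, 1.0, 2.0])
weights = [0.2, 0.3, 0.5]
summary_x, summary_w = summarize(particles, weights)
print(summary_x, summary_w)
```

A higher-level algorithm can then manipulate many sets of samples while only ever handling one (particle, weight) pair per set.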
Recycling Gibbs sampling
2017
Gibbs sampling is a well-known Markov chain Monte Carlo (MCMC) algorithm, extensively used in signal processing, machine learning and statistics. The key to successful application of the Gibbs sampler is the ability to draw samples from the full-conditional probability density functions efficiently. Since this is not possible in the general case, auxiliary samples must be generated in order to speed up the convergence of the chain. However, such intermediate information is finally disregarded. In this work, we show that these auxiliary samples can be recycled within the Gibbs estimators, improving their efficiency at no extra cost. Theoretical and exhaustive numerical co…
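A minimal Gibbs sampler, shown here for a standard bivariate normal whose full conditionals are available in closed form, illustrates the alternating-conditionals structure the abstract describes; this is a textbook example, not the paper's application:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_bivariate_normal(rho=0.8, n_iter=5000):
    # Full conditionals of a standard bivariate normal with correlation rho:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    s = np.sqrt(1 - rho**2)
    x = y = 0.0
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        x = rng.normal(rho * y, s)   # draw x from p(x | y)
        y = rng.normal(rho * x, s)   # draw y from p(y | x)
        samples[i] = x, y
    return samples

chain = gibbs_bivariate_normal()
print(np.corrcoef(chain.T)[0, 1])
```

When these conditionals cannot be sampled directly, auxiliary draws are introduced at each step, and it is precisely those otherwise-discarded draws that the paper proposes to recycle into the estimators.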