Search results for "Clusterin"
Showing 10 of 478 documents
New statistical post processing approach for precise fault and defect localization in TRI database acquired on complex VLSI
2013
Timing issues, missing or extra state transitions, or unusual consumption can be detected and localized by analyzing a Time Resolved Imaging (TRI) database. However, long test patterns can make this process challenging: the number of photons to process increases rapidly, and the acquisition time needed to reach a good signal-to-noise ratio (SNR) can be prohibitive. As a result, tracking the defect emission signature inside such a huge database can be quite complicated. In this paper, a method based on data mining techniques is suggested to help the TRI end user decide where to start a deeper analysis of the integrated circuit, even with such complex databases.
Neural Modeling of Greenhouse Gas Emission from Agricultural Sector in European Union Member Countries
2018
The present paper discusses a novel methodology based on neural networks for determining agricultural emission model simulations. Methane and nitrous oxide are the key pollutants among greenhouse gases and a major contributor to climate change because of their high global impact potential. Using statistical clustering (k-means and Ward's method), five meaningful clusters of countries with similar levels of greenhouse gas emission were identified. Neural modeling using multi-layer perceptron networks was then performed for the countries in each group. The parameters that characterize the quality of a network are the predictive errors (mainly validation and test) and they are high (0.97–…
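The cluster-then-model pipeline described in this abstract (group countries by emission features with k-means and Ward's method, then fit a multi-layer perceptron per group) can be sketched with standard tools. The toy features, target and five-cluster setting below are assumptions for illustration only, not the paper's data or exact setup.

```python
# Sketch of a cluster-then-model pipeline: k-means and Ward clustering of
# countries on (hypothetical) emission features, then one MLP per k-means group.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 6))                                   # hypothetical emission features per country
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=28)    # hypothetical emission target

X_std = StandardScaler().fit_transform(X)

kmeans_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_std)
# Ward linkage gives an alternative grouping of the same countries for comparison.
ward_labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X_std)

# One small MLP per k-means cluster, mirroring the per-group modeling idea.
models = {}
for c in np.unique(kmeans_labels):
    mask = kmeans_labels == c
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    models[c] = mlp.fit(X_std[mask], y[mask])
```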
Geographical spread of influenza incidence in Spain during the 2009 A(H1N1) pandemic wave and the two succeeding influenza seasons
2014
SUMMARY: The aim of this study was to monitor the spatio-temporal spread of influenza incidence in Spain during the 2009 pandemic and the following two influenza seasons, 2010–2011 and 2011–2012, using a Bayesian Poisson mixed regression model, and to implement this geographical analysis model in the Spanish Influenza Surveillance System to obtain maps of influenza incidence for every week. In the pandemic wave, the maps showed influenza activity spreading from west to east. The 2010–2011 influenza epidemic wave followed a north-west/south-east pattern of spread. During the 2011–2012 season the spread of influenza was geographically heterogeneous. The most important source of variability in the m…
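As a minimal sketch of the modeling ingredient named here, a Bayesian Poisson regression of weekly case counts with a region-level random effect can be written in a few lines with PyMC. The synthetic counts, priors and simple linear trend are assumptions; the paper's actual spatio-temporal structure is not reproduced.

```python
# Minimal sketch of a Bayesian Poisson mixed regression on toy weekly counts:
# fixed intercept and week trend, plus a per-region random intercept.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_regions, n_weeks = 5, 20
region_idx = np.repeat(np.arange(n_regions), n_weeks)
week = np.tile(np.arange(n_weeks), n_regions)
counts = rng.poisson(10, size=n_regions * n_weeks)        # hypothetical weekly counts

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0.0, 2.0)
    beta_week = pm.Normal("beta_week", 0.0, 1.0)
    sigma_r = pm.HalfNormal("sigma_region", 1.0)
    region_eff = pm.Normal("region_eff", 0.0, sigma_r, shape=n_regions)  # random effects
    log_rate = intercept + beta_week * week + region_eff[region_idx]
    pm.Poisson("obs", mu=pm.math.exp(log_rate), observed=counts)
    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```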
Assessment of computational methods for the analysis of single-cell ATAC-seq data
2019
Abstract Background Recent innovations in single-cell Assay for Transposase Accessible Chromatin using sequencing (scATAC-seq) enable profiling of the epigenetic landscape of thousands of individual cells. scATAC-seq data analysis presents unique methodological challenges. scATAC-seq experiments sample DNA, which, due to low copy numbers (diploid in humans), leads to inherent data sparsity (1–10% of peaks detected per cell) compared to transcriptomic (scRNA-seq) data (10–45% of expressed genes detected per cell). Such challenges in data generation emphasize the need for informative features to assess cell heterogeneity at the chromatin level. Results We present a benchmarking framework that …
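The sparsity figures quoted in this abstract correspond to the fraction of peaks detected per cell. A minimal illustration of that quantity on a hypothetical binary cell-by-peak matrix (not the benchmark's data) might look as follows.

```python
# Illustration of per-cell sparsity on a hypothetical sparse cell-by-peak matrix.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_cells, n_peaks = 1000, 50000
counts = sparse.random(n_cells, n_peaks, density=0.05, format="csr", random_state=0)
detected = counts.copy()
detected.data[:] = 1                                     # binarize: peak detected or not

frac_detected = np.asarray(detected.sum(axis=1)).ravel() / n_peaks
print(f"median fraction of peaks detected per cell: {np.median(frac_detected):.3f}")
```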
Towards Evidence-Based Academic Advising Using Learning Analytics
2018
Academic advising is a process between the advisee, the adviser, and the academic institution that provides the degree requirements and the courses they contain. Content-wise planning and management of the student's study path, guidance on studies, and academic career support are the main joint activities of advising. The purpose of this article is to propose the use of learning analytics methods, more precisely robust clustering, for the creation of groups of actual study profiles of students. This allows academic advisers to provide evidence-based information on the study paths that students similar to the advisee have actually followed. Moreover, academic institutions can focus on management and upda…
Distributed and proximity-constrained C-means for discrete coverage control
2018
In this paper we present a novel distributed coverage control framework for a network of mobile agents, in charge of covering a finite set of points of interest (PoI), such as people in danger, geographically dispersed equipment or environmental landmarks. The proposed algorithm is inspired by C-Means, an unsupervised learning algorithm originally proposed for non-exclusive clustering and for identification of cluster centroids from a set of observations. To cope with the agents' limited sensing range and avoid infeasible coverage solutions, traditional C-Means needs to be enhanced with proximity constraints, ensuring that each agent takes into account only neighboring PoIs. The proposed co…
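As background for this abstract, the standard centralized fuzzy C-means baseline it builds on can be sketched directly; the membership weights are what makes the clustering non-exclusive. The distributed, proximity-constrained variant proposed in the paper is not reproduced here, and the toy point set is an assumption.

```python
# Sketch of standard (centralized, unconstrained) fuzzy C-means: alternate
# between weighted centroid updates and fuzzy membership updates.
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]       # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                     # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(1).normal(size=(200, 2))         # hypothetical PoI positions
centers, memberships = fuzzy_c_means(X, n_clusters=4)
```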
Multilingual Clustering of Streaming News
2018
Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever-growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art …
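A minimal sketch of the open-cluster-set idea mentioned here is to assign each incoming document vector to its nearest existing story centroid, or to open a new cluster when nothing is similar enough. The cosine threshold and the pre-computed embeddings below are assumptions; the paper's ranking model and crosslingual linking are not reproduced.

```python
# Online story clustering sketch with a growing cluster set:
# nearest-centroid assignment with a similarity threshold for opening clusters.
import numpy as np

def online_cluster(doc_vectors, threshold=0.8):
    centroids, sizes, labels = [], [], []
    for v in doc_vectors:
        v = v / np.linalg.norm(v)
        if centroids:
            sims = np.array([c @ v / np.linalg.norm(c) for c in centroids])
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                # Update the matched centroid as a running mean of its members.
                centroids[best] = (centroids[best] * sizes[best] + v) / (sizes[best] + 1)
                sizes[best] += 1
                labels.append(best)
                continue
        centroids.append(v.copy())                 # open a new story cluster
        sizes.append(1)
        labels.append(len(centroids) - 1)
    return labels

stream = np.random.default_rng(0).normal(size=(50, 128))   # hypothetical embeddings
print(online_cluster(stream))
```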
Towards Responsible AI for Financial Transactions
2020
The application of AI in finance is increasingly dependent on the principles of responsible AI. These principles (explainability, fairness, privacy, accountability, transparency and soundness) form the basis for trust in future AI systems. In this empirical study, we address the first p…
Minimal Learning Machine: Theoretical Results and Clustering-Based Reference Point Selection
2019
The Minimal Learning Machine (MLM) is a nonlinear supervised approach based on learning a linear mapping between distance matrices computed in the input and output data spaces, where distances are calculated using a subset of points called reference points. Its simple formulation has attracted several recent works on extensions and applications. In this paper, we aim to address some open questions related to the MLM. First, we detail theoretical aspects that assure the interpolation and universal approximation capabilities of the MLM, which were previously only empirically verified. Second, we identify the task of selecting reference points as having major importance for the MLM's generaliz…
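The core MLM procedure described in this abstract can be sketched directly: regress output-space distances on input-space distances over a set of reference points, then recover a prediction by multilateration against the reference outputs. Reference points are chosen at random here purely for illustration, and the toy regression problem is an assumption; the paper's reference-point selection strategies and theoretical results are not reproduced.

```python
# Sketch of the Minimal Learning Machine: learn a linear map between input- and
# output-space distance matrices, then predict by multilateration.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(300, 1))       # toy regression target

ref = rng.choice(len(X), size=30, replace=False)             # random reference points
Dx = cdist(X, X[ref])                                        # input-space distances
Dy = cdist(Y, Y[ref])                                        # output-space distances
B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)                  # distance regression

def predict(x_new):
    dy_hat = cdist(x_new[None, :], X[ref]) @ B               # estimated output distances
    # Multilateration: find y whose distances to the reference outputs match dy_hat.
    obj = lambda y: np.sum((np.linalg.norm(y - Y[ref], axis=1) - dy_hat.ravel()) ** 2)
    return minimize(obj, x0=Y[ref].mean(axis=0)).x

print(predict(np.array([1.0, 0.5])))
```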
Diffusion map for clustering fMRI spatial maps extracted by Independent Component Analysis
2013
Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). Such datasets contain n spatial maps of p voxels each, and the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix can inherently explain only a certain amount of the total variance contained in the high-dimensional data, where n is relatively small but p is large. For high-dimensional spaces, it is reasonable to perform dimensionality reduction before clustering.…
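The dimensionality-reduction step named in this abstract can be sketched as a basic diffusion-map embedding of the n-by-n similarity matrix followed by k-means, using synthetic data in place of ICA spatial maps. The kernel width, the number of retained eigenvectors and the cluster count are assumptions for illustration.

```python
# Sketch of diffusion-map-then-cluster: affinity from correlations, row-normalized
# Markov matrix, embedding with leading non-trivial eigenvectors, then k-means.
import numpy as np
from scipy.linalg import eig
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
maps = rng.normal(size=(60, 5000))                    # n spatial maps, p "voxels"

corr = np.corrcoef(maps)                              # n x n similarity matrix
affinity = np.exp(-(1 - np.abs(corr)) / 0.5)          # kernel on correlation distance
P = affinity / affinity.sum(axis=1, keepdims=True)    # Markov transition matrix

evals, evecs = eig(P)
order = np.argsort(-evals.real)
# Diffusion coordinates: eigenvectors scaled by eigenvalues, skipping the trivial one.
embedding = (evecs.real[:, order] * evals.real[order])[:, 1:4]

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
```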