Search results for "Dimensionality Reduction"
Showing 10 of 120 documents
Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.
2011
The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually regarded as a denoising tool. When filtering ERP recordings with an OF, however, the filter should not change the ERP's topography, and the output should still fit the linear transformation model. Moreover, an OF designed for a specific ERP source or component may remove noise, reduce the overlap of sources, and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the n…
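A minimal sketch of the filter-then-ICA idea described above, assuming multichannel ERP recordings in a NumPy array; the Butterworth band-pass below is a generic stand-in for the paper's optimal filter, and the shapes and cutoffs are placeholders.

```python
# Sketch: band-pass filtering followed by ICA on multichannel ERP data.
# The Butterworth filter is a generic stand-in for the paper's optimal
# filter (OF); channel count, length, and cutoffs are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 250.0                          # sampling rate (Hz), assumed
data = np.random.randn(32, 5000)    # placeholder: 32 channels x samples

# Linear zero-phase band-pass applied channel-wise; being linear, it does
# not alter the mixing (topography) assumed by the ICA model.
b, a = butter(4, [1.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, data, axis=1)

# ICA on the filtered recordings; the filter narrows the signal subspace,
# which is the "dimension reduction" benefit discussed in the abstract.
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(filtered.T)   # (samples, components)
mixing = ica.mixing_                       # (channels, components) topographies
```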
Detection of steering direction using EEG recordings based on sample entropy and time-frequency analysis.
2016
Monitoring a driver's intentions in advance is an ambitious aim that could have a huge impact on society by preventing traffic accidents. In this preliminary study we recorded high-resolution electroencephalography (EEG) from 5 subjects while they drove a car under real conditions, along with an accelerometer that detects the onset of steering. Two sensor-level analyses, sample entropy and time-frequency analysis, were implemented to observe the dynamics before the onset of steering. To classify the steering direction, we then applied a machine learning pipeline consisting of dimensionality reduction and classification using principal component analysis (PCA) and sup…
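A minimal sketch of the classification stage named in the abstract (PCA for dimensionality reduction, then an SVM), assuming per-trial EEG feature vectors; data shapes and labels are placeholders.

```python
# Sketch: PCA followed by an SVM to classify steering direction from
# per-trial EEG feature vectors. Shapes and labels are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.randn(120, 500)          # 120 trials x 500 EEG features (placeholder)
y = np.random.randint(0, 2, size=120)  # 0 = left, 1 = right (placeholder labels)

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),   # dimensionality reduction
                    SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```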
Sparse Manifold Clustering and Embedding to discriminate gene expression profiles of glioblastoma and meningioma tumors.
2013
The Sparse Manifold Clustering and Embedding (SMCE) algorithm has recently been proposed for simultaneous clustering and dimensionality reduction of data on nonlinear manifolds using sparse representation techniques. In this work, the SMCE algorithm is applied to the differential discrimination of glioblastoma and meningioma tumors by means of their gene expression profiles. Our purpose was to evaluate the robustness of this nonlinear manifold method for classifying gene expression profiles, which are characterized by high-dimensional representations and the low discriminative power of most genes. For this objective, we used SMCE to reduce the dimensionality of a preprocessed dataset of 35 single…
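SMCE itself is not available in common libraries, so the sketch below illustrates only the general embed-then-cluster workflow on a hypothetical gene-expression matrix, using locally linear embedding from scikit-learn as a stand-in for SMCE.

```python
# Stand-in sketch: nonlinear manifold embedding of gene expression profiles
# followed by clustering. This uses locally linear embedding, NOT SMCE;
# the data matrix is a placeholder, not the 35-sample dataset in the paper.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

X = np.random.randn(35, 2000)      # 35 samples x 2000 genes (placeholder)

embedding = LocallyLinearEmbedding(n_neighbors=8, n_components=2)
Z = embedding.fit_transform(X)     # low-dimensional coordinates on the manifold

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels)                      # putative two-class (tumor-type) split
```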
Improving clustering of Web bot and human sessions by applying Principal Component Analysis
2019
The paper addresses the problem of modeling Web sessions of bots and legitimate users (humans) as feature vectors for use as input to classification models. Many different features for discriminating bots’ and humans’ navigational patterns have been considered in session models, but very few studies have been devoted to feature selection and dimensionality reduction in the context of bot detection. We propose applying Principal Component Analysis (PCA) to develop improved session models based on predictor variables that are efficient discriminants of Web bots. The proposed models are used in session clustering, whose performance is evaluated in terms of the purity …
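A minimal sketch of the evaluation described above: PCA on session feature vectors, k-means clustering on the principal components, and cluster purity against known bot/human labels; the session features and labels are placeholders.

```python
# Sketch: PCA on Web-session feature vectors, k-means clustering on the
# principal components, and purity of the clusters against bot/human labels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 25))            # 1000 sessions x 25 session features
y = rng.integers(0, 2, size=1000)          # 0 = human, 1 = bot (placeholder)

Z = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

def purity(clusters, labels):
    """Fraction of sessions assigned to the majority class of their cluster."""
    total = 0
    for c in np.unique(clusters):
        total += np.bincount(labels[clusters == c]).max()
    return total / len(labels)

print("cluster purity:", purity(clusters, y))
```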
An efficient functional magnetic resonance imaging data reduction strategy using neighborhood preserving embedding algorithm
2021
High-dimensional data have become common in neuroimaging, especially in group-level functional magnetic resonance imaging (fMRI) datasets. fMRI connectivity analysis is a widely used, powerful technique for studying functional brain networks to probe underlying mechanisms of brain function and neuropsychological disorders. However, data-driven techniques such as independent component analysis (ICA) can yield unstable and inconsistent results, confounding the true effects of interest and hindering the understanding of brain function and connectivity. A key contributing factor to this instability is the information loss that occurs during fMRI data reduction. Data reduction of high …
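A compact sketch of neighborhood preserving embedding itself (LLE-style reconstruction weights followed by a linear projection from a generalized eigenproblem), written for a generic samples-by-features matrix; it is not the paper's fMRI pipeline, and all shapes and parameters are assumptions.

```python
# Sketch of neighborhood preserving embedding (NPE): LLE-style reconstruction
# weights, then a *linear* projection obtained from a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import NearestNeighbors

def npe(X, n_components=10, n_neighbors=12, reg=1e-3):
    n, d = X.shape
    # 1) k nearest neighbors of each sample (excluding the sample itself)
    idx = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X) \
              .kneighbors(X, return_distance=False)[:, 1:]

    # 2) reconstruction weights: x_i ~ sum_j W[i, j] * x_j, weights sum to 1
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                             # neighbors centered on x_i
        G = Z @ Z.T                                      # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)     # regularization
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, idx[i]] = w / w.sum()

    # 3) M = (I - W)^T (I - W); 4) smallest generalized eigenvectors give
    #    the linear projection that preserves the neighborhood weights.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = X.T @ M @ X
    B = X.T @ X + reg * np.eye(d)                        # keep B positive definite
    _, evecs = eigh(A, B)                                # eigenvalues ascending
    return evecs[:, :n_components]                       # (features x components)

X = np.random.randn(200, 50)                             # placeholder data
P = npe(X, n_components=10)
X_reduced = X @ P                                        # (200, 10)
```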
Making nonlinear manifold learning models interpretable: The manifold grand tour
2015
Smooth nonlinear topographic maps of the data distribution guide a Grand Tour visualisation. Linear views of the data that are most consistent with the structure captured in the maps are prioritised. The result is useful visualisations that cannot be obtained by more classical approaches. Dimensionality reduction is required to produce visualisations of high-dimensional data. In this framework, one of the most straightforward approaches to visualising high-dimensional data is to reduce complexity and apply linear projections while tumbling the projection axes in a defined sequence, which generates a Grand Tour of the data. We propose using smooth nonlinear topographic maps of the data distribution to…
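A bare-bones sketch of the Grand Tour mechanic alone, tumbling a 2-D linear projection of high-dimensional data through a sequence of random orthonormal bases; the paper's actual contribution, guiding the tour with nonlinear topographic maps, is not reproduced here.

```python
# Sketch: a bare-bones Grand Tour, i.e. a sequence of smoothly varying 2-D
# linear projections of high-dimensional data. Interpolated bases are
# re-orthonormalised per frame (an approximation to a geodesic path).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # placeholder: 500 points in 10-D

def random_basis(d, k=2):
    """Random d x k orthonormal projection basis."""
    Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
    return Q

def tour_frames(d, n_targets=5, steps_per_target=30):
    """Yield 2-D projection bases tumbling between random target bases."""
    current = random_basis(d)
    for _ in range(n_targets):
        target = random_basis(d)
        for t in np.linspace(0.0, 1.0, steps_per_target):
            interp = (1 - t) * current + t * target
            Q, _ = np.linalg.qr(interp)   # re-orthonormalise the interpolated frame
            yield Q
        current = target

for basis in tour_frames(X.shape[1]):
    proj = X @ basis                      # 2-D view of the data for this frame
    # here one would plot `proj` (e.g. with matplotlib) to animate the tour
```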
Dimensionality reduction via regression on hyperspectral infrared sounding data
2014
This paper introduces a new method for dimensionality reduction via regression (DRR). The method generalizes Principal Component Analysis (PCA) in a way that reduces the variance of the PCA scores. To do so, DRR relies on a deflationary process in which a non-linear regression reduces the redundancy between the PC scores. Unlike other nonlinear dimensionality reduction methods, DRR is easy to apply, has an out-of-sample extension, is invertible, and the learned transformation is volume-preserving. These properties make the method useful for a wide range of applications, especially for very high-dimensional data in general and for hyperspectral image processing in particular…
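A hedged sketch of the deflationary idea as the abstract describes it: compute PCA scores, then regress each score on the preceding, higher-variance scores with a nonlinear regressor and keep only the residual. This is a simplified reading, not the authors' implementation, and kernel ridge regression is an arbitrary choice of regressor.

```python
# Sketch of the deflationary idea behind DRR: each PCA score is predicted
# from the preceding (higher-variance) scores with a nonlinear regressor,
# and only the residual is kept, removing nonlinear redundancy between
# components. A simplified reading of the method, not the reference code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # placeholder high-dimensional data

scores = PCA().fit_transform(X)          # ordinary PCA scores, ordered by variance
drr_scores = scores.copy()

for i in range(1, scores.shape[1]):
    # Predict component i from components 0..i-1 and deflate it to the residual.
    reg = KernelRidge(kernel="rbf", alpha=1.0)
    reg.fit(scores[:, :i], scores[:, i])
    drr_scores[:, i] = scores[:, i] - reg.predict(scores[:, :i])

# The variance of each deflated score is no larger than that of the PCA score.
print(np.var(scores, axis=0)[:5])
print(np.var(drr_scores, axis=0)[:5])
```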
The impact of sample reduction on PCA-based feature extraction for supervised learning
2006
"The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic raise of computational complexity and classification error in high dimensions. In this paper, different feature extraction (FE) techniques are analyzed as means of dimensionality reduction, and constructive induction with respect to the performance of Naive Bayes classifier. When a data set contains a large number of instances, some sampling approach is applied to address the computational complexity of FE and classification processes. The main goal of this paper is to show the impact of sample reduction on the process of FE for supervised learning. In our study we analyzed the conventional PC…
Feature Extraction and Selection for Pain Recognition Using Peripheral Physiological Signals.
2019
In pattern recognition, the selection of appropriate features is paramount to both the performance and the robustness of the system. Over-reliance on machine learning-based feature selection methods can therefore be problematic, especially when conducted using small snapshots of data. The results of these methods, if adopted without proper interpretation, can lead to sub-optimal system design or, worse, the abandonment of otherwise viable and important features. In this work, a deep exploration of pain-based emotion classification was conducted to better understand differences in the results of the related literature. In total, 155 different time-domain and frequency-domain features were e…
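A hedged sketch of the generic workflow: a few time- and frequency-domain features computed from windows of a peripheral physiological signal, then ranked with a simple univariate selector. The sampling rate, window length, and feature list are assumptions, not the paper's 155-feature set.

```python
# Sketch: time- and frequency-domain features from windows of a peripheral
# physiological signal, followed by univariate feature selection.
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import SelectKBest, f_classif

fs = 256                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 10 * fs))  # 200 ten-second windows (placeholder)
labels = rng.integers(0, 2, size=200)      # 0 = no pain, 1 = pain (placeholder)

def extract_features(w):
    f, pxx = welch(w, fs=fs, nperseg=fs)
    return np.array([
        w.mean(),                           # time domain: mean level
        w.std(),                            # time domain: variability
        np.abs(np.diff(w)).mean(),          # time domain: mean absolute difference
        pxx[(f >= 0.1) & (f < 4)].sum(),    # frequency domain: low-band power
        pxx[(f >= 4) & (f < 15)].sum(),     # frequency domain: mid-band power
    ])

X = np.vstack([extract_features(w) for w in windows])
selector = SelectKBest(f_classif, k=3).fit(X, labels)
print("selected feature indices:", selector.get_support(indices=True))
```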
Improved Statistically Based Retrievals via Spatial-Spectral Data Compression for IASI Data
2019
In this paper, we analyze the effect of spatial and spectral compression on the performance of statistically based retrieval. Although the quality of the information is not completely preserved during the coding process, experiments reveal that a certain amount of compression may have a positive impact on the accuracy of retrievals. We unveil two strategies, both with interesting benefits: either apply very high compression, which still maintains the same retrieval performance as that obtained for uncompressed data, or apply moderate to high compression, which improves the performance. As a second contribution of this paper, we focus on the origins of these benefits. On the one…
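A hedged sketch of how spectral compression can interact with a statistically based retrieval: truncated PCA stands in for the coding scheme, spectra are reconstructed at several compression levels, and a linear retrieval is evaluated on each; the data are synthetic placeholders, not IASI radiances.

```python
# Sketch: effect of spectral compression on a statistically based retrieval.
# PCA truncation stands in for the coding scheme; a ridge regression maps
# (reconstructed) radiances to a geophysical variable. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 300
state = rng.normal(size=n_samples)                     # e.g. a temperature value
spectra = (np.outer(state, rng.normal(size=n_channels))
           + 0.5 * rng.normal(size=(n_samples, n_channels)))   # signal + noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, state, random_state=0)

for k in (300, 50, 10, 3):                             # retained components
    if k < n_channels:
        pca = PCA(n_components=k).fit(X_tr)
        tr = pca.inverse_transform(pca.transform(X_tr))  # lossy reconstruction
        te = pca.inverse_transform(pca.transform(X_te))
    else:
        tr, te = X_tr, X_te                             # no compression
    rmse = np.sqrt(np.mean((Ridge(alpha=1.0).fit(tr, y_tr).predict(te) - y_te) ** 2))
    print(f"components kept: {k:3d}  retrieval RMSE: {rmse:.3f}")
```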