Search results for "Dimensionality Reduction"

Showing 10 of 120 documents

Variable Selection in Predictive MIDAS Models

2014

In short-term forecasting, it is essential to take into account all available information on the current state of economic activity. Yet the fact that various time series are sampled at different frequencies prevents an efficient use of the available data. In this respect, the Mixed-Data Sampling (MIDAS) model has proved to outperform existing tools by combining data series of different frequencies. However, major issues remain regarding the choice of explanatory variables. The paper first addresses this point by developing MIDAS-based dimension reduction techniques and by introducing two novel approaches based on either a method of penalized variable selection or Bayesian stochastic searc…
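The penalized-selection idea can be sketched in a few lines. This is a minimal illustration, not the paper's estimator: it assumes an unrestricted MIDAS-style design matrix of monthly lags and uses a LASSO penalty (one common choice of penalized variable selection) on synthetic data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical setup: forecast a quarterly target from 12 monthly lags
# of 5 candidate indicators (unrestricted MIDAS-style regression).
n_quarters, n_indicators, n_lags = 200, 5, 12
X = rng.standard_normal((n_quarters, n_indicators * n_lags))
# Only the first indicator's three most recent lags actually matter.
beta = np.zeros(n_indicators * n_lags)
beta[:3] = [0.8, 0.5, 0.3]
y = X @ beta + 0.1 * rng.standard_normal(n_quarters)

# The L1 penalty shrinks irrelevant lag coefficients to exactly zero,
# performing variable selection among the high-frequency regressors.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)
```

In practice a MIDAS specification would also impose a lag-polynomial weighting; the sketch only shows why an L1 penalty addresses the variable-choice problem the abstract raises.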

SSRN Electronic Journal

Assessment of computational methods for the analysis of single-cell ATAC-seq data

2019

Background: Recent innovations in the single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) enable profiling of the epigenetic landscape of thousands of individual cells. scATAC-seq data analysis presents unique methodological challenges. scATAC-seq experiments sample DNA, which, due to low copy numbers (diploid in humans), leads to inherent data sparsity (1–10% of peaks detected per cell) compared to transcriptomic (scRNA-seq) data (10–45% of expressed genes detected per cell). Such challenges in data generation emphasize the need for informative features to assess cell heterogeneity at the chromatin level. Results: We present a benchmarking framework that …
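The sparsity figures quoted above are easy to reproduce on a synthetic peak-by-cell matrix. This is an illustrative sketch (the numbers and matrix shape are assumptions, not the paper's data), showing how the per-cell detection rate is computed from a sparse binary accessibility matrix.

```python
import numpy as np
from scipy import sparse

# Hypothetical peak-by-cell matrix: binary accessibility calls with
# ~5% of peaks detected per cell, in the 1-10% range cited above.
n_peaks, n_cells = 100_000, 500
counts = sparse.random(n_peaks, n_cells, density=0.05,
                       random_state=1, data_rvs=lambda k: np.ones(k))
counts = counts.tocsc()

# Fraction of peaks detected in each cell (nonzero entries per column).
detected = counts.getnnz(axis=0) / n_peaks
print(f"median detection rate: {np.median(detected):.3f}")
```

Storing such matrices in a sparse format is what makes featurization and dimensionality reduction tractable at this scale.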


Fair Kernel Learning

2017

New social and economic activities massively exploit big data and machine learning algorithms to do inference on people's lives. Applications include automatic curricula evaluation, wage determination, and risk assessment for credits and loans. Recently, many governments and institutions have raised concerns about the lack of fairness, equity and ethics in machine learning when treating these problems. It has been shown that excluding sensitive features that bias fairness, such as gender or race, is not enough to mitigate discrimination when other correlated features are included. Instead, incorporating fairness directly into the objective function has been shown to be more effective.
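The idea of putting fairness in the objective, rather than dropping sensitive features, can be sketched with a ridge regression carrying an extra penalty on the covariance between predictions and the sensitive attribute. This is an illustrative formulation under assumed notation, not the paper's kernel method; `fair_ridge`, `lam` and `mu` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(2)

def fair_ridge(X, y, s, lam=1.0, mu=0.0):
    """Ridge regression with an illustrative fairness penalty
    mu * (sum of centered s times prediction)^2 added to the loss."""
    sc = (s - s.mean())[:, None]          # centered sensitive attribute
    v = X.T @ sc                          # direction driving the covariance
    A = X.T @ X + lam * np.eye(X.shape[1]) + mu * (v @ v.T)
    return np.linalg.solve(A, X.T @ y)

# Toy data: a proxy feature correlated with the sensitive attribute predicts y.
n = 1000
s = rng.integers(0, 2, n).astype(float)          # sensitive attribute
x1 = s + 0.3 * rng.standard_normal(n)            # proxy feature
x2 = rng.standard_normal(n)                      # legitimate feature
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.standard_normal(n)

w_plain = fair_ridge(X, y, s, mu=0.0)
w_fair = fair_ridge(X, y, s, mu=100.0)
cov = lambda w: abs(np.cov(X @ w, s)[0, 1])
print(cov(w_plain), cov(w_fair))   # the penalty shrinks the dependence
```

Note that the plain model leans on the proxy feature even though `s` itself is not an input, which is exactly the failure mode the abstract describes.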


Functional Linear Regression

2018

This article presents a selected bibliography on functional linear regression (FLR) and highlights the key contributions from both applied and theoretical points of view. It first defines FLR in the case of a scalar response and shows how the model can also be extended to the case of a functional response. It then considers two kinds of estimation procedures for the slope parameter: projection-based estimators, in which regularization is performed through dimension reduction, such as functional principal component regression, and penalized least squares estimators, obtained by solving a penalized least squares minimization problem. The article proceeds by discussing the main asympt…
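The projection-based route mentioned above (functional principal component regression) can be sketched on discretized curves: project onto the leading eigenfunctions, then run ordinary least squares on the component scores. The data below are synthetic and the two-component truth is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic functional data: n curves observed on a common grid of p points.
n, p = 300, 100
t = np.linspace(0, 1, p)
scores_true = rng.standard_normal((n, 2))
curves = (scores_true[:, [0]] * np.sin(2 * np.pi * t)
          + scores_true[:, [1]] * np.cos(2 * np.pi * t)
          + 0.05 * rng.standard_normal((n, p)))
y = 2.0 * scores_true[:, 0] - scores_true[:, 1] + 0.1 * rng.standard_normal(n)

# Functional PCR: dimension reduction via the leading principal components,
# then least squares on the scores instead of the raw (ill-posed) curves.
Xc = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                              # number of retained components
scores = Xc @ Vt[:k].T             # functional PC scores
design = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ coef
r2 = 1 - np.var(y - pred) / np.var(y)
print(f"R^2 = {r2:.3f}")
```

Regressing on a handful of scores is what regularizes the problem: the slope function lives in an infinite-dimensional space, so the projection acts as the dimension-reduction step the abstract describes.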


Nonlinearities and Adaptation of Color Vision from Sequential Principal Curves Analysis

2016

Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and adaptive to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that do consider both phenomena simultaneously follow parametric formulations based on empirical models. Therefore, it may be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, the whole statistical analysis is often based on simplified databases that disregard relevant physical effects in the input signal, as, for instance…
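To make the "nonlinearity derived directly from statistics, with no parametric form imposed" point concrete: the following is not the paper's Sequential Principal Curves Analysis, but a much simpler nonparametric illustration of the same principle — histogram equalization, where the response function is just the empirical CDF of the stimulus environment.

```python
import numpy as np

rng = np.random.default_rng(9)

# A skewed "environment" of stimulus intensities (assumed, for illustration).
stimuli = rng.gamma(shape=2.0, scale=1.0, size=10_000)
xs = np.sort(stimuli)

def response(x):
    # Empirical CDF as the nonlinearity: densely populated stimulus ranges
    # get more response resolution, i.e. adaptation to the environment,
    # with no functional form assumed in advance.
    return np.searchsorted(xs, x) / len(xs)

out = response(stimuli)
print(out.min(), out.max())   # responses spread nearly uniformly over [0, 1)
```

Change the environment and the nonlinearity changes with it, which is the sense in which such a response is both nonlinear and adaptive by construction.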


Diffusion map for clustering fMRI spatial maps extracted by Independent Component Analysis

2013

Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). A dataset consists of n spatial maps, each containing p voxels; the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix inherently can explain only a certain amount of the total variance contained in the high-dimensional data, where n is relatively small but p is large. For high-dimensional space, it is reasonable to perform dimensionality reduction before clustering.…
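A minimal diffusion-map sketch of that pipeline, on synthetic stand-ins for the spatial maps (the group structure, noise level, and one-coordinate clustering step are assumptions for illustration, not the paper's data or settings):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)

# Stand-in for n spatial maps with p voxels each (p >> n): two groups of
# maps that share underlying spatial patterns.
n, p = 60, 5000
base = rng.standard_normal((2, p))
labels = np.repeat([0, 1], n // 2)
maps = base[labels] + 0.5 * rng.standard_normal((n, p))

# Diffusion map: Gaussian kernel on pairwise distances, normalized to a
# Markov transition matrix, then a spectral embedding by its eigenvectors.
D = squareform(pdist(maps))
eps = np.median(D) ** 2
K = np.exp(-D**2 / eps)
d = K.sum(axis=1)
# Symmetrized form D^{-1/2} K D^{-1/2}, so a symmetric eigensolver applies.
A = K / np.sqrt(np.outer(d, d))
vals, vecs = eigh(A)
embedding = vecs[:, -2] / np.sqrt(d)   # first nontrivial diffusion coordinate

# Cluster in the 1-D diffusion space by thresholding at the median.
pred = (embedding > np.median(embedding)).astype(int)
agreement = max(np.mean(pred == labels), np.mean(pred != labels))
print(f"cluster agreement: {agreement:.2f}")
```

The point of the embedding is exactly the one the abstract makes: with n small and p large, clustering in a low-dimensional diffusion space is better conditioned than clustering the raw p-dimensional maps.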


Principal Polynomial Analysis

2014

© 2014 World Scientific Publishing Company. This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained…
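The "curves instead of straight lines via univariate regressions" idea can be sketched in one step: take the leading PCA direction, then model the orthogonal residual as a polynomial in the leading score rather than assuming it is zero, as PCA does. This is a hedged one-step sketch of the idea, not the full sequential PPA algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Data on a curved (parabolic) 1-D manifold in 2-D, plus noise.
t = rng.uniform(-2, 2, 500)
X = np.column_stack([t, 0.5 * t**2]) + 0.05 * rng.standard_normal((500, 2))

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
s1 = Xc @ Vt[0]                    # score along the first principal direction
r = Xc @ Vt[1]                     # orthogonal residual
# The PPA-style step is a simple univariate regression: residual vs. score.
poly = np.polynomial.Polynomial.fit(s1, r, deg=2)

pca_err = np.mean(r**2)                    # PCA keeps only the straight line
ppa_err = np.mean((r - poly(s1))**2)       # the fitted curve bends with the data
print(pca_err, ppa_err)   # the curve captures variance the line misses
```

Because the fitted polynomial is an explicit function of the leading score, the forward map stays easy to invert, which is the analytical property the abstract highlights.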

International Journal of Neural Systems

Asymptotic and bootstrap tests for subspace dimension

2022

Most linear dimension reduction methods proposed in the literature can be formulated using an appropriate pair of scatter matrices, see e.g. Ye and Weiss (2003), Tyler et al. (2009), Bura and Yang (2011), Liski et al. (2014) and Luo and Li (2016). The eigen-decomposition of one scatter matrix with respect to another is then often used to determine the dimension of the signal subspace and to separate signal and noise parts of the data. Three popular dimension reduction methods, namely principal component analysis (PCA), fourth order blind identification (FOBI) and sliced inverse regression (SIR) are considered in detail and the first two moments of subsets of the eigenvalues are used to test…
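The two-scatter-matrix idea can be sketched with the FOBI pair: the ordinary covariance and a kurtosis-weighted fourth-moment scatter. On the Gaussian noise subspace their generalized eigenvalues equal the constant p + 2, so eigenvalues away from that constant flag the signal dimension. The data below are synthetic and the threshold is an illustrative assumption, not the paper's test statistic.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)

# q-dimensional non-Gaussian signal mixed into p-dimensional Gaussian noise.
n, p, q = 5000, 6, 2
signal = rng.uniform(-1, 1, (n, q))          # sub-Gaussian (flat) components
noise = rng.standard_normal((n, p - q))
X = np.hstack([3 * signal, noise]) @ rng.standard_normal((p, p))

# Whiten with the first scatter (the covariance) ...
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n
Y = np.linalg.solve(np.linalg.cholesky(cov), Xc.T).T
# ... then eigen-decompose the second scatter (fourth-moment matrix).
r2 = np.sum(Y**2, axis=1)
cov4 = (Y * r2[:, None]).T @ Y / n
vals = np.sort(eigh(cov4, eigvals_only=True))
print(vals)   # p - q values near p + 2 = 8; q below it (sub-Gaussian signal)
```

Counting how many eigenvalues deviate from p + 2 is the informal version of the subspace-dimension tests the abstract builds from the first two moments of these eigenvalue subsets.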


Dimensionality Reduction via Regression in Hyperspectral Imagery

2015

This paper introduces a new unsupervised method for dimensionality reduction via regression (DRR). The algorithm belongs to the family of invertible transforms that generalize Principal Component Analysis (PCA) by using curvilinear instead of linear features. DRR identifies the nonlinear features through multivariate regression so as to reduce the redundancy between the PCA coefficients, the variance of the scores, and the reconstruction error. More importantly, unlike other nonlinear dimensionality reduction methods, its invertibility, volume preservation, and straightforward out-of-sample extension make DRR interpretable and easy to apply. The pro…
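The "regression on the PCA coefficients" step can be sketched as follows: after PCA, predict each trailing score from the leading ones and keep only the residual, which shrinks the variance left in the reduced-away dimensions. This is an illustrative one-dimensional sketch under assumed settings (a polynomial regressor, synthetic curved data), not the published DRR algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

# Data on a curved surface: the second coordinate depends nonlinearly on
# the first, so PCA leaves predictable structure in its trailing scores.
t = rng.uniform(-3, 3, 1000)
X = np.column_stack([t, np.sin(t), 0.05 * rng.standard_normal(1000)])

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
S = Xc @ Vt.T                      # PCA scores, ordered by variance

# DRR-style step: regress the second score on a polynomial in the first
# and keep only the residual, removing the redundancy between coefficients.
basis = np.column_stack([S[:, 0]**k for k in range(6)])
coef, *_ = np.linalg.lstsq(basis, S[:, 1], rcond=None)
residual = S[:, 1] - basis @ coef
print(np.var(S[:, 1]), np.var(residual))  # residual variance is much smaller
```

Since the regression is a deterministic function of the retained score, the transform stays invertible: adding the prediction back recovers the original coefficient, which is what makes this family of methods easy to invert and extend out-of-sample.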


Combining feature extraction and expansion to improve classification based similarity learning

2017

Metric learning has been shown to outperform standard classification-based similarity learning in a number of different contexts. In this paper, we show that the performance of classification similarity learning strongly depends on the data format used to learn the model. We then present an Enriched Classification Similarity Learning method that follows a hybrid approach combining both feature extraction and feature expansion. In particular, we propose a data transformation and the use of a set of standard distances to supplement the information provided by the feature vectors of the training samples. The method is compared to state-of-the-art feature extraction and metric lear…
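The feature-expansion step described above can be sketched generically: supplement each sample's feature vector with a set of standard distances to a few reference points, then hand the enriched representation to any classifier. The anchor choice and the particular distances below are illustrative assumptions, not the paper's exact transformation.

```python
import numpy as np

rng = np.random.default_rng(8)

X = rng.standard_normal((100, 5))
anchors = X[:3]                    # illustrative reference samples

def expand(X, anchors):
    # Euclidean, Manhattan, and cosine distances to each anchor are
    # appended to the original features (feature expansion, not replacement).
    euclid = np.linalg.norm(X[:, None] - anchors[None], axis=2)
    manhattan = np.abs(X[:, None] - anchors[None]).sum(axis=2)
    cosine = 1 - (X @ anchors.T) / (
        np.linalg.norm(X, axis=1, keepdims=True)
        * np.linalg.norm(anchors, axis=1))
    return np.hstack([X, euclid, manhattan, cosine])

Z = expand(X, anchors)
print(Z.shape)   # 5 original columns + 3 distances x 3 anchors = 14
```

Keeping the original columns alongside the distance columns is the "hybrid" aspect: extraction-style summaries and expansion-style features coexist in one representation.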

Pattern Recognition Letters