Search results for "lasso"

Showing 10 of 110 documents

An extension of the censored Gaussian lasso estimator

2019

The conditional glasso is one of the most widely used estimators for inferring genetic networks. Despite its popularity, there are several fields of applied research where the limits of detection of modern measurement technologies make the use of this estimator theoretically unfounded, even when the assumption of a multivariate Gaussian distribution is satisfied. In this paper we propose an extension to censored data.

Censored data; Censored glasso estimator; Gaussian graphical model; glasso estimator; Settore SECS-S/01 - Statistica
researchProduct

Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs

2014

We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or of a repressing type). For the coefficient estimation steps, an additional penalty is imposed on top of the lasso penalty, similarly to Li and Li (2008). We develop a fast implementation of the new method based on coordinate descent. Furthermore, we show how the new method can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of networ…

Clustering high-dimensional data; business.industry; jel:C41; jel:C13; Machine learning; computer.software_genre; Regression; high-dimensional data; gene expression data; pathway information; penalized regression; Connection (mathematics); Set (abstract data type); Lasso (statistics); Covariate; Artificial intelligence; Sensitivity (control systems); business; Coordinate descent; Algorithm; computer; Mathematics
researchProduct
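The coordinate-descent implementation mentioned in the abstract builds on the standard lasso update; a minimal pure-Python sketch of cyclic coordinate descent for the plain lasso (without the paper's network penalty, and with hypothetical names) is:

```python
def soft_threshold(z, t):
    """Closed-form solution of min_b 0.5*(b - z)**2 + t*|b|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the lasso:
    min_beta 1/(2n) * sum_i (y_i - x_i . beta)^2 + lam * ||beta||_1.
    X is a list of rows (lists of floats); returns the coefficient list."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j excluded
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta
```

With an orthonormal design each update reduces to a single soft-thresholding of the least-squares solution; a network penalty as in Li and Li (2008) would add a quadratic term to each coordinate update.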

Sparse model-based network inference using Gaussian graphical models

2010

We consider the problem of estimating a sparse dynamic Gaussian graphical model by L1-penalized maximum likelihood estimation of a structured precision matrix. The structure can consist of specific time dynamics, known presence or absence of links in the graphical model, or equality constraints on the parameters. The model is defined on the basis of partial correlations, which results in a specific class of precision matrices. A priori, L1-penalized maximum likelihood estimation in this class is extremely difficult because of the above-mentioned constraints and the computational complexity of handling the L1 constraint alongside the usual positive-definite constraint. The implementation is non-trivial, but we sh…

Covariance Selection; Gaussian Graphical Model; Structured Correlation Matrix; Penalized likelihood; Lasso; SDPT3 Algorithm
researchProduct
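For reference, the unconstrained L1-penalized precision-matrix estimate that this structured variant builds on can be computed with scikit-learn's GraphicalLasso (a generic sketch on toy data, assuming scikit-learn is available; it does not implement the time-dynamics or equality constraints described here):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))   # toy data: 200 samples, 5 variables

# L1-penalized maximum-likelihood estimate of the precision matrix
model = GraphicalLasso(alpha=0.2).fit(X)
Theta = model.precision_            # estimated (sparse) concentration matrix

# zero off-diagonal entries correspond to absent edges in the graph
edges = [(i, j) for i in range(5) for j in range(i + 1, 5) if Theta[i, j] != 0]
```

The sparsity pattern of `Theta` encodes the conditional-independence graph: variables i and j are conditionally independent given the rest exactly when the (i, j) entry is zero.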

Thalassobacter stenotrophicus Macián et al. 2005 is a later synonym of Jannaschia cystaugens Adachi et al. 2004, with emended description of the genu…

2005

The type strains of Jannaschia cystaugens (LMG 22015T) and Thalassobacter stenotrophicus (CECT 5294T) were analysed by means of genomic DNA–DNA hybridization, comparison of 16S rRNA gene sequences and phenotypic properties determined under the same methodological conditions. J. cystaugens LMG 22015T showed DNA–DNA relatedness levels of 72 % when hybridized with the genomic DNA of T. stenotrophicus CECT 5294T. Sequence comparisons revealed that the 16S rRNA genes of the two strains had a similarity of 99·8 %. The cellular fatty acid and polar lipid compositions of the two strains and their DNA mol% G+C contents were almost identical. Bacteriochlorophyll a (Bchl a) and polyhydroxybutyrate wer…

DNA, Bacterial; Genetics; biology; Phylogenetic tree; Hydroxybutyrates; Nucleic Acid Hybridization; Genes, rRNA; Thalassobacter; Bacteriochlorophyll A; General Medicine; Ribosomal RNA; Jannaschia; biology.organism_classification; 16S ribosomal RNA; Microbiology; Bacterial Typing Techniques; genomic DNA; Phenotype; Phylogenetics; RNA, Ribosomal, 16S; Rhodobacteraceae; Phylogeny; Ecology, Evolution, Behavior and Systematics; International Journal of Systematic and Evolutionary Microbiology
researchProduct

FFC-NMR RELAXOMETRY INVESTIGATIONS APPLIED TO THE STUDY OF THE PROPERTIES OF NANOSPONGES

Molecular dynamics; FFC NMR; Settore AGR/13 - Chimica Agraria; Relaxometry; Settore CHIM/06 - Chimica Organica; Cyclodextrin; Nanosponges; Settore CHIM/02 - Chimica Fisica
researchProduct

Effects of fish feeding by snorkellers on the density and size distribution of fishes in a Mediterranean marine protected area

2005

Although there is a great deal of evidence to show that supplementary feeding by humans in terrestrial environments causes pronounced changes in the distribution and behaviour of wild animals, at present very little is known about the potential for such effects on marine fish. This study evaluated the consequences of feeding by snorkellers on fish assemblages in the no-take area of the Ustica Island marine protected area (MPA; western Mediterranean) by (1) determining if reef fish assemblage structure is affected in space and time by tourists feeding the fish; (2) assessing the effects of feeding on the abundance of the most common fish species; and (3) assessing the effects of feeding on t…

Ecology; biology; Bait ball; Coral reef fish; Thalassoma pavo; Coastal fish; Aquatic Science; biology.organism_classification; recreational ecology; tourism; fish; BACI; Mediterranean; Predation; Fishery; Predatory fish; Wrasse; Marine protected area; Ecology, Evolution, Behavior and Systematics; Marine Biology
researchProduct

Regularized extreme learning machine for regression problems

2011

Extreme learning machine (ELM) is a new learning algorithm for single-hidden-layer feedforward networks (SLFNs) proposed by Huang et al. [1]. Its main advantage is its lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This paper proposes an algorithm for pruning ELM networks using regularized regression methods, thus obtaining a suitable number of hidden nodes in the network architecture. Starting from an initially large number of hidden nodes, irrelevant nodes are pruned using ridge regression, elastic net and lasso methods; hence, the architectural design of the ELM network can be automated. Empirical studies…

Elastic net regularization; Artificial neural network; business.industry; Computer science; Cognitive Neuroscience; Feed forward; Machine learning; computer.software_genre; Regularization (mathematics); Computer Science Applications; Lasso (statistics); Artificial intelligence; business; computer; Extreme learning machine; Neurocomputing
researchProduct

Dual Extrapolation for Sparse Generalized Linear Models

2020

Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity-inducing regularizations have proven useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, o…

FOS: Computer and information sciences; Computer Science - Machine Learning; extrapolation; [MATH.MATH-OC] Mathematics [math]/Optimization and Control [math.OC]; Machine Learning (stat.ML); working sets; generalized linear models; [STAT.ML] Statistics [stat]/Machine Learning [stat.ML]; Convex optimization; screening rules; Machine Learning (cs.LG); Statistics - Machine Learning; Lasso; sparse logistic regression
researchProduct
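Screening rules of the kind mentioned above discard features whose coefficient is heuristically (or provably) zero before solving the full problem; a toy first-step strong-rule check for the lasso (illustrative only — the paper's dual-extrapolation machinery is more involved) is:

```python
def strong_rule_screen(X, y, lam, lam_max):
    """First-step strong rule for the lasso path (heuristic): with
    beta = 0 at lam_max, keep feature j only when
    |x_j^T y| / n >= 2*lam - lam_max.
    Discarded features are assumed zero and should be re-checked
    against the KKT conditions in a full implementation."""
    n, p = len(X), len(X[0])
    keep = []
    for j in range(p):
        c = abs(sum(X[i][j] * y[i] for i in range(n))) / n
        if c >= 2 * lam - lam_max:
            keep.append(j)
    return keep
```

The solver then loops only over the kept working set; any violated KKT condition among the discarded features forces them back in, so correctness is preserved while most iterations touch far fewer variables.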

Implicit differentiation for fast hyperparameter selection in non-smooth convex learning

2022

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that forward-mode differentiation of proximal gradient descent and proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…

FOS: Computer and information sciences; bilevel optimization; Computer Science - Machine Learning; hyperparameter selection; Machine Learning (stat.ML); [MATH.MATH-OC] Mathematics [math]/Optimization and Control [math.OC]; generalized linear models; Machine Learning (cs.LG); Convex optimization; Statistics - Machine Learning; [MATH.MATH-ST] Mathematics [math]/Statistics [math.ST]; Optimization and Control (math.OC); FOS: Mathematics; hyperparameter optimization; Lasso; Mathematics - Optimization and Control
researchProduct
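The forward-mode idea described above — propagating d beta / d lambda alongside each proximal-gradient iterate — can be sketched for the lasso in plain Python (a toy version with hypothetical names; the soft-threshold's derivative with respect to its threshold is minus the sign of the output on the support):

```python
def soft_threshold(z, t):
    return z - t if z > t else (z + t if z < -t else 0.0)

def ista_forward_diff(X, y, lam, step, n_iter=500):
    """Proximal gradient descent (ISTA) for the lasso
    min_beta 1/(2n) ||y - X beta||^2 + lam ||beta||_1,
    with forward-mode differentiation of the iterates: dbeta tracks
    d beta / d lam and converges to the Jacobian on the support."""
    n, p = len(X), len(X[0])
    beta, dbeta = [0.0] * p, [0.0] * p
    for _ in range(n_iter):
        resid = [sum(X[i][k] * beta[k] for k in range(p)) - y[i]
                 for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n
                for j in range(p)]
        # d grad / d lam = (X^T X / n) @ dbeta
        dgrad = [sum((sum(X[i][j] * X[i][k] for i in range(n)) / n) * dbeta[k]
                     for k in range(p))
                 for j in range(p)]
        for j in range(p):
            u = beta[j] - step * grad[j]
            du = dbeta[j] - step * dgrad[j]
            beta[j] = soft_threshold(u, step * lam)
            # chain rule through the soft-threshold: identity on the
            # support, plus -step * sign(beta_j) from the threshold itself
            if beta[j] != 0.0:
                dbeta[j] = du - step * (1.0 if beta[j] > 0 else -1.0)
            else:
                dbeta[j] = 0.0
    return beta, dbeta
```

With an orthogonal design the exact solution is a soft-threshold of the least-squares fit, so d beta_j / d lam equals minus the sign of beta_j scaled by the inverse curvature — which is what `dbeta` converges to, matching the abstract's claim that the iterate Jacobians converge to the exact Jacobian.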

Cyclic coordinate for penalized Gaussian graphical models with symmetry restriction

2014

In this paper we propose two efficient cyclic coordinate descent algorithms to estimate a structured concentration matrix in penalized Gaussian graphical models. Symmetry restrictions on the concentration matrix are particularly useful to reduce the number of parameters to be estimated and to create specific structured graphs. The penalized Gaussian graphical models are suitable for high-dimensional data.

Factorial dynamic Gaussian graphical models; Gaussian graphical models; graphical lasso; cyclic coordinate descent methods; Settore SECS-S/01 - Statistica
researchProduct