Search results for "lasso"
Showing 10 of 110 documents
An extension of the censored Gaussian lasso estimator
2019
The conditional glasso is one of the most widely used estimators for inferring genetic networks. Despite its popularity, in several fields of applied research the limits of detection of modern measurement technologies make the use of this estimator theoretically unfounded, even when the assumption of a multivariate Gaussian distribution is satisfied. In this paper we propose an extension to censored data.
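As a side note on why censoring matters here: below is a minimal numpy sketch (purely illustrative, not the authors' estimator) showing that recording values below a hypothetical limit of detection (LOD) as the LOD itself biases even a simple mean estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)  # true mean is 0

lod = -0.5                  # hypothetical lower limit of detection
x_obs = np.maximum(x, lod)  # left-censored observations

print(np.mean(x))      # close to 0
print(np.mean(x_obs))  # biased upward (~0.2 here), hence the need for
                       # estimators that model the censoring explicitly
```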
Regularized Regression Incorporating Network Information: Simultaneous Estimation of Covariate Coefficients and Connection Signs
2014
We develop an algorithm that incorporates network information into regression settings. It simultaneously estimates the covariate coefficients and the signs of the network connections (i.e. whether the connections are of an activating or a repressing type). For the coefficient estimation steps, an additional penalty is added on top of the lasso penalty, similar to Li and Li (2008). We develop a fast implementation of the new method based on coordinate descent. Furthermore, we show how the new method can be applied to time-to-event data. The new method yields good results in simulation studies concerning sensitivity and specificity of non-zero covariate coefficients, estimation of networ…
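As background for the coordinate-descent implementation mentioned above, here is a minimal sketch of plain cyclic coordinate descent for the lasso alone, without the network penalty or sign estimation the paper adds (data and parameter values are made up for illustration):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the scalar building block of lasso updates."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimize 0.5/n * ||y - X b||^2 + lam * ||b||_1 by cyclic CD."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                          # residual for b = 0
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]           # remove coordinate j's fit
            rho = X[:, j] @ r / n         # correlation with partial residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]           # restore the updated fit
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=200)

b = lasso_cd(X, y, lam=0.1)
print(np.round(b, 2))  # first three entries large, the rest exactly zero
```

The soft-thresholding step is what produces exact zeros, which is also why coordinate descent extends naturally to the composite penalties the paper uses.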
Sparse model-based network inference using Gaussian graphical models
2010
We consider the problem of estimating a sparse dynamic Gaussian graphical model by L1-penalized maximum likelihood estimation of a structured precision matrix. The structure can consist of specific time dynamics, known presence or absence of links in the graphical model, or equality constraints on the parameters. The model is defined in terms of partial correlations, which results in a specific class of precision matrices. L1-penalized maximum likelihood estimation in this class is extremely difficult because of the above-mentioned constraints and the computational complexity of the L1 constraint on top of the usual positive-definiteness constraint. The implementation is non-trivial, but we sh…
Thalassobacter stenotrophicus Macián et al. 2005 is a later synonym of Jannaschia cystaugens Adachi et al. 2004, with emended description of the genu…
2005
The type strains of Jannaschia cystaugens (LMG 22015T) and Thalassobacter stenotrophicus (CECT 5294T) were analysed by means of genomic DNA–DNA hybridization, comparison of 16S rRNA gene sequences and phenotypic properties determined under the same methodological conditions. J. cystaugens LMG 22015T showed DNA–DNA relatedness levels of 72 % when hybridized with the genomic DNA of T. stenotrophicus CECT 5294T. Sequence comparisons revealed that the 16S rRNA genes of the two strains had a similarity of 99·8 %. The cellular fatty acid and polar lipid compositions of the two strains and their DNA mol% G+C contents were almost identical. Bacteriochlorophyll a (Bchl a) and polyhydroxybutyrate wer…
FFC-NMR relaxometry investigations applied to the study of the properties of nanosponges
Effects of fish feeding by snorkellers on the density and size distribution of fishes in a Mediterranean marine protected area
2005
Although there is a great deal of evidence to show that supplementary feeding by humans in terrestrial environments causes pronounced changes in the distribution and behaviour of wild animals, at present very little is known about the potential for such effects on marine fish. This study evaluated the consequences of feeding by snorkellers on fish assemblages in the no-take area of the Ustica Island marine protected area (MPA; western Mediterranean) by (1) determining if reef fish assemblage structure is affected in space and time by tourists feeding the fish; (2) assessing the effects of feeding on the abundance of the most common fish species; and (3) assessing the effects of feeding on t…
Regularized extreme learning machine for regression problems
2011
Extreme learning machine (ELM) is a new learning algorithm for single-hidden-layer feedforward networks (SLFNs) proposed by Huang et al. [1]. Its main advantage is its low computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This paper proposes an algorithm for pruning ELM networks by using regularized regression methods, thus obtaining a suitable number of hidden nodes in the network architecture. Beginning from an initially large number of hidden nodes, irrelevant nodes are then pruned using ridge regression, elastic net and lasso methods; hence, the architectural design of the ELM network can be automated. Empirical studies…
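The pruning idea can be sketched as follows: an oversized random hidden layer whose output weights are fitted with a lasso penalty, so that redundant nodes receive exactly zero weight. This is a hedged illustration under simplified assumptions (toy data, arbitrary penalty level), not the paper's implementation:

```python
import numpy as np

def lasso_cd(H, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for 0.5/n * ||y - H w||^2 + lam * ||w||_1."""
    n, p = H.shape
    w = np.zeros(p)
    r = y.copy()
    col_sq = (H ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += H[:, j] * w[j]
            rho = H[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= H[:, j] * w[j]
    return w

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(x[:, 0]) + 0.05 * rng.normal(size=300)   # toy regression target

# ELM: a random, untrained hidden layer; only output weights are learned.
n_hidden = 100                                      # deliberately oversized
W = rng.normal(size=(1, n_hidden))
b0 = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b0)
H = (H - H.mean(axis=0)) / H.std(axis=0)            # standardize node outputs
yc = y - y.mean()

w = lasso_cd(H, yc, lam=0.05)                       # lasso zeroes out nodes
active = np.flatnonzero(w)
print(f"{len(active)} of {n_hidden} hidden nodes kept")
```

Nodes with zero output weight can simply be removed from the architecture; the paper's ridge and elastic-net variants shrink rather than (or as well as) select.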
Dual Extrapolation for Sparse Generalized Linear Models
2020
Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity inducing regularizations have proven to be useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, o…
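To illustrate what a screening rule does, here is a sketch of the (heuristic) "strong rule" screen at a single regularization level of the lasso, followed by a KKT check on the discarded variables. It is a simplified stand-in for the sequential rules and working sets discussed in the abstract:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=300):
    """Cyclic coordinate descent for 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(4)
n, p = 100, 1000
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 5.0
y = X @ beta + 0.1 * rng.normal(size=n)

corr = np.abs(X.T @ y) / n
lam_max = corr.max()              # smallest lam giving an all-zero solution
lam = 0.7 * lam_max

# Strong-rule screen for the first path point: keep only features with
# |x_j^T y| / n >= 2*lam - lam_max, and solve the lasso on the kept set.
keep = corr >= 2 * lam - lam_max
w = np.zeros(p)
w[keep] = lasso_cd(X[:, keep], y, lam)
print(f"{keep.sum()} of {p} features kept by the screen")

# KKT check: discarding feature j was safe iff |x_j^T residual| / n <= lam.
resid = y - X @ w
worst = np.max(np.abs(X[:, ~keep].T @ resid) / n)
print(worst <= lam)  # no violations on this example
```

The solver only ever touches the kept columns, which is the entire point: the screen shrinks a 1000-variable problem to a handful of candidates.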
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
2022
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. Finally, we provide a bound on the error made on the hypergradient when the inner optimization problem is solved approxim…
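The forward-mode idea can be sketched for the lasso: run ISTA (proximal gradient descent) while propagating the derivative of each iterate with respect to the regularization parameter, then compare against central finite differences. A minimal sketch assuming a small dense problem, not the paper's algorithm:

```python
import numpy as np

def ista_with_jacobian(X, y, lam, n_iter=2000):
    """ISTA for 0.5/n * ||y - X b||^2 + lam * ||b||_1, plus d beta / d lam.

    Forward-mode rule: beta+ = ST(z, s*lam) with z = beta - s * grad, so
    d(beta+)/d(lam) = 1{|z| > s*lam} * (dz/dlam - s * sign(z)).
    """
    n, p = X.shape
    s = n / np.linalg.norm(X, 2) ** 2      # step = 1 / Lipschitz constant
    beta = np.zeros(p)
    J = np.zeros(p)                        # running Jacobian d beta / d lam
    for _ in range(n_iter):
        z = beta - s * (X.T @ (X @ beta - y)) / n
        dz = J - s * (X.T @ (X @ J)) / n
        active = np.abs(z) > s * lam
        beta = np.where(active, z - s * lam * np.sign(z), 0.0)
        J = np.where(active, dz - s * np.sign(z), 0.0)
    return beta, J

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 20))
y = X @ np.r_[np.ones(3), np.zeros(17)] + 0.1 * rng.normal(size=50)

lam, eps = 0.1, 1e-4
beta, J = ista_with_jacobian(X, y, lam)
b_hi, _ = ista_with_jacobian(X, y, lam + eps)
b_lo, _ = ista_with_jacobian(X, y, lam - eps)
J_fd = (b_hi - b_lo) / (2 * eps)           # central finite differences
print(np.max(np.abs(J - J_fd)))            # forward mode matches FD
```

The Jacobian recursion is exactly the iteration differentiated through the soft-thresholding prox; the paper's contribution is making this (and its implicit-differentiation counterpart) fast and provably convergent.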
Cyclic coordinate for penalized Gaussian graphical models with symmetry restriction
2014
In this paper we propose two efficient cyclic coordinate algorithms to estimate a structured concentration matrix in penalized Gaussian graphical models. Symmetry restrictions on the concentration matrix are particularly useful for reducing the number of parameters to be estimated and for creating specific structured graphs. Penalized Gaussian graphical models are suitable for high-dimensional data.
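The parameter-reduction claim can be made concrete with a toy count (a hypothetical two-group symmetry structure, not the authors' model): tying together diagonal entries within each group and off-diagonal entries within each pair of groups leaves only as many free parameters as there are distinct entry classes:

```python
p = 6
groups = [0, 0, 0, 1, 1, 1]          # two symmetry groups of variables

# Colour each upper-triangular entry (i, j) of the concentration matrix by
# its equality class: one diagonal class per group, one off-diagonal class
# per (unordered) pair of groups.
classes = set()
for i in range(p):
    for j in range(i, p):
        if i == j:
            classes.add(("diag", groups[i]))
        else:
            gi, gj = sorted((groups[i], groups[j]))
            classes.add(("off", gi, gj))

free_unrestricted = p * (p + 1) // 2
print(free_unrestricted, "->", len(classes))  # 21 -> 5 free parameters
```

The coordinate updates then cycle over equality classes rather than individual matrix entries, which is what makes the restricted problem cheaper.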