Search results for "Statistics::Machine Learning"
Showing 10 of 30 documents
Varying-coefficient functional linear regression models
2008
This article considers a generalization of functional linear regression in which an additional real variable smoothly influences the functional coefficient. We thus define a varying-coefficient regression model for functional data. We propose two estimators, based respectively on conditional functional principal regression and on local penalized regression splines, and prove their pointwise consistency. Using one-day-ahead prediction of ozone concentration in the city of Toulouse, we check the ability of such nonlinear functional approaches to produce competitive estimates.
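The paper's two estimators are not reproduced here, but the underlying functional linear regression model, y_i = ∫ X_i(t) β(t) dt + ε_i, can be sketched in a few lines of numpy: discretize the curves on a grid, expand β(t) on a small basis (a plain polynomial basis below, instead of the paper's penalized splines, with a polynomial true coefficient so the basis is exact), and fit by least squares. All sizes and the noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 50                         # number of curves, grid points per curve
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
X = rng.standard_normal((n, m))         # rough, noisy discretized curves

beta_t = 1.0 - 4.0 * (t - 0.5) ** 2     # true coefficient function (a parabola)
y = X @ beta_t * dt + 0.02 * rng.standard_normal(n)  # y_i = ∫X_i(t)β(t)dt + noise

# Expand β(t) on a polynomial basis: ∫X_i(t)β(t)dt ≈ Σ_k c_k ∫X_i(t)t^k dt,
# which turns the functional model into an ordinary regression on Z.
K = 3
B = np.vander(t, K, increasing=True)    # m x K basis matrix (1, t, t²)
Z = X @ B * dt                          # n x K scalar design matrix
c, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ c

print(float(np.max(np.abs(beta_hat - beta_t))))  # pointwise recovery error
```

Since the true β(t) lies in the span of the basis, the only error is estimation noise, so the recovered coefficient function should track the parabola closely.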
The Induced Smoothed lasso: A practical framework for hypothesis testing in high dimensional regression
2020
This paper focuses on hypothesis testing in lasso regression, when one is interested in judging the statistical significance of the regression coefficients in regression equations involving many covariates. To get reliable p-values, we propose a new lasso-type estimator relying on the idea of induced smoothing, which allows us to obtain an appropriate covariance matrix and Wald statistic relatively easily. Some simulation experiments reveal that our approach exhibits good performance when contrasted with recent inferential tools in the lasso framework. Two real data analyses are presented to illustrate the proposed framework in practice.
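The induced-smoothing lasso estimator itself is not reproduced here, but the Wald machinery the paper adapts is, in ordinary least squares, just the coefficient divided by its standard error. A minimal numpy sketch for context (design sizes and coefficients are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, 0.0, -0.5])            # the second coefficient is truly null
y = X @ beta + rng.standard_normal(n)

# OLS fit and the classical covariance sigma^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - p)
se = np.sqrt(sigma2 * np.diag(XtX_inv))

# Wald z-statistics and two-sided normal p-values
z = beta_hat / se
pvals = [2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(zj) / math.sqrt(2)))) for zj in z]
print([round(pv, 4) for pv in pvals])
```

With n = 500, the two nonzero coefficients should yield essentially zero p-values, while the null coefficient's p-value behaves like a uniform draw.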
Selecting the tuning parameter in penalized Gaussian graphical models
2019
Penalized inference of Gaussian graphical models is a way to assess the conditional independence structure in multivariate problems. In this setting, the conditional independence structure, corresponding to a graph, is related to the choice of the tuning parameter, which determines the model complexity or degrees of freedom. There has been little research on the degrees of freedom for penalized Gaussian graphical models. In this paper, we propose an estimator of the degrees of freedom in ℓ1-penalized Gaussian graphical models. Specifically, we derive an estimator inspired by the generalized information criterion and propose to use this estimator as the bias term for two informatio…
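The paper's GIC-based degrees-of-freedom estimator is not in standard libraries. As a hedged point of comparison, scikit-learn's GraphicalLassoCV chooses the tuning parameter by cross-validation, and the nonzero off-diagonal entries of the estimated precision matrix give the naive complexity count. A minimal sketch (assuming scikit-learn is available; the chain-graph example is illustrative):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
p = 5
# Sparse true precision matrix: a chain graph on 5 variables
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

model = GraphicalLassoCV().fit(X)     # tuning parameter selected by CV
est = model.precision_
# Naive degrees-of-freedom count: nonzero off-diagonal entries (upper triangle)
df = int(np.count_nonzero(np.triu(est, k=1)))
print(model.alpha_, df)
```

Each candidate tuning parameter yields a different sparsity pattern, which is exactly why the choice of penalty and the degrees of freedom are tied together, as the abstract notes.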
On Limiting Fréchet ε-Subdifferentials
1998
This paper presents an ε-subdifferential calculus for nonconvex and nonsmooth functions. We extend previous work by Jofré et al. to the case where the functions are lower semicontinuous rather than locally Lipschitz.
On the Ambiguous Consequences of Omitting Variables
2015
This paper studies what happens when we move from a short regression to a long regression (or vice versa), when the long regression is shorter than the data-generation process. In the special case where the long regression equals the data-generation process, the least-squares estimators have smaller bias (in fact zero bias) but larger variances in the long regression than in the short regression. But if the long regression is also misspecified, the bias may not be smaller. We provide bias and mean squared error comparisons and study the dependence of the differences on the misspecification parameter.
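A hedged Monte Carlo sketch of the short-versus-long comparison under a three-regressor data-generation process. In this particular equicorrelated design the longer (but still misspecified) regression happens to have the smaller bias; the paper's point is that this ordering can reverse. All design values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 2000
# DGP: y = x1 + 0.8*x2 + 0.8*x3 + e, regressors equicorrelated at 0.5
R = np.full((3, 3), 0.5) + 0.5 * np.eye(3)
L = np.linalg.cholesky(R)
gamma = np.array([1.0, 0.8, 0.8])

est = {"short": [], "long": []}
for _ in range(reps):
    X = rng.standard_normal((n, 3)) @ L.T
    y = X @ gamma + rng.standard_normal(n)
    # short regression uses x1 only; long regression adds x2 but still omits x3
    est["short"].append(np.linalg.lstsq(X[:, :1], y, rcond=None)[0][0])
    est["long"].append(np.linalg.lstsq(X[:, :2], y, rcond=None)[0][0])

bias = {k: float(np.mean(v) - gamma[0]) for k, v in est.items()}
mse = {k: float(np.mean((np.asarray(v) - gamma[0]) ** 2)) for k, v in est.items()}
print(bias, mse)
```

Here the short regression's coefficient on x1 absorbs both omitted effects (bias near 0.8), while the long regression absorbs only the one remaining omitted variable.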
On the sign recovery by LASSO, thresholded LASSO and thresholded Basis Pursuit Denoising
2020
Basis Pursuit (BP), Basis Pursuit DeNoising (BPDN), and LASSO are popular methods for identifying important predictors in the high-dimensional linear regression model Y = Xβ + ε. By definition, when ε = 0, BP uniquely recovers β when the identifiability condition holds: Xβ = Xb and β ≠ b imply that the L1 norm of β is smaller than the L1 norm of b. Furthermore, LASSO can recover the sign of β only under a much stronger irrepresentability condition. Meanwhile, it is known that the model selection properties of LASSO can be improved by hard-thresholding its estimates. This article supports these findings by proving that thresholded LASSO, thresholded BPDN, and thresholded BP recover the sign of β in …
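The paper's theory is not reproduced here, but the thresholding effect is easy to see numerically. A minimal numpy sketch with a hand-rolled coordinate-descent LASSO; the problem sizes, penalty, and threshold level are all illustrative choices.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """LASSO via cyclic coordinate descent (soft-thresholding updates)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
    return beta

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # sparse signal
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_hat = lasso_cd(X, y, lam=20.0)
tau = 0.2                                  # hard-threshold level
beta_thr = np.where(np.abs(beta_hat) > tau, beta_hat, 0.0)
print(np.array_equal(np.sign(beta_thr), np.sign(beta_true)))
```

Hard-thresholding zeroes out any small spurious coefficients the raw LASSO lets through, so the thresholded estimator recovers the sign pattern even when the raw one does not.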
The Stochastic Limit of the Open BCS Model of Superconductivity
2004
We review some recent results concerning the open BCS model of superconductivity as originally proposed by Buffet and Martin. We also briefly analyze some possible generalizations.
Multi-dimensional Function Approximation and Regression Estimation
2002
In this communication, we generalize Support Vector Machines (SVM) for regression estimation and function approximation to multi-dimensional problems. We propose a multi-dimensional Support Vector Regressor (MSVR) that uses a cost function with a hyperspherical insensitive zone, capable of obtaining better predictions than using an SVM independently for each dimension. The MSVR is solved by an iterative procedure over the Karush-Kuhn-Tucker conditions. The proposed algorithm is illustrated by computer experiments.
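The proposed MSVR is not part of standard libraries, but the per-dimension baseline it is compared against, one independent ε-insensitive SVR per output, can be sketched with scikit-learn (assuming it is available; the synthetic target and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n = 300
X = rng.uniform(-1.0, 1.0, size=(n, 2))
# Two correlated output dimensions sharing a common signal
Y = np.column_stack([np.sin(np.pi * X[:, 0]),
                     np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2])
Y += 0.05 * rng.standard_normal(Y.shape)

# One independent epsilon-insensitive SVR per output dimension
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X, Y)
pred = model.predict(X)
rmse = np.sqrt(((pred - Y) ** 2).mean(axis=0))
print(rmse)   # per-dimension training RMSE
```

The MSVR's hyperspherical insensitive zone couples the output dimensions in a single optimization, which is precisely what this independent per-dimension baseline ignores.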
Limiting Carleman weights and conformally transversally anisotropic manifolds
2020
We analyze the structure of the set of limiting Carleman weights in all conformally flat manifolds, 3-manifolds, and 4-manifolds. In particular, we give a new proof of the classification of Euclidean limiting Carleman weights, and show that there are only three basic such weights up to the action of the conformal group. In dimension three we show that if the manifold is not conformally flat, there could be one or two limiting Carleman weights. We also characterize the metrics that have more than one limiting Carleman weight. In dimension four we obtain a complete spectrum of examples according to the structure of the Weyl tensor. In particular, we construct unimodular Lie groups whose …