Search results for "Models"

Showing 10 of 4240 documents

Modeling Posidonia oceanica growth data: from linear to generalized linear mixed models

2010

The statistical analysis of annual growth of Posidonia oceanica is traditionally carried out through Gaussian linear models applied to untransformed, or log-transformed, data. In this paper, we claim that there are good reasons for reconsidering this established practice, since real data on annual growth often violate the assumptions of Gaussian linear models, and we show that the class of Generalized Linear Models (GLMs) represents a useful alternative for handling such violations. By analyzing Sicily PosiData-1, a dataset of P. oceanica growth measurements gathered in the period 2000–2002 along the coasts of Sicily, we find that in the majority of cases Normality is rejected and the effect of …
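
A minimal sketch of the modelling choice the abstract contrasts, assuming statsmodels and simulated data (the covariate, parameters and data below are illustrative, not Sicily PosiData-1): a Gaussian linear model on log-transformed growth versus a Gamma GLM with log link fitted to the raw data.

```python
# Illustrative comparison, not the paper's analysis: Gaussian OLS on
# log-transformed growth vs a Gamma GLM with log link on the raw data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
depth = rng.uniform(5, 30, n)                  # hypothetical covariate
mu = np.exp(2.0 - 0.03 * depth)                # true mean annual growth
growth = rng.gamma(shape=5.0, scale=mu / 5.0)  # positive, right-skewed

X = sm.add_constant(depth)

# traditional practice: Gaussian linear model on log-transformed data
ols_log = sm.OLS(np.log(growth), X).fit()

# alternative the abstract advocates: Gamma GLM, modelling growth directly
glm_gamma = sm.GLM(growth, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(ols_log.params)      # coefficients on the log scale
print(glm_gamma.params)    # log-link coefficients for the raw outcome
```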

Posidonia oceanica; annual growth; Generalized Linear Models; Generalized Linear Mixed Models; lepidochronological data; Hierarchical generalized linear model; Gaussian; Linear model; Gamma distribution; Normality; Econometrics; Ecological Modeling; Statistics; Statistics and Probability; Mathematics; Settore BIO/07 - Ecologia; Settore SECS-S/01 - Statistica
researchProduct

Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models

2013

Summary: Sparsity is an essential feature of many contemporary data problems. Remote sensing, various forms of automated screening and other high-throughput measurement devices collect a large amount of information, typically about relatively few independent statistical subjects or units. In certain cases it is reasonable to assume that the underlying process generating the data is itself sparse, in the sense that only a few of the measured variables are involved in the process. We propose an explicit method of monotonically decreasing sparsity for outcomes that can be modelled by an exponential family. In our approach we generalize the equiangular condition in a generalized linear model. Although the …
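
The paper generalizes the equiangular condition of least angle regression from the Gaussian linear model to exponential families. scikit-learn exposes only the classical Gaussian special case; a sketch of that baseline path on simulated sparse data (the generalized, differential geometric path itself is not reproduced here):

```python
# Classical (Gaussian) least angle regression path; the paper replaces
# the correlation-based equiangular condition used here with its
# exponential-family analogue. Data are simulated.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]        # sparse truth: three active variables
y = X @ beta + 0.5 * rng.standard_normal(n)

# alphas: correlation with the residual as variables join the active set;
# coefs: piecewise-linear coefficient path, one column per step
alphas, active, coefs = lars_path(X, y, method="lar")
print("order of entry:", active[:5])
```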

Generalized linear models; sparse models; variable selection; least angle regression; Lasso; Dantzig selector; shrinkage; coordinate descent; path following algorithm; exponential family; Fisher information; differential geometry; information geometry; generalized degrees of freedom; covariance penalty theory; simple linear regression; mathematical optimization; algorithm; Statistics and Probability; Statistics, Probability and Uncertainty; Settore SECS-S/01 - Statistica; Mathematics
researchProduct

Adaptive linear rank tests for eQTL studies

2012

Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literatur…
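
A hedged sketch of the two-stage idea the abstract describes, assuming scipy: estimate skewness and tail weight from the pooled, median-aligned sample, then select a rank test accordingly. The selector thresholds and the particular score choices below are illustrative stand-ins, not the tests compared in the paper.

```python
# Two-stage adaptive test sketch: stage 1 estimates distribution shape,
# stage 2 selects one of several tests. Thresholds are illustrative.
import numpy as np
from scipy import stats

def adaptive_rank_test(groups):
    pooled = np.concatenate([g - np.median(g) for g in groups])
    skew = stats.skew(pooled)
    tails = stats.kurtosis(pooled)      # crude tail-length proxy
    if abs(skew) > 1.0:
        # strongly skewed: median (Brown-Mood) scores are robust
        stat, p, *_ = stats.median_test(*groups)
        return "median", p
    if tails > 2.0:
        # heavy tails: Kruskal-Wallis (Wilcoxon scores)
        stat, p = stats.kruskal(*groups)
        return "kruskal", p
    # near-normal: classical one-way ANOVA
    stat, p = stats.f_oneway(*groups)
    return "anova", p

rng = np.random.default_rng(1)
# three genotype groups with skewed (non-normal) expression values
g = [rng.lognormal(m, 0.8, 50) for m in (0.0, 0.2, 0.4)]
print(adaptive_rank_test(g))
```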

Expression quantitative trait loci; Quantitative Trait Loci; Gene Expression; Polymorphism, Single Nucleotide; Genetic Research; skewness; range (statistics); analysis of variance; linear models; computerized adaptive testing; Monte Carlo method; algorithm; Models, Statistical; Humans; Epidemiology; Computer science; Statistics; Statistics and Probability; Statistics in Medicine
researchProduct

A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance

2018

Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in adverse directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all-cause hazard ratio, which gives equal weight to all events irrespective of their type …
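
A numpy-only toy of the weighting idea, under strong simplifying assumptions: with exponential event times the cause-specific hazard ratio reduces to a rate ratio, and the combined measure below averages cause-specific log rate ratios with clinician-chosen relevance weights. The estimator in the paper differs; the weights here are hypothetical.

```python
# Relevance-weighted combined effect for a two-component composite
# endpoint, under an exponential working model. Purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
arm = rng.integers(0, 2, n)                 # 0 = control, 1 = treatment
# cause-specific rates: death (type 1) and hospitalisation (type 2)
rate_death = np.where(arm == 1, 0.05, 0.08)
rate_hosp = np.where(arm == 1, 0.20, 0.22)
t_death = rng.exponential(1 / rate_death)
t_hosp = rng.exponential(1 / rate_hosp)
time = np.minimum(t_death, t_hosp)          # time to first event
cause = np.where(t_death <= t_hosp, 1, 2)

def log_rate_ratio(k):
    """Exponential-model cause-specific log hazard ratio, treated vs control."""
    trt, ctl = arm == 1, arm == 0
    rr = ((cause[trt] == k).sum() / time[trt].sum()) / \
         ((cause[ctl] == k).sum() / time[ctl].sum())
    return np.log(rr)

w = {1: 0.8, 2: 0.2}                        # hypothetical relevance weights
combined = sum(w[k] * log_rate_ratio(k) for k in (1, 2))
print("weighted combined log-effect:", round(combined, 3))
```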

composite endpoint; time-to-event; relevance weighting; weighting; clinical trials; hazard ratio; win ratio; prioritized outcomes; multistate models; inference; Proportional Hazards Models; Endpoint Determination; resampling; parametric statistics; null hypothesis; Monte Carlo method; Computer Simulation; Data Interpretation, Statistical; Humans; Epidemiology; Mathematics; Statistics; Statistics and Probability; Statistics in Medicine
researchProduct

Statistics in Education

2015

During the last few decades, educational systems have attracted a great deal of interest because they are closely related to economic and social systems. For example, 'higher education has been affected by a number of changes, including higher rates of participation, internationalization, the growing importance of knowledge-led economies and increased global competition' (Bologna Process, 1999). There is a worldwide need to incorporate into the language of education new words and concepts, such as assessment, evaluation, accountability, student performance, mobility and competitiveness, as part of a new governance system.

education; statistical models; indicators; higher education; educational systems; social system; mathematics education; Sociology; Statistics and Probability; Statistics, Probability and Uncertainty; Settore SECS-S/05 - Statistica Sociale; Journal of Applied Statistics
researchProduct

Bayesian analysis of a disability model for lung cancer survival

2016

Bayesian reasoning, survival analysis and multi-state models are used to assess survival times for Stage IV non-small-cell lung cancer patients and the evolution of the disease over time. Bayesian estimation is done using minimum informative priors for the Weibull regression survival model, leading to an automatic inferential procedure. Markov chain Monte Carlo methods have been used for approximating posterior distributions and the Bayesian information criterion has been considered for covariate selection. In particular, the posterior distribution of the transition probabilities, resulting from the multi-state model, constitutes a very interesting tool which could be useful to help oncolog…
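
A minimal sketch of the Bayesian Weibull survival regression the abstract mentions, fitted here with a hand-rolled random-walk Metropolis sampler. The vague normal priors are stand-ins for the paper's minimum informative priors, and the data, covariate and tuning constants are simulated assumptions.

```python
# Bayesian Weibull regression for right-censored survival times,
# sampled with random-walk Metropolis. Illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.integers(0, 2, n)                   # e.g. a binary covariate
shape_true, b0, b1 = 1.3, 0.5, -0.7
scale = np.exp(b0 + b1 * x)
t = scale * rng.weibull(shape_true, n)
c = rng.exponential(2.0, n)                 # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

def log_post(theta):
    log_k, beta0, beta1 = theta
    k = np.exp(log_k)
    lam = np.exp(beta0 + beta1 * x)
    z = time / lam
    # Weibull log-likelihood with right censoring:
    # events contribute log f(t), censored cases contribute log S(t)
    ll = np.sum(event * (np.log(k / lam) + (k - 1) * np.log(z)) - z**k)
    # vague normal priors as stand-ins for minimum informative priors
    return ll - 0.5 * np.sum(np.asarray(theta) ** 2 / 100.0)

theta, lp, draws = np.zeros(3), log_post(np.zeros(3)), []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept step
        theta, lp = prop, lp_prop
    draws.append(theta)
print("posterior means (log-shape, b0, b1):", np.mean(draws[1000:], axis=0))
```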

Bayesian inference; Bayes' theorem; Bayes estimator; Bayesian probability; prior probability; posterior probability; minimum informative prior; Bayesian information criterion; Markov chain Monte Carlo; Markov chains; Monte Carlo method; Weibull distribution; regression analysis; survival analysis; accelerated failure time models; multi-state models; Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Neoplasm Staging; Humans; Biostatistics; Biology and Biomedicine; Health Information Management; Epidemiology; Computer science; Mathematics; Statistics; Statistics and Probability
researchProduct

Sparse kernel methods for high-dimensional survival data

2008

Abstract Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques however are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be ‘kernelized’. The kernelized proportional hazards model however yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, dependin…
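
The 'kernelization' the abstract describes can be sketched directly: parameterize the risk score as f = K @ alpha, so the partial likelihood touches covariates only through the Gram matrix K. This dense, ridge-penalized variant deliberately omits the paper's sparsity mechanism; data and the linear kernel are illustrative assumptions.

```python
# Kernelized Cox partial likelihood (dense, ridge-penalized variant).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 120, 500                             # high-dimensional setting
X = rng.standard_normal((n, p))
risk = X[:, 0] - X[:, 1]
t = rng.exponential(np.exp(-risk))
c = rng.exponential(2.0, n)
time, event = np.minimum(t, c), t <= c

K = X @ X.T                                 # linear kernel Gram matrix
order = np.argsort(time)                    # ascending times -> easy risk sets
K, event = K[order][:, order], event[order]

def neg_log_partial_lik(alpha, lam=1.0):
    f = K @ alpha                           # risk scores via inner products only
    # log of the risk-set sum {j: t_j >= t_i}: reverse cumulative logsumexp
    m = np.max(f)
    log_risk = np.log(np.cumsum(np.exp(f - m)[::-1])[::-1]) + m
    return -np.sum((f - log_risk)[event]) + lam * alpha @ K @ alpha

res = minimize(neg_log_partial_lik, np.zeros(n), method="L-BFGS-B")
print("converged:", res.success)
```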

Support vector machine; kernel method; margin (machine learning); Proportional Hazards Models; covariate; training set; Gene Expression Profiling; Computational Biology; Bioinformatics; Lung Neoplasms; Lymphoma; Cluster Analysis; Pattern Recognition, Automated; Artificial Intelligence; Computing Methodologies; Computer Simulation; data mining; regression analysis; algorithms; software; Models, Statistical; Humans; Molecular Biology; Biochemistry; Computer science; Computer Science Applications; Computational Mathematics; Computational Theory and Mathematics; Statistics and Probability
researchProduct

Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

2013

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivat…
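
A sketch of one technique the abstract names, component-wise boosting for a single Cox model: at each step, every covariate is offered a small univariate update and only the best one is taken, which performs variable selection implicitly. The coupling across cause-specific or landmark models that motivates the paper is not reproduced; data, step size and step count are illustrative.

```python
# Component-wise gradient boosting for a Cox model (single endpoint).
import numpy as np

rng = np.random.default_rng(11)
n, p = 200, 50
X = rng.standard_normal((n, p))
t = rng.exponential(np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3])))
c = rng.exponential(2.0, n)
time, event = np.minimum(t, c), t <= c
order = np.argsort(time)                    # ascending times -> easy risk sets
X, event = X[order], event[order]

def score_residuals(eta):
    """Gradient of the Cox log partial likelihood with respect to eta."""
    e = np.exp(eta)
    rev_cum = np.cumsum(e[::-1])[::-1]      # risk-set sums
    haz = np.cumsum(np.where(event, 1.0 / rev_cum, 0.0))
    return event - e * haz                  # martingale-type residuals

beta, eta, nu = np.zeros(p), np.zeros(n), 0.1
for _ in range(200):
    g = score_residuals(eta)
    fits = X.T @ g / np.sum(X**2, axis=0)   # univariate LS updates
    j = np.argmax(np.abs(fits))             # best single covariate this step
    beta[j] += nu * fits[j]                 # small, regularising step
    eta = X @ beta
print("selected covariates:", np.nonzero(beta)[0])
```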

feature selection; boosting (machine learning); Proportional Hazards Models; regression analysis; regression; covariate; score; cancer registry; Registries; Confounding Factors, Epidemiologic; Decision Support Techniques; Data Interpretation, Statistical; data mining; Prognosis; Carcinoma, Hepatocellular; Liver Neoplasms; Neoplasms; Sorafenib; Niacinamide; Phenylurea Compounds; Antineoplastic Agents; Humans; Male; Aged; Middle Aged; Epidemiology; Computer science; Statistics and Probability; Statistics in Medicine
researchProduct

Calibration of optimal execution of financial transactions in the presence of transient market impact

2012

Trading large volumes of a financial asset in order-driven markets requires the use of algorithmic execution, dividing the volume into many transactions in order to minimize costs due to market impact. A proper design of an optimal execution strategy strongly depends on a careful modeling of market impact, i.e. how the price reacts to trades. In this paper we consider a recently introduced market impact model (Bouchaud et al., 2004), which has the property of describing both the volume and the temporal dependence of the price change due to trading. We show how this model can be used to describe price impact also in aggregated trade time or in real time. We then solve analytically and calibrate wit…
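
A sketch of the transient-impact (propagator) idea the abstract builds on: each trade leaves a price imprint that decays in time, p_t = sum over s <= t of G(t-s) * eps_s * f(v_s) plus noise. The power-law kernel, the concave impact function and all parameters below are illustrative choices, not the calibrated values from the paper.

```python
# Propagator model of price impact and the cost of a uniform schedule.
import numpy as np

rng = np.random.default_rng(13)
T = 500
eps = rng.choice([-1.0, 1.0], T)        # trade signs
vol = rng.lognormal(0.0, 1.0, T)        # trade volumes

def G(tau, g0=0.1, gamma=0.4):
    """Power-law decaying propagator kernel."""
    return g0 / (1.0 + tau) ** gamma

imprint = eps * np.log1p(vol)           # concave volume-impact function
kernel = G(np.arange(T))
# price path: each past trade contributes a decayed imprint
price = np.array([np.sum(kernel[:t + 1][::-1] * imprint[:t + 1])
                  for t in range(T)])
price += 0.05 * np.cumsum(rng.standard_normal(T))  # exogenous noise

# expected impact cost of a uniform buy schedule: the i-th child order
# pays the decayed imprints accumulated from its predecessors
N, Q = 50, 100.0
child = Q / N
disp = sum(G(i - j) * np.log1p(child) for i in range(N) for j in range(i))
print("average impact cost per share:", disp * child / Q)
```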

market impact; financial markets; econophysics; stochastic processes; models of financial markets; financial instruments and regulation; risk measure and management; financial asset; financial transaction; efficient frontier; risk neutral; order (exchange); mathematical optimization; Trading and Market Microstructure (q-fin.TR); Statistical Finance (q-fin.ST); FOS: Economics and business; Statistical and Nonlinear Physics; Computer science; Statistics and Probability; Statistics, Probability and Uncertainty
researchProduct

Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous…

2012

In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedica…
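
A toy version of the comparison (the paper's simulation design is far richer): fit a degree-one fractional polynomial, choosing the best power from the conventional FP set by training fit, and a cubic smoothing spline, then compare out-of-sample error. Data, split and smoothing constant are illustrative assumptions.

```python
# Fractional polynomial (FP1) vs cubic smoothing spline on simulated data.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(17)
x = np.sort(rng.uniform(0.1, 5.0, 200))
y = np.log(x) + 0.3 * rng.standard_normal(200)   # true curve: log(x)
train, test = np.arange(0, 200, 2), np.arange(1, 200, 2)

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]          # conventional FP1 powers

def fp(x, p):
    """FP basis: x**p, with p == 0 meaning log(x) by convention."""
    return np.log(x) if p == 0 else x ** p

best = None
for p in powers:
    A = np.column_stack([np.ones(train.size), fp(x[train], p)])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    sse = np.sum((y[train] - A @ coef) ** 2)      # select power by training fit
    if best is None or sse < best[0]:
        best = (sse, p, coef)
_, p, coef = best
mse_fp = np.mean((y[test] - (coef[0] + coef[1] * fp(x[test], p))) ** 2)

# smoothing target roughly matched to the noise variance (m * sigma^2)
spline = UnivariateSpline(x[train], y[train], k=3, s=train.size * 0.3**2)
mse_sp = np.mean((y[test] - spline(x[test])) ** 2)
print(f"FP1 power {p}: test MSE {mse_fp:.3f} | spline: test MSE {mse_sp:.3f}")
```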

model selection; spline (mathematics); explained variation; linear regression; Logistic Models; covariate; categorical variable; sample size determination; Multivariate Analysis; Computer Simulation; Models, Statistical; Humans; Epidemiology; Mathematics; Statistics; Statistics and Probability; Statistics in Medicine
researchProduct