Search results for "regression"

Showing 10 of 2,619 documents

Emulation of 2D Hydrodynamic Flood Simulations at Catchment Scale Using ANN and SVR

2021

Two-dimensional (2D) hydrodynamic models are among the most widely used tools for flood modeling and risk estimation. The 2D models provide accurate results…
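The emulation idea named in the title can be sketched in a few lines: a support vector regressor is trained on a set of simulator runs and then queried in place of the expensive 2D model. The features, target, and sizes below are invented stand-ins (rainfall forcings and a water depth), not the paper's catchment data.

```python
# Minimal SVR-emulator sketch: learn simulator input -> output, then predict cheaply.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 50.0, size=(200, 3))                     # hypothetical forcings per run
y = 0.002 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 200)       # surrogate "water depth"

emulator = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
emulator.fit(X[:150], y[:150])                                # train on 150 simulator runs
print(f"held-out R^2: {emulator.score(X[150:], y[150:]):.3f}")  # evaluate on the rest
```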

Computer science; Process (engineering); Geography, Planning and Development; Aquatic Science; Machine learning; Biochemistry; Support vector regression; Uncertainty analysis; Water Science and Technology; Emulation; Artificial neural network; Flood; Water supply; Dimensionality reduction; Hydraulic engineering; Support vector machine; Emulators; Sample size determination; Error structure; Artificial intelligence; Training set size; Water

Specifications of model development

2016

Chapter 4 details the specifications of the model and model validation. Section 4.1 explains why Partial Least Squares (PLS), a structural equation modelling approach, is chosen as the method for model testing, while section 4.2 describes the survey conducted to collect data for model testing. Section 4.3 details the PLS approach, its theoretical background, and its application to the research question, before section 4.4 outlines the necessary operationalisation of the constructs introduced in chapter 3.
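For readers who want the mechanics, the sketch below fits a partial least squares regression with scikit-learn. Note the hedge: this is plain PLS regression, the regression core of the PLS family, not the full PLS path-modelling (PLS-SEM) estimation the chapter applies, and the data and component count are invented.

```python
# Minimal PLS regression sketch on synthetic survey-style indicators.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 6))                                  # hypothetical indicator items
y = X[:, :2] @ np.array([0.7, 0.3]) + rng.normal(0, 0.2, 80)  # latent-driven response

pls = PLSRegression(n_components=2)                           # two latent components
pls.fit(X, y)
print(f"R^2 on the training data: {pls.score(X, y):.3f}")
```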

Computer science; Model testing; Partial least squares regression; Model development; Research question; Industrial engineering; Structural equation modeling; Brand loyalty; Model validation

Does Regression Discontinuity Design Work? Evidence from Random Election Outcomes

2014

We use data for 198,121 candidates and 1,351 random election outcomes to estimate the effect of incumbency status on future electoral success. We find no evidence of incumbency advantage using data on randomized elections. In contrast, regression discontinuity design, using optimal bandwidths, produces a positive and significant incumbency effect. Using even narrower bandwidths aligns the results with those obtained using the randomized elections. So does the bias correction of Calonico et al. (forthcoming). Standard validity tests are not useful in detecting the problems with the optimal bandwidths. The appropriate bandwidth seems narrower in larger elections and is thus context specific.
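The bandwidth sensitivity the abstract describes can be made concrete with a toy simulation: fit a local linear regression on each side of the cutoff within bandwidth h and compare the estimated jump across bandwidths. Everything below (data, names, the polyfit estimator) is illustrative, not the paper's dataset or procedure.

```python
# Local linear RD estimate at several bandwidths; the true jump at the cutoff is zero.
import numpy as np

rng = np.random.default_rng(2)
margin = rng.uniform(-0.5, 0.5, 5000)                  # vote-share margin of victory
outcome = 0.5 * margin + rng.normal(0, 0.2, 5000)      # future electoral success

def rd_estimate(x, y, h):
    """Difference of the two local linear intercepts fitted within h of the cutoff."""
    def intercept(mask):
        return np.polyfit(x[mask], y[mask], 1)[1]      # polyfit returns (slope, intercept)
    return intercept((x >= 0) & (x <= h)) - intercept((x < 0) & (x >= -h))

for h in (0.2, 0.1, 0.05, 0.02):
    print(f"h={h:.2f}  estimated incumbency jump: {rd_estimate(margin, outcome, h):+.3f}")
```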

Computer science; Regression discontinuity design; Econometrics; Bandwidth; Contrast (statistics); Context
SSRN Electronic Journal

Analysis of ventricular fibrillation signals using feature selection methods

2012

Feature selection methods in machine learning models are a powerful tool for knowledge extraction. In this work they are used to analyse the intrinsic modifications of cardiac response during ventricular fibrillation due to physical exercise. The data used are two sets of registers from isolated rabbit hearts: control (G1: without physical training) and trained (G2). Four parameters were extracted (dominant frequency, normalized energy, regularity index and number of occurrences). From them, 18 features were derived. This work analyses the relevance of each feature for classifying the records into G1 and G2 using Logistic Regression, Multilayer Perceptron and Extreme Learning Machine. Three fea…
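A minimal sketch of this style of feature-relevance analysis: rank the 18 features with a univariate filter, then score a logistic regression on the top-k subset. The data are synthetic placeholders, not the rabbit-heart registers, and the particular filter (ANOVA F-score) is an assumption.

```python
# Rank features, then classify with logistic regression on the top-k subset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=120, n_features=18, n_informative=3, random_state=0)

for k in (3, 6, 18):
    clf = make_pipeline(SelectKBest(f_classif, k=k), LogisticRegression(max_iter=1000))
    print(f"top-{k} features: CV accuracy {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```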

Computer science; Feature extraction; Feature selection; Pattern recognition; Regression analysis; Standard deviation; Knowledge extraction; Multilayer perceptron; Data mining; Artificial intelligence; Classifier; Extreme learning machine
2012 3rd International Workshop on Cognitive Information Processing (CIP)

Maximum Common Subgraph based locally weighted regression

2012

This paper investigates a simple, yet effective method for regression on graphs, in particular for applications in chem-informatics and for quantitative structure-activity relationships (QSARs). The method combines Locally Weighted Learning (LWL) with Maximum Common Subgraph (MCS) based graph distances. More specifically, we investigate a variant of locally weighted regression on graphs (structures) that uses the maximum common subgraph for determining and weighting the neighborhood of a graph and feature vectors for the actual regression model. We show that this combination, LWL-MCS, outperforms other methods that use the local neighborhood of graphs for regression. The performance of this…
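The locally weighted idea can be sketched compactly with a pluggable distance. Below, a plain Euclidean distance stands in for the MCS-based graph distance (computing a maximum common subgraph is beyond a short sketch), and a distance-weighted mean stands in for the local regression model; the data and names are invented.

```python
# Locally weighted prediction: neighbourhood and weights come from a distance function.
import numpy as np

def lwl_predict(X_train, y_train, x_query, dist, k=10):
    """Predict at x_query from the k nearest neighbours, weighted by 1/(1+d)."""
    d = np.array([dist(x, x_query) for x in X_train])
    idx = np.argsort(d)[:k]                      # the neighbourhood an MCS distance would pick
    w = 1.0 / (1.0 + d[idx])                     # closer structures weigh more
    return np.average(y_train[idx], weights=w)   # weighted mean as the local model

euclidean = lambda a, b: float(np.linalg.norm(a - b))
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X[:, 0] ** 2 + rng.normal(0, 0.1, 100)
print(f"prediction at X[0]: {lwl_predict(X, y, X[0], euclidean):.3f} (target {y[0]:.3f})")
```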

Computer science; Feature vector; Local regression; Pattern recognition; Regression analysis; Graph; Weighting; Combinatorics; Lazy learning; Artificial intelligence; Cluster analysis
Proceedings of the 27th Annual ACM Symposium on Applied Computing

Three-dimensional Fuzzy Kernel Regression framework for registration of medical volume data

2013

In this work a general framework for non-rigid 3D medical image registration is presented. It relies on two pattern recognition techniques: kernel regression and fuzzy c-means clustering. The paper provides a theoretical explanation, details the framework, and illustrates its application to implement three registration algorithms for CT/MR volumes as well as single 2D slices. The first two algorithms are landmark-based approaches, while the third one is an area-based technique. The last approach is based on iterative hierarchical volume subdivision and maximization of mutual information. Moreover, a high-performance Nvidia CUDA-based implementation of the algorithm is presented. The f…
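The kernel-regression ingredient can be illustrated for the landmark-based case: Nadaraya-Watson regression with a Gaussian kernel interpolates landmark displacements smoothly over the volume. Landmark positions, displacements, and the bandwidth below are made-up values.

```python
# Gaussian-kernel (Nadaraya-Watson) regression of landmark displacements in 3D.
import numpy as np

def kernel_regress(points, values, query, sigma=5.0):
    """Kernel-weighted average of landmark displacement vectors at a query voxel."""
    d2 = np.sum((points - query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w @ values / w.sum()

landmarks = np.array([[10.0, 10, 10], [30, 25, 12], [22, 40, 30]])    # source positions
displacements = np.array([[1.2, 0.0, -0.5], [0.4, 0.8, 0.1], [-0.3, 0.2, 0.9]])
print(kernel_regress(landmarks, displacements, np.array([20.0, 20, 15])))
```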

Computer science; Image registration; Mutual information; Machine learning; Fuzzy logic; CUDA; Non-rigid registration; Fuzzy regression; Interpolation; GPU computing; Signal processing; Pattern recognition; Kernel regression; Computer vision; Artificial intelligence; Data mining; General-purpose computing on graphics processing units; Cluster analysis; Software
Pattern Recognition

Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

2017

This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML) based standard used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the pred…
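The feature PMML 4.3 adds representation for, a regression model that reports uncertainty alongside its prediction, is easy to demonstrate with scikit-learn's Gaussian process regressor; the kernel, data, and 95% multiplier below are illustrative choices, not part of the standard.

```python
# GPR prediction with a confidence bound, the kind of output PMML 4.3 can encode.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (30, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 30)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict([[5.0]], return_std=True)         # prediction plus uncertainty
print(f"prediction {mean[0]:.3f} +/- {1.96 * std[0]:.3f} (~95% bound)")
```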

Computer science; Industrial and Manufacturing Engineering; Kriging; Uncertainty quantification; Representation; Predictive Model Markup Language (PMML); Probabilistic logic; Data mining; Predictive analytics; XML; Computer Science Applications; Control and Systems Engineering; Standards; Gaussian process regression

Missing values in deduplication of electronic patient data

2011

Data deduplication refers to the process in which records referring to the same real-world entities are detected in datasets such that duplicated records can be eliminated. The denotation ‘record linkage’ is used here for the same problem [1]. A typical application is the deduplication of medical registry data [2][3]. Medical registries are institutions that collect medical and personal data in a standardized and comprehensive way. The primary aims are the creation of a pool of patients eligible for clinical or epidemiological studies and the computation of certain indices such as the incidence in order to oversee the development of diseases. The latter task in particular requires a database in wh…
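To make the missing-value problem concrete, here is a toy pairwise comparison of the kind record linkage builds on: each field yields agree or disagree, while a missing value yields neither rather than being counted as a disagreement. Field names and records are invented.

```python
# Per-field agreement vector for a candidate record pair, with explicit missingness.
def compare(rec_a, rec_b, fields):
    """Return True/False per field, or None when either value is missing."""
    out = {}
    for f in fields:
        a, b = rec_a.get(f), rec_b.get(f)
        out[f] = None if a is None or b is None else (a == b)
    return out

r1 = {"name": "mueller", "birth_year": 1956, "zip": None}
r2 = {"name": "mueller", "birth_year": 1956, "zip": "55128"}
print(compare(r1, r2, ["name", "birth_year", "zip"]))
# -> {'name': True, 'birth_year': True, 'zip': None}: zip is unknown, not a mismatch
```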

Computer science; Inference; Health Informatics; Ambiguity; Patient data; Missing data; Regression; Neoplasms; Statistics; Data deduplication; Electronic Health Records; Humans; Data mining; Imputation (statistics); Medical Record Linkage; Registries; Record linkage

Individualizing deep dynamic models for psychological resilience data

2020

Deep learning approaches can uncover complex patterns in data. In particular, variational autoencoders (VAEs) achieve this by a non-linear mapping of data into a low-dimensional latent space. Motivated by an application to psychological resilience in the Mainz Resilience Project (MARP), which features intermittent longitudinal measurements of stressors and mental health, we propose an approach for individualized, dynamic modeling in this latent space. Specifically, we utilize ordinary differential equations (ODEs) and develop a novel technique for obtaining person-specific ODE parameters even in settings with a rather small number of individuals and observations, incomplete data, an…
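A toy version of the dynamic-modeling idea, with the VAE encoder omitted: fit person-specific parameters of a linear ODE dz/dt = A z to one individual's irregularly timed points in a 2-D latent space. The times, observations, and linear form of the ODE are all assumptions made for illustration.

```python
# Fit person-specific linear ODE parameters to sparse latent-space observations.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.array([0.0, 0.7, 1.5, 3.2, 5.0])              # intermittent measurement times
z_obs = np.column_stack([np.exp(-0.4 * t_obs), 0.5 * np.exp(-0.4 * t_obs)])
z_obs += np.random.default_rng(5).normal(0, 0.02, z_obs.shape)

def residuals(a_flat):
    A = a_flat.reshape(2, 2)                             # person-specific parameters
    sol = solve_ivp(lambda t, z: A @ z, (0.0, 5.0), z_obs[0], t_eval=t_obs)
    return (sol.y.T - z_obs).ravel()                     # misfit at the observed times

fit = least_squares(residuals, np.zeros(4))
print(fit.x.reshape(2, 2))                               # estimated ODE matrix A
```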

Computer science; Mathematics and computing; Psychology; Machine learning; Humans; Multidisciplinary; Deep learning; Ordinary differential equations; Resilience, Psychological; Mental health; Regression; System dynamics; Psychological resilience; Artificial intelligence

Lead Reconstruction Using Artificial Neural Networks for Ambulatory ECG Acquisition

2021

One of the most powerful techniques to diagnose cardiovascular diseases is to analyze the electrocardiogram (ECG). To increase diagnostic sensitivity, the ECG might need to be acquired using an ambulatory system, as symptoms may occur during a patient’s daily life. In this paper, we propose using an ambulatory ECG (aECG) recording device with a low number of leads and then estimating the views that would have been obtained with a standard ECG location, reconstructing the complete Standard 12-Lead System, the most widely used system for diagnosis by cardiologists. Four approaches have been explored, including Linear Regression with ECG segmentation and Artificial Neural Networks (ANN). The b…
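The linear-regression baseline can be sketched as learning a linear map from a reduced measured-lead subset to the remaining standard leads. The synthetic signals and the 3-to-9 lead split below are illustrative assumptions, not the paper's recording setup or its segmentation scheme.

```python
# Reconstruct "missing" leads from measured ones with multi-output linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 2000)                              # 10 s of samples
measured = np.column_stack([np.sin(2 * np.pi * f * t) for f in (1.0, 1.3, 1.7)])
mixing = rng.normal(size=(3, 9))                          # hypothetical lead relationships
missing = measured @ mixing + rng.normal(0, 0.05, (2000, 9))

model = LinearRegression().fit(measured[:1500], missing[:1500])   # train on first 7.5 s
print(f"held-out R^2: {model.score(measured[1500:], missing[1500:]):.3f}")
```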

Computer science; Standard 12-lead system; Electrocardiogram; Biochemistry; Lead reconstruction; Analytical Chemistry; Electrocardiography; Linear regression; Humans; Segmentation; Sensitivity; Cardiovascular diseases; Electrical and Electronic Engineering; Instrumentation; Artificial neural network; Chemical technology; Reconstruction algorithm; Pattern recognition; Signal Processing, Computer-Assisted; Atomic and Molecular Physics, and Optics; Ambulatory monitoring; Electrocardiography, Ambulatory; Neural Networks, Computer; ECG signal; Algorithms
Sensors (Basel, Switzerland)