Search results for "quantitative method"

Showing 10 of 170 documents

Tandem repeats lead to sequence assembly errors and impose multi-level challenges for genome and protein databases

2019

Abstract: The widespread occurrence of repetitive stretches of DNA in genomes of organisms across the tree of life imposes fundamental challenges for sequencing, genome assembly, and automated annotation of genes and proteins. This multi-level problem can lead to errors in genome and protein databases that are often not recognized or acknowledged. As a consequence, end users working with sequences with repetitive regions are faced with ‘ready-to-use’ deposited data whose trustworthiness is difficult to determine, let alone to quantify. Here, we provide a review of the problems associated with tandem repeat sequences that originate from different stages during the sequencing-assembly-annotatio…

Subjects: FOS: Computer and information sciences; Bioinformatics; [SDV] Life Sciences [q-bio]; Sequence assembly; Genomics; [SDV.BC] Life Sciences [q-bio]/Cellular Biology; Computational biology; Biology; Genome; 03 medical and health sciences; Annotation; 0302 clinical medicine; Tandem repeat; Genetics; Animals; Survey and Summary; Databases, Protein; Gene; ComputingMilieux_MISCELLANEOUS; 030304 developmental biology; 0303 health sciences; End user; 572: Biochemie; DNA; Sequence Analysis, DNA; [SDV.BIBS] Life Sciences [q-bio]/Quantitative Methods [q-bio.QM]; Workflow; ComputingMethodologies_PATTERNRECOGNITION; Gadus morhua; Tandem Repeat Sequences; Scientific Experimental Error; [INFO.INFO-BI] Computer Science [cs]/Bioinformatics [q-bio.QM]; Databases, Nucleic Acid; 030217 neurology & neurosurgery
researchProduct

Gap Filling of Biophysical Parameter Time Series with Multi-Output Gaussian Processes

2018

In this work we evaluate multi-output (MO) Gaussian Process (GP) models based on the linear model of coregionalization (LMC) for estimation of biophysical parameter variables under a gap filling setup. In particular, we focus on LAI and fAPAR over rice areas. We show how this problem cannot be solved with standard single-output (SO) GP models, and how the proposed MO-GP models are able to successfully predict these variables even in high missing data regimes, by implicitly performing an across-domain information transfer.
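The multi-output covariance in the LMC can be sketched in its simplest instance, the intrinsic coregionalization model, where a single shared kernel over the inputs is coupled across outputs by a coregionalization matrix B = W Wᵀ + diag(κ). The parameters `W`, `kappa`, and the RBF lengthscale below are hypothetical placeholders, not values from the paper; this is a minimal numpy sketch, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    # Squared-exponential kernel between two 1-D input arrays
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def lmc_covariance(x, W, kappa, lengthscale=1.0):
    """Intrinsic-coregionalization covariance kron(B, K), with
    B = W @ W.T + diag(kappa) coupling the outputs and K a shared
    temporal kernel over the inputs."""
    B = W @ W.T + np.diag(kappa)        # coregionalization matrix
    K = rbf_kernel(x, x, lengthscale)   # shared input kernel
    return np.kron(B, K)

# Two outputs (e.g. LAI and fAPAR) observed at ten time steps
x = np.linspace(0.0, 1.0, 10)
W = np.array([[1.0], [0.8]])            # rank-1 cross-output coupling
kappa = np.array([0.1, 0.1])            # output-specific variances
C = lmc_covariance(x, W, kappa)
print(C.shape)                          # (20, 20)
```

Given this joint covariance, gap filling reduces to the standard GP conditional: condition the stacked outputs on the observed entries and predict the missing ones, so information transfers across outputs through B.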

Subjects: FOS: Computer and information sciences; Computer Science - Machine Learning; 010504 meteorology & atmospheric sciences; 0211 other engineering and technologies; FOS: Physical sciences; Machine Learning (stat.ML); 02 engineering and technology; 01 natural sciences; Quantitative Biology - Quantitative Methods; Machine Learning (cs.LG); Data modeling; Statistics - Machine Learning; Applied mathematics; Time series; Gaussian process; Quantitative Methods (q-bio.QM); 021101 geological & geomatics engineering; 0105 earth and related environmental sciences; Mathematics; Series (mathematics); Linear model; Probability and statistics; Missing data; FOS: Biological sciences; Physics - Data Analysis, Statistics and Probability; Focus (optics); Data Analysis, Statistics and Probability (physics.data-an)
researchProduct

Retrieval of aboveground crop nitrogen content with a hybrid machine learning method

2020

Abstract: Hyperspectral acquisitions have proven to be the most informative Earth observation data source for the estimation of nitrogen (N) content, which is the main limiting nutrient for plant growth and thus agricultural production. In the past, empirical algorithms have been widely employed to retrieve information on this biochemical plant component from canopy reflectance. However, these approaches do not seek a cause-effect relationship based on physical laws. Moreover, most studies relied solely on the correlation of chlorophyll content with nitrogen, thus neglecting the fact that most N is bound in proteins. Our study presents a hybrid retrieval method using a physically-base…

Subjects: FOS: Computer and information sciences; Computer Science - Machine Learning; Heteroscedasticity; 010504 meteorology & atmospheric sciences; Mean squared error; EnMAP; 0211 other engineering and technologies; Gaussian processes; 02 engineering and technology; Management, Monitoring, Policy and Law; Quantitative Biology - Quantitative Methods; 01 natural sciences; Machine Learning (cs.LG); Homoscedasticity; Agricultural monitoring; Computers in Earth Sciences; Gaussian process; Quantitative Methods (q-bio.QM); 021101 geological & geomatics engineering; 0105 earth and related environmental sciences; Earth-Surface Processes; Mathematics; Remote sensing; 2. Zero hunger; Global and Planetary Change; Inversion; Hyperspectral imaging; Imaging spectroscopy; Radiative transfer modeling; Regression; FOS: Biological sciences; [SDE] Environmental Sciences; International Journal of Applied Earth Observation and Geoinformation
researchProduct

Human experts vs. machines in taxa recognition

2020

The step of expert taxa recognition currently slows down the response time of many bioassessments. Shifting to quicker and cheaper state-of-the-art machine learning approaches is still met with expert scepticism towards the ability and logic of machines. In our study, we investigate both the differences in accuracy and in the identification logic of taxonomic experts and machines. We propose a systematic approach utilizing deep Convolutional Neural Nets with the transfer learning paradigm and extensively evaluate it over a multi-pose taxonomic dataset with hierarchical labels specifically created for this comparison. We also study the prediction accuracy on different ranks of taxonomic hier…
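The idea of scoring predictions at different ranks of a taxonomic hierarchy, mentioned above, can be illustrated with a toy example: a species-level mistake within the correct genus still counts as correct at the genus and order levels. The species/genus/order names below are hypothetical, not from the paper's dataset.

```python
import numpy as np

# Hypothetical 3-level hierarchy: species -> genus -> order
genus_of = {"s1": "g1", "s2": "g1", "s3": "g2"}
order_of = {"g1": "o1", "g2": "o1"}

true_species = ["s1", "s2", "s3", "s1"]
pred_species = ["s2", "s2", "s3", "s1"]   # one within-genus mistake

def rank_accuracy(true, pred, mapper=None):
    """Accuracy after optionally lifting labels to a coarser rank."""
    if mapper is not None:
        true = [mapper[t] for t in true]
        pred = [mapper[p] for p in pred]
    return np.mean([t == p for t, p in zip(true, pred)])

species_to_order = {s: order_of[g] for s, g in genus_of.items()}
print(rank_accuracy(true_species, pred_species))                 # 0.75
print(rank_accuracy(true_species, pred_species, genus_of))       # 1.0
print(rank_accuracy(true_species, pred_species, species_to_order))  # 1.0
```

Comparing such per-rank accuracies between humans and classifiers separates errors that cross high taxonomic ranks from near-misses among sibling taxa.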

Subjects: FOS: Computer and information sciences; Computer Science - Machine Learning; pattern recognition (computing); Computer science; Classification approach; Taxonomic expert; 02 engineering and technology; neural networks; Convolutional neural network; Quantitative Biology - Quantitative Methods; Field (computer science); Machine Learning (cs.LG); Machine learning approaches; Statistics - Machine Learning; Automated approach; Deep neural networks; 0202 electrical engineering, electronic engineering, information engineering; Taxonomic rank; Quantitative Methods (q-bio.QM); Classification (of information); Artificial neural network; systematics (biology); Prediction accuracy; Identification (information); Multi-image data; Benchmark (computing); 020201 artificial intelligence & image processing; Convolutional neural networks; Computer Vision and Pattern Recognition; Classification errors; Machine Learning (stat.ML); Machine learning; State of the art; Electrical and Electronic Engineering; Taxonomy; Support vector machines; Learning systems; Node (networking); 020206 networking & telecommunications; Computer circuits; Hierarchical classification; Convolution; Support vector machine; FOS: Biological sciences; Taxonomic hierarchy; Signal Processing; Biomonitoring; Benchmark datasets; Artificial intelligence; taxa; Software
researchProduct

Machinery Failure Approach and Spectral Analysis to study the Reaction Time Dynamics over Consecutive Visual Stimuli

2020

The reaction times of individuals over consecutive visual stimuli have been studied using spectral analysis and a machinery-failure approach. The tools used include the fast Fourier transform and spectral entropy analysis. The results indicate that the reaction times produced by individuals responding independently to visual stimuli appear to be correlated. The spectral analysis and the entropy of the spectrum show features of similarity in the response times of each participant and among participants. Furthermore, analysis of the mistakes made by the participants during the reaction-time experiments concluded that they follow a behavior consistent with the MTBF (…
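One common way to compute the spectral entropy mentioned in this abstract is to normalize the FFT power spectrum into a probability distribution and take its Shannon entropy; the sketch below is a generic numpy version on synthetic signals, not the authors' exact pipeline.

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy of a 1-D signal from its FFT power
    spectrum: close to 0 for a pure tone, close to 1 for white noise."""
    x = np.asarray(x, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    power = power[1:]                 # drop the DC bin
    p = power / power.sum()           # treat spectrum as a distribution
    nz = p[p > 0]                     # avoid log(0)
    return -np.sum(nz * np.log(nz)) / np.log(p.size)

rng = np.random.default_rng(0)
noise = rng.normal(size=1024)                              # broadband
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)     # single peak
print(spectral_entropy(noise), spectral_entropy(tone))
```

A reaction-time series falling between these extremes would indicate structured, partially correlated responses rather than purely random ones.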

Subjects: FOS: Computer and information sciences; FOS: Biological sciences; Applications (stat.AP); Quantitative Biology - Quantitative Methods; Statistics - Applications; Quantitative Methods (q-bio.QM)
researchProduct

Local Granger causality

2021

Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. For Gaussian variables it is equivalent to transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes. We exploit such equivalence and calculate exactly the 'local Granger causality', i.e. the profile of the information transfer at each discrete time point in Gaussian processes; in this frame Granger causality is the average of its local version. Our approach offers a robust and computationally fast method to follow the information transfer along the time history of linear stochastic processes, as well as of nonlinear …
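In the bivariate linear case, Granger causality is the log ratio of residual variances between a restricted autoregression (target's own past only) and a full one (target's and source's past); for Gaussian variables, transfer entropy is half this value. The simulated VAR below is a hypothetical illustration, not the paper's local (per-time-point) decomposition.

```python
import numpy as np

# Simulate a bivariate VAR(1) where x drives y but not vice versa
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def granger(src, dst, lag=1):
    """Average Granger causality src -> dst with order-1 regressions:
    log(var of restricted residuals / var of full residuals)."""
    m = len(dst) - lag
    Y = dst[lag:]
    A_r = np.column_stack([dst[:-lag], np.ones(m)])           # own past
    beta_r, *_ = np.linalg.lstsq(A_r, Y, rcond=None)
    A_f = np.column_stack([dst[:-lag], src[:-lag], np.ones(m)])  # + source past
    beta_f, *_ = np.linalg.lstsq(A_f, Y, rcond=None)
    var_r = np.var(Y - A_r @ beta_r)
    var_f = np.var(Y - A_f @ beta_f)
    return np.log(var_r / var_f)

print(granger(x, y))   # clearly positive: x helps predict y
print(granger(y, x))   # near zero: no feedback
```

The local version described in the abstract resolves this single average number into a per-sample profile whose mean recovers the value computed here.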

Subjects: FOS: Computer and information sciences; Information transfer; Gaussian; FOS: Physical sciences; techniques; information theory; Granger causality; Machine Learning (stat.ML); Quantitative Biology - Quantitative Methods; 01 natural sciences; 010305 fluids & plasmas; Vector autoregression; Statistics - Machine Learning; 0103 physical sciences; Applied mathematics; time series; 010306 general physics; Quantitative Methods (q-bio.QM); Mathematics; Stochastic process; Disordered Systems and Neural Networks (cond-mat.dis-nn); Condensed Matter - Disordered Systems and Neural Networks; Computational Physics (physics.comp-ph); Discrete time and continuous time; Autoregressive model; FOS: Biological sciences; Settore ING-INF/06 - Bioingegneria Elettronica e Informatica; Transfer entropy; Physics - Computational Physics
researchProduct

Order-distance and other metric-like functions on jointly distributed random variables

2013

We construct a class of real-valued nonnegative binary functions on a set of jointly distributed random variables, which satisfy the triangle inequality and vanish at identical arguments (pseudo-quasi-metrics). These functions are useful in dealing with the problem of selective probabilistic causality encountered in behavioral sciences and in quantum physics. The problem reduces to that of ascertaining the existence of a joint distribution for a set of variables with known distributions of certain subsets of this set. Any violation of the triangle inequality or its consequences by one of our functions when applied to such a set rules out the existence of this joint distribution. We focus on…

Subjects: FOS: Computer and information sciences; Measurable function; Computer Science - Artificial Intelligence; General Mathematics; Mathematics - Statistics Theory; Statistics Theory (math.ST); Quantitative Biology - Quantitative Methods; 01 natural sciences; 050105 experimental psychology; Joint probability distribution; 0103 physical sciences; FOS: Mathematics; 0501 psychology and cognitive sciences; 010306 general physics; Quantitative Methods (q-bio.QM); 60B99 (Primary), 81Q99, 91E45 (Secondary); Probability measure; Mathematics; Discrete mathematics; Triangle inequality; Applied Mathematics; Probability (math.PR); 05 social sciences; Function (mathematics); Artificial Intelligence (cs.AI); Distribution (mathematics); FOS: Biological sciences; Sample space; Random variable; Mathematics - Probability; Proceedings of the American Mathematical Society
researchProduct

A New Nonparametric Estimate of the Risk-Neutral Density with Applications to Variance Swaps

2021

We develop a new nonparametric approach for estimating the risk-neutral density of asset prices and reformulate its estimation into a double-constrained optimization problem. We evaluate our approach using the S&P 500 market option prices from 1996 to 2015. A comprehensive cross-validation study shows that our approach outperforms the existing nonparametric quartic B-spline and cubic spline methods, as well as the parametric method based on the Normal Inverse Gaussian distribution. As an application, we use the proposed density estimator to price long-term variance swaps, and the model-implied prices match reasonably well with those of the variance futures downloaded from the CBOE websi…

Subjects: FOS: Computer and information sciences; Statistics and Probability; Variance swap; Optimization problem; Statistics - Applications; FOS: Economics and business; Normal-inverse Gaussian distribution; double-constrained optimization; pricing; Econometrics; Applications (stat.AP); Asset (economics); Mathematics; Parametric statistics; lcsh:T57-57.97; Applied Mathematics; Nonparametric statistics; Estimator; Variance (accounting); lcsh:Applied mathematics. Quantitative methods; Pricing of Securities (q-fin.PR); risk-neutral density; lcsh:Probabilities. Mathematical statistics; lcsh:QA273-280; Quantitative Finance - Pricing of Securities
researchProduct

Multiscale partial information decomposition of dynamic processes with short and long-range correlations: theory and application to cardiovascular co…

2022

Abstract Objective. In this work, an analytical framework for the multiscale analysis of multivariate Gaussian processes is presented, whereby the computation of Partial Information Decomposition measures is achieved accounting for the simultaneous presence of short-term dynamics and long-range correlations. Approach. We consider physiological time series mapping the activity of the cardiac, vascular and respiratory systems in the field of Network Physiology. In this context, the multiscale representation of transfer entropy within the network of interactions among Systolic arterial pressure (S), respiration (R) and heart period (H), as well as the decomposition into unique, redundant and s…

Subjects: FOS: Computer and information sciences; multivariate time series; Physiology; Entropy; Respiration; Biomedical Engineering; Biophysics; heart rate variability; transfer entropy; redundancy and synergy; Blood Pressure; Heart; Quantitative Biology - Quantitative Methods; Cardiovascular System; Methodology (stat.ME); Heart Rate; Physiology (medical); FOS: Biological sciences; Cardiovascular control; Settore ING-INF/06 - Bioingegneria Elettronica e Informatica; vector autoregressive fractionally integrated (VARFI) models; Humans; Quantitative Methods (q-bio.QM); Statistics - Methodology; Physiological measurement
researchProduct

Fast PET Scan Tumor Segmentation Using Superpixels, Principal Component Analysis and K-Means Clustering

2018

Positron emission tomography scan images are extensively used in radiotherapy planning, clinical diagnosis, and the assessment of the growth and treatment of a tumor. All of these rely on the fidelity and speed of the detection and delineation algorithm. Despite intensive research, segmentation has remained a challenging problem due to diverse image content, resolution, shape, and noise. This paper presents a fast positron emission tomography tumor segmentation method in which superpixels are first extracted from the input image. Principal component analysis is then applied to the superpixels and also to their average. The distance vector of each superpixel from the average is computed in principal components coordin…
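The pipeline of region features, PCA, distance-from-average, and k-means can be sketched with numpy alone. This is a simplified stand-in, not the paper's method: regular grid blocks replace true superpixels, the image is synthetic, and the 2-means step runs directly on the PCA distances.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "PET slice": a bright tumor-like blob on a noisy background
img = rng.normal(0.2, 0.05, size=(64, 64))
img[20:36, 24:40] += 1.0

# Stand-in for superpixels: 8x8 grid blocks, each summarized by a
# small feature vector (mean, std, max intensity of its 64 pixels)
blocks = img.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(64, 64)
feats = np.column_stack([blocks.mean(1), blocks.std(1), blocks.max(1)])

# PCA via SVD: project block features onto principal axes, then
# measure each block's distance from the average in PC coordinates
centered = feats - feats.mean(0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt.T
dist = np.linalg.norm(scores, axis=1)

# 2-means on the distances separates tumor blocks from background
c = np.array([dist.min(), dist.max()])
for _ in range(20):
    labels = (np.abs(dist - c[0]) > np.abs(dist - c[1])).astype(int)
    c = np.array([dist[labels == 0].mean(), dist[labels == 1].mean()])
print(labels.sum())   # count of blocks flagged as tumor-like
```

Blocks far from the average in PC space end up in the high-distance cluster, which is the delineation idea the abstract describes, only with real superpixels instead of grid blocks.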

Subjects: FOS: Computer and information sciences; positron emission tomography; principal component analysis; Computer science; Computer Vision and Pattern Recognition (cs.CV); k-means; Coordinate system; Computer Science - Computer Vision and Pattern Recognition; FOS: Physical sciences; 02 engineering and technology; Benchmark; Quantitative Biology - Quantitative Methods; Biochemistry, Genetics and Molecular Biology (miscellaneous); 030218 nuclear medicine & medical imaging; superpixels; 03 medical and health sciences; 0302 clinical medicine; Structural Biology; 0202 electrical engineering, electronic engineering, information engineering; medicine; Segmentation; Computer vision; Tissues and Organs (q-bio.TO); Cluster analysis; Quantitative Methods (q-bio.QM); Pixel; k-means clustering; Quantitative Biology - Tissues and Organs; Pattern recognition; Physics - Medical Physics; FOS: Biological sciences; Physics - Data Analysis, Statistics and Probability; 020201 artificial intelligence & image processing; Medical Physics (physics.med-ph); Artificial intelligence; Noise (video); Data Analysis, Statistics and Probability (physics.data-an); Biotechnology; Methods and Protocols
researchProduct