Search results for "Estimation"

Showing 10 of 924 documents

Modelling, Simulation and Characterization of a Supercapacitor in Automotive Applications

2022

In the energy storage field, supercapacitors (SCs) are gaining increasing attention thanks to features such as high power density, long cycle life and lack of maintenance. In this article, an improved SC three-branch model which accounts for the residual charge phenomenon is presented. The procedure to estimate the model parameters and the related experimental set-up are described. The parameter estimation procedure is repeated for several SCs of the same type. The average parameters are then obtained and used as initial guesses for a recursive least-squares optimization algorithm, to increase the accuracy of the model. The model of a single SC is then extended to SC banks, testing differ…
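The recursive least-squares refinement mentioned in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of a standard RLS update, not the authors' implementation; function and variable names, and the toy linear model in the usage below, are our assumptions.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.

    theta : current parameter estimate, shape (d,)
    P     : inverse-information matrix, shape (d, d)
    phi   : regressor vector for this sample, shape (d,)
    y     : measured output (scalar)
    lam   : forgetting factor (1.0 = standard RLS)
    """
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)           # Kalman-style gain vector
    theta = theta + gain * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(gain, Pphi)) / lam       # update inverse information
    return theta, P
```

As in the article, starting `theta` from averaged parameters gives the recursion a sensible initial guess, which the per-sample updates then refine.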

Mathematical models; energy storage; Resistors; supercapacitor (SC); Computational modeling; Voltage; Capacitors; Settore ING-IND/32 - Convertitori Macchine E Azionamenti Elettrici; Industrial and Manufacturing Engineering; modelling; Control and Systems Engineering; Integrated circuit modeling; Supercapacitors; Electrical and Electronic Engineering; parameter estimation

How to simulate normal data sets with the desired correlation structure

2010

The Cholesky decomposition is a widely used method to draw samples from a multivariate normal distribution with a non-singular covariance matrix. In this work we introduce a simple method using the singular value decomposition (SVD) to simulate multivariate normal data even if the covariance matrix is singular, which is often the case in chemometric problems. The covariance matrix can be specified by the user or can be generated by specifying a subset of the eigenvalues. The latter can be an advantage for simulating data sets with a particular latent structure. This can be useful for testing the performance of chemometric methods with data sets matching the theoretical conditions for their app…
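The SVD-based sampling idea can be sketched as follows: factor the (possibly singular) covariance through its SVD and push standard-normal draws through the resulting square root. A minimal sketch, not the paper's code; names are ours.

```python
import numpy as np

def sample_mvn_svd(mean, cov, n, rng=None):
    """Draw n samples from N(mean, cov) via SVD.

    Works even when cov is singular (rank-deficient), where a
    Cholesky factorization would fail.
    """
    rng = np.random.default_rng(rng)
    # For a symmetric PSD matrix, SVD coincides with the eigendecomposition:
    # cov = U diag(s) U^T, with s >= 0.
    U, s, _ = np.linalg.svd(cov, hermitian=True)
    A = U * np.sqrt(s)                     # column scaling: A @ A.T == cov
    z = rng.standard_normal((n, len(mean)))
    return np.asarray(mean) + z @ A.T
```

Because the square root only needs non-negative singular values, zero eigenvalues simply contribute nothing, which is exactly what a singular covariance requires.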

Mathematical optimization; Covariance function; Covariance matrix; Process Chemistry and Technology; MathematicsofComputing_NUMERICALANALYSIS; Multivariate normal distribution; Covariance; Computer Science Applications; Analytical Chemistry; Estimation of covariance matrices; Scatter matrix; Matrix normal distribution; CMA-ES; Algorithm; Computer Science::Databases; Spectroscopy; Software; Mathematics; Chemometrics and Intelligent Laboratory Systems

Modelling agricultural risk in a large scale positive mathematical programming model

2020

International audience; Mathematical programming has been extensively used to account for risk in farmers' decision making. The recent development of the positive mathematical programming (PMP) has renewed the need to incorporate risk in a more robust and flexible way. Most of the existing PMP-risk models have been tested at farm-type level and for a very limited sample of farms. This paper presents and tests a novel methodology for modelling risk at individual farm level in a large scale model, called individual farm model for common agricultural policy analysis (IFM-CAP). Results show a clear trade-off between including and excluding the risk specification. Albeit both alternatives provid…

Mathematical optimization; Economics and Econometrics; Scale (ratio); Computer science; Computation; programmation mathématique positive; 020209 energy; expected utility; Sample (statistics); highest posterior density; 02 engineering and technology; politique agricole commune; risk and uncertainty; 0202 electrical engineering electronic engineering information engineering; European common agricultural policy; Expected utility hypothesis; agriculture; Estimation; risque et incertitude; 2. Zero hunger; business.industry; 020208 electrical & electronic engineering; [SHS.ECO]Humanities and Social Sciences/Economics and Finance; 16. Peace & justice; modèle de ferme; PMP; Computer Science Applications; Agriculture; business; Common Agricultural Policy; Scale model; positive mathematical programming; International Journal of Computational Economics and Econometrics

Approximation of the Feasible Parameter Set in worst-case identification of Hammerstein models

2005

The estimation of the Feasible Parameter Set (FPS) for Hammerstein models in a worst-case setting is considered. A bounding procedure is derived for both polytopic and ellipsoidal uncertainties. It consists of projecting the FPS of the extended parameter vector onto suitable subspaces and solving convex optimization problems that provide uncertainty intervals for the model parameters. The resulting bounds are tighter than those of previous approaches.

Mathematical optimization; Estimation theory; System identification; Identification (control systems); Polytope; Linear subspace; Interval arithmetic; Settore ING-INF/04 - Automatica; Control and Systems Engineering; Bounding overwatch; Convex optimization; Nonlinear systems; Applied mathematics; Electrical and Electronic Engineering; Projection (set theory); static nonlinearity; Mathematics

Two-Sided Guaranteed Estimates of the Cost Functional for Optimal Control Problems with Elliptic State Equations

2014

In the paper, we discuss error estimation methods for optimal control problems with distributed control functions entering the right-hand side of the corresponding elliptic state equations. Our analysis is based on a posteriori error estimates of the functional type, which were derived in the last decade for many boundary value problems. They provide guaranteed two-sided bounds of approximation errors for any conforming approximation. If they are applied to approximate solutions of state equations, then we obtain new variational formulations of optimal control problems and guaranteed bounds of the cost functional. Moreover, for problems with linear state equations this procedure leads to gu…

Mathematical optimization; Functional type; A priori and a posteriori; Boundary value problem; State (functional analysis); Control (linguistics); Optimal control; Estimation methods; Mathematics

Estimating biophysical variable dependences with kernels

2010

This paper introduces a nonlinear measure of dependence between random variables in the context of remote sensing data analysis. The Hilbert-Schmidt Independence Criterion (HSIC) is a kernel method for evaluating statistical dependence. HSIC is based on computing the Hilbert-Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is very easy to compute and has good theoretical and practical properties. We exploit the capabilities of HSIC to explain nonlinear dependences in two remote sensing problems: temperature estimation and chlorophyll concentration prediction from spectra. Results show that, when the relationshi…
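The empirical HSIC estimator the abstract describes as "very easy to compute" really is a few lines: build Gram matrices for the two samples, center them, and take a normalized trace. A minimal sketch with Gaussian kernels on 1-D samples; the bandwidths and function names are our choices, not the paper's.

```python
import numpy as np

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC with Gaussian (RBF) kernels for 1-D samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    def gram(v, sigma):
        d = v[:, None] - v[None, :]          # pairwise differences
        return np.exp(-d**2 / (2.0 * sigma**2))
    K, L = gram(x, sigma_x), gram(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    # HSIC_b = trace(K H L H) / (n - 1)^2
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A value near zero indicates (approximate) independence; nonlinear dependences, which a linear correlation can miss, push the statistic up.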

Mathematical optimization; Hilbert space; Kernel methods; Estimator; Dependence estimation; Mutual information; Chlorophyll concentration; Nonlinear system; symbols.namesake; Kernel method; Norm (mathematics); symbols; Applied mathematics; Random variable; Mathematics; 2010 IEEE International Geoscience and Remote Sensing Symposium

Robust estimation of partial directed coherence by the vector optimal parameter search algorithm

2009

We propose a method for the accurate estimation of Partial Directed Coherence (PDC) from multichannel time series. The method is based on multivariate vector autoregressive (MVAR) model identification performed through the recently proposed Vector Optimal Parameter Search (VOPS) algorithm. Using Monte Carlo simulations generated by different MVAR models, the proposed VOPS algorithm is compared with the traditional Vector Least Squares (VLS) identification method. We show that VOPS provides more accurate PDC estimates than VLS (in both overall and single-arc errors) in the presence of interactions with long delays and missing terms, and for noisy multichannel time series. ©2009 IEEE.
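Once the MVAR coefficients are identified (by VOPS, VLS, or any other method), PDC itself follows from the standard column-normalized frequency-domain definition. A minimal sketch of that definition only, not of the VOPS identification step; function and variable names are ours.

```python
import numpy as np

def pdc(A, f, fs=1.0):
    """Partial directed coherence at frequency f.

    A  : list of MVAR coefficient matrices [A_1, ..., A_p], each (d, d),
         from x_t = sum_r A_r x_{t-r} + noise
    f  : frequency at which to evaluate PDC
    fs : sampling frequency
    Returns a (d, d) matrix; entry (i, j) measures the direct
    influence of channel j on channel i at frequency f.
    """
    d = A[0].shape[0]
    Af = np.eye(d, dtype=complex)
    for r, Ar in enumerate(A, start=1):
        Af -= Ar * np.exp(-2j * np.pi * f * r / fs)
    denom = np.sqrt((np.abs(Af) ** 2).sum(axis=0))  # per-column norms
    return np.abs(Af) / denom
```

With no cross-coupling in the coefficients, all off-diagonal PDC entries vanish, which is the usual sanity check for an implementation.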

Mathematical optimization; Multivariate statistics; Neuroscience (all); Parameter search algorithm; Computer science; Estimation theory; Monte Carlo method; System identification; Partial directed coherence; Biomedical Engineering; AC power; Autoregressive model; Search algorithm; Vector autoregressive model; Settore ING-INF/06 - Bioingegneria Elettronica E Informatica; Coherence (signal processing); Brain connectivity; Neurology (clinical); Algorithm

Decomposition of Dynamic Single-Product and Multi-product Lotsizing Problems and Scalability of EDAs

2008

In existing theoretical and experimental work, Estimation of Distribution Algorithms (EDAs) are primarily applied to decomposable test problems. State-of-the-art EDAs such as the Hierarchical Bayesian Optimization Algorithm (hBOA), the Learning Factorized Distribution Algorithm (LFDA) and the Estimation of Bayesian Networks Algorithm (EBNA) solve these problems in polynomial time. Given this success, it is tempting to apply EDAs to real-world problems. But until now, it has rarely been analyzed which real-world problems are decomposable. The main contribution of this chapter is twofold: (1) it shows that uncapacitated single-product and multi-product lotsizing problems are decomposable. (2) A s…

Mathematical optimization; Polynomial; Distribution (mathematics); Estimation of distribution algorithm; Computer science; Bounded function; Scalability; EDAS; Bayesian network; Time complexity

Methods cooperation for multiresolution motion estimation

2002

For a medical application, we are interested in estimating optical flow on a patient's face, particularly around the eyes. Among optical flow estimation techniques, gradient-based estimation and block matching are the main approaches. However, the gradient-based approach can only be applied to small displacements (one or two pixels). Generally, block matching gives good results only if the search strategy is judiciously selected. Our approach is based on a Markov random field model, combined with a block-matching algorithm in a multiresolution scheme. The multiresolution approach allows detection of a large range of speeds. The large displacements are detect…
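The block-matching component of such a scheme can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD) within a small window. This is a minimal single-resolution sketch, not the paper's Markov-field/multiresolution method; block size, search radius, and names are our assumptions.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive block matching between two grayscale frames.

    For each block of `curr`, find the displacement (di, dj) within
    +/-search pixels that best matches a block of `prev` (minimum SAD).
    Returns an (H//block, W//block, 2) array of integer motion vectors.
    """
    H, W = curr.shape
    motion = np.zeros((H // block, W // block, 2), dtype=int)
    for bi, i in enumerate(range(0, H - block + 1, block)):
        for bj, j in enumerate(range(0, W - block + 1, block)):
            ref = curr[i:i + block, j:j + block]
            best, best_dv = np.inf, (0, 0)
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    y, x = i + di, j + dj
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - ref).sum()
                        if sad < best:
                            best, best_dv = sad, (di, dj)
            motion[bi, bj] = best_dv
    return motion
```

The quadratic cost of this exhaustive search is precisely why the paper moves to a multiresolution scheme: coarse levels detect large displacements cheaply, and finer levels refine them.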

Mathematical optimization; Random field; Markov random field; Markov chain; Computer science; General Engineering; Optical flow; Initialization; Motion detection; Image processing; Atomic and Molecular Physics and Optics; Optical flow estimation; Motion estimation; Image resolution; Algorithm; Block (data storage); Block-matching algorithm; Optical Engineering

Using Fourier local magnitude in adaptive smoothness constraints in motion estimation

2007

Like many problems in image analysis, motion estimation is an ill-posed one, since the available data do not always sufficiently constrain the solution. It is therefore necessary to regularize the solution by imposing a smoothness constraint. One of the main difficulties while estimating motion is to preserve the discontinuities of the motion field. In this paper, we address this problem by integrating the motion magnitude information obtained by the Fourier analysis into the smoothness constraint, resulting in an adaptive smoothness. We describe how to achieve this with two different motion estimation approaches: the Horn and Schunck method and the Markov Random Field (MRF) modeling. The t…

Mathematical optimization; Random field; Markov random field; Smoothness (probability theory); ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Optical flow; Constraint (information theory); symbols.namesake; Motion field; Artificial Intelligence; Fourier analysis; Motion estimation; Signal Processing; symbols; Computer Vision and Pattern Recognition; Algorithm; Software; ComputingMethodologies_COMPUTERGRAPHICS; Mathematics; Pattern Recognition Letters