Search results for "gradient"

Showing 10 of 725 documents

Time scales of adaptive behavior and motor learning in the presence of stochastic perturbations.

2009

In this paper, the major assumptions of influential approaches to the structure of variability in practice conditions are discussed from the perspective of a generalized evolving-attractor-landscape model of motor learning. The efficacy of practice-condition effects is considered in relation to the theoretical influence of stochastic perturbations in models of gradient-descent learning on multidimensional landscapes. A model for motor learning is presented that combines simulated annealing and stochastic resonance phenomena against the background of different time scales for adaptation and learning processes. The practical consequences of the model's assumptions for the structure of pract…
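
The interplay the abstract describes between gradient descent, annealing schedules, and beneficial noise can be sketched in a toy setting. Everything below is illustrative (the double-well landscape, the decay schedule, and all constants are made up, not taken from the paper): gradient descent with a noise term whose "temperature" decays over time, so early perturbations help the search escape shallow basins while late iterations settle into a minimum.

```python
import math
import random

# Hypothetical illustration: gradient descent on the double-well landscape
# f(x) = (x^2 - 1)^2, whose minima sit at x = -1 and x = +1.  Annealed
# noise (simulated-annealing style) perturbs early steps strongly and
# late steps hardly at all.

def f(x):
    return (x**2 - 1)**2

def grad_f(x):
    return 4 * x * (x**2 - 1)

def noisy_descent(x0, steps=2000, lr=0.01, temp0=1.0, seed=0):
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        temp = temp0 * math.exp(-k / 200)   # annealing schedule
        # descent step plus temperature-scaled stochastic perturbation
        x -= lr * grad_f(x) + math.sqrt(temp) * lr * rng.gauss(0, 1)
    return x

x_final = noisy_descent(x0=0.1)
# x_final ends up near one of the two wells, x = -1 or x = +1
```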

Mathematical optimization; Acclimatization; Movement; Biophysics; Experimental and Cognitive Psychology; Motor Activity; Oscillometry; Attractor; Adaptation, Psychological; Humans; Learning; Orthopedics and Sports Medicine; Attention; Motor skill; Adaptive behavior; Behavior; Stochastic Processes; Stochastic process; General Medicine; Stochastic resonance (sensory neurobiology); Motor Skills; Simulated annealing; Artificial intelligence; Motor learning; Gradient descent; Psychology; Noise; Human movement science

Direct Numerical Methods for Optimal Control Problems

2003

Interior point methods for linear and quadratic programming problems were developed during the 1990s. Because of their simplicity and their convergence properties, interior point methods are attractive solvers for such problems. Moreover, extensions have been made to more general convex programming problems.
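
A minimal sketch of the interior point idea (a hypothetical toy, not from the chapter): a log-barrier method for a one-dimensional constrained quadratic. An inner Newton loop minimises the barrier-augmented objective, and an outer loop shrinks the barrier weight so the iterates approach the constrained optimum from strictly inside the feasible region.

```python
# Toy log-barrier (interior point) method for
#   minimize (x - 2)^2  subject to  x <= 1,   constrained optimum x* = 1.
# Names and the mu-schedule are illustrative.

def barrier_newton(mu, x):
    # Damped Newton on phi(x) = (x - 2)^2 - mu * log(1 - x)
    for _ in range(50):
        g = 2 * (x - 2) + mu / (1 - x)        # gradient of phi
        h = 2 + mu / (1 - x)**2               # Hessian (positive)
        step = g / h
        while x - step >= 1:                  # stay strictly feasible
            step *= 0.5
        x -= step
    return x

x = 0.0                                       # strictly feasible start
mu = 1.0
for _ in range(30):                           # shrink the barrier weight
    x = barrier_newton(mu, x)
    mu *= 0.5

# x approaches x* = 1 along the central path, always with x < 1
```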

Mathematical optimization; Computer science; Numerical analysis; Conjugate gradient method; Convergence (routing); Convex optimization; MathematicsofComputing_NUMERICALANALYSIS; Positive-definite matrix; Quadratic programming; Optimal control; Interior point method

A variational inequality approach to constrained control problems for parabolic equations

1988

A distributed optimal control problem for parabolic systems with state constraints is considered. The problem is transformed into an unconstrained control problem for systems governed by parabolic variational inequalities. The new formulation enables the efficient use of a standard gradient method for numerically solving the problem in question. A comparison with a standard penalty method, as well as numerical examples, is given.
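
The contrast the abstract draws, between a formulation on which a plain gradient method works and a penalty formulation, can be seen in a finite-dimensional stand-in (entirely illustrative; the paper's setting is a parabolic PDE): a projected-gradient method versus a quadratic penalty method on a one-dimensional constrained quadratic.

```python
# Toy stand-in: minimize (u - 3)^2 subject to u <= 1, optimum u* = 1.
# Projected gradient honours the constraint exactly at every step;
# the penalty method only enforces it up to an O(1/rho) violation.

def projected_gradient(u=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        u = u - lr * 2 * (u - 3)      # gradient step on the objective
        u = min(u, 1.0)               # project back onto {u <= 1}
    return u

def penalty_method(u=0.0, lr=0.004, steps=5000, rho=100.0):
    for _ in range(steps):
        g = 2 * (u - 3) + 2 * rho * max(0.0, u - 1.0)   # penalised gradient
        u = u - lr * g
    return u

u_proj = projected_gradient()         # lands exactly on the constraint
u_pen = penalty_method()              # overshoots slightly, by about 1/rho
```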

Mathematical optimization; Control and Optimization; Applied Mathematics; Variational inequality; MathematicsofComputing_NUMERICALANALYSIS; Penalty method; State (functional analysis); Optimal control; Control (linguistics); Gradient method; Parabolic partial differential equation; Mathematics; Applied Mathematics & Optimization

Fast Convergence of Neural Networks by Application of a New Min-Max Algorithm

1992

The paper presents a new application of the min-max method, an original algorithm previously applied successfully in other areas, based on a combination of the quasi-Newton and steepest descent methods, to find the weights minimising the error function of a feed-forward neural network. Preliminary results, obtained by applying the proposed method to a simple 2-2-1 architecture on small Boolean learning problems, are very promising.
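
For scale, a 2-2-1 network on a Boolean problem is small enough to train from scratch in a few lines. The sketch below uses only the plain steepest-descent component (not the paper's min-max quasi-Newton hybrid) on logical OR; the architecture matches the abstract, but the learning rate, seed, and epoch count are made up.

```python
import math
import random

# Illustrative baseline: full-batch steepest descent training a 2-2-1
# sigmoid network on the OR truth table (mean squared error loss).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 1]                      # OR targets

rng = random.Random(1)
W1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [rng.uniform(-0.5, 0.5) for _ in range(2)]
b2 = 0.0
lr = 1.0

def forward(x):
    h = [sigmoid(W1[j][0]*x[0] + W1[j][1]*x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0]*h[0] + W2[1]*h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T)) / 4

loss_init = loss()
for _ in range(20000):
    gW1 = [[0.0, 0.0], [0.0, 0.0]]
    gb1 = [0.0, 0.0]
    gW2 = [0.0, 0.0]
    gb2 = 0.0
    for x, t in zip(X, T):            # accumulate the batch gradient
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y) / 4
        for j in range(2):
            gW2[j] += dy * h[j]
            dh = dy * W2[j] * h[j] * (1 - h[j])
            gW1[j][0] += dh * x[0]
            gW1[j][1] += dh * x[1]
            gb1[j] += dh
        gb2 += dy
    for j in range(2):                # steepest descent update
        W2[j] -= lr * gW2[j]
        b1[j] -= lr * gb1[j]
        W1[j][0] -= lr * gW1[j][0]
        W1[j][1] -= lr * gW1[j][1]
    b2 -= lr * gb2

loss_final = loss()                   # error drops well below its start
```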

Mathematical optimization; Error function; Artificial neural network; Computer science; Simple (abstract algebra); Convergence (routing); Minimax; Gradient descent

A purification algorithm for semi-infinite programming

1992

In this paper we present a purification algorithm for semi-infinite linear programming. Starting from a feasible point, the algorithm either finds an improved extreme point or concludes that the problem is unbounded. The method is based on the solution of a sequence of linear programming problems. The study of some recession conditions has allowed us to establish a weak assumption for the finite convergence of this algorithm. Numerical results illustrating the method are given.
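
The core "purification" move, driving a feasible point to an extreme point along non-worsening directions, is easiest to see in a finite-dimensional toy (the paper treats the semi-infinite case; the polygon and directions below are invented for illustration):

```python
from fractions import Fraction as F

# Feasible set: x >= 0, y >= 0, x + y <= 1;  objective: maximize x + 2y.
# From an interior point, move along an improving direction until a new
# constraint becomes active; repeat within the active face until an
# extreme point (here the optimal vertex (0, 1)) is reached.

A = [(-1, 0), (0, -1), (1, 1)]        # rows of A in  A (x, y)^T <= b
b = [F(0), F(0), F(1)]
c = (F(1), F(2))

def active(p):
    return [i for i, (a, bi) in enumerate(zip(A, b))
            if a[0]*p[0] + a[1]*p[1] == bi]

def max_step(p, d):
    # largest t with p + t*d still feasible (ratio test);
    # None would signal an unbounded direction
    t = None
    for a, bi in zip(A, b):
        ad = a[0]*d[0] + a[1]*d[1]
        if ad > 0:
            slack = bi - (a[0]*p[0] + a[1]*p[1])
            t = slack / ad if t is None else min(t, slack / ad)
    return t

p = (F(1, 5), F(1, 5))                # strictly interior starting point
d = c                                 # improving direction
t = max_step(p, d)
p = (p[0] + t*d[0], p[1] + t*d[1])    # hits the face x + y = 1

d = (F(-1), F(1))                     # improving direction along that face
t = max_step(p, d)
p = (p[0] + t*d[0], p[1] + t*d[1])    # reaches the extreme point (0, 1)
```

Exact rational arithmetic (`fractions.Fraction`) keeps the active-set test an exact equality, which is why no tolerance is needed.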

Mathematical optimization; Information Systems and Management; General Computer Science; Linear programming; Management Science and Operations Research; Industrial and Manufacturing Engineering; Semi-infinite programming; Linear-fractional programming; Simplex algorithm; Modeling and Simulation; Algorithm design; Criss-cross algorithm; Extreme point; Algorithm; Gradient method; Mathematics; European Journal of Operational Research

Least-Norm Regularization For Weak Two-Level Optimization Problems

1992

In this paper, we consider a regularization for weak two-level optimization problems by adapting the method presented by Solohovic (1970). Existence and approximation results are given in the case where the constraints of the lower-level problems are described by a multifunction. Convergence results for the least-norm regularization under perturbations are also presented.
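
A finite-dimensional analogue of least-norm selection (illustrative only, not the paper's two-level setting): among all solutions of an underdetermined linear system Ax = b, pick the one of minimal Euclidean norm, given in closed form by x* = Aᵀ(AAᵀ)⁻¹b.

```python
# Single equation x0 + x1 = 2: its solutions form a line, and the
# least-norm one is (1, 1).  Here A A^T is a 1x1 matrix, so the
# closed form reduces to scalar arithmetic.

A = [[1.0, 1.0]]
b = [2.0]

AAt = sum(A[0][j] * A[0][j] for j in range(2))    # A A^T  (scalar here)
lam = b[0] / AAt                                  # (A A^T)^{-1} b
x_star = [A[0][j] * lam for j in range(2)]        # x* = A^T lam
```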

Mathematical optimization; Optimization problem; Norm (mathematics); Proximal gradient methods for learning; Regularization perspectives on support vector machines; Backus–Gilbert method; Regularization (mathematics); Mathematics

Gradient-based shape optimisation of ultra-wideband antennas parameterised using splines

2010

Methodology enabling the gradient-based optimisation of antennas parameterised using B-splines is presented. Use of the spline parametrisation allows versatile new shapes to be obtained, while the geometry can be represented with a small set of design variables. Moreover, good control over admissible geometries is retained. Advantages of gradient-based optimisation methods are quick convergence and the guarantee that the obtained design is a local optimum. The focus of this study is to present techniques that enable the computation of exact gradients of the discrete problem, even though the complexity of the geometries does not permit establishing analytical expressions for the…
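
A minimal picture of what a spline-parameterised shape gradient looks like (not the authors' exact-gradient machinery): a quadratic Bézier segment, a special case of a B-spline, whose length is differentiated with respect to one control-point coordinate by finite differences, the kind of sensitivity a gradient-based shape optimiser would consume. The curve, the functional, and the step size are all invented for illustration.

```python
# Quadratic Bezier: B(t) = (1-t)^2 P0 + 2 t (1-t) P1 + t^2 P2

def bezier(P, t):
    u = 1 - t
    return tuple(u*u*P[0][k] + 2*t*u*P[1][k] + t*t*P[2][k] for k in range(2))

def length(P, n=1000):
    # polyline approximation of arc length
    pts = [bezier(P, i / n) for i in range(n + 1)]
    return sum(((a[0]-c[0])**2 + (a[1]-c[1])**2) ** 0.5
               for a, c in zip(pts, pts[1:]))

P = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]   # an arch from (0,0) to (1,0)
L0 = length(P)

# finite-difference sensitivity of the length to the middle control
# point's height -- raising P1 stretches the arch, so this is positive
h = 1e-6
P_up = [P[0], (P[1][0], P[1][1] + h), P[2]]
dL_dy1 = (length(P_up) - L0) / h
```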

Mathematical optimization; Spline (mathematics); Local optimum; Computer simulation; Frequency band; Computation; B-spline; Electrical and Electronic Engineering; Algorithm; Gradient method; Small set; Mathematics; IET Microwaves, Antennas & Propagation

Higher integrability and stability of (p,q)-quasiminimizers

2023

Using purely variational methods, we prove local and global higher integrability results for upper gradients of quasiminimizers of a $(p,q)$-Dirichlet integral with fixed boundary data, assuming it belongs to a slightly better Newtonian space. We also obtain a stability property with respect to the varying exponents $p$ and $q$. The setting is a doubling metric measure space supporting a Poincaré inequality.
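
For orientation, the central notion here is standard (this is the textbook definition, not something specific to this paper): a Borel function $g \ge 0$ is an upper gradient of $u$ if, for every rectifiable curve $\gamma$ joining points $x$ and $y$,

```latex
|u(x) - u(y)| \;\le\; \int_\gamma g \, ds ,
```

and a quasiminimizer minimizes the corresponding Dirichlet-type energy only up to a fixed multiplicative constant $K \ge 1$ against competitors with the same boundary values.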

Mathematics - Analysis of PDEs; Applied Mathematics; FOS: Mathematics; 31E05, 30L99, 46E35; Analysis; Analysis of PDEs (math.AP); (p,q)-Laplace operator; Measure metric spaces; Minimal p-weak upper gradient; Minimizer

Slopes of Kantorovich potentials and existence of optimal transport maps in metric measure spaces

2014

We study optimal transportation with the quadratic cost function in geodesic metric spaces satisfying suitable non-branching assumptions. We introduce and study the notions of slope along curves and along geodesics and we apply the latter to prove suitable generalizations of Brenier's theorem of existence of optimal maps.
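
A discrete toy of the problem the paper studies in geodesic metric spaces (illustrative only): optimal transport with quadratic cost between two uniform three-point measures on the real line reduces to a minimum-cost assignment, which brute force over permutations finds immediately, and which is monotone, the one-dimensional analogue of an optimal map.

```python
from itertools import permutations

# Transport the uniform measure on src onto the uniform measure on dst
# with quadratic cost; each permutation is a candidate transport map.

src = [0.0, 1.0, 2.0]
dst = [0.5, 1.5, 2.5]

def cost(perm):
    return sum((s - dst[j]) ** 2 for s, j in zip(src, perm))

best = min(permutations(range(3)), key=cost)
# With convex cost on the line the optimal map is the monotone one, i -> i.
```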

Mathematics - Differential Geometry; Pure mathematics; Geodesic; Applied Mathematics; Injective metric space; non-branching; Mathematical analysis; 49Q20, 53C23; Metric Geometry (math.MG); Measure (mathematics); geodesic metric space; Convex metric space; Intrinsic metric; Metric space; Mathematics - Metric Geometry; Differential Geometry (math.DG); Metric (mathematics); FOS: Mathematics; upper gradient; Metric map; optimal transportation; Mathematics

Tensorization of quasi-Hilbertian Sobolev spaces

2022

The tensorization problem for Sobolev spaces asks for a characterization of how the Sobolev space on a product metric measure space $X\times Y$ can be determined from its factors. We show that two natural descriptions of the Sobolev space from the literature coincide, $W^{1,2}(X\times Y)=J^{1,2}(X,Y)$, thus settling the tensorization problem for Sobolev spaces in the case $p=2$, when $X$ and $Y$ are infinitesimally quasi-Hilbertian, i.e. the Sobolev space $W^{1,2}$ admits an equivalent renorming by a Dirichlet form. This class includes in particular metric measure spaces $X,Y$ of finite Hausdorff dimension as well as infinitesimally Hilbertian spaces. More generally for $p\in (1,\infty)$ we…
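
As a rough Euclidean heuristic for what tensorization asserts (the smooth model case, not the paper's metric setting): for $f$ on a product, the squared gradient splits across the factors,

```latex
|\nabla f|^2(x,y) \;=\; |\nabla_x f(\cdot\,,y)|^2(x) \;+\; |\nabla_y f(x,\cdot\,)|^2(y),
```

and $J^{1,2}(X,Y)$ encodes this splitting in terms of the minimal weak upper gradients in each factor, so the equality $W^{1,2}(X\times Y)=J^{1,2}(X,Y)$ says the genuine Sobolev space on the product sees no extra energy beyond the two factors.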

Mathematics - Differential Geometry; metric measure spaces; Dirichlet forms; minimal upper gradient; Functional Analysis (math.FA); Mathematics - Functional Analysis; tensorization; 46E36 (Primary), 31C25 (Secondary); Differential Geometry (math.DG); Sobolev spaces; FOS: Mathematics; analysis on metric spaces; potential theory; functional analysis