Search results for "gradient"
Showing 10 of 725 documents
Time scales of adaptive behavior and motor learning in the presence of stochastic perturbations.
2009
In this paper, the major assumptions of influential approaches to the structure of variability in practice conditions are discussed from the perspective of a generalized evolving attractor landscape model of motor learning. The efficacy of the practice-condition effects is considered in relation to the theoretical influence of stochastic perturbations in models of gradient-descent learning over multi-dimensional landscapes. A model of motor learning combining simulated annealing and stochastic resonance phenomena is presented against the background of the different time scales of adaptation and learning processes. The practical consequences of the model's assumptions for the structure of pract…
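As a rough illustration of the kind of mechanism this abstract describes, the following is a minimal sketch of gradient descent on a multi-well landscape with an annealed stochastic perturbation. The landscape, annealing schedule, and all names here are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def landscape(x):
    # A simple two-well potential with minima near (+1, +1) and (-1, -1)
    # (an assumed stand-in for a multi-dimensional attractor landscape).
    return np.sum((x ** 2 - 1.0) ** 2)

def grad(x):
    # Analytic gradient of the potential above.
    return 4.0 * x * (x ** 2 - 1.0)

def annealed_descent(x0, steps=2000, lr=0.01, t0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(steps):
        temperature = t0 / (1.0 + k)                      # annealing schedule (assumed)
        noise = rng.normal(0.0, np.sqrt(temperature), size=x.shape)
        x -= lr * (grad(x) + noise)                       # noisy gradient step
    return x

x_final = annealed_descent([0.2, -0.1])
# The iterate settles into one of the wells, so landscape(x_final) is near 0.
```

The stochastic term lets early iterates escape shallow basins, while the decaying temperature lets the trajectory settle, which is the simulated-annealing intuition the abstract appeals to.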
Direct Numerical Methods for Optimal Control Problems
2003
Development of interior point methods for linear and quadratic programming problems occurred during the 1990s. Because of their simplicity and their convergence properties, interior point methods are attractive solvers for such problems. Moreover, extensions have been made to more general convex programming problems.
A variational inequality approach to constrained control problems for parabolic equations
1988
A distributed optimal control problem for parabolic systems with state constraints is considered. The problem is transformed into an unconstrained control problem for systems governed by parabolic variational inequalities. The new formulation enables the efficient use of a standard gradient method for numerically solving the problem in question. A comparison with a standard penalty method, as well as numerical examples, is given.
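To make the contrast between the two approaches the abstract compares concrete, here is a toy finite-dimensional analogue (an assumption for illustration, not the paper's PDE setting): a quadratic objective with an upper-bound state constraint, handled by a quadratic penalty and a plain gradient method.

```python
import numpy as np

def penalised_gradient(b, c, mu=100.0, lr=1e-3, steps=5000):
    # Minimise ||x - b||^2 subject to x <= c by adding the quadratic
    # penalty mu * sum(max(x - c, 0)^2) and running steepest descent.
    # mu, lr, and steps are illustrative choices, not tuned values.
    x = np.zeros_like(b)
    for _ in range(steps):
        g = 2.0 * (x - b) + 2.0 * mu * np.maximum(x - c, 0.0)
        x -= lr * g
    return x

b = np.array([2.0, -1.0])   # unconstrained minimiser
c = np.array([1.0, 1.0])    # upper bounds
x = penalised_gradient(b, c)
# x[0] is pushed to the constraint c[0] = 1 (slightly above it, since the
# penalty is finite); x[1] = b[1] = -1 is feasible and stays unconstrained.
```

A finite penalty weight leaves a small constraint violation, which is one reason a reformulation admitting a standard gradient method without penalties, as in the abstract, can be attractive.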
Fast Convergence of Neural Networks by Application of a New Min-Max Algorithm
1992
The paper presents a new application of the min-max method, an algorithm previously applied successfully in other areas, based on a combination of the quasi-Newton and steepest-descent methods, to find the weights minimising the error function of a feed-forward neural network. Preliminary results, obtained by applying the proposed method to a simple 2-2-1 architecture on small Boolean learning problems, are very promising.
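For context, the setting the abstract describes can be sketched as follows: a 2-2-1 feed-forward network trained on a small Boolean problem (XOR) by plain steepest descent on the squared error. This is a sketch of the problem setup only, not the paper's min-max quasi-Newton combination; the learning rate and iteration count are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# 2-2-1 architecture: two inputs, two hidden units, one output unit.
W1 = rng.normal(0.0, 1.0, (2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(0.0, 1.0, (2, 1)); b2 = np.zeros((1, 1))

lr, losses = 0.5, []
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                      # hidden layer
    out = sigmoid(h @ W2 + b2)                    # output unit
    losses.append(float(np.mean((out - y) ** 2)))
    d2 = (out - y) * out * (1 - out)              # backprop through output
    d1 = (d2 @ W2.T) * h * (1 - h)                # backprop through hidden
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0, keepdims=True)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0, keepdims=True)
# The recorded error decreases over training.
```

Pure steepest descent on this error surface converges slowly and can stall on plateaus, which is exactly the motivation for hybrid schemes that mix in quasi-Newton curvature information, as the paper proposes.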
A purification algorithm for semi-infinite programming
1992
In this paper we present a purification algorithm for semi-infinite linear programming. Starting with a feasible point, the algorithm either finds an improved extreme point or concludes with the unboundedness of the problem. The method is based on the solution of a sequence of linear programming problems. The study of some recession conditions has allowed us to establish a weak assumption for the finite convergence of this algorithm. Numerical results illustrating the method are given.
Least-Norm Regularization For Weak Two-Level Optimization Problems
1992
In this paper, we consider a regularization for weak two-level optimization problems by adapting the method of Solohovic (1970). Existence and approximation results are given in the case in which the constraints of the lower-level problems are described by a multifunction. Convergence results for the least-norm regularization under perturbations are also presented.
Gradient-based shape optimisation of ultra-wideband antennas parameterised using splines
2010
A methodology enabling the gradient-based optimisation of antennas parameterised using B-splines is presented. Use of the spline parametrisation allows versatile new shapes to be obtained, while the geometry can still be represented with a small set of design variables. Moreover, good control over admissible geometries is retained. The advantages of gradient-based optimisation methods are rapid convergence and the guarantee that the obtained design is a local optimum. The focus of this study is to present techniques that enable the computation of exact gradients of the discrete problem, even though the complexity of the geometries does not permit establishing analytical expressions for the…
Higher integrability and stability of (p,q)-quasiminimizers
2023
Using purely variational methods, we prove local and global higher integrability results for upper gradients of quasiminimizers of a $(p,q)$-Dirichlet integral with fixed boundary data, assuming it belongs to a slightly better Newtonian space. We also obtain a stability property with respect to the varying exponents $p$ and $q$. The setting is a doubling metric measure space supporting a Poincaré inequality.
Slopes of Kantorovich potentials and existence of optimal transport maps in metric measure spaces
2014
We study optimal transportation with the quadratic cost function in geodesic metric spaces satisfying suitable non-branching assumptions. We introduce and study the notions of slope along curves and along geodesics, and we apply the latter to prove suitable generalizations of Brenier's theorem on the existence of optimal maps.
Tensorization of quasi-Hilbertian Sobolev spaces
2022
The tensorization problem for Sobolev spaces asks for a characterization of how the Sobolev space on a product metric measure space $X\times Y$ can be determined from its factors. We show that two natural descriptions of the Sobolev space from the literature coincide, $W^{1,2}(X\times Y)=J^{1,2}(X,Y)$, thus settling the tensorization problem for Sobolev spaces in the case $p=2$, when $X$ and $Y$ are infinitesimally quasi-Hilbertian, i.e. the Sobolev space $W^{1,2}$ admits an equivalent renorming by a Dirichlet form. This class includes in particular metric measure spaces $X,Y$ of finite Hausdorff dimension as well as infinitesimally Hilbertian spaces. More generally for $p\in (1,\infty)$ we…