Search results for "convergence"
Showing 10 of 655 documents
Algorithms for Rational Discrete Least Squares Approximation Part I: Unconstrained Optimization
1976
In this paper a modification of L. Wittmeyer’s method ([1], [14]) for rational discrete least squares approximation is given which corrects its tendency to converge, in general, to a non-optimal point. The modification requires only very little additional computing effort. It is analysed thoroughly with respect to its convergence conditions and its numerical properties. A suitable implementation is shown to be benign in the sense of F. L. Bauer [2]. The algorithm has proven successful even in adverse situations.
Direct Numerical Methods for Optimal Control Problems
2003
Development of interior point methods for linear and quadratic programming problems occurred during the 1990s. Because of their simplicity and their convergence properties, interior point methods are attractive solvers for such problems. Moreover, extensions have been made to more general convex programming problems.
Dealing with uncertainty in consensus protocols
2009
Recent results on consensus protocols for networks are presented. The basic tools and the main contributions available in the literature are considered, together with some of the related challenging aspects: estimation in networks and how to deal with disturbances. Motivated by applications to sensor, peer-to-peer, and ad hoc networks, many papers have considered the problem of estimation in a consensus fashion. Here, the Unknown But Bounded (UBB) noise affecting the network is addressed in detail. Because of the presence of UBB disturbances, convergence to equilibria with all equal components is, in general, not possible. The solution of the e-consensus problem, where the stat…
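For context, the disturbance-free baseline behind such protocols is the linear averaging update. The following minimal Python sketch (an illustration only, not the protocol studied in the paper) shows node states converging to a common value; exactly this agreement on equal components is what UBB noise would in general prevent:

```python
# Minimal sketch of a noise-free linear consensus protocol: each node
# synchronously replaces its state with the average of its own value and its
# neighbours' values. Graph and weights here are illustrative assumptions.

def consensus_step(values, neighbors):
    """One synchronous averaging step; neighbors[i] lists node i's neighbours."""
    return [(values[i] + sum(values[j] for j in neighbors[i]))
            / (1 + len(neighbors[i]))
            for i in range(len(values))]

# Path graph 0 - 1 - 2 with initial states 0, 3, 6.
nbrs = {0: [1], 1: [0, 2], 2: [1]}
x = [0.0, 3.0, 6.0]
for _ in range(100):
    x = consensus_step(x, nbrs)
# By symmetry of this example, all states approach the common value 3.0.
print(x)
```

With a bounded disturbance added to each update, the states would instead only reach a neighbourhood of agreement, which motivates the e-consensus formulation mentioned in the abstract.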
The solution of a ‘fixed-target’ model by an approach of system analysis
1974
Abstract A general approach for economic systems is combined with a concrete ‘fixed-target’ model. The consideration of convergence leads, under conditions of a stable solution and two targets, to the result that five numerical restrictions must be recognized when treating the two instruments. Generalizations of the discussed illustrative model are possible.
A Stochastic Search on the Line-Based Solution to Discretized Estimation
2012
Published version of a chapter in the book: Advanced Research in Applied Artificial Intelligence. Also available from the publisher at: http://dx.doi.org/10.1007/978-3-642-31087-4_77 Recently, Oommen and Rueda [11] presented a strategy by which the parameters of a binomial/multinomial distribution can be estimated when the underlying distribution is nonstationary. The method has been referred to as the Stochastic Learning Weak Estimator (SLWE), and is based on the principles of continuous stochastic Learning Automata (LA). In this paper, we consider a new family of stochastic discretized weak estimators pertinent to tracking time-varying binomial distributions. As opposed to the SLWE, our p…
Statistical criteria for early-stopping of support vector machines
2007
This paper proposes the use of statistical criteria for early-stopping of support vector machines, for both regression and classification problems. The method stops the minimization of the primal functional when moments of the error signal (up to fourth order) become stationary, rather than according to a tolerance threshold on primal convergence itself. This simple strategy reduces computational effort, and no significant differences are observed in terms of performance or sparsity.
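For context, the stationarity idea above can be sketched in a few lines of Python. This is a hypothetical illustration (the tolerance and the plain raw-moment comparison are assumptions; the paper's actual statistical tests are more elaborate):

```python
# Hypothetical moment-based early-stopping check: stop the optimization when
# the first four moments of the error signal no longer change appreciably
# between successive iterations.

def moments(errors):
    """First four raw moments of the error signal."""
    n = len(errors)
    return tuple(sum(e ** k for e in errors) / n for k in (1, 2, 3, 4))

def should_stop(prev_errors, curr_errors, tol=1e-3):
    """True when all four moments are (approximately) stationary."""
    return all(abs(a - b) <= tol
               for a, b in zip(moments(prev_errors), moments(curr_errors)))

# Example: the error signal barely changes between two iterations -> stop.
prev = [0.10, -0.05, 0.02, -0.08]
curr = [0.10, -0.05, 0.02, -0.08]
print(should_stop(prev, curr))  # True
```

Such a check replaces a threshold on the primal objective itself with a threshold on the statistics of the residuals, which is what allows the optimization to halt earlier without degrading the fitted model.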
Fast Convergence of Neural Networks by Application of a New Min-Max Algorithm
1992
Abstract The paper presents a new application of the min-max method, an original algorithm previously applied successfully in other areas, based on a combination of the quasi-Newton and steepest descent methods, in order to find the weights minimising the error function of a feed-forward neural network. Preliminary results, obtained by applying the proposed method to a simple 2-2-1 architecture on small Boolean learning problems, are very promising.
A New Min-Max Optimisation Approach for Fast Learning Convergence of Feed-Forward Neural Networks
1993
One of the most critical aspects for a wide use of neural networks on real-world problems is the learning process, which is known to be computationally expensive and time-consuming.
On Using a Hierarchy of Twofold Resource Allocation Automata to Solve Stochastic Nonlinear Resource Allocation Problems
2007
Recent trends in AI attempt to solve difficult NP-hard problems using intelligent techniques so as to obtain approximately optimal solutions. In this paper, we consider a family of such problems which fall under the general umbrella of "knapsack-like" problems, and demonstrate how we can solve all of them quickly and accurately using a hierarchy of Learning Automata (LA). In a multitude of real-world situations, resources must be allocated based on incomplete and noisy information, which often renders traditional resource allocation techniques ineffective. This paper addresses one such class of problems, namely, Stochastic Non-linear Fractional Knapsack Problems. We first present a completely …
The convergence of the perturbed Newton method and its application for ill-conditioned problems
2011
Abstract Iterative methods such as Newton’s behave poorly when solving ill-conditioned problems: they become slow (first order) and lose accuracy. In this paper we analyze in depth the convergence of a modified Newton method, which we call perturbed Newton, in order to overcome the usual disadvantages that Newton’s method presents. The basic point of this method is its dependence on a parameter that affords a degree of freedom introducing regularization. Choices for that parameter are proposed. The theoretical analysis is illustrated through examples.
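For context, a one-dimensional sketch of the regularization idea (a simple additive parameter in the denominator, which is an assumption of this illustration; the paper's parameter choices are more refined):

```python
# Hypothetical 1-D "perturbed" Newton iteration: a regularization parameter
# lam keeps the derivative in the denominator away from zero, stabilizing the
# iteration on ill-conditioned problems where f'(x) vanishes near the root.

def perturbed_newton(f, df, x0, lam=1e-6, tol=1e-10, max_iter=500):
    x = x0
    for _ in range(max_iter):
        step = f(x) / (df(x) + lam)  # lam regularizes a near-singular derivative
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x**2 has a double root at 0, where plain Newton is only first order
# and divides by a derivative that tends to zero.
root = perturbed_newton(lambda x: x * x, lambda x: 2 * x, x0=1.0)
print(abs(root) < 1e-4)  # the iterate settles close to the double root
```

On well-conditioned problems the perturbation is negligible for small lam, so the sketch recovers ordinary Newton behaviour; its only purpose here is to show where a regularizing parameter enters the update.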