ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
Electronic Resource
    Springer
Journal of optimization theory and applications 16 (1975), pp. 429-445
    ISSN: 1573-2878
    Keywords: Mathematical programming ; conjugate-gradient methods ; variable-metric methods ; linear equations ; numerical methods ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
Notes: Abstract A computationally stable method for the general solution of a system of linear equations is given. The system is A^T x − B = 0, where the n-vector x is unknown and the n×q matrix A and the q-vector B are known. It is assumed that the matrix A^T and the augmented matrix [A^T, B] have the same rank m, where m ≤ n, so that the system is consistent and solvable. When m < n, the method yields the minimum-modulus solution x_m and a symmetric n×n matrix H_m of rank n−m, so that x = x_m + H_m y satisfies the system for every n-vector y. When m = n, the matrix H_m reduces to zero and x_m becomes the unique solution of the system. The method is also suitable for the solution of a determined system of n linear equations. When the n×n coefficient matrix is ill-conditioned, the method can produce a good solution, while the commonly used elimination method fails. (See the first code sketch following the result list.)
    Type of Medium: Electronic Resource
  • 2
Electronic Resource
    Springer
Journal of optimization theory and applications 13 (1974), pp. 519-537
    ISSN: 1573-2878
    Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
Notes: Abstract In this paper, the method of dual matrices for the minimization of functions is introduced. The method, which is developed on the model of a quadratic function, is characterized by two matrices at each iteration. One matrix is such that a linearly independent set of directions can be generated, regardless of the stepsize employed. The other matrix is such that, at the point where the first matrix fails to yield a gradient linearly independent of all the previous gradients, it generates a displacement leading to the minimal point. Thus, the one-dimensional search is bypassed. For a quadratic function, it is proved that the minimal point is obtained in at most n + 1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n + 2. Three algorithms of the method are presented. A reverse algorithm, which permits the use of only one matrix, is also given. Considerations pertaining to the applications of this method to the minimization of a quadratic function and a nonquadratic function are given. It is believed that, since the one-dimensional search can be bypassed, a considerable amount of computational saving can be achieved. (See the second code sketch following the result list.)
    Type of Medium: Electronic Resource
  • 3
Electronic Resource
    Springer
Journal of optimization theory and applications 13 (1974), pp. 620-634
    ISSN: 1573-2878
    Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
Notes: Abstract In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n + 1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n + 2. In this paper, the above-mentioned algorithms are tested numerically on five nonquadratic functions. In order to investigate the effect of the stepsize on the performance of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes. (See the third code sketch following the result list.)
    Type of Medium: Electronic Resource
  • 4
Electronic Resource
    Springer
Journal of optimization theory and applications 16 (1975), pp. 447-485
    ISSN: 1573-2878
    Keywords: Nonlinear programming ; mathematical programming ; quadratically convergent algorithms ; conjugate-gradient methods ; variable-metric methods ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
Notes: Abstract The problem of minimizing a function f(x) subject to the constraint ϕ(x) = 0 is considered. Here, f is a scalar, x is an n-vector, and ϕ is an m-vector, where m < n. A general quadratically convergent algorithm is presented. The conjugate-gradient algorithm and the variable-metric algorithms for constrained function minimization can be obtained as particular cases of the general algorithm. It is shown that, for a quadratic function subject to a linear constraint, all the particular algorithms behave identically if the one-dimensional search for the stepsize is exact. Specifically, they all produce the same sequence of points and lead to the constrained minimal point in no more than n − r descent steps, where r is the number of linearly independent constraints. The algorithms are then modified so that they can also be employed for a nonquadratic function subject to a nonlinear constraint. Some particular algorithms are tested through several numerical examples. (See the fourth code sketch following the result list.)
    Type of Medium: Electronic Resource
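
The abstract of entry 1 describes the general solution of a consistent system A^T x = B as x = x_m + H_m y, with x_m the minimum-modulus solution and H_m a symmetric matrix of rank n − m. A minimal sketch of that solution structure, using NumPy's SVD-based pseudoinverse rather than the paper's own computationally stable method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 5, 3                      # generically rank m = q = 3 < n: underdetermined system
A = rng.standard_normal((n, q))  # n x q matrix A; the system is A^T x = B
x_true = rng.standard_normal(n)
B = A.T @ x_true                 # consistent q-vector B by construction

pinv_AT = np.linalg.pinv(A.T)          # SVD-based pseudoinverse, n x q
x_m = pinv_AT @ B                      # minimum-modulus (minimum-norm) solution
H_m = np.eye(n) - pinv_AT @ A.T        # symmetric n x n projector of rank n - m

y = rng.standard_normal(n)
x = x_m + H_m @ y                      # every such x solves the system
assert np.allclose(A.T @ x, B)
assert np.linalg.norm(x_m) <= np.linalg.norm(x) + 1e-12   # x_m has minimal norm
```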
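Entry 2's central claim is finite termination on a quadratic without any one-dimensional search. The dual-matrix updates themselves are defined in the paper and are not reproduced here; as a point of reference, the classical conjugate-gradient method below exhibits the comparable finite-termination property (at most n steps on an n-variable quadratic, with an analytically computed stepsize):

```python
import numpy as np

def conjugate_gradient(Q, b, x0, tol=1e-12):
    """Minimize f(x) = 0.5 x^T Q x - b^T x for symmetric positive definite Q."""
    x, g = x0.copy(), Q @ x0 - b       # g is the gradient of the quadratic
    d = -g
    for k in range(len(b)):            # at most n steps in exact arithmetic
        alpha = (g @ g) / (d @ Q @ d)  # exact stepsize, no 1-D search needed
        x = x + alpha * d
        g_new = g + alpha * (Q @ d)
        if np.linalg.norm(g_new) < tol:
            return x, k + 1
        d = -g_new + ((g_new @ g_new) / (g @ g)) * d   # conjugate direction update
        g = g_new
    return x, len(b)

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
Q = M @ M.T + 6 * np.eye(6)            # symmetric positive definite
b = rng.standard_normal(6)
x, iters = conjugate_gradient(Q, b, np.zeros(6))
assert np.allclose(Q @ x, b)           # the minimizer satisfies Qx = b
print(f"converged in {iters} <= n = 6 iterations")
```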
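Entry 3 reports experiments with small-step and large-step choices of the stepsize factor. As a toy illustration only (fixed-step gradient descent on a quadratic, not the paper's four schemes or its five nonquadratic test functions), the effect of the stepsize factor on convergence can be seen as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
Q = M @ M.T + np.eye(4)               # symmetric positive definite test quadratic
b = rng.standard_normal(4)
lam_max = np.linalg.eigvalsh(Q)[-1]   # largest curvature of the quadratic

for factor in (0.1, 0.5, 1.0, 1.9):   # small-step and large-step choices
    alpha = factor / lam_max          # fixed-step descent is stable for factor < 2
    x = np.zeros(4)
    for _ in range(2000):
        x = x - alpha * (Q @ x - b)   # gradient step with fixed stepsize
    print(f"factor {factor:4.1f}: final residual {np.linalg.norm(Q @ x - b):.1e}")
```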
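Entry 4 treats minimizing f(x) subject to ϕ(x) = 0. The sketch below solves a small instance (n = 2 variables, r = m = 1 linearly independent constraint) with SciPy's SLSQP solver as a modern stand-in; the paper's conjugate-gradient and variable-metric algorithms are not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # scalar objective f(x)
phi = lambda x: x[0] + x[1] - 1.0                      # equality constraint phi(x) = 0

res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "eq", "fun": phi}])
print(res.x)   # approximately [0., 1.], the constrained minimal point
```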