ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Collection: Articles (24)
  • Keyword: computing methods (24)
  • Year: 1975-1979 (17), 1970-1974 (7)
  • Topic: Mathematics (24); Architecture, Civil Engineering, Surveying
  • 1
    Electronic Resource
    Springer
    Journal of optimization theory and applications 13 (1974), S. 164-178 
    ISSN: 1573-2878
    Keywords: Two-point boundary-value problems ; calculus of variations ; numerical methods ; differential equations ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract In the parameter variation method, a scalar parameter k, k ∈ [0, 1], is introduced into the differential equations. The parameter k is inserted in such a way that, when k=0, the solution of the boundary-value problem is known or readily calculated and, when k=1, the problem is identical with the original problem. Thus, by deforming the solution step-by-step through k-space from k=0 to k=1, the original problem may be solved. These solutions then provide good starting values for any convergent, iterative scheme such as the Newton-Raphson method. The method is applied to the solution of problems with various types of boundary-value specifications and is further extended to take account of situations arising in the solution of problems from variational calculus (e.g., total elapsed time not specified, optimum control not a simple function of the variables).
    Type of Medium: Electronic Resource
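
A note on the parameter-variation idea above: the following sketch illustrates the general continuation strategy (embed a parameter k so that k=0 gives an easily solved problem and k=1 the original one, and reuse each solution as the Newton-Raphson starting guess at the next k). It is a minimal sketch on an invented root-finding system, not the paper's boundary-value formulation.

```python
# Minimal continuation (parameter variation) sketch, assuming a simple
# root-finding problem F(x) = 0 rather than the paper's boundary-value setting.
# G(x, k) = k*F(x) + (1 - k)*(x - x0) is trivial at k = 0 and equals the
# original problem at k = 1; each solution seeds Newton-Raphson at the next k.
import numpy as np

def F(x):                              # illustrative target system (not from the paper)
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def jac_F(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [np.exp(x[0]), 1.0]])

def newton(G, JG, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        dx = np.linalg.solve(JG(x), -G(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

x0 = np.array([1.0, 1.0])              # point where the k = 0 problem is solved trivially
x = x0.copy()
for k in np.linspace(0.0, 1.0, 11):    # deform step by step from k = 0 to k = 1
    G  = lambda x, k=k: k * F(x) + (1 - k) * (x - x0)
    JG = lambda x, k=k: k * jac_F(x) + (1 - k) * np.eye(2)
    x = newton(G, JG, x)               # previous solution is the next starting guess

print("solution of F(x) = 0:", x, "residual:", F(x))
```
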
  • 2
    Electronic Resource
    Springer
    Journal of optimization theory and applications 13 (1974), S. 553-581 
    ISSN: 1573-2878
    Keywords: Epsilon technique ; computing methods ; control theory ; penalty function methods ; optimal strategies
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A version of Balakrishnan's ε-technique is developed using an integrated description of the system equations. The associated necessary conditions for optimality of the integral ε-technique appear as integral equations and lead to computational algorithms which are largely independent of the penalty function characteristics of the ε-technique. The merits and shortcomings of this approach are discussed, alternative solution methods are proposed, and some computational results are presented.
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Journal of optimization theory and applications 14 (1974), S. 529-556 
    ISSN: 1573-2878
    Keywords: Calculus of variations ; optimal control ; computing methods ; numerical methods ; boundary-value problems ; modified quasilinearization algorithm ; nondifferential constraints
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. A modified quasilinearization algorithm is developed. Its main property is the descent property in the performance index R, the cumulative error in the constraints and the optimality conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of the scaling factor (or stepsize) α in the system of variations. The stepsize is determined by a one-dimensional search on the performance index R. Since the first variation δR is negative, the decrease in R is guaranteed if α is sufficiently small. Convergence to the solution is achieved when R becomes smaller than some preselected value. In order to start the algorithm, some nominal functions x(t), u(t), π and nominal multipliers λ(t), ρ(t), μ must be chosen. In a real problem, the selection of the nominal functions can be made on the basis of physical considerations. Concerning the nominal multipliers, no useful guidelines have been available thus far. In this paper, an auxiliary minimization algorithm for selecting the multipliers optimally is presented: the performance index R is minimized with respect to λ(t), ρ(t), μ. Since the functional R is quadratically dependent on the multipliers, the resulting variational problem is governed by optimality conditions which are linear and, therefore, can be solved without difficulty. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt=1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 3 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0 ⩽ t ⩽ 1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) there are problems involving state equality constraints which can be reduced to the present scheme through suitable transformations, and (iii) there are some problems involving inequality constraints which can be reduced to the present scheme through the introduction of auxiliary variables. Numerical examples are presented for the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Journal of optimization theory and applications 15 (1975), S. 503-508 
    ISSN: 1573-2878
    Keywords: Drazin inverse ; primitive matrix ; nonnegative matrix ; computing methods ; linear systems
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Some properties of the Drazin inverse of a square matrix are derived which, together with elementary divisor theory, are used to investigate a spectral inequality which relates the maximal eigenvalues of two nonnegative indecomposable matrices.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Journal of optimization theory and applications 17 (1975), S. 361-430 
    ISSN: 1573-2878
    Keywords: Survey papers ; gradient methods ; numerical methods ; computing methods ; calculus of variations ; optimal control ; gradient-restoration algorithms ; boundary-value problems ; bounded control problems ; bounded state problems ; nondifferential constraints
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper summarizes recent advances in the area of gradient algorithms for optimal control problems, with particular emphasis on the work performed by the staff of the Aero-Astronautics Group of Rice University. The following basic problem is considered: minimize a functional I which depends on the state x(t), the control u(t), and the parameter π. Here, I is a scalar, x an n-vector, u an m-vector, and π a p-vector. At the initial point, the state is prescribed. At the final point, the state x and the parameter π are required to satisfy q scalar relations. Along the interval of integration, the state, the control, and the parameter are required to satisfy n scalar differential equations. First, the sequential gradient-restoration algorithm and the combined gradient-restoration algorithm are presented. The descent properties of these algorithms are studied, and schemes to determine the optimum stepsize are discussed. Both of the above algorithms require the solution of a linear, two-point boundary-value problem at each iteration. Hence, a discussion of integration techniques is given. Next, a family of gradient-restoration algorithms is introduced. Not only does this family include the previous two algorithms as particular cases, but it allows one to generate several additional algorithms, namely, those with alternate restoration and optional restoration. Then, two modifications of the sequential gradient-restoration algorithm are presented in an effort to accelerate terminal convergence. In the first modification, the quadratic constraint imposed on the variations of the control is modified by the inclusion of a positive-definite weighting matrix (the matrix of the second derivatives of the Hamiltonian with respect to the control). The second modification is a conjugate-gradient extension of the sequential gradient-restoration algorithm. Next, the addition of a nondifferential constraint, to be satisfied everywhere along the interval of integration, is considered. In theory, this seems to be only a minor modification of the basic problem. In practice, the change is considerable in that it enlarges dramatically the number and variety of problems of optimal control which can be treated by gradient-restoration algorithms. Indeed, by suitable transformations, almost every known problem of optimal control theory can be brought into this scheme. This statement applies, for instance, to the following situations: (i) problems with control equality constraints, (ii) problems with state equality constraints, (iii) problems with equality constraints on the time rate of change of the state, (iv) problems with control inequality constraints, (v) problems with state inequality constraints, and (vi) problems with inequality constraints on the time rate of change of the state. Finally, the simultaneous presence of nondifferential constraints and multiple subarcs is considered. The possibility that the analytical form of the functions under consideration might change from one subarc to another is taken into account. The resulting formulation is particularly relevant to those problems of optimal control involving bounds on the control or the state or the time derivative of the state. For these problems, one might be unwilling to accept the simplistic view of a continuous extremal arc. Indeed, one might want to take the more realistic view of an extremal arc composed of several subarcs, some internal to the boundary being considered and some lying on the boundary.
The paper ends with a section dealing with transformation techniques. This section illustrates several analytical devices by means of which a great number of problems of optimal control can be reduced to one of the formulations presented here. In particular, the following topics are treated: (i) time normalization, (ii) free initial state, (iii) bounded control, and (iv) bounded state.
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Journal of optimization theory and applications 21 (1977), S. 235-239 
    ISSN: 1573-2878
    Keywords: Nonlinear programming ; computing methods ; Lagrange multiplier estimates ; bounded variables ; large-scale problems
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Some recent methods for solving nonlinear programming problems make use of estimates of the Lagrange multipliers. These estimates are usually calculated by solving a system of t linear equations, where t is the number of active constraints. It is shown that, when a large proportion of the active constraints consists of simple upper or lower bounds on the variables, then computational effort can be saved by means of a reorganization of this linear system.
    Type of Medium: Electronic Resource
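
To make the saving described in the abstract above concrete, the sketch below compares a naive least-squares Lagrange multiplier estimate with a reorganized one: active simple bounds have signed unit gradients, so their multipliers can be read off a residual once the multipliers of the remaining general constraints are found on the free variables only. The data, the sign convention, and the least-squares formulation are illustrative assumptions, not the paper's exact reorganization.

```python
# Hedged sketch: least-squares Lagrange multiplier estimates when many active
# constraints are simple bounds.  Instead of one system over all t active
# constraints, solve a smaller system over the free variables for the general
# multipliers and recover the bound multipliers directly.  Illustrative data.
import numpy as np

n = 6
g = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])       # objective gradient
A_gen = np.array([[1.0, 0.0, 2.0, 0.0, 1.0, 0.0],    # gradients of 2 general active constraints
                  [0.0, 1.0, 0.0, 0.0, 0.0, 1.0]])
bound_idx  = np.array([3, 4])                         # variables at an active bound
bound_sign = np.array([+1.0, -1.0])                   # +1 upper / -1 lower bound (a convention)
free_idx   = np.setdiff1d(np.arange(n), bound_idx)

# Naive estimate: stack all active-constraint gradients and solve one least-squares system.
E = np.zeros((len(bound_idx), n))
E[np.arange(len(bound_idx)), bound_idx] = bound_sign
A_all = np.vstack([A_gen, E])
lam_all, *_ = np.linalg.lstsq(A_all.T, g, rcond=None)

# Reorganized estimate: bound rows only touch bounded components, so fit the
# general multipliers on the free components and read the bound multipliers
# off the remaining residual.
lam_gen, *_ = np.linalg.lstsq(A_gen[:, free_idx].T, g[free_idx], rcond=None)
resid = g - A_gen.T @ lam_gen
lam_bnd = bound_sign * resid[bound_idx]

print(lam_all)
print(np.concatenate([lam_gen, lam_bnd]))             # the two estimates agree
```
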
  • 7
    Electronic Resource
    Springer
    Journal of optimization theory and applications 26 (1978), S. 205-242 
    ISSN: 1573-2878
    Keywords: Geometric programming ; algorithms ; comparison of algorithms ; test problems ; numerical methods ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Numerical results are presented of extensive tests involving five posynomial and twelve signomial programming codes. The set of test problems includes problems with a pure mathematical meaning as well as problems originating from different fields of engineering. The algorithms are compared on the basis of CPU time, number of failures, preparation time, and in-core storage.
    Type of Medium: Electronic Resource
  • 8
    Electronic Resource
    Springer
    Journal of optimization theory and applications 26 (1978), S. 395-425 
    ISSN: 1573-2878
    Keywords: Optimal control ; numerical methods ; computing methods ; gradient methods ; gradient-restoration algorithms ; sequential gradient-restoration algorithms ; general boundary conditions ; nondifferential constraints ; bounded control ; bounded state
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers the numerical solution of two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include nondifferential constraints to be satisfied everywhere along the interval of integration. Algorithms are developed for both Problem P1 and Problem P2. The approach taken is a sequence of two-phase cycles, composed of a gradient phase and a restoration phase. The gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations and is designed to force constraint satisfaction to a predetermined accuracy, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The principal property of both algorithms is that they produce a sequence of feasible suboptimal solutions: the functions obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the values of the functional I corresponding to any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, while the stepsize of the restoration phase is obtained by a one-dimensional search on the constraint error P. The gradient stepsize and the restoration stepsize are chosen so that the restoration phase preserves the descent property of the gradient phase. Therefore, the value of the functional I at the end of any complete gradient-restoration cycle is smaller than the value of the same functional at the beginning of that cycle. The algorithms presented here differ from those of Refs. 1 and 2, in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with Refs. 1 and 2, the present algorithms are capable of handling general final conditions; therefore, they are suited for the solution of optimal control problems with general boundary conditions. Their importance lies in the fact that many optimal control problems involve initial conditions of the type considered here. Six numerical examples are presented in order to illustrate the performance of the algorithms associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of these algorithms.
    Type of Medium: Electronic Resource
  • 9
    Electronic Resource
    Springer
    Journal of optimization theory and applications 28 (1979), S. 1-9 
    ISSN: 1573-2878
    Keywords: Nonlinear optimization ; conjugate-gradient methods ; numerical methods ; computing methods ; mathematical programming ; nonlinear programming
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Three variants of the classical conjugate-gradient method are presented. Two of these variants are based upon a nonlinear function of a quadratic form. A restarting procedure due to Powell, and based upon some earlier work of Beale, is discussed and incorporated into two of the variants. Results of applying the four algorithms to a set of benchmark problems are included, and some tentative conclusions about the relative merits of the four schemes are presented.
    Type of Medium: Electronic Resource
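
For readers unfamiliar with the restarting idea mentioned in the abstract above, the sketch below runs a Polak-Ribière nonlinear conjugate-gradient iteration with a restart test in the spirit of Powell's criterion (restart when successive gradients are far from orthogonal). The test function, line search, and constants are illustrative; the paper's variants based on a nonlinear function of a quadratic form are not reproduced.

```python
# Hedged sketch of a nonlinear conjugate-gradient method with restarts.
# The restart test (restart when |g_{k+1}.g_k| >= 0.2 ||g_{k+1}||^2) follows the
# spirit of Powell's criterion; everything else is illustrative.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1 - x[0]),
                      200.0 * (x[1] - x[0]**2)])

def backtracking(f, x, d, g, alpha=1.0, rho=0.5, c=1e-4, max_halvings=60):
    # simple Armijo backtracking line search
    for _ in range(max_halvings):
        if f(x + alpha * d) <= f(x) + c * alpha * (g @ d):
            break
        alpha *= rho
    return alpha

def cg_with_restart(f, grad, x, tol=1e-8, max_iter=2000):
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                            # safeguard: keep a descent direction
            d = -g
        alpha = backtracking(f, x, d, g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        if abs(g_new @ g) >= 0.2 * (g_new @ g_new):
            d = -g_new                            # restart with steepest descent
        else:
            beta = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere coefficient
            d = -g_new + max(beta, 0.0) * d
        x, g = x_new, g_new
    return x

print(cg_with_restart(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0])))
```
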
  • 10
    Electronic Resource
    Springer
    Journal of optimization theory and applications 28 (1979), S. 185-212 
    ISSN: 1573-2878
    Keywords: Optimal control ; numerical methods ; computing methods ; transformation techniques ; sequential gradient-restoration algorithm ; nondifferential constraints ; state inequality constraints ; linear state inequality constraints ; partially linear state inequality constraints
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers optimal control problems involving the minimization of a functional subject to differential constraints, terminal constraints, and a state inequality constraint. The state inequality constraint is of a special type, namely, it is linear in some or all of the components of the state vector. A transformation technique is introduced, by means of which the inequality-constrained problem is converted into an equality-constrained problem involving differential constraints, terminal constraints, and a control equality constraint. The transformation technique takes advantage of the partial linearity of the state inequality constraint so as to yield a transformed problem characterized by a new state vector of minimal size. This concept is important computationally, in that the computer time per iteration increases with the square of the dimension of the state vector. In order to illustrate the advantages of the new transformation technique, several numerical examples are solved by means of the sequential gradient-restoration algorithm for optimal control problems involving nondifferential constraints. The examples show the substantial savings in computer time for convergence, which are associated with the new transformation technique.
    Type of Medium: Electronic Resource
  • 11
    Electronic Resource
    Springer
    Journal of optimization theory and applications 13 (1974), S. 218-255 
    ISSN: 1573-2878
    Keywords: Calculus of variations ; optimal control ; computing methods ; numerical methods ; gradient methods ; sequential gradient-restoration algorithm ; restoration algorithm ; boundary-value problems ; bounded control problems ; bounded state problems ; nondifferential constraints
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. The approach taken is a sequence of two-phase processes or cycles, composed of a gradient phase and a restoration phase. The gradient phase involves a single iteration and is designed to decrease the functional, while the constraints are satisfied to first order. The restoration phase involves one or several iterations and is designed to restore the constraints to a predetermined accuracy, while the norm of the variations of the control and the parameter is minimized. The principal property of the algorithm is that it produces a sequence of feasible suboptimal solutions: the functions x(t), u(t), π obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the functionals of any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, and the stepsize of the restoration phase by a one-dimensional search on the constraint error P. If α_g is the gradient stepsize and α_r is the restoration stepsize, the gradient corrections are of O(α_g) and the restoration corrections are of O(α_r α_g²). Therefore, for α_g sufficiently small, the restoration phase preserves the descent property of the gradient phase: the functional Î at the end of any complete gradient-restoration cycle is smaller than the functional I at the beginning of the cycle. To facilitate the numerical solution on digital computers, the actual time ϑ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt=1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 4 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0 ≤ t ≤ 1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) problems involving state equality constraints can be reduced to the present scheme through suitable transformations, and (iii) problems involving inequality constraints can be reduced to the present scheme through suitable transformations. The latter statement applies, for instance, to the following situations: (a) problems with bounded control, (b) problems with bounded state, (c) problems with bounded time rate of change of the state, and (d) problems where some bound is imposed on an arbitrarily prescribed function of the parameter, the control, the state, and the time rate of change of the state. Numerical examples are presented for both the fixed-final-time case and the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
    Type of Medium: Electronic Resource
  • 12
    Electronic Resource
    Springer
    Journal of optimization theory and applications 14 (1974), S. 263-270 
    ISSN: 1573-2878
    Keywords: Two-point boundary-value problems ; differential equations ; Newton-Raphson methods ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A method based on matching a zero of the right-hand side of the differential equations, in a two-point boundary-value problem, to the boundary conditions is suggested. Effectiveness of the procedure is tested on three nonlinear, two-point boundary-value problems.
    Type of Medium: Electronic Resource
  • 13
    Electronic Resource
    Springer
    Journal of optimization theory and applications 19 (1976), S. 233-259 
    ISSN: 1573-2878
    Keywords: Geometric programming ; signomial functions ; computing methods ; nonlinear programming
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A computational comparison of several methods for dealing with polynomial geometric programs is presented. Specifically, we compare the complementary programs of Avriel and Williams (Ref. 1) with the reversed programs and the harmonic programs of Duffin and Peterson (Refs. 2, 3). These methods are used to generate a sequence of posynomial geometric programs which are solved using a dual algorithm.
    Type of Medium: Electronic Resource
  • 14
    Electronic Resource
    Springer
    Journal of optimization theory and applications 19 (1976), S. 587-611 
    ISSN: 1573-2878
    Keywords: Optimization theorems ; conjugate-gradient methods ; contraction-mapping principle ; computing methods ; open-loop control optimization
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract The solution of linear equations Cu_0 + b = 0 for u_0 is considered here, with C a positive-definite and self-adjoint operator. Such equations arise when solving quadratic optimization problems and (for example) when solving partial differential equations using finite-difference methods. A standard solution technique is to approximate C by an operator K which is easy to invert and then to construct an algorithm of the contraction-mapping type to use K⁻¹ iteratively to help solve the original equation. Such algorithms have long been used for solving equations of this type. The aim of the paper is to show that, for each K, a little-known generalization of the usual conjugate-gradient algorithm has advantages over the corresponding contraction-mapping algorithm in that it has better convergence properties. In addition, it is not significantly more difficult to implement. If K is a good approximation to C, the resulting generalized conjugate-gradient algorithm is more effective than the usual conjugate-gradient algorithm.
    Type of Medium: Electronic Resource
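
The classical preconditioned conjugate-gradient iteration that this abstract builds on can be sketched as follows for Cu + b = 0, with K an easily inverted approximation to C (here a diagonal preconditioner). The matrices are illustrative; the paper's generalized algorithm itself is not reproduced.

```python
# Sketch of the standard preconditioned conjugate-gradient iteration for
# C u + b = 0, with a Jacobi (diagonal) preconditioner standing in for an
# easily inverted approximation K to C.  Illustrative data.
import numpy as np

def preconditioned_cg(C, b, apply_K_inv, tol=1e-10, max_iter=200):
    u = np.zeros_like(b)
    r = -b - C @ u                    # residual of C u + b = 0
    z = apply_K_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Cp = C @ p
        alpha = rz / (p @ Cp)
        u += alpha * p
        r -= alpha * Cp
        z = apply_K_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
C = M @ M.T + 50 * np.eye(50)         # symmetric positive-definite operator
b = rng.standard_normal(50)
K_inv = lambda r: r / np.diag(C)      # diagonal approximation K = diag(C)

u = preconditioned_cg(C, b, K_inv)
print(np.linalg.norm(C @ u + b))      # close to zero
```
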
  • 15
    Electronic Resource
    Springer
    Journal of optimization theory and applications 20 (1976), S. 37-46 
    ISSN: 1573-2878
    Keywords: Boundary-value problems ; initial-value methods ; differential equations ; numerical methods ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract The paper discusses the solution of boundary-value problems for ordinary differential equations by Warner's algorithm. This shooting algorithm requires that only the original system of differential equations is solved once in each iteration, while the initial conditions for a new iteration are evaluated from a matrix equation. Numerical analysis performed shows that the algorithm converges even for very bad starting values of the unknown initial conditions and that the number of iterations is small and weakly dependent on the starting point. Based on this algorithm, a general subroutine can be realized for the solution of a large class of boundary-value problems.
    Type of Medium: Electronic Resource
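
Warner's algorithm itself is not reproduced here, but the shooting idea referred to in the abstract above (integrate the original system once per iteration, then correct the unknown initial conditions from the boundary mismatch) can be sketched as follows for a scalar second-order problem. The example equation and the secant correction are illustrative choices.

```python
# Hedged sketch of a simple shooting method for a two-point boundary-value
# problem: guess the unknown initial slope, integrate the ODE once per
# iteration, and correct the guess from the terminal mismatch (secant update).
# Illustrative problem: y'' = -y, y(0) = 0, y(pi/2) = 1 (solution y = sin t).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                        # y = [y, y']
    return [y[1], -y[0]]

def terminal_value(slope, t_end=np.pi / 2):
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

target = 1.0
s0, s1 = 0.0, 2.0                     # two starting guesses for y'(0)
f0, f1 = terminal_value(s0) - target, terminal_value(s1) - target
for _ in range(30):                   # secant iteration on the boundary residual
    if abs(f1) < 1e-10:
        break
    s_next = s1 - f1 * (s1 - s0) / (f1 - f0)
    s0, f0 = s1, f1
    s1, f1 = s_next, terminal_value(s_next) - target

print("estimated y'(0):", s1)         # exact answer is 1
```
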
  • 16
    Electronic Resource
    Springer
    Journal of optimization theory and applications 20 (1976), S. 397-416 
    ISSN: 1573-2878
    Keywords: Mathematical programming ; complementarity problems ; computing methods ; linear systems
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Lemke's algorithm for the linear complementarity problem fails when a desired pivot is not blocked. A projective transformation overcomes this difficulty. The transformation is performed computationally by adjoining a new row to a schema of the problem and pivoting on the element in this row and the unit constant column. Two new algorithms result; some conditions for their success are discussed.
    Type of Medium: Electronic Resource
  • 17
    Electronic Resource
    Springer
    Journal of optimization theory and applications 20 (1976), S. 455-479 
    ISSN: 1573-2878
    Keywords: Optimal control ; numerical methods ; computing methods ; gradient methods ; quasi-Newton algorithms ; bounded control problems ; singular arcs
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Two existing function-space quasi-Newton algorithms, the Davidon algorithm and the projected gradient algorithm, are modified so that they may handle directly control-variable inequality constraints. A third quasi-Newton-type algorithm, developed by Broyden, is extended to optimal control problems. The Broyden algorithm is further modified so that it may handle directly control-variable inequality constraints. From a computational viewpoint, dyadic operator implementation of quasi-Newton methods is shown to be superior to the integral kernel representation. The quasi-Newton methods, along with the steepest descent method and two conjugate gradient algorithms, are simulated on three relatively simple (yet representative) bounded control problems, two of which possess singular subarcs. Overall, the Broyden algorithm was found to be superior. The most notable result of the simulations was the clear superiority of the Broyden and Davidon algorithms in producing a sharp singular control subarc.
    Type of Medium: Electronic Resource
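
The Broyden update mentioned in the abstract above is, in its finite-dimensional root-finding form, the rank-one correction sketched below; the function-space, control-constrained versions studied in the paper are not reproduced, and the test system is illustrative.

```python
# Finite-dimensional sketch of Broyden's rank-one quasi-Newton update for
# solving F(x) = 0, as background for the function-space version discussed in
# the abstract.  The test system is made up for illustration.
import numpy as np

def F(x):
    return np.array([x[0] + 2 * x[1] - 2.0,
                     x[0]**2 + 4 * x[1]**2 - 4.0])

def fd_jacobian(F, x, eps=1e-6):
    # forward-difference Jacobian, used only to initialize the approximation
    f0 = F(x)
    J = np.zeros((len(f0), len(x)))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (F(xp) - f0) / eps
    return J

def broyden(F, x, tol=1e-10, max_iter=100):
    B = fd_jacobian(F, x)             # initial Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(B, -fx)
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - fx
        # Broyden's rank-one update: B += (df - B dx) dx^T / (dx^T dx)
        B += np.outer(df - B @ dx, dx) / (dx @ dx)
        x, fx = x_new, f_new
    return x

print(broyden(F, np.array([1.0, 2.0])))
```
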
  • 18
    Electronic Resource
    Springer
    Journal of optimization theory and applications 21 (1977), S. 83-89 
    ISSN: 1573-2878
    Keywords: Augmented penalty functions ; control theory ; delayed control systems ; retarded control problems ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A descent method is given for the numerical solution of delayed optimal control problems with fixed delays by first reducing them to nondelayed problems and then using the technique of augmented penalty functions. The system resulting from the reduction to a nondelayed problem is of higher order than the original system; however, the time is proportionally shorter, and the variational matrices are sparse.
    Type of Medium: Electronic Resource
  • 19
    Electronic Resource
    Springer
    Journal of optimization theory and applications 23 (1977), S. 229-243 
    ISSN: 1573-2878
    Keywords: Gradient methods ; ill-posed problems ; computing methods ; numerical methods ; failure analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract An extensive failure analysis of the steepest-descent optimization algorithm has been made. Each of the ways in which the algorithm can fail is discussed in terms of both the mathematical and numerical manifestations of a failure and the information which each type of failure provides about the formulation of the physical problem. Numerical tests for each of the various types of failure are described; several faulty problem formulations are presented, each of which illustrates a particular type of failure. A table is presented in which all failure modes are summarized and the corresponding numerical tests are exhibited.
    Type of Medium: Electronic Resource
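
As a concrete counterpart to the failure modes and numerical tests discussed in the abstract above, the sketch below runs a plain steepest-descent iteration and reports a few simple diagnostics (line-search failure, negligible step, iteration limit). The tests and thresholds are illustrative examples, not the paper's catalogue of failure modes.

```python
# Illustrative steepest-descent driver that reports simple failure diagnostics.
# The thresholds and messages are examples only.
import numpy as np

def steepest_descent(f, grad, x, tol=1e-8, max_iter=2000):
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            return x, "converged"
        # Armijo backtracking line search along -g
        alpha, ok = 1.0, False
        for _ in range(60):
            if f(x - alpha * g) <= f(x) - 1e-4 * alpha * gnorm**2:
                ok = True
                break
            alpha *= 0.5
        if not ok:
            return x, "failure: line search could not reduce f (check gradient or problem formulation)"
        if alpha * gnorm < 1e-14:
            return x, "failure: step became negligible before the optimality test was met"
        x = x - alpha * g
    return x, "failure: iteration limit reached (possible slow progress or ill-conditioning)"

f = lambda x: 0.5 * (x[0]**2 + 100.0 * x[1]**2)       # ill-conditioned quadratic
grad = lambda x: np.array([x[0], 100.0 * x[1]])
print(steepest_descent(f, grad, np.array([1.0, 1.0])))
```
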
  • 20
    Electronic Resource
    Springer
    Journal of optimization theory and applications 16 (1975), S. 165-178 
    ISSN: 1573-2878
    Keywords: Estimation theory ; linear estimation problems ; computing methods ; least-square methods ; sequential estimation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract The Dyer-McReynolds square-root information filter (SRIF) is rederived, using recursive least-square arguments. The result is applied to a system composed partly of biases. The filter sensitivity matrix, computed covariance, and consider covariance for this augmented system are reviewed. A new computationally attractive representation for the smoothed estimates, in terms of a smoothed sensitivity matrix and a smoothed computed covariance, is presented.
    Type of Medium: Electronic Resource
  • 21
    Electronic Resource
    Springer
    Journal of optimization theory and applications 13 (1974), S. 620-634 
    ISSN: 1573-2878
    Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.
    Type of Medium: Electronic Resource
  • 22
    Electronic Resource
    Springer
    Journal of optimization theory and applications 16 (1975), S. 429-445 
    ISSN: 1573-2878
    Keywords: Mathematical programming ; conjugate-gradient methods ; variable-metric methods ; linear equations ; numerical methods ; computing methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A computationally stable method for the general solution of a system of linear equations is given. The system is A^T x − B = 0, where the n-vector x is unknown and the n×q matrix A and the q-vector B are known. It is assumed that the matrix A^T and the augmented matrix [A^T, B] are of the same rank m, where m ≤ n, so that the system is consistent and solvable. When m < n, the method yields the minimum modulus solution x_m and a symmetric n×n matrix H_m of rank n−m, so that x = x_m + H_m y satisfies the system for all y, an n-vector. When m = n, the matrix H_m reduces to zero and x_m becomes the unique solution of the system. The method is also suitable for the solution of a determined system of n linear equations. When the n×n coefficient matrix is ill-conditioned, the method can produce a good solution, while the commonly used elimination method fails.
    Type of Medium: Electronic Resource
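
The solution structure described in the abstract above (a minimum-modulus solution plus a null-space term) can be checked numerically with the pseudoinverse, as in the sketch below; this is a verification aid built from standard linear algebra, not the computationally stable method of the paper.

```python
# Numerical illustration of the solution structure for A^T x - B = 0:
# x_m is the minimum-modulus solution and H_m projects onto the null space of
# A^T, so x = x_m + H_m y solves the system for every y.  The pseudoinverse is
# used only for checking; this is not the paper's stable method.
import numpy as np

rng = np.random.default_rng(1)
n, q, m = 6, 4, 3
A = rng.standard_normal((n, m)) @ rng.standard_normal((m, q))   # n x q matrix of rank m
x_true = rng.standard_normal(n)
B = A.T @ x_true                                                 # consistent right-hand side

At_pinv = np.linalg.pinv(A.T)
x_m = At_pinv @ B                           # minimum-modulus solution
H_m = np.eye(n) - At_pinv @ A.T             # symmetric projector of rank n - m

y = rng.standard_normal(n)
x = x_m + H_m @ y
print(np.linalg.norm(A.T @ x - B))          # ~0: every x of this form solves the system
print(np.linalg.matrix_rank(H_m), n - m)    # rank of H_m equals n - m
```
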
  • 23
    Electronic Resource
    Springer
    Journal of optimization theory and applications 13 (1974), S. 519-537 
    ISSN: 1573-2878
    Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract In this paper, the method of dual matrices for the minimization of functions is introduced. The method, which is developed on the model of a quadratic function, is characterized by two matrices at each iteration. One matrix is such that a linearly independent set of directions can be generated, regardless of the stepsize employed. The other matrix is such that, at the point where the first matrix fails to yield a gradient linearly independent of all the previous gradients, it generates a displacement leading to the minimal point. Thus, the one-dimensional search is bypassed. For a quadratic function, it is proved that the minimal point is obtained in at most n + 1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n + 2. Three algorithms of the method are presented. A reverse algorithm, which permits the use of only one matrix, is also given. Considerations pertaining to the applications of this method to the minimization of a quadratic function and a nonquadratic function are given. It is believed that, since the one-dimensional search can be bypassed, a considerable amount of computational saving can be achieved.
    Type of Medium: Electronic Resource
  • 24
    Electronic Resource
    Springer
    Journal of optimization theory and applications 16 (1975), S. 447-485 
    ISSN: 1573-2878
    Keywords: Nonlinear programming ; mathematical programming ; quadratically convergent algorithms ; conjugate-gradient methods ; variable-metric methods ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract The problem of minimizing a function f(x) subject to the constraint ϕ(x)=0 is considered. Here, f is a scalar, x is an n-vector, and ϕ is an m-vector, where m < n. A general quadratically convergent algorithm is presented. The conjugate-gradient algorithm and the variable-metric algorithms for constrained function minimization can be obtained as particular cases of the general algorithm. It is shown that, for a quadratic function subject to a linear constraint, all the particular algorithms behave identically if the one-dimensional search for the stepsize is exact. Specifically, they all produce the same sequence of points and lead to the constrained minimal point in no more than n − r descent steps, where r is the number of linearly independent constraints. The algorithms are then modified so that they can also be employed for a nonquadratic function subject to a nonlinear constraint. Some particular algorithms are tested through several numerical examples.
    Type of Medium: Electronic Resource