ALBERT

All Library Books, journals and Electronic Records Telegrafenberg





  • 1
    Publication Date: 1992-01-01
    Print ISSN: 0167-8655
    Electronic ISSN: 1872-7344
    Topics: Computer Science
    Published by Elsevier
  • 2
    Publication Date: 2011-08-24
    Description: The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field, an assumption justified by the impossibility of encounters between the flying objects in a high-density cluster. The problem is therefore reduced to the identification of a moving continuum from consecutive time-frame observations. In contradistinction to previous approaches, each target is here considered as the center of a small continuous neighborhood subjected to a local-affine transformation, and therefore the target trajectories do not mix; their mixing in the plane of the sensor view is only apparent. The approach is illustrated by an example.
    Keywords: SPACE COMMUNICATIONS, SPACECRAFT COMMUNICATIONS, COMMAND AND TRACKING
    Type: In: Signal and data processing of small targets 1991; Proceedings of the Meeting, Orlando, FL, Apr. 1-3, 1991 (A93-26796 09-35); p. 386-397.
    Format: text
  • 3
    Publication Date: 2011-08-24
    Description: A new methodology for faster supervised temporal learning in nonlinear neural networks is presented which builds upon the concept of adjoint operators to allow fast computation of the gradients of an error functional with respect to all parameters of the neural architecture, and exploits the concept of teacher forcing to incorporate information on the desired output into the activation dynamics. The importance of the initial or final time conditions for the adjoint equations is discussed. A new algorithm is presented in which the adjoint equations are solved simultaneously (i.e., forward in time) with the activation dynamics of the neural network. We also indicate how teacher forcing can be modulated in time as learning proceeds. The results obtained show that the learning time is reduced by one to two orders of magnitude with respect to previously published results, while trajectory tracking is significantly improved. The proposed methodology makes hardware implementation of temporal learning attractive for real-time applications.
    Keywords: CYBERNETICS
    Type: Neural Networks (ISSN 0893-6080); p. 473-484.
    Format: text
  • 4
    Publication Date: 2011-08-24
    Description: A neural algorithm for rapidly simulating a certain class of nonlinear wave phenomena using analog VLSI neural hardware is presented and applied to the Korteweg-de Vries partial differential equation. The corresponding neural architecture is obtained from a pseudospectral representation of the spatial dependence, along with a leap-frog scheme for the temporal evolution. Numerical simulations demonstrated the robustness of the proposed approach.
    Keywords: CYBERNETICS
    Type: In: IJCNN - International Joint Conference on Neural Networks, Baltimore, MD, June 7-11, 1992, Proceedings. Vol. 3 (A93-37001 14-63); p. III-10 to III-15.
    Format: text
  • 5
    Publication Date: 2011-08-19
    Description: A novel approach is presented which is motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks: their rigid behavior as compared with even the simplest biological systems. It is demonstrated that non-Lipschitzian dynamics, based on the failure of the Lipschitz conditions at repellers, displays a new qualitative effect, i.e., a multichoice response to periodic external excitations. This makes it possible to construct unpredictable systems, represented in the form of coupled activation and learning dynamical equations. It is shown that unpredictable systems can be controlled by sign strings which uniquely define the system behavior by specifying the direction of the motions at the critical points. Unpredictable systems driven by sign strings are extremely flexible and can serve as a powerful tool for complex pattern recognition.
    Keywords: CYBERNETICS
    Format: text
  • 6
    Publication Date: 2013-08-31
    Description: Included in this viewgraph presentation on intelligent neuroprocessors for launch vehicle health management systems (HMS) are the following: where the flight failures have been in launch vehicles; cumulative delay time; breakdown of operations hours; failure of Mars Probe; vehicle health management (VHM) cost optimizing curve; target HMS-STS auxiliary power unit location; APU monitoring and diagnosis; and integration of neural networks and fuzzy logic.
    Keywords: CYBERNETICS
    Type: A Decade of Neural Networks: Practical Applications and Prospects; p 137-162
    Format: application/pdf
  • 7
    Publication Date: 2018-06-08
    Description: The emergence, in recent years, of hyperspectral sensors provides a tremendous opportunity for advancing the process of detailed and direct remote detection and identification from space of targets or surface materials. Such sensors exploit the uniqueness of the corresponding spectral reflectance signatures, which enables high resolution imaging spectrometer data to be processed on a pixel-by-pixel basis. This has implications both for defense-related applications (e.g., surveillance tasks) and in the civilian domain (e.g., for science applications). The purpose of this talk is to discuss a number of strong arguments that support neural networks as a choice for the generalized analysis (e.g., unmixing) of remotely sensed hyperspectral data.
    Keywords: Earth Resources and Remote Sensing
    Format: text
  • 8
    Publication Date: 2019-06-28
    Description: A neural network is trained to output a time dependent target vector, defined over a predetermined time interval, in response to a time dependent input vector defined over the same time interval, by applying corresponding elements of the error vector (the difference between the target vector and the actual neuron output vector) to the inputs of the corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete learning. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The process is repeated in successive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as its parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
    Keywords: CYBERNETICS
    Format: application/pdf
  • 9
    Publication Date: 2019-06-28
    Description: A method and apparatus for supervised neural learning of time dependent trajectories exploits the concept of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve the adjoint system of equations together with the activation dynamics, forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations, compared to the 12000 reported in the literature, and that a figure-eight trajectory can be achieved in under 500 iterations, compared to the 20000 previously required. The trajectories computed using the new method are much closer to the target trajectories than those reported in previous studies.
    Keywords: CYBERNETICS
    Type: NAS 1.71:NPO-18586-1-CU
    Format: application/pdf
  • 10
    Publication Date: 2019-06-28
    Description: A neural network is trained to output a time dependent target vector, defined over a predetermined time interval, in response to a time dependent input vector defined over the same time interval, by applying corresponding elements of the error vector (the difference between the target vector and the actual neuron output vector) to the inputs of the corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete learning. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The process is repeated in successive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as its parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
    Keywords: CYBERNETICS
    Type: NAS 1.71:NPO-18553-1-CU
    Format: application/pdf
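The local-affine neighborhood idea in record 2 can be illustrated with a least-squares fit: given the positions of a small cluster of targets in two consecutive frames, the affine map x' ≈ Ax + b relating them is recovered from an overdetermined linear system. The data, variable names, and two-dimensional setting below are synthetic illustrations, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 12                                   # targets in the neighborhood
X1 = rng.uniform(-1, 1, size=(m, 2))     # frame-1 positions (sensor plane)

A_true = np.array([[1.0, 0.1],           # a gentle shear/rotation (assumed)
                   [-0.1, 1.0]])
b_true = np.array([0.3, -0.2])
X2 = X1 @ A_true.T + b_true              # frame-2 positions (noise-free)

# Solve for [A | b] in homogeneous coordinates: each row of P M equals
# (A x_i + b)^T, so M stacks A^T over b^T.
P = np.hstack([X1, np.ones((m, 1))])     # m x 3
M, *_ = np.linalg.lstsq(P, X2, rcond=None)
A_hat, b_hat = M[:2].T, M[2]
```

With noise-free positions the fit is exact; with sensor noise the same least-squares solve returns the best affine approximation of the neighborhood's motion.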
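The adjoint machinery described in record 3 can be sketched for a small discrete-time network. The paper's contribution is solving the adjoint equations forward in time together with the activation dynamics; the conventional backward adjoint sweep below only illustrates what those equations compute, namely the gradient of a terminal error functional with respect to all weights in a single pass. The network size, weights, and target are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 3, 5                              # network size, number of time steps
W = rng.normal(0.0, 0.5, size=(n, n))    # all trainable parameters
x0 = rng.normal(size=n)                  # initial activations
d = rng.normal(size=n)                   # desired terminal output

def forward(W):
    """Activation dynamics x_{t+1} = tanh(W x_t); returns the trajectory."""
    xs = [x0]
    for _ in range(T):
        xs.append(np.tanh(W @ xs[-1]))
    return xs

def loss(W):
    """Terminal error functional 0.5 * ||x_T - d||^2."""
    return 0.5 * float(np.sum((forward(W)[-1] - d) ** 2))

# Adjoint sweep: lam_T = x_T - d; propagating lam through the linearized
# dynamics accumulates the full gradient dL/dW in one pass, instead of one
# forward simulation per weight.
xs = forward(W)
lam = xs[-1] - d
grad = np.zeros_like(W)
for t in range(T - 1, -1, -1):
    delta = (1.0 - xs[t + 1] ** 2) * lam     # tanh' = 1 - tanh^2
    grad += np.outer(delta, xs[t])
    lam = W.T @ delta
```

The gradient can be verified entry-by-entry against finite differences of `loss`, which is the standard sanity check for adjoint implementations.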
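The numerical scheme named in record 4, pseudospectral spatial derivatives with leap-frog time stepping for the Korteweg-de Vries equation u_t + 6uu_x + u_xxx = 0, can be sketched in plain NumPy. This is an illustration of the scheme only, not the paper's neural/analog-VLSI architecture; the grid, time step, and one-soliton initial condition are arbitrary choices.

```python
import numpy as np

N = 256                                  # periodic grid points
L = 20.0 * np.pi                         # domain length
dx = L / N
x = np.arange(N) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)    # spectral wavenumbers

def rhs(u):
    """-(6 u u_x + u_xxx), derivatives evaluated pseudospectrally via FFT."""
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xxx = np.real(np.fft.ifft(-1j * k**3 * u_hat))
    return -(6.0 * u * u_x + u_xxx)

# One-soliton initial condition u = (c/2) sech^2(sqrt(c)/2 (x - x0)).
c, x0 = 1.0, L / 4.0
u_prev = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2

dt = 1.0e-4                              # leap-frog stable: dt * k_max^3 < 1
u_curr = u_prev + dt * rhs(u_prev)       # one Euler step to start the scheme
for _ in range(2000):
    u_next = u_prev + 2.0 * dt * rhs(u_curr)   # leap-frog update
    u_prev, u_curr = u_curr, u_next
```

A quick check of such a scheme is that the mass integral of u (equal to 2 sqrt(c) for this soliton) and the soliton amplitude c/2 stay essentially constant over the integration.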
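The mathematical ingredient behind record 5 is the failure of the Lipschitz condition at an equilibrium. A minimal illustration from the terminal-attractor literature (not the paper's coupled activation/learning equations) is dx/dt = -x^(1/3), which reaches x = 0 in finite time t* = (3/2) x0^(2/3), whereas the Lipschitzian dx/dt = -x only decays exponentially:

```python
import numpy as np

dt, t_end = 1.0e-3, 1.6
x_term, x_lin = 1.0, 1.0                 # same initial condition for both
for _ in range(int(t_end / dt)):
    # non-Lipschitz dynamics: the right-hand side -x^(1/3) has unbounded
    # slope at x = 0, so the Lipschitz condition fails there
    x_term += dt * (-np.sign(x_term) * abs(x_term) ** (1.0 / 3.0))
    # ordinary Lipschitzian decay, for comparison
    x_lin += dt * (-x_lin)
# x_term reaches the equilibrium in finite time t* = 1.5; x_lin never does.
```

Run backward (as a repeller), the same singularity lets the system leave the equilibrium in finite time along either sign, which is the multichoice response the record describes.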
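The unmixing task mentioned in record 7 is classically posed as a linear mixing model: each pixel spectrum is a weighted sum of endmember signatures, and the per-pixel abundance fractions are recovered from that system. The talk argues for neural networks for this generalized analysis; the least-squares baseline below only shows the problem being solved, with synthetic spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_end = 30, 3                     # spectral bands, endmembers (assumed)
E = rng.uniform(0.1, 1.0, size=(bands, n_end))   # synthetic endmember spectra
true_frac = np.array([0.5, 0.3, 0.2])    # ground-truth abundance fractions
pixel = E @ true_frac + rng.normal(0.0, 1e-3, size=bands)  # observed spectrum

# Per-pixel unmixing: least-squares estimate of the abundance fractions.
frac_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Practical unmixing additionally enforces nonnegativity and sum-to-one constraints on the fractions; the unconstrained solve here recovers them closely because the synthetic noise is small.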
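The training scheme of records 8 and 10 can be caricatured on a single recurrent unit: the output error e_t = d_t - y_t is injected into the unit as corrective feedback with gain lam during the simulation interval, and a conventional gradient-descent step (here a numerical gradient with a simple backtracking rule) updates the weight at the end of each interval. The scalar model, parameter values, and fixed feedback gain are assumptions for illustration; the patent's preferred embodiment additionally scales the feedback down as the overall error falls.

```python
import numpy as np

T, dt = 200, 0.05
t = np.arange(T) * dt
target = np.sin(2.0 * np.pi * 0.2 * t)   # desired output trajectory d_t

def run(w, lam):
    """Simulate the forced unit over the interval; return summed squared error."""
    y, loss = 0.0, 0.0
    for d in target:
        e = d - y                         # output error
        y = y + dt * (-y + np.tanh(w * y) + lam * e)  # teacher-forced dynamics
        loss += e * e
    return loss

w, lam, lr = 0.5, 2.0, 0.1
loss0 = run(w, lam)
loss = loss0
for _ in range(50):
    # gradient descent on w at the end of the interval (finite differences)
    g = (run(w + 1e-4, lam) - run(w - 1e-4, lam)) / 2e-4
    w_trial = w - lr * g
    trial = run(w_trial, lam)
    if trial < loss:                      # accept only improving steps
        w, loss = w_trial, trial
    else:
        lr *= 0.5                         # backtrack on overshoot
```

The corrective feedback keeps the trajectory near the target while the weight is still poor, which is what shortens the learning process in the patented scheme.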