ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Electronic Resource
    Springer
    Computing 47 (1991), pp. 169-191 
    ISSN: 1436-5057
    Keywords: Primary: 65H10 ; secondary: 65G10 ; Nonlinear algebraic systems ; interval arithmetic ; automatic differentiation
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Description / Table of Contents: Summary: Interval iterations can be used in conjunction with other methods to enclose with certainty all solutions of a nonlinear system of equations in a given region, and also to verify approximations to the solutions of such systems. However, the enclosures produced by these methods are sometimes not sufficiently sharp, because overestimation occurs in the computation and in the use of the interval Jacobian matrix. In the present paper, interval iterations on an expanded system of equations are treated. In this system there is no overestimation in the individual entries of the interval Jacobian matrix, and enclosures can be given for the nonlinearities. The behavior of the algorithms treated is demonstrated by means of an example.
    Notes: Abstract Interval iteration can be used, in conjunction with other techniques, for rigorously bounding all solutions to a nonlinear system of equations within a given region, or for verifying approximate solutions. However, because of overestimation which occurs when the interval Jacobian matrix is accumulated and applied, straightforward linearization of the original nonlinear system sometimes leads to nonconvergent iteration. In this paper, we examine interval iterations based on an expanded system obtained from the intermediate quantities in the original system. In this system, there is no overestimation in entries of the interval Jacobian matrix, and nonlinearities can be taken into account to obtain sharp bounds. We present a detailed example, algorithms, and experimental results obtained from applying our algorithms to the example.
    Type of Medium: Electronic Resource
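    The overestimation referred to here is the classic dependency problem of interval arithmetic. As a minimal Python sketch (plain tuples and an invented example, not the paper's algorithm), evaluating through an intermediate quantity, in the spirit of the expanded system, can tighten a bound:

        # Minimal interval arithmetic on (lo, hi) pairs.
        def add(a, b):
            return (a[0] + b[0], a[1] + b[1])

        def sub(a, b):
            return (a[0] - b[1], a[1] - b[0])

        def mul(a, b):
            p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
            return (min(p), max(p))

        x = (1.0, 2.0)

        # Naive evaluation of f(x) = x^2 - 2x: each occurrence of x is
        # treated as varying independently, so the bound is far too wide.
        naive = sub(mul(x, x), mul((2.0, 2.0), x))   # (-3.0, 2.0)

        # Evaluation via the intermediate quantity t = x - 1, using the
        # identity f(x) = (x - 1)^2 - 1: the bound is exact.
        t = sub(x, (1.0, 1.0))
        sharp = sub(mul(t, t), (1.0, 1.0))           # (-1.0, 0.0)

        print(naive, sharp)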
  • 2
    Electronic Resource
    Springer
    Higher-order and symbolic computation 4 (1992), pp. 371-398 
    ISSN: 1573-0557
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract A critique of DIN Kernel Lisp is presented which argues for greater emphasis on implementation efficiency and language cleanliness, and for greater emphasis on parallel and persistent Lisp environments. Specific recommendations include standardizing the S-expression rather than the character form of a program; using lexical scoping and shadowing to enhance subsystem modularity; relying on macros and compiler-macros for more pleasant syntax and greater modularity; requiring immutable/functional bindings, strings, vectors and lists; using object-oriented capabilities to build basic capabilities (e.g., generic arithmetic, streams and pathnames); relying on defstruct instead of defclass; and standardizing on defmethod for all function definitions. A virtual/synthetic class mechanism is presented to solve certain technical problems analogous to those solved by the “virtual function” mechanism of C++. Finally, we recommend the inclusion of futures as DKLisp's fundamental mechanism for the introduction of multiple parallel threads of computation.
    Type of Medium: Electronic Resource
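    Futures, recommended in the closing sentence above, return a placeholder immediately and block only when the value is first touched. A minimal sketch of the semantics using Python's concurrent.futures (not DKLisp):

        # A future runs the computation in another thread; the caller keeps
        # going and blocks only at the point where the result is needed.
        from concurrent.futures import ThreadPoolExecutor

        def slow_square(n):
            return n * n

        with ThreadPoolExecutor() as pool:
            fut = pool.submit(slow_square, 7)   # starts computing in parallel
            # ... other work can proceed here ...
            print(fut.result())                 # touching the future forces it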
  • 3
    Electronic Resource
    Springer
    Higher-order and symbolic computation 5 (1992), pp. 157-190 
    ISSN: 1573-0557
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Common Lisp [25],[26] includes a dynamic datatype system of moderate complexity, as well as predicates for checking the types of language objects. Additionally, an interesting predicate of two “type specifiers”—SUBTYPEP—is included in the language. This subtypep predicate provides a mechanism with which to query the Common Lisp type system regarding containment relations among the various built-in and user-defined types. While subtypep is rarely needed by an applications programmer, the efficiency of a Common Lisp implementation can depend critically upon the quality of its subtypep predicate: the run-time system typically calls upon subtypep to decide what sort of representations to use when making arrays; the compiler calls upon subtypep to interpret user declarations, on which efficient data representation and code generation decisions are based. As might be expected due to the complexity of the Common Lisp type system, there may be type containment questions which cannot be decided. In these cases subtypep is expected to return “can't determine”, in order to avoid giving an incorrect answer. Unfortunately, most Common Lisp implementations have abused this license by answering “can't determine” in all but the most trivial cases. In particular, most Common Lisp implementations of SUBTYPEP fail on the basic axioms of the Common Lisp type system itself [25][26]. This situation is particularly embarrassing for Lisp, the premier “symbol processing language”, in which the implementation of complex symbolic logical operations should be relatively easy. Since subtypep was presumably included in Common Lisp to answer the hard cases of type containment, this “lazy evaluation” limits the usefulness of an important language feature. This paper shows how those type containment relations of Common Lisp which can be decided at all can be decided simply and quickly by a decision procedure which can dramatically reduce the number of occurrences of the “can't determine” answer from subtypep. This decision procedure does not require the conversion of a type specifier expression to conjunctive or disjunctive normal form, and therefore does not incur the exponential explosion in space and time that such a conversion would entail. The lattice mechanism described here for deciding subtypep is also ideal for performing type inference [2]; the particular implementation developed here, however, is specific to the type system of Common Lisp [4]. Categories and Subject Descriptors: Lisp, dynamic typing, compiler optimization, type inference, decision procedure.
    Type of Medium: Electronic Resource
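    A hypothetical Python sketch of the lattice idea (invented type universe, not the paper's implementation): if every type specifier denotes a set of disjoint atomic regions, then and/or/not become set operations and SUBTYPEP becomes a subset test, with no conversion to disjunctive normal form:

        # Map type specifiers to sets of atomic regions; containment is then
        # decided by a subset test on those sets.
        UNIVERSE = frozenset(["fixnum", "bignum", "single-float",
                              "symbol", "cons", "string"])
        ATOMS = {
            "integer": {"fixnum", "bignum"},
            "number":  {"fixnum", "bignum", "single-float"},
            "list":    {"cons"},   # ignoring NIL for brevity
        }

        def denote(spec):
            """Evaluate a type specifier to its set of atomic regions."""
            if isinstance(spec, str):
                return frozenset(ATOMS.get(spec, {spec}))
            op, *args = spec
            sets = [denote(a) for a in args]
            if op == "or":
                return frozenset().union(*sets)
            if op == "and":
                out = frozenset(UNIVERSE)
                for s in sets:
                    out = out & s
                return out
            if op == "not":
                return UNIVERSE - sets[0]
            raise ValueError(op)

        def subtypep(s1, s2):
            return denote(s1) <= denote(s2)

        # (subtypep 'integer '(or number symbol)) -> T, with no DNF expansion
        print(subtypep("integer", ("or", "number", "symbol")))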
  • 4
    Electronic Resource
    Springer
    Biological cybernetics 67 (1992), pp. 103-112 
    ISSN: 1432-0770
    Source: Springer Online Journal Archives 1860-2000
    Topics: Biology , Computer Science , Physics
    Notes: Abstract The vestibulo-ocular reflex (VOR), which stabilizes the eyes in space during head movements, can undergo adaptive modification to maintain retinal stability in response to natural or experimental challenges. A number of models and neural sites have been proposed to account for this adaptation but these do not fully explain how the nervous system can detect and correct errors in both gain and phase of the VOR. This paper presents a general error correction algorithm based on the multiplicative combination of three signals (retinal slip velocity, head position, head velocity) directly relevant to processing of the VOR. The algorithm is highly specific, requiring the combination of particular sets of signals to achieve compensation. It is robust, with essentially perfect compensation observed for all gain (0.25X–4.0X) and phase (-180°–+180°) errors tested. Output of the model closely resembles behavioral data from both gain and phase adaptation experiments in a variety of species. Imposing physiological constraints (no negative activation levels or changes in the sign of unit weights) does not alter the effectiveness of the algorithm. These results suggest that the mechanisms implemented in our model correspond to those implemented in the brain of the behaving organism. Predictions concerning the nature of the adaptive process are specific enough to permit experimental verification using electrophysiological techniques. In addition, the model provides a strategy for adaptive control of any first order mechanical system.
    Type of Medium: Electronic Resource
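    One simple reading of the multiplicative error-correction idea, sketched in Python with invented constants (gain adaptation only; the paper also treats phase, via the head position signal):

        # Adapt the VOR gain g so that eye velocity -g * head_velocity
        # cancels head movement; the update multiplies retinal slip by
        # head velocity, one of the signal products named in the abstract.
        import math

        g, beta = 0.5, 0.002                         # invented initial gain, learning rate
        for step in range(20000):
            t = step * 0.001
            head_vel = math.sin(2 * math.pi * t)     # 1 Hz head rotation
            eye_vel = -g * head_vel
            slip = head_vel + eye_vel                # residual image motion
            g += beta * slip * head_vel              # multiplicative correction
        print(round(g, 3))                           # converges toward 1.0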
  • 5
    ISSN: 1432-0770
    Source: Springer Online Journal Archives 1860-2000
    Topics: Biology , Computer Science , Physics
    Notes: Abstract The primary function of the vestibuloocular reflex (VOR) is to maintain the stability of retinal images during head movements. This function is expressed through a complex array of dynamic and adaptive characteristics whose essential physiological basis is a disynaptic arc. We present a model of normal VOR function using a simple neural network architecture constrained by the physiological and anatomical characteristics of this disynaptic reflex arc. When tuned using a method of global optimization, this network is capable of exhibiting the broadband response characteristics observed in behavioral tests of VOR function. Examination of the internal units in the network shows that this performance is achieved by rediscovering the solution to VOR processing first proposed by Skavenski and Robinson (1973). Type I units at the intermediate level of the network possess activation characteristics associated with either pure position or pure velocity. When the network is made more complex, either through adding more pairs of internal units or an additional level of units, the characteristic division of unit activation properties into position and velocity types remains unchanged. Although simple in nature, the results of our simulations reinforce the validity of bottom-up approaches to the modeling of neural function. In addition, the architecture of the network is consistent with current ideas on the characteristics and site of adaptation of the reflex and should be compatible with current theories regarding learning rules for synaptic modification during VOR adaptation.
    Type of Medium: Electronic Resource
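    The Skavenski-Robinson solution referred to is usually summarized by requiring the motoneuron command to combine an eye position signal with an eye velocity signal; in standard textbook notation (not taken from this paper):

        R(t) = R_0 + k\,E(t) + r\,\dot{E}(t)

    where R is motoneuron firing rate, E is eye position, and k and r are constants of the oculomotor plant. The pure-position and pure-velocity internal units described above correspond to the two terms of this sum.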
  • 6
    Electronic Resource
    Springer
    Computational economics 3 (1990), pp. 63-80 
    ISSN: 1572-9974
    Keywords: Hierarchical/relational approach ; math programming modeling
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Economics
    Notes: Abstract We describe a hierarchical/relational approach to math programming modeling. The approach transforms free-form generic modeling constructs into linear and nonlinear models which are independent of end-user data structures. The underlying relationships with graph-based interfaces and the inherent aggregation/disaggregation capabilities of the approach are also discussed. The modeling approach will be illustrated with several process industry applications including distribution planning, operations planning, and production scheduling.
    Type of Medium: Electronic Resource
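    As a generic illustration of deriving a linear model mechanically from relational data (invented toy data, not the paper's constructs), a hypothetical Python sketch for a distribution-planning instance:

        # Two relations (capacities and demands) are enough to generate the
        # objective and constraint rows of a transportation-style LP.
        supply = {"plant_a": 100, "plant_b": 80}    # relation: plant -> capacity
        demand = {"depot_x": 70, "depot_y": 90}     # relation: depot -> demand

        objective = " + ".join(f"c[{p},{d}]*x[{p},{d}]"
                               for p in supply for d in demand)
        rows = [f"sum over d of x[{p},d] <= {cap}" for p, cap in supply.items()]
        rows += [f"sum over p of x[p,{d}] >= {req}" for d, req in demand.items()]

        print("minimize", objective)
        print("\n".join(rows))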
  • 7
    Electronic Resource
    Springer
    Designs, codes and cryptography 4 (1994), pp. 307-312 
    ISSN: 1573-7586
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science , Mathematics
    Notes: Abstract The one-to-one correspondence between the class of two-dimensional translation planes of order q² and the collection of spreads of PG(3,q) has long provided a natural context for describing new planes. The method often used for constructing “interesting” spreads is to start with a regular spread, corresponding to a Desarguesian plane, and then replace some “nice” subset of lines by another partial spread covering the same set of points. Indeed the first approach was replacing the lines of a regulus by the lines of its opposite regulus, or doing this process for a set of disjoint reguli. Nontrivial generalizations of this idea include the chains of Bruen and the nests of Baker and Ebert. In this paper we construct a replaceable subset of a regular spread of PG(3, 19) which is the union of 11 reguli double covering the lines in their union, hence is a chain in the terminology of Bruen or an 11-nest in the Baker-Ebert terminology.
    Type of Medium: Electronic Resource
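    For orientation, the standard counts (well-known facts about PG(3,q), not results of the paper), in LaTeX:

        q^2 + 1 = 19^2 + 1 = 362 \quad \text{lines in a spread of } PG(3,19),
        \qquad \frac{11\,(q+1)}{2} = \frac{11 \cdot 20}{2} = 110 \quad \text{lines doubly covered by the 11 reguli.}

    A regulus of PG(3,q) contains q+1 lines, so 11 reguli that double cover their union account for 110 of the 362 spread lines.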
  • 8
    Electronic Resource
    Springer
    Journal of automated reasoning 8 (1992), pp. 275-295 
    ISSN: 1573-0670
    Keywords: Automated theorem proving ; integration ; planning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract An investigation is made into the ways proof planning can enhance the capability of a rule based prover for the theory of integration. The integrals are of the Riemann type and are defined in a way to maximize the theorem proving methods of predicate calculus. Approximately fifty theorems have been proved and several examples are discussed. A major shortcoming was found to be the inability of the system to work with or produce a proof plan. As a result, a planning scheme based on the idea of subgoals or milestones was considered. With user defined plans, there was a substantial increase in performance and capability of the system and, in some cases, proofs which were previously unsuccessful were completed.
    Type of Medium: Electronic Resource
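    A hedged Python sketch of the subgoal/milestone idea (invented rules and interface, not the paper's prover): the user-supplied plan turns one deep search into several shallow ones, with each proved milestone added as a lemma:

        def prove(goal, facts, rules, depth=6):
            """Naive backward chaining; stands in for the rule-based prover."""
            if goal in facts:
                return True
            if depth == 0:
                return False
            return any(all(prove(g, facts, rules, depth - 1) for g in body)
                       for head, body in rules if head == goal)

        def prove_with_plan(plan, facts, rules):
            for milestone in plan:              # each hop is a shallower search
                if not prove(milestone, facts, rules):
                    return False
                facts = facts | {milestone}     # proved milestones become lemmas
            return True

        rules = [("integrable(f)", ["continuous(f)", "bounded(f)"]),
                 ("bounded(f)", ["continuous(f)", "closed_interval"])]
        facts = {"continuous(f)", "closed_interval"}
        print(prove_with_plan(["bounded(f)", "integrable(f)"], facts, rules))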
  • 9
    Electronic Resource
    Springer
    Real-time systems 3 (1991), pp. 67-99 
    ISSN: 1573-1383
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The Priority Ceiling Protocol (PCP) of Sha, Rajkumar and Lehoczky is a policy for locking binary semaphores that bounds priority inversion (i.e., the blocking of a job while a lower priority job executes), and thereby improves schedulability under fixed priority preemptive scheduling. We show how to extend the PCP to handle: multiunit resources, which subsume binary semaphores and reader-writer locks; dynamic priority schemes, such as earliest-deadline-first (EDF), that use static “preemption levels”; and sharing of runtime stack space between jobs. These extensions can be applied independently, or together. The Stack Resource Policy (SRP) is a variant of the PCP that incorporates the three extensions mentioned above, plus the conservative assumption that each job may require the use of a shared stack. This avoids unnecessary context switches and allows the SRP to be implemented very simply using a stack. We prove a schedulability result for EDF scheduling with the SRP that is tighter than the one proved previously for EDF with a dynamic version of the PCP. The Minimal SRP (MSRP) is a slightly more complex variant of the SRP, which has similar properties, but imposes less blocking. The MSRP is optimal for stack sharing systems, in the sense that it is the least restrictive policy that strictly bounds priority inversion and prevents deadlock for rate monotonic (RM) and earliest-deadline-first (EDF) scheduling.
    Type of Medium: Electronic Resource
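    The SRP's runtime rule, as usually stated, is that a job may begin executing only when its preemption level is strictly higher than the current system ceiling; a minimal Python sketch with invented data structures:

        # Each resource has a precomputed ceiling: the highest preemption
        # level of any job that may need it. The system ceiling is the
        # maximum over currently allocated resources.
        resource_ceiling = {"r1": 3, "r2": 2}
        allocated = set()

        def system_ceiling():
            return max((resource_ceiling[r] for r in allocated), default=0)

        def may_start(preemption_level):
            return preemption_level > system_ceiling()

        allocated.add("r1")        # some job holds r1 (ceiling 3)
        print(may_start(2))        # False: it could block after starting
        print(may_start(4))        # True: nothing it may need is held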
  • 10
    Electronic Resource
    Springer
    Real-time systems 7 (1994), pp. 159-182 
    ISSN: 1573-1383
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Predicting the execution times of straight-line code sequences is a fundamental problem in the design and evaluation of hard real-time systems. The reliability of system-level timings and schedulability analysis rests on the accuracy of execution time predictions for the basic schedulable units of work. Obtaining such predictions for contemporary microprocessors is difficult. This paper presents a new technique called micro-analysis for predicting point-to-point execution times on code segments. It uses machine-description rules, similar to those that have proven useful for code generation and peephole optimization, to translate compiled object code into a sequence of very low-level (micro) instructions. The stream of micro-instructions is then analyzed for timing, via a three-level pattern matching scheme. At this low level, the effect of advanced features such as instruction caching and overlap can be taken into account. This technique is compiler and language-independent, and retargetable. This paper also describes a prototype system in which the micro-analysis technique is integrated with an existing C compiler. This system predicts the bounded execution time of statement ranges or simple (non-nested) C functions at compile time.
    Type of Medium: Electronic Resource
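    A hedged Python sketch of the pattern-matching step (invented micro-ops and cycle counts, not the paper's machine-description rules): longer patterns are matched first so that overlap between adjacent micro-instructions can be credited in the bound:

        # Cycle bounds per pattern; the overlapped pair costs less than the
        # sum of its parts.
        CYCLES = {
            ("load", "add"): 3,
            ("load",): 2,
            ("add",): 2,
            ("store",): 2,
        }

        def bound_cycles(micro_ops):
            total, i = 0, 0
            while i < len(micro_ops):
                window = tuple(micro_ops[i:i + 2])
                if window in CYCLES:                 # longest match first
                    total += CYCLES[window]; i += 2
                else:
                    total += CYCLES[tuple(micro_ops[i:i + 1])]; i += 1
            return total

        print(bound_cycles(["load", "add", "store"]))   # 3 + 2 = 5 cycles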