ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
Electronic Resource
    Springer
The VLDB Journal 1 (1992), pp. 1-39
    ISSN: 0949-877X
Keywords: Algorithms ; performance ; reliability ; serializability ; deadlock recovery ; transaction log
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
Notes: Abstract This paper is concerned with the problem of integrating a number of existing off-the-shelf local database systems into a multidatabase system that maintains consistency in the face of concurrency and failures. The major difficulties in designing such systems stem from the requirements that local transactions be allowed to execute outside the multidatabase system's control, and that the various local database systems cannot participate in the execution of a global commit protocol. A scheme is developed based on the assumption that the component local database systems use the strict two-phase locking protocol. Two major problems are addressed: how to ensure global transaction atomicity without the provision of a commit protocol, and how to ensure freedom from global deadlocks.
    Type of Medium: Electronic Resource
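
    The scheme in this abstract builds on the assumption that every local DBMS runs strict two-phase locking. As background, here is a minimal sketch of that discipline (a hypothetical lock manager, not the paper's integration scheme): locks are acquired incrementally during the transaction and released only all at once at commit or abort.

    ```python
    import threading
    from collections import defaultdict

    class StrictTwoPhaseLocking:
        """Minimal, hypothetical strict 2PL lock manager. Real systems add
        shared lock modes, wait queues, and deadlock handling."""

        def __init__(self):
            self._mutex = threading.Lock()
            self._owner = {}               # data item -> owning transaction
            self._held = defaultdict(set)  # transaction -> its locked items

        def acquire(self, txn, item):
            """Return True if `txn` now holds `item`; False on conflict
            (the caller may wait, or abort to resolve a deadlock)."""
            with self._mutex:
                owner = self._owner.get(item)
                if owner is not None and owner != txn:
                    return False
                self._owner[item] = txn
                self._held[txn].add(item)
                return True

        def release_all(self, txn):
            """Called only at commit or abort, never earlier (the 'strict'
            part): every lock is released at a single point."""
            with self._mutex:
                for item in self._held.pop(txn, set()):
                    del self._owner[item]
    ```

    Under strict 2PL, the commit order of conflicting transactions is a valid serialization order, which is the kind of local guarantee a multidatabase layer can build on.
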
  • 2
Electronic Resource
    Springer
The VLDB Journal 1 (1992), pp. 181-239
    ISSN: 0949-877X
    Keywords: Multidatabase ; serializability ; recovery ; reliability ; two-level serializability ; transaction
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
Notes: Abstract A multidatabase system (MDBS) is a facility that allows users access to data located in multiple autonomous database management systems (DBMSs). In such a system, global transactions are executed under the control of the MDBS. Independently, local transactions are executed under the control of the local DBMSs. Each local DBMS integrated by the MDBS may employ a different transaction management scheme. In addition, each local DBMS has complete control over all transactions (global and local) executing at its site, including the ability to abort at any point any of the transactions executing at its site. Typically, no design or internal DBMS structure changes are allowed in order to accommodate the MDBS. Furthermore, the local DBMSs may not be aware of each other and, as a consequence, cannot coordinate their actions. Thus, traditional techniques for ensuring transaction atomicity and consistency in homogeneous distributed database systems may not be appropriate for an MDBS environment. The objective of this article is to provide a brief review of the most current work in the area of multidatabase transaction management. We first define the problem and argue that multidatabase research will become increasingly important in the coming years. We then outline basic research issues in multidatabase transaction management and review recent results in the area. We conclude with a discussion of open problems and practical implications of this research.
    Type of Medium: Electronic Resource
  • 3
Electronic Resource
    Springer
World Wide Web 3 (2000), pp. 65-77
    ISSN: 1573-1413
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
Notes: Abstract We describe the design of a system for fast and reliable HTTP service, which we call Web++. Web++ achieves high reliability by dynamically replicating web data among multiple web servers, and it selects the available server that is expected to provide the fastest response time. Furthermore, Web++ guarantees data delivery as long as at least one server containing the requested data is available. After a server failure is detected, client requests are satisfied by another server, transparently to the user. The Web++ architecture is also flexible enough to implement additional performance optimizations. We describe the implementation of one such optimization, batch resource transmission, whereby all resources embedded in an HTML page that are not cached by the client are sent to the client in a single response. Web++ is built on top of the standard HTTP protocol and requires neither changes to existing web browsers nor the installation of any software on the client side; Web++ clients are dynamically downloaded to web browsers as signed Java applets. We implemented a Web++ prototype; performance experiments indicate that a Web++ system with 3 servers improves the response time perceived by clients on average by 36.6%, and in many cases by as much as 59%, compared with current web performance. In addition, we show that batch resource transmission can improve the response time on average by 39% for clients with fast network connections and by 21% for clients with 56 Kbps modem connections.
    Type of Medium: Electronic Resource
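
    Two client-side behaviors from this abstract, server selection and transparent failover, can be illustrated with a small sketch. The moving-average latency estimate, the class name, and the URLs are assumptions for illustration, not Web++'s actual selection algorithm.

    ```python
    import time
    import urllib.request

    class ReplicaClient:
        """Fetch a resource from whichever replica is expected to respond
        fastest, falling back to the others on failure. The exponentially
        weighted latency estimate is an illustrative policy, not Web++'s."""

        def __init__(self, servers, alpha=0.3):
            self.latency = {s: 0.0 for s in servers}  # 0.0 = untried
            self.alpha = alpha

        def fetch(self, path):
            # Try servers in order of estimated latency; untried ones first.
            for server in sorted(self.latency, key=self.latency.get):
                start = time.monotonic()
                try:
                    with urllib.request.urlopen(server + path, timeout=5) as r:
                        body = r.read()
                except OSError:
                    self.latency[server] = float("inf")  # mark unavailable
                    continue
                elapsed = time.monotonic() - start
                old = self.latency[server]
                if old in (0.0, float("inf")):
                    self.latency[server] = elapsed
                else:
                    self.latency[server] = (1 - self.alpha) * old + self.alpha * elapsed
                return body
            raise OSError("no replica available")

    # Hypothetical usage:
    # client = ReplicaClient(["http://mirror1.example.org",
    #                         "http://mirror2.example.org"])
    # page = client.fetch("/index.html")
    ```
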
  • 4
Electronic Resource
    Springer
Distributed and Parallel Databases 4 (1996), pp. 319-354
    ISSN: 1573-7578
    Keywords: database ; hashing ; scalable ; distributed ; performance
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
Notes: Abstract The paper presents a family of distributed file structures, coined DiFS, for record-structured, disk-resident files with key-based exact or interval match access. The file is organized into buckets that are spread among multiple servers, where a server may hold several buckets. Client requests are serviced by mapping keys onto buckets and looking up the corresponding server in an address table. Dynamic growth, in terms of file size and access load, is supported by bucket splits and bucket migrations onto existing or newly created servers. The major problem that we address is achieving scalability in the sense that both the file size and the client throughput can be scaled up by linearly increasing the number of servers and dynamically redistributing the data. Unlike previous work with similar objectives, our data redistribution explicitly considers the cost/performance ratio of the system by aiming to minimize the number of servers used to provide the required performance. A new server is added only if the overall server load in the system does not drop below a pre-specified threshold. Simulation results demonstrate the scalability with controlled cost/performance and the importance of global load control. The impact of various tuning parameters on the effectiveness of the load control is studied in detail. Finally, we compare our approach with other approaches known to date and demonstrate that each of the previous approaches can be recast as a special case of our model.
    Type of Medium: Electronic Resource
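
    The growth mechanism described above (an address table, bucket splits, and servers added only while overall load stays above a threshold) can be sketched as follows. The range partitioning, the median split, the constants, and all names are hypothetical simplifications, not the actual DiFS structure; migrations of existing buckets are omitted.

    ```python
    import bisect

    BUCKET_CAPACITY = 4
    MIN_AVG_LOAD = 0.5   # add a server only while utilization stays above this

    class DiFSSketch:
        """Toy range-partitioned file over string keys. A full bucket splits
        at its median key; the new bucket goes to a brand-new server only if
        average bucket utilization stays above MIN_AVG_LOAD, otherwise to the
        least loaded existing server (global load control)."""

        def __init__(self):
            self.lows = [""]          # sorted lower bounds of bucket ranges
            self.buckets = [dict()]   # records held by each bucket
            self.server_of = [0]      # address table: bucket index -> server
            self.n_servers = 1

        def _index(self, key):
            return bisect.bisect_right(self.lows, key) - 1

        def lookup(self, key):
            return self.buckets[self._index(key)].get(key)

        def insert(self, key, record):
            i = self._index(key)
            self.buckets[i][key] = record
            if len(self.buckets[i]) > BUCKET_CAPACITY:
                self._split(i)

        def _server_load(self, s):
            return sum(len(b) for b, srv in zip(self.buckets, self.server_of)
                       if srv == s)

        def _split(self, i):
            keys = sorted(self.buckets[i])
            mid = len(keys) // 2
            new_bucket = {k: self.buckets[i].pop(k) for k in keys[mid:]}
            # Create a server only if utilization would stay above threshold.
            total = sum(len(b) for b in self.buckets) + len(new_bucket)
            if total >= MIN_AVG_LOAD * (len(self.buckets) + 1) * BUCKET_CAPACITY:
                target = self.n_servers
                self.n_servers += 1
            else:
                target = min(range(self.n_servers), key=self._server_load)
            self.lows.insert(i + 1, keys[mid])
            self.buckets.insert(i + 1, new_bucket)
            self.server_of.insert(i + 1, target)
    ```
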
  • 5
Electronic Resource
    Springer
Distributed and Parallel Databases 6 (1998), pp. 117-156
    ISSN: 1573-7578
    Keywords: networks of workstations ; scalable storage ; distributed database
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Networks of workstations are an emerging architectural paradigm for high-performance parallel and distributed systems. Exploiting networks of workstations for massive data management poses exciting challenges. We consider here the problem of managing record-structured data in such an environment. For example, managing collections of HTML documents on a cluster of WWW servers is an important application for which our approach provides support. The records are accessed by a dynamically growing set of clients based on a search key (e.g., a URL). To scale up the throughput of client accesses with approximately constant response time, the records and thus also their access load are dynamically redistributed across a growing set of workstations. The paper addresses two problems of realistic workloads: skewed access frequencies to the records and evolving access patterns where previously cold records may become hot and vice versa. Our solution incorporates load tracking at different levels of granularity and automatically chooses the appropriate granularity for dynamic data migrations. Experimental results based on a detailed simulation model show that our method is indeed successful in providing scalable cost/performance and explicitly controlling its level.
    Type of Medium: Electronic Resource
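
    A toy sketch of the load-tracking idea in this abstract: access counts decay over time so previously cold records can become hot and vice versa, and when server loads diverge too far, the hottest records migrate from the busiest server to the coolest. The decay and imbalance constants are invented, and the paper's adaptive tracking granularity is omitted here.

    ```python
    from collections import defaultdict

    DECAY = 0.9       # per-round decay so stale heat fades away
    IMBALANCE = 1.5   # migrate when the hottest server exceeds the coolest by 50%

    class HeatBalancer:
        """Track per-record access heat with exponential decay and migrate
        the hottest records off the most loaded server. Illustrative only."""

        def __init__(self, n_servers):
            self.home = {}                  # record key -> server
            self.heat = defaultdict(float)  # record key -> decayed count
            self.n_servers = n_servers

        def access(self, key):
            self.home.setdefault(key, hash(key) % self.n_servers)
            self.heat[key] += 1.0
            return self.home[key]

        def end_round(self):
            loads = [0.0] * self.n_servers
            for key, h in self.heat.items():
                loads[self.home[key]] += h
            hot = max(range(self.n_servers), key=loads.__getitem__)
            cold = min(range(self.n_servers), key=loads.__getitem__)
            if loads[hot] > IMBALANCE * loads[cold]:
                # Move hottest records from `hot` until roughly balanced.
                for key in sorted((k for k in self.heat if self.home[k] == hot),
                                  key=self.heat.get, reverse=True):
                    if loads[hot] - self.heat[key] < loads[cold] + self.heat[key]:
                        break
                    loads[hot] -= self.heat[key]
                    loads[cold] += self.heat[key]
                    self.home[key] = cold
            for key in self.heat:           # decay: access patterns may shift
                self.heat[key] *= DECAY
    ```
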
  • 6
Electronic Resource
    Springer
Acta Informatica 4 (1975), pp. 107-116
    ISSN: 1432-0525
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
Notes: Summary For Boolean functions whose variables reside in secondary storage, algorithms which minimize the expected cost of evaluation are considered. An easy-to-implement algorithm which gives nearly optimal results is proposed for the case of monotonic functions without a priori probabilities. Optimality proofs are given for some simple special cases.
    Type of Medium: Electronic Resource
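
    The underlying idea, deciding a monotone function's value while paying as little evaluation cost as possible, can be illustrated with a short-circuiting evaluator that probes cheaper variables first. The greedy cheapest-first order is only a heuristic stand-in, not the paper's near-optimal algorithm, and all names are invented.

    ```python
    def eval_monotone_dnf(dnf, costs, read):
        """Evaluate a monotone Boolean function given as a DNF (a list of
        AND-terms, each a set of variable names), reading each variable at
        most once via read(var) -- standing in for a fetch from secondary
        storage -- and probing cheaper variables first."""
        value = {}  # variables read so far

        def term_status(term):
            # True: term satisfied; False: term dead; None: still undecided.
            if any(value.get(v) is False for v in term):
                return False
            if all(value.get(v) is True for v in term):
                return True
            return None

        while True:
            statuses = [term_status(t) for t in dnf]
            if any(s is True for s in statuses):
                return True   # some term is fully true, so f = 1
            live = [t for t, s in zip(dnf, statuses) if s is None]
            if not live:
                return False  # every term contains a false variable
            unread = {v for t in live for v in t if v not in value}
            v = min(unread, key=costs.get)  # probe the cheapest useful variable
            value[v] = read(v)

    # Example: f = (a AND b) OR c, with c cheapest -- c is probed first and
    # alone decides the result, so the expensive a and b are never fetched.
    # print(eval_monotone_dnf([{"a", "b"}, {"c"}],
    #                         {"a": 5.0, "b": 7.0, "c": 1.0},
    #                         read=lambda v: v == "c"))
    ```
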
  • 7
    Publication Date: 2008-05-17
    Print ISSN: 0219-1377
    Electronic ISSN: 0219-3116
    Topics: Computer Science
    Published by Springer