ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Electronic Resource
    Springer
    Multimedia systems 5 (1997), pp. 213-237
    ISSN: 1432-1882
    Keywords: Continuous media – I/O scheduling – Frame-oriented streams – VCR-like operations
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract. A growing set of applications require access to digital video and audio. In order to provide playback of such continuous media (CM), scheduling strategies for CM data servers (CMS) are necessary. In some domains, particularly defense and industrial process control, the timing requirements of these applications are strict and essential to their correct operation. In this paper we develop a scheduling strategy for multiple access to a CMS such that the timing guarantees are maintained at all times. First, we develop a scheduling strategy for the steady state, i.e., when there are no changes in playback rate or operation. We derive an optimal Batched SCAN (BSCAN) algorithm that requires minimum buffer space to schedule concurrent accesses. The scheduling strategy incorporates two key constraints: (1) data fetches from the storage system are assumed to be in integral multiples of the block size, and (2) playback guarantees are ensured for frame-oriented streams when each frame can span multiple blocks. We discuss modifications to the scheduling strategy to handle compressed data like motion-JPEG and MPEG. Second, we develop techniques to handle dynamic changes brought about by VCR-like operations executed by applications. We define a suite of primitive VCR-like operations that can be executed. We show that an unregulated change in the BSCAN schedule, in response to VCR-like operations, will affect playback guarantees. We develop two general techniques to ensure playback guarantees while responding to VCR-like operations: passive and active accumulation. Using user response time as a metric we show that active accumulation algorithms outperform passive accumulation algorithms. An optimal response-time algorithm in a class of active accumulation strategies is derived. The results presented here are validated by extensive simulation studies.
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    Springer
    Multimedia systems 8 (2000), pp. 105-117
    ISSN: 1432-1882
    Keywords: Multimedia – Disk scheduling – Data retrieval – Buffer management – Synchronization
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract. Excessive buffer requirements for handling continuous-media playbacks are an impediment to cost-effective provisioning for on-line video retrieval. Given the skewed distribution of video popularity, it is expected that there are often concurrent playbacks of the same video file within a short time interval. This creates an opportunity to batch multiple requests and to service them with a single stream from the disk without violating the on-demand constraint. Doing so, however, requires keeping data in memory between successive uses, which leads to a buffer-space trade-off between servicing a request in memory mode vs. servicing it in disk mode. In this work, we develop a novel algorithm to minimize the buffer requirement to support a set of concurrent playbacks. A notable strength of the proposed scheme is that it enables the server to dynamically adapt to the changing workload while minimizing the total buffer space requirement. Our algorithm makes a significant contribution in decreasing the total buffer requirement, especially when the user access pattern is biased in favor of a small set of files. The idea of the proposed scheme is modeled in detail using an analytical formulation, and the optimality of the algorithm is proved. An analytical framework is developed so that the proposed scheme can be used in combination with various existing disk-scheduling strategies. Our simulation results confirm that under certain circumstances it is much more resource-efficient to support some of the playbacks in memory mode, and the proposed scheme thereby enables the server to minimize the overall buffer space requirement.
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Multimedia systems 7 (1999), pp. 486-499
    ISSN: 1432-1882
    Keywords: Quality of service – User studies – Media losses – Metrics
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract. Perception of multimedia quality, specified by quality-of-service (QoS) metrics, can be used by system designers to optimize customer satisfaction within resource bounds enforced by general-purpose computing platforms. Media losses, rate variations and transient synchronization losses have been suspected to affect human perception of multimedia quality. This paper presents metrics to measure such defects, and results of a series of user experiments that justify such speculations. Results of the study provide bounds on losses, rate variations and transient synchronization losses as a function of user satisfaction, in the form of Likert values. It is shown how these results can be used by algorithm designers of underlying multimedia systems.
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Springer
    Annals of software engineering 8 (1999), pp. 289-311
    ISSN: 1573-7489
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract As information services become increasingly ubiquitous and are increasingly paid for, users demand the ability to choose the quality of an information service based on its cost. In this environment, delivering the right information to the right user at the right time, with appropriate quality, is the challenge; in addition, resource optimizations must be performed to the degree possible. In this paper we introduce the concept of information-quality-based system evaluation. The key contribution is a framework in which user/application needs for information quality can be quantified in terms of rigorously defined metrics, existing systems can be evaluated against these metrics, and the trade-offs between information quality and its cost of provisioning can be examined. Further, this provides an approach to developing information-quality-aware policies and mechanisms for the system. This framework has been partially validated by applying it to the provisioning of continuous media services, i.e., audio and video, in a distributed environment, and the paper presents that case study in detail. Finally, the paper outlines a research program in the area of information-quality-based system evaluation.
    Type of Medium: Electronic Resource
  • 5
    Electronic Resource
    Springer
    Multimedia tools and applications 3 (1996), pp. 127-166
    ISSN: 1573-7721
    Keywords: continuous media ; multimedia ; QoS ; synchronization ; media mixing
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper presents quality of service (QoS) metrics for continuity and synchronization specifications in continuous media (CM). Proposed metrics specify continuity and synchronization, with tolerable limits on average and bursty defaults from perfect continuity, timing and synchronization constraints. These metrics can be used in a distributed environment for resource allocation. Continuity specification of a CM stream consists of its sequencing, display rate and drift profiles. The sequencing profile of a CM stream consists of tolerable aggregate and consecutive frame miss ratios. Rate profiles specify the average rendition rate and its variation. Given a rate profile, the ideal time unit for frame display is determined as an offset from the beginning of the stream. Drift profiles specify the average and bursty deviation of schedules for frames from such fixed points in time. Synchronization requirements of a collection of CM streams are specified by mixing, rate and synchronization drift profiles. Mixing profiles specify vectors of frames that can be displayed simultaneously; they consist of average and bursty losses of synchronization. Rate profiles consist of average rates and permissible deviations thereof. Synchronization drift profiles specify permissible aggregate and bursty time drifts between schedules of simultaneously displayable frames. It is shown that rate profiles of a collection of synchronized streams are definable in terms of rate profiles of its component streams. It is also shown that mixing and drift profiles of a collection of streams are non-definable in terms of sequencing and drift profiles of its constituents. An important consequence of the mutual independence of synchronization and continuity specification is that, in a general-purpose platform with limited resources, synchronized display of CM streams may require QoS tradeoffs. An algorithm that makes such tradeoffs is presented as a proof of applicability of our metrics in a realistic environment.
    Type of Medium: Electronic Resource
  • 6
    Electronic Resource
    Springer
    Multimedia tools and applications 8 (1999), pp. 249-283
    ISSN: 1573-7721
    Keywords: video caching ; storage overflow ; video scheduling ; continuous media delivery ; distributed service ; cost model
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract The per-service cost has been a serious impediment to widespread use of on-line digital continuous media service, especially in the entertainment arena. Although handling continuous media has become feasible thanks to technology advances of the past few years, its competitiveness in the market with existing service types such as video rental is still in question. In this paper, we propose a model for continuous media service in a distributed infrastructure that has a video warehouse and intermediate storages connected via a high-speed communication network, in an effort to reduce the resource requirement to support a set of service requests. The storage and network resources needed to support a set of requests should be properly quantified into a uniform metric to measure the efficiency of the service schedule. We developed a cost model which maps a given service schedule to a quantity; the model captures the amortized resource requirement of the schedule and thus measures its efficiency. The video warehouse is a massive-scale continuous media server; an intermediate storage is located in each neighborhood, and its main purpose is to avoid the repeated delivery of the same file to that neighborhood. We consider a situation where a request for a video file is made some time in advance, and we develop a scheduling algorithm which strategically replicates the requested continuous media files at the various intermediate storages.
    Type of Medium: Electronic Resource
  • 7
    Electronic Resource
    Springer
    Distributed and parallel databases 2 (1994), pp. 151-182
    ISSN: 1573-7578
    Keywords: Transaction management ; federated databases ; transaction recovery ; global serializability ; local recoverability
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Transaction management for federated database systems (FDBSs), where each participating DBMS is autonomous, supports global transactions that can access more than one database. A number of concurrency control algorithms exist for the failure-free environment. Handling transaction failure, due to concurrency control or site-related reasons, becomes especially difficult in the presence of autonomy. Due to local autonomy, carrying out two-phase commit may not be possible; it can be simulated by providing a server on top of the pre-existing DBMS at each site, which is responsible for submitting the local operations to the associated DBMS and for communicating with the transaction's originating site. In this paper we formalize the problem of ensuring transaction consistency in an FDBS environment in the presence of failure. The key problem is that, due to autonomy, the local DBMS and the FDBS may have different views of an execution sequence generated at a site. Local recoverability is identified as the property of local execution sequences necessary for correctness. The other main problem is that of guaranteeing that the various locally recoverable histories are consistent with each other. These identified properties are necessary and sufficient conditions for the correctness of FDBS recovery algorithms, and can be used to evaluate the correctness of the proposed algorithms. This paper also presents an FDBS recovery algorithm that has been proved to be correct. Formal proofs of all properties and a comparison of different algorithms are provided.
    Type of Medium: Electronic Resource
  • 8
    Electronic Resource
    Springer
    Distributed and parallel databases 3 (1995), pp. 273-307
    ISSN: 1573-7578
    Keywords: multidatabase query ; integration operation ; algebraic transformation ; constrained query tree ; outerjoin graph
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Existence of semantic conflicts between component databases severely impacts query processing in a multidatabase system. In this paper, we describe two types of semantic conflicts that have to be dealt with in the integration of databases modeling information about related sets of real-world entities. These are the entity identification problem and the attribute value conflict problem. While the two-way outerjoin operation has been commonly used for resolving the entity identification problem between two component relations, outerjoins using regular equality comparisons between component relation keys are shown to produce counter-intuitive entity identification results. We remedy this by defining a new key-equality comparator, in place of the regular equality comparator, for outerjoins. For the attribute value conflict problem, we define a Generalized Attribute Derivation (GAD) operation which allows user-defined attribute derivation functions to be used to compute new attributes from the component relations' attributes. By adding the two-way outerjoin and GAD to the set of relational operations, the traditional algebraic transformation framework for relational queries is no longer adequate for multidatabase query processing and optimization. As a result, we introduce the constrained query tree as the multidatabase query representation. We show that some knowledge about query predicates and attribute derivation functions can be used to simplify queries. Such knowledge is modeled as an outerjoin graph attached to every outerjoin operation in the query tree. Based on this, we further extend the traditional algebraic transformation framework to include two-way outerjoins and GAD operations. Our framework demonstrates that properties of selection/join predicates and attribute derivation functions can be used to provide interesting transformation alternatives. This framework also serves as a formal ground for developing optimization strategies for multidatabase queries.
    Type of Medium: Electronic Resource
  • 9
    Electronic Resource
    Springer
    Journal of systems integration 8 (1998), pp. 5-30
    ISSN: 1573-8787
    Keywords: federated query processing ; database integration ; object-oriented view
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract In this paper, we present a federated query processing approach to evaluate queries on an Object-Oriented (OO) federated database. This approach has been designed and implemented in the OO-Myriad project, an OO extension to the Myriad FDBS research. Since data integration is performed as part of federated query processing, we have proposed outerjoin, outer-difference and generalized attribute derivation operations, together with the traditional relational operations, to be used for integration purposes. To define an OO federated database as a virtual view on multiple OO export databases, we adopt a database mapping strategy that systematically derives each of the class extents, deep class extents and relationships of the federated database using an operator tree consisting of the integration operations. By augmenting federated database queries with this algebraic mapping information, query execution plans can be generated. Based on the original Myriad query processing framework, we have realized the proposed OO federated query processing approach in the OO-Myriad prototype.
    Type of Medium: Electronic Resource
  • 10
    Publication Date: 2012-10-20
    Description: We propose using side information to further inform anomaly detection algorithms of the semantic context of the text data they are analyzing, thereby considering both divergence from the statistical pattern seen in particular datasets and divergence seen from more general semantic expectations. Computational experiments show that our algorithm performs as expected on data that reflect real-world events with contextual ambiguity, while replicating conventional clustering on data that are either too specialized or generic to result in contextual information being actionable. These results suggest that our algorithm could potentially reduce false positive rates in existing anomaly detection systems.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
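The round-based batched-SCAN (BSCAN) idea summarized in result 1 can be sketched in a few lines. This is only a minimal illustration: the paper derives a buffer-optimal batching, while the sketch below shows just the two ingredients named in the abstract, namely integral block fetches per stream and SCAN-ordered service within a round. All names and numbers are invented.

```python
# Hypothetical sketch of one BSCAN service round: each active stream is
# granted the integral number of blocks it needs to stay ahead of playback
# (constraint 1: fetches are integral multiples of the block size), and the
# round's batch is issued in ascending block-address order (SCAN).
import math
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    next_block: int        # disk address of the stream's next block
    bytes_per_round: int   # playback consumption per service round
    block_size: int

    def blocks_needed(self) -> int:
        # Round up to a whole number of blocks.
        return math.ceil(self.bytes_per_round / self.block_size)

def round_fetches(streams):
    """Return one round's fetch list, sorted by block address (SCAN order)."""
    fetches = []
    for s in streams:
        n = s.blocks_needed()
        fetches.extend((s.next_block + i, s.name) for i in range(n))
        s.next_block += n
    return sorted(fetches)

streams = [Stream("A", 100, 6_000, 4_096), Stream("B", 40, 10_000, 4_096)]
plan = round_fetches(streams)  # B's blocks (40..42) are served before A's (100..101)
```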
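The memory-mode vs. disk-mode trade-off behind result 2 can be illustrated with a toy cost comparison. The paper proves optimality under a detailed analytical model; in the sketch below the linear cost model, the function name, and all numbers are assumptions chosen only to make the trade-off concrete.

```python
# Hypothetical sketch: two nearby requests for the same video can share one
# disk stream if the trailing request replays data held in memory. Bridging
# the inter-arrival gap pins roughly gap * playback_rate bytes of buffer,
# while a separate disk stream costs a fixed per-stream staging buffer.
def mode_per_request(arrivals, rate, per_stream_buffer):
    """For each request after the first, pick the cheaper service mode."""
    decisions = []
    for prev, cur in zip(arrivals, arrivals[1:]):
        memory_cost = (cur - prev) * rate   # bytes pinned to bridge the gap
        decisions.append("memory" if memory_cost < per_stream_buffer else "disk")
    return decisions

# Requests at t = 0, 2, 3, 60 s; 1 MB/s playback; 8 MB staging buffer per
# stream: the 2 s and 1 s gaps are bridged in memory, the 57 s gap is not.
modes = mode_per_request([0, 2, 3, 60], rate=1_000_000, per_stream_buffer=8_000_000)
```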
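The sequencing profile from result 5, with its tolerable aggregate and consecutive frame-miss ratios, lends itself to a small sketch. The function name, trace encoding, and thresholds below are hypothetical, not taken from the paper.

```python
# Hypothetical check of a playback trace (True = frame shown, False = frame
# missed) against a sequencing profile: an aggregate miss-ratio bound and a
# bound on the longest run of consecutive misses.
def within_sequencing_profile(trace, max_aggregate_ratio, max_consecutive):
    misses = trace.count(False)
    aggregate_ok = misses / len(trace) <= max_aggregate_ratio
    run, worst = 0, 0
    for hit in trace:
        run = 0 if hit else run + 1   # length of the current miss burst
        worst = max(worst, run)
    return aggregate_ok and worst <= max_consecutive

trace = [True, True, False, False, True, True, True, False, True, True]
ok = within_sequencing_profile(trace, max_aggregate_ratio=0.3, max_consecutive=2)
```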
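Result 8's two integration operations, a two-way outerjoin that pairs tuples purely by key equality and a Generalized Attribute Derivation (GAD) that reconciles conflicting attribute values via a user-defined function, can be mimicked over toy relations. The dictionary encoding, relations, and derivation function below are invented for illustration only.

```python
# Hypothetical sketch: relations are dicts keyed by entity key. The outerjoin
# matches rows on keys alone (the "key-equality" comparator), keeping
# unmatched rows from either side; GAD then derives one federated attribute
# per entity from the pair of component values.
def key_outerjoin(r1, r2):
    """Full outerjoin of key->row dicts, pairing rows purely by key equality."""
    return {k: (r1.get(k), r2.get(k)) for k in r1.keys() | r2.keys()}

def gad(joined, derive):
    """Apply a user-defined derivation function to each joined pair."""
    return {k: derive(a, b) for k, (a, b) in joined.items()}

db1 = {"e1": {"salary": 50_000}, "e2": {"salary": 60_000}}
db2 = {"e2": {"salary": 61_000}, "e3": {"salary": 70_000}}

# Resolve the attribute-value conflict by preferring db2, falling back to db1.
salaries = gad(key_outerjoin(db1, db2), lambda a, b: (b or a)["salary"])
```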