ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Articles  (1,210)
  • English  (1,198)
  • Italian  (12)
  • 2005-2009  (1,210)
  • 1
    Publication Date: 2024-04-19
    Description: This paper reports on the results of an independent evaluation of the techniques presented in the VLDB 2007 paper "Scalable Semantic Web Data Management Using Vertical Partitioning", authored by D. Abadi, A. Marcus, S. R. Madden, and K. Hollenbach [1]. We revisit the proposed benchmark and examine both the data and query space coverage. The benchmark is extended to cover a larger portion of the query space in a canonical way. Repeatability of the experiments is assessed using the code base obtained from the authors. Inspired by the proposed vertically-partitioned storage solution for RDF data and the performance figures using a column-store, we conduct a complementary analysis of state-of-the-art RDF storage solutions. To this end, we employ MonetDB/SQL, a fully functional open-source column-store, and a commercial row-store DBMS well known for its performance. We implement two relational RDF storage solutions, triple-store and vertically-partitioned, in both systems. This allows us to expand the scope of [1] with a performance characterization along both dimensions (triple-store vs. vertically-partitioned, and row-store vs. column-store) individually, before analyzing their combined effects. A detailed report of the experimental test-bed, as well as an in-depth analysis of the parameters involved, clarifies the scope of the solution originally presented and positions the results in a broader context by covering more systems.
    Language: English
    Type: info:eu-repo/semantics/article
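The vertically-partitioned layout this abstract evaluates can be sketched in a few lines: the single triple table is split into one narrow (subject, object) table per predicate, so a query touching two predicates becomes a subject-subject join of two small tables rather than a self-join on the whole triple table. This is an illustrative sketch under invented sample data, not the paper's implementation.

```python
from collections import defaultdict

# Toy triple table; subjects, predicates, and objects are invented.
triples = [
    ("book1", "dc:title", "Semantic Web"),
    ("book1", "dc:creator", "Abadi"),
    ("book2", "dc:title", "Column Stores"),
]

def vertically_partition(triples):
    """Split the triple table into one (subject, object) table per
    predicate: the vertically-partitioned storage layout."""
    tables = defaultdict(list)
    for s, p, o in triples:
        tables[p].append((s, o))
    return dict(tables)

tables = vertically_partition(triples)

# A two-predicate query becomes a subject-subject join of two narrow
# tables instead of a self-join on the full triple table.
titles = dict(tables["dc:title"])
creators = dict(tables["dc:creator"])
joined = {s: (titles[s], creators[s]) for s in titles.keys() & creators.keys()}
```

In a column-store such per-predicate tables can additionally be kept sorted on subject, which is what makes the merge-join-style evaluation cheap.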
  • 2
    Publication Date: 2024-04-19
    Description: By leveraging modern networking hardware (RDMA-enabled network cards), we can shift priorities in distributed database processing significantly. Complex and sophisticated mechanisms to avoid network traffic can be replaced by a scheme that takes advantage of the bandwidth and low latency offered by such interconnects. We illustrate this phenomenon with cyclo-join, an efficient join algorithm based on continuously pumping data through a ring-structured network. Our approach can exploit the resources of all CPUs and all distributed main memory available in the network to process queries of arbitrary shape and datasets of arbitrary size.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject
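The ring-based join the abstract describes can be illustrated with a toy single-process simulation: each node keeps its fragment of relation R in place while the fragments of S rotate around the ring, joining locally at every hop; after one full cycle every S tuple has met every R fragment. The fragment layout and data are invented for illustration.

```python
def cyclo_join(r_fragments, s_fragments):
    """Node i holds r_fragments[i] (static) and starts with
    s_fragments[i]; the S fragments rotate around the ring, and after
    len(nodes) hops the full equi-join on the first field is produced."""
    n = len(r_fragments)
    results = []
    rotating = list(s_fragments)
    for _ in range(n):
        for i in range(n):
            # local hash join at node i on the join key
            r_index = {}
            for key, rv in r_fragments[i]:
                r_index.setdefault(key, []).append(rv)
            for key, sv in rotating[i]:
                for rv in r_index.get(key, []):
                    results.append((key, rv, sv))
        # hand each S fragment on to the next node in the ring
        rotating = rotating[-1:] + rotating[:-1]
    return results
```

In the real system each rotation is an RDMA transfer overlapping with the local join work; here the rotation is just a list shift.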
  • 3
    Publication Date: 2024-04-19
    Description: This paper presents our experiences in porting the Sloan Digital Sky Survey (SDSS) SkyServer to the state-of-the-art open-source database system MonetDB/SQL. SDSS acts as a well-documented benchmark for scientific database management. We have achieved a fully functional prototype of the personal SkyServer, available for download from our site. The lessons learned are: 1) the column-store approach of MonetDB shows great potential in the world of scientific databases, although the application also challenged the functionality of our implementation and revealed that a fully operational SQL environment is needed, e.g. one including persistent stored modules; 2) the initial performance is competitive with the reference platform, MS SQL Server 2005; and 3) the analysis of SDSS query traces hints at several techniques to boost performance by exploiting repetitive behavior and zoom-in/zoom-out access patterns that are currently not captured by the system.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject
  • 4
    Publication Date: 2024-04-19
    Description: Stream applications have gained significant popularity in recent years, leading to the development of specialized stream engines. These systems are designed from scratch, with a different philosophy than today's database engines, in order to cope with the requirements of stream applications. However, this means they lack the power and sophisticated techniques of a full-fledged database system that exploits techniques and algorithms accumulated over many years of database research. In this paper, we take the opposite route and design a stream engine directly on top of a database kernel. Incoming tuples are stored upon arrival in a new kind of system table, called a basket. A continuous query can then be evaluated over its relevant baskets as a typical one-time query, exploiting the power of the relational engine. Once a tuple has been seen by all relevant queries/operators, it is dropped from its basket. A basket can be the input to a single query plan or to multiple similar ones. Furthermore, a query plan can be split into multiple parts, each with its own input/output baskets, allowing for flexible load sharing and query scheduling. Contrary to traditional stream engines, which process one tuple at a time, this model allows batch processing of tuples, e.g., querying a basket only after x tuples arrive or after a time threshold has passed. Furthermore, we are not restricted to processing tuples in the order they arrive; instead, we can selectively pick tuples from a basket based on the query requirements, exploiting a novel query component, the basket expressions. We investigate the opportunities and challenges that arise with such a direction and show that it carries significant advantages. We propose a complete architecture, the DataCell, which we implemented on top of an open-source column-oriented DBMS. A detailed analysis and experimental evaluation of the core algorithms, using both micro-benchmarks and the standard Linear Road benchmark, demonstrate the potential of this new approach.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject
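The basket concept can be sketched as a buffered table plus an ordinary one-time query evaluated over it, here triggered only once a batch threshold is reached, after which the consumed tuples are dropped. The class name, the threshold trigger, and the aggregate used are illustrative assumptions, not the DataCell API.

```python
class Basket:
    """A buffer table for incoming stream tuples (a toy 'basket')."""

    def __init__(self, batch_size):
        self.rows = []
        self.batch_size = batch_size

    def insert(self, row):
        self.rows.append(row)

    def ready(self):
        # batch trigger: evaluate only after batch_size tuples arrived
        return len(self.rows) >= self.batch_size

    def drain(self):
        # tuples seen by all relevant queries are dropped from the basket
        batch, self.rows = self.rows, []
        return batch

def continuous_avg(basket):
    """A 'continuous' query run as an ordinary one-time query over the
    basket's current batch; returns None until the batch is full."""
    if not basket.ready():
        return None
    batch = basket.drain()
    return sum(batch) / len(batch)
```

A time-based trigger or a basket expression selecting tuples out of arrival order would slot in where `ready()` is checked.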
  • 5
    Publication Date: 2024-02-28
    Description: Landslide activity often significantly alters the spectral signal of the Earth's surface through the displacement and disruption of the surficial vegetation layer. This makes landslide areas targets for detection and mapping from spaceborne data. We employed data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) to detect and map landslides that were triggered by the 2005 Kashmir earthquake. Digital Elevation Models (DEMs) generated from ASTER stereo data, together with DEM data from the Shuttle Radar Topography Mission (SRTM), were used to calculate volume estimates for individual landslides. Orthophotos and DEMs were generated from corrected level 1B data. Orientation of the 3N and corresponding 3B band from GCPs, transformation to epipolar geometry, parallax matching, and parallax-to-DEM conversion were performed using the PCI Geomatica 10.1 Orthoengine software. Orthophotos and DEMs were then integrated into a GIS (ArcGIS 9.2), where landslides were automatically detected and mapped using different band thresholds, band ratios, change detection methods and selected terrain information. The resulting landslide inventory data were used to supplement and extend an existing field data inventory.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject
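The volume-estimation step, differencing pre- and post-event DEMs over the mapped landslide cells, can be sketched as below. The grids, the mask, and the cell size are invented, and this toy counts only material loss (positive elevation differences), ignoring deposition.

```python
def landslide_volume(dem_pre, dem_post, mask, cell_area):
    """Estimate displaced volume as the sum of (pre - post) elevation
    differences over cells flagged as landslide, times the area of one
    grid cell. Only elevation loss is counted."""
    volume = 0.0
    for i, row in enumerate(mask):
        for j, inside in enumerate(row):
            if inside:
                diff = dem_pre[i][j] - dem_post[i][j]
                if diff > 0:
                    volume += diff * cell_area
    return volume
```

With real ASTER/SRTM grids the same computation runs per landslide polygon, with the mask coming from the automatic detection step.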
  • 6
    Publication Date: 2023-12-08
    Keywords: ddc:600
    Repository Name: Wuppertal Institut für Klima, Umwelt, Energie
    Language: English
    Type: conferenceObject, doc-type:conferenceObject
  • 7
    Publication Date: 2023-06-16
    Description: The Aegean region overlies a complex tectonic regime that experiences a wide diversity of earthquake behaviour, with enormous disparity in focal mechanism and spatio-temporal distribution. Multiple random earthquake simulations, via Monte Carlo simulation, offer the opportunity to analyse seismic hazard across the Aegean, whilst still allowing for uncertainty in various parameters such as frequency-magnitude relation, maximum magnitude (Mmax) and attenuation relation. They may also enable meaningful determination of hazard (in terms of Peak Ground Acceleration (PGA) and MSK Intensity) with exceedance probabilities significantly smaller than those currently used in standard probabilistic seismic hazard assessment (PSHA) techniques. In addition, these simulations can also be used to conduct sensitivity analyses that act as a verification process, allowing assumptions regarding the seismic hazard parameters to be continually tested. A catalogue of earthquakes in the Aegean (1900-1999 AD) is used as a basis for fitting appropriate models of spatial distribution, frequency-magnitude relation and maximum magnitude. This has been achieved by random re-sampling of the catalogue, and by random sampling from a Gutenberg-Richter relation fitted to the observed data. Simple hazard analyses for five cities within the Aegean have been undertaken using the earthquake simulations. PGA has been determined using appropriate attenuation relations, and its variability quantified. For each site, the PGA with a 10% probability of exceedance in 50 years is largely consistent with those of current hazard analyses. This may give the user additional confidence in the hazard determined for lower exceedance probabilities.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject
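The core of such a Monte Carlo approach, sampling event magnitudes from a truncated Gutenberg-Richter relation and converting each synthetic event into a peak ground acceleration, might be sketched as below. The b-value, event rate, fixed 30 km distance, and toy attenuation form are all illustrative assumptions, not values from the study.

```python
import math
import random

def sample_magnitude(m_min, m_max, b, rng=random):
    """Inverse-transform sample from a Gutenberg-Richter magnitude
    distribution truncated to [m_min, m_max]."""
    u = rng.random()
    beta = b * math.log(10)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

def simulate_max_pga(years, annual_rate, m_min=4.0, m_max=7.5, b=1.0, rng=random):
    """Largest PGA (in g) at one site over a simulated period, using a
    toy attenuation: log10(PGA) = -1.5 + 0.3*M - log10(R), R = 30 km."""
    n_events = int(years * annual_rate)  # expected count (Poisson mean)
    best = 0.0
    for _ in range(n_events):
        m = sample_magnitude(m_min, m_max, b, rng)
        best = max(best, 10 ** (-1.5 + 0.3 * m - math.log10(30.0)))
    return best

def pga_at_exceedance(prob, n_sims, years, annual_rate, rng=random):
    """PGA exceeded with the given probability in the period, e.g.
    prob=0.1 and years=50 for the classic 10%-in-50-years level."""
    maxima = sorted(simulate_max_pga(years, annual_rate, rng=rng)
                    for _ in range(n_sims))
    return maxima[min(int((1.0 - prob) * n_sims), n_sims - 1)]
```

A full analysis would also randomise the event locations and draw the per-period event counts from a Poisson distribution rather than using the mean.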
  • 8
    Publication Date: 2023-06-16
    Description: The selection of specific uniform seismic source zones for use in probabilistic seismic hazard analysis is often controversial. Recognizing that a consistent approach to source model development is not always possible, as the information available relating to geology and seismotectonics can vary from region to region, the K-means algorithm for hierarchical cluster analysis can be used to partition regions based on observed seismicity. The Aegean [incorporating Greece, Albania, Former Yugoslav Republic of Macedonia (F.Y.R.O.M.), southern Bulgaria and western Turkey], with its varied seismotectonics and generally high seismicity, is used as an important area of seismicity in which to develop and demonstrate the application of K-means. Two types of algorithm are considered. The first is a point-source K-means that can be used to partition a catalogue of earthquake hypocentres. The second is a novel line-source development of the algorithm, appropriate in seismology as line sources are analogues for the traces of active faults; it is applied to a catalogue of known fault ruptures in the Aegean. The common problems of the K-means methodology are also addressed. Ensemble analyses are used to identify better choices of initial estimates for the cluster centres. A cluster quality index is used to identify the optimum number of clusters, and its robustness assessed when considering different subsets of the observed earthquake catalogue. An alternative approach is also implemented: Monte Carlo seismic hazard analysis is used to compare models with different numbers of clusters against the observed seismicity of the 20th century. Considerable variation is found in the optimum number of clusters identified either by the quality index or by stochastic seismic hazard analysis. Ultimately, the K-means partitions of seismicity are developed into source models and their representation of Aegean seismotectonics assessed. The result is that models containing between 20 and 30 clusters emerge as the most appropriate in capturing the spatial variation in hypocentral distribution and fault type in the Aegean.
    Language: English
    Type: info:eu-repo/semantics/article
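The point-source variant, K-means over epicentre coordinates, might be sketched as plain Lloyd's algorithm. For determinism this toy uses the first K points as initial centres, whereas, as the abstract notes, real runs repeat from many initial estimates (ensemble analyses) and score the resulting partitions with a quality index; the coordinates and K here are invented.

```python
import math

def kmeans(points, k, iters=100):
    """Lloyd's algorithm over 2-D points (e.g. epicentre
    longitude/latitude pairs). Initial centres are the first k points,
    which keeps this toy deterministic."""
    centres = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[i].append(p)
        # update step: move each centre to its cluster mean
        new = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centres[i]
               for i, c in enumerate(clusters)]
        if new == centres:
            break
        centres = new
    return centres, clusters
```

The line-source variant replaces the point-to-centre distance with a point-to-segment (or segment-to-segment) distance, but the assignment/update loop is the same.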
  • 9
    Publication Date: 2023-06-16
    Language: English
    Type: info:eu-repo/semantics/article
  • 10
    Publication Date: 2023-06-16
    Description: Applying alternative approaches to seismic hazard assessment is instructive, as it allows learning from their different outcomes. These outcomes may be mutually reinforcing or diverge, suggest that further study and research are needed, or provide new insights into old problems. Here, island-scale seismic hazard for Java is considered by applying different probabilistic approaches. Results from two distinct methods are provided: 1) primary zoning using K-means partitioning of seismicity into spatial clusters (progressed into zones), which are then developed into seismic hazard maps using Monte Carlo earthquake catalogue simulation, and 2) extreme value analysis applied at a matrix of points throughout a zoneless Java. The latter approach has been used before; the former, which partitions seismicity into spatial clusters prior to Monte Carlo modelling, is novel. The earthquake catalogue analysed is NEIC (1973-2006). This catalogue is homogenised to the moment magnitude scale MW, and Poisson declustering of foreshocks and aftershocks is applied. The completeness threshold is around 4.9 MW. Shallow earthquakes down to 80 km depth contribute most to the hazard and are partitioned into 1 to K trial clusters of seismicity by minimising the total within-cluster distance from seed centroids. Repeated trials produce an optimum partition. A variety of indices can be invoked to try to quantify cluster quality for a given K; in addition, the best value of K is sought by testing the influence of K on the ensuing seismic hazard analyses. Monte Carlo synthesis generates synthetic catalogues for each K value, from which peak ground acceleration (PGA) hazards are calculated and compared against results from the observed catalogue to choose acceptable K values. To summarise the results, seismic hazard maps are constructed for two acceptable values of K (8 and 27) for Java from the Poisson-declustered catalogue of shallow earthquakes using the Boore, Joyner and Fumal attenuation relation. Not surprisingly, the smaller value of K with 8 clusters (progressed to zones) produces the smoother hazard map. All of the maps indicate highest hazard around the Sunda Strait and a general expectation in Java of 100-300 cm s⁻² with a one-in-ten chance of exceedance in 50 years.
    Language: English
    Type: info:eu-repo/semantics/conferenceObject