ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2019-07-13
    Description: It is a challenge to access and process fast-growing Earth science data from satellites and numerical models, which may be archived in very different data formats and structures. NASA data centers, managed by the Earth Observing System Data and Information System (EOSDIS), have developed a rich and diverse set of data services and tools with features intended to simplify finding, downloading, and working with these data. Although most data services and tools have user guides, many users still experience difficulties with accessing or reading data due to varying levels of familiarity with data services, tools, and/or formats. A type of structured online document, the data recipe, was created beginning in 2013 by the Goddard Earth Science Data and Information Services Center (GES DISC). A data recipe is a How-To document created using a fixed template, containing step-by-step instructions with screenshots and examples of accessing and working with real data. The recipes have been found to be very helpful, especially to first-time users of particular data services, tools, or data products. Online traffic to some data recipe pages is significant. In 2014, the NASA Earth Science Data System Working Group (ESDSWG) for data recipes was established to initiate an EOSDIS-wide campaign for leveraging the distributed knowledge within EOSDIS and its user communities regarding their respective services and tools. The ESDSWG data recipe group started with an inventory and analysis of existing EOSDIS-wide online help documents, and provided recommendations and guidelines for writing and grouping data recipes. This presentation will give an overview of activities in creating How-To documents at GES DISC and ESDSWG. We encourage feedback and contributions from users for improving the data How-To knowledge base.
    Keywords: Geosciences (General); Documentation and Information Science
    Type: GSFC-E-DAA-TN38138 , AGU Fall Meeting; Dec 12, 2016 - Dec 16, 2016; San Francisco, CA; United States
    Format: application/pdf
  • 2
    Publication Date: 2019-07-13
    Description: The Earth Science Data and Information System (ESDIS) Project manages twelve Distributed Active Archive Centers (DAACs), which are geographically dispersed across the United States. The DAACs are responsible for ingesting, processing, archiving, and distributing Earth science data produced from various sources (satellites, aircraft, field measurements, etc.). In response to projections of an exponential increase in data production, there has been a recent effort to prototype various DAAC activities in the cloud computing environment. This, in turn, led to the creation of an initiative, called the Cloud Analysis Toolkit to Enable Earth Science (CATEES), to develop a Python software package in order to transition Earth science data processing to the cloud. This project, in particular, supports CATEES and has two primary goals. One, to transition data recipes created by the Goddard Earth Science Data and Information Services Center (GES DISC) into an interactive and educational environment using Jupyter Notebooks. Two, to acclimate Earth scientists to cloud computing. To accomplish these goals, we create Jupyter Notebooks to compartmentalize the different steps of data analysis and help users obtain and parse data from the command line. We also develop a Docker container, comprising Jupyter Notebooks, Python dependencies, and command line tools, and configure it into an easy-to-deploy package. The end result is an end-to-end product that simulates the use case of end users working in the cloud computing environment.
    Keywords: Computer Programming and Software
    Type: GSFC-E-DAA-TN50539 , AGU Fall Meeting; Dec 11, 2017 - Dec 15, 2017; New Orleans, LA; United States
    Format: application/pdf
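The abstract above describes compartmentalizing the steps of a data recipe (obtain, parse, analyze) into notebook cells. A minimal Python sketch of that idea follows; the granule contents, variable name, and function names are illustrative stand-ins, not actual GES DISC data or CATEES code, and a real notebook would download the granule rather than embed it.

```python
import csv
import io

# Hypothetical stand-in for a downloaded GES DISC data granule; in a real
# recipe notebook this text would be fetched from a data service first.
SAMPLE_GRANULE = """time,aerosol_optical_depth
2017-12-11,0.12
2017-12-12,0.15
2017-12-13,0.09
"""

def parse_granule(text):
    """Step 1: parse a CSV-formatted granule into (time, value) records."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["time"], float(row["aerosol_optical_depth"])) for row in reader]

def summarize(records):
    """Step 2: compute the mean of the measured values -- one analysis step."""
    values = [v for _, v in records]
    return sum(values) / len(values)

records = parse_granule(SAMPLE_GRANULE)
print(round(summarize(records), 3))  # -> 0.12
```

Each function maps naturally onto one notebook cell, which is the compartmentalization the project describes; packaging such a notebook with its Python dependencies in a Docker container then makes the whole workflow deployable as one unit.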
  • 3
    Publication Date: 2019-07-13
    Description: As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as the Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is retrieval and transport of provenance. Provenance may be either embedded within the data payload, or transmitted from server to client in an out-of-band mechanism. The out-of-band mechanism is more flexible in the richness of provenance information that can be accommodated, but it relies on a persistent framework and can be difficult for legacy clients to use. We are prototyping the embedded model, incorporating provenance within metadata objects in the data payload. Thus, it always remains with the data. The downside is a limit to the size of provenance metadata that we can include, an issue that will eventually need resolution to encompass the richness of provenance information required for data intercomparison and merging.
    Keywords: Computer Programming and Software
    Type: American Geophysical Union Meeting; Dec 15, 2008 - Dec 19, 2008; San Francisco, CA; United States
    Format: application/pdf
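The embedded model described above (provenance carried inside metadata objects in the data payload, so it always travels with the data) can be sketched in a few lines of Python. The field names below (sensor, algorithm_version, spatial_aggregation, quality_flag) are illustrative assumptions, not an actual EOSDIS or semantic-web schema, and JSON stands in for whatever payload format a real system would use.

```python
import json

# Hypothetical data payload with an embedded provenance block; the schema
# here is invented for illustration, not taken from any real metadata model.
payload = {
    "variable": "aerosol_optical_depth",
    "values": [0.12, 0.15, 0.09],
    "provenance": {
        "sensor": "MODIS",
        "algorithm_version": "5.1",
        "spatial_aggregation": "1x1 degree monthly mean",
        "quality_flag": "best",
    },
}

# Because provenance is embedded in the payload, a downstream comparison
# tool recovers it from the serialized data alone -- no out-of-band lookup.
wire = json.dumps(payload)
received = json.loads(wire)
print(received["provenance"]["sensor"])  # -> MODIS
```

The trade-off the abstract notes is visible even in this toy form: everything the tool needs rides along with the values, but the provenance block enlarges every payload, which bounds how rich the embedded record can be.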