ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 11
    [Poster] In: Future Ocean: Cluster Retreat 2017, 09.-10.10.2017, Schleswig, Germany.
    Publication Date: 2018-01-22
    Type: Conference or Workshop Item, NonPeerReviewed
    Format: text
  • 12
    [Talk] In: Kickoff Workshop of The Changing Arctic Transpolar System – CATS, 16.05.-18.05.2017, St. Petersburg, Russia.
    Publication Date: 2017-08-04
    Type: Conference or Workshop Item, NonPeerReviewed
  • 13
    [Poster] In: EGU General Assembly 2011, 03.-08.04.2011, Vienna, Austria.
    Publication Date: 2012-07-05
    Type: Conference or Workshop Item, NonPeerReviewed
    Format: image
  • 14
    [Poster] In: Data Repositories In Environmental Sciences, 28.02.-01.03.2011, Rauischholzhausen/Marburg.
    Publication Date: 2012-07-05
    Description: The Kiel Data Management Infrastructure (KDMI) started from a cooperation of three large-scale projects (SFB574, SFB754 and the Cluster of Excellence The Future Ocean) and the Leibniz Institute of Marine Sciences (IFM-GEOMAR). KDMI's key features focus on data provenance, which we consider to comprise the entire workflow from field sampling or measurements through lab work to data calculation and evaluation. Managing the data of each individual project participant in this way yields the data management for the entire project and warrants the reusability of (meta)data. Accordingly, scientists provide a workflow definition of the data-creation procedures that result in their target variables. The central idea in the development of the KDMI presented here is inspired by the object-oriented programming concept, which allows one object definition (workflow) and an unlimited number of object instances (data). Each definition is created through a graphical user interface and produces XML output stored in a database using a generic data model. On creation of a data instance, the KDMI translates the definition into web forms for the scientist; the generic data model then accepts all input that follows the given data provenance definition. An important aspect of the implementation phase is the possibility of a successive transition from daily measurement routines, which result in single spreadsheet files with well-known points of failure and limited reusability, to a central infrastructure as a single point of truth. An interim system allows users to upload and share data files from cruises and expeditions; it relates files to metadata such as where, when, what and who. As a proof of concept we use a 'truncated workflow' to migrate a selection of marine chemical data files and their structured metadata into the generic data model. A web application will allow data extraction for selectable parameters, time ranges and geocoordinates.
The availability of these widely used data is expected to motivate more scientists to design their own workflows for upcoming work and the resulting data. This data provenance approach in terms of human workflows has several positive side effects: (1) the scientist determines the extent and timing of data and metadata prompts through workflow definitions, while (2) consistency and completeness (mandatory information) of metadata in the resulting XML document can be checked by XML validation; (3) storage of the entire data-creation process (including raw data and processing steps) provides a multidimensional quality history accessible to all researchers, in addition to the commonly applied one-dimensional quality-flag system, and thus (4) improves the reusability of the data; (5) the KDMI concept focuses on bringing data management infrastructure into daily measurement routines instead of a final data management effort at the end of each project; and (6) the KDMI can be extended to other scientific disciplines or new scientific procedures by simply adding new workflow definitions. Data input can start from this point, while domain-specific outputs for the newly added data instances will be created by the KDM team. The KDMI follows scientists' requests for Web 2.0-like (net)working platforms, but instead of sharing private lives or making friends it is all about sharing daily scientific work and data with project partners. For this purpose we have deployed a portal server (Liferay) where individual scientists are assigned to project communities and working groups or have their own working spaces. All these features are expected to raise the acceptance of the integrated data management applications and advance scientific collaboration.
    Type: Conference or Workshop Item, NonPeerReviewed
    Format: image
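The definition/instance idea described in the record above can be sketched roughly as follows. This is a hypothetical illustration, not the actual KDMI code: class and field names are invented, and the XML layout only mimics the described "one workflow definition, many data instances" pattern with mandatory-metadata checking.

```python
import xml.etree.ElementTree as ET

class WorkflowDefinition:
    """One definition (the 'class' in the object-oriented analogy)."""

    def __init__(self, name, steps):
        self.name = name      # e.g. "water_sampling"
        self.steps = steps    # ordered provenance steps with required fields

    def to_xml(self):
        """Serialize the definition to XML for storage in a generic data model."""
        root = ET.Element("workflow", name=self.name)
        for step, fields in self.steps:
            el = ET.SubElement(root, "step", name=step)
            for f in fields:
                ET.SubElement(el, "field", name=f, mandatory="true")
        return ET.tostring(root, encoding="unicode")

    def new_instance(self, **values):
        """Create a data instance; mandatory fields are checked on input,
        mirroring XML validation of the resulting document."""
        required = {f for _, fields in self.steps for f in fields}
        missing = required - values.keys()
        if missing:
            raise ValueError(f"missing mandatory metadata: {sorted(missing)}")
        return {"workflow": self.name, **values}

# One definition, created once (in KDMI via a graphical user interface) ...
wf = WorkflowDefinition("water_sampling",
                        [("field", ["station", "depth_m"]),
                         ("lab", ["analyst", "salinity"])])

# ... and arbitrarily many instances following the same provenance structure.
sample = wf.new_instance(station="PS-001", depth_m=10,
                         analyst="A. N.", salinity=35.01)
```

An instance missing a mandatory field is rejected at input time, which is the completeness check the abstract attributes to XML validation.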
  • 15
    Publication Date: 2012-07-05
    Description: IN23A-1065: Three large-scale projects (the Cluster of Excellence 'The Future Ocean', SFB574 'Volatiles and Fluids in Subduction Zones' and SFB754 'Climate - Biogeochemistry Interactions in the Tropical Ocean') and the Leibniz Institute of Marine Sciences (IFM-GEOMAR) seek a joint data management infrastructure for Kiel marine sciences, which comprise approximately 1000 researchers from various scientific disciplines. The Kiel Data Management Infrastructure (KDMI) emerges from this large interdisciplinary community, but its concept focuses on the individual scientist and their work. This individual data management approach, rather than the classic project-based practice, avoids redundancy in data management efforts and facilitates interdisciplinary collaboration. The web-based KDMI portal is the starting point of a virtual research platform and an efficient access point for all users of the centralized data management. It integrates customized personal and project communities and provides Web 2.0 technologies such as calendars, wikis and forums supporting daily project activities. Incorporating essential project infrastructure into a data management system emphasizes data management as part of daily scientific routines instead of separate downstream operations per project. Extracting the similarities in observational sciences leads to a process-oriented data management approach. Treating the emergence of data, even from completely different disciplines such as field observations and experimental sciences, as a process allows us to store the data in a single consistent data model. The quality of data and metadata is crucial to reusability in future projects. Accordingly, the KDMI approach includes peer revision as a quality assurance step within a scientific workflow definition. Utilizing a community working platform for daily scientific work results in increased effort put into quality assurance of data and metadata (e.g. the CARINA project).
Recording the data-creation process itself, instead of literature references, is a novelty in the field of scientific documentation and makes it possible to provide complete information for a future quality reassessment. This fosters the reuse of KDMI datasets. The infrastructure of World Data Centers (WDC) and their cooperation with the Open Archives Initiative (OAI) is the intended final publication platform for KDMI data. An essential element of the KDMI concept is the publication of data in a WDC, either as supplements to peer-reviewed journal articles or as stand-alone dataset publications, warranting access to the data for the global scientific community.
    Type: Conference or Workshop Item, NonPeerReviewed
    Format: image
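The "single consistent data model" for data from completely different disciplines, as described in the record above, is commonly realized as a generic process/variable/value store. A minimal sketch, assuming such an entity-attribute-value layout; the table and column names are invented for illustration, since the actual KDMI schema is not given here.

```python
import sqlite3

# Hypothetical generic data model: one table holds observations from very
# different disciplines as (process, variable, value) triples.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE observation (
    process_id INTEGER,   -- which data-creation process produced the value
    variable   TEXT,      -- e.g. 'salinity', 'lat', 'incubation_time'
    value      TEXT)""")

# A field observation and a lab experiment share the same table ...
rows = [(1, "salinity", "35.01"), (1, "lat", "57.3"),
        (2, "incubation_time", "48h"), (2, "species", "E. huxleyi")]
con.executemany("INSERT INTO observation VALUES (?, ?, ?)", rows)

# ... yet each process can still be queried individually.
salinity = con.execute(
    "SELECT value FROM observation WHERE process_id=1 AND variable='salinity'"
).fetchone()[0]
```

The trade-off of such a generic layout is that typing and units must be carried in the workflow definition rather than in the table schema itself.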
  • 16
    Publication Date: 2019-10-10
    Description: Water column data of carbon and carbon-relevant hydrographic and hydrochemical parameters from 188 previously non-publicly available cruises in the Arctic, Atlantic, and Southern Ocean have been retrieved and merged into a new database: CARINA (CARbon IN the Atlantic). The data have been subjected to rigorous quality control (QC) to ensure the highest possible quality and consistency. The data for most of the parameters included were examined in order to quantify systematic biases in the reported values, i.e. secondary quality control. Significant biases have been corrected for in the data products, i.e. the three merged files with measured, calculated and interpolated values for each of the three CARINA regions: the Arctic Mediterranean Seas (AMS), the Atlantic (ATL) and the Southern Ocean (SO). With these adjustments the CARINA database is consistent both internally and with GLODAP (Key et al., 2004), and is suitable for accurate assessments of, for example, oceanic carbon inventories and uptake rates, and for model validation. The Arctic Mediterranean Seas include the Arctic Ocean and the Nordic Seas, and the quality control was carried out separately in these two areas. This contribution provides an overview of the CARINA data from the Nordic Seas and summarises the findings of the QC of the salinity data. One cruise had salinity data of questionable quality, and these have been removed from the data product. An evaluation of the consistency of the quality-controlled salinity data suggests that they are consistent to at least ±0.005.
    Type: Article, PeerReviewed
    Format: text
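The secondary quality control described in the record above amounts to quantifying systematic offsets between cruises and flagging those beyond the stated consistency level (±0.005 for salinity). A toy sketch of that idea; the numbers and the simple mean-offset method are invented for illustration, and CARINA's actual crossover analysis is considerably more elaborate.

```python
from statistics import mean

# Stated consistency level for the quality-controlled salinity data.
CONSISTENCY_LIMIT = 0.005

def salinity_offset(cruise_a, cruise_b):
    """Mean salinity difference between matched deep samples of two cruises."""
    return mean(a - b for a, b in zip(cruise_a, cruise_b))

# Hypothetical matched deep-water samples from two overlapping cruises:
deep_a = [34.912, 34.915, 34.910]
deep_b = [34.914, 34.916, 34.913]

bias = salinity_offset(deep_a, deep_b)          # about -0.002
needs_adjustment = abs(bias) > CONSISTENCY_LIMIT  # within the limit here
```

A bias exceeding the limit would be corrected for in the merged data product, which is what "significant biases have been corrected for" refers to.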
  • 17
    Publication Date: 2019-09-23
    Type: Conference or Workshop Item, NonPeerReviewed
  • 18
    Publication Date: 2019-09-23
    Type: Conference or Workshop Item, NonPeerReviewed
  • 19
    Publication Date: 2019-01-29
    Type: Conference or Workshop Item, NonPeerReviewed
  • 20
    Publication Date: 2021-04-21
    Description: We present a mapped climatology (GLODAPv2.2016b) of ocean biogeochemical variables based on the new GLODAP version 2 data product (Olsen et al., 2016; Key et al., 2015), which covers all ocean basins over the years 1972 to 2013. The quality-controlled and internally consistent GLODAPv2 was used to create global 1°  ×  1° mapped climatologies of salinity, temperature, oxygen, nitrate, phosphate, silicate, total dissolved inorganic carbon (TCO2), total alkalinity (TAlk), pH, and CaCO3 saturation states using the Data-Interpolating Variational Analysis (DIVA) mapping method. Improving on maps based on an earlier but similar dataset, GLODAPv1.1, this climatology also covers the Arctic Ocean. Climatologies were created for 33 standard depth surfaces. The conceivably confounding temporal trends in TCO2 and pH due to anthropogenic influence were removed prior to mapping by normalizing these data to the year 2002 using first-order calculations of anthropogenic carbon accumulation rates. We additionally provide maps of accumulated anthropogenic carbon in the year 2002 and of preindustrial TCO2. For all parameters, all data from the full 1972–2013 period were used, including data that did not receive full secondary quality control. The GLODAPv2.2016b global 1°  ×  1° mapped climatologies, including error fields and ancillary information, are available at the GLODAPv2 web page at the Carbon Dioxide Information Analysis Center (CDIAC; doi:10.3334/CDIAC/OTG.NDP093_GLODAPv2).
    Type: Article, PeerReviewed, info:eu-repo/semantics/article
    Format: text
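The normalization step described in the record above (removing the anthropogenic trend from TCO2 by shifting observations to the reference year 2002) is, to first order, a linear correction. A minimal sketch under that assumption; the accumulation rate below is a made-up placeholder, whereas GLODAPv2.2016b derives spatially varying rates from first-order calculations of anthropogenic carbon accumulation.

```python
def normalize_tco2(tco2_measured, year, rate_umol_kg_yr):
    """Shift a TCO2 value (umol/kg) to the reference year 2002 by removing
    the assumed constant anthropogenic accumulation since that year."""
    return tco2_measured - rate_umol_kg_yr * (year - 2002)

# A hypothetical 2010 measurement with an assumed rate of 1.0 umol/kg/yr:
tco2_2002 = normalize_tco2(2105.0, 2010, 1.0)   # -> 2097.0

# Measurements before 2002 are shifted upward by the same logic:
tco2_2002_early = normalize_tco2(2100.0, 1990, 0.5)   # -> 2106.0
```

Applying this per sample before gridding is what makes observations from 1972-2013 combinable into a single climatology for the trend-affected parameters.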