ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
  • Elsevier  (593,884)
  • 2010-2014  (593,884)
  • 11
    Publication Date: 2023-12-12
    Description: This study aims to provide a systematic overview and comparison of the capital and O&M cost models for CO2 pipelines and booster stations currently available in the literature. Our findings indicate large cost ranges in the results provided by the different cost models. Two main types of capital cost models for pipeline transport were found in the literature: models relating diameter to costs and models relating mass flow to costs. For the nine diameter-based models examined, a capital cost range is found of, for instance, 0.8–5.5 M€2010/km for a pipeline diameter of 0.8 m and a length of 25 km. For the five mass-flow-based cost models evaluated in this study, a cost range is found of, for instance, 0.9–2.1 M€2010/km for a mass flow of 750 kg/s over 25 km (TRUNK-25). An important additional factor is that all capital cost models for CO2 pipeline transport depend, directly or indirectly, on the diameter. Therefore, a systematic overview is made of the various equations and parameters used to calculate the diameter. Applying these equations and parameters to a common mass flow, height difference and length results in diameters between 0.59 and 0.91 m for TRUNK-25. The main reason for this range is differing assumptions about specific pressure drop and velocity. Combining the range for diameter with the mass-flow-based and diameter-based cost models gives capital and levelized cost ranges that vary by a factor of 10 for a given mass flow and length. The levelized cost range increases further if the discrepancy in O&M costs is added, for which estimates vary between 4.5 and 75 €/m/year for a pipeline diameter of 0.8 m. On top of this, most cost models underestimate the capital costs of CO2 pipelines. Only two cost models (namely those that relate costs to the weight of the pipeline) take into account the higher material requirements typical of CO2 pipelines. The other sources use existing onshore natural gas pipelines as the basis for their cost estimations, thereby underestimating the material costs for CO2 pipelines. Additionally, most cost models are based on relatively old pipelines constructed in the United States in the 1990s and early 2000s and do not consider the large increase in material prices in the last several years. Furthermore, key model characteristics are identified for a general cost comparison of CCS with other technologies and for a system analysis over time. For a general cost comparison of CCS with other technologies, pipeline cost models with parameters that have physical or economic meaning are the preferred option: they are easy to interpret and can be adjusted to new conditions. A linear cost model is an example of such a model. For a system analysis over time, it is advised to adapt a pipeline cost model related to the weight of the pipeline, the only type of cost model that explicitly models pipeline thickness and includes material prices, so that it incorporates the effect of impurities and pipeline technology development. For modeling booster station costs, a relation between capacity and costs including some economies of scale seems the most appropriate. However, the cost range found in the literature is very large, for instance, 3.1–3.6 M€2010 for a booster station with a capacity of 1.25 MWe. Therefore, validation of the booster station costs is required before such models are applied in further research.
    Type: Article, PeerReviewed
    Format: text
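The abstract above highlights linear, diameter-based capital cost models as the easiest to interpret and adjust. As a minimal illustration (with purely hypothetical coefficients that are not taken from any of the reviewed models), the sketch below shows how two linear models of the form C = (a + b·D)·L can span a per-kilometre cost range like the reported 0.8–5.5 M€2010/km at D = 0.8 m over 25 km.

```python
# Minimal sketch (hypothetical coefficients): comparing two diameter-based
# capital cost models of the linear form C = (a + b * D) * L, as discussed
# in the abstract. The coefficients below are illustrative only and are NOT
# taken from the reviewed models.

def capital_cost_meur(diameter_m: float, length_km: float,
                      a_meur_per_km: float, b_meur_per_km_per_m: float) -> float:
    """Capital cost in M€ for a pipeline of given diameter and length."""
    return (a_meur_per_km + b_meur_per_km_per_m * diameter_m) * length_km

# Two hypothetical 'models' spanning a wide range, echoing the 0.8-5.5 M€/km
# spread reported for D = 0.8 m:
low_model  = dict(a_meur_per_km=0.2, b_meur_per_km_per_m=0.75)   # ~0.8 M€/km at D = 0.8 m
high_model = dict(a_meur_per_km=1.5, b_meur_per_km_per_m=5.0)    # ~5.5 M€/km at D = 0.8 m

for name, m in [("low", low_model), ("high", high_model)]:
    cost = capital_cost_meur(0.8, 25.0, **m)
    print(f"{name}: {cost:.1f} M€ for 25 km ({cost / 25:.2f} M€/km)")
```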
  • 12
    Publication Date: 2023-11-29
    Description: Understanding metal and proton toxicity under field conditions requires consideration of the complex nature of chemicals in mixtures. Here, we demonstrate a novel method that relates streamwater concentrations of cationic metal species and protons to a field ecological index of biodiversity. The model WHAM-FTOX postulates that the cation binding sites of aquatic macroinvertebrates can be represented by the functional groups of natural organic matter (humic acid), as described by the Windermere Humic Aqueous Model (WHAM6), and supporting field evidence is presented. We define a toxicity function (FTOX) by summing the products (amount of invertebrate-bound cation) × (cation-specific toxicity coefficient, αi). Species richness data for Ephemeroptera, Plecoptera and Trichoptera (EPT) are then described with a lower threshold of FTOX, below which all organisms are present and toxic effects are absent, and an upper threshold, above which organisms are absent. Between the thresholds the number of species declines linearly with FTOX. We parameterised the model with chemistry and EPT data for low-order streamwaters affected by acid deposition and/or abandoned mines, representing a total of 412 sites across three continents. The fitting made use of quantile regression to take into account reduced species richness caused by (unknown) factors other than cation toxicity. Parameters were derived for the four most common or abundant cations, with values of αi following the sequence (increasing toxicity) H+ < Al < Zn < Cu. For waters affected mainly by H+ and Al, FTOX shows a steady decline with increasing pH, crossing the lower threshold near pH 7. Competition effects among cations mean that toxicity due to Cu and Zn is rare at lower pH values and occurs mostly between pH 6 and 8.
    Type: Article, PeerReviewed
    Format: text
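As a minimal sketch of the dose-response construction described in the abstract above: a toxicity function obtained by summing invertebrate-bound cation amounts weighted by cation-specific coefficients αi, and EPT species richness that is complete below a lower FTOX threshold, zero above an upper threshold, and declines linearly in between. All numbers used here (α values, thresholds, bound amounts, maximum richness) are placeholders, not the fitted parameters of the study.

```python
# Minimal sketch of the WHAM-FTOX response described above: a toxicity
# function F_TOX = sum(bound_amount_i * alpha_i), and EPT species richness
# that is unaffected below a lower threshold, zero above an upper threshold,
# and declining linearly in between.

def f_tox(bound_cations: dict, alpha: dict) -> float:
    """Sum of (invertebrate-bound cation amount) x (toxicity coefficient)."""
    return sum(amount * alpha[ion] for ion, amount in bound_cations.items())

def ept_richness(ftox: float, lower: float, upper: float, max_species: int) -> float:
    """Piecewise-linear species richness as a function of F_TOX."""
    if ftox <= lower:
        return float(max_species)
    if ftox >= upper:
        return 0.0
    return max_species * (upper - ftox) / (upper - lower)

# Illustrative coefficients following the qualitative order H+ < Al < Zn < Cu
alpha = {"H": 1.0, "Al": 2.0, "Zn": 3.0, "Cu": 4.0}
nu = {"H": 0.5, "Al": 0.3, "Zn": 0.05, "Cu": 0.02}   # hypothetical bound amounts
print(ept_richness(f_tox(nu, alpha), lower=1.0, upper=3.0, max_species=30))
```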
  • 13
    Publication Date: 2023-11-08
    Description: Cold-water coral communities cover a wide range of possible habitats in terms of latitude, ocean basin, and depth, with ongoing studies continually expanding known occurrences in various regions of the global ocean. A range of factors determines the formation of cold-water coral reefs, including physical, hydrochemical, and biological (e.g. food supply) factors. Recently, more and more modeling studies have emerged using a variety of mathematical approaches, including environmental niche factor analysis (ENFA) and predictive habitat suitability models. However, only a few studies have attempted to characterize the underlying suite of hydro-biogeochemical and physical constraints on cold-water coral reefs and to differentiate pristine reef growth from sites with reduced or no coral occurrence. This study concentrates on new data and a compilation of existing data sets on the physical and chemical properties in the NE Atlantic and the Mediterranean. It explores the influence of ambient bottom waters and their characteristics on living cold-water reefs and mounds formed by Lophelia pertusa. Several questions are addressed: (1) What are the physical and geochemical boundary conditions of living cold-water corals? (2) Do these geochemical parameters correlate with the proposed physical prerequisites? (3) Is there a general difference in the signature of living and dead coral sites?
    Type: Article, PeerReviewed
    Format: text
  • 14
    Publication Date: 2023-11-08
    Description: The global surface air temperature record of the last 150 years is characterized by a long-term warming trend with strong multidecadal variability superimposed. Similar multidecadal variability is also seen in other (societally important) parameters such as Sahel rainfall or Atlantic hurricane activity. The existence of this multidecadal variability makes climate change detection a challenge, since global warming evolves on a similar timescale; the ongoing discussion about a potential anthropogenic signal in Atlantic hurricane activity is an example. Much work has been devoted in recent years to understanding the dynamics of the multidecadal variability, and external as well as internal mechanisms have been proposed. This review paper focuses on two aspects. First, it describes the mechanisms for internal variability using a stochastic framework. Specific attention is given to variability of the Atlantic Meridional Overturning Circulation (AMOC), which is likely the origin of a considerable part of the decadal variability and predictability in the Atlantic sector. Second, the paper discusses decadal predictability and the factors limiting its realisation. These include a poor understanding of the mechanisms involved and large biases in state-of-the-art climate models. Enhanced model resolution, improved subgrid-scale parameterisations, and the inclusion of additional climate subsystems, such as a resolved stratosphere, may help overcome these limitations.
    Type: Article, PeerReviewed
    Format: text
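The review above refers to a stochastic framework for internal multidecadal variability without giving its equations. A common textbook illustration of such a framework is a Hasselmann-type model in which a slow component (for example the AMOC or the ocean mixed layer) integrates fast atmospheric "weather" noise into red, multidecadal variability; the sketch below uses that standard model with arbitrary parameters and should not be read as the specific formulation of the paper.

```python
# A minimal Hasselmann-type red-noise illustration: a slow, damped variable
# forced by white noise develops multidecadal variability. Parameters are
# arbitrary and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_years, dt = 300, 1.0          # 300 years, annual step
damping = 1.0 / 20.0            # hypothetical 20-year damping timescale
noise_std = 0.1                 # hypothetical white-noise forcing amplitude

x = np.zeros(n_years)
for t in range(1, n_years):
    # dx/dt = -lambda * x + noise  ->  discrete AR(1)-like update
    x[t] = x[t - 1] + dt * (-damping * x[t - 1]) + noise_std * np.sqrt(dt) * rng.standard_normal()

print("std of simulated multidecadal index:", x.std().round(3))
```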
  • 15
    Publication Date: 2023-11-08
    Description: In marine recirculating aquaculture systems, ozone, a strong oxidant, is often used to improve water quality by reducing the pathogen load and removing inorganic and organic wastes. However, particularly when disinfection of the recirculating water is desired, high ozone dosages are required, which may lead to toxicity problems for the cultured species. Acute toxicity of ozone-produced oxidants (OPO) to juvenile Pacific white shrimp, Litopenaeus vannamei, was assessed by determining the median lethal concentration (LC50). Shrimp were exposed to a series of OPO concentrations for 96 h. Toxicity was analysed using standard probit regression. The 24, 48, 72 and 96 h LC50 values were 0.84, 0.61, 0.54 and 0.50 mg/l chlorine equivalent, respectively. A safe level for the residual oxidant concentration was calculated and further verified by chronic exposure experiments. While long-term exposure of juvenile white shrimp to an OPO concentration of 0.06 mg/l revealed no observable effect, long-term exposure to 0.10 and 0.15 mg/l induced soft shell syndrome, which led to mortalities due to cannibalism. Thus, an OPO concentration of 0.06 mg/l is suggested to be the maximum safe exposure level for rearing juvenile L. vannamei. Furthermore, we proved this safe level to be sufficient to control and reduce bacterial biomass in the recirculating process water.
    Type: Article, PeerReviewed
    Format: text
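The LC50 values above were obtained with standard probit regression. The sketch below shows one minimal way such a fit can be set up (a probit dose-response curve on log10 concentration, fitted by maximum likelihood); the concentrations and mortality counts are invented for illustration and are not the shrimp data of the study.

```python
# Minimal sketch of a probit dose-response fit for estimating an LC50.
# Exposure data below are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

conc   = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # mg/l chlorine equivalent (hypothetical)
deaths = np.array([1, 4, 12, 17, 19])           # deaths out of n exposed (hypothetical)
n      = np.full_like(deaths, 20)

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * np.log10(conc))        # probit model on log10 concentration
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = res.x
lc50 = 10 ** (-a / b)                           # concentration where predicted mortality is 50 %
print(f"estimated LC50 ≈ {lc50:.2f} mg/l")
```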
  • 16
    Publication Date: 2023-11-08
    Description: Volcano edifice volume calculations are presented for 65 volcanoes of the 1400 km long Chilean Southern Volcanic Zone (SVZ) as a basic step in subduction zone mass budgets. The volume calculations are performed in a Geographical Information System that integrates Digital Elevation Models based on Shuttle Radar Topography Mission and ASTER-GDEM topographic data, LANDSAT satellite images and geological maps. The method of volume calculation is straightforward for isolated, morphologically well-defined stratovolcanoes; uncertainties increase for volcanic edifices that formed on pre-existing rugged terrain, for multi-phase eruptive centers, and for eroded edifices. A revised segmentation of the arc is used to describe the spatial volume distribution of extruded magma along the SVZ and to discuss controlling tectonic factors. Peak volumes of the arc and the back arc are offset by 400 km. The total volcanic extrusion is in the range of 10–13 km3/km/Ma. Major differences between the SVZ and the Central American subduction system are notable with regard to volcano density and maximum volumes.
    Type: Article, PeerReviewed
    Format: text
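As a minimal illustration of the GIS-based edifice-volume calculation described above: integrate the difference between the surface elevation and a reconstructed basement surface over the edifice footprint. The tiny synthetic DEM and flat base elevation below are placeholders for real SRTM/ASTER-GDEM grids and a digitised edifice outline.

```python
# Minimal sketch: edifice volume as sum over grid cells of
# (surface elevation - base elevation) * cell area, on a synthetic cone.
import numpy as np

cell_size_m = 90.0                       # e.g. SRTM-like grid spacing
yy, xx = np.mgrid[-50:50, -50:50]
dem = 2000.0 + 1500.0 * np.exp(-(xx**2 + yy**2) / (2 * 20.0**2))  # synthetic edifice
base = 2000.0                            # assumed pre-volcanic basement surface

mask = dem > base                        # edifice footprint
volume_m3 = np.sum((dem[mask] - base) * cell_size_m**2)
print(f"edifice volume ≈ {volume_m3 / 1e9:.1f} km^3")
```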
  • 17
    Publication Date: 2023-11-03
    Description: Methods and results for parameter optimization and uncertainty analysis of a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Schartau and Oschlies, simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. Our aim is to identify the parameters and fit the model output to given observational data. For this model, it has been shown that a satisfactory fit could not be obtained and that parameter sets with comparable fits can vary significantly. Since these results were obtained by evolutionary algorithms (EA), we used a wider range of optimization methods: a special type of EA (called quantum-EA) with coordinate line search, and a quasi-Newton SQP method in which exact gradients were generated by automatic/algorithmic differentiation. Both methods are parallelized and can be viewed as instances of a hybrid, mixed evolutionary and deterministic optimization algorithm that we present in detail. This algorithm provides a flexible and robust tool for parameter identification and model validation. We show how the obtained parameters depend on data sparsity and the given data error, and we present an uncertainty analysis of the optimized parameters with respect to Gaussian-perturbed data. We show that the model is well suited for parameter identification if the data are attainable. On the other hand, the result that it cannot be fitted to the real observational data without extension or modification is confirmed. (C) 2010 Elsevier Ltd. All rights reserved.
    Type: Article, PeerReviewed
    Format: text
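The hybrid strategy described above combines a global evolutionary search with a deterministic, gradient-based refinement. The sketch below reproduces that pattern on a toy model-data misfit, using SciPy's differential_evolution and SLSQP as generic stand-ins for the quantum-EA and the AD-based quasi-Newton SQP method of the paper.

```python
# Minimal sketch of a hybrid optimization: evolutionary global search
# followed by a deterministic local refinement, on a toy exponential model.
import numpy as np
from scipy.optimize import differential_evolution, minimize

obs_t = np.linspace(0, 10, 50)
true_params = (0.7, 1.3)
obs = true_params[0] * np.exp(-true_params[1] * obs_t) \
      + 0.01 * np.random.default_rng(1).standard_normal(obs_t.size)

def misfit(p):
    model = p[0] * np.exp(-p[1] * obs_t)        # toy stand-in for the NPZD model output
    return np.sum((model - obs) ** 2)

bounds = [(0.0, 5.0), (0.0, 5.0)]
global_fit = differential_evolution(misfit, bounds, seed=0)                 # evolutionary stage
local_fit = minimize(misfit, global_fit.x, method="SLSQP", bounds=bounds)   # deterministic refinement
print("optimised parameters:", local_fit.x.round(3))
```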
  • 18
    Publication Date: 2023-09-26
    Description: Two distinct types of eclogites from the Raspas Complex (Ecuador), which can be distinguished on the basis of petrography and trace element geochemistry, were analyzed for their stable (Li, O) and radiogenic (Sr, Nd) isotope signatures to constrain metasomatic changes due to fluid overprinting in metabasaltic rocks at high-pressure conditions and to identify fluid sources. MORB-type eclogites are characterized by a relative LREE depletion similar to MORB. High-pressure (HP) minerals from this type of eclogite have highly variable oxygen isotope compositions (garnet: +4.1 to +9.8 ‰; omphacite: +6.1 to +11.0 ‰; phengite: +8.7 to +10.4 ‰; amphibole: +6.2 to +10.1 ‰) and generally show equilibrium oxygen isotope fractionation. Initial 87Sr/86Sr isotope ratios are also variable (0.7037-0.7063), whereas εNd(130 Ma) values (+8.3 to +11.0) are relatively similar. Sr and O isotopic differences among rocks at outcrop scale, the preservation of O isotopic compositions of low-temperature altered oceanic crust, and Sr-Nd isotopic trends typical of seafloor alteration suggest inheritance from variably altered oceanic crust. However, decreasing δ7Li values (-0.5 to -12.9 ‰) with increasing Li concentrations (11-94 ppm) indicate Li isotope fractionation by diffusion related to fluid-rock interaction. Li isotopes prove to be a very sensitive tracer of metasomatism, although the small effects on the Sr-Nd-O isotope systems suggest that the fluid-induced metasomatic event in the MORB-type eclogites was small in scale, at low water/rock ratios. This metasomatic fluid is thought to derive predominantly from in situ dehydration of MORB-type rocks. Zoisite eclogites, the second eclogite type from the Raspas Complex, are characterized by the presence of zoisite and by enrichment in many incompatible trace elements compared to the MORB-type eclogites. The zoisite eclogites have a homogeneous Sr-Nd isotopic signature (initial 87Sr/86Sr = 0.7075-0.7081, εNd(130 Ma) = -6.7 to -8.7), interpreted to reflect a metasomatic overprint. The isotopic signature can be attributed to the metasomatic formation of zoisite, because associated zoisite veins are isotopically similar. Relatively homogeneous O isotope values for garnet (10.9-12.3 ‰), omphacite (9.4-10.8 ‰), amphibole (10.0-10.1 ‰) and zoisite (10.5-11.9 ‰), together with inter-mineral O isotopic disequilibria, are consistent with a metasomatic overprint via open-system fluid input. Li concentrations (46-76 ppm) and δ7Li values of the zoisite eclogites overlap the range of the MORB-type eclogites. The large amount of fluid required for isotopic homogenization, combined with the results from fluid inclusion studies, suggests that deserpentinization played a major role in generating the metasomatic fluid that altered the zoisite eclogites. However, an influence of a (meta)sedimentary source is also required by the Sr-Nd isotope data and trace element enrichments. The significant geochemical variation in the various eclogites generated by interaction with metasomatic fluids has to be considered in attempts to constrain recycling at convergent margins.
    Type: Article, PeerReviewed
    Format: text
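For readers unfamiliar with the notation used above, δ and ε values are standard relative isotope-ratio measures: δ is expressed in per mil relative to a reference standard, and εNd(t) in parts per 10^4 relative to CHUR at time t. The generic definitions below follow common practice and do not quote normalising values specific to this study.

```latex
\delta^{7}\mathrm{Li} = \left( \frac{(^{7}\mathrm{Li}/^{6}\mathrm{Li})_{\mathrm{sample}}}{(^{7}\mathrm{Li}/^{6}\mathrm{Li})_{\mathrm{standard}}} - 1 \right) \times 10^{3}
\qquad
\varepsilon_{\mathrm{Nd}}(t) = \left( \frac{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{sample}}(t)}{(^{143}\mathrm{Nd}/^{144}\mathrm{Nd})_{\mathrm{CHUR}}(t)} - 1 \right) \times 10^{4}
```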
  • 19
    Publication Date: 2023-09-19
    Description: New 40Ar-39Ar ages of 5.6 to 1.3 Ma for lavas from the fossil Phoenix Ridge in the Drake Passage show that magmatism continued for at least 2 Ma after the cessation of spreading at 3.3 ± 0.2 Ma. The Phoenix Ridge lavas are enriched in incompatible elements relative to average MORB and show an increasing enrichment with decreasing age, corresponding to progressively decreasing degrees of partial melting of spinel peridotite after spreading stopped. The low-degree partial melts increasingly tap a mantle source with radiogenic Sr and Pb but unradiogenic Nd isotope ratios, implying an ancient enrichment. The post-spreading magmas apparently form by buoyant ascent of enriched and easily fusible portions of the upper mantle. Only segments of fossil spreading ridges underlain by such enriched and fertile mantle show post-spreading volcanism, frequently forming bathymetric highs. The Phoenix Ridge lavas belong to the Pacific, rather than the Atlantic, mantle domain in regional Sr-Nd-Pb isotope space. Our new data show that the southern Pacific Ocean mantle is heterogeneous, containing significant enriched portions that are preferentially tapped at low melt fractions. Isotopic mapping reveals that Pacific-type upper mantle flows eastward through the Drake Passage and surrounds the subducting Phoenix Plate beneath the Bransfield Basin.
    Type: Article, PeerReviewed
    Format: text
    Format: other
  • 20
    Publication Date: 2023-08-15
    Description: Preprocessing software, which converts large instrumental data sets into a manageable format for data analysis, is crucial for the discovery of chemical signatures in metabolomics, chemical forensics, and other signature-focused disciplines. Here, four freely available and published preprocessing tools, MetAlign, MZmine, SpectConnect, and XCMS, were evaluated for impurity profiling using nominal mass GC/MS data and accurate mass LC/MS data. Both data sets were previously collected from the analysis of replicate samples from multiple stocks of a nerve-agent precursor and of method blanks. Parameters were optimized for each of the four tools for the untargeted detection, matching, and cataloging of chromatographic peaks from impurities present in the stock samples. The peak table generated by each preprocessing tool was analyzed to determine the number of impurity components detected in all replicate samples per stock and absent in the method blanks. A cumulative set of impurity components was then generated using all available peak tables and used as a reference to calculate the percent of component detections for each tool, where 100% indicates the detection of every known component present in a stock. For the nominal mass GC/MS data, MetAlign had the most component detections, followed by MZmine, SpectConnect, and XCMS, with detection percentages of 83, 60, 47, and 41%, respectively. For the accurate mass LC/MS data, the order was MetAlign, XCMS, and MZmine, with detection percentages of 80, 45, and 35%, respectively. SpectConnect did not function for the accurate mass LC/MS data. Larger detection percentages were obtained by combining the top performer with at least one of the other tools, such as 96% by combining MetAlign with MZmine for the GC/MS data and 93% by combining MetAlign with XCMS for the LC/MS data. In terms of quantitative performance, the reported peak intensities from each tool had averaged absolute biases (relative to peak intensities obtained using the instrument software) of 41, 4.4, 1.3 and 1.3% for SpectConnect, MetAlign, XCMS, and MZmine, respectively, for the GC/MS data. For the LC/MS data, the averaged absolute biases were 22, 4.5, and 3.1% for MetAlign, MZmine, and XCMS, respectively. In summary, MetAlign performed the best in terms of the number of component detections; however, more than one preprocessing tool should be considered to avoid missing impurities or other trace components as potential chemical signatures.
    Type: Article, PeerReviewed
    Format: text
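The two summary metrics used in the comparison above (the percentage of cumulative impurity components each tool detects, and the averaged absolute bias of reported peak intensities relative to the instrument software) reduce to simple set and list operations. The sketch below illustrates them with invented component sets and intensities; only the tool names come from the abstract.

```python
# Minimal sketch of the two metrics described above, with invented data.
cumulative = {"c1", "c2", "c3", "c4", "c5"}            # union of components found by all tools
detections = {"MetAlign": {"c1", "c2", "c3", "c4"},
              "MZmine":   {"c1", "c2", "c3"},
              "XCMS":     {"c1", "c5"}}

for tool, found in detections.items():
    pct = 100.0 * len(found & cumulative) / len(cumulative)
    print(f"{tool}: {pct:.0f}% component detections")

# Averaged absolute bias of one tool's intensities vs. instrument-software intensities
instrument = {"c1": 100.0, "c2": 250.0, "c3": 80.0}    # hypothetical reference intensities
tool_vals  = {"c1": 104.0, "c2": 240.0, "c3": 82.0}    # hypothetical tool-reported intensities
bias = sum(abs(tool_vals[c] - instrument[c]) / instrument[c] for c in instrument) / len(instrument) * 100
print(f"averaged absolute bias: {bias:.1f}%")
```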