ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Springer
    Fire technology 30 (1994), S. 155-172 
    ISSN: 1572-8099
    Keywords: Airborne sampling ; field tests ; fire research ; fires ; measurement ; measuring instruments ; oils ; particulates ; pool fires ; sampling ; smoke ; smoke yield
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying
    Notes: Abstract A unique airborne smoke sample package (ASSP) for determining the smoke yield of large fires has been developed. The uncertainty in the average smoke yield at the 95% confidence level is about ±7% of the average of three repeat measurements. The ASSP, which weighs less than 4 kg, is light enough to be flown suspended below a tethered helium-filled balloon or attached to a small radio-controlled aircraft. Measurements are made by flying the sampling equipment into a fire's smoke plume. Additional smoke plume measurements that can be made with the ASSP include particle size distribution using a cascade impactor, smoke agglomerate structure using transmission electron microscope (TEM) grids, and polycyclic aromatic hydrocarbons (PAHs) analysis using various sorbent tubes. The application of the ASSP in measuring laboratory and large outdoor petroleum pool fires is discussed. Smoke yield values measured in field burns of Louisiana crude oil range from 0.080 to 0.137, and the primary sphere diameter of the agglomerates is as large as 0.15 µm.
    Type of Medium: Electronic Resource
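The abstract above quotes an uncertainty of about ±7% of the mean smoke yield at the 95% confidence level for three repeat measurements. A minimal sketch of that kind of estimate, assuming a small-sample t-interval; the yield values are invented, not data from the paper:

```python
# Hedged illustration: 95% confidence interval on the mean of three repeat
# smoke-yield measurements. The three values below are hypothetical.
import numpy as np
from scipy import stats

yields = np.array([0.095, 0.104, 0.100])             # hypothetical repeat measurements

mean = yields.mean()
sem = stats.sem(yields)                               # standard error of the mean (ddof=1)
half_width = stats.t.ppf(0.975, df=len(yields) - 1) * sem

print(f"mean smoke yield  = {mean:.3f}")
print(f"95% CI half-width = ±{half_width:.3f} (±{100 * half_width / mean:.1f}% of the mean)")
```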
  • 2
    Springer
    Journal of chemical crystallography 24 (1994), S. 739-742 
    ISSN: 1572-8854
    Keywords: Dimethylcrocetin (DMCRT) ; carotenoids ; structure ; saffron ; Crocus sativus L
    Source: Springer Online Journal Archives 1860-2000
    Topics: Geosciences , Physics
    Notes: Abstract The structure of dimethylcrocetin (DMCRT), prepared by alkaline hydrolysis in methanol of the glucosidic carotenoids extracted from the stigmata of the Crocus sativus L. flowers, has been determined. The molecule has the all-trans configuration and is planar, forming a long conjugated system. The C−O bond length values of the two ester groups are shortened, as expected. The compound (C₁₁H₁₄O₂)₂ crystallizes in the orthorhombic space group Pbcn with a = 12.5907(7), b = 7.5639(5) and c = 21.963(2) Å. Dimethylcrocetin has characteristic infrared absorptions at 1697 cm⁻¹ ν(C=O) and 1229 cm⁻¹ ν(C−O) and characteristic Raman vibrational modes at 1542 cm⁻¹ ν(C=C) and 1166 cm⁻¹ ν(C−C).
    Type of Medium: Electronic Resource
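For an orthorhombic cell such as the Pbcn cell reported above, the unit-cell volume is simply V = a·b·c. A small sketch using the lattice parameters quoted in the abstract (estimated standard deviations dropped):

```python
# Unit-cell volume of an orthorhombic cell: V = a * b * c (all angles 90°).
# Lattice parameters in Å, copied from the abstract above.
a, b, c = 12.5907, 7.5639, 21.963

volume = a * b * c
print(f"unit-cell volume ≈ {volume:.1f} Å³")   # roughly 2.09e3 Å³
```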
  • 3
    Springer
    Transportation 17 (1990), S. 29-47 
    ISSN: 1572-9435
    Keywords: freight ; mode ; choice — behaviour ; modelling ; company ; structure ; decision-making ; factor analysis ; disaggregate data ; causal relationships ; reliability ; consignment ; control
    Source: Springer Online Journal Archives 1860-2000
    Topics: Architecture, Civil Engineering, Surveying
    Notes: Abstract In the next few years, exciting developments in the field of freight transport are likely to occur. The Channel Tunnel will be perceived as giving railways much greater distance of operation, compared to the current train ferry to/from Great Britain. The further development of swap-body technology will allow easier modal transfer, and the creation, in 1992, of a single market in Europe will transform the pattern of trade. All of these are likely to have significant impacts on modal choice, and hence modal split, in freight transport. Reappraisal by many firms of the modes of transport used is likely, but will it result in a net transfer of freight from road to rail and, if so, to what extent? To answer such questions, an accurate and reliable method of predicting modal split is required. Research in the past has concentrated on the development of modal split models based on generalised costs. These fail to explain adequately the prevalence of road freight in the UK. From surveys of freight managers within industry, it is clear that models to date rely too heavily on the economic cost factor and too little on behavioural factors (Jeffs 1985). This paper derives from a recent study of freight transport modal choice from the standpoint of the transport decision-maker within the firm. It attempts to shed light on the actual parameters which should be incorporated into a modal split model. Many variables appear to exert an influence on the modal choice decision-making process. However, it is possible to categorise them into six main groups, namely: customer requirements; product characteristics; company structure/organisation; government interventions; available transport facilities; and perceptions of the decision-maker him/herself. It is the interactions and inter-relationships between these which ultimately determine freight modal split. This study has shown that the relationship between the outcome of the transport decision process and the values of particular determinants of modal split is not straightforward, due to the complexity and variety of interactions involved. Perhaps one of the main reasons for researchers' failure hitherto to develop a successful modal-split model has been the preoccupation with techniques that rely on the development of a common metric (e.g. generalised cost), which has led to the exclusion of some important explanatory variables along quite different dimensions. Another important issue concerns the appropriate level of aggregation. In order not to reduce the explanatory power of the key variables, it is important to work at a disaggregate level, although this does make substantial demands on data. The use of factor analysis enables both the aggregation of information without loss of behavioural reality and the specification of variables in terms of a common metric. In conclusion, freight transport has usually been examined within too narrow a framework. It must be placed firmly within the context of the total industrial process. The demand for freight transport is directly influenced by the level, composition and geographical distribution of production and consumption activities. Freight flows are complex and so it is highly unlikely that a universal mode-choice model can ever be developed. Future research should, therefore, be directed towards developing partial models in response to the specific needs of those involved in decision-taking in the freight sector.
    Type of Medium: Electronic Resource
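The abstract above argues that factor analysis lets disaggregate survey information be condensed into a small number of behaviourally meaningful dimensions expressed on a common metric. A hedged sketch of that idea with scikit-learn; the variable names and data are hypothetical, not taken from the study:

```python
# Illustrative factor analysis: reduce several correlated mode-choice survey
# variables to a few latent factors. All data here are simulated.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
variables = ["customer_requirements", "product_characteristics", "company_structure",
             "government_intervention", "transport_facilities", "manager_perception"]
X = rng.normal(size=(200, len(variables)))    # 200 hypothetical consignments

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)                  # factor scores, one row per consignment

print("loadings (factors × variables):")
print(np.round(fa.components_, 2))
```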
  • 4
    Springer
    Journal of paleolimnology 4 (1990), S. 43-59 
    ISSN: 1573-0417
    Keywords: diatoms ; concentrations ; accumulation rates ; variability ; acidification ; correspondence analysis ; cluster analysis ; surface sediments
    Source: Springer Online Journal Archives 1860-2000
    Topics: Biology , Geosciences
    Notes: Abstract The variability of diatom distribution in an acidified, upland wind-stressed lake (Loch Fleet, Galloway, S. W. Scotland) was assessed by analysis of 28 surface sediment samples and 11 cores. Correspondence analysis (CA) and cluster analysis were used to illustrate the variability of the surface sediment and core samples. There was reasonable uniformity of taxa in most of the surface sediment samples, although 7 samples, as indicated by both CA and cluster analyses, were atypical. Most cores clearly recorded the acidification of the lake, although percentages of individual taxa varied by up to 20% between cores. Two cores had old, preacidification diatom assemblages (of indeterminate age) close to the sediment surface. These old sediments were probably the source of the re-worked diatoms found in the atypical surface sediment assemblages. Diatom trends, as CA ordinations and pH profiles, were less variable than the surface sediment assemblages. It is argued that non-uniform sediment accumulation rates and diatom deposition cause variability in surface sediment diatom samples. This variability may be reduced in core profiles by homogenization during further resuspension/deposition cycles and burial. Cores, and the associated time component they offer, may be useful in assessing the variability of surface sediment assemblages.
    Type of Medium: Electronic Resource
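The cluster analysis mentioned above groups surface-sediment samples by the similarity of their diatom assemblages. A minimal hedged sketch of one common variant (Ward's hierarchical clustering on percentage data); the counts are simulated, not the Loch Fleet data:

```python
# Hierarchical clustering of diatom relative-abundance profiles (simulated data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
counts = rng.random((28, 15))                                # 28 samples × 15 taxa
percent = 100 * counts / counts.sum(axis=1, keepdims=True)   # convert to percentages

Z = linkage(percent, method="ward")              # Ward linkage on Euclidean distances
groups = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 groups

print("cluster label of each sample:", groups)
```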
  • 5
    Springer
    Journal of paleolimnology 7 (1992), S. 95-101 
    ISSN: 1573-0417
    Keywords: sediments ; soft sediments ; loose sediments ; sampling ; subsampling ; cysts ; dinoflagellates ; freeze coring ; Twin 80
    Source: Springer Online Journal Archives 1860-2000
    Topics: Biology , Geosciences
    Notes: Abstract A method is described for processing flocculated clay-rich sediments which avoids acetolysis and heavy liquid separation. Twin 80 (Merck index 1983) is used for deflocculation. Microsieves separate the recovered organisms according to size. Taxonomic identification and quantitative evaluation of the organisms can be performed in counting chambers or on permanent slides. Algae, cysts and exospores of dinoflagellates, pollen grains and zooplankton remains can be recovered.
    Type of Medium: Electronic Resource
  • 6
    Springer
    Natural hazards 3 (1990), S. 125-139 
    ISSN: 1573-0840
    Keywords: Strong ground motion ; focal parameters ; scaling laws ; cluster analysis
    Source: Springer Online Journal Archives 1860-2000
    Topics: Energy, Environment Protection, Nuclear Power Engineering , Geography , Geosciences
    Notes: Abstract The estimation of seismic loading requires both a characterization of the seismotectonic environment and a prognosis of site-specific strong ground-motion parameters. Due to the long recurrence times of larger events, there is a lack of high-quality strong-motion recordings at many sites. The use of seismograms obtained from different seismotectonic regions, additionally recorded under different subsoil conditions, poses several questions as to the significance of the assessment of seismic loading. A way to overcome these problems is the generation of synthetic seismograms, e.g. by the use of Boore's stochastic procedure. Subsoil conditions may be explicitly included in the simulation as proposed by Kunze et al. In this context, the seismotectonic environment forms the deterministic frame of any stochastic strong ground-motion simulation. In terms of Boore's approach, seismic moment, global stress drop, high-frequency cutoff, and focal depth act as controlling parameters. In order to achieve a more realistic assessment of possible seismic loading, a classification of seismoactive zones should account for parameter vectors as described above rather than for single parameters like magnitudes or intensities. A tool to attack this task may be found in ‘cluster analysis’, which is a type of pattern recognition based on statistical considerations. Crucial to this technique, however, is the use of properly estimated parameters, which still cause discussion in the seismological community. Problems and prospects of the approach are discussed with examples from the Swabian Jura and the Rhine graben.
    Type of Medium: Electronic Resource
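The abstract above proposes classifying seismoactive zones by parameter vectors (seismic moment, stress drop, high-frequency cutoff, focal depth) rather than by single scalars. One concrete, hedged reading of such a cluster analysis is k-means on standardised parameter vectors; the events below are simulated:

```python
# k-means clustering of simulated seismotectonic parameter vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# columns: log10(seismic moment), stress drop [MPa], cutoff f_max [Hz], focal depth [km]
params = np.column_stack([
    rng.uniform(15, 19, 50),
    rng.uniform(1, 10, 50),
    rng.uniform(5, 25, 50),
    rng.uniform(2, 20, 50),
])

X = StandardScaler().fit_transform(params)     # scale so no single parameter dominates
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("zone class assigned to each simulated event:", labels)
```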
  • 7
    Springer
    Bulletin of volcanology 55 (1992), S. 97-109 
    ISSN: 1432-0819
    Keywords: Galapagos ; caldera morphology ; stress orientations ; intra-caldera ; structure ; temporal variations ; tectonic control
    Source: Springer Online Journal Archives 1860-2000
    Topics: Geosciences
    Notes: Abstract Air photographs taken in 1946, 1960, and 1982, together with SPOT HVR-1 images obtained in April and October of 1988, are used to characterize recent activity in and around the caldera of Fernandina Volcano, West Galapagos Islands. The eruptive and collapse events during this time span appear to be distributed in a NW-SE band across the summit and caldera. On the flanks of the volcano, subtle topographic ridges indicate that this is a long-term preferred orientation of extra-caldera activity as well (although radial and arcuate fissures are found on all sectors). The caldera is formed from the coalescence of multiple collapse features that are also distributed along a NW-SE direction, and these give the caldera its elongate and scalloped outline. The NW and SE benches consist of lavas that ponded in once-separated depressions that have been incorporated into the caldera by more recent collapse. The volume of individual eruptions within the caldera over the observed 42 years appears to be small (∼4×10⁶ m³) in comparison to the volumes of individual flows exposed in the caldera walls (∼120–150×10⁶ m³). Field observations (in 1989) of lavas exposed in the caldera walls and their cross-cutting relationships show that there have been at least three generations of calderas, and that at times each was completely filled. An interplay between a varying supply rate to the volcano and a regional stress regime is suggested to be the cause of long-term spatial and volumetric variations in activity. When supply is high, the caldera is filled in relative to collapse and dikes tend to propagate in all directions through the edifice. At other times (such as the present) supply is relatively low; eruptions are small, the caldera is far from being filled in, and dike propagation is influenced by an extra-volcano stress regime.
    Type of Medium: Electronic Resource
  • 8
    Springer
    Transport in porous media 9 (1992), S. 113-121 
    ISSN: 1573-1634
    Keywords: Fluid-filled permeable media ; structure ; internal geometry
    Source: Springer Online Journal Archives 1860-2000
    Topics: Geosciences , Technology
    Notes: Abstract The starting point of our considerations is the set of balance equations obtained by the space-averaging method. These equations contain the explicit consequences of the admitted inhomogeneities of microscopic fields. The proposed postulates, relating the relative velocities of phases at micro- and macro-levels to properties of the internal geometry, allow one to find the form of a few constitutive functions apparently dependent on the internal geometry of the medium.
    Type of Medium: Electronic Resource
  • 9
    Springer
    Mathematical geology 3 (1971), S. 227-238 
    ISSN: 1573-8868
    Keywords: classification ; data processing ; graphics ; mapping ; mathematics ; plotting ; sampling ; statistics ; sedimentology ; stratigraphy ; grain-size analysis ; textural analysis ; glacial geology ; Pleistocene stratigraphy ; till
    Source: Springer Online Journal Archives 1860-2000
    Topics: Geosciences , Mathematics
    Notes: Abstract Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort.
    Type of Medium: Electronic Resource
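The comparison described above reduces, near the horizontal axis, to testing whether the mean of a few subsample measurements is consistent with the mean of the standard population. A hedged sketch of that simplified check using a one-sample t-test; the percentages are hypothetical, and this is not the paper's exact graphical procedure:

```python
# One-sample t-test: are new subsample measurements consistent with the
# standard-population mean for this particle-size class? Values are hypothetical.
from scipy import stats

standard_mean = 42.0                      # hypothetical % sand, standard population
subsamples = [40.8, 43.5, 41.9, 42.7]     # hypothetical repeat measurements on one new sample

t_stat, p_value = stats.ttest_1samp(subsamples, popmean=standard_mean)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("difference is significant: repeat the measurements")
else:
    print("within expected random variation: measurements accepted")
```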
  • 10
    Springer
    Mathematical geology 3 (1971), S. 335-355 
    ISSN: 1573-8868
    Keywords: nearest neighbor analysis ; regression analysis ; statistics ; trend analysis ; structure
    Source: Springer Online Journal Archives 1860-2000
    Topics: Geosciences , Mathematics
    Notes: Abstract A quantitative analysis was made of the spatial arrangement of 149 explosion craters in the western rift of Uganda. A variety of methods demonstrate that the spatial pattern of the craters reveals significant structural patterns that have guided volcanism to the surface. It is shown that the east-west elements in the field affected location, and the main rift fault is resolved into two main components. Tentatively, a possible dextral transform fault is identified that affected the relative location of the two main zones of activity. Grouping techniques demonstrate that crater groups obey an exponential rank-size rule and allow a mapping of the craters into energy classes that reveals a concentric pattern of energy in the field. The effect of the topography on energy levels and crater size shows that only topography greater than 11,000 ft could have prevented all eruptive activity, but the smaller energies and craters are sensitive to height differences on the order of the height of the rift wall, about 1000 ft. Total energy in each crater class size is roughly constant, and the field energy could create one or two single craters comparable in size to small central volcanoes.
    Type of Medium: Electronic Resource
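The nearest neighbor analysis named in the keywords above quantifies whether a point pattern such as the crater centres is clustered, random, or dispersed. A hedged sketch of the classical Clark-Evans ratio on simulated coordinates (not the Uganda crater data):

```python
# Clark-Evans nearest-neighbour ratio R for a simulated point pattern.
# R < 1 suggests clustering, R ≈ 1 a random (Poisson) pattern, R > 1 dispersion.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
side = 50.0                                  # hypothetical 50 km × 50 km study area
pts = rng.uniform(0, side, size=(149, 2))    # 149 simulated crater centres

d, _ = cKDTree(pts).query(pts, k=2)          # k=2: nearest neighbour besides the point itself
observed = d[:, 1].mean()

density = len(pts) / side**2
expected = 0.5 / np.sqrt(density)            # expected mean distance for a Poisson pattern

print(f"Clark-Evans R = {observed / expected:.2f}")
```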