ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2013-10-04
    Description: Determining the scale-length, magnitude, and distribution of heterogeneity in the lowermost mantle is crucial to understanding whole-mantle dynamics, and yet it remains a much-debated and ongoing challenge in geophysics. Common shortcomings of current seismically derived lowermost-mantle models are incomplete raypath coverage, arbitrary model parameterization, inaccurate uncertainty estimates, and an ad hoc definition of the misfit function in the optimization framework. In response, we present a new approach to global tomography. Apart from improving the existing raypath coverage using only high-quality cross-correlated waveforms, the problem is addressed within a Bayesian framework where explicit regularization of model parameters is not required. We obtain high-resolution images, complete with uncertainty estimates, of the lowermost-mantle P-wave velocity structure using a hand-picked dataset of PKPab-df, PKPbc-df, and PcP-P differential traveltimes. Most importantly, our results demonstrate that the root mean square of the P-wave velocity variations in the lowermost mantle is approximately 0.87%, which is three times larger than previous global-scale estimates.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
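A minimal sketch of the Bayesian idea behind this abstract, in the linear-Gaussian special case: with a linearized forward operator and Gaussian prior and noise, the posterior over velocity perturbations has a closed form that delivers the image and its uncertainty together. The operator, noise level, and prior below are illustrative placeholders, not the paper's transdimensional, regularization-free scheme.

```python
import numpy as np

# Hypothetical linearized tomography: d = G m + noise, with Gaussian
# prior m ~ N(0, sigma_m^2 I) and noise ~ N(0, sigma_d^2 I).
# The posterior is then Gaussian with closed-form mean and covariance,
# giving velocity anomalies *and* per-cell uncertainty together.
rng = np.random.default_rng(0)

n_rays, n_cells = 40, 10
G = rng.random((n_rays, n_cells))          # toy ray-path lengths per cell
m_true = rng.normal(0.0, 1.0, n_cells)     # true slowness perturbations
sigma_d, sigma_m = 0.1, 1.0
d = G @ m_true + rng.normal(0.0, sigma_d, n_rays)

# Standard Gaussian linear-inverse result: posterior covariance and mean
C_post = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(n_cells) / sigma_m**2)
m_post = C_post @ (G.T @ d) / sigma_d**2
sigma_post = np.sqrt(np.diag(C_post))      # per-cell uncertainty estimate

print(m_post, sigma_post)
```

In the paper's setting the posterior is instead explored by sampling, which removes the Gaussian and fixed-parameterization assumptions made here.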
  • 2
    Publication Date: 2016-06-18
    Description: A new approach is presented for the reconstruction of time series and other (y, x) functions from observables with any type of stochastic noise. In particular, noise may exist in both dependent and independent variables, i.e. y and x, or t, and may even be correlated between these variables. This situation occurs in many areas of the geosciences when the ‘independent’ time variable is itself the result of a measurement process, such as in paleo sea-level estimation. Uncertainty in the recovered time series is quantified in probabilistic terms using Bayesian changepoint modelling. The main contribution of the paper is the derivation of a new form of integrated likelihood function which can measure the data fit for a curve to (y, t) observables contaminated by any type of random noise. Closed-form expressions are found for the special case of correlated Gaussian data noise and curves built from the sum of piecewise linear polynomials. The technique is illustrated by estimating relative sea-level variations, over the last 5 glacial cycles, from a dataset of 1928 δ18O measurements. Comparisons are also made with other techniques, including those that assume an error-free ‘independent’ variable. Experiments illustrate several benefits of accounting for timing errors, including rigorous uncertainty information for both time-dependent signals and their gradients. Derivatives of the integrated likelihood function are also given, which allow implementation of likelihood maximization. The new likelihood function better reflects real errors in data and can improve recovery of the estimated time series.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
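The core idea of an integrated likelihood with noisy timing can be sketched for the simplest curve, a single straight line: marginalizing the unknown true times inflates the residual variance by the squared slope times the timing variance. All numbers below are synthetic; the paper derives the general closed form for correlated noise and piecewise-linear curves.

```python
import numpy as np

# Sketch of an integrated likelihood for a line y = a + b*t when BOTH
# t and y carry Gaussian noise (hypothetical sigmas). Marginalizing the
# unknown true t gives residuals with variance sigma_y^2 + b^2*sigma_t^2,
# a special case of the closed-form expressions the paper derives.
rng = np.random.default_rng(1)

a_true, b_true = 2.0, -0.5
sigma_t, sigma_y = 0.3, 0.1
t_true = np.linspace(0.0, 10.0, 200)
t_obs = t_true + rng.normal(0.0, sigma_t, t_true.size)
y_obs = a_true + b_true * t_true + rng.normal(0.0, sigma_y, t_true.size)

def log_like(a, b, integrated=True):
    # integrated=False ignores timing noise, as an error-free-t method would
    var = sigma_y**2 + (b**2 * sigma_t**2 if integrated else 0.0)
    r = y_obs - (a + b * t_obs)
    return -0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))

# The integrated likelihood uses the correct residual variance; ignoring
# timing noise penalizes the true line for scatter it did not cause.
print(log_like(a_true, b_true, True), log_like(a_true, b_true, False))
```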
  • 3
    Publication Date: 2011-04-19
    Description: Coda wave interferometry (CWI) can be used to estimate the separation between a pair of earthquakes directly from the coda recorded at a single station. Existing CWI methodology leads to a single estimate of separation and provides no information on uncertainty. Here, the theory of coda wave interferometry is revisited and modifications introduced that extend the range of applicability by 50% (i.e., 300–450 m separation for 1–5 Hz filtered coda waves). Synthetic experiments suggest that coda wave separation estimates fluctuate around the actual separation and that they have an increased tendency to underestimate the actual separation as the distance between events increases. A Bayesian framework is used to build a probabilistic understanding of the coda wave constraints which accounts for both the fluctuations and bias. The resulting a posteriori function provides a conditional probability distribution of the actual separation given the coda wave constraints. It can be used in isolation, or in combination with other constraints such as travel times or geodetic data, and provides a method for combining data from multiple stations and events. Earthquakes on the Calaveras Fault, California, are used to demonstrate that CWI is relatively insensitive to the number of recording stations and leads to enhanced estimates of separation in situations where station geometry is unfavorable for traditional relative location techniques.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
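The Bayesian combination step described above can be illustrated with a toy grid posterior over the actual separation given per-station CWI estimates. The forward model g(delta), which encodes the tendency of CWI to underestimate large separations, and the scatter sigma are hypothetical stand-ins for the fluctuation-and-bias model calibrated in the paper's synthetic experiments.

```python
import numpy as np

# Toy Bayesian combination of coda-wave separation estimates from
# several stations; all numbers and the bias model are hypothetical.
def g(delta, scale=600.0):
    # estimates increasingly underestimate the true separation
    return delta * (1.0 - delta / (2.0 * scale))

sigma = 20.0                                 # assumed estimate scatter (m)
estimates = np.array([180.0, 200.0, 170.0])  # per-station CWI estimates (m)

delta_grid = np.linspace(0.0, 450.0, 901)    # candidate true separations (m)
log_post = np.zeros_like(delta_grid)         # flat prior on the grid
for est in estimates:
    log_post += -0.5 * ((est - g(delta_grid)) / sigma) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum() * (delta_grid[1] - delta_grid[0])
delta_map = delta_grid[np.argmax(post)]      # maximum a posteriori separation
print(delta_map)
```

Because g underestimates, the posterior mode sits above the raw mean of the estimates, which is the bias correction the probabilistic treatment buys.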
  • 4
    Publication Date: 2012-02-03
    Description: We present a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation. This class of algorithm treats the number of model parameters (e.g. number of layers) as an unknown in the problem. The dimension of the model space is variable and a Markov chain Monte Carlo (McMC) scheme is used to provide a parsimonious solution that fully quantifies the degree of knowledge one has about seismic structure (i.e. constraints on the model, resolution, and trade-offs). The level of data noise (i.e. the covariance matrix of data errors) effectively controls the information recoverable from the data, and here it naturally determines the complexity of the model (i.e. the number of model parameters). However, it is often difficult to quantify the data noise appropriately, particularly in the case of seismic waveform inversion where data errors are correlated. Here we address the issue of noise estimation using an extended Hierarchical Bayesian formulation, which allows both the variance and covariance of data noise to be treated as unknowns in the inversion. In this way it is possible to let the data infer the appropriate level of data fit. In the context of joint inversions, assessment of uncertainty for different data types becomes crucial in the evaluation of the misfit function. We show that the Hierarchical Bayes procedure is a powerful tool in this situation, because it is able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors. After illustrating the method with synthetic tests, a real data application is shown where teleseismic receiver functions and ambient noise surface wave dispersion measurements from the WOMBAT array (South-East Australia) are jointly inverted to provide a probabilistic 1D model of shear-wave velocity beneath a given station.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
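The hierarchical step, treating data noise as an unknown, can be sketched with a toy linear model and a Gibbs sampler that alternates between the model parameter and the noise variance, so the data set their own level of fit. Model, priors, and data below are illustrative, not the paper's receiver-function/dispersion parameterization with correlated noise.

```python
import numpy as np

# Minimal hierarchical-Bayes sketch: the noise variance sig2 is sampled
# alongside the slope m of a toy linear model d = m*x + noise.
rng = np.random.default_rng(2)

m_true, sigma_true = 3.0, 0.5
x = np.linspace(0.0, 1.0, 100)
d = m_true * x + rng.normal(0.0, sigma_true, x.size)

m, sig2 = 0.0, 1.0
sig2_samples = []
for it in range(2000):
    # Gibbs step for m given sig2 (Gaussian conditional, flat prior on m)
    prec = np.sum(x * x) / sig2
    m = rng.normal(np.sum(x * d) / sig2 / prec, 1.0 / np.sqrt(prec))
    # Gibbs step for sig2 given m: inverse-gamma conditional under a
    # Jeffreys-style prior p(sig2) ~ 1/sig2
    r = d - m * x
    sig2 = 1.0 / rng.gamma(x.size / 2.0, 2.0 / np.sum(r * r))
    if it >= 500:                         # discard burn-in
        sig2_samples.append(sig2)

sigma_est = float(np.sqrt(np.mean(sig2_samples)))
print(sigma_est)                          # posterior noise level estimate
```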
  • 5
    Publication Date: 2014-04-16
    Description: Knowledge of past plate motions derived from ocean-floor finite rotations is an important asset of the Earth Sciences, because it allows linking a variety of shallow- and deep-rooted geological processes. Efforts have recently been taken towards inferring finite rotations at the unprecedented temporal resolution of 1 Myr or less, and more data are anticipated in the near future. These reconstructions, like any data set, feature a degree of noise that compromises significantly our ability to make geodynamical inferences. Bayesian inference has recently been shown to be effective in reducing the impact of noise on plate kinematics inferred from high-temporal-resolution finite-rotation data sets. We describe REDBACK, an open-source software package that implements trans-dimensional hierarchical Bayesian inference for efficient noise reduction in plate kinematic reconstructions. Algorithm details are described and illustrated by means of a synthetic test.
    Electronic ISSN: 1525-2027
    Topics: Chemistry and Pharmacology , Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
  • 6
    Publication Date: 2014-07-11
    Description: Over the last 25 years, several studies have tested for a link between geomagnetic field intensity and reversal frequency. However, despite a large increase in the number of absolute paleointensity determinations, and improved methods for obtaining such data, two competing models have arisen. Here, we employ a new tool for objectively analyzing paleomagnetic time series to investigate the possibility of a link between reversal frequency and paleointensity. Transdimensional Markov chain Monte Carlo techniques are applied to a quality-filtered version of the global paleointensity (PINT) database for the last 202 My to model long-term paleointensity behavior. A large ensemble of models is sampled, from which a final representative mean model is extracted. The resulting paleointensity model confirms published conclusions that the single silicate crystal method gives significantly different results from more conventional whole-rock paleointensity methods; this makes it difficult to jointly model the two data types in the same analysis. When the much larger whole-rock dataset is considered, a stable paleointensity of 5.46 ± 0.28 × 10²² A m² for the last 202 My is consistent with the 95% confidence interval of the paleointensity model. Statistical tests indicate no significant correlation between reversal frequency and field intensity at the 0.05 level. However, this result is likely due to the characteristics of the PINT database rather than being a genuine, physically representative conclusion. Given the paucity of data and general state of the global paleointensity database, concerted efforts to increase the number of high-quality, well-dated paleointensity data are required before conclusions about a link between geomagnetic field intensity and reversal frequency can be confidently drawn.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
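The kind of significance test mentioned above can be sketched as a simple permutation test for correlation between two binned series. The series below are synthetic placeholders, not the PINT compilation; only the test mechanics are illustrated.

```python
import numpy as np

# Permutation test for correlation between reversal frequency and field
# intensity; shuffling one series builds the null distribution of r.
rng = np.random.default_rng(3)

intensity = rng.normal(5.5, 0.5, 40)   # e.g. binned dipole-moment values (toy)
rev_rate = rng.normal(3.0, 1.0, 40)    # reversals per Myr in the same bins (toy)

r_obs = np.corrcoef(intensity, rev_rate)[0, 1]
perm_r = np.array([
    np.corrcoef(intensity, rng.permutation(rev_rate))[0, 1]
    for _ in range(5000)
])
# two-sided p-value: fraction of shuffled |r| at least as large as observed
p_value = float(np.mean(np.abs(perm_r) >= np.abs(r_obs)))
print(r_obs, p_value)
```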
  • 7
    Publication Date: 2012-10-19
    Description: Interpolation of spatial data is a widely used technique across the Earth sciences. For example, the thickness of the crust can be estimated by different active and passive seismic source surveys, and seismologists reconstruct the topography of the Moho by interpolating these different estimates. Although much research has been done on improving the quantity and quality of observations, the interpolation algorithms utilized often remain standard linear regression schemes, with three main weaknesses: (1) the level of structure in the surface, or smoothness, has to be predefined by the user; (2) different classes of measurements with varying and often poorly constrained uncertainties are used together, and hence it is difficult to give appropriate weight to different data types with standard algorithms; (3) there is typically no simple way to propagate uncertainties in the data to uncertainty in the estimated surface. Hence the situation can be expressed by Mackenzie (2004): “We use fantastic telescopes, the best physical models, and the best computers. The weak link in this chain is interpreting our data using 100 year old mathematics”. Here we use recent developments made in Bayesian statistics and apply them to the problem of surface reconstruction. We show how the reversible jump Markov chain Monte Carlo (rj-McMC) algorithm can be used to let the degree of structure in the surface be directly determined by the data. The solution is described in probabilistic terms, allowing uncertainties to be fully accounted for. The method is illustrated with an application to Moho depth reconstruction in Australia.
    Print ISSN: 0148-0227
    Topics: Geosciences , Physics
    Published by Wiley on behalf of American Geophysical Union (AGU).
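The reversible-jump idea, letting the data choose the degree of structure, can be sketched in 1-D with a partition model whose node count is sampled along with node values. This toy uses prior-drawn birth/death proposals and omits the transdimensional balance factors and the 2-D Voronoi parameterization of a real rj-McMC surface reconstruction.

```python
import numpy as np

# Toy rj-McMC: data are noisy samples of a step function ("surface"),
# and the number of partition nodes is itself a sampled unknown.
rng = np.random.default_rng(4)

x = np.linspace(0.0, 1.0, 120)
truth = np.where(x < 0.5, 30.0, 40.0)       # toy "Moho depth" profile (km)
d = truth + rng.normal(0.0, 1.0, x.size)    # noisy point estimates
sigma, k_max = 1.0, 20

def predict(pos, val):
    # nearest-node (1-D Voronoi) interpolation of node values onto x
    return val[np.abs(x[:, None] - pos[None, :]).argmin(axis=1)]

def log_like(pos, val):
    r = d - predict(pos, val)
    return -0.5 * np.sum(r * r) / sigma**2

pos, val = np.array([0.5]), np.array([35.0])   # start from a single node
ll = log_like(pos, val)
k_hist = []
for it in range(3000):
    u = rng.random()
    if u < 0.4 and pos.size < k_max:        # birth: propose a node from the prior
        p2 = np.append(pos, rng.random())
        v2 = np.append(val, rng.uniform(20.0, 50.0))
    elif u < 0.8 and pos.size > 1:          # death: delete a random node
        i = rng.integers(pos.size)
        p2, v2 = np.delete(pos, i), np.delete(val, i)
    else:                                   # perturb one node value
        p2, v2 = pos.copy(), val.copy()
        v2[rng.integers(v2.size)] += rng.normal(0.0, 1.0)
    ll2 = log_like(p2, v2)
    # likelihood-ratio acceptance (dimension-balance terms ignored in this toy)
    if np.log(rng.random()) < ll2 - ll:
        pos, val, ll = p2, v2, ll2
    if it >= 1000:
        k_hist.append(pos.size)

print(np.mean(k_hist), ll)
```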
  • 8
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical Journal International 118 (1994)
    ISSN: 1365-246X
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences
    Notes: This paper shows how the performance of a fully non-linear earthquake location scheme can be improved by taking advantage of problem-specific information in the location procedure. The genetic algorithm is best viewed as a method of parameter space sampling that can be used for optimization problems. It has been applied successfully in regional and teleseismic earthquake location when the network geometry is favourable. However, on a series of test events with unfavourable network geometries the performance of the genetic algorithm is found to be poor. We introduce a method to separate the spatial and temporal parameters in such a way that problems related to the strong trade-off between depth and origin time are avoided. Our modified algorithm has been applied to several test events. Performance over the unmodified algorithm is improved substantially and the computational cost is reduced. The algorithm is better suited to the determination of hypocentral location whether using arrival times, array information (slowness and azimuth) or a combination of both. A second type of modification is introduced which exploits the weak correlation between the epicentral parameters and depth. This algorithm also improves performance over the standard genetic algorithm search, except in circumstances where the depth and epicentre are not weakly correlated, which occurs when the azimuthal coverage is very poor, or when azimuth and slowness information are incorporated. On a shallow nuclear explosion with only teleseismic P arrivals available, the algorithm consistently converged to a depth very close to the true depth, indicating superior depth estimation for shallow earthquake locations over the unmodified algorithm.
    Type of Medium: Electronic Resource
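The spatial/temporal separation described in the notes can be illustrated with a toy genetic algorithm: the search runs over location only, and for each trial hypocentre the best origin time follows analytically as the mean of observed-minus-predicted traveltimes, removing the depth/origin-time trade-off from the search. The velocity model, network, and GA settings below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
v = 6.0                                     # assumed uniform P velocity (km/s)
stations = rng.uniform(-50.0, 50.0, (8, 2)) # surface stations (x, y); z = 0
src_true = np.array([10.0, -5.0, 12.0])     # true hypocentre (km)
t0_true = 4.0                               # true origin time (s)

def travel_times(src):
    dx = stations - src[:2]
    return np.sqrt(np.sum(dx * dx, axis=1) + src[2] ** 2) / v

t_obs = t0_true + travel_times(src_true) + rng.normal(0.0, 0.02, 8)

def misfit(src):
    tt = travel_times(src)
    t0 = np.mean(t_obs - tt)                # analytic best origin time
    return np.sum((t_obs - t0 - tt) ** 2), t0

# GA over (x, y, z) only: truncation selection, uniform crossover,
# Gaussian mutation, with parents retained (elitism)
pop = rng.uniform([-50.0, -50.0, 0.0], [50.0, 50.0, 30.0], (60, 3))
for gen in range(80):
    scores = np.array([misfit(p)[0] for p in pop])
    parents = pop[np.argsort(scores)[:20]]
    children = parents[rng.integers(20, size=(40, 3)), np.arange(3)]
    children += rng.normal(0.0, 1.0, children.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(p)[0] for p in pop])]
print(best, misfit(best)[1])
```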
  • 9
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical Journal International 102 (1990)
    ISSN: 1365-246X
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences
    Notes: The problem of constraining 3-D seismic anomalies using arrival times from a regional network is examined. The non-linear dependence of arrival times on the hypocentral parameters of the earthquakes and the 3-D velocity field leads to a multiparameter-type non-linear inverse problem, and the distribution of sources and receivers from a typical regional network results in an enormous 3-D variation in data constraint. To ensure computational feasibility, authors have tended to neglect the non-linearity of the problem by linearizing about some best-guess discretized earth model. One must be careful in interpreting 3-D structure from linearized inversions because the inadequacy of the data window may combine with non-linear effects to produce artificial or phantom ‘structure’. To avoid the generation of artificial velocity gradients we must determine only those velocity variations which are necessary to fit the data rather than merely estimating local velocities in different parts of the model, which is the more common practice. We present a series of inversion algorithms which seek to inhibit the generation of unnecessary structure while performing efficiently within the framework of a large-scale inversion. This is achieved by extending the subspace method of Kennett, Sambridge & Williamson (1988) and incorporating the smoothing strategy proposed by Constable, Parker & Constable (1987). A flexible model parametrization involving Cardinal spline functions is used, and full 3-D ray tracing performed. A comparison between linear and non-linear inversions shows that if a breakdown in the linearizing approximation occurs, spurious velocity models may be obtained which would appear acceptable in a linear inversion. Application of the techniques to a SE Australian data set shows that unnecessary structure can be suppressed. As the smoothing power of the algorithm is increased, a robust low-velocity anomaly dipping to the north becomes the most dominant feature of the P-wave model and much of the complex structure of pure data-fitting models is removed.
    Type of Medium: Electronic Resource
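The smoothing strategy can be sketched as a damped least-squares problem with a second-difference penalty, so that only structure demanded by the data survives. The operator G, model size, and damping weight below are illustrative; the paper embeds such smoothing in a subspace scheme with full 3-D ray tracing.

```python
import numpy as np

# Smoothed linearized inversion: minimize ||d - G m||^2 + mu ||D m||^2,
# where D is a second-difference roughening operator.
rng = np.random.default_rng(6)

n_data, n_model = 60, 30
G = rng.random((n_data, n_model))           # toy sensitivity matrix
m_true = np.zeros(n_model)
m_true[10:20] = 1.0                         # a single blocky anomaly
d = G @ m_true + rng.normal(0.0, 0.05, n_data)

# build the second-difference operator D
D = np.zeros((n_model - 2, n_model))
for i in range(n_model - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

mu = 1.0                                    # illustrative smoothing weight
m_smooth = np.linalg.solve(G.T @ G + mu * D.T @ D, G.T @ d)
m_plain = np.linalg.lstsq(G, d, rcond=None)[0]   # pure data-fitting model

rough_smooth = np.sum((D @ m_smooth) ** 2)
rough_plain = np.sum((D @ m_plain) ** 2)
print(rough_smooth, rough_plain)
```

Raising mu trades data fit for smoothness; the regularized model is provably no rougher than the pure least-squares one.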
  • 10
    Electronic Resource
    Oxford, UK : Blackwell Publishing Ltd
    Geophysical Journal International 101 (1990)
    ISSN: 1365-246X
    Source: Blackwell Publishing Journal Backfiles 1879-2005
    Topics: Geosciences
    Notes: Traveltime calculations in 3-D velocity models have become more commonplace during the past decade or so. Many schemes have been developed to deal with the initial value problem, which consists of tracing rays from a known source position and trajectory, usually towards some distant surface. Less attention has been given to the more difficult problem of boundary value ray tracing in 3-D. In this case, source and receiver positions are known and one, or more, minimum time paths are sought between fixed endpoints. A new technique for boundary value ray tracing is proposed. The scheme uses a common numerical integration technique for solving the initial value problem and iteratively updates the take-off angles until the ray passes through the receiver. This type of ‘shooting’ technique is made efficient by using expressions describing the geometrical spreading of the wavefront to determine the relationship between the ray position at any time and the take-off angles from the source. The use of numerical integration allows the method to be compatible with a wide variety of structures. These include models with velocity varying smoothly as a function of position and those with arbitrarily orientated surfaces of discontinuity. An examination of traveltime accuracy is given, as well as a discussion of efficiency for a few classes of velocity model. To improve upon the first-guess pair of take-off angles, a small-scale non-linear inverse problem must be solved. The difference between the receiver position and the arrival point of a ray, on a plane through the receiver, describes a mismatch surface as a function of the two take-off angles of the ray. The shape of this surface can possess local minima and multiple ‘global’ minima even for relatively simple 1-D velocity models. Its study provides some insight into the non-linearities of a small-scale geophysical inverse problem.
    Type of Medium: Electronic Resource
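A toy version of the shooting scheme: integrate the kinematic ray equations in a 1-D linear-gradient medium and update the take-off angle by a secant step on the endpoint mismatch until the ray lands at the receiver offset. The paper's method instead uses wavefront geometrical-spreading expressions for the update and handles general 3-D structures; the medium, step sizes, and tolerances here are assumptions.

```python
import numpy as np

def v(z):
    return 4.0 + 0.05 * z                   # toy velocity (km/s), linear in depth

def shoot(theta0, dt=0.01, t_max=100.0):
    # Euler-integrate the 2-D ray equations (theta measured from vertical)
    # from the origin until the ray returns to the surface z = 0
    x, z, th = 0.0, 0.0, theta0
    for _ in range(int(t_max / dt)):
        vz = v(z)
        x += vz * np.sin(th) * dt
        z += vz * np.cos(th) * dt
        th += 0.05 * np.sin(th) * dt        # dtheta/dt = (dv/dz) * sin(theta)
        if z < 0.0:
            break
    return x                                # surface offset reached (km)

target = 60.0                               # receiver offset (km)
th_a, th_b = 0.6, 1.2                       # two trial take-off angles (rad)
xa, xb = shoot(th_a), shoot(th_b)
for _ in range(30):                         # secant updates on the mismatch
    th_new = th_b + (target - xb) * (th_b - th_a) / (xb - xa)
    th_a, xa = th_b, xb
    th_b, xb = th_new, shoot(th_new)
    if abs(xb - target) < 0.01:
        break

print(th_b, xb)
```

In 3-D the same idea becomes a two-parameter search over both take-off angles, which is exactly the small-scale non-linear inverse problem the notes describe.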