ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


  • 1
    Oxford University Press
    Publication Date: 2024-03-23
    Description: With so much media and political criticism of their shortcomings and failures, it is easy to overlook the fact that many governments work pretty well much of the time. Great Policy Successes turns the spotlight on instances of public policy that are remarkably successful. It develops a framework for identifying and assessing policy successes, paying attention not just to their programmatic outcomes but also to the quality of the processes by which policies are designed and delivered, the level of support and legitimacy they attain, and the extent to which successful performance endures over time. The bulk of the book is then devoted to 15 detailed case studies of striking policy successes from around the world, including Singapore's public health system, Copenhagen and Melbourne's rise from stilted backwaters to the highly liveable and dynamic urban centres they are today, Brazil's Bolsa Familia poverty relief scheme, the US's GI Bill, and Germany's breakthrough labour market reforms of the 2000s. Each case is set in context, its main actors are introduced, key events and decisions are described, the assessment framework is applied to gauge the nature and level of its success, key contributing factors to success are identified, and potential lessons and future challenges are drawn out. Purposefully avoiding the kind of heavy theorizing that characterizes many accounts of public policy processes, each case is written in an accessible and narrative style ideally suited for classroom use in conjunction with mainstream textbooks on public policy design, implementation, and evaluation.
    Keywords: public policy ; policy evaluation ; government ; governance ; social policy ; health policy ; economic policy ; thema EDItEUR::J Society and Social Sciences::JP Politics and government
    Language: English
  • 2
    Publication Date: 2019-01-01
    Description: We describe the historical evolution of the conceptualization, formulation, quantification, application, and utilization of “radiative forcing” (RF) of Earth’s climate. Basic theories of shortwave and longwave radiation were developed through the nineteenth and twentieth centuries and established the analytical framework for defining and quantifying the perturbations to Earth’s radiative energy balance by natural and anthropogenic influences. The insight that Earth’s climate could be radiatively forced by changes in carbon dioxide, first introduced in the nineteenth century, gained empirical support with sustained observations of the atmospheric concentrations of the gas beginning in 1957. Advances in laboratory and field measurements, theory, instrumentation, computational technology, data, and analysis of well-mixed greenhouse gases and the global climate system through the twentieth century enabled the development and formalism of RF; this allowed RF to be related to changes in global-mean surface temperature with the aid of increasingly sophisticated models. This in turn led to RF becoming firmly established as a principal concept in climate science by 1990. The linkage with surface temperature has proven to be the most important application of the RF concept, enabling a simple metric to evaluate the relative climate impacts of different agents. The late 1970s and 1980s saw accelerated developments in quantification, including the first assessment of the effect of the forcing due to the doubling of carbon dioxide on climate (the “Charney” report). The concept was subsequently extended to a wide variety of agents beyond well-mixed greenhouse gases (WMGHGs; carbon dioxide, methane, nitrous oxide, and halocarbons) to short-lived species such as ozone. 
The WMO and IPCC international assessments began the important sequence of periodic evaluations and quantifications of the forcings by natural (solar irradiance changes and stratospheric aerosols resulting from volcanic eruptions) and a growing set of anthropogenic agents (WMGHGs, ozone, aerosols, land surface changes, contrails). From the 1990s to the present, knowledge and scientific confidence in the radiative agents acting on the climate system have proliferated. The conceptual basis of RF has also evolved as both our understanding of the way radiative forcing drives climate change and the diversity of the forcing mechanisms have grown. This has led to the current situation where “effective radiative forcing” (ERF) is regarded as the preferred practical definition of radiative forcing in order to better capture the link between forcing and global-mean surface temperature change. The use of ERF, however, comes with its own attendant issues, including challenges in its diagnosis from climate models, its applications to small forcings, and blurring of the distinction between rapid climate adjustments (fast responses) and climate feedbacks; this will necessitate further elaboration of its utility in the future. Global climate model simulations of radiative perturbations by various agents have established how the forcings affect other climate variables besides temperature (e.g., precipitation). The forcing–response linkage as simulated by models, including the diversity in the spatial distribution of forcings by the different agents, has provided a practical demonstration of the effectiveness of agents in perturbing the radiative energy balance and causing climate changes. 
The significant advances over the past half century have established, with very high confidence, that the global-mean ERF due to human activity since preindustrial times is positive (the 2013 IPCC assessment gives a best estimate of 2.3 W m−2, with a range from 1.1 to 3.3 W m−2; 90% confidence interval). Further, except in the immediate aftermath of climatically significant volcanic eruptions, the net anthropogenic forcing dominates over natural radiative forcing mechanisms. Nevertheless, the substantial remaining uncertainty in the net anthropogenic ERF leads to large uncertainties in estimates of climate sensitivity from observations and in predicting future climate impacts. The uncertainty in the ERF arises principally from the incorporation of the rapid climate adjustments in the formulation, the well-recognized difficulties in characterizing the preindustrial state of the atmosphere, and the incomplete knowledge of the interactions of aerosols with clouds. This uncertainty impairs the quantitative evaluation of climate adaptation and mitigation pathways in the future. A grand challenge in Earth system science lies in continuing to sustain the relatively simple essence of the radiative forcing concept in a form similar to that originally devised, and at the same time improving the quantification of the forcing. This, in turn, demands an accurate, yet increasingly complex and comprehensive, accounting of the relevant processes in the climate system.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
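The central relation in this abstract, linking radiative forcing to global-mean surface temperature change, can be sketched as a one-line calculation. This is an illustrative back-of-envelope sketch, not code from the chapter: the sensitivity parameter lambda ≈ 0.8 K per W m−2 is an assumed textbook ballpark, while the 2.3 (1.1–3.3) W m−2 ERF values are the ones quoted in the abstract.

```python
# Hedged sketch of Delta_T = lambda * Delta_F.
# LAMBDA is an assumed illustrative value, not taken from this chapter.
LAMBDA = 0.8  # climate sensitivity parameter, K per (W m^-2) -- assumption

def equilibrium_warming(forcing_wm2, sensitivity=LAMBDA):
    """Equilibrium global-mean temperature change for a given forcing."""
    return sensitivity * forcing_wm2

# Anthropogenic ERF values quoted in the abstract (2013 IPCC assessment):
best = equilibrium_warming(2.3)                                  # best estimate
low, high = equilibrium_warming(1.1), equilibrium_warming(3.3)   # 90% range
print(f"~{best:.1f} K (range ~{low:.1f} to {high:.1f} K)")
```

The linear form is exactly why the abstract calls RF "a simple metric": once lambda is fixed, forcings from different agents can be compared on one scale.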
  • 3
    Publication Date: 2019-01-01
    Description: Today’s global Earth system models began as simple regional models of tropospheric weather systems. Over the past century, the physical realism of the models has steadily increased, while the scope of the models has broadened to include the global troposphere and stratosphere, the ocean, the vegetated land surface, and terrestrial ice sheets. This chapter gives an approximately chronological account of the many and profound conceptual and technological advances that made today’s models possible. For brevity, we omit any discussion of the roles of chemistry and biogeochemistry, and terrestrial ice sheets.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 4
    Publication Date: 2019-01-01
    Description: This chapter reviews the history of the discovery of cloud nuclei and their impacts on cloud microphysics and the climate system. Pioneers including John Aitken, Sir John Mason, Hilding Köhler, Christian Junge, Sean Twomey, and Kenneth Whitby laid the foundations of the field. Through their contributions and those of many others, rapid progress has been made in the last 100 years in understanding the sources, evolution, and composition of the atmospheric aerosol, the interactions of particles with atmospheric water vapor, and cloud microphysical processes. Major breakthroughs in measurement capabilities and in theoretical understanding have elucidated the characteristics of cloud condensation nuclei and ice nucleating particles and the role these play in shaping cloud microphysical properties and the formation of precipitation. Despite these advances, not all of their impacts on cloud formation and evolution have been resolved. The resulting radiative forcing on the climate system due to aerosol–cloud interactions remains an unacceptably large uncertainty in future climate projections. Process-level understanding of aerosol–cloud interactions remains insufficient to support technological mitigation strategies such as intentional weather modification or geoengineering intended to counter accelerating Earth-system-wide changes in temperature and weather patterns.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 5
    Publication Date: 2019-01-01
    Description: Over the past 100 years, the collaborative effort of the international science community, including government weather services and the media, along with the associated proliferation of environmental observations, improved scientific understanding, and growth of technology, has radically transformed weather forecasting into an effective global and regional environmental prediction capability. This chapter traces the evolution of forecasting, starting in 1919 [when the American Meteorological Society (AMS) was founded], over four eras separated by breakpoints at 1939, 1956, and 1985. The current state of forecasting could not have been achieved without essential collaboration within and among countries in pursuing the common weather and Earth-system prediction challenge. AMS itself has had a strong role in enabling this international collaboration.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 6
    Publication Date: 2019-01-01
    Description: The year 1919 was important in meteorology, not only because it was the year that the American Meteorological Society was founded, but also for two other reasons. One of the foundational papers in extratropical cyclone structure by Jakob Bjerknes was published in 1919, leading to what is now known as the Norwegian cyclone model. Also that year, a series of meetings was held that led to the formation of organizations that promoted the international collaboration and scientific exchange required for extratropical cyclone research, which by necessity involves spatial scales spanning national borders. This chapter describes the history of scientific inquiry into the structure, evolution, and dynamics of extratropical cyclones, their constituent fronts, and their attendant jet streams and storm tracks. We refer to these phenomena collectively as the centerpiece of meteorology because of their central role in fostering meteorological research during this century. This extremely productive period in extratropical cyclone research has been possible because of 1) the need to address practical challenges of poor forecasts that had large socioeconomic consequences, 2) the intermingling of theory, observations, and diagnosis (including dynamical modeling) to provide improved physical understanding and conceptual models, and 3) strong international cooperation. Conceptual frameworks for cyclones arise from a desire to classify and understand cyclones; they include the Norwegian cyclone model and its sister the Shapiro–Keyser cyclone model. The challenge of understanding the dynamics of cyclones led to such theoretical frameworks as quasigeostrophy, baroclinic instability, semigeostrophy, and frontogenesis. The challenge of predicting explosive extratropical cyclones in particular led to new theoretical developments such as potential-vorticity thinking and downstream development. 
Deeper appreciation of the limits of predictability has resulted from an evolution from determinism to chaos. Last, observational insights led to detailed descriptions of cyclone and frontal structure, storm tracks, and rainbands.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 7
    Publication Date: 2019-01-01
    Description: Mountains significantly influence weather and climate on Earth, including disturbed surface winds; altered distribution of precipitation; gravity waves reaching the upper atmosphere; and modified global patterns of storms, fronts, jet streams, and climate. All of these impacts arise because Earth’s mountains penetrate deeply into the atmosphere. This penetration can be quantified by comparing mountain heights to several atmospheric reference heights such as density scale height, water vapor scale height, airflow blocking height, and the height of natural atmospheric layers. The geometry of Earth’s terrain can be analyzed quantitatively using statistical, matrix, and spectral methods. In this review, we summarize how our understanding of orographic effects has progressed over 100 years using the equations for atmospheric dynamics and thermodynamics, numerical modeling, and many clever in situ and remote sensing methods. We explore how mountains disturb the surface winds on our planet, including mountaintop winds, severe downslope winds, barrier jets, gap jets, wakes, thermally generated winds, and cold pools. We consider the variety of physical mechanisms by which mountains modify precipitation patterns in different climate zones. We discuss the vertical propagation of mountain waves through the troposphere into the stratosphere, mesosphere, and thermosphere. Finally, we look at how mountains distort the global-scale westerly winds that circle the poles and how varying ice sheets and mountain uplift and erosion over geologic time may have contributed to climate change.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 8
    Publication Date: 2019-01-01
    Description: The stratosphere contains ~17% of Earth’s atmospheric mass, but its existence was unknown until 1902. In the following decades our knowledge grew gradually as more observations of the stratosphere were made. In 1913 the ozone layer, which protects life from harmful ultraviolet radiation, was discovered. From ozone and water vapor observations, a first basic idea of a stratospheric general circulation was put forward. Since the 1950s our knowledge of the stratosphere and mesosphere has expanded rapidly, and the importance of this region in the climate system has become clear. With more observations, several new stratospheric phenomena have been discovered: the quasi-biennial oscillation, sudden stratospheric warmings, the Southern Hemisphere ozone hole, and surface weather impacts of stratospheric variability. None of these phenomena were anticipated by theory. Advances in theory have more often than not been prompted by unexplained phenomena seen in new stratospheric observations. From the 1960s onward, the importance of dynamical processes and the coupled stratosphere–troposphere circulation was realized. Since approximately 2000, better representations of the stratosphere—and even the mesosphere—have been included in climate and weather forecasting models. We now know that in order to produce accurate seasonal weather forecasts, and to predict long-term changes in climate and the future evolution of the ozone layer, models with a well-resolved stratosphere with realistic dynamics and chemistry are necessary.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 9
    Publication Date: 2019-01-01
    Description: Satellite meteorology is a relatively new branch of the atmospheric sciences. The field emerged in the late 1950s during the Cold War and built on the advances in rocketry after World War II. In less than 70 years, satellite observations have transformed the way scientists observe and study Earth. This paper discusses some of the key advances in our understanding of the energy and water cycles, weather forecasting, and atmospheric composition enabled by satellite observations. While progress truly has been an international achievement, in keeping with a monograph marking the centennial of the American Meteorological Society, and given limited space, the emphasis of this chapter is on the U.S. satellite effort.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 10
    Publication Date: 2019-01-01
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 11
    Publication Date: 2019-01-01
    Description: This chapter outlines the development of our understanding of several examples of mesoscale atmospheric circulations that are tied directly to surface forcings, starting from thermally driven variations over the ocean and progressing inland to man-made variations in temperature and roughness, and ending with forced boundary layer circulations. Examples include atmospheric responses to 1) overocean temperature variations, 2) coastlines (sea breezes), 3) mesoscale regions of inland water (lake-effect storms), and 4) variations in land-based surface usage (urban land cover). This chapter provides brief summaries of the historical evolution of, and tools for, understanding such mesoscale atmospheric circulations and their importance to the field, as well as physical processes responsible for initiating and determining their evolution. Some avenues of future research that we see as critical are also outlined. The American Meteorological Society (AMS) has played a direct and important role in fostering the development of understanding mesoscale surface-forced circulations. The significance of AMS journal publications and conferences on this and interrelated atmospheric, oceanic, and hydrological fields, as well as those by sister scientific organizations, is demonstrated through extensive relevant citations.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 12
    Publication Date: 2019-01-01
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 13
    Publication Date: 2019-01-01
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 14
    Publication Date: 2019-01-01
    Description: The history of over 100 years of observing the ocean is reviewed. The evolution of particular classes of ocean measurements (e.g., shipboard hydrography, moorings, and drifting floats) is summarized along with some of the discoveries and dynamical understanding they made possible. By the 1970s, isolated and “expedition” observational approaches were evolving into experimental campaigns that covered large ocean areas and addressed multiscale phenomena using diverse instrumental suites and associated modeling and analysis teams. The Mid-Ocean Dynamics Experiment (MODE) addressed mesoscale “eddies” and their interaction with larger-scale currents using new ocean modeling and experiment design techniques and a suite of developing observational methods. Following MODE, new instrument networks were established to study processes that dominated ocean behavior in different regions. The Tropical Ocean Global Atmosphere program gathered multiyear time series in the tropical Pacific to understand, and eventually predict, evolution of coupled ocean–atmosphere phenomena like El Niño–Southern Oscillation (ENSO). The World Ocean Circulation Experiment (WOCE) sought to quantify ocean transport throughout the global ocean using temperature, salinity, and other tracer measurements along with fewer direct velocity measurements with floats and moorings. Western and eastern boundary currents attracted comprehensive measurements, and various coastal regions, each with its unique scientific and societally important phenomena, became home to regional observing systems. Today, the trend toward networked observing arrays of many instrument types continues to be a productive way to understand and predict large-scale ocean phenomena.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 15
    Publication Date: 2019-01-01
    Description: In situ observation networks and reanalysis products of the state of the atmosphere and upper ocean show well-defined, large-scale patterns of coupled climate variability on time scales ranging from seasons to several decades. We summarize these phenomena and their physics, which have been revealed by analysis of observations, by experimentation with uncoupled and coupled atmosphere and ocean models with a hierarchy of complexity, and by theoretical developments. We start with a discussion of the seasonal cycle in the equatorial tropical Pacific and Atlantic Oceans, which are clearly affected by coupling between the atmosphere and the ocean. We then discuss the tropical phenomena that only exist because of the coupling between the atmosphere and the ocean: the Pacific and Atlantic meridional modes, the El Niño–Southern Oscillation (ENSO) in the Pacific, and a phenomenon analogous to ENSO in the Atlantic. For ENSO, we further discuss the sources of irregularity and asymmetry between warm and cold phases of ENSO, and the response of ENSO to forcing. Fundamental to variability on all time scales in the midlatitudes of the Northern Hemisphere are preferred patterns of uncoupled atmospheric variability that exist independent of any changes in the state of the ocean, land, or distribution of sea ice. These patterns include the North Atlantic Oscillation (NAO), the North Pacific Oscillation (NPO), and the Pacific–North American (PNA) pattern; they are most active in wintertime, with a temporal spectrum that is nearly white. Stochastic variability in the NPO, PNA, and NAO forces the ocean on daily to interannual time scales by way of turbulent heat exchange and Ekman transport, and on decadal and longer time scales by way of wind stress forcing. The PNA is partially responsible for the Pacific decadal oscillation; the NAO is responsible for an analogous phenomenon in the North Atlantic subpolar gyre.
In models, stochastic forcing by the NAO also gives rise to variability in the strength of the Atlantic meridional overturning circulation (AMOC) that is partially responsible for multidecadal anomalies in the North Atlantic climate known as the Atlantic multidecadal oscillation (AMO); observations do not yet exist to adequately determine the physics of the AMO. We review the progress that has been made in the past 50 years in understanding each of these phenomena and the implications for short-term (seasonal-to-interannual) climate forecasts. We end with a brief discussion of things that are on the horizon, under the rug, and over the rainbow.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
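The picture in this abstract of a nearly white atmospheric spectrum forcing slower ocean variability is commonly formalized as a Hasselmann-type stochastic model. The sketch below is my illustration of that idea, not code from the chapter; the damping time `tau` is an arbitrary choice for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def ocean_response(n_steps, tau=20.0, dt=1.0):
    """Discretized dT/dt = -T/tau + white noise: an AR(1) mixed-layer anomaly."""
    a = 1.0 - dt / tau                       # damping factor (e-folding time tau)
    forcing = rng.standard_normal(n_steps)   # nearly white "atmospheric" forcing
    t = np.zeros(n_steps)
    for i in range(1, n_steps):
        t[i] = a * t[i - 1] + dt * forcing[i]
    return t

series = ocean_response(10_000)
# The integrated series has memory (lag-1 autocorrelation near a = 0.95),
# i.e., a red spectrum, even though the forcing itself is white.
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
```

This is the mechanism by which turbulent heat exchange and Ekman transport, driven by white-spectrum patterns like the NAO or PNA, can produce decadal-scale ocean variability without any slow forcing.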
  • 16
    Publication Date: 2019-01-01
    Description: The history of severe thunderstorm research and forecasting over the past century has been a remarkable story involving interactions between technological development of observational and modeling capabilities, research into physical processes, and the forecasting of phenomena with the goal of reducing loss of life and property. Perhaps more so than any other field of meteorology, the relationship between researchers and forecasters has been particularly close in the severe thunderstorm domain, with both groups depending on improved observational capabilities. The advances that have been made have depended on observing systems that did not exist 100 years ago, particularly radar and upper-air systems. They have allowed scientists to observe storm behavior and structure and the environmental setting in which storms occur. This has led to improved understanding of processes, which in turn has allowed forecasters to use those same observational systems to improve forecasts. Because of the relatively rare and small-scale nature of many severe thunderstorm events, severe thunderstorm researchers have developed mobile instrumentation capabilities that have allowed them to collect high-quality observations in the vicinity of storms. Since much of the world is subject to severe thunderstorm hazards, research has taken place around the world, with the local emphasis dependent on what threats are perceived in that area, subject to the availability of resources to study the threat. Frequently, the topics of interest depend upon a single event, or a small number of events, of a particular kind that aroused public or economic interests in that area. International cooperation has been an important contributor to collecting and disseminating knowledge. As the AMS turns 100, the range of research relating to severe thunderstorms is expanding.
The time scale of forecasting or projecting is increasing, with work going on to study forecasts on the seasonal to subseasonal time scales, as well as addressing how climate change may influence severe thunderstorms. With its roots in studying weather that impacts the public, severe thunderstorm research now includes significant work from the social science community, some as standalone research and some in active collaborative efforts with physical scientists. In addition, the traditional emphases of the field continue to grow. Improved radar and numerical modeling capabilities allow meteorologists to see and model details that were unobservable and not understood a half century ago. The long tradition of collecting observations in the field has led to improved quality and quantity of observations, as well as the capability to collect them in locations that were previously inaccessible. Much of that work has been driven by the gaps in understanding identified by theoretical and operational practice.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
  • 17
    Publication Date: 2019-01-01
    Description: Remarkable progress has occurred over the last 100 years in our understanding of atmospheric chemical composition, stratospheric and tropospheric chemistry, urban air pollution, acid rain, and the formation of airborne particles from gas-phase chemistry. Much of this progress was associated with the developing understanding of the formation and role of ozone and of the oxides of nitrogen, NO and NO2, in the stratosphere and troposphere. The chemistry of the stratosphere, emerging from the pioneering work of Chapman in 1931, was followed by the discovery of catalytic ozone cycles, ozone destruction by chlorofluorocarbons, and the polar ozone holes, work honored by the 1995 Nobel Prize in Chemistry awarded to Crutzen, Rowland, and Molina. Foundations for the modern understanding of tropospheric chemistry were laid in the 1950s and 1960s, stimulated by the eye-stinging smog in Los Angeles. The importance of the hydroxyl (OH) radical and its relationship to the oxides of nitrogen (NO and NO2) emerged. The chemical processes leading to acid rain were elucidated. The atmosphere contains an immense number of gas-phase organic compounds, a result of emissions from plants and animals, natural and anthropogenic combustion processes, emissions from oceans, and from the atmospheric oxidation of organics emitted into the atmosphere. Organic atmospheric particulate matter arises largely as gas-phase organic compounds undergo oxidation to yield low-volatility products that condense into the particle phase. A hundred years ago, quantitative theories of chemical reaction rates were nonexistent. Today, comprehensive computer codes are available for performing detailed calculations of chemical reaction rates and mechanisms for atmospheric reactions. Understanding the future role of atmospheric chemistry in climate change and, in turn, the impact of climate change on atmospheric chemistry, will be critical to developing effective policies to protect the planet.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography, Geosciences, Physics
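For reference, the stratospheric mechanism the abstract credits to Chapman's 1931 work is the four-reaction ozone cycle. The listing below writes out those standard textbook reactions as data; it is my annotation, not text quoted from the chapter.

```python
# Chapman cycle: odd-oxygen production and loss in the stratosphere.
# Standard textbook reactions; 'hv' denotes a UV photon, M a third body.
CHAPMAN_CYCLE = [
    "O2 + hv -> O + O",      # photolysis of molecular oxygen
    "O + O2 + M -> O3 + M",  # ozone formation
    "O3 + hv -> O2 + O",     # ozone photolysis (recycles odd oxygen)
    "O + O3 -> 2 O2",        # ozone loss
]
for reaction in CHAPMAN_CYCLE:
    print(reaction)
```

The catalytic cycles and CFC-driven destruction mentioned next in the abstract add loss pathways to this basic scheme rather than replacing it.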
  • 18
    Publication Date: 2019-01-01
    Description: The human population on Earth has increased by a factor of 4.6 in the last 100 years and has become more centered in urban environments. This expansion and migration pattern has resulted in stresses on the environment. Meteorological applications have helped to understand and mitigate those stresses. This chapter describes several applications that enable the population to interact with the environment in more sustainable ways. The first topic treated is urbanization itself and the types of stresses exerted by population growth and its attendant growth in urban landscapes—buildings and pavement—and how they modify airflow and create a local climate. We describe environmental impacts of these changes and implications for the future. The growing population uses increasing amounts of energy. Traditional sources of energy have taxed the environment, but the increase in renewable energy has used the atmosphere and hydrosphere as its fuel. Utilizing these variable renewable resources requires meteorological information to operate electric systems efficiently and economically while providing reliable power and minimizing environmental impacts. The growing human population also pollutes the environment. Thus, understanding and modeling the transport and dispersion of atmospheric contaminants are important steps toward regulating the pollution and mitigating impacts. This chapter describes how weather information can help to make surface transportation safer and more efficient. It also explains how these applications naturally require transdisciplinary collaboration to address the challenges caused by the expanding population.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography , Geosciences , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 19
    Publication Date: 2019-01-01
    Description: Applied meteorology is an important and rapidly growing field. This chapter concludes the three-chapter series of this monograph describing how meteorological information can be used to serve society’s needs while at the same time advancing our understanding of the basics of the science. This chapter continues along the lines of Part II of this series by discussing ways that meteorological and climate information can help to improve the output of the agriculture and food-security sector. It also discusses how agriculture alters climate and the long-term implications of those changes. It then pulls together several of the applications discussed by treating the food–energy–water nexus. The remaining topics of this chapter are those that are advancing rapidly, with growing opportunities for observation and needs for prediction. The study of space weather is advancing our understanding of how the barrage of particles from other planetary bodies in the solar system impacts Earth’s atmosphere. Our ability to predict wildland fires by coupling atmospheric and fire-behavior models is beginning to impact decision-support systems for firefighters. Last, we examine how artificial intelligence is changing the way we predict, emulate, and optimize our meteorological variables and its potential to amplify our capabilities. Many of these advances are directly due to the rapid increase in observational data and computer power. The applications reviewed in this series of chapters are not comprehensive, but they will whet the reader’s appetite for learning more about how meteorology can make a concrete impact on the world’s population by enhancing access to resources, preserving the environment, and feeding back into a better understanding of how the pieces of the environmental system interact.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography , Geosciences , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 20
    Publication Date: 2019-01-01
    Description: Some of the advances of the past century in our understanding of the general circulation of the atmosphere are described, starting with a brief summary of some of the key developments from the first half of the twentieth century, but with a primary focus on the period beginning with the midcentury breakthrough in baroclinic instability and quasigeostrophic dynamics. In addition to baroclinic instability, topics touched upon include the following: stationary wave theory, the role played by the two-layer model, scaling arguments for the eddy heat flux, the subtlety of large-scale eddy momentum fluxes, the Eliassen–Palm flux and the transformed Eulerian mean formulation, the structure of storm tracks, and the controls on the Hadley cell.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography , Geosciences , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 21
    Publication Date: 2019-01-01
    Description: Over the last 100 years, boundary layer meteorology grew from the subject of mostly near-surface observations to a field encompassing diverse atmospheric boundary layers (ABLs) around the world. From the start, researchers drew from an ever-expanding set of disciplines—thermodynamics, soil and plant studies, fluid dynamics and turbulence, cloud microphysics, and aerosol studies. Research expanded upward to include the entire ABL in response to the need to know how particles and trace gases dispersed, and later how to represent the ABL in numerical models of weather and climate (starting in the 1970s–80s). It took advantage of the opportunities afforded by the development of large-eddy simulations (1970s), direct numerical simulations (1990s), and a host of instruments to sample the boundary layer in situ and remotely from the surface, the air, and space. Near-surface flux-profile relationships were developed rapidly between the 1940s and 1970s, when rapid progress shifted to the fair-weather convective boundary layer (CBL), though tropical CBL studies date back to the 1940s. In the 1980s, ABL research began to include the interaction of the ABL with the surface and clouds; the first ABL parameterization schemes emerged; and land surface and ocean surface model development blossomed. Research in subsequent decades has focused on more complex ABLs, often identified by shortcomings or uncertainties in weather and climate models, including the stable boundary layer, the Arctic boundary layer, cloudy boundary layers, and ABLs over heterogeneous surfaces (including cities). The paper closes with a brief summary, some lessons learned, and a look to the future.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography , Geosciences , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 22
    Publication Date: 2019-01-01
    Description: The technologies of remote sensing of the ocean were first developed in the 1970s, while the ideas of observing the ocean from space were conceived in the late 1960s. The first global view from space revealed the expanse and complexity of the state of the ocean that has perplexed and inspired oceanographers ever since. This paper presents a glimpse of the vast progress made in ocean remote sensing in the past 50 years, which has had a profound impact on the ways we study the ocean in relation to weather and climate. The new view from space, in conjunction with the deployment of an unprecedented amount of in situ observations of the ocean, has led to a revolution in physical oceanography. The highlights of this achievement include the description and understanding of the global ocean circulation and of the air–sea fluxes driving the coupled ocean–atmosphere system, most prominently illustrated in the tropical oceans. The polar oceans are most sensitive to climate change, with significant consequences, but owing to their remoteness they were not accessible until the space age. Fundamental discoveries have been made on the evolution of the state of sea ice as well as the circulation of the ice-covered ocean. Many surprises emerged from the extraordinary accuracy and expanse of the space observations. Notable examples include the determination of the global mean sea level rise as well as the role of the deep ocean in tidal mixing and dissipation.
    Print ISSN: 0065-9401
    Electronic ISSN: 1943-3646
    Topics: Geography , Geosciences , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 23
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 24
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 25
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 26
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 27
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 28
    Publication Date: 2019-06-01
    Print ISSN: 0003-0007
    Electronic ISSN: 1520-0477
    Topics: Geography , Physics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 29
    Publication Date: 2019
    Description: We investigate a novel radial basis function-based meshfree method for forward modelling of magnetotelluric data. The method, which can be termed radial basis function-based finite difference (RBF-FD), uses only a cloud of unconnected points to obtain the numerical solution throughout the computational domain. Unlike mesh-based numerical methods (for example, grid-based finite difference, finite volume and finite element), the meshfree method has the unique feature that the discretization of the conductivity model can be decoupled from the discretization used for numerical computation, thus avoiding traditional expensive mesh generation and allowing complicated model geometries to be easily represented. To accelerate the computation, unstructured point discretization with local refinements is employed. Maxwell’s equations in the frequency domain are reformulated using A-ψ potentials in conjunction with the Coulomb gauge condition, and are solved numerically with a direct solver to obtain magnetotelluric responses. A major obstacle in applying common meshfree methods to modelling geophysical electromagnetic data is that they are incapable of reproducing discontinuous fields, such as the discontinuous electric field over conductivity jumps, causing spurious solutions. The occurrence of spurious, or non-physical, solutions when applying standard meshfree methods is removed here by proposing a novel mixed scheme of the RBF-FD and a Galerkin-type weak-form treatment in discretizing the equations. The RBF-FD is applied to the points in uniform conductivity regions, whereas the weak-form treatment is introduced at points located on the interfaces separating different homogeneous conductivity regions. The effectiveness of the proposed meshfree method is validated with two numerical examples of modelling the magnetotelluric responses over three-dimensional conductivity models.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
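    The core of any RBF-FD scheme like the one in the record above is the computation of differentiation weights on a scattered stencil. The sketch below is a 1-D toy, not the authors' 3-D magnetotelluric code; it builds second-derivative weights from a cubic polyharmonic spline augmented with a polynomial basis, so constants through quadratics are differentiated exactly. All names and stencil values are illustrative.

    ```python
    import numpy as np

    def rbf_fd_weights(x, x0):
        """RBF-FD weights for the 1-D second derivative at x0 (a sketch).

        Uses a cubic polyharmonic spline phi(r) = r**3 augmented with the
        polynomial basis {1, x, x^2}, so the returned weights apply d2/dx2
        exactly to any quadratic. x holds the scattered stencil nodes.
        """
        x = np.asarray(x, float)
        n = len(x)
        r = np.abs(x[:, None] - x[None, :])
        A = r**3                                  # RBF interpolation matrix
        P = np.vander(x, 3, increasing=True)      # columns [1, x, x^2]
        # d2/dx2 of |x - xi|^3 evaluated at x0 is 6*|x0 - xi|
        L_phi = 6.0 * np.abs(x0 - x)
        L_poly = np.array([0.0, 0.0, 2.0])        # d2/dx2 of {1, x, x^2} at x0
        # Saddle-point system enforcing polynomial reproduction
        M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
        rhs = np.concatenate([L_phi, L_poly])
        return np.linalg.solve(M, rhs)[:n]

    nodes = np.array([-1.0, -0.45, 0.0, 0.3, 0.9])   # scattered stencil
    w = rbf_fd_weights(nodes, 0.0)
    approx = w @ np.exp(nodes)                       # approximates exp''(0) = 1
    ```

    In a full solver, one such local stencil is assembled per point of the cloud, giving a sparse system analogous to a finite-difference discretization but without any mesh.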
  • 30
    Publication Date: 2019
    Description: Receiver functions are sensitive to sharp seismic velocity variations with depth and are commonly used to constrain crustal thickness. The H-κ stacking method of Zhu and Kanamori (2000) is often employed to constrain both the crustal thickness (H) and VP/VS ratio (κ) beneath a seismic station using P-to-s converted waves (Ps). However, traditional H-κ stacking requires an assumption of average crustal velocity (usually VP). Additionally, large-amplitude reverberations from low-velocity shallow layers, such as sedimentary basins, can overprint sought-after crustal signals, rendering traditional H-κ stacking uninterpretable. We overcome these difficulties in two ways. When S-wave reverberations from sediment are present, they are removed by applying a resonance-removal filter, allowing crustal signals to be clarified and interpreted. We also combine complementary Ps receiver functions, Sp receiver functions, and the post-critical P-wave reflection from the Moho (SPmp) to remove the dependence on an assumed average crustal VP. By correcting for sediment and combining multiple data sets, the crustal thickness, average crustal P-wave velocity, and crustal VP/VS ratio are constrained in geologic regions where traditional H-κ stacking fails, without making an initial P-wave velocity assumption or suffering from contamination by sedimentary reverberations.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
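    The H-κ stacking referenced in the record above can be sketched in a few lines: for each trial thickness H and VP/VS ratio κ, receiver-function amplitudes are stacked at the predicted delay times of the Ps conversion and its PpPs and PpSs+PsPs multiples. The toy below is a simplified single-trace version, not the authors' code; the weights, grids, and synthetic trace are illustrative.

    ```python
    import numpy as np

    def hk_stack(rf, t, p, vp=6.5, H_grid=None, k_grid=None, w=(0.7, 0.2, 0.1)):
        """Zhu & Kanamori (2000)-style H-kappa grid search (simplified sketch).

        rf : receiver-function amplitudes sampled at times t (s)
        p  : ray parameter (s/km); vp is the assumed average crustal P velocity.
        Stacks rf at the predicted Ps, PpPs and PpSs+PsPs delay times for each
        trial crustal thickness H (km) and Vp/Vs ratio kappa; returns the argmax.
        """
        H_grid = np.linspace(20, 60, 81) if H_grid is None else H_grid
        k_grid = np.linspace(1.6, 2.0, 41) if k_grid is None else k_grid
        best, best_Hk = -np.inf, (None, None)
        for H in H_grid:
            for k in k_grid:
                qs = np.sqrt((k / vp) ** 2 - p**2)   # S vertical slowness
                qp = np.sqrt(1.0 / vp**2 - p**2)     # P vertical slowness
                amp = (w[0] * np.interp(H * (qs - qp), t, rf)     # Ps
                       + w[1] * np.interp(H * (qs + qp), t, rf)   # PpPs
                       - w[2] * np.interp(2 * H * qs, t, rf))     # PpSs+PsPs
                if amp > best:
                    best, best_Hk = amp, (H, k)
        return best_Hk

    # Synthetic trace with pulses at the times predicted for H=35 km, kappa=1.75
    p, vp, H_true, k_true = 0.06, 6.5, 35.0, 1.75
    t = np.linspace(0, 30, 3001)
    qs = np.sqrt((k_true / vp) ** 2 - p**2)
    qp = np.sqrt(1.0 / vp**2 - p**2)
    rf = np.zeros_like(t)
    for dt, a in [(H_true * (qs - qp), 1.0),
                  (H_true * (qs + qp), 0.5),
                  (2 * H_true * qs, -0.3)]:
        rf += a * np.exp(-((t - dt) / 0.3) ** 2)
    H_est, k_est = hk_stack(rf, t, p, vp=vp)
    ```

    The negative weight on the third phase reflects its reversed polarity; the sediment reverberations the abstract warns about would add spurious pulses to `rf` and bias exactly this stack.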
  • 31
    Publication Date: 2019
    Description: In the original version of this article the author, Adrian Flores Orozco, was incorrectly listed. This has now been corrected and the publisher apologises for the error.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 32
    Publication Date: 2019
    Description: The evolution of the Philippine Sea Plate (PSP) since the Jurassic is one of the key issues in the dynamics of the lithosphere and mantle. Related studies have benefited mostly from seismic tomography, which provides velocity structures in the upper mantle. However, the upper-mantle structure there is not as well resolved as in continental areas, owing to the lack of seismic data in the Philippine Sea. We employ a 3-D gravity inversion constrained by an initial model based on S-wave tomography (SL2013sv; Schaeffer & Lebedev 2013) to image the density structure of the upper mantle of the PSP and the adjacent region. The resulting model shows a three-layer pattern of vertical high-low-high density variation in the upper mantle under the PSP. The thin high-density layer evidences strong oceanic lithosphere in the West Philippine Sea. The relatively low-density mantle located below the PSP possibly originates from the asthenosphere. The PSP differs from the Pacific and Indian-Australian plates over the whole depth range, while its structure is similar to the eastern Eurasian and Sunda plates. In the depth range 200–300 km, the relatively high-density zone beneath the PSP extends to the Sunda Plate and to the eastern Eurasian Plate. We further estimated the conversion factor between our density model and the velocity model (SL2013sv; Schaeffer & Lebedev 2013) in order to locate compositional effects in the upper mantle. The negative conversion factor indicates that compositional changes primarily affect the density anomalies beneath the PSP. We therefore describe the layered density structure as a ‘sandwich’ pattern, which is unique and different from adjacent regions.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 33
    Publication Date: 2019
    Description: Low-velocity layers within the crust can indicate the presence of melt and lithologic differences with implications for crustal composition and formation. Seismic wave conversions and reverberations across the base of the crust or intracrustal discontinuities, analysed using the receiver function method, can be used to constrain crustal layering. This is commonly accomplished by inverting receiver functions jointly with surface wave dispersion. Recently, the proliferation of model-space search approaches has made this technique a workhorse of crustal seismology. We show that reverberations from shallow layers such as sedimentary basins produce spurious low-velocity zones when inverted for crustal structure with surface wave data of insufficiently high frequency. Therefore, reports of such layers in the literature based on inversions using receiver function data should be re-evaluated. We demonstrate that a simple resonance-removal filter can suppress these effects and yield reliable estimates of crustal structure, and advocate for its use in receiver-function-based inversions.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
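    The resonance-removal filter advocated in the record above can be illustrated simply: estimate the reverberation strength r0 and two-way traveltime from the trace autocorrelation, then apply the filter (1 + r0·z^(-lag)) in the frequency domain to collapse the reverberation train. This is a minimal sketch under idealized assumptions (a single sharp reverberating layer, sign-flipping multiples), not the authors' implementation; the synthetic below is purely illustrative.

    ```python
    import numpy as np

    def remove_resonance(rf, dt):
        """Suppress a sediment reverberation train in a trace (a sketch).

        Estimates the reverberation strength r0 and its sample lag from the
        strongest negative side lobe of the autocorrelation, then applies the
        inverse filter (1 + r0 * z^-lag) in the frequency domain.
        """
        ac = np.correlate(rf, rf, "full")[len(rf) - 1:]
        ac /= ac[0]
        lag = 1 + int(np.argmin(ac[1:]))   # lag of strongest negative peak
        r0 = -ac[lag]                      # reverberation strength (0 < r0 < 1)
        n = len(rf)
        freq = np.fft.rfftfreq(n, d=dt)
        filt = 1.0 + r0 * np.exp(-2j * np.pi * freq * lag * dt)
        return np.fft.irfft(np.fft.rfft(rf) * filt, n)

    # Synthetic: a pulse followed by a decaying, sign-flipping reverberation train
    dt, n = 0.05, 1024
    t = np.arange(n) * dt
    pulse = np.exp(-((t - 2.0) / 0.1) ** 2)
    r_true, twt = 0.6, 1.5                 # strength and two-way time (s)
    rf = pulse.copy()
    for k in range(1, 6):
        shift = int(round(k * twt / dt))
        rf[shift:] += (-r_true) ** k * pulse[: n - shift]
    cleaned = remove_resonance(rf, dt)
    ```

    After filtering, the multiples cancel to within the truncation of the synthetic train, leaving the direct pulse available for interpretation.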
  • 34
    Publication Date: 2019
    Description: We present a numerical method for simulating both single-event dynamic ruptures and earthquake sequences with full inertial effects in antiplane shear with rate-and-state fault friction. We use the second-order form of the wave equation, expressed in terms of displacements, discretized with high-order-accurate finite difference operators in space. Advantages of this method over other methods include reduced computational memory usage and reduced spurious high-frequency oscillations. Our method handles complex geometries, such as non-planar fault interfaces and free surface topography. Boundary conditions are imposed weakly using penalties. We prove time stability by constructing discrete energy estimates. We present numerical experiments demonstrating the stability and convergence of the method, and showcasing applications of the method, including the transition in rupture style from crack-like ruptures to slip pulses for strongly rate-weakening friction and the simulation of earthquake sequences in a viscoelastic solid with a fully dynamic coseismic phase.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 35
    Publication Date: 2019
    Description: We derive a theoretical relationship between the cross correlation of ambient Rayleigh waves (seismic ambient noise) and the attenuation parameter α associated with Rayleigh-wave propagation. In particular, we derive a mathematical expression for the multiplicative factor relating the normalized cross correlation to the Rayleigh-wave Green’s function. Based on this expression, we formulate an inverse problem to determine α from cross correlations of the recorded ambient signal. We conduct a preliminary application of our algorithm to a relatively small instrument array, conveniently deployed on an island. In our setup, the mentioned multiplicative factor has values of about 2.5 to 3, which, if neglected, could result in a significant underestimate of α. We find that our inferred values of α are reasonable in comparison with independently obtained estimates found in the literature. Allowing α to vary with respect to frequency results in a reduction of misfit between observed and predicted cross correlations.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 36
    Publication Date: 2019
    Description: Global phases, viz. seismic phases that travel through the Earth’s core, can be used to locally image the crust by means of seismic interferometry. This method is known as Global Phase Seismic Interferometry (GloPSI). Traditionally, GloPSI retrieves low-frequency information (up to 1 Hz). Recent studies, however, suggest that there is high-frequency signal present in the coda of strong, distant earthquakes. This research quantifies the potential of these high-frequency signals by analysing recordings of a multitude of high-magnitude earthquakes (≥6.4 Mw) and their coda on a selection of permanent USArray stations. Nearly half of the P, PKP and PKIKP phases are recorded with a signal-to-noise ratio of at least 5 dB at 3 Hz. To assess the viability of using the high-frequency signal, the second half of the paper highlights two case studies. First, a known sedimentary structure is imaged in Malargüe, Argentina. Second, the method is used to reveal the structure of the Midcontinent Rift below the SPREE array in Minnesota, USA. Both studies demonstrate that structural information on the shallow crust (≤5 km) below the arrays can be retrieved. In particular, the interpreted thickness of the sedimentary layer below the Malargüe array is in agreement with earlier studies in the same area. Being able to use global phases and direct P phases with large epicentral distances (>80°) to recover the Earth’s sedimentary structure suggests that GloPSI can be applied in an industrial context.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 37
    Publication Date: 2019
    Description: The localization of passive seismic sources, in the form of microseismic tremors as well as large-scale earthquakes, is a key issue in seismology. While most previous studies assume fairly good knowledge of the underlying velocity model, we propose an automatic spatial localization and joint velocity model building scheme that is independent of detailed a priori information. The first step is a coherence analysis, estimating so-called wavefront attributes to locally describe the wavefield in terms of slopes and curvatures. In a similar fashion, we also obtain an initial guess of the source excitation times of the recorded events. The wavefront attributes constitute the input for wavefront tomography, which represents the next step of the workflow and allows for a refinement of the previously evaluated source excitation times while simultaneously approximating the velocity distribution. In a last step, we use the final estimate of the velocity distribution and compute the respective image function by reverse time modelling to obtain the source locations. This paper introduces the theoretical concept of our proposed approach for the general 3-D case. We analyse the feasibility of our strategy and the influence of different acquisition settings by means of a synthetic 2-D data example. In a final 3-D field-data example we use the workflow to localize a deep earthquake without relying on a given velocity model. The approach can deal with high levels of noise, low signal amplitudes, and sparse geophone sampling. The workflow generally delivers good approximations of the long-wavelength velocity variations along with accurate source locations.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 38
    Publication Date: 2019
    Description: The 2017 July 20, Mw 6.6 Bodrum–Kos earthquake occurred in the Gulf of Gökova in the SE Aegean, a region characterized by N–S extension in the backarc of the easternmost Hellenic Trench. The dip direction of the fault that ruptured during the earthquake has been a matter of controversy, with both north- and south-dipping fault planes used to model the coseismic slip in previous studies. Here, we use seismic (seismicity, main shock modelling, aftershock relocations and aftershock mechanisms from regional body and surface waves), geodetic (GPS, InSAR) and structural observations to estimate the location and dip direction of the fault that ruptured during the 2017 earthquake, and the relationship of this event to regional tectonics. We consider both dip directions and systematically search for the best-fitting locations of the north- and south-dipping fault planes. Comparing the best-fitting planes for both dip directions in terms of their misfit to the geodetic data, proximity to the hypocenter location and Coulomb stress changes at the aftershock locations, we conclude that the 2017 earthquake ruptured a north-dipping fault. We find that the earthquake occurred on a 20–25-km-long, ∼E–W-striking, 40° north-dipping, pure normal fault with slip primarily confined between 3 and 15 km depth, and the largest slip exceeding 2 m between depths of 4 and 10 km. The coseismic fault, not mapped previously, projects to the surface within the western Gulf, and serves partly to widen the Gulf and partly to separate Kos Island from the Bodrum Peninsula of SW Anatolia. The coseismic fault may be an extension of a mapped, north-dipping normal fault along the south side of the Gulf of Gökova. While all of the larger aftershocks are consistent with N–S extension, their spatially dispersed pattern attests to the high degree of crustal fracturing within the basin, due to rapid trenchward extension and anticlockwise rotation within the southeastern Aegean.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 39
    Publication Date: 2019
    Description: The harmonic electromagnetic noise produced by anthropic electrical structures is a critical component of the global noise affecting geophysical signals and increasing data uncertainty. It is composed of a series of harmonic signals whose frequencies are integer multiples of the fundamental frequency specific to the electrical noise source. To date, most model-based noise removal strategies assume that the fundamental frequency constraining the harmonic noise is single and constant over the duration of the geophysical record. In this paper, we demonstrate that classical harmonic processing methods lose efficacy when these assumptions are not valid. We present several surface nuclear magnetic resonance field data sets, which testify to the increasing probability of recording harmonic noise with such multiple or unstable frequency content. For each case (multiple frequencies or unstable frequency) we propose a new processing strategy, namely the 2-D grid-search and the segmentation approach, respectively, which efficiently removes the harmonic noise in these difficult conditions. In the process, we also apply a fast frequency estimator called the Nyman, Gaiser and Saucier estimation method, which shows performance equivalent to classical estimators while reducing the computing time by a factor of 2.5.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
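    The model-based harmonic removal discussed in the record above can be sketched as follows: for a candidate fundamental frequency, the harmonic noise is fit by linear least squares with sine/cosine pairs at its first few multiples, and the fitted model is subtracted. The grid search over candidate fundamentals below is a simplified stand-in for the dedicated frequency estimators the paper discusses (e.g. the Nyman, Gaiser and Saucier method); all parameters and the synthetic record are illustrative.

    ```python
    import numpy as np

    def remove_harmonics(sig, t, f_candidates, n_harm=5):
        """Model-based harmonic noise cancellation (a simplified sketch).

        For each candidate fundamental f0, fits sine/cosine pairs at the first
        n_harm harmonics by linear least squares, keeps the f0 that explains
        the most energy, and subtracts the fitted harmonic model.
        """
        best_resid = np.inf
        for f0 in f_candidates:
            cols = []
            for k in range(1, n_harm + 1):
                cols += [np.cos(2 * np.pi * k * f0 * t),
                         np.sin(2 * np.pi * k * f0 * t)]
            G = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(G, sig, rcond=None)
            model = G @ coef
            resid = np.sum((sig - model) ** 2)
            if resid < best_resid:
                best_resid, best_model, best_f0 = resid, model, f0
        return sig - best_model, best_f0

    # Synthetic record: 50 Hz power-line harmonics on top of a slow decay
    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    clean = 0.5 * np.exp(-t / 0.3)
    noisy = (clean + 1.0 * np.sin(2 * np.pi * 50 * t + 0.4)
             + 0.3 * np.sin(2 * np.pi * 150 * t))
    denoised, f0 = remove_harmonics(noisy, t, np.arange(49.5, 50.51, 0.01))
    ```

    The paper's segmentation approach would apply this kind of fit piecewise in time when the fundamental drifts, and its 2-D grid-search generalizes the single-frequency search to two simultaneous fundamentals.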
  • 40
    Publication Date: 2019
    Description: Combinatorial methods are used to determine the spatial distribution of earthquake magnitudes on a fault whose slip rate varies along strike. Input to the problem is a finite sample of earthquake magnitudes that span 5 kyr, drawn from a truncated Pareto distribution. The primary constraints to the problem are maximum and minimum values around the target slip-rate function indicating where feasible solutions can occur. Two methods are used to determine the spatial distribution of earthquakes: integer programming and the greedy-sequential algorithm. For the integer-programming method, the binary decision vector includes all possible locations along the fault where each earthquake can occur. Once a set of solutions that satisfy the constraints is found, the cumulative slip misfit on the fault is globally minimized relative to the target slip-rate function. The greedy algorithm sequentially places earthquakes to locally optimize slip accumulation. As a case study, we calculate how earthquakes are distributed along the megathrust of the Nankai subduction zone, in which the slip rate varies significantly along strike. For both methods, the spatial distribution of magnitudes depends on slip rate, except for the largest-magnitude earthquakes that span multiple sections of the fault. The greedy-sequential algorithm, previously applied to this fault (Parsons et al., 2012), tends to produce smoother spatial distributions and fewer lower-magnitude earthquakes in the low-slip-rate section of the fault compared to the integer-programming method. Differences in results from the two methods relate to how much emphasis is placed on minimizing the misfit to the target slip rate (integer programming) compared to finding a solution within the slip-rate constraints (greedy sequential). Specifics of the spatial distribution of magnitudes also depend on the shape of the target slip-rate function: that is, stepped at the section boundaries versus a smooth function. This study isolates the effects of slip-rate variation along a single fault in determining the spatial distribution of earthquake magnitudes, helping to better interpret results from more complex, interconnected fault systems.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
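    The greedy-sequential placement contrasted with integer programming in the record above can be caricatured in a few lines. The sketch below is a strong simplification of the paper's method: each event ruptures a single fault section and carries a fixed slip amount (no magnitude-length scaling), and events are placed one at a time on the section with the largest remaining slip deficit. The target budget and event list are purely illustrative.

    ```python
    import numpy as np

    def greedy_place(slip_per_event, target_slip):
        """Greedy-sequential placement of earthquake slip on fault sections.

        Places each event (largest first) on the section with the largest
        remaining slip deficit, locally optimizing the fit to the target
        slip budget; a toy analogue of the greedy algorithm in the abstract.
        """
        accumulated = np.zeros_like(target_slip, dtype=float)
        placements = []
        for slip in sorted(slip_per_event, reverse=True):
            deficit = target_slip - accumulated
            i = int(np.argmax(deficit))
            accumulated[i] += slip
            placements.append(i)
        return np.array(placements), accumulated

    # Target slip budget varies along strike (low slip-rate section in the middle)
    target = np.array([10.0, 10.0, 2.0, 10.0, 10.0])
    events = [3.0] * 10 + [1.0] * 12      # slip contributed by each earthquake
    where, accum = greedy_place(events, target)
    ```

    Even in this toy, the behaviour noted in the abstract appears: large events avoid the low slip-rate section, which accumulates its budget from a few small events. The integer-programming alternative would instead choose all placements jointly to minimize the global misfit.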
  • 41
    Publication Date: 2019
    Description: The southcentral Hikurangi subduction margin (North Island, New Zealand) has a wide, low-taper accretionary wedge that is frontally accreting a >3-km-thick layer of sediments, with deformation currently focused near the toe of the wedge. We use a geological model based on a depth-converted seismic section, together with physically realistic parameters for fluid pressure, and sediment and décollement friction based on laboratory experiments, to investigate the present-day force balance in the wedge. Numerical models are used to establish the range of physical parameters compatible with the present-day wedge geometry and mechanics. Our analysis shows that the accretionary wedge stability and taper angle require high to moderate fluid pressure on the plate interface, and/or weak frictional strength along the décollement. The décollement beneath the outer wedge requires a relatively weaker effective strength than that beneath the inner (consolidated) wedge. Increasing density and cohesion with depth make it easier to attain a stable taper within the inner wedge, while anything that weakens the wedge—such as high fluid pressures and weak faults—makes it harder. Our results allow a near-hydrostatic wedge fluid pressure, sublithostatic fluid overpressure at the subduction interface, and friction coefficients compatible with measurements from laboratory experiments on weak clay minerals.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
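The trade-off between pore-fluid pressure and frictional strength described in this abstract can be illustrated with the standard Hubbert-Rubey effective-stress relation. This is a generic sketch, not code from the paper; the friction coefficient and stress values are illustrative:

```python
def effective_shear_strength(mu, sigma_n, lam):
    """Shear strength of a fault: tau = mu * (1 - lam) * sigma_n, where
    lam is the pore-fluid pressure ratio (fluid pressure / normal stress)."""
    return mu * (1.0 - lam) * sigma_n

# Raising the fluid-pressure ratio from near-hydrostatic (lam ~ 0.4) to
# strongly overpressured (lam ~ 0.9) weakens the decollement markedly.
tau_hydrostatic = effective_shear_strength(0.45, 100e6, 0.4)    # Pa
tau_overpressured = effective_shear_strength(0.45, 100e6, 0.9)  # Pa
```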
  • 42
    Publication Date: 2019
    Description: We present a new methodology to compute the gravitational fields generated by tesseroids (spherical prisms) whose density varies with depth according to an arbitrary continuous function. It approximates the gravitational fields through Gauss–Legendre quadrature along with two discretization algorithms that automatically control its accuracy by adaptively dividing the tesseroid into smaller ones. The first is a preexisting 2-D adaptive discretization algorithm that reduces the errors due to the distance between the tesseroid and the computation point. The second is a new density-based discretization algorithm that decreases the errors introduced by the variation of the density function with depth. The number of divisions made by each algorithm is indirectly controlled by two parameters: the distance-size ratio and the delta ratio. We have obtained analytical solutions for a spherical shell with radially variable density and compared them to the results of the numerical model for linear, exponential, and sinusoidal density functions. The heavily oscillating density functions are intended only to test the algorithm to its limits, not to emulate a real-world case. These comparisons allowed us to obtain optimal values for the distance-size and delta ratios that yield an accuracy of 0.1 per cent relative to the analytical solutions. The resulting optimal values of the distance-size ratio for the gravitational potential and its gradient are 1 and 2.5, respectively. The density-based discretization algorithm produces no discretizations in the linear density case, but a delta ratio of 0.1 is needed for the exponential and most sinusoidal density functions. These values can be extrapolated to cover most common use cases, which are simpler than oscillating density profiles. However, the distance-size and delta ratios can be configured by the user to increase the accuracy of the results at the expense of computational speed. Finally, we apply this new methodology to model the Neuquén Basin, a foreland basin in Argentina with a maximum depth of over 5000 m, using an exponential density function.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
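The distance-size ratio criterion that governs the adaptive discretization can be sketched in one dimension: a cell is bisected until its distance to the computation point exceeds the ratio times its size. This is a hypothetical toy version (function names and values are not from the paper):

```python
def adaptive_cells(center, size, obs_point, dsr, min_size=1e-3):
    """Recursively bisect a 1-D cell until the observation distance is at
    least dsr times the cell size (the 'distance-size ratio' criterion)."""
    distance = abs(obs_point - center)
    if distance >= dsr * size or size <= min_size:
        return [(center, size)]
    half = size / 2.0
    return (adaptive_cells(center - half / 2.0, half, obs_point, dsr, min_size)
            + adaptive_cells(center + half / 2.0, half, obs_point, dsr, min_size))

# Cells refine near the observation point and stay coarse far from it.
cells = adaptive_cells(center=0.0, size=1.0, obs_point=0.6, dsr=2.5)
```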
  • 43
    Publication Date: 2019
    Description: The dynamics of dyke emplacement are typically modeled by assuming an elastic rheology for the host rock. However, the resulting stress field predicts significant shear failure in the region surrounding the dyke tip. Here, we model the dyking process in an elastic-perfectly plastic host rock in order to simulate distributed shear fracturing and subsequent frictional slip on the fracture surfaces. The fluid mechanical aspects of the magma are neglected as we are interested only in the fracture mechanics of the process. Magma overpressure in dykes is typically of the same order of magnitude as the yield stress of the host rock in shear, especially when the pressure effect of volatiles exsolving from the magma is taken into account. Under these conditions, the plastic deformation zone has spatial dimensions that approach the length of the dyke itself, and concepts based on linear elastic fracture mechanics (LEFM) no longer apply. As incremental plasticity is path dependent, we describe two geologically meaningful endmember cases, namely dyke propagation at constant driving pressure, and gradual inflation of a pre-existing crack. For both models, we find that plastic deformation surrounding the fracture tip enhances dyke opening, and thus increases the energy input into the system due to pressure work integrated over the fracture wall. At the same time, energy is dissipated by plastic deformation. Dissipation in the propagation model is greater by about an order of magnitude than it is in the inflation model because the propagating dyke tip leaves behind it a broad halo of deformation due to plastic bending and unbending in the relict process zone. The net effect is that plastic deformation impedes dyke growth in the propagation model, while it enhances dyke growth in the inflation model. The results show that, when the plastic failure zone is large, a single parameter such as fracture toughness is unable to capture the physics that underpin the resistance of a fracture or dyke against propagation. In these cases, plastic failure has to be modeled explicitly for the given conditions. We provide analytical approximations for the propagation forces and the maximum dyke aperture for the two endmember cases, that is, the propagating dyke and the dyke formed by inflation of a crack. Furthermore, we show that the effect of plasticity on dyke energetics, together with an overestimate of magma pressure when interpreting dyke aspect ratios using elastic host rock models, offers a possible explanation for the long-standing paradox that laboratory measurements of fracture toughness of rocks consistently indicate values about two orders of magnitude lower than those derived from dyke observations.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
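The toughness paradox mentioned at the end of the abstract can be made concrete with the textbook LEFM expression for a uniformly pressurized 2-D crack, K_I = dP * sqrt(pi * a). A minimal sketch with illustrative numbers:

```python
import math

def stress_intensity(overpressure, half_length):
    """Mode-I stress intensity of a uniformly pressurized 2-D crack (LEFM):
    K_I = dP * sqrt(pi * a), with a the crack half-length."""
    return overpressure * math.sqrt(math.pi * half_length)

# A 1-km dyke (half-length 500 m) with only 1 MPa overpressure gives
# K_I ~ 40 MPa m^0.5, about two orders of magnitude above laboratory
# fracture toughness values of order 1 MPa m^0.5.
K = stress_intensity(1e6, 500.0)
```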
  • 44
    Publication Date: 2019
    Description: Building geomechanical models for induced seismicity in complex reservoirs poses a major challenge, in particular if many faults need to be included. We developed a novel way of calculating induced stress changes and the associated seismic moment response for structurally complex reservoirs with tens to hundreds of faults. Our specific target was to improve the predictive capability of stress evolution along multiple faults, and to use the calculations to enhance physics-based understanding of reservoir seismicity. Our methodology deploys a mesh-free numerical and analytical approach for both the stress calculation and the seismic moment calculation. We introduce a high-performance computational method for high-resolution induced Coulomb stress changes along faults, based on a Green's function for the stress response to a nucleus of strain. One key ingredient is the deployment of an octree representation and calculation scheme for the nuclei of strain, based on the topology and spatial variability of the mesh of the reservoir flow model. Once the induced stress changes are evaluated along multiple faults, we calculate the potential seismic moment release in a fault system, assuming an initial stress field. The capability of the approach, dubbed MACRIS (Mechanical Analysis of Complex Reservoirs for Induced Seismicity), is verified through comparisons with finite element models. Computational performance and suitability for probabilistic assessment of seismic hazards are demonstrated through the use of the complex, heavily faulted Gullfaks field.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
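The Coulomb stress changes evaluated along faults combine shear and normal stress perturbations. A minimal sketch of the standard Coulomb failure stress definition (illustrative values, not MACRIS code):

```python
def coulomb_stress_change(d_shear, d_normal, mu):
    """Coulomb failure stress change: dCFS = d_tau + mu * d_sigma_n,
    with normal stress positive in extension (unclamping promotes failure)."""
    return d_shear + mu * d_normal

# 0.1 MPa of added shear stress with 0.05 MPa of clamping still moves
# the fault toward failure for a friction coefficient of 0.6.
dcfs = coulomb_stress_change(0.1e6, -0.05e6, 0.6)
```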
  • 45
    Publication Date: 2019
    Description: We test the feasibility of GPS-based rapid centroid moment tensor (GPS CMT) methods for Taiwan, one of the most earthquake-prone areas in the world. In recent years, Taiwan has become a leading developer of seismometer-based earthquake early warning systems, which have successfully been applied to several large events. The rapid determination of earthquake magnitude and focal mechanism, important for a number of rapid response applications including tsunami warning, is still challenging because of the limitations of near-field inertial recordings. This instrumental issue can be solved by an entirely different observation system: a GPS network. Taiwan is well placed to take advantage of GPS because it has developed a very dense network over the last decade. Thus, in this research, we explore the suitability of GPS CMT inversion for Taiwan. We retrospectively investigate six moderate to large (Mw 6.0-7.0) earthquakes and propose a resolution test for our model. We find that the minimum resolvable earthquake magnitude of this system is ~Mw 5.5 (at 5 km depth). Our tests also suggest that finite-fault complexity, often challenging for near-field methodologies, can be ignored under such good station coverage, so the method can provide a fast and robust solution for large earthquakes directly from the near field. Our findings help to understand and quantify how the proposed methodology could be implemented in real time and what its contributions could be to the overall earthquake monitoring system.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
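A centroid moment tensor inversion from static GPS offsets is, at a fixed source location, a linear least-squares problem d = Gm. The sketch below uses a random matrix in place of real Green's functions purely to show the algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in Green's functions: 10 three-component GPS stations (30 data)
# mapping 6 independent moment-tensor components to displacements.
G = rng.standard_normal((30, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.1, 0.0])  # hypothetical source
d = G @ m_true                                       # noise-free synthetic offsets

# Least-squares recovery of the moment tensor.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```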
  • 46
    Publication Date: 2019
    Description: Time-domain processing of seismic reflection data has always been an important engine that is routinely utilized to produce seismic images and to expeditiously construct subsurface models. The conventional procedure involves analysing parameters related to the derivatives of reflection traveltime with respect to offset, including normal moveout (NMO) velocities (second-order derivatives) and quartic coefficients (fourth-order derivatives). In this study, we propose to go beyond the typical assumption of a 1-D laterally homogeneous medium when relating those 'processing' parameters to the subsurface medium parameters, and to take into account the additional influences from lateral heterogeneity, including curved interfaces and smoothly variable velocities. We fill in the theoretical gap from previous studies and develop a general framework for such a connection in layered anisotropic media. We show that, in general, the influences of lateral heterogeneity accumulate from all layers via a recursive relationship according to Fermat's principle and can be approximately quantified in terms of the lateral derivatives of the layer interface surfaces and velocities. Based on the same general principle, we show that our approach can also be used to study the effects of lateral heterogeneity on diffraction traveltime and its second-order derivative related to time-migration velocity. In this paper, we explicitly specify expressions for NMO and time-migration velocities with the influences from both types of heterogeneity suitable for 2-D data sets, and also discuss possible extensions of the proposed theory to 3-D data sets and to parameters related to higher-order traveltime derivatives. Using numerical examples, we demonstrate that the proposed theory can lead to more accurate reflection and diffraction traveltime predictions in comparison with those obtained based on the 1-D assumption. Both the proposed theoretical framework and its numerical testing for forward traveltime computation aid in understanding the effects of lateral heterogeneity on time-processing parameters, and serve as an important basis for designing an efficient technique to separate those influences in processes such as Dix inversion for a more accurate subsurface model in the future.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
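The second-order moveout parameter discussed here enters through the classical hyperbolic traveltime approximation; a minimal sketch with illustrative values:

```python
import math

def nmo_traveltime(t0, offset, v_nmo):
    """Hyperbolic (second-order) moveout: t(x) = sqrt(t0^2 + x^2 / v_nmo^2)."""
    return math.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)

# In a 1-D medium the NMO velocity of a single flat layer equals the layer
# velocity; lateral heterogeneity perturbs this identification.
t = nmo_traveltime(t0=1.0, offset=2000.0, v_nmo=2500.0)
```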
  • 47
    Publication Date: 2019
    Description: Estimating shear wave velocity with depth from Rayleigh-wave dispersion data is limited by the accuracy of fundamental and higher mode identification and characterization. In many cases, the fundamental mode signal propagates exclusively in retrograde motion, while higher modes propagate in prograde motion. It has previously been shown that differences in particle motion can be identified with multicomponent recordings and used to separate prograde from retrograde signals. Here we explore the domain of existence of prograde motion of the fundamental mode, arising from a combination of two conditions: (1) a shallow, high-impedance contrast and (2) a high Poisson ratio material. We present solutions to isolate fundamental and higher mode signals using multicomponent recordings. Previously, a time-domain polarity mute was used with limited success due to the overlap in the time domain of fundamental and higher mode signals at low frequencies. We present several new approaches to overcome this low-frequency obstacle, all of which utilize the different particle motions of retrograde and prograde signals. First, the Hilbert transform is used to phase shift one component by 90° prior to summation or subtraction of the other component. This enhances either retrograde or prograde motion and can increase the mode amplitude. Secondly, we present a new time–frequency domain polarity mute to separate retrograde and prograde signals. We demonstrate these methods with synthetic and field data to highlight the improvements to dispersion images and the resulting dispersion curve extraction.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
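The 90° phase-shift-and-sum idea can be sketched with plain numpy: shifting the vertical component by 90° (the imaginary part of its analytic signal) and adding it to or subtracting it from the radial component reinforces retrograde or prograde motion. Sign conventions are simplified and the synthetic is hypothetical:

```python
import numpy as np

def shift90(x):
    """Phase-shift a real signal by 90 degrees: imaginary part of the
    analytic signal, built from a one-sided FFT spectrum."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.imag(np.fft.ifft(np.fft.fft(x) * h))

# Toy retrograde-like pair: vertical 90 degrees ahead of radial.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
vertical = np.cos(2 * np.pi * 5 * t)
radial = np.sin(2 * np.pi * 5 * t)

retro = radial + shift90(vertical)   # reinforces retrograde energy
pro = radial - shift90(vertical)     # reinforces prograde energy (none here)
```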
  • 48
    Publication Date: 2019
    Description: We use seismic noise cross-correlations to obtain a 3-D tomography model of SV-wave velocities beneath the western Indian Ocean, in the depth range of the oceanic crust and uppermost mantle. The study area covers 2000 × 2000 km^2 between Madagascar and the three spreading ridges of the Indian Ocean, centred on the volcanic hotspot of La Réunion. We use seismograms from 38 ocean bottom seismometers (OBSs) deployed by the RHUM-RUM project and 10 island stations on La Réunion, Madagascar, Mauritius, Rodrigues, and Tromelin. Phase cross-correlations are calculated for 1119 OBS-to-OBS, land-to-OBS, and land-to-land station pairs, and a phase-weighted stacking algorithm yields robust group velocity measurements in the period range of 3-50 s. We demonstrate that OBS correlations across large interstation distances of >2000 km are of sufficiently high quality for large-scale tomography of ocean basins. Many OBSs yielded group velocity measurements as good as those from land stations. Besides Rayleigh waves, the noise correlations contain a low-velocity wave type propagating at 0.8-1.5 km s^-1 over distances exceeding 1000 km, presumably Scholte waves travelling through seafloor sediments. The 100 highest-quality group velocity curves are selected for tomographic inversion at crustal and lithospheric depths. The inversion is executed jointly with a data set of longer-period Rayleigh-wave phase and group velocity measurements from earthquakes, which had previously yielded a 3-D model of Indian Ocean lithosphere and asthenosphere. Robust resolution tests and plausible structural findings in the upper 30 km validate the use of noise-derived OBS correlations for adding crustal structure to earthquake-derived tomography of the oceanic mantle. Relative to the crustal reference model CRUST1.0, our new shear-velocity model tends to enhance both slow and fast anomalies. It reveals slow anomalies at 20 km depth beneath La Réunion, Mauritius, Rodrigues Ridge, Madagascar Rise, and the Central Indian spreading ridge. These structures can clearly be associated with increased crustal thickness and/or volcanic activity. Locally thickened crust beneath La Réunion and Mauritius is probably related to magmatic underplating by the hotspot. In addition, these islands are characterized by a thickened lithosphere that may reflect the depleted, dehydrated mantle regions from which the crustal melts were sourced. Our tomography model is available as an electronic supplement.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
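Phase-weighted stacking damps the linear stack by the coherence of instantaneous phases across traces. A minimal numpy sketch in the spirit of Schimmel & Paulssen (1997); the windowing and weighting details of the actual processing may differ:

```python
import numpy as np

def _analytic(x):
    """Analytic signal via a one-sided FFT spectrum (plain-numpy Hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_weighted_stack(traces, nu=2.0):
    """Weight the linear stack by the modulus of the mean unit phasor of
    the traces' instantaneous phases, raised to the power nu."""
    phasors = []
    for tr in traces:
        a = _analytic(tr)
        phasors.append(a / np.abs(a))
    coherence = np.abs(np.mean(phasors, axis=0)) ** nu
    return np.mean(traces, axis=0) * coherence

# Identical traces are fully coherent, so the stack is preserved.
t = np.linspace(0.0, 1.0, 300, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)
stacked = phase_weighted_stack([x, x, x])
```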
  • 49
    Publication Date: 2019
    Description: Since the completion of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission, global gravity models of uniform quality and coverage are available. We investigate their potential as tools for estimating the thermal structure of the continental lithosphere, through simulation and a real-data test in Central-Eastern Europe across the Trans-European Suture Zone. Heat flow, measured near the Earth's surface, is the result of the superposition of a complex set of contributions, one of them being the heat production occurring in the crust. The crust is enriched in radioactive elements with respect to the underlying mantle, and crustal thickness is an essential parameter in isolating the thermal contribution of the crust. Obtaining reliable estimates of crustal thickness through inversion of GOCE-derived gravity models has already proven feasible, especially when weak constraints from other observables are introduced. We test a way to integrate this in a geothermal framework, building a 3-D, steady-state, solid Earth conductive heat transport model, from the lithosphere-asthenosphere boundary to the surface. This thermal model is coupled with a crust-mantle boundary depth resulting from inverse modelling, after correcting the gravity model for the effects of topography, far-field isostatic roots and sediments. We employ a mixed space- and spectral-domain forward modelling strategy to ensure full spectral coherency between the limited spectral content of the gravity model and the reductions. Deviations from a direct relationship between crustal thickness and crustal heat production are accommodated using a subsequent substitution scheme, constrained by surface heat flow measurements where available. The result is a 3-D model of the lithosphere characterised in terms of temperature, radiogenic heat and thermal conductivity. It provides added information with respect to the lithospheric structure and sparse heat flow measurements alone, revealing a satisfactory coherence with the geological features in the area and their controlling effect on conductive heat transport.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
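The crustal contribution to surface heat flow enters through steady-state conduction; for uniform radiogenic heat production A, the 1-D geotherm has the closed form below. The numbers are generic crustal values, not the paper's:

```python
def geotherm(z, t_surface, q_surface, k, A):
    """1-D steady-state conductive geotherm with uniform heat production
    A (W m^-3): T(z) = T0 + (q0 / k) * z - A * z^2 / (2 * k)."""
    return t_surface + q_surface * z / k - A * z ** 2 / (2.0 * k)

# Illustrative values: surface heat flow 60 mW m^-2, conductivity
# 2.5 W m^-1 K^-1, A = 1 microW m^-3 -> about 550 deg C at 30 km depth.
T_moho = geotherm(30e3, 10.0, 60e-3, 2.5, 1e-6)
```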
  • 50
    Publication Date: 2019
    Description: We investigate the possibility of passive monitoring of a salt-water disposal well in British Columbia, Canada, using continuously recorded ambient seismic noise. We find seismic velocity variations induced by a reduction of injection pressure, made in an effort to mitigate an elevated level of seismicity most likely associated with the disposal of salt water. The relative velocity variations are derived from time-shifts measured between consecutive cross-correlation functions for each station pair in a surface array composed of five broad-band seismometers. The probable driving mechanisms responsible for the velocity changes are reduced pore pressures and/or lowered poroelastic stresses beyond the injection wellbore. Hydrologic data (e.g. snow and rainfall), noise energy trends and fluctuations in the incident direction of dominant noise sources do not correlate with the estimated relative velocity variations. Velocity variations are detected ahead of the zone of induced seismicity, indicating that seismic interferometry may aid in efforts to reduce the risk of induced seismicity by (1) providing verifiable and repeatable measurements of physical changes within the surrounding area and (2) providing hard constraints for modelling efforts that constrain how and where pore-pressure fronts change.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
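Relative velocity changes of this kind are often estimated with the stretching method: the current correlation function is compared against stretched versions of the reference. A toy grid-search sketch with a synthetic signal (function and variable names are hypothetical):

```python
import numpy as np

def stretching_dvv(reference, current, t, trial_dvv):
    """Grid-search dv/v: a homogeneous velocity change stretches the coda,
    current(t) ~ reference(t * (1 + dv/v)); return the best-correlating trial."""
    best, best_cc = 0.0, -np.inf
    for dvv in trial_dvv:
        stretched = np.interp(t * (1.0 + dvv), t, reference)
        cc = np.corrcoef(stretched, current)[0, 1]
        if cc > best_cc:
            best, best_cc = dvv, cc
    return best

t = np.linspace(0.0, 10.0, 2001)
reference = np.sin(2 * np.pi * t) * np.exp(-0.2 * t)  # synthetic decaying coda
current = np.interp(t * 1.002, t, reference)          # impose dv/v = 0.2 per cent
trials = np.linspace(-0.01, 0.01, 201)
dvv_est = stretching_dvv(reference, current, t, trials)
```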
  • 51
    Publication Date: 2019
    Description: Electrical conductivity is one of the most commonly used geophysical methods for reservoir and environmental studies. Its main interest lies in its sensitivity to key storage and transport properties of porous media. Its quantitative use therefore depends on the reliability of the petrophysical relationships that link them. In this work, we develop a new physically based model for estimating the electrical conductivity of saturated porous media. The model is derived by assuming that the porous medium is represented by a bundle of tortuous capillary tubes with a fractal pore-size distribution. The model is expressed in terms of the porosity, the electrical conductivity of the pore liquid and the microstructural parameters of the porous medium. It takes into account the interface properties between minerals and pore water by introducing a surface conductivity. Expressions for the formation factor and hydraulic tortuosity are also obtained from the model derivation. The model is then successfully compared with published data and performs better than previous models. The proposed approach also makes it possible to relate the electrical conductivity to other transport properties such as the hydraulic conductivity.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
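The roles of the formation factor and surface conductivity can be sketched with a simple parallel-conduction form; the Archie-type formation factor below is a common illustrative choice, not the fractal expression derived in the paper:

```python
def bulk_conductivity(sigma_water, formation_factor, sigma_surface=0.0):
    """Parallel-conduction sketch: bulk conductivity is the electrolyte
    term sigma_w / F plus a surface-conduction term (illustrative form)."""
    return sigma_water / formation_factor + sigma_surface

# Archie-type formation factor F = phi^(-m) with porosity 0.2, exponent 2.
phi, m = 0.2, 2.0
F = phi ** (-m)                                   # = 25
sigma = bulk_conductivity(5.0, F, sigma_surface=0.01)  # S/m
```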
  • 52
    Publication Date: 2019
    Description: Thinning of the lithosphere under continental collisional orogens is often attributed to delamination or convective thinning. Both processes remove part or all of the mantle lithosphere that has become denser and gravitationally unstable. Previous studies mostly focused on the different thermomagmatic consequences of these two processes; the dynamic links between them, and the critical conditions for one or the other process to dominate lithosphere thinning, remain uncertain. Here, we used high-resolution thermomechanical models with various rheologies (linear viscous, power-law viscous and/or the extended Drucker–Prager plasticity) to systematically investigate the dynamics of delamination and convective thinning under collisional orogens. Our results show that convective thinning is favoured in models of linear (Newtonian) viscous rheology and low viscosity (10^19-10^20 Pa s). Power-law viscous rheology promotes strain localization, which reduces the effective viscosity and may lead to localized rising of the asthenosphere to the crustal base, thus triggering delamination. Further strain localization and stronger delamination are predicted when plastic rheology is included in the model. These results indicate that convective thinning and delamination are dynamically linked and can occur in the same orogeny. Their relative dominance during orogenesis may be distinguished by the resulting spatiotemporal evolution of thermal perturbation, magmatism and elevation changes. We applied the models to show that the evolution of the Central Anatolian Plateau is consistent with the dominance of convective thinning, whereas delamination played a major role in thinning the mantle lithosphere under the central-northern Tibetan Plateau.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
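The strain-localization feedback invoked for power-law rheology follows from the drop of effective viscosity with strain rate. A minimal sketch (the creep prefactor A is an arbitrary illustrative value):

```python
def effective_viscosity(strain_rate, A, n):
    """Power-law creep: strain_rate = A * sigma^n, so the effective
    viscosity eta = sigma / (2 * strain_rate) falls as strain rate grows."""
    sigma = (strain_rate / A) ** (1.0 / n)
    return sigma / (2.0 * strain_rate)

# With stress exponent n = 3, a tenfold increase in strain rate lowers
# the effective viscosity by a factor 10^(2/3): the localization feedback.
eta_slow = effective_viscosity(1e-15, 1e-40, 3.0)
eta_fast = effective_viscosity(1e-14, 1e-40, 3.0)
```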
  • 53
    Publication Date: 2019
    Description: Developing a model for anthropogenic seismic hazard remains an open challenge, whatever the geo-resource being produced. We analyse Mmax, the largest magnitude reported at each site where reservoir-triggered seismicity (RTS) is documented (37 events, 1933-2008), treating these events as aftershocks of reservoir impoundment loading. We relate each reservoir impoundment to its magnitude equivalent M*reservoir = M*(Lr), using the reservoir length Lr as a proxy for the rupture length of a reservoir main-shock equivalent. The latter is derived from the empirical relationship between magnitude and rupture length for tectonic earthquakes. We resolve that (i) Mmax values for RTS are bounded by M*reservoir at the 95 per cent confidence level; (ii) on average, Mmax is smaller than M*reservoir by 2.2 units; and (iii) 50 per cent of Mmax occurrences fall within 2 ± 1 yr of reservoir impoundment. These triggering patterns support a fluid-driven seismicity signature: the slow reservoir impoundment emerges as a less efficient trigger of Mmax events (larger ΔM = M*reservoir − Mmax) than earthquake interactions.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
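Mapping a reservoir length to a magnitude equivalent relies on empirical magnitude-rupture-length scaling for tectonic earthquakes. The sketch below uses the Wells & Coppersmith (1994) all-slip-type surface-rupture regression as one widely used example; the paper's exact regression may differ:

```python
import math

def magnitude_from_rupture_length(L_km, a=5.08, b=1.16):
    """Empirical magnitude-length scaling, M = a + b * log10(L_km); the
    default coefficients follow Wells & Coppersmith (1994), used here
    purely for illustration."""
    return a + b * math.log10(L_km)

# A 30-km-long reservoir maps to a magnitude equivalent of about 6.8
# under this scaling.
m_star = magnitude_from_rupture_length(30.0)
```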
  • 54
    Publication Date: 2019
    Description: The activities of frontal thrusts in the northern Qilian Shan are critical for understanding the deformation of the Qilian Shan and the northeastern Tibetan Plateau. In this study, we estimate the slip rate of the active Fodongmiao–Hongyazi thrust along the northern margin of the Qilian Shan. High-resolution satellite imagery interpretations and detailed field investigations suggest that the fault displaced late Pleistocene terraces and formed fresh, prominent north-facing fault scarps. To quantify the slip rate of the fault, we measured the displacements along the fault scarps using an unmanned aerial vehicle system and dated the displaced geomorphic surfaces using optically stimulated luminescence (OSL) and 14C methods. The vertical slip rate of the fault is estimated at 1.0 ± 0.3 mm yr^-1 for the western segment. The slip rates for two branches in the eastern segment are 0.3 ± 0.1 and 0.6 ± 0.1 mm yr^-1. Using a fault dip of 40 ± 10°, we constrain the corresponding shortening rates to 1.4 ± 0.5 and 1.2 ± 0.4 mm yr^-1, respectively. The rates are consistent with values over different timescales, which suggests steady rock uplift and northeastward growth of the western Qilian Shan. Crustal shortening occurs mainly on the range-bounding frontal thrust.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
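Converting a measured vertical (throw) rate on a thrust to a horizontal shortening rate uses the fault dip; a minimal sketch with the nominal western-segment values from this abstract:

```python
import math

def shortening_rate(vertical_rate, dip_deg):
    """Horizontal shortening on a thrust from its vertical (throw) rate:
    shortening = vertical / tan(dip)."""
    return vertical_rate / math.tan(math.radians(dip_deg))

# 1.0 mm/yr of vertical slip at a 40-degree dip gives roughly 1.2 mm/yr
# of shortening, consistent with the quoted 1.4 +/- 0.5 mm/yr once the
# dip uncertainty is propagated.
s = shortening_rate(1.0, 40.0)
```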
  • 55
    Publication Date: 2019
    Description: We present a numerical method for the simulation of earthquake cycles on a 1-D fault interface embedded in a 2-D homogeneous, anisotropic elastic solid. The fault is governed by an experimentally motivated friction law known as rate-and-state friction, which furnishes a set of ordinary differential equations which couple the interface to the surrounding volume. Time enters the problem through the evolution of the ordinary differential equations along the fault and provides boundary conditions for the volume, which is governed by quasi-static elasticity. We develop a time-stepping method which accounts for the interface/volume coupling and requires solving an elliptic partial differential equation for the volume response at each time step. The 2-D volume is discretized with a second-order accurate finite difference method satisfying the summation-by-parts property, with boundary and fault interface conditions enforced weakly. This framework leads to a provably stable semi-discretization. To mimic slow tectonic loading, the remote side-boundaries are displaced at a slow rate, which eventually leads to earthquake nucleation at the fault. Time stepping is based on an adaptive, fourth-order Runge–Kutta method and captures the highly varying timescales present. The method is verified with convergence tests for both the orthotropic and fully anisotropic cases. An initial parameter study reveals regions of parameter space where the systems experience a bifurcation from period one to period two behaviour. Additionally, we find that anisotropy influences the recurrence interval between earthquakes, as well as the emergence of aseismic transients and the nucleation zone size and depth of earthquakes.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
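The rate-and-state framework on the fault interface couples a friction law to a state-evolution ODE. A minimal sketch with the aging law and generic laboratory-scale parameters (a, b, Dc and the velocities are illustrative, not the paper's):

```python
import numpy as np

def friction(v, theta, a=0.015, b=0.02, mu0=0.6, v0=1e-6, dc=1e-2):
    """Rate-and-state friction: mu = mu0 + a ln(V/V0) + b ln(V0 theta / Dc).
    With b > a the fault is velocity-weakening, allowing stick-slip."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

def evolve_theta(theta, v, dt, dc=1e-2):
    """Aging law, forward Euler step: d(theta)/dt = 1 - V * theta / Dc."""
    return theta + dt * (1.0 - v * theta / dc)

# At constant sliding velocity the state variable relaxes toward its
# steady state theta_ss = Dc / V.
v, dc = 1e-6, 1e-2
theta = 1.0
for _ in range(200000):
    theta = evolve_theta(theta, v, dt=0.5)
theta_ss = dc / v   # = 1e4 s
```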
  • 56
    Publication Date: 2019
    Description: Palaeomagnetic constraints are essential factors in the reconstruction of the Mesozoic convergence of the Eastern Asia blocks. As one of the key blocks, Indochina was constrained only by palaeomagnetic data derived from sedimentary rocks. To evaluate whether the palaeomagnetic data used to restore the Late Triassic position of Indochina suffered from inclination shallowing, we conducted a palaeomagnetic and geochronologic study on a coeval volcanic clastic rock sequence in the western margin of the Khorat Basin, Thailand. U-Pb SIMS dating on zircons indicates that the age of the sampled section is between 205.1 ± 1.5 and 204.7 ± 1.4 Ma. Site mean directions are Dg/Ig = 217.2°/−39.4° (κg = 45.1, α95g = 10.1°) before and Ds/Is = 209.2°/−44.5° (κs = 43.8, α95s = 10.2°) after tilt correction. The new data set yields a positive reversal test at the 'Category C' level. The characteristic remanent magnetization, recorded by coexisting magnetite and hematite, is interpreted to be a primary remanence acquired during the initial cooling of the volcanic clastic rocks. The consistency of the palaeolatitudes derived from the volcanic clastic rocks and from the previously reported sedimentary rocks suggests that there is probably no significant inclination shallowing bias in the sedimentary-rock-derived palaeomagnetic data. Therefore, the estimates of the Late Triassic position of Indochina are confirmed to be reliable. The Indochina Block had collided with the southern margin of Eurasia by the Late Triassic and played an important role in the Mesozoic convergence of the Eastern Asia blocks.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 57
    Publication Date: 2019
Description: SUMMARY: We present a numerically exact method for calculating the internal and external gravitational potential of aspherical and heterogeneous planets. Our approach is based on the transformation of Poisson’s equation into an equivalent equation posed on a spherical computational domain. This new problem is solved in an efficient iterative manner based on a hybrid pseudospectral/spectral element discretization. The main advantage of our method is that its computational cost reflects the planet’s geometric and structural complexity, being in many situations only marginally more expensive than boundary perturbation theory. Several numerical examples are presented to illustrate the method’s efficacy and potential range of applications.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 58
    Publication Date: 2019
Description: SUMMARY: In long-range forward-scattering propagation, the accumulation of phase changes caused by velocity perturbations violates the validity of the Born approximation. In contrast, this phase-change accumulation can be handled by the Rytov approximation, which has been widely used for long-distance propagation involving only forward or small-angle scattering. However, the weak-scattering assumption (i.e. small velocity perturbation) in the Rytov approximation limits its scope of application. To address this problem, we analyse the integral kernel of the Rytov transform using the Wentzel-Kramers-Brillouin-Jeffreys (WKBJ) approximation and demonstrate that the kernel is a function of velocity perturbation and scattering angle. By applying a small-scattering-angle approximation, we show that the phase variation has a linear relationship with the slowness perturbation, no matter how strong the perturbation is. The new integral equation is therefore referred to as the generalized Rytov approximation (GRA), because it overcomes the weak-scattering assumption of the Rytov approximation. To show the limitations of the Rytov approximation and the advantages of the proposed GRA method, we first design a two-layer model and analytically calculate the errors introduced by the small-scattering-angle assumption for plane-wave incidence. We show that the phase (traveltime) variations predicted by the GRA are always more accurate than those of the Rytov approximation. In particular, the GRA produces accurate phase variations for a normally incident plane wave regardless of the magnitude of the velocity perturbation. Numerical examples using Gaussian anomaly models demonstrate that the scattering angle has a crucial impact on the accuracy of the GRA. If the small-scattering-angle assumption holds, the GRA can produce an accurate phase approximation even if the velocity perturbation is very strong. On the contrary, both the first-order Rytov approximation and the GRA fail to give satisfactory results when the scattering angle is large enough. The proposed GRA method has the potential to be used for traveltime modelling and inversion in large-scale, strongly perturbed media.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 59
    Publication Date: 2019
Description: SUMMARY: Estimating the location of geologic and tectonic features on a subducting plate is important for interpreting their spatial relationships with other observables, including seismicity, seismic velocity and attenuation anomalies, and the location of ore deposits and arc volcanism in the over-riding plate. Here we present two methods for estimating the location of predictable features such as seamounts, ridges and fracture zones on the slab. One uses kinematic reconstructions of plate motions; the other uses multidimensional scaling to flatten the slab onto the surface of the Earth. We demonstrate the methods using synthetic examples and the test case of fracture zones entering the Lesser Antilles subduction zone. The two methods produce results that are in good agreement with each other in both the synthetic and real examples. In the Lesser Antilles, the subducted fracture zones trend northward of their surface projections. The two methods begin to diverge in regions where the multidimensional scaling method has its greatest likely error. Wider application of these methods may help to establish spatial correlations globally.
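The slab-flattening step described in this abstract can be illustrated with classical (Torgerson) multidimensional scaling, which embeds a matrix of along-surface distances into a plane via double-centring and an eigendecomposition. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def classical_mds(dist, ndim=2):
    """Classical (Torgerson) multidimensional scaling.

    dist: (n, n) symmetric matrix of pairwise distances measured along the
    slab surface. Returns (n, ndim) coordinates whose Euclidean distances
    approximate dist, i.e. the surface "unfolded" onto a plane.
    """
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:ndim]        # keep largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

For a truly planar configuration the embedding reproduces the input distances exactly (up to rotation and reflection); for a curved slab the residual misfit indicates where the flattening is least reliable.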
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 60
    Publication Date: 2019
Description: SUMMARY: Microseismic monitoring is a primary tool for understanding and tracking the progress of mechanical processes occurring in active rock fracture systems. In geothermal or hydrocarbon fields, or along seismogenic fault systems, the detection and location of microseismicity facilitate resolution of the fracture system geometry and investigation of the interaction between fluids and rocks in response to stress field perturbations. Seismic monitoring aims to detect, locate and characterize seismic sources. The detection of weak signals is often achieved at the cost of an increased number of false detections, related to transient signals generated by a range of noise sources, or to instrumental problems, ambient conditions or human activity that often affect seismic records. A variety of fast and automated methods have recently been proposed to detect and locate microseismicity based on the coherent detection of signal anomalies, such as increases in amplitude or coherent polarization, at dense seismic networks. While these methods have proved very powerful for detecting weak events and reducing the magnitude of completeness, a major remaining problem is discriminating between weak seismic signals produced by microseismicity and false detections. In this work, the microseismic data recorded along the Irpinia fault zone (Southern Apennines, Italy) are analysed to detect weak, natural earthquakes using one such automated, migration-based method. We propose a new method for the automatic discrimination of real versus false detections, based on empirical data and information about the detection capability of the seismic network. Our approach achieves high performance in detecting earthquakes without requiring visual inspection of the seismic signals, minimizing analyst intervention. The proposed methodology is automated, self-updating and can be tuned to different success rates.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 61
    Publication Date: 2019
Description: SUMMARY: A multitaper estimator is proposed that accommodates time-series containing gaps without using any form of interpolation. In contrast with prior missing-data multitaper estimators that force standard Slepian sequences to be zero at gaps, the proposed missing-data Slepian sequences are defined only where data are present. The missing-data Slepian sequences are frequency independent, as are the eigenvalues that define the energy concentration within the resolution bandwidth, when the process bandwidth is [−1/2, 1/2) for unit sampling and the sampling scheme comprises integer multiples of unity. As a consequence, one need only compute the ensuing missing-data Slepian sequences for a given sampling scheme once, and then the spectrum at an arbitrary set of frequencies can be computed using them. It is also shown that the resulting missing-data multitaper estimator can incorporate all of the optimality features (i.e. adaptive weighting, F-test and reshaping) of the standard multitaper estimator, and can be applied to bivariate or multivariate situations in similar ways. Performance of the missing-data multitaper estimator is illustrated using length-of-day, seafloor pressure and Nile River low-stand time-series.
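The construction of Slepian sequences "defined only where data are present" can be sketched by solving the spectral concentration problem restricted to the observed sample times: the tapers are eigenvectors of the band-limiting kernel evaluated at the available time pairs. The function below is an illustrative reading of that idea, not the paper's code:

```python
import numpy as np

def missing_data_slepians(times, w, k):
    """Slepian-like tapers defined only at the observed sample times,
    for half-bandwidth w (cycles per unit time), 0 < w < 1/2.

    times: sorted array of integer sample times with gaps.
    Returns (k, len(times)) tapers and their concentration eigenvalues.
    """
    t = np.asarray(times, dtype=float)
    dt = t[:, None] - t[None, :]
    dt_safe = np.where(dt == 0.0, 1.0, dt)       # avoid 0/0 at the diagonal
    # Concentration kernel sin(2*pi*w*dt)/(pi*dt); the dt -> 0 limit is 2*w.
    a = np.where(dt == 0.0, 2.0 * w,
                 np.sin(2.0 * np.pi * w * dt_safe) / (np.pi * dt_safe))
    vals, vecs = np.linalg.eigh(a)
    order = np.argsort(vals)[::-1][:k]           # most concentrated first
    return vecs[:, order].T, vals[order]
```

For a given sampling scheme the tapers are computed once; the eigenvalues lie in (0, 1) and measure the fraction of taper energy concentrated in the band, consistent with the frequency independence noted in the abstract.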
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 62
    Publication Date: 2019
Description: SUMMARY: The analysis of surface wave dispersion curves (DCs) is widely used for near-surface S-wave velocity (VS) reconstruction. However, a comprehensive characterization of the near-surface also requires the estimation of P-wave velocity (VP). We focus on the estimation of both VS and VP models from surface waves using a direct data-transform approach. We estimate a relationship between the wavelength of the fundamental mode of surface waves and the investigation depth, and we use it to directly transform the DCs into VS and VP models at laterally varying sites. We apply the workflow to a real data set acquired on a known test site. The accuracy of the reconstruction is validated by a waveform comparison between field data and synthetic data obtained by performing elastic numerical simulations on the estimated VP and VS models. The uncertainties in the estimated velocity models are also computed.
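A direct wavelength-to-depth transform of the kind described here can be sketched with generic textbook rules of thumb (investigation depth ≈ λ/2.5, VS ≈ 1.1 × phase velocity). Both factors below are illustrative placeholders, not the site-calibrated relationship estimated in the paper:

```python
import numpy as np

def dc_to_vs_profile(freqs_hz, phase_vel_ms, depth_factor=2.5, vs_factor=1.1):
    """Rule-of-thumb direct transform of a fundamental-mode Rayleigh-wave
    dispersion curve into a Vs-versus-depth profile.

    Each (f, c) pair is mapped to depth lambda/depth_factor and
    Vs = vs_factor * c. Returns depths and Vs sorted by increasing depth.
    """
    wavelength = np.asarray(phase_vel_ms, float) / np.asarray(freqs_hz, float)
    depths = wavelength / depth_factor
    vs = vs_factor * np.asarray(phase_vel_ms, float)
    order = np.argsort(depths)
    return depths[order], vs[order]
```

Low frequencies (long wavelengths) sample deeper, so the transform naturally orders the profile from the high-frequency shallow points to the low-frequency deep ones.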
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 63
    Publication Date: 2019
Description: SUMMARY: Seismic signal recognition can serve as a powerful auxiliary tool for analysing and processing ever-larger volumes of seismic data. It can facilitate many subsequent procedures such as first-break picking, statics correction, denoising, signal detection, event tracking, structural interpretation, inversion and imaging. In this study, I propose an automatic technique for seismic signal recognition that takes advantage of unsupervised machine learning. In the proposed technique, seismic signal recognition is treated as a problem of clustering data points: all seismic sampling points in the time domain are clustered into two clusters, that is, signal or non-signal. The hierarchical clustering algorithm is used to group these sampling points. Four attributes, that is, two short-term-average-to-long-term-average ratios, variance and envelope, are investigated in the clustering process. In addition, to quantitatively evaluate the performance of seismic signal recognition, I propose two new statistical indicators: the rate between the total energies of the original and recognized signals (RTE), and the rate between the average energies of the original and recognized signals (RAE). A large number of numerical experiments show that when the signal is slightly corrupted by noise, the proposed technique performs very well, with recognition accuracy, precision and RTE of nearly 1 (i.e. 100 per cent), recall greater than 0.8 and RAE of about 1–1.3. When the signal is moderately corrupted by noise, the proposed technique maintains recognition accuracy of about 0.9, precision of nearly 1, RTE of about 0.9, recall of around 0.6 and RAE of about 1.5. Applications of the proposed technique to real microseismic data induced by hydraulic fracturing and to reflection seismic data demonstrate its feasibility and encouraging prospects.
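The attributes-plus-hierarchical-clustering idea can be sketched in a few lines: compute simple amplitude attributes per sample, standardize them, and cut a Ward linkage tree into two clusters. This is a loose illustration with a reduced attribute set (one STA/LTA-like ratio and the envelope), not the paper's four-attribute implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.ndimage import uniform_filter1d

def cluster_signal(trace, sta=5, lta=50):
    """Cluster the samples of a trace into two groups (signal/non-signal)
    from simple amplitude attributes.

    Returns a boolean array, True where a sample is classed as signal.
    """
    env = uniform_filter1d(np.abs(trace), size=sta)            # smoothed envelope
    lta_env = uniform_filter1d(np.abs(trace), size=lta) + 1e-12
    feats = np.column_stack([env / lta_env, env])              # ratio + envelope
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)
    labels = fcluster(linkage(feats, method='ward'), t=2, criterion='maxclust')
    return labels == labels[np.argmax(env)]                    # high-envelope cluster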
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 64
    Publication Date: 2019
Description: SUMMARY: Measurements of seismo-acoustic events by collocated seismic and infrasound arrays allow the two wavefields produced by the same event to be studied. However, some of the scientific and technical constraints on building the two technologies differ and may conflict. For a new station, an optimal design that satisfies the constraints of both technologies can be found. When upgrading an existing array by adding the complementing technology, however, the situation is different: the site location, the array configuration and the physical constraints are fixed and may not be optimal for the complementing technology, which may lead to rejection of the upgrade. The International Monitoring System (IMS) for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) includes 37 seismic arrays and 51 infrasound arrays. Although the CTBT verification regime is fixed in the treaty, an upgrade of the existing arrays by adding more technologies is possible. The Mount Meron seismic array (MMAI), which is part of the IMS, is composed of 16 sites. Microbarometers were installed at five MMAI sites to form the Mount Meron infrasound array. Due to regulatory and physical constraints, it was possible neither to relocate the sites nor to install analogue noise reduction filters (i.e. a pipe array). In this study, it is demonstrated that the installation of the MMAI infrasound array is beneficial despite the non-optimal conditions. It is shown that the noise levels of the individual array sites lie between the high and median global noise levels. However, we claim that the more indicative measures are the noise levels of the beams of interest, as demonstrated by analysing microbaroms originating from the Mediterranean Sea. Moreover, the ability to detect events relevant to the CTBT is demonstrated by analysing man-made events during 2011 from the Libya region.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 65
    Publication Date: 2019
Description: SUMMARY: Static and quasi-static Coulomb stress changes produced by large earthquakes can modify the probability of occurrence of subsequent events on neighbouring faults. This approach is based on physical (Coulomb stress changes) and statistical (probability calculations) models, which are influenced by the quality and quantity of data available in the study region. Here, we focus on the Wasatch fault zone (WFZ), a well-studied active normal fault system with abundant geological and palaeoseismological data. Palaeoseismological trench investigations of the WFZ indicate that at least 24 large, surface-faulting earthquakes have ruptured the fault's five central, 35–59-km-long segments since ∼7 ka. Our goal is to determine whether the stress changes due to the youngest palaeoevents have significantly modified the present-day probability of occurrence of large earthquakes on each of the segments. For each segment, we modelled the cumulative (coseismic + post-seismic) Coulomb stress changes (∆CFS_cum) due to earthquakes younger than the most recent event on the segment in question and applied the resulting values to the time-dependent probability calculations. Results from the Coulomb stress modelling suggest that the Brigham City, Salt Lake City and Provo segments have accumulated ∆CFS_cum larger than 10 bar, whereas the Weber segment has experienced a stress decrease of 5 bar in the scenario of recent rupture of the Great Salt Lake fault to the west. Probability calculations predict a high probability of occurrence for the Brigham City and Salt Lake City segments, due to their long elapsed times (>1–2 ka) compared to the Weber, Provo and Nephi segments (<1 ka). The range of calculated coefficients of variation has a large influence on the final probabilities, most strongly for the Brigham City segment. Finally, when the Coulomb stress and probability models are combined, our results indicate that the ∆CFS_cum resulting from earthquakes post-dating the youngest events on each of the five segments substantially affects the probability calculations for three of the segments: Brigham City, Salt Lake City and Provo. The probability of occurrence of a large earthquake in the next 50 yr on these three segments may, therefore, be underestimated if a time-independent approach, or a time-dependent approach that does not consider ∆CFS, is adopted.
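The way a Coulomb stress change feeds into a time-dependent probability can be sketched with a lognormal renewal model and the standard "clock change" ∆CFS divided by the tectonic stressing rate. All parameter values below are illustrative, not the paper's WFZ values:

```python
import numpy as np
from scipy.stats import lognorm

def conditional_prob(elapsed_yr, mean_ri_yr, cov, window_yr=50.0,
                     dcfs_bar=0.0, stressing_rate_bar_yr=0.02):
    """Conditional probability of an earthquake in the next window_yr for a
    lognormal renewal model, with the elapsed time advanced (or delayed) by
    the Coulomb clock change dcfs / stressing rate.
    """
    # Lognormal with mean mean_ri_yr and coefficient of variation cov.
    sigma = np.sqrt(np.log(1.0 + cov**2))
    mu = np.log(mean_ri_yr) - 0.5 * sigma**2
    dist = lognorm(s=sigma, scale=np.exp(mu))
    t = max(elapsed_yr + dcfs_bar / stressing_rate_bar_yr, 0.0)  # clock change
    surv = dist.sf(t)
    if surv <= 0.0:
        return 1.0
    return (dist.cdf(t + window_yr) - dist.cdf(t)) / surv
```

A positive ∆CFS advances the clock and raises the 50-yr conditional probability; a stress decrease delays it, mirroring the segment-by-segment behaviour described in the abstract.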
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 66
    Publication Date: 2019
Description: SUMMARY: Quantifying landslide activity in remote regions is difficult because of the numerous complications that prevent direct landslide observations. However, building exhaustive landslide catalogues is critical to document and assess the impacts on landslide activity of climate change drivers such as increasing precipitation, glacial retreat and permafrost thawing, which are thought to strongly destabilize large parts of the high-latitude/altitude regions of the Earth. In this study, we take advantage of the capability offered by seismological observations to continuously and remotely record landslide occurrences at regional scales. We developed a new automated machine-learning processing chain, based on the Random Forest classifier, able to automatically detect and identify landslide seismic signals in continuous seismic records. We processed two decades of continuous seismological observations acquired by the Alaskan seismic networks. This allowed the detection of 5087 potential landslides over a period of 22 yr (1995–2017). We observe an increase in the number of landslides over this period and discuss the possible causes.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 67
    Publication Date: 2019
Description: SUMMARY: To describe the energy transport in the seismic coda, we introduce a system of radiative transfer equations for coupled surface and body waves in a scalar approximation. Our model is based on the Helmholtz equation in a half-space geometry with mixed boundary conditions. In this model, Green’s function can be represented as a sum of body waves and surface waves, which mimics the situation on Earth. In a first step, we study the single-scattering problem for point-like objects in the Born approximation. Using the assumption that the phase of body waves is randomized by surface reflection or by interaction with the scatterers, we show that it becomes possible to define, in the usual manner, the cross-sections for surface-to-body and body-to-surface scattering. Adopting the independent scattering approximation, we then define the scattering mean free paths of body and surface waves including the coupling between the two types of waves. Using a phenomenological approach, we then derive a set of coupled transport equations satisfied by the specific energy densities of surface and body waves in a medium containing a homogeneous distribution of point scatterers. In our model, the scattering mean free path of body waves is depth dependent as a consequence of the body-to-surface coupling. We demonstrate that an equipartition between surface and body waves is established at long lapse time, with a ratio that is predicted by the usual mode-counting arguments. We derive a diffusion approximation from the set of transport equations and show that the diffusivity is both anisotropic and depth dependent. The physical origin of the two properties is discussed. Finally, we present Monte Carlo solutions of the transport equations which illustrate the convergence towards equipartition at long lapse time as well as the importance of the coupling between surface and body waves in the generation of coda waves.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 68
    Publication Date: 2019
Description: SUMMARY: In the profile analysis of faults, the distribution of GNSS sites directly affects the accuracy of the estimated slip rate and locking depth. This paper discusses strategies for designing the layout of GNSS stations perpendicular to strike-slip faults in terms of site spacing and the Minimum Effective Distance, which is 20 times the locking depth of the fault. Three layout models are proposed, considering the complexity of strike-slip faults: (1) equal-spacing layout, in which many stations are deployed in the far field and only a few in the near field; (2) equal-deformation layout, in which stations are densely arranged in the near field and sparsely in the far field according to the shape of the deformation curve; (3) equal-slope-spacing layout, in which stations are arranged according to the degree of nonlinearity of the deformation curve, with dense distribution in regions of high nonlinearity and sparse distribution in approximately linear regions. The three models were used to redistribute the sites in the Qiaojia to Dongchuan segment of the Xiaojiang fault profile, and their performance was compared with that of the segment's current site distribution. The results show that model 1 is optimal for the fitting accuracy of the slip rate and model 3 is optimal for the accuracy of the locking depth. Overall, model 3 appears to be the best choice, considering that the accuracy of the locking depth is more difficult to control. One of the main purposes of deployment is to identify the seismogenic depth of the fault. With the locking depth of the fault gradually approaching the depth of the seismogenic layer during an interseismic period, the accuracy of observations from sites deployed at a preset value of the fault's historical seismogenic depth would improve.
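The deformation curve underlying such profile layouts is commonly modelled with the elastic screw-dislocation (arctangent) solution of Savage & Burford (1973), which also motivates the "20 times the locking depth" rule: at x = 20D the curve has reached about 97 per cent of its far-field value. A minimal sketch (the connection to this particular paper's fitting procedure is an assumption):

```python
import numpy as np

def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
    """Fault-parallel surface velocity across a strike-slip fault for the
    classic elastic screw-dislocation model: v(x) = (s / pi) * arctan(x / D),
    where s is the deep slip rate and D the locking depth.
    """
    return (slip_rate_mm_yr / np.pi) * np.arctan(
        np.asarray(x_km, dtype=float) / locking_depth_km)
```

Evaluating this curve at candidate station positions lets one compare how well each of the three layouts samples the nonlinear near-field bend that controls the locking-depth estimate.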
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 69
    Publication Date: 2019
Description: SUMMARY: HY-2A is China's first satellite altimeter mission, launched in August 2011. Its geodetic mission (GM) has run from 2016 March 30 to the present, collecting sea surface heights over about five 168-d cycles. To test how the HY-2A altimeter performs in marine gravity derivation, we use the least-squares collocation method to determine marine gravity anomalies on 1′ × 1′ grids around the South China Sea (covering 0°–30°N, 105°E–125°E) from the HY-2A/GM-measured geoid gradients. We assess the quality of the HY-2A/GM-derived gravity over different depths and areas using bias- and tilt-adjusted ship-borne gravity anomalies from the U.S. National Centers for Environmental Information (NCEI) and the Second Institute of Oceanography, Ministry of Natural Resources (MNR) of P. R. China. The RMS difference between the HY-2A/GM-derived and the NCEI ship-borne gravity is 5.91 mGal, and 5.33 mGal when the HY-2A value is replaced by the Scripps Institution of Oceanography (SIO) V23.1 value. The RMS difference between the HY-2A/GM-derived and the MNR ship-borne gravity is 2.90 mGal, and 2.76 mGal when the HY-2A value is replaced by the SIO V23.1 value. The RMS difference between the HY-2A and SIO V23.1 values is 3.57 mGal in open sea areas at least 20 km from the coast. In general, the difference between the HY-2A/GM-derived gravity and ship-borne gravity decreases with decreasing gravity field roughness and increasing depth. HY-2A yields the lowest gravity accuracy in areas with islands or reefs. Our assessment suggests that HY-2A can compete with other Ku-band altimeter missions in marine gravity derivation.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 70
    Publication Date: 2019
Description: Summary: Virtual Deep Seismic Sounding (VDSS) has emerged as a novel method to image the crust-mantle boundary (CMB) and potentially other lithospheric boundaries. In Part 1 (Liu et al., 2018), we showed that the arrival time and waveform of post-critical SsPmp, the post-critical reflection phase at the CMB used in VDSS, are sensitive to several different attributes of the crust and upper mantle. Here, we synthesize our methodology for deriving Moho depth, average crustal Vp and uppermost-mantle Vp from single-station observations of post-critical SsPmp under a 1D assumption. We first verify our method with synthetics and then substantiate it with a case study using the Yellowknife and POLARIS arrays in the Slave Craton, Canada. We show good agreement of crustal and upper-mantle properties derived with VDSS with those given by previous active-source experiments and our own P receiver functions (PRF) in our study area. Finally, we propose a PRF-VDSS joint analysis method to constrain average crustal Vp/Vs ratio and composition. Our PRF-VDSS joint analysis shows that the southwest Slave Craton has an intermediate crustal composition, most consistent with a Mesoarchean age.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 71
    Publication Date: 2019
Description: Summary: The variation of temperature in the crust is difficult to quantify due to the sparsity of surface heat flow observations and the lack of measurements of the thermal properties of rocks at depth. We examine the degree to which the thermal structure of the crust can be constrained from Curie depth and surface heat flow data in Southeastern Australia. We cast the inverse problem of heat conduction within a Bayesian framework and derive its adjoint so that we can efficiently find the optimal model that best reproduces the data and prior information on the thermal properties of the crust. Efficiency gains obtained from the adjoint method facilitate a detailed exploration of thermal structure in SE Australia, where we predict high temperatures within Precambrian rocks of 650 °C due to relatively high rates of heat production (0.9–1.4 μW m⁻³). In contrast, temperatures within dominantly Phanerozoic crust reach only 520 °C at the Moho due to the low rates of heat production in Cambrian mafic volcanics. A combination of Curie depth and heat flow data is required to constrain the uncertainty of lower-crustal temperatures to ±73 °C. We also show that parts of the crust are unconstrained if either dataset is omitted from the inversion.
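The forward problem behind such inversions reduces, in the simplest 1-D steady-state case, to the textbook conductive geotherm with uniform heat production: T(z) = T0 + (q0/k) z − A z²/(2k). A minimal sketch with illustrative parameter values (not the inverted SE Australia ones):

```python
def geotherm(z_km, t0_c=10.0, q0_mw_m2=65.0, k_w_mk=3.0, a_uw_m3=1.0):
    """Steady-state 1-D conductive geotherm with uniform heat production:
    T(z) = T0 + (q0 / k) * z - A * z**2 / (2 * k).

    t0_c: surface temperature (deg C); q0_mw_m2: surface heat flow (mW/m^2);
    k_w_mk: thermal conductivity (W/m/K); a_uw_m3: heat production (uW/m^3).
    """
    z = z_km * 1e3            # depth in metres
    q0 = q0_mw_m2 * 1e-3      # W m^-2
    a = a_uw_m3 * 1e-6        # W m^-3
    return t0_c + (q0 / k_w_mk) * z - a * z**2 / (2.0 * k_w_mk)
```

Raising the heat-production term lowers the Moho temperature for a fixed surface heat flow, which is why jointly constraining A with Curie depth data tightens the lower-crustal temperature uncertainty quoted in the abstract.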
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 72
    Publication Date: 2019
Description: Summary: The mantle transition zone is the region between the globally observed major seismic velocity discontinuities at depths of around 410 and 660 km, and is important for determining the style of convection and mixing between the upper and the lower mantle. In this study, P-to-S converted waves, or receiver functions, are used to study these discontinuities beneath the Alaskan subduction zone, where the Pacific plate subducts underneath the North American plate. Previous tomographic models do not agree on the depth extent of the subducting slab; therefore, improved imaging of the Earth structure underneath Alaska is required. We use 27,800 high-quality radial receiver functions to make common conversion point (CCP) stacks. Upper-mantle velocity anomalies are accounted for by two recently published regional tomographic S-wave velocity models. Using these two tomographic models, we show that the discontinuity depths within our CCP stacks are highly dependent on the choice of velocity model, between which the magnitudes of the velocity anomalies vary greatly. We design a quantitative test to show whether the anomalies in the velocity models are too strong or too weak, leading to over- or under-corrected discontinuity depths. We also show how this test can be used to rescale the 3D velocity corrections in order to improve the discontinuity topography maps. After applying the appropriate corrections, we find a localised thicker mantle transition zone and an uplifted 410 discontinuity, which show that the slab has clearly penetrated into the mantle transition zone. Little topography is seen on the 660 discontinuity, indicating that the slab has probably not reached the lower mantle. In the southwest, P410s arrivals have very small amplitudes or no significant arrival at all. This could be caused by water or basalt in the subducting slab reducing the strength at the 410, or by topography on the 410 discontinuity preventing coherent stacking. In the southeast of Alaska, a thinner mantle transition zone is observed. This area corresponds to the location of a slab window, and thinning of the mantle transition zone there may be caused by hot mantle upwellings.
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉We present an iterative classification scheme using inter-event cross-correlation to update an existing earthquake catalogue with similar events from a list of automatic seismic event detections. The algorithm automatically produces catalogue-quality events, with improved hypocentres and reliable P and S arrival time information. Detected events are classified into four event categories with the purpose of using the top category, with the highest assessed event quality and highest true-to-false ratio, directly for local earthquake tomography without additional manual analysis. The remaining categories have varying proportions of lower quality events, quality being defined primarily by the number of observed phase onsets, and can be viewed as different priority groups for manual inspection to reduce the time spent by a seismic analyst. A list of 3348 event detections from the geothermally active volcanic region around Hengill, southwest Iceland, produced by our migration and stack detector (Wagner et al. 2017), was processed using a reference catalogue of 1108 manually picked events from the same area. P and S phase onset times were automatically determined for the detected events using correlation time lags with respect to manually picked phase arrivals from multiple reference events at the same station. A significant improvement of the initial hypocentre estimates was achieved after re-locating the detected events using the computed phase onset times. The differential time dataset resulting from the correlation was successfully used for a double-difference re-location of the final updated catalogue. The routine can potentially be implemented in real-time seismic monitoring environments in combination with a variety of seismic event/phase detectors.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
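The correlation-lag onset picking described in the record above can be illustrated with a minimal sketch (the function name and synthetic pulse are illustrative, not the authors' code): the lag maximising the cross-correlation between a manually picked reference trace and a detected event shifts the reference pick onto the new trace.

```python
import numpy as np

def xcorr_onset(reference, detected, ref_pick, dt):
    """Estimate a phase onset on `detected` by cross-correlating it with a
    `reference` trace whose onset sample `ref_pick` was picked manually.
    Returns the estimated onset time (s) on the detected trace."""
    cc = np.correlate(detected, reference, mode="full")
    lag = int(np.argmax(cc)) - (len(reference) - 1)  # sample shift of best alignment
    return (ref_pick + lag) * dt

# synthetic check: the same pulse, shifted by 30 samples (0.3 s)
dt = 0.01
ref = np.zeros(200); ref[50:60] = np.hanning(10)
det = np.zeros(200); det[80:90] = np.hanning(10)
onset = xcorr_onset(ref, det, ref_pick=50, dt=dt)  # close to 0.80 s
```

In practice one would average lags over several reference events per station, as the abstract describes, rather than trusting a single correlation.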
  • 74
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉To describe the energy transport in the seismic coda, we introduce a system of radiative transfer equations for coupled surface and body waves in a scalar approximation. Our model is based on the Helmholtz equation in a half-space geometry with mixed boundary conditions. In this model, Green’s function can be represented as a sum of body waves and surface waves, which mimics the situation on Earth. In a first step, we study the single-scattering problem for point-like objects in the Born approximation. Using the assumption that the phase of body waves is randomized by surface reflection or by interaction with the scatterers, we show that it becomes possible to define, in the usual manner, the cross-sections for surface-to-body and body-to-surface scattering. Adopting the independent scattering approximation, we then define the scattering mean free paths of body and surface waves including the coupling between the two types of waves. Using a phenomenological approach, we then derive a set of coupled transport equations satisfied by the specific energy density of surface and body waves in a medium containing a homogeneous distribution of point scatterers. In our model, the scattering mean free path of body waves is depth dependent as a consequence of the body-to-surface coupling. We demonstrate that an equipartition between surface and body waves is established at long lapse-time, with a ratio which is predicted by usual mode counting arguments. We derive a diffusion approximation from the set of transport equations and show that the diffusivity is both anisotropic and depth dependent. The physical origin of the two properties is discussed. Finally, we present Monte-Carlo solutions of the transport equations which illustrate the convergence towards equipartition at long lapse-time as well as the importance of the coupling between surface and body waves in the generation of coda waves.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉In planetary fluid cores, the density depends on temperature and chemical composition, which diffuse at very different rates. This leads to various instabilities, bearing the name of double-diffusive convection. We investigate rotating double-diffusive convection (RDDC) in fluid spheres. We use the Boussinesq approximation with homogeneous internal thermal and compositional source terms. We focus on the finger regime, in which the thermal gradient is stabilising whereas the compositional one is destabilising. First, we perform a global linear stability analysis in spheres. The critical Rayleigh numbers drastically drop for stably stratified fluids, yielding large-scale convective motions where local analyses predict stability. We evidence the inviscid nature of this large-scale double-diffusive instability, enabling the determination of the marginal stability curve at realistic planetary regimes. In particular, we show that in stably stratified spheres, the Rayleigh numbers Ra at the onset evolve like Ra ∼ Ek^−1, where Ek is the Ekman number. This differs from rotating convection in unstably stratified spheres, for which Ra ∼ Ek^−4/3. The domain of existence of inviscid convection thus increases as Ek^−1/3. Second, we perform nonlinear simulations. We find a transition between two regimes of RDDC, controlled by the strength of the stratification. Furthermore, far from the RDDC onset, we find a dominating equatorially anti-symmetric, large-scale zonal flow slightly above the associated linear onset. Unexpectedly, a purely linear mechanism can explain this phenomenon, even far from the instability onset, yielding a symmetry breaking of the nonlinear flow at saturation. For even stronger stable stratification, the flow becomes mainly equatorially-symmetric and intense zonal jets develop. Finally, we apply our results to the early Earth core. Double diffusion can reduce the critical Rayleigh number by four decades for realistic core conditions. We suggest that the early Earth core was prone to turbulent RDDC, with large-scale zonal flows.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
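A quick numerical check of the onset scalings quoted in the abstract above (the prefactors are arbitrary placeholders; only the exponents come from the text):

```python
def ra_onset_ddc(Ek, C=1.0):
    # finger double-diffusive onset: Ra ~ Ek^-1
    return C * Ek ** -1.0

def ra_onset_thermal(Ek, C=1.0):
    # ordinary rotating convection onset: Ra ~ Ek^-4/3
    return C * Ek ** (-4.0 / 3.0)

# the window between the two onsets widens as Ek^-1/3 as Ek decreases
ratio = ra_onset_thermal(1e-9) / ra_onset_ddc(1e-9)  # (1e-9)**(-1/3) = 1e3
```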
  • 76
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉Time-domain processing of seismic reflection data has always been an important engine that is routinely utilized to produce seismic images and to expeditiously construct subsurface models. The conventional procedure involves analyzing parameters related to the derivatives of reflection traveltime with respect to offset, including normal moveout (NMO) velocities (second-order derivatives) and quartic coefficients (fourth-order derivatives). In this study, we propose to go beyond the typical assumption of a 1D laterally homogeneous medium when relating those ‘processing’ parameters to the subsurface medium parameters, and take into account the additional influences from lateral heterogeneity, including curved interfaces and smoothly variable velocities. We fill in the theoretical gap from previous studies and develop a general framework for such a connection in layered anisotropic media. We show that, in general, the influences of lateral heterogeneity get accumulated from all layers via a recursive relationship according to Fermat’s principle and can be approximately quantified in terms of the lateral derivatives of the layer interface surfaces and velocities. Based on the same general principle, we show that our approach can also be used to study the lateral heterogeneity effects on diffraction traveltime and its second-order derivative related to time-migration velocity. In this paper, we explicitly specify expressions for NMO and time-migration velocities with the influences from both types of heterogeneity suitable for 2D datasets and also discuss possible extensions of the proposed theory to 3D datasets and to parameters related to higher-order traveltime derivatives. Using numerical examples, we demonstrate that the proposed theory can lead to more accurate reflection and diffraction traveltime predictions in comparison with those obtained based on the 1D assumption. Both the proposed theoretical framework and its numerical testing for forward traveltime computation presented in this study aid in understanding the effects from lateral heterogeneity on time-processing parameters and also serve as an important basis for designing an efficient technique to separate those influences in important processes, such as Dix inversion, for a more accurate subsurface model in the future.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
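The second- and fourth-order traveltime derivatives mentioned in the abstract above can be made concrete with the standard 1D moveout expansion (a generic textbook form, not the paper's generalised laterally heterogeneous expressions):

```python
import math

def reflection_traveltime(x, t0, v_nmo, a4=0.0):
    """Classical 1-D moveout approximation used in time processing:
    t(x)^2 = t0^2 + x^2 / v_nmo^2 + a4 * x^4.
    v_nmo comes from the second-order traveltime derivative with respect
    to offset x, and a4 is the quartic (fourth-order) coefficient."""
    return math.sqrt(t0 * t0 + (x * x) / (v_nmo * v_nmo) + a4 * x ** 4)

# zero offset returns t0; a4 = 0 reduces to the hyperbolic approximation
```

The paper's contribution is precisely that, under lateral heterogeneity, the effective v_nmo and quartic coefficient accumulate contributions from interface and velocity derivatives in every layer rather than taking these 1D values.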
  • 77
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉H-κ stacking is used routinely to infer crustal thickness and bulk-crustal Vp/Vs ratio from teleseismic receiver functions. The method assumes that the largest amplitude P-to-S conversions beneath the seismograph station are generated at the Moho. This is reasonable where the crust is simple and the Moho marks a relatively abrupt transition from crust to mantle, but not if the crust-mantle transition is gradational and/or complex intra-crustal structure exists. We demonstrate via synthetic seismogram analysis that H-κ results can be strongly dependent on the choice of stacking parameters (the relative weights assigned to the Moho P-to-S conversion and its subsequent reverberations, the choice of linear or phase-weighted stacking, input crustal P-wave velocity) and associated data parameters (receiver function frequency content and the sample of receiver functions analyzed). To address this parameter sensitivity issue, we develop an H-κ approach in which cluster analysis selects a final solution from 1000 individual H-κ results, each calculated using randomly selected receiver functions and H-κ input parameters. Ten quality control criteria, which variously assess the final numerical result, the receiver function dataset, and the extent to which the results are tightly clustered, are used to assess the reliability of H-κ stacking at a station. Analysis of synthetic datasets indicates H-κ works reliably when the Moho is sharp and intra-crustal structure is lacking, but is less successful when the Moho is gradational. Limiting the frequency content of receiver functions can improve the H-κ solutions in such settings, provided intra-crustal structure is simple. In cratonic Canada, India and Australia, H-κ solutions generally cluster tightly, indicative of simple crust and a sharp Moho. In contrast, on the Ethiopian plateau, where Paleogene flood basalts overlie marine sediments, H-κ results are unstable and erroneous. For stations that lie on thinner flood-basalt outcrops, and/or in regions where Blue Nile river incision has eroded through to the sediments below, limiting the receiver function frequency content to longer periods improves the H-κ solution and reveals a 6–10 km gradational Moho, readily interpreted as a lower-crustal intrusion layer at the base of a mafic (Vp/Vs = 1.77–1.87) crust. Moving off the flood-basalt province, H-κ results are reliable and the crust is thinner and more felsic (Vp/Vs = 1.70–1.77), indicating the lower crustal intrusion layer is confined to the region covered by flood-basaltic volcanism. Analysis of data from other tectonically-complex settings (e.g., Japan, Cyprus) shows H-κ stacking results should be treated cautiously. Only in regions of relatively simple crust can H-κ stacking analysis be considered truly reliable.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
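A compact sketch of conventional H-κ stacking, the method whose parameter sensitivity the abstract above examines. The delay-time formulas are the standard flat-layer expressions; the weights, velocities and grids below are illustrative choices, not the paper's values:

```python
import numpy as np

def hk_stack(rf, t, p, vp, H_grid, k_grid, w=(0.7, 0.2, 0.1)):
    """Grid search over crustal thickness H (km) and kappa = Vp/Vs.
    rf: receiver-function amplitudes at times t (s); p: ray parameter
    (s/km).  The PpSs+PsPs reverberation enters with negative polarity."""
    qp = np.sqrt(1.0 / vp ** 2 - p ** 2)          # P vertical slowness
    best, best_Hk = -np.inf, (None, None)
    for H in H_grid:
        for k in k_grid:
            qs = np.sqrt((k / vp) ** 2 - p ** 2)  # S slowness: 1/vs = kappa/vp
            tPs, tPpPs, tPpSs = H * (qs - qp), H * (qs + qp), 2.0 * H * qs
            s = (w[0] * np.interp(tPs, t, rf)
                 + w[1] * np.interp(tPpPs, t, rf)
                 - w[2] * np.interp(tPpSs, t, rf))
            if s > best:
                best, best_Hk = s, (H, k)
    return best_Hk
```

The paper's bootstrap-plus-clustering approach would repeat such a search many times with randomised receiver-function subsets and randomised weights, vp and frequency content, then cluster the resulting (H, κ) estimates.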
  • 78
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉The 280-km-long San Cristobal Trough (SCT), created by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, has hosted strike-slip earthquake sequences in 1993 and 2015. Both sequences, which likely represent a complete seismic cycle, began along the oldest section of the SCT—the portion farthest from the tear that has experienced the most cumulative displacement—and migrated to the younger sections closer to the tear. The SCT's abundant seismicity allows us to study transform boundary development—a process rarely observed along a single fault system—through observations of earthquake rupture properties. Using the spectral ratio method based on empirical Green's functions (EGFs), we calculate the corner frequencies of three Mw ∼7 2015 earthquakes and co-located smaller earthquakes. We utilize two different spectral ratio stacking methods and fit both Brune and Boatwright models to the stacked spectral ratios. Regardless of stacking method and spectral model, we find that the corner frequencies of the 2015 Mw ∼7 earthquakes decrease slightly with distance from the tear. Assuming a constant rupture velocity and an omega-square spectral model, this corner frequency decrease may be due to an increase in rupture length with distance from the tear. The spectrum of the 2015 earthquake farthest from the tear also deviates from the omega-square model, which may indicate rupture complexity. Stress drop estimates from the corner frequencies of the 2015 Mw ∼7 earthquakes range between 1 and 7 MPa, whereas stress drop estimates of their EGFs range from ∼0.05 to 10 MPa, with most values between 0.1 and 1 MPa. Independent evidence from a second moments analysis of the 2015 earthquake sequence also indicates a possible increase in rupture length with distance from the tear, confirming the results from the spectral ratio analysis. We also observe an increase in normalized centroid time-delay values, a first-order proxy for rupture behavior, with distance from the tear for the 2015 sequence. A similar trend for the 1993 sequence suggests that earthquake rupture varies systematically along the SCT. Since distance from the tear corresponds to cumulative fault displacement, these along-strike rupture variations may be due to a displacement-driven fault maturation process.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
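The corner-frequency-to-stress-drop step used in studies like the one above can be sketched with the standard Brune source model (the 2.34 constant and the numbers below are textbook assumptions for illustration, not values from this paper):

```python
import math

def brune_stress_drop(M0, fc, beta):
    """Stress drop (Pa) from seismic moment M0 (N*m) and corner frequency
    fc (Hz), assuming the Brune source radius r = 2.34 * beta / (2*pi*fc)
    with shear wavespeed beta (m/s), and delta_sigma = (7/16) * M0 / r^3."""
    r = 2.34 * beta / (2.0 * math.pi * fc)
    return 7.0 * M0 / (16.0 * r ** 3)

# an Mw ~7 event (M0 ~ 4.5e19 N*m) with fc = 0.06 Hz and beta = 3900 m/s
# gives roughly 1.4 MPa, inside the 1-7 MPa range reported above
```

Because stress drop scales with the cube of corner frequency, the slight corner-frequency decrease with distance from the tear translates into the rupture-dimension trend the authors infer.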
  • 79
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉Thickness of cover over crystalline basement is an important consideration for mineral exploration in covered regions. It can be estimated from a variety of geophysical data types using a variety of inference methods. A robust method for combining such estimates to map the cover-basement interface over a region of interest is needed. Due to the large uncertainties involved, these need to be probabilistic maps. Predominantly, interpolation methods are used for this purpose, but these are built on simplifying assumptions about the inputs which are often inappropriate. Bayesian estimate fusion is an alternative capable of addressing that issue by enabling more extensive use of domain knowledge about all inputs. This study is intended as a first step towards making Bayesian estimate fusion a practical tool for cover thickness uncertainty mapping. The main contribution is to identify the types of data assumptions that are important for this problem, to demonstrate their importance using synthetic tests, and to design a method that enables their use without introducing excessive tedium. We argue that interpolation methods like kriging often cannot achieve this goal and demonstrate that Markov chain Monte Carlo sampling can. This paper focuses on the development of statistical methodology and presents synthetic data tests designed to reflect realistic exploration scenarios on an abstract level. The intended application is the early stages of exploration, where some geophysical data are available while drill hole coverage is poor.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
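The Bayesian-fusion idea in the record above can be illustrated, in heavily simplified form, by a Metropolis sampler that fuses several noisy thickness estimates at a single location (Gaussian likelihoods, flat prior). This is a one-point stand-in for the full spatial problem, not the authors' method:

```python
import math, random

def metropolis_thickness(estimates, sigmas, n=20000, step=5.0, seed=1):
    """Minimal Metropolis sampler for one cover-thickness value given
    independent noisy estimates with Gaussian likelihoods."""
    random.seed(seed)
    logp = lambda z: -0.5 * sum(((z - e) / s) ** 2 for e, s in zip(estimates, sigmas))
    z = sum(estimates) / len(estimates)      # start at the plain average
    lp = logp(z)
    samples = []
    for _ in range(n):
        zp = z + random.gauss(0.0, step)     # random-walk proposal
        lpp = logp(zp)
        if math.log(random.random()) < lpp - lp:   # accept/reject
            z, lp = zp, lpp
        samples.append(z)
    return samples
```

The posterior histogram of the samples, rather than a single interpolated value, is what a probabilistic cover-thickness map would carry at each location.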
  • 80
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉The uneven distribution of earthquakes and stations in seismic tomography leads to slower convergence of nonlinear inversions and spatial bias in inversion results. Including dense regional arrays, such as USArray or Hi-Net, in global tomography causes severe convergence and spatial bias problems, against which conventional preconditioning schemes are ineffective. To save computational cost and reduce model bias, we propose a new strategy based on a geographical weighting of sources and receivers. Unlike approaches based on ray density or Voronoi tessellation, this method scales to large full-waveform inversion problems and avoids instabilities at the edges of dense receiver or source clusters. We validate our strategy using a 2D global waveform inversion test and show that the new weighting scheme leads to a nearly two-fold reduction in model error and much faster convergence relative to a conventionally-preconditioned inversion. We implement this geographical weighting strategy for global adjoint tomography.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
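One simple distance-based weighting of the kind described in the abstract above (the Gaussian kernel and unit-mean normalisation are assumptions for illustration):

```python
import math

def geo_weights(points, delta0):
    """Geographical weights: each station (or event) is down-weighted by
    the Gaussian-smoothed density of its neighbours, so an isolated
    station counts more than one inside a dense array.  `points` are
    (x, y) positions; delta0 sets the smoothing distance."""
    raw = []
    for xi, yi in points:
        density = sum(math.exp(-((xi - xj) ** 2 + (yi - yj) ** 2) / delta0 ** 2)
                      for xj, yj in points)
        raw.append(1.0 / density)
    norm = len(points) / sum(raw)            # make the weights average to 1
    return [w * norm for w in raw]
```

Unlike ray-density or Voronoi schemes, such a kernel weighting stays smooth at the edges of dense clusters, which is the stability property the abstract emphasises.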
  • 81
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉S-wave spectral amplitudes from 312 crustal earthquakes recorded at the Iranian National Broadband Seismic Network in the Alborz region between 2005 and 2017 are analysed in order to evaluate earthquake source parameters, path attenuation and site amplification functions using the nonparametric generalized inversion technique (GIT). We exploit a total of 1117 seismograms with ML 3–5.6 in the frequency range 0.3–20 Hz. The evaluated nonparametric attenuation functions decay uniformly with distance for the entire frequency range, and the estimated S-wave quality factor shows low Q values with relatively strong frequency dependence. We assume the omega-square source model to retrieve earthquake source parameters from the inverted source spectra. The obtained stress drops range from 0.02 to 16 MPa with a mean value of 1.1 MPa. Stress drop and radiated energy show fairly self-similar scaling with seismic moment over the available magnitude range; however, the magnitude range of this study is too narrow to draw a definite conclusion on source scaling characteristics. The obtained moment magnitude MW and the local magnitude ML are linearly correlated and approximately equivalent in the range of MW 3–4. For larger events, MW generally underestimates ML by about 0.1–0.5 magnitude units. The estimated site amplification functions for the horizontal component (GIT H) are nearly flat with no obvious predominant frequency peaks for most stations, as expected for the sites of permanent broadband seismic stations located on rock, though a few stations show amplification peaks from 1 to 8 Hz, with a maximum amplification of about a factor of 7 with respect to the reference site. The evaluated site responses for the vertical components present remarkable amplification or deamplification, leading to differences of the H/V amplitude levels in comparison with the GIT H amplification curves. The results of this study provide a valuable basis for predicting appropriate ground motions in the context of seismic hazard assessment.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉We introduce a new relative moment tensor (MT) inversion method for clusters of nearby earthquakes. The method extends previous work by introducing constraints from S-waves that do not require modal decomposition and by employing principal component analysis to produce robust estimates of excitation. At each receiver, P- and S-waves from each event are independently aligned and decomposed into principal components. P-wave constraints on MTs are obtained from a ratio of coefficients corresponding to the first principal component, equivalent to a relative amplitude. For S-waves we produce constraints on MTs involving three events, where one event is described as a linear combination of the other two, and coefficients are derived from the first two principal components. Non-linear optimization is applied to efficiently find best-fit tensile-earthquake and double-couple solutions for relative MT systems. Using synthetic data, we demonstrate the effectiveness of the P and S constraints both individually and in combination. We then apply the relative MT inversion to a set of 16 earthquakes from southern Alaska, at ∼125 km depth within the subducted Yakutat terrane. Most events are compatible with a stress tensor dominated by down-dip tension; however, we observe several pairs of earthquakes with nearly antiparallel slip, implying that the stress regime is heterogeneous and/or faults are extremely weak. The location of these events near the abrupt down-dip termination of seismicity and the low-velocity zone suggests that they are caused by weakening via grain-size and volume reduction associated with eclogitization of the lower-crustal gabbro layer.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
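The P-wave constraint in the abstract above, a relative amplitude taken as a ratio of first-principal-component coefficients, can be sketched with an SVD (a minimal illustration; the function name and synthetic data are not from the paper):

```python
import numpy as np

def pc1_coefficients(waveforms):
    """Project aligned waveforms from several co-located events (rows of
    `waveforms`) onto their first principal component; ratios of the
    returned coefficients act as relative amplitudes."""
    U, s, Vt = np.linalg.svd(np.asarray(waveforms), full_matrices=False)
    return U[:, 0] * s[0]
```

For well-aligned events sharing a common path and wavelet, the waveform matrix is close to rank one, so these coefficient ratios are robust relative amplitudes even in noise; the overall sign ambiguity of the SVD cancels in the ratios.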
  • 83
    Publication Date: 2019
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2019
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 85
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉The complete part of the earthquake frequency-magnitude distribution, above the completeness magnitude mc, is well described by the Gutenberg-Richter law. On the other hand, incomplete data do not follow any specific law, since the shape of the frequency-magnitude distribution below max(mc) is a function of mc heterogeneities that depend on the seismic network spatiotemporal configuration. This paper attempts to solve this problem by presenting an asymmetric Laplace mixture model, defined as the weighted sum of Laplace (or double exponential) distribution components of constant mc, where the inverse scale parameter of the exponential function is the detection parameter κ below mc and the Gutenberg-Richter β-value above mc. Using a variant of the expectation maximization algorithm, the mixture model confirms the ontology proposed by Mignan (2012, https://doi.org/10.1029/2012JB009347), which states that the shape of the earthquake frequency-magnitude distribution shifts from angular (in log-linear space) in a homogeneous space-time volume of constant mc to rounded in a heterogeneous volume corresponding to the union of smaller homogeneous volumes. The performance of the proposed mixture model is analysed, with encouraging results obtained in simulations and in 8 real earthquake catalogues that represent different seismic network spatial configurations. We find that k = κ/ln(10) ≈ 3 in most earthquake catalogues (compared to b = β/ln(10) ≈ 1), suggesting a common detection capability of different seismic networks. Although simpler algorithms may be preferred on pragmatic grounds to estimate mc and the b-value, other methods so far fail to model the angular distributions observed in homogeneous space-time volumes. Mixture modelling is a promising strategy to model the full earthquake magnitude range, hence potentially increasing seismicity data availability tenfold, since c. 90 per cent of earthquake catalogue events are below max(mc).〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
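A single component of the mixture model described above can be written down directly: an asymmetric Laplace density with detection slope κ below mc and Gutenberg-Richter slope β above. The normalisation constant follows from integrating the two exponential branches (the full model is a weighted sum of such components over different mc values):

```python
import math

def asym_laplace_pdf(m, mc, kappa, beta):
    """Magnitude density rising as exp(kappa*(m-mc)) below the
    completeness magnitude mc (detection) and decaying as
    exp(-beta*(m-mc)) above it (Gutenberg-Richter), normalised to 1."""
    C = kappa * beta / (kappa + beta)    # since C/kappa + C/beta = 1
    if m < mc:
        return C * math.exp(kappa * (m - mc))
    return C * math.exp(-beta * (m - mc))

# with k = kappa/ln(10) = 3 and b = beta/ln(10) = 1, the density is
# angular in log-linear space, peaking at mc
```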
  • 86
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉Full waveform inversion (FWI) is a nonlinear waveform matching procedure, which suffers from cycle skipping when the initial model is not kinematically accurate enough. To mitigate cycle skipping, wavefield reconstruction inversion (WRI) extends the inversion search space by computing wavefields with a relaxation of the wave equation in order to fit the data from the first iteration. The subsurface parameters are then updated by minimizing the source residuals generated by the relaxation. Capitalizing on the wave-equation bilinearity, performing wavefield reconstruction and parameter estimation in alternating mode decomposes WRI into two linear subproblems, which can be solved efficiently with the alternating-direction method of multipliers (ADMM), leading to the so-called iteratively refined wavefield reconstruction inversion (IR-WRI). Moreover, ADMM provides a suitable framework to implement bound constraints and different types of regularizations, and their mixture, in IR-WRI. Here, IR-WRI is extended to multiparameter reconstruction for VTI acoustic media. To achieve this goal, we first propose different forms of bilinear VTI acoustic wave equation. We develop more specifically IR-WRI for the one that relies on a parametrisation involving vertical wavespeed and Thomsen’s parameters δ and ε. With a toy numerical example, we first show that the radiation patterns of the virtual sources generate similar wavenumber filtering and parameter cross-talks in classical FWI and IR-WRI. Bound constraints and TV regularization in IR-WRI fully remove these undesired effects for an idealized piecewise constant target. We show, with a more realistic long-offset case study representative of the North Sea, that anisotropic IR-WRI successfully reconstructs the vertical wavespeed starting from a laterally homogeneous model and updates the long wavelengths of the starting ε model, while a smooth δ model is used as a passive background model. VTI acoustic IR-WRI can alternatively be performed with subsurface parametrisations involving stiffness or compliance coefficients, or normal moveout velocities and the η parameter (or horizontal velocity).〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2019
    Description: 〈span〉〈div〉Summary〈/div〉Seismic signal recognition can serve as a powerful auxiliary tool for analyzing and processing ever-larger volumes of seismic data. It can facilitate many subsequent procedures such as first-break picking, statics correction, denoising, signal detection, event tracking, structural interpretation, inversion and imaging. In this study, I propose an automatic technique of seismic signal recognition that takes advantage of unsupervised machine learning (ML). In the proposed technique, seismic signal recognition is cast as a problem of clustering data points: all seismic sampling points in the time domain are clustered into two clusters, i.e. signal or non-signal. The hierarchical clustering (HC) algorithm is used to group these sampling points. Four attributes, i.e. two short-term-average-to-long-term-average ratios (STA/LTAs), variance and envelope, are investigated in the clustering process. In addition, to evaluate the performance of seismic signal recognition quantitatively, I propose two new statistical indicators, namely the ratio between the total energies of the original and recognized signals (RTE) and the ratio between the average energies of the original and recognized signals (RAE). A large number of numerical experiments show that, when the signal is slightly corrupted by noise, the proposed technique performs very well, with recognition accuracy, precision and RTE of nearly 1 (i.e., 100%), recall greater than 0.8 and RAE of about 1–1.3. When the signal is moderately corrupted by noise, the proposed technique maintains recognition accuracy of about 0.9, precision of nearly 1, RTE of about 0.9, recall of around 0.6 and RAE of about 1.5. Applications of the proposed technique to real micro-seismic data induced by hydraulic fracturing and to reflection seismic data demonstrate its feasibility and encouraging prospects.〈/span〉
    Print ISSN: 0956-540X
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
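The workflow this abstract describes (per-sample attributes, two-cluster hierarchical grouping, envelope-based labeling) can be sketched as follows. This is a minimal illustration, not the author's implementation: the window lengths, the Ward linkage choice and the synthetic trace are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.cluster.hierarchy import linkage, fcluster

def moving_avg(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

def recognize_signal(trace, ns=5, nl=50):
    """Label each sample as signal (True) or non-signal (False) by
    hierarchically clustering four per-sample attributes."""
    a, eps = np.abs(trace), 1e-12
    stalta1 = moving_avg(a, ns) / (moving_avg(a, nl) + eps)          # STA/LTA, short windows
    stalta2 = moving_avg(a, 2 * ns) / (moving_avg(a, 2 * nl) + eps)  # STA/LTA, long windows
    var = moving_avg(trace ** 2, nl) - moving_avg(trace, nl) ** 2    # moving variance
    env = np.abs(hilbert(trace))                                     # envelope
    feats = np.column_stack([stalta1, stalta2, var, env])
    feats = (feats - feats.mean(0)) / (feats.std(0) + eps)           # standardize attributes
    labels = fcluster(linkage(feats, method="ward"), 2, criterion="maxclust")
    # call the cluster with the larger mean envelope "signal"
    sig = 1 if env[labels == 1].mean() > env[labels == 2].mean() else 2
    return labels == sig

# synthetic demo: weak noise plus one strong transient
rng = np.random.default_rng(0)
trace = 0.05 * rng.standard_normal(400)
i = np.arange(100, 200)
trace[i] += np.sin(2 * np.pi * 0.05 * (i - 100)) * np.hanning(100)
mask = recognize_signal(trace)
```

On this toy trace the burst samples cluster away from the background; the paper's RTE/RAE indicators would then be computed by comparing the energies inside and outside `mask`.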
  • 88
    Publication Date: 2019
    Description: Mantle plumes may play a major role in the transport of heat and mass through the Earth, but establishing their existence and structure using seismology has proven challenging and controversial. Previous studies have mainly focused on imaging plumes using waveform modelling and inversion (i.e. tomography). In this study we investigate the potential visibility of mantle plumes using array methods, and in particular whether we can detect seismic scattering from the plumes. By combining geodynamic modelling with mineral physics data we compute 'seismic' plumes whose shape and structure correspond to dynamically plausible thermochemical plumes. We then send seismic waves through these plume models in a full-waveform simulation to generate synthetic seismograms. Using velocity spectral analysis and slowness-backazimuth plots, we are unable to detect scattering. However, at longer dominant periods (25 s) we see several arrivals from outside the plane of the great-circle path that are consistent with an apparent bending of the wavefront around the plume conduit. At shorter periods (15 s) these arrivals are weaker and less obvious, consistent with the expected change in wave behaviour at higher frequencies. We also detect reflections off the iron-rich chemical pile that serves as the plume source in the D'' region, indicating that D'' reflections may not always be due to a phase transformation. We suggest that slowness-backazimuth analysis may be a useful tool for locating mantle plumes in real array datasets. However, it is important to analyse the data at different dominant periods since, depending on the width of the plume, there is probably an optimum frequency band in which the plume is most visible. Our results also show the importance of studying the incoming energy in all directions, so that any apparently out-of-plane arrivals can be correctly interpreted.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
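The slowness-backazimuth analysis used in this abstract rests on delay-and-sum beamforming over a horizontal slowness grid, which can be sketched as below. This is a generic toy, not the authors' code: the station geometry, sampling and Gaussian pulse are assumptions.

```python
import numpy as np

def beam_power(traces, coords, dt, sx, sy):
    """Mean power of the delay-and-sum beam for one horizontal
    slowness vector (sx, sy) in s/km; station coords in km."""
    m = traces.shape[1]
    t = np.arange(m) * dt
    beam = np.zeros(m)
    for tr, (x, y) in zip(traces, coords):
        # undo the plane-wave moveout sx*x + sy*y at this station
        beam += np.interp(t + sx * x + sy * y, t, tr)
    return np.mean((beam / len(coords)) ** 2)

# synthetic plane wave crossing a small 8-station array
rng = np.random.default_rng(1)
coords = rng.uniform(-5, 5, (8, 2))                   # station positions, km
dt = 0.05
t = np.arange(400) * dt
sx0, sy0 = 0.04, -0.03                                # true slowness, s/km
pulse = lambda tau: np.exp(-((tau - 8.0) / 0.5) ** 2)
traces = np.array([pulse(t - sx0 * x - sy0 * y) for x, y in coords])

# scan a slowness grid; the power maximum gives the apparent slowness,
# and the backazimuth follows from the direction of the slowness vector
# (sign conventions vary between packages)
grid = np.arange(-0.08, 0.081, 0.01)
P = np.array([[beam_power(traces, coords, dt, sx, sy) for sy in grid]
              for sx in grid])
i, j = np.unravel_index(P.argmax(), P.shape)
sx_est, sy_est = grid[i], grid[j]
```

An out-of-plane arrival of the kind the abstract describes would show up as a secondary power maximum at a slowness vector rotated away from the great-circle direction.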
  • 89
    Publication Date: 2019
    Description: We test the feasibility of GPS-based rapid centroid moment tensor (GPS CMT) methods for Taiwan, one of the most earthquake-prone areas in the world. In recent years Taiwan has become a leading developer of seismometer-based earthquake early warning systems, which have successfully been applied to several large events. The rapid determination of earthquake magnitude and focal mechanism, important for a number of rapid-response applications including tsunami warning, remains challenging because of the limitations of near-field inertial recordings. This instrumental issue can be solved by an entirely different observation system: a GPS network. Taiwan is well positioned to take advantage of GPS because it has developed a very dense network over the last decade. We therefore explore the suitability of GPS CMT inversion for Taiwan. We retrospectively investigate six moderate to large (Mw 6.0-7.0) earthquakes and propose a resolution test for our model, finding that the minimum resolvable earthquake magnitude of this system is ∼Mw 5.5 (at 5 km depth). Our tests also suggest that finite-fault complexity, often challenging for near-field methods, can be ignored under such good station coverage, so the method can provide fast and robust solutions for large earthquakes directly from the near field. Our findings help to understand and quantify how the proposed methodology could be implemented in real time and what it could contribute to the overall earthquake monitoring system.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 90
    Publication Date: 2019
    Description: The ice cap covering Antarctica has long limited our understanding of the continent-scale crustal structure because of its inaccessibility and the resulting logistical difficulties of geophysical fieldwork such as seismograph deployment. A high-spatial-resolution crustal model for Antarctica, where seismographs are sparsely distributed, is therefore of great scientific interest for this relatively little-studied continent. In this study, we utilize satellite gravity observations from the global gravity model EIGEN-6C4 to create an alternative crustal thickness model of Antarctica. The gravity data were corrected for sediments, topography and ice cover. Furthermore, the modeling accounts for the gravity effect of vertical deformation of the lithosphere under the ice load, as well as for the Earth's curvature. We inverted the corrected gravity data using the regularized Bott's inversion method in spherical approximation and constrained the results with seismic observations. The resulting model shows a thicker average crust in East Antarctica and a thinner one in West Antarctica. The thickest crust lies beneath the Gamburtsev Subglacial Mountains, with a Moho depth of over 40 km; thick crust is also evident along the Transantarctic Mountains and Dronning Maud Land. Comparisons with existing models show good correlation in gravity-constrained areas; differences appear in the sedimentary basins, where our crustal thickness lies closer to the seismic point observations. Overall, our crustal model improves on existing gravity-derived models.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
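The core of Bott's inversion scheme used in this abstract is an iterated infinite-slab update: the gravity residual is mapped straight into a depth correction. A planar toy version follows; the Gaussian smoothing standing in for depth attenuation of short wavelengths, the density contrast and the test geometry are assumptions, not the paper's spherical setup.

```python
import numpy as np

G_NEWTON = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
DRHO = 400.0                      # assumed crust-mantle density contrast, kg/m^3
C = 2 * np.pi * G_NEWTON * DRHO   # infinite-slab factor: gravity per metre of relief

def smooth(x, sigma=2.0):
    """Gaussian smoothing, a crude stand-in for the attenuation of
    short wavelengths with source depth (assumption of this toy)."""
    m = int(4 * sigma)
    t = np.arange(-m, m + 1)
    w = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, w / w.sum(), mode="same")

def forward(h):
    """Toy gravity effect (m/s^2) of Moho relief h (m)."""
    return C * smooth(h)

def bott_invert(g_obs, n_iter=20):
    """Bott's scheme: convert the residual to a depth update with the
    slab formula and iterate until the forward response fits."""
    h = np.zeros_like(g_obs)
    for _ in range(n_iter):
        h += (g_obs - forward(h)) / C
    return h

x = np.arange(100.0)
h_true = 5000.0 * np.exp(-((x - 50.0) / 10.0) ** 2)   # 5 km Moho upwarp
h_rec = bott_invert(forward(h_true))
```

The real method regularizes this update and works in spherical approximation, but the contraction idea (each pass removes most of the remaining smooth residual) is the same.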
  • 91
    Publication Date: 2019
    Description: Flexure and fracturing of the seafloor on the outer trench wall of subduction zones reflect bending of the lithosphere beyond its elastic limit. To investigate these inelastic processes, we have developed a fully non-linear inversion approach for estimating the bending moment, curvature and outer-trench-wall fracturing, using shipboard bathymetry and satellite-altimetry-derived gravity data as constraints. Bending moments and downward forces are imposed along curved trench axes and an iterative method is used to calculate the non-linear response for 26 sites in the circum-Pacific region with seafloor ages ranging from 15 to 148 Ma. We use standard thermal and yield-strength-envelope models to develop the non-linear moment versus curvature relationship. Two coefficients of friction, 0.6 and 0.3, are considered, and we find that the lower value provides a better overall fit to the data. The main result is that the lithosphere is nearly moment-saturated at the trench axis. The effective elastic thickness of the plate on the outer trench slope is at least three times smaller than the elastic thickness of the plate before bending at the outer rise, in agreement with previous studies. The average seafloor depth of the unbent plate at these 26 sites matches the Parsons & Sclater (1977) depth versus age model beyond 120 Ma. We also use the model to predict the offsets of normal faults on the outer trench walls and compare these with the horst and graben structures observed in multibeam surveys. The model with the lower coefficient of friction fits the fault-offset data close to the trench axis, but predicts significant fracturing of the lithosphere between 75 and 150 km from the trench axis, where no fracturing is observed. To reconcile these observations, we impose a thermoelastic pre-stress in the lithosphere (Wessel 1992) prior to subduction; this pre-stress delays the onset of fracturing, in better agreement with the data.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
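The elastic baseline against which such inelastic bending is measured is thin-plate flexure, D w'''' + Δρ g w = q, whose line-load solution w(x) ∝ e^(-x/α)(cos x/α + sin x/α) has its first node at 3πα/4 and the outer-rise forebulge crest at πα from the load. A finite-difference sketch follows; the parameter values (Te = 30 km, Δρ = 2300 kg/m³, the load magnitude) are illustrative assumptions, and this linear continuous plate is the textbook model, not the paper's moment-saturated one.

```python
import numpy as np

# illustrative (assumed) parameters, not the paper's values
E, nu, Te = 70e9, 0.25, 30e3                 # Young's modulus (Pa), Poisson ratio, Te (m)
D = E * Te**3 / (12 * (1 - nu**2))           # flexural rigidity
k = 2300.0 * 9.81                            # (rho_mantle - rho_water) * g restoring term
alpha = (4 * D / k) ** 0.25                  # flexural parameter (~74 km here)

# finite-difference solve of D w'''' + k w = q on a long profile
n = 801
L = 10 * alpha
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(2, n - 2):
    A[i, i - 2:i + 3] = D * np.array([1.0, -4.0, 6.0, -4.0, 1.0]) / dx**4
    A[i, i] += k
for i in (0, 1, n - 2, n - 1):               # far field: deflection pinned to zero
    A[i, i] = 1.0
b[n // 2] = -1e12 / dx                       # downward line load at the trench axis
w = np.linalg.solve(A, b)                    # response is linear in the load

# diagnostics: first zero crossing (node) and forebulge crest seaward of the load
c = n // 2
k0 = np.argmax(w[c:] > 0)                    # first upward deflection
x_node, x_bulge = x[c + k0], x[c + np.argmax(w[c:])]
```

The paper's point is precisely that near the trench the real plate departs from this profile: moment saturation reduces the effective Te well below its outer-rise value.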
  • 92
    Publication Date: 2019
    Description: We applied a non-linear teleseismic tomography algorithm to explore the 3D structure of the upper mantle beneath the Arabia-Eurasia continental collision zone in western Iran. An unprecedented dataset of 32738 teleseismic P-wave relative arrival-time residuals from 129 permanent and temporary seismic stations was inverted for seismic imaging. The images suggest a thick high-velocity lithospheric mantle beneath the Zagros. Another high-velocity domain, observed below ∼300 km depth and not connected to the Zagros lithosphere, is interpreted as a slab segment in the upper mantle beneath the collision zone. The low-velocity anomalies beneath Central Iran and the Alborz, consistent with a weak upper mantle, may result from upwelling asthenosphere and partial melting of the subducted lithosphere.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 93
    Publication Date: 2019
    Description: Non-uniqueness in the geophysical inverse problem is well recognized, and so too is the ability to obtain solutions of different character by altering the form of the regularization function. Of particular note is the use of ℓp-norms with p ∈ [0, 2], which give rise to sparse or smooth models. Most algorithms are designed to implement a single ℓp-norm for the entire model domain. This is not adequate when the fundamental character of the model changes throughout the volume of interest. In such cases we require a generalized regularization function in which each sub-volume of the model domain has its own penalties on smallness and roughness and its own suite of ℓp parameters. Solving the inverse problem using mixed ℓp-norms in the regularization (especially for p < 1) is computationally challenging. We use the Lawson formulation of the ℓp-norm and solve the optimization problem with iteratively reweighted least squares. The algorithm has two stages: we first solve the ℓ2-norm problem and then switch to the desired suite of ℓp-norms, with one value of p for each term in the objective function. To handle the large changes in numerical values of the regularization function when the p-values are changed, and to ensure that each component of the regularization contributes to the final solution, we successively rescale the gradients in our Gauss-Newton solution. An indicator function allows us to evaluate our success in finding a solution in which the components of the objective function have been equally influential. We use our algorithm to generate an ensemble of solutions with mixed ℓp-norms. This illuminates some of the non-uniqueness in the inverse problem and helps prevent the over-interpretation that can occur when only one solution is available. In addition, we use this ensemble to estimate the suite of p-values to be used in a final inversion. First, the most common features of the ensemble are extracted using principal component analysis and edge-detection procedures; this provides a reference model. A correlation of each ensemble member with the reference model, carried out in a windowed domain, then yields a set of p-values for each model cell. The efficacy of our technique is illustrated on a synthetic 2D cross-well example. We then apply it to the field example that motivated this research: the 3D inversion of magnetic data at a kimberlite site in Canada. Since the final regularization terms have different sets of p-values in different regions of model space, we are able to recover compact regions associated with the kimberlite intrusions, continuous linear features with sharp edges associated with dykes, and a background that is relatively smooth. The result has a geologic character that would not have been achievable without the use of spatially variable mixed norms.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
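The Lawson-weighted IRLS idea at the heart of this abstract can be sketched for a small linear problem. This is a generic toy, not the authors' algorithm: there is no roughness term, no gradient rescaling, and the two-stage p-switch is reduced to an ℓ2 warm start; the weights w_i = (m_i² + ε²)^(p/2-1) are the Lawson reweighting.

```python
import numpy as np

def irls(G, d, p, beta=1e-2, eps=1e-6, n_iter=30):
    """Minimize ||G m - d||^2 + beta * sum_i |m_i|^p via Lawson-weighted
    IRLS; p may be a scalar or one value per model cell."""
    n = G.shape[1]
    m = np.linalg.solve(G.T @ G + beta * np.eye(n), G.T @ d)  # stage 1: l2 warm start
    p = np.broadcast_to(np.asarray(p, dtype=float), (n,))
    for _ in range(n_iter):
        w = (m**2 + eps**2) ** (p / 2 - 1)                    # Lawson weights
        m = np.linalg.solve(G.T @ G + beta * np.diag(w), G.T @ d)
    return m

# toy underdetermined problem: a 2-spike model seen through 30 random rows
rng = np.random.default_rng(3)
G = rng.standard_normal((30, 50)) / np.sqrt(30)
m_true = np.zeros(50)
m_true[10], m_true[35] = 1.0, -0.7
d = G @ m_true
m_sparse = irls(G, d, p=1.0)    # sparsity-promoting
m_smooth = irls(G, d, p=2.0)    # conventional smallest-model answer
```

Passing an array for `p` (e.g. 1.0 in one sub-volume, 2.0 in another) gives the spatially variable mixed norms the paper advocates.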
  • 94
    Publication Date: 2019
    Description: The engines of surface deformation in the Anatolia-Aegean region are a matter of debate, including the origin of the high elevations of the Anatolian plateau. Recent publications based on geological and thermo-mechanical modelling emphasize the role of dynamic topography in the plateau uplift. However, quantitative estimates of the contribution of dynamic topography are affected by large uncertainties due to insufficient knowledge of the crustal structure, in particular crustal thickness and density. To reduce these uncertainties, we provide a new, accurate crustal thickness map of the Anatolia-Aegean domain computed from a large volume of broadband seismic data. In addition, we display high-resolution seismic sections of the internal structure of the crust in Western and Central Anatolia. Density contrasts are derived from the same seismic data set and the Bouguer gravity anomaly computed from the EGM2008 model. Our crustal thickness model is highly correlated with the topography, suggesting that the Anatolian plateau is close to isostatic equilibrium. The average density difference between crust and upper mantle computed from our crustal model and the Bouguer gravity anomaly, ∼0.315 × 10³ kg m⁻³, is low compared to the global average. The ratio of surface elevation to crustal thickness, 1/9.4, is also lower than average, which likewise indicates a low-density crust. Differences between isostatic and observed topography are overall small (< 500 m). The east-to-west gradients of crustal thickness and topography are nearly constant between the Taurides and Pontides, at the southern and northern borders of Anatolia. This constant crustal thickness gradient may indicate a low-viscosity lower crust, supported by the thin mantle lithosphere evidenced by seismic tomography beneath the Anatolian plateau. We propose that viscous flow in the lower crust has smoothed out the lateral changes in crustal structure expected for such a heterogeneous collage of continental fragments. This flow may originate from gravitational potential energy differences between Eastern Anatolia (thick crust, high elevations) and the Aegean Sea (thin crust, low elevations), suggesting that gravity plays an integral part in the westward escape of Anatolia.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
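The two numbers quoted in this abstract, the ∼315 kg m⁻³ crust-mantle contrast and the 1/9.4 elevation-to-thickness ratio, are mutually consistent under simple Airy isostasy, where a column of elevation h is supported by a crustal root r with ρ_c h = Δρ r, so h/(h + r) = Δρ/(Δρ + ρ_c). A two-line check (the Airy assumption is ours, though the abstract argues the plateau is near isostatic equilibrium):

```python
drho = 315.0                # quoted crust-mantle density contrast, kg/m^3
ratio = 1 / 9.4             # quoted elevation / crustal-thickness ratio
# Airy balance: rho_c * h = drho * r  =>  rho_c = drho * (1/ratio - 1)
rho_c_implied = drho * (1 / ratio - 1)   # = 2646.0 kg/m^3
```

The implied mean crustal density, ≈2646 kg m⁻³, is a plausible value, so the two quoted results hang together.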
  • 95
    Publication Date: 2019
    Description: We found that SHdiff phases generated by earthquakes in the Fiji-Tonga region and recorded in India are accompanied by secondary pulses. We interpreted them as a consequence of multipathing of S waves caused by the Pacific Large Low-Shear-Velocity Province (LLSVP). We analyzed the differential travel times between SHdiff and the secondary pulse, together with the absolute SHdiff arrival times, to constrain the thickness and velocity perturbations at the western end of the Pacific LLSVP. Our preferred model shows a lateral variation in the thickness of the LLSVP: the southern part reveals a thicker (300 km) low-velocity region than the northern part (200 km), while the velocity perturbations appear to be comparable (−1.5%). The results are consistent with a scenario in which the LLSVP is a chemically distinct pile with significant surface topography.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 96
    Publication Date: 2019
    Description: The primary microseism is the less-studied seismic background vibration of the Earth. Evidence points to sources generated by ocean gravity waves coupling with seafloor topography; such sources should therefore lie at water depths smaller than the ocean-wave wavelength. Using a state-of-the-art ocean wave model, we carry out the first global-scale seismic modeling of the vertical-component power spectral density of primary microseisms. Our modeling allows us to infer that the observed weak seasonality of primary microseisms in the southern hemisphere corresponds to a weak local seasonality of the sources. Moreover, a systematic analysis of the source regions that contribute most to each station reveals that stations on both the east and west sides of the North Atlantic Ocean are sensitive to frequency-dependent source regions. At low frequency (e.g., 0.05 Hz), the dominant source regions can be located thousands of kilometres from the stations. This observation suggests that identifying the source regions of primary microseisms with the closest coasts can be misleading.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 97
    Publication Date: 2019
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 98
    Publication Date: 2019
    Description: Over the past few decades, seismic studies have revealed complex structural anomalies at various scales in the Earth's deep interior, such as large low-shear-velocity provinces (LLSVPs) and ultra-low velocity zones (ULVZs) in the lowermost mantle, and small-scale scatterers in the mid-mantle. These structures, which are critical for a better understanding of the geodynamics and evolution of the deep Earth, need to be further resolved by high-resolution imaging techniques. The spectral-element method (SEM) can be used to accurately simulate seismic wave propagation in heterogeneous Earth models, and its application in full-waveform inversion (FWI) provides a promising high-resolution, high-fidelity imaging technique. However, it can be computationally prohibitive when used to model small-scale structures in the deep Earth from high-frequency seismic waves. The heavy computational cost can be circumvented by hybrid methods, which restrict the main SEM computation to a small target region (e.g., above the core-mantle boundary) encompassing possible 2D/3D anomalies, and apply efficient analytical or numerical methods to calculate the wavefield for the 1D background model. Forward-modeling tools based on hybrid methods can then be used in the so-called 'box tomography' approach to resolve fine structures in the deep Earth. In this study, we outline the theory of a hybrid method for modeling small-scale structures in the deep Earth and present its implementation with SEM solvers in a three-step workflow. First, the wavefield generated by the source is computed for the 1D background model, with traction and velocity saved on the virtual boundary of the target region. These are then used as boundary inputs to simulate the wavefield within the target region, with absorbing boundary conditions applied in the SEM. In the final step, the total wavefield at the receivers is reconstructed from the total wavefield on the virtual boundary computed in the previous step. As a proof of concept, we demonstrate the workflow using a 2D SEM solver. Examples applied to a coupled fluid-solid model show that the workflow accurately recovers the scattered waves at the surface. Furthermore, we benchmark the hybrid method on a realistic heterogeneous Earth model built from AK135-F and show how teleseismic scattered waves can be used to model deep Earth structures. By documenting the theory and SEM implementation of the hybrid method, our study lays the foundation for future two-way coupling of 3D SEM solvers with other efficient analytical or numerical 1D solvers.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
  • 99
    Publication Date: 2019
    Description: The generalized standard-linear-solid model, also called the Zener model, is widely used in viscoacoustic/viscoelastic wavefield forward and inverse modeling, because its wave equations can be written in differential form and solved efficiently by time-domain numerical methods such as the finite-difference and spectral-element methods. For this model, however, two different expressions for the relaxation function (or complex modulus) appear in the literature, a persistent source of confusion; moreover, the time- and frequency-domain versions of the wave equations for the generalized standard-linear-solid model are scattered throughout the literature. Here, we revisit the generalized standard-linear-solid model and seek to resolve the confusion concerning the expression for the relaxation function (or modulus). We present a unified approach to deriving the viscoacoustic wave equations, starting separately from the time- and frequency-domain formulations to obtain two sets of viscoacoustic wave equations. All of these wave equations are expressed in a simple and compact form, and the two sets are equivalent to each other. The proposed derivation can be extended to obtain wave equations for other dissipative media.
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
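One of the two modulus conventions this abstract refers to can be written down for a single Zener element: in the relaxed-modulus form, M(ω) = M_R (1 + iωτ_ε)/(1 + iωτ_σ), with Q = Re M / Im M minimized at ω = 1/√(τ_ε τ_σ). A short numerical check (the relaxation times below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def zener_modulus(w, m_r, tau_e, tau_s):
    """Complex modulus of one Zener element, relaxed-modulus convention:
    M(w) = M_R * (1 + i w tau_eps) / (1 + i w tau_sig)."""
    return m_r * (1 + 1j * w * tau_e) / (1 + 1j * w * tau_s)

tau_s = 0.010                        # stress relaxation time, s (assumed)
tau_e = 0.012                        # strain relaxation time, s; tau_e > tau_s dissipates
w = np.logspace(0, 4, 2000)          # angular frequency, rad/s
M = zener_modulus(w, 1.0, tau_e, tau_s)
Q = M.real / M.imag                  # quality factor
w_min = w[np.argmin(Q)]              # should sit at 1/sqrt(tau_e * tau_s)
```

The other convention in the literature parameterizes the same element through the unrelaxed modulus and modulus defects; reconciling the two is precisely the bookkeeping the paper addresses.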
  • 100
    Publication Date: 2019
    Description: We provide a two-stage approach to extract spectral induced polarization (SIP) information from time-domain IP data. In the first stage we invert DC data to recover the background conductivity. In the second, we solve a linear inverse problem, inverting all time channels simultaneously to recover the IP parameters. The IP decay curves are represented by a stretched exponential (SE) rather than the traditional Cole-Cole model, and we find that defining the parameters in terms of their logarithmic values is advantageous. To demonstrate the capability of our simultaneous SIP inversion we use synthetic data simulating a porphyry mineral deposit. The challenge is to image a mineral body hosted within an alteration halo that has the same chargeability but a different time constant. For a 2D problem, we were able to distinguish the body using our simultaneous inversion, but not with a sequential (conventional) SIP inversion approach. For the 3D problem we recovered 3D distributions of the SIP parameters and used them to construct a 3D rock model with four rock units, of which three chargeable units were distinguished: the compact mineralization zone, with its large time constant, was separated from the circular alteration halo with its small time constant. Finally, to promote the use of the SIP technique and further development of SIP inversion, all examples presented in this paper are available in our open-source resources (https://github.com/simpeg-research/kang-2018-spectral-inducedpolarization).
    Print ISSN: 2051-1965
    Electronic ISSN: 1365-246X
    Topics: Geosciences
    Published by Oxford University Press on behalf of The Deutsche Geophysikalische Gesellschaft (DGG) and the Royal Astronomical Society (RAS).
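The stretched-exponential decay and log-parameterization described in this abstract can be illustrated on a single decay curve. This sketch uses a generic non-linear fit in place of the authors' linearized simultaneous inversion, and the time channels, noise level and starting model are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def se_decay(t, log_eta0, log_tau, c):
    """Stretched-exponential IP decay with log-parameterized amplitude and
    time constant (the abstract notes log values are advantageous)."""
    return np.exp(log_eta0) * np.exp(-(t / np.exp(log_tau)) ** c)

t = np.linspace(0.01, 2.0, 50)       # time channels, s (assumed)
d_clean = se_decay(t, np.log(0.1), np.log(0.5), 0.7)
rng = np.random.default_rng(7)
d = d_clean * (1 + 0.01 * rng.standard_normal(t.size))   # 1% multiplicative noise

# generic curve fit standing in for the paper's inversion machinery
popt, _ = curve_fit(se_decay, t, d, p0=[np.log(0.05), np.log(1.0), 0.5])
```

The paper's 2D/3D point is that two units with the same chargeability (log_eta0) but different time constants (log_tau) separate cleanly only when all time channels are inverted together.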