
TREATISE ON GEOPHYSICS CHAPTER 6: SLIP INVERSION S. IDE

Treatise on Geophysics

Volume 2. Earthquake Seismology

6. Slip Inversion

Dr. Satoshi Ide

Department of Earth and Planetary Science, University of Tokyo

7-3-1, Bunkyo, Tokyo, 113-0033, JAPAN

ide@eps.s.u-tokyo.ac.jp

Slip Inversion

Satoshi Ide

Department of Earth and Planetary Science, University of Tokyo

Outline

Abstract

6.1: INTRODUCTION

6.2: CONSTRUCTION OF SLIP INVERSION PROBLEM

6.2.1 Outline

6.2.2 Data Preparation

6.2.3 Setting Fault Models

6.2.3.1 Parameterization of slip distribution

6.2.3.2 Example of linear expression: multi-time-window method

6.2.3.3 Example of nonlinear expression

6.2.4 Calculation of synthetic data

6.3 SOLVING LEAST SQUARES PROBLEM

6.3.1 Best Estimate

6.3.2 Constraints and Regularization

6.3.3 Comparison in the Frequency Domain

6.4 EXAMPLE OF SLIP MODEL: THE 1999 CHI-CHI EARTHQUAKE

6.5 EXTENDED STUDIES BASED ON SLIP MODELS

6.5.1 Characteristics of Slip Models

6.5.2 Implication of Slip Models for Fault Dynamics

6.5.3 Dynamic Modeling and Slip Models

6.5.4 Scaling of Earthquake Heterogeneity

6.6 DISCUSSION AND CONCLUSION

Acknowledgements

References

Tables

Figures

KEYWORDS: physics of earthquakes, rupture process, slip inversion, seismograms, strong motion, fault model, slip distribution, Green’s function, regularization, heterogeneity, scaling, inverse problem, Bayesian modeling, dynamic modeling

Abstract

Slip inversion is a widely used analysis method that determines the spatial and temporal distribution of fault slip, namely a slip model, from seismic and/or geodetic data. This chapter reviews the history, formulation, application, and extension of slip inversion. The constituents of a slip inversion problem are data preparation, model parameterization, and calculation of synthetic data. For each of these elements, we introduce typical treatments and provide complete mathematical formulations. Frequently used data are far-field broadband seismograms, near-field strong-motion data, and various geodetic data. Both linear and nonlinear parameterizations can represent slip models. While Green’s functions for layered structures have been used for synthetic wave calculation, 3D Green’s functions and empirical Green’s functions are adequate for more complex structures. Different combinations of these constituents can lead to significant differences between slip models of the same event. We then discuss issues associated with the inversion method. Various optimization schemes are applicable for finding the best estimates of parameters and their uncertainties. An inversion problem sometimes requires additional smoothing constraints for regularization because it tends to be partly underdetermined; the weight of these constraints can be determined objectively using Bayesian modeling. Another topic related to the method is the frequency characteristics of a slip distribution and slip inversion in the frequency domain. As an example of a well-studied earthquake, we compare various slip models of the 1999 Chi-Chi, Taiwan, earthquake published by different research groups and find common characteristics among the models. We also review derivative studies based on slip models, which reveal features shared by many events, such as slip pulses, complementary distributions of aftershocks and large slip, and rupture directivity.
Slip models can provide clues, directly or indirectly, to the governing laws and properties of dynamic earthquake rupture. Attempts to scale the complexity of earthquake ruptures have just started and will be important both for understanding the physics of earthquakes and for reliable strong-motion prediction.

6.1: INTRODUCTION

An earthquake source is a dynamic shear rupture, including fracture and frictional slip, on fault planes. To further study the physics of earthquake rupture, we have to know what occurred during an earthquake by resolving the spatial and temporal behavior of the rupture. Detailed information on the rupture process is also useful for realistic simulations of strong ground motion from a complex source. The rupture process is usually represented by a fault slip distribution using a parametric model, which is called a slip model, a heterogeneous slip model, or a finite fault model. Since slip models only describe the rupture history without specific reference to the underlying rupture physics, they are also referred to as kinematic models. The method of objectively constraining a slip model with data, usually seismic and/or geodetic data, is slip inversion, which is the central topic of this chapter. The data are usually contaminated with background noise, and our knowledge of earthquake locations and underground structure is not sufficient. To overcome these difficulties, various techniques of slip inversion have been developed, with many applications over recent decades. We review the history, typical formulation, application examples, and extensions of slip inversion, paying attention to the limitations of the method.

First we review the history of the development of slip inversion. Modern seismology began in the 1960s after the establishment of a basic picture of seismic sources as shear slip on fault planes. Revealing the spatial distribution of the slip soon became a major topic of earthquake seismology. The first earthquake whose spatial extent was quantitatively discussed based on seismograms was the 1960 Chilean earthquake (Mw 9.5), the largest event of the 20th century. Studies of surface waves using long-period seismograms suggested that this earthquake’s rupture propagated 750-1000 km to the south at a velocity a little slower than the S-wave velocity (Benioff et al. 1961; Press et al. 1961).

In the 1960s, macroscopic seismic source models described by a handful of parameters were well studied. The most famous macroscopic model that includes simple rupture propagation is Haskell’s model (Haskell 1964, 1969; Aki and Richards 2002). In this model, a line dislocation of constant amount and constant duration (rise time) propagates unidirectionally on a rectangular plane at a constant velocity. The 1966 Parkfield earthquake (M 6.0) was an example that was successfully explained by a propagating dislocation (Aki 1968). Haskell’s model and similar macroscopic models were used to explain observed records from many large earthquakes during the 1960s and 1970s. Based on the results for earthquakes larger than magnitude 6, Kanamori and Anderson (1975) derived scaling relations for macroscopic earthquake ruptures. In these scaling relations, macroscopic parameters such as the fault length, the fault width, and the average slip satisfy a geometrical similarity, indicating that the average stress drop is almost constant. This is the standard macroscopic image of the earthquake source established in the 1970s.
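As a minimal numerical sketch of Haskell’s model, the following Python snippet builds the far-field source time function as the convolution of two unit-area boxcars, one for the rise time and one for the rupture duration L/v. The fault length, rupture velocity, and rise time are arbitrary illustrative values, and directivity is ignored for simplicity.

```python
import numpy as np

def boxcar(t, width):
    """Unit-area boxcar of given width, sampled at times t."""
    return np.where((t >= 0) & (t < width), 1.0 / width, 0.0)

# Hypothetical Haskell-type source: fault length L, rupture velocity v,
# rise time tau.  The far-field source time function is the convolution
# of two boxcars: one of width tau (slip rise) and one of width L/v
# (rupture propagation), giving the classic trapezoidal pulse.
L_km, v_kms, tau_s = 30.0, 2.5, 1.0
dt = 0.01
t = np.arange(0.0, 20.0, dt)
stf = np.convolve(boxcar(t, tau_s), boxcar(t, L_km / v_kms),
                  mode="full")[: t.size] * dt

# The pulse duration is tau + L/v, and the area under the pulse
# (the seismic moment, normalized here to 1) is conserved because
# each boxcar has unit area.
area = stf.sum() * dt
```

The resulting trapezoid has duration tau + L/v = 13 s in this example; in the macroscopic studies cited above, such pulse shapes were fit to observed far-field records.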

However, from the very beginning of the history of seismic observation, it has also been known that an earthquake is not a simple rupture but consists of several distinct shocks. Quantitative evidence of such complexity was clearly given by the observation of far-field body waves. Far-field records of some large earthquakes contain multiple pulses that are not explained by the effect of the underground structure. Such events are called multiple shocks. One example is the 1976 Guatemala earthquake (Mw 7.5) studied by Kanamori and Stewart (1978) and Kikuchi and Kanamori (1982, 1991). This earthquake consisted of 10 discrete ruptures that radiated pulse-like body waves. Although not all earthquakes are that complex, some degree of complexity is visible in any large earthquake.

From the late 1970s, seismologists started developing systematic procedures to analyze the complexity of earthquake sources. Trifunac (1974) constructed and solved the first slip inversion problem, objectively determining seismic slip on an assumed fault plane for the 1971 San Fernando earthquake. He divided the assumed fault plane into a number of rectangular subfaults and determined the amount of slip on each subfault using the least-squares method. Following the development of inversion theory in other fields, such as statistics and information engineering, various systematic procedures have been developed to reveal the heterogeneity of seismic sources. In 1982, two important papers on inversion methods were published. One is the subevent deconvolution method proposed by Kikuchi and Kanamori (1982). They considered seismic sources as a sequence of spatially distributed point sources and identified the time and moment of these sources iteratively using waveform correlations of far-field body waves. This method and its extended versions were applied to many earthquakes (Kikuchi and Kanamori 1986, 1991, 1994). The other inversion method, proposed by Olson and Apsel (1982), determined the spatial and temporal slip distribution on a fault plane using successive time windows propagating at a constant velocity. Hartzell and Heaton (1983) also proposed a similar method. These methods are referred to as the multi-time-window method and have been frequently applied to many earthquakes since the 1979 Imperial Valley earthquake (Mw 6.4). We review this method more thoroughly in Section 6.2.
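The multi-time-window idea can be sketched numerically as follows: the slip rate of each subfault is represented as a sum of overlapping basis functions (triangles here), each delayed by a fixed step after the assumed rupture front passes. This Python fragment is only illustrative; the rupture time, window step, widths, and coefficients are hypothetical values, and in a real inversion the coefficients are the unknowns to be solved for.

```python
import numpy as np

def triangle(t, width):
    """Unit-area isosceles triangle starting at t = 0."""
    h = width / 2.0
    return np.where(t < 0, 0.0,
           np.where(t < h, t / h**2,
           np.where(t < width, (width - t) / h**2, 0.0)))

dt = 0.01
t = np.arange(0.0, 10.0, dt)
t_rupture = 2.0            # assumed rupture-front arrival at this subfault
win_step, win_width = 0.5, 1.0
coeffs = [0.8, 0.5, 0.2]   # slip in each time window (inversion unknowns)

# Slip rate = sum of unit-area triangles, each scaled by its coefficient
# and delayed by one window step relative to the previous one.
slip_rate = sum(c * triangle(t - t_rupture - k * win_step, win_width)
                for k, c in enumerate(coeffs))
total_slip = slip_rate.sum() * dt   # approximately the sum of coefficients
```

Because each basis function has unit area, the total slip on the subfault is simply the sum of the window coefficients, which keeps the parameterization linear.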

In the history of slip inversion, a small number of large earthquakes have played significant roles by providing new kinds of data, enabling the development of special treatments, and increasing our knowledge of earthquake physics. Very near-field records of the 1966 Parkfield earthquake (M 6.0), at 80 m from the surface fault trace, confirmed the image of the seismic source as a propagating dislocation (Aki 1968). The 1971 San Fernando earthquake (Mw 6.7) provided more than 250 near-field strong-motion records and enabled the first slip inversion from the data at five stations within the source area, as described above (Trifunac 1974). The 1979 Imperial Valley earthquake (Mw 6.4) is the first event whose slip models were published by different groups (Olson and Apsel 1982; Hartzell and Heaton 1983; Archuleta 1984). In the 1990s, geodetic measurements using satellites became popular, and these data were introduced into joint inversions as constraints on the final slip. The 1992 Landers earthquake (Mw 7.2), the 1994 Northridge earthquake (Mw 6.7), and the 1995 Hyogo-Ken Nanbu (Kobe) earthquake (Mw 6.9) are well-studied events using both seismic and geodetic data. The serious damage caused by the Kobe earthquake stimulated the improvement of nationwide observational systems in Japan, namely dense networks of high-sensitivity and strong-motion seismometers and GPS, which enabled detailed studies of smaller earthquakes. The 1999 Chi-Chi, Taiwan, earthquake (Mw 7.6) is so far the best recorded and best resolved earthquake, owing to the dense strong-motion network in Taiwan. The first M9 earthquake of the digital seismogram era, the 2004 Sumatra earthquake (Mw 9.3), was also analyzed for slip distribution. A dense far-field seismometer network can reveal the whole rupture process of such a great earthquake even without near-field strong-motion records (Krüger and Ohrnberger 2005; Ishii et al. 2005).

In recent years, slip inversion has become a kind of routine analysis. Rapid solutions are published via the Internet by several groups immediately after large earthquakes, usually within one or a few days. Since distributed computer programs for slip inversion exist, there is a possibility that they are applied without careful investigation. Sometimes the assumptions made to obtain solutions are not explicitly written in research papers. If there are enough data, the relative significance of the assumptions is low, but the data are often limited, and the assumptions can be essential for the details of the solution.

In the following sections, we review general treatments and assumptions in slip inversion, decomposing the problem into basic elements: preparation of data, model parameterization, and calculation of synthetic data from the model. All of these are essential to construct even a forward problem that simply compares data and model predictions. In Section 6.2 we introduce various examples for each element. Section 6.2.2 reviews the availability of seismic waveform data and several geodetic measurements. The most popular seismic waveform data are local strong-motion records and global broadband records. In Section 6.2.3 we summarize the representation of a slip distribution by a finite number of model parameters, which constitute the model to be determined by slip inversion and are generally classified into linear or nonlinear types. Synthetic data are calculated using Green’s functions, which are the displacements at stations due to an impulsive force and connect data and model parameters. As we will discuss in Section 6.2.4, they are calculated theoretically based on knowledge of the underground structure, or modified empirically from the records of small events. Section 6.3 explains how to solve the problem based on inversion theory. After a short review of the ordinary solution of the least-squares problem in Section 6.3.1, Bayesian modeling for underdetermined problems is reviewed in Section 6.3.2. While most slip inversion problems are solved in the time domain, information in the frequency domain is sometimes also important, although there have been only a few studies in the frequency domain; we discuss these studies in the final subsection of Section 6.3. Section 6.4 shows examples of slip inversion by comparing slip models of the Chi-Chi earthquake published by different groups.
Section 6.5 reviews various derivative studies based on slip models: finding their characteristics, dynamic implications and modeling, and the scaling problem. Finally, Section 6.6 summarizes this chapter, discussing current problems and future prospects of slip inversion.

There have been good review articles about slip inversion. The study of the 1986 North Palm Springs earthquake by Hartzell (1989) discussed the merits and demerits of different settings, comparing teleseismic and strong-motion data, linear and nonlinear models, and empirical and theoretical Green’s functions. The developments through the 1980s are summarized by Kikuchi (1991) and Iwata (1991). Yoshida (1995) reviewed the problem, paying special attention to classifying problems ranging from a point source to a spatio-temporal slip distribution. Beroza (1995) described the characteristics of analyses with far-field and near-field data. The review of strong-motion seismology by Anderson (2003) included a list of previous slip inversion studies using strong-motion records. From the viewpoint of earthquake physics, Kanamori (2004) mentioned characteristics of some results of slip inversion. There are also good textbooks and review articles on general inversion problems in geophysics (e.g., Menke 1989; Matsu’ura 1991; Tarantola 2005).

6.2: CONSTRUCTION OF SLIP INVERSION PROBLEM

6.2.1 Outline

First, we introduce the mathematical formulation of slip inversion problems using seismic and/or geodetic data. An earthquake rupture is the spatial and temporal distribution of displacement discontinuity Δu(x, t) across fault planes Σ. We consider only shear slip, whose direction is perpendicular to the local normal vector of the fault plane, ν(x), so that Δu(x, t) · ν(x) = 0 everywhere. In an elastic medium, the displacement u_i at (x, t) due to a slip distribution Δu(ξ, τ) is written as (equation 10.1 of Aki and Richards, 2002)

u_i(x, t) = ∫ dτ ∬_Σ Δu_j(ξ, τ) C_jkpq ν_k(ξ) G_ip,q(x, t − τ; ξ, 0) dΣ,   (1)

where C_jkpq are the elastic constants, and G_ip is a Green tensor function, the i-th component of displacement at (x, t) due to an impulsive force at (ξ, τ) in the p-th direction; the subscript ,q represents the derivative in the ξ_q direction. The summation convention is used for the indices j, k, p, and q. Equation (1) is the expression for seismic waves, but it is also applicable to geodetic measurements if we take t → ∞. The constituents of this problem are the data u_i, the slip model Δu_j, and the Green’s functions G_ip, which we discuss independently in the following subsections.
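Once the fault is discretized into subfaults and the Green’s functions are precomputed, equation (1) reduces to a linear system d = Gm. The following Python sketch illustrates this structure with random placeholder Green’s functions; in practice each column of G would be a synthetic seismogram (all stations concatenated) for unit slip on one subfault, computed by a wave-propagation code. The matrix sizes are arbitrary.

```python
import numpy as np

# Minimal sketch of the discretized form of equation (1): d = G m.
rng = np.random.default_rng(0)
n_data, n_sub = 200, 10                    # data samples, subfaults (hypothetical)
G = rng.standard_normal((n_data, n_sub))   # placeholder Green's function matrix
m_true = rng.uniform(0.0, 2.0, n_sub)      # "true" slip on each subfault
d = G @ m_true                             # noise-free synthetic data

# Least-squares estimate of the slip vector (discussed in Section 6.3)
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

With noise-free data and a well-conditioned G, the least-squares solution recovers the true slip exactly; the complications discussed in Section 6.3 (noise, underdetermined parameters, regularization) arise in realistic settings.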

6.2.2 Data Preparation

Among the various data available for slip inversion, seismic waveform data are essential for resolving the temporal change of fault slip. Broadband seismograms recorded by global seismic networks are useful for resolving the slip distribution of any earthquake larger than about M7. Far-field P- and S-waves in the epicentral distance range from 30° to 100° are well separated from other large phases and can be interpreted as a source time function with minor modification for the reflection and refraction due to local structure. Seismic waveform data from regional networks improve the resolution of the slip distribution for large earthquakes and enable slip inversion even for earthquakes of moderate size. Strong-motion records in particular are the most important data for obtaining a fine image of rupture propagation for large earthquakes. When on-scale records are available near the surface fault trace, the near-field components can strongly constrain the timing of local rupture propagation.

Many slip models have been determined using strong-motion records. The resolution and reliability of each model depend mainly on the quantity and quality of the strong-motion data. Digital data generally have a wider dynamic range than digitized analog data, and the accuracy of absolute timing is often critical for the analysis. Strong-motion data are usually limited, and only a small number of earthquakes have been observed at many near-field stations. Table 6.1 summarizes the slip models determined from relatively large amounts of data, together with the models in early studies of the San Fernando earthquake and the Imperial Valley earthquake. The number of strong-motion stations and types of data have increased with time, and the number of model parameters also increased until recently. While one research group uses strong-motion data as velocity data after integration, another group uses them as displacement data after double integration. This choice actually controls the roughness of the slip model, as we will discuss in Section 6.3.3.

Several data sets of seismic waves are now easily accessible. Global data had been available to researchers worldwide before the first slip inversion study. The World Wide Standard Seismograph Network (WWSSN, Oliver and Murphy 1971), constructed in the 1960s, was also useful for the study of seismic sources. This network has been gradually replaced by a network of broadband seismometers with digital recording systems at more than 120 worldwide stations (Lee 2002). Today, we can obtain these data from the Data Management Center (DMC) [http://www.iris.edu/] of the Incorporated Research Institutions for Seismology (IRIS) from a few hours after the occurrence of an earthquake. Recently, near-field strong-motion stations have also been organized into networks. Web sites are maintained by some large regional networks of strong-motion seismometers, such as the California Strong Motion Instrumentation Program (CSMIP) [http://www.consrv.ca.gov/cgs/smip/] in the USA, and the Kyoshin network (K-NET) [http://www.kyoshin.bosai.go.jp/] and KiK-net [http://www.kik.bosai.go.jp/] in Japan. The Consortium of Organizations for Strong-Motion Observation Systems (COSMOS), operated since 2001, provides a virtual server (search engine) for strong-motion data from stations worldwide.

While new earthquakes provide new data sets for slip inversion, old analog seismograms have been digitized and used for slip inversion of old significant earthquakes. It is a time-consuming task to digitize seismograms from smoked papers and microfilms and to apply appropriate corrections to them. Sometimes the information necessary for the corrections, such as the pendulum period, magnification, and pen arc length, is missing. Nevertheless, owing to recent progress in image processing techniques, the study of such old earthquakes is becoming popular. For example, slip models have been determined for the 1906 San Francisco earthquake (Mw 7.7, Wald et al. 1993; Song and Beroza 2005), the 1923 Kanto, Japan, earthquake (Mw 7.9, Wald and Somerville 1995), the 1944 Tonankai, Japan, earthquake (Mw 7.9, Kikuchi et al. 2003; Ichinose et al. 2003), and the 1948 Fukui earthquake (Mw 6.8, Kikuchi et al. 1999; Ichinose et al. 2005). Although the resolution and reliability are not comparable to state-of-the-art results, analyses of these events are important not only for the study of earthquake physics but also for regional tectonics and the assessment of long-term earthquake potential.

Only the final slip distribution (without its temporal evolution) can be estimated using geodetic data alone (e.g., Ward and Barrientos 1986; Yabuki and Matsu’ura 1992). Geodetic data can be combined with seismic data to better constrain the amount of total slip, which is often poorly constrained by seismic data alone. Examples of geodetic data are triangulation (e.g., Yoshida and Koketsu 1990), leveling (e.g., Horikawa et al. 1996; Yoshida et al. 1996), the Global Positioning System (GPS) (e.g., Wald and Heaton 1994; Wald et al. 1996; Horikawa et al. 1996; Yoshida et al. 1996), and synthetic aperture radar interferometry (InSAR; Hernandez et al. 1999; Delouis et al. 2002; Salichon et al. 2003, 2004). Geodetic data can also be used to determine the slip history of extraordinarily slow slip events, such as volcanic deformation (e.g., Aoki et al. 1999) and interplate silent earthquakes (Yagi et al. 2001; Miyazaki et al. 2003, 2006). Tidal gage records of tsunamis excited by earthquakes, which provide information about the low-frequency behavior of fault slip, are also used to constrain slip distributions (e.g., Satake 1989, 1993; Tanioka et al. 1996). Since typical tsunamigenic earthquakes occur outside regional seismic and geodetic networks, tidal gage data are useful for independently constraining the slip distribution.

Before solving an inversion problem, all these data must be properly preprocessed and arranged into a data vector do. Analog data require digitization, and digital data often need decimation. The seismometer response should be removed by deconvolution to obtain displacement or velocity data. During this preprocessing, we can estimate the level of noise, apply a bandpass filter to reduce it, and remove characteristic noises if they exist. Problems in the treatment of digital data, such as deconvolution and filtering, are discussed in detail in the textbook of Scherbaum (1996). As seen in Table 6.1, the type of data (displacement, velocity, or acceleration) differs among studies, which controls the complexity of the slip model.

When we use different types of data or data from different stations simultaneously, we must determine the relative weights of these data. The weights should be determined according to the magnitude of noise in each data set. In addition to natural background noise and instrumental noise, we should consider model errors due to the incompleteness of the knowledge used to calculate synthetic data from the assumed model. In the case of seismic data, the largest source of uncertainty is the inaccuracy of the Green’s functions due to the uncertain underground structure. The magnitude of such errors is roughly proportional to the amplitude of the calculated synthetic data. Therefore, to equalize the weight of all data, we may normalize the data by their maximum value at each station. Furthermore, when the spatial distribution of stations is not homogeneous, we should increase the weights of stations in more sparsely covered areas to regularize the problem.
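The weighting scheme described above can be sketched as follows. Station names, record values, and density weights here are hypothetical; a real implementation would derive the density weights from the actual station geometry.

```python
import numpy as np

# Each record is normalized by its maximum amplitude, so stations with
# large synthetic amplitudes (and hence large Green's-function errors)
# are not over-weighted; stations in sparsely covered areas get extra
# weight (assumed factors below).
records = {
    "STA1": np.array([0.1, 2.0, -1.5, 0.3]),
    "STA2": np.array([0.01, 0.05, -0.04, 0.02]),
}
density_weight = {"STA1": 1.0, "STA2": 2.0}   # STA2 sits in a sparse area

weighted = {
    name: density_weight[name] * rec / np.max(np.abs(rec))
    for name, rec in records.items()
}
```

After this normalization, both stations contribute comparably to the misfit despite their very different raw amplitudes, with the sparse-area station up-weighted by its density factor.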

6.2.3 Setting Fault Models

6.2.3.1 Parameterization of slip distribution

When we model an earthquake rupture process, we usually assume one or a few rectangular fault planes in the source region. Each rectangular plane has seven parameters: the location of one corner (latitude, longitude, and depth), the size (length and width), and the orientation (strike and dip angles). With two additional parameters concerning the slip, the slip amount and the rake angle, these nine parameters represent the macroscopic image of static fault slip as defined in Aki and Richards (2002). The determination of these macroscopic parameters was a major research topic in earthquake seismology until the 1980s, and numerous macroscopic static fault models have been determined for many earthquakes.

To discuss the spatial slip distribution, we usually divide the rectangular fault planes into a number of subfaults. If we emphasize the objectivity of slip models, the macroscopic fault parameters, seven for each plane, should be determined in the slip inversion. However, a problem that solves for the heterogeneous slip distribution and the fault geometry simultaneously is nonlinear and intractable. Instead, we usually fix these seven parameters based on the results of macroscopic analysis with low resolution, or on supplemental information such as the aftershock distribution, the surface fault trace, and focal mechanism solutions, including the Centroid Moment Tensor (CMT) solution that represents the force system acting at the source (Dziewonski and Woodhouse, 1983).
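A minimal Python sketch of this parameterization: a rectangular plane described by the seven geometric parameters, divided into a regular grid of subfaults whose slip values would be the inversion unknowns. All numeric values here are hypothetical.

```python
# One rectangular fault plane: seven assumed geometric parameters
# (values are hypothetical, normally fixed from aftershocks, surface
# trace, or a CMT solution as described above).
fault = {
    "lat": 35.0, "lon": 135.0, "depth_km": 5.0,   # corner location
    "length_km": 40.0, "width_km": 20.0,          # size
    "strike_deg": 230.0, "dip_deg": 60.0,         # orientation
}

# Divide the plane into subfaults: 8 along strike, 4 down dip.
n_along, n_down = 8, 4
dl = fault["length_km"] / n_along
dw = fault["width_km"] / n_down

# Subfault center positions in along-strike / down-dip fault coordinates;
# each center carries two slip unknowns (amount and rake) in the inversion.
centers = [((i + 0.5) * dl, (j + 0.5) * dw)
           for j in range(n_down) for i in range(n_along)]
```

With 32 subfaults and two slip parameters each, this example would already have 64 unknowns for a purely static inversion, before any temporal parameterization is added.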
