Treatise on Geophysics, Chapter 6: Slip Inversion (S. Ide)





d_e = G m, (8)

where m is a model vector that consists of the potency parameters, and G includes theoretical waveforms calculated for every discrete unit of the spatio-temporal source volume. This linear equation is solved using standard procedures of linear least-squares inversion (e.g., Menke 1985) on the assumption that the difference between observation and model prediction follows a normal distribution with zero mean. However, a flexible model requires a large number of model parameters. In recent analyses the number of parameters sometimes exceeds 3000-4000 (Table 6.1). Usually the number of data points is much larger than the number of parameters, but these data points are almost surely dependent and the problem tends to be underdetermined. The effective rank of the matrix G is less than the number of parameters, and the best estimates of the model parameters, even if they could be obtained, are unstable with quite large standard deviations. Such problems require regularization with various constraints, which will be discussed in 6.3.2. Another deficiency of the multi-time-window method is that it limits the timing of slip. When the duration of the temporal basis functions is too short and real slip lasts longer, unexpected errors appear in the solution. This problem was demonstrated by Hartzell and Langer (1993) using models with different numbers of time windows.
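
As a rough illustration of how the linear system (8) is assembled and solved, the following Python sketch builds G from precomputed synthetic waveforms and applies ordinary least squares. The array names and sizes are placeholders, not part of any published code, and a real application would add the regularization discussed in 6.3.2.

    import numpy as np

    # Minimal sketch of the multi-time-window linear system d_e = G m.
    # 'synthetics' is a hypothetical array of precomputed waveforms:
    # synthetics[j, k] is the seismogram (length n_data) produced by unit
    # potency in spatial basis j and time window k, already filtered and
    # resampled in the same way as the observed data.
    n_space, n_time, n_data = 20, 5, 3000
    rng = np.random.default_rng(0)
    synthetics = rng.normal(size=(n_space, n_time, n_data))   # placeholder
    d_obs = rng.normal(size=n_data)                           # placeholder

    # Each column of G is the response of one model parameter (one spatial
    # basis function in one time window).
    G = synthetics.reshape(n_space * n_time, n_data).T        # (n_data, n_param)

    # Ordinary least squares; for real data the problem is effectively
    # rank-deficient and needs the constraints of Section 6.3.2.
    m, residuals, rank, sv = np.linalg.lstsq(G, d_obs, rcond=None)
    print("effective rank of G:", rank, "of", G.shape[1], "parameters")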


6.2.3.3 Example of nonlinear expression

If the timings of slip are treated as unknown parameters in (2), we can reduce the number of parameters. An example is

\dot{D}(\xi, t) = \sum_j p_j X_j(\xi) \frac{1}{r_j} T\left( \frac{t - t_j}{r_j} \right), (9)

where X_j(ξ) is the spatial basis function in (2), T(t) is a function that is zero before t = 0 and is normalized to unity when integrated over time (Fig. 6.2b), and r_j is a scaling parameter in time. When T(t) is a boxcar function from t = 0 to 1, r_j is called the rise time. A boxcar function (e.g., Yoshida 1986) and a triangle function (e.g., Fukuyama and Irikura 1986; Takeo 1987) are frequently used. A merit of the nonlinear expression is that we can use a dynamically plausible function as T(t). Beroza and Spudich (1988) assumed

T(t) = \frac{H(t)\, H(1 - t)}{2\sqrt{t}}, (10)

where H(t) is a step (Heaviside) function. This is a truncated version of the slip-rate function in the analytic solution of dynamic crack propagation. We can also change the direction of the slip vector using unknown rake angles φ_j, with s(φ_j) denoting the unit slip vector on the fault plane specified by φ_j, as

\dot{D}(\xi, t) = \sum_j p_j s(\phi_j) X_j(\xi) \frac{1}{r_j} T\left( \frac{t - t_j}{r_j} \right). (11)

Substituting (11) into (1) and integrating over space and time yields a predicted data vector from the model parameter vector m, which consists of p_j, t_j, r_j, and φ_j:

d_e = G(m). (12)

If we assume Gaussian errors of the model prediction, linear inversion can provide a unique solution, whereas nonlinear inversion does not eliminate the problem of nonuniqueness. Instead, the model with the nonlinear expression can be extended to include more general features of fault slip.
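
The following sketch illustrates the kind of normalized slip-rate shapes T(t) described above (boxcar, triangle, and a truncated crack-like function). The exact functional forms, in particular the crack-like one, are illustrative assumptions rather than the forms used in the cited studies.

    import numpy as np

    # Sketch of normalized slip-rate shapes T(t) used as the temporal basis in
    # (9): each is zero before t = 0 and integrates to unity over 0 <= t <= 1,
    # so dividing the time argument by a rise time r_j rescales the duration.
    def boxcar(t):
        return np.where((t >= 0.0) & (t <= 1.0), 1.0, 0.0)

    def triangle(t):
        return np.where((t >= 0.0) & (t <= 1.0), 2.0 * (1.0 - np.abs(2.0 * t - 1.0)), 0.0)

    def truncated_crack(t, eps=1e-6):
        # Kostrov-like 1/sqrt(t) decay truncated at t = 1; the exact form of
        # Beroza and Spudich (1988) may differ, this is only an illustration.
        return np.where((t > eps) & (t <= 1.0), 0.5 / np.sqrt(np.maximum(t, eps)), 0.0)

    t = np.linspace(-0.5, 1.5, 2001)
    dt = t[1] - t[0]
    for name, f in [("boxcar", boxcar), ("triangle", triangle), ("crack", truncated_crack)]:
        print(name, "integral ~", round(float(np.sum(f(t)) * dt), 3))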


6.2.4 Calculation of synthetic data

To carry out the integration of equation (1) and obtain the predicted data vectors (8) or (12), Green's functions must be properly prepared for each point on the fault plane. Unless the spatial basis function in equation (2) is a delta function, we have to calculate Green's functions from point sources located at small intervals to account for the rupture directivity effect within each discrete unit (subfault), as shown in Fig. 6.2.

The theoretical formulation of elastic wave propagation from a point source in a layered structure was developed mainly during the 1970s and 1980s. For global studies, the propagator matrix method (Bouchon 1976) and generalized ray theory (Langston and Helmberger 1975) are frequently used to calculate far-field P and S waves with important contributions from local source and site structures, such as the depth phases pP, sP, and sS. For the calculation of near-field full waveforms, methods based on frequency-wavenumber integration are effective. The reflectivity method (e.g., Fuchs and Müller 1971; Kind 1976) and the Kennett-Bouchon method (e.g., Kennett and Kerry 1979; Bouchon 1981; Yao and Harkrider 1983) are well known, and several numerical codes based on these methods have been developed and distributed (e.g., Kohketsu 1985; Takeo 1985; Saikia 1994). A general textbook on seismic wave propagation has been published by Kennett (2001). The combination of ray theory and isochrone integration (Bernard and Madariaga 1984; Spudich and Frazer 1984) is also effective at high frequencies for near-field stations. For static deformation, the most widely used result is that of Okada (1985), which gives analytic solutions for the surface displacement due to a buried dislocation on a rectangular fault in a half space, with a concise summary of previous work.

Seismic velocity profiles determined by explosion surveys are helpful for constructing a layered structure. Most studies assume a 1D structure derived from the complex results of 2D or 3D surveys, with minor improvements by trial and error. When the structure differs considerably among stations, several 1D structures may be assumed in one study. Waveforms of small events are useful for empirically improving the assumed 1D structure, as shown by Ichinose et al. (2003), who compared synthetic waveforms with aftershock records to estimate average structures between the source and the stations.

Though a 1D structure is thought to be a fair approximation in many cases, it is obvious that there are many problems for which a 1D structure is no longer valid. For example, body waves from submarine earthquakes are followed by large reverberations in the water layer, and the dip angle of the sea floor is critical for simulating these waves (Wiens 1987, 1989; Okamoto and Miyatake 1989; Yoshida 1992). Another example is the existence of thick sediment layers recently revealed by structural surveys in urbanized areas. Using a finite difference method for 3D structure, Graves and Wald (2001) and Wald and Graves (2001) demonstrated that the use of 3D Green's functions can increase the resolution of slip inversion. They also emphasized that this improvement is only possible with an accurate underground structure, and careful examination of its validity is required for slip inversion. As demonstrated by Tsuboi et al. (2003) for the 2002 Denali Fault earthquake (Mw 7.9), one of the fastest current computers can calculate seismic waves accurately down to periods of about 5 s in a laterally heterogeneous global earth model. The introduction of more accurate 3D Green's functions into slip inversion is promising.

Accurate 3D structure has not been determined except for small areas on the Earth. Moreover, small earthquakes radiate high-frequency waves, which cannot be modeled theoretically. An alternative to the theoretical Green's function is the empirical Green's function (eGf), which is usually the record of a small earthquake. This concept was originally proposed by Hartzell (1978) for strong-motion simulations. Note that the displacement of a small earthquake is not a Green's function in the strict sense of equation (1), where the Green's function is defined using an impulsive single force, while an earthquake has a double-couple force system with a finite duration. Nevertheless, as long as the source duration of the small event is negligible, the difference in the force system is practically insignificant for slip inversion. Fukuyama and Irikura (1986) applied eGfs to the 1983 Japan Sea earthquake (Mw 7.7). The large source area of this event required two aftershocks, representing the northern and southern subfaults, since an eGf is valid only over a limited region. For a large earthquake that breaks shallow faults, the variation in distance from the free surface is hard to neglect, and a large number of aftershocks would be required to obtain a complete set of empirical Green's functions, which is almost impossible. Since it is also known that aftershocks are rare in the area where the mainshock slip was large, it is hard to find proper small earthquakes for the eGf method. Unfortunately, the waves from this most important part of the fault plane are the most difficult to simulate using eGfs. Nevertheless, in studies with relatively low resolution, especially for small earthquakes, the eGf is an effective tool, as shown by many studies (e.g., Mori and Hartzell 1990; Courboulex et al. 1996; Fletcher and Spudich 1998; Hellweg and Boatwright 1999; Ide 2001; Okada et al. 2001). In these studies, seismic waves recorded at a station resolve the slip distribution mainly in the direction of the ray from the source to the station, so rays leaving the source in a variety of directions are required to better resolve the slip distribution. Figure 6.3 shows an example of the comparison between waveforms from a large event (the 2003 Miyagi-Oki, Japan, earthquake; Mw 7.1) and a possible eGf event with a similar mechanism. The waveforms of the eGf event have short durations and site-specific complexity mainly due to the local shallow structure. The mainshock records have some distinct phases, and the arrivals of the second phases suggest northward rupture propagation, which is consistent with the slip models presented by Okada and Hasegawa (2003) and Wu and Takeo (2004).
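
One common way to exploit an eGf record is spectral division of the mainshock record by the eGf record to obtain a relative source time function. The sketch below shows a water-level-stabilized version of this operation on placeholder arrays; the function name and parameters are hypothetical, and real applications require windowing, filtering, and a correction for the seismic moment of the eGf event.

    import numpy as np

    # Sketch: water-level spectral division of a mainshock record by an eGf
    # record to estimate a relative source time function. All arrays here are
    # synthetic placeholders.
    def egf_deconvolve(mainshock, egf, water_level=0.01):
        n = max(len(mainshock), len(egf))
        M = np.fft.rfft(mainshock, n)
        E = np.fft.rfft(egf, n)
        # Clip small spectral amplitudes of the eGf to stabilize the division.
        floor = water_level * np.max(np.abs(E))
        E_stab = np.where(np.abs(E) < floor, floor * np.exp(1j * np.angle(E)), E)
        return np.fft.irfft(M / E_stab, n)

    rng = np.random.default_rng(1)
    egf = rng.normal(size=512)                               # placeholder eGf record
    mainshock = np.convolve(egf, np.ones(20) / 20.0)[:512]   # eGf convolved with a 20-sample boxcar
    rstf = egf_deconvolve(mainshock, egf)
    print("recovered source duration ~", int(np.sum(rstf > 0.5 * rstf.max())), "samples")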



Misalignment between predicted and observed data can be a significant source of error, though it is not always carefully discussed in research papers. Usually the arrival times calculated for an assumed structure have some errors, and sometimes the data do not carry accurate information on absolute timing. Therefore, we have to align predicted and observed data to minimize the difference between the corresponding reference phases. Usually S waves are dominant in near-field displacement records, but exact determination of S wave arrival times is difficult because they are masked by P coda waves. A practical approach is to estimate the S wave arrival times empirically using the records of small events. Sometimes, poorly determined timing in near-field records leads to considerably different slip inversion results. This might be one reason for the large discrepancy between slip models for the 1999 İzmit earthquake (Mw 7.4; Yagi and Kikuchi 2000; Bouchon et al. 2002; Delouis et al. 2002; Gülen et al. 2002; Sekiguchi and Iwata 2002), as pointed out by Beresnev (2003) and Ide et al. (2005).
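
A simple way to reduce such misalignment is to shift the predicted (or observed) trace by the lag that maximizes the cross-correlation around a reference phase. The sketch below illustrates the idea on synthetic traces; the names, the search window, and the use of a circular shift are simplifying placeholders.

    import numpy as np

    # Sketch: align a synthetic waveform with an observed one by finding the
    # lag that maximizes the normalized cross-correlation (e.g., around an
    # S arrival picked on a small-event record).
    def best_shift(obs, syn, max_shift):
        def corr(lag):
            s = np.roll(syn, lag)   # circular shift, adequate for this illustration
            return np.dot(obs, s) / (np.linalg.norm(obs) * np.linalg.norm(s) + 1e-12)
        return max(range(-max_shift, max_shift + 1), key=corr)

    rng = np.random.default_rng(2)
    syn = rng.normal(size=400)
    obs = np.roll(syn, 7) + 0.1 * rng.normal(size=400)   # "observed" = shifted synthetic + noise
    print("estimated time shift (samples):", best_shift(obs, syn, max_shift=20))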


6.3 SOLVING THE INVERSE PROBLEM

6.3.1 Best Estimate

From an observed data vector d_0 and a corresponding prediction vector d_e including a model parameter vector m, we solve the inverse problem to minimize the difference between these vectors, d_0 - d_e, and find the best estimates of the model parameters and their estimation errors. There are good textbooks on such inversion or optimization problems (e.g., Menke 1989; Tarantola 2005). Here we review only the basic expressions of the maximum likelihood method as an introduction to the next subsection.

The first step of an inversion problem is to estimate the errors of the data vector. Usually the errors are assumed to have a Gaussian distribution with zero mean and a covariance matrix. In the next subsection we review a Bayesian method in which the absolute scale of the covariance matrix is also estimated from the observed data through the marginal likelihood. It is also necessary to estimate the relative errors between independent data sets that can have different physical dimensions. For example, we estimate the relative magnitude of errors between the displacements at two stations, taking into account background noise levels and the inaccuracy of the assumed model, i.e., the assumed fault planes and velocity structure. We can assess the effect of model inaccuracy by calculations using slightly different model settings. The relative magnitude of errors between data of different types, e.g., seismic waveforms and static displacements, can be estimated similarly.
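
As a minimal illustration of assigning relative errors, the following sketch estimates a per-station standard deviation from a pre-event noise window and builds a diagonal data covariance. The variable names, the use of pre-event noise alone, and the diagonal form are simplifying assumptions.

    import numpy as np

    # Sketch: per-station weights from pre-event noise, so that stations (or
    # data types) with larger expected errors contribute less to the L2 misfit.
    def station_sigma(record, noise_window_samples):
        return float(np.std(record[:noise_window_samples]))   # pre-event portion

    rng = np.random.default_rng(3)
    records = [rng.normal(scale=s, size=1000) for s in (0.5, 1.0, 2.0)]  # placeholders
    sigmas = np.array([station_sigma(r, 200) for r in records])

    # Diagonal data covariance E: all samples of one station share one variance.
    E_diag = np.concatenate([np.full(len(r), s ** 2) for r, s in zip(records, sigmas)])
    weights = 1.0 / np.sqrt(E_diag)   # rows of G and d would be scaled by these
    print("relative station sigmas:", np.round(sigmas / sigmas.min(), 2))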

The maximum likelihood method gives the best estimate of the model parameter vector by maximizing a likelihood function p(d_0|m). This is formally identical to the probability distribution function of the data, p(d|m), but it is regarded as a function of m for the observed data d_0. With the assumption of Gaussian errors, it is

p(d_0 | m) = (2\pi)^{-N/2} |E|^{-1/2} \exp\left[ -\frac{1}{2} (d_0 - d_e(m))^t E^{-1} (d_0 - d_e(m)) \right], (13)

where we assume the mean values and the covariance matrix of the data to be d_e(m) and E, respectively. The superscript t denotes the transpose of a matrix, and N is the number of data points. The maximum of the likelihood function corresponds to the minimum of the L2 norm in the exponent, s(m) = (d_0 - d_e(m))^t E^{-1} (d_0 - d_e(m)). Therefore, this is the well-known weighted least squares problem, and the best estimate (the maximum likelihood value) of the model parameters, m_0, is given by the least squares solution.

When the model is linear as in equation (8), i.e., d_e(m) = Gm, and the matrix G is of full rank, the solution is

m_0 = (G^t E^{-1} G)^{-1} G^t E^{-1} d_0. (14)

The covariance matrix of the model parameters is written as

C_m = (G^t E^{-1} G)^{-1}. (15)
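
For the idealized full-rank case, equations (14) and (15) translate directly into a few lines of linear algebra, as in the following sketch with placeholder matrices.

    import numpy as np

    # Sketch of equations (14) and (15): weighted least squares with a data
    # covariance matrix E, valid only when G^t E^-1 G is full rank.
    rng = np.random.default_rng(4)
    n_data, n_param = 200, 10
    G = rng.normal(size=(n_data, n_param))       # placeholder design matrix
    d0 = rng.normal(size=n_data)                 # placeholder data
    E_inv = np.eye(n_data)                       # inverse data covariance (identity here)

    A = G.T @ E_inv @ G
    m0 = np.linalg.solve(A, G.T @ E_inv @ d0)    # eq. (14)
    C_m = np.linalg.inv(A)                       # eq. (15): model covariance
    print("parameter standard deviations:", np.round(np.sqrt(np.diag(C_m))[:3], 3))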

However, the solution in the form of (14) and (15) is rarely used. One reason is that the matrix is usually not of full rank. The regularization used to increase the rank is discussed in the next subsection. Another reason is that (14) often gives slip in the negative direction. Although it is still debated whether slip can occur in the negative direction, it is hard to imagine that slip reverses its direction in the high shear stress environment of the crust, and so we usually assume that slip is unidirectional. To prohibit slip in the negative direction, non-negative least squares (NNLS; Lawson and Hanson 1974) is useful. If a solution of NNLS exists, its uniqueness is verified by the Kuhn-Tucker theorem. A similar positivity constraint is also possible with a nonlinear error function that rapidly increases near zero when we solve a nonlinear problem, for which uniqueness is hard to prove. For example, Yoshida and Koketsu (1990) adopted an error function that rapidly increases as the slip rate X → 0.
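
The positivity constraint is readily applied with the NNLS routine available in SciPy, as in the following sketch with placeholder matrices.

    import numpy as np
    from scipy.optimize import nnls

    # Sketch: non-negative least squares (Lawson and Hanson 1974) via
    # scipy.optimize.nnls, which prohibits slip in the negative direction.
    rng = np.random.default_rng(5)
    G = rng.normal(size=(200, 10))               # placeholder design matrix
    m_true = np.abs(rng.normal(size=10))         # non-negative "true" parameters
    d = G @ m_true + 0.01 * rng.normal(size=200)

    m_nnls, residual_norm = nnls(G, d)
    print("all parameters non-negative:", bool(np.all(m_nnls >= 0.0)))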

When the model expression is nonlinear (equation (12)), it is generally difficult to find the best estimates that minimize s(m). We can linearize the observation equation and apply an iterative inversion method such as the Gauss-Newton method or the Levenberg-Marquardt method, in which the solution is derived through successive iterations of linear inversion. The estimate of the parameter vector after the i-th iteration, m_i, gives a prediction vector d_e(m_i). The difference between the data vector and the prediction vector for a perturbed model m_i + δm is then approximated by a Taylor series to first order as

d_0 - d_e(m_i + \delta m) \approx d_0 - d_e(m_i) - G' \delta m, (16)

where G' is the Jacobian matrix of d_e with respect to m evaluated at m_i. In the Gauss-Newton method, the solution is derived similarly to equation (14), and the updated model parameter vector is represented as

m_{i+1} = m_i + (G'^t E^{-1} G')^{-1} G'^t E^{-1} (d_0 - d_e(m_i)). (17)

This procedure is iterated until the solution converges. However, convergence does not guarantee that the solution is the best estimate, because the solution depends on the initial value and can be just a local least squares solution. In the case of slip inversion with many unknown parameters, many local least squares solutions may reduce the L2 norm to similar levels, and it is almost impossible to find a unique solution that globally minimizes the L2 norm. Jackson and Matsu'ura (1985) showed that prior information on model parameters, as discussed in the following section, can mitigate this problem of nonuniqueness.
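
A minimal sketch of the Gauss-Newton iteration (16)-(17) is given below, using a toy two-parameter forward model and a finite-difference Jacobian. It only illustrates the update rule and its dependence on the initial value, not an actual waveform inversion.

    import numpy as np

    # Toy stand-in for the nonlinear forward model d_e(m).
    def predict(m):
        t = np.linspace(0.0, 1.0, 100)
        return m[0] * np.exp(-t / max(m[1], 1e-6))

    def jacobian(m, eps=1e-6):
        d0 = predict(m)
        J = np.zeros((d0.size, m.size))
        for j in range(m.size):
            dm = np.zeros_like(m); dm[j] = eps
            J[:, j] = (predict(m + dm) - d0) / eps      # finite-difference column
        return J

    rng = np.random.default_rng(6)
    m_true = np.array([2.0, 0.3])
    d_obs = predict(m_true) + 0.01 * rng.normal(size=100)

    m = np.array([1.0, 0.5])                            # initial guess
    for _ in range(10):
        r = d_obs - predict(m)
        J = jacobian(m)
        m = m + np.linalg.solve(J.T @ J, J.T @ r)       # eq. (17) with E = I
    print("recovered parameters:", np.round(m, 3))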

Rapid improvement of computers has enabled seismologists to solve nonlinear problems by methods involving random search, such as the Monte Carlo method. As early examples in slip inversion, Hartzell and Liu (1995) and Hartzell et al. (1996) analyzed the Landers earthquake using simulated annealing (SA), and Zeng and Anderson (1996) used a genetic algorithm (GA) for the Northridge earthquake. Recently, Ji et al. (2002a) and Liu and Archuleta (2004) recommended the heat-bath algorithm (Rothman 1986), which is a class of SA method and is efficient for slip inversion.
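
The following sketch shows a generic simulated annealing loop with a Metropolis acceptance rule and a toy quadratic misfit. It is not the heat-bath algorithm of Rothman (1986) used in the cited studies, but it conveys the random-search strategy.

    import numpy as np

    # Sketch of generic simulated annealing for a parameter vector that
    # minimizes an L2 misfit; cooling schedule and step size are arbitrary.
    def anneal(misfit, m0, n_iter=5000, step=0.1, t0=1.0, t_min=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        m, e = m0.copy(), misfit(m0)
        best_m, best_e = m.copy(), e
        for i in range(n_iter):
            temp = max(t0 * (1.0 - i / n_iter), t_min)      # linear cooling
            trial = m + step * rng.normal(size=m.size)
            e_trial = misfit(trial)
            if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temp):
                m, e = trial, e_trial                        # Metropolis acceptance
                if e < best_e:
                    best_m, best_e = m.copy(), e
        return best_m, best_e

    rng = np.random.default_rng(7)
    G = rng.normal(size=(50, 4))
    d = G @ rng.normal(size=4)                               # placeholder data
    best_m, best_e = anneal(lambda m: float(np.sum((d - G @ m) ** 2)), np.zeros(4))
    print("best misfit found:", round(best_e, 4))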

Despite the frequent assumption of Gaussian errors, which is supported by the central limit theorem, it is also known that real data include many 'outliers' that are far from the prediction, and such errors cannot be explained by the Gaussian distribution. One remedy is simply to remove these outliers from the data set, but it is also effective to use a long-tailed probability function instead of the Gaussian distribution, which gives a more robust solution that is relatively insensitive to the outliers. If we assume that the probability function is the double exponential (Laplace) distribution, a typical long-tailed function, the likelihood is written as

p(d_0 | m) = \prod_{i=1}^{N} \frac{1}{2\sigma_i} \exp\left( -\frac{|d_{0i} - d_{ei}(m)|}{\sigma_i} \right), (18)

where σ_i is a scale parameter of the error in the i-th datum. The maximum likelihood solution minimizes the L1 norm,

\sum_{i=1}^{N} \frac{|d_{0i} - d_{ei}(m)|}{\sigma_i}. (19)

This is called the least absolute values problem, which is solved using an iterative method. Unlike general nonlinear problems, there is no dependence on the initial value, and the solution is obtained as a point or a finite hyperplane in the model space (e.g., Tarantola 2005). Examples of slip inversion using the L1 norm are found in Das and Kostrov (1990) and Hartzell et al. (1991), who solved the problem by linear programming, and in Ihmlé (1998), who used SA. Comparing solutions based on the L1 and L2 norms is useful for identifying features of the solution that are independent of the norm assumption (Hartzell et al. 1991).
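
One simple iterative scheme for the least absolute values problem is iteratively reweighted least squares, sketched below on placeholder data with artificial outliers; the cited studies instead used linear programming or SA.

    import numpy as np

    # Sketch: L1 (least absolute values) solution by iteratively reweighted
    # least squares, which repeatedly down-weights large residuals.
    def l1_irls(G, d, n_iter=50, eps=1e-6):
        m = np.linalg.lstsq(G, d, rcond=None)[0]          # L2 starting solution
        for _ in range(n_iter):
            r = np.abs(d - G @ m)
            sw = np.sqrt(1.0 / np.maximum(r, eps))        # sqrt of IRLS weights
            m = np.linalg.lstsq(G * sw[:, None], sw * d, rcond=None)[0]
        return m

    rng = np.random.default_rng(8)
    G = rng.normal(size=(100, 5))
    m_true = rng.normal(size=5)
    d = G @ m_true
    d[::10] += 5.0                                        # artificial outliers
    print("L1 estimate:", np.round(l1_irls(G, d), 3))
    print("true model :", np.round(m_true, 3))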


6.3.2 Constraints and Regularization

Since the parameterization of the slip distribution depends on the specific setup of the model, such as the size and locations of subfaults and the shape of the time function, it is a good idea to use a large number of model parameters to make a flexible model. However, with a large number of model parameters, the solution in the form of (14) and (15) cannot be obtained or becomes unstable with large parameter errors. This is because the effective rank of the matrix is not full. Although the number of data points usually exceeds the number of model parameters, the effective rank becomes smaller because the data are not independent or because a part of the model cannot be resolved by the data. Such a problem requires regularization with some constraints on the model parameters. One of the simplest constraints is minimizing the L2 norm of the parameter vector. Another popular assumption in slip inversion is that the slip distribution is smooth, to some degree, in time and space, which is expressed by constraints minimizing the time derivatives and the Laplacian of the slip distribution, respectively. Alternatively, we can fix the total amount of slip or the seismic moment at a value suggested by other information (Das and Kostrov 1990; Ji et al. 2002b), for example the seismic moment of the CMT solution determined using global long-period seismic waves (Dziewonski and Woodhouse 1983).
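
In practice such constraints are often imposed by appending weighted constraint equations to the observation equations, as in the following sketch with a 1D discrete Laplacian as the smoothing operator; the matrices, sizes, and weight are placeholders.

    import numpy as np

    # Sketch: regularized least squares by stacking weighted smoothing rows
    # (alpha * L m ~ 0) below the observation equations d ~ G m.
    rng = np.random.default_rng(9)
    n_data, n_param = 120, 30
    G = rng.normal(size=(n_data, n_param))           # placeholder design matrix
    d = rng.normal(size=n_data)                      # placeholder data

    # Discrete Laplacian rows: m_{i-1} - 2 m_i + m_{i+1} ~ 0.
    L = np.zeros((n_param - 2, n_param))
    for i in range(n_param - 2):
        L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0

    alpha = 1.0                                      # smoothing weight (see ABIC below)
    G_aug = np.vstack([G, alpha * L])
    d_aug = np.concatenate([d, np.zeros(n_param - 2)])
    m = np.linalg.lstsq(G_aug, d_aug, rcond=None)[0]
    print("roughness of solution:", round(float(np.sum((L @ m) ** 2)), 4))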

The introduction of constraints raises another problem: how strong should these constraints be? A strong smoothing constraint makes the solution smooth, and vice versa. While early studies introduced ad hoc criteria, Bayesian modeling and information theory provide a way to find an objective solution with proper weights. In the framework of Bayesian statistics, these constraints are regarded as prior information that provides a prior probability density function (pdf) of the model parameters with hyperparameters (Jackson 1979; Jackson and Matsu'ura 1985). When there is no prior information (constraints), we find the best estimate of the model parameter vector by maximizing the likelihood function as shown in 6.3.1. Similarly, with constraints as prior information, we find the solution maximizing the posterior pdf, which is proportional to the product of the likelihood function and the prior pdf. The proper weights of the constraints, i.e., the best estimates of the hyperparameters, are determined by maximizing the expectation of the likelihood over the model parameters, the marginal likelihood. This condition is written compactly as minimization of Akaike's Bayesian Information Criterion (ABIC; Akaike 1980). ABIC was first applied by Yabuki and Matsu'ura (1992), and many applications of ABIC to slip inversion are found in, e.g., Yoshida and Koketsu (1990), Ide and Takeo (1997), and Fukahata et al. (2004). We review the usage following Akaike (1980) and Yabuki and Matsu'ura (1992).

We may use several constraints of different types, with constraint matrices C_j and constraint value vectors e_j. A constraint is written as e_j = C_j m + error, where the error is assumed to follow a Gaussian distribution with zero mean and a covariance matrix. To minimize the L2 norm of the parameters, we use a unit matrix for C_j and a zero vector for e_j. A typical smoothing constraint that minimizes the difference between two successive parameters is a first-difference operator, whose rows contain 1 and -1 in the columns of adjacent parameters, with e_j = 0; a minimal construction is sketched below.
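
The sketch below constructs such a first-difference constraint matrix and selects its weight by a grid search over ABIC. The criterion is written here in a simplified form that assumes unit data covariance and ignores details such as the rank deficiency of C^t C; it illustrates the procedure rather than reproducing the formulation of the cited papers.

    import numpy as np

    # Sketch: choosing the constraint weight alpha^2 by minimizing ABIC,
    # in a simplified form following the type of expression used by
    # Yabuki and Matsu'ura (1992):
    #   ABIC(a2) ~ (N + P - M) log s(m*) - P log a2 + log det(G^t G + a2 C^t C) + const,
    # with N data, M parameters, and P constraint rows.
    def abic(G, d, C, a2):
        N, M = G.shape
        P = C.shape[0]
        A = G.T @ G + a2 * (C.T @ C)
        m = np.linalg.solve(A, G.T @ d)
        s = np.sum((d - G @ m) ** 2) + a2 * np.sum((C @ m) ** 2)
        sign, logdet = np.linalg.slogdet(A)
        return (N + P - M) * np.log(s) - P * np.log(a2) + logdet

    rng = np.random.default_rng(10)
    N, M = 150, 40
    G = rng.normal(size=(N, M))                  # placeholder design matrix
    d = rng.normal(size=N)                       # placeholder data

    C = np.zeros((M - 1, M))                     # first-difference smoothing operator
    for i in range(M - 1):
        C[i, i], C[i, i + 1] = 1.0, -1.0

    alphas2 = 10.0 ** np.arange(-4, 5)           # grid search over the hyperparameter
    best = min(alphas2, key=lambda a2: abic(G, d, C, a2))
    print("alpha^2 minimizing ABIC on this grid:", best)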