Distribution of maximum earthquake magnitudes in future time intervals: application to the seismicity of Japan (1923–2007)
© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences; TERRAPUB. 2010
Received: 16 August 2009
Accepted: 11 June 2010
Published: 31 August 2010
We have modified the new method for the statistical estimation of the tail distribution of earthquake seismic moments introduced by Pisarenko et al. (2009) and applied it to the earthquake catalog of Japan (1923–2007). The newly modified method is based on the two main limit theorems of the theory of extreme values and on the derived duality between the generalized Pareto distribution (GPD) and the generalized extreme value distribution (GEV). Using this method, we obtain the distribution of maximum earthquake magnitudes in future time intervals of arbitrary duration τ. This distribution can be characterized by its quantile Qq(τ) at any desirable statistical level q. The quantile Qq(τ) provides a much more stable and robust characteristic than the traditional absolute maximum magnitude Mmax (Mmax can be obtained as the limit of Qq(τ) as q → 1, τ → ∞). The best estimates of the parameters governing the distribution of Qq(τ) for Japan (1923–2007) are the following: ξGEV = −0.19 ± 0.07; μGEV(200) = 6.339 ± 0.038; σGEV(200) = 0.600 ± 0.022; Q0.90,GEV(10) = 8.34 ± 0.32. We have also estimated Qq(τ) for a set of q-values and future time periods in the range 1 ≤ τ ≤ 50 years from 2007 onwards. For comparison, the absolute maximum estimate Mmax,GEV = 9.57 ± 0.86 has a scatter more than twice that of the 90% quantile Q0.90,GEV(10) of the maximum magnitude over the next 10 years beginning from 2007.
Key words: Extreme value theory, generalized extreme value distribution, generalized Pareto distribution, earthquake seismic moments, magnitude.
The work presented in this article has two goals: (1) to adapt the method suggested by Pisarenko et al. (2009) for the statistical estimation of the tail of the distribution of earthquake magnitudes to catalogs in which earthquake magnitudes are reported in discrete values, and (2) to apply the newly developed method to the Japan Meteorological Agency (JMA) magnitude catalog of Japan (1923–2007) in order to estimate the maximum possible magnitude and other measures characterizing the tail of the distribution of magnitudes.
The chief innovation, introduced first by Pisarenko et al. (2009) and extended here, is to combine the two main limit theorems of extreme value theory (EVT), which allows us to derive the distribution of T-maxima (maximum magnitude occurring in sequential time intervals of duration T) for arbitrary T. This distribution enables derivation of any desired statistical characteristic of the future T-maximum. The two limit theorems of EVT correspond to the generalized extreme value distribution (GEV) and to the generalized Pareto distribution (GPD), respectively. Pisarenko et al. (2009) established the direct relations between the parameters of these two distributions. The duality between the GEV and GPD provides a new approach to check the consistency of the estimation of the tail characteristics of the distribution of earthquake magnitudes for earthquakes occurring over arbitrary time intervals.
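For orientation, the two limit laws and the duality between their parameters can be sketched as follows (standard EVT notation; here h is the magnitude threshold, s the GPD scale, λ the Poisson intensity of events exceeding h, and σT, μT the GEV scale and location for T-maxima; the exact parameterization used by Pisarenko et al. (2009) may differ in detail):

```latex
% GPD tail for exceedances over the threshold h:
F_h(m) = 1 - \Bigl[1 + \frac{\xi\,(m - h)}{s}\Bigr]^{-1/\xi}, \qquad m \ge h,
% GEV law of the T-maxima (same form parameter \xi):
\Phi_T(m) = \exp\Bigl\{-\Bigl[1 + \frac{\xi\,(m - \mu_T)}{\sigma_T}\Bigr]^{-1/\xi}\Bigr\},
% duality under a Poissonian flow of intensity \lambda:
\sigma_T = s\,(\lambda T)^{\xi}, \qquad
\mu_T = h + \frac{s}{\xi}\bigl[(\lambda T)^{\xi} - 1\bigr].
```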
Instead of focusing on the unstable parameter Mmax, we suggest a new, stable, and convenient characteristic, Mmax(τ), defined as the maximum earthquake magnitude that will be recorded over a future time interval of duration τ. The random value Mmax(τ) can be described by its distribution function or by its quantiles Qq(τ), which are, in contrast to Mmax, stable and robust characteristics. In addition, if τ → ∞, then Mmax(τ) → Mmax with a probability of one. The methods for calculating Qq(τ) are given in the following section. In particular, we can estimate Qq(τ) for, say, q = 10%, 5%, and 1%, as well as for the median (q = 50%), for any desirable time interval τ. These methods are illustrated below on the magnitude catalog of the JMA, over the time period 1923–2007, for magnitudes m ≥ 4.1.
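As an illustration of how Qq(τ) follows from fitted GEV parameters, here is a minimal Python sketch. It assumes (our reading, not stated explicitly in this excerpt) that the argument 200 in μGEV(200) and σGEV(200) is the T-interval length in days, and it rescales the T-maximum law to a τ-maximum law via the max-stability of the GEV:

```python
import numpy as np

def gev_quantile(q, xi, mu, sigma):
    """Quantile of the GEV distribution at level q (0 < q < 1), for xi != 0."""
    return mu + (sigma / xi) * ((-np.log(q)) ** (-xi) - 1.0)

def rescale_gev(xi, mu_T, sigma_T, T, tau):
    """Rescale GEV parameters fitted on T-maxima to tau-maxima.

    Max-stability of the GEV under a stationary Poissonian flow implies
    that the form parameter xi is unchanged, while
        sigma_tau = sigma_T * (tau / T) ** xi
        mu_tau    = mu_T + (sigma_T / xi) * ((tau / T) ** xi - 1).
    """
    r = tau / T
    return mu_T + (sigma_T / xi) * (r ** xi - 1.0), sigma_T * r ** xi

# Point estimates quoted in the abstract: xi = -0.19,
# mu_GEV(200) = 6.339, sigma_GEV(200) = 0.600, with T = 200 days (assumed).
xi = -0.19
mu_tau, sigma_tau = rescale_gev(xi, 6.339, 0.600, T=200.0, tau=10 * 365.25)
print(gev_quantile(0.90, xi, mu_tau, sigma_tau))
# ~8.31, consistent with Q_0.90,GEV(10) = 8.34 up to rounding of the inputs.
```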
We should stress that our method relies on the assumption that the distribution of earthquake magnitudes exhibits a regular limit behavior at its right end (for large magnitudes), even though there is no way to be absolutely certain that this is the case, given the limited data set for large and extreme earthquake sizes. Thus, in specific cases, seismologists are forced to accept the most appropriate assumption about the behavior of the magnitude distribution at its right end. The assumption used in our paper (which coincides with the assumption of EVT: the existence of a non-trivial asymptotic distribution for the centered and normalized sample maximum) seems to be the least harmful and the most fruitful. It provides the three well-known types of possible limit distributions for the maximum (in our paper we use only one of these). Without such an assumption, it would scarcely be possible to obtain any useful result on the distribution of sample maxima.
2. The Method
The method developed here is based on the following assumptions:
(1) the Poisson property of independence in time of the main shocks;
(2) independence between the observed magnitudes M;
(3) regularity of the tail probability of the earthquake magnitudes M.
We now present the elements that justify using these assumptions and then describe the specifics of the method.
2.1 Test of the Poisson hypothesis
Our analysis is performed for main shocks, following the application of a declustering method. We used the Knopoff-Kagan space-time window method to remove the aftershocks. This method has a number of shortcomings, and other declustering procedures are available, but none has universally accepted advantages. There is a widespread opinion among seismologists that the overwhelming majority of main shocks can be considered to be independent random variables. This property is more evident when earthquake observations are considered on a global scale, but it is still a reasonable hypothesis for large seismic regions, such as Japan. The Japanese data that we use exhibit evident irregularities in the registration process, which are visible in Fig. 7. In particular, during the time interval 1945–1965, the lack of observations is clearly evident. Fortunately, this effect is not essential for the larger earthquakes, which are the focus of our work.
We note that the model of a Poisson flow of events corresponds to a renewal model with exponentially distributed intervals between successive events. Testing for the Poisson property is thus reduced to studying the distribution of time intervals between successive main shocks. In our analysis, we study this distribution for events in Japan with magnitudes larger than some chosen lower threshold. We will show that, at least for large earthquakes with m ≥ 7.0, the exponential distribution cannot be rejected at a rather high statistical significance level. For earthquakes with m ≥ 6.0, the exponential distribution can be accepted, at least since 1966. For earthquakes of smaller sizes, the deviations of the distribution of the time intervals from the exponential law become more pronounced; consequently, a renewal model with non-exponentially distributed time intervals is perhaps more appropriate. However, this deviation is largely irrelevant for our purpose of determining the distribution of maximum earthquake magnitudes, which is controlled mainly by the large earthquakes.
The exponential Poisson hypothesis is thus acceptable (accepting it, say, when the p-value exceeds 0.1) for m ≥ 6.0 since 1966, and for m ≥ 7.0 for the whole catalog starting from 1923.
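A minimal sketch of such a test in Python; the arrays times and mags are hypothetical (occurrence times and magnitudes of the declustered main shocks), and the choice of the Kolmogorov-Smirnov statistic is ours, since this excerpt does not name the specific test used:

```python
import numpy as np
from scipy import stats

def test_exponential_intervals(times, mags, m_threshold):
    """Test whether inter-event times above a magnitude threshold are
    exponential, as implied by a stationary Poissonian flow of main shocks."""
    t = np.sort(times[mags >= m_threshold])
    dt = np.diff(t)
    scale = dt.mean()  # maximum-likelihood estimate of the mean waiting time
    # With an estimated scale the nominal p-value is only approximate
    # (Lilliefors effect); a parametric bootstrap would give an exact level.
    return stats.kstest(dt, 'expon', args=(0.0, scale))

# e.g. test_exponential_intervals(times, mags, 7.0): a p-value above 0.1
# would be read here as "the exponential hypothesis cannot be rejected".
```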
2.2 Independence of the magnitudes
2.3 Regularity of the tail probability of the earthquake magnitudes M
The conditions guaranteeing the validity of these two limit theorems include the regularity of the original distribution of magnitudes in its tail. These conditions ensure the existence of a non-degenerate limit distribution of the sample maximum Mn after a proper centering and normalization. Following the standard approach, we assume that the conditions ensuring the existence of a non-degenerate limit distribution of Mn do hold. If this were not the case, we would not be able to perform any meaningful analysis. While this argument may appear circular, it is a standard approach in statistics in general and in statistical seismology in particular. One can never really prove the validity of mathematical conditions solely from data. The model or theory can, however, be progressively validated by comparing its predictions with the results of precise tests (Sornette et al., 2007, 2008). It is therefore the conclusions that we derive from our analysis that will support, or refute, the value of the analysis itself.
2.4 Formulation of the theory and procedure
In our analysis, we study the maximum magnitudes occurring in a time interval (0, T). We assume that the flow of main shocks is a Poissonian stationary process with some intensity λ. This property of main shocks was studied and confirmed in appendix A of Pisarenko et al. (2008) for the Harvard catalog of seismic moments over the time period 1 January 1977–20 December 2004. The term “main shock” refers here to the events that remain following the application of a suitable declustering algorithm (see Pisarenko et al., 2008, 2009, and below). In Subsection 2.1, we tested the Poisson hypothesis and confirmed that (1) for earthquakes with m ≥ 6.0, the exponential distribution can be accepted, at least since 1966; (2) for large earthquakes with m ≥ 7.0, the exponential distribution cannot be rejected at a rather high statistical significance level. We can then proceed with the description of the model.
Given the intensity λ and the duration T of the time window, the average number of observations (main shocks) within the interval (0, T) is equal to 〈n〉 = λT. For T → ∞, the number of observations in (0, T) tends to infinity with a probability of one; we can therefore use Eq. (4) as the limit distribution of the maximum magnitude mT of the main shocks occurring in a time interval (0, T) of growing size (Pisarenko et al., 2008).
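Eq. (4) itself is not reproduced in this excerpt, but the standard relation behind it follows directly from the Poisson assumption: the number of main shocks in (0, T) with magnitude exceeding m is Poisson with mean λT[1 − F(m)], so that

```latex
P\{\,m_T \le m\,\} \;=\; \exp\bigl\{-\lambda T\,[\,1 - F(m)\,]\bigr\},
% which, for a GPD-regular tail 1 - F(m), converges to the GEV form
% with the parameters (\xi, \mu_T, \sigma_T) given above.
```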
3. Application of the GPD and GEV to the Estimation of τ-maximum Magnitudes in Japan
3.1 Characteristics of the JMA data
The full JMA catalog covers the spatial domain delimited by 25.02° ≤ latitude ≤ 49.53° and 121.01° ≤ longitude ≤ 156.36° and the temporal window 1 January 1923 to 30 April 2007. The depths of the earthquakes fall in the interval 0 ≤ depth ≤ 657 km. The magnitudes are expressed in 0.1-bins and vary in the interval 4.1 ≤ magnitude ≤ 8.2. There are 39,316 events in this space-time domain. The spatial domain of the JMA catalog includes the Kuril Islands and the eastern border of Asia.
Figure 6 plots the yearly number of earthquakes averaged over 10 years for three magnitude thresholds: m ≥ 4.1 (all available events), m ≥ 5.5, and m ≥ 6.0. The time series with m ≥ 6.0 appears to be approximately stationary, with an intensity of about three to four events per year. Figure 7 shows the flow of main events (the same variable as in Fig. 6 but for the main shocks obtained after applying the Knopoff-Kagan declustering algorithm). For large events (m ≥ 6.0), the flow is approximately stationary.
3.2 Adaptation for binned magnitudes
Consider a catalog in which the magnitudes are reported with a magnitude step Δm. In most existing catalogs, including that of Japan, Δm = 0.1. In some catalogs, two decimal digits are reported, but the last digit is fictitious unless the magnitudes are recalculated from seismic moments, themselves determined with several exact digits (such as the mW magnitude in the Harvard catalog). Here, we assume that the digitization is performed exactly, without random errors, over intervals ((k − 1) · Δm; k · Δm), where k is an integer. As a consequence, in the GPD approach, we should use only half-integer thresholds h = (k − 1/2) · Δm, which is not a serious restriction.
This shows that our analysis must rely on statistical tools adapted to discrete random variables. We have chosen the standard Pearson chi-square (χ2) method, as it provides a way both to estimate the unknown parameters and to strictly evaluate the goodness of fit. The χ2-statistic is calculated by taking the difference between the observed and theoretical frequency in each magnitude bin, squaring it, dividing it by the theoretical frequency, and summing the results over all bins. The χ2-statistic is then distributed according to the χ2-distribution with n − 1 − 3 degrees of freedom (df), where n is the number of bins, since we estimate three parameters in fitting the theoretical GEV distribution.
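A minimal Python sketch of this step; observed and expected are hypothetical per-bin counts, and scipy's ddof argument implements the reduction of the degrees of freedom by the number of fitted parameters:

```python
import numpy as np
from scipy import stats

def chi2_goodness_of_fit(observed, expected, n_fitted_params=3):
    """Pearson chi-square test for binned magnitude counts.

    Bins whose expected count falls below the chosen minimum (8 in the
    text) are assumed to have been merged by the caller beforehand.
    """
    obs = np.asarray(observed, dtype=float)
    exp = np.asarray(expected, dtype=float)
    exp *= obs.sum() / exp.sum()  # expected counts must match the total
    # The p-value is computed with (n_bins - 1 - n_fitted_params) dof,
    # accounting for the three GEV (or GPD) parameters estimated from data.
    return stats.chisquare(obs, exp, ddof=n_fitted_params)
```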
In order to be able to apply the chi-square test, a sufficient number of observations is needed in each bin; we chose this minimum number to be equal to 8 (see the discussion of this matter in Borovkov (1987)).
In order to compare two different fits (corresponding to two different parameter vectors), it is highly desirable to use the same binning in both cases, in order to avoid large variations in the significance levels, which depend on the binning.
In general, the chi-square test is less sensitive and less efficient than the Kolmogorov test or the Anderson-Darling test, because it coarsens the data by grouping them into discrete bins.
In the GEV approach, the digitized GEV distribution of the magnitude maxima in successive T-intervals is fitted using the χ2-method.
3.3 The GPD approach
[Figure: chi-square fitting procedure using the GPD approach; degrees of freedom.]
Having estimated the first triple (ξ, σT, μT) or the second triple (ξ, s, h), we use these estimates to predict the quantiles of the τ-maxima for any arbitrary future time interval (0, τ), since these τ-maxima follow the GEV distribution with the same form parameter ξ and with scale and location parameters rescaled to the duration τ, as seen from Eqs. (6)–(13). Recall that, in Eqs. (6)–(13), λ denotes the intensity of the Poissonian flow of events whose magnitudes exceed the threshold h.
3.4 The GEV approach
In this approach, we divide the total time interval Tc, from 1923 to 2007, covered by the catalog into a sequence of non-overlapping, contiguous intervals of length T. The maximum magnitude MT,j in each T-interval is identified. We have k = [Tc/T] T-intervals, so the sample of T-maxima has size k: MT,1, …, MT,k. We assume that T is large enough that each MT,j can be considered as being sampled from the GEV distribution with some unknown parameters (ξ, σT, μT) that are to be estimated from the sample MT,1, …, MT,k.
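A minimal sketch of the extraction of the T-maxima sample (our straightforward reading; times and mags are hypothetical arrays of main-shock occurrence times, in days, and magnitudes):

```python
import numpy as np

def t_maxima(times, mags, T):
    """Return the maximum magnitude in each of the k = [Tc / T]
    non-overlapping, contiguous T-intervals covering the catalog."""
    t0 = times.min()
    k = int((times.max() - t0) // T)         # number of complete intervals
    idx = ((times - t0) // T).astype(int)    # interval index of each event
    maxima = np.full(k, -np.inf)
    for i, m in zip(idx, mags):
        if i < k:                            # ignore the incomplete tail piece
            maxima[i] = max(maxima[i], m)
    # With the m >= 4.1 catalog, empty intervals are unlikely; we drop any
    # that occur (the paper does not discuss this edge case).
    return maxima[np.isfinite(maxima)]
```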
A comparison of the ξ, Mmax, and Q-estimates obtained by the GPD and GEV approaches shows the GEV method to be somewhat more efficient (its scatter is smaller by a factor of approximately 0.7). This can be explained by the fact that the GEV approach uses the full catalog more intensively: all events with magnitude m ≥ 4.1 participate (in principle) in the estimation, whereas the GPD approach throws out all events with m < h.
4. Discussion and Conclusions
We have adapted the new method of statistical estimation suggested by Pisarenko et al. (2009) to earthquake catalogs with discrete magnitudes. This method is based on the duality of the two main limit theorems of EVT. One theorem leads to the GPD (peak-over-threshold approach), and the other theorem leads to the GEV (T-maximum method). Both limit distributions must possess the same form parameter ξ. For the Japanese catalog of earthquake magnitudes over the period 1923–2007, both approaches provide almost the same statistical estimate for the form parameter, which is found to be negative: ξ ≅ −0.19. A negative form parameter corresponds to a distribution of magnitudes that is bounded from above (by a parameter named Mmax). This maximum magnitude corresponds to the finiteness of the geological structures supporting earthquakes. The density distribution extends to its final value Mmax with a very small probability weight in its neighborhood, characterized by a tangency of high degree (a “duck beak” shape). In fact, the limit behavior of the density of Japanese earthquake magnitudes is described by the function (Mmax − m)^(1/|ξ| − 1) ≅ (Mmax − m)^4, i.e. by a polynomial of degree approximately equal to 4. This explains the unstable character of the statistical estimates of the parameter Mmax: a small change in the catalog of earthquake magnitudes can give rise to a significant fluctuation in the resulting estimate of Mmax. In contrast, the estimation of the integral parameter Qq(τ) is generally more stable and robust, as we demonstrate quantitatively for the Japanese catalog of earthquake magnitudes over the period 1923–2007.
The main problem in the statistical study of the tail of the distribution of earthquake magnitudes (as well as of the distributions of other rarely observable extremes) is the estimation of quantiles that exceed the data range, i.e. quantiles of level q > 1 − 1/n, where n is the sample size. We would like to stress once more that a reliable estimation of quantiles of levels q > 1 − 1/n can be made only under some additional assumptions on the behavior of the tail. Sometimes, such assumptions can be made on the basis of the physical processes underlying the phenomena under study. For this purpose, we used general mathematical limit theorems, namely, the theorems of EVT. In our case, the assumptions for the validity of EVT amount to assuming a regular (power-like) behavior of the tail 1 − F(m) of the distribution of earthquake magnitudes in the vicinity of its rightmost point Mmax. A partial justification for such an assumption is the fact that, without it, there is no meaningful limit theorem in EVT. Of course, there is no a priori guarantee that these assumptions will hold in all real situations, and they should be discussed and possibly verified or supported by other means. In fact, because EVT suggests a statistical methodology for the extrapolation of quantiles beyond the data range, the question of whether such extrapolation is justified or not should be investigated carefully in each concrete situation.
This work was partially supported (V. F. Pisarenko, M. V. Rodkin) by the Russian Foundation for Basic Research, grant 09-05-01039a, and by the Swiss ETH CCES project EXTREMES (D.S.).
- Borovkov, A. A., Statistique Mathématique, Mir, Moscow, 1987.
- Cosentino, P., V. Ficara, and D. Luzio, Truncated exponential frequency-magnitude relationship in the earthquake statistics, Bull. Seismol. Soc. Am., 67, 1615–1623, 1977.
- Embrechts, P., C. Klüppelberg, and T. Mikosch, Modelling Extremal Events, Springer, 1997.
- Epstein, B. C. and C. Lomnitz, A model for the occurrence of large earthquakes, Nature, 211, 954–956, 1966.
- Kagan, Y. Y., Universality of the seismic moment-frequency relation, Pure Appl. Geophys., 155, 537–573, 1999.
- Kagan, Y. Y. and F. Schoenberg, Estimation of the upper cutoff parameter for the tapered distribution, J. Appl. Probab., 38A, 901–918, 2001.
- Kijko, A., Estimation of the maximum earthquake magnitude, Mmax, Pure Appl. Geophys., 161, 1–27, 2004.
- Kijko, A. and M. A. Sellevoll, Estimation of earthquake hazard parameters from incomplete data files. Part I, Utilization of extreme and complete catalogues with different threshold magnitudes, Bull. Seismol. Soc. Am., 79, 645–654, 1989.
- Kijko, A. and M. A. Sellevoll, Estimation of earthquake hazard parameters from incomplete data files. Part II, Incorporation of magnitude heterogeneity, Bull. Seismol. Soc. Am., 82, 120–134, 1992.
- Knopoff, L. and Y. Kagan, Analysis of the extremes as applied to earthquake problems, J. Geophys. Res., 82, 5647–5657, 1977.
- Pisarenko, V. F., A. A. Lyubushin, V. B. Lysenko, and T. V. Golubeva, Statistical estimation of seismic hazard parameters: maximum possible magnitude and related parameters, Bull. Seismol. Soc. Am., 86, 691–700, 1996.
- Pisarenko, V. F., A. Sornette, D. Sornette, and M. V. Rodkin, New approach to the characterization of Mmax and of the tail of the distribution of earthquake magnitudes, Pure Appl. Geophys., 165, 847–888, 2008.
- Pisarenko, V. F., A. Sornette, D. Sornette, and M. V. Rodkin, Characterization of the tail of the distribution of earthquake magnitudes by combining the GEV and GPD descriptions of extreme value theory, Pure Appl. Geophys. (http://arXiv.org/abs/0805.1635), 2009.
- Sornette, D., L. Knopoff, Y. Y. Kagan, and C. Vanneste, Rank-ordering statistics of extreme events: application to the distribution of large earthquakes, J. Geophys. Res., 101, 13883–13893, 1996.
- Sornette, D., A. B. Davis, K. Ide, K. R. Vixie, V. Pisarenko, and J. R. Kamm, Algorithm for model validation: Theory and applications, Proc. Natl. Acad. Sci. USA, 104(16), 6562–6567, 2007.
- Sornette, D., A. B. Davis, J. R. Kamm, and K. Ide, A general strategy for physics-based model validation illustrated with earthquake phenomenology, atmospheric radiative transfer, and computational fluid dynamics, in Computational Methods in Transport: Verification and Validation, Lecture Notes in Computational Science and Engineering, vol. 62, edited by F. Graziani and D. Swesty, pp. 19–73, Springer, New York, NY (http://arxiv.org/abs/0710.0317), 2008.
- Stephens, M. A., EDF statistics for goodness of fit and some comparisons, J. Am. Stat. Assoc., 69(347), 730–737, 1974.