
Testing various seismic potential models for hazard estimation against a historical earthquake catalog in Japan

Abstract

The classic zoning method and spatial smoothing of seismicity were applied to seismicity, GPS, and late Quaternary fault data to develop time-invariant seismic potential models of shallow crustal earthquakes in the Japanese islands, which were then tested against a 400-year Japanese historical earthquake catalog. The results demonstrated that models developed for seismic hazard estimation do not necessarily reproduce the observed seismicity; in some cases they performed even worse than the Reference Model, which assumes a uniform earthquake potential over all of the Japanese islands. A subsequent analysis, in which the historical dataset was divided into two subsets based on time, indicated that the present-day spatial distribution of small earthquakes and of surface horizontal strain is strongly affected by previous large earthquakes. Two sources of information proved the most effective: regionalized seismicity of small earthquakes and the active fault data. The two models based on these data were not only successful but also robust. A model combining the distributions of small and moderate-size earthquakes, as proposed by Frankel in 1995, was also effective for modeling distributed sources unrelated to mapped faults. In this study, we tested the spatial variation of the likelihood of large earthquakes with M ≥ 6.8.

1. Introduction

An earthquake potential model plays a key role in seismic hazard analysis. It has long been recognized that evaluating a potential source model is essential to accurate earthquake forecasting, but such an evaluation has had to wait many decades, until data on an adequate number of large earthquakes became available. For moderate-sized earthquakes, regionalized earthquake likelihood models are now being prospectively tested (e.g., Jordan, 2006; Field, 2007; Schorlemmer et al., 2007; Nanjo et al., 2011). A time-invariant model can be used to study not only future earthquakes but also those in the past. The aim of the study reported here was to evaluate time-invariant source models retrospectively by testing them against a 400-year historical earthquake catalog of the Japanese islands.

Various models utilizing different methods have been developed to estimate seismic hazard. In this study, we use the classic regionalization method (Cornell, 1968), a spatial smoothing technique proposed by Frankel (1995), information on strain accumulation as proposed by Ward (1994), and a method based on estimated slip rates and other parameters of late Quaternary faults (e.g., Wesnousky et al., 1984) to construct source models for shallow crustal earthquake hazard in Japan.

For the construction of various long-term earthquake potential models, we used the following datasets: an instrumentally recorded Japan Meteorological Agency (JMA) catalog of small and moderate-size earthquakes for 1926–1997, a seismotectonic zoning map of Japan (Property and Casualty Insurance Rating Organization of Japan, 2000), GPS data of the Geographical Survey Institute for 1994–1999, and active fault data (Kumamoto, 1997).

The Akaike information criterion (AIC; Akaike, 1974) is used to quantitatively evaluate the models. Historical data on inland large earthquakes during the past 400 years are used to calculate the likelihood of realizing the observed spatial distribution of large events under each model. The difference in AIC value is calculated between a given model and the Reference Model, which assumes a spatially uniform earthquake potential. The model with the largest difference in AIC is ultimately chosen as the most successful model.

Finally, we divide the historical earthquake dataset into two subsets and test the models against each subset separately to evaluate the robustness of the models.

2. Data

Two earthquake catalogs are assembled from JMA catalogs for the construction of earthquake potential models. Since the target is a large shallow inland earthquake, for our catalogs we select earthquakes from the JMA catalogs that are not deeper than 20 km, and we exclude events occurring offshore. One catalog, referred to as the “small earthquake catalog”, includes events with magnitude ≥3.0 that occurred between 1980 and 1997; relatively high-quality observations of small earthquakes started at the beginning of 1980. The second catalog, referred to as the “moderate-size earthquake catalog”, consists of earthquakes with magnitude ≥5.0 that occurred from 1926 to 1997.

We use the zoning map of the Property and Casualty Insurance Rating Organization of Japan (2000), which is mainly based on a seismotectonic zoning map proposed by Kakimi et al. (1994, 2003).

The GPS data used in this study were provided by the Geographical Survey Institute (GSI) for the period 1994–1999. We first extract the long-term average velocity at each GPS site and then estimate the surface strain rate using the least-squares collocation technique (El-Fiky et al., 1997; Kato et al., 1998), in which Gaussian spatial smoothing with a correlation distance of 100 km is applied for noise reduction (Shimazaki and Zhao, 2000).

The characteristic earthquake model (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984) is used to evaluate seismicity based on the late Quaternary fault data. A synthetic catalog of large shallow crustal earthquakes is produced from Kumamoto’s (1997) datasets of late Quaternary faults which contain data on the epicenter, magnitude, and annual frequency.

3. Models

We divide the study area, i.e., the Japanese islands, into cells of 0.1° in longitude × 0.091° in latitude, corresponding to cells of approximately 10 × 10 km (Fig. 1). This cell map is applied to all models, and the earthquake potential in each cell is estimated. The models are tested against the spatial distribution of historical earthquakes with magnitudes ≥6.8. A Reference Model, based on a uniform seismic potential throughout all of the Japanese islands, is introduced into the analysis.

Fig. 1. Cell map used for model construction and testing. Each cell is approximately 10 × 10 km. The distribution of historical earthquakes used for testing is also shown. The historical earthquakes are divided into two pairs of subsets: (1) events occurring in 1801–1925 (solid circles in the left panel) and events occurring in other periods (open circles in the left panel); (2) events related to late Quaternary faults (open circles in the right panel) and unrelated events (solid circles in the right panel).

A total of eight earthquake potential models, shown in Fig. 2, are presented in this study and compared with the Reference Model: three models obtained by smoothing of seismicity, another three based on zoning, one derived from GPS data, and one based on fault data. Details of the construction of each model are provided in the following sections. The models shown in Fig. 2 appear quite different at first glance, even though they were all developed for the same purpose: to produce a map of time-invariant seismic potential for hazard estimation.

Fig. 2. Eight earthquake potential models showing the probability of occurrence of large shallow crustal earthquakes (M ≥ 6.8) in 100 years in each cell shown in Fig. 1.

We assume that a seismic event is a realization of a uniform random point process and that the truncated Gutenberg-Richter law holds for the frequency-magnitude relation unless otherwise stated. The occurrence rate of earthquakes with magnitude equal to or larger than M, ν(≥M), is given as

$$\nu(\geq M) = \frac{10^{\,a - bM}}{T}, \qquad M \leq M_{\max},$$

in which a and b are empirically determined constants, T is the length of record, and Mmax is the truncation magnitude. We assume that the b-value is constant in space and time unless otherwise stated. Using the maximum likelihood method (Utsu, 1965; Aki, 1965), the b-value of the original Gutenberg-Richter law is calculated as 0.85 on the basis of the small earthquake data for 1980 to 1997. Thus, once the a-value in each modeled cell is obtained, we can calculate the occurrence rate of an earthquake with magnitude ≥6.8.
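For illustration, the b-value estimation step might look like the following minimal sketch; the function name, the 0.1 magnitude-binning correction, and the cutoff parameter are our assumptions rather than details given in the paper.

```python
import numpy as np

def b_value_mle(mags, m_min=3.0, dm=0.1):
    """Maximum-likelihood b-value (Utsu, 1965; Aki, 1965):
    b = log10(e) / (mean(M) - (m_min - dm/2)), where dm is the
    magnitude-binning width of the catalog."""
    m = np.asarray(mags)
    m = m[m >= m_min]                       # keep events above completeness
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
```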

3.1 Smoothing Models

The Smoothing Models are based on spatial smoothing of independent events. To exclude dependent events, or aftershocks, we first declustered the small and moderate-size earthquake catalogs using ZMAP v3.0 (Wiemer et al., 1997), although some apparent aftershocks had to be removed afterwards. For the small earthquake catalog, the declustering parameters, namely the maximum look-ahead time and the magnitude factor, are set to 150 days and 0.6, respectively; for the moderate-size earthquake catalog, they are 120 days and 0.75. The distributions of the small and moderate-size events are shown in Fig. 3.

Fig. 3. Distributions of 3,000 declustered small earthquakes (M ≥ 3.0) for 1980–1997 (left) and 250 declustered moderate-size earthquakes (M ≥ 5.0) for 1926–1997 (right).

Following the methodology proposed by Frankel (1995), we use a Gaussian function to smooth the seismicity. Model S is based on spatially smoothed a-values derived from the declustered small earthquake catalog; the a-value measures the activity level in the Gutenberg-Richter relation. We use a correlation distance of 50 km for the smoothing. In this model, events with magnitude ≥3 are assumed to illuminate areas of faulting that can produce a destructive earthquake (Frankel, 1995). Model M uses the declustered moderate-size earthquake catalog and a correlation distance of 75 km; it assumes that a future large event will occur close to where moderate-size earthquakes have occurred in the past. Model U, which is identical to the Reference Model, assumes uniform seismic potential. The aim of this model is to quantify large-earthquake potential in areas that have not shown significant seismicity during the period for which instrumental records are available, but which could very well produce a sizeable earthquake in the future (Frankel, 1995). The Smoothing F Model is a combination of the three models, i.e., Models S, M, and U. We adopt Frankel’s (1995) weighting factors of 0.5, 0.25, and 0.25 for Models S, M, and U, respectively. Similarly, the Smoothing S Model is constructed by combining Models S and U with weighting factors of 0.75 and 0.25, respectively, and the Smoothing M Model by combining Models M and U with weighting factors of 0.75 and 0.25, respectively.
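A minimal sketch of this smoothing-and-weighting step is given below. It assumes gridded event counts on the cell map of Fig. 1, uses Frankel’s (1995) kernel exp(−d²/c²), and normalizes the smoothed maps to spatial densities before combining them; the helper names and the use of scipy are our choices, not the authors’.

```python
import numpy as np
from scipy.ndimage import convolve

def frankel_smooth(counts, cell_km=10.0, corr_km=50.0):
    """Smooth gridded event counts with Frankel's (1995) kernel
    exp(-d^2/c^2), normalized so the total count is preserved."""
    r = int(np.ceil(3 * corr_km / cell_km))        # kernel radius in cells
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-((x**2 + y**2) * cell_km**2) / corr_km**2)
    kernel /= kernel.sum()
    return convolve(counts.astype(float), kernel, mode='constant')

def smoothing_f(counts_small, counts_moderate):
    """Smoothing F Model: 0.5*S + 0.25*M + 0.25*U as spatial densities."""
    s = frankel_smooth(counts_small, corr_km=50.0)
    m = frankel_smooth(counts_moderate, corr_km=75.0)
    s, m = s / s.sum(), m / m.sum()                # normalize to spatial PDFs
    u = np.full_like(s, 1.0 / s.size)              # uniform Model U
    return 0.5 * s + 0.25 * m + 0.25 * u
```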

3.2 Zoning Models

The Zoning P Model utilizes the zoning map and the a- and b-values in each zone proposed by the Property and Casualty Insurance Rating Organization of Japan (PCIRO) (2000). PCIRO’s zoning map is based on the seismotectonic map of Kakimi et al. (1994, 2003), and PCIRO (2000) estimated the a- and b-values from seismicity for 1885 to 1995. We construct two other models on the basis of the small and moderate-size earthquake catalogs, referred to here as the Zoning S and Zoning M Models, respectively. Only the a-value in each zone is estimated from the catalogs, since the b-value is fixed at 0.85.
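A per-zone a-value computation could be sketched as follows, assuming an annual-rate convention log10 N(≥M) = a − bM and an even division of each zone’s rate among its cells; all names and defaults here are illustrative.

```python
import numpy as np

def zone_a_value(mags, years, b=0.85, m_min=3.0):
    """Annual a-value for one zone: with N events of M >= m_min observed
    over `years`, log10 N(>=M) = a - b*M gives a = log10(N/years) + b*m_min."""
    n = np.count_nonzero(np.asarray(mags) >= m_min)
    return np.log10(n / years) + b * m_min

def cell_rate(a, n_cells_in_zone, b=0.85, m_target=6.8):
    """Annual rate of M >= m_target in one cell, dividing the zone's
    rate evenly among its cells."""
    return 10 ** (a - b * m_target) / n_cells_in_zone
```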

3.3 GPS Model

On the basis of Kostrov’s (1974) formula relating strain to seismic moment, Ward (1994) proposed the use of GPS data for seismic hazard evaluation. The average strain rate over the seismogenic crustal volume is replaced by the average strain rate at the surface to deduce approximate moment-rate tensors, which are then reduced to a single scalar moment rate. The Working Group on California Earthquake Probabilities (1995) and Savage and Simpson (1997) also evaluated a scalar moment rate from GPS data. Here, we adopt Ward’s (1994) formula to evaluate the scalar moment rate,

$$\dot{M}_0 = 2 \mu H A \max(|\dot{e}_1|, |\dot{e}_2|),$$

where max(|ė1|, |ė2|) is the larger of |ė1| and |ė2|, μ is the rigidity, H is the seismogenic depth, A is the unit area (10 × 10 km), and ė1 and ė2 are the principal strain rates. The truncated Gutenberg-Richter relationship in terms of seismic moment can be written as (e.g., Molnar, 1979)

$$\nu(\geq M_0) = \frac{1-\beta}{\beta}\,\frac{\dot{M}_0}{M_{0\max}} \left(\frac{M_0}{M_{0\max}}\right)^{-\beta},$$

in which M0max is the upper truncation seismic moment, β is equal to b/c, and c is the constant of the seismic moment-moment magnitude relationship, log M0 = c MW + d. We replace MW by MJMA using Takemura’s (1990) relationship between the JMA magnitude and the moment magnitude. Using the above formulas, we can estimate the occurrence rate of a large earthquake with magnitude ≥6.8 in each modeled cell of 10 × 10 km.
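The chain from principal strain rates to the rate of M ≥ 6.8 events might be sketched as below; the rigidity, seismogenic depth, and truncation magnitude are placeholder values, and the moment-magnitude conversion uses the standard log10 M0 = 1.5 MW + 9.1 (N m) rather than Takemura’s (1990) JMA relation used in the paper.

```python
import numpy as np

MU = 3.0e10          # rigidity (Pa), assumed value
H = 15e3             # seismogenic depth (m), assumed value
A = 10e3 * 10e3      # cell area (m^2), 10 x 10 km

def moment_rate(e1, e2):
    """Scalar moment rate per cell from principal surface strain rates
    (Ward, 1994): Mdot0 = 2*mu*H*A*max(|e1|, |e2|)."""
    return 2.0 * MU * H * A * np.maximum(np.abs(e1), np.abs(e2))

def rate_above(mag, m0_dot, mag_max=8.0, beta=0.85 / 1.5):
    """Occurrence rate of events with magnitude >= mag under a truncated
    Gutenberg-Richter law in moment, balanced against the geodetic moment
    rate (Molnar, 1979); beta = b/c with b = 0.85 and c = 1.5."""
    m0 = 10 ** (1.5 * mag + 9.1)          # threshold moment (N m)
    m0_max = 10 ** (1.5 * mag_max + 9.1)  # truncation moment (assumed Mmax)
    return (1 - beta) / beta * m0_dot / m0_max * (m0 / m0_max) ** (-beta)

# e.g., annual rate of M >= 6.8 in one cell:
# rate = rate_above(6.8, moment_rate(e1, e2))
```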

The surface strain rates derived from the GPS data are relatively high along the Pacific coast owing to the subduction of the Pacific and Philippine Sea plates. Most of the accumulated strain will be released by large earthquakes offshore (Shimazaki, 1974). Thus, the subduction effects are removed for the evaluation of inland seismicity, as shown in the Appendix.

3.4 Fault Model

The Fault Model is constructed on the basis of the late Quaternary fault datasets (Kumamoto, 1997), in which synthetic earthquakes are spatially smoothed. The epicenter of a synthetic event is placed at the midpoint between the two end points of a fault. The magnitude M is estimated from the fault length L (km) with the empirical formula of Matsuda (1975):

$$\log_{10} L = 0.6 M - 2.9.$$

The recurrence interval, the inverse of the annual frequency, is estimated by dividing the seismic moment by the seismic moment rate, following the method of Wesnousky et al. (1984). First, the seismic moment is estimated from the fault length using an empirical formula, and then the seismic moment rate is estimated from the slip rate of the fault. We use the maximum fault length and the maximum slip rate listed in Kumamoto (1997). Since the target is a large earthquake with magnitude ≥6.8, events with M < 6.8 are excluded from the synthetic catalog. Figure 4 shows the distributions of the late Quaternary faults and the synthetic earthquakes.
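The per-fault calculation can be sketched as follows; the moment-from-magnitude step uses the standard relation log10 M0 = 1.5 M + 9.1 (N m) as a stand-in for the empirical formula the authors used, and the fault width is an assumed constant.

```python
import numpy as np

def matsuda_magnitude(length_km):
    """Magnitude from fault length (Matsuda, 1975): log10 L = 0.6*M - 2.9."""
    return (np.log10(length_km) + 2.9) / 0.6

def recurrence_interval(length_km, slip_rate_mm_yr, mu=3.0e10, width_km=15.0):
    """Recurrence interval (yr) = seismic moment / moment rate
    (Wesnousky et al., 1984); fault width is an assumed constant."""
    m = matsuda_magnitude(length_km)
    m0 = 10 ** (1.5 * m + 9.1)                         # moment (N m)
    m0_rate = mu * (length_km * 1e3) * (width_km * 1e3) \
              * (slip_rate_mm_yr * 1e-3)               # moment rate (N m / yr)
    return m0 / m0_rate
```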

Fig. 4. Distributions of the late Quaternary faults (left) and epicenters of synthetic earthquakes (right). Kumamoto’s (1997) Maximum Length model for the late Quaternary faults is mapped. Each synthetic event has its own recurrence interval, which is not shown in the figure; thus, a concentration of events does not necessarily indicate high activity.

We spatially smooth the synthetic seismicity with a correlation distance of 75 km to obtain Model F. We then construct Fault Model by combining Models F and U with weighting factors of 0.75 and 0.25, respectively.

4. Testing Models

4.1 Historical earthquake catalog

To test the earthquake potential models, we use about 400 years of data on Japanese historical earthquakes, from 1596 to 2000, in Usami’s (2003) catalog. On Hokkaido, the northernmost of the four major Japanese islands, the historical record covers only the past 150 years; therefore, Hokkaido is excluded from the assessment of the models. Events occurring offshore are also excluded, as are dependent events (i.e., aftershocks of the 1923 Kanto earthquake), deep earthquakes, and the Odawara earthquake of 1782, for which a tsunami was reported (Tsuji, 1986).

Figure 1 shows the distribution of the historical inland large earthquakes used for testing. The magnitudes of all tested earthquakes are ≥6.8 because, based on the cumulative magnitude-frequency distribution, we judge the data to be more or less complete in this magnitude range. Data for 1926 through 1997 are excluded because the seismicity data for this period were used to construct the models, and the historical data used for testing should be independent of the data used in model construction.

However, as some after-effects of large historical earthquakes may persist, we divide the whole dataset into two subsets: one covering the period 1801–1925 and the other covering all other periods (left panel of Fig. 1). If a large historical event has century-long after-effects, some models may correlate well with data from the 19th to early 20th century. We also divide the data into two subsets (right panel of Fig. 1) on the basis of whether or not the events are related to a late Quaternary fault (Odagiri and Shimazaki, 2001). The Fault Model should be able to reproduce the fault-related data successfully.

A total of 40 historical earthquakes are used: 18 took place between 1801 and 1925 and 22 occurred in other periods; 17 correlate with late Quaternary faults and 23 do not. If the two classifications were completely independent, we would expect seven to eight fault-related earthquakes in the period 1801–1925; in fact, there are ten. Thus, a slight correlation exists between the fault-related earthquakes and the events of 1801–1925.

4.2 Evaluation

Although a visual comparison between the models in Fig. 2 and the distribution of historical earthquakes in Fig. 1 may give a qualitative impression of model performance, we introduce the following quantitative evaluation scheme. Given that all the models are time-independent, the aim is to evaluate the spatial distribution of earthquake potential. Each model gives an occurrence rate of large earthquakes with magnitude ≥6.8 in each cell, and the historical data indicate whether or not such a large earthquake took place in each cell. Consequently, the log-likelihood of a model can be calculated as

$$\ln L = \sum_{i=1}^{n} \left[ c_i \ln p_i + (1 - c_i) \ln (1 - p_i) \right],$$

where n is the total number of cells, p_i is the occurrence rate at cell i, and c_i equals 1 when a historical large earthquake took place in cell i and 0 otherwise (Kagan and Jackson, 1995; Jackson, 1996).

Because the historical data may be incomplete, we focus on the spatial pattern of the occurrence rate and not its absolute value. In other words, we cannot perform an ‘N-test’ (number test) (Kagan and Jackson, 1995) because the observed earthquake frequency may be underestimated owing to historically missing events. Thus, we do not use the probability p_i itself but its relative magnitude k p_i, introducing a scaling factor k. Therefore, instead of the above formula we use:

$$\ln L(k) = \sum_{i=1}^{n} \left[ c_i \ln (k p_i) + (1 - c_i) \ln (1 - k p_i) \right].$$

After maximizing the log-likelihood with respect to k, the AIC (Akaike, 1974) can be written as:

$$\mathrm{AIC} = -2 \ln L_{\max} + 2p,$$

where p is the number of parameters used to maximize the log-likelihood, equal to 1 since the factor k is the only parameter, and L_max is the maximized likelihood. Note that the form of this equation means that a better model has a smaller AIC value. For model comparison, we introduce δAIC as:

$$\delta \mathrm{AIC} = \mathrm{AIC}_{\mathrm{Reference}} - \mathrm{AIC}_{\mathrm{Model}},$$

where AIC_Model is the AIC value of a specific model and AIC_Reference is the AIC value obtained for the Reference Model, in which the earthquake potential is homogeneous throughout the Japanese islands. When δAIC is positive, the model performs better than the Reference Model.
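A sketch of this evaluation is given below, assuming a flattened array p of per-cell probabilities and a 0/1 hit vector c; scipy’s bounded scalar minimizer stands in for whatever maximization the authors performed, and the 2p terms of the two AICs cancel in δAIC.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def max_log_likelihood(p, c):
    """Maximize ln L(k) = sum[c*ln(k p) + (1-c)*ln(1 - k p)] over k."""
    p, c = np.asarray(p, float), np.asarray(c, float)

    def neg_ll(k):
        kp = np.clip(k * p, 1e-12, 1 - 1e-12)   # keep logs finite
        return -np.sum(c * np.log(kp) + (1 - c) * np.log(1 - kp))

    res = minimize_scalar(neg_ll, bounds=(1e-6, 0.999 / p.max()),
                          method='bounded')
    return -res.fun                              # maximized ln L

def delta_aic(p_model, p_ref, c):
    """delta AIC = AIC_ref - AIC_model; both models have one parameter
    (the scaling factor k), so the 2p terms cancel."""
    return 2.0 * (max_log_likelihood(p_model, c)
                  - max_log_likelihood(p_ref, c))
```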

5. Results

Tables 1–3 summarize δAIC for all of the tests. A difference in AIC of 2 is usually considered significant, since it is equivalent to the introduction of one free parameter. However, because the historical data used for testing are limited, more deviation should be allowed in δAIC. As a simple estimate of the effect of missing data, we randomly remove a historical earthquake from the catalog and examine how δAIC changes; the change ranges from −2 to 2, and therefore a difference of 2 in δAIC may not be significant.
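This missing-data check can be sketched as a leave-one-out loop over the observed events, reusing delta_aic from the previous sketch (the paper removes a single randomly chosen event; looping over all of them brackets the range).

```python
import numpy as np

def removal_sensitivity(p_model, p_ref, c):
    """Recompute delta AIC with each observed historical event removed
    in turn, to gauge the sensitivity to missing data."""
    deltas = []
    for i in np.flatnonzero(c):
        c2 = np.array(c, copy=True)
        c2[i] = 0                      # drop one observed earthquake
        deltas.append(delta_aic(p_model, p_ref, c2))
    return np.array(deltas)            # compare spread with the +/-2 criterion
```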

Table 1 Comparison of the different models.

Nonetheless, significant differences exist between the models. The difference in δAIC between the best and the worst models is >20, which is equivalent to a difference of 10 in log-likelihood. Nominally, the best model is roughly 20,000 times more likely than the worst model to reproduce the observed spatial distribution of large earthquakes.

The best score is obtained by the Fault Model, as expected, since many fault-related events are included in the historical catalog; the second-best score is obtained by the Zoning S Model. These models appear to be robust, since their δAIC values are positive in all cases (Tables 2 and 3).

Table 2 Comparison of models for groups of events related and unrelated to a late Quaternary fault.

As described earlier, the historical events are divided into two subsets based on whether or not they are related to a late Quaternary fault. The result is shown in Table 2. The data are also divided into two subsets to examine whether century-long after-effects exist. The results for events taking place between 1801 and 1925 are shown in Table 3, together with the results for the remaining events.

Table 3 Comparison of models for groups of events occurring in 1801–1925 and other periods.

6. Discussion

It may be argued that the calculated likelihood is inaccurate for a large event, since its source zone could be much larger than one cell (10 × 10 km), and that therefore not just one cell but all cells within the source zone should be counted. However, the actual source zones of most historical earthquakes are unknown. The resulting uncertainty in the likelihood should not be large, since all of the models are either spatially smoothed by a Gaussian function or composed of wide zones; the correlation distances used in this study range from 50 to 100 km, and the likelihood varies only slightly over these distances. Thus, we neglect the finite extent of the source zone in this study.

It is surprising that one-half of the models show poor results, not much better than the Reference Model (Table 1). The Smoothing M and Zoning M Models are based on the moderate-size earthquake catalog. Although its observation period is much longer than that of the small earthquake catalog, 72 versus 18 years, the results based on the moderate-size earthquake catalog are far less successful. Despite declustering, moderate-size earthquakes tend to cluster near the large earthquakes of the observation period, i.e., 1926–1997, in areas where historical earthquakes rarely occurred. In other words, we find no recurrence of large earthquakes in the same cell during the past 400 years. Consequently, the Smoothing M and Zoning M Models show high occurrence probabilities near the epicenters of the large earthquakes of 1926–1997 and fail to reproduce the spatial distribution of large earthquakes in other periods (Table 1).

The Smoothing S and Zoning S Models are based on the small earthquake catalog. Both models appear to be successful as a whole (Table 1), but they show contrasting results when the historical catalog is divided into two different time-periods (Table 3). The Zoning S Model seems robust, while the Smoothing S Model is not.

A comparison of the last two columns in Table 3 reveals that the GPS Model has the same time-dependency as the Smoothing S Model, namely, a successful result for 1801–1925 but a poor result for the other periods. It is very likely that the large earthquakes of the earlier period still affect present-day small-earthquake activity and surface strain. Since current seismicity near the source region of the 1891 Nobi earthquake still follows the Omori-Utsu aftershock formula (Utsu, 1961), century-long after-effects of large events are not surprising. The viscous response of the lower crust would also cause lingering strain accumulation near the source region of a large earthquake, as was observed after the 1896 Riku-u earthquake (Thatcher et al., 1980). If the historical earthquake catalog were shorter, covering only the earthquakes of 1801–1925, this effect would be overlooked and a wrong conclusion could be reached. We note that both present-day small earthquakes and surface strain are greatly affected by large earthquakes that occurred about a century ago.

It is to be expected that the Fault Model successfully reproduces historical seismicity (Table 1). However, it also successfully reproduces the earthquakes uncorrelated with late Quaternary faults (Table 2). This latter result is rather surprising, since recent large shallow crustal earthquakes tend to occur in areas where no late Quaternary faults have been mapped, and the importance of “blind faults” has been emphasized (e.g., Toda and Awata, 2008). However, the Fault Model is constructed by spatial smoothing of synthetic events based on the late Quaternary fault data. Compared with the 75-km correlation distance used for the smoothing, those events took place not far from the nearest mapped fault, and therefore high occurrence probabilities are assigned even to areas of blind faulting. The Fault Model is also robust for the different time-period data (Table 3).

The Zoning S Model is the most robust (Tables 2 and 3) and successful model, possibly indicating that both the large quantity of small earthquakes and geological knowledge of zoning are important keys for predicting the spatial variation of large shallow crustal earthquakes. Detailed knowledge of the late Quaternary faults is unnecessary to construct this model.

The Smoothing F Model is also successful, although its δAIC for the entire historical dataset is not as high as those of the Fault and Zoning S Models. Frankel (1995) proposed using fault and other data for a source model of large events with magnitude >7.0 and the spatial smoothing technique for smaller events. It is therefore appropriate to test the Smoothing F Model against the historical dataset of events unrelated to late Quaternary faults (Table 2). We find that the best model for distributed sources is the Smoothing F Model. The weighting factors used for combining different models seem to have a synergistic effect that is difficult to predict, because the δAIC of the Smoothing F Model is larger than the sum of those of the Smoothing S and Smoothing M Models (Table 1).

The Zoning P Model employs the original a- and b-values in each zone as proposed by the Property and Casualty Insurance Rating Organization of Japan (2000). Tables 1, 2, and 3 show that the δAIC of this model is negative in all cases. Since the zoning method was intended for distributed sources, it is logical to test the Zoning P Model against the historical data of events uncorrelated with late Quaternary faults (Table 2). However, its δAIC there is nearly zero, and, consequently, the model does not provide much information on the spatial distribution of large events. Since the Zoning S Model is successful (Table 2), the zoning itself is not to blame. The Property and Casualty Insurance Rating Organization of Japan (2000) obtained the a- and b-values from seismicity data for 1885 to 1995; it is likely that the migration of large earthquakes over time is one cause of the poor result.

7. Conclusions

Eight seismic potential models of large shallow crustal events in the Japanese islands are constructed by zoning and spatial smoothing techniques using seismicity, GPS, and late Quaternary fault data, and are then tested against historical data. We find that the model based on a large quantity of small-earthquake data combined with seismotectonic zoning is the most robust and successful. The model based on the late Quaternary fault data is also reliable. Frankel’s (1995) method of combining catalogs of small and moderate-size earthquakes is found to be effective for distributed sources.

A century-long after-effect of large earthquakes on seismicity and surface strain is inferred from the results of testing the models based on seismicity and GPS data. If a historical catalog is not long enough, this effect may not be detected.

Large earthquakes have not recurred in the same place during the past 400 years. Thus, an earthquake potential model whose spatial distribution mimics that of presently observed large earthquakes will give a rather poor forecast of future large earthquakes.

References

• Akaike, H., A new look at the statistical model identification, IEEE Trans. Automatic Control, AC-19, 716–723, 1974.
• Aki, K., Maximum likelihood estimate of b in the formula log N = a − bM and its confidence limits, Bull. Earthq. Res. Inst., Univ. Tokyo, 43, 237–239, 1965.
• Cornell, C. A., Engineering seismic risk analysis, Bull. Seismol. Soc. Am., 58, 1583–1606, 1968.
• El-Fiky, G. S. and T. Kato, Interplate coupling in the Tohoku district, Japan, deduced from geodetic data inversion, J. Geophys. Res., 104, 20361–20377, 1999.
• El-Fiky, G. S., T. Kato, and Y. Fujii, Distribution of vertical crustal movement rates in the Tohoku district, Japan, predicted by least-squares collocation, J. Geod., 71, 432–442, 1997.
• Field, E. H., Overview of the working group for the development of Regional Earthquake Likelihood Models (RELM), Seismol. Res. Lett., 78, 7–16, 2007.
• Frankel, A., Mapping seismic hazard in the central and eastern United States, Seismol. Res. Lett., 66, 8–21, 1995.
• Hashimoto, M. and D. D. Jackson, Plate tectonics and crustal deformation around the Japanese islands, J. Geophys. Res., 98, 16149–16166, 1993.
• Jackson, D. D., Hypothesis testing and earthquake prediction, Proc. Natl. Acad. Sci. USA, 93, 3772–3775, 1996.
• Jordan, T. H., Earthquake predictability: Brick by brick, Seismol. Res. Lett., 77, 3–6, 2006.
• Kagan, Y. Y. and D. D. Jackson, New seismic gap hypothesis: Five years after, J. Geophys. Res., 100, 3943–3960, 1995.
• Kakimi, T., A. Okada, Y. Kinugasa, T. Matsuda, and N. Yonekura, Seismotectonic province map of Japanese islands, with the upper-bound earthquake magnitudes, Abstracts 1994 Japan Earth and Planetary Sciences Joint Meeting, 302, 1994.
• Kakimi, T., T. Matsuda, I. Aida, and Y. Kinugasa, A seismotectonic province map in and around the Japanese islands, Zisin, 55, 389–406, 2003.
• Kato, T. and M. Ando, Source mechanisms of the 1944 Tonankai and 1946 Nankaido earthquakes: Spatial heterogeneity of rise times, Geophys. Res. Lett., 24, 2055–2058, 1997.
• Kato, T., G. S. El-Fiky, E. N. Oware, and S. Miyazaki, Crustal strains in the Japanese islands as deduced from dense GPS array, Geophys. Res. Lett., 25, 3445–3448, 1998.
• Kostrov, B. V., Seismic moment and energy of earthquakes, and seismic flow of rock, Izv. Acad. Sci. USSR, Phys. Solid Earth, 1, 23–40, 1974.
• Kumamoto, T., Long-term conditional seismic hazard of Quaternary active faults in Japan, 62 pp., Ph.D. Thesis, Univ. Tokyo, 1997.
• Le Pichon, X., S. Mazzotti, P. Henry, and M. Hashimoto, Deformation of the Japanese islands and seismic coupling: an interpretation based on GSI permanent GPS observations, Geophys. J. Int., 134, 501–514, 2002.
• Matsuda, T., Magnitude and recurrence interval of earthquakes from a fault, Zisin, 28, 269–283, 1975.
• Molnar, P., Earthquake recurrence intervals and plate tectonics, Bull. Seismol. Soc. Am., 69, 115–133, 1979.
• Nanjo, K. Z., H. Tsuruoka, N. Hirata, and T. H. Jordan, Overview of the first earthquake forecast testing experiment in Japan, Earth Planets Space, 63, 159–169, 2011.
• Nishimura, S., M. Ando, and S. Miyazaki, Inter-plate coupling along the Nankai trough and southeastward motion along southern part of Kyushu, Zisin, 51, 443–456, 1999.
• Odagiri, S. and K. Shimazaki, Correspondence of a historical earthquake to a seismogenic fault, Zisin, 54, 47–61, 2001.
• Okada, Y., Internal deformation due to shear and tensile faults in a half-space, Bull. Seismol. Soc. Am., 82, 1018–1040, 1992.
• Property and Casualty Insurance Rating Organization of Japan, Study on seismic hazard considering active faults and historical earthquakes—proposal for seismic hazard map—, Res. Rep. Earthq. Insurance, 47, 1–91, Non-Life Insurance Rating Organization of Japan, 2000.
• Sagiya, T., Interplate coupling and plate tectonics at the northern end of the Philippine Sea plate deduced from continuous GPS data, Bull. Earthq. Res. Inst., Univ. Tokyo, 73, 275–290, 1998.
• Savage, J. C., A dislocation model of strain accumulation and release at a subduction zone, J. Geophys. Res., 88, 4984–4996, 1983.
• Savage, J. C. and R. W. Simpson, Surface strain accumulation and the seismic moment tensor, Bull. Seismol. Soc. Am., 87, 1345–1353, 1997.
• Schorlemmer, D., M. C. Gerstenberger, S. Wiemer, D. D. Jackson, and D. A. Rhoades, Earthquake likelihood model testing, Seismol. Res. Lett., 78, 17–29, 2007.
• Schwartz, D. P. and K. J. Coppersmith, Fault behavior and characteristic earthquakes: examples from the Wasatch and San Andreas fault zones, J. Geophys. Res., 89, 5681–5698, 1984.
• Shimazaki, K., Nemuro-Oki earthquake of June 17, 1973: a lithospheric rebound at the upper half of the interface, Phys. Earth Planet. Inter., 9, 314–327, 1974.
• Shimazaki, K. and Y. Zhao, Dislocation model for strain accumulation in a plate collision zone, Earth Planets Space, 52, 1091–1094, 2000.
• Takemura, M., Magnitude-seismic moment relations for the shallow earthquakes in and around Japan, Zisin, 43, 257–265, 1990.
• Thatcher, W., T. Matsuda, T. Kato, and J. B. Rundle, Lithospheric loading by the 1896 Riku-u earthquake, northern Japan: Implications for plate flexure and asthenospheric rheology, J. Geophys. Res., 85, 6429–6435, 1980.
• Toda, S. and Y. Awata, Does the 2007 Noto Hanto earthquake reveal a weakness in the Japanese national seismic hazard map that could be remedied with geological data?, Earth Planets Space, 60, 1047–1052, 2008.
• Tsuji, Y., Documents of tsunami of the Tenmei Odawara earthquake of August 23, 1782, Zisin, 39, 277–287, 1986.
• Usami, T., Materials for Comprehensive List of Destructive Earthquakes in Japan, 416–2001, 605 pp., Univ. Tokyo Press, 2003.
• Utsu, T., A statistical study on the occurrence of aftershocks, Geophys. Mag., 30, 521–605, 1961.
• Utsu, T., A method for determining the value of b in a formula log n = a − bM showing the magnitude-frequency relation for earthquakes, Geophys. Bull. Hokkaido Univ., 13, 99–103, 1965.
• Ward, S. N., A multidisciplinary approach to seismic hazard in southern California, Bull. Seismol. Soc. Am., 84, 1293–1309, 1994.
• Wesnousky, S. G., C. H. Scholz, K. Shimazaki, and T. Matsuda, Earthquake frequency distribution and the mechanics of faulting, J. Geophys. Res., 88, 9331–9340, 1983.
• Wesnousky, S. G., C. H. Scholz, K. Shimazaki, and T. Matsuda, Integration of geological and seismological data for the analysis of seismic risk: a case study of Japan, Bull. Seismol. Soc. Am., 74, 687–708, 1984.
• Wiemer, S., R. Zuniga, and A. Allmann, ZMAP User Guide, Version 3.0 (beta), 116 pp., University of Alaska, 1997.
• Working Group on California Earthquake Probabilities, Seismic hazards in Southern California—probable earthquakes, 1994 to 2004, Bull. Seismol. Soc. Am., 85, 379–439, 1995.


Acknowledgements

We thank Prof. Takashi Kumamoto for fault data and Dr. Eric Nana Oware for his help in handling the GPS data. We also acknowledge the comments of two reviewers (Prof. Honn Kao and an anonymous reviewer) and the editor (Dr. Kazu Z. Nanjo).

Author information

Correspondence to Kunihiko Shimazaki.

Appendix A.

We exclude the accumulating surface strain caused by subduction following Savage (1983), who proposed separating the steady-state plate motion from the cyclic deformation process, in which normal faulting at the plate interface produces inland deformation equivalent to that of the interseismic stage. In this study, we subtract the synthetic accumulation rates of surface strain due to normal faulting on the plate interface, shown in Fig. A.1, from the observed strain rates, using Okada’s (1992) formulae. The faulting parameters are mainly taken from Hashimoto and Jackson (1993), Kato and Ando (1997), Sagiya (1998), El-Fiky and Kato (1999), Nishimura et al. (1999), and Le Pichon et al. (2002).

Fig. A.1. Rectangular faults assumed for calculating crustal deformation caused by the subducting oceanic plates. The arrows show accumulating horizontal displacement rates.


Cite this article

Triyoso, W., Shimazaki, K. Testing various seismic potential models for hazard estimation against a historical earthquake catalog in Japan. Earth Planets Space 64, 673–681 (2012). https://doi.org/10.5047/eps.2011.02.003