
Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters

Abstract

This study investigates the missing-data problem in the Japan Meteorological Agency (JMA) catalog of the Kumamoto aftershock sequence, which started on April 14, 2016, in Japan. Based on the assumption that earthquake magnitudes are independent of their occurrence times, we replenish the short-term missing records of small earthquakes by using a bi-scale empirical transformation and study their influence on the maximum likelihood estimates (MLEs) of the epidemic-type aftershock sequence (ETAS) parameters by comparing the analysis results from the original and the replenished datasets. The results show that the MLEs of the ETAS parameters vary when this model is fitted to the recorded catalog with different cutoff magnitudes, whereas those MLEs remain stable for the replenished dataset. Further analysis shows that the seismicity became quiescent after the occurrence of the second major shock, which can be regarded as a precursory phenomenon of the occurrence of the subsequent \(M_J7.3\) mainshock. This relative quiescence is demonstrated more clearly by the analysis of the replenished dataset.

(Left 6 panels) Illustration of applying the replenishing algorithm to the short-term missing of aftershocks in the Kumamoto aftershock sequence. (Right 6 panels) ETAS parameters estimated from the Kumamoto aftershock sequence with different magnitude thresholds. See text for details.

Background

In April 2016, an earthquake sequence burst out in the Kumamoto region of Kyushu Island, Japan, on the Hinagu and Futagawa faults, which lie at the southern end of the Median Tectonic Line and fork in two directions from the Beppu–Haneyama Fault Zone. One of the significant features of this sequence is that it included three M6+ earthquakes: a magnitude 7.3 mainshock, which struck at 01:25 JST on April 16, 2016, beneath Kumamoto City at a depth of about 10 km, and two foreshocks, one of magnitude 6.5 at 21:26 JST on April 14, 2016, at a depth of about 11 km, and the other of magnitude 6.4 at 00:03 JST on April 15, 2016, at a depth of about 7 km (Table 1). The earthquakes claimed 49 lives through collapsed houses and induced landslides.

This study aims to quantify the seismicity patterns of this sequence by using the ETAS model. Since Ogata (1988) proposed this model and extended it to a space–time version (Ogata 1998), it has become a standard model for short-term clustering of seismicity. The assumptions of this model are: (1) the background seismicity is a stationary Poisson process; (2) every event, whether a background event or one triggered by a previous event, triggers its own offspring independently; (3) the expected number of direct offspring is an increasing function of the magnitude of the mother event; and (4) the time lags between triggered events and the mother event follow the Omori–Utsu formula. Mathematically, this model can be formulated by its conditional intensity function

$$\begin{aligned} \lambda (t)& = {} \lim _{\Delta \downarrow 0}\frac{1}{\Delta }\Pr \{N[t,t+\Delta )=1\mid {\mathcal {H}}_t\}\nonumber \\& = {} \mu +K \sum _{i:\,t_i<t} \frac{\exp [\alpha (m_i-m_c)]}{(t-t_i+c)^{p}}, \end{aligned}$$
(1)

where \(N=\{(t_i, m_i){:}\, i=1, 2,\ldots , n\}\) is the sequence of earthquake occurrence times and magnitudes, \(N[t, t+{\Delta })=1\) if any of \(\{t_i{:}\, i=1, 2,\ldots , n\}\) falls in \([t, t+{\Delta })\) and \(N[t, t+{\Delta })=0\) otherwise, \({\mathcal {H}}_t\) represents the observation history up to time t but not including t, and the parameters \(\mu\), K, \(\alpha\), c, and p are constants to be estimated from the data. In the above equation, \(\mu\) represents the background seismicity rate, and \(\alpha\) represents the difference in triggering efficiency among events of different magnitudes. For ease of interpretation, we introduce another parameter

$$\begin{aligned} A=K \int _0^\infty \frac{1}{(t+c)^{p}}\, \text{d} t, \end{aligned}$$
(2)

which represents the productivity, i.e., the expected number of direct offspring, of an event of magnitude \(m_c\). The parameters can be estimated by the method of maximum likelihood. Given the observation series, \(N=\{(t_i, m_i){:}\, i=1, 2,\ldots , n\}\), in a time interval [0, T], the logarithm of the likelihood can be written as (Daley and Vere-Jones 2003, Chap. 7)

$$\begin{aligned} \log L =\sum _{i=1}^n \log \lambda (t_i) -\int _0^T\lambda (u)\, \text{d} u. \end{aligned}$$
(3)
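Note that, for \(p>1\), the integral in Eq. (2) converges and has the closed form \(A=Kc^{1-p}/(p-1)\). As an illustration of how Eqs. (1) and (3) can be evaluated numerically, the following minimal sketch (not the authors' code; the function name, the variable names, and the plain \(O(n^2)\) loop are ours) computes the log-likelihood of the temporal ETAS model for a given parameter set, using the closed-form integral of the Omori–Utsu kernel:

```python
import numpy as np

def etas_loglik(times, mags, T, mu, K, alpha, c, p, m_c):
    """Log-likelihood of the temporal ETAS model, Eq. (3), assuming p != 1."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    kappa = K * np.exp(alpha * (mags - m_c))        # triggering weight per event

    log_sum = 0.0
    for i, t in enumerate(times):
        prev = times < t
        lam = mu + np.sum(kappa[prev] / (t - times[prev] + c) ** p)
        log_sum += np.log(lam)                      # sum of log-intensities

    # Integral of lambda over [0, T]: mu*T plus each event's integrated kernel,
    # where int_0^{T-t_i} (s+c)^(-p) ds = (c^(1-p) - (T-t_i+c)^(1-p)) / (p-1).
    kernel_int = (c ** (1 - p) - (T - times + c) ** (1 - p)) / (p - 1)
    return log_sum - mu * T - np.sum(kappa * kernel_int)
```

Maximizing this function, e.g., by passing its negative to a general-purpose optimizer under positivity constraints on the parameters, yields the MLEs discussed below.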

In the practice of data analysis with the ETAS model, there are two major difficulties: one is the choice of the cutoff magnitude threshold, and the other is the short-term missing of small events. It has been shown that the estimated model parameters vary greatly when the magnitude threshold changes (Ogata 1998); this problem was also carefully studied by Wang et al. (2010). To avoid the short-term aftershock-missing problem when fitting the ETAS model or the Omori–Utsu formula, the early period of aftershocks is usually skipped. However, this remedy cannot be easily applied when multiple sequences are included in the data. It is therefore important to know how the short-term missing of aftershocks influences the estimates of the ETAS parameters.

Many efforts have been made to address the problem of missing small aftershocks in the early stage of an earthquake sequence. One observational approach is to use waveform-based earthquake detection methods (e.g., Enescu et al. 2007, 2009; Peng et al. 2007; Marsan and Enescu 2012; Hainzl 2016), which have detected many aftershocks that are not recorded in the catalog. Another observational approach gives up describing the earthquake process as a sequence of discrete events and instead regards it as a stream of energy in order to assess the effect of early aftershock incompleteness (Sawazaki and Enescu 2014). Among statistical approaches, based on the Gutenberg–Richter magnitude–frequency relation and Bayesian analysis techniques with smoothness priors, Ogata and his colleagues investigated the incompleteness of earthquake catalogs (Ogata and Vere-Jones 2003; Iwata 2008, 2013, 2014) and developed methods for probabilistic earthquake forecasting that take missing earthquakes into account (e.g., Ogata 2006; Omi et al. 2013, 2014, 2015). A non-Bayesian procedure that corrects such temporally varying incomplete detection of earthquakes can be found in Marsan and Enescu (2012), who assumed that the b-value is constant and that the occurrence rate of earthquakes follows the Omori–Utsu formula or the ETAS model.

Zhuang and Wang (2016) proposed a generic algorithm for replenishing missing data in the record of a temporal point process with time-independent marks. They verified this algorithm through simulations and applied it to the record of the aftershock sequence following the 2008 Wenchuan \(M_S\)7.9 earthquake in Sichuan Province, China, where up to about 30% of the M3+ events were missing from the record of the whole aftershock sequence. Their results confirmed the hypothesis of Utsu et al. (1995) that missing small events in the early stage of an aftershock sequence cause instability in the estimates of the Omori–Utsu formula.

In the following sections, the completeness of the catalog is investigated, and the missing data are then replenished using the approach proposed by Zhuang and Wang (2016). By comparing the results of fitting the ETAS model to the original and the replenished datasets, the influence of the missing-data problem on the estimates of the ETAS parameters can be understood, which helps to produce more reliable aftershock forecasts.

Fig. 1
figure 1

Epicenter map of seismicity in and near the Kyushu region from April 1, 2016 to April 21, 2016. The sizes of the circles represent magnitudes from 1.0 to 7.3. The background events before the first major event (M6.5) are marked by yellow circles, and all the events after it by red circles. The three major earthquakes are marked by yellow stars

Data

We use the JMA catalog in this study. The spatial range of data selection is \(128{\sim }133^\circ\)E, \(30{\sim }35^\circ\)N, and the time range is from April 1, 2016, 00:00:00 to April 21, 2016, 24:00:00. Figure 1 shows the epicenter locations of the selected earthquakes. We choose a wide region so that nearby earthquakes can be included as background seismicity. To see how small earthquakes are missing from the catalog, we plot the magnitudes, dithered with random rounding errors that are independently, identically, and uniformly distributed in [−0.05, 0.05], against the sequential numbers of the events, i.e., with the timescale equalized for each event, as shown in Fig. 2. Such a figure shows how the earthquake magnitude structure changes with time (e.g., Agnew 2014). If the dataset is complete, the plot shows a homogeneous pattern along the horizontal axis, as in the right half of Fig. 2. We can see that the largest missing events reach magnitude 3.0 or above immediately after the first and the third major shocks, much higher than the usual completeness level of the detection network in this area, which goes down to about 0.5 for shallow events (up to 30 km deep) and about 1.0 for slightly deeper events (30–60 km) (Nanjo et al. 2010; Iwata 2013).
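The diagnostic plot of Fig. 2 is straightforward to reproduce. Below is a minimal sketch (the file name is hypothetical, and all variable names are ours; we assume the magnitudes are already ordered by occurrence time):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# hypothetical file of time-ordered catalog magnitudes
mags = np.loadtxt("kumamoto_mags.txt")

# dither to undo the 0.1 rounding of catalog magnitudes
dithered = mags + rng.uniform(-0.05, 0.05, size=len(mags))

# plotting against the sequential number equalizes the timescale per event
plt.plot(np.arange(1, len(mags) + 1), dithered, ".", markersize=2)
plt.xlabel("sequential number (equalized timescale)")
plt.ylabel("dithered magnitude")
plt.show()
```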

Table 1 List of three major earthquakes in the 2016 Kumamoto earthquake sequence
Fig. 2
figure 2

A plot of dithered magnitudes versus sequential numbers of the earthquake events in the study region. The timescale is equalized for the observed earthquakes. The magnitudes are dithered with random errors uniformly distributed in [−0.05, 0.05]

Data replenishment

Heuristically, the missing data points can be replenished by adding points into the blank parts of Fig. 2 that are due to the missing small earthquakes, in such a way that the new plot shows a homogeneous pattern along the equalized time axis. Roughly speaking, there should be enough small earthquakes in the same periods during which the big events occur. The algorithm proposed by Zhuang and Wang (2016) is based on this idea. In the following, we apply this algorithm to replenish the data and explain it step by step.

The first step is to transform the entire observed dataset \(\{(t_i, m_i){:}\, i=1,2,\ldots , n_\mathrm{obs}\}\) onto the unit square \([0,1]\times [0,1]\):

$$\begin{aligned} t_i^\prime = \frac{\sum _{k=1}^{n_\mathrm{obs}}I(t_k\le t_i)}{n_\mathrm{obs}}, \quad \displaystyle m_i^\prime = \frac{\sum _{k=1}^{n_\mathrm{obs}}I(m_k\le m_i)}{n_\mathrm{obs}}, \end{aligned}$$
(4)

where I is a logical function defined by

$$\begin{aligned} I(x) =\left\{ \begin{array}{ll} 1,&\quad{\text{if}}\,x\,{\text{is true;}}\\ 0,&\quad{}\text{otherwise.}\end{array}\right. \end{aligned}$$
(5)

If the magnitudes and the occurrence times are independent of each other and the magnitudes are independent and identically distributed random variables, then \(\{(t_i^\prime , m_i^\prime ){:}\, i=1,2,\ldots ,n_\mathrm{obs}\}\) form a homogeneous pattern in the unit square. When the magnitudes and the occurrence times are not independent of each other, for instance, when some small events are missing in particular periods, the resulting point pattern is no longer homogeneous, as shown in Fig. 3b.
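A direct translation of Eq. (4) into code, as a minimal sketch (the function name is ours):

```python
import numpy as np

def biscale_transform(t, m):
    """Bi-scale empirical transformation of Eq. (4): map occurrence times and
    magnitudes to [0, 1] via their empirical distribution functions."""
    t = np.asarray(t, dtype=float)
    m = np.asarray(m, dtype=float)
    n = len(t)
    t_prime = np.array([(t <= ti).sum() for ti in t]) / n
    m_prime = np.array([(m <= mi).sum() for mi in m]) / n
    return t_prime, m_prime
```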

The second step is to judge whether there are missing events in the point pattern of \(\{(t_i^\prime , m_i^\prime ){:}\, i=1,2,\ldots ,n_\mathrm{obs}\}\). In Fig. 3b, the blank area implies that short-term missing of aftershocks exists, and the overly dense parts are likewise caused by the missing data. The missing data distort the bi-scale empirical transformation and make the transformed point pattern quite different from what would be obtained with the transformation based on the complete data. From Fig. 3b, an area S can be outlined that includes all the missing points.

To estimate what is missing in S, we need to know what S would be if the data were complete, since the S obtained by the empirical transformation in Fig. 3b is calculated from incomplete data. That is to say, we need to restore the area \(S^*\) corresponding to S under the true empirical transformation:

$$\begin{aligned} t_i^\prime = \frac{\sum _{k=1}^{n_\mathrm{all}}I(\tau _k\le t_i)}{n_\mathrm{all}}, \quad \displaystyle m_i^\prime = \frac{\sum _{k=1}^{n_\mathrm{all}}I(M_k\le m_i)}{n_\mathrm{all}}, \end{aligned}$$
(6)

where \(N_\mathrm{all}=\{(\tau _k, M_k){:}\, k=1,2,\ldots ,n_\mathrm{all}\}\) is the complete dataset that contains all the events occurring in the studied space–magnitude–time range, and \(N_\mathrm{obs}=\{(t_i, m_i){:}\, i=1,2,\ldots , n_\mathrm{obs}\}\) is a subset of \(N_\mathrm{all}\).

The third step is to restore the area corresponding to S under the true empirical transformation. Since \(N_\mathrm{all}=\{(\tau _k, M_k){:}\, k=1,2,\ldots ,n_\mathrm{all}\}\) is not completely known, we can only estimate the true bi-scale empirical transformation based on the points outside of S, where the events are assumed to be completely observed. This is done by using the following iterative method.

Set

$$\begin{aligned} \left( t_i^{(1)}, m_i^{(1)}\right) = F^{{(1)}}(t_i, m_i) \quad \text{ and }\quad S^{(1)} = F^{{(1)}}(S), \end{aligned}$$
(7)

where

$$\begin{aligned} F^{{(1)}}(t, m) ={\left( F_1^{{(1)}}(t),\, F_2^{{(1)}}(m)\right) =} \left( \frac{1}{n_{\text{obs}}} \sum _{i=1}^{n_{\text{obs}}} {{I}({t_i}<t)}, \, \frac{1}{n_{\text{obs}}} {\sum _{i=1}^{n_{\text{obs}}} {I}({m_i}<m)} \right) . \end{aligned}$$
(8)

In the above, \(S^{(1)} = F^{{(1)}}(S)\) means that \(S^{(1)}\) is the image of S under the mapping of \(F^{{(1)}}\). Starting from \(\ell =1\), repeat the following iterative computation until convergence, for example, \(\max \{|t_i^{(\ell +1)} -t_i^{(\ell )}|, |m_i^{(\ell +1)} -m_i^{(\ell )}| \}<\epsilon\), where \(\epsilon\) is a given small positive number:

$$\begin{aligned} \left( t_i^{(\ell +1)}, m_i^{(\ell +1)}\right) = F^{(\ell +1)}\left( t_i^{(\ell )}, m_i^{(\ell )}; S^{(\ell )}\right) ,\quad i=1, \,2,\,\ldots ,\, n_{\mathrm{obs}}, \end{aligned}$$
(9)
$$\begin{aligned} S^{(\ell +1)} = F^{(\ell +1)}\left( S^{(\ell )}; S^{(\ell )}\right) , \end{aligned}$$
(10)

where

$$\begin{aligned} F^{(\ell +1)}(t, m; A) =\left( \frac{\sum _{i=1}^{n_{\text{obs}}} w_1\left( t^{(\ell )}_i, m^{(\ell )}_i, A\right) \, {{I}\left( t_i^{(\ell )}< t\right) }}{\sum _{i=1}^{n_{\text{obs}}} w_1\left( t^{(\ell )}_i, m^{(\ell )}_i, A\right) }, \frac{\sum _{i=1}^{n_{\text{obs}}} w_2\left( t^{(\ell )}_i, m^{(\ell )}_i, A\right) \,{{I}\left( m_i^{(\ell )} < m \right) }}{\sum _{i=1}^{n_{\text{obs}}}w_2\left( t^{(\ell )}_i, m^{(\ell )}_i, A\right) }\right) \end{aligned}$$
(11)

with the weights defined by

$$\begin{aligned} w_1^{(\ell )}(t,m,A)= \frac{{I}\left( (t,m)\not \in A\right) }{1-\int _0^1 {I}\left( (t,\tau )\in A\right) \,\mathrm{d}\tau } \end{aligned}$$
(12)

and

$$\begin{aligned} w_2^{(\ell )}(t,m,A) = \frac{{I}\left( (t,m)\not \in A\right) }{1-\int _0^1 {I}\left( (\tau , m)\in A\right) \,\mathrm{d}\tau }, \end{aligned}$$
(13)

for any regular region \(A\subset [0,1]\times [0,1]\). Denote the convergent results by \(N^*_{{\mathrm {obs}}}=\{(t_i^*, m_i^*){:}\, i=1,\,2\,\ldots , {n_{\mathrm{obs}}}\}\) and \(S^*\).

One may ask why the iterations are necessary. This is because we need to know the image of S, which contains all the missing events, under the transformation based on the complete dataset, \(N_{\mathrm{all}}=N_{{\mathrm {obs}}}\cup N_{{\mathrm {miss}}}\), where \(N_{{\mathrm {obs}}}\) and \(N_{{\mathrm {miss}}}\) denote the sets of observed and missing events, respectively. Under this transformation, the images of all the events, missing or observed, that fall in S are nearly uniformly distributed in the image of S. Owing to the unobserved events, the image of S under \(F^{(1)}\), the bi-scale empirical transformation based on the observed data \(N_{{\mathrm {obs}}}\), differs from its image under the transformation based on the complete dataset \(N_{\mathrm{all}}\), since the events in \(N_{{\mathrm {miss}}}\) are not included in the calculation. By reweighting the observed events outside of S, i.e., the events in \(N_{{\mathrm {obs}}}\setminus S\), through Eqs. (11)–(13), the iteration in this step constructs a bi-scale transformation as close as possible to the bi-scale empirical transformation based on the complete data. At the same time, the area that contains the missing data, \(S^*\), is restored as closely as possible to the corresponding image under the transformation based on the complete dataset. This can be seen by comparing Fig. 3b with c.
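A simplified sketch of this iteration is given below. For brevity, it represents S by a vectorized membership predicate and, unlike the full algorithm, keeps the region fixed in the current coordinates rather than mapping the polygon \(S^{(\ell )}\) forward at every step; the one-dimensional integrals in Eqs. (12)–(13) are approximated on a grid. All names are ours.

```python
import numpy as np

def reweighted_transform(t, m, in_S, n_iter=100, tol=1e-6):
    """One realization of the iterative re-estimation (Eqs. 9-13).

    t, m : coordinates in [0, 1] from the first bi-scale transform
    in_S : vectorized predicate in_S(t_arr, m_arr) -> bool array, membership
           in the missing-data region (kept fixed here for simplicity)
    """
    grid = np.linspace(0.0, 1.0, 513)    # nodes for the 1-D coverage integrals
    for _ in range(n_iter):
        outside = ~in_S(t, m)
        # fraction of the magnitude axis covered by S at each event's time
        cov_t = np.array([in_S(np.full_like(grid, ti), grid).mean() for ti in t])
        # fraction of the time axis covered by S at each event's magnitude
        cov_m = np.array([in_S(grid, np.full_like(grid, mi)).mean() for mi in m])
        w1 = np.where(outside, 1.0 / np.maximum(1.0 - cov_t, 1e-12), 0.0)  # Eq. (12)
        w2 = np.where(outside, 1.0 / np.maximum(1.0 - cov_m, 1e-12), 0.0)  # Eq. (13)
        # weighted empirical distribution functions, Eq. (11)
        t_new = np.array([(w1 * (t < ti)).sum() for ti in t]) / w1.sum()
        m_new = np.array([(w2 * (m < mi)).sum() for mi in m]) / w2.sum()
        done = max(np.abs(t_new - t).max(), np.abs(m_new - m).max()) < tol
        t, m = t_new, m_new
        if done:
            break
    return t, m
```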

After the above iterations of transformations, the image of all the events (including the missing and observed events) should be approximately uniformly distributed in the unit square \([0,1]\times [0,1]\). As shown in Fig. 3c, the events outside \(S^*\) are approximately uniformly distributed. The missing events inside \(S^*\) can be replenished by refilling in a way such that the events inside it are also uniformly distributed with the same occurrence rate as the outside.

The fourth step is to refill \(S^*\), in which the events (missing and observed together) should be approximately uniformly distributed according to a homogeneous Poisson process. Consider the following theoretical conclusion: given a homogeneous Poisson process on \(S_1\cup S_2\) with an unknown occurrence rate, where \(S_1\) and \(S_2\) are disjoint, if there are k events falling in \(S_1\), then the number of events of this process falling in \(S_2\) follows a negative binomial distribution with parameters \((k, \frac{|S_1|}{|S_1|+|S_2|})\). This can be derived as follows: provided that an event of this process falls in \(S_1\cup S_2\), the probabilities that it falls in \(S_1\) and in \(S_2\) are \(|S_1|/(|S_1|+|S_2|)\) and \(|S_2|/(|S_1|+|S_2|)\), respectively. This is equivalent to a sequence of independent Bernoulli trials, where each trial has two potential outcomes, “success” (falling in \(S_{1}\)) and “failure” (falling in \(S_2\)). Then the number of failures, X, seen before the occurrence of k successes has a negative binomial distribution, \({\mathrm {NB}}(k, \frac{|S_1|}{|S_1|+|S_2|})\), with probability mass function

$$\begin{aligned} f(n; k, p) \equiv \Pr (X = n) = \left( {\begin{array}{c}n+k-1\\ n\end{array}}\right) (1-p)^n p^k \quad {\text{for }}n = 0, 1, 2, \ldots , \end{aligned}$$

where \(p=\frac{|S_1|}{|S_1|+|S_2|}\) is the success probability. Incidentally, the number of earthquakes in a given space–time–magnitude window has also been found to follow a negative binomial distribution (e.g., Dionysiou and Papadopoulos 1992; Kagan 2010). Thus, we generate a random number K from a negative binomial random variable with parameters \((k, 1-|S^*|)\), where \(|S^*|\) is the area of \(S^*\), and

$$\begin{aligned} k= \sum _{i=1}^{n_{\mathrm{obs}}} {I}((t_i^*,m_i^*)\not \in S^*)=\#(N^*_{{\mathrm {obs}}}\setminus S^*){,} \end{aligned}$$

is the number of events outside \(S^*\), with “\(\#\)” representing the number of elements of a set. Then we generate K random events independently, identically, and uniformly distributed in \(S^*\). Denote these newly generated events by \(N^*_{{\mathrm {rep}}}\). Since some observed points already lie in \(S^*\), we keep them and remove the same number of simulated points: for each event of \(N^*_{{\mathrm {obs}}}\) that falls in \(S^*\), sequentially remove from \(N^*_{{\mathrm {rep}}}\) the event closest to it. The output of this step is shown in Fig. 3d.
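A sketch of this refilling step under the same conventions (the region \(S^*\) given as a membership predicate with known area; all names are ours). NumPy's negative_binomial(k, p) draws the number of failures before the k-th success with success probability p, matching the convention above with \(p=1-|S^*|\):

```python
import numpy as np

rng = np.random.default_rng(1)

def refill(points, in_S, area_S):
    """Fourth step: replenish the restored missing region S*.

    points : (n_obs, 2) array of transformed events (t*, m*) in [0,1]^2
    in_S   : vectorized predicate for membership in S*
    area_S : |S*|, the area of S* within the unit square
    """
    inside = in_S(points[:, 0], points[:, 1])
    k = int((~inside).sum())                      # observed events outside S*
    K = rng.negative_binomial(k, 1.0 - area_S)    # total count to place in S*

    # draw K points uniformly on S* by rejection sampling from the unit square
    new = np.empty((0, 2))
    while len(new) < K:
        cand = rng.uniform(size=(2 * K + 16, 2))
        cand = cand[in_S(cand[:, 0], cand[:, 1])]
        new = np.vstack([new, cand])
    new = new[:K]

    # keep the observed events inside S*: drop, for each, its nearest simulated point
    for pt in points[inside]:
        if len(new) == 0:
            break
        j = np.argmin(np.sum((new - pt) ** 2, axis=1))
        new = np.delete(new, j, axis=0)
    return new        # replenished events, still in the transformed domain
```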

The final step is to convert the resulting \(N^*_{{\mathrm {rep}}}\) from the above steps back to the original observation space \([0,T]\times M\) through linear interpolation:

$$\begin{aligned} s_j = {{\mathrm {LI}}}\left( s^*_j;\, [0,\,t^*_1,\,t^*_2,\, \ldots ,\, t^*_{n_{\mathrm{obs}}},1],\,[0,\,t_1,\,t_2,\, \ldots ,\, t_{n_{\mathrm{obs}}}, \, T]\right) , \end{aligned}$$
(14)
$$\begin{aligned} v_j = {{\mathrm {LI}}} \left( v^*_j;\, [0,\,m^*_1,\,m^*_2,\, \ldots ,\, m_{n_{\mathrm{obs}}}^*],\,[0,\,m_1,\,m_2,\, \ldots ,\, m_{n_{\mathrm{obs}}}]\right) , \end{aligned}$$
(15)

for each \((s_j^*, v_j^*) \in N^*_{{\mathrm {rep}}}\), where \({{\mathrm {LI}}}(x;\, A,\, B)\) denotes the piecewise-linear interpolant that maps each component of A to the corresponding component of B, evaluated at x. Denote the set consisting of all \((s_j, v_j)\) by \(N_{{\mathrm {rep}}}\). Then \(N_{{\mathrm {rep}}}\) is the final output of the replenishment (Fig. 3e).
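The back-transformation is ordinary piecewise-linear interpolation, e.g., with NumPy's interp, which requires an increasing knot sequence; a minimal sketch (names ours):

```python
import numpy as np

def back_transform(x_star, knots_star, knots_orig):
    """Final step (Eqs. 14-15): map replenished coordinates back to the
    original observation scale via the piecewise-linear interpolant LI."""
    knots_star = np.asarray(knots_star, dtype=float)
    knots_orig = np.asarray(knots_orig, dtype=float)
    order = np.argsort(knots_star)          # np.interp needs sorted knots
    return np.interp(x_star, knots_star[order], knots_orig[order])

# times:      back_transform(s_star, [0.0, *t_star, 1.0], [0.0, *t_obs, T])
# magnitudes: back_transform(v_star, [0.0, *m_star],      [0.0, *m_obs])
```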

Figure 3f shows the comparison between the cumulative frequencies of events in the original and the replenished datasets, from which it can be seen that about 60% of M1.0+ events are missing.

Fig. 3
figure 3

Results from applying the replenishing algorithm to the earthquake data from the Kumamoto aftershock region. a Magnitudes versus occurrence times of the earthquake events. b Rescaled magnitudes versus rescaled occurrence times of the recorded events, transformed by the bi-scale empirical transformation. c Rescaled magnitudes versus rescaled occurrence times of the observed events, with the rescaling based on the empirical distributions estimated from the events outside of S. d Rescaled magnitudes versus rescaled occurrence times of the observed and replenished events, i.e., the newly generated events after removing those closest to the observed events in S, with the rescaling based on the empirical distributions of the events outside of S. e Magnitudes versus occurrence times of the observed events and the replenished events. f Cumulative numbers of events against occurrence times for the original dataset (gray curve) and for the replenished dataset (black curve). The blue polygons in a–d are the area S and its corresponding mappings, in which the missing events fall. Green dots in d and e are the replenished events

Influence of short-term missing on the estimates of ETAS parameters

Table 2 shows the results from fitting the ETAS model with different magnitude thresholds to the original and the replenished datasets. For easy comparison, they are also plotted in Fig. 4. When a low magnitude threshold is used, the ETAS parameters estimated from the original dataset differ from those estimated from the replenished dataset. When the magnitude threshold is above 3.0, which is approximately the magnitude of completeness of the original dataset, the estimated ETAS parameters are about the same for both datasets. The main features are as follows:

1. The first striking feature is that the \(\alpha\) value stays almost fixed around 2.0 for the replenished data, while for the original data it increases from 0.22 to 2.0 as the cutoff magnitude is raised. As mentioned in Ogata (1988, 1999), a small \(\alpha\) implies that the seismicity is swarm-like, while a large \(\alpha\) implies a mainshock–aftershock sequence. The high \(\alpha\) value in this analysis is the more reasonable one, since this sequence is clearly an aftershock sequence. It is not difficult to explain why low \(\alpha\) values are obtained when the magnitude threshold is lowered for the original dataset: the estimation procedure wrongly classifies aftershocks at the later stage as secondary aftershocks triggered by other aftershocks in the sequence.

2. For the replenished dataset, the estimated background rate \(\mu\) decreases exponentially as the cutoff magnitude is increased, which can be explained by the Gutenberg–Richter magnitude–frequency relation, while no such pattern is clear for the original dataset (Fig. 4a).

3. The K value ranges from 0.007 to 0.055 for the original dataset and from 0.002 to 0.008 for the replenished dataset (Fig. 4b). Since this parameter is not easy to interpret directly, A, as defined in Eq. (2), is also plotted. Figure 4c shows that the estimate of A increases gradually from 0.03 to 0.11 for the replenished data, while for the original data it decreases from 1.2 to around 0.1 as the cutoff magnitude changes from 1.0 to 3.8. For a bursting mainshock–aftershock sequence, a small A value together with a high \(\alpha\) value is a typical characteristic, implying that most of the aftershocks are directly triggered by very few major shocks, or even only by the mainshock.

4. The c and p values in the Omori-type temporal decay are nearly constant for the replenished data but not for the original dataset. This indicates that the missing of small events in the early stage of the aftershock sequence causes the instability of the estimates of the Omori–Utsu formula, as pointed out by Utsu et al. (1995).

Table 2 Results from fitting the ETAS model to the original and replenished datasets

The above analysis indicates that the short-term missing of aftershocks causes serious biases in the estimation of the model parameters. It is not difficult to imagine that such biases propagate into probability forecasts of seismicity on timescales of weeks or months and cause large errors. Once the missing data are replenished by using the algorithm, these biases can be corrected to a great degree.

Fig. 4
figure 4

ETAS parameters estimated from the Kumamoto aftershock sequence with different magnitude thresholds: a \(\mu\), b K, c A, d c, e \(\alpha\), and f p. The red and black dots are the estimates based on the original and the replenished datasets, respectively

Detecting change point by using the replenished dataset

It is interesting to know whether the seismicity pattern changes during the sequence, especially after the occurrence of the second major shock. When entangled with the short-term missing-data problem, this question is difficult to tackle, since the model cannot be estimated stably. In this section, we compare the results from applying change-point detection techniques to the original and the replenished datasets.

The main technique to detect seismicity change is using the transformed time sequence (Ogata 1988). Given a point process \(N=\{t_i: i=1,\, 2,\,\ldots , \, n\}\), which is determined by a conditional intensity \(\lambda (t)\), the following transformation

$$\begin{aligned} t_i\rightarrow \tau _i =\int _0^{t_i} \lambda (u)\, \text{d}u \end{aligned}$$
(16)

transforms N into a stationary Poisson process with unit rate (the standard Poisson process), namely \(N^\prime =\{\tau _i{:}\, i=1,\,2,\, \ldots ,\, n\}\). The process \(N^\prime\) is called the transformed time sequence. The true \(\lambda (t)\) is always unknown in real data analysis. If we replace \(\lambda (t)\) in the above equation by \(\hat{\lambda }(t)\), a good approximation of the true model, we again obtain a transformed time sequence that is approximately a standard Poisson process. If the transformed time sequence deviates significantly from the standard Poisson process, we can conclude that the model does not fit the data well. To see whether the seismicity pattern changes after the occurrence of the second major earthquake, one can first fit the ETAS model to the seismicity data up to just before the second major earthquake, then calculate the transformed time sequence and extend the calculation past the occurrence of the second major earthquake.
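A sketch of Eq. (16) for the temporal ETAS model, again using the closed-form integral of the Omori–Utsu kernel (assuming \(p\ne 1\); all names are ours):

```python
import numpy as np

def transformed_times(times, mags, mu, K, alpha, c, p, m_c):
    """Residual analysis: tau_i = integral of the fitted intensity
    over [0, t_i] (Eq. 16), assuming p != 1."""
    times = np.asarray(times, dtype=float)
    kappa = K * np.exp(alpha * (np.asarray(mags, dtype=float) - m_c))
    taus = np.empty_like(times)
    for i, t in enumerate(times):
        prev = times < t
        dt = t - times[prev]
        kern = (c ** (1 - p) - (dt + c) ** (1 - p)) / (p - 1)
        taus[i] = mu * t + np.sum(kappa[prev] * kern)
    return taus
```

Plotting the event index i against \(\tau_i\) should follow the diagonal when the fitted model describes the data; a persistent fall below the diagonal after the second major shock is the signature of relative quiescence seen in Figs. 5 and 6.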

The confidence bands of the transformed time sequence have been studied by Ogata (1988, 1989). In this study, the problem is treated from another viewpoint: since the transformed time sequence of an ideal model is a standard Poisson process, statistics related to the Poisson process can be used to construct the confidence band. Following Schoenberg (2002), the cumulative frequency curve \((\tau _i=\int _0^{t_i} \hat{\lambda }(u)\, \text{d}u,\, i)\) always connects \((0,\,0)\) and \((T,\, n)\), where \(\hat{\lambda }(u)\) is the model estimated from the earthquake data in \([0,\,T]\) by maximum likelihood and \(n=N[0,T]\). For each positive integer k with \(k<n\), the confidence interval for \(\tau _k\) can be obtained from the distribution of nZ, where Z is a random variable that obeys a beta distribution with parameters \((k+1, n-k+1)\); when \(k>n\), \(\tau _k-n\) can be approximated by a gamma distribution with shape parameter \(k-n\) and scale parameter 1. We refer to Schoenberg (2002) for details.
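Point-wise band limits based on these two distributions can be sketched as follows (assuming the statements above, i.e., \(\tau_k\) distributed as nZ for \(k\le n\) and \(\tau_k-n\) approximately gamma for \(k>n\); the function is ours, the scipy calls are standard):

```python
from scipy import stats

def tau_band(k, n, level=0.99):
    """Point-wise confidence limits for the transformed time tau_k,
    given n events observed in the fitting interval [0, T]."""
    lo = (1.0 - level) / 2.0
    q = [lo, 1.0 - lo]
    if k <= n:
        # tau_k is distributed as n * Z with Z ~ Beta(k + 1, n - k + 1)
        return n * stats.beta.ppf(q, k + 1, n - k + 1)
    # extrapolation beyond T: tau_k - n ~ Gamma(k - n, scale = 1)
    return n + stats.gamma.ppf(q, k - n, scale=1.0)
```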

Firstly, the ETAS model is fitted to the original dataset with different cutoff magnitudes on the target interval \([0, T_1]\), where \(T_1=14.40\) (in days from the beginning of the study period) is just before the occurrence time of the second major shock. No stable results are obtained if the cutoff magnitude is less than 2.2. After the model parameters are estimated, the transformed time sequence is calculated, and the same calculation is extended to \(T_2=15.059\), just before the mainshock, i.e., the third major earthquake. The results are shown in Fig. 5. A pattern of relative quiescence can be seen between the occurrence times of the second and the third major earthquakes. A similar result is reported by Kumazawa et al. (2016). However, one may argue that this pattern might be caused by the missing of some smaller events, since (1) small gaps at the bottom of Fig. 5d can be found at \(\tau \approx 300\), 400, and 500, and (2) the quiescence starts at about \(\tau \approx 300\), not at the occurrence of the second major earthquake.

The same procedure is applied to the replenished data. Stable results can be obtained when the cutoff magnitude is no less than 1.2. Fitting results for the cutoff magnitude of 1.2 are shown in Fig. 6. One can see that the quiescence starts almost immediately after the second major earthquake occurs: the cumulative frequency curve drops out of the 99% confidence band quickly after the second major earthquake in the transformed time domain. This is similar to many foreshock–mainshock–aftershock sequences, in which a drop of activity in the foreshock swarm is observed just before the mainshock, such as the \(M_S\)7.3 Haicheng earthquake in China on February 4, 1975 (Wang et al. 2006) and the recent large M8.1 earthquake in Chile on April 1, 2014 (Papadopoulos and Minadakis 2016).

To verify our results, we also fit the ETAS model to the original dataset with higher magnitude thresholds, M2.5 and M3.0. Quiescence is also found in the corresponding results, although it does not start immediately after the second major quake in the transformed time domain. Nevertheless, this quiescence starts much earlier than in the results obtained with M2.2 as the cutoff magnitude.

In summary, detecting relative quiescence with respect to the ETAS model becomes rather complicated when short-term missing of aftershocks exists, and data replenishment can plausibly correct the biases that such missing causes. In the Kumamoto sequence, the seismicity became relatively quiescent almost immediately after the occurrence of the second major event.

Fig. 5
figure 5

Detection of relative quiescence before the mainshock by using the original catalog with events of magnitudes 2.2 or above. a Observed (solid curve) and predicted (dashed curve) cumulative frequencies in the time domain. b Observed (solid curve) and predicted (dashed curve) cumulative frequencies in the transformed time domain. c, d are the plots of event magnitudes versus occurrence times and the transformed times, respectively

Fig. 6
figure 6

Detection of relative quiescence before the mainshock by using the replenished dataset with events of magnitudes 1.2 or above. a Observed (solid curve) and predicted (dashed curve) cumulative frequencies in the time domain. b Observed (solid curve) and predicted (dashed curve) cumulative frequencies in the transformed time domain. c, d are the plots of event magnitudes versus occurrence times and the transformed times, respectively

Discussion and conclusions

To study the seismicity of the Kumamoto aftershock sequence, the ETAS model is first fitted to the original dataset. The estimated parameters vary dramatically when the magnitude threshold changes. When the magnitude threshold is much lower than the completeness level, the estimates give a lower \(\alpha\) and a higher p value, implying that the influence of the short-term missing of aftershocks on the estimates of the ETAS parameters should not be ignored. When short-term missing of aftershocks exists, detection of change points in seismicity also becomes complicated.

In many studies, the completeness threshold is determined by visually inspecting the overall magnitude–frequency curve or by applying detection methods (see Huang et al. 2016, and the references therein) to the whole catalog. None of these methods can effectively detect the magnitude threshold of completeness in the short term immediately after the mainshock, whereas the estimates of the ETAS model parameters are mainly determined by short-term clustering. To avoid the biases that such short-term missing causes in the estimation of the ETAS parameters, it is important to find a reliable magnitude threshold of completeness by inspecting a figure like Fig. 2 or by using a replenishing method such as the one introduced in this study.

Such short-term missing of small aftershocks can be replenished by the generic method proposed by Zhuang and Wang (2016), which is designed for replenishing missing data in marked temporal point processes and makes use only of the assumption that the marks and occurrence times of the events are independent, regardless of how the events interact on the time axis. The key point of this method is an algorithm that iteratively estimates the missing area in the transformed domain from the parts where the data are completely recorded. When the missing events are replenished by this method, the ETAS parameter estimates become much more stable and consistent as the magnitude threshold varies. The results show that this replenishment method helps us to evaluate the influence of missing data and to correct the biases it causes.

The results show that the Kumamoto aftershock sequence is a complex one, but still mainly of mainshock–aftershock type, with only the three major earthquakes producing most of the aftershocks; this can be seen from the high \(\alpha\) value. There are also different seismicity phases during the sequence. In particular, the relative quiescence after the occurrence of the second major earthquake can be regarded as an anomaly prior to the mainshock. It is worthwhile to extend the ETAS-based analysis to the whole aftershock sequence of this M7.3 mainshock in future research, for example, to investigate whether the foreshock and aftershock activities are characterized by different ETAS parameters and how many phase changes there are in the aftershock sequence.

Also, the ETAS model is shown to be a stable model. The variations in the estimated ETAS parameters with different magnitude thresholds in past studies may be caused by the influence of short-term missing of small events. This conclusion needs to be verified by further studies.

The b-value, the key parameter that characterizes the magnitude distribution, might change during an earthquake sequence. However, when small aftershocks are missing in the short term, the temporal variation of the detection ability is usually unknown, and extracting changes in the b-value while simultaneously estimating that variation suffers from an identifiability problem. If the magnitude distribution does not change dramatically, the generic algorithm can still be used to tackle, to some extent, the issues caused by the short-term missing of small aftershocks.

References

  • Agnew DC (2014) Equalized plot scales for exploring seismicity data. Seismol Res Lett 85(4):775–780. doi:10.1785/0220130214


• Daley DJ, Vere-Jones D (2003) An introduction to the theory of point processes—volume I: elementary theory and methods, 2nd edn. Springer, New York


  • Dionysiou DD, Papadopoulos GA (1992) Poissonian and negative binomial modelling of earthquake time series in the Aegean area. Phys Earth Planet Inter 71(3):154–165. doi:10.1016/0031-9201(92)90073-5


  • Enescu B, Mori J, Miyazawa M (2007) Quantifying early aftershock activity of the 2004 mid-Niigata prefecture earthquake \(({M_w}6.6)\). J Geophys Res Solid Earth 112(B4):B004629. doi:10.1029/2006JB004629


  • Enescu B, Mori J, Miyazawa M, Kano Y (2009) Omori–Utsu law \(c\)-values associated with recent moderate earthquakes in Japan. Bull Seismol Soc Am 99(2A):884–891. doi:10.1785/0120080211


  • Hainzl S (2016) Rate-dependent incompleteness of earthquake catalogs. Seismol Res Lett 87(2A):337–344. doi:10.1785/0220150211


  • Huang Y-L, Zhou S-Y, Zhuang J-C (2016) Numerical tests on catalog-based methods to estimate magnitude of completeness. Chin J Geophys 59(3):266–275. doi:10.6038/cjg20160416


  • Iwata T (2008) Low detection capability of global earthquakes after the occurrence of large earthquakes: Investigation of the Harvard CMT catalogue. Geophys J Int 174(3):849–856. doi:10.1111/j.1365-246X.2008.03864.x


  • Iwata T (2013) Estimation of completeness magnitude considering daily variation in earthquake detection capability. Geophys J Int 194(3):1909–1919. doi:10.1093/gji/ggt208


  • Iwata T (2014) Decomposition of seasonality and long-term trend in seismological data: a Bayesian modelling of earthquake detection capability. Aust N Z J Stat 56(3):201–215. doi:10.1111/anzs.12079


  • Kagan YY (2010) Statistical distributions of earthquake numbers: consequence of branching process. Geophys J Int 180(3):1313. doi:10.1111/j.1365-246X.2009.04487.x


• Kumazawa T, Ogata Y, Tsuruoka H (2016) Statistical monitoring of seismicity in Kyushu district before the occurrence of the 2016 Kumamoto earthquakes of M6.5 and M7.3. Report of the Coordinating Committee for Earthquake Prediction, 96

  • Marsan D, Enescu B (2012) Modeling the foreshock sequence prior to the 2011, \({M_W}\)9.0 Tohoku, Japan, earthquake. J Geophys Res Solid Earth 117(B6):B06316. doi:10.1029/2011JB009039


  • Nanjo KZ, Ishibe T, Tsuruoka H, Schorlemmer D, Ishigaki Y, Hirata N (2010) Analysis of the completeness magnitude and seismic network coverage of Japan. Bull Seismol Soc Am 100(6):3261–3268. doi:10.1785/0120100077


  • Ogata Y (1988) Statistical models for earthquake occurrences and residual analysis for point processes. J Am Stat Assoc 83(401):9–27. doi:10.1080/01621459.1988.10478560


  • Ogata Y (1989) Statistical model for standard seismicity and detection of anomalies by residual analysis. Tectonophysics 169(1–3):159–174. doi:10.1016/0040-1951(89)90191-1


  • Ogata Y (1998) Space–time point-process models for earthquake occurrences. Ann Inst Stat Math 50(2):379–402. doi:10.1023/A:1003403601725


  • Ogata Y (1999) Seismicity analysis through point-process modeling: a review. Pure Appl Geophys 155(2):471–507. doi:10.1007/s000240050275


  • Ogata Y (2006) Monitoring of anomaly in the aftershock sequence of the 2005 earthquake of M7.0 off coast of the western Fukuoka, Japan, by the ETAS model. Geophys Res Lett 33:L01303. doi:10.1029/2005GL024405


  • Ogata Y, Vere-Jones D (2003) Examples of statistical models and methods applied to seismology and related earth physics. In: Lee WH, Kanamori H, Jennings PC, Kisslinger C (eds) International handbook of earthquake and engineering seismology, chapter 82, vol 81B. International Association of Seismology and Physics of Earth’s Interior, London


  • Omi T, Ogata Y, Hirata Y, Aihara K (2013) Forecasting large aftershocks within one day after the main shock. Sci Rep 3:2218. doi:10.1038/srep02218


  • Omi T, Ogata Y, Hirata Y, Aihara K (2014) Estimating the ETAS model from an early aftershock sequence. Geophys Res Lett 41(3):850–857. doi:10.1002/2013GL058958


  • Omi T, Ogata Y, Hirata Y, Aihara K (2015) Intermediate-term forecasting of aftershocks from an early aftershock sequence: Bayesian and ensemble forecasting approaches. J Geophys Res Solid Earth 120(4):2561–2578. doi:10.1002/2014JB011456


  • Papadopoulos GA, Minadakis G (2016) Foreshock patterns preceding great earthquakes in the subduction zone of Chile. Pure Appl Geophys 173(10):3247–3271. doi:10.1007/s00024-016-1337-5


  • Peng Z, Vidale JE, Ishii M, Helmstetter A (2007) Seismicity rate immediately before and after main shock rupture from high-frequency waveforms in Japan. J Geophys Res Solid Earth 112(B3):B03306. doi:10.1029/2006JB004386


  • Sawazaki K, Enescu B (2014) Imaging the high-frequency energy radiation process of a main shock and its early aftershock sequence: The case of the 2008 Iwate-Miyagi Nairiku earthquake, Japan. J Geophys Res Solid Earth 119(6):4729–4746. doi:10.1002/2013JB010539


  • Schoenberg F (2002) On rescaled Poisson processes and the Brownian bridge. Ann Inst Stat Math 54(2):445–457. doi:10.1023/A:1022494523519


  • Utsu T, Ogata Y, Matsu’ura RS (1995) The centenary of the Omori formula for a decay law of aftershock activity. J Phys Earth 43(1):1–33. doi:10.4294/jpe1952.43.1


  • Wang K, Chen Q-F, Sun S, Wang A (2006) Predicting the 1975 Haicheng earthquake. Bull Seismol Soc Am 96(3):757–795. doi:10.1785/0120050191


  • Wang Q, Jackson DD, Zhuang J (2010) Missing links in earthquake clustering models. Geophys Res Lett 37(21):L21307. doi:10.1029/2010GL044858


  • Zhuang J, Wang T (2016) Correcting biases in the estimates of earthquake clustering parameters caused by short-term missing of aftershocks. Japan Geoscience Union Meeting 2016, Makuhari, Chiba, Japan, 22–26 May 2016


Authors’ contributions

JZ carried out the data analysis and drafted the manuscript. TW and JZ designed the replenishing algorithm. YO participated in designing the study and partially performed the explanation of the results. All authors read and approved the final manuscript.

Acknowledgements

This project is partially supported by KAKENHI 2624004 and 26280006 from the Japan Society for the Promotion of Science and the Marsden Fund administered by the Royal Society of New Zealand. The authors thank the editor, Prof. Manabu Hashimoto, and three anonymous reviewers for their helpful and constructive comments.

Competing interests

The authors declare that they have no competing interests.

Author information


Corresponding author

Correspondence to Jiancang Zhuang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhuang, J., Ogata, Y. & Wang, T. Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters. Earth Planets Space 69, 36 (2017). https://doi.org/10.1186/s40623-017-0614-6

