 Full paper
 Open Access
Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters
Earth, Planets and Space volume 69, Article number: 36 (2017)
Abstract
This study investigates the missing data problem in the Japan Meteorological Agency catalog of the Kumamoto aftershock sequence, which started on April 14, 2016, in Japan. Based on the assumption that earthquake magnitudes are independent of their occurrence times, we replenish the short-term missing data of small earthquakes by using a bi-scale transformation and study their influence on the maximum likelihood estimates (MLEs) of the epidemic-type aftershock sequence (ETAS) parameters by comparing the analysis results from the original and the replenished datasets. The results show that the MLEs of the ETAS parameters vary when the model is fitted to the recorded catalog with different cutoff magnitudes, whereas they remain stable for the replenished dataset. Further analysis shows that the seismicity became quiescent after the occurrence of the second major shock, which can be regarded as a precursory phenomenon of the subsequent \(M_J7.3\) mainshock. This relative quiescence is demonstrated more clearly by the analysis of the replenished dataset.
Background
Beginning on April 14, 2016, an earthquake sequence burst in the Kumamoto region of Kyushu Island, Japan, on the Hinagu and Futagawa faults, which lie at the southern end of the Median Tectonic Line, forking in two directions from the Beppu-Haneyama Fault Zone. One of the significant features of this sequence is that it included three M6+ earthquakes: a magnitude 7.3 mainshock, which struck at 01:25 JST on April 16, 2016, beneath Kumamoto City at a depth of about 10 km, and two foreshocks, one with a magnitude of 6.5 at 21:26 JST on April 14, 2016, at a depth of about 11 km and the other with a magnitude of 6.4 at 00:03 JST on April 15, 2016, at a depth of about 7 km (Table 1). The earthquakes claimed 49 lives through collapsed houses and induced landslides.
This study aims to quantify the seismicity patterns of this sequence by using the ETAS model. After Ogata (1988) proposed this model and extended it into a space–time version (Ogata 1998), it has become a popular model for standard short-term clustering of seismicity. The assumptions of this model are: (1) The background seismicity is a stationary Poisson process; (2) every event, no matter whether it is a background event or triggered by a previous event, triggers its own offspring independently; (3) the expected number of direct offspring is an increasing function of the magnitude of the mother event; and (4) the time lags between triggered events and the mother event follow the Omori–Utsu formula. Mathematically, this model can be formulated by its conditional intensity function
\[ \lambda (t\mid {\mathcal {H}}_t) = \lim _{\Delta \downarrow 0}\frac{\mathrm{E}\left[ N[t, t+\Delta )\mid {\mathcal {H}}_t\right] }{\Delta } = \mu + \sum _{i{:}\, t_i<t} k\, e^{\alpha (m_i-m_c)}\, (t-t_i+c)^{-p}, \]
where \(N=\{(t_i, m_i){:}\, i=1, 2,\ldots , n\}\) is the sequence of earthquake occurrence times and magnitudes, \(N[t, t+{\Delta })=1\) if any of \(\{t_i{:}\, i=1, 2,\ldots , n\}\) falls in \([t, t+{\Delta })\) and \(N[t, t+{\Delta })=0\) otherwise, \({\mathcal {H}}_t\) represents the observation history up to time t but not including t, and the parameters \(\mu\), k, \(\alpha\), c, and p are constants to be estimated from the data. In the above equation, \(\mu\) represents the background seismicity rate, and \(\alpha\) represents the difference in triggering efficiency among events of different magnitudes. For easier explanation, we introduce another parameter
\[ A = \int _0^{\infty } k\,(t+c)^{-p}\,\mathrm{d}t = \frac{k\, c^{1-p}}{p-1}, \qquad p>1, \]
which represents the productivity of an event of magnitude \(m_c\). The parameters can be estimated through the maximum likelihood estimate (MLE). Given the observation series, \(N=\{(t_i, m_i){:}\, i=1, 2,\ldots , n\}\), in a time interval [0, T], the logarithm of the likelihood can be written as (Daley and Vere-Jones 2003, Chap. 7)
\[ \log L = \sum _{i=1}^{n} \log \lambda (t_i\mid {\mathcal {H}}_{t_i}) - \int _0^{T} \lambda (u\mid {\mathcal {H}}_u)\,\mathrm{d}u. \]
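As a concrete illustration, the ETAS conditional intensity and log-likelihood can be sketched in a few lines of Python. This assumes the common parameterization \(\lambda (t\mid {\mathcal {H}}_t)=\mu +\sum _{i{:}\,t_i<t} k e^{\alpha (m_i-m_c)}(t-t_i+c)^{-p}\); the function names are illustrative, not from the paper.

```python
import math

def etas_intensity(t, history, mu, k, alpha, c, p, mc):
    """ETAS conditional intensity at time t, given past events (t_i, m_i) with t_i < t."""
    rate = mu
    for ti, mi in history:
        if ti < t:
            rate += k * math.exp(alpha * (mi - mc)) * (t - ti + c) ** (-p)
    return rate

def etas_loglik(events, T, mu, k, alpha, c, p, mc):
    """log L = sum_i log lambda(t_i | H_{t_i}) - integral_0^T lambda(u | H_u) du,
    using the closed-form integral of each Omori kernel over [t_i, T]."""
    ll = 0.0
    for i, (ti, mi) in enumerate(events):
        ll += math.log(etas_intensity(ti, events[:i], mu, k, alpha, c, p, mc))
    integral = mu * T
    for ti, mi in events:
        kappa = k * math.exp(alpha * (mi - mc))
        if p == 1.0:
            integral += kappa * (math.log(T - ti + c) - math.log(c))
        else:
            integral += kappa * (c ** (1.0 - p) - (T - ti + c) ** (1.0 - p)) / (p - 1.0)
    return ll - integral
```

In practice the MLE is obtained by maximizing `etas_loglik` numerically over \((\mu , k, \alpha , c, p)\); this sketch only evaluates the objective.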
In the practice of data analysis with the ETAS model, there are two major difficulties: one is the choice of the cutoff magnitude threshold, and the other is the short-term missing of small events. It has been shown that the estimated model parameters vary greatly when the magnitude threshold changes (Ogata 1998); this problem was also carefully studied by Wang et al. (2010). To avoid the short-term aftershock-missing problem when fitting the ETAS model or the Omori–Utsu formula, the early period of aftershocks is usually skipped. However, this method cannot be easily used when multiple sequences are included in the data. It is therefore important to know how the short-term missing of aftershocks influences the estimates of the ETAS parameters.
Many efforts have been made to fix the problem of missing small aftershocks in the early stage of an earthquake sequence. One observational approach is to use waveform-based earthquake detection methods (e.g., Enescu et al. 2007, 2009; Peng et al. 2007; Marsan and Enescu 2012; Hainzl 2016), which have found many aftershocks that are unrecorded in the catalog. Another observational approach gives up describing the earthquake process as a sequence of discrete events and instead regards it as a stream of energy in order to assess the effect of early aftershock incompleteness (Sawazaki and Enescu 2014). Among statistical approaches, based on the Gutenberg–Richter magnitude–frequency relation and using Bayesian analysis techniques with smoothness priors, Ogata and his colleagues investigated the incompleteness of earthquake catalogs (Ogata and Vere-Jones 2003; Iwata 2008, 2013, 2014) and developed methods for probabilistic earthquake forecasting that take missing earthquakes into account (e.g., Ogata 2006; Omi et al. 2013, 2014, 2015). A non-Bayesian procedure that corrects such temporally varying incomplete detection of earthquakes can be found in Marsan and Enescu (2012), who assumed that the b-value is constant and that the occurrence rate of earthquakes follows the Omori–Utsu formula or the ETAS model.
Zhuang and Wang (2016) proposed a generic algorithm for replenishing missing data in the record of a temporal point process with time-independent marks. They verified this algorithm through simulations and applied it to the record of the aftershock sequence following the 2008 Wenchuan \(M_S\)7.9 earthquake in Sichuan Province, China, where up to 30% of the small M3+ events in the whole aftershock sequence were missing. Their results confirmed the hypothesis of Utsu et al. (1995) that missing small events in the early stage of an aftershock sequence cause instability in the estimates of the Omori–Utsu formula.
In the following sections, the completeness of the catalog is investigated, and the missing data are then replenished using the approach proposed by Zhuang and Wang (2016). By comparing the results from fitting the ETAS model to the original and the replenished datasets, the influence of the missing data problem on the estimates of the ETAS parameters can be understood, which is helpful for producing more reliable aftershock forecasts.
Data
We use the JMA catalog in this study. The spatial range of data selection is \(128{\sim }133^\circ\)E, \(30{\sim }35^\circ\)N, and the time range is April 1, 2016, 00:00:00 to April 21, 2016, 24:00:00. Figure 1 shows the epicenter locations of the selected earthquakes. We choose a wide region so that nearby earthquakes can be included as background seismicity. To see how small earthquakes are missing from the catalog, we plot the magnitudes, dithered with random rounding errors that are independently, identically, and uniformly distributed in [−0.05, 0.05], against the sequential event numbers, i.e., with the timescale equalized for each event, as shown in Fig. 2. Such a figure gives information on how the earthquake magnitude structure changes with time (e.g., Agnew 2014). If the dataset is complete, such a plot shows a homogeneous pattern along the horizontal axis, as in the right half of Fig. 2. We can see that the largest missing events reach magnitudes of no less than 3.0 immediately after the first and the third major shocks, much higher than the usual completeness level of the network in this area, which goes down to about 0.5 for shallow events (up to 30 km deep) and about 1.0 for slightly deeper events (30–60 km) (Nanjo et al. 2010; Iwata 2013).
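The dithering described above can be sketched as follows; the magnitude values and the `dither` function are illustrative, and the fixed seed is only for reproducibility.

```python
import random

def dither(magnitudes, half_width=0.05, seed=1):
    """Add uniform noise in [-half_width, half_width] to magnitudes rounded to 0.1,
    so that horizontal bands in the magnitude-vs-index plot break up."""
    rng = random.Random(seed)
    return [m + rng.uniform(-half_width, half_width) for m in magnitudes]

mags = [3.0, 1.2, 0.8, 2.5, 1.1]       # hypothetical catalog magnitudes
jittered = dither(mags)
# Plotting `jittered` against range(len(mags)) (the sequential event number)
# yields an equalized-timescale magnitude plot like Fig. 2.
```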
Data replenishment
Heuristically, the missing data points can be replenished by adding points into the blank parts of Fig. 2 that are due to the missing small earthquakes, in such a way that the new plot shows a homogeneous pattern along the equalized time axis. Roughly speaking, there should be enough small earthquakes in the same time periods during which big events occur. The algorithm proposed by Zhuang and Wang (2016) is based on this idea. In the following, we apply this algorithm to replenish the data and explain it step by step.
The first step is to transform the entire observed dataset \(\{(t_i, m_i){:}\, i=1,2,\ldots , n_\mathrm{obs}\}\) onto the unit square \([0,1]\times [0,1]\):
\[ t_i^\prime = \frac{1}{n_\mathrm{obs}}\sum _{j=1}^{n_\mathrm{obs}} I(t_j\le t_i), \qquad m_i^\prime = \frac{1}{n_\mathrm{obs}}\sum _{j=1}^{n_\mathrm{obs}} I(m_j\le m_i), \]
where I is a logical function defined by \(I(\text{statement})=1\) if the statement is true and \(I(\text{statement})=0\) otherwise.
If the magnitudes and the occurrence times are independent of each other and the magnitudes are independent and identically distributed random variables, then \(\{(t_i^\prime , m_i^\prime ){:}\, i=1,2,\ldots ,n_\mathrm{obs}\}\) forms a homogeneous pattern in the unit square. When the magnitudes and the occurrence times are not independent of each other, for instance, when some small events are missing in particular periods, the resulting point pattern is no longer homogeneous, as shown in Fig. 3b.
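The bi-scale empirical transformation of this first step can be sketched as below; `biscale_transform` is an illustrative name, and the implementation simply evaluates the two empirical distribution functions.

```python
import bisect

def biscale_transform(events):
    """Map each (t_i, m_i) to (fraction of t_j <= t_i, fraction of m_j <= m_i),
    i.e., the bi-scale empirical transformation onto the unit square."""
    n = len(events)
    ts = sorted(t for t, _ in events)
    ms = sorted(m for _, m in events)
    return [(bisect.bisect_right(ts, t) / n, bisect.bisect_right(ms, m) / n)
            for t, m in events]
```

Because both coordinates are empirical CDF values, a complete dataset with time-independent magnitudes maps to an approximately uniform pattern in the unit square, which is exactly the homogeneity being checked here.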
The second step is to judge whether there are missing events in the point pattern of \(\{(t_i^\prime , m_i^\prime ){:}\, i=1,2,\ldots ,n_\mathrm{obs}\}\). In Fig. 3b, the blank area implies that short-term missing of aftershocks exists, and the overly dense parts are also caused by the missing data. The missing data, which distort the bi-scale empirical transformation, make the transformed point pattern quite different from what it would be under the transformation based on the complete data. According to Fig. 3b, an area S can be delineated that includes all the missing points.
To estimate what is missing in S, we need to know what S should look like when the data are complete, since the S obtained by the empirical transformation in Fig. 3b is calculated from incomplete data. That is to say, we need to restore the area \(S^*\) corresponding to S under the true empirical transformation
\[ F{:}\ (t, m) \mapsto \left( \frac{1}{n_\mathrm{all}}\sum _{k=1}^{n_\mathrm{all}} I(\tau _k\le t),\ \frac{1}{n_\mathrm{all}}\sum _{k=1}^{n_\mathrm{all}} I(M_k\le m)\right) , \]
where \(N_\mathrm{all}=\{(\tau _k, M_k){:}\, k=1,2,\ldots ,n_\mathrm{all}\}\) is the complete dataset that contains all the events occurring in the studied space–magnitude–time range, and \(N_\mathrm{obs}=\{(t_i, m_i){:}\, i=1,2,\ldots , n_\mathrm{obs}\}\) is a subset of \(N_\mathrm{all}\).
The third step is to restore the area corresponding to S under the true empirical transformation. Since \(N_\mathrm{all}\) is not completely known, we can only estimate the true bi-scale empirical transformation based on the points outside of S, where the events are assumed to be completely observed. This is done by using the following iterative method.
Set \(t_i^{(1)} = t_i^\prime\), \(m_i^{(1)} = m_i^\prime\), and \(S^{(1)} = F^{(1)}(S)\), where \(F^{(1)}\) denotes the bi-scale empirical transformation defined in the first step.
In the above, \(S^{(1)} = F^{{(1)}}(S)\) means that \(S^{(1)}\) is the image of S under the mapping \(F^{{(1)}}\). Starting from \(\ell =1\), repeat the following iterative computation until convergence, for example, until \(\max _i\{|t_i^{(\ell +1)} - t_i^{(\ell )}|,\, |m_i^{(\ell +1)} - m_i^{(\ell )}|\}<\epsilon\), where \(\epsilon\) is a given small positive number:
where
with the weights defined by
and
for any regular region \(A\subset [0,1]\times [0,1]\). Denote the convergent results by \(N^*_{{\mathrm {obs}}}=\{(t_i^*, m_i^*){:}\, i=1,2,\ldots , n_{\mathrm{obs}}\}\) and \(S^*\).
One may ask why the iterations are necessary. This is because we need to know the image of \(S\), which contains all the missing events, under the transformation based on the complete dataset, \(N_{\mathrm{all}}=N_{{\mathrm {obs}}}\cup N_{{\mathrm {miss}}}\), where \(N_{{\mathrm {obs}}}\) and \(N_{{\mathrm {miss}}}\) denote the sets of observed and missing events, respectively. The images of all the events, missing or observed, that fall in S are nearly uniformly distributed in the image of \(S\) under this transformation. Owing to the existence of the unobserved events, the image of \(S\) under \(F^{(1)}\), the bi-scale empirical transformation based on the observed data \(N_{{\mathrm {obs}}}\), differs from its image under the transformation based on the complete dataset \(N_{\mathrm{all}}\), since events in \(N_{{\mathrm {miss}}}\) are not included in the calculation. By reweighting the observed events outside of S, i.e., events in \(N_{{\mathrm {obs}}}\setminus S\), using Eqs. (11)–(13), the iteration in this step constructs a bi-scale transformation as close as possible to the bi-scale empirical transformation based on the complete data. At the same time, the area that contains the missing data, \(S^*\), is restored as close as possible to the corresponding image under the transformation based on the complete dataset. This can be seen by comparing Fig. 3b with c.
After the above iterations of transformations, the images of all the events (missing and observed) should be approximately uniformly distributed in the unit square \([0,1]\times [0,1]\). As shown in Fig. 3c, the events outside \(S^*\) are approximately uniformly distributed. The missing events inside \(S^*\) can then be replenished by refilling it in such a way that the events inside are also uniformly distributed, with the same occurrence rate as outside.
The fourth step is to refill \(S^*\), in which the events (including missing and observed) should be approximately uniformly distributed according to a homogeneous Poisson process. Consider the following theoretical conclusion: given a homogeneous Poisson process on \(S_1\cup S_2\) with an unknown occurrence rate, where \(S_1\) and \(S_2\) are disjoint regions with areas \(|S_1|\) and \(|S_2|\), if k events fall in \(S_1\), then the number of events of this process falling in \(S_2\) follows a negative binomial distribution with parameters \((k, \frac{|S_1|}{|S_1|+|S_2|})\). This can be derived in the following way: provided that an event of this process falls in either \(S_1\) or \(S_2\), the probabilities that it falls in \(S_1\) and in \(S_2\) are \(|S_1|/(|S_1|+|S_2|)\) and \(|S_2|/(|S_1|+|S_2|)\), respectively. This is equivalent to a sequence of independent Bernoulli trials, where each trial has two potential outcomes, “success” (falling in \(S_{1}\)) and “failure” (falling in \(S_2\)). The random number of failures, X, observed before the occurrence of the kth success then has a negative binomial distribution, \({\mathrm {NB}}(k, \frac{|S_1|}{|S_1|+|S_2|})\), with probability mass function
\[ \Pr \{X=x\} = \binom{k+x-1}{x}\, p^{k}\,(1-p)^{x}, \qquad x=0,1,2,\ldots , \]
where \(p=\frac{|S_1|}{|S_1|+|S_2|}\). It is interesting that the number of earthquakes in a given space–time–magnitude window also follows a negative binomial distribution (e.g., Dionysiou and Papadopoulos 1992; Kagan 2010). Thus, we generate a random number K from a negative binomial distribution with parameters \((k, 1-|S^*|)\), where \(|S^*|\) is the area of \(S^*\) and
\[ k = \#\left( N^*_{{\mathrm {obs}}}\setminus S^*\right) \]
is the number of events outside \(S^*\), with “\(\#\)” representing the number of elements of a set. Then we generate K random events independently, identically, and uniformly distributed in \(S^*\). Denote these newly generated events by \(N^*_{{\mathrm {rep}}}\). Since there are already some observed points in \(S^*\), we keep them and remove the same number of simulated points: for each event of \(N^*_{{\mathrm {obs}}}\) that falls in \(S^*\), we sequentially remove from \(N^*_{{\mathrm {rep}}}\) the event closest to it. The output of this step is shown in Fig. 3d.
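The refilling step above can be sketched as follows. The function names are illustrative, and representing \(S^*\) by a membership test with rejection sampling from the unit square is our assumption for simplicity, not the paper's procedure.

```python
import random

def nb_sample(k, p, rng):
    """Number of failures before the k-th success in independent Bernoulli(p) trials,
    i.e., a draw from NB(k, p)."""
    failures, successes = 0, 0
    while successes < k:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

def refill_region(in_region, area, k_outside, rng):
    """Draw K ~ NB(k_outside, 1 - area) and place K points uniformly in S* by
    rejection sampling from the unit square; `in_region` tests membership in S*."""
    K = nb_sample(k_outside, 1.0 - area, rng)
    points = []
    while len(points) < K:
        u = (rng.random(), rng.random())
        if in_region(u):
            points.append(u)
    return points
```

Here the success probability \(1-|S^*|\) is the area outside \(S^*\), and `k_outside` plays the role of k, the number of observed events outside \(S^*\).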
The final step is to convert the resulting \(N^*_{{\mathrm {rep}}}\) from the above steps back to the original observational space \([0,T]\times M\) through linear interpolation:
\[ s_j = {\mathrm {LI}}\bigl (s_j^*,\ \{t_i^*\},\ \{t_i\}\bigr ), \qquad v_j = {\mathrm {LI}}\bigl (v_j^*,\ \{m_{(i)}^*\},\ \{m_{(i)}\}\bigr ), \]
for each \((s_j^*, v_j^*) \in N^*_{{\mathrm {rep}}}\), where \({{\mathrm {LI}}}(x, A, B)\) represents the linear interpolation value at x of the piecewise-linear function that maps each component of A to the corresponding component of B, and \((\cdot )_{(i)}\) denotes the ith smallest element. Denote the set consisting of all \((s_j, v_j)\) by \(N_{{\mathrm {rep}}}\). Then \(N_{{\mathrm {rep}}}\) is the final output (Fig. 3e).
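A minimal stand-in for the \({\mathrm {LI}}(x, A, B)\) operator can be sketched as below, assuming A is sorted in increasing order; outside the range of A the end values are returned, which is one reasonable convention but an assumption on our part.

```python
import bisect

def linear_interp(x, xs, ys):
    """LI(x, xs, ys): value at x of the piecewise-linear function mapping xs[i] -> ys[i].
    xs must be sorted in increasing order; outside its range, the end values are used."""
    j = bisect.bisect_left(xs, x)
    if j <= 0:
        return ys[0]
    if j >= len(xs):
        return ys[-1]
    x0, x1 = xs[j - 1], xs[j]
    y0, y1 = ys[j - 1], ys[j]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

In the back-transform, `xs` would hold the unit-square coordinates \(t_i^*\) (or the sorted \(m_{(i)}^*\)) and `ys` the corresponding observed times \(t_i\) (or sorted magnitudes \(m_{(i)}\)).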
Figure 3f shows the comparison between the cumulative frequencies of events in the original and the replenished datasets, from which it can be seen that about 60% of M1.0+ events are missing.
Influence of shortterm missing on the estimates of ETAS parameters
Table 2 shows the results from fitting the ETAS model with different magnitude thresholds to the original and the replenished datasets. For easy comparison, they are also plotted in Fig. 4. With a low magnitude threshold, the ETAS parameters estimated from the original dataset differ from those estimated from the replenished dataset. When the magnitude threshold is above 3.0, which is approximately the magnitude of completeness of the original dataset, the estimated ETAS parameters are about the same for both datasets.

1. The first striking feature is that the \(\alpha\) value stays almost fixed around 2.0 for the replenished data, while for the original data it increases from 0.22 to 2.0 as the cutoff magnitude is increased. As mentioned in Ogata (1988, 1999), a small \(\alpha\) implies that the seismicity is swarm-like, while a large \(\alpha\) implies mainshock–aftershock sequences. The high \(\alpha\) value in this analysis is the more reasonable one, since this sequence is clearly an aftershock sequence. It is not difficult to explain why low \(\alpha\) values are obtained when lowering the magnitude threshold for the original dataset: the estimation procedure wrongly classifies aftershocks at the later stage as secondary aftershocks triggered by earlier aftershocks in the sequence.

2. For the replenished dataset, the estimated background rate \(\mu\) decreases exponentially when the cutoff magnitude is increased, which can be explained by the Gutenberg–Richter magnitude–frequency relation, while such a pattern is not clear for the original dataset (Fig. 4a).

3. The K value ranges from 0.007 to 0.055 for the original dataset and from 0.002 to 0.008 for the replenished dataset (Fig. 4b). Since this parameter is not easy to interpret directly, A, as defined in Eq. (2), is also plotted. Figure 4c shows that the estimate of A increases gradually from 0.03 to 0.11 for the replenished data, while for the original data it decreases from 1.2 to around 0.1 when the cutoff magnitude changes from 1.0 to 3.8. For a bursting mainshock–aftershock sequence, a small A value and a high \(\alpha\) value are typical characteristics, implying that most of the aftershocks are directly triggered by very few major shocks, or even only by the mainshock.

4. The c and p values in the Omori-type temporal decay are nearly constant for the replenished data but not for the original dataset. This indicates that missing small events in the early stage of the aftershock sequence cause instability in the estimates of the Omori–Utsu formula, as pointed out by Utsu et al. (1995).
The results from the above analysis indicate that the short-term missing of aftershocks causes serious biases in the estimation of the model parameters. It is not difficult to imagine that such biases propagate into probability forecasts of seismicity at timescales of weeks or months and cause large errors. After the missing data are replenished by using the algorithm, the biases can be corrected to a great degree.
Detecting change point by using the replenished dataset
It is interesting to know whether the seismicity pattern changed during the sequence, especially after the occurrence of the second major shock. This question is difficult to tackle in the presence of the short-term missing data problem, since the model cannot be estimated stably. In this section, we compare the results from applying change-point detection techniques to both the original and the replenished datasets.
The main technique for detecting seismicity change is the transformed time sequence (Ogata 1988). Given a point process \(N=\{t_i{:}\, i=1, 2,\ldots , n\}\), which is determined by a conditional intensity \(\lambda (t)\), the transformation
\[ \tau _i = \int _0^{t_i} \lambda (u)\,\mathrm{d}u \]
transforms N into a stationary Poisson process with unit rate (the standard Poisson process), namely \(N^\prime =\{\tau _i{:}\, i=1,2,\ldots ,n\}\). The process \(N^\prime\) is called the transformed time sequence. The true \(\lambda (t)\) is always unknown in real data analysis. If we replace \(\lambda (t)\) in the above equation by \(\hat{\lambda }(t)\), a good approximation of the true model, we again obtain a transformed time sequence that is approximately a standard Poisson process. If the transformed time sequence deviates significantly from the standard Poisson process, we can conclude that the model does not fit the data well. To see whether the seismicity pattern changed after the occurrence of the second major earthquake, one can first fit the ETAS model to the seismicity data up to just before the second major earthquake, calculate the transformed time sequence, and then extend the calculation beyond the occurrence of the second major earthquake.
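The time transformation can be sketched numerically as below. The cumulative trapezoidal grid is our illustrative choice (for an actual fitted ETAS \(\hat{\lambda }\) the integral has a closed form); the function name and grid size are assumptions.

```python
def transformed_times(event_times, lam, T, n_grid=10000):
    """Approximate tau_i = int_0^{t_i} lam(u) du for each event time, using a
    cumulative trapezoidal rule on a regular grid over [0, T]."""
    grid = [T * j / n_grid for j in range(n_grid + 1)]
    vals = [lam(u) for u in grid]
    cum = [0.0]
    for j in range(1, n_grid + 1):
        cum.append(cum[-1] + 0.5 * (vals[j - 1] + vals[j]) * (grid[j] - grid[j - 1]))
    return [cum[min(int(t / T * n_grid), n_grid)] for t in event_times]
```

If the fitted model is adequate, the returned \(\tau _i\) should behave like the event times of a standard (unit-rate) Poisson process.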
The confidence bands of the transformed time sequence were studied by Ogata (1988, 1989). In this study, this problem is treated from another viewpoint: since the transformed time sequence is a standard Poisson process under an ideal model, statistics related to the Poisson process can be used to construct the confidence band. Following Schoenberg (2002), the cumulative frequency curve \((\tau _i=\int _0^{t_i} \hat{\lambda }(u)\, \text{d}u,\, i)\) always connects \((0,\,0)\) and \((T,\, n)\), where \(\hat{\lambda }(u)\) is the model estimated from the earthquake data in \([0,\,T]\) by maximum likelihood and \(n=N[0,T]\). For each positive integer k with \(k<n\), the confidence interval for \(\tau _k\) is the same as that of kZ, where Z is a random variable following a beta distribution with parameters \((k+1, n-k+1)\); when \(k>n\), \(\tau _k\) can be approximated by a gamma distribution with shape parameter \(k-n\) and scale parameter 1. We refer to Schoenberg (2002) for details.
First, the ETAS model is fitted to the original dataset, with different cutoff magnitudes, over the target interval [0, \(T_1\)], where \(T_1=14.40\) is just before the occurrence time of the second major shock. No stable results are obtained if the cutoff magnitude is less than 2.2. After the model parameters are estimated, the transformed time sequence is calculated and the same calculation is extended to \(T_2=15.059\), just before the mainshock (the third major earthquake). The results are shown in Fig. 5. Relative quiescence can be seen between the occurrence times of the second and the third major earthquakes. A similar result is reported by Kumazawa et al. (2016). However, one may argue that it might be caused by the missing of some smaller events, since (1) small gaps at the bottom of Fig. 5d can be found at \(\tau \approx 300\), 400, and 500 and (2) the quiescence starts at about \(\tau \approx 300\), not at the occurrence of the second major earthquake.
The same procedure is applied to the replenished data. Stable results can be obtained when the cutoff magnitude is no less than 1.2. Fitting results from data with a cutoff magnitude of 1.2 are shown in Fig. 6. One can see that the quiescence starts almost immediately after the second major earthquake occurs. The cumulative frequency curve drops outside of the 99% confidence band quickly after the second major earthquake in the transformed time domain. This is similar to many cases of foreshock–mainshock–aftershock sequences, in which a drop of activity in the foreshock swarm is observed just before the mainshock, such as before the \(M_S\)7.3 Haicheng earthquake in China on February 4, 1975 (Wang et al. 2006) and the recent large M8.1 earthquake in Chile on April 1, 2014 (Papadopoulos and Minadakis 2016).
To verify our results, we also fit the ETAS model to the original dataset with some higher magnitude thresholds, M2.5 and M3.0. Quiescence is also found in the corresponding results, but does not occur immediately after the second major quake in the transformed time domain. However, such quiescence occurs much earlier than in the results when using M2.2 as the cutoff magnitude.
In summary, detecting relative quiescence with respect to the ETAS model becomes rather complicated when shortterm missing of aftershocks exists. Data replenishment can correct the biases caused by it in a plausible way. In the Kumamoto sequence, seismicity becomes relatively quiescent almost immediately after the occurrence of the second major event.
Discussion and conclusions
To study the seismicity of the Kumamoto aftershock sequence, the ETAS model is first fitted to the original dataset. The estimated parameters vary dramatically when the magnitude threshold changes. When the magnitude threshold is much lower than the completeness level, the estimates give a lower \(\alpha\) and a higher p value, implying that the influence of short-term missing of aftershocks on the estimates of the ETAS parameters should not be ignored. When short-term missing of aftershocks exists, detection of change points in seismicity also becomes complicated.
In many studies, the completeness threshold is determined by visually inspecting the overall magnitude–frequency curve or by applying detection methods (see Huang et al. 2016, and the references therein) to the whole catalog. None of these methods can effectively detect the magnitude threshold of completeness in the short term immediately after the mainshock, while the estimates of the ETAS model parameters are mainly determined by short-term clustering. To avoid biases in the estimation of the ETAS parameters caused by such short-term missing, it is important to find a reliable magnitude threshold of completeness by inspecting a figure like Fig. 2 or by using a replenishing method such as the one introduced in this study.
Such short-term missing of small aftershocks can be replenished by using the generic method proposed by Zhuang and Wang (2016), which is designed for replenishing missing data in marked temporal point processes and relies only on the assumption that the marks and occurrence times of the events are independent, regardless of how the events interact on the time axis. The key point of this method is an algorithm that iteratively estimates the missing area in the transformed domain from the parts where the data are completely recorded. When the missing events are replenished by this method, the ETAS parameter estimates are much more stable and consistent as the magnitude threshold varies. The results show that this replenishment method helps us to evaluate the influence of missing data and to correct the bias it causes.
The results show that the Kumamoto aftershock sequence is a complex one, but still mainly a mainshock–aftershock sequence, with only the three major earthquakes producing most of the aftershocks. This can be seen from the high \(\alpha\) value. There are also different seismicity phases during this sequence. In particular, the relative quiescence after the occurrence of the second major earthquake can be regarded as an anomaly prior to the mainshock. It is worthwhile to extend the analysis based on the ETAS model to the whole aftershock sequence of this M7.3 mainshock in future research. For example, we can investigate whether the foreshock and aftershock activities are characterized by different ETAS parameters and how many phase changes there are in the aftershock sequence.
Also, the ETAS model is shown to be a stable model. The variations in the estimated ETAS parameters with different magnitude thresholds in past studies may have been caused by the influence of short-term missing of small events. This conclusion needs to be verified by further studies.
The b-value, the key parameter that characterizes the magnitude distribution, might change during an earthquake sequence. However, in the case of short-term missing of small aftershocks, the variation of the detection rate is usually unknown. Extracting changes in the b-value while simultaneously estimating the temporal variation of detection capability poses an identifiability problem. If the magnitude distribution does not change dramatically, the generic algorithm is still usable to tackle, to some extent, the issues caused by the short-term missing of small aftershocks.
References
Agnew DC (2014) Equalized plot scales for exploring seismicity data. Seismol Res Lett 85(4):775–780. doi:10.1785/0220130214
Daley DJ, Vere-Jones D (2003) An introduction to the theory of point processes—volume 1: elementary theory and methods, 2nd edn. Springer, New York
Dionysiou DD, Papadopoulos GA (1992) Poissonian and negative binomial modelling of earthquake time series in the Aegean area. Phys Earth Planet Inter 71(3):154–165. doi:10.1016/0031-9201(92)90073-5
Enescu B, Mori J, Miyazawa M (2007) Quantifying early aftershock activity of the 2004 mid-Niigata prefecture earthquake \(({M_w}6.6)\). J Geophys Res Solid Earth 112(B4):B004629. doi:10.1029/2006JB004629
Enescu B, Mori J, Miyazawa M, Kano Y (2009) Omori–Utsu law \(c\)values associated with recent moderate earthquakes in Japan. Bull Seismol Soc Am 99(2A):884–891. doi:10.1785/0120080211
Hainzl S (2016) Rate-dependent incompleteness of earthquake catalogs. Seismol Res Lett 87(2A):337–344. doi:10.1785/0220150211
Huang YL, Zhou SY, Zhuang JC (2016) Numerical tests on catalogbased methods to estimate magnitude of completeness. Chin J Geophys 59(3):266–275. doi:10.6038/cjg20160416
Iwata T (2008) Low detection capability of global earthquakes after the occurrence of large earthquakes: investigation of the Harvard CMT catalogue. Geophys J Int 174(3):849–856. doi:10.1111/j.1365-246X.2008.03864.x
Iwata T (2013) Estimation of completeness magnitude considering daily variation in earthquake detection capability. Geophys J Int 194(3):1909–1919. doi:10.1093/gji/ggt208
Iwata T (2014) Decomposition of seasonality and long-term trend in seismological data: a Bayesian modelling of earthquake detection capability. Aust N Z J Stat 56(3):201–215. doi:10.1111/anzs.12079
Kagan YY (2010) Statistical distributions of earthquake numbers: consequence of branching process. Geophys J Int 180(3):1313. doi:10.1111/j.1365-246X.2009.04487.x
Kumazawa T, Ogata Y, Tsuruoka H (2016) Statistical monitoring of seismicity in Kyushu district before the occurrence of the 2016 Kumamoto earthquakes of M6.5 and M7.3. Report of the Coordinating Committee for Earthquake Prediction, 96
Marsan D, Enescu B (2012) Modeling the foreshock sequence prior to the 2011 \({M_W}\)9.0 Tohoku, Japan, earthquake. J Geophys Res Solid Earth 117(B6):B06316. doi:10.1029/2011JB009039
Nanjo KZ, Ishibe T, Tsuruoka H, Schorlemmer D, Ishigaki Y, Hirata N (2010) Analysis of the completeness magnitude and seismic network coverage of Japan. Bull Seismol Soc Am 100(6):3261–3268. doi:10.1785/0120100077
Ogata Y (1988) Statistical models for earthquake occurrences and residual analysis for point processes. J Am Stat Assoc 83(401):9–27. doi:10.1080/01621459.1988.10478560
Ogata Y (1989) Statistical model for standard seismicity and detection of anomalies by residual analysis. Tectonophysics 169(1–3):159–174. doi:10.1016/0040-1951(89)90191-1
Ogata Y (1998) Space–time pointprocess models for earthquake occurrences. Ann Inst Stat Math 50(2):379–402. doi:10.1023/A:1003403601725
Ogata Y (1999) Seismicity analysis through pointprocess modeling: a review. Pure Appl Geophys 155(2):471–507. doi:10.1007/s000240050275
Ogata Y (2006) Monitoring of anomaly in the aftershock sequence of the 2005 earthquake of M7.0 off coast of the western Fukuoka, Japan, by the ETAS model. Geophys Res Lett 33:L01303. doi:10.1029/2005GL024405
Ogata Y, Vere-Jones D (2003) Examples of statistical models and methods applied to seismology and related earth physics. In: Lee WH, Kanamori H, Jennings PC, Kisslinger C (eds) International handbook of earthquake and engineering seismology, chapter 82, vol 81B. International Association of Seismology and Physics of the Earth's Interior, London
Omi T, Ogata Y, Hirata Y, Aihara K (2013) Forecasting large aftershocks within one day after the main shock. Sci Rep 3:2218. doi:10.1038/srep02218
Omi T, Ogata Y, Hirata Y, Aihara K (2014) Estimating the ETAS model from an early aftershock sequence. Geophys Res Lett 41(3):850–857. doi:10.1002/2013GL058958
Omi T, Ogata Y, Hirata Y, Aihara K (2015) Intermediate-term forecasting of aftershocks from an early aftershock sequence: Bayesian and ensemble forecasting approaches. J Geophys Res Solid Earth 120(4):2561–2578. doi:10.1002/2014JB011456
Papadopoulos GA, Minadakis G (2016) Foreshock patterns preceding great earthquakes in the subduction zone of Chile. Pure Appl Geophys 173(10):3247–3271. doi:10.1007/s00024-016-1337-5
Peng Z, Vidale JE, Ishii M, Helmstetter A (2007) Seismicity rate immediately before and after main shock rupture from high-frequency waveforms in Japan. J Geophys Res Solid Earth 112(B3):B03306. doi:10.1029/2006JB004386
Sawazaki K, Enescu B (2014) Imaging the high-frequency energy radiation process of a main shock and its early aftershock sequence: the case of the 2008 Iwate-Miyagi Nairiku earthquake, Japan. J Geophys Res Solid Earth 119(6):4729–4746. doi:10.1002/2013JB010539
Schoenberg F (2002) On rescaled Poisson processes and the Brownian bridge. Ann Inst Stat Math 54(2):445–457. doi:10.1023/A:1022494523519
Utsu T, Ogata Y, Matsu’ura RS (1995) The centenary of the Omori formula for a decay law of aftershock activity. J Phys Earth 43(1):1–33. doi:10.4294/jpe1952.43.1
Wang K, Chen QF, Sun S, Wang A (2006) Predicting the 1975 Haicheng earthquake. Bull Seismol Soc Am 96(3):757–795. doi:10.1785/0120050191
Wang Q, Jackson DD, Zhuang J (2010) Missing links in earthquake clustering models. Geophys Res Lett 37(21):L21307. doi:10.1029/2010GL044858
Zhuang J, Wang T (2016) Correcting biases in the estimates of earthquake clustering parameters caused by short-term missing of aftershocks. Japan Geoscience Union Meeting 2016, Makuhari, Chiba, Japan, 22–26 May 2016
Authors’ contributions
JZ carried out the data analysis and drafted the manuscript. TW and JZ designed the replenishing algorithm. YO participated in designing the study and contributed to the interpretation of the results. All authors read and approved the final manuscript.
Acknowledgements
This project is partially supported by KAKENHI 2624004 and 26280006 from the Japan Society for the Promotion of Science and the Marsden Fund administered by the Royal Society of New Zealand. The authors thank the editor, Prof. Manabu Hashimoto, and three anonymous reviewers for their helpful and constructive comments.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Zhuang, J., Ogata, Y. & Wang, T. Data completeness of the Kumamoto earthquake sequence in the JMA catalog and its influence on the estimation of the ETAS parameters. Earth Planets Space 69, 36 (2017). https://doi.org/10.1186/s40623-017-0614-6
Keywords
 Kumamoto earthquake
 ETAS model
 Missing data imputation
 Relative quiescence
 Short-term aftershock missing