### Main field and secular variation

The Kalman filter and smoothing algorithms provide a model in terms of a mean solution and an associated covariance matrix. Combining these two quantities gives precise knowledge of the locations where the solution is reliable and where it is not. As an illustration for the main field, i.e., the sum of the core field and the lithospheric field expanded up to SH degree \(\ell =20\), Fig. 2 shows at different epochs the radial component of the mean field (isocontours) and its associated standard deviation (color maps). Locations where the maps are red correspond to locations where the mean solution is likely to deviate strongly from the true field. Conversely, within blue and purple areas the model predicts that the true field and the mean predicted field are close. These maps are complemented at their bottom right by a global measure of the predicted uncertainty. It corresponds to the r.m.s. standard deviation, given in nT and expressed as:

$$\begin{aligned} {\bar{\sigma }} = \sqrt{\int _\Omega \sigma ^2 \mathrm {d}\Omega / \int _\Omega \mathrm {d} \Omega } \ , \end{aligned}$$

(19)

where \(\sigma\) is the standard deviation associated with the radial component of the field and \(\Omega\) is the Earth’s surface.
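As a concrete illustration, Eq. (19) can be discretized on a regular colatitude/longitude grid with \(\sin \theta\) area weights. The following sketch assumes such a grid; the function name and array layout are ours, not part of the model code:

```python
import numpy as np

def rms_sigma(sigma, colat):
    """Area-weighted r.m.s. of a standard-deviation map on a regular
    colatitude/longitude grid: a discrete version of Eq. (19),
    sqrt( int sigma^2 dOmega / int dOmega ) with dOmega = sin(theta) dtheta dphi."""
    w = np.sin(colat)[:, None] * np.ones_like(sigma)  # surface-element weights
    return np.sqrt(np.sum(w * sigma**2) / np.sum(w))

# Sanity check: a uniform 50 nT map has an r.m.s. of exactly 50 nT.
colat = np.linspace(0.5, 179.5, 180) * np.pi / 180.0
sigma = np.full((180, 360), 50.0)
print(rms_sigma(sigma, colat))  # → 50.0
```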

Until the 1960s, uncertainty maps exhibit a strong dichotomy between the Northern and Southern hemispheres. Whereas in the North the standard deviation associated with the radial component of the field generally does not exceed 25 nT, it reaches and even exceeds 50 nT in the South. The difference in predicted uncertainty is particularly pronounced between land and oceanic surfaces, reflecting the lack of measurements taken over the latter. The location where the field is best resolved is Europe, a benefit of the high density of ground-based observatories operating in this region during this time period. When looking at the r.m.s. standard deviation, the year 1920 slightly stands out with \({\bar{\sigma }} = 44\) nT, whereas this value oscillates around \(\bar{\sigma } \sim 50\) nT in 1910, 1930 and 1940. This can be explained by the multiple land and marine surveys occurring around this epoch, which offer a large data coverage of the globe (see Fig. 1). In 1960, the global resolution of the model is improved and the North–South dichotomy mostly disappears. Two reasons explain this gain in accuracy. The first is the dense spatial coverage of survey data at this epoch (see Fig. 1). The second is the time proximity of the POGO mission, which started in 1965. One can also observe that observatories still play an important role in reducing the posterior variability, as is the case in and around Europe and Japan. In 1970, the jump in accuracy of the model is striking. At this period, lying within the POGO era, the standard deviation associated with the Kalmag solution is strongly reduced. However, the model predicts a higher possible variability around the magnetic dip equator. This phenomenon is the manifestation of the Backus effect, or more generally of "perpendicular error" effects, within the model.
Indeed, as first recognized by Backus (1970) and then generalized by Lowes (1975), when constructing a geomagnetic field model from intensity measurements alone, larger errors will contaminate the model near the equator. This effect certainly affects our mean solution, but the covariance information enables us to quantify it. With MagSat observations, which cover less than a year (1979–1980), the model precision is equivalent to the one obtained with POGO data, except around the dip equator where vector field measurements eliminate the "perpendicular error" effects induced by the assimilation of intensity data. The map in 1990 highlights the importance of low-orbiting satellites for recovering the Earth's magnetic field. Lying between the MagSat and Oersted missions, in the middle of almost 20 years without satellite measurements, the solution obtained at this time is strongly degraded. It presents levels of uncertainty equivalent to the 1960 ones, except in Northern America and Russia where the coverage with ground-based observatories has since been increased. The situation is ameliorated with Oersted measurements and becomes even better with CHAMP and Swarm observations. With the high-quality instrumentation of the CHAMP and Swarm satellites, the model is extremely precise, and this almost everywhere at the Earth's surface. It is nevertheless worth noting that the constellation of Swarm satellites yields a slightly more accurate solution than the single CHAMP spacecraft.

When looking at the mean secular variation (SV) and its associated standard deviation, displayed at similar epochs in Fig. 3, one can observe that the dichotomy in accuracy between the North and the South is also present for this quantity. The dichotomy persists until the year 2000, but with a lower contrast after 1960. Ground-based observatory data are of particular importance to constrain the secular variation, as locations where their density is high always coincide with areas of low posterior variability. Globally, uncertainties decrease with time except between 1970 and 2000, where the r.m.s. standard deviation fluctuates due to the lack of persistent low-orbiting satellite missions. In addition, the distribution of uncertainties over the different spatial scales is not homogeneous. Instead, small scales typically exhibit a higher posterior variability relative to their mean signal than large scales. This effect can be observed in Fig. 4, where time series between 1900 and 2022 of the \(68.2\%\) confidence interval associated with some selected SH coefficients are displayed in red. In this figure, it is clearly visible that the larger the degree of the coefficient (from left to right and top to bottom), the larger its posterior standard deviation relative to its mean value. The COV-OBS.x2 model of Huder et al. (2020) exhibits a similar behavior, as its predicted \(68.2\%\) confidence intervals (blue areas) show. Although the two models are mostly consistent with one another, small differences can nevertheless be distinguished, in particular in the predicted standard deviations. Until \(\sim 1920\) their level is lower for COV-OBS.x2; the two models are equivalent until \(\sim 1960\), and afterwards the level is lower for Kalmag.

To precisely characterize the spatio-temporal resolution of the secular variation over the model time span, we computed the ratio \(C_{{\dot{g}}}(\ell ,k)\) between the Fourier power spectra of the mean secular variation and of its associated standard deviation over 20-year time windows. This quantity, which was proposed by Gillet et al. (2015), can be expressed as:

$$\begin{aligned} C_{{\dot{g}}}(\ell ,k) = \sum _{m=-\ell }^{\ell }\left\Vert E\left[ \hat{{\dot{g}}}_{c,\ell ,m}(k)\right] \right\Vert ^2 \Big/ \sum _{m=-\ell }^{\ell } \left\Vert \sigma _{\hat{{\dot{g}}}_{c},\ell ,m}(k)\right\Vert ^2 , \end{aligned}$$

(20)

where \(\hat{{\dot{g}}}_{c,\ell ,m}(k)\) is the Fourier transform of the secular variation, and \(\sigma _{\hat{{\dot{g}}}_c,\ell ,m}(k)\) is its associated standard deviation. To estimate the latter quantity, we used an ensemble of 1024 Fourier transforms of secular variation time series. In Fig. 5, \(C_{{\dot{g}}}(\ell ,k)\) is displayed for 6 different time windows. The blue and red areas correspond to spatio-temporal scales which are, respectively, well resolved and not resolved. At early times, between 1900 and 1920, only a limited range of temporal scales of the SV up to SH degree \(\ell =4\) is resolved. The situation slightly improves between 1920 and 1960, where some signal up to SH degree \(\ell =6\) can be accurately recovered, down to periods of a few years for the largest spatial scales. The emergence of satellite missions and the increase of ground-based observatory and survey data help improve the model resolution between 1960 and 2000. During this time interval, some spherical harmonic coefficients up to degree \(\ell =5\) are either partially or fully resolved down to time periods shorter than a year. Reaching such a temporal resolution is impossible with secular variation data derived from annual differences of observatory measurements. It can therefore only be achieved thanks to the high temporal coverage of satellite and survey data. In agreement with our previous results and with the study of Gillet (2019), the secular variation is best resolved during the CHAMP and Swarm eras, where spatial scales up to \(\ell =15\) can be partially resolved down to periods of approximately 5 years, and 2-year fluctuations can be very well captured up to \(\ell =10\).
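The ratio of Eq. (20) can be sketched numerically, assuming the ensemble of SV coefficient time series is held in an array (the function name and array layout are ours, not the paper's):

```python
import numpy as np

def resolution_ratio(ens):
    """Spectral resolution ratio of Eq. (20) for one SH degree ell.
    `ens` has shape (n_members, n_orders, n_times): an ensemble of secular
    variation coefficient time series for all orders m of that degree.
    Numerator: power of the ensemble-mean Fourier transform, summed over m.
    Denominator: ensemble variance of the Fourier transform, summed over m."""
    ft = np.fft.rfft(ens, axis=-1)
    num = (np.abs(ft.mean(axis=0))**2).sum(axis=0)
    den = ft.var(axis=0).sum(axis=0)  # variance of a complex array is real
    return num / den

# A coherent oscillation buried in weak ensemble noise is resolved (large C)
# at its own frequency (bin 8 of a 256-sample series of period 32) only.
rng = np.random.default_rng(0)
t = np.arange(256)
ens = np.sin(2 * np.pi * t / 32.0) + 0.01 * rng.standard_normal((1024, 1, 256))
C = resolution_ratio(ens)
print(C[8] > 100.0 * C[3])  # → True
```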

### Lithospheric field

As previously mentioned, the lithospheric field model was built in multiple steps. During the CHAMP and Swarm eras, it was fully modeled up to SH degree \(\ell =150\). After applying the smoothing algorithm, the lithospheric field in 2000.5 was divided into three parts. In the first one, between \(\ell =1\) and \(\ell =100\), the full smoothing solution (mean and associated covariance matrix) was kept. In the second part, between \(\ell =101\) and \(\ell =150\), only mean and variance information were considered. Finally, between \(\ell =151\) and \(\ell =1000\), a zero mean and the variance derived from equation 7 with the parameters of Table 3 were imposed a priori. The Kalman filter algorithm was then launched backward in time with this prior lithospheric field between 2000.5 and 1900.

Keeping only variance information within the Kalman filter algorithm is a strong approximation. Before implementing it, this approximation was tested during the CHAMP and Swarm eras. For this evaluation phase, the lithospheric field was fully modeled up to \(\ell =30\) and partially modeled (keeping only variance information) between \(\ell =31\) and \(\ell =150\). The remaining part of the model was simulated normally, and the dataset used is the one described in the "Data" section. The resulting model is referred to as the PR model. With this setup, comparisons with the solution obtained at full resolution (FR model) can be performed. In a first simulation, it was observed that the posterior variance associated with the approximated solution had a tendency to be underestimated. In particular, the transition between the degree variance (the sum of the variances at a given degree) at SH degrees \(\ell =30\) and \(\ell =31\) exhibited a pronounced discontinuity. To partially correct this effect, variances beyond the transition were increased by a multiplication factor. The latter was imposed to vary linearly with the degree of the SH expansion, forcing a smooth transition as well as a level of variance at the last modeled degree corresponding to the stationary state variance of equation 7. Because of the latter operation, the lithospheric field resolution was increased to \(\ell =200\), a degree at which the signal at satellite altitude becomes very low, as shown by Olsen et al. (2017).
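The linear variance rescaling described above can be sketched as follows. The interface, and the choice of pinning the corrected degree variance to target values at the two end degrees, are our illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def inflate_variances(deg_var, l0, l1, target0, target1):
    """Multiply degree variances between degrees l0 and l1 by a factor that
    varies linearly with SH degree, pinning the degree variance to `target0`
    at the transition degree l0 (continuity with the fully modeled part) and
    to `target1` (e.g., the stationary prior variance of Eq. 7) at the last
    modeled degree l1. Hypothetical interface for illustration only."""
    out = deg_var.astype(float).copy()
    f0 = target0 / deg_var[l0]          # factor imposed at the transition
    f1 = target1 / deg_var[l1]          # factor imposed at the last degree
    ell = np.arange(l0, l1 + 1)
    t = (ell - l0) / (l1 - l0)          # 0 at l0, 1 at l1
    out[l0:l1 + 1] *= (1 - t) * f0 + t * f1
    return out

# Endpoints hit the imposed levels exactly:
dv = np.full(201, 2.0)                  # underestimated degree variances
corrected = inflate_variances(dv, 31, 200, 5.0, 9.0)
print(corrected[31], corrected[200])  # → 5.0 9.0
```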

The results of this evaluation phase are displayed in Fig. 6. On the left panels, the mean downward component of the lithospheric field at the Earth's surface is shown for both the solution obtained at full resolution (top) and the one obtained at partial resolution (bottom). These two maps look very similar, and most features which can be recovered by the FR model are present in the PR model. This aspect is confirmed by the map exhibiting the difference between the two mean solutions (top right). Only around Antarctica, Eastern Europe and Western Russia do discrepancies become quite intense. These discrepancies coincide with relatively large-scale errors (up to \(\ell =70\)), as shown with crosses by the energy spectrum at the Earth's surface of the difference between the two mean models (bottom right panel). Beyond \(\ell =70\), the level of error decreases. The degree correlation between the two models, as introduced by Langel and Hinze (1998), which reads:

$$\begin{aligned} \rho _\ell = \frac{\sum _{m=-\ell }^{\ell } g_{l,\ell ,m}^{} \, g_{s,\ell ,m}^{} }{ \sqrt{\sum _{m=-\ell }^{\ell } g_{l,\ell ,m}^2 \sum _{m=-\ell }^{\ell } g_{s,\ell ,m}^2 }} \ , \end{aligned}$$

(21)

also highlights their proximity. The correlation reaches a minimum of 0.915 at \(\ell =66\) and stabilizes around a mean value of 0.979 beyond \(\ell =100\). The energy spectra associated with the standard deviations show that the model where only variance information was updated had a tendency to underestimate the level of predicted uncertainties. Although the technique previously mentioned to rescale the variance was applied, it did not completely resolve this issue. Nevertheless, the fact that the small-scale lithospheric field was only marginally affected by the proposed modeling approximation gave us confidence to implement it for the complete model derivation.
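Equation (21) translates directly into code. A minimal sketch, with Gauss coefficients stored per degree (the storage layout is our assumption):

```python
import numpy as np

def degree_correlation(g_a, g_b):
    """Degree correlation of Langel and Hinze (1998), Eq. (21): per-degree
    normalized scalar product of two sets of Gauss coefficients.
    `g_a[ell]` and `g_b[ell]` each hold the 2*ell+1 coefficients of degree ell."""
    rho = {}
    for ell in g_a:
        num = np.sum(g_a[ell] * g_b[ell])
        den = np.sqrt(np.sum(g_a[ell]**2) * np.sum(g_b[ell]**2))
        rho[ell] = num / den
    return rho

# Identical coefficient sets correlate perfectly at every degree:
rng = np.random.default_rng(1)
g = {ell: rng.standard_normal(2 * ell + 1) for ell in range(1, 6)}
rho = degree_correlation(g, g)
print(all(abs(rho[ell] - 1.0) < 1e-12 for ell in rho))  # → True
```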

The lithospheric field resulting from the assimilation of the entire dataset is first analyzed through energy spectra at the Earth’s surface. In the left part of Fig. 7, the spectra of the mean, the standard deviation and the prior standard deviation of the lithospheric field are displayed with black lines. In this solution, energy populates the entire range of modeled scales. However, the mean field is predicted to be globally reliable only up to SH degree \(\ell \sim 450\), where the spectrum of the mean and the spectrum of the standard deviation cross one another. In addition, the discontinuity in the spectrum of the mean at SH degree \(\ell =150\) indicates that even up to \(\ell \sim 450\) a non-negligible portion of the crustal signal remains unmodeled. Nevertheless, comparisons with the FR model previously discussed (blue lines and dots) demonstrate that the assimilation of survey data helps to better constrain the large-scale lithospheric field. Indeed, the mean signal of the final solution has gained in intensity, and its standard deviation has decreased. In the same figure, the spectra of the difference with two other lithospheric models, the WDMAM model by Lesur et al. (2016) (red dots), and the LCS-1 model by Olsen et al. (2017) (green dots), are also shown.

Although our solution is apparently closer at every degree to the LCS-1 model than to the WDMAM model, the examination of the degree correlation (right panel of Fig. 7) indicates that this is only true up to \(\ell =150\). Beyond this value, even if \(\rho _\ell\) is relatively low, the correlation between Kalmag and WDMAM (red line) is higher. Contrary to the degree correlation between LCS-1 and Kalmag, which decays smoothly, the one associated with Kalmag and WDMAM presents two transitions. One of them is at SH degree \(\ell =100\), the spatial scale delimiting the satellite data solution (\(\ell \le 100\)) from the survey data solution (\(\ell > 100\)) of the WDMAM model. The other transition occurs at \(\ell =150\), the degree beyond which our model is only constrained by survey data. This second drop in \(\rho _\ell\) may be explained by the lower spatial resolution that our solution exhibits in certain areas. This phenomenon can be observed in Fig. 8, where the downward components of WDMAM (top left) and Kalmag (bottom left) expanded up to \(\ell =450\) (the resolution up to which we predict a globally well-resolved solution) are displayed.

The intense signals predicted by WDMAM in the Southern parts of the Pacific, Atlantic and Indian oceans, or over large portions of continental areas, are mostly absent in our solution. It is however worth noting that WDMAM does not derive only from direct measurements of the geomagnetic field, but also from the combination of ocean floor age maps, relative plate motions and the geomagnetic polarity time scale (see Dyment et al. (2015)). Logically, the difference between the downward components of both models (top right of Fig. 8) is larger at these oceanic and land locations than anywhere else. Conversely, discrepancies are reduced in most areas where the standard deviation associated with the large-scale part of the field (up to \(\ell =100\)) is low (map on the bottom right). These uncertainty predictions, which are tied to data coverage (see Fig. 1), therefore provide a good approximation of the locations where the Kalmag model is likely to be well resolved.

The model being expressed in terms of posterior distributions, it can be used as prior information to assimilate new data when they become available, and therefore be updated accordingly. To illustrate this aspect, airborne intensity measurements taken above Afghanistan in 2006 and 2008 were set aside from the dataset serving the model derivation. They are now used to update the lithospheric field following the method detailed in Appendix C. The locations at which each measurement was taken during these surveys are shown with colored dots (blue for 2006, red for 2008) in the bottom left panel of Fig. 9. The downward component of the mean prior lithospheric field, which comes from the smoothing solution taken up to \(\ell =1000\) in 2006.0, is shown in the top left panel. Its resolution was increased to \(\ell =2000\) before the Kalman filter simulation was launched. The result of the assimilation process is shown through the downward component of the mean posterior field in 2009.0 in the second panel of the top row of Fig. 9. On this map, it can be seen that structures which were completely invisible in the prior model appear in the posterior one. In particular, high-intensity anomalies can be detected along the Southern and Western borders of Afghanistan. The field in the central part of the country is globally weaker. Such patterns are also predicted by the EMM 2017 model of Maus (2010), as shown in the third panel of the top row. They are nevertheless of lower magnitude and less detailed, due to the resolution of the model, which is limited to \(\ell =790\). To make the comparison with the EMM solution possible, the posterior mean was truncated at SH degree \(\ell =790\). The resulting downward component is shown in the top right of the figure. The two models now look more alike. Nevertheless, discrepancies in predicted intensity still remain.
In order to assess the degree of compatibility of the different models with the observations, the absolute value of the difference between a subset of the measurements and the intensities predicted by the sum of the core field and the different lithospheric field solutions was computed. The results are shown in the bottom panel below each corresponding downward component. The model exhibiting the largest number of degrees of freedom, displayed in the second column, is unsurprisingly the model which can best explain the data. As shown at the bottom of the map, the r.m.s. difference between the model and the measurements is 18 nT. Globally, the predictions of the truncated model (right column) are closer to the data than the EMM predictions (third column). Of course, Afghanistan is a particular location, and no claim is made here that the Kalmag model would be globally more accurate than the EMM model, since this is certainly not the case. However, this example shows that the method proposed in this study is well suited to construct regional high-resolution models of the lithospheric field, even when data coverage is not optimal.
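The update step itself can be illustrated by a generic linear Gaussian (Kalman) measurement update. This is a minimal stand-in, not the actual method of Appendix C, which in particular has to linearize the observation operator for intensity data; all names here are illustrative:

```python
import numpy as np

def gaussian_update(m, P, H, y, R):
    """Posterior mean and covariance of a field model with prior N(m, P),
    given new data y, a linear observation operator H and a data-error
    covariance R. Hypothetical stand-in for the update of Appendix C."""
    S = H @ P @ H.T + R               # innovation covariance
    K = np.linalg.solve(S, H @ P).T   # Kalman gain P H^T S^{-1} (S symmetric)
    m_post = m + K @ (y - H @ m)
    P_post = P - K @ H @ P
    return m_post, P_post

# One-dimensional check: equal prior and data variances halve the
# uncertainty and put the posterior mean halfway to the observation.
m2, P2 = gaussian_update(np.zeros(1), np.eye(1), np.eye(1),
                         np.array([1.0]), np.eye(1))
print(m2[0], P2[0, 0])  # → 0.5 0.5
```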

### Magnetospheric and induced fields

With the proposed approach, magnetospheric and induced fields are jointly estimated with the rest of the model. A priori, the field generated by the currents flowing in the outer magnetosphere (\(g_{rm}\)) is predicted to evolve slowly with time (\(\tau _{g_{rm}} = 10.3\) years) in comparison to other external sources. A posteriori, such a behavior is confirmed, as illustrated by the evolution of the annual mean dipole component of \(E[g_{rm}]\) projected in magnetic coordinates and shown in the left panel of Fig. 10 with circles. Note that prior to 1953, our model cannot correctly extract this field, and the latter oscillates around 0 with a large posterior variance. However, \(g_{rm}\) alone cannot explain decadal variations of external sources as they can be detected at the Earth's surface or at the altitude of low-orbiting satellites. The rapidly evolving magnetospheric components also exhibit long-term trends whenever the latter can be captured. This effect can be observed when comparing the annual mean dipole component of \(E[g_{rm}]\) to the one of \(E[g_{rm} + g_{m}+ g_{fm}]\) shown with a continuous line in Fig. 10. During satellite eras, the latter is always found to be more intense than the former, meaning that the ring current can generate some persistent annual signal, as already documented by Lühr and Maus (2010). With our current method, this signal can only be recovered when the temporal data coverage is high enough, due to the fact that \(E[g_{m}]\) and \(E[g_{fm}]\) exhibit very short memory timescales. A possible way to improve the AR processes characterizing these sources would be to consider some extra timescales accounting for the slowly varying part of the field generated by the ring current. The cycle of approximately 10.5 years highlighted by Huder et al. (2020) with the COV-OBS.x2 model (shown with dashed lines in Fig. 10) is also present in our solution.
Although the mean solutions of both models slightly differ from one another, the COV-OBS.x2 dipole always lies within the \(68.2\%\) confidence interval predicted by our model (gray areas in Fig. 10).

To evaluate the model over short periods of time, when all sources are predicted to be well separated, we now compare predictions of the \(B_\theta\) component of the model with ground-based observatory measurements taken at four different locations: Hermanus, Niemegk, Canberra and Kakioka. Since observatory data are only assimilated to constrain the core field secular variation, they can be considered independent measurements for external and induced fields. In order to make visual comparisons possible and to remain within the conditions under which the model was built, only hourly night-time measurements and predictions were kept, to be then averaged over 10-day time periods. The results are reported in the right panel of Fig. 10, with red lines for observatory data, black lines for the full model predictions and blue lines for the predictions of the core field alone. Globally, monthly and annual variations of \(B_\theta\) are well captured by the model. Only during the time gap between the CHAMP and Swarm missions, when external sources are not updated anymore, do predictions and observations differ strongly. One can also notice that the core field does not seem to be contaminated by external or induced fields, as its evolution does not reproduce the rapid variations observed in the data. The largest discrepancies between predictions and observations are in the magnitude of the signals. Intense excursions are not predicted by the model. The reason for this is that the model was trained on a dataset selected for very quiet magnetic conditions [see Baerenzung et al. (2020)]. Therefore, the selection algorithm of the Kalman filter prevents the assimilation of data containing too strong a signal from external sources. A recalibration of the model for more general conditions would certainly solve this issue.

Finally, our model contains a source for induced/residual ionospheric fields. The latter is a priori uncorrelated with magnetospheric fields. Yet rapid variations of external fields generate currents within the Earth's interior, which in return induce a secondary magnetic field (e.g., Schmucker 1985; Langel and Estes 1985b; Olsen et al. 2005; Finlay et al. 2020). The intensity and temporal evolution of the induced field depend on the conductivity of the crust, the mantle and the core. Under the assumption that conductivity only depends on depth, each spherical harmonic coefficient of the induced field is linked to the same coefficient of the external field through the relation:

$$\begin{aligned} {\iota }_{l,m}(t) = \int _{-\infty }^{\infty }{\mathcal {Q}}_{l,m}(t-t^\prime )\, \epsilon _{l,m}(t^\prime )\, \mathrm {d}t^\prime \ , \end{aligned}$$

(22)

where \(\iota\) is the induced field, \(\epsilon\) the external field, and \({\mathcal {Q}}\) is referred to as the \({\mathcal {Q}}\)-response. In our model, \(\iota = g_{ii}\) and \(\epsilon =g_{rm} + g_{m}+ g_{fm}\), where \(g_{rm}\) is projected in magnetic coordinates.

In the particular case discussed by Olsen et al. (2005), where the mantle is assumed to be insulating down to a given depth *d* and superconducting below, \({\mathcal {Q}}_{l,m}(t-t^\prime ) = \tilde{{\mathcal {Q}}}_{l,m}\delta (t-t^\prime )\) and therefore \({\iota }_{l,m}(t) = \tilde{{\mathcal {Q}}}_{l,m}\epsilon _{l,m}(t)\). Focusing on the dipole component of induced and external fields, and assuming a depth of \(d = 1200\) km, leads to \({\iota }_{1,0}(t) = \tilde{{\mathcal {Q}}}_{1,0}\epsilon _{1,0}(t)\) with \(\tilde{{\mathcal {Q}}}_{1,0} = 0.27\), as estimated by Langel and Estes (1985b) with POGO data. In the left panel of Fig. 11, the evolution of \(\epsilon _{1,0}\) and \({\iota }_{1,0}(t)/\tilde{{\mathcal {Q}}}_{1,0}\) between 2019.45 and 2019.65 is displayed with red and black lines, respectively. In order to concentrate on rapid variations only (we recall that external and induced fields were stored every 3 hours during the CHAMP and Swarm eras), temporal scales larger than 15 days have been filtered out from both time series. Furthermore, we chose the [2019.45, 2019.65] time period because the temporal coverage of Swarm data is optimal during this interval. The two time series in Fig. 11 follow one another quite closely, and \(\tilde{{\mathcal {Q}}}_{1,0}^{-1}\) seems appropriate to rescale the induced field. Over the current Swarm time span, induced and external fields exhibit a Pearson correlation \(\rho = \mathrm {Cov}(\epsilon ,\iota )/(\sigma _\epsilon \sigma _\iota )\), calculated here with the mean Kalmag solutions, of \(\rho = 0.79\). It is \(\rho = 0.84\) over the time interval of Fig. 11 and \(\rho = 0.73\) over the CHAMP era. This lower correlation value is probably caused by the uncertainty levels of external and induced fields, which are higher during the CHAMP mission than during the Swarm one. However, the particular 1-D conductivity model leading to \(\tilde{{\mathcal {Q}}}_{1,0}\) is known to be imperfect.
More complex conductivity profiles are required to better model induction processes within the Earth’s interior.
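The filtering-and-correlation comparison described above can be sketched as follows, assuming the dipole series are available as plain arrays; the sharp spectral cutoff used here to remove scales longer than 15 days is our assumption, and the function name is illustrative:

```python
import numpy as np

def detrended_correlation(eps, iota, dt_hours=3.0, cutoff_days=15.0):
    """Pearson correlation between external and induced dipole series after
    removing temporal scales longer than `cutoff_days` with a sharp spectral
    high-pass filter. The 3-hour cadence follows the text."""
    n = len(eps)
    freqs = np.fft.rfftfreq(n, d=dt_hours / 24.0)  # in cycles per day
    keep = freqs > 1.0 / cutoff_days               # high-pass mask

    def hp(x):
        xf = np.fft.rfft(x)
        xf[~keep] = 0.0
        return np.fft.irfft(xf, n)

    return np.corrcoef(hp(eps), hp(iota))[0, 1]

# An exactly instantaneous response iota = 0.27*eps stays perfectly
# correlated after the (linear) filtering:
rng = np.random.default_rng(2)
eps = rng.standard_normal(8 * 365)  # one year of 3-hourly samples
print(abs(detrended_correlation(eps, 0.27 * eps) - 1.0) < 1e-9)  # → True
```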

We now investigate the \({\mathcal {Q}}\)-response predicted by our model when keeping the assumption that the conductivity within the Earth is only depth-dependent, but relaxing the constraint on its profile. For this evaluation, we operate in spectral space. Considering only the dipole components of \(\iota\) and \(\epsilon\) and applying a Fourier transform to equation 22, the latter becomes:

$$\begin{aligned} {{\hat{\iota }}}_{1,0}(k) = \hat{{\mathcal {Q}}}_{1,0}(k) \hat{\epsilon }_{1,0}(k) \ . \end{aligned}$$

(23)

From this equation, the real and imaginary parts of \(\hat{{\mathcal {Q}}}(k)\) are, respectively, given by:

$$\begin{aligned} {\text {Re}}\{\hat{{\mathcal {Q}}}(k)\}= & {} \frac{ {\text {Re}}\{{\hat{\epsilon }}(k)\}{\text {Re}}\{{\hat{\iota }}(k)\} +{\text {Im}}\{{\hat{\epsilon }}(k)\}{\text {Im}}\{{\hat{\iota }}(k)\} }{{\text {Re}}\{{\hat{\epsilon }}(k)\}^2 + {\text {Im}}\{{\hat{\epsilon }}(k)\}^2}, \end{aligned}$$

(24)

$$\begin{aligned} {\text {Im}}\{\hat{{\mathcal {Q}}}(k)\}= & {} \frac{ {\text {Re}}\{{\hat{\epsilon }}(k)\}{\text {Im}}\{{\hat{\iota }}(k)\} -{\text {Im}}\{{\hat{\epsilon }}(k)\}{\text {Re}}\{{\hat{\iota }}(k)\} }{{\text {Re}}\{{\hat{\epsilon }}(k)\}^2 + {\text {Im}}\{{\hat{\epsilon }}(k)\}^2} \ . \end{aligned}$$

(25)
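Equations (24) and (25) are simply the real and imaginary parts of the complex ratio \(\hat{\iota }/\hat{\epsilon }\). A minimal sketch (the array layout and function name are ours):

```python
import numpy as np

def q_response(eps_hat, iota_hat):
    """Empirical Q-response of Eqs. (23)-(25): for each Fourier frequency,
    Q(k) = iota_hat(k) / eps_hat(k), whose real and imaginary parts are
    exactly Eqs. (24) and (25). Inputs are Fourier-transformed dipole series."""
    denom = eps_hat.real**2 + eps_hat.imag**2
    re = (eps_hat.real * iota_hat.real + eps_hat.imag * iota_hat.imag) / denom
    im = (eps_hat.real * iota_hat.imag - eps_hat.imag * iota_hat.real) / denom
    return re + 1j * im  # identical to iota_hat / eps_hat

# A purely instantaneous response iota = 0.27*eps yields a real, flat Q:
eps_hat = np.fft.rfft(np.random.default_rng(3).standard_normal(512))
Q = q_response(eps_hat, 0.27 * eps_hat)
print(np.allclose(Q, 0.27))  # → True
```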

To evaluate these two quantities, we considered induced and external fields during the [2015.0, 2021.0] time interval, when the model reaches its peak accuracy. In the right panel of Fig. 11, \({\text {Re}}\{\hat{{\mathcal {Q}}}(2\pi /k)\}\) and \({\text {Im}}\{\hat{{\mathcal {Q}}}(2\pi /k)\}\), averaged at period \(T_i=2\pi /k_i\) over \([T_i,2T_i]\), are displayed with red and black continuous lines, respectively. For comparison, the real and imaginary parts of \(\hat{\tilde{{\mathcal {Q}}}}_{1,0}\) as well as the \({\mathcal {Q}}\)-response (referred to as \({\mathcal {Q}}^O\)) estimated by Olsen et al. (2005) with a realistic conductivity model are shown with dashed and dotted lines, respectively. The general behavior of the \({\mathcal {Q}}\)-response we recover is consistent with our prior knowledge about it. Indeed, for short periods the real part of \(\hat{{\mathcal {Q}}}\) is much more intense than its imaginary part, and its decay pattern is close to the one predicted by Olsen et al. (2005). However, in comparison to \({\text {Re}}\{\hat{{\mathcal {Q}}}^O\}\), \({\text {Re}}\{\hat{{\mathcal {Q}}}\}\) is globally underestimated. This effect might be due to the fact that induced fields vary rapidly with time: when no data feed the model, their mean value quickly tends toward 0, contrary to the remote and close magnetospheric fields, which evolve more slowly. The behavior of the imaginary part of \(\hat{{\mathcal {Q}}}\), which reflects the temporal lag of the induced field response, is on the contrary very similar to the one predicted by the direct model of Olsen et al. (2005).