- Full paper
- Open Access

# Using TNT-NN to unlock the fast full spatial inversion of large magnetic microscopy data sets

*Earth, Planets and Space*
**volume 71**, Article number: 14 (2019)

## Abstract

Modern magnetic microscopy (MM) provides high-resolution, ultra-high-sensitivity moment magnetometry, with the ability to measure at spatial resolutions better than \(10^{-4}\) m and to detect magnetic moments weaker than \(10^{-15}\) Am\(^2\). These characteristics make modern MM devices capable of particularly high-resolution analysis of the magnetic properties of materials, but generate extremely large data sets. Many studies utilizing MM attempt to solve an inverse problem to determine the magnitude of the magnetic moments that produce the measured component of the magnetic field. Fast Fourier techniques in the frequency domain and non-negative least-squares (NNLS) methods in the spatial domain are the two most frequently used methods to solve this inverse problem. Although extremely fast, Fourier techniques can produce solutions that violate the non-negativity of moments constraint. Inversions in the spatial domain do not violate non-negativity constraints, but the execution times of standard NNLS solvers (the Lawson and Hanson method and Matlab’s *lsqlin*) prohibit spatial domain inversions from operating at the full spatial resolution of an MM. In this paper, we present the applicability of the TNT-NN algorithm, a newly developed NNLS active set method, as a means to directly address the NNLS routine hindering existing spatial domain inversion methods. The TNT-NN algorithm enhances the performance of spatial domain inversions by accelerating the core NNLS routine. Using a conventional computing system, we show that the TNT-NN algorithm produces solutions with residuals comparable to those of conventional methods while reducing the execution time of spatial domain inversions from months to hours or less. Using isothermal remanent magnetization measurements of multiple synthetic and natural samples, we show that the TNT-NN algorithm allows scans at sizes previously inaccessible to NNLS techniques to be inverted.
Ultimately, the TNT-NN algorithm enables spatial domain inversions of MM data on an accelerated timescale that renders spatial domain analyses for modern MM studies practical. In particular, this new technique enables MM experiments that would previously have required an impractical amount of inversion time, such as high-resolution stepwise magnetization and demagnetization experiments and 3-dimensional inversions.

## Introduction

Modern magnetic microscopes (MMs) have been developed to analyze the microscale natural remanent magnetization and rock magnetic properties of rocks and minerals (Harrison and Feinberg 2009) and are capable of high-resolution, high-sensitivity measurements on geologic samples (Weiss et al. 2007b). MMs encompass superconducting quantum interference device (SQUID), magnetic tunnel junction (MTJ), and giant magnetoresistance (GMR) sensors, as well as quantum diamond microscopes (QDMs). MMs are capable of measuring samples with magnetic moments weaker than \(10^{-15}\) Am\(^{2}\) (Fong et al. 2005; Weiss et al. 2007a; Oda et al. 2016; Lima and Weiss 2016) at spatial resolutions on the order of micrometers (Liu et al. 2002; Liu and Xiao 2003; Liu et al. 2006; Hankard et al. 2009; Lima et al. 2014; Glenn et al. 2017).

MM has been applied to investigations of ultrafine-scale magnetostratigraphy (Oda et al. 2011; Noguchi et al. 2017a), shock remanent magnetization (Gattacceca et al. 2006, 2010), rock magnetism (Hankard et al. 2009; Kletetschka et al. 2013), studies of nebular magnetism using chondrules (Fu et al. 2014), and the paleointensity of the magnetic field of Earth (Weiss et al. 2007a; Fu et al. 2017; Weiss et al. 2018) and Mars (Weiss et al. 2008). The high-resolution capability of MMs can yield extremely large data sets. Analyzing these data sets is dominated by solving an inversion problem, which obtains the distribution of magnetic sources from the measured magnetic field. As with other potential-field inverse problems, retrieving magnetization from measured magnetic fields is non-unique without the use of additional constraints and can be computationally expensive (Weiss et al. 2007b; Lima and Weiss 2016).

Weiss et al. (2007b) describe three formulations of the least-squares inversion problem to obtain the magnetic sources of a sample from MM magnetic field measurements: unrestricted, unidirectional, and uniform. Unrestricted solutions obtain the three vector components of *Q* dipoles at fixed positions within the sample from *P* measurements of the vertical (z) component of the magnetic field. Obtaining a solution at the full resolution of the scan necessitates solving an underdetermined linear least-squares problem with an infinite number of possible solutions. This type of solution is typically appropriate for scans of unidirectional natural remanent magnetization (NRM). Unidirectional solutions determine the magnitudes of *Q* dipoles at fixed positions within the sample from *P* measurements of the z-component of the magnetic field. For a full resolution solution, this requires solving a linear least-squares problem. This problem is well determined because all of the dipole orientations are fixed in a single direction (described by the angles \(\theta\) and \(\phi\) in spherical coordinates), while their magnitudes are allowed to vary independently. Because all dipoles share a positive orientation, it is reasonable to impose a non-negativity constraint on their magnitudes. To solve numerical problems with such a constraint, it is natural to use a non-negative least-squares (NNLS) solver. This type of solution is typically most applicable for scans of saturation isothermal remanent magnetization (SIRM). Solutions provided by NNLS are naturally smooth and should be regarded as approximations of the true physical system (a common stance in numerical modeling). This is particularly true if some dipoles remain in the “negative” orientation or are misaligned relative to the primary orientation after the sample acquires an SIRM due to sample anisotropy or interactions between remanence carriers.
In such a case, those dipoles in the unidirectional solution would be constrained to zero or approximated by their component along the SIRM orientation. The more a sample violates the unidirectionality and positivity assumptions of the unidirectional inversion, the more the solution must be regarded as an approximation. Uniform solutions are obtained by requiring all dipole orientations and magnitudes to be identical. This gives rise to a problem with *P* measurements of the *z*-component of the magnetic field and only three unknowns. For MM data, this is often severely overdetermined.

Early approaches to solving the inversion problem to reconstruct magnetic sources from measured magnetic fields focused on frequency domain techniques and required idealized physical scenarios (Vestine and Davids 1945; Hughes and Pondrom 1947). Subsequent studies removed previously necessary assumptions that constrained the geometry of the magnetic sample (Smith 1959; Helbig 1963; Bhattacharyya 1967; Talwani 1965) and reduced computational complexity (Lourenco and Morrison 1973). These approaches used the Cooley–Tukey algorithm (Cooley and Tukey 1965), which made the fast Fourier transform practical to compute on the computers of the time. Focusing on the frequency domain was sensible because the computational cost of working in the spatial domain was much greater than that of the Cooley–Tukey algorithm. Frequency domain methods were then extended for application to MM data (Chatraphorn et al. 2002; Egli and Heller 2000; Fleet et al. 2001; Roth et al. 1989; Sepulveda et al. 1994; Tan et al. 1996; Wikswo 1996). However, these techniques were developed to produce magnetization solutions composed of only two components, and they could typically produce unique solutions only for special cases. From physical first principles, Lima and Weiss (2009) extended the Fourier-based technique of Lourenco and Morrison (1973) to reproduce vector field maps of magnetic samples from single component measurements. Lima et al. (2013) improved upon this work by regularizing the inverse problem, dampening noise, and enhancing processing speed.

An unavoidable consequence of applying Fourier techniques to SIRM data (to obtain a unidirectional solution) is the potential violation of non-negativity constraints. A typical method to handle Fourier solution components that violate non-negativity is to threshold those variables to zero. This can yield solutions that are not smooth and may not be physically valid. Because Fourier techniques operate in the frequency domain, it can be difficult to impose solution constraints that are related to the spatial domain. Such a constraint could be as simple as restricting valid solution variables to the spatial domain of a sample.

Weiss et al. (2007b) developed the first spatial inversion technique capable of producing unique solutions composed of three component magnetic distributions. This technique uses the equivalent source formalism (Dampney 1969; Emilia 1973) to represent the inverse problem in a least-squares manner. Specifically, the unrestricted solution is obtained via unconstrained least-squares, while the unidirectional and uniform solutions are obtained by NNLS. The uniform problem is typically extremely overdetermined for MM data and can be considered relatively simple to solve computationally. In contrast, the unrestricted and unidirectional problems can be computationally challenging. Because unrestricted problems are underdetermined, their solutions are not guaranteed to be unique without additional constraints. All other factors being equal, unidirectional problems are smaller than unrestricted problems by a factor of \(\sim\) 3 because the orientation is known and only the determination of dipole magnitudes remains. Unidirectional problems are also well determined, which allows a unique solution to be obtained. Despite these positive characteristics, unidirectional problems can necessitate a significant amount of computational work due to the quantity of data acquired at the high resolution of MM devices.
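In the equivalent source formalism, each entry of the Jacobian is the vertical field produced at one sensor position by a unit dipole at one source position. The following minimal sketch constructs such a Jacobian for z-oriented dipoles; the function name and array layout are our own illustration, not code from Weiss et al. (2007b):

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in T m / A

def dipole_bz_jacobian(sensor_xy, source_xy, h):
    """A[i, j] = vertical field B_z (tesla) at sensor i, a height h (m)
    above the sample plane, from a unit (1 A m^2) z-oriented dipole at
    source position j.  Positions are (N, 2) arrays in meters."""
    dx = sensor_xy[:, 0][:, None] - source_xy[:, 0][None, :]
    dy = sensor_xy[:, 1][:, None] - source_xy[:, 1][None, :]
    r2 = dx**2 + dy**2 + h**2
    # B_z of a z-oriented dipole: (mu0 / 4 pi) * (3 h^2 - r^2) / r^5
    return MU0_4PI * (3.0 * h**2 - r2) / r2**2.5
```

Directly above a unit dipole the expression reduces to \(2\times 10^{-7}/h^{3}\) T, which provides a quick sanity check on the implementation.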

Addressing the NNLS problem within the unidirectional formulation has proven to be computationally expensive, requiring two months of computation time to produce a solution for the inversion of a single sample (Weiss et al. 2007b). A number of modifications were made to the computational approach to make NNLS methods tractable, with each modification imposing undesirable consequences on the resulting inverse solution. Although the continuous increase in the speed of modern computers diminishes the necessity of these modifications when solving the original inversions of Weiss et al. (2007b), improvements in high-resolution magnetism acquisition yield continuously larger data sets, which keep these modifications relevant.

For example, a second pseudo-spatial technique, developed by Usui et al. (2012), blends spatially localized Backus–Gilbert averaging kernels (Backus and Gilbert 1968) with the subtractive optimally localized averages (SOLA) method (Pijpers and Thompson 1992). The Usui et al. (2012) approach avoids the high computational requirements of the spatial inversion technique of Weiss et al. (2007b) by performing some significant computation in the frequency domain. Specifically, it approximates a matrix inversion using a periodic boundary approximation and the FFT. In some cases, this style of pseudo-spatial method has been shown to be almost as fast as methods that operate purely in the frequency domain (Pijpers 1999). Further, Pijpers (1999) shows that the spatial resolution of the SOLA method can approach that of the acquisition device. However, Usui et al. (2012) state that the shape of the averaging kernels used to invert their geologic data suggests a spatial resolution of \(\sim\) 1 mm, and the magnetization models they produced match this resolution. For MM spatial inversions to obtain well resolved sources, the spatial resolution of the solutions is limited to approximately half the sensor-to-sample distance. Given this limitation, the spatial resolution of most published MM data is on the order of 0.1 to 0.15 mm or less.

Ultimately, determining the effective spatial resolution of an inversion of MM data is a complex problem (Lima et al. 2006). Several factors affect spatial resolution (e.g., sensor-to-sample distance, sensor active area/volume, signal-to-noise ratio, mapping step size, regularization strategy used), which make comparison of inversion methods delicate, particularly when evaluating data obtained using different MM systems. Thorough discussions of the relationships between these factors are provided by Chatraphorn et al. (2002); Fleet et al. (2001); Lima et al. (2006, 2014); Lima and Weiss (2016); Oda et al. (2016); Egli and Heller (2000), and Roth and Wikswo Jr (1990).

The fundamental goal of any scheme for inverting MM data is to quickly determine the magnetic sources within the entire spatial domain of the sample without reducing resolution. Existing methods have had to make compromises on speed, resolution, sample completeness, or physical validity, thereby hindering the full capabilities of the inversion method. In this paper, we focus on the NNLS problem at the heart of unidirectional spatial inversion, as it provides an excellent avenue toward high-resolution unique solutions.

Here, we present the first application of TNT-NN (Myre et al. 2017a), a novel active set NNLS algorithm for large problems, to the inversion of real-world MM data, using the unidirectional inversion technique of Weiss et al. (2007b). Active set NNLS algorithms share the idea of constructing an “active set” of variables that are fixed at zero so as not to violate the non-negativity constraint. TNT-NN is an active set NNLS algorithm that exhibits significant performance enhancement over previous active set algorithms. These enhancements are primarily enabled by two means: (1) improving the construction of the active set and (2) enhancing the method used to solve the core least-squares problem. Through the use of TNT-NN to solve the core NNLS problem, the inversion scheme of Weiss et al. (2007b) can be applied to MM data at the full spatial resolution of the MM device, without the need to compromise the physical validity of the inverse solutions.

The TNT-NN algorithm is used to obtain the magnetic sources of four samples: a synthetically magnetized University of Minnesota logo, a 30 \(\upmu\)m thin section of basalt from the Mauna Loa volcano (Weiss et al. 2007a, b), a 30–60 \(\upmu\)m thin section of ferromanganese crust (Noguchi et al. 2017a), and a 100 \(\upmu\)m thin section of a speleothem from Spring Valley Caverns in southeastern Minnesota, USA (Dasgupta et al. 2010). We show that the TNT-NN algorithm is a suitable update to the spatial inversion technique developed by Weiss et al. (2007b) due to its enhanced performance and numerical accuracy.

## Existing computational roadblocks and circumvention efforts

Investigating existing least-squares techniques used to analyze MM data reveals two hurdles with undesirable consequences: extremely large data sets and inefficient solvers for the core non-negative least-squares problem. Strategies to overcome these hurdles are outlined in the remainder of this section.

### Large data sets

The spatial resolution of MMs enables a high number of scanning measurements to be made for a given sample area, resulting in the generation of large data sets. For example, a 35 mm \(\times\) 35 mm area, scanned with a 100 \(\upmu\)m resolution, produces 122,500 measurements.

A well-determined problem consisting of \(10^6\) elements (1000 \(\times\) 1000) does not typically present a significant computational challenge. However, spatial least-squares inversions of magnetic data require the construction of an interaction (Jacobian) matrix for the entire scanned area. The problem size therefore grows quickly for non-trivial (non-uniform) inverse problems: a scan producing \(10^5\) measurements yields a \(10^{5}\times 10^{5}\) (\(10^{10}\) element) Jacobian matrix. A Jacobian matrix of this scale can have storage requirements approaching 100 GB. Attempting to perform mathematical operations on such large matrices with a typical desktop computer often causes a phenomenon known as *thrashing* (Denning 1968). Thrashing occurs when the entirety of a data buffer cannot be stored within fast access memory, forcing repeated transfers to and from larger-capacity but slower memory. The overhead caused by data transfer leaves the processor unable to perform useful computation due to data starvation. This effectively reduces the performance of the computer system to that of the slower memory, which acts as a bottleneck. Contemporary computer systems have approximately a six order of magnitude difference in access time between typical RAM (on the order of \(10^{-9}\) s) and hard disk drives (on the order of \(10^{-3}\) s).
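The storage demands are easy to estimate: a dense double-precision Jacobian needs 8 bytes per entry. Taking the 122,500-measurement scan from the previous subsection as a back-of-envelope check:

```python
# Dense float64 Jacobian for the 35 mm x 35 mm scan at 100 um resolution
n = 122_500                  # measurements (and, here, source dipoles)
bytes_needed = n * n * 8     # 8 bytes per double-precision entry
print(bytes_needed / 1e9)    # ~120 GB, far beyond typical desktop RAM
```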

Because the Jacobian matrices created during the spatial inversion process have significant memory requirements, it has thus far been necessary to implement computational strategies to avoid thrashing. These strategies include specimen subdivision and dipole thresholding. Both are methods for reducing the size of the data set used in the MM inversion problem, with the secondary effect of creating a smaller core least-squares problem that reduces the time needed to find a solution. Unfortunately, both strategies produce approximations of the solution.

Specimen subdivision splits the MM measurements into subsections. The inversion of the sample is then performed by inverting the subsections separately. With sufficiently small divisions, the overall memory requirements of the Jacobian matrix are reduced to the point where thrashing is minimized. Solving these subsections is also more computationally tractable than solving the sample as a whole due to the reduction in overall variables. However, by subdividing the sample and finding the inverse solution of each subdivision separately, the magnetic interactions between subsections are ignored. Consequently, this method violates the mathematical theory of the inversion problem, which requires that the spatial domain of the magnetic sources within a sample be finite and fully encapsulated by the inversion problem domain (compact support) to ensure a unique solution (Baratchart et al. 2013; Lima et al. 2013).

The consequence of not accounting for magnetic source interactions across subdivisions manifests as artifacts at the boundaries between subdivisions. These artifacts appear as high solution residuals that are spatially associated with subdivision boundaries. The spatial locality of subdivision artifacts arises because the interaction strength of magnetic sources diminishes rapidly with the distance between sources (the field of a dipole falls off as the inverse cube of distance). This means that magnetic sources in the opposing subdivision that are close to the boundary have a significant effect on the field at the boundary, an effect that is not incorporated when specimen subdivision is used to achieve computational tractability.

Thresholding the long range interactions of each dipole allows interactions that fall below a specified threshold to be withheld from computation. Excluding other dipoles from consideration means that their associated entries in the Jacobian matrix are fixed at zero. Inversion methods for large data sets from satellite magnetic fields commonly apply this type of thresholding (Purucker et al. 1996).

Dipole interaction thresholding was applied by Weiss et al. (2007b) to form a sparse approximation of the Jacobian, *A*, denoted \(A^{\dagger }\). Creating the sparse approximation \(A^{\dagger }\) reduces overall data storage requirements, with the reduction scaling with the degree of sparsity of \(A^{\dagger }\). The computation of \(A^{\dagger }\) is faster than the computation of *A* since \(A^{\dagger }\) effectively reduces the problem size and allows sparse methods to be used. The reduced time to form \(A^{\dagger }\) also reduces the overall computation time for uniform inverse solutions, which is dominated by the computation of *A*.

The ability to use sparse matrix methods offers an alluring reason to use \(A^{\dagger }\) in lieu of *A*. Weiss et al. (2007b) exploit sparse methods and solve the system with \(A^{\dagger }\) using the more computationally efficient *lsqlin* function in Matlab, which uses a preconditioned conjugate gradient routine at its core, instead of the dense *lsqnonneg* NNLS function. However, the solutions obtained using \(A^{\dagger }\) are approximate, just as \(A^{\dagger }\) is an approximation of *A*.
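The thresholding idea can be illustrated with a short sketch (our own simplification, not the exact procedure of Weiss et al. 2007b): entries of *A* whose magnitude falls below a relative cutoff are zeroed, and the result is stored in a sparse format.

```python
import numpy as np
from scipy.sparse import csr_matrix

def threshold_jacobian(A, rel_tol):
    """Sparse approximation A^dagger of a dense Jacobian A: entries whose
    magnitude is below rel_tol * max|A| are fixed at zero, and the result
    is stored in compressed sparse row (CSR) format."""
    cutoff = rel_tol * np.abs(A).max()
    return csr_matrix(np.where(np.abs(A) >= cutoff, A, 0.0))
```

The memory savings scale with the fraction of entries below the cutoff, at the price of solving for an approximation of the true system.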

### Inefficient active set least-squares solvers

The core of the MM unidirectional and uniform inversion routines is an NNLS solver. The primary Matlab NNLS solver (*lsqnonneg*) is implemented using the seminal 1974 Lawson and Hanson NNLS (LH-NNLS) algorithm (Lawson and Hanson 1995). The LH-NNLS algorithm can easily be considered the most widely used NNLS algorithm, as it is consistently provided by popular software packages, including the widely used numerical computing environment Matlab, GNU R (Mullen and van Stokkum 2012), and the scientific tools for Python (scipy) (Jones et al. 2001). It is also repeatedly encountered in the reference literature as the recommended method for solving non-negative least-squares problems (Aster et al. 2011; Parker 1994; Xiong and Kirsch 1992).

The LH-NNLS algorithm is an active set strategy that attempts to find solutions to the NNLS problem. This constrained NNLS problem can be stated as \(\min_{\mathbf{x}} \lVert \mathbf{A}\mathbf{x}-\mathbf{b}\rVert_{2}\), such that \(\mathbf{x} \ge 0\), where \(\mathbf{A}\) is the \(m \times n\) system of equations, \(\mathbf{b}\) is the vector of measured data, and \(\mathbf{x}\) is the vector of obtained parameters that minimizes the L\(^2\)-norm of the residual.
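Since scipy ships this algorithm, the problem statement maps directly onto a few lines of code. A small system whose unconstrained least-squares solution contains a negative component illustrates the effect of the constraint:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, -0.5, 1.0])

# Lawson--Hanson active set method: min ||Ax - b||_2 subject to x >= 0.
x, rnorm = nnls(A, b)
# The unconstrained solution is roughly (1.17, -0.33); NNLS instead
# constrains the second variable to zero, giving x = (1, 0).
```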

The LH-NNLS algorithm determines constrained variables by iterating through the set of variables one at a time, which commonly results in slow convergence. The Fast NNLS (FNNLS) algorithm developed by Bro and De Jong (1997) improves upon the LH-NNLS algorithm by avoiding redundant computations and allowing the programmer to load an initial active set of constrained variables. Using small real and synthetic test suites, Bro and De Jong report that FNNLS reduces execution time compared to NNLS by factors of 2–5.

Recently, graphics processing units (GPUs) have been exploited as high performance computing devices (Walsh et al. 2009) to improve runtime performance over CPU implementations of the LH-NNLS algorithm (Luo and Duraiswami 2011). As with CPU algorithms, the problem size that a GPU algorithm is capable of solving is limited by the available memory. The amount of memory available on contemporary GPUs is fixed and typically less than 10 GB, a fraction of the memory required for the inversion of MM data (which can easily surpass 100 GB). Recent GPU technology from NVIDIA enables the GPU to operate on data stored in host memory. However, operating on data in host memory, external to the GPU, incurs a significant performance penalty, similar to thrashing, due to additional access and transfer time.

## Overcoming computational roadblocks

### Storing large data sets

Thrashing between main memory and hard disk storage can easily be avoided by using a computer system with sufficient main memory to meet the storage requirements of the data set of interest. Desktop computers with sufficient memory to avoid thrashing when handling systems of this size are somewhat rare, but they are not out of reach for the majority of researchers. A contemporary computer built to handle the largest MM data sets presented in this paper would cost less than $10,000 (USD) and would not require any special training. Computing resources of this nature are commonly available at the computational centers of many research institutions.

### A new non-negative least-squares algorithm: TNT-NN

The TNT-NN algorithm (Myre et al. 2017a) is an active set method that is capable of solving large NNLS problems much faster than prior methods. TNT-NN improves upon existing methods through intelligent construction and modification of the active set and by incorporating an enhanced solver strategy to address the central least-squares problem.

Due to the convexity of the least-squares objective function (i.e., a local minimum must be a global minimum) (Boyd and Vandenberghe 2004), the TNT-NN algorithm can take an “algorithmic license” to guess which variables compose the active set without the risk of becoming locked into a local minimum. The suitability of the variables is determined by ranking them by their gradients (their change between iterations). The variables with the largest positive gradients are tested by moving them into the unconstrained set first. This allows the active set to be modified by a large number of variables in a single iteration. In contrast, other common active set methods typically modify the active set by a single variable per iteration (Lawson and Hanson 1995; Bro and De Jong 1997). The ability to modify the active set by many variables in a single iteration allows the TNT-NN algorithm to reduce the total number of iterations necessary for convergence.
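The contrast with single-variable pivoting can be seen in a deliberately simplified sketch. This is our own toy illustration, not the production TNT-NN code: the inner solve is a dense `lstsq` rather than TNT, and the safeguards of the real algorithm are omitted.

```python
import numpy as np

def block_active_set_nnls(A, b, max_iter=100):
    """Toy NNLS active set loop that, like TNT-NN, releases MANY
    variables per outer iteration; LH-NNLS would free only one."""
    n = A.shape[1]
    x = np.zeros(n)
    free = np.zeros(n, dtype=bool)       # True = unconstrained variable
    for _ in range(max_iter):
        w = A.T @ (b - A @ x)            # gradient of 0.5 * ||b - Ax||^2
        improving = (~free) & (w > 1e-12)
        if not improving.any():
            break                        # optimality conditions satisfied
        free |= improving                # release ALL improving variables
        while True:
            z, *_ = np.linalg.lstsq(A[:, free], b, rcond=None)
            if (z >= 0).all():
                break
            idx = np.where(free)[0]
            free[idx[z < 0]] = False     # re-constrain negative variables
        x[:] = 0.0
        x[free] = z
    return x
```

Because the objective is convex, this greedy bulk move cannot trap the iteration at a spurious local minimum; a rejected guess only costs extra inner iterations.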

In active set methods, solving the core unconstrained least-squares problem is independent of the construction of the active set. To accelerate the core unconstrained least-squares solver, Myre et al. (2018) developed the TNT algorithm. The TNT algorithm uses the Cholesky factor of the normal equations as a preconditioner for a left-preconditioned conjugate gradient normal residual (PCGNR) method (Saad 2003). PCGNR can be thought of as a computationally cheap mechanism that iteratively improves the solution. The normal equations are explicitly formed only to create the preconditioner for CGNR, thereby avoiding the numerical issues typically associated with the normal equations and the squared condition number of the problem.

The condition number of a matrix \(\mathbf{A}\), \(\kappa (\mathbf{A})\), is the ratio of its largest to smallest singular values, and it can be used as an indicator of numerical inaccuracy in solutions (Cline et al. 1979). A numerical “rule of thumb” states that when solving a system of equations \(\mathbf{A}\mathbf{x}=\mathbf{b}\), “one must always expect to lose \(\log_{10}\kappa (\mathbf{A})\) digits in computing the solution” (Trefethen and Bau III 1997, Lec. 12). The normal equations are particularly susceptible to this issue because \(\kappa (\mathbf{A}^{T}\mathbf{A}) = \kappa (\mathbf{A})^{2}\). TNT explicitly forms the normal equations only to generate the preconditioner.
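The squaring of the condition number is easy to demonstrate numerically (a quick check of the identity above, not a computation from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))

k_A = np.linalg.cond(A)          # ratio of largest to smallest singular value
k_AtA = np.linalg.cond(A.T @ A)  # condition number of the normal equations

# kappa(A^T A) = kappa(A)^2, so the expected number of digits lost
# doubles when the normal equations are solved directly.
```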

Myre et al. (2018) show that TNT obtains solutions two to sixteen times faster than other conventional solvers and that TNT consistently produces solutions where the L\(^2\)-norm of the solution residual is on the order of \(10^{-15}\) for ill-conditioned problems (\(\kappa \ge 10^{8}\)) and \(10^{-28}\) for well-conditioned problems (\(\kappa < 10^{8}\)). For well-determined and well-conditioned problems, the L\(^2\)-norm of the TNT solution residual is always less than or equal to those of alternative methods while also decreasing execution time in the majority of tests.

Using TNT as the core unconstrained least-squares solver, TNT-NN likewise yields solutions where the L\(^2\)-norm of the solution residual is always less than or equal to those of alternative methods for well- and overdetermined, well-conditioned problems. Myre et al. (2017a) show that with TNT, TNT-NN can outperform the more modern FNNLS method in execution time by up to a factor of 180 when solving small systems up to \(25{,}000\times 25{,}000\) and by more than a factor of 40 when solving a larger (\(45{,}000\times 45{,}000\)) system.

## Application to synthetic and natural samples

We compare the least-squares inversions of MM scans of four samples using LH-NNLS, *lsqlin*, and TNT-NN. We restrict the baseline methods to LH-NNLS and *lsqlin* because all previously reported spatial inversion results have been obtained using these methods.

Comparing TNT-NN to LH-NNLS could be considered unfair due to the age of the LH-NNLS algorithm. We consider the comparison necessary as LH-NNLS is the routine that is most consistently used throughout other published spatial inversions that do not circumvent the computational hurdles described earlier.

We analyze the spatial least-squares inversion solutions of one synthetic and three natural samples: a synthetic University of Minnesota (UMN) logo, a 30 \(\upmu\)m thin section of basalt from the Mauna Loa volcano (Weiss et al. 2007a, b), a 30–60 \(\upmu\)m thin section of ferromanganese crust from the Takuyo–Daigo Seamount (Noguchi et al. 2017a), and a 100 \(\upmu\)m thin section of a calcite speleothem from Spring Valley Caverns in southeastern Minnesota, USA (Dasgupta et al. 2010). The synthetic UMN logo inversion is small enough to be solved using LH-NNLS and TNT-NN. The basalt thin section inversions are solved using *lsqlin* (combined with sample subdivision) and TNT-NN, as LH-NNLS is prohibitively slow for solving anything significantly larger than the synthetic UMN MM scan. The ferromanganese crust and speleothem MM data are large enough to be considered intractable for the LH-NNLS and *lsqlin* methods without significant modification. As such, only the TNT-NN method is used to perform these inversions.

All samples share the same SIRM dipole orientation, (\(0^{\circ }\), \(0^{\circ }\)), in (\(\theta\), \(\phi\)), where \(\theta\) is the polar angle and \(\phi\) is the azimuthal angle. Solving unidirectional problems, where (\(\theta\), \(\phi\)) are unknown, requires solving an additional, but independent, inversion problem to determine orientation. Because the process of determining orientations is independent of the unidirectional problem to determine the distribution and magnitude of magnetic sources, we do not address it here.

Although we vary the spatial resolution of the NNLS solutions across these samples, we restrict the best possible spatial resolution to half the sensor-to-sample distance. This allows the NNLS method to obtain well resolved sources without inducing instabilities. This constraint is solely due to the physics of the problem being solved. On its own, NNLS has no inherent spatial constraints.

Each real-world sample presented here is composed of data from a single MM scan. We then use different numerical methods to invert the scan (or, alternatively, a cropped region of the scan). As differences between MM devices and scanning conditions can yield different spatial resolutions, we do not directly compare inversion result quality between different samples.

All inversions (using LH-NNLS, *lsqlin* and TNT-NN) were performed using Matlab 2017b and a single compute node on the Mesabi supercomputer at the Minnesota Supercomputing Institute. This compute node consists of dual 12 core Intel Haswell E5-2680v3 processors at 2.5 GHz with up to 1 TB of memory. None of the analyses presented in this paper require more than 365 GB of memory. Comparable computing systems are available at most computing centers or easily purchased for $10,000 (USD) or less.

For each sample, the performance of each technique is compared in terms of execution time and the root mean square (RMS) of the solution residual. We also examine the calculated bulk magnetic moments of the solutions for these samples. In all cases the bulk magnetic moment is calculated as the sum of the (\(\theta\), \(\phi\)) component of all solution dipoles. In the synthetic UMN logo case, we compare the calculated moment to the known moment. For the natural samples, we compare the calculated moments to experimentally measured moments.

### Synthetic UMN logo

The use of simple alphanumeric characters, simple symbols, and institutional logos as baselines for numerical experimentation is common practice (Baratchart et al. 2013; Egli and Heller 2000; Lima et al. 2006; Lima and Weiss 2009; Lima et al. 2013) as these synthetic data sets often represent worst-case scenarios when testing new methods. In particular, synthetic samples of this nature allow the inverse solution to be exactly known which enables an examination of how robust the inversion method is when solving problems of varying difficulty. MM data for this synthetic sample are created numerically using Matlab R2017a. Originally presented by Myre et al. (2017a), the process starts by generating a synthetic 2-dimensional magnetic source map. An image is treated as a discretized set of square magnetic sources from which magnetic field maps are calculated. The synthetic MM scan is then obtained as the vertical component of the magnetic field, \(B_z\), at a height, *h*, above the synthetically created and irregularly shaped set of magnetic sources.

We created a synthetic saturation isothermal remanent magnetization (SIRM) scan of the UMN logo by converting a \(67\times 50\) pixel grayscale image of the UMN logo to a magnetic source map. By introducing negative values in the image, the final synthetic SIRM sample will have points roughly mimicking magnetic sources that have failed to align with the saturating field.

By imposing a non-negativity constraint on such sources, the solution to the inversion problem is a unique distribution of magnetic sources [the unidirectional problem (Weiss et al. 2007b; Baratchart et al. 2013)]. Without the non-negativity constraint, the solutions to the inversion problem to obtain the magnetic sources are non-unique [the unrestricted problem (Weiss et al. 2007b; Baratchart et al. 2013)]. In fact, any magnetic field map can be modeled by unidirectional sources without the non-negativity (also known as unidimensional) constraint (Baratchart et al. 2013).
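The unidirectional problem can be written as a non-negative least-squares system, \(\min \Vert G m - b_z \Vert _2\) subject to \(m \ge 0\), where each column of *G* holds the \(B_z\) field of a unit *z*-oriented dipole. As an illustrative sketch only (not the paper's LH-NNLS or TNT-NN implementation), the following Python code builds that kernel and solves it with SciPy's Lawson–Hanson-type `nnls` routine; the array layouts and function names are our assumptions:

```python
import numpy as np
from scipy.optimize import nnls

MU0_4PI = 1e-7  # mu_0 / (4 pi) in T m A^-1

def bz_kernel(obs_xy, src_xy, h):
    """Forward matrix G: Bz (tesla) at height h from unit z-oriented dipoles.

    Rows index observation points; columns index source dipoles (Am^2).
    """
    dx = obs_xy[:, None, 0] - src_xy[None, :, 0]
    dy = obs_xy[:, None, 1] - src_xy[None, :, 1]
    r2 = dx ** 2 + dy ** 2 + h ** 2
    return MU0_4PI * (3.0 * h ** 2 - r2) / r2 ** 2.5

def invert_unidirectional(obs_xy, src_xy, h, bz):
    """Solve min ||G m - bz||_2 subject to m >= 0 (the unidirectional problem)."""
    G = bz_kernel(obs_xy, src_xy, h)
    m, _ = nnls(G, bz)
    return m
```

Because the non-negativity constraint is enforced, the recovered source distribution is unique; dropping the constraint yields the non-unique unrestricted problem discussed above.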

The following is an outline of the process used to create the synthetic SIRM scan of the UMN logo. The values in the original grayscale image range from 0 to 255. To create the set of magnetic sources requiring the application of the non-negativity constraint, we multiply the few grayscale values less than 75 by \(-1\). Those grayscale values in the image less than 75 correspond to relatively low-intensity background shading pixels in the original image. All of the values are then scaled such that the maximum value of the logo is \(4.3233\times 10^{-11}\), and the minimum value of the logo is \(-1.4788\times 10^{-12}\), which is in the typical range of magnetization intensity for natural samples measured in Am\(^2\). All orientations are in the ±*z*-direction (i.e., in or out of the page), where the sign of the converted grayscale value determines the orientation in the *z*-direction. These new pixel values are then treated as magnetic sources (each source is \(100\times 100\) \(\upmu\)m\(^2\) for this synthetic sample) and used to calculate the vertical component of the magnetic field, \(B_z\), at a height, *h*, above the synthetic magnetic sources.
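The grayscale-to-source-map conversion can be sketched as below. Scaling the positive and negative values independently to the stated extrema is our assumption about the procedure, and the function name is hypothetical:

```python
import numpy as np

def grayscale_to_sources(img, neg_threshold=75,
                         max_moment=4.3233e-11, min_moment=-1.4788e-12):
    """Convert a grayscale image (values 0-255) to a signed source map (Am^2).

    Pixels below neg_threshold are flipped negative, mimicking sources that
    failed to align with the saturating field. Positive and negative values
    are then scaled (independently -- an assumption) to the target extrema.
    """
    src = img.astype(float)
    src[src < neg_threshold] *= -1.0
    pos, neg = src > 0, src < 0
    if pos.any():
        src[pos] *= max_moment / src[pos].max()
    if neg.any():
        src[neg] *= min_moment / src[neg].min()  # both negative: ratio is positive
    return src
```

The resulting map would then feed a forward \(B_z\) calculation at height *h* to produce the simulated MM scan.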

We address six scenarios, all of which share the same spatial dimension (6.7 mm \(\times\) 5 mm), resolution (100 \(\upmu\)m), MM scan height (200 \(\upmu\)m), dipole spacing (100 \(\upmu\)m), and total number of solution dipoles (3350): (1) solving the simulated synthetic UMN logo without negativity constraint violations using LH-NNLS and TNT-NN, (2) solving the simulated synthetic UMN logo without negativity constraint violations and with Gaussian white noise corruption using LH-NNLS and TNT-NN, (3) solving the simulated synthetic UMN logo with negativity constraint violations using LH-NNLS and TNT-NN, (4) solving the simulated synthetic UMN logo with negativity constraint violations and Gaussian white noise corruption using LH-NNLS and TNT-NN, (5) solving the simulated synthetic UMN logo with higher magnitude negativity constraint violations using LH-NNLS and TNT-NN, and (6) solving the simulated synthetic UMN logo with higher magnitude negativity constraint violations and Gaussian white noise corruption using LH-NNLS and TNT-NN. All solutions obtain dipoles oriented out of the page, i.e., (0\(^\circ\), 0\(^\circ\)) in (\(\theta\), \(\phi\)). The known synthetic magnetic source distributions for these scenarios are shown in Fig. 1.

In scenarios where the simulated MM scan is corrupted by Gaussian white noise (2, 4, and 6), we measure the amount of signal degradation using the signal-to-noise ratio (SNR), reported in decibels (dB), as

$$\mathrm{SNR} = 10\log _{10}\left( \frac{\sigma ^{2}_{\mathrm{{signal}}}}{\sigma ^{2}_{\mathrm{{noise}}}}\right) ,$$

where \(\sigma ^{2}_{\mathrm{{signal}}}\) is the variance of the (noiseless) synthetic B\(_z\) field and \(\sigma ^{2}_{\mathrm{{noise}}}\) is the variance of the noise. For scenarios incorporating noise, we use an SNR of 40 dB.
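Corrupting a field map at a target SNR follows directly from the definition above: since SNR in dB is \(10\log_{10}(\sigma^2_{\mathrm{signal}}/\sigma^2_{\mathrm{noise}})\), the noise standard deviation is the signal standard deviation divided by \(10^{\mathrm{SNR}/20}\). A sketch, with hypothetical function names:

```python
import numpy as np

def add_noise_at_snr(bz, snr_db_target, seed=None):
    """Corrupt a field map with Gaussian white noise at a target SNR (dB)."""
    rng = np.random.default_rng(seed)
    sigma_noise = bz.std() / 10 ** (snr_db_target / 20.0)
    return bz + rng.normal(0.0, sigma_noise, bz.shape)

def snr_db(signal, noise):
    """Realized SNR in decibels: 10 log10(var(signal) / var(noise))."""
    return 10.0 * np.log10(signal.var() / noise.var())
```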

For scenarios 1 and 2, we modify our original synthetic SIRM creation procedure to ensure no violations of the non-negativity constraint. We do this by taking the absolute value of the original synthetic SIRM magnetic source map. For scenarios 5 and 6, we modify our original synthetic SIRM creation procedure to create more, and higher magnitude, negativity constraint violations (relative to scenarios 3 and 4). We do this by first switching all values between 1 and 45 in the original grayscale image from positive to negative. We then scale the negative values of the logo such that the minimum value is \(-7.55\times 10^{-12}\) and scale the positive values such that the maximum value is \(4.3233\times 10^{-11}\). Like the synthetic sources created for scenarios 3 and 4, the values obtained in the modified magnetic source maps are in the typical range of magnetization intensity for natural samples measured in Am\(^2\). The modified magnetic source maps are then used to calculate the simulated MM scans of the B\(_z\) field.

We report the magnetic moment, the residual RMS, and error of each solution in Table 1. The solution error is calculated as the L\(^2\)-norm of the difference between the known magnetic source distribution and the magnetic source distribution obtained as the inverse solution. As seen in Table 1, these measures are identical for LH-NNLS and TNT-NN.

For all scenarios of these synthetic MM scans, the inverse solutions produced by each method are similar and are a good visual match to the known solutions, as seen in Figs. 2, 3, and 4. Any differences between LH-NNLS and TNT-NN can be attributed to numerical noise, which typically becomes an issue when computing with values near or below the machine epsilon, which is \(2^{-53}\) (approximately \(1.11\times 10^{-16}\)) on a computer using IEEE 754 double precision floating point numbers (Higham 2002).

The synthetic samples created with this method present an algorithmic “worst-case scenario” as the correct solution is a piecewise step function with sharp discontinuities. Any method that minimizes the \(L^2\)-norm, like least-squares, will act as a smoothing low-pass filter and spread out the solution in space. This leads to the creation of nonphysical sources in the solution. These sources can artificially inflate the net magnetic moment of the solution. This can be seen in the results from all scenarios in Table 1, where the LH-NNLS and TNT-NN solutions produce higher magnetic moments than that of the known solution.

Figure 2 shows results from scenarios 1 and 2. Both methods quickly produce solutions to scenario 1 with errors nine orders of magnitude below the machine epsilon. Scenario 1 is computationally attractive as the NNLS solvers should terminate in a single iteration with a valid solution (e.g., there are no variables requiring constraint). The addition of noise in scenario 2 introduces variables that violate the non-negativity constraint. This incurs additional NNLS solver iterations to obtain a valid solution. The noise also causes nonphysical magnetic sources to be obtained in the least-squares solutions. These nonphysical sources increase the magnetic moment, residual RMS, and error of the solutions.

Figure 3 shows results from scenarios 3 and 4. The introduction of noise in scenario 4 causes nonphysical magnetic sources to be obtained in the least-squares solutions. These nonphysical sources slightly increase the magnetic moment, residual RMS, and error of the solutions.

Figure 4 shows results from scenarios 5 and 6. Enhancing negativity causes a corresponding enhancement of the difference between the known and obtained magnetic moment. The introduction of noise in scenario 6 does not have the same effect seen in scenario 4. Nonphysical magnetic sources are still obtained in the least-squares solutions, but these are much smaller in magnitude as the magnitude of the primary solution dipoles is enhanced to compensate for the enhanced negativity. The balance between noise and negativity yields a negligible difference in magnetic moment, residual RMS, and error of the solutions obtained by LH-NNLS and TNT-NN.

The performance of these two methods diverges when considering the amount of execution time necessary for each to obtain a solution. The multiplicative factor of improvement in execution time for TNT-NN relative to LH-NNLS is shown in Fig. 5, reported as speedup (Lilja 2000, Ch. 2.5). We calculate speedup as

$$\text {speedup} = \frac{\text {exec(LH-NNLS)}}{\text {exec(TNT-NN)}},$$

where \({\text {exec(LH-NNLS)}}\) is the LH-NNLS execution time and \(\text {exec(TNT-NN)}\) is the TNT-NN execution time.

The degree to which TNT-NN enhances performance over LH-NNLS is dependent on many factors, some of which include problem size, number of constraints, and condition number. The results in Fig. 5 show that, on average, TNT-NN provides a 623-fold improvement in execution time for the synthetic UMN logo scenarios with sample variables requiring constraint (scenarios 3–6). Restated, TNT-NN is capable of reducing 1 h of LH-NNLS computation time for synthetic MM inversion problems to approximately 5.78 s, on average.

### Hawaiian basalt

This 30 \(\upmu\)m thin section of tholeiitic basalt was collected from the Hawaiian Scientific Drilling Project (HSDP) 2 core through the Mauna Kea Volcano, Hawaii. These MM data were collected using the scanning SQUID microscope at the MIT Paleomagnetism Laboratory. The bulk moment of the sample was also measured using a 2G Enterprises 755 Rock Magnetometer in the same laboratory. Collection and composition details for this sample, as well as the experimental conditions used to obtain the MM data, are provided by Weiss et al. (2007b).

Weiss et al. (2007b) measured the NRM and SIRM of this basalt thin section. To obtain unidirectional inverse solutions in a timely manner, it was found necessary to crop the spatial domain to a 13.6 mm \(\times\) 19.1 mm region around the sample and apply dipole thresholding to exploit sparse matrix techniques using the *lsqlin* routine, reducing computation time to several weeks. Further reductions in execution time required the use of sample subdivision to perform piecewise inversions.

We restrict our numerical analyses to the SIRM measurements of this sample (Fig. 6), using *lsqlin* and TNT-NN. Due to the prohibitive performance of the *lsqlin* method, we restrict our *lsqlin* analyses to two scenarios at full measurement resolution (dipole spacing of 100 \(\upmu\)m): (1) a piecewise scenario where five equally sized horizontal subdivisions (approximately 13.6 mm \(\times\) 4.3 mm) are solved independently and (2) the same 13.6 mm \(\times\) 19.1 mm cropped region used by Weiss et al. (2007b). Using TNT-NN, we address two scenarios at full measurement resolution (dipole spacing of 100 \(\upmu\)m): (1) the same 13.6 mm \(\times\) 19.1 mm cropped region used by Weiss et al. (2007b) and (2) the full 19.0 mm \(\times\) 25.0 mm measurement domain.

Results are shown in Fig. 7 and Table 2. In Table 2, we also include results from the Fourier method of Lima et al. (2013) and the measured moment using a 2G. The original SIRM field map was bilinearly interpolated to produce a 500 \(\times\) 400 field map as input to the Fourier method, which then produced a 500 \(\times\) 400 distribution of magnetic sources with a dipole spacing of 50 \(\upmu\)m, as described in Lima et al. (2013).

For these MM data, TNT-NN offers marked enhancements over *lsqlin*. We find an acceptable match between the numerically obtained magnetic moment of the basalt SIRM using TNT-NN, *lsqlin*, the Fourier method, and the measured moment using a 2G. The residual RMS values produced by TNT-NN are two orders of magnitude lower than that produced by the piecewise *lsqlin* solution. This is due to multiple factors, two of which are that (1) TNT-NN does not exclude any dipole interactions and (2) the *lsqlin* routine exhibits a slow convergence rate, leading to early termination, which produces solutions with elevated residuals. Relative to *lsqlin*, TNT-NN primarily improves solution residuals by solving the full spatial domain of the sample to avoid nonphysical artifacts at subdivision boundaries, ultimately yielding solutions that are more physically representative.

The piecewise inversions of the cropped 13.6 mm \(\times\) 19.1 mm domain using *lsqlin* require 52.3 min to calculate a solution. Because these piecewise inversions are independent, they can be solved concurrently, requiring a total inversion time that is approximately the same as the time required to solve a single subdivision. If it is necessary to solve the piecewise inversions serially due to computing system limits, the total execution time is scaled by the number of subdivisions. In this analysis, the serial execution time is approximately 261.3 min.

In contemporary computing systems equipped with sufficient memory, *lsqlin* can be used to solve the entirety of the 13.6 mm \(\times\) 19.1 mm spatial domain of the sample without the need for sample subdivision. Without subdivisions, the high-residual interfaces in the *lsqlin* solution are removed. This allows the *lsqlin* method to produce solution residuals similar to those of the TNT-NN. However, this incurs a significant increase in computation time, from 52.3 min for a single subdivision to 59.2 h for the whole spatial domain. This is an increase over the piecewise computation time by a factor of approximately 70.

Solving the same cropped 13.6 mm \(\times\) 19.1 mm domain in its entirety using TNT-NN requires only 40.98 min, which is less than the time required to solve a single subdivision of one fifth of the problem. This is an improvement of 20% and 600% over the concurrent and serial *lsqlin* execution times, respectively. The residual RMS is also improved by two orders of magnitude. Compared to using *lsqlin* to solve the full spatial domain, the computation time required by TNT-NN to obtain a solution with nearly equivalent residual RMS is 86.7 times less.

Expanding the spatial problem domain to the full measurement domain (19.0 mm \(\times\) 25.0 mm) and solving for dipoles spaced at the sampling resolution (100 \(\upmu\)m spacing) almost doubles the number of active variables, from 25,976 to 47,500. While this improves the residual RMS, the improvement is not as significant as the shift away from using subdivisions for piecewise inversion. For the minor improvement in residual RMS, there is a 1.31 h penalty on total execution time to compute the solution, almost 200% longer. Despite the increase in problem size, there is no need to address the scan in a piecewise manner.

### Ferromanganese crust

The ferromanganese crust thin section was collected from the Takuyo–Daigo Seamount (22\(^{\circ }\)41.04\(^{\prime }\)N 153\(^{\circ }\)14.63\(^{\prime }\)E, at a depth of 2239 m below the water surface) as sample HPD#954-R10. This sample is 19 \(\times\) 19 mm, 30–60 \(\upmu\)m in thickness, and has previously been used for paleomagnetic study by Noguchi et al. (2017b). The SIRM MM data were obtained using a Scanning SQUID Microscope at the Geologic Survey of Japan, National Institute of Advanced Industrial Science and Technology (Oda et al. 2016). Hysteresis loops were measured with a Princeton Measurement Corporation Alternating Gradient Force Magnetometer at the same laboratory. The first use of this thin section for MM experimentation was by Noguchi et al. (2017a), who also provide additional collection, composition, and MM experimental conditions.

We restrict our comparisons to the SIRM measurements of this sample (Fig. 8), using *lsqlin* and TNT-NN. We address three scenarios: (1) using 25% of the 2-dimensional spatial sampling resolution (200 \(\upmu\)m dipole spacing) over the entire 32.1 mm \(\times\) 30.1 mm measurement domain (the same scenario presented by Noguchi et al. (2017b)), (2) using a dipole spacing of 160 \(\upmu\)m, which is approximately half the sensor-to-sample distance (319 \(\upmu\)m), over a 23.1 mm \(\times\) 24.1 mm cropped region around the sample, and (3) using a dipole spacing of 160 \(\upmu\)m over the entire 32.1 mm \(\times\) 30.1 mm measurement domain. Due to the prohibitive performance of the *lsqlin* method, we restrict our *lsqlin* analyses to scenario 2. We use TNT-NN to address all three scenarios.

Results are shown in Fig. 9 and Table 3. In all scenarios, the numerical methods obtain magnetic sources with similar remanence magnetization (the arithmetic mean of these is 4.62 \(\times 10^{-8}\) Am\(^{2}\)). These results are on the same order of magnitude, but slightly less than the measured remanence magnetization for this sample obtained by experimental hysteresis (7.52 \(\times 10^{-8}\) Am\(^{2}\), found by Noguchi et al. (2017b, Table S1)). *lsqlin* produces the highest residual RMS value of all scenarios due to early termination (reaching the maximum number of iterations). TNT-NN yields lower residual RMS values that are similar for all three inversion scenarios. For these scenarios, the largest solution residual RMS is caused by decreasing the inverse solution resolution and scan area (scenario 2). Reducing the field of view so there is less area surrounding the sample causes a small increase in solution residual RMS. It is unlikely that this increase is significant enough to affect interpretation, as can be seen by comparing Fig. 9b, e to 9c, f, respectively.

All numerical solutions appear to increase in blurriness from the right to the left across the sample area. The original inversion published by Noguchi et al. (2017b, Figure S8) exhibits the same trait. There are multiple factors that could be responsible for this trait, including an irregular sample surface or inconsistent sensor-to-sample distance. Because the original sample varied in thickness from 30–60 \(\upmu\)m, it is possible that the sample surface was not coplanar with the surface of the SSM sensor path. This would result in an inconsistent sensor-to-sample distance and ultimately an inconsistent spatial resolution.

For conventional NNLS methods, scenarios 1 and 2 are solvable, albeit on a scale comparable to problems that required computation times of “several weeks” (Weiss et al. 2007a) or several days in these analyses [see the Hawaiian basalt inversion in Weiss et al. (2007b)]. Alternative methods could reduce computation time at the cost of solution residuals (like those in the piecewise *lsqlin* basalt solution shown in Fig. 7d). Here, *lsqlin* required 5.26 times more computation time than TNT-NN to solve scenario 2. For TNT-NN, solving scenarios 1, 2, and 3 required 0.94, 1.18, and 7.89 h, respectively. This is slightly more than 10 h of cumulative computation time.

### Speleothem

Speleothems have been shown to be excellent natural recorders of magnetic signals (Latham et al. 1979; Morinaga et al. 1985, 1989; Osete et al. 2012; Strauss et al. 2013; Font et al. 2014; Bourne et al. 2015; Lascu et al. 2016; Jaqueto et al. 2016; Ponte et al. 2017; Zhu et al. 2017), as they are able to capture and preserve, within their calcite matrix, detrital magnetic minerals from airborne particles, drip water, or stream water from flood events, as well as in situ iron oxy-hydroxide precipitates (Lascu and Feinberg 2011; Denniston and Luetscher 2017).

The stalagmite analyzed here, SVC982, originates from Spring Valley Caverns (SVC) in Fillmore County, Minnesota, USA, in the Root River watershed of the Upper Mississippi Valley. Additional field site and sample collection details are provided by Dasgupta et al. (2010). A 100 \(\upmu\)m thin section from the top \(\sim\) 5 cm of the speleothem was prepared for SQUID microscopy using non-magnetic equipment and binding materials. In order to obtain a unidirectional field map suitable for inversion, the sample was magnetized using a 1 T field oriented perpendicular to the thin section plane, which resulted in the specimen acquiring a SIRM.

The MM data were collected using the scanning SQUID microscope at the MIT Paleomagnetism Laboratory. SQUID microscope measurements were performed inside a magnetically shielded environment (ambient field < 100 nT), using a high-precision scanning stage, which allowed data collection along a square grid with 100 \(\upmu\)m spacing. The sensor-to-sample distance was 200 \(\upmu\)m. Typical scan times for a \(\sim\) 10 cm\(^2\) area were 16.5 h. The bulk moment of the sample was also measured using a 2G Enterprises Rock Magnetometer at the Institute for Rock Magnetism in the Department of Earth Sciences at the University of Minnesota.

The magnetic minerals within this speleothem are relatively sparse due to depositional layering, seen in Fig. 10. Although this sparse spatial distribution of magnetic material gives rise to a similarly sparse inverse solution, the problem itself is still dense because the Jacobian matrix is dense.

Results are shown in Table 4 and Fig. 10. We find an acceptable match between the numerically obtained magnetic moment of the speleothem SIRM using TNT-NN, the Fourier method, and the measured moment using a 2G. Due to sample dimensions and instrument configuration, it was necessary to magnetize the sample in the orientation of speleothem growth. The bulk measured moment could be smaller due to the elongated nature of the sample, which is not optimal for the 2G coil configuration, which is designed for equidimensional samples. The moment could also be affected by the differing numerical and experimental orientations, due to possible magnetic anisotropy of the sample. Two additional numerical reasons contribute to the moment of the TNT-NN solution being higher than the moment of the Fourier solution: (1) the nonphysical sources introduced by TNT-NN inflate the net magnetic moment, and (2) the Fourier solution has variables that do not conform to the non-negativity constraint which deflate the net magnetic moment of the solution. However, we are not trying to perfectly match the experimental results. Instead, the purpose of this comparison is to determine whether the numerical results are physically reasonable.

The solution residual RMS for this inversion is the same order of magnitude as the residual RMS values for the ferromanganese crust inversions and two orders of magnitude lower than the TNT-NN residual RMS values for the basalt inversions. The residual RMS produced using TNT-NN is lower than that of the Fourier solution; however, both are the same order of magnitude.

One source of high-magnitude residuals in the speleothem inverse solution can be traced to contamination of the MM scanning environment, outside the sample area (Fig. 10). Removing the magnetic sources and residuals that are not associated with the sample is easily done in postprocessing. Accounting for the interactions of the sample dipoles with the contamination dipoles remains non-trivial. Removing the solution residuals in the spatial vicinity of the contamination only reduces overall residual RMS by 0.169 nT to 0.599 nT.
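The postprocessing step of excluding residuals in the spatial vicinity of the contamination amounts to restricting the RMS statistic to a boolean mask over the scan. A sketch with a hypothetical helper name:

```python
import numpy as np

def masked_residual_rms(residual, sample_mask):
    """Residual RMS restricted to pixels inside the sample area.

    sample_mask is a boolean array, True inside the sample; residuals in
    the vicinity of the contamination (False pixels) are excluded.
    """
    return np.sqrt(np.mean(residual[sample_mask] ** 2))
```

As noted above, masking changes the reported statistic but does not account for interactions between sample dipoles and contamination dipoles, which remains non-trivial.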

The remaining sources of high-magnitude residuals are magnetic sources that are high-magnitude relative to the remainder of the sample. This difference in magnetic source magnitude is significant enough to appear as a sharp discontinuity. Such interfaces are a major source of residuals for methods that minimize the \(L^2\)-norm, like NNLS, as those methods will naturally smooth such interfaces.

This is the largest spatial inversion solved to date, by a factor of 2.26 (the ferromanganese crust scenario 2 presented here is the second largest). Despite the scale of this problem, TNT-NN required just over 24 h of computation time to produce a solution.

## Discussion

To achieve reasonable computation times, prior approaches to solving the unidirectional MM inverse problem used *lsqlin* in lieu of LH-NNLS and incorporated techniques to reduce computational difficulty (sample subdivision, dipole thresholding, reducing resolution, etc.). TNT-NN provides a transformative method for accelerating the computation of full spatial unidirectional MM inverse problems, without the need to modify the problem in a manner that reduces resolution or the physicality of the solution. For all samples, synthetic and natural, the TNT-NN method consistently offers performance enhancement over alternative methods via reduced execution time and solution residual, as seen in Fig. 11.

TNT-NN always requires less time to produce a solution than the other methods tested. For the synthetic UMN logo problem, TNT-NN and LH-NNLS produced identical solutions, but TNT-NN was 623 times faster, on average. Using *lsqlin* to solve the piecewise Hawaiian basalt inversion problem yields the closest execution time to TNT-NN. The execution time required by *lsqlin* to solve the Hawaiian basalt inversion problem in its entirety increases to 59.2 h, 86.7 times slower than TNT-NN.

The computation time spent by TNT-NN to obtain solutions for the ten inversion scenarios presented here totals 1.54 days. Approximately 87% of the total computation time was spent on two of the largest problems, requiring 24.36 and 7.89 h, respectively. The remaining problems were all solved in under 1.5 h each.

The trivial synthetic UMN logo inversion problem is the only case where a competing method matches the residual RMS produced by the TNT-NN solution. For the more computationally intense natural samples, only the small inversions of the Hawaiian basalt and ferromanganese crust were attempted using an alternative method. Other scenarios were considered large enough to be intractable and computational tactics violating physicality (piecewise inversion and dipole thresholding) were undesirable.

Relative to the piecewise *lsqlin* Hawaiian basalt solution, the residual RMS of other solutions are two orders of magnitude lower. When using *lsqlin* to solve the entire spatial domain of the Hawaiian basalt and ferromanganese crust samples, the residual RMS is the same order of magnitude as that of TNT-NN. However, this incurs additional computation time relative to the TNT-NN method. The TNT-NN algorithm exhibits a minor increase in residual RMS for tested scenarios that increase the spatial domain (field of view) around the sample or reduce resolution. The characteristics of the residual RMS produced by TNT-NN can be attributed to the design of the TNT-NN algorithm.

The core least-squares solver used in TNT-NN (simply named TNT) will, under perfect circumstances, solve the preconditioned system of equations in a single iteration. Because computers cannot yet avoid numerical rounding errors, it is more common that TNT will iterate as long as the residual is decreasing. The computational cost of these iterations is relatively low compared to calculating the TNT preconditioner. As such, early termination has not been found to be necessary in practice (Myre et al. 2018).
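As a loose behavioral sketch only (this is an assumption-laden simplification, not the published TNT implementation), a Cholesky factor of the slightly regularized normal equations can serve as a preconditioner for refinement iterations that continue only while the residual keeps decreasing:

```python
import numpy as np

def tnt_like_solve(A, b, max_iter=50):
    """Preconditioned least-squares loop that stops once the residual
    stops decreasing; a hedged sketch of TNT-style behavior."""
    AtA = A.T @ A
    Atb = A.T @ b
    n = A.shape[1]
    # Small diagonal shift so the Cholesky factorization always exists.
    L = np.linalg.cholesky(AtA + 1e-12 * np.trace(AtA) * np.eye(n))
    solve_prec = lambda v: np.linalg.solve(L.T, np.linalg.solve(L, v))
    x = solve_prec(Atb)                 # near-exact under perfect conditioning
    best_x, best_res = x, np.linalg.norm(A @ x - b)
    for _ in range(max_iter):
        dx = solve_prec(Atb - AtA @ x)  # cheap refinement through the factor
        x = x + dx
        res = np.linalg.norm(A @ x - b)
        if res >= best_res:             # iterate only while improving
            break
        best_x, best_res = x, res
    return best_x
```

Under perfect conditioning the first preconditioner solve is already the answer; the loop exists only to mop up rounding error, mirroring the behavior described above.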

Different types of regularization are introduced by the TNT-NN method and by the Fourier method of Lima et al. (2013). While both methods assume a fixed direction for the magnetization as a general regularization strategy, the TNT-NN method selects a solution by imposing strict non-negativity on the solution and stopping after a convergence criterion is met (no improvement in solution residual); similarly, Wiener deconvolution and windowing/filtering further regularize the inverse problem in the Fourier domain. Which regularization scheme performs best depends on the specific data being inverted and whether the underlying assumptions of each scheme (e.g., non-negativity and smoothness of the solution) are expected based on additional information about the sample. In particular, the overall smoothness of the solution should be carefully analyzed, as downward continuation of the magnetic data from the measurement plane to the sample plane is intrinsic to this type of inverse problem. Thus, one should determine the amount of regularization needed by assessing whether fine-scale changes and peaking in the solution are real or stem from noise magnification at higher spatial frequencies. When available, additional information on the magnetization, such as net moment measurements, can further guide the choice of regularization parameter(s).

The TNT-NN method is able to produce solutions to all of the samples using the full spatial domain without the need to avoid any computational roadblocks. However, undesirable effects appear in the solutions. Without incorporating any regularization techniques, least-squares methods, like any method minimizing the \(L^{2}\)-norm, will produce smooth solutions. The consequence of a smooth solution is that high-frequency signals can be lost. In this sense, least-squares methods act like low-pass filters. This low-pass filtering effect is highly likely to occur at the edge of the sample, where sharp discontinuities are present. The effect manifests as a nonphysical “halo” in the spatial solution that begins near discontinuities and diminishes in magnitude with distance from the sample. These nonphysical sources can artificially elevate the net moment of the solution. Without additional constraints or regularization, this effect will persist.

The haloing effect is most apparent in the low-resolution synthetic UMN logo spatial solutions. The smooth transition of the halo region is evident in Fig. 12, where a transect of the TNT-NN and known solutions, and the difference thereof, for scenario 3 of the synthetic UMN logo are shown. The spatial basalt and ferromanganese crust solutions also exhibit haloing but it is not immediately apparent. Haloing is not obvious in the spatial solution for the SVC982 speleothem sample, but it is present in low intensities along the depositional bands.

Fourier techniques (Lima et al. 2013; Baratchart et al. 2013) produce high-quality solutions faster than spatial domain techniques. However, an issue similar to haloing exists in Fourier solutions: nonphysical artifacts that manifest as over- and undershoot at sharp interfaces (Hewitt and Hewitt 1979). This behavior was originally discovered by Wilbraham in 1848 but has since come to be known as the Gibbs phenomenon (Gottlieb and Shu 1997). In the context of the unidirectional MM inverse problem, undershoot qualifies as a non-negativity constraint violation.

When comparing TNT-NN and Fourier solutions for the Hawaiian basalt sample (Fig. 13), it is seen that undershoot violating non-negativity occurs at sample boundaries. More accurately, undershoot occurs at strong gradients in the solution (a strong difference in adjacent magnetic moments). This can be seen when comparing the TNT-NN and Fourier solutions for the speleothem sample (Fig. 14). For this sample, undershoot does not occur at sample boundaries; instead, it is localized to isolated strongly magnetic grains. Despite the presence of undershoot, the Fourier method produces high-quality solutions. These solutions could potentially be exploited as a “starting point” to reduce computation time for spatial domain inversion using TNT-NN.

Nonphysical “halos” introduce correspondingly nonphysical magnetic sources into the solution. Although many of these sources are of relatively low magnitude, together they affect the bulk moment of the solution. This can be seen in all least-squares results presented here, as bulk moment typically increases with the number of dipoles: as more solution dipoles become available, more dipoles can be artificially “inflated” in the nonphysical halo. Fourier solutions obtained using postwindowing can also have low-magnitude, nonphysical magnetic sources introduced into the solution. Postwindowing acts as a low-pass filter, smoothing the solution at sharp discontinuities by spreading it in space.

Ultimately, TNT-NN is capable of solving the full unidirectional MM inverse problem on timescales that are less than or equal to the time required to perform the data acquisition scan using an MM. This provides a means to effectively steer experimentation.

## Future directions

We recognize at least three avenues to improve the use of the TNT-NN method for analyzing MM scans: incorporating regularization techniques into TNT-NN, parallelizing the TNT-NN method, and preloading the TNT-NN active set.

Incorporating regularization techniques into TNT-NN has the potential to reduce the low-pass filtering effects (haloing) of the least-squares method at the core of the inversion process. This should improve solutions at high-gradient transitions, like those typically found at the edges of samples. Current techniques and software that are potentially applicable for this purpose include 1-norm regularization with a sparsity prior (Bach et al. 2011), the min-TV regularizer from L1 Magic (Candès et al. 2006), and CVX for Matlab (Grant and Boyd 2008, 2013).
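As a sketch of how a 1-norm sparsity prior could be combined with non-negativity, note that for \(x \ge 0\) the 1-norm is simply the sum of the components, so the penalized objective remains smooth and can be handed to a bound-constrained solver. The matrix sizes, the weight `lam`, and the choice of L-BFGS-B below are hypothetical placeholders, not the approach of Bach et al. (2011) or of the authors:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))              # hypothetical design matrix
x_true = np.zeros(40)
x_true[[5, 17, 30]] = [2.0, 1.0, 3.0]          # sparse non-negative sources
b = A @ x_true + 0.01 * rng.standard_normal(60)

lam = 0.5                                      # hypothetical regularization weight

def objective(x):
    r = A @ x - b
    return r @ r + lam * x.sum()               # for x >= 0, x.sum() == ||x||_1

def gradient(x):
    return 2.0 * A.T @ (A @ x - b) + lam       # gradient of the penalized objective

res = minimize(objective, np.zeros(40), jac=gradient,
               method="L-BFGS-B", bounds=[(0.0, None)] * 40)
x_hat = res.x                                  # sparse, non-negative estimate
```

The 1-norm term penalizes total moment, discouraging the low-magnitude halo sources discussed above, while the bounds preserve the non-negativity constraint that motivates TNT-NN in the first place.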

Second, both the problem sizes TNT-NN is capable of solving and the execution time performance of TNT-NN could be improved with a parallel implementation for large-scale distributed memory computer systems. Such an implementation could exploit existing parallel linear algebra routines for such machines (Blackford et al. 1997). Additional performance improvements might be found in modern multi- or many-core heterogeneous computing systems incorporating computational accelerators, like General Purpose Graphics Processing Units (GPGPUs) (Walsh et al. 2009), and related linear algebra routines (Tomov et al. 2010a, b; Dongarra et al. 2014).

Third, it is possible that combining the frequency domain inversion technique (Lima et al. 2013; Baratchart et al. 2013) with the TNT-NN spatial domain technique could lead to a more accurate and balanced inversion method. This balanced approach would use the extremely fast frequency domain technique to generate an initial starting point for the TNT-NN method. The frequency domain solution, \(\varvec{x}\), would serve as the initial TNT-NN solution, and the components satisfying \(x_i \le 0\) would load the initial TNT-NN active set. With this starting point, the number of TNT-NN iterations to convergence should be reduced and the runtime performance of the TNT-NN method should improve.
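This warm start could be sketched as a small helper that partitions the variables according to the frequency domain solution; `preload_active_set` is a hypothetical name, and an actual implementation would hand these index sets to the TNT-NN solver (Myre et al. 2017b):

```python
import numpy as np

def preload_active_set(x_fourier):
    """Partition variables using a frequency-domain solution (sketch).

    Components at or below zero violate non-negativity and start in the
    active (clamped-to-zero) set; the rest start in the passive set.
    """
    x0 = np.asarray(x_fourier, dtype=float)
    active = np.flatnonzero(x0 <= 0.0)            # initially constrained to zero
    passive = np.flatnonzero(x0 > 0.0)            # initially free to vary
    return np.maximum(x0, 0.0), active, passive   # feasible start + index sets

x_fft = np.array([0.8, -0.1, 0.0, 2.3, -0.5])     # toy frequency-domain solution
x0, active, passive = preload_active_set(x_fft)
# active holds indices 1, 2, and 4; passive holds 0 and 3
```

Clamping the negative components also removes the Gibbs-type undershoot from the starting point, so the initial iterate is already feasible for the non-negativity constraint.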

Reducing MM analysis time likewise makes MM experimentation more dynamic. The prohibitively long time required to produce results has made experiments requiring the analysis of very large MM scans, or large quantities of MM scans, challenging. Significantly reducing analysis time enables these types of experiments to be performed. For example, stepwise demagnetization studies requiring a number of steps that might previously have taken months to solve can now be completed in the time required to perform the demagnetization and MM scan steps.

The spatial inversion of large-scale, high-resolution MM scans is also enabled by TNT-NN. The need to address such data sets will continue with the development of novel applications and improvements in the sampling resolution of MM devices. Recently developed quantum diamond microscopes (QDMs) (Glenn et al. 2017) have improved sampling resolution beyond standard MMs to 5 \(\upmu\)m, with sensitivities comparable to scanning SQUID microscopes. QDMs can produce data sets 100 times larger than prior MMs for the same scan area. Accelerated inversion schemes are critical to addressing these large data sets. Finally, a novel application of MMs is the combination of micro-to-nano tomography and MM scanning to determine the magnetic moments of an assemblage of particles in a 3D matrix (deGroot et al. 2018). These magnetic moments are determined using a least-squares formulation, so TNT could offer enhanced performance for any studies of this nature. TNT-NN could do the same for any similar studies examining SIRMs.

## Conclusions

This work demonstrates that the TNT-NN algorithm is a worthwhile extension to the existing spatial least-squares unidirectional inversion method. TNT-NN significantly reduces computation time and solution residual for all non-trivial inversions presented. For trivial inversions, alternative methods are capable of producing equivalent solution residuals, but they are unable to match the runtime performance of TNT-NN. The TNT-NN method provides a powerful extension to the spatial least-squares inversion of MM data: it is a key component in overcoming the computational roadblocks that have previously accompanied spatial MM inversions, while accelerating processing and reducing solution residual. With TNT-NN, the time required to obtain magnetic source maps from MM scans can be reduced to less than the time required to perform the scan itself. Matching these timescales establishes a scanning and processing pipeline, in which analysis can begin as soon as MM scanning is complete and inversion results are obtained in the time it takes to complete a second scan.

## References

Aster RC, Borchers B, Thurber CH (2011) Parameter estimation and inverse problems, vol 90, 2nd edn. Academic Press, Cambridge

Bach F, Jenatton R, Mairal J, Obozinski G (2011) Convex optimization with sparsity-inducing norms. Optim Mach Learn 5:19–53

Backus G, Gilbert F (1968) The resolving power of gross Earth data. Geophys J R Astron Soc 16(2):169–205

Baratchart L, Hardin D, Lima E, Saff E, Weiss B (2013) Characterizing kernels of operators related to thin-plate magnetizations via generalizations of Hodge decompositions. Inverse Probl 29(1):015004

Bhattacharyya B (1967) Some general properties of potential fields in space and frequency domain: a review. Geoexploration 5(3):127–143

Blackford LS, Choi J, Cleary A, D’Azevedo E, Demmel J, Dhillon I, Dongarra J, Hammarling S, Henry G, Petitet A, Stanley K, Walker D, Whaley RC (1997) ScaLAPACK users’ guide. Society for Industrial and Applied Mathematics, Philadelphia

Bourne MD, Feinberg JM, Strauss BE, Hardt B, Cheng H, Rowe HD, Springer G, Edwards RL (2015) Long-term changes in precipitation recorded by magnetic minerals in speleothems. Geology 43(7):595–598

Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge

Bro R, De Jong S (1997) A fast non-negativity-constrained least squares algorithm. J Chemom 11(5):393–401

Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52(2):489–509

Chatraphorn S, Fleet E, Wellstood F (2002) Relationship between spatial resolution and noise in scanning superconducting quantum interference device microscopy. J Appl Phys 92(8):4731–4740

Cline AK, Moler CB, Stewart GW, Wilkinson JH (1979) An estimate for the condition number of a matrix. SIAM J Numer Anal 16(2):368–375

Cooley JW, Tukey JW (1965) An algorithm for the machine calculation of complex Fourier series. Math Comput 19(90):297–301

Dampney C (1969) The equivalent source technique. Geophysics 34(1):39–53

Dasgupta S, Saar MO, Edwards RL, Shen C-C, Cheng H, Alexander EC (2010) Three thousand years of extreme rainfall events recorded in stalagmites from Spring Valley Caverns, Minnesota. Earth Planet Sci Lett 300(1):46–54

deGroot LV, Fabian K, Béguin A, Reith P, Barnhoorn A, Hilgenkamp H (2018) Determining individual particle magnetizations in assemblages of micrograins. Geophys Res Lett 45(7):2995–3000

Denning PJ (1968) Thrashing: its causes and prevention. In: Proceedings of the December 9–11, 1968, Fall joint computer conference, part I, pp 915–922. ACM

Denniston RF, Luetscher M (2017) Speleothems as high-resolution paleoflood archives. Quat Sci Rev 170:1–13

Dongarra J, Gates M, Haidar A, Kurzak J, Luszczek P, Tomov S, Yamazaki I (2014) Accelerating numerical dense linear algebra calculations with GPUs. In: Numerical computations with GPUs, pp 1–26

Egli R, Heller F (2000) High-resolution imaging using a high-T\(_c\) superconducting quantum interference device (SQUID) magnetometer. J Geophys Res 105:25

Emilia DA (1973) Equivalent sources used as an analytic base for processing total magnetic field profiles. Geophysics 38(2):339–348

Fleet E, Chatraphorn S, Wellstood F, Eylem C (2001) Determination of magnetic properties using a room-temperature scanning SQUID microscope. IEEE Trans Appl Supercond 11(1):1180–1183

Fong L, Holzer J, McBride K, Lima E, Baudenbacher F, Radparvar M (2005) High-resolution room-temperature sample scanning superconducting quantum interference device microscope configurable for geological and biomagnetic applications. Rev Sci Instrum 76(5):053703–053703

Font E, Veiga-Pires C, Pozo M, Carvallo C, Siqueira Neto AC, Camps P, Fabre S, Mirão J (2014) Magnetic fingerprint of southern Portuguese speleothems and implications for paleomagnetism and environmental magnetism. J Geophys Res Solid Earth 119(11):7993–8020

Fu RR, Weiss BP, Lima EA, Harrison RJ, Bai X-N, Desch SJ, Ebel DS, Suavet C, Wang H, Glenn D et al (2014) Solar nebula magnetic fields recorded in the Semarkona meteorite. Science 1258022

Fu RR, Weiss BP, Lima EA, Kehayias P, Araujo JF, Glenn DR, Gelb J, Einsle JF, Bauer AM, Harrison RJ et al (2017) Evaluating the paleomagnetic potential of single zircon crystals using the Bishop Tuff. Earth Planet Sci Lett 458:1–13

Gattacceca J, Boustie M, Lima E, Weiss B, De Resseguier T, Cuq-Lelandais J (2010) Unraveling the simultaneous shock magnetization and demagnetization of rocks. Phys Earth Planet Inter 182(1):42–49

Gattacceca J, Boustie M, Weiss BP, Rochette P, Lima EA, Fong LE, Baudenbacher FJ (2006) Investigating impact demagnetization through laser impacts and SQUID microscopy. Geology 34(5):333–336

Glenn DR, Fu RR, Kehayias P, Le Sage D, Lima EA, Weiss BP, Walsworth RL (2017) Micrometer-scale magnetic imaging of geological samples using a quantum diamond microscope. Geochem Geophys Geosyst 18:3254–3267

Gottlieb D, Shu C-W (1997) On the Gibbs phenomenon and its resolution. SIAM Rev 39(4):644–668

Grant M, Boyd S (2008) Graph implementations for nonsmooth convex programs. In: Blondel V, Boyd S, Kimura H (eds) Recent advances in learning and control (a tribute to M. Vidyasagar). Lecture notes in control and information sciences, vol 371. Springer, London, pp 95–110. http://stanford.edu/~boyd/graph_dcp.html

Grant M, Boyd S (2013) CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx

Hankard F, Gattacceca J, Fermon C, Pannetier-Lecoeur M, Langlais B, Quesnel Y, Rochette P, McEnroe SA (2009) Magnetic field microscopy of rock samples using a giant magnetoresistance–based scanning magnetometer. Geochem Geophys Geosyst 10:10

Harrison RJ, Feinberg JM (2009) Mineral magnetism: providing new insights into geoscience processes. Elements 5(4):209–215

Helbig K (1963) Some integrals of magnetic anomalies and their relation to the parameters of the disturbing body. Z. Geophysik 29:83–96

Hewitt E, Hewitt RE (1979) The Gibbs–Wilbraham phenomenon: an episode in Fourier analysis. Arch Hist Exact Sci 21(2):129–160

Higham NJ (2002) Accuracy and stability of numerical algorithms, 2nd edn. SIAM, Philadelphia

Hughes D, Pondrom W (1947) Computation of vertical magnetic anomalies from total magnetic field measurements. Eos Trans Am Geophys Union 28(2):193–197

Jaqueto P, Trindade RI, Hartmann GA, Novello VF, Cruz FW, Karmann I, Strauss BE, Feinberg JM (2016) Linking speleothem and soil magnetism in the Pau d’Alho cave (central South America). J Geophys Res Solid Earth 121(10):7024–7039

Jones E, Oliphant T, Peterson P et al (2001) SciPy: open source scientific tools for Python. http://www.scipy.org/

Kletetschka G, Schnabl P, Šifnerová K, Tasáryová Z, Manda Š, Pruner P (2013) Magnetic scanning and interpretation of paleomagnetic data from Prague Synform’s volcanics. Studia Geophysica et Geodaetica 57(1):103–117

Lascu I, Feinberg JM (2011) Speleothem magnetism. Quat Sci Rev 30(23):3306–3320

Lascu I, Feinberg JM, Dorale JA, Cheng H, Edwards RL (2016) Age of the Laschamp excursion determined by U–Th dating of a speleothem geomagnetic record from North America. Geology 44(2):139–142

Latham A, Schwarcz H, Ford D, Pearce G (1979) Palaeomagnetism of stalagmite deposits. Nature 280(5721):383

Lawson CL, Hanson RJ (1995) Solving least squares problems, 2nd edn. SIAM, Philadelphia

Lilja D (2000) Measuring computer performance. Cambridge University Press, New York

Lima EA, Bruno AC, Carvalho HR, Weiss BP (2014) Scanning magnetic tunnel junction microscope for high-resolution imaging of remanent magnetization fields. Meas Sci Technol 25(10):105401

Lima EA, Irimia A, Wikswo JP (2006) The magnetic inverse problem. The SQUID handbook: applications of SQUIDs and SQUID systems II:139–267

Lima EA, Weiss BP (2009) Obtaining vector magnetic field maps from single-component measurements of geological samples. J Geophys Res Solid Earth. https://doi.org/10.1029/2008JB006006

Lima EA, Weiss BP (2016) Ultra-high sensitivity moment magnetometry of geological samples using magnetic microscopy. Geochem Geophys Geosyst 17(9):3754–3774

Lima EA, Weiss BP, Baratchart L, Hardin DP, Saff EB (2013) Fast inversion of magnetic field maps of unidirectional planar geological magnetization. J Geophys Res Solid Earth 118(6):2723–2752

Liu X, Xiao G (2003) Thermal annealing effects on low-frequency noise and transfer behavior in magnetic tunnel junction sensors. J Appl Phys 94(9):6218–6220

Liu X, Ren C, Xiao G (2002) Magnetic tunnel junction field sensors with hard-axis bias field. J Appl Phys 92(8):4722–4725

Liu X, Mazumdar D, Shen W, Schrag B, Xiao G (2006) Thermal stability of magnetic tunneling junctions with MgO barriers for high temperature spintronics. Appl Phys Lett 89(2):023504–023504

Lourenco JS, Morrison HF (1973) Vector magnetic anomalies derived from measurements of a single component of the field. Geophysics 38(2):359–368

Luo Y, Duraiswami R (2011) Efficient parallel nonnegative least squares on multicore architectures. SIAM J Sci Comput 33(5):2848–2863

Morinaga H, Inokuchi H, Yaskawa K (1985) Paleomagnetism and paleotemperature of a stalagmite. J Geomagn Geoelectr 37(8):823–828

Morinaga H, Inokuchi H, Yaskawa K (1989) Palaeomagnetism of stalagmites (speleothems) in SW Japan. Geophys J Int 96(3):519–528

Mullen KM, van Stokkum IHM (2012) The Lawson–Hanson algorithm for non-negative least squares (NNLS). Technical report, CRAN. http://cran.r-project.org/web/packages/nnls/nnls.pdf

Myre J, Frahm E, Lilja D, Saar M (2017a) TNT-NN: a fast active set method for solving large non-negative least squares problems. Proc Comput Sci 108:755–764

Myre JM, Frahm E, Lilja DJ, Saar MO (2017b) TNT-NN reference implementation. Zenodo. http://dx.doi.org/10.5281/zenodo.438158

Myre JM, Frahm E, Lilja DJ, Saar MO (2018) TNT: a solver for large dense least-squares problems that takes conjugate gradient from bad in theory, to good in practice. In: 2018 IEEE international parallel and distributed processing symposium workshops (IPDPSW), pp 987–995. IEEE

Noguchi A, Oda H, Yamamoto Y, Usui A, Sato M, Kawai J (2017a) Scanning SQUID microscopy of a ferromanganese crust from the northwestern Pacific: sub-millimeter scale magnetostratigraphy as a new tool for age determination and mapping of environmental magnetic parameters. Geophys Res Lett

Noguchi A, Yamamoto Y, Nishi K, Usui A, Oda H (2017b) Paleomagnetic study of ferromanganese crusts recovered from the northwest Pacific: testing the applicability of the magnetostratigraphic method to estimate growth rate. Ore Geol Rev 87:16–24

Oda H, Usui A, Miyagi I, Joshima M, Weiss BP, Shantz C, Fong LE, McBride KK, Harder R, Baudenbacher FJ (2011) Ultrafine-scale magnetostratigraphy of marine ferromanganese crust. Geology 39(3):227–230

Oda H, Kawai J, Miyamoto M, Miyagi I, Sato M, Noguchi A, Yamamoto Y, Fujihira J-I, Natsuhara N, Aramaki Y et al (2016) Scanning SQUID microscope system for geological samples: system integration and initial evaluation. Earth Planets Space 68(1):179. https://doi.org/10.1186/s40623-016-0549-3

Osete M-L, Martín-Chivelet J, Rossi C, Edwards RL, Egli R, Muñoz-García MB, Wang X, Pavón-Carrasco FJ, Heller F (2012) The Blake geomagnetic excursion recorded in a radiometrically dated speleothem. Earth Planet Sci Lett 353:173–181

Parker RL (1994) Geophysical inverse theory. Princeton University Press, Princeton

Pijpers F (1999) Unbiased image reconstruction as an inverse problem. Mon Notices R Astron Soc 307(3):659–668

Pijpers F, Thompson M (1992) Faster formulations of the optimally localized averages method for helioseismic inversions. Astron Astrophys 262:33–36

Ponte J, Font E, Veiga-Pires C, Hillaire-Marcel C, Ghaleb B (2017) The effect of speleothem surface slope on the remanent magnetic inclination. J Geophys Res 122:4143–4156

Purucker ME, Sabaka TJ, Langel RA (1996) Conjugate gradient analysis: a new tool for studying satellite magnetic data sets. Geophys Res Lett 23(5):507–510

Roth BJ, Wikswo JP Jr (1990) Apodized pickup coils for improved spatial resolution of squid magnetometers. Rev Sci Instrum 61(9):2439–2448

Roth BJ, Sepulveda NG, Wikswo JP (1989) Using a magnetometer to image a two-dimensional current distribution. J Appl Phys 65(1):361–372

Saad Y (2003) Iterative methods for sparse linear systems, 2nd edn. SIAM, Philadelphia, pp 276–279

Sepulveda N, Thomas I, Wikswo J Jr (1994) Magnetic susceptibility tomography for three-dimensional imaging of diamagnetic and paramagnetic objects. IEEE Trans Magn 30(6):5062–5069

Smith R (1959) Some depth formulae for local magnetic and gravity anomalies. Geophys Prospect 7(1):55–63

Strauss B, Strehlau J, Lascu I, Dorale J, Penn R, Feinberg J (2013) The origin of magnetic remanence in stalagmites: observations from electron microscopy and rock magnetism. Geochem Geophys Geosyst 14(12):5006–5025

Talwani M (1965) Computation with the help of a digital computer of magnetic anomalies caused by bodies of arbitrary shape. Geophysics 30(5):797–817

Tan S, Ma YP, Thomas IM, Wikswo JP Jr (1996) Reconstruction of two-dimensional magnetization and susceptibility distributions from the magnetic field of soft magnetic materials. IEEE Trans Magn 32(1):230–234

Tomov S, Dongarra J, Baboulin M (2010a) Towards dense linear algebra for hybrid GPU accelerated manycore systems. Paral Comput 36(5–6):232–240

Tomov S, Nath R, Ltaief H, Dongarra J (2010b) Dense linear algebra solvers for multicore with GPU accelerators. In: 2010 IEEE international symposium on parallel and distributed processing, workshops and PhD forum (IPDPSW), pp 1–8. IEEE

Trefethen LN, Bau D III (1997) Numerical linear algebra, vol 50. SIAM, Philadelphia

Usui Y, Uehara M, Okuno K (2012) A rapid inversion and resolution analysis of magnetic microscope data by the subtractive optimally localized averages method. Comput Geosci 38(1):145–155

Vestine EH, Davids N (1945) Analysis and interpretation of geomagnetic anomalies. J Geophys Res 50(1):1–36

Walsh SD, Saar MO, Bailey P, Lilja DJ (2009) Accelerating geoscience and engineering system simulations on graphics hardware. Comput Geosci 35(12):2353–2364

Weiss BP, Lima EA, Fong LE, Baudenbacher FJ (2007a) Paleointensity of the Earth’s magnetic field using SQUID microscopy. Earth Planet Sci Lett 264(1):61–71

Weiss BP, Lima EA, Fong LE, Baudenbacher FJ (2007b) Paleomagnetic analysis using SQUID microscopy. J Geophys Res Solid Earth. https://doi.org/10.1029/2007JB004940

Weiss BP, Fong LE, Vali H, Lima EA, Baudenbacher FJ (2008) Paleointensity of the ancient Martian magnetic field. Geophys Res Lett. https://doi.org/10.1029/2008GL035585

Weiss BP, Fu RR, Einsle JF, Glenn DR, Kehayias P, Bell EA, Gelb J, Araujo JF, Lima EA, Borlina CS et al (2018) Secondary magnetic inclusions in detrital zircons from the Jack Hills, Western Australia, and implications for the origin of the geodynamo. Geology 46(5):427–430

Wikswo J (1996) The magnetic inverse problem for NDE. In: SQUID sensors: fundamentals, fabrication and applications. Springer, New York City, pp 629–695

Xiong Z, Kirsch A (1992) Three-dimensional Earth conductivity inversion. J Comput Appl Math 42(1):109–121

Zhu Z, Feinberg JM, Xie S, Bourne MD, Huang C, Hu C, Cheng H (2017) Holocene ENSO-related cyclic storms recorded by magnetic minerals in speleothems of central China. Proc Natl Acad Sci 114(5):852–857

## Authors' contributions

JMM co-developed TNT-NN and TNT, ran all inversion analyses, and led manuscript writing. IL acquired the SVC982 MM data and contributed to the manuscript. EAL and BPW acquired the Hawaiian basalt MM data, developed the least-squares spatial inversion method, and contributed to the manuscript. MOS and JMF coordinated the study design and contributed to the manuscript. All authors read and approved the final manuscript.

### Acknowledgements

We would like to thank Hirokuni Oda for graciously sharing the ferromanganese crust MM data, and John Ackerman, the owner of Spring Valley Caverns, for access to, and his continued support of, research in Spring Valley Caverns. We would also like to thank Richard Harrison and an anonymous reviewer for their constructive reviews, and guest editor Hirokuni Oda for providing positive comments contributing to the improvement of this manuscript. The authors also acknowledge the Minnesota Supercomputing Institute (MSI) at the University of Minnesota for providing resources that contributed to the research results reported within this paper (http://www.msi.umn.edu). JMM thanks Anna K. Lindquist for engaging conversations on experimental rock magnetism. MOS thanks the George and Orpha Gibson endowment for its generous support of the Hydrogeology and Geofluids research group at the University of Minnesota.

### Competing interests

None of the authors have any competing interests. None of the work presented here is duplicated or in conflict with any of the authors' other work, either published or in review.

### Availability of data and materials

The availability of the data sets used and/or analyzed in this study are outlined below: Synthetic UMN logo—Available from corresponding author upon reasonable request. Hawaiian basalt—See Weiss et al. (2007b). Ferromanganese crust—see Oda et al. (2016). SVC982 Speleothem—Available from corresponding author upon reasonable request. A Matlab implementation of the TNT-NN algorithm is available on GitHub (Myre et al. 2017b). The Matlab code used to perform the spatial inversions is available upon request.

### Funding

This work was supported by National Science Foundation (NSF) Grants EAR-0941666 to MOS, EAR-1316385 to JMF, DMS-1521765 to EAL and BPW, a University of Minnesota Grants-In-Aid award and a McKnight Land-Grant Professorship awarded to JMF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Author information

### Authors and Affiliations

### Corresponding author

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Myre, J.M., Lascu, I., Lima, E.A. *et al.* Using TNT-NN to unlock the fast full spatial inversion of large magnetic microscopy data sets.
*Earth Planets Space* **71**, 14 (2019). https://doi.org/10.1186/s40623-019-0988-8

Received:

Accepted:

Published:

DOI: https://doi.org/10.1186/s40623-019-0988-8

### Keywords

- Magnetic microscopy
- Rock magnetism
- Non-negative least-squares