# Definitions

Let $$x_{j} \in \mathbb{R}^{2}$$, $$j=1,2,\ldots N$$, denote a collection of source locations and let $$t_{i} \in \mathbb{R}^{2}$$, $$i=1,2,\ldots M$$, denote a collection of target locations.

## Helmholtz FMM

Let $$c_{j} \in \mathbb{C}$$, $$j=1,2,\ldots N$$, denote a collection of charge strengths, $$v_{j} \in \mathbb{C}$$, $$j=1,2,\ldots N$$, denote a collection of dipole strengths, and $$d_{j} \in \mathbb{R}^{2}$$, $$j=1,2,\ldots N$$, denote the corresponding dipole orientation vectors. Let $$k\in\mathbb{C}$$ denote the wave number or the Helmholtz parameter.

The Helmholtz FMM computes the potential $$u(x)$$ and its gradient $$\nabla u(x)$$ given by

(1)$u(x) = \sum_{j=1}^{N} c_{j} H_{0}^{(1)}(k\|x-x_{j}\|) - v_{j} d_{j}\cdot \nabla H_{0}^{(1)}(k\|x-x_{j}\|) \, ,$

at the source and target locations, where $$H_{0}^{(1)}$$ is the Hankel function of the first kind of order $$0$$. When $$x=x_{j}$$, the term corresponding to $$x_{j}$$ is dropped from the sum.
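For validation at small problem sizes, the sum in (1) can be evaluated directly in $$O(NM)$$ time. The sketch below (Python with SciPy; function and variable names are illustrative, not part of the library API) assumes the gradient in (1) is taken with respect to the source location $$x_{j}$$ — the sign of the dipole term flips if it is taken with respect to $$x$$ instead.

```python
import numpy as np
from scipy.special import hankel1

def helmholtz_direct(targets, sources, charges, dipstr, dipvec, k):
    """Evaluate (1) at each target t_i by direct summation over sources x_j."""
    u = np.zeros(len(targets), dtype=complex)
    for i, t in enumerate(targets):
        dx = t - sources                      # (N, 2): t_i - x_j
        r = np.linalg.norm(dx, axis=1)        # ||t_i - x_j||
        # grad_{x_j} H0^(1)(k r) = k H1^(1)(k r) (t_i - x_j)/r, using
        # d/dz H0^(1)(z) = -H1^(1)(z) and grad_{x_j} r = -(t_i - x_j)/r.
        proj = np.einsum('nd,nd->n', dipvec, dx) / r
        u[i] = np.sum(charges * hankel1(0, k * r)
                      - dipstr * k * hankel1(1, k * r) * proj)
    return u
```

This reference evaluation is a useful cross-check for FMM output, not a substitute for it: the cost grows as $$NM$$ rather than roughly linearly.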

## Vectorized versions

The vectorized versions of the Helmholtz FMM compute repeated FMMs for new charge and dipole strengths at the same source locations, with the potential and its gradient evaluated at the same set of target locations.

For example, let $$c_{\ell,j}\in\mathbb{C}$$, $$j=1,2,\ldots N$$, $$\ell=1,2,\ldots n_{d}$$ denote a collection of $$n_{d}$$ charge strengths, and let $$v_{\ell,j} \in \mathbb{C}$$, $$d_{\ell,j} \in \mathbb{R}^2$$ denote a collection of $$n_{d}$$ dipole strengths and orientation vectors. Then the vectorized Helmholtz FMM computes the potentials $$u_{\ell}(x)$$ and their gradients $$\nabla u_{\ell}(x)$$ defined by the formula

(2)$u_{\ell}(x) = \sum_{j=1}^{N} c_{\ell,j} H_{0}^{(1)}(k\|x-x_{j}\|) - v_{\ell,j} d_{\ell,j}\cdot \nabla H_{0}^{(1)}(k\|x-x_{j}\|) \, ,$

at the source and target locations.
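The point of the vectorized interface is that all geometry-dependent work is shared across the $$n_{d}$$ strength vectors. A direct-summation analogue (charges only, for brevity; names are illustrative) makes this explicit by forming the kernel matrix once and applying it to every strength vector:

```python
import numpy as np
from scipy.special import hankel1

def helmholtz_direct_vec(targets, sources, charges, k):
    """charges: (n_d, N) array of strength vectors; returns (n_d, M) potentials."""
    dx = targets[:, None, :] - sources[None, :, :]   # (M, N, 2) displacements
    r = np.linalg.norm(dx, axis=2)                   # (M, N) pairwise distances
    A = hankel1(0, k * r)                            # kernel matrix, built once
    return charges @ A.T                             # apply to all n_d vectors
```

In the FMM, the analogous shared work is the tree construction and the translation operators, so running $$n_{d}$$ densities together is cheaper than $$n_{d}$$ independent calls.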

**Note**

In double precision arithmetic, two numbers which are within machine precision of each other cannot be distinguished. To account for this, suppose that the sources and targets are contained in a cube with side length $$L$$; then for all $$x$$ such that $$\| x-x_{j} \| \leq L \varepsilon_{\textrm{mach}}$$, the term corresponding to $$x_{j}$$ is dropped from the sum. Here $$\varepsilon_{\textrm{mach}} = 2^{-52}$$ is machine precision.
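The cutoff in the note amounts to a boolean mask over source-target distances. A minimal sketch ($$L$$ is the assumed side length of the bounding cube; the helper name is illustrative):

```python
import numpy as np

# Double-precision machine epsilon; equals 2**-52.
eps_mach = np.finfo(np.float64).eps

def keep_mask(r, L):
    """True for source-target distances r retained in the sum; terms with
    r <= L * eps_mach are dropped as numerically coincident points."""
    return r > L * eps_mach
```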