
Methods for robustly measuring the minimum spanning tree and other field level statistics from galaxy surveys

Krishna Naidoo$^{1,2}$ and Ofer Lahav$^{2}$
$^{1}$Institute of Cosmology and Gravitation, University of Portsmouth, Burnaby Road, Portsmouth PO1 3FX, UK
$^{2}$Department of Physics & Astronomy, University College London, Gower Street, London WC1E 6BT, UK
E-mail: krishna.naidoo@port.ac.uk

(Accepted XXX. Received YYY; in original form ZZZ; 2024)

###### Abstract

Field level statistics, such as the minimum spanning tree (MST), have been shown to be a promising tool for parameter inference in cosmology. However, applications to real galaxy surveys are challenging due to the presence of small scale systematic effects and non-trivial survey selection functions. Since many field level statistics are ‘hard-wired’, the common practice is to forward model survey systematic effects to synthetic galaxy catalogues. However, this can be computationally demanding and produces results that are a product of cosmology and systematic effects, making it difficult to directly compare results from different experiments. We introduce a method for inverting survey systematic effects through a Monte Carlo subsampling technique where galaxies are assigned probabilities based on their galaxy weights and survey selection functions. Small scale systematic effects are mitigated through the addition of a point-process smoothing technique called jittering. The inversion technique removes the requirement for a computationally and labour intensive forward modelling pipeline for parameter inference. We demonstrate that jittering can mask small scale theoretical uncertainties and survey systematic effects such as fibre collisions, and we show that Monte Carlo subsampling can remove the effects of survey selection functions. We outline how to measure field level statistics from future surveys.

###### keywords:

methods: data analysis – methods: numerical – large-scale structure of Universe – cosmology: observations

## 1 Introduction

The next generation of cosmological galaxy surveys, such as the Dark Energy Spectroscopic Instrument^{1}^{1}1https://www.desi.lbl.gov/ (DESI; DESI Collaboration et al. 2016), *Euclid*^{2}^{2}2https://www.euclid-ec.org/ (Laureijs et al., 2011), the Rubin Observatory’s Legacy Survey of Space and Time^{3}^{3}3https://www.lsst.org/ (LSST; Ivezić et al. 2019) and the Wide-field Spectroscopic Telescope^{4}^{4}4https://www.wstelescope.com/ (WST; Mainieri et al. 2024), will provide a deeper and more resolved view of the universe than has ever been available in the past. Maximising the information extracted from these data requires exploring statistics beyond the two-point correlation function (2PCF), including the 2PCF in different density or cosmic web environments (Bonnaire et al., 2022, 2023; Paillas et al., 2023; Massara et al., 2023), and higher-order statistics such as the minimum spanning tree (MST; Naidoo et al., 2020, 2022), Minkowski functionals (Liu et al., 2022, 2023), critical points (Moon et al., 2023), topological persistence homology (Pranav et al., 2017; Jalali Kanafi et al., 2023), graph representations (Makinen et al., 2022), voids (Kreisch et al., 2022), wavelet scattering transforms (Valogiannis & Dvorkin, 2022), and methods directly using field level inference (Fluri et al., 2018; Lemos et al., 2023; Jeffrey et al., 2024). However, many of these techniques cannot be modelled analytically, and instead require predictions from simulation suites exploring a large cosmological parameter space. To complicate matters, since these are often performed on galaxy catalogues, additional modelling is required to explore the halo-to-galaxy relation, often carried out by marginalising over halo occupation distribution (HOD) parameters.

One such example of a field level statistic is the MST, a highly optimised graph built from a set of points and first introduced to astronomy by Barrow et al. (1985). In graph theory, a tree is defined as a loop-free structure, while ‘spanning’ refers to a graph connecting all points in a single structure. The MST is the spanning tree with the shortest possible total length. This optimisation leads to a graph, the MST, that very effectively traces filaments in the cosmic web (see Libeskind et al., 2018) and, more recently, has been shown to be highly sensitive to neutrino mass (Naidoo et al., 2022). Unlike the $N$-point correlation function, the MST is sensitive to density, an effect that cannot be removed by the inclusion of randoms to account for survey selection effects. Additionally, because the MST operates directly on galaxies, we cannot exclude small scales by pixelising the galaxy distribution onto a grid. Furthermore, there is no mechanism to directly incorporate galaxy weights, which are typically used to correct for observational systematic effects, into the MST graph construction. Exploiting ‘hard-wired’ statistics like the MST for parameter inference presents a unique challenge for observational cosmology. While the MST is the statistic of interest in this paper, hard-wired artificial intelligence and machine learning algorithms, such as graph neural networks, will also benefit from resolving these current limitations.

The standard approach for using hard-wired statistics like the MST for parameter inference would be to forward model survey systematic effects and selection functions to synthetic galaxy catalogues. However, forward modelling survey systematic effects to the level of accuracy needed for the MST has yet to be performed and would be computationally and labour intensive. The benefit of forward modelling is that it enables inference in scenarios that appear intractable; see Lemos et al. (2023) and Jeffrey et al. (2024) for powerful demonstrations of field level simulation based inference techniques used in cosmology. However, in forward modelling, we lose the ability to clearly delineate a cosmological measurement from survey systematic effects – we can only compare and contrast measurements at the parameter level. Being able to look at and directly interpret measurements will become particularly important when measurements are in ‘tension’ with other probes or if they suggest the discovery of new physics.

In this paper, we outline how to make robust measurements of the MST and other hard-wired field level statistics from real galaxy surveys. We illustrate how to marginalise over survey selection functions, such as the redshift selection function and variations in completeness, and how to mitigate small scale systematic effects. The techniques introduced in this paper remove the necessity to forward model survey geometry, selection functions and systematic effects to synthetic galaxy catalogues. Our techniques involve inverting survey systematic effects on real data and comparing to synthetic galaxy catalogues without survey systematic effects. Since the methods are imposed at the catalogue level they can be applied generally to any technique. In section 2 we describe the mock galaxy catalogues used, the summary statistics measured and the methods for inverting survey selection functions and mitigating small scale systematic effects. In section 3 we validate the techniques, in section 4 we discuss the methods and outline how to use them for measuring the MST from future galaxy redshift surveys, and in section 5 we summarise the results of the paper.

## 2 Methods

In this section we summarise the simulations, summary statistics and techniques used throughout this paper. We then introduce the Monte Carlo subsampling technique used to indirectly apply galaxy weights and mitigate survey systematic effects at the catalogue level.

### 2.1 Simulations

#### 2.1.1 Millennium XXL galaxy lightcone

We use the Millennium XXL galaxy lightcone catalogue of Smith et al. (2022). The lightcones are produced from the Millennium XXL simulation, computed in a box of $3\,h^{-1}{\rm Gpc}$ in a flat $\Lambda$CDM cosmology (Planck Collaboration et al., 2020) with matter density $\Omega_{\rm m}=0.25$, dark energy density $\Omega_{\rm\Lambda}=0.75$, amplitude of fluctuations at $8\,h^{-1}{\rm Mpc}$ $\sigma_{8}=0.9$, Hubble constant $H_{0}=73\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and primordial spectral tilt $n_{\rm s}=1$. For this analysis we limit the catalogue to halos of mass $\geq 10^{13}\,h^{-1}M_{\odot}$ and only consider galaxies either in a single quadrant (i.e. $0^{\circ}\leq{\rm RA}\leq 90^{\circ}$ and $0^{\circ}\leq{\rm Dec.}\leq 90^{\circ}$) or inside the Baryon Oscillation Spectroscopic Survey (BOSS) LOWZ north footprint.

#### 2.1.2 Lévy flight random walk distribution

We use a set of Lévy flight random walk simulations. Unlike cosmological simulations, the points produced have no higher-order information, since the only input is the size of the steps used in the random walk. We use two sets of distributions: a standard technique which we will refer to as Lévy flight (LF; Mandelbrot, 1982) and the Adjusted Lévy flight (ALF) which we developed in Naidoo et al. (2020). These distributions are approximately equal on large scales but have significantly different small scale distributions. Both the LF and ALF random walk distributions are implemented in the MiSTree python package (Naidoo, 2019). The parameters used for the LF are the minimum step-size $t_{0}=0.2$ and tilt $\alpha=1$; for the ALF they are the step-size parameters $t_{0}=0.325$ and $t_{s}=0.015$, and the step-size shape parameters $\alpha=1.5$, $\beta=0.45$ and $\gamma=1.3$. Both distributions are produced in a periodic box of length $75\,h^{-1}{\rm Mpc}$.

### 2.2 Summary statistics

We describe the summary statistics used in this paper and the jackknife resampling technique used for error estimation.

#### 2.2.1 Two-point correlation function

The 2PCF (Peebles, 1980) is a tried-and-tested method for computing the clustering properties of an input dataset. It is widely used in cosmology and is generally at the forefront of any observation of large scale structure. Analytical predictions for observations can be made from a given power spectrum and galaxy bias prescription. Galaxy survey geometries and systematic effects are routinely mitigated through the use of randoms to account for the anisotropic and inhomogeneous distribution of galaxies from a galaxy survey, while small scale systematic effects can be masked by limiting the analysis to separations beyond some input scale (say $r_{\min}$). The 2PCF can be computed from the Landy & Szalay (1993) estimator

$\xi(r)=\left(\frac{n_{R}}{n_{D}}\right)^{2}\frac{DD(r)}{RR(r)}-2\left(\frac{n_{R}}{n_{D}}\right)\frac{DR(r)}{RR(r)}+1,$ | (1) |

where $r$ is the distance between points, $DD(r)$ the number of galaxy-galaxy pairs at a distance $r$, $DR(r)$ the number of galaxy-random pairs, $RR(r)$ the number of random-random pairs, $n_{D}$ the mean number density of galaxies and $n_{R}$ the mean number density of randoms.
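As a concrete illustration, the estimator above can be sketched with brute-force pair counts. This is a toy sketch, not the pipeline used in the paper: the estimator is written in the equivalent normalised pair-count form $\xi=(dd-2dr+rr)/rr$, and production analyses would use an optimised pair counter.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(a, b, edges, cross=False):
    """Histogram pair separations between point sets a and b.

    Auto-counts (cross=False) count each unordered pair once;
    cross-counts count every (a, b) pair.
    """
    dist = cKDTree(a).sparse_distance_matrix(
        cKDTree(b), max_distance=edges[-1], output_type='coo_matrix')
    if cross:
        sep = dist.data
    else:
        sep = dist.data[dist.row < dist.col]  # unordered pairs, no self-pairs
    return np.histogram(sep, bins=edges)[0]

def landy_szalay(data, randoms, edges):
    """xi = (dd - 2 dr + rr) / rr, with counts normalised by total pair numbers."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1) / 2)
    rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1) / 2)
    dr = pair_counts(data, randoms, edges, cross=True) / (nd * nr)
    xi = np.full(len(edges) - 1, np.nan)
    ok = rr > 0
    xi[ok] = (dd[ok] - 2 * dr[ok] + rr[ok]) / rr[ok]
    return xi
```

For a clustered dataset and uniform randoms this measures the excess probability of finding pairs at separation $r$ relative to a uniform sample.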

To compute the 2PCF multipoles we measure the 2PCF binned according to the distance parallel to the line-of-sight of the observer $s_{\parallel}$ and the distance perpendicular to the line-of-sight $s_{\perp}$

$\xi(s_{\parallel},s_{\perp})=\left(\frac{n_{R}}{n_{D}}\right)^{2}\frac{DD(s_{\parallel},s_{\perp})}{RR(s_{\parallel},s_{\perp})}-2\left(\frac{n_{R}}{n_{D}}\right)\frac{DR(s_{\parallel},s_{\perp})}{RR(s_{\parallel},s_{\perp})}+1.$ | (2) |

The multipoles are then computed from

$\xi_{\ell}(s)=\frac{2\ell+1}{2}\int^{\pi}_{0}\sqrt{1-\mu^{2}}\,\xi(s_{\parallel},s_{\perp})\,P_{\ell}(\mu)\,{\rm d}\theta,$ | (3) |

where $s=\sqrt{s_{\parallel}^{2}+s_{\perp}^{2}}$, $\mu=\cos\theta$ and $P_{\ell}$ is the Legendre polynomial of order $\ell$. The monopole is computed by setting $\ell=0$, the quadrupole with $\ell=2$ and the hexadecapole with $\ell=4$.
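After the change of variables $\mu=\cos\theta$ (so that $\sqrt{1-\mu^{2}}\,{\rm d}\theta=-{\rm d}\mu$), eq. (3) reduces to $\xi_{\ell}(s)=\frac{2\ell+1}{2}\int_{-1}^{1}\xi(s,\mu)\,P_{\ell}(\mu)\,{\rm d}\mu$, which can be evaluated numerically. A minimal sketch, assuming the binned $\xi(s,\mu)$ grid is already in hand:

```python
import numpy as np
from scipy.special import eval_legendre

def multipole(xi_s_mu, mu, ell):
    """xi_ell(s) = (2 ell + 1)/2 * integral_{-1}^{1} xi(s, mu) P_ell(mu) dmu.

    xi_s_mu: array whose last axis runs over the mu grid.
    """
    integrand = xi_s_mu * eval_legendre(ell, mu)
    # trapezoidal rule along the mu axis
    avg = 0.5 * (integrand[..., 1:] + integrand[..., :-1])
    integral = np.sum(avg * np.diff(mu), axis=-1)
    return 0.5 * (2 * ell + 1) * integral
```

By the orthogonality of the Legendre polynomials, feeding in a $\xi(s,\mu)$ built from known multipoles returns exactly those multipoles.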

#### 2.2.2 Minimum spanning tree

The MST is computed on a 3-dimensional distribution of points using the MiSTree python package (Naidoo, 2019). From the constructed MST graph we measure:

- •
Degree ($d$): the number of connected edges to each node/point.

- •
Edge length ($l$): the length of each edge.

- •
Branches: edges connected in chains with intermediate nodes of $d=2$. From branches we measure:

- –
Branch length ($b$): the total length of member edges.

- –
Branch shape ($s$): the straight line distance between the branch ends divided by the total branch length.

We are interested in the probability distribution functions (PDFs) of $d$, $l$, $b$ and $s$, which are obtained by binning the MST statistics into histograms. While this is not an exhaustive set of statistics to measure from the MST, we have found the PDFs to be highly constraining for cosmology (Naidoo et al., 2022), in particular the PDFs of edge length $l$ and branch length $b$.
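For intuition, the degree and edge-length statistics can be reproduced with a brute-force MST. This is an illustrative sketch using scipy rather than MiSTree (the package actually used in this paper, which scales to survey-sized catalogues via a $k$-nearest-neighbour graph):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

def mst_degree_and_edge_lengths(points):
    """Return node degrees d and edge lengths l of the MST of a point set.

    Uses a dense O(N^2) distance matrix, so it is suitable for small N only.
    """
    dist = distance_matrix(points, points)
    tree = minimum_spanning_tree(dist).tocoo()  # N - 1 edges for N points
    degree = np.bincount(np.concatenate([tree.row, tree.col]),
                         minlength=len(points))
    return degree, tree.data  # histogram these for the PDFs of d and l
```

Binning the returned `degree` and edge lengths into histograms gives the PDFs of $d$ and $l$ described above.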

#### 2.2.3 Error estimation with jackknife resampling

We use the jackknife resampling technique to compute errors. We first divide our original dataset into $N_{\rm JK}$ jackknife regions. For datasets inside a periodic box, the box is divided into $N_{\rm JK}^{1/3}$ segments along each axis, creating $N_{\rm JK}$ smaller cubic boxes within the full box. For datasets from a lightcone we segment the footprint into $N_{\rm JK}$ regions using the binary-partition method of Naidoo et al. (in prep.). We then compute the statistics of interest with points in the $i^{\rm th}$ jackknife segment removed. This gives us $N_{\rm JK}$ estimates of the statistic, which we will refer to as $\boldsymbol{y}$, from which we take the mean

$\bar{\boldsymbol{y}}=\frac{1}{N_{\rm JK}}\sum_{i=1}^{N_{\rm JK}}\boldsymbol{y}_{i},$ | (4) |

to be the statistic for the full sample. This can be validated, to test for biases from removing a jackknife segment, by comparing to the statistic measured on the full sample. The jackknife estimate of variance for the statistics is given by

$\boldsymbol{\Delta y}^{2}_{\rm JK}=\frac{N_{\rm JK}-1}{N_{\rm JK}}\sum_{i=1}^{N_{\rm JK}}(\boldsymbol{y}_{i}-\bar{\boldsymbol{y}})^{2}.$ | (5) |

Note, in practice we can compute the variance by assuming the samples are independent (i.e. taking the sample variance of the $\boldsymbol{y}_{i}$) and then correcting the result by multiplying by the jackknife pre-factor $(N_{\rm JK}-1)^{2}/N_{\rm JK}$.
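The leave-one-region-out procedure of eqs (4)–(5) can be sketched as follows. This is a minimal sketch: the regions here are simple index splits standing in for the spatial segments described above.

```python
import numpy as np

def jackknife(data, statistic, n_jk):
    """Leave-one-region-out jackknife mean (eq. 4) and variance (eq. 5)."""
    regions = np.array_split(np.arange(len(data)), n_jk)
    y = np.array([statistic(np.delete(data, idx, axis=0)) for idx in regions])
    y_bar = y.mean(axis=0)
    var = (n_jk - 1) / n_jk * np.sum((y - y_bar) ** 2, axis=0)
    return y_bar, var
```

For equal-sized regions and the sample mean as the statistic, $\bar{\boldsymbol{y}}$ coincides exactly with the full-sample mean, which is the bias check described above.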

### 2.3 Survey systematic effects and theoretical limitations

Cosmological datasets, whether they come from real galaxy surveys or from simulations, will always come with uncertainties. It is important that these uncertainties are accounted for, particularly within an inference pipeline. Accounting for them is particularly challenging for the hard-wired algorithms considered in this paper. In this section, we explore how galaxy surveys and theoretical data from $N$-body simulations can introduce uncertainties in both our model and observational measurements.

#### 2.3.1 Small scale theoretical unknowns

Simulations will be integral to making predictions for higher-order cosmic web statistics, but simulations have uncertainties of their own that need to be accounted for and mitigated in any cosmological analysis. The most problematic are uncertainties on small scales, which are dictated by the force and mass resolution of the simulation – an effect that can be larger for approximate $N$-body solvers such as COLA (Tassev et al., 2013). This can result in simulations where only massive halos are resolved reliably, making predictions difficult for observations of galaxies in small mass halos. This is conceptually a rather simple limitation to solve – simply discard low mass galaxies or use higher resolution simulations. A more pressing issue is the uncertainty in the physics of small scales, $\lesssim 10\,h^{-1}{\rm Mpc}$. This is the domain where we expect baryonic physics and feedback mechanisms to be important and where there remain uncertainties in the galaxy formation prescriptions used for hydrodynamic simulations. How these scales are incorporated in any cosmological analysis needs to be treated with great care. Uncertainties on small scales are further enhanced by the fact that simulations with hydrodynamic galaxy formation physics are expensive to run, and not always available, especially for the large $N$-body simulation suites used for cosmological inference. Typically in cosmology, galaxies are ‘painted’ onto dark matter only simulations using a halo-to-galaxy prescription; this is often constructed using HODs, subhalo abundance matching (SHAM), or both. Marginalising over the uncertainties in these semi-analytic models will remove some of the uncertainties, but eventually these models will break down. Therefore, we can only ever expect simulations to provide reliable galaxy catalogues down to some theoretical lower bound, a lower bound which cannot (at the moment) be masked from a hard-wired algorithm.

#### 2.3.2 Small scale observational systematic effects

In observations, small scales present a different challenge. In spectroscopic surveys, galaxies that are close together on the sky may not be assigned individual fibres for their respective spectra to be measured. This problem, known as fibre collisions, leads to galaxies that cannot be assigned correct redshifts. To correct for these missing galaxies, the nearest observed galaxy is often re-weighted to account for the fact that there are more galaxies nearby. Nevertheless, this means that on angular scales smaller than the fibre collision length, the distribution of galaxies cannot be trusted. In photometric surveys a different problem occurs on small scales, caused by the blending and crowding of galaxies in close proximity. In effect, galaxies become difficult to resolve, making photometry and shape measurements difficult and unreliable. This can lead to inaccuracies in photometric redshift estimates and can introduce systematic errors in shape measurements relevant for weak lensing studies. In both cases we cannot trust the information on these small scales and need to be able to remove them.

#### 2.3.3 Radial selection function

Depending on the characteristics of the survey, galaxies will be observed with a characteristic radial redshift selection function. This is substantially different from what we have in simulations, where the distribution of galaxies is homogeneous. Not only is the distribution inhomogeneous, but so too are the galaxy properties. For example, we will often find only small galaxies being observed at lower redshifts and only massive galaxies at higher redshifts. If a hard-wired algorithm is applied directly to this distribution of galaxies, it may be overwhelmed by the inhomogeneous redshift selection function and the biased galaxy redshift evolution. This reduces the relevant cosmological information in the hard-wired algorithm, with a significant portion of the algorithm just picking up survey systematics. This makes comparisons between different experiments extremely challenging, as it becomes difficult to delineate the cosmological result from the survey’s systematic effects.

#### 2.3.4 Angular selection function

Galaxy surveys will not be observed isotropically; instead, galaxies are observed within the survey’s footprint. Surveys typically have non-trivial boundaries which may induce significant biases in the results from hard-wired algorithms. In addition, galaxies observed across the sky will be subject to the seeing conditions during observations and to galactic foregrounds, such as zodiacal light and the stellar density of our own Galaxy, which will induce anisotropies in the distribution of galaxies observed. These effects would need to be replicated in a forward modelled approach.

### 2.4 Monte Carlo subsampling

In $2$-point (or even $N$-point) statistics, galaxy survey systematic effects are mitigated by assigning weights to each galaxy. The weights capture observational biases, such as the angular and radial selection functions, and on small scales can account for missing galaxies, e.g. from fibre collisions or the blending of galaxies in crowded fields. Including weights in $2$-point clustering measurements is trivial, but for the MST and other hard-wired algorithms it is not clear how weights should be included. The conventional route to applying these promising techniques to real data for parameter inference is to carry out full forward modelling. This means the creation of realistic galaxy mock catalogues for a given survey and accurate forward modelling of the survey’s systematic effects and selection functions. This is both computationally expensive and labour intensive, and because the measurements are a product of both cosmological and systematic information, it becomes difficult to interpret the results and thereby the parameter constraints they lead to. For these reasons, we look for a different approach, one where survey systematic effects are mitigated at the catalogue level – removing the need for the direct inclusion of weights in the calculation of the MST or other hard-wired algorithms.

#### 2.4.1 Systematic weighted probabilities

To incorporate systematic effects at the catalogue level we subsample a galaxy catalogue, treating the galaxy weights $w_{\rm g}$ (which are constructed to include various systematic properties of a galaxy survey) as probabilities $\mathds{P}_{\rm g}=w_{\rm g}$. To subsample the galaxies we draw a random uniform number $u\in[0,1]$ for each galaxy and only keep galaxies with $u\leq\mathds{P}_{\rm g}$. We then measure the statistic of interest, which we will refer to as $\boldsymbol{d}$ (a vector of length $n_{\rm d}$), on the subsampled galaxy distribution; this measurement we will refer to as $\tilde{\boldsymbol{d}}$. We repeat the process over many iterations, taking the mean of the measured statistic ($\boldsymbol{d}_{n}$) evaluated over the ensemble of $n$ iterations,

$\boldsymbol{d}_{n}=\frac{1}{n}\sum_{i=1}^{n}\tilde{\boldsymbol{d}}_{i},\quad{\rm where}\quad\lim_{n\to\infty}\boldsymbol{d}_{n}\equiv\boldsymbol{d},$ | (6) |

which is only true if $\tilde{\boldsymbol{d}}$ is unbiased. In this way we are able to incorporate the galaxy weights indirectly, since as $n\to\infty$ a galaxy will be sampled $\approx n\mathds{P}_{\rm g}$ times, which, averaged over the $n$ iterations, gives the galaxy the correct weight $\approx\mathds{P}_{\rm g}=w_{\rm g}$. This process is akin to the superposition of different quantum states in quantum mechanics: each iteration gives us a discrete set of observations following an underlying probability distribution function, a distribution that becomes clearer once the average is taken over many discrete observations.

Treating $w_{\rm g}$ as probabilities $\mathds{P}_{\rm g}$ only makes sense if $w_{\rm g}\in[0,1]$; if $w_{\rm g}>1$ then the galaxy will always be assigned the incorrect weight, biasing the analysis. To avoid this potential issue, we replicate galaxies with a weight greater than one $n_{\rm g}$ times, where $n_{\rm g}=\lceil w_{\rm g}\rceil$ and $\lceil x\rceil$ is the ceiling function (which rounds $x$ up to the smallest integer greater than or equal to $x$). The new weights for the replicated galaxies are now $\mathds{P}_{\rm g}=w_{\rm g}/n_{\rm g}$. This results in subsampled galaxies that have the correct weights. Galaxy weights greater than one often occur when nearby galaxies cannot be observed, so it is important in practice that the replication of galaxies with weights greater than one happens in combination with the addition of a jitter (described below). This also ensures galaxy replicates do not lie on top of each other.
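A minimal sketch of the subsampling scheme, including the replication of galaxies with $w_{\rm g}>1$ (illustrative function names; in practice the replicated positions would also be jittered, as noted above):

```python
import numpy as np

rng = np.random.default_rng(4)

def replicate(positions, weights):
    """Split each galaxy with w_g > 1 into ceil(w_g) copies of weight w_g / ceil(w_g)."""
    n_rep = np.ceil(weights).astype(int)
    pos = np.repeat(positions, n_rep, axis=0)
    prob = np.repeat(weights / n_rep, n_rep)
    return pos, prob

def subsample(positions, prob):
    """One Monte Carlo iteration: keep each galaxy when u <= P_g."""
    return positions[rng.uniform(size=len(prob)) <= prob]

def mc_statistic(positions, weights, statistic, n_iter=1000):
    """Estimate d_n, the statistic averaged over n_iter subsamples (eq. 6)."""
    pos, prob = replicate(positions, weights)
    return np.mean([statistic(subsample(pos, prob)) for _ in range(n_iter)],
                   axis=0)
```

Averaged over many iterations, each galaxy contributes in proportion to its weight, which is the sense in which the weights are applied indirectly.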

#### 2.4.2 Point-process smoothing with jittering

To remove systematic effects and theoretical uncertainties at small scales, we add noise to the three dimensional position of each galaxy $\boldsymbol{p}$, a process we call jittering:

$\boldsymbol{p}_{\rm J}=\boldsymbol{p}+\mathcal{N}(0,\,\sigma_{\rm J}\,\boldsymbol{I}_{3}).$ | (7) |

Here $\mathcal{N}$ is a normal distribution with mean zero and standard deviation $\sigma_{\rm J}$ applied independently along each axis, where $\boldsymbol{I}_{3}$ is the three dimensional identity matrix and $\sigma_{\rm J}$ is the jitter dispersion. This process is the point-process equivalent of smoothing a field. Like smoothing, we expect this procedure to remove systematic effects at small scales and to remove any sensitivity to those scales. This is crucial for the MST, where small scales cannot be removed in post-processing; to be able to use the MST for parameter inference from galaxy surveys, we must therefore be able to hide galaxy survey systematic effects and the theoretical uncertainties of simulations.
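Eq. (7) amounts to a one-line operation. A sketch, where $\sigma_{\rm J}$ is assumed to be chosen to match the scale of the systematic effects being masked:

```python
import numpy as np

def jitter(positions, sigma_j, rng):
    """Eq. (7): displace each coordinate by zero-mean Gaussian noise of
    dispersion sigma_j -- the point-process analogue of field smoothing."""
    return positions + rng.normal(0.0, sigma_j, size=positions.shape)
```

The jittered positions then feed into the MST construction in place of the originals.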

#### 2.4.3 Removing selection functions with inverse density weights

Galaxy surveys have radial and angular selection functions, the former characterised by the redshift galaxy density distribution $\rho(z)$ and the latter by the survey’s footprint/mask $M({\rm RA},{\rm Dec})$ and completeness $C({\rm RA},{\rm Dec})$. Both properties create variations in the distribution of galaxies, variations which are properties of the survey and are not cosmological in origin. In $2$-point or $N$-point statistics, the effect of the survey’s selection function can be removed with the inclusion of randoms, which allow one to measure the extra clustering with respect to a uniform sample. If we compute the MST, or other statistics, directly on galaxies we will get results which are a combination of both the cosmology and the survey selection function. This makes the analysis of such results difficult and interpretation challenging. To remove this issue we subsample galaxies to produce a uniform distribution of galaxies both radially and across the sky. This process involves applying what we call inverse density weights, which assign a galaxy a probability that is a function of its radial and angular densities,

$\mathds{P}_{\rm g}=w_{\rm g}\,\left(\frac{\rho_{\rm target}}{\rho(z_{\rm g})}\right)\,\left(\frac{C_{\rm target}}{C({\rm RA}_{\rm g},{\rm Dec}_{\rm g})}\right).$ | (8) |

Here the subscript $\rm g$ refers to the respective values for a single galaxy, while $\rho_{\rm target}$ and $C_{\rm target}$ are the target radial density and target completeness for the subsampled uniform distribution of galaxies. The galaxies being considered for subsampling should be chosen such that $\rho(z_{\rm g})\geq\rho_{\rm target}$ and $C({\rm RA}_{\rm g},{\rm Dec}_{\rm g})\geq C_{\rm target}$. The effective density of the subsampled distribution is $\rho_{\rm eff}=\rho_{\rm target}C_{\rm target}$. This means theoretical predictions from simulations can aim to measure the MST or other hard-wired algorithms for a well defined galaxy distribution at an effective density $\rho_{\rm eff}$, a much simpler task than attempting to forward model the survey’s complicated radial and angular selection functions.
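Eq. (8) is straightforward to evaluate per galaxy. A sketch with illustrative argument names; in practice $\rho(z)$ and $C({\rm RA},{\rm Dec})$ would be tabulated from the survey's random catalogue and completeness maps:

```python
import numpy as np

def inverse_density_probability(w_g, rho_g, c_g, rho_target, c_target):
    """Eq. (8): subsampling probability that flattens the radial and angular
    selection functions. Requires rho_g >= rho_target and c_g >= c_target
    so that (for w_g <= 1) the probabilities stay in [0, 1]."""
    return w_g * (rho_target / rho_g) * (c_target / c_g)
```

Galaxies at density $\rho$ are kept with probability $\propto\rho_{\rm target}/\rho$, so the expected density of the kept sample is uniform at $\rho_{\rm eff}=\rho_{\rm target}C_{\rm target}$.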

#### 2.4.4 Convergence criteria

To estimate the statistic $\boldsymbol{d}_{n}$ requires applying the iterative probabilistic subsampling scheme over $n$ iterations. If the application of weights is unbiased then we expect, for $n\to\infty$, $\boldsymbol{d}_{n}\equiv\boldsymbol{d}$. However, for this procedure to be tractable $n$ should be small, but large enough that $\boldsymbol{d}_{n}\approx\boldsymbol{d}$. So how do we determine the value of $n$? It is useful to first approach this assuming we have access to $\boldsymbol{d}$. One way to define $n$ is to look at the fractional difference

$\epsilon_{f}=\frac{1}{n_{\rm d}}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}^{(j)}_{n}-\boldsymbol{d}^{(j)}}{\boldsymbol{d}^{(j)}},$ | (9) |

where the superscript $(j)$ refers to single elements of the data vector. We take $n$ to be the iteration at which this fractional difference drops below some predefined threshold, $\epsilon_{f}\leq\epsilon_{\rm thresh}$. In reality we do not have access to $\boldsymbol{d}$, but we do have access to $\boldsymbol{d}_{n+1}$. We can then try to redefine the convergence criterion via the iterative fractional difference

$\epsilon_{I}=\frac{1}{n_{\rm d}}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}^{(j)}_{n}-\boldsymbol{d}^{(j)}_{n+1}}{\boldsymbol{d}^{(j)}_{n+1}}.$ | (10) |

For $\epsilon_{f}$ the variance is fully described by the variance in $\boldsymbol{d}_{n}$ which in turn is described by the variance in $\tilde{\boldsymbol{d}}$,

${\rm Var}\left(\boldsymbol{d}_{n}\right)=\frac{1}{n}{\rm Var}\left(\tilde{\boldsymbol{d}}\right).$ | (11) |

Therefore the variance of $\epsilon_{f}$ is given by

${\rm Var}(\epsilon_{f})=\frac{1}{n\,n_{\rm d}^{2}}\sum_{j=1}^{n_{\rm d}}\frac{{\rm Var}\left(\tilde{\boldsymbol{d}}^{(j)}\right)}{\left(\boldsymbol{d}^{(j)}\right)^{2}}.$ | (12) |

The variance of $\epsilon_{I}$ is a little more complicated to compute, since $\boldsymbol{d}_{n}$ and $\boldsymbol{d}_{n+1}$ are strongly correlated – the same $n$ samples are used to compute both of them. It is useful to remove this correlation and redefine $\epsilon_{I}$ as

$\epsilon_{I}=\frac{1}{n_{\rm d}(n+1)}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}^{(j)}_{n}-\tilde{\boldsymbol{d}}^{(j)}_{n+1}}{\boldsymbol{d}^{(j)}_{n+1}}.$ | (13) |

Assuming $\boldsymbol{d}_{n+1}=\boldsymbol{d}+\boldsymbol{\Delta d}_{n+1}$, we can Taylor expand the denominator to write it in terms of $\boldsymbol{d}^{(j)}$,

$\begin{split}\epsilon_{I}&=\frac{1}{n_{\rm d}\,(n+1)}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}_{n}^{(j)}-\tilde{\boldsymbol{d}}_{n+1}^{(j)}}{\boldsymbol{d}^{(j)}+\boldsymbol{\Delta d}_{n+1}^{(j)}},\\&\approx\frac{1}{n_{\rm d}\,(n+1)}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}_{n}^{(j)}-\tilde{\boldsymbol{d}}_{n+1}^{(j)}}{\boldsymbol{d}^{(j)}}\left(1+\mathcal{O}\left(\frac{\boldsymbol{\Delta d}_{n+1}^{(j)}}{\boldsymbol{d}^{(j)}}\right)\right).\end{split}$ | (14) |

Finally, since $\boldsymbol{\Delta d}_{n+1}\ll\boldsymbol{d}$ we can make the approximation

$\epsilon_{I}\approx\frac{1}{n_{\rm d}(n+1)}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}_{n}^{(j)}-\tilde{\boldsymbol{d}}_{n+1}^{(j)}}{\boldsymbol{d}^{(j)}},$ | (15) |

and therefore the variance is given by

${\rm Var}(\epsilon_{I})\approx\frac{1}{n\,n^{2}_{\rm d}(n+1)}\sum_{j=1}^{n_{\rm d}}\frac{{\rm Var}\left(\tilde{\boldsymbol{d}}^{(j)}\right)}{\left(\boldsymbol{d}^{(j)}\right)^{2}}$ | (16) |

and hence, comparing with eq. (12), we get

${\rm Var}(\epsilon_{I})\approx\frac{{\rm Var}(\epsilon_{f})}{n+1}.$ | (17) |

Since $\epsilon_{f}$ and $\epsilon_{I}$ have zero expectation and are completely described by their variances, we can relate the two through their standard deviations, and thus

$\epsilon_{f}\approx\sqrt{n+1}\,\epsilon_{I}.$ | (18) |

This means we can define a threshold for $\epsilon_{f}$ without knowing $\boldsymbol{d}$.

The choice of threshold here appears quite arbitrary; a more robust approach is to define the significant difference

$\varepsilon_{f}=\frac{1}{n_{\rm d}}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}_{n}^{(j)}-\boldsymbol{d}^{(j)}}{\boldsymbol{\Delta d}^{(j)}},$ | (19) |

where $\boldsymbol{\Delta d}$ is the error associated with $\boldsymbol{d}$, which can be measured through jackknife resampling. Similarly, we will not know $\boldsymbol{\Delta d}$ a priori and will need to instead rely on an iterative estimator

$\varepsilon_{I}=\frac{1}{n_{\rm d}}\sum_{j=1}^{n_{\rm d}}\frac{\boldsymbol{d}^{(j)}_{n}-\boldsymbol{d}^{(j)}_{n+1}}{\boldsymbol{\Delta d}_{n+1}^{(j)}}.$ | (20) |

Applying the same logical steps presented before and taking the Taylor expansion of $1/\boldsymbol{\Delta d}_{n+1}\approx 1/\boldsymbol{\Delta d}$ we get the relation

$\varepsilon_{f}\approx\sqrt{n+1}\,\varepsilon_{I}.$ | (21) |

By estimating $\epsilon_{f}$ and $\varepsilon_{f}$ through the approximations from the iterative $\epsilon_{I}$ and $\varepsilon_{I}$, we can set thresholds on the absolute $\epsilon_{f}$ and $\varepsilon_{f}$ to determine how many iterations $n$ are required to measure the statistics to the desired level of precision. This can be carried out without knowing the true weighted $\boldsymbol{d}$ or its true error $\boldsymbol{\Delta d}$.
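The convergence test can be folded directly into the subsampling loop. A sketch using the fractional criterion of eqs (10) and (18); `draw` is an assumed stand-in for one subsample-and-measure step, and the absolute value of $\epsilon_{I}$ is taken since only its magnitude matters for the stopping rule:

```python
import numpy as np

def subsample_until_converged(draw, eps_thresh, n_min=10, n_max=100000):
    """Accumulate Monte Carlo draws until eps_f ~ sqrt(n + 1) * |eps_I| (eq. 18)
    falls below eps_thresh. Returns the converged d_n and the iterations used."""
    samples = [draw() for _ in range(n_min)]
    while len(samples) < n_max:
        d_n = np.mean(samples, axis=0)
        samples.append(draw())
        d_n1 = np.mean(samples, axis=0)
        eps_i = np.mean((d_n - d_n1) / d_n1)  # iterative fractional difference, eq. (10)
        if np.sqrt(len(samples)) * abs(eps_i) < eps_thresh:
            break
    return np.mean(samples, axis=0), len(samples)
```

The significance-based criterion of eq. (21) follows the same pattern with the denominator replaced by a jackknife error estimate $\boldsymbol{\Delta d}_{n+1}$.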

## 3 Results

In this section we will explore the performance of the Monte Carlo subsampling and jittering procedure in removing survey systematic effects and masking theoretical uncertainties.

### 3.1 Jittering performance

#### 3.1.1 Masking small scale theoretical uncertainties

We compare the 2PCF and MST for two random walk distributions, LF (Mandelbrot, 1982) and ALF (Naidoo et al., 2020). The distributions have identical higher-order information since they are both random walks. On scales $\gtrsim 0.3\,h^{-1}{\rm Mpc}$ they, by design, have approximately equal 2PCF – see the left subplot of Fig. 1. Despite the agreement of the 2PCF at scales $r\gtrsim 0.3\,h^{-1}{\rm Mpc}$, the MST shows significant differences in the distribution of $d$, $l$, $b$ and $s$ (dark blue). The level of agreement between the LF and ALF distributions for the 2PCF and MST is quantified by the reduced $\chi^{2}$, where errors are computed via jackknife resampling and added in quadrature. The $\chi^{2}_{r}$ are extremely significant for the MST, especially for the edge lengths $l$ where $\chi^{2}_{r}=207$. The $\chi^{2}_{r}$ is also large for the 2PCF, at $\chi^{2}_{r}=17.1$, but this is driven by the inclusion of small scales: if we limit the results to scales $r\gtrsim 0.3\,h^{-1}{\rm Mpc}$ it reduces to $\chi^{2}_{r}=0.35$. Unlike the 2PCF, we cannot simply ignore edges of the MST above some threshold scale, as these scales percolate into other parts of the statistics, such as the branch shape and degree, from which it is unclear how they could be removed retroactively. For this reason we introduce the jittering scheme, which adds a random ‘jitter’, or displacement, to the location of each point in the distribution; here the jitter dispersion is $\sigma_{\rm J}=1\,h^{-1}{\rm Mpc}$. By adding a random jitter to the positions of the points, we are effectively applying the point-process equivalent of smoothing a field. The 2PCF with the jitter shows complete consistency between the LF and ALF distributions, with $\chi^{2}_{r}=0.32$ (light blue line for LF and points for ALF). Similarly, the distributions of $d$ now have $\chi^{2}_{r}=0.772$, $l$ a $\chi^{2}_{r}=1.42$, $b$ a $\chi^{2}_{r}=1.5$, and $s$ a $\chi^{2}_{r}=1.3$. The $\chi^{2}_{r}$ are now all consistent with the same distribution, showing that adding a jitter to the location of points can mask differences on small scales.
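The jittering operation itself is simple to implement. A minimal sketch, assuming the jitter is an independent Gaussian displacement per coordinate (one plausible reading of the dispersion definition) and using an illustrative function name:

```python
import numpy as np

def jitter(points, sigma_j, seed=None):
    """Apply a Gaussian 'jitter' to point positions.

    points  : (N, 3) array of comoving coordinates in h^-1 Mpc.
    sigma_j : jitter dispersion sigma_J in h^-1 Mpc, applied per coordinate.
    """
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=sigma_j, size=points.shape)

pts = np.zeros((2000, 3))                    # toy catalogue at the origin
jittered = jitter(pts, sigma_j=1.0, seed=0)  # sigma_J = 1 Mpc/h, as in Fig. 1
```

The same function applies unchanged for the larger dispersions used later (e.g. $\sigma_{\rm J}=3$ or $5\,h^{-1}{\rm Mpc}$).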

#### 3.1.2 Masking small scale survey systematic effects

All galaxy surveys will suffer from small scale systematic effects; for photometric surveys these are typically introduced by the blending of galaxies in crowded fields, while for spectroscopic surveys fibre collisions limit the number of spectra that can be obtained for galaxies with close neighbours on the sky. In either case, we can no longer trust the distribution of galaxies below these angular scales. This is slightly different to the difference in the LF and ALF distributions discussed previously, as in this scenario the systematic acts on the projected distribution of galaxies on the sky. This means galaxies that appear close on the sky, but are in fact radially distant from one another, can still suffer from these systematic effects.

To understand the consequences of systematic effects on our measurements we take galaxies from the Millennium XXL quadrant described in section 2.1.1 and iteratively remove galaxies with an angular separation of $<30\,{\rm arcsec}$. For each galaxy with more than one close pair (i.e. a neighbouring galaxy at angular separation $<30\,{\rm arcsec}$) we first match galaxies with their close pairs and randomly select a galaxy to keep. To ensure the missing galaxies are accounted for, we add the weights of the removed galaxies to the weight of the galaxy being kept. Each galaxy now has either a weight of zero (if it has been removed), a weight of one (if it has no close pairs or no pairs were lost), or an integer weight greater than one (to account for the lost pairs). This gives us a catalogue of galaxies with an approximate fibre collision-like systematic, which we will refer to as the no close pair (No CP) catalogue.
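The weight-transfer step of this construction can be sketched as follows. This is a toy version under the assumption that the close pairs have already been found (e.g. with a kd-tree on sky positions); the function name and pair list are illustrative.

```python
import numpy as np

def no_close_pairs(pairs, n_gal, seed=0):
    """Toy sketch of the No CP construction: for each matched close pair,
    randomly keep one galaxy and transfer the removed galaxy's weight to it.

    pairs : precomputed (i, j) index pairs with angular separation
            < 30 arcsec.
    """
    rng = np.random.default_rng(seed)
    w = np.ones(n_gal)
    for i, j in pairs:
        if w[i] == 0 or w[j] == 0:   # partner already removed earlier
            continue
        keep, drop = (i, j) if rng.random() < 0.5 else (j, i)
        w[keep] += w[drop]           # removed weight carried by the kept galaxy
        w[drop] = 0.0                # weight zero marks a removed galaxy
    return w

weights = no_close_pairs([(0, 1), (1, 2), (3, 4)], n_gal=6)
```

Note that total weight is conserved, so the subsequent Monte Carlo subsampling sees the correct effective number of galaxies.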

We measure the 2PCF and MST on the Millennium XXL quadrant with all galaxies and with the No CP catalogue, shown in dark blue in Fig. 2. For the No CP catalogue, galaxies with integer weights greater than one are repeated by their weighted number in the catalogue. This ensures they are correctly weighted in the 2PCF; for the MST, however, this step has practically no effect on the results, since it introduces edges of length zero, which have no physical meaning, and is no different to the results we would obtain if galaxies were not replicated. We can see that removing the close pairs reduces the amplitude of small scale clustering in the 2PCF, although this remains consistent with a $\chi^{2}_{r}=0.76$. For the MST, the distributions of $l$ and $b$ are significantly different, showing $\chi^{2}_{r}=88.2$ for $l$ and $\chi^{2}_{r}=17.8$ for $b$. These differences highlight the effect that fibre collisions can have on the MST and other hard-wired algorithms.

We now remeasure the 2PCF and MST with a jitter of dispersion $\sigma_{\rm J}=3\,h^{-1}{\rm Mpc}$, shown in light blue in Fig. 2. This removes any discrepancy in the MST statistics between the original and No CP catalogues, most significantly reducing the $\chi^{2}_{r}$ for $l$ from $88.2$ to $0.41$. The results show that adding a jitter to the position of points is able to remove the systematic effects caused by fibre collisions. Crucially, it removes a problem caused by galaxies with weights $>1$, which in the MST inevitably lead to edges of length zero. With a jitter this problem disappears entirely, since the repeated copies of a galaxy with weight $>1$ no longer lie exactly on top of each other. This ensures the missing galaxies are correctly accounted for in the MST, and indeed in any hard-wired statistic.

In practice the jitter dispersion scale will need to be calibrated to be large enough to remove sensitivity to small scale systematic effects like fibre collisions. In the current setup the No CP catalogue suffers from percolation effects, since close pairs percolate due to the algorithm used to construct them; this is one way the approximation departs strongly from the real effect of fibre collisions. In practice we would estimate the jitter dispersion scale by comparing mock datasets with and without fibre collisions as a function of $\sigma_{\rm J}$ and finding the smallest value that ensures consistency.

### 3.2 Monte Carlo subsampling performance

We test the performance of the Monte Carlo subsampling scheme, discussed in section2.4, to assign galaxy weights and mitigate survey selection functions at the catalogue level. This is used to determine the weighted hard-wired statistics without needing to understand how to pass weighted points directly to the hard-wired algorithm itself.

#### 3.2.1 Inverse redshift selection weights

Galaxy surveys observe galaxies inhomogeneously and anisotropically, effects induced by the galaxy survey’s footprint and redshift selection function. The redshift selection function forces inhomogeneities in the dataset which need to be accounted for in any cosmological measurement, as we do not want to confuse properties of the survey with cosmology. For the 2PCF, and indeed most $N$-point correlation functions, this is readily handled by including a catalogue of randoms which captures the survey’s footprint and selection function. This aptly removes any sensitivity in the 2PCF to the survey properties. This is not the case for the MST and other hard-wired algorithms, meaning results will be the product of cosmology and the survey’s footprint and selection functions.

To address this we assign galaxies additional weights that re-weight galaxies based on the inverse of the redshift density

$w_{\rm g}=\frac{\rho_{\rm target}}{\rho(z_{\rm g})},$ | (22) |

where $\rho(z_{\rm g})$ is the redshift selection function density for a galaxy at redshift $z_{\rm g}$ and $\rho_{\rm target}$ the target density for subsampling galaxies (set to $\rho_{\rm target}=0.001$ in this analysis).
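The combination of inverse density weights and Monte Carlo subsampling can be sketched as below. The selection function, mock redshifts and target density here are made-up toy numbers (the paper uses $\rho_{\rm target}=0.001$); weights are treated as keep-probabilities, so denser redshift slices are thinned more heavily.

```python
import numpy as np

rng = np.random.default_rng(1)

def rho_of_z(z):
    """Toy redshift selection function: density peaking at z = 0.15."""
    return 0.004 * np.exp(-((z - 0.15) / 0.03) ** 2)

z_gal = 0.15 + 0.02 * rng.standard_normal(50000)   # mock galaxy redshifts
rho_target = 0.0002                                 # illustrative target density
w_gal = rho_target / rho_of_z(z_gal)                # eq. (22) inverse weights

# One Monte Carlo subsampling iteration: draw galaxies with probability w_gal
# (clipped at 1; weights above 1 would instead require galaxy repetition).
keep = rng.random(z_gal.size) < np.clip(w_gal, 0.0, 1.0)
subsample = z_gal[keep]
```

Repeating the final draw $n$ times, computing the statistic on each subsample and averaging gives the weighted statistic, as described in section 2.4.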

We will first concentrate on the 2PCF and the performance of the weight assignment via Monte Carlo subsampling. The 2PCF is calculated with a random catalogue with 10 times the density of the galaxy catalogue. The 2PCF is first computed with unit weights, i.e. treating each point equally, indicated in Fig. 3 in dark blue. We then compute the 2PCF with inverse density weights applied directly (in red) and indirectly, using the Monte Carlo subsampling procedure computed over 100 iterations (in light blue). Note that for the randoms we directly assign inverse density weights in the 2PCF, to account for the imposed homogeneity of the distribution being sampled via Monte Carlo subsampling. The 2PCF in Fig. 3 shows: (1) weighting galaxies and randoms by the inverse of the density has no effect on the 2PCF, since randoms are designed to remove any systematic effects from the survey’s footprint and selection function; and (2) the Monte Carlo subsampling procedure is able to impose weights on the galaxy distribution at the catalogue level and retrieve the directly weighted 2PCF with a $\chi^{2}_{r}=0.00064$.

The MST distributions with (light blue) and without (dark blue) the inverse density weighting scheme are shown in Fig. 3 on the right. The figure demonstrates how biased the MST can be if the survey’s selection function is not accounted for in the measurements. By imposing the inverse density weights we homogenise the distribution at the catalogue level before making the MST measurement. This ensures the statistic is no longer dependent on the survey’s redshift selection function, which is demonstrated in Fig. 4. We plot the mean of the MST statistics in ten redshift slices between $0.1\leq z\leq 0.2$ against the Millennium XXL density as a function of redshift, $\rho(z)$. The top panels show the relation with unit weights, while the bottom panels show the relation when using Monte Carlo subsampling to apply inverse density weights. We fit a linear relation to the points and show that, without weighting, the MST statistics are strongly correlated with the redshift density of the distribution; this is not surprising, since we expect the MST to be extremely sensitive to density. Using the inverse density weights, this sensitivity is completely removed and the MST statistics are no longer correlated with the redshift selection function.

#### 3.2.2 Convergence and signal-to-noise

The Monte Carlo subsampling procedure allows us to apply galaxy weights, for mitigating survey systematic effects, at the catalogue level. The iterative procedure allows us to indirectly apply the weights to the statistics of interest. In Sec. 2.4.4 we introduced several convergence criteria that can be used to determine the number of Monte Carlo iterations $n$ required to make the measurement to the desired accuracy.

The mean fractional difference $\epsilon_{f}$ gives us a measure of the error of the Monte Carlo statistics in relation to the true weighted statistics. In most cases we will not have access to $\epsilon_{f}$, but only to $\epsilon_{I}$, which is computed from sequential iterations of the statistic. This is extremely useful in cases where errors are difficult to obtain, as convergence can be achieved by requiring the measurement be made to within, say, a $1\%$ error. In cases where we do have access to the error in the measurement we can use the mean significant difference $\varepsilon_{f}$.

We test the accuracy of the iterative estimators ($\epsilon_{I}$ and $\varepsilon_{I}$) in computing $\epsilon_{f}$ and $\varepsilon_{f}$ respectively, using the 2PCF, where galaxy weights can be assigned directly. In Fig. 5 we compare the convergence estimators for the 2PCF multipoles ($\ell=0$ the monopole, $\ell=2$ the quadrupole and $\ell=4$ the hexadecapole). The figure shows the direct estimator, using the true weighted multipoles, in relation to the iterative estimators, using the Monte Carlo procedure. The iterative estimator follows the distribution of the direct estimator for both the mean fractional difference (top panel) and the mean significant difference (bottom panel). This means we can rely on the iterative estimator to determine the iterations required for estimating the weighted MST and other hard-wired statistics when using Monte Carlo subsampling.

In Fig. 6 we compute the mean fractional difference and mean significant difference for the MST distributions as a function of Monte Carlo iterations. These are computed using the iterative estimator, since weights cannot be applied directly to the MST. Here we illustrate how the MST statistics converge, showing that in general the degree is the quickest to converge, followed by the edge length, branch length and then branch shape. Some aspect of this is dependent on the bin sizes used for the MST distribution measurements; in cases where iterations are prohibitively expensive, the user could increase the bin sizes to reduce the iterations required for convergence. The values of $\epsilon_{f}$ and $\varepsilon_{f}$ tell us how accurate our estimates of the weighted MST are; for example, we know from $\epsilon_{f}$ (top panel) that after $\gtrsim 50$ iterations we achieve a $0.1\%$ measurement of the MST statistics and an error of $5\%$ as a fraction of the standard deviation.

We measure the mean SNR of the MST

${\rm SNR}=\frac{1}{n_{\rm d}}\sum_{i=1}^{n_{\rm d}}\frac{m_{x}^{(i)}}{\Delta m_{x}^{(i)}},$ | (23) |

where $x$ denotes $d$, $l$, $b$ or $s$. The ${\rm SNR}$ is measured for the MST distributions measured on all the galaxies (${\rm SNR}^{\rm full}$) and for the iterative MST distributions measured with the Monte Carlo procedure (${\rm SNR}^{\rm iter}$). In Fig. 7 we compare the SNR of the full MST in relation to the iterative estimator using inverse density weights. The plot shows that in all cases the SNR of the measurement is improved by marginalising over the redshift selection function with inverse density weights. A concern in introducing inverse density weights and homogenising the distribution before applying the MST is that this process reduces the information content of the data. This concern only turns out to be true for a few iterations; after more than a few iterations the SNR shows the opposite effect. By marginalising over the redshift selection function we improve the SNR and are therefore extracting more information from the data. The improved SNR arises from the fact that our measurement is better defined: it is a measurement of cosmology at a specific density of galaxies, while previously the MST measurement was a measure of both cosmology and the survey’s redshift selection function – the latter of which is a systematic that we are not interested in measuring.
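Eq. 23 amounts to averaging the per-bin signal-to-noise over the $n_{\rm d}$ bins; a minimal sketch with illustrative input values:

```python
import numpy as np

def mean_snr(m, dm):
    """Mean signal-to-noise of a binned statistic, as in eq. (23):
    the average of m / Delta m over the n_d bins."""
    m, dm = np.asarray(m, dtype=float), np.asarray(dm, dtype=float)
    return np.mean(m / dm)

# toy binned statistic where every bin happens to have S/N = 10
snr = mean_snr([10.0, 20.0, 30.0], [1.0, 2.0, 3.0])
```

Here `m` would be a binned MST distribution ($d$, $l$, $b$ or $s$) and `dm` its jackknife error.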

#### 3.2.3 Survey footprint bias

The footprint for a galaxy survey is dictated by the location of the observations (if on the ground), galactic and zodiacal foregrounds, bright stars and observing conditions. This will create a footprint which is unique to the survey, non-trivial and sometimes difficult to replicate. For statistics that require $N$-body simulations, we need to decide whether to emulate the MST or hard-wired algorithm with the survey footprint already superimposed on the distribution of galaxies, or to emulate it in a topology more readily accessible for simulations, such as a cubic box or, for a lightcone, a quadrant on the sky. The benefit of the latter is that the measurement is independent of the survey and can be ported to multiple different surveys. In all cases, using the emulated statistic to perform inference with a galaxy survey measurement requires learning the footprint bias – i.e. the bias that the footprint imposes on the measured statistic. We will assume the bias is small and can be learnt from one point in parameter space; in practice this may not be true, in which case it may be beneficial to emulate the statistics with the survey footprint already imposed on the simulation data.

In Fig. 8 we make measurements of the MST on the Millennium XXL galaxy catalogue in a quadrant on the sky and with the BOSS LOWZ North survey footprint imposed (simply referred to as the ‘survey’). The statistics are measured with inverse density weights at a galaxy density of $0.001\,h^{3}\,{\rm Mpc}^{-3}$, and shown with and without a jitter of dispersion $\sigma_{\rm J}=5\,h^{-1}{\rm Mpc}$. The measurements show that for the MST statistics the footprint bias from a quadrant to the LOWZ North survey mask is small, and smaller still after applying a jitter. Since the footprint bias is small, its functional form can be learned by applying the MST to a galaxy simulation in a cubic box or quadrant and then to the same simulation with the survey footprint imposed. This bias can then be subtracted from the observational measurements or added to the theoretical predictions from simulations prior to parameter inference.

#### 3.2.4 Inverse angular selection weights

Galaxy surveys will generally aim to observe the sky to a certain depth across the survey’s footprint. However, seeing conditions, instrumentation and foregrounds can limit the degree to which this can be achieved. This will create variations in the number of galaxies observed across the sky, which is often characterised by a measure of completeness. If unaccounted for, this variation will be a systematic that can alter our results and could lead to biased or incorrect parameter inference. It is therefore important that this effect is carefully included and mitigated.

To test the effect of completeness we subsample galaxies from the LOWZ North footprint with a completeness fraction that varies linearly in RA, from 1 to 0.7. This is quite an extreme set-up, designed to illustrate how such a systematic could bias measurements, not one we expect to see in real data. We measure the MST distribution for the survey with varying completeness and compare it to a measurement made on the quadrant with a completeness fraction of 0.85 (shown in blue). The measurements are made with inverse density weights for the radial selection function and a jitter with dispersion $\sigma_{\rm J}=5\,h^{-1}{\rm Mpc}$. When we apply the MST to the galaxies with varying completeness we see a more significant offset in relation to a quadrant with an effective completeness of $0.85$ (i.e. the mean of the variable completeness of the survey). We then apply inverse completeness weights, making full use of Eq. 8 and setting $C_{\rm target}=0.7$. This homogenises the distribution of galaxies across the sky prior to the computation of the MST, so that the completeness systematic does not enter the MST statistics. For comparison we subsample the quadrant with a completeness fraction of 0.7 (shown in orange). The relations show that correcting for discrepancies in completeness reduces the $\chi^{2}_{r}$. The main discrepancies that remain are characteristic of the footprint bias shown in Fig. 8, which has not been removed in this analysis.

In Fig. 10 we measure the mean of the MST statistics as a function of completeness, using the average completeness in jackknife tiles on the simulated survey. We find that the means of the distributions for $l$ and $b$ show a clear relation with completeness, where the significance of the linear gradient is $|m/\sigma_{m}|>2$. After applying the completeness correction, the linear relation with completeness is greatly reduced, with $|m/\sigma_{m}|<2$.
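The gradient-significance test used here can be sketched as a weighted least-squares line fit; the mock data below (a linear trend plus Gaussian scatter) are illustrative assumptions, and the function name is hypothetical.

```python
import numpy as np

def gradient_significance(completeness, statistic_mean):
    """Fit a straight line to a mean MST statistic versus completeness and
    return |m / sigma_m|, the significance of the linear gradient."""
    coeffs, cov = np.polyfit(completeness, statistic_mean, 1, cov=True)
    m, sigma_m = coeffs[0], np.sqrt(cov[0, 0])
    return abs(m / sigma_m)

rng = np.random.default_rng(3)
c = np.linspace(0.7, 1.0, 20)                        # completeness per tile
with_trend = 1.0 + 0.5 * (c - 0.85) + rng.normal(0.0, 0.01, c.size)
sig = gradient_significance(c, with_trend)           # well above 2
```

A statistic corrected for completeness should return $|m/\sigma_{m}|<2$ under the same test.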

## 4 Discussion

The MST and other hard-wired field level statistics offer an alternative probe of large scale structure, capturing higher-order statistical information present in the field and cosmic web which are not generally attainable from measuring the 2PCF. However, we are yet to capitalise on the additional information in these statistics because making these measurements precisely on real data is fraught with observational and theoretical systematic effects and uncertainties.

The conventional approach for dealing with hard-wired algorithms such as the MST is to make forward modelled predictions from $N$-body simulations. The simulations would vary both cosmological parameters and galaxy population parameters (such as HODs) and forward model survey systematic effects. This requires synthetic galaxy catalogues with:

- 1.
Galaxy weights – used to forward model the galaxy weight bias caused by ignoring weights or, effectively, using unit weights (i.e. setting all galaxy weights to one) in the MST or hard-wired algorithm.

- 2.
Small scale systematic effects – such as fibre collisions and systematic effects from crowding, which need to be injected since the MST and some hard-wired algorithms may not be able to impose small scale cuts.

- 3.
Redshift selection function – that may be challenging to impose if the target selection is complex.

- 4.
Footprint and angular selection functions – that may not be trivial if they are dictated by observational systematic effects.

- 5.
Galaxy bias evolution – galaxies observed at higher redshift will often be more luminous and more massive, meaning our tracer of large scale structure changes as a function of redshift. The evolution of galaxy bias may need to be forward modelled.

While it is common to reproduce all of these systematic effects in mocks for computing covariances (see Kitaura et al., 2016; Ereza et al., 2024), reproducing them on synthetic data for parameter inference is much more challenging due to the complexity of the task and the number of simulations utilised. Even if all these requirements are satisfied, we cannot be certain we can trust the small scale physics from $N$-body simulations. Similarly, even where catalogues are produced from hydrodynamic simulations, we cannot be certain that the galaxy formation and evolution prescriptions are completely accurate. These issues mean our predictions for the MST and hard-wired algorithms may be fatally flawed. Furthermore, forward modelling will need to be carried out independently for every galaxy survey, due to the specific nature of survey systematic effects. This means the analysis can only be compared to other surveys at the parameter level, since the measurements made are a product of cosmology, galaxy population and systematic effects – the latter of which will be unique to the survey and will make comparisons between surveys practically impossible. This will become important if the constraints on cosmological parameters challenge the standard model, requiring deeper investigation and interpretation.

The methods in this paper aim to resolve these issues by showing how to incorporate galaxy weights and mitigate small scale uncertainties and galaxy survey systematic effects, producing measurements from surveys that are solely dependent on cosmology and the galaxy tracers being used. This simplifies the requirements on $N$-body simulations, which will no longer be required to reproduce small scale systematic effects like fibre collisions, or galaxy survey redshift and angular selection functions. This means emulators of the MST statistics or hard-wired algorithm can be built more generally, substantially reducing the computational requirement for parameter inference. Furthermore, it will allow the measurements to be more readily interpreted, since the measurement is no longer survey dependent through survey specific selection functions and systematic effects. The only survey specific issue that will remain is the footprint bias, which is the bias in making a measurement in the survey’s footprint in comparison to making it in an $N$-body box or lightcone quadrant (or in some cases across the full sky). The footprint bias can be computed by superimposing synthetic mock galaxy data into the survey’s footprint, and can then be removed from the survey measurement or injected into the predictions depending on the size and nature of the bias.

To implement this method we rely on two concepts: jittering and Monte Carlo subsampling. A jitter is the process of adding noise to the positions of galaxies in the data; in our analysis the noise added follows a Gaussian distribution with a user specified dispersion scale $\sigma_{\rm J}$. Jittering allows us to mask small scale theoretical uncertainties in simulations and small scale systematic effects in observations, such as fibre collisions. The jittering process is the point-process equivalent of smoothing a field. The method is shown to be able to remove small scale theoretical differences, as well as fibre collision-like systematic effects in a mock galaxy catalogue. Monte Carlo subsampling is the process by which we can apply weights to galaxy catalogues indirectly when the algorithm or statistic being measured, like the MST, is unable to include weighted points. Monte Carlo subsampling treats galaxy weights as probabilities, drawing galaxies based on their probabilities and performing the MST or hard-wired algorithm on the subsampled galaxies iteratively. The mean of the statistics computed from the subsampled galaxies is taken to be the weighted statistic. We demonstrate the subsampling procedure applies weights correctly by applying the technique to the 2PCF, where weights can be applied directly. We also show that without this indirect application of weights, measurements of the MST are extremely biased, with a reduced $\chi^{2}\approx 400$ in some cases.

We extend the Monte Carlo subsampling to remove survey dependent selection functions, such as the redshift selection function and completeness. To achieve this, we adjust the weights of galaxies by the inverse of the redshift selection function and completeness. This allows us to draw subsampled galaxies uniformly across the survey at some specified target density. This method removes any dependence between the survey’s selection functions (redshift and completeness) and the MST or hard-wired statistics measured.

This approach to measuring the MST, and indeed any hard-wired algorithm, has a number of advantages over the traditional forward model approach.

- •
Jittering and Monte Carlo subsampling occur at the catalogue level, meaning they can be relied upon for any statistics, allowing for galaxy weights to be applied correctly and survey systematic effects to be largely removed.

- •
The method targets a measurement at a specified target galaxy density. This is a better defined measurement than simply making the measurement on all galaxies in the survey, and leads to measurements with higher SNR.

- •
Measurements are not survey specific and can readily be compared at the data vector level rather than purely at the parameter level. This will greatly improve interpretability and reliability, especially in cases where the inference from the MST or hard-wired algorithm challenges the standard model.

- •
It removes the heavy cost of forward modelling and enables emulators to be built for a more general use rather than being custom built for a single galaxy survey.

In this paper we have limited our analysis to galaxies with stellar masses greater than $10^{13}\,h^{-1}{\rm M_{\odot}}$. In practice, how galaxies are selected in a cosmological analysis could play a significant role in the complexity of the galaxy population modelling. A volume limited sample, i.e. a complete sample of galaxies above a certain luminosity or mass, will make modelling significantly easier, as it removes the need to model Malmquist bias and incomplete galaxy populations at the low mass end.

In Fig. 11 we compare the schematic of a traditional forward modelled parameter inference pipeline with the inversion model being proposed in this paper. The key difference between the approaches is that the inversion model removes the impact of survey systematic effects on the MST or hard-wired algorithm on real data, while the forward model approach adds survey systematic effects to synthetic data.

We outline below the steps required to make measurements of the MST or hard-wired statistic using the inversion model and consequently how to infer parameter constraints from simulations.

- 1.
Measurements from galaxy survey:

- (a)
Define the properties of the galaxy population used to compute the MST or hard-wired statistic. This could mean creating a volume limited sample of galaxies in mass or luminosity. Note, the more complex the properties, the more complex the galaxy population modelling will need to be.

- (b)
Measure the redshift selection function on the galaxy population.

- (c)
Assign galaxies inverse density weights to remove the redshift selection function and variations in completeness across the sky.

- (d)
Using mocks with and without small scale systematic effects (like fibre collisions) determine the jitter dispersion $\sigma_{\rm J}$ required to mask small scale systematic effects. This is determined by finding the smallest value of $\sigma_{\rm J}$ which produces consistent measurements with and without small scale systematic effects.

- (e)
Determine the number of Monte Carlo subsampling iterations required for the data vector to converge.

- (f)
Test that measurements are not correlated with systematic effects, to confirm that systematic corrections have been correctly applied.

- 2.
Measurements from $N$-body simulations:

- (a)
Model galaxy population and perform similar cuts to real data.

- (b)
Assign galaxies subsampling weights to make measurements at the correct effective galaxy density.

- (c)
Determine the number of Monte Carlo subsampling iterations required for the data vector to converge.

- (d)
At a fiducial cosmology and galaxy population model, compute the footprint bias of the survey by making the measurement in the original simulation box or lightcone quadrant and then imposing the survey’s footprint on the sample.

- 3.
Validation:

- (a)
Using a single synthetic catalogue, test that survey systematic effects are correctly removed through the inversion pipeline.

- (b)
Test that predictions from $N$-body simulations with the same cosmological parameters and galaxy population modelling match measurements from galaxy survey mocks.

- (c)
Test that the parameter inference pipeline is unbiased by constraining cosmological parameters and galaxy population models on mock observations.

- 4.
Apply parameter inference pipeline to real data.

It is important to note that in all cases we assume the $N$-body measurements are made in redshift space, either using a lightcone or by adding redshift space distortions along a chosen line-of-sight axis inside an $N$-body periodic box. However, for the MST the measurements are made in comoving distance, where redshifts are converted into comoving distances assuming a fiducial cosmology. For simulated lightcones a like-for-like analysis can be performed, because this conversion can be carried out consistently; for periodic boxes, however, the conversion becomes problematic, since stretching the box to carry out the conversion to the fiducial comoving distance will alter the footprint bias. For this reason it may be simpler to project the periodic box across a region on the sky, so that the positions can be converted into redshift and reprojected into the fiducial comoving distance. To ensure the footprint bias is the same we would limit the analysis to the redshift ranges of the survey.
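The redshift-to-comoving-distance conversion at a fiducial cosmology can be sketched as below. This is a minimal stand-in for a library routine (e.g. `astropy.cosmology`), assuming a flat $\Lambda$CDM model; the parameter values `h=0.7` and `omega_m=0.3` are illustrative, not those used in the paper.

```python
import numpy as np

def comoving_distance(z, h=0.7, omega_m=0.3):
    """Comoving distance in h^-1 Mpc for a flat LCDM fiducial cosmology,
    by trapezoidal integration of c / H(z')."""
    c_km_s = 299792.458
    zz = np.linspace(0.0, z, 10001)
    e_z = np.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    integrand = c_km_s / (100.0 * h * e_z)           # c / H(z') in Mpc
    d_mpc = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz))
    return d_mpc * h                                  # convert Mpc -> h^-1 Mpc

d = comoving_distance(0.2)
```

Expressed in $h^{-1}{\rm Mpc}$ the result is independent of $h$, which is one reason that unit is convenient for comparing measurements across fiducial choices.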

## 5 Summary

In this paper, we outline how to make precision measurements of the MST from galaxy surveys. The technique relies on two methods: jittering and Monte Carlo subsampling.

Jittering is a point-process smoothing technique, where the positions of points are given a random ‘jitter’, or noise, that follows a Gaussian with a standard deviation given by the jitter dispersion $\sigma_{\rm J}$. In Fig. 1 we show how jittering can remove the small scale differences of two random walk distributions – illustrating that jittering can mask theoretical small scale uncertainties. In Fig. 2 we show how jittering removes the effects of fibre collisions – illustrating that jittering can mask small scale systematic effects from galaxy surveys.

Monte Carlo subsampling is a technique for indirectly incorporating galaxy weights into the MST. The MST is a hard-wired algorithm, meaning its internals and outputs cannot be altered or post-processed in a consistent way. This presents a unique challenge for galaxy surveys, as there is no method for directly including galaxy weights in the MST. In Monte Carlo subsampling, galaxy weights are treated as sampling probabilities: the MST is computed on subsampled realisations of the galaxies, and taking the mean of the MST distributions over many realisations gives the MST of the weighted galaxies. In Fig. 3 we show that this technique reproduces the weighted 2PCF, and that failing to include galaxy weights can severely bias the MST. Furthermore, we show how to alter the galaxy weights to correct for variations in the survey’s redshift selection function and completeness across the sky. The fundamental change in this approach is that we target a measurement of the MST for galaxies at a specified target density. This ensures the MST has no correlations with the redshift selection function (see Fig. 4) or completeness (see Fig. 10). In Sec. 2.4.4 we derive convergence measures which can be used to determine the number of iterations required for Monte Carlo subsampling. We illustrate the convergence as a function of iterations for the 2PCF, using a direct and an indirect approximation for the convergence, in Fig. 5, and for the MST, using the approximation, in Fig. 6. In Fig. 7 we show that Monte Carlo subsampling improves the SNR of the MST measurements.
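The subsampling loop can be sketched as follows. This is an illustrative toy, not the paper's implementation: in practice one would use a dedicated MST package such as MiSTree, whereas here we build the MST with scipy on a brute-force distance matrix to keep the sketch self-contained, and the edge-length histogram, function names, and the simple `weights * target_fraction` probability rule are our own choices.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

rng = np.random.default_rng(1)

def mst_edge_lengths(points):
    """Edge lengths of the Euclidean MST of a point set (brute force, small N)."""
    dist = distance_matrix(points, points)
    tree = minimum_spanning_tree(dist)
    return tree.data  # the N-1 edge weights of the tree

def weighted_mst_statistic(points, weights, target_fraction, n_iter=100, rng=rng):
    """Mean MST edge-length histogram over Monte Carlo subsamples.

    Each galaxy is kept with probability weights * target_fraction (clipped
    to 1), so on average the subsamples realise the weighted catalogue at the
    target density. Averaging over realisations gives the weighted statistic.
    """
    prob = np.clip(weights * target_fraction, 0.0, 1.0)
    bins = np.linspace(0.0, 0.5, 26)
    hists = []
    for _ in range(n_iter):
        keep = rng.random(len(points)) < prob
        if keep.sum() < 2:
            continue
        lengths = mst_edge_lengths(points[keep])
        hist, _ = np.histogram(lengths, bins=bins, density=True)
        hists.append(hist)
    return np.mean(hists, axis=0)

points = rng.random((200, 3))
weights = rng.uniform(0.5, 1.5, size=200)
mean_hist = weighted_mst_statistic(points, weights, target_fraction=0.5)
```

The same loop applies unchanged to any other hard-wired statistic: only the function computing the per-realisation summary needs to be swapped out.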

In Fig. 8 we illustrate how the technique allows us to compare predictions from measurements made with quite different footprints on the sky. The only remaining correction is the survey’s footprint bias: the bias incurred by making a measurement in the survey’s footprint compared with making it in a simulation (periodic box or lightcone quadrant). This is a bias that can be learnt and corrected.
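One simple way to model such a correction, assuming the footprint bias is multiplicative and estimated from simulations (the function names and this specific form are our illustration, not necessarily the paper's):

```python
import numpy as np

def footprint_bias(stat_footprint_sims, stat_fullbox_sims):
    """Multiplicative footprint bias, averaged over simulation realisations.

    Both inputs are (n_realisations, n_bins) arrays of the same statistic,
    measured once inside the survey footprint and once in the full volume.
    """
    return np.mean(stat_footprint_sims, axis=0) / np.mean(stat_fullbox_sims, axis=0)

def correct_data(stat_data, bias):
    """Divide the learnt footprint bias out of the survey measurement."""
    return stat_data / bias
```

Once the bias is learnt from a suite of simulations, the corrected measurement can be compared directly with predictions made in periodic boxes or on other footprints.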

Lastly, in Fig. 11 we illustrate how a parameter inference pipeline would need to be altered to include this inversion method, and in Section 4 we outline the steps that need to be taken to make a measurement and infer cosmological parameters from the MST. The technique introduced in this paper resolves a number of challenges faced by many hard-wired field level statistics, not only the MST, and because the techniques operate at the catalogue level they can be used for any statistic. By mitigating survey systematic effects we can make measurements that can be readily compared between surveys, and can rely on emulators and simulations that are not custom built for a single survey. This will improve interpretability and make it easier to reap the rewards and sensitivities of field level statistics from the next generation of galaxy surveys, such as DESI, *Euclid*, LSST and WST.

## Acknowledgements

We thank Benjamin Joachimi, Nicolas Tessore, Tessa Baker, Shaun Cole and Graziano Rossi for stimulating discussions and guidance in the development of this paper. KN acknowledges support from the Royal Society grant number URF\R\231006. OL acknowledges the STFC Consolidated Grant ST/R000476/1 and visits to All Souls College and to the Physics Department, Oxford University.

## Data Availability

All data used in this analysis are produced from publicly available datasets and software packages. The Millennium XXL galaxy lightcone (Smith et al., 2022) can be downloaded from the Millennium database (http://icc.dur.ac.uk/data/), and the random walk Lévy-flight distributions can be reproduced using the publicly available Python package MiSTree (Naidoo, 2019).

## References

- Barrow et al. (1985) Barrow, J. D., Bhavsar, S. P., & Sonoda, D. H., 1985. Minimal spanning trees, filaments and galaxy clustering, MNRAS, 216, 17–35.
- Bonnaire et al. (2022) Bonnaire, T., Aghanim, N., Kuruvilla, J., & Decelle, A., 2022. Cosmology with cosmic web environments. I. Real-space power spectra, A&A, 661, A146.
- Bonnaire et al. (2023) Bonnaire, T., Kuruvilla, J., Aghanim, N., & Decelle, A., 2023. Cosmology with cosmic web environments. II. Redshift-space auto and cross-power spectra, A&A, 674, A150.
- DESI Collaboration etal. (2016)DESI Collaboration, Aghamousa, A., Aguilar, J., Ahlen, S., Alam, S.,Allen, L.E., Allende Prieto, C., Annis, J., Bailey, S., Balland,C., Ballester, O., Baltay, C., Beaufore, L., Bebek, C., Beers,T.C., Bell, E.F., Bernal, J.L., Besuner, R., Beutler, F., Blake,C., Bleuler, H., Blomqvist, M., Blum, R., Bolton, A.S., Briceno,C., Brooks, D., Brownstein, J.R., Buckley-Geer, E., Burden, A.,Burtin, E., Busca, N.G., Cahn, R.N., Cai, Y.-C., Cardiel-Sas, L.,Carlberg, R.G., Carton, P.-H., Casas, R., Castander, F.J.,Cervantes-Cota, J.L., Claybaugh, T.M., Close, M., Coker, C.T.,Cole, S., Comparat, J., Cooper, A.P., Cousinou, M.C., Crocce, M.,Cuby, J.-G., Cunningham, D.P., Davis, T.M., Dawson, K.S., de laMacorra, A., De Vicente, J., Delubac, T., Derwent, M., Dey, A.,Dhungana, G., Ding, Z., Doel, P., Duan, Y.T., Ealet, A.,Edelstein, J., Eftekharzadeh, S., Eisenstein, D.J., Elliott, A.,Escoffier, S., Evatt, M., Fagrelius, P., Fan, X., Fanning, K.,Farahi, A., Farihi, J., Favole, G., Feng, Y., Fernandez, E.,Findlay, J.R., Finkbeiner, D.P., Fitzpatrick, M.J., Flaugher, B.,Flender, S., Font-Ribera, A., Forero-Romero, J.E., Fosalba, P.,Frenk, C.S., Fumagalli, M., Gaensicke, B.T., Gallo, G.,Garcia-Bellido, J., Gaztanaga, E., Pietro Gentile Fusillo, N.,Gerard, T., Gershkovich, I., Giannantonio, T., Gillet, D.,Gonzalez-de-Rivera, G., Gonzalez-Perez, V., Gott, S., Graur, O.,Gutierrez, G., Guy, J., Habib, S., Heetderks, H., Heetderks, I.,Heitmann, K., Hellwing, W.A., Herrera, D.A., Ho, S., Holland, S.,Honscheid, K., Huff, E., Hutchinson, T.A., Huterer, D., Hwang,H.S., Illa Laguna, J.M., Ishikawa, Y., Jacobs, D., Jeffrey, N.,Jelinsky, P., Jennings, E., Jiang, L., Jimenez, J., Johnson, J.,Joyce, R., Jullo, E., Juneau, S., Kama, S., Karcher, A., Karkar,S., Kehoe, R., Kennamer, N., Kent, S., Kilbinger, M., Kim, A.G.,Kirkby, D., Kisner, T., Kitanidis, E., Kneib, J.-P., Koposov, S.,Kovacs, E., Koyama, K., Kremin, A., Kron, R., Kronig, L.,Kueter-Young, A., Lacey, 
C.G., Lafever, R., Lahav, O., Lambert,A., Lampton, M., Landriau, M., Lang, D., Lauer, T.R., Le Goff,J.-M., Le Guillou, L., Le Van Suu, A., Lee, J.H., Lee, S.-J.,Leitner, D., Lesser, M., Levi, M.E., L’Huillier, B., Li, B.,Liang, M., Lin, H., Linder, E., Loebman, S.R., Lukić, Z.,Ma, J., MacCrann, N., Magneville, C., Makarem, L., Manera, M.,Manser, C.J., Marshall, R., Martini, P., Massey, R., Matheson, T.,McCauley, J., McDonald, P., McGreer, I.D., Meisner, A., Metcalfe,N., Miller, T.N., Miquel, R., Moustakas, J., Myers, A., Naik, M.,Newman, J.A., Nichol, R.C., Nicola, A., Nicolati da Costa, L.,Nie, J., Niz, G., Norberg, P., Nord, B., Norman, D., Nugent, P.,O’Brien, T., Oh, M., Olsen, K. A.G., Padilla, C., Padmanabhan, H.,Padmanabhan, N., Palanque-Delabrouille, N., Palmese, A., Pappalardo,D., Pâris, I., Park, C., Patej, A., Peacock, J.A., Peiris,H.V., Peng, X., Percival, W.J., Perruchot, S., Pieri, M.M.,Pogge, R., Pollack, J.E., Poppett, C., Prada, F., Prakash, A.,Probst, R.G., Rabinowitz, D., Raichoor, A., Ree, C.H., Refregier,A., Regal, X., Reid, B., Reil, K., Rezaie, M., Rockosi, C.M.,Roe, N., Ronayette, S., Roodman, A., Ross, A.J., Ross, N.P.,Rossi, G., Rozo, E., Ruhlmann-Kleider, V., Rykoff, E.S., Sabiu,C., Samushia, L., Sanchez, E., Sanchez, J., Schlegel, D.J.,Schneider, M., Schubnell, M., Secroun, A., Seljak, U., Seo, H.-J.,Serrano, S., Shafieloo, A., Shan, H., Sharples, R., Sholl, M.J.,Shourt, W.V., Silber, J.H., Silva, D.R., Sirk, M.M., Slosar,A., Smith, A., Smoot, G.F., Som, D., Song, Y.-S., Sprayberry, D.,Staten, R., Stefanik, A., Tarle, G., Sien Tie, S., Tinker, J.L.,Tojeiro, R., Valdes, F., Valenzuela, O., Valluri, M.,Vargas-Magana, M., Verde, L., Walker, A.R., Wang, J., Wang, Y.,Weaver, B.A., Weaverdyck, C., Wechsler, R.H., Weinberg, D.H.,White, M., Yang, Q., Yeche, C., Zhang, T., Zhao, G.-B., Zheng,Y., Zhou, X., Zhou, Z., Zhu, Y., Zou, H., & Zu, Y., 2016.The DESI Experiment Part I: Science,Targeting, and Survey Design,arXiv e-prints, p. arXiv:1611.00036.
- Ereza et al. (2024) Ereza, J., Prada, F., Klypin, A., Ishiyama, T., Smith, A., Baugh, C. M., Li, B., Hernández-Aguayo, C., & Ruedas, J., 2024. The UCHUU-GLAM BOSS and eBOSS LRG lightcones: Exploring clustering and covariance errors, MNRAS.
- Fluri et al. (2018) Fluri, J., Kacprzak, T., Refregier, A., Amara, A., Lucchi, A., & Hofmann, T., 2018. Cosmological constraints from noisy convergence maps through deep learning, Phys. Rev. D, 98(12), 123518.
- Ivezić et al. (2019) Ivezić, Ž., Kahn, S. M., Tyson, J. A., Abel, B., Acosta, E., Allsman, R., Alonso, D., AlSayyad, Y., Anderson, S. F., Andrew, J., et al., 2019. LSST: From Science Drivers to Reference Design and Anticipated Data Products, ApJ, 873(2), 111.
- Jalali Kanafi et al. (2023) Jalali Kanafi, M. H., Ansarifard, S., & Movahed, S. M. S., 2023. Imprint of massive neutrinos on Persistent Homology of large-scale structure, arXiv e-prints, p. arXiv:2311.13520.
- Jeffrey etal. (2024)Jeffrey, N., Whiteway, L., Gatti, M., Williamson, J., Alsing, J.,Porredon, A., Prat, J., Doux, C., Jain, B., Chang, C., Cheng,T.Y., Kacprzak, T., Lemos, P., Alarcon, A., Amon, A., Bechtol, K.,Becker, M.R., Bernstein, G.M., Campos, A., Carnero Rosell, A.,Chen, R., Choi, A., DeRose, J., Drlica-Wagner, A., Eckert, K.,Everett, S., Ferté, A., Gruen, D., Gruendl, R.A., Herner, K.,Jarvis, M., McCullough, J., Myles, J., Navarro-Alsina, A., Pandey,S., Raveri, M., Rollins, R.P., Rykoff, E.S., Sánchez, C.,Secco, L.F., Sevilla-Noarbe, I., Sheldon, E., Shin, T., Troxel,M.A., Tutusaus, I., Varga, T.N., Yanny, B., Yin, B., Zuntz, J.,Aguena, M., Allam, S.S., Alves, O., Bacon, D., Bocquet, S.,Brooks, D., da Costa, L.N., Davis, T.M., De Vicente, J., Desai,S., Diehl, H.T., Ferrero, I., Frieman, J., García-Bellido, J.,Gaztanaga, E., Giannini, G., Gutierrez, G., Hinton, S.R.,Hollowood, D.L., Honscheid, K., Huterer, D., James, D.J., Lahav,O., Lee, S., Marshall, J.L., Mena-Fernández, J., Miquel, R.,Pieres, A., Plazas Malagón, A.A., Roodman, A., Sako, M.,Sanchez, E., Sanchez Cid, D., Smith, M., Suchyta, E., Swanson,M.E.C., Tarle, G., Tucker, D.L., Weaverdyck, N., Weller, J.,Wiseman, P., & Yamamoto, M., 2024.Dark Energy Survey Year 3 results: likelihood-free, simulation-based$w$CDM inference with neural compression of weak-lensing map statistics,arXiv e-prints, p. arXiv:2403.02314.
- Kitaura et al. (2016) Kitaura, F.-S., Rodríguez-Torres, S., Chuang, C.-H., Zhao, C., Prada, F., Gil-Marín, H., Guo, H., Yepes, G., Klypin, A., Scóccola, C. G., Tinker, J., McBride, C., Reid, B., Sánchez, A. G., Salazar-Albornoz, S., Grieb, J. N., Vargas-Magana, M., Cuesta, A. J., Neyrinck, M., Beutler, F., Comparat, J., Percival, W. J., & Ross, A., 2016. The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: mock galaxy catalogues for the BOSS Final Data Release, MNRAS, 456(4), 4156–4173.
- Kreisch et al. (2022) Kreisch, C. D., Pisani, A., Villaescusa-Navarro, F., Spergel, D. N., Wandelt, B. D., Hamaus, N., & Bayer, A. E., 2022. The GIGANTES Data Set: Precision Cosmology from Voids in the Machine-learning Era, ApJ, 935(2), 100.
- Landy & Szalay (1993) Landy, S. D. & Szalay, A. S., 1993. Bias and Variance of Angular Correlation Functions, ApJ, 412, 64.
- Laureijs etal. (2011)Laureijs, R., Amiaux, J., Arduini, S., Auguères, J.L.,Brinchmann, J., Cole, R., Cropper, M., Dabin, C., Duvet, L.,Ealet, A., Garilli, B., Gondoin, P., Guzzo, L., Hoar, J.,Hoekstra, H., Holmes, R., Kitching, T., Maciaszek, T., Mellier, Y.,Pasian, F., Percival, W., Rhodes, J., Saavedra Criado, G., Sauvage,M., Scaramella, R., Valenziano, L., Warren, S., Bender, R.,Castander, F., Cimatti, A., Le Fèvre, O., Kurki-Suonio, H.,Levi, M., Lilje, P., Meylan, G., Nichol, R., Pedersen, K., Popa,V., Rebolo Lopez, R., Rix, H.W., Rottgering, H., Zeilinger, W.,Grupp, F., Hudelot, P., Massey, R., Meneghetti, M., Miller, L.,Paltani, S., Paulin-Henriksson, S., Pires, S., Saxton, C.,Schrabback, T., Seidel, G., Walsh, J., Aghanim, N., Amendola, L.,Bartlett, J., Baccigalupi, C., Beaulieu, J.P., Benabed, K., Cuby,J.G., Elbaz, D., Fosalba, P., Gavazzi, G., Helmi, A., Hook, I.,Irwin, M., Kneib, J.P., Kunz, M., Mannucci, F., Moscardini, L.,Tao, C., Teyssier, R., Weller, J., Zamorani, G., Zapatero Osorio,M.R., Boulade, O., Foumond, J.J., Di Giorgio, A., Guttridge, P.,James, A., Kemp, M., Martignac, J., Spencer, A., Walton, D.,Blümchen, T., Bonoli, C., Bortoletto, F., Cerna, C., Corcione,L., Fabron, C., Jahnke, K., Ligori, S., Madrid, F., Martin, L.,Morgante, G., Pamplona, T., Prieto, E., Riva, M., Toledo, R.,Trifoglio, M., Zerbi, F., Abdalla, F., Douspis, M., Grenet, C.,Borgani, S., Bouwens, R., Courbin, F., Delouis, J.M., Dubath, P.,Fontana, A., Frailis, M., Grazian, A., Koppenhöfer, J.,Mansutti, O., Melchior, M., Mignoli, M., Mohr, J., Neissner, C.,Noddle, K., Poncet, M., Scodeggio, M., Serrano, S., Shane, N.,Starck, J.L., Surace, C., Taylor, A., Verdoes-Kleijn, G., Vuerli,C., Williams, O.R., Zacchei, A., Altieri, B., Escudero Sanz, I.,Kohley, R., Oosterbroek, T., Astier, P., Bacon, D., Bardelli, S.,Baugh, C., Bellagamba, F., Benoist, C., Bianchi, D., Biviano, A.,Branchini, E., Carbone, C., Cardone, V., Clements, D., Colombi, S.,Conselice, C., Cresci, G., Deacon, N., 
Dunlop, J., Fedeli, C.,Fontanot, F., Franzetti, P., Giocoli, C., Garcia-Bellido, J., Gow,J., Heavens, A., Hewett, P., Heymans, C., Holland, A., Huang, Z.,Ilbert, O., Joachimi, B., Jennins, E., Kerins, E., Kiessling, A.,Kirk, D., Kotak, R., Krause, O., Lahav, O., van Leeuwen, F.,Lesgourgues, J., Lombardi, M., Magliocchetti, M., Maguire, K.,Majerotto, E., Maoli, R., Marulli, F., Maurogordato, S., McCracken,H., McLure, R., Melchiorri, A., Merson, A., Moresco, M., Nonino,M., Norberg, P., Peacock, J., Pello, R., Penny, M., Pettorino, V.,Di Porto, C., Pozzetti, L., Quercellini, C., Radovich, M., Rassat,A., Roche, N., Ronayette, S., Rossetti, E., Sartoris, B.,Schneider, P., Semboloni, E., Serjeant, S., Simpson, F., Skordis,C., Smadja, G., Smartt, S., Spano, P., Spiro, S., Sullivan, M.,Tilquin, A., Trotta, R., Verde, L., Wang, Y., Williger, G., Zhao,G., Zoubian, J., & Zucca, E., 2011.Euclid Definition Study Report, arXiv e-prints, p.arXiv:1110.3193.
- Lemos et al. (2023) Lemos, P., Parker, L. H., Hahn, C., Ho, S., Eickenberg, M., Hou, J., Massara, E., Modi, C., Moradinezhad Dizgah, A., Régaldo-Saint Blancard, B., & Spergel, D., 2023. SimBIG: Field-level Simulation-based Inference of Large-scale Structure, in Machine Learning for Astrophysics, p. 18.
- Libeskind etal. (2018)Libeskind, N.I., van de Weygaert, R., Cautun, M., Falck, B., Tempel,E., Abel, T., Alpaslan, M., Aragón-Calvo, M.A., Forero-Romero,J.E., Gonzalez, R., Gottlöber, S., Hahn, O., Hellwing, W.A.,Hoffman, Y., Jones, B. J.T., Kitaura, F., Knebe, A., Manti, S.,Neyrinck, M., Nuza, S.E., Padilla, N., Platen, E., Ramachandra,N., Robotham, A., Saar, E., Shandarin, S., Steinmetz, M., Stoica,R.S., Sousbie, T., & Yepes, G., 2018.Tracing the cosmic web, MNRAS, 473(1), 1195–1217.
- Liu et al. (2022) Liu, W., Jiang, A., & Fang, W., 2022. Probing massive neutrinos with the Minkowski functionals of large-scale structure, J. Cosmology Astropart. Phys., 2022(7), 045.
- Liu et al. (2023) Liu, W., Jiang, A., & Fang, W., 2023. Probing massive neutrinos with the Minkowski functionals of the galaxy distribution, J. Cosmology Astropart. Phys., 2023(9), 037.
- Mainieri etal. (2024)Mainieri, V., Anderson, R.I., Brinchmann, J., Cimatti, A., Ellis,R.S., Hill, V., Kneib, J.-P., McLeod, A.F., Opitom, C., Roth,M.M., Sanchez-Saez, P., Smiljanic, R., Tolstoy, E., Bacon, R.,Randich, S., Adamo, A., Annibali, F., Arevalo, P., Audard, M.,Barsanti, S., Battaglia, G., Bayo Aran, A.M., Belfiore, F.,Bellazzini, M., Bellini, E., Beltran, M.T., Berni, L., Bianchi,S., Biazzo, K., Bisero, S., Bisogni, S., Bland-Hawthorn, J.,Blondin, S., Bodensteiner, J., Boffin, H. M.J., Bonito, R., Bono,G., Bouche, N.F., Bowman, D., Braga, V.F., Bragaglia, A.,Branchesi, M., Brucalassi, A., Bryant, J.J., Bryson, I., Busa, I.,Camera, S., Carbone, C., Casali, G., Casali, M., Casasola, V.,Castro, N., Catelan, M., Cavallo, L., Chiappini, C., Cioni, M.-R.,Colless, M., Colzi, L., Contarini, S., Couch, W., D’Ammando, F.,d’Assignies D., W., D’Orazi, V., da Silva, R., Dainotti, M.G.,Damiani, F., Danielski, C., De Cia, A., de Jong, R.S., Dhawan, S.,Dierickx, P., Driver, S.P., Dupletsa, U., Escoffier, S., Escorza,A., Fabrizio, M., Fiorentino, G., Fontana, A., Fontani, F., ForeroSanchez, D., Franois, P., Galindo-Guil, F.J., Gallazzi, A.R.,Galli, D., Garcia, M., Garcia-Rojas, J., Garilli, B., Grand, R.,Guarcello, M.G., Hazra, N., Helmi, A., Herrero, A., Iglesias, D.,Ilic, D., Irsic, V., Ivanov, V.D., Izzo, L., Jablonka, P.,Joachimi, B., Kakkad, D., Kamann, S., Koposov, S., Kordopatis, G.,Kovacevic, A.B., Kraljic, K., Kuncarayakti, H., Kwon, Y., LaForgia, F., Lahav, O., Laigle, C., Lazzarin, M., Leaman, R.,Leclercq, F., Lee, K.-G., Lee, D., Lehnert, M.D., Lira, P.,Loffredo, E., Lucatello, S., Magrini, L., Maguire, K., Mahler, G.,Zahra Majidi, F., Malavasi, N., Mannucci, F., Marconi, M., Martin,N., Marulli, F., Massari, D., Matsuno, T., Mattheee, J., McGee, S.,Merc, J., Merle, T., Miglio, A., Migliorini, A., Minchev, I.,Minniti, D., Miret-Roig, N., Monreal Ibero, A., Montano, F.,Montet, B.T., Moresco, M., Moretti, C., Moscardini, L., Moya, A.,Mueller, O., Nanayakkara, 
T., Nicholl, M., Nordlander, T., Onori,F., Padovani, M., Pala, A.F., Panda, S., Pandey-Pommier, M.,Pasquini, L., Pawlak, M., Pessi, P.J., Pisani, A., Popovic, L.C.,Prisinzano, L., Raddi, R., Rainer, M., Rebassa-Mansergas, A.,Richard, J., Rigault, M., Rocher, A., Romano, D., Rosati, P.,Sacco, G., Sanchez-Janssen, R., Sander, A. A.C., Sanders, J.L.,Sargent, M., Sarpa, E., Schimd, C., Schipani, P., Sefusatti, E.,Smith, G.P., Spina, L., Steinmetz, M., Tacchella, S.,Tautvaisiene, G., Theissen, C., Thomas, G., Ting, Y.-S.,Travouillon, T., Tresse, L., Trivedi, O., Tsantaki, M., Tsedrik,M., Urrutia, T., Valenti, E., Van der Swaelmen, M., Van Eck, S.,Verdiani, F., Verdier, A., Vergani, S.D., Verhamme, A., Vernet,J., Verza, G., Viel, M., Vielzeuf, P., Vietri, G., Vink, J.S.,Viscasillas Vazquez, C., Wang, H.-F., Weilbacher, P.M., Wendt, M.,Wright, N., Ye, Q., Yeche, C., Yu, J., Zafar, T., Zibetti, S.,Ziegler, B., & Zinchenko, I., 2024.The Wide-field Spectroscopic Telescope (WST) Science White Paper,arXiv e-prints, p. arXiv:2403.05398.
- Makinen et al. (2022) Makinen, T. L., Charnock, T., Lemos, P., Porqueres, N., Heavens, A. F., & Wandelt, B. D., 2022. The Cosmic Graph: Optimal Information Extraction from Large-Scale Structure using Catalogues, The Open Journal of Astrophysics, 5(1), 18.
- Mandelbrot (1982) Mandelbrot, B. B., 1982. The Fractal Geometry of Nature.
- Massara et al. (2023) Massara, E., Villaescusa-Navarro, F., Hahn, C., Abidi, M. M., Eickenberg, M., Ho, S., Lemos, P., Dizgah, A. M., & Blancard, B. R.-S., 2023. Cosmological Information in the Marked Power Spectrum of the Galaxy Field, ApJ, 951(1), 70.
- Moon et al. (2023) Moon, J., Rossi, G., & Yu, H., 2023. Signature of Massive Neutrinos from the Clustering of Critical Points. I. Density-threshold-based Analysis in Configuration Space, ApJS, 264(1), 26.
- Naidoo (2019) Naidoo, K., 2019. MiSTree: a Python package for constructing and analysing Minimum Spanning Trees, The Journal of Open Source Software, 4(42), 1721.
- Naidoo et al. (2020) Naidoo, K., Whiteway, L., Massara, E., Gualdi, D., Lahav, O., Viel, M., Gil-Marín, H., & Font-Ribera, A., 2020. Beyond two-point statistics: using the minimum spanning tree as a tool for cosmology, MNRAS, 491(2), 1709–1726.
- Naidoo et al. (2022) Naidoo, K., Massara, E., & Lahav, O., 2022. Cosmology and neutrino mass with the minimum spanning tree, MNRAS, 513(3), 3596–3609.
- Paillas et al. (2023) Paillas, E., Cuesta-Lazaro, C., Zarrouk, P., Cai, Y.-C., Percival, W. J., Nadathur, S., Pinon, M., de Mattia, A., & Beutler, F., 2023. Constraining $\nu\Lambda$CDM with density-split clustering, MNRAS, 522(1), 606–625.
- Peebles (1980) Peebles, P. J. E., 1980. The Large-Scale Structure of the Universe, Princeton University Press.
- Planck Collaboration etal. (2020)Planck Collaboration, Aghanim, N., Akrami, Y., Ashdown, M., Aumont,J., Baccigalupi, C., Ballardini, M., Banday, A.J., Barreiro, R.B.,Bartolo, N., Basak, S., Battye, R., Benabed, K., Bernard, J.P.,Bersanelli, M., Bielewicz, P., Bock, J.J., Bond, J.R., Borrill,J., Bouchet, F.R., Boulanger, F., Bucher, M., Burigana, C.,Butler, R.C., Calabrese, E., Cardoso, J.F., Carron, J.,Challinor, A., Chiang, H.C., Chluba, J., Colombo, L.P.L.,Combet, C., Contreras, D., Crill, B.P., Cuttaia, F., de Bernardis,P., de Zotti, G., Delabrouille, J., Delouis, J.M., Di Valentino, E.,Diego, J.M., Doré, O., Douspis, M., Ducout, A., Dupac, X.,Dusini, S., Efstathiou, G., Elsner, F., Enßlin, T.A., Eriksen,H.K., Fantaye, Y., Farhang, M., Fergusson, J., Fernandez-Cobos, R.,Finelli, F., Forastieri, F., Frailis, M., Fraisse, A.A.,Franceschi, E., Frolov, A., Galeotta, S., Galli, S., Ganga, K.,Génova-Santos, R.T., Gerbino, M., Ghosh, T., González-Nuevo,J., Górski, K.M., Gratton, S., Gruppuso, A., Gudmundsson, J.E.,Hamann, J., Handley, W., Hansen, F.K., Herranz, D., Hildebrandt,S.R., Hivon, E., Huang, Z., Jaffe, A.H., Jones, W.C., Karakci,A., Keihänen, E., Keskitalo, R., Kiiveri, K., Kim, J., Kisner,T.S., Knox, L., Krachmalnicoff, N., Kunz, M., Kurki-Suonio, H.,Lagache, G., Lamarre, J.M., Lasenby, A., Lattanzi, M., Lawrence,C.R., Le Jeune, M., Lemos, P., Lesgourgues, J., Levrier, F.,Lewis, A., Liguori, M., Lilje, P.B., Lilley, M., Lindholm, V.,López-Caniego, M., Lubin, P.M., Ma, Y.Z.,Macías-Pérez, J.F., Maggio, G., Maino, D., Mandolesi, N.,Mangilli, A., Marcos-Caballero, A., Maris, M., Martin, P.G.,Martinelli, M., Martínez-González, E., Matarrese, S., Mauri,N., McEwen, J.D., Meinhold, P.R., Melchiorri, A., Mennella, A.,Migliaccio, M., Millea, M., Mitra, S., Miville-Deschênes, M.A.,Molinari, D., Montier, L., Morgante, G., Moss, A., Natoli, P.,Nørgaard-Nielsen, H.U., Pagano, L., Paoletti, D., Partridge, B.,Patanchon, G., Peiris, H.V., Perrotta, F., Pettorino, 
V.,Piacentini, F., Polastri, L., Polenta, G., Puget, J.L., Rachen,J.P., Reinecke, M., Remazeilles, M., Renzi, A., Rocha, G., Rosset,C., Roudier, G., Rubiño-Martín, J.A., Ruiz-Granados, B.,Salvati, L., Sandri, M., Savelainen, M., Scott, D., Shellard,E.P.S., Sirignano, C., Sirri, G., Spencer, L.D., Sunyaev, R.,Suur-Uski, A.S., Tauber, J.A., Tavagnacco, D., Tenti, M.,Toffolatti, L., Tomasi, M., Trombetti, T., Valenziano, L.,Valiviita, J., Van Tent, B., Vibert, L., Vielva, P., Villa, F.,Vittorio, N., Wandelt, B.D., Wehus, I.K., White, M., White,S.D.M., Zacchei, A., & Zonca, A., 2020.Planck 2018 results. VI. Cosmological parameters, A&A,641, A6.
- Pranav et al. (2017) Pranav, P., Edelsbrunner, H., van de Weygaert, R., Vegter, G., Kerber, M., Jones, B. J. T., & Wintraecken, M., 2017. The topology of the cosmic web in terms of persistent Betti numbers, MNRAS, 465(4), 4281–4310.
- Smith et al. (2022) Smith, A., Cole, S., Grove, C., Norberg, P., & Zarrouk, P., 2022. A light-cone catalogue from the Millennium-XXL simulation: improved spatial interpolation and colour distributions for the DESI BGS, MNRAS, 516(3), 4529–4542.
- Tassev et al. (2013) Tassev, S., Zaldarriaga, M., & Eisenstein, D. J., 2013. Solving large scale structure in ten easy steps with COLA, J. Cosmology Astropart. Phys., 2013(6), 036.
- Valogiannis & Dvorkin (2022) Valogiannis, G. & Dvorkin, C., 2022. Going beyond the galaxy power spectrum: An analysis of BOSS data with wavelet scattering transforms, Phys. Rev. D, 106(10), 103509.
