Advancing precision cosmology with 21 cm intensity mapping

by

Kiyoshi Wesley Masui

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy

Graduate Department of Physics
University of Toronto

© Copyright 2013 by Kiyoshi Wesley Masui


Abstract

Advancing precision cosmology with 21 cm intensity mapping

Kiyoshi Wesley Masui

Doctor of Philosophy

Graduate Department of Physics

University of Toronto

2013

In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient method for mapping the large-scale structure of the Universe. In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternative explanations to dark energy for the Universe's accelerated expansion. We also consider how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravity waves, thus providing a window into cosmological inflation. Finally we show that scientifically interesting measurements could in principle be performed using intensity mapping in the near term, using existing telescopes in pilot surveys or prototypes for larger dedicated surveys.

Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude, and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.


Dedication

For Kaito’s generation,

that you may reach a better understanding of nature.


Acknowledgements

First and foremost, I want to thank my thesis advisor Ue-Li Pen. Early on in the program I was told that no single decision I made in my career would be as important as the match between student and advisor. I think we managed to hit the sweet spot between me having the freedom to pursue creative research and you pushing me to accomplish as much as possible. I have learnt so much from you.

I would also like to acknowledge the efforts of all my collaborators, without whom this thesis would not have been possible. A special thank you to the two post-docs that did all the work: Eric Switzer and Pat McDonald. I am also indebted to the many faculty, post-docs, staff, and grad students at CITA for the countless bits of help, tidbits of advice, and for allowing me to bounce ideas off of you relentlessly. This is especially true of Richard Shaw, as well as my office mates, who have contributed innumerable snippets of code.

Beyond the professional, I would like to thank the many friends and family who are responsible for me growing to love Toronto. Life here has been wonderful because of you.

Thank you to my parents and brother for making me who I am.

Thank you Maggie, for being my partner through all of this.

I can't wait for what adventures may come.


Contents

1 Introduction
  1.1 Background
    1.1.1 Cosmology and large-scale structure
    1.1.2 Redshift surveys using the 21 cm line
  1.2 Formalism
    1.2.1 The background expansion
    1.2.2 Perturbations
  1.3 Overview
    1.3.1 Outline
    1.3.2 Summary of contributions

I The potential of 21 cm cosmology

2 Constraining modified gravity
  2.1 Summary
  2.2 Introduction
  2.3 Modified Gravity Models
    2.3.1 f(R) Models
    2.3.2 DGP Braneworld
  2.4 Observational Signatures
    2.4.1 Baryonic acoustic oscillation expansion history test
    2.4.2 Weak Lensing
    2.4.3 External Priors from Planck
  2.5 Results
  2.6 Discussion

3 Detecting primordial gravity waves
  3.1 Summary
  3.2 Introduction
  3.3 Mechanism
  3.4 Tests of inflation
  3.5 Statistical detection in LSS
  3.6 Discussion
  3.7 Addendum

4 Forecasts for near term experiments
  4.1 Summary
  4.2 Introduction
  4.3 Redshift Space Distortions
  4.4 Baryon Acoustic Oscillations
  4.5 Forecasts
  4.6 Discussion

II Pioneering 21 cm cosmology

5 Data analysis pipeline
  5.1 Introduction
  5.2 Time ordered data
    5.2.1 Pipeline design
    5.2.2 Radio frequency interference
    5.2.3 Calibration
  5.3 Map-making
    5.3.1 Formalism
    5.3.2 Noise model and estimation
    5.3.3 An efficient time domain map-maker
  5.4 Conclusions

6 21 cm cross-correlation with an optical galaxy survey
  6.1 Summary
  6.2 Introduction
  6.3 Observations
  6.4 Analysis
    6.4.1 From data to maps
    6.4.2 From maps to power spectra
  6.5 Results and discussion

7 21 cm auto-correlation
  7.1 Summary
  7.2 Introduction
  7.3 Observations and Analysis
    7.3.1 Foreground Cleaning
    7.3.2 Instrumental Systematics
    7.3.3 Power Spectrum Estimation
  7.4 Results
  7.5 Discussion and Conclusions

8 Conclusions and outlook
  8.1 Conclusions
  8.2 Future work

Bibliography


List of Tables

2.1 Projected constraints on f(R) models for various combinations of observational techniques, for a 200 m telescope. Constraints are the 95% confidence level upper limits and include forecasts for Planck. The nonlinear results (column marked NL WL) are for the HS model with n = 1. Results that make use of weak lensing with constraints above 10⁻³ are only order-of-magnitude accurate. The linear regime is taken to be ℓ < 140, with the nonlinear constraints extending up to ℓ = 600.


List of Figures

1.1 NASA/WMAP Science Team depiction of the evolution of the Universe in the Λ-CDM model. The creation of the cosmic microwave background is shown, as well as the era of structure formation. Inflation and the accelerated expansion are depicted on the left and right sides of the figure respectively.

1.2 From Springel et al. [2005]. Large-scale structure at redshift z = 0, as seen in the Millennium Simulation. The colour map represents density, with the brightest colours representing the densest regions. The bright spot near the middle of the image is a galaxy supercluster, containing of order 10 000 galaxies.

1.3 Large-scale structure in the VIPERS survey [Guzzo et al., 2013]. This figure contains roughly half of the 55 000 galaxies in the total survey. The 3D position of each galaxy is represented by a black dot. The figure is then collapsed along one of the angular dimensions (which has a thickness of ≈ 1). Large-scale structure is clearly visible, especially at z ≈ 0.7 where the mean galaxy density is highest.

1.4 Proper time t and conformal time η as a function of redshift z. The magnitude of the proper time can be interpreted as the distance travelled by a photon observed today and emitted by a source at redshift z, while the magnitude of η is the current/comoving distance to that source. These differ because the source recedes with the Hubble flow as the photon is in transit. t and η are given in terms of the Hubble time 1/H0 = 14.6 Gyr, or alternately the Hubble distance c/H0 = 3.00 Gpc/h, where h = H0/(100 km/s/Mpc) = 0.671.

2.1 The weak lensing convergence power spectra for ΛCDM and the HS f(R) model with n = 1 and f_R0 = 10⁻⁴. The galaxy distribution function is flat between z = 1 and z = 2.5.


2.2 Projected constraints on the HS f(R) model with n = 1 using several combinations of observational techniques, for a 200 m telescope. All curves include forecasts for Planck. Allowed parameter values are shown in the f_R0–h plane at the 68.3% and 95.4% confidence levels. Results are not shown for "WL", which were calculated much less accurately (see text).

2.3 Same as Figure 2.2 but for a 100 m cylindrical telescope.

2.4 Ratio of the coordinate d_A(z) (top) and the Hubble parameter H(z) (bottom) as predicted by the best fit DGP model to the fiducial model. Error bars are from 21 cm BAO predictions. The fit includes BAO data available from the 200 m telescope and CMB priors on θ_s and ω_m.

2.5 Weak lensing spectra for DGP and a smooth dark energy model with the same expansion history. DGP parameters are h = 0.665, ω_m = 0.116, ω_k = 0 and ω_rc = 0.06. Error bars represent the expected accuracy of the 200 m telescope.

3.1 Primordial tensor power spectrum obeying the consistency relation for r = 0.1. The solid line is the tensor power spectrum. Error bars represent the reconstruction uncertainty on the binned power spectrum for a noiseless experiment, surveying 200 (Gpc/h)³ and resolving scalar modes down to k_max = 168 h/Mpc. The dashed, nearly vertical, line is the reconstruction noise power. The non-zero slope of the solid line is the deviation from scale-free.

4.1 Baryon acoustic oscillations averaged over all directions. To show the BAO we plot the ratio of the full matter power spectrum to the wiggle-free power spectrum of Eisenstein and Hu [1998]. The error bars represent projections of the sensitivity possible with 4000 hours observing time on GBT at 0.54 < z < 1.09.

4.2 Ability of GBT to measure the BAO and redshift space distortions as a function of survey area at fixed observing time. The presented survey is between z = 0.54 and z = 1.09 and the observing time is 1440 hours. A factor of 10 has been removed from the A_w curve.

4.3 Roughly optimized survey area as a function of telescope time on GBT. The redshift range is between z = 0.54 and z = 1.09.


4.4 Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on the Green Bank Telescope (GBT). Frequency bins are approximately 200 MHz wide and correspond to available GBT receivers. Uncertainties on D should not be trusted unless the uncertainty on A_w is less than 50% (see text).

4.5 Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on a prototype cylindrical telescope. Frequency bins are 200 MHz wide, corresponding to the capacity of the correlators which will likely be available. These results also apply to the aperture telescope but with the observing time reduced by a factor of 14. Uncertainties on D should not be trusted unless the uncertainty on A_w is less than 50% (see text). Observing time does not account for lost time due to foreground obstruction.

5.1 Data before (left) and after (right) RFI flagging. The colour scale represents perturbations in the power, P/⟨P⟩_t − 1. The frequency axis has been rebinned from 4096 bins to 256 bins after flagging, which fills in many of the gaps in the data left by the flagging. Any remaining gaps are assigned a value of 0 for plotting.

5.2 Noise power spectrum, averaged over all spectral channels (δ_νν′ P_νν′ω/c_ν), as measured in the GBT 800 MHz receiver. Units of the vertical axis are normalized such that pure thermal noise would be a horizontal line at unity. Individual time samples are 0.131 s long and spectral bins are 3.12 MHz wide. The telescope is pointing at the north celestial pole to minimize changes in the sky temperature. Descending the various coloured lines corresponds to removing additional noise eigenmodes, V_νq, from the noise power spectrum. It is seen that after removing 7 of the 64 possible modes the noise is significantly reduced and is approaching the thermal value on all time scales. The modes removed from each subsequent line are shown in Figure 5.3.

5.3 The modes V_νq removed from the noise power spectra to produce the curves in Figure 5.2. Each mode is offset vertically for clarity, with mode number increasing from bottom to top. The nth mode in this figure is the dominant remaining mode in the nth curve in Figure 5.2.


6.1 Maps of the GBT 15 hr field at approximately the band-center. The purple circle is the FWHM of the GBT beam, and the color range saturates in some places in each map. Top: the raw map as produced by the map-maker. It is dominated by synchrotron emission from both extragalactic point sources and smoother emission from the galaxy. Bottom: the raw map with 20 foreground modes removed per line of sight relative to 256 spectral bins, as described in Sec. 6.4.2. The map edges have visibly higher noise or missing data due to the sparsity of scanning coverage. The cleaned map is dominated by thermal noise, and we have convolved by GBT's beam shape to bring out the noise on relevant scales.

6.2 Cross-power between the 15 hr and 1 hr GBT fields and WiggleZ. Negative points are shown with reversed sign and a thin line. The solid line is the mean of simulations based on the empirical-NL model of Blake et al. [2011] processed by the same pipeline.

7.1 Temperature scales in our 21 cm intensity mapping survey. The top curve is the power spectrum of the input 15 hr field with no cleaning applied (the 1 hr field is similar). Throughout, the 15 hr field results are green and the 1 hr field results are blue. The dotted and dash-dotted lines show thermal noise in the maps. The power spectra avoid noise bias by crossing two maps made with separate datasets. Nevertheless, thermal noise limits the fidelity with which the foreground modes can be estimated and removed. The points below show the power spectrum of the 15 hr and 1 hr fields after the foreground cleaning described in Sec. 7.3.1. Negative values are shown with thin lines and hollow markers. Any residual foregrounds will additively bias the auto-power. The red dashed line shows the 21 cm signal expected from the amplitude of the cross-power with the WiggleZ survey (for r = 1) and based on simulations processed by the same pipeline.


7.2 Comparison with the thermal noise limit. The dark and light shaded regions are the 68% and 95% confidence intervals of the measured 21 cm fluctuation power. The dashed line shows the expected 21 cm signal implied by the WiggleZ cross-correlation if r = 1. The solid line represents the best upper 95% confidence level we could achieve given our error bars, in the absence of foreground contamination. Note that the auto-correlation measurements, which constrain the signal from above, are uncorrelated between k bins, while a single global fit to the cross-power (in Masui et al. [2013]) is used to constrain the signal from below. Confidence intervals do not include the systematic calibration uncertainty, which is 18% in this space.

7.3 The posterior distribution for the parameter Ω_HI b_HI coming from the WiggleZ cross-power spectrum, the 15 hr field and 1 hr field auto-powers, as well as the joint likelihood from all three datasets. The individual distributions from the cross-power and auto-powers are dependent on the prior on Ω_HI b_HI, while the combined distribution is essentially insensitive. The distributions do not include the systematic calibration uncertainty of 9%.


Chapter 1

Introduction

The question of the origin of the Universe is arguably as ancient as human civilization itself. It is only within the last hundred years that we have begun to form an accurate understanding of the Universe on large scales, and only within the last twenty years that we have been able to make precise statements about the Universe.

It is thought that using the 21 cm line from neutral hydrogen to map the large-scale structure of the Universe is a promising technique for making precise cosmological measurements [Chang et al., 2008]. Such measurements would allow for the detection of subtle effects, ultimately leading to a better understanding of the Universe and continuing humanity's push to answer fundamental questions about its origin. This thesis represents a significant contribution to the pioneering of 21 cm cosmology, both theoretically and observationally.

1.1 Background

Here we give a brief overview of the field of cosmology, how the field has evolved, and why the 21 cm line is anticipated to be a powerful probe of the Universe and an important part of cosmology's future.

Some of the text in this section has been adapted from research proposals and other similar unpublished documents.

1.1.1 Cosmology and large-scale structure

Observationally, physical cosmology has seen tremendous progress in the past two decades. It has matured from an imprecise field, in which a dearth of observations left the Universe poorly understood, to one in which a richness of data is allowing for precise measurements.


Figure 1.1: NASA/WMAP Science Team depiction of the evolution of the Universe in the Λ-CDM model. The creation of the cosmic microwave background is shown, as well as the era of structure formation. Inflation and the accelerated expansion are depicted on the left and right sides of the figure respectively.

We now have a standard cosmological model, the Λ-cold dark matter model (Λ-CDM), depicted in Figure 1.1, which explains all observations of the cosmic microwave background (CMB), large-scale structure (LSS), and the Universe's expansion rate. Nevertheless, several fundamental mysteries remain unexplained within Λ-CDM. One of the most compelling is the observed accelerated expansion of the Universe, the so-called dark energy problem, initially discovered through observations of distant Type Ia supernovae [Riess et al., 1998, Perlmutter et al., 1999]. Within Λ-CDM this is parameterized by a cosmological constant; however, its exact nature remains a mystery. Equally intriguing is the question of what set the initial conditions for the Universe's evolution. The leading theory is cosmological inflation [Guth, 1981], but again the physical nature of inflation remains a mystery.

The rapid advancement of cosmology has been driven by observations of the CMB, led by the space observatories: the Cosmic Background Explorer (COBE) [Mather et al., 1990, Smoot et al., 1992], followed by the Wilkinson Microwave Anisotropy Probe (WMAP) [Bennett et al., 2012, Hinshaw et al., 2012], and finally the ongoing Planck mission [Planck Collaboration et al., 2013a]. While this program has been tremendously successful, the bulk of the information that can be extracted from the CMB has already been retrieved. Although information remains in the CMB polarization signal, as well as in various secondary effects such as weak lensing, new probes of the Universe will become increasingly important.

Figure 1.2: From Springel et al. [2005]. Large-scale structure at redshift z = 0, as seen in the Millennium Simulation. The colour map represents density, with the brightest colours representing the densest regions. The bright spot near the middle of the image is a galaxy supercluster, containing of order 10 000 galaxies.

The field of large-scale structure studies how the Universe's matter is distributed in three dimensional space. Figure 1.2 shows a map of the LSS as simulated in a large n-body simulation. LSS is a potentially powerful probe of the Universe because it is a three dimensional field (in contrast to the CMB, which is two dimensional), yielding far more independent observables. There are of order 10¹⁸ observable LSS modes in the Universe [Pen, 2004], while the primordial CMB has at most of order 10⁸ observable modes.
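The mode-counting comparison above can be sketched numerically. The survey volume, k_max, and ℓ_max below are illustrative assumptions chosen only to show the scaling, not values taken from this thesis.

```python
import math

# Rough mode counting (illustrative numbers, not the thesis's exact inputs).
# Number of independent Fourier modes in a survey volume V up to k_max:
#   N_lss ~ V * (4/3) * pi * k_max^3 / (2*pi)^3
V = 10_000.0**3         # assumed survey volume, (10 Gpc/h)^3 in (Mpc/h)^3
k_max = 10.0            # assumed smallest resolved scale, h/Mpc
n_lss = V * (4.0 / 3.0) * math.pi * k_max**3 / (2.0 * math.pi) ** 3

# A 2D CMB map with multipoles up to ell_max carries roughly ell_max^2 modes.
ell_max = 3000
n_cmb = float(ell_max**2)

print(f"LSS modes ~ {n_lss:.1e}, primordial CMB modes ~ {n_cmb:.1e}")
```

Even with these conservative assumptions the LSS carries many more modes than the CMB; pushing k_max toward fully nonlinear scales and using the whole observable volume moves the count toward the 10¹⁸ quoted in the text.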

The large-scale structure is sensitive to a large variety of physical processes throughout the evolution of the Universe, and can thus be used to learn about these processes. The LSS grew, through gravitational instability, from tiny perturbations that existed in the first instants of the Universe's evolution. The structure in the late Universe maintains information about the precise statistics of these initial perturbations. If inflation is responsible for generating these perturbations, then the LSS can be used to study it.

Figure 1.3: Large-scale structure in the VIPERS survey [Guzzo et al., 2013]. This figure contains roughly half of the 55 000 galaxies in the total survey. The 3D position of each galaxy is represented by a black dot. The figure is then collapsed along one of the angular dimensions (which has a thickness of ≈ 1). Large-scale structure is clearly visible, especially at z ≈ 0.7 where the mean galaxy density is highest.

To study the expansion history of the Universe, and thus gain insight into the anomalous acceleration, the baryon acoustic oscillations (BAO) can be used. The BAO result from sound waves that propagated in the early Universe. They imprint a characteristic scale in the statistics of the LSS and thus act as a standard ruler on the sky, which expands with the expansion of the Universe. Thus a precise measurement of this scale as a function of redshift can be used to measure the rate of expansion.
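As a concrete sketch of the standard-ruler idea, the angle subtended by the BAO scale can be computed from the expansion history. The cosmological parameters and sound horizon below are typical assumed values for a flat ΛCDM model, not fits from this work.

```python
import math

# Flat Lambda-CDM expansion history (assumed illustrative parameters).
H0 = 67.1              # km/s/Mpc
C = 299792.458         # speed of light, km/s
OMEGA_M, OMEGA_L = 0.32, 0.68
R_S = 147.0            # comoving sound horizon at the drag epoch, Mpc (assumed)

def hubble(z):
    """H(z) for a flat Lambda-CDM model, in km/s/Mpc."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z, steps=5000):
    """Trapezoidal integral of c/H(z') from 0 to z, in Mpc."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        total += 0.5 * (C / hubble(i * dz) + C / hubble((i + 1) * dz)) * dz
    return total

def bao_angle_deg(z):
    """Angle subtended by the BAO scale at redshift z, in degrees."""
    return math.degrees(R_S / comoving_distance(z))

print(f"BAO angle at z = 0.8: {bao_angle_deg(0.8):.2f} deg")
```

Measuring this angle (and the corresponding interval along the line of sight) as a function of redshift pins down the distance-redshift relation and H(z), and hence the acceleration history.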

1.1.2 Redshift surveys using the 21 cm line

Traditionally, large-scale structure has been measured using galaxy surveys. These involve painstakingly measuring the 3D location of many galaxies. Redshift, measured by performing spectroscopy on each galaxy to identify the shift in spectral lines, is used as a proxy for radial distance, giving these surveys their colloquial name of redshift surveys. The galaxies are then catalogued and accumulated into three dimensional maps that reveal large-scale structure. This is illustrated in Figure 1.3. Because the individual galaxies are much smaller than the structures being studied, they are hard to detect and a very sensitive telescope must be used. Over two million galaxies have been surveyed in this way [Eisenstein et al., 2011], but despite this massive effort, only a small fraction of the observable Universe has been mapped to date.

Recently it has been proposed that the large-scale structure could be mapped much more efficiently using the 21 cm line from the spin flip transition in neutral hydrogen [Barnes et al., 2001, Loeb and Zaldarriaga, 2004], which lies in the radio part of the electromagnetic spectrum. This has several advantages over optical redshift surveys. The 21 cm line is by far the brightest line in this part of the spectrum, leaving little chance for line confusion. As such, each observing frequency corresponds unambiguously to a redshift. This eliminates the need to detect individual galaxies at high significance. The 21 cm brightness from each region of space is taken to be a proxy for the total amount of hydrogen in that region, which in turn is assumed to be a biased tracer of the total density. This procedure is referred to as 21 cm intensity mapping [Chang et al., 2008, Loeb and Wyithe, 2008, Ansari et al., 2012a, Mao et al., 2008, Seo et al., 2010, Mao, 2012]. The 21 cm signal is, in principle, measurable at all redshifts up to z ∼ 50, potentially providing a probe of the early Universe. This is in contrast to optical surveys, which are difficult in certain redshift ranges and are impossible above a redshift of z ∼ 6 due to a lack of sources.
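The one-to-one mapping between observing frequency and redshift follows directly from the rest frequency of the 21 cm line, and can be sketched as:

```python
# Redshifting of the 21 cm line: each observed frequency corresponds to
# exactly one redshift, with no possibility of line confusion.
NU_21 = 1420.405751768  # MHz, rest frequency of the hydrogen hyperfine line

def redshift_of(nu_obs_mhz):
    """Redshift of 21 cm emission observed at nu_obs_mhz (MHz)."""
    return NU_21 / nu_obs_mhz - 1.0

def observed_freq(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return NU_21 / (1.0 + z)

# For example, emission from z ~ 0.8 (the regime of the GBT observations
# described in Part II) arrives near 800 MHz:
print(f"z = 0.8 -> {observed_freq(0.8):.1f} MHz")
```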

The use of the 21 cm line for cosmology is not without disadvantages. The most

concerning is the presence of bright foregrounds. In particular, synchrotron emission

from both galactic and extra-galactic sources is of order 10⁴ times brighter than

the signal from neutral hydrogen. However, all known foreground contaminants are

expected to be spectrally smooth. The 21 cm signal on the other hand is a spectral line

and even when combining the line emission over all redshifts, the signal from hydrogen

is modulated by the large-scale structure. It is thus expected that the foregrounds can

be separated from the signal, or at the very least that foregrounds will contaminate only

a small number of the signal modes along the line of sight. In practice, instrumental

effects, such as imperfect spectral calibration, frequency dependence of the instrumental

beam, and contamination of the unpolarized channel by polarized emission, all make

foreground subtraction more difficult than might be initially expected. As such 21 cm

redshift surveys are very challenging.

The first detection of the 21 cm signal from large-scale structure above z = 0.1 was

presented in Chang et al. [2010]. There, data from the Green Bank Telescope (GBT) in

West Virginia was cross-correlated against a traditional optical galaxy survey at a redshift

of z ≈ 0.8. This confirmed the existence of the 21 cm signal from large-scale structure,

but did not require the perfect removal of foregrounds, since residual contamination does

not correlate with the optical survey.

Intensity mapping is both competitive with, and complementary to, traditional galaxy

surveys. Technological advances in radio frequency communications and digital signal

processing now make it possible to design intensity mapping systems that are capable of

surveying large volumes of the Universe at relatively modest cost. This makes intensity


mapping especially attractive for measuring large-scale features in the Universe, such as

the BAO. In cases where intensity mapping surveys overlap with galaxy surveys, there are

several synergies that would benefit both surveys. The most basic of these is a cross-check

of results, where experiments with independent systematic errors validate one another.

Going beyond this, the fact that galaxies and neutral hydrogen are different tracers of

the same cosmic structure allows for the disentanglement of key uncertainties in both

surveys, greatly improving the precision of some measurements [McDonald and Seljak,

2009].

1.2 Formalism

Here we review some of the basic cosmological theory and concepts that will be required

for understanding the following chapters. This formalism is covered in more detail in

Hartle [2003], Dodelson [2003] and Liddle and Lyth [2000].

1.2.1 The background expansion

To zeroth order, the Universe is assumed to be homogeneous and isotropic, which on

very large scales agrees well with observations. The metric for a homogeneous and

isotropic Universe is the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, which

can be written as

ds² = −dt² + a(t)²[dr² + Sk(r)²(dθ² + sin²θ dφ²)], (1.1)

with

Sk(r) = sin(√k r)/√k      if k > 0,
Sk(r) = r                 if k = 0,
Sk(r) = sinh(√−k r)/√−k   if k < 0. (1.2)

Here, we use units where the speed of light, c, is unity. By assumption of homogeneity

and isotropy, all matter-energy components must be on average mutually at rest in these

coordinates. The spatial coordinates in the above metric are thus dubbed comoving

coordinates (with distances in these coordinates referred to as comoving distance), and

t is the proper time of comoving observers. k > 0, k = 0, and k < 0 correspond to a

closed, flat, and open Universe respectively. We are free to rescale the coordinates such

that a = 1 at the present epoch. Likewise we choose t = 0 at the present epoch and r = 0

at approximately earth’s location. With these definitions, k may be interpreted as the


Gaussian curvature of the Universe at t = 0. All observations are currently consistent

with the Universe being flat, and as such we will henceforth take k = 0.

It is common to use the alternate time coordinate η, related to t by

a dη = dt, (1.3)

and hence

η = ∫₀ᵗ dt′/a(t′). (1.4)

η is referred to as the conformal or comoving time. It has an especially simple inter-

pretation, in that a light ray arriving at earth today, and emitted at time η, originated

at a distance r = −cη, where the factor of c has been included for clarity. Since all

observations of the Universe involve light arriving at the earth at the present epoch, this

correspondence between time and distance is especially convenient, and η is often used

to represent either in an observational context (with factors of c omitted).

In our Universe, the scale factor, a(t), has been increasing monotonically with time,

with a = 0 corresponding to the Big Bang and a = 1 corresponding to the present day.

This gives yet another way to refer to a time in the Universe’s evolution. More commonly

used is the redshift, z ≡ 1/a − 1. This is an observationally convenient quantity since,

for light emitted by a distant object receding with the Hubble flow, the wavelength shift

is ∆λ = zλ0 where λ0 is the rest frame wavelength of the light. For z ≪ 1 the recessional

velocity is v = cz.
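These conversions are simple enough to check numerically. The sketch below is an illustrative aside (not part of the thesis text); its only physical input is the 21 cm rest wavelength, and it encodes z = 1/a − 1 together with the wavelength shift ∆λ = zλ0:

```python
# Illustrative sketch of the scale factor / redshift / wavelength relations
# quoted in the text: z = 1/a - 1 and lambda_obs = (1 + z) * lambda_rest.

def redshift(a):
    """Redshift of light emitted when the scale factor was a (a = 1 today)."""
    return 1.0 / a - 1.0

def observed_wavelength(lambda_rest, z):
    """The shift is dlambda = z * lambda_rest, so lambda_obs = (1 + z) * lambda_rest."""
    return (1.0 + z) * lambda_rest

# Light emitted when the Universe was half its present size arrives at z = 1,
# so the 21 cm line would be observed at twice its rest wavelength:
z = redshift(0.5)
print(z)                             # 1.0
print(observed_wavelength(21.0, z))  # 42.0 (cm)
```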

An important quantity is the Hubble parameter, defined as H ≡ a′/a = ȧ/a², where the prime represents the derivative with respect to proper time and the over-dot represents the derivative with respect to the conformal time (a′ ≡ da/dt, ȧ ≡ da/dη). A time scale and a length scale can then be defined as 1/H and c/H respectively, corresponding to the Hubble time (also called the expansion time) and the Hubble distance. The Hubble distance is the proper separation beyond which two points recede at a speed greater than the speed of light. Points separated by more than this distance at a given time are said to not be causally connected at that time¹. A more relevant scale might be the comoving equivalent: c/(aH) = c(ȧ/a)⁻¹.

Relating these time measures requires the Einstein equations, which for the FLRW

¹This is not intended to be a precise statement; it simply sets a length scale for causality.


metric yield the Friedmann equations:

(a′/a)² = (8πG/3) ρ̄tot, (1.5)

a″/a = −(4πG/3)(ρ̄tot + 3p̄tot), (1.6)

where ρtot is the total density and ptot is the total pressure. The over-bar, ¯, indicates that

the quantity is spatially averaged. The total density and pressure get contributions from

matter (m), radiation (r), and dark energy (Λ, assumed to be a cosmological constant).

Differentiating the first equation and substituting it into the second gives

ρ̄′tot = −3H(ρ̄tot + p̄tot). (1.7)

This is the energy conservation equation for an FLRW Universe. If we assume that

energy is not interchangeable between the different constituents, then this equation can

be used to solve for the a dependence of the densities. We also require the equation of

state for each constituent. The equations of state for each component, along with the

inferred evolutions of the densities are

p̄r = ρ̄r/3,  ρ̄r = ρ̄r0/a⁴; (1.8)

p̄m = 0,  ρ̄m = ρ̄m0/a³; (1.9)

p̄Λ = −ρ̄Λ,  ρ̄Λ = ρ̄Λ0. (1.10)
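These scalings can be verified directly: the sketch below (an illustrative aside using sympy, not part of the original derivation) checks symbolically that ρ̄ ∝ a^(−3(1+w)) with p̄ = wρ̄ satisfies Equation (1.7) for an arbitrary expansion history a(t); the choices w = 1/3, 0, −1 reproduce Equations (1.8)-(1.10).

```python
# Sketch: symbolic check that rho_bar ~ a^(-3(1+w)) solves the conservation
# equation (1.7), rho_bar' = -3 H (rho_bar + p_bar), with p_bar = w rho_bar,
# for an arbitrary expansion history a(t).
import sympy as sp

t, w = sp.symbols("t w")
rho0 = sp.symbols("rho0", positive=True)
a = sp.Function("a", positive=True)(t)

H = sp.diff(a, t) / a             # Hubble parameter a'/a
rho = rho0 * a ** (-3 * (1 + w))  # candidate solution
p = w * rho                       # equation of state

residual = sp.diff(rho, t) + 3 * H * (rho + p)
print(sp.simplify(residual))  # 0: the scaling solves (1.7) for any a(t)
```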

To make this dependence on a explicit, we define the dimensionless present day density

constants as

ΩX ≡ [8πG/(3H₀²)] ρ̄X0, (1.11)

where we have defined the Hubble constant, H0 ≡ H(t = 0). Evaluating the first

Friedmann equation at t = 0 leads to the constraint that Ωr + Ωm + ΩΛ = 1, resulting in

the interpretation that ΩX is the present day energy fraction of component X.

The first Friedmann equation can then be written as

(a′/a)² = H₀²(Ωr/a⁴ + Ωm/a³ + ΩΛ), (1.12)

or

a′/a = H₀ √(Ωr/a⁴ + Ωm/a³ + ΩΛ). (1.13)

Given the parameters H0, Ωr, Ωm, and ΩΛ, this equation can be integrated numerically


Figure 1.4: Proper time t and conformal time η as a function of redshift z. The magnitude of the proper time can be interpreted as the distance travelled by a photon observed today and emitted by a source at redshift z, while the magnitude of η is the current/comoving distance to that source. These differ because the source recedes with the Hubble flow as the photon is in transit. t and η are given in terms of the Hubble time 1/H0 = 14.6 Gyr or alternately the Hubble distance c/H0 = 3.00 Gpc/h, where h = H0/(100 km/s/Mpc) = 0.671.

to obtain the full expansion history. Current best fit values for the parameters are H0 =

67.1 km/s/Mpc, Ωr = 8.24 × 10⁻⁵, Ωm = 0.318, and ΩΛ = 0.682 [Planck Collaboration

et al., 2013d], with uncertainties at the percent level. Figure 1.4 shows the expansion

history calculated from the above equation and these parameters.
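The integration described here can be sketched in a few lines. This is an illustrative aside (not from the thesis); the parameter values are those quoted in the text, the integrals are done in u = ln a for numerical stability, and a small lower cutoff stands in for a = 0.

```python
# Sketch: integrate the Friedmann equation (1.13) for the expansion history,
# using the best-fit parameters quoted in the text.  Times are in Hubble
# units (1/H0).
import numpy as np
from scipy.integrate import quad

Omega_r, Omega_m, Omega_L = 8.24e-5, 0.318, 0.682

def E(a):
    """Dimensionless expansion rate H(a)/H0 from Equation (1.13)."""
    return np.sqrt(Omega_r / a**4 + Omega_m / a**3 + Omega_L)

u_min = np.log(1e-12)  # cutoff standing in for a = 0

# Proper time: dt = da/(aH), so H0 t0 = int du / E(a) with a = e^u.
t0 = quad(lambda u: 1.0 / E(np.exp(u)), u_min, 0.0)[0]

# Conformal time: a deta = dt, so H0 eta0 = int du / (a E(a)).
eta0 = quad(lambda u: np.exp(-u) / E(np.exp(u)), u_min, 0.0)[0]

print(t0)    # ~0.95: the age of the Universe is about 0.95 Hubble times
print(eta0)  # ~3.2: the comoving horizon today, in Hubble distances
```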

1.2.2 Perturbations

While on large scales, above ∼ 100 Mpc, the Universe is homogeneous and isotropic to a

good approximation, on smaller scales there are perturbations to the mean background

expansion. The evolution of these perturbations is normally calculated using linear per-

turbation theory, although below ∼ 10 Mpc, where the perturbations approach order

unity, simulations are required to treat the full non-linear evolution. While the assump-

tions of homogeneity and isotropy are broken by the perturbations, the Universe is still


assumed to be statistically homogeneous and isotropic. That is, each location in the Uni-

verse is assumed to be statistically equivalent, even if the exact values of any fields differ

from place to place. This assumption is often referred to as the cosmological principle.

Basics

In the field of large-scale structure, the primary observable is the matter density field,

or some proxy such as the galaxy number density or 21 cm brightness. We define the

density perturbations as

δ(~x, t) = [ρ(~x, t) − ρ̄(t)] / ρ̄(t). (1.14)

Cosmology does not make predictions about the precise density at a given location. It

instead predicts the statistical correlations between densities. The most important quan-

tity predicted by theory is the 2-point function, which can be written as the correlation

function:

〈δ(~x, t)δ(~x+ ~r, t)〉 = ξ(r), (1.15)

which is independent of ~x by assumption of statistical homogeneity and independent of

r̂ (the direction of ~r) by assumption of statistical isotropy.

The perturbations are much more easily described in the spatial Fourier domain,

defined by

δ(~k) = F[δ(~x)] = ∫ δ(~x) e^(−i~k·~x) d³~x. (1.16)

This has the advantage that all Fourier modes evolve independently. This results from

our assumption of statistical homogeneity and is true only at linear order in perturbation

theory.

In the Fourier domain, the two-point statistic is

〈δ(~k)δ∗(~k′)〉 = (2π)³ δ⁽³⁾(~k − ~k′) P(k), (1.17)

where δ⁽³⁾(~k) is the 3D Dirac delta function and is not to be confused with δ(~k). P(k) is

the power spectrum and is related to the correlation function by P(~k) = F[ξ(~r)] (where

the vector signs over k and r remind us that, while P and ξ only depend on the magnitude

of their arguments, the Fourier transform must be performed in 3D).

The initial power spectrum of perturbations is set by physical processes in the first

instants of the Universe’s evolution, the dominant theory for which is cosmological in-

flation. Inflation predicts a nearly scale-invariant primordial power spectrum, where

Pp(k) ∼ k⁻³. This is in good agreement with observations.
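The Fourier-pair relation between the correlation function and the power spectrum can be made concrete with a small, one-dimensional periodic toy (an illustrative aside; Equations 1.15 to 1.17 are the 3D continuum version of this discrete statement):

```python
# Toy sketch: on a periodic 1D grid, the correlation function computed by
# directly averaging delta(x) * delta(x + r) equals the inverse FFT of the
# power |delta(k)|^2 / n.  This is the discrete analogue of P = F[xi].
import numpy as np

rng = np.random.default_rng(0)
n = 64
delta = rng.standard_normal(n)
delta -= delta.mean()  # a zero-mean overdensity field

# Correlation function by direct averaging over all positions x:
xi_direct = np.array([np.mean(delta * np.roll(delta, -r)) for r in range(n)])

# The same quantity from the power spectrum:
power = np.abs(np.fft.fft(delta)) ** 2 / n
xi_fft = np.fft.ifft(power).real

print(np.allclose(xi_direct, xi_fft))  # True
```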


The initial power spectrum can be related to that observed in the large-scale structure

through linear perturbation theory. Perturbation theory describes the evolution of the

perturbations over cosmic time. As mentioned above, when the perturbation amplitudes

reach order unity, perturbation theory ceases to be valid, and simulations must be employed.

Evolution equations

The governing equations for linear perturbation theory come from two sources. The first

is the conservation conditions, which derive from the fact that the stress-energy tensor

has zero divergence. The second is Einstein’s equations. Here we quote these equations

for an arbitrary perfect fluid, which includes most of the important cases in cosmology.

We use η as our primary time coordinate.

The fluid is described by three variables: the density perturbations, δ; the pressure

perturbations, π; and the velocity perturbations, θ. These are defined by

δ(~x, η) = [ρ(~x, η) − ρ̄(η)] / ρ̄(η), (1.18)

π(~x, η) = [p(~x, η) − p̄(η)] / p̄(η), (1.19)

−i(~k/k) θ(~k, η) = ~v(~k, η), (1.20)

where ~v is the 3D velocity field, and all quantities (e.g. ρ, p) refer to the fluid. We note

that the curl part of the velocity field does not couple to the density perturbations at

linear order and can thus be ignored.

In addition, metric perturbations are represented by the two fields Φ and Ψ, which

contribute to the metric in the following manner:

g00(~x, η) = −a(η)²[1 + 2Ψ(~x, η)], (1.21)

∑(i=1 to 3) gii(~x, η) = 3a(η)²[1 + 2Φ(~x, η)]. (1.22)

Ψ is recognizable as the Newtonian potential, and Φ is the spatial curvature perturbation.

With all the ingredients in place, we can now write down the equations of motion

governing the perturbations. The fact that the stress energy tensor has zero divergence


yields two conservation equations:

d(a³ρ̄δ)/dη + a³(ρ̄ + p̄)(kθ + 3Φ̇) + 3p̄a²ȧπ = 0, (1.23)

θ̇ + aH(1 − 3p̄̇/ρ̄̇)θ − kΨ − k [p̄/(ρ̄ + p̄)] π = 0. (1.24)

The first of these equations is the mass continuity equation and the second is the Euler

fluid equation. Einstein’s equations yield an additional two equations:

k²Φ = 4πGa²ρ̄ [δ + 3(aH/k)((ρ̄ + p̄)/ρ̄)θ], (1.25)

k²(Φ + Ψ) = 0. (1.26)

The first of these is recognizable as Poisson’s equation, while the second convenient

equation allows for the elimination of Φ.

When studying large-scale structure after recombination, when the baryonic matter

has decoupled from the radiation, the matter fluid can be treated as being pressureless.

The evolution is described by the three reduced equations:

δ̇ + 3Φ̇ + kθ = 0, (1.27)

θ̇ + aHθ + kΦ = 0, (1.28)

k²Φ = 4πGa²ρ̄ [δ + 3(aH/k)θ]. (1.29)

On scales much smaller than the Hubble scale, where k ≫ aH, the system can be easily

reduced to a single equation for the evolution of the density perturbations:

δ̈ + aHδ̇ − 4πGa²ρ̄δ = 0. (1.30)

The fact that k does not appear in this equation means that in this limit, structure

undergoes scale-independent growth. The linearity of this equation means that the frac-

tional growth of perturbations depends only on the background expansion, a(η). This is

an important result when studying structure at late times, say z ≲ 50, when all modes

of interest are well within the horizon. This is also the regime in which all observations

of large-scale structure are made.
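As an illustration of this scale-independent growth, the sketch below (an aside, not from the thesis) integrates Equation (1.30) in conformal time for a matter-dominated, Einstein-de Sitter background, where the standard relations a ∝ η², aH = 2/η, and 4πGa²ρ̄ = (3/2)(aH)² = 6/η² hold; the growing mode should track δ ∝ a.

```python
# Sketch: integrate the sub-horizon growth equation (1.30) in conformal time
# eta for an Einstein-de Sitter Universe, where a ~ eta^2, aH = 2/eta, and
# 4 pi G a^2 rho_bar = (3/2) (aH)^2 = 6/eta^2.  The growing mode tracks a.
from scipy.integrate import solve_ivp

def rhs(eta, y):
    delta, ddelta = y
    # delta'' = -aH delta' + 4 pi G a^2 rho_bar delta
    return [ddelta, -(2.0 / eta) * ddelta + (6.0 / eta**2) * delta]

eta_i, eta_f = 1.0, 10.0
# Start on the growing mode delta = eta^2, d(delta)/d(eta) = 2 eta:
sol = solve_ivp(rhs, (eta_i, eta_f), [1.0, 2.0], rtol=1e-10, atol=1e-12)

growth = sol.y[0, -1] / sol.y[0, 0]  # factor by which delta grew
expansion = (eta_f / eta_i) ** 2     # factor by which a grew (a ~ eta^2)
print(growth, expansion)             # both ~100: delta ~ a
```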


Summary

The evolution of an individual mode at late times is normally split into two components

such that

δ(~k, η) = [(a/ai) g(η)] [(9/10) Ti(k)] δp(~k) (η > ηi). (1.31)

Here, the subscript i refers to an intermediate time, when all modes of interest are

well within the horizon, but before we intend to make observations of the large-scale

structure. A reasonable choice would be zi = 20. g(η) is the growth function, which

describes the scale-independent growth of perturbations at late times (η > ηi), as given

by Equation 1.30. The factor of (a/ai) is removed such that g(η) is unity for a matter

dominated Universe (Ωm = 1). Ti(k) is the transfer function, which describes the scale-

dependent growth of the perturbations from the primordial value (δp(~k)) to the value at

the intermediate time. The factor of 9/10 is removed such that Ti(k) asymptotes to unity

at large scales (k ≪ aiHi).

The transfer function is calculated using the full scale-dependent evolution equations.

The dark matter component is well described by the pressureless versions, but prior to

recombination at z ∼ 1000 the baryonic component of the matter is tightly coupled to the

photons. The perturbations of multiple coupled fluids must then be considered, greatly

complicating the calculation. This is generally done numerically.

With these definitions the matter power spectrum, which we hope to observe in our

redshift surveys, is then

P(k, η) = [(a/ai) g(η)]² [(9/10) Ti(k)]² Pp(k) (η > ηi). (1.32)
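The numerically computed transfer function is often approximated in practice by fitting formulae. The sketch below uses the BBKS fit of Bardeen et al. (1986) as a stand-in; the coefficients and the shape parameter Γ = Ωm h come from that reference (an assumption, not part of this thesis text), and the sketch only illustrates the asymptotic behaviour Ti(k) → 1 at large scales.

```python
# Sketch: BBKS fitting formula for the CDM transfer function, standing in
# for the full numerical calculation described in the text.  k is in h/Mpc
# and gamma = Omega_m * h is the shape parameter.  (Coefficients are from
# Bardeen et al. 1986, quoted here as an assumption.)
import numpy as np

def bbks_transfer(k, gamma=0.318 * 0.671):
    q = k / gamma
    poly = 1 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4
    return np.log(1 + 2.34 * q) / (2.34 * q) * poly ** -0.25

print(bbks_transfer(1e-4))  # ~1: T asymptotes to unity on large scales
print(bbks_transfer(1.0))   # << 1: growth was suppressed on small scales
```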

1.3 Overview

Here each chapter is summarized in the broader context of this thesis. In addition I state

my contributions to each chapter within my collaborations.

1.3.1 Outline

This thesis is divided into two parts. Part I contains entirely theoretical work concern-

ing measurements that could in principle be performed using 21 cm intensity mapping.

Part II contains entirely observational work, where the Green Bank Telescope was used

to perform one of the first large-scale structure surveys using 21 cm intensity mapping.

This work represents a pioneering effort to establish intensity mapping as an efficient


technique for learning about the Universe.

Part I

In Chapter 2, originally published in Masui et al. [2010b], we considered the ability of

21 cm intensity mapping experiments to constrain modified gravity models. Modifications

to Einstein’s theory of gravity, General Relativity, are sometimes invoked as an alternative

to dark energy to explain the observed accelerating expansion of the Universe. We show

that experiments designed to measure the properties of dark energy are also able to

tightly constrain modified gravity models, through the observational probes of baryon

acoustic oscillations and weak lensing. This chapter involved a relatively straightforward

calculation, using well established techniques. The project represents my introduction to

statistical analysis in modern cosmology.

In Chapter 3, originally published in Masui and Pen [2010], we discovered a new effect

by which gravity waves created in cosmological inflation leave a distinct signature in the

large-scale structure of the Universe. The effect could be used to gain rare insight into

the inflationary era if observed. We considered the feasibility of making a detection of the

effect using 21 cm observations of the early Universe at redshift z ∼ 12. We concluded

that while such a detection would be very difficult, the reward would be sufficient that

searching for the effect using a futuristic experiment would still be very compelling.

Chapter 4, originally published in Masui et al. [2010a], considers what measurements

could in principle be made using instruments that either currently exist or will be con-

structed in the near future. In this way, it differs significantly from the previous two

chapters which each consider measurements that would be performed using ‘the ulti-

mate’ intensity mapping survey. We showed that even without building a dedicated

experiment, intensity mapping could be used to make interesting measurements. In par-

ticular, the Green Bank Telescope would be capable of performing a large-scale structure

survey that would have the sensitivity to detect the Kaiser redshift-space distortions,

settling a long standing controversy about the abundance of neutral hydrogen in the

Universe.

Part II

Chapter 5, which forms the basis of an intended future publication, gives an overview

of the analysis pipeline used to analyze survey data from the Green Bank Telescope.

It describes the formalism for the various parts of the data analysis including radio

frequency interference mitigation, calibration, noise estimation and map-making. It also


describes some details of the software modules that implement the analysis.

Chapter 6, originally published in [Masui et al., 2013], presents the cross-correlation

power spectrum of the intensity mapping survey at the Green Bank Telescope with a tra-

ditional galaxy survey. The cross correlation was detected with a statistical significance

of 7.4σ, far exceeding the significance of previous measurements and putting a lower limit

on the amplitude of the 21 cm brightness fluctuations.

Chapter 7, submitted to Monthly Notices of the Royal Astronomical Society: Letters

and available in pre-print as Switzer et al. [2013], represents the first use of the auto-

correlation power spectrum from the GBT survey to make an astrophysical measurement.

A Bayesian analysis is used to combine the lower limit from the cross-correlation and the

upper limit from the auto-correlation into a determination of the 21 cm signal amplitude.

We discuss future directions and conclude in Chapter 8.

1.3.2 Summary of contributions

In Chapter 2, Patrick McDonald calculated the BAO error bars for the 21 cm experiments

as well as the constraints from the Planck mission. I calculated the predicted BAO signal

for the modified gravity models and combined these two ingredients into the projected

constraints on the modified gravity models. Likewise, Fabian Schmidt calculated the weak

lensing spectrum for the modified gravity models, and the error bars for the intensity

mapping experiments were taken from a calculation performed by Ting Ting Lu [Lu

et al., 2010]. Again I combined these into the constraints on the models. I also led the

project, produced the figures, and did the majority of the writing for its publication.

All calculations, figures and writing for Chapter 3 were prepared by myself, under the

guidance of Ue-Li Pen.

In Chapter 4, Patrick McDonald wrote the software that calculates the sensitivity for

a general 21 cm survey. I used this software to perform forecasts for the surveys under

consideration and to optimize the surveys. I also did the majority of the writing and

created all the plots.

For the observations and data analysis that form the basis of Chapters 5, 6 and 7, I

took the lead on the survey planning and data analysis. I made significant contributions

to all proposals to the GBT telescope allocation committees, led the planning of all

observations, wrote the vast majority of the telescope control scripts, and performed

roughly one quarter of the observations. I designed the software framework for the

data analysis pipeline and made contributions to all the data analysis software up to

the map making. This includes preprocessing the data, radio frequency interference


mitigation (written principally by Liviu-Mihai Calin), and calibration (written principally

by Tabitha Voytek). I was the sole author of all noise estimation and map making

software. While I led the development of the pipeline, the data was actually run through

the software by collaborators, mostly Tabitha Voytek.

In early versions of the data analysis, I wrote the software that performed the fore-

ground subtraction and power spectrum (then correlation function) estimation. Responsibility

for this part of the pipeline has since been transferred to my collaborators,

principally Eric Switzer with contributions from Yi-Chao Li. While I have remained a

consulting party throughout, I have made no subsequent contributions to writing the

software for the parts of the pipeline subsequent to map-making. Eric Switzer and

Yi-Chao Li wrote all the software that dealt with the WiggleZ galaxy catalogues and

cross-correlating them with the intensity mapping survey.

In Chapters 6 and 7, most of the writing was roughly evenly split between myself

and Eric Switzer. In Chapter 7, I performed the Bayesian analysis to arrive at the final

conclusions of the paper and produced all plots.


Part I

The potential of 21 cm cosmology


Chapter 2

Projected Constraints on Modified

Gravity Cosmologies from 21 cm

Intensity Mapping

A version of this chapter was published in Physical Review D as “Projected constraints

on modified gravity cosmologies from 21 cm intensity mapping”, Masui, K. W., Schmidt,

F., Pen, U.-L. and McDonald, P., Vol. 81, Issue 6, 2010. Reproduced here with the

permission of the APS.

2.1 Summary

We present projected constraints on modified gravity models from the observational tech-

nique known as 21 cm intensity mapping, where cosmic structure is detected without re-

solving individual galaxies. The resulting map is sensitive to both BAO and weak lensing,

two of the most powerful cosmological probes. It is found that a 200 m×200 m cylindrical

telescope, sensitive out to z = 2.5, would be able to distinguish the Dvali, Gabadadze and

Porrati (DGP) model from most dark energy models, and constrain the Hu & Sawicki

f(R) model to |fR0| < 9 × 10⁻⁶ at 95% confidence. The latter constraint makes extensive

use of the lensing spectrum in the nonlinear regime. These results show that 21 cm

intensity mapping is not only sensitive to modifications of the standard model’s expan-

sion history, but also to structure growth. This makes intensity mapping a powerful and

economical technique, achievable on much shorter time scales than optical experiments

that would probe the same era.


Chapter 2. Constraining modified gravity 19

2.2 Introduction

One of the greatest open questions in cosmology is the cause of the observed late time

acceleration of the universe. Within the context of normal gravity described by Einstein’s

General Relativity, this phenomenon can only be explained by an exotic form of matter

with negative pressure. Another possible explanation is that on cosmological scales,

General Relativity fails and must be replaced by some theory of modified gravity.

Several approaches have been proposed to modify gravity at late times to explain the

apparent acceleration of the universe. The challenge in these modifications is to preserve

successful predictions of the CMB at z ≈ 1000, and also the precision tests at the present

epoch in the solar system.

A generic class of theories operates with the Chameleon effect, where at sufficiently

high densities General Relativity (GR) is restored, thus applying both in the solar system

and the early universe. To further understand the nature of gravity would require probing

gravity on cosmological scales. Probing large scales means surveying large volumes, requiring large fractions

of the sky. Gravity can be probed by gravitational lensing, which measures geodesics

and thus the gravitational curvature of space, and is a sensitive probe of the growth

of structure in the Universe [Knox et al., 2006, Jain and Zhang, 2008, Tsujikawa and

Tatekawa, 2008, Schmidt, 2008].

In working out predictions for cosmology, the theoretical challenge posed by these

theories is the nonlinear mechanisms in each model, necessary in order to restore Einstein

Gravity locally to satisfy Solar System constraints. We present quantitative results from

nonlinear calculations for a specific f(R) model, and forecasted constraints for future

21 cm experiments.

An upcoming class of experiments proposes the observation of the 21 cm spectral line

at low resolution over a large fraction of the sky and large range of redshifts [Peterson

et al., 2009]. Large scale structure is detected in three dimensions without the detection

of individual galaxies. This process is referred to as 21 cm intensity mapping. These

experiments are sensitive to structure in a redshift range that is difficult to observe

for ground-based optical experiments due to a lack of spectral lines. Yet these

experiments are extremely economical since they only require limited resolution and no

moving parts [Seo et al., 2010].

Intensity mapping is sensitive to both the Baryon Acoustic Oscillations (BAO) and to

weak lensing, two of the most powerful observational methods to determine cosmological

parameters. It has been shown that BAO detections from 21 cm intensity mapping are

powerful probes of dark energy, comparing favourably with Dark Energy Task Force


Stage IV projects within the figure of merit framework [Chang et al., 2008, Albrecht

et al., 2006].

In this paper we present projected constraints on modified gravity models from 21 cm

intensity mapping. In Section 2.3 we describe the modified gravity models considered. In

Section 2.4 we discuss the observational signatures accessible to 21 cm intensity mapping,

and calculate the effects of modified gravity on these signatures. In Section 2.5 we present

statistical analysis and results and we conclude in Section 2.6.

We assume a fiducial ΛCDM cosmology with WMAP5 cosmological parameters:

Ωm = 0.258, Ωb = 0.0441, ΩΛ = 0.742, h = 0.719, ns = 0.963 and log10As = −8.65

[Komatsu et al., 2009]. We will follow the convention that ωx ≡ h2Ωx.

2.3 Modified Gravity Models

Here we describe some popular modified gravity models for which projected constraints

will later be derived. Throughout we will use units in which G = c = ~ = 1 and will be

using a metric with mostly negative signature: (+,−,−,−).

2.3.1 f(R) Models

In the f(R) paradigm, modifications to gravity are introduced by changing the standard

Einstein-Hilbert action, which is linear in R, the Ricci scalar. The modifications are

made by adding an additional nonlinear function of R [Starobinsky, 1980, Capozziello,

2002, Carroll et al., 2004]

S = \int d^4x \sqrt{-g} \left[ \frac{R + f(R)}{16\pi} + \mathcal{L}_m \right], (2.1)

where Lm is the matter Lagrangian. See Sotiriou and Faraoni [2010] for a comprehensive

review of f(R) theories of gravity.

The choice of the function f(R) is arbitrary, but in practice it is highly constrained by

precise solar system and cosmological constraints, as well as stability criteria [Nojiri and

Odintsov, 2003, Sawicki and Hu, 2007] (see below). In this paper, we choose parameter-

izations of f(R) such that it asymptotes to a constant for a certain choice of parameters

and thus approaches the fiducial ΛCDM.

In general, f(R) models have enough freedom to mimic exactly the ΛCDM expansion

history and yet still impose a significant modification to gravity [Nojiri and Odintsov,

2006, Song et al., 2007]. As such, probes of the expansion history are less constraining


than probes of structure growth, which will be evident in the constraints presented in

later sections.

Variation of the above action yields the modified Einstein Equations

G_{\mu\nu} + f_R R_{\mu\nu} - \left( \frac{f}{2} - \Box f_R \right) g_{\mu\nu} - \nabla_\mu \nabla_\nu f_R = 8\pi T_{\mu\nu}, (2.2)

where fR ≡ df(R)/dR, a convention that will be used throughout. f(R) gravity is

equivalent to a scalar-tensor theory [Nojiri and Odintsov, 2003, Chiba, 2003] with the

scalar field fR having a mass and potential determined by the functional form of f(R).

The field has a Compton wavelength given by its inverse mass

\lambda_C = \frac{1}{m_{f_R}} = \sqrt{3 f_{RR}}. (2.3)

The main criterion for stability of the f(R) model is that the mass squared of the fR

field is positive, i.e. fRR > 0. In most cases, this simply corresponds to a sign choice for

the field fR (specifically for the model we consider below, fR0 is constrained to be less

than 0).

On scales smaller than λC, gravitational forces are enhanced by a factor of 4/3, while gravity is unmodified on larger scales. The finite reach λC of the modified forces generically leads to a scale-dependent growth in f(R) models.
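As a hedged illustration of Equation (2.3), the sketch below evaluates λC for a power-law field fR = fR0 (R0/R)^(n+1), the form that arises from the Hu-Sawicki model discussed below. The choices R = R0 = 1 and the parameter values are illustrative assumptions, not thesis results.

```python
import math

# Sketch: Compton wavelength (Eq. 2.3) for a power-law f_R field,
# f_R = fR0 * (R0/R)**(n+1), so fRR = -(n+1)*fR0*R0**(n+1)/R**(n+2).
# fRR > 0 (stability) requires fR0 < 0. Units with R0 = 1 are illustrative.

def lambda_C(fR0, n, R=1.0, R0=1.0):
    fRR = -(n + 1) * fR0 * R0 ** (n + 1) / R ** (n + 2)
    return math.sqrt(3.0 * fRR)

# Stability: the wavelength is real and positive for fR0 < 0.
assert lambda_C(-1e-4, 1) > 0
# The reach of the modified force scales as |fR0|**0.5 at fixed curvature,
# so decreasing |fR0| by 100 shortens lambda_C by a factor of 10.
ratio = lambda_C(-1e-4, 1) / lambda_C(-1e-6, 1)
assert abs(ratio - 10.0) < 1e-9
```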

While the dynamics are significantly changed in f(R), the relation between matter

and the lensing potential is unchanged up to a rescaling of the gravitational constant by

the linear contribution in f . The fractional change is of order the background field value

f̄R ≡ fR(R̄) ≪ 1, where R̄ is the background curvature scalar.

Proceeding further requires a choice of the functional form for f. We consider a functional form that is representative of many other cases.

Hu and Sawicki [2007] (HS) proposed a simple functional form for f(R), which can

be written as

f(R) = -R_0 \frac{c_1 (R/R_0)^n}{c_2 (R/R_0)^n + 1}, (2.4)

where we have used the value of the scalar curvature in the background today, R0 ≡ R|z=0

for convenience. This three parameter model passes all stability criteria for positive n,

c1 and c2. One parameter can be fixed by demanding the expansion history to be close

(within observational limits) to ΛCDM. In this case, Equation 2.4 can be conveniently


reparametrized and approximated by

f(R) \approx -2\Lambda - \frac{f_{R0} R_0}{n} \left( \frac{R_0}{R} \right)^n. (2.5)

Here Λ and fR0—the value of the fR field in the background today—have been used

to parameterize the function in lieu of c1 and c2. This approximation is valid as long

as |fR0| ≪ 1, which is necessary to satisfy current observational constraints [Hu and

Sawicki, 2007, Schmidt et al., 2009b]. While Λ is conceptually different than vacuum

energy, it is mathematically identical and will thus be absorbed into the right hand side

of the Friedmann equation and parameterized by ΩΛ. In quoting constraints, we will

marginalize over this parameter as it is of no use in identifying signatures of modified

gravity. The parameter fR0 can be thought of as controlling the strength of modifications

to gravity today, while higher n pushes these modifications to later times. The effects of

changing these parameters are discussed in greater detail in Hu and Sawicki [2007].
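The behaviour of the reparametrized form in Equation (2.5) can be checked with a short numerical sketch. The values of R0, Λ, and fR0 below are illustrative placeholders, not thesis numbers; the check is simply that df/dR at R = R0 recovers fR0 by construction, and that the field decays at higher curvature (earlier times), shutting off the modifications.

```python
# Sketch: Hu-Sawicki f(R) in the large-curvature limit (Eq. 2.5),
# checking that the derivative f_R = df/dR at R = R0 recovers fR0.
# Units and parameter values here are illustrative, not from the thesis.

def f_HS(R, fR0, n, R0=1.0, Lambda=0.7):
    """Approximate HS form: f(R) = -2*Lambda - (fR0*R0/n) * (R0/R)**n."""
    return -2.0 * Lambda - (fR0 * R0 / n) * (R0 / R) ** n

def f_R(R, fR0, n, R0=1.0, h=1e-6):
    """Numerical derivative df/dR (central finite difference)."""
    return (f_HS(R + h, fR0, n, R0) - f_HS(R - h, fR0, n, R0)) / (2 * h)

fR0, n = -1e-4, 1
# At the background curvature today, f_R should equal fR0 by construction.
assert abs(f_R(1.0, fR0, n) - fR0) < 1e-8
# Deeper in the past (R >> R0) the field is smaller: modifications shut off.
assert abs(f_R(10.0, fR0, n)) < abs(fR0)
```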

Allowed f(R) models exhibit the so-called chameleon mechanism: the fR field be-

comes very massive in dense environments and effectively decouples from matter. This ef-

fect is active whenever the Newtonian potential is of order the background fR field. Since

cosmological potential wells are typically of order 10−5 for massive halos, the chameleon

effect becomes important if |fR| ≲ 10−5. If the background field is ∼ 10−7 or smaller, a

large fraction of the collapsed structures in the universe are chameleon-screened, so that

the model becomes observationally indistinguishable from ΛCDM.

Since the chameleon effect will affect the formation of structure, standard fitting

formulas based on ordinary GR simulations, such as those mapping the linear to the

nonlinear power spectrum, cannot be used for these models. Recently, however, self-

consistent N-body simulations of f(R) gravity have been performed which include the

chameleon mechanism [Oyaizu, 2008, Oyaizu et al., 2008, Schmidt et al., 2009a]. We will

use the simulation results for forecasts of weak lensing in the nonlinear regime below.

It should be noted that f(R) models are not without difficulties. In particular, an

open issue is the problem of potential unprotected singularities [Abdalla et al., 2005,

Frolov, 2008, Nojiri and Odintsov, 2008].

2.3.2 DGP Braneworld

A theory of gravity proposed by Dvali, Gabadadze and Porrati (DGP) assumes that our

four dimensional universe sits on a brane in five dimensional Minkowski space [Dvali et al.,

2000]. On small scales gravity is four dimensional but, on larger scales it becomes fully


five dimensional. Here we parameterize DGP by rc, the scale at which gravity crosses

over in dimensionality. The DGP model has two branches depending on the embedding

of the brane in 5D space. In the self-accelerating branch, the universe accelerates without

need for a cosmological constant if rc ∼ 1/H0 [Deffayet, 2001, Deffayet et al., 2002]. In

this branch, assuming a spatially flat Universe for now, the modified Friedmann equation

is given by

H^2 - \frac{H}{r_c} = \frac{8\pi\rho}{3}, (2.6)

which clearly differs from ΛCDM. Thus, in contrast to the other models considered here,

DGP without a cosmological constant does not reduce to ΛCDM and it is possible to

completely rule out this scenario (where the others can only be constrained). In fact

DGP (without a cosmological constant) has been shown to be in conflict with current

data [Fang et al., 2008]. It is presented here largely for illustrative purposes.

On scales much smaller than rc, gravity is four-dimensional but not GR. On these

scales, DGP can be described as an effective scalar-tensor theory [Koyama and Maartens,

2006, Koyama and Silva, 2007, Scoccimarro, 2009]. The massless scalar field, the brane-

bending mode, is repulsive in the self-accelerating branch of DGP. Hence, structure for-

mation is slowed in DGP when compared to an effective smooth Dark Energy model

with the same expansion history. While the growth of structure is thus modified in DGP

even on scales much smaller than rc, gravitational lensing is unchanged. In other words,

the relation between matter overdensities and the lensing potential is the same as in GR

[Lue et al., 2004].

As in f(R), the DGP model contains a nonlinear mechanism to restore GR locally.

This Vainshtein mechanism is due to self-interactions of the scalar brane-bending mode

which generally become important as soon as the density field becomes of order unity. In

the Vainshtein regime, second derivatives of the field saturate, and thus modified gravity

effects are highly suppressed in high-density regions [Lue et al., 2004, Koyama and Silva,

2007, Schmidt, 2009]. We will only consider linear predictions for the DGP model here.

2.4 Observational Signatures

In this Section we describe the observational signatures available to 21 cm intensity map-

ping. We also give details on calculating the observables within modified gravity models.

We consider two types of measurements: the Baryon Acoustic Oscillations and weak

gravitational lensing.

For the fiducial survey, we assume a 200 m× 200 m cylindrical telescope, as in Chang


et al. [2008]. We will also present limited results for a 100 m×100 m cylindrical telescope

to illustrate effects of reduced resolution and collecting area on the results. This latter

case is representative of first generation projects [Seo et al., 2010]. In the 200 m case we

assume 4000 receivers, and in the 100 m case 1000 receivers. We assume either telescope

covers 15000 sq. deg. over 4 years. We assume the neutral hydrogen fraction and the bias remain constant, with ΩHI = 0.0005 today and b = 1. The object number density is

assumed to be n = 0.03 per cubic h−1Mpc (effectively no shot-noise, as should be the

case in practice [Chang et al., 2008]).
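As a rough sanity check on the "effectively no shot-noise" statement, one can compare the shot-noise power 1/n to a typical clustering power near the BAO scale. The clustering amplitude used below is an illustrative round number, not a thesis calculation.

```python
# Sketch: why n = 0.03 per cubic h^-1 Mpc is "effectively no shot noise".
# The shot-noise power is 1/n; comparing it to an illustrative clustering
# power near the BAO scale shows it is subdominant by roughly two orders
# of magnitude. P_clustering below is an assumed, illustrative value.

n = 0.03                 # objects per (h^-1 Mpc)^3, from the survey assumptions
P_shot = 1.0 / n         # shot-noise power, (h^-1 Mpc)^3
P_clustering = 5000.0    # illustrative P(k) near k ~ 0.1 h/Mpc
assert P_shot < 0.01 * P_clustering
```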

2.4.1 Baryonic acoustic oscillation expansion history test

Acoustic oscillations in the primordial photon-baryon plasma have ubiquitously left a

distinctive imprint in the distribution of matter in the universe today. This process is

understood from first principles and gives a clean length scale in the universe’s large

scale structure, largely free of systematic uncertainties and calibrations. This can be

used to measure the global cosmological expansion history through the angular diameter

distance, dA, and Hubble parameter, H, vs redshift relation. The detailed expansion and

acceleration will differ between pure cosmological constant and modified gravity models.

We use essentially the method of Seo and Eisenstein [2007] for estimating distance

errors obtainable from a BAO measurement, including 50% reconstruction of nonlinear

degradation of the BAO feature. We assume the frequency range corresponding to z < 2.5

is covered (the lower z end should be covered by equivalent galaxy redshift surveys if not

a 21 cm survey). For the sky area and redshift range surveyed, the 200 m telescope is

nearly equivalent to a perfect BAO measurement. The limited resolution and collecting

area of the 100 m telescope substantially degrades the measurement at the high-z end.

The expansion history for modified gravity models can be calculated in an analogous

way to that in General Relativity. The Friedmann Equation in DGP, Equation 2.6 can

be written as

H^2 = -\frac{k}{a^2} + \left( \frac{1}{2r_c} + \sqrt{ \frac{1}{4r_c^2} + \frac{8\pi\rho}{3} } \right)^2, (2.7)

where k is the curvature, and rc is the crossover scale. It is convenient to introduce the

parameter ωrc ≡ 1/(4rc²), which stands in for rc. This equation can be solved numerically

to calculate the observable quantities.
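A minimal sketch of such a numerical evaluation, for the flat self-accelerating case, is given below. It works in units where H0 = 1 and uses the common parameterization Orc ≡ 1/(4 rc² H0²) (related to, but not identical with, the ωrc convention above); the parameter values are illustrative.

```python
import math

# Sketch of the self-accelerating DGP expansion history (Eq. 2.7), flat case.
# In units with H0 = 1, writing 8*pi*rho/3 = Om/a**3 * H0**2, the positive
# root of Eq. 2.7 gives H/H0 = sqrt(Orc) + sqrt(Orc + Om/a**3), where
# Orc = 1/(4 rc^2 H0^2). Parameter values are illustrative.

def H_dgp(a, Om, Orc):
    """H(a)/H0 in flat self-accelerating DGP."""
    return math.sqrt(Orc) + math.sqrt(Orc + Om / a ** 3)

Om = 0.258
# Flat-universe constraint at a = 1: Om + 2*sqrt(Orc) = 1.
Orc = ((1.0 - Om) / 2.0) ** 2
assert abs(H_dgp(1.0, Om, Orc) - 1.0) < 1e-12
# Late times (a -> infinity): H tends to a constant, i.e. self-acceleration.
assert abs(H_dgp(1e6, Om, Orc) - 2.0 * math.sqrt(Orc)) < 1e-6
```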

We now calculate the expansion history in the HS f(R) model using a perturbative

framework which is well suited for calculating constraints on fR0. Working in the confor-

mal gauge and mostly negative signature, we start with the modified Einstein’s Equation

(2.2). At zeroth order the left hand side of the 00 component contains the modified


Friedmann equation

H^2 = \frac{8\pi\rho}{3} + f_{R0}\, g_n(a, \dot{a}, \ddot{a}, \dddot{a}), (2.8)

where ρ is the average density (including contributions from Λ), the over-dot represents

a conformal derivative and

g_n \equiv \frac{-1}{f_{R0} a^2} \left[ \frac{(f + 2\Lambda) a^2}{6} + f_R \left( \frac{\dot{a}^2}{a^2} - \frac{\ddot{a}}{a} \right) + 6 f_{RR} \left( \frac{\dddot{a}\,\dot{a}}{a^4} - \frac{3 \ddot{a}\,\dot{a}^2}{a^5} \right) \right]. (2.9)

For verifiability we quote

g_1 = \frac{a^2 R_0^2 \left( 2 a \ddot{a}^2 - 7 \ddot{a}\,\dot{a}^2 + 2 \dddot{a}\,\dot{a}\,a \right)}{36\, \ddot{a}^3}. (2.10)

Evaluating Equation 2.8 at the present epoch yields the modified version of the standard

constraint

h^2 = \omega_m + \omega_r + \omega_k + \omega_\Lambda + f_{R0}\, g_{n0}. (2.11)

Note that the modified version of the Friedmann Equation is third order instead of first order; however, it has been shown that the expansion history stably approaches that of ΛCDM for vanishing fR0 [Hu and Sawicki, 2007]. For observationally allowed cosmologies, |fR0| ≪ 1, we expand

f_{R0}\, g_n = f_{R0}\, g_n(\bar{a}, \dot{\bar{a}}, \ddot{\bar{a}}, \dddot{\bar{a}}) + O(f_{R0}^2), (2.12)

where \bar{a} is the solution to the standard GR Friedmann equation.

By using Equation 2.12 in Equation 2.8 and keeping only terms linear in fR0, the ex-

pansion history can be calculated in the regular way, along with the observable quantities

dA(z) and H(z). For small fR0 this agrees well with the calculation in Hu and Sawicki

[2007], where the full third order differential equation was integrated.

In calculating the Fisher Matrix, this treatment is exact because the Fisher Matrix

depends only on the first derivative of the observables with respect to the model param-

eters, evaluated at the fiducial model.
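To make the Fisher-matrix point concrete, here is a toy one-parameter forecast. The observable vector, covariance, and step size are invented for illustration and are not the thesis's actual BAO observables; the point is that only the first derivative at the fiducial model enters, so the linearized expansion of Equation (2.12) is exact for this purpose.

```python
# Sketch: a one-parameter Fisher forecast. Only the first derivative of the
# observables at the fiducial model enters, so a model linearized in the
# parameter (as in Eq. 2.12) gives exact Fisher results. All numbers are toy
# values chosen for illustration.

def observables(theta):
    """Toy observable vector r(theta), linear in the parameter theta."""
    base = [1.0, 2.0, 3.0]
    response = [0.5, -0.2, 0.1]
    return [b + theta * r for b, r in zip(base, response)]

variances = [0.01, 0.04, 0.09]           # toy diagonal measurement covariance
eps = 1e-3                               # finite-difference step size

# First derivative of the observables at the fiducial model (theta = 0).
hi = observables(eps)
lo = observables(-eps)
deriv = [(h - l) / (2 * eps) for h, l in zip(hi, lo)]

# Scalar Fisher "matrix": F = sum_i (dr_i/dtheta)^2 / sigma_i^2.
F = sum(d * d / v for d, v in zip(deriv, variances))
sigma_theta = F ** -0.5                  # forecast 1-sigma error on theta

# For a model linear in theta the derivative is independent of step size,
# so the finite-difference Fisher forecast is exact.
deriv_big = [(h - l) / 0.2 for h, l in
             zip(observables(0.1), observables(-0.1))]
assert all(abs(a - b) < 1e-9 for a, b in zip(deriv, deriv_big))
assert 0.19 < sigma_theta < 0.20
```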

2.4.2 Weak Lensing

A second class of observables measures the spatial perturbations in the gravitational met-

ric. Modified gravity will change the strength of gravity on large scales and thus modify


the growth of cosmological structure. Weak gravitational lensing, the gravitational bend-

ing of source light by intervening matter, is a probe of this effect.

Weak lensing measures the distortion of background structures as their light propa-

gates to us. Here, the background structure is the 21 cm emission from unresolved sources.

While light rays are deflected by gravitational forces, this deflection is not directly ob-

servable, since we don’t know the primary unlensed 21 cm sky. However, weak lensing

will induce correlations in the measured 21 cm background, since neighbouring rays pass

through common lens planes. While the deflection angles themselves are small (of order

arcseconds) the deflections are coherent over scales of arcminutes. In this way, the lensing

signal can be extracted statistically using quadratic estimators [Lu et al., 2010]. Given

the smallness of the lensing effect, a high resolution (high equivalent number density of

“sources”) is necessary to detect the effect.

The weak lensing observable that is predicted by theory is the power spectrum of the

convergence κ. It is given by

C^{\kappa\kappa}(\ell) = \left( \frac{3}{2} \Omega_m H_0^2 \right)^2 \int_0^{\chi_s} d\chi\, \frac{W_L(\chi)^2}{\chi^2\, a^2(\chi)}\, \epsilon^2(\chi)\, P(\ell/\chi; \chi), (2.13)

where χ denotes comoving distances, P (k, χ) is the (linear or nonlinear) matter power

spectrum at the given redshift, and we have assumed flat space. The lensing weight

function WL(χ) is given by:

W_L(\chi) = \int_{z(\chi)}^{\infty} dz_s\, \frac{\chi \left( \chi(z_s) - \chi \right)}{\chi(z_s)}\, \frac{dN}{dz}(z_s). (2.14)

Here, dN/dz is the redshift distribution of source galaxies, normalized to unity. The factor

ε(χ) in Equation 2.13 encodes possible modifications to the Poisson equation relating the

lensing potential to matter (Section 2.3). In f(R), it is given by ε(χ) = (1 + fR(χ))−1,

while ε = 1 for GR as well as DGP. Note that for viable f(R) models, ε − 1 ≲ 0.01, so

the effect of ε on the lensing power spectra is very small.
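The structure of the Limber integral in Equation 2.13 can be sketched with a toy power spectrum and a single source plane. Every ingredient below, the power spectrum P, the a(χ) relation, and the single-plane lensing weight, is a placeholder chosen only to make the integral well defined, with ε = 1 as in GR; none of it reproduces the thesis's actual spectra.

```python
# Sketch of the Limber integral for the convergence power spectrum (Eq. 2.13)
# with a single source plane at chi_s, a toy power-law P(k), and epsilon = 1
# (GR). All quantities are in arbitrary units; this only shows the structure.

def C_kappa(ell, chi_s=3.0, Om=0.258, H0=1.0, nsteps=2000):
    pref = (1.5 * Om * H0 ** 2) ** 2
    P = lambda k: 1.0 / (1.0 + k ** 2)           # toy matter power spectrum
    a = lambda chi: 1.0 / (1.0 + chi)            # toy a(chi) relation
    W = lambda chi: chi * (chi_s - chi) / chi_s  # single-plane lensing weight
    total, dchi = 0.0, chi_s / nsteps
    for i in range(1, nsteps):                   # endpoints contribute zero
        chi = i * dchi
        total += W(chi) ** 2 / (chi ** 2 * a(chi) ** 2) * P(ell / chi) * dchi
    return pref * total

# The toy spectrum falls with k, so C(ell) should decrease at high ell.
assert C_kappa(10.0) > C_kappa(100.0) > 0.0
```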

The CAMB Sources module [Lewis et al., 2000, Lewis and Challinor, 2007] was used

to calculate the lensing convergence power spectrum in flat ΛCDM models. The HALOFIT

[Smith et al., 2003] interface for CAMB was used for calculations that include lensing at

nonlinear scales.

For the modified gravity models in the linear regime, the convergence power spectra

were calculated using the Parametrized Post-Friedmann (PPF) approach [Hu and Saw-

icki, 2007] as in Schmidt [2008]. Briefly, the PPF approach uses an interpolation between

super-horizon scales and the quasi-static limit. On super-horizon scales (k ≪ aH), specifying the background expansion history, together with a relation between the two metric

potentials, already determines the evolution of metric and density perturbations. On

small scales (k aH), time derivatives in the equations for the metric perturbations

can be neglected with respect to spatial derivatives, leading to a modified Poisson equa-

tion for the metric potentials. The PPF approach uses a simple interpolation scheme

between these limits, with a few fitting parameters adjusted to match the full calcu-

lations [Hu and Sawicki, 2007]. The full calculations are reproduced to within a few

percent accuracy. We use the transfer function of Eisenstein and Hu [1998] to calculate

the ΛCDM power spectrum at an initial redshift of zi = 40, where modified gravity effects

are negligible, and evolve forward using the PPF equations.

For the f(R) model, we also calculate predictions in the nonlinear regime. For these,

we use simulations of the HS model with n = 1 and fR0 values ranging from 10−6 to 10−4.

We use the deviation ∆P (k)/P (k) of the nonlinear matter power spectrum measured

in f(R) simulations from that of ΛCDM simulations with the same initial conditions

[Oyaizu et al., 2008]. This deviation is measured more precisely than P (k) itself. We

then spline-interpolate the measurements of ∆P (k)/P (k) for k = 0.04− 3.1 h/Mpc and

at scale factors a = 0.1, 0.2, ...1.0, and multiply the standard nonlinear ΛCDM prediction

(HALOFIT) with this value. For values of k > 3.1 h/Mpc, we simply set ∆P (k) = 0.

However, for the angular scales and redshifts considered here (` < 600, see below), such

high values of k do not contribute significantly.

One might be concerned that this mixing of methods for calculating the lensing spectrum might artificially exaggerate the effects of modified gravity if these methods do not

agree perfectly. While the spectra calculated for the fiducial ΛCDM model differed by up

to a percent between these methods, presumably due to slight differences in the transfer

function, this should have no effect on the results. Any direct comparison between spec-

tra (for example finite difference derivatives) are made between spectra calculated in the

same manner. Note that the Fisher Matrix depends only on the first derivative of the

observables with respect to the parameters and no cross derivatives are needed.

The lensing spectra were not calculated for non-flat models, but it is expected that

the CMB and BAO are much more sensitive to the curvature and as such the lensing

spectra are relatively unaffected. Formally we are assuming that

\frac{\sigma_{\omega_k}}{\sigma_{C^{\kappa\kappa}}} \frac{\partial C^{\kappa\kappa}}{\partial \omega_k} \ll 1.

Reconstructing weak lensing from 21 cm intensity maps involves the use of quadratic

estimators to estimate the convergence and shear fields. The accuracy with which this


can be done increases with information in the source maps; however, this information

saturates at small scales due to nonlinear evolution. As such, one cannot improve the

lensing measurement indefinitely by increasing resolution, and the experiments considered

here extract much of the available information within the redshift range considered.

The accuracy with which the convergence power spectrum can be reconstructed from

21 cm intensity maps was derived in Lu et al. [2010], where the effective lensing galaxy

density was calculated at redshifts 1.25, 3 and 5 (see Figure 7 and Table 2 therein). The

effective volume galaxy density was corrected for the finite resolution of the experiment

considered here. It was then interpolated, using a piecewise power law, and integrated

from redshift 1 to 2.5 to obtain an effective area galaxy density of ng/σe² = 0.37 arcmin−2.

The parameter σe² is the variance in the intrinsic galaxy ellipticity, which is only used

here for comparison with optical weak lensing surveys. From the effective galaxy density

the error on the convergence power is given by

\Delta C^{\kappa\kappa}(\ell) = \sqrt{ \frac{2}{(2\ell + 1) f_{\rm sky}} } \left( C^{\kappa\kappa}(\ell) + \frac{\sigma_e^2}{n_g} \right), (2.15)

where fsky is the fraction of the sky surveyed. The galaxy distribution function dN/dz

used to calculate the theoretical curves (from Equation 2.13) should follow the effective

galaxy density. Instead for simplicity, a flat step function was used, with this distribution

function equal from redshift 1 to 2.5 and zero elsewhere. While the difference between the

these distributions would have an effect on the lensing spectra, the effect on differences of

spectra when varying parameters is expected to be negligible. Our approximation is also

conservative, since the proper distribution function is more heavily weighted toward high

redshift. Rays travelling from high redshift will be affected by more intervening matter

and thus experience more lensing. This would increase the lensing signal, allowing a more

precise measurement.
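The error model of Equation 2.15 is straightforward to evaluate. In the sketch below the survey area (15000 sq. deg.) and effective source density (ng/σe² = 0.37 arcmin−2) follow the numbers quoted above, while the convergence power amplitude is a toy value used only to exercise the formula.

```python
import math

# Sketch of the convergence band-power error, Eq. (2.15): a sample-variance
# term plus a shot-noise-like term set by the effective source density.
# fsky and ng/sigma_e^2 follow the survey numbers quoted in the text;
# the C(ell) amplitude is an illustrative toy value.

def delta_C(ell, C, fsky, noise):
    """noise = sigma_e^2 / n_g, in the same (steradian) units as C."""
    return math.sqrt(2.0 / ((2 * ell + 1) * fsky)) * (C + noise)

fsky = 15000.0 / 41253.0                 # 15000 sq. deg. of the full sky
arcmin2_per_sr = (180 * 60 / math.pi) ** 2
noise = 1.0 / (0.37 * arcmin2_per_sr)    # ng/sigma_e^2 = 0.37 arcmin^-2

C_toy = 2e-7                             # toy convergence power amplitude
err = delta_C(100, C_toy, fsky, noise)
assert err > 0
# More sky and higher source density both shrink the error bar.
assert delta_C(100, C_toy, 2 * fsky, noise) < err
assert delta_C(100, C_toy, fsky, noise / 2) < err
```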

Figure 2.1 shows the lensing spectra for the fiducial cosmology and a modified gravity

model, including both linear and nonlinear calculations. The linear regime is taken to

be up to ` = 140 for projected constraints. For calculations including weak lensing in

the nonlinear regime, Cκκ(`) up to ` = 600 is used for the larger telescope. Beyond this

scale the model used for lensing error-bars is not considered accurate at the shallowest

redshifts in the source window [Lu et al., 2010]. This cut off coincides with the scale

at which information in the source structures saturates due to non-linear evolution in

standard gravity (although it is also not far from the resolution limit of the experiment).

We speculate that a similar phenomenon would occur in modified gravity, and smaller

scales are not expected to carry significant additional information. Note that it is the


Figure 2.1: The weak lensing convergence power spectra for ΛCDM and the HS f(R) model with n = 1 and fR0 = 10−4, with both linear and nonlinear calculations shown for each. The galaxy distribution function is flat between z = 1 and z = 2.5.

source structures in which information saturates. At smaller scales the lensing spectrum

would continue to carry information [Dore et al., 2009] if it could be reconstructed. For

the smaller telescope the scale is limited to ` < 425 by the resolution at the high end

of the redshift window. If the redshift window were subdivided into narrower bins, it

would be possible to use information at scales down to ` ≈ 1000 in the centre bins as

at these redshifts the telescope resolutions are better and structures are less non-linear.

However, considering tomographic information is beyond the scope of this work. It is

noted that these scales are very large by weak lensing standards where optical surveys

typically make detections down to an ℓ of order 10^5.

2.4.3 External Priors from Planck

While the CMB is not sensitive to the late time effects of modified gravity (except by

the integrated Sachs-Wolfe effect), it is invaluable for constraining other parameters and

breaking degeneracies. As such, projected information from the Planck experiment is

included. The Planck covariance matrix used here is given in McDonald and Eisenstein

[2007, Table II]. All late time cosmological parameters (including the curvature) are


marginalized over, removing information contained in the ISW effect, and ensuring that

sensitivity to f(R) is entirely from 21 cm tests below. The only remaining parameter

that is related to the late time expansion is θs, the angular size of the sound horizon,

which is then used as a constraint on the parameter sets of the modified gravity models.

2.5 Results

To quantify the projected constraints on f(R) models, the Fisher matrix formalism is

employed. The HS f(R) model reduces to the fiducial model for vanishing fR0 and any

value of n. Thus the Fisher Matrix formalism is used to project constraints on fR0 for

given values of n. In the case of DGP, which does not reduce to the fiducial model, it

is shown that a measurement consistent with the fiducial model cannot be consistent

with DGP for any parameter set. Unless otherwise noted, we account for freedom in

the full cosmological parameter set: h, ωm, ωb, ωk, As and ns; representing the Hubble

parameter; physical matter, baryon and curvature densities; amplitude of primordial

scalar fluctuations and the spectral index; respectively.

Within the f(R) models, the fiducial model is a special point in the parameter space as

there are no modifications to gravity. As such, one cannot in general expect perturbations

to observables to be linear in the f(R) parameter fR0, an assumption implicit in the

Fisher Matrix formalism. This assumption does seem to hold for the expansion history,

where our first order perturbative calculation agrees with the full solution to the modified

Friedmann Equations calculated in Hu and Sawicki [2007]. However, this is not the case

for weak lensing. For each f(R) model, the lensing spectrum was calculated for several

values of fR0. It was observed that enhancements to the lensing power spectrum go as

C^{\kappa\kappa}(\ell) - C^{\kappa\kappa}_{\rm fiducial}(\ell) \sim (f_{R0})^{\alpha(\ell)},

with α(`) in the 0.5–0.7 range. This is because the reach of the enhanced forces in f(R) is

a power law in fR0 following Equation (2.3), and the enhancement of the power spectrum

for a given mode k roughly scales with the time that this mode has been within the reach

of the enhanced force. Because of this behaviour, the constraints derived within the

Fisher Matrix formalism depend on the step size in fR0 used for finite differences.

To correct for this, we use a step size that is dependent on the final constraint.

The weak lensing Fisher Matrices were calculated for fR0 step sizes of 10−3, 10−4 and

10−5. These were then interpolated—using a power law—such that the ultimate step

size used for finite differences is roughly the quoted constraint on the modified gravity


95% confidence HS |fR0| upper limits:

             n = 1      NL WL      n = 2      n = 4
  BAO        1.5e-02    ∼          1.8e-02    3.0e-02
  WL         2.3e-03    4.3e-05    4.0e-03    8.6e-03
  BAO+WL     5.0e-05    8.9e-06    9.7e-05    4.6e-04

Table 2.1: Projected constraints on f(R) models for various combinations of observational techniques, for a 200 m telescope. Constraints are the 95% confidence level upper limits and include forecasts for Planck. The nonlinear results (column marked NL WL) are for the HS model with n = 1. Results that make use of weak lensing with constraints above 10−3 are only order of magnitude accurate. The linear regime is taken to be ℓ < 140, with the nonlinear constraints extending up to ℓ = 600.

parameter. For instance when the 95% confidence constraint on fR0 is quoted, the step

size for finite differences is ∆fR0 ≈ 2σfR0, where σfR0 is calculated from the interpolated

Fisher matrix. This is expected to be valid down to step sizes at the 10−6 level where

the chameleon mechanism is important. As such, for constraints below 10−5 a step

size of 10−5 is always used. Note that this is conservative because an oversized finite difference step always underestimates the derivative of a power law with a power less

than unity. For constraints above the 10−3 level a step size of 10−3 is used, which is the

largest modification to gravity simulated. These constraints are considered unreliable

due to these difficulties. We reiterate that this only affects results that include weak

lensing information. Likelihood contours remain perfect ellipses in this procedure (which

is clearly inaccurate); however, the spacing between contours at different confidence levels

is altered.
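The step-size issue can be illustrated with a short numerical check: for an observable scaling as fR0^α with α < 1, a one-sided finite difference with an oversized step underestimates the local slope, which is the sense in which the quoted constraints are conservative. The function below is a toy stand-in for the lensing response, not the actual spectra.

```python
# Sketch: why the Fisher step size matters when an observable scales as a
# power law in the parameter. For g(f) = f**alpha with alpha < 1, a one-sided
# finite difference from the fiducial point f = 0 with an oversized step
# underestimates the local slope near the constraint, so the resulting
# forecast error bar is inflated (conservative). Toy function and values.

alpha = 0.6
g = lambda f: f ** alpha

def fd_slope(step):
    """One-sided finite difference of g at the fiducial point f = 0."""
    return (g(step) - g(0.0)) / step     # equals step**(alpha - 1)

# Smaller steps give steeper (larger) slopes near the origin, so an
# oversized step underestimates the derivative and hence the Fisher
# information, inflating rather than shrinking the quoted limit.
assert fd_slope(1e-5) > fd_slope(1e-4) > fd_slope(1e-3)
```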

Figure 2.2 shows the projected constraints on the HS f(R) model with n = 1 for

various combinations of observational techniques, and a (200 m)² telescope. The elements in the lensing Fisher matrix associated with the curvature are taken to be zero for the

reasons given in Section 2.4.2. While this assumption is not conservative, it is expected

to be valid, as the angular diameter distance as measured by the BAO is very sensitive

to the curvature. In total three f(R) models were considered: HS with n = 1, 2, 4. The

results are summarized in Table 2.1.

It was found that while weak lensing, in the linear regime, is very sensitive to the

modifications to gravity, it is only barely capable of constraining f(R) models without

separate information about the expansion history. Even with the inclusion of Planck

forecasts, degeneracies with h and ωk, the mean curvature, drastically increase the un-

certainties on the modified gravity parameters. Indeed these three parameters are more

than 95% correlated (depending on the exact model and confidence interval). This of

course brings into question the neglect of the ωk terms in the weak lensing Fisher Ma-


[Figure: allowed regions in the h–|fR0| plane for the BAO, BAO+WL, NL WL, and BAO+NL WL combinations.]

Figure 2.2: Projected constraints on the HS f(R) model with n = 1 using several combinations of observational techniques, for a 200 m telescope. All curves include forecasts for Planck. Allowed parameter values are shown in the fR0−h plane at the 68.3% and 95.4% confidence levels. Results are not shown for "WL", which were calculated much less accurately (see text).


[Figure: allowed regions in the h–|fR0| plane for the BAO, BAO+WL, NL WL, and BAO+NL WL combinations.]

Figure 2.3: Same as Figure 2.2 but for a 100 m cylindrical telescope.

trix. However it is noted that in these cases, the predicted limits on the curvature are

|ωk| < 0.025 at 95% confidence. The current model-independent limits on the curvature

using WMAP, SDSS and HST data are approximately half this value [Komatsu et al.,

2011]. Our neglect of any direct probes of the expansion history for the Planck+WL con-

straints is clearly unrealistic; however, the constraints illustrate what is actually measured

by weak lensing. In any case these degeneracies are broken once BAO measurements are

included, and in this final case the modified gravity parameters are correlated with the

other parameters by at most 35%. Also, considering lensing in the nonlinear regime

breaks the degeneracy to a certain extent.
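The correlations quoted above can be read off a Fisher matrix by inverting it to a covariance matrix and normalizing by the marginalized errors. A minimal sketch with toy numbers (the function name and the 2-parameter matrix are illustrative, not the actual forecast matrices used here):

```python
import numpy as np

def correlations(F):
    """Parameter correlation matrix from a Fisher matrix F:
    rho_ij = C_ij / sqrt(C_ii C_jj), where C = F^{-1} is the covariance."""
    C = np.linalg.inv(F)
    sigma = np.sqrt(np.diag(C))
    return C / np.outer(sigma, sigma)

# Toy 2-parameter Fisher matrix with a strong degeneracy:
F = np.array([[4.0, 3.8],
              [3.8, 4.0]])
print(correlations(F)[0, 1])  # ≈ -0.95: the two parameters are 95% (anti)correlated
```

Adding an independent probe corresponds to summing Fisher matrices before inverting, which is what breaks the degeneracy once BAO information is included.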

First generation cylindrical telescopes will likely be smaller than the one considered

above. To illustrate the differences in constraining ability, we now present a few results

for a cylindrical radio telescope that is 100 m on a side. Reducing the resolution of the experiment degrades measurements in a number of ways. BAO measurements become less than ideal in the higher redshift bins. The smallest scale that can be considered for weak lensing drops to about ℓ = 425. A more important effect is that the lensing spectra cannot be as accurately reconstructed, dropping the effective galaxy density down to n_g/σ_e² = 0.22. Figure 2.3 shows results analogous to Figure 2.2 but for a telescope with half the

resolution.

Figure 2.4: Ratio of the coordinate distance dA(z) (top) and the Hubble parameter H(z) (bottom) as predicted by the best fit DGP model to the fiducial model. Error bars are from 21 cm BAO predictions. Fit includes BAO data available from the 200 m telescope and CMB priors on θs and ωm.

To show that a set of measurements consistent with the fiducial model would be

inconsistent with DGP we first fit DGP to the fiducial model’s CMB and BAO expansion

history by minimizing

\chi^2 = (\mathbf{r}_{\mathrm{DGP}} - \mathbf{r}_{\mathrm{fiducial}})^{T}\, C^{-1}\, (\mathbf{r}_{\mathrm{DGP}} - \mathbf{r}_{\mathrm{fiducial}}),   (2.16)

where rDGP and rfiducial are vectors of observable quantities as calculated in the DGP

and fiducial models, and C is the covariance matrix. r includes BAO dA(z) and H(z) as

well as Planck priors on ωm and θs. Note that χ2 is not truly chi-squared since rfiducial

contains fiducial model predictions and is not randomly distributed like a real data set.

Performing the fit yields DGP parameters: h = 0.677, ωm = 0.112, ωk = −0.0086 and

ωrc = 0.067. Figure 2.4 shows the deviation of H and dA respectively for the best fit DGP

model compared to the fiducial model. The fit gives χ² = 332.8 despite there being only 16 degrees of freedom, and as such a measurement consistent with the fiducial model would

thoroughly rule out DGP.
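The fit above amounts to minimizing the χ² of Eq. 2.16 over the DGP parameters. A minimal sketch of the χ² evaluation, with toy observable vectors and a diagonal covariance (the real r contains the BAO dA(z) and H(z) values and the CMB priors):

```python
import numpy as np

def chi2(r_model, r_fiducial, C):
    """Eq. (2.16): chi^2 = (r_model - r_fid)^T C^{-1} (r_model - r_fid)."""
    d = np.asarray(r_model) - np.asarray(r_fiducial)
    return float(d @ np.linalg.solve(C, d))  # solve() avoids forming C^{-1} explicitly

# Toy example: three observables, each with a 10% (0.1) diagonal error.
r_fid = np.array([1.0, 2.0, 3.0])
r_dgp = np.array([1.1, 1.9, 3.2])
C = np.diag([0.1**2, 0.1**2, 0.1**2])
print(chi2(r_dgp, r_fid, C))  # ≈ 6.0, i.e. (1 + 1 + 4) in units of sigma^2
```

A full fit would pass this χ² to a minimizer over (h, ωm, ωk, ωrc); the sketch only shows the quadratic form itself.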

Figure 2.5: Weak lensing spectra for DGP and a smooth dark energy model with the same expansion history. DGP parameters are h = 0.665, ωm = 0.116, ωk = 0 and ωrc = 0.06. Error bars represent the expected accuracy of the 200 m telescope.

In the case that expansion history measurements are consistent with DGP, the question arises as to whether DGP could be distinguished from a smooth dark energy model

that had the same expansion history. The additional information in linear perturbations

as measured by weak lensing allows DGP to be distinguished even from a dark energy

model with an identical expansion history. Figure 2.5 shows the lensing spectra for a

DGP cosmology similar to the best fit discussed above, as well as the dark energy model

with the same expansion history as in Fang et al. [2008].

In principle one should consider the small amount of freedom within the DGP parameter set that could be used to make the DGP spectrum better fit the dark energy

spectra. However this is unlikely to significantly change the spectrum as all relevant

parameters are tightly constrained by the CMB and BAO. For example it is clear from

Figure 2.5 that the lensing spectra of the two models would better agree if the amplitude of primordial scalar perturbations were increased in the DGP model. However, Planck measurements would only allow an increase of order half a percent, while the disagreement is of order 10%. This conclusion is supported by the lack of correlations found in the f(R) Fisher matrices

once all three observational techniques are included. In addition we have not considered

information from weak lensing in the nonlinear regime. Adding nonlinear scales would


only make our conclusion that DGP and smooth dark energy are distinguishable with

these observations more robust.

2.6 Discussion

We have shown that the first generation of 21 cm intensity mapping instruments will

be capable of constraining the HS f(R) model (with n = 1) down to a field value of

|fR0| ≲ 2 × 10⁻⁵ at 95% confidence (Figure 2.3). This is an order of magnitude tighter than

constraints currently available from galaxy cluster abundance [Schmidt et al., 2009b].

Furthermore, model parameters in this regime are not ruled out by Solar System tests.

In comparing Figures 2.2 and 2.3 it is clear that a more advanced experiment, with

resolution improved by a factor of two, would further halve the allowed value of |fR0|. It

should be noted however, that halving of the allowed parameter space does not correspond

to a factor of four increase in information. Deviations in the lensing spectrum scale sub-linearly in the f(R) parameters, enhancing the narrowing of constraints as information

is added (see Section 2.5).

While we have concentrated on a particular f(R) model, many viable functional forms

for f(R) have been proposed in the literature [Nojiri and Odintsov, 2007, Starobinsky,

2007, Appleby and Battye, 2007]. The predictions for the growth of structure in these

different models agree qualitatively: the gravitational force is enhanced by a factor of 4/3 within λC,

enhancing the growth on small scales. However, there are quantitative differences in the

model predictions due to the different evolution of λC over cosmic time. Our results for

the HS model with different values of n should thus cover a range of different functional

forms for f(R). Table 2.1 shows that our constraints do not depend very sensitively on

the value of n. This is because the weak lensing measurements cover a wide range of

scales as well as redshifts. Furthermore, it is straightforward to map the enhancement

in the linear P (k, z) at given k and z from the HS model considered here to any other

given model, to obtain approximate constraints for that model.

Future cluster constraints will almost certainly improve on the current limits of

|fR0| ≲ few × 10⁻⁴ [Schmidt et al., 2009b]. However, for smaller field values, the main effect

of f(R) gravity shifts to lower mass halos, since the highest mass halos are chameleon-screened (see Fig. 2 in Schmidt et al. [2009a]). Hence, future cluster constraints will

depend on the ability to accurately measure the halo abundance at masses around a few × 10¹⁴ M_⊙ and less. Furthermore, the constraints from cluster abundances depend sensitively on the knowledge of the cluster mass scale, and are already systematics-dominated

[Schmidt et al., 2009b]. Weak lensing constraints have a completely independent set of


observational systematics, and are in principle less sensitive to baryonic or astrophysical

effects. Thus, the forecasted constraints on modified gravity presented here are quite

complementary to constraints from cluster abundances.

The processes that produce the BAO feature in the matter power spectrum are understood from first principles. In addition the BAO length scale can be extracted even in

the presence of large uncertainties in biases and mass calibrations. Likewise, weak lensing

on large scales is well understood, with baryonic physics being much less important than

on smaller scales [Zhan and Knox, 2004]. In addition the dominant systematics present

in optical weak lensing surveys are instrumental in nature and not intrinsic to the quantities being measured. While 21 cm intensity mapping is as yet untested, instrumental

systematics will be very different from those that affect the optical.

In the case of this study, and more generally for cosmological models which substantially modify structure formation, the motivation for higher resolution comes not

from improved BAO measurements but from better weak lensing reconstruction. Higher

resolution not only makes weak lensing information available at higher multipoles, but

improves the accuracy at which lensing can be reconstructed on all scales.

The inclusion of lensing information in the nonlinear regime was crucial, and largely

responsible for the competitiveness of these forecasts. As seen in Figure 2.1, much of

the constraining power comes from multipoles in the nonlinear regime. It should be noted that

for the higher resolution experiment considered, the minimum scale is limited not by

the resolution at high redshift, but by the saturation of information in nonlinear source

structures at low redshift [Lu et al., 2010].

Our constraints from lensing are conservative since only one wide source redshift bin

was considered, limited to ℓ < 600 as described above. To maximize information, the

source redshift range could be split into multiple bins, properly considering the correlation

in the lensing signal between them; a process known as lensing tomography. The low

redshift bin would be limited as above, and the high redshift bin would be limited by the

resolution to ℓ ≈ 850 at z = 2.5. However, in intermediate bins, the lensing signal could be reliably reconstructed above ℓ ≈ 1000.

Unlike most smooth dark energy models, such as quintessence, constraints on the models considered here are chiefly sensitive to structure formation, as is clear from Figure 2.2.

These forecasts show that 21 cm intensity mapping is not only sensitive to a cosmology’s

expansion history through the BAO, but also to structure growth through weak lensing. The weak lensing measurements cannot compete with far-off space-based surveys like Euclid or JDEM, which will have galaxy densities of order 100 arcmin⁻² [Albrecht et al., 2006] and resolution to far greater ℓ. However, cylindrical 21 cm experiments are


realizable on a much shorter time scale and at a fraction of the cost. In addition, the

measurements considered here are approaching the limit at which f(R) models can be

tested. For |fR0| much less than 10⁻⁶ the chameleon mechanism becomes important before there are observable modifications to structure growth, reducing the motivation to

further study these models.

It has also been shown that, for these experiments, a BAO measurement consistent

with ΛCDM would definitively rule out DGP without a cosmological constant as a cosmological model. Even in the case that a BAO measurement consistent with DGP is made, the model is still distinguishable from an exotic smooth dark energy model through structure growth. The former result is not surprising given that DGP is now in conflict with

current data [Fang et al., 2008]. However it is illustrative that a single experiment can

precisely probe both structure formation and expansion history. Even a dark energy

model that conspires to mimic DGP is, to a large extent, distinguishable.

We have studied the effects of modified gravity theories on observational quantities for

future 21 cm surveys. Because these surveys measure the distribution of galaxies on large

angular scales over large parts of the sky, they are well suited to measure the expected

deviations relative to standard general relativity. We have computed the predictions of

modified gravity in the linear and nonlinear regimes, and compared to the sensitivity of

future surveys. We find that a large part of parameter space can be tested.

Acknowledgements

We would like to thank Tingting Lu for helpful discussions. KM is supported by an

NSERC Canadian Graduate Scholars-M scholarship. FS is supported by the Gordon

and Betty Moore Foundation at Caltech. PM acknowledges support of the Beatrice D.

Tremaine Fellowship.


Chapter 3

Primordial gravity wave fossils and

their use in testing inflation

A version of this chapter was published in Physical Review Letters as “Primordial Gravity

Wave Fossils and Their Use in Testing Inflation”, Masui, K. W. and Pen, U.-L., Vol. 105,

Issue 16, 2010. Reproduced here with the permission of the APS.

3.1 Summary

A new effect is described by which primordial gravity waves leave a permanent signature

in the large scale structure of the Universe. The effect occurs at second order in perturbation theory and is sensitive to the order in which perturbations on different scales

are generated. We derive general forecasts for the detectability of the effect with future

experiments, and consider observations of the pre-reionization gas through the 21 cm line.

It is found that the Square Kilometre Array will not be competitive with current cosmic

microwave background constraints on primordial gravity waves from inflation. However,

a more futuristic experiment could, through this effect, provide the highest ultimate sensitivity to tensor modes and possibly even measure the tensor spectral index. It is thus

a potentially quantitative probe of the inflationary paradigm.

3.2 Introduction

It has been proposed that redshifted 21 cm radiation, from the spin flip transition in

neutral hydrogen, might be a powerful probe of the early universe. The era before

the first luminous objects reionized the universe–around redshift 10–contains most of


the observable volume of the universe, and 21 cm radiation is the only known probe of

these so-called dark ages (see Furlanetto et al. [2006] for a review). The density of the

hydrogen could be mapped in 3D analogous to how the cosmic microwave background

(CMB) is mapped in 2D. The wealth of obtainable statistical information may allow for

the detection of many subtle effects which could probe the early universe. In particular,

the primordial gravity wave background, also referred to as tensor perturbations, is of

considerable cosmological interest.

Inflation robustly predicts the production of tensor perturbations with a nearly scale-free spectrum; however, their amplitude is essentially unconstrained theoretically. The

amplitude of the tensor power spectrum is quantified by r, the tensor to scalar ratio.

The current upper limit is r < 0.24 at 95% confidence [Komatsu et al., 2011]; however,

upcoming CMB measurements will be sensitive down to r of a few percent [Burigana

et al., 2010]. The current limits on r correspond to characteristic primordial shear on the

order of 10⁻⁵ per logarithmic interval of wavenumber.
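The quoted order of magnitude follows from the amplitude of a nearly scale-free tensor spectrum: the mean square shear per logarithmic interval of wavenumber is roughly r × A_s. A quick numerical check (the variable names are illustrative; A_s is the WMAP7 scalar amplitude used later in the text):

```python
import numpy as np

# Characteristic primordial shear per logarithmic interval of wavenumber:
# <h^2> per ln k ~ r * A_s for a nearly scale-free tensor spectrum.
A_s = 2.46e-9   # WMAP7 scalar amplitude
r = 0.24        # current 95% upper limit on the tensor-to-scalar ratio
h_rms = np.sqrt(r * A_s)
print(h_rms)    # ≈ 2.4e-5, i.e. of order 1e-5
```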

Several probes of gravity waves using the pre-reionization 21 cm signal have been

proposed. These include polarization [Lewis and Challinor, 2007] and redshift space

distortions [Bharadwaj and Sarkar, 2009]. Dodelson et al. [2003] considered the weak

lensing signature of gravity waves and found that the signal is sensitive to the so called

metric shear. This is closely related to the present work.

Here we describe a mechanism by which primordial gravitational waves may leave an

imprint in the statistics of the large scale structure (LSS) of the universe. This signature

becomes observable when the gravity wave enters the horizon and begins to decay.

3.3 Mechanism

In the following, Greek indices run from 0 to 3 and lower-case Latin indices from 1 to 3. Latin indices are always raised and lowered with Kronecker deltas. Commas denote partial derivatives, and an over-dot ( ˙ ) represents a derivative with respect to the cosmological conformal time. Finally, we adopt a mostly positive metric signature (−1, 1, 1, 1).

We start with an inflating universe with some distribution of previously generated

tensor modes that are now super horizon (have wavelength much longer than the horizon

scale). Scalar, vector and smaller scale tensor modes may exist but their contribution to

the metric is ignored. The line element is given by

ds^2 = a(\eta)^2 \left[ -d\eta^2 + (\delta_{ij} + h_{ij})\, dx^i dx^j \right],   (3.1)

where a is the scale factor, η the conformal time and a spatially flat background geometry

has been assumed. The metric perturbations hij are assumed to be transverse and

traceless and thus contain only tensor modes. The elements of hij are also assumed

to be small such that only leading order terms need be retained. The assumption that

all tensor modes under consideration are super horizon implies that k_h ≪ ȧ/a, where k_h

denotes the wave numbers of tensor modes. The frame in which the line element takes

the form in Eq. 3.1 will hereafter be referred to as the cosmological frame (CF).

By the equivalence principle, it is possible to perform a coordinate transformation

such that the space-time appears locally Minkowski at a point. New coordinates are

defined in which the tensor modes are gauged away at the origin:

\tilde{x}^\alpha = x^\alpha + \tfrac{1}{2} h^{\alpha}{}_{\beta}\, x^{\beta},   (3.2)

where the elements h0α are taken to be zero. The metric now takes the form (up to first

order in hij)

ds^2 = a^2 \left[ -d\eta^2 + \delta_{ij}\, dx^i dx^j - x^c\, \partial_\alpha h_{\beta c}\, dx^\alpha dx^\beta \right].   (3.3)

This frame will be loosely referred to as the locally Friedmann frame (LFF), because in

these coordinates the metric is locally that of an unperturbed FLRW Universe. We will

give quantities in these coordinates a tilde ( ˜ ) to distinguish them from their counterparts in the CF. It is seen from Eq. 3.3 that the local effects of gravity waves are suppressed not only by the smallness of h_ij but also by k_h/k, where k = L⁻¹ and L is some length scale of interest. This will be important in justifying some later assumptions. Note that for super horizon gravity waves, temporal derivatives are much smaller than spatial ones.

On small scales, inflation generates scalar perturbations which are then carried to

larger scales by the expansion. By the equivalence principle, physical processes on small

scales cannot know about the long wavelength tensor modes. As such these small scale

scalar modes must be uncorrelated with the long wavelength tensor modes. We assume

statistical homogeneity and isotropy in the LFF as would be expected from inflation.

The power spectrum of scalar perturbations can then be written as a function of only

the magnitude of the wave number, i.e., P(k_a) = P(k). This applies only within the

local patch near the point where the tensor mode was gauged away. The average in the

definition of the scalar power spectrum is over realizations of the scalar map, but not the

tensor map.

In the CF, the isotropy is broken. Transforming back to cosmological coordinates


maps k_i → k_i − k_j h^j{}_i/2. The power spectrum becomes sheared:

P(k_a) = P(k) - \frac{k^i k^j h_{ij}}{2k} \frac{dP}{dk} + \mathcal{O}\!\left(\frac{k_h}{k}\, h_{ij}\right) + \mathcal{O}(h_{ij}^2).   (3.4)

If the metric perturbations are not assumed to be traceless, the right hand side of this

equation gains an additional term proportional to this trace. This deviation from isotropy

is not observable since any possible observation would take place in the LFF.
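A numerical sketch of the leading-order shear term in Eq. 3.4, for an illustrative power-law P(k) and a single “+” polarized mode propagating along z (the function name, power-law index and shear amplitude are all toy values, not parameters from the text):

```python
import numpy as np

def sheared_power(k_vec, P, dPdk, h):
    """Leading-order sheared spectrum of Eq. (3.4):
    P(k_a) = P(k) - (k^i k^j h_ij / 2k) dP/dk, for transverse-traceless h_ij."""
    k = np.linalg.norm(k_vec)
    quad = k_vec @ h @ k_vec          # the quadrupolar combination k^i k^j h_ij
    return P(k) - quad / (2 * k) * dPdk(k)

# Toy power law P(k) = A k^n and a "+" polarized wave along z:
A, n = 1.0, -2.0
P = lambda k: A * k**n
dPdk = lambda k: n * A * k**(n - 1)
h = 1e-5 * np.diag([1.0, -1.0, 0.0])  # h_+ e^+_ij for propagation along z

print(sheared_power(np.array([1.0, 0.0, 0.0]), P, dPdk, h))  # ≈ 1 + 1e-5
print(sheared_power(np.array([0.0, 0.0, 1.0]), P, dPdk, h))  # 1.0: no shear along the propagation direction
```

The sign and magnitude of the distortion depend on the orientation of k relative to the polarization tensor, which is exactly the quadrupolar anisotropy a quadratic estimator would reconstruct.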

It is noted that the leading order correction to the CF power spectrum is not suppressed by k_h/k. It is therefore not expected that the residual terms in the LFF metric (Eq. 3.3) can break isotropy to undo the CF anisotropy. If, however, it were the CF in which the power spectrum were isotropic, then there would be observable anisotropy in the LFF.

This would be a violation of the equivalence principle, since an experiment local in both

space and time would be able to detect the super horizon tensor modes by measuring the

power spectrum of the locally generated scalar perturbations.

We would now like to evolve the system to some later time when observations can

be made. Ignoring the internal dynamics of the scalar perturbations, we solve for their

evolution as if they were embedded in a sea of test particles. This is trivial since an

object at coordinate rest in the CF will remain at rest for any time dependence of hij

(this is true at all orders). At some point well after inflation, when the universe is

in its deceleration phase, the horizon will become larger than the length scale of the

tensor modes. The tensor modes will then decay by redshifting, and after some period

of time the metric perturbations hij become negligible. The CF and LFF then become

equivalent and both correspond to the frame in which observations can be made. The

distribution of test particles is the same as it initially was in the CF. As such, the initially

physically isotropic power spectrum now contains a measurable local anisotropy given by

Eq. 3.4. The values of the initial metric perturbations can be determined by measuring

this distortion at any time in the future, constituting a fossil of the initial tensor modes.

The scalar perturbations remain Gaussian but become non-stationary, and the trispectrum gains the corresponding terms. This is analogous to the apparent distortions expected in the CMB and 21 cm fields induced by gravitational lensing. Similarly, the

bispectra of mixed scalars and tensors were calculated in Maldacena [2003], employing

similar methodology to that presented here.

The effect described here is a second order perturbation theory effect, in that it is a

small effect due to tensor modes on the already small scalar perturbations. This coupling

occurs in the initial conditions, not between the dynamics of the scalars and tensors. The

simple argument presented above avoided the complication of a full second order calculation, but it is expected that such calculations would yield the same results. Specifically,

an expression agreeing with Eq. 3.4, to relevant order, was derived in Giddings and Sloth

[2011a, Eq. 4.5] as part of a longer calculation.

3.4 Tests of inflation

The above arguments relied on perturbations on large scales being generated before perturbations on small scales. This is the case in any conceivable model of inflation; however, it need not be the case in all scenarios. As an illustrative example, in the cosmic defect

scenario perturbations are generated on small scales and then causally transported to

larger scales as the universe evolves. It is argued that in this scenario, tensor perturbations leave no fossils, i.e., the described effect does not occur. A detection of primordial

tensors by another means (CMB B-modes for example) with an observed lack of the

corresponding fossils would provide a serious challenge to inflation.

The most specific prediction of single field inflation is the power spectrum of tensor

modes, defined by

(2\pi)^3 \delta(k_a - k'_a)\, P_h(k_a) \equiv \langle h_{ij}(k_a)\, h^{ij}(k'_a) \rangle.   (3.5)

Given the amplitude of the scalar power spectrum As, the tensor power spectrum is fixed

by a single parameter, the tensor to scalar ratio r. The shape of the spectrum is then

nearly scale-free:

P_h = \frac{2\pi^2 r A_s}{k^3} \left( \frac{k}{k_0} \right)^{n_t}.   (3.6)

We follow the WMAP conventions for defining Ph, As and r [Komatsu et al., 2009]. The

spectral index is fixed by the consistency relation, n_t = −r/8 [Liddle and Lyth, 2000]. The pivot scale is taken to be k₀ = 0.002 Mpc⁻¹ and we assume the WMAP7 central value for A_s of 2.46 × 10⁻⁹.
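Eq. 3.6 together with the consistency relation fixes the spectrum by r alone. A minimal sketch using the conventions above (the function name is illustrative; the defaults are the WMAP7 A_s and the pivot k₀ = 0.002 Mpc⁻¹):

```python
import numpy as np

def tensor_power(k, r, A_s=2.46e-9, k0=0.002):
    """Tensor power spectrum of Eq. (3.6) with the consistency
    relation n_t = -r/8; k and k0 in Mpc^-1."""
    n_t = -r / 8.0
    return 2 * np.pi**2 * r * A_s / k**3 * (k / k0)**n_t

# At the pivot scale the tilt factor is unity, so k^3 P_h / (2 pi^2) = r * A_s:
print(tensor_power(0.002, r=0.1) * 0.002**3 / (2 * np.pi**2))  # ≈ 2.46e-10
```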

Because r is likely small, any deviation from a scale-free spectrum will be difficult

to measure, making the verification of the consistency relation correspondingly difficult.

The CMB is sensitive primarily to large scale tensor modes, with smaller scale modes

having decayed by recombination. Cosmic variance and lensing contamination will likely

prevent a measurement of nt from the CMB, unless the lensing can be cleaned from the

signal [Zhao and Baskaran, 2009]. Conversely, the amplitude of the fossil signal does not

decay as the universe expands. It may thus be possible to make a measurement of the

spectral index, provided r is sufficiently large.


3.5 Statistical detection in LSS

In practice, the tensor gravity wave fossils could be reconstructed by applying quadratic

estimators to the density field. Aside from the increased dimensionality, this is identical

to the manner in which lensing shear is reconstructed [Zaldarriaga and Seljak, 1999, Lu

and Pen, 2007]. Rather than considering the statistics of such estimators, here we follow

a simpler line of reasoning to approximate the accuracy to which the tensor parameter

can be measured.

We begin by asking how well a long wavelength tensor mode can be reconstructed

from its effects on the scalar power spectrum (Eq. 3.4). The metric perturbations are

assumed to be spatially constant and take the form

h_{ij} = h_+ e^{+}_{ij}(\hat{z}) + h_\times e^{\times}_{ij}(\hat{z}),   (3.7)

where e^{+}_{ij} and e^{\times}_{ij} are the polarization tensors and the z direction of propagation is chosen

for convenience. The uncertainty on the scalar power spectrum is

[\Delta P(k_a)]^2 = 2\, [P(k_a) + N]^2,   (3.8)

where N is the noise power. We use a Fisher Matrix analysis to sum this information over

all k_a to determine the corresponding uncertainty on the shear h_+ and h_×. Assuming an experiment whose noise is subdominant to sample variance (N ≪ P), the resulting

variance is inversely proportional to the number of modes surveyed:

(\Delta h^C)^2 \sim \left[ V (k_{max}/2\pi)^3 \right]^{-1},   (3.9)

where h stands for either h_+ or h_× (the superscript C indicates that the formula applies

for spatially constant h), V is the volume of the survey and kmax is set by the resolution

of the survey. The constant of proportionality depends on the shape of the unsheared

power spectrum P (k), but to within a few tens of percent it is unity. 21 cm emission will

be difficult to observe on large scales [Furlanetto et al., 2006], however it is small scales

that dominate the number of modes and thus the reconstruction. It is only the coherence

of small scale anisotropy that must be measured on large scales.

Given the reconstruction uncertainty on a spatially constant shear, and the fact that

reconstruction noise is scale independent (white) [Zaldarriaga and Seljak, 1999], the noise


power spectrum for spatially varying tensor modes is then

N_h = 4V (\Delta h^C)^2 = 4 \left( \frac{2\pi}{k_{max}} \right)^3.   (3.10)

The factor of four comes from the definition of the power spectrum in Eq. 3.5, noting

that ⟨h_{ij} h^{ij}⟩ = 4⟨h²⟩. We now sum over k_a¹ to determine the signal to noise as a function of tensor power

spectrum amplitude r. The signal to noise ratio squared is then

\mathrm{SNR}^2 = \sum_{k_a,\, +,\, \times} \frac{P_h^2}{2(N_h + P_h)^2}   (3.11)

\approx V \int_{k_{lower}}^{k_{upper}} \frac{dk\, k^2}{2\pi^2}\, \frac{P_h^2(k)}{(N_h + P_h)^2}.   (3.12)

It is seen from the redness of the spectrum Ph (Eq. 3.6) that the result is completely

independent of the upper limit of integration. The same redness makes the final result

extremely sensitive to the lower limit. As described above, the fossil of a primordial

tensor mode can only be observed once the mode has decayed. This begins to happen

when the scale of the gravity wave becomes comparable to the horizon scale, and as such,

the largest observable mode has wave number k_lower ≈ aH.

For an initial detection, we assume that noise dominates sample variance at each ka,

i.e., N_h ≫ P_h. Setting the signal to noise ratio to 2, for a 95% confidence detection,

yields a minimum detectable amplitude of

r_{min} = \frac{32\pi^2}{A_s k_{max}^3} \left( \frac{6}{V\, V_H(z)} \right)^{1/2},   (3.13)

where V_H ≡ (aH)⁻³.

While the observability of 21 cm radiation depends on the reionization model, one

regime in which a strong signal may exist is near redshift 15 [Furlanetto et al., 2006]. The

planned Square Kilometer Array (SKA) will aim to probe this era with 10 km baselines

[Dewdney et al., 2009]. Assuming a survey volume of 200 (Gpc/h)3 and a noiseless

measurement, the limit on r achievable with SKA will be

r_{min} \approx 7.3 \left( \frac{1.2\, h/\mathrm{Mpc}}{k_{max}} \right)^3 \left[ \frac{200\, (\mathrm{Gpc}/h)^3}{V}\, \frac{3.3\, (\mathrm{Gpc}/h)^3}{V_H} \right]^{1/2}.   (3.14)

¹ From this point forward, k_a will refer to the wave number of a tensor mode, not a scalar mode. The exception will be k_max, which is the smallest scale at which a scalar can be resolved.
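Eq. 3.13 can be evaluated directly to reproduce the prefactor of Eq. 3.14. A sketch with the survey numbers assumed in the text (the function name and unit-conversion variable are illustrative):

```python
import numpy as np

def r_min(k_max, V, V_H, A_s=2.46e-9):
    """Minimum detectable tensor-to-scalar ratio, Eq. (3.13).
    k_max in h/Mpc; V and V_H in (Mpc/h)^3."""
    return 32 * np.pi**2 / (A_s * k_max**3) * np.sqrt(6.0 / (V * V_H))

gpc3 = 1e9  # one (Gpc/h)^3 in (Mpc/h)^3
# SKA-like survey near z ~ 15: V = 200 (Gpc/h)^3, V_H = 3.3 (Gpc/h)^3
print(r_min(k_max=1.2, V=200 * gpc3, V_H=3.3 * gpc3))  # ≈ 7.1, cf. the 7.3 prefactor of Eq. (3.14)
```

The steep k_max⁻³ scaling is why the limit is such a strong function of the experiment's resolution.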


While this constraint is not competitive with current constraints from the CMB, it is a

strong function of the resolution of the experiment. The Low Frequency Array (LOFAR)

for instance, has baselines extending to 400 km. However LOFAR will not have the

sensitivity to probe the dark ages [Rottgering et al., 2003]. It is the physical shear due

to gravity waves at the source that is being measured, and all light propagation effects,

such as the lensing considered in Dodelson et al. [2003], have been ignored.

Similar arguments are used to find the achievable error on the spectral index nt.

Properly considering the degeneracy with r, the error on nt is:

\Delta n_t = F \left[ \left( \frac{2\pi}{k_{max}} \right)^3 \frac{1}{r A_s V} \right]^{1/2},   (3.15)

where F is a function of the combination of parameters V_H/(k_max³ r A_s). In the limit that P_h(k = aH) ≫ N_h, which is the limit in which a measurement of n_t is possible, F is approximately 6. Assuming the same volume and redshift as above, and that r = 0.1, the consistency relation is tested at the 2 sigma level for k_max = 168 h/Mpc. The tensor

power spectrum and error bars for this scenario are shown in Fig. 3.1.
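As a numerical check of the numbers above (r = 0.1, V = 200 (Gpc/h)³, k_max = 168 h/Mpc, F ≈ 6), Eq. 3.15 does give roughly a 2σ test of n_t = −r/8 (the function name is illustrative):

```python
import numpy as np

def delta_nt(k_max, V, r, A_s=2.46e-9, F=6.0):
    """Forecast error on the tensor tilt, Eq. (3.15), in the signal-dominated
    limit where F ~ 6; k_max in h/Mpc, V in (Mpc/h)^3."""
    return F * np.sqrt((2 * np.pi / k_max)**3 / (r * A_s * V))

r = 0.1
significance = abs(-r / 8) / delta_nt(k_max=168.0, V=200e9, r=r)
print(significance)  # ≈ 2: the consistency relation is tested at the 2 sigma level
```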

Such a measurement is very futuristic indeed, requiring a nearly filled array with baselines greater than a thousand kilometres. Note that such an experiment would be

sensitive to r down to the 10⁻⁶ level. Also, higher redshifts contain even more information,

though their observation is technically more challenging.
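As a rough numerical check, the fiducial scenario described above can be plugged directly into Equations 3.14 and 3.15. This is a sketch under stated assumptions, not part of the original analysis: F ≈ 6 and unit volume ratios are taken from the text, and the scalar amplitude log10 As = −8.64 is borrowed from the fiducial cosmology quoted later in this thesis.

```python
import math

# Assumed fiducial values (see lead-in): k_max = 168 h/Mpc, V = 200 (Gpc/h)^3,
# r = 0.1, F ~ 6, log10(A_s) = -8.64.
k_max = 168.0            # h/Mpc
V = 200e9                # (Mpc/h)^3, i.e. 200 (Gpc/h)^3
r = 0.1
A_s = 10 ** -8.64        # scalar perturbation amplitude
F = 6.0                  # approximate prefactor quoted in the text

# Equation 3.14 with both bracketed volume ratios set to unity:
r_min = 7.3 * (1.2 / k_max) ** 3
print(f"r_min ~ {r_min:.1e}")    # of order 1e-6, as stated in the text

# Equation 3.15: achievable error on the tensor spectral index n_t.
delta_nt = F * math.sqrt((2 * math.pi / k_max) ** 3 / (r * A_s * V))

# The consistency relation predicts n_t = -r/8; significance of the test:
nt = -r / 8.0
print(f"|n_t|/Delta n_t ~ {abs(nt) / delta_nt:.1f} sigma")   # roughly 2 sigma
```

The recovered significance of roughly 2σ matches the statement in the text for kmax = 168 h/Mpc.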

3.6 Discussion

Aside from the technical challenge of mapping the 21 cm signal over hundreds of cubic

gigaparsecs and down to scales smaller than a megaparsec, there may be other competing

effects that could hinder a detection. Of primary concern is weak lensing, which also

shears observed structures, creating apparent local anisotropies. The weak lensing shear

is of order a few percent, and is thus many orders of magnitude greater than gravity

wave shear. However, the 3D map of gravity wave shear will be transverse, transforming

intrinsically as a tensor. To linear order, the lensing pattern is the gradient of a scalar.

Even at higher order, lensing always maps one point in space to another and is thus

at most vector like. This test does not exist for the CMB or lensing due to the lower

dimensionality of these probes.

Also of concern is the preservation of the anisotropy on small scales. The scale

corresponding to k = 168h/Mpc is still larger than the Jeans length at these redshifts,


Figure 3.1: Primordial tensor power spectrum obeying the consistency relation for r = 0.1. The solid line is the tensor power spectrum. Error bars represent the reconstruction uncertainty on the binned power spectrum for a noiseless experiment, surveying 200 (Gpc/h)^3 and resolving scalar modes down to kmax = 168 h/Mpc. The dashed, nearly vertical, line is the reconstruction noise power. The non-zero slope of the solid line is the deviation from scale-free. [Axes: k^3 Ph/(2π^2) vs. k (h/Mpc).]

and as such hydrogen should trace the dark matter. However, the evolution of scalar

perturbations is mildly nonlinear, and it is possible that this evolution will erase the

anisotropy. Detailed analysis of the nonlinear erasure of the anisotropy is deferred to

future investigation.

There has been much recent interest in searching for anisotropy, and this has some

implications for the fossil signal. The constraints on quadrupolar anisotropy in LSS by Pullen and Hirata [2010] should already imply a weak constraint at the r ≲ 10^6 level.

Constraints from the CMB are not relevant, however, since modes spanning the surface of last scatter remain super-horizon today.

CMB B-modes will be the most sensitive probe of primordial gravity waves in the

next generation of experiments. However, fossils may eventually be sensitive well below

the limits of the CMB.

3.7 Addendum

At the time of writing of this thesis, the observability of the effect presented in this

chapter was disputed by Pajer et al. [2013]. Their central argument is that if there is

a separation of scales (in this chapter's notation kh ≪ k) only tidal fields are locally observable. In this case all observables must be proportional to (kh/k)^2. This is in


contrast to the leading-order fossil effect, given in Equation 3.4, where the anisotropy is proportional to the amplitude of the tensor mode (hij), with kh not appearing until higher order.

The source of the discrepancy is the extra information gained by having a prolonged

observation rather than an instantaneous one. We will use a simplified version of the

Laser Interferometer Gravitational Wave Observatory (LIGO) [Abramovici et al., 1992]

to illustrate this argument. Our simplified LIGO has two mirrors with separation (d) of roughly a kilometre, which follow geodesics in one dimension by being suspended

vertically on wires. The distance between the mirrors is precisely measured by setting up

an interferometer between them. In the presence of a gravitational wave, with a wave-

length of several thousand kilometres, the proper distance between the mirrors oscillates,

allowing for the detection of the gravity wave. It is true that instantaneously only the

acceleration of the mirrors is observable, leading to a suppressed signal whose amplitude is

\frac{d^2 d}{dt^2} \sim \omega_h^2 h_{ij} d \sim c^2 k_h^2 h_{ij} d, \quad (3.16)

which is indeed suppressed by kh^2. However, LIGO does not make an instantaneous

measurement of acceleration, but a measurement of distance extended in time. If the

measurement is performed over several periods of the gravitational wave, the signal is ∆d(t) ∼ hij d, no longer suppressed by the separation of scales.
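The suppression argument can be made concrete with toy numbers. All values below are illustrative stand-ins, not actual LIGO specifications: a kilometre-scale mirror separation and a gravitational wavelength of several thousand kilometres.

```python
import math

# Toy numbers for the simplified LIGO argument (illustrative only):
d = 1e3            # mirror separation, ~1 km
lam = 3e6          # gravitational wavelength, several thousand km
h = 1e-21          # strain amplitude (assumed)
c = 3e8            # speed of light, m/s

k_h = 2 * math.pi / lam
omega = c * k_h                    # wave angular frequency

# Instantaneous observable: mirror acceleration (Equation 3.16).
accel = omega**2 * h * d           # ~ c^2 k_h^2 h_ij d

# Displacement inferred from that acceleration over a short time dt << period:
dt = 1e-4                          # seconds
delta_d_instant = accel * dt**2

# Displacement accumulated over a full wave period:
delta_d_full = h * d

# The instantaneous signal is suppressed by the separation of scales:
print(delta_d_instant / delta_d_full)   # << 1, equal to (omega*dt)^2
```

Extending the measurement over a full period recovers the unsuppressed signal ∆d ∼ hij d, which is the point of the argument above.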

The situation is analogous for the fossil signal in which changes in position are mea-

sured through the local power spectrum of the scalar perturbations. The integral over

time, which we just argued is necessary to avoid the kh/k suppression, is provided by the

geodesic motions of matter throughout the evolution and decay of the gravitational wave.

Our observation of the scalar modes is instantaneous; however, we have an additional piece

of information: the initial isotropic shape of the power spectrum. This is because ini-

tially during inflation, before the scalar perturbations have evolved, all observables for

the super-horizon tensor modes should be suppressed by separation of scales.

Acknowledgements

We would like to thank Patrick McDonald, Latham Boyle, Adrian Erickcek, Neil Barnaby,

Neal Dalal, Chris Hirata and Eiichiro Komatsu for helpful discussions. KM is supported

by NSERC Canada.


Chapter 4

Near term measurements with 21 cm

intensity mapping: neutral hydrogen

fraction and BAO at z < 2

A version of this chapter was published in Physical Review D as “Near-term measure-

ments with 21 cm intensity mapping: Neutral hydrogen fraction and BAO at z < 2”,

Masui, K. W., McDonald, P. and Pen, U.-L., Vol. 81, Issue 10, 2010. Reproduced here

with the permission of the APS.

4.1 Summary

It is shown that 21 cm intensity mapping could be used in the near term to make cos-

mologically useful measurements. Large scale structure could be detected using existing

radio telescopes, or using prototypes for dedicated redshift survey telescopes. This would

provide a measure of the mean neutral hydrogen density, using redshift space distortions

to break the degeneracy with the linear bias. We find that with only 200 hours of

observing time on the Green Bank Telescope, the neutral hydrogen density could be

measured to 25% precision at redshift 0.54 < z < 1.09. This compares favourably to

current measurements, uses independent techniques, and would settle the controversy

over an important parameter which impacts galaxy formation studies. In addition, a

4000 hour survey would allow for the detection of baryon acoustic oscillations, giving a

cosmological distance measure at 3.5% precision. These observation time requirements

could be greatly reduced with the construction of multiple pixel receivers. Similar results

are possible using prototypes for dedicated cylindrical telescopes on month time scales,


or SKA pathfinder aperture arrays on day time scales. Such measurements promise to

improve our understanding of these quantities while beating a path for future generations

of hydrogen surveys.

4.2 Introduction

An upcoming class of experiments proposes the observation of the 21 cm spectral line over

large volumes, detecting large scale structure in three dimensions [Peterson et al., 2009].

This method is sensitive to a redshift range of z ≳ 1, which is observationally difficult

for optical experiments, due to a dearth of spectral lines in the atmospheric transparency

window. Such experiments require only limited resolution to resolve the structures of

primary cosmological interest, above the non-linear scale. At z ≈ 1 this corresponds to

tenths of degrees. There is no need to detect individual galaxies, and in general each

pixel will contain many. This process is referred to as 21 cm intensity mapping. A first

detection of large scale structure in the 21 cm intensity field was reported in Pen et al.

[2008].

Intensity mapping is sensitive to the large scale power spectra in both the transverse

and longitudinal directions. From this signal, signatures such as the baryon acoustic

oscillations (BAO), weak lensing and redshift space distortions (RSD) can be detected

and used to gain cosmological insight.

To perform such a survey dedicated cylindrical radio telescopes have been proposed

[Peterson et al., 2006, Seo et al., 2010], which could map a large fraction of the sky

over a wide redshift range and on timescales of several years. These experiments are

economical since their low resolution requirements imply limited size and they have no

moving parts. It has been shown that BAO detections from 21 cm intensity mapping

are powerful probes of dark energy, comparing favourably with Dark Energy Task Force

Stage IV projects within the figure of merit framework [Chang et al., 2008, Albrecht

et al., 2006]. Additionally, in Masui et al. [2010b] it was shown that such experiments

could tightly constrain theories of modified gravity, making extensive use of weak lensing

information. The f(R) theory, for instance, could be constrained nearly to the point

where the chameleon mechanism masks any deviations from standard gravity before

they first appear.

While such experiments will be very powerful probes of the Universe, it is useful to ex-

plore how 21 cm intensity mapping could be employed in the short term. Here we discuss

surveys that could be performed using existing radio telescopes, where the Green Bank

Telescope will be used as an example. Prototypes for the above mentioned cylindrical


telescopes (for which some infrastructure already exists) are also considered. Finally, the

Square Kilometre Array Design Studies (SKADS) focused on developing aperture array

technology for the Square Kilometre Array [Faulkner et al., 2010]. Motivated by the pro-

posed Aperture Array Astronomical Imaging Verification (A3IV) project¹, we consider

pathfinders for such arrays. These aperture arrays could share many characteristics in

common with cylindrical telescope prototypes, but would be capable of much greater

survey speed.

While these limited resources would not have nearly the statistical power required

to detect effects like weak lensing, a detection of the RSD would be possible. This

would give a measure of the mean density of neutral hydrogen in the Universe. This has

been an important and controversial parameter in galaxy formation studies and a precise

measurement would be invaluable in this field [Putman et al., 2009]. In addition BAO

are considered, a detection of which would yield cosmologically useful information about

the Universe’s expansion history.

In this paper we first describe the RSD and BAO and the information that can be

achieved with their detection. We then present forecasts for 21 cm redshift surveys as a

function of telescope time, followed by a brief discussion of these results.

We assume a fiducial ΛCDM cosmology with parameters: Ωm = 0.24, Ωb = 0.042,

ΩΛ = 0.76, h = 0.73, ns = 0.95 and log10As = −8.64; where these represent the

matter, baryon and vacuum energy densities (as fractions of the critical density), the

dimensionless Hubble constant, spectral index and logarithm of the amplitude of scalar

perturbations.

4.3 Redshift Space Distortions

In spectroscopic surveys, radial distances are given by redshifts. However, redshift does not map directly onto distance, as matter also has peculiar velocities, which Doppler shift incoming photons. On large scales, these velocities are coherent and result in

additional apparent clustering of matter in redshift space. In linear theory, the net effect

is an enhancement of power for Fourier modes with wave vectors along the line of sight

[Kaiser, 1987],

P^s_X(\vec{k}, z) = b^2 \left[ 1 + \beta(z) \mu_k^2 \right]^2 P(k, z). \quad (4.1)

Here P^s_X is the power spectrum of tracer X (assumed to be linearly biased) as observed in redshift space, P is the matter power spectrum and µk = \hat{k} \cdot \hat{z} is the cosine of the angle between the wave vector and the line of sight. The bias, b, quantifies the

¹[van Ardenne, 2009], http://www.ska-aavp.eu/


degree to which the density perturbation of the tracer follows the density perturbation of

the underlying dark matter. The redshift space distortion parameter β is equal to f/b in

linear theory, where f is the dimensionless linear growth rate. To a good approximation

f(z) = Ωm(z)^0.55.
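The Kaiser enhancement of Equation 4.1, together with the growth-rate approximation f(z) = Ωm(z)^0.55, can be sketched in a few lines. The density parameters are the fiducial values from the introduction; the function names are ours, for illustration only.

```python
# Sketch of the linear-theory Kaiser enhancement (Equation 4.1) for flat
# LambdaCDM with the fiducial Omega_m = 0.24, Omega_Lambda = 0.76.
Omega_m0, Omega_L = 0.24, 0.76

def omega_m(z):
    """Matter density parameter at redshift z for flat LambdaCDM."""
    a = 1.0 / (1.0 + z)
    return Omega_m0 / (Omega_m0 + Omega_L * a**3)

def kaiser_boost(mu_k, z, b):
    """Redshift-space enhancement [1 + beta*mu_k^2]^2 with beta = f/b."""
    f = omega_m(z) ** 0.55          # dimensionless linear growth rate
    beta = f / b
    return (1.0 + beta * mu_k**2) ** 2

# Line-of-sight modes (mu_k = 1) are boosted; transverse modes (mu_k = 0) are not.
print(kaiser_boost(1.0, z=0.8, b=1.0))   # > 1
print(kaiser_boost(0.0, z=0.8, b=1.0))   # exactly 1
```

The anisotropy between the two limits is what allows β, and hence xHI, to be extracted from the observed power spectrum.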

We define the signal neutral hydrogen power spectrum as

P_{\rm HI}(\vec{k}, z) \equiv x_{\rm HI}^2 P^s_{\rm HI}(\vec{k}, z), \quad (4.2)

where xHI(z) is the mean fraction of hydrogen that is neutral. It is the parameter xHI that

we wish to measure. The above definition is useful because it defines the quantity that is

most directly measured in a 21 cm redshift survey. Note that in discussions of the more

standard galaxy redshift surveys one usually takes a measurement of the mean density

for granted; however, 21 cm intensity mapping will make differential measurements and

will not measure the mean signal directly. One cannot divide out the mean to define a

measurement of fluctuations around the mean. Using the fact that at late times, on large

scales, structure undergoes scale independent growth, the above quantity can be written

as

P_{\rm HI}(\vec{k}, z) = b^2 x_{\rm HI}^2 \left( \frac{g(z)\,a}{a_0} \right)^2 \left[ 1 + \beta \mu_k^2 \right]^2 P(k, z_0), \quad (4.3)

where subscript 0 refers to some early time well after recombination and g is the growth

factor (relative to an Einstein-de Sitter Universe).

Because the power spectrum at early times, P (k, z0), can be inferred from the Cosmic

Microwave Background (to good enough accuracy for our purposes in this paper), the

observed power spectrum can be parameterized by just two redshift dependent numbers,

β and the combination of scale independent prefactors in Equation 4.3,

A_H \equiv b^2 x_{\rm HI}^2 g^2. \quad (4.4)

It is these two parameters that can be determined from a 21 cm redshift survey using the

RSD. In general AH will be better measured than β and does not significantly contribute

to the uncertainty in xHI.

The factors f(z) and g(z) depend on the expansion history. If one allows for a

general expansion history (for instance, the WMAP-allowed CDM model with arbitrary

curvature and equation of state, OWCDM) these factors are poorly determined since

there is currently little data that directly probes the expansion in this era. However, if

one is willing to assume a flat ΛCDM expansion history, then the current uncertainty in

the late time expansion is attenuated at the redshifts of interest (since dark energy is


sub-dominant to matter at z = 1) and uncertainties in these parameters can be ignored. As such, a measurement of β gives a measurement of the bias b, which in turn gives a measurement of xHI. We have

\frac{\Delta x_{\rm HI}}{x_{\rm HI}} \approx \frac{\Delta \beta}{\beta} \quad (\Lambda{\rm CDM~assumed}). \quad (4.5)

If one is not willing to assume an expansion history, it is a simple matter to propagate

the corresponding uncertainties in the expansion parameters.

To estimate errors on β, we assume that the primordial power spectrum is essentially

known from the cosmic microwave background, and we fix all parameters except for the

amplitude As, spectral index ns, and running of the spectral index αs. The observable

power spectrum is then parameterized by β, As, ns and αs. We then use the Fisher

matrix formalism to determine how precisely β can be measured from the 21 cm survey,

marginalizing over the other three parameters and using no other information. The pa-

rameter As is used as a stand-in for the other parameters that affect the overall amplitude

of the power spectrum: the bias and xHI. Its marginalization is critical to account for

the fact that we have no a priori information about these parameters. The spectral index

marginalization is not strictly necessary but allows for some degradation due to concern

about scale dependence of bias. We have restricted the numerator of the Fisher matrix

to include only the linear theory power as suppressed by the non-linear BAO erasure

kernels of Seo and Eisenstein [2007] (all the linear power is included though, not BAO

only), so non-linearity should not be a significant issue at the level of precision discussed

here. This treatment of the non-linearity cutoff is also motivated by the propagator work

of Crocce and Scoccimarro [2006b,a, 2008].
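The marginalization step described above amounts to inverting the Fisher matrix and reading off a diagonal element of the resulting covariance. A minimal sketch follows; the matrix entries are an illustrative stand-in, not an actual survey forecast, with parameter ordering (β, As, ns, αs).

```python
import numpy as np

# Illustrative Fisher matrix for (beta, A_s, n_s, alpha_s); the numbers are
# made up for demonstration and chosen to be symmetric positive definite.
F = np.array([
    [400.0, 150.0,  20.0,   5.0],
    [150.0, 900.0,  60.0,  10.0],
    [ 20.0,  60.0, 300.0,  30.0],
    [  5.0,  10.0,  30.0, 100.0],
])

cov = np.linalg.inv(F)                 # parameter covariance matrix
sigma_marg = np.sqrt(cov[0, 0])        # error on beta, marginalized over A_s, n_s, alpha_s
sigma_cond = 1.0 / np.sqrt(F[0, 0])    # error on beta with the others held fixed

print(sigma_marg, sigma_cond)
# Marginalizing over degenerate parameters can only inflate the error:
assert sigma_marg >= sigma_cond
```

The gap between the marginalized and fixed-parameter errors is exactly the degradation from the amplitude and spectral-index degeneracies discussed in the text.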

4.4 Baryon Acoustic Oscillations

Acoustic oscillations in the primordial photon-baryon plasma have ubiquitously left a

distinctive imprint in the distribution of matter in the Universe today. This process is

understood from first principles and gives a clean length scale in the Universe’s large scale

structure, largely free of systematic uncertainties and calibrations. This can be used to

measure the global cosmological expansion history through the angular diameter distance

vs redshift relation. The detailed expansion will differ between a pure cosmological

constant and the various other cosmological models.

We use essentially the method of Seo and Eisenstein [2007] for estimating distance

errors obtainable from a BAO measurement, including 50% reconstruction of nonlinear


Figure 4.1: Baryon acoustic oscillations averaged over all directions. To show the BAO we plot the ratio of the full matter power spectrum to the wiggle-free power spectrum of Eisenstein and Hu [1998]. The error bars represent projections of the sensitivity possible with 4000 hours observing time on GBT at 0.54 < z < 1.09. [Axes: relative wiggle amplitude vs. k (h/Mpc).]

degradation of the BAO feature (although this is unimportant since experiments consid-

ered here have low resolution). The BAO feature is isolated by dividing the total power

spectrum [Eisenstein and Hu, 1998] by the wiggle-free power spectrum and subtracting

unity, as illustrated in Figure 4.1. The wiggles are then parameterized by an overall

amplitude, and a length scale dilation (here Aw and D respectively), which control the

vertical and horizontal stretch of the theoretical curve shown in Figure 4.1. Our errors

on Aw come from a straightforward extension of the Seo and Eisenstein [2007] method

for estimating BAO errors. In addition to the BAO distance scale as a free parameter

in our Fisher matrix, we include Aw as a free parameter. This is similar to what one

sometimes tries to do by including the baryon/dark matter density ratio as a parameter,

but more straightforward to interpret.

The ability to measure Aw (which is zero in the absence of the BAO) represents

the ability to detect the presence of these wiggles. A measurement of D allows one to

associate a comoving distance to length scales on the sky. This gives a measurement of

the angular diameter distance (dA) for detections in the transverse direction, and the

Hubble parameter (H) if the wiggles are detected in the longitudinal direction.
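The (Aw, D) parameterization can be illustrated with a toy wiggle model: the relative wiggle amplitude of Figure 4.1 is treated as a damped sinusoid whose vertical stretch is Aw and whose horizontal stretch is the dilation D. The functional form, sound horizon, and damping scale below are assumptions for illustration only, not the Eisenstein and Hu [1998] fitting formula used in the text.

```python
import math

# Assumed scales for the toy model (illustrative, not fitted):
r_s = 110.0      # sound horizon, Mpc/h (approximate)
Sigma = 8.0      # damping scale, Mpc/h (assumed)

def wiggle(k, A_w=1.0, D=1.0):
    """Toy relative wiggle amplitude at wave number k (h/Mpc).

    A_w stretches the curve vertically; the dilation D rescales the
    length scale, stretching the curve horizontally (k -> D*k).
    """
    kd = D * k
    return A_w * 0.05 * math.sin(kd * r_s) * math.exp(-((kd * Sigma) ** 2))

# A 1% change in the dilation shifts the phase of the high-k wiggles:
k = 0.15
print(wiggle(k), wiggle(k, D=1.01))
```

In this picture, measuring Aw corresponds to detecting the wiggles at all, while measuring D corresponds to locating their phase, which is why D becomes untrustworthy once the peaks can slip out of phase (Section 4.5).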


4.5 Forecasts

We present forecasts for the Green Bank Telescope and prototypes for two classes of

telescope: cylindrical telescopes and SKA aperture arrays. The signal available for 21 cm

experiments is proportional to the neutral hydrogen fraction and bias. For estimating

telescope sensitivity we assume that the product of the bias and the neutral hydrogen

density is ΩHI b = 0.0004 today [Chang et al., 2008], and that the neutral hydrogen fraction

and bias do not evolve. These assumptions only affect the sensitivity of the telescopes

and not the translation from uncertainty on P^s_HI to the uncertainty on xHI. Also, as in galaxy surveys, there is expected to be a stochastic shot-noise component, and we assume Poisson noise with an effective object number density n = 0.03 per cubic h^-1 Mpc. Note

that stochastic noise at this level is negligible, as should be the case in practice.

The 21 cm intensity mapping technique is expected to be complicated by a variety

of contaminating effects. These include diffuse foregrounds (predominantly galactic syn-

chrotron), radio frequency interference and bright point sources. The degree to which

these contaminants will limit future surveys has yet to be quantified, and here we simply

ignore them. As such these forecasts are theoretical idealizations. Methods for dealing

with these contaminants are discussed in Chang et al. [2008].

The Green Bank Telescope is a 100 m diameter circular telescope with a system

temperature of 25 K. It has interchangeable single pixel receivers at the frequencies of

interest with bandwidths of approximately 200 MHz. For extended surveys, multiple

pixel receivers could be implemented. The construction of a four pixel receiver is within

reason and would reduce the required telescope time by a factor of four. In planning a

survey on GBT, it is important to choose an appropriate survey area. As illustrated in

Figure 4.2, at fixed observing time, there is a survey area that best measures the desired

parameters. For all results the survey area has been roughly optimized for the quantity

being measured. The optimized areas are shown in Figure 4.3. Results are essentially

insensitive to this area within a factor of 2 of the optimum.

Prototyping for dedicated cylindrical telescopes is in its early stages. We present

forecasts for a hypothetical not-too-far-off telescope, composed of two cylinders. The

total array measures 40 m×40 m with 300 dipole receivers with 200 MHz bandwidth, and

a system temperature of 100 K. Such telescopes have no moving parts and point solely

by the rotation of the earth. As such, the area of the survey is set by latitude, receiver

spacing, and obstruction by foregrounds; we assume 15 000 square degrees. The survey

area scales with the number of receivers leaving the noise per pixel unchanged. Thus the

squared errors on measured quantities scale inversely with the number of receivers. Note


Figure 4.2: Ability of GBT to measure the BAO and redshift space distortions as a function of survey area at fixed observing time. Presented survey is between z = 0.54 and z = 1.09 and observing time is 1440 hours. A factor of 10 has been removed from the Aw curve. [Axes: fractional error vs. survey area (sq. deg.); curves: ∆β/β = ∆xHI/xHI, ∆Aw/10Aw and ∆D/D.]

Figure 4.3: Roughly optimized survey area as a function of telescope time on GBT. Redshift range is between z = 0.54 and z = 1.09. [Axes: optimal survey area (sq. deg.) vs. observing time (hours).]


that this is a slightly different scaling from the area-optimized case of GBT, where it is the time axis that is scaled by the number of receivers. However, in the area-optimized case variances also scale inversely with time, as area optimization effectively fixes the

noise per pixel. As such our results for GBT follow both scalings.

The forecasts for cylindrical telescopes are also applicable to pathfinder aperture

arrays [Faulkner et al., 2010]. Whereas cylinders use optics to form a beam in one

dimension and interferometry in the other, aperture arrays directly sample the incoming

radio waves without optics. Interferometry is used in both dimensions, forming a two-dimensional array of beams, instead of a two-pixel-wide strip for a two-cylinder telescope.

In principle it would be possible for an aperture array to monitor essentially the whole sky

simultaneously, and thus form thousands of beams. In practice, digitizing every antenna

is costly and many antennas must be added in analogue in such a way that some beams

are preserved but many are cancelled. We consider a compact aperture array that has the

same area and resolution as the cylinder considered here. This is almost identical in scale

as the proposed A3IV [van Ardenne, 2009]. We assume preliminary A3IV specifications,

with 700 effective receivers, 300 MHz bandwidth, and 50 K system temperature [de Bruyn,

2010]. The aperture array could thus perform the same survey as the cylinder but a

factor of (300 MHz/200 MHz)(700/300)(100 K/50 K)^2 = 14 faster. Note, however, that

for aperture arrays there is added freedom in which beams are preserved when antennas

are added. It would be thus possible to optimize the area of the survey as in the GBT

case.
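The quoted factor of 14 follows from standard radiometer scaling: survey speed grows linearly with bandwidth and receiver count, and inversely with the square of the system temperature. A one-line check of the arithmetic (a sketch only):

```python
# Survey-speed ratio between the A3IV-like aperture array and the
# two-cylinder prototype, using the specifications quoted in the text.
bandwidth = 300.0 / 200.0            # MHz ratio, scales linearly
receivers = 700.0 / 300.0            # receiver count, scales linearly
temperature = (100.0 / 50.0) ** 2    # T_sys enters squared (radiometer equation)

speed_factor = bandwidth * receivers * temperature
print(round(speed_factor))           # 14, as quoted in the text
```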

Figures 4.4 and 4.5 show the obtainable fractional errors on RSD parameter β and

BAO parameters Aw, the amplitude of wiggles, and D the overall dilation factor. D is

defined as a simultaneous dilation in both the longitudinal and transverse directions, i.e.

both H and dA are proportional to D. There is another “skew” parameter which trades

one for the other, such that the Hubble parameter and the angular diameter distance

can be independently determined, however, this parameter is generally not as precisely

measured [Padmanabhan and White, 2008]. D is the mode that contains most of the

information available in the BAO and we marginalize over the skew parameter. The

marginalized error on D is independent of the exact definition of the skew. Fractional

errors on H and dA are of order twice the fractional error on D.

Referring to Figure 4.1 above, it can be seen intuitively that the parameters D and Aw

are weakly correlated. Furthermore, since the signal is linear in the parameter Aw, our

linear Fisher analysis applies even for large errors. This, however, cannot be said about D. Indeed, any uncertainty on D that brings the smallest-scale peak we resolve more than ∼ π/2 out of phase cannot be trusted. It can be seen from Figure 4.1 that this


Figure 4.4: Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on the Green Bank Telescope (GBT). Frequency bins are approximately 200 MHz wide and correspond to available GBT receivers. Uncertainties on D should not be trusted unless the uncertainty on Aw is less than 50% (see text). [Axes: fractional error vs. observing time (hours); curves: ∆β/β = ∆xHI/xHI, ∆Aw/Aw and ∆D/D for redshift bins 0.18 < z < 0.48, 0.54 < z < 1.09 and 1.06 < z < 1.79.]


Figure 4.5: Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on a prototype cylindrical telescope. Frequency bins are 200 MHz wide, corresponding to the capacity of the correlators which will likely be available. These results also apply to the aperture telescope, but with the observing time reduced by a factor of 14. Uncertainties on D should not be trusted unless the uncertainty on Aw is less than 50% (see text). Observing time does not account for lost time due to foreground obstruction. [Axes: fractional error vs. observing time (days); curves: ∆β/β = ∆xHI/xHI, ∆Aw/Aw and ∆D/D for redshift bins 0.14 < z < 0.36, 0.53 < z < 0.97 and 1.06 < z < 1.94.]


corresponds to a fractional error of order 10%, which of course depends on resolution.

For this reason, we require that the uncertainty on Aw must be at most 50% (a 95%

confidence detection of the BAO) before we have any faith in the uncertainty in D. We

note that Fisher analysis is not the optimal tool for determining when an effect can first

be measured; however, it applies for any cosmologically useful measurements of the BAO.

4.6 Discussion

Typically measurements of the neutral hydrogen density at high redshift are made using

damped Lyman-α (DLA) absorption lines, with current measurement uncertainties at the 25% statistical precision level in the 0.5 < z < 2 redshift range [Prochaska et al.,

2005, Rao et al., 2006]. However, it has been argued that these measurements are biased

high [Prochaska and Wolfe, 2009] rendering the quantity effectively uncertain by a factor

of 3. Hydrogen is the main baryonic component in the Universe, and it becomes neutral after falling into galaxies and becoming self-shielded from ionizing radiation. As such, the

abundance of neutral hydrogen is linked to the availability of fuel for star formation [Wolfe

et al., 1986, Pei and Fall, 1995]. Understanding how the neutral hydrogen evolves over

cosmic time is key to understanding the star formation history and feedback processes in

galaxy formation studies [Wolfe et al., 2005, Shen et al., 2009]. Additionally, the linear

bias, and any scale dependence it might have, gives valuable information about how the

gas is distributed. Finally, these quantities are crucial for estimating the sensitivity of

future 21 cm redshift surveys since the signal is proportional to the product of the bias

and the mean neutral density [Chang et al., 2008].

We have shown that even with existing telescopes, it is possible to use 21 cm intensity

mapping to make useful measurements of large scale structure at high redshift. As seen

in Figure 4.4, a 4σ detection of redshift space distortions could be made at z ≈ 0.8

with only 200 hours of telescope time at GBT. This would provide a 25% measurement

of the neutral hydrogen fraction in the Universe using methods independent of DLA

absorption lines. A longer survey using 1000 hours of telescope time could make ∼12%

measurements.

Surveys extending to this level of precision become cosmologically useful. With 4000

observing hours on GBT (1000 hours with a four pixel receiver), the BAO overall distance

scale could be measured to 3.5% precision at the same redshift. This is approaching the

precision of the WiggleZ survey, which will make a ∼2% measurement of this scale over

a similar redshift range [Blake et al., 2009, Drinkwater et al., 2010]².

²The projected uncertainties that include reconstruction in these references are on H and dA; the uncertainty on D is inferred from these.

Because WiggleZ will make a similar measurement, such a survey would not have a dramatic effect on

cosmological parameter estimations. However, it would provide an excellent verification

of these measurements, using completely different methods, in different regions of the

sky, and at low cost.

Prototypes for cylindrical telescopes could perform similar science to existing telescopes, except that, with dedicated resources, longer integration times would be feasible. This in part makes up for the limited resolution: for 40 m telescopes there is a substantial loss of information, and only the first wiggles are resolved. The measurements

described here would constitute a proving ground for this technique. The success of

these prototypes would be a clear indicator of the power and future success of full scale

cylindrical telescopes.

The most powerful telescope considered is the aperture array, which would be capable of making BAO measurements below the present level of uncertainty with only a few weeks of dedicated

observing. The design for demonstrator telescope A3IV has yet to be finalized but the

proposed telescope is of nearly the same scale as the aperture array considered here.

Depending on its eventual configuration, the A3IV could be a powerful probe of the

Universe.

We have shown that 21 cm intensity mapping surveys could be employed in the short

term to make useful measurements of large-scale structure. With relatively small initial

resource allocations, requisite techniques such as foreground subtraction can be tried and

tested while performing valuable science. Such short term applications of this promising

method will lay the trail for future dark energy surveys.

Acknowledgements

We thank Ger de Bruyn for preliminary specifications of the A3IV. KM is supported

by NSERC Canada. PM acknowledges support of the Beatrice D. Tremaine Fellowship.



Part II

Pioneering 21 cm cosmology



Chapter 5

A data analysis pipeline for 21 cm

intensity mapping with single dish

telescopes: data to maps

This chapter forms the basis of an intended future publication with authorship led by myself and Tabitha Voytek. It is intended to be submitted simultaneously with a second article, describing the latter parts of the analysis (maps to power spectra), which will be led by Eric Switzer and Yi-Chao Li.

5.1 Introduction

Single dish telescopes are not the most powerful instruments for performing 21 cm

large-scale structure surveys. Generally they have a small number of simultaneous pixels

on the sky, which limits their survey speed. In addition, in order to obtain sufficient

angular resolution, a single monolithic structure must be constructed, which tends to be

expensive. However, they do have some advantages. Firstly, several appropriately specified single dishes are already in existence, with time allocated through public telescope

allocation competitions. This means that pilot 21 cm surveys may commence immediately upon being granted telescope time. In addition, single dishes have far simpler

beams than interferometers, greatly simplifying the data analysis. So while single dishes

will not be used to perform the ultimate large-scale structure survey, they are ideal for

pilot surveys and early science.

Here we describe part of a software data analysis pipeline designed for performing

21 cm surveys with single dish telescopes. The pipeline was built specifically for the


21 cm intensity mapping effort at the Green Bank Telescope (GBT) in West Virginia; however, it could be, and is being, adapted for other instruments such as the L-Band Multibeam Receiver on the Parkes Telescope in Australia.

We describe the data analysis up to and including converting the time ordered data into maps on the sky. The latter half of the pipeline, which includes foreground subtraction, power spectrum estimation, and compensating the power spectrum for signal lost in foreground subtraction, will be described in detail in a separate publication.

The first detection of large-scale structure using 21 cm intensity mapping was achieved

in Chang et al. [2010]. While many of the methods and algorithms used in that analysis

were carried over to the pipeline described here, the current analysis represents a complete

overhaul of the earlier software.

Our analysis software is publicly available at https://github.com/kiyo-masui/analysis_IM.

5.2 Time ordered data

Here, we describe the raw data from the Green Bank Telescope which our pipeline was

designed to process. When making observations using GBT, one gets to choose both a

front-end and a back-end instrument. The front-end instrument is called the receiver, which

sits at one of the telescope’s focal points and collects light reflected off of the telescope

mirror. The choice of receiver is generally set by the desired observing frequency. We

have chosen to use the 800 MHz receiver which is sensitive to a band between 700 MHz

and 900 MHz, corresponding to a redshift between z = 1.0 and z = 0.58. This band is

scientifically interesting because it is at a higher redshift than the largest galaxy redshift

surveys, and because the band has been relatively free of radio frequency interference

(RFI) since the switch to digital television in the United States. The 800 MHz receiver’s

beam full width at half maximum (angular resolution) at 700 MHz is 0.314°, and at 900 MHz it is 0.250°. The system temperature, which is a measure of the noise power in the

receiver that is unrelated to the radiative power from the sky, is roughly 25 K. It has

contributions from the receiver’s thermal noise as well as radiation from the ground

entering the receiver.
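For reference, the quoted band edges map to redshift through the 21 cm rest frequency. A minimal sketch (not pipeline code):

```python
# Rest frequency of the 21 cm hyperfine line of neutral hydrogen.
F21 = 1420.405751  # MHz

def freq_to_z(freq_mhz):
    """Redshift at which the 21 cm line is observed at freq_mhz."""
    return F21 / freq_mhz - 1.0

# Band edges quoted in the text.
z_low = freq_to_z(900.0)   # approximately 0.58
z_high = freq_to_z(700.0)  # approximately 1.0
```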

The back-end instrument is responsible for sampling the voltage stream from the

receiver, converting this to a measurement of power (by squaring and integrating over

some period of time) and then writing the data to disk. The traditional back-end that

would be used for our survey is the spectrometer, which gives a measure of power as a

function of both time and frequency. However, the spectrometer at GBT is quite old and


nearly obsolete. Its analogue to digital converter (ADC) resolves only three levels of the

input voltage (i.e. it is only a ∼1.5-bit ADC) and thus frequently saturates in the presence

of excessive power, which can be caused by strong RFI. Also, it is limited to relatively long

minimum integration times of one second, which precludes the possibility of scanning the

telescope across the sky at a high rate. The GBT Ultimate Pulsar Processing Instrument

or GUPPI is a new back-end, launched in 2008, designed for observations of pulsars

[DuPlain et al., 2008]. It has an 8-bit analogue to digital converter, and has a minimum

integration time of 1 µs, far shorter than what is required by our survey. The disadvantage of GUPPI is that, having been designed for pulsar observations, its output data is in a very inconvenient format for standard spectroscopy observations. Nevertheless, GUPPI's advantages far outweigh this one disadvantage, which was overcome by writing a front-end to our analysis pipeline that automatically converts the GUPPI data to a more tractable format.

Our data, once reformatted, are a function of frequency and time. Natively the

data has 4096 frequency bins across the 200 MHz bandwidth. This is far finer frequency

resolution than is required for our large-scale structure science, with each ≈ 50 kHz

wide bin spanning a line-of-sight distance of about 0.2 Mpc/h; however, the resolution is

helpful for identifying RFI as described below. The time bins initially correspond to 1 ms

integrations, although these are rebinned to ∼ 0.1 s in the early stages of the pipeline.
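The rebinning of the time axis (and, later, the frequency axis) amounts to a block average. A sketch, with a hypothetical function name rather than the pipeline's actual interface:

```python
import numpy as np

def rebin_axis(data, factor, axis=0):
    """Block-average `data` along `axis` by an integer `factor`.
    Assumes the axis length is divisible by the factor, as in
    rebinning 1 ms integrations to 0.1 s (factor 100)."""
    data = np.asarray(data)
    n = data.shape[axis] // factor
    data = np.moveaxis(data, axis, 0)[: n * factor]
    data = data.reshape((n, factor) + data.shape[1:]).mean(axis=1)
    return np.moveaxis(data, 0, axis)

# e.g. 1 ms time bins -> 0.1 s bins for a (time, frequency) array:
tod = np.random.standard_normal((4000, 4096))
rebinned = rebin_axis(tod, 100, axis=0)  # shape (40, 4096)
```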

The data also has four polarization channels. The 800 MHz receiver has two linearly

polarized antenna, one oriented in the horizontal, x, direction and the other in the ver-

tical, y, direction. GUPPI calculates all four correlation products of the voltages from

each antenna:

P^XX ≡ 〈V^X V^X〉 (5.1)
P^XY ≡ Re[〈V^X V^Y〉] (5.2)
P^YX ≡ Im[〈V^X V^Y〉] (5.3)
P^YY ≡ 〈V^Y V^Y〉. (5.4)

Here, these formulae are understood to apply on a spectral channel-by-channel basis, and

the angled brackets, 〈〉, represent the average over an integration time bin. Modulo some

calibration factors, these products are trivially related to the familiar Stokes parameters


for polarized flux:

P^I = (P^XX + P^YY)/2 (5.5)
P^Q = (P^XX − P^YY)/2 (5.6)
P^U = P^XY (5.7)
P^V = P^YX. (5.8)
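These conversions can be sketched numerically on synthetic complex voltage samples. Whether a conjugate is implied in the cross-products depends on convention; we assume 〈V^X (V^Y)*〉 here:

```python
import numpy as np

def correlation_products(vx, vy):
    """Time-averaged correlation products of the two linear
    polarizations (Equations 5.1-5.4); vx, vy are complex samples."""
    pxx = np.mean(vx * np.conj(vx)).real
    pyy = np.mean(vy * np.conj(vy)).real
    cross = np.mean(vx * np.conj(vy))
    return pxx, cross.real, cross.imag, pyy

def to_stokes(pxx, pxy, pyx, pyy):
    """Stokes parameters from the correlation products (Eqs. 5.5-5.8)."""
    return (pxx + pyy) / 2, (pxx - pyy) / 2, pxy, pyx

vx = np.random.standard_normal(10000) + 1j * np.random.standard_normal(10000)
vy = np.zeros_like(vx)  # purely X-polarized input
I, Q, U, V = to_stokes(*correlation_products(vx, vy))
# For a purely X-polarized signal, I equals Q and U = V = 0.
```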

Finally, the data has two noise-cal state channels. The 800 MHz receiver has a noise

diode capable of injecting a small amount of noise directly into the signal path. Some

care has been taken to ensure that the power injected by the noise-cal is stable over time

scales spanning several days. During observations the noise-cal switches on and off with a

period of 64 ms. Our data is separated into noise-cal on and noise-cal off channels before

the time axis is rebinned. The switching of the noise cal allows for the characterization

of the system gain on short time scales, greatly improving the accuracy of subsequent

calibrations.

The data is split into individual scans of the telescope, generally one to four minutes

in length. The scans are subsequently grouped into sessions of a few hundred scans,

corresponding to a single night of observing.

5.2.1 Pipeline design

The data analysis pipeline was designed to be, above all things, modular. As long as the

data is time ordered, i.e. until map-making, it never changes format. Individual pipeline

modules generally read the data in, do a minimal set of operations on the data, and then

immediately write the data back out to a new file, but in the same format.

This design, while I/O intensive, allows individual pipeline elements to be dropped in

and out of the pipeline, or reordered within the pipeline, trivially. The pipeline also allows

for nonlinear flow, where data paths can diverge along different branches and re-converge later, or loop back in an iterative process.

This versatility has been invaluable in the development of the analysis, where different

procedures for analyzing the data can be tested with minimal effort. It has also proved

to be useful in ancillary science projects, where a different procedure can be developed

without diverging code bases.
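The read, operate, write pattern described above can be illustrated with a toy sketch. The class and module names here are hypothetical, not the actual analysis_IM interfaces:

```python
class PipelineModule:
    """Toy version of the modular design: each module reads data,
    applies one minimal operation, and writes it back in the same
    format, so modules can be chained, dropped, or reordered freely."""
    def process(self, data):
        raise NotImplementedError

class RebinTime(PipelineModule):
    def __init__(self, factor):
        self.factor = factor
    def process(self, data):
        n = len(data) // self.factor
        return [sum(data[i * self.factor:(i + 1) * self.factor]) / self.factor
                for i in range(n)]

class SubtractMean(PipelineModule):
    def process(self, data):
        mean = sum(data) / len(data)
        return [x - mean for x in data]

def run_pipeline(modules, data):
    # Every module consumes and produces the same format, so the chain
    # below can be reordered or shortened without changing any module.
    for module in modules:
        data = module.process(data)
    return data

out = run_pipeline([RebinTime(2), SubtractMean()], [1.0, 3.0, 5.0, 7.0])
```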


5.2.2 Radio frequency interference

RFI is a major concern for all radio astronomy observations. While the 800 MHz band is

cleaner than the adjacent bands, and while Green Bank is in a protected radio quiet zone,

RFI still dominates thermal noise in our data in a subset of our spectral channels. This

is especially true between 850 MHz and 900 MHz, which contains some of the cell phone

bands. Complicating the flagging of RFI is the presence of bright foregrounds in our

data, which dominate over the thermal noise and create a floor below which RFI cannot be identified unless some measure is taken to remove them.

Early versions of the RFI flagging module were based on the notion of flagging anomalous data points which stand out some number of standard deviations, say 4, above the

sky signal and thermal noise in the time ordered data. Having fine spectral resolution is

helpful since RFI tends to be localized in frequency. In addition, the polarized channels

of the data P XY and P YX are sensitive probes, since the RFI tends to be polarized and the

sky signal much less so [Chang et al., 2010]. The fundamental issue with these early flaggers is that they had the tendency to bias the data. Data that contained a bright point

source on the sky was preferentially flagged, creating a catastrophic non-linearity in the

telescope’s response. Also, this flagging algorithm responded differently to the noise-cal

on and noise-cal off channels (see Section 5.2)—the noise-cal being highly polarized—

biasing our determination of the noise-cal power required for calibration. The bias can

be reduced by raising the threshold for flagging, but this resulted in large amounts of

RFI being missed.

The solution is to flag entire frequency channels based on their variance over the

time axis (for a single scan), instead of flagging individual outlying data points. More

data ends up flagged, but the algorithm is still relatively efficient since narrow band RFI

tends to contaminate all the data in an individual spectral channel. The bias is eliminated

because the flagger has far less freedom. For data with 4096 spectral channels, the flagger

makes 4096 decisions whether or not to flag the channel. In contrast, the initial algorithm

needed to make almost a million decisions for a 60 second scan.
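The channel-variance flagger described above can be sketched as follows. The robust (median-based) thresholding scheme here is illustrative; the actual pipeline's criteria may differ:

```python
import numpy as np

def flag_channels(tod, nsigma=4.0):
    """Flag whole frequency channels whose variance over the time axis
    is anomalously high. `tod` has shape (time, frequency); returns a
    boolean mask over channels (True = flagged)."""
    var = tod.var(axis=0)
    # Robust centre and spread so that the RFI channels themselves do
    # not drag the threshold upward.
    med = np.median(var)
    mad = np.median(np.abs(var - med))
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent standard deviation
    return var > med + nsigma * sigma

rng = np.random.default_rng(0)
tod = rng.standard_normal((500, 256))
tod[:, 17] += 10.0 * rng.standard_normal(500)  # inject narrow-band RFI
mask = flag_channels(tod)
```

Only 256 decisions are made for a 256-channel scan, which is what removes most of the flagger's freedom to bias the data.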

This initial round of flagging eliminates the worst of the RFI and allows for the

construction of an initial map. The map can be used to get a rough, unbiased, estimate

of the sky signal (dominated by foregrounds) in the time ordered data, which can be

subtracted, yielding a version of the data that is dominated by noise and RFI. The

process of using a preliminary map to estimate the signal in the time ordered data is

described in more detail in Section 5.3.2. With the bulk of the sky signal removed from

the data, a less conservative RFI flagging can be applied, based on flagging ∼4σ excursions, without concern for biasing the data, and escaping the foreground-induced


floor for RFI identification. The effect of flagging the data for RFI is shown in Figure 5.1.

Figure 5.1: Data before (left) and after (right) RFI flagging, plotted as time (s) versus spectral channel frequency (MHz). The colour scale shows fractional perturbations in the power, P/〈P〉_t − 1, over the range −0.010 to 0.010. The frequency axis has been rebinned from 4096 bins to 256 bins after flagging, which fills in many of the gaps in the data left by the flagging. Any remaining gaps are assigned a value of 0 for plotting.

It should be noted that our RFI flagging algorithm is highly non-linear and could

in principle bias our determination of the 21 cm signal if the flagging were somehow

correlated with the signal. However, within an individual scan, the RFI and thermal

noise dominate the signal by over an order of magnitude and as such the RFI flagging is

unlikely to correlate with the signal. For this reason we have not attempted to simulate

the signal loss induced by flagging.

5.2.3 Calibration

The power recorded by GUPPI and stored in its output files has completely arbitrary

units and must be calibrated. Not only is an absolute calibration necessary to interpret

our data in terms of cosmic hydrogen, but relative calibrations are necessary, since the

telescope gain can, and will, have some time dependence. This would affect the internal

consistency of the data and prevent the extraction of the signal. Naively, since the

foregrounds are of order 10^4 times as bright as the signal, one might expect that relative


calibration errors better than 10^-4 are required. In practice this is not the case, since

gain drifts tend to be highly correlated across frequency channels, and our foreground

subtraction algorithms have some robustness against this type of gain drift. The precise

requirement on calibration precision is not known.

Preliminaries

The basic concepts used in this section are described in detail in Rybicki and Lightman

[1979] and Wilson et al. [2009]. Here we give a brief introduction to these concepts,

before detailing our calibration procedure.

There are two relevant measures of radiative power. The first is flux, S with units [Jy],

a measure of the power emitted by a source on the sky. Flux is the most relevant measure

of radiative power for point sources, i.e. sources that are much smaller than the beam

of the telescope. This is because for extended sources, the measured flux will depend

on the detailed structure of the telescope’s beam. For intensity mapping, brightness

temperature, T with units [K], is a more appropriate measure of radiative power. It

gives a measure of the flux per unit angular area on the sky. In intensity mapping we do

not identify individual sources and instead make maps of emitted radiative intensity. As

such, it is clear that we wish to obtain a calibration in terms of brightness temperature.

It is not possible to completely ignore flux however, since our calibration sources are

all point sources and small compared to the GBT beam. Thus, it is necessary to relate

these two measures. This is done through the notion of the forward gain Gf . The forward

gain, with units [K/Jy], compares the telescope’s response to a point source to that of a

spatially constant brightness. In other words, the gain is the spatially constant brightness

temperature that causes the same increase in measured power as a 1 Jy point source. It

is therefore a measure of the peakedness of the beam. In our definition, the forward gain

only depends on the beam shape.

For a beam with shape b(θ⃗) the forward gain is

Gf = (c²/(2ν²kB)) · b(0)/∫b(θ⃗)dΩ, (5.9)

where ν is the observing frequency and b(0) is the centre/peak of the beam where a point source would be measured. For a perfectly Gaussian beam, the gain is given by

Gf = (c²/(2ν²kB)) · 1/(2πσ²), (5.10)

where σ is the Gaussian width parameter. It is seen that the forward gain of an instrument


depends only on the beam shape and as such is measured by mapping the beam using a

point source. In practise the primary gain is calculated for GBT using a Gaussian fit to

the measured beam function.
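Equation 5.10 can be evaluated numerically. The FWHM value below is a rough interpolation of the beam widths quoted in Section 5.2 (0.314° at 700 MHz, 0.250° at 900 MHz), not a measured number:

```python
import math

C = 2.99792458e8    # speed of light, m/s
K_B = 1.380649e-23  # Boltzmann constant, J/K
JY = 1e-26          # 1 Jy in W m^-2 Hz^-1

def forward_gain(freq_hz, fwhm_deg):
    """Forward gain (K/Jy) of a Gaussian beam, Equation 5.10."""
    sigma = math.radians(fwhm_deg) / math.sqrt(8.0 * math.log(2.0))
    return (C**2 / (2.0 * freq_hz**2 * K_B)) * JY / (2.0 * math.pi * sigma**2)

# A rough mid-band beam width gives a gain of order 2 K/Jy at 800 MHz.
gf = forward_gain(800e6, 0.282)
```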

Flux calibration

Our model for the raw data is

P = gT (5.11)

Pcal-on = g(Tsys + Tsky + Tcal) (5.12)

Pcal-off = g(Tsys + Tsky), (5.13)

for some system gain g. Here we have distinguished between the channel where the

noise-cal (described above) is on, and off. The system temperature, Tsys, parameterizes

the base-line amount of noise power that unavoidably exists in the receiver. It is assumed

that g and Tsys are essentially stable over time scales of at least a few minutes, but that

Tcal is stable on time scales of at least several hours. The first step of the calibration is

to eliminate the system gain from the data,

T/Tcal = P/Pcal (5.14)

= P/〈Pcal-on − Pcal-off〉t. (5.15)

Here, P and T without a subscript stand for either the cal-on or cal-off state. The time

average is over a period of time in which the gain is assumed to be relatively stable,

typically a scan of length up to a few minutes. The average is necessary since the noise-

cal power is weak compared to the system temperature, and as such its determination is

noisy. We refer to data with this calibration as being in Tcal units.
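Equations 5.14 and 5.15 amount to dividing the raw power by the time-averaged noise-cal power step, which cancels the unknown gain g. A minimal sketch on synthetic data (all numbers illustrative):

```python
import numpy as np

def to_tcal_units(p_cal_on, p_cal_off):
    """Calibrate raw power into T_cal units (Equations 5.14-5.15):
    divide by the time-averaged power step from the noise diode,
    which cancels the unknown system gain g."""
    p_cal = np.mean(p_cal_on - p_cal_off)
    return p_cal_on / p_cal, p_cal_off / p_cal

# Synthetic check that the gain cancels: P = g * T, with T_sys + T_sky
# lumped into one number and small fluctuations added.
g, t_sys_sky, t_cal = 7.3, 25.0, 1.7
rng = np.random.default_rng(1)
on = g * (t_sys_sky + t_cal + 0.01 * rng.standard_normal(1000))
off = g * (t_sys_sky + 0.01 * rng.standard_normal(1000))
on_cal, off_cal = to_tcal_units(on, off)
# mean(off_cal) is close to t_sys_sky / t_cal, independent of g.
```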

To fully calibrate the data, we now need to convert from Tcal units to physical temperature in Kelvin. This is done by observing a calibration point source of known flux. We

observe point sources which have flux data from a wide range of telescopes and frequen-

cies. Using the NASA/IPAC Extragalactic Database (NED) catalogue1, we consider all

data points for a given source within or near our frequency range and calculate a power-

law fit. This fit provides data for Ssrc at all frequencies. Because we are interested in

the full Stokes parameters for polarized flux (T^I, T^Q, T^U, and T^V), we have to know the

polarization angle of the point source as well. We have used both unpolarized sources,

1http://ned.ipac.caltech.edu


and polarized sources, which we take to have a frequency independent polarization angle

within our band.

During each data collection session, we collect a set of on-off scans of the known point

source. Each set is a series of four tracking scans, each with a 56.5 second duration. We track on the point source, then move slightly off the source and track. The second two scans are a repeat of the first two, but with the off position on the opposite side of the source. We

collect these on-off sets at least twice during each data collection session, or roughly four

hours apart for the longer sessions.

Using each on-off set we can then calculate:

Tsrc/Tcal = 〈Ton-src/Tcal〉 − 〈Toff-src/Tcal〉. (5.16)

Again, here we have used a temporal average to maximize the signal to noise on the source

and thus minimize the error on the calibration. Even if the gain g changes between the

on-source and off-source observations due to non-linearity, our determination of Tsrc/Tcal

is not affected since the gain is already cancelled in each observation.

We can compare Tsrc/Tcal that we measure to the expected value of SsrcGf to get a

conversion factor:

Cflux = (Tcal/Tsrc)GfSsrc. (5.17)

Note that since Cflux is the conversion factor from Tcal units to Kelvin, Cflux is really just

a measurement of Tcal.
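As a sketch of this procedure (catalogue values invented for illustration): a power-law fit to flux points in log-log space, as one might do with NED data, followed by Equation 5.17:

```python
import numpy as np

def powerlaw_fit(freqs_mhz, fluxes_jy):
    """Least-squares power-law fit S(nu) = A * nu**alpha, performed
    as a straight-line fit in log-log space."""
    alpha, log_a = np.polyfit(np.log(freqs_mhz), np.log(fluxes_jy), 1)
    return lambda nu: np.exp(log_a) * nu**alpha

def c_flux(t_src_over_t_cal, s_src_jy, gain_k_per_jy):
    """Conversion from T_cal units to Kelvin, Equation 5.17:
    C_flux = (T_cal / T_src) * G_f * S_src."""
    return gain_k_per_jy * s_src_jy / t_src_over_t_cal

# Synthetic catalogue points following S = 10 * (nu/750)**-0.8 Jy:
nu = np.array([400.0, 600.0, 750.0, 1400.0])
s = 10.0 * (nu / 750.0) ** -0.8
model = powerlaw_fit(nu, s)  # predicts S_src at any in-band frequency
```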

To get the most accurate value for Cflux, we calculate it for each linear polarization independently (C^X_flux and C^Y_flux). This automatically provides a first-order calibration for the polarized brightness T^I and T^Q. We also investigated the stability of Cflux on longer

time scales. We found that it is stable over individual sessions, and is usually stable

between sessions. However, there is occasionally a dramatic change in Cflux. This change

is associated with the physical rotation of the low frequency receivers at GBT, and only

occurs a few times in our overall dataset. The times of these changes correspond with

the transition between observation of the different observational fields, so we are able to

temporally average over the full set of sessions for a given field to minimize the error in

Cflux.

Differential polarization calibration

If we are interested in the full Stokes data as opposed to just the absolute intensity,

an additional calibration component is necessary. Here we follow the formalism and

notation in Heiles et al. [2001], but only perform the first order polarization calibration.


This operation resolves a known phase ambiguity in the GUPPI data, which mixes Stokes

U and V. It also rotates the measured polarizations from telescope coordinates to sky

coordinates based on the telescope’s orientation. For an ideal instrument, this calibration

is sufficient for performing full polarimetry; however, GBT has additional instrumental

leakages between the polarized channels at the 10% level. Implementing a full polarization

calibration to de-leak the Stokes parameters is an area of active research; however, it is

complicated by the fact that the polarization leakage is a function of location within the

GBT primary beam.

The correction factors are most easily calculated and applied in X-Y space, so the

data is calibrated while we are still working with T^XX, T^XY, T^YX, and T^YY, later converting to the Stokes parameters (T^I, T^Q, T^U, T^V).

Above, we calculated the magnitudes of the calibration factors C^X_flux and C^Y_flux; however, these factors have phases as well. These phases will affect the other Stokes components (T^U and T^V). We can write this influence as a series of matrices:

T_sky = G_f S_sky = G_f (S^XX_sky, S^XY_sky, S^YX_sky, S^YY_sky)ᵀ = M_PA⁻¹ M_flux (T^XX_meas, T^XY_meas, T^YX_meas, T^YY_meas)ᵀ, (5.18)

where MPA is the parallactic angle (θ) rotation matrix used to go from the telescope

frame to the sky frame of reference:

M_PA = [ 0.5(1 + cos 2θ)    sin 2θ    0    0.5(1 − cos 2θ) ]
       [ −0.5 sin 2θ        cos 2θ    0    0.5 sin 2θ      ]
       [ 0                  0         1    0               ]
       [ 0.5(1 − cos 2θ)    −sin 2θ   0    0.5(1 + cos 2θ) ]. (5.19)
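As a quick numerical check of Equation 5.19 (a sketch, not pipeline code): rotations should compose, so rotating by θ and then by −θ must give the identity, and the YX (Stokes V) channel is untouched:

```python
import numpy as np

def m_pa(theta):
    """Parallactic-angle rotation in (XX, XY, YX, YY) space, Eq. 5.19."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([
        [0.5 * (1 + c),  s,   0.0, 0.5 * (1 - c)],
        [-0.5 * s,       c,   0.0, 0.5 * s],
        [0.0,            0.0, 1.0, 0.0],
        [0.5 * (1 - c), -s,   0.0, 0.5 * (1 + c)],
    ])

# Rotating by theta and then by -theta is the identity; the third
# (YX, i.e. Stokes V) row and column are untouched by any rotation.
theta = 0.3
identity = m_pa(theta) @ m_pa(-theta)
```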

The flux matrix encodes the complex correction factors for flux, with the complex

phase φ(ν):

M_flux = [ C^X_flux   0                           0                           0        ]
         [ 0          √(C^X_flux C^Y_flux) cos φ  −√(C^X_flux C^Y_flux) sin φ  0        ]
         [ 0          √(C^X_flux C^Y_flux) sin φ   √(C^X_flux C^Y_flux) cos φ  0        ]
         [ 0          0                           0                           C^Y_flux ]. (5.20)

The ambiguity in the value of φ(ν) results from a known software issue in GUPPI where

the analogue to digital sampling of the X and Y voltages can be offset in time by up to


two samples. This issue causes a non-zero phase that takes the form φ(ν) = p0ν + p1, for

unknown parameters p0 and p1.

To correct for this issue, the complex phase φ parameters p0 and p1 must be deter-

mined using a fit, after which the data can be corrected by applying the Mflux matrix.

The parameters p0 and p1 can be measured from the noise-cal. The noise-cal should have a Stokes P^V equal to zero, but a non-zero phase φ causes some of the signal from P^U to be seen in P^V. We calculate the phase parameters by fitting to

R_cal = P^U_cal / √((P^U_cal)² + (P^V_cal)²) = cos(p0ν + p1). (5.21)

Here P^U_cal and P^V_cal are 〈P^U_cal-on − P^U_cal-off〉_t and 〈P^V_cal-on − P^V_cal-off〉_t respectively. Unlike the flux correction Cflux, the phase correction φ changes on a session time scale, or rather it resets every time the software system resets (which is usually once a session but can be more frequent). However, p0 and p1 only assume four sets of values other than zero.
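A sketch of fitting Equation 5.21 on synthetic data (parameter values invented; scipy assumed available). Because a cosine fit is degenerate under sign flips and 2π shifts of the phase, a reasonable starting guess is needed:

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_model(nu_mhz, p0, p1):
    """R_cal = cos(p0 * nu + p1), Equation 5.21 (nu in MHz here)."""
    return np.cos(p0 * nu_mhz + p1)

nu = np.linspace(700.0, 900.0, 2048)   # spectral channel frequencies
true_p0, true_p1 = 0.031, 0.5          # synthetic phase parameters
r_cal = phase_model(nu, true_p0, true_p1)

# Start the non-linear fit near the true values; cos() fits are
# degenerate under p0 -> -p0, p1 -> -p1 and 2*pi phase shifts.
popt, _ = curve_fit(phase_model, nu, r_cal, p0=[0.0305, 0.45])
```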

5.3 Map-making

Formally, map-making is the linear process of solving for the temperature on the sky,

given time ordered data. It is a well-established subject in cosmic microwave background (CMB) studies [Smoot et al., 1992, Bennett et al., 1996, Tegmark, 1997], with

the important caveat that CMB maps are two dimensional while our maps have the third

redshift (spectral frequency ν) dimension. Here we lay out the map-making formalism

before going into the details of the map making module used in the data analysis pipeline.

5.3.1 Formalism

Here we review the standard CMB map-making formalism as laid out in Dodelson [2003],

with some straightforward adaptations for the extra spectral frequency dimension. This

section also serves to introduce our notation.

The time ordered data is modelled by the equation

dνt = Atimνi + nνt, (5.22)

where the index t runs over the time bins, ν over the spectral bins, and i over the

angular pixels in the map. t and ν are understood to have units of time and frequency

respectively and as such are more labels than indices. Repeated indices are summed over unless otherwise stated. In the above equation, dνt is the time ordered data; mνi is


the map of the sky, including both 21 cm signal and foregrounds; nνt is the noise; and

Ati is the pointing operator, indicating how the sky signal enters the telescope at time t.

There are two possible treatments of the telescope’s beam. If the beam is stationary,

i.e. it always couples adjacent pixels in an identical manner independent of time, then the beam can be interpreted as being part of the map. In this case, mνi is a beam-convolved map, related to the true sky by

m_νi = B_νii′ m̃_νi′ (no sum on ν), (5.23)

where m̃_νi is the true sky map and B_νii′ is the beam operator. Alternately, the beam can be included in the pointing operator:

Ã_νti = A_ti′ B_νti′i (no sum on ν, t), (5.24)

where A_ti′ encodes where the telescope is pointing and B_νti′i applies the beam. This latter

treatment is necessary if one needs to treat time variability in the beam, or for treating

beam anisotropies (since the telescope’s angle relative to the sky is a function of sidereal

time). While the second treatment is more general, it has at least one major algorithmic

difficulty. Since the goal of map-making is to solve for mνi, the map-maker will need to

deconvolve the beam from data, which is numerically difficult. As such we treat the beam

as being part of the map, noting that the map-making module is in principle extensible

to include the beam in the pointing operator.

The noise, nνt, is assumed to be a set of correlated Gaussian random numbers with a covariance 〈nνt nν′t′〉 = Nνt,ν′t′. Given an estimate for the map, mνi, we can construct

chi-squared:

χ2 = (dνt − Atimνi)N−1νt,ν′t′(dν′t′ − At′i′mν′i′), (5.25)

where ()−1 is the matrix inverse. The optimal estimator for the map minimizes χ2,

∂χ²/∂m_νi = 0 (5.26)
= −2 A_ti N⁻¹_νt,ν′t′ (d_ν′t′ − A_t′i′ m_ν′i′) (5.27)
= −2 A_ti N⁻¹_νt,ν′t′ d_ν′t′ + 2 A_ti N⁻¹_νt,ν′t′ A_t′i′ m_ν′i′. (5.28)

Solving for mνi yields the map-making equation:

(A_ti N⁻¹_νt,ν′t′ A_t′i′) m_ν′i′ = A_ti N⁻¹_νt,ν′t′ d_ν′t′, (5.29)


which in principle has a solution

$$m_{\nu i} = (A_{ti} N^{-1}_{\nu t, \nu' t'} A_{t'i'})^{-1} A_{t''i'} N^{-1}_{\nu' t'', \nu'' t'''} d_{\nu'' t'''}. \qquad (5.30)$$

The quantity $A_{ti} N^{-1}_{\nu t, \nu' t'} d_{\nu' t'}$ in Equation 5.29 is known as the dirty map, and can be thought of as a noise-weighted map. The matrix in the brackets turns out to be the inverse noise covariance matrix in map space:

$$(C_N^{-1})_{\nu i, \nu' i'} = A_{ti} N^{-1}_{\nu t, \nu' t'} A_{t'i'}, \qquad (5.31)$$
$$(C_N)_{\nu i, \nu' i'} \equiv \langle (m_{\nu i} - \langle m_{\nu i} \rangle)(m_{\nu'i'} - \langle m_{\nu'i'} \rangle) \rangle. \qquad (5.32)$$

As such, map-making can be thought of as a two-step process. The first step is constructing

the dirty map and the inverse noise covariance. The second step is applying the inverse

of this matrix to obtain the estimate for the map, also known as the clean map.
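The two-step process can be illustrated on a toy dense-matrix problem (a sketch only: the sizes, variable names, and brute-force NumPy inverses are illustrative assumptions, not the pipeline's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_pix = 200, 10              # toy numbers of time samples and map pixels

# Pointing operator A: each time sample observes exactly one pixel.
A = np.zeros((n_t, n_pix))
A[np.arange(n_t), rng.integers(0, n_pix, n_t)] = 1.0

m_true = rng.standard_normal(n_pix)            # "true" sky map
sigma_t = rng.uniform(0.5, 2.0, n_t)           # per-sample noise rms
d = A @ m_true + sigma_t * rng.standard_normal(n_t)

# Step 1: dirty map A^T N^-1 d and map-space inverse noise covariance A^T N^-1 A.
Ninv = np.diag(1.0 / sigma_t**2)               # diagonal noise, trivially inverted
dirty = A.T @ Ninv @ d
Cinv = A.T @ Ninv @ A

# Step 2: apply the inverse of C^-1_N to obtain the clean map (Equation 5.29).
m_clean = np.linalg.solve(Cinv, dirty)
```

With the noise term set to zero the clean map reproduces the input map exactly; with noise it is the inverse-variance-weighted average of the samples falling in each pixel.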

The map-making equation is an unbiased estimator for the map given the data as long as $\langle A_{ti} m_{\nu i}\, n_{\nu' t'} \rangle = 0$, i.e. as long as the noise is uncorrelated with the sky map. This assumption is thought to be true of our data but could be broken by strong receiver non-linearity. In particular, the map will be highly biased if

$$\left.\frac{\mathrm{d}^2 P}{\mathrm{d}T^2}\right|_{T_{\mathrm{sys}}} \mathrm{RMS}(T_{\mathrm{noise}})\, \mathrm{RMS}(T_{\mathrm{sky}}) \sim \left.\frac{\mathrm{d}P}{\mathrm{d}T}\right|_{T_{\mathrm{sys}}} \mathrm{RMS}(T_{\mathrm{sky}}), \qquad (5.33)$$

where RMS() denotes the root mean square, P is the power recorded by the telescope, and $T_{\mathrm{sys}}$, $T_{\mathrm{noise}}$, and $T_{\mathrm{sky}}$ are the antenna temperatures of the system, the noise and the sky respectively. For our data, taking RMS$(T_{\mathrm{noise}}) \approx T_{\mathrm{sys}}$, we have estimated the RHS of the above to be $\sim 500$ times larger than the LHS, meaning the noise correlates with the foregrounds at one part in 500. Assuming the foregrounds and the 21 cm signal are uncorrelated, the noise also correlates with the 21 cm signal at the same one part in 500.

All map-making codes, with the exception of those used for some of the maps produced by the Planck Collaboration [Planck Collaboration et al., 2013c], are based on Equation 5.29 or some approximation thereof [Tegmark, 1997]. It should be noted that Equation 5.29

remains an unbiased estimator for the map even if Nνt,ν′t′ is not perfectly representative

of the noise covariance. In this case it is the optimality of the map estimator that suffers,

not its validity as an unbiased estimator [Dunner et al., 2013]. This is comparable to

using a non-optimal set of noise weights when taking a weighted average.

Even if $N_{\nu t, \nu' t'}$ is known perfectly, taking the required inverse, $N^{-1}_{\nu t, \nu' t'}$, by brute force is computationally intractable. Data sets from the GBT approach $10^9$ individual time-frequency data points even after coarse binning the data, and as such the inverse would

require many millennia of computing time on a large cluster. Map-making codes must

therefore find a shortcut for performing this inverse by either exploiting some symmetry

of the noise, approximating the noise, or both.

5.3.2 Noise model and estimation

The first way in which the noise is approximated is by dividing the data into discrete

chunks in time, each assumed to have independent noise. That is, the noise is assumed to

be block diagonal. The use of this simplification greatly reduces the computational task of

inverting the noise matrix and is well established in CMB map-making (see e.g. [Jarosik

et al., 2007, Dunner et al., 2013, Planck Collaboration et al., 2013b]). This approximation

is justified by the fact that the noise tends to decorrelate between widely separated times,

and as such very little information is lost when the correlations between these times are

ignored. The block length is chosen to correspond to a single scan of the telescope (of

order a few minutes) out of convenience, although no part of our map-maker precludes

using longer blocks, thus preserving more information.

From this point t will index the time bins within a block of data as opposed to all

available time bins. The index s will be used to index the block (e.g. $d_{s\nu t}$); however, much of the subsequent analysis refers to a single block, and as such the index will be

suppressed unless explicitly needed.

The noise is most effectively measured in the frequency domain, obtained by Fourier transforming the time axis of the data. In this chapter, we always use the discrete Fourier transform operator, denoted by the symbol $F_{\omega t}$, using the conventions

$$x_\omega = F_{\omega t} x_t = \sum_t e^{-i\omega t} x_t, \qquad (5.34)$$
$$x_t = F^{-1}_{t\omega} x_\omega = \sum_\omega e^{i\omega t} x_\omega / c_t, \qquad (5.35)$$

where $c_t = \delta_{tt}$ is the number of time samples, or the cardinality of the set of t values.²
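These conventions coincide with the unnormalized forward transform of standard FFT libraries; a quick numerical check (an illustrative sketch, not pipeline code):

```python
import numpy as np

rng = np.random.default_rng(1)
c_t = 64
t = np.arange(c_t)

# Explicit DFT operator F_{omega t} = exp(-i omega t), with omega = 2*pi*k/c_t.
F = np.exp(-2j * np.pi * np.outer(t, t) / c_t)

x_t = rng.standard_normal(c_t)
x_omega = F @ x_t                          # Equation 5.34

# numpy's unnormalized forward FFT uses the same convention.
assert np.allclose(x_omega, np.fft.fft(x_t))

# Inverse (Equation 5.35): x_t = sum_omega exp(+i omega t) x_omega / c_t.
x_round_trip = (np.conj(F) @ x_omega) / c_t
```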

To clarify some terminology, there are two separate frequencies that will be used in the following sections. The first is the spectral radiation frequency, ν, which is obtained by Fourier transforming the voltage stream as sampled from the receiver. The transform is performed by the back-end, GUPPI, and the time sampling rate is the analogue-to-digital converter rate, of order GHz, resulting in a spectrum with frequencies corresponding to the frequency of the radiation being measured by the receiver, in our case 700 MHz to 900 MHz. The second frequency is the frequency at which the power in the telescope changes, ω (actually an angular frequency). Here the time sampling rate corresponds to the length of the power integration bins, ∼ 10 Hz, resulting in a spectrum covering the ∼ 0.01 Hz to ∼ 5 Hz range.

²This notation is slightly awkward, but is necessary to avoid conflicting with our use of n to denote the noise and to ensure that t is not mistaken for an index.

The key property of the noise that makes the frequency domain so convenient is that the noise is assumed to be stationary. That is,

$$N_{\nu t, \nu' t'} = \xi_{\nu\nu'(t-t')}, \qquad (5.36)$$

where $\xi_{\nu\nu'(t-t')}$ is the noise spectral covariance correlation function. This symmetry results

in a noise matrix that is diagonal in the frequency domain,

$$N_{\nu\omega, \nu'\omega'} = c_t\, \delta_{\omega\omega'} P_{\nu\nu'\omega} \quad (\text{no sum on } \omega), \qquad (5.37)$$

where $\delta_{\omega\omega'}$ is the Kronecker delta and $P_{\nu\nu'\omega}$ is the noise spectral covariance power spectrum. $P_{\nu\nu'\omega}$ is related to $\xi_{\nu\nu'(t-t')}$ by $P_{\nu\nu'\omega} = F_{\omega(t-t')}\, \xi_{\nu\nu'(t-t')}$. The frequency domain noise matrix is defined by

$$N_{\nu\omega, \nu'\omega'} = \langle n_{\nu\omega} n^*_{\nu'\omega'} \rangle, \qquad (5.38)$$

with $n_{\nu\omega} \equiv F_{\omega t} n_{\nu t}$. Combining Equations 5.37 and 5.38 gives

$$P_{\nu\nu'\omega} = \langle n_{\nu\omega} n^*_{\nu'\omega} \rangle / c_t \quad (\text{no sum on } \omega). \qquad (5.39)$$
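Equation 5.39 can be sanity-checked on simulated stationary white noise, for which the estimated power spectrum should be flat at the time-domain variance (a sketch; sizes and variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
c_t, n_real = 256, 2000           # samples per block and number of realizations
sigma = 0.5                       # white-noise rms in the time domain

# Average |n_omega|^2 / c_t over realizations, as in Equation 5.39.
n_t = sigma * rng.standard_normal((n_real, c_t))
n_omega = np.fft.fft(n_t, axis=1)
P_omega = np.mean(np.abs(n_omega)**2, axis=0) / c_t
```

For white noise the estimate is flat at $\sigma^2$, independent of ω; coloured noise would instead show a low-frequency rise of the kind seen in Figure 5.2.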

An estimate of the noise matrix could be obtained by fitting a model for Pνν′ω to

Equation 5.39 if we had an estimate of the noise, nνt. Clearly we can never know the

noise exactly, since if we did, we could simply eliminate all noise by subtracting it from

the data. The standard assumption used for many CMB experiments [Jarosik et al., 2007, Dunner et al., 2013] is that in a single scan the signal is far sub-dominant to the noise, that is $A_{\omega i} m_{\nu i} \ll n_{\nu\omega}$, and thus $d_{\nu\omega}$ can be taken as an estimate of the noise. This is not the case for 21 cm mapping, where the foregrounds (which we have included in the signal in this chapter) dominate the thermal noise even for short integrations. However, since the foregrounds are so bright, it is easy to get a rough estimate of the map, even with noise weights which are far from optimal, e.g. uniform diagonal noise, setting $N_{\nu t, \nu' t'} \sim \delta_{tt'}\delta_{\nu\nu'}$. This yields a rough map, $\tilde{m}_{\nu i}$, which is far from optimal but is good enough that $A_{\omega i}(m_{\nu i} - \tilde{m}_{\nu i}) \ll n_{\nu\omega}$, so the quantity $d_{\nu t} - A_{ti}\tilde{m}_{\nu i}$ is dominated by noise. This procedure can be iterated with map estimation, such that successively better noise estimates are obtained by subtracting successively better maps.


This is similar to the procedure used in CMB map-making to eliminate the signal bias

in noise estimation [Dunner et al., 2013].

$P_{\nu\nu'\omega}$ can thus be measured by fitting to Equation 5.39, replacing the expectation value with either a single realization (generally a scan's worth of data) or several

realizations assumed to have similar noise properties. A minor issue is that the discrete

Fourier transform assumes that data is periodic, which is not the case. The data must

thus be windowed by an appropriate function, and the effects of the windowing must

also be applied to the power spectrum model prior to fitting. Likewise, there is data

missing due to being flagged for RFI. This also contributes to the window function.

Before windowing it is helpful to remove the time-mean as well as a linear function of

time, since these are the modes that are most contaminated by noise fluctuations on time

scales greater than those spanned by the data, which are more difficult to estimate.

The only remaining question is how to model $P_{\nu\nu'\omega}$. This is dependent on the instrument. The noise will certainly have the thermodynamically guaranteed thermal part, with

$$N_{\nu t, \nu' t'} = \delta_{\nu\nu'}\, \delta_{tt'}\, \frac{T^2_{\mathrm{sys}}(\nu)}{\Delta t\, \Delta\nu}, \qquad (5.40)$$

where $\Delta t$ and $\Delta\nu$ are the bin sizes in integration time and spectral channel respectively.
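As a numerical sanity check of Equation 5.40, taking $T_{\mathrm{sys}} \approx 25$ K and the binning quoted for Figure 5.2 (0.131 s time samples, 3.12 MHz channels) gives a per-bin thermal rms of roughly 40 mK:

```python
import math

T_sys = 25.0        # K, approximate GBT system temperature
dt = 0.131          # s, integration time per bin
dnu = 3.12e6        # Hz, spectral channel width

# Radiometer equation: per-bin thermal rms = T_sys / sqrt(dt * dnu).
sigma_T = T_sys / math.sqrt(dt * dnu)
print(f"thermal rms per bin: {sigma_T * 1e3:.1f} mK")
```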

Investigations of the noise for GBT's 800 MHz receiver led to the conclusion that the non-thermal parts of the noise occupy a small number of modes in the spectral channel covariance. That is, the noise is well modelled by

$$P_{\nu\nu'\omega} = V_{\nu q} V_{\nu' q'} P_{qq'\omega}, \qquad (5.41)$$

where the total number of modes required to describe the noise, $c_q$, is small (of order 5). The columns of $V_{\nu q}$ are mutually orthogonal and are extracted from the data by performing an eigenvalue decomposition on $n_{\nu\omega} n^*_{\nu'\omega}$. This leaves $P_{qq'\omega}$ diagonal in $(q, q')$, with the ω dependence well fit by a power law. This noise term is similar to one found in the Atacama Cosmology Telescope noise discussed in Dunner et al. [2013], where noisy eigenmodes are found in correlations between individual detectors instead of the spectral channels discussed here. The success of this model is demonstrated in Figure 5.2, with the modes $V_{\nu q}$ shown in Figure 5.3. Several of the observed noise modes have a straightforward physical interpretation, such as achromatic gain variations (first and second modes) and individual noisy channels due to RFI contamination (fifth mode). The RFI in our band worsens above 850 MHz, which is visible in the modes.
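The mode-extraction step can be sketched by planting a single achromatic "gain" mode on top of white channel noise and recovering it with an eigenvalue decomposition (a hypothetical miniature of the procedure; the planted mode and the sizes are assumptions, with `np.linalg.eigh` standing in for the pipeline's decomposition):

```python
import numpy as np

rng = np.random.default_rng(3)
c_nu, n_samp = 64, 5000        # spectral channels, number of samples

# Planted "gain" mode: flat in frequency, strongly amplified common-mode noise.
v_true = np.ones(c_nu) / np.sqrt(c_nu)
amps = 10.0 * rng.standard_normal(n_samp)
n = rng.standard_normal((n_samp, c_nu)) + np.outer(amps, v_true)

# Empirical channel-channel covariance, analogous to <n_nu n*_nu'>.
cov = n.T @ n / n_samp

# Eigen-decomposition; the dominant eigenvector is the noisy mode V_{nu q}.
evals, evecs = np.linalg.eigh(cov)
v_est = evecs[:, -1]           # eigh returns the largest eigenvalue last
```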

Our noise model should include one more ingredient. Recall that before taking the

power spectrum of the noise, we had to subtract off the time mean and time slope in each


Figure 5.2: Noise power spectrum, averaged over all spectral channels ($\delta_{\nu\nu'} P_{\nu\nu'\omega}/c_\nu$), as measured in the GBT 800 MHz receiver. Units of the vertical axis are normalized such that pure thermal noise would be a horizontal line at unity. Individual time samples are 0.131 s long and spectral bins are 3.12 MHz wide. The telescope is pointing at the north celestial pole to minimize changes in the sky temperature. Descending the various coloured lines corresponds to removing additional noise eigenmodes, $V_{\nu q}$, from the noise power spectrum. It is seen that after removing 7 of the 64 possible modes the noise is significantly reduced and is approaching the thermal value on all time scales. The modes removed from each subsequent line are shown in Figure 5.3. [Figure: log–log plot of noise power versus frequency, 0.01–1 Hz.]


Figure 5.3: The modes $V_{\nu q}$ removed from the noise power spectra to produce the curves in Figure 5.2. Each mode is offset vertically for clarity, with mode number increasing from bottom to top. The nth mode in this figure is the dominant remaining mode in the nth curve in Figure 5.2. [Figure: normalized mode amplitude versus spectral channel frequency, 700–900 MHz.]


channel due to long time-scale noise. The fact that these modes are noisy and contain

little information should be included in the noise model such that they are deweighted

by the map-maker. This last part of the noise matrix takes the form

$$N_{\nu t, \nu' t'} = \delta_{\nu\nu'}\, \delta_{pp'}\, U_{tp} U_{t'p'}\, T^2_{\mathrm{large}}, \qquad (5.42)$$

with $c_p = 2$ and the two columns of $U_{tp}$ being orthonormal constant and linear functions of time respectively. $T_{\mathrm{large}}$ is a number in units of K, large enough to deweight the mode, but

not so large that it causes numerical instability. Discarding a mode in this manner does

not lead to biases in the map and discards only a small amount of the total information.

In principle Utp could be expanded to deweight additional noisy modes over the time axis,

for instance to discard a particular noisy ω or to deweight the longest time-scale modes

which tend to have persistent noise (this can be seen in the lower curves in Figure 5.2).
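The two columns of $U_{tp}$ are simply the orthonormal constant and linear functions on the block, which can be constructed directly (a sketch; the value of $T_{\mathrm{large}}$ here is an arbitrary illustrative choice):

```python
import numpy as np

c_t = 100                      # time samples in the block
t = np.arange(c_t, dtype=float)

# Orthonormal constant and linear modes over the block.
u0 = np.ones(c_t) / np.sqrt(c_t)
u1 = t - t.mean()              # linear in time, orthogonal to the constant mode
u1 = u1 / np.linalg.norm(u1)
U = np.column_stack([u0, u1])  # U_{tp} with c_p = 2

# Deweighting term of Equation 5.42 for a single frequency channel.
T_large = 1.0e3                # K; large compared to any signal of interest
N_deweight = T_large**2 * (U @ U.T)
```

Any mean plus linear trend in the time stream lies entirely in the span of these two columns, so assigning it a huge variance effectively removes it from the fit.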

Putting these all together yields the full noise model:

$$N_{\nu t, \nu' t'} = \delta_{\nu\nu'}\, \delta_{tt'}\, \frac{T^2_{\mathrm{sys}}(\nu)}{\Delta t\, \Delta\nu} + V_{\nu q} V_{\nu' q'}\, \xi_{qq'(t-t')} + \delta_{\nu\nu'}\, \delta_{pp'}\, U_{tp} U_{t'p'}\, T^2_{\mathrm{large}}, \qquad (5.43)$$

where $\xi_{qq'(t-t')} = F^{-1}_{(t-t')\omega} P_{qq'\omega}$. We see that the noise model takes the form of a diagonal matrix (the first term) plus two relatively low-rank updates. The first is of rank $c_q c_t$ and the second of rank $c_p c_\nu$, both of which are orders of magnitude smaller than the full rank of $c_t c_\nu$. This structure greatly reduces the computational expense of taking

the noise inverse, as will be shown in the next section.

5.3.3 An efficient time domain map-maker

Traditionally, in CMB map-making a significant amount of the computation is performed

in the frequency domain [Jarosik et al., 2007, Dunner et al., 2013]. We have already seen

that going to the frequency domain is useful for estimating the noise. Unfortunately the

structure of our data is not conducive to actually performing the map-making operation

in the frequency domain. As previously noted, the advantages of working in the frequency domain rely on the noise being stationary. This may be intrinsically true of our noise; however, our data are recorded only in short scans of a minute or so in length, and

with interruptions in between. To realize the advantages of the frequency domain it is

necessary for the total length of a single uninterrupted block of data to be much longer

than the longest time over which we wish to consider correlations in the noise. This is

not true of the data in our survey and as such we work in the time domain.


Inverting the noise matrix

In the last section, we developed our model for the noise of an individual block of data

(typically a scan of length a few minutes), with each block of data assumed to be inde-

pendent. The form of our final noise model is diagonal, uncorrelated noise plus a highly

correlated component of dramatically lower rank. This form greatly facilitates the cal-

culation of the inverse noise matrix through the use of the well known Binomial Inverse

Theorem (also known as the matrix inversion lemma, Woodbury matrix identity, and the

Sherman-Morrison-Woodbury formula). In the form we will be using, the theorem states

that for matrices D, W, and G, with dimensions $c_h \times c_h$, $c_h \times c_g$ and $c_g \times c_g$ respectively,

$$(D + W G W^T)^{-1} = D^{-1} - D^{-1} W (G^{-1} + W^T D^{-1} W)^{-1} W^T D^{-1}, \qquad (5.44)$$

where we have switched to index-free matrix notation for convenience. This formula is useful when $c_g \ll c_h$ and the matrix D has a simple form, such as being block or fully diagonal, allowing it to be inverted and multiplied by other matrices quickly. In the case where D is diagonal and $WGW^T$ is dense, evaluating the left-hand side directly requires $O(c_h^3)$ floating point operations, whereas the right-hand side requires $O(c_h^2 c_g)$ floating point operations.
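The identity, and its advantage when D is diagonal, is easy to verify numerically (toy sizes; the matrices are random placeholders, not pipeline quantities):

```python
import numpy as np

rng = np.random.default_rng(4)
c_h, c_g = 300, 6              # full and low-rank dimensions, c_g << c_h

D = np.diag(rng.uniform(1.0, 2.0, c_h))       # simple (diagonal) part
W = rng.standard_normal((c_h, c_g))           # tall, skinny coupling matrix
G = np.eye(c_g)                               # low-rank covariance part

# Right-hand side of Equation 5.44: only a c_g x c_g matrix is ever inverted.
Dinv = np.diag(1.0 / np.diag(D))
Q = np.linalg.inv(np.linalg.inv(G) + W.T @ Dinv @ W)
Ninv_woodbury = Dinv - Dinv @ W @ Q @ W.T @ Dinv

# Brute-force inverse of the full c_h x c_h matrix, for comparison.
Ninv_direct = np.linalg.inv(D + W @ G @ W.T)
```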

Applying this to our noise model in Equation 5.43, we take the first term to be D, with

$$D_{hh'} = \delta_{\nu\nu'}\, \delta_{tt'}\, \frac{T^2_{\mathrm{sys}}(\nu)}{\Delta t\, \Delta\nu}, \quad \text{with} \qquad (5.45)$$
$$h = (t, \nu), \qquad (5.46)$$

which is indeed diagonal with $c_h = c_t c_\nu$. The second two terms constitute $WGW^T$ and are written

$$G_{gg'} = \xi_{qq'(t-t')} + \delta_{\nu\nu'}\, \delta_{pp'}\, T^2_{\mathrm{large}}, \qquad (5.47)$$
$$W_{hg'} = \delta_{\nu\nu'} U_{tp'} + \delta_{tt'} V_{\nu q'}, \quad \text{with} \qquad (5.48)$$
$$g = (t, q) + (\nu, p). \qquad (5.49)$$

Here it is understood that if $g = (t, q)$ then all elements with subscript $(\nu, p)$, such as $\delta_{\nu\nu'}\delta_{pp'} T^2_{\mathrm{large}}$, are zero, and vice versa. We note that W is very sparse, providing another opportunity to shorten the calculation.

In practice it is never necessary to calculate the full inverse noise matrix, since what we really want is the inverse noise matrix times another quantity (such as $d_{\nu t}$ or $A_{ti}$). In general, it is much more efficient (in both operation count and memory usage) to


calculate the constituent parts of the right-hand side of Equation 5.44 (D, W, and the inverse of the term in brackets, $Q \equiv (G^{-1} + W^T D^{-1} W)^{-1}$) and apply these directly to another quantity. The computational complexity of the preparation phase is dominated by calculating $Q_s$ for each block of data s, which requires $O[c_s(c_t c_q + c_\nu c_p)^3]$ operations for the full data set.

A word of caution about the use of the Binomial Inverse Theorem: it can be numerically much less stable than standard algorithms for calculating the inverse directly. This is caused in part by the differencing of the two terms on the right-hand side of Equation 5.44, which for certain modes in the matrix can be extremely close. This occurs when an eigenvalue of $WGW^T$ is much larger than the corresponding value in D.

Pointing operator

For each block of data, the pointing operator is in principle a $c_t \times c_i$ matrix. However, it is very sparse. In its simplest form, at each time t the telescope is pointing at a single pixel on the sky, and so the matrix representation of $A_{ti}$ has exactly one non-zero entry in each row, with value unity. $A_{ti}$ can thus be represented by a list of angular pixel indices of length $c_t$. If we want to be a little more sophisticated, we can use linear or cubic interpolation between pixels, since the telescope will never be pointed exactly at a pixel centre. In these cases four and sixteen times more numbers, respectively, are required to represent the operator, but in all cases the operator can be applied in $O(c_t)$ operations.
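In the nearest-pixel form, applying $A_{ti}$ and its transpose reduces to array indexing and accumulation; a sketch (the helper names `apply_A` and `apply_AT` are hypothetical, not pipeline functions):

```python
import numpy as np

rng = np.random.default_rng(5)
c_t, c_i = 1000, 50
pix = rng.integers(0, c_i, c_t)   # pixel index hit at each time: represents A_{ti}

def apply_A(m):
    """Map to time stream, A m: read out the observed pixel at each time, O(c_t)."""
    return m[pix]

def apply_AT(d):
    """Time stream to map, A^T d: accumulate samples into pixels, O(c_t)."""
    out = np.zeros(c_i)
    np.add.at(out, pix, d)        # unbuffered accumulation handles repeated pixels
    return out
```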

Dirty map

The dirty map is defined by the quantity

$$\sum_s A_{sti}\, N^{-1}_{s\nu t, s\nu t'}\, d_{s\nu t'}, \qquad (5.50)$$

where we have been explicit about the sum over the blocks of data indexed by s. With the methods described above, this operation is not particularly challenging computationally. It can be shown that forming this quantity can be done in $O[c_s(c_t c_q + c_\nu c_p)^2]$ operations. This is much less work than the preparation phase of calculating the constituent

parts of Equation 5.44.

Inverse noise covariance

Calculating the inverse noise covariance matrix, $(C_N^{-1})_{\nu i, \nu' i'} = \sum_s A_{sti}\, N^{-1}_{s\nu t, s\nu t'}\, A_{st'i'}$, is

among the most challenging aspects of map-making. Most CMB map-making codes do


not explicitly calculate this matrix. Instead, they calculate the action of the matrix on

a map estimate and use iterative methods, such as the conjugate gradient method, to

solve for the clean map [Wright et al., 1996, Hinshaw et al., 2003, Jarosik et al., 2007,

Dunner et al., 2013]. Our choice to calculate the matrix explicitly is motivated by a desire for simplicity, and is feasible at the current size of our maps. In addition, there

is in principle a significant amount of information that can be obtained about the noise

properties of the map from this matrix. We note that as our survey size increases, it

may become necessary to switch to iterative methods. This would require rewriting the

core engine of our map-maker. However, the infrastructure modules, which constitute

the majority of the code base, would be reusable.

When constructing this matrix, efficient memory management is essential. The matrix generally does not fit into the memory of an individual machine, and as such only some portion of the matrix can be considered at a given time, making sure that input and output are minimized. The most convenient and efficient way to do this is to hold all the constituent parts of the time domain noise inverse matrix ($D_s$, $W_s$ and $Q_s$ for all s) in memory, and calculate a single row of the large matrix at a time. That is, calculate $\sum_s (C_N^{-1})_{s\nu i, \nu' i'}$ for a single $(\nu, i)$ at a time, and write it to disk before starting on the next pair. This structure also lends itself well to being parallelized over a computer cluster should this ever need to be implemented.

It can be shown that with the methods described in the previous sections, constructing

this matrix requires $O[c_s c_t c_\nu (c_q + c_p)(c_t c_\nu + c_t c_q + c_\nu c_p)]$ operations. The

first term in the second bracket dominates over the others.

Clean map

Solving for the clean map, $m_{\nu i}$, is straightforward, as Equation 5.29 is a linear equation. In practice, more efficient and numerically stable algorithms exist than calculating the required matrix inverse directly. In particular, since the inverse noise covariance matrix is symmetric and positive definite by definition, a Cholesky decomposition can be used. Solving equations of this form is well established, with libraries such as ScaLAPACK available for porting the process to a cluster for large maps. Solving for the clean map requires $O[(c_i c_\nu)^3]$ operations.
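The Cholesky-based solve can be sketched as follows (a miniature stand-in: plain NumPy solves replace the ScaLAPACK triangular routines used for large maps):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50                                     # toy map dimension (c_i * c_nu)

# Build a symmetric positive definite inverse noise covariance.
M = rng.standard_normal((n, n))
Cinv = M @ M.T + n * np.eye(n)
dirty = rng.standard_normal(n)             # dirty map, A^T N^-1 d

# Cholesky factor Cinv = L L^T, then two triangular solves give the clean map.
L = np.linalg.cholesky(Cinv)
y = np.linalg.solve(L, dirty)              # forward substitution step
m_clean = np.linalg.solve(L.T, y)          # back substitution step
```

Here `np.linalg.solve` does not exploit the triangular structure (a dedicated triangular solver would), but the result is identical and the factorization is never repeated for additional right-hand sides.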

Computational problem size

To give a rough idea of the number of operations required in each step of map-making, we take our Green Bank Telescope observations of the 15 hr field as an example. This


field has roughly 100 hours of data spread over 10 sq. deg. This yields rough set sizes of $c_\nu \sim 100$, $c_i \sim 10^3$, $c_s \sim 10^3$, $c_t \sim 10^3$, $c_q \sim 5$, and $c_p \sim 2$.

Under these assumptions, preparing to invert the noise matrices, which is dominated by calculating the matrices $Q_s$, requires of order $10^{13}$ operations. Calculating the dirty map requires a paltry $10^9$ operations. Calculating the inverse noise covariance matrix requires roughly $10^{13}$ operations. Calculating the clean map then takes of order $10^{15}$ operations.

These figures should not be mistaken for the amount of time required for each oper-

ation, since other factors can be very important. Calculating the matrices $Q_s$ is trivially parallelized, with each individual calculation dominated by taking a matrix inverse, which is a standard, and thus highly optimized, operation. The whopping $10^{15}$ operations required for the clean map constitute a matrix linear solve. This is similar to the operation that computer clusters are benchmarked against, meaning that highly optimized parallel software packages exist for performing it. Forming the inverse covariance matrix is, on the other hand, highly specialized and not yet implemented on a cluster. As such it takes a comparable amount of time as solving for the clean map, despite the

difference in the operation count.

Simplification

Earlier versions of the map-maker, notably those used to produce the maps analysed in the subsequent chapters, were simplified: the second term in the noise model, the one defined in Equation 5.41, was neglected. This

undoubtedly had a negative impact on the optimality of the maps, but greatly simplified the map-making problem. This is because the neglected term was the only one that coupled the various frequency channels together; without that term, each spectral frequency slice could be treated as an independent map. As such, the operation count of most steps in the map-maker was reduced by a factor of roughly $c_\nu$, and solving for the clean map became a factor of $c_\nu^2$ faster.

5.4 Conclusions

Arguably, the most critical part of the data analysis for a 21 cm large-scale structure

survey comes after map-making, in the foreground subtraction. This is because fore-

grounds are the greatest challenge for 21 cm surveys. However, having high quality maps

is essential for being able to subtract foregrounds. Foreground subtraction works because


the foregrounds are confined to a limited number of modes within the map. If the map-

making is not done carefully, this will not be true and foreground subtraction will be

impossible.

The infrastructure described here will enable efficient and relatively optimal analyses

of 21 cm intensity mapping data from single dish telescopes. As will be shown in the

following chapters, this software has been highly successful in the analysis of the Green

Bank Telescope intensity mapping survey data. In addition, at the time of writing there

is an ongoing effort to adapt these methods to the data taken at the Parkes Telescope.


Chapter 6

Measurement of 21 cm brightness fluctuations at z ∼ 0.8 in cross-correlation

A version of this chapter was published in The Astrophysical Journal Letters as “Mea-

surement of 21 cm Brightness Fluctuations at z ∼ 0.8 in Cross-correlation”, Masui, K.

W., Switzer, E. R., Banavar, N., Bandura, K., Blake, C., Calin, L.-M., Chang, T.-C.,

Chen, X., Li, Y.-C., Liao, Y.-W., Natarajan, A., Pen, U.-L., Peterson, J. B., Shaw, J.

R., Voytek, T. C., Vol. 763, Issue 1, 2013. Reproduced here with the permission of the

AAS.

6.1 Summary

21 cm intensity maps acquired at the Green Bank Telescope are cross-correlated with

large-scale structure traced by galaxies in the WiggleZ Dark Energy Survey. The data

span the redshift range 0.6 < z < 1 over two fields totaling ∼ 41 deg. sq. and 190 hr of

radio integration time. The cross-correlation constrains ΩHIbHIr = [0.43 ± 0.07(stat.) ±0.04(sys.)] × 10−3, where ΩHI is the neutral hydrogen (H i) fraction, r is the galaxy–

hydrogen correlation coefficient, and bHI is the H i bias parameter. This is the most

precise constraint on neutral hydrogen density fluctuations in a challenging redshift range.

Our measurement improves the previous 21 cm cross-correlation at z ∼ 0.8 both in its

precision and in the range of scales probed.


Chapter 6. 21 cm cross-correlation with an optical galaxy survey 88

6.2 Introduction

Measurements of neutral hydrogen are essential to our understanding of the universe.

Following cosmological reionization at z ∼ 6, the majority of hydrogen outside of galaxies

is ionized. Within galaxies, it must pass through its neutral phase (H i) as it cools and

collapses to form stars. The quantity and distribution of neutral hydrogen is therefore

intimately connected with the evolution of stars and galaxies, and observations of neutral

hydrogen can give insight into these processes.

Above redshift z = 2.2, the Ly-α line redshifts into optical wavelengths and H i can

be observed, typically in absorption against distant quasars [Prochaska and Wolfe, 2009].

Below redshift z = 0.1, H i has been studied using 21 cm emission from its hyperfine

splitting [Zwaan et al., 2005, Martin et al., 2010]. There, the abundance and large-

scale distribution of neutral hydrogen are inferred from large catalogs of discrete galactic

emitters. Between z = 0.1 and z = 2.2 there are fewer constraints on neutral hydrogen,

and those that do exist [Meiring et al., 2011, Lah et al., 2007, Rao et al., 2006] have large

uncertainties.

While the 21 cm line is too faint to observe individual galaxies in this redshift range,

one can nonetheless pursue three-dimensional (3D) intensity mapping [Chang et al., 2008,

Loeb and Wyithe, 2008, Ansari et al., 2012a, Mao et al., 2008, Seo et al., 2010, Mao, 2012].

Instead of cataloging many individual galaxies, one can study the large-scale structure

(LSS) directly by detecting the aggregate emission from many galaxies that occupy large

∼ 1000 Mpc$^3$ voxels. The use of such large voxels allows telescopes such as the Green

Bank Telescope (GBT) to reach z ∼ 1, conducting a rapid survey of a large volume.

Aside from being used to measure the hydrogen content of galaxies, intensity mapping

promises to be an efficient way to study the large-scale structure of the Universe. In

particular, the method could be used to measure the baryon acoustic oscillations to high

accuracy and constrain dark energy [Chang et al., 2008]. However, intensity mapping

is a new technique which is still being pioneered. Ongoing observational efforts such as

the one presented here are essential for developing this technique as a powerful probe of

cosmology.

Synchrotron foregrounds are the primary challenge to this method, because they are

three orders of magnitude brighter than the 21 cm signal. However, the physical process

of synchrotron emission is known to produce spectrally smooth radiation [Oh and Mack,

2003, Seo et al., 2010]. If the calibration, spectral response and beam width of the

instrument are well-controlled and characterized, the subtraction of foregrounds should

be possible because the foregrounds have fewer degrees of freedom than the cosmological


signal. We find that this allows the foregrounds to be cleaned to the level of the expected

signal. The auto-correlation of intensity maps is biased by residual foregrounds, and

minimizing and constraining these residuals is an active area of work. However, because

residual foregrounds should be uncorrelated with the cosmological signal, they only boost

the noise in a cross-correlation with existing surveys. This makes the cross-correlation

a robust indication of neutral hydrogen density fluctuations in the 21 cm intensity maps

[Chang et al., 2010, Vujanovic et al., 2012].

The first detection of the cross-correlation between LSS and 21 cm intensity maps at

z ∼ 1 was reported in Chang et al. [2010], based on data from GBT and the DEEP2

galaxy survey. Here we improve on these measurements by cross correlating new intensity

mapping data with the WiggleZ Dark Energy Survey [Drinkwater et al., 2010]. Our

measurement improves on the statistical precision and range of scales of the previous

result, which was based on 15 hr of GBT integration time over 2 deg. sq.

Throughout, we use cosmological parameters from Komatsu et al. [2009], in accord

with Blake et al. [2011].

6.3 Observations

The observations presented here were conducted with the 680–920 MHz prime-focus re-

ceiver at the GBT. The unblocked aperture of GBT’s 100 m offset paraboloid design

results in well-controlled sidelobes and ground spill, advantageous to minimizing radio-

frequency contamination and overall system temperature (∼ 25 K). The receiver is sam-

pled from 700 MHz (z = 1) to 900 MHz (z = 0.58) by the Green Bank Ultimate Pulsar

Processing Instrument (GUPPI) pulsar back-end systems [DuPlain et al., 2008].

The data used in this analysis were collected between 2011 February and November as

part of a 400 hr allocation over four fields. This allocation was specifically to corroborate

previous cross-correlation measurements [Chang et al., 2010] over a larger survey area,

and to search for auto-power of diffuse 21 cm emission. The analysis here is based on

a 105 hr integration of a 4.5° × 2.4° “15 hr deep field” centered at 14h31m28.5s right

ascension, 2°0′ declination and an 84 hr integration on a 7.0° × 4.3° “1 hr shallow” field

centered at 0h52m0s right ascension, 2°9′ declination. The beam FWHM at 700 MHz is

0.314° and at 900 MHz it is 0.25°. At band-center, the beam width corresponds to a

comoving length of 9.6h−1Mpc. Both fields have nearly complete angular overlap and

good redshift coverage with WiggleZ.
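As a sanity check on the quoted beam scale, the comoving-distance integral for a flat ΛCDM cosmology can be evaluated directly. Here Ωm = 0.27 is a stand-in for the Komatsu et al. [2009] parameters, and the 0.28° band-center FWHM and function name are our own illustrative choices:

```python
import math

def comoving_distance(z, omega_m=0.27, n=2000):
    """Comoving distance in h^-1 Mpc for a flat LambdaCDM cosmology.

    Simple trapezoid-rule integration of (c/H0) * Integral dz' / E(z'),
    with E(z) = sqrt(omega_m (1+z)^3 + omega_lambda).
    """
    c_over_h0 = 2997.92  # c/H0 in h^-1 Mpc
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(omega_m * (1 + zi) ** 3 + (1 - omega_m))
        total += (0.5 if i in (0, n) else 1.0) / e
    return c_over_h0 * total * dz

# A beam FWHM of ~0.28 deg at band-center (z ~ 0.8) subtends roughly
# comoving_distance(0.8) * math.radians(0.28) ~ 9.7 h^-1 Mpc,
# consistent with the 9.6 h^-1 Mpc quoted in the text.
```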

Our observing strategy consists of sets of azimuthal scans at constant elevation to

control ground spill. We start the set at the low right ascension (right hand) side of the


field and allow the region to drift through. We then re-point the telescope to the right

side of the field and repeat the process. For the 15 hr field, this set of scans consists of

8 one-minute scans, each with a stroke of 4°. For the 1 hr field, a set of scans consists

of 10 two-minute scans, each 8° in length. Note that since we observe over a range of

local sidereal times, our scan directions cover a range of angles with respect to the sky.

This range of crossing angles makes the noise more isotropic, and allows us to ignore the

directional dependence of the noise in the 3D power spectrum. The survey regions have

most coverage in the middle due to the largest number of intersecting scans. Observations

were conducted at night to minimize radio-frequency interference (RFI).

The optical data are part of the WiggleZ Dark Energy Survey [Drinkwater et al., 2010],

a large-scale spectroscopic survey of emission-line galaxies selected from UV and optical

imaging. It spans redshifts 0.2 < z < 1.0 across 1000 sq. deg. The selection function

[Blake et al., 2010] has angular dependence determined primarily by the UV selection, and

redshift coverage which favors the z = 0.6 end of the radio band. The galaxies are binned

into volumes with the same pixelization as the radio maps and divided by the selection

function, so that we consider the cross-power with respect to optical over-density.

6.4 Analysis

Here we describe our analysis pipeline, which converts the raw data into 3D intensity

maps, then correlates these maps with the WiggleZ galaxies.1

6.4.1 From data to maps

The first stage of our data analysis is a rough cut to mitigate contamination by terrestrial

sources of RFI. Our data natively have fine spectral resolution with 4096 channels across

200 MHz of bandwidth. This facilitates the identification and flagging of RFI. In each

scan, individual channels are flagged based on their variance. Any RFI not sufficiently

prominent to be flagged in this stage is detected as increased noise later in the pipeline

and subsequently down-weighted during map-making. Additional RFI is detected as

frequency-frequency covariance in the foreground cleaning and subtracted in the map

domain. While RFI is prominent in the raw data, after these steps, it was not found to

be the primary limitation of our analysis.
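A schematic of this variance-based flagging step (the robust threshold below is an illustrative choice, not the pipeline's exact criterion):

```python
import numpy as np

def flag_rfi_channels(scan, n_sigma=4.0):
    """Return a boolean keep-mask over frequency channels.

    scan: 2D array (time, frequency) for one scan. Channels whose
    time variance sits far above the typical channel variance are
    flagged as RFI-contaminated.
    """
    var = scan.var(axis=0)               # per-channel variance
    med = np.median(var)
    mad = np.median(np.abs(var - med))   # robust spread estimate
    return var < med + n_sigma * 1.4826 * mad
```

Channels failing the cut are excluded; as described above, anything that slips through is down-weighted later via the noise estimate.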

In addition to RFI, we also eliminate channels within 6 MHz of the band edges (where

aliasing is a concern) and channels in the 800 MHz receiver’s two resonances at roughly

1Our analysis software is publicly available at https://github.com/kiyo-masui/analysis_IM


798 MHz and 817 MHz. Before mapping, the data are re-binned to 0.78 MHz-wide bands

(corresponding to roughly 3.8h−1Mpc at band-center).

For a time-transfer calibration standard, we inject power from a noise diode into the

antenna. The noise diode raises the system temperature by roughly 2 K and we switch

it at 16 Hz so that the noise power can be cleanly isolated. Calibration is performed by

first dividing by the noise diode power (averaged over a scan) in each channel, and then

converting to flux using dedicated observations of 3C286 and 3C48. The gain for X and

Y polarizations may differentially drift and so these are calibrated independently. Our

absolute calibration uncertainty is dominated by the calibration of the reference flux scale

(5%, Kellermann et al. [1969]), measurements of the calibration sources with respect to

this reference (5%, see also Scaife and Heald [2012]), and uncertainty of our measurement

of these fluxes (5%). Receiver nonlinearity, uncertainty in the beam shape and variations

in the diffuse galactic emission in the on- and off-source measurements are estimated to

contribute of order 1% each. These are all assumed to be uncorrelated errors and give

9% total calibration systematic error.
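The 9% total follows from combining the individual contributions in quadrature; a one-line check:

```python
import math

# Fractional calibration uncertainties quoted above.
errors = [0.05,  # reference flux scale
          0.05,  # calibrator fluxes with respect to the reference
          0.05,  # our measurement of the calibrator fluxes
          0.01,  # receiver nonlinearity
          0.01,  # beam shape uncertainty
          0.01]  # diffuse galactic emission on/off source

total = math.sqrt(sum(e ** 2 for e in errors))
print(f"{total:.1%}")  # 8.8%, quoted as 9%
```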

Gridding the data from the time ordered data to a map is done in two stages. We

follow cosmic microwave background (CMB) map-making conventions as described in

Tegmark [1997]. The map maker treats the noise as uncorrelated except for deweighting

the mean and slope along the time axis for each scan. Each frequency channel is

treated independently. In the first round of map-making, the noise is estimated from

the variance of the scan. This is inaccurate because the foregrounds dominate the noise.

This yields a sub-optimal map which nonetheless has a high signal-to-noise ratio on the

foregrounds. This map is used to estimate the expected foreground signal in the time

ordered data and to subtract this expected signal, leaving time ordered data which are

dominated by noise. After flagging anomalous data points at the 4σ level, we re-estimate

the noise and use this estimate for a second round of map-making, yielding a map which

is much closer to optimal. In reality, it is a bad assumption that the noise is uncorrelated.

We have observed correlations at finite time lag and between separate frequency channels

in our data. Exploiting these correlations to improve the optimality of our maps is an

area of active research. For all map-making, we use square pixels with widths of 0.0627°,

which corresponds to a quarter of the beam’s FWHM at the high frequency edge of our

band. Fig. 6.1 shows the 15 hr field map.
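A toy single-frequency version of this map-maker, with uncorrelated noise and per-scan mean/slope deweighting, might look as follows (a simplified stand-in for the full CMB-style machinery of Tegmark [1997]; the names are ours):

```python
import numpy as np

def make_map(tod, pix, npix, noise_var):
    """Bin one scan of time-ordered data into map pixels.

    tod: 1D samples; pix: pixel index per sample; noise_var: scalar
    noise variance estimated for this scan. The best-fit mean and
    slope in time are projected out before inverse-noise-weighted
    binning; unobserved pixels come back as NaN.
    """
    t = np.arange(tod.size, dtype=float)
    basis = np.vstack([np.ones_like(t), t - t.mean()]).T
    coeffs, *_ = np.linalg.lstsq(basis, tod, rcond=None)
    cleaned = tod - basis @ coeffs
    w = 1.0 / noise_var
    num = np.bincount(pix, weights=w * cleaned, minlength=npix)
    den = np.bincount(pix, weights=np.full(tod.size, w), minlength=npix)
    with np.errstate(invalid="ignore"):
        return num / den, den  # map estimate, accumulated inverse variance
```

In the real pipeline a first-pass map is used to subtract the foreground signal from the time-ordered data before the noise is re-estimated for the second pass.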

In addition to the observed maps, we develop signal-only simulations based on Gaus-

sian realizations of the non-linear, redshift-space power spectrum using the empirical-NL

model described by Blake et al. [2011].


6.4.2 From maps to power spectra

The approach to 21 cm foreground subtraction in the literature has been dominated by the

notion of fitting and subtracting smooth, orthogonal polynomials along each line of

sight. This is motivated by the eigenvectors of smooth synchrotron foregrounds [Liu

and Tegmark, 2011, 2012]. In practice, instrumental factors such as the spectral cali-

bration (and its stability) and polarization response translate into foregrounds that have

more complex structure. One way to quantify this structure is to use the map itself

to build the foreground model. To do this, we find the frequency-frequency covariance

across the sample of angular pixels in the map, using a noise inverse weight. We then

find the principal components along the frequency direction, order these by their singular

value, and subtract a fixed number of modes of the largest covariance from each line of

sight. Because the foregrounds dominate the real map, they also dominate the largest

modes of the covariance.
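A minimal dense-matrix sketch of this cleaning (unweighted for brevity; the pipeline uses noise-inverse weights, and the function name is ours):

```python
import numpy as np

def clean_foregrounds(cube, n_modes):
    """Subtract the dominant frequency modes from a map cube.

    cube: array (n_freq, n_pix), each column a line of sight. The
    frequency-frequency covariance is formed across pixels, its
    principal components are found, and the n_modes largest are
    projected out of every line of sight.
    """
    cov = cube @ cube.T / cube.shape[1]   # freq-freq covariance
    u, s, _ = np.linalg.svd(cov)          # modes sorted by singular value
    modes = u[:, :n_modes]
    return cube - modes @ (modes.T @ cube)
```

Because the foregrounds dominate the map, the leading modes are foreground-dominated and the residual is close to noise plus signal.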

There is an optimum in the number of foreground modes to remove. For too few

modes, the errors are large due to residual foreground variance. For too many modes,

21 cm signal is lost, and so after compensating based on simulated signal loss (see below),

the errors increase modestly. We find that removing 20 modes in both the 15 hr and 1 hr

fields maximizes the signal. Fig. 6.1 shows the foreground-cleaned 15 hr field map.

We estimate the cross-power spectrum using the inverse noise variance of the maps

and the WiggleZ selection function as the weight for the radio and optical survey data,

respectively. The variance is estimated in the mapping step and represents noise and

survey coverage. The foreground cleaning process also removes some 21 cm signal. We

compensate for signal loss using a transfer function based on 300 simulations where

we add signal simulations to the observed maps (which are dominated by foregrounds),

clean the combination, and find the cross-power with the input simulation. Because the

foreground subtraction is anisotropic in k⊥ and k‖, we estimate and apply this transfer

function in 2D. The GBT beam acts strictly in k⊥, and again we develop a 2D beam

transfer function using signal simulations with the beam.
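The signal-injection transfer function can be sketched as follows, where `clean` and `power` stand in for the pipeline's foreground cleaning and 2D power-spectrum estimator (in practice this is evaluated per (k⊥, k‖) cell over 300 simulations):

```python
import numpy as np

def transfer_function(real_map, signal_sims, clean, power):
    """Fraction of signal cross-power surviving the cleaning.

    Each simulated signal is added to the observed (foreground
    dominated) map, the sum is cleaned, and the result is crossed
    with the input simulation; the mean ratio to the input power
    estimates the signal loss to be compensated.
    """
    ratios = [power(clean(real_map + sim), sim) / power(sim, sim)
              for sim in signal_sims]
    return float(np.mean(ratios))
```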

The foreground filter is built from the real map which has a limited number of inde-

pendent angular elements. This causes the transfer function to have components in both

the angular and frequency direction [Nityananda, 2010], with the angular part dominat-

ing. This is accounted for in our transfer function. Subtleties of the cleaning method will

be described in a future methods paper.

We estimate the errors and their covariance in our cross-power spectrum by calcu-

lating the cross-power of the cleaned GBT maps with 100 random catalogs drawn from

the WiggleZ selection function [Blake et al., 2010]. The mean of these cross powers is


consistent with zero, as expected. The variance accounts for shot noise in the galaxy

catalog and variance in the radio map either from real signal (sample variance), resid-

ual foregrounds or noise. Estimating the errors in this way requires many independent

modes to enter each spectral cross-power bin. This fails at the lowest k values and so

these scales are discarded. In going from the two-dimensional power to the 1D powers

presented here, we weight each 2D k-cell by the inverse variance of the 2D cross-power

across the set of mock galaxy catalogs. The 2D to 1D binning weight is multiplied by

the square of the beam and foreground cleaning transfer functions. Fig. 6.2 shows the

resulting galaxy-H i cross-power spectra.
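In outline, the mock-catalog error estimate looks like this (`cross_power` is again a stand-in for the band-power estimator; the real analysis uses 100 WiggleZ mocks):

```python
import numpy as np

def mock_errors(radio_map, mock_catalogs, cross_power):
    """Errors on the cross-power from null mock catalogs.

    Crossing the cleaned radio map with catalogs drawn from the
    selection function gives null spectra whose mean should be
    consistent with zero; their scatter per k bin estimates the
    error bars, including shot noise, residual foregrounds and
    thermal noise.
    """
    powers = np.array([cross_power(radio_map, cat) for cat in mock_catalogs])
    return powers.mean(axis=0), powers.std(axis=0, ddof=1)
```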

6.5 Results and discussion

To relate the measured spectra with theory, we start with the mean 21 cm emission

brightness temperature [Chang et al., 2010],

Tb = 0.29 (ΩHI/10−3) [(Ωm + (1 + z)−3ΩΛ)/0.37]^(−1/2) [(1 + z)/1.8]^(1/2) mK. (6.1)

Here ΩHI is the comoving H i density (in units of today’s critical density), and Ωm and

ΩΛ are evaluated at the present epoch. We observe the brightness contrast, δT = TbδHI,

from fluctuations in the local H i over-density δHI. On large scales, it is assumed that

neutral hydrogen and optically-selected galaxies are biased tracers of the dark matter,

so that δHI = bHIδ, and δopt = boptδ. In practice, both tracers may contain a stochas-

tic component, so we include a galaxy-H i correlation coefficient r. This quantity is

scale-dependent because of the k-dependent ratio of shot noise to large-scale structure,

but should approach unity on large scales. The cross-power spectrum is then given by

PHI,opt(k) = TbbHIboptrPδδ(k) where Pδδ(k) is the matter power spectrum.
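A direct transcription of Eq. (6.1) is useful for orders of magnitude; Ωm = 0.27 and ΩΛ = 0.73 below are placeholders for the adopted Komatsu et al. [2009] values:

```python
def mean_21cm_temp(omega_hi, z, omega_m=0.27, omega_l=0.73):
    """Mean 21 cm brightness temperature Tb of Eq. (6.1), in mK."""
    density = (omega_m + (1 + z) ** -3 * omega_l) / 0.37
    return 0.29 * (omega_hi / 1e-3) * density ** -0.5 * ((1 + z) / 1.8) ** 0.5

# With Omega_HI ~ 0.5e-3 at z = 0.8 the mean signal is ~0.14 mK,
# three orders of magnitude below the foregrounds.
```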

The large-scale matter power spectrum is well-known from CMB measurements [Ko-

matsu et al., 2011] and the bias of the optical galaxy population is measured to be

b2opt = 1.48 ± 0.08 at the central redshift of our survey [Blake et al., 2011]. Simula-

tions including nonlinear scales (as in Sec. 6.4.1) are run through the same pipeline as

the data. We fit the unknown prefactor ΩHIbHIr of the theory to the measured cross-

powers shown in Fig. 6.2, and determine ΩHIbHIr = [0.44±0.10(stat.)±0.04(sys.)]×10−3

for the 15 hr field data, and ΩHIbHIr = [0.41 ± 0.11(stat.) ± 0.04(sys.)] × 10−3 for the

1 hr field data. The systematic term represents the 9% absolute calibration uncer-

tainty from Sec. 6.4.1. It does not include current uncertainties in the cosmological

parameters or in the WiggleZ bias, but these are sub-dominant. Combining the two


fields yields ΩHIbHIr = [0.43 ± 0.07(stat.) ± 0.04(sys.)] × 10−3. These fits are based

on the range 0.075hMpc−1 < k < 0.3hMpc−1 over which we believe that errors are

well-estimated (failing toward larger scales where there are too few k modes in the vol-

ume) and under the assumption that nonlinearities and the beam/pixelization (failing

toward smaller scales) are well-understood. A less conservative approach is to fit for

0.05hMpc−1 < k < 0.8hMpc−1 where the beam, model of nonlinearity and error esti-

mates are less robust, but which shows the full statistical power of the measurement, at

7.4σ combined. Here, ΩHIbHIr = [0.40± 0.05(stat.)± 0.04(sys.)]× 10−3 for the combined,

ΩHIbHIr = [0.46± 0.08]× 10−3 for the 15 hr field and ΩHIbHIr = [0.34± 0.07]× 10−3 for

the 1 hr field.
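The combined numbers follow from a standard inverse-variance weighting of the two fields, treating them as independent and setting aside the shared calibration systematic; for the conservative fits, for example:

```python
import math

def combine(measurements):
    """Inverse-variance weighted mean of (value, error) pairs."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# Conservative-range fits from the text, in units of 1e-3:
value, error = combine([(0.44, 0.10), (0.41, 0.11)])
# value ~ 0.43, error ~ 0.07, matching the quoted combination.
```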

To compare to the result in Chang et al. [2010], ΩHIbrelr = [0.55± 0.15(stat.)]× 10−3,

we must multiply their relative bias (between the GBT intensity map and DEEP2) by

the DEEP2 bias b = 1.2 [Coil et al., 2004] to obtain an expression with respect to bHI.

This becomes ΩHIbHIr = [0.66± 0.18(stat.)]× 10−3, and is consistent with our result.

The absolute abundance and clustering of H i are of great interest in studies of galaxy

and star formation. Our measurement is an integral constraint on the H i luminosity

function, which can be directly compared to simulations. The quantity ΩHIbHI also de-

termines the amplitude of 21 cm temperature fluctuations. This is required for forecasts

of the sensitivity of future 21 cm intensity mapping experiments. Since r < 1 we have

put a lower limit on ΩHIbHI.

To determine ΩHI alone from our cross-correlation requires external estimates of the

H i bias and stochasticity. The linear bias of H i is expected to be ∼ 0.65 to ∼ 1 at

these redshifts [Marín et al., 2010, Khandai et al., 2011]. Simulations to interpret Chang

et al. [2010] find values for r between 0.9 and 0.95 [Khandai et al., 2011], albeit for a

different optical galaxy population. Measurements of the correlation coefficient between

WiggleZ galaxies and the total matter field are consistent with unity in this k-range

(with rm,opt ≳ 0.8) [Blake et al., 2011]. These suggest that our cross-correlation can be

interpreted as ΩHI between 0.45× 10−3 and 0.75× 10−3.

Measurements with the Sloan Digital Sky Survey [Prochaska and Wolfe, 2009] suggest

that before z = 2, ΩHI may have already reached ∼ 0.4 × 10−3. At low redshift, 21 cm

measurements give ΩHI(z ∼ 0) = (0.43± 0.03)× 10−3 [Martin et al., 2010]. Intermediate

redshifts are more difficult to measure, and estimates based on Mg II lines in damped Lyman-α (DLA) systems

observed with Hubble Space Telescope find ΩHI(z ∼ 1) ≈ (0.97±0.36)×10−3 [Rao et al.,

2006], in rough agreement with z ≈ 0.2 DLA measurements [Meiring et al., 2011] and

21 cm stacking [Lah et al., 2007]. This is in some tension with a model where ΩHI falls

monotonically from the era of maximum star formation rate [Duffy et al., 2012]. Under


the assumption that bHI = 0.8, r = 1, the cross-correlation measurement here suggests

ΩHI ∼ 0.5× 10−3, in better agreement, but clearly better measurements of bHI and r are

needed. Redshift space distortions can be exploited to break the degeneracy between ΩHI

and bias to measure these quantities independently of simulations [Wyithe, 2008, Masui

et al., 2010a]. This will be the subject of future work.

Our measurement is limited by both the number of galaxies in the WiggleZ fields and

by the noise in our radio observations. Simulations indicate that the variance observed in

our radio maps after foreground subtraction is roughly consistent with the expected levels

from thermal noise. This is perhaps not surprising, since our survey is relatively wide and

shallow compared to an optimal LSS survey; it is nonetheless encouraging.

Acknowledgements

We thank John Ford, Anish Roshi and the rest of the GBT staff for their support;

Paul Demorest and Willem van-Straten for help with pulsar instruments and calibration;

and K. Vanderlinde for helpful conversations.

K.W.M. is supported by NSERC Canada. E.R.S. acknowledges support by NSF

Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics.

J.B.P. and T.C.V. acknowledge support under NSF grant AST-1009615. X.C. acknowl-

edges the Ministry of Science and Technology Project 863 (under grant 2012AA121701);

the John Templeton Foundation and NAOC Beyond the Horizon program; the NSFC

grant 11073024. A.N. acknowledges financial support from the Bruce and Astrid McWilliams

Center for Cosmology.

Computations were performed on the GPC supercomputer at the SciNet HPC Con-

sortium.


[Two map panels, Dec versus RA, with temperature color scales in mK: “GBT 15hr field (800.4 MHz, z = 0.775)” and “GBT 15hr field, cleaned, beam convolved (800.4 MHz, z = 0.775)”.]

Figure 6.1: Maps of the GBT 15 hr field at approximately the band-center. The purple circle is the FWHM of the GBT beam, and the color range saturates in some places in each map. Top: The raw map as produced by the map-maker. It is dominated by synchrotron emission from both extragalactic point sources and smoother emission from the galaxy. Bottom: The raw map with 20 foreground modes removed per line of sight relative to 256 spectral bins, as described in Sec. 6.4.2. The map edges have visibly higher noise or missing data due to the sparsity of scanning coverage. The cleaned map is dominated by thermal noise, and we have convolved by GBT’s beam shape to bring out the noise on relevant scales.


[Figure panel: ∆²(k) (mK) versus k (h Mpc−1), showing the 15 hr and 1 hr field cross-powers and the best-fit ΩHI bHI r = 0.43 × 10−3 model.]

Figure 6.2: Cross-power between the 15 hr and 1 hr GBT fields and WiggleZ. Negative points are shown with reversed sign and a thin line. The solid line is the mean of simulations based on the empirical-NL model of Blake et al. [2011] processed by the same pipeline.


Chapter 7

Determination of z ∼ 0.8 neutral

hydrogen fluctuations using the

21 cm intensity mapping

auto-correlation

A version of this chapter was submitted to Monthly Notices of the Royal Astronomical

Society: Letters as “Determination of z ∼ 0.8 neutral hydrogen fluctuations using the

21 cm intensity mapping auto-correlation”, Switzer, E. R., Masui, K. W., Bandura, K.,

Calin, L.-M., Chang, T.-C., Chen, X., Li, Y.-C., Liao, Y.-W., Natarajan, A., Pen, U.-L.,

Peterson, J. B., Shaw, J. R., Voytek, T. C. It is available as a preprint as Switzer et al.

[2013].

7.1 Summary

The large-scale distribution of neutral hydrogen in the universe will be luminous through

its 21 cm emission. Here, for the first time, we use the auto-power spectrum of 21 cm

intensity fluctuations to constrain neutral hydrogen fluctuations at z ∼ 0.8. Our data

were acquired with the Green Bank Telescope and span the redshift range 0.6 < z <

1 over two fields totaling ≈ 41 deg. sq. and 190 hr of radio integration time. The

dominant synchrotron foregrounds exceed the signal by ∼ 10³, but have fewer degrees

of freedom and can be removed efficiently. Even in the presence of residual foregrounds,

the auto-power can still be interpreted as an upper bound on the 21 cm signal. Our

previous measurements of the cross-correlation of 21 cm intensity and the WiggleZ galaxy


Chapter 7. 21 cm auto-correlation 99

survey provide a lower bound. Through a Bayesian treatment of signal and foregrounds,

we can combine both fields in auto- and cross-power into a measurement of ΩHIbHI =

[0.62 (+0.23/−0.15)] × 10−3 at 68% confidence with 9% systematic calibration uncertainty, where

ΩHI is the neutral hydrogen (H i) fraction and bHI is the H i bias parameter. We describe

observational challenges with the present dataset and plans to overcome them.

7.2 Introduction

There is substantial interest in the viability of cosmological structure surveys that map

the intensity of 21 cm emission from neutral hydrogen. Such surveys could be used to

study large-scale structure (LSS) at intermediate redshifts, or to study the epoch of

reionization at high redshift. Surveys of 21 cm intensity have the potential to be very

efficient since the resolution of the instrument can be matched to the large scales of

cosmological interest [Chang et al., 2008, Loeb and Wyithe, 2008, Seo et al., 2010, Ansari

et al., 2012a]. Several experiments, including BAOBAB [Pober et al., 2013b], BAORadio

[Ansari et al., 2012b], BINGO [Battye et al., 2012], CHIME1, and TianLai [Chen, 2012]

propose to conduct redshift surveys from z ∼ 0.5 to z ∼ 2.5 using this method.

The principal challenges for 21 cm experiments are astronomical foregrounds and ter-

restrial radio frequency interference (RFI). Extragalactic sources and the Milky Way

produce synchrotron emission that is three orders of magnitude brighter than the 21 cm

signal. However, the physical process of synchrotron emission is known to produce

spectrally-smooth radiation, occupying few degrees of freedom along each line of sight. In

the absence of instrumental effects, these degrees of freedom are thought to be separable

from the signal [Liu and Tegmark, 2011, 2012, Shaw et al., 2013]. RFI can be minimized

through site location, sidelobe control, and band selection. In the Green Bank Telescope

data analyzed here, RFI is not found to be a significant challenge or limiting factor.

Subtraction of synchrotron emission has proven to be challenging in practice. In-

strumental effects such as passband calibration and polarization leakage couple bright

foregrounds into new degrees of freedom that need to be removed from each line of sight

to reach the level of the 21 cm signal. The spectral functions describing these systemat-

ics can not all be modeled in advance, so we take an empirical approach to foreground

removal by estimating dominant modes from the covariance of the map itself. This

method requires more caution because it also removes cosmological signal, which must

be accounted for.

1http://chime.phas.ubc.ca/


Large-scale neutral hydrogen fluctuations above redshift z = 0.1 have been unam-

biguously detected only in cross-correlation with existing surveys of optically-selected

galaxies [Lah et al., 2009, Chang et al., 2010, Masui et al., 2013]. Here, residual 21 cm

foregrounds boost the errors but do not correlate with the optical galaxies. The density

fluctuations traced by survey galaxies may not correlate perfectly with the emission of

neutral hydrogen, so their cross-correlation can be interpreted as a lower limit on the

fluctuation power of 21 cm emission.

Several efforts have used the 21 cm line to place upper bounds on the reionization

era [Bebbington, 1986, Bowman and Rogers, 2010, Paciga et al., 2013, Pober et al.,

2013a] and z ∼ 3 (see e.g., Wieringa et al. [1992], Subrahmanyan and Anantharamaiah

[1990]) without the need to cross-correlate with an external data set. This is the first

work to describe similar bounds for z ∼ 0.8, using two fields totaling ≈ 41 deg. sq. and

190 hr of radio integration time with the Green Bank Telescope. Unlike the bounds from

reionization, for which there is currently no cross-correlation, we are able to combine the

auto- and cross-powers in a novel way, making a Bayesian inference of the amplitude of

neutral hydrogen fluctuations, parameterized by ΩHIbHI.

Throughout, we use cosmological parameters from Komatsu et al. [2009].

7.3 Observations and Analysis

The analysis here is based on the same observations used for the cross-correlation mea-

surement in Masui et al. [2013]. We flag RFI in the data, calculate 3D intensity map

volumes, clean foreground contamination, and estimate the power spectrum. Here we

will summarize essential aspects of the observations and analysis in Masui et al. [2013],

and describe the auto-power analysis in more detail.

Observations were conducted with the 680 − 920 MHz prime-focus receiver at the

Green Bank Telescope (GBT), sampled from 700 MHz (z = 1) to 900 MHz (z = 0.58) in

256 uniform spectral bins. The analysis here uses a 105 hr integration of a 4.5° × 2.4°

15 hr “deep” field centered on 14h31m28.5s right ascension, 2°0′ declination and an 84 hr

integration on a 7.0° × 4.3° 1 hr “wide” field centered on 0h52m0s right ascension, 2°9′

declination.

The beam FWHM at 700 MHz is 0.314°, and at 900 MHz it is 0.250°. At band-

center, the beam width corresponds to a comoving length of 9.6h−1Mpc. Both fields

have nearly complete angular overlap and good redshift coverage with the WiggleZ Dark

Energy Survey [Drinkwater et al., 2010]. Our absolute calibration is determined from

radio point sources and is accurate to 9% [Masui et al., 2013]. For clarity, this remains


as a separately quoted systematic error throughout, and plotted posterior distributions

are based on statistical errors only.

7.3.1 Foreground Cleaning

In this section, we develop the map cleaning formalism and discuss its connection to

survey strategy. Begin by packing the three-dimensional map into an Nν ×Nθ matrix M

by unwrapping the Nθ RA, Dec pointings. For the moment, ignore thermal noise in the

map. The empirical ν−ν′ covariance of the map is C = MMT/Nθ, and it contains both

foregrounds and 21 cm signal. This can be factored as C = UΛUT , where Λ is diagonal

and sorted in descending value. From each line of sight, we can then subtract a subset of

the modes U that describe the largest components of the variance through the operation

(1−USUT )M, where S is a selection matrix with 1 along the diagonal for modes to be

removed and 0 elsewhere.

In reality, M also contains thermal noise. To minimize its influence on our foreground

mode determination, we find the noise-inverse weighted cross-variance of two submaps

from the full season of observing. Here, CAB = (WA ∘ MA)(WB ∘ MB)T/Nθ, where

A and B denote sub-season maps, WA is the noise inverse-variance weight per pixel of

map A (neglecting correlations), and ∘ is the element-wise matrix product. CAB is no

longer symmetric, and we take its singular value decomposition (SVD) instead, using

the left and right singular vectors to clean maps A and B respectively. The weights

are calculated in the noise model developed in the map-maker, but roughly track the

map’s integration depth and weigh against RFI. The weight is nearly separable into

angle (through integration time) and frequency (through Tsys(ν)), but we average to

make it formally separable, and hence rank-1, so that it does not increase the map rank. The

weighted removal for map A becomes (1/WA) ∘ [(1−UASUTA)(WA ∘ MA)], where 1/WA is

the element-wise reciprocal.
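A compact NumPy sketch of this weighted cross-variance cleaning (uniform weights in the demonstration; the real weights are rank-1 as described above, and the names are ours):

```python
import numpy as np

def cross_clean(map_a, map_b, w_a, w_b, n_modes):
    """Clean two sub-season maps with their weighted cross covariance.

    All arrays are (n_freq, n_pix). The SVD's left singular vectors
    clean map A and the right singular vectors clean map B, so that
    thermal noise (independent between sub-seasons) does not bias
    the mode estimate.
    """
    n_pix = map_a.shape[1]
    c_ab = (w_a * map_a) @ (w_b * map_b).T / n_pix
    u, s, vt = np.linalg.svd(c_ab)
    ua, ub = u[:, :n_modes], vt[:n_modes].T
    eye = np.eye(map_a.shape[0])
    cleaned_a = (1.0 / w_a) * ((eye - ua @ ua.T) @ (w_a * map_a))
    cleaned_b = (1.0 / w_b) * ((eye - ub @ ub.T) @ (w_b * map_b))
    return cleaned_a, cleaned_b
```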

Our empirical approach to foreground removal is limited by the amount of information

in the maps. The fundamental limitation here surprisingly is not from the number of

degrees of freedom along the line of sight, but is instead the number of independent

angular resolution elements in the map [Nityananda, 2010]. To see why this is the case,

notice that in the absence of noise, our cleaning algorithm is equivalent to taking the

SVD of the map directly: M = UΣVT and thus C ∝MMT = UΣ2UT , with the same

set of frequency modes U appearing in both decompositions. The rank of C coincides

with the rank of M and is limited by the number of either angular or frequency degrees

of freedom.


Taking the foreground modes to all have comparable spurious overlap with the signal,

one arrives at a transfer-function rule of thumb T = Psig,out/Psig,in ∼ (1 − Nm/Nν)(1 − Nm/Nres), where Nm is the number of modes removed, Nν = 256 is the number of

frequency channels and Nres is the number of resolution elements (roughly the survey

area divided by the beam solid angle). A limited number of resolution elements can

greatly reduce the efficacy of the foreground cleaning at the expense of signal.
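Plugging in representative numbers gives a feel for the rule of thumb; the function is just the formula above, and the pairing of fields to mode counts follows the numbers quoted in this section:

```python
def transfer_rule_of_thumb(n_modes, n_freq=256, n_res=90):
    """T = P_sig,out / P_sig,in ~ (1 - N_m/N_nu)(1 - N_m/N_res)."""
    return (1.0 - n_modes / n_freq) * (1.0 - n_modes / n_res)

# 1 hr (wide) field: ~90 resolution elements, 30 modes removed.
print(round(transfer_rule_of_thumb(30, n_res=90), 3))
# 15 hr (deep) field: ~30 resolution elements, 10 modes removed.
print(round(transfer_rule_of_thumb(10, n_res=30), 3))
# Roughly 60% of the signal power survives the cleaning in both cases.
```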

The wide and deep fields respectively have central low-noise regions of ∼ 10 deg. sq.

and ∼ 3 deg. sq., giving roughly 90 and 30 independent resolution elements at the largest

beam size. The rank of C is then less than the number of available spectral bins in

both cases. To increase the effective number of angular degrees of freedom that enter the

weighted ν−ν′ covariance, we saturate the angular weight maps at their 50th percentile.

The choice of the number of modes to remove is a trade-off between bias from residual

foregrounds (for too few modes removed), and increasing errors (from signal lost as too

many modes are removed). We find that removing 30 (10) modes for the 1 hr (15 hr) field strikes a good balance between minimal foregrounds and minimal errors.

7.3.2 Instrumental Systematics

The physical mechanism of synchrotron radiation suggests that it is described by a hand-

ful of smooth modes along each line of sight [Liu and Tegmark, 2012]. Instrumental

response to bright foregrounds, however, can convert these into new degrees of freedom.

An imperfect and time-dependent passband calibration will cause intrinsically spectrally

smooth foregrounds to occupy multiple modes in our maps with non-trivial spectral

structure. We control this using a pulsed electronic calibrator, averaged for each scan.

We believe that the most pernicious spectral structure is caused by leakage of po-

larization into intensity. Our observed polarization mixing is ∼ 10% between Stokes

parameters and has both angular and spectral structure. The spectral structure converts

spectrally smooth polarization into new degrees of freedom. Faraday rotation of the

polarization introduces further spectral degrees of freedom. The majority of the polar-

ization leakage is observed to be an odd function on the sky, slightly broader than the

primary intensity response beam. To mitigate this leakage, we convolve the maps to a

common resolution corresponding to 1.4 times the beam size at 700 MHz (the largest

beam). This also treats the leakage of spatial structure into spectral structure from fre-

quency dependence of the beam. Such a convolution is viable because GBT has roughly

twice the resolution needed to map large-scale structure in the linear regime. However,

this convolution reduces the number of independent resolution elements in the map by a


factor of two, increasing the challenges discussed in Sec. 7.3.1.
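The convolution to a common resolution can be sketched as follows; the Gaussian beam model, FFT-based smoothing, and map sizes are illustrative assumptions rather than the actual beam treatment:

```python
import numpy as np

def convolve_to_common_beam(maps, beam_fwhm, target_fwhm, pix_size):
    """Smooth each frequency slice with a Gaussian whose width brings
    the effective beam up to target_fwhm (FFT-based, periodic)."""
    n_freq, ny, nx = maps.shape
    ky = np.fft.fftfreq(ny, d=pix_size)
    kx = np.fft.fftfreq(nx, d=pix_size)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    out = np.empty_like(maps)
    for i in range(n_freq):
        # Kernel width: quadrature difference of target and native
        # beams, with sigma^2 = FWHM^2 / (8 ln 2).
        sig2 = (target_fwhm**2 - beam_fwhm[i]**2) / (8.0 * np.log(2.0))
        transfer = np.exp(-2.0 * np.pi**2 * k2 * sig2)
        out[i] = np.fft.ifft2(np.fft.fft2(maps[i]) * transfer).real
    return out

rng = np.random.default_rng(3)
maps = rng.normal(size=(4, 32, 32))
fwhm = np.linspace(0.25, 0.31, 4)  # beam grows toward low frequency
smoothed = convolve_to_common_beam(maps, fwhm, 1.4 * fwhm.max(), 0.1)
# Smoothing strictly reduces small-scale variance.
print(np.var(smoothed) < np.var(maps))
```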

The present results are limited largely by the area of the regions and our understand-

ing of the instrument. With a factor of roughly ten more area, the resolution could be

degraded at less expense to the signal. This requires significant telescope time because

the area must also be covered to roughly the same depth as our present fields. It would

however provide a significant boost in overall sensitivity for scientific goals such as mea-

surement of the redshift-space distortions. In addition, we are investigating map-making

that would unmix polarization using the Mueller matrix as a function of offset from the

boresight, as determined from source scans.

7.3.3 Power Spectrum Estimation

Our starting point for power spectral estimation is the optimal quadratic estimator de-

scribed in [Liu and Tegmark, 2011]. To avoid the thermal noise bias, we only con-

sider cross-powers between four sub-season maps [Tristram et al., 2005], labeled here

as A,B,C,D. Thermal noise is uncorrelated between these sections, which we have

chosen to have similar integration depth and coverage. The foreground modes are de-

termined separately for each side of the pair using the SVD of Sec. 7.3.1. Up to a

normalization, the resulting estimator for the pair of submaps A and B is P_{A×B}(k_i) ∝ (w_A Π_A m_A)^T Q_i (w_B Π_B m_B). Here, we have unwrapped the map matrix M_A into a one-dimensional map vector m_A and written the foreground-cleaning projection (1/W_A) ◦ (1 − U_A S U_A^T)(W_A ◦ M_A) as Π_A m_A. The weighted mean of each frequency slice of the

map is also subtracted. The map weight w_A is the matrix W_A used in the SVD, unwrapped and placed along the diagonal. Procedurally, the estimator amounts to weighting both foreground-cleaned maps, taking the Fourier transform, and then summing the three-dimensional cross-pairs to find power in annuli in two-dimensional k space, k_i = (k_⊥,i, k_‖,i). The Fourier transform and binning are performed by Q_i here. We

calculate six such crossed pairs from the four-way subseason split of the data, and let the

average over these be the estimated power P(k_i).
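The cross-pair averaging can be sketched in one dimension; the toy maps, noise level, and binning below are illustrative assumptions rather than the survey configuration:

```python
import numpy as np
from itertools import combinations

def cross_power_1d(m1, m2, n_bins):
    """Toy 1-D analogue of the estimator: Fourier transform two maps,
    cross-multiply, and bin the result (the role played by Q_i)."""
    f1, f2 = np.fft.rfft(m1), np.fft.rfft(m2)
    cross = (f1 * np.conj(f2)).real / len(m1)
    bins = np.array_split(cross[1:], n_bins)  # drop the k=0 mode
    return np.array([b.mean() for b in bins])

rng = np.random.default_rng(2)
signal = rng.normal(size=512)
# Four sub-season maps: a common signal plus independent thermal noise.
maps = [signal + 3.0 * rng.normal(size=512) for _ in range(4)]

# Six crossed pairs (A,B), (A,C), ..., (C,D); the noise is uncorrelated
# between them, so their average carries no thermal-noise bias.
pairs = [cross_power_1d(a, b, 8) for a, b in combinations(maps, 2)]
P_cross = np.mean(pairs, axis=0)

# Same-map pairs retain the noise bias, like P_auto in the text.
P_auto = np.mean([cross_power_1d(m, m, 8) for m in maps], axis=0)
print(P_auto.mean() > P_cross.mean())  # bias avoided in the cross
```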

We calculate transfer functions to describe signal lost in the foreground cleaning

and through the finite instrumental resolution. These are functions of k⊥ and k‖. The

beam transfer function is estimated using Gaussian 21 cm signal simulations that have

been convolved by the beam model. The foreground cleaning transfer function can be

efficiently estimated through Monte Carlo simulations as

T(k_i) = ⟨ [w_A Π_{A+s}(m_A + m_s) − w_A Π_A m_A]^T Q_i m_s / [(w_A m_s)^T Q_i m_s] ⟩² , (7.1)


where the A+s subscript denotes that the foreground-cleaning modes have been estimated from a ν−ν′ covariance to which the simulated 21 cm signal, m_s, has been added. The limited number of angular resolution elements (Sec. 7.3.1) results in an anticorrelation of the cleaned foregrounds with the signal itself, represented by the term (w_A Π_{A+s} m_A)^T Q_i m_s.

Note that we subtract w_A Π_A m_A in the numerator to reduce the noise of the simulation cross-power. Finally, we find the weighted average of these across the four-way split of

maps. We find that 300 signal simulations are sufficient to estimate the transfer function.

After compensating for lost signal using transfer functions for the beam and fore-

ground cleaning, we bin the two-dimensional powers onto one-dimensional band-powers.

We weight bins by their two-dimensional Gaussian inverse noise variance, ∝ N(k_i) T(k_i)² / P_auto(k_i)², where P_auto(k_i) is the average of P_{A×A}, P_{B×B}, P_{C×C}, P_{D×D} (pairs which contain the

thermal noise bias), and N(k_i) is the number of three-dimensional k cells that enter a two-dimensional bin k_i. In addition to the Gaussian noise weights, we impose two additional cuts in the two-dimensional k power. For k_‖ < 0.035 h/Mpc, k_⊥ < 0.08 h/Mpc for the 15 hr field, and k_⊥ < 0.04 h/Mpc for the 1 hr field, there are few harmonics in the

volume, resulting in “straggler” strips in the two-dimensional power spectrum where the

errors are poorly estimated. For k_⊥ > 0.3 h/Mpc, the instrumental resolution produces significant signal loss, so this region is truncated as well.
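This weighted two-dimensional-to-one-dimensional binning, with the cuts above, can be sketched as follows; the k-grids and flat toy spectra are assumed for illustration:

```python
import numpy as np

def bin_2d_to_1d(P2d, k_perp, k_par, N, T, P_auto, k_edges,
                 kpar_min=0.035, kperp_max=0.3):
    """Bin a 2-D power spectrum into 1-D band-powers using Gaussian
    inverse-noise-variance weights ~ N * T**2 / P_auto**2, with the
    low-k_par and high-k_perp cuts applied by zeroing weights."""
    kpar_g, kperp_g = np.meshgrid(k_par, k_perp, indexing="ij")
    k = np.hypot(kpar_g, kperp_g)
    w = N * T**2 / P_auto**2
    w[kpar_g < kpar_min] = 0.0    # few-harmonic "straggler" strips
    w[kperp_g > kperp_max] = 0.0  # resolution-dominated scales
    P1d = np.empty(len(k_edges) - 1)
    for i in range(len(k_edges) - 1):
        sel = (k >= k_edges[i]) & (k < k_edges[i + 1]) & (w > 0)
        P1d[i] = np.average(P2d[sel], weights=w[sel]) if sel.any() else np.nan
    return P1d

k_par = np.linspace(0.01, 0.6, 40)
k_perp = np.linspace(0.01, 0.4, 30)
shape = (len(k_par), len(k_perp))
P2d = np.full(shape, 2.0)     # flat toy 2-D power
N = np.full(shape, 100.0)     # 3-D cells per 2-D bin
T = np.full(shape, 0.8)       # toy transfer function
P_auto = np.full(shape, 5.0)  # noise-bearing auto-power
P1d = bin_2d_to_1d(P2d, k_perp, k_par, N, T, P_auto,
                   k_edges=np.array([0.1, 0.2, 0.4, 0.7]))
print(P1d)  # a flat input spectrum stays flat: [2. 2. 2.]
```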

Foregrounds in the input maps and the 21 cm signal itself are non-Gaussian, but

after cleaning, the thermal noise dominates both contributions in an individual map, and

Gaussian errors (see, e.g. Das et al. [2011]) provide a reasonable approximation. These

take as input the auto-power measurement itself (for sample variance) and the P_{A×A}-type terms

that represent the thermal noise. Sample variance is significant only in the 15 hr field

in the lower 1/3 of the reported wavenumbers. Gaussian errors agree with the standard

deviation of the six crossed-pairs that enter the spectral estimation in the regime where

sample variance is negligible.

The finite survey size and weights result in correlations between adjacent k bins. We

apodize in the frequency direction using a Blackman window, and in the angular direction

using the map weight itself (which falls off at the edges due to scan coverage). The bin-

bin correlations are estimated using 3000 signal plus thermal noise simulations assuming

Tsys = 25 K. To construct a full covariance model, these are then re-calibrated by the

outer product of the Gaussian error amplitudes for the data relative to the thermal noise

simulation errors.

The Bayesian method developed in the next section assumes that adjacent bins are

uncorrelated. To achieve this, we take the matrix square-root of the inverse of our

covariance model matrix and normalize its rows to sum to one. This provides a set


of functions which decorrelates [Hamilton and Tegmark, 2000] the pre-whitened power

spectrum and boosts the errors. At large scales (k ≈ 0.1 h/Mpc), where these effects are

relevant, decorrelation and sample variance increase the errors by a factor of 1.5 in the

1 hr field and 4 in the 15 hr field.
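A minimal sketch of this decorrelation construction, assuming a toy band-power covariance with neighbouring-bin correlations:

```python
import numpy as np

def decorrelation_matrix(cov):
    """Matrix square root of the inverse covariance, with rows
    normalized to sum to one (in the style of Hamilton & Tegmark)."""
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T  # C^(-1/2)
    return inv_sqrt / inv_sqrt.sum(axis=1, keepdims=True)

# A toy band-power covariance with neighbouring-bin correlations.
n = 6
cov = np.eye(n) + 0.4 * np.eye(n, k=1) + 0.4 * np.eye(n, k=-1)
D = decorrelation_matrix(cov)

# Decorrelated band-powers p' = D p have a diagonal covariance
# D C D^T, since C^(-1/2) C C^(-1/2) = 1 up to the row scaling.
new_cov = D @ cov @ D.T
off_diag = new_cov - np.diag(np.diag(new_cov))
print(np.abs(off_diag).max() < 1e-10)  # True: bins are uncorrelated
```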

The methods that we have developed for calculating maps from data and power

spectra from maps will be described and more fully motivated in a future paper.

7.4 Results

The auto-power spectra presented in Figure 7.1 will be biased by an unknown positive

amplitude from residual foreground contamination. These data can then be interpreted

as an upper bound on the neutral hydrogen fluctuation amplitude, ΩHIbHI. In addition,

we have also measured the cross-correlation with the WiggleZ Galaxy Survey [Masui

et al., 2013]. This finds ΩHIbHIr = [0.43 ± 0.07(stat.) ± 0.04(sys.)] × 10−3, where r

is the WiggleZ galaxy-neutral hydrogen cross-correlation coefficient (taken here to be

independent of scale). Since |r| < 1 by definition and is measured to be positive, the

cross-correlation can be interpreted as a lower bound on ΩHIbHI. In this section, we

will develop a posterior distribution for the 21 cm signal auto-power between these two

bounds, as a function of k. We will then combine these into a posterior distribution on

ΩHIbHI.

The probability of our measurements given the 21 cm signal auto-power and fore-

ground model parameters is

p(d_k|θ_k) = p(d_c|s_k, r) p(d_k^{15hr}|s_k, f_k^{15hr}) p(d_k^{1hr}|s_k, f_k^{1hr}). (7.2)

Here, d_k = {d_c, d_k^{15hr}, d_k^{1hr}} contains our cross-power and 15 hr and 1 hr field auto-power measurements, while θ_k = {s_k, r, f_k^{15hr}, f_k^{1hr}} contains the 21 cm signal auto-power, cross-correlation coefficient, and 15 hr and 1 hr field foreground contamination powers, respectively. The cross-power variable d_c represents the constraint on ΩHIbHIr from both fields and the range of wavenumbers used in Masui et al. [2013]. The band-powers d_k^{15hr} and d_k^{1hr} are independently distributed following decorrelation of finite-survey effects. We

assume that the foregrounds are also uncorrelated between k bins and between fields. This is

conservative because knowledge of foreground correlations would yield a tighter con-

straint. We take p(d_c|s_k, r) to be normally distributed with mean proportional to r√s_k, and p(d_k^{15hr}|s_k, f_k^{15hr}) to be normally distributed with mean s_k + f_k^{15hr} and errors determined in Sec. 7.3.3 (and analogously for the 1 hr field). Only the statistical uncertainty


[Figure: Δ² (mK²) vs. k (h/Mpc) on logarithmic axes, showing the deep and wide auto-powers, their noise levels, the deep-field foreground power, and the ΩHIbHI = 0.43 × 10⁻³ signal curve.]

Figure 7.1: Temperature scales in our 21 cm intensity mapping survey. The top curve is the power spectrum of the input 15 hr field with no cleaning applied (the 1 hr field is similar). Throughout, the 15 hr field results are green and the 1 hr field results are blue. The dotted and dash-dotted lines show thermal noise in the maps. The power spectra avoid noise bias by crossing two maps made with separate datasets. Nevertheless, thermal noise limits the fidelity with which the foreground modes can be estimated and removed. The points below show the power spectrum of the 15 hr and 1 hr fields after the foreground cleaning described in Sec. 7.3.1. Negative values are shown with thin lines and hollow markers. Any residual foregrounds will additively bias the auto-power. The red dashed line shows the 21 cm signal expected from the amplitude of the cross-power with the WiggleZ survey (for r = 1) and based on simulations processed by the same pipeline.


is included in the width of the distributions, as the systematic calibration uncertainty is

perfectly correlated between cross- and auto-power measurements and can be applied at

the end of the analysis.

We apply Bayes’ Theorem to obtain the posterior distribution for the parameters,

p(θ_k|d_k) ∝ p(d_k|θ_k) p(s_k) p(r) p(f_k^{15hr}) p(f_k^{1hr}). For the nuisance parameters, we adopt conservative priors: p(f_k^{15hr}) and p(f_k^{1hr}) are taken to be flat over the range 0 < f_k < ∞.

Likewise, we take p(r) to be constant over the range 0 < r < 1, which is conservative

given the theoretical bias toward r ≈ 1. Our goal is to marginalize over these nuisance

parameters to determine sk. We choose the prior on sk, p(sk), to be flat, which translates

into a prior p(ΩHIbHI) ∝ ΩHIbHI. The data likelihood adds significant information, so the

outcome is robust to choices for the signal prior. The signal posterior is

p(s_k|d_k) = ∫ p(s_k, r, f_k^{15hr}, f_k^{1hr}|d_k) dr df_k^{15hr} df_k^{1hr}. (7.3)

This involves integrals of the form ∫₀¹ p(d_c|s, r) p(r) dr which, given the flat priors that we have adopted, can generally be written in terms of the cumulative distribution function of p(d_c|s, r). Figure 7.2 shows the allowed signal in each spectral k-bin.
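A toy, single-k-bin numerical version of this marginalization (all data values and grids here are illustrative assumptions, not the measurements):

```python
import numpy as np

# Integrate the nuisance parameters r and f out on a grid under
# flat priors, as in Eq. (7.3), for one k bin.
d_cross, sig_cross = 0.4, 0.1   # cross-power datum, mean ~ r*sqrt(s)
d_auto, sig_auto = 0.5, 0.1     # auto-power datum, mean ~ s + f

s_grid = np.linspace(1e-4, 2.0, 400)
r_grid = np.linspace(1e-4, 1.0, 200)  # flat prior on 0 < r < 1
f_grid = np.linspace(0.0, 2.0, 200)   # flat prior on f > 0

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Likelihoods with each nuisance parameter summed (marginalized) out.
like_cross = gauss(d_cross, r_grid[None, :] * np.sqrt(s_grid[:, None]),
                   sig_cross).sum(axis=1)
like_auto = gauss(d_auto, s_grid[:, None] + f_grid[None, :],
                  sig_auto).sum(axis=1)

post = like_cross * like_auto              # flat prior on s
post /= post.sum() * (s_grid[1] - s_grid[0])

# The cross-power bounds s from below; the auto-power from above.
s_peak = s_grid[np.argmax(post)]
print(0.1 < s_peak < 0.6)
```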

Taking the analysis further, we combine band-powers into a single constraint on

ΩHIbHI. Following Masui et al. [2013], we consider a conservative k range where er-

rors are better estimated (k > 0.12 h/Mpc, to avoid edge effects in the decorrelation

operation) and before uncertainties in nonlinear structure formation become significant

(k < 0.3 h/Mpc). Figure 7.3 shows the resulting posterior distribution.

Our analysis yields ΩHIbHI = [0.62 (+0.23/−0.15)] × 10⁻³ at 68% confidence, with 9% systematic

calibration uncertainty. Note that we are unable to calculate a goodness-of-fit to our

model because each measurement is associated with a free foreground parameter which

can absorb any anomalies.

7.5 Discussion and Conclusions

Through the measurement of the auto-power, we extend our previous cross-power mea-

surement of ΩHIbHIr [Masui et al., 2013] to a determination of ΩHIbHI. This is the first

constraint on the amplitude of 21 cm fluctuations at z ∼ 0.8, and circumvents the de-

generacy with the cross-correlation r. The 21 cm auto-power yields a true upper bound

because it derives from the integral of the mass function. In the future, redshift distor-

tions [Wyithe, 2008, Masui et al., 2010a] can be used to further break the degeneracy

between bHI and ΩHI, and complement challenging HST measurements of ΩHI [Rao et al.,


[Figure: Δ² (mK²) vs. k (h/Mpc), showing the measured confidence intervals, the statistical limit, and the ΩHIbHI = 0.43 × 10⁻³ signal curve.]

Figure 7.2: Comparison with the thermal noise limit. The dark and light shaded regions are the 68% and 95% confidence intervals of the measured 21 cm fluctuation power. The dashed line shows the expected 21 cm signal implied by the WiggleZ cross-correlation if r = 1. The solid line represents the best upper 95% confidence level we could achieve given our error bars, in the absence of foreground contamination. Note that the auto-correlation measurements, which constrain the signal from above, are uncorrelated between k bins, while a single global fit to the cross-power (in Masui et al. [2013]) is used to constrain the signal from below. Confidence intervals do not include the systematic calibration uncertainty, which is 18% in this space.


[Figure: p[ln(ΩHIbHI)] vs. ΩHIbHI, showing the cross-power, deep auto-power, wide auto-power, and combined posteriors.]

Figure 7.3: The posterior distribution for the parameter ΩHIbHI coming from the WiggleZ cross-power spectrum, 15 hr field and 1 hr field auto-powers, as well as the joint likelihood from all three datasets. The individual distributions from the cross-power and auto-powers are dependent on the prior on ΩHIbHI while the combined distribution is essentially insensitive. The distributions do not include the systematic calibration uncertainty of 9%.


2006]. Our present survey is limited by area and sensitivity, but we have shown that

foregrounds can be suppressed sufficiently, to nearly the level of the 21 cm signal, us-

ing an empirical mode subtraction method. Future surveys exploiting the auto-power of

21 cm fluctuations must develop statistics that are robust to the additive bias of residual

foregrounds, and control instrumental systematics such as polarized beam response and

passband stability.

Acknowledgements

We would like to thank John Ford, Anish Roshi and the rest of the GBT staff for their

support; and Paul Demorest and Willem van-Straten for help with pulsar instruments

and calibration.

The National Radio Astronomy Observatory is a facility of the National Science

Foundation operated under cooperative agreement by Associated Universities, Inc. This

research is supported by NSERC Canada and CIFAR. JBP and TCV acknowledge NSF

grant AST-1009615. XLC acknowledges the Ministry of Science and Technology Project

863 (2012AA121701); The John Templeton Foundation and NAOC Beyond the Hori-

zon program; The NSFC grant 11073024. AN acknowledges financial support from the

McWilliams Center for Cosmology. Computations were performed on the GPC super-

computer at the SciNet HPC Consortium. SciNet is funded by the Canada Foundation

for Innovation.


Chapter 8

Conclusions and outlook

In this thesis we have made significant progress toward establishing the method of 21 cm

intensity mapping as a powerful probe of the Universe. In Chapters 2 and 3 we gave two

quantitative examples of how observations of large-scale structure using the 21 cm line

could be used to study the late-time accelerated expansion and initial conditions of the

Universe, respectively. The central result of Chapter 4 is that cosmologically interesting

measurements are possible using 21 cm intensity mapping with existing instruments. As

such, surveys using, for example, the Green Bank Telescope would not only act as pi-

lots for dedicated experiments but could also perform measurements interesting in their

own right. The calculations in that chapter were invaluable in the survey planning and

sensitivity forecasts for the observations performed in Part II.

Part II of this thesis presents the analysis and results from the ongoing 21 cm intensity

mapping survey at the Green Bank Telescope. After describing some details of the

analysis in Chapter 5, we presented the results from the survey in Chapters 6 and 7. In

Chapter 6 we put a lower limit on the amplitude of the 21 cm signal power by correlating

our noisy survey maps with a traditional galaxy survey. In Chapter 7 we considered the

auto-correlation of our survey maps. This auto-correlation contains both 21 cm signal

and residual foregrounds; however, since the residual foregrounds must contribute positive

power to the maps, the auto-correlation allows the signal power to be constrained from

above. Combining the lower limit from the cross-correlation and upper limit from the

auto-correlation, we were able to constrain the amplitude of the 21 cm signal to within a

factor of two.


Chapter 8. Conclusions and outlook 112

8.1 Conclusions

In Part I we showed that due to the high efficiency with which intensity mapping experi-

ments can perform large surveys of large-scale structure (thus enabling many interesting

measurements), these experiments are perhaps the most promising future probes of the

Universe. In particular, in Chapter 3 we discovered a new effect, through which 21 cm

intensity mapping may eventually make the most precise measurements of primordial

gravity waves and as such provide firm evidence that inflation is the correct theory for

the early Universe.

Our observational program at the Green Bank Telescope (Part II) is currently the

leading experiment in the use of the 21 cm line to map large-scale structure at intermediate redshift (0.5 ≲ z ≲ 3). While we have yet to make a detection of the signal,

in Chapter 7 we were able to constrain the amplitude of the signal to within a factor

of two. Our measurement directly constrains the product of two parameters: the total

neutral hydrogen abundance and the clustering of neutral hydrogen. Individually these

parameters are important for galaxy formation studies; however, more importantly for

intensity mapping, this amplitude gives the total amount of 21 cm signal available for

future measurements. In particular several studies have simulated the H i signal for the

planned Square Kilometre Array (SKA)¹ using a signal amplitude that is higher than is consistent with our measurement [Abdalla and Rawlings, 2005, Abdalla et al., 2010]. As such

their forecasts are overly optimistic, affecting either the sensitivity or cost of the exper-

iment. SKA is a large scale radio telescope whose projected cost is well over a billion

US dollars. Performing a large-scale structure survey using the 21 cm line is one of its

primary science goals.

8.2 Future work

The potential of idealized intensity mapping experiments at intermediate redshift is now

well established, decreasing the interest in studies such as those presented in Chapters 2

and 4. Instead the focus is now shifting from generic idealized experiments to studying

specific experiments and including effects such as foregrounds, coupled with systematic

beam non-ideality and uncertainty. An example of such a study is Shaw et al. [2013],

which will be invaluable to the planning, design, and analysis of the Canadian Hydrogen

Intensity Mapping Experiment2 (CHIME). In the future, such studies must be extended

¹http://www.skatelescope.org
²http://chime.phas.ubc.ca/


by adding additional effects and uncertainties.

On the other hand, the high-redshift effects that could be studied using 21 cm intensity mapping, such as the effect presented in Chapter 3, are now an area of active research.

There are now several studies that address similar effects [Giddings and Sloth, 2011b,

Lewis, 2011, Book et al., 2012, Pen et al., 2012, Jeong and Kamionkowski, 2012, Schmidt

and Jeong, 2012b, Jeong and Schmidt, 2012, Schmidt and Jeong, 2012a, Pajer et al.,

2013]. Indeed the literature is not yet resolved as to whether ‘fossil’ effects are even

observable (see Pajer et al. [2013] and Section 3.7). It is now a priority to resolve this

controversy, as well as consider additional secondary effects that could be used to study

the early Universe.

We intend to extend the observational work, presented in Part II, on many fronts.

Our primary goal moving forward will be to make a convincing detection of the auto-

correlation in our survey maps in which the cosmological signal dominates over the resid-

ual foregrounds. There are several potential improvements to the data analysis pipeline,

described in Chapter 5, that may enable this. In particular, improving our time-dependent spectral calibration, as well as treating the polarization leakage within the GBT primary beam (discussed in Chapter 7), is expected to yield a significant improvement

in foreground subtraction. In addition, we hope to extend our survey to a factor of

∼ 10 more area than our 15 hr deep field but at the same integration depth. This will

greatly improve the sensitivity of our survey and may additionally improve foreground

subtraction, as discussed in Chapter 7.

Beyond the simple detection of the auto-correlation signal, we also intend to break the

degeneracy between the total neutral hydrogen abundance parameter and the clustering

parameter through a detection of the Kaiser redshift space distortions, as described in

Chapter 4. This will require additional telescope time to achieve the desired sensitivity.

If we obtain a sufficient improvement in foreground subtraction, this could be performed

with the 21 cm auto-correlation, however even if this is not achieved, the measurement

could be performed by cross-correlating with an existing galaxy survey. The VIMOS

Public Extragalactic Redshift Survey (VIPERS), whose results were recently released in

de la Torre et al. [2013], Guzzo et al. [2013], has a much higher galaxy density than

WiggleZ and a redshift range well matched to our survey. As such, observations of the

VIPERS fields would accumulate sensitivity to the cross-correlation at a much faster

rate than the WiggleZ fields, greatly reducing the observing time required to detect the

redshift space distortions.

In addition to the extension of our survey there are several upcoming surveys using

new instruments with which I plan to be involved. The first is the GBT multi-beam


project which intends to improve the mapping speed of the GBT by building a new

800 MHz receiver with multiple antennas. It is expected that the improved sensitivity of

this receiver will allow for the detection of the BAO at the GBT. At the time of writing, a

single pixel prototype for this receiver has been completed and testing for this prototype

on the telescope has been scheduled for summer 2013.

Finally, CHIME is a dedicated intensity mapping experiment, currently under development,

whose primary science goal will be a precise measurement of the BAO. Currently a

CHIME pathfinder is under construction at the Dominion Radio Astronomical Observa-

tory in British Columbia. This pathfinder will have many times the sensitivity of even

the GBT multi-beam (detailed forecasts in Chapter 4). The full sized CHIME will have a

sensitivity to the BAO comparable to the next generation of LSS experiments, but should

be completed on a shorter time scale and at a fraction of the cost of its competitors.


Bibliography

F. B. Abdalla and S. Rawlings. Probing dark energy with baryonic oscillations and

future radio surveys of neutral hydrogen. MNRAS, 360:27–40, June 2005. doi: 10.

1111/j.1365-2966.2005.08650.x.

F. B. Abdalla, C. Blake, and S. Rawlings. Forecasts for dark energy measurements with

future HI surveys. MNRAS, 401:743–758, January 2010. doi: 10.1111/j.1365-2966.

2009.15704.x.

M.C.B. Abdalla, Shin’ichi Nojiri, and Sergei D. Odintsov. Consistent modified gravity:

Dark energy, acceleration and the absence of cosmic doomsday. Class.Quant.Grav.,

22:L35, 2005. doi: 10.1088/0264-9381/22/5/L01.

A. Abramovici, W. E. Althouse, R. W. P. Drever, Y. Gursel, S. Kawamura, F. J. Raab,

D. Shoemaker, L. Sievers, R. E. Spero, and K. S. Thorne. LIGO - The Laser In-

terferometer Gravitational-Wave Observatory. Science, 256:325–333, April 1992. doi:

10.1126/science.256.5055.325.

Andreas Albrecht, Gary Bernstein, Robert Cahn, Wendy L. Freedman, Jacqueline He-

witt, et al. Report of the Dark Energy Task Force. 2006.

R. Ansari, J. E. Campagne, P. Colom, J. M. Le Goff, C. Magneville, J. M. Martin,

M. Moniez, J. Rich, and C. Yeche. 21 cm observation of large-scale structures at z ∼ 1.

Instrument sensitivity and foreground subtraction. A&A, 540:A129, April 2012a. doi:

10.1051/0004-6361/201117837.

R. Ansari, J.-E. Campagne, P. Colom, C. Magneville, J.-M. Martin, M. Moniez, J. Rich,

and C. Yeche. BAORadio: A digital pipeline for radio interferometry and 21 cm

mapping of large scale structures. Comptes Rendus Physique, 13:46–53, January 2012b.

doi: 10.1016/j.crhy.2011.11.003.

Stephen A. Appleby and Richard A. Battye. Do consistent F (R) models mimic General

Relativity plus Λ? Phys.Lett., B654:7–12, 2007. doi: 10.1016/j.physletb.2007.08.037.


D. G. Barnes, L. Staveley-Smith, W. J. G. de Blok, T. Oosterloo, I. M. Stewart, A. E.

Wright, G. D. Banks, R. Bhathal, P. J. Boyce, M. R. Calabretta, M. J. Disney, M. J.

Drinkwater, R. D. Ekers, K. C. Freeman, B. K. Gibson, A. J. Green, R. F. Haynes, P. te

Lintel Hekkert, P. A. Henning, H. Jerjen, S. Juraszek, M. J. Kesteven, V. A. Kilborn,

P. M. Knezek, B. Koribalski, R. C. Kraan-Korteweg, D. F. Malin, M. Marquarding,

R. F. Minchin, J. R. Mould, R. M. Price, M. E. Putman, S. D. Ryder, E. M. Sadler,

A. Schroder, F. Stootman, R. L. Webster, W. E. Wilson, and T. Ye. The Hi Parkes

All Sky Survey: southern observations, calibration and robust imaging. MNRAS, 322:

486–498, April 2001. doi: 10.1046/j.1365-8711.2001.04102.x.

R. A. Battye, M. L. Brown, I. W. A. Browne, R. J. Davis, P. Dewdney, C. Dickinson,

G. Heron, B. Maffei, A. Pourtsidou, and P. N. Wilkinson. BINGO: a single dish

approach to 21cm intensity mapping. ArXiv e-prints, September 2012.

D. H. O. Bebbington. A radio search for primordial pancakes. MNRAS, 218:577–585,

February 1986.

C. L. Bennett, A. J. Banday, K. M. Gorski, G. Hinshaw, P. Jackson, P. Keegstra,

A. Kogut, G. F. Smoot, D. T. Wilkinson, and E. L. Wright. Four-Year COBE DMR

Cosmic Microwave Background Observations: Maps and Basic Results. ApJ, 464:L1,

June 1996. doi: 10.1086/310075.

C. L. Bennett, D. Larson, J. L. Weiland, N. Jarosik, G. Hinshaw, N. Odegard, K. M.

Smith, R. S. Hill, B. Gold, M. Halpern, E. Komatsu, M. R. Nolta, L. Page, D. N.

Spergel, E. Wollack, J. Dunkley, A. Kogut, M. Limon, S. S. Meyer, G. S. Tucker, and

E. L. Wright. Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observa-

tions: Final Maps and Results. ArXiv e-prints, December 2012.

Somnath Bharadwaj and Tapomoy Guha Sarkar. Gravitational Wave Detection Us-

ing Redshifted 21-cm Observations. Phys.Rev., D79:124003, 2009. doi: 10.1103/

PhysRevD.79.124003.

C. Blake, S. Brough, M. Colless, W. Couch, S. Croom, T. Davis, M. J. Drinkwater,

K. Forster, K. Glazebrook, B. Jelliffe, R. J. Jurek, I.-H. Li, B. Madore, C. Martin,

K. Pimbblet, G. B. Poole, M. Pracy, R. Sharp, E. Wisnioski, D. Woods, and T. Wyder.

The WiggleZ Dark Energy Survey: the selection function and z = 0.6 galaxy power

spectrum. MNRAS, 406:803–821, August 2010. doi: 10.1111/j.1365-2966.2010.16747.x.

C. Blake, S. Brough, M. Colless, C. Contreras, W. Couch, S. Croom, T. Davis, M. J.

Drinkwater, K. Forster, D. Gilbank, M. Gladders, K. Glazebrook, B. Jelliffe, R. J.


Jurek, I.-H. Li, B. Madore, D. C. Martin, K. Pimbblet, G. B. Poole, M. Pracy, R. Sharp,

E. Wisnioski, D. Woods, T. K. Wyder, and H. K. C. Yee. The WiggleZ Dark Energy

Survey: the growth rate of cosmic structure since redshift z=0.9. MNRAS, 415:2876–

2891, August 2011. doi: 10.1111/j.1365-2966.2011.18903.x.

Chris Blake, Russell Jurek, Sarah Brough, Matthew Colless, Warrick Couch, et al. The

WiggleZ Dark Energy Survey: small-scale clustering of Lyman Break Galaxies at

z < 1. 2009.

L. Book, M. Kamionkowski, and F. Schmidt. Lensing of 21-cm Fluctuations by Primordial

Gravitational Waves. Physical Review Letters, 108(21):211301, May 2012. doi: 10.

1103/PhysRevLett.108.211301.

J. D. Bowman and A. E. E. Rogers. A lower limit of ∆z > 0.06 for the duration of the

reionization epoch. Nature, 468:796–798, December 2010. doi: 10.1038/nature09601.

C. Burigana, C. Destri, H.J. de Vega, A. Gruppuso, N. Mandolesi, et al. Forecast for

the Planck precision on the tensor to scalar ratio and other cosmological parameters.

Astrophys.J., 724:588–607, 2010. doi: 10.1088/0004-637X/724/1/588.

Salvatore Capozziello. Curvature quintessence. Int.J.Mod.Phys., D11:483–492, 2002. doi:

10.1142/S0218271802002025.

S. M. Carroll, V. Duvvuri, M. Trodden, and M. S. Turner. Is cosmic speed-up due to

new gravitational physics? Phys. Rev. D, 70(4):043528, August 2004. doi: 10.1103/

PhysRevD.70.043528.

T.-C. Chang, U.-L. Pen, J. B. Peterson, and P. McDonald. Baryon Acoustic Oscillation

Intensity Mapping of Dark Energy. Physical Review Letters, 100(9):091303, March

2008. doi: 10.1103/PhysRevLett.100.091303.

T.-C. Chang, U.-L. Pen, K. Bandura, and J. B. Peterson. An intensity map of hydrogen

21-cm emission at redshift z ∼ 0.8. Nature, 466:463–465, July 2010. doi: 10.1038/

nature09187.

X. Chen. The Tianlai Project: a 21CM Cosmology Experiment. International Journal

of Modern Physics Conference Series, 12:256, 2012. doi: 10.1142/S2010194512006459.

T. Chiba. 1/R gravity and scalar-tensor gravity. Physics Letters B, 575:1–3, November

2003. doi: 10.1016/j.physletb.2003.09.033.


A. L. Coil, M. Davis, D. S. Madgwick, J. A. Newman, C. J. Conselice, M. Cooper, R. S.

Ellis, S. M. Faber, D. P. Finkbeiner, P. Guhathakurta, N. Kaiser, D. C. Koo, A. C.

Phillips, C. C. Steidel, B. J. Weiner, C. N. A. Willmer, and R. Yan. The DEEP2

Galaxy Redshift Survey: Clustering of Galaxies in Early Data. ApJ, 609:525–538,

July 2004. doi: 10.1086/421337.

M. Crocce and R. Scoccimarro. Renormalized cosmological perturbation theory.

Phys. Rev. D, 73(6):063519, March 2006a. doi: 10.1103/PhysRevD.73.063519.

M. Crocce and R. Scoccimarro. Memory of initial conditions in gravitational clustering.

Phys. Rev. D, 73(6):063520, March 2006b. doi: 10.1103/PhysRevD.73.063520.

M. Crocce and R. Scoccimarro. Nonlinear evolution of baryon acoustic oscillations.

Phys. Rev. D, 77(2):023533, January 2008. doi: 10.1103/PhysRevD.77.023533.

S. Das, T. A. Marriage, P. A. R. Ade, P. Aguirre, M. Amiri, J. W. Appel, L. F. Bar-

rientos, E. S. Battistelli, J. R. Bond, B. Brown, B. Burger, J. Chervenak, M. J. De-

vlin, S. R. Dicker, W. Bertrand Doriese, J. Dunkley, R. Dunner, T. Essinger-Hileman,

R. P. Fisher, J. W. Fowler, A. Hajian, M. Halpern, M. Hasselfield, C. Hernandez-

Monteagudo, G. C. Hilton, M. Hilton, A. D. Hincks, R. Hlozek, K. M. Huffenberger,

D. H. Hughes, J. P. Hughes, L. Infante, K. D. Irwin, J. Baptiste Juin, M. Kaul,

J. Klein, A. Kosowsky, J. M. Lau, M. Limon, Y.-T. Lin, R. H. Lupton, D. Marsden,

K. Martocci, P. Mauskopf, F. Menanteau, K. Moodley, H. Moseley, C. B. Netterfield,

M. D. Niemack, M. R. Nolta, L. A. Page, L. Parker, B. Partridge, B. Reid, N. Sehgal,

B. D. Sherwin, J. Sievers, D. N. Spergel, S. T. Staggs, D. S. Swetz, E. R. Switzer,

R. Thornton, H. Trac, C. Tucker, R. Warne, E. Wollack, and Y. Zhao. The Atacama

Cosmology Telescope: A Measurement of the Cosmic Microwave Background Power

Spectrum at 148 and 218 GHz from the 2008 Southern Survey. ApJ, 729:62, March

2011. doi: 10.1088/0004-637X/729/1/62.

Ger de Bruyn. private communication, 2010.

S. de la Torre, L. Guzzo, J. A. Peacock, E. Branchini, A. Iovino, B. R. Granett, U. Ab-

bas, C. Adami, S. Arnouts, J. Bel, M. Bolzonella, D. Bottini, A. Cappi, J. Coupon,

O. Cucciati, I. Davidzon, G. De Lucia, A. Fritz, P. Franzetti, M. Fumana, B. Garilli,

O. Ilbert, J. Krywult, V. Le Brun, O. Le Fèvre, D. Maccagni, K. Małek, F. Marulli,

H. J. McCracken, L. Moscardini, L. Paioro, W. J. Percival, M. Polletta, A. Pollo,

H. Schlagenhaufer, M. Scodeggio, L. A. M. Tasca, R. Tojeiro, D. Vergani, A. Zanichelli,

A. Burden, C. Di Porto, A. Marchetti, C. Marinoni, Y. Mellier, P. Monaco, R. C.


Nichol, S. Phleps, M. Wolk, and G. Zamorani. The VIMOS Public Extragalactic Red-

shift Survey (VIPERS). Galaxy clustering and redshift-space distortions at z=0.8 in

the first data release. ArXiv e-prints, March 2013.

Cedric Deffayet. Cosmology on a brane in Minkowski bulk. Phys.Lett., B502:199–208,

2001. doi: 10.1016/S0370-2693(01)00160-5.

Cedric Deffayet, G.R. Dvali, and Gregory Gabadadze. Accelerated universe from gravity

leaking to extra dimensions. Phys.Rev., D65:044023, 2002. doi: 10.1103/PhysRevD.

65.044023.

P. E. Dewdney, P. J. Hall, R. T. Schilizzi, and T. J. L. W. Lazio. The Square Kilometre

Array. IEEE Proceedings, 97:1482–1496, August 2009. doi: 10.1109/JPROC.2009.

2021005.

S. Dodelson. Modern Cosmology. Academic Press, 2003.

Scott Dodelson, Eduardo Rozo, and Albert Stebbins. Primordial gravity waves and weak

lensing. Phys.Rev.Lett., 91:021301, 2003. doi: 10.1103/PhysRevLett.91.021301.

Olivier Doré, Tingting Lu, and Ue-Li Pen. The Non-Linear Fisher Information content

of cosmic shear surveys. 2009.

M. J. Drinkwater, R. J. Jurek, C. Blake, D. Woods, K. A. Pimbblet, K. Glazebrook,

R. Sharp, M. B. Pracy, S. Brough, M. Colless, W. J. Couch, S. M. Croom, T. M.

Davis, D. Forbes, K. Forster, D. G. Gilbank, M. Gladders, B. Jelliffe, N. Jones, I.-H.

Li, B. Madore, D. C. Martin, G. B. Poole, T. Small, E. Wisnioski, T. Wyder, and

H. K. C. Yee. The WiggleZ Dark Energy Survey: survey design and first data release.

MNRAS, 401:1429–1452, January 2010. doi: 10.1111/j.1365-2966.2009.15754.x.


A. R. Duffy, S. T. Kay, R. A. Battye, C. M. Booth, C. Dalla Vecchia, and J. Schaye.

Modelling neutral hydrogen in galaxies using cosmological hydrodynamical simulations.

MNRAS, 420:2799–2818, March 2012. doi: 10.1111/j.1365-2966.2011.19894.x.

R. Dünner, M. Hasselfield, T. A. Marriage, J. Sievers, V. Acquaviva, G. E. Addison,

P. A. R. Ade, P. Aguirre, M. Amiri, J. W. Appel, L. F. Barrientos, E. S. Battistelli, J. R.


Bond, B. Brown, B. Burger, E. Calabrese, J. Chervenak, S. Das, M. J. Devlin, S. R.

Dicker, W. Bertrand Doriese, J. Dunkley, T. Essinger-Hileman, R. P. Fisher, M. B.

Gralla, J. W. Fowler, A. Hajian, M. Halpern, C. Hernandez-Monteagudo, G. C. Hilton,

M. Hilton, A. D. Hincks, R. Hlozek, K. M. Huffenberger, D. H. Hughes, J. P. Hughes,

L. Infante, K. D. Irwin, J. Baptiste Juin, M. Kaul, J. Klein, A. Kosowsky, J. M. Lau,

M. Limon, Y.-T. Lin, T. Louis, R. H. Lupton, D. Marsden, K. Martocci, P. Mauskopf,

F. Menanteau, K. Moodley, H. Moseley, C. B. Netterfield, M. D. Niemack, M. R.

Nolta, L. A. Page, L. Parker, B. Partridge, H. Quintana, B. Reid, N. Sehgal, B. D.

Sherwin, D. N. Spergel, S. T. Staggs, D. S. Swetz, E. R. Switzer, R. Thornton, H. Trac,

C. Tucker, R. Warne, G. Wilson, E. Wollack, and Y. Zhao. The Atacama Cosmology

Telescope: Data Characterization and Mapmaking. ApJ, 762:10, January 2013. doi:

10.1088/0004-637X/762/1/10.

R. DuPlain, S. Ransom, P. Demorest, P. Brandt, J. Ford, and A. L. Shelton. Launching

GUPPI: the Green Bank Ultimate Pulsar Processing Instrument. In Society of Photo-

Optical Instrumentation Engineers (SPIE) Conference Series, volume 7019 of Society

of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, August 2008.

doi: 10.1117/12.790003.

G.R. Dvali, Gregory Gabadadze, and Massimo Porrati. 4-D gravity on a brane in 5-

D Minkowski space. Phys.Lett., B485:208–214, 2000. doi: 10.1016/S0370-2693(00)

00669-9.

D. J. Eisenstein and W. Hu. Baryonic Features in the Matter Transfer Function. ApJ,

496:605, March 1998. doi: 10.1086/305424.

D. J. Eisenstein, D. H. Weinberg, E. Agol, H. Aihara, C. Allende Prieto, S. F. Anderson,

J. A. Arns, E. Aubourg, S. Bailey, E. Balbinot, and et al. SDSS-III: Massive Spec-

troscopic Surveys of the Distant Universe, the Milky Way, and Extra-Solar Planetary

Systems. AJ, 142:72, September 2011. doi: 10.1088/0004-6256/142/3/72.

Wenjuan Fang, Sheng Wang, Wayne Hu, Zoltan Haiman, Lam Hui, et al. Challenges to

the DGP Model from Horizon-Scale Growth and Geometry. Phys.Rev., D78:103509,

2008. doi: 10.1103/PhysRevD.78.103509.

A. Faulkner et al. Aperture arrays for the SKA: the SKADS white paper. Technical Report

DS8 T1, SKADS, March 2010. http://www.skads-eu.org.

Andrei V. Frolov. A Singularity Problem with f(R) Dark Energy. Phys.Rev.Lett., 101:

061103, 2008. doi: 10.1103/PhysRevLett.101.061103.


Steven Furlanetto, S. Peng Oh, and Frank Briggs. Cosmology at Low Frequencies: The

21 cm Transition and the High-Redshift Universe. Phys.Rept., 433:181–301, 2006. doi:

10.1016/j.physrep.2006.08.002.

S. B. Giddings and M. S. Sloth. Semiclassical relations and IR effects in de Sitter and

slow-roll space-times. J. Cosmology Astropart. Phys., 1:023, January 2011a. doi: 10.

1088/1475-7516/2011/01/023.

S. B. Giddings and M. S. Sloth. Cosmological observables, infrared growth of fluctuations,

and scale-dependent anisotropies. Phys. Rev. D, 84(6):063528, September 2011b. doi:

10.1103/PhysRevD.84.063528.

A. H. Guth. Inflationary universe: A possible solution to the horizon and flatness prob-

lems. Phys. Rev. D, 23:347–356, January 1981. doi: 10.1103/PhysRevD.23.347.

L. Guzzo, M. Scodeggio, B. Garilli, B. R. Granett, U. Abbas, C. Adami, S. Arnouts,

J. Bel, M. Bolzonella, D. Bottini, E. Branchini, A. Cappi, J. Coupon, O. Cucciati,

I. Davidzon, G. De Lucia, S. de la Torre, A. Fritz, P. Franzetti, M. Fumana, P. Hudelot,

O. Ilbert, A. Iovino, J. Krywult, V. Le Brun, O. Le Fèvre, D. Maccagni, K. Małek,

F. Marulli, H. J. McCracken, L. Paioro, J. A. Peacock, M. Polletta, A. Pollo, H. Schla-

genhaufer, L. A. M. Tasca, R. Tojeiro, D. Vergani, G. Zamorani, A. Zanichelli, A. Bur-

den, C. Di Porto, A. Marchetti, C. Marinoni, Y. Mellier, L. Moscardini, R. C. Nichol,

W. J. Percival, S. Phleps, and M. Wolk. The VIMOS Public Extragalactic Redshift

Survey (VIPERS). An unprecedented view of galaxies and large-scale structure at

0.5<z<1.2. ArXiv e-prints, March 2013.

A. J. S. Hamilton and M. Tegmark. Decorrelating the power spectrum of galaxies.

MNRAS, 312:285–294, February 2000. doi: 10.1046/j.1365-8711.2000.03074.x.

J. B. Hartle. Gravity: An Introduction to Einstein’s General Relativity. 2003.

C. Heiles, P. Perillat, M. Nolan, D. Lorimer, R. Bhat, T. Ghosh, M. Lewis, K. O’Neil,

C. Salter, and S. Stanimirovic. Mueller Matrix Parameters for Radio Telescopes and

Their Observational Determination. PASP, 113:1274–1288, October 2001. doi: 10.

1086/323289.

G. Hinshaw, C. Barnes, C. L. Bennett, M. R. Greason, M. Halpern, R. S. Hill, N. Jarosik,

A. Kogut, M. Limon, S. S. Meyer, N. Odegard, L. Page, D. N. Spergel, G. S.

Tucker, J. L. Weiland, E. Wollack, and E. L. Wright. First-Year Wilkinson Microwave


Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic

Error Limits. ApJS, 148:63–95, September 2003. doi: 10.1086/377222.

G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R.

Nolta, M. Halpern, R. S. Hill, N. Odegard, L. Page, K. M. Smith, J. L. Weiland,

B. Gold, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, G. S. Tucker, E. Wollack, and

E. L. Wright. Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observa-

tions: Cosmological Parameter Results. ArXiv e-prints, December 2012.

W. Hu and I. Sawicki. Parametrized post-Friedmann framework for modified gravity.

Phys. Rev. D, 76(10):104043, November 2007. doi: 10.1103/PhysRevD.76.104043.

Wayne Hu and Ignacy Sawicki. Models of f(R) Cosmic Acceleration that Evade Solar-

System Tests. Phys.Rev., D76:064004, 2007. doi: 10.1103/PhysRevD.76.064004.

B. Jain and P. Zhang. Observational tests of modified gravity. Phys. Rev. D, 78(6):

063503, September 2008. doi: 10.1103/PhysRevD.78.063503.

N. Jarosik, C. Barnes, M. R. Greason, R. S. Hill, M. R. Nolta, N. Odegard, J. L. Weiland,

R. Bean, C. L. Bennett, O. Doré, M. Halpern, G. Hinshaw, A. Kogut, E. Komatsu,

M. Limon, S. S. Meyer, L. Page, D. N. Spergel, G. S. Tucker, E. Wollack, and E. L.

Wright. Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations:

Beam Profiles, Data Processing, Radiometer Characterization, and Systematic Error

Limits. ApJS, 170:263–287, June 2007. doi: 10.1086/513697.

D. Jeong and M. Kamionkowski. Clustering Fossils from the Early Universe. Physical

Review Letters, 108(25):251301, June 2012. doi: 10.1103/PhysRevLett.108.251301.

D. Jeong and F. Schmidt. Large-scale structure with gravitational waves. I. Galaxy

clustering. Phys. Rev. D, 86(8):083512, October 2012. doi: 10.1103/PhysRevD.86.

083512.

N. Kaiser. Clustering in real space and in redshift space. MNRAS, 227:1–21, July 1987.

K. I. Kellermann, I. I. K. Pauliny-Toth, and P. J. S. Williams. The Spectra of Radio

Sources in the Revised 3c Catalogue. ApJ, 157:1, July 1969. doi: 10.1086/150046.

N. Khandai, S. K. Sethi, T. Di Matteo, R. A. C. Croft, V. Springel, A. Jana, and J. P.

Gardner. Detecting neutral hydrogen in emission at redshift z ∼ 1. MNRAS, 415:

2580–2593, August 2011. doi: 10.1111/j.1365-2966.2011.18881.x.


L. Knox, Y.-S. Song, and J. A. Tyson. Distance-redshift and growth-redshift rela-

tions as two windows on acceleration and gravitation: Dark energy or new gravity?

Phys. Rev. D, 74(2):023512, July 2006. doi: 10.1103/PhysRevD.74.023512.

E. Komatsu, J. Dunkley, M. R. Nolta, C. L. Bennett, B. Gold, G. Hinshaw, N. Jarosik,

D. Larson, M. Limon, L. Page, D. N. Spergel, M. Halpern, R. S. Hill, A. Kogut, S. S.

Meyer, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L. Wright. Five-Year Wilkinson

Microwave Anisotropy Probe Observations: Cosmological Interpretation. ApJS, 180:

330–376, February 2009. doi: 10.1088/0067-0049/180/2/330.

E. Komatsu, K. M. Smith, J. Dunkley, C. L. Bennett, B. Gold, G. Hinshaw, N. Jarosik,

D. Larson, M. R. Nolta, L. Page, D. N. Spergel, M. Halpern, R. S. Hill, A. Kogut,

M. Limon, S. S. Meyer, N. Odegard, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L.

Wright. Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations:

Cosmological Interpretation. ApJS, 192:18, February 2011. doi: 10.1088/0067-0049/

192/2/18.


K. Koyama and R. Maartens. Structure formation in the Dvali Gabadadze Porrati

cosmological model. J. Cosmology Astropart. Phys., 1:016, January 2006. doi:

10.1088/1475-7516/2006/01/016.

K. Koyama and F. P. Silva. Nonlinear interactions in a cosmological background in the

Dvali-Gabadadze-Porrati braneworld. Phys. Rev. D, 75(8):084040, April 2007. doi:

10.1103/PhysRevD.75.084040.

P. Lah, J. N. Chengalur, F. H. Briggs, M. Colless, R. de Propris, M. B. Pracy, W. J. G. de

Blok, S. S. Fujita, M. Ajiki, Y. Shioya, T. Nagao, T. Murayama, Y. Taniguchi, M. Yagi,

and S. Okamura. The HI content of star-forming galaxies at z = 0.24. MNRAS, 376:

1357–1366, April 2007. doi: 10.1111/j.1365-2966.2007.11540.x.

P. Lah, M. B. Pracy, J. N. Chengalur, F. H. Briggs, M. Colless, R. de Propris, S. Ferris,

B. P. Schmidt, and B. E. Tucker. The HI gas content of galaxies around Abell 370, a

galaxy cluster at z = 0.37. MNRAS, 399:1447–1470, November 2009. doi: 10.1111/j.

1365-2966.2009.15368.x.


A. Lewis. The real shape of non-Gaussianities. J. Cosmology Astropart. Phys., 10:026,

October 2011. doi: 10.1088/1475-7516/2011/10/026.

Antony Lewis and Anthony Challinor. The 21cm angular-power spectrum from the dark

ages. Phys.Rev., D76:083005, 2007. doi: 10.1103/PhysRevD.76.083005.

Antony Lewis, Anthony Challinor, and Anthony Lasenby. Efficient computation of CMB

anisotropies in closed FRW models. Astrophys.J., 538:473–476, 2000. doi: 10.1086/

309179.

A. R. Liddle and D. H. Lyth. Cosmological Inflation and Large-Scale Structure. Cambridge

University Press, June 2000.

A. Liu and M. Tegmark. A method for 21 cm power spectrum estimation in the presence

of foregrounds. Phys. Rev. D, 83(10):103006, May 2011. doi: 10.1103/PhysRevD.83.

103006.

A. Liu and M. Tegmark. How well can we measure and understand foregrounds with 21-

cm experiments? MNRAS, 419:3491–3504, February 2012. doi: 10.1111/j.1365-2966.

2011.19989.x.

A. Loeb and J. S. B. Wyithe. Possibility of Precise Measurement of the Cosmological

Power Spectrum with a Dedicated Survey of 21cm Emission after Reionization. Physi-

cal Review Letters, 100(16):161301, April 2008. doi: 10.1103/PhysRevLett.100.161301.

A. Loeb and M. Zaldarriaga. Measuring the Small-Scale Power Spectrum of Cosmic

Density Fluctuations through 21cm Tomography Prior to the Epoch of Structure For-

mation. Physical Review Letters, 92(21):211301, May 2004. doi: 10.1103/PhysRevLett.

92.211301.

T. Lu, U.-L. Pen, and O. Doré. Dark energy from large-scale structure lensing

information. Phys. Rev. D, 81(12):123015, June 2010. doi: 10.1103/PhysRevD.81.123015.

Tingting Lu and Ue-Li Pen. Precision of diffuse 21-cm lensing. Mon.Not.Roy.Astron.Soc.,

2007.

A. Lue, R. Scoccimarro, and G. D. Starkman. Probing Newton’s constant on vast

scales: Dvali-Gabadadze-Porrati gravity, cosmic acceleration, and large scale struc-

ture. Phys. Rev. D, 69(12):124015, June 2004. doi: 10.1103/PhysRevD.69.124015.

Juan Martin Maldacena. Non-Gaussian features of primordial fluctuations in single field

inflationary models. JHEP, 0305:013, 2003.


X.-C. Mao. Measuring Baryon Acoustic Oscillations on 21 cm Intensity Fluctuations at

Moderate Redshifts. ApJ, 752:80, June 2012. doi: 10.1088/0004-637X/752/2/80.

Y. Mao, M. Tegmark, M. McQuinn, M. Zaldarriaga, and O. Zahn. How accurately can

21cm tomography constrain cosmology? Phys. Rev. D, 78(2):023529, July 2008. doi:

10.1103/PhysRevD.78.023529.

F. A. Marín, N. Y. Gnedin, H.-J. Seo, and A. Vallinotto. Modeling the Large-scale Bias

of Neutral Hydrogen. ApJ, 718:972–980, August 2010. doi: 10.1088/0004-637X/718/

2/972.

A. M. Martin, E. Papastergis, R. Giovanelli, M. P. Haynes, C. M. Springob, and

S. Stierwalt. The Arecibo Legacy Fast ALFA Survey. X. The H I Mass Function

and Ω H I from the 40% ALFALFA Survey. ApJ, 723:1359–1374, November 2010. doi:

10.1088/0004-637X/723/2/1359.

K. W. Masui and U.-L. Pen. Primordial Gravity Wave Fossils and Their Use in

Testing Inflation. Physical Review Letters, 105(16):161302, October 2010. doi:

10.1103/PhysRevLett.105.161302.

K. W. Masui, P. McDonald, and U.-L. Pen. Near-term measurements with 21 cm intensity

mapping: Neutral hydrogen fraction and BAO at z<2. Phys. Rev. D, 81(10):103527,

May 2010a. doi: 10.1103/PhysRevD.81.103527.

K. W. Masui, F. Schmidt, U.-L. Pen, and P. McDonald. Projected constraints on modified

gravity cosmologies from 21 cm intensity mapping. Phys. Rev. D, 81(6):062001, March

2010b. doi: 10.1103/PhysRevD.81.062001.

K. W. Masui, E. R. Switzer, N. Banavar, K. Bandura, C. Blake, L.-M. Calin, T.-C.

Chang, X. Chen, Y.-C. Li, Y.-W. Liao, A. Natarajan, U.-L. Pen, J. B. Peterson, J. R.

Shaw, and T. C. Voytek. Measurement of 21 cm Brightness Fluctuations at z ∼ 0.8 in

Cross-correlation. ApJL, 763:L20, January 2013. doi: 10.1088/2041-8205/763/1/L20.

J. C. Mather, E. S. Cheng, R. E. Eplee, Jr., R. B. Isaacman, S. S. Meyer, R. A. Shafer,

R. Weiss, E. L. Wright, C. L. Bennett, N. W. Boggess, E. Dwek, S. Gulkis, M. G.

Hauser, M. Janssen, T. Kelsall, P. M. Lubin, S. H. Moseley, Jr., T. L. Murdock, R. F.

Silverberg, G. F. Smoot, and D. T. Wilkinson. A preliminary measurement of the

cosmic microwave background spectrum by the Cosmic Background Explorer (COBE)

satellite. ApJ, 354:L37–L40, May 1990. doi: 10.1086/185717.


P. McDonald and D. J. Eisenstein. Dark energy and curvature from a future baryonic

acoustic oscillation survey using the Lyman-α forest. Phys. Rev. D, 76(6):063009,

September 2007. doi: 10.1103/PhysRevD.76.063009.

P. McDonald and U. Seljak. How to evade the sample variance limit on measurements of

redshift-space distortions. J. Cosmology Astropart. Phys., 10:007, October 2009. doi:

10.1088/1475-7516/2009/10/007.

J. D. Meiring, T. M. Tripp, J. X. Prochaska, J. Tumlinson, J. Werk, E. B. Jenkins,

C. Thom, J. M. O’Meara, and K. R. Sembach. The First Observations of Low-redshift

Damped Lyα Systems with the Cosmic Origins Spectrograph. ApJ, 732:35, May 2011.

doi: 10.1088/0004-637X/732/1/35.

Rajaram Nityananda. Effects of SVD RFI filtering on a weak signal. NCRA Technical

Reports, July 2010. URL http://ncralib1.ncra.tifr.res.in:8080/jspui/handle/2301/484.

Shin’ichi Nojiri and Sergei D. Odintsov. Modified gravity with negative and positive

powers of the curvature: Unification of the inflation and of the cosmic acceleration.

Phys.Rev., D68:123512, 2003. doi: 10.1103/PhysRevD.68.123512.

Shin’ichi Nojiri and Sergei D. Odintsov. Modified f(R) gravity consistent with realistic

cosmology: From matter dominated epoch to dark energy universe. Phys.Rev., D74:

086005, 2006. doi: 10.1103/PhysRevD.74.086005.

Shin’ichi Nojiri and Sergei D. Odintsov. Unifying inflation with LambdaCDM epoch in

modified f(R) gravity consistent with Solar System tests. Phys.Lett., B657:238–245,

2007. doi: 10.1016/j.physletb.2007.10.027.

Shin’ichi Nojiri and Sergei D. Odintsov. The Future evolution and finite-time singularities

in F(R)-gravity unifying the inflation and cosmic acceleration. Phys.Rev., D78:046006,

2008. doi: 10.1103/PhysRevD.78.046006.

S. P. Oh and K. J. Mack. Foregrounds for 21-cm observations of neutral gas at high

redshift. MNRAS, 346:871–877, December 2003. doi: 10.1111/j.1365-2966.2003.07133.

x.

H. Oyaizu. Nonlinear evolution of f(R) cosmologies. I. Methodology. Phys. Rev. D, 78

(12):123523, December 2008. doi: 10.1103/PhysRevD.78.123523.


H. Oyaizu, M. Lima, and W. Hu. Nonlinear evolution of f(R) cosmologies. II. Power

spectrum. Phys. Rev. D, 78(12):123524, December 2008. doi: 10.1103/PhysRevD.78.

123524.

G. Paciga, J. Albert, K. Bandura, T.-C. Chang, Y. Gupta, C. Hirata, J. Odegova, U.-

L. Pen, J. B. Peterson, J. Roy, R. Shaw, K. Sigurdson, and T. Voytek. A refined

foreground-corrected limit on the HI power spectrum at z=8.6 from the GMRT Epoch

of Reionization Experiment. ArXiv e-prints, January 2013.

N. Padmanabhan and M. White. Constraining anisotropic baryon oscillations.

Phys. Rev. D, 77(12):123540, June 2008. doi: 10.1103/PhysRevD.77.123540.

E. Pajer, F. Schmidt, and M. Zaldarriaga. The Observed Squeezed Limit of Cosmological

Three-Point Functions. ArXiv e-prints, May 2013.

Y. C. Pei and S. M. Fall. Cosmic Chemical Evolution. ApJ, 454:69, November 1995. doi:

10.1086/176466.

U.-L. Pen. Gravitational lensing of epoch-of-reionization gas. New A, 9:417–424, July

2004. doi: 10.1016/j.newast.2004.01.006.

U.-L. Pen, R. Sheth, J. Harnois-Déraps, X. Chen, and Z. Li. Cosmic Tides. ArXiv

e-prints, February 2012.

Ue-Li Pen, Lister Staveley-Smith, Jeffrey Peterson, and Tzu-Ching Chang. First Detec-

tion of Cosmic Structure in the 21-cm Intensity Field. 2008.

S. Perlmutter, G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro,

S. Deustua, S. Fabbro, A. Goobar, D. E. Groom, I. M. Hook, A. G. Kim, M. Y.

Kim, J. C. Lee, N. J. Nunes, R. Pain, C. R. Pennypacker, R. Quimby, C. Lidman,

R. S. Ellis, M. Irwin, R. G. McMahon, P. Ruiz-Lapuente, N. Walton, B. Schaefer,

B. J. Boyle, A. V. Filippenko, T. Matheson, A. S. Fruchter, N. Panagia, H. J. M.

Newberg, W. J. Couch, and Supernova Cosmology Project. Measurements of Omega

and Lambda from 42 High-Redshift Supernovae. ApJ, 517:565–586, June 1999. doi:

10.1086/307221.

Jeffrey B. Peterson, Kevin Bandura, and Ue Li Pen. The Hubble Sphere Hydrogen

Survey. 2006.

Jeffrey B. Peterson, Roy Aleksan, Reza Ansari, Kevin Bandura, Dick Bond, et al. 21 cm

Intensity Mapping. 2009.


Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud,

M. Ashdown, F. Atrio-Barandela, J. Aumont, C. Baccigalupi, A. J. Banday, and et al.

Planck 2013 results. I. Overview of products and scientific results. ArXiv e-prints,

March 2013a.

Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud,

M. Ashdown, F. Atrio-Barandela, J. Aumont, C. Baccigalupi, A. J. Banday, and et al.

Planck 2013 results. VIII. HFI photometric calibration and mapmaking. ArXiv e-

prints, March 2013b.

Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud,

M. Ashdown, F. Atrio-Barandela, J. Aumont, C. Baccigalupi, A. J. Banday, and et al.

Planck 2013 results. XII. Component separation. ArXiv e-prints, March 2013c.

Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud,

M. Ashdown, F. Atrio-Barandela, J. Aumont, C. Baccigalupi, A. J. Banday, and et al.

Planck 2013 results. XVI. Cosmological parameters. ArXiv e-prints, March 2013d.

J. C. Pober, A. R. Parsons, J. E. Aguirre, Z. Ali, R. F. Bradley, C. L. Carilli, D. DeBoer,

M. Dexter, N. E. Gugliucci, D. C. Jacobs, D. MacMahon, J. Manley, D. F. Moore,

I. I. Stefan, and W. P. Walbrugh. Opening the 21cm EoR Window: Measurements of

Foreground Isolation with PAPER. ArXiv e-prints, January 2013a.

J. C. Pober, A. R. Parsons, D. R. DeBoer, P. McDonald, M. McQuinn, J. E. Aguirre,

Z. Ali, R. F. Bradley, T.-C. Chang, and M. F. Morales. The Baryon Acoustic Oscil-

lation Broadband and Broad-beam Array: Design Overview and Sensitivity Forecasts.

AJ, 145:65, March 2013b. doi: 10.1088/0004-6256/145/3/65.

J. X. Prochaska and A. M. Wolfe. On the (Non)Evolution of H I Gas in Galaxies Over

Cosmic Time. ApJ, 696:1543–1547, May 2009. doi: 10.1088/0004-637X/696/2/1543.

Jason X. Prochaska, Stéphane Herbert-Fort, and Arthur M. Wolfe. The SDSS Damped

Lyα Survey: Data Release 3. Astrophys.J., 635:123–142, 2005. doi: 10.1086/497287.

Anthony R. Pullen and Christopher M. Hirata. Non-detection of a statistically

anisotropic power spectrum in large-scale structure. JCAP, 1005:027, 2010. doi:

10.1088/1475-7516/2010/05/027.

M.E. Putman, P. Henning, A. Bolatto, D. Keres, D.J. Pisano, et al. How do Galaxies

Accrete Gas and Form Stars? 2009.


S. M. Rao, D. A. Turnshek, and D. B. Nestor. Damped Lyα Systems at z<1.65: The

Expanded Sloan Digital Sky Survey Hubble Space Telescope Sample. ApJ, 636:610–

630, January 2006. doi: 10.1086/498132.

A. G. Riess, A. V. Filippenko, P. Challis, A. Clocchiatti, A. Diercks, P. M. Garnavich,

R. L. Gilliland, C. J. Hogan, S. Jha, R. P. Kirshner, B. Leibundgut, M. M. Phillips,

D. Reiss, B. P. Schmidt, R. A. Schommer, R. C. Smith, J. Spyromilio, C. Stubbs, N. B.

Suntzeff, and J. Tonry. Observational Evidence from Supernovae for an Accelerating

Universe and a Cosmological Constant. AJ, 116:1009–1038, September 1998. doi:

10.1086/300499.

H. Röttgering, A. G. de Bruyn, Robert P. Fender, J. Kuijpers, M. P. van Haarlem, et al.

LOFAR: A New radio telescope for low frequency radio observations: science and

project status. 2003.

G. B. Rybicki and A. P. Lightman. Radiative Processes in Astrophysics. Wiley, 1979.

Ignacy Sawicki and Wayne Hu. Stability of Cosmological Solution in f(R) Models of

Gravity. Phys.Rev., D75:127502, 2007. doi: 10.1103/PhysRevD.75.127502.

A. M. M. Scaife and G. H. Heald. A broad-band flux scale for low-frequency radio

telescopes. MNRAS, 423:L30–L34, June 2012. doi: 10.1111/j.1745-3933.2012.01251.x.

F. Schmidt. Self-consistent cosmological simulations of DGP braneworld gravity.

Phys. Rev. D, 80(4):043001, August 2009. doi: 10.1103/PhysRevD.80.043001.

F. Schmidt and D. Jeong. Large-scale structure with gravitational waves. II. Shear. Phys. Rev. D, 86(8):083513, October 2012a. doi: 10.1103/PhysRevD.86.083513.

F. Schmidt and D. Jeong. Cosmic rulers. Phys. Rev. D, 86(8):083527, October 2012b. doi: 10.1103/PhysRevD.86.083527.

F. Schmidt, M. Lima, H. Oyaizu, and W. Hu. Nonlinear evolution of f(R) cosmologies. III. Halo statistics. Phys. Rev. D, 79(8):083518, April 2009a. doi: 10.1103/PhysRevD.79.083518.

F. Schmidt, A. Vikhlinin, and W. Hu. Cluster constraints on f(R) gravity. Phys. Rev. D, 80(8):083505, October 2009b. doi: 10.1103/PhysRevD.80.083505.

Fabian Schmidt. Weak Lensing Probes of Modified Gravity. Phys.Rev., D78:043002, 2008. doi: 10.1103/PhysRevD.78.043002.

R. Scoccimarro. Large-scale structure in brane-induced gravity. I. Perturbation theory. Phys. Rev. D, 80(10):104006, November 2009. doi: 10.1103/PhysRevD.80.104006.

H.-J. Seo and D. J. Eisenstein. Improved Forecasts for the Baryon Acoustic Oscillations and Cosmological Distance Scale. ApJ, 665:14–24, August 2007. doi: 10.1086/519549.

H.-J. Seo, S. Dodelson, J. Marriner, D. Mcginnis, A. Stebbins, C. Stoughton, and A. Vallinotto. A Ground-based 21 cm Baryon Acoustic Oscillation Survey. ApJ, 721:164–173, September 2010. doi: 10.1088/0004-637X/721/1/164.

J. R. Shaw, K. Sigurdson, U.-L. Pen, A. Stebbins, and M. Sitwell. All-Sky Interferometry with Spherical Harmonic Transit Telescopes. ArXiv e-prints, February 2013.

Sijing Shen, James Wadsley, and Gregory Stinson. The Enrichment of Intergalactic Medium With Adiabatic Feedback I: Metal Cooling and Metal Diffusion. 2009.

R. E. Smith, J. A. Peacock, A. Jenkins, S. D. M. White, C. S. Frenk, F. R. Pearce, P. A. Thomas, G. Efstathiou, and H. M. P. Couchman. Stable clustering, the halo model and non-linear cosmological power spectra. MNRAS, 341:1311–1332, June 2003. doi: 10.1046/j.1365-8711.2003.06503.x.

G. F. Smoot, C. L. Bennett, A. Kogut, E. L. Wright, J. Aymon, N. W. Boggess, E. S. Cheng, G. de Amici, S. Gulkis, M. G. Hauser, G. Hinshaw, P. D. Jackson, M. Janssen, E. Kaita, T. Kelsall, P. Keegstra, C. Lineweaver, K. Loewenstein, P. Lubin, J. Mather, S. S. Meyer, S. H. Moseley, T. Murdock, L. Rokke, R. F. Silverberg, L. Tenorio, R. Weiss, and D. T. Wilkinson. Structure in the COBE differential microwave radiometer first-year maps. ApJ, 396:L1–L5, September 1992. doi: 10.1086/186504.

Yong-Seon Song, Hiranya Peiris, and Wayne Hu. Cosmological Constraints on f(R) Acceleration Models. Phys.Rev., D76:063517, 2007. doi: 10.1103/PhysRevD.76.063517.

Thomas P. Sotiriou and Valerio Faraoni. f(R) Theories Of Gravity. Rev.Mod.Phys., 82:451–497, 2010. doi: 10.1103/RevModPhys.82.451.

V. Springel, S. D. M. White, A. Jenkins, C. S. Frenk, N. Yoshida, L. Gao, J. Navarro, R. Thacker, D. Croton, J. Helly, J. A. Peacock, S. Cole, P. Thomas, H. Couchman, A. Evrard, J. Colberg, and F. Pearce. Simulations of the formation, evolution and clustering of galaxies and quasars. Nature, 435:629–636, June 2005. doi: 10.1038/nature03597.

Alexei A. Starobinsky. A New Type of Isotropic Cosmological Models Without Singularity. Phys.Lett., B91:99–102, 1980. doi: 10.1016/0370-2693(80)90670-X.

Alexei A. Starobinsky. Disappearing cosmological constant in f(R) gravity. JETP Lett., 86:157–163, 2007. doi: 10.1134/S0021364007150027.

R. Subrahmanyan and K. R. Anantharamaiah. A search for protoclusters at Z = 3.3. Journal of Astrophysics and Astronomy, 11:221–235, June 1990. doi: 10.1007/BF02715018.

E. R. Switzer, K. W. Masui, K. Bandura, L.-M. Calin, T.-C. Chang, X. Chen, Y.-C. Li, Y.-W. Liao, A. Natarajan, U.-L. Pen, J. B. Peterson, J. R. Shaw, and T. C. Voytek. Determination of z ∼ 0.8 neutral hydrogen fluctuations using the 21 cm intensity mapping auto-correlation. ArXiv e-prints, April 2013.

M. Tegmark. How to Make Maps from Cosmic Microwave Background Data without Losing Information. ApJ, 480:L87, May 1997. doi: 10.1086/310631.

M. Tristram, J. F. Macías-Pérez, C. Renault, and D. Santos. XSPECT, estimation of the angular power spectrum by computing cross-power spectra with analytical error bars. MNRAS, 358:833–842, April 2005. doi: 10.1111/j.1365-2966.2005.08760.x.

S. Tsujikawa and T. Tatekawa. The effect of modified gravity on weak lensing. Physics Letters B, 665:325–331, July 2008. doi: 10.1016/j.physletb.2008.06.052.

Arnold van Ardenne. SKADS next step: AAVP. In S. A. Torchinsky, A. van Ardenne, T. van den Brink-Havinga, A. J. J. van Es, and A. J. Faulkner, editors, Wide Field Science and Technology for the SKA, pages 407–410, Chateau de Limelette, Belgium, November 2009.

G. Vujanovic, U.-L. Pen, M. Reid, and J. R. Bond. Detecting cosmic structure via 21-cm intensity mapping on the Australian Telescope Compact Array. A&A, 539:L5, March 2012. doi: 10.1051/0004-6361/201117930.

M. H. Wieringa, A. G. de Bruyn, and P. Katgert. A Westerbork search for high redshift H I. A&A, 256:331–342, March 1992.

T. L. Wilson, K. Rohlfs, and S. Hüttemeister. Tools of Radio Astronomy. Springer-Verlag, 2009. doi: 10.1007/978-3-540-85122-6.

A. M. Wolfe, D. A. Turnshek, H. E. Smith, and R. D. Cohen. Damped Lyman-alpha absorption by disk galaxies with large redshifts. I - The Lick survey. ApJS, 61:249–304, June 1986. doi: 10.1086/191114.

Arthur M. Wolfe, Eric Gawiser, and Jason X. Prochaska. Damped Lyman alpha systems. Ann.Rev.Astron.Astrophys., 43:861–918, 2005. doi: 10.1146/annurev.astro.42.053102.133950.

E. L. Wright, G. Hinshaw, and C. L. Bennett. Producing Megapixel Cosmic Microwave Background Maps from Differential Radiometer Data. ApJ, 458:L53, February 1996. doi: 10.1086/309927.

S. Wyithe. A Method to Measure the Mass of Damped Ly-alpha Absorber Host Galaxies Using Fluctuations in 21cm Emission. ArXiv e-prints, April 2008.

Matias Zaldarriaga and Uros Seljak. Reconstructing projected matter density from cosmic microwave background. Phys.Rev., D59:123507, 1999. doi: 10.1103/PhysRevD.59.123507.

Hu Zhan and Lloyd Knox. Effect of hot baryons on the weak-lensing shear power spectrum. Astrophys.J., 616:L75–L78, 2004. doi: 10.1086/426712.

W. Zhao and D. Baskaran. Detecting relic gravitational waves in the CMB: Optimal parameters and their constraints. Phys.Rev., D79:083003, 2009. doi: 10.1103/PhysRevD.79.083003.

M. A. Zwaan, M. J. Meyer, L. Staveley-Smith, and R. L. Webster. The HIPASS catalogue: ΩHI and environmental effects on the HI mass function of galaxies. MNRAS, 359:L30–L34, May 2005. doi: 10.1111/j.1745-3933.2005.00029.x.