
DENOISING GRAVITATIONAL WAVES WITH ENHANCED DEEP RECURRENT DENOISING AUTO-ENCODERS

Hongyu Shen 1, Daniel George 2, Eliu A. Huerta 2,3, Zhizhen Zhao 1,3

University of Illinois at Urbana-Champaign: 1 Dept. of Electrical and Computer Engineering, 2 Dept. of Astronomy, 3 NCSA

ABSTRACT

Denoising of time-domain data is a crucial task for many applications such as communication, translation, and virtual assistants. For this task, a combination of recurrent neural networks (RNNs) with denoising auto-encoders (DAEs) has shown promising results. However, this combined model is challenged when operating with low signal-to-noise ratio (SNR) data embedded in non-Gaussian and non-stationary noise. To address this issue, we design a novel model, referred to as the ‘Enhanced Deep Recurrent Denoising Auto-Encoder’ (EDRDAE), that incorporates a signal amplifier layer and applies curriculum learning by first denoising high-SNR signals, before gradually decreasing the SNR until the signals become noise dominated. We showcase the performance of EDRDAE using time-series data that describes gravitational waves embedded in very noisy backgrounds. In addition, we show that EDRDAE can accurately denoise signals whose topology is significantly more complex than those used for training, demonstrating that our model generalizes to new classes of gravitational waves that are beyond the scope of established denoising algorithms.

Index Terms— RNN, denoising algorithm, time series, gravitational waves

1. INTRODUCTION

Denoising time-series data in real time is a timely and fast-paced field of research. The advent of hands-free devices for communication, translation, voice assistants, and hearing aids has triggered the development of new approaches and methods to recognize speech data in realistic, noisy environmental conditions. However, denoising extremely weak time-series signals embedded in non-Gaussian and non-stationary experimental data remains an outstanding challenge.

The exploitation of deep learning algorithms has led to spectacular progress in signal denoising research. Denoising auto-encoders (DAEs) stand out among denoising methods [1, 2]. DAEs follow the architecture of auto-encoders (AEs), a type of unsupervised deep network that learns an efficient data representation [3, 4, 5], mapping an input x to a hidden representation x′ that stores the information necessary to reconstruct an output y close to the original input. DAEs are computationally efficient since they utilize convolutional neural networks, and convolutional structures are well known to be effective on spectrograms. However, some time series, such as gravitational waves (GWs), require denoising directly in the time domain, because the conversion to spectrograms introduces a lossy, non-invertible representation of the underlying data [6], and because of the demands on the information preserved in the time domain. Even in the time domain, the information encoded from the early part of the data is almost uncorrelated with the terminal part of the data, due to the kernel structure of convolutions. The Deep Recurrent Denoising Auto-Encoder (DRDAE), proposed in [2], addresses this by feeding the raw time-series data to a model that takes inputs over multiple timesteps. Nevertheless, this approach is still ineffective at denoising low-SNR signals, as discussed in [7].
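To make the x → x′ → y mapping concrete, the following is a minimal dense DAE sketch in Keras; this is our own illustration, and the layer sizes and activations are assumptions, not the architectures of [1, 2]:

import tensorflow as tf

def build_dae(input_len=512, hidden_dim=64):
    noisy = tf.keras.Input(shape=(input_len,))                            # noisy input x
    hidden = tf.keras.layers.Dense(hidden_dim, activation="relu")(noisy)  # hidden representation x'
    recon = tf.keras.layers.Dense(input_len)(hidden)                      # reconstruction y
    model = tf.keras.Model(noisy, recon)
    model.compile(optimizer="adam", loss="mse")  # trained against clean targets
    return model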

To address this issue, we develop (1) a new RNN DAE architecture, named the Enhanced Deep Recurrent Denoising Auto-Encoder (EDRDAE), based on DRDAE, and (2) a new training strategy. We use curriculum learning [8] to ensure that data with SNRs ranging from low to very high are accurately recovered. We also describe the rationale for using signal amplifiers (SA) and cross-layer state connections (blue arrows in Fig 1(b)), two critical features of EDRDAE. Specifically, we demonstrate the superior performance of EDRDAE on two gravitational wave datasets: quasi-circular GWs describing binary black hole mergers, and eccentric GWs. We show that, compared with DRDAE, EDRDAE achieves lower mean squared error (MSE) and higher overlap [9, 10]. In addition, we show that our model generalizes to a different type of time series (eccentric GWs) in denoising when only one type of data (quasi-circular GWs) is used for training. Results relative to PCA and dictionary learning are also included in the experiment section.

In this paper, we use SNRpeak to quantify the noise level, defined as

SNRpeak = (maximum value of a given signal) / (standard deviation of the noise).

To facilitate direct comparisons with typical conventions in the GW community, we also provide an approximate conversion between SNRpeak and the matched-filtering SNR, SNRMF: SNRMF ≈ 13 × SNRpeak.
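A short sketch of these two quantities in NumPy, following the definitions above (our own helper names):

import numpy as np

def snr_peak(signal, noise_std):
    # SNRpeak = (maximum value of the signal) / (standard deviation of the noise)
    return np.max(signal) / noise_std

def snr_mf(signal, noise_std):
    # Approximate matched-filtering SNR via the paper's conversion SNRMF ≈ 13 × SNRpeak
    return 13.0 * snr_peak(signal, noise_std)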



(a) DRDAE (b) EDRDAE (c) Legend

Fig. 1. (a) shows the DRDAE model structure; (b) represents our EDRDAE model; the nomenclature is described in (c). Orange circles and red circles indicate the encoder and decoder parts of the two models. Double arrows indicate bidirectional LSTM connections, whereas one-directional arrows refer to the conventional LSTM structure. Both DRDAE and EDRDAE have multiple-timestep inputs. However, for EDRDAE, we add a cross-layer connection (blue arrows (BA)) and a signal amplifier (SA), shown in the green ellipse. Notice that the diagrams in (a) and (b) only display the basic structure of the models; the number of layers in the actual experiments differs and is provided in the experiment section.

2. METHODS

2.1. Model Architecture

The architecture of EDRDAE (Fig 1(b)) empowers this new model to better denoise time series embedded in non-Gaussian and non-stationary noise. There are three major structural differences compared to DRDAE (Fig 1(a)). First, besides the multiple-timestep inputs, we apply bidirectional LSTMs [11] rather than conventional one-way LSTMs, since the bidirectional structure passes the information of the input data through time in both directions, forward and backward. Especially for multiple-timestep inputs, where we use several neighboring time points to predict the central timestep, we found that bidirectional layers help pass information from neighboring timesteps (before and after the central timestep) to adjacent layers. Intuitively, this boosts denoising performance significantly.
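A minimal Keras sketch of this idea (our own rendering, using the 9-timestep input from the experiments; the 64-unit layer size is illustrative):

import tensorflow as tf

timesteps, features = 9, 1
inputs = tf.keras.Input(shape=(timesteps, features))
# Forward and backward passes let each output see context on both sides
# of the central timestep being denoised.
encoded = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(inputs)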

The second difference is the introduction of the Signal Amplifier (SA) (see Fig 1(b)). It is beneficial when the amplitude of the signal is lower than that of the background noise. This new structure is inspired by speech data, which are nearly symmetric about the mean-value axis (a horizontal line). Placing the SA right before the output layer assists the network in learning more evident patterns and in magnifying the reconstructed values to reach the true values of the clean references.
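One way to realize the SA is as a learnable scalar gain applied just before the output layer; the parameterization below is our own assumption, since the exact form is not spelled out here:

import tensorflow as tf

class SignalAmplifier(tf.keras.layers.Layer):
    def build(self, input_shape):
        # A single trainable gain, initialized to 1 (identity at the start).
        self.gain = self.add_weight(name="gain", shape=(),
                                    initializer="ones", trainable=True)

    def call(self, inputs):
        # Magnify reconstructed amplitudes toward the scale of the clean reference.
        return self.gain * inputs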

Finally, to make training faster, we apply a cross-layer connection, similar in spirit to the connection in [12]. It passes through both forward and backward cells in the bidirectional layers: the cell states from the output timestep of the encoder are passed to the cell states of the first timestep of the decoder layer, as illustrated by the blue arrows in Fig 1(b). Empirically, we observe that this structure helps reconstruction in noisy environments and achieves higher accuracy compared to models that use different cell states for different layers.
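A minimal sketch of this wiring in Keras (our own rendering of the blue arrows; the exact connectivity in the full model may differ):

import tensorflow as tf

inputs = tf.keras.Input(shape=(9, 1))
# return_state=True exposes the forward and backward (h, c) states.
enc, fh, fc, bh, bc = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True, return_state=True))(inputs)
# Seed the decoder's first timestep with the encoder's final cell states.
dec = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(
        enc, initial_state=[fh, fc, bh, bc])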

Fig. 2. Validation error comparison between training models with and without curriculum learning. The model with curriculum learning achieves lower MSE compared to the model without curriculum learning. The red markers indicate the changes of SNR under curriculum learning.

2.2. Curriculum Learning with Decreasing SNR

Unlike conventional image and audio denoising setups [13, 14], for which the noise level is typically too low to obscure the image background or the audio utterances, the data we focus on, such as raw gravitational waves, are always embedded within extreme noise. As a result, it is difficult to learn the original signal structure and remove the noise from raw data when training starts directly with very low SNRs. We found that gradually reducing the SNR during training, an idea taken from the curriculum learning literature [8], provides regularization, which allows the network to distill more accurate information about the underlying signals from larger SNRs to lower SNRs. This helps ensure that the information of the true signal is maintained even under extreme noise. Moreover, the decreasing-SNR strategy, in combination with the previously mentioned signal amplifier (SA), improves the reconstruction accuracy and smoothness of the data. With this approach, the model accurately denoises GWs with extremely small SNRs at test time, and it reaches a lower MSE in fewer iterations than models trained without curriculum learning (Fig 2).


Table 1. CL training scheme for GW denoising. This table shows how SNRpeak changes during training for the DRDAE and EDRDAE models. We start with SNRpeak = 2.0 for the first 2000 iterations and then decrease SNRpeak at the iterations specified in the table, down to the final SNRpeak = 0.5. The iterations at which a decrease of SNR is performed are chosen based on the loss function.

ITER. (K)   2    3    4.5  6    7.5  9    10.5  13   14.5  16   17.5
SNRpeak     1.5  1.4  1.3  1.2  1.1  1.0  0.9   0.8  0.7   0.6  0.5

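The schedule in Table 1 can be encoded as a simple lookup; a sketch, with iteration boundaries in thousands matching the table:

SNR_SCHEDULE = {2: 1.5, 3: 1.4, 4.5: 1.3, 6: 1.2, 7.5: 1.1, 9: 1.0,
                10.5: 0.9, 13: 0.8, 14.5: 0.7, 16: 0.6, 17.5: 0.5}

def current_snr_peak(iteration, schedule=SNR_SCHEDULE, initial=2.0):
    # Keep the last SNRpeak whose iteration boundary has been passed;
    # before the first boundary, training runs at the initial SNRpeak = 2.0.
    snr = initial
    for boundary_k in sorted(schedule):
        if iteration >= boundary_k * 1000:
            snr = schedule[boundary_k]
    return snr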

3. EXPERIMENT

3.1. Gravitational Wave Datasets

We designed experiments to illustrate the performance of DRDAE and EDRDAE on GW datasets. We use simulated GWs describing binary black hole (BBH) mergers, generated with the waveform model introduced in [15], which is available in LIGO’s Algorithm Library [16]. We consider BBH systems with mass ratios q ≤ 10 in steps of 0.1, and with total mass M ∈ [5 M⊙, 75 M⊙] in steps of 1 M⊙. The waveforms are generated with a sampling rate of 8192 Hz and whitened with the design sensitivity of LIGO [17]. We consider the late inspiral, merger, and ringdown evolution of BBHs, since this is representative of the BBH GW signals reported by ground-based GW detectors [18, 19]. Specifically, we have 9861 (75%) samples for training and 2459 (25%) for testing. An additional 41 eccentric waveforms, with eccentricities up to 0.2 ten cycles before the black holes collide, are used for testing the generalizability of our model. To simulate non-Gaussian and non-stationary noise, we take 4096 s of real non-Gaussian noise from the LIGO Open Science Center (LOSC) around the LVT151012 event.
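The template bank can be enumerated as a simple parameter grid; a sketch (waveform generation itself relies on the model of [15] in LIGO’s Algorithm Library and is not reproduced here):

import numpy as np

mass_ratios = np.arange(1.0, 10.05, 0.1)   # q <= 10 in steps of 0.1
total_masses = np.arange(5.0, 76.0, 1.0)   # M in [5, 75] solar masses, steps of 1
template_grid = [(q, M) for q in mass_ratios for M in total_masses]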

3.2. Data Preprocessing

The SNR of astrophysical GW sources in the LIGO detectors cannot be known prior to detection. Therefore, standardization of the datasets is a crucial step to guarantee that our model can detect GWs with a wide range of astrophysically motivated SNRs. As a result, during training we add white Gaussian noise to the clean data at each iteration with a predetermined SNRpeak, and normalize our data to have variance 1.0. Apart from the timestep augmentation [2], we also add random shifts to the training data, to make the model more resilient to variations in the location of the signal. For every input signal, we randomly generate an integer between 0 and 200 as the shift length for left and right shifts; the shift length is between 0% and 15% of the total signal length. Zero padding is performed when necessary.
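A sketch of this preprocessing in NumPy (our own minimal rendering of the steps above):

import numpy as np

def make_training_example(clean, target_snr_peak, max_shift=200, rng=np.random):
    # Scale white Gaussian noise so the example has the requested SNRpeak.
    noise_std = np.max(clean) / target_snr_peak
    noisy = clean + rng.normal(0.0, noise_std, clean.shape)
    noisy /= np.std(noisy)                        # normalize to unit variance
    # Random left/right shift with zero padding, as described above.
    shift = rng.randint(0, max_shift + 1)
    shifted = np.zeros_like(noisy)
    if rng.choice([True, False]):                 # shift right
        shifted[shift:] = noisy[:len(noisy) - shift]
    else:                                         # shift left
        shifted[:len(noisy) - shift] = noisy[shift:]
    return shifted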

Fig. 3. Comparison of the noisy gravitational wave at SNRpeak = 0.5 and the clean signal.

Table 2. Comparisons of MSE and overlap across different approaches. Entries of the form ·/· report "metric for quasi-circular" / "metric for eccentric". Here "DL" refers to dictionary learning and "WT" to wavelet thresholding.

MODEL     PCA        DL         WT         DRDAE      EDRDAE
MSE       .033/.034  .108/.107  .017/.018  .008/.004  .001/.005
OVERLAP   .644/.639  .466/.476  .684/.671  .906/.955  .994/.985


3.3. Training and Evaluation

All training is performed on NVIDIA Tesla V100 GPUs using TensorFlow [20]. We train the models with batch size 30. To ensure a fair model comparison, each encoder and decoder in DRDAE has four layers, with a one-way LSTM cell of 64 channels per layer. For EDRDAE, we have one encoder layer followed by two decoder layers with bidirectional LSTM cells. With these structures, DRDAE has 250,177 parameters and EDRDAE has 235,650 parameters. We tuned the optimizer (ADAM, RMSprop, and SGD), learning rate (0.0001 to 0.001), batch size (10 to 50), number of layers (2 to 4), and the size of the multiple-timestep input (5 to 20). Only results of the best model (ADAM optimizer, 0.0001 learning rate, batch size 30, 4-layer model, 9 input timesteps) are displayed in Fig 4. We use the default ADAM setup described in [21]. To denoise signals with extremely low SNR, for example SNRpeak = 0.5 (Fig 3), our training starts with SNRpeak = 2.0. We gradually decrease the SNR at subsequent training steps until we reach SNRpeak = 0.5 (see Table 1 and Fig 2). We only input BBH GWs with additive Gaussian noise for training; real LIGO noise with both BBH GWs and eccentric GWs is used for testing. In general, the models become stable after training for about 20,000 iterations. We summarize results after the final iteration of training in Fig 4.


Fig. 4. Performance on GW signals contaminated by real LIGO detector noise with SNRpeak = 0.5: reconstructed outputs of DRDAE on (a) quasi-circular signals, (b) pure noise input, and (c) eccentric GWs, and of EDRDAE on (d) quasi-circular signals, (e) pure noise input, and (f) eccentric GWs.

Table 3. Ablation study for the major components of EDRDAE.

MISSING PART   W/O SA     W/O BA     W/O CL
MSE            .017/.002  .007/.003  .010/.001
OVERLAP        .769/.979  .941/.974  .941/.960

Two metrics are used for model evaluation on GWs. Mean squared error (MSE) is a conventional metric for GW evaluation measuring reconstruction quality. We also evaluate a metric known as overlap, which measures how similar any two GWs are; an overlap equal to unity indicates perfect agreement between two given signals. For GW detection, LIGO typically requires a minimum overlap value of 0.97 [9, 10]. Measurements are summarized in Table 2, with benchmarks that include PCA with Wiener filtering and dictionary learning optimized by coordinate descent, as well as wavelet universal thresholding with 10 levels [22]. An ablation study for the SA, the cross-layer connections, and the decreasing SNRs is provided in Table 3.
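For whitened signals, the overlap reduces to a normalized inner product; a sketch under that simplifying assumption (the full metric in [9, 10] also maximizes over time and phase shifts):

import numpy as np

def overlap(a, b):
    # Equals 1 for perfectly matching waveforms.
    return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))

def mse(a, b):
    return np.mean((a - b) ** 2)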

Results for DRDAE and EDRDAE with real LIGO noise are shown in Fig 4. We compare the denoising results for a quasi-circular GW in Fig 4(a) and Fig 4(d). To prevent spurious detections, which is crucial in GW detection and denoising, we also test the models when the input is pure noise. Notice that our model avoids generating fake waves from pure noise (see Fig. 4(e)), whereas the output from DRDAE still contains wave-like structure with noticeable amplitude (see Fig. 4(b)). Additionally, we test the resilience of the model by denoising signals that are not used during training, i.e., eccentric GWs. Currently, there is no signal-processing algorithm adequate for the detection or denoising of eccentric GWs, due to their complex topology [23]. We chose eccentric GWs to showcase the robustness of our algorithm when denoising signals that are significantly different from the training data. Sample waveforms are presented in Fig 4(c) and Fig 4(f). It is clear that our model achieves better reconstruction for both quasi-circular and eccentric GWs, with lower MSEs and higher overlaps (Table 2).

4. CONCLUSION

In this paper, we proposed a new deep recurrent denoising auto-encoder to denoise gravitational wave signals contaminated by the extremely high levels of noise often encountered in realistic detection scenarios. By introducing additional structures to the model (the cross-layer connection and the signal amplifier), and by adopting a training strategy that gradually reduces the SNR of the training samples, we show that our model outperforms DRDAE and the other popular denoising algorithms tested (PCA, dictionary learning, and wavelet thresholding) for GW denoising. It is also noteworthy that although our denoising auto-encoder was trained only on quasi-circular GWs contaminated with additive white Gaussian noise, it can handle both quasi-circular GWs with different mass ratios and eccentric GWs embedded in real LIGO noise. The proposed method therefore generalizes well.


5. REFERENCES

[1] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol, “Extracting and Composing Robust Features with Denoising Autoencoders,” in ICML 2008, New York, NY, USA, 2008, pp. 1096–1103, ACM.

[2] A. L. Maas, Q. V. Le, T. M. O’Neil, O. Vinyals, P. Nguyen, and A. Y. Ng, “Recurrent neural networks for noise reduction in robust ASR,” in INTERSPEECH 2012, 2012, pp. 22–25.

[3] G. E. Hinton, S. Osindero, and Y. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput., vol. 18, no. 7, pp. 1527–1554, 2006.

[4] G. Hinton and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.

[5] Y. Bengio and Y. LeCun, “Scaling learning algorithms towards AI,” 2007, vol. 34, pp. 1–41.

[6] D. George and E. A. Huerta, “Deep Neural Networks to Enable Real-time Multimessenger Astrophysics,” Phys. Rev. D, vol. 97, no. 4, pp. 044039, 2018.

[7] N. Kumar, M. Van Segbroeck, K. Audhkhasi, P. Drotar, and S. S. Narayanan, “Fusion of diverse denoising systems for robust automatic speech recognition,” in ICASSP, 2014, pp. 5557–5561.

[8] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in ICML, 2009, pp. 41–48.

[9] T. D. Canton, A. J. Miller, A. P. Lundgren, D. A. Brown, et al., “Implementing a search for aligned-spin neutron star-black hole systems with advanced ground-based gravitational wave detectors,” Phys. Rev. D, vol. 90, no. 8, pp. 082004, 2014.

[10] S. A. Usman, A. H. Nitz, I. W. Harry, C. M. Biwer, et al., “The PyCBC search for gravitational waves from compact binary coalescence,” Class. Quant. Grav., vol. 33, no. 21, pp. 215004, 2016.

[11] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Networks, vol. 18, no. 5-6, pp. 602–610, 2005.

[12] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” 2014, NIPS’14, pp. 3104–3112.

[13] Z. Xia and A. Chakrabarti, “Identifying recurring patterns with deep neural networks for natural image denoising,” CoRR, vol. abs/1806.05229, 2018.

[14] D. Liu, B. Wen, Y. Fan, C. C. Loy, and T. S. Huang, “Non-local recurrent network for image restoration,” 2018.

[15] A. Bohe, V. Raymond, L. E. Kidder, M. Purrer, et al., “Improved effective-one-body model of spinning, nonprecessing binary black holes for the era of gravitational-wave astrophysics with advanced detectors,” Phys. Rev. D, vol. 95, no. 4, pp. 044028, 2017.

[16] LSC, “LSC Algorithm Library software packages LAL, LALWRAPPER, and LALAPPS.”

[17] D. Shoemaker, “Advanced LIGO anticipated sensitivity curves,” 2010.

[18] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, et al., “Observation of Gravitational Waves from a Binary Black Hole Merger,” Phys. Rev. Lett., vol. 116, no. 6, pp. 061102, 2016.

[19] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, et al., “GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence,” Phys. Rev. Lett., vol. 116, no. 24, pp. 241103, 2016.

[20] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, et al., “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” 2016.

[21] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2014.

[22] D. L. Donoho and I. M. Johnstone, “Ideal denoising in an orthonormal basis chosen from a library of bases,” Comptes Rendus Acad. Sci., Ser. I, vol. 319, pp. 1317–1322, 1994.

[23] E. A. Huerta, P. Kumar, B. Agarwal, D. George, H.-Y. Schive, H. P. Pfeiffer, R. Haas, W. Ren, T. Chu, M. Boyle, D. A. Hemberger, L. E. Kidder, M. A. Scheel, and B. Szilagyi, “Complete waveform model for compact binaries on eccentric orbits,” Phys. Rev. D, vol. 95, pp. 024038, Jan. 2017.