
KNOWLEDGE EXTRACTION AND THE DEVELOPMENT OF A DECISION SUPPORT SYSTEM FOR THE CONCEPTUAL DESIGN OF LIQUEFIED NATURAL GAS TERMINALS UNDER RISK AND UNCERTAINTY

By: Michael Rustell

Submitted in partial requirement for the degree of:

Doctor of Engineering

in

Sustainability for Engineering & Energy Systems

In collaboration with: HR Wallingford

Supervised by:

Prof. Soon-Thiam Khu (University of Surrey)
Prof. Yaochu Jin (University of Surrey)
Ms. Aurora Orsini (HR Wallingford)
Mr. Ben Gouldby (HR Wallingford)

Michael Rustell: Knowledge Extraction and the Development of a Decision Support System for the Conceptual Design of a Liquefied Natural Gas Terminal Under Risk and Uncertainty, Doctor of Engineering, © 2016

DECLARATION

This thesis and the work to which it refers are the results of my own efforts. Any ideas, data, images or text resulting from the work of others (whether published or unpublished) are fully identified as such within the work and attributed to the originator in the text, bibliography or in footnotes. The thesis has not been submitted in whole or in part for any other academic degree or professional qualification. I agree that the University has the right to submit my work to the plagiarism detection service TurnitinUK for originality checks. Whether or not drafts have been so-assessed, the University reserves the right to require an electronic version of the final document (as submitted) for assessment as above.

Michael Rustell,

September 5, 2016

ABSTRACT

The concept design stage is widely regarded as the most influential stage of a project: key decisions are made and significant project finance is acquired. Traditionally, designs are developed through engineering judgement, driven by experience. However, this approach can be time consuming as designs typically evolve through iteration, and it does not guarantee that the optimal option(s) will be developed.

This thesis discusses the development of a decision support system for the concept design of an LNG terminal: RaPoLa (Rapid Port Layout). It uses a range of computing and mathematical devices to enable options to be developed manually or automatically using both deterministic and probabilistic methodologies. This enables the exploration for optimal concept designs to be accomplished in a multitude of ways. A full description of the mathematical basis of RaPoLa is given, and three chapters detailing practical applications introduce the reader to the core concepts of RaPoLa and demonstrate a clear progression from a simple deterministic, manual design to a fully automated, probabilistic design. The results of using RaPoLa on a case study indicate that it is capable of producing options that are more optimal than those of the traditional approach.


This thesis is dedicated to my wife Anya and our daughter Elizabeth Grace Rustell, born on the 26th of April 2014.

PUBLICATIONS

Some ideas and figures have appeared previously in the following articles and conference papers:

Rafiq, M. Y. and M. J. F. Rustell (2014). "Building Information Modelling Steered by Evolutionary Computing." In: Journal of Computing in Civil Engineering 28.4. doi: 10.1061/(ASCE)CP.1943-5487.0000295. url: http://ascelibrary.org/doi/abs/10.1061/%28ASCE%29CP.1943-5487.0000295.

Rustell, M. J. F. (2013). "Breakwater Layout Optimisation Using A Root-Finding Algorithm." In: Ports 2013. Ed. by Bruce I. Ostbo and Don Oates. Reston, VA: American Society of Civil Engineers. Chap. 197, pp. 1929–1938. doi: 10.1061/9780784413067.197.

— (2014a). "Multi-objective optimisation of a liquefied natural gas terminal under risk and uncertainty (Poster)." In: Proceedings of the Annual Surrey Engineering Doctorate in Sustainability for Engineering and Energy Systems Conference. Guildford, UK.

— (2014b). "Optimising a Breakwater Layout using an Iterative Algorithm." url: http://www.pianc.org/downloads/dwa/BREAKWATER%20LENGTH%20OPTIMISATION%20USING%20AN%20ITERATIVE%20ALGORITHM.%20M.Rustell_2014.pdf.

Rustell, M. J. F. and S. Cork (2011). "Risk and Uncertainty Mitigation for New LNG Marine Terminals." In: Proceedings of the Global Ports Conference. London, UK.

Rustell, M. J. F., S. Dunn, S-T. Khu, Y. Jin, and B. Gouldby (2011). "The Development of a decision support system for optimizing the design of LNG terminals." In: Proceedings of the Annual Surrey Engineering Doctorate in Sustainability for Engineering and Energy Systems Conference. Guildford, UK.

Rustell, M. J. F., A. Orsini, S-T. Khu, Y. Jin, and B. Gouldby (2014a). "Decision Support for Designing New LNG Terminals." In: Proceedings of the 33rd PIANC World Congress. San Francisco, USA.

Rustell, M. J. F., A. Orsini, S-T. Khu, Y. Jin, and B. Gouldby (2014b). "Optimizing an LNG Terminal Subject To Uncertainty." In: Proceedings of the 11th International Conference on Hydroinformatics, HIC 2014. New York City, USA.

We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.

— Donald E. Knuth (Knuth, 1974)

ACKNOWLEDGEMENTS

I would first and foremost like to thank my wife Anya for putting up with me and my research and for supporting me throughout the entire process. During this research project, we had our first child, Elizabeth Grace Rustell, who has been a constant source of joy and inspiration.

I would also like to thank my supervisors: Prof. Soon-Thiam Khu, Ms Aurora Orsini, Prof. Yaochu Jin, Mr Ben Gouldby and Dr Scott Dunn, who have guided me along the way. I would also like to offer special thanks to Mr Peter Hunter and Prof. William Allsop, who both took an interest in me and this project and always seemed to have the answer to the questions I asked. I also thank HR Wallingford, who have provided substantial funding toward this research under research project DKY0432, and for giving me a desk and access to some of the world's most experienced maritime engineers and scientists.

I would also like to thank the University of Surrey for initiating this project, and all of the staff who have played a role, namely Prof. Chris France, Mr Brian Lewarne and Mrs Kelly Boazman, and the EPSRC for also providing funding. I would also like to thank those at HR Wallingford who have helped me in some way or other; there really are too many to mention, so thank you.

And finally, special thanks to "The Cohort", who were my companions and kindred spirits throughout the journey. We spent many good evenings together, exchanging ideas and enjoying life. It seems like yesterday we were all sat in the induction room at the very start of this journey. I am sure we will go on to do great things. Thanks guys.

CONTENTS

List of Figures xiii
List of Tables xviii
Acronyms xxi

i introduction 1

1 introduction 2
1.1 Background 2
1.2 Concept design 6
1.3 Optimisation 8
1.4 Risk and Uncertainty 10
1.4.1 Overview of uncertainty in the maritime environment 11
1.4.2 Definition of Risk 12
1.5 Aims and objectives 12
1.5.1 Objectives 13
1.6 RaPoLa 14
1.7 Contributions to knowledge 15
1.8 Scope 15

2 literature review 17
2.1 Introduction 17
2.2 Design methodologies 18
2.2.1 Level 0 Design Methods 18
2.2.2 Level 1 Design Methods 19
2.2.3 Level 2 Design Methods 20
2.2.4 Level 3 Design Methods 22
2.2.4.1 Latin Hypercube sampling 23
2.2.5 Sensitivity Analysis 24
2.2.6 Summary 24
2.3 Sea waves 25
2.3.1 Summary 27
2.4 Optimisation 27
2.4.1 Root-finding algorithms 28
2.4.2 Model Trees 28
2.4.3 Artificial Neural Networks 29
2.4.4 Genetic Programming 31
2.4.5 The Genetic Algorithm 31
2.4.6 Multi-objective Genetic Algorithm 35

ii mathematical basis of the decision support system 40

3 fundamental engineering concepts of the decision support system 41
3.1 Introduction 41
3.2 Overview of the Model 42
3.3 Wave transformation 45
3.4 Berth operability 54
3.5 Breakwater design 56
3.5.1 Breakwater cross-section 59
3.6 Breakwater layout 67
3.7 Channel Width/Depth 74
3.8 Channel volume 76
3.9 Channel infill 78
3.10 Trestle and Terminal Layout 81
3.11 Capital and maintenance cost 83

iii practical application 86

4 case study and manual validation of the engineering framework 87
4.1 Introduction - North Star Case Study 87
4.2 Part one - Replication of a Base Case design 90
4.3 Part two - RaPoLa as a manual decision support tool 91
4.3.1 Option One 92
4.3.2 Option Two 94
4.3.3 Option Three 95
4.3.4 Final options 95
4.4 Summary and conclusions 96

5 single and multi-objective optimisation of an lng terminal 98
5.1 Introduction 98
5.2 Decision variables 98
5.3 Objective functions 98
5.4 Selection of the Optimisation Algorithm 99
5.4.1 NSGA-II processes 100
5.5 Model setup 102
5.6 Results 104
5.6.1 Convergence 104
5.7 Results - One objective 106
5.8 Results - Two objectives 108
5.9 Results - Three objectives 111
5.10 Comparison of the three optimisation methods 115
5.10.1 Decision variables 115
5.10.2 Transposition into two-objective space 117
5.11 Comparison between Automated and Manually Developed Layouts 119
5.12 Summary and conclusions 120

iv uncertainty 123

6 risk and uncertainty analysis in lng terminal concept design 124
6.1 Introduction 124
6.2 Problem formulation 124
6.3 Development of the Probability Distributions for Random Variables 125
6.4 Calculating the optimum number of samples 126
6.5 Uncertainty within optimisation 131
6.5.1 Model setup 132
6.5.2 Convergence 134
6.5.3 Clustering into smaller subsets 136
6.6 Post-optimisation uncertainty analysis 142
6.6.1 Results - Post-optimisation uncertainty analysis 144
6.6.2 Boot-strapping the partial rank correlation coefficients 145
6.7 Comparison of methods 148

v conclusion 153

7 conclusion 154
7.1 Introduction 154
7.2 Overview of methodology 154
7.3 Contributions to knowledge 156
7.4 Limitations of the research 158
7.5 Recommendations for future research 158
7.6 Summary 160

vi appendices 162

a sediment infill equation 163
b rapola input files 165
b.1 All inputs 165
b.2 Breakwater 166
b.3 Channel 166
b.4 Constraints 166
b.5 Cost 168
b.6 Extremes 168
b.7 Sediment 169
b.8 Mooring 169
b.9 Trestle 170
b.10 Vessel 170
b.11 Waves 170
b.12 Currents 170

bibliography 172

LIST OF FIGURES

Figure 1.1 Typical LNG terminal and onshore plant 5
Figure 1.2 Design stages for a typical design project 9
Figure 2.1 Design load and resistance PDFs, where the common area between the load (blue) and resistance (red) represents the probability of failure 21
Figure 2.2 Randomly sampled variables vs. Latin hypercube samples 24
Figure 2.3 Neural network diagram. Source: (Gouldby, 2007) 30
Figure 2.4 Genetic algorithm schematic diagram 33
Figure 2.5 Non-dominated Pareto front. Source: (Jin, 2011) 36
Figure 3.1 HRW RaPoLa algorithm 44
Figure 3.2 Wave transformation algorithm 46
Figure 3.3 Wave bending as it propagates inshore due to depth-induced celerity reduction 47
Figure 3.4 Vessel response threshold curve 54
Figure 3.5 Overview of berth operability algorithm 55
Figure 3.6 Wave direction relative to vessel 56
Figure 3.7 Breakwater types and relevance to research. Based on (CIRIA and CUR, 2007, pp 781) 58
Figure 3.8 Cross-section of a rubble mound breakwater. Source: (CIRIA and CUR, 2007, pp 782) 59
Figure 3.9 Breakwater cross-section algorithm 60
Figure 3.10 Breaker types and associated surf similarity numbers. Source: CIRIA and CUR (2007) 63
Figure 3.11 Miniature models of concrete armour units, from left to right: Core-Loc, Accropode, Accropode II, Ecopode 65
Figure 3.12 Stability coefficient for varying bedslope gradients 66
Figure 3.13 Breakwater layout algorithm 68
Figure 3.14 Marching squares algorithm for tracing a constant depth contour, where the numbers represent the depth of the regular bathymetric grid 69
Figure 3.15 Simplified diagram of the major wave energy components 70
Figure 3.16 Diffracted and transmitted wave energies 71
Figure 3.17 Diffraction angle 72
Figure 3.18 Semi-optimised breakwater layout 73
Figure 3.19 Least squares regression for the optimised breakwater layout 73
Figure 3.20 Channel width/depth algorithm 74
Figure 3.21 Six degrees of freedom for a vessel 75
Figure 3.22 Channel volume algorithm 77
Figure 3.23 3D channel with sloped edges 77
Figure 3.24 Channel infill algorithm 79
Figure 3.25 LNG terminal schematic mapped with decision variables 82
Figure 3.26 Costing algorithm 84
Figure 4.1 North Star site on an exposed coastline 88
Figure 4.2 Directional roses summarising the significant wave height Hs (a), peak wave period Tp (b), wind speed V (c) and current velocity U (d) for a 28-year time series of hourly recordings from 1980-2008. Note: rose directions are from for waves and to for wind and currents 89
Figure 4.3 Layout comparison of traditional approach (LH) and RaPoLa (RH) 91
Figure 4.4 Design iterations for three concept layouts 92
Figure 4.5 Performance of the three options per design iteration, where capex, opex and total are represented as a ratio of the Base Case 93
Figure 4.6 Final layouts developed through the manual approach 96
Figure 5.1 RaPoLa integrated with the NSGA-II 101
Figure 5.2 Convergence of the populations 105
Figure 5.3 Single objective optimisation results 106
Figure 5.4 Layouts of the four single-objective optimisation datasets 107
Figure 5.5 Pareto front of two objective runs 108
Figure 5.6 Pareto trade-off surface composed of 100 options, with three options (45, 65, 50) representing the extremes and knee point of the results extracted 110
Figure 5.7 Offshore option with very long breakwater 111
Figure 5.8 Pareto surface of the three objective runs (9:12) expressed as a series of two-dimensional trade-off curves 113
Figure 5.10 Pareto trade-off of the best performing 150_100_0.2 dataset expressed in the two-objective space in the form total vs. downtime, indicating three options corresponding to the extremes and knee-point of the trade-off curve 114
Figure 5.11 Berth locations of the 150_100_0.2 datasets from each of the optimisation methods 116
Figure 5.12 All datasets presented in the two objective space 117
Figure 5.13 Improvement in population fitness with increasing generation, with three options selected from the final generation for further analysis 119
Figure 5.14 Automatically developed layouts using the NSGA-II 120
Figure 5.15 NSGA-II generated Pareto front of options displaying the classic trade-off shape, with the three options developed using the manual approach superimposed 121
Figure 6.1 PDFs for the twelve uncertain variables for an unspecified LNG terminal option already discussed in this thesis 128
Figure 6.2 CDFs for the twelve uncertain variables for an unspecified LNG terminal option already discussed in this thesis 129
Figure 6.3 Histograms of N = {100, 250, 500, 1000, 2500, 5000} randomly generated Monte Carlo samples run through RaPoLa 131
Figure 6.4 Time to compute the empirical distribution function for N samples 132
Figure 6.5 Diagram of the uncertainty within optimisation algorithm 133
Figure 6.6 Convergence of the uncertainty within optimisation results over 100 generations using the NSGA-II and eq 5.5 134
Figure 6.7 Compound graphic of the Pareto fronts of generations 5 (red), 16 (green) and 72 (blue), including density plots, correlation matrices, boxplots, scatter graphs, histograms and a barplot for each combination of objectives 135
Figure 6.8 Final generation of the embedded Monte Carlo simulator with results clustered by berth location. Compound graphic includes density plot, correlation matrix, boxplot, scatter graph, histogram and barplot for each combination of objectives 137
Figure 6.9 Berth locations of the final generation of optimised options, clustered by berth location and objective function 139
Figure 6.10 Silhouette widths of three groups, clustered by berth location 139
Figure 6.11 Three layouts of the cluster medoids which were partitioned using the PAM algorithm under the uncertainty within optimisation approach 140
Figure 6.12 Boxplot of decision variables and objective functions of the uncertainty within optimisation results, grouped by cluster 141
Figure 6.13 Options 45, 65 and 50 from the two objective optimisation method 142
Figure 6.14 Diagram of the post-optimisation uncertainty analysis algorithm 143
Figure 6.15 Post-optimisation uncertainty analysis distributions of capex, opex and total resulting from 2500 Latin Hypercube samples 144
Figure 6.16 Bootstrapped Partial Rank Correlation Coefficient error estimates of the input variables on the capex and opex of three layout options 147
Figure 6.17 Three layouts from the two respective methods: (a) uncertainty within optimisation, (b) post-optimisation uncertainty analysis 149
Figure 6.18 Error bar plot of the mean (µ) and standard deviation (σ) of the uncertainty within optimisation and post-optimisation uncertainty analysis methods, indicating that uncertainty within optimisation reduces the CoV more and the post-optimisation uncertainty analysis method produces a lower estimated cost overall 150
Figure 7.1 Berth locations and medoid layouts for a population of options optimised for the following three objectives: Vbw, Vch and Vtr, indicating a well distributed coverage of the design space 159

LIST OF TABLES

Table 1.1 Description of LNG terminal components 4
Table 1.2 Potential uncertainty ranges for each stage of a project life-cycle 11
Table 3.1 Overview of the input files required by RaPoLa. A full description of all input files stated in this table is included in Appendix B 43
Table 3.2 RaPoLa decision variables 83
Table 4.1 Design wave 87
Table 4.2 Breakwater design parameters 87
Table 4.3 Channel design parameters 90
Table 4.4 Sediment parameters 90
Table 4.5 Design vessel dimensions and parameters 91
Table 4.6 Infrastructure volume comparison of RaPoLa and traditional approach 91
Table 4.7 Decision variables for each iteration (I) of the design evolution of Option One 94
Table 4.8 Decision variables for each iteration (I) of the design evolution of Option Two 94
Table 4.9 Decision variables for each iteration (I) of the design evolution of Option Three 95
Table 4.10 Selected model output for the three options shown in Figure 4.6 that were developed manually (all units in m unless stated otherwise) 96
Table 5.1 Genetic operators for 12 runs ranging from one to three objectives 104
Table 5.2 Upper and lower bounds of the decision space S 104
Table 5.3 Decision variables of the single objective optimisation methods compared to the Base Case decision variables 108
Table 5.4 Decision variables of three selected options 109
Table 5.5 Decision variables of options 118, 121 and 13 of the 150_100_0.1 dataset of the three objective method 115
Table 5.6 Mean and standard deviation for the decision variables of the 150_100_0.1 dataset of each of the optimisation methods 115
Table 5.7 Mean and standard deviations of the berth distance from the trestle for the final population of the 150_100_0.1 dataset for each optimisation method 116
Table 5.8 Ratio of options which dominate the Base Case in both objectives 118
Table 5.9 Genetic parameters for automated design 119
Table 5.10 Selected model output for the three options developed automatically (all units in m unless stated otherwise) 120
Table 6.1 Probability distributions used in the model and their parameters 127
Table 6.2 Similarity between two sets of independent samples for N = {500, 1000, 2500, 5000} using the Symmetric Blest Measure of Agreement (SBMA) 131
Table 6.3 Genetic operators for the uncertainty within optimisation dataset 133
Table 6.4 Silhouette width classification 136
Table 6.5 Decision variables of three medoid layouts generated through the uncertainty within optimisation approach 140
Table 6.6 Probabilistic values of total for the three layout options compared with the deterministic values generated in Section 5.8 145
Table B.1 All tables 165
Table B.2 Breakwater 166
Table B.3 Channel 167
Table B.4 Constraints 167
Table B.5 Cost 168
Table B.6 Extremes 169
Table B.7 Sediment 169
Table B.8 Mooring 169
Table B.9 Trestle 170
Table B.10 Vessel 170

ACRONYMS

The following abbreviations are used in this thesis:

LNGC Liquefied Natural Gas Carrier
ANN Artificial Neural Network
CAPEX Capital Expenditure
CDF Cumulative Distribution Function
CLASH Crest Level Assessment of Coastal Structures
COPRI Coasts, Oceans, Ports, and Rivers Institute
CoV Coefficient of Variance
EC Evolutionary Computing
FEED Front End Engineering Design
FORM First Order Reliability Method
FPSO Floating Production, Storage and Offloading
GA Genetic Algorithm
HAT Highest Astronomical Tide
HRCAT Hydraulics Research Wallingford Carbon Accounting Tool
LHS Latin Hypercube Sampling
LNG Liquefied Natural Gas
LNGSim Liquefied Natural Gas Simulation (software)
MC Monte Carlo
MOF Materials Offloading Facility
NSGA-II Non-dominated Sorting Genetic Algorithm II
OPEX Operational Expenditure
PAM Partitioning Around Medoids
PDF Probability Density Function
PIANC Permanent International Association of Navigation Congresses
PRCC Partial Rank Correlation Coefficient
pre-FEED pre-Front End Engineering Design
PROVERBS Probabilistic design tools for Vertical Breakwaters
RMS Root-Mean Squared
SORM Second Order Reliability Method
SWAN Simulating WAves Nearshore

NOTATION

The following symbols are used in this thesis:

A random wave breaking coefficient
As combined suspended and bedload sediment volume
B vessel beam, m
Bg crest width of secondary breakwater, m
BMOF crest width of MOF breakwater, m
C depth at which a caisson is automatically designed, m
CD drag coefficient
D channel depth, m
dn nth centile sediment diameter, m
Dn nth centile armour diameter, m
Dt vessel downtime
f wave frequency, Hz
fs multiplication factor for channel slope
h water depth, m
Hb depth limited wave height, m
hb limiting water depth, m
hMOF required depth of MOF channel, m
H′0 effective offshore wave height, m
Hs significant wave height, m
i interest rate, %
k wave number
kr refraction coefficient for a wave component
ksi shoaling coefficient using linear wave theory
L depth limited wavelength, m
L channel length, m
L0 non depth limited wavelength, m
Nw number of waves in a storm
O operability, %
Od overdredge depth, m
P permeability of armour coefficient
Q overtopping volume, m³/m/s
r roughness coefficient
Rc crest height of breakwater, m
Rc,s crest height of secondary breakwater, m
s relative density of sand
S vessel squat, m
sd damage number for breakwater rock
smax wave spreading parameter
t time, years
T vessel draught, m
Tm mean wave period, s
Tp peak wave period, s
U depth averaged current velocity, m/s
Urms root mean squared current velocity, m/s
ν kinematic viscosity of water, m²/s
V volume of concrete armour units, m³
Vcc cross current velocity, m/s
Vcw cross wind velocity, m/s
Vlc long current velocity, m/s
W calculated factor for wave diffraction
W channel width, m
WBG factor for 'green' side of channel
WL water level above datum, m
WBM basic manoeuvring lane factor
WBR factor for 'red' side of channel
Wi additional factor for wind and current
Zmax maximum additional vessel draught due to roll and pitch, m
α breakwater slope, °
β0 extreme wave height coefficient
β1 extreme wave height coefficient
βmax extreme wave height coefficient
βslope bedslope gradient, 1:βslope
∆ relative rock density
∆E energy in wave component
ξm Iribarren number
ρr density of rock, kg/m³
θd diffracted wave angle, °
θship wave angle relative to berthed vessel, °

Part I

INTRODUCTION

1 INTRODUCTION

1.1 background

Natural Gas is one of the most important sources of energy and in the past few decades the global gas market has grown considerably. This is largely due to the flexibility of the global gas market and the relative sustainability of gas compared to the other fossil fuels. According to EIA (1998), natural gas emits 71% and 56% of the carbon dioxide of oil and coal respectively per unit of energy, and contains significantly less nitrogen oxides (a primary ingredient of 'smog'), sulfur dioxide, other particulates and mercury. This means that not only are emissions less harmful, they also appear less harmful, which is important for public perception. Countries are utilising gas as a partial replacement for coal, with long term plans to increase the ratio, and this is driving business for natural gas exporting nations worldwide (Li et al., 2011). In many cases, the decision is being influenced by national and international legislation such as the Kyoto Protocol, under which party countries are aiming to reduce the emissions of four greenhouse gases (carbon dioxide, methane, nitrous oxide, sulphur hexafluoride) and two groups of gases (hydrofluorocarbons and perfluorocarbons) to 5% of the 1990 level by 2050 (UN, 1998).

When natural gas is super-cooled to its boiling point of −161°C in LNG trains1, it condenses into a liquid occupying 1/600th of its gaseous volume. As many gas reserves are remote from key markets and subsea pipelines are only economically viable up to around 700 km (CEE, 2007), LNG can be exported in dedicated shipping vessels known as LNG carriers (LNGC) and regasified and transferred into the importing nation's gas infrastructure. This is a relatively economic and highly flexible mode of transport, and it has facilitated the advent of a flexible global marketplace in which the majority of LNG sales are through spot contracts2 and in which any exporting nation can sell to any importing one. Advances in all stages of the LNG value chain have lowered the unit price from nearly $700 per tonne in the mid-90s to $400 per tonne in 2010, and this price is expected to drop to around $300 per tonne by 2030 (Kumar et al., 2011). According to BP (2016), LNG trade volumes more than doubled between 2005 and 2012; LNG now represents 31.4% of the gas market (10% of the total energy market) and by 2035 will represent 15% of the world's energy consumption.

1 A train is the onshore facility which uses various chemical and mechanical processes to reduce the gas temperature. Trains are often the most expensive component of an LNG project.

An LNG project is essentially a supply chain consisting of the drill rig, processing facility, transmission pipeline to shore, onshore liquefaction plant, export terminal and shipping fleet. HR Wallingford are involved with the design of the export terminal, which provides a safe environment for a vessel to load LNG from the onshore liquefaction plant. The terminal must fulfil multiple objectives which compete for optimality; a performance increase in one will inevitably reduce the performance in another. Typically, the terminal design must maximise LNG throughput whilst minimising capital and maintenance costs, environmental impacts and project risk; however, the metrics for comparison are often incompatible, meaning that engineering judgement features heavily in their assessment.

An LNG terminal typically includes the following: LNG vessel berth(s), primary and secondary breakwaters, access channel, trestle, materials offloading facility (MOF) and tug berths, and is separate from the onshore liquefaction/regasification plant. Figure 1.1 shows a typical arrangement and indicates the key components of the terminal; Table 1.1 provides a description of each.

The configuration of the LNG terminal is sensitive to the depth of water in which the berth is located. Many sites are situated on exposed coastlines where dredged approach channels and artificial breakwaters are routinely required to provide safe access and berthing conditions. The further offshore the terminal is, the longer the pipe trestle and the larger the breakwater cross-section, but the shorter the channel; the opposite holds for nearshore terminals. The channel costs increase exponentially when the terminal is located in shallow water and must be dredged frequently to remove deposited sediment.

2 The spot market is the global marketplace in which LNG is sold, so-called as contracts are typically of short duration, i.e. months or years, compared to traditional pipeline contracts which usually span 25-35 years.

Table 1.1: Description of LNG terminal components

Access channel: Provides access for the vessel to reach the berth from the limiting depth contour.

Basin: Enables tug boats to manoeuvre the vessel onto the berth safely.

Berth: Mooring structure for loading and unloading of product (LNG).

Limiting depth: The depth beyond which the vessel can travel safely without significant risk of grounding.

Onshore plant: Where the liquefaction/regasification and storage of LNG occurs (not part of the terminal).

Primary breakwater: Protects the basin and berthing area from wave action and sediment transport when attached to the shore.

Secondary breakwater: Provides additional protection of the berth and basin area, especially when longshore drift is an issue.

Trestle: Transports product between the plant and berth; the trestle carries cryogenic pipes.

Tie-in point: The point that connects the maritime terminal and onshore plant, usually given as a datum.


Figure 1.1: Typical LNG terminal and onshore plant

The cost of annual maintenance dredging can even exceed the initial capital cost of dredging. Furthermore, the greatest project uncertainties usually derive from predicting the infill rate of a dredged channel, which must be known to allow a maintenance schedule and hence an annual cost to be anticipated.

Developing layouts that fulfil the client's brief economically requires a fundamental understanding of a wide range of hydrodynamic and meteorological processes and their influence on design. Phenomena such as wave-wave interaction, the behaviour of breakwater armour units in extreme conditions or vessel response to wave energy transmitting through a breakwater core are difficult to predict using models of any kind (Hughes, 1993). Information is typically communicated through specialist reports and model output data files. This is a significant time bottleneck and can often mean that the designer still needs to interpret multiple sources of information and make assumptions regarding their combined impact in a typically qualitative manner.

Numerical models exist that are capable of simulating many physical characteristics including extreme waves, nearshore wave transformation and sediment infill; however, within HR Wallingford these models are distributed across different groups and often require specialists to set up and run the models and interpret the results. At a high level, the concept design of an LNG terminal can be represented by a fairly complex flow of data through several models; however, some models are 1D, others 2D; some are written in legacy code, others in Excel, and some are even web-based applications. This fragments the design process and means that human interaction is required at every step to act as the receptor and conduit for the data to become meaningful information.

Currently, to assess the combined influence of these models on the design, their summed impact is considered through an engineering judgement approach which is itself often qualitative and open to interpretation. As the models are run at different stages and by different people, no single person is able to gain a detailed overview of the design process and to understand which inputs are driving the costs, risks and uncertainties. This is clearly a limitation on the design process.

This thesis presents a specific methodology for developing and integrating a series of individual modules for the design of terminal infrastructure into a single decision support system for the development of LNG terminal concept designs. Through modern computing power, a robust Pareto-based multi-objective optimisation algorithm will be used to generate and evolve a set of designs toward optimality for the desired objectives whilst simultaneously reducing the overall uncertainty.

1.2 concept design

Creating and developing concept designs is arguably one of the most cognitively demanding activities undertaken by man (Savage et al., 1998) and has been said to be the most important stage of the design process (Rafiq et al., 2005). During the conceptual design stage, several solutions to the design problem are created and compared against one another to gain a better understanding of possible approaches and the associated costs. This stage is almost always undertaken by experienced designers who have the knowledge and position of responsibility to ensure that solutions develop in a productive manner (Shaw, Miles, and Gray, 2008) and it is often driven by common sense and intuition (Moore, Miles, and Rees, 1997). The process is usually iterative, where a solution is developed, tested, refined and re-tested until an acceptable solution has been achieved. This means that designers are often required to solve the same problem several times using different variables, which can be a time consuming process (Gero and Kazakov, 1998).

A typical project goes through several design stages from the initial feasibility study through to detailed design. This is shown in Figure 1.2, which also gives typical values for the percentage of design time and a description of the key processes in each design stage. During the Concept Generation and pre-FEED3 stages, many of the most influential decisions, such as defining the optimum berth location and whether a breakwater is required, are made. These stages, which will be referred to as 'Concept design', account for approximately 15% of the total design time (which could be several years); however, this is the stage where up to 80% of the project resources are committed (Kicinger, Arciszewski, and De-Jong, 2005; Miles, Sisk, and Moore, 2001). As the design evolves, the influence of decisions and the ease with which they can be implemented decrease whilst the cost of implementing change increases. The concept design stage is therefore highly influential on the subsequent design stages and time should be spent to ensure that the initial decisions provide a robust foundation for subsequent design stages. Focusing effort on these decisions is an important factor in developing economic concepts, and extensive testing through trial and error is seldom afforded due to time and financial pressure during early design stages. This is often intensified when client requirements are still developing and site-specific data is limited or not available (Rafiq and Sui, 2010). There are many uncertainties involved in a concept design, and rules of thumb and estimations tend to be conservative to allow for optimisation during subsequent design stages. Designs tend to evolve through an iterative cycle which includes evaluation, modification of the decisions and possibly recalculation of the design conditions. Through these iterations, the designer will start to make sense of the available data and develop an understanding of the project complexity and the key issues that will influence the design. Ideally, a number of well-defined concepts are developed and compared against key performance indicators. However, in practice, it is typical for only a small number of broadly-defined concepts to be considered, largely due to time and financial constraints. This is clearly a limitation on the design process as the wider range of potential solutions cannot be explored and quantitative metrics are usually not available to complement qualitative judgement.

3 Pre-Front End Engineering Design (pre-FEED) is a design stage used in oil and gas engineering which is of a slightly more refined nature than a concept design.

1.3 optimisation

Traditionally, one or more concepts are developed using past experience and engineering judgement. Due to the subjective nature of the process, new, novel and potentially better concepts may be dismissed without significant thought or even missed entirely. To enhance the exploration for concepts, evolutionary algorithms (EAs) may be employed. EAs use techniques including selection, mating, inheritance and random mutation to improve a population of solutions toward optima (Holland, 1975). Defining the objective function(s) is a critical part of the optimisation process. Most real-world problems are complex and have multiple competing objectives, which changes the notion of 'optimise' into 'find the best compromise' (Coello Coello, Aguirre, and Zitzler, 2007). Generally, more objectives lead to a more diverse population, as the number of trade-off solutions grows exponentially with the number of objectives. According to Cohon (1978), the three benefits of considering multiple objectives in the concept design stage are:

1. the identification of a wider range of alternatives;

2. assignment of appropriate roles for team members;

3. a more realistic representation of the problem.

Solutions of a multi-objective optimisation are considered Pareto optimal, i.e. they cannot be improved in any objective without compromise in at least one other (Reynoso-Meza et al., 2013). The link between Evolutionary Computing (EC) and the Pareto set has become well-established throughout decades of research (Auger et al., 2012), and the two are now practically synonymous within optimisation (Coello Coello, Veldhuizen, and Lamont, 2007). The Pareto front enables the decision maker to see and explore the trade-offs between competing objectives, utilising the EA as a decision support system rather than a decision-making system.
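To make the idea of Pareto dominance concrete, the sketch below is a minimal illustration in R (the language RaPoLa is implemented in); the function names and toy data are assumptions for illustration only and are not part of RaPoLa. It filters a set of candidate options down to its non-dominated front for two minimisation objectives:

    # Pareto dominance for minimisation: 'a' dominates 'b' if it is no
    # worse in every objective and strictly better in at least one.
    dominates <- function(a, b) all(a <= b) && any(a < b)

    # Keep only the options that no other option dominates (the Pareto front).
    pareto_front <- function(objs) {
      keep <- sapply(seq_len(nrow(objs)), function(i)
        !any(sapply(seq_len(nrow(objs))[-i],
                    function(j) dominates(objs[j, ], objs[i, ]))))
      objs[keep, , drop = FALSE]
    }

    # Toy example: 50 options scored on two competing objectives.
    set.seed(1)
    objs <- matrix(runif(100), ncol = 2,
                   dimnames = list(NULL, c("capex", "downtime")))
    front <- pareto_front(objs)

Every row of front is a 'best compromise' in the sense above; choosing among them remains a task for the decision maker, not the algorithm.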

Figure 1.2: Design stages for a typical design project

Multi-objective evolutionary algorithms have been shown to perform better than single objective evolutionary algorithms in some problems (Lochtefeld and Ciarallo, 2011), although in some cases a superior fitness can be achieved by combining the objectives into a single scalar objective weighted by the perceived importance of each objective (Jin, 2011). Where the design problem can be represented as a single objective, the EA should converge to a single solution. This is a limitation when performing concept design as many feasible, but not strictly optimal, solutions are not represented in the final population and the decision maker is forced to accept the single solution, which is a product of the analyst's initial decisions (Savic, 2002). This process is subjective, therefore introducing potential bias into the procedure, and there is no guarantee of the optimality of individual objectives; one or more objectives tend to dominate the optimisation process (Yaman and Lee, 2010).

To overcome these issues, researchers have used multi-objectivisation, where the single-objective problem is reformulated as a multi-objective one (Lochtefeld and Ciarallo, 2015). This has led to more competitive search mechanisms (Garza-Fabre, Toscano-Pulido, and Rodriguez-Tello, 2015) which can increase population diversity, escape from local optima and introduce new search spaces. In some cases, it can even produce superior results (Knowles, Watson, and Corne, 2001; Lochtefeld and Ciarallo, 2014). There are two main variations: decomposing the objective function into two or more objectives, and the addition of a second 'helper' objective such as the distance to the nearest neighbour (Tran, Brockhoff, and Derbel, 2013). Furthermore, the multi-objective problem can be formulated as a single-objective one by setting all but one objective as constraints (Savic, 2002); this can be repeated for each single objective, potentially with varying constraint thresholds, and the results combined into a single repository. However, this is a time-consuming activity.
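As an illustration of the 'helper' objective variant only, the sketch below (illustrative R under assumed toy definitions, not RaPoLa code) augments a scalar objective with the negative distance to the nearest neighbour in decision space, so that minimising both objectives rewards fitness and diversity simultaneously:

    # Toy scalar objective to be multi-objectivised.
    f <- function(x) sum((x - 0.5)^2)

    # Helper objective: negative nearest-neighbour distance, so that
    # minimising it pushes solutions apart in decision space.
    helper <- function(i, pop) {
      d <- apply(pop[-i, , drop = FALSE], 1,
                 function(p) sqrt(sum((p - pop[i, ])^2)))
      -min(d)
    }

    set.seed(2)
    pop  <- matrix(runif(40), ncol = 2)   # 20 two-variable candidates
    objs <- cbind(fitness   = apply(pop, 1, f),
                  diversity = sapply(seq_len(nrow(pop)), helper, pop = pop))
    # 'objs' can now be handed to any Pareto-based optimiser such as NSGA-II.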

1.4 risk and uncertainty

The application of uncertainty analysis in the field of coastal and maritime engineering has started to gain traction in the past decade, developing from the mature field of structural reliability engineering (Reeve, 2009). There are several key drivers, including an appreciation of the potential long-term effects of climate change, compounded by frequent flooding events; an understanding that a probabilistic approach can lead to more efficient structures; and clients who are willing to explore and understand the risks of various options.

Much of the published work in the engineering domain on uncertainty (and risk) is focused on the reliability and resilience of structural systems. This can be reduced in many cases to the probability that the load will surpass the resistance, i.e. S > R. There is comparatively less research on uncertainty analysis of cost estimates for large-scale maritime engineering projects. Cost estimates have an associated level of uncertainty, usually expressed in percentage terms relative to the stage in the development project, as shown in Table 1.2. This method of applying an uncertainty range based on the design stage, rather than on the option and the available data, does not enable the uncertainty (and risk) of varying options to be properly evaluated, nor the influence of uncertain inputs to be known.

Table 1.2: Potential uncertainty ranges for each stage of a project life-cycle

Feasibility 100%
Concept Generation 50%
pre-FEED 40%
FEED 25%
Detailed Design 15%
Construction 10%
Built 0%

The main output of a pre-FEED is often the cost estimate, toward which any preliminary design is a step. Emphasis lies on getting the big decisions correct, i.e. "is the terminal nearshore or offshore?", "is a breakwater required?" or "how long is the channel?". The impacts of these decisions are seen in the cost estimates and, typically, a client is interested in the concept that offers the lowest cost (and risk).

1.4.1 Overview of uncertainty in the maritime environment

It is assumed that all types of uncertainty can be classified as either epistemic, where we could know in principle but do not, due to modelling methods, lack of an adequate number of samples or assumptions that must be made (Hall and Lawry, 2003), or aleatoric, naturally random and out of our control, for instance future wave forecasts.

During a concept design, the key uncertainties are often those which drive the estimated final cost, which is often the key metric for comparison. Where this is the case, it is assumed that structural safety can be designed in during the FEED and detailed design stages, as the additional costs are relatively small in comparison to the estimated scheme costs. Considering model uncertainty as unavoidable uncertainty inflicted on all options using the same model, this focuses importance on the site specific data and cost data used to price each option.

There are many sources of uncertainty in a large-scale maritime project, deriving from many aspects, some physical, others of a more abstract and random nature. These include:

1. Lack of site specific data (epistemic).

2. Lack of region specific cost data (epistemic).

3. Lack of adequate modelling techniques (epistemic).

4. Data that is old, has missing values or has been processed by somebody else, potentially in a different company (epistemic).

5. Random uncertainty in predicting or extrapolating conditions (aleatoric).

6. Geopolitics (aleatoric).

1.4.2 Definition of Risk

There are various classifications of risk in engineering, borne largely from the field of structural reliability. Risk is the product of uncertainty and is often described in qualitative or quantitative terms such as 'high risk' or '95% confidence level'. Within the domain of this research project, risk is the level of uncertainty that exists in the capital and maintenance cost estimations due to the uncertainties that exist in the input variables.

1.5 aims and objectives

The aim of this research is to explore how the concept design stage of an LNG terminal can be enhanced through the development and application of a decision support system. The system will combine a wide range of engineering design methodologies for key components with optimisation and statistical inference techniques, facilitating the generation of options both as a manual extension of the traditional desk study and as an automatic layout optimiser.

A wide range of engineering disciplines will be integrated into a single repository that can perform the automatic design of key terminal infrastructure, including the breakwater and dredged channel, as well as the geospatial layout of the terminal infrastructure using bespoke automated parametric routines.

The role of multi-objectivisation will be explored in full, where the optimisation problem will be solved in the traditional manner using a Pareto-based multi-objective optimisation algorithm, as well as by reconstituting the multi-objective problem into dual and single objective problems, to explore how multi-objectivisation affects the respective final populations in terms of diversity and minimisation of the objective functions.

Finally, representing key inputs as continuous probability distributions rather than as discrete variables, and utilising Monte Carlo simulation with sensitivity analysis, will enable the range of financial risk on each option to be computed and the influence of each input on the overall risk to be evaluated.
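A minimal sketch of that workflow in R, with a toy cost model standing in for RaPoLa: the input names and distributions are illustrative assumptions, and a plain Spearman rank correlation stands in for the partial rank correlation coefficients (PRCC) used later in the thesis:

    # Propagate input uncertainty through a stand-in cost model and rank
    # the inputs by their influence on the total cost.
    set.seed(3)
    n <- 5000
    inputs <- data.frame(
      rock_cost   = rnorm(n, mean = 60, sd = 8),         # illustrative only
      dredge_rate = rlnorm(n, meanlog = 1, sdlog = 0.3),
      infill_rate = runif(n, 0.8, 1.6)
    )
    # Any deterministic model of the inputs could sit here in place of RaPoLa.
    total <- with(inputs, 1e4 * rock_cost + 2e3 * dredge_rate^2 + 5e3 * infill_rate)

    quantile(total, c(0.05, 0.50, 0.95))                 # range of financial risk
    sapply(inputs, cor, y = total, method = "spearman")  # influence of each input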

A single case study will be used for all chapters in Part iii. This provides a narrative throughout the thesis and enables the results generated in the preceding chapter to be reanalysed in the next with an evolved version of the design algorithm, demonstrating the key concepts of the research.

1.5.1 Objectives

1. Conduct a literature review on relevant topics including maritime engineering, machine learning and deterministic and probabilistic design.

2. Collate methodologies for the design of relevant maritime structures, integrate them into a single software framework that can be used deterministically, and use this to replicate a Base Case.

3. Develop an optimisation strategy that will enable the framework to be used as an automated design tool.

4. Solve the optimisation problem as a one, two and three objective problem to understand whether multi-objectivisation can produce better results.

5. Integrate uncertainty as a minimisation objective in the existing model for uncertainty within optimisation.

6. Apply uncertainty analysis to the options created during Chapter 5 for post-optimisation uncertainty analysis.

7. Compare the two uncertainty methods to determine whether there is a 'best approach' to this optimisation problem.

1.6 rapola

RaPoLa (HR Wallingford Rapid Port Layout) is an R package4 developed specifically as a decision support system for the conceptual design of maritime terminals as part of this research project. Its purpose is fourfold:

1. To provide a repository for common design calculations such as those for breakwater and channel design.

2. To provide a framework which integrates and automates the design calculations required for the conceptual design of a maritime terminal.

3. To allow automatic optimisation of concept design options.

4. To facilitate uncertainty and sensitivity analysis based on a range of inputs described as probability distributions.

It comprises many functions that facilitate the design, optimisation and analysis of LNG terminals, as well as visual representation of results, graphs and statistical analysis. The User Manual is included in the Portfolio. All results, analysis and graphs have been produced through functions that are part of the toolbox, which is included on the CD in the Portfolio. Additionally, all results that are discussed in this dissertation are included in the Portfolio CD for verification purposes.

4 R is a free-to-download scripting language well-suited to statistics, machine learning and data visualisation. It has gained popularity largely because researchers are able to develop packages and submit them to CRAN for others to download, also for free.


1.7 contributions to knowledge

The key contributions to knowledge that have arisen from this research are the journal papers, namely (Rustell, 2014b; Rustell et al., 2014c). Additionally, the automated design algorithm for LNG terminal layouts, which has been realised through the RaPoLa decision support tool, is new and has been presented at conferences through papers (Rustell, 2014a; Rustell et al., 2014a,b). A significant component of this is the automated layout algorithm that develops the terminal layout from the decision variables, selected manually or automatically. This is a new and novel algorithm. The framework integrates engineering, optimisation and uncertainty methodologies into a single framework for which there is currently little published literature. Additionally, multi-objectivisation is explored in detail, where the three objective problem is also solved in the one and two objective spaces. Multi-objectivisation is a relatively new area of research (approximately 15 years old) and there are few if any published works using real case studies, none of which are in the field of maritime engineering. Finally, two methods for uncertainty assessment and minimisation are compared with one another to develop a strategy for minimising risk during the concept stage of design.

1.8 scope

The main body of this dissertation is structured into five parts, where the first four are the main body of the dissertation and part five is the appendix. The parts are as follows.

• Part I - Introduction

Chapter 1 has provided background to the work, given an overview of the key aspects of LNG terminal design and discussed the aims and objectives of the research, which are to develop a decision support system for the design of an LNG terminal using evolutionary computing and risk analysis techniques.

Chapter 2 provides an in-depth discussion of the key schools of thought from which the project has developed. This includes a review of deterministic and probabilistic design methods including the Monte Carlo Method; linear and random wave theory; and single and multi-objective optimisation techniques.

• Part II - Mathematical Basis of the Decision Support System

Chapter 3 is an in-depth description of the engineering design algorithm which has been the primary focus throughout the research. It discusses the mathematical basis of each of the sub-algorithms and shows how they are integrated.

• Part III - Practical Application

Chapter 4 introduces the North Star case study and demonstrates how the Base Case design can be both replicated and improved using the decision support system.

Chapter 5 discusses the development of an optimisation strategy for the automatic generation of concept layouts. It focuses on the role of multi-objectivisation and compares a range of methods for solving the optimisation problem.

Chapter 6 describes the integration of uncertainty into the optimisation framework, where uncertainty within optimisation and post-optimisation uncertainty analysis are discussed as methods for assessing and minimising the uncertainty in layout options.

• Part IV - Conclusion

Chapter 7 summarises the key findings and outlines areas for future study.

• Part V - Appendices

Appendix A includes the full algorithm for calculating sediment infill to a dredged channel.

Appendix B demonstrates the range of data input files that RaPoLa uses.

2 LITERATURE REVIEW

2.1 introduction

As with many doctoral theses, this research does not fall into a single academic school of knowledge; rather, it pieces together components from a range of scientific fields to form a composite whole. This literature review focuses on the fields that have been most useful in providing a foundation of knowledge and a range of techniques that have enabled the research aims to be realised.

First, a range of design methodologies will be discussed from the perspective of deterministic and probabilistic design. Most design is deterministic in nature (typically based on empirical limits); however, the field of structural reliability has matured over the past 50 years and found its way into coastal and maritime engineering over the past couple of decades. The influence of probabilistic design methods on this project will be discussed in terms of their historical development and relative use in this project.

Second, the oceanographic topic of sea waves will be covered in detail, as a significant component of this research has been to develop a method for transforming offshore waves to the proposed LNG terminal, emulating the transformation processes of shoaling, refraction, diffraction and finally breaking. The review will focus on linear wave theory initially, then move into random wave theory.

Finally, optimisation will be covered in depth, with particular emphasis on the Genetic Algorithm, although the review also covers other relevant computing areas such as Neural Networks and root-finders. Optimisation has featured heavily in this research project and this section should provide the reader with an overview of the field.

Marine engineering is also a focal topic of this thesis and will be discussed in Chapter 3, where the methodology developed during this research will be presented.


2.2 design methodologies

Approaches to dealing with uncertainty can be classified into four levels (Reeve, 2009):

• Level 0: Traditional deterministic design methods, often developed empirically, which form the basis of many design codes of practice and in which a level of uncertainty is 'built in'.

• Level 1: Quasi-probabilistic methods where factors of safety are applied to various equation components to account for variable uncertainty. Examples are the Spanish ROM and the Eurocodes, which take a quasi-probabilistic approach to design.

• Level 2: Probabilistic methods that approximate the probability of failure using the reliability index β, such as the first- and second-order reliability methods (FORM/SORM).

• Level 3: Complex probabilistic methods in which the reliability function cannot be solved analytically and is instead approximated, often through Monte Carlo methods.

The following sections discuss each level in turn.
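For the canonical load/resistance case with independent normal load S and resistance R, Levels 2 and 3 can be illustrated side by side. The sketch below, in R with made-up moments, computes the Level 2 reliability index β = (µR − µS)/√(σR² + σS²) and its failure probability Φ(−β), then approximates the same probability at Level 3 by crude Monte Carlo sampling:

    # Illustrative moments for load S and resistance R (independent, normal).
    mu_S <- 100; sd_S <- 20
    mu_R <- 150; sd_R <- 25

    # Level 2: reliability index and the corresponding failure probability.
    beta <- (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2)
    pf_form <- pnorm(-beta)   # exact for this linear, normal case

    # Level 3: Monte Carlo estimate of P(S > R).
    set.seed(4)
    n <- 1e6
    pf_mc <- mean(rnorm(n, mu_S, sd_S) > rnorm(n, mu_R, sd_R))

The two estimates agree closely here because the limit state is linear and normal; Level 3 methods earn their computational cost when the limit state is not analytically tractable.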

2.2.1 Level 0 Design Methods

Deterministic design is usually the first approach a designer attempts when performing the preliminary sizing of structures. Calculations are usually derived from observed laws and empirical testing and are typically conservative. Deterministic calculations are intended to be quick and easy to apply. In building codes, factors of safety are used to deal with the uncertainties in potential loads and structural response; however, in many fields of maritime engineering the calculations themselves have factors of safety built into them and often do not require additional factors of safety, only the correct prescription of the 'design conditions' (which, incidentally, are probabilistic). The most important aspect of the deterministic design process is correctly defining the loads and response. Burcharth (2000) states that there is a lack of generally accepted safety and risk levels in the design of coastal structures and no definitive design standards, except perhaps the Spanish ROM 0.00 guidelines (ROM, 2002), which specify safety levels based on economic importance and functional requirements; however, these standards are generally not used outside of Spain.

Guidance on the deterministic design of breakwaters can be found in The Rock Manual (CIRIA and CUR, 2007), Analysis of Rubble Mound Breakwaters (WG12, 1992), Maritime Structures Part 7: Guide to the Design and Construction of Breakwaters (BSI, 1994) and Harbour Approach Channels Design Guidelines (WG121, 2014). These guides offer designers a means to safely design maritime structures using industry-standard methods.

Deterministic design has been criticised for overestimating design requirements (Burcharth, 2000; Burcharth and Sorensen, 2005; Sexsmith, 1999). These criticisms are not without merit: deterministic design does not account well for exceptions to the rule or for low-frequency, high-consequence events, and often results in structures that are over-designed (Kim, Kim, and Chang, 2008). Deterministic design is therefore often used as a precursor to probabilistic design, especially when large sums of money are involved.

2.2.2 Level 1 Design Methods

Design manuals such as the European PROVERBS (2005) and the Spanish ROM (2002) are based on the probabilistic approach and are being used successfully, although the deterministic approach is still considered to be the industry standard and engineers are reluctant to abandon it without better guidelines or regulations. Sexsmith (1999) argues that the probabilistic method is the most appropriate way to deal with the uncertainties inherent in design projects, though for this to become practical the method of applying the factors must be standardised so that subjectivity is removed. Burcharth has been a major figure in the development of the partial safety factor system for rubble mound breakwaters (Burcharth, 1991, 1992) and was the lead author of WG12 (1992), in which partial safety factors linked to stochastic variables, rather than safety factors applied to the entire design equation, are proposed. He argues that this offers higher accuracy than conventional probabilistic or deterministic approaches, as partial safety factors are better able to account for both conventional and unconventional structures. In a later report, Burcharth (2000, pp. 7) states that some national and international codes such as the Eurocodes are starting to make use of the partial safety factor system, although the coefficients are "tuned to reflect the historically accepted safety of conventional designs and are organised in broad safety classes for which the actual safety level is unknown" and are not suitable for the design of maritime structures such as breakwaters, for which "no generally accepted designs and safety levels exist". However, even if the designer does use a partial safety factor system, they are still faced with selecting the most appropriate safety level, and this requires expert judgement: a poorly specified safety level can lead to over- or under-designed structures, which may have negative financial or environmental impacts later in the structure's life.

2.2.3 Level 2 Design Methods

Probabilistic design has been under development for around 60 years. In probabilistic design, the variables are represented as probability distributions rather than static values. The probability that a structure will fail can be calculated as the relative area in which the load (blue) and resistance (red) probability density functions (PDFs) overlap, as shown in Figure 2.1. In some cases this can be solved through integration; however, in others, especially where there are several inputs, techniques such as Monte Carlo simulation (Level 3) are required to produce an output PDF from which the designer can select the level of confidence they are satisfied with.
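For the special case of an independent, normally distributed load and resistance, the overlap can be evaluated in closed form through the reliability index β, since the safety margin M = R − S is itself normal. The sketch below is a minimal illustration of this Level 2 calculation; the numerical values are arbitrary assumptions, not design inputs.

```python
from math import sqrt
from statistics import NormalDist

def failure_probability(mu_R, sd_R, mu_S, sd_S):
    """Probability that load S exceeds resistance R for independent
    normal variables: M = R - S is normal, so Pf = P(M < 0) = Phi(-beta),
    where beta is the reliability index."""
    beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
    return NormalDist().cdf(-beta), beta

# arbitrary illustrative values, not taken from any design code
Pf, beta = failure_probability(mu_R=10.0, sd_R=1.5, mu_S=6.0, sd_S=1.0)
print(f"beta = {beta:.2f}, Pf = {Pf:.2e}")   # beta ~ 2.22, Pf ~ 1.3e-2
```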

Probability-based design methods can often produce more economical designs, which are more fit for purpose than those from the deterministic approach, as the designer is able to exercise judgement on the most appropriate level of safety (Sexsmith, 1999). Modification of the level of safety can have large impacts on the whole-life cost of a structure without greatly increasing the level of risk faced by the user. Arends et al. (2005) demonstrated this for the probabilistic design of a large tunnel. Burcharth and Sorensen (2005) discovered that the optimum safety levels to ensure long-term financial minimisation are actually higher than those used in conventional deterministic design where cost is the main objective, especially when interest rates are low. They also state that with higher interest rates the optimal safety levels decrease and it becomes more economically attractive to design for more frequent repairs, though whether this is more sustainable is another question entirely. A key argument was that cost-optimised designs are not necessarily the most cost effective in the long term and that safety-optimised designs may be marginally more expensive but offer much better protection and less political and financial risk. Perhaps this indicates that the economy-versus-risk dynamic is influenced by whether the structure is on land or not.

Figure 2.1: Design load and resistance PDFs, where the common area between the load (blue) and resistance (red) represents the probability of failure

Probabilistic methods often use modification of deterministic equations with partial coefficients to develop the correct level of safety. First/Second Order Reliability Methods (FORM/SORM) are an example of this, where the target reliability is used to derive partial coefficients for primary and secondary loads and responses. These methods have been successfully applied to many structural engineering problems, though their application in coastal engineering has not been as widespread, as they assume a linear performance relationship which does not hold in many hydraulic problems (Melching, 1995). Minguez and Castillo (2009) did, however, successfully use FORM for the design of a rubble mound breakwater, optimising the whole-life cost while considering a range of failure modes.


2.2.4 Level 3 Design Methods

Monte Carlo simulation (MC) is a more common, easier to implement and often more accurate method of dealing with uncertainty (Di Sciuva and Lomario, 2003). MC relies on the logic that if enough model simulations using variables randomly sampled from their corresponding probability distributions are undertaken, then the error in the results will decrease in proportion to the inverse square root of the number of runs (Shrestha, 2009). A useful aspect of MC is that the random variables can be used with deterministic calculations. MC does, however, suffer when simulating joint probability variables and can be computationally expensive (Kuczera and Parent, 1998). MC has been applied to many coastal engineering problems, including breakwater failure probability (Kim and Park, 2005), calculating rubble mound breakwater overtopping discharges (Geeraerts et al., 2009), calculating the infill rate of a dredged channel (Bakker, 2010; Bakker, Winterwerp, and Zuidgeest, 2010), caisson stability (Lee et al., 2012) and berm breakwater reliability (Torum et al., 2012). MC is also used in HR Wallingford's LNGSim software to help optimise LNG terminals by simulating the arrival and departure times of vessels. It is possible that layouts derived from this research project will be tested in LNGSim as part of the design cycle, although that is not a focus of the research.

There are several techniques for sampling from multiple probability distributions, and these are collectively referred to as Monte Carlo (MC) simulation, named after the famous casino in Monaco. The most primitive method is random sampling, where each variable m ∈ M is randomly sampled N times and propagated through the model to approximate the empirical distribution function. Although random sampling is fast and easy to implement, the number of samples required for full coverage of the input variable distributions grows exponentially with the number of inputs (in the order of N^M, assuming that N is sufficient to approximate the empirical distribution function when M = 1), so many simulations can be required to accurately approximate the output empirical distribution function.
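As a concrete illustration, the sketch below implements plain random-sampling MC: values drawn from each input distribution are propagated through a deterministic model, and the outputs form an empirical distribution from which percentiles can be read. The two-variable toy model and its normal input distributions are assumptions for illustration only.

```python
import numpy as np

def monte_carlo(model, dists, n_runs=10_000, seed=0):
    """Random-sampling Monte Carlo: draw n_runs values from each input
    distribution and propagate them through the deterministic model."""
    rng = np.random.default_rng(seed)
    samples = [draw(rng, n_runs) for draw in dists]
    return np.array([model(*xs) for xs in zip(*samples)])

# toy model with two uncertain inputs (hypothetical Hs and Tp)
out = monte_carlo(
    lambda Hs, Tp: Hs**2 * Tp,
    [lambda rng, n: rng.normal(2.0, 0.3, n),   # Hs [m]
     lambda rng, n: rng.normal(9.0, 1.0, n)],  # Tp [s]
)
print(np.percentile(out, 95))   # read off the 95% confidence level
```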

2.2.4.1 Latin Hypercube sampling

Latin hypercube sampling (LHS) (McKay, Beckman, and Conover, 1979) is another MC method suitable for sampling from a multi-dimensional distribution. LHS extends the concept of the Latin square (a K × K grid containing K observations, each appearing exactly once in every row and column) into a hypercube of arbitrary dimension. In a Latin hypercube, each dimension of the multi-dimensional distribution is partitioned into M divisions and the total number of possible sample arrangements N is given by Equation 2.1, which demonstrates an exponential relationship.¹

N = (M!)^{K-1}    (2.1)

Although, similarly to traditional sampling, the number of samples required for coverage is a product of the exponent K, the samples are spread evenly throughout the hyperplane, enabling a better approximation of the empirical distribution function to be made with relatively few samples (Cheng and Druzdzel, 2000). This is demonstrated in Figure 2.2, which compares random sampling with LHS for K = 2 and N = 100, where a comparatively even coverage of the surface is achieved using LHS.

1 Big O notation is used to classify algorithms by how they respond to changes in input size. It is often used to determine the number of 'steps' an algorithm takes to solve a problem, enabling algorithm efficiencies to be compared irrespective of the power of the computer on which the algorithm is run.

Figure 2.2: Randomly sampled variables vs. Latin hypercube samples (left: Random Uniform; right: LH Sampling; both axes span 0.0 to 1.0)
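A minimal Latin hypercube sampler can be written directly with NumPy by splitting each dimension into N equal strata, drawing one point per stratum and shuffling the strata independently per dimension; the sketch below reproduces the setting of Figure 2.2 (K = 2, N = 100) and is an illustrative implementation, not the one used in this project.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin hypercube sample on the unit hypercube [0, 1)^n_dims:
    one uniform draw inside each of n_samples strata per dimension,
    with the strata shuffled independently for every dimension."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        samples[:, d] = rng.permutation(strata)
    return samples

lhs_points = latin_hypercube(100, 2, seed=42)            # as in Figure 2.2
random_points = np.random.default_rng(42).random((100, 2))
```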

2.2.5 Sensitivity Analysis

In many cases, knowing the effect that the uncertainty in specific inputs has on the outputs is desirable and necessary. A key part of understanding uncertainty is knowing which inputs produce the largest uncertainties. This can be achieved through sensitivity analysis, where inputs are systematically modified and the results collated to understand the effect the inputs are having on the outputs. Castillo et al. (2006) used sensitivity analysis to optimise the whole-life cost of a composite breakwater. The role of sensitivity analysis in optimisation and reliability problems has been explored by Castillo, Mínguez, and Castillo (2008), who used the technique to optimise and improve the reliability of a composite breakwater as well as other structures; this was then developed to include decomposition techniques and FORM (Minguez and Castillo, 2009). More recently, Koç and Balas (2013) performed a reliability analysis on a rubble mound breakwater using fuzzy random variables. Sensitivity analysis can be used in an optimisation context to search for robust solutions which are more resilient to the inherent uncertainties.
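The simplest form of this is one-at-a-time sensitivity analysis, sketched below: each input is perturbed in turn while the others are held at their base values, and the normalised output changes indicate which inputs drive the largest uncertainties. The toy model is an assumption for illustration only.

```python
def one_at_a_time(model, base, step=0.1):
    """Perturb each input of `base` by +/- step (as a fraction) while
    holding the others fixed; return the normalised output change."""
    f0 = model(**base)
    effects = {}
    for name, value in base.items():
        hi = model(**{**base, name: value * (1 + step)})
        lo = model(**{**base, name: value * (1 - step)})
        effects[name] = (hi - lo) / (2 * step * f0)
    return effects

# rank which input a toy response depends on most strongly
print(one_at_a_time(lambda Hs, T: Hs**2 * T, {"Hs": 2.0, "T": 9.0}))
# -> {'Hs': 2.0, 'T': 1.0}: the quadratic Hs term dominates
```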

2.2.6 Summary

There are a multitude of ways a designer can approach a project. The two main classifications of design methodology are deterministic and probabilistic, and both play a part in the development of design concepts. In this project both methods will be used; their application is discussed in the following sections.

2.3 sea waves

When designing a maritime structure, there are a range of environmen-

tal loads which must be taken into account. Winds, waves, currents,

tides and seismicity act on the structure on a practically continuous

cycle. Of these, waves exert the greatest influence. It is the nature of

sea waves which differentiates the design of maritime structures from

designing structures on land (Goda, 2010b). Although waves appear to

be regular in form as they travel to and break on the shore, they are

actually highly complex and random natural phenomena of which our

understanding is limited. Most waves are generated from wind, either

locally or from across the ocean. While offshore in deep water, waves exhibit different characteristics to those seen as they travel into shallower water: in deep water, waves are free to travel uninhibited by the bathymetry below; when they enter shallow water they begin to interact with the bottom, and a range of physical processes occur, changing practically every characteristic of the deep-water wave.

In most design cases, data will not exist where the engineering work

is proposed and wind or wave data is often only available from deep

water survey buoys. This means that the processes of wave refraction,

shoaling and diffraction must be simulated and the conditions which

cause waves to break must also be estimated in order to obtain design

wave characteristics at the proposed location. There are several methods of doing this: approximations can be made from both the classical and more recent wave theories; hindcasting models can be used to develop waves from wind data; or numerical wave transformation models can be applied (Pullen et al., 2007). The maritime engineer

must be capable of selecting the correct techniques in order to estimate

the wave properties to facilitate the safe design of any maritime struc-

ture.

Until the late 1940s, scientific understanding of waves was based

on the classical theories of the 19th century which assumed that waves

were monochromatic in nature, travelling in a single direction at a sin-


gle frequency. In reality, sea waves are random in nature, composed

of an infinite spectrum of smaller waves of varying height, frequency

and direction focused around peak values (Goda, 2010b). Nowadays,

the random waves concept is used extensively in all areas of coastal en-

gineering design, featuring in design manuals across the world (Goda,

2008). The first researchers to report on the random nature of waves were Sverdrup and Munk (1947), who introduced the concept of the significant wave, the average height of the highest one-third of waves in a group, together with its corresponding significant period, leading to the development of the Sverdrup-Munk-Bretschneider (S-M-B) method of wave forecasting by Bretschneider (1952, 1958). According to WG121 (2014), the significant wave height is arguably the most important design variable for maritime structures, although designing a structure to resist a single wave represented by the significant wave can lead to structural failure during storm conditions. This is understood by maritime engineers and is implicit in design formulae such as van der Meer's stability formulae for breakwaters, which include the variable Nw for the number of waves expected in a storm, and in the design of vertical breakwaters and offshore structures, which use the concept of Hmax ≈ 1.8Hs to 2.0Hs (or higher) (Goda, 2010b).

Pierson (1950) posed the random nature of ocean waves as a fun-

damental design concept which led to the development of the Pierson-Neumann-James (P-N-J) method of wave forecasting (Pierson, Neumann,

and James, 1955). This introduced the concept of the directional wave

spectrum that describes how wave energy is dispersed through the full

frequency and directional range.

The spectral approach of the P-N-J method was combined with the significant wave concept of the S-M-B method by Bretschneider (1968), thus creating the Bretschneider Spectrum. The Bretschneider Spectrum was successful in that it offered a qualitative basis of wave description based on the significant wave. Its current form is that modified by Mitsuyasu et al. (1975), who integrated it with the statistical representation of random waves to create the Bretschneider-Mitsuyasu Spectrum, which is used by engineers to describe the spectral distribution of wave energy:

S(f) = 0.20\, H_s^2\, T_p^{-4}\, f^{-5} \exp\left[ -0.75 (T_p f)^{-4} \right]    (2.2)
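Equation 2.2 is straightforward to evaluate numerically, as in the sketch below; the Hs and Tp values are arbitrary assumptions for illustration.

```python
import numpy as np

def bretschneider_mitsuyasu(f, Hs, Tp):
    """Spectral density S(f) of Equation 2.2; Hs in metres, Tp in
    seconds, f in Hz."""
    return 0.20 * Hs**2 * Tp**-4 * f**-5.0 * np.exp(-0.75 * (Tp * f) ** -4)

f = np.linspace(0.03, 0.4, 500)              # frequency range [Hz]
S = bretschneider_mitsuyasu(f, Hs=2.5, Tp=9.0)
print(f[S.argmax()])                         # peak frequency, of the order of 1/Tp
```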


The Bretschneider-Mitsuyasu Spectrum is a well-established method of representing the spectral characteristics of random waves, although it is only applicable to fully developed seas. For waves in developing seas, the JONSWAP spectrum (Hasselmann et al., 1973) should be used, as developing waves exhibit sharper spectral peaks, which can be represented through the peak enhancement factor. In the case of an LNG terminal, the Bretschneider-Mitsuyasu Spectrum is applicable in most circumstances, as the locations are often exposed coastlines and hence subject to waves from fully developed seas; however, it is up to the designer to select the most relevant theory.

Wave crest heights and the dispersion of energy are also affected by the wind: higher wind velocities cause greater spreading of wave energy. Mitsuyasu (1970) developed the notion of the spreading function, relating wind velocity to the dispersion of wave energy, and showed that increasing wind velocities cause greater spreading of wave energy, reducing the spectral peak intensity and reducing the wave height. Goda (1975b) expressed the spreading parameter s in terms of the peak spreading value smax, showed that sea states could be described in terms of smax, and used it to simulate the refraction and diffraction of random waves.

2.3.1 Summary

All of the models discussed play an important role in the development of an LNG terminal; however, for the application of this project, 1D models are the most appropriate. 1D models can be developed into code quickly, validated by senior engineers who understand the processes, and run quickly in an optimisation framework, as well as being used in probabilistic design.

2.4 optimisation

During the conceptual design stage, the designer is often aiming to dis-

cover the optimal solution to a problem, though there are few problems

to which there is a single solution (Rafiq et al., 2005). Finding optimal

solutions is a complex task and requires the consideration of a wide range of factors that are interrelated and interdependent in nature. The objective of optimisation is to find the best solution to a problem within a predetermined set of constraints. Optimisation is used extensively in many fields, especially those involving computers, where optimisation methods are implemented as algorithms. There are a wide range of algorithms available, some of which are discussed in the subsequent sections.

2.4.1 Root-finding algorithms

Root-finding algorithms are a class of optimisers used to solve f(x) = 0. The first is thought to be the Secant Method (c. 2300 BC), which approximates f(x) with a succession of secant lines. The well-known Newton-Raphson Method uses the derivative of f(x_n) to approximate f(x_{n+1}) = 0 and, although fast, occasionally fails to converge (Londhe and Deo, 2003). Failure modes include a poor initial estimate, the iteration entering a cycle, or there being no derivative at the root, for instance f(x) = x^{1/3}. The Bisection Method brackets the root between a positive estimate (a) and a negative estimate (b) of f(x) = 0, halving the interval at each step to obtain the next approximation. Though robust, the Bisection Method is often slow to converge, especially in comparison to the Newton-Raphson Method. Another important method is the Inverse Quadratic Interpolation algorithm, which converges quickly when the current approximation is close to the actual root, though slowly in other circumstances. In response to these shortcomings, Brent's Method (Brent, 1973) was developed, which uses either the Bisection, Secant or Inverse Quadratic Interpolation method depending on its suitability for the current iteration. It has the robustness of the Bisection Method and the speed of the Newton-Raphson Method.
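In Python, Brent's method is available as scipy.optimize.brentq, which only requires an interval [a, b] over which the function changes sign; the cubic below is an arbitrary example, not one of the design equations.

```python
from scipy.optimize import brentq

f = lambda x: x**3 - 2 * x - 5     # f(1) < 0 and f(3) > 0 bracket the root
root = brentq(f, a=1.0, b=3.0)     # Brent (1973); converges to ~2.0946
print(root)
```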

2.4.2 Model Trees

A model tree fits a linear regression model to every leaf of the regression tree, so that every branch in the node is also associated with a regression model. Model trees are discussed as a possible method and, although Gouldby (2007, p 5) states that there has been "very little application history", there have recently been successful applications of model trees within hydraulics. A study conducted by Etemad-Shahidi and Bonakdar (2009) applied the M5 model tree to the prediction of armour stone sizes to be used on rubble mound breakwaters. The model tree was based on the equations derived by Meer (1988), and the results showed that the M5 model tree was able to predict the optimum armour stone size with a greater degree of accuracy than the van der Meer equations, as well as being able to define simple, transparent and meaningful relationships and rules with little computational power. The model was also used to predict the result of the water depth parameter (h/Hs), which produced results that were considered to be more accurate than the deterministic approach. The authors concluded that using a purely deterministic approach to breakwater design may actually be unsafe, as the model tree gives more conservative results for the damage level. This opinion is also shared by Kim and Park (2005). Etemad-Shahidi and Bali (2012) also used the M5 model tree to predict wave run-up on rubble mound breakwaters and were able to produce new equations that they state are more accurate than those put forth by Hudson (1958) or Meer (1988).

2.4.3 Artificial Neural Networks

The concept of the Artificial Neural Network (ANN) was first published by McCulloch and Pitts (1943), although at the time computers were still theoretical constructs. It wasn't until the 1950s that computers were developed and computational ANNs were implemented by Farley and Clark (1954) and Rochester et al. (1956). Since then, there have been many applications of ANNs, especially in the past couple of decades as computing power has increased exponentially. ANNs are often able to develop relationships between inputs and outputs when the underlying processes are either unknown or too complex to replicate (Basheer and Hajmeer, 2000). Although ANNs are able to learn patterns from datasets, they operate akin to a "black box", whereby an input layer of nodes passes data to a hidden layer of nodes, which processes this data and produces an output; the method the ANN uses to arrive at a solution is therefore unknown. ANNs typically arrive at solutions with a high degree of accuracy and often outperform other modelling processes (Gouldby, 2007). They do, however, often require large, varied and comprehensive datasets in order to predict accurately over a wide range of input variables. Figure 2.3 shows the structure of an ANN: input nodes feed into hidden nodes, which feed into the output nodes. The hidden nodes use processes such as least squares or gradient search algorithms to calculate their weights during the training stage, so that after sufficient training has been undertaken they should be capable of accurately predicting outputs given inputs.

Figure 2.3: Neural Network Diagram. Source: (Gouldby, 2007)

ANNs can be automated so that limited user involvement is required, and they have fast computation times. They are often used as meta-models to predict computationally expensive functions within optimisation methods, and this is a well-established field of research in its own right. ANNs typically use gradient-based methods to develop weightings for the nodes. The flexibility of the ANN has ensured that it has been applied successfully in almost every scientific field. Gouldby (2007, p 7) gives a critical overview of the potential role of ANNs in solving hydraulic engineering problems at HR Wallingford. In the review, ANNs are identified as "well suited to modelling complex, nonlinear systems where underlying relationships are unknown or difficult to describe, but where observed data are abundant". Their application in hydraulic engineering has been widespread, ranging from predicting wave transformation characteristics (Altunkaynak and Ozger, 2004; Browne et al., 2007); predicting overtopping discharges of waves on coastal structures (De Rouck, Verhaeghe, and Geeraerts, 2009; Geeraerts et al., 2007; Pullen et al., 2007; Van Gent, Smale, and Knuiper, 2007); preliminary design of breakwaters (Balas and Tur, 2010); simulating physical model tests for breakwater armour stability (Iglesias et al., 2008); designing rubble mound breakwaters using rock (Kim and Park, 2005) and concrete (Kim, Kim, and Chang, 2008); wave transmission through a low-crested breakwater (Panizzo and Briganti, 2007); to predicting sedimentation in harbours (Bhattacharya and Solomatine, 2006; Yang and Rosenbaum, 2001).

A neural network could potentially be used as a surrogate model for the dynamic mooring assessment of a berthed vessel. At present, this is the least sophisticated model in the framework and is essentially its "Achilles heel". Simoes, Tiquilloca, and Morishita (2002) trained an ANN to predict the mooring line forces of a floating production, storage and offloading (FPSO) unit. They found that the ANN was able to predict the mooring forces due to current, wind and second-order wave forces to within 5% accuracy compared to the baseline study. This could act as a starting point for training an ANN to predict the mooring forces of a berthed LNG vessel.
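A minimal sketch of such a surrogate, using scikit-learn's MLPRegressor on stand-in data, is given below; the input variables, dataset and network size are hypothetical assumptions, not a trained mooring model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# stand-in inputs (e.g. Hs, Tp, wind speed) and a synthetic response,
# used here only in place of runs of an expensive mooring simulation
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = 50 * X[:, 0] ** 2 + 10 * X[:, 1] + 5 * X[:, 2]

surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
surrogate.fit(X[:400], y[:400])              # train on 400 samples
print(surrogate.score(X[400:], y[400:]))     # R^2 on held-out samples
```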

2.4.4 Genetic Programming

Genetic programming (GP) has been considered as an alternative to the ANN and operates in a similar manner, though it is "generally more transparent and easier to interpret than a set of ANN weights" (Gouldby, 2007, p 11). A shortfall is that, in comparison to ANNs, GP has little application history and is therefore not as well established, although there have been successful hydraulic applications, such as Sreekanth and Datta (2010), who used genetic programming to manage coastal aquifers.

2.4.5 The Genetic Algorithm

Genetic Algorithms (GAs) have been used extensively to solve global optimisation problems since the technique was first published by Holland (1975). Researchers such as Goldberg and Coello Coello have been major forces in the application of GAs to real-world problems since the late 1980s and have both published important books on the topic (Coello Coello, 2002; Goldberg, 1989). A GA is a stochastic search technique based on Darwin's theory of natural selection, often referred to as survival of the fittest. The first generation of a population of solutions (chromosomes) is randomly generated and ranked according to how well each achieves the objective function fitness; the fittest are then passed to the next generation, where they are used as the parents from which the offspring inherit traits of each parent in varying ratios in each iteration. Through this evolutionary process, in which traits are both inherited and random, large search spaces can be covered whilst the solution pool becomes more optimal with each iteration.

GAs have found their place in global optimisation problems where it is possible to find optimal or near-optimal solutions (Rafiq et al., 2005) and, generally speaking, GAs outperform other heuristic methods at finding optimal solutions (Wang et al., 2005). GAs are well suited to engineering problems, which are often multi-dimensional in nature. Due to this, GAs have been used in a wide range of problems, most often in the conceptual design stage (Rafiq, Beck, and Hughes, 2008) and particularly in structural engineering and factory layout problems, where system constraints can be easily defined. Figure 2.4 shows a schematic representation of this process. Well-known models include the Multi-Objective Genetic Algorithm (Fonseca and Fleming, 1993), the Non-dominated Sorting Genetic Algorithm (NSGA) (Srinivas and Deb, 1994) and the Multi-objective Evolutionary Algorithm (MEA) (Sarker, Liang, and Newton, 2002). One of the most widely used algorithms is the NSGA-II (Fast Non-dominated Sorting Genetic Algorithm) (Deb et al., 2002), due to its robust performance and the spacing of its solutions on the Pareto front. Generally, genetic algorithms are paired with the Pareto front, on which no option can be improved in one objective without compromise in another; this is often visualised as a trade-off curve, surface or hypercube.

The chromosomes are often represented in binary form, although Gouldby (2007, p 15) states that "it is generally better to encode numerical variables directly as integers or real values". The process of selection and offspring generation iterates until either the solutions converge on what is statistically determined to be the optimum or a maximum number of generations has been reached. By selecting the fittest chromosomes from the solution pool and passing these on to the next generation, optimal results can be found for almost any problem; however, Nimtawat and Nanakorn (2009) note that the accuracy of the solutions is directly related to how well the solution is mathematically represented.

Figure 2.4: Genetic algorithm schematic diagram

In order to ensure that the process is not entirely deterministic (which would mean that multiple runs would arrive at the same solutions), features such as mutation and crossover are utilised. Mutation is when a specified percentage of genes are randomly altered, introducing diversity into a population. The mutation probability is typically kept low, reflecting the rarity of mutation in nature (i.e. less than 0.1 (Elbeltagi, Hegazy, and Grierson, 2005)); however, it is possible that higher mutation probabilities may lead to a more optimal population. Crossover is when a portion of each parent chromosome is given to an offspring chromosome; in contrast to mutation, it is a common process (as in nature) and is therefore given a relatively high probability, in the range of 0.6 to 1.0 (Elbeltagi, Hegazy, and Grierson, 2005).
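A minimal sketch of the two operators for a real-coded chromosome is given below; the probabilities follow the ranges quoted above, while the Gaussian perturbation used for mutation is an illustrative assumption.

```python
import random

def crossover(parent_a, parent_b, p_cross=0.8):
    """Single-point crossover: with probability p_cross, splice the
    head of one parent onto the tail of the other."""
    if random.random() < p_cross and len(parent_a) > 1:
        cut = random.randrange(1, len(parent_a))
        return parent_a[:cut] + parent_b[cut:]
    return parent_a[:]

def mutate(chromosome, p_mut=0.05, scale=0.1):
    """Gaussian mutation: perturb each gene with a small probability
    p_mut to maintain diversity in the population."""
    return [g + random.gauss(0.0, scale) if random.random() < p_mut else g
            for g in chromosome]

child = mutate(crossover([4.0, 1.2, 7.5], [3.6, 1.4, 8.1]))
```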

The GA has many attributes that can be successfully exploited in optimisation problems. A summary of the advantages and disadvantages by Gouldby (2007) is paraphrased below:

Advantages

• The GA is able to find solutions for any objective function² as long as it can be represented numerically, and can even find optimal solutions to problems where little information is available; it is therefore highly robust.

• GAs do not generally require an in-depth mathematical understanding of the problem and are therefore able to find optimal solutions to nearly any problem.

• They are easily hybridised with other machine learning tools, such as Artificial Neural Network meta-models, to reduce computational time.

• The GA deals with entire populations simultaneously and, due to its probabilistic transition rules, is able to explore the solution space in many directions. In the case of most engineering problems, where there are multiple optimal solutions, the GA can be used to provide the entire Pareto Set in a single run rather than multiple runs. For many years there has been successful integration of the GA and the Pareto Set, and this area of research is well developed. As the optimal parent genes are combined, there is often a large evolutionary jump to the next generation, which means GAs are "less susceptible to becoming trapped in local minima" (Gouldby, 2007, p 18).

2 An objective function is any function which the optimiser is trying to minimise / maximise.


• The GA operates in a parallel fashion, which means that multiple processors can be utilised to reduce the time taken to find optimal solutions (Chang, Lo, and Yu, 2005). In contrast to traditional optimisers, the GA is adaptable to a dynamic environment that may use time-varying fitness functions and unexpected events, as occurs in the theory of natural selection (Michalewicz and Fogel, 2004).

Disadvantages

• There is no guarantee that an optimal solution will be found and

when approaching optimal solutions the computing time can of-

ten become quite long. If appropriate parameters are not used to

encode the problems then the solutions may get stuck in a local

optimum and an isolated optimum may be difficult to find as the

surrounding space holds no useful information.

• As they use randomised search parameters, they are computa-

tionally expensive. It is not unusual for a fitness function to be

evaluated thousands of times before an optimal solution is found.

• Defining parameters such as mutation and crossover is problem-specific and can be difficult, as it must be performed prior to running the algorithm. The importance of proper parameter definition has been acknowledged, but there are still no universal rules (Chang, Lo, and Yu, 2005). Tuning is often performed through a trial-and-error process, which is time-consuming.

• Optimal solutions often lie on the boundary of what the GA de-

termines to be the ’feasible region’, which can mean that many

solutions that are near optimal are therefore considered infeasible

by the GA. In contrast, if the search is limited to feasible regions

then the GA may find it difficult to generate optimal solutions.

2.4.6 Multi-objective Genetic Algorithm

Figure 2.5: Non-dominated Pareto front. Source: (Jin, 2011)

In design, there are usually competing objectives, such as 'reduce weight' and 'increase stiffness', where optimising one compromises the other (Diez and Peri, 2010). The inclusion of several objectives changes the notion of 'optimise' into 'find the best compromise' (Coello Coello, 2002). There are three possible situations in a multi-objective optimisation problem:

optimisation problem:

• Minimise all objective functions

• Maximise all objective functions

• Minimise some and maximise others

The key to multi-objective optimisation is discovering which of the optimal solutions is the most suitable for the purposes of the problem in question. This is often done by combining the objectives into a scalar function (cost), which is then evaluated for its fitness (Jin, 2010), allowing the user to explore a range of Pareto optimal solutions. This is often represented as a financial cost, where factors such as risk are developed into cost indices, although this is not always possible nor desirable. Some metrics cannot be transferred into financial equivalents easily (Higgins, Hajkowicz, and Bui, 2008), and this can also perpetuate the ideology that everything can be represented as a financial cost, which may in fact detract from the goal of the optimisation. An example of this is the simultaneous optimisation of cost and carbon emissions: carbon can be converted into an expected financial cost, although this makes the exercise one of saving money, not the environment, and can actually harm the overall goal.

The non-dominated Pareto front is generally curve-shaped, and each solution on it is not worse than any other solution across all of the optimisation criteria (Coello Coello, Aguirre, and Zitzler, 2007). A solution is non-dominated when there is no other solution that outperforms it in all objectives. Figure 2.5 shows a Pareto front for two arbitrary objective functions, where the blue solutions are both Pareto optimal and non-dominated, whereas the yellow and red solutions are dominated and do not represent optimal solutions in any objective.
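The dominance test described above translates directly into code; the sketch below returns the non-dominated members of a set of objective vectors (all objectives minimised), with an arbitrary two-objective example.

```python
import numpy as np

def non_dominated(scores):
    """Boolean mask of the non-dominated rows of `scores`: a row is
    dominated if another row is no worse in every objective and
    strictly better in at least one."""
    scores = np.asarray(scores, dtype=float)
    mask = np.ones(len(scores), dtype=bool)
    for i, s in enumerate(scores):
        dominated = np.all(scores <= s, axis=1) & np.any(scores < s, axis=1)
        mask[i] = not dominated.any()
    return mask

# e.g. hypothetical (capex, downtime) scores for four candidate layouts
print(non_dominated([[3.0, 2.0], [2.0, 3.0], [3.5, 2.5], [2.5, 2.5]]))
# -> [ True  True False  True ]
```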

Though the application of GAs to breakwaters has been limited, there has been much application to structural engineering problems, especially during the conceptual design stage (Rafiq, Beck, and Hughes, 2008). Gero and Kazakov (1998) used a GA for the conceptual design of building layouts and produced the Mixed Integer Linear Programming (MILP) approach. Miles, Sisk, and Moore (2001) developed BGRID, which uses the GA as a search heuristic to explore the solution space rather than as an optimisation tool; the focus of the study was to develop a decision support tool for the practising engineer. Patsiatzis and Papageorgiou (2002) used a GA to optimise the layout of multi-storey factories, focusing on management and "good engineering" as well as cost, and aimed to solve all areas of the layout problem simultaneously. These studies are relevant to this project as they show the application of the GA to layout problems, which are a key focus.

Boulougouris and Papanikolaou (2008) performed multi-objective hydrodynamic optimisation of a floating LNG terminal to improve its sea-keeping and wave attenuation characteristics using a genetic algorithm, concluding that it was a successful tool for this purpose. Isebe et al. (2008) used a global recursive algorithm to minimise the cost of coastal defence structures optimised to reduce the wave energy transmitted from short waves. They used monochromatic linear wave theory and the mild-slope equation to simulate wave propagation into shallow water, though a self-proclaimed limitation of the research was that the model was unable to simulate random wave propagation, meaning its ability to be applied practically was limited.

Khalafallah and El-Rayes (2011) published a paper on the automated multi-objective optimisation of airport site layouts. A GA was used to search for the global minimum considering a wide range of conflicting objectives, including site cost, safety, debris control, wildlife control and security. They also included a function which enables a selected layout to be visualised in AutoCAD by the user. Similarities can be drawn with the current research project, although they were not looking at the actual engineering design but rather the general footprint of the buildings. The aim of the current study differs in that it seeks a range of solutions which minimise/maximise a set of performance scores through the conceptual design and layout of the structures, whilst accounting for the significant uncertainties and minimising these where possible.

Elchahal, Younes, and Lafon (2013) used a genetic algorithm to opti-

mise the shape of a detached breakwater within a port using the level of

tranquillity behind the breakwater as the metric with which to calculate

the cost function. The study focused on using up to three points to ex-

press the topographical layout of the breakwater and a finite difference

model was used to simulate wave action behind the breakwater. The

study did not indicate whether cross-sectional design was undertaken

and hence whether overtopping and wave transmission was considered

in the model.

2.4.7 Summary

There are a range of optimisation techniques applicable to this project, most notably the GA, which has been identified as a suitable candidate for optimisation. Of the root-finding algorithms, Brent's algorithm (Brent, 1973) has been selected for implementation in the breakwater length algorithm, which is discussed in Section 3.6.

2.5 summary of the literature review

The development of the framework for designing an LNG terminal is complex and requires understanding of a wide range of topics. This literature review has attempted to provide ample coverage of deterministic and probabilistic design methods, sea waves, and optimisation techniques. Deterministic and probabilistic design will both feature in the engineering methodology, which is described in Chapter 3. Deterministic design can be used to validate and calibrate the models prior to using probabilistic methods to explore how uncertainties affect the design. Linear wave theory will be used where it is applicable, such as in calculating wavelength, and random wave theory will be used to ensure that waves in intermediate to shallow water are propagated as accurately as is possible using 1D models.

Part II

MATHEMATICAL BASIS OF THE DECISION SUPPORT SYSTEM

3 FUNDAMENTAL ENGINEERING CONCEPTS OF THE DECISION SUPPORT SYSTEM

3.1 introduction

This chapter will provide the reader with the mathematical basis of

the modelling procedure used to create the framework. LNG terminal

design involves the integration of several design methodologies from

different areas of maritime engineering. Due to the breadth of the scope

and depth of knowledge required to safely apply the methodologies, it

is unusual for a single person to conduct more than a small portion

of the studies. Often, specialist studies using sophisticated models are

required to describe physical phenomena which is then translated into

design conditions. This chapter aims to bring together these method-

ologies into a single framework which enables the design of an LNG

terminal to be automated.

Due to the uncertainties of turbulent flows, numerical and physi-

cal models are often used in hydraulic engineering. Numerical models

range from simplified 1D models such as a non-linear shoaling spread-

sheet to 2D models such as SWAN and 3D models such as those for

computational fluid dynamics. Generally speaking, the lower the dimensionality, the faster the runtime, which makes 1D models suitable

for applications in which many runs are required. An obvious draw-

back of 1D models is that they are unable to consider the spatial effects

which occur in reality, although they are of great use when exploring options and for solving problems that can be reduced to 1D. Accurate

results to practical problems can be obtained from 1D models with little

effort, although the user must be aware of the limitations of the model

and the simplifications it makes, as well as being capable of exerting sound judgement to contextualise the results (Highway, 2008). Wave

shoaling is one such problem where a 1D model is suitable as the co-

efficient of wave height change is a function of water depth and wave

period.



2D models are capable of providing a much more comprehensive analysis, accounting for spatial variations in terrain, but their computational time is far greater than that of a 1D model. 2D models may be developed from mathematical principles such as the conservation of energy, on which the mild-slope equation (Berkhoff, 1972) is based and which forms the basis of many 2D shallow-water wave solvers. Due to their inherently longer run-time, the use of 2D models is usually reserved for the latter stages of design (Reeve, Chadwick, and Fleming, 2004).

Physical models usually represent one of the final stages in the design process, where the scheme design is tested in pseudo-realistic conditions. This is an important stage in the design process, as the random nature of moving bodies of water cannot be fully replicated, even by 3D models. Physical models are expensive and time-consuming to set up and run, and the user must be aware of the scale effects and how

to deal with these: gravity (Froude), surface tension (Weber) and turbu-

lence (Reynolds) to name a few. Physical models are usually reserved

for the final stages of design where 1 and 2D models have already been

used and the concept as a whole needs to be tested in realistic condi-

tions.

3.2 overview of the model

RaPoLa integrates several design methods and techniques into a single

framework to enable concept layouts to be developed easily. There are

six stages in the methodology, as shown in Figure 3.1. When

using the model manually, the user is required to create and evaluate

options, whereas, in automatic design, no intervention is required.

• Input: RaPoLa requires a range of inputs which are described in

Table 3.1. In many cases, some or even most of this information

may not be available. If so, probabilistic methods such as statis-

tical inference and Monte Carlo simulation are available in the

RaPoLa toolbox which, when combined with judgement and experience, can be used to fill in the gaps.

• Decision: Key decisions relating to the layout, such as the location of the berth, the layout of the breakwater, and whether the MOF is integrated with the main breakwater or whether a secondary breakwater is required to limit sedimentation, are made by the designer or the optimisation algorithm.

Table 3.1: Overview of the input files required by RaPoLa. A full description of all input files stated in this table is included in Appendix B.

Input        Description
Breakwater   Breakwater design parameters (Nw, Sd, P, r, ...)
Channel      Parameters for channel design using WG121 (2014)
Constraints  Target operability, allowable overtopping and whether opex cost is calculated
Cost         Material unit rates
Extremes     Extreme wave values, which can be an unlimited number of conditions
Sediment     Site-specific sediment parameters
Mooring      Mooring thresholds for head and beam waves
Trestle      Trestle tie-in point coordinates on shore
Vessel       Parameters for the design vessel
Waves        Wave time series
Currents     Combined wave and sediment time series
Bathymetry   Regular or irregularly spaced bathymetry as a raster image or points file

Figure 3.1: HRW RaPoLa algorithm

• Automated Calculations: The automated calculation algorithm inte-

grates a range of relevant methodologies as a series of modules

where the output of a module forms the input to the next, elim-

inating the need for data to be transferred from one person to

another. This removes a significant time bottleneck in the design

process. This is the main software component and will be dis-

cussed in more detail in the following section.

• Output: There are three main outputs: the layout; the minimisation objectives, which in this case are capital cost (capex), maintenance dredging cost (opex) and vessel downtime; and the design calculation outputs, including material specifications and take-off volumes.

• Evaluation: The options are evaluated for their performance in

the specified objectives and the next design iteration is initiated

if required. When using the automatic approach, the number of iterations is determined by the stopping criteria, as is the number of options.

• End: When the designer is satisfied with the concepts or when the

optimisation algorithm has reached its stopping criteria.

The development of the individual models will be discussed in this

section, focusing on the inputs, outputs, structure and mathematics required for each algorithm to provide its output. The reader can see the inputs and outputs of the model in the figure at the start of each section. Each component of the algorithm is lettered, enabling the reader to follow the processes of the algorithm.

3.3 wave transformation

To obtain wave parameters at the proposed location, the offshore wave

time series and extreme waves must be transformed from the deepwa-

ter location. Drawing from the theory presented in Chapter 2, a random

wave propagation algorithm has been developed. Figure 3.2 shows the

basic structure of the algorithm.

3.3 wave transformation 46

Figure 3.2: Wave transformation algorithm

3.3 wave transformation 47

(a) Refraction

As waves travel into water shallower than about λ/2, the wave crests start to align with the underlying bathymetry contours, creating a bending effect across the wave crest due to increasing wave/seabed interaction, as shown in Figure 3.3.

Figure 3.3: Wave bending as it propagates inshore due to depth-induced celerity reduction

Wave refraction is based on Snell's law¹, which states that the ratio of change in the angle of incidence is equivalent to the ratio of change in the velocity, caused in this case by a change in depth:

\frac{\sin\theta_0}{C_0} = \frac{\sin\theta_1}{C_1}    (3.1)

Pierson (1950) and Longuet-Higgins (1957) were some of the first researchers to suggest that the wave spectrum could be used to solve the refraction of water waves. They posed that the spectrum could be represented by monochromatic waves of varying frequency and direction, relative to the peak period and direction. Understanding the importance of the spectral characteristics of random sea waves in maritime design, Goda (1975b) expressed the refraction coefficient as a function of Mitsuyasu's combined directional spectral density function and directional spreading function:

S(f, \theta) = S(f)\, G(\theta|f)    (3.2)

1 Named after Willebrord Snellius (1580-1626), but first discovered by Ibn Sahl (940-1000).

Yielding the following equation for Kr:

(K_r)_{eff} = \sqrt{ \frac{1}{m_0} \int_0^{\infty} \int_{-\pi}^{\pi} S(f,\theta)\, K_s^2(f)\, K_r^2(f,\theta)\, d\theta\, df }    (3.3)

where S(f, θ) is the representative value of total wave energy and Kr and Ks are the refraction and shoaling coefficients for the monochromatic component waves. This method is computationally expensive when dealing with large wave datasets, as the double integral cannot be evaluated analytically. An alternative offered by Goda (2010b) is to use the mid-point rule to select representative frequencies and directions and thereby approximate the double integral. This equation is practically as accurate as the preceding equation but computes much faster. In this equation, shoaling is handled outside of the function and is based on linear wave theory using the peak wavelength and direction, and S(f, θ) is replaced with (∆E)i,j, the relative energy in each component, which is explained below.

(K_r)_{eff} = \sqrt{ \sum_{i=1}^{M} \sum_{j=1}^{N} (\Delta E)_{i,j}\, (K_r)^2_{i,j}\, (K_s)^2_{i,j} }    (3.4)

Representative directions can be divided into 22.5° intervals spanning from 90° below to 90° above the peak direction, which captures all of the directional energy. The median frequencies which bisect the area of the wave spectrum in each interval can be found with the following equation:

f_i = \frac{1.007}{T_{1/3}} \left\{ \ln\left( \frac{2M}{2i-1} \right) \right\}^{-1/4}    (3.5)


where M is the number of components being analysed and i is the i-th frequency. The relative energy ∆E in each component is then calculated as:

\Delta E = \frac{1}{M} D_j    (3.6)

where Dj represents the ratio of wave energy in each component wave, found through the cumulative distribution of wave energy:

P_E(\theta) = \frac{1}{m_0} \int_{\theta - \pi/2}^{\theta} \int_0^{\infty} S(f, \theta)\, df\, d\theta    (3.7)

By calculating Dj at smax = 10, 25 and 75 for the specified directions and storing the results in a matrix, the value for any smax can be found through linear interpolation without having to recalculate the cumulative distribution of wave energy each time. The refraction coefficient for each component wave is then calculated using linear wave theory:

K_r = \sqrt{ \frac{\cos\theta_0}{\cos\theta_1} }    (3.8)

where the relative refraction angle θ0 = θ_contour − θ_wave. The refracted wave angle is then found from:

\theta_1 = \sin^{-1}\left( \frac{\sin(\theta_0)\, c}{c_0} \right)    (3.9)

The depth-limited wave celerity is given by:

c_0 = \frac{c}{\tanh(kh)}    (3.10)

where the wave number k = 2π/λ. The deepwater wave celerity is given by:

c = \frac{\lambda}{T}    (3.11)
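Equations 3.8 and 3.9 combine naturally into a single refraction step, sketched below; the celerity arguments are assumed to come from the celerity and wavelength relationships above, and the function names are illustrative.

```python
import math

def refract(theta0_deg, c, c0):
    """Snell's law refraction (Eqs 3.8-3.9): returns the refraction
    coefficient Kr and the refracted wave angle.

    theta0_deg : relative angle between wave and depth contour [deg]
    c, c0      : wave celerities as defined in Eqs 3.10-3.11
    """
    theta0 = math.radians(theta0_deg)
    theta1 = math.asin(math.sin(theta0) * c / c0)        # Eq 3.9
    Kr = math.sqrt(math.cos(theta0) / math.cos(theta1))  # Eq 3.8
    return Kr, math.degrees(theta1)

print(refract(30.0, c=9.0, c0=15.6))   # illustrative celerity values
```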


Best estimations of the wavelength are still based on Airy's theory, which states that the wavelength can be found with the following formulae:

\lambda = \begin{cases} \frac{gT^2}{2\pi} & \text{Deep water} \\ \frac{gT^2}{2\pi} \tanh\left(\frac{2\pi h}{\lambda}\right) & \text{Intermediate water} \\ T\sqrt{gh} & \text{Shallow water} \end{cases}    (3.12)

In the case of waves in intermediate water depths, the equation must be solved through an iterative procedure, which can dramatically increase computational time in a program such as a refraction model.

Researchers have attempted to derive a single-line equation for wavelength in any water depth. A discussion paper by You (2003) compared some of these, finding that many of the relative errors were less than 1%. One of the most popular equations is that of Hunt (1979); one equation which was not included in the discussion paper is that of Wu and Thornton (1986), which is accurate to within 0.1% and has been used in this research project:

L = L_0 \tanh\left[ \sqrt{kh} \left( 1 + \frac{kh}{6} + \frac{(kh)^2}{30} \right) \right]    (3.13)
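In practice, the intermediate-water case of Equation 3.12 can also be solved with a short damped fixed-point iteration, sketched below as a cross-check on the closed-form Equation 3.13 adopted in this project; the implementation is illustrative only.

```python
import math

def wavelength(T, h, g=9.81, tol=1e-9):
    """Solve the dispersion relation of Eq 3.12 for wavelength by
    damped fixed-point iteration of L = L0 * tanh(2*pi*h / L)."""
    L0 = g * T ** 2 / (2.0 * math.pi)     # deep-water wavelength
    L = L0
    while True:
        L_new = 0.5 * (L + L0 * math.tanh(2.0 * math.pi * h / L))
        if abs(L_new - L) < tol:
            return L_new
        L = L_new

print(wavelength(10.0, 15.0))   # a 10 s wave in 15 m of water: ~110 m
```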

The total energy in each representative direction is equal to the sum

of the wave energies of the component waves in that direction:

E(\theta) = \sum_{i=1}^{M} (\Delta E)_i    (3.14)

(b) Shoaling

As waves propagate into intermediate and shallow water, a reduction in celerity and wavelength occurs, as per Airy's theory. Due to the conservation of energy, this causes an increase in energy density per wavelength, which is expressed as an increase in wave height proportional to the square root of the ratio of change in the wave group celerity:

K_{si} = \left[ \left( 1 + \frac{2kh}{\sinh 2kh} \right) \tanh kh \right]^{-1/2}    (3.15)

In order to test the shoaling of water waves using the energy spectrum, Goda (1975a) evaluated the shoaling coefficient by summing the shoaling coefficients of waves of representative direction and frequency, finding that the variation between this and Airy's prediction was of the order of 2-3% and practically negligible. Wave shoaling is practically unaffected by the energy spectrum, although in order to calculate the shoaling of real waves, modifications of Airy's formula to account for wave non-linearity have been suggested by Iwagaki (2011), who expressed them in the form of a design diagram. Shuto (1974) then developed a set of equations which are typically used in the form modified by Goda (1975a). These equations are solved through an iterative procedure and, although accurate, require computation of the equivalent deepwater wave H′0 in each iteration, which in turn requires computation of the refraction and diffraction coefficients. This becomes computationally expensive when Kr and Kd are solved using random wave theory. Kweon and Goda (1996) have offered a closed-form equation for non-linear wave shoaling, using coefficients to fit the equation to the results derived from Goda (1975a):

K_s = K_{si} + 0.0015 \left( \frac{h}{L_0} \right)^{-2.87} \left( \frac{H'_0}{L_0} \right)^{1.27}    (3.16)

A comparative study of Goda (1975a) and Kweon and Goda (1996) was conducted at HR Wallingford, corroborating that Kweon and Goda's equation is capable of accurate wave shoaling predictions. This equation has been implemented in this research project.

(c) Wave breaking

As waves refract and shoal in intermediate and shallow water, they reach a point where the depth of water can no longer sustain the height of the wave. This is referred to as the limiting breaker height Hb and lies in the range Hb/h ≈ 0.5-0.9, although it is usually calculated using the fixed value of Hb/h = 0.78 for monochromatic waves. More recently, the EurOtop overtopping manual (Pullen et al., 2007) suggests a value of Hb/h ≈ 0.55, as this is the

point at which random waves start to break. A limitation of these formulae was that the effect of the bottom slope steepness was not included in the calculation, which according to Goda (1970) reduces the accuracy of the formula. This led to the first rendition of the current breaker index calculation by Goda (1974), which was recently revised by Goda (2010a), who modified the constant from 15 to 11 because the 1974 equation over-predicted the ratio on steeper slopes, giving the following equation:

\frac{H_b}{h_b} = \frac{A}{h_b/L_0}\left\{1 - \exp\!\left[-1.5\,\frac{\pi h_b}{L_0}\left(1 + 11\tan^{4/3}\theta\right)\right]\right\}    (3.17)

where A = 0.17 and θ is the bedslope angle. Kamphuis (1991) identified that the constant A = 0.12 gave the breaker index at which random waves start to break, developing the notion of incipient wave breaking. He also found that the value A = 0.18 was the point at which all of the waves had broken, giving the range A = 0.12-0.18 for random wave breaking. In practice, a value of 0.17 is most often used. If the refracted and shoaled wave Heff = Hs · Kr · Ks is less than Hb, then the wave is not breaking. If Heff > Hb then the wave has broken and Goda's surf zone calculations are required, as explained in the next section.
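A minimal sketch of this breaking check, assuming the bedslope is supplied as tan θ and using the commonly adopted A = 0.17 (the function names are illustrative, not the model's):

```python
import math

def goda_breaker_height(h, L0, tan_theta, A=0.17):
    """Limiting breaker height Hb from Goda's breaker index (eq 3.17)."""
    rel_depth = h / L0
    ratio = (A / rel_depth) * (1.0 - math.exp(
        -1.5 * math.pi * rel_depth * (1.0 + 11.0 * tan_theta**(4.0 / 3.0))))
    return ratio * h

def has_broken(Hs, Kr, Ks, h, L0, tan_theta):
    """Compare the refracted and shoaled wave Heff = Hs*Kr*Ks with Hb."""
    return Hs * Kr * Ks > goda_breaker_height(h, L0, tan_theta)
```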

(d) Surf zone wave height estimation

Once waves break, they release energy and lose form, creating an increase in the mean water level known as wave setup. Longuet-Higgins and Stewart (1962) were among the first researchers to pose a numerical solution for random wave breaking; however, their equation tends to greatly overpredict the amount of wave setup, and the predicted setup increases linearly toward the shoreline, which is incorrect. Yanagashima and Katoh (1990) took a year of field measurements and were able to fit the following equation for wave setup:

\bar{\eta} \approx 0.052\,H'_0\left(\frac{H'_0}{L_0}\right)^{-0.2}    (3.18)

Goda (2004) tested the accuracy of the equation against formulae he had derived from his bespoke wave transformation model PEGBIS, finding that the order of error was small, signifying the accuracy of Yanagashima and Katoh's equation. Consideration must also be given to the wave heights in the surf zone. Goda (1975a) performed testing to derive the significant wave heights of broken waves in the surf zone (Goda and Suzuki, 1975), deriving the following formula:

H_s = \begin{cases} K_s H'_0 & h/L_0 \geq 0.2 \\ \min\left\{\beta_0 H'_0 + \beta_1 h,\; \beta_{max} H'_0,\; K_s H'_0\right\} & h/L_0 < 0.2 \end{cases}    (3.19)

where

\beta_0 = 0.028\,(H'_0/L_0)^{-0.38}\exp\left[20\tan^{1.5}\theta\right]

\beta_1 = 0.52\,\exp\left[4.2\tan\theta\right]

\beta_{max} = \max\left\{0.92,\; 0.32\,(H'_0/L_0)^{-0.29}\exp\left[2.4\tan\theta\right]\right\}

A similar calculation exists for Hmax, which has also been implemented in this project, although it will not be printed in this dissertation as it takes the same form as the calculation for Hs; the reader should consult Goda (2010b, pp. 126). A limitation of this formula is that when the wave steepness surpasses 0.04, the equation overpredicts by in excess of 10% in the water depth at which Hs = β0H′0 + β1h = βmaxH′0. Goda (2009) tested the formula against the CLASH project database, confirming that it tends to overpredict slightly. Although the formula may overpredict in some cases, it still offers enough relative accuracy for the purposes of this research project, and in the majority of LNG terminal breakwater designs the breakwater is located in a water depth in which waves are not breaking.

3.4 berth operability

Generally speaking, smaller vessels are more susceptible to small pe-

riod waves and large vessels are more susceptible to longer period

waves as the peak frequencies are closer to their natural frequencies.

Larger wave heights and longer wave periods carry more energy, so there is a trade-off between the two. This can be approximated with a threshold curve of the form shown in Figure 3.4, in which the significant wave height Hs is plotted against the peak wave period Tp to indicate the level of wave energy that a berthed vessel can withstand before mooring line strain exceeds the design guidance.

Figure 3.4: Vessel response threshold curve

The wave steepness is not considered explicitly, although the data have been taken from studies where the water depth was proportional to the vessel draught, which means that the dataset from which these curves have been developed implicitly accounts for wave steepness. Vessel response also varies with the wave direction in relation to the bow: generally, waves approaching the bow or stern elicit the least response, quartering waves more, and beam-on waves the greatest response. Figure 3.5 shows the algorithm.

(a) Select curve

Threshold response curves (an example is given in Figure 3.4) have been developed for bow/stern-on, quartering and beam-on waves for 75,000 m3, 140,000 m3 and 210,000 m3 capacity LNG vessels. Based on the wave direction in relation to the vessel at the berth, the correct curve is selected.

Figure 3.5: Overview of berth operability algorithm

Figure 3.6: Wave direction relative to vessel

(b) Test threshold

The threshold curve is tested with the transformed wave time series; if Tp < f(Hs), then the mooring line resistance will not be exceeded and a pass is given. When all of the waves have been propagated, the number of passes is divided by the total number of waves to give the operability of the berth. The model is programmed to test all ship orientations against the wave data set, i.e. 0° ≤ θship < 360°, to find whether an optimal vessel orientation exists. If no direction has an actual operability greater than or equal to the required operability, a breakwater is required. A minimal sketch of this test is given below.
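In the sketch, the `threshold(Hs, sector)` interpolator stands in for the curves of Figure 3.4, and the 30° and 60° sector boundaries are assumptions for illustration only, since the chapter does not state where the quartering band begins:

```python
def berth_operability(waves, threshold, ship_heading):
    """Fraction of a wave record for which the mooring threshold holds.

    waves: list of (Hs, Tp, direction_deg) tuples;
    threshold: hypothetical interpolator of the Figure 3.4 curves,
    returning the limiting Tp for a given Hs and wave sector.
    """
    def sector(wave_dir):
        # Approach angle relative to the vessel axis (cf. Figure 3.6)
        rel = abs((wave_dir - ship_heading + 180.0) % 360.0 - 180.0)
        rel = min(rel, 180.0 - rel)      # bow and stern treated alike
        if rel < 30.0:
            return "bow_stern"
        if rel < 60.0:
            return "quartering"
        return "beam"

    passes = sum(1 for Hs, Tp, d in waves if Tp < threshold(Hs, sector(d)))
    return passes / len(waves)

# Sweep all orientations, 0 <= theta_ship < 360, for the best operability:
# best = max(berth_operability(waves, threshold, a) for a in range(360))
```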

3.5 breakwater design

When adequate natural protection is not available and the wave conditions are above acceptable limits, an artificial breakwater is usually required. Although breakwaters are sometimes designed to limit littoral drift, sedimentation or scour where constraints allow, the primary function of a breakwater in an LNG terminal is usually to limit wave energy at the berth, which arrives primarily through wave transmission, overtopping and diffraction (CIRIA and CUR, 2007). The general location of the LNG terminal site is heavily influenced by the location of the gas field, which is often offshore. This means that a breakwater is usually required to provide berth protection. There are two basic components of breakwater design: the first is the cross-section, the second is the topographical layout. Cross-sectional design is usually based on the allowable level of damage for a given return period, confirmed through model testing, whilst developing the layout is a mix of engineering judgement and model studies. Figure 3.7 gives a brief description of the breakwater types and their relevance to the research project. The conventional rubble mound breakwater is the most appropriate due to its simpler design method and its popularity in LNG terminal design, where it usually carries lower costs. Caisson breakwaters may also be considered later in the project, although they often only become viable in water depths greater than 15m, which is an uncommon, but still feasible, depth for an LNG berth.

The Shore Protection Manual was a commonly used reference for coastal engineers; however, it is now considered superseded, largely because its design methodologies are for regular waves and were written while our understanding of waves as a random phenomenon was still developing. The Rock Manual embraces random wave theory and is now considered the main source of guidance for designing rubble mound breakwaters. In the UK, all engineering works must comply with the relevant British Standards; in the case of maritime works this is the BS 6349 series, specifically Part 7 for breakwaters (BSI, 1994). It is good practice to follow such guidance, which is generally in accord with The Rock Manual, but it is not mandatory when a project is abroad, which every LNG export terminal is. A common approach to breakwater design is to use the 'design wave', a single-value wave height (often Hs or H1/10) which has a low probability of exceedance during the design life (BSI, 1999; Goda, 2010b). Wave period, direction, spectral energy and whether the waves are breaking are also important; longer period waves transmit more energy through the breakwater (CIRIA and CUR, 2007). It is also important to design each individual component of the breakwater using the worst-case feasible conditions for that component (BSI, 1999). Careful consideration must be given to frequent as well as extreme events, as the former affect the wave climate behind the structure and the latter affect safety (BSI, 1999). The breakwater must also be designed to withstand wave attack during the construction phase, though due to the short duration in comparison to the structure's design life, a lower return period is usually considered. The Rock Manual suggests that the minimum core level is 1m above HAT to reduce the chance of washout during construction of the armour layer. The cross-section of a rubble mound breakwater can be seen in more detail in Figure 3.8.

Figure 3.7: Breakwater types and relevance to research. Based on CIRIA and CUR (2007, pp 781)

Figure 3.8: Cross-section of a rubble mound breakwater. Source: CIRIA and CUR (2007, pp 782)

3.5.1 Breakwater cross-section

Whether a breakwater is even required is decided by the Berth Oper-

ability Algorithm. Calculating the correct dimensions of the breakwa-

ter cross-section is crucial in developing the cost of this structure. The

wave height dictates whether rock armour is feasible and the cost of

armour increases with wave height. Figure 3.9 shows the algorithm for

calculating the breakwater cross-section.

(a) Crest height

Defining the crest level of the breakwater is possibly the most important task of the breakwater designer. The crest level Rc must limit the volume of water overtopping the structure during design conditions; it is a function of the volume of water which is allowed to overtop the structure, although the designer must also ensure that the core level remains above the water level associated with the construction-period event. The Rock Manual gives guidance on overtopping, although the authoritative resource on the topic is considered to be The EurOtop Manual (Pullen et al., 2007), which subsumes and collates previous guidance such as Owen (1980) and TAW (2002). Owen was one of the first researchers to publish on overtopping.

Figure 3.9: Breakwater cross-section algorithm

Owen's method was originally devised

for smooth sloped structures, although it was later modified for other slope types, such as rubble mounds, through the roughness reduction coefficient γf. The TAW method was devised by van der Meer; it uses the spectral significant wave height Hm0 and mean period Tm−1,0 drawn from random wave theory and reportedly offers a greater level of accuracy, although Owen's formula is still often used as a first estimate for its ease of use. The authors of The EurOtop Manual have also developed a neural network using data drawn from the EU CLASH project database of physical model and real structure overtopping results. The neural network has been used and verified in a range of research projects (Bruce et al., 2009; De Rouck, Verhaeghe, and Geeraerts, 2009; Lykke Andersen and Burcharth, 2009; Van Gent, Smale, and Knuiper, 2007; Victor, Meer, and Troch, 2012) and is currently being updated to include more structure types. In this project, Owen's method is used as it is computationally efficient and simple to implement. The crest height is defined by the volume of allowable overtopping Q and design water level WL:

R_c = WL + A_c    (3.20)

where the freeboard is calculated as:

A_c = A_c^{*}\, T_m \sqrt{gh}    (3.21)

and the dimensionless freeboard is:

A_c^{*} = -\frac{r}{B}\ln\!\left(\frac{Q^{*}}{A}\right)    (3.22)

A and B are Besley's empirical coefficients, found in Besley (1991), and the dimensionless overtopping discharge is:

Q^{*} = \frac{Q}{T_m\, g\, H_s}    (3.23)
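Chaining equations 3.20-3.23 gives a direct crest-level calculation, as sketched below. The coefficient values A = 0.0125 and B = 22 are indicative of a smooth 1:2 slope and are assumptions here; project values would be taken from Besley (1991), with r the roughness coefficient:

```python
import math

def owen_crest_level(Q, Tm, Hs, h, WL, r=0.55, A=0.0125, B=22.0, g=9.81):
    """Crest level Rc from Owen's method (eqs 3.20-3.23); a sketch only."""
    Q_star = Q / (Tm * g * Hs)                  # dimensionless discharge (3.23)
    Ac_star = -(r / B) * math.log(Q_star / A)   # dimensionless freeboard (3.22)
    Ac = Ac_star * Tm * math.sqrt(g * h)        # freeboard (3.21)
    return WL + Ac                              # crest level (3.20)
```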


(b) Armour and underlayer sizes

In order to obtain a first-order estimate of the cross-sectional dimensions, preliminary design is required. The fundamental formula for armour sizing was developed by Hudson (1958):

W_{50} = \frac{\rho_r H^3}{K_D\, \Delta^3 \cot\alpha}    (3.24)

Hudson introduced the concept of the stability coefficient KD, which indicates the natural stability of the armour and was initially suggested as KD = 3 for quarried rock. The median mass of armourstone W50 is a function of the cube of the wave height H, which requires exponentially larger rock for increasing wave heights; Δ is the relative buoyant density of the rock and α is the structure slope angle. As random wave theory was evolving in the 1970s, the first edition of The Shore Protection Manual (CERC, 1977) suggested that H be replaced with Bretschneider's Hs value. The second edition of The Shore Protection Manual (CERC, 1984) advised that Hs be changed to H1/10, i.e. 1.27Hs, and that KD be set as 2 for breaking waves and 4 for non-breaking waves, which led to significantly larger armourstone requirements.

According to The Rock Manual (CIRIA and CUR, 2007), Hudson's formula is limited in that it is applicable to regular waves only, does not account for wave period or storm duration, does not consider the level of acceptable damage, and requires the structure to be permeable and non-overtopped. In response to these limitations, van der Meer undertook breakthrough work for his PhD (Meer, 1988), developing a new set of equations for rubble mound breakwaters in deep water:

D_{n50} = \begin{cases} \dfrac{H_s}{\Delta\; 6.2\,P^{0.18}\left(S_d/\sqrt{N_w}\right)^{0.2}\xi_m^{-0.5}} & \xi_m < \xi_{m,cr} \\[2mm] \dfrac{H_s}{\Delta\; 1.0\,P^{-0.13}\left(S_d/\sqrt{N_w}\right)^{0.2}\left(\cot\alpha\right)^{0.5}\xi_m^{P}} & \xi_m \geq \xi_{m,cr} \end{cases}    (3.25)

These formulae introduced the notion of an acceptable damage level Sd and the number of waves in a storm event Nw, as well as relating the rock size to the mean wave period through the surf similarity parameter ξ, often referred to as the Iribarren number after the engineer who developed the formula (Iribarren and Norales, 1949):

\xi = \frac{\tan\alpha}{\sqrt{S_m}}    (3.26)

The surf similarity parameter was originally developed to indicate

whether a wave was breaking, however Battjes (1974) showed that

the surf similarity parameter could also be used to define the type

of wave breaking that would occur, which is important in rubble

mound breakwater design as plunging waves release more energy

than surging waves. This is mostly affected by the slope of the

structure but the fictitious wave steepness s0 also has an impact.

Figure 3.10 shows the two breaker types used in van der Meer’s

formulae.

Figure 3.10: Breaker types and associated surf similarity numbers. Source: CIRIA and CUR (2007)

The application of van der Meer's stability formulae was later extended to apply to waves in shallow water by Van Gent and Pozueta (2005) by modifying the coefficients and using the spectral wave period Tm−1,0. This requires the calculation of the H2% parameter using Battjes and Groenendijk's method (Battjes and Groenendijk, 2000), although this will not be discussed in this dissertation. If the breakwater is situated in shallow water then equation 3.27 is used:

D_{n50} = \begin{cases} \dfrac{H_s}{\Delta\; 8.4\,P^{0.18}\left(S_d/\sqrt{N_w}\right)^{0.2}\left(H_s/H_{2\%}\right)\xi_m^{-0.5}} & \xi_m < \xi_{m,cr} \\[2mm] \dfrac{H_s}{\Delta\; 1.0\,P^{-0.13}\left(S_d/\sqrt{N_w}\right)^{0.2}\left(H_s/H_{2\%}\right)\left(\cot\alpha\right)^{0.5}\xi_m^{P}} & \xi_m \geq \xi_{m,cr} \end{cases}    (3.27)


The relative buoyant density of the rock is given by:

\Delta = \frac{\rho_r}{\rho_w} - 1    (3.28)

in which the wave steepness is given by:

S_m = \frac{2\pi H_s}{g T_m^2}    (3.29)

and the Iribarren transition value, at which the wave breaking switches from plunging to surging, is:

\xi_{m,cr} = \left[6.2\,P^{0.31}\sqrt{\tan\alpha}\right]^{\frac{1}{P+0.5}}    (3.30)

This allows the median mass of rock to be calculated as:

M_{50} = \rho_r D_{n50}^3    (3.31)

The underlayer rock mass M50,u is calculated as:

M_{50,u} = \frac{M_{50}}{10}    (3.32)

and its nominal diameter Dn50,u is found from:

D_{n50,u} = \sqrt[3]{\frac{M_{50,u}}{\rho_r}}    (3.33)

Rock armour is usually the cheapest option for a rubble mound breakwater as it does not require any additional manufacturing and the materials can usually be sourced with relative ease. Rock armour works well for fairly small waves: when Hs exceeds about 3-4m, the W50 of rock usually becomes greater than 10 tonnes, which may be difficult to acquire in large quantities. Some quarries may be able to produce up to 20 tonne rock (corresponding to Hs ≈ 4-5m) but this must be investigated. When this is the case, concrete armour units become a more suitable choice of armour.
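The deep-water sizing chain of equations 3.25, 3.26 and 3.28-3.31 can be sketched as follows; the default parameter values echo those used elsewhere in this thesis (e.g. Table 4.2), but the function itself is illustrative and not the RaPoLa source:

```python
import math

def vandermeer_dn50(Hs, Tm, tan_alpha, P=0.4, Sd=2.0, Nw=3000,
                    rho_r=2650.0, rho_w=1025.0, g=9.81):
    """Nominal rock diameter Dn50 and median mass M50 (deep water)."""
    delta = rho_r / rho_w - 1.0                   # eq 3.28
    s_m = 2.0 * math.pi * Hs / (g * Tm**2)        # eq 3.29
    xi = tan_alpha / math.sqrt(s_m)               # eq 3.26
    xi_cr = (6.2 * P**0.31 * math.sqrt(tan_alpha))**(1.0 / (P + 0.5))
    damage = (Sd / math.sqrt(Nw))**0.2
    if xi < xi_cr:   # plunging waves
        Ns = 6.2 * P**0.18 * damage * xi**-0.5
    else:            # surging waves
        Ns = P**-0.13 * damage * math.sqrt(1.0 / tan_alpha) * xi**P
    dn50 = Hs / (delta * Ns)
    return dn50, rho_r * dn50**3                  # Dn50 and M50 (eq 3.31)
```

For conditions of the order of Hs = 6.5 m, Tm ≈ 10 s on a 1:2 slope, this yields a W50 of roughly 50 tonnes, consistent with the observation above that very large rock quickly becomes impractical and concrete units are needed.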


(c) Concrete armour size

Concrete armour units have been around since the 1950s. They can be made very large in order to withstand very large waves, although they are often more expensive to produce as they require an onshore batching plant and cement, and a royalty is sometimes paid to the company who owns the casting moulds. Hudson's stability formula is the basis for the design of concrete armour units, as testing by van der Meer (Meer, 1987) showed that the wave period Tm and storm duration Nw had no effect on the hydraulic stability of Accropode armour units; this conclusion has been extrapolated to other unit types. Concrete armour units are usually placed in a single layer and, due to the high stability coefficient derived from their interlocking nature (KD ≈ 7-16), the weight-to-performance ratio is higher than for quarried rock. The latest design guidance for Accropode and Core-Loc units relates a reduction in the stability coefficient KD to the bedslope angle, with steeper slopes reducing the value of KD; however, it is still possible to obtain KD = 14 for a bedslope of 1:50. Figure 3.11 shows a few of the common types of concrete armour units encountered on LNG terminal breakwaters.

Figure 3.11: Miniature models of concrete armour units, from left to right: Core-Loc, Accropode, Accropode II, Ecopode


In this project, Accropode and Core-Loc armour units will be a design possibility for a breakwater. These are two of the most common units, and the cost of most units is of a similar order of magnitude, which means that for developing initial concepts the type of unit does not have a major influence on the cost. Concrete armour design is based on Hudson's formula:

V = \frac{H_s^3}{K_D\, \Delta^3 \cot\alpha}    (3.34)

where KD is found through linear interpolation of the data in Figure 3.12, which relates an increase in bedslope to a reduction in the stability coefficient.

Figure 3.12: Stability coefficient for varying bedslope gradients

(d) Transmission

As rubble mound breakwaters are sloped structures, the width of the base is a direct function of Rc and the front and back slope angles. The width affects the amount of wave energy that is transmitted through the breakwater core, and reducing this energy is a primary function of the breakwater. In the first version of The Rock Manual, a simplified method of calculating the transmission coefficient Ct based on model testing was given. It was crude in that it only related the reduction in energy to the relative crest height Rc/Hs, which meant that core width was not an explicit variable. Later research by Ahrens (1987) related Ct to armour size, wavelength and wave steepness, but it is Briganti et al. (2004) which takes into account the surf similarity parameter and the relative crest width B/Hs:

C_t = \begin{cases} -0.4\dfrac{R_c}{H_s} + 0.64\left(\dfrac{B}{H_s}\right)^{-0.31}\left[1 - \exp(-0.5\,\xi_m)\right] & B/H_s < 10 \\[2mm] -0.35\dfrac{R_c}{H_s} + 0.51\left(\dfrac{B}{H_s}\right)^{-0.65}\left[1 - \exp(-0.41\,\xi_m)\right] & B/H_s \geq 10 \end{cases}    (3.35)

A reduction in wave period also occurs through transmission, as some frequencies are absorbed, although calculating this is outside the scope of this study.
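Equation 3.35 translates directly into code, as sketched below; the clipping of Ct to a physically meaningful band (0.075-0.8, the limits usually quoted with this family of formulae) is an assumption added for robustness:

```python
import math

def transmission_coefficient(Rc, Hs, B, xi_m):
    """Wave transmission coefficient Ct (Briganti et al., 2004), eq 3.35."""
    if B / Hs < 10.0:
        Ct = (-0.4 * Rc / Hs
              + 0.64 * (B / Hs)**-0.31 * (1.0 - math.exp(-0.5 * xi_m)))
    else:
        Ct = (-0.35 * Rc / Hs
              + 0.51 * (B / Hs)**-0.65 * (1.0 - math.exp(-0.41 * xi_m)))
    return min(max(Ct, 0.075), 0.8)   # assumed physical limits
```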

3.6 breakwater layout

Prior to this research there was no available method to develop the breakwater layout within a computational framework. The breakwater is one of the most critical structures, and this module is correspondingly one of the most critical. This section discusses the methodology that won the PIANC International De Paepe Willems Award 2014 as well as the Best Student Paper Award at the COPRI Ports Conference in 2013. The Breakwater Layout algorithm is used to calculate the length and layout of the breakwater which provides the desired level of berth availability. This is achieved on the simplification that for any given level of acceptable berth downtime, there exists a corresponding breakwater length. This premise allows the relationship to be expressed as operability = f(length), which can be solved iteratively using a root-finding algorithm. To achieve this, the Wave Transformation and Berth Operability algorithms have been embedded in an iterative root-finding algorithm which also includes random wave diffraction and transmission algorithms to transform the waves to, through and around the breakwater. The diffracted and transmitted wave energies are summed using the root mean square (RMS) approach before the effective wave is used to determine whether the vessel mooring threshold is exceeded. Performing this for a time series gives an indication of the downtime that will be incurred with a given breakwater layout, which is used to iteratively search for the breakwater length which offers the desired level of berth operability. Figure 3.13 shows the algorithm.

Figure 3.13: Breakwater layout algorithm

(a) Create breakwater contour

In depths at which rubble mound breakwaters are economically viable, the contours typically align to the significant wave direction due to millennia of wave/seabed interaction. With the berth location selected, the centre of the breakwater is calculated based on the beam width, the required clearance between the vessel and the rear slope, and the crest height of the breakwater. Unless the coordinates of the location fall on a known point, a bilinear interpolation is required to calculate the depth from the surrounding points of the containing square; a sketch of this interpolation is given below. From this point, a contour of constant depth is created using an implementation of a marching squares algorithm for each side of the breakwater, as shown in Figure 3.14.

Figure 3.14: Marching squares algorithm for tracing a constant depth contour, where the numbers represent the depths of the regular bathymetric grid.

At each intersection of the regularly spaced bathymetric gridlines, a linear interpolation is used to calculate the x and y coordinates, which are then stored in an array.
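The bilinear interpolation step can be sketched as follows, assuming the bathymetry is stored as `grid[j][i]` holding the depth at (i·dx, j·dy); this storage layout is an assumption for illustration:

```python
def bilinear_depth(x, y, grid, dx, dy):
    """Depth at (x, y) interpolated within the containing grid square."""
    i, j = int(x // dx), int(y // dy)                 # containing square
    tx, ty = (x - i * dx) / dx, (y - j * dy) / dy     # local coordinates
    d00, d10 = grid[j][i], grid[j][i + 1]
    d01, d11 = grid[j + 1][i], grid[j + 1][i + 1]
    return (d00 * (1 - tx) * (1 - ty) + d10 * tx * (1 - ty)
            + d01 * (1 - tx) * ty + d11 * tx * ty)
```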

(b) Transform waves to breakwater

To estimate the length of the breakwater, a root-finding algorithm can be used. Root-finding algorithms were discussed in the literature review, where Brent's algorithm was highlighted as the most suitable for this research project. The breakwater length for the first side is approximated by Brent's algorithm (Brent, 1973). The coordinates corresponding to this distance are found by tracing the contour until the length has been reached. The wave height at the berth is a product of the wave energies coming around and through the breakwater, as shown in Figure 3.15.

Figure 3.15: Simplified diagram of the major wave energy components

The total energy at the berth can therefore be approximated as the root mean square (RMS) of the component wave heights:

H_{berth} = \sqrt{H_1^2 + H_2^2 + H_3^2}    (3.36)

where H1 is the refracted, shoaled and diffracted wave travelling around the left-hand side of the breakwater, H2 is the refracted, shoaled and transmitted wave coming through the breakwater core and H3 is the refracted, shoaled and diffracted wave travelling around the right-hand side of the breakwater. Brent's algorithm can only be used to solve for one unknown, therefore each side must be calculated separately, which reduces the wave energy equation to:

H_{berth} = \sqrt{H_1^2 + H_2^2}    (3.37)

The offshore wave time series has already been transformed to the mid-point of the breakwater by the Wave Transformation algorithm; however, the original time series must be transformed from deep to shallow water for each new estimate of the outer coordinates (Figure 3.16). A minimal sketch of the root-finding step is given below.

Figure 3.16: Diffracted and transmitted wave energies
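In the sketch, SciPy's implementation of Brent's method is used; `operability_of` is a hypothetical callable wrapping the embedded transformation, diffraction, transmission and operability chain for a trial length:

```python
from scipy.optimize import brentq

def required_length(operability_of, target, L_min=50.0, L_max=2000.0):
    """Solve operability(length) = target for one breakwater side.

    brentq requires the residual to change sign over [L_min, L_max];
    the bracket and the 1 m tolerance are illustrative choices.
    """
    residual = lambda L: operability_of(L) - target
    return brentq(residual, L_min, L_max, xtol=1.0)
```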

(c) Diffract and transmit waves

As a wave passes an object such as a breakwater, it bends and spreads into the shadow zone of the object, albeit with a reduced level of energy. This process is known as diffraction and is common to all kinds of waves, such as electromagnetic, light, sound and water waves. Diffraction is of particular importance to maritime engineers involved in designing breakwaters, as a key objective is to limit wave energy in the shadow zone. The first analytical solution for wave diffraction was given by the theoretical physicist Arnold Sommerfeld in 1896 (Sommerfeld, 1896). Although the theory was intended to explain optical wave diffraction through velocity potential theory, it was utilised by coastal engineers for monochromatic waves around an object. In 1944, American researchers Penney and Price used Sommerfeld's equations to compute diffracted waves around a semi-infinite breakwater (Penney and Price, 1944) and in 1948, Blue and Johnson used the same equations for wave diffraction through a gap in a breakwater (Blue and Johnson, 1948), both proving that the theory could be applied to water waves. Wiegel then published the so-called Wiegel Diagrams for regular wave diffraction (Weigel, 1962) and further work by Mobarek and Wiegel (1996) added the effect of wind to the diffraction equations, although it was the Wiegel Diagrams that were included in the second edition of The Shore Protection Manual (CERC, 1984), highlighting the Western world's slow uptake of random wave theory. Following on from Mobarek and Wiegel, Nagai (1972) pioneered a technique for irregular wave diffraction around a breakwater. This was then enhanced by Goda (1975b), who used Mitsuyasu's combined directional spectral density function and directional spreading function to express the effective coefficient of diffraction in the following form:

(K_d)_{eff} = \sqrt{\frac{1}{m_0}\int_0^{\infty}\!\int_{-\pi}^{\pi} S(f,\theta)\,K_d^2(f,\theta)\,d\theta\,df}    (3.38)


This equation was then used by Goda, Takayama, and Suzuki (1978) to generate the random wave diffraction diagrams which are used by engineers today and which appear in Goda's most recent edition of Random Seas and Design of Maritime Structures (Goda, 2010b). Much in the same manner as with random wave refraction, representative frequencies and directions can be used to reduce computational time, although another method for random wave diffraction also exists. A reasonably accurate approximation is proposed by Kraus (1984) as:

K_d(\theta_d) = \sqrt{0.5\left[\tanh(W\theta_d) + 1\right]}    (3.39)

where W = -0.00013\,S_{max}^2 + 0.270\,S_{max} + 5.31 and θd is the diffracted wave angle, shown in Figure 3.17, in radians.

Figure 3.17: Diffraction angle

According to Kraus (1984), the calculation slightly under-approximates Kd in the shadow zone close to the breakwater; however, this is reportedly insignificant as it occurs in the small wave height region. This method has been adopted in this project as it is faster than the solution offered by Goda (1975b). Each wave in the time series is diffracted around the breakwater to the berth using Kraus's formula, and the corresponding wave from the time series transformed to the centre of the breakwater is transmitted through the breakwater using Equation 3.35. A sketch of the diffraction step is given below.
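Note that the quadratic expression for W below is reconstructed from a garbled source and should be verified against Kraus (1984) before use:

```python
import math

def kraus_kd(theta_d, s_max):
    """Diffraction coefficient approximation (eq 3.39).

    theta_d: diffraction angle of Figure 3.17 in radians (negative in
    the shadow zone); s_max: directional spreading parameter. The W
    coefficients are an assumption reconstructed from the text.
    """
    W = -0.00013 * s_max**2 + 0.270 * s_max + 5.31
    return math.sqrt(0.5 * (math.tanh(W * theta_d) + 1.0))
```

At θd = 0 (the breakwater tip) the expression returns Kd = √0.5 ≈ 0.71, the classical value on the shadow-zone boundary of a semi-infinite breakwater, which supports the reconstruction.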

(d) Test berth operability

Equation 3.37 is then used to calculate the sum of the wave energy

at the berth location and the operability of the berth is calculated

as per Section 3.4.


(e) Calculate length of other side

Once the first side has been calculated, the other side can be calculated using the original three-component wave energy equation, as a length now exists for side one:

H_{berth} = \sqrt{H_1^2 + H_2^2 + H_3^2}    (3.40)

The same approach as described in (b) to (d) is used to calculate the required length.

(f) Recalculate first side

The procedure is performed on the first side once more, this time

considering the waves emerging from the second side also. This

process could go on continually, though it has been found that after

this iteration, the percentage of length increase is negligible. This

gives the semi-optimised layout shown in Figure 3.18.

Figure 3.18: Semi-Optimised Breakwater Layout

(g) Approximate line of best fit for each side

Now, a linear least squares regression is performed for each side

of the breakwater to give a smoothed and optimised breakwater

layout as shown in Figure 3.19.

Figure 3.19: Least Squares Regression for the Optimised Breakwater Layout

The point of intersection of the lines is calculated using a two-

dimensional line intersection routine and the three sets of coordi-

nates can now be used to calculate the breakwater volume.


(h) Breakwater volume

The breakwater volume is calculated as the area of each of the re-

spective layers multiplied by the length accounting for the porosity

of the construction.

3.7 channel width/depth

In locations where the water depth is not sufficient for an LNG vessel, a dredged access channel is required. Calculating the conceptual dimensions of the channel is an engineering requirement. Factors such as the allowable ship response to wave and current action and the bed material become very important in channel design (Thoresen, 2010), as does the consideration of uncertainty; these are all usually handled with conservative design rules (McBride, Smallman, and Huntington, 1998). Probabilistic methods have been posed by Briggs, Borgman, and Bratteland (2003) and Lan, Doorn, and Hove (2010), and the PIANC methodology is in the process of being updated to include probabilistic design in WG121 (2014). Figure 3.20 shows the Channel Width/Depth algorithm.

Figure 3.20: Channel width/depth algorithm

(a) Channel depth

The water depth is assumed to be at mean sea level; calculating the movement of vessels to make best use of the tides is outside the scope of this dissertation. The calculation of the channel depth has evolved to include a range of factors: the vessel speed and water depth, which give the Froude number, a dimensionless parameter measuring the vessel's resistance to motion in shallow water; the vessel squat, the phenomenon whereby a vessel's draught increases in proportion to its velocity; and the natural frequencies of roll, heave and surge. For the sake of simplicity, formulae have been selected that calculate the maximum increase in draught considering the vessel motions and the material at the bottom of the channel. The channel depth in metres is calculated as:

D = T + S + F + C + O + Z_{max}    (3.41)

where T is the laden draught; S is a factor based on reduced salinity over the design life due to sea level rise; F is a factor for the seabed material; O is the over-dredge amount; and the squat C is calculated as:

C = \frac{C_B\,v^2}{100}    (3.42)

Figure 3.21: Six degrees of freedom for a vessel

and Zmax is the combined wave response allowance from the roll and pitch components (as shown in Figure 3.21):

Z_r = 0.5\,B\sin\theta_r    (3.43)

Z_p = 0.5\,L_{pp}\sin\theta_p    (3.44)

Z_{max} = \sqrt{Z_r^2 + Z_p^2}    (3.45)
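A sketch of the depth calculation (equations 3.41-3.45) follows; the assumption that the squat constant of 100 implies a vessel speed v in knots follows common squat formulae and is not stated in the text:

```python
import math

def channel_depth(T, S, F, O, CB, v, B, Lpp, theta_r, theta_p):
    """Conceptual channel depth D (eq 3.41); angles in radians."""
    C = CB * v**2 / 100.0                 # squat (3.42)
    Zr = 0.5 * B * math.sin(theta_r)      # roll allowance (3.43)
    Zp = 0.5 * Lpp * math.sin(theta_p)    # pitch allowance (3.44)
    Zmax = math.hypot(Zr, Zp)             # combined wave response (3.45)
    return T + S + F + C + O + Zmax
```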

(b) Channel width

Some guidance is given in BS 6349: Part 1 (BSI, 2000), though a common methodology for conceptually designing a dredged channel is that described in WG121 (2014). The Spanish ROM method (ROM, 2007) can also be used, as can the US Army Corps of Engineers method (UACE, 1996), sometimes in parallel with the worst case taken. Recently, PIANC have started work on a new channel design guide which brings together much of the available guidance in a single document. The following equation is for a one-way channel:

W = W_{BM} + \sum W_i + W_{BR} + W_{BG}    (3.46)

where WBM is the basic manoeuvring lane width, Wi is the additional width for environmental factors and WBR and WBG are the red and green clearance widths. Many of the factors contribute to calculating WBM and can be seen in the Channel Data table included in the figure; the other factors which play into the calculation are the vessel dimensions, especially the beam.

3.8 channel volume

Now that the channel width and depth are known, the outline and volume can be calculated. The outline is calculated through trigonometry for both the channel and the basin, and the volume is calculated by superimposing these outlines onto the bathymetric dataset. Figure 3.22 shows the algorithm.

(a) Channel outline

The width is known and a channel angle has been specified, which means that trigonometry can be used to calculate the channel outline. The channel originates at one side of the breakwater. Eight points are required to outline the channel in three dimensions with its side slopes, as shown in Figure 3.23.


Figure 3.22: Channel volume algorithm

Figure 3.23: 3D channel with sloped edges


(b) Basin outline

The basin is the area in which the vessels are manoeuvred onto the berth by tug boats. This area is calculated through trigonometry following the safe berthing distance guidelines found in Thoresen (2010) and from values obtained from senior engineers at HR Wallingford.

(c) Channel volume

The channel and basin outlines can now be superimposed on the bathymetric dataset and the depth difference at each point that lies within the outlines summed to find the total dredge volume. A loop has been set up which tests only the bathymetric points that fall within the channel and basin outlines; a sketch is given below.
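In the sketch, `inside` is a hypothetical point-in-polygon test for the combined channel and basin outline, and `cell_area` is the plan area represented by each bathymetric grid point:

```python
def dredge_volume(points, inside, design_depth, cell_area):
    """Total dredge volume from grid points within the outlines.

    points: iterable of (x, y, depth) bathymetric grid points.
    """
    volume = 0.0
    for x, y, depth in points:
        if inside(x, y) and depth < design_depth:
            volume += (design_depth - depth) * cell_area
    return volume
```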

(d) Channel length

The channel length is calculated by extruding a line from the chan-

nel origin until it surpasses the limiting depth contour.

3.9 channel infill

Calculating the sediment that accrues in an artificial channel is of paramount importance when calculating the frequency of dredging required to ensure the channel remains navigable. The continuity equation states that velocity and depth are linked: an increase in depth creates a proportional decrease in velocity to maintain equilibrium. The reduced velocity in the channel is therefore less capable of transporting sediment, due to the reduced bed shear stress, which causes sedimentation. Predicting the rate of sedimentation is not an exact science and even the most accurate formulae are still quite inaccurate; there is also discrepancy between formulae, where, given the same conditions, two formulae may differ in output by a factor of two. There are several methods of predicting sediment transport. Einstein and Brown were early pioneers of bed load transport (Brown, 1950; Einstein, 1942); Bagnold (1963) developed widely used formulae, as did Nielsen (1992); and Ackers and White (1973) and van Rijn (1984) developed formulae which also consider the suspended load. These methods were developed mainly for current-induced sediment transport. In 1981, Bailard and Grass independently published methods for sediment transport induced by waves and currents (Bailard, 1981; Grass, 1981) and more recently the Soulsby-van Rijn method has been published by Soulsby (1997). Soulsby's method is used throughout HR Wallingford and there is extensive guidance for using it, so it has been adopted in this project. As current speed, water depth, and wave height and period, amongst many things, exert an effect on the rate of sedimentation, a time series of conditions is required to test the channel in realistic conditions.

Figure 3.24: Channel infill algorithm

(a) Sediment flowing into channel

The first calculation is the sediment flowing into the channel. The governing equation for total load transport by waves and currents, in m³/m/s, is:

q = A_s U \left\{\left(U^2 + \frac{0.018}{C_D}U_{rms}^2\right)^{0.5} - U_{cr}\right\}^{2.4}\left(1 - 1.6\tan\beta\right)    (3.47)

where As = Asb + Ass and U is the current velocity in m/s. The sedimentation is a function of the current velocity, the wave orbital velocity, the drag coefficient related to the bedform, and the critical velocity above which sediment transport occurs. The equation is in unidirectional form at an oblique angle to the channel, so for it to be useful in real calculations, where the waves and current will not be oblique in most cases, the orthogonal components must be considered. This is achieved by resolving the transport along the relative velocity components:

q_t = \left[\,q_i\frac{U_x}{U},\; q_i\frac{U_y}{U}\,\right]    (3.48)

The full calculation for this process involves many equations and has therefore been included in Appendix A.

The calculation of Urms is not contained within Soulsby (1997), which suggests calibration with site measurements. Soulsby (2006) provides the following empirical relationship between the significant wave height Hs, the incident water depth hi and the zero-crossing wave period Tz:

U_{rms} = \frac{H_s}{4}\sqrt{\frac{g}{h_i}}\,\exp\left[-\left(\frac{3.65}{T_z}\sqrt{\frac{h_i}{g}}\right)^{2.1}\right]    (3.49)

(b) Sediment flowing out of the channel

The same process as in (a) is used, except that the velocity has changed due to continuity. The channel current velocity is calculated as:

U_2 = U_1\frac{h_1}{h_2}    (3.50)

where h2 is the depth of the channel, h1 is the depth outside of the channel and U1 is the current velocity outside of the channel.

(c) Channel sedimentation

Once the volume of sediment flowing out of the channel has been calculated, the net transport is equal to:

q_t = q_{in} - q_{out}    (3.51)

and the mean infill rate ψ is given by:

\psi = \frac{L\,q_t\,t}{W(1-\varepsilon)}    (3.52)


where L is the length of the section of channel being tested (in this case the algorithm is run for every 100m of the channel length); t is the time step in seconds, which in this case is 3 hours, i.e. 10,800s, as wave time series are most often in 3-hour intervals; W is the width of the channel in m, as the sediment is assumed to distribute evenly over the entire channel; and ε is a coefficient of settlement equal to 0.4. The depth of the channel decreases with each iteration, which increases the current velocity in the channel and hence changes the rate of sedimentation on the next iteration; this feedback has been built into the algorithm. Once the overdredge depth is surpassed, the channel volume is reset, assuming that dredging operations happen quickly, as otherwise calculating the sedimentation whilst subtracting the dredge volume becomes complex. The volume of material and the frequency of dredging are outputs from the model. A sketch of this marching loop is given below.
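In the sketch, `fluxes` is a hypothetical callable returning (q_in, q_out) for the current channel depth and sea state, and the instantaneous reset mirrors the assumption above that dredging happens quickly; the default width and settlement coefficient are illustrative:

```python
def simulate_infill(conditions, fluxes, design_depth, overdredge,
                    dt=10800.0, W=205.0, eps=0.4):
    """March channel infill through a 3-hourly time series (a sketch).

    Returns the times (s) at which maintenance dredging is triggered.
    """
    depth, dredge_times = design_depth, []
    for step, sea_state in enumerate(conditions):
        q_in, q_out = fluxes(sea_state, depth)           # transport fluxes
        rise = (q_in - q_out) * dt / (W * (1.0 - eps))   # bed rise (m)
        depth -= rise                  # shallower channel each iteration
        if design_depth - depth > overdredge:
            dredge_times.append(step * dt)
            depth = design_depth       # instantaneous dredge, then carry on
    return dredge_times
```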

3.10 trestle and terminal layout

With the breakwater and channel cross-sections and orientations calculated, the layout of the terminal is developed using the berth location as a reference point, following methodologies and techniques from Thoresen (2010) that ensure that safe access, manoeuvring and turning areas are provided. If a breakwater is required and attached to the shore, the trestle is integrated with the breakwater. If the breakwater is detached or not required, then the trestle connects the berth directly to the shore. Where an offshore breakwater is required, additional clearance between the berth and breakwater is needed for tugs to manoeuvre the vessel onto the berth during operation; this impacts the level of protection provided by the breakwater, as the larger diffraction angle reduces the wave height reduction achieved at the berth.

The layout of the terminal is developed through a bespoke algorithm that uses trigonometry to calculate the position of the points in the 2D topographic space based on the decision variables that are selected by the user, or by the optimisation algorithm if the model is being used as an automated design tool. Table 3.2 gives a description of the decision variables and Figure 3.25 shows the decision variables mapped to a potential concept, with section A-A showing the breakwater cross-section decision variables that determine the cross-sectional area and therefore the volume.


Figure 3.25: LNG terminal schematic mapped with decision variables


Table 3.2: RaPoLa decision variables

Decision variable   Description                                                       Units
Loc                 Coordinates of the proposed berth location                        (x, y)
∇f                  Gradient of breakwater front slope                                (1:∇f)
∇r                  Gradient of breakwater rear slope                                 (1:∇r)
θCH                 Channel offset from contour normal                                (°)
LHS                 Whether the channel originates from the left of the breakwater   Bool
θBW                 Breakwater/berth offset from contour normal                       (°)
LH                  Length of left-hand side of breakwater                            (m)
RH                  Length of right-hand side of breakwater                           (m)
Sec                 Whether a secondary breakwater is required                        Bool
MOF                 Whether the breakwater has an integrated MOF                      Bool
B                   Width of the breakwater crest                                     (m)
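For concreteness, the decision vector of Table 3.2 can be represented as a small data structure; the sketch below is illustrative and not the tool's actual implementation:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LayoutDecision:
    """RaPoLa decision variables (Table 3.2)."""
    loc: Tuple[float, float]  # (x, y) coordinates of the proposed berth
    grad_front: float         # breakwater front slope, 1:grad_front
    grad_rear: float          # breakwater rear slope, 1:grad_rear
    theta_ch: float           # channel offset from contour normal (deg)
    lhs: bool                 # channel originates left of the breakwater
    theta_bw: float           # breakwater/berth offset from normal (deg)
    lh: float                 # left-hand breakwater length (m)
    rh: float                 # right-hand breakwater length (m)
    sec: bool                 # secondary breakwater required
    mof: bool                 # breakwater has an integrated MOF
    crest_width: float        # breakwater crest width B (m)
```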

3.11 capital and maintenance cost

As the last step of the algorithm, the cost functions can now be com-

puted, providing the output which is required to assess a design. Fig-

ure 3.26 shows the algorithm structure. The calculation of embodied

carbon is not included in this algorithm.

(a) Capital cost

Now that the volumes of the breakwater and channel have been estimated, they can be costed. The capital costs for the main components are input by the user. A cost database exists at HR Wallingford to help develop unit rates for the preliminary costing of maritime engineering projects. The costs are an 'all-in' price which includes manufacturing, transport and construction. Calculating the capital cost is achieved by multiplying the volume of each material by its unit rate; mobilisation costs are then added. HR Wallingford do not currently perform the design of trestles, so they are costed at an 'all-in' rate per metre; this is project specific but is usually in the region of $100,000/m. The length of the trestle is calculated as the distance from the plant to the berth using Pythagoras's theorem. During the early stages of a project it is not known where the sources of the materials will be, so providing a more accurate estimate is difficult; this will likely become a focus of Phase Two through the application of probabilistic techniques such as Monte Carlo simulation.

Figure 3.26: Costing algorithm
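A sketch of the capital cost roll-up described above; the material keys and the dictionary interface are assumptions, and the $100,000/m trestle rate is the indicative, project-specific figure quoted in the text:

```python
def capital_cost(volumes, unit_rates, trestle_length,
                 trestle_rate=100_000.0, mobilisation=0.0):
    """All-in capital cost: volume * unit rate per material, plus
    trestle length at a rate per metre, plus mobilisation costs."""
    cost = sum(vol * unit_rates[material]
               for material, vol in volumes.items())
    return cost + trestle_length * trestle_rate + mobilisation
```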

(b) Maintenance cost

The maintenance costs of the breakwater and trestle are widely accepted to be below 2% of the capital cost per annum (UN, 1985); this value is taken as 1% in most cases and is adjusted to account for future inflation. The maintenance cost of the channel is based on the cost of ensuring that the channel is operable all year round through the removal of sediment deposited by wave and current action (WG121, 2014). The frequency of dredging required to ensure the channel can be safely navigated can range from every few years to every few months depending on the volume, which can make this a significant expenditure over the project life-cycle; in some cases it is even more than the capital cost of the channel (Bakker, 2010). The volume of material for each dredging cycle and the frequency of dredging are output from the channel infill algorithm. The rate for dredging soft material is applied to the volume and multiplied by the number of times this will occur over the design life, adjusted to account for future inflation. Dredging also incurs mobilisation costs on each occasion.

(c) Carbon cost

The volume of carbon released through the construction and maintenance phases of the terminal was tested as a minimisation objective using the HRCAT® database2, although it was not pursued further. The key reasons were:

a) There are many uncertainties with carbon costing.

b) It would detract focus from the 'technical' decision support tool toward the more abstract.

c) It was not in the original scope of works.

d) Its inclusion would have meant the exclusion of something else due to time pressure.

The potential for inclusion of this tool is covered in Recommendations for future research (Section 7.5).

3.12 summary

An in-depth description of the equations which have been used to develop each model has been given. Although the models are all 1D, they have proven to work well in all validation exercises. Many of the techniques are industry standard, such as the breakwater and channel cross-sections; others are novel, such as the breakwater length, channel volume and refracted wave direction algorithms. The methods have been drawn from many areas, and most of the models have been developed from first principles by understanding the underlying theory behind the processes. Due to the breadth of the scope and the depth of knowledge required to safely apply the methodologies, it is unusual for a single person to conduct more than a small portion of the studies. Often, specialist studies using sophisticated models are required to describe physical phenomena, which are then translated into design conditions. This chapter has brought together these methodologies into a single framework which enables the design of an LNG terminal to be automated.

2 HR Wallingford Carbon Accounting Tool (HRCAT®) is a database of carbon emissions for maritime engineering materials and works

Part III

PRACTICAL APPLICATION

4 CASE STUDY AND MANUAL VALIDATION OF THE ENGINEERING FRAMEWORK

4.1 introduction - north star case study

North Star is a remote greenfield site on an exposed coastline; due to client confidentiality, the location cannot be disclosed in this thesis. Its bathymetry is shown in Figure 4.1. The site is subject to a seasonal wave climate in which waves approach from the South-West during the winter months and from the North-West during the summer. Figure 4.2 summarises 28 years of hourly recordings of wind, wave and current data, indicating that the waves emanate from the North-West and the South-West approximately half of the time each and that currents are tidally generated cross-shore. Significant wave heights and periods are slightly larger from the North-West, although South-West is the predominant current direction. Table 4.1 provides the breakwater design conditions for overtopping and armourstone design. The design parameters consist of calculation inputs and design constraints that remain the same for all methods of design. Tables 4.2, 4.3, 4.4 and 4.5 provide the design parameters that have been used; all notation can be found at the end of the manuscript.

Table 4.1: Design wave

Parameter   Value   Unit
Hs          6.5     m
Tp          12.5    s
WL          0.8     m

Table 4.2: Breakwater design parameters

Sd   P      r      Nw     C    BMOF   hMOF   Bg   Rcg    O
2    0.40   0.55   3000   17   5      9.7    4    1.00   0.01


Figure 4.1: North Star site on an exposed coastline

4.1 introduction - north star case study 89

W

S

N

E

5%

10%

15%

20%

25%

30%

35%

40%

0 to 1 1 to 2 2 to 3 3 to 4 4 to 5 5 to 6 6 to 7 7 to 7.734

(a) Hs (m)

W

S

N

E

5%

10%

15%

20%

25%

30%

35%

40%

0 to 2 2 to 4 4 to 6 6 to 8 8 to 10 10 to 1212 to 13.367

(b) Tp (s)

W

S

N

E

2%

4%

6%

8%

10%

12%

14%

16%

18%

0 to 5 5 to 10 10 to 15 15 to 20 20 to 25 25 to 27.91

(c) V (m/s)

W

S

N

E

10%

20%

30%

40%

50%

60%

70%

0 to 0.2 0.2 to 0.4 0.4 to 0.6 0.6 to 0.679

(d) U (m/s).

Figure 4.2: Directional roses summarising the significant wave height Hs (a),peak wave period Tp (b), wind speed V (c) and current velocityU (d) for a 28-year time series of hourly recordings from 1980 -2008. Note: rose directions are: from for waves and to for windand currents.


Table 4.3: Channel design parameters

open   Vcw   Vcc    Vlc    Hs   fs     D
TRUE   30    0.30   0.50   2    1.15   0.50

Table 4.4: Sediment parameters

d50        d90       ν          s      z0
0.000295   0.00035   1.14e-06   0.01   0.006

There are three components to the case study, each of which is discussed in the following sections:

1. Manual replication of the Base Case design using RaPoLa

2. Development of two additional layouts and improvement of the Base Case using the model manually

3. Automatic development of a range of layouts using the optimisation algorithm

4.2 part one - replication of a base case design

At the North Star site, a Base Case concept layout, including volume and cost estimates, had been developed by the engineers at HR Wallingford through a 10-week desk study. This design was recreated by a competent modeller using RaPoLa by first preparing the input data (3 hours), parameterising the layout (1 hour) and running the model (1 min). Figure 4.3 shows the Base Case (LH) and the RaPoLa rendition (RH) and Table 4.6 shows the material volume estimate comparisons for the breakwater, channel and trestle, which are the key structures affected by the layout choices. The comparison of the two layouts demonstrates that the layout geometry is visually accurate and that the primary breakwater, channel and trestle volumes are of a similar order.

The secondary breakwater was over-estimated using the traditional approach: RaPoLa interrogates the bathymetry over the length of each structure and calculates cross-sectional areas at discrete intervals, providing a more accurate estimate than the over-estimated representative cross-section that was used in the traditional approach.


Table 4.5: Design vessel dimensions and parameters

LOA   B    T    Vv   CB     Dt
300   50   12   8    0.85   0.05


Figure 4.3: Layout comparison of traditional approach (LH) and RaPoLa (RH)

oLa’s design resulted in a cost estimation equal to 92% of the tradition-

ally designed concept. The berth operability has been assumed as >95%

in the traditional approach and this has been estimated as 98% using

RaPoLa.

Table 4.6: Infrastructure volume comparison of RaPoLa and traditional approach

Structure      RaPoLa      Traditional   Unit   Similarity
Primary BW     713,758     765,160       m3     0.93
Secondary BW   676,740     937,280       m3     0.72
Channel        4,633,995   4,860,000     m3     0.95
Trestle        1,462       1,500         m      0.97

4.3 part two - rapola as a manual decision support tool

To demonstrate the capability of RaPoLa as a decision support system for developing new concepts, two alternative options have been developed in addition to the Base Case. This was achieved through five design iterations in which the layouts are first initialised and then refined based on their performance against the following criteria: capital cost (capex), operating cost (opex), total cost (total) and downtime, to assess the impact of the decisions made. Using RaPoLa, the designer can evolve several concepts simultaneously with immediate feedback in terms of layout and objective function performance, helping to test the feasibility of multiple options in a short time-frame.

Figure 4.4 shows the evolution of the three options from the initial concept on the left to the final option on the right, and Figure 4.5 shows the evolution of the objective functions, where the cost objectives capex, opex and total are presented as a ratio of the Base Case estimates obtained from RaPoLa and downtime is the actual predicted downtime. This visual representation of the objectives per design iteration allows the user to determine the impact of the decisions made and to plan forthcoming decisions. The evolution of each option is discussed in the following subsections.

Figure 4.4: Design iterations for three concept layouts

4.3.1 Option One

Option one is initially located at a water depth of approximately 14m

and includes a secondary breakwater to limit wave agitation and sed-

iment infill to the basin area. The berth is located both further north and further offshore than is optimal, and this is reflected in the high capital cost shown in Figure 4.5.

Figure 4.5: Performance of the three options per design iteration, where capex, opex and total are presented as a ratio of the Base Case

Table 4.7 shows the decision variables

that have been modified for each iteration. Moving the berth South-East in iteration two results in a significant capital cost saving with no impact on the opex and downtime. Moving the berth slightly toward shore, modifying the channel angle to minimise channel length and modifying the breakwater angle to maximise berth protection results in another capex reduction, taking this option below the cost of the original Base Case. In iteration four, the secondary breakwater is removed, which increases the opex, and the main breakwater length is reduced, lowering the capex and resulting in a small total cost reduction overall, but not enough to warrant permanent removal of the secondary breakwater. In iteration five, the secondary breakwater is reinstated, the gradient of the front slope of the breakwater is increased and the breakwater crest width is reduced, which increases the capex slightly but reduces the total. The main optimisation of this option in comparison to the Base Case is the breakwater, which is 90m shorter and still provides adequate downtime performance.


Table 4.7: Decision variables for each iteration (I) of the design evolution of Option One

I   x      y      θch   LHS     θbr   LH    RH    Sec     MOF    ∇f    ∇r    B
1   3013   3088   0     FALSE   0     350   350   TRUE    TRUE   2.0   1.5   10
2   3013   2500   0     FALSE   0     350   350   TRUE    TRUE   2.0   1.5   10
3   3150   2500   -5    FALSE   0     350   350   TRUE    TRUE   2.0   1.5   10
4   3150   2500   -5    FALSE   7     300   350   FALSE   TRUE   2.0   1.5   10
5   3150   2500   0     FALSE   7     300   350   TRUE    TRUE   1.5   1.5   7

4.3.2 Option Two

This offshore option includes a detached offshore breakwater and a combined nearshore MOF and tug-pen. It is initially located in a water depth which is not quite deep enough for a caisson breakwater to become economically feasible, and it also requires a basin. Moving the berth further offshore removes the requirement for a basin and also allows a caisson breakwater to be used (Table 4.8). However, the downtime is still high due to the relatively short breakwater. Increasing the breakwater length to 1000m overall and changing the trestle orientation and MOF location increases the capex, although there is an improvement in the downtime. Changing the configuration of the trestle and increasing the right side of the breakwater has a small impact on the capex and total, but the downtime is now unacceptably high. In iteration five, the breakwater is placed close to the berth, resulting in a reduction in downtime, which is now potentially acceptable. However, the total cost is in the order of 10% higher than the Base Case and 15% higher than the improved Base Case.

Table 4.8: Decision variables for each iteration (I) of the design evolution of Option Two

I   x      y      θch   LHS     θbr   LH    RH    Sec     MOF     ∇f    ∇r    B
1   2212   2165   0     FALSE   0     350   350   TRUE    FALSE   2.0   1.5   10
2   1700   2300   0     FALSE   0     350   350   TRUE    FALSE   2.0   1.5   10
3   1800   2400   0     TRUE    0     500   500   TRUE    FALSE   2.0   1.5   10
4   1800   2500   0     FALSE   0     450   600   FALSE   FALSE   2.0   1.5   10
5   1800   2500   0     FALSE   0     450   550   TRUE    FALSE   2.0   1.5   10


4.3.3 Option Three

The third option lies in the nearshore area and has the same initial cost as the Base Case. Although the breakwater provides protection against the predominant wave condition, the opex is high due to the expected rate of sediment infill, and as the basin intersects the shoreline, a large retaining wall would be required to protect the basin from rapid tidally-driven sediment infill. Moving the berth offshore slightly results in the terminal orientation changing due to the depth contours at this specific location (Table 4.9). This is remedied in iteration three by moving the berth slightly, and in iteration four the breakwater length is reduced, which has a negligible effect on the expected downtime. In iteration five, the breakwater gradient is increased, the breakwater crest width is reduced and the breakwater length is reduced again, which results in a total cost 5% lower than the Base Case. The downtime is higher than for the Base Case and Option One; however, it is still below the specified 5% limit and this is therefore an acceptable option.

Table 4.9: Decision variables for each iteration (I) of the design evolution of Option Three

I x y θch LHS θbr LH RH Sec MOF ∇f ∇r B

1 3275 1260 0 TRUE 0 350 350 TRUE TRUE 2.0 1.5 10

2 3275 1550 0 TRUE 0 350 350 TRUE TRUE 2.0 1.5 10

3 3000 1600 0 TRUE 0 350 350 TRUE TRUE 2.0 1.5 10

4 3000 1600 0 TRUE 0 300 300 TRUE TRUE 2.0 1.5 10

5 3000 1600 0 TRUE 0 225 300 TRUE TRUE 1.5 1.5 7

4.3.4 Final options

The final options are shown in Figure 4.6, demonstrating three distinct

options. Table 4.10 provides some of the key outputs from the model

including the distance offshore (dist), channel width and length (W, L),

transformed extreme wave (Hmax), crest elevation (Rc), median rock

armour diameter (D50), and concrete armour volume (V). In all cases, the model has chosen to use concrete armour units such as Accropodes, as the required rock armour sizes were too large (> 20 T).



Figure 4.6: Final layouts developed through the manual approach

Table 4.10: Selected model output for the three options shown in Figure 4.6 that were developed manually (all units in m unless stated otherwise)

Option dist W L Hmax B Rc D50 V (m3)

1 1096 205 871 5.70 10 7.33 2.41 4.94

2 2241 205 0 4.94 10 5.15 1.95 2.42

3 895 205 966 6.01 10 7.90 2.53 5.78

4.4 summary and conclusions

RaPoLa is a decision support software system for conceptual layout

planning of maritime terminals. The case study has demonstrated that it is capable of producing refined cost and downtime estimates of a Base Case design as well as producing viable alternative options. Streamlining design procedures into a single framework allows options to be modified quickly, and multiple options can be generated and evolved rapidly through an iterative approach; the designer can test a wide range of design ideas and quickly understand how decisions affect economics and performance. The traditional approach can take weeks to produce a concept and develop a cost estimate; this can be significantly reduced with the option to not only modify, but optimise the layout with respect to the available data. This can have significant implications for the design process, as a large number of concept options can now be generated quickly. Also, options that may not usually be considered due to preconceived notions of infeasibility can be explored, and those notions either confirmed or dispelled, potentially revealing viable yet unconsidered options.


Based on the findings of this chapter, the next chapter will inves-

tigate how the manual layout generator can be enhanced through in-

tegration with an optimisation algorithm. To determine the best opti-

misation strategy, the problem will be solved using single- and multi-objective optimisation algorithms, which will help to determine how this problem is best solved.

5 SINGLE AND MULTI-OBJECTIVE OPTIMISATION OF AN LNG TERMINAL

5.1 introduction

In Chapter 4, RaPoLa was used as an extension of the traditional desk study

with automation briefly introduced. The aim of this chapter is to solve

what is essentially the single objective layout optimisation problem as

a two and three-objective optimisation problem by decomposing the

total cost function into capex and opex and including berth downtime as

an objective rather than a constraint. A macro analysis of the optimi-

sation methods will be conducted in terms of convergence, population

diversity, layout and performance in the objective functions.

5.2 decision variables

The parameters which influence the layout were shown in Figure 3.25

and described in Table 3.2. These are referred to as decision variables in the computing literature and constitute a hyper-dimensional search space $S$ containing all possible solutions, i.e. $\chi \in S$ where $\chi = (x_1, x_2, \ldots, x_n)$.

Navigating this hyperspace is a challenging task for humans as visual-

isation of more than three dimensions is difficult (Pratihar, 2009). In contrast, computers, and specifically EAs, are well suited to this

task as they process a set of solutions in parallel, eventually exploiting

similarities of solutions by crossover (Zitzler and Thiele, 1998).

5.3 objective functions

The key objective is to minimise the whole-life cost (total) of the ter-

minal, which is composed of the capital cost (capex), covering site works, transport and installation of materials, and the


operational cost (opex): the cost of ongoing dredging of the basin and

channel (eq 5.1).

$$
\begin{aligned}
\text{capex} &= B_{w,p} + B_{w,s} + C_h + T_r + B_e + MOF + G_r \\
\text{opex} &= \sum_{t=0}^{DL} \frac{\$_{sed}\,V_{sed}}{(1+i)^t} \\
\text{total} &= \text{capex} + \text{opex}
\end{aligned}
\qquad (5.1)
$$
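To make the discounting in eq 5.1 concrete, the sketch below evaluates the opex term in Python; the unit rate, infill volume, discount rate and design life used are illustrative assumptions rather than case-study values.

```python
# A minimal sketch of the discounted opex sum in eq 5.1; all numbers
# below are illustrative assumptions, not values from the case study.
def opex_npv(rate_per_m3: float, volume_m3: float,
             discount_rate: float, design_life_years: int) -> float:
    """Net present value of the annual maintenance-dredging cost stream."""
    return sum(rate_per_m3 * volume_m3 / (1.0 + discount_rate) ** t
               for t in range(design_life_years + 1))

# Example: $5/m3 unit rate, 100,000 m3/year infill, 5% discount, 25 years.
print(f"opex = ${opex_npv(5.0, 100_000, 0.05, 25):,.0f}")
```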

Additionally, the terminal must provide at least the minimum annual

berth availability. The berth downtime is a function of the breakwater

layout; the two can be equated i.e. f(length) − downtime = 0 using a

root-finding algorithm (Brent, 1973), as discussed in Rustell (2013). The

single-objective optimisation problem is therefore represented as:

$$
\begin{aligned}
\text{minimise} \quad & f(\text{total}) \\
\text{subject to} \quad & f(\text{length}) - \text{downtime} = 0
\end{aligned}
\qquad (5.2)
$$
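To illustrate how this constraint can be satisfied in practice, the sketch below applies Brent's method via SciPy; downtime_for_length is a hypothetical stand-in for RaPoLa's wave-transformation and downtime model, and its exponential shape is assumed purely for the example.

```python
# A minimal sketch of sizing the breakwater so that
# f(length) - downtime = 0, using Brent's root-finding method.
import math
from scipy.optimize import brentq

TARGET_DOWNTIME = 5.0  # % annual downtime limit used in the case study

def downtime_for_length(length_m: float) -> float:
    """Hypothetical stand-in: downtime falls as the breakwater lengthens."""
    return 40.0 * math.exp(-(length_m - 200.0) / 150.0)

# Root of g(L) = downtime(L) - target, bracketed by illustrative bounds.
length = brentq(lambda L: downtime_for_length(L) - TARGET_DOWNTIME,
                a=200.0, b=700.0)
print(f"breakwater length = {length:.0f} m")  # ~512 m for this toy model
```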

To explore the trade-off between downtime and total cost and solve

this as a two objective problem, downtime can also be included as a

minimisation objective:

$$ \text{minimise} \quad f(\text{total},\, \text{downtime}) \qquad (5.3) $$

Additionally, total can be replaced with capex and opex, transforming

the problem to three objectives:

$$ \text{minimise} \quad f(\text{capex},\, \text{opex},\, \text{downtime}) \qquad (5.4) $$
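For readers who wish to see how such a formulation is wired to an optimiser, the sketch below poses eq 5.4 as a three-objective problem using the pymoo library (assumed to be available); rapola_objectives is a hypothetical placeholder for the layout model and the normalised [0, 1] bounds are illustrative, not those of Table 5.2.

```python
# A minimal sketch of eq 5.4 as a three-objective NSGA-II problem in pymoo.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def rapola_objectives(x):
    """Hypothetical placeholder returning (capex, opex, downtime)."""
    return float(np.sum(x**2)), float(np.sum((1 - x)**2)), float(np.abs(x).max())

class TerminalProblem(ElementwiseProblem):
    def __init__(self):
        # Illustrative normalised bounds; the real bounds are in Table 5.2.
        super().__init__(n_var=12, n_obj=3, xl=np.zeros(12), xu=np.ones(12))

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = list(rapola_objectives(x))

res = minimize(TerminalProblem(), NSGA2(pop_size=150), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F.shape)  # the final non-dominated set in objective space
```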

5.4 selection of the optimisation algorithm

There are many optimisation algorithms that are able to find members

of the Pareto Set. The NSGA-II is the preferred optimisation algorithm

at HR Wallingford as it is well established, reliable and has been used

successfully on many projects outside and inside the company. As an

additional step in the selection process, a novel algorithm being devel-

oped at the University of Surrey (Cheng et al., 2015) that uses Gaussian


process-based inverse models that map all found non-dominated solu-

tions from the objective space to the decision space was integrated with

the LNG terminal layout model. The algorithm was able to develop a

Pareto Set of solutions, however, this was not pursued further due to

confidentiality of the IP.

In addition to the NSGA-II, the NSGA-III (Deb and Jain, 2014) was

developed to handle many-objective problems, where algorithms such as the NSGA-II start to struggle. The NSGA-III was followed by the θ-NSGA-III (Yuan, Xu, and Wang, 2014), which claims to be more efficient than the NSGA-III at balancing convergence and diversity in many-objective optimisation problems. This research problem focuses on a maximum of three objectives and the NSGA-II had performed satisfactorily in finding members of the Pareto Set, so it was retained as the optimisation algorithm for the duration of the study.

5.4.1 NSGA-II processes

RaPoLa has been integrated with the Non-Dominated Sorting Genetic Algorithm (NSGA-II) (Deb et al., 2002), as shown in Figure 5.1, to enable it to automatically develop optimised layouts. The following steps describe the main NSGA-II processes:

• Initialise: The NSGA-II initialises N random samples of the de-

cision variables χ = (x1, x2, ...xn) ∈ S and explores the decision

space by creating a population of options using the LNG termi-

nal layout model.

• Evaluate: This is part of the exploitation phase; each option (i) is evaluated against all other options from the population N via two attributes:

1. the crowding distance (idistance)

2. non-domination rank (irank)

which are combined through the partial order ≺n, defined as:

$$
i \prec_n j \quad \text{if} \quad (i_{rank} < j_{rank}) \;\; \text{or} \;\; \left[(i_{rank} = j_{rank}) \;\text{and}\; (i_{distance} > j_{distance})\right]
$$


Figure 5.1: RaPoLa integrated with the NSGA-II (flowchart: start; initialise population of chromosomes through random sampling; evaluate fitness of each chromosome via HRW RaPoLa; if the stopping criteria are met, output the final population; otherwise select parent chromosomes from the mating pool, crossover parent chromosomes to generate offspring, then mutate random genes in the offspring chromosomes and replace the current population)


where the subscript j denotes another option from population N.

The equation indicates that the partial order ≺n prefers options with a lower rank of non-domination and, where options are on the same non-domination front, the option in the less crowded area. Using ≺n, the fittest options are found to become parents of the next generation.

• Stopping: If the maximum number of generations has been reached,

the NSGA-II will stop, else the next evolution commences.

• Select: Based on the crowding distance and non-domination rank, the

fittest children are selected from the mating pool using binary tournament selection with the crowded-comparison operator to form

the parents of the next population (exploitation).

• Crossover: The attributes of the parents are combined based on the

probability of crossover to form the offspring (exploitation).

$$
\begin{aligned}
c_{1,k} &= \tfrac{1}{2}\left[(1-\beta_k)\,p_{1,k} + (1+\beta_k)\,p_{2,k}\right] \\
c_{2,k} &= \tfrac{1}{2}\left[(1+\beta_k)\,p_{1,k} + (1-\beta_k)\,p_{2,k}\right]
\end{aligned}
$$

where $c_{i,k}$ is the $i$th child of the $k$th component, $p_{i,k}$ is the selected parent and $\beta_k$ is a positive random number.

• Mutate: Genes within the offspring population are randomly mutated to bring diversity into the solution pool, based on the probability of mutation, using the following equation (exploration; a sketch of these operators follows this list):

$$ c_k = p_k + \left(p_k^u - p_k^l\right)\delta_k $$

where $c_k$ is the child and $p_k$ is the parent, with $p_k^u$ being the upper bound on the parent component, $p_k^l$ the lower bound and $\delta_k$ a small variation calculated from a polynomial distribution based on the probability of mutation.
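The sketch below gathers the three operators just described into runnable form; the distribution indices (eta) and the bounds in the example are illustrative assumptions rather than values used in this study.

```python
# A minimal sketch of the crowded-comparison order, SBX crossover and
# polynomial mutation used by the NSGA-II; eta values and bounds are
# illustrative assumptions.
import random

def crowded_less(i: dict, j: dict) -> bool:
    """True if option i precedes option j under the partial order described
    above; each option carries 'rank' and 'distance' attributes."""
    return (i["rank"] < j["rank"] or
            (i["rank"] == j["rank"] and i["distance"] > j["distance"]))

def sbx_crossover(p1: float, p2: float, eta: float = 15.0):
    """Simulated binary crossover on one component pair (p1, p2)."""
    u = random.random()
    beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 \
        else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    c1 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    c2 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    return c1, c2

def polynomial_mutation(p: float, lower: float, upper: float,
                        eta: float = 20.0) -> float:
    """Perturb parent component p within [lower, upper] by a small delta
    drawn from a polynomial distribution."""
    u = random.random()
    delta = (2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5 \
        else 1 - (2 * (1 - u)) ** (1 / (eta + 1))
    return min(max(p + (upper - lower) * delta, lower), upper)

# Example on a single decision variable, e.g. a breakwater length:
c1, c2 = sbx_crossover(350.0, 500.0)
child = polynomial_mutation(c1, lower=200.0, upper=700.0)
```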

5.5 model setup

It is well established in the literature that the most important genetic operators are population size and mutation rate (Darwin, 1859; Schaffer et al., 1989).


Defining optimal values for each is not an exact science and there is conflict in the literature. For instance, De Jong (1975) proposed mutation = 0.001, Bäck (1992) states mutation = 1.75/(N√L), where N is the population size and L is the chromosome1 length, and Mühlenbein (1992) suggests that 1/L is generally optimal. The balance between providing

a large enough population to cover the search space, whilst economis-

ing runtime is much debated in the literature (Roeva, Fidanova, and

Paprzycki, 2013).

Using Mühlenbein (1992)'s equation, mutation = 1/12 ≈ 0.083; however, in early attempts at solving this problem, mutation values of < 0.1 tended toward less optimal solutions. Additionally, the effect of a higher mutation value is also of interest, therefore it is proposed that the following values are used:

pm = {0.1, 0.2}
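For reference, the cited heuristics can be checked directly; the sketch below assumes N = 100 and L = 12, as used in this study.

```python
# A quick check of the mutation-rate heuristics cited above.
from math import sqrt

N, L = 100, 12  # population size and chromosome length
print("De Jong (1975):    0.001")
print(f"Bäck (1992):       {1.75 / (N * sqrt(L)):.5f}")  # ~0.00505
print(f"Mühlenbein (1992): {1 / L:.3f}")                 # ~0.083
```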

The following population sizes have been selected based on values used in the literature, where Roeva, Fidanova, and Paprzycki (2013) found diminishing returns for population sizes greater than 100, as well as through the experience of the research team:

p = {100, 150}

These values will be used in all four combinations for each of the

three optimisation methods, resulting in 12 runs altogether (Table 5.1).

This should be sufficient to gain an understanding of how the choice of genetic operator affects the optimisation method. The upper and lower bounds for each of the decision variables are shown in Table 5.2.

The runs will be referenced using the following format within each

objective space:

pop_gen_mutation = 100_100_0.1

1 A chromosome is the way in which the option is encoded in terms of the optimisation algorithm. In the case of this research, a chromosome is a 12-element vector of real numbers, with each number representing one decision variable.


Table 5.1: Genetic operators for 12 runs ranging from one to three objectives

Run No. Objs Population Generation Mutation

1 1 100 100 0.1

2 1 150 100 0.1

3 1 100 100 0.2

4 1 150 100 0.2

5 2 100 100 0.1

6 2 150 100 0.1

7 2 100 100 0.2

8 2 150 100 0.2

9 3 100 100 0.1

10 3 150 100 0.1

11 3 100 100 0.2

12 3 150 100 0.2

Table 5.2: Upper and lower bounds of the decision space S

x y θch LHS θbw LH RH sec θb MOF ∇f ∇r B

lower 1000 500 -5 0 -5 200 200 0 -30 0 1.50 1.50 5

upper 4000 3500 5 1 5 700 700 1 30 1 3 1.50 14

5.6 results

5.6.1 Convergence

The fitness of each population of each run is calculated relative to all

other datasets of the same number of objectives using eq 5.5 where

fmin and fmax are the minimum and maximum of all generations, of

all runs with the same number of objectives. Furthermore, the popula-

tion fitness of the runs with a population of 150 has been divided by 1.5 to normalise the data. This is plotted in Figure 5.2, which indicates

that all methods generally show fitness improvement with successive

generations as anticipated.

$$ F = \frac{f - f_{min}}{f_{max} - f_{min}} \qquad (5.5) $$
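A minimal sketch of this normalisation follows; the raw fitness arrays are illustrative stand-ins for the per-generation population fitness, and the exact point at which the 1.5 scaling is applied is an assumption.

```python
# A minimal sketch of the min-max normalisation of eq 5.5, with the
# population-150 runs divided by 1.5; the fitness values are illustrative.
import numpy as np

def normalise(f: np.ndarray, f_min: float, f_max: float) -> np.ndarray:
    """Map raw fitness values onto [0, 1] using global extremes."""
    return (f - f_min) / (f_max - f_min)

raw = {"100_100_0.1": np.array([9.0, 7.5, 6.2]),
       "150_100_0.1": np.array([13.0, 11.0, 9.4]) / 1.5}  # assumed scaling
f_min = min(a.min() for a in raw.values())
f_max = max(a.max() for a in raw.values())
for name, f in raw.items():
    print(name, normalise(f, f_min, f_max))
```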


Figure 5.2: Convergence of the populations (fitness vs. generation; one column per run 100_100_0.1, 100_100_0.2, 150_100_0.1 and 150_100_0.2, one row per number of objectives)

The number of objectives directly influences the rate of convergence

and its smoothness. One objective results in a fast, smooth profile toward an absolute minimum, with significant improvement occurring in

the first ten generations and convergence within 40 generations in all

cases. Population and mutation seem to have no effect on the conver-

gence rate as each dataset exhibits roughly the same profile.

Two objectives gives a relatively smooth transition, with almost as much relative improvement in the first 10-12 generations as in the single-objective method. In the case of 100_100_0.2, however, the fittest population was discovered at generation 36, after which no population performed as well. In general, though, it could be argued that all runs have converged, as there is no real improvement in any dataset from around

generation 50. Mutation seems to have no effect on the convergence,

although the two datasets with a population of 150 have a slightly

smoother convergence.

The three objective optimisation datasets generally show global im-

provement although local improvement is not always achieved. There


seems to be a marginal positive correlation between population size and both smoothness and convergence rate, and a negative correlation with higher mutation; however, the level of influence is difficult to determine from a sample group of only four.

5.7 results - one objective

The single objective datasets are optimised for cost only; the breakwa-

ter is designed for the target level of downtime by the root-finder. How-

ever, graphing all datasets in terms of the total vs. downtime (Figure 5.3)

shows four distinct points, indicating convergence to a local minimum in at least three, if not all, datasets, which is consistent with an earlier hypothesis. Also consistent is the lack of diversity in each dataset, which

is represented as a single point, except in the case of 150_100_0.2 which

is represented by two practically indistinguishable points, which indi-

cates that this method has not even converged to the local optima. The

root-finder has performed adequately, where downtime is within the

specified range of ±1% in all datasets.

Figure 5.3: Single objective optimisation results (total vs. downtime for the four datasets)

Interestingly, the 100_100_0.1 dataset, with the smaller population and mutation operators, has counter-intuitively produced an option which dominates the other datasets. Investigating the actual layouts

in Figure 5.4 shows that contrary to the other options as well as the

Base Case (Figure 4.3), the layout has the channel emerging from the

opposite side of the breakwater which is a fundamental change. Addi-

tionally, the three other options are practically identical to the manual


layout, but have managed to reduce the total cost by over 13% through

a reduction in the breakwater length, slight counter-clockwise rotation

of the breakwater and slight counter-clockwise rotation of the channel.

The fact that three of the end results are identical indicates the NSGA-II getting stuck in a local optimum, whereas in option 100_100_0.1 it has

Figure 5.4: Layouts of the four single-objective optimisation datasets

The decision variables of the four layouts are shown in Table 5.3

along with the manually developed layout for comparison. The berth

of the three similar options is only 168m further inshore from the man-

ual layout berth, however, the large cost saving is most likely driven

by decreasing RH by 215m. Additionally, decreasing θch to −5° gives a

slightly shorter distance to the limiting depth contour and maximising

θbw to 5◦ aligns the breakwater further toward the predominant wave

direction - North-West. θch and θbw are interestingly at the limits spec-


Figure 5.5: Pareto front of two objective runs (total vs. downtime)

ified in Table 5.2, which poses the question of whether the optimum

actually exceeds these limits2.

Table 5.3: Decision variables of the single objective optimisation methods compared to the Base Case decision variables

Option x y θch LHS θbw LH RH sec θb MOF ∇f ∇r B

100_100_0.1 2583 1807 2 1 5 225 300 FALSE 29 TRUE 1.50 1.50 14

100_100_0.2 3070 2248 -5 0 5 300 225 TRUE -28 TRUE 1.50 1.50 13

150_100_0.1 3124 2249 -5 0 5 300 225 TRUE -19 TRUE 1.50 1.50 14

150_100_0.2 3134 2249 -5 0 4 300 225 TRUE 6 TRUE 1.50 1.50 5

Manual 2973 2202 0 0 0 300 440 TRUE 0 TRUE 1.50 1.50 10

5.8 results - two objectives

Plotting the objective functions of the final populations of runs 5-8 (Fig-

ure 5.5) shows that all methods exhibit practically identical trade-off

curves which suggests that all runs have converged and potentially

found members of the Pareto set. This is in contrast to Figure 5.2 which

indicated that the populations were still converging.

Figure 5.6 shows the 150_100_0.1 dataset with three options repre-

senting both extremes and the knee-point of the trade-off curve and

Table 5.4 shows the decision variables of the options.

Option 45 (low cost/high downtime) has an unacceptably high down-

time caused by the miniature breakwater offering practically no pro-

tection as the breakwater design is triggered by too much downtime,

2 There are practical considerations which define these limits, such as the vessel coming in reasonably head-on to the basin and maintaining a reasonably consistent breakwater cross-section.


rather than optimised for the target level of downtime as with the sin-

gle objective method3.

It is likely that offshore options with a longer breakwater were too expensive and, with poor downtime performance, were dominated and unable to evolve, thereby leaving the low-cost/high-downtime options to evolve. Figure 5.7 shows a potential breakwater layout which, with a cost of 1.45x the Base Case, still has a downtime of 91.4%.

This option has also evidently minimised the trestle length by posi-

tioning the berth just south of the normal from the tie-in-point.

Option 50 (high cost/low downtime) is very similar to Option 65,

the difference being mostly in the breakwater, where θbw is reduced and

LH increased, resulting in a more acute diffraction angle. Visually, this

option is displeasing - the breakwater does not look ’right’ and this

option is therefore also rejected.

Option 65 (compromised cost/downtime) looks proportionally cor-

rect and dominates the Base Case in both objectives. Overall it is the

preferred option of the three and represents the best outcome of the

two-objective optimisation method.

The fact that the multi-objective optimisation method designs the

breakwater based on the decision variables and does not optimise for

the specified downtime like the single-objective method means that op-

tions such as this can propagate through to the final generation. The

alternative is to cap the downtime, however, this is at the cost of di-

versity and realistically speaking, there are only a small number of ty-

pologies in the population; allowing this kind of option to evolve can provide insight into the relative cost if the preferred option becomes prohibitive as more information becomes available.

Table 5.4: Decision variables of three selected options

Option x y θch LHS θbr LH RH Sec MOF ∇f ∇r B

45 1922 1885 -4 0 1 200 200 0 0 1.50 1.50 5

65 2582 1800 2 1 5 215 251 0 1 1.50 1.50 5

50 2822 1842 5 1 -4 379 626 0 1 1.50 1.50 5

3 A threshold can be input to the model so that any solutions which have downtime less than the target are culled from the current population; however, in a concept design it can be very useful to know the pros and cons of several typologies.


Figure 5.6: Pareto trade-off surface composed of 100 options, with three options (45, 65 and 50) representing the extremes and knee point of the results extracted: (a) Total cost vs. Downtime; (b) Option 45: low cost, high downtime; (c) Option 65: compromised; (d) Option 50: high cost, low downtime


Figure 5.7: Offshore option with very long breakwater

5.9 results - three objectives

Figure 5.8 shows the three-objective results as a series of two-dimensional

graphs. The graphs generally exhibit the trade-off shape and this is

most pronounced in the Capex vs. Downtime graphs in the middle

column. Opex and Downtime cannot influence each other and this is

evident in the third column, which shows a scattering of points. The trade-off between

capex and opex is also evident and this will also correlate with the berth

distance from the shore which is an important factor in calculating both

objectives.

There is little similarity between the runs, indicating, similarly to Figure 5.2, that the populations have not converged.

It can be seen that Run 10 has a maximum opex value more than

100x the Base Case (which had a fairly small value), which suggests


this option is located near to the shore where the sediment infill is

very prevalent and this is confirmed by the relatively low capex of 0.79,

indicating that the berth is located in shallow water where the waves

are breaking and losing energy due to the foreshore depth rather than

the breakwater.

The capex vs. downtime objectives (which are the most important

objectives) of the 150_100_0.2 dataset are plotted in Figure 5.10 which

also shows three options corresponding to the extremes and knee point.

Additionally, the decision variables relating to the three options are

shown in Table 5.5.

The first option, 118 (low cost/high downtime) is similar to Option

45 of the two-objective method (Figure 5.6); the same criticisms apply.

Option 121 (compromised cost/downtime) offers a new typology

where θbw is minimised, resulting in a shorter channel length as well

as a slight increase of downtime. This option bears resemblance to the

Base Case, although with a longer breakwater. The inclusion of a sec-

ondary breakwater on the North side reduces the volume of sediment

infill into the basin of which the shallowest point is in 5m water depth.

Option 13 (high cost/ low downtime) is an unlikely solution to be

selected by a designer due to the following factors: the berth is too

far from the normal emanating from the tie-in point which results in

a very long breakwater root (which along with the long breakwater

trunk significantly reduces the downtime). For a berth in this depth

of water, it is likely that a detached breakwater would be selected by

a designer. Additionally, the secondary breakwater is potentially unnecessary, as the basin will be subject to a minimum of sediment infill due

to its shape and shallowest point being in 8m water depth. Although

this configuration provides a low level of downtime, the capital costs

are excessive by a factor of 1.7 in comparison to the Base Case which

would make this option a hard sell to a client when there are cheaper

options that still offer an acceptably low level of downtime. Although

this type of option is unlikely to be desirable by the designer or client,

an important aspect of optimisation is to search the solution space and

to explore where the limits of the trade off are. From a commercial per-

spective, this can provide the designer with the ability to provide the

client with a numerical basis to support discussions on the expected cost of reducing, for instance, downtime.


Figure 5.8: Pareto surface of the three objective runs (9-12) expressed as a series of two-dimensional trade-off curves (Capex vs. Opex, Capex vs. Downtime, Opex vs. Downtime): (a) Run 9: 100x100x0.2; (b) Run 10: 100x100x0.1; (c) Run 11: 150x100x0.2; (d) Run 12: 150x100x0.1


Figure 5.10: Pareto trade-off of the best performing 150_100_0.2 dataset expressed in the two-objective space (total vs. downtime), indicating three options corresponding to the extremes and knee-point of the trade-off curve: (a) Total cost vs. Downtime; (b) Option 118: low cost, high downtime; (c) Option 121: compromised; (d) Option 13: high cost, low downtime


Table 5.5: Decision variables of options 118, 121 and 13 of the 150_100_0.1 dataset of the three objective method

Option x y θch LHS θbr LH RH Sec MOF ∇f ∇r B

118 2507 2241 -5 0 -4 407 275 0 0 1.54 1.50 7

121 3190 2253 -5 1 1 442 415 1 1 1.52 1.50 13

13 3208 1348 -5 0 -3 225 332 0 1 1.50 1.50 5

5.10 comparison of the three optimisation methods

This section discusses the datasets as a cohort, comparing the single-, two- and three-objective approaches using a range of techniques including convergence, Pareto optimality, diversity and objective function performance. For consistency, the 150_100_0.1 dataset from each method will be used, as generally all runs of each respective method produced similar results.

5.10.1 Decision variables

Investigating the decision variables of the three datasets in terms of

the population mean and standard deviation in Table 5.6 confirms that

increasing the number of objectives increases the standard deviation

and hence the diversity. There is also variation in µ; the single method

minimises θch, maximises θbr and always includes an integrated MOF

and secondary breakwater.

The two-objective method prefers the channel on the opposite side of the breakwater, i.e. θch > 0 and θch − σθch > 0, although there is more variation in θbr, with µθbr = 0.05 and σθbr = 4.43, indicating that the full range of this decision variable is being utilised.

Table 5.6: Mean and standard deviation for the decision variables of the 150_100_0.1 dataset of each of the optimisation methods

Obj x y θch LHS θbr LH RH Sec MOF ∇f ∇r B

1 µ 3124 2249 -5 0 5 300 350 1 1 1.50 1.50 14

σ 0 0 0 0 0 0 0 0 0 0 0 0

2 µ 2545.05 1792.52 2.22 0.94 0.05 286.54 335.86 0 0.92 1.50 1.50 5.05

σ 180.96 87.16 2.13 0.24 4.43 65.13 118.27 0 0.27 0 0 0.28

3 µ 2877.43 1842.73 -2.91 0.45 -2.83 376.41 372.38 0.31 0.89 1.52 1.50 7.09

σ 526.41 398.24 3.34 0.50 2.49 106.90 113.62 0.47 0.31 0.02 0 2.67


Figure 5.11: Berth locations of the 150_100_0.2 datasets from each of the optimisation methods (x vs. y, one panel per number of objectives)

The strongest indicator of population diversity is the range of berth locations, since x and y are the dominant decision variables: the distance offshore is directly linked to the cost of the terminal, where nearshore terminals have been shown to be more expensive in both capex and opex.

Figure 5.11 shows the berth location of the three selected populations.

Table 5.7 shows the mean and standard deviation of the berth distance

from shore for each population. There is a clear exponential relation-

ship between the number of objectives and the range of berth locations.

The two-objective options conform to a reasonably straight line from the trestle origin, which is an expected outcome of an optimised population, whereas the three-objective plot shows berths located far to the

south which would require an unreasonably long trestle, resulting in

significant capex. The spread in the clusters is also much larger with a

higher number of objectives. It is clear that the two-objective runs have

produced a population of results more conformant with traditional de-

sign.

Table 5.7: Mean and standard deviations of the berth distance from the trestle for the final population of the 150_100_0.1 dataset for each optimisation method

1 objective 2 objectives 3 objectives

µ(m) 766.64 1345.27 1013.12

σ(m) 0 180.96 526.25


Figure 5.12: All datasets presented in the two objective space (total vs. downtime; one column per run, one row per number of objectives)

5.10.2 Transposition into two-objective space

To compare the results of the single, two and three objective optimisa-

tion methods, the results are transposed into the two-objective space by

combining capex and opex into total for the three objective datasets. The

results are plotted in Figure 5.12 in which the three-objective datasets

exhibit characteristics of a trade-off curve, albeit with relatively scat-

tered points.

Clearly from the graph, the two-objective method produces the most

optimal solutions. As demonstrated in Figures 5.3/ 5.4 and Table 5.3,

the single-objective method became trapped in local minima in at least

one case and was unable to find a solution as optimal as the two-

objective method. This is largely due to the root-finder algorithm optimising the breakwater for the target 5% downtime (95% availability) while the other methods trade off between objectives.

Table 5.8 shows the ratio of options in the final population of each run that dominate the Base Case when the options are transposed into the two-objective space. The single-objective and three-objective results have performed poorly; if both total and downtime are considered of equal value, then the Base Case is the superior option. However, the two-objective results have at least 41% of their population dominating the Base Case, which is a clear improvement and shows that, when


Table 5.8: Ratio of options which dominate the Base Case in both objectives

Name Domination ratio

1 1_100_100_0.1 0.0

2 1_100_100_0.2 0.0

3 1_150_100_0.1 0.0

4 1_150_100_0.2 0.0

5 2_100_100_0.1 0.45

6 2_100_100_0.2 0.45

7 2_150_100_0.1 0.45

8 2_150_100_0.2 0.41

9 3_100_100_0.1 0.01

10 3_100_100_0.2 0.01

11 3_150_100_0.1 0.01

12 3_150_100_0.2 0.01

considering the multi-objectivisation of the objectives, decomposition of total into capex and opex results in the most optimal population4. The key decision space for this problem is the two-objective space, as the total cost and downtime estimate are the key parameters on which decisions are made. This is because capex constitutes the majority (around 80%) of total, so when total is optimised as a single objective, capex is effectively weighted around four times more heavily than opex, which drives total down. Considering this as a three-objective problem, where capex and opex are optimised in parallel and then reconstituted into total, each has been optimised as an identically weighted objective and there is no benefit of weighting.

This leads to the conclusion that this problem is best solved in the two-objective space and this will form the basis for the next chapter of the

thesis which introduces uncertainty into the optimisation framework

as a minimisation objective.

4 It is entirely possible that providing a lower threshold constraint in the single-objective method, i.e. Dt = 2%, would result in a full population of results that dominate the Base Case. This could be tested as part of any additional studies.
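The domination check behind Table 5.8 can be sketched as follows; the toy population and Base Case values are illustrative only.

```python
# A minimal sketch of the domination ratio: the fraction of a final
# population that Pareto-dominates the Base Case in the two-objective
# (total, downtime) space; the numbers below are illustrative.
def dominates(a, b) -> bool:
    """True if a dominates b: no worse in every objective and strictly
    better in at least one (both objectives are minimised)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

base_case = (1.00, 5.0)  # (total as a ratio of Base Case, downtime %)
population = [(0.92, 4.1), (0.97, 5.5), (0.88, 4.9), (1.10, 2.0)]
ratio = sum(dominates(opt, base_case) for opt in population) / len(population)
print(f"domination ratio = {ratio:.2f}")  # 0.50 for this toy population
```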


Figure 5.13: Plot indicating improvement in the population fitness with increasing generation (total vs. downtime; generations 1, 10 and 89 shown), with three options selected from the final generation for further analysis

5.11 comparison between automated and manually developed layouts

Linking the model with the NSGA-II (Deb et al., 2002) enables an op-

timised population of options to be generated automatically. The deci-

sion variables are the same as those used in Section 4.3 and the minimi-

sation objectives are total and downtime and Table 5.9 shows the genetic

operators that have been used in the study.

Table 5.9: Genetic parameters for automated design

Pop Gen Xover Mut

150 100 0.7 0.2

Figure 5.13 shows the populations improving toward a Pareto front (trade-off curve) as the generations progress. Three options have been

selected, representing three distinct areas of the curve and these are ren-

dered in Figure 5.14 and relevant outputs are displayed in Table 5.10.


Table 5.10: Selected model output for the three options developed automati-cally (all units in m unless stated otherwise)

Option dist W L Hmax B Rc D50 V (m3)

1 1884 205 0 5.02 10 5.29 2.28 3.38

2 982 205 741 5.68 13 5.56 2.39 4.87

3 443 205 1542 2.62 11 1.03 1.13 0.48

Figure 5.14: Automatically developed layouts using the NSGA-II (options 1, 2 and 3)

The Pareto front with the three manually developed options is shown

in Figure 5.15, showing that the method used in this research is capable

of developing options that near the Pareto front but are not quite as optimised as is possible. The solid lines represent the Base Case and indicate that the Base Case lies near to the Pareto front. The benefit, however, of using a model to develop concept options is the significant

time saving where options can be developed in minutes rather than

weeks or months and the ease at which options can be modified and

iterated with immediate feedback in terms of how the decisions affect

the layout and objective functions.

5.12 summary and conclusions

Overall, the three methods were successful in producing layout con-

cepts. The NSGA-II converged reasonably well for each run, although

there was a clear relationship between the number of objectives, con-

vergence rate and smoothness. Single-objective optimisation was able

to produce layouts with 5% downtime and with a lower total cost than

the Base Case. If this value was lower, the method may have been able


Figure 5.15: NSGA-II generated Pareto front of options displaying the classic trade-off shape, with the three options developed using the manual approach superimposed (Total Cost Option / Base Case vs. Downtime %)


to produce options which dominated the Base Case in both objectives.

Interestingly, one of the single objective runs converged to a different

option than the other methods, suggesting that one or all of the runs

got trapped in local optima. It was demonstrated that the third objec-

tive was unnecessary in this case study as it did not bring any tangible

improvements to the performance of the objective functions, although

it did yield a more diverse population, and could become more relevant in locations where sediment infill is a more significant factor.

The integration of the NSGA-II enabled a population of options to

be automatically generated and these were demonstrated to dominate

the manually developed options on the Pareto curve. It is possible that further manual design iterations could produce optimised concepts; however, this may not hold for a more complex case. Exploration

of the Pareto front is of keen interest and further studies are currently

underway.

The role of multi-objectivisation was shown to be useful in provid-

ing a richer set of options in the three-objective space. The single-objective

method was successful in producing a single option in all cases, which

was optimised for the target downtime of 5%. Portraying all results in

the two-objective space enabled a comparison between methods and

indicated that the two-objective method performed best overall, which

was demonstrated in Table 5.8. Additionally, two-objective optimisation offered the best compromise between optimality and diversity and produced the most acceptable set of results. The three-objective method produced some options which were unlikely to be taken forward due to strange configurations that, in minimising downtime, pushed the most important objective, capex, to a factor of 1.7x the Base Case.

Based on the comparison in the two-objective space, it is clear that

developing solutions to the two-objective problem yielded the best re-

sults and this method will therefore form the basis for the next chapter

which will introduce uncertainty as an objective function to enable the

level of risk in various options to be assessed.

Part IV

UNCERTAINTY

6 RISK AND UNCERTAINTY ANALYSIS IN LNG TERMINAL CONCEPT DESIGN

6.1 introduction

This chapter focuses on assessing the uncertainty in pre-FEED LNG

terminal layouts using two methods. The first is the direct integration

of a Latin Hypercube Monte Carlo simulator into the optimisation frame-

work, where the NSGA-II will be used to minimise total, downtime and

a new objective, cost uncertainty (Risk), represented as the Coefficient of Variation (CoV) of the total cost PDF that will be output from the analysis. To aid with selection of the 'best' options, the partitioning around medoids (PAM) algorithm will be used to group the options into subsets

corresponding to varying positions on the Pareto Front from which a

macro-analysis will be conducted to determine which cluster performs

best, from which three medoid (centre of the cluster) options will be

extracted and analysed further.

The second method will take the three selected options from the

two-objective method that were developed in Chapter 5 and perform

a post-optimisation uncertainty and sensitivity analysis. This method

will also use a Latin Hypercube Monte Carlo simulator to calculate the over-

all uncertainty. A sensitivity analysis will be performed using samples

to ascertain the influence of each of the uncertain variables on the risk

objective.

The two methods will be compared in terms of layout, cost and risk

to determine which method is most suitable for estimating the risk in

LNG terminal concept layouts.

6.2 problem formulation

The North Star case study will be used to test two methods of opti-

misation under uncertainty, the first being post-optimisation uncertainty analysis, the second uncertainty analysis within optimisation. The environmental conditions are the same as those discussed in Section 4.1. To


enable uncertainty to be minimised, in the case of the first method the coefficient of variation cv will be used as a minimisation objective. This value

will also be used in the second method to assess the overall uncertainty

in the option costs. The uncertain variables within the model are char-

acterised in Table 6.1.

$$
\text{capex} = \text{capex}' + \sum_{t=0}^{N} \frac{0.01\,\text{capex}'}{(1+i)^t} \qquad (6.1)
$$

$$
\text{capex}' = G\,V_{bw}\,\$_{bw} + G\,V_{ch}\,\$_{ch} + L_{tr}\,\$_{tr} + \$_{MOF} + \$_{onshore}
$$

$$
\text{opex} = \sum_{t=0}^{N} \frac{V_{sedi}\,\$_{sedi}}{(1+p)^t} \qquad (6.2)
$$

6.3 development of the probability distributions for random variables

There are twelve inputs that are considered to be probabilistic within

this research and these are described in Table 6.1. It is acknowledged

that aleatoric and epistemic uncertainty exist throughout marine engi-

neering and this is often accounted for by assuming upper and lower

cost ranges on the unit rates and material volumes during a pre-FEED1.

Therefore, the variables have been selected as those which are likely

to have the most impact on the design. The distributions were devel-

oped using proprietary knowledge from within HR Wallingford based

on likely upper, lower and median values for each item. Distributions

that reflect the probability profiles have been selected and these are

typically Gaussian or Beta. Generally, there is a greater probability that

an item cost is greater than expected and a smaller probability that it

will be less. The upper estimate of a product cost can be quite high due

to unknown variables that have not been captured in the original esti-

mate. It is possible that a product will be cheaper than anticipated, but

the lower possible estimate is bound by practical considerations such

as: raw labour, haulage, fuel and mobilisation costs. For example, in

1 During detailed design, a full probabilistic assessment of all possible variables could be undertaken.


the case of the Channel Cost, there is a small probability that the unit

rate will be less than anticipated and a higher probability that it will

be up to 2x the unit rate estimate. Conversely, Sediment Volume has

a unique distribution where there is a reasonable probability that the

volume may be close to the estimate and equal probabilities that it will

be greater or lower, but with varying ranges to cater for the likelihood

that the estimate is greater than expected.

The probability of the random variable falling within any specified

range of values is equal to the integral of the variable’s density over the

specified range. The fundamental equation is shown in eq 6.3, in which

X is the random variable with density fx; the PDF is non-negative everywhere and its integral over the entire space is equal to one (Bohm and Z, 2010).

$$ \Pr\left[a \leqslant X \leqslant b\right] = \int_a^b f_x(x)\,dx \qquad (6.3) $$

Table 6.1 describes the distributions and critical parameters that have

been used for each uncertain variable, based on an (unspecified) LNG

terminal concept that has already been described in a previous chapter,

with unit rates originating from the baseline values in Table ??.

It should be noted that no 'lower limit' to the distributions is given, i.e. it is theoretically possible that the combination of PDF sample and deterministic value would produce a negative number; however, in the case of a Gaussian distribution, for instance, this would require tens of standard deviations and is therefore not statistically significant.

Additionally, the cumulative distribution function (CDF), on which the PDF is based and which denotes the cumulative probability of each value between 0 and 1, is shown in Figure 6.2. It is through the CDF that the random distribution is sampled.
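A minimal sketch of sampling through the CDF (inverse-transform sampling) is given below, using a Beta(2, 5) distribution as a stand-in for one of the unit-rate factors in Table 6.1.

```python
# Inverse-transform sampling: uniform draws are mapped through the
# inverse CDF (the percent-point function, ppf) of the target distribution.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(42)
u = rng.uniform(0.0, 1.0, size=1000)   # cumulative probabilities in [0, 1)
samples = beta.ppf(u, a=2, b=5)        # ppf inverts the Beta(2, 5) CDF

print(samples.mean())  # ~ a / (a + b) = 2 / 7 ~ 0.286
```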

6.4 calculating the optimum number of samples

Defining the optimum number of samples is an important step in un-

certainty analysis and although there are methods proposed in the lit-

erature, the most robust method is usually to conduct several analyses

using a range of sample sizes to determine the smallest number that

gives satisfactory results. According to the central limit theorem, the sum


Table 6.1: Probability distributions used in model and their parameters

Variable Description Unit Distribution Parameters

$bw Unit rate for breakwater armour $/m3 Beta α=2, β=3

$ch Unit rate for capital dredging of rock $/m3 Beta α=2, β=5

$tr Unit rate for trestle $/m Beta α=2, β=5

$MOF Total cost of MOF $ Gaussian µ=1, σ=0.1

$onshore Cost of additional works for onshore basin $ Gaussian µ=1, σ=0.1

$sedi Unit rate for maintenance dredging of sand $/m3 Gaussian µ=1, σ=0.1

p Discount rate for estimating net present value % Beta α=2, β=5

Vbw Uncertainty in breakwater material estimate - Gaussian µ=1, σ=0.1

Vch Uncertainty in capital dredging volume estimate - Gaussian µ=1, σ=0.1

Ltr Uncertainty in trestle length estimates - Gaussian µ=1, σ=0.1

Vsedi Uncertainty in sediment infill volume - Bespoke see Figure 6.1

G Uncertainty in ground conditions - Beta α=2, β=3

Figure 6.1: PDF for the twelve uncertain variables for an unspecified LNG terminal option already discussed in this thesis

Figure 6.2: CDF for the twelve uncertain variables for an unspecified LNG terminal option already discussed in this thesis


of the density function of m random independent variables Xi approx-

imately converges toward a normal distribution as m→∞ . Therefore,

if the number of samples is sufficient, the variable probability distributions Xi (i = 1, 2, ..., m) will be adequately represented and will

produce an approximate normal distribution.

To derive the minimum number of samples required to obtain a rea-

sonable approximation of the supposed normal distribution, the Symmetric Blest Measure of Agreement (SBMA) (Stifanelli et al., 2013) is proposed. The SBMA is similar to the partial rank correlation methods of Spearman (1987) and Kendall and Gibbons (1990), enabling comparison between two sets (X = x1, ..., Y = y1, ...) of independently generated random observations to find out which pairs, i.e. (xi, yi), are consistent between the sets. Setting the potential sample sizes as N =

(ni = 500,ni+1 = 1000,ni+2 = 1500...ni=∞), let R and S be the density

functions approximated from the model through LHS for a sample size

of ni. The SBMA is calculated using eq 6.4, for N[i = 1] which indicates

the correlation between R and S for each pair of N samples and there-

fore the consistency of each sample size. While υ < Υ (the target SBMA),

i is increased and the samples recreated.

$$ \upsilon = \frac{2n+1}{n-1} - \frac{12}{n^2 - n} \sum_{i=1}^{n} \left(1 - \frac{R_i}{n+1}\right)^2 S_i \qquad (6.4) $$
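A sketch of eq 6.4 follows, assuming R and S are the ranks of two paired sample sets; the toy data is illustrative only.

```python
# A minimal sketch of the Symmetric Blest Measure of Agreement (eq 6.4);
# the correlated toy samples below stand in for two LHS-generated sets.
import numpy as np
from scipy.stats import rankdata

def sbma(x: np.ndarray, y: np.ndarray) -> float:
    """SBMA between paired samples, computed from their ranks R and S."""
    n = len(x)
    R, S = rankdata(x), rankdata(y)
    return (2 * n + 1) / (n - 1) \
        - 12 / (n**2 - n) * np.sum((1 - R / (n + 1))**2 * S)

rng = np.random.default_rng(1)
a = rng.normal(size=500)
b = a + rng.normal(scale=0.5, size=500)  # correlated companion set
print(f"SBMA = {sbma(a, b):.2f}")  # approaches 1 for perfectly agreeing ranks
```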

The SBMA method was used to determine the optimum number of

samples by creating two independent sample sets of the following pop-

ulations: N = {500, 1000, 2500, 5000} using LHS. The results shown in

Table 6.2 indicate that there was no statistically significant correlation

for any value of N and, additionally, there was no increase in correlation

with the larger sample sizes. The conclusion that can be drawn is that

for sample sizes up to 5000, there is no correlation between sample size

and the SBMA. This would suggest that there is no benefit in terms of

consistency between repeat trials in using a larger number of samples.

Although LHS can cover the parameter space more efficiently than random sampling, it is slower at generating samples due to a more complex algorithm. Figure 6.3 shows the empirical distribution function for total, calculated using LHS. Discussing the run time of the algorithm with respect to sample size N is best done using Big O notation, from which it can be deduced that equation 2.1 (replicated below) indicates that for an increase in the number of samples N, there is an exponential


Table 6.2: Similarity between two sets of independent samples for N =

{500, 1000, 2500, 5000} using the Symmetric Blest Measure of Agreement (SBMA)

N 500 1000 2500 5000

SBMA 0.41 0.63 0.60 0.53

Figure 6.3: Histograms of N = {100, 250, 500, 1000, 2500, 5000} randomly generated Monte Carlo samples run through RaPoLa

increase in the number of computations and hence time cost. This is

demonstrated in Figure 6.4, which indicates that there is a significant

time increase of several orders of magnitude when the sample size

is increased from N = 1000 to N = 2500, and this is not justified by the increase in smoothness of the output distribution. Considering that

there was also no correlation between N and the SBMA, it is proposed

that the LHS method is used with N = 1000 samples which provides a

good approximation of the empirical distribution function.

$$ N = (M!)^{K-1} \qquad (2.1) $$
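The sketch below shows how such a sample set can be generated with SciPy's qmc module (assuming SciPy >= 1.7) and mapped onto two of the Table 6.1 distributions; the column-to-variable assignment is illustrative.

```python
# A minimal sketch of N = 1000 Latin Hypercube samples across the twelve
# uncertain variables, with two columns mapped onto Table 6.1 distributions.
from scipy.stats import qmc, beta, norm

sampler = qmc.LatinHypercube(d=12, seed=7)
u = sampler.random(n=1000)  # 1000 x 12 stratified points in [0, 1)

bw_rate_factor = beta.ppf(u[:, 0], a=2, b=3)            # $bw: Beta(2, 3)
mof_cost_factor = norm.ppf(u[:, 1], loc=1, scale=0.1)   # $MOF: N(1, 0.1)
print(bw_rate_factor.mean(), mof_cost_factor.mean())
```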

6.5 uncertainty within optimisation

This section will explore the results of the Monte Carlo simulator em-

bedded within the optimisation framework, where uncertainty, expressed as the cv, is an additional minimisation objective. The potential benefits


Figure 6.4: Time to compute the empirical distribution function for N samples

of this approach are that the uncertainty in the final costs will be minimised over N generations and traded against the other objectives in the final population. This will ensure that solutions which have significant levels of risk will be eliminated through the optimisation process and the final population will exhibit options which are Pareto-optimal across the three objectives. Due to the third objective (CoV), it is likely that a more diverse population than the previous two-objective results will be achieved, effectively multi-objectivising the solution space. A drawback

of this method is that the LHS samples will need to be generated for

each model iteration2. Figure 6.5 shows the algorithm of the modelling

process including its link with the NSGA-II and clustering extension.

6.5.1 Model setup

Based on the findings of Chapter 5, the two-objective optimisation

method was the best performing overall. The parameters used in the

best performing study 2_150_100_0.1 have been used for this current

study, with the additional minimisation objective cv, as shown in Table 6.3.

2 Arguably, it is possible to generate a single set of samples which is used for every model run, either as-is or with the columns rearranged, which would give 12! ≈ 479x10^6 combinations and would allow a larger number of samples to be used, as the computation time of the LHS sample set (which is the time bottleneck) has been removed.


Figure 6.5: Diagram of the uncertainty within optimisation algorithm

Table 6.3: Genetic operators for the uncertainty within optimisation dataset

Minimisation objectives Population Generation Mutation Crossover

total, cv, downtime 150 100 0.1 0.7


6.5.2 Convergence

Figure 6.6 shows the convergence of the populations which has been

calculated using eq 5.5, described in Section 5.6.1. It shows that the

NSGA-II, similarly to Figure 5.2, has created an immediate improvement in the overall population for the first 16 generations, after which there is no discernible improvement. Figure 6.7 shows the Pareto front of generations 5, 16 and 72, which correspond to an initial population

and the two fittest overall expressed through a compound graphic in-

cluding scatter graphs, correlation matrix, density graphs, histograms,

boxplots and a barchart.

Figure 6.6: Convergence of the uncertainty within optimisation results over 100 generations using the NSGA-II and eq 5.5

Any improvement in the fitness between populations can be seen

in [1, 2]3 (scatter graph), where the blue points are clearly closer to the

axis than the red and also marginally closer than the green. There is vir-

tually no difference between the populations shown in the histograms

[1:3, 1], although scatter graph [1, 3] clearly shows a greater concentra-

tion of red points toward the upper right quadrant and a clear positive

correlation between total and cv.

Interestingly, the absolute fittest population is of generation 16, which

is unexpected as the NSGA-II should improve the population fitness

with successive generations. However, again referring to Figure 5.2, this

3 Where 1 is the index along the x axis from left to right and 2 is the index of the y axis from bottom to top.


Figure 6.7: Compound graphic of the Pareto fronts of generations 5 (red), 16 (green) and 72 (blue), including density plots, correlation matrices, boxplots, scatter graphs, histograms and a barplot for each combination of objectives (axes: Total, CoV_T, Downtime and Generation)


is consistent with datasets 9-12, which also had three objectives, leading to the hypothesis that the NSGA-II does not perform well with RaPoLa when optimising three objectives simultaneously. Rather than investigating the final population, as is typical of an optimisation study, generation 72 will now be investigated in more detail in the following sub-section.

6.5.3 Clustering into smaller subsets

There are many methods for partitioning datasets into groups with similar characteristics; this is broadly referred to as clustering. Clustering algorithms are often combined with multi-objective EAs to enable the user to extract useful information from a population. Clustering and classification algorithms divide the set S into individual subsets C = {C₁, C₂, …, C_k} where S = ∪_{i=1}^{k} C_i and C_i ∩ C_j = ∅ for i ≠ j. The most well-known is the K-means algorithm, where the centroid of a defined number of clusters is calculated as the point that minimises the within-cluster sum of squares (WCSS) (Hartigan, 1979). Two limitations of the K-means algorithm are that the number of clusters is a user-defined input and that the centroids are mean points, not actual observations. In comparison, the PAM algorithm (Kaufman and Rousseeuw, 1987) uses existing observations as centroids and calculates the dissimilarity matrix to determine cluster membership and the optimum number of clusters, chosen to maximise the average silhouette width (a measure of how well an observation fits its current cluster, classified in Table 6.4).

Table 6.4: Silhouette width classification

Silhouette width range Description

0.71-1.0 Strong structure

0.51-0.70 Reasonable structure

0.26-0.50 Weak structure

<0.26 No structure
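In R this procedure can be sketched with the cluster package, choosing k to maximise the average silhouette width; the data frame of berth coordinates below is an illustrative stand-in, not the thesis's dataset.

    library(cluster)

    # illustrative stand-in for the generation-72 berth locations
    berths <- data.frame(x = runif(150, 1000, 4000),
                         y = runif(150, 1000, 3000))

    best <- NULL
    for (k in 2:6) {
      fit <- pam(berths, k = k)
      # keep the partition with the largest average silhouette width
      if (is.null(best) || fit$silinfo$avg.width > best$silinfo$avg.width)
        best <- fit
    }

    best$medoids            # actual observations used as cluster centres
    best$silinfo$avg.width  # compare against the bands of Table 6.4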

There are many examples of clustering algorithms in the literature: for instance, Khu, Madsen, and Pierro (2008) used clustering within an optimisation algorithm to reduce the number of objectives, and Suga, Kato, and Hiyama (2010) clustered Pareto-optimal window design options to facilitate decision making. In Chapter 1 it was hypothesised that berth location is the most important decision variable in influencing the objective functions. Furthermore, it was established in Chapter 5 that there was a strong positive correlation between berth location and total and a strong negative correlation between berth location and downtime⁴. Building from this, the results of generation 72 of the current study have been clustered by berth location using the PAM algorithm.

Figure 6.8: Final generation of the embedded Monte Carlo simulator with results clustered by berth location. The compound graphic includes density plots, a correlation matrix, boxplots, scatter graphs, histograms and a barplot for each combination of objectives (axes: Total, CoV_T, Downtime and cluster)

Figure 6.8 shows the Pareto front and other statistical metrics of generation 72. There is a clear difference between the clusters: Cluster 2 has a very low downtime but a high total cost and uncertainty. This is interesting as, often, uncertainty ranges decrease with larger estimates. [1, 2] shows the Pareto front of total and downtime and [3, 4] shows

4 total and downtime have a strong negative correlation, as demonstrated in numerous graphs in Chapter 5.


the correlations, indicating a strong negative correlation between the objectives for each cluster (−0.685, −0.815, −0.91). Compared with the Pareto fronts involving cv ([1, 3], [2, 2]), it can be seen that the dominant relationship in the optimisation is between total and downtime. Interestingly, the level of uncertainty is itself the most uncertain parameter; however, it is confined to a small range of 0.07 < cv < 0.12, which is a small uncertainty range (and much less than the 40% estimate of Table 1.2).

Figure 6.9 shows the spatial layout of the options, in which the three clusters correspond to three distinct geographical locations. This corresponds with the hypothesis that berth location (expressed as x, y) is strongly linked with the key objective functions⁵. Figure 6.10 shows boxplots of the cluster groupings for the x and y decision variables, where the cluster groups are well defined and have mean silhouette widths with a 'strong' structure as defined in Table 6.4.

Cluster 1 has a perceived reduction in the cluster spread as shown in Figure 6.9 and a relatively high number of options (Figure 6.8[4, 1]); however, it also has several solutions with a 'weak' structure. This is likely due to the tight internal structure, which skews the ratio of the outliers. Conversely, Cluster 3 has a very high average silhouette width but low membership, indicating that increased cluster populations can lead to both higher and tighter average values but are more susceptible to even slight outliers.

The medoid layouts⁶ are shown in Figure 6.11 and, with decision variables, objective functions and cluster membership in Table 6.5, portray three distinct options. Although the medoid option of Cluster 1 is shown with a long breakwater root, this may not be typical of all options in the cluster. This option clearly requires almost no maintenance dredging; studies have shown that maintenance dredging generally amounts to a maximum of 10-15% of the total cost in terminals with channels subject to significant sediment infill. It is interesting that this option has a low total cost, as the long breakwater does not seem like a cost-effective option; however, this has apparently been offset by no or limited channel dredging coupled with minimal maintenance dredging.

5 total and downtime
6 The layouts which are the centre of the cluster groups - the 'average layouts'.


Figure 6.9: Berth locations of the final generation of optimised options, clustered by berth location and objective function

Figure 6.10: Silhouette widths of the three groups, clustered by berth location


Figure 6.11: Three layouts of the cluster medoids, partitioned with the PAM algorithm under the uncertainty within optimisation approach

Medoid two is similar to the layouts created in Chapter 5, where the channel emanates from the left of the breakwater and a secondary breakwater provides protection from sediment infill. It is likely that the medoid layout is not completely representative of the rest of the cluster. Although it is the most expensive, it is also the most traditional looking and the most likely to be selected.

Medoid three is likely representative of its cluster, as the berth locations (Figure 6.9) and objective functions (Figure 6.8[1, 3]) have a reasonably small spread. Again, the 'offshore' option has minimised total at the expense of downtime, which again highlights the lack of a cost-effective offshore option that provides adequate berth protection.

Table 6.5: Decision variables of three medoid layouts generated through the uncertainty within optimisation approach

Decision variables Objective functions

Option x y θch LHS θbr LH RH Sec MOF ∇f ∇r B Total Cv Dt (%) cluster

79 1826 1993 2 1 -5 468 201 1 1 1.52 1.50 5 0.90 0.09 0.97 1

145 2621 1782 1 1 -5 375 206 1 1 1.50 1.50 7 1.17 0.09 0.81 2

84 2183 2581 -5 0 -5 209 203 1 0 1.51 1.50 11 0.76 0.09 24.23 3

As a final stage in the analysis, the distributions of the decision variables and objective functions⁷ are represented as boxplots for each cluster in Figure 6.12. This indicates, similarly to Figures 6.10 and 6.9, that Cluster 1 has more spread across all objectives, with outliers typically on the upper side of the decision variable and objective outputs.

7 Normalised as x′ = (x − µ)/σ, where µ and σ are the mean and standard deviation of each objective, for all options, of all generations, for all datasets which are in the same objective space.


Figure 6.12: Boxplots of the decision variables and objective functions of the uncertainty within optimisation results, grouped by cluster (values normalised as in footnote 7)


6.6 post-optimisation uncertainty analysis

In Chapter 5, a range of terminal layouts was generated with RaPoLa linked to the NSGA-II using a range of optimisation strategies. The study concluded that the NSGA-II was able to develop solutions which dominated the Base Case and that the two-objective optimisation method was the most successful in balancing population diversity with objective function minimisation. From this, three options representing key locations on the Pareto front were selected and discussed in more detail; these are shown in Figure 6.13. These layouts will now undergo an uncertainty and sensitivity analysis to determine the overall uncertainty in the cost estimates as well as the influence of the input parameters.

Figure 6.13: Options 45, 50 and 65 from the two-objective optimisation method

The model representation of the proposed extension to the framework discussed in Chapter 5, now including uncertainty and sensitivity analysis as additional stages, is shown in Figure 6.14. An important aspect of this is the selection of suitable options from the optimisation set. In Section 6.5.3 the options were selected automatically using the PAM clustering algorithm. In this section, the options are the three that were selected from the 100_150_0.1_2 dataset, which was shown in Chapter 5 to be the best performing overall. This will facilitate the analysis of the impacts of uncertainty on the three options.

The distributions of Figure 6.1 are applied to the deterministic values (which correspond to the means of the distributions) of the options. Based on the findings of Section 6.4, 1000 Latin Hypercube samples are generated using the random LHS method (Stein, 1987) for each option.


Figure 6.14: Diagram of post-optimisation uncertainty analysis algorithm


6.6.1 Results - Post-optimisation uncertainty analysis

Figure 6.15 shows the results of the analysis as a series of histograms representing the output distributions of the 2500 Latin Hypercube samples. The capex and total for each option exhibit a well-defined Gaussian distribution, which suggests that the number of samples was adequate.

Figure 6.15: Post-optimisation uncertainty analysis distributions of capex, opex and total resulting from 2500 Latin Hypercube samples for Options 45, 50 and 65

Option 45 has well-defined histograms across the board and, due to an opex value of 0, its total is identical to its capex. The total of Option 50 is influenced by opex only marginally, which can be seen in a slight increase on the right-hand side of the distribution. Additionally, opex seems to be equally affected by the Vsedi and $sedi parameters. Option 65 has a significant amount of sediment infill, and this is seen in [2, 2], which has an almost identical distribution to the input variable Vsedi shown in Figure 6.1[1, 3]. The effects of this are clearly seen in the distribution for total.


Table 6.6 compares these results to those generated in Section 5.8, where the 5th and 95th centile estimates have been calculated through the standard normal index, also referred to as the 'z-score':

z = µ ± Φσ (6.5)

It indicates that in all cases the deterministic values vary from the probabilistic ones; the ratio (probabilistic/deterministic) is 1.00, 0.85 and 0.85 respectively for the three options. Options 50 and 65 were overestimated in the deterministic approach (which was to be expected), although Option 45 had the same total in both cases, which suggests that perhaps the deterministic approach underestimated in the first place. The deterministic estimate of Option 45 coincides with the probabilistic mean, whereas Options 50 and 65 are both just within the 95th centile estimates.

Table 6.6: Probabilistic values of total for the three layout options compared with the deterministic values generated in Section 5.8

Option Deterministic µ σ Ratio 5th 95th

45 0.80 0.80 0.12 1.00 0.62 0.98

50 1.02 0.87 0.11 0.85 0.68 1.06

65 0.85 0.73 0.09 0.85 0.58 0.88
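The centile calculation of eq 6.5 is a one-liner in R; taking Option 50's µ = 0.87 and σ = 0.11 from Table 6.6:

    phi <- qnorm(0.95)  # standard normal index for the 5th/95th centiles, ~1.645
    mu <- 0.87; sigma <- 0.11
    c(p05 = mu - phi * sigma, p95 = mu + phi * sigma)
    #  p05  p95
    # 0.69 1.05  (Table 6.6 reports 0.68 and 1.06; the small difference
    #             reflects rounding of mu and sigma)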

6.6.2 Bootstrapping the partial rank correlation coefficients

Determining the correlation of two random processes (in this case the input and output variables) is the cornerstone of sensitivity analysis. This can be achieved by calculating the partial rank correlation coefficient (PRCC), through methods ranging from linear regression between the residuals (Chatterjee and Hadi, 2008) to matrix inversion (Tarantola, 2004) and conditional independence tests (Dawid, 1979). Although the linear regression method is computationally inefficient compared to the other methods, it is easier to implement and robust, and has for these reasons been used to calculate the PRCCs of each of the input variables in this study. The influence of each of the uncertain input parameters on the total uncertainty enables key uncertainty drivers to be identified for each option and the trade-off between cost and risk to be explored. As the actual empirical distribution function and its error are unobservable, the resultant population of observations and their residuals, developed through LHS, can be used to estimate the PRCC for each variable through the technique of bootstrapping.

Bootstrapping is a resampling method (eq 6.6) in which bootstrap replicates s(X*¹), s(X*²), …, s(X*ᴮ) are derived from bootstrap samples (X*¹, X*², …, X*ᴮ) drawn randomly with replacement from the observed sample X = (X₁, X₂, …, Xₙ), where B is the number of bootstrap samples of size n, the number of observations. The population is unknown, therefore the true error between a population value and a sample statistic is unknowable. In bootstrap replicates the sample acts as the 'population' and, as this is known, the quality of the replicate sample is measurable and the correlation between a sample and a population can be estimated between specified confidence intervals⁸ as:

se_boot = sqrt( Σ_{b=1}^{B} [ s(X*ᵇ) − s(·) ]² / (B − 1) ) (6.6)

where s(·) = Σ_{b=1}^{B} s(X*ᵇ) / B.

Applying this equation to the results of the three selected options, using 1000 bootstrap replicates⁹, yields Figure 6.16, which indicates the mean and min/max values of the PRCCs for each of the three options with respect to the two cost objectives, capex and opex. It demonstrates that in all options, breakwater volume (Vbw) was the greatest source of uncertainty overall. Interestingly, $bw exerts influence in Options 50 and 65, although not in 45. This is potentially due to Option 45 having a caisson breakwater, where the unit rate is $/m rather than $/m³.

Although ground affects both breakwater and channel costs, its influence on the overall uncertainty is negative, which also corresponds to the breakwater exerting more influence than the channel variables, Vch and $ch. $ch is shown to be more of a driver than Vch in Options 50 and 65, and neither is of influence in Option 45, which has no channel. $sedi is of less significance than Vsedi, which is consistent with the

8 Often 5% and 95%.
9 Bootstrapping is a computationally intensive process and 1000 replicates was found to be the limit on acceptable run time using the chosen programming language.


Figure 6.16: Bootstrapped partial rank correlation coefficient error estimates of the input variables on the capex and opex of the three layout options


distribution ranges in Figure 6.1 ([2, 2], [3, 1]). $slope makes no difference to either cost, as none of the options intersects with the shoreline. In terms of overall uncertainty, Options 50 and 65 are similar, with the key drivers being the breakwater, channel and trestle; for Option 45, however, $MOF is also a factor, as the other options do not have a detached MOF.

6.7 comparison of methods

Both methods were able to propagate uncertainty through the model successfully. The selected options are similar in that there is an obvious nearshore and offshore option in each case, as shown in Figure 6.17. The uncertainty within optimisation layouts are less conventional than those of the other method, in that Option 1 has a long breakwater root which could likely be optimised through further studies including sediment infill. Option 3 has reduced the breakwater to its absolute minimum and placed the berth close to it to reduce wave agitation. This is contrary to Option 45, which maximises the spacing between the berth and breakwater to enable a caisson to be used, which is apparently more cost effective. The uncertainty within optimisation method tended to prefer a secondary breakwater, which suggests that minimising the uncertainty in the opex has a profound effect on the overall uncertainty¹⁰.

The inclusion of the additional objective in the uncertainty within optimisation method has produced a diverse population; however, according to Figure 6.18, which shows the mean, upper and lower cost estimates, this method was able to reduce the uncertainty more but not the overall cost. As the uncertainty ranges are all similar, the effect of a slightly lower cv is offset by the relative difference in cost. Clustering removes a decision that would otherwise need to be made and allows an automated representation of a few key typologies. This is beneficial in that it presents the user with a range of options with varying features, but at the same time it does not facilitate exploration of the solution pool. Overall, though, it is a welcome addition to help understand the range of typologies available in the population.

10 Vsedi and $sedi have the most uncertainty of all the variables.


Figure 6.17: Three layouts from the two respective methods - (a) the cluster medoids partitioned using the PAM algorithm under the uncertainty within optimisation approach; (b) Options 45, 50 and 65, used in the post-optimisation uncertainty analysis method

The input uncertainty ranges used in this study have resulted in a small range of output uncertainties (0.09 ≤ cv ≤ 0.11), which shows that certain inputs exert a similar influence on all objectives, as shown in Section 6.6.2, which used the bootstrap method to estimate the PRCCs of three options during a sensitivity analysis. Searching through all options generated during the uncertainty within optimisation indicates that the min and max cv were 0.078 and 0.134 respectively. This is a relatively small range, potentially because the most uncertain variables, $sedi and Vsedi, exert a minimal influence over the total cost¹¹.

The post-optimisation uncertainty analysis is the most effective method of reducing uncertainty. This is almost certainly due to the smaller search space of the two-objective formulation relative to the three-objective one¹².

11 opex typically being < 15% of capex.
12 The notion of ∞² < ∞³ is of course preposterous; however, it has been demonstrated in both this and the previous chapter that a smaller search space results in a more optimal population when the results are recombined in a multi-objectivisation-style approach.


Figure 6.18: Error bar plot of the mean (µ) and standard deviation (σ) of the uncertainty within optimisation and post-optimisation uncertainty analysis methods, indicating that uncertainty within optimisation reduces the cv more while the post-optimisation uncertainty analysis method produces a lower estimated cost overall

This method did produce the more desirable layouts, and some human refinement could result in a statistically sub-optimal concept, but one which, as more information becomes available, may prove more robust due to the tacit knowledge instilled¹³. A benefit of this method is that the options are already optimised for mean values, although a drawback is that there is no guarantee that in other cases the deterministic and probabilistic total will be correlated and, if they are, there is no surety that the individual components will share the same proportions of total. Where this is the case, the differences between the deterministic and probabilistic values should be noted on a few layouts created using the manual approach of Chapter 4 to understand the ratio of the component costs as well as the overall ratio. Where there is variation between the overall and component cost ratios, the uncertainty within optimisation approach should be taken from the start, as this ensures that all uncertainties are propagated through the model in each iteration, removing any potential false optima¹⁴.

In terms of deciphering the key source(s) of uncertainty in the inputs, only post-optimisation uncertainty analysis is suited to the task. The results of the uncertainty within optimisation could be bootstrapped to estimate the 5% and 95% confidence intervals (or any other intervals between 0% and 100%), and this is a viable option as part of a sensitivity analysis. However, the uncertainty within optimisation method does not in itself have the framework required to conduct a sensitivity analysis.

13 This can be facilitated through the RaPoLa package; the method of doing so is explained in the User Manual contained in the Portfolio.
14 Options that are optimal deterministically, but sub-optimal probabilistically.

6.8 summary and conclusion

The two methods used in this chapter resulted in similar levels of uncertainty but different layouts. The uncertainty within optimisation was generally less successful than post-optimisation uncertainty analysis, both in minimising the mean total cost and in producing desirable layout concepts. As shown in previous chapters, there is a direct link between the distance offshore, capex and opex, and the uncertainty within optimisation method tended toward options in deeper water and with a lower cost.

There was no difference in cv between the medoid layouts presented in the uncertainty within optimisation approach, which is a product of the input distributions and indicates that cv = 0.09 is a practical minimum. It was also demonstrated that the lower and upper bounds across all options developed within all generations were 0.078 < cv ≤ 0.134, which is a reasonably small range and indicates that the level of uncertainty cv is a less important factor than the mean total cost µ. Of course this would change with a different case study; in this case, however, it was an interesting outcome which can help with the layout planning of the terminal.

The impact of including uncertainty in the optimisation process was successful in that it demonstrated that uncertainty was not the driving factor in this case and that the design should focus on total cost minimisation to exert most effect. It must be stressed that individual cases require individual approaches to uncertainty, and this finding should not be assumed to generalise.

Overall, the options selected by both methods challenged some of the traditional layout conventions; however, both methods produced layouts that dominated the Base Case, and this demonstrates the value of utilising an EA during a concept design. As with any design, tinkering and modification are a normal part of the process, and a level of refinement is likely needed to give the layouts a more 'human' touch.

Part V

CONCLUSION

7 conclusion

7.1 introduction

The previous chapters have portrayed the development of a decision

support system for LNG terminal concept design. A range of engineer-

ing, computational and mathematical fields have been woven together

to produce a method of developing LNG terminal layouts and their as-

sociated information in both an automated and manual manner. This

chapter provides a summary of the thesis, including the strengths and

weaknesses of the methodology, restates the contributions to knowl-

edge and identifies several directions in which this research can be

taken forward by the author or others.

7.2 overview of methodology

The overall aim of this research has been to explore the ways in which a decision support tool for LNG terminal concept design can be developed, to enable deterministic and probabilistic design both manually and automatically. The approach taken had three stages, focusing

on Engineering, Optimisation and Uncertainty in series. This enabled the

concept to develop from the simple, manual decision support tool into

the complex probabilistic layout optimiser, ensuring that the founda-

tion for the forthcoming stage was laid properly. Each stage had prop-

erly defined objectives and deliverables (Section 1.5) and was prerequi-

site to the next. The three chapters in Part iii were written in the style of

a journal paper, complete with relevant literature, methodology, results

and analysis. This enables the reader to focus on each concept as an

almost separate entity to fully understand the methods used.

The Engineering aspect - Stage One of this project is covered in Chap-

ter 3 (mathematical basis) and Chapter 4 (practical application). It was

by far the most technically demanding as it encapsulated a wide range

of theories, methodologies and schools of thought into a single frame-

work. This part of the methodology effectively covers almost all groups


in HR Wallingford and required significant input from key members to understand the most important theories and methodologies. Stage One was also the most time consuming, taking as long as the other two areas combined, which is why a significant portion of the thesis focuses on it.

The methodology expressed in Chapter 3 is by no means the only

one that could have been used. There are many schools of thought in

maritime engineering - the chosen method only covers a small fraction

of the possibilities. It is hoped that through the demonstrated explo-

ration of literature, the reader is convinced of the suitability of the cho-

sen method which was demonstrated to fulfil its intended purpose and

was proven to be suitable in facilitating both the manual and automatic

generation of layout options.

Many engineering design theories and methodologies have been integrated into a single framework specific to the design of LNG terminal infrastructure. This had not previously been achieved and is encapsulated in the RaPoLa package available at HR Wallingford (individual modules can also be used in an ad hoc manner, as discussed in the User Guide in the Portfolio).

In Chapter 5, the single-objective problem was decomposed into two- and three-objective ones, i.e. it was multi-objectivised. Multi-objectivisation is a comparatively new field of computational study (circa 2003) and, through its application, it was demonstrated that for this optimisation problem two-objective optimisation offered the best compromise between diversity and optimality. The key reason for this is that all single-objective problems have a single optimum (or set of optima with the same fitness value), whereas the three-objective method, although diverse in the range of layouts, produced many sub-optimal options in comparison.

All datasets with the same number of objectives converged in a similar manner, which indicates that the parameters were appropriate to the optimisation problem. Interestingly, in all cases there was practically no improvement after the first 10-20 generations, which conforms with the notion that population size is of more importance than the number of generations.

A study into the role of multi-objectivisation has indicated that the

most effective number of objectives is two, where capex and opex are

combined into total. This was found to offer the best compromise in

terms of diversity and optimality. The single objective method, as pre-

dicted, converged to a single optimum.


Some of the options that were produced may not be developed by a

human designer as they occasionally departed from conventional lay-

out practice. This is one of the benefits of optimisation - it looks in

places that a human would not. Through the RaPoLa package, the user

is able to modify any aspect of the terminal (as demonstrated in Sec-

tion 4.3) and this can bridge the gap between human and computer

driven design appraisal.

The role of optimisation under uncertainty was explored in Chapter 6. Two methods were used: (1) uncertainty within optimisation, where a Monte Carlo simulator was embedded in the cost evaluation function of the NSGA-II, and (2) post-optimisation uncertainty analysis, where the cost functions of options optimised using the deterministic approach are recalculated taking into account the uncertainties in the inputs.

Techniques such as Latin Hypercube sampling, clustering and bootstrapping were used to estimate the uncertainties in the options. It was found that in this case study the level of uncertainty was of secondary importance to the total cost of the options.

It was concluded that in this case study post-optimisation uncertainty analysis was the most successful method of minimising the key objective, total, and that through bootstrapping, uncertainties in the outputs could be attributed to key input parameters. However, with different distributions of the input variables this would need to be re-evaluated. The deterministic values and the means of their corresponding probability density functions were well correlated, which facilitated the extension of the three options from Chapter 5 into the probabilistic space.

The three stages were shown to be complementary and developed

the decision support tool through logical evolutions toward a fully

probabilistic optimisation framework.

7.3 contributions to knowledge

The contributions to knowledge are as follows:

1. The primary contribution to knowledge is the RaPoLa package

written in the language R. This package enables the full concept

design of an LNG terminal to be undertaken both manually as an


extension of the traditional desk study or automatically using the

NSGA-II. Its modules can be used individually to solve problems

ad hoc. and examples of many of its functions are detailed in the

User Guide.

2. The overall methodology of combining multiple maritime engineering fields into a single framework is novel and useful to maritime engineers. This is the first LNG-specific decision support tool and perhaps one of the first focused on the technical design of maritime infrastructure. The design philosophy in Chapter 3 provides a concise method for the design of the components of an LNG terminal and, although there are many good sources for the design of individual elements, few if any resources include waves, breakwaters, sediments and costing. (A paper written about this tool was awarded the De Paepe-Willems Award 2014.)

3. The 'Breakwater Length Algorithm' discussed in Section 3.6, which enabled the layout problem to be solved using single-objective optimisation.

4. A method for optimising LNG terminal layouts under deterministic cost assumptions was presented in Chapter 5; linking the engineering design framework to an evolutionary algorithm has helped to advance the field of applied optimisation in maritime engineering, in which there are few published papers.

5. The role of multi-objectivisation in design is of keen interest to the

computing research community - even more so due to the real

case study that was discussed. There are relatively few papers

discussing the impact of decomposition and reconstitution during

optimisation and none that the author is aware of that compare

single, two and three-objective optimisation.

6. The comparison of probabilistic and deterministic design methodologies in Section 6.7 has shown that it is possible to optimise LNG terminal layouts during a concept design stage whilst accounting for significant uncertainties.

7. Two methods for optimising conceptual LNG terminal layouts

under uncertainty have been demonstrated to enable the engi-

neer to develop a better understanding of the driving sources of

uncertainty.


7.4 limitations of the research

The following limitations of the research have been observed:

1. Each of the Chapters focused on the same case study, although on

different aspects of it. This was intentionally used to provide con-

tinuity through the evolution of a design that was demonstrated

through the thesis. Inclusion of another case study was consid-

ered, but ultimately discarded as the key concepts are clearly

demonstrated within Part iii.

2. The PDFs that were used in Chapter 6 are relevant to this specific case study and are not necessarily useful for other case studies.

3. Although the results of the multi-objectivisation are convincing toward solving the problem with two objectives, there is no guarantee that this will be true for other terminals. This is especially true where significant currents exist, as this will increase the volume of sediment infill to the channel/basin, which will in turn increase the ratio of opex to capex.

7.5 recommendations for future research

There are a few notable directions in which this research could be pro-

gressed to good effect:

1. Rather than optimising for structural cost, the NSGA-II could optimise for material volume, which would allow a different probabilistic approach to be taken. Material volume estimates are inherently more accurate than cost estimates, since they do not depend on uncertain unit rates, and optimising on volume removes all cost uncertainty from the optimisation process. Building from this, post-optimisation of a range of options, from large breakwater/long trestle/small channel to vice versa, would enable the probabilistic cost modelling to be based on material-optimised options. This would enable the robustness of various options and typologies to be assessed under different cost estimations. Figure 7.1 demonstrates what this could potentially look like, based on an early-stage trial using the NSGA-II to optimise Vbw, Vch and Vtr:


Figure 7.1: (a) Clustered berth locations and (b) medoid layouts for a population of options optimised for the following three objectives: Vbw, Vch and Vtr, indicating a well-distributed coverage of the design space

2. Representing more variables than those included in Figure 6.1, e.g. wave height, sediment size etc., would extend the range of the probabilistic approach and enable a wider range of uncertainties to be included in the design.

3. Further development of the RaPoLa algorithm to include the de-

sign of other types of marine terminals and even ports is of keen

interest. Many of the modules are generic - it is mostly the layout

component that would need to be enhanced.

4. A full trial on various other case studies is also desirable. Rea-

sons for focusing on a single case study were clearly stated in

Chapter 1. RaPoLa is ready for use on new and on-going projects

within HR Wallingford and will hopefully be of use in years to

come.

5. An ANN could potentially be used as a surrogate model1 for

the dynamic mooring assessment of a berthed vessel. This would

need to be trained on a wide range of varied physical model test

data to ensure that it is capable of accurately predicting future

mooring forces.

1 A surrogate model is a model that is trained to approximate a computationally expensive model using a fraction of the computational power.


6. The HRCAT® database could be fully integrated into the frame-

work to enable the carbon cost to be both assessed and minimised

as an additional objective function.

7.6 summary

The main conclusions that can be drawn from this thesis are as follows:

1. The integration of various schools of coastal and maritime engi-

neering into a single framework allowed improvements over the

Base Case to be made using the decision support tool as an exten-

sion of the traditional desk study.

2. The introduction of the NSGA-II in Section 5.11 enabled more optimised options to be developed than was demonstrated with the manual approach. This section provided an effective precursor to the following chapter and enabled the relative benefits of both approaches to be discussed. Automated design can result in high-performance solutions that would not otherwise be found, and this can have a fundamental impact on the concept design.

3. The exploration of multi-objectivisation in Chapter 5 was useful in that it confirmed an existing hypothesis - that more objectives increase diversity at the cost of optimality - and enabled the most appropriate optimisation method to prevail and form the foundation for the optimisation of Chapter 6.

4. The inclusion of uncertainty analysis enabled the relative uncertainty of several options to be assessed through two methods. This provided insight into the relative benefits of each method, and it was demonstrated that the post-optimisation uncertainty analysis was the preferred option in this case.

5. Sensitivity analysis of Section 6.6.2 demonstrated that in all op-

tions, the breakwater cost and volume were the dominating sources

of uncertainty. This approach to design can help shed light on the

relative risks of multiple design alternatives.

6. The model was able to improve on the Base Case in all progressions of the model; less cost means less material is required, contributing to a more sustainable design process.

7. As with any numerical tool, the user should never expect it to produce absolutely correct results. RaPoLa is a decision support tool to help the user explore design alternatives, not to replace the designer. Although the framework can perform many relevant tasks, it should never be used in place of engineering judgement, but rather to complement it.

Part VI

APPENDICES

A sediment infill equation

The following algorithm is the Soulsby-van Rijn method for calculating

the volume of sediment travelling in a given direction due to combined

wave and current action. It has been adapted from Soulsby, 1997, ch 10,

which also provides a full description of the theory and an example of

using the equations to calculate sediment infill to a dredged channel in

pp230-231.

q_t = q_in − q_out (A.1)

The following procedure is performed for q = qin and q = qout and

the continuity equation is used to calculate the depth averaged current

velocity to account for the change in depth due to the presence of the

channel:

U_in h_in = U_out h_out (A.2)

where h_in is the depth of water outside the channel, h_out is the depth of water in the channel and U is the velocity relative to the channel direction.

q = A_s U { [ U² + (0.018/C_D) U²_rms ]^0.5 − U_cr }^2.4 (1 − 1.6 tan β) (A.3)

where A_s = A_sb + A_ss and:

A_sb = 0.005 h_i (d50/h_i)^1.2 / [ (δ_s − 1) g d50 ]^1.2 (A.4)

A_ss = 0.012 d50 D∗^−0.6 / [ (δ_s − 1) g d50 ]^1.2 (A.5)

The drag coefficient due to current alone is:

C_D = [ 0.40 / ( ln(h_i/z0) − 1 ) ]² (A.6)


The root-mean-squared (RMS) wave orbital velocity is calculated following Soulsby, 2006, which provides the following empirical relationship between the significant wave height H_s, the incident water depth h_i and the zero-crossing wave period T_z:

U_rms = (H_s/4) √(g/h_i) exp[ −( (3.65/T_z) √(h_i/g) )^2.1 ] (A.7)

The critical current velocity is:

U_cr = 0.19 (d50)^0.1 log10( 4 h_i / d90 ) for 0.1 mm ≤ d50 ≤ 0.5 mm (A.8)
U_cr = 8.5 (d50)^0.6 log10( 4 h_i / d90 ) for 0.5 mm < d50 ≤ 2 mm

in which the dimensionless grain size is:

D∗ = [ g (δ_s − 1) / ν² ]^(1/3) d50 (A.9)

And the mean infill ψ over a time step is given by:

ψ = L q_t t / ( W (1 − ε) ) (A.10)

where L is the length of the section of channel being tested (in this case the algorithm is run every 100 m of the channel length); t is the time step in seconds, here 3 hours, i.e. 10,800 s, as wave time series are most often at 3-hour intervals; W is the width of the channel in m, as the sediment is assumed to distribute evenly over the entire channel; and ε is a coefficient of settlement equal to 0.4.
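A direct transcription of eqs A.3-A.9 into R is sketched below; the function name, argument layout and the threshold-of-motion guard are choices of this sketch (defaults follow Appendix B.7), not the RaPoLa implementation itself.

    svr_transport <- function(U, h, Hs, Tz, beta = 0,
                              d50 = 0.000295, d90 = 0.00035,
                              s = 2.65, z0 = 0.006,
                              nu = 1.14e-06, g = 9.81) {
      Dstar <- (g * (s - 1) / nu^2)^(1/3) * d50                    # eq A.9
      Asb <- 0.005 * h * (d50 / h)^1.2 / ((s - 1) * g * d50)^1.2   # eq A.4
      Ass <- 0.012 * d50 * Dstar^-0.6 / ((s - 1) * g * d50)^1.2    # eq A.5
      As  <- Asb + Ass
      Cd  <- (0.40 / (log(h / z0) - 1))^2                          # eq A.6
      Urms <- Hs / 4 * sqrt(g / h) *
              exp(-((3.65 / Tz) * sqrt(h / g))^2.1)                # eq A.7
      Ucr <- if (d50 <= 0.0005) 0.19 * d50^0.1 * log10(4 * h / d90)
             else               8.5  * d50^0.6 * log10(4 * h / d90) # eq A.8
      mob <- sqrt(U^2 + 0.018 / Cd * Urms^2) - Ucr
      if (mob <= 0) return(0)            # below the threshold of motion
      As * U * mob^2.4 * (1 - 1.6 * tan(beta))    # eq A.3, m^3/s per m width
    }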

B rapola input files

This Appendix includes a description of the input classes that are briefly

touched upon in Chapter 4.

b.1 all inputs

The name and high-level description of the input classes.

Table B.1: All tables

File Description
1 Breakwater Breakwater design parameters for van der Meer
2 Channel Parameters for channel width using the PIANC WG121
4 Constraints Target operability, allowable overtopping and whether opex cost is calculated
5 Cost Material unit rates
6 Extremes Extreme wave values, which can be an unlimited number of conditions
7 Sediment Site-specific sediment parameters
8 Mooring Mooring thresholds for head and beam waves
9 Trestle Trestle tie-in point on shore
10 Vessel Parameters for design vessel(s) - there can be more than one
11 Waves Wave time series
12 Currents Combined wave and current time series (can be omitted if not available)
13 Bathymetry Bathymetric dataset as [x, y, z] points; these can be regular or irregularly spaced
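In R, assembling these input classes might look like the following minimal sketch; CSV files named after the classes are an assumption of this sketch, and the actual mechanism is documented in the User Guide in the Portfolio.

    classes <- c("Breakwater", "Channel", "Constraints", "Cost",
                 "Extremes", "Sediment", "Mooring", "Trestle",
                 "Vessel", "Waves", "Currents", "Bathymetry")

    # read each class from a CSV named after it, e.g. "Breakwater.csv"
    inputs <- setNames(lapply(classes, function(f)
                                read.csv(paste0(f, ".csv"))), classes)

    inputs$Breakwater$Sd  # e.g. the damage number, default 2 (Table B.2)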


b.2 breakwater

’Breakwater’ holds the variables that are required for the calculation of

armour size and cross-section for the primary, secondary and MOF

breakwater.

Table B.2: Breakwater

Description Default Unit
Sd Damage number 2 -
P Permeability factor 0.4 -
r Roughness factor 0.55 -
Nw Number of waves in a storm 3000 -
Caisson Depth past which a caisson breakwater will be used 17 m
MOF_B Crest width of the detached MOF breakwater 5 m
MOF_depth Dredge depth of the detached MOF 9.7 m
B_groyne Crest width of the groyne 4 m
Rc_groyne Crest height of the groyne/secondary breakwater 1 m
Pr Density of rock 2650 kg/m3
Pw Density of water 1025 kg/m3

b.3 channel

'Channel' holds the variables that are required for the calculation of the channel width, depth and volume, based on the guidance in the latest PIANC channel report, WG121 (2014).

b.4 constraints

'Constraints' holds the parameters with which the terminal must comply, including operability and overtopping.


Table B.3: Channel

Description Default Unit
open Open or protected water TRUE -
Vcw Cross wind speed 30 kts
Vcc Cross current speed 0.3 kts
Vlc Longitudinal current speed 0.5 kts
Hs Significant wave height 2 m
slope.factor Multiplication factor for side slopes 1.15 -
over.dredge Additional dredge depth for sedimentation 0.5 m

Table B.4: Constraints

Description Default Unit
operability Target operability 0.95 -
opex Whether the opex cost is calculated TRUE -
Q Limiting overtopping discharge volume 0.01 m3/m/s
q_rear Overtopping is measured at the rear (T) or front crest (F) TRUE bool
worstCase Whether the worst of all options is taken TRUE bool


b.5 cost

'Cost' holds the material unit rates.

Table B.5: Cost

Description Default Unit
BW Average cost for breakwater armour, underlayer and core 350 $/m3
CH Dredge and dispose cost 50 $/m3
TR High-level trestle cost 1e+05 $/m
MOF High-level total cost for detached MOF 3e+08 -
SLOPE Cost of slope stabilisation if the basin is inshore 1e+08 -
CAISSON Caisson cost 4e+05 $/m
BERTH Total cost for berth and mooring arrangement 6e+07 -
SEDI Cost of removing sediment from the channel 20 $/m3
DISCOUNT Discount rate for net present value 0.06 %/100
DURATION Design life 25 years
PRELIM Additional cost for engineering, mobilisation and management 0.35 %/100
LOOPS Additional factor for trestle loops 0.3 %/100
MAINTAIN Annual maintenance cost of infrastructure relative to capex 0.01 %/100
OPEX Whether the whole-life cost is calculated using a discount rate TRUE -
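The DISCOUNT, DURATION and MAINTAIN parameters imply a standard net-present-value treatment of recurring costs, sketched below in R; the figures are illustrative, not case-study values.

    # present value of an annual cost over the design life
    npv <- function(annual, discount = 0.06, duration = 25) {
      sum(annual / (1 + discount)^(1:duration))
    }

    capex <- 1e9
    npv(0.01 * capex)  # MAINTAIN: 1% of capex per year, discounted at 6%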

b.6 extremes

’Extremes’ are the extreme waves for the design of the breakwater. This

can be a multi-row data frame.


Table B.6: Extremes

Description Default Unit

Hs Significant wave height 5 m

Tp Peak wave period 12.5 s

WL Additional water level 0.77 m

b.7 sediment

'Sediment' holds the parameters required for the estimation of sedimentation into the channel using the Soulsby-van Rijn method (Soulsby, 1997).

Table B.7: Sediment

Description Default Unit

d50 Median sediment diameter 0.000295 m

d90 90th centile sediment diameter 0.00035 m

v Kinematic viscosity of water 1.14e-06 m2/s

s Density ratio of sand and water 2.65 -

Z0 Bed roughness 0.006 m

b.8 mooring

'Mooring' is the berth operability envelope which, if surpassed, will cause downtime.

Table B.8: Mooring

Description Default Unit
x Vector of wave heights that describe the threshold envelope 0 m
y Vector of wave periods that describe the threshold envelope 0 s
waveDir Whether the threshold is for head or beam seas Head -


b.9 trestle

'Trestle' is the coordinate of the onshore trestle tie-in point.

Table B.9: Trestle

Description Default Unit

x x ordinate of tie in point 3890.277470742 -

y y ordinate of tie in point 1691.32756000012 -

b.10 vessel

’Vessel’ is the design vessel parameters.

Table B.10: Vessel

Description Default Unit

LOA Length overall of vessel 300 m

beam Vessel beam 50 m

draught Vessel draught 12 m

speed Vessel speed 8 kts

CB Vessel Block coefficient 0.85 -

b.11 waves

'Waves' is the time series of wave data. This can be raw, i.e. regular observations, or weighted in terms of frequency of occurrence using the 'frequency' column; the column is optional, as the model will truncate the time series into representative conditions with a corresponding frequency of occurrence if it is missing.

Table B.11: Waves

Description Default Unit
HS Significant wave height m
TP Peak wave period s
DIR Direction waves are coming from deg
WS Wind speed m/s

b.12 currents

'Currents' is the time series of wave/current data. As with 'Waves', this can be raw or weighted by a 'frequency' column; the column is optional, as the model will truncate the time series into representative conditions with a corresponding frequency of occurrence if it is missing.

Table B.12: Currents

Description Default Unit

HS Significant wave height m

TP Peak wave period s

DIR Direction waves are coming from deg

CDIR Direction the current is coming from deg

UBAR Current velocity m/s

b.13 bathymetry

’Bathymetry’ is the bathymetric dataset of regularly or irregularly spaced

points.

Table B.13: Bathymetry

Description Default Unit

x x ordinate -

y y ordinate -

z depth at coordinate [x, y] m

bibliography

Ackers, P. and W. R. White (1973). “Sediment transport: a new approach

and analysis.” In: Proc. American Society of Civil Engineers 99.HY11,

pp. 2041–60.

Ahrens, J. P. (1987). Characteristics of reef breakwaters. Report. Depart-

ment of the Army, U.S. Army Corps of Engineers.

Altunkaynak, A and M Ozger (2004). “Temporal significant wave height

estimation from wind speed by perceptron Kalman filtering.” In:

Ocean Engineering 31.10, pp. 1245–1255. issn: 0029-8018. doi: 10.1016/j.oceaneng.2003.12.008. url: http://www.sciencedirect.com/science/ARTICLE/pii/S0029801804000356.

Arends, B. J., S. N. Jonkman, J. K. Vrijling, and P. H. A. J. M. van

Gelder (2005). “Evaluation of tunnel safety: towards an economic

safety optimum.” In: Reliability Engineering & System Safety 90.2-3,

pp. 217–228. issn: 0951-8320. doi: 10.1016/j.ress.2005.01.007.

url: http://www.sciencedirect.com/science/ARTICLE/pii/S0951832005000475.

Auger, A, J Bader, D Brockhoff, and E Zitzler (2012). “Hypervolume-

based multiobjective optimization: Theoretical foundations and prac-

tical implications.” In: Theoretical Computer Science 425.0, pp. 75–103.

issn: 0304-3975. doi: http://dx.doi.org/10.1016/j.tcs.2011.03.012. url: http://www.sciencedirect.com/science/ARTICLE/pii/S0304397511002428.

BP (2016). BP Statistical Review of World Energy June 2016. Electronic

BOOK.

BSI (1994). “BS6349 "Maritime structures - Part 7: Guide to the design

and construction of breakwaters.” In:

— (1999). Maritime Structures. Standard.

— (2000). British Standard Code of Practice for Maritime Structures. Part1: General Criteria. BS 6349: Part 1: 2000 9 (and Ammendments 5488

and 5942). London: British Standards Institution.

Bäck, Thomas (1992). “Self-Adaptation in Genetic Algorithms.” In: Pro-ceedings of the First European Conference on Artificial Life. MIT Press,

pp. 263–271.


Bagnold, R. A. (1963). “Mechanics of marine sedimentation, in:” in: TheSea: Ideas and Observations. Ed. by N. M. Hill. Vol. 3. New York:

Wiley-Interscience, pp. 507–28.

Bailard, J. A. (1981). “An energetics total load sediment transport model

for a plane sloping beach.” In: Journal of Geophysics research 86.C11,

pp. 10938–54.

Bakker, S.A (2010). “Uncertainty analysis of the mud infill prediction

of the OKLNG approach channel: towards a probabilistic infill pre-

diction.” In: Terra et Aqua 120, pp. 9–18.

Bakker, S.A, J. C. Winterwerp, and M. Zuidgeest (2010). Uncertaintyanalysis of the mud infill prediction of the Olokola LNG approach channel.Conference Paper.

Balas, C. E., M. L. Koc, and R. Tur (2010). “Artificial neural networks based

on principal component analysis, fuzzy systems and fuzzy neural

networks for preliminary design of rubble mound breakwaters.” In:

Applied Ocean Research 32.4, pp. 425–433. issn: 0141-1187. doi: http://dx.doi.org/10.1016/j.apor.2010.09.005. url: http://www.sciencedirect.com/science/ARTICLE/pii/S0141118710000763.

Basheer, I. A. and M. Hajmeer (2000). “Artificial neural networks: fun-

damentals, computing, design, and application.” In: Journal of Mi-crobiological Methods 43.1, pp. 3–31. issn: 0167-7012. doi: 10.1016/

s0167-7012(00)00201-3. url: http://www.sciencedirect.com/

science/ARTICLE/pii/S0167701200002013.

Battjes, J. A. (1974). “Surf similarity.” In: pp. 466–480.

Battjes, J. A. and H. W. Groenendijk (2000). “Wave height distributions

on shallow foreshores.” In: Coastal Engineering 40.3, pp. 161–182.

issn: 0378-3839. doi: http://dx.doi.org/10.1016/S0378-3839(00)

00007-7. url: http://www.sciencedirect.com/science/ARTICLE/

pii/S0378383900000077.

Berkhoff, J. C. W (1972). “Computation of combined refraction-diffraction.”

In: Proceedings of the 13th International Conference of Coastal Engineer-ing, pp. 55–69.

Besley (1991). Overtopping of Seawalls, Design and Assessment Manual. Re-

port. HR Wallingford Ltd.

Bhattacharya, B. and D. P. Solomatine (2006). “Machine learning in sed-

imentation modelling.” In: Neural Networks 19.2, pp. 208–214. issn:

0893-6080. doi: 10.1016/j.neunet.2006.01.007. url: http://www.

sciencedirect.com/science/ARTICLE/pii/S089360800600013X.


Blue, F. L. and J. W Johnson (1948). “Diffraction of water waves passing

through a breakwater gap.” In: Trans. A.G.U 30.5, pp. 705–718.

Bohm, G. and G. Zech (2010). Introduction to Statistics and Data Analysis for Physicists. Hamburg: Deutsches Elektronen-Synchrotron. isbn: 9783935702416. doi: 10.3204/DESY-BOOK/statistics. url: http://www-library.desy.de/elbook.html.

Boulougouris, E. K. and A. D. Papanikolaou (2008). “Multi-objective

optimisation of a floating LNG terminal.” In: Ocean Engineering 35,

pp. 787–811. issn: 0029-8018. doi: http://dx.doi.org/10.1016/

j.oceaneng.2008.01.023. url: http://www.sciencedirect.com/

science/ARTICLE/pii/S002980180800036X.

Brent, R. P. (1973). “Chapter 4.” In: Algorithms for Minimization withoutDerivatives. Englewood Cliffs, NJ: Prentice Hall.

Bretschneider, C. L. (1952). “The generation and decay of wind waves

in deep water.” In: Trans. A.G.U. 33.3, pp. 381–389.

— (1958). “Revision; in wave forecasting deep and shallow water.” In:

Proc. 6th Conf. on Coastal Eng 30-67.

— (1968). “Significant waves and the wave spectrum.” In: Ocean Indus-try Feb, pp. 40–46.

Briganti, R., J. van der Meer, M. Buccino, and M. Calabrese (2004).

“Wave Transmission Behind Low-Crested Structures.” In: CoastalStructures 2003, pp. 580–592.

Briggs, M. J., L. E. Borgman, and E. Bratteland (2003). “Probability

assessment for deep-draft navigation channel design.” In: CoastalEngineering 48.1, pp. 29–50. issn: 0378-3839. doi: 10.1016/s0378-

3839(02)00159-x. url: http://www.sciencedirect.com/science/

ARTICLE/pii/S037838390200159X.

Brown, C. B. (1950). “Sediment Transport Wiley.” In: Engineering Hy-draulics. Ed. by H. Rouse. Wiley.

Browne, M., B. Castelle, D. Strauss, R. Tomlinson, M. Blumenstein, and

C. Lane (2007). “Near-shore swell estimation from a global wind-

wave model: Spectral process, linear, and artificial neural network

models.” In: Coastal Engineering 54.5, pp. 445–460. issn: 0378-3839.

doi: 10 . 1016 / j . coastaleng . 2006 . 11 . 007. url: http : / / www .

sciencedirect.com/science/ARTICLE/pii/S0378383906001840.

Bruce, T., J. W. van der Meer, L. Franco, and J. M. Pearson (2009). “Over-

topping performance of different armour units for rubble mound

breakwaters.” In: Coastal Engineering 56.2, pp. 166–179. issn: 0378-


3839. doi: 10.1016/j.coastaleng.2008.03.015. url: http://www.

sciencedirect.com/science/ARTICLE/pii/S0378383908000756.

Burcharth, H. F. (1991). Analysis of rubble mound breakwaters: Development of a partial coefficient system for the design of rubble mound breakwaters. Sub-Group F Report of the PIANC PTC II Working Group 12.

— (1992). Analysis of rubble mound breakwaters. Uncertainty related to environmental data and estimated extreme events. Sub-Group B Report of the PIANC PTC II Working Group 12.

— (2000). “State of the art in conceptual design of breakwaters.” In: Coastal Structures ’99. Ed. by Losada. Rotterdam: Balkema. Chap. 1, pp. 3–20.

Burcharth, H. F. and J. D. Sorensen (2005). “Optimum safety levels for rubble mound structures.” In: Proc. Coastlines, Structures and Breakwaters. Ed. by William Allsop. London: Institution of Civil Engineers, pp. 483–495.

CEE (2007). “Introduction to LNG: An overview on liquefied natural gas (LNG), its properties, organization of the LNG industry and safety considerations.” In: Center for Energy Economics. url: www.beg.utexas.edu/.../lng/.../CEE_INTRODUCTION_TO_LNG_FINAL.p...

CERC (1977). Shore Protection Manual. Coastal Engineering Research Center. Washington D.C.: U.S. Government Printing Office.

— (1984). Shore Protection Manual. 2nd ed. Coastal Engineering Research Center. Washington D.C.: U.S. Government Printing Office.

CIRIA and CUR (2007). The Rock Manual. The Use of Rock in Hydraulic Engineering. 2nd ed. London: CIRIA.

Castillo, C., R. Mínguez, E. Castillo, and M. A. Losada (2006). “An optimal engineering design method with failure rate constraints and sensitivity analysis. Application to composite breakwaters.” In: Coastal Engineering 53.1, pp. 1–25. issn: 0378-3839. doi: 10.1016/j.coastaleng.2005.09.016. url: http://www.sciencedirect.com/science/article/pii/S0378383905001171.

Castillo, Enrique, Roberto Mínguez, and Carmen Castillo (2008). “Sensitivity analysis in optimization and reliability problems.” In: Reliability Engineering & System Safety 93.12, pp. 1788–1800. issn: 0951-8320. doi: 10.1016/j.ress.2008.03.010. url: http://www.sciencedirect.com/science/article/pii/S0951832008000999.


Chang, C. L., S. L. Lo, and S. L. Yu (2005). “Applying fuzzy theory and genetic algorithm to interpolate precipitation.” In: Journal of Hydrology 314.1-4, pp. 92–104. issn: 0022-1694. doi: 10.1016/j.jhydrol.2005.03.034. url: http://www.sciencedirect.com/science/article/pii/S0022169405001484.

Chatterjee, S. and A. S. Hadi (2008). Sensitivity Analysis in Linear Regression. John Wiley & Sons, Inc. isbn: 9780470316764. doi: 10.1002/9780470316764.fmatter. url: http://dx.doi.org/10.1002/9780470316764.fmatter.

Cheng, Jian and Marek J. Druzdzel (2000). “Latin hypercube sampling in Bayesian networks.” In: Proceedings of the Thirteenth International Florida Artificial Intelligence Research Symposium Conference (FLAIRS-2000). AAAI Publishers, pp. 287–292.

Cheng, R., Y. Jin, K. Narukawa, and B. Sendhoff (2015). “A Multiobjective Evolutionary Algorithm Using Gaussian Process-Based Inverse Modeling.” In: IEEE Transactions on Evolutionary Computation 19.6, pp. 838–856. issn: 1089-778X. doi: 10.1109/TEVC.2015.2395073.

Coello Coello, C. A. (2002). “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art.” In: Computer Methods in Applied Mechanics and Engineering 191.11-12, pp. 1245–1287. issn: 0045-7825. doi: 10.1016/s0045-7825(01)00323-1. url: http://www.sciencedirect.com/science/article/pii/S0045782501003231.

Coello Coello, C. A., A. H. Aguirre, and E. Zitzler (2007). “Evolutionary multi-objective optimization.” In: European Journal of Operational Research 181.3, pp. 1617–1619. issn: 0377-2217. doi: 10.1016/j.ejor.2006.08.003. url: http://www.sciencedirect.com/science/article/pii/S0377221706005418.

Coello Coello, Carlos A., David A. van Veldhuizen, and Gary B. Lamont (2007). Evolutionary Algorithms for Solving Multi-Objective Problems. 2nd ed. New York: Kluwer Academic / Plenum Publishers.

Cohon, J. L. (1978). Multiobjective Programming and Planning. New York: Academic Press.

Darwin, Charles (1859). On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. London: John Murray.

Dawid, A. P. (1979). “Conditional Independence in Statistical Theory.” In: Journal of the Royal Statistical Society. Series B (Methodological) 41.1, pp. 1–31. issn: 0035-9246. doi: 10.2307/2984718. url: http://dx.doi.org/10.2307/2984718.

De Jong, Kenneth Alan (1975). “An Analysis of the Behavior of a Class of Genetic Adaptive Systems.” AAI7609381. PhD thesis. Ann Arbor, MI, USA.

De Rouck, J., H. Verhaeghe, and J. Geeraerts (2009). “Crest level assessment of coastal structures – General overview.” In: Coastal Engineering 56.2, pp. 99–107. issn: 0378-3839. doi: 10.1016/j.coastaleng.2008.03.014. url: http://www.sciencedirect.com/science/article/pii/S0378383908000707.

Deb, K. and H. Jain (2014). “An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints.” In: IEEE Transactions on Evolutionary Computation 18.4, pp. 577–601. issn: 1089-778X. doi: 10.1109/TEVC.2013.2281535.

Deb, K., A. Pratap, S. Agarwal, and T. Meyarivan (2002). “A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II.” In: IEEE Transactions on Evolutionary Computation 6.2, pp. 182–197. issn: 1089-778X. url: http://www.iitk.ac.in/kangal/Deb_NSGA-II.pdf.

Di Sciuva, M. and D. Lomario (2003). “A comparison between Monte Carlo and FORMs in calculating the reliability of a composite structure.” In: Composite Structures 59.1, pp. 155–162. issn: 0263-8223. doi: 10.1016/S0263-8223(02)00170-8. url: http://www.sciencedirect.com/science/article/pii/S0263822302001708.

Diez, M. and D. Peri (2010). “Robust optimization for ship conceptual design.” In: Ocean Engineering 37.11-12, pp. 966–977. issn: 0029-8018. doi: 10.1016/j.oceaneng.2010.03.010. url: http://www.sciencedirect.com/science/article/pii/S0029801810000740.

EIA (1998). Natural Gas 1998: Issues and Trends. Electronic book. url: http://www.eia.gov/pub/oil_gas/natural_gas/analysis_publications/natural_gas_1998_issues_trends/pdf/chapter2.pdf.

Einstein, H. A. (1942). “Formulae for transportation of bed-load.” In: Trans. ASCE 107, pp. 561–577.


Elbeltagi, E., T. Hegazy, and D. Grierson (2005). “Comparison among five evolutionary-based optimization algorithms.” In: Advanced Engineering Informatics 19.1, pp. 43–53.

Elchahal, G., R. Younes, and P. Lafon (2013). “Optimization of coastal structures: Application on detached breakwaters in ports.” In: Ocean Engineering 63, pp. 35–43. issn: 0029-8018. doi: 10.1016/j.oceaneng.2013.01.021. url: http://www.sciencedirect.com/science/article/pii/S0029801813000437.

Etemad-Shahidi, A. and M. Bali (2012). “Stability of rubble-mound breakwater using H50 wave height parameter.” In: Coastal Engineering 59.1, pp. 38–45. issn: 0378-3839. doi: 10.1016/j.coastaleng.2011.07.002. url: http://www.sciencedirect.com/science/article/pii/S0378383911001220.

Etemad-Shahidi, Amir and Lisham Bonakdar (2009). “Design of rubble-mound breakwaters using M5′ machine learning method.” In: Applied Ocean Research 31.3, pp. 197–201. issn: 0141-1187. doi: 10.1016/j.apor.2009.08.003. url: http://www.sciencedirect.com/science/article/pii/S014111870900073X.

Farley, B. and W. A. Clark (1954). “Simulation of Self-Organizing Systems by Digital Computer.” In: IRE Transactions on Information Theory 4.4, pp. 76–84.

Fonseca, C. M. and P. J. Fleming (1993). Multiobjective genetic algorithms.

Garza-Fabre, Mario, Gregorio Toscano-Pulido, and Eduardo Rodriguez-Tello (2015). “Multi-objectivization, fitness landscape transformation and search performance: A case of study on the HP model for protein structure prediction.” In: European Journal of Operational Research 243.2, pp. 405–422. issn: 0377-2217. doi: 10.1016/j.ejor.2014.06.009. url: http://www.sciencedirect.com/science/article/pii/S0377221714004925.

Geeraerts, J., P. Troch, J. De Rouck, H. Verhaeghe, and J. J. Bouma (2007). “Wave overtopping at coastal structures: prediction tools and related hazard analysis.” In: Journal of Cleaner Production 15.16, pp. 1514–1521. issn: 0959-6526. doi: 10.1016/j.jclepro.2006.07.050. url: http://www.sciencedirect.com/science/article/pii/S0959652606003118.

Geeraerts, J., A. Kortenhaus, J. A. Gonzalez-Escriva, J. De Rouck, and P. Troch (2009). “Effects of new variables on the overtopping discharge at steep rubble mound breakwaters – The Zeebrugge case.” In: Coastal Engineering 56.2, pp. 141–153. issn: 0378-3839. doi: 10.1016/j.coastaleng.2008.03.013. url: http://www.sciencedirect.com/science/article/pii/S0378383908000732.

Gero, J. S. and V. A. Kazakov (1998). “Evolving design genes in space layout planning problems.” In: Artificial Intelligence in Engineering 12.3, pp. 163–176. issn: 0954-1810. doi: 10.1016/S0954-1810(97)00022-8. url: http://www.sciencedirect.com/science/article/B6V1X-3T5GN4S-3/2/6f0fe2a36bf4aa5f195be2566bd5a06e.

Goda, Y. (1970). “A synthesis of breaker indices.” In: Trans. Japan Soc. Civil Engrs. 2.2, pp. 227–230.

— (1975a). “Deformation of irregular waves due to depth-controlled wave breaking.” In: Report of the Port and Harbour Research Institute 14.3, pp. 59–106.

— (1975b). “Irregular wave deformation across the surf zone.” In: Coastal Engineering in Japan 18, pp. 13–26.

— (2004). “A 2-D random wave transformation model with gradational breaker index.” In: Coastal Engineering Journal 46.1, pp. 1–38.

— (2008). “Wave setup and longshore currents induced by directional spectral waves; Prediction formulas based on numerical computation results.” In: Coastal Engineering Journal 50.4, pp. 397–440.

— (2010a). “Reanalysis of regular and random breaking wave statistics.” In: Coastal Engineering Journal 52.1, pp. 71–106.

Goda, Y. and Y. Suzuki (1975). “Computation of refraction and diffraction of sea waves with Mitsuyasu’s directional spectrum.” In: Technical note of Port and Harbour Research Institute 230, p. 45.

Goda, Y., T. Takayama, and Y. Suzuki (1978). “Diffraction Diagrams for Directional Random Waves.” In: Proceedings of the 16th Coastal Engineering Conference, ASCE 1, pp. 628–650.

Goda, Yoshimi (1974). New wave pressure formulae for composite breakwaters. Conference Paper.

— (2009). “A performance test of nearshore wave height prediction with CLASH datasets.” In: Coastal Engineering 56.3, pp. 220–229. issn: 0378-3839. doi: 10.1016/j.coastaleng.2008.07.003. url: http://www.sciencedirect.com/science/article/pii/S0378383908001257.


Goda, Yoshimi (2010b). Random Seas and Design of Maritime Structures. 3rd ed. Advanced Series on Ocean Engineering – Volume 33. London: World Scientific Publishing Co. Pte. Ltd.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Crawfordsville, Indiana: RR Donnelley.

Gouldby, B. (2007). “Machine Learning Techniques. Review and Potential Applications at HR Wallingford.” In: HR Wallingford Internal Report IT 542, December 2007.

Grass, A. J. (1981). Sediment transport by waves and currents. Report. SERC, London Centre of Marine Technology.

Hall, J. W. and J. Lawry (2003). “Fuzzy label methods for constructing imprecise limit state functions.” In: Structural Safety 25.4, pp. 317–341. issn: 0167-4730. doi: 10.1016/s0167-4730(03)00003-1. url: http://www.sciencedirect.com/science/article/pii/S0167473003000031.

Hasselmann, K. et al. (1973). “Measurements of wind-wave growth and swell decay during the Joint North Sea Wave Project (JONSWAP).” In: Ergänzungsheft zur Deutschen Hydrographischen Zeitschrift Reihe A(8).12, p. 95.

Higgins, A. J., S. Hajkowicz, and E. Bui (2008). “A multi-objective model for environmental investment decision making.” In: Computers & Operations Research 35.1, pp. 253–266. issn: 0305-0548. doi: 10.1016/j.cor.2006.02.027. url: http://www.sciencedirect.com/science/article/pii/S0305054806000657.

Highway, Central Federal Lands (2008). Velocity Variations in Cross-Hole Sonic Logging Surveys. Report. url: http://www.cflhd.gov/programs/techDevelopment/geotech/velocity/documents/05_chapter_3_numerical_modeling.pdf.

Holland, J. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.

Hudson, R. Y. (1958). “Design of quarry stone cover layer for rubble-mound breakwaters.” In: Research Report No. 2-2, Waterways Experiment Station, Coastal Engineering Research Centre, Vicksburg, MS.

Hughes, Steven A. (1993). Physical Models and Laboratory Techniques in Coastal Engineering. Advanced Series on Ocean Engineering. London: World Scientific.

Hunt, J. N. (1979). “Direct solution of the wave dispersion equation.” In: Journal of the Waterway, Port, Coastal and Ocean Division, ASCE 105.WW4, pp. 457–459.


Iglesias, G., J. Rabunal, M. A. Losada, H. Pachon, A. Castro, and R. Carballo (2008). “A virtual laboratory for stability tests of rubble-mound breakwaters.” In: Ocean Engineering 35.11-12, pp. 1113–1120. issn: 0029-8018. doi: 10.1016/j.oceaneng.2008.04.014. url: http://www.sciencedirect.com/science/article/pii/S0029801808001121.

Iribarren, C. R. and C. Nogales (1949). “Protection des ports.” In: Proceedings XVIIth International Navigation Congress, Section II, Communication 4, pp. 31–80.

Isebe, D., P. Azerad, B. Mohammadi, and F. Bouchette (2008). “Optimal shape design of defense structures for minimizing short wave impact.” In: Coastal Engineering 55.1, pp. 35–46. issn: 0378-3839. doi: 10.1016/j.coastaleng.2007.06.006. url: http://www.sciencedirect.com/science/article/pii/S0378383907000725.

Iwagaki, Y. (2011). “Hyperbolic waves and their shoaling.” In: Coastal Engineering Proceedings 1.11. issn: 2156-1028. url: https://journals.tdl.org/icce/index.php/icce/article/view/2510.

Jin, Y. (2010). Multi-Objective Machine Learning. The Netherlands: Springer.

— (2011). “Surrogate-assisted evolutionary computation: Recent advances and future challenges.” In: Swarm and Evolutionary Computation 1.2, pp. 61–70. issn: 2210-6502. doi: 10.1016/j.swevo.2011.05.001. url: http://www.sciencedirect.com/science/article/pii/S2210650211000198.

Kamphuis, J. W. (1991). “Incipient Wave Breaking.” In: Coastal Engineering 15, pp. 185–203.

Kaufman, L. and P. J. Rousseeuw (1987). “Clustering by means of Medoids.” In: Statistical Data Analysis Based on the L1 Norm and Related Methods. Ed. by Y. Dodge. North-Holland, pp. 405–416.

Kendall, M. and J. D. Gibbons (1990). Rank Correlation Methods. 5th ed. A Charles Griffin Title. isbn: 0195208374. url: http://www.amazon.com/exec/obidos/redirect?tag=citeulike07-20&path=ASIN/0195208374.

Khalafallah, A. and K. El-Rayes (2011). “Automated multi-objective optimization system for airport site layouts.” In: Automation in Construction 20.4, pp. 313–320. issn: 0926-5805. doi: 10.1016/j.autcon.2010.11.001. url: http://www.sciencedirect.com/science/article/pii/S0926580510001809.


Khu, Soon-Thiam, Henrik Madsen, and Francesco di Pierro (2008). “Incorporating multiple observations for distributed hydrologic model calibration: An approach using a multi-objective evolutionary algorithm and clustering.” In: Advances in Water Resources 31.10, pp. 1387–1398. issn: 0309-1708. doi: 10.1016/j.advwatres.2008.07.011. url: http://www.sciencedirect.com/science/article/pii/S0309170808001243.

Kicinger, R., T. Arciszewski, and K. De-Jong (2005). “Evolutionary computation and structural design: A survey of the state-of-the-art.” In: Computers & Structures 83.23-24, pp. 1943–1978. issn: 0045-7949. doi: 10.1016/j.compstruc.2005.03.002. url: http://www.sciencedirect.com/science/article/B6V28-4FXWWKG-6/2/e0d158504861f6b4eacc30620a7a4085.

Kim, Dong Hyawn and Woo Sun Park (2005). “Neural network for design and reliability analysis of rubble mound breakwaters.” In: Ocean Engineering 32.11-12, pp. 1332–1349. issn: 0029-8018. doi: 10.1016/j.oceaneng.2004.11.008. url: http://www.sciencedirect.com/science/article/pii/S0029801805000284.

Kim, Dookie, D. H. Kim, and S. Chang (2008). “Reply to discussion on ‘Application of probabilistic neural network to design breakwater armor blocks’ by Zekai Şen, Tarkan Erdik, Yavuz Karsavran.” In: Ocean Engineering 35.11-12, p. 1284. issn: 0029-8018. doi: 10.1016/j.oceaneng.2008.04.004. url: http://www.sciencedirect.com/science/article/pii/S0029801808000905.

Knowles, J. D., R. A. Watson, and D. W. Corne (2001). “Reducing Local Optima in Single-Objective Problems by Multi-objectivization.” English. In: Evolutionary Multi-Criterion Optimization. Ed. by E. Zitzler, L. Thiele, K. Deb, C. A. Coello Coello, and D. Corne. Vol. 1993. Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 269–283. isbn: 978-3-540-41745-3. doi: 10.1007/3-540-44719-9_19. url: http://dx.doi.org/10.1007/3-540-44719-9_19.

Knuth, D. E. (1974). “Computer Programming as an Art.” In: Communications of the ACM 17.12, pp. 667–673.

Koç, Mehmet Levent and Can Elmar Balas (2013). “Reliability analysis of a rubble mound breakwater using the theory of fuzzy random variables.” In: Applied Ocean Research 39, pp. 83–88. issn: 0141-1187. doi: 10.1016/j.apor.2012.10.007. url: http://www.sciencedirect.com/science/article/pii/S0141118712000855.


Kraus, N. (1984). “Estimate of breaking wave height behind structures.” In: Journal of Waterway, Port, Coastal and Ocean Engineering 110.2, pp. 276–282.

Kuczera, G. and E. Parent (1998). “Monte Carlo assessment of parameter uncertainty in conceptual catchment models: the Metropolis algorithm.” In: Journal of Hydrology 211, pp. 69–85.

Kumar, S., H-T. Kwon, K-H. Choi, J. Hyun-Cho, W. Lim, and I. Moon (2011). “Current status and future projections of LNG demand and supplies: A global prospective.” In: Energy Policy 39.7, pp. 4097–4104. issn: 0301-4215. doi: 10.1016/j.enpol.2011.03.067. url: http://www.sciencedirect.com/science/article/pii/S0301421511002618.

Kweon, H. M. and Y. Goda (1996). “A parametric model for random wave deformation by breaking on arbitrary beach profiles.” In: Proc. 25th Int. Conf. Coastal Eng.

Lan, Ji., J. T. M. van Doorn, and D. ten Hove (2010). Probabilistic Design of Channel Widths. Conference Paper.

Lee, Cheol-Eung, Seung-Woo Kim, Dong-Heon Park, and Kyung-Duck Suh (2012). “Target reliability of caisson sliding of vertical breakwater based on safety factors.” In: Coastal Engineering 60, pp. 167–173. issn: 0378-3839. doi: 10.1016/j.coastaleng.2011.09.005. url: http://www.sciencedirect.com/science/article/pii/S0378383911001657.

Li, J., X. Dong, J. Shangguan, and M. Hook (2011). “Forecasting the growth of China’s natural gas consumption.” In: Energy 36.3, pp. 1380–1385. issn: 0360-5442. doi: 10.1016/j.energy.2011.01.003. url: http://www.sciencedirect.com/science/article/pii/S0360544211000041.

Lochtefeld, D. F. and F. W. Ciarallo (2011). “Helper-objective optimization strategies for the Job-Shop Scheduling Problem.” In: Applied Soft Computing 11.6, pp. 4161–4174. issn: 1568-4946. doi: 10.1016/j.asoc.2011.03.007. url: http://www.sciencedirect.com/science/article/pii/S1568494611001037.

— (2014). “An analysis of decomposition approaches in multi-objectivization via segmentation.” In: Applied Soft Computing 18, pp. 209–222. issn: 1568-4946. doi: 10.1016/j.asoc.2014.01.005. url: http://www.sciencedirect.com/science/article/pii/S1568494614000064.


Lochtefeld, D. F. and F. W. Ciarallo (2015). “Multi-objectivization Via Decomposition: An analysis of helper-objectives and complete decomposition.” In: European Journal of Operational Research 243.2, pp. 395–404. issn: 0377-2217. doi: 10.1016/j.ejor.2014.11.041. url: http://www.sciencedirect.com/science/article/pii/S0377221714009916.

Londhe, S. N. and M. C. Deo (2003). “Wave tranquility studies using neural networks.” In: Marine Structures 16.6, pp. 419–436. issn: 0951-8339. doi: 10.1016/j.marstruc.2003.09.001. url: http://www.sciencedirect.com/science/article/pii/S0951833903000455.

Longuet-Higgins, M. S. (1957). “On the transformation of a continuous spectrum by refraction.” In: Proceedings of the Cambridge Philosophical Society 53.1, pp. 226–229.

Longuet-Higgins, M. S. and R. W. Stewart (1962). “Radiation stress and mass transport in gravity waves, with application to ’surf beats’.” In: Journal of Fluid Mechanics 13, pp. 481–504.

Lykke Andersen, T. and H. F. Burcharth (2009). “Three-dimensional investigations of wave overtopping on rubble mound structures.” In: Coastal Engineering 56.2, pp. 180–189. issn: 0378-3839. doi: 10.1016/j.coastaleng.2008.03.007. url: http://www.sciencedirect.com/science/article/pii/S0378383908000768.

McBride, M. W., J. V. Smallman, and S. W. Huntington (1998). Guidelines for the design of approach channels. Conference Paper.

McCulloch, W. and W. Pitts (1943). “A Logical Calculus of Ideas Immanent in Nervous Activity.” In: Bulletin of Mathematical Biophysics 5.4, pp. 115–133.

McKay, M. D., R. J. Beckman, and W. J. Conover (1979). “A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code.” In: Technometrics 21.2, pp. 239–245.

Meer, J. W. Van der (1987). “Stability of breakwater armour layers: design formulae.” In: Coastal Engineering 11.3, pp. 219–239. issn: 0378-3839. doi: 10.1016/0378-3839(87)90013-5. url: http://www.sciencedirect.com/science/article/pii/0378383987900135.

— (1988). “Deterministic and probabilistic design of breakwater armour layers.” In: Proc. ASCE, Journal of WPC and OE 114.1.


Melching, C. S. (1995). “Reliability estimation in computer models of watershed hydrology.” In: Computer Models of Watershed Hydrology. Ed. by V. P. Singh. Water Resources Publications, pp. 69–118.

Michalewicz, Z. and D. B. Fogel (2004). How to Solve It: Modern Heuristics. Berlin, New York: Springer.

Miles, J. C., G. M. Sisk, and C. J. Moore (2001). “The conceptual design of commercial buildings using a genetic algorithm.” In: Computers & Structures 79.17, pp. 1583–1592. issn: 0045-7949. doi: 10.1016/s0045-7949(01)00040-2. url: http://www.sciencedirect.com/science/article/pii/S0045794901000402.

Minguez, R. and E. Castillo (2009). “Reliability-based optimization in engineering using decomposition techniques and FORMS.” In: Structural Safety 31.3: Structural Reliability at ESREL 2006, pp. 214–223. issn: 0167-4730. doi: 10.1016/j.strusafe.2008.06.014. url: http://www.sciencedirect.com/science/article/pii/S0167473008000647.

Mitsuyasu, H. (1970). “On the growth of spectrum of wind-generated waves (2) – spectral shape of wind waves at finite fetch.” In: Proc. Japanese Conf. Coastal Engrg, pp. 1–7.

Mitsuyasu, H., F. Tasai, S. Suhara, S. Mizuno, M. Ohkusu, T. Honda, and K. Rikiishi (1975). “Observation of the directional spectrum of ocean waves using a cloverleaf buoy.” In: Journal of Physical Oceanography 5, pp. 750–760.

Mobarek, I. E. and R. L. Wiegel (1966). “Diffraction of wind generated water waves.” In: Proc. 10th Conf. Coastal Engg, pp. 185–206.

Moore, C. J., J. C. Miles, and D. W. G. Rees (1997). “Decision support for conceptual bridge design.” In: Artificial Intelligence in Engineering 11.3, pp. 259–272. issn: 0954-1810. doi: 10.1016/s0954-1810(96)00041-6. url: http://www.sciencedirect.com/science/article/pii/S0954181096000416.

Mühlenbein, Heinz (1992). “How Genetic Algorithms Really Work: Mutation and Hillclimbing.” In: PPSN, pp. 15–26.

Nagai, K. (1972). “Diffraction of the irregular sea due to breakwaters.” In: Coastal Engineering in Japan 15, pp. 59–69.

Nielsen, P. (1992). Coastal Bottom Boundary Layers and Sediment Transport. Vol. 4. Advanced Series on Ocean Engineering. Singapore: World Scientific.


Nimtawat, A. and P. Nanakorn (2009). “Automated layout design of beam-slab floors using a genetic algorithm.” In: Computers & Structures 87.21-22, pp. 1308–1330. issn: 0045-7949. doi: 10.1016/j.compstruc.2009.06.007. url: http://www.sciencedirect.com/science/article/B6V28-4WRD64H-1/2/169fd5acb55ee9f2d176c16a1db5d56f.

Owen, M. W. (1980). “Design of seawalls allowing for wave overtopping.” In: HR Wallingford, Report EX 924.

PROVERBS (2005). “Probabilistic design tools for vertical breakwaters.”

Panizzo, A. and R. Briganti (2007). “Analysis of wave transmission behind low-crested breakwaters using neural networks.” In: Coastal Engineering 54.9, pp. 643–656. issn: 0378-3839. doi: 10.1016/j.coastaleng.2007.01.001. url: http://www.sciencedirect.com/science/article/pii/S0378383907000154.

Patsiatzis, D. I. and L. G. Papageorgiou (2002). “Optimal multi-floor process plant layout.” In: Computers & Chemical Engineering 26.4-5, pp. 575–583. issn: 0098-1354. doi: 10.1016/S0098-1354(01)00781-5. url: http://www.sciencedirect.com/science/article/B6TFT-44YWM6B-9/2/9be32f847df1eebe5c864945dd3f9f8d.

Penney, W. G. and A. T. Price (1944). Diffraction of sea waves by breakwaters. Report. Directorate of Miscellaneous Weapons Development, Tech History.

Pierson, W. J., G. Neumann, and R. W. James (1955). “Practical Methods of Observing and Forecasting Ocean Waves by Means of Wave Spectra and Statistics.” In: U.S. Navy Hydrographic Office.

Pierson, W. J. (1950). The interpretation of crossed orthogonals in wave refraction phenomena. TM-21. Washington, D. C.: U.S. Army Corps of Engineers, Beach Erosion Board.

Pratihar, D. K. (2009). “Non-Linear Dimensionality Reduction Techniques.” In: Encyclopedia of Data Warehousing and Mining. Ed. by John Wang. IGI Global, pp. 1416–1424. isbn: 9781605660103. url: http://dblp.uni-trier.de/db/reference/dataware/dataware2009.html#Pratihar09.

Pullen, T., W. Allsop, T. Bruce, A. Kortenhaus, H. Schüttrumpf, and J. W. Van der Meer (2007). EurOtop – Wave Overtopping of Sea Defences and Related Structures: Assessment Manual. Environment Agency (UK), Expertise Netwerk Waterkeren (NL), Kuratorium für Forschung im Küsteningenieurwesen (DE). url: http://www.overtopping-manual.com/.

ROM (2002). Recommendations for maritime structures. General procedure and requirements in the design of harbor and maritime structures. Part 1. Legal Rule or Regulation.

— (2007). Recommendations for the design of maritime configuration of ports, approach channels and harbour basins. Legal Rule or Regulation.

Rafiq, M. Y., M. A. Beck, and N. Hughes (2008). “Interactive visualisation as a powerful decision support tool for conceptual design.” In: Proceedings of the 25th CIB W78 Int. Conf. on Information Technology: Improving the Management of Construction Projects through IT Adoption. Ed. by R. Rischmoller. 15–15 July, Santiago, Chile, pp. 156–165. isbn: 978-956-319-361-9.

Rafiq, M. Y., M. A. Beck, I. Packham, and S. Denham (2005). “Evolutionary Computation and Visualisation as Decision Support Tools for Conceptual Building Design.” In: Innovation in Civil and Structural Engineering. Saxe-Coburg Publications. Chap. 3. isbn: 1-874672-24. url: http://www.saxe-coburg.co.uk/pubs/descrip/sl2005.htm.

Rafiq, Yaqub and Chengfei Sui (2010). “Interactive visualisation: A support for system identification.” In: Advanced Engineering Informatics 24.3, pp. 355–366. issn: 1474-0346. doi: 10.1016/j.aei.2010.02.003. url: http://www.sciencedirect.com/science/article/pii/S1474034610000054.

Reeve, D. (2009). Risk and Reliability: Coastal and Hydraulic Engineering. London: Taylor & Francis.

Reeve, D., A. J. Chadwick, and C. Fleming (2004). Coastal Engineering: Processes, Theory and Design Practice. London: Spon Press.

Reynoso-Meza, G., X. Blasco, J. Sanchis, and J. M. Herrero (2013). “Comparison of design concepts in multi-criteria decision-making using level diagrams.” In: Information Sciences 221, pp. 124–141. issn: 0020-0255. doi: 10.1016/j.ins.2012.09.049. url: http://www.sciencedirect.com/science/article/pii/S0020025512006421.

Rijn, L. C. van (1984). “Sediment transport part I: Bed load transport.” In: Proceedings of the ASCE, Journal of the Hydraulics Division 110.HY10, pp. 1431–1456.

Rochester, N., J. H. Holland, L. H. Habit, and W. L. Duda (1956). “Tests on a cell assembly theory of the action of the brain, using a large digital computer.” In: IRE Transactions on Information Theory 2.3, pp. 80–93.

Roeva, O., S. Fidanova, and M. Paprzycki (2013). “Influence of the Population Size on the Genetic Algorithm Performance in Case of Cultivation Process Modelling.” In: FedCSIS. Ed. by Maria Ganzha, Leszek A. Maciaszek, and Marcin Paprzycki, pp. 371–376. url: http://dblp.uni-trier.de/db/conf/fedcsis/fedcsis2013.html#RoevaFP13.

Rustell, M. J. F. (2013). “Breakwater Layout Optimisation Using A Root-Finding Algorithm.” In: Ports 2013, pp. 1929–1938. doi: 10.1061/9780784413067.197. url: http://ascelibrary.org.oala-proxy.surrey.ac.uk/doi/abs/10.1061/9780784413067.197.

— (2014a). “Multi-objective optimisation of a liquefied natural gas terminal under risk and uncertainty (Poster).” In: Proceedings of the Annual Surrey Engineering Doctorate in Sustainability for Engineering and Energy Systems Conference. Guildford, UK.

— (2014b). “Optimising a Breakwater Layout using an Iterative Algorithm.” url: http://www.pianc.org/downloads/dwa/BREAKWATER%20LENGTH%20OPTIMISATION%20USING%20AN%20ITERATIVE%20ALGORITHM.%20M.Rustell_2014.pdf.

Rustell, M. J. F., A. Orsini, S-T. Khu, Y. Jin, and B. Gouldby (2014a). “Decision Support for Designing New LNG Terminals.” In: Proceedings of the 33rd PIANC World Congress. San Francisco, USA.

— (2014b). Decision Support for New LNG Terminals. Conference Paper.

— (2014c). Optimizing an LNG Terminal Subject to Uncertainty. Conference Paper.

Sarker, R., K. H. Liang, and C. Newton (2002). “A new multiobjective evolutionary algorithm.” In: European Journal of Operational Research 140.1, pp. 12–23.

Savage, J. C. D., C. Miles, C. J. Moore, and J. C. Miles (1998). “The interaction of time and cost constraints on the design process.” In: Design Studies 19.2, pp. 217–233. issn: 0142-694X. doi: 10.1016/S0142-694X(98)00004-0. url: http://www.sciencedirect.com/science/article/B6V2K-3TB67RS-7/2/de16b5e015d2ed8a99e25356e3a2c617.

Savic, D. (2002). “Single-objective vs. Multiobjective Optimisation for Integrated Decision Support.” In: Integrated Assessment and Decision Support: Proceedings of the First Biennial Meeting of the International Environmental Modelling and Software Society, pp. 7–12.


Schaffer, J. David, Rich Caruana, Larry J. Eshelman, and Rajarshi Das (1989). “A Study of Control Parameters Affecting Online Performance of Genetic Algorithms for Function Optimization.” In: Proceedings of the 3rd International Conference on Genetic Algorithms. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., pp. 51–60. isbn: 1-55860-066-3. url: http://dl.acm.org/citation.cfm?id=645512.657403.

Sexsmith, R. G. (1999). “Probability-based safety analysis – value and drawbacks.” In: Structural Safety 21.4, pp. 303–310. issn: 0167-4730. doi: 10.1016/s0167-4730(99)00026-0. url: http://www.sciencedirect.com/science/article/pii/S0167473099000260.

Shaw, D., J. Miles, and A. Gray (2008). “Determining the structural layout of orthogonal framed buildings.” In: Computers & Structures 86.19-20, pp. 1856–1864. issn: 0045-7949. doi: 10.1016/j.compstruc.2008.04.009. url: http://www.sciencedirect.com/science/article/pii/S0045794908000941.

Shrestha, D. L. (2009). Uncertainty Analysis in Rainfall-Runoff Modelling: Application of Machine Learning Techniques. Leiden, The Netherlands: CRC Press/Balkema.

Shuto, N. (1974). “Nonlinear long waves in a channel of variable section.” In: Coastal Engineering in Japan 17, pp. 1–12.

Simoes, M. G., J. L. M. Tiquilloca, and H. M. Morishita (2002). “Neural-Network-Based Prediction of Mooring Forces in Floating Production Storage and Offloading Systems.” In: IEEE Transactions on Industry Applications 38.2.

Sommerfeld, A. (1896). “Mathematische Theorie der Diffraction.” German. In: Mathematische Annalen 47.2-3, pp. 317–374. issn: 0025-5831. doi: 10.1007/BF01447273. url: http://dx.doi.org/10.1007/BF01447273.

Soulsby, R. (2006). Simplified calculation of wave orbital velocities. Electronic book section.

Soulsby, Richard (1997). Dynamics of marine sands. HR Wallingford. London: Thomas Telford.

Spearman, C. (1987). “The proof and measurement of association between two things. By C. Spearman, 1904.” In: The American Journal of Psychology 100.3-4, pp. 441–471. issn: 0002-9556. url: http://view.ncbi.nlm.nih.gov/pubmed/3322052.


Sreekanth, J. and B. Datta (2010). “Multi-objective management of saltwater intrusion in coastal aquifers using genetic programming and modular neural network based surrogate models.” In: Journal of Hydrology 393.3-4, pp. 245–256. issn: 0022-1694. doi: 10.1016/j.jhydrol.2010.08.023. url: http://www.sciencedirect.com/science/article/pii/S0022169410005408.

Srinivas, N. and K. Deb (1994). “Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms.” In: Evolutionary Computation 2.3, pp. 221–248.

Stein, Michael (1987). “Large Sample Properties of Simulations Using Latin Hypercube Sampling.” In: Technometrics 29.2, pp. 143–151. issn: 0040-1706. doi: 10.2307/1269769. url: http://dx.doi.org/10.2307/1269769.

Stifanelli, P. F., T. M. Creanza, R. Anglani, V. C. Liuzzi, S. Mukherjee, F. P. Schena, and N. Ancona (2013). “A comparative study of covariance selection models for the inference of gene regulatory networks.” In: Journal of Biomedical Informatics 46.5, pp. 894–904. issn: 1532-0464. doi: 10.1016/j.jbi.2013.07.002. url: http://www.sciencedirect.com/science/article/pii/S1532046413001019.

Suga, K., S. Kato, and K. Hiyama (2010). “Structural analysis of Pareto-optimal solution sets for multi-objective optimization: An application to outer window design problems using Multiple Objective Genetic Algorithms.” In: Building and Environment 45.5, pp. 1144–1152. issn: 0360-1323. doi: 10.1016/j.buildenv.2009.10.021. url: http://www.sciencedirect.com/science/article/pii/S0360132309003217.

Sverdrup, H. U. and W. H. Munk (1947). “Wind, Sea, and Swell; Theory of Relations for Forecasting.” In: U.S. Navy Hydrographic Office.

TAW (2002). “Technical Report Wave Run-up and Wave Overtopping at Dikes.” In: TAW, Technical Advisory Committee on Flood Defences. Author: J. W. van der Meer.

Tarantola, A. (2004). Inverse Problem Theory and Methods for Model Parameter Estimation. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics. isbn: 0898715725.

Thoresen, C. (2010). Port Designer’s Handbook. 2nd ed. London: Thomas Telford Limited.


Torum, A., M. N. Moghim, K. Westeng, N. Hidayati, and L. Arntsen (2012). “On berm breakwaters: Recession, crown wall wave forces, reliability.” In: Coastal Engineering 60, pp. 299–318. issn: 0378-3839. doi: 10.1016/j.coastaleng.2011.11.003. url: http://www.sciencedirect.com/science/article/pii/S0378383911001785.

Tran, T-D., D. Brockhoff, and B. Derbel (2013). “Multiobjectivization with NSGA-II on the Noiseless BBOB Testbed.” In: Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation. GECCO ’13 Companion. Amsterdam, The Netherlands: ACM, pp. 1217–1224. isbn: 978-1-4503-1964-5. doi: 10.1145/2464576.2482700. url: http://doi.acm.org/10.1145/2464576.2482700.

UACE (1996). Hydraulic design of deep-draft navigation projects. Engineer Manual EM 1110-2-1613. Washington, D.C.

UN (1985). Port Development: A handbook for planners in developing countries. 2nd ed. Geneva: United Nations.

— (1998). Kyoto Protocol to the United Nations Framework Convention on Climate Change. Legal Rule or Regulation. url: http://unfccc.int/resource/docs/convkp/kpeng.pdf.

Van Gent, M. R. A. and B. Pozueta (2005). Rear-Side Stability of Rubble Mound Structures. Conference Paper.

Van Gent, M. R. A., A. J. Smale, and C. Knuiper (2007). “Stability of rock slopes with shallow foreshores.” In: Proceedings of the 4th international coastal structures conference. Ed. by J. A. Melby. ASCE.

Victor, L., J. W. van der Meer, and P. Troch (2012). “Probability distribution of individual wave overtopping volumes for smooth impermeable steep slopes with low crest freeboards.” In: Coastal Engineering 64, pp. 87–101. issn: 0378-3839. doi: 10.1016/j.coastaleng.2012.01.003. url: http://www.sciencedirect.com/science/article/pii/S0378383912000129.

WG121, PIANC (2014). Harbour Approach Channels Design Guidelines. Report of MarCom Working Group 49. PIANC.

WG12, PIANC (1992). “Analysis of rubble mound breakwaters.” In: Report of Working Group 12 (PTC II).

Wang, Hanli, Sam Kwong, Yaochu Jin, Wei Wei, and K. F. Man (2005). “Multi-objective hierarchical genetic algorithm for interpretable fuzzy rule-based knowledge extraction.” In: Fuzzy Sets and Systems 149.1, pp. 149–186. issn: 0165-0114. doi: 10.1016/j.fss.2004.07.013. url: http://www.sciencedirect.com/science/article/pii/S0165011404003124.

Wiegel, R. L. (1962). “Diffraction of waves around a semi-infinite breakwater.” In: Journal of Hydraulics Division, ASCE 88.HY1, pp. 27–44.

Wu, C. S. and E. B. Thornton (1986). “Wave numbers in linear progressive waves.” In: Journal of Waterway, Port, Coastal and Ocean Engineering 112.4, pp. 536–540.

Yaman, S. and C-H. Lee (2010). “A Comparison of Single- and Multi-Objective Programming Approaches to Problems with Multiple Design Objectives.” English. In: Journal of Signal Processing Systems 61.1, pp. 39–50. issn: 1939-8018. doi: 10.1007/s11265-008-0295-2. url: http://dx.doi.org/10.1007/s11265-008-0295-2.

Yanagashima, S. and K. Katoh (1990). “Field observation on wave set-up near the shoreline.” In: Proc. 22nd Int. Conf. Coastal Engnr. ASCE, pp. 95–108.

Yang, Y. and M. S. Rosenbaum (2001). “Artificial neural networks linked to GIS for determining sedimentology in harbours.” In: Journal of Petroleum Science and Engineering 29.3-4, pp. 213–220. issn: 0920-4105. doi: 10.1016/s0920-4105(01)00091-2. url: http://www.sciencedirect.com/science/article/pii/S0920410501000912.

You, Z-J. (2003). “Discussion of ‘Simple and explicit solution to the wave dispersion equation’ [Coastal Engineering 45 (2002) 71–74].” In: Coastal Engineering 48.2, pp. 133–135. issn: 0378-3839. doi: 10.1016/S0378-3839(02)00170-9. url: http://www.sciencedirect.com/science/article/pii/S0378383902001709.

Yuan, Y., H. Xu, and B. Wang (2014). “An Improved NSGA-III Procedure for Evolutionary Many-objective Optimization.” In: Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation. GECCO ’14. Vancouver, BC, Canada: ACM, pp. 661–668. isbn: 978-1-4503-2662-9. doi: 10.1145/2576768.2598342. url: http://doi.acm.org/10.1145/2576768.2598342.

Zitzler, E. and L. Thiele (1998). “Multiobjective Optimization Using Evolutionary Algorithms – A Comparative Case Study.” In: Proceedings of the 5th International Conference on Parallel Problem Solving from Nature. PPSN V. London, UK: Springer-Verlag, pp. 292–304. isbn: 3-540-65078-4. url: http://dl.acm.org/citation.cfm?id=645824.668610.