Protocol Engineering for Protection against Denial-of-Service Attacks
by
Suratose Tritilanunt
Thesis submitted in accordance with the regulations for
the Degree of Doctor of Philosophy
Information Security Institute
Faculty of Information Technology
Queensland University of Technology
January 19, 2009
Keywords
Denial of service attacks, Denial of service resistance, Key establishment proto-
cols, Host Identity Protocol (HIP), Meadows’ cost-based framework, Coloured
Petri Nets, CPN Tools, Denial of service modelling.
Abstract
Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks
attempt to temporarily disrupt users or computer resources, causing service
unavailability for legitimate users of an internetworked system. The most
common type of DoS attack occurs when adversaries flood the server with a
large amount of bogus data to interfere with or disrupt its service. The
attack can be either a single-source attack, originating at only one host,
or a multi-source attack, in which multiple hosts coordinate to flood the
server with a large number of packets.
Cryptographic mechanisms in authentication schemes are one approach to
helping the server identify malicious traffic. However, since authentication
in key establishment protocols requires the verifier to spend resources
before bogus messages can be detected, adversaries may exploit this flaw to
mount an attack that overwhelms the server's resources. Such attacks are
possible because many key establishment protocols perform strong
authentication at the beginning of the exchange, before they can identify an
attack. This illustrates the DoS threat present in most key establishment
protocols: they have been designed to support confidentiality and data
integrity, but do not carefully consider other security objectives, such as
availability.
The main objective of this research is to design denial-of-service resistant
mechanisms in key establishment protocols. In particular, we focus on the design
of cryptographic protocols related to key establishment protocols that implement
client puzzles to protect the server against resource exhaustion attacks.
Another objective is to extend formal analysis techniques to cover DoS
resistance. A formal analysis approach is used not only to analyse and
verify the security of a cryptographic scheme carefully, but also to assist
in the design stage of new protocols with a high level of security
guarantee. In this research, we focus on Meadows' cost-based framework as an
analysis technique, and we implement a DoS-resistance model using Coloured
Petri Nets. Meadows' cost-based framework was proposed specifically to
assess denial-of-service vulnerabilities in cryptographic protocols using
mathematical proof, while Coloured Petri Nets are used to model and verify
communication protocols using interactive simulations. In addition, Coloured
Petri Nets can help the protocol designer to clarify and reduce
inconsistencies in the protocol specification. The second objective of this
research is therefore to explore vulnerabilities in existing DoS-resistant
protocols, as well as to extend a formal analysis approach into a new
framework for improving DoS resistance and evaluating the performance of the
newly proposed mechanism.
In summary, the specific outcomes of this research include the following results:
1. A taxonomy of denial-of-service resistant strategies and techniques used in
key establishment protocols;
2. A critical analysis of existing DoS-resistant key exchange and key estab-
lishment protocols;
3. An implementation of Meadows' cost-based framework using Coloured
Petri Nets for modelling and evaluating DoS-resistant protocols; and
4. The development of new, efficient, and practical DoS-resistant mechanisms
to improve resistance to denial-of-service attacks in key establishment
protocols.
Contents
Front Matter i
Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
1 Introduction 1
1.1 Research Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Research Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Scope of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Research Methodology . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.1 Design of DoS-resistant Cryptographic Mechanisms . . . . 6
1.4.2 Formal Analysis and Validation Approach . . . . . . . . . 8
1.5 Research Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Structure of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Research Background 13
2.1 Key Establishment Protocols . . . . . . . . . . . . . . . . . . . . . 14
2.2 Denial-of-Service Attacks in Key Establishment . . . . . . . . . . 16
2.2.1 Flooding Denial-of-Service Attacks . . . . . . . . . . . . . 19
2.2.2 Non-flooding Denial-of-Service Attacks . . . . . . . . . . . 20
2.2.3 Denial-of-Service Countermeasures . . . . . . . . . . . . . 22
2.3 DoS Defensive Cryptographic Technique in Key Establishment . . 25
2.3.1 Hash-based Reversal Puzzles . . . . . . . . . . . . . . . . . 26
2.3.2 Hint-Based Hash Reversal Puzzles . . . . . . . . . . . . . . 28
2.3.3 Repeated-Squaring: Time-Lock Puzzles . . . . . . . . . . . 28
2.3.4 Diffie-Hellman based Puzzles . . . . . . . . . . . . . . . . . 29
2.3.5 Trapdoor RSA-based and DLP-based Puzzles . . . . . . . 31
2.3.6 Cryptographic Chaining Puzzles . . . . . . . . . . . . . . . 33
2.3.7 Client Puzzle Summary . . . . . . . . . . . . . . . . . . . 36
2.4 Formal Analysis Techniques . . . . . . . . . . . . . . . . . . . . . 37
2.4.1 BAN Logic . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Dolev-Yao Model . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 NRL Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.4 Petri Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4.5 Meadows Cost-based Framework for DoS-resistance . . . . 46
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 DoS-resistance in Key Establishment Protocols 51
3.1 DoS-Resistance Strategies . . . . . . . . . . . . . . . . . . . . . . 52
3.1.1 Counterbalancing CPU Expenditure . . . . . . . . . . . . 54
3.1.2 Counterbalancing Memory Expenditure . . . . . . . . . . . 54
3.1.3 Gradual Authentication . . . . . . . . . . . . . . . . . . . 54
3.2 DoS Defending Techniques . . . . . . . . . . . . . . . . . . . . . . 55
3.2.1 Stateless Connection . . . . . . . . . . . . . . . . . . . . . 55
3.2.2 SYN cache and SYN Cookie . . . . . . . . . . . . . . . . . 56
3.2.3 Proofs of Work (PoW) . . . . . . . . . . . . . . . . . . . . 57
3.2.4 Client-Aided computation . . . . . . . . . . . . . . . . . . 59
3.2.5 Gradual Authentication . . . . . . . . . . . . . . . . . . . 60
3.3 DoS-resistance in Key Establishment . . . . . . . . . . . . . . . . 61
3.3.1 Photuris . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3.2 Modified Internet Key Exchange . . . . . . . . . . . . . . . 65
3.3.3 Internet Key Exchange (IKE) Protocol version 2 . . . . . . 67
3.3.4 Just Fast Keying (JFK) . . . . . . . . . . . . . . . . . . . 69
3.3.5 Host Identity Protocol (HIP) . . . . . . . . . . . . . . . . 72
3.3.6 Lee and Fung’s Protocol . . . . . . . . . . . . . . . . . . . 74
3.3.7 Client-Aided RSA SSL / TLS . . . . . . . . . . . . . . . . 76
3.4 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . 79
4 Denial of Service Model for Key Establishment 83
4.1 Meadows’ Cost-based Framework . . . . . . . . . . . . . . . . . . 85
4.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.1.2 Cost sets and Tolerance relation . . . . . . . . . . . . . . . 88
4.1.3 Cost-based Analysis of DoS-resistance Protocols . . . . . . 90
4.2 Refinement of Meadows’s Framework . . . . . . . . . . . . . . . . 95
4.3 CPNs Simulation and Performance Analysis . . . . . . . . . . . . 97
4.4 Formal Modelling and Performance Analysis of HIP . . . . . . . . 99
4.4.1 Adversaries’ Ability . . . . . . . . . . . . . . . . . . . . . . 99
4.4.2 Tolerance Relation (τ) . . . . . . . . . . . . . . . . . . . . 101
4.4.3 A Cost-based Model of HIP in CPNs . . . . . . . . . . . . 102
4.5 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . 108
5 Simulation and Analysis of HIP in Timed CPNs 111
5.1 Simulation of HIP using Timed CPNs . . . . . . . . . . . . . . . . 113
5.1.1 Modelling of HIP using Timed CPNs . . . . . . . . . . . . 114
5.1.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.2 An Enhanced Version of HIP . . . . . . . . . . . . . . . . . . . . . 132
5.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6 Subset Sum Cryptographic Puzzles 141
6.1 Knapsack or Subset Sum Problems . . . . . . . . . . . . . . . . . 143
6.2 LLL Lattice Reduction on Subset Sum Cryptosystems . . . . . . . 144
6.3 Subset Sum Cryptographic Puzzles . . . . . . . . . . . . . . . . . 149
6.3.1 Puzzle Construction . . . . . . . . . . . . . . . . . . . . . 149
6.3.2 Puzzle Solving . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.3.3 Puzzle Verification . . . . . . . . . . . . . . . . . . . . . . 152
6.3.4 Comparison of Puzzle Properties . . . . . . . . . . . . . . 153
6.3.5 Experimental Results of LLL Reduction on Subset Sum
Puzzles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.4 Performance Analysis and Evaluation on Subset Sum Puzzles . . . 158
6.4.1 Tolerance of HIP with Subset Sum Puzzles . . . . . . . . . 160
6.4.2 Performance Analysis of Subset Sum Puzzles . . . . . . . . 161
6.5 Discussion and Open Problems . . . . . . . . . . . . . . . . . . . 167
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
7 Conclusion and Future Work 171
7.1 Summary of Contribution . . . . . . . . . . . . . . . . . . . . . . 171
7.2 Open Problems and Future Work . . . . . . . . . . . . . . . . . . 174
A HIP Model 177
B Pseudo-code of Subset Sum Experiments 181
B.1 Backtracking Algorithm . . . . . . . . . . . . . . . . . . . . . . . 182
B.2 Branch & Bound Algorithms . . . . . . . . . . . . . . . . . . . . . 183
Bibliography 185
List of Figures
1.1 Research Methodology . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Verification of the Protocol (modified from [11]) . . . . . . . . . . 9
2.1 STS Protocol [35] . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Juels and Brainard Hash-based Puzzle Constructions . . . . . . . 27
2.3 Feng Hint-based Hash Reversal Puzzle Constructions . . . . . . . 28
2.4 Rivest Time-lock Puzzles Construction . . . . . . . . . . . . . . . 29
2.5 Diffie-Hellman based Puzzles Constructions . . . . . . . . . . . . . 30
2.6 Trapdoor RSA-based Puzzles . . . . . . . . . . . . . . . . . . . . 32
2.7 Trapdoor DLP-based Puzzles . . . . . . . . . . . . . . . . . . . . 33
2.8 Ma’s Hash Chain Puzzles . . . . . . . . . . . . . . . . . . . . . . . 34
2.9 Groza and Petrica’s Hash Chain Puzzles . . . . . . . . . . . . . . 35
2.10 The Example of Petri Nets . . . . . . . . . . . . . . . . . . . . . . 41
2.11 An Example of a CPN Model . . . . . . . . . . . . . . . . . . . . 42
2.12 A Hierarchical Model . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.1 Photuris Protocol [64] . . . . . . . . . . . . . . . . . . . . . . . . 64
3.2 Modified aggressive mode of IKE Protocol [83] . . . . . . . . . . . 66
3.3 IKEv2 Protocol [65] . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 JFKi Protocol [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.5 HIP Protocol [95] . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.6 Lee and Fung’s Protocol [72] . . . . . . . . . . . . . . . . . . . . . 75
3.7 CA-RSA Protocol [20] . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1 HIP Protocol in the Cost-Based Framework Notation . . . . . . . 92
4.2 HIP Cost-based Model . . . . . . . . . . . . . . . . . . . . . . . . 103
4.3 Computational Cost on HIP with different ranges of k . . . . . . . 107
5.1 An Example of Timed CPNs Model . . . . . . . . . . . . . . . . . 114
5.2 Hierarchical Construction of HIP Model . . . . . . . . . . . . . . 116
5.3 The Server’s Subpage at the First Stage . . . . . . . . . . . . . . 117
5.4 Percentage of throughput of hc with k=1 and k=10 . . . . . . . . 124
5.5 Percentage of throughput of hc with adjustable client puzzles in
comparison with non-adjustable client puzzles from Experiment 3 128
5.6 Percentage of throughput of hc and Co-ad3 with adjustable client
puzzles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.7 Comparison between Attacking Scenario 1 and 2 . . . . . . . . . . 135
5.8 Comparison between Attacking Scenario 1 and 2 with adjustable
parallelizable client puzzles . . . . . . . . . . . . . . . . . . . . . . 136
6.1 LLL Lattice Reduction Process . . . . . . . . . . . . . . . . . . . 148
6.2 Subset Sum Puzzle Construction . . . . . . . . . . . . . . . . . . 150
6.3 Subset Sum Puzzle Solving . . . . . . . . . . . . . . . . . . . . . . 151
6.4 Subset Sum Puzzle Verification . . . . . . . . . . . . . . . . . . . 153
6.5 Percentage of throughput of hc when using Subset Sum crypto-
graphic puzzles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
A.1 Top Page of HIP CP-Net . . . . . . . . . . . . . . . . . . . . . . . 177
A.2 Colour Sets and Values . . . . . . . . . . . . . . . . . . . . . . . . 178
A.3 Initiator Subpage for Request Generation . . . . . . . . . . . . . . 178
A.4 Responder Subpage for Puzzle Generation . . . . . . . . . . . . . 179
A.5 Initiator Subpage for Puzzle Solving . . . . . . . . . . . . . . . . . 179
List of Tables
2.1 Summarization of Strengths and Weaknesses of Client Puzzles . . 36
3.1 Protocol Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2 Summary of DoS Resistance Techniques used by Protocols . . . . 80
4.1 Cost of protocol executions . . . . . . . . . . . . . . . . . . . . . . 89
4.2 Cost to an Attacker [88] . . . . . . . . . . . . . . . . . . . . . . . 90
4.3 Computational Cost of CPU and Time Usage of Specific Algorithms 96
4.4 Comparison of Computational Cost of HIP with k=1 and k=10 . 106
5.1 Experimental Setup Parameters . . . . . . . . . . . . . . . . . . . 121
5.2 Percentage of Throughput with Non-adjustable Client Puzzles . . 123
5.3 Percentage of Throughput with Adjustable Client Puzzles at Z =
50 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.4 Percentage of Throughput with Adjustable Client Puzzles at Z =
500 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.5 Percentage of Throughput compared between Attacking Scenario
1 and 2 with adjustable parallelizable client puzzles (hc=20 and
Z=500) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.6 Time Interval that the Connection Queue is Full . . . . . . . . . . 137
6.1 Summary of Puzzles in terms of Proposed Desirable Properties . . 154
6.2 Backtracking and Bounding Technique Running Time of The Sub-
set Sum Puzzle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.3 The Experimental Result of The Subset Sum Puzzle . . . . . . . . 159
6.4 Percentage of Throughput with Subset Sum Client Puzzles . . . . 161
6.5 Performance of Non-adjustable Subset Sum Client Puzzles . . . . 163
6.6 Performance of Adjustable Subset Sum Client Puzzles . . . . . . . 163
6.7 Comparison of our new Subset Sum Puzzles with the others . . . 168
Declaration
The work contained in this thesis has not been previously submitted for a degree
or diploma at any higher education institution. To the best of my knowledge and
belief, the thesis contains no material previously published or written by another
person except where due reference is made.
Signed: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Date: . . . . . . . . . . . . . . . . . . . . . .
Publications
The following papers have been published or presented, and contain material
based on the content of this thesis.
[1] J. Smith, S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Denial
of Service Resistance in Key Establishment, In International Journal of
Wireless and Mobile Computing (IJWMC), 2(1):59-71, 2007.
[2] S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Examining the
DoS Resistance of HIP, In Proceedings of the International Workshop on
Information Security (IS’06), in Conjunction with On The Move Federated
Conferences (OTM’06), Montpellier, France, 2006.
[3] S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Using Coloured
Petri Nets to Simulate DoS-resistant protocols, In Seventh Workshop and
Tutorial on Practical Use of Coloured Petri Nets and CPN Tools (CPN’06),
Aarhus, Denmark, 2006.
[4] S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Cost-based
and Time-based Analysis of DoS-resistance in HIP, In Proceedings of the
Thirtieth Australasian Computer Science Conference (ACSC’07), Ballarat,
Australia, 2007.
[5] S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Toward Non-
Parallelizable Client Puzzles, In Proceedings of the Sixth International
Conference on Cryptology & Network Security (CANS'07), Singapore, 2007.
Acknowledgements
First of all, I would like to express my appreciation and gratitude to my principal
supervisor, Prof. Colin Boyd and my associate supervisors, Dr. Juan Manuel
Gonzalez Nieto, and Dr Ernest Foo for providing the opportunity to conduct the
research into the Protocol Engineering for Protection against Denial-of-Service
Attacks, at the Information Security Institute of the Queensland University of
Technology. I am greatly thankful to them not only for their valuable advice
and constructive suggestions provided throughout this research, but also for
their friendship and the many helpful comments they have made on my writing.
They guided me step by step and taught me how to do good research, and I
have learned many valuable things from all of them. Their doors were always
open whenever I had questions or needed assistance. No words can adequately
express my gratitude to my supervisory team. Without their significant
guidance and professional contribution, I could not have completed this
Ph.D. research.
I would like to acknowledge Dr. Jason Smith, not only for his collaborative
work in Chapters 2 and 3, but also for his valuable comments and suggestions
for improving the final version of my thesis. Without his contribution to
the taxonomy of denial-of-service resistance in key establishment, I might
not have had my first publication within the first year. In addition, I
would like to thank Dr. Jason Smith for his friendship during the entire
period of my Ph.D. study at ISI.
Special thanks also go to all anonymous reviewers and thesis examiners for
providing me some useful comments and feedback to improve my conference and
journal papers, as well as my final thesis.
I would like to extend my thanks to Ms Elizabeth Hansford, Mrs Christine
Orme, Mr Matthew Bradford, and Mr James Mackie for their administrative
assistance and professional support, which made my research run smoothly.
I may not be able to name every one of my great colleagues, ISI staff, and
friends, but I am grateful to all of them for their professional assistance,
encouraging support, and valuable friendship. I am fortunate to have them
all as great and meaningful friends.
Without the financial support of the Royal Thai Government during my years
in Brisbane, I would not have been able to pursue and complete this Ph.D. I
would like to extend my special thanks to all the administrative staff of
the Office of Educational Affairs, Canberra, for their assistance with
funding matters.
Last, but most important, I would like to thank my Mom and Dad, and my
girlfriend, for their unending love and faith in me, and for the significant
support that has brought me to this point in my life.
Chapter 1
Introduction
The Internet connects millions of computers around the world. A connection
between computers can be established using the protocol suite known as the
Transmission Control Protocol/Internet Protocol (TCP/IP) [21, 126]. When
TCP/IP was developed in the 1970s, it contained vulnerabilities that allow
attackers to exploit networks for their own benefit. A major problem of
TCP/IP is that anyone can open a connection to a server and request services
without first authenticating to that server. This puts the server at risk of
having its resources exhausted. An attacker can mount a malicious action by
repeatedly sending a large number of bogus requests to the server. The
server can only process a certain number of requests at once, so if an
attacker overloads the server with requests beyond its maximum capacity, the
server will fail and be unable to process requests from legitimate users.
This kind of attack is known as a Denial-of-Service (DoS) attack. DoS
attacks have caused major problems for many online business websites, as
reported by the Computer Emergency Response Team (CERT) [28].
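The resource-exhaustion mechanism described above can be illustrated with a toy model of a server whose connection backlog has a fixed capacity. This is a sketch for intuition only; the capacity value and traffic pattern are invented for the example:

```python
from collections import deque

def simulate_flood(requests, capacity):
    """Queue incoming requests until the server's backlog is full,
    then drop the rest. The server cannot distinguish bogus requests
    from legitimate ones, so it admits whatever arrives first."""
    backlog = deque()
    dropped = []
    for source in requests:
        if len(backlog) < capacity:
            backlog.append(source)
        else:
            dropped.append(source)
    return backlog, dropped

# An attacker floods five bogus requests before a legitimate user
# arrives; with a backlog capacity of five, the legitimate user is
# the one denied service.
traffic = ["attacker"] * 5 + ["alice"]
backlog, dropped = simulate_flood(traffic, capacity=5)
print(dropped)  # ['alice']
```

The point of the sketch is that the server commits a backlog slot before it can tell good traffic from bad, which is exactly the asymmetry a flooding attacker exploits.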
Defending against DoS attacks is very difficult. Many defence schemes have
been proposed to counteract such attacks, but all of them can only limit the
impact; none completely protects the network from denial-of-service
disruption. One example defence focuses on cryptographic protocols that
provide user identification in order to authenticate the source of traffic
and expose malicious users. Since the problems of TCP/IP stem from the lack
of an identification mechanism at the start of a connection, authentication
schemes have been developed and proposed to fulfil this requirement.
In recent years, researchers have attempted to develop denial-of-service
resistant protocols by integrating cryptographic techniques to prevent DoS
attacks. However, if extra care is not taken in the design phase, a protocol
may fail to satisfy its security goals. One major vulnerability of
authentication protocols is that they often require the protocol responder
to perform strong authentication before the initiator's identity has been
validated. Typically, the underlying algorithm requires the server to
perform expensive computation and commit numerous resources, which puts the
server at risk of becoming overloaded. This problem arises because security
flaws in a protocol can be subtle and difficult for a developer to find
without the support of a security analysis tool. The analysis of
cryptographic protocols is a sufficiently complex task that informal
approaches are unreliable. As a result, many researchers have focused on
formal specification and verification of cryptographic protocols.
While numerous protocols using cryptographic techniques have been proposed
and claimed to provide fundamental security properties, including
confidentiality, data integrity, data origin authentication, and
non-repudiation [16, 90], many remain vulnerable. In addition, cryptographic
protocols should provide availability to the participating entities, as
denial of service has become a growing concern in computer networks.
Adversaries who attempt to gain an advantage by disrupting the server's
resources can achieve their aim because of undiscovered flaws in protocols.
Security of a system cannot be achieved by design alone; analysis,
verification, and validation of the protocol must be included.
Formal analysis techniques have been used in the design and analysis of com-
munication protocols since the early 1980s. Formal analysis techniques for cryp-
tographic protocols did not become widespread until the early 1990s, when sev-
eral researchers were able to find undiscovered flaws in cryptographic protocols
by using formal analysis techniques. Formal analysis techniques can be applied
in various phases of the design and verification of a cryptographic protocol. The
formal analysis provides an assurance that a specification is realizable and a
system is implemented correctly with respect to its specification.
To date, denial-of-service resistant protocols have not been well analysed
by formal techniques. In order to assess fundamental security requirements,
protocol analysis methods that maintain consistency with formal
specification methods are required to evaluate the security properties of
such protocols. Therefore, the major purposes of this research are to
analyse the problem of DoS attacks, explore ways to protect key
establishment protocols against them, design new DoS-resistant protocols,
and apply formal simulation techniques to evaluate the performance of the
proposed DoS-resistant protocols.
1.1 Research Motivation
Key establishment protocols provide shared secret information between two
participants across an insecure communication network, using cryptographic
techniques to achieve security objectives. The fundamental security
properties achieved by cryptographic techniques are 1) confidentiality,
2) data integrity, 3) data origin authentication, and 4) non-repudiation. An
additional requirement is the accessibility of system resources, defined as
availability. This property has been introduced to address the issue of
denial-of-service attacks on cryptographic protocols.
Basically, the insecure channel is subject to attack by malicious entities
who attempt to corrupt security properties, especially the availability of
resources. Denial of service (DoS) is one attack technique that aims to
temporarily disrupt computers or services on a computer network. In
defending against denial-of-service attacks, authentication protocols play
an important role in helping the server assess malicious traffic. However,
most underlying authentication schemes require the verifier to commit to
expensive computation. This can introduce further denial-of-service
vulnerabilities that aim to deplete the computational resources of the
server.
Recently, researchers have extended cryptographic techniques into key
establishment protocols, such as Just Fast Keying (JFK) [3], for defending
against denial-of-service attacks. These approaches aim to solve the problem
of such attacks; however, the techniques still contain vulnerabilities that
were later discovered by other researchers [125]. Because most existing
denial-of-service resistant protocols have not been well analysed by formal
techniques, researchers require tools to evaluate the proposed protocols.
Formal methods, based on mathematical proof, have been introduced and
developed to guarantee security with regard not only to confidentiality and
integrity, but to availability of the system as well.
One example of a formal method successfully applied to the analysis of
cryptographic protocols was introduced by Meadows [89]. Meadows discussed
some open problems in the development of formal specification and
verification of cryptographic protocols arising from emerging problems in
security. The fundamental security properties include confidentiality,
authentication, integrity, and non-repudiation, but the problems in security
have grown. Firstly, protocols used today are more complex than earlier
protocols: they involve more parties and longer message sequences. Secondly,
availability has become an important security property of a system.
Therefore, a formal analysis method should cover Denial-of-Service (DoS)
attacks, as well as model and verify vulnerability to attacks against
confidentiality and integrity within the protocol.
The major concern of this research is the problem of DoS attacks in
cryptographic protocols. Our goal is to explore ways to protect the server
against such attacks and to design a new DoS-resistant protocol, as well as
to apply formal simulation techniques not only to validate the DoS
resistance of protocols, but also to evaluate the performance of our newly
proposed protocol, guaranteeing security, and especially availability of the
system, in the presence of denial-of-service attacks.
1.2 Research Objectives
The main objective of this research is to design a framework that
accomplishes the fundamental security properties while protecting against
Denial-of-Service attacks. In order to achieve these goals, we divide our
research into two main tasks, as follows:
1. The first major task is to study existing cryptographic schemes and to
develop new techniques which enhance the functionality of current Denial-
of-Service resistant protocols. We focus on the design of cryptographic
protocols related to key establishment protocols. One example of a DoS
defence technique is client puzzles, proposed by Juels and Brainard [62],
with alternative constructions later proposed by Aura et al. [6] and Wang
and Reiter [133, 134], for instance; and
2. The second task is to extend formal analysis techniques into the
implementation. A formal analysis approach is used not only to analyse and
verify the security of a cryptographic scheme carefully, but also to assist
in the design stage of new protocols with a high level of security
guarantee. As mentioned, protocol flaws which expose protocols to DoS
attacks have remained undiscovered for many years. Widespread implementation
of a protocol with an unknown flaw can lead to harmful consequences when the
protocol lacks a formal specification. The analysis of cryptographic
protocols is a sufficiently complex task that informal approaches are
unreliable, so researchers have developed formal specification and
verification tools for modelling and verifying cryptographic protocols. In
addition, these tools may help the protocol designer to clarify and reduce
inconsistencies in the protocol specification. Therefore, this task is to
explore vulnerabilities in existing DoS-resistant protocols, as well as to
extend this formal analysis approach to our new framework for improving DoS
resistance and evaluating the performance of the newly proposed mechanism.
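As a concrete illustration of the client-puzzle technique mentioned in the first task, the following is a simplified hash-reversal puzzle sketch. It is not the exact Juels-Brainard construction; the 8-byte preimage, SHA-256, and 12-bit difficulty are illustrative assumptions:

```python
import hashlib
import os

def make_puzzle(difficulty_bits):
    """Server side: pick a random preimage and reveal all but its
    `difficulty_bits` low-order bits, together with its hash."""
    preimage = int.from_bytes(os.urandom(8), "big")
    target = hashlib.sha256(preimage.to_bytes(8, "big")).hexdigest()
    hint = preimage >> difficulty_bits  # the revealed high-order bits
    return hint, target

def solve_puzzle(hint, target, difficulty_bits):
    """Client side: brute-force the hidden low-order bits, costing
    up to 2^difficulty_bits hash evaluations."""
    for guess in range(1 << difficulty_bits):
        candidate = ((hint << difficulty_bits) | guess).to_bytes(8, "big")
        if hashlib.sha256(candidate).hexdigest() == target:
            return candidate
    return None

def verify(solution, target):
    """Server side: a single hash suffices to check the solution."""
    return solution is not None and \
        hashlib.sha256(solution).hexdigest() == target

hint, target = make_puzzle(difficulty_bits=12)
solution = solve_puzzle(hint, target, difficulty_bits=12)
print(verify(solution, target))  # True
```

The asymmetry is the point: the client pays for many hash evaluations per request, while the server generates and verifies each puzzle with one hash, so flooding becomes expensive for the attacker.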
The following is a summary of the research objectives:
1. To identify denial of service vulnerabilities in key establishment protocols;
2. To propose the strategies and techniques to prevent denial of service attacks
in key establishment protocols;
3. To develop a formal framework for modelling and analysing key establish-
ment protocols that have built-in DoS-resistance mechanisms;
4. To examine the security and verify the correctness of DoS-resistant proto-
cols by using simulation techniques; and
5. To design new DoS-resistant mechanisms and evaluate their security and
performance using formal security analysis.
1.3 Scope of Research
There are various kinds of Denial-of-Service attack, as well as a large
number of defence approaches, today. As categorized by Needham [98], there
are three types of DoS attacks, namely 1) attacks on the server, 2) attacks
on the network bandwidth, and 3) attacks on the client. In this research, we
focus on resource consumption attacks and limit the study to protecting the
availability of a server from attackers attempting to perform DoS attacks.
The DoS attacks considered in this work are flooding attacks which exploit
vulnerabilities in cryptographic key establishment protocols to consume
either CPU or memory resources on the server machine. As cryptographic key
establishment protocols are mostly designed to function at the application
layer of the OSI model, we limit the study of DoS attacks and protection
mechanisms to those operating at the application layer. Thus, protecting
clients and network bandwidth from DoS attacks is beyond the scope of this
research. In addition, our approach to countering DoS attacks focuses on
two-party key exchange and key establishment protocols.
For the formal analysis approach, there are many varieties of formal
specification and verification tools used to analyse cryptographic protocols.
After investigating and comparing some of these tools, we decided to use
Coloured Petri Nets (CPNs), a graphically oriented language for the design,
specification, simulation and verification of systems. To formally examine and
analyse DoS-resistant protocols using Coloured Petri Nets, we use the simulation
techniques provided in CPN Tools [130], a program that supports protocol design
through a graphical user interface (GUI). In addition, CPN Tools is a versatile
tool with various support features and analysis functions, especially time
analysis and performance evaluation for modelling and analysing communication
protocols [110].
1.4 Research Methodology
It is not an easy task to design an effective protocol that solves DoS problems.
Generally, research in the area of cryptographic protocol design involves not
only the specification of a protocol to satisfy user requirements, but also
analysis of the performance of the protocol in order to compare the new approach
with others in the final stage.
This research involves design of new DoS-resistant cryptographic protocols,
specifically a framework for key establishment protocols, as well as using formal
analysis techniques for analysing DoS vulnerabilities and performance of new
proposed protocols. Figure 1.1 presents an overview of the main research tasks.
This research has two major objectives: 1) the design of DoS-resistant
cryptographic mechanisms in key establishment protocols, and 2) the formal
analysis and validation of key establishment protocols for DoS-resistance. The
following provides a brief outline of research methodology relating to these two
objectives.
1.4.1 Design of DoS-resistant Cryptographic Mechanisms
Cryptographic protocols are communication protocols which use cryptographic
algorithms to achieve objectives in a secure communication across an insecure
channel. Basically, cryptographic algorithms perform transformations of data
intended to make the data useless to an attacker.
Figure 1.1: Research Methodology
Also,
the insecure channel is subject to attack by attackers who attempt to corrupt se-
curity properties. Apart from fundamental security objectives, availability should
be included in the design requirements of DoS-resistant protocols.
One example of using cryptographic protocols to protect against DoS attacks
was proposed by Matsuura and Imai [81]. They introduced a basic protection
strategy against DoS attacks based on public-key-related computational cost,
developing a key authentication protocol and extending it into the initial
stage of the TCP/IP negotiation. A few years later, Matsuura and Imai developed
a DoS-resistant key authentication protocol based on ISAKMP/Oakley [82] and the
IKE protocol [83]. However, these protocols still have
vulnerabilities because they have not been well analysed by formal analysis tools.
Using public-key encryption is not sufficient on its own to completely solve
the problem of DoS attacks, because its operation involves complex and expensive
modular arithmetic. Attackers could exploit this to exhaust the server's
resources. Therefore, it is not enough to defend against DoS attacks with
public-key encryption alone. Juels and Brainard [62] proposed a client puzzle
technique to fill this gap and improve the DoS resistance of authentication
protocols. In our research, we build on Juels and Brainard's proposal and
develop a DoS-resistant protocol based on the client puzzle mechanism.
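The hash-based client puzzle idea can be illustrated with a minimal sketch; the function names and parameters below are our own illustration, not taken from Juels and Brainard's scheme. The server issues a fresh nonce, the client searches for a value whose hash meets a difficulty target, and the server verifies the answer with a single hash:

```python
import hashlib
import itertools
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def make_puzzle(difficulty_bits: int):
    """Server side: issue a fresh random nonce and a difficulty level."""
    return os.urandom(16), difficulty_bits

def solve_puzzle(nonce: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force search costing about 2**difficulty_bits hashes."""
    for x in itertools.count():
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return x

def verify_solution(nonce: bytes, difficulty_bits: int, x: int) -> bool:
    """Server side: a single hash computation checks the solution."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits

nonce, k = make_puzzle(difficulty_bits=8)   # low difficulty for the demo
solution = solve_puzzle(nonce, k)
assert verify_solution(nonce, k, solution)
```

The asymmetry is the point of the mechanism: the client spends an expected 2^k hash computations while the server spends one, so a flooding adversary must pay a computational price before the server commits any expensive resources.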
1.4.2 Formal Analysis and Validation Approach
Protocol engineering integrates the activities involved in the design and valida-
tion of protocols using formal methods that are based on mathematics. Generally,
the intended protocol specification for analysis and validation can be obtained
either from the International Organization for Standardization/International
Electrotechnical Commission (ISO/IEC) and the International Telecommunication
Union (ITU), which provide formal protocol specifications in standards
documents, or from the Internet Engineering Task Force (IETF), which publishes
protocol specifications as technical reports.
The protocol specification includes a detailed description of the functions
through which the protocol provides its services. It should consist of a set of
consistent rules and unambiguous procedures that allow two or more principals
to establish secure communication over the computer network and to achieve all
service goals. Examples of specifications of the key exchange and key
establishment protocols studied in this research are presented and discussed in
Section 3.3.
This research focuses on modelling and analysis of DoS-resistant protocols
using the CPN formalism. An example research framework providing a technique
to verify protocols using CPNs was proposed by Billington et al. [11]. The
outline of our formal analysis and validation process to achieve the
DoS-resistance goals in key establishment is illustrated in Figure 1.2.
The technique begins with the specification of the service and protocol. This
specification describes the services and goals provided to the user, for example,
security services such as confidentiality and data integrity in the case of
cryptographic algorithms, or forward secrecy and key compromise impersonation
properties in the case of authentication protocols. Because our research is
mostly concerned with the denial-of-service resistance property, we integrate
an availability objective into our analysis.
Figure 1.2: Verification of the Protocol (modified from [11])
The second stage is to model the protocol: a model is developed from the
specification using a formal technique. Modelling the message sequence chart in
the CPN formalism also helps to remove inconsistencies in the protocol
specification. Having modelled the protocol, simulation results can be used to
evaluate the performance of the model. For example, the automatic and
interactive simulation provided by CPN Tools can help to explore DoS
vulnerabilities that cause resource exhaustion on the server in authentication
protocols. At the final stage, performance analysis of the implementation may
uncover flaws in the protocol. Once protocol vulnerabilities have been
discovered, we fix and re-model the protocol and repeat the simulation and
performance analysis until no further vulnerabilities are found.
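The iterative process just described (model, simulate, analyse, fix, repeat) can be summarised as a small driver sketch. Every callable here is a placeholder standing in for a manual CPN Tools activity, not a real API:

```python
def verify_protocol(spec, model_fn, simulate_fn, find_flaws_fn, fix_fn):
    """Drive the model/simulate/analyse cycle until no further
    vulnerabilities are found. All callables are placeholders for
    the corresponding manual CPN Tools activities."""
    model = model_fn(spec)
    while True:
        results = simulate_fn(model)          # run the CPN simulation
        flaws = find_flaws_fn(results)        # performance/vulnerability analysis
        if not flaws:
            return model                      # no further vulnerabilities found
        model = fix_fn(model, flaws)          # repair and re-model

# Toy demonstration: the "model" is a list of known flaws that the
# fix step removes one at a time.
final = verify_protocol(
    spec=["flaw_a", "flaw_b"],
    model_fn=lambda spec: list(spec),
    simulate_fn=lambda model: model,
    find_flaws_fn=lambda results: list(results),
    fix_fn=lambda model, flaws: model[1:],
)
assert final == []
```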
1.5 Research Outcomes
The outcome of each stage has been submitted to conferences and journals in the
computer security area to obtain valuable feedback from the research community.
The outcomes of this research include the following results:
1. A taxonomy of denial-of-service resistant strategies and techniques used in
key establishment protocols;
2. A critical analysis of existing DoS-resistant key exchange and key estab-
lishment protocols;
3. An implementation of Meadows’s cost-based framework using Coloured
Petri Nets for modelling and evaluating DoS-resistant protocols; and
4. Development of new efficient and practical DoS-resistant mechanisms to
improve the resistance to denial-of-service attacks in key establishment
protocols.
1.6 Structure of the Thesis
The structure of this thesis is briefly outlined as follows:
Chapter 2. This chapter provides background knowledge on denial-of-service
(DoS) attacks and well-known techniques used to mount such attacks against
communication protocols. In addition, an overview of denial-of-service defence
strategies and techniques is provided. This chapter also covers the definition
of key establishment protocols, and the formal analysis tools used to model
and evaluate cryptographic protocols.
Chapter 3. This chapter presents the strategies and mechanisms that have been
deployed in current DoS-resistant protocols. In addition, this chapter covers
the background of seven DoS-resistant key exchange and key establishment
protocols, and provides a critical analysis of their DoS-resistant
characteristics in terms of the specified strategies and mechanisms.
Material presented in this chapter is based on the following publication:
• J. Smith, S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Denial
of Service Resistance in Key Establishment, In International Journal of
Wireless and Mobile Computing (IJWMC), 2(1):59-71, 2007.
Chapter 4. This chapter describes formal analysis techniques for modelling
and analysing DoS-resistant protocols using Meadows' cost-based framework
and Coloured Petri Nets (CPNs). It also describes the cost-based analysis
experiments for evaluating and identifying DoS vulnerabilities in the Host
Identity Protocol (HIP). Using these formal analysis approaches, we are able
to identify DoS vulnerabilities in client puzzles, the underlying
DoS-resistant mechanism used in the Host Identity Protocol.
Material presented in the chapter is based on the following publications:
• S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Using Coloured
Petri Nets to Simulate DoS-resistant protocols, In Seventh Workshop and
Tutorial on Practical Use of Coloured Petri Nets and CPN Tools (CPN’06),
Aarhus, Denmark, 2006.
• S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Examining the
DoS Resistance of HIP, In Proceedings of the International Workshop on
Information Security (IS’06), in Conjunction with On The Move Federated
Conferences (OTM’06), Montpellier, France, 2006.
Chapter 5. This chapter proposes an additional technique to model complex
DoS-resistant protocols and to analyse an advanced attack technique targeting
the weakness of hash-based reversal puzzles. The new model incorporates a time
function in order to model and examine DoS-resistant protocols more
realistically. From the experimental results, we found DoS vulnerabilities in
HIP and propose a technique to strengthen the security of this protocol.
Furthermore, the results in this chapter reveal a limitation of using Meadows'
cost-based framework to evaluate large-scale networks subject to advanced
attack techniques.
Material presented in the chapter is based on the following publication:
• S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Cost-based
and Time-based Analysis of DoS-resistance in HIP, In Proceedings of the
Thirtieth Australasian Computer Science Conference (ACSC’07), Ballarat,
Australia, 2007.
Chapter 6. This chapter presents a new construction based on subset sum
problems in order to overcome the parallelizability weakness of existing client
puzzles in a practical scheme. The chapter also explains the concept of a
lattice reduction algorithm, an effective and non-parallelizable technique used
to solve subset sum problems in polynomial time. Finally, new experimental
results and a performance analysis supporting the security and performance of
our new protocol are also presented.
Material presented in the chapter is based on the following publication:
• S. Tritilanunt, C. Boyd, J. M. Gonzalez Nieto, and E. Foo. Toward Non-
Parallelizable Client Puzzles, In Proceedings of the Sixth International
Conference on Cryptology and Network Security (CANS'07), Singapore, 2007.
Chapter 7. This chapter provides a summary of the thesis and discusses some
open problems and directions for future work.
Chapter 2
Research Background
Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS)
attempt to temporarily disrupt users or computer resources to cause service
unavailability to legitimate users. Preferred targets of DoS attacks are
business web servers, such as those of banks or credit card payment gateways.
DoS attacks are a serious problem for the Internet community because they are
easy for attackers to generate but very difficult for the responder to defend
against. DoS attacks
come in several forms and aim at a variety of services. The most common type
of DoS attack occurs when an attacker floods a network with a large amount of
data. The attack can be either a single-source attack, which originates at only
one host, or a multi-source attack, in which multiple hosts coordinate to flood a
large number of packets to the server.
Cryptographic mechanisms in authentication schemes are one approach to
help the server to identify malicious traffic. Since authentication in key
establishment protocols requires the verifier to spend some amount of resources
before successfully detecting the bogus messages, adversaries might be able to
exploit this flaw to mount an attack to overwhelm the server resources. While the
attacker commits few resources to generate a large volume of bogus traffic in this
attack, the server has to spend a large amount of CPU and memory for verifying
identity. The attacker is able to perform this kind of attack because many key
establishment protocols incorporate strong authentication at the beginning phase
before they can identify the attacks. This is an example of DoS threats in most
key establishment protocols because they have been implemented to support
confidentiality and data integrity, but do not carefully consider other
security objectives, such as availability, leaving them open to disruption by
DoS adversaries.
Although numerous protocols have been proposed, most of them are still
ineffective against DoS attacks, so protocol developers require tools to verify
and validate proposed protocols and to reveal flaws in order to strengthen
their resistance to DoS attacks. To date, formal methods have been successfully
used as an effective tool to analyse cryptographic protocols. Formal methods
can be applied either in the initial phase of the design, in order to provide a
complete and unambiguous protocol specification, or in the final step of
protocol verification, in order to evaluate the specification of the protocol.
In this research, we focus on two formal analysis techniques, Meadows'
cost-based framework and Coloured Petri Nets, for examining DoS vulnerabilities
in key establishment protocols.
As this research draws on various kinds of material, this chapter provides
background knowledge on key establishment protocols, explores Denial-of-Service
(DoS) and Distributed Denial-of-Service (DDoS) attacks, and reviews some formal
analysis methods used to model and analyse the DoS threats of existing
DoS-resistant protocols.
2.1 Key Establishment Protocols
A key establishment protocol as defined by Menezes et al. [90] is a cryptographic
technique that provides shared secret keys, which are guaranteed to be fresh and
only known to the intended parties, used for securing subsequent communication
between two or more parties. It is important to note that, as our scope is
limited to two-party key establishment, all further discussion of key
establishment protocols in this research refers to two-party protocols between
an initiator and a responder. In the remainder of this thesis, the term
initiator can refer to an honest client or legitimate user, or to an attacker
or adversary, while the term responder refers to the server or host side of
key establishment protocols.
Basically, key establishment can be broadly divided into two subcategories
which are key transport and key agreement protocols. A key transport protocol
is used to securely deliver a secret key created by one party to another party,
while a key agreement protocol is a technique in which a shared secret key is
created from the input of two or more intended parties of the communication
and this secret key cannot be predetermined by anyone.
In terms of authentication, the two important elements are the key and the
user identity of each party. The assurance that the established key is known
only to the intended, identified parties is known as key authentication,
whereas the assurance that the other party actually possesses the established
key is called key confirmation. A key is fresh if it has not previously been
used or derived, and is not known to any other party in the communication. In
addition, the established key should provide forward secrecy, meaning that
session keys established before the compromise of a long-term key are not
affected by that compromise.
The key establishment protocols can be attacked by adversaries using either
passive or active techniques. Passive attacks refer to any techniques in which
adversaries only record data and subsequently analyse them in order to obtain
information that they want, such as a secret key. On the other hand, active
attacks are techniques in which adversaries are able to modify, delete, or insert
forged information into the communication. Since this research focuses on
adversaries who attempt to deny the services of the server, the techniques that
we allow adversaries to engage in are slightly different from the above
definitions. DoS adversaries do not record or modify information to obtain
secret values; instead, they attempt to perform illegal actions that stop the
server from providing services to legitimate parties. These illegal actions
are described and discussed later, in the experiment sections of Chapters 4
and 5.
To provide an example of key establishment protocols, we present the station-
to-station (STS) protocol proposed by Diffie et al. in 1992 [35]. The secret key
is derived from the Diffie-Hellman key exchange method [34]. The protocol also
provides user identity authentication via encrypted digital signatures.
Figure 2.1 shows the communication of the STS
protocol between an initiator (I) and a responder (R).
1) I → R : tI = g^x mod p
2) R → I : tR, EKIR{sigR[tR, tI]}   (R computes tR = g^y mod p, KIR = tI^y mod p)
3) I → R : EKIR{sigI[tI, tR]}       (I computes KIR = tR^x mod p)
Figure 2.1: STS Protocol [35]
In the STS protocol, an initiator I and a responder R agree to select and
publish a large prime p and a generator g of a multiplicative group G, with
2 ≤ g ≤ (p − 2). To establish a secret key, I randomly chooses a secret x in
the range 1 ≤ x ≤ (p − 2), computes tI = g^x mod p, and sends tI to R. On
receipt of message 1, R randomly chooses a secret y in the range
1 ≤ y ≤ (p − 2) and computes tR = g^y mod p. Next, the responder R generates
the shared secret key KIR = tI^y mod p, used to encrypt its signature (sigR)
over tI and tR in order to return this data securely to the initiator I. Upon
receipt of this message, I constructs the shared secret key KIR = tR^x mod p
and decrypts the ciphertext to verify the responder's signature using the
responder's public key. If it is valid, I accepts that the secret key is fresh
and authentic. Finally, I generates its own signature over tI and tR, encrypts
it with the secret key KIR, and delivers it back to the responder R. The STS
protocol therefore provides both key confirmation and key authentication.
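The key derivation in the exchange above can be checked numerically with toy parameters; the modulus and generator below are illustrative choices far too small for real use, and the signature and encryption steps are omitted:

```python
import secrets

# Toy public parameters; a real deployment uses a much larger,
# carefully chosen prime and generator (these are illustrative only).
p = 2**127 - 1      # a Mersenne prime, used here purely for illustration
g = 3

# I chooses secret x, R chooses secret y, both in the range [1, p-2]
x = secrets.randbelow(p - 2) + 1
y = secrets.randbelow(p - 2) + 1

t_I = pow(g, x, p)          # message 1: I -> R
t_R = pow(g, y, p)          # sent in message 2: R -> I

# Each side derives the shared key from the other's public value
K_at_R = pow(t_I, y, p)     # R computes t_I^y mod p
K_at_I = pow(t_R, x, p)     # I computes t_R^x mod p
assert K_at_I == K_at_R     # both sides now hold the same K_IR
```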
As a critical analysis of the STS protocol, one example of a formal method
that has been successfully applied to it is Meadows' cost-based framework [88].
This formal technique was used to analyse the protocol for denial-of-service
vulnerabilities. In her evaluation, Meadows found two obvious ways to mount DoS
attacks on the server. Firstly, DoS adversaries can flood bogus requests at
message 1, which is cheap to generate, while the server wastes a significant
amount of resources responding to these packets, because it must compute a
Diffie-Hellman value for each request. Secondly, if adversaries impersonate
legitimate users and communicate with the server until the last message, they
can exhaust the server's resources in verifying the bogus signatures. Note
that an adversary does not have to compute a valid signature. These two
scenarios represent examples of DoS threats in the STS protocol that may arise
from the protocol developer's unawareness. As a result, formal analysis
techniques play an important role in protocol analysis, not only to guarantee
the correctness of an implementation, but also to provide concrete confidence
to the user.
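The first attack scenario can be made concrete by tallying unit costs per message, in the spirit of Meadows' cost-based comparison; the weights below are our own arbitrary choices, not values taken from the framework itself:

```python
# Arbitrary illustrative cost weights (our own, not Meadows' values).
COST = {"cheap_send": 1, "exponentiation": 100, "signature_verify": 80}

def message1_flood(n_requests: int):
    """Attacker floods bogus copies of message 1: each costs the attacker
    only a cheap send, while the responder performs a Diffie-Hellman
    exponentiation per request."""
    attacker_cost = n_requests * COST["cheap_send"]
    server_cost = n_requests * COST["exponentiation"]
    return attacker_cost, server_cost

atk, srv = message1_flood(1000)
assert srv == 100 * atk    # the server pays 100x what the attacker pays
```

The imbalance between the two columns is exactly what the cost-based framework flags: a protocol is DoS-vulnerable wherever the responder's cumulative cost outruns the initiator's.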
2.2 Denial-of-Service Attacks in Key Establishment
As stated by Gligor in 1984 [47], denial-of-service (DoS) attacks aim to deny
legitimate users access to shared services or resources on a computer network.
Basically, this internetworking system consists of clients, servers, and
communication networks. Because the Internet is an open and insecure
internetworking system, it may contain malicious users or adversaries
attempting to perform illegal actions to gain an advantage over others. In
some cases DoS attacks might not cause permanent harm to the victim, because a
computer can restart and continue working. However, they can cause real
damage, especially when a computer is used in a corporate network such as a
government or commercial website, or an Internet Service Provider (ISP). The
growth of DoS attacks has been officially reported and published by the CERT
Coordination Center [28].
Slightly different from DoS, Distributed Denial-of-Service (DDoS) attacks
[26,27] are attacks in which adversaries launch the attack from multiple source
systems. An attacker uses a large number of compromised machines to send bogus
packets simultaneously to crash a victim or its Internet connection. These
packets are significant enough to break down the system. By taking advantage
of security vulnerabilities or weaknesses, the attacker may be able to
automatically control other compromised computers by installing exploit
software, such as Trojan horses containing malicious code, on legitimate
clients' computers. When the attacker decides to launch a DDoS attack against
a server, they send a common signal to trigger all compromised computers to
make simultaneous requests to the same target server. As a result, the large
number of packets coming from many compromised computers can break down the
victim system very easily. In this technique, the magnitude of the attack
depends on the number of compromised computers. One example of a DDoS attack
is the Tribe Flood Network attack [25].
DoS and DDoS attacks are simple and common attacks in the network. Both
of them can do serious damage to network services. As stated by Needham [98],
there are three basic types of such attacks.
1. Attacks on the server: an attacker attempts to disrupt the server so that
it refuses normal connections, causing the server to deny legitimate users
access to its services;
2. Attacks on the network bandwidth: an attacker attempts to flood a net-
work, thereby preventing legitimate network traffic between servers and
clients;
3. Attacks on the client: an attacker attempts to interrupt or disrupt service
to a specific system or person. It could be substitution, which involves
replacement of the client by the attacker.
To be more specific, Mirkovic and Reiher [93] classified the attacks based on
the victim type. The following list summarises the five different targets of
DoS attacks investigated by Mirkovic and Reiher.
1. Application: An obvious example of this attack on key establishment
protocols is a bogus signature attack on the authentication scheme. Basically,
the authentication process consumes substantial resources on the server's
machine. The impact of this attack is that the server exhausts its resources
and becomes unable to provide service to any user.
2. Host: The TCP SYN attack [23] is an example of an attack on the host. The
attacker opens a large number of incomplete connections to a target server,
causing the server to be overloaded and crash. This attack is
difficult to defend against using cryptographic techniques alone; combining
them with other defence mechanisms, such as firewalls, can help prevent it.
3. Resource: Example targets of this attack are a DNS server, or a router
connected to bottleneck links in the internetworking system. The attacker may
flood bogus traffic, or inflate the processing load at those links, to cause
congestion in the network. To prevent this, designing a robust network
topology might be an option.
4. Network: The attack floods a high volume of bogus traffic to consume the
connection bandwidth of the target network. This attack can be detected easily
because of the large volume of malicious traffic involved. Adopting an
intrusion detection mechanism or firewalls can help the server handle this
circumstance.
5. Infrastructure: This attack targets the infrastructure of the victim
network. Example targets could be either DNS servers, or backbone routers that
provide routing services for government networks. This attack is difficult to
prevent; cooperation between organisations could be a possible solution to
counter it.
In this thesis, we focus on actions that degrade or prevent the server from
providing service to legitimate users. Since the server is the centre
providing service to all users in the network, the consequences of attacks on
it are far more severe than those of attacks on a client. With respect to the
attack techniques in Mirkovic and Reiher's classification, we limit ourselves
to attacks based on the application and host only. Since our work aims to
design DoS-resistant key establishment protocols, attacks on resources, the
network, and the infrastructure are beyond our scope. More importantly, it
would be difficult for cryptographic protocols alone to tackle these kinds of
attacks. Integrating them with other defensive mechanisms, such as firewalls
and intrusion detection systems, would be a promising approach to counteract
such attacks.
Considering the techniques used to launch DoS attacks on the target server,
Hussain et al. [54] classified them into two fundamental schemes. The first
scheme exploits vulnerabilities in software by crafting malicious packets and
sending them to crash the server, while the second scheme floods a large
volume of bogus traffic to overwhelm the server's resources. As recommended by
CERT, the first scheme is easy to prevent by updating and patching software
vulnerabilities via the authorised website, while the second scheme is more
difficult to prevent, since the internetworking system is an open community
and the server is a centre providing service to all users. For the remainder
of this thesis, we focus on the second scheme and aim to implement
DoS-resistant mechanisms that help the server resist such attacks, without
themselves introducing new DoS opportunities against the server.
To mount DoS attacks on the server, adversaries have to interfere with, or
temporarily cut off, the server's access to the computer network. There are
several possible targets on the server that adversaries may disrupt,
comprising the availability of computational resources and storage capacity.
Denial-of-service adversaries achieve their goals if any of these resources
are exhausted, because the server may then crash or be disconnected from the
Internet.
Typically, techniques to deny service can be broadly classified into two
categories: 1) flooding denial-of-service attacks, and 2) non-flooding
denial-of-service attacks, as specified by Smith [124].
2.2.1 Flooding Denial-of-Service Attacks
In flooding denial-of-service attacks, most DoS attackers exploit software
vulnerabilities in an operating system or vulnerabilities in a communication
protocol to prevent legitimate users from accessing information or services on
target servers. Peng et al. [105] define the impact of flooding attacks in
terms of attack power. This term represents the level of resources consumed at
the server by DoS adversaries. In particular, the attack power consists of the
traffic volume and the level of resources consumed per packet. The former
parameter refers to the number of messages from adversaries in a certain
period, while the latter parameter refers to the computational or memory usage
involved in processing the adversaries' packets.
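This notion of attack power, traffic volume multiplied by per-packet cost, can be expressed directly; the function name and the numbers below are our own illustration, not taken from Peng et al.:

```python
def fraction_served(legit_rate, attack_rate, cost_legit, cost_attack, cpu_budget):
    """Fraction of legitimate requests still processed when the server's
    fixed CPU budget is shared with attack traffic. The total demand is
    the sum of each traffic volume times its per-packet cost; requests
    are assumed to be served in proportion to arrival."""
    demand = legit_rate * cost_legit + attack_rate * cost_attack
    if demand <= cpu_budget:
        return 1.0
    return cpu_budget / demand

# With no attack the server keeps up; a modest flood of expensive-to-process
# packets (high per-packet cost) leaves only a sliver of capacity.
assert fraction_served(100, 0, 1.0, 50.0, 200) == 1.0
assert fraction_served(100, 100, 1.0, 50.0, 200) < 0.05
```

The second assertion illustrates the point of the definition: a low-rate flood of expensive packets can have the same attack power as a much larger flood of cheap ones.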
A simple flooding attack occurs when adversaries generate a high volume of
bogus requests to a server. Generally, the server has a limited capacity to
process a certain number of requests, depending on the size of its storage or
the speed of its central processing unit (CPU). Thus, if adversaries overload
the server with a high number of bogus requests, the server might not be able
to handle or simultaneously process incoming requests from other legitimate
clients. Once the server exceeds its maximum capacity, it begins to drop
incoming packets. The following are a few examples of well-known flooding
attacks used to disrupt service on the server.
1. TCP SYN Flooding and IP Spoofing Attacks [23]: Basically, when a client
attempts to establish a TCP connection to a server, the client and server
exchange a three-message sequence called the three-way handshake. However, the
attacker abuses this service by flooding incomplete sequences from spoofed IP
addresses. The server therefore never receives the reply messages, since the
IP addresses are invalid. This causes the server to hold half-open connections
until it becomes overloaded and is unable to serve any incoming packets. The
impact of this attack is that the server exhausts its memory and then
terminates.
2. Trinoo [25]: The User Datagram Protocol (UDP) is one of the standard
protocols in the TCP/IP suite. Trinoo generates UDP packets and floods them to
random ports of target machines from many sources. An attacker uses this tool
to control compromised machines and instructs these machines to flood
denial-of-service traffic against one or more IP addresses.
3. Tribe Flood Network or TFN [25]: TFN differs slightly from Trinoo in that
it is a distributed tool that uses spoofed source IP addresses to generate UDP
flood, TCP SYN flood, ICMP echo request flood, and ICMP directed broadcast (or
smurf) denial-of-service attacks.
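The SYN-flood mechanism in item 1 above can be sketched as a fixed-size table of half-open connections that spoofed requests fill up. This is a deliberately simplified model with no timeouts, not a real TCP stack, and the addresses are hypothetical:

```python
class HalfOpenTable:
    """Simplified model of a server's backlog of half-open TCP
    connections; an entry only leaves when the handshake completes
    (timeouts are deliberately not modelled)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pending = set()

    def syn(self, src_addr: str) -> bool:
        """Handle an incoming SYN; returns False when the backlog is
        full and the connection attempt must be dropped."""
        if len(self.pending) >= self.capacity:
            return False
        self.pending.add(src_addr)
        return True

    def ack(self, src_addr: str) -> None:
        """Third handshake message: the connection completes and
        leaves the backlog."""
        self.pending.discard(src_addr)

table = HalfOpenTable(capacity=128)
# Spoofed sources never send the final ACK, so the table fills up...
for i in range(200):
    table.syn(f"10.0.0.{i}")          # hypothetical spoofed addresses
# ...and a legitimate client is now refused service.
assert table.syn("192.0.2.7") is False
```

The memory-exhaustion effect described in the text corresponds to the backlog saturating: once `pending` reaches capacity, every further SYN, legitimate or not, is dropped.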
2.2.2 Non-flooding Denial-of-Service Attacks
On the other hand, adversaries require more sophisticated tools to mount
non-flooding attacks, which are advanced techniques that attack the underlying
algorithms of the protocols. The consequences of such attacks are
unpredictable, ranging from minor to significant disruption of the targeted
victims. The targets could be network protocols, operating systems, or
applications, as identified by Smith [124]. The following are a few examples
of non-flooding attacks.
1. Denial-of-Service Attack via ping or Ping of Death [22]: This technique is
one of the simplest attacks, carried out by exploiting a flaw in TCP/IP
implementations. In general, the TCP/IP standard allows a maximum packet
size of 65,535 octets. If attackers craft oversized Internet Control Message
Protocol (ICMP) packets via the ping command, a target server that receives
these illegal packets may crash and disconnect from the computer network.
2. Smurf IP Denial-of-Service Attacks [24]: In the smurf attack, attackers send
ICMP echo request packets to IP broadcast addresses, with spoofed source
addresses, to generate denial-of-service attacks. Under the TCP/IP protocol
standard, when a host receives packets sent to a broadcast address, it delivers
these packets to all machines on that network. Hence, if attackers wish to
exhaust the target's resources, they create these packets with a forged source
address, namely the spoofed address of the victim. The result is that when
all the machines respond to the ICMP requests sent to the broadcast address,
they send their replies to the victim's machine. The impact on the target's
machine ranges from congestion to complete disruption.
3. IP fragmentation attacks on Internet Key Exchange version 2 (IKEv2) [66]:
Even though this protocol was designed to resist denial-of-service attacks,
there is a flaw in its fragmentation handling. Because IKEv2 does not have a
mechanism for fragmenting large messages, it has to rely on the fragmentation
of the underlying User Datagram Protocol (UDP) transport, and the size of
messages in this implementation is variable. As a result, this fragmentation
scheme opens another denial-of-service vulnerability, the so-called IP
fragmentation attack. The target of this attack is the fragmentation and
reassembly part of the networking stack, and it causes the receiver of these
illegal packets to crash or disconnect from the network.
2.2.3 Denial-of-Service Countermeasures
In term of counteracting techniques, an interesting survey on DoS and DDoS
defense mechanisms has been done by Mirkovic et al. [92]. Mirkovic et al. classify
the defense approaches into three important strategies;
1. Protection: This strategy aims to reduce the opportunity for adversaries
to perform DoS attacks on the server. It involves strengthening the protocol
by modifying some underlying mechanisms of the protocols or applications;
patching software vulnerabilities also falls into this approach. Some example
techniques that help to protect the server from attacks are:
• Source Validation: The major aim of this technique is to prevent IP
spoofing attacks. By verifying the user's identity before providing the
service, the server at least ensures that the communication is initiated
by a valid user. An example defense technique proposed to combat IP
spoofing attacks is CAPTCHA [132], a program that generates distorted
characters and asks the user to read them and return the correct answer
along with a request. However, this approach is not appropriate in
circumstances that involve computer-to-computer communication, such
as routing protocol updates or DNS lookups.
• Proof of Work: To address a problem with the source validation approach,
proof of work is proposed to counterbalance resource usage between a
client and a server. Client puzzles [62] form one example of this
technique: the client is asked to expend some amount of work and to
prove this to the server before the service is supplied. Proof of work
helps the server to defeat IP spoofing attacks in a similar way to the
source validation technique. If the attacker attempts to exhaust the
server's resources by flooding a high volume of requests to make the
server return a large number of puzzles, or by flooding a large number
of bogus solutions to make the server waste resources checking them,
these attacks might be less effective because puzzle generation and
puzzle verification are cheap operations that consume very few
resources. However, the major weakness of proof of work is in defending
against network bandwidth attacks; it is best combined with other
defensive techniques to counteract such attacks.
2. Detection: Due to the complex nature of DoS attacks, implementing only
a defensive mechanism might not be enough to successfully prevent the
attacks. Some detection schemes may be deployed to fulfill the DoS
prevention requirement. A well-known tool used as a first line of defense
is the intrusion detection system. Such mechanisms can be broadly
classified into three main categories:
• Signature Detection: The important element of this detection is the
attack signature, a database of known attack characteristics recorded
in the past. All incoming packets are compared to the database, and
suspicious packets are removed if they match a signature. This operation
requires precise signatures; otherwise the false rejection rate of
legitimate packets rises and causes degradation of service in the
system. In intrusion detection terminology, this situation is called a
false-positive alert.
• Anomaly Detection: Different from signature detection, anomaly
detection instead stores signatures of legitimate users' behaviour,
because it can be difficult to capture all adversarial behaviour:
adversaries are able to modify their attack techniques once their
attacks are detected and dropped from the system. However, a challenge
emerges because not only do protocol specifications keep changing, but
different users may also run a variety of applications attempting to
communicate with the server. These circumstances can increase
false-positive alerts if the signature is inaccurately configured.
• Misbehaviour Detection: Basically, this mechanism lies between the
signature detection and anomaly detection schemes. It implements both
signatures of suspicious traffic and signatures of legitimate users'
behaviour. By recording patterns of misbehaviour as well as attempting
to capture all other types of traffic, traffic is deemed suspicious if
it does not match any known pattern.
3. Response: Having implemented DoS protection and detection to counteract
adversaries, the final strategy is the attack response. This involves
mitigating the impact of DoS on legitimate users. The response comprises
three techniques:
• Traffic Policing: This is the most obvious way to prevent DoS attacks:
drop all suspicious traffic in the system. It is considered the most
practical response because it requires minimal implementation effort.
However, this technique can cause the highest false rejection rate if
an inaccurate attack characterisation is used. Hence, the challenge of
this technique is to decide which packets are suspicious and to
determine how many of them to drop.
• Attack Traceback: The major goal of this scheme is to trace suspicious
traffic back to its source and then apply law enforcement to the
attackers. However, it might be the most difficult technique to achieve,
because most DoS attacks use IP spoofing techniques or originate from
thousands of compromised machines with different IP addresses. Another
concern with this scheme is that it requires expensive infrastructure
installed at various places among the network components in order to
successfully trace back to the source of the attacks.
• Service Differentiation: This scheme is based on mutual trust between
the server and individual clients in order to provide different levels
of service. Clients are required to prove their legitimacy to the
server, for example by proof of work; those who pass this test obtain
the intended service from the server. It is important to note that this
scheme should be engaged only when the server becomes overloaded, so
that the server reserves its resources for the clients who are able to
prove their legitimacy. The challenge of this scheme is preventing an
attacker who can generate a high volume of proofs of work for the
server. In particular, one obvious drawback of this scheme is that the
client must have specific software installed on its machine to
participate.
As this research addresses the issue of developing DoS-resistant key establish-
ment protocols for the server, we pay our attention to flooding attacks that
operate at the application layer and exploit vulnerabilities in cryptographic key
establishment protocols to consume CPU and memory resources on the server. In
addition, the DoS countermeasures resulting from this work are limited to the
protection technique based on cryptographic client puzzles, which are designed
to counterbalance the workload between the server and the client, as well as
to penalize adversaries attempting to overload the server's resources by flooding
bogus messages. The final DoS-resistant mechanisms should, at the least,
introduce neither well-known nor new DoS vulnerabilities to the server. The
following section introduces a DoS defense mechanism based on various crypto-
graphic algorithms to protect the server from resource exhaustion attacks. The
major goal of this mechanism is to improve DoS resistance and relieve the
server from DoS attacks.
2.3 DoS Defensive Cryptographic Technique in
Key Establishment
The idea of using cryptographic puzzles or client puzzles in computer networks
was first introduced by Dwork and Naor [41] for combating junk emails. Several
years later, Juels and Brainard [62] extended the concept of puzzles to thwart
Denial-of-Service (DoS) attacks in network protocols. Using client puzzles as
a denial-of-service defense mechanism in key establishment protocols has
recently been implemented in the Host Identity Protocol (HIP) [95].
Client puzzles counterbalance computational usage1 between client and server
machines. By forcing the client to solve a computational puzzle before attending
to a request, the server ensures that the client spends sufficient resources before
committing its own. In particular, an adversary who attempts to flood a server
with a large number of bogus requests will suffer from solving a huge number of
puzzles.
Client puzzles can be constructed from a number of underlying problems.
Although many puzzles have been proposed using different techniques, all of
them should satisfy the seven important properties described by Aura et al. [6].
Properties of a good puzzle include [6, 62]:
1. generation and verification is inexpensive for the responder;
2. level of difficulty can easily be adjusted from trivial to impossi-
ble;
3. solutions should not require specialised client hardware;
4. solutions cannot be precomputed;
5. issuing a puzzle does not require the responder to store any
state;
1 The definition of this terminology will be given in Section 3.1.
6. knowledge of the solution to one client’s puzzle is of no benefit in
solving other puzzles, so that the same puzzle may be provided
to numerous clients; and
7. initiators can reuse a puzzle by creating new instances of it.
For the purpose of evaluating and comparing individual types of client puzzles,
Feng et al. [44] proposed some additional criteria concerning the efficiency and
resolution of cryptographic puzzles. As defined by Feng et al. [44], puzzle
efficiency represents the speed of puzzle generation and verification on the
server's machine compared to puzzle solving on the client's machine. Meanwhile,
the resolution or puzzle granularity represents the ability of the server to
finely control the amount of work done by the client by calibrating the puzzle
difficulty. Another requirement is the non-parallelization characteristic, which
describes whether client puzzles can be distributed and solved in parallel,
either on a group of high-end machines or on multi-core processors.
Non-parallelizable puzzles can prevent coordinated adversaries from distributing
puzzles to a group of high-performance machines to obtain solutions more quickly
than the period assigned by the server.
Having introduced the requirements of good puzzles, we now provide a survey
and comparison, in terms of strengths and weaknesses, of some existing client
puzzle proposals.
2.3.1 Hash-based Reversal Puzzles
In 1999, Juels and Brainard [62] introduced the construction of client puzzles
to protect the responder against SYN flooding attacks, calling their puzzles
hash-based reversal puzzles. The underlying function for constructing Juels and
Brainard's puzzle is the hash function: clients need to reverse a one-way hash
value of a puzzle generated by the responder. In this technique, the responder
can adjust the difficulty level of the client puzzle by increasing or decreasing
the number of preimage bits withheld from the client. The client performs a
brute-force search on the missing puzzle bits, hashing each candidate pattern
until it finds the answer.
The puzzle (shown in Figure 2.2(a)) is constructed by hashing session parameters
(M) such as a client's ID, the current time (t), and a responder's secret
number (s). The output of this hash operation (x) becomes the preimage to
another application of the hash function, whose output (y) forms part of the
puzzle. The responder is therefore required to spend two hash operations.
To solve a puzzle, the initiator has to test candidate preimages until the
correct output is found. Given the partial preimage x′ (x with k bits masked
out) and the hash digest y, this operation takes 2^(k−1) hash operations on
average, while the upper bound complexity of the puzzle is 2^k hash operations.
This sort of hash-based puzzle can be distributed to high-end machines and
solved in parallel to obtain the solution quickly, reducing the complexity by a
factor of z, where z is the number of parallel computers.
To verify the solution (p), the responder first checks that the time value t is
correct. After that, it confirms the authenticity of the solution with a single
hash operation: p ?= hash(t, M, s). This single hash operation is very cheap.
Figure 2.2: Juels and Brainard Hash-based Puzzle Constructions: (a) Juels and Brainard Puzzle; (b) Aura et al. Puzzle
An alternative construction, shown in Figure 2.2(b), was proposed by Aura
et al. [6]. Different from Juels and Brainard's construction, this puzzle
consists of a time value (t), a responder nonce (NR), and a difficulty
parameter (k). To solve the puzzle, the initiator must find a value x that,
when hashed with the responder nonce (NR), produces a digest whose first k bits
are zeros. To verify the solution, the responder first checks whether t and NR
are recent. Then the responder verifies the solution (x) by hashing it with the
nonce NR and checking that the first k bits of the output are zero. Since all
of these operations are similar to Juels and Brainard's puzzles, the average
computational complexity of Aura et al.'s puzzles is the same.
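As a concrete illustration of the hash-based reversal idea, the following sketch implements an Aura-style puzzle, with SHA-256 standing in for the hash function. The nonce length, byte encodings, and function names are our own assumptions, not part of the published scheme.

```python
import hashlib
import os

def has_k_zero_bits(digest: bytes, k: int) -> bool:
    # check whether the first k bits of the digest are zero
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - k) == 0

def make_puzzle(k: int):
    # responder: a fresh nonce NR and a difficulty level; essentially free
    return os.urandom(8), k

def solve(nr: bytes, k: int) -> int:
    # initiator: brute-force x until hash(NR || x) starts with k zero bits
    x = 0
    while True:
        if has_k_zero_bits(hashlib.sha256(nr + x.to_bytes(8, "big")).digest(), k):
            return x
        x += 1

def verify(nr: bytes, k: int, x: int) -> bool:
    # responder: a single hash operation
    return has_k_zero_bits(hashlib.sha256(nr + x.to_bytes(8, "big")).digest(), k)
```

With difficulty k the initiator performs about 2^(k−1) hashes on average, while generation and verification each cost roughly a single hash, matching the asymmetry described above.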
In summary, the major strength of Juels and Brainard’s puzzle is that it is
simple and fast to construct and verify on the responder’s machine. On the
other hand, the weaknesses are that it is parallelisable and its granularity is
exponential.
2.3.2 Hint-Based Hash Reversal Puzzles
The idea of hint-based hash reversal puzzles was introduced by Feng et al. [42]
in 2003 to improve on the granularity of hash-based reversal puzzles. The idea
of this mechanism is that the responder attaches extra information, called a
hint, to the puzzle. Instead of checking every possible solution, the initiator
searches for a solution within a range from A to B, where A is the starting
point and B is the ending point. To calculate the values A and B, the responder
uses a uniformly random function U(·), then calculates A = x − U(·) and
B = x + k − U(·), where x is the input of the hash used to generate the puzzle
and k is the puzzle difficulty. The puzzle construction is demonstrated in
Figure 2.3.
Figure 2.3: Feng Hint-based Hash Reversal Puzzle Constructions
Similar to the original work of Juels and Brainard [62], the complexity is one
or two hash operations for construction and verification. However, the client's
computational cost is up to k hash operations, because the number of trials is
at most k. Thus fast construction and verification, as well as linear
granularity allowing fine-grained control, are the strengths of this
construction. However, it remains susceptible to distribution and
parallel-processing attacks, as is the original hash-based reversal scheme.
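The hint mechanism above can be sketched as follows; the hash function, preimage width, and function names are illustrative assumptions.

```python
import hashlib
import random

def H(v: int) -> bytes:
    return hashlib.sha256(v.to_bytes(16, "big")).digest()

def make_puzzle(k: int):
    # responder: pick a secret preimage x and hide it inside a hint range
    x = random.randrange(k, 2**64)      # keep x >= k so the range stays non-negative
    u = random.randrange(k)             # uniform offset U(.) so x is not always at A
    a, b = x - u, x + k - u             # hint: the solution lies in [A, B]
    return H(x), a, b

def solve(y: bytes, a: int, b: int) -> int:
    # initiator: at most k + 1 hashes; linear, finely adjustable work
    for cand in range(a, b + 1):
        if H(cand) == y:
            return cand
    raise ValueError("no solution in hint range")
```

Because the k candidates in the hint range are independent, the search is linear in k but can still be split across machines, which is exactly the parallelisation weakness noted above.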
2.3.3 Repeated-Squaring: Time-Lock Puzzles
Time-Lock Puzzles were developed by Rivest et al. [112] in 1996. The major
goal of this technique is to defeat the high-end adversaries who attempt to solve
puzzles more quickly by using parallel computers. Time-lock puzzles rely on
the notion that a client has to spend a pre-determined amount of computation
time performing repeated squaring to search for a solution. To achieve this goal,
the responder estimates the performance of a client by the number of squaring
operations a client can perform in a certain period, and determines the amount
of time it expects a client to spend solving the puzzle.
The first step of the construction is an approximation of the client computation
time (t), measured as a number of repeated squarings. Then the responder chooses
a ∈R Zn at random and computes a^(2^t) mod n. To do so efficiently, the
responder first computes e = 2^t mod φ(n), where n is the product of two large
primes p and q, then calculates b = a^e mod n. Next, the responder computes
K + a^(2^t) mod n and encrypts a message x with the secret key K using a
symmetric cipher, for instance RC5. The complete construction is illustrated in
Figure 2.4.
Figure 2.4: Rivest Time-lock Puzzles Construction
To solve the puzzle, the client is required to compute a^(2^t) mod n by repeated
squaring, because the client does not know p and q. This computation must be
performed sequentially, so it cannot be distributed and solved in parallel. An
alternative way to compute the solution would be to factor n instead; if p and q
are chosen large enough, computing the repeated squaring directly is more
efficient than such an attack. As a result, the client needs to perform t
sequential modular multiplications, as specified by the responder. The parameter
t plays the role of the puzzle difficulty, so we refer to it as k when comparing
with the other puzzles. Hence, the complexity of Rivest's puzzles is O(k). In
addition, since this value reflects the granularity of the puzzles, we can
conclude that time-lock puzzles have linear granularity.
To sum up, time-lock puzzles provide an inherently sequential, non-parallelisable
characteristic as well as linear granularity. However, the major concern with
this scheme is the high computational cost of construction and verification,
because the underlying technique requires the responder to perform modular
arithmetic.
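The asymmetry between the responder's shortcut and the client's sequential squaring can be sketched with toy parameters; real time-lock puzzles use RSA-sized primes, and the variable names here are our own.

```python
# Toy parameters: real time-lock puzzles use large RSA moduli.
p, q = 10007, 10009
n, phi = p * q, (p - 1) * (q - 1)
t = 20_000        # the puzzle difficulty: number of sequential squarings
a = 2

# Responder: knowing phi(n), reduce the exponent first; one cheap exponentiation.
e = pow(2, t, phi)
b_fast = pow(a, e, n)

# Client: without p and q it must perform t squarings one after another;
# each step needs the previous result, so the work cannot be parallelised.
b_slow = a
for _ in range(t):
    b_slow = (b_slow * b_slow) % n

assert b_slow == b_fast   # both compute a^(2^t) mod n
```

The responder's shortcut works because, by Euler's theorem, a^(2^t) ≡ a^(2^t mod φ(n)) (mod n) when gcd(a, n) = 1, so knowledge of the factorisation collapses t squarings into one exponentiation.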
2.3.4 Diffie-Hellman based Puzzles
Diffie-Hellman based (DH-based) puzzles were proposed by Waters et al. [136]
in 2004. They investigated numerous techniques for outsourcing the generation
of puzzles in order to remove the computational load of puzzle generation from
the responder. The secure entity used for outsourced puzzle generation is
called a bastion host, as shown in Figure 2.5.
The construction of the puzzles is based on the Diffie-Hellman method. In the
construction, the bastion host selects a first random integer r ∈R Zn, and a
second random value a in the range r to r + k. Since k serves as the puzzle
difficulty, the responder is able to easily control the initiator's workload in
a linear fashion. The puzzle sent to the initiator contains the values g^a and
r, where the generator g is selected from a group of order q. Hence, the
computational cost of this construction is one modular exponentiation.
In puzzle solving, the initiator searches for a solution by testing each
candidate value in the range r to r + k until it finds c such that g^c = g^a.
Because another goal of this implementation is to achieve per-channel puzzle
distribution and per-channel puzzle solution, the initiator calculates Y^a,
where Y is the responder's Diffie-Hellman public key, in order to submit the
solution to a specific server. The number of trials of modular exponentiation
is at most k, beginning at r and stopping at r + k. As a result, the complexity
of this operation is O(k) modular exponentiations. Considering the
non-parallelization feature, a given puzzle behaves on the client's machine
like the hint-based scheme, with k candidate solutions in a range from A to B.
As a result, this scheme does not support non-parallelization.
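A minimal sketch of the search step follows, assuming a toy prime modulus and omitting the bastion host and the per-channel binding via Y^a; the group parameters and function names are our own assumptions.

```python
import random

# Toy group: real deployments use standardised large Diffie-Hellman groups.
p = 2_147_483_647    # a Mersenne prime
g = 7

def make_puzzle(k: int):
    # bastion-host role: secret exponent a hidden in the hint range [r, r + k]
    r = random.randrange(1, p - k - 2)
    a = random.randrange(r, r + k + 1)
    return pow(g, a, p), r        # challenge: (g^a, r); a is kept for lookup

def solve(ga: int, r: int, k: int) -> int:
    # initiator: up to k + 1 modular exponentiations (linear granularity),
    # but the candidates are independent, so the search parallelises easily
    for c in range(r, r + k + 1):
        if pow(g, c, p) == ga:
            return c
    raise ValueError("no solution in range")
```

Construction costs one modular exponentiation and solving costs O(k) of them, which is why the text classifies construction as expensive relative to the hash-based schemes.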
Generally, the responder needs a single modular exponentiation to verify the
solution, by raising the challenge value to its private key X, i.e. (g^a)^X.
However, the system proposed by Waters et al. [136] limits puzzles to a given
timeslot. Moreover, the solutions for a given timeslot are precomputed by the
responder, so puzzle verification is performed by a table lookup to save
computational load. The summary of the DH-based operation is illustrated in
Figure 2.5.
Figure 2.5: Diffie-Hellman based Puzzles Constructions
Since the responder generates the puzzle solution during construction, puzzle
verification can be done simply via table lookup. Linear granularity is another
positive characteristic of this scheme. However, the expensive computation due
to modular exponentiation, as well as the distributable and parallelisable
nature of the search, are limitations of the DH-based technique.
2.3.5 Trapdoor RSA-based and DLP-based Puzzles
Gao [46] developed two novel puzzle mechanisms, named trapdoor RSA-based
puzzles and trapdoor DLP-based puzzles, whose constructions are based on
trapdoor functions. To improve on existing techniques, the two algorithms
attempt to reduce the overhead of puzzle construction and verification, as well
as to support fine granularity, in order to mitigate DoS vulnerabilities.
• Trapdoor RSA-based puzzles: The security of the RSA-based technique rests
on the assumed hardness of the factorisation problem; i.e. given N, a product
of two large primes p and q, it is infeasible to compute e-th roots in the
group of order φ(N) = (p − 1) · (q − 1).
This technique requires the responder to precompute some constants to reduce
the overhead of construction. These constants consist of time-dependent
parameters g_t and x_t, and a puzzle constant τ, which requires one modular
exponentiation plus one modular multiplication to construct. Note that the
precomputation load depends on the number of candidate data sets X_A and G_A
stored, because each candidate requires one exponentiation to generate. Once
the responder receives a request, it generates a puzzle challenge containing
(g^a′, [A,B]), computed by picking a precomputed value from the set of
candidates. The responder also computes the solution x, which is secretly
stored in memory for use in verification, so that the verification process is
very simple and fast: a single table lookup. Since the range [A,B] serves as a
hint for a puzzle with k candidate values, the responder is able to effectively
control the client's workload in a linear manner, similar to other linear
puzzles.
To solve a puzzle, the initiator calculates the corresponding answer (x′) using
the equation M = Y · x′^e mod n. Given M, e, and n as public information and Y
as the puzzle challenge, the initiator brute-force searches for x′ within the
range [A,B]. The number of trials depends on the puzzle difficulty level k
determined by the responder. As a result, the upper bound complexity of solving
the puzzle is O(k) modular exponentiations. Like other hint-based puzzles, this
puzzle can be distributed to, and computed in parallel by, k high-performance
computers. The outline of the operation of these puzzles is shown in Figure 2.6.
Figure 2.6: Trapdoor RSA-based Puzzles
• Trapdoor DLP-based puzzles: The security of this technique is based on the
discrete logarithm problem: given a large prime p, a generator α of Z*_p, and
a random number β ∈R Z*_p, it is hard to find x (0 ≤ x ≤ p − 2) such that
α^x ≡ β (mod p).
Slightly different from the first technique, only the candidate data set G_A is
precalculated and stored at the responder. Moreover, the precomputation
required at this step is reduced to one modular exponentiation, i.e. to
generate the initial constant x^(−1). Hence, this scheme has a smaller
precomputational burden than the former one. The puzzle challenge contains the
parameters (g^a′, [A,B]), as in the RSA-based scheme. The puzzle solution x is
also computed in advance and secretly stored in memory for verification. Again,
this scheme minimizes the computational burden of verification by using a
single table lookup to compare the puzzle solution.
Using the equation M = Y · h^x′ (mod p), a client can compute the corresponding
solution x′. Hence, the worst-case number of trials in the brute-force search
for x′ is k modular exponentiations. Similar to the former scheme, the upper
bound complexity of puzzle solving is O(k) modular exponentiations. The
trapdoor DLP-based scheme is demonstrated in Figure 2.7.
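The core hint-range relation of the trapdoor RSA-based scheme above can be sketched as follows. This simplified model keeps only the search against M = Y · x^e mod n and omits the time-dependent parameters g_t, x_t and the constant τ; the modulus size, parameter names, and the use of Python's modular inverse (pow(x, -1, n), available from Python 3.8) are our own assumptions.

```python
import math
import random

# Toy RSA modulus; real schemes use large primes.
p_, q_ = 10007, 10009
n, e = p_ * q_, 65537
M = 123456789 % n     # fixed public constant

def make_puzzle(k: int):
    # responder: choose a secret solution x inside a hint range [A, B],
    # then publish Y = M * x^(-e) mod n so that M = Y * x^e mod n holds
    a = random.randrange(k + 1, n - k - 1)
    while True:
        x = random.randrange(a, a + k + 1)
        if math.gcd(x, n) == 1:            # x must be invertible mod n
            break
    y = (M * pow(pow(x, e, n), -1, n)) % n
    return y, a, a + k                      # server keeps x for table-lookup verify

def solve(y: int, a: int, b: int) -> int:
    # initiator: up to k + 1 modular exponentiations over the hint range
    for cand in range(a, b + 1):
        if (y * pow(cand, e, n)) % n == M:
            return cand
    raise ValueError("no solution in hint range")
```

As in the other hint-based schemes, the responder controls the work linearly through k, while verification against the stored x is a single lookup.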
Both trapdoor-function based schemes involve modular computation, so we
evaluate these techniques as having expensive construction. Although the
protocol developer suggested performing some precomputation to avoid the CPU
burden at construction time, puzzle generation still requires computing with
large numbers using modular arithmetic. The advantage of these mechanisms is
that they offer fast verification via table lookup, as well as linear-grained
control for puzzle difficulty adjustment.
Figure 2.7: Trapdoor DLP-based Puzzles
2.3.6 Cryptographic Chaining Puzzles
Client puzzles are non-parallelizable if the solution to the puzzle cannot be
computed in parallel. Apart from time-lock puzzles, in which the underlying
computation requires the solver to perform sequential repeated modular
calculation, an alternative method to prevent adversaries from computing
puzzles in parallel is a chaining technique. Since the nature of chaining
requires the previous value for constructing the next consecutive item, it
defeats adversaries who attempt to solve puzzles by parallel computing.
Recently, two constructions using a hash-based chaining technique have been
proposed, by Ma [77] in 2005 and by Groza and Petrica [49] a year later. The
aim of these constructions is slightly different from what we have in mind,
since they are interested in partial solving of the chained puzzles.
Nevertheless, it is interesting to examine whether they are useful as
stand-alone puzzles.
Short descriptions of these two puzzles and an analysis of their suitability
follow.
• Ma’s Hash Chain Reversal Puzzles The concept of hash chain puzzles
was introduced by Ma [77] in 2005 as password puzzles for use in the IP
layer. The construction begins with a random number chosen as an initial
value h0. Then the server applies a one-way function to h0 repeatedly to
generate a hash chain h0, h1, . . . , hk where hi+1 = hash(hi) and k is the
desired length of the chain. According to Ma, this computation would lead
to an advantage for the server by storing the entire hash chain for future
use. Because the server knows a corresponding solution in advance, the
server saves computation and time when verifying the puzzle solution by
34 Chapter 2. Research Background
reducing the cost of verification to a single table lookup.
For puzzle solving, given a puzzle challenge containing the last value of a
hash chain (hk) along with an index value k, a client is required to compute
a hash reversal starting from index k back to the beginning point h0 to
obtain the entire hash chain. A characteristic of hash chain operation is
that an output from the former state is required to be fed to the next state
as an input, similar to a recursion in programming. We conclude that this
scheme is a non-parallelizable technique, and the cost of the verification
requires k hash operations similar to the construction.
Figure 2.8: Ma’s Hash Chain Puzzles
This is a simple and intuitive construction, but there are a number of
practical problems. First, it requires the server to store every value of the
entire hash chain in order to be able to verify the solution. Although this
reduces the verification effort, it makes the scheme susceptible to memory
exhaustion attacks. Second, when used with a typical cryptographic hash
function, the scheme is too difficult to invert for even one hash value, let
alone a chain of many values. Therefore some mechanism must be chosen to make
the individual steps in the chain invertible with reasonable effort. Ma [77]
suggested using a hash function with 16-bit outputs, but this does not seem to
be an acceptable requirement, since such a function can easily be stored
completely in a lookup table, which makes solving the puzzle as easy as
constructing it. A more plausible mechanism is used in the next construction
that we consider.
• Groza and Petrica’s Hash Chain Puzzles This puzzle scheme [49] was
constructed from a hash chain of random numbers. Generally, the idea is
similar to the puzzle auction proposed by Wang and Reiter [133]; i.e. the
more links of the chain computed on a client’s machine, the more services
from a server a client obtains. At the beginning, the server generates
2.3. DoS Defensive Cryptographic Technique in Key Establishment 35
the first element by choosing two state-dependent random numbers, ρ and
r, and concatenating them to obtain a value σ. The first output, P0, is
constructed by double hashing σ0. Hence, the parameter σ0 serves as an
input to the next state of the chain. The rest of the puzzle will be created
by XORing two new state-dependent values with hashed output of σ from
the previous state. Thus, the puzzle elements challenged to the client would
be a series of pairs [(P0, r0), (P1, r1), . . . , (Pn, rn)], where n ≥ 1 is the length
of the hash chain. Meanwhile, the client is required to perform a forward
process of reconstructing the hash chain by searching for ρi values, with σi
= ρi ‖ ri.
Figure 2.9: Groza and Petrica’s Hash Chain Puzzles
Unfortunately, this scheme has a major drawback that exposes the server to
resource-exhaustion attacks: it requires three hash operations per state
to produce the chained series of hashes, in both the construction and
verification phases. This demands a similar amount of computational effort
to the solving task on the client’s machine, violating the fundamental
requirement that client puzzles should be easy to generate and verify by
the server but hard for the client to solve. Furthermore, the high
bandwidth required to transmit a puzzle challenge is another drawback of
this scheme.
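The chained search performed by the client can be sketched in a few lines.
The following is a simplified toy version written for illustration only:
the hash construction, the difficulty parameter K, and the message layout
are our own assumptions, not the exact scheme of Groza and Petrica.

```python
import hashlib
import itertools
import os

H = lambda b: hashlib.sha256(b).digest()
K = 2  # bytes of rho brute-forced per link (toy difficulty)

def build_chain(n):
    """Server side: produce puzzle pairs (P_i, r_i). Each link mixes the
    hash of the previous sigma, so links must be solved in order."""
    prev, puzzle, secrets = b"\x00" * 32, [], []
    for _ in range(n):
        rho, r = os.urandom(K), os.urandom(8)
        sigma = rho + r                      # sigma_i = rho_i || r_i
        puzzle.append((H(H(sigma) + prev), r))
        secrets.append(rho)
        prev = H(sigma)
    return puzzle, secrets

def solve_chain(puzzle):
    """Client side: recover each rho_i by exhaustive search. The previous
    link's H(sigma) is needed first, so the work is non-parallelizable."""
    prev, sols = b"\x00" * 32, []
    for P, r in puzzle:
        for cand in itertools.product(range(256), repeat=K):
            rho = bytes(cand)
            if H(H(rho + r) + prev) == P:
                sols.append(rho)
                prev = H(rho + r)
                break
    return sols
```

Note how the toy code also exhibits the drawback discussed above: the
server performs hash operations per link in `build_chain`, so construction
cost grows with the chain just as solving cost does.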
2.3.7 Client Puzzle Summary
In summary, we have seen that the hash chain puzzle has a major strength
in non-parallelizability and linear-grained control because of its
structure. Lightweight verification by a single comparison is another
attractive property. However, the proposals so far using this technique
require high computation in the construction, a high-bandwidth connection
for communication, and large storage to cache an entire chain to avoid CPU
burden during verification. Therefore, it currently seems impractical to
use hash chains as client puzzles, and we look for an alternative. Table
2.1 compares the key strengths and weaknesses of the individual
cryptographic puzzle constructions based on the criteria discussed
previously.
Table 2.1: Summary of Strengths and Weaknesses of Client Puzzles

Puzzle Type               Key Strengths                     Weaknesses
Hash-based Reversal       Simple and fast construction      Parallelizable;
                          and verification                  exponential granularity
Hint-based Hash Reversal  Similar to hash reversal,         Parallelizable
                          but with linear granularity
Repeated-Squaring         Non-parallelizable;               High computation to
                          linear granularity                construct and verify
DH-based                  Cheap verification;               Parallelizable;
                          linear granularity                expensive construction
Trapdoor RSA and          Cheap verification;               Parallelizable;
Trapdoor DLP              linear granularity                expensive construction
Hash Chain                Non-parallelizable; cheap         Expensive construction;
                          verification; linear granularity  high bandwidth and storage
The non-parallelizability characteristic plays an important role for defending
against coordinated adversaries who attempt to distribute puzzles to other users
or high-performance machines in order to obtain puzzle solutions quicker than
the specified time without wasting their own resources. Because hash-based
puzzles can be computed in parallel, this kind of attack has been
addressed and classified as a strong attack by Bocan and Cosma [12].
Details of this attack will be given and examined in the simulation of
DoS-resistant protocols in Chapters 5 and 6.
Since non-parallelizability has not been defined as a primary requirement in
the original work [6,62], most existing techniques lack this characteristic as shown
from the evaluation in Table 2.1. Only repeated-squaring puzzles can
thwart this type of attack strategy, but the high computation of puzzle
construction leaves that technique susceptible to flooding attacks. This
gap therefore becomes an interesting point for our research: to develop
new schemes which achieve non-parallelizability while keeping puzzle
construction and verification simple and cheap, as the fundamental
requirements demand.
2.4 Formal Analysis Techniques
The goals of security or cryptographic protocols are to provide various security
services to the protocol entities. Some examples of these security services are
data integrity, confidentiality, data origin authentication, and so on. In order
to provide assurance to users, the protocol developer requires formal analysis
techniques to support the design. However, protocol design and analysis
have long been considered a difficult task by the research community. This
is because many proposed protocols are complex and hard to analyse by hand
alone. Some of the security problems the protocols attempt to solve are
subtle and require careful analysis. In addition, capturing all
adversarial techniques that attack the protocols is significantly
difficult, because adversaries constantly refine their techniques into
more sophisticated scenarios once their attacks have been detected and
recorded.
Due to these difficulties, research communities have attempted to develop
formal analysis techniques for resolving these problems. Formal methods be-
came popular after the development of the BAN logic [19] in 1989. In formal
methods analysis, the inputs are a set of initial assumptions and the protocol
messages. If these messages satisfy the goal of the protocol, then the protocol is
valid. A good taxonomy of formal methods for the analysis and verification
of security protocols is provided by Meadows [86] and by Rubin and
Honeyman [113]. According to these surveys, formal methods can be broadly
classified into four techniques, as follows:
1. Modelling and verifying cryptographic protocols using logics based on knowl-
edge and belief. These techniques reason about the beliefs of protocol
participants using logical statements and formulae. One example of this
approach is BAN logic [19];
2. Developing a formal model of the algebraic term-rewriting properties of
cryptographic systems. This technique models security protocols as an
algebraic system and reasons about such protocols using specified
algebraic properties. Examples of this approach include the Dolev-Yao
model [36] and the Communicating Sequential Processes (CSP) model [114];
3. Developing expert systems to study and investigate possibilities in different
scenarios. One example of this approach is the NRL Protocol Analyzer [85];
and
4. Modelling and verifying cryptographic protocols using methods and tools
which are not specifically developed for security. One example of this ap-
proach is Petri Nets [106].
This section provides some examples of techniques used to verify the correct-
ness and security of cryptographic protocols.
2.4.1 BAN Logic
BAN logic [19] was developed by Burrows, Abadi, and Needham in 1989. The
implementation was designed to focus on the analysis of authentication proto-
cols. The concept behind this approach is to reason about the states of belief
of all participants involved in the protocol. With BAN logic, initial knowledge,
assumptions and the protocol steps are transformed into logical statements or
formulae. Finally, the analysis of a protocol is performed by applying the defined
logical rules in order to determine whether the security goals can be derived from
a given set of assumptions or not.
As BAN logic is primarily about authentication, other security objectives
such as data integrity or confidentiality are disregarded. The protocol
analyst is required to extend the logic or develop new primitives to model
them. This limitation of BAN logic in reasoning about security objectives
in terms of secrecy has also been highlighted by Meadows [86]. Other limitations
of BAN logic to model and analyse security protocols were discussed by Boyd
and Mao [15].
2.4.2 Dolev-Yao Model
The Dolev-Yao model [36] is a technique based on an algebraic approach in which
messages and events in the protocol are modelled and transformed using a col-
lection of rules and algebraic expressions. Message exchange between
legitimate participants follows the defined rules directly, while
adversaries are able to read, modify, create, replay, and delete any
messages at any state during the protocol execution. For example, in
symmetric encryption the adversary’s task is to discover the secret key
and decrypt the corresponding ciphertext. Hence, the protocol analyst is
required to write rules and algebraic expressions that transform this
situation into an exhaustive-search-based analysis to determine whether
the adversary could recover the plaintext.
Even though the Dolev-Yao model has been successfully applied to find
security flaws in many protocols designed for secrecy, it might not be
suitable for discovering other vulnerabilities in specific classes of
cryptographic protocols, such as authentication protocols. Other drawbacks
of this model, as stated by Doyle [38], are its failure to model protocol
participants’ ability to recognise state information, and the difficulty
of capturing and analysing activities in large and complex security
protocols.
2.4.3 NRL Analyzer
The Naval Research Laboratories (NRL) protocol analyzer, based on the
term-rewriting technique of the Dolev-Yao approach, was developed as a
Prolog program by Meadows [85] for analysing cryptographic protocols. As
in the Dolev-Yao approach, adversaries are able to control any messages of legitimate
participants. The major difference between these two approaches is that
the NRL analyzer can produce not only protocol messages, but also message
events and beliefs, while the Dolev-Yao model can generate only protocol
messages using defined rules and expressions. Hence, the NRL approach
can be applied to analyse a larger variety of cryptographic protocols.
A major advantage of this scheme is that it supports automated analysis
to help the protocol analyst to search vulnerabilities in cryptographic protocols
using a backward reachability test. That means the analysis begins from some
insecure states and runs backward to check whether they can be reached from an
initial state or not. It is important to note that this process is not fully automatic
since it requires participation from the protocol analyst during the protocol exe-
cution. Examples of using the NRL analyzer technique are to successfully analyse
40 Chapter 2. Research Background
a Tatebayashi-Matsuzaki-Neuman (TMN) key exchange protocol [129] by Kem-
merer et al. [67], as well as to discover unknown vulnerabilities in a security
scheme for resource sharing over a network [18].
On the other hand, the major disadvantage of this tool is that it requires
a high level of user expertise to write accurate rules and expressions for
evaluating protocols. Moreover, the backward search process may require
experienced users to participate during the analysis in order to obtain
accurate results.
2.4.4 Petri Nets
To date, researchers are trying to develop general purpose verification tools to use
in modelling and verifying cryptographic protocols. One famous formal method
based on a simulation approach is Petri Nets [106]. Petri Nets are a graphical
technique based on mathematical methods and can be used for describing and
studying several information processing systems characterised by being concur-
rent, asynchronous, non-deterministic, parallel and distributed. Ordinary Petri
Nets are also called Place/Transition systems (PT-systems). The main compo-
nents of PT-nets are the places, transitions and arcs. Places are drawn as ellipses
or circles while transitions are drawn as rectangles. Arcs are directed either from
a place to a transition or from a transition to a place and may have a weight
(positive integer) associated with them (the default weight of an arc is one and
is not shown). Depending on the system to be modelled, places and transitions
can have different interpretations. For example, a place can represent a condition
and a transition an event. Also, a place can represent resources and a transition
a task or job, which requires those resources. Finally, a place can represent the
state of the system and a transition, an action that can be taken, based on that
state. A place can contain one or more tokens. They are drawn as black dots.
A distribution of tokens on the places is called a marking (i.e. marking of the
net). The initial state of the system is called the initial marking. A marking of a
place indicates the number of tokens on a particular place. An example of
a Petri net is illustrated in Figure 2.10.
The dynamic behaviour of a PT-system can be seen as the state or marking
of the net changing according to transition occurrences. A transition may have
input places connected by incoming arcs and output places connected by outgoing
arcs. A transition is enabled (i.e. it can occur) if the marking of each input place
consists of as many tokens as indicated by the weight of the input arc, which
Figure 2.10: The Example of Petri Nets
connects the place with the transition. The occurrence of an enabled transition
removes tokens from the input places and adds tokens to the output places. The
number of removed tokens from each input place corresponds to the number of
tokens indicated by the weight of the (input) arc, which connects the place and
the transition. The number of added tokens to each output place corresponds to
the number of tokens indicated by the weight of the (output) arc, which connects
the transition with the place.
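The enabling and occurrence rules just described can be captured in a few
lines. The following sketch is our own illustrative encoding, not part of
any Petri-net tool: a marking is a dict from place names to token counts,
and a transition is described by its weighted input and output arcs.

```python
def enabled(marking, t):
    """A transition is enabled if every input place holds at least as
    many tokens as the weight of the connecting input arc."""
    return all(marking.get(p, 0) >= w for p, w in t["in"].items())

def fire(marking, t):
    """Occurrence: remove weight-many tokens from each input place and
    add weight-many tokens to each output place."""
    assert enabled(marking, t)
    m = dict(marking)
    for p, w in t["in"].items():
        m[p] -= w
    for p, w in t["out"].items():
        m[p] = m.get(p, 0) + w
    return m
```

For instance, with marking `{"p1": 2}` and a transition consuming two
tokens from `p1` and producing one on `p2`, firing yields
`{"p1": 0, "p2": 1}`, after which the transition is no longer enabled.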
To use Petri Nets in the analysis and verification of cryptographic
protocols, a user must specify the protocol objectives in terms of system
states and determine the conditions which must hold when the protocol
terminates. Attacker capabilities can be modelled as malicious principals
who can read, intercept, and modify any messages transmitted in the
network. One example of using Petri Nets to identify insecure states is
proposed by Stal et al. [127]. Stal et al. recommended an alternative
approach in which the analysis begins at an insecure state and checks
backward through the protocol execution whether the initial state can be
reached. A problem with this approach is that the insecure states must be
known before starting.
As systems become more complex, Petri Net [106] models can become very
large, complex, and probably unreadable. This problem has been overcome by
introducing new kinds of Petri Nets, called High-Level Petri Nets.
Coloured Petri Nets (CPNs) are high-level nets, proposed by Behki and
Tavares [10]. They incorporate features similar to those of programming
languages, such as data types and data-value processing. Although Petri
Nets have existed since 1962, they have only recently been used to verify
cryptographic and security protocols [38,94,99]. The challenges in using
this approach are how to model protocol-specific security properties and
how to provide verification methods.
Nowadays, CPNs are used in a variety of service specifications for
applications and communication protocols, as well as in industrial models
[56,57]. In terms of supporting tools, many automated tools support
modelling and analysis based on the CPN specification and formalism. One
example is CPN Tools [110,130], developed at the Department of Computer
Science, University of Aarhus, Denmark. The following provides fundamental
knowledge of Coloured Petri Nets for modelling cryptographic protocols.
CPNs Components
In Coloured Petri Nets, places are drawn as ellipses or circles while transitions are
drawn as rectangles as in Ordinary Petri Nets. Each place has an associated type
or colour set which determines the type of data the place may contain. Tokens
are associated with each place. A token is a value (colour), which belongs to the
type of the place. The marking of a place is the multi-set of tokens present on the
place. It is a multi-set, since it may contain several tokens with the same value.
CPNs include the initial state of the system, called the initial marking.
Transitions, drawn as rectangles, represent the actions of the system, and
arcs connect transitions and places. A transition may have input places
connected by incoming arcs and output places connected by outgoing arcs.
Arcs have expressions associated with them; these expressions are located
next to the arcs and determine which tokens are removed from or added to
the places. Figure 2.11
illustrates an example of a CPN model.
Figure 2.11: An Example of a CPN Model
Coloured Petri Nets (CPNs) are state and action oriented, i.e., the
modeller can determine whether to concentrate on the states or on the
actions of the system.
To construct a model such as that shown in Figure 2.11, a CPN is created
as a graphical drawing containing the following basic constructs:
1. State
• Place: Places, which are drawn as ellipses or circles, represent states of a
system.
• Type: Types are used to specify the colour set of a token in each place.
• Marking: A marking represents the state of a CPN. It consists of
a number of tokens positioned on the individual places. Each token
contains a colour which belongs to the type of the place. The marking
of a place can be a multi-set of token values. This means that a place
may have several tokens with the same token value.
2. Action
• Transition: Transitions, which are drawn as rectangles, represent ac-
tions of a system.
• Arc and Arc Expression: Arcs connect transitions and places. Each arc
carries an expression, called an arc expression, which determines the
number of tokens that are removed from the associated input place and
added to the associated output place during occurrences of transitions.
3. Dynamic Behavior
• Variable, Binding and Guard: To enable a transition, we need to bind
data values to the variables occurring in the arc expressions on its
incoming arcs. Moreover, we can specify a boolean expression, called a
guard, over the variables of a transition in order to limit the accepted
bindings, i.e., the transition can occur only for bindings that evaluate
the guard to true.
• Occurrence: The occurrence of a transition in an enabled binding removes
tokens from the input places and adds tokens to the output places of the
transition by evaluating the arc expressions on the corresponding arcs. A
simulation may contain several enabled binding elements, which can be
either concurrently enabled or in conflict [57].
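Bindings and guards can be illustrated with a minimal sketch. This is our
own toy encoding, with a single input and output place rather than general
arc expressions: a place marking is held as a multiset (`Counter`) of
token values, and a transition fires only for a token value that satisfies
its guard.

```python
from collections import Counter

def fire_coloured(marking, in_place, out_place, guard, f):
    """Try each token value on in_place as a binding; if the guard
    holds, consume that token and add f(token) to out_place."""
    for x in list(marking[in_place]):
        if marking[in_place][x] > 0 and guard(x):
            marking[in_place][x] -= 1        # remove one token of value x
            marking[out_place][f(x)] += 1    # add the produced token
            return x
    return None  # no enabled binding: the transition cannot occur
```

For example, with place A holding tokens {1, 2, 3}, a guard accepting only
odd values, and an arc expression multiplying the token by ten, firing
moves one odd token from A and places its image on B.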
CPNs Model Construction
To design an effective model, the developer should divide the model into pages by
working with a hierarchical CPN model which is similar to the project browsers
found in programming environments. The basic idea of the hierarchical model
is to allow the developer to construct a large model by using a number of small
CP-nets. This is similar to the situation in which a programmer constructs a
large program by means of a set of modules. When a page accumulates too
many places and transitions, some of them can be moved to a new subpage to
keep the model simple.
The hierarchical model supports both bottom-up and top-down design. Its
attractive properties are that new modules can be created from existing
modules, and modules can be reused in different parts of the CPN model.
These modules are called pages. Figure 2.12 demonstrates an example of a
hierarchical model implemented using CPN Tools.
Figure 2.12: A Hierarchical Model
The hierarchical CPN model consists of three main constructs used for
implementation. The following gives a brief description of each; full
definitions of these three constructs are provided in Jensen’s book [57].
1. Substitution transitions and subpages
Each substitution transition is marked with a hierarchy-and-substitution
tag. This tag is attached to a box called a hierarchy inscription, which
defines the details of the substitution. Each hierarchy inscription
specifies the subpage corresponding to its substitution transition.
2. Port and socket places
Ports and sockets have similar definitions, but ports are places assigned
in a subpage, while sockets are assigned in the top page. They are used
for communication between a subpage and its surroundings. In order to
specify the relationship between a substitution transition and its
subpage, the port places of the subpage must be related to the socket
places of the substitution transition by providing a port assignment. Once
a port place is assigned to a socket place, the two places become
identical.
3. Fusion places
Fusion places allow the developer to specify a set of places that are con-
sidered to be identical. When a token is added or removed at one of the
places, an identical token will be simultaneously added or removed at all
the other places in the fusion set.
To construct models of DoS-resistant protocols, we use CPN Tools, an
interactive graphical tool for modelling, analysing, and simulating
protocols. CPN Tools is well known and widely used not only in industry,
but also in the systems engineering community for the modelling and
analysis of various communication protocols [11,50,60], as well as
cryptographic protocols [4,38,39,94,99,131]. As stated by Jensen [56],
there are several key benefits of using Coloured Petri Nets in the
verification of cryptographic protocols. Some key benefits related to
cryptographic protocol modelling and analysis are listed as follows:
• CP-Nets are very general and can be used to describe a large variety of
different systems;
• CP-Nets can be extended with a time concept;
• CP-Nets offer interactive simulations where the results are presented di-
rectly on the CPN diagram;
• CP-Nets have a large number of formal analysis methods by which prop-
erties of CP-Nets can be proved.
More specific to the modelling and analysis of cryptographic protocols, an
important advantage of Coloured Petri Nets over other formal modelling and
analysis techniques is their graphical representation, which makes the
dynamic behaviour of systems easy to understand and visualise. Most
importantly, the graphical representation of Coloured Petri Nets helps the
protocol designer avoid inconsistencies or ambiguities in the message
sequences communicated among protocol participants during protocol design.
In particular, Coloured Petri Nets have several advantages over ordinary
Petri Nets. For example, CPNs incorporate data concepts and can handle
large communication networks containing large data sets because they
support hierarchical modelling. Most importantly, Coloured Petri Nets can
reduce the size of a complex model by using a single place to hold
multiple tokens of similar colour sets.
For these reasons, we use Coloured Petri Nets for modelling and analysing
key establishment protocols in order to explore DoS vulnerabilities and
examine DoS resistance in such protocols. More specific details of CPN
Tools and techniques for modelling key establishment protocols, as well as
a critical analysis of a key establishment protocol, will be explained and
illustrated in Chapter 4.
2.4.5 Meadows’ Cost-based Framework for DoS-resistance
Formal methods have been widely used to reveal ambiguity, incompleteness,
and inconsistency in a system. They also address the questions of who uses
a system, what it is used for, when it is used, and how. When users apply
formal methods at the beginning of the system development process, they
can reveal design flaws that might otherwise be discovered only during
testing and debugging.
Meanwhile, they can help determine the correctness of a system
implementation in the final stage of the development process. Indeed,
formal methods can be applied in all phases of system development,
including system verification, system validation, and system analysis and
evaluation.
Existing protocol analysis tools might not be able to be directly applied to
the problem of DoS attacks because of their special characteristics.
Meadows [89] suggested two requirements for applying formal methods to the
DoS problem:
1. The protocol must be analyzed, not only in terms of its final goals, but along
each step of the way. Every time a principal takes part in some action that
requires the use of a significant amount of resources, one must check that
an attacker could not fraudulently cause that principal to reach that step
without spending a significant amount of its own resources;
2. In order to make that verification possible, it is necessary to have a model,
not only of principal and intruder actions, but of the cost of those actions.
Thus, some sort of formal analysis technique would be beneficial, simply in
order to keep track of this complex multistage analysis.
Many key establishment protocols discussed in the literature use
authentication to assist the responder in identifying malicious activities
and to protect the responder against DoS attacks. Nevertheless,
authentication might introduce a DoS problem of its own, because most
authentication processes impose a high computational burden on the
responder.
In this circumstance, successful analysis might depend on the ability to com-
pare the resources expended by an attacker to the resources expended by a
responder. Meadows [88] points out that DoS attacks might occur before
parties are authenticated to each other, and introduced several defences
against such attacks, including:
1. Reducing the cost to the potential defender of engaging in the protocol;
2. Increasing the resources of the defender;
3. Introducing some methods of authentication so that a defender could at
least tell where an attack is coming from; and
4. Using weak authentication when the protocol is initiated but strong au-
thentication as it completes.
To support formal analysis of DoS-resistant protocols, Meadows defined
denial-of-service resistance [88] as the ability of a protocol to
withstand denial-of-service attacks and to serve the intended parties
without degraded performance and service. Meadows’ cost-based framework is
based on a modification of the Gong-Syverson fail-stop model [48] of
cryptographic protocols. In this model each event has an output, which can
be either success or failure. If the output is success, the execution
proceeds to the next step; if the output is failure, the execution halts.
Normal and accept events always output success while verification events can
output success or failure.
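The fail-stop execution model can be illustrated with a short sketch. This
is our own illustration of the success/failure semantics, not Meadows’
formal notation: each event returns success or failure, and a failing
verification halts the run at that step.

```python
def run_fail_stop(events):
    """events: a list of (name, action) pairs, in protocol order.
    action() returns True on success or False on failure. Normal and
    accept events always succeed; a failing verification event halts
    the protocol immediately."""
    for name, action in events:
        if not action():
            return ("halted", name)   # fail-stop: no later event runs
    return ("completed", None)
```

A run such as receive, verify-signature, accept completes only if the
verification event succeeds; otherwise it halts at that step and the
accept event is never reached.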
In order to identify the DoS vulnerabilities of protocols, Meadows defined
the following analysis steps:
1. Computing the cost of several actions over time;
2. Assigning and comparing the costs of individual actions, such as
computing digital signatures, storing data, and impersonating principals;
3. Comparing the ratio of cost to available resources between a server, a client,
and an adversary to identify the risk of DoS attacks on the victim.
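The steps above can be sketched as a simple cumulative cost comparison.
The cost units and action labels below are illustrative assumptions of our
own, not Meadows’ actual cost sets; the point is to flag protocol steps at
which the responder has cumulatively spent more than the initiator,
indicating a potential DoS imbalance.

```python
COST = {"cheap": 1, "medium": 10, "expensive": 100}  # illustrative units

def risky_steps(steps):
    """steps: a list of (initiator_cost, responder_cost) labels, one
    pair per protocol round. Return the rounds at which the responder's
    cumulative cost exceeds the initiator's."""
    i_total = r_total = 0
    risky = []
    for n, (i_c, r_c) in enumerate(steps, 1):
        i_total += COST[i_c]
        r_total += COST[r_c]
        if r_total > i_total:
            risky.append(n)
    return risky
```

For instance, a protocol whose responder verifies a signature (expensive)
in the first round, before the initiator has done comparable work, is
flagged at round 1: `risky_steps([("cheap", "expensive"), ("expensive",
"cheap")])` reports the first round only, since the initiator catches up
in the second.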
More detail of Meadows’ cost-based framework, as well as an example of
using it to identify DoS vulnerabilities in key establishment protocols,
will be provided and discussed in Chapter 4.
2.5 Summary
This chapter has reviewed fundamental knowledge relating to
denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks,
cryptographic techniques for preventing denial-of-service attacks in key
establishment protocols, and formal methods that can be used to model and
examine DoS vulnerabilities in existing DoS-resistant key establishment
protocols.
In terms of denial-of-service attacks, Needham classified
denial-of-service attacks into three subcategories: attacks on the server,
attacks on the network, and attacks on the client. This work focuses only
on attacks on the server, since our main objective is to develop key
establishment protocols that protect the server against DoS attacks.
The main objectives of key establishment protocols are to share a secret
key between parties and to provide authentication of those parties, but
key establishment protocols may themselves be subject to denial-of-service
attacks. Most well-known key establishment protocols are vulnerable to DoS
attacks because they perform expensive operations, such as digital
signatures, in the initial phase. As a promising defensive technique, this
chapter has presented a cryptographic mechanism, client puzzles, to assist
the responder in counteracting DoS adversaries. A taxonomy of the key
benefits and weaknesses of individual puzzle constructions has also been
provided and discussed, so that they can be implemented effectively for
different applications.
Having discussed the DoS vulnerabilities in these protocols, we have found
that some attacks are difficult to discover at the design phase.
Therefore, the protocol developer should apply formal verification
techniques to DoS-resistant approaches to ensure that proposed protocols
function properly and are secure, at least against well-known attack
strategies, before releasing them to the public. Although no single formal
technique can cover all aspects and uncover all vulnerabilities in key
establishment protocols, we choose CPNs as our formalism for modelling and
examining DoS-resistant protocols because their graphical representation
and visual analysis make the task easier and more understandable for the
protocol analyst.
In the next chapter, we will provide strategies and techniques to improve
the resistance of key establishment protocols against DoS attacks. A more
concrete discussion of seven key establishment protocols proposed to
resist denial-of-service attacks will be provided, along with a critical
investigation and analysis of these key establishment protocols with
built-in DoS resistance.
Chapter 3
DoS-resistance in Key
Establishment Protocols
Having provided background knowledge used in this research, the focus of this
chapter moves to the defensive mechanisms of denial of service attacks in key
establishment. In this chapter, we provide denial-of-service defending strategies
and techniques to improve the resistance of key establishment protocols to DoS
attacks. In addition, we identify and discuss key establishment protocols that
have denial of service resistance built-in and implement these proposed strategies
and techniques. There are relatively few examples of key establishment proto-
cols implementing denial of service resistance techniques, with our survey of the
literature only identifying seven protocols (see Table 3.2 in Section 3.4 for a sum-
mary list). The protocols presented in this section were selected as they provide
concrete examples of the range of denial of service resistance techniques being
applied to key establishment.
Below we will introduce three strategies and techniques for DoS-resistant
mechanisms. The modified internet key exchange protocol proposed by Mat-
suura and Imai [83] adopts all three strategies for improving denial of service
resistance and includes an elegant proof of work that is reused by the initiator to
complete the protocol execution. The Host Identity Protocol (HIP) also uses all
three proposed strategies and seems to be the latest protocol to implement client
puzzles as the proof of work to defend against denial of service attacks. The
Just Fast Keying (JFK) protocol [3] demonstrates new techniques for gradually
authenticating initiators, and the client-aided RSA proposal by Castelluccia et
al. [20] is the first key establishment protocol to adopt client aided computation
in order to counterbalance computational expenditure.
The main contributions of this chapter are:
• A taxonomy of strategies and techniques to help the responder to defeat
resource exhaustion attacks;
• Concrete examples of use of DoS-resistant mechanisms in key establishment
protocols;
• A critical analysis and investigation of how DoS-resistant mechanisms are
used in key establishment protocols.
To provide some concrete examples of use of DoS-resistant mechanisms in
key establishment protocols, an investigation and critical analysis of seven key
establishment protocols that implement denial of service resistance is presented
in Section 3.3. Finally, a discussion and summary of the practical use of
individual DoS-resistant mechanisms is provided at the end of this chapter.
3.1 DoS-Resistance Strategies
To design an effective DoS-resistant key establishment protocol, it is worthwhile
for a protocol engineer to understand not only the characteristics of denial-of-
service attacks, but the abilities of adversaries as well. In DoS attacks,
a skillful adversary can prevent the application from performing its intended
task for legitimate initiators by causing the application to exhaust the finite
supply of a specific resource on the target responder. These target resources
might include available memory, CPU cycles, storage space, and the maximum
number of simultaneous connections the application is permitted to handle.
Some resources, such as CPU cycles, are self-renewing, but others, such as
storage space, require explicit operations to recover. If the application cannot
automatically recover, then the attacker can do
more harm to the system and make it difficult for the responder to prevent such
actions.
Most well-known denial-of-service (DoS) attacks are performed by using tech-
niques which attack weaknesses of the underlying protocol. For example, the
attacker consumes the responder's resources by flooding the responder with a
large number of bogus requests. This attack can overwhelm the responder because
the attacker knows that the underlying authentication protocol at the responder
side is based on the expensive computation of public key cryptography. As a
result, the implementation of key establishment protocols should be developed
with great care.
The requirement for key establishment protocols to achieve some level of
denial-of-service resistance is well recognised by the protocol engineering com-
munity and a number of design strategies have emerged. For example, in the
design of DoS-resistant protocols, Leiwo et al. [73] proposed the following
guidelines for designing effective protocols that protect networks against DoS
attacks:
• Memory of the responder should be allocated after the initiator
has been authenticated.
• Attack detection should be completed before initiator authentication
takes place.
• The workload of the initiator should be greater than that of the
responder, in order to prevent an initiator from launching multiple
attacks against the responder.
• The workload of the initiator should be varied by the responder.
Under attack, the responder can increase the computation required
of the initiator to guarantee the survivability of the system.
Some additional strategies and mechanisms have been proposed by Mead-
ows [88] for implementing an efficient DoS-resistant, public-key-based authen-
tication protocol. These include the following strategies:
• Reducing the cost to the potential defender of engaging in the
protocol.
• Increasing the resources of the defender.
• Introducing some methods of authentication so that a defender
could at least tell where an attack is coming from.
• Using weak authentication when the protocol is initiated but stronger
authentication as it completes.
The strategies for improving resistance to denial-of-service attacks can be
broadly classified into three subcategories: (1) preventing CPU exhaustion,
(2) preventing memory exhaustion, and (3) gradual authentication.
3.1.1 Counterbalancing CPU Expenditure
When designing a DoS-resistant protocol, it is important to recognise that
strong authentication based on cryptographic methods may itself increase
vulnerability, because of the expensive modular arithmetic operations involved.
If an authentication mechanism allows the attacker to send bogus packets to the
responder that require computationally expensive cryptographic processing
before they can be verified, then the attacker can easily mount a CPU
exhaustion attack on the responder. To prevent this situation and achieve our
goals, the protocol specification should either increase the cost of
computation for the initiator, or reduce the cost of computation for the
responder by having the initiator perform computations on the responder's
behalf. Example techniques implementing this strategy are client puzzles
[6, 43, 62, 112, 136] and client-aided RSA computation [20].
3.1.2 Counterbalancing Memory Expenditure
To implement a protocol resistant to memory exhaustion attacks, one direct
strategy is to reduce the storage required by the responder during the protocol
run. In practice, this means the responder should maintain a stateless
connection until the authentication process has completed. Cookies are one
example of a defending technique that uses cryptographic mechanisms to prevent
memory exhaustion attacks, because the responder does not maintain half-open
state. The responder uses the cookie to complete the connection and as a weak
form of authentication. The function of the cookie in this context is to
establish a stateless connection, as will be explained in Section 3.2.2. Many
public-key based authentication protocols use a cookie strategy to thwart
memory exhaustion attacks, for instance Photuris [64], the Host Identity
Protocol (HIP) [95], Internet Key Exchange (IKE) [51], and Just Fast Keying
(JFK) [3].
3.1.3 Gradual Authentication
As initiators must be authenticated at some point during the protocol execution,
strong authentication of the initiator’s requests might introduce another denial-
of-service problem. There are a number of ways in which a protocol could be
strengthened against denial of service attacks. One suggested strategy,
introduced by Meadows [88], is to use weak authentication when the protocol is
initiated and to gradually increase the level of authentication until proving the
user identity in the final state. An example protocol using the gradual
authentication framework is the Just Fast Keying (JFK) protocol [3]. JFK begins
with a weak form of authentication, using a nonce and an authenticator
functioning as a cookie, and ends with signature verification once a
medium-level check of a message authentication code (MAC) has been satisfied.
Protocol engineers implementing DoS-resistant key establishment should adopt
at least some of these three strategies to ensure that their implementations
are not susceptible to such attacks. The following section reviews the
defending techniques that incorporate these strategies to achieve the
DoS-resistance goals.
3.2 DoS Defending Techniques
Having identified the strategies employed to make responders in key establish-
ment protocols more resistant to denial of service attacks, we now describe the
specific techniques employed by some key establishment protocols (illustrated in
Section 3.3). Some of the techniques are capable of implementing more than one
strategy, and the techniques might be able to be composed in numerous ways
to meet more complex goals such as gradual authentication. For each technique
identified, we discuss the construction, the DoS resistance strategies it is capable
of supporting, and how it might be combined with other techniques. Protocols
implementing the techniques are then identified in Section 3.3.
3.2.1 Stateless Connection
In a stateful protocol, there is a limit to the number of initiators that the
responder can handle and serve simultaneously. To protect the responder
from wasting resources with bogus messages, Aura and Nikander proposed the
concept of a stateless connection [5]. The stateless connection protocol is resilient
to attacks that open connections in a half open state, thus preventing resource
exhaustion attacks. However, it is important to note that a stateless responder
might be exposed to a new attack technique, because it retains no record of the
messages it has already received and processed. It might therefore be possible
to attack the connection's integrity by replaying connection states. Following
the design principle of authenticating an initiator before committing
resources, Aura and Nikander suggested that authentication protocols
should always remain stateless until the initiator has been authenticated or has
shown in some other way its commitment to proper use of the service. After
authentication the responder can change to the stateful mode.
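The stateless approach can be sketched in Python. This is an illustrative toy, not part of any protocol specification: the token format, the `issue_token` and `accept_token` names, and the JSON encoding are all assumptions. Pending-handshake state is pushed back to the initiator inside an HMAC-protected token, so the responder stores nothing until the token returns.

```python
import hashlib
import hmac
import json
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side secret, rotated in practice

def issue_token(state: dict) -> bytes:
    """Encode pending-handshake state into a self-authenticating token,
    so the responder remains stateless until the initiator returns it."""
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()
    return blob + b"." + tag.hex().encode()

def accept_token(token: bytes):
    """Reconstruct and validate the state returned by the initiator.
    Returns the state dict, or None if the token was forged or corrupted."""
    blob, _, tag = token.rpartition(b".")
    expected = hmac.new(SERVER_KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected.hex().encode(), tag):
        return None  # reject without committing any resources
    return json.loads(blob)
```

Note that, as discussed above, the token alone does not prevent replay of connection states; a real protocol would additionally bind a nonce or timestamp into the encoded state.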
3.2.2 SYN Cache and SYN Cookie
SYN cache and SYN cookie [74] are techniques that reduce the effect of flooding
attacks on the responder by shifting connection state to the initiator side.
The SYN cache allocates minimal state when an initiator establishes a connection
to a responder. When authentication is completed, only information required by
the responder is kept in responder storage. In contrast, the SYN cookie does not
store any information until initiator authentication has been completed. It sends
a cryptographic token called a cookie, which encodes the initial connection data
of an initiator. Cookies are time-variant, unpredictable data issued by the responder
on receipt of a request for service that can ensure the responder remains stateless
and can initiate gradual authentication of the initiator.
In the implementation of cookies in key establishment protocols, cookies were
first introduced in Photuris [64] (see Section 3.3.1) and subsequently extended
for resisting SYN flooding DoS attacks. Typically a cookie is constructed by
taking some connection-specific parameters and transforming them with a
time-variant local secret; for example, a keyed hash of the initiator's IP
address and a nonce. It is vitally important that the responder stores no state when constructing
cookies. In order to remain stateless and thereby prevent memory exhaustion,
any relevant state required by the responder can also be encoded in the cookie
and returned with the next message from the initiator. On receipt of a valid
cookie, the responder is able to reconstruct and validate any state encoded in
the cookie and has weak assurance that it is in round trip communication with
the initiator. Round trip communication implies that the initiator is not using
a spoofed address. This assurance can only be considered weak, as an adver-
sary with control of an intermediary link, between a claimed address and the
responder, would be able to receive cookies for any address they wished to claim.
Unless cookies are carefully constructed the responder may remain vulnerable
to attack even if cookies are used. Simpson [121] identified a state exhaustion
attack, called a cookie crumb attack, in the Internet Security Association and
Key Management Protocol (ISAKMP) [84] implementation of cookies. In contrast
to remaining stateless when constructing cookies, ISAKMP cookies required the
storage of a small amount of state on each connection request. Even though
the state information stored per request is very small (a crumb) it is easy for
an attacker to initiate a large number of requests, exhausting available memory
resources. In addition to ensuring that no state is stored on the construction of a
cookie, Karn and Simpson [64] identified that the technique used for generating
cookies must also satisfy the following three requirements.
• The cookie must depend on the participating entities;
• It must not be possible for anyone other than the issuing entity
to generate a cookie that will be accepted by that entity;
• The cookie generation and verification methods must be compu-
tationally efficient.
The first requirement prevents an attacker from obtaining valid cookies, in-
tended for other initiators, and using those cookies to generate a large number of
requests with spoofed IP addresses. The second requirement secures the cookie
generation process: the use of a secret value prevents others from forging
cookies, and making this value time-variant ensures that cookies must be used
within a predetermined time frame, preventing the hoarding of
valid cookies. Finally, the third requirement prevents DoS attacks directed at
the cookie mechanism itself.
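A cookie construction satisfying these three requirements might be sketched as follows. This is an assumed toy, not the Photuris or ISAKMP construction: the one-minute epoch, the `_secret` derivation, and the function names are all illustrative choices.

```python
import hashlib
import hmac
import time

def _secret(epoch: int) -> bytes:
    # Hypothetical rotating secret: in practice drawn from a CSPRNG and
    # replaced each epoch; derived deterministically here for illustration.
    return hashlib.sha256(b"master-secret" + epoch.to_bytes(8, "big")).digest()

def make_cookie(ip: str, port: int, now=None) -> str:
    """Requirement 1: depends on the participating entities (ip, port).
    Requirement 2: unforgeable without the rotating server secret.
    Requirement 3: one keyed hash to generate, so computationally cheap."""
    epoch = int(now if now is not None else time.time()) // 60
    return hmac.new(_secret(epoch), f"{ip}:{port}".encode(),
                    hashlib.sha256).hexdigest()

def check_cookie(cookie: str, ip: str, port: int, now=None) -> bool:
    """Accept cookies from the current or immediately previous epoch, so a
    hoarded cookie expires within about two minutes."""
    epoch = int(now if now is not None else time.time()) // 60
    return any(
        hmac.compare_digest(
            cookie,
            hmac.new(_secret(e), f"{ip}:{port}".encode(),
                     hashlib.sha256).hexdigest())
        for e in (epoch, epoch - 1)
    )
```

The time-variant secret is what enforces the "predetermined time frame" discussed above: once the secret rotates out of the acceptance window, hoarded cookies become useless.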
3.2.3 Proofs of Work (PoW)
Proofs of work, or puzzles, are hard but tractable problems that allow an initiator
to prove to a responder that a verifiable level of computational effort has been
expended. They permit the responder to gain some assurance of the initiator’s
willingness to commit resources to the protocol and provide a mechanism for
counterbalancing computational expenditure in the event that the responder is
exposed to a denial of service attack.
The concept was first proposed by Dwork and Naor [41] to control junk
email by having recipients only accept emails if they were accompanied by a
correct puzzle solution. It has since been extended to protect authentication
protocols [6, 62] and permit initiators to bid for limited service resources [133]
using the difficulty of the puzzle as currency. Jakobsson and Juels [55] formalised
the notion of reusable proofs of work, where the computational effort expended
by the prover in generating the puzzle solution can be reused for some useful
function, and provided a working example of a reusable proof of work.
Puzzles serving as proofs of work can be constructed from a number of under-
lying problems as previously described in Section 2.3, which introduce a minimal
and configurable overhead for legitimate initiators but result in a significant com-
putational burden for attackers who wish to send large numbers of requests to a
responder.
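A minimal hash-based proof of work along these lines can be sketched in Python. The nonce length, the use of SHA-256, and the leading-zero-bits difficulty measure are assumptions for illustration, not the construction of any particular protocol.

```python
import hashlib
import itertools
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    value = int.from_bytes(digest, "big")
    return len(digest) * 8 - value.bit_length()

def make_puzzle(k: int):
    """Responder issues a fresh random nonce plus difficulty k
    (the number of leading zero bits required of the hash)."""
    return os.urandom(16), k

def solve(nonce: bytes, k: int) -> int:
    """Initiator's work: brute-force search, expected ~2**k hash operations."""
    for x in itertools.count():
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= k:
            return x

def verify(nonce: bytes, k: int, x: int) -> bool:
    """Responder's check: a single hash operation, cheap by design."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= k
```

The asymmetry is the point: difficulty k scales the initiator's expected work exponentially, while the responder's verification cost stays constant at one hash.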
Proofs of work can be viewed as a way for an initiator to make a payment to
a responder for the services it will provide. The computational effort expended
in generating the proof of work can be: (1) wasted; (2) reused by the initiator
in completing the protocol; (3) reused by the initiator for some other purpose;
(4) reused by the responder in completing the protocol; or (5) reused by the
responder for some other purpose.
The client puzzles employed by the protocols identified in the following
section are typically non-reusable proofs of work, so the computational effort
expended in generating the proof of work is wasted. We will see, however, that the
modified version of Internet Key Exchange (IKE) proposed by Matsuura and
Imai [83] adopts a reusable proof of work based on signature verification of the
responder, an action that a legitimate initiator will have to perform in order to
complete the protocol.
As identified earlier, Jakobsson and Juels [55] describe a proof of work in
which the computational effort expended in generating the proof of work is reused
by the responder for another application. There are currently no examples of
key exchange protocols implementing this type of reusable proof of work and
unless an initiator implicitly trusts the responder to delegate its computational
resources, initiators must be aware that the computational effort expended in
generating a proof of work may be reused for malicious purposes.
Even though proofs of work are increasingly adopted by protocols, to aid in
counterbalancing computational expenditure and as a way of authenticating the
willingness of an initiator to commit resources to having the protocol proceed,
their use still faces numerous issues. For example, puzzle constructions based
on hash functions meet many desirable requirements, but they are susceptible to
DoS attacks in which adversaries controlling a group of machines distribute
puzzles and solve them in parallel. Repeated-squaring puzzles, on the other
hand, cannot be parallelised, but they require the responder to perform
expensive modular arithmetic to construct puzzles or verify puzzle solutions.
Hence, a protocol developer implementing DoS-resistant mechanisms should take
care to select algorithms appropriate to the application being developed.
3.2.4 Client-Aided Computation
Client-Aided RSA (CA-RSA) computation was proposed by Castelluccia et al. [20]
to reduce the responder's computational workload in the Secure Socket Layer
(SSL) handshake protocol. The general idea of client-aided computation is
adapted from the responder-aided computation originally introduced by
Matsumoto [80] and subsequently developed by Horng [52] for use in smart card
technology. Because conventional smart cards have only limited computational
resources, it takes too long for a smart card to perform some expensive
computations. One possible solution is to distribute most of the computation to
a high-performance processor capable of performing fast modular exponentiation.
In typical authenticated versions of the RSA signature, the responder has
to verify the initiator’s signature for authentication. The responder could be
potentially flooded with a large number of bogus requests by the attacker. One
proposed countermeasure is re-ordering by moving some expensive computation
steps to the initiator side. This technique is called client-aided computation.
Client-aided computation is based on the idea of rebalancing the cost of
expensive computation: when the responder is under heavy load, i.e. during a
denial-of-service attack, it moves some computations to the initiator side to
reduce its own overload.
In the authentication process, one of the main overload factors is the direct
consequence of expensive public key computations performed by the responder
as part of the 3-way handshake protocol. Castelluccia et al. [20] introduce the
client-aided technique and extend it for rebalancing SSL handshakes based on
RSA computation. Because the 3-way handshake is computationally unbalanced
as the encryption process by the initiator is relatively cheaper than the decryp-
tion process by the responder, then, one possible solution is to modify the key
establishment protocol by letting the responder generate and encrypt the session
key. Thus, the initiator task is changed to decryption process. Since the task of
the responder does not include signature verification as in the original version,
this situation might introduce another DoS attack because the responder needs
to verify an initiator via a certified public key instead. The process requires
the responder to verify a certificate chain for each connection; the responder
could therefore spend long periods and extensive resources on this operation,
leaving its machine susceptible to resource depletion.
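The cost asymmetry that rebalancing exploits can be illustrated with the classic textbook RSA parameters (n = 61 · 53 = 3233, e = 17, d = 2753). This is a toy demonstration of the underlying imbalance, not the CA-RSA scheme itself.

```python
# Classic textbook RSA parameters, purely illustrative: real deployments use
# moduli of 2048 bits or more generated by a cryptographic library.
n, e, d = 3233, 17, 2753

session_key = 65  # a session key encoded as an integer < n

# Standard handshake direction: the initiator performs the cheap public-key
# operation (small exponent e), the responder the expensive private one (d).
c = pow(session_key, e, n)           # a handful of modular multiplications
assert pow(c, d, n) == session_key   # ~log2(d) multiplications: the costly side

# The rebalancing idea is to reassign roles, so that under load the responder
# performs only the cheap small-exponent operation while the initiator absorbs
# the expensive private-key computation (or, in CA-RSA, helps compute it).
```

With realistic key sizes the gap is far larger: the private exponent d has roughly as many bits as the modulus, so the private-key side costs thousands of modular multiplications against a handful for the public-key side.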
In summary, the concept of rebalancing the authentication step by moving
some expensive computation to the initiator side could be one possible solution
to protect the responder from resource depletion. However, this concept applies
only to specific protocols, such as SSL, in which the RSA public-key operations
can be repositioned between an initiator and a responder. Other DoS-resistant
mechanisms, such as client puzzles, can be applied to protocols in general to
improve their security and resistance. This research therefore pays more
attention to such techniques, and to combining them to counteract adversaries,
than to client-aided computation alone.
3.2.5 Gradual Authentication
While the expense of strongly authenticating initiators using digital signatures
will be dependent on many parameters, the computational expense of a signature
verification will not always be prohibitively expensive. Rabin signatures [107]
with a public exponent of 2 or RSA signatures [111] with a public exponent
of 3 can be verified with only one or two modular multiplications respectively.
While the cost of signature verification with these parameters is low, signature
generation is much more expensive, which may not be suitable for all deploy-
ment scenarios. Other signature schemes, RSA with larger public exponents for
example, increase the cost of signature verification, requiring the responder to
perform expensive modular exponentiations. While newly proposed key estab-
lishment protocols can be specified to accommodate cheap signature verification
for responders, the requirement to improve resistance to denial of service attacks
remains for already deployed protocols and protocols that for other reasons are
restricted in the choice of signature schemes they must implement.
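The verification costs claimed above follow from counting the modular multiplications performed by the square-and-multiply algorithm. The small helper below is hypothetical, written only to make the comparison concrete.

```python
def modmul_count(exponent: int) -> int:
    """Modular multiplications used by square-and-multiply for a given
    exponent: one squaring per bit after the leading bit, plus one extra
    multiplication per set bit after the leading bit."""
    return (exponent.bit_length() - 1) + (bin(exponent).count("1") - 1)

# Verification cost with small public exponents:
print(modmul_count(2))      # Rabin (e = 2): a single squaring
print(modmul_count(3))      # RSA with e = 3: two multiplications
print(modmul_count(65537))  # common RSA exponent: 17 multiplications

# Signature *generation* uses the large private exponent d; for a random
# 2048-bit d this is roughly 2047 squarings plus about 1024 multiplications,
# several orders of magnitude more work than verifying with e = 2 or 3.
```

This is why responders favouring cheap verification adopt Rabin or small-exponent RSA, and why the imbalance cannot be assumed for already deployed protocols locked into other signature schemes.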
Gradual authentication provides a mechanism for weakly authenticating an
initiator, prior to performing stronger and more expensive cryptographic authen-
tication. While the use of techniques such as cookies, client puzzles, and releasing
hash preimages do not meet strong notions of authentication, when generated
using cryptographically sound primitives they can be combined in ways which
enable a responder to discount a range of denial of service attacks and present
a number of hurdles that must be overcome by an attacker intent on disrupting
the protocol execution. A key characteristic of the techniques used in gradual
authentication is that they are all cheap for the responder to verify, while their
fabrication is relatively expensive for an attacker. Even when signature schemes
that minimize verification costs to a responder are adopted, the cost of verifying
gradual authenticators such as client puzzles is still cheaper, costing only a single
hash operation.
The idea of combining weak and strong authentication was first introduced
by Meadows [88] and is proposed as a technique to increase resistance to denial of
service attacks by combining weak authentication when the protocol is initiated
and moving to strong authentication as it completes.
Cookies and client puzzles can be considered forms of weak authenticators.
Cookies provide some assurance that the initiator is able to send and receive
packets from the claimed address, implying that the request is not part of
a connection depletion attack, which typically relies on using random spoofed
addresses. Receipt of a correct solution to a client puzzle provides some assurance
to the responder that the initiator is willing to expend her own resources in order
to get the protocol to proceed.
Other cryptographic techniques, such as the use of message authentication
codes and the release of hash digest preimages, which allow the responder to
cheaply verify messages, are being adopted by recently proposed protocols such
as the Host Identity Protocol (HIP), as discussed in Section 3.3.
Key establishment protocols must complete with strong authentication of
the initiator. Having weakly authenticated the initiator, the responder is able to
commit to the computational expenditure associated with strong authentication
with increased assurance that a denial of service attack is not underway. The
gradual authentication technique is employed by several protocols, for instance
the Host Identity Protocol (HIP) and the Just Fast Keying (JFK) protocols,
discussed in more detail in Section 3.3.
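The ordering at the heart of gradual authentication, cheapest check first, can be sketched as follows. The message fields, the fixed puzzle difficulty, and the `verify_signature` callback are assumptions for illustration, not drawn from HIP or JFK.

```python
import hashlib
import hmac

def gradually_authenticate(msg: dict, server_secret: bytes, mac_key: bytes,
                           verify_signature) -> str:
    """Order the responder's checks from cheapest to most expensive, so a
    bogus request is rejected before real resources are committed.
    `verify_signature` stands in for an expensive public-key verification."""
    # 1) Cookie: one keyed hash -- weak assurance of round-trip reachability.
    cookie = hmac.new(server_secret, msg["ip"].encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(cookie, msg["cookie"]):
        return "rejected: bad cookie"
    # 2) Client puzzle: one hash -- assurance that effort was expended.
    digest = hashlib.sha256(msg["puzzle_nonce"] + msg["solution"]).digest()
    if digest[0] != 0:  # difficulty fixed at 8 leading zero bits
        return "rejected: bad puzzle solution"
    # 3) MAC over the handshake so far -- a medium-strength authenticator.
    tag = hmac.new(mac_key, msg["handshake"], hashlib.sha256).digest()
    if not hmac.compare_digest(tag, msg["mac"]):
        return "rejected: bad MAC"
    # 4) Only now pay for the expensive signature verification.
    return "accepted" if verify_signature(msg) else "rejected: bad signature"
```

Each hurdle costs the responder a single hash or MAC check, while fabricating all of them forces an attacker to do real work per request, which is precisely the asymmetry gradual authentication aims for.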
3.3 DoS-resistance in Key Establishment
The ability of key establishment protocols to resist denial of service attacks
is defined in terms of how well a protocol can withstand attacks and continue
to serve the intended parties without degraded performance or services. This
property is called the denial-of-service resistance of the protocol, as defined
by Meadows [88].
Having introduced strategies and techniques to tackle DoS attacks in the
previous section, this section provides some concrete examples of use of DoS-
resistant mechanisms in key establishment protocols. It is important to note
that some key establishment protocols implement more than one DoS-resistant
strategy, and that individual techniques can be composed in numerous ways to
meet more complex goals in defeating adversaries.
While denial of service attacks can be classified in three subcategories as
previously described in Section 2.2, we limit ourselves to attacks targeting the
responder and disrupting legitimate traffic. Additionally, we focus on the
defending techniques used to protect the responder against malicious parties,
as the impact on the responder, which acts as the centre for distributing and
maintaining services on the system, is far more significant than on the
initiator.
For the DoS-resistant protocols presented in this chapter, we focus on those
elements of the protocol that implement denial of service resistance techniques.
Our representation of the protocols is simplified, with references to header infor-
mation and certificate requests, which are not relevant to the discussion of denial of
service resistance, deliberately omitted. For complete descriptions of the proto-
cols, the reader is referred to the full protocol specifications. Table 3.1
lists the notation used in the protocols throughout the remainder of this
chapter.
3.3.1 Photuris
Photuris is a session-key management protocol developed by Karn and Simp-
son [64, 122] in 1999. Photuris is functionally much simpler than the Internet
Security Association and Key Management Protocol (ISAKMP) [84] and Internet
Key Exchange (IKE) [51]. Its security goals are defending against resource
exhaustion attacks1, providing forward secrecy of the session keys, and
protecting the privacy of the participants. Photuris establishes short-lived
session keys for use between two parties without passing the session keys
across the Internet. These session keys directly replace the long-lived secret
keys, protecting against compromise of a long-lived secret key by an adversary.
Photuris distributes a shared value publicly using the Diffie-Hellman (DH)
protocol [34], and uses an RSA signature for authentication. Figure 3.1
presents a simplified version of the protocol.
Counterbalancing memory expenditure
Photuris implements cookies and stateless connections to counterbalance memory
expenditure and protect the responder against DoS attacks. The cookie functions
as a reachability test, preventing adversaries from launching IP address
spoofing attacks of the kind used in Smurf attacks [24]. The stateless
connection, meanwhile, protects the responder from exhausting its computational
and storage resources handling a large number of bogus requests. This is
achieved by keeping
1 Oppliger has defined this term as resource clogging attacks in [104].
Table 3.1: Protocol Notation

I                   The principal who initiates the request message, known as the Initiator or client
R                   The principal who responds to the request message, known as the Responder or server
ID                  Identity of the principal
IP                  Network address of the principal
H(M)                Unkeyed cryptographic hash of the message M
H_K(M)              Keyed cryptographic hash of the message M with key K
K_s                 Session key generated by the key exchange protocol, used to secure ongoing communications
E_Ks{M}             Symmetric encryption of message M with the secret key K_s
{M}_{Ke,Ka}         Encryption of M using symmetric key K_e followed by MAC generation with symmetric key K_a
PK_R[M]             Asymmetric encryption of the message M with the public key PK_R belonging to R
SK_R[M]             Asymmetric decryption of the message M with the private key SK_R belonging to R
Sig_I(·)            Digital signature signed with the private key SK_I belonging to the principal I
Sig_R(·)            Digital signature signed with the private key SK_R belonging to the principal R
LSB(t, k)           Returns the k least significant bits of an output, taking a string t as input
0^k                 A string consisting of k zero bits
p, q                Large prime numbers
n = pq              An RSA modulus
i, r                Diffie-Hellman secret parameters of I and R, respectively
g                   Group generator of order q used in Diffie-Hellman key exchange and key agreement protocols
s                   A periodically changing secret known only to the responder R
t                   Digital timestamp
N_I, N_R            Nonces; random bit strings of principals I and R, respectively
HI_I, HI_R          The host identities of an initiator and a responder, respectively
HIT_I, HIT_R        The host identity tags of I and R, created by taking a hash over the host identities HI_I and HI_R
Att                 Attribute list (Identification, Authentication, and Encapsulation) used in Photuris
SPI                 Security Parameters Index
sa_I                Cryptographic and service properties of the security association (SA) that the initiator wants to establish
sa_R                SA information that the responder may need to give to the initiator
puzzle(m, H(M), k)  Initiator puzzle with partial preimage m, digest of full preimage H(M), and difficulty level k
solution(J)         Solution J to a puzzle of the form puzzle(m, H(M), k), such that H(J) = H(M)
grpinfo_R           All groups supported by the responder
x ∈_R A             Assigns to x an element of the set A chosen uniformly at random
Cert_R              A certificate containing a responder's identity and a public key used for authentication
the responder stateless until strong authentication has succeeded, as
recommended by Meadows [87].
In more detail, once the responder receives an initiation request from the
initiator, the responder creates a cookie by using the source IP address and
some secret data known only to the responder. Then, the responder returns this
cookie to the source IP address of the initiator. At this stage, the responder
neither stores state information nor performs expensive computation. To obtain
any service from the responder, the initiator must return the cookie within a
given time for verification. Once these steps are completed, the responder
provides services and allocates resources to store the connection status of
this IP address.
1) I -> R: CookieRequest
       R: Cookie = H(IP_I, sa_R1, s)
2) R -> I: Cookie, sa_R1
       I: select sa_I from sa_R1
3)     I: i ∈_R [1, 2, ..., q − 2]
   I -> R: Cookie, sa_I, g^i
       R: check Cookie ?= H(IP_I, sa_R1, s); determine sa_I;
          r ∈_R [1, 2, ..., q − 2]
4) R -> I: Cookie, sa_I, g^r
       I: K_ir = H(g^ir); v'_I = H_{K_ir}(Cookie, SPI, Att_I);
          K_e = H(Cookie, g^i, g^r, SPI, g^ir);
          E_1 = E_{K_e}{ID_I, v'_I, Att_I};
          sig_I = SK_I[E_{K_e}{ID_I, v'_I, Att_I}]
5) I -> R: Cookie, SPI, E_1, sig_I
       R: check Cookie ?= H(IP_I, sa_R1, s); K_ir = H(g^ir);
          K_e = H(Cookie, g^i, g^r, SPI, g^ir); decrypt E_1; verify sig_I;
          check v'_I ?= H_{K_ir}(Cookie, SPI, Att_I);
          v'_R = H_{K_ir}(Cookie, SPI, Att_R, v'_I);
          E_2 = E_{K_s}{ID_R, v'_R, Att_R};
          sig_R = SK_R[E_{K_s}{ID_R, v'_R, Att_R}];
          K_s = H(Cookie, v'_R, g^ir)
6) R -> I: Cookie, SPI, E_2, sig_R
       I: K_s = H(Cookie, v'_R, g^ir); decrypt E_2; verify sig_R

Figure 3.1: Photuris Protocol [64]
Gradual authentication
Photuris begins with cookie verification as a weak form of authentication.
Using a cookie in Photuris establishes a stateless connection, protecting
against resource consumption attacks. This process involves no expensive
operations, because cookie generation and verification use a cryptographic
hash function, a fast and cheap operation that thwarts CPU exhaustion attacks.
Once the responder is assured of the validity of the source address, it
proceeds to exchange the Diffie-Hellman key exchange parameters used to compute
a shared secret key. The responder checks information encrypted with this
shared secret key, and verifies the initiator's authenticity using public-key
encryption as the strong authentication.
3.3.2 Modified Internet Key Exchange
The Internet Key Exchange (IKE) protocol [51] was designed to perform mutual
authentication and establish a shared secret key for use in an IPsec security as-
sociation. As originally specified, the aggressive, signature based authentication
mode of IKE was vulnerable to CPU and memory exhaustion denial of service at-
tacks. In order to address these vulnerabilities Matsuura and Imai [83] proposed
modifications to improve the protocol’s resistance to both computational and
memory based denial of service attacks. The modified protocol is presented in
Figure 3.2 and adopts techniques that counterbalance computational and mem-
ory expenditure, and implement gradual authentication.
Counterbalancing memory expenditure
To address memory based denial of service attacks, this modified version of IKE
stores no state after the first message. Unlike the original cookie construction
that was vulnerable to a cookie crumb attack [121], the cookie in the modified
protocol is constructed as a hash over request specific parameters, responder
secret s, random fresh material a and Diffie-Hellman exponent r. The session
specific secret parameters a and r are not stored by the responder, instead the
protocol remains stateless by sending an encrypted copy of these parameters to
the initiator.2 In addition to allowing the responder to remain stateless, the
cookie acts as a reachability test for the initiator, providing assurance that a
spoofed address is not being used.
2While not specifically indicated in the protocol specification, we suspect that not only the exponent, but the actual Diffie-Hellman value gr must be securely sent to the initiator; otherwise the responder would have to recalculate the value on receipt of message 3.
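The state-offloading idea, the responder encrypting its own session secrets and handing the blob to the initiator rather than storing it, can be illustrated as below. This is a sketch only: the SHA-256 keystream cipher, the blob layout, and the names are stand-ins chosen to keep the example self-contained in the standard library, not the encryption specified by Matsuura and Imai.

```python
# Illustrative sketch of keeping the responder stateless by offloading secret
# session state (e.g. the values a, r, g^r) to the initiator in encrypted,
# authenticated form. The SHA-256-keystream cipher is a stdlib stand-in for a
# real cipher, not the scheme from the modified IKE proposal.
import hashlib
import hmac
import json

KEY = b"responder-local-encryption-key"

def _keystream(nonce: bytes, n: int) -> bytes:
    # Counter-mode keystream derived from the local key (illustrative only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(KEY + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal_state(state: dict, nonce: bytes) -> bytes:
    # Encrypt-then-MAC; an 8-byte nonce is assumed. Responder stores nothing.
    pt = json.dumps(state, sort_keys=True).encode()
    ct = bytes(a ^ b for a, b in zip(pt, _keystream(nonce, len(pt))))
    tag = hmac.new(KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_state(blob: bytes) -> dict:
    # On receipt of message 3, the responder recovers its own state.
    nonce, ct, tag = blob[:8], blob[8:-32], blob[-32:]
    expect = hmac.new(KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("state blob tampered or forged")
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(nonce, len(ct))))
    return json.loads(pt)
```

The MAC matters: without it, an initiator could feed back a modified blob and control the secrets the responder "remembers".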
66 Chapter 3. DoS-resistance in Key Establishment Protocols
Precomputed by R: r, xR ∈R [1, 2, ..., q − 2]; a = g^xR
1) I → R : saI, g^i, NI, IDI,          where i ∈R [1, 2, ..., q − 2]
       R computes Cookie = H(s, saR, IDI, NI, NR, a, r),
       HASHR = H(NI || NR || g^i || g^r || Cookie || IDR),
       s2 = H(HASHR || a), s1 = SR · s2 + xR mod q, sigR = (s1, s2)
2) R → I : saR, g^r, NR, IDR, Cookie, EKR{a || r || g^r}, sigR
       I computes HASHR = H(NI || NR || g^i || g^r || Cookie || IDR),
       a′ = g^s1 · PR^−s2, checks s2 ?= H(HASHR || a′), computes
       HASH∗I = H(H(NI, NR), g^i, g^r, Cookie, a′, saR, IDI),
       Ks = H(NI, NR, g^ir), sigI = SI[HASH∗I]
3) I → R : IDI, NI, HASH∗I, Cookie, EKR{a || r || g^r}, saR, g^r, NR, sigI
       R decrypts EKR{a || r || g^r}, checks
       Cookie ?= H(s, saR, IDI, NI, NR, a, r) and
       HASH∗I ?= H(H(NI, NR), g^i, g^r, Cookie, a, saR, IDI),
       verifies sigI, and computes Ks = H(NI, NR, g^ir)
Figure 3.2: Modified aggressive mode of IKE Protocol [83]
Counterbalancing computational expenditure
Computational denial of service attacks against the original aggressive mode of
IKE with signature authentication resulted from the responder generating an
expensive signature on the receipt of an unauthenticated message 1 and the ex-
pensive verification of a signature on message 3. To address these vulnerabilities
Matsuura and Imai [83] specify the use of a signature scheme that: permits ex-
pensive components of the signature generation to be precomputed; and has a
signature verification procedure that permits the recovery of random fresh ma-
terial (a) used in the generation of the signature. The initiator then uses the
recovery of the random fresh material a to provide proof to the responder that
the signature in message 2 has been verified.
The expense of generating the signature sigR is reduced but not eliminated
in message 2 of the proposed IKE modification. The verification of the initiator
signature (sigI) is only performed after verifying that the initiator has incurred
the computational expenses associated with verifying sigR. To accomplish this
requirement, the initiator must verify sigR in order to construct a message 3 that
is accepted as genuine by the responder. Verification of sigR reveals the value
a′, which the initiator then uses to construct the proof of work HASH ∗I .
On receipt of message 3, the responder first validates the cookie and secondly
verifies the proof of work by recovering a and verifying the modified initiator hash.
Finally the responder can verify the signature of the initiator, with confidence
that the initiator has already committed resources to the protocol execution.
While Matsuura and Imai's modified version of IKE [83] introduces an elegant
proof of work based on verification of sigR, the proof of work is untunable:
it provides no mechanism for the responder to increase the computational effort
that the initiator must expend in order to provide it. Additionally, the
technique is restricted to specific signature schemes, with the Shortened DSS
and Schnorr signature schemes [115, 116] suggested by Matsuura and Imai [83] as
capable of supporting precomputation and providing recovery of a.
Gradual authentication
In addition to counterbalancing computational and memory expenditure, this
modified IKE allows the responder to gradually authenticate the initiator. Before
verifying sigI the responder has assurance that the initiator is not using a spoofed
address, is willing to commit memory to having the protocol proceed, and has
committed the computational resources required to verify the responder signature
sigR.
While the proposal represents an improvement over the original protocol,
the responder is still required to commit computational resources to generate a
signature on receipt of the first unauthenticated message, leaving the responder
susceptible to a flood of first messages.
3.3.3 Internet Key Exchange (IKE) Protocol version 2
In response to the DoS vulnerabilities of the original IKE version 1, Kaufman
et al. [65] proposed the development of IKE version 2. In the second version,
the number of round trips of the protocol has been increased in order to provide
DoS protection for the server. The protocol implements the cookie technique to
provide stateless communication at the server and to serve as a reachability
test. The cookie is created in the form of a hash of the initiator's nonce, the
initiator's IP address, the security parameter index and a server secret value.
In the DH secret key exchange phase, IKEv2 allows the server to reuse the same
secret value for multiple sessions to reduce the overhead of computational
setup. However, allowing the server to rekey repeatedly with the same Diffie-
Hellman value is a trade-off: it leaves the protocol vulnerable to cryptanalysis
of a single key and fails to provide the perfect forward secrecy property. The
IKE version 2 protocol is illustrated in Figure 3.3.
While IKEv2 messages are intended to be short, there is no limit on the
maximum size of the message structures. Moreover, IKEv2 itself has no mechanism
for fragmenting large messages. IKEv2, which runs over the user datagram
protocol (UDP), relies on the fragmentation mechanism specified for oversize
UDP messages. However, the size of messages in this implementation varies
according to the maximum size the supported mechanisms allow. Furthermore, use
of this fragmentation scheme opens a further denial-of-service vulnerability.
One example of a well-known attack is the IP fragmentation attack [66], which
targets the fragmentation or reassembly part of the networking stack.
Counterbalancing memory expenditure
To protect the responder from memory-based denial-of-service attacks, IKEv2
has been designed to use stateless connections, adopting the cookie concept
previously used in Photuris. The session parameters, for instance the nonce and
IP address of the initiator, are hashed and the result functions as a Cookie.
Instead of storing this information, the responder remains stateless by sending
a hashed copy of these parameters to the initiator.
Gradual authentication
Like the modified IKE version discussed in Section 3.3.2, IKEv2 adopts the
gradual authentication strategy to prevent memory- and computational-resource
based attacks at the responder. On receipt of the third message, the responder
checks the validity of the Cookie. If the Cookie is valid, it provides the
responder with an assurance of round-trip communication. Then, the responder
randomly chooses a Diffie-Hellman value, r, and distributes it to the
1) I → R : NI, IDI
       R computes Cookie = H(NI, IDI, SPI, s)
2) R → I : Cookie, SPI
3) I → R : Cookie, saI1, g^i, NI,      where saI1 is chosen from SPI
                                        and i ∈R [1, 2, ..., q − 2]
       R checks Cookie ?= H(NI, IDI, SPI, s), determines saR1,
       and picks r ∈R [1, 2, ..., q − 2]
4) R → I : saR1, g^r, NR
       I computes Ke = H(NI, NR, g^ir, SPI, '1'),
       Ka = H(NI, NR, g^ir, SPI, '2'), sigI = SKI[NI, H(g^i, IDI)]
5) I → R : {IDI, sigI, saI2}Ke,Ka
       R computes Ke = H(NI, NR, g^ir, SPI, '1') and
       Ka = H(NI, NR, g^ir, SPI, '2'), decrypts and verifies the MAC of
       {IDI, sigI, saI2}Ke,Ka, verifies sigI, and computes
       sigR = SKR[NR, H(g^r, IDR)]
6) R → I : {IDR, sigR, saR2}Ke,Ka
       I decrypts and verifies the MAC, then verifies sigR
Figure 3.3: IKEv2 Protocol [65]
initiator. To construct the fifth message, the initiator has to compute the Diffie-
Hellman key (gir) and the encryption and authentication keys (Ke, Ka) used to
protect the contents of this message. Moreover, the initiator has to compute the
signature sigI in this step. Once the responder receives the fifth message, the
responder begins to compute the encryption and authentication keys (Ke, Ka)
for decrypting and authenticating the message. If the result of this process is
invalid, it allows the responder to detect a possible attack before committing the
resources for an expensive signature verification.
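This check-before-verify ordering, a cheap MAC check first, the expensive signature verification only afterwards, can be sketched as follows. The key derivation inputs are simplified and the signature step is a stub; all names here are illustrative assumptions, not the IKEv2 wire format.

```python
# Sketch of the gradual-authentication ordering on IKEv2's fifth message:
# derive the authentication key, check the cheap MAC, and only if it is valid
# perform the expensive public-key signature verification. Key-derivation
# inputs are simplified; the signature verifier is a placeholder stub.
import hashlib
import hmac

def derive_ka(shared_dh: bytes, ni: bytes, nr: bytes) -> bytes:
    # Stands in for Ka = H(NI, NR, g^ir, SPI, '2'); inputs simplified.
    return hashlib.sha256(ni + nr + shared_dh + b"2").digest()

def expensive_signature_verify(payload: bytes) -> bool:
    return True  # placeholder for a costly public-key verification

def handle_message5(shared_dh, ni, nr, payload, mac):
    ka = derive_ka(shared_dh, ni, nr)
    expected = hmac.new(ka, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return "rejected-before-signature"  # bogus traffic caught cheaply
    return "accepted" if expensive_signature_verify(payload) else "bad-signature"
```

A flood of forged fifth messages is thus absorbed at the cost of a key derivation and one HMAC per message, never reaching the signature step.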
3.3.4 Just Fast Keying (JFK)
The Just Fast Keying (JFK) protocol was developed by Aiello et al. [3] as a
key agreement protocol providing identity protection and capable of operating
in a hostile environment such as the Internet. Both the JFK protocol and
Internet Key Exchange version 2 were designed as replacements for IKE. The
JFK protocol implements several techniques to counterbalance computational
and memory expenditure, and to gradually authenticate initiator requests. The
protocol variant implementing identity protection for the initiator is depicted in
Figure 3.4.
Precomputed by I: i ∈R [1, 2, ..., q − 2]
Precomputed by R: r ∈R [1, 2, ..., q − 2]; sigR1 = SR[g^r, grpinfoR]
1) I → R : N′I, g^i, ID′R,             where N′I = H(NI)
       R computes Cookie = H(g^r, NR, N′I, IPI)
2) R → I : N′I, NR, g^r, grpinfoR, IDR, Cookie, sigR1
       I verifies sigR1, computes Ke = Hg^ir(N′I, NR, '1'),
       Ka = Hg^ir(N′I, NR, '2'), Ks = Hg^ir(N′I, NR, '0'),
       sigI = SKI[N′I, NR, g^i, g^r, IDR, saI], E1 = {IDI, saI, sigI}Ke,Ka
3) I → R : NI, NR, g^i, g^r, Cookie, E1
       R computes N′I = H(NI), checks Cookie ?= H(g^r, NR, N′I, IPI),
       computes Ke, Ka, Ks as above, verifies and decrypts E1,
       verifies sigI, computes sigR2 = SR[N′I, NR, g^i, g^r, IDI, saI, saR],
       E2 = {sigR2, saR}Ke,Ka
4) R → I : E2
       I verifies and decrypts E2, then verifies sigR2
Figure 3.4: JFKi Protocol [3]
The responder periodically selects a Diffie-Hellman exponential (gr) and gen-
erates a signature over this value and information on the groups it supports. The
designers of the protocol allow the responder to reduce computational expendi-
ture, at the expense of perfect forward secrecy (as in IKEv2), by reusing this
Diffie-Hellman value with multiple initiators.
Counterbalancing memory expenditure
On receipt of the first message from an initiator, the responder remains state-
less and weakly authenticates the reachability of the initiator by generating a
cryptographic cookie, Cookie = HHKR(g^r, NR, N′I, IPI), that is returned by the
initiator in message 3. The secret key HKR is a time-variant local secret that
limits the period of time for which a cookie will be accepted.
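The effect of the time-variant secret can be sketched as below: the key rotates every fixed window, and a cookie is only accepted while the secret that minted it (or its immediate predecessor) is still live. The window length, the rotation-by-epoch scheme, and all names are assumptions for illustration, not the JFK specification.

```python
# Sketch of a time-variant cookie secret in the style of JFK's HKR. The key
# is derived from a master secret and the current time window; the 60-second
# window and one-window grace period are illustrative assumptions.
import hashlib
import hmac
import time

WINDOW = 60  # seconds a secret stays current (assumed)
MASTER = b"long-term-local-randomness"

def _secret(epoch: int) -> bytes:
    return hashlib.sha256(MASTER + epoch.to_bytes(8, "big")).digest()

def make_cookie(gr: bytes, nr: bytes, ni_hash: bytes, ip: str, now=None) -> bytes:
    epoch = int(now if now is not None else time.time()) // WINDOW
    msg = gr + nr + ni_hash + ip.encode()
    return hmac.new(_secret(epoch), msg, hashlib.sha256).digest()

def check_cookie(cookie, gr, nr, ni_hash, ip, now=None) -> bool:
    epoch = int(now if now is not None else time.time()) // WINDOW
    msg = gr + nr + ni_hash + ip.encode()
    # Accept the current and the immediately previous window only.
    for e in (epoch, epoch - 1):
        if hmac.compare_digest(cookie, hmac.new(_secret(e), msg, hashlib.sha256).digest()):
            return True
    return False
```

Hoarded cookies expire automatically once their epoch's secret rotates out, without the responder keeping any per-cookie state.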
Counterbalancing computational expenditure
While the protocol permits reuse of the responder exponential to reduce compu-
tational expenditure at the responder, no mechanism to increase computational
expenditure at the initiator is provided. The absence of a proof of work from
this recently proposed protocol is conspicuous and exposes the responder to a
computational denial of service in the presence of an initiator willing to reveal
their IP address. The initiator could engage the responder with a legitimate mes-
sage 1, then fabricate a bogus message 3 at a minimal computational cost. The
responder would have to perform a modular exponentiation before being able
to determine that the received message was bogus. The addition of a proof of
work, in the form of a client puzzle as recommended by Smith et al. [125], would
allow the responder to increase the computational expenditure of an initiator
attempting to mount such an attack, with verification of the puzzle solution
costing only a single hash operation.
Gradual authentication
In message 3, the initiator releases the preimage (NI) of the nonce (N′I) pro-
vided in message 1. This binds messages 1 and 3 to the same initiator. The
initiator must also derive the Diffie-Hellman key (gir) and the encryption and
authentication keys (Ke, Ka) used to protect the contents of message 3.
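The nonce commitment that binds the two messages can be sketched as follows; the nonce size and function names are illustrative assumptions.

```python
# Sketch of JFK's nonce commitment: the initiator sends N'_I = H(N_I) in
# message 1 and releases the preimage N_I in message 3; the responder links
# the two messages by re-hashing. Sizes and names are illustrative.
import hashlib
import os

def message1_nonce():
    n_i = os.urandom(16)                       # kept secret until message 3
    n_i_prime = hashlib.sha256(n_i).digest()   # sent in message 1
    return n_i, n_i_prime

def responder_binds(n_i_released: bytes, n_i_prime_seen: bytes) -> bool:
    # One cheap hash shows the sender of message 3 also sent message 1.
    return hashlib.sha256(n_i_released).digest() == n_i_prime_seen
```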
On receipt of message 3, the responder conducts a range of checks that grad-
ually authenticate the message, prior to conducting an expensive signature ver-
ification to strongly authenticate the initiator. First, the responder validates
the nonce NI . Then the cookie is verified to weakly assure the responder that
the initiator is reachable at the claimed address. Once these checks complete
successfully, the responder then decrypts and verifies the contents of message 3.
Finally, the initiator’s signature is verified.
The encryption and authentication (via a message authentication code) of
message 3 provides the responder assurance that the initiator is willing to commit
computational resources to having the protocol proceed. Unlike a proof of work
however, the decryption and MAC verification require the responder to incur an
equivalent computational cost to the initiator (that of a modular exponentiation)
so there is no counterbalancing of computational effort. This technique does,
however, provide gradual authentication, as a failure to correctly decrypt or verify
the MAC of a received message 3 allows the responder to detect a possible attack
before committing the resources for an expensive signature verification.
3.3.5 Host Identity Protocol (HIP)
The host identity protocol (HIP) was developed by Moskowitz [95]. HIP sepa-
rates host identity and network address to deal with mobility and multihoming
applications. HIP uses cryptographic public keys as host identifiers and intro-
duces a set of protocols for establishing a Security Association (SA). HIP also
protects the responder from denial-of-service attacks in upper-layer protocols
such as TCP and UDP. Three new parameters are introduced in the HIP protocol:
(1) the host identifier (HI), a public key used to represent the host identity;
(2) the host identity tag (HIT), a hash of the HI used in the HIP payload and
to index the corresponding state in the end hosts; and (3) the local scope
identifier (LSI). Figure 3.5 presents the HIP protocol.
Counterbalancing memory expenditure
Using the cookie, which is calculated from the IP addresses and HITs of the
initiator and responder, in the form of a client puzzle as a reachability test
helps the HIP protocol resist IP spoofing attacks. HIP offers the responder two
cookie options, trading off space saving against computation saving. In the
space-saving scheme, the responder remains stateless with respect to the
initiator until it receives a valid cookie in the third message; the validity
of the cookie is verified by recomputing a hash function and comparing the
output with the received value. The latter option instead allows the responder
to keep state about the initiator's first connection request and match the
second received message against that state, thereby avoiding the computational
cost of the hash function. The drawback of this latter approach is the
requirement to create state in a memory table.
Counterbalancing computational expenditure
The attractive characteristic of HIP is that it uses client puzzles in the
second message to limit the number of requests an attacker can initiate,
thereby protecting against flooding attacks. To construct a puzzle, the
responder computes the Cookie by hashing the IP addresses and HITs of the
responder and initiator,
Precomputed by R: r ∈R [1, 2, ..., q − 2]; sigR1 = SKR[g^r, PKR]
1) I → R : HITI, HITR
       R picks k ∈ [0, 1, ..., 160] and s ∈R Zn, forms puzzle = (s, k)
2) R → I : HITI, HITR, puzzle, sigR1
       I solves the puzzle by finding J such that
       LSB(H(s, HITI, HITR, J), k) = 0^k, verifies sigR1, computes
       Ke = H(HITI, HITR, g^ir, 01), E1 = EKe{PKI},
       sigI = SKI[HITI, HITR, SPI, (s, k, J), g^i, E1]
3) I → R : HITI, HITR, J, Cookie, sigI
       R verifies the puzzle solution LSB(H(s, HITI, HITR, J), k) ?= 0^k,
       computes Ke = H(HITI, HITR, g^ir, 01), decrypts E1, verifies sigI,
       computes Ks = H(HITI, HITR, g^ir, 02), HMAC = HKs(HITI, HITR),
       sigR2 = SKR[HITI, HITR, SPI, HMAC]
4) R → I : sigR2
       I verifies sigR2, computes Ks = H(HITI, HITR, g^ir, 02),
       and checks HKs(HITI, HITR) ?= HMAC
Figure 3.5: HIP Protocol [95]
then uses the result as an index to choose a nonce value from a pre-computed
fixed-size table. The responder sends the initiator a cryptographic puzzle
bound to a difficulty level k. The difficulty of the puzzle depends on the
workload of the responder at that time; under normal load it is possible to
set k = 1 so that the initiator has to do almost no computation. In order to
request service from the responder, the initiator has to run a brute-force
search for the correct puzzle solution, using the scheme of Aura et al.
(Section 2.3.1), and submit it within the given time slot. To prevent an
initiator from performing pre-computation attacks, the responder selects the
secret value in such a way that the initiator cannot guess it. Furthermore,
the responder verifies whether the received puzzle solution is correct before
performing the more expensive signature verification. As a result, the
responder ensures that the initiator has expended some degree of computational
effort before the responder commits its resources.
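The asymmetry between solving and verifying can be sketched as below: the initiator brute-forces J until the k least-significant bits of the hash are zero (roughly 2^k attempts on average), while the responder verifies with a single hash. The hash inputs and encodings are illustrative assumptions, not the HIP packet format.

```python
# Sketch of a HIP-style puzzle: find J so that the k low bits of
# H(s, HIT_I, HIT_R, J) are zero. Encodings are illustrative assumptions.
import hashlib

def _h(s: bytes, hit_i: bytes, hit_r: bytes, j: int) -> int:
    d = hashlib.sha256(s + hit_i + hit_r + j.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big")

def solve(s: bytes, hit_i: bytes, hit_r: bytes, k: int) -> int:
    # Initiator's cost: about 2^k hash evaluations on average.
    j = 0
    while _h(s, hit_i, hit_r, j) & ((1 << k) - 1) != 0:
        j += 1
    return j

def verify(s: bytes, hit_i: bytes, hit_r: bytes, k: int, j: int) -> bool:
    # Responder's cost: a single hash evaluation.
    return _h(s, hit_i, hit_r, j) & ((1 << k) - 1) == 0
```

Raising k by one doubles the initiator's expected work while leaving the responder's verification cost unchanged, which is exactly the tunability a flooding defence needs.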
Gradual authentication
On receipt of message 3, the responder gradually authenticates the received
message, beginning with a cheap operation in the form of a hash function.
Subsequently, the responder strengthens its checks by computing the symmetric
key to decrypt the encrypted message. Finally, the gradual authentication
process ends with strong authentication in the form of verifying the
initiator's signature.
Firstly, the responder verifies the client puzzle by comparing the value s
with the saved one, and then looking up the corresponding k value based on s.
Once the responder has both values, it checks
LSB(H(s, HITI, HITR, J), k) ?= 0^k to validate the puzzle solution as the
cheap authentication. Before the responder performs the expensive digital
signature verification, the puzzle verification assures the responder that the
initiator has performed a proof of work and committed computational resources
at a cost at least equivalent to the responder's. Having verified the
correctness of the puzzle solution, the responder computes the symmetric key
used to decrypt the initiator's public key PKI. Finally, the responder
performs the strong authentication, using the initiator's public key to verify
the signature and confirm the initiator's identity.
3.3.6 Lee and Fung’s Protocol
Lee and Fung [45,72] developed a public-key establishment protocol that adopts
a client puzzle to protect the responder from DoS attacks. The construction of
puzzles used in this protocol is based on Juels and Brainard's client puzzle
scheme [62]. In Lee and Fung's protocol (illustrated in Figure 3.6), when an
initiator initiates a connection to the responder, a unique puzzle is generated
and sent by the responder to the initiator. The initiator must solve the puzzle
and return it to the responder within a period determined by the responder.
Lee and Fung include a difficulty level for the puzzle, which can be adjusted
according to the availability of the responder's resources. This approach
reduces the probability of an attacker guessing the correct solution, and has
the advantage over Juels and Brainard's scheme that construction and
verification of the puzzle are cheaper than in the original scheme.
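The masked-preimage construction can be sketched as follows: the responder hashes its parameters into a preimage, masks out its k low bits, and the initiator brute-forces the missing bits so the completed value hashes back to the published target. The encodings, bit-masking layout, and names are illustrative assumptions rather than Lee and Fung's exact specification.

```python
# Sketch of a masked-preimage puzzle in the style of Juels and Brainard as
# used by Lee and Fung: recover the k masked bits of a hash preimage.
# Encodings and the low-bit masking convention are illustrative assumptions.
import hashlib

def _hint(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(32, "big")).digest()

def make_puzzle(id_i: bytes, s: bytes, t: bytes, k: int):
    m = int.from_bytes(hashlib.sha256(id_i + s + t).digest(), "big")
    x_masked = m & ~((1 << k) - 1)       # preimage with its k low bits masked
    return x_masked, _hint(m), k          # (x[j], H(M), k) sent to initiator

def solve(x_masked: int, target: bytes, k: int) -> int:
    # Initiator's cost: about 2^(k-1) hashes on average.
    for j in range(1 << k):
        if _hint(x_masked | j) == target:
            return j
    raise ValueError("no solution")

def verify(x_masked: int, target: bytes, j: int) -> bool:
    # Responder's cost: a single hash (plus recomputing M from s and t).
    return _hint(x_masked | j) == target
```

Unlike a zero-prefix search, a solution is guaranteed to exist here (the masked bits themselves), so the initiator's work is bounded by 2^k hashes.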
Counterbalancing CPU expenditure
To protect the responder from CPU exhaustion attacks, the proposed protocol
performs no expensive computational operations until the responder has
established the identity of the initiator by completing the verification
process. This step can be achieved by computing two relatively fast hash
operations. If an attacker attempts to flood a large number of bogus requests
to consume the responder's resources, the attacker will subsequently suffer
exhaustion of its own resources in computing the corresponding puzzles. In
addition, the proposed scheme applies the challenge-response concept, using
fresh random nonces and timestamps bound to the initiator's identity and
encrypted under the responder's secret key, to make the protocol resistant to
replay attacks. As a result, the protocol can resist resource exhaustion
attacks.
1) I → R : IDI
       R picks k ∈ [0, 1, ..., 160], computes M = H(IDI, s, t),
       x[j] = preimage of H(M) with k bits masked out,
       puzzle = (H(M), x[j], t, k)
2) R → I : puzzle
       I finds J such that H(J, x[j]) = H(M), picks NI ∈R Zn,
       computes sigI = SKI[H(IDI, IDR, (J, puzzle), PKR[NI])]
3) I → R : IDI, IDR, (J, puzzle), PKR[NI], sigI
       R recovers s by checking (IDI, t), computes M = H(IDI, s, t),
       checks H(J, x[j]) ?= H(M), decrypts PKR[NI], verifies sigI,
       picks NR ∈R Zn, computes sigR = SKR[H(IDI, IDR, PKR[NI], PKI[NR])],
       Ks = H(NI, NR)
4) R → I : IDI, IDR, PKR[NI], PKI[NR], sigR
       I decrypts PKI[NR], verifies sigR, computes Ks = H(NI, NR)
Figure 3.6: Lee and Fung’s Protocol [72]
Gradual authentication
In addition to counterbalancing computational and memory expenditure, Lee and
Fung's protocol allows the responder to gradually authenticate the initiator.
Before verifying the initiator's signature sigI, the responder has assurance
that the initiator has committed computational resources by solving the client
puzzle through brute-force hash reversal. Moreover, during verification of the
third message, the responder begins with a cheap operation, validating the
puzzle solution, then slightly increases the level of authentication by
decrypting the encrypted nonce (NI). Once both steps have completed, the
responder proceeds to signature verification to confirm the initiator's
identity.
3.3.7 Client-Aided RSA SSL / TLS
Castelluccia et al. [20] observed that the computational expenditure of a respon-
der in the SSL / TLS protocol could be reduced through the adoption of client-
aided computation. In the SSL / TLS protocol a responder receives a public-
key-encrypted copy of an initiator-selected session secret and has to perform
an expensive RSA decryption operation in order to generate session keys.
Fortunately, work had already been done on implementing server aided signa-
ture generation for resource constrained smart cards [52,80] and Castelluccia et
al. were able to apply the same techniques to have the initiator aid the responder
in decrypting the initiator selected session secret. They termed this approach
client-aided RSA (CA-RSA). The details of the CA-RSA protocol, which is de-
signed to be compatible with existing SSL/TLS deployments, are presented in
Figure 3.7.
In the SSL three-way handshake protocol adopting CA-RSA, the first client hello
message remains unchanged. The server hello message includes the server's cer-
tificate and the vector D = (d1, d2, ..., dk). The initiator aids responder
computation by performing calculations with these values. The client randomly
chooses the secret value x, which is used to compute the SSL session key. The
secret value x is encrypted with the server's public key component e:
y = x^e (mod N). Next, the client uses D to create a new vector Z by computing
zi = y^di (mod N), for 1 ≤ i ≤ k. The client then returns the vector Z and the
encrypted session key seed y to the responder in the client key exchange
message. Once the server receives this message it uses the elements of vector Z
to recover x, by computing the values Mp = Π(i=1..k) zi^fi (mod P) and
Mq = Π(i=1..k) zi^gi (mod Q). Finally, the
Precomputed by R: RSA parameters e, d, N; fi, gi ∈R {0, 1}^c;
D = (d1, d2, ..., dk) with d ≡ Σ(i=1..k) fi di (mod P − 1) and
d ≡ Σ(i=1..k) gi di (mod Q − 1); np = Q(Q^−1 mod P); nq = P(P^−1 mod Q)
1) I → R : IDI, NI, saI
       R picks s ∈R {0, 1}^a, computes t = H(s),
       puzzle = (t, s(b)), where s(b) is the last b bits of s
2) R → I : IDI, IDR, NI, NR, saR, puzzle, (e, N), D
       I finds J such that H(J || s(b)) = t, picks x ∈R {0, 1}^48,
       computes Ks = H(x, NI, NR), y = x^e mod N,
       zi = y^di mod N for 1 ≤ i ≤ k, Z = (z1, z2, ..., zk)
3) I → R : IDI, IDR, NI, NR, saR, J, y, Z
       R checks H(J || s(b)) ?= t, computes Mp = Π(i=1..k) zi^fi mod P,
       Mq = Π(i=1..k) zi^gi mod Q, x = y^d = Mp np + Mq nq mod N,
       Ks = H(x, NI, NR)
4) R → I : ServerFinish
Figure 3.7: CA-RSA Protocol [20]
responder recovers x by calculating Mp·np + Mq·nq (mod N), which is much less
computationally expensive than performing the modular exponentiation y^d (mod
N). The responder can now derive the session key Ks.
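The recovery arithmetic can be checked numerically with toy parameters. The split of d into D = (d1, d2, d3) with binary vectors f, g satisfying the two congruences above follows the equations in the figure, but the tiny primes and this particular three-element split are illustrative assumptions only, not a secure parameterization.

```python
# Numerical sketch of the CA-RSA recovery arithmetic with toy parameters.
# Insecure, illustrative parameters only: real RSA moduli are far larger and
# the decomposition of d would be chosen to hide its components.
import math
import random

P, Q, e = 1009, 1013, 5             # toy primes; gcd(e, (P-1)(Q-1)) = 1
N = P * Q
d = pow(e, -1, (P - 1) * (Q - 1))   # private exponent

# One way to satisfy d = f.D (mod P-1) and d = g.D (mod Q-1):
# f = (1,1,0), g = (1,0,1) with d2, d3 chosen as the residual terms.
d1 = random.randrange(1, N)
d2 = (d - d1) % (P - 1)
d3 = (d - d1) % (Q - 1)
D, f, g = (d1, d2, d3), (1, 1, 0), (1, 0, 1)

n_p = Q * pow(Q, -1, P)             # CRT recombination constants
n_q = P * pow(P, -1, Q)

x = 123456                          # initiator's session secret, x < N
y = pow(x, e, N)                    # encrypted under the responder's key

# Initiator's aid: z_i = y^{d_i} mod N for each element of D.
Z = [pow(y, di, N) for di in D]

# Responder: two small products instead of the full exponentiation y^d mod N.
Mp = math.prod(pow(z, fi, P) for z, fi in zip(Z, f)) % P
Mq = math.prod(pow(z, gi, Q) for z, gi in zip(Z, g)) % Q
recovered = (Mp * n_p + Mq * n_q) % N
assert recovered == x
```

The check passes because Mp ≡ y^(d mod P−1) ≡ x (mod P) and likewise for Mq, so the CRT recombination reconstructs x modulo N.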
Counterbalancing computational expenditure
Recognising that while CA-RSA eases the responder’s computational burden it
cannot be used as a proof of work, the protocol is supplemented by the addi-
tion of a Juels and Brainard [62] style client puzzle to ensure that the responder
only attempts to decrypt values from initiators who can provide a proof of work.
Unfortunately the puzzle construction specified is in violation of the
guidelines given by Juels and Brainard, being constructed simply by hashing a
random value. A puzzle construction that neither includes time nor relies on a
time-variant responder secret, and that omits any connection-specific
parameters, introduces numerous problems.
Firstly, the responder will be unable to know whether the puzzle solution it is
verifying is to a puzzle that it issued3 or if it is a puzzle that has been solved
previously unless it stores state. As mentioned earlier, storing state leaves the
responder vulnerable to memory-based denial of service attacks. Secondly, the
failure to make puzzles time variant provides no mechanism for a responder to
defend itself against an initiator that hoards puzzles, generating solutions at its
convenience, and then flooding the responder with legitimate puzzle solutions.
Finally, the failure to encode connection-specific parameters into the puzzle,
or to make use of cookies, prevents the responder from remaining stateless
after message 1 and results in the responder having no assurance that the
initiator is reachable at the claimed IP address. Storing state without
confirmation of initiator
able at the IP address claimed. Storing state without confirmation of initiator
reachability exposes the responder to anonymous memory-based flooding denial
of service attacks.
Counterbalancing memory expenditure
As with the unmodified version of TLS, CA-RSA does not make use of cookies
to counterbalance memory expenditure. The puzzle construction adopted by
the protocol is inadequate to replace the function of a cookie for allowing the
protocol responder to remain stateless and serve as a reachability test. Two
consequences of this are that the responder must store state on each connection
request, making it vulnerable to a memory-based denial of service attack, and
secondly, the responder has no way of assessing whether the initiator is using a
spoofed IP address.
Gradual authentication
Assuming that the responder maintains the significant amount of state required
to keep track of issued puzzles, the receipt of a valid puzzle solution could provide
some assurance that the initiator has committed computational resources to
having the protocol proceed.
While this protocol attempts to counterbalance computational expenditure, the
combination of a poorly constructed proof of work and a failure to counter-
balance memory expenditure leaves the protocol vulnerable to denial of service
attacks. To counterbalance memory expenditure, the responder should adopt
the use of cookies, or use a puzzle construction that is consistent with
meeting the functional requirements of a cookie.
3If an initiator can choose its own puzzles to solve independently of the responder, there would be a great risk of the initiator precomputing a large number of puzzle solutions to use in a denial of service attack.
3.4 Discussion and Conclusion
Having classified several promising mechanisms and defence techniques in
Chapter 2, and identified desirable properties of DoS-resistance in key
establishment protocols, Table 3.2 provides a summary of the seven protocols
in terms of DoS-resistant mechanisms and strategies. We note that the number
of key exchange protocols implementing denial of service resistance techniques
is limited, with our review of the literature revealing only seven protocols. Of
those protocols implementing denial of service resistance techniques only three,
Modified IKE, Host Identity Protocol (HIP), and Lee & Fung’s protocol, use
techniques supporting all three strategies: counterbalancing computational ex-
penditure; counterbalancing memory expenditure; and gradual authentication.
Nevertheless, those protocols implementing all three strategies do not appear to
be significantly more complex than the protocols implementing only a subset of
the strategies.
Key establishment protocols are particularly vulnerable to denial of service
attacks owing to the significant resources they must expend in authenticating ini-
tiators and generating the cryptographic keys used for securing ongoing commu-
nications. The goal of denial of service resistance in key establishment protocols
is to ensure that attackers cannot prevent a legitimate initiator and responder
from deriving cryptographic keys without themselves expending resources beyond
a responder-determined threshold.
In this chapter, we have explored the strategies and techniques that permit re-
sponders to counterbalance memory expenditure, counterbalance computational
expenditure, and to gradually authenticate initiators, thereby determining the
level of resources an attacker must commit to disrupt the key establishment
protocol and improving the responder's resistance to denial of service attacks.
The adoption of denial of service resistance techniques in seven key establish-
ment protocols was critically analysed.
Cookies were identified as a technique that can counterbalance memory ex-
penditure and initiate gradual authentication. Correctly constructed cookies
allow the protocol responder to remain stateless and serve as a reachability test,
providing the responder with assurance that an initiator is able to send and
receive messages from a claimed address.
80 Chapter 3. DoS-resistance in Key Establishment Protocols
Table 3.2: Summary of DoS Resistance Techniques used by Protocols

Key Establishment Protocol   Mechanisms                         Strategies
Photuris [64]                Cookie                             Rebalancing Memory
                             Cookie → EK{message} → Signature   Gradual Auth.
Modified IKE [83]            Cookie                             Rebalancing Memory
                             PoW                                Rebalancing CPU
                             Cookie → HASH*_I → Signature       Gradual Auth.
IKEv2 [65]                   Cookie                             Rebalancing Memory
                             Cookie → MAC → Signature           Gradual Auth.
JFK [3]                      Cookie                             Rebalancing Memory
                             Nonce → Cookie → MAC → Signature   Gradual Auth.
HIP [95]                     Cookie                             Rebalancing Memory
                             PoW                                Rebalancing CPU
                             Cookie → PoW → Signature           Gradual Auth.
Lee & Fung [72]              PoW                                Rebalancing Memory
                             PoW                                Rebalancing CPU
                             Cookie → Signature                 Gradual Auth.
CA-RSA [20]                  Client-Aided Comp.                 Rebalancing CPU
                             PoW                                Rebalancing CPU
                             PoW → Decrypt secret seed          Gradual Auth.

Cookie generation must not lead to any
state creation, as this will expose the responder to a “cookie crumb” attack.
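A correctly constructed cookie of this kind can be sketched as a keyed hash over
the claimed address and a periodically rotated server secret. The sketch below
is a minimal illustration, not the construction of any particular protocol; the
epoch length, field layout, and key size are our own assumptions.

```python
import hashlib
import hmac
import secrets
import time

# Server-side secret, rotated periodically; no per-client state is kept.
SERVER_SECRET = secrets.token_bytes(32)

def make_cookie(client_addr: str, client_port: int, epoch: int) -> bytes:
    """Derive a cookie from the claimed address and a coarse time epoch.
    The responder stores nothing: the cookie is recomputed when it returns."""
    msg = f"{client_addr}:{client_port}:{epoch}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

def check_cookie(cookie: bytes, client_addr: str, client_port: int) -> bool:
    """Accept cookies minted in the current or previous epoch."""
    now = int(time.time()) // 60  # 60-second epochs (illustrative choice)
    return any(
        hmac.compare_digest(cookie, make_cookie(client_addr, client_port, e))
        for e in (now, now - 1)
    )

if __name__ == "__main__":
    epoch = int(time.time()) // 60
    c = make_cookie("203.0.113.7", 40000, epoch)
    assert check_cookie(c, "203.0.113.7", 40000)       # reachability confirmed
    assert not check_cookie(c, "198.51.100.9", 40000)  # spoofed address fails
```

Because the cookie can be recomputed from the returned message alone, the
responder commits no memory to half-open exchanges, which is precisely the
property that defeats the "cookie crumb" attack described above.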
Proofs of work can be used to counterbalance computational expenditure at
the responder and to demonstrate the initiator's commitment to expending
resources to have the protocol proceed. While proofs of work can be built from
a range of underlying problems, hash-based constructions are the most
prevalent as they are simple to construct and can be verified cheaply.
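A hash-based proof of work of this kind can be sketched as follows, using a
HIP-style condition that the k least significant bits of H(seed, J) be zero.
SHA-256 and the 8-byte encoding of J are illustrative choices of our own;
expected solving work is about 2^k hash evaluations, while verification is a
single hash.

```python
import hashlib

def _lsb_bits(digest: bytes, k: int) -> int:
    """Interpret the digest as an integer; return its k least significant bits."""
    return int.from_bytes(digest, "big") & ((1 << k) - 1)

def solve_puzzle(seed: bytes, k: int) -> int:
    """Brute-force a solution J whose k low-order hash bits are all zero.
    Expected work is about 2**k hash evaluations (the initiator's burden)."""
    j = 0
    while _lsb_bits(hashlib.sha256(seed + j.to_bytes(8, "big")).digest(), k) != 0:
        j += 1
    return j

def verify_puzzle(seed: bytes, k: int, j: int) -> bool:
    """A single hash evaluation: cheap for the responder."""
    return _lsb_bits(hashlib.sha256(seed + j.to_bytes(8, "big")).digest(), k) == 0

j = solve_puzzle(b"server-seed", 12)         # roughly 2**12 hashes to solve
assert verify_puzzle(b"server-seed", 12, j)  # one hash to verify
```

The asymmetry between solving and verifying is what lets the responder scale
the attacker's cost by adjusting k, without increasing its own verification
cost.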
The notion of gradual authentication was introduced as a strategy for allow-
ing responders to gain assurance that an attack is not underway and that an
initiator is willing to commit computational and memory resources to having
the protocol proceed. Specific techniques for gradually authenticating initiators
were presented and discussed. A common characteristic of each of the techniques
3.4. Discussion and Conclusion 81
is that they all afford the responder the ability to cheaply verify some
aspect of a received message, while fabricating a message that can pass the
responder's check is expensive for an attacker.
In summary, the goal of protocol designers in strengthening key establishment
protocols against DoS adversaries is to ensure that malicious users cannot
easily disrupt the service between a legitimate user and the responder. In
practice, the general approach to counteracting DoS adversaries is to increase
the computational tasks and resources required of adversaries who attempt to
mount attacks, while at the same time ensuring that legitimate users need not
expend resources beyond an acceptable threshold. This guarantees that the
proposed DoS-resistant protocols can withstand such attacks and serve their
intended users properly, without service disruption.
Having identified the characteristics as well as the key strengths and
weaknesses of individual DoS-resistant protocols, the next chapter will
discuss a formal technique for examining and exploring DoS vulnerabilities
in key establishment protocols, namely Meadows' cost-based framework. In
addition, the next chapter will present a technique for modelling and
automatically analysing DoS resistance in key establishment protocols using
Coloured Petri Nets. For modelling and analysing the denial of service
resistance of key establishment protocols, we select HIP as our example,
because HIP is the latest DoS-resistant key establishment protocol and
incorporates all of the defence mechanisms discussed.
Chapter 4
Denial of Service Model for Key
Establishment
Formal methods are an effective tool for analysing whether the design of a
cryptographic protocol is secure. They became popular after the development
of the BAN logic (Section 2.4.1) in 1989. In a formal methods analysis, a set
of participants' beliefs is generated from a set of initial assumptions and
the protocol messages; if these satisfy the goals of the protocol, then the
protocol is valid.
In the case of denial of service attacks, modelling adversaries executing DoS
attacks is complicated by the complexity of such attacks and the range of
adversary capabilities. One promising approach has been introduced by
Meadows [87, 88], who proposed modelling DoS resistance in cryptographic
protocols by comparing the cost to the attacker with the cost to the responder
in executing the protocol, defined over a cost set. To model a protocol in
this framework, we calculate and compare the costs of the sequences of actions
performed by an attacker and a responder. However, a small gap remains in this
technique if DoS-resistant protocols are to be analysed more accurately. To
fill this gap, we introduce a way of refining and calculating the cost of
actions based on cryptographic benchmarks, together with a technique for
comparing these costs during protocol runs.
Surprisingly, there has been little interest in the research community in apply-
ing Meadows’ framework to different protocols. Moreover, the limited application
so far has suffered from two significant shortcomings which make the results of
restricted value. Firstly, the cost analysis in the original work has only taken
into account honest runs of the protocol. In principle, the adversary (typically
the client in a client-server protocol) can deviate arbitrarily from the protocol
in order to achieve an attack. By only taking into account honest behaviour
it is quite likely that logical attacks will be missed. While Meadows
certainly recognised this fact, little research has examined the effectiveness
of the framework in detecting such potential attacks; one example is the
examination of DoS resistance in the Just Fast Keying (JFK) protocol by
Smith et al. [125]. Secondly, Meadows used only a coarse
measure of computational cost, with three levels denoted as cheap, medium or
expensive. In practice it can be quite difficult to classify and compare
operations in such a limited way. For example, in Meadows' classification
digital signature generation and verification are counted as having equal
cost, yet in practice RSA signature generation may take two or three orders
of magnitude more effort than RSA signature verification.
The formal tool that we use to develop DoS-resistant models of key estab-
lishment protocols based on the concept of Meadows’ cost-based framework
is Coloured Petri Nets (CPNs) [11, 57, 58]. Over many years, cryptographic
and security protocols have been modeled and verified using Coloured Petri
Nets [4, 38, 39, 59, 94, 99]. To the best of our knowledge, however, there is
no implementation of CPNs focusing on the exploration of vulnerabilities based
on unbalanced computation that might lead to DoS attacks in key establishment
protocols. This motivates us to take up the challenge of using CPNs for
modelling and analysing DoS attacks.
Motivated by the above limitations, this chapter provides a refinement of
Meadows' cost-based framework. As our sample protocol we use the Host
Identity Protocol (HIP) [95], which has built-in DoS resistance and implements
all strategies and techniques mentioned in Chapter 3. To develop a formal
framework for such protocols, we use CPN Tools [130], a general-purpose
verification tool for modelling and analysing Coloured Petri Nets. Using CPNs
as our formalism, we provide formal specifications that allow automatic
searching of adversary and victim costs under different adversarial attack
strategies.
The main contributions of this chapter are:
• an investigation of the usefulness of Meadows' cost-based framework and
Coloured Petri Nets for modelling and analysing key establishment protocols;
• a refinement of Meadows' cost-based framework to more accurately represent
the cost of typical cryptographic algorithms;
• an examination of the denial of service vulnerabilities in the Host Identity
Protocol (HIP) using Meadows' cost-based framework and Coloured Petri Nets;
• an automated model constructed in CPN Tools for analysing the computational
costs of initiators and a responder, in order to identify vulnerabilities
based on Meadows' concept.
In the remainder of this chapter, Section 4.1 provides the definition and an
example use of Meadows' cost-based framework to examine DoS resistance in the
HIP protocol. In addition, our proposed technique for refining the definition
of computational cost in Meadows' framework is demonstrated in this section.
Section 4.3 describes the technique of implementing Coloured Petri Nets for
modelling and analysing cryptographic protocols; an experiment in Host
Identity Protocol (HIP) cost-based analysis is set up and examined there.
Finally, a discussion of the advantages and disadvantages of using Meadows'
cost-based framework and CPN Tools is provided in Section 4.5.
4.1 Meadows’ Cost-based Framework
Design of key establishment protocols has long been considered a delicate prob-
lem, but the analysis of a protocol becomes even harder when DoS prevention
is an additional requirement. Meadows [88] introduced a systematic framework
to evaluate the DoS resistance of a protocol by computing the cost incurred by
both parties at each step of the protocol. Meadows analysed the STS
protocol (a protocol without special DoS resistance properties) and later Smith
et al. [125] used Meadows’ framework to analyse the JFK protocol [3] in order
to evaluate its DoS prevention capabilities.
Meadows' framework [87, 89] works by comparing the cost to the attacker with
the cost to the responder in executing a protocol, defined using a cost set.
To model the cost-based framework, we need to calculate and compare the costs
of the sequences of actions performed by an attacker and a responder. However,
we need an appropriate technique for refining the cost set of these actions
and for comparing them, because during protocol runs each action may use
different computing resources, i.e. CPU or memory. One possible solution,
which we use in our model, is a cryptographic benchmark [32]. We replace the
original cost set (cheap, medium, and expensive) with the CPU usage of
individual cryptographic operations tested on different platforms. Details
will be discussed later in Section 4.2.
4.1.1 Definition
To analyse the protocol specification, we begin with the notation and definitions
introduced by Meadows [88]. The protocol framework uses the Alice-and-Bob
notation to represent message sequences communicated between an initiator and
a responder.
Definition 1. The sequence of messages sent from the principal A to the prin-
cipal B in the protocol is written in the form
A → B : T1, . . . , Tk ‖ M ‖ O1, . . . , On
• Ti are the operations performed by principal A in preparing the message M;
the sequence T1, . . . , Tk represents the ordered steps taken by A to
construct M.
• M is the message sent by principal A and subsequently received by the
intended recipient B.
• Oi are the operations performed by principal B in processing the message M;
the sequence O1, . . . , On represents the ordered steps taken by B to
process M.
Definition 2. An event in the operation A → B : T1, . . . , Tk ‖ M ‖ O1, . . . , On
is one of:
• an operation in Ti or an operation in Oj;
• the sending of M by A, which, together with the operations Ti, is said to
occur at A; or
• the receipt of M by B, which, together with the operations Oj, is said to
occur at B.
There are three types of events:
1. Normal events appear at both principals, always succeed, and are followed
by the next events.
2. Verification events appear only at the recipient side and can either
succeed or fail.
3. Accept events appear at the end of the operations Oj, indicating
completion of the process.
Definition 3. A cost set C is a set with operation + with partial order ≤ such
that
x ≤ x + y and y ≤ x + y for all x, y ∈ C.
Definition 4. Event cost function (δ) and message processing cost function (δ′)
• δ is an event cost function that maps operations in the protocol into the
cost set C and is 0 in the case of accept events.
• δ′ is a message processing cost function related to δ, defined on
verification events {Vi} ⊆ {Oj} such that
for A → B : T1, . . . , Tk ‖ M ‖ O1, . . . , On,
if Vi = Oj, then δ′(Vi) = δ(O1) + . . . + δ(Oj).
In general, the message processing cost δ′ is the total computational cost
incurred by the message receiver from the receipt of a message until the
final verification of that message has been completed.
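As a minimal illustration of Definition 4, the cost δ′ of a verification event
can be computed as a cumulative sum over the responder's operations. The
integer stand-ins for cheap, medium, and expensive are our own encoding,
chosen only so that the partial order of Definition 3 holds.

```python
# Hypothetical integer encoding of the document's {cheap, medium, expensive}
# scale, so that x <= x + y holds as required by Definition 3.
CHEAP, MEDIUM, EXPENSIVE = 1, 10, 100

def delta_prime(ops, j):
    """Message processing cost of the verification event at position j:
    the sum of delta over O_1 .. O_j (Definition 4)."""
    return sum(cost for _, cost in ops[: j + 1])

# Illustrative responder-side operations O_1..O_3 for one received message:
ops = [("computekey", MEDIUM), ("decrypt", MEDIUM), ("verifysig", EXPENSIVE)]
assert delta_prime(ops, 2) == MEDIUM + MEDIUM + EXPENSIVE
```

Note how every operation preceding the final verification contributes to δ′,
which is why expensive early operations make a responder attractive to DoS
attackers.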
Definition 5. Responder protocol engagement cost function (∆)
• ∆(On) is the sum of all operational costs appearing at the responder B up
to the accept event On, plus operational costs of processing the response
message related to such accept event On.
In particular, the protocol engagement cost function ∆ includes all operating
costs incurred by the responder in performing message verification. If the
responder reaches an accept event, it must prepare a response, so the protocol
engagement cost function must also include the operational cost of preparing
the reply message. Most protocols require the responder to generate a digital
signature, an expensive computation, in order to prepare a response; this can
make the responder susceptible to denial-of-service attacks.
Definition 6. Attacker event cost function φ, attacker cost function Φ, and
attack cost function Θ are defined as follows:
• φ is a function mapping attacker actions to attacker costs.
• Φ maps a sequence of attacker actions to a cost, where
Φ({x1, . . . , xn}) = φ(x1) + . . . + φ(xn)
and the xi are the attacker's capabilities.
• Θ assigns a cost to each event E of the system such that, if an attacker
interferes with any message desirably-preceding E, then neither E nor any
event desirably-after E will occur unless the cost of the capabilities the
attacker uses for the interference is at least Θ(E).
According to Meadows' definitions, we say that Ej is desirably-after Ei if Ei
desirably-precedes Ej, for i < j. The desirably-preceding and desirably-after
notions allow the analysis to account for spoofed, redirected, or altered
messages. Full details of desirably-preceding and desirably-after are given
in Meadows' Definition 5 [88].
Definition 7. Responder cost set C, attacker cost set G, and tolerance
relation τ are defined as follows:
• τ ⊂ C × G consists of all pairs (c, g) such that the attacker cannot force
a responder to expend resources of cost c or greater without itself expending
resources of cost g or greater. The protocol is secure if, for every event E,
the pair (c′, g′) = (∆(E), Θ(E)) is within the tolerance relation, i.e. there
is a (c, g) ∈ τ such that c′ ≤ c and g′ ≥ g.
4.1.2 Cost sets and Tolerance relation
Once we have translated the actions of the protocol into the computational
costs of the principals, we can compare all actions of a protocol run between
the initiator and the responder. The protocol is secure against
denial-of-service attacks if the cost to the attacker of engaging in the
events up to an accept event is great enough in comparison with the
corresponding cost to the responder; otherwise, the protocol is insecure
against denial-of-service attacks. Table 4.1 gives the costs of protocol
executions and Table 4.2 summarises the costs to an attacker during a
protocol execution.
To assess whether an investigated protocol is secure against DoS, we need to
examine the tolerance relation τ as defined in Definition 7. In Meadows'
original work, the responder cost set C and the attacker cost set G are
defined as:
Table 4.1: Cost of protocol executions

Action                                          Cost
1) preX: preprocessing of an operation X        = Cheap
2) create: create a message                     = Cheap
3) exp: compute an exponential function         = Cheap
4) determine: determine a message               = Cheap
5) compute: compute a mathematical operation    = Cheap to Medium
6) HASH
   6.1) compute                                 = Cheap to Expensive (depending on
                                                  difficulty level of a client puzzle)
   6.2) verify                                  = Cheap to Medium
7) Symmetric encryption
   7.1) computekey                              = Medium
   7.2) encrypt                                 = Medium
   7.3) decrypt                                 = Medium
8) Asymmetric encryption
   8.1) sign                                    = Medium
   8.2) verifysig                               = Expensive

expensive > medium > cheap > 0
The tolerance relation τ comprises a subset of the responder cost set C and
the attacker cost set G. The protocol is secure against DoS attacks if each
pair (c, g) arising in the analysis lies within the acceptable tuples:
Table 4.2: Cost to an Attacker [88]

Action                                    Cost
1) Sending a legitimate message           = The cost of computing the message
2) Forging a return address               = Cheap
3) Reading messages                       = Medium
4) Creating a new message out of          = Cost of deconstructing the old message
   old ones                                 + cost of creating the new message
5) Disabling a legitimate principal       = Medium
6) Substituting bogus messages for        = Very expensive
   genuine ones in real time
   (man-in-the-middle attack)
7) Breaking the cryptosystem              = Maximal
8) Inducing a principal to initiate       = Expensive to very expensive
   communication with a bogus,
   disabled, or dishonest principal
τ = { (cheap, cheap), (cheap, medium), (cheap, expensive),
      (medium, cheap), (medium, medium), (medium, expensive),
      (expensive, expensive) }
If other pairs occur in the analysis, for instance (expensive, cheap) or
(expensive, medium), we can conclude that the examined protocol is vulnerable
to cheap DoS attacks. The next section provides an example of using Meadows'
cost-based analysis to examine key establishment protocols.
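The membership test of Definition 7 over this τ can be sketched directly; the
integer encoding of the ordering cheap < medium < expensive is our own device.

```python
# Symbolic cost levels, ordered cheap < medium < expensive.
LEVELS = {"cheap": 0, "medium": 1, "expensive": 2}

# The tolerance relation tau listed above.
TAU = {
    ("cheap", "cheap"), ("cheap", "medium"), ("cheap", "expensive"),
    ("medium", "cheap"), ("medium", "medium"), ("medium", "expensive"),
    ("expensive", "expensive"),
}

def within_tolerance(responder_cost: str, attacker_cost: str) -> bool:
    """(c', g') is tolerable if some (c, g) in tau has c' <= c and g' >= g."""
    return any(
        LEVELS[responder_cost] <= LEVELS[c] and LEVELS[attacker_cost] >= LEVELS[g]
        for c, g in TAU
    )

assert within_tolerance("medium", "cheap")         # listed in tau: acceptable
assert not within_tolerance("expensive", "cheap")  # cheap attack, expensive victim
```

The pair rejected in the last line is exactly the (expensive, cheap) outcome
that signals a cheaply mountable DoS attack.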
4.1.3 Cost-based Analysis of DoS-resistance Protocols
The cost-based framework was first introduced and used to analyse the
Station-to-Station (STS) protocol [35]. In that work, Meadows [88] explored
resource exhaustion vulnerabilities that cause the server to waste substantial
computation at several states of the protocol. The first is at the beginning,
when an adversary capable of sending bogus requests can force the server to
respond with Diffie-Hellman key generation and a signature. Similarly, an
adversary with the capacity to craft a return address and message can cause
the server to process bogus messages. Finally, more sophisticated adversaries
could run Lowe's attack [76] and convince a legitimate client that it shares
a genuine key with the server.
The second work using Meadows' cost-based framework to analyse a security
protocol was by Ramachandran [109], who analysed protocol fragments including
1) storing state without authentication, 2) immediate strong authentication,
and 3) fast-escalating authentication. In the first type, Ramachandran found
a DoS threat because the adversary can, at essentially no cost, create many
requests that overwhelm the buffer in which the server keeps state
information. This threat becomes worse if the adversary can hide or spoof its
source address, so that the server's memory is consumed by a large number of
incoming requests apparently from different sources. The second type is
similar to the attack found in the STS protocol by Meadows [88]: if the
server requires strong authentication at the beginning of the protocol, the
adversary can flood bogus messages and overload the server, which wastes
resources computing Diffie-Hellman keys and signatures. Fast-escalating
authentication protocols are susceptible to DoS attacks if the protocol
designer does not carefully develop the weak authentication process;
adversaries can spoof or replay the weak form of authenticating message and
force the server to proceed to the next step of strong authentication,
wasting computational resources.
Another example is the JFK protocol [3], modelled in the cost-based framework
by Smith et al. [125]. Even though JFK implements gradual authentication to
reduce susceptibility to the DoS attacks proposed in Ramachandran's
work [109], Smith et al. found two DoS vulnerabilities at message 3 of the
protocol. To mount this attack, adversaries participate in the round-trip
communication until they receive a reply from the server. They then send data
from the reply along with some bogus information, including a bogus message
authentication code (MAC) and a bogus signature; this is a cheap operation.
The server, on the other hand, must perform an exponentiation and key
derivation before detecting the bogus message authentication code, which
amounts to an expensive operation. The tolerance relation for this situation
is the pair (expensive, cheap), which represents
a potential attack on the server.
Cost-based analysis in HIP
Having shown examples of STS and JFK cost-based analysis, we move our focus
to examining DoS vulnerabilities in the HIP protocol. Note that the full
protocol specification of HIP is provided in Figure 3.5 in Section 3.3.5.
Figure 4.1 illustrates the HIP cost-based specification, translated from the
HIP protocol specification using the notation of Meadows' framework from
Section 4.1. The HIP cost-based specification is extended to an automated
simulation and analysis of the CPN model in Section 4.4.
MSG  Cost-based Function
1) I → R: create HIT_I, create HIT_R
          ‖ HIT_I, HIT_R
          ‖ verify1 (verify HIT_R), accept1
2) R → I: create k, create s
          ‖ HIT_I, HIT_R, puzzle, sig_R1
          ‖ verifysig sig_R1, accept2
3) I → R: compute J (LSB(H(s, HIT_I, HIT_R, J), k) = 0^k),
          computekey Ke (Ke = H(HIT_I, HIT_R, g^ir, 01)),
          encrypt_PKI (Ke, {PK_I}), sign sig_I
          ‖ J, g^i, Ke{PK_I}, sig_I[HIT_I, HIT_R, J, g^i, Ke{PK_I}]
          ‖ verify2 (solution J: LSB(H(s, HIT_I, HIT_R, J), k) ?= 0^k),
          verify3 (computekey Ke (Ke = H(HIT_I, HIT_R, g^ir, 01))),
          decrypt_E1 (Ke, {PK_I}), verifysig sig_I, accept3
4) R → I: computekey Ks (Ks = H(HIT_I, HIT_R, g^ir, 02)),
          computeHMAC (HMAC = H_Ks(HIT_I, HIT_R)), sign sig_R2
          ‖ HMAC, sig_R2[HIT_I, HIT_R, H_Ks(HIT_I, HIT_R)]
          ‖ verifysig sig_R2, computekey Ks (Ks = H(HIT_I, HIT_R, g^ir, 02)),
          verify hashMAC (HMAC ?= H_Ks(HIT_I, HIT_R)), accept4

Figure 4.1: HIP Protocol in the Cost-Based Framework Notation
The Host Identity Protocol (HIP) implements a stateless connection as its
first line of DoS defence in the first state. In addition, it adopts Aura's
client puzzle mechanism (Figure 2.2(b)) to counteract malicious users
attempting to perform flooding attacks. Incorporating these mechanisms allows
HIP to satisfy all three proposed strategies: counterbalancing CPU exhaustion
using proofs of work from client puzzles; counterbalancing memory exhaustion
using stateless connections; and verifying a user's identity through gradual
authentication using client puzzles and digital signatures. From this point
of view, HIP is an effective implementation and a promising protocol for
withstanding denial of service attacks.
Protocol Assessment
From Figure 4.1, the initiator I performs create HIT_I, create HIT_R and
sends a request message to the responder R. Before the responder R receives
the message HIT_I, HIT_R from I, the server generates the pre-computed
parameters r, s, and signature sig_R1; these pre-actions require only cheap
operations. Upon receipt of the request message, the server verifies HIT_R to
complete the first message. Adversaries attempting to attack the protocol at
this phase should not be able to mount DoS attacks on the server, because the
cost evaluation at state 1 is within the defined tolerance relation τ:
scenario 1: flooding bogus request messages
δ′(verify HIT_R) = cheap
Θ(verify1) = cheap
(∆(accept1), Θ(accept1)) = (cheap, cheap) ∈ τ
Considering the second message, R performs create k, create s and returns the
second message HIT_I, HIT_R, puzzle, sig_R1 to I. When the initiator I
receives this message, it performs verifysig sig_R1, accept2. If verifysig
sig_R1 fails, the process stops; otherwise, I accepts the message and proceeds
to search for a puzzle solution, compute J. The initiator I performs
computekey Ke, encrypt_PKI, sign sig_I to construct the third message,
appends the signature sig_I, and sends it to R. Having received the message
from I, the server R validates the puzzle solution. If the solution is
invalid, the process stops; otherwise, R performs computekey Ke, decrypt_E1
to obtain the initiator's public key used in digital signature verification.
R then finally verifies the signature of I by performing verifysig sig_I. At
this state, there are three possible scenarios for mounting attacks:
1) following the protocol specification directly, causing the server to reach
the final state of the HIP base exchange; 2) computing only the puzzle
solution and generating a random signature, causing the server to compute the
signature verification; and 3) choosing all parts of message 3 randomly.
scenario 2: adversaries follow the protocol specification directly until
step 3. This attack causes the server to engage in the protocol until key
confirmation at step 4. The result of this attack is
δ′(verifysig sig_I) = expensive
Θ(verify3) = expensive
(∆(accept3), Θ(accept3)) = (expensive, expensive) ∈ τ
scenario 3: adversaries follow the protocol until they receive the returned
message 2 from the server. Next, they compute only the puzzle solution and
generate a random signature, causing the server to compute the signature
verification. The result of this attack is
δ′(verifysig sig_I) = medium to expensive
Θ(verify3) = expensive
(∆(accept3), Θ(accept3)) = (expensive, expensive) ∈ τ
scenario 4: adversaries act as in the second scenario, except that they
choose all parts of message 3 randomly. In this case, the server engages in
the protocol only up to the puzzle verification. It is important to note that
the original framework allows only one accept event in the verification
process at the responder. As HIP implements client puzzles as the first line
of DoS defence for authenticating the initiator, there are two verification
processes at this step. Hence, the bogus messages from scenario 4 adversaries
will be detected on completion of the puzzle verification (verify2 in
Figure 4.1). As a result, the responder does not process the digital
signature verification and the accept event does not occur in this analysis.
The result of this attack is
δ(verify solution J) = cheap
Θ(verify2) = cheap
(∆(verify2), Θ(verify2)) = (cheap, cheap) ∈ τ (evaluated on completion of verify2)
According to the analysis of the second attacking scenario, the attacker cost
set depends on the puzzle difficulty determined by the server. Since puzzle
difficulty can be adjusted from easiest to most difficult, this cost can vary
over the range cheap, medium, or expensive. As a result, the effect of this
attack must be considered and evaluated carefully.
In summary, an evaluation of HIP using Meadows' cost-based framework shows
that the first attacking scenario should not cause disruption on the server,
since its cost set, (cheap, cheap), is within the defined tolerance relation.
Meanwhile, the cost sets for attacking scenarios 2, 3, and 4 are
(expensive, expensive), (expensive, expensive), and (cheap, cheap),
respectively. All of these results lie within the defined tolerance relation.
We therefore conclude that HIP is DoS-resistant under these four attacking
scenarios, based on an evaluation using Meadows' cost-based framework.
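The verdicts for the four scenarios can be tabulated mechanically. This
sketch simply checks each (∆, Θ) pair for direct membership in τ, a
simplification of the full tolerance test of Definition 7; the scenario
labels are our abbreviations of the descriptions above.

```python
# Tolerance relation tau from Section 4.1.2.
TAU = {
    ("cheap", "cheap"), ("cheap", "medium"), ("cheap", "expensive"),
    ("medium", "cheap"), ("medium", "medium"), ("medium", "expensive"),
    ("expensive", "expensive"),
}

# (responder cost Delta, attacker cost Theta) for each attacking scenario.
scenarios = {
    "1: flood bogus requests":      ("cheap", "cheap"),
    "2: follow protocol to step 3": ("expensive", "expensive"),
    "3: solve puzzle, random sig":  ("expensive", "expensive"),
    "4: message 3 fully random":    ("cheap", "cheap"),
}

for name, pair in scenarios.items():
    verdict = "DoS-resistant" if pair in TAU else "vulnerable"
    print(f"scenario {name}: {pair} -> {verdict}")
```

All four pairs fall inside τ, matching the conclusion drawn above.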
Having demonstrated the cost-based analysis of HIP, we found a limitation in
the technique of assigning computational costs to individual operations:
given the many attacking strategies that adversaries may mount against the
server, the set {cheap, medium, expensive} may not be sufficient for
evaluating such attacks. Furthermore, the attacking strategies found in the
next chapter cannot be addressed with these three values. One possible
solution is to use a cryptographic benchmark, expressed in terms of CPU
usage, instead of that set. Section 4.2 presents this idea and demonstrates
how to use the technique with automated analysis tools.
4.2 Refinement of Meadows’s Framework
An obvious limitation of the original formulation of the framework is that the
computational costs are not defined precisely, but consist instead of a small
number of discrete values. Indeed Meadows herself called this a “crude and ad
hoc cost function” [88]. In order to perform a more useful cost comparison we
need to obtain a more accurate estimate of the computational and/or storage
costs required to complete the protocol steps. How to do this is not as obvious
as it may seem at first.
When comparing the efficiency of different cryptographic protocols it is
customary to count the number of different types of cryptographic operations.
For protocols that use public key operations it is common to ignore most
operations and count only the most expensive ones, which are typically
exponentiations in different groups (traditionally written as multiplications
in elliptic curve computations). However, for the protocols that interest us
this is definitely not acceptable.
acceptable. One of the most common techniques in DoS prevention is to de-
mand that clients solve puzzles which require the client to iterate a hash function
a large number of times. Although one hash computation takes minimal time in
comparison with a public key operation, ignoring a large number of hash com-
putations would make the cost function ignore completely the DoS prevention
mechanism when a puzzle is used. Therefore we need to be able to compare
directly the cost of all different kinds of cryptographic operations.
Comparing operations like hashing and exponentiations directly seems very
hard to do since they are based on completely different types of primitive instruc-
tions. Therefore we have resorted to an empirical comparison which compares
benchmark implementations on common types of processors. While we acknowl-
edge that the detailed results may differ considerably for different computing
environments (CPU, compilers, memory, and so on) we believe that the figures
we obtain are indicative of the true cost in typical environments and allow rea-
sonable comparisons to be made.
For our cost estimates we use the cryptographic benchmarks of Wei Dai [32].
These include speed benchmarks for some of the most commonly used
cryptographic algorithms, measured with the Crypto++ library version 5.5 on a
dual-core Intel Pentium processor at 1.83 GHz under Windows XP Service
Pack 2. Results for some common cryptographic algorithms used in key exchange
protocols such as HIP are presented in Table 4.3.
Table 4.3: Computational Cost of CPU and Time Usage of Specific Algorithms

Hash                                  CPU (Cycles/Block)   Time (µs)
SHA-1 (512 bits/block)                0.177                6.45
MD5 (512 bits/block)                  0.106                3.88
HMAC/SHA-1 (512 bits/block)           0.180                6.78

Symmetric Crypto                      CPU (Cycles/Block)   Time (µs)
DES (64 bits/block)                   6.388                29.41
AES/ECB 128-bit key (128 bits/block)  1.413                10.10

Public-Key Crypto                     CPU (kCycles/op)     Time (µs)
RSA Encryption/Verification           130                  70
RSA Decryption/Signature              2780                 1520
DSA Signature                         850                  470
DSA Verification                      950                  520

Key Exchange                          CPU (kCycles/op)     Time (µs)
Diffie-Hellman Key-Pair Generation    800                  440
Diffie-Hellman Key Agreement          2100                 1150
The units used in Table 4.3 are cycles per block (note that the block size
varies between algorithms) for hash functions and symmetric encryption, and
1000 cycles per operation for public-key encryption and key exchange
algorithms. This allows direct comparison of CPU usage and may be expected to
be similar on processors with different clock speeds; it entails a conversion
from the original data, which uses direct time measurements.
From the table we obtain the CPU usage in cycles per block for common hash
functions and symmetric encryption, and in cycles per operation for public-key
encryption and the Diffie-Hellman key exchange algorithm. We then scale each
value down by a factor of 1000 and apply these costs in our formal
specification and analysis. Before exporting these values into CPN Tools we
round them to integers, because CPN Tools supports only integer values in the
simulation process.
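The conversion just described can be sketched as follows. The cycle counts are assumptions read off Table 4.3 (the hash figure is taken as kilocycles per block, i.e. 0.177 kCycle → 177 cycles), and the helper `to_cost_unit` is a hypothetical name for the scale-and-round step:

```python
# Illustrative conversion of benchmark figures into the integer cost
# units used in the CPN model: scale by 1000 and round, since CPN Tools
# supports only integer values.
raw_cycles = {                 # cycles per operation (assumed figures)
    "sha1_block":      177,    # one 512-bit block
    "rsa_verify":  130_000,
    "rsa_sign":  2_780_000,
    "dh_key_agree": 2_100_000,
}

def to_cost_unit(cycles, scale=1000):
    """Scale a raw cycle count down and round to an integer cost token."""
    return round(cycles / scale)

costs = {op: to_cost_unit(c) for op, c in raw_cycles.items()}

# A puzzle of difficulty k takes about 2**(k-1) hash trials on average,
# so even a hash whose single cost rounds to 0 accumulates a real total.
def puzzle_cost(k):
    return to_cost_unit(2 ** (k - 1) * raw_cycles["sha1_block"])

print(costs, puzzle_cost(10))
```

Note how a single hash rounds down to zero while 2^9 trials do not, which is exactly the point made earlier about not letting the cost function ignore puzzle hashing.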
4.3 CPNs Simulation and Performance Analysis
The simulation of a CPN can be seen as an occurrence sequence consisting of
the markings that are reached and the steps that occur. For a transition to be
enabled in the current marking, it must be possible to bind data values to the
variables appearing in the surrounding arc expressions and in the guard, and
the following conditions
must be met. Firstly, each of the input arc expressions evaluates to tokens that
are present on the corresponding input places. Secondly, if there is any guard, it
must evaluate to true. The occurrence of a transition removes tokens from the
input places and adds tokens to the output places. The removed tokens are the
result of evaluating the expressions on the corresponding incoming arcs, while
the values of the added tokens are the result of evaluating the arc expressions on
the corresponding outgoing arcs.
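The enabling and occurrence rules above can be illustrated with a toy sketch (real CPNs bind variables over arc expressions and multisets; the place and transition names here are invented):

```python
from collections import Counter

class Transition:
    """A toy CPN transition: input arcs consume tokens, a guard may
    restrict bindings, and output arcs produce tokens."""
    def __init__(self, inputs, outputs, guard=lambda tok: True):
        self.inputs = inputs      # input place names
        self.outputs = outputs    # (place, fn) pairs producing tokens
        self.guard = guard

    def enabled_bindings(self, marking):
        # A binding here is simply one token value drawn from the input
        # places that satisfies the guard and is present everywhere.
        toks = {t for p in self.inputs for t in marking[p]}
        return [t for t in toks if self.guard(t)
                and all(marking[p][t] > 0 for p in self.inputs)]

    def occur(self, marking, tok):
        assert tok in self.enabled_bindings(marking)
        for p in self.inputs:            # remove tokens from input places
            marking[p][tok] -= 1
        for p, fn in self.outputs:       # add tokens to output places
            marking[p][fn(tok)] += 1

marking = {"P_in": Counter({3: 1}), "P_out": Counter()}
t = Transition(["P_in"], [("P_out", lambda x: x + 1)],
               guard=lambda x: x > 0)
t.occur(marking, 3)
print(dict(marking["P_out"]))  # {4: 1}
```

This sketch mirrors only the consume/guard/produce cycle; in CPN Tools the arc expressions and guards are written in CPN ML.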
Simulations allow the user to understand and to debug the model. The
CPN simulator supports execution of CPN models and provides two fundamental
modes of simulation that are suitable for different purposes.
1. Interactive simulation
In interactive mode, the developer can make a detailed investigation
of the behaviour of individual transitions or small parts of the model.
The user can control the simulation, set breakpoints, choose between enabled
binding elements, change markings of places, and study tokens in detail, much
like single-step debugging in a programming language. The purpose is to
investigate whether the individual components work as expected. However,
interactive simulations are very slow.
2. Automatic simulation
The automatic mode of the simulator is suitable for investigating the over-
all behaviour of the full model. In this case, the simulator makes random
choices between enabled binding elements. The developer controls
automatic simulations by means of stop options. However, tokens
are not displayed during automatic simulations, so this technique
typically presents less information than interactive simulation. A way to
obtain more information is to use the simulation report, a text file
containing detailed information about all the bindings of transitions that
occurred. With this technique, results can be obtained much faster than with
interactive simulation.
To achieve a formal proof of a security protocol, not only can simulation
techniques be used to explore the vulnerabilities of the protocol, but
state space analysis can also be used to check the security and correctness of
the protocol specification. In this research, we limit our scope to
performance analysis at the final step, in order to evaluate the protocol's
behaviour under different types of workloads from DoS adversaries. An
additional purpose of the performance analysis in our work is to determine
which parts of our new protocols might be optimized for speed and memory
usage.
The performance analysis1 uses the time concept of CPNs by conducting
simulation of systems called Timed CPNs2. The concept is based on a global
clock whose value is either an integer or a real number. In addition, each
token is allowed to carry a time value called a time stamp. The time stamp
describes the earliest model time at which the token can be removed by the
occurrence of a binding element; the time stamps of the tokens to be removed
must be less than or equal to the current model time. The execution of a
timed CPN is time driven, and works in a similar way to the event queues used
in many programming languages for discrete event simulation.
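A minimal sketch of this time-driven execution, assuming a single service transition and using a heap as the event queue (the names and single-place structure are illustrative simplifications, not CPN Tools internals):

```python
import heapq

# Tokens carry time stamps; a token may be consumed only once the
# global clock has reached its stamp, and each firing advances the
# clock by the (assumed) service time.
def run(timed_tokens, service_time):
    clock, log = 0, []
    queue = list(timed_tokens)           # (time_stamp, token) pairs
    heapq.heapify(queue)
    while queue:
        stamp, tok = heapq.heappop(queue)
        clock = max(clock, stamp)        # wait until stamp <= clock
        clock += service_time            # firing consumes model time
        log.append((tok, clock))
    return log

# Three requests arriving at model times 0, 2 and 10, each needing 3
# time units of processing:
print(run([(0, "a"), (2, "b"), (10, "c")], service_time=3))
# [('a', 3), ('b', 6), ('c', 13)]
```

Request "b" queues behind "a" while "c" is served as soon as it arrives, which is the behaviour the global clock and time stamps are meant to capture.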
When applying performance analysis to a system, some mechanism is
needed to generate the workload or traffic used as input for evaluating the
performance of the system. There are a number of techniques which can be
applied when generating the workload. One technique is to make a CP-net
which is part of the CPN model, and which is responsible for generating the
workload. In other cases the workload may be available from a text file which
can then be read into the CPN model. A third technique is a combination of the
two former techniques in which a separate CPN model is created which generates
the workload and writes it into a text file. Then, this text file can be used as
input for the actual CPN model.
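The third technique can be sketched as follows; the file format, inter-arrival distribution, and user-type labels are assumptions for illustration:

```python
import os
import random
import tempfile

# A separate generator writes arrival events to a text file, which the
# actual CPN model would later read as its input workload.
def generate_workload(path, n, seed=1):
    rng = random.Random(seed)
    t = 0
    with open(path, "w") as f:
        for _ in range(n):
            t += rng.randint(1, 5)       # assumed inter-arrival time
            user = rng.choice(["hc", "ad1", "ad2", "ad3", "ad4"])
            f.write(f"{t} {user}\n")     # "arrival_time user_type"

def read_workload(path):
    with open(path) as f:
        return [(int(a), u) for a, u in (line.split() for line in f)]

path = os.path.join(tempfile.gettempdir(), "hip_workload.txt")
generate_workload(path, 5)
events = read_workload(path)
print(events)  # five (arrival_time, user_type) pairs, times increasing
```

Decoupling generation from simulation in this way makes a workload reproducible, so different protocol variants can be compared against identical traffic.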
Simulation-based performance analysis may produce a large amount of
statistical data during a run. To make it simple to collect data during such
simulations, the CPN simulator implements a number of high-level primitives
1 The performance analysis will be applied to our new approach in Chapter 6.
2 The Timed CPNs concept will be used in the modelling of HIP in Chapters 5 and 6.
and data structures for data collection. The collected data are stored in data
structures called data collections. Moreover, data collections offer operations for
computing the average, standard deviation, and minimum and maximum of the
observed values. The output of the process can be written to a performance re-
port which provides a quick summary of the observed values and the performance
of the system. The data to be collected depend on the performance measures
of interest, for example, the throughput and the delay of the communication
protocol. This issue regarding performance analysis will be discussed again in
Chapter 6.
4.4 Formal Modelling and Performance Analysis of HIP
As cryptographic protocols and attack strategies become more complicated,
protocol engineering requires automated analysis techniques and tools to help
in the design and analysis process. CPN Tools is well known and widely used
in the system engineering community for modelling and analysis of
communication protocols. In this section, we demonstrate the results of
modelling and analysis by simulation of the Host Identity Protocol (HIP) [95],
which implements DoS-resistant strategies. The major purpose of simulation is
to search for unbalanced computational states and to compare computational
costs between an initiator and a server in order to find system flaws based on
resource exhaustion attacks.
4.4.1 Adversaries’ Ability
For the analysis of the adversary capacity, we consider four types of adversary.
Type 1 adversary computes a valid first message (may be pre-computed in
practice), and then takes no further action in the protocol.
Type 2 adversary completes the protocol normally until the third message
is sent and takes no further action after this. The computations of this
adversary include searching for a correct client puzzle solution J , generating
a session key Ke and encrypting a public key PKI , and finally computing a
digital signature SigI .
Type 3 adversary completes protocol steps one and two, except that the
adversary does not verify the server signature sigR1. The adversary
searches for a correct client puzzle solution J but randomly chooses the
remaining message elements: an encrypted element Ke{HII} and a digital
signature sigI . The adversary takes no further action in the protocol.
Type 4 adversary is like a Type 3 adversary, except that the client puzzle
solution J is now also chosen randomly.
In more detail, the fundamental objective of a Type 1 adversary is to over-
whelm the server's storage by sending a large number of request packets, as in
a denial-of-service attack via ping [22] or a SYN flooding attack [23]. The
major goal of Type 2, 3, and 4 adversaries is to force the server to waste
computational resources up to the final steps of digital signature
verification and generation, which are expensive operations. There are three
situations in which an adversary can cause a server to waste computational
resources:
1. All values of the third message, including the puzzle solution J , the
encrypted part Ke{HII}, and the digital signature sigI , are valid. This
requires a Type 2 adversary to participate in the round-trip communication up
to the completion of step 3 on the client side, and forces the server to
proceed with gradual authentication and complete the final step of the
communication. However, the tolerance relation of this attack is (expensive,
expensive), which is an acceptable outcome. Thus, HIP can resist this attack
strategy.
2. Only the client puzzle solution J is valid. This requires a Type 3
adversary to participate in the round-trip communication up to puzzle solving
in step 3 on the client side. The bogus message causes the server to perform
puzzle solution verification, symmetric key decryption, and signature
verification. Even though the server can detect the bogus signature, it has
by then already performed an expensive operation. As a result, the tolerance
relation of this attack is (expensive, medium), which is an unacceptable
range. This allows a Type 3 adversary to deny service on the server.
3. The client puzzle solution J is invalid, so the server computes only a
cheap hash function to verify the solution, and the connection is terminated
whether the remaining messages are valid or not. Even though a Type 4
adversary expends only cheap operations to craft bogus messages, the server
can detect the attack at a cheap state. As a result, a Type 4 adversary fails
to mount DoS attacks on the server because the tolerance relation lies in the
secure region, (cheap, cheap).
4.4.2 Tolerance Relation (τ)
In order to evaluate the DoS resistance of HIP from the CPN cost-based simu-
lation, we need to define the tolerance relation of the protocol. In Meadows'
original work, a tolerance relation τ is defined in terms of the ability of a
DoS-resistant protocol to resist denial-of-service attacks and to serve the
intended parties without degraded performance and service. Meanwhile, Aiello
et al. define this term for the JFK protocol (Section 3.3.4) such that the
protocol can resist DoS attacks if the responder does not incur any expensive
computation before the attacker's identity has been revealed.
In our HIP modelling and simulation using CPNs, we define a tolerance rela-
tion τ and DoS-resistance such that the protocol is able to withstand DoS attacks
if the difference between the computational cost of the responder to engage in
the protocol and the computational cost of the attacker to force the responder
to process its messages is not greater than an acceptable threshold (Thres). This
threshold can refer to an expensive operation on the server, digital signature veri-
fication for instance. To assess the protocol by comparing the cost of an attacker
to the responder, the tuples of a cost set in a tolerance relation of Meadows’
framework are defined somewhat subjectively as cheap, medium, and expensive
as provided in Section 4.1.2. The objective of defining these tuples is to
ensure that the attacker cannot expend only cheap or medium operations while
forcing the responder to perform somewhat expensive computation in response to
the attack.
When we replace the notation (cheap, medium, expensive) with the crypto-
graphic benchmark figures, the comparison of computational cost between an
attacker and a responder becomes more accurate. While the main goal of our
analysis is similar to Meadows' technique, we need to examine whether the
attacker can mount DoS attacks with cheap operations that cause the responder
to engage in expensive operations such as digital signature generation and
verification. By analysing with an automated simulation tool, CPN Tools, we
can easily gather the attacker's cost and the responder's protocol engagement
cost, and evaluate whether the pair lies in the subset of acceptable tuples of
the tolerance relation. Our tolerance relation is defined as follows.
τ = { ((Responder’s Cost - Attacker’s Cost) < Thres) }
where Thres is an acceptable threshold defined as the computational cost of
digital signature verification.
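As a sketch, the relation can be written as a simple predicate. The threshold value (DSA verification, taken from Table 4.3 in kilocycles) and the example cost pairs (in the style of the experiment summarized around Table 4.4) are plugged in for illustration only:

```python
# Thres is set to the (assumed) cost of a digital signature
# verification, in the same integer cost units used by the model.
THRES = 950  # DSA verification, kCycles per operation (assumption)

def tolerated(responder_cost, attacker_cost, thres=THRES):
    """True iff a run satisfies the tolerance relation:
    (responder's cost - attacker's cost) < Thres."""
    return (responder_cost - attacker_cost) < thres

# (responder, attacker) cost pairs, illustrative values:
print(tolerated(3770, 3901))  # honest-client-style run -> True
print(tolerated(988, 2))      # Type-3-style attack     -> False
```

The second pair fails the relation because the attacker forces far more work on the responder than it performs itself, which is exactly the asymmetry the relation is designed to flag.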
4.4.3 A Cost-based Model of HIP in CPNs
HIP is a four-packet exchange protocol which allows the initiator I on the IP
address IPI and server R on the IP address IPR to establish an authenticated
communication (full protocol of HIP base exchange is illustrated in Figure 3.5).
Both I and R hold long-term keys to generate signatures sigI(·) and sigR(·)
respectively. The public keys PKI of the initiator and PKR of the server are
represented in the form of host identifiers (HI). HIT represents the host
identity tag, created by taking a cryptographic hash over a host identifier
HI.
A one-way hash function H(·) is used to form the puzzle, while HKs represents
a keyed hash function using session key Ks to generate a hash-MAC (HMAC).
The value s is a periodically changing secret only known to the server. LSB(t, k)
takes as input a string t and a parameter k and returns the k least significant
bits of t. 0k is a string consisting of k zero bits. EKe(·) and DKe(·) denote
symmetric encryption and decryption respectively under session key Ke. To
generate the session keys Ke and Ks, HIP employs the Diffie-Hellman key
agreement protocol. The parameters used to generate these keys consist of
large prime numbers p and q, a generator g, a server's secret value r, and an
initiator's secret value i.
HIP adopts a proof-of-work scheme proposed by Jakobsson and Juels [55]
to counter resource exhaustion attacks. In this proof-of-work, HIP extends
the concept of a client puzzle, first implemented by Juels and Brainard, with
an alternative construction later proposed by Aura et al. (Section 2.3.1), for
protecting the server against DoS attacks in authentication protocols.
Moreover, the client puzzle offers an additional feature that helps the server
to delay state creation [5] until the second incoming message has been checked
and user authentication completed, protecting the server against resource
exhaustion attacks.
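A hash puzzle of this shape (find J such that LSB(H(challenge ‖ J), k) = 0^k) can be sketched as follows; the actual HIP puzzle input covers more protocol fields than the single challenge string assumed here:

```python
import hashlib
import os

def lsb(data: bytes, k: int) -> int:
    """Return the k least significant bits of a byte string as an int."""
    return int.from_bytes(data, "big") & ((1 << k) - 1)

def solve_puzzle(challenge: bytes, k: int) -> int:
    """Brute-force a solution J with LSB(H(challenge || J), k) == 0^k.
    Expected work is about 2**(k-1) hash evaluations."""
    j = 0
    while lsb(hashlib.sha1(challenge + j.to_bytes(8, "big")).digest(), k):
        j += 1
    return j

def verify_puzzle(challenge: bytes, j: int, k: int) -> bool:
    """Verification costs a single hash: cheap for the responder."""
    return lsb(hashlib.sha1(challenge + j.to_bytes(8, "big")).digest(), k) == 0

c = os.urandom(16)          # the server's fresh challenge
j = solve_puzzle(c, k=10)   # the initiator's brute-force search
print(verify_puzzle(c, j, k=10))  # True
```

The asymmetry between `solve_puzzle` (exponential in k) and `verify_puzzle` (one hash) is what lets the server shift computational burden onto the initiator.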
Experiment 1: Host Identity Protocol (HIP)
The purpose of the first experiment is to compare computational cost of the
protocol execution based on the concept of Meadows’ cost-based analysis between
all principals, as well as the four specified adversaries. Since HIP
incorporates a client puzzle mechanism, we investigate the computational cost
of all principals over a range of puzzle difficulties (k): the easiest value
k = 1 when the server has a normal load, intermediate values k = 10 and k = 20
when the server becomes moderately loaded, and k = 40 as the hardest value, as
instructed in the HIP specification. In this simulation, we allow each
initiator to initiate a request token only once and wait for a reply, while
the server is able to flexibly adjust the puzzle difficulty within the ranges
mentioned above. Once the simulation arrives at the final step, we record the
total computational cost of each user compared to the server for the specified
values of k.
[Figure 4.2 shows the top-level CPN page: Initiator and Responder subnets connected through Network places for MSG1–MSG4, with cost places (costI1–costI4, costR1–costR4), rejection fusion places, and the annotation: "The responder should select k to force the initiator to spend more time than the time used in MSG 3R & 4R".]
Figure 4.2: HIP Cost-based Model
Figure 4.2 shows the top-level page of the HIP model. The HIP protocol
is modelled hierarchically as a graphical representation for simplicity of
simulation, such that all nodes in the top page correspond to individual
subpages defined by the HIP specification. There are three main network
components on the top page: the initiator network on the left (either an
honest client who follows the protocol specification or an adversary who does
not play honestly), the server of the protocol on the right, and the
communication network in the middle. An important note for this experiment:
we run a single-round simulation and do not allow the adversary to reuse
previous messages to attack the server, because the objective of this
experiment is to record the computational cost of individual principals only.
HIP is modelled using the cost-based framework, such that each state has a
computational cost place to record and display the computational cost of that
state in the protocol execution. The HIP network contains six types of users:
an honest client (hc), a Type 1 adversary (ad1), a Type 2 adversary (ad2), a
Type 3 adversary (ad3), a Type 4 adversary (ad4), and a server (sv). In
addition, each message token consists of six colour sets: user type, arrival
time of the message, departure time of the message, data, computational cost,
and message sequence number. It is important to note that the cost displayed
at each state shows the total operation cost of that state only, not an
accumulated cost over all states. The reason for this choice is that it makes
it easy for the protocol analyst to compare the cost of processing a message
by the initiator and then, subsequently, by the server at each step of the
protocol, as suggested in Meadows' cost-based framework.
At the beginning of the protocol simulation, an honest client sends a request
message travelling alongside bogus messages from each adversary to the server.
Upon receipt of these requests, the server chooses the puzzle difficulty
depending on the workload condition and returns messages to the initiators.
When an initiator receives the reply, it first verifies the HIT element. At
this state, there are two possible outcomes depending on the user field: 1) if
the user is not a Type 1 adversary (ad1), verification succeeds and the cost
equals the action at this step; 2) if the user is a Type 1 adversary (ad1),
verification fails, because Type 1 adversaries take no further action after
the first message has been sent. Moreover, this transition sends a
notification to the Type 1 adversary to start flooding new bogus messages, if
applicable. Upon receipt of the reply from the server, the initiator's tasks
include the brute-force search for the puzzle solution, session key
generation, and digital signature generation. In the model, we set up
different values of the difficulty (k = 1, 10, 20, 40) as specified above, so
these costs are displayed depending on the difficulty level for the honest
client and each type of adversary. Recall that only Type 4 adversaries do not
attempt to solve the puzzle; therefore the puzzle difficulty does not affect
the computational cost for this type of adversary.
Experimental Result:
During the protocol execution, the initiator sends a request message to the
server using the host identity tag (HITR), a hash of the host identifier (HI )
used in the HIP payload and to index the corresponding state in the end hosts.
Therefore the initiator employs only cheap operations at this beginning step.
We assume that the computation at this step can be precomputed, so the cost of
the first operation is negligible. Once the server receives the request
message, it requires a hash operation and some precomputed values to return to
the initiator in the second step. Even though these operations are cheap,
they still cost the server a small amount of computation. However, the
computational burden of the server at this state is within a reasonable
threshold compared to the initiator's cost.
When the initiator receives the reply message, only honest clients partici-
pate in the verification of the HIT and the server's signature, so the cost is
equal to the HIT verification plus signature verification. In the case of
ad1, it takes no further action after the first message, so its computational
cost is zero for the second stage. The operations in message three of the
initiator include the brute-force search for the puzzle solution and the key
generation. The cost of solving a puzzle depends on the value of k in the
puzzle message field; however, only hc, ad2, and ad3 are required to solve the
puzzle. Like ad1, ad4 does not attempt to solve the puzzle, so the puzzle
difficulty does not affect the computational cost of this type of adversary.
Considering the processing on the server's machine when it receives mes-
sage three from the initiator: the server begins by validating the puzzle
solution, which is defined as a cheap operation because the server performs
only one hash calculation. If the solution is invalid, the process stops and
the server drops the corresponding packet from the connection queue (the
system returns the resource to the server). Otherwise, the server performs
the decryption to obtain the initiator's public key, and finally verifies the
signature using the public key obtained in the previous step. At this phase,
only honest clients' and Type 2 adversaries' packets can pass through to the
next step. After authentication has been completed, the server and the
initiator perform key confirmation and start to exchange information.
Table 4.4 summarizes the computational cost when the puzzle difficulty is set
to k=1 or k=10, comparing particular principals (honest client and
adversaries) against the server. The experimental result shows that the most
effective adversary is ad3 (the greatest cost difference between ad3 and the
server), because ad3 can force the server to engage in expensive tasks, i.e.
digital signature verification. The total computational costs for the pairs
(R,hc), (R,ad1), (R,ad2), (R,ad3), and (R,ad4) are (3770,3901), (1,0),
(3770,3769), (988,2), and (2,0), respectively.
Table 4.4: Comparison of Computational Cost of HIP with k=1 and k=10

                         Initiator             Server
  Protocol  Principal   k=1     k=10   J,E1,sigI valid  only J valid  everything invalid
  HIP       hc          3901    4923   3770             -             -
            ad1            0       0   -                -             1
            ad2         3769    4791   3770             -             -
            ad3            2    1024   -                988           -
            ad4            0       0   -                -             2
The experimental result demonstrated in Table 4.4 compares the computational
cost of the protocol execution, based on the key concept of Meadows'
cost-based analysis, between all principals for two fixed values of client
puzzle difficulty: k = 1 and k = 10. Notice that Type 1 adversaries spend
nothing to generate bogus requests that ask the server to reply with some
work. Since the server has to do some minimal work in checking the first
message, it may not be possible to do better than this. To prevent the attack
at this state, the implementor must instead protect an alternative resource,
namely memory or storage. HIP mitigates this problem by integrating a
stateless connection mechanism, which prevents the server from exhausting
storage space on attack messages before they can be detected. Considering
Type 3 adversaries, if the server selects an unsuitable value of k, for
example the minimum k=1, the adversary can exhaust the server by spending only
minimal work (2^(k-1) hashes on average) while forcing the server to process
the bogus packets up to signature verification (see Table 4.4). In the case
of Type 4 adversaries, even though they spend nothing to flood bogus messages
at a later stage than Type 1 adversaries, the server is able to detect these
bogus packets at puzzle verification and remove them from the communication at
a cost only slightly higher than that caused by Type 1 adversaries.
Figure 4.3 illustrates the computational cost of hc, ad2, and ad3,
respectively (we do not illustrate the computational cost of ad1 and ad4
because neither engages with client puzzles). We set up the experiment to
measure the computational cost of the protocol execution between these
participants over the range of puzzle difficulty (k): the easiest value k = 1
for the normal situation, intermediate values k = 10 and k = 20 for a
moderately loaded situation, and k = 40 as the hardest value defined in the
HIP specification. In this simulation, the server is able to flexibly adjust
the puzzle difficulty within the range mentioned above. Once the simulation
stops, we record the total computational cost of each user compared to the
server for the different values of k.

(a) Computational Cost between hc and a server
(b) Computational Cost between ad2 and a server
(c) Computational Cost between ad3 and a server
Figure 4.3: Computational Cost on HIP with different ranges of k
Comparison between Figures 4.3(a) and 4.3(b) shows that hc and ad2 incur
similar computational costs for the same value of k chosen. This illustrates well
the effectiveness of HIP in achieving its aims in resisting DoS attacks, at least
against this type of adversary. On the other hand, ad3 and ad4 expend very
little computational resource compared with the server, because both
adversaries use random message elements. This situation would put the server
at risk of DoS attacks if the value of k is not chosen properly. Figure
4.3(c) indicates that choosing k to be at least 10 would be appropriate to
ensure that ad3 uses as much computation as the server.
4.5 Discussion and Conclusion
Based on the experimental results of the cost-based analysis of HIP, Meadows'
framework is able to model and capture the cost incurred during the protocol
execution. However, implementing Meadows' cost-based framework in CPN Tools
for automated simulation and analysis offers greater advantages as protocols
and adversaries' capabilities become more complex. In addition, the graphical
representation of the CPN model can help the protocol analyst to remove
inconsistencies and to investigate the protocol behaviour, as well as some
vulnerabilities, more easily than other techniques.
Considering the limitations of the cost-based analysis, one obvious example
can be found in the experimental results from the CPN model (Table 4.4). When
adjustable puzzles are used in the protocol simulation, the cost-based
framework may not represent this issue as well as the simulation approach
does, because the initiator's cost varies as a puzzle-difficulty-dependent
parameter. In addition, the protocol analyst could consider either Type 3 or
Type 4 adversaries as the most effective strategy to degrade the throughput of
HIP. If the analyst interprets the result as the ratio between the
computational cost of the adversary and that of the server, Type 4 adversaries
become the most effective attackers, since they do nothing yet always cause
the server to spend some computation. Similarly, increasing the puzzle
difficulty may shrink the difference between Type 3 adversaries and the
server, while the gap between Type 4 and the server remains the same.
However, if we consider the cost-based definition more carefully, the
tolerance relation could be the difference of the computational costs rather
than their ratio. As a result, the ambiguity in interpreting the result, for
example in the case of Type 3 and Type 4 adversaries, is another concern with
this analysis technique.
Another issue concerns the effectiveness of modelling temporal aspects. For
instance, when the server becomes heavily loaded and later decides to adjust
the puzzle difficulty to delay incoming traffic, the cost-based model might
not be able to examine this behaviour properly. In addition, another attack
technique, known as coordinated attacks [125], in which adversaries reduce the
timeframe for solving puzzles, can be applied to systems implemented with
hash-based puzzles because such schemes are solvable in parallel. This kind
of attack, based on the weaknesses of the hash-based construction, is also
defined as a strong attack by Bocan and Cosma [12]. Hence, the protocol
analyst requires other appropriate tools to examine this issue more
effectively. Fortunately, CPN Tools provides a time feature that allows the
user to model time-related aspects of the protocol effectively.
In summary, the experiments in this chapter demonstrate that we have achieved
the aim of extending Meadows' framework to provide a more accurate
representation of computational cost. Furthermore, this chapter demonstrates
the potential and usefulness of automated tools for applying Meadows' analysis
technique to model and evaluate DoS vulnerabilities in the Host Identity
Protocol (HIP).
Even though different methods can be applied to the analysis of cryptographic
protocols, we focus on the CPN approach because it represents protocols
graphically, similar to a design flowchart, providing visual analysis to the
protocol analyst. In addition, this facility makes complex and large-scale
protocols more understandable, and removes inconsistencies from the informal
drawings used during the design stage, making the protocol more practical and
efficient.
Meadows' cost-based framework was the first technique to identify DoS
vulnerabilities in a cryptographic protocol, the Station-to-Station protocol. In
addition, this technique was successfully used to analyse and discover DoS
vulnerabilities in the Just Fast Keying (JFK) protocol a few years after
its introduction. However, the lack of automated analysis, as well as the
limitations of assigning operational costs in a complex protocol that consists
of many instances and various interactions with advanced attacking techniques,
are concerns with this technique. Motivated by these limitations, the next
chapter provides a technique to model strong attacks and demonstrates an
experiment based on the time construct provided in CPN Tools.
Chapter 5
Simulation and Analysis of HIP
in Timed CPNs
The previous chapter described the modelling and analysis of key establishment
protocols using Meadows' cost-based framework implemented in Coloured Petri
Nets. The outcome illustrated that these protocols can require the server to
expend expensive computational effort, such as modular exponentiation in the
Diffie-Hellman and RSA digital signature algorithms, while the initiator only
performs computations that are cheaper than the server's. This circumstance
leads to DoS vulnerabilities on the server's machine, because adversaries can
mount DoS attacks that deplete the server's resources before the server can
detect the attack.
Even though the cost-based framework is able to identify some DoS vul-
nerabilities in HIP, it has limitations when it comes to examining advanced
attacking techniques that require the ability to model time parameters. The
modelling of strong attacks (see Section 5.1.2), for instance, is one such issue.
As adversaries become more skillful, protocol designers require more advanced
formal analysis tools for exploring flaws in the implementation. Since Meadows'
cost-based framework considers only the cost functions of the protocol
participants, it might limit the ability of the analyst to
explore and identify new DoS vulnerabilities. Moreover, as we have seen from
the CPNs cost-based construction, the result of Experiment 1 from Section 4.4
illustrated that HIP is susceptible to resource exhaustion attacks at state 3 if the
adversaries are able to find a correct solution and force the server to verify the
signature. Even though the server can delay the attack by increasing the puzzle
difficulty, if Type 3 adversaries are able to run a parallel search to find a
correct solution faster than the specified period, the server is still vulnerable
to such a DoS attack. To analyse this circumstance, we need to incorporate time
into the simulation. Fortunately, CPN Tools provides a time feature for this
evaluation.
In this chapter, we propose an alternative technique to model and analyse
protocols using the time function provided by CPN Tools. Our formal model
is developed with Timed Coloured Petri Nets (Timed CPNs), and DoS-resistance
is analysed using the simulation approach provided in CPN Tools. The
model is used to explore vulnerabilities of DoS-resistant protocols by examining
the unbalanced computation that leads to resource exhaustion attacks in HIP. Our
new Timed CPNs model can be considered a parametric model because we
only insert and tune some parameters of the model from the previous experiment
in Chapter 4. This is an attractive example of using CPNs to model and analyse
DoS-resistant protocols.
Simulation approaches have long been valued in the research community for
exploring vulnerabilities in cryptographic protocols. Using simulation has
several benefits over other techniques: for instance, it gives the developer
the flexibility to choose, examine, and adjust parameters when evaluating the
system. In addition, simulation provides visualization, so users can see and
learn what happens during the simulation of cryptographic protocols and gain
a better understanding when evaluating their correctness.
Using Timed Coloured Petri Nets as our formalism to model HIP, we have
found that HIP can mitigate the effect of DoS attacks because of its adjustable
client puzzle mechanism. However, its inability to protect against coordinated
adversaries, who are capable of running a parallel search on a hash-based puzzle,
leaves the server susceptible to resource exhaustion attacks at the signature
verification phase. As a result, we propose an enhanced approach that replaces
hash-based puzzles with non-parallelizable cryptographic puzzles. Once non-
parallelizable cryptographic puzzles are implemented, the effect of coordinated
attacks is removed and the throughput of legitimate users returns to the
desirable level, as shown in Section 5.1.2.
The main contributions of this chapter include:
• a formal specification of HIP using Timed Coloured Petri Nets as our for-
malism to model time-dependent attacking techniques targeting weaknesses
of hash-based puzzle construction;
• an evaluation of the tolerance of HIP under different levels of puzzle difficulty;
• a simulation of the coordinated adversarial strategy and validation of the
analysis performed by Smith et al. [125] using the Meadows framework;
• an evaluation of the most effective attack strategies by measuring the
throughput of successful packets from legitimate users, in order to compare
the result with the examination using Meadows' cost-based framework;
• identification of a vulnerability caused by the hash-based puzzle schemes
in HIP;
• a proposed technique to improve the DoS-resistance in HIP to tackle the
problem of parallelization in hash-based puzzles, and to prevent coordi-
nated adversaries who try to overwhelm the server’s resource by computing
a puzzle solution in parallel.
Section 5.1 describes an experiment using CPN Tools to formally construct
HIP in the Timed CPNs specification and provides the experimental results of
such a model. Moreover, an enhanced version of HIP using non-parallelizable
puzzles to protect against coordinated adversaries will be provided in Section 5.2.
A discussion of the experimental results of the time-based analysis compared
with Meadows' cost-based framework is provided in Section 5.3. Finally, a
summary and some open problems are presented.
5.1 Simulation of HIP using Timed CPNs
In this section, we propose a more accurate and realistic model by applying
the concept of time to the HIP simulation and analysis, such that every
cryptographic process requires some amount of time calculated using the
cryptographic benchmarks of the Crypto++ library [32].
In Timed CPNs, the concept of the simulated time or the model time1, which
is represented by the system clock in the tool, has been introduced. Once we
have attached the system time to tokens, we can observe the sequence of states
that tokens move through as a temporal analysis. This means that only tokens
whose timestamp matches the current time on the clock can traverse to the next
1More formal descriptions are available on the official website of CPN Tools, http://wiki.daimi.au.dk/cpntools/cpntools.wiki
position, while the others must wait until the system clock reaches their value.
This feature gives us greater ability to model and examine the behaviour of the
participants, and of the system, more accurately.
Figure 5.1: An Example of Timed CPNs Model
Figure 5.1 shows an example of how Timed CPNs work. In this example, the
concepts of a server's resource and a time-out period for a message have been
implemented. The transition Check processes the token holding the current time
at an input place, and checks the current resources and the time-out period of
the token. If the server has resources available to handle a packet and the
time-out period of that packet has not expired, the system moves the packet to
an accept transition for processing in the next step. Otherwise, the system
drops the packet.
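The accept/drop logic of the Check transition can also be sketched outside CPN Tools. The following Python fragment is an illustrative rendering only; the names `Token` and `check` and the time-out constant are our own, not part of the CPN model:

```python
from dataclasses import dataclass

TIMEOUT = 50_000  # packet time-out period, in model time units (our choice)

@dataclass
class Token:
    user: str
    arrival_time: int  # AT: model time at which the packet reached the place

def check(token: Token, clock: int, resources: int):
    """Mimic the Check transition: a packet is accepted only if the server
    still holds a free resource token and the packet has not timed out."""
    waited = clock - token.arrival_time
    if resources > 0 and waited < TIMEOUT:
        return "accept", resources - 1  # packet moves on; one resource consumed
    return "drop", resources            # otherwise the packet is dropped
```

For instance, `check(Token("u1", 0), clock=100, resources=20)` accepts the packet and leaves 19 resources, whereas the same packet checked at clock 60,000 has exceeded its time-out and is dropped.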
5.1.1 Modelling of HIP using Timed CPNs
To develop a time-based model in CPN Tools, HIP is constructed hierarchically
for simplicity of the model and simulation. The new time-based construction is
an extension of the HIP cost-based model from the previous chapter, such that no
re-design is needed. We only insert time parameters and change some parameters
for a time-based compatible simulation.
In the time-based model, all nodes in the top page are related to individual
subpages defined by the HIP specification2, and all cryptographic processes
require some amount of time calculated using the cryptographic benchmark of the
Crypto++ library developed by Wei Dai (Table 4.3). This construction also
includes cost-based places, so that each state has a computational cost place to
display the total cost of that state during the protocol simulation. Furthermore,
the concepts of resource and time-out are used in this evaluation. This means
that when the server has to deal with requests, the server spends one queue
resource to handle an individual request. If incoming packets exceed the server's
capacity, those packets must wait in the server's queue until the server becomes
idle again. However, if the time-out of a packet is reached, the client gives up
or is required to re-send the request.
In our HIP time-based simulation model, we initially configure an individual
message to contain six coloured sets: 1) User, who initiates the message; 2) AT,
the arrival time of the message; 3) DT, the departure time of the message;
4) DATA, the string content of the message; 5) COST, used to display the
computational cost as the message travels to each operation; and 6) NUM, the
sequence number of each message in the protocol. Similar to the cost-based
models examined in Section 4.4 of Chapter 4, the cost displayed at each state
shows the total operation cost of that state only, not an accumulated cost over
all states. If the adversaries are able to force the responder to reach the
accept state and process the next operation for returning the message to the
adversaries, the total computational cost of the responder will include the cost
displayed at the cost place of the next state. The top page of the HIP Timed
CPNs model is constructed as illustrated in Figure 5.2.
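As a plain-data analogue, the six coloured sets of a message token could be rendered as a record like the following. This is a hypothetical Python sketch for illustration; the CPN model itself declares a product colour set, not a Python class:

```python
from dataclasses import dataclass

@dataclass
class HIPMessage:
    user: str  # User: the principal who initiates the message
    at: int    # AT: arrival time of the message
    dt: int    # DT: departure time of the message
    data: str  # DATA: the string content of the message
    cost: int  # COST: computational cost displayed at the current operation
    num: int   # NUM: sequence number of the message in the protocol
```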
From Figure 5.2, the top page consists of three major segments: 1) an
initiator's network, 2) a communication channel, and 3) a server's network. Each
transition represents a stage of protocol execution, corresponding to a specified
subpage; there are four stages for each principal because HIP is a four-message
protocol. Each stage consists of CPN elements constructed as specified in the
HIP protocol specification. An example of a server's subpage at the first stage
is demonstrated in Figure 5.3.
The server subpage consists of two important transitions: the first, named
Queue, is used for ordering received messages; the second, named Check HIT R,
is used for verifying the validity of the responder's host identity tag (HITR).
It is important to note that the process of ordering the incoming
2A full description is provided in Section 3.3.5
Figure 5.2: Hierarchical Construction of HIP Model (with the annotation: the
responder should select k to force the initiator to spend more time than the
time used in MSG3R & 4R)
messages into the queue is non-deterministic when multiple request messages
arrive at this transition simultaneously, because these packets are arranged
randomly by a random function of CPN Tools. The purpose of the Queue and
Count components in the first transition is to measure the number of arriving
request packets, so that the server can flexibly and appropriately adjust the
puzzle difficulty according to its level of workload.
Considering the second transition, Check HIT R, in this subpage: in order to
process a job the server must have available memory resources, which are pre-
specified at the place Resource. Note that each packet requires only one resource
(one connection queue slot) while it is processed at the server's machine. Once
messages are processed and removed from the connection queue, they return
resource tokens to the server's machine. If the server is busy processing its
current jobs, incoming requests have to wait at the Busy place until the server
has at least one resource available or becomes idle again. Since we implement
the time-out in this construction, all requests have a specified waiting period
before they expire. If the attacker can force the server to waste its resources
processing bogus packets until the time-outs of legitimate users'
Figure 5.3: The Server's Subpage at the First Stage
packets expire, the attacker wins the game and achieves the goal of DoS attacks.
In addition, the quantity at the resource place represents not only the
available connection queue, but also indirectly represents CPU usage on the
server's machine. It is important to note that the server's connection queue
is a finite number defined in the initial configuration before the simulation
starts. We shall explain the concept of a server's resource with an example.
Compare two messages, where message one is in step one while message two is in
step three. The server has to spend the same amount of connection queue, one
token per message, to serve both of them. However, in terms of CPU usage, the
server spends more capacity and time on message two than on message one, because
the main task at stage three is to verify the puzzle solution and the signature,
while the task at stage one is only to choose the puzzle difficulty and return
it to the initiator. By specifying the time usage from Table 4.3 for the
individual cryptographic transitions, we can infer that the longer a message is
processed in the server's machine, the more CPU usage it takes from the server.
To model adversaries' packets, each individual adversary has a specified
number of request tokens at the beginning of the protocol execution. This means
adversaries have the power to constantly flood a huge number of bogus messages
to deplete the connection queue, simultaneously with the legitimate users'
packets. Some may argue that any limit on adversaries' packets is neither fair
nor realistic; however, allowing adversaries unlimited capability would give
them an unrealistic advantage over honest clients and the server, because
adversaries could then deny service to any website simply by flooding bogus
messages at an unlimited rate until the server becomes congested and terminates.
If we did not limit the attack capability, the server would always be in a
full-load condition and unable to serve any legitimate users during the
simulation. Therefore, we would be unable to measure the tolerance of key
exchange protocols for resisting DoS attacks.
In our construction, adversaries are able to perform a number of attacks
depending on the available resources specified in the initial state of the
protocol simulation. Before launching new attacks, adversaries have to wait for
a return message (token/available resource) from the server. In normal
situations, the number of returned messages is equivalent to the number of
messages that the server receives. This means adversaries retain the same level
of capability to perform DoS attacks as long as the server can serve those
packets. However, once the server is in a full-load condition, the server is
unable to accept or queue incoming messages. This causes adversaries to lose
their packets (tokens) from the system, just like the packets from legitimate
users.
In the HIP protocol execution, there are three possible circumstances in
which adversaries can cause a server to waste computational resources.
1. All values of the third message, including the puzzle solution J, the
encrypted part Ke{HII}, and the digital signature sigI, are valid. This forces
the server to process the gradual authentication and waste its resources
completing the final step of the protocol, which is key confirmation.
2. Only the client puzzle solution J is valid. This situation also causes the
server to perform the puzzle solution verification, decryption, and the
signature verification. The server can detect the attack only at the final step
of authentication.
3. The client puzzle solution J is invalid, so the server computes only a cheap
hash function to verify the solution, and the connection is then terminated
whether the remaining messages are valid or not.
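The cheap-to-expensive ordering behind these three scenarios can be sketched as follows. The cost figures and the function name are hypothetical placeholders chosen for illustration, not the Crypto++ benchmark values used in the model:

```python
# Hypothetical per-step costs in model time units (illustrative only).
COSTS = {"puzzle": 1, "decrypt": 500, "signature": 5000}

def process_third_message(puzzle_valid, decryption_valid, signature_valid):
    """Return the server's verdict and the total cost it spends, following the
    gradual-authentication order: puzzle check first, then decryption and
    signature verification."""
    spent = COSTS["puzzle"]          # cheap hash check of the solution J
    if not puzzle_valid:
        return "reject", spent       # scenario 3: terminated at minimal cost
    spent += COSTS["decrypt"]        # decrypt Ke{HI_I}
    spent += COSTS["signature"]      # verify sig_I
    if decryption_valid and signature_valid:
        return "accept", spent       # scenario 1: full cost, connection accepted
    return "reject", spent           # scenario 2: detected only at the final step
```

Under these placeholder costs, an invalid puzzle costs the server only 1 unit, whereas a valid puzzle paired with a bogus signature (scenario 2) still costs 5,501 units before detection; this asymmetry is what Type 3 and Type 4 adversaries exploit.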
An important note on the first attacking scenario: this type of attack may
not be a threat, or may not exist, in some particular applications. For example,
the HIP protocol specification defines that if the initiator's signature is
valid, the user owning this message is authenticated and identified as a
legitimate user.
From these three attacking scenarios, we set up five different adversarial
strategies to attack a server running HIP as the underlying key exchange
algorithm, in order to explore the DoS vulnerabilities of HIP, as well as the
protective capability of the client puzzles integrated into HIP against resource
depletion attacks. We have previously described the first four attacking
strategies, one for each adversary capacity, in Section 4.4. An additional
adversary allowed in the time-based simulation experiment is called the
Coordinated Type 3 adversary. The attacking technique of this adversary is
similar to that of Type 3 adversaries, except that Coordinated Type 3
adversaries are able to control a group of compromised machines to solve puzzles
in parallel, obtaining the solution within a certain period.
Having introduced our time-based construction, the next subsection provides
the experiment and result of the original HIP protocol implemented with hash-
based reversal puzzles to protect the server against resource exhaustion attacks.
5.1.2 Experiment
To obtain experimental results, we set up two different experiments,
Experiment 2 and Experiment 3, to measure the tolerance of the server under
resource exhaustion attacks. In Experiment 2, the server is unable to control
the puzzle difficulty. Experiment 3, on the other hand, allows the server to
adjust the puzzle difficulty within a specified range between 1 (the client is
required to search for an input such that the first bit of the output is zero)
for the light-load condition and 10 (the client is required to search for an
input such that the first ten bits of the output are zero) for the heavy-load
condition. This allows us to pinpoint the key benefit of adjustable client
puzzles, as well as to define an appropriate range of puzzle difficulty in
different circumstances.
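To make the difficulty parameter concrete, a hash-based reversal puzzle of difficulty k can be sketched as below. We use SHA-256 and a simple challenge-plus-nonce concatenation purely for illustration; HIP's actual puzzle construction differs in its details:

```python
import hashlib
from itertools import count

def solve_puzzle(challenge: bytes, k: int) -> int:
    """Brute-force search for a nonce such that the first k bits of
    SHA-256(challenge || nonce) are zero; expected work is about 2**k hashes."""
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return nonce

def verify_puzzle(challenge: bytes, k: int, nonce: int) -> bool:
    """The server's side: a single cheap hash, regardless of k."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - k) == 0
```

Raising k from 1 to 10 multiplies the client's expected search effort by roughly 2^9, while the server's verification cost stays at a single hash, which is exactly the asymmetry that adjustable client puzzles exploit.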
Experimental Setup
We set up the experiment on a Pentium 4 processor running at 3 GHz with
1 GB of random access memory (RAM). For the software, we use Microsoft
Windows XP Professional version 2002 Service Pack 2 as the operating system,
and model the DoS-resistant protocols using CPN Tools version 2.2.0, which was
released in September 2006 and implemented by the CPN group at the University
of Aarhus, Denmark (http://wiki.daimi.au.dk/cpntools/cpntools.wiki).
To assess the DoS-resistant protocol, initiators and the server take
different actions during the protocol simulation. Honest clients initiate a
request only once and wait to process the next steps. Processing time and delay
are assigned by means of Timed CPNs, i.e. every transition that relates to a
cryptographic operation is defined as a timed process. During the simulation,
we first specified the number of available resources on the server's machine,
so that the server has a certain capacity and can handle only that number of
incoming packets in a certain period. Furthermore, we assume that the server
can process only one request message at a time, so a server queue is implemented
for ordering incoming packets. All remaining requests that exceed the server's
capacity have to wait in the queue until the server becomes idle or has
resources available to serve them. If requests from honest clients are rejected
under DoS conditions, the honest clients give up without opening another
session.
Considering the adversary's circumstances, there are two situations in which
packets from adversaries are rejected by the server: 1) the server detects the
bogus messages during the verification steps, or 2) the server does not have
enough resources to serve the requests and their time-outs expire. Once the
server detects the attack and rejects those packets, the adversaries lose the
game and those packets are removed from the system.
Slightly different from Experiment 1 in Chapter 4, we allow another type of
adversary in the time-based simulation to mount attacks on the system. The new
adversary is called the Coordinated Type 3 adversary, or Co ad3. Coordinated
Type 3 adversaries are similar to Type 3 adversaries, who solve the puzzle but
do not process the remainder of the protocol, except that Coordinated Type 3
adversaries are able to control multiple machines to run a parallel search on
the puzzles, obtaining the solution faster than the expected period. In the
simulation, Coordinated Type 3 adversaries reduce the puzzle solving time k to
k/n, where n is the number of machines controlled by the Co ad3. The
characteristic of Co ad3 is similar to the strong attacks defined by Bocan and
Cosma [12], in which adversaries control a group of compromised machines to
solve the puzzles in parallel and obtain the corresponding solutions much faster
than the expected solving period. Once the server has authenticated these
puzzles and received more requests, it gradually increases the puzzle difficulty
to increase the clients' workload and delay incoming requests. This situation
can cause denial of service to legitimate clients who cannot solve puzzles at
the maximum difficulty issued by the server.
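The effect of parallel search on a hash-based puzzle can be illustrated with a strided partition of the nonce space. This is our own sketch, simulated sequentially here (a real Co ad3 runs the strides on separate machines), using the same illustrative SHA-256 construction assumed earlier rather than HIP's actual puzzle:

```python
import hashlib

def strided_search(challenge: bytes, k: int, worker: int, n_workers: int,
                   limit: int = 1 << 20):
    """One of n workers scans nonces worker, worker+n, worker+2n, ...
    Returns (solution, tries) or (None, tries) if no hit below limit."""
    tries = 0
    for nonce in range(worker, limit, n_workers):
        tries += 1
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - k) == 0:
            return nonce, tries
    return None, tries

def coordinated_solve(challenge: bytes, k: int, n_workers: int):
    """Coordinated Type 3: the first worker to hit a solution wins, so the
    wall-clock solving time drops from t to roughly t/n."""
    results = [strided_search(challenge, k, w, n_workers)
               for w in range(n_workers)]
    return min((r for r in results if r[0] is not None), key=lambda r: r[1])
```

Because the strides are disjoint, the winning worker needs at most about 1/n of the hashes a single solver would, so a puzzle tuned to take time t is solved in roughly t/n, matching the k to k/n reduction described above.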
To make our simulation and experimental results more reliable, we run each
type of adversary a hundred times and average the results to obtain the final
throughput of successful services. The standard deviation (STD) of each
simulation is also provided in the table of experimental results. The parameters
used in our experiment are summarised in Table 5.1.
Table 5.1: Experimental Setup Parameters

1) Server's capacity: 20 units of resource
2) Honest client's packets: 20 packets
3) Bogus packets from individual adversaries:
   - low load: 50 packets (2.5 times the server's capacity)
   - high load: 500 packets (25 times the server's capacity)
4) Packet time-out period: 50,000 units of time
5) Puzzle difficulty:
   - low load: k minimum, requiring 1 unit of time to solve; used when the
     server's load is under the server's capacity (20)
   - medium load: k medium, requiring 500 units of time to solve; used when the
     server's load is higher than the server's capacity (20) but less than
     5 times the capacity (100)
   - heavy load: k maximum, requiring 5,000 units of time to solve; used when
     the server's load is higher than 5 times the capacity (100)
6) Cryptographic operations: cryptographic benchmarks as shown in Table 4.3,
   Section 4.2
The reason we set both the server's capacity and the honest client's packets
to 20 units is that we tested several data sets and recorded the running time of
a hundred simulations for each. If we allowed the honest client's packets and
the server's capacity to be greater than 20, the total running time of the
simulation would become excessive, which would limit the number of bogus
messages from different types of adversaries that we could examine. Meanwhile,
the bogus messages in the simulation take two sizes: 50 and 500. When
adversaries generate attack traffic of more than 500 packets against 20 units
of server capacity, the throughput saturates and drops to 0, and we are unable
to make comparisons or investigate the tolerance of the system.
Having introduced the experimental parameters and rules, we demonstrate the
experiments and results in the remainder of this chapter. The results may be
compared with Experiment 1, described in Section 4.4.3.
Experiment 2: Non-adjustable Client Puzzles
The purpose of this experiment is to examine the minimal DoS-resistant
mechanism. To achieve this, we run the simulation under the five specified
attacks with a non-adjustable client puzzle. We initially fix k=1, the easiest
puzzle value, which requires nothing expensive to solve when establishing a
connection under normal circumstances. Once we have examined the server's
tolerance under this minimal defensive mechanism, we set up another experiment
to examine the system under a higher puzzle difficulty (k = 10) in order to
compare the result with the minimal assessment.
Different from Experiment 1 in Chapter 4, we allow the server to handle
requests from honest clients arriving together with adversaries' packets. We
assume that the server has to deal with different strategies from the five types
of adversary, as well as different amounts of traffic consisting of both
legitimate and bogus messages. Considering the number of packets, legitimate
users initiate requests at the full server capacity (20 packets), while a single
type of adversary can flood bogus requests (Z) at 50 or 500 requests, which is
2.5 times and 25 times the server's capacity, respectively. To examine the
tolerance of the HIP protocol under different attack strategies, each adversary
is paired with an honest client so that they initiate requests to the server
simultaneously during the protocol execution. Finally, by inserting places that
display the number of completed and rejected messages in the server's network,
the number of successful legitimate requests that the server can serve under
different adversary capabilities is measured as a percentage for the protocol
evaluation.
The Experimental Results:
Table 5.2 and Figure 5.4 present the percentage of successful legitimate
connections, compared between two different amounts of bogus messages (Z=50 and
Z=500 packets) from the five adversarial strategies. Unsurprisingly, when we
remove the server's ability to adjust the puzzle difficulty k, the percentage
of messages from honest clients hc that obtain service drops drastically once
adversaries increase the number of bogus messages sent to the system.
Considering Type 1 adversaries, they cannot achieve DoS attacks or cause any
disruption to the server. This is because HIP implements stateless connections
as the first line of defense to protect the server from flooding attacks at the
initial step. Moreover, the ability to generate puzzles easily at the first step
helps the server to deal with this kind of attack as well. One possibility is
that
Table 5.2: Percentage of Throughput with Non-adjustable Client Puzzles

                      Z=50                        Z=500
              k=1           k=10           k=1           k=10
Adversaries   %      STD    %      STD    %      STD    %      STD
ad1           100    0      100    0      100    0      100    0
ad2           58.55  2.46   51.80  1.80   30.85  1.76   26.55  1.50
ad3           49.50  1.10   43.75  1.17   6.70   2.14   5.25   0.95
Co ad3        0.70   0.38   1.95   0.63   0.35   0.29   4.45   0.74
ad4           79     1.35   100    0      70     0      100    0

(% = percentage of successful legitimate requests; STD = standard deviation)
Type 1 adversaries can disrupt the legitimate packets by flooding huge num-
bers of bogus messages, which should be large enough to make the server busy
processing these packets until the time-out of legitimate packets expire. In our
model, for example, we set up the time-out to be 50,000 units of time and the
server processing at step 1 to be 1 unit of time, that enables Type 1 adversaries
to flood bogus messages over 50,000 packets simultaneously to the server before
arrival of the legitimate packets.
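The flooding threshold implied by these parameters can be worked out directly (a minimal sketch; the constant names are our own):

```python
# Model parameters from the text: legitimate-packet time-out and the
# server's per-message processing time at step 1.
TIMEOUT = 50_000      # units of time before a legitimate packet expires
STEP1_COST = 1        # units of time to process one message at step 1

# To keep the server busy for the whole time-out window, a Type 1
# adversary's flood must exceed TIMEOUT / STEP1_COST packets.
min_flood = TIMEOUT // STEP1_COST
print(min_flood)  # -> 50000
```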
Comparing Type 1 and Type 4 adversaries, even though both craft random
messages, Type 4 adversaries achieve their goal at a higher rate than Type 1
adversaries because the server can process incoming requests at step 1 and clear
its queue faster than at step 3. At step 1, the server participates in the protocol
only by choosing the puzzle difficulty (k) and pre-computed information and
returning them to Type 1 adversaries. Although Type 1 adversaries can
re-generate bogus messages after receiving the replies, this does not cause the
server to reject a large number of messages, because HIP mitigates this problem
by adopting stateless connections. On the other hand, the task of Type 4
adversaries, filling up the server's queue at step 3, is achieved more easily than
the task of Type 1 adversaries because checking a puzzle solution and a digital
signature takes longer than the whole process at
step 1. However, at the higher fixed puzzle difficulty, the percentage of
throughput for honest clients under Type 4 adversaries rises, in contrast to the
Type 1, Type 2, and Type 3 cases. Honest clients are required to solve client
puzzles at the higher difficulty level, which takes a longer period, but this does
not apply to Type 4 adversaries. As a result, only legitimate packets are
delayed, and the server gains a period in which to detect the attacks, drop
requests with invalid puzzle solutions from the system, and later provide
service to the legitimate users. A similar pattern appears in the graph for
coordinated Type 3 adversaries in Figure 5.4, because the puzzle solving time of
these adversaries is independent of the puzzle difficulty; no matter what the
puzzle difficulty is, both Type 4 adversaries and coordinated Type 3 adversaries
take the same fixed time to craft and flood bogus messages at the third step.

Figure 5.4: Percentage of throughput of hc with k=1 and k=10. (a) Attacking
packets are 2.5 times the server's capacity; (b) attacking packets are 25 times
the server's capacity.
For Type 2 and Type 3 adversaries, who attempt to deny service at phase 3 by
computing the puzzle solution, the results show that Type 3 adversaries succeed
in a higher proportion than Type 2 adversaries. This is because Type 3
adversaries can flood attack messages faster than Type 2 adversaries, who must
engage in the correct generation of message two. Nonetheless, both adversary
types can force the server into signature verification, which causes the server to
waste its resources significantly.
Finally, comparing all the different types of adversaries, the most effective are
coordinated Type 3 adversaries, who solve the puzzle but choose the remainder
of the third protocol message at random. This is because coordinated Type 3
adversaries can flood a large number of messages to overwhelm the server's
resources more quickly and efficiently than the others. Even though Type 4
adversaries are able to launch attacks at step 3 as quickly as coordinated Type 3
adversaries, they cannot drive the server to its most expensive state, the digital
signature verification phase, as coordinated Type 3 adversaries can. Since the
server can detect the bogus messages from Type 4 adversaries during puzzle
verification, which requires only cheap computation, it can discard all attacking
packets from the buffer queue and serve upcoming requests from other users.
Having identified the most effective scenario, we will apply these attacks in the
fourth experiment to investigate the usefulness of puzzle difficulty.
Experiment 3: Adjustable Client Puzzles
The purpose of this experiment is to measure the tolerance of the server when
adjustable client puzzles are implemented. As this experiment allows the server
to adjust the puzzle difficulty automatically among three levels depending on
the number of requests, the result can be used to evaluate the benefit of
adjustable client puzzles in tackling DoS attacks under the introduced scenarios,
and to compare with the experiment in Meadows' cost-based framework in
order to find the limitations of that framework. Furthermore, the number of
rejected packets arising at each puzzle difficulty can be used as a parameter for
evaluating the performance of the system.
In a slight departure from Experiment 2, we allocate three possible values of
puzzle difficulty k, which require approximately 1, 500, and 5000 units of time
for users to solve. We choose these three values because they do not add much
computational effort for honest clients, while the total task still stays within the
acceptable threshold (the tolerance relation, τ) compared with the tasks on the
server. To allow the server to adjust the puzzle difficulty flexibly among these
three values, we simply insert a counter into the model to measure the server's
workload. Once the workload reaches the maximum tolerance, the server can
increase the puzzle difficulty to a higher level, according to the conditions
defined in Table 5.1, to delay the incoming rate of request messages, or decrease
the puzzle difficulty once the workload returns to normal. One important note
for this experiment is that both honest clients and all types of adversaries are
allowed to make requests at the same time, while the server can process only
one message at a time. To manage these requests, we implement a queue at the
server. The queuing process is a uniformly distributed random process
simulated by CPN Tools.
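The workload-driven adjustment can be sketched as follows (a minimal sketch; the threshold values and function names are assumptions standing in for the conditions of Table 5.1):

```python
# Three difficulty levels from the text: roughly 1, 500, and 5000 units
# of solving time. The LOW/HIGH workload thresholds are hypothetical.
DIFFICULTIES = [1, 500, 5000]

def choose_difficulty(queue_length, low=20, high=40):
    """Pick a puzzle difficulty from the server's current workload counter."""
    if queue_length < low:
        return DIFFICULTIES[0]   # normal load: easiest puzzle
    if queue_length < high:
        return DIFFICULTIES[1]   # elevated load: medium puzzle
    return DIFFICULTIES[2]       # near capacity: hardest puzzle

print([choose_difficulty(q) for q in (5, 30, 90)])  # -> [1, 500, 5000]
```

The difficulty falls back to a lower level automatically once the queue drains, mirroring the counter-based behaviour described above.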
The Experimental Results:
Figure 5.5 illustrates that the number of attacking machines that the server
can tolerate increases substantially compared with the result for non-adjustable
client puzzles, where k is fixed at 1 and 10, shown in Figure 5.4. These graphs
present the percentage of successful legitimate connections for two volumes of
bogus messages (Z=50 and Z=500 packets) under the five adversarial strategies.
As in the previous experiment, the percentage of successful messages from
honest clients drops drastically as adversaries increase the number of bogus
messages sent to the system.
Another interesting result depicted in Figure 5.5 is that the trend for Type 4
adversaries differs from the other columns. Since all other columns show that
the throughput with adjustable puzzles is higher than with a fixed puzzle
difficulty, we investigated this result more carefully using single-step simulation
to find the reason behind it. We found that during the simulation some
legitimate users obtain the easiest puzzle difficulty, while others receive the
medium or maximum value, depending on the server's workload. Meanwhile,
Type 4 adversaries receive varying puzzle difficulties just as the legitimate users
do. Nonetheless, Type 4 adversaries do not spend time solving puzzles; no
matter what the puzzle difficulty is, they keep flooding bogus messages at the
same speed. That means legitimate packets that receive the lowest difficulty
and reply to the server during this period are affected by the traffic from Type
4 adversaries. Unsurprisingly, some of these legitimate packets are disrupted
and denied service. As a result, the final throughput lies between the graphs for
fixed k=1 and k=10 in Figure 5.5.
Considering the other attack strategies, Type 1 adversaries cannot achieve their
DoS goals, as in the previous experiment, because of the protection from the
stateless connection combined with the cookie mechanism. Type 2 and Type 3
adversaries are able to launch DoS attacks to a certain extent, depending on
the number of bogus messages and the speed of the incoming attack packets.
Coordinated Type 3 adversaries remain the most effective strategy for denying
service to a system implementing a hash-based reversal puzzle mechanism. The
reason is that coordinated Type 3 adversaries can force the server into the
expensive computation, digital signature verification, at the highest speed.
Since coordinated Type 3 adversaries control a group of high-end machines that
obtain the puzzle solutions faster than the period the server assigned for
delaying requests, these bogus packets accumulate in the connection queue
longer than those from other adversaries, because the server has to spend more
computational time verifying puzzles and digital signatures. Hence, the strategy
of coordinated Type 3 adversaries depletes the server's resources fastest and
most effectively.
Apart from the number of adversaries that the server can tolerate, illustrated in
Figure 5.5, the output shown in Tables 5.3 and 5.4, giving the percentage of
honest clients receiving each value of puzzle difficulty as well as the percentage
of unsuccessful honest clients, can be used to analyse the impact of the attacks.
The numbers in each row of the tables can be read as follows. For example,
when Coordinated Type 3 adversaries flood bogus messages at 2.5 times the
server's capacity (Z = 50), only 18.50% of honest clients obtain service. Of the
honest clients, 29.85% receive the minimum puzzle difficulty, while 70.15%
receive the medium puzzle difficulty. The clients receiving the medium puzzle
difficulty have to spend some resources to solve the puzzle, such that they have
to wait for some
delays. According to the experimental results in Table 5.3, 97.65% of the
honest clients receiving the minimum puzzle difficulty fail to obtain service,
while 74.63% of the honest clients receiving the medium puzzle difficulty are
rejected by the server because of the impact of the DoS attacks. The
experimental results for attacks with a higher number of bogus messages (25
times the server's capacity, Z = 500) are shown in Table 5.4.

Figure 5.5: Percentage of throughput of hc with adjustable client puzzles in
comparison with non-adjustable client puzzles from Experiment 2. (a)
Attacking packets are 2.5 times the server's capacity; (b) attacking packets are
25 times the server's capacity.
Table 5.3: Percentage of Throughput with Adjustable Client Puzzles at Z = 50

              % success    % HC receiving each k       % Unsuccessful HC
Adversaries   of HC        k=1     k=500   k=5000      k=1     k=500   k=5000
ad1           100          38.75   61.25   0           0       0       0
ad2           71.60        41.42   58.58   0           0       48.48   0
ad3           62.95        35.38   64.62   0           96.66   4.41    0
Co ad3        18.50        29.85   70.15   0           97.65   74.63   0
ad4           87.20        32.08   67.92   0           33.98   2.80    0
Table 5.4: Percentage of Throughput with Adjustable Client Puzzles at Z = 500

              % success    % HC receiving each k       % Unsuccessful HC
Adversaries   of HC        k=1     k=500   k=5000      k=1     k=500   k=5000
ad1           100          36.16   63.84   0           0       0       0
ad2           42.05        30.89   64.65   4.46        0       89.64   0
ad3           31.45        26.46   67.33   6.21        95.05   64.46   0
Co ad3        4.95         18.08   72.84   9.08        96.52   93.56   0
ad4           83.20        31.39   68.61   0           45.72   3.57    0
When adversaries flood a small number of bogus messages (Table 5.3), neither
honest clients nor any type of adversary receives the highest puzzle difficulty
(as shown in the column k = 5000). The server is able to handle these messages
at the medium level; however, more than half of the honest clients receive the
medium puzzle difficulty. That means they are required to solve more difficult
puzzles than in the normal situation, and obviously they have to wait for some
delay before accessing the server. If the server can process all messages or
remove the attack before the puzzle expires, all honest clients are able to gain
access. However, not every honest client's request succeeds; the number of
successful clients differs depending on the adversarial scenario, as illustrated in
Table 5.3.
In the case of Type 1 adversaries, they are unable to stop the server from
handling packets from honest clients (0% unsuccessful honest clients in both
tables). This is because the server does not store any messages in memory
during phase 1, so every honest client receives service. Considering the number
of honest clients receiving the medium puzzle difficulty, the highest number
arises under Coordinated Type 3 adversaries and the lowest under Type 2
adversaries. This is because Coordinated Type 3 adversaries return the correct
puzzle more quickly than the other types and force the server to verify bogus
signatures, while Type 2 adversaries are the slowest to respond to the server at
step 3 of the handshake. However, considering the number of unsuccessful
honest clients, Type 2 adversaries cause the highest proportion of rejections
among honest clients receiving the higher puzzle difficulty (the unsuccessful
messages are initiated only by medium-difficulty honest clients). In contrast,
Coordinated Type 3 adversaries harm the honest clients with the lowest puzzle
difficulty in the highest proportion (97.65% of these clients are rejected under
this attack). This is because puzzle verification is much shorter than digital
signature verification, so the server detects the bogus messages from
Coordinated Type 3 adversaries more quickly than those from Type 2
adversaries. As a result, honest clients receiving the lowest puzzle difficulty,
who respond with the step 3 message more quickly, are affected by Coordinated
Type 3 adversaries more than by Type 2 adversaries.
For Type 3 adversaries, most unsuccessful honest clients are those receiving the
lowest puzzle difficulty (96.66%), while few of those receiving the medium
puzzle difficulty fail (4.41%). This is because the large volume of bogus traffic
from Type 3 adversaries mainly drives the server into the digital signature
verification process. Because Type 3 adversaries only solve the puzzle and do
not participate in signature generation, they can return messages to the server
more quickly than honest clients who receive a similar level of client puzzle. As
a result, most honest clients who get a higher puzzle difficulty respond with the
solution more slowly and wait until the server has removed the attacks from
the system.
In the simulation of the Type 4 adversarial strategy, only a few honest clients
are affected by this technique. Most unsuccessful users are those solving the
minimum puzzle difficulty. This is because large amounts of bogus traffic are
flooded to the server very quickly at phase 3, but the server can detect and
remove the attacks by verifying the client puzzle (without engaging in digital
signature verification, as it must against Type 2 and Type 3 adversaries). As a
result, honest clients holding the minimum puzzle difficulty who submit their
puzzle solutions later are able to obtain service from the server. (A few of
them, 2.80% of clients, cannot obtain service, but this is the lowest proportion
apart from the Type 1 adversarial strategy.)
A similar situation occurs when we increase the volume of adversarial traffic, as
shown in Table 5.4. Although the adversaries increase the number of bogus
messages to 25 times the server's capacity, only Type 1 adversaries still cannot
deny service, for the reason described previously. Under Type 2, Type 3, and
Coordinated Type 3 adversaries, some honest clients are required to solve the
highest puzzle difficulty. Obviously, these users have to spend their resources
and wait longer than others. However, the experiment shows that messages
from users in this group never fail to obtain service: solving puzzles at the
highest level ensures that the server is able to remove the attacks and reclaim
its resources for serving the legitimate users again.
To conclude the experiment, Coordinated Type 3 adversaries are the most
destructive adversaries because they can solve the puzzle more quickly than the
other adversaries, while Type 1 adversaries have no impact on the server or the
honest clients. Considering the unsuccessful packets, the experimental results
illustrate that client puzzles can mitigate the impact of DoS attacks by
delaying the legitimate users and moving them to later time slots depending on
the chosen puzzle difficulty. Even though the experiment shows some rejected
packets from honest clients, most of them come from the users with the
minimum puzzle difficulty. Honest clients who do not solve a harder puzzle are
affected and must retransmit a request, but this does not impact such users
significantly because they have not wasted extra resources on puzzle solving. If
adversaries want to achieve their goal at a higher rate and deny service to the
honest clients who spend extra computation solving harder puzzles, they need
to perform puzzle solving and digital signature generation as Type 2 adversaries
do. That is, adversaries must commit their own resources to solving puzzles in
order to deplete the server's resources in the digital signature verification
process. In this case they are penalised by having to solve harder puzzles,
because when the server receives more incoming packets it increases the puzzle
difficulty: the more bogus messages they flood, the harder the puzzles they
receive. The second option for adversaries is to find a technique to solve client
puzzles much faster than honest clients, as Coordinated Type 3 adversaries do.
This option could be the more effective technique for adversaries, because the
generation of a digital signature requires more computation and it is very
difficult to impersonate other legitimate identities in order to pass signature
verification at phase 3. In this case, if adversaries have a group of high-end
computing machines for solving puzzles and flooding the solutions faster
(Coordinated Type 3 adversaries), they will achieve their goal of denying
service in the highest proportion.
Comparing the percentage of honest clients receiving each value of puzzle
difficulty with the percentage of unsuccessful honest clients can help the
protocol analyst to determine the usefulness of client puzzles in mitigating DoS
attacks, as well as to evaluate the performance of the system in dealing with
such attacks. Having demonstrated the experiments and analysed the results,
we found a weakness in the parallelizable nature of hash-based reversal
approaches. Since a hash-based puzzle can be computed in parallel, servers
adopting hash-based puzzles to mitigate DoS attacks remain vulnerable. This
kind of attack was also identified, in terms of a coordinated attack with
reusable computation, by Smith et al. [125]. As a result, the parallelizability of
hash-based schemes is a major concern and an open problem for protocol
developers who aim to strengthen the security of DoS-resistant protocols.
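The parallelizable weakness can be illustrated with a toy sketch (this is not HIP's actual puzzle format; the prefix-search puzzle, range sizes, and function names are illustrative assumptions): a coordinator splits a hash-reversal search space among n machines, cutting the expected solving time to roughly k/n.

```python
import hashlib

def solve_range(target_prefix, nonce_range):
    """Brute-force one slice of a hash-reversal puzzle's search space."""
    for nonce in nonce_range:
        digest = hashlib.sha256(str(nonce).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
    return None

def coordinated_solve(target_prefix, space, n):
    """A coordinator splits the space among n machines; each searches its
    own slice, so the expected solving time drops to roughly 1/n."""
    step = space // n
    slices = [range(i * step, (i + 1) * step) for i in range(n)]
    for s in slices:  # in a real attack these slices run in parallel
        found = solve_range(target_prefix, s)
        if found is not None:
            return found
    return None
```

A non-parallelizable puzzle closes exactly this loophole: no matter how the work is split, the expected time per puzzle stays near k.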
5.2 An Enhanced Version of HIP
Vulnerabilities based on unbalanced computations between an initiator and a
server were revealed in Section 5.1.2, especially in the form of attacks
performed by coordinated adversaries. This vulnerability creates the risk that
the server's machine will be overwhelmed by such attacks. This section
proposes a technique to mitigate the problem by replacing hash-based reversal
puzzles with a generic model of non-parallelizable client puzzles (a concrete
construction based on non-parallelizable algorithms will be provided and
discussed in Chapter 6). The results show that the proposed technique helps
the server to deal with the attack mentioned above, raising the final outcome
back to an acceptable ratio of successfully processed messages.
Experiment 4: Non-Parallelizable Client Puzzles
In this experiment, we make a minor change to the HIP model on the server's
subpage by using the generic model of non-parallelizable cryptographic puzzles.
The server can therefore select a more appropriate value of puzzle difficulty and
specify the predetermined time period of a puzzle for delaying incoming
requests when it becomes heavily loaded.
In this experiment we focus on the simulation and analysis of a system attacked
by coordinated Type 3 adversaries (Co-ad3). When we replace hash-based
puzzles with non-parallelizable cryptographic puzzles in the HIP model on the
server page, the final throughput of successful packets, from simulating honest
clients together with coordinated Type 3 adversaries, improves to a level
approximately comparable to Experiment 3. This experimental result supports
our assumption that non-parallelizable client puzzles help to protect the server
against coordinated DoS attacks.
The Experimental Results:
Figure 5.6 presents the percentage of successful legitimate connections for two
volumes of bogus messages (Z=50 and Z=500 packets) from coordinated Type 3
adversaries. Unlike Figures 5.4 and 5.5, the x-axis of Figure 5.6 compares the
results for fixed-value and adjustable non-parallelizable puzzles. As displayed
in the figure, the use of non-parallelizable cryptographic puzzles with adjustable
k effectively increases the percentage of successful packets from the legitimate
users.
Compared with the throughput of parallelizable puzzles in the Co ad3 column
of Figure 5.5, the throughput increases from 0.70% to 50.45% and from 0.35%
to 5.20% for the non-adjustable puzzles with Z=50 and Z=500 packets,
respectively. Similarly, the throughput improves from 18.50% to 63.10% and
from 14.40% to 43.80% for the adjustable puzzles with Z=50 and Z=500
packets, respectively. The overall throughput in both cases rises to a level
approximately comparable to the outcome for Type 3 adversaries in Experiment
3 (as shown in Figure 5.5).
In summary, the reason for this improvement is that Co-ad3 is unable to run a
parallel search on the client puzzles, so they are forced to solve each puzzle
directly and spend the period specified by the server until the correct solution
is found. This period is identical to the time required by Type 3 adversaries to
search for the correct solution of a hash-based reversal puzzle. Consequently,
when the server constructs non-parallelizable cryptographic puzzles, it can
control the time required for the initiator to solve a puzzle more precisely.
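As an illustration of the sequential property such puzzles rely on (the thesis's concrete construction appears in Chapter 6; this repeated-squaring, time-lock-style sketch with toy parameters is only an analogy, not the actual scheme):

```python
# Illustrative only: a repeated-squaring (time-lock style) puzzle. The k
# squarings are inherently sequential -- each depends on the previous
# result -- so n machines give no speed-up. Toy modulus and parameters.
def solve_sequential_puzzle(x, k, n_mod):
    """Compute x^(2^k) mod n_mod by k successive squarings."""
    y = x % n_mod
    for _ in range(k):
        y = (y * y) % n_mod  # each squaring depends on the previous one
    return y

print(solve_sequential_puzzle(3, 4, 1000))  # 3^16 mod 1000 -> 721
```

Because no squaring can start before the previous one finishes, splitting the work across machines cannot shorten the chain, which is what lets the server control the solving time precisely.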
Figure 5.6: Percentage of throughput of hc and Co-ad3 with adjustable client
puzzles.

Although non-parallelizable puzzles can improve the overall throughput of the
system, another issue arises concerning the attack on non-parallelization. It
could be argued that, instead of using n compromised machines to solve one
puzzle in parallel and reduce the solving time to approximately k/n, these
coordinated adversaries could use the n machines to flood n separate puzzles
instead. Intuitively, this scenario would cause the same degradation on the
server. However, the puzzle solving time in the latter scenario would be
approximately the period k assigned by the server, since each compromised
machine must solve its own puzzle. To explain this situation and compare the
two techniques more clearly, we create an example of the output, shown in
Figure 5.7. The x-axis represents time, and the y-axis represents the percentage
of legitimate users' outcomes; the y-axis can also be read as the available slots
of a finite connection queue. Scenario 1 represents the technique of using n
compromised machines to reduce the puzzle solving time to approximately k/n,
while scenario 2 represents the technique of using n machines to flood n puzzles.
As illustrated in Figure 5.7, attacking scenario 1 generates bogus requests that
cause constant traffic congestion, because the adversaries can find solutions
more quickly by reducing the solving time to k/n and generate new requests as
soon as they finish solving the previous ones. Since the percentage of successful
throughput is directly proportional to the available space in the server's
connection queue, if adversaries are able to fill up the connection queue
constantly, the percentage of throughput drops continuously as the proportion
of available queue decreases.
Figure 5.7: Comparison between Attacking Scenario 1 and 2
Even though the server increases the puzzle difficulty to a higher level after
time k (because we assume that the server becomes fully loaded at capacity n,
where n is equal to the number of the adversaries' machines), this situation
would recur if the coordinated adversaries remain in the network and keep
generating bogus messages at the same speed.
On the other hand, attacking scenario 2 generates requests for which the n
machines solve their puzzles independently. In this case, each machine spends
approximately k units of time solving its particular puzzle. Even though these
machines are able to fill up the queue and disrupt the services on the server at
time k, the overall throughput of the system during the period before k is
greater than under attacking scenario 1 (represented as the area under the
curve of scenario 2). For example, the percentage of successful packets from
legitimate users at time (k − n2) is x12 for attacking scenario 1 and x22 for
scenario 2. This is because, in the period between 0 and k − n2, the number of
bogus messages in the connection queue grows far more quickly under attacking
scenario 1 than under the latter technique. Figure 5.8 and Table 5.5 illustrate
the experimental results of using parallelizable client puzzles.
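The queue dynamics behind Figure 5.7 can be mimicked with a toy discrete-time sketch (all parameters and function names are hypothetical, not taken from the CPN model): scenario 1's machines resubmit every k/n time units, scenario 2's every k units, so scenario 1 fills a finite queue much earlier.

```python
def first_full_time(n_machines, resubmit_interval, queue_size, horizon):
    """Return the first time step at which the queue reaches queue_size,
    assuming all n machines return solutions every resubmit_interval."""
    queue = 0
    for t in range(1, horizon + 1):
        if t % resubmit_interval == 0:
            queue += n_machines        # all machines return solutions
        if queue >= queue_size:
            return t
    return None

k, n = 100, 10
t1 = first_full_time(n, k // n, queue_size=50, horizon=1000)  # scenario 1
t2 = first_full_time(n, k, queue_size=50, horizon=1000)       # scenario 2
print(t1, t2)  # -> 50 500
```

Scenario 1 congests the queue an order of magnitude sooner, matching the qualitative pattern of Table 5.6.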
From the experimental result shown in Figure 5.8, the throughput at the fixed
puzzle difficulty (k=10) is almost identical, at 6.15% and 6.70% for attacking
scenarios 1 and 2, respectively. This is because, when the server cannot
generate a range of puzzle difficulties, all initiators receive the same level of
puzzle and are therefore required to return the solution within the same
interval. Both attacking scenarios can generate approximately equal numbers
of bogus messages during time k, and both achieve disruption by congesting
the connection queue at the same rate; consequently, the percentage of
throughput drops by a similar proportion under both techniques.

Figure 5.8: Comparison between Attacking Scenario 1 and 2 with adjustable
parallelizable client puzzles.
On the other hand, the successful throughput with adjustable puzzles differs
slightly. Although the percentage of throughput increases compared with the
non-adjustable puzzles, the increase is not proportionally equal: attacking
scenario 1 achieves a higher level than attacking scenario 2. The reason is that,
when we enable the server to control the incoming request rate by updating the
puzzle difficulty, attacking scenario 2 must spend a longer period solving
puzzles, while attacking scenario 1 is unaffected since its puzzle solving time
does not change (we assume that coordinated Type 3 adversaries are able to
reduce the solving time to a minimum for any value of k). As a result,
attacking scenario 1 succeeds in filling the connection queue and causing
congestion in a higher proportion than scenario 2 over the same time interval.
Table 5.5: Percentage of Throughput compared between Attacking Scenarios 1
and 2 with adjustable parallelizable client puzzles (hc=20 and Z=500)

              Fixed-value (k=10)        Adjustable
Attacking     % success    STD          % success    STD
Scenario 1    6.15         1.84         13.30        3.09
Scenario 2    6.70         1.19         30.05        1.82
To confirm our assumption about the connection queue, we insert a monitoring
tool at the server's connection queue to record the time and compare which
technique fills the connection queue earlier. Table 5.6 illustrates the
experimental result. As shown in the table, the first time the connection queue
is filled by attacking scenario 1 is far earlier than by scenario 2 (34 versus 523
units of time for the fixed puzzles, and 43 versus 1049 units of time for the
adjustable puzzles).
Table 5.6: Time Intervals at which the Connection Queue is Full

              Fixed-value (k=10)             Adjustable
Attacking     #1     #2         #3           #1       #2         #3
Scenario 1    34     507-536    1009-1038    43-44    508-546    1010-1048
Scenario 2    523    1005-1025  1507-1527    1049     1527-1551  2029-2053
5.3 Discussion
Based on the formal modelling and analysis of the HIP cost-based construction
in Chapter 4, the result of Experiment 1 in Section 4.4 illustrated that HIP can
resist flooding attacks at message 1 because HIP uses stateless connections as a
first line of defense. However, HIP is still susceptible to resource exhaustion
attacks at step 3 if the adversaries are able to find a correct solution and force
the server to verify the signature. Even though HIP incorporates the gradual
authentication mechanism and client puzzles (as defined in Section 3.2) at
message 3 to protect the server against resource exhaustion attacks, the
evaluation in Meadows' cost-based framework identified a DoS threat: Type 3
adversaries can compute the correct puzzle solution with a bogus signature,
forcing the server to engage in expensive digital signature verification and
waste its resources.
When the server is unable to adjust the puzzle difficulty, the experimental
result from Meadows' cost-based framework identifies the attack by Type 3
adversaries as the most effective strategy when the puzzle difficulty is set to its
minimum value (Table 4.4 in Section 4.4, column k=1). This is because Type 3
adversaries incur only a tiny computational cost to force the server to compute
up to the digital signature verification phase. In the case of Type 4 adversaries,
who perform no computation, the server needs only cheap computation to
detect and discard the bogus messages. This evaluation from Meadows'
cost-based framework agrees with the experimental result from the simulation
and analysis using the Timed CPN model, shown in Experiment 2 (Figure 5.4).
In the experiment with adjustable puzzle difficulty (Figure 5.5), Type 3
adversaries remain the most effective strategy in the Timed CPN
simulation-based analysis (excluding coordinated Type 3 adversaries), while the
cost-based framework is unable to identify the vulnerability in the case of Type
3 adversaries, because the cost difference between the server and Type 3
adversaries becomes smaller and lies within the tolerance relation. Although
Type 4 adversaries create the largest difference in computational cost between
the adversary and the server, this gap is not large enough to establish a
protocol vulnerability, as the cost difference lies within the tolerance relation
(matching the conclusion of Meadows' cost-based analysis with the tolerance
relation (cheap, cheap)). Considering the computational cost of Type 3
adversaries, when the server increases the puzzle difficulty, the cost of puzzle
solving eventually becomes approximately equivalent to the server's
computation at step 3. Hence, the gap in computational cost between Type 3
adversaries and the server narrows, such that the cost set C lies within a
reasonable tolerance relation τ. As a result, the cost-based framework may be
unable to identify the DoS vulnerability in this circumstance, because all
defender cost sets are allocated to the secure region within the tolerance
relation.
For more complex systems and more sophisticated attack scenarios mounted by
advanced adversaries, Meadows' cost-based framework might not be able to capture
some hidden factors that cause service disruption on the system. In contrast to
the Timed CPNs simulation, the cost-based analysis considers only the cost to an
individual adversary of performing a set of identified actions up to a certain
point of the protocol execution, compared to the cost of protocol engagement by
the server's machine. The simulation-based analysis is able to identify more
sophisticated adversary techniques, as well as to involve a large number of
packets launched by multiple participants. Moreover, some hidden parameters, such
as time, as examined by Chan et al. [29], can be included and evaluated in the
analysis. As a result, the inability to model realistic events related to time
factors and to handle a large number of messages could be a limitation of
Meadows' cost-based framework for analysing DoS-resistant protocols.
Finally, the most obvious benefit of the simulation-based analysis is that
we can observe not only the behaviour of adversaries, but also the consequences
of attacks on the system during the protocol execution. In addition, the analyst
is able to understand and evaluate the final outcome more easily, even using
single-step analysis to investigate behaviour in a complicated situation or in a
large system carrying a large amount of communicating traffic. In contrast, the
analyst has to take extra care to consider and identify adversary capability in
the cost-based evaluation. Furthermore, the effect of complex attacks might
generate ambiguous output. For example, the results for Type 3 and Type 4
adversaries under the cost-based analysis of Table 4.4 might lead to the
conclusion that the most effective strategy is performed by Type 3 adversaries,
because the total cost difference between the server and such adversaries is the
largest. However, it is also arguable that Type 4 adversaries represent the most
destructive scenario, because the ratio of computational effort between the
server and such adversaries is infinite no matter what value of puzzle difficulty
is chosen.
5.4 Summary
In this chapter, we propose the first formal specification of HIP using Timed
Coloured Petri Nets as the formalism. CPN Tools is a versatile and powerful tool
that allows us to model complex protocols, as well as advanced attacking
techniques that target the server by reducing the time for puzzle solving.
Furthermore, we have demonstrated a simulation and analysis of HIP under normal
conditions and under five scenarios of DoS attacks. The tolerance of the system,
represented as the percentage of successful throughput under different adversarial
strategies, has been examined and analysed. Although our range of adversarial
actions is limited to five different strategies, we have found weaknesses in the
use of hash-based client puzzles and proposed a technique to address this
parallelizability vulnerability in hash-based puzzle constructions.
Even though the experimental results of Experiments 2 and 3 demonstrate
that puzzle difficulty can mitigate the problem of resource exhaustion attacks,
coordinated adversaries can employ an alternative technique to attack hash-based
puzzles and cause the server to participate in the signature verification, as
discussed in Section 5.2. In order to prevent such attacks, we replace hash-based
puzzles with a generic model of non-parallelizable cryptographic puzzles and
set up another experiment to validate our assumption.
According to our investigation of the individual cryptographic puzzles proposed
in Section 2.3 of Chapter 2, only time-lock puzzles and hash chain puzzles have
the non-parallelizability characteristic. Although the integration of these
puzzles is able to mitigate the ability of coordinated Type 3 adversaries to
speed up the search for a solution by distributing a puzzle to other
high-performance machines, the computation underlying the construction of such
puzzles is a major concern. Puzzle generation entails the calculation of a
modular exponentiation in the case of time-lock puzzles3, or the sequential
generation of a hash chain in the case of hash chain puzzles, which is of
greater magnitude than for hash-based reversal schemes.
To address the problem of parallelizable client puzzles, the next chapter will
propose a new construction with characteristics comparable to time-lock puzzles
and hash chain puzzles, but requiring less computation for puzzle construction
and verification. The new technique also provides the non-parallelizability
property, which we argue should be added to the list of desirable characteristics
of good puzzles.
3Some other example uses of time-lock puzzles have been evaluated and identified by Mao [78], Back [7], and Feng et al. [44], all of which suffer from the same problem.
Chapter 6
Subset Sum Cryptographic
Puzzles
Hash-based constructions meet many of the desirable properties of proofs of work
(puzzles), but they also have the property that exhaustive searching of a pre-
image search space is a parallelizable task. Using such a technique in the presence
of an adversary with access to distributed computing resources may leave key
establishment protocols exposed to denial of service. Adopting alternative puzzle
constructions, such as time-lock puzzles that are inherently sequential and non-
parallelizable, may need to be considered for protocols that are to be used in an
environment where the adversarial model assumes that significant resources are
available to the attacker.
A client puzzle is non-parallelizable if the solution to the puzzle cannot be
computed in parallel. Non-parallelizable client puzzles can be used to defend
against distributed denial-of-service (DDoS) attacks, in which a single adversary
controls a large group of compromised machines and launches attacks against the
targeted server from those machines. If the client puzzle is parallelizable, such
an adversary could distribute puzzles to other compromised machines to obtain
puzzle solutions faster than the time expected by the server. This kind of attack
was identified as a strong attack in Chapter 5. In addition, non-parallelizable
client puzzles could prevent high-end users from gaining an advantage by using
multicore processors to obtain puzzle solutions faster than the period specified
by the server. The latter issue leads to another concern, the fairness property,
which is defined by Boyd and Foo [14] as follows.
An exchange or payment protocol is considered fair if neither of the
two parties exchanging items or payment at any time during the
protocol has a significant advantage over the other entity.
An alternative construction to prevent adversaries from distributing and
computing a puzzle in parallel is a chaining technique. Since chaining requires
the previous value for constructing the next consecutive item, it defeats
coordinated adversaries who attempt to solve puzzles by parallel computing. To
date, there have been two constructions using the chaining technique based on
hash functions, proposed by Ma in 2005 and by Groza and Petrica in 2006 (see
Section 2.3.6). Although the construction of hash chain puzzles can prevent
parallel searching techniques, both of them impose a high computational burden
on the server to construct the puzzles, as high as that on the client.
In this chapter, we propose a new puzzle construction based on the subset
sum problem. The primary strengths of this puzzle over others are the simple and
cheap construction and verification for the server, as well as non-parallelizability.
The main contributions of this chapter are:
• to provide the background of knapsack problems, and discuss a characteristic
of LLL lattice reduction, which is a polynomial-time algorithm used to
successfully break knapsack cryptographic protocols;
• to propose a new client puzzle construction, called Subset Sum Client Puzzles;
• to provide a performance analysis of the newly proposed algorithm in order to
guarantee that our technique functions effectively under the specified DoS
attacks.
The structure of this chapter is as follows. Section 6.1 provides general
background knowledge of subset sum or knapsack problems, which are used as the
underlying technique to construct our new puzzles. Section 6.2 presents the
polynomial-time algorithm called LLL lattice reduction, which is used to break
or solve subset sum problems within polynomial time. Our new puzzle scheme,
including puzzle construction, puzzle solving, and puzzle verification, as well
as the experimental results, is provided and discussed in Section 6.3. A
performance analysis of the statistical output using CPN Tools is shown in
Section 6.4. Finally, Section 6.5 provides a discussion and some open problems
for future work.
6.1 Knapsack or Subset Sum Problems
A knapsack system associates a given set of items, each with a known weight and
value, and a knapsack which can carry items up to a certain total weight.
Slightly different from knapsack problems, subset sum problems, also called
value-independent knapsack problems [79], are a subcategory of knapsack problems.
A subset sum problem is a 0-1 knapsack problem in which the solver is asked to
decide whether there is a subset of a finite set whose elements sum to a certain
value. In general, given a finite set of items with specific weights and a target
value, the solver is required to select items whose total weight is exactly
equal to the specified target.
Definition 8 ([90]). Given a set (a_1, a_2, . . . , a_n) of positive integers,
called a knapsack set, and a positive integer s, determine whether or not there
is a subset of the a_j that sums to s. Equivalently, determine whether or not
there exist x_i ∈ {0, 1}, 1 ≤ i ≤ n, such that ∑_{i=1}^{n} a_i x_i = s.
The subset sum problem is a decision problem which is NP-complete, like the
knapsack problem. The decision problem is computationally equivalent to its
computational version, which is to determine the x_i such that
∑_{i=1}^{n} a_i x_i = s, provided that such x_i exist. Since no polynomial-time
algorithm is known for solving NP-complete problems, the knapsack problem was
identified in the 1970s as a promising underlying technique for constructing a
public-key cryptosystem.
The complexity of subset sum problems depends on two parameters, n and b. The
former, n, represents the number of decision variables, while the latter, b,
represents the number of chosen items needed to obtain the solution. In
particular, if either n or b is very small, the problem becomes very easy to
solve: the solver is able to run an exhaustive search for the solution in
practical time. Once both of them are of the same order and large, the problem
becomes much more difficult.
Considering exhaustive search on a subset sum problem of size n, the solver
needs to run through all possible subsets of the n items and check every one of
them to find whether its total weight is correct. Hence, the running time is of
order O(2^n · n), since there are 2^n subsets and, to check each subset, we need
to sum at most n elements.
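This exhaustive search can be sketched in a few lines of Python. This is our own illustration, not part of the thesis; `subset_sum_bruteforce` is a hypothetical helper name, and it returns the 0-1 decision vector of Definition 8 when a solution exists:

```python
from itertools import combinations

def subset_sum_bruteforce(weights, s):
    """Exhaustively test all 2^n subsets; O(2^n * n) in the worst case."""
    n = len(weights)
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            if sum(weights[i] for i in combo) == s:
                # return the 0-1 decision vector x with sum(a_i * x_i) = s
                return [1 if i in combo else 0 for i in range(n)]
    return None  # no subset sums to s
```

Even for modest n this quickly becomes impractical, which is what the puzzle construction later in the chapter relies on.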
In 1974, Horowitz and Sahni [53] proposed an improved exponential-time algorithm
which runs in time O(2^{n/2} · n). The algorithm splits the n elements into two
sets of half size. For each of these two sets, it calculates the sums of all
2^{n/2} possible subsets of its elements and stores them in memory of length
2^{n/2}, in decreasing order for the first half and in increasing order for the
second half. Once the elements are in suitable order, the algorithm can check
whether an item in the first set and an item in the second set sum to the target
weight s in time O(2^{n/2}). If two items with sum s are found, it stops.
Otherwise, the algorithm moves to the next item in the first set if the output
is greater than the target s, or to the next item in the second set if the output
is less than the target s. This algorithm is known as the meet-in-the-middle
algorithm for subset sum problems. It is an exponential-time algorithm and
remains the fastest known method for the general subset sum problem [90].
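The Horowitz-Sahni procedure can be sketched as follows. This is our illustrative implementation (the function names are our own); it mirrors the text: one half-list sorted in decreasing order, the other in increasing order, and the pointer movement rule based on whether the combined sum over- or undershoots s:

```python
def subset_sum_mitm(weights, s):
    """Horowitz-Sahni meet-in-the-middle: O(2^(n/2) * n) time and memory."""
    n = len(weights)
    left, right = weights[: n // 2], weights[n // 2 :]

    def all_sums(ws):
        sums = [(0, 0)]  # (subset sum, bitmask of chosen items)
        for i, w in enumerate(ws):
            sums += [(t + w, m | (1 << i)) for t, m in sums]
        return sums

    a = sorted(all_sums(left), reverse=True)  # first half, decreasing
    b = sorted(all_sums(right))               # second half, increasing
    i = j = 0
    while i < len(a) and j < len(b):
        total = a[i][0] + b[j][0]
        if total == s:
            # decode the two bitmasks into one 0-1 decision vector
            x = [(a[i][1] >> t) & 1 for t in range(len(left))]
            x += [(b[j][1] >> t) & 1 for t in range(len(right))]
            return x
        elif total > s:
            i += 1  # next (smaller) first-half sum
        else:
            j += 1  # next (larger) second-half sum
    return None
```

The memory cost of the two 2^{n/2}-entry tables is what distinguishes this from plain brute force.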
Using subset sum problems for the construction of public-key cryptosystems was
first introduced by Merkle and Hellman [91] in 1978. The construction of this
cryptosystem was based on a super-increasing sequence and a trapdoor function
which was used to transform an easy set of weights into a difficult
(pseudo-random) set. The first version of Merkle's protocol used a single
iteration of modular multiplication as a trapdoor function in order to conceal
the set of easy problems from cryptanalysts. A few years later, Merkle improved
the security of the first version by increasing the number of iterations in the
trapdoor function, because the single-iteration version had been broken by
Shamir [120]. However, Brickell [17] proposed a method to break the
multiple-iteration Merkle protocol in a system of 40 iterations. We encourage
the reader interested in more detail of the history of knapsack cryptosystems to
read the papers by Desmedt and Skwirzynski [33] and Lai [70].
6.2 LLL Lattice Reduction on Subset Sum Cryptosystems
As more implementations of subset sum cryptosystems were proposed to improve
the security of Merkle's scheme, many researchers attempted to analyse and
explore vulnerabilities of such systems. One famous tool that was used to
successfully break those approaches is the lattice reduction technique. A lattice
is a discrete version of a vector space. A reduction algorithm attempts to
determine the smallest non-zero vector in a lattice. There are several lattice
reduction algorithms, but the best method so far for breaking subset sum problems
is the LLL or L3 algorithm, developed by Lenstra et al. [75] in 1982.
L3 lattice basis reduction is a polynomial-time algorithm. The concept was
originally used to solve the shortest vector problem (SVP) and the closest
vector problem (CVP) of a lattice. Adleman [2] seems to have been the first
researcher to apply LLL lattice basis reduction as a cryptanalysis tool to
successfully break the subset sum problem. Using the LLL algorithm, users simply
treat the subset sum scheme as a lattice problem. In 1985, Lagarias and Odlyzko
[69] announced the first successful attack on a low-density1 subset sum
cryptosystem, i.e. one with density approximately below 0.6464. A few years
later, Coster et al. [31] proposed an improved version of the Lagarias and
Odlyzko technique. They claimed that their method was able to break almost all
subset sum problems having density below 0.9408 in polynomial time.
To provide a brief history of the development of L3 lattice reduction, we
broadly consider two research directions: 1) performance, i.e. improvements in
computational complexity; and 2) accuracy, i.e. the correctness of the L3
algorithm when dealing with large problem instances.
In terms of performance, Kaltofen [63] modified the calculation of the
Gram-Schmidt coefficient2 (µ) and claimed that the complexity of this technique
was O(n^6 log^2 B + n^5 log^3 B), where n is the item size and B is the item
weight in bits. This was reduced from O(n^6 log^3 B), the complexity of the
original LLL version [75] as re-analysed by Nguyen [101]. Later on, Storjohann
[128] constructed L3 problems and fed them in as a matrix. The algorithm then
performed unimodular row operations that reduced the computational complexity
to O(n^3 log B) arithmetic steps. In 2005, Nguyen [100] found an instability due
to floating point errors in Schnorr's LLL version [118] that caused the program
not to terminate in some instances. Thus, Nguyen proposed a revised version
called the L2 algorithm [100], whose complexity was O(n^5 (n + log B) log B)
arithmetic steps. In 2006, Schnorr [117] reconstructed the floating point scheme
by adopting a segmentation technique to speed up the calculation of the
Gram-Schmidt coefficient. This new implementation reduces the complexity to
O(n^3 log n), currently considered the fastest algorithm for reducing lattice
bases.
In terms of accuracy, Lagarias and Odlyzko [69] announced the first successful
attack on low-density subset sum cryptosystems, i.e. density approximately below
0.6464. In 1988, Radziszowski and Kreher [108] set up an experiment to run a
performance test of L3 on subset sum problems that have n between 26
1The density is defined as n/log(max a_n), where n is the number of items and max a_n is the maximum item value.
2The definition of this parameter will be provided in Definition 10.
and 98 with different densities. Furthermore, they replaced a reduction step
with a multiprecision algorithm to accelerate the job. The experimental result
showed that when n grows to 98, their implementation succeeded at density below
0.3, which is lower than the theoretical value proposed by Lagarias and Odlyzko
[69]. A few years later, however, Coster et al. [31] extended Lagarias and
Odlyzko's work and proposed an improved version of the technique. They claimed
that their method was able to break almost all subset sum problems having
density below 0.9408 in polynomial time. In 1991, LaMacchia [71] combined
Seysen's algorithm [119] with L3 lattice reduction to increase the accuracy of
the algorithm. They set up an experiment similar to the one used by Radziszowski
and Kreher [108]; however, their improved version was able to reach n = 106 with
density 0.3. In the meantime, Schnorr and Euchner [118] proposed a way to speed
up the reduction step by using floating point instead of integer arithmetic in
the original L3, plus adding the deep insertion technique to their scheme. In
comparison with LaMacchia [71], they claimed that their experimental result had
a higher success rate over the same data range.
The LLL lattice basis reduction algorithm has been widely used in breaking
subset sum cryptosystems [61, 102, 103] because the algorithm is able to
terminate in polynomial time. Moreover, it is highly sequential because the
underlying program requires recursive computation. From this perspective, LLL is
a promising technique to fulfill our requirement of non-parallelizability and to
thwart coordinated adversaries from distributing the client puzzle and
calculating the solution in a parallel manner. To explain LLL lattice basis
reduction, we refer to material provided in Smart's book: Cryptography: An
Introduction (2nd edition) [123].
Definition 9. Let {b_1, b_2, . . . , b_n} be a set of vectors in Z^n that are
linearly independent over R. Then the set of all integer linear combinations of
{b_1, b_2, . . . , b_n} is called an integer lattice. In a formula:

B = { ∑_{i=1}^{n} a_i · b_i | a_i ∈ Z, 1 ≤ i ≤ n }     (6.1)
Definition 10. The Gram-Schmidt algorithm transforms a given basis
{b_1, b_2, . . . , b_n} into a basis {b*_1, b*_2, . . . , b*_n} which is pairwise
orthogonal. The algorithm uses the equations

µ_{i,j} = ⟨b_i, b*_j⟩ / ⟨b*_j, b*_j⟩   for 1 ≤ j < i ≤ n     (6.2)

where µ_{i,j} is called a Gram-Schmidt coefficient, and

b*_i = b_i − ∑_{j=1}^{i−1} µ_{i,j} b*_j     (6.3)
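Equations (6.2) and (6.3) translate directly into code. The following sketch is our own illustration (the function name `gram_schmidt` is hypothetical); it computes the orthogonalized vectors b*_i together with the coefficients µ_{i,j}:

```python
def gram_schmidt(basis):
    """Compute the orthogonal basis b*_i and the coefficients mu[i][j]
    following equations (6.2) and (6.3)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    n = len(basis)
    bstar = []
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = list(map(float, basis[i]))
        for j in range(i):
            # equation (6.2): projection coefficient onto b*_j
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            # equation (6.3): subtract the projection
            v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu
```

Note that plain Gram-Schmidt only orthogonalizes; the reduction conditions of Definition 11 below are what LLL adds on top.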
Definition 11. A basis {b_1, b_2, . . . , b_m} is called LLL reduced if the
associated Gram-Schmidt basis {b*_1, b*_2, . . . , b*_m} satisfies

|µ_{i,j}| ≤ 1/2   for 1 ≤ j < i ≤ m     (6.4)

‖b*_i‖^2 ≥ (3/4 − µ^2_{i,i−1}) ‖b*_{i−1}‖^2   for 1 < i ≤ m     (6.5)
Equation (6.4), so-called size reduction, ensures that we obtain a basis in
which the vectors are short in length, while equation (6.5) guarantees that the
obtained vectors are nearly orthogonal. The LLL algorithm works as follows:

1. We examine a fixed column k, starting at k = 2.

2. If equation (6.4) does not hold, we perform size reduction by modifying the
basis B.

3. If equation (6.5) does not hold for columns k and k − 1 (meaning the obtained
vectors are not nearly orthogonal), we swap those columns and decrease the value
of k by one (unless k is already equal to two). Otherwise, we increase k by one.

4. Once k reaches m, the algorithm stops.
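The steps above can be sketched as a deliberately unoptimized implementation. This is our illustrative version, not production lattice-reduction code: it uses 0-based indexing (so the text's k = 2 becomes k = 1), the classic Lovász parameter 3/4, and recomputes the Gram-Schmidt data from scratch after every change for clarity:

```python
def lll_reduce(basis, delta=0.75):
    """Textbook LLL sketch over a list of integer row vectors."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt(b):
        n = len(b)
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = list(map(float, b[i]))
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    b = [list(v) for v in basis]
    n = len(b)
    bstar, mu = gram_schmidt(b)
    k = 1                                    # 0-based; the text's k = 2
    while k < n:
        for j in range(k - 1, -1, -1):       # size reduction, eq. (6.4)
            if abs(mu[k][j]) > 0.5:
                q = round(mu[k][j])
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)
        # Lovasz condition, eq. (6.5)
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]  # swap and step back
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b
```

The swap-and-step-back branch is exactly the repetition that makes the procedure sequential, which is the property exploited later in this chapter.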
Non-parallelization of LLL Lattice Reduction
Given the problems of finding the shortest vector and a minimal basis in the
computation of lattice vectors, there has been considerable research interest in
the development of reduced lattice bases for finding lattice vectors that are as
short as possible. The main function used to find the shortest vector is the
Gram-Schmidt algorithm (Equation 6.2). However, there is a well-known weakness
of this algorithm: because the computation of the Gram-Schmidt
orthogonalization is not combined with a reduction process, the algorithm might
not terminate in polynomial time. This could lead to inefficiency in the system.
LLL lattice reduction introduced a reduction process to overcome this weakness.
By combining the method of size reduction with the almost-orthogonal basis
condition, the running time of LLL lattice reduction can be bounded by a
polynomial. However, these additional steps are repeated until the final output
satisfies the conditions of LLL reduction (as shown in Equations 6.4 and 6.5).
As a result, the non-parallelizable characteristic of LLL lattice reduction
depends on this repeating routine of size reduction and orthogonalization. A
summary of these testing processes is illustrated in the following flowchart
(Figure 6.1).
Figure 6.1: LLL Lattice Reduction Process
Since the successful breaking of the subset sum problem using LLL reduction was
proposed, several experiments have been set up to compare the practical
performance with the theoretical limits previously discussed. The next section
provides our new subset sum puzzle construction and verification. The technique
for solving the puzzle using LLL reduction is also provided and discussed. To
evaluate the non-parallelization characteristic of the subset sum puzzles, we
set up the experiment in Section 6.3.5 and examine puzzle solving using the LLL
implementation of the Schnorr and Euchner [118] scheme provided in MAGMA
(http://magma.maths.usyd.edu.au).
6.3 Subset Sum Cryptographic Puzzles
From the critical analysis of existing cryptographic puzzles, we have identified
weaknesses in existing approaches, particularly with regard to parallelizability.
In this section, we therefore propose a new puzzle construction based on the
subset sum problem. The primary strength of this puzzle over others is that
construction and verification are very cheap for the responder, while there are
no (known) efficient parallelizable solutions.
Before introducing our new construction, we provide the notation used in the
puzzle challenge-response protocol. I represents a client and R represents a
server of the protocol. Messages used in the protocol execution carry the
subscript I or R indicating to which party they belong; for instance, ID_I
represents the identity of the client and N_R represents a nonce generated by
the server. A secret parameter is denoted by s and the puzzle difficulty by k.
The desired weight of the subset sum problem is W, while the set of candidate
weights is w_1, w_2, . . . , w_n. Finally, H(·) represents a hash operation on
arbitrary-length input messages, and LSB(·, k)_2 obtains the k least significant
bits of the output of the hash function. The puzzle construction, puzzle
verification, and puzzle solving mechanisms are discussed in the following
subsections.
6.3.1 Puzzle Construction
For the protocol specification of subset sum puzzles presented in this section,
we limit our representation to those elements of the exchanged messages that
implement our client puzzles. Our representation of the protocols is simplified;
some information or exchanged messages implemented in individual key
establishment protocols that are not relevant to the discussion of our algorithm
are deliberately omitted. For complete descriptions of the individual key
establishment protocols for DoS resistance, please refer back to the full
protocol specifications or the brief details in Chapter 3.
To establish a secure connection to a server, I sends a request containing an
identity (ID_I) along with a random nonce (N_I) to R. The server chooses a
secret parameter s randomly in order to make the output unique for each
connection, and decides on a puzzle difficulty k depending on the workload
condition. The value of k should be at least 25 (refer to Table 6.2 for a
comparison of the experimental results). This guarantees that a coordinated
adversary requires over a thousand compromised machines to brute-force search,
or over a hundred compromised machines to run the bounding algorithm3, on the
subset sum puzzles in the same time as a legitimate user performing LLL lattice
reduction. As a practical choice we suggest taking a value of k between 25 and
100; then, if weights are chosen to be of length 200 bits, we can ensure that
the generated knapsack has density at most 0.5. Practical experimental tests
supporting this proposal are described in Section 6.3.5.
I                                      R
                                       Precomputed parameters:
                                       set of random weights w_n,
                                       w_n = H(w_{n-1})
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1) send request    -- ID_I, N_I -->    choose secret s ∈_R Z_n
                                       C = LSB(H(ID_I, N_I, ID_R, N_R, s), k)_2
                                       W = ∑_{i=1}^{k} C_i · w_i
                                       puzzle = (w_1, W, k)
Figure 6.2: Subset Sum Puzzle Construction
Figure 6.2 represents the protocol specification of the puzzle construction. The
responder R computes a hash operation H(·), and applies LSB(·, k)_2 to obtain
the k least significant bits of the hash output. In practice H could be
implemented by truncating the output of SHA-256. Finally, R forms a puzzle by
computing the desired weight (W) that it wants a client to solve from a
pre-computed set of random weights (w_n). To save on protocol bandwidth, the
weights can be generated from the initial random weight w_1 by iterative
hashing. Hence, a puzzle contains the initial weight of the first item (w_1), a
desired weight (W), and the puzzle difficulty (k). The construction of the
subset sum puzzle requires only one hash operation and at most k additions.
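The construction of Figure 6.2 can be illustrated with the following sketch. All function names are our own, SHA-256 stands in for H as suggested above, and the byte encoding of the inputs is an assumption made purely for illustration:

```python
import hashlib
import secrets

def H(*parts):
    """SHA-256 over a simple byte encoding of the inputs (illustrative)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return h.digest()

def gen_weights(w1, k):
    """Hash-chain weights w_i = H(w_{i-1}); only w1 is sent on the wire."""
    ws = [w1]
    for _ in range(k - 1):
        ws.append(H(ws[-1]))
    return ws

def construct_puzzle(id_i, n_i, id_r, n_r, k):
    s = secrets.randbits(128)                 # server's secret parameter s
    digest = H(id_i, n_i, id_r, n_r, s)
    bits = int.from_bytes(digest, 'big') & ((1 << k) - 1)   # LSB(..., k)_2
    C = [(bits >> i) & 1 for i in range(k)]
    w1 = H(secrets.randbits(64))              # initial random weight
    ws = gen_weights(w1, k)
    W = sum(int.from_bytes(w, 'big') * c for w, c in zip(ws, C))
    return (w1, W, k), C, s                   # puzzle, solution C, secret s
```

The server sends only (w_1, W, k); in the stateless variant of Section 6.3.3 it keeps s and discards C and W.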
6.3.2 Puzzle Solving
To ensure that the client follows our requirement, we have to configure the puzzle
difficulty so that the efficient LLL method of solving is more efficient than brute-
force searching, even when the latter is divided amongst many parallel attacking
machines. As mentioned above in the description of puzzle construction, when k
3This bounding algorithm will be explained in more detail in Section 6.3.5.
is in the range of 25 to 100, we can expect that a puzzle will not be solved
faster by the brute-force technique. Moreover, when k is around 50 or larger,
the LLL method is more efficient than brute-force search even when the latter is
divided amongst approximately 10000 parallel machines. Figure 6.3 summarizes the
steps to solve puzzles on the client's machine.
I                                      R
                                       2) verify ID_I, N_I
                                       puzzle = (w_1, W, k)
                  <-- ID_R, N_R, puzzle --
generate w_k = H(w_{k-1})
search C′ such that
  W =? ∑_{i=1}^{k} C′_i · w_i
return C′
Figure 6.3: Subset Sum Puzzle Solving
Consider the client's job on receiving a puzzle challenge from a server. It
begins by generating a series of random weights, (w_1, w_2, . . . , w_k), by
computing a hash chain on the initial value w_1. Then, the client constructs a
basis reduction set B as follows:

b_1 = (1, 0, . . . , 0, w_1);  b_2 = (0, 1, . . . , 0, w_2);  . . . ;
b_k = (0, 0, . . . , 1, w_k);  b_{k+1} = (0, 0, . . . , 0, −W)

Finally, the client runs an LLL basis reduction algorithm [75], which is known
to be the most effective method for finding moderately short lattice vectors in
polynomial time. It is important to note that the protocol does not require the
client to use the LLL algorithm to solve the puzzles; however, using other
techniques, such as the brute-force search of traditional puzzles, might take an
unreasonably long time against our scheme, as shown later in Table 6.2.
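The lattice basis B above might be assembled as follows (an illustrative sketch; `build_basis` is a hypothetical helper name). A 0-1 solution x with ∑ x_i w_i = W then corresponds to the short lattice vector (x_1, . . . , x_k, 0), obtained as ∑ x_i b_i + b_{k+1}:

```python
def build_basis(weights, W):
    """Build the (k+1)-dimensional lattice basis b_1..b_{k+1} of
    Section 6.3.2: an identity part carrying each weight, plus -W."""
    k = len(weights)
    basis = []
    for i, w in enumerate(weights):
        row = [0] * (k + 1)
        row[i] = 1          # identity coordinate for item i
        row[k] = w          # last coordinate carries the weight
        basis.append(row)
    basis.append([0] * k + [-W])   # b_{k+1} cancels the target weight
    return basis
```

Feeding this basis to an LLL implementation (e.g. the MAGMA routine mentioned above) yields short vectors among which the 0-1 solution vector can be read off.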
In the puzzle solving, the LLL algorithm is guaranteed to return a set of
outputs of which one is a solution of the puzzle. To the best of our knowledge,
the algorithm succeeds with high probability (almost always) using the improved
LLL version of Coster et al. [31] if the subset sum problem has density below
0.9408. In addition, this improved version is a highly sequential process,
because the underlying algorithm requires recursive computation as explained in
Section 6.2, so the puzzle cannot be distributed and solved by parallel
computation.
In terms of puzzle granularity, there are two possible options for the server to
adjust the puzzle difficulty: 1) adjusting the item size (n), or 2) adjusting
the density (which will cause a change in B, because the density depends on the
maximum weight of the items). Both modifications bound the running time by a
factor of n^α · log^β B, where α and β are real numbers dependent on the version
of LLL basis reduction. Since the complexity of LLL basis reduction is a
polynomial function, we conclude that our subset sum puzzles provide polynomial
granularity.
6.3.3 Puzzle Verification
Puzzle verification is a simple and cheap task for the server, which eliminates
the risk of puzzle-solution flooding attacks. Generally, there are two options
for the verification process:

1. avoiding CPU usage: the first option minimizes CPU usage at verification
time. By storing the values of C and W corresponding to the client's identity
(ID_I, N_I), verification requires only a table lookup to compare the claimed
solution from the client with the stored solution. However, this option might
violate the fifth requirement of good puzzles (see Section 2.3). Hence, this
option could suit applications in which the verifier has low processor power and
requires very fast verification, rather than DoS-resistant key establishment
protocols such as HIP.
2. avoiding memory usage: this option eliminates memory usage prior to
verification. The server uses a stateless connection in which no information is
stored until the puzzle is solved. Once the server receives a solution, it is
required to re-generate C and W from the arriving message. This is a very cheap
computation that costs little more than a single hash computation, which is the
typical cost of verification for hash-reversal puzzles. This option is
preferable for HIP, since it provides not only a stateless connection but cheap
verification as well.
We can conclude that the upper bound on computational complexity in the
former case is O(1) for the table lookup, whereas the upper bound in the latter
case is O(k) additions, similar to the construction at the first step. Figure 6.4
shows the puzzle verification process on the server.
    I                                          R
    3) return C'
          IDI, NI, IDR, NR, puzzle, C'
        -------------------------------->
                                               option 1) check C' ?= C
                                               option 2) re-generate
                                                 C = LSB(H(IDI, NI, IDR, NR, s), k);
                                                 check C' ?= C

Figure 6.4: Subset Sum Puzzle Verification
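The stateless verification option can be sketched as follows. The parameter layout and function names are illustrative assumptions, not the exact HIP encoding: the figure's C = LSB(H(IDI, NI, IDR, NR, s), k) is read here as a k-bit chosen-item bitmask, and W as the sum of the selected weights.

```python
import hashlib

def lsb_hash(data: bytes, k: int) -> int:
    """Return the k least-significant bits of a SHA-256 digest."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest & ((1 << k) - 1)

def generate_puzzle(id_i, n_i, id_r, n_r, secret, k, weights):
    """Server side: one hash derives the chosen-item bitmask C from the
    session parameters and the server secret s; one summation of the
    selected weights gives the target W."""
    material = b"|".join([id_i, n_i, id_r, n_r, secret])
    c = lsb_hash(material, k)                      # C = LSB(H(...), k)
    w = sum(weights[i] for i in range(k) if (c >> i) & 1)
    return c, w

def verify_stateless(id_i, n_i, id_r, n_r, secret, k, weights, c_claimed):
    """Option 2: re-generate C from the arriving message and compare,
    storing no per-client state before the solution arrives."""
    c, _ = generate_puzzle(id_i, n_i, id_r, n_r, secret, k, weights)
    return c == c_claimed
```

Because C is recomputed from the message itself plus the server secret, the server stays stateless until the solution arrives, at the cost of one extra hash per verification.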
6.3.4 Comparison of Puzzle Properties
Having introduced the detail of our new approach, Table 6.1 summarizes the
characteristic of individual cryptographic puzzles in order to examine and inves-
tigate the characteristic of our new approach based on the properties of good
puzzles defined by Juels and Brainard (see Section 2.3).
1. Generation and verification is inexpensive for the responder:
The first property requires that the proposed puzzles be easy to construct
and verify on the server's machine. Puzzles constructed and verified with
a hash function, such as hash-based reversal schemes, are cheaper than
puzzles involving modular arithmetic, such as repeated-squaring or DH-based
schemes. Like the hash-based puzzles, our subset sum puzzles use a hash
function as the underlying algorithm for construction and verification. As a
result, our newly proposed scheme satisfies the first property of good puzzles.
2. Level of difficulty can easily be adjusted from trivial to impossible:
The second requirement concerns the ability of the puzzle issuer to easily
adjust the puzzle difficulty in order to control the incoming request rate
when it becomes heavily loaded. As shown in Table 6.1, all client puzzles
satisfy this property. The puzzle issuer can easily change the puzzle
difficulty by choosing a value within a reasonable range; for instance, the
server simply changes either the number of items (n) or the density of
the random weights (d) of the subset sum puzzles in order to increase or
Table 6.1: Summary of Puzzles in terms of Proposed Desirable Properties

Puzzle Type              | Easy to Construct | Easy to | No Specialised  | Solution not   | Server stores | Non-           | Granularity
                         | and Verify        | Adjust  | Client Hardware | pre-computable | no solution   | parallelizable |
Hash-based Reversal      | Yes               | Yes     | Yes             | Yes            | Yes           | No             | Exponential
Hint-Based Hash Reversal | Yes               | Yes     | Yes             | Yes            | Yes           | No             | Linear
Repeated-Squaring        | No                | Yes     | Yes             | Yes            | Yes           | Yes            | Linear
DH-based                 | No                | Yes     | Yes             | Yes            | No            | No             | Linear
Trapdoor RSA             | No                | Yes     | Yes             | Yes            | No            | No             | Linear
Trapdoor DLP             | No                | Yes     | Yes             | Yes            | No            | No             | Linear
Hash Chain               | No                | Yes     | Yes             | Yes            | Yes           | Yes            | Linear
Subset Sum               | Yes               | Yes     | Yes             | Yes            | Yes           | Yes            | Polynomial
decrease the puzzle solving time on the client's machine. The available range
of puzzle difficulty for subset sum puzzles is discussed in Section 6.3.5.
3. Solutions should not require specialised client hardware: This
requirement ensures that clients should be able to solve the client puzzles
on most platforms. Since hash functions and modular arithmetic are the
standard operations used in most cryptographic algorithms, most client
hardware can undoubtedly participate in client puzzle mechanisms. Even
though our subset sum puzzles require the user to run the LLL lattice
reduction algorithm, this function requires only a small amount of space and
is solvable on any platform, like the other constructions. Thus, clients do
not require any specialised hardware to run these cryptographic puzzles.
4. Solutions cannot be precomputed: This property guarantees that no
participant in the system can predict the puzzle solution before puzzles are
released. This prevents the server from being attacked by malicious users
who are able to generate a large number of valid puzzle solutions. Such a
circumstance might cause the server to become heavily loaded, because if
malicious users can generate valid puzzles, they can force the server to
participate in the strong authentication that imposes a high computational
burden on the server. From the table, all schemes satisfy this property
because puzzle construction requires the server to insert a secret parameter
into individual puzzles in every time slot. As a result, client puzzles are
user- and session-dependent cryptographic parameters whose solutions are
neither precomputable nor predictable.
5. Issuing a puzzle does not require the responder to store any state:
This property helps protect the server against memory exhaustion attacks.
Since the cost of generating bogus requests at the first step is very small to
negligible, malicious users can easily exhaust the server's memory and cause
service disruption if the server stores all puzzle solutions before validating
the users' identities. Hence, the server should be stateless until it can
authenticate the users. In this case, some puzzle schemes, including DH-based,
Trapdoor-RSA, and Trapdoor-DLP, require the server to store the solutions
to save computation. This is because these schemes are modular arithmetic
based puzzles: if the server needs to re-generate the solution at verification,
it might be susceptible to CPU exhaustion attacks, since the operation
requires expensive computation.
6. Puzzles should be non-parallelizable: This property is an additional
requirement, which has been examined in detail in Chapter 5. Non-parallelization
ensures that coordinated adversaries are not able to find a solution faster
than the specified interval and use these messages to mount attacks. In
addition, it prevents participants using multi-core processors from gaining
an advantage over other users. That means non-parallelizable puzzles can
provide fairness to all participants in the system. Only repeated-squaring,
hash chain, and subset sum puzzles achieve this requirement.
7. Puzzles should have fine granularity: While the first six properties
describe the efficiency of puzzles, the last property refers directly to the
resolution of puzzles. Fine-grained puzzles allow the puzzle issuer to
flexibly control the amount of work given to the client. Moreover, the
available range of puzzle difficulty for the puzzle issuer is another concern.
As demonstrated in the experiment by Feng et al. [44], finer-granularity
puzzles give the issuer a wider usable range of puzzle difficulty. From the
comparison, linear-grained puzzles provide the highest density of difficulty
levels for the clients, while exponential-grained puzzles provide the lowest
density of difficulty levels, especially as the difficulty increases.
6.3.5 Experimental Results of LLL Reduction on Subset
Sum Puzzles
To demonstrate how LLL lattice reduction and subset sum problems work
in practice on client machines, we set up an experiment creating random sets
of subset sum problems based on different criteria, including density and number
of items. In terms of hardware, we ran the LLL reduction algorithm on
a Sun Enterprise 420R computer with four UltraSPARC-II 450 MHz
CPUs and 4096 MB of RAM, running Sun Solaris 9 (Sparc). We tested
the results using the MAGMA full version, patch number V2.13-11, released on
April 5, 2007 (details at http://magma.maths.usyd.edu.au). We created MAGMA
source code to generate random sets of subset sum problems with
densities between 0.3 and 0.9 for a range of instance sizes between 5 and 100,
as well as a subset sum solving function for testing the LLL implementation
provided in MAGMA. The LLL version provided in MAGMA is based on the
floating point arithmetic version (FP-LLL) proposed by Schnorr and Euchner
[118].
The following briefly describes the methods we used to evaluate our
new scheme. Two searching methods, backtracking and a bounding
technique, are compared with the LLL lattice reduction method.
Backtracking or Brute Force Searching: This is the simplest method to
search for a solution to a subset sum problem. The backtracking algorithm
is a recursive technique, also known as exhaustive search, because it
enumerates all possible candidate subsets and checks each one against the
target. This guarantees that it will always return an optimal solution.
However, this technique consumes considerable CPU power as well as
running time. The details and pseudo-code of this algorithm are provided
in Appendix B.1.
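A minimal sketch of this exhaustive search in Python (illustrative, not the MAGMA code used in the experiment):

```python
def brute_force_subset_sum(weights, target):
    """Recursive exhaustive search: branch on including or excluding each
    item; in the worst case all 2^n subsets are visited before failure
    is reported."""
    def rec(i, total, chosen):
        if i == len(weights):
            return chosen if total == target else None
        # branch 1: include item i
        found = rec(i + 1, total + weights[i], chosen + [i])
        if found is not None:
            return found
        # branch 2: exclude item i
        return rec(i + 1, total, chosen)
    return rec(0, 0, [])
```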
Bounding Technique: To avoid time-consuming brute force searching,
pruning techniques can be used to save running time by skipping
unnecessary nodes during the search. By storing and traversing only
states whose total weight does not exceed the limit, this approach can
find a specified solution faster than brute force. The bounding technique
is one such pruning method: it specifies an upper bound on the output,
so any descendant branch whose accumulated value exceeds the bound is
eliminated from the set of possible solutions. This reduces both running
time and storage space. The details and pseudo-code of this algorithm are
provided in Appendix B.2.
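A sketch of the bounding variant, assuming non-negative weights so that any partial sum exceeding the target can be safely pruned:

```python
def bounded_subset_sum(weights, target):
    """Branch and bound: the same recursion as brute force, but any branch
    whose accumulated weight already exceeds the target is cut off, so the
    subtree below it is never explored (valid for non-negative weights)."""
    def rec(i, total, chosen):
        if total == target:
            return chosen
        if i == len(weights) or total > target:  # bound: prune overweight branches
            return None
        found = rec(i + 1, total + weights[i], chosen + [i])
        if found is not None:
            return found
        return rec(i + 1, total, chosen)
    return rec(0, 0, [])
```

The only change from the exhaustive search is the `total > target` test, which is what turns whole overweight subtrees into a single rejected node.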
LLL Lattice Reduction: This advanced tool, explained in Section 6.2, can
effectively solve subset sum problems, within polynomial time rather than
the exponential time of the two former techniques. Recently, there have been
many implementations that accelerate the running time of LLL reduction.
In our experiment, we use two: the first, Int-LLL, is the original 1982
algorithm of Lenstra et al. [75] as provided in Mathematica, while the second,
FP-LLL, developed by Schnorr and Euchner [118], is a modified version using
floating point arithmetic, provided in MAGMA.
Table 6.2 shows the experimental results comparing brute force searching,
the bounding technique, and LLL lattice reduction on puzzles of small size,
between 5 and 30 items.
Table 6.2: Backtracking and Bounding Technique Running Time of The Subset Sum Puzzle
Number Average Running Time (seconds)
of Items Backtracking Bounding Technique LLL
(n) Data 1 Data 2 Data 3 Data 1 Data 2 Data 3 Data 1 Data 2 Data 3
5 0.034 0.034 0.025 0.049 0.049 0.053 0 0 0
10 0.086 0.083 0.083 0.06 0.064 0.082 0 0 0
15 1.70 1.69 1.67 0.134 0.40 0.137 0 0 0
20 51.85 52.74 53.74 2.633 3.691 1.43 0 0 0.01
25 2320.70 2262.80 2428.60 315.743 456.97 602.81 0.01 0.01 0.01
30 – – – 1437.758 1865.001 1647.246 0.01 0.01 0.01
Evaluating the results in Table 6.2, we conclude that a reasonable puzzle size
would be at least 25 items, to prevent coordinated adversaries who control a
number of compromised machines from obtaining puzzle solutions at the same
rate as a legitimate user performing LLL lattice reduction.
Before presenting the second experimental result, we briefly explain the
reasoning behind our configuration. Investigating the preliminary results
comparing FP-LLL and Int-LLL, we found that Int-LLL works well for low
density problems with instance size below 100. Once the density grows, the
Int-LLL performance drops gradually, and it becomes ineffective on high density
examples. This behaviour was also observed by LaMacchia [71] as well as by
Schnorr and Euchner [118]. Due to this degradation of Int-LLL on large
instances and high density problems, we suggest using FP-LLL for puzzle
solving, to avoid the situation where legitimate users are unable to solve their
puzzles. The reason is that floating point arithmetic returns the Gram-Schmidt
coefficients in the reduction process more precisely than integer arithmetic, so
the FP-LLL reduction provides more correct output.
Table 6.3 shows the results for puzzles with sizes between 30 and 100. We
restrict attention to this range because we are only interested in the values
where LLL performs faster than brute force searching; otherwise the protocol
would be vulnerable to parallel attacks if the adversaries were able to run a
brute force search. The table shows that there is a good range of puzzle times
suitable for practical use. Even though a large number of items and a high
density offer the server more options for choosing the puzzle difficulty, some
ranges make the system unreliable. In particular, when the number of items (n)
is greater than 60 and the density is greater than 0.70, the puzzle solving
time exceeds an hour. Moreover, the standard deviation (STD) and variance
(VAR) of the experimental data sets with the number of items and density in
those ranges become less reliable.
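As an illustration of how a server might apply these measurements, the sketch below selects an (n, d) pair from a few of the Table 6.3 averages; the lookup-table approach and function name are our illustrative assumptions, not part of the protocol.

```python
# A few average LLL solving times (seconds), taken from Table 6.3,
# keyed by (number of items n, density d).
AVG_SOLVE_TIME = {
    (30, 0.50): 0.024,
    (40, 0.70): 0.108,
    (45, 0.75): 1.322,
    (50, 0.75): 28.156,
    (55, 0.70): 79.213,
}

def choose_difficulty(target_seconds: float):
    """Pick the (n, d) pair whose measured average solving time is closest
    to the delay the server wants to impose on clients."""
    return min(AVG_SOLVE_TIME,
               key=lambda nd: abs(AVG_SOLVE_TIME[nd] - target_seconds))
```

A heavily loaded server would raise the target delay, which walks the lookup toward larger n and higher density; an idle server would lower it again.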
6.4 Performance Analysis and Evaluation on Subset Sum Puzzles
This section provides a performance analysis and evaluation of our new puzzle
construction. Replacing the hash-based reversal scheme in HIP with our subset
sum puzzles, we set up a new formal time-based model of HIP using CPN
Tools as our formalism. To identify security flaws of the newly proposed
technique, as well as to investigate the performance of the scheme once we
integrate it into a DoS-resistant protocol, we apply the performance analysis
functions provided in CPN Tools. This is the final step of the protocol design, in order
Table 6.3: The Experimental Result of The Subset Sum Puzzle
Amount Average Running Time (seconds)
of Density
Items (n) 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90
AVG 0.018 0.020 0.022 0.024 0.024 0.025 0.025 0.026 0.027 0.027 0.028 0.028 0.029
30 STD 0.008 0 0.008 0.005 0.005 0.005 0.006 0.007 0.005 0.004 0.005 0.004 0.003
VAR 0 0 0 0 0 0 0 0 0 0 0 0 0
AVG 0.038 0.04 0.042 0.046 0.048 0.050 0.050 0.051 0.053 0.063 0.070 0.088 0.109
35 STD 0.006 0.010 0.007 0.004 0.006 0.004 0.003 0.005 0.002 0.0060 0.002 0.003 0.020
VAR 0 0 0 0 0 0 0 0 0 0 0 0.0001 0.0001
AVG 0.074 0.076 0.081 0.082 0.084 0.085 0.086 0.095 0.108 0.139 0.317 0.543 1.047
40 STD 0.013 0.012 0.016 0.013 0.011 0.015 0.013 0.022 0.016 0.028 0.180 0.116 0.485
VAR 0.0001 0.0001 0.0002 0.0002 0.0002 0.0002 0.0002 0.0004 0.0002 0.0008 0.0320 0.0130 0.2350
AVG 0.121 0.137 0.141 0.143 0.144 0.144 0.154 0.252 0.560 1.322 4.148 9.764 15.482
45 STD 0.019 0.027 0.015 0.016 0.015 0.008 0.027 0.107 0.194 0.526 1.286 2.024 4.826
VAR 0.0003 0.0007 0.0002 0.0002 0.0002 0 0.0007 0.0114 0.0377 0.2764 1.6533 4.0965 23.2855
AVG 0.235 0.239 0.239 0.240 0.241 0.256 0.458 2.156 5.518 28.156 57.650 156.143 234.414
50 STD 0.046 0.031 0.026 0.029 0.028 0.032 0.030 1.336 1.648 7.595 6.536 21.629 41
VAR 0.0021 0.0009 0.0007 0.0007 0.0008 0.0012 0.0028 1.7861 2.7146 57.6840 42.7229 467.8 1681.01
AVG 0.349 0.382 0.385 0.396 0.429 0.700 6.440 51.410 79.213 348.270 600.434 1354.410 6119.350
55 STD 0.068 0.039 0.019 0.017 0.036 0.055 0.513 3.348 2.180 32.424 35.454 38.702 539.485
VAR 0.0005 0.0015 0.0004 0.0003 0.0013 0.0030 0.2633 11.2103 4.7582 1051.34 1256.98 1497.86 291044
AVG 0.557 0.567 0.571 0.623 1.250 9.490 28.640 333.103 2311.33
60 STD 0.021 0.021 0.019 0.050 0.261 0.634 2.653 69.094 51.495 > 1hr > 1hr > 1hr > 1hr
VAR 0.0004 0.0004 0.0003 0.0024 0.0682 0.4023 7.0370 4774 2651.68
AVG 0.776 0.784 0.806 1.587 21.406 146.176 1897.18 4214.04
65 STD 0.065 0.048 0.037 0.445 5.324 21.143 192.008 532.85 > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.0004 0.0002 0.0013 0.1979 28.3412 447.009 36867.2 283929
AVG 1.100 1.129 1.138 24.551 243.314 1811.59
70 STD 0.134 0.130 0.081 3.003 54.057 318.625 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.0178 0.0168 0.0065 9.0204 2922.16 101522
AVG 1.295 1.515 7.807 521.9 2934.01
75 STD 0.076 0.162 3.050 50.455 265.215 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.0057 0.0262 9.3039 2545.72 70339
AVG 2.010 2.050 42.250 4206.990
80 STD 0.118 0.164 5.353 405.11 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.0139 0.0269 28.6547 164225
AVG 2.540 14.890 1615.420
85 STD 0.217 2.126 70.480 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.0471 4.5205 4967.15
AVG 3.440 102.383
90 STD 0.533 12.452 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 0.284 155.042
AVG 5.534 987.957
95 STD 1.173 183.663 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 1.3770 33732.2
AVG 31.801
100 STD 11.738 > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr > 1hr
VAR 137.788
to ensure that the new algorithm works effectively under fundamental security
requirements.
Performance of the algorithms is an important issue in the design and
development of systems. The major aim of the performance analysis is to
evaluate the newly proposed schemes by examining additional parameters based
on statistical outputs of the server. With this performance analysis, the
protocol designer may be able to find an optimal configuration for such systems.
Fortunately, the simulation-based features provided in CPN Tools allow a
protocol designer to integrate performance analysis functions into the model.
This simulation-based performance analysis consists of a statistical investigation
of output data, as well as verification and validation of the simulation
experiments by means of statistical analysis. In our model, we focus on
quantitative information about the performance of our new puzzle construction:
for instance, user processing time compared to server processing time, queue
delay on the server at message 1 and message 3, the number of rejected packets
of legitimate users, and the percentage of final throughput. These statistical
data can be obtained from the model during the simulation by examining the
markings of the associated places and transitions.
6.4.1 Tolerance of HIP with Subset Sum Puzzles
The major purpose of this experiment is to evaluate the tolerance of the server
under DoS attacks, as well as to investigate the server's performance when we
replace the original hash-based reversal puzzles with our new construction.
Similar to Experiments 2 and 3 in Chapter 5, we test the server's performance
with and without the ability to adjust puzzle difficulty under two different
workloads, 2.5 and 25 times greater than the server's capacity, from five types
of adversaries (four of them ad1, ad2, ad3, and ad4 as defined in Section 4.1.3,
and the fifth type Co ad3 as defined in Section 5.1.2).
The remaining construction of the protocol CPN model is equivalent to
Experiments 3 and 4, except that the processing times of the operations
constructing the puzzles at message 1 and verifying the puzzles at message
3 differ slightly. The construction of our new mechanism requires one
hash to generate the set of chosen items, plus one summation to calculate the
total weight of the problem. In comparison, the construction of hash-based
reversal puzzles requires the server only to choose a specific value of k, the
number of initial zero bits in the hash output. As a result, our construction
takes one step more than the original scheme. On the other hand, our
verification saves one step of processing time compared to the original work,
since our scheme requires only one summation to validate the puzzle, whereas
the hash-based reversal technique requires one hash operation to check the
puzzle's correctness. This yields a slight improvement in the outcome against
Type 2 and Type 3 adversaries, as shown in the experimental results in Table 6.4
and Figure 6.5.
Using the same representation as Experiments 3 and 4 in Chapter 5, Table
6.4 and Figure 6.5 show the percentage of successful legitimate connections for
two different amounts of bogus messages (Z = 50 and 500 packets) from the five
adversarial strategies. Unsurprisingly, when we prohibit the server from
adjusting the puzzle difficulty k, the percentage of messages from honest clients
hc that successfully obtain service drops drastically once adversaries increase
the number of bogus messages to the system. However, once the server has full
capability to adjust the puzzle difficulty, the final throughput recovers to a
desirable level. Interestingly, the throughput of honest clients when the server
comes under coordinated attack by Co ad3 is greatly improved compared to the
result for the original hash-based reversal scheme in the fourth experiment.
Table 6.4: Percentage of Throughput with Subset Sum Client Puzzles
Number of Bogus Messages
Adversaries 50 500
Fixed k=1 Adjustable k Fixed k=1 Adjustable k
ad1 100 100 100 100
ad2 63.60 80.65 32 48.25
ad3 54.75 70.50 9 33.20
Co ad3 50.25 71.50 7.10 35.80
ad4 85.10 99.95 75 87.45
6.4.2 Performance Analysis of Subset Sum Puzzles
As illustrated in Table 6.4 and Figure 6.5, the overall throughput of the
protocol implemented with subset sum puzzles (especially in defending against
coordinated attacks) increases in comparison with parallelizable puzzles. This
section provides an additional investigation of the performance of subset sum
puzzles, in order to guarantee that the new approach functions properly and
effectively under denial-of-service conditions. The investigated parameters include
Figure 6.5: Percentage of throughput of hc when using subset sum cryptographic puzzles: (a) attacking packets 2.5 times greater than the server's capacity; (b) attacking packets 25 times greater than the server's capacity.
adversary processing time, server processing time, queue delay on the server at
message 1 and message 3, and the number of messages whose time-out expires
before they are served. These statistical outputs are extracted from the model
during the simulation to examine the performance of subset sum puzzles with
respect to the speed of puzzle construction and verification on the server. If
the server's performance in these operations is too low, it may create a
bottleneck in the communication that adversaries can exploit to mount
denial-of-service attacks.

Table 6.5 and Table 6.6 present the statistical results obtained from the
simulation under the five different attacks, with non-adjustable and adjustable
subset sum puzzles running on HIP, respectively.
Table 6.5: Performance of Non-adjustable Subset Sum Client Puzzles
Non-adjustable Subset Sum Puzzles with k=1
Performance Factors ad1 ad2 ad3 Co ad3 ad4
50 500 50 500 50 500 50 500 50 500
ad Processing Time 50 500 75200 752000 150 1500 150 1500 100 1000
Server Processing Time 42835 176260 52905 162162 120391.99 824263.84 117990.07 648996.78 37568.04 171100.66
Time Out at MSG1 0% 0% 0% 0% 0% 0% 0% 0% 0% 0%
Max Delay at MSG1 69 519 69 519 582.32 1665.53 613.94 1537.48 69 519
Max Delay at MSG3 0 0 1489 1501 1437.36 2723.73 2840.43 3294.16 1396.85 1372.65
Table 6.6: Performance of Adjustable Subset Sum Client Puzzles
Adjustable Subset Sum Puzzles
Performance Factors ad1 ad2 ad3 Co ad3 ad4
50 500 50 500 50 500 50 500 50 500
ad Processing Time 50 500 75200 2645147.06 40638 2006222.03 150 1500 100 1000
Server Processing Time 42835 176260 69017 216540 103991.06 266340.90 107611.54 281906.65 42885 171704.86
Time Out at MSG1 0% 0% 0% 0% 0% 0% 0% 0% 0% 0%
Max Delay at MSG1 69 519 69 519 69 519 99.65 523.90 69 519
Max Delay at MSG3 0 0 1481 1494 1858.14 1716.35 2456.12 2265.87 0 1354.50
1) ad Processing Time: This gives the processing time of each adversary
type during the simulation, expressed in modeling time units. The adversaries'
processing time is counted from the moment the adversaries begin sending
request messages until they finish all operations defined for their individual
tasks, or until there are no more legitimate packets in the system. This
information can also be used to represent how much computation is spent on
the attack compared with what the server spends defending against it, as a
quantitative measurement in the cost-based analysis of Meadows' framework.
Interpreting the results in Table 6.5 and Table 6.6, Type 2 adversaries
take the longest time to perform DoS attacks on the server, since they must
follow the protocol description at step 3, including puzzle solving and signature
generation, while Type 1 adversaries spend the least computation mounting
their attack, since they only create and flood bogus requests at step 1. However,
comparing this information with the results in Figure 6.5, Type 1 adversaries
do not achieve denial of service, while Type 2 adversaries are not the most
destructive strategy either. From this point of view, this parameter (ad
Processing Time) should not be used alone in the formal analysis within the
cost-based framework to estimate the consequences of an attack; other factors
should be combined with it in the evaluation.
2) Server Processing Time: This represents the processing time the
server spends during the simulation, counted from the arrival of the first
request message in the server's queue until the server finishes processing all
legitimate packets, or until there are no more legitimate packets in the system.
This parameter has been used as a cost factor, compared against the
adversaries' processing time, to estimate the effect of DoS attacks. In the
tables, the server processing time includes the computation for both legitimate
packets and bogus messages, and is expressed in modeling time units.
From the statistical results in Table 6.5 and Table 6.6, the server spends
the most computation dealing with Type 3 and coordinated Type 3 adversaries,
while spending roughly the minimum on Type 1, Type 2, and Type 4.
Undoubtedly, the reason for the minimum computation is that the server can
detect the attacks from Type 4 adversaries and exclude them from the
communication very quickly using the puzzle mechanism, while using the
stateless connection and cookies to resist flooding attacks from Type 1
adversaries. Since Type 4 adversaries are able to deny some legitimate packets,
the total processing time is slightly less than for Type 1 adversaries in both
tables. Hence, most of the processing time in this result comes from the
handling of legitimate packets only.
Considering Type 2, Type 3, and coordinated Type 3 adversaries, one might
intuitively expect Type 2 adversaries to be the most powerful, causing the
server to spend the most computation participating in both puzzle and digital
signature verification, as well as the key confirmation in the final state.
However, this attack strategy does not cause as much damage as Type 3 and
coordinated Type 3 adversaries, because it causes denial of service to the
adversaries themselves during the simulation. This interesting situation led us
to investigate the experimental results carefully using single-step simulation in
CPN Tools. We found that it occurs because most puzzles belonging to Type 2
adversaries wait in the queue for a long time (due to the large number of bogus
messages from Type 2 adversaries) and usually expire before being served
(since Type 2 adversaries take longer than other adversaries to create message
3, leaving a shorter period for the puzzle to be verified). Unlike Type 2
adversaries, Type 3 and coordinated Type 3 adversaries do not compute the
signature at step 3, which saves a large amount of time. As a result, their
bogus messages arrive at the server quickly, and those bogus packets have a
longer period to stay in the queue before the puzzles expire. Even though the
server is able to detect the attacks at signature verification, that may be too
late to serve the legitimate users, since signature verification is an expensive
operation requiring plenty of time. This situation improves once the server
has the ability to delay incoming messages by increasing the puzzle difficulty,
as shown in Table 6.6.
3) Time Out at MSG1: This represents the number of honest clients'
packets rejected at step 1 because they waited in the queue longer than the
specified time-out period (we assume a time-out of 50000 time units in our
model). This factor indicates how effective the flooding attacks from Type 1
adversaries are; flooding is the most common and easiest denial-of-service
technique. In addition, we have found that since HIP implements a stateless
connection and cookies to thwart TCP SYN flooding attacks, it is more difficult
for adversaries to mount attacks using simple techniques. If adversaries wish to
block legitimate users at this stage, they have to shift their attention to a later
state, namely puzzle verification and signature validation, as in the attacking
strategies of Type 3, Type 4, and coordinated Type 3 adversaries.
The time-out at message 1 also reflects the efficiency of the server's puzzle
generation when dealing with large-scale flooding attacks. As shown in the
table, no messages are rejected at this state for any attacking strategy. Puzzle
generation is a very fast operation. (In our model, we configure 2 time units
for this step, since it requires 1 hash function and 1 summation for subset
sum puzzles. That means that for a flooding attack to exhaust the time-out
period of legitimate packets, adversaries would have to flood over 30000 bogus
packets to the server simultaneously.) The puzzle can therefore be a powerful
first line of defense when combined with other DoS-resistant mechanisms.
However, as shown in the columns for Type 3 and Co ad3 adversaries, both
attackers are able to push the delay to a higher level than the others, because
they can flood message 3 very fast and force the server to engage in signature
verification. If they increased the number of bogus messages further, they
might succeed in blocking service to legitimate packets at step 1 of the protocol.
4) Max Delay at MSG1 and MSG3: These two values represent the
maximum time delay of incoming packets in the queue at protocol steps 1 and 3
on the server machine. The delay at step 1 indicates the efficiency of the server
in generating client puzzles under heavy load, while the delay at step 3 reflects
the efficiency of the server in validating puzzle solutions to detect bogus
messages. Since we assume that all users initiate requests to the same server
simultaneously and the server can process these incoming requests only one at
a time, there is always a minimum delay, depending on the number of requests
at state 1, during the protocol simulation. Not surprisingly, the longer the
queueing delay at step 3, the greater the degradation of overall service in the
system. The reason is that the work at step 3, which primarily consists of
puzzle and signature verification, takes longer to execute than the work at
state 1. In addition, the delay at state 3 may increase the number of rejected
messages at step 1 if messages accumulate at step 3 at a high rate and keep
the server busy processing these packets until the requests at state 1 have
reached or exceeded the maximum time-out period.
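The step 1 delay described here has a simple closed form: with simultaneous arrivals and one server, the last request waits for every other request to be served. A minimal sketch (the service times below are illustrative placeholders, not the model's exact figures):

```python
def max_queue_delay(num_requests, service_time):
    """All requests arrive at once and are served FIFO by a single server;
    the last request in the queue waits for all the others."""
    return (num_requests - 1) * service_time

# Step 1 (puzzle generation) is cheap, step 3 (puzzle plus signature
# verification) is expensive, so the same backlog costs far more at step 3.
delay_step1 = max_queue_delay(1000, 2)    # e.g. 2 units per puzzle
delay_step3 = max_queue_delay(1000, 250)  # e.g. 250 units per verification
print(delay_step1, delay_step3)
```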
Considering the experimental result for Type 1 adversaries, their strategy
cannot cause a large delay at either state 1 or state 3. The delay at state 1
is the maximum waiting time of the last packets, since all requests arrive at
the server simultaneously and the server is able to process only one request
at a time. Thus, this strategy does not succeed in mounting an attack on the
server's machine. If Type 1 adversaries are to succeed with a DoS attack
causing rejection at step 1, they must be able to flood large numbers of
requests to keep the server busy processing their bogus messages for longer
than the time-out period of the standard TCP/IP protocol, which is 60 seconds
(60,000 units of modelling time in our configuration). Like Type 1 adversaries,
Type 4 adversaries do not achieve their attack if the number of bogus messages
is not large enough in the case of non-adjustable puzzles. However, the
percentage of successful output improves significantly once the server has
the ability to control the puzzle
difficulty (see Table 6.5 and Table 6.6, column ad4, for comparison).
From the performance results, only Type 3 and coordinated adversaries are
able to drive up the delay at both states 1 and 3 in the non-adjustable puzzle
mechanism. However, the ability to control the puzzle difficulty helps the
server decrease the delay in both states, as shown in Table 6.6. Nevertheless,
coordinated Type 3 adversaries still have an opportunity to increase the delay
if they inject more bogus messages into the system.
In summary, our subset sum puzzles function properly under at least the five
proposed attacking strategies. In particular, they prevent users from gaining
an advantage by searching for valid puzzle solutions more quickly through
parallel computation. Moreover, puzzle construction and puzzle generation in
our approach perform efficiently, as shown in the performance analysis. This
leads to improved tolerance under all of the defined denial-of-service
techniques.
6.5 Discussion and Open Problems
Our main objective has been to design non-parallelizable puzzles. Subset sum
problems with LLL lattice reduction provide this characteristic and fulfil our
requirement. However, the simplicity and performance of existing LLL schemes
are a concern when deploying them in general applications. Because several
experiments have shown that the original LLL fails on large instances and
high-density problems, several recent attempts have been made to reduce the
computation time of the size-reduction process, as well as to increase its
accuracy on large instances of subset sum problems. One example uses dynamic
approximation and heuristic techniques [8] to speed up the reduction process.
To our knowledge, the fastest LLL reduction scheme for solving subset sum
problems is the segmentation FP-LLL proposed by Schnorr [117], which reduces
the running time to O(n^3 log n).
The non-parallelizable characteristic, which thwarts coordinated adversaries
and multicore-processor users, together with the fast and simple puzzle
construction and generation, are the most interesting and important properties
of our new approach. The major disadvantage of the subset sum puzzle is its
memory requirement. By investigating instances in which the item size n exceeds
100, we found that memory is exhausted in some trials. This is because the
LLL reduction constructs an n × n lattice matrix and allocates it in reserved
memory. As a result, the practical range of puzzle difficulty would be up to
n = 100 to avoid memory exhaustion. In addition, the running time within
this range is reasonable and acceptable for most users. Comparing this bound
with hash-based reversal puzzles, the reasonable puzzle difficulty for
hash-based reversal schemes would be k between 0 and 40, which results in a
shorter puzzle than our construction.
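A rough indication of why memory becomes the bottleneck: the reduction keeps an n × n basis whose entries are multi-precision integers, and those entries grow during the size-reduction steps. A back-of-the-envelope estimate (the entry bit-lengths used here are hypothetical, chosen only to show the quadratic scaling):

```python
def lattice_memory_bytes(n, entry_bits):
    """Approximate storage for an n-by-n matrix of entry_bits-bit integers."""
    return n * n * entry_bits // 8

# Entries start at roughly n bits for dense instances but can grow
# substantially during reduction; both columns scale quadratically in n.
for n in (50, 100, 200):
    initial = lattice_memory_bytes(n, n)          # fresh basis
    inflated = lattice_memory_bytes(n, 50 * n)    # hypothetical growth factor
    print(n, initial, inflated)
```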
Since we are concerned with the problem of coordinated attacks and with
fairness in solving cryptographic puzzles on multi-core processors, we focus
on removing the parallelizable characteristic rather than on achieving linear
granularity. Even though our new scheme has coarser granularity than other
hint-based schemes, it does offer polynomial granularity, which is better and
easier to control than the exponential granularity of the hash-based reversal
puzzles recently used in some client puzzle protocols. From this point of
view, puzzle granularity is an interesting open problem for the research
community, namely the development of more fine-grained puzzles.
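The difference between the granularity classes can be made concrete by looking at how the client's expected work jumps when the server raises the difficulty by one step (illustrative cost functions; the exponents are only meant to show the shape of each class):

```python
# Expected client work as a function of difficulty, one function per class.
linear      = lambda k: k            # hint-based hash reversal: O(k)
polynomial  = lambda n: n ** 3       # lattice-reduction-style cost (illustrative)
exponential = lambda k: 2 ** k       # hash-based reversal: O(2^k)

def step_ratio(cost, d):
    """Relative jump in work when difficulty goes from d to d + 1."""
    return cost(d + 1) / cost(d)

print(step_ratio(linear, 20))       # fine-grained: +5% per step
print(step_ratio(polynomial, 20))   # still gradual
print(step_ratio(exponential, 20))  # each step doubles the work
```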
Table 6.7 summarizes the characteristics of the new puzzle. Comparing our
construction with repeated squaring, we find that, although repeated squaring
offers non-parallelism and linear-grained control to the user, it suffers from
a high computation cost at construction time, which means that a server using
these puzzles would be susceptible to flooding attacks. As a result, an
interesting open problem for the research community is to explore techniques
for new puzzles providing both non-parallelization and linear granularity.
Table 6.7: Comparison of our new Subset Sum Puzzles with the others
Puzzle TypeServer’s Cost
Client’s CostNon
Granularity
Pre-Compute Construction VerificationParallel
Hash-based Reversal - 1 hash 1 hash O(2k) hash No Exponential
Hint-Based Hash Reversal - 1 hash 1 hash O(k) hash No Linear
Repeated-Squaring - 2 mod mul 2 mod mul O(k) mod mul Yes Linear
DH-based - 1 mod exp 1 comparison O(k) mod exp No Linear
Trapdoor RSA1 mod exp 3 mod mul
1 comparison O(k) mod exp No Linear1 mod mul 2 additions
2 mod mul
Trapdoor DLP 1 mod exp 1 comparison O(k) mod exp No Linear3 additions
Hash Chain - k hash 1 comparison O(k) hash Yes Linear
Subset Sum n hash 1 hash 1 hash L3 reduction Yes Polynomial
6.6 Summary
Cryptographic puzzles, or client puzzles, offer a promising way to defeat
resource-exhaustion denial-of-service (DoS) attacks. By including this
DoS-resistant mechanism in cryptographic protocols, the responder ensures that
the client spends sufficient resources before the responder commits its own.
The major aim of using client puzzles is to penalize dishonest users attempting
to perform illegal actions by forcing them to carry out additional computation.
Client puzzles are also used to delay incoming requests when the responder
comes under DoS attack, and to deal with resource exhaustion attacks in which
adversaries attempt to exhaust the connection queues that the responder can
handle.
Recently, a variety of client puzzles have been proposed to counter DoS
attacks in computer networks. Having examined and analysed the practical use
of existing cryptographic puzzles, we have found that most of them are
susceptible to coordinated attacks because they do not support the
non-parallelization characteristic. To address this lack of non-parallelism
in existing client puzzles, this chapter has proposed a new puzzle construction
based on the subset sum problem. The primary strength of this puzzle over
others is non-parallelization. Another key strength is that puzzle construction
and verification require simple and fast computation on the server, as shown
in the performance analysis (see Section 6.4). Evaluation under five
performance parameters and the percentage of successful service shows that our
new approach slightly improves throughput in comparison with the hash-based
reversal technique of Experiments 3 and 4 in Chapter 5. This is because puzzle
verification in our new algorithm can be done slightly more quickly than in
hash reversal techniques, even though puzzle generation in the new scheme
requires one more step than the original technique.
On the other hand, a weakness of this construction is that it does not
provide linear granularity for puzzle difficulty adjustment. As a result, an
open problem for future research is to explore alternative techniques for
improving the new puzzle mechanism so that it supports all the defined
properties shown in Table 6.1. Such a puzzle would be more effective for
defending the server against DoS and DDoS attacks.
Chapter 7
Conclusion and Future Work
The main objective of this research has been to design denial-of-service-
resistant mechanisms in key establishment protocols, and to extend formal
security techniques into a new implementation for examining the security and
performance of DoS-resistant key establishment protocols. We hope that the new
construction, as well as the proposed DoS-resistant strategies and techniques,
will help the computer security community to develop effective protocols in
the future. The following is a summary of our work in relation to these
objectives.
7.1 Summary of Contribution
We revisit the five aims of the research originally listed in Chapter 1.
1) To identify denial of service vulnerabilities in key establishment
protocols.
• We study the denial of service characteristics in Chapter 2, and seven key
establishment protocols implementing DoS-resistant mechanisms in Chap-
ter 3. We identify vulnerabilities relating to resource exhaustion at the
authentication process of key establishment protocols.
• In Chapter 5, we identify vulnerabilities in the client puzzles used in
HIP to counterbalance computational usage and to delay incoming client
requests when the server becomes heavily loaded.
2) To propose improved strategies and techniques to resist denial of
service attacks in key establishment protocols.
• In Chapter 2, we introduce strategies and techniques to help the server
improve its resistance to denial-of-service attacks. Some of these strategies
and techniques have never been implemented, or are otherwise used
inappropriately, in currently proposed key establishment protocols.
• Chapter 3 provides a critical analysis and evaluation of the implementation
of the DoS-resistant strategies and techniques proposed in Chapter 2. Only
three candidates, Modified IKE [83], the Host Identity Protocol (HIP) [95],
and Lee & Fung's protocol [72], use techniques supporting all three
DoS-resistant strategies.
• Applying the time-based model and a coordinated attacking strategy in
Chapter 5, we identify a vulnerability in hash-based puzzles relating to their
parallelizable characteristic. We propose to replace this construction with a
generic model of non-parallelizable client puzzles. The experimental result
shows that an enhanced protocol can help the server to resist coordinated
attacks.
3) To extend Meadows’ cost-based framework for modelling and analysing
key establishment protocols that have DoS-resistant built-in mecha-
nisms.
• In our evaluation in Chapter 3, we identify key strengths of HIP and se-
lect this protocol for modelling and analysis with Meadows’ cost-based
framework in Chapter 4. Because HIP implements client puzzles as the
defensive tool to counteract adversaries attempting to flood bogus mes-
sages, the cost-based framework identifies that HIP can effectively resist
four specified types of DoS adversaries.
• Using CPN Tools for simulating DoS-resistant protocols in Chapter 5, we
model and simulate the coordinated adversarial strategy, as well as validate
the analysis performed by Smith et al. [125] using the Meadows framework.
• From the experimental result of a HIP time-based model in Chapter 5,
we identify a limitation of Meadows’ cost-based framework for modelling
and identifying resource exhaustion vulnerabilities resulting from
coordinated, parallelized attacks.
4) To evaluate and examine DoS-resistant protocols by using formal
simulation techniques.
• In Chapter 4, we examine a HIP cost-based model by constructing it in
Coloured Petri Nets and using simulation techniques to identify resource
exhaustion vulnerabilities. We use simulation techniques rather than
mathematical proofs for modelling and verifying such complex protocols, owing
to the large state spaces and the varied interactions between protocol
entities.
• Having identified an attack on HIP related to puzzle difficulty and
puzzle-solving time, Chapter 5 introduces an alternative technique to
construct a HIP time-based model in CPN Tools. From this construction, we
identify one major flaw in client puzzles through which coordinated
adversaries might be able to mount a strong attack [12] on the server.
5) To design new DoS-resistant mechanisms and evaluate their security
and performance using a formal security approach.
• In Chapter 6, we study subset sum problems and the algorithm used to solve
such problems in polynomial time. This solving algorithm, LLL lattice
reduction, has the key benefit of a non-parallelizable, polynomial
solving-time characteristic.
• We design a new client puzzle mechanism based on the construction of subset
sum problems. The key strengths of our scheme are cheap construction and
verification, as well as its non-parallelizable characteristic. Experimental
results under DoS attacks and the performance evaluation confirm that our
subset sum puzzles satisfy the fundamental requirements of good puzzles.
Integrating our mechanism into HIP helps the server resist the five specified
adversarial strategies, as shown in the experimental results. In addition, the
performance analysis confirms that our puzzle construction and verification
can be performed as quickly as hash-based puzzles, so that this new approach
does not introduce other attacks on the protocol entities.
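To make the cost figures concrete, a much-simplified sketch of how such a puzzle could be built from a hash function is shown below. This is our illustration, not the exact construction of Chapter 6: the item derivation, the subset-selection rule, and all sizes are simplified assumptions; it only mirrors the cost profile in Table 6.7 (n hashes precomputed, one hash at construction, one summation-based check at verification).

```python
import hashlib
import secrets

def H(*parts):
    """Hash arbitrary values to a large integer (SHA-256)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int.from_bytes(h.digest(), "big")

def precompute_items(seed, n, bits=32):
    """Off-line: n hashes give the public weights a_1..a_n."""
    return [H(seed, i) % (1 << bits) for i in range(n)]

def make_puzzle(items, client_nonce):
    """Per request: one hash selects a hidden subset; its sum is the target."""
    sel = H("select", client_nonce)
    bits = [(sel >> i) & 1 for i in range(len(items))]
    return sum(a for a, b in zip(items, bits) if b)

def verify(items, target, solution_bits):
    """One summation checks the client's claimed subset against the target."""
    return sum(a for a, b in zip(items, solution_bits) if b) == target

seed, nonce = secrets.token_hex(8), secrets.token_hex(8)
items = precompute_items(seed, 16)
target = make_puzzle(items, nonce)
# An honest client would recover solution_bits with LLL lattice reduction;
# here we cheat and reuse the generating subset just to exercise verify().
sel = H("select", nonce)
solution = [(sel >> i) & 1 for i in range(16)]
print(verify(items, target, solution))  # True
```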
7.2 Open Problems and Future Work
Several problems in this research area remain open. These open problems could
be useful starting points for future work. The following are some possible
directions for extension:
1. Since this research focuses only on two-party key establishment protocols,
one promising direction is to extend the formal analysis to multi-party key
establishment protocols to explore their denial-of-service vulnerabilities.
Several multi-party key establishment protocols have been introduced over the
past several years. Few of them have been formally analysed and verified,
while the problem of denial-of-service attacks in multi-party protocols is
almost entirely unexplored;
2. In our taxonomy of client puzzles, we only evaluate client puzzles based
on processor-bound functions for puzzle construction. In the past few years,
new puzzle constructions based on memory-bound functions have been developed
by Dwork et al. [40], Abadi et al. [1], Naor [96], Coelho [30], and Doshi et
al. [37]. These schemes have been proposed to overcome the large speed
disparity between high-end processors and lightweight devices such as mobile
phones or personal digital assistants (PDAs). It will be interesting to
discover whether this different class of puzzles can be applied in the ways
explored in this thesis;
3. The simplicity and performance of existing LLL schemes is another concern
for deploying subset sum puzzles in general applications. Several experiments
have shown that the original LLL fails on large instances (n over 100) and
high-density problems (d over 0.90). Combining it with other algorithms, such
as dynamic approximation and heuristic techniques [8], to speed up the
reduction process and make it more accurate is one possible direction to
investigate;
4. Satisfying both non-parallelization and linear granularity in the same
construction is another open area for future research. As stated by Naor [96],
new implementations of puzzles based on memory-bound functions could be immune
to parallel attacks. As a result, further research should consider algorithms
relying on memory, so that a new construction would be able not only to
provide non-parallelization and linear granularity, but also to reduce the
effect of CPU-speed differences between clients' machines; and
5. The criteria for selecting puzzle difficulty (k) under different workload
conditions would be another interesting direction for future research. This
can lead to improved overall throughput when the server has a wider available
range of puzzle difficulties and is able to select these values more
appropriately, to control the incoming rate of requests precisely. An example
technique for adjusting puzzle difficulty based on the server's workload was
proposed by Beal and Shepard [9] for deamplifying the impact of DoS attacks
in HIP. Another technique, based on an analysis of queue management to prevent
DoS attacks, has been introduced in [13, 68, 135]. Such mechanisms would be an
interesting research area not only for organising incoming requests and
effectively managing the server's queue resources, but also for assigning
puzzle difficulty to users appropriately, to throttle incoming requests once
the server becomes fully loaded.
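One very simple instance of such a criterion is the three-tier selection used in our CPN model (Figure A.4 in Appendix A), sketched below; the concrete k values and thresholds here are placeholders rather than tuned recommendations:

```python
def choose_difficulty(queue_length, limit, k_min=1, k_med=20, k_max=40):
    """Pick puzzle difficulty from the server's current backlog.
    The thresholds mirror the decision in our CPN model; the k values
    are illustrative placeholders, not tuned recommendations."""
    if queue_length < limit:          # lightly loaded: cheap puzzles
        return k_min
    elif queue_length < 5 * limit:    # moderate load
        return k_med
    else:                             # heavy load / suspected flooding
        return k_max

print(choose_difficulty(3, 10), choose_difficulty(30, 10), choose_difficulty(100, 10))
```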
Appendix A
HIP Model
[Figure A.1 diagram: the top-level page of the HIP CP-net, in which Initiator
and Responder subpages exchange MSG1–MSG4 through network places, with fusion
places for the resource count, rejected messages, and expired puzzles.
Annotations recovered from the page: k is varied, and the responder should
select k to force the initiator to spend more time than the time used in MSG3R
and MSG4R; the model HIPtimedVariedKhash.cpn (saved from HIPtimedVaried05.cpn)
is configured so that (1) adversaries flood only once (no token is returned to
the first step), (2) no time is added to at1 at the initiator's MSG2_I and
MSG3_I (HC_ProcessingTime = time at step 1), and (3) the cost is 1 time unit
at MSG1 and 2 at MSG3 (faster than subset sum at MSG1 but slower at MSG3).
The initiator's resource place is initialised with 20`hc ++ 500`ad2.]
Figure A.1: Top Page of HIP CP-Net
Figure A.2: Colour Sets and Values
[Figure A.3 diagram: the initiator subpage that generates requests; the sender
place is initialised with one token each for the honest client hc and the
adversaries ad1, ad2, ad3, ad4, and Co_ad3, timestamped on creation.]
Figure A.3: Initiator Subpage for Request Generation
[Figure A.4 diagram: the responder subpage that generates puzzles; a Decision
transition assigns difficulty k_min, k_med, or k_max from the current request
count n1 (k_min if n1 < k, k_med if n1 < 5 * limit, and k_max otherwise).]
Figure A.4: Responder Subpage for Puzzle Generation
[Figure A.5 diagram: the initiator subpage that solves puzzles; transitions
Hash1, Hash2, and Hash3 charge 1, 500, and 5000 time units for k_min, k_med,
and k_max puzzles respectively, while Type 3 and Co_ad3 adversaries bypass
solving by reusing a puzzle solution from an adversary pool.]
Figure A.5: Initiator Subpage for Puzzle Solving
Appendix B
Pseudo-code of Subset Sum
Experiments
B.1 Backtracking Algorithm
Backtracking, or brute-force searching, is a recursive technique that gathers
all possible candidate solutions and checks each against the target of the
subset sum problem. This guarantees that it will always find a solution
whenever one exists. However, this technique consumes considerable CPU power
and running time because it must test all 2^n candidates, where n is the
number of items. The following is pseudocode for the backtracking algorithm.
Backtracking Algorithm [79]
• A solution is a vector (b1, . . . , bn), where bi ∈ {0, 1}
• There are 2^n possible solutions, stored at the leaf nodes of a binary
tree
• Check whether the current value equals the target M or not
• Otherwise, backtrack to the other choices
Backtracking Pseudocode

function SubsetSum(int k, int CurrentSum)
/* global array b[1 . . . n] holds the binary selection vector */
/* CurrentSum = sum of previously selected values */
  if (k = n)
    { if CurrentSum = M then return "Solution has been found" }
  else { /* explore all remaining leaf nodes */
    b[k+1] = 0;
    SubsetSum(k+1, CurrentSum);
    b[k+1] = 1;
    SubsetSum(k+1, CurrentSum + x[k+1]);
  }
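A direct transcription of this pseudocode into runnable Python might look as follows (our sketch: subset_sum_backtrack returns the selection vector b when some subset of x sums to M, and None otherwise):

```python
def subset_sum_backtrack(x, M):
    """Brute-force search over all 2^n selection vectors b (cf. the
    pseudocode above); returns b as a list, or None if no subset works."""
    n = len(x)
    b = [0] * n

    def search(k, current_sum):
        if k == n:
            return current_sum == M
        # left branch: exclude item k
        b[k] = 0
        if search(k + 1, current_sum):
            return True
        # right branch: include item k
        b[k] = 1
        return search(k + 1, current_sum + x[k])

    return list(b) if search(0, 0) else None

print(subset_sum_backtrack([3, 5, 9, 14], 17))  # [1, 0, 0, 1] (3 + 14 = 17)
```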
B.2 Branch & Bound Algorithms
The bounding technique, or branch & bound algorithm, avoids the time-consuming
brute-force search by adding pruning rules. Pruning saves running time by
skipping unnecessary nodes during the search: only states whose total weight
does not exceed the target are stored and explored. The algorithm also
maintains a bound on what each branch can still achieve, so any descendant
whose value exceeds the target, or whose remaining items cannot raise the sum
to the target, is eliminated from the set of possible solutions.
Branch & Bound Algorithms [97]
• When a node represents a subset whose sum equals the desired
solution M, terminate
• When a node represents a subset whose sum exceeds the desired
solution M, backtrack (do not enter its subtrees; go back to the
parent node instead)
• Keep a variable r that gives the sum of the numbers not yet
considered
• When moving to a right child, check that (CurrentSum + r ≥ M);
otherwise, backtrack
Branch & Bound Pseudocode

initialize(PQ);
v = root of Tree;
best value = value(v);
insert(PQ, v);
while (PQ ≠ empty)                  /* remove node with best branch */
  remove(PQ, v);
  if (branch(v) is better than best value)
    for (each child u of v)
      if (value(u) is better than best value)
        best value = value(u);
      if (branch(u) is better than best value)
        insert(PQ, u);
return best value;
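The same pruning idea, specialized to subset sum, can be sketched in Python as a depth-first variant of the algorithm above (our illustration; it assumes positive item weights and M > 0, and returns one subset summing to M, or None):

```python
def subset_sum_branch_and_bound(x, M):
    """Depth-first search with the two bounds described above: prune when
    the partial sum exceeds M, and when even taking every remaining item
    cannot reach M. Assumes positive item weights and M > 0."""
    n = len(x)
    # remaining[k] = x[k] + ... + x[n-1], the variable r in the text
    remaining = [0] * (n + 1)
    for k in range(n - 1, -1, -1):
        remaining[k] = remaining[k + 1] + x[k]

    def dfs(k, current_sum, chosen):
        if current_sum == M:
            return chosen                     # subset found
        if k == n or current_sum > M or current_sum + remaining[k] < M:
            return None                       # bound violated: backtrack
        # left branch includes item k, right branch excludes it
        return (dfs(k + 1, current_sum + x[k], chosen + [x[k]])
                or dfs(k + 1, current_sum, chosen))

    return dfs(0, 0, [])

print(subset_sum_branch_and_bound([3, 5, 9, 14], 17))
```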
Bibliography
[1] M. Abadi, M. Burrows, M. Manasse, and T. Wobber. Moderately Hard,
Memory-bound Functions. In the 10th Annual Network and Distributed
System Security Symposium, San Diego, California, USA, 6–7 Feb 2003.
[2] L. M. Adleman. On Breaking Generalized Knapsack Public Key Cryptosys-
tems. In the 15th Annual ACM Symposium on Theory of Computing, pages
402–412, New York, USA, 1983.
[3] W. Aiello, S. M. Bellovin, M. Blaze, J. Ioannidis, O. Reingold, R. Canetti,
and A. D. Keromytis. Efficient, DoS-resistant, secure key exchange for
internet protocols. In The 9th ACM Conference on Computer and Com-
munications Security, pages 48–58, Washington, DC, USA, 2002. ACM
Press.
[4] I. Al-azzoni. The Verification of Cryptographic Protocols using Coloured
Petri Nets. Master of Applied Sciences Thesis, Department of Software
Engineering, McMaster University, Ontario, Canada, 2004.
[5] T. Aura and P. Nikander. Stateless Connections. In International Confer-
ence on Information and Communications Security, pages 87–97, Beijing,
China, Nov 1997. Springer-Verlag.
[6] T. Aura, P. Nikander, and J. Leiwo. DoS-resistant authentication with
client puzzles. In Security Protocols Workshop 2000, pages 170–181. Cam-
bridge, Apr 2000.
[7] A. Back. Hashcash - A Denial of Service Counter-Measure, 2002.
http://citeseer.ist.psu.edu/back02hashcash.html.
[8] W. Backes and S. Wetzel. Heuristics on Lattice Basis Reduction in Practice.
Journal of Experimental Algorithmics (JEA), 7:1–21, 2002.
[9] J. Beal and T. Shepard. Deamplification of DoS Attacks via Puzzles.
Available: http://web.mit.edu/jakebeal/www/Unpublished/puzzle.pdf, 2004.
[10] N. Behki and S. E. Tavares. An Integrated Approach to Protocol Design.
In the 1989 IEEE Pacific Rim Conference on Computers, Communications
and Signal Processing, pages 244–248, 30 May–2 June 1989.
[11] J. Billington, G. E. Gallasch, and B. Han. A Coloured Petri Net Approach
to Protocol Verification. In Lectures on Concurrency and Petri Nets, pages
210–290, 2003.
[12] V. Bocan and M. F. Cosma. Adaptive Threshold Puzzles. In EUROCON
2005 - The International Conference on Computer as a Tool, Belgrade,
Serbia and Montenegro, 22–24 Nov 2005.
[13] D. Boteanu, J. M. Fernandez, J. McHugh, and J. Mullins. Queue Man-
agement as a DoS Counter-Measure? Lecture Notes in Computer Science:
Information Security, 4779/2007:263–280, 2007.
[14] C. Boyd and E. Foo. Off-Line Fair Payment Protocols Using Convert-
ible Signatures. In ASIACRYPT ’98: Proceedings of the International
Conference on the Theory and Applications of Cryptology and Information
Security, pages 271–285, London, UK, 1998. Springer-Verlag.
[15] C. Boyd and W. Mao. On a Limitation of BAN Logic. In Advances in
Cryptology, pages 240–247. Springer-Verlag, 1994.
[16] C. Boyd and A. Mathuria. Protocols for Authentication and Key Estab-
lishment. Springer-Verlag, June 2003.
[17] E. F. Brickell. Breaking Iterated Knapsacks. In Advances in Cryptology:
Proceedings of CRYPTO 84, volume 196/1985. Springer Berlin / Heidel-
berg, 1985.
[18] J. Burns and C. J. Mitchell. A security scheme for resource sharing
over a network. Computers & Security, 9(1):67–75, 1990.
[19] M. Burrows, M. Abadi, and R. Needham. A Logic of Authentication. In
ACM Transaction on Computer Systems, pages 18–36, New York, USA,
Feb 1990. ACM Press.
[20] C. Castelluccia, E. Mykletun, and G. Tsudik. Improving Secure Server
Performance by Re-balancing SSL/TLS Handshakes. In ASIACCS ’06:
Proceedings of the 2006 ACM Symposium on Information, computer and
communications security, pages 26–34, New York, NY, USA, 2006. ACM
Press. http://eprint.iacr.org/.
[21] V. Cerf, Y. Dalal, and C. Sunshine. Specification of Internet Transmission
Control Protocol. Informational RFC 0675, IETF, Dec 1974.
http://www.ietf.org/rfc/rfc0675.txt.
[22] Computer Emergency Response Team (CERT). Denial-of-Service At-
tack via ping. [Online]. Available: http://www.cert.org/advisories/
CA-1996-26.html [Accessed: August 2004], 1996.
[23] Computer Emergency Response Team (CERT). TCP SYN Flooding
and IP Spoofing Attacks. [Online]. Available: http://www.cert.org/
advisories/CA-1996-21.html, 1996.
[24] Computer Emergency Response Team (CERT). Smurf Attack. [Online].
Available: http://www.cert.org/advisories/CA-1998-01.html, 1998.
[25] Computer Emergency Response Team (CERT). Tribe Flood Network.
[Online]. Available: http://www.cert.org/incident_notes/IN-99-07.html,
1999.
[26] Computer Emergency Response Team (CERT). mstream Distributed
Denial of Service Tool. [Online]. Available: http://www.cert.org/
incident_notes/IN-2000-05.html, 2000.
[27] Computer Emergency Response Team (CERT). Distributed Denial of
Service (DDoS). [Online]. Available: http://www.cert.org/homeusers/
ddos.html, 2004.
[28] Computer Emergency Response Team (CERT). Denial-of-Service Attack
Articles and Reports. [Online]. Available: http://www.cert.org/nav/
allpubs.html, 2006.
[29] M. C. Chan, E. Chang, L. Lu, and P. S. Ngiam. Effect of Malicious Syn-
chronization. In 4th International Conference on Applied Cryptography and
Network Security (ACNS’06), pages 114–129, Singapore, Jun 6–9 2006.
[30] F. Coelho. Exponential Memory-Bound Functions for Proof of Work Pro-
tocols. Technical report 2005/356, 2005. http://eprint.iacr.org/2005/
356.
[31] M. J. Coster, A. Joux, B. A. LaMacchia, A. M. Odlyzko, C. Schnorr, and
J. Stern. Improved low-density subset sum algorithms. Computational
Complexity, 2(2):111–128, 1992.
[32] Wei Dai. Crypto++ 5.2.1 Benchmarks. [Online]. Available: http://www.
eskimo.com/~weidai/benchmarks.html, 2004.
[33] Y. G. Desmedt and J. K. Skwirzynski. What Happened to the Knapsack
Cryptographic Scheme? Performance Limits in Communication Theory
and Practice. NATO ASI Series E: Applied Sciences, 142:113–134, 1988.
[34] W. Diffie and M. E. Hellman. New Directions in Cryptography. IEEE
Transactions on Information Theory, IT-22(6):644–654, Nov 1976.
[35] W. Diffie, P. C. van Oorschot, and M. J. Wiener. Authentication and Au-
thenticated Key Exchanges. Designs, Codes and Cryptography, 2(2):107–
125, 1992.
[36] D. Dolev and A. C. Yao. On the Security of Public Key Protocols. IEEE
Transactions on Information Theory, 29(2):198–208, 1983.
[37] S. Doshi, F. Monrose, and A. D. Rubin. Efficient Memory Bound Puzzles
using Pattern Database. In 4th International Conference on Applied Cryp-
tography and Network Security (ACNS’06), pages 98–113, Singapore, Jun
6–9 2006.
[38] E. M. Doyle. Automated Security Analysis of Cryptographic Protocols
using Coloured Petri Net Specification. Master of Science Thesis, Depart-
ment of Electrical and Computer Engineering, Queen’s University, Ontario,
Canada, 1996.
[39] W. Dresp. Security Analysis of the Secure Authentication Protocol by
Means of Coloured Petri Nets. In Communications and Multimedia Secu-
rity, 2005.
[40] C. Dwork, A. Goldberg, and M. Naor. On Memory-Bound Functions for
Fighting Spam. In the 23rd Annual International Cryptology Conference
(CRYPTO 2003), pages 426–444, Aug 2003.
[41] C. Dwork and M. Naor. Pricing via Processing or Combatting Junk Mail.
In the 12th Annual International Cryptology Conference on Advances in
Cryptology, pages 139–147. Springer-Verlag, 1992. Lecture Notes in Com-
puter Science, volume 740.
[42] W. Feng. The case for TCP/IP Puzzles. In ACM SIGCOMM 2003 Work-
shops, pages 322–327, Karlsruhe, Germany, 25-27 Aug 2003. ACM Press.
[43] W. Feng, E. Kaiser, W. Feng, and A. Luu. The Design and Implementation
of Network Layer Puzzles. In Proceedings of IEEE Infocom 2005, 13-17 Mar
2005.
[44] W. Feng, A. Luu, and W. Feng. Scalable, Fine-grained Control of Network
Puzzles. Technical report 03-015, Computer Science & Electrical Engineer-
ing, Oregon Health & Science University, 2003.
[45] C. Fung and M. C. Lee. A Denial-of-Service Resistant Public-key Authen-
tication and Key Establishment Protocol. In Performance, Computing,
and Communications Conference 21st IEEE International, pages 171–178,
Phoenix, AZ, USA, 3-5 Apr 2002.
[46] Y. Gao. Efficient Trapdoor-Based Client Puzzle System against DoS At-
tacks. Master of Computer Science by Research, School of Information
Technology and Computer Science, University of Wollongong, Wollongong,
Australia, 2005.
[47] V. D. Gligor. A note on denial-of-service in operating systems. IEEE
Trans. Softw. Eng., 10(3):320–324, 1984.
[48] L. Gong and P. Syverson. Fail-Stop Protocols: An Approach to Designing
Secure Protocols. In Proceedings of the 5th International Working Confer-
ence on Dependable Computing for Critical Applications (DCCA-5), pages
44–55, 1995.
[49] B. Groza and D. Petrica. On Chained Cryptographic Puzzles. In 3rd
Romanian-Hungarian Joint Symposium on Applied Computational Intelli-
gence (SACI), Timisoara, Romania, May 25-26 2006.
[50] Y. Han. Automated Security Analysis of Internet Protocols using Coloured
Petri Net Specification. Master of Science Thesis, Department of Electrical
and Computer Engineering, Queen’s University, Ontario, Canada, 1996.
[51] D. Harkins and D. Carrel. The Internet Key Exchange (IKE). Standards
Track RFC 2409, IETF, Nov 1998. http://www.ietf.org/rfc/rfc2409.
txt.
[52] G. Horng. A Secure Server-Aided RSA Signature Computation Protocol
for Smart Cards. Journal of Information Science and Engineering, 16:847–
855, 2000.
[53] E. Horowitz and S. Sahni. Computing Partitions with Applications to the
Knapsack Problem. Journal of ACM, 21(2):277–292, 1974.
[54] A. Hussain, J. Heidemann, and C. Papadopoulos. A Framework for Clas-
sifying Denial of Service Attacks-Extended. In the 2003 conference on
applications, technologies, architectures, and protocols for computer com-
munications, pages 99–110, Karlsruhe, Germany, Aug 2003.
[55] M. Jakobsson and A. Juels. Proofs of work and bread pudding protocols.
In the IFIP TC6 and TC11 Joint Working Conference on Communications
and Multimedia Security (CMS 99), Sep 1999.
[56] K. Jensen. An Introduction to the Practical Use of Coloured Petri Nets.
In Petri Nets (2), pages 237–292, 1996.
[57] K. Jensen. Coloured Petri Nets: Basic Concepts, Analysis Methods and
Practical Use, volumes 1–3. Springer-Verlag, 2nd edition, Apr 1997.
[58] K. Jensen. An introduction to the Theoretical Aspects of Colored Petri
Nets. In Workshop on the Applicability of Formal Models, Aarhus, Den-
mark, 2 Jun 1998.
[59] J. B. Jorgensen and L. M. Kristensen. Verification of Coloured Petri Nets
Using State Spaces with Equivalence Classes. In Proceedings of Work-
shop on Petri nets in System Engineering – Modelling, Verification, and
Validation, pages 20–31. Department of Computer Science, University of
Hamburg, 1997.
[60] J. B. Jorgensen and L. M. Kristensen. Computer Aided Verification of
Lamport’s Fast Mutual Exclusion Algorithm Using Coloured Petri Nets
and Occurrence Graphs with Symmetries. In IEEE Transactions on Par-
allel and Distributed Systems, 1999.
[61] A. Joux and J. Stern. Lattice Reduction: A Toolbox for the Cryptanalyst.
Journal of Cryptology: the journal of the International Association for
Cryptologic Research, 11(3):161–185, 1998.
[62] A. Juels and J. Brainard. Client Puzzles: A Cryptographic Defense Against
Connection Depletion Attacks. In the 1999 Network and Distributed System
Security Symposium (NDSS ’99), pages 151–165, San Diego, California,
USA, Feb 1999. Internet Society Press, Reston.
[63] E. Kaltofen. On the Complexity of Finding Short Vectors in Integer Lat-
tices. In EUROCAL ’83: Proceedings of the European Computer Alge-
bra Conference on Computer Algebra, pages 236–244, London, UK, 1983.
Springer-Verlag.
[64] P. Karn and W. A. Simpson. Photuris: Session-Key Management Protocol.
Experimental RFC 2522, IETF, Mar 1999. http://www.ietf.org/rfc/
rfc2522.txt.
[65] C. Kaufman. Internet Key Exchange (IKEv2) Protocol. Inter-
net Draft, IETF, Sep 2004. http://www.ietf.org/internet-drafts/
draft-ietf-ipsec-ikev2-17.txt.
[66] C. Kaufman, R. Perlman, and B. Sommerfeld. DoS protection for UDP-
based protocols. In CCS ’03: Proceedings of the 10th ACM conference
on Computer and Communications Security, pages 2–7, Washington D.C.,
USA, 2003. ACM Press.
[67] R. A. Kemmerer, C. Meadows, and J. Millen. Three Systems for Crypto-
graphic Protocol Analysis. Journal of Cryptology, 7:79–130, 1994.
[68] S. Khan and I. Traore. Queue-based Analysis of DoS Attacks. In IAW
'05: Information Assurance Workshop, 2005. Proceedings from the Sixth
Annual IEEE SMC, pages 266–273, NY, USA, June 15-17 2005. IEEE.
[69] J. C. Lagarias and A. M. Odlyzko. Solving low-density subset sum prob-
lems. Journal of the ACM (JACM), 32(1):229–246, 1985.
[70] Ming Kin Lai. Knapsack Cryptosystems: The Past and the Future. Inter-
net, Mar 2001. http://www1.ics.uci.edu/~mingl/knapsack.html.
[71] Brian A. LaMacchia. Basis Reduction Algorithms and Subset Sum Prob-
lems. Master Thesis, Department of Electrical Engineering and Computer
Science, Massachusetts Institute of Technology, 1991.
[72] M. C. Lee and C. K. Fung. A Public-Key Based Authentication and Key Es-
tablishment Protocol Coupled with a Client Puzzle. Journal of the Amer-
ican Society for Information Science and Technology, 54(9):810–823, Jun
2003.
[73] J. Leiwo, P. Nikander, and T. Aura. Towards network denial of service
resistant protocols. In the 15th Annual Working Conference on Information
Security (SEC2000), volume 175, Beijing, China, Aug 2000.
[74] J. Lemon. Resisting SYN flood DoS attacks with a SYN cache. In the
BSDCon 2002, pages 89–97, Berkley, CA, USA, 11-14 Feb 2002.
[75] A. K. Lenstra, H. W. Lenstra Jr., and L. Lovasz. Factoring Polynomials
with Rational Coefficients. Mathematische Annalen, 261(4):515–534, Dec
1982.
[76] G. Lowe. Some new attacks upon security protocols. In 9th IEEE Com-
puter Security Foundations Workshop, pages 162–169, Kenmare, 10-12
June 1996.
[77] M. Ma. Mitigating denial of service attacks with password puzzles. In Inter-
national Conference on Information Technology: Coding and Computing,
2005. (ITCC 2005), volume 2, pages 621–626, 2005.
[78] W. Mao. Time-Lock Puzzle with Examinable Evidence of Unlocking Time.
In Proceedings of the 7th International Workshop on Security Protocols,
pages 95–102, London, UK, 2000. Springer-Verlag.
[79] S. Martello and P. Toth. Knapsack problems: algorithms and computer
implementations. John Wiley & Sons, Inc., New York, NY, USA, 1990.
[80] T. Matsumoto, K. Kato, and H. Imai. Speeding Up Secret Computations
with Insecure Auxiliary Devices. In Proceedings of the 8th Annual Interna-
tional Cryptology Conference on Advances in Cryptology, pages 497–506,
1988.
[81] K. Matsuura and H. Imai. Protection of Authenticated Key-Agreement
Protocol against a Denial-of-Service Attack. In International Symposium
on Information Theory and Its Applications (ISITA’98), pages 466–470,
Mexico City, Mexico, Oct 1998.
[82] K. Matsuura and H. Imai. Resolution of ISAKMP/Oakley key-agreement
protocol resistant against denial-of-service attack. In Internet Workshop
1999 (IWS 99), pages 17–24, Osaka, Japan, 18-20 Feb 1999.
[83] K. Matsuura and H. Imai. Modification of Internet Key Exchange Resistant
against Denial-of-Service. In Pre-Proceeding of Internet Workshop 2000
(IWS2000), pages 167–174, Feb 2000.
[84] D. Maughan, M. Schneider, M. Schertler, and J. Turner. Internet Security
Association and Key Management Protocol (ISAKMP). Standards Track
RFC 2408, IETF, Nov 1998. http://www.ietf.org/rfc/rfc2408.txt.
[85] C. Meadows. The NRL Protocol Analyzer: An overview. Journal of Logic
Programming, 26(2):113–131, 1996.
[86] C. Meadows. Formal Verification of Cryptographic Protocols: A Survey.
Technical report, Center for High Assurance Computer Systems, Naval
Research Laboratory, 1998.
[87] C. Meadows. A Formal Framework and Evaluation Method for Network
DoS. In the 12th IEEE Computer Security Foundations Workshop, pages 4–
13, Mordano, Italy, 28-30 Jun 1999. IEEE Computer Society Press, Wash-
ington, DC.
[88] C. Meadows. A Cost-Based Framework for Analysis of DoS in Networks.
Journal of Computer Security, 9(1/2):143–164, Jan 2001.
[89] C. Meadows. Formal methods for cryptographic protocol analysis: emerg-
ing issues and trends. IEEE Journal on Selected Areas in Communications,
21(1):44–54, Jan 2003.
[90] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryp-
tography. CRC Press, Inc., 1996.
[91] R. C. Merkle. Secure communications over insecure channels. Commu-
nications of the ACM, 21(4):294–299, Apr 1978.
[92] J. Mirkovic, S. Dietrich, D. Dittrich, and P. Reiher. Internet Denial of
Service: Attack and Defense Mechanisms. Prentice Hall, 2005.
[93] J. Mirkovic and P. Reiher. A Taxonomy of DDoS attack and DDoS Defense
Mechanisms. ACM SIGCOMM Computing and Communication Review,
34(2):39–53, 2004.
[94] H. C. Moon. A study on formal specification and analysis of cryptographic
protocols using Colored Petri Nets. Master of Science Thesis, Institute of
Science and Technology, Kwangju University, Korea, 1998.
[95] R. Moskowitz. The Host Identity Protocol (HIP). Internet Draft,
Internet Engineering Task Force, Feb 2007. http://www.ietf.org/
internet-drafts/draft-ietf-hip-base-07.txt.
[96] M. Naor. Moderately Hard Functions: From Complexity to Spam Fighting.
In FST TCS 2003: Foundations of Software Technology and Theoretical
Computer Science, volume 2914/2003, pages 434–442. Springer Berlin /
Heidelberg, 2003.
[97] R. E. Neapolitan and K. Naimipour. Foundations of Algorithms Using
C++ Pseudocode. Jones & Bartlett, 2004.
[98] R. M. Needham. Denial of Service. In the 1st ACM conference on Computer
and communications security, pages 151–153, Fairfax, Virginia, USA, Dec
1993.
[99] B. B. Nieh and S. E. Tavares. Modelling and Analysis of Cryptographic
Protocols using Petri Nets. In Advances in Cryptology, pages 275–295,
Berlin, Germany, 1993.
[100] P. Q. Nguyen and D. Stehle. Floating-Point LLL Revisited. In Advances
in Cryptology EUROCRYPT 2005, volume 3494/2005, pages 215–233,
Aarhus, Denmark, May 22-26 2005. Springer Berlin / Heidelberg.
[101] P. Q. Nguyen and D. Stehle. LLL on the average. In ANTS-VII: Proceedings
of the 7th International Symposium on Algorithmic Number Theory, Berlin,
Germany, 23–28 July 2006. Springer-Verlag.
[102] P. Q. Nguyen and J. Stern. Lattice Reduction in Cryptology: An Update.
In ANTS-IV: Proceedings of the 4th International Symposium on Algorith-
mic Number Theory, pages 85–112, London, UK, 2000. Springer-Verlag.
[103] P. Q. Nguyen and J. Stern. The Two Faces of Lattices in Cryptology.
Lecture Notes in Computer Science, 2146:146–180, 2001.
[104] R. Oppliger. Protecting Key Exchange and Management Protocols against
Resource Clogging Attacks. In Proceedings of the IFIP TC6 and TC11
Joint Working Conference on Communications and Multimedia Security
(CMS’99), pages 163–175, Sep 1999.
[105] T. Peng, C. Leckie, and K. Ramamohanarao. Survey of Network-Based
Defense Mechanisms Countering the DoS and DDoS Problems. ACM Com-
puting Surveys, 39(1), 2007.
[106] C. A. Petri. Kommunikation mit Automaten. PhD Thesis, Institut für
Instrumentelle Mathematik, Schriften des IIM, 1962.
[107] M. O. Rabin. Digitalized signatures and public-key functions as in-
tractable as factorization. Technical report, Cambridge, MA, USA, 1979.
[Online]. Available: ftp://ftp-pubs.lcs.mit.edu/pub/lcs-pubs/tr.
outbox/MIT-LCS-TR-212.ps.gz.
[108] S. Radziszowski and D. Kreher. Solving subset sum problems with the
L3 algorithm. Journal of Combinatorial Mathematics and Combinatorial
Computing, 3:49–63, 1988.
[109] V. Ramachandran. Analyzing DoS-Resistance of Protocols Using a Cost-
Based Framework. Technical Report YALEU/DCS/TR-1239, Department
of Computer Science, Yale University, Jul 2002.
[110] A. V. Ratzer, L. Wells, H. M. Lassen, M. Laursen, J. F. Qvortrup, M. S.
Stissing, M. Westergaard, S. Christensen, and K. Jensen. CPN Tools for
Editing, Simulating, and Analysing Coloured Petri Nets. In Proceedings
of the 24th International Conference on Applications and Theory of Petri
Nets (ICATPN 2003), volume 2679, pages 450–462. Springer-Verlag, Jun
23-27 2003.
[111] R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital
signatures and public-key cryptosystems. Communications of the ACM,
21(2):120–126, 1978.
[112] R. L. Rivest, A. Shamir, and D. A. Wagner. Time-lock Puzzles and Timed-
release Crypto. Technical Report TR-684, Massachusetts Institute of Tech-
nology, Cambridge, MA, USA, 10 Mar 1996.
[113] A. Rubin and P. Honeyman. Formal Methods for the Analysis of Authen-
tication Protocols. Technical Report 93-7, Center for Information Technol-
ogy Integration (CITI), Oct 1993.
[114] P. Ryan and S. Schneider. Modelling and Analysis of Security Protocols.
Addison Wesley, 2001.
[115] C. P. Schnorr. Efficient Identification and Signatures for Smart Cards. In
Advances in Cryptology EUROCRYPT 89, pages 688–689. Springer Berlin
/ Heidelberg, 1989.
[116] C. P. Schnorr. Efficient Signature Generation by Smart Cards. Journal of
Cryptology, 4(3):161–174, 1991.
[117] C. P. Schnorr. Fast LLL-type Lattice Reduction. Information and Com-
putation, 204(1):1–25, 2006.
[118] C. P. Schnorr and M. Euchner. Lattice Basis Reduction: Improved Prac-
tical Algorithms and Solving Subset Sum Problems. In FCT ’91: Proceed-
ings of the 8th International Symposium on Fundamentals of Computation
Theory, pages 68–85, London, UK, 1991. Springer-Verlag.
[119] M. Seysen. Simultaneous Reduction of a Lattice Basis and its Reciprocal
Basis. Combinatorica, 13(3):363–376, Sep 1993.
[120] A. Shamir. A polynomial-time algorithm for breaking the basic Merkle-
Hellman cryptosystem. IEEE Transactions on Information Theory,
30(5):699–704, Sep 1984.
[121] W. A. Simpson. IKE/ISAKMP Considered Harmful. USENIX ;login:,
24(6), Dec 1999.
[122] W. A. Simpson. Photuris: Design Criteria. In SAC ’99: Proceedings of
the 6th Annual International Workshop on Selected Areas in Cryptography,
pages 226–242, London, UK, 2000. Springer-Verlag.
[123] N. Smart. Cryptography: An Introduction. Mcgraw-Hill College, 2nd edi-
tion, 2006.
[124] J. Smith. Denial of Service: Prevention, Modelling, and Detection. PhD
thesis, Information Security Institute (ISI), Queensland University of Tech-
nology, Brisbane, QLD, Australia, June 2006.
[125] J. Smith, J. M. Gonzalez Nieto, and C. Boyd. Modelling Denial of Service
Attacks on JFK with Meadows’s Cost-Based Framework. In Fourth Aus-
tralasian Information Security Workshop (AISW-NetSec 2006), volume 54,
pages 125–134. CRPIT series, 2006.
[126] T. Socolofsky and C. Kale. A TCP/IP Tutorial. Informational RFC 1180,
IETF, Jan 1991. http://tools.ietf.org/html/rfc1180.
[127] D. M. Stal, S. E. Tavares, and H. Meijer. Backward State Analysis of Cryp-
tographic Protocols Using Coloured Petri Nets. In Workshop on Selected
Areas in Cryptography (SAC’ 94), Queen’s University, Kingston, Ontario,
Canada, May 1994.
[128] A. Storjohann. Faster Algorithms for Integer Lattice Basis Reduction.
Technical Report 249, Departement Informatik, ETH Zürich, 1996.
http://citeseer.ist.psu.edu/storjohann96faster.html.
[129] M. Tatebayashi, N. Matsuzaki, and D. B. Newman Jr. Key Distribution
Protocol for Digital Mobile Communication Systems. In CRYPTO ’89:
Proceedings on Advances in cryptology, pages 324–334, New York, NY,
USA, 1989. Springer-Verlag New York, Inc.
[130] The Department of Computer Science, University of Aarhus, Denmark.
CPN Tools: Computer Tool for Coloured Petri Nets. [Online]. Available:
http://wiki.daimi.au.dk/cpntools/cpntools.wiki, 2004.
[131] S. Tritilanunt, C. Boyd, E. Foo, and J. M. Gonzalez Nieto. Using Coloured
Petri Nets to Simulate DoS-resistant Protocols. In 7th Workshop and Tuto-
rial on Practical Use of Coloured Petri Nets and the CPN Tools (CPN’06),
Aarhus, Denmark, Oct 24 - 26 2006.
[132] L. v. Ahn, M. Blum, N. J. Hopper, and J. Langford. CAPTCHA: Using
Hard AI Problems for Security. In Advances in Cryptology EUROCRYPT
2003, volume 2656/2003, page 646. Springer Berlin / Heidelberg, 2003.
[133] X. Wang and M. K. Reiter. Defending Against Denial-of-Service Attacks
with Puzzle Auctions (Extended Abstract). In the 2003 IEEE Symposium
on Security and Privacy (SP’03), pages 78–92, Berkeley, CA, USA, 11-14
May 2003.
[134] X. Wang and M. K. Reiter. Mitigating bandwidth-exhaustion attacks using
congestion puzzles. In CCS ’04: Proceedings of the 11th ACM conference
on Computer and communications security, pages 257–267, New York, NY,
USA, 2004. ACM Press.
[135] Y. Wang, C. Lin, Q. Lin Li, and Y. Fang. A Queueing Analysis for the
Denial of Service (DoS) Attacks in Computer Networks. Computer Net-
works: The International Journal of Computer and Telecommunications
Networking, 51(12):3564–3573, 2007.
[136] B. Waters, A. Juels, J. A. Halderman, and E. W. Felten. New Client Puzzle
Outsourcing Techniques for DoS Resistance. In the 11th ACM Conference
on Computer and Communications Security (CCS 2004), pages 246–256,
Washington DC, USA, 2004. ACM Press.