International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 67-74
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
Responding to Identity Crime on the Internet
Eric Holm
The Business School, University of Ballarat
PhD Candidate at Bond University
P.O. Box 663, Ballarat VIC 3353
ABSTRACT
This paper discusses the unique challenges
of responding to identity crime. Identity
crime involves the use of personal
identification information to perpetrate
crimes. As such, it involves the use of
personal and private information for
illegal purposes. This article considers
two significant issues that obstruct
responses to this crime: first, the
reporting of the crime, and second, the
issue of jurisdiction. The paper also
presents an
exploration of some responses to identity
crime.
KEYWORDS
Identity crime, regulation, online fraud,
jurisdiction, personal information.
1 INTRODUCTION
Certain information is worth money
whereas other information is worthless
when it comes to crimes involving
identity [1]. The information that is
valuable to the identity criminal is that
which can be converted into gain,
typically by way of fraudulent activity
[2]. Certain information, particularly
personal identification, provides
opportunities for identity criminals to
either obtain credit under false pretenses
or to impersonate another for like
purposes [2]. Personal identification
particulars include social security details,
driver’s license details, passport details
as well as other information [2]. The
theft of identity particulars may be the
catalyst for a number of crimes that
follow. The offenses that may follow can
include fraud, money laundering,
organized crime and even acts of
terrorism [2].
There are two variations of identity crime
committed by an identity criminal. The
first is the assumption of parts of
another's identity to perpetrate the crime
[3]. This involves the criminal using
parts of the victim's identity to obtain
goods or services, for instance [3]. The
second is the wholesale assumption of an
identity, which involves the criminal
effectively becoming the victim [4]. This
involves, for example, establishing lines
of credit while impersonating the victim.
Each type of identity crime has costly
implications for an individual [5].
Identity crime is reliant upon information
[6]. Much of the information used for
identity crimes is obtained through
various means on the Internet. A study
conducted in the United States on
identity crime found that the most
common method used for obtaining
information was to purchase the
information on the Internet [7]. However,
information is also obtained by other
means such as through committing
computer crimes including spam, scams
and phishing [8] as well as other crimes.
Importantly it is the availability of
personal information that is the enabler
for identity crime [2]. Sometimes this
information can simply be acquired
through the interpersonal exchanges that
take place on the Internet, such as,
through social networking [9].
The misuse of information for identity
crime occurs typically when information
is used for gain [4]. However, not all
identity crime leads directly to a financial
gain and there may be other motivations
for committing such crime, like avoiding
criminal sanctions [10]. Therefore, the
impetus for such crime is dependent
upon the motivation of the offender [10].
There is debate as to whether identity
crime is more prominent on the Internet
[11] or elsewhere. Interestingly,
sometimes components of this crime may
take place both online and offline [12].
However, an important reason why so
much identity crime takes place on the
Internet is that a significant amount of
personal identification information is
stored on the Internet as well as there
being ample targets [13].
The exposure to risk of an individual
online is dependent on many things.
Information is exchanged on the Internet
not only by individuals, but by
governments and corporations [14].
While it is argued that the decision to
interact on the Internet is associated with
exposing oneself to greater risk, [15]
ultimately a latent risk subsists for all
information on the Internet [16]. Indeed,
it seems that the greater the amount of
personal information on the Internet, the
greater the risk a person has of becoming
a victim of identity crime.
Information is used in a variety of ways
to perpetrate identity crime. According
to the Social Security website, a
common example is the misuse of social
security numbers in the United States
[17]. This personal identifier is a key
identification detail that can be used in
conjunction with a person’s name to
establish identity. This information is
used by the identity criminal to use or
establish an identity for crime [17]. Other
notable personal identification
information includes passports, birth
dates and bank details, but is not limited
to these [18].
2 THE ISSUE OF RELIABLE DATA
The losses attributable to identity crime
can be measured by monetary losses [19]
but a number of additional offenses can
be committed once personal information
is stolen. In Australia it has been
suggested that identity crime is one of
the more prominent emerging types of
fraud [20]. However, one of the
challenges of recording this crime is
that identity crimes are at times
subsumed into the recorded incidence of
other crimes such as fraud [21]. The
misreporting of this crime tends to distort
the reliability of data that pertain to the
measurement of identity crime [22].
Importantly, different ways of reporting
the crime result in different responses to
such crimes [2].
In 2012, the Australian Bureau of
Statistics (ABS) estimated that
approximately three per cent of the
Australian population had become
victims of identity crime [23]. The most
significant implication of this crime was
financial [24]. In 2006, the losses arising
from identity crime in the United
Kingdom economy were $1.7 billion [25].
The United Kingdom figure took into
account the cost of preventative
measures as well as the costs associated
with the prosecution of cases. In many
statistics relating to identity crime, the
wider losses attributable to identity crime
are not considered despite being
significant.
Conservative estimates put the costs
associated with identity crime at tens of
billions of dollars worldwide [26]. However, it is
difficult to gather an accurate view of the
total cost attributable to this crime
because instances of identity crime are
not always reported. For instance, the
ABS suggests that only 43 per cent of
victims of crimes involving credit and
bank cards in 2007 were prepared to
report this crime to police [27]. This
suggests a significant proportion of
identity crime relating to credit and debit
cards is not reported [28]. This distorts
the statistics on the true incidence of
identity crime.
The direct monetary losses arising from
identity crime are more easily
quantifiable but the indirect losses
remain more difficult to measure. A cost
rarely considered is the indirect
psychological cost to the victim [29].
Likewise, there are losses attributable to
lost trust that can also be difficult to
measure [30]. In addition, there is a
hidden cost associated with reputational
damage that is similarly difficult to
reflect in monetary terms partly due to
the intangible nature of this loss [31].
These indirect costs are also rarely
considered in the statistics that pertain to
identity crime.
There are costs associated with the
preventative measures [32] taken to
reduce identity crime that are not
contemplated when measuring the impact
of this crime.
Indeed, there are numerous preventative
steps that can be taken to overcome the
threats of cyber-crime. For instance there
may be preventative measures taken
through technological means [33] as well
as physical security measures [34]. These
have a cost associated with them and this
cost is seldom incorporated into the
overall costs associated with crime [35].
There are broader implications of
identity crime on national economies that
have scarcely been researched [36]. What
remains difficult to ascertain is how
extensive the impact of this crime is
globally [2]. Where losses are sustained,
these are not recorded on any global
register of losses but rather are recorded
domestically [37]. Further, there is no
central repository of data pertaining to
identity crime; the data gathered are both
varied and dispersed [38]. This makes
the reporting of accurate global statistics
on this crime problematic. A central
repository of information that pertains to
victimization arising from identity crime
would be most useful for law
enforcement efforts [39].
3 THE ISSUE OF JURISDICTION
There is no central body that controls
information dissemination on the
Internet. The Internet itself is dispersed
and thereby transcends all jurisdictional
boundaries. This presents difficulties in
responding to identity crime in terms of
the coordination of investigation and
enforcement efforts [40]. Furthermore,
the regulatory responses to identity crime
also vary depending on the emphasis
placed upon them domestically [41].
There are variations in
the way in which identity crime is dealt
with. As most responses to identity crime
are dealt with through domestic criminal
sanctions, these differences reflect the
domestic priorities placed on the
responses to this crime.
Contrasts can be made in the regulatory
responses to this crime. For example, in
the United States, the penalties
applicable under federal law are fifteen
years’ imprisonment and a fine [42].
Comparatively, Australian offenses
under Commonwealth Law have
penalties with a maximum of five years’
imprisonment [43]. Likewise, differences
also exist in regard to the restorative
functions of these laws. The variations
in penalties, as well as in other functions,
reflect the differing importance placed
on this crime.
Similar variations in regulatory responses
exist within the states and territories of
Australia. While one state may react to
the crime of dealing with and possession
of identification material with
imprisonment for five years [44] another
may prescribe a penalty of seven years
[45]. Furthermore, other jurisdictions,
such as the Northern Territory, do not
have offenses that recognize identity
crime as the core offense and instead
they deal with this through other offenses
[46]. There are also varied responses to
restorative justice.
The issue of jurisdiction stems from the
ability of the state to bring an action
against the identity criminal. Historically,
the effects doctrine has been adopted as a
way to justify a state taking action
against the individual [47]. This doctrine
applies where the harm is linked to the
state [47]. This approach has been
utilized as a justification for taking
action to apply criminal sanctions
[48]. This doctrine provides for a
state to exercise jurisdiction outside its
physical location [49]. For identity
crime, this could enable a state to bring
an action against an offender in another
state, provided it could be ascertained
that an effect of the actions of such an
offender caused a crime to be committed
within the domestic territory [50].
Another challenge in regulating identity
crime is that the responses to this crime
are dealt with by domestic laws and
therefore the responsibility for
investigation and enforcement belongs to
the state concerned [51]. This brings into
question the domestic authority's
capacity to deal with such crime, which
may be constrained by the scarcity of
resources available for law enforcement
[52]. A consequence of this is that
important technical, social and legal
information pertaining to that crime is
often not shared [53]. However,
regulatory responses are not the only
way in which this crime can be dealt
with and these will be further explored
in the outline of responses to identity
crime that follow.
4 THE RESPONSES TO IDENTITY
CRIME
4.1 Regulatory responses
A number of developments
internationally will positively influence
the regulatory response to identity crime.
An important recent development is the
Council of Europe Convention on
Cybercrime which is an international
agreement supporting and enhancing the
investigation and enforcement of
domestic law relating to cyber-crime
internationally [54]. The importance of
this convention for identity crime lies in
the enhancements it enables in the
facilitation and coordination of law
enforcement efforts against cyber-crimes
on the Internet [54]. Signatories to such
Conventions typically improve their
interrelations with other countries
specifically in terms of investigation and
cooperation efforts [55]. While this
Convention does not specifically mention
identity crime, it nonetheless will impact
on this crime through the enhancements
in cooperation of law enforcement efforts
around cyber-crimes [55].
Jurisdictional boundaries are
problematic when applied to the
Internet [56]. However, the Convention
on Cybercrime has received attention
because it prompts cooperation and
reliance on domestic laws in dealing
with jurisdictional issues around
cybercrime [57]. This has a positive
influence on the way cyber-crimes are
dealt with domestically [58]. Australia is
working toward accepting this
convention [59].
4.2 Technological responses
This paper has not sought to provide an
exhaustive coverage of any specific
responses to identity crime but rather it
traverses the key responses that have
been identified in the literature. In
relation to technological responses,
authentication provides an important way
of identifying the individual [33] with
whom one conducts transactions on
the Internet [60]. Another technological
response that is helpful in preventing the
unauthorized interception of data is
encryption [61]. However these
technological responses remain
susceptible to the more sophisticated
forms of attack [61]. Another weakness
of such responses lies with the human
beings who operate such measures [62].
Authentication is an important response
to identity crime because this crime
involves the assumption of another
identity and authentication aims to
prevent such actions [63]. Therefore, this
technological response facilitates the
security around the ascertainment of
identity [33]. This is an important
response in dealing with identity crime
because it has a focus on preventing the
assumption of identity which is a key
aspect of this crime.
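As an illustrative sketch only (not a specific scheme from the literature cited here), a challenge-response exchange shows how authentication can prevent the assumption of another's identity: the claimant proves knowledge of a shared secret without ever transmitting it. All names are hypothetical; Python standard library only:

```python
import hashlib
import hmac
import os

# Hypothetical secret shared in advance between user and service;
# in practice it would be established at enrolment, never sent in the clear.
SHARED_SECRET = b"example-shared-secret"

def make_challenge() -> bytes:
    """Server issues a fresh random nonce for each login attempt."""
    return os.urandom(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """Client proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """Server recomputes the HMAC and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)
assert not verify(challenge, respond(challenge, b"wrong"), SHARED_SECRET)
```

Because each challenge is a fresh nonce, a criminal who records one exchange cannot replay it to impersonate the victim later.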
Encryption is a technological solution
that protects data transfer when
information is exchanged on the Internet
[61]. Encryption provides a protective
measure for data transferred between
connected computers [64]. Therefore this response
plays an important role in the prevention
of identity crime through enhancing data
security [65].
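To illustrate the idea, the toy stream cipher below XORs data with a key-derived byte stream so that only a holder of the key can recover the plaintext. This is a deliberately simplified sketch, not production cryptography; real systems use vetted ciphers such as AES, typically within TLS:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an unbounded byte stream from the key (toy derivation)."""
    for i in count():
        # Hash the key together with a block counter.
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR the data with the key stream.
    Encryption and decryption are the same operation."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

message = b"personal identification details"
ciphertext = xor_cipher(message, b"session-key")
# Only the same key recovers the original message.
assert xor_cipher(ciphertext, b"session-key") == message
```

An eavesdropper who intercepts the ciphertext in transit learns nothing useful without the key, which is the protective role encryption plays against identity criminals harvesting data.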
4.3 Education as a response
There seems to be a lack of appreciation
of the vulnerabilities arising from
identity crime. Individuals have become
the focus of this crime because they are
the easier target [66]. Furthermore,
individuals are becoming the more
common target due to their lack of
knowledge regarding identity crime [67].
It has been suggested that a key
weakness in cyber security is the human
and computer interface [68]. Indeed,
there are behavioral factors that influence
the way in which individuals exchange
information across the Internet [69].
Therefore it is important to understand
this relationship and to work on
enhancing knowledge with respect to the
vulnerabilities arising from this crime
[67]. However, the educative process
cannot be focused only on the
organization or institution; rather, it
needs to focus also on the individual [70].
The computer and human interface is
important in understanding cyber-crimes.
While the computer can have robust
methods of security, the human has
become the weak link in the overall
security in place to prevent cybercrime
[71]. The human aspect of this interface
means that humans are now the target
due to their vulnerabilities [68]. It is for
this reason that educative responses to
identity crime need to be expansive.
The discussion of responses to identity
crime aims to identify the more
prominent responses to this crime and is
far from exhaustive. There are additional
responses such as governmental and
organizational responses that have not
been discussed in this paper [72].
However, this presents opportunities for
further research.
5 CONCLUSION
Reflecting on the title of this
paper, it is clear that there are challenges
in responding to identity crime. The
responses outlined cannot be effective in
isolation. All responses to this
crime rely on data relating to it. Then
there are the issues pertaining to
jurisdiction. Interestingly, the issues of
data and jurisdiction remain closely
intertwined. The lack of data relating to
identity crime has a stifling effect on the
response to this crime. The catalyst for
change in relation to responding to this
crime will need to come from
improvements in the reporting of the
crime, which will then prompt more
work in resolving the jurisdictional
issues. In the absence of this, the true
incidence of identity crime will remain
concealed and jurisdictional boundaries
will continue to present barriers in
responding to this crime.
6 REFERENCES
1. Forester, T., Morrison, P.: Computer Ethics:
Cautionary Tales and Ethical Dilemmas in
Computing. MIT Press, Boston MA (1994).
2. Saunders, K., Zucker, B.: Counteracting
Identity Fraud in the Information Age: The
Identity Theft and Assumption Deterrence
Act., Cornell Journal of Law and Public
Policy 8, 661-666 (1999).
3. Office of the Australian Information
Commissioner,
http://www.oaic.gov.au/publications/reports/
audits/document_verification_service_audit
_report.html.
4. Australian Federal Police,
http://www.afp.gov.au/policing/fraud/identit
y-crime.aspx.
5. Public Interest Advocacy Centre,
http://www.travel-
net.com/~piacca/IDTHEFT.pdf.
6. Department of Justice,
http://www.cops.usdoj.gov/files/ric/Publicati
ons/e05042360.txt.
7. Anderson, K.: Who are the victims of
identity theft? The effect of demographics.
Journal of Public Policy and Marketing 25,
160-171 (2006).
8. Australian Competition and Consumer
Commission,
http://www.accc.gov.au/content/item.phtml?
itemId=816453&nodeId=ef518e04976145ff
ed4b13dd0ecda1a6&fn=Little%20Black%2
0Book%20of%20Scams.pdf.
9. Wei, R.: Lifestyles and New Media:
Adoption and Use of Wireless
Communication Technologies in China.
New Media & Society 8, 991-1008 (2006).
10. State of New Jersey Commission of
Investigation and Attorney-General of New
Jersey,
http://csrc.nist.gov/publications/secpubs/co
mputer.pdf.
11. Public Interest Advocacy Centre,
http://www.travel-
net.com/~piacca/IDTHEFT.pdf.
12. Organisation for Economic Development,
http://www.oecd.org/dataoecd/35/24/4064.
13. Quirk, P., Forder, J.: Electronic Commerce
and the Law. John Wiley & Sons Australia,
Ltd, Milton, Qld (2003).
14. Australian Office of the Australian
Information Commissioner,
http://www.privacy.gov.au/faq/smallbusines
s/q2.
15. Bossler, A., Holt, T.: The effect of self-
control on victimization in the Cyberworld.
Journal of Criminal Justice 38, 227−236
(2010).
16. PCWorld,
http://www.docstoc.com/docs/51221743/PC
-World-September-2010.
17. Social Security Administration,
http://www.ssa.gov/pubs/10064.html/.
18. Australian Government,
http://www.cybersmart.gov.au/Schools/Com
mon%20cybersafety%20issues/Protecting%
20personal%20information.aspx.
19. State of New Jersey Commission of
Investigation and the Attorney General of
New Jersey,
http://csrc.nist.gov/publications/secpubs/co
mputer.pdf.
20. Queensland Police Fraud Investigative
Group,
http://www.police.qld.gov.au/Resources/Inte
rnet/services/reportsPublications/documents/
page27.pdf.
21. Australian Institute of Criminology,
http://www.aic.gov.au/publications/current
%20series/tandi/381-400/tandi382.aspx.
22. Grabosky, P., Smith, R., Dempsey, G.:
Electronic theft: unlawful acquisition in
cyberspace. Cambridge: Cambridge
University Press, United Kingdom (2001).
23. Australian Bureau of Statistics,
http://www.abs.gov.au/ausstats/[email protected]/Lo
okup/65767D57E11FC149CA2579E400120
57F?opendocument.
24. Lynch, J.: Identity Theft in Cyberspace:
Crime Control Methods and Their
Effectiveness in Combating Phishing
Attacks. Berkeley Technology Law Journal
20, 266-67 (2005).
25. Home Office Identity Fraud Steering
Committee,
http://www.identitytheft.org.uk/faqs.asp.
26. Willox, N., Regan, T.: Identity fraud:
Providing a solution. Journal of Economic
Crime Management 1, 1-15 (2002).
27. Australian Bureau of Statistics,
http://www.ausstats.abs.gov.au/Ausstats/sub
scriber.nsf/0/866E0EF22EFC4608CA25747
40015D234/$File/45280_2007.pdf.
28. National Consumer Council,
http://www.talkingcure.co.uk/articles/ncc_m
odels_self_regulation.pdf.
29. Black, P.: Phish to fry: responding to the
phishing problem. Journal of Law and
Information Science 73, 73-91 (2005).
30. Jarvenpaa, S., Tractinsky, N., Vitale, M.:
Consumer Trust in an Internet Store: A
Cross-Cultural Validation. Journal of
Computer Mediated Communication 5, 45-
71 (1999).
31. Parliamentary Joint Committee on the
Australian Crime Commission,
http://www.parliament.wa.gov.au/intranet/li
bpages.nsf/WebFiles/Hot+topics+-
+organised+crime+cttee+rept/$FILE/hot+to
pics+-
+Aust+Crime+Commiss+cttee+rept.pdf.
32. Sullivan, R.: Payments Fraud and Identity
Theft? Economic Review 3, 36-37 (2008).
33. Morrison, R.: Commentary: Multi-Factor
Identification and Authentication.
Information Systems Management 24, 331-
332 (2007).
34. Baker, R.: An Analysis of Fraud on the
Internet. Internet Research: Electronic
Networking Applications and Policy 9, 348-
360 (1999).
35. Felson, M.: Crime and Everyday Life,
Insight and Implications for Society. Sage,
Thousand Oaks, CA (1994).
36. Organisation for Economic Co-operation
and Development,
http://www.oecd.org/dataoecd/49/39/408791
36.pdf.
37. Australian Institute of Criminology,
http://www.aic.gov.au/publications/current
%20series/tandi/381-400/tandi382.aspx.
38. United States Department of Justice,
http://www.ncjrs.gov/pdffiles1/nij/grants/21
0459.pdf.
39. Organisation for Economic Cooperation and
Development,
http://www.oecd.org/dataoecd/49/39/408791
36.pdf.
40. Smith, R.: Examining Legislative and
Regulatory Controls on Identity Fraud in
Australia. In: Proc. 2002 Marcus Evans
Conferences, pp.7-12, Sydney (2002).
41. Towell, E., Westphal, H.: Investigating the
future of Internet regulation 8, 26-31 (1998).
42. 18 U.S.C. 1028A (2004).
43. Commonwealth Criminal Code 1995 (Cth)
Div 372 (1)(b).
44. Queensland Criminal Code 1899 (Qld) s
408D(7).
45. Criminal Code Compilation Act 1913 (WA)
s 490(1)(a).
46. Criminal Code Act 2009 (NT) s 276(1)(a).
47. Coppel, J.: A Hard Look at the Effects
Doctrine of Jurisdiction in Public
International Law. Leiden Journal of
International Law 6, 73-90 (1993).
48. United States v. Aluminum Co. of America,
148 F.2d 416, 444 (2d Cir. 1945).
49. Hartford Fire Insurance Co. v. California,
113 S. Ct. 2891 (1993)
50. Gencor Ltd v. Commission [1999] ECR II-
753 at paras. 89-92.
51. Svantesson, S.: Jurisdictional Issues in
Cyberspace: At the Crossroads — The
Proposed Hague Convention and the Future
of Internet Defamation. Computer Law &
Security Report 18, 191 - 196 (2002).
52. Bolton, R., Hand, D.: Statistical Fraud
Detection: A Review. Statistical Science 17,
235-255 (2002).
53. United Nations Office on Drugs and Crime,
http://www.unodc.org/documents/data-and-
analysis/tocta/TOCTA_Report_2010_low_r
es.pdf.
54. European Convention on Cyber Crime,
opened for Signature 23 November 2001,
CETS No. 185, art 185 (Entered into force 1
July 2004).
55. Attorney-General for Australia,
http://conventions.coe.int/Treaty/EN/Treatie
s/html/185.htm.
56. Fitzgerald, B., Fitzgerald, A., Beale, T.,
Lim, Y., Middleton, G.: Internet and E-
Commerce Law – Technology Law and
Policy. Law Book Co, Pyrmont, NSW
(2007).
57. Parliament of Australia,
http://www.aph.gov.au/Parliamentary_Busin
ess/Bills_Legislation/Bills_Search_Results/
Result?bId=r4575.
58. Australian Government Information
Management Office,
http://www.finance.gov.au/publications/futu
re-challenges-for-
egovernment/docs/AGIMO-FC-no14.pdf>.
59. Australian Government,
http://www.ema.gov.au/www/agd/rwpattach
.nsf/VAP/(8AB0BDE05570AAD0EF9C283
AA8F533E3)~TSLB+-+LSD+-
+FINAL+APPROVED+public+consultation
+paper+-+cybercrime+convention+-
+15+February+2011.pdf/$file/TSLB+-
+LSD+-
+FINAL+APPROVED+public+consultation
+paper+-+cybercrime+convention+-
+15+February+2011.pdf.
60. O’Farrell, N., Outllet, E., Outllet, E.: Hack
Proofing your wireless network. Syngress
Publishing, Rockland, MA (2002).
61. Broadhurst, R., Grabosky, P.: Computer-
related Crime in Asia: Emergent Issues. In:
Broadhurst, R., Grabosky, P. (eds) Cyber-
Crime: The Challenge in Asia, Hong Kong
University Press, pp.1-26. (2005).
62. Sullivan, R.: Can Smart Cards Reduce
Payments Fraud and Identity Fraud?
Economic Review 3 (2008).
63. Model Criminal Code Officers’ Committee
of the Standing Committee of Attorneys-
General,
http://www.scag.gov.au/lawlink/SCAG/ll_sc
ag.nsf/vwFiles/MCLOC_MCC_Chapter_3_I
dentity_Crime_-_Final_Report_-
_PDF.pdf/$file/MCLOC_MCC_Chapter_3_
Identity_Crime_-_Final_Report_-_PDF.pdf.
64. Broadhurst, R., Grabosky, P.: Computer-
related Crime in Asia: Emergent Issues. In:
Broadhurst, R., Grabosky, P. (eds.) Cyber-
Crime: The Challenge in Asia, pp. 15-17.
Hong Kong University Press (2005).
65. Ferguson, N., Schneier, B.: Practical
Cryptography. Wiley, New York, NY
(2003).
66. Australian Institute of Criminology,
http://www.aic.gov.au/documents/9/3/6/%7
B936C8901-37B3-4175-B3EE-
97EF27103D69%7Drpp78.pdf.
67. Community for Information Technology
Leaders,
http://www.cioupdate.com/technology-
trends/cios-cybercrime-and-wetware.html.
68. Symantec,
http://www.symantec.com/specprog/threatre
port/ent-
whitepaper_symantec_internet_security_thre
at_report_x_09_2006.en-us.pdf.
69. Stajano, F.: Understanding Scam Victims:
Seven Principles for Systems Security.
Communications of the ACM 44, 70 (2011).
70. Bard Prison Initiative,
http://www.stcloudstate.edu/continuingstudi
es/distance/documents/EducationasCrimePre
ventionTheCaseForReinstatingthePellGrantf
orOffendersKarpowitzandKenner.pdf.
71. Federal Reserve Bank of Kansas City,
http://www.kansascityfed.org/PUBLICAT/e
conrev/pdf/3q08sullivan.pdf.
72. Benson, M.: Offenders or Opportunities:
Approaches to Controlling Identity Theft.
Criminology & Public Policy 8, 231–236
(2009).
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 75-81
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
Authenticating Devices in Ubiquitous Computing
Environment
Kamarularifin Abd Jalil1, Qatrunnada Binti Abdul Rahman2
Faculty of Computer and Mathematical Sciences
Universiti Teknologi MARA,
40450 Shah Alam,
Selangor, Malaysia.
[email protected], [email protected]
Abstract – The lack of a good authentication protocol in a
ubiquitous application environment has made it a prime target
for adversaries. As a result, all devices participating in such an
environment are exposed to attacks such as identity
impersonation, man-in-the-middle attacks and unauthorized
access. This has created skepticism among users and has led
them to keep their distance from such applications. For this
reason, in this paper we propose a new authentication protocol
for use in such an environment. Unlike other authentication
protocols that can be adopted in such an environment, our
proposed protocol avoids a single point of failure, implements
trust levels in granting access and promotes decentralization. It
is hoped that the proposed authentication protocol can reduce or
eliminate the problems mentioned.
Keywords: Authentication protocol, Ubiquitous Computing,
application security, decentralization.
I. INTRODUCTION
Ubiquitous computing can be described as the latest
paradigm in the world of computers today. It allows devices
and systems to be integrated and embedded together with
computing and communication systems through wireless
transmission [1]. In a related work, Weiser [2] has defined
ubiquitous computing as “a model of computing in which
computation is everywhere and computer functions are being
integrated into everything. It will be built into the basic objects
(smart devices), environments and the activities of our
everyday lives in such a way that no one will notice its
presence”.
In a ubiquitous system, information can be processed
and delivered seamlessly among the participating devices
without the users even noticing it. This is in contrast with what
is being practiced in a non-ubiquitous computing environment
whereby the users themselves have to make certain
adjustments (to the devices) in order to suit the current
computing environment they are in. These capabilities might
sound a bit futuristic, but in reality, the technology is already
here.
Basically, any device that can be connected to a
network via a wired or wireless link can be included in a
ubiquitous computing environment. However, nowadays, such
devices typically refer to smart devices which are portable
and connected to each other via wireless technologies such as
Bluetooth, Wi-Fi, 3G and 4G. Some of these devices
might be used to browse the Internet and some are partially
autonomous and have the capability to sense their
environment as discussed in [3]. With these capabilities,
information dissemination is at anyone's fingertips.
Figure 1. Ubiquitous Computing
Unfortunately, in this time and age, information can
be easily misused or manipulated if not protected. The
information that flows in the environment could fall into the
wrong hands and could be manipulated maliciously. Such
information can be said to be exposed to attacks such as
unauthorized manipulation, illicit access, and also disruption
of computing data and services. There have been many works
to solve these problems, and using authentication protocol is
one of them. Authentication protocol can ensure that users’
information and privacy are safeguarded. In section III, some
authentication protocols will be explained. These protocols
can be seen as the potential candidates to be used in the
ubiquitous computing applications. However, as mentioned in
section III, it was found that all these candidates cannot satisfy
the needs of a ubiquitous computing application and that is
why we are proposing a multi devices authentication protocol.
II. COMPUTER SECURITY COMPONENTS
Computer security, as defined by NIST [4], comprises the
defenses employed by information systems to maintain three
elements: the confidentiality, integrity and
availability of their computing resources. These three elements
are essential to information systems security, as elaborated
in [5], and are often referred to as the CIA triad. To fulfill these
security objectives, information system developers and
organization security managers follow the security architecture
for OSI featured in ITU-T Recommendation X.800
[6], a standard for providing security. It covers
Security Attacks, Security Mechanisms and Security Services.
Our research utilizes some of these Security
Mechanisms and Security Services to avert Security Attacks
prominent in ubiquitous computing applications.
A Security Service, according to X.800 [6], is a service
offered by a layer of communicating open systems that ensures
sufficient protection of the systems or of data transfers. There are
five types of services, namely Non-repudiation,
Authentication, Data Integrity, Data Confidentiality and
Access Control. Since this paper deals with authenticating
devices in a ubiquitous computing environment, the focus will
be on the Authentication service. Authentication is about
making sure interacting entities are who they claim to be.
The X.800 standard divides the Authentication service
into two particular services: Data-origin authentication and
Peer Entity authentication. The purpose of this paper is to
provide the Peer Entity authentication type of service, which
grants assurance and trust among interacting entities.
A Security Mechanism, on the other hand, is a method
to avoid, detect or recover from security attacks. Mechanisms are
divided into two categories: Specific Security mechanisms, which
may be deployed in a particular protocol layer or Security Service,
and Pervasive Security mechanisms, which are not particular to
any protocol layer or Security Service. There are
many different types of Security Mechanisms; further
elaboration on these can be found in [6]. For this paper, only
three mechanisms will be utilized in the development of the
new authentication protocol. These three Security
Mechanisms, which fall under the Specific Security mechanism
category, are Authentication Exchange, Digital Signature, and
Encipherment. The purpose of Authentication Exchange is to
identify an entity through the exchange of information;
Digital Signature provides integrity to the
information so that its origin will not be doubted; and
Encipherment alters the information, making it unreadable
during transmission.
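To make the three mechanisms concrete, here is a minimal Python sketch, assuming a pre-shared secret. A SHA-256 XOR keystream stands in for Encipherment, an HMAC tag stands in for a Digital Signature (a real design would use public-key signatures), and a random challenge-response stands in for Authentication Exchange; none of these specific choices comes from X.800 or from the proposed protocol.

```python
import hashlib
import hmac
import os

SHARED_KEY = b"example-shared-secret"  # hypothetical pre-shared secret

def encipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encipherment stand-in: XOR the data with a SHA-256-derived
    keystream. The same call deciphers, since XOR is its own inverse."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def sign(key: bytes, data: bytes) -> bytes:
    """Digital-signature stand-in: an HMAC tag vouching for the
    message's origin and integrity."""
    return hmac.new(key, data, hashlib.sha256).digest()

# Authentication Exchange: the verifier issues a random challenge and
# the prover returns a tag over it, proving knowledge of the shared key.
challenge = os.urandom(16)
response = sign(SHARED_KEY, challenge)
assert hmac.compare_digest(response, sign(SHARED_KEY, challenge))

# Encipherment round trip: enciphering twice with the same keystream
# restores the plaintext.
nonce = os.urandom(16)
ct = encipher(SHARED_KEY, nonce, b"hello")
assert encipher(SHARED_KEY, nonce, ct) == b"hello"
```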
In order to create the new authentication protocol,
the basic tasks of a security service need to be established;
indeed, Stallings [7] has identified four significant tasks
that fit any security service. The first is to design an
algorithm for performing the security-related transformation. The
second is to generate the secret information to be utilized
with the algorithm. Since this secret needs to be conveyed,
the third task is to develop a method for distributing and sharing it.
The last task is to specify a protocol that uses the
secret information and the algorithm to achieve a particular security
service. Figure 2 depicts a basic model of network
security for two or more interacting entities, into which the
security services and security mechanisms discussed earlier can be
fitted in order to secure particular network services.
Figure 2. Basic form of network security (Source: [7])
In summary, many security services implement certain
security mechanisms in order to prevent security attacks, and
among all the security services mentioned, this paper
concentrates only on designing a new authentication protocol,
which can be categorized as a Peer Entity Authentication security
service. The proposed protocol will utilize the Authentication
Exchange, Digital Signature, and Encipherment security
mechanisms. Furthermore, Figure 2 is also the basic form for
the new authentication protocol design, but it will be altered to
achieve the objective of assurance in the identity of
communicating entities. More information about the proposed
protocol can be found in Section V of this paper.
III. AUTHENTICATION PROTOCOLS
Authentication is important in order to maintain the
integrity of an entity; integrity, in turn, is essential
in determining that an entity really is who it claims to be.
Moreover, authentication can be used to ensure that an entity
has full authority over, and accountability for, its data. To
maintain an entity's integrity, many authentication
protocols have been introduced. Protocols such as Kerberos,
SSO and OpenID are some examples that are widely
used. Most of these authentication protocols require
dedicated access to a server, either for the validation process or to
acquire digital certificates, tickets or tokens. Users who
utilize OpenID, for instance, need to register an OpenID
identifier with an identity provider in order to sign in to the
websites that employ OpenID authentication.
Some of these authentication protocols are not
suitable for a ubiquitous computing environment. Consider
Kerberos, a computer network authentication
protocol built around a centralized Key Distribution Centre
(KDC), which actually consists of two logically separate parts,
an Authentication Server and a Ticket Granting
Server, as mentioned in [8]. Although centralization is good for
managing multiple users at one time, it has a
disadvantage: if the KDC server is compromised or
the service is down, users may not be able to authenticate
themselves. Accordingly, the KDC can be a single point of
failure, which is the major drawback of the Kerberos protocol, as
argued in [9].
Another widely used authentication protocol is
Single Sign-On (SSO). According to [10], it
enables a user to gain access to several systems or
applications with a single login. A user does not have to
repeat the login process for every application that he or she
is trying to access. In other words, SSO is a centralized
authentication system that controls access to the multiple
applications unified under it. In SSO, once the user has
logged in, he or she can access the other applications too. This
makes the authentication system highly critical: if the
authentication system's availability is disrupted, users
can be denied access to all of the applications that
employ it. This is a major
drawback of the SSO authentication system, as shown in [11].
Nonetheless, there are authentication protocols
that implement a decentralized system. OpenID enables
users to choose their preferred identity providers in order to
create accounts, and users are able to sign in to any
application that accepts OpenID authentication by using
those accounts. Nevertheless, that is also its downside:
an OpenID account can only be used to sign in to websites
that accept it. Although OpenID is already widely
implemented, there are more websites that do not support it, so
relying on it for identity confirmation is not convenient.
Moreover, it is also susceptible to phishing attacks. In such
an attack, a user is swindled into believing that
he or she is entering credentials into the real identity provider's
authentication page when it is actually a fake one.
Once the user has submitted his or her credentials to
this fake site, the malicious person controlling it can
use those credentials to access the user's account and then log
into any application associated with that particular user's
OpenID, as mentioned in [12].
Recently, a different approach to authentication has
appeared, which specializes in securing communications
between devices by using knowledge of their radio
environment as proof of physical proximity. This new
authentication protocol is called Amigo. According to
Varshavsky et al. in [13], Amigo is a technique that
extends the Diffie-Hellman key exchange with verification of
device co-location. This protocol can ensure that the key is
exchanged with the right device: a device's
location, or more specifically its radio environment, is checked
to verify whether the devices are in the same proximity. This
technique is interesting as it involves comparing the proximity
of the devices. Its only downside is that the
interacting devices would only learn that they are near one
another, not each other's exact identities. This is not enough if a
user wants to establish trust in communications.
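The Diffie-Hellman exchange that Amigo builds on can be sketched in a few lines of Python. The tiny textbook group parameters below are for illustration only (real deployments use 2048-bit MODP groups or elliptic curves), and the co-location verification that Amigo adds on top is not shown.

```python
import secrets

# Toy Diffie-Hellman group: generator G of the multiplicative group mod P.
# These textbook-sized parameters are illustrative, not secure.
P, G = 23, 5

a = secrets.randbelow(P - 2) + 1   # device A's private exponent
b = secrets.randbelow(P - 2) + 1   # device B's private exponent

A = pow(G, a, P)   # A's public value, sent to B over the open channel
B = pow(G, b, P)   # B's public value, sent to A over the open channel

shared_a = pow(B, a, P)   # computed by A from B's public value
shared_b = pow(A, b, P)   # computed by B from A's public value
assert shared_a == shared_b   # both devices now hold the same secret
```

Amigo's contribution is to bind this exchange to physical proximity, so that the two endpoints of the exchange are known to be co-located devices rather than a remote man-in-the-middle.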
Based on the features of the related authentication
protocols discussed above, many attributes need
to be improved in order to suit the volatile and decentralized
environment of ubiquitous computing.
IV. JUSTIFICATIONS AND REQUIREMENTS FOR
THE PROPOSED PROTOCOL
In Section III, we presented the current
protocols that could be used in ubiquitous computing
applications. From the discussion, it can be deduced that
there are three issues with these protocols that need to be
addressed by the proposed protocol: first, centralization;
second, accessibility (the need for Internet access in order to
use the protocol); and third, trust.
According to Coulouris [14], because its environment
is volatile compared to existing computing
environments, ubiquitous computing needs special
authentication and authorization protocols. In a volatile
environment, heterogeneous devices may come into contact with
each other spontaneously, start interacting with each
other, and just as suddenly leave the established
network connections [15]. The volatility and dynamism of
mobile devices in a ubiquitous computing environment
contribute to a fluctuating usage context, with the user's
location, the device's context and the user's activity varying
unpredictably. As a result, the current rigid, centralized
authentication protocols that rely on certification authorities
to confirm the identity of the entities involved will not
be sufficient for a volatile setting such as a smart
environment, as demonstrated by Nixon [16]. In this paper, we
have identified three requirements for the proposed
authentication protocol (see Figure 3). These requirements are
seen as vital for the proposed protocol to be accepted
by users.
A. Decentralization
The decentralization of an authentication protocol
refers to the distribution of the authentication
process to the respective devices. This is the opposite of
current practice, which provides a centralized authentication
protocol relying on hierarchies of certification authorities
that issue certificates and confirm the respective
owners using a dedicated server. In the proposed protocol,
decentralization of the authentication process is achieved
through multiple trusted agreements among the devices
involved.
Figure 3. Requirements for the New Authentication Protocol
Using multiple trusted devices to verify the
identity of an entity eliminates the need for constant access
to an online dedicated server for the authentication process. This
is useful when network access is interrupted and the
respective devices cannot reach the Internet. In the proposed
protocol, the only network connection needed for the
authentication process to take place is the connection between
the communicating devices. As pointed out in [17], processors
are now being embedded into common everyday objects and
surrounding infrastructures; for that reason it is not efficient
to provide an authentication process that requires constant
online access to a dedicated server and certificate authority.
Besides that, there are many questions regarding the practicality
of public key infrastructure, as raised by Creese et al. in [18],
who questioned the practicality of Certification Authorities
that require constant online access.
Decentralizing the authentication process can also
eliminate the single-point-of-failure problem. A centralized
authentication protocol has a high chance of presenting a
single point of failure because of its heavy dependence on
dedicated servers for validation. If this risk
can be minimized or avoided, the usability
and availability of the authentication system can be improved.
B. Trust
Trust is the second requirement for the
new authentication protocol. Coulouris et al. [14] state
that devices' trust requirements need to be lowered in order for
them to interact spontaneously. In such a situation the devices
lack knowledge of each other, so a trusted third
party is needed to vouch for their identities. In
addition, Varshavsky et al. [13] mention that
mobile devices with wireless capabilities may
spontaneously interact with one another whenever they come
into close proximity; this sort of communication is risky because
trust between them has not been established a priori.
This lack of trust may give a malicious attacker the opportunity
to connect to any device present. Hence, we propose an
approach that adopts a trust-level mechanism, where users
can set their devices' trust levels for the authentication
process accordingly.
In a typical ubiquitous computing scenario,
some users may already know each other
beforehand and some may not, so each user may want to set
a different trust level for different situations or people. As
suggested by Westin in [19], there are three types of
respondents: privacy fundamentalists, privacy
pragmatists, and the privacy unconcerned. Based on that
argument, users should be given the choice of their
privacy settings.
C. Seamless
The third requirement for the new authentication
protocol is making the interaction in the
authentication process seamless to the users. One of the
characteristics of ubiquitous computing
emphasized by Weiser [2] is that the technology should blend
into the surroundings to the extent that people are not
aware of it and do not need to know how it works. This
concurs with Stringer et al. [20] and also Bardram and Friday
[15], who describe ubiquitous computing as
disappearing computing that blends into objects
and surroundings. As stated by Langheinrich [21], processors
and sensors are being embedded into almost everything.
Because of that, the interaction of ubiquitous
applications and devices goes beyond traditional computing
interaction: it happens implicitly, via sensors that sense an
entity's presence, sound or gesture. Consequently, it is
appropriate to design an authentication protocol that suits
this characteristic of ubiquitous computing, authenticating
entities unobtrusively and without requiring the entity to
intervene in the process.
V. THE PROPOSED AUTHENTICATION
PROTOCOL
Since the current authentication protocols are better
suited to a rigid computing infrastructure that is centralized
and requires constant access to a dedicated server, the proposed
authentication protocol is designed to suit the
volatile environment of ubiquitous computing. Figure 4
explains the multiple-trusted-devices
authentication protocol for ubiquitous computing applications
in more detail.
In order to understand the proposed authentication protocol, a
scenario is used.
In the scenario, there are three persons, A (PA), B (PB) and
C (PC), who each have a smartphone: device A (DA), B (DB) and
C (DC) respectively. PA and PB have just met, while PC is a mutual
friend of PA and PB. PB has a collection of interesting pictures
that he took while visiting an art gallery in Paris. PA, on the other
hand, really wants those pictures, so he decides to copy
them from PB. PB does not mind sharing them with PA; all PA has to
do is access PB's device. In order to do so, he must have the
authority to access device DB. As PA and PB have just met,
PA must first register with device DB. Since PC is a mutual friend
of both PA and PB, it is assumed that he has already registered
with both of his friends' devices, so he will have no problem
accessing them. Figure 4 depicts how
device DA is granted access to device DB.
Figure 4. The proposed authentication protocol
First of all, in step 1, device DA must request
permission to access device DB. In doing so, DA must
send its ID to DB so that DB can check
whether DA is already in its registry. This ID is actually a random
value that DA can generate and renew whenever needed.
The ID is not permanent; each time it is
renewed, the old ID held in other devices' registries
becomes invalid. As a result, a device must go through the
registration process again each time its ID is renewed.
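The renewable device ID described above can be sketched as follows; the class and method names are illustrative, not taken from the paper.

```python
import secrets

class DeviceIdentity:
    """Sketch of the renewable device ID: a random value that can be
    regenerated at any time, which invalidates the copies held in
    other devices' registries."""

    def __init__(self) -> None:
        self.device_id = self._new_id()

    @staticmethod
    def _new_id() -> str:
        # 8 random bytes rendered as 16 hex characters
        return secrets.token_hex(8)

    def renew(self) -> str:
        # The old ID becomes invalid everywhere; the device must
        # re-register with its peers after this call.
        self.device_id = self._new_id()
        return self.device_id

d = DeviceIdentity()
old = d.device_id
assert d.renew() != old   # renewal replaces the previous random ID
```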
In step 2, DB checks (using the ID DA has just sent)
whether DA's ID is recorded in its registry. This
results in two conditions: DA's ID is either found or
not found. If it is found, the protocol proceeds to step 3a; if not,
to step 3b. In step 3a, once it is confirmed
that DA is already registered in the registry, DB proceeds
to request DA to authenticate itself by presenting its Identity Key.
This Identity Key is also a sequence of random values that
can be generated and renewed whenever needed. Apart from
that, the Identity Key is conveyed only partially (see Figure 5):
only a portion of the whole Identity Key, together with
metadata about the key sequence, is sent. This is to
guard against a malicious device that might be eavesdropping. Although
the Identity Key is only partially revealed, DB has no
problem verifying it, as it compares the sequence of the
Identity Key sent by DA with the one already in its
registry. Furthermore, in step 4a, DB will also ask other
devices to participate in validating the Identity Key.
All information in these transmissions is
encrypted using an existing cryptographic algorithm.
Figure 5. Identity Key (partial sequence: q 1 w 2 e r t y 5 7 z 0)
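One plausible way to realize the partial-key disclosure just described is sketched below; the position-based reveal and the field names are assumptions about one reasonable interpretation, not the paper's exact scheme.

```python
def partial_reveal(identity_key: str, positions: list[int]) -> dict:
    """Send only selected positions of the Identity Key, plus metadata
    saying which positions of the sequence they are."""
    return {"positions": positions,
            "symbols": [identity_key[i] for i in positions]}

def verify_partial(registry_key: str, msg: dict) -> bool:
    """The verifier compares the revealed symbols against the full
    Identity Key stored in its own registry."""
    return all(registry_key[i] == s
               for i, s in zip(msg["positions"], msg["symbols"]))

full_key = "q1w2er3ty57z0"   # the registry copy, echoing Figure 5's example
msg = partial_reveal(full_key, [0, 2, 4, 6, 8, 10])
assert verify_partial(full_key, msg)          # genuine key verifies
assert not verify_partial("x1w2er3ty57z0",    # altered key is rejected
                          {"positions": [0], "symbols": ["q"]})
```

An eavesdropper seeing one exchange learns only a subset of key positions, which is the property the partial reveal is meant to provide.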
During step 4a, apart from finding DA's Identity Key in the
registry and validating it, there are also security level
settings, depicted in Figure 6, that DB has to set for DA. This
security level setting determines how the validation
process takes place (in this phase, PB is free to set a different
security level for each device that PB encounters). There
are currently three levels of security settings in this protocol. If
DA is set to Level 1, then after DB has validated its Identity Key
it can access DB right away. If it is set to Level 2, then after
DB has validated its Identity Key, DB will proceed to ask another
device, which may be nearby, to check DA's credentials.
However, if DA is set to Level 3, then its credentials will be
validated by more than one nearby device.
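The three security levels can be condensed into a small sketch; reading "more than one nearby device" as two validators is an assumption made here for illustration.

```python
def validators_needed(level: int) -> int:
    """How many nearby devices are consulted at each security level.
    Level 3's count of two is one reading of 'more than one nearby
    device'; the paper leaves the exact number open."""
    return {1: 0, 2: 1, 3: 2}[level]

def validate(identity_key_ok: bool, level: int,
             nearby_confirmations: int) -> bool:
    """Access is granted once the Identity Key checks out and enough
    nearby devices have vouched for the requester's credentials."""
    return identity_key_ok and nearby_confirmations >= validators_needed(level)

assert validate(True, 1, 0)        # Level 1: key check alone suffices
assert not validate(True, 2, 0)    # Level 2: one nearby device must confirm
assert validate(True, 3, 2)        # Level 3: two nearby devices confirm
assert not validate(False, 1, 5)   # a bad Identity Key is never accepted
```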
Figure 4 (content).
DB's registry/database:
Device A: ID a b c 1 2 3 z; ID Key q 1 w 2 e r 3 t y 5 7 z 0
Device C: ID 7 9 j l s 3 8; ID Key 1 2 3 0 9 8 w 0 w x i d 0

Protocol steps between Device A (DA, held by PA) and Device B (DB, held by PB):
1) DA requests access permission from DB and sends DA's ID.
2) DB checks whether DA's ID exists in its record. If it exists, DB requests DA to authenticate itself (proceed to step 3a); if it does not exist, DB requests DA to register (proceed to step 3b).
3a) DB requests DA to authenticate itself; DA sends its Identity Key for authentication.
3b) DB requests DA to register; DA goes through the registration process.
4a) DB requests other devices to validate DA's Identity Key.
4b) Device DA's credentials are updated in DB's registry.
5) DB allows DA to access its application, granting its Identity Key and authorization ID to DA.
6) DA updates its own registry.

Security level settings (validation process for Device A; ID: a b c 1 2 3 z; ID Key: q 1 w 2 e r 3 t y 5 7 z 0):
Level 1: permit access right away.
Level 2: check with a nearby device for DA's credentials.
Level 3: check with more than one nearby device for DA's credentials.
Validation outcome: if true, DA is authenticated and given authorization to access the application from DB; if false, DA is flagged for future reference in case DA is an impostor.
Figure 6. Security level settings
Steps 3a and 4a deal with the situation
where DA is already registered in DB's registry. If it is not,
the protocol continues to step 3b, where DA is prompted to register
first in order to access DB's application. Here, DA goes
through the registration process, providing its ID as
well as its Identity Key. Then, in step 4b, DA's credentials
are recorded in DB's registry. Next, after steps 3a and 4a,
or 3b and 4b, have been completed, step 5 takes place:
DB gives DA permission and authorization
to access its application, and also grants DA its own Identity Key
and authorization ID.
Finally, in step 6, after DA has accepted DB's Identity Key and
authorization ID, it updates them in its database/registry.
It then uses the authorization ID to access the
desired application on device DB.
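Steps 1 to 6 can be condensed into a minimal sketch, with the cryptographic details (partial key reveal, encryption, nearby-device validation) omitted; the class and method names are illustrative, not the paper's.

```python
import secrets

class Device:
    """Minimal sketch of steps 1-6: registry lookup, registration,
    and mutual credential exchange between two devices."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.device_id = secrets.token_hex(4)
        self.identity_key = secrets.token_hex(8)
        self.registry: dict[str, str] = {}   # device_id -> identity_key

    def request_access(self, target: "Device") -> bool:
        # Step 1: present our ID; steps 2-5 happen on the target side.
        granted = target.handle_request(self)
        if granted:
            # Step 6: record the target's credentials in our registry.
            self.registry[target.device_id] = target.identity_key
        return granted

    def handle_request(self, requester: "Device") -> bool:
        if requester.device_id in self.registry:           # step 2: found
            # Steps 3a/4a: authenticate against the stored Identity Key.
            return self.registry[requester.device_id] == requester.identity_key
        # Steps 3b/4b: not found, so register the requester's credentials.
        self.registry[requester.device_id] = requester.identity_key
        return True                                        # step 5: grant access

da, db = Device("A"), Device("B")
assert da.request_access(db)        # first contact triggers registration
assert da.request_access(db)        # second contact authenticates directly
assert db.device_id in da.registry  # step 6 recorded B's credentials
```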
VI. CONCLUSIONS
In this paper, we have discussed a number of
possible authentication protocols for use in the ubiquitous
application environment. The discussion showed
that no existing protocol really suits the needs
of applications running in such an environment. Therefore, in
this paper we propose a new authentication protocol
that can satisfy the needs of applications running in a
ubiquitous environment. The proposed protocol uses multiple
trusted devices, decentralizing the authentication process
to suit the volatile and dynamic environment of ubiquitous
computing. It is hoped that in the near future the proposed
protocol can be tested in a test-bed environment.
REFERENCES
1. R. Want, "An Introduction to Ubiquitous Computing," in
Ubiquitous Computing Fundamentals, J. Krumm, Ed.
Redmond, Washington, USA: CRC Press, ch. 1, pp. 2-27, (2010).
2. M. Weiser, “The computer for the 21st century, Mobile
Computing and Communications Review,” New York,
NY, USA, pp. 3(3):3–11, (1999).
3. S. Yahya, E. A. Ahmad, K. Abd Jalil, “The definition
and characteristics of ubiquitous learning: A discussion,”
International Journal of Education and Development
using Information and Communication Technology, pp.
117-127, (2010).
4. B. Guttman and E. A. Roback, "Introduction, An
Introduction to Computer Security: The NIST
Handbook," Gaithersburg, MD: NIST special Publication
800-12, ch.1, pp.5, (1995).
5. Standards for Security Categorization of Federal
Information and Information Systems, Federal
Information processing Standards Publication.
Gaithersburg, MD, p. 2, (2004).
6. Security Architecture for Open Systems Interconnection
for CCITT Applications, Recommendation X.800.
Geneva, p.8-9, (1991).
7. W. Stallings, "A Model for Network Security,"
Cryptography and Network Security, 5th ed., Upper Saddle
River, NJ: Prentice Hall, ch. 1, pp. 25-26, (2011).
8. J. Garman, “Pieces of the puzzle, Kerberos the definitive
guide,” Sebastopol, CA: O’Reilly & Associates, Inc, ch.
2, pp. 17-23, (2010).
9. J. Garman, “Security, Kerberos the definitive guide,”
Sebastopol, CA: O’Reilly & Associates, Inc, ch. 6, pp.
100-125, (2010).
10. B. Ballad, T. Ballad, E. K. Banks, “Single Sign-on
(SSO), access control, authentication, and public key
infrastructure,” Sudbury, MA : Jones & Bartlett
Learnings, ch. 10, pp. 229-231, (2011).
11. J. Pyles, “Getting started with Microsoft Office
SharePoint Server, McTs,” Microsoft Office Sharepoint
Server 2007 Configuration Study Guide. Indianapolis,
Indiana: Wiley Publishing, Inc., ch. 1, pp. 14, (2008).
12. R. U. Rehman, “OpenID Protocol: Miscellaneous Topics,
Get Ready for OpenID,” 1st ed. Conformix Technologies
Inc., ch. 8, pp. 205-207, (2008).
13. A. Varshavsky, A. Scannell, A. LaMarca, E. de Lara,
“Amigo: Proximity-based Authentication of Mobile
Devices,” Proc. 2007: The 9th international conference
on Ubiquitous computing, Berlin, Heidelberg, pp. 253-
270, (2007).
14. G. Coulouris, "Mobile and Ubiquitous Computing,"
Distributed Systems: Concepts and Design, 4th ed.,
Reading, MA: Addison-Wesley, ch. 16, pp. 683-704, (2005).
15. J. Bardram, A. Friday, “Ubiquitous Computing Systems,
Ubiquitous Computing Fundamentals,” J. Krumm, Ed.
Redmond, Washington, U.S.A: CRC Press, ch. 2, pp. 39-
41, (2010).
16. P. Nixon, W. Wagealla, C. English, and S. Terzis,
“Privacy, Security, and Trust Issues in Smart
Environments,” In Smart Environments: Technology,
Protocols and Applications. Wiley, London, UK, pp.
220-240. ISBN 978-0-471-54448-7, (2004).
17. Middleware Architecture for Ambient Intelligence in the
Networked Home, Handbook of Ambient Intelligence
and Smart Environments. Springer-Verlag US, p. 1139,
(2010).
18. S. Creese, M. Goldsmith, B. Roscoe, I. Zakiuddin,
“Authentication for Pervasive Computing. Security in
pervasive computing,” First International Conference,
Boppard, Germany, pp. 117-129, (2003).
19. A. F. Westin, “Privacy and Freedom”, New York, NY,
USA: Atheneum, (1967).
20. M. Stringer, et al “Situating Ubiquitous Computing in
Everyday Life: Some Useful Strategies” [Online].
Available: http://www.informatics.sussex.ac.uk/
research/groups/interact/publications/stringer_ubicomp0
5.pdf.
21. M. Langheinrich, “Privacy by Design - Principles of
Privacy- Aware Ubiquitous Systems,” Proc of the 3rd
international conference on Ubiquitous Computing,
London, UK, (2001).
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 82-88
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
An Image Encryption Method: SD-Advanced Image
Encryption Standard: SD-AIES
Somdip Dey
St. Xavier’s College [Autonomous] Kolkata, India
E-mail: [email protected] [email protected]
ABSTRACT
The security of digital information is one of the most important
concerns of modern times. For this reason, in
this paper the author proposes a new standard method of
image encryption. The proposed method consists of four
stages: 1) first, a number is generated from the
password and each pixel of the image is converted to its
equivalent eight-bit binary number; within that eight-bit number,
a number of bits equal to the length of the
number generated from the password are rotated and
reversed; 2) in the second stage, an extended Hill cipher technique is
applied using an involutory matrix, which is generated from the
same password, to make the method
more secure; 3) in the third stage, a generalized modified Vernam
cipher with a feedback mechanism is used on the file to create
the next level of encryption; 4) finally, in the fourth stage, the
whole image file is randomized multiple times
using the modified MSA randomization encryption technique,
where the randomization depends on another number
generated from the password provided for the encryption method.
SD-AIES is an upgraded version of the SD-AEI image
encryption technique. The proposed method, SD-AIES, was
tested on different image files and the results were more
than satisfactory.
KEYWORDS
SD-EI, SD-AEI, image encryption, bit reversal, bit
manipulation, bit rotation, hill cipher, vernam cipher,
randomization.
1. INTRODUCTION
In today's world, keeping digital information safe from
misuse is one of the most important criteria. This
issue gave rise to a new branch of computer science, named
Information Security. Although new methods are introduced
every day to keep data secure, computer hackers and
unauthorized persons are always trying to break those
cryptographic methods or protocols to fetch sensitive
information from the data. For this reason,
computer scientists and cryptographers are trying very hard to
come up with permanent solutions to this problem.
Cryptography can be classified into two basic types:
1) Symmetric Key Cryptography
2) Public Key Cryptography
In Symmetric Key Cryptography [16], only one key is used
for encryption, and the same key is used for
decryption as well. In Public Key
Cryptography [16], by contrast, a publicly shared key is used for
encryption and a separate, private key is used for decryption. The
symmetric key approach makes the whole process easier because only
one key is needed for both encryption and decryption.
Although public key cryptography such as RSA [14] or
Elliptic Curve Cryptography [15] is more popular today because
of its high security, these methods are still susceptible
to attacks like the brute-force key search attack [16]. The
proposed method, SD-AIES, is a symmetric key
cryptographic method, which is itself a combination of four
different encryption modules.
The SD-AIES method was devised by Somdip Dey [5] [6] [9] [10]
[11] [12] [13], and is itself a successor to and upgraded version
of the SD-AEI [6] image encryption technique. The four
encryption modules that make up the SD-AIES cryptographic
method are as follows:
1) Modified Bits Rotation and Reversal Technique for
Image Encryption
2) Extended Hill Cipher Technique for Image
Encryption
3) Generalized Modified Vernam Cipher for File
Encryption
4) Modified MSA Randomization for File Encryption
The aforementioned methods are discussed in the next
section, The Methods in SD-AIES. All the
cryptographic modules used in the SD-AIES method use the
same password (key) for both encryption and decryption (as
in symmetric key cryptography). Although there is an
ongoing debate over the relative security of symmetric key
cryptography and public key cryptography, SD-AIES is a
very strong cryptographic method because of its use
of a modified Vernam cipher with a feedback mechanism. It has
already been proved that the one-time-pad
Vernam cipher is unbreakable if and only if the
key chosen for encryption is truly random. The
combination of bit and byte manipulation along with the
modified Vernam cipher makes the SD-AIES method truly
unique and strong.
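The one-time-pad property referred to here can be illustrated in a few lines. This is the classic Vernam XOR construction with a truly random, message-length, single-use key, not the paper's modified feedback variant.

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """Classic Vernam cipher: XOR each data byte with a key byte.
    With a truly random key as long as the message, used only once,
    this is the information-theoretically secure one-time pad."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"pixel data"
key = secrets.token_bytes(len(msg))   # truly random, one-time key
ct = vernam(msg, key)
assert vernam(ct, key) == msg         # XOR with the same key decrypts
```

The security guarantee collapses if the key is reused or predictable, which is why the randomness requirement in the text is stated as "if and only if".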
The differences between SD-AEI [6] and SD-AIES
are that the latter contains one extra encryption module,
the modified Vernam cipher with a feedback
mechanism, and that its Bits Rotation and Reversal technique is
modified to provide better security.
2. THE METHODS IN SD-AIES

Before we discuss the four methods that make up the SD-AIES encryption technique, we need to generate a number from the password; this number is used to randomize the file structure in the modified MSA Randomization module.
2.1 Generation of a Number from the Key

In this step, we generate a number from the password (the symmetric key) and use it later in the randomization method that encrypts the image file. The number generated from the password is case sensitive, depends on every byte (character) of the password, and changes with even the slightest change in the password.
Let [P1P2P3P4.....Plen] be the password, where 'len', the length of the password, can be any value. We first multiply the ASCII value of the byte at position 'i' of the password by 2^i, for every character in the password, and then add all the resulting values; we denote the total as N.
Now, if N = [n1n2......nj], we add all the digits of that number, i.e. we compute n1 + n2 + n3 + ... + nj, to obtain the number essential for the randomization step of the encryption. We denote this unique number as 'Code'.
For example, if the password is 'AbCd', then:
P1 = A; P2 = b; P3 = C; P4 = d
N = 65*2^1 + 98*2^2 + 67*2^3 + 100*2^4 = 2658
Code = 2 + 6 + 5 + 8 = 21
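The number-generation step above can be sketched in Python; `generate_code` is an illustrative helper name, not from the paper:

```python
def generate_code(password: str) -> tuple[int, int]:
    """Derive N and 'Code' from a password, as in Section 2.1.

    N is the sum of ASCII(P_i) * 2^i over positions i = 1..len;
    'Code' is the sum of the decimal digits of N.
    """
    n = sum(ord(ch) * 2 ** i for i, ch in enumerate(password, start=1))
    code = sum(int(d) for d in str(n))
    return n, code

# The paper's worked example: password "AbCd"
n, code = generate_code("AbCd")
print(n, code)  # 2658 21
```

Note that a one-character change in the password (or a case change) alters N and hence 'Code'.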
2.2 Modified Bits Rotation and Reversal Technique

In this method, a password is given along with the input image. The value of each pixel of the input image is converted into its equivalent eight-bit binary number. We then add the ASCII values of all the bytes of the password to generate a number; this number decides how many bits are rotated to the left and reversed. The generated number is taken modulo 7 to produce the effective number (NR), according to which the bits are rotated and reversed. Let N be the number generated from the password and NR (the effective number) be the number of bits to be rotated left and reversed. The relation between N and NR is given by equation (1):

NR = N mod 7 ------ eq. (1)

where '7' is the number of iterations required to reverse an entire input byte and N = n1 + n2 + n3 + ... + nj, with n1, n2, ..., nj being the ASCII values of the bytes of the password.
For example, let Pin(i,j) be the value of a pixel of an input image, and let [B1 B2 B3 B4 B5 B6 B7 B8] be the equivalent eight-bit binary representation of Pin(i,j), i.e.:

Pin(i,j) -> [B1 B2 B3 B4 B5 B6 B7 B8]

If NR = 5, five bits of the input byte are rotated left to generate the resultant byte [B6 B7 B8 B1 B2 B3 B4 B5]. After rotation, the rotated five bits B1 B2 B3 B4 B5 are reversed to B5 B4 B3 B2 B1, and hence we get the resultant byte [B6 B7 B8 B5 B4 B3 B2 B1]. This resultant byte is converted to its equivalent decimal number Pout(i,j):

[B6 B7 B8 B5 B4 B3 B2 B1] -> Pout(i,j)

where Pout(i,j) is the value of the output pixel of the resultant image.
Since the weight of each pixel determines its colour, the change in the weight of each pixel of the input image due to modified Bits Rotation and Reversal generates the encrypted image. Figure 1 (a, b) shows the input and encrypted images respectively. For the encryption process the given password is "SD13", for which NR = 6.
Note: if N = 7 or a multiple of 7, then NR = 0. In this condition, the whole byte of the pixel is reversed.
Figure 1: (a).Input Image. (b).Encrypted Image for password “SD13”
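The per-pixel transform described above can be sketched as follows (the helper names are ours, not the paper's); the NR = 0 case follows the note, reversing the whole byte:

```python
def effective_number(password: str) -> int:
    """NR = (sum of ASCII values of the password bytes) mod 7."""
    return sum(ord(ch) for ch in password) % 7

def transform_pixel(p: int, nr: int) -> int:
    """Rotate the 8-bit pixel value left by nr bits, then reverse the rotated bits."""
    if nr == 0:
        nr = 8  # N a multiple of 7: the whole byte is reversed
    bits = f"{p:08b}"
    rotated = bits[nr:] + bits[:nr]                   # left-rotate by nr
    out = rotated[:8 - nr] + rotated[8 - nr:][::-1]   # reverse the nr rotated bits
    return int(out, 2)

print(effective_number("SD13"))  # 6, as in the paper's example
```

For NR = 5, `transform_pixel` maps [B1 B2 B3 B4 B5 B6 B7 B8] to [B6 B7 B8 B5 B4 B3 B2 B1], matching the worked example above.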
2.3 Extended Hill Cipher Technique

This is a new method for the encryption of images proposed in this paper. The basic idea of this method is derived from the work presented by Saroj Kumar Panigrahy et al. [2] and Bibhudendra Acharya et al. [3]. In this work, an involutory matrix is generated using the algorithm presented in [3].
Algorithm of Extended Hill Cipher technique:
Step 1: An involutory matrix of dimensions m×m is
constructed by using the input password.
Step 2: The index value of each row of the input image is converted into an x-bit binary number, where x is the number of bits in the binary equivalent of the index value of the last row of the input image. The resultant x-bit binary number is rearranged in reverse order, and this reversed x-bit binary number is converted back into its equivalent decimal number. The weight of the index value of each row therefore changes, and hence the positions of all rows of the input image change; i.e., the positions of all the rows of the input image are rearranged in bits-reversed order. Similarly, the positions of all columns of the input image are rearranged in bits-reversed order.
Step 3: The Hill Cipher technique is applied to the positionally manipulated image generated in Step 2 to obtain the final encrypted image.
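The bits-reversed-order rearrangement of Step 2 can be sketched as an index permutation. This is an illustrative sketch: it assumes the image dimension is a power of two, since the paper does not specify how reversed indices that fall out of range are handled otherwise.

```python
def bit_reversed_perm(n: int) -> list[int]:
    """Permutation mapping each row/column index to its bit-reversed value.

    x is the number of bits in the binary equivalent of the last index (n - 1).
    Assumes n is a power of two so every reversed index stays in range.
    """
    x = max(1, (n - 1).bit_length())
    return [int(f"{i:0{x}b}"[::-1], 2) for i in range(n)]

print(bit_reversed_perm(8))  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Applying this permutation to both the rows and the columns yields the positionally manipulated image to which the Hill Cipher is then applied.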
2.4 Generalized Modified Vernam Cipher

The modified Vernam Cipher module used in this method is a concept proposed by Nath et al. [4][7]. In their cryptographic method TTJSA [7], Nath et al. proposed an advanced form of generalized modified Vernam Cipher with a feedback mechanism. For this reason, even if the input data is changed only slightly, the encrypted output generated is very different.
TTJSA is a combination of three distinct cryptographic methods: (i) the Generalized Modified Vernam Cipher method, (ii) the MSA method and (iii) the NJJSA method. To begin, a user enters a text key of at most 16 characters. From the text key, the randomization number and the encryption number are calculated using a method proposed by Nath et al. A minor change in the text key changes the randomization number and the encryption number considerably. The method has also been tested on various known text files, and it was found that even if there is repetition in the input file, the encrypted file contains no repeated patterns.
In the SD-AIES image encryption method we use only the modified Vernam Cipher module of TTJSA by Nath et al. Here, 'Code' represents the randomization number and 'N' represents the encryption number. All the data in the file are converted to their equivalent 16-bit binary format and broken down into blocks.
Algorithm for Modified Vernam Cipher with feedback
mechanism is as follows:
2.4.1 Algorithm of vernamenc(f1, f2)

Step 1: Start the vernamenc() function
Step 2: Initialize the matrix mat[16][16] with the numbers 0-255 in row-major order
Step 3: Call the function randomization() to randomize the contents of mat[16][16]
Step 4: Copy the elements of the randomized matrix mat[16][16] into key[256] (row-major order)
Step 5: pass=1, times3=1, ch1=0
Step 6: Read a block from the input file f1, where the number of characters in the block <= 256
Step 7: If the block size < 256 then goto Step 15
Step 8: copy all the characters of the block into an array
str[256]
Step 9: call function encryption where str[] is passed as
parameter along with the size of the current block
Step 10: if pass=1 then
times=(times+times3*11)%64
pass=pass+1
else if pass=2 then
times=(times+times3*3)%64
pass=pass+1
else if pass=3 then
times=(times+times3*7)%64
pass=pass+1
else if pass=4 then
times=(times+times3*13)%64
pass=pass+1
else if pass=5 then
times=(times+times3*times3)%64
pass=pass+1
else if pass=6 then
times=(times+times3*times3*times3)%64
pass=1
Step 11: call function randomization() with
current value of times
Step 12: copy the elements of mat[16][16] into
key[256]
Step 13: read the next block
Step 14: goto Step 7
Step 15: copy the last block (residual character if any) into
str[]
Step 16: call function encryption() using str[] and the no. of
residual characters
Step 17: Return
2.4.2 Algorithm of function encryption(str[], n)

Step 1: Start the encryption() function
Step 2: ch1=0
Step 3: Calculate ch=(str[0]+key[0]+ch1)%256
Step 4: Write ch into the output file
Step 5: ch1=ch
Step 6: i=1
Step 7: If i >= n then goto Step 13
Step 8: ch=(str[i]+key[i]+ch1)%256
Step 9: Write ch into the output file
Step 10: ch1=ch
Step 11: i=i+1
Step 12: goto Step 7
Step 13: Return
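The per-block cipher of Steps 1-13 can be sketched as follows. The encryption follows the steps above; the decryption function is our addition, obtained by inverting the feedback rule (the previous ciphertext byte is subtracted along with the key byte):

```python
def vernam_encrypt_block(block, key):
    """ch = (str[i] + key[i] + ch1) % 256, where ch1 is the previous ciphertext byte."""
    out, ch1 = [], 0
    for i, b in enumerate(block):
        ch = (b + key[i] + ch1) % 256
        out.append(ch)
        ch1 = ch
    return out

def vernam_decrypt_block(block, key):
    """Inverse transform: subtract the key byte and the previous ciphertext byte."""
    out, ch1 = [], 0
    for i, c in enumerate(block):
        out.append((c - key[i] - ch1) % 256)
        ch1 = c
    return out

key = [(7 * i + 3) % 256 for i in range(256)]  # stand-in for the randomized key[256]
data = [65] * 16                               # a run of 'A' bytes
enc = vernam_encrypt_block(data, key)
assert vernam_decrypt_block(enc, key) == data
```

The feedback term ch1 is what makes a one-byte change in the plaintext propagate through every subsequent ciphertext byte.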
2.4.3 Algorithm for function randomization()

The randomization of the key matrix is done using the following function calls:
Step-1: call Function cycling()
Step-2: call Function upshift()
Step-3: call Function downshift()
Step-4: call Function leftshift()
Step-5: call Function rightshift()
Note: cycling, upshift, downshift, leftshift and rightshift are matrix operations performed on the matrix formed from the key. These are the steps followed in the MSA algorithm [4] proposed by Nath et al.
After the execution of the modified Vernam Cipher, each block is written to the file and further processed by the next steps of the cipher method.
2.5 Modified MSA Randomization

Nath et al. [4][7] proposed a symmetric key method in which a random key generator produces the initial key, which is then used to encrypt the given source file. The MSA method [4] is basically a substitution method: we take two characters from the input file, look up the corresponding characters in the random key matrix, and store the encrypted data in another file. The MSA method provides multiple encryptions and multiple decryptions. The key matrix (16x16) is formed from all characters (ASCII codes 0 to 255) in a random order.
The randomization of key matrix is done using the
following function calls:
Step-1: Function cycling()
Step-2: Function upshift()
Step-3: Function rightshift()
Step-4: Function downshift()
Step-5: Function leftshift()
N.B.: cycling, upshift, downshift, leftshift and rightshift are matrix operations performed on the matrix formed from the key. A detailed description of these operations is given in the MSA algorithm [4].
The above randomization process is applied n1 times, and each time the sequence of operations is changed to make the system more random. Once the randomization is complete, one complete block is written to the output key file.
In our methods SD-AEI [6] and SD-AIES, we use the same concept of randomization, but instead of randomizing the key matrix, we apply the randomization technique to the whole file, block by block. The whole file is broken up into a number of blocks of data, the randomization technique is applied to each block of the image file, and after the randomization is complete each block is written to the output file to form the final encrypted image file. The modified randomization algorithm followed in SD-AIES is:
Step-1: Function cycling()
Step-2: Function upshift()
Step-3: Function rightshift()
Step-4: Function left_diagonal_randomization()
Step-5: Function cycling() for “code” number of times
Step-6: Function downshift()
Step-7: Function leftshift()
Step-8: Function right_diagonal_randomization()
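The exact definitions of these operations (including cycling and the diagonal randomizations) are given in the MSA algorithm [4]; as an illustration only, the four basic shifts can be modeled as row and column rotations of a block:

```python
# Illustrative stand-ins for the MSA shift operations applied to a data block;
# the exact definitions are given in [4], so these are assumptions for sketching.
def upshift(m):    return m[1:] + m[:1]                 # rotate rows upward
def downshift(m):  return m[-1:] + m[:-1]               # rotate rows downward
def leftshift(m):  return [r[1:] + r[:1] for r in m]    # rotate each row left
def rightshift(m): return [r[-1:] + r[:-1] for r in m]  # rotate each row right

block = [[0, 1], [2, 3]]
assert upshift(downshift(block)) == block  # each shift has an exact inverse
print(upshift(block))  # [[2, 3], [0, 1]]
```

Because every shift is invertible, applying the sequence of steps in reverse with the inverse operations recovers the original block during decryption.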
3. IMPORTANT FEATURES

In this section we discuss a few special features of the SD-AIES method, as follows:
3.1 Effectiveness of the Generalized Modified Vernam Cipher

The use of the modified Vernam Cipher is central to this method. Its feedback mechanism is the decisive element and makes the entire cipher system very secure: even a slight change in the original file makes the entire content of the final encrypted file totally different from the previous encrypted file.
For example, we chose two test cases to show the effectiveness of the modified Vernam Cipher by analyzing the frequency of the characters of the encrypted file, i.e. by studying its spectral analysis. The following table shows the test cases:
TABLE 1: Test Cases for Modified Vernam Cipher
Serial No. Test Case
1 File containing 2048 bytes of A
(AAAAAAAAAAAAAAAA…………
……AAAAAAAAAAA)
2 File Containing 2047 bytes of A and 1
byte of B
(AAAAAAAAAAAAAAAAAAAAAA
AAAAAA…………………AAAAAAA
AAAAAAB)
Fig 2.1 shows the spectral analysis of the encrypted file of test case 1, and Fig 2.2 shows the spectral analysis of the encrypted file of test case 2.
Fig 2.1: Spectral Analysis of Test Case 1
Fig 2.2: Spectral Analysis of Test Case 2
Thus from the spectral analysis it is evident that there is no pattern match between the two test cases and the peaks are totally different. This shows that even a slight change in the original file makes the final encrypted file totally different.
3.2 Bits Rotation and Reversal Method vs. Modified Bits Rotation and Reversal Method

The Bits Rotation and Reversal method used in the SD-EI and SD-AEI image encryption techniques was dependent on the length of the password: the bits were rotated and reversed according to the effective length of the password. For example, if the password is "Somdip", then LR (the effective length of the password) = 6 according to the Bits Rotation and Reversal technique, and thus 6 bits of every pixel are rotated and then reversed. However, many passwords have the same length, so the result of this method is the same for all of them. For example, whether the password is "123456" or "DeySYS", the effective length (LR) is still 6, and the result of the Bits Rotation and Reversal technique is the same for both.
To make the method more effective and secure, we add the ASCII values of all the bytes of the password to generate N, and then use the effective number (NR) instead of the effective length (LR) for the Bits Rotation and Reversal technique. For example, for the passwords "Somdip", "123456" and "DeySYS", the effective numbers are 4, 1 and 6 respectively, so the result of the Bits Rotation and Reversal technique differs for all three passwords. Even in the modified technique, however, two passwords may still produce the same effective number (NR), both because NR is confined to the range 0-6 and because different passwords can have the same ASCII sum. For example, the passwords "DeY" and "DYe" generate the same effective number (NR). This is a drawback of the system, but the method is still better than the original Bits Rotation and Reversal method.
4. BLOCK DIAGRAM OF SD-AIES METHOD

In this section, we provide the block diagram of the SD-AIES method.

Fig 3: Block Diagram of SD-AIES Method
5. RESULTS AND DISCUSSIONS

We provide a few results of the proposed SD-AIES method in the following table.

TABLE 2: Results of SD-AIES (original files shown alongside their encrypted files; the encrypted files produce no viewable preview)
From the results alone it is not possible to judge the effectiveness of the SD-AIES method, because the end results of both SD-AEI and SD-AIES look the same: there is no preview of the encrypted file, since the internal structure of the file is destroyed by the encryption methods.
6. CONCLUSION AND FUTURE SCOPE

In this paper, the author proposes a standard method of image encryption, which first tampers with the image and then disrupts the file structure of the image file. The SD-AIES method successfully encrypts the image and maintains its security and authentication. The inclusion of the modified Bits Rotation and Reversal technique and the modified Vernam Cipher with feedback mechanism makes the system even stronger than before. In the future, the security of the method can be further enhanced by adding more secure bit and byte manipulation techniques to the system, and the author has already started to work on that.
7. ACKNOWLEDGMENTS

Somdip Dey would like to thank his fellow students and his professors for their constant enthusiasm and support. He would also like to thank Dr. Asoke Nath, founder of the Department of Computer Science, St. Xavier's College [Autonomous], Kolkata, India, for providing feedback on the method and helping with the preparation of the project. Somdip Dey would also like to thank his parents, Sudip Dey and Soma Dey, for their blessings and constant support, without which the completion of the project would not have been possible.
8. REFERENCES

[1]. Mitra et al., "A New Image Encryption Approach using Combinational Permutation Techniques," IJCS, 2006, vol. 1, no. 2, pp. 127-131.
[2]. Saroj Kumar Panigrahy, Bibhudendra Acharya, Debasish Jena,
“Image Encryption Using Self-Invertible Key Matrix of Hill
Cipher Algorithm”, 1st International Conference on Advances in
Computing, Chikhli, India, 21-22 February 2008.
[3]. Bibhudendra Acharya, Saroj Kumar Panigrahy, Sarat Kumar
Patra, and Ganapati Panda, “Image Encryption Using Advanced
Hill Cipher Algorithm”, International Journal of Recent Trends
in Engineering, Vol. 1, No. 1, May 2009, pp. 663-667.
[4]. Asoke Nath, Saima Ghosh, Meheboob Alam Mallik, "Symmetric Key Cryptography using Random Key Generator", Proceedings of the International Conference on Security and Management (SAM'10), Las Vegas, USA, July 12-15, 2010, Vol. 2, pp. 239-244.
[5]. Somdip Dey, “SD-EI: A Cryptographic Technique To Encrypt
Images”, Proceedings of “The International Conference on
Cyber Security, CyberWarfare and Digital Forensic (CyberSec
2012)”, held at Kuala Lumpur, Malaysia, 2012, pp. 28-32.
[6]. Somdip Dey, “SD-AEI: An advanced encryption technique for
images”, 2012 IEEE Second International Conference on Digital
Information Processing and Communications (ICDIPC), pp. 69-
74.
[7]. Asoke Nath, Trisha Chatterjee, Tamodeep Das, Joyshree Nath, Shayan Dey, "Symmetric key cryptosystem using combined cryptographic algorithms - Generalized modified Vernam Cipher method, MSA method and NJJSAA method: TTJSA algorithm", Proceedings of WICT 2011, Mumbai, 11-14 Dec 2011, pp. 1175-1180.
[8]. Somdip Dey, “SD-REE: A Cryptographic Method To Exclude
Repetition From a Message”, Proceedings of The International
Conference on Informatics & Applications (ICIA 2012),
Malaysia, pp. 182 – 189.
[9]. Somdip Dey, “SD-AREE: A New Modified Caesar Cipher
Cryptographic Method Along with Bit- Manipulation to Exclude
Repetition from a Message to be Encrypted”, Journal:
Computing Research Repository - CoRR, vol. abs/1205.4279,
2012.
[10]. Somdip Dey, Joyshree Nath and Asoke Nath. Article: An
Advanced Combined Symmetric Key Cryptographic Method
using Bit Manipulation, Bit Reversal, Modified Caesar Cipher
(SD-REE), DJSA method, TTJSA method: SJA-I
Algorithm. International Journal of Computer
Applications 46(20): 46-53, May 2012. Published by Foundation
of Computer Science, New York, USA.
[11]. Somdip Dey, Joyshree Nath, Asoke Nath,"An Integrated
Symmetric Key Cryptographic Method – Amalgamation of
TTJSA Algorithm, Advanced Caesar Cipher Algorithm, Bit
Rotation and Reversal Method: SJA Algorithm", IJMECS,
vol.4, no.5, pp.1-9, 2012.
[12]. Somdip Dey, Kalyan Mondal, Joyshree Nath, Asoke
Nath,"Advanced Steganography Algorithm Using Randomized
Intermediate QR Host Embedded With Any Encrypted Secret
Message: ASA_QR Algorithm", IJMECS, vol.4, no.6, pp.59-67,
2012.
[13]. Somdip Dey, Joyshree Nath, Asoke Nath, "Modified Caesar Cipher method applied on Generalized Modified Vernam Cipher method with feedback, MSA method and NJJSA method: STJA Algorithm", Proceedings of FCS'12, Las Vegas, USA.
[14]. http://en.wikipedia.org/wiki/RSA_(algorithm) [ONLINE]
[15]. http://en.wikipedia.org/wiki/Elliptic_curve_cryptography
[ONLINE]
[16]. Behrouz A. Forouzan, "Cryptography & Network Security", Tata McGraw-Hill Book Company.
Measuring Security of Web Services in Requirement Engineering Phase

Davoud Mougouei, Wan Nurhayati Wan Ab. Rahman, Mohammad Moein Almasi
Faculty of Computer Science and Information Technology
Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
[email protected], [email protected], [email protected]
ABSTRACT

Addressing security in the early stages of web service development has always been a major engineering trend. However, to assure the security of web services, security evaluation must be performed in a rigorous and tangible manner. The results of such an evaluation, if performed in the early stages of the development process, can be used to improve the quality of the target web service. On the other hand, it is impossible to remove all security faults during the security analysis of web services. As a result, absolute security is never achievable, and a security failure may occur during the execution of a web service. To avoid security failures, a measurable level of fault tolerance must be achieved through partial satisfaction of security goals, and any proposed measurement technique must account for this partiality. Even though there are some approaches to assessing the security of web services, there is still no precise model for evaluating security goal satisfaction, specifically during the requirement engineering phase. This paper introduces a Security Measurement Model (SMM) for evaluating the Degree of Security (DS) in the security requirements of web services, taking into consideration partial satisfaction of security goals. The proposed model evaluates the overall security of the target service by measuring the security of the service's Security Requirement Model (SRM). The proposed SMM also takes into account cost, technical ability, impact and flexibility as the key features of security evaluation.

KEYWORDS

Vulnerability; Web Service; Threat; Security Fault; Web Service Security
1 INTRODUCTION

Security has always been a vital concern in the development of web services. However, current software development methods largely neglect the engineering of security into system analysis and, in particular, the requirement elicitation process [1]. Even though some researchers have attempted to integrate security analysis into the requirement phase, it is not yet clearly specified how to accomplish this during the requirements engineering process [2]. On one hand, it is not always possible to fully mitigate the vulnerabilities or threats within the service; on the other hand, the existence of faults in the service may ultimately lead to a security failure. Avoiding security failure of the target web service requires flexibility and tolerance in the presence of security faults [3]. To facilitate this, fault tolerance must be addressed in the security requirements of the target web service. In [4], we presented a goal-based approach to building fault tolerance into the security requirements of security-critical systems. The method contributes a flexible model for the requirements of security-critical systems, and based on this model we have constructed a security requirement model for web services. Our intent in the current work is to help security analyzers assess the Overall Degree of Security (ODS) of the target service by explicitly factoring in security attributes such as impact, technical ability, cost and flexibility of the security countermeasures introduced by the security requirement model of the target web service. For this reason, we divide the applied security mitigations into four categories as
described in [4] to support evaluation of the degree of security of security goals with respect to the cost, flexibility, technical ability and impact of the security goals as countermeasures to security threats. Hence, an SMM has been introduced to address the assessment of security in the security requirements of web services. Its integration into the SRM makes the proposed models amenable to analysis and alteration at requirement engineering time. In our previous work [4] we introduced mitigation techniques to mitigate security faults and ultimately produce a flexible model for a given system specification. In this paper we also measure the partial satisfaction of the security goals proposed in [4] to address fault tolerance in the security specification of the system. This paper has three main contributions. Firstly, it presents a model for evaluating the degree of security in the security requirements of a web service. Secondly, it introduces a method for calculating the degree of security for all of the security goals, and consequently for the SRM of the web service, by explicitly factoring the security goal attributes and the characteristics of the logical model of the SRM [4] into the evaluation process.
The validity of our approach is demonstrated by applying it to the SRM of a typical online money transfer service (MTS), a service that transfers money to beneficiary accounts. The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 presents our measurement model and introduces MTS as our running application. Section 4 describes the DS attributes, and Section 5 gives the details of evaluating the security of MTS. Finally, in Section 6, we conclude this paper and discuss future work.

2 RELATED WORKS

With the development and utilization of web services, many researchers have become concerned about the security of web services, which has led to different evaluation models and frameworks from different perspectives. In [5], Zhang proposed an integrated security framework based on authentication, authorization, integrity and confidentiality factors, along with the integration of these mechanisms to make web services more secure.
Some researchers have put forward improvements to web service technologies; for instance, paper [6] focuses on enhancing the security of a web service's WSDL file and proposes a model for encrypting the WSDL document to handle its security problems. Moreover, Li Jiang et al. [7] state that research in the area of web services mainly concerns the security of the web service rather than the evaluation of its secureness; they propose an evaluation model, based on the STRIDE model, that determines whether or not a web service is secure. Gonzalez et al. [8] offered sets of metrics to assess e-commerce website requirements in terms of security and usability by means of human-computer interaction; their evaluation model is based on the GQM approach. Furthermore, in [9], the author proposed a security measurement model that introduces different categories of security measurements and their corresponding factors in order to detect potential security defects. Wei Fu et al. [10] developed web service security analysis tools that look through the source code, generate the dependency graph, and thereby identify unsafe methods and their spread, which helps make these methods invisible to outside users after the web service is published.
The authors of [11] proposed a client-transparent fault tolerance model for web servers that recognizes server errors and redirects requests to a reserved backup server in order to reduce service failures. Santos et al. [12] proposed a fault tolerance infrastructure that adds an extra layer acting as a proxy between client requests and the service provider's responses to ensure client-transparent fault tolerance. In [13], the author also addressed uncertainty factors in the environment through partial satisfaction of goals in self-adaptive systems. Web services are required to operate with a high level of security and dependability, and several studies have proposed web service strategies to address this issue. Merideth et al. [14] introduced "Thema", a Byzantine fault-tolerance middleware system that executes Byzantine fault tolerance by capturing all requests and responses.
Figure 1. OBS’ conceptual model in terms of use cases and misuse cases.
3 THE PROPOSED MEASUREMENT MODEL

3.1 Running Application

To illustrate the validity of our approach, we applied it to a case study provided in [15] describing an Online Banking System (OBS) as a security-critical system (SCS). We have focused on the Money Transfer Service (MTS) in OBS. OBS provides some standard banking services, including a money transfer service over the internet. Bank accounts are a tempting target for hackers; for this reason, MTS transactions must be protected to keep financial losses to a minimum. The availability of MTS is as important as its confidentiality and integrity. The MTS also has a server which should be protected from any possible misuse. In addition, an attacker may exploit the MTS' internal communication network to threaten transactions. MTS must also prevent unauthorized online access to the service; thus, it supports user authentication by checking the user name and password. An attacker can still guess a user name or password, but this is supposed to be difficult. MTS must offer reasonable assurance that customers' accounts are secure. The main threat that concerns MTS is that an attacker will transfer money out of customers' accounts.
As a web service, MTS relies on security concepts to work properly. Therefore, 1) maintaining integrity, 2) achieving a high level of confidentiality and 3) keeping OBS available to users, as the key features of security [3], are extremely important.

3.2 Methodology

Our proposed approach contains several steps. For a given security requirement model, first the security goals and requirements are categorized in terms of the mitigation technique by which they are refined. Afterward, the DS is calculated for each security goal (requirement) based on its corresponding category attributes and formula. Note that all goals and requirements are elicited from the SRM of the target web service. The SRM is formally described with respect to existing service requirement artifacts such as attack trees [16] or use case and misuse case [17] diagrams. The SRM is a tree-like model with "AND-OR" relations among security goals. Therefore, after calculating the degree of security for all of the security requirements (the leaves), the calculation is propagated to the higher levels of the SRM based on the logical relations among security goals, also considering the mitigation technique through which each goal has been refined. In the last step, the overall degree of security of the SRM is calculated for the target web service.
3.3 Model Description

The SRM is supposed to reflect the security goals of the web service based on the use case model of the system illustrated in Figure 1. Every security goal in the SRM is refined through the application of one of the four mitigation techniques mentioned in [18]. Based on the mitigation technique used to refine the goal, the calculation of DS and the attributes considered for this calculation may differ. On the other hand, some attributes should be taken into consideration to assess the goal, either individually or as part of the SRM, with respect to the category of mitigation it belongs to. These attributes include the technical ability, impact, cost of implementation and flexibility of the goal in the presence of security faults. Table 1 describes the proposed SMM in terms of these categories and attributes.

Table 1. Categories and Attributes in Proposed SMM
Mitigation Technique          | Attributes
Add low level sub goals (ALG) | Cost of implementation (C); Technical ability of goal (T); Impact of goal (I); Flexibility of goal (F)
Relaxation (RLX)              | Sum of DSs of descendants (S); Product of DSs of descendants (P); Flexibility of goal (F)
Add high level goal (AHG)     | Sum of DSs of descendants (S); Product of DSs of descendants (P); Flexibility of goal (F)
No refinement (NF)            | -
For each goal in the SRM, the DS value is calculated based on equation (1). Finally, the Overall Degree of Security (ODS) for the SRM of the target web service is calculated based on equation (2).

DS_i: degree of security of goal i
T_i: technical ability of goal i
I_i: impact of goal i
C_i: cost of implementation of goal i
F_i: flexibility of goal i
$$
\forall \, g_i:\quad
DS_i =
\begin{cases}
0.5 \times \left( 0.01 \times \dfrac{T_i \times I_i}{C_i} + F_i \right), & g_i \text{ is a leaf},\quad 0 \le T_i \le 1,\; 1 \le C_i \le 100,\; 0 \le I_i \le 100,\; 0 \le F_i \le 1 \\[6pt]
0.5 \times \left( F_i + \sum_{\text{all descendants } k} DS_k \right), & g_i \text{ is an OR-node} \\[6pt]
0.5 \times \left( F_i + \prod_{\text{all descendants } k} DS_k \right), & g_i \text{ is an AND-node}
\end{cases}
\tag{1}
$$
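The leaf case of equation (1) can be made concrete with a small Python sketch (illustrative only, not part of the paper); the sample values are those reported for requirement R1.1.3.2.1.1.1 in Table 3.

```python
def leaf_ds(t, i, c, f):
    """Degree of security of a leaf goal per equation (1):
    DS = 0.5 * (0.01 * T * I / C + F),
    with 0 <= T <= 1, 1 <= C <= 100, 0 <= I <= 100, 0 <= F <= 1."""
    assert 0 <= t <= 1 and 1 <= c <= 100 and 0 <= i <= 100 and 0 <= f <= 1
    return 0.5 * (0.01 * t * i / c + f)

# Values for R1.1.3.2.1.1.1 from Table 3: C=30, T=0.7, I=90, F=0.1
print(round(leaf_ds(0.7, 90, 30, 0.1), 5))  # 0.0605
```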
4 SECURITY ATTRIBUTES

In this section, the attributes taken into account for the calculation of the DS of each goal, and for the ODS, are discussed.

4.1 Technical Ability (T)

Technical ability, as one of the attributes for the calculation of DS, reveals the ease of implementing the goal in the subsequent stages of development, in terms of the complexity of the goal and the availability of expertise in the development team. The technical ability can be calculated using equation (3). The technical complexity of the implementation in equation (3) can be obtained with any acceptable method for estimating program complexity. However, since our proposed measurement model requires the complexity at the requirements engineering stage, techniques such as Albrecht's [18], which can estimate complexity in the early stages of development, are advised. Nonetheless, any method or technique capable of
$$
ODS = \frac{\sum_{i=1}^{n} DS_i \times SEV_i}{\sum_{i=1}^{n} SEV_i},
\qquad 0 < SEV_i \le 100
\tag{2}
$$

ODS: Overall Degree of Security
DS_i: degree of security of goal i
SEV_i: severity of the threat mitigated by goal i
doing this calculation based on the SRM is applicable. Technical ability, as given in equation (3), is a number between zero and one.

T_i: technical ability of goal i
TCI_i: technical complexity of implementation of goal i

$$
T_i = \frac{1}{TCI_i}, \qquad 1 < TCI_i \le 100
\tag{3}
$$
4.2 Impact (I)

Impact is another attribute for calculating the DS of security goals in the requirement model of the web service. This attribute reflects the efficiency of the mitigation constructed by the security goal; in other words, it describes to which extent the security goal is able to mitigate the corresponding security threat. This parameter takes a value between zero and one hundred, specified by a security expert, who can either be a member of the development team or an external expert.

4.3 Cost of Implementation (C)

Cost is one of the main factors for evaluating security requirements. Sometimes a security requirement can make a great contribution to the security of the service, but its cost of implementation does not allow the development team to implement it. On the one hand, development cost is one of the key factors in the web service market, so a lower development cost contributes to more profit and to keeping abreast of technology changes in the web market. On the other hand, the extent to which security is critical for a web service determines the budget that can be spent on security enhancements. The value for cost is specified by the development team and used in the calculation of the DSs.

4.4 Flexibility (F)

Since it is not always possible to completely satisfy the goals, we sometimes need to accept partial goal satisfaction [12]. We address this partiality in terms of the relaxed attributes in RELAX
statements. Accordingly, we use fuzzy temporal logic as a semantics for the applied syntax to take security faults into account during the RE process [18]. This way we can integrate fault tolerance into the target system's SRM. If partial satisfaction of a security goal is acceptable, we RELAX the goal. We apply this technique when threats can be partially mitigated. In this case, we add flexibility by explicitly factoring the security faults into the SRM. This contributes to a fault-tolerant model of the target system, which can resist in the presence of unavoidable security faults. According to the proposed model, we calculate the flexibility of each goal based on the category it belongs to; basically, the flexibility of a goal depends on the mitigation technique it has been derived by. The calculation of flexibility for all of the categories is given in equation (4). As depicted there, measuring the DS in the proposed SMM takes the fuzziness of RELAX statements into account by incorporating the membership function of the corresponding fuzzy set into the calculation of the flexibility of the goal. This applies only to goals belonging to the RLX set.

F_i: flexibility of goal i
g_i: goal i
$$
F_i =
\begin{cases}
0.2, & g_i \in AHG \\
0.1, & g_i \in ALG \\
M\left( \Delta(g_i) - OPT_i \right), & g_i \in RLX
\end{cases}
\tag{4}
$$

where:

RLX = {g_i | RELAX-ation is used to derive g_i}
Δ(g_i) = Σ over all measurements k of Δ_k(g_i)
Δ_k(g_i): measured value for goal g_i in the k-th measurement, in the presence of security faults
OPT_i: optimal value for satisfaction of goal g_i, with M(Δ_k(g_i) − OPT_i) = 1 if Δ_k(g_i) = OPT_i
M(x): membership function of fuzzy set S, with M(x) ∈ [0, 1] and M(0) = 1
S = {(x_i, M(x_i)) | M(x_i) ∈ [0, 1], x ∈ ℝ}
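Equation (4) can be rendered as a small Python sketch (illustrative only; the membership function used here is the one the running example later defines in equation (5), with the optimum value hardToGuess = 50 and a measured Δ of 35):

```python
def flexibility(category, membership=None, delta=None, opt=None):
    """Flexibility F_i of a goal per equation (4): AHG goals get 0.2,
    ALG goals get 0.1, and RELAXed (RLX) goals evaluate the fuzzy
    membership function M at (delta - opt)."""
    if category == "AHG":
        return 0.2
    if category == "ALG":
        return 0.1
    if category == "RLX":
        return membership(delta - opt)
    raise ValueError("NF goals carry no flexibility attribute")

# Membership function of the running example's equation (5):
m = lambda x: 1 - abs(x) / 100

print(flexibility("RLX", membership=m, delta=35, opt=50))  # 0.85
```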
Figure 2. SRM for MTS. (Junction points represent AND-relations while their absence means OR-relation)
A fuzzy set is a set whose elements have degrees of membership. Fuzzy set theory permits the gradual assessment of the membership of elements in a set, described by a membership function M(x) taking values in the real interval [0, 1]. In other words, a fuzzy set is a pair (S, M), where S is a set and M: S → [0, 1] captures the degree of membership of each element of S.

5 APPLYING THE PROPOSED SMM TO MTS

In this section we apply the proposed SMM to the MTS through the following steps.

5.1 Step 1: Categorization of All Goals

Step 1 is to categorize the security goals in the SRM based on the mitigation technique they have been derived by. As discussed before, there are four different mitigation techniques. By the end of the categorization process, no requirement will fall under the NF category. This is because no
requirement is derived by the NF mitigation. An excerpt of the SRM for the MTS is given in Figure 2. The top-level security goal is to protect the MTS against possible attacks (i.e., Protect [MTS]). The MTS model is developed through several steps. First, we initiate the SRM with the refinement of the top-level goal to protect the service. As a web service, the MTS should also be reliable and available to users. From the identified assets we can specify the system's security goals at the highest level of the SRM to protect those assets. The SRM may include other security requirements too, but in this paper we concentrate on only one of these goals (R1) to apply the proposed SMM to. At level two of the SRM we have likewise reduced the goals to only R1.1. This means, for instance, that maintaining the security of bank accounts (R1) in the SRM includes other security goals, which we have omitted to simplify the model for applying our proposed SMM. To categorize the goals in the SRM, we look into the formal specification of the SRM to determine the mitigation technique each goal was introduced by. Otherwise it might be difficult and subjective to
categorize some of the goals as high-level or low-level goals. Normally, high-level goals are goals whose addition to the model leads to radical changes in the behavior of the target service. Consider the situation in which the ID and password are guessed by an attacker and the MTS cannot tolerate this security violation. In this case, we have to add redundant behavior in terms of high-level security goal(s) to tolerate the threat. As depicted in Figure 2, we may add supplementary authentication mechanisms such as challenge-response as high-level security goals to avoid unauthorized access to accounts in case of a violation of R1.1.3.2. However, these new goals represent new behavior, and the closer to the top-level goal they are, the greater the cost of implementation becomes. The new goal is OR-ed with the other high-level goals. As shown, the definition of high or low level is relative: we call a goal a high-level goal when adding it to the system's SRM causes radical changes in the specification of the original security requirement model. We have listed the categorized security goals of the SRM in Table 2. As depicted in Table 2, there is only one RELAXed [19] requirement (R1.1.3.2) for the target web service.

Table 2. Categorized Security Goals of MTS
Category                      | Goal / Requirement
Add low level sub goals (ALG) | R1.1.3.2.1.1.1, R1.1.3.2.1.1.2, R1.1.3.2.1.2.1, R1.1.3.2.2.1.1, R1.1.3.2.2.2.1
Relaxation (RLX)              | R1.1.3.2
Add high level goal (AHG)     | R1, R1.1, R1.1.3, R1.1.4, R1.1.3.2.1, R1.1.3.2.2, R1.1.3.2.1.1, R1.1.3.2.1.2, R1.1.3.2.2.1, R1.1.3.2.2.2
No refinement (NF)            | -

5.2 Step 2: Calculation of DS for Category ALG

In this step we calculate the DS for the low-level requirements (leaves) in the SRM. The calculations are performed based on equation (1) and listed in
Table 3. For example, the degree of security of the low-level goal R1.1.3.2.2.1.1, which limits the number of password trials, is equal to 0.122, the highest among the low-level goals in the SRM. Although enforcing encryption contributes an acceptable level of mitigation, its comparatively low technical ability and high cost yield a DS of only 0.0535, the lowest in Table 3.

Table 3. Calculation of DS for Category ALG

Requirement    |  C |  T  |  I |  F  |   DS
R1.1.3.2.1.1.1 | 30 | 0.7 | 90 | 0.1 | 0.06050
R1.1.3.2.1.1.2 | 50 | 0.5 | 70 | 0.1 | 0.05350
R1.1.3.2.1.2.1 |  5 | 0.9 | 30 | 0.1 | 0.07700
R1.1.3.2.2.1.1 |  5 | 0.9 | 80 | 0.1 | 0.12200
R1.1.3.2.2.2.1 |  5 | 0.9 | 60 | 0.1 | 0.10400
R1.1.4         | 20 | 0.9 | 90 | 0.1 | 0.07025
5.3 Step 3: Calculation of DS for Categories AHG and RLX

In this step we calculate the DS for the high-level requirements in the SRM. The calculations are performed based on equation (1) and listed in Table 4. In order to calculate the DS for AHG goals, we first need to calculate the DS for the ALG goals, as done in Step 2. Then we propagate the calculated values to the higher levels of the SRM and recalculate the higher-level goals' DS by factoring the flexibility into the calculation. The flexibility, as described above, is calculated based on equation (4).
Concomitantly with the calculation of DS for the high-level goals, we calculate the DS for the RELAXed goals. As discussed before, and based on equation (4), measuring the DS for RELAXed goals in the proposed SMM takes the fuzziness of RELAX statements into account by incorporating the membership function of the corresponding fuzzy set into the calculation of the flexibility of the goal. This applies only to goals belonging to the RLX category. How the calculated DS is propagated to higher levels of the model depends on the relations
among the nodes in the logical model of the SRM. If a node (goal) in the SRM is an OR-node, its DS is calculated based on the sum of the DSs of its descendants; if it is an AND-node, its DS is calculated based on the product of the DSs of its descendants.

Table 4. Calculation of DS for Categories AHG and RLX

Category | Requirement  |    S    |    P    |  F  |   DS
AHG      | R1           | 0.28087 |    -    | 0.2 | 0.24043
AHG      | R1.1         | 0.36174 |    -    | 0.2 | 0.28089
AHG      | R1.1.3       | 0.52347 |    -    | 0.2 | 0.36174
AHG      | R1.1.3.2.1   | 0.24019 |    -    | 0.2 | 0.22006
AHG      | R1.1.3.2.2   | 0.31300 |    -    | 0.2 | 0.25650
AHG      | R1.1.3.2.1.1 |    -    | 0.00324 | 0.2 | 0.10162
AHG      | R1.1.3.2.1.2 | 0.07700 |    -    | 0.2 | 0.13850
AHG      | R1.1.3.2.2.1 | 0.12200 |    -    | 0.2 | 0.16100
AHG      | R1.1.3.2.2.2 | 0.10400 |    -    | 0.2 | 0.15200
RLX      | R1.1.3.2     |    -    | 0.05645 | 0.85 | 0.45322

We have RELAXed [19] the goal R1.1.3.2 by assigning the RELAX statement 'as many as possible' to the 'relaxed' attribute of requirement R1.1.3.2. So, R1.1.3.2 is described as follows:

"R1.1.3.2: OBS shall generally avoid [ID and Password to Guess] as close as possible to hardToGuess"

The value 'hardToGuess' is a constant representing the optimum value for the difficulty of guessing the password and ID. 'hardToGuess' is the optimum value, not necessarily the maximum value; in other words, the difficulty of guessing the ID and password might be less than the maximum value while still being optimal. This is explained in terms of the fuzzy nature of the RELAX semantics:

"AG ((Δ (avoid ID and Password to Guess) – hardToGuess) ∈ S)"

where S is a fuzzy set whose membership function has value 1 at zero (M(0) = 1) and decreases continuously around zero. "Δ (avoid ID and Password to Guess)" represents the hardness of guessing the ID and password, which is compared to 'hardToGuess'. This means that although we cannot accurately measure the difficulty of guessing
the ID and password for OBS, the system model should use the capabilities of the security resources to provide a best effort at protecting the ID and password from an attacker. In order to calculate the DS for the RELAXed goal R1.1.3.2, we need to calculate the DS for its descendants, derive the S or P parameter accordingly, and also compute the flexibility of the goal. To calculate the flexibility of R1.1.3.2, we need to evaluate the membership value M(Δ(R1.1.3.2) − hardToGuess) based on equation (4). We consider hardToGuess = 50 for R1.1.3.2, which means the optimum difficulty of guessing the ID and password is equal to 50. By checking the MTS model against goal R1.1.3.2 captured by the SRM, in the presence of security faults, we can calculate Δ(R1.1.3.2) for a specific number of runs of the model checker. In our running example we consider Δ(R1.1.3.2) = 35, so we need to calculate M(−15) based on the membership function. We define the membership function for satisfaction of goal R1.1.3.2 in equation (5) as follows:
$$
M\left( \Delta(R1.1.3.2) - hardToGuess \right) = 1 - \frac{\left| \Delta(R1.1.3.2) - 50 \right|}{100}
\tag{5}
$$

From equation (5) we have M(35 − 50) = 1 − |35 − 50|/100 = 0.85, so the flexibility of R1.1.3.2 is equal to 0.85 according to equation (4). Consequently, we can calculate the DS for R1.1.3.2 after propagating the previously calculated DS values of its descendants. The results are listed in Table 4. As can be seen there, for AND-nodes in the SRM we propagate the product of the descendants' DSs, so the S attribute (the sum of the DSs of the descendant nodes) is left blank in the table. For OR-nodes, the P attribute is left blank, because we propagate the sum of the DSs of the descendants. If a node in the logical model of the SRM has only one child, we can consider it either an OR-node or an AND-node; in our running example we treated such nodes as OR-nodes. An example of this kind of node in the SRM of the MTS is R1.1.3.
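The propagation rule of equation (1) for inner nodes can be sketched in Python as well (an illustrative rendering, not from the paper); the figures reproduce the AND-node R1.1.3.2.1.1 and the OR-node R1.1.3.2.2.1 of Table 4.

```python
from math import prod

def node_ds(f, child_ds, kind):
    """Propagate DS to a non-leaf goal per equation (1):
    OR-nodes take the sum of their descendants' DS values,
    AND-nodes take the product; both add flexibility F and halve."""
    agg = sum(child_ds) if kind == "OR" else prod(child_ds)
    return 0.5 * (f + agg)

# AND-node R1.1.3.2.1.1 (children from Table 3): P = 0.0605 * 0.0535
print(round(node_ds(0.2, [0.0605, 0.0535], "AND"), 5))  # 0.10162
# OR-node R1.1.3.2.2.1 (single child with DS = 0.122, per Table 3)
print(round(node_ds(0.2, [0.122], "OR"), 5))  # 0.161
```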
5.4 Step 4: Calculation of ODS for the MTS

In this step we calculate the overall degree of security for the target web service, the MTS. The calculation is performed based on equation (2). As can be seen there, in order to calculate the ODS for the SRM, we need to identify the severity of the faults that the security goals in the SRM mitigate. In our running example (MTS) we assume the fault severities listed in Table 5. The severity of a fault is assumed to be specified by security experts and ranges from zero to one hundred. Based on the values in Table 5, we can calculate the ODS as follows:
$$
ODS = \frac{\sum_{i=1}^{n} DS_i \times SEV_i}{\sum_{i=1}^{n} SEV_i}
= \frac{195.11327}{1035} \approx 0.189
$$
The total degree of security for the MTS is approximately 0.189, which means that if we develop the target web service based on the specification given by the SRM and the current model of the system, the MTS will be able to tolerate security threats to the extent of 0.189. The higher the ODS, the more tolerant the target web service is in the presence of security faults.

Table 5. Calculation of ODS
Category | Requirement    |   DS    | SEV | DS×SEV
AHG      | R1             | 0.24043 | 100 | 24.04340
AHG      | R1.1           | 0.28089 |  80 | 22.46945
AHG      | R1.1.3         | 0.36174 |  75 | 27.13022
AHG      | R1.1.3.2.1     | 0.22006 |  65 | 14.30385
AHG      | R1.1.3.2.2     | 0.25650 |  65 | 16.67250
AHG      | R1.1.3.2.1.1   | 0.10162 |  65 |  6.60520
AHG      | R1.1.3.2.1.2   | 0.13850 |  40 |  5.54000
AHG      | R1.1.3.2.2.1   | 0.16100 |  65 | 10.46500
AHG      | R1.1.3.2.2.2   | 0.15200 |  40 |  6.08000
RLX      | R1.1.3.2       | 0.45322 |  70 | 31.72558
ALG      | R1.1.3.2.1.1.1 | 0.06050 |  65 |  3.93250
ALG      | R1.1.3.2.1.1.2 | 0.05350 |  65 |  3.47750
ALG      | R1.1.3.2.1.2.1 | 0.07700 |  40 |  3.08000
ALG      | R1.1.3.2.2.1.1 | 0.12200 |  65 |  7.93000
ALG      | R1.1.3.2.2.2.1 | 0.10400 |  65 |  6.76000
ALG      | R1.1.4         | 0.07025 |  70 |  4.91750
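Equation (2) applied to the DS and severity values of Table 5 can be checked with a short script (illustrative only; the pairs below are copied from the table):

```python
# (DS, SEV) pairs for all goals in Table 5
goals = [
    (0.24043, 100), (0.28089, 80), (0.36174, 75), (0.22006, 65),
    (0.25650, 65), (0.10162, 65), (0.13850, 40), (0.16100, 65),
    (0.15200, 40),                 # AHG
    (0.45322, 70),                 # RLX
    (0.06050, 65), (0.05350, 65), (0.07700, 40), (0.12200, 65),
    (0.10400, 65), (0.07025, 70),  # ALG
]

# Weighted average of DS values, weights being the threat severities
ods = sum(ds * sev for ds, sev in goals) / sum(sev for _, sev in goals)
print(round(ods, 3))  # 0.189
```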
Figure 3 presents the whole process, from the categorization of all the goals to the calculation of the overall degree of security for the money transfer system.
Figure 3. Steps required for calculation of ODS
6 CONCLUSION AND FUTURE WORK

In this work we proposed a measurement model for evaluating security in the security requirement model of web services. Our approach takes the security requirement model of the system as input and measures the degree of security of the security requirements based on the mitigation techniques they are refined through. The proposed model also takes into consideration attributes such as the cost, technical ability, impact, and flexibility of the security countermeasures to measure the security of the target service. Consequently, the overall degree of security can be calculated, and the evaluation results can be used to improve the security of the web service. To demonstrate the validity of our model, we applied it to a typical money transfer service as our running example.
REFERENCES 1. Haley, C.B., Moffett, J.D., Laney, R., Nuseibeh, B.: A
framework for security requirements engineering. In: Proceedings of the 2006 International Workshop on Software Engineering for Secure Systems, Shanghai, pp. 35--42. (2006)
2. Mead, N.R., Hough, E.D.: Security Requirements Engineering for Software Systems: Case Studies in Support of Software Engineering Education. In: Software Engineering Education and Training, 2006. Proceedings. 19th Conference on, pp. 149--158. (2006)
3. Avizienis, A., Laprie, J.C., Randell, B., Landwehr, C.: Basic concepts and taxonomy of dependable and secure computing. In: Dependable and Secure Computing, IEEE Transactions on, vol. 1, no. 1, pp. 1--33. (2004)
4. Mougouei, D., Moghtadaei, M., Moradmand, S.: A Goal-Based Modeling Approach to Develop Security Requirements of Fault Tolerant Security-Critical Systems, in: Proceedings of 4th International Conference on Computer and Communication Engineering, Malaysia, pp. 200-205. (2012)
5. Zhang, W.: Integrated Security Framework for Secure Web Services. In: Intelligent Information Technology and Security Informatics (IITSI), Third International Symposium on, pp. 17--183. (2010)
6. Mirtalebi, A., Khayyambashi, M.R.: Enhancing Security of Web Services against WSDL Threats. In: Emergency Management and Management Sciences (ICEMMS), 2nd IEEE International Conference on, pp. 920--923. (2011)
7. Jiang, L., Chen, H., Deng, F. A Security Evaluation Method Based on STRIDE Model for Web Service. In: Intelligent Systems and Applications (ISA), 2010 2nd International Workshop on, 2010, pp. 1--5. (2010)
8. Gonzalez, R.M., Martin, M.V., Munoz-Arteaga, J., Alvarez-Rodriguez, F., Garcia-Ruiz, M.A.: A measurement model for secure and usable e-commerce websites. In: Electrical and Computer Engineering, 2009. CCECE ’09. Canadian Conference on, pp. 77--82. (2009)
9. Lai, S.T.: An Interface Design Secure Measurement Model for Improving Web App Security. In: Broadband and Wireless Computing, Communication and Applications (BWCCA), 2011 International Conference on, pp. 422--427. (2011)
10. Fu, W., Zhang, Y., Zhu, X., Qian, J.: WSSecTool: A Web Service Security Analysis Tool Based on Program Slicing. Services (SERVICES), IEEE Eighth World Congress on, pp. 179--183. (2012)
11. Aghdaie, N., Tamir, Y.: Client-transparent fault-tolerant Web service. In: Performance, Computing, and Communications, 2001. IEEE International Conference on, pp. 209--216. (2001)
12. Santos, G.T., Lung, L.C., Montez, C.: FTWeb: a fault tolerant infrastructure for Web services. In: EDOC Enterprise Computing Conference, 2005 Ninth IEEE International, pp. 95--105. (2005)
13. Cheng, B., Sawyer, P., Bencomo, N., Whittle, J., A Goal-Based Modeling Approach to Develop Requirements of an Adaptive System with Environmental Uncertainty. In: Model Driven Engineering Languages and Systems, vol. 5795, A. Schürr and B. Selic, Eds. Springer Berlin / Heidelberg, pp. 468--483. (2009)
14. Merideth, M.G., Iyengar, A., Mikalsen, T., Tai, S., Rouvellou, I., Narasimhan, P.: Thema: Byzantine-fault-tolerant middleware for Web-service applications. In: Reliable Distributed Systems, 2005. SRDS 2005. 24th IEEE Symposium on, pp. 131--140. (2005)
15. Edge, K.S.: A framework for analyzing and mitigating the vulnerabilities of complex systems via attack and protection trees. Air Force Institute of Technology, Wright Patterson AFB, OH, USA. (2007)
16. Edge, K.S., Dalton, G.C., Raines, R.A., Mills, R.F.: Using Attack and Protection Trees to Analyze Threats and Defenses to Homeland Security. In: Military Communications Conference, 2006. MILCOM 2006. IEEE, pp. 1--7. (2006)
17. Sindre, G., Opdahl, A.L.: Eliciting security requirements by misuse cases. In: Technology of Object-Oriented Languages and Systems, 2000. TOOLS-Pacific 2000. Proceedings. 37th International Conference on, pp. 120--131. (2000)
18. Cheng, B., Sawyer, P., Bencomo, N., Whittle, J.: A goal-based modeling approach to develop requirements of an adaptive system with environmental uncertainty. In: Model Driven Engineering Languages and Systems, A. Schürr and B. Selic, Eds. Springer Berlin / Heidelberg, pp.468--483. (2009)
19. Whittle, J., Sawyer, P., Bencomo, N., Cheng, B., Bruel, J.M.: RELAX: a language to address uncertainty in self-adaptive systems requirements. In: Requirements Engineering, vol. 15, no. 2, pp. 177--196. (2010)
Power Amount Analysis: An efficient Means to Reveal the Secrets in Cryptosystems
Qizhi Tian and Sorin A. Huss Integrated Circuits and Systems Lab (ICS)
TU Darmstadt, Germany Email: tian, [email protected]
ABSTRACT
In this paper we propose a novel approach to reveal the information leakage of cryptosystems by means of a side-channel analysis of their power consumption. We first introduce a novel power trace model based on communication theory to better understand and to efficiently exploit power traces in side-channel attacks. Then, we discuss a dedicated attack method denoted as Power Amount Analysis, which takes more time points into consideration than many other attack methods. We use the well-known Correlation Power Analysis method as the reference in order to demonstrate the figures of merit of the advocated analysis method. We then compare these analysis methods under identical attack conditions in terms of run time, trace usage, misalignment tolerance, and internal clock frequency effects. The resulting advantages of the novel analysis method are demonstrated by mounting both attack methods against an FPGA-based AES-128 encryption module.
KEYWORDS
AES-128 Block Cipher; Power Model; Trace Model; Correlation Power Analysis; Power Amount Analysis.
1 Introduction

In 1999, Kocher et al. introduced Differential Power Analysis (DPA) [1] as a novel analysis method for revealing the secret key of a cryptosystem. DPA then became the premier approach for exploiting the temporal power consumption in practical side-channel attacks on a cryptosystem. In the past decade, many researchers addressed the side-channel properties of cryptosystems and contributed their efforts to this area, resulting both in new and powerful side-channel analysis methods next to DPA and in related countermeasures. Thus, research into the side-channel properties of cryptosystem implementations may be classified into two opposite domains: one is aimed at the development of efficient analysis methods to eventually attack the system, whereas the other is dedicated to the invention of countermeasures to harden the system and thus to reduce or even avoid the success of such attacks. In other words, an open competition between attack and defense of cryptosystems has been established.
With regard to attack methods, Chari et al. published a paper on the so-called template attack in 2002 [2]. In 2004, Brier et al. proposed the Correlation Power Analysis (CPA) method [3]. Later, in 2005, the stochastic analysis approach was introduced by Schindler et al. [4]. In 2012, Tian et al. [17] proposed an attack method called Power Amount Analysis (PAA), aimed at attacking the cryptosystem by exploiting a large set of time points which may contribute to information
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 99-114
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
leakage. Compared to the CPA attack, the PAA attack as outlined in [17] shows clear advantages in terms of run time, trace usage, misalignment tolerance, and internal Clock Frequency Effects (CFE). In the area of the defense of cryptosystems, on the other hand, a large number of countermeasures aimed at reducing the exploitable information leakage, i.e., at hardening a cryptosystem, have been suggested, as detailed in, e.g., [7], [8].
The fundamental idea of the attack methods mentioned above is that adversaries mimic the variation of the power consumption behavior of the cryptosystem at hand in the time domain by constructing a key-dependent power model and by exploiting some mathematical functions. Then, various statistical methods, such as correlation coefficients, least squares, or maximum likelihood, are applied to analyze the relation between the power model and the measured power traces, aimed at eventually unveiling the secret of the cryptosystem.
As discussed in [17], the key-dependent power model is usually based on some states produced by the cryptographic operations and then stored in registers of the cryptosystem hardware. Although these states seem to change instantaneously, in reality it takes time to calculate and to store them. For instance, if this process requires 0.1 ms and is monitored by means of an oscilloscope operated at a sample rate of 1 MHz, i.e., a sample interval of 1 µs, the resulting 100 discrete points, which carry part of the information leakage, depict this process in the monitored power curve in the time domain. In other words, for the sake of efficiency, all these points should be used to reveal the secret key of the cryptographic system.
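The arithmetic of this example can be checked directly (an illustrative snippet; the figures are the ones stated in the text):

```python
# Number of discrete points an oscilloscope captures for one operation
process_duration = 0.1e-3          # 0.1 ms to compute and store the state
sample_rate = 1e6                  # 1 MHz oscilloscope sample rate
sample_interval = 1 / sample_rate  # one sample every microsecond
points = round(process_duration / sample_interval)
print(points)  # 100
```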
In the CPA attack, the secret of the cryptographic system is indicated by the highest correlation peak, i.e., the maximum similarity between the power traces and the key-dependent power model. Compared to the other information-carrying time points, the highest correlation coefficient value comes from one certain fixed time point of the captured traces. This means that the other time points of the recorded power traces do not explicitly contribute to the information leakage exploited by this analysis method; they are only used for reference purposes. In other words, the CPA attack exploits just one time point, while the other time points, which clearly do contain parts of the total information leakage, are discarded.
Compared to the CPA attack, both the template attack and the stochastic approach do use several time points to identify the information leakage in order to reveal the secret key [4]. But their calculation complexity is considerably larger than that of, e.g., CPA: the more time points are used, the more computational time and memory space are needed. Note that in the profiling phase of the template attack and the stochastic approach, a certain amount of traces has to be captured using an identical training device [5], which takes even more execution time.
The PAA, however, exploits a set of time points contributing to the information leakage without significantly increasing the computational effort. This property stems from a new power trace model. Such a method is able to exploit hundreds or even thousands of time points for revealing the secret key. In [17] the authors show that the related computational time is considerably less than that needed for the CPA attack.
Therefore, in this paper we discuss the PAA attack's properties and the resulting advantages when mounting practical attacks in more depth. The paper is structured as follows. In Section 2 we first detail how CPA works and what its trace model looks like. In Section 3, we define a new trace model based on communication theory, introduce the information leakage extraction from this model as well as a related attack procedure, and highlight the resulting advantages. Section 4 presents a comparison of measured results obtained by executing both CPA and the new analysis method PAA under identical conditions on raw, artificially misaligned, and clock-frequency-distorted traces produced from an FPGA-based cryptosystem running AES-128 encryption. Finally, we conclude with a summary of the advantages and benefits of this new analysis method.

2 Correlation Power Analysis
In this section some basic definitions
will be given first, which are used in this and in the upcoming sections.
2.1 Basic Definitions

Input: A set of plaintexts of size D, where the i-th element represents a plaintext and i ∈ [1, D].

Output: A set of ciphertexts c of size D, where the i-th element represents the ciphertext corresponding to the i-th plaintext.

Subkey: All possible subkey values form a set of size K. For instance, a subkey byte has 2^8 possible values, i.e., K = 256.

Power Trace Matrix: A matrix constructed from D power traces, captured by a sampling oscilloscope while the cryptosystem processes all inputs. Each trace has M sample points; the i-th row holds the measured power trace related to the i-th input.
Analysis Region: Because there are a large number of time points in the captured traces, we do not need to analyze each and every one of these points: a small portion of time points containing the information leakage is our analysis target. Therefore, we introduce an area of interest in the captured traces, called the analysis region, which contains the information leakage both of the selected power model and of the part of the consumption the adversary focuses on. For instance, for an AES-128 power trace, if the power model is constructed on the basis of the last-round operation, then the analysis region should be the area where the last-round peak exists, as depicted in Figure 1.
The following abbreviations are applied where appropriate: Expectation (E), Variance (Var), Standard Deviation (Dev), and Correlation Coefficient (CorrCoef).
2.2 Model of Power Traces
The power traces are captured and recorded by a sampling oscilloscope while either encryption or decryption is running. As a matter of fact, the existence of noise in the recorded power
Figure 1: Visualization of Analysis Region
traces is inevitable in practice. The total power consumption of the cryptosystem may then be determined as follows, according to [7, p. 62]:

$$
P_{total} = P_{op} + P_{data} + P_{el.noise} + P_{const}
\tag{2}
$$

At each point in time of the recorded trace, the total power may thus be modeled by (2), where P_op is the operation-dependent power consumption, P_data defines the data-dependent power consumption, P_el.noise denotes the power resulting from the electronic noise in the hardware, which features a normal distribution, i.e., P_el.noise ∈ N(0, σ²), and P_const represents, depending on the technical implementation, some constant power consumption. All these parameters are additive, independent functions of time. But the power model as exploited in CPA is restricted to analyzing just a single point in time rather than the complete power function in the time domain. CPA traverses all the captured traces at a certain point in time to find the biggest information leakage point, i.e., the same time point across different traces. Therefore, the precondition for CPA to mount an attack successfully is that the power consumption values at each time point are yielded by the same operation of the cryptographic algorithm. In other words, the power traces must be correctly aligned in time, as pointed out in, e.g., [7, p. 120].
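A hedged simulation of model (2) for a single trace point (the component magnitudes and the noise level are invented for illustration):

```python
import random

def trace_point(p_op, p_data, sigma, p_const, rng):
    """One sample of P_total = P_op + P_data + P_el.noise + P_const,
    with electronic noise drawn from N(0, sigma^2) as in (2)."""
    return p_op + p_data + rng.gauss(0.0, sigma) + p_const

rng = random.Random(42)  # fixed seed for a reproducible simulation
samples = [trace_point(1.0, 0.3, 0.05, 2.0, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)
# The noise averages out: the mean approaches P_op + P_data + P_const = 3.3
```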
2.3 Power Model

Power models are in general based on both the algorithm running in the hardware and its architecture. Considering, e.g., the last round of the AES-128 algorithm, the Hamming Distance (HD) model of the output register before and after the S-Box, respectively, as discussed in, e.g., [7, p. 132], is given by (3):
h = HW(v_b ⊕ v_a)    (3)
where v_b denotes a certain byte, e.g., the second byte of the register stored in the last round before the S-Box, and v_a its counterpart after the S-Box. In contrast, the Hamming Weight (HW) model of the output register is given by:
h = HW(v_a)    (4)

Another possible classification of the power model is proposed in the following.
Instantaneous Model: A power model based on the state of a certain register at some time point, e.g., the HW power model.

Process Model: A power model based on two states changing within a time interval, e.g., the HD power model.

2.4 CPA Attack Phase
The attack procedure may be summarized as follows:

Step 1: Plaintext or ciphertext and the subkey are mapped by the power model, for example exploiting (3) or (4), to form a matrix, which is named the hypothesis matrix H of size D × K.

Step 2: The power trace matrix T and the hypothesis matrix H are analyzed by calculating the correlation coefficient during StatAnalysis as shown in (1), which yields the result matrix R with

$$ R = \begin{pmatrix} r_{1,1} & \cdots & r_{1,M} \\ \vdots & \ddots & \vdots \\ r_{K,1} & \cdots & r_{K,M} \end{pmatrix} = \mathrm{CorrCoef}\left( \begin{pmatrix} h_{1,1} & \cdots & h_{1,K} \\ \vdots & \ddots & \vdots \\ h_{D,1} & \cdots & h_{D,K} \end{pmatrix}, \begin{pmatrix} t_{1,1} & \cdots & t_{1,M} \\ \vdots & \ddots & \vdots \\ t_{D,1} & \cdots & t_{D,M} \end{pmatrix} \right) \quad (1) $$
size K × M. The elements of R are calculated from:

r_{k,m} = CorrCoef(H_{:,k}, T_{:,m})    (5)

where k ∈ [1, K] and m ∈ [1, M] hold. Then, the unique time point featuring the maximum value of r is determined, which indicates the correct key value.
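As an illustration of Steps 1 and 2, the whole CPA pipeline can be sketched in a few lines of pure Python. The Hamming-weight power model mapping, the toy trace data, the key space of 16 guesses, and the noise level are all hypothetical choices made for this sketch, not the authors' setup:

```python
import math
import random

def hw(v):
    """Hamming weight of an integer, the basis of models (3) and (4)."""
    return bin(v).count("1")

def corrcoef(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def cpa(T, H):
    """Step 2: result matrix R (K x M), element-wise as in (5)."""
    M, K = len(T[0]), len(H[0])
    col = lambda A, j: [row[j] for row in A]
    return [[corrcoef(col(H, k), col(T, m)) for m in range(M)]
            for k in range(K)]

# Step 1 on toy data: D ciphertext bytes, K = 16 key guesses.
rng = random.Random(0)
D, K = 200, 16
ct = [rng.randrange(256) for _ in range(D)]
true_key = 7
H = [[hw(c ^ k) for k in range(K)] for c in ct]   # hypothesis matrix, D x K
# Simulated traces: one leaking time point plus one pure-noise point.
T = [[hw(c ^ true_key) + rng.gauss(0, 0.3), rng.gauss(0, 1)] for c in ct]

R = cpa(T, H)
best = max(range(K), key=lambda k: max(R[k]))      # recovers true_key
```

The structure mirrors (1) and (5): the maximum correlation appears in the row of the correct key guess and the column of the leaking time point.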
3 Power Amount Analysis
In this section, we introduce a trace model that addresses the power consumption in a quite different way, relying on principles adopted from communication theory. Then, based on this model, a new attack method, i.e., PAA, is proposed, which is characterized by the exploitation of a larger set of time points compared to CPA. In PAA we generally exploit more than one hundred points to efficiently extract the information leakage and to attack the cryptosystem successfully, as detailed in the sequel.
3.1 Hardware Model
Communication theory has been developed for more than one hundred years. Many models have been proposed and are currently used to evaluate and simulate communication channels. Among these models, there is a simple one, the Additive White Gaussian Noise (AWGN) channel, as detailed in, e.g., [10, p. 167], [11]. A discrete-time AWGN channel is given as follows:
y[i] = x[i] + n[i]    (6)

where x[i] is the input signal of the channel at the discrete time point i, y[i] denotes the output of the channel, and n[i] represents the additive white Gaussian noise added while the input signal passes through the channel. As generally assumed in the communications field, n ∈ N(0, σ²) holds for the noise, see [10, pp. 29-30].
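As a small illustration (not from the paper), a discrete-time AWGN channel as in (6) can be simulated with Python's standard library; the signal values and σ below are arbitrary:

```python
import random

def awgn_channel(x, sigma, seed=0):
    """y[i] = x[i] + n[i] with n ~ N(0, sigma^2), as in (6)."""
    rng = random.Random(seed)
    return [xi + rng.gauss(0.0, sigma) for xi in x]

signal = [0.0, 1.0, 0.0, 1.0, 1.0]          # arbitrary input samples
received = awgn_channel(signal, sigma=0.1)  # noisy channel output
```

Each output sample deviates from its input by an independent zero-mean Gaussian amount.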
Consequently, we model the power trace of a cryptosystem based on the communication model in (6). As shown in Figure 2, the power consumption of the core chip is taken as the input to the channel, and noise is added while the signal propagates. The time-discrete trace of the power consumption function, captured by the oscilloscope, now consists of two parts, as visualized in Figure 3: the first is the power consumption function of the cryptographic chip while encryption or decryption runs, which contains the information leakage of the cryptosystem; the second part is the noise produced by the hardware, which can be modeled as in the AWGN channel.

Figure 2: Abstract Signal and Noise Model

Figure 3: Visualization of the Power Traces

Assume that the power consumption of the core chip is pure, i.e., without noise, and its temporal value is transferred via the electric circuit network to the oscilloscope. Meanwhile, the AWGN adds the noise to it. Consequently, for each measurement time point, the power traces are modeled as follows:
T[i] = P[i] + N[i]    (7)

where T[i] represents the output power consumption, captured by a sampling oscilloscope at time index i, P[i] is the power consumption generated by the cryptographic core chip while running encryption or decryption, and N[i] is taken from the AWGN, i.e., for any measurement in the time domain, the noise features N ∈ N(0, σ²). Please note, as an important property, that P and N are independent and uncorrelated in the time domain.
Let us assume that an attack using a process model, e.g., the HD model, takes place from time point m_1 to m_2. Now, we intend to calculate the power variation of the cryptographic chip from m_1 to m_2, which contains the information leakage the adversary is looking for, i.e., the power variation caused by the analyzed register's state changes, which matches the HD model very well. Consequently, by calculating Var(P) in the time interval [m_1, m_2] for each trace and then comparing the similarity between the variation of the power consumption of the core chip and the key-dependent hypothesis matrix, one can eventually retrieve the secret key of the cryptographic system.
However, we cannot measure the values P directly. Each time point of the measurement contains a mixture of power consumption P and noise N as defined by (7), so that one cannot separate them easily. Therefore, a straightforward calculation of Var(P) is difficult, but we will show in the sequel how to derive it indirectly.
3.2 Power Consumption of the Hardware Module
In reality, when a hardware device is working, its power consumption is a continuous function in the time domain. But when a sampling oscilloscope is used to monitor this power function, only discrete points are captured. These discrete points constitute the curve P, where P[i] is the instantaneous power at time index i.
The average power consumption in the time index interval [m_1, m_2] can be approximately calculated by

P_avg = (P[m_1] + ⋯ + P[m_2]) / (m_2 − m_1 + 1)    (8)
Equation (8) states that the average power consumption is just the mean of the power values within [m_1, m_2]. If one increases the sample rate, i.e., takes more sample points in the interval [m_1, m_2], then P_avg becomes an increasingly precise estimator of E(P). Because E(N) = 0 holds, (8) can be rewritten as follows:
E(T) = E(P) + E(N) = E(P)    (9)
Here E(T) = E(P), i.e., the mean value of the captured power traces denotes the average power consumption of the core chip in the time index interval [m_1, m_2]. We can assume that this average value contains both the constant average power of the hardware circuits themselves and the average power variation caused by the state changes in the time index interval [m_1, m_2]. We are interested in the power variation caused by the register state changes rather than in the constant average power of the hardware itself. However, the constant power of the hardware circuits is difficult to determine. Therefore, filtering out the variation information from E(T) seems to be impossible.
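The estimator argument behind (8) and (9), namely that the trace mean converges to the mean of the clean consumption because E(N) = 0, can be checked with a quick simulation (all numbers are made up):

```python
import random

rng = random.Random(42)
P = [1.0 + 0.2 * (i % 4) for i in range(10_000)]   # clean power, invented
T = [p + rng.gauss(0.0, 0.5) for p in P]           # measured trace, per (7)

mean_P = sum(P) / len(P)
mean_T = sum(T) / len(T)
# mean_T approximates mean_P because the noise has zero mean.
```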
Now, let us look at the calculation of Var(P):
Var(P) = E[(P − E(P))²]    (10)

which denotes the average power variation around the average power E(P) for the register state changes in the time interval [m_1, m_2] that the adversary looks for. It contains two steps: in the first step, the information is compressed by calculating the mean power consumption E(P). However, such a compression is not sufficient for revealing the key. Therefore, each sample is compared to E(P), resulting in some differences. After that, the average of the squared differences is calculated to form Var(P), which is the information carrier the adversary is looking for. This is called the information extraction step. Indeed, Var(P) is very important for revealing the key of the cryptosystem, but it cannot be measured or calculated directly.
Nevertheless, Var(N) = σ² holds, and P and N are independent as well as uncorrelated in the time domain. From (7) we thus easily get Var(T) as:

Var(T) = Var(P) + Var(N) = Var(P) + σ²    (11)
Equation (11) consists of two parts: the first part is Var(P); the second part is σ², i.e., the noise in a trace matrix has the same variance for each single trace, which is the fundamental property of the new trace model as mentioned before. The value σ² is a constant, thus the terms Var(T) and Var(P) are in a linear relation. Now, instead of calculating Var(P), one simply compares the similarity between Var(T) and the hypothesis matrix, yielding the same results.
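Relation (11) is easy to verify numerically: adding independent noise of variance σ² raises each trace's variance by about σ², leaving Var(T) and Var(P) in a linear relation. A sketch with invented numbers:

```python
import random
from statistics import pvariance

rng = random.Random(1)
sigma = 0.3
P = [rng.choice([1.0, 1.5, 2.0]) for _ in range(50_000)]  # clean power
T = [p + rng.gauss(0.0, sigma) for p in P]                # measured, per (7)

var_P = pvariance(P)
var_T = pvariance(T)
# var_T is close to var_P + sigma**2, matching (11).
```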
Dev(T) = √(Var(P) + σ²)    (12)

√x = 1 + (1/2)(x − 1) − (1/8)(x − 1)² + ⋯    (13)
Dev(T) is a non-linear function because of the square root in (12). Therefore, we cannot use it directly as a substitute for Var(P) to attack the system. Nevertheless, the square root can be expanded into a Taylor series as given in (13), in which the first two terms form a linear relation. Thus, Dev(T) and Var(P) can approximately be taken as being in a linear relation. So, we can exploit this approximation to further analyze the power consumption of the system.
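The linearization step can be sanity-checked numerically: near x = 1, the first two Taylor terms 1 + (x − 1)/2 track √x closely, which is why the standard deviation behaves approximately linearly in the variance. The sample points below are arbitrary:

```python
import math

def sqrt_taylor2(x):
    """First two Taylor terms of sqrt around x = 1, cf. (13)."""
    return 1 + 0.5 * (x - 1)

errors = {x: abs(math.sqrt(x) - sqrt_taylor2(x)) for x in (0.9, 1.0, 1.1, 1.2)}
# All errors stay below 0.005 in this range, supporting the approximation.
```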
3.3 Attack Phase
An important property of PAA is that using more time points in the analysis region results in fewer traces needed for a successful attack. There are two ways to achieve this goal:

1) Use more time points in the analysis region for the attack.

2) Increase the sample rate of the monitoring device, i.e., use an oscilloscope of good quality, e.g., one with a higher sampling rate.
Theoretically, the proposed PAA takes the time interval factors into consideration, thus the process model fits such an attack very well. On the contrary, the instantaneous model, e.g., the HW model, focuses on a single time point only. When using such a model, the attack results of PAA will not be as good as we would expect.
3.3.1 Attacking Procedure
The attacking procedure of the PAA is given as follows:
Step 1: Plaintext or ciphertext and the subkey are mapped by the power model, e.g., by equation (3), thus generating the hypothesis matrix H of size D × K.
Step 2: Calculate the variance or standard deviation of each row of the trace matrix T, i.e., v_d = Var(T_{d,:}), where d ∈ [1, D] holds, as given in (14).
Step 3: Calculate the result matrix R of size 1 × K by statistically analyzing the vector v derived from (14) and the hypothesis matrix H according to (15), whereby the correlation coefficient is used as the distinguisher:
r_k = CorrCoef(H_{:,k}, v)    (16)

where k ∈ [1, K] holds. Subsequently, the maximum correlation value is determined to find the correct key value.

3.4 Advantages
Usually, from the attacker's point of view, several factors must be taken into consideration in a practical attack. For example, the key should be revealed within a limited time and number of traces. If the targeted algorithm is hardened by a countermeasure, e.g., the captured power traces are not aligned because of a related countermeasure such as random clock or dummy wait state insertion, then some preprocessing must be done before mounting a CPA attack. Compared to the CPA attack, the proposed PAA method can deal with the mentioned requirements easily, which will be discussed in the upcoming sections.

3.4.1 Execution Time
The required execution or run time is a very important metric, which indicates the efficiency of the algorithm in a real attack. In PAA a large number of time points is taken into the variance or deviation calculation. Therefore, the trace matrix T is mapped to the vector v, see (14), which is then used to calculate the correlation coefficient with the hypothesis matrix H as shown in (15). On the contrary, in the CPA attack, the correlation coefficient matrix is calculated directly from the trace matrix T and the hypothesis matrix H, see (1). Therefore, under the same calculation conditions, i.e., for the same number of time points that CPA needs to traverse, in the PAA attack only the variance of these points is calculated, i.e., the PAA attack is faster than the CPA attack. One also finds that in CPA the result is a matrix of size K × M. In contrast, PAA yields
$$ \mathrm{Var}(T) = \begin{pmatrix} \mathrm{Var}(t_{1,1}, \ldots, t_{1,M}) \\ \vdots \\ \mathrm{Var}(t_{D,1}, \ldots, t_{D,M}) \end{pmatrix} = \begin{pmatrix} v_1 \\ \vdots \\ v_D \end{pmatrix} \quad (14) $$

$$ [r_1, \ldots, r_K] = \mathrm{CorrCoef}\left( \begin{pmatrix} h_{1,1} & \cdots & h_{1,K} \\ \vdots & \ddots & \vdots \\ h_{D,1} & \cdots & h_{D,K} \end{pmatrix}, \begin{pmatrix} v_1 \\ \vdots \\ v_D \end{pmatrix} \right) \quad (15) $$
a vector R of size 1 × K. Therefore, the calculation complexity is decreased, because it is easier to identify the correct key by means of a result vector than by means of a matrix. We will examine the run time consumed by both attack methods by means of experimental results in the sequel.

3.4.2 Traces Usage
Traces usage is an important parameter in the evaluation of cryptosystem security. From an attacker's point of view, the fewer traces used, the more efficient the attack method. In contrast, for a system designer, it denotes to some extent the degree of safety of the cryptographic algorithms. Therefore, reducing the traces usage is crucial for coming quickly to an assessment of the SCA resistance level of a cryptosystem.
In general, the PAA attack calculates the variance and standard deviation of the captured power traces, whereby the information leakage at each single time point is compressed and extracted, i.e., more sources of information leakage are considered. By means of such a method, one can achieve higher distinguisher values when the number of power traces is limited. In contrast, CPA exploits just one time point, the one contributing the maximum information leakage. Therefore, such an attack method requires a relatively large number of power traces to achieve the same attack results as PAA. In the last section, we will demonstrate this important feature.
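For comparison, the complete PAA attack procedure given above fits in a few lines: each trace row is compressed to its variance as in (14) and then correlated with the hypothesis columns as in (15)/(16). The toy matrices below are invented so that the spread of the traces follows hypothesis column 0:

```python
import math

def corrcoef(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def variance(row):
    m = sum(row) / len(row)
    return sum((t - m) ** 2 for t in row) / len(row)

def paa(T, H):
    """Compress each trace to its variance (14), then correlate (15)/(16)."""
    v = [variance(row) for row in T]                   # D x 1 vector
    col = lambda A, j: [r[j] for r in A]
    return [corrcoef(col(H, k), v) for k in range(len(H[0]))]  # 1 x K

# Invented toy data: the spread of each trace grows with hypothesis column 0.
T = [[1, 1, 1], [0, 2, 1], [-1, 3, 1], [-2, 4, 1]]
H = [[1, 4], [2, 3], [3, 2], [4, 1]]
r = paa(T, H)   # r[0] close to 1, r[1] close to -1
```

Note how the whole trace matrix collapses to a single vector before the correlation, which is the source of the run-time and traces-usage advantages discussed here.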
3.4.3 Misalignment Tolerance

$$ T = \begin{pmatrix} t_{1,1} & t_{1,2} & t_{1,3} & t_{1,4} & t_{1,5} & t_{1,6} \\ t_{2,1} & t_{2,2} & t_{2,3} & t_{2,4} & t_{2,5} & t_{2,6} \\ t_{3,1} & t_{3,2} & t_{3,3} & t_{3,4} & t_{3,5} & t_{3,6} \end{pmatrix} \quad (17) $$

$$ T' = \begin{pmatrix} t_{1,1} & t_{1,2} & t_{1,3} & t_{1,4} & t_{1,5} & t_{1,6} \\ t_{2,2} & t_{2,3} & t_{2,4} & t_{2,5} & t_{2,6} & t_{2,7} \\ t_{3,1} & t_{3,2} & t_{3,3} & t_{3,4} & t_{3,5} & t_{3,6} \end{pmatrix} \quad (18) $$
As mentioned before, the model of the power traces in the CPA attack concentrates on a common time point in different power traces. For example, (17) denotes a matrix constructed from aligned power traces. The third column contains the maximum information leakage point, i.e., the elements t_{1,3}, t_{2,3}, t_{3,3} are the best combination for the information contribution. If there are some misalignments in the constituting power traces, as indicated in (18), the third column's combination is broken. Then the new combination t_{1,3}, t_{2,4}, t_{3,3} in the third column cannot provide the maximum leakage for the CPA attack. Therefore, the attack results become worse. In other words, the prerequisite for mounting a CPA attack successfully is that the power traces must be aligned. However, in the PAA attack a large number of time points enters the variance calculation. Therefore, for each power trace misaligned with respect to the original power trace, only a few time points are missing. Thus, the difference between the second rows of T and T', respectively, is that t_{2,1} is substituted by t_{2,7}. If there are enough time points in the interval, such a substitution cannot greatly impact the variance values and hence the overall attack results. Consequently, PAA features a considerably stronger misalignment tolerance in a real attack. In other words, a small misalignment does not affect the final results to a large extent. Therefore, this property of the analysis method can be exploited to improve attacks on power traces featuring misalignment injection as a hardening countermeasure.

Figure 4: Power Traces at different Base Clock Frequencies: a) 2 MHz, b) 24 MHz

3.4.4 Clock Frequency Effects
Misalignment may be used as a countermeasure to impede CPA attacks in practice, see [13]. In the presence of misaligned power traces, preprocessing is required to improve the CPA attack results in order to cope with traces manipulated by changes in the clock frequency of the cryptosystem, as shown in Figure 4 a), where the targeted peaks are shifted in the time domain. The authors of [18] proposed a method to align misaligned power traces by exploiting dynamic time warping. The authors of [15] presented a horizontal alignment method to align the power traces in the time domain both partially and dynamically. Later, in [16], these authors reported on a phenomenon called the clock frequency effect, which occurs in random-clock-featured cryptosystems when the base clock runs at a higher clock frequency. Then the power peaks in the captured traces not only shift in the time domain, but also change their power values in the amplitude domain. One easily finds that the power value change in Figure 4 b) is considerably larger than that in Figure 4 a), where the base clock frequencies are 24 MHz and 2 MHz, respectively. In order to cope with such effects, these authors proposed a vertical matching after the horizontal alignment, where each horizontally aligned power trace is moved up and down in the amplitude domain in order to find the minimal distance between the moved trace and an arbitrarily chosen template. Finally, these vertically matched power traces are attacked. The experimental results in [16] show that by exploiting vertical matching as a preprocessing step, the efficiency of the CPA attack can be greatly improved. This is because the CPA attack focuses only on finding a certain time point in different power traces. Let us take T_{d,:} as an example and shift its element values in the amplitude domain, i.e., add the same positive or negative value c to each element, an operation which does not change the variance. In other words, regardless of a possible shift in the amplitude domain, the attack results will always be the same, as visible from (20).
T_{d,:} + c = (t_{d,1} + c, …, t_{d,M} + c)    (19)

Var(T_{d,:} + c) = Var(T_{d,:})    (20)
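Property (19)/(20) is just the shift invariance of the variance and can be checked directly (the trace values are arbitrary):

```python
from statistics import pvariance

trace = [1.2, 3.4, 2.2, 5.1, 0.7]    # arbitrary trace samples
shifted = [t + 7.5 for t in trace]   # constant amplitude shift, cf. (19)
# pvariance(shifted) equals pvariance(trace) up to rounding, cf. (20)
```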
Therefore, for attacking misaligned power traces, only the horizontal alignment is required. The vertical matching step can be completely omitted in the PAA attack, in contrast to CPA. This property saves a lot of trace processing time without affecting the quality of the attack results.
4 Application Examples
In this section, several attacks on an AES-128 cryptosystem featuring the TBL S-Box [12] are presented and discussed. The HD model from (3) is taken as the power model. In order to evaluate and compare the four main properties taken as metrics, the results are produced by mounting the attack with both CPA and PAA, respectively. The experiments were organized into three parts as follows:
1) Attack the captured power traces directly with CPA and PAA, respectively.

2) The captured power traces are misaligned artificially to some extent and then attacked by mounting CPA and PAA, respectively, in order to assess the misalignment tolerance and thus the robustness of both attack methods.

3) In order to verify the internal clock frequency effects for the PAA attack, each captured power trace is shifted in the amplitude domain by some random offset, i.e., a high clock frequency effect injection takes place. Finally, the attack results for both CPA and PAA are compared.
Here, the run time and the success rate for each byte and for the global key are used as metrics to evaluate both the CPA and the PAA attack results. The run time is a relative value, which depends on the computer's processor, memory, configuration, etc. The success rate is detailed in [14]; it defines the rate at which all the key bytes are successfully recovered under the constraint of a limited number of experiments. Therefore, we ran 30 different attack experiments, each with 1000 power traces. Then the success rate is calculated accordingly.
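Following the metric definition in [14] as described above, the global success rate over repeated experiments can be computed as the fraction of runs in which all 16 key bytes were recovered; a minimal sketch with made-up results:

```python
def success_rate(results):
    """Fraction of experiments in which every key byte was recovered."""
    return sum(all(r) for r in results) / len(results)

# 30 hypothetical experiments, each a list of 16 per-byte recovery flags.
experiments = [[True] * 16] * 27 + [[True] * 15 + [False]] * 3
rate = success_rate(experiments)   # 27 of 30 fully recovered: 0.9
```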
4.1 Platform
The Side Channel Attack Standard Evaluation Board version G (SASEBO) [6] is used as the target platform, which embodies two Xilinx Virtex-II Pro series FPGAs: one for board control and one for the implementation of the cryptographic algorithms. Both FPGAs run at a 2 MHz clock frequency.
4.2 Run Time and Traces Usage
In order to compare the run times of both the CPA and PAA attacks, we executed the two methods under the same conditions. For the CPA attack, we traverse 600 time points in the analysis region to find the maximum information leakage. In the PAA attack, these 600 time points are taken into the variance or standard deviation calculation, respectively. After that, the total run time values for all 16 key bytes are compared.

Figure 5: Global Success Rate for CPA and PAA

Table 1: Run Time Comparison

Method      Run Time   Ratio vs. CPA
CPA         54.44 s    -
PAA (Var)   33.29 s    60.0%
PAA (Std)   33.54 s    60.5%
Table 1 shows the run times for all 16 attacked key bytes when mounting the CPA and PAA attacks. When applying the Var and Dev methods, the PAA attack results in run times of 33.29 s and 33.54 s, respectively, i.e., 60.0% and 60.5% of the run time consumed by the CPA attack. Therefore, under the same attack conditions, PAA is faster than CPA and can thus shorten the time needed to break a cryptosystem in practical attacks.
With regard to the traces usage, Figure 5 illustrates that, when we consider the power trace range from 0 to 1000, all 16 bytes are revealed by PAA after using only 850 traces, i.e., the global success rate is 1. However, under the same conditions, CPA reveals only 90% of the correct key bytes when consuming all available 1000 power traces, i.e., such an attack needs more power traces to reveal all correct key bytes. At the same time, the success rate curve of the PAA attack rises faster after 400 power traces compared to its counterpart in the CPA attack. This is because PAA exploits more time points that contribute to the information leakage, thus resulting in a lower traces usage in comparison to CPA. In the Appendix, Figure 10 visualizes the success rate for each key byte individually. One easily sees from this figure that for each byte to be revealed, PAA always consumes fewer traces than the CPA method.
4.3 Misalignment Tolerance
As mentioned above, we expect the PAA attack to show good robustness in the presence of a reasonable misalignment of traces. Sometimes, power trace misalignment is introduced intentionally as a hardening countermeasure against power consumption attacks [13]. In order to demonstrate this additional robustness feature, the misaligned traces are first generated by applying Algorithm 1 and then attacked by means of both CPA and PAA, respectively.
In order to generate comparable results for the CPA and PAA attacks, we set the range B of the random number in Algorithm 1 to 20, 50, and 100, respectively. The global success rate curves for the CPA and PAA attacks before misalignment (MA) are depicted for comparison purposes in Figures 6 to 8.
Figure 6: Global Success Rate, B=20
Algorithm 1 Misaligned Traces Generation

Require: Aligned traces T_{d,:}
Ensure: Misaligned traces T'_{d,:}
1: Find a start point s in T_{d,:}
2: Generate an integer random number b, b ∈ [0, B]
3: Cut the trace from point s + b, with width W
4: Save the cut trace into the set T'_{d,:}
Return: T'_{d,:}
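Algorithm 1 can be sketched as follows; the variable names and the concrete start point, width, and bound B are illustrative choices, not values from the paper:

```python
import random

def misalign(traces, start, width, B, seed=0):
    """Algorithm 1: cut each trace at start + b for a random b in [0, B]."""
    rng = random.Random(seed)
    out = []
    for trace in traces:
        b = rng.randint(0, B)                             # step 2
        out.append(trace[start + b : start + b + width])  # steps 3 and 4
    return out

aligned = [list(range(100)) for _ in range(3)]      # dummy aligned traces
mis = misalign(aligned, start=10, width=20, B=5)    # misaligned trace set
```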
Figure 7: Global Success Rate, B=50
Figure 8: Global Success Rate, B=100
In Figure 6, the maximum shift of the power traces is 20 time points. One finds that the global success rate curves for the PAA attack before and after misalignment overlap, i.e., the misalignment does not greatly affect the PAA attack results. For CPA, the global success rate curve deviates a bit from its counterpart before misalignment, i.e., a small misalignment does not affect the CPA attack that much either.
Then we increase the maximum shift to 50 points, as shown in Figure 7. For PAA, the success rate curves still overlap. In contrast, for CPA, because of the stronger misalignment, the attack becomes harder, and the deviation between the success rate curves before and after misalignment is enlarged. Thus, when increasing the maximum number of shifted time points, the deviation of the PAA attack results is smaller than that of CPA, i.e., PAA is more robust.
In order to show this characteristic more clearly, the parameter B is now set to 100, i.e., the maximum shift of the power traces is 100 time points. Now, for PAA, the success rate curves show a small deviation. However, for CPA, the deviation between the corresponding curves unveils a big gap, as shown in Figure 8. Therefore, we can state that PAA features a considerably stronger misalignment tolerance compared to CPA.
4.4 Internal Clock Frequency Effects
In order to simulate the clock frequency effects environment mentioned in [16], Algorithm 2 is used to inject such effects into the power traces by moving each power trace in the amplitude domain by a random offset vector C.
Figure 9: Global Success Rate F=10
Algorithm 2 Clock Frequency Effects Injection (CFEI)

Require: Aligned traces T_{d,:}
Ensure: CFE-injected traces T'_{d,:}
1: Generate an integer random number f, f ∈ [−F, F]
2: Generate a constant vector C of M elements, where C = [f, …, f]
3: Do T'_{d,:} = T_{d,:} + C
Return: T'_{d,:}
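Algorithm 2 can likewise be sketched in a few lines; since a constant is added to every sample of a trace, its variance is untouched, which is exactly why PAA is immune to this injection (all concrete values are illustrative):

```python
import random

def cfei(traces, F, seed=0):
    """Algorithm 2: shift each trace in amplitude by a random f in [-F, F]."""
    rng = random.Random(seed)
    out = []
    for trace in traces:
        f = rng.randint(-F, F)              # step 1
        out.append([t + f for t in trace])  # steps 2 and 3
    return out

traces = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # dummy traces
injected = cfei(traces, F=10)
# Sample-to-sample differences, and hence variances, are preserved.
```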
The chosen parameter value is F = 10, thus each trace is moved by an offset from the interval [−10, 10] in the amplitude domain. Then, the CPA and PAA attacks are mounted.
As already mentioned, shifting a power trace in the amplitude domain results in unchanged variance and standard deviation values. Therefore, for PAA, the attack results are always the same, as illustrated in Figure 9. One finds that the success rate curves for the PAA attack overlap completely, i.e., the numerical results are almost the same. In contrast, for CPA after the CFEI, the success rate decreases from 90% to just 10% at 1000 traces. This means that the clock frequency effects considerably reduce the success rate of the CPA attack on random-clock-hardened cryptosystems. In contrast, the PAA method automatically counteracts such shifts in the amplitude domain and yields the same attack efficiency as for the original data set.

5 Summary
In this paper we discussed in detail the Power Amount Analysis method, which is based on a new trace model originating from communication theory. This novel SCA method exploits many time points within the power traces that contribute to the information leakage in the captured traces and thus helps significantly in revealing the secret key. Starting from the original PAA paper, we first performed a comparison with the well-known CPA attack and then elaborated four advantages of the proposed method in terms of run time, traces usage, misalignment tolerance, and internal clock frequency effects. These advantages were demonstrated by mounting both CPA and PAA attacks on power traces captured from an FPGA-based AES-128 cryptosystem. We have shown that the advocated analysis method has advantages in the presence of both aligned and misaligned power traces. We see PAA as a new means that provides a different way to view and understand power traces. Its specific properties help to reveal the secret key of cryptosystems more easily and thus to qualify the security of cryptographic algorithm implementations.
Acknowledgement

This work was supported by CASED (www.cased.de).
6 REFERENCES

1. Paul C. Kocher, Joshua Jaffe, and Benjamin Jun, Differential Power Analysis, International Cryptology Conference (CRYPTO), 1999, pp. 388-397.
2. Suresh Chari, Josyula R. Rao, and Pankaj Rohatgi, Template Attacks, Cryptographic Hardware and Embedded Systems (CHES), 2002, pp. 13-28, Springer-Verlag.
3. Eric Brier, Christophe Clavier, and Francis Olivier, Correlation Power Analysis with a Leakage Model, Cryptographic Hardware and Embedded Systems (CHES), 2004, pp. 16-29, Springer-Verlag.
4. Werner Schindler, Kerstin Lemke, and Christof Paar, A Stochastic Model for Differential Side Channel Cryptanalysis, Cryptographic Hardware and Embedded Systems (CHES), 2005, pp. 30-46, Springer-Verlag.
5. Werner Schindler, Advanced Stochastic Methods in Side Channel Analysis on Block Ciphers in the Presence of Masking, J. of Mathematical Cryptology 2(3), 2008, pp. 291-310.
6. N. N., Research Center for Information Security, National Institute, Side Channel Attack Standard Evaluation Board Version G Specification, 2008, http://www.morita-tech.co.jp/SASEBO/en/board/index.html.
7. Stefan Mangard, Elisabeth Oswald, and Thomas Popp, Power Analysis Attacks: Revealing the Secrets of Smart Cards (Advances in Information Security), 2007, Springer-Verlag, New York, USA.
8. Kai Schramm and Christof Paar, Higher Order Masking of the AES, Cryptographers' Track of RSA Conference (CT-RSA), 2006, pp. 208-225, Springer-Verlag.
9. Danil Sokolov, Julian P. Murphy, Alexandre V. Bystrov, and Alexandre Yakovlev, Design and Analysis of Dual-Rail Circuits for Security Applications, IEEE Trans. on Computers, 2005, vol. 54, pp. 449-460.
10. David Tse and Pramod Viswanath, Fundamentals of Wireless Communication, 2005, Cambridge University Press.
11. Andrea Goldsmith, Wireless Communications, 2005, Cambridge University Press.
12. Atri Rudra, Pradeep K. Dubey, Charanjit S. Jutla, Vijay Kumar, Josyula R. Rao, and Pankaj Rohatgi, Efficient Rijndael Encryption Implementation with Composite Field Arithmetic, Cryptographic Hardware and Embedded Systems (CHES), 2001, pp. 175-188, Springer-Verlag.
13. Shengqi Yang, Wayne Wolf, Narayanan Vijaykrishnan, Dimitrios N. Serpanos, and Yuan Xie, Power Attack Resistant Cryptosystem Design: A Dynamic Voltage and Frequency Switching Approach, ACM/IEEE DATE, 2005, pp. 64-69.
14. Francois-Xavier Standaert, Tal Malkin, and Moti Yung, A Unified Framework for the Analysis of Side-Channel Key Recovery Attacks, EUROCRYPT, 2009, pp. 443-461, Springer-Verlag.
15. Qizhi Tian, Abdulhadi Shoufan, Marc Stoettinger, and Sorin A. Huss, Power Trace Alignment for Cryptosystems Featuring Random Frequency Countermeasures, IEEE Int. Conf. on Digital Information Processing and Communications, 2012.
16. Qizhi Tian and Sorin A. Huss, On Clock Frequency Effects in Side Channel Attacks of Symmetric Block Ciphers, IEEE Int. Conf. on New Technologies, Mobility, and Security, 2012.
17. Qizhi Tian and Sorin A. Huss, Power Amount Analysis: Another Way to Understand Power Traces in Side Channel Attacks, IEEE Int. Conf. on Digital Information Processing and Communications, 2012.
18. Jasper G. J. van Woudenberg, Marc F. Witteman, and Bram Bakker, Improving Differential Power Analysis by Elastic Alignment, Cryptographers' Track of RSA Conference (CT-RSA), 2011, pp. 104-119, Springer-Verlag.
7 Appendix
Figure 10: Success Rate for each Byte in CPA and PAA attacks
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 115-121
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
Certificate Revocation Management in VANET
Ghassan Samara
Department of Computer Science, Faculty of Science and Information Technology, Zarqa University
Zarqa, Jordan. [email protected]
ABSTRACT
Vehicular Ad hoc Network (VANET) security is one of the hottest research topics in the field of network security. One of the ultimate goals in the design of such networks is to resist various malicious abuses and security attacks. In this research, a new security mechanism is proposed to reduce the channel load that results from the frequent warning broadcasting of the adversary discovery process, the Accusation Report (AR), which places a heavy load on the channel as every vehicle on the road reports any newly discovered adversary. Furthermore, this mechanism replaces the Certificate Revocation List (CRL), which causes long delays and a high channel load, with a Local Revocation List (LRL), which makes the adversary discovery process fast and easy.
KEYWORDS
Secure Certificate Revocation; Local
Certificate Revocation; VANET; Certificate
Management; VANET Security.
1. INTRODUCTION
Traffic congestion is the most annoying thing that any driver in the world dreams of avoiding. Large numbers of traveling vehicles may cause problems, or face problems that must be reported to other vehicles to avoid traffic overcrowding; furthermore, some vehicles may send incorrect or bogus data, which could make the situation even worse.
Recent research initiatives supported by governments and car manufacturers seek to enhance the safety and efficiency of transportation systems, and one of the major topics of this research is certificate revocation.
Certificate revocation is a method to revoke some or all of the certificates that a problematic vehicle holds; this enables other vehicles to ignore any information coming from the vehicles that cause problems.
Current studies suggest that the Road Side Unit (RSU) is responsible for tracking the misbehavior of vehicles and for certificate revocation by broadcasting the Certificate Revocation List (CRL). The RSU is also responsible for certificate management, communication with the Certificate Authority (CA), warning-message broadcasting, and communication with other RSUs. The RSU is a small unit mounted on street columns every 1 km [2], according to the DSRC 5.9 GHz range.
In vehicular ad hoc networks, most road vehicles will receive messages or broadcast sequences of messages, and they should not trust all of these messages, because not all vehicles have good intentions and some of them are evil-minded.
Current technology suffers from high overhead on the RSU, as the RSU takes responsibility for all Vehicular Network (VN) communication.
Furthermore, distributing the CRL consumes the control channel, as the CRL needs to be transmitted every 0.3 second [3]. Searching the CRL for each received message causes processing overhead just to find a single certificate, and VN communication involves periodic messages being sent and received 10 times per second.
This research proposes mechanisms that examine the certificates of received messages; the certificate indicates whether to accept the information from the current vehicle or ignore it. Furthermore, this research implements mechanisms for revoking certificates and assigning new ones; these mechanisms lead to better and faster recognition of adversary vehicles.
2. RESEARCH BACKGROUND
In the previously published work [1], security mechanisms were proposed to achieve secure certificate revocation and to overcome the problems that the CRL causes.
Existing works on vehicular network security [4], [5], [6], and [7] propose the usage of a PKI and digital signatures but do not provide any mechanisms for certificate revocation, even though it is a required component of any PKI-based solution.
In [8], Raya presented the problem of certificate revocation and its importance, discussed the current revocation methods and their weaknesses, and proposed new protocols for certificate revocation, including the Certificate Revocation List (CRL), Revocation using Compressed Certificate Revocation Lists (RC²RL), Revocation of the Tamper-Proof Device (RTPD), and the Distributed Revocation Protocol (DRP), stating the differences among them. The authors simulated the DRP protocol, which uses a Bloom filter, concluding that it is the most convenient one; the simulation tested a variety of environments: freeway, city, and mixed freeway with city.
In [9], Samara divided the network into small adjacent clusters and replaced the CRL with a local CRL exchanged interactively among vehicles, RSUs, and CAs. The size of the local CRL is small, as it contains only the certificates of the vehicles inside the cluster.
In [10], Laberteaux proposed distributing the CRL initiated by the CA frequently. The CRL contains only the IDs of misbehaving vehicles, to reduce its size. The RSU distributes the CRL received from the CA to all vehicles in its region. The problem with this method is that not all vehicles will receive the CRL (e.g., a vehicle in a rural area); to solve this problem, Car-to-Car (C2C) communication is introduced, using a small number of RSUs to transmit the CRL to the vehicles.
In [3], the eviction of problematic vehicles is introduced; furthermore, some revocation protocols, such as Revocation of the Trusted Component (RTC) and the Leave Protocol, are proposed.
In [11], some certificate revocation protocols in the traditional PKI architecture were introduced. It is concluded that the most commonly adopted certificate revocation scheme is the CRL, using central repositories prepared at the CAs. Based on such a centralized architecture, alternative solutions to the CRL could be used for certificate revocation, such as the certificate revocation tree (CRT), the Online Certificate Status Protocol (OCSP), and other methods. The common requirement for these schemes is high availability of the centralized CAs, as the frequent data transmission with On-Board Units (OBUs) needed to obtain timely revocation information may cause significant overhead.
3. PROPOSED SOLUTION
In the previously published work [1], the proposed protocols for message checking and certificate revocation were the following:
Message Checking:
In this approach, any vehicle that receives a message from another vehicle checks the validity of the sender's certificate. If the sender has a Valid Certificate (VC), the receiver considers the message; on the contrary, if the sender has an Invalid Certificate (IC), the receiver ignores the message. Furthermore, if the sender does not have a certificate at all, the receiver reports the sender to the RSU and checks whether the message is correct: if the received information is correct, the RSU issues a VC for the sender; otherwise the RSU issues an IC and registers the vehicle's identity in the CRL. See Figure 1 for the message-checking process.
Figure 1. Message checking procedure
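The acceptance rule above can be sketched as follows; the class and function names (RSU, verify_and_issue, handle_message) are illustrative assumptions, not identifiers from the paper.

```python
# Toy sketch of the message-checking procedure: accept on VC, ignore on IC,
# and report uncertified senders to the RSU, which verifies the message
# content and issues a VC or an IC accordingly.

VC, IC, NO_CERT = "VC", "IC", None  # valid / invalid / missing certificate

class RSU:
    """Toy road-side unit that issues certificates and keeps a CRL."""
    def __init__(self):
        self.crl = set()  # identities registered as revoked

    def verify_and_issue(self, sender_id, message_is_correct):
        # An uncertified sender is reported here; correct information earns
        # a VC, wrong information earns an IC plus a CRL entry.
        if message_is_correct:
            return VC
        self.crl.add(sender_id)
        return IC

def handle_message(sender_id, sender_cert, message_is_correct, rsu):
    """Return True if the receiver should consider the message."""
    if sender_cert == VC:
        return True            # valid certificate: consider the message
    if sender_cert == IC:
        return False           # invalid certificate: ignore the message
    # No certificate at all: report to the nearest RSU.
    return rsu.verify_and_issue(sender_id, message_is_correct) == VC
```
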
Certificate Revocation:
Certificate revocation is performed when a misbehaving vehicle holding a VC is discovered: the RSU replaces the old VC with a new IC to indicate that this vehicle must be avoided. This happens when more than one vehicle reports to the RSU that a certain vehicle has a VC but is broadcasting wrong data; see Figure 2. This report must be given to the RSU each time a receiver receives information from a sender and finds that the information is wrong.
Figure 2. Certificate revocation procedure
The revocation proceeds as follows. A sender sen sends a message to a receiver rec; this message may come from an untrusted vehicle, so the receiver sends a message to the RSU to acquire a Session Key (the SKA message). The RSU replies with a message containing the SK Reply (SKR), which carries the SK assigned to the current connection; this key is used to prevent attackers from fabricating messages between the two vehicles.
The receiver then sends a message to check validity, called the "Validity Message", whose job is to indicate whether the sender vehicle has a VC or not. Afterwards, the RSU reports to rec that the sender has a VC, so the receiver can consider the information from the sender without fear.
In some situations, the receiver receives several messages that all agree on the same result and the same data, but a specific sender sends different data; if that data belongs to the same category, it is considered wrong data.
Every message will be classified depending on its category:
TABLE I. MESSAGE CLASSIFICATION AND CODING
Every category has a code; if a received message has the same code as the other messages but different data, it is considered a bogus message. In this case, rec sends an Abuse Report (AR) to the RSU, containing (sen id, Message Code, Time of Receipt); this report is forwarded to the CA if the RSU receives the same AR from other vehicles located in the same area. The number of abuse report messages depends on the vehicle density on the road; see Figure 3.
Figure 3. Calculation of the number of vehicles in the range [12]
If the number of vehicles making an accusation against a specific vehicle approaches half of the current vehicles, the RSU makes a Revocation Request (RR) to revoke the VC of the sender vehicle.
Some vehicles do not produce an AR because they did not receive any data from the sender vehicle (perhaps they were not in the area while it was broadcasting), or they have a problem with their devices, or they have an IC, in which case the RSU will not consider their messages.
After confirming the RR, the CA issues a revocation order to the RSU and updates the CRL; the RSU then revokes the VC of the sender vehicle and assigns it an IC, to indicate to other vehicles in the future that this vehicle broadcasts wrong data: "don't trust it".
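The accusation-counting rule above (request revocation once the number of distinct accusers approaches half of the vehicles currently in range) can be sketched as follows. The names and the exact one-half threshold encoding are our assumptions for illustration.

```python
# Sketch of RSU-side AR counting. Distinct accusers are tracked in a set,
# so a replayed AR from the same vehicle cannot inflate the count.

def should_request_revocation(accusers, vehicles_in_range):
    """True once distinct accusers reach half of the vehicles in range."""
    return len(accusers) >= vehicles_in_range / 2

class RSUCollector:
    def __init__(self, vehicles_in_range):
        self.vehicles_in_range = vehicles_in_range
        self.reports = {}  # accused sender id -> set of accuser ids

    def receive_ar(self, accuser_id, accused_id):
        # Returns True when a Revocation Request (RR) should be sent to the CA.
        accusers = self.reports.setdefault(accused_id, set())
        accusers.add(accuser_id)
        return should_request_revocation(accusers, self.vehicles_in_range)
```
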
Figure 2 shows certificate revocation steps.
Message 1: sen (the sender) sends a message to rec (the receiver), along with the digital signature of sen; the message is encrypted with the Public Key (PK) of rec.
Without the signature, any attacker could fabricate a message telling rec that it originated from sen; the signature prevents this.
Message 2: rec sends a request to the RSU, encrypted with the PK of the RSU, to acquire an SK for securing the connection.
Message 3: the reply to Message 2, containing the SK and the time of sending the reply. The time is important to prevent a replay attack, in which an attacker sends this message more than once, with the same session key and the same signature, so that he can forge the whole connection.
Message 4: rec sends the validity message to check whether the sender vehicle has to be avoided or not; this message is encrypted with the shared SK obtained from the RSU.
Message 7: sen sends a message to rec containing the VC, to inform rec that this vehicle can be trusted, together with the time of sending. The time here prevents a replay attack, in which an attacker holds the message and sends it after some period; by that time the sender's certificate may have been revoked by the RSU, so sen should be avoided, yet the attacker would force the rec vehicle to trust it. After receiving the information, rec checks whether the message carries different or identical data for the same category as the other received messages.
Message 8: if the message is different, wrong data has been received, and rec sends an Abuse Report to the RSU. It contains the sen id, to identify the vehicle that caused the problem; the Message Code, to identify the category of the message; and the Time of Receipt, to record when the message was received. The message also includes the Time, to avoid a replay attack, and a Signature, to avoid fabrication; it is encrypted with the PK of the RSU.
A replay attack would occur here if an attacker copied this message and sent it to the RSU repeatedly, to make the number of accusations reach the level at which the certificate must be revoked.
After examining the number of vehicles that accused sen of sending an invalid message, if the number is reasonable, the RSU sends Message 9.
Message 9: the RSU sends an RR to the CA, containing a Serial Number and Time, to avoid replay attacks; a Signature, to avoid fabrication; a Revocation Reason, stating the reason for revocation; the sen id, to identify the problematic vehicle; and the message code, to identify the message category. The message is encrypted with the PK of the CA.
A replay attack here would occur if an attacker retransmitted the same message to the CA, claiming it came from the RSU; after some time the CA would no longer be able to respond, causing a DoS attack. The RSU must therefore use a Time and Serial Number in this message, because the CA has a lot of work to do, and flooding it with such messages would cause a problem.
Message 10: the CA issues a Revocation Order to the RSU; this message contains an SN, to avoid a DoS attack; a time, to avoid a replay attack; a signature, to avoid a fabrication attack; the Sender Id; and a Revocation Reason, stating the reason for revocation.
After receiving this request, the CA updates the CRL, adding the newly captured vehicle, and sends it to the RSU.
A DoS attack can happen when an attacker keeps sending the same message to the RSU, claiming that it originated from the CA. CA messages have the highest processing priority at the RSU, so the RSU would receive and process a huge number of supposed CA messages without having time to communicate with other RSUs or vehicles; to avoid this, a serial number and signature are used.
Message 11: the RSU performs the revocation, revoking the VC and assigning an IC; this message also contains the time, to avoid a replay attack; a Signature, to avoid a fabrication attack; and a Revocation Reason, stating the reason for revocation.
The RSU is also responsible for renewing vehicle certificates. Any vehicle with an expiring certificate communicates with the RSU to renew it; the RSU then checks the CRL to see whether this vehicle has an IC. If there is no problem in giving the vehicle a new certificate, it is issued for a specific lifetime; when the period expires, the vehicle issues a renewal request to the CA. The VC has a special design, different from that of the X.509 certificate [13], as shown in [1].
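The renewal check described above can be sketched as follows; the names and the fixed lifetime value are illustrative assumptions, since the paper leaves the concrete lifetime unspecified.

```python
# Sketch of the RSU-side renewal check: consult the CRL before reissuing,
# and grant the new certificate for a specific lifetime.

CERT_LIFETIME = 30 * 24 * 3600  # example lifetime in seconds (assumed value)

def renew_certificate(vehicle_id, crl, now):
    """Return (granted, expiry) for a renewal request."""
    if vehicle_id in crl:
        return (False, None)            # vehicle holds an IC: refuse renewal
    return (True, now + CERT_LIFETIME)  # grant for a specific lifetime
```
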
4. DISCUSSION
Frequent adversary-warning broadcasting increases the load on the channel and keeps the channel busy. It should be noticed that an adversary may send frequent ARs just to keep the whole network (vehicles and RSUs) busy with accusation analysis.
The idea of using the CRL limits the warning broadcasting, but it still sends large messages about the adversaries in the whole world repeatedly, every 0.3 second. To solve these problems, a new adversary list containing the ICs of the local road adversaries is created, by the following steps.
In this mechanism, all vehicles are provided with an LRL containing information about all the adversaries on the current road; the LRL is received from the RSU nearest to the vehicle on the road. When a vehicle discovers an adversary, it searches for its certificate in its local LRL; if it is there, the vehicle moves the adversary ID to the top of the list to make future searches faster. On the contrary, if the IC is not in the LRL, the vehicle sends a report informing the nearest RSU about the presence of this adversary.
When the RSU receives a report from a road vehicle about an adversary, it checks whether the sender's certificate is valid. If it is valid, the RSU checks whether the adversary IC is in its LRL, and adds it if not; the updated LRL is broadcast every 0.3 second, like the CRL timing [2], to all vehicles on the road. The RSUs on the road receive the LRL broadcast with a flag pointing to the newly added vehicle in the list, informing the other RSUs to add this IC to their lists.
Each RSU monitors the road for incoming and outgoing vehicles [8]. If an adversary vehicle enters the road, an add flag containing the adversary IC is broadcast to the rest of the RSUs so that each adds it to its own LRL; on the contrary, if the adversary leaves the road, a remove flag for the adversary IC is broadcast to the RSUs on the road.
In this way, the LRL stays local to the current road, and its size is very small. See Table 2 for the LRL, which contains the ID of the adversary and the serial number of the IC certificate.
TABLE II. LRL STRUCTURE
Vehicle ID | IC Serial
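The vehicle-side LRL lookup with move-to-front described above can be sketched as follows; the class name and the report callback are illustrative assumptions.

```python
# Sketch of the local revocation list: a known adversary is moved to the
# top of the list so future searches terminate faster; an unknown one is
# reported to the nearest RSU.

class LocalRevocationList:
    def __init__(self, entries=None):
        # Ordered list of (vehicle_id, ic_serial), most recently hit first.
        self.entries = list(entries or [])

    def check_adversary(self, vehicle_id, report_to_rsu):
        """Return True if vehicle_id is a known adversary in the LRL."""
        for idx, (vid, serial) in enumerate(self.entries):
            if vid == vehicle_id:
                # Move-to-front: speed up future searches for this vehicle.
                self.entries.insert(0, self.entries.pop(idx))
                return True
        # Not in the local list: inform the nearest RSU.
        report_to_rsu(vehicle_id)
        return False
```
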
5. CONCLUSION
The mechanisms previously proposed in [1] achieved secure certificate revocation, which is considered among the most challenging design objectives in vehicular ad hoc networks; furthermore, they helped vehicles easily identify adversary vehicles and improved certificate revocation for better certificate management. However, frequent adversary-warning broadcasting increases the load on the channel and keeps it busy. To solve this problem, a new mechanism was proposed in this paper, replacing the active warning broadcasting with a reasonable broadcasting frequency of a local revocation list containing the ICs of all the adversary vehicles on the current road; this reduces the channel load resulting from the AR broadcasting proposed in [1].
6. References
1. Samara, G. and W.A.H. Al-Salihy, A New Security Mechanism for Vehicular Communication Networks. Proceedings of the International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec2012), Kuala Lumpur, Malaysia. p. 18-22, IEEE.
2. DSRC Home Page. [cited 2011-11-21]; Available from: http://www.leearmstrong.com/DSRC/DSRCHomeset.htm.
3. Raya, M., et al., Eviction of misbehaving and faulty nodes in vehicular networks. IEEE Journal on Selected Areas in Communications, 2007. 25(8): p. 1557-1568.
4. Raya, M. and J.P. Hubaux. The security of vehicular ad hoc networks. Proceedings of the 3rd ACM workshop on Security of ad hoc and sensor networks, 2005, ACM.
5. Parno, B. and A. Perrig. Challenges in securing vehicular networks. in Proceedings of the Fourth Workshop on Hot Topics in Networks (HotNets-IV). 2005.
6. Samara, G., W.A.H. Al-Salihy, and R. Sures. Security Issues and Challenges of Vehicular Ad Hoc Networks (VANET). in 4th International Conference on New Trends in Information Science and Service Science (NISS), 2010 . IEEE.
7. Samara, G., W.A.H. Al-Salihy, and R. Sures. Security Analysis of Vehicular Ad Hoc Networks (VANET). in Second International Conference on Network Applications Protocols and Services (NETAPPS), 2010. IEEE.
8. Raya, M., D. Jungels, and P. Papadimitratos, Certificate revocation in vehicular networks. Laboratory for computer Communications and Applications (LCA) School of Computer and Communication Sciences, EPFL, Switzerland, 2006.
9. Samara, G., S. Ramadas, and W.A.H. Al-Salihy, Design of Simple and Efficient Revocation List Distribution in Urban Areas for VANET's. International Journal of Computer Science, 2010. 8.
10. Laberteaux, K.P., J.J. Haas, and Y.C. Hu. Security certificate revocation list distribution for VANET. in Proceedings of the fifth ACM international workshop on VehiculAr Inter-NETworking 2008. ACM.
11. Lin, X., et al., Security in vehicular ad hoc networks. Communications Magazine, IEEE, 2008. 46(4): p. 88-95.
12. Raya, M. and J.P. Hubaux, Securing vehicular ad hoc networks. Journal of Computer Security, 2007. 15(1): p. 39-68.
13. Stallings, W., Cryptography and Network Security: Principles and Practices, 2003. Prentice Hall.
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 122-129
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

Finite Field Arithmetic Architecture Based on Cellular Array

Kee-Won Kim* and Jun-Cheol Jeon**

*Institute of Media Content, Dankook University
152, Jukjeon-ro, Suji-gu, Yongin, Gyeonggi-do, 448-701, Korea
**Dept. of Computer Engineering, Kumoh National Institute of Technology
61, Daehak-ro, Gumi, Gyeongbuk, 730-701, Korea
[email protected]*, [email protected]** (corresponding author)

ABSTRACT

Recently, various finite field arithmetic structures have been introduced for VLSI circuit implementation in cryptosystems and error-correcting codes. In this study, we present an efficient finite field arithmetic architecture based on a cellular semi-systolic array for Montgomery multiplication, by choosing a Montgomery factor that is highly suitable for the design of parallel structures. As a result, our architecture reduces time complexity by 50% compared to the typical architecture.

KEYWORDS

cellular array, finite field, semi-systolic structure, Montgomery multiplication, arithmetic architecture

1 INTRODUCTION

Finite field arithmetic operations, especially for the binary field GF(2^m), have been widely used in data communication and network security applications such as error-correcting codes [1,2] and cryptosystems such as ECC (Elliptic Curve Cryptosystems) [3,4]. Finite field multiplication is the most frequently studied operation, because time-consuming operations such as exponentiation, division, and multiplicative inversion can be decomposed into repeated multiplications. Thus, a fast multiplication architecture with low complexity is needed to design dedicated high-speed circuits.

Certainly, one of the most interesting and useful advances in this realm has been the Montgomery multiplication algorithm, introduced by Montgomery [5] for fast modular integer multiplication. The multiplication was successfully adapted to the finite field GF(2^m) by Koc and Acar [6]. They proposed three Montgomery multiplication algorithms for bit-serial, digit-serial, and bit-parallel multiplication, and chose the Montgomery factor R = x^m for efficient implementation of the multiplication in hardware and software.

Wu [7] chose a new Montgomery factor and showed that taking the middle term of the irreducible trinomial G(x) = x^m + x^k + 1 as the Montgomery factor, i.e., R = x^k, results in more efficient bit-parallel architectures. In [8], MM is implemented using systolic arrays for all-one polynomials and trinomials. Chiou et al. [9] proposed a semi-systolic array structure for MM which uses R = x^m. Hariri and Reyhani-Masoleh [10] proposed a number of bit-serial and bit-parallel Montgomery multipliers and showed that MM can accelerate ECC scalar multiplication. Recently, in [11], they have considered concurrent error detection for MM over the binary field.
Three different multipliers, namely the
bit-serial, digit-serial, and bit-parallel
multipliers, have been considered and
the concurrent error detection scheme
has been derived and implemented for
each of them.
Chiou [12] used the recomputing with
shifted operands (RESO) to provide a
concurrent error detection method for
polynomial basis multipliers using an
irreducible all-one polynomial, which is
a special case of a general polynomial.
Lee et al. [13] described a concurrent
error detection (CED) method for a
polynomial multiplier with an
irreducible general polynomial. Chiou et
al. [9] also developed a Montgomery
multiplier with concurrent error
detection capability. Bayat-Sarmadi and
Hasan [14] proposed semi-systolic
multipliers for various bases, such as the
polynomial, dual, type I and type II
optimal normal bases. They have also
presented semi-systolic multipliers with
CED using RESO.
Recently, Huang et al. [15] proposed the
semi-systolic polynomial basis
multiplier over GF(2m) to reduce both
space and time complexities. Also they
proposed the semi-systolic polynomial
basis multipliers with concurrent error
detection and correction capability.
Various approaches adopt semi-systolic architectures to reduce the total number of latches and the computation latency, because they permit broadcast signals. However, almost all existing polynomial multipliers suffer from several shortcomings, including large time and/or hardware overhead and low performance.
In this paper, we consider the shortcomings of the typical architectures and propose a semi-systolic Montgomery multiplier with a new Montgomery factor. We show that an efficient multiplication architecture can be obtained by choosing a proper Montgomery factor, which reduces the time complexity.
The remainder of this paper is organized
as follows. Section 2 introduces
Montgomery multiplication over finite
fields. In Section 3, we propose a
Montgomery multiplication architecture
based on our algorithm which is highly
optimized for hardware implementation.
In Section 4, we analyze and compare our architecture with a recent study.
Finally, Section 5 gives our conclusion.
2 MONTGOMERY MULTIPLICATION ON FINITE FIELDS
GF(2^m) is a finite field [16] that contains 2^m different elements. This finite field is an extension of GF(2), and any A ∈ GF(2^m) can be represented as a polynomial of degree m−1 over GF(2), such as

$A = a_{m-1}x^{m-1} + \cdots + a_1x + a_0$,

where $a_i \in \{0,1\}$, $0 \le i \le m-1$.

Let x be a root of the polynomial; then the irreducible polynomial G is represented by the following equation:

$G = x^m + g_{m-1}x^{m-1} + \cdots + g_1x + g_0$,  (1)

where $g_i \in$ GF(2), $0 \le i \le m-1$.

Let $\alpha$ and $\beta$ be two elements of GF(2^m), and define $\gamma = \alpha\beta \bmod G$. Also, let A and B be the two Montgomery residues defined as $A = \alpha R \bmod G$ and $B = \beta R \bmod G$, where GCD(R, G) = 1. Then, the Montgomery multiplication algorithm over GF(2^m) can be formulated as

$P = A \cdot B \cdot R^{-1} \bmod G$,

where $R^{-1}$ is the inverse of R modulo G, and $RR^{-1} + GG' = 1$ [17]. Thus, by the definition of the Montgomery residue, the equation can be expressed as follows:

$P = (\alpha R)(\beta R)R^{-1} \bmod G = \alpha\beta R \bmod G$.
It means that P is the Montgomery residue of $\alpha\beta$. This makes it possible to convert the operands to Montgomery residues once at the beginning, then do several consecutive multiplications/squarings, and finally convert the result back to the original representation. The final conversion is a multiplication by $R^{-1}$, i.e., $\gamma = P \cdot R^{-1} \bmod G$. The polynomial R plays an important role in the complexity of the algorithm, as we need to do a modulo R multiplication and a final division by R.
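As a worked illustration of the relations above, the following sketch checks numerically that $P = A \cdot B \cdot R^{-1} \bmod G$ is the Montgomery residue of $\alpha\beta$. Field elements are encoded as bit-masked integers (bit i holds the coefficient of $x^i$); the helper names are ours, and the AES polynomial $x^8 + x^4 + x^3 + x + 1$ with the Koc-Acar factor $R = x^m$ is used only as an example.

```python
def gf2_mulmod(a, b, g, m):
    """Multiply polynomials a*b modulo g over GF(2); operands are bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= g            # g is encoded with its x^m bit set
    return r

def gf2_inv(a, g, m):
    """a^(2^m - 2) mod g equals a^{-1} in GF(2^m) (square-and-multiply)."""
    r, e = 1, (1 << m) - 2
    while e:
        if e & 1:
            r = gf2_mulmod(r, a, g, m)
        a = gf2_mulmod(a, a, g, m)
        e >>= 1
    return r

m = 8
G = 0b100011011                   # x^8 + x^4 + x^3 + x + 1 (example modulus)
R = gf2_mulmod(1, 1 << m, G, m)   # R = x^m mod G
alpha, beta = 0b1010011, 0b0101101
A = gf2_mulmod(alpha, R, G, m)    # Montgomery residue of alpha
B = gf2_mulmod(beta, R, G, m)     # Montgomery residue of beta
P = gf2_mulmod(gf2_mulmod(A, B, G, m), gf2_inv(R, G, m), G, m)  # A*B*R^-1
# P is the Montgomery residue of alpha*beta:
assert P == gf2_mulmod(gf2_mulmod(alpha, beta, G, m), R, G, m)
# Final conversion back to the ordinary representation: gamma = P*R^-1 mod G
gamma = gf2_mulmod(P, gf2_inv(R, G, m), G, m)
assert gamma == gf2_mulmod(alpha, beta, G, m)
```
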
3 PROPOSED ARCHITECTURE
This section describes the proposed
Montgomery multiplication algorithm
and architecture.
3.1 Proposed Algorithm
Based on the property of parallel
architecture, we choose the Montgomery
factor, 2/mxR . Then, the
Montgomery multiplication over GF(2m)
can be formulated as
GxBAPm
mod2/
(2)
We know that x is a root of G, and that $g_m$ and $g_0$ are always 1 for all irreducible polynomials. Thus, the equations can be rewritten as follows:

$x^m \bmod G = g_{m-1}x^{m-1} + \cdots + g_1x + 1$,  (3)

$x^{-1} \bmod G = x^{m-1} + g_{m-1}x^{m-2} + \cdots + g_2x + g_1$.  (4)
Meanwhile, (2) is represented by substituting A and B as follows:

$P = [b_{\lceil m/2\rceil-1}x^{-1}A + b_{\lceil m/2\rceil-2}x^{-2}A + \cdots + b_1x^{-(\lceil m/2\rceil-1)}A + b_0x^{-\lceil m/2\rceil}A + b_{\lceil m/2\rceil}A + b_{\lceil m/2\rceil+1}xA + \cdots + b_{m-2}x^{\lfloor m/2\rfloor-2}A + b_{m-1}x^{\lfloor m/2\rfloor-1}A] \bmod G$.  (5)

Now, it expresses that P can be divided into two parts: one based on the negative powers of x and the other based on the positive powers of x. (5) can be denoted by P = C + D, where

$C = [b_{\lceil m/2\rceil-1}x^{-1}A + b_{\lceil m/2\rceil-2}x^{-2}A + \cdots + b_1x^{-(\lceil m/2\rceil-1)}A + b_0x^{-\lceil m/2\rceil}A] \bmod G$,

$D = [b_{\lceil m/2\rceil}A + b_{\lceil m/2\rceil+1}xA + \cdots + b_{m-2}x^{\lfloor m/2\rfloor-2}A + b_{m-1}x^{\lfloor m/2\rfloor-1}A] \bmod G$.

Meanwhile, let $A^{(i)}$ and $\bar{A}^{(i)}$ be $Ax^{-i} \bmod G$ and $Ax^{i} \bmod G$, respectively. Then, based on (3) and (4), the equations can be expressed as
$A^{(i)} = x^{-1}A^{(i-1)} \bmod G$
$\quad = (a_{m-1}^{(i-1)}x^{m-1} + a_{m-2}^{(i-1)}x^{m-2} + \cdots + a_1^{(i-1)}x + a_0^{(i-1)})x^{-1} \bmod G$
$\quad = (a_1^{(i-1)} + a_0^{(i-1)}g_1) + (a_2^{(i-1)} + a_0^{(i-1)}g_2)x + \cdots + (a_{m-1}^{(i-1)} + a_0^{(i-1)}g_{m-1})x^{m-2} + a_0^{(i-1)}x^{m-1}$,

$\bar{A}^{(i)} = x\bar{A}^{(i-1)} \bmod G$
$\quad = (\bar{a}_{m-1}^{(i-1)}x^{m-1} + \bar{a}_{m-2}^{(i-1)}x^{m-2} + \cdots + \bar{a}_1^{(i-1)}x + \bar{a}_0^{(i-1)})x \bmod G$
$\quad = \bar{a}_{m-1}^{(i-1)} + (\bar{a}_0^{(i-1)} + \bar{a}_{m-1}^{(i-1)}g_1)x + \cdots + (\bar{a}_{m-2}^{(i-1)} + \bar{a}_{m-1}^{(i-1)}g_{m-1})x^{m-1}$,

where

$a_j^{(i)} = a_{j+1}^{(i-1)} + a_0^{(i-1)}g_{j+1}$ for $0 \le j \le m-2$, and $a_{m-1}^{(i)} = a_0^{(i-1)}$,  (6)

$\bar{a}_j^{(i)} = \bar{a}_{j-1}^{(i-1)} + \bar{a}_{m-1}^{(i-1)}g_j$ for $1 \le j \le m-1$, and $\bar{a}_0^{(i)} = \bar{a}_{m-1}^{(i-1)}$.  (7)
Also, using the formulae of $A^{(i)}$ and $\bar{A}^{(i)}$, the terms C and D are represented as follows:

$C = zA^{(0)} + b_{\lceil m/2\rceil-1}A^{(1)} + b_{\lceil m/2\rceil-2}A^{(2)} + \cdots + b_1A^{(\lceil m/2\rceil-1)} + b_0A^{(\lceil m/2\rceil)}$,  (8)

$D = b_{\lceil m/2\rceil}\bar{A}^{(0)} + b_{\lceil m/2\rceil+1}\bar{A}^{(1)} + \cdots + b_{m-2}\bar{A}^{(\lfloor m/2\rfloor-2)} + b_{m-1}\bar{A}^{(\lfloor m/2\rfloor-1)}$,  (9)

where $z = 0$.

The coefficients of C and D are produced by summing the corresponding coefficients of each term in (8) and (9), respectively. That is, $c_j$ and $d_j$, for $0 \le j \le m-1$, are represented as

$c_j = za_j^{(0)} + b_{\lceil m/2\rceil-1}a_j^{(1)} + b_{\lceil m/2\rceil-2}a_j^{(2)} + \cdots + b_1a_j^{(\lceil m/2\rceil-1)} + b_0a_j^{(\lceil m/2\rceil)}$
Algorithm 1. COM_C(A, B′, G)
Input: $A = (a_{m-1}, \ldots, a_1, a_0)$, $B' = (b_{\lceil m/2\rceil-1}, \ldots, b_1, b_0)$, $G = (g_{m-1}, \ldots, g_1, g_0)$
Output: $C = [b_{\lceil m/2\rceil-1}x^{-1}A + b_{\lceil m/2\rceil-2}x^{-2}A + \cdots + b_1x^{-(\lceil m/2\rceil-1)}A + b_0x^{-\lceil m/2\rceil}A] \bmod G$
$a_j^{(0)} = a_j$; $c_j^{(0)} = 0$; $z = 0$;
for $i = 1$ to $\lceil m/2\rceil+1$ do
  for $j = 0$ to $m-1$ in parallel do
    if $j = 0$ then /* $j = 0$ */
      $a_{m-1}^{(i)} = a_0^{(i-1)}$;
      $c_0^{(i)} = c_0^{(i-1)} + b_{\lceil m/2\rceil-i+1}a_0^{(i-1)}$ (or $c_0^{(i)} = c_0^{(i-1)} + za_0^{(0)}$ if $i = 1$);
    else /* $j = 1, 2, \ldots, m-2, m-1$ */
      $a_{m-j-1}^{(i)} = a_{m-j}^{(i-1)} + a_0^{(i-1)}g_{m-j}$;
      $c_{m-j}^{(i)} = c_{m-j}^{(i-1)} + b_{\lceil m/2\rceil-i+1}a_{m-j}^{(i-1)}$ (or $c_{m-j}^{(i)} = c_{m-j}^{(i-1)} + za_{m-j}^{(0)}$ if $i = 1$);
    end if
  end for
end for
return C
and

$d_j = b_{\lceil m/2\rceil}\bar{a}_j^{(0)} + b_{\lceil m/2\rceil+1}\bar{a}_j^{(1)} + \cdots + b_{m-2}\bar{a}_j^{(\lfloor m/2\rfloor-2)} + b_{m-1}\bar{a}_j^{(\lfloor m/2\rfloor-1)}$.
Now, we obtain the following recurrence equations from the above equations:

$c_j^{(i)} = c_j^{(i-1)} + za_j^{(i-1)}$ for $i = 1$, and $c_j^{(i)} = c_j^{(i-1)} + b_{\lceil m/2\rceil-i+1}a_j^{(i-1)}$ for $2 \le i \le \lceil m/2\rceil+1$,

where $c_j^{(0)} = 0$ for $0 \le j \le m-1$ and $z = 0$, and

$d_j^{(i)} = d_j^{(i-1)} + b_{\lceil m/2\rceil+i-1}\bar{a}_j^{(i-1)}$ for $1 \le i \le \lfloor m/2\rfloor$,

where $d_j^{(0)} = 0$ for $0 \le j \le m-1$.
Algorithm 2. COM_D(A, B″, G)
Input: $A = (a_{m-1}, \ldots, a_1, a_0)$, $B'' = (b_{\lceil m/2\rceil}, b_{\lceil m/2\rceil+1}, \ldots, b_{m-2}, b_{m-1})$, $G = (g_{m-1}, \ldots, g_1, g_0)$
Output: $D = [b_{\lceil m/2\rceil}A + b_{\lceil m/2\rceil+1}xA + \cdots + b_{m-2}x^{\lfloor m/2\rfloor-2}A + b_{m-1}x^{\lfloor m/2\rfloor-1}A] \bmod G$
$\bar{a}_j^{(0)} = a_j$; $d_j^{(0)} = 0$;
for $i = 1$ to $\lfloor m/2\rfloor$ do
  for $j = 0$ to $m-1$ in parallel do
    if $j = 0$ then /* $j = 0$ */
      $\bar{a}_0^{(i)} = \bar{a}_{m-1}^{(i-1)}$;
      $d_{m-1}^{(i)} = d_{m-1}^{(i-1)} + b_{\lceil m/2\rceil+i-1}\bar{a}_{m-1}^{(i-1)}$;
    else /* $j = 1, 2, \ldots, m-2, m-1$ */
      $\bar{a}_j^{(i)} = \bar{a}_{j-1}^{(i-1)} + \bar{a}_{m-1}^{(i-1)}g_j$;
      $d_{j-1}^{(i)} = d_{j-1}^{(i-1)} + b_{\lceil m/2\rceil+i-1}\bar{a}_{j-1}^{(i-1)}$;
    end if
  end for
end for
return D
As shown in Algorithms 1 and 2, the parallel computational algorithms for C and D are driven by the above equations. The proposed COM_C(A, B′, G) and COM_D(A, B″, G) algorithms can be executed simultaneously, since there is no data dependency between computing C and D.
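The split accumulation of Algorithms 1 and 2 can be sketched in software as follows: one pass collects the negative powers of x (the C part) and one pass the non-negative powers (the D part), and their sum equals $A \cdot B \cdot x^{-\lceil m/2\rceil} \bmod G$. The bit-mask encoding and helper names are our own illustration, not the paper's hardware; $g_0 = 1$ is assumed, as in the paper.

```python
def mul_x(a, g, m):
    """One step of recurrence (7): a*x mod g (g encoded with its x^m bit)."""
    a <<= 1
    if (a >> m) & 1:
        a ^= g
    return a

def div_x(a, g, m):
    """One step of recurrence (6): a*x^{-1} mod g, assuming g_0 = 1."""
    a0 = a & 1
    return (a >> 1) ^ (a0 * ((g ^ 1) >> 1))  # (g^1)>>1 encodes x^{-1} mod g

def gf2_mulmod(a, b, g, m):
    """Direct a*b mod g, used to check the split computation."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = mul_x(a, g, m)
    return r

def montgomery_halved(A, B, g, m):
    """Return C + D = A*B*x^{-ceil(m/2)} mod g, split as in COM_C/COM_D."""
    h = -(-m // 2)                   # ceil(m/2)
    C, Ai = 0, A
    for i in range(1, h + 1):        # COM_C: negative powers of x
        Ai = div_x(Ai, g, m)         # Ai = A*x^{-i} mod g
        if (B >> (h - i)) & 1:       # coefficient b_{h-i}
            C ^= Ai
    D, Abar = 0, A
    for i in range(m - h):           # COM_D: non-negative powers of x
        if (B >> (h + i)) & 1:       # coefficient b_{h+i}
            D ^= Abar
        Abar = mul_x(Abar, g, m)     # Abar = A*x^{i+1} mod g
    return C ^ D

m, g = 8, 0b100011011                # example: the AES polynomial
A, B = 0b1100101, 0b0110111
P = montgomery_halved(A, B, g, m)    # A*B*x^{-4} mod g
for _ in range(-(-m // 2)):          # undo the factor x^{-ceil(m/2)}
    P = mul_x(P, g, m)
assert P == gf2_mulmod(A, B, g, m)
```
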
3.2 Proposed Multiplier
Based on the proposed algorithms, the hardware architecture of the proposed semi-systolic Montgomery multiplier is shown in Figure 1. The upper, lower, and middle parts of the array compute C, D, and C+D, respectively. Our architecture is composed of ⌈m/2⌉+1 $U_0^{(i)}$ cells, (m−2)(⌈m/2⌉+1) $U_j^{(i)}$ cells, ⌊m/2⌋ $V_0^{(i)}$ cells, (m−2)⌊m/2⌋ $V_j^{(i)}$ cells, and one S cell.
Figure 1. The proposed semi-systolic Montgomery multiplier over GF(2^m)
The detailed circuits of the cells in Figure 1 are depicted in Figure 2 through Figure 4, where ⊕, ⊗, and D denote an XOR gate, an AND gate, and a one-bit latch (flip-flop), respectively.

Figure 2. Circuit configuration of the U_0^(i) cell (a) and the U_j^(i) cell (b)
The proposed semi-systolic multiplier has a latency of m/2+1 clock cycles. Each clock cycle takes the delay of one 2-input AND gate, one 2-input XOR gate, and one 1-bit latch. The space complexity of this multiplier is 2m^2+m-1 2-input AND gates, 2m^2+2m-1 2-input XOR gates, and 3m^2+2m-1 (for odd m) or 3m^2+3m-1 (for even m) 1-bit latches.
Note that the U_j^(i) (U_0^(i)) and V_j^(i) (V_0^(i)) cells in Figures 2 and 3 are functionally equivalent, so their computations can be executed in parallel, and the computed results are added in the S cell. In Figure 4, D* denotes a one-bit latch when m is even; otherwise it is ignored.
Figure 3. Circuit configuration of the V_0^(i) cell (a) and the V_j^(i) cell (b)
4 COMPLEXITY ANALYSIS

In CMOS VLSI technology, each gate is composed of several transistors [18]. We adopt A_AND2 = 6, A_XOR2 = 6, and A_LATCH1 = 8, where A_GATEn denotes the transistor count of an n-input gate. Also, for a further comparison of time complexity, we adopt the practical integrated circuits in [19] and make the following assumptions, discussed in detail in [15]: T_AND2 = 7, T_XOR2 = 12, and T_LATCH1 = 13,
where T_GATEn denotes the propagation delay of an n-input gate.
The S cell adds the two partial results coefficient-wise, p_j = c_j + d_j, to produce the final outputs p_0, ..., p_{m-1}.
Figure 4. Circuit configuration of S cell
Table 1. Comparison of semi-systolic polynomial basis architectures

gate/delay              | [15]   | Fig. 1 (even m / odd m)
Number of cells         | m^2    | U_0: m/2+1; U_j: (m-1)(m/2+1); V_0: m/2; V_j: (m-1)(m/2); S: 1
2-input AND             | 2m^2   | 2m^2+m-1
2-input XOR             | 2m^2   | 2m^2+2m-1
3-input XOR             | 0      | 0
one-bit latch           | 3m^2   | 3m^2+3m-1 / 3m^2+2m-1
Total transistor count  | 48m^2  | 48m^2+42m-20 / 48m^2+34m-20
Cell delay (ns)         | 32     | 32
Latency                 | m      | 0.5m+1 / 0.5m+0.5
Total delay (ns)        | 32m    | 16m+32 / 16m+16
A circuit comparison between the proposed multiplier and the related multiplier is given in Table 1. Although the proposed multiplier has nearly the same space complexity as that of Huang et al. [15], its time complexity is reduced by approximately 50%.
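The even-m column of Table 1 can be rederived mechanically from the stated gate counts and unit costs (6 transistors per 2-input AND/XOR, 8 per latch, and a 7 + 12 + 13 = 32 ns cell delay); the helper below is ours, for illustration only:

```python
# Recompute the Fig. 1 (even m) entries of Table 1 from the stated gate
# counts and unit costs: 6 transistors per 2-input AND/XOR gate, 8 per
# 1-bit latch, and a 7 + 12 + 13 = 32 ns cell delay.
def fig1_costs(m):
    assert m % 2 == 0                       # even-m case of Table 1
    and2 = 2 * m * m + m - 1
    xor2 = 2 * m * m + 2 * m - 1
    latch = 3 * m * m + 3 * m - 1           # even-m latch count
    transistors = 6 * and2 + 6 * xor2 + 8 * latch  # = 48m^2 + 42m - 20
    latency = m // 2 + 1                    # clock cycles
    return transistors, 32 * latency        # total delay in ns
```

For m = 8 this gives 3388 transistors and 160 ns, against 48m^2 = 3072 transistors and 32m = 256 ns for [15], showing where the roughly 50% time saving comes from.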
5 CONCLUSION

In this paper, we propose a cellular semi-systolic architecture for Montgomery multiplication over finite fields. We choose a novel Montgomery factor which is highly suitable for the design of parallel structures. We also divide our architecture into three parts and compute two of them in parallel, so that the time complexity is reduced by nearly 50% compared to the recent study while maintaining similar space complexity. We expect that our architecture can be used efficiently in various applications that demand high-speed arithmetic computation.
6 ACKNOWLEDGMENT
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2011-0014977).
7 REFERENCES
1. W. W. Peterson, and E. J. Weldon, Error-
Correcting Codes, MIT Press, Cambridge
(1972).
2. R. E. Blahut. Theory and Practice of Error
Control Codes, Addison-Wesley, Reading
(1983).
3. W. Diffie and M. E. Hellman, “New
directions in cryptography,” IEEE
Transactions on Information Theory, vol.
22, no. 6, pp. 644-654 (1976).
4. B. Schneier, Applied Cryptography, John
Wiley & Sons press, 2nd edition (1996).
5. P. Montgomery, “Modular Multiplication
without Trial Division,” Mathematics of
Computation, vol. 44, no. 170, pp. 519–521
(1985).
6. C. Koc and T. Acar, “Montgomery
Multiplication in GF(2k),” Designs, Codes
and Cryptography, vol. 14, no. 1, pp. 57–69
(1998).
7. H. Wu, “Montgomery Multiplier and
Squarer for a Class of Finite Fields,” IEEE
Trans. Computers, vol. 51, no. 5, pp. 521-
529 (2002).
8. C. Y. Lee, J. S. Horng, I. C. Jou and E. H.
Lu, “Low-Complexity Bit-Parallel Systolic
Montgomery Multipliers for Special Classes
of GF(2m),” IEEE Transactions on
Computers, vol. 54, no. 9, pp. 1061–1070
(2005).
9. C. W. Chiou, C. Y. Lee, A. W. Deng and J.
M. Lin, “Concurrent Error Detection in
Montgomery Multiplication over GF(2m),”
IEICE Trans. Fundamentals of Electronics,
Communications and Computer Sciences,
vol. E89-A, no. 2, pp. 566-574 (2006).
10. A. Hariri and A. Reyhani-Masoleh, “Bit-
Serial and Bit-Parallel Montgomery
Multiplication and Squaring over GF(2m),”
IEEE Trans. Computers, vol. 58, no. 10, pp.
1332-1345 (2009).
11. A. Hariri and A. Reyhani-Masoleh,
“Concurrent Error Detection in Montgomery
Multiplication over Binary Extension
Fields,” IEEE Trans. Computers, vol. 60, no.
9, pp. 1341-1353 (2011).
12. C. W. Chiou, “Concurrent Error Detection
in Array Multipliers for GF(2m) Fields,” IEE
Electronics Letters, vol. 38, no. 14, pp. 688–
689 (2002).
13. C. Y. Lee, C. W. Chiou, and J. M. Lin,
“Concurrent Error Detection in a
Polynomial Basis Multiplier over GF(2m),”
J. Electronic Testing: Theory and
Applications, vol. 22, no. 2, pp. 143-150
(2006).
14. S. Bayat-Sarmadi and M.A. Hasan,
“Concurrent Error Detection in Finite Field
Arithmetic Operations Using Pipelined and
Systolic Architectures,” IEEE Trans.
Computers, vol. 58, no. 11, pp. 1553-1567
(2009).
15. W. T. Huang, C. H. Chang, C. W. Chiou
and F. H. Chou, “Concurrent error detection
and correction in a polynomial basis
multiplier over GF(2m),” IET Information
Security, vol. 4, no. 3, pp. 111-124 (2010).
16. R. Lidl and H. Niederreiter, Introduction to
Finite Fields and Their Applications,
Cambridge Univ. Press (1986).
17. J. C. Jeon and K. Y. Yoo, “Montgomery
exponent architecture based on
programmable cellular automata,”
Mathematics and Computers in Simulation,
vol. 79, pp. 1189-1196 (2008).
18. N. Weste, K. Eshraghian, Principles of
CMOS VLSI design: a system perspective,
Addison-Wesley, Reading, MA (1985).
19. STMicroelectronics, Available at
http://www.st.com/
An AIS Inspired Alert Reduction Model

Mohammad Mahboubian, Nur Izura Udzir, Shamala Subramaniam, Nor Asila Wati Abdul Hamid
[email protected], {izura, shamala, asila}@fsktm.upm.edu.my
Faculty of Computer Science and Information Technology
University Putra Malaysia, Serdang, Selangor, Malaysia

International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 130-139
The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)

Abstract- One of the most important topics in the field of intrusion detection systems is finding a way to reduce the overwhelming number of alerts generated by IDSs in the network. Inspired by danger theory, which is one of the most important theories in artificial immune systems (AIS), we propose a complementary subsystem for IDSs which can be integrated into any existing IDS model to aggregate the alerts in order to reduce them, and subsequently to reduce the false alarms among the alerts. After evaluation using different datasets, attack scenarios, and rule sets, in the best case our model managed to aggregate the alerts at an average rate of 97.5 percent.

Keywords—Intrusion detection system; Alert fusion; Alert correlation; Artificial Immune System; Danger theory

1.0 Introduction

In recent years intrusion detection systems (IDS) have been widely adopted in computer networks as must-have appliances to monitor the network and look for malicious activities. They can be deployed either at the network level, to monitor the activities in the network, or at the host level, to monitor the activities on a particular machine. In both cases, after detecting a malicious activity they send an alert to the network administrator.

Each alert contains information about this malicious activity, such as the source IP address, source port number, destination IP address, etc. Thus, for a single attack on a network or any of its hosts, there will be thousands of alerts generated and sent to the network administrator. Also, some of these alerts may not be valid, being generated by the wrong detections of an IDS (false positives) in the network. This is crucial, as every day a significant number of alerts is generated, and processing these alerts can be a tedious task for network administrators, especially if many of these alerts are invalid results of false positive detection. Therefore, in the last few years, one of the most studied topics in the field of network security, and more specifically intrusion detection systems, has been to find solutions for this problem.

To reduce the overwhelming number of generated alerts, some researchers have suggested aggregating alerts into clusters, which is also called alert fusion. The final objective of aggregation is to group all similar alerts together. During aggregation, alerts are put into groups based on the similarity of their corresponding features [25], such as source IP, destination IP, source port, destination port, attack class and timestamp. On the other hand, some researchers have investigated different approaches to correlate attack scenarios based on the alerts. Alert correlation provides the network administrator with a higher-level view of a multi-staged attack.

Three main approaches have been used in the literature for correlating alerts into attack scenarios. In the first approach, the relationships between alerts are hardcoded in the system. These methods are limited to the
predefined rules available in the knowledge base of the system. In the second approach, to overcome the problem of the first, other techniques such as machine learning and data mining have been suggested to extract relationships between alerts, but these approaches require a lengthy initial period of training. In these approaches, the co-occurrence of alerts within a predefined time window is used as an important feature for the statistical analysis of alerts. This involves pair-wise comparison between alerts, since every two alerts might be similar and therefore can be correlated [25]. But these repeated comparisons between alerts lead to a very large computational overhead, especially in large-scale networks, where we may expect thousands of alerts per minute.

Finally, in the third approach, some of the recent works have focused on filtering and omitting false positive alerts.

In this paper we propose a new aggregation method inspired by the artificial immune system, and more specifically danger theory, which attempts to aggregate the generated alerts based on the prediction of attack scenarios. The proposed algorithm is able to reduce alerts before passing them to the network administrator and also to remove false positives from the generated alerts.

The remainder of this paper is organized as follows: in Section 2 we present a brief review of previous works in the literature. In Section 3 we describe the proposed model and discuss some of the aspects related to alert aggregation. Section 4 presents experimental results, and finally we conclude this paper in Section 5.
Artificial Immune System:

An Artificial Immune System is a mathematical model based on the human body's defence system. The natural immune system is a remarkable and complex defence mechanism that protects the organism from foreign invaders, such as viruses. It is therefore vital for the defence system to distinguish between self-cells and other cells, as well as to ensure that lymph cells do not react against the human body's own cells. To achieve this, the human body goes through a "Negative Selection" process [16] in which T-cells that react against self-proteins are destroyed, so only those cells that do not have any similarity to self-proteins survive. These surviving cells, now called matured T-cells, are ready to protect the body against foreign antigens.
Danger Theory:

This theory was first proposed by Matzinger in 1994 [17]. According to this theory, not all foreign cells in our body should be considered antigens. For instance, the food we eat is also a foreign 'invader' to our body, but the human body does not react to this foreign invader.

Danger theory suggests that foreign invaders which are dangerous will induce the generation of danger signals by initiating cellular stress or cell death [19].

These molecules are then detected by APCs, critical cells in the initiation of an immune response, leading to a protective immune defence. In general, there are two types of danger signals: in the first category the danger signals are generated by the body itself, and in the second category the danger signals are derived from invading organisms, e.g. bacteria [20].
2.0 Related Works

Recently, there have been several proposals on alert fusion. Generally, each method combines duplicated alerts (alerts which are very similar to each other) from the same or different sensors to remove a large share of the alerts. Here we overview some of the work done in the last few years.

To measure similarities between alerts, the pioneers in the field of alert aggregation, Valdes and Skinner [1], proposed a method in which alerts are grouped into different clusters based on their overall similarity, determined from their similarities on the corresponding features. Unfortunately, this method relies on expert knowledge to determine the similarity degree between attack classes.

In [2], the authors presented an algorithm to fuse multiple heterogeneous alerts to create scenarios, building scenarios by adding each alert to the most likely scenario. To do so, it computes the probability that a new alert belongs to one of the existing scenarios.

Ning et al. [3] constructed a series of prerequisites and consequences of intrusions. Then, by developing a formal model, they managed to correlate related alerts by matching the outcome of some previously seen alerts with the precondition of some later alerts. Julisch [4] used root causes to solve the problem of alert attribute similarity. Although this approach was effective, finding the root causes of alert attributes is very difficult and seems impractical in large networks. Cheung et al. [5] used the Correlated Attack Modelling Language (CAML) for modelling multistep attack scenarios and then let the correlation engines process these models to recognize attack scenarios. However, it is not easy for this algorithm to model new variant attacks. Valeur et al. [6] introduced a 10-step Comprehensive IDS Alert-Correlation (CIAC) system that uses exact feature similarity in two out of the ten steps of their alert correlation system. Qin and Lee [7] proposed a statistics-based correlation algorithm to predict novel attack strategies. This approach combines correlation based on Bayesian inference with a broad range of indicators of attack impacts and correlation based on the Granger Causality Test. However, this algorithm cannot be used to predict complex multi-staged attacks because of its high rate of false positive results.

In another work, Qin and Lee [8] proposed an approach which applies Bayesian networks to IDS alerts in order to conduct probabilistic inference of attack sequences and to predict possible upcoming attacks. In [9] the authors introduced a bi-directional and multi-host causality to correlate distinct network and host IDS alerts. But if the number of false positive alerts increases, mistakes in recognition may occur. Zhu and Ghorbani [10] use the probabilistic output from two different neural network approaches, namely the Multilayer Perceptron (MLP) and the Support Vector Machine (SVM), to determine the correlation between the current alert and previous alerts. They use an Alert Correlation Matrix (ACM) to store the correlation level of any given two types of alerts. Wang et al. [11] proposed a new data mining algorithm to construct attack scenarios. This algorithm allows multi-stage attack behaviours to be recognized, and it also predicts the potential attack
steps of the attacker. However, sufficient training is required to calculate the threshold used in this approach. To detect DDoS attacks, Lee [12] proposed clustering analysis using the concept of entropy. He then calculated the similarity value of attack attributes between two alerts using the Euclidean distance. Fava et al. [13] proposed a new approach based on Variable Length Markov Models (VLMM), a framework for the characterization and prediction of cyber attack behaviour. VLMM can predict the occurrence of a new attack; however, it does not know what kind of attack it is. Zhang et al. [14] use the Forward and Viterbi algorithms based on an HMM to recognize the attacker's intention and forecast the next possible step of a multi-step attack. With a Finite State Machine (FSM) designed for forecasting attacks, the Forward algorithm is used to determine the most probable attack scenario, and the Viterbi algorithm is used to identify the attacker's intention. Du et al. [15] proposed two ensemble approaches to project the likely future targets of ongoing multi-stage attacks instead of future attack stages.
3.0 Proposed Model
Figure 1 – The proposed model
We assumed that all types of computer attacks can be categorized into the following general groups:

a) One-to-One: in which the hacker attacks one of the machines on the network. This can be a probe or a DoS attack, or the exploitation of services on that host.

b) Many-to-One: in which many machines (zombies) attack one of the machines on the network. Most probably this is a form of DDoS attack.

c) One-to-Many: in which the hacker attacks many machines on the network, such as in a probe attack.
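The three categories above can be recognized directly from the distinct source and destination addresses within a group of alerts; the sketch below (the alert field names are illustrative, not the authors' implementation) shows the idea:

```python
# Classify an alert group into the three categories above by counting
# distinct source and destination IPs (field names are illustrative).
def classify_group(alerts):
    srcs = {alert["src_ip"] for alert in alerts}
    dsts = {alert["dst_ip"] for alert in alerts}
    if len(srcs) > 1 and len(dsts) == 1:
        return "many-to-one"   # e.g. a DDoS from many zombies
    if len(srcs) == 1 and len(dsts) > 1:
        return "one-to-many"   # e.g. a network probe
    return "one-to-one"
```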
According to danger theory, only an alert or a group of alerts can be considered valid (dangerous) if they initiate the danger signal.

To raise the danger signal some conditions must be satisfied, and these conditions are defined prior to the implementation of this system. We therefore have a list of conditions, and if any of these conditions is satisfied by a group of alarms, that group of alarms is considered dangerous and is reported to the network administrator immediately.

The proposed model not only tries to aggregate the alerts based on their common features but also correlates the attacks internally for better aggregation of the alerts.

Figure 1 shows our proposed model. This model consists of six components, and the design of this model is such that any of these components can be replaced with a new implementation of that component depending on the network situation.
Figure 1 components: Alert Collector, Alert Parser, Alert Filtration and Validation, Danger Signal Detection, Final Alert Preparation Module, and Database.
The following is an illustration of the main components of this model:

a) Alert Collector module (CM): This module is responsible for collecting all the alerts from all the IDS sensors in the network. Therefore, after the generation of an alert by an IDS sensor, instead of sending the alert directly to the administrator, it should be sent to this module. Once this module receives an alert, that alert is registered into the model to be processed. Another objective of this module is to standardize the alerts, because IDS sensors might generate alerts in different formats; in order to process and compare the received alerts, they should be in the same format. A further point about this module is that, as it receives an enormous volume of alerts, it must be implemented using a very robust multi-threaded software technology.

b) Alert Filtration and Validation (FVM): One of the prerequisites of using this model is to keep a list of the IP addresses and services running on all of the machines in the network under our administrative territory. By utilizing this information, this module filters out those alerts which do not make sense, such as an alert of an attack on a web server on a machine without a web server. This module also aggregates alerts which are exactly similar feature-wise, helping to reduce redundant alerts.

c) Alert Parser module (PM): The main objective of this module is to categorize and classify all validated alerts into one of the groups we mentioned earlier: one-to-one, many-to-one and one-to-many.

d) Danger Signal Detection Module (DSDM): This is the most important module in this model. It is the implementation of one of the most famous theories in the field of artificial immune systems, known as danger theory. Its main function is to analyze all received alerts in a specific time window in an attempt to correlate a multi-step attack, aggregating all related alerts into a group of alerts which is later presented to the administrator as a single alert. In order to achieve this objective, a series of generalized rules are hardcoded into this module. Based on these rules and the actual characteristics of the available alerts, this module dynamically decides whether a group of alerts is related to a multi-step attack and can be aggregated into a single alert.

e) Final Alert Preparation module (FAPM): The results of the previous module are sent to this last module in order to make them presentable before being passed to the administrator.
3.1 Model Implementation

The proposed model has a module named the Danger Signal Detection Module (DSDM), which decides whether a group of alerts is likely to raise the danger signal, and reports a dangerous group of alerts to the network administrator.

The steps to implement this model are:
a) First we provide this model with information about the machines on the network, such as their IP addresses, the list of services running on each machine and, in the case of a host IDS, the id of the IDS on that machine. This step should be repeated periodically in order to prevent concept drift.

b) Next, all alerts are grouped into one of the groups explained earlier. Priority is given to the alerts which are exactly similar in terms of their features; after grouping these alerts, priority goes to the second type of alerts, namely "one-to-many". This is because, before attacking a network, the attacker needs to know about the machines on the network, so he/she initiates a probe of the network, which results in these types of alerts being generated. After this group, the lowest priorities belong to the "one-to-one" and "many-to-one" types of alerts. The grouping is done within an adjustable time window and is based on the source IP address, destination IP address, destination port number, timestamp and, in the case of a host-based IDS, the id of the IDS.

c) Then each group is checked to find out whether that group is capable of raising the danger alarm (danger theory).

d) For each group which satisfies the check, a record is registered in a database for the purpose of keeping track of the status of the attack, as this is one of the sources which can indicate the existence of a danger signal. Finally, an alert is sent to the network administrator containing the information about the attack, as well as all the machines' IP addresses (source and destination) or port numbers which contribute to this alarm.

e) Alerts generated by network-based IDSs and host-based IDSs are grouped separately, but host-based IDSs' alerts are important in determining the severity of network-based IDS alerts.
3.2 Danger Signal Detection Module

This module indicates whether or not a group of alerts is capable of raising the danger alarm, and this is done by defining a list of rules. The following are some of the most important rules in this model:

a) In general, the existence of a one-to-many alert group (generated by a network-based IDS) in the database followed by a one-to-one alert group (generated by a host-based IDS) will raise the danger alarm. This is because a hacker first scans the machines on a network and, after finding a machine with a particular service running on it, tries to exploit that service to gain access to that machine.

b) If in the alert group the source IPs are external and the port number(s) do not match actual services running on the internal machines, this is an indication of a danger signal and will be reported.

c) If in the alert group the source IPs are internal and the port number(s) match the actual services running on the destination machine(s), and the number of alerts in this group is not more than a predefined value, then this group is ignored.

d) If in the alert group there is more than one source IP and a single destination IP, this will raise the danger alarm.

e) If in the alert group there is one single source IP and more than one destination IP, this will raise the danger alarm.
f) If in the alert group there is more than one source IP and one destination IP, and we recently have a record in the database related to this source and IP address (probe), then this will raise the danger signal.
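Rules (b), (d) and (e) above can be sketched as a predicate over an alert group; the field names and the per-host service inventory below are illustrative assumptions, not the authors' implementation:

```python
# Sketch of DSDM rules (b), (d) and (e) as a predicate over an alert group.
# Field names and the per-host service inventory are illustrative.
def raises_danger(group, services_by_host):
    srcs = {a["src_ip"] for a in group}
    dsts = {a["dst_ip"] for a in group}
    # rule (d): many sources, one destination
    if len(srcs) > 1 and len(dsts) == 1:
        return True
    # rule (e): one source, many destinations
    if len(srcs) == 1 and len(dsts) > 1:
        return True
    # rule (b): external source hitting a port with no matching service
    for a in group:
        if a["external_src"] and a["dst_port"] not in services_by_host.get(a["dst_ip"], set()):
            return True
    return False
```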
The similarity function S between two given alerts a and b is calculated as follows:

S(a, b) = Σ_{k=1..n} α_k · s_k(a, b)    (1)

where n is the total number of features, s_k(a, b) is the similarity of feature k between these alerts, which can be between 0 and 1, and α_k is the weight of that particular feature, such that

Σ_{k=1..n} α_k = 1    (2)

Having a different weight for each feature leads to more precise grouping of the alerts. Among our feature set, the source IP address and timestamp have the highest weights.

Therefore, to calculate the similarity of two alerts we need to calculate the per-feature similarities s_k for the grouping features (the source IP address, destination IP address, destination port number and timestamp and, in the case of a host-based IDS, the id of the IDS). After normalizing the formula in (1), the similarity value between two alerts lies between 0 and 1: 0 when two alerts are completely different and 1 when two alerts are identical.
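Equation (1) is a weighted vote over per-feature similarities. In this sketch the per-feature similarity s_k is a simple exact match (0 or 1) and the weights are illustrative only, with the source IP and timestamp weighted highest as the text suggests; a real deployment may use graded per-feature similarities:

```python
# Weighted alert similarity of Eq. (1): S(a,b) = sum_k alpha_k * s_k(a,b).
# Here s_k is an exact match (0 or 1); weights are illustrative, not the
# authors' values, and must sum to 1 as required by Eq. (2).
def similarity(alert_a, alert_b, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9   # Eq. (2)
    return sum(alpha * (1.0 if alert_a[k] == alert_b[k] else 0.0)
               for k, alpha in weights.items())

# Source IP and timestamp carry the most weight.
WEIGHTS = {"src_ip": 0.3, "timestamp": 0.3, "dst_ip": 0.2, "dst_port": 0.2}
```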
4.0 Experimental Results
In order to evaluate our model, we first set up a network of seven computers in which two computers play the role of attackers, with a different class of IP addresses so that they are considered external machines (Figure 2). Next, for each machine inside the network we configured different services, such as a file server, a web service, a remote desktop service and so on. As for the IDS, we used our own proposed IDS in [21].
Table 1 - The services running on each workstation
Next we simulated different kinds of attacks in order to generate alerts, starting with probes (including vertical and horizontal port scans) and DoS attacks, and finally exploiting different services on the workstations to gain access to the machines and elevate the access level. Table 1 shows the services running on each of the workstations.

The first attacker (10.8.1.100) starts by scanning the whole range of the network and finding the running services on each of the discovered workstations. He then tries to exploit different services on different workstations one by one. At the same time, the second attacker (10.8.1.200) scans the whole network and initiates a DoS attack against one of the discovered machines. These activities caused the IDS to generate more than 3000 alerts. These alerts were processed by this model and the final number of alerts was 31; the proposed model therefore showed a very good alert reduction performance of 98.95%.
Workstation  Service(s)
10.8.0.2     ftp (port 21)
10.8.0.3     Web server (port 80)
10.8.0.4     smtp (port 25) and imap (port 143)
10.8.0.5     RDP (port 3389)
10.8.0.6     SSH (port 22)
Figure 2 – The network setup for the first experiment
To better evaluate our proposed model, we considered the LLDOS1.0 and LLDOS2.0 attack scenarios of DARPA 2000 [16] as test datasets. These datasets contain a large amount of normal data and attack data and are well known among IDS researchers [22, 23]. For this experiment we used a part of these datasets. In order to simulate the networks, we used NetPoke from DARPA to replay the datasets, and once again we used our own developed IDS for attack detection and for generating alerts. A total of 12068 alerts was generated by our IDS. Then we updated the model with the services running on each of the machines in these networks. Finally, we ran these experiments multiple times, each time with a different set of rules in the Danger Signal Detection Module. In all cases we made sure that these rules were sufficiently general that they could also be utilized in other networks; they were not crafted only for these experiments.
The following tables show the reduction percentage at each level of our model for the worst and best cases that we achieved. These results show that it is possible for this model to achieve an alert reduction rate of 98.5% for LLDOS1.0 and 97.02% for LLDOS2.0 if the correct rule set is used in this model. Some of the modules are not meant for alert reduction; they mostly handle other issues, such as parsing the incoming alerts or rearranging the alerts to make them more presentable for the end user, in this case the network administrator.
Table 2 - LLDOS1.0 worst case result

         FVM    PM     DSDM   FAPM   SUM
Input    7054   4901   4893   1951   7054
Output   4901   4893   1951   1945   1945
%        -      -      60.13  -      72.4
Table 3 - LLDOS1.0 best case result after updating the rules

         FVM    PM     DSDM   FAPM   SUM
Input    7054   4901   4893   112    7054
Output   4901   4893   112    106    106
%        -      -      97.71  -      98.5
Table 4 - LLDOS2.0 worst case result

         FVM    PM     DSDM   FAPM   SUM
Input    5014   3818   3812   1909   5014
Output   3818   3812   1909   1915   1915
%        -      -      49.92  -      61.8
Table 5 - LLDOS2.0 best case result after updating the rules

         FVM    PM     DSDM   FAPM   SUM
Input    5014   3818   3812   153    5014
Output   3818   3812   153    149    149
%        -      -      95.98  -      97.02
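The percentages in Tables 2-5 follow directly from the stage input/output counts as (input - output)/input; a one-line helper (ours, for checking only) reproduces them:

```python
# Reduction percentage used in Tables 2-5: 100 * (input - output) / input.
def reduction_rate(n_in, n_out):
    return 100.0 * (n_in - n_out) / n_in
```

For LLDOS1.0, reduction_rate(4893, 1951) and reduction_rate(7054, 1945) reproduce the 60.13 and 72.4 entries of Table 2, and reduction_rate(7054, 106) gives the 98.5% best case.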
As one of our immediate future works, we intend to experiment with this model on the "Capture the Flag 2010 dataset" [24].
5.0 Conclusion

In this paper we proposed a model to fuse the alerts generated by the IDSs in a computer network. Inspired by the human defence system, this model utilizes one of the most important theories in Artificial Immune Systems (AIS), danger theory, and attempts to aggregate alerts based on a general set of predefined rules, and also to reduce false alarms. In contrast with existing rule-based alert correlation models, which are limited by their sets of predefined rules, this model does not have such a limitation in terms of alert aggregation, because the predefined rules in this model are very general. After experimenting with this model in a real network environment and also using existing datasets from the literature, the proposed model managed to aggregate alerts at an average rate of 97.5 percent.
References
[1] A. Valdes and K. Skinner, "Probabilistic Alert Correlation", In Proceedings of the 4th International Symposium on Recent Advances in Intrusion Detection, 2001, pp. 54-68.
[2] O. M. Dain and R. K. Cunningham, "Fusing a heterogeneous alert stream into scenarios", In Proceedings of the 2001 ACM Workshop on Data Mining for Security Applications, pp. 1-13, 2001.
[3] P. Ning, Y. Cui, and D. S. Reeves, "Constructing attack scenarios through correlation of intrusion alerts", In Proceedings of the 9th ACM Conference on Computer and Communications Security, pp. 245-254, 2002.
[4] K. Julisch, "Using root cause analysis to handle intrusion detection alarms", PhD Thesis, University of Dortmund, Germany, 2003.
[5] S. Cheung, U. Lindqvist and M. W. Fong, "Modelling multistep cyber attacks for scenario recognition", In Proceedings of the Third DARPA Information Survivability Conference and Exposition (DISCEX III), Washington, D.C., April 2003.
[6] F. Valeur, G. Vigna, C. Kruegel and R. A. Kemmerer, "A comprehensive approach to intrusion detection alert correlation", IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 3, pp. 146-169, Jul.-Sep. 2004.
[7] X. Qin and W. Lee, "Discovering novel attack strategies from INFOSEC alerts", In Proceedings of the 9th European Symposium on Research in Computer Security (ESORICS 2004), pp. 439-456, 2004.
[8] X. Qin and W. Lee, "Attack plan recognition and prediction using causal networks", In Proceedings of the 20th Annual Computer Security Applications Conference, 2004.
[9] S. King, M. Mao, D. Lucchetti, and P. Chen, "Enriching intrusion alerts through multi-host causality", In Proceedings of the Network and Distributed Systems Security Symposium, San Diego, CA, 2005.
[10] B. Zhu and A. A. Ghorbani, "Alert correlation for extracting attack strategies", International Journal of Network Security, vol. 3, no. 3, pp. 244-258, November 2006.
[11] L. Wang, Z. T. Li and Q. H. Wang, "A novel technique of recognizing multi-stage attack behaviour", In Proceedings of the IEEE International Workshop on Networking, Architecture and Storages, p. 188, 2006.
[12] Keunsoo Lee, Juhyun Kim, Ki Hoon Kwon, Younggoo Han and Sehun Kim, "DDoS attack detection method using cluster analysis", Expert Systems with Applications, vol. 34, no. 3, 2007, pp. 1659-1665.
[13] D. Fava, S. R. Byers, S. J. Yang, "Projecting Cyber Attacks through Variable Length Markov Models", IEEE Transactions on Information Forensics and Security, vol. 3, issue 3, September 2008.
[14] S. H. Zhang, Y. D. Wang and J. H. Han, "Approach to forecasting multistep attack based on HMM", Computer Engineering, vol. 34, no. 6, pp. 131-133, March 2008.
[15] H. Du, D. Liu, J. Holsopple, and S. J. Yang, "Toward Ensemble Characterization and Projection of Multistage Cyber Attacks", In Proceedings of IEEE ICCCN'10, Zurich, Switzerland, August 2-5, 2010.
[16] Duan Shan-Rong, Li Xin, "The anomaly intrusion detection based on immune negative selection algorithm", In Proceedings of the IEEE International Conference on Granular Computing (GRC '09), 978-1-4244-4830-2, 2009.
[17] P. Matzinger, "Tolerance, Danger and the Extended Family", Annual Review of Immunology, vol. 12, 1994.
[18] U. Aickelin, P. Bentley, S. Cayzer, J. Kim, J. McLeod, "Danger Theory: The Link between AIS and IDS", In Proceedings of the Second International Conference on Artificial Immune Systems, Edinburgh, U.K., September 2003.
[19] P. Matzinger, "The Danger Model: A Renewed Sense of Self", Science 296, 2002.
[20] S. Gallucci, P. Matzinger, "Danger signals: SOS to the immune system", Current Opinion in Immunology 13, pp. 114-119, 2001.
[21] M. Mahboubian, N. A. W. A. Hamid, "A Machine Learning based AIS IDS", In Proceedings of GCSE 2011, Dubai.
[22] G. Xiang, X. Dong, G. Yu, "Correlating Alerts with a data mining based approach", In Proceedings of the 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service.
[23] B. Cheng, G. Liao, C. Huang, "A novel probabilistic matching algorithm for multi-stage attack forecasts", IEEE Journal on Selected Areas in Communications, vol. 29, no. 7, August 2011.
[24] "Capture the flag traffic dump", http://www.defcon.org/html/links/dc-ctf.html.
[25] Reza Sadoddin, Ali A. Ghorbani, "An incremental frequent structure mining framework for real-time alert correlation", Computers & Security, vol. 28, issues 3-4, May-June 2009, pp. 153-173, ISSN 0167-4048, 10.1016/j.cose.2008.11.010.
Trust Measurements Yield Distributed Decision Support in Cloud Computing
1Edna Dias Canedo, 1Rafael Timóteo de Sousa Junior, 2Rhandy Rafhael de Carvalho and 1Robson de Oliveira Albuquerque
1Electrical Engineering Department, University of Brasília - UnB - Campus Darcy Ribeiro - Asa Norte - Brasília - DF, Brazil, 70910-900
2Informatics Institute - INF, Federal University of Goiás - UFG - Campus Samambaia - Bloco IMF I - Goiânia - GO, Brazil, 74001-970
[email protected], [email protected], [email protected], [email protected]
ABSTRACT
This paper proposes a trust model to ensure reliable file exchange between the users of a private cloud. To validate the proposed model, a simulation environment built with the CloudSim tool was used. Running simulations of the adopted scenarios allowed us to calculate the trust table of the nodes (virtual machines) and select those considered more reliable; to identify that the metrics we adopted directly influence the measurement of trust in a node; and to verify that the proposed trust model effectively allows the selection of the most suitable machine to perform the exchange of files.
KEYWORDS
Distributed systems; cloud computing; availability; file exchange; trust model.
1 INTRODUCTION
The development of virtualization technologies allows the on-demand, scalable sale of resources and computing infrastructure able to sustain web applications. This gave rise to cloud computing, generating a growing tendency toward applications that can be accessed efficiently, independently of their location. The arrival of this technology creates the need to rethink how applications are developed and made available to users, while at the same time motivating the development of technologies that can support its enhancement.
Since IBM Corporation announced its program for cloud computing at the end of 2007, other major information technology (IT) companies have progressively adopted clouds. Examples include Google App Engine, which lets you create and host web applications on the same systems that power Google's applications; Amazon Web Services (AWS), Amazon having been one of the first companies to provide cloud services to the public; Amazon's Elastic Compute Cloud (EC2), which allows users to rent virtual machines on which they can run their own applications, providing complete control over their computational resources; Amazon's Simple Storage Service (S3), which allows the storage of files; Apple iCloud; and Microsoft's Azure Services Platform, which introduced cloud computing products [1]. However, cloud computing also presents risks related to data security in its different aspects, such as confidentiality, integrity and authenticity [2-4].
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 140-151The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
140
This paper proposes a trust model for the exchange of files between peers in a private cloud. The private cloud computing environment allows working within a specific context of file distribution, so the files have the desired distribution and availability, making it possible for the cloud manager to guarantee that access is restricted and that the identification of nodes is unique and controlled.
In the proposed model, the choice of the more reliable node is performed taking its availability into account. The selection of nodes and the evaluation of their trust values, which determine whether a node is reliable or not, are performed according to the node's storage system, operating system, processing capacity and link. Trust is established based on requests and queries exchanged between nodes of the private cloud.
This paper is organized as follows. In Section 2, we present an overview of the concepts of trust and reputation. In Section 3, we review some related work on security, file systems and trust in the cloud. In Section 4, we introduce the proposed trust model and practical results. Finally, we conclude with a summary of our results and directions for new research.
2 TRUST
The concepts of trust, trust models and trust management have been the object of several recent research projects. Trust is recognized as an important aspect of decision-making in distributed and self-organized applications [5-6]. In spite of that, there is no consensus in the literature on the definition of trust and on what trust management encompasses. In the computer science literature, Marsh [5] is among the first to study computational trust. Marsh [5] provided a clarification of trust concepts, presented an implementable formalism for trust, and applied a trust model to a distributed artificial intelligence (DAI) system in order to enable agents to make trust-based decisions.
The main definitions of trust, focused on the human aspect, are based on relationships between individuals, clearly demonstrating the relationship between trust and the feeling of security [7-8]. Thus, trust in the human aspect is related to a feeling of security focused on a particular context, satisfying the expectation that a problem is likely to be solved [7-8].
The process of trusting an individual is the result of numerous analyses that together generate the definition of trust. Trust (or, symmetrically, distrust) is a particular level of subjective probability with which an agent believes that another agent or group of agents will perform a particular action, whether or not this can be monitored (or independently of the agent's capacity to monitor it), and in a context in which it affects the agent's own action [8].
Trust is also defined in [8] as the most important social concept that assists humans in cooperating in their social environment, and it is present in all human interactions. In general, without trust (in other humans, agents, organizations, etc.) there is no cooperation and therefore there is no society. In an analogous way, trust can be treated as the probability that an agent will behave so as to perform a given action expected by another agent.
An agent can check the execution of a requested action (if its capacity allows it), inside a context in which the achievement of the expected action will affect this agent's own action (involving a
decision). So if someone is trustworthy, it means that there is a high enough probability that this person will perform an action considered beneficial in some way, so that cooperation with them can be considered. In the opposite situation, the agent simply believes that the probability is low enough that cooperation should be avoided.
Gambetta [8] proposes that trust is related to cooperation, making cooperation important for the acquisition of trust. If trust is unilateral, cooperation cannot succeed. For example, if there is only mistrust between two agents, then there is no cooperation between them at all, so they cannot perform an operation together to solve a problem. Similarly, if there is a high level of trust, there is probably high cooperation among the agents to solve a particular problem.
Josang et al. [9] define trust as the subjective probability with which an individual, A, expects that another individual, B, performs a given action on which A's welfare depends. This definition includes the concepts of dependence on and reliability (probability) of the trusted party, as seen by the relying party.
Using trust, an entity P may request information from an entity Q about an entity R. Imagine that entity P needs some information about an entity with which it has not yet interacted (entity S). P can ask the entities with which it has a relationship whether one of them knows entity S and what their opinion of it is (based on experiences and relationships already held with entity S), providing an idea of the reputation of entity S relative to the queried entity.
In a scenario in which an entity knows several other entities but does not know one specific entity (say, R does not know entity Z), it can send a question about that unknown entity to its related entities and wait for their answers. If one of these entities knows the investigated entity, it will return a response to the requesting entity reporting its opinion about the unknown entity.
Figure 1 presents the trust relation. From the reviews of the behavior of an entity, the calculation of trust can be performed based on a model; from the obtained result, a relationship decision is made, which determines whether an entity will or will not relate to another entity in a given context.
Figure 1 - Trust Relation
2.1 Reputation
Reputation comes into play in a scenario where there is not enough information to infer whether an entity is reliable or not [10]; to achieve this inference, an entity asks the opinion of other entities. From the information obtained from the questioned entities, the requesting entity calculates reputation using its own information, which is based on its trust values, and the information obtained from third parties (weighted by the degree of trust in them).
With the necessary information, the
entity assesses the context of the situation itself, being able to reach a reputation value. The reputation calculation is obtained by analyzing the behavior of an entity over time.
Reputation in the computing scenario, according to the reviewed work related to trust, may have a strong influence on the calculation of trust [8], [10], allowing trust to be interconnected with reputation in the generation of trust values. These values are subject not only to the perception of the behavior of an entity, but also to self-evaluation by those interested in some kind of interaction in a given context.
3 SECURITY IN THE CLOUD
Privacy and security have been shown to
be two important obstacles concerning
the general adoption of the cloud
computing paradigm. In order to solve
these problems in the IaaS service layer,
a model of trustworthy cloud computing
which provides a closed execution
environment for the confidential
execution of virtual machines was
proposed [11]. The proposed model,
called Trusted Cloud Computing
Platform (TCCP), is supposed to provide
higher levels of reliability, availability
and security. In this solution, there is a
cluster node that acts as a Trusted
Coordinator (TC). Other nodes in the
cluster must register with the TC in
order to certify and authenticate its key
and measurement list. The TC keeps a
list of trusted nodes. When a virtual
machine is started or a migration takes
place, the TC verifies whether the node
is trustworthy so that the user of the
virtual machine may be sure that the
platform remains trustworthy. A key and
a signature are used for identifying the
node. In the TCCP model, the private
certification authority is involved in each
transaction together with the TC [11].
Shen et al. [12] presented a method for
building a trustworthy cloud computing
environment by integrating a Trusted
Computing Platform (TCP) to the cloud
computing system. The TCP is used to
provide authentication, confidentiality
and integrity [12]. This scheme
displayed positive results for
authentication, rule-based access and
data protection in the cloud computing
environment.
Zhimin et al. [13] propose a
collaborative trust model for firewalls in
cloud computing. The model has three
advantages: a) it uses different security
policies for different domains; b) it
considers the transaction contexts,
historic data of entities and their
influence in the dynamic measurement
of the trust value; and c) the trust model
is compatible with the firewall and does
not break its local control policies.
A model of domain trust is employed.
Trust is measured by a trust value that
depends on the entity’s context and
historical behavior, and is not fixed. The
cloud is divided into a number of autonomous domains and the trust relations among the nodes are divided into intra- and inter-domain trust relations.
The intra-domain trust relations are
based on transactions operated inside the
domain. Each node keeps two tables: a
direct trust table and a recommendation
list. If a node needs to calculate the trust
value of another node, it first checks the
direct trust table and uses that value if
the value corresponding to the desired
node is already available. Otherwise, if
this value is not locally available, the
requesting node checks the
recommendation list in order to
determine a node that has a direct trust
table that includes the desired node.
Then it checks the direct trust table of
the recommended node for the trust
value of the desired node.
The inter-domain trust values are calculated based on the transactions among the inter-domain nodes. The inter-domain trust value is a global value combining the nodes' direct trust values and the recommended trust values from other domains. Two tables are maintained by the Trust Agents deployed in each domain: the inter-domain trust relationship table and the weight value table of the domain's nodes.
In [14] a trusted cloud computing
platform (TCCP) which enables IaaS
providers to offer a closed box execution
environment that guarantees confidential
execution of guest virtual machines
(VMs) is proposed. This system allows a
customer to verify whether its
computation will run securely, before
requesting the service to launch a VM.
TCCP assumes that there is a trusted
coordinator hosted in a trustworthy
external entity. The TCCP guarantees
the confidentiality and the integrity of a
user’s VM, and allows a user to
determine up front whether or not the
IaaS enforces these properties.
The work [15] evaluates a number of
trust models for distributed cloud
systems and P2P networks. It also
proposes a trustworthy cloud
architecture (including trust delegation
and reputation systems for cloud
resource sites and datacenters) with
guaranteed resources including datasets
for on-demand services.
4 TRUST MODEL FOR FILE
EXCHANGE IN PRIVATE CLOUD
According to the review of related research [3-11, 13-16], it is necessary to employ a cloud computing trust model to ensure that the exchange of files among cloud users occurs in a trustworthy manner. In this section, we introduce a trust model to establish a ranking of trustworthy nodes and enable the secure sharing of files among peers in a private cloud. The private cloud computing environment was chosen because we work within a specific context of file distribution, where the files have the desired distribution and availability.
We propose a trust model in which the selection of nodes and the evaluation of the trust value that determines whether a node is trustworthy are performed based on node storage space, operating system, link and processing capacity. For example, even if a given client has access to storage space in a private cloud, it still has no selection criterion to determine to which cloud node it should send a particular file. When a node wants to share files with other users, it selects trusted nodes to store the file using the following proposed metrics: processing capacity (the average workload processed by the node; for example, if the node's processing capacity is 100% utilized, it will take longer to attend to any demands), operating system (an operating system with a history of fewer vulnerabilities will be less susceptible to crashes), storage capacity, and link (better communication links and storage resources imply greater trust values, since they increase the node's capacity for transmitting and receiving information).
The trust value is established based on
queries sent to nodes in the cloud,
considering the metrics previously
described.
Each node maintains two trust tables: a direct trust table and a recommendation list:
a) If a node needs to calculate the trust value of another node, it first checks the direct trust table and uses the trust value if a value for that node exists. If this value is not available yet, the recommendation lists are checked to find a node that has a direct trust relationship with the desired node, and the direct trust value from that node's direct trust table is used. If no value is attached, the node sends a query to its peers requesting information on their storage space, processing capacity and link.
The trust values are calculated based on
queries exchanged between nodes.
b) The requesting node assigns a greater trust value to nodes having greater storage and/or processing capacity and a better link. In addition, the operating system is also considered as a criterion of trust.
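The lookup order described in item a) can be sketched as follows. This is an illustrative sketch under our own naming (`Node`, `trust_value`, `query_peers` are hypothetical names, not from the paper's implementation):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.direct_trust = {}      # node name -> trust value in [0, 1]
        self.recommendations = {}   # node name -> recommending Node

    def trust_value(self, target, query_peers):
        # 1) Use the direct trust table if a value already exists.
        if target in self.direct_trust:
            return self.direct_trust[target]
        # 2) Otherwise look for a recommender holding a direct trust value.
        recommender = self.recommendations.get(target)
        if recommender and target in recommender.direct_trust:
            return recommender.direct_trust[target]
        # 3) No value attached: query the peers for the target's metrics
        #    (storage space, processing capacity, link) and derive trust.
        return query_peers(target)

a, b, c = Node("A"), Node("B"), Node("C")
c.direct_trust["B"] = 0.7
a.recommendations["B"] = c
print(a.trust_value("B", query_peers=lambda t: 0.0))  # falls back to C's table
```

The fresh-query fallback is passed as a callable so the lookup policy stays separate from the network protocol that answers it.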
In this model, it is assumed that each node has a unique identity on the network. As trust is evolutionary, when a node joins the network the requesting node does not yet know it, so the other network nodes will be asked about its reputation. If no node has information about the respective node (none has had any experience with it), the requesting node will decide whether to relate to the requested node, initially asking it to run some activity or demand. From its answers, trust in the node will be built. The node's trust table will contain a timer (recording the behavior and events that raise and lower the trust of a given node) and will be updated at certain times.
Figure 2 presents a high-level view of the proposed trust model, in which the nodes query their peers to obtain the information needed to build their local trust tables.
In this model, a trust rank is established,
allowing a node A to determine whether
it is possible to trust a node B to perform
storage operations in a private cloud. In
order to determine the trust value of B,
node A first has to obtain basic
information about this node.
When node A needs to exchange a file in the cloud and wants to know whether node B can be trusted to receive and store the file, it uses the proposed trust model protocol, which can be described by the following scenario:

In step 1, node A sends a request to the nodes of the cloud, including node B, asking about their storage capacity, operating system, processing capacity and link.
Figure 2 - High Level Trust Model
In step 2, nodes, including node B, send
a response providing the requested
information.
In step 3, node A evaluates the information received from B and from all the other nodes. If the information provided by B is consistent with what is expected, i.e., with the average value of the information from the other nodes, the values are stored in node A's local recommendations table; node A then calculates the trust value and stores it in its local trust table.
The trust value of a node indicates its disposition/suitability to perform operations with peers of the cloud. This value is calculated based on the history of
interactions/queries between the nodes, with the value ranging between [0, 1].
In general, the trust of node A in node B, in the context of a private cloud NP, can be represented by a value V which measures the expectation that a particular node will have good behavior in the private cloud, so trust can be expressed by:

T_np(a,b) = V_np(a,b)    (1)

where T_np(a,b) represents the trust of A in B in the private cloud NP and V_np(a,b) represents the trust value of B in the private cloud NP as analyzed by A. According to the definition of trust, V_np(a,b) is equivalent to the queries sent and received (interactions) by A related to B in the cloud NP. As the interactions are made between the nodes of the private cloud, this information is used for the calculation of trust.
Nodes of a private cloud should be able to consider whether a trust value is acceptable, generating a trust level. If the node exceeds the level within a set of analyzed values, the evaluating node must be able to judge it with a certain degree of trust. The trust degree can vary according to a quantitative evaluation: a node has very high trust in another one, a node has low trust in another one, a node does not have sufficient criteria to form an opinion, a node trusts enough to form an opinion, etc. In our model, one node trusts another node from a trust value T ≥ 0.6 [5].
The trust values are calculated from queries between the nodes of NP, which allow obtaining the information necessary for the final calculation of trust. The trust information is stored through individual records of the interactions with the respective node, keeping in a local database information about the behavior of each node in the cloud with which files are to be exchanged (the local trust table and the local recommendations table).
Four aspects can have an impact on the calculation of the direct trust of a node. Greater storage capacity and processing capacity have more weight in the choice of a more reliable node, because these features are responsible for ensuring the integrity and storage of files. To calculate the direct trust of a node, the administrator of the private cloud attributes weights of 35% each to storage capacity and processing capacity, 15% to the link, and the remaining 15% to the operating system.
Knowing that a node can have a trust value ranging in [0, 1] and that these values vary over time (a node can have its storage capacity increased or decreased), it is necessary that trust reflects the behavior of a node in a given period of time. Nodes with constant characteristics should therefore be more reliable, because they have less variation in their basic characteristics.
According to the weights attributed, it is possible to calculate the trust of a node. The trust of node A in B in the cloud NP is represented by:

T^f_np(a,b) = (1/j) * Σ [ (V_np(b,m1) * 0.35) + (V_np(b,m2) * 0.35) + (V_np(b,m3) * 0.15) + (V_np(b,m4) * 0.15) ]    (2)

where T^f_np(a,b) represents the final trust of A in B in the cloud NP. The trust value of B is defined as the sum of the values of the metrics (m) that node B has in the cloud NP; j represents the number of trust interactions of node A with B in the cloud NP, where j ≥ 1.
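Equation (2) can be sketched as follows: the final trust of A in B is the mean, over j interactions, of B's weighted metric values. This is a minimal sketch under our assumption that each metric is already normalized to [0, 1]; the names and the sample history are ours, not the paper's:

```python
# Metric weights as stated in the text: storage 35%, processing 35%,
# link 15%, operating system 15%.
WEIGHTS = {"storage": 0.35, "processing": 0.35, "link": 0.15, "os": 0.15}

def final_trust(interactions):
    """Mean weighted trust over a list of per-interaction metric dicts."""
    j = len(interactions)
    if j == 0:
        return None  # no interaction history yet: trust cannot be computed
    total = sum(
        sum(metrics[k] * w for k, w in WEIGHTS.items())
        for metrics in interactions
    )
    return total / j

# Hypothetical history of two interactions with node B.
history = [
    {"storage": 0.9, "processing": 0.8, "link": 0.6, "os": 0.7},
    {"storage": 0.9, "processing": 0.7, "link": 0.6, "os": 0.7},
]
t = final_trust(history)
print(round(t, 3), "trusted" if t >= 0.6 else "not trusted")
```

With the threshold T ≥ 0.6 from the text, the resulting value decides whether node B is considered trustworthy.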
4.1 Description of the Simulated
Environment
In order to demonstrate the proposed objectives, it is necessary to define a simulation environment capable of measuring and validating the metrics used,
with the expectation of achieving results according to the parameters and criteria of reliable information used in this work. Furthermore, the simulation environment acts as a basis for further discussion, as well as for the evolution of this proposal through new cloud computing environments.
Through the implementation of the simulation environment, it is possible to discuss and analyze the parameters required for a trust model in a private cloud, to evaluate the generation of the local trust tables of the nodes as well as the effectiveness of the adopted metrics, and finally to generate results that serve to discuss the problem of the reliable exchange of files among peers in a private cloud. The CloudSim simulation environment reproduces the interaction between an Infrastructure as a Service (IaaS) provider and its customers [17].
The scenarios simulated in this work through the CloudSim framework comprise an IaaS provider, which has three datacenters, and a client that uses this service. The client uses the resources offered by the provider for the sending and allocation of virtual machines that perform a set of tasks, called cloudlets.
The dynamic choice of a datacenter for the sending and allocation of virtual machines and the execution of cloudlets is defined by the utilization profile of the client and the resources offered by the provider. Thus, the scenario simulated in this work consists of an IaaS provider that has three datacenters distributed in different locations (Goiânia - GO, Anápolis - GO and Brasília - DF), a customer with a usage profile, 4 hosts, 30 VMs and 100 cloudlets.
4.1.1 Results and analysis
Once the CloudSim simulation environment is defined and configured and the weights of the metrics are assigned, the calculation of the trust of a node can be performed by running the scenarios implemented in the framework.
To perform the simulation of the proposed environment, it is initially necessary to define the settings considered ideal for a reliable machine, and then to define the baseline machine configuration in order to compare it with the values of the other virtual machines of the simulation environment. As the tasks in the context of this application are small and of low complexity, the baseline configuration used is the one defined by the Amazon standard [18], trying to get as close as possible to the cost-benefit existing in real clouds, where the settings of the machines are compatible with the charges and services offered.
The configuration used in this work is
shown in Table 1.
Table 1. Configuration of the Baseline Machine [17]

Parameter          Ideal Value
HD Size            163840 MB
RAM Memory Size    1740 MB
MIPS Size          5000
Bandwidth Size     1024 Kbytes
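One way to read the comparison against the baseline machine is as an eligibility check before a VM is allowed to run tasks. The following is an illustrative sketch (the function name and the "at least the baseline" rule are our assumptions; the paper does not spell out the comparison operator):

```python
# Baseline configuration from Table 1.
BASELINE = {"hd_mb": 163840, "ram_mb": 1740, "mips": 5000, "bw_kb": 1024}

def meets_baseline(vm: dict, baseline: dict = BASELINE) -> bool:
    """Assume a VM qualifies only if every resource meets the baseline value."""
    return all(vm.get(key, 0) >= value for key, value in baseline.items())

vm_ok  = {"hd_mb": 200000, "ram_mb": 2048, "mips": 6000, "bw_kb": 2048}
vm_bad = {"hd_mb": 81920,  "ram_mb": 1740, "mips": 5000, "bw_kb": 1024}
print(meets_baseline(vm_ok), meets_baseline(vm_bad))  # True False
```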
In order to make comparisons and analyses of the results in various scenarios, several simulations were performed during the proposed work. The trust of a virtual machine in the simulated model increases much as it does for a human being: for example, when an individual performs an activity or solves a
particular problem for us successfully, our trust in them increases gradually. Thus, for each cloudlet successfully executed, the trust value of a VM is increased by 2.5%, until the trust level reaches 0.85. Above 0.85, trust increases by 5% until it reaches the maximum trust of 1.0.
If a machine does not perform a certain task successfully, i.e., it does not solve its problem, it loses trust. The weight of suspicion is usually greater than the weight of trust. Thus, in our simulated model, the rate of suspicion is 5% for each task performed without success.
In an attempt to simulate an environment closer to reality, a simulation scenario was conducted in which the cloudlets are not all fully executed, allowing virtual machines to change their behavior over time, reflecting something more similar to a real private cloud computing environment. It was defined that an unsuccessful task is chosen randomly and occurs when the random number is higher than 0.8, meaning that the probability of a successful task in this scenario is 80%. Thus, the simulation scenario can be changed as desired.
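The trust dynamics described above (+2.5% per success up to 0.85, +5% above 0.85 capped at 1.0, -5% per failure, with an 80% chance of success) can be sketched as follows. This is our own illustrative reimplementation, not the paper's CloudSim code, and it assumes the percentages are absolute increments on the [0, 1] trust scale:

```python
import random

def update_trust(trust: float, success: bool) -> float:
    """One trust update step for a VM after a cloudlet execution."""
    if not success:
        return max(0.0, trust - 0.05)   # failed task: -5%, floored at 0
    step = 0.025 if trust < 0.85 else 0.05  # +2.5% below 0.85, +5% above
    return min(1.0, trust + step)       # capped at the maximum trust of 1.0

random.seed(42)   # reproducible run (seed value is arbitrary)
trust = 0.6       # assumed starting trust of the VM
for _ in range(100):                  # 100 cloudlets, as in the scenario
    success = random.random() <= 0.8  # task succeeds with probability 0.8
    trust = update_trust(trust, success)
print(round(trust, 3))
```

Because failures cost twice as much as low-range successes gain, a VM needs a sustained run of successful tasks to approach the maximum trust value.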
Analyzing the results of the simulations, it is possible to identify the trust level of the virtual machines that performed the cloudlets. According to the reference information, a node trusts another from a trust value of T ≥ 0.6.
In the simulation of the proposed scenario, some machines did not perform cloudlets because, compared to the baseline machine, they did not fulfill the conditions for being considered reliable enough to perform a task.
Table 2 presents the virtual machines that performed cloudlets; the other virtual machines did not perform any cloudlet because they did not satisfy the desired trust level.
The simulation result is shown in Figure
3.
Table 2. Cloudlets/Tasks Performed by Virtual Machines with and without Success.
Virtual Machine   Successful   Unsuccessful   Total
VM 03             00           01             01
VM 04             12           02             14
VM 05             08           02             10
VM 06             09           06             15
VM 07             01           02             03
VM 08             08           02             10
VM 13             03           01             04
VM 14             00           01             01
VM 15             07           01             08
VM 16             00           01             01
VM 24             00           01             01
VM 25             12           02             14
VM 26             13           01             14
VM 27             01           02             03
VM 28             00           01             01
Figure 4 presents the trust level of virtual machine 09, which did not perform any cloudlet during the simulation, so there is no variation in the graph. All machines that performed no task have similar graphs.
Figure 5 presents the trust threshold of virtual machine 15 after changing its processing capacity (HD and RAM). During the simulation, VM 15 performed 07 tasks/cloudlets successfully and 01 unsuccessfully. The variation in the trust level of VM 15 was calculated according to the successful and unsuccessful interactions: for every interaction performed successfully the trust value is increased by 2.5%, and for each interaction performed without success the value is decremented by 5%, as the established weight.
Evaluating the results obtained after changing both parameters of the VM 15 configuration, it is also possible to see that the whole simulated scenario changed, affecting not only the modified machine but the other virtual machines as well. Moreover, the number of tasks/cloudlets executed after changing the two parameters was very close to the result obtained by changing only the storage capacity. The results indicate that the processing capacity has the greater impact on the simulation results.
Figure 3 - Trust of the Virtual Machines after Task Execution.
Figure 4 - Trust of Virtual Machine 09 after 0 Task Executions.
The initial trust threshold of virtual machine 15 was 0.5935552586206897 and the final value was 0.7442351812748581, as presented in Table 3.
Figure 5 - Trust of Virtual Machine 15 after 8 Task Executions.
Table 3. Trust of Virtual Machine 15 Running 07 Cloudlets with Success and 01 without Success.
Task Number   Trust Threshold of VM 15 at Each Interaction
32            0.5935552586206897
42            0.6185552586206897
50            0.6402076105818058
54            0.6864683466109688
67            0.6514683466109688
75            0.6602111892581934
86            0.7098601812748581
95            0.7442351812748581
5 CONCLUSIONS
Cloud computing has been the focus of several recent studies, which demonstrate the importance and necessity of a trust model to ensure the reliable and secure exchange of files. It is a promising area to be explored through research and experimental analysis, using computational trust to mitigate existing problems related to security, trust and reputation. The aim is to guarantee the integrity of the information exchanged in private cloud environments, reducing the possibility of failure or alteration of information during file exchange, using metrics that are able to represent or map the trust level of a network node.
The proposal discussed in this paper, to develop a new trust model for the trusted exchange of files in a private cloud computing environment using the concepts of trust and reputation, seems promising, given the problems and vulnerabilities related to security, privacy and trust that a cloud computing environment presents.
The simulations and their results make it possible to identify which of the adopted metrics directly influence the calculation of the trust in a node. Future simulations in a real environment will allow the behavior of nodes in a private cloud to be evaluated, together with the history of their interactions and the trust values assumed throughout the execution of the machines.
The use of the open platform CloudSim [17] to execute the simulations of the adopted scenarios made it possible to calculate the trust table of the nodes (virtual machines) and to select those considered more reliable. Furthermore, the adequacy of the metrics used in the proposed trust model was evaluated, allowing the most appropriate ones to be identified and selected in relation to the historical behavior of the nodes in the analyzed environment.
6 REFERENCES
1. Zhang Jian-jun and Xue Jing.“A Brief
Survey on the Security Model of Cloud
Computing,” 2010 Ninth International
Symposium on Distributed Computing and
Applications to Business, Engineering and
Science (DCABES), Hong Kong IEEE, pp.
475 – 478, 2010.
2. Wang Han-zhang and Huang Liu-sheng.“An
improved trusted cloud computing platform
model based on DAA and Privacy CA
scheme,” IEEE International Conference on
Computer Application and System Modeling
(ICCASM 2010). 978-1-4244-7235-2, 2010.
3. Uppoor, S., M. Flouris, and A. Bilas.
“Cloud-based synchronization of distributed
file system hierarchies,” Cluster Computing
Workshops and Posters (CLUSTER
WORKSHOPS), IEEE International
Conference, pp. 1-4. 2010.
4. Popovic, K. and Z. Hocenski. “Cloud
computing security issues and challenges,”
MIPRO, 2010 Proceedings of the 33rd
International Convention, pp. 344-349, 24-
28 May 2010.
5. Stephen Paul Marsh, “Formalising Trust as a
Computational Concept”, Ph.D. Thesis,
University of Stirling, 1994.
6. Thomas Beth, M. Borcherding, and B.
Klein, “Valuation of trust in open
networks,” In ESORICS 94. Brighton, UK,
November 1994.
7. Lamsal Pradip. “Understanding Trust and
Security”. Department of Computer Science,
University of Helsinki, Finland, October
2001. Accessed 13/02/2006. Available at:
http://www.cs.helsinki.fi/u/lamsal/asgn/trust
/UnderstandingTrustAndSecurity.pdf
8. Gambetta Diego. (2000). “Can We Trust
Trust?”, in Gambetta, Diego (ed.) Trust:
Making and Breaking Cooperative
Relations, electronic edition, Department of
Sociology, University of Oxford, chapter 13,
213-237.
9. Josang Audun, Roslan Ismail, Colin Boyd.
(2007). A Survey of Trust and Reputation
Systems for Online Service Provision.
Decision Support Systems. Volume 43 Issue
2, March. Elsevier Science Publishers B. V.
Amsterdam, The Netherlands.
10. Patel, Jigar. “A Trust and Reputation Model
for Agent-Based Virtual Organizations”.
Thesis of Doctor of Philosophy. Faculty of
Engineering and Applied Science. School of
Electronics and Computer Science.
University of Southampton. January. 2007.
11. Xiao-Yong Li, Li-Tao Zhou, Yong Shi, and
Yu Guo, “A Trusted Computing
Environment Model in Cloud Architecture,”
Proceedings of the Ninth International
Conference on Machine Learning and
Cybernetics, 978-1-4244-6526-2. Qingdao,
pp. 11-14. China. July 2010.
12. Zhidong Shen, Li Li, Fei Yan, and Xiaoping
Wu, “Cloud Computing System Based on
Trusted Computing Platform,” Intelligent
Computation Technology and Automation
(ICICTA), IEEE International Conference
on Volume: 1, pp. 942-945. China. 2010.
13. Zhimin Yang, Lixiang Qiao, Chang Liu, Chi
Yang, and Guangming Wan, “A
collaborative trust model of firewall-through
based on Cloud Computing,” Proceedings of
the 2010 14th International Conference on
Computer Supported Cooperative Work in
Design, Shanghai, China, pp. 329-334, 2010.
14. Santos Nuno, K. Gummadi, and R.
Rodrigues, “Towards Trusted Cloud
Computing,” Proc. HotCloud. June 2009.
15. Chang. E, T. Dillon and Chen Wu, “Cloud
Computing: Issues and Challenges,” 24th
IEEE International Conference on Advanced
Information Networking and Applications
(AINA), pp. 27-33. Australia, 2010.
16. Kai Hwang, Sameer Kulkareni, and Yue Hu,
“Cloud Security with Virtualized Defense
and Reputation-Based Trust Management,”
2009 Eighth IEEE International Conference
on Dependable, Autonomic and Secure
Computing (DASC ’09), pp. 717-722, 2009.
17. Calheiros, Rodrigo, N.; Rajiv Ranjan; Anton
Beloglazov; De Rose, Cesar, A. F.; Buyya,
Rajkumar. (2011). “CloudSim: A Toolkit for
Modeling and Simulation of Cloud
Computing Environments and Evaluation of
Resource Provisioning Algorithms”,
Software: Practice and Experience (SPE),
Volume 41, Number 1, 23-50, ISSN: 0038-
0644, Wiley Press, New York, USA,
January 2011.
18. Amazon (2012). “Amazon Web Services”.
Accessed 01/06/2012. Available at:
http://aws.amazon.com/pt/ec2/instance-types/.
A Formal Semantic Model for the Access
Specification Language RASP
Mark Evered
School of Science and Technology
University of New England
Armidale, Australia
Abstract— The access specification language RASP extends
traditional role-based access control (RBAC) concepts to provide
greater expressive power often required for fine-grained access
control in sensitive information systems. Existing formal models
of RBAC are not sufficient to describe these extensions.
In this paper, we define a new model for RBAC which formalizes
the RASP concepts of controlled role appointment and
transitions, object attributes analogous to subject roles and a
transitive role/attribute derivation relationship.
Keywords: security, access control, model, role, attribute
I. INTRODUCTION
In general, each of the users of an information system needs to be able to view or manipulate only some of the information stored in the system. Ideally, the appropriate access for each user will be specified in the form of an access policy during the analysis phase of the software development and then enforced via access control mechanisms during the execution of the implemented system. As the use of information systems for sensitive data continues to grow in areas such as e-health, it is becoming increasingly important, both for security and for privacy reasons, that the specification of the access control is precise and clear enough to express and satisfy strict minimal (need-to-know) policy requirements. This ensures both that valid users of a system will not misuse their access and that intruders who have illegitimately managed to assume the identity of a valid user will be restricted in what they can do within the system. Both of these factors are vital for the strengthening of cyber-security.
An access control policy can be understood as consisting of two components. The first is control over the membership of the subject groups of interest in the application domain. The second is a mapping from each of these groups to permissions which allow certain operations to be performed on the data by members of the groups. These operations may just be ‘read’ and ‘write’ as in traditional database systems or may be based on the methods of object classes as first suggested in [8].
Both components of access control have been approached in a number of different ways. In the simplest case, an access control list (ACL) for each object contains an entry for each subject or group of subjects. The owner of the object (or a system administrator) can assign subjects to groups. More
recently, Role Based Access Control (RBAC) models have been defined which allow the first component of access control to be based on the roles played by individuals in the organisations making use of an information system. This means that there is a (dynamic) mapping from subjects to roles and then a (relatively static) mapping from roles to permissions. These models recognise the complex nature of permissions in real organisations and have been shown to subsume both conventional discretionary access control models and mandatory access control models such as Bell-LaPadula [1].
Formally, given:
𝕊 – a set of subjects
ℝ – a set of roles
𝕆 – a set of objects and
𝕄 – a set of operations (methods) on objects
we can define an RBAC system as consisting of the pair:
(H, X)
where
H ⊆ 𝕊×ℝ is a set of role assignments and
X ⊆ ℝ×𝕆×𝕄 is a set of permissions.
A pair (s, r) ∊ H specifies that the subject s has the role r
while a triple (r, o, m) ∊ X specifies that a subject with the role
r can access the object o via the method m.
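The pair (H, X) can be illustrated with a minimal sketch in Python; the subjects, roles, objects and permissions below are hypothetical examples, not drawn from any particular system:

```python
# Role assignments H (subject, role) and permissions X (role, object, method),
# encoded as Python sets of tuples. All names here are illustrative.
H = {("alice", "doctor"), ("bob", "nurse")}
X = {("doctor", "record42", "read"),
     ("doctor", "record42", "write"),
     ("nurse",  "record42", "read")}

def can_access(subject, obj, method):
    # A subject may invoke `method` on `obj` iff it holds some role r
    # with (r, obj, method) in the permission set X.
    return any((r, obj, method) in X for (s, r) in H if s == subject)
```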
While RBAC is an improvement over an ACL approach, case studies such as [2][6][18] have demonstrated that the access control requirements of real-world information systems are considerably more complex than the simple role-based approach described above can handle. For this reason, a number of different RBAC models have been proposed with varying degrees of additional expressive power. These additions include role hierarchies [15], parameterized roles [7] and control over role acquisition [17].
One very useful extension, as implemented in access control systems such as [9], is to allow objects to be labeled with attributes in much the same way that subjects acquire roles. Formally, we introduce the additional set:
𝔸 – a set of attributes with which objects can be labeled.
The permissions in such a system then give access to an object on the basis of it having a particular attribute or set of attributes rather than to objects directly. This is in line with the argument in [3] that objects and environments need ‘roles’ just as subjects do.
The author has defined an access control specification language called RASP (Role and Attribute-based Specification of Protection) [5] which is based on both roles of subjects and attributes of objects and which gives fine-grained control over initial role and attribute acquisition as well as subsequent transitions. In this paper we give a formal definition for an access control model which supports the RASP extensions to RBAC. In particular, it supports:
• Subject roles
• Object attributes
• Control over appointment to roles
• Control over labeling of objects with attributes
• Control over dynamic acquisition of further roles and attributes
The model uses a transitive approach which supports role hierarchies, appointment based on external certificates and role and attribute revocation. No existing RBAC model has the expressive power to support these requirements.
The following section discusses related work on role-based and attribute-based access control while section III gives a brief overview of the access control specification language RASP. Section IV gives a formal definition of the access rules in our model and section V defines the instantaneous state of the RBAC model together with the four operations for transforming the state. Section VI describes the transitive acquisition of roles and attributes and defines the function
allow for checking whether an operation on an object is permitted. Section VII defines some further useful constructs of RASP and section VIII addresses some issues of efficient implementation. We conclude with a summary of the findings and contributions of the paper.
II. RELATED WORK
Both the object-based access control paradigm [8] and the role-based access control paradigm [14] are well-known approaches as is the combination of the two to define access to an object in terms of the methods which can be invoked by subjects acting in a certain role. A number of significant extensions to the basic RBAC model have been suggested in order to adequately handle the complexities of minimal access control requirements in real-world scenarios. These include role hierarchies [15] and role parameters [7].
A question which has received much less attention is how to group objects so that the access constraints for the whole group can be specified in a single place rather than repeating them for each and every object. The Ponder policy specification language [4] supports a hierarchical structure of domains and sub-domains of objects similar to a file system
hierarchy. The leaves of the tree are references to objects rather than the objects themselves so that an object can appear in a number of different domains. This approach assumes that the domains are relatively static and that an administrator will place objects into domains via some mechanism external to the language. Case studies have shown, however, that the domains of an object may depend on object attributes which change in the same way that the role of a subject may change. These transitions require the same level of specification as to who can effect the change as is required for role changes. The approach of Generalized Role-Based Access Control [3] recognizes the need for symmetry between subject roles and object roles but does so on the basis of a very simple model which does not support role parameters or control over role transitions.
Attribute-based access control (ABAC) [19][20] was developed to support access to web services based on provable attributes of a user rather than the identity of the user. This is important for anonymity in using such services but is not appropriate for organizations or systems where fine-grained access-control policies are based on identity and roles. ABAC has been extended to include attributes for resources as well as subjects but does not address attribute transitions.
A further important question concerns the acquisition of access rights. Ponder is a delegation-based system. It provides for delegation policies which limit which access rights a subject can pass to another subject but the basic assumption is that the possessor of a right decides if and when another subject should gain that right. Case studies show that it is often necessary that access rights be granted by someone who does not possess them him/herself. The OASIS Role Definition Language [17] allows for this kind of appointment-based acquisition of access rights and for role acquisition pre-conditions based on external certificates known as auxiliary credential certificates. OASIS RDL does not however allow for a distinction between the case where a new role is replacing a previous role and the case where the new role is additional. This distinction has been found to be useful both for role transitions and for object attribute transitions. OASIS RDL also does not allow for the generation of new credential certificates as a result of operations performed within the system.
Ponder supports both positive and negative authorizations. In fact, it has two forms of negative access control clause: negative authorization policies and refrain policies. So, for example, a set of access rights can be granted to a group of subjects via a positive authorization policy and then one of the rights can later be revoked from a certain member of the group via a negative authorization policy. Negative authorizations lead to the problem of potential inconsistencies and loopholes in an access control system. A more elegant way to express this kind of partial revocation is to use role transition to transfer a subject from one role into a new role which has a more restricted set of rights.
The access control specification languages and mechanisms described in this section represent the state-of-the-art in fine-grained access control. Many of them have no formal definition at all and none of them can support all of the requirements which case studies show to be required. A formal definition of role hierarchies is given in [15] and role parameters are
formally defined in [7] but no formal definition of a symmetric approach to role and attribute transitions has been given in the literature to date.
III. OVERVIEW OF RASP
The three main constructs of the RASP access specification
language are the appoint clause, the attribute clause
and the allow clause. The appoint clause specifies that a
subject with a certain role can appoint someone else to have a certain role. The precondition for this is that the person being appointed already possesses a certain role before the appointment. So, for example:
appoint manager: staff -> deptHead;
expresses the appointment rule that someone who is a manager can appoint someone who is already a staff member to be a head of department.
The attribute clause is used to label an object in the
system with a certain attribute. This is done by specifying what role a subject must possess to be able to do this and the precondition that the object must already have a certain attribute. So, for example:
attribute admin: document -> obsolete;
expresses the attribute rule that someone who is an administrator can label a document as being obsolete.
For both the appoint and the attribute clauses, the
transition symbol ‘->’ indicates that the old role or attribute should be retained in addition to the new one, whereas the
transition symbol ‘/->’ can be used to express that the old role or attribute should be relinquished.
The third main construct is the allow clause. This specifies that someone with a certain role can invoke a certain operation on objects with a certain attribute or set of attributes. So, for example:
allow deptHead!obsolete.delete;
expresses the access rule that a head of department can delete an obsolete document.
RASP also provides a conflict clause which can be used to express the rule that two roles are in conflict with each other (if possessed by the same subject at the same time) and a
unique clause which expresses the rule that a certain role
may only be possessed by one subject at a time.
This overview of RASP will suffice for the purposes of this paper but for more detail on the rationale for and the design of the RASP language, the reader is referred to [5]. A summary of the syntax of the constructs discussed in this paper can be found in Appendix A.
IV. ACCESS RULES
We now extend the RBAC formalism sketched in the introduction to a more powerful model which is capable of expressing the semantics of the RASP constructs described above. In this section, we define the relatively static aspect of
our access control model, i.e. what access does a subject with a certain role have to an object with a certain set of attributes, who has the authority to appoint subjects to roles and who has the authority to label objects with attributes.
We define this as the 5-tuple:
(X, P, T, L, U)
where
X ⊆ ℝ×2^𝔸×𝕄 is a set of permissions
P ⊆ ℝ³ is a set of appointment rules
T ⊆ ℝ³ is a set of role transition rules
L ⊆ ℝ×𝔸² is a set of attribute labeling rules and
U ⊆ ℝ×𝔸² is a set of attribute transition rules.
A permission triple (r, A, m) ∊ X, A⊆𝔸 specifies that a
subject with the role r can access an object via the method m if
that object has all of the attributes in the set A. Examples are:
(admin, {thisFacility, patientPersonalDetails}, update)
(secretCleared, {secret}, read)
These express the semantic value of the RASP syntax:
allow admin!
{thisFacility, patientPersonalDetails}.
update; and
allow secretCleared!secret.read;
respectively (given the obvious mapping from an identifier ‘admin’ to the role radmin ∊ ℝ etc.).
An appointment triple (r1, r2, r3) ∊ P specifies that a
subject with the role r1 can appoint a subject with the role r2 to
additionally have the role r3. In this context, we denote the null
role (always possessed by all subjects) as ∅. So, for example we can have:
(manager, ∅, employee)
(manager, employee, admin)
(manager, doctor, doctorAtThisFacility)
These express the semantic value of the RASP syntax:
appoint manager: someone -> employee;
appoint manager: employee -> admin;
appoint manager: doctor ->
doctorAtThisFacility;
where the identifier ‘someone’ is used to denote the null role ∅.
Similarly, an attribute labeling triple (r, a1, a2) ∊ L
specifies that a subject with the role r can label an object with
the attribute a1 as also having the attribute a2. Again, we
denote a null attribute as ∅. For example:
(sysadmin, ∅, thisFacility)
(sysadmin, thisFacility, patientPersonalDetails)
These express the semantic value of the RASP syntax:
attribute sysadmin:
something -> thisFacility;
attribute sysadmin: thisFacility ->
patientPersonalDetails;
A role transition triple (r1, r2, r3) ∊ T specifies that a
subject with the role r1 can cause a subject with the role r2 to
lose that role and take on the role r3 instead. For example,
(manager, traineeEmployee, employee)
(clearanceOfficer, secretCleared, topSecretCleared)
(manager, employee, ∅)
These express the semantic value of the RASP syntax:
appoint manager: traineeEmployee /->
employee;
appoint clearanceOfficer: secretCleared
/-> topSecretCleared;
appoint manager: employee /-> someone;
Note that in the last example, this kind of role transition is used to remove a role from a subject.
Finally, an attribute transition triple (r, a1, a2) ∊ U
specifies that a subject with the role r can cause an object with
the attribute a1 to lose that attribute and take on the attribute a2 instead. For example:
(admin, draftReport, report)
(manager, thisFacility, thatFacility)
(declassificationOfficer, secret, unclassified)
These express the semantic value of the RASP syntax:
attribute admin:
draftReport /-> report;
attribute manager: thisFacility /->
thatFacility;
attribute declassificationOfficer:
secret /-> unclassified;
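As an illustration, the rule 5-tuple (X, P, T, L, U) can be encoded directly as sets of tuples, using the examples given above. This is a sketch only: the role and attribute names mirror the text, and None stands for the null role/attribute ∅:

```python
NULL = None  # represents the null role/attribute ∅

# Permissions X: (role, frozenset of required attributes, method)
X = {("admin", frozenset({"thisFacility", "patientPersonalDetails"}), "update"),
     ("secretCleared", frozenset({"secret"}), "read")}

# Appointment rules P and role transition rules T: triples over roles
P = {("manager", NULL, "employee"),
     ("manager", "employee", "admin"),
     ("manager", "doctor", "doctorAtThisFacility")}
T = {("manager", "traineeEmployee", "employee"),
     ("clearanceOfficer", "secretCleared", "topSecretCleared"),
     ("manager", "employee", NULL)}

# Attribute labeling rules L and attribute transition rules U
L = {("sysadmin", NULL, "thisFacility"),
     ("sysadmin", "thisFacility", "patientPersonalDetails")}
U = {("admin", "draftReport", "report"),
     ("manager", "thisFacility", "thatFacility"),
     ("declassificationOfficer", "secret", "unclassified")}
```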
V. ACCESS STATE
We now define the second part of the model, which determines for some point in time, which subject has which roles and which object has which attributes. This is represented via a set of role appointment certificates and a set of attribute labeling certificates. Formally, the state of the access control system is given by:
(C, D)
where
C ⊆ 𝕊×ℝ² is a set of appointment certificates and
D ⊆ 𝕆×𝔸² is a set of label certificates.
The certificate (s, r1, r2) ∊ C specifies that if the subject s
has the role r1, then that subject also has the role r2. Thus:
(Fred, ∅, traineeEmployee)
(Fred, employee, admin)
Similarly, the certificate (o, a1, a2) ∊ D specifies that if the
object o has the attribute a1, then that object also has the
attribute a2.
We define four functions which update the state of the
access control system. Function addRole(C, s, r1, r2) is used to add an appointment certificate to C and is defined as:
addRole(C, s, r1, r2) = C ∪ {(s, r1, r2)}
Function modRole(C, s, r1, r2) is used to change some
of the appointment certificates in C and can be defined recursively as:
if C contains an appointment certificate of the form
(s, r, r1), for some r∊ℝ then
modRole(C, s, r1, r2) =
{(s, r, r2)} ∪ modRole(C \ {(s, r, r1)}, s, r1, r2)
otherwise
modRole(C, s, r1, r2) = C
So, for example, if C contains the certificate:
(Fred, ∅, traineeEmployee)
then modRole(C, Fred, traineeEmployee, employee) will instead contain the certificate:
(Fred, ∅, employee)
Function addAttr(D, o, a1, a2) is used to add a label certificate to D and is defined as:
addAttr(D, o, a1, a2) = D ∪ {(o, a1, a2)}
Finally, function modAttr(D, o, a1, a2) is used to change some of the label certificates in D and is defined as:
if D contains a label certificate of the form
(o, a, a1), for some a∊𝔸 then
modAttr(D, o, a1, a2) =
{(o, a, a2)} ∪ modAttr(D \ {(o, a, a1)}, o, a1, a2)
otherwise
modAttr(D, o, a1, a2) = D
Note that a subject s may invoke addRole(C, s1, r1, r2)
only if s has a role r such that (r, r1, r2) ∊ P. Similarly, s can
invoke modRole(C, s1, r1, r2) only with a role r such that
(r, r1, r2) ∊ T. Likewise, s can invoke addAttr(D, o, a1,
a2) only if s has a role r where (r, a1, a2) ∊ L and
modAttr(D, o, a1, a2) only with a role r such that (r, a1, a2) ∊ U. The exact definition of ‘having a role’ is given in the next section.
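The four state-update functions can be sketched as follows. This is an illustrative Python reading in which certificates are tuples, ∅ is represented by None, and the recursive definitions are expressed as set comprehensions (the snake_case names are ours):

```python
def add_role(C, s, r1, r2):
    # addRole: add the appointment certificate (s, r1, r2) to C.
    return C | {(s, r1, r2)}

def mod_role(C, s, r1, r2):
    # modRole: every certificate (s, r, r1) for subject s becomes
    # (s, r, r2); other certificates are left unchanged.
    return {(sub, r, r2 if (sub == s and dr == r1) else dr)
            for (sub, r, dr) in C}

def add_attr(D, o, a1, a2):
    # addAttr: add the label certificate (o, a1, a2) to D.
    return D | {(o, a1, a2)}

def mod_attr(D, o, a1, a2):
    # modAttr: every certificate (o, a, a1) for object o becomes
    # (o, a, a2); other certificates are left unchanged.
    return {(obj, a, a2 if (obj == o and da == a1) else da)
            for (obj, a, da) in D}
```

With the example from the text, if C contains ("Fred", None, "traineeEmployee") then mod_role(C, "Fred", "traineeEmployee", "employee") rewrites it to ("Fred", None, "employee").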
VI. DETERMINING ROLES AND ATTRIBUTES
From the definitions in the previous section, it can be seen that, rather than just representing the set of roles possessed by a subject at some point in time, our model represents the role from which each role is derived. We define the notation ⟨r, rʹ⟩s to represent that the subject s has the role rʹ
conditional on having the role r, i.e.:
∃r1…rn∊ℝ. (s, r, r1) ∊ C ∧ (s, r1, r2) ∊ C …
∧ (s, rn-1, rn) ∊ C ∧ (s, rn, rʹ) ∊ C
It can be seen that this conditional possession of roles is then a transitive relationship, i.e.
⟨r1, r2⟩s ∧ ⟨r2, r3⟩s ⇒ ⟨r1, r3⟩s
The actual possession of a role can then be expressed as:
⟨∅, r⟩s
Similarly, for attributes of objects, we define ⟨a, aʹ⟩o to
mean that the object o has the attribute aʹ conditional on
having the attribute a. So, an object actually possesses an
attribute if:
⟨∅, a⟩o
Finally, we can define the allow function which
determines whether a subject s can access an object o via a
method m as:
allow(s, m, o) = ∃r∊ℝ, A⊆𝔸.
⟨∅, r⟩s ∧ ∀a∊A.⟨∅, a⟩o ∧ (r, A, m)∊X
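The transitive derivation ⟨∅, r⟩s and the allow function can be sketched as a reachability computation over the certificate sets. This is an illustrative reading in which None stands for ∅ and certificates are tuples as in the definitions above:

```python
def has_role(C, s, r):
    # ⟨∅, r⟩s: subject s actually possesses role r if r is reachable
    # from the null role (None) via the appointment certificates in C.
    reached = {None}
    changed = True
    while changed:
        changed = False
        for (sub, r1, r2) in C:
            if sub == s and r1 in reached and r2 not in reached:
                reached.add(r2)
                changed = True
    return r in reached

def has_attr(D, o, a):
    # ⟨∅, a⟩o: the same reachability computation for object attributes.
    reached = {None}
    changed = True
    while changed:
        changed = False
        for (obj, a1, a2) in D:
            if obj == o and a1 in reached and a2 not in reached:
                reached.add(a2)
                changed = True
    return a in reached

def allow(s, m, o, C, D, X):
    # allow(s, m, o): some role r held by s has a permission (r, A, m)
    # whose required attributes A are all held by o.
    return any(has_role(C, s, r) and all(has_attr(D, o, a) for a in A)
               for (r, A, m2) in X if m2 == m)
```

Note how the derivation chain behaves: if Fred's certificate for 'doctor' is removed, 'doctorAtThisFacility' is no longer reachable and is automatically lost as well.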
The fact that the model represents the possession of a role
or attribute as conditional on possession of another role or attribute is very important for an adequate level of access control in real-world information systems. Suppose, for example, the set C contains the appointment certificates:
(Fred, ∅, doctor) and
(Fred, doctor, doctorAtThisFacility)
If Fred were to lose the role of ‘doctor’ (for example by being ‘struck off’ the medical register for some reason), we would want him to also automatically lose the role of ‘doctorAtThisFacility’ with all its associated permissions. This is only possible if the model represents the derivation of the second role from the first. Similarly, for the labeling certificates:
(DocumentAbc, ∅, Australian) and
(DocumentAbc, Australian, Sydney)
we want the document to automatically lose the attribute ‘Sydney’ if it loses the attribute ‘Australian’. This illustrates that the transitive nature of our model can be used to support specialization hierarchies of roles and attributes. An example for roles is:
(Fred, ∅, sysadmin) and
(Fred, sysadmin, linuxSysadmin)
A further advantage of our approach is that the order of adding roles becomes more flexible. So, for example, if we have the certificates:
(Fred, ∅, traineeEmployee) and
(Fred, employee, admin)
then this represents the fact that “Fred does not yet have the ‘admin’ role but will acquire that role as soon as he becomes a (fully fledged) employee”. The operation:
modRole(C, Fred, traineeEmployee, employee)
will then make him an ‘admin’ as well as an ‘employee’.
Lastly, our representation of role appointment certificates supports explicit certificates which represent a precondition (e.g. for employment) which is imported from, or accessed at, an external source. For example, the certificate:
(Fred, ∅, doctor)
should ideally be maintained by an external body such as a national medical association rather than in the organization where the doctor is working. Our model provides for an explicit representation of such an external qualification certificate. (Of course, in an implementation which transfers or accesses this from an external site, it would need to be secured by a mechanism such as public-key cryptography, digital signatures and unique subject identifiers.)
VII. FURTHER FEATURES OF RASP
The main constructs of RASP are the appoint, the
attribute and the allow clauses as defined above but we can also use the formal model to define the semantics of two other constructs which can be important for restricting role appointments in the information systems of real organizations.
The first of these is a clause which specifies that it is a conflict for someone to fulfill two particular roles in the organization at the same time. So, for example, it may be considered a conflict for someone to be both a student and a staff member of a university at the same time. The syntax for expressing this in RASP is:
conflict staff, student;
We can formally describe the semantics of this by defining
functions addRoleCheckConflict(C, s, r1, r2) and
modRoleCheckConflict(C, s, r1, r2) which extend
addRole(C, s, r1, r2) and modRole(C, s, r1, r2) by adding
a check for a breach of the constraint whenever the set of appointment certificates is updated. The definitions of these
functions for the conflict roles role_id1 and role_id2 are then:
addRoleCheckConflict(C, s, r1, r2) =
    if ∃s ∈ S . ⟨∅, r_role_id1⟩_s ∧ ⟨∅, r_role_id2⟩_s in addRole(C, s, r1, r2) then:
        error
    otherwise:
        addRole(C, s, r1, r2)
International Journal of Cyber-Security and Digital Forensics (IJCSDF) 1(2): 152-159. The Society of Digital Information and Wireless Communications, 2012 (ISSN: 2305-0012)
and
modRoleCheckConflict(C, s, r1, r2) =
    if ∃s ∈ S . ⟨∅, r_role_id1⟩_s ∧ ⟨∅, r_role_id2⟩_s in modRole(C, s, r1, r2) then:
        error
    otherwise:
        modRole(C, s, r1, r2)
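A runnable sketch of the conflict check, for the clause "conflict staff, student;", follows. The triple representation of certificates and the helper names are illustrative choices, not the model's own definitions:

```python
# Certificates as (subject, precondition, role); None stands in for ∅.
def held_roles(certs, subject):
    """Roles transitively held by a subject."""
    held, changed = set(), True
    while changed:
        changed = False
        for s, pre, role in certs:
            if s == subject and role not in held and (pre is None or pre in held):
                held.add(role)
                changed = True
    return held

def add_role(certs, subject, pre, role):
    return certs | {(subject, pre, role)}

def add_role_check_conflict(certs, subject, pre, role, r1="staff", r2="student"):
    """Perform add_role, but reject the update if any subject would
    then hold both conflicting roles in the updated certificate set."""
    updated = add_role(certs, subject, pre, role)
    for s in {c[0] for c in updated}:
        held = held_roles(updated, s)
        if r1 in held and r2 in held:
            raise ValueError(f"conflict: {s} would hold both {r1} and {r2}")
    return updated

certs = {("Ann", None, "staff")}
certs = add_role_check_conflict(certs, "Bob", None, "student")  # fine
try:
    add_role_check_conflict(certs, "Ann", None, "student")
except ValueError as e:
    print(e)  # conflict: Ann would hold both staff and student
```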
The second construct is a clause that specifies that only a single subject may have a certain role at one time. So, for example, we can specify that there can only be one subject with the role 'manager' at one time by the clause:
unique manager;
Again, we can formally define this construct by defining
the functions addRoleCheckUnique(C, s, r1, r2) and
modRoleCheckUnique(C, s, r1, r2) which check for a
breach of the constraint whenever the set of appointment certificates is updated. The definitions for a unique role
role_id are:
addRoleCheckUnique(C, s, r1, r2) =
    if ∃s1, s2 ∈ S, s2 ≠ s1 . ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in addRole(C, s, r1, r2) then:
        error
    otherwise:
        addRole(C, s, r1, r2)
and
modRoleCheckUnique(C, s, r1, r2) =
    if ∃s1, s2 ∈ S, s2 ≠ s1 . ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in modRole(C, s, r1, r2) then:
        error
    otherwise:
        modRole(C, s, r1, r2)
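The uniqueness check can be sketched in the same style, here for the clause "unique manager;". As before, the certificate representation and helper names are illustrative:

```python
# Certificates as (subject, precondition, role); None stands in for ∅.
def held_roles(certs, subject):
    """Roles transitively held by a subject."""
    held, changed = set(), True
    while changed:
        changed = False
        for s, pre, role in certs:
            if s == subject and role not in held and (pre is None or pre in held):
                held.add(role)
                changed = True
    return held

def add_role_check_unique(certs, subject, pre, role, unique_role="manager"):
    """Perform the update, but reject it if two distinct subjects
    would hold the unique role afterwards."""
    updated = certs | {(subject, pre, role)}
    holders = {c[0] for c in updated if unique_role in held_roles(updated, c[0])}
    if len(holders) > 1:
        raise ValueError(f"unique role {unique_role} held by {sorted(holders)}")
    return updated

certs = add_role_check_unique(set(), "Ann", None, "manager")   # fine
try:
    add_role_check_unique(certs, "Bob", None, "manager")
except ValueError as e:
    print(e)  # unique role manager held by ['Ann', 'Bob']
```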
One concept of RASP that the model presented in this
paper does not yet support is that of role parameters. We have deliberately excluded this concept, not because we consider it to be unnecessary or unimportant, but for the sake of brevity and of clearly describing the basic model without this complicating factor. Role parameters can however be integrated into our model and a future paper will discuss this. Existing models for role parameters such as in [7] are not sufficient for RASP since they do not describe role transitions or transitive role relationships and also do not relate the role parameters to attributes of the protected objects.
A summary of the mappings from RASP syntax to their semantics as expressed in the formal model is given in Appendix B.
VIII. IMPLEMENTATION CONSIDERATIONS
While this paper is concerned with a general model rather than a specific implementation, it is nevertheless important that any access control scheme be implementable with realistic overheads for the checking of permissions. If the definition of
the allow(s, m, o) function in the previous section were to be
evaluated in that form for every attempted invocation of a method on an object, then unacceptable delays would be incurred. Similarly, if the rules were to be preprocessed to a central access control matrix for all subjects and all objects then that would incur a high overhead each time a certificate was added or changed.
Fortunately, neither of these extremes is necessary. Firstly, most subjects will be interested in only a small fraction of the total number of objects and secondly, the system need only be concerned with the subjects who are currently using it. Thirdly, although the number of subjects and objects in an organization may be large, the number of roles and attributes, and therefore the number of rules, will generally be fairly small, even for a fine-grained access scheme. Also, the kinds of operations to which the scheme is applied will generally be high-level operations like 'read', 'edit' or 'update' on documents or databases and so will not be extremely frequent.
Finally, rather than calculate the entire set of roles allowed for a subject, it is actually preferable for each subject to acquire only the ∅ role when they start a session and then explicitly request any further role they wish to adopt for that session. This means that only those roles need be checked against the rules rather than all possible roles for that subject. The reason this is preferable is that it allows a log to be maintained of exactly who is acting in which role at what time.
An implementation could thus work along the following lines:
• assign the role ∅ to a subject who starts a session
• when a subject requests to act in a further role:
o check for a certificate which allows this
o if allowed, determine the attribute sets associated with this role in the permission rules
• allow the subject to search for objects with those sets of attributes
• when a subject selects a certain object
o use the permission rules for the current roles to determine which operations can be performed on the object
Given appropriate index tables for the rule and certificate information, none of these individual steps need incur an unacceptable overhead.
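The session flow outlined above can be sketched as follows. The certificate and rule tables and all the names here (Session, adopt, allowed) are illustrative stand-ins for an implementation's index structures, not prescribed by the model:

```python
# Appointment certificates and allow rules; None stands in for the empty role ∅.
APPOINT = {("Fred", None, "doctor")}
PERMS = {("doctor", frozenset({"patientRecord"}), "read")}

class Session:
    def __init__(self, subject):
        self.subject = subject
        self.roles = {None}     # a new session holds only the empty role ∅
        self.log = []           # who acted in which role, in order

    def adopt(self, role):
        # check for a certificate permitting the step from a currently held role
        if any(s == self.subject and pre in self.roles and r == role
               for s, pre, r in APPOINT):
            self.roles.add(role)
            self.log.append((self.subject, role))
        else:
            raise PermissionError(f"{self.subject} may not adopt {role}")

    def allowed(self, attrs, op):
        # consult the permission rules for the currently adopted roles only
        return any(r in self.roles and a <= attrs and m == op
                   for r, a, m in PERMS)

s = Session("Fred")
s.adopt("doctor")
print(s.allowed({"patientRecord"}, "read"))   # True
print(s.allowed({"patientRecord"}, "edit"))   # False
```

Note that the log records exactly who acted in which role at what time, which is the reason given above for preferring explicit role adoption over precomputing all of a subject's roles.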
IX. CONCLUSION AND FUTURE WORK
Case studies show that information systems often require a degree of access control which cannot be expressed simply as a
static mapping from subjects to roles and from roles to operations on objects.
In this paper, we have formally defined a role-based access control model which has a much greater expressive power and which, in particular, can be used to formally describe the semantics of the RASP access specification language.
The model supports controlled dynamic acquisition of new roles, transitions from one role to another and role revocation. It also supports labeling of objects with attributes in a way analogous to appointing subjects to roles and defines permissions in terms of roles and attribute sets.
We have defined the access control model in two parts. The first represents the rules for role appointment and attribute labeling as well as role and attribute transitions and access permissions. The second part of the model represents the instantaneous state of the access system in terms of a set of appointment certificates and a set of labeling certificates. We have defined four functions for updating these sets.
A significant aspect of the model is the use of transitive relationships whereby a certificate represents the fact that the possession of a role or attribute may be conditional on the possession of another role or attribute. This allows the model to support role and attribute specialization hierarchies, controlled revocation of derived roles/attributes and flexibility in the addition of roles.
No existing formal model for role-based access control supports all the concepts captured in our model.
REFERENCES
[1] D.E. Bell and L.J. La Padula, “Secure computer systems: unified exposition and Multics interpretation”, MTR-2997, The MITRE Corporation, 1975.
[2] B. Blobel, “Authorisation and access control for electronic health record systems”, International Journal of Medical Informatics, 73, 2004.
[3] M.J. Covington, M.J. Moyer and M. Ahamad, “Generalized role-based access control for securing future applications”, Proc. 23rd National Information Systems Security Conference, Baltimore, 2000.
[4] N. Damianou, N. Dulay, E. Lupu and M. Sloman, “Ponder: A language for specifying security and management policies for distributed systems”, The Language Specification Version 2.3, Imperial College Research Report DoC 2000/1, 2000.
[5] M. Evered, “Rationale and Design of the Access Specification Language RASP”, Intl. Journal of Cyber-Security and Forensics, 1, 1, 2012.
[6] M. Evered and S. Bögeholz, “A case study in access control requirements for a health information system”, Proc. Australasian Information Security Workshop, Dunedin, 2004.
[7] J.H. Hine, W. Yao, J. Bacon and K. Moody, “An architecture for distributed OASIS services”, Proc. Middleware 2000, Lecture Notes in Computer Science, Vol. 1795, Springer-Verlag, Heidelberg/New York, 2000.
[8] A. Jones and B. Liskov, “A language extension for expressing constraints on data access”. Communications of the ACM, 21(5):358-367, May, 1978.
[9] T. Moses (Ed.), Extensible Access Control Markup Language (XACML) Version 2.0, OASIS Consortium, 2005.
[10] Object Management Group, Resource Access Decision Facility Specification, Version 1.0, 2001.
[11] Object Management Group, Object Constraint Language Specification Version 2.0, 2006.
[12] G. Russello, C. Dong and N. Dulay, “Authorisation and conflict resolution in hierarchical domains”, Proc. 8th IEEE Workshop on Policies for Distributed Systems and Networks, Bologna, 2007.
[13] J.H. Saltzer, “Protection and the control of information sharing in Multics”, Symposium on Operating System Principles, Yorktown Heights, NY, 1973.
[14] R. Sandhu, E.J. Coyne, H.L. Feinstein and C.E. Youman, “Role based access control models”, IEEE Computer 29 (2), 1996.
[15] R. Sandhu, “Role activation hierarchies”, Proc. 3rd ACM Workshop on Role-Based Access Control, Fairfax, 1998.
[16] M.C. Tschantz and S. Krishnamurthi, “Towards reasonability properties for access control policy languages”, Proc. 11th ACM Symposium on Access Control Models and Technologies, Lake Tahoe, 2006.
[17] W. Yao, K. Moody and J. Bacon, “A model of OASIS role-based access control and its support for active security”, ACM Transactions on Information and System Security, 5, 4, 2001.
[18] P. Yu and H. Yu, “Lessons learned from the practice of mobile health application development”, Proc. 28th Annual International Computer Software and Applications Conference, Hong Kong, 2004.
[19] T. Yu, X. Ma and M. Winslett, “Prunes: an efficient and complete strategy for automated trust negotiation over the internet”, Proc. 7th ACM Conference on Computer and Communications Security, ACM Press, 2000.
[20] E. Yuan and J. Tong, “Attributed based access control (ABAC) for web services”, Proc. IEEE International Conference on Web Services, 2005.
Appendix A – Concrete syntax of relevant RASP
constructs
clause: appoint_clause |
attribute_clause |
allow_clause |
conflict_clause |
unique_clause
appoint_clause: 'appoint'
role_id ':'
role_id transition role_id ';'
transition: '->' | '/->'
attribute_clause: 'attribute'
role_id ':'
attribute_id transition
attribute_id ';'
allow_clause: 'allow' role_id '!'
action ';'
action: attribute_id '.' operation_id |
        '{' attribute_list '}' '.'
        operation_id
attribute_list: attribute_id
        { ',' attribute_id }
conflict_clause: 'conflict'
role_id ',' role_id ';'
unique_clause: 'unique' role_id ';'
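For illustration, minimal recognizers for three of the clause forms in the grammar above can be written with regular expressions. Identifiers are simplified to word characters here; this is a sketch, not a full RASP parser:

```python
import re

# One pattern per clause form; each anchors the whole clause including ';'.
APPOINT = re.compile(r"^appoint\s+(\w+)\s*:\s*(\w+)\s*(->|/->)\s*(\w+)\s*;$")
CONFLICT = re.compile(r"^conflict\s+(\w+)\s*,\s*(\w+)\s*;$")
UNIQUE = re.compile(r"^unique\s+(\w+)\s*;$")

print(APPOINT.match("appoint hr: employee -> admin;").groups())
# ('hr', 'employee', '->', 'admin')
print(CONFLICT.match("conflict staff, student;").groups())
# ('staff', 'student')
print(UNIQUE.match("unique manager;").group(1))
# manager
```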
Appendix B – Summary of semantic mappings
‘appoint’ role_id1 ‘:’
role_id2 ‘->’ role_id3 ⇒
P′ = P ∪ { (r_role_id1, r_role_id2, r_role_id3) }
‘appoint’ role_id1 ‘:’
role_id2 ‘/->’ role_id3 ⇒
T′ = T ∪ { (r_role_id1, r_role_id2, r_role_id3) }
‘attribute’ role_id ‘:’
attr_id1 ‘->’ attr_id2 ⇒
L′ = L ∪ { (r_role_id, a_attr_id1, a_attr_id2) }
‘attribute’ role_id ‘:’
attr_id1 ‘/->’ attr_id2 ⇒
U′ = U ∪ { (r_role_id, a_attr_id1, a_attr_id2) }
‘allow’ role_id ‘!’
attr_id ‘.’ op_id ‘;’ ⇒
X′ = X ∪ { (r_role_id, {a_attr_id}, m_op_id) }
‘allow’ role_id ‘!’
‘{‘ attr_id1 ‘,’ attr_id2 ‘,’ … attr_idn ‘}’ ‘.’ op_id ‘;’ ⇒
X′ = X ∪ { (r_role_id, {a_attr_id1, a_attr_id2, … a_attr_idn}, m_op_id) }
‘conflict’ role_id1 ‘,’ role_id2 ‘;’ ⇒
addRoleCheckConflict(C, s, r1, r2) =
    if ∃s ∈ S . ⟨∅, r_role_id1⟩_s ∧ ⟨∅, r_role_id2⟩_s in addRole(C, s, r1, r2) then:
        error
    otherwise:
        addRole(C, s, r1, r2)
and
modRoleCheckConflict(C, s, r1, r2) =
    if ∃s ∈ S . ⟨∅, r_role_id1⟩_s ∧ ⟨∅, r_role_id2⟩_s in modRole(C, s, r1, r2) then:
        error
    otherwise:
        modRole(C, s, r1, r2)
‘unique’ role_id ‘;’ ⇒
addRoleCheckUnique(C, s, r1, r2) =
    if ∃s1, s2 ∈ S, s2 ≠ s1 . ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in addRole(C, s, r1, r2) then:
        error
    otherwise:
        addRole(C, s, r1, r2)
and
modRoleCheckUnique(C, s, r1, r2) =
    if ∃s1, s2 ∈ S, s2 ≠ s1 . ⟨∅, r_role_id⟩_s1 ∧ ⟨∅, r_role_id⟩_s2 in modRole(C, s, r1, r2) then:
        error
    otherwise:
        modRole(C, s, r1, r2)