IEEE Cloud Computing
The Essentials Issue, May/June 2012

Contents
eScience (p. 2)
Google App Engine (p. 8)
Identifying Risk (p. 14)

PROTOTYPE: A digital magazine in support of the IEEE Cloud Computing Initiative
Cloud Computing Initiative Steering Committee
Steve Diamond, chair
Nim Cheung
Kathy Grise
Michael Lightner
Mary Lynne Neilsen
Sorel Reisman
Jon Rokne

IEEE Computer Society Staff
Angela Burgess, Executive Director
Evan Butterfield, Director, Products & Services
Lars Jentsch, Manager, Editorial Services
Steve Woods, Manager, New Media & Production
Kathy Clark-Fisher, Manager, Products & Acquisitions
Monette Velasco and Jennie Zhu, Design & Production
May/June 2012
© 2012 IEEE. Published by the IEEE Computer Society.
Guest Editor's Introduction

An Essential Initiative
Jon Rokne, University of Calgary

Cloud computing is transforming information technology. As information and processes migrate to the cloud, it is transforming not only where computing is done but, fundamentally, how it is done. As increasingly more of the corporate and academic worlds invest in this technology, it will also drastically change IT professionals' working environment.

Cloud computing solves many problems of conventional computing, including handling peak loads, installing software updates, and using excess computing cycles. However, the new technology has also created new challenges, such as data security, data ownership, and transborder data storage.

The IEEE has realized that cloud computing is poised to be the dominant form for computing in the future and that it will be necessary to develop standards, conferences, publications, educational material, and general awareness information for cloud computing. Because of this, the New Initiative Committee of the IEEE has funded an IEEE Cloud Computing Initiative (CCI). CCI coordinates cloud-related activities for IEEE and has tracks for all of the identified aspects of cloud computing.

The CCI Publications Track is tasked with developing a slate of cloud computing-related publications. The CCI provides seed funding for the publications developed by the CCI Publications Track, and it has already developed a mature proposal for an IEEE Transactions on Cloud Computing, sponsored by five IEEE societies. This transactions is slated to commence publishing in-depth research papers in cloud computing in 2013, following approval by the Board of IEEE at the June 2012 meeting.

The second publishing initiative is to develop a cloud computing magazine. In preparation, the IEEE Computer Society publications team has created this supplement on behalf of the Cloud Computing Publishing Track. This supplement contains previously published articles that have recently appeared in several magazines. The aim of IEEE Cloud Computing magazine is to provide a focused home for cloud-related articles. The magazine will be technically cosponsored by several IEEE societies.

The CCI Publications Track would like to have broad representation from IEEE societies with interests in cloud computing. If anyone wishes to participate in the ongoing discussion of the publications initiatives, please contact Jon Rokne at [email protected].

Jon Rokne is the CCI Publications Track chair. He is a professor and former head of the Computer Science department at the University of Calgary and the past vice president of publications for IEEE.
Recent trends in science have made computational capabilities an essential part of scientific discovery. This combination of science and computing is often referred to as enhanced scientific discovery, or eScience. The collection of essays in The Fourth Paradigm describes how science has evolved from being experimentally driven to being collaborative and analysis-focused.1

eScience has been integral to high-energy physics for several decades due to the volume and complexity of data such experiments produce. In the 1990s, the computational
Science in the Cloud: Accelerating Discovery in the 21st Century
Joseph L. Hellerstein, Kai J. Kohlhoff, and David E. Konerding, Google
Scientific discovery is transitioning from a focus on data collection to an emphasis on analysis and prediction using large-scale computation. With appropriate software support, scientists can do these computations with unused cycles in commercial clouds. Moving science into the cloud will promote data sharing and collaborations that will accelerate scientific discovery.
demands of sequencing the human genome made eScience central to biology. More recently, eScience has become essential for neuroscientists in modeling brain circuits and for astronomers in simulating cosmological phenomena.
Biology provides an excellent example of how eScience contributes to scientific discovery. Much modern biological research is about relating DNA sequences (genotypes) to observable characteristics (phenotypes), as when researchers look for variations in DNA that promote cancer. The human genome has approximately 3 billion pairs of nucleotides, the elements that encode information in DNA. These base pairs encode common human characteristics, benign individual variations, and potential disease-causing variants. It turns out that individual variation is usually much more common than are disease-causing variants. So, understanding how the genome contributes to disease is much more complicated than looking at the difference between genomes. Instead, this analysis often requires detailed models of DNA-mediated chemical pathways to identify disease processes. The human genome's size and the complexity of modeling disease processes typically require large-scale computations and massive storage capacity.2
A common pattern in eScience is to explore many possibilities in parallel. Computational biologists can align millions of DNA reads (produced by a DNA sequencer) to a reference genome by aligning each one in parallel. Neuroscientists can evaluate a large number of parameters in parallel to find good models of brain activity. And astronomers can analyze different regions of the sky in parallel to search for supernovae.
That a high degree of parallelism can advance science has been a starting point for many efforts. For example, Folding@Home3 is a distributed computing project that enables scientists to understand the biochemical basis of several diseases. At Google, the Exacycle project provides massive parallelism for doing science in the cloud.
Harvesting Cycles for Science

Often, scientific discovery is enhanced by employing large-scale computation to assess if a theory is consistent with experimental results. Frequently, these computations (or jobs) are structured as a large number of independently executing tasks. This job structure is called embarrassingly parallel.
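The embarrassingly parallel structure described above can be sketched in a few lines: each task is an independent function call with no communication between tasks, so any worker pool can execute them in any order. This is an illustrative sketch (the task function and inputs are invented), not Exacycle's interface:

```python
# Sketch: an embarrassingly parallel job as a bag of independent tasks.
from multiprocessing import Pool

def score_candidate(params):
    """One independent task: evaluate a single point (no communication)."""
    x, y = params
    return x * x + y * y  # stand-in for an expensive simulation

if __name__ == "__main__":
    # 10,000 tasks, each runnable on any machine at any time.
    tasks = [(x, y) for x in range(100) for y in range(100)]
    with Pool() as pool:
        results = pool.map(score_candidate, tasks)  # tasks never interact
    print(min(results))
```

Because the tasks share nothing, the same structure scales from one desktop's worker pool to thousands of machines.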
The Exacycle project aims to find unused resources in the Google cloud to run embarrassingly parallel jobs at a very large scale. We do this by creating a system that is both a simplification and a generalization of MapReduce. Exacycle simplifies MapReduce in that all Exacycle tasks are essentially mappers. This simplification enables more efficient resource management.
www.computer.org/cloud 3
Exacycle implements the same communication interfaces between adjacent layers. Communication from an upper to a lower layer requires the upper layer to cut data into pieces that it then passes on to the lower layer. Typically, this communication provides data to tasks within the same job. Communication from a lower layer to an upper layer involves bundling data to produce aggregations. These interlayer interfaces are scalable and robust with minimal requirements for managing distributed state.
The primary mechanism Exacycle uses to scale is eliminating nearly all intercluster networking and machine-level disk I/O. An Exacycle task typically can't move more than 5 Gbytes of data into or out of the machine on which the task executes. Exacycle reduces network usage by managing data movement on tasks' behalf. Typically, the thousands to millions of tasks in an Exacycle job share some of their input files. Exacycle uses this knowledge of shared input files to coschedule tasks in the same cluster. This strategy improves throughput by exploiting the high network bandwidths between machines within the same cluster. Furthermore, Exacycle uses caching so that remote data are copied into a cluster only once.
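The coscheduling idea in this paragraph can be illustrated with a toy grouping step: tasks that read the same input file are placed in the same cluster, so the file crosses the intercluster network at most once and is then served from the local cache. The names and data below are assumptions for illustration; Exacycle's real placement logic also weighs resource availability:

```python
# Sketch of input-aware coscheduling (names are illustrative, not Exacycle's API).
from collections import defaultdict

def assign_clusters(tasks):
    """tasks: list of (task_id, input_file).
    Returns a mapping of shared input file -> task ids grouped together,
    so each remote file is copied into its cluster only once."""
    clusters = defaultdict(list)
    for task_id, input_file in tasks:
        clusters[input_file].append(task_id)
    return dict(clusters)

tasks = [(1, "genome.fa"), (2, "genome.fa"), (3, "sky_region_7.fits")]
print(assign_clusters(tasks))
# {'genome.fa': [1, 2], 'sky_region_7.fits': [3]}
```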
When Exacycle assigns a task to a machine, a timeout and retry hierarchy handles failures. This combination of timeouts and retries addresses most systemic errors. Because tasks have unique identifiers, the Exacycle retry logic assumes that two tasks with the same identifier compute the same results.
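The retry rule described here relies on tasks being idempotent: two executions with the same identifier produce the same result, so duplicated work from a retry is harmless. A minimal sketch, with invented names and an in-memory dictionary standing in for the authoritative result store:

```python
# Sketch of timeout-and-retry with idempotent, uniquely identified tasks.
completed = {}  # authoritative result store, keyed by task identifier

def run_with_retries(task_id, compute, attempts=3):
    """Run a task; on timeout, retry with the same identifier.
    Because equal identifiers imply equal results, the first recorded
    result wins and later duplicates are simply ignored."""
    if task_id in completed:              # an earlier attempt already finished
        return completed[task_id]
    for _ in range(attempts):
        try:
            result = compute()
        except TimeoutError:              # machine lost, preempted, or too slow
            continue                      # reschedule: same id, same result
        completed.setdefault(task_id, result)  # first writer wins
        return completed[task_id]
    raise RuntimeError(f"task {task_id} failed after {attempts} attempts")
```

For example, a task that times out twice and then succeeds returns its result on the third attempt, and any later retry with the same identifier returns the cached result without recomputing.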
For the most part, Exacycle doesn't employ durable cluster- or machine-level storage owing to its engineering costs and performance penalties. Instead, Exacycle optimistically keeps nearly all state in RAM. Robustness comes from having a single authoritative store and spreading state across many machines. If a machine fails, Exacycle moves tasks from the failed machine to another machine. If there is a failure of a machine running a Honcho cluster-level scheduler, the Honcho is restarted on another machine and uses discovery services to recover cached state.
The Exacycle project began two years ago. The system has been running eScience applications in production for roughly a year, and has had continuous, intensive use over the past six months. Recently, Google donated 1 billion core hours to scientific discovery through the Exacycle Visiting Faculty Grant Program (http://research.google.com/university/exacycle
Figure 1. Exacycle system architecture. Daimyo assigns tasks to clusters, Honcho assigns tasks to machines, Peasant encapsulates tasks, and the bottom layer caches task results.
Exacycle generalizes MapReduce by providing automation that monitors resources across the Google cloud and assigns tasks to compute clusters based on resource availability and job requirements. This provides massive scaling for embarrassingly parallel jobs.
Google is very efficient at using computing resources, but resource utilizations still vary depending on time of day, day of the week, and season. For example, Web users most frequently use search engines during the day, and search providers typically direct traffic to datacenters close to users to reduce latency. This leads to low datacenter utilizations during the datacenters' local nighttime.
Still, low resource utilization doesn't necessarily enable more tasks to run in the cloud. Many tasks require considerable memory, or moderate amounts of memory and CPU in combination. Such tasks can run in the cloud only if at least one machine satisfies all the task's resource requirements. One way to quantify whether tasks can run is to determine if suitably sized slots are available. For example, recent measurements of the Google cloud indicate that it has 13 times more slots for tasks requiring only one core and 4 Gbytes of RAM than there are slots for tasks requiring four cores and 32 Gbytes of RAM. In general, finding slots for tasks that require fewer resources is much easier. For this reason, an Exacycle task typically consumes about one core and 1 Gbyte of memory for no more than an hour.
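The slot notion can be made concrete with a small counting function: a slot exists only where a single machine's spare capacity satisfies all of a task's requirements at once. The machine capacities below are invented for illustration; the 13x figure in the text comes from Google's measurements, not from this sketch:

```python
# Sketch of slot counting over spare machine capacity.
def count_slots(machines, task_cores, task_gb):
    """machines: list of (free_cores, free_gb) per machine.
    Each machine contributes as many slots as fit in BOTH dimensions,
    since a task can't split its requirements across machines."""
    return sum(
        min(free_cores // task_cores, free_gb // task_gb)
        for free_cores, free_gb in machines
    )

spare = [(3, 20), (6, 40), (1, 4)]   # hypothetical leftover capacity
small = count_slots(spare, 1, 4)      # 1-core, 4-Gbyte tasks
large = count_slots(spare, 4, 32)     # 4-core, 32-Gbyte tasks
print(small, large)                   # many small slots, few large ones
```

Even this toy example shows why small tasks find slots far more easily: fragmented leftover capacity rarely lines up into a large slot.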
For cloud computing to be efficient, it must adapt quickly to changes in resource demands. In particular, higher-priority work can preempt Exacycle tasks, which must then be re-run. However, Exacycle throughput is excellent because in practice preemption is rare. This is due in part to the manner in which Exacycle locates resources that run tasks.
As Figure 1 shows, Exacycle is structured into multiple layers. At the top is the Daimyo global scheduler, which assigns tasks to clusters. The second layer is the Honcho cluster scheduler, which assigns tasks to machines. The Peasant machine manager in the third layer encapsulates tasks, and the bottom layer caches task results. The Honchos and Peasants cache information but are otherwise stateless. This simplifies failure handling.
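The layered assignment can be sketched as three nested functions, one per layer: the global scheduler picks a cluster, the cluster scheduler picks a machine, and the machine manager runs the task. Apart from the layer names Daimyo, Honcho, and Peasant, the data structures and selection policies here are assumptions for illustration:

```python
# Sketch of three-layer task assignment (policies are invented).
def daimyo_assign(clusters, task):
    """Global layer: pick the cluster with the most free machines."""
    name = max(clusters, key=lambda c: len(clusters[c]["free"]))
    return name, honcho_assign(clusters[name], task)

def honcho_assign(cluster, task):
    """Cluster layer: pick a free machine and hand the task to its Peasant."""
    machine = cluster["free"].pop()
    return machine, peasant_run(task)

def peasant_run(task):
    """Machine layer: encapsulate and execute the task, return its result."""
    return task()

clusters = {"us-east": {"free": ["m1", "m2"]}, "eu-west": {"free": ["m3"]}}
print(daimyo_assign(clusters, lambda: 7))
```

Each layer only talks to the layer directly below it, mirroring the scalable interlayer interfaces described earlier.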
_program.html). To support this, Exacycle consumes approximately 2.7 million CPU hours per day, and often much more. As of early February, visiting faculty had completed 58 million tasks.
Visiting faculty are addressing various scientific problems that can benefit from large-scale computation:

- The enzyme science project seeks to discover how bacteria develop resistance to antibiotics, a growing problem for public health.
- The molecular docking project seeks to advance drug discovery by using massive computation to identify small molecules that bind to one or more of the huge set of proteins that catalyze reactions in cells. The potential here is to greatly accelerate the design of drugs that interfere with disease pathways.
- The computational astronomy project plays an integral role in the design of the 3,200-megapixel Large Synoptic Survey Telescope. As one example, the project is doing large-scale simulations to determine how to correct for atmospheric distortions of light.
- The molecular modeling project is expanding the understanding of computational methods for simulating macromolecular processes. The first application is to determine how molecules enter and leave the
Figure 2. Trajectory durations for G protein-coupled receptors (GPCRs). These receptors are critical to many drugs' effectiveness because of their role in communicating signals across cell membranes. The upper x-axis shows the trajectory duration, whereas the lower x-axis shows the core hours required to compute trajectory durations. Computing one millisecond of trajectory data requires millions of core days on a modern desktop computer. Exacycle can do these computations in a few days.
cell nucleus through a channel known as the nuclear pore complex.

Google has selected these projects based on their potential to produce scientific results of major importance. One measure of impact will be publishing in top journals such as Science and Nature.

To better illustrate the potential for science in the cloud, we next look at one problem in detail.
Simulating Molecular Dynamics

Exacycle has undertaken a project that relates to a class of molecules called G protein-coupled receptors. GPCRs are critical to many drug therapies. Indeed, about a third of pharmaceuticals target GPCRs. Despite this, scientists still don't fully understand the molecular basis of GPCR action.
A bit of science is needed to appreciate the computational problem that Exacycle is addressing. GPCRs are critical to transmembrane signaling, an important part of many disease pathways. Scientists know that GPCRs embed in cell membranes to provide communication between extracellular signals and intracellular processes. This communication occurs when certain molecules bind to sites on GPCRs that are accessible from outside the membrane. However, scientists don't fully understand the sequence of changes that then lead to communication across the cell membrane.
To gain a better understanding of GPCR activity, Exacycle is doing large-scale simulations of GPCR molecular dynamics. This is a challenging undertaking because of the detail required to obtain scientific insight. In particular, biomolecules at body temperature undergo continuous fluctuations with regard to atoms' locations and the 3D shape of molecules (referred to as their conformation). Many changes occur at a time scale of femtoseconds to nanoseconds (10^-15 to 10^-9 seconds). However, most chemical processes of interest occur at a time scale of microseconds to milliseconds. The term trajectory refers to a sequence of motions of a set of atoms under study over time. Figure 2 depicts the insights possible with trajectories of different durations. Understanding GPCR actions requires simulations that generate data over milliseconds.
planetary-scale collaborations that power scientific discovery in the 21st century.
References

1. The Fourth Paradigm: Data-Intensive Scientific Discovery, T. Hey, S. Tansley, and K. Tolle, eds., Microsoft Research, 2009.
2. M. Schatz, B. Langmead, and S. Salzberg, "Cloud Computing and the DNA Data Race," Nature Biotechnology, vol. 28, no. 7, 2010, pp. 691–693.
3. M. Shirts and V. Pande, "Screen Savers of the World Unite!" Science, vol. 290, no. 5498, 2000, p. 1903.
4. S. Melnik et al., "Dremel: Interactive Analysis of Web-Scale Datasets," Proc. Conf. Very Large Databases, VLDB Endowment, vol. 3, 2010, pp. 330–339.
Joseph L. Hellerstein is at Google, where he manages the Big Science Project, which addresses cloud computing for scientific discovery. He has a PhD in computer science from the University of California, Los Angeles. Hellerstein is a fellow of IEEE.
Kai J. Kohlhoff is a research scientist at Google, where he works on cloud computing and eScience. He has a PhD in structural bioinformatics from the University of Cambridge, UK. Contact him at [email protected].
David E. Konerding is a software engineer at Google, where he works on cloud infrastructure and scientific computing. He has a PhD in biophysics from the University of California, Berkeley. Contact him at [email protected].
This article will appear in IEEE Internet Computing, July/August 2012.
Exacycle simulates the trajectories of approximately 58,000 atoms, the number of atoms in a typical GPCR system, including the cell membrane and water molecules. It does so at femtosecond precision over trillions of time steps by computing trajectories using embarrassingly parallel jobs.
The GPCR data analysis pipeline uses trajectories in two ways. The first is to construct models of GPCR behavior. For example, researchers can use trajectories to create a Markov model with states in which protein structures are described according to their 3D structure and kinetic energy. Second, researchers analyze trajectories for changes that are important for activating signaling across the cell membrane.

It takes approximately one core day to simulate half a nanosecond of a single trajectory on a modern desktop. So, obtaining scientific insight requires millions of core days to generate a millisecond of trajectory data. Clearly, massive computational resources are required.
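The arithmetic in this paragraph can be checked directly: at half a nanosecond per core day, a millisecond of trajectory works out to two million core days.

```python
# Checking the core-day arithmetic stated in the text.
ns_per_core_day = 0.5        # half a nanosecond simulated per core day
ns_per_ms = 1_000_000        # 1 millisecond = 10^6 nanoseconds
core_days_per_ms = ns_per_ms / ns_per_core_day
print(core_days_per_ms)      # 2,000,000 core days per millisecond
```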
Exacycle provides these resources to compute trajectories in parallel. However, some thought is required to use Exacycle effectively. For GPCR trajectories, the challenge is that it takes millions of core hours to compute an interesting trajectory, but an Exacycle task typically executes for no more than one core hour. So, Exacycle constructs trajectories by executing a series of tasks. This requires passing partially computed trajectories from one task to the next in a way that maintains high throughput.
The approach for computing trajectories has several parts. A driver script generates tens of thousands of tasks and submits them to Exacycle. The script also monitors task states and registers events such as task completions or failures. To maintain high throughput, this script then propagates partial trajectories that tasks compute to other tasks. Exacycle provides mechanisms for monitoring task executions and supporting the investigation and resolution of task and system failures.
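The chaining idea, building a long trajectory from many short tasks by feeding each task's partial result to the next, can be sketched as follows. The segment function is a toy stand-in for a real molecular dynamics step, and the sequential loop stands in for the driver's event-driven task submission:

```python
# Sketch of the driver's chaining pattern (stand-in functions, not Exacycle's).
def simulate_segment(trajectory):
    """One short task: extend the partial trajectory by one segment."""
    return trajectory + [trajectory[-1] + 1]  # toy dynamics step

def compute_trajectory(initial_state, n_segments):
    """Driver: run segment tasks in sequence, propagating the partial
    trajectory from each completed task to the next one."""
    trajectory = [initial_state]
    for _ in range(n_segments):
        trajectory = simulate_segment(trajectory)
    return trajectory

print(compute_trajectory(0, 4))  # [0, 1, 2, 3, 4]
```

In the real system, many such chains advance concurrently, which is how throughput stays high even though each individual chain is sequential.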
Thus far, Exacycle has computed more than 150,000 trajectories with durations totaling more than 4 milliseconds. At peak, Exacycle simulates approximately 80 microseconds of trajectory data per day. This corresponds to roughly 600 teraflops.
Exacycle has produced hundreds of terabytes of trajectory data. Analyzing these data presents a huge challenge. One approach is to use MapReduce to calculate summary statistics of trajectories and then place the results into a relational database of trajectory tables. Scientists have obtained considerable insight by doing SQL queries against the trajectory tables. However, this requires the database to have the scalability of technologies such as Dremel,4 which provides interactive response times for ad hoc queries on tables with billions of rows.
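The analysis pattern described here, map-style summary statistics loaded into a relational table for ad hoc SQL, can be sketched with sqlite3 standing in for a scalable engine such as Dremel. The schema and data are invented for illustration:

```python
# Sketch: per-trajectory summary statistics queried via SQL.
import sqlite3

trajectories = {"t1": [0.1, 0.4, 0.3], "t2": [0.9, 0.7, 0.8]}  # toy data

# "Map" phase: reduce each trajectory to a few summary statistics.
rows = [(tid, len(v), sum(v) / len(v), max(v))
        for tid, v in trajectories.items()]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE traj_stats (id TEXT, n INT, mean REAL, peak REAL)")
db.executemany("INSERT INTO traj_stats VALUES (?, ?, ?, ?)", rows)

# Scientists then pose ad hoc SQL queries against the summary table.
(high,) = db.execute(
    "SELECT COUNT(*) FROM traj_stats WHERE mean > 0.5").fetchone()
print(high)  # 1
```

The point of the sketch is the division of labor: the expensive reduction over raw trajectory data happens once in the map phase, and interactive exploration then touches only the small summary table.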
Amazon, Microsoft, Google, and others offer capabilities for running science applications in the cloud. The appeal of these services is that scientists don't need to buy expensive, dedicated clusters. Instead, they pay a modest rent for on-demand access to large quantities of cloud computing resources. Although doing science in the cloud has appeal, it could have hidden costs. For example, scientists might have to recode applications to exploit cloud functionality or add new code if some features aren't present in the cloud.
Science in the cloud offers much more than a compute infrastructure. A recent trend is that scientific contributions require that researchers make large datasets publicly available. Some examples are the Allen Institute's Brain Atlas and the US National Center for Biotechnology Information (NCBI) genome database. Both are repositories that researchers widely use to do computation-intensive analysis of data that others have collected. Hosting these datasets in public clouds is much easier than requiring individual scientists (or even universities) to build their own data-hosting systems.
Much more is on the way in this arena. Using the cloud for computation and data storage will facilitate scientists sharing both data and computational tools. Indeed, substantial efforts are already under way, such as Sage Bionetworks' idea of a data commons (http://sagebase.org/research/Synapse1.php). Sharing data and code will let scientists more rapidly build on their peers' results. Longer term, the big appeal of science in the cloud is promoting
For many records and information management (RIM) professionals, cloud computing resembles a traditional hosting service: information storage or applications are outsourced to a third-party provider and accessed by the organization through a network connection. However, the information, applications, and processing power in a cloud infrastructure are distributed across many servers and stored along with other customers' information, separated only by logical isolation mechanisms. This presents both new RIM challenges and benefits.

RIM professionals are specifically concerned with information as a core business asset. Records are a subset of organizational information that is often required to provide evidence of organizational activities and transactions. They require protection in the same way as every other asset. Decision-making processes take into consideration the wider context of organizational strategy and form part of a complex structure of assessments regarding information value, alignment, and assurance. All of these operate within an overarching performance and risk framework.
Cloud Computing: A Brief Introduction

Cloud computing is the ability to access a pool of computing resources owned and maintained by a third party via the Internet. It isn't a new technology but a new way of delivering computing resources based on long-existing technologies such as server virtualization. The cloud is composed of hardware, storage, networks, interfaces, and services that provide the means through which users access the infrastructure, computing power, applications, and services on demand and independent of location. Cloud computing usually involves the transfer, storage, and processing of information on the provider's infrastructure, which is outside the customer's control.
The National Institute of Standards and Technology (NIST) defines it as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction" (http://csrc.nist.gov/publications/PubsSPs.html#800-145/SP800-145.pdf). As Figure 1 shows, the NIST-defined model highlights five essential characteristics that reflect a service's flexibility and the control that users have over it. NIST also distinguishes among three delivery models (software as a service [SaaS], platform as a service [PaaS], and infrastructure as a service [IaaS]) and four deployment models (public, private, hybrid, and community clouds).
Delivery Models

As a general rule, the customer doesn't control the underlying cloud infrastructure in any delivery model. SaaS is software offered by a third-party provider, usually on demand via the Internet and configurable remotely. PaaS also allows customers to develop new applications using APIs deployed and configurable remotely. In this case, the customer does have control over the deployed applications and operating systems. In the IaaS provision, virtual machines and other abstracted hardware and operating systems are made available. The customer, therefore, has control over operating systems, storage, and deployed applications.
Deployment Models

There are, essentially, three deployment models: private, community, and public, with a fourth combined option. Private clouds are operated solely for an organization; community clouds are shared by several organizations and are designed to support a specific community. In public clouds, the infrastructure is made publicly available but is owned by an organization selling cloud
Cloud Computing: A Records and Information Management Perspective
Kirsten Ferguson-Boucher, Aberystwyth University, Wales
Ultimately, how to make decisions about which cloud service/deployment model to select, and what sort of things to take into consideration when making that initial decision, requires the organization to consider whether loss of control will significantly affect the security of mission-critical information.
services. Resources are offsite and shared among all customers in a multitenancy model. Hybrid clouds are a composition of two or more clouds that remain unique entities but are bound together by standardized or proprietary technology to enable data and application portability.
Is the Cloud Right for You?

Making the decision to move to the cloud is a complex one and depends very much on your organizational context. Let's examine some generic benefits and challenges. An obvious benefit is a reduction in capital expenditure: heavy investment in new hardware and software is no longer required for often underutilized functions, such as storage and processing. Organizations can tap into readily available computing resources on demand, with large datacenters often using virtualization technologies (the abstraction of computing resources from the underlying hardware) to enable scalable and flexible service provision. Applications, storage, servers, and networks are allocated flexibly in a multitenancy environment to achieve maximum computing and storage capacities.
From a provider perspective, utilization of shared resources results in higher efficiency and the ability to offer cloud computing services at low costs; customers likewise benefit from cheaper access. But make no mistake: the cloud still involves costs to an organization as it tries to integrate new services with existing legacy processes. Table 1 summarizes these and some of the other general pros and cons of cloud provision.
There are, however, very specific considerations that relate to the ability of the organization to manage its information and ensure that records are available for current and future organizational use. In particular, the cloud offers specific benefits for RIM: improved business processes, facilitation of location-independent collaboration, and access to resources and information at any time. However, some aspects of cloud computing can have a negative impact on RIM as well:

- compliance and e-discovery;
- integrity and confidentiality;
- service availability and reliability;
- service portability and interoperability;
- information retrieval and destruction; and
- loss of governance, integration, and management.
The sidebar "Ten Questions to Ask When Outsourcing to the Cloud" offers some guidance about what service might be best for a particular organization's context.
Managing Information Assets in the Cloud

Organizations are still responsible for their information even if it's stored elsewhere (in this case, in the cloud). ISO 15489 (the international standard for records management) defines records as being authentic, reliable, and usable and possessing integrity. How does the move to the cloud affect these characteristics? Information governance and assurance require policies and procedures for maintaining the above and will need amending to incorporate the changing environment. There must be a clear understanding of who's responsible for what and how policies and procedures will be implemented. Issues such as metadata application, encryption strategies, and shorter-term preservation requirements, as well as permanent retention or destruction strategies, must also be considered. Particular reference to data protection, privacy legislation, freedom of information, and environmental regulations requires organizations to know where their information is stored (in what jurisdictions) and how it can be accessed within given time frames. Will the move to the cloud restrict the organization's ability to comply?
Litigation also requires consideration: being able to identify relevant information, retrieve it, and supply it to courts in a timely manner can be difficult if the organization hasn't thought about how this would be achieved prior to an incident. Contracts need to be negotiated with these considerations in mind, with clauses built in about data destruction or how information can be returned to the organization, as well as how the provider manages it.
Operating in the Cloud

Use of information in the cloud typically precludes the use of encryption because it would adversely affect data processing, indexing, and searching. If the service uses encryption, the customer would need to know if this happens automatically and how the encryption keys are created, held, and used across single and multiple sites to be able to confirm that information is authentic. Business continuity can be affected by system failure, so information about continuity, monitoring, priority, and recovery procedures would give organizations a better picture of the risk of system failure to their activities.
Ultimately, making decisions about which cloud service/deployment model to select, and what sort of things to take into consideration when making that initial decision, requires the organization to consider whether loss of control will significantly affect the security of mission-critical information. In particular, identifying risk and assessing the organization's risk appetite is a critical factor in making decisions about moving to the cloud. The business must be clear about the type of information it's willing to store, how sensitive that information is, and whether its loss or compromise would affect the compliance environment in which the organization operates.
Acknowledgments

More information about the research undertaken by Aberystwyth University in conjunction with the Archives and Records Association of UK and Ireland, which underpins this article, can be found at www.archives.org.uk/ara-in-action/best-practice-guidelines.html.
Kirsten Ferguson-Boucher lectures in records management; information governance; law, compliance, and ethics; and information assurance at Aberystwyth University, Wales. Her research interests include the convergence between related disciplines and how organizations in all sectors can reach acceptable levels of information governance and assurance across the spectrum of technologies. Contact her at [email protected].
This article originally appeared in IEEE Security & Privacy, November/December 2011; http://doi.ieeecomputersociety.org/10.1109/MSP.2011.159.
Funding agencies and institutions must purchase and provision expensive parallel computing hardware to support high-performance computing (HPC) simulations. In many cases, the physical hosting costs, as well as the operation, maintenance, and depreciation costs, exceed the acquisition price, making the overall investment nontransparent and unprofitable.

Through a new business model of renting resources only in exact amounts for precise durations, cloud computing promises to be a cheaper alternative to parallel computers and more reliable than grids. Nevertheless, it remains dominated by commercial and industrial applications; its suitability for parallel computing remains largely unexplored.

Until now, research on scientific cloud computing concentrated almost exclusively on infrastructure as a service (IaaS): infrastructures on which you can easily deploy legacy applications and benchmarks encapsulated in virtual machines. We present an approach to evaluate a cloud platform for HPC that is based on platform as a service (PaaS): Google App Engine (GAE).1 GAE is a simple parallel computing framework that supports development of computationally intensive HPC algorithms and applications. The underlying Google infrastructure transparently schedules and executes the applications and produces detailed profiling information for performance and cost analysis. GAE supports development of scalable Web applications for smaller companies: those that can't afford to overprovision a large infrastructure that can handle large traffic peaks at all times.
Google App Engine

GAE hosts Web applications on Google's large-scale server infrastructure. It has three main components: scalable services, a runtime environment, and a data store.

GAE's front-end service handles HTTP requests and maps them to the appropriate application servers. Application servers start, initialize, and reuse application instances for incoming requests. During traffic peaks, GAE automatically allocates additional resources to start new instances. The number of new instances for an application and the distribution of requests depend on traffic and resource use patterns. So, GAE performs load balancing and cache management automatically.
Each application instance executes in a sandbox (a runtime environment abstracted from the underlying operating system). This prevents applications from performing malicious operations and enables GAE to optimize CPU and memory utilization for multiple applications on the same physical machine. Sandboxing also imposes various programmer restrictions:
Applications have no access to the underlying hardware and only limited access to network facilities.
Java applications can use only a subset of the standard library functionality.
Applications can't use threads.
A request has a maximum of 30 seconds to respond to the client.
GAE applications use resources such as CPU time, I/O bandwidth, and the number of requests within certain quotas associated with each resource type. The CPU time is, in fuzzy terms, equivalent to the number of CPU cycles that a 1.2-GHz Intel x86 processor can perform in the same amount of time. Information on the resource usage can be obtained through the GAE application administration Web interface.
Finally, the data store lets developers enable data to persist beyond requests. The data store can be shared across different slave applications.
Evaluating High-Performance Computing on Google App Engine
Radu Prodan, Michael Sperk, and Simon Ostermann, University of Innsbruck

Google App Engine offers relatively low resource-provisioning overhead and an inexpensive pricing model for jobs shorter than one hour.

A Parallel Computing Framework
To support the development of parallel applications with GAE, we designed a Java-based generic framework (see Figure 1). Implementing a new application in our framework requires specialization of three abstract interfaces (classes): JobFactory, WorkJob, and Result (see Figure 2).

The master application is a Java program that implements JobFactory on the user's local machine. JobFactory manages the algorithm's logic and parallelization into several WorkJobs. WorkJob is an abstract class implemented as part of each slave application, in particular, the run() method, which executes the actual computational job. Each slave application deploys as a separate GAE application and, therefore, has a distinct URI. The slave applications provide a simple HTTP interface and accept either data requests or computational job requests.
Requests
The HTTP message header stores the type of request.
A job request contains one WorkJob that's submitted to a slave application and extended. If multiple requests are submitted to the same slave application, GAE automatically starts and manages multiple instances to handle the current load; the programmer doesn't have control over the instances. (One slave application is, in theory, sufficient; however, our framework can distribute jobs among multiple slave applications to solve larger problems.)
A data request transfers data shared by all jobs to the persistent data store (indicated by the useSharedData method). It uses multiple parallel HTTP requests to stay within GAE's maximum HTTP payload size of 1 Mbyte and to improve bandwidth utilization. The fetchSharedData method retrieves shared data from the data store as needed.
In a clear request, the slave application deletes the entire data store contents. Clear requests typically occur after a run, whether it's successful or failed.
A ping request returns instantly and determines whether a slave application is still online. If the slave is offline, the master reclaims the job and submits it to another instance.
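The four request types can be pictured as a small dispatcher on the slave side. The header name and enum below are our own illustrative assumptions; the article doesn't publish the slave's actual header values:

```java
// Sketch of how a slave application might map the request type stored in
// the HTTP message header to an action. The names here are hypothetical.
import java.util.Locale;

public class RequestDispatcher {
    enum RequestType { JOB, DATA, CLEAR, PING }

    // Map the header value (e.g., from a hypothetical "X-Request-Type"
    // header) to one of the four request types described in the text.
    static RequestType parse(String headerValue) {
        switch (headerValue.toLowerCase(Locale.ROOT)) {
            case "job":   return RequestType.JOB;    // execute a WorkJob
            case "data":  return RequestType.DATA;   // store shared data
            case "clear": return RequestType.CLEAR;  // wipe the data store
            case "ping":  return RequestType.PING;   // liveness check
            default:
                throw new IllegalArgumentException("unknown request type: " + headerValue);
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("ping")); // PING
    }
}
```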
WorkJob Execution
Mapping WorkJobs to resources follows a dynamic work pool approach that's suitable for slaves running as black boxes on sandboxed resources with unpredictable execution times. Each slave application has an associated job manager in the context of the master application. It requests WorkJobs from the global pool, submits them to its slave instances for computation (GAE automatically decides which instance is used), and sends back partial results.

We associate a queue size with every slave to indicate the number of parallel jobs it can simultaneously handle. The size should correspond to the number of processing cores available underneath. Finding the optimal size at a certain time is difficult for two reasons. First, GAE doesn't publish its hardware information; second, an application might share the hardware with other competing applications. So, we approximate the queue size at a certain time by
Figure 1. Our parallel computing framework architecture. The boxes labeled I denote multiple slave instances. The master application is responsible for generating and distributing the work among parallel slaves implemented as GAE Web applications and responsible for the actual computation.
Figure 2. The Java code for our parallel computing framework interface. JobFactory instantiates the master application, WorkJob instantiates the slave, and Result represents the final outcome of a slave computation.
import java.io.Serializable;

public interface JobFactory {
    WorkJob getWorkJob();
    int remainingJobs();
    void submitResult(Result result);
    Result getEndResult();
    boolean useSharedData();
    Serializable getSharedData();
}

public abstract class WorkJob implements Serializable {
    private int id;
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public abstract Result run();
    public abstract void fetchSharedData();
}

public abstract class Result implements Serializable {
    private int id;
    private long cpuTime;
    public long getCPUTime() { return cpuTime; }
    public void setCPUTime(long cpuTime) { this.cpuTime = cpuTime; }
    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
}
conducting a warm-up training phase before each experiment.
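Such a warm-up phase can be approximated locally along these lines (this is our own sketch, not the authors' code): time a fixed batch of dummy jobs at several candidate queue sizes and keep the fastest, with a thread pool standing in for a slave's instances.

```java
// Illustrative warm-up: pick the queue size that finishes a fixed batch
// of dummy CPU-bound jobs fastest. Real WorkJobs would replace the lambda.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class QueueSizeWarmup {
    static long timeBatch(int queueSize, int jobs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(queueSize);
        CountDownLatch done = new CountDownLatch(jobs);
        long start = System.nanoTime();
        for (int i = 0; i < jobs; i++) {
            pool.submit(() -> {              // dummy CPU-bound job
                double x = 0;
                for (int k = 0; k < 200_000; k++) x += Math.sqrt(k);
                done.countDown();
                return x;
            });
        }
        done.await();
        pool.shutdown();
        return System.nanoTime() - start;
    }

    static int bestQueueSize(int maxSize, int jobs) throws Exception {
        int best = 1;
        long bestTime = Long.MAX_VALUE;
        for (int q = 1; q <= maxSize; q++) { // try each candidate size
            long t = timeBatch(q, jobs);
            if (t < bestTime) { bestTime = t; best = q; }
        }
        return best;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("chosen queue size: " + bestQueueSize(4, 16));
    }
}
```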
The slave application serializes the WorkJob's results and wraps them in an HTTP response, which the master collects and assembles. A Result has the same unique identifier as the WorkJob. The calculationTime field stores the effective computation time spent in run() for performance evaluation.
Failures
A GAE environment can have three types of failure: an exceeded quota, offline slave applications, or loss of connectivity. To cope with such failures, the master implements a simple fault-tolerance mechanism to resubmit the failed WorkJobs to the corresponding slaves using an exponential back-off time-out that depends on the failure type.
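The resubmission idea can be sketched as follows. submitJob is a placeholder for the real HTTP submission, and the doubling schedule is one common choice of exponential back-off, not necessarily the authors' exact timings:

```java
// Sketch of fault-tolerant resubmission with an exponentially growing
// back-off timeout. Only the retry arithmetic is the point here.
public class Backoff {
    // base * 2^attempt
    static long backoffMillis(long baseMillis, int attempt) {
        return baseMillis << attempt;
    }

    static boolean submitWithRetry(int maxAttempts, long baseMillis) throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (submitJob()) return true;                     // success
            Thread.sleep(backoffMillis(baseMillis, attempt)); // wait, retry
        }
        return false;                                          // give up
    }

    // Placeholder slave call: fails twice, then succeeds.
    private static int calls = 0;
    static boolean submitJob() { return ++calls >= 3; }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(submitWithRetry(5, 10)); // true
    }
}
```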
Benchmarks
We began our evaluation of GAE with a set of benchmarks to provide important information for scheduling parallel applications on its resources. To help users understand the price of moving from a local parallel computer to a remote cloud with sandboxed resources, we deployed a GAE development server on Karwendel, a local machine with 16 Gbytes of memory and four 2.2-GHz dual-core Opteron processors. Instead of spawning additional sandboxed instances, the development server managed parallel requests in separate threads.
Resource Provisioning
Resource-provisioning overhead is the time between issuing an HTTP request and receiving the HTTP response. Various factors beyond the underlying TCP network influence the overhead (for example, load balancing to assign a request to an application server, which includes the initialization of an instance if none exists).
To measure the overhead, we sent HTTP ping requests with payloads between 0 and 2.7 Mbytes in 300-Kbyte steps, repeated 50 times for each size, and took the average. The overhead didn't increase linearly with the payload (see Figure 3) because TCP achieved higher bandwidth for larger payloads. We measured overhead in seconds; IaaS-based infrastructures, such as Amazon Elastic Compute Cloud (EC2), exhibit latencies measured in minutes.2
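The measurement loop itself is simple to reproduce in outline; sendPing below is a local stand-in for the real HTTP round trip, so only the timing-and-averaging structure is meaningful:

```java
// Sketch of the overhead measurement: time repeated requests of a given
// payload size and report the mean latency in milliseconds.
public class OverheadProbe {
    static double averageLatencyMillis(int repetitions, int payloadBytes) {
        long total = 0;
        for (int i = 0; i < repetitions; i++) {
            long start = System.nanoTime();
            sendPing(new byte[payloadBytes]);     // one round trip
            total += System.nanoTime() - start;
        }
        return total / 1e6 / repetitions;         // mean, in ms
    }

    // Placeholder for the HTTP ping; does a token amount of work.
    static void sendPing(byte[] payload) {
        java.util.Arrays.fill(payload, (byte) 1);
    }

    public static void main(String[] args) {
        System.out.printf("avg latency: %.3f ms%n", averageLatencyMillis(50, 300_000));
    }
}
```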
Just-in-Time Compilation
A Java virtual machine's just-in-time (JIT) compilation converts frequently used parts of byte code to native machine code, notably improving performance. To observe JIT compilation effects, we implemented a simple Fibonacci number generator. We submitted it to GAE 50 times in sequence with a delay of one second, always using the same problem size. We set up the slave application with no instances running and measured the effective computation time in the run() of each WorkJob. As we described earlier, GAE spawns instances of an application depending on its recent load (the more requests, the more instances). To mark and track instances, we used a Singleton class that contained a randomly initialized static identifier field.

Figure 4 shows that seven instances handled the 50 requests. Moreover, the first two requests in each instance took considerably longer than the rest. After JIT compilation, the code executed over three times faster.
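The same effect is easy to observe on any JVM by timing repeated runs of a small computation. This is a local illustration of the warm-up behavior, not the GAE experiment itself; exact ratios depend on the JVM:

```java
// Time the same computation repeatedly in one JVM and compare early
// vs. late runs; the first runs are typically the slowest, before JIT
// compilation kicks in.
public class JitWarmup {
    static long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    static long timeRun(int n) {
        long start = System.nanoTime();
        fib(n);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.printf("run %d: %d ns%n", i, timeRun(28));
        }
    }
}
```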
Monte Carlo Simulations
One way to approximate π is through a simple Monte Carlo simulation that inscribes a circle into a square, generates p uniformly distributed random points in the square, and counts the m points that lie in the circle. So, we can approximate π = 4m/p. We ran this algorithm on GAE.
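The estimator itself is a few lines. This is the plain algorithm from the text (here using the unit quarter circle), not the authors' GAE WorkJob implementation:

```java
// Monte Carlo estimate of pi: draw p uniform points in the unit square,
// count the m that fall inside the inscribed quarter circle, return 4m/p.
import java.util.Random;

public class MonteCarloPi {
    static double estimate(long points, long seed) {
        Random rng = new Random(seed);
        long inside = 0;
        for (long i = 0; i < points; i++) {
            double x = rng.nextDouble(), y = rng.nextDouble();
            if (x * x + y * y <= 1.0) inside++;   // point lies in the circle
        }
        return 4.0 * inside / points;
    }

    public static void main(String[] args) {
        System.out.println(estimate(1_000_000, 42L)); // ~3.14
    }
}
```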
Obtaining consistent measurements from GAE is difficult for two reasons. First, the programmer has no control over the slave instances. Second, two identical consecutive requests to the same Web application could execute on completely different hardware in different locations. To minimize the
Figure 3. Resource-provisioning overhead didn't increase linearly with the payload because TCP achieved higher bandwidth for larger payloads.
Figure 4. Computation time and the mapping of requests to instances. The first two requests in each instance took considerably longer than the rest. After just-in-time compilation, the code executed almost four times faster.
bias, we repeated all experiments 10 times,eliminated outliers, and averaged all runs.
Running the simulations. We conducted a warm-up phase for each application to determine the queue size and eliminate JIT compilation's effects. We executed the π calculation algorithm first sequentially and then with an increasing number of parallel jobs by generating a corresponding number of WorkJobs in the JobFactory work pool. We chose a problem of 220 million random points, which produced a sequential execution time slightly below the 30-second limit.
For each experiment, we measured and analyzed two metrics. The first was computation time, which represented the average execution time of run(). The second was the average overhead, which represented the difference between the total execution time and the computation time (especially due to request latencies).
Results. Figure 5 shows that serial execution on GAE was about two times slower than on Karwendel, owing to a slower random-number-generation routine in GAE's standard math library.3 On Karwendel, transferring jobs and results incurred almost no overhead, owing to the fast local network between the master and the slaves. So, the average computation time and total execution time were almost identical until eight parallel jobs (Karwendel has eight cores). Until that point, almost linear speedup occurred. Using more than eight parallel jobs generated a load imbalance that deteriorated speedup because two jobs had to share one physical core.
GAE exhibited a constant data transfer and total overhead of approximately 700 milliseconds in both cases, which explains its lower speedup. The random background load on GAE servers or on the Internet caused the slight irregularities in execution time for different machine sizes.
This classic scalability analysis method didn't favor GAE because the 30-second limit let us execute only relatively small problems (in which Amdahl's law limits scalability). To eliminate this barrier and evaluate GAE's potential for computing larger problems, we used Gustafson's law4 to increase the problem size proportionally to the machine size.
We observed the impact on the execution time (which should stay constant for an ideal speedup). We distributed the jobs to 10 GAE slave applications instead of one to gain sufficient quotas (in minutes).
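Gustafson's law behind this setup can be stated compactly: with serial fraction s, the scaled speedup on P workers is S(P) = P - s(P - 1), so execution time stays flat when the work per worker is held constant. A minimal sketch, with illustrative numbers only:

```java
// Gustafson's scaled speedup: S(P) = P - s*(P - 1), where s is the
// fraction of the (scaled) workload that remains serial.
public class Gustafson {
    static double scaledSpeedup(int p, double serialFraction) {
        return p - serialFraction * (p - 1);
    }

    public static void main(String[] args) {
        // With a 5% serial fraction, 10 workers give a scaled speedup
        // of about 9.55; with no serial part, speedup is ideal.
        System.out.println(scaledSpeedup(10, 0.05));
        System.out.println(scaledSpeedup(8, 0.0));
    }
}
```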
In this case, we started with an initial problem of 180 million random points to avoid exceeding the 30-second limit. (For a larger number of jobs, GAE can't provide more resources and starts denying connections.) Again, Karwendel had a constant execution time until eight parallel jobs (see Figure 6), demonstrating our framework's good scalability.
Starting with nine parallel jobs, the execution time steadily increased proportionally to the problem size. GAE showed similarly good scalability until 10 parallel jobs. Starting additional parallel jobs slightly increased the execution time. The overhead of aborted requests (owing to quotas being reached) caused most irregularities.
For more than 17 parallel jobs, GAE had a lower execution time than Karwendel owing to Google's larger hardware infrastructure.
Cost Analysis
Although we conducted all our experiments within the free daily quotas that Google offered, it was still important to estimate cost to understand the price of executing our applications in real life. So, alongside the π approximation, we implemented three algorithms with different computation and communication complexity (see Table 1):
matrix multiplication, based on row-wise distribution of the first matrix and full broadcast of the second;
Figure 5. Results for calculating π on (a) Google App Engine (GAE) and (b) Karwendel, the local machine. Serial execution on GAE was about two times slower than on Karwendel, owing to a slower random-number-generation routine in GAE's standard math library.3
Figure 6. Scalability results for GAE and Karwendel for proportionally increasing machine and problem sizes. Karwendel had a constant execution time until eight parallel jobs, demonstrating our framework's good scalability.
Mandelbrot set generation, based on the escape time algorithm; and

rank sort, based on each array element's separate rank computation. This could potentially outperform other faster sequential algorithms.

We ran the experiments 100 times in sequence for each problem size and analyzed the cost of the three most limiting resources: CPU time, incoming data, and outgoing data, which we obtained through the Google application administration interface. We used the Google prices as of 10 January 2011: US$0.12 per outgoing Gbyte, $0.10 per incoming Gbyte, and $0.10 per CPU hour. We didn't analyze the data store quota because the overall CPU hours include its usage.

As we expected, π approximation was the most computationally intensive and had almost no data-transfer cost. Surprisingly, rank sort consumed little bandwidth compared to CPU time, even though the full unsorted array had to transfer to the slaves and the rank of each element had to transfer back to the master. The Mandelbrot set generator was clearly dominated by the amount of image data that must transfer to the master.

For π approximation, we generally could sample approximately 1.29 × 10⁹ random points for US$1 because the algorithm has linear computational effort. For the other algorithms, a precise estimation is more difficult because resource consumption doesn't increase linearly with the problem size. Nevertheless, we can use the resource complexity listed in Table 1 to roughly approximate the cost to execute new problem sizes.
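The cost arithmetic can be checked directly from the published 10 January 2011 prices ($0.12 per outgoing Gbyte, $0.10 per incoming Gbyte, $0.10 per CPU hour); plugging in Table 1's rows recovers its GAE figures:

```java
// Reproducing the article's GAE cost model from the quoted prices.
public class GaeCost {
    static final double OUT_PER_GB = 0.12, IN_PER_GB = 0.10, CPU_PER_HOUR = 0.10;

    static double cost(double outGb, double inGb, double cpuHours) {
        return outGb * OUT_PER_GB + inGb * IN_PER_GB + cpuHours * CPU_PER_HOUR;
    }

    public static void main(String[] args) {
        // Table 1 rows: pi approximation and matrix multiplication.
        System.out.printf("pi approximation: $%.3f%n", cost(0, 0, 1.7));        // $0.170
        System.out.printf("matrix multiply:  $%.3f%n", cost(0.85, 0.75, 1.15)); // $0.292
    }
}
```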
Finally, we estimated the cost to run the same experiments on the Amazon EC2 infrastructure using EC2's m1.small instances, which have a computational performance of one EC2 compute unit. This is equivalent to a 1.2-GHz Xeon or Opteron processor, which is similar to GAE and enables a direct comparison. We packaged the implemented algorithms into Xen-based virtual machines deployed and booted on m1.small instances. Table 1 shows that the computation costs were lower for GAE, owing mostly to the cycle-based payments as opposed to EC2's hourly billing intervals.
Google recently announced a change in its pricing model that will replace CPU
Related Work in Cloud Performance
Analysis of four commercial infrastructure-as-a-service-based clouds for scientific computing showed that cloud performance is lower than that of traditional scientific computing.1 However, the analysis indicated that cloud computing might be a viable alternative for scientists who need resources instantly and temporarily.
Alexandru Iosup and his colleagues examined the long-term performance variability of Google App Engine (GAE) and Amazon Elastic Compute Cloud (EC2).2 The results showed yearly and daily patterns, as well as periods of stable performance. The researchers concluded that GAE's and EC2's performance varied among different large-scale applications.
Christian Vecchiola and his colleagues analyzed different cloud providers from the perspective of high-performance computing applications, emphasizing the Aneka platform-as-a-service (PaaS) framework.3 Aneka requires a third-party deployment cloud platform and doesn't support GAE.
Windows Azure is a PaaS provider comparable to GAE but better suited for scientific problems. Jie Li and colleagues compared its performance to that of a desktop computer but performed no cost analysis.4
MapReduce frameworks offer a different approach to cloud computation.5,6 MapReduce is an orthogonal application class5 that targets large-data processing.7 It's less suited for computationally intensive parallel algorithms8 (for example, those operating on small datasets). Furthermore, it doesn't support the implementation of more complex applications, such as recursive and nonlinear problems or scientific workflows.
References
1. A. Iosup et al., "Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing," IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 6, 2011, pp. 931-945.
2. A. Iosup, N. Yigitbasi, and D. Epema, "On the Performance Variability of Production Cloud Services," Proc. 11th IEEE/ACM Int'l Symp. Cluster, Cloud, and Grid Computing (CCGrid 11), IEEE CS, 2011, pp. 104-113.
3. C. Vecchiola, S. Pandey, and R. Buyya, "High-Performance Cloud Computing: A View of Scientific Applications," Proc. 10th Int'l Symp. Pervasive Systems, Algorithms, and Networks (ISPAN 09), IEEE CS, 2009, pp. 4-16.
4. J. Li et al., "eScience in the Cloud: A Modis Satellite Data Reprojection and Reduction Pipeline in the Windows Azure Platform," Proc. 2010 Int'l Symp. Parallel & Distributed Processing (IPDPS 10), IEEE CS, 2010, pp. 1-10.
5. C. Bunch, B. Drawert, and M. Norman, "MapScale: A Cloud Environment for Scientific Computing," tech. report, Computer Science Dept., Univ. of California, Santa Barbara, 2009; www.cs.ucsb.edu/~cgb/papers/mapscale.pdf.
6. J. Qiu et al., "Hybrid Cloud and Cluster Computing Paradigms for Life Science Applications," BMC Bioinformatics, vol. 11, supplement 12, 2010, S3; www.biomedcentral.com/content/pdf/1471-2105-11-s12-s3.pdf.
7. J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Comm. ACM, vol. 51, no. 1, 2008, pp. 107-113.
8. J. Ekanayake and G. Fox, "High Performance Parallel Computing with Clouds and Cloud Technologies," Cloud Computing and Software Services: Theory and Techniques, S.A. Ahson and M. Ilyas, eds., CRC Press, 2010.
cycles with a new instance-hours unit. The unit is equivalent to one application instance running for one hour and will cost $0.08. In addition, Google will charge $9 a month for every application. The new model will primarily hurt Web applications that trigger additional instances upon sparse request peaks and afterward remain idle.

Table 1 gives a rough cost estimation assuming 15 parallel tasks and an instance utilization of 80 percent for useful computation. The results demonstrate that the new pricing model favors CPU-intensive applications that try to fully utilize all available instances. In addition, we can expect free resources to last longer with the new pricing model.
We plan to investigate the suitability of new application classes, such as scientific workflow applications, to be implemented on top of our generic framework and run on GAE with improved performance. For a look at other research on cloud computing performance, see the "Related Work in Cloud Performance" sidebar.
Acknowledgments
Austrian Science Fund project TRP 72-N23 and the Standortagentur Tirol project RainCloud funded this research.
References
1. D. Sanderson, Programming Google App Engine, O'Reilly Media, 2009.
2. A. Iosup et al., "Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing," IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 6, 2011, pp. 931-945.
3. M. Sperk, "Scientific Computing in the Cloud with Google App Engine," master's thesis, Faculty of Mathematics, Computer Science, and Physics, Univ. of Innsbruck, 2011; http://dps.uibk.ac.at/~radu/sperk.pdf.
4. J.L. Gustafson, "Reevaluating Amdahl's Law," Comm. ACM, vol. 31, no. 5, 1988, pp. 532-533.
Radu Prodan is an associate professor at the University of Innsbruck's Institute of Computer Science. His research interests include programming methods, compiler technology, performance analysis, and scheduling for parallel and distributed systems. Prodan has a PhD in technical sciences from the Vienna University of Technology. Contact him at [email protected].
Michael Sperk is a PhD student at the University of Innsbruck. His research interests include distributed and parallel computing. Sperk has an MSc in computer science from the University of Innsbruck. Contact him at
Simon Ostermann is a PhD student at the University of Innsbruck's Institute of Computer Science. His research interests include resource management and scheduling for grid and cloud computing. Ostermann has an MSc in computer science from the University of Innsbruck. Contact him at [email protected].
This article originally appeared in IEEE Software, March/April 2012; http://doi.ieeecomputersociety.org/10.1109/MS.2011.131.
Table 1. Resource consumption and the estimated cost for four algorithms.

Algorithm | Problem size (points) | Outgoing data (Gbytes / complexity) | Incoming data (Gbytes / complexity) | CPU time (hrs. / complexity) | Cost, GAE (US$) | Cost, new GAE (US$) | Cost, Amazon EC2 (US$)
π approximation | 220,000,000 | 0 / O(1) | 0 / O(1) | 1.7 / O(n) | 0.170 | 0.078 | 0.190
Matrix multiplication | 1,500 × 1,500 | 0.85 / O(n²) | 0.75 / O(n²) | 1.15 / O(n²) | 0.292 | 0.203 | 0.440
Mandelbrot set | 3,200 × 3,200 | 0.95 / O(n²) | 0 / O(1) | 0.15 / O(n²) | 0.129 | 0.066 | 0.440
Rank sort | 70,000 | 0.02 / O(n²) | 0.01 / O(n) | 1.16 / O(n²) | 0.119 | 0.120 | 0.245
Understanding Cloud Computing Vulnerabilities
Bernd Grobauer, Tobias Walloschek, and Elmar Stöcker, Siemens

Discussions about cloud computing security often fail to distinguish general issues from cloud-specific issues. To clarify the discussions regarding vulnerabilities, the authors define indicators based on sound definitions of risk factors and cloud computing.

Each day, a fresh news item, blog entry, or other publication warns us about cloud computing's security risks and threats; in most cases, security is cited as the most substantial roadblock for cloud computing uptake. But this discourse about cloud computing security issues makes it difficult to formulate a well-founded assessment of the actual security impact for two key reasons. First, in many of these discussions about risk, basic vocabulary terms (including risk, threat, and vulnerability) are often used interchangeably, without regard to their respective definitions. Second, not every issue raised is specific to cloud computing.

To achieve a well-founded understanding of the delta that cloud computing adds with respect to security issues, we must analyze how cloud computing influences established security issues. A key factor here is security vulnerabilities: cloud computing makes certain well-understood vulnerabilities more significant as well as adds new ones to the mix. Before we take a closer look at cloud-specific vulnerabilities, however, we must first establish what a vulnerability really is.
Vulnerability: An Overview
Vulnerability is a prominent factor of risk. ISO 27005 defines risk as "the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization," measuring it in terms of both the likelihood of an event and its consequence.1 The Open Group's risk taxonomy (www.opengroup.org/onlinepubs/9699919899/toc.pdf) offers a useful overview of risk factors (see Figure 1).
The Open Group's taxonomy uses the same two top-level risk factors as ISO 27005: the likelihood of a harmful event (here, loss event frequency) and its consequence (here, probable loss magnitude).1 The probable loss magnitude's subfactors (on the right in Figure 1) influence a harmful event's ultimate cost. The loss event frequency subfactors (on the left) are a bit more complicated. A loss event occurs when a threat agent (such as a hacker) successfully exploits a vulnerability. The frequency with which this happens depends on two factors:

The frequency with which threat agents try to exploit a vulnerability. This frequency is determined by both the agents' motivation (What can they gain with an attack? How much effort does it take? What is the risk for the attackers?) and how much access (contact) the agents have to the attack targets.

The difference between the threat agents' attack capabilities and the system's strength to resist the attack.
This second factor brings us toward a useful definition of vulnerability.

Defining Vulnerability
According to the Open Group's risk taxonomy,
Vulnerability is the probability that an asset will be unable to resist the actions of a threat agent. Vulnerability exists when there is a difference between the force being applied by the threat agent, and an object's ability to resist that force.
So, vulnerability must always be described in terms of resistance to a certain type of attack. To provide a real-world example, a car's inability to protect its driver against injury when hit frontally by a truck driving at 60 mph is a vulnerability; the resistance of the car's crumple zone is simply too weak compared to the truck's force. Against the attack of a biker, or even a small car driving at a more moderate speed, the car's resistance strength is perfectly adequate.
We can also describe computer vulnerability (that is, security-related bugs that you close with vendor-provided patches) as a weakening or removal of a certain resistance strength. A buffer-overflow vulnerability, for example, weakens the system's resistance to arbitrary code execution. Whether attackers can exploit this vulnerability depends on their capabilities.
Vulnerabilities and Cloud Risk
We'll now examine how cloud computing influences the risk factors in Figure 1, starting with the right-hand side of the risk factor tree.

From a cloud customer perspective, the right-hand side dealing with probable magnitude of future loss isn't changed at all by cloud computing: the consequences and ultimate cost of, say, a confidentiality breach are exactly the same regardless of whether the data breach occurred within a cloud or a conventional IT infrastructure. For a cloud service provider, things look somewhat different: because cloud computing systems that were previously separated now share the same infrastructure, a loss event could entail a considerably larger impact. But this fact is easily grasped and incorporated into a risk assessment: no conceptual work for adapting impact analysis to cloud computing seems necessary.
So, we must search for changes on Figure 1's left-hand side, the loss event frequency. Cloud computing could change the probability of a harmful event's occurrence. As we show later, cloud computing causes significant changes in the vulnerability factor.

Of course, moving to a cloud infrastructure might change the attackers' access level and motivation, as well as the effort and risk, a fact that must be considered as future work. But, for supporting a cloud-specific risk assessment, it seems most profitable to start by examining the exact nature of cloud-specific vulnerabilities.
Cloud Computing
Is there such a thing as a cloud-specific vulnerability? If so, certain factors in cloud computing's nature must make a vulnerability cloud-specific.

Essentially, cloud computing combines known technologies (such as virtualization) in ingenious ways to provide IT services "from the conveyor belt" using economies of scale. We'll now look closer at what the core technologies are and which characteristics of their use in cloud computing are essential.
Core Cloud Computing Technologies
Cloud computing builds heavily on capabilities available through several core technologies:

Web applications and services. Software as a service (SaaS) and platform as a service (PaaS) are unthinkable without Web application and Web services technologies: SaaS offerings are typically implemented as Web applications, while PaaS offerings provide development and runtime environments for Web applications and services. For infrastructure as a service (IaaS) offerings, administrators typically implement associated services and APIs, such as the management access for customers, using Web application/service technologies.

Virtualization. IaaS offerings have virtualization techniques at their very heart; because PaaS and SaaS services are usually built on top of a supporting IaaS infrastructure, the importance of virtualization also extends to these service models. In the future, we expect virtualization to develop from virtualized servers toward computational resources that can be used more readily for executing SaaS services.

Cryptography. Many cloud computing security requirements are solvable only by using cryptographic techniques.

As cloud computing develops, the list of core technologies is likely to expand.
Figure 1. Factors contributing to risk according to the Open Group's risk taxonomy. Risk corresponds to the product of loss event frequency (left) and probable loss magnitude (right). Vulnerabilities influence the loss event frequency.
Essential Characteristics
In its description of essential cloud characteristics,2 the US National Institute of Standards and Technology (NIST) captures well what it means to provide IT services from the conveyor belt using economies of scale:

On-demand self-service. Users can order and manage services without human interaction with the service provider, using, for example, a Web portal and management interface. Provisioning and de-provisioning of services and associated resources occur automatically at the provider.

Ubiquitous network access. Cloud services are accessed via the network (usually the Internet), using standard mechanisms and protocols.

Resource pooling. Computing resources used to provide the cloud service are realized using a homogeneous infrastructure that's shared between all service users.

Rapid elasticity. Resources can be scaled up and down rapidly and elastically.

Measured service. Resource/service usage is constantly metered, supporting optimization of resource usage, usage reporting to the customer, and pay-as-you-go business models.

NIST's definition framework for cloud computing, with its list of essential characteristics, has by now evolved into the de facto standard for defining cloud computing.
Cloud-Specific Vulnerabilities
Based on the abstract view of cloud computing we presented earlier, we can now move toward a definition of what constitutes a cloud-specific vulnerability. A vulnerability is cloud specific if it

is intrinsic to or prevalent in a core cloud computing technology,
has its root cause in one of NIST's essential cloud characteristics,
is caused when cloud innovations make tried-and-tested security controls difficult or impossible to implement, or
is prevalent in established state-of-the-art cloud offerings.

We now examine each of these four indicators.
Core-Technology Vulnerabilities
Cloud computing's core technologies (Web applications and services, virtualization, and cryptography) have vulnerabilities that are either intrinsic to the technology or prevalent in the technology's state-of-the-art implementations. Three examples of such vulnerabilities are virtual machine escape, session riding and hijacking, and insecure or obsolete cryptography.

First, the possibility that an attacker might successfully escape from a virtualized environment lies in virtualization's very nature. Hence, we must consider this vulnerability as intrinsic to virtualization and highly relevant to cloud computing.
Second, Web application technologies must overcome the problem that, by design, the HTTP protocol is a stateless protocol, whereas Web applications require some notion of session state. Many techniques implement session handling and, as any security professional knowledgeable in Web application security will testify, many session handling implementations are vulnerable to session riding and session hijacking. Whether session riding/hijacking vulnerabilities are intrinsic to Web application technologies or are only prevalent in many current implementations is arguable; in any case, such vulnerabilities are certainly relevant for cloud computing.
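Two of the usual mitigations can be sketched in a few lines (an illustration, not the article's example): session identifiers must come from a cryptographically secure source so they can't be guessed, and comparisons must run in constant time so they don't leak information. The function names are hypothetical.

```python
import hmac
import secrets

def new_session_token() -> str:
    # 32 bytes from a CSPRNG: infeasible to guess or enumerate.
    return secrets.token_urlsafe(32)

def token_matches(stored: str, presented: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(stored, presented)
```

Session riding (cross-site request forgery) additionally requires per-request anti-CSRF tokens; Web frameworks typically provide these.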
Finally, cryptanalysis advances can render any cryptographic mechanism or algorithm insecure as novel methods of breaking them are discovered. It's even more common to find crucial flaws in cryptographic algorithm implementations, which can turn strong encryption into weak encryption (or sometimes no encryption at all). Because broad uptake of cloud computing is unthinkable without the use of cryptography to protect data confidentiality and integrity in the cloud, insecure or obsolete cryptography vulnerabilities are highly relevant for cloud computing.
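The gap between an obsolete construction and a modern one can be made concrete (a minimal sketch under assumed key handling, not taken from the article): an unkeyed digest such as MD5 gives no integrity against an active attacker, who can simply recompute it after tampering, whereas a keyed HMAC over a modern hash cannot be recomputed without the secret.

```python
import hashlib
import hmac

def weak_tag(data: bytes) -> str:
    # Obsolete: anyone can recompute this after modifying the data.
    return hashlib.md5(data).hexdigest()

def strong_tag(key: bytes, data: bytes) -> str:
    # HMAC-SHA-256: recomputation requires the secret key.
    return hmac.new(key, data, hashlib.sha256).hexdigest()
```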
Essential Cloud Characteristic Vulnerabilities
As we noted earlier, NIST describes five essential cloud characteristics: on-demand self-service, ubiquitous network access, resource pooling, rapid elasticity, and measured service. Following are examples of vulnerabilities with root causes in one or more of these characteristics:
Unauthorized access to management interface. The cloud characteristic on-demand self-service requires a management interface that's accessible to cloud service users. Unauthorized access to the management interface is therefore an especially relevant vulnerability for cloud systems: the probability that unauthorized access could occur is much higher than for traditional systems where the management functionality is accessible only to a few administrators.
Internet protocol vulnerabilities. The cloud characteristic ubiquitous network access means that cloud services are accessed via network using standard protocols. In most cases, this network is the Internet, which must be considered untrusted. Internet protocol vulnerabilities, such as vulnerabilities that allow man-in-the-middle attacks, are therefore relevant for cloud computing.
Data recovery vulnerability. The cloud characteristics of pooling and elasticity entail that resources allocated to one user will be reallocated to a different user at a later time. For memory or storage resources, it might therefore be possible to recover data written by a previous user.
Metering and billing evasion. The cloud characteristic of measured service means that any cloud service has a metering capability at an abstraction level appropriate to the service type (such as storage, processing, and active user accounts). Metering data is used to optimize service delivery as well as billing. Relevant vulnerabilities include metering and billing data manipulation and billing evasion.
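The manipulation risk in the last item suggests integrity protection for metering records. One common approach, sketched here with an assumed provider-held secret and hypothetical record fields, is to authenticate each record with an HMAC so the billing pipeline can detect tampering in transit or at rest.

```python
import hashlib
import hmac
import json

def sign_record(secret: bytes, record: dict) -> str:
    # Canonical serialization so the same record always signs identically.
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_record(secret: bytes, record: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_record(secret, record), tag)
```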
Thus, we can leverage NIST's well-founded definition of cloud computing in reasoning about cloud computing issues.
Defects in Known Security Controls
Vulnerabilities in standard security controls must be considered cloud specific if cloud innovations directly cause the difficulties in implementing the controls. Such vulnerabilities are also known as control challenges. Here, we treat three examples of such control challenges. First, virtualized networks offer
insufficient network-based controls. Given the nature of cloud services, the administrative access to IaaS network infrastructure and the ability to tailor network infrastructure are typically limited; hence, standard controls such as IP-based network zoning can't be applied. Also, standard techniques such as network-based vulnerability scanning are usually forbidden by IaaS providers because, for example, friendly scans can't be distinguished from attacker activity. Finally, technologies such as virtualization mean that network traffic occurs on both real and virtual networks, such as when two virtual machine environments (VMEs) hosted on the same server communicate. Such issues constitute a control challenge because tried-and-tested network-level security controls might not work in a given cloud environment.
The second challenge is in poor key management procedures. As noted in a recent European Network and Information Security Agency study,3 cloud computing infrastructures require management and storage of many different kinds of keys. Because virtual machines don't have a fixed hardware infrastructure and cloud-based content is often geographically distributed, it's more difficult to apply standard controls, such as hardware security module (HSM) storage of keys, on cloud infrastructures.
Finally, security metrics aren't adapted to cloud infrastructures. Currently, there are no standardized cloud-specific security metrics that cloud customers can use to monitor the security status of their cloud resources. Until such standard security metrics are developed and implemented, controls for security assessment, audit, and accountability are more difficult and costly, and might even be impossible to employ.
Prevalent Vulnerabilities in State-of-the-Art Cloud Offerings
Although cloud computing is relatively young, there are already myriad cloud offerings on the market. Hence, we can complement the three cloud-specific vulnerability indicators presented earlier with a fourth, empirical indicator: if a vulnerability is prevalent in state-of-the-art cloud offerings, it must be regarded as cloud-specific. Examples of such vulnerabilities include injection vulnerabilities and weak authentication schemes.
Injection vulnerabilities are exploited by manipulating service or application inputs to interpret and execute parts of them against the programmer's intentions. Examples of injection vulnerabilities include

SQL injection, in which the input contains SQL code that's erroneously executed in the database back end;
command injection, in which the input contains commands that are erroneously executed via the OS; and
cross-site scripting, in which the input contains JavaScript code that's erroneously executed by a victim's browser.
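The first item in the list above can be demonstrated in a few lines. This sketch uses an in-memory SQLite database with a hypothetical users table, and contrasts splicing attacker-controlled input into the SQL text with passing it as a bound parameter.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled input becomes part of the SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: input is passed as data, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
```

A payload such as `x' OR '1'='1` makes the unsafe query return every row, while the parameterized version simply finds no user with that literal name.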
In addition, many widely used authentication mechanisms are weak. For example, usernames and passwords for authentication are weak due to

insecure user behavior (choosing weak passwords, reusing passwords, and so on), and
inherent limitations of one-factor authentication mechanisms.
Also, the authentication mechanism's implementation might have weaknesses and allow, for example, credential interception and replay. The majority of Web applications in current state-of-the-art cloud services employ usernames and passwords as authentication mechanism.
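Where username/password authentication can't be avoided, salted and iterated password hashing at least limits the damage of a leaked credential store. A minimal sketch (the iteration count and salt size are illustrative assumptions, not a recommendation from the article):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real deployments tune this upward

def hash_password(password: str, salt: bytes = None) -> tuple:
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```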
Architectural Components and Vulnerabilities
Cloud service models are commonly divided into SaaS, PaaS, and IaaS, and each model influences the vulnerabilities exhibited by a given cloud infrastructure. It's helpful to add more structure to the service model stacks: Figure 2 shows a cloud reference architecture that makes the most important security-relevant cloud components explicit and provides an abstract overview of cloud computing for security issue analysis.
The reference architecture is based on work carried out at the University of California, Los Angeles, and IBM.4 It inherits the layered approach in that layers can encompass one or more service components. Here, we use service in the broad sense of providing something that might be both material (such as shelter, power, and hardware) and immaterial (such as a runtime environment). For two layers, the cloud software environment and the cloud software infrastructure, the model makes the layers' three main service components (computation, storage, and communication) explicit. Top layer services also can be implemented on layers further down the stack, in effect skipping intermediate layers. For example, a cloud Web application can be implemented and operated in the traditional way, that is, running on top of a standard OS without using dedicated cloud software infrastructure and environment components. Layering and compositionality imply that the transition from providing some service or function in-house to sourcing the service or function can take place between any of the model's layers.
In addition to the original model, we've identified supporting functions relevant to services in several layers and added them to the model as vertical spans over several horizontal layers.

Our cloud reference architecture has three main parts:
Supporting (IT) infrastructure. These are facilities and services common to any IT service, cloud or otherwise. We include them in the architecture because we want to provide the complete picture; a full treatment of IT security must account for a cloud service's non-cloud-specific components.

Cloud-specific infrastructure. These components constitute the heart of a cloud service; cloud-specific vulnerabilities and corresponding controls are typically mapped to these components.

Cloud service consumer. Again, we include the cloud service customer in the reference architecture because it's relevant to an all-encompassing security treatment.

Also, we make explicit the network that separates the cloud service consumer from the cloud infrastructure; the fact that access to cloud resources is carried out via a (usually untrusted) network is one of cloud computing's main characteristics.
Using the cloud reference architecture's structure, we can now run through the architecture's components and give examples of each component's cloud-specific vulnerabilities.
Cloud Software Infrastructure and Environment
The cloud software infrastructure layer provides an abstraction level for basic IT resources that are offered as services to higher layers: computational resources (usually VMEs), storage, and (network) communication. These services can be used individually, as is typically the case with storage services, but they're often bundled such that servers are delivered with certain network connectivity and (often) access to storage. This bundle, with or without storage, is usually referred to as IaaS.
The cloud software environment layer provides services at the application platform level:

a development and runtime environment for services and applications written in one or more supported languages;
storage services (a database interface rather than file share); and
communication infrastructure, such as Microsoft's Azure service bus.
Vulnerabilities in both the infrastructure and environment layers are usually specific to one of the three resource types provided by these two layers. However, cross-tenant access vulnerabilities are relevant for all three resource types. The virtual machine escape vulnerability we described earlier is a prime example. We used it to demonstrate a vulnerability that's intrinsic to the core virtualization technology, but it can also be seen as having its root cause in the essential characteristic of resource pooling: whenever resources are pooled, unauthorized access across resources becomes an issue. Hence, for PaaS, where the technology to separate different tenants (and tenant services) isn't necessarily based on virtualization (although that will be increasingly true), cross-tenant access vulnerabilities play an important role as well. Similarly, cloud storage is prone to cross-tenant storage access, and cloud communication, in the form of virtual networking, is prone to cross-tenant network access.
Computational Resources
A highly relevant set of computational resource vulnerabilities concerns how virtual machine images are handled: the only feasible way of providing nearly identical server images, thus providing on-demand service for virtual servers, is by cloning template images.
Vulnerable virtual machine template images cause OS or application vulnerabilities to spread over many systems. An attacker might be able to analyze configuration, patch level, and code in detail using administrative rights by renting a virtual server as a service customer and thereby gaining knowledge helpful in attacking other customers' images. A related problem is that an image can be taken from an untrustworthy source, a new phenomenon brought on especially by the emerging marketplace of virtual images for IaaS services. In this case, an image might, for example, have been manipulated so as to provide back-door access for an attacker.

Data leakage by virtual machine replication is a vulnerability that's also rooted in the use of cloning for providing on-demand service. Cloning leads to data leakage problems regarding machine secrets: certain elements of an OS, such as host keys and cryptographic salt values, are meant to be private to a single host. Cloning can violate this privacy assumption. Again, the emerging marketplace for virtual machine images, as in Amazon EC2, leads to a related problem: users can provide template images for other users by turning a running image into a template. Depending on how the image was used before creating a template from it, it could contain data that the user doesn't wish to make public.
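One mitigation implied by this paragraph is scrubbing per-host secrets before publishing a running image as a template. The sketch below is purely illustrative: the path list is an assumption and far from exhaustive, and a real pipeline would operate on the mounted image, not on a list of names.

```python
import fnmatch

# Hypothetical patterns for files that hold machine or user secrets.
SECRET_PATTERNS = [
    "/etc/ssh/ssh_host_*_key",      # host identity keys
    "/root/.bash_history",          # operator activity
    "/home/*/.ssh/authorized_keys", # user credentials
]

def scrub_plan(files_in_image: list) -> list:
    # Return the files that must be removed before template creation.
    return [f for f in files_in_image
            if any(fnmatch.fnmatch(f, p) for p in SECRET_PATTERNS)]
```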
There are also control challenges here, including those related to cryptography use. Cryptographic vulnerabilities due to weak random number generation might exist if the abstraction layer between the hardware and OS kernel introduced by virtualization is problematic for generating random numbers within a VME. Such generation requires an entropy source on the hardware level. Virtualization might have flawed mechanisms for tapping that entropy source, or having several VMEs on the same host might exhaust the available entropy, leading to weak random number generation. As we noted earlier, this abstraction layer also complicates the use of advanced security controls, such as hardware security modules, possibly leading to poor key management procedures.
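The contrast the paragraph draws, between a deterministic generator starved of entropy and a proper CSPRNG interface, can be sketched as follows. This is illustrative only: inside a VME, even OS-level sources ultimately depend on how the hypervisor exposes entropy.

```python
import random
import secrets

def key_from_weak_prng(seed: int) -> bytes:
    # Deterministic given the seed: an attacker who can narrow down the
    # seed (e.g., boot time on a cloned image) can reproduce the "key".
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(16))

def key_from_csprng() -> bytes:
    # Draws from the OS entropy interface, designed for security use.
    return secrets.token_bytes(16)
```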
Figure 2. The cloud reference architecture. We map cloud-specific vulnerabilities to components of this reference architecture, which gives us an overview of which vulnerabilities might be relevant for a given cloud service.

Storage
In addition to the data recovery vulnerability due to resource pooling and elasticity, there's a related control challenge in media sanitization, which is often hard or impossible to implement in a cloud context. For example, data destruction policies applicable at the end of a life cycle that require physical disk destruction can't be carried out if a disk is still being used by another tenant.
Because cryptography is frequently used to overcome storage-related vulnerabilities, this core technology's vulnerabilities (insecure or obsolete cryptography and poor key management) play a special role for cloud storage.
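One storage control built on this observation is sometimes called crypto-shredding: keep tenant data only in encrypted form, so that destroying the key renders residual copies unrecoverable even where media sanitization can't be performed. The sketch below uses a hash-based keystream purely to illustrate the idea; a real system would use authenticated encryption and HSM-backed key storage, and the function names are hypothetical.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy counter-mode keystream derived from SHA-256 (illustration only).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Once the key is erased, a later tenant who recovers the raw blocks sees only ciphertext.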
Communication
The most prominent example of a cloud communications service is the networking provided for VMEs in an IaaS environment. Because of resource pooling, several customers are likely to share certain network infrastructure components: vulnerabilities of shared network infrastructure components, such as vulnerabilities in a DNS server, Dynamic Host Configuration Protocol, and IP protocol vulnerabilities, might enable network-based cross-tenant attacks in an IaaS infrastructure.
Virtualized networking also presents a control challenge: again, in cloud services, the administrative access to IaaS network infrastructure and the possibility for tailoring network infrastructure are usually limited. Also, using technologies such as virtualization leads to a situation where network traffic occurs not only on real networks but also within virtualized networks (such as for communication between two VMEs hosted on the same server); most implementations of virtual networking offer limited possibilities for integrating network-based security. All in all, this constitutes a control challenge of insufficient network-based controls because tried-and-tested network-level security controls might not work in a given cloud environment.
Cloud Web Applications
A Web application uses browser technology as the front end for user interaction. With the increased uptake of browser-based computing technologies such as JavaScript, Java, Flash, and Silverlight, a Web cloud application falls into two parts:

an application component operated somewhere in the cloud, and
a browser component running within the user's browser.
In the future, developers will increasingly use technologies such as Google Gears to permit offline usage of a Web application's browser component for use cases that don't require constant access to remote data. We've already described two typical vulnerabilities for Web application technologies: session riding and hijacking vulnerabilities and injection vulnerabilities.
Other Web-application-specific vulnerabilities concern the browser's front-end component. Among them are client-side data manipulation vulnerabilities, in which users attack Web applications by manipulating data sent from their application component to the server's application component. In other words, the input received by the server component isn't the expected input sent by the client-side component, but altered or completely user-generated input. Furthermore, Web applications also rely on browser mechanisms for isolating third-party content embedded in the application (such as advertisements, mashup components, and so on). Browser isolation vulnerabilities might thus allow third-party content to manipulate the Web application.
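The defensive rule this implies is that server components must treat every client-supplied field as untrusted and re-derive security-relevant values themselves. A minimal sketch with hypothetical field names and catalog data:

```python
# Authoritative server-side prices, in cents (hypothetical data).
CATALOG = {"sku-1": 999}

def checkout(request: dict) -> int:
    sku = request.get("sku")
    if sku not in CATALOG:
        raise ValueError("unknown item")
    # Ignore any client-supplied "price"; use only the server-side value.
    return CATALOG[sku]
```

Even if the browser component is manipulated to send `price: 1`, the server charges the catalog price.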
Services and APIs
It might seem obvious that all layers of the cloud infrastructure offer services, but for examining cloud infrastructure security, it's worthwhile to explicitly think about all of the infrastructure's service and application programming interfaces. Most services are likely Web services, which share many vulnerabilities with Web applications. Indeed, the Web application layer might be realized completely by one or more Web services such that the application URL would only give the user a browser component. Thus the supporting services and API functions share many vulnerabilities with the Web applications layer.
Management Access
NIST's definition of cloud computing states that one of cloud services' central characteristics is that they can be rapidly provisioned and released with minimal management effort or service provider interaction. Consequently, a common element of each cloud service is a management interface, which leads directly to the vulnerability concerning unauthorized access to the management interface. Furthermore, because management access is often realized using a Web application or service, it often shares the vulnerabilities of the Web application layer and services/API component.
Identity, Authentication, Authorization, and Auditing Mechanisms
All cloud services (and each cloud service's management interface) require mechanisms for identity management, authentication, authorization, and auditing (IAAA). To a certain extent, parts of these mechanisms might be factored out as a stand-alone IAAA service to be used by other services. Two IAAA elements that must be part of each service implementation are execution of adequate authorization checks (which, of course, use authentication and/or authorization information received from an IAA service) and cloud infrastructure auditing.
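An adequate authorization check, the first of these two elements, amounts to verifying that the authenticated principal is entitled to the specific object requested, not merely that the object exists. A sketch with hypothetical data; denying missing and foreign objects identically avoids leaking which IDs are valid:

```python
# Hypothetical resource store mapping order IDs to owners.
ORDERS = {
    101: {"owner": "alice", "total": 30},
    102: {"owner": "bob", "total": 99},
}

def get_order(session_user: str, order_id: int) -> dict:
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != session_user:
        # Same error for "doesn't exist" and "not yours".
        raise PermissionError("not found")
    return order
```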
Most vulnerabilities associated with the IAAA component must be regarded as cloud-specific because they're prevalent in state-of-the-art cloud offerings. Earlier, we gave the example of weak user authentication mechanisms; other examples include
Denial of service by account lockout. One often-used security control, especially for authentication with username and password, is to lock out accounts that have received several unsuccessful authentication attempts in quick succession. Attackers can use such attempts to launch DoS attacks against a user.
Weak credential-reset mechanisms. When cloud computing providers manage user credentials themselves rather than using federated authentication, they must provide a mechanism for resetting credentials in the case of forgotten or lost credentials. In the past, password-recovery mechanisms have proven particularly weak.
Insufficient or faulty authorization checks. State-of-the-art Web application and service cloud offerings are often vulnerable to insufficient or faulty authorization
checks that can make unauthorized information or actions available to users. Missing authorization checks, for example, are the root cause of URL-guessing attacks. In such attacks, users modify URLs to display information of other user accounts.

Coarse authorization control. Cloud services' management interfaces are particularly prone to offering authorization control models that are too coarse. Thus, standard security measures,