Cloud computing


Elysium Technologies Private Limited
ISO 9001:2008 | A leading Research and Development Division
Madurai | Chennai | Trichy | Coimbatore | Kollam | Singapore
Website: elysiumtechnologies.com, elysiumtechnologies.info
Email: [email protected]

IEEE Final Year Project List 2011-2012

Madurai
Elysium Technologies Private Limited
230, Church Road, Annanagar,
Madurai, Tamilnadu – 625 020.
Contact: 91452 4390702, 4392702, 4394702.
eMail: [email protected]

Trichy
Elysium Technologies Private Limited
3rd Floor, SI Towers,
15, Melapudur, Trichy,
Tamilnadu – 620 001.
Contact: 91431 - 4002234.
eMail: [email protected]

Kollam
Elysium Technologies Private Limited
Surya Complex, Vendor Junction,
Kollam, Kerala – 691 010.
Contact: 91474 2723622.
eMail: [email protected]

Abstract

CLOUD COMPUTING 2011-2012

01 A Hybrid Shared-nothing/Shared-data Storage Architecture for Large Scale Databases

Shared-nothing and shared-disk are two widely used storage architectures in current parallel database systems, and each has its own merits for different query patterns. However, little effort has been made to integrate these two architectures and exploit their merits together. In this study, we propose a novel hybrid shared-nothing/shared-data storage scheme for large-scale databases that leverages the benefits of both shared-nothing and shared-disk architectures. We adopt a shared-nothing architecture as the hardware layer and leverage a parallel file system as the storage layer. The proposed hybrid storage scheme can provide a high degree of parallelism in both I/O and computing, as in a shared-nothing system, while achieving convenient, high-speed data sharing across multiple database nodes, as in a shared-disk system. The hybrid scheme is therefore more appropriate for large-scale, data-intensive applications than either of the two individual types of systems.

02 A performance goal oriented processor allocation technique for centralized heterogeneous multi-cluster environments

This paper proposes a processor allocation technique named temporal look-ahead processor allocation (TLPA), which makes allocation decisions by evaluating their effects on subsequent jobs in the waiting queue. TLPA has two strengths. First, it takes multiple performance factors into account when making allocation decisions. Second, it can be used to optimize different performance metrics. To evaluate TLPA, we compare it with the best-fit and fastest-first algorithms. Simulation results show that TLPA achieves up to a 32.75% improvement over conventional processor allocation algorithms in terms of average turnaround time across various system configurations.
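The look-ahead idea above can be sketched in a few lines. The job model, the greedy treatment of the rest of the queue, and the turnaround metric below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of temporal look-ahead allocation (TLPA):
# before committing the head job to a cluster, evaluate how each
# candidate choice affects the jobs waiting behind it.

def finish_time(job, cluster, start):
    # runtime shrinks with cluster speed; 'work' is total CPU work
    return start + job["work"] / cluster["speed"]

def avg_turnaround(queue, clusters, assignment):
    """Average turnaround if the head job takes 'assignment'; the
    remaining jobs greedily take the earliest-finishing cluster."""
    free_at = {c["name"]: 0.0 for c in clusters}
    head, rest = queue[0], queue[1:]
    free_at[assignment["name"]] = finish_time(head, assignment, 0.0)
    total = free_at[assignment["name"]]
    for job in rest:
        best = min(clusters, key=lambda c: finish_time(job, c, free_at[c["name"]]))
        free_at[best["name"]] = finish_time(job, best, free_at[best["name"]])
        total += free_at[best["name"]]
    return total / len(queue)

def tlpa_choose(queue, clusters):
    # pick the allocation for the head job that minimizes the metric
    return min(clusters, key=lambda c: avg_turnaround(queue, clusters, c))

clusters = [{"name": "fast", "speed": 2.0}, {"name": "slow", "speed": 1.0}]
queue = [{"work": 4.0}, {"work": 2.0}]
print(tlpa_choose(queue, clusters)["name"])
```

Swapping `avg_turnaround` for another metric (e.g. maximum slowdown) changes the optimization target without touching the look-ahead machinery, which mirrors the paper's second claimed strength.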

03 A Petri Net Approach to Analyzing Behavioral Compatibility and Similarity of Web Services

Web services have become the technology of choice for implementing service-oriented computing, where Web services can be composed in response to users' needs. It is critical to verify the compatibility of component Web services to ensure the correctness of the whole composition in which they participate. Traditionally, two conditions must be satisfied when verifying compatibility: reachable termination and proper termination. Unfortunately, verifying both conditions is complex and time consuming. To reduce this complexity, we model Web services using colored Petri nets (PNs) and examine a specific structural property, namely well-structuredness. We prove that only reachable termination needs to be satisfied when verifying behavioral compatibility among well-structured Web services. When a composition is declared valid and one of its component Web services fails at run time, an alternative service with similar behavior must come into play as a substitute. It is therefore important to develop effective approaches for analyzing the similarity of Web services. Although many existing approaches use PNs to analyze behavioral compatibility, few explore appropriate definitions of behavioral similarity or provide a user-friendly tool with automatic verification. In this paper, we introduce a formal definition of context-independent similarity and show that a Web service can be substituted by an alternative peer of similar behavior without affecting the other Web services in the composition, largely reducing the cost of verifying service substitutability. We also provide a verification algorithm and implement it in a tool, with which the behavioral similarity of Web services can be checked automatically.
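"Reachable termination" can be illustrated on a miniature (uncolored) Petri net: search the marking graph for the final marking. The two-transition net below is an invented example, far simpler than the colored nets the paper uses:

```python
# Breadth-first search over the markings of a tiny Petri net to check
# whether the final marking is reachable from the initial one.
# Markings are frozensets of (place, tokens) pairs with zero-count
# places dropped, so they can be hashed and deduplicated.

from collections import deque

def reachable(initial, final, transitions):
    """transitions: list of (consume, produce) dicts of place -> tokens."""
    seen, queue = {initial}, deque([initial])
    while queue:
        m = dict(queue.popleft())
        for consume, produce in transitions:
            if all(m.get(p, 0) >= n for p, n in consume.items()):
                nxt = dict(m)
                for p, n in consume.items():
                    nxt[p] -= n
                for p, n in produce.items():
                    nxt[p] = nxt.get(p, 0) + n
                frozen = frozenset((p, n) for p, n in nxt.items() if n)
                if frozen not in seen:
                    seen.add(frozen)
                    queue.append(frozen)
    return final in seen

# places: start -> received -> done, modeling a two-step service
net = [({"start": 1}, {"received": 1}),   # t1: accept request
       ({"received": 1}, {"done": 1})]    # t2: send response
init = frozenset({("start", 1)})
goal = frozenset({("done", 1)})
print(reachable(init, goal, net))
```

State-space search like this is exactly the expensive step the paper avoids: for well-structured services, it shows that the proper-termination check can be dropped.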


04 A Privacy Preserving Repository for Securing Data across the Cloud

The popularity of cloud computing is increasing day by day in distributed computing environments, and there is a growing trend of using clouds for storage and data-processing needs. Cloud computing is Internet-based computing in which shared resources, software, and information are provided to computers and other devices on demand. However, adopting the cloud computing paradigm can have both positive and negative effects on the data security of service consumers. This paper primarily highlights some major security issues in current cloud computing environments. The primary issue in cloud data security is protection of the data itself. The idea is to construct a privacy-preserving repository in which data-sharing services can update and control access to their shared data and limit its usage, instead of submitting the data to central authorities; the repository thus promotes both data sharing and data privacy. This paper aims to achieve data confidentiality while keeping the harmonizing relations in the cloud intact. Our proposed scheme enables the data owner to delegate most computation-intensive tasks to cloud servers without disclosing data contents or user access-privilege information.

05 A Scalable Method for Signalling Dynamic Reconfiguration Events with OpenSM

Rerouting around faulty components, on-the-fly policy changes, and migration of jobs all require reconfiguration of data structures in the Queue Pairs residing on the hosts of an InfiniBand cluster. In addition to a proper implementation at the host, the subnet manager needs a scalable method for signalling reconfiguration events to the hosts. In this paper we propose and evaluate three different implementations for signalling dynamic reconfiguration events with OpenSM. Through our evaluation we demonstrate a scalable solution for signalling host-side reconfiguration events in an InfiniBand network, based on an example where dynamic network reconfiguration combined with a topology-agnostic routing function is used to avoid malfunctioning components. Through measurements on our test cluster and an analytical study, we show that our best proposal reduces reconfiguration latency by more than 90% and in certain situations eliminates it completely. Furthermore, the processing overhead in the subnet manager is shown to be minimal.

06 A Segment-Level Adaptive Data Layout Scheme for Improved Load Balance in Parallel File Systems

Parallel file systems are designed to mask the ever-increasing gap between CPU and disk speeds via parallel I/O processing. While they have become an indispensable component of modern high-end computing systems, their inadequate performance is a critical issue facing the HPC community today. Conventionally, a parallel file system stripes a file across multiple file servers with a fixed stripe size. The stripe size is a vital performance parameter, but its optimal value is often application dependent, and determining it is a difficult research problem. Based on the observation that many applications have different data-access clusters within one file, each cluster having a distinct data-access pattern, we propose in this paper a segmented data layout scheme for parallel file systems. The basic idea behind the segmented approach is to divide a file logically into segments such that an optimal stripe size can be identified for each segment. A five-step method is introduced to perform the segmentation, identify the appropriate stripe size for each segment, and carry out the segmented data layout automatically. Experimental results show that the proposed layout scheme is feasible and effective, improving performance by up to 163% for writing and 132% for reading on the widely used IOR and IOzone benchmarks.
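The segment-then-choose-stripe-size idea can be sketched simply. This is not the paper's five-step method: the segmentation rule (split where the observed request size changes) and the stripe heuristic (one request spans all servers) are invented for illustration:

```python
# Illustrative segment-level layout decision for a striped file.

def segment_accesses(accesses):
    """accesses: list of (offset, request_size) in file order.
    Group consecutive accesses with the same request size into segments."""
    segments = []
    for off, size in accesses:
        if segments and segments[-1]["req"] == size:
            segments[-1]["end"] = off + size
        else:
            segments.append({"start": off, "end": off + size, "req": size})
    return segments

def choose_stripe(segment, n_servers, min_stripe=64 * 1024):
    # simple heuristic: stripe so one request spans all servers,
    # but never drop below a minimum stripe unit
    return max(min_stripe, segment["req"] // n_servers)

# a file accessed with 1 MB requests, then small 4 KB requests
accesses = [(0, 1 << 20), (1 << 20, 1 << 20), (2 << 20, 4096)]
segs = segment_accesses(accesses)
for s in segs:
    print(s["start"], s["end"], choose_stripe(s, 4))
```

The point is that the two segments get different stripe sizes (256 KB vs. the 64 KB floor here), which a single file-wide stripe size cannot provide.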

07 A Sketch-based Architecture for Mining Frequent Items and Itemsets from Distributed Data Streams



08 A Trustworthiness Fusion Model for Service Cloud Platform Based on D-S Evidence Theory

Trustworthiness plays an important role in service selection and usage. However, service trustworthiness is not easy to define and compute, because of its subjective meaning and the differing views on it. In this paper, we describe the meaning of trustworthiness and a computation method for trustworthiness fusion. Extracting trustworthiness from the service provider, the service requestor, and the service broker, we adopt D-S (Dempster-Shafer) evidence theory to fuse the tripartite trustworthiness. Finally, we performed comparison experiments on our Web service supermarket platform and verified the efficiency of our method.
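The fusion step the abstract relies on is Dempster's rule of combination, which is compact enough to show directly. The three mass functions below (provider, requestor, and broker views of whether a service is trustworthy) are made-up illustrative numbers:

```python
# Dempster's rule of combination over mass functions whose focal
# elements are frozensets of hypotheses.

from itertools import product

def combine(m1, m2):
    """Fuse two mass functions; conflicting mass is redistributed by
    the usual normalization factor 1 - K."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in fused.items()}

T, U = frozenset({"trust"}), frozenset({"untrust"})
both = T | U  # the frame of discernment (total ignorance)

provider  = {T: 0.7, both: 0.3}
requestor = {T: 0.6, U: 0.2, both: 0.2}
broker    = {T: 0.8, both: 0.2}

fused = combine(combine(provider, requestor), broker)
print(round(fused[T], 3))
```

Fusing the three sources concentrates mass on "trust" well beyond any single source's belief, which is the effect the tripartite fusion is after.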

09 Addressing Resource Fragmentation in Grids Through Network-Aware Meta-Scheduling in Advance

Grids are made of heterogeneous, geographically dispersed computing resources where providing Quality of Service (QoS) is a challenging task. One way of enhancing the QoS perceived by users is to schedule jobs in advance, since reservations of resources are not always possible; this makes it more likely that the appropriate resources will be available to run a job when needed. One drawback of this scenario is fragmentation, a well-known effect in allocating jobs to resources and a cause of poor resource utilization. We have therefore developed a new technique to tackle fragmentation, which consists of rescheduling already-scheduled tasks. To this end, heuristics are implemented to calculate the intervals to be replanned and to select the jobs involved in the process. Moreover, another heuristic packs rescheduled jobs as close together as possible to minimize fragmentation. The technique has been tested on a real testbed.

10 APP: Minimizing Interference Using Aggressive Pipelined Prefetching In Multi-Level Buffer Caches

As services become more complex, with multiple interactions and storage servers shared by multiple services, the different I/O streams arising from these services compete for disk attention. Aggressive Pipelined Prefetching (APP) enabled storage clients are designed to manage the buffer cache and I/O streams so as to minimize the disk I/O interference arising from competing streams. Because of the large number of streams serviced by a storage server, most of the disk time is spent seeking, degrading response times. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network time, thus effectively avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. In APP, the intelligence is embedded in the clients, which automatically infer parameters in order to achieve maximum throughput. APP clients use aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies in an effort to minimize disk interference and temper the effects of aggressive prefetching. We used an extremely I/O-intensive Radix-k application, employed in studies on the scalability of parallel image composition and


particle tracing, developed at the Argonne National Laboratory, with data sets of up to 128 GB, and implemented our scheme on a 16-node Linux cluster. We observed that the execution time of the application decreased by 68% on average when using our scheme.

11 ASDF: An Autonomous and Scalable Distributed File System

The demand for huge storage space in data-intensive applications and high-performance scientific computing continues to grow. Integrating massive distributed storage resources to provide huge storage space is an important and challenging issue in Cloud and Grid computing. In this paper, we propose a distributed file system, called ASDF, to meet the demands not only of data-intensive applications but also of end users, developers, and administrators. While sharing many of the same goals as previous distributed file systems, such as scalability, reliability, and performance, it is also designed with an emphasis on compatibility, extensibility, and autonomy. With these design goals in mind, we address several issues and present our design, adopting peer-to-peer technology, replication, multi-source data transfer, metadata caching, and a service-oriented architecture. The experimental results show that the proposed distributed file system meets our design goals and will be useful in Cloud and Grid computing.

12 Assertion-Based Parallel Debugging

Programming languages have advanced tremendously over the years, but program debuggers have hardly changed. Sequential debuggers do little more than allow a user to control the flow of a program and examine its state. Parallel ones support the same operations on multiple processes, which is adequate for a small number of processors but becomes unwieldy and ineffective on very large machines. Typical scientific codes have enormous multidimensional data structures, and it is impractical to expect a user to view the data using traditional display techniques. In this paper we discuss the use of debug-time assertions and show that these can be used to debug parallel programs. The techniques reduce debugging complexity because they reason about the state of large arrays without requiring the user to know the expected value of every element. Assertions can be expensive to evaluate, but their performance can be improved by running them in parallel. We demonstrate the system with a case study finding errors in a parallel version of the Shallow Water Equations, and evaluate the performance of the tool on a 4,096-core Cray XE6.
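A debug-time array assertion of this kind can be sketched as a predicate checked over array chunks in parallel, reporting only the violating indices. The predicate, the array, and the chunked-threads evaluation below are invented illustrations, not the paper's tool:

```python
# Toy parallel array assertion: state one predicate over the whole
# array and get back the indices where it fails, instead of inspecting
# every element by hand.

from concurrent.futures import ThreadPoolExecutor

def check_chunk(args):
    data, lo, hi, pred = args
    # return indices in [lo, hi) where the assertion fails
    return [i for i in range(lo, hi) if not pred(data[i])]

def parallel_assert(data, pred, workers=4):
    """Evaluate 'pred' over every element of 'data' in parallel chunks."""
    n = len(data)
    step = max(1, n // workers)
    chunks = [(data, lo, min(lo + step, n), pred) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        bad = [i for part in ex.map(check_chunk, chunks) for i in part]
    return sorted(bad)

# e.g. assert that a water-height field stays physically plausible
heights = [1.0, 1.2, -0.3, 0.9, 5e9, 1.1]
violations = parallel_assert(heights, lambda h: 0.0 <= h < 1e6)
print(violations)
```

The user never states the expected value of any element, only the invariant; the parallel evaluation addresses the cost concern raised in the abstract.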

13 Autonomic SLA-driven Provisioning for Cloud Applications

Significant achievements have been made in the automated allocation of cloud resources. However, application performance may be poor in peak-load periods unless cloud resources are dynamically adjusted. Moreover, although cloud resources dedicated to different applications are virtually isolated, performance fluctuations do occur because of resource sharing and software or hardware failures (e.g., unstable virtual machines, power outages). In this paper, we propose a decentralized economic approach for dynamically adapting the cloud resources of various applications, so as to statistically meet their SLA performance and availability goals in the presence of varying loads or failures. In our approach, the dynamic economic fitness of a Web service determines whether it is replicated, migrated to another server, or deleted. The economic fitness of a Web service depends on its individual performance constraints, its load, and the utilization of the resources where it resides. Cascading performance objectives are dynamically calculated for individual tasks in the application workflow according to user requirements. By fully implementing our framework, we experimentally show that our adaptive approach statistically meets the performance objectives under peak loads or failures, as opposed to static resource settings.
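The fitness-driven adaptation rule can be caricatured in a few lines. The fitness formula and the thresholds below are entirely invented for illustration; the paper only states that fitness depends on performance constraints, load, and host utilization:

```python
# Hypothetical economic-fitness rule: fitness rises with demand and
# falls with SLA violation and host crowding; the decision thresholds
# are made-up numbers.

def fitness(load, target_latency, observed_latency, host_utilization):
    sla_penalty = observed_latency / target_latency
    return load / (sla_penalty * (1.0 + host_utilization))

def decide(load, target_latency, observed_latency, host_utilization,
           low=0.5, high=2.0):
    f = fitness(load, target_latency, observed_latency, host_utilization)
    if f >= high:
        return "replicate"   # profitable: add a replica elsewhere
    if f < low:
        return "delete"      # unprofitable: remove this replica
    if observed_latency > target_latency:
        return "migrate"     # SLA missed but viable: move to a better host
    return "keep"

print(decide(load=100, target_latency=0.2, observed_latency=0.1, host_utilization=0.5))
print(decide(load=0.1, target_latency=0.2, observed_latency=0.4, host_utilization=0.9))
```

Because each service evaluates its own fitness locally, the adaptation is decentralized, matching the abstract's framing.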

14 BAR: An Efficient Data Locality Driven Task Scheduling Algorithm for Cloud Computing

Large-scale data processing is increasingly common in cloud computing systems like MapReduce, Hadoop, and Dryad in


recent years. In these systems, files are split into many small blocks, and all blocks are replicated over several servers. To process files efficiently, each job is divided into many tasks, and each task is allocated to a server to deal with one file block. Because network bandwidth is a scarce resource in these systems, enhancing task data locality (placing tasks on servers that contain their input blocks) is crucial for job completion time. Although there have been many approaches to improving data locality, most of them either are greedy and ignore global optimization, or suffer from high computational complexity. To address these problems, we propose a heuristic task scheduling algorithm called BAlance-Reduce (BAR), in which an initial task allocation is produced first and the job completion time is then reduced gradually by tuning this initial allocation. By taking a global view, BAR can adjust data locality dynamically according to network state and cluster workload. Simulation results show that BAR can handle large problem instances in a few seconds and outperforms previous related algorithms in terms of job completion time.
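The two-phase structure (initial allocation, then tuning) can be sketched as follows. The cost model, with one time unit for a local read and three for a remote read, and the local-search tuning rule are invented assumptions, not BAR's actual formulas:

```python
# Illustrative two-phase locality-aware scheduling: start with a
# locality-greedy allocation, then reduce the makespan by moving tasks
# off overloaded servers even at the cost of a remote read.

LOCAL, REMOTE = 1.0, 3.0

def makespan(alloc, servers):
    load = {s: 0.0 for s in servers}
    for (task, replicas), srv in alloc.items():
        load[srv] += LOCAL if srv in replicas else REMOTE
    return max(load.values())

def bar_schedule(tasks, servers):
    """tasks: {task: frozenset(servers holding its input block)}."""
    # phase 1: initial allocation -- least-loaded server among replicas
    load = {s: 0.0 for s in servers}
    alloc = {}
    for t, reps in tasks.items():
        srv = min(reps, key=lambda s: load[s])
        alloc[(t, reps)] = srv
        load[srv] += LOCAL
    # phase 2: tuning -- move one task at a time to any server if it
    # lowers the makespan (sacrificing locality when it pays off)
    improved = True
    while improved:
        improved = False
        for key in alloc:
            for s in servers:
                trial = dict(alloc); trial[key] = s
                if makespan(trial, servers) < makespan(alloc, servers):
                    alloc, improved = trial, True
    return alloc, makespan(alloc, servers)

# four tasks whose only block replica lives on server A
tasks = {f"t{i}": frozenset({"A"}) for i in range(4)}
alloc, ms = bar_schedule(tasks, ["A", "B"])
print(ms)
```

A purely locality-greedy schedule piles all four tasks on server A (makespan 4); the tuning phase trades one task's locality for balance, which is the global-view adjustment the abstract describes.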

15 Building an online domain-specific computing service over non-dedicated grid and cloud resources: the Superlink-online experience

Linkage analysis is a statistical method used by geneticists in everyday practice for mapping disease-susceptibility genes in the study of complex diseases. An essential first step in the study of genetic diseases, linkage computations may require years of CPU time. The recent DNA sampling revolution enabled unprecedented sampling density, but made the analysis even more computationally demanding. In this paper we describe a high-performance online service for genetic linkage analysis, called Superlink-online. The system enables anyone with Internet access to submit genetic data and analyze it as easily and quickly as if using a supercomputer. The analyses are automatically parallelized and executed on tens of thousands of distributed CPUs in multiple clouds and grids. The first version of the system, which employed up to 3,000 CPUs in the UW Madison and Technion Condor pools, has been used successfully since 2006 by hundreds of geneticists worldwide, with over 40 citations in the genetics literature. Here we describe the second version, which substantially improves the scalability and performance of the first: it uses over 45,000 non-dedicated hosts in 10 different grids and clouds, including EC2 and the Superlink@Technion community grid. Improved system performance is obtained through a virtual grid hierarchy with dynamic load balancing and a multi-grid overlay via the GridBot system, parallel pruning of short tasks to minimize overhead, and cost-efficient use of cloud resources in reliability-critical execution periods. These enhancements enabled the execution of many previously infeasible analyses, which can now be completed within a few hours. The new version of the system, in production since 2009, has completed over 6,500 runs comprising over 10 million tasks, with a total consumption of 420 CPU years.

16 Cheetah: A Framework for Scalable Hierarchical Collective Operations

Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous, with increasing node and core-per-node counts, and a growing number of data-access mechanisms of varying characteristics are supported within a single system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations MPI_Barrier() and MPI_Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%, and 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
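Why a hierarchy helps can be seen from a simple message count. The flat-vs-two-level comparison below is a generic illustration of hierarchical broadcast, not the framework's actual algorithms:

```python
# Root-send counts for a flat broadcast vs. a two-level hierarchical
# one (node leaders over the network, then local cores via shared
# memory). The layout numbers are an invented example.

def flat_bcast_sends(n_procs):
    # root sends to every other process directly
    return n_procs - 1

def hierarchical_bcast_sends(n_nodes, cores_per_node):
    # stage 1: root leader -> other node leaders (network)
    # stage 2: each leader -> its local cores (shared memory)
    return (n_nodes - 1) + (cores_per_node - 1)

print(flat_bcast_sends(48))              # 47 sends, all over the network
print(hierarchical_bcast_sends(4, 12))   # 3 network + 11 shared-memory
```

The hierarchical version both shortens the critical path and moves most transfers onto the fast intra-node mechanism, which is the hardware-specific data-access advantage the framework exploits.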

17 Classification and Composition of QoS Attributes in Distributed, Heterogeneous Systems

In large-scale distributed systems, the selection of services and data sources to respond to a given request is a crucial task. Non-functional or Quality of Service (QoS) attributes need to be considered when there are several candidate services with identical functionality. Before applying any service-selection optimization strategy, the system has to be analyzed in terms of


QoS metrics, comparable to the statistics needed by a database query optimizer. This paper presents a classification

approach for QoS attributes of system components, from which aggregation functions for composite services are derived.

The applicability and usefulness of the approach are shown in a distributed system from a High-Energy Physics experiment

posing a complex service selection challenge.
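The central idea, that an attribute's class determines how it aggregates across a composition, can be illustrated with a small sketch. The attribute names, values, and the three classes below (additive, multiplicative, bottleneck) are common textbook examples, not the paper's actual classification.

```python
# Aggregating QoS attributes of services composed in sequence.
# The attribute's class determines its aggregation function:
#   additive (latency), multiplicative (availability), bottleneck (throughput).
import math

AGGREGATORS = {
    "latency_ms":   sum,                       # additive: delays accumulate
    "availability": math.prod,                 # multiplicative: all must be up
    "throughput":   min,                       # bottleneck: slowest link wins
}

def compose_qos(services):
    return {attr: agg([s[attr] for s in services])
            for attr, agg in AGGREGATORS.items()}

chain = [
    {"latency_ms": 20, "availability": 0.99,  "throughput": 500},
    {"latency_ms": 35, "availability": 0.995, "throughput": 300},
]
qos = compose_qos(chain)
# e.g. total latency 55 ms; end-to-end throughput limited to 300
```

Once every attribute carries such an aggregation function, a selection optimizer can score candidate compositions the same way a database query optimizer uses its statistics.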

18 Cloud MapReduce: a MapReduce Implementation on top of a Cloud Operating System

This study presents a fully automated membrane segmentation technique for immunohistochemical tissue images with

membrane staining, which is a critical task in computerized immunohistochemistry (IHC). Membrane segmentation is

particularly tricky in immunohistochemical tissue images because the cellular membranes are visible only in the stained

tracts of the cell, while the unstained tracts are not visible. Our automated method provides accurate segmentation of the

cellular membranes in the stained tracts and reconstructs the approximate location of the unstained tracts using nuclear

membranes as a spatial reference. Accurate cell-by-cell membrane segmentation allows per-cell morphological analysis and

quantification of the target membrane proteins that is fundamental in several medical applications such as cancer

characterization and classification, personalized therapy design, and for any other applications requiring cell morphology

characterization. Experimental results on real datasets from different anatomical locations demonstrate the wide

applicability and high accuracy of our approach in the context of IHC analysis.

19 CloudSpider: Combining Replication with Scheduling for Optimizing Live Migration of Virtual Machines Across Wide Area Networks

Exact information about the shape of a lumbar pedicle can increase operation accuracy and safety during computer-aided

spinal fusion surgery, which requires extreme caution on the part of the surgeon, due to the complexity and delicacy of the

procedure. In this paper, a robust framework for segmenting the lumbar pedicle in computed tomography (CT) images is

presented. The framework that has been designed takes a CT image, which includes the lumbar pedicle as input, and

provides the segmented lumbar pedicle in the form of 3-D voxel sets. This multistep approach begins with 2-D dynamic

thresholding using local optimal thresholds, followed by procedures to recover the spine geometry in a high curvature

environment. A subsequent canal reference determination using the proposed thinning-based integrated cost is then performed.

Based on the obtained segmented vertebra and canal reference, the edge of the spinal pedicle is segmented. This framework

has been tested on 84 lumbar vertebrae of 19 patients requiring spinal fusion. It was successfully applied, resulting in an

average success rate of 93.22% and a final mean error of 0.14±0.05 mm. Precision errors were smaller than 1% for spine

pedicle volumes. Intra- and interoperator precision errors were not significantly different.

20 Automatic and Unsupervised Snore Sound Extraction From Respiratory Sound Signals

In this paper, an automatic and unsupervised snore detection algorithm is proposed. The respiratory sound signals of 30

patients with different levels of airway obstruction were recorded by two microphones: one placed over the trachea (the

tracheal microphone) and the other freestanding (the ambient microphone). All the recordings were done

simultaneously with full-night polysomnography during sleep. The sound activity episodes were identified using the vertical

box (V-Box) algorithm. The 500-Hz subband energy distribution and principal component analysis were used to extract

discriminative features from sound episodes. An unsupervised fuzzy C-means clustering algorithm was then deployed to

label each sound episode as either the snore or the no-snore class; the latter could be breath sounds, swallowing sounds, or any other

noise. The algorithm was evaluated using manual annotation of the sound signals. The overall accuracy of the proposed

algorithm was found to be 98.6% for tracheal sounds recordings, and 93.1% for the sounds recorded by the ambient

microphone.
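The unsupervised clustering step can be sketched with a minimal fuzzy C-means on one-dimensional features. The feature values below are synthetic stand-ins for the subband-energy features in the paper; the deterministic initialization over the data range is a simplification for illustration.

```python
# Minimal fuzzy C-means on 1-D features, the kind of clustering used to
# label sound episodes as snore / no-snore. Data values are synthetic.

def fuzzy_cmeans(xs, c=2, m=2.0, iters=50):
    # Deterministic init: spread initial centers over the data range.
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        # Soft membership of each point in each cluster.
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-9 for ck in centers]
            u.append([1.0 / sum((d[i] / dj) ** (2 / (m - 1)) for dj in d)
                      for i in range(c)])
        # Update each center as the membership-weighted mean.
        for i in range(c):
            w = [uk[i] ** m for uk in u]
            centers[i] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u

# Two well-separated groups: low-energy breaths vs high-energy snores.
data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
centers, memberships = fuzzy_cmeans(data)
```

Each episode then goes to the cluster in which its membership is highest; because membership is soft, borderline episodes can also be flagged for review.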


21 Dealing with Grid-Computing Authorization using Identity-Based Certificateless Proxy Signature

In this paper, we propose a new Identity-Based Certificateless Proxy Signature scheme for the grid environment, in order to

enable attribute-based authorization, fine-grained delegation, and enhanced delegation chain establishment and validation, all

without relying on any kind of PKI Certificates or proxy certificates. We show that our scheme is correct and secure. We also

give an evaluation of the computational and communication overhead of the proposed scheme. Simulations show

satisfactory results.

22 Debunking Real-Time Pricing in Cloud Computing

Elasticity of cloud computing eases the burden of capacity planning. Cloud computing users dynamically provision

IT resources to track their fluctuating demand, and only pay for their usage. Cloud computing therefore essentially shifts

the burden of capacity planning from the user's side to the provider's side. On the other hand, providers take on this burden with the

optimistic assumption that diverse workloads from various users will flatten the overall demand curve. However, this

optimistic hypothesis has not been proved in real-world cases; in fact, counter-evidence has been raised. In

December 2009, Amazon Web Services (AWS), a leading infrastructure cloud service provider, started to offer

real-time pricing for computing resources: Amazon EC2 Spot Instances (SIs). Real-time pricing, in principle,

encourages users to shift their flexible workloads from the provider's peak hours to off-peak hours with monetary incentives.

Interestingly, from our observation of AWS's one-year SI price history datasets, we

conclude that the observed monetary incentive is not large enough to motivate users to shift their workloads. It is

reasonable for users to choose SIs over on-demand instances because SIs are 52.3% cheaper on average. Beyond that, shifting

the workload to a cheaper period provides only 3.7% additional cost savings at best. Moreover, neither the average cost savings

nor the price fluctuation has changed meaningfully over time.
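The argument rests on simple arithmetic, which can be made concrete. The hourly price below is hypothetical; only the 52.3% and 3.7% figures come from the abstract.

```python
# Back-of-envelope cost comparison from the figures above: Spot Instances
# average 52.3% below on-demand, and shifting work to the cheapest hours
# adds at most 3.7% on top of that. The base price is illustrative.
on_demand_hourly = 0.10                        # hypothetical $/hour
spot_avg = on_demand_hourly * (1 - 0.523)      # average SI price
spot_best_shifted = spot_avg * (1 - 0.037)     # best case after load shifting

savings_from_spot = on_demand_hourly - spot_avg
extra_from_shifting = spot_avg - spot_best_shifted
# The shifting gain is tiny relative to simply choosing SIs at all:
assert extra_from_shifting < 0.05 * savings_from_spot
```

In other words, almost the entire saving comes from the SI discount itself, which is why the real-time price signal fails to motivate workload shifting.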

23 DELMA: Dynamically ELastic MApReduce Framework for CPU-Intensive Applications

Since its introduction, MapReduce implementations have been primarily focused towards static compute cluster sizes. In

this paper, we introduce the concept of dynamic elasticity to MapReduce. We present the design decisions and

implementation tradeoffs for DELMA (Dynamically ELastic MApReduce), a framework that follows the MapReduce paradigm,

just like Hadoop MapReduce, but that is capable of growing and shrinking its cluster size, as jobs are underway. In our

study, we test DELMA in diverse performance scenarios, ranging from diverse node additions to node additions at various

points in the application run-time with various dataset sizes. The applicability of the MapReduce paradigm extends far

beyond its use with large-scale data intensive applications, and can also be brought to bear in processing long running

distributed applications executing on small-sized clusters. In this work, we focus both on the performance of processing

hierarchical data in distributed scientific applications, as well as the processing of smaller but demanding input sizes

primarily used in small clusters. We run experiments on datasets that require CPU-intensive processing, ranging in size from

millions of input data elements up to over half a billion, and observe the positive scalability patterns

exhibited by the system. We show that for such sizes, performance increases accordingly with data and cluster size

increases. We conclude on the benefits of providing MapReduce with the capability of dynamically growing and shrinking its

cluster configuration by adding and removing nodes during jobs, and explain the possibilities presented by this model.
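The notion of growing a cluster mid-job can be illustrated with a toy word-count job whose worker pool is enlarged between two waves of map tasks. This is a pure-Python sketch and bears no relation to the actual DELMA or Hadoop APIs.

```python
# Toy elastic map-reduce: the worker pool grows between waves of map
# tasks while the job is underway, loosely mirroring DELMA's model.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def run_job(chunks, initial_workers, added_workers):
    counts = Counter()
    half = len(chunks) // 2
    # Wave 1 runs on the initial cluster size.
    with ThreadPoolExecutor(max_workers=initial_workers) as pool:
        for c in pool.map(lambda ch: Counter(ch.split()), chunks[:half]):
            counts.update(c)          # reduce step: merge partial counts
    # The cluster "grows": wave 2 runs with more workers, mid-job.
    with ThreadPoolExecutor(max_workers=initial_workers + added_workers) as pool:
        for c in pool.map(lambda ch: Counter(ch.split()), chunks[half:]):
            counts.update(c)
    return counts

result = run_job(["a b a", "b b", "a c", "c c c"],
                 initial_workers=2, added_workers=2)
assert result["c"] == 4 and result["a"] == 3
```

The key property is that the reduce state survives the change in pool size, so nodes can join (or leave) without restarting the job.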

24 Detection and Protection against Distributed Denial of Service Attacks in Accountable Grid Computing Systems


By exploiting existing vulnerabilities, malicious parties can take advantage of resources made available by grid systems to

attack mission-critical websites or the grid itself. In this paper, we present two approaches for protecting against attacks

targeting sites outside or inside the grid. Our approach is based on special-purpose software agents that collect provenance

and resource usage data in order to perform detection and protection. We show the effectiveness and the efficiency of our

approach by conducting various experiments on an emulated grid test-bed.

25 DHTbd: A Reliable Block-based Storage System for High Performance Clusters

Large, reliable and efficient storage systems are becoming increasingly important in enterprise environments. Our research

in storage system design is oriented towards the exploitation of commodity hardware for building a high-performance,

resilient and scalable storage system. We present the design and implementation of DHTbd, a general purpose decentralized

storage system where storage nodes support a distributed hash table based interface and clients are implemented as

in-kernel device drivers. DHTbd, unlike most storage systems proposed to date, is implemented at the block device level of the

I/O stack, a simple yet efficient design. The experimental evaluation of the proposed system demonstrates its very good I/O

performance, its ability to scale to large clusters, as well as its robustness, even when massive failures occur.
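A distributed-hash-table interface for block storage typically maps each block ID onto a hash ring, with the first node clockwise owning the block. The sketch below shows that placement scheme; the node names are invented, and DHTbd's actual protocol is more involved.

```python
# Block placement on a hash ring: each block ID hashes onto the ring and
# is served by the first node clockwise. Node names are illustrative.
import hashlib
import bisect

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.keys = [k for k, _ in self.ring]
    def node_for(self, block_id):
        # First node at or after the block's hash, wrapping around.
        i = bisect.bisect(self.keys, h(block_id)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("block:42")
# Every client computes the same owner without any central directory:
assert owner == ring.node_for("block:42")
```

Because placement is a pure function of the block ID and the node set, clients (here, in-kernel drivers) can locate blocks without consulting a metadata server, which is what makes the design scale.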

26 Diagnosing Anomalous Network Performance with Confidence

Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance

computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly

parallel applications running on parallel machines typically require consistently high levels of performance to adequately

leverage the massive amounts of available computing power. Performance analysts have usually quantified network

performance using traditional summary statistics that assume the observational data is sampled from a normal distribution.

In our examinations of network performance, we have found this method of analysis often provides too little data to

understand anomalous network performance. In particular, we examine a multi-modal performance scenario encountered

with an InfiniBand interconnection network, and we explore the performance repeatability on the custom Cray SeaStar2

interconnection network after a set of software and driver updates.
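Why a mean and standard deviation can mislead on multi-modal data is easy to demonstrate. The latency values below are synthetic, standing in for the bimodal network measurements described above.

```python
# Why summary statistics mislead: a bimodal latency sample has a mean
# that sits where almost no observation lies. Values are synthetic.
import statistics

fast = [1.0, 1.1, 0.9, 1.05] * 25     # e.g. uncongested route, ~1 us
slow = [5.0, 5.2, 4.8, 5.1] * 25      # e.g. congested route, ~5 us
latencies = fast + slow

mean = statistics.mean(latencies)
near_mean = [x for x in latencies if abs(x - mean) < 0.5]
# The mean (~3 us) describes almost none of the actual samples:
assert len(near_mean) == 0
```

A histogram or a per-mode breakdown of the same data would immediately reveal the two routes, which is the kind of distribution-aware diagnosis the paper argues for.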

27 Enabling Multi-Physics Coupled Simulations within the PGAS Programming Framework

Complex coupled multi-physics simulations are playing increasingly important roles in scientific and engineering

applications such as fusion plasma and climate modeling. At the same time, extreme scales, high levels of concurrency and

the advent of multicore and manycore technologies are making the high-end parallel computing systems on which these

simulations run, hard to program. While Partitioned Global Address Space (PGAS) languages attempt to address

the problem, the PGAS model does not easily support the coupling of multiple application codes, which is necessary for the

coupled multi-physics simulations. Furthermore, existing frameworks that support coupled simulations have been

developed for fragmented programming models such as message passing, and are conceptually mismatched with the

shared memory address space abstraction in the PGAS programming model. This paper explores how multi-physics coupled

simulations can be supported within the PGAS programming framework. Specifically, in this paper, we present the design

and implementation of the XpressSpace programming system, which enables efficient and productive development of

coupled simulations across multiple independent PGAS Unified Parallel C (UPC) executables. XpressSpace provides the

global-view style programming interface that is consistent with the memory model in UPC, and provides an efficient runtime

system that can dynamically capture the data decomposition of global-view arrays and enable fast exchange of parallel data

structures between coupled codes. In addition, XpressSpace provides the flexibility to define the coupling process in a

specification file that is independent of the program source codes. We evaluate the performance and scalability of

XpressSpace prototype implementation using different coupling patterns extracted from real world multi-physics simulation

scenarios, on the Jaguar Cray XT5 system of Oak Ridge National Laboratory.


28 EZTrace: a generic framework for performance analysis

Modern supercomputers with multi-core nodes enhanced by accelerators, as well as hybrid programming models introduce

more complexity in modern applications. Exploiting efficiently all the resources requires a complex analysis of the

performance of applications in order to detect time-consuming sections. We present EZTRACE, a generic trace generation

framework that aims at providing a simple way to analyze applications. EZTRACE is based on plugins that allow it to trace

different programming models such as MPI, pthread or OpenMP as well as user-defined libraries or applications. EZTRACE

uses two steps: one to collect basic information during execution, and one for post-mortem analysis. This permits tracing

the execution of applications with low overhead while allowing the analysis to be refined after the execution. We also present a

script language for EZTRACE that gives the user the opportunity to easily define the functions to instrument without

modifying the source code of the application.
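The two-step model can be shown in miniature: a decorator records compact events at run time, and all analysis happens afterwards on the recorded trace. The function names are illustrative and unrelated to the real EZTRACE plugin API.

```python
# Two-phase tracing in miniature: record cheap events during execution,
# analyze post-mortem. Names are illustrative, not the EZTRACE API.
import time
import functools

TRACE = []   # (name, start, duration) tuples collected during execution

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Recording is deliberately minimal to keep run-time overhead low.
            TRACE.append((fn.__name__, t0, time.perf_counter() - t0))
    return wrapper

@traced
def compute(n):
    return sum(range(n))

compute(100_000)
compute(200_000)

# Post-mortem step: aggregate total time per function from the trace.
totals = {}
for name, _, dur in TRACE:
    totals[name] = totals.get(name, 0.0) + dur
```

Deferring aggregation to the post-mortem phase is what keeps the in-execution cost low; the same raw trace can later be re-analyzed with different questions in mind.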

29 Failure Avoidance through Fault Prediction Based on Synthetic Transactions

System logs are an important tool in studying the conditions (e.g., environment misconfigurations, resource status,

erroneous user input) that cause failures. However, production system logs are complex, verbose, and lack structural

stability over time. These traits make them hard to use, and make solutions that rely on them susceptible to high

maintenance costs. Additionally, logs record failures after they occur: by the time logs are investigated, users have already

experienced the failures’ consequences. To detect the environment conditions that are correlated with failures without

dealing with the complexities associated with processing production logs, and to prevent failure-causing conditions from

occurring before the system goes live, this research suggests a three step methodology: i) using synthetic transactions, i.e.,

simplified workloads, in pre-production environments that emulate user behavior, ii) recording the result of executing these

transactions in logs that are compact, simple to analyze, stable over time, and specifically tailored to the fault metrics of

interest, and iii) mining these specialized logs to understand the conditions that correlate to failures. This allows system

administrators to configure the system to prevent these conditions from happening. We evaluate the effectiveness of this

approach by replicating the behavior of a service used in production at Microsoft, and testing the ability to predict failures

using a synthetic workload on a production trace of 650 million events. The synthetic prediction system is able to predict 91%

of real production failures using 50-fold fewer transactions and logs that are 10,000-fold more compact than their production

counterparts.

30 GeoServ: A Distributed Urban Sensing Platform

Urban sensing where mobile users continuously gather, process, and share location-sensitive sensor data (e.g., street

images, road condition, traffic flow) is emerging as a new network paradigm of sensor information sharing in urban

environments. The key enablers are smartphones (e.g., iPhones and Android phones) equipped with onboard sensors

(e.g., cameras, accelerometer, compass, GPS) and various wireless devices (e.g., WiFi and 2/3G). The goal of this paper is to

design a scalable sensor networking platform where millions of users on the move can participate in urban sensing and

share location-aware information using always-on cellular data connections. We propose a two-tier sensor networking

platform called GeoServ where mobile users publish/access sensor data via an Internet-based distributed P2P overlay

network. The main contribution of this paper is two-fold: a location-aware sensor data retrieval scheme that supports

geographic range queries, and a location-aware publish-subscribe scheme that enables efficient multicast routing over a

group of subscribed users. We prove that GeoServ protocols preserve locality and validate their performance via extensive

simulations.
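A geographic range query can be served efficiently when readings are indexed by a coarse spatial cell, so a query only touches matching cells rather than scanning everything. The sketch below uses a simple degree-based grid; the coordinates, cell size, and payloads are arbitrary choices for illustration, not GeoServ's actual P2P scheme.

```python
# Location-aware retrieval sketch: sensor readings indexed by a coarse
# grid cell, so a geographic range query only touches matching cells.
CELL = 10  # cells per degree (coarse sketch; int() truncation suffices here)

def cell_of(lat, lon):
    return (int(lat * CELL), int(lon * CELL))

index = {}   # cell -> list of (lat, lon, payload)

def publish(lat, lon, payload):
    index.setdefault(cell_of(lat, lon), []).append((lat, lon, payload))

def range_query(lat0, lat1, lon0, lon1):
    hits = []
    for ci in range(int(lat0 * CELL), int(lat1 * CELL) + 1):
        for cj in range(int(lon0 * CELL), int(lon1 * CELL) + 1):
            for lat, lon, p in index.get((ci, cj), []):
                if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                    hits.append(p)
    return hits

publish(37.55, 126.97, "traffic jam")
publish(37.58, 126.98, "street image")
publish(40.71, -74.00, "far away")
nearby = range_query(37.5, 37.6, 126.9, 127.0)
assert sorted(nearby) == ["street image", "traffic jam"]
```

In a P2P setting like GeoServ's, each cell maps to a responsible overlay node, so a range query visits only the nodes whose cells intersect the query rectangle.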

31 GPGPU-Accelerated Parallel and Fast Simulation of Thousand-core Platforms


The multicore revolution and the ever-increasing complexity of computing systems are dramatically changing system design,

analysis and programming of computing platforms. Future architectures will feature hundreds to thousands of simple

processors and on-chip memories connected through a network-on-chip. Architectural simulators will remain primary tools

for design space exploration, software development and performance evaluation of these massively parallel architectures.

However, architectural simulation performance is a serious concern, as virtual platforms and simulation technology are not

able to tackle the complexity of future scenarios with thousands of cores. The main contribution of this paper is the development

of a new simulation approach and technology for many-core processors that exploits the enormous parallel processing

capability of low-cost and widely available General Purpose Graphics Processing Units (GPGPUs). The simulation of many-core

architectures indeed exhibits a high level of parallelism and is inherently parallelizable, but GPGPU acceleration of

architectural simulation requires an in-depth revision of the data structures and functional partitioning traditionally used in

parallel simulation. We demonstrate our GPGPU simulator on a target architecture composed of several cores (ARM ISA

based), with instruction and data caches, connected through a Network-on-Chip (NoC). Our experiments confirm the

feasibility of our approach.

32 Grid Global Behavior Prediction

Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and

now cloud computing, dealing correctly and efficiently with system complexity is the key to taking technology a step further.

In this sense, global behavior modeling is an innovative methodology aimed at understanding the grid behavior. The main

objective of this methodology is to synthesize the grid’s vast, heterogeneous nature into a simple but powerful behavior

model, represented in the form of a single, abstract entity, with a global state. Global behavior modeling has proved to be

very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a

descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper

we present a prediction methodology whose objective is to define the techniques needed to create global behavior

prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as

fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this

approach.

33 High Performance Pipelined Process Migration with RDMA

Coordinated Checkpoint/Restart (C/R) is a widely deployed strategy for achieving fault tolerance. However, C/R by itself

cannot meet the demands of upcoming exascale systems, due to its heavy I/O overhead. Process migration has

already been proposed in the literature as a proactive fault-tolerance mechanism to complement C/R. Several popular MPI

implementations provide support for process migration, including MVAPICH2 and Open MPI, but these existing

solutions cannot yield satisfactory performance. In this paper we conduct extensive profiling of several process migration

mechanisms, and reveal that inefficient I/O and network transfer are the principal factors responsible for the high overhead.

We then propose a new approach, Pipelined Process Migration with RDMA (PPMR), to overcome these overheads. Our new

protocol fully pipelines data writing, data transfer, and data read operations during different phases of a migration cycle.

PPMR aggregates data writes on the migration source node and transfers data to the target node via high throughput RDMA

transport. It implements an efficient process restart mechanism at the target node to restart processes from the RDMA data

streams. We have implemented this Pipelined Process Migration protocol in MVAPICH2 and studied the performance

benefits. Experimental results show that PPMR achieves a 10.7X speedup to complete a process migration over the

conventional approach at a moderate (8 MB) memory usage. Process migration overhead on the application is significantly

reduced, from 38% to 5%, by PPMR when three migrations are performed in succession.

34 Improving Utilization of Infrastructure Clouds

A key advantage of infrastructure-as-a-service (IaaS) clouds is providing users on-demand access to resources. To provide


on-demand access, however, cloud providers must either significantly overprovision their infrastructure (and pay a high

price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is

no longer on-demand). At the same time, not all users require truly on-demand access to resources. Many applications and

workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists

utilize high-throughput computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available

resources and terminated when the resource is no longer available. We propose a cloud infrastructure that combines

on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by

deploying backfill virtual machines (VMs). For demonstration and experimental evaluation, we extend the Nimbus cloud

computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. Initial tests show an increase

in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace but only 6.39% overhead cost for

processing the HTC workload. We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job

management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of

the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to

processing HTC jobs.
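The utilization arithmetic behind backfill is straightforward and worth making explicit. The node counts below are made up; only the 37.5% to 100% jump mirrors the figures in the abstract.

```python
# Utilization arithmetic behind backfill: idle on-demand nodes run
# backfill VMs for HTC jobs. Node counts are illustrative.
total_nodes = 16
on_demand_busy = 6                        # nodes serving on-demand requests
utilization = on_demand_busy / total_nodes
assert utilization == 0.375               # 37.5% without backfill

backfill = total_nodes - on_demand_busy   # idle nodes run backfill VMs
utilization_with_backfill = (on_demand_busy + backfill) / total_nodes
assert utilization_with_backfill == 1.0   # 100% with backfill
# When an on-demand request arrives, a backfill VM is preempted; that is
# acceptable because HTC jobs (as with Condor) expect interruption.
```

The design works precisely because the two workloads have opposite requirements: on-demand users need guaranteed immediate access, while HTC jobs tolerate preemption in exchange for otherwise-wasted cycles.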

35 Inferring Network Topologies in Infrastructure as a Service Cloud

Infrastructure as a Service (IaaS) clouds are gaining increasing popularity as a platform for distributed computations. The

virtualization layers of those clouds offer new possibilities for rapid resource provisioning, but also hide aspects of the

underlying IT infrastructure which have often been exploited in classic cluster environments. One of those hidden aspects is

the network topology, i.e. the way the rented virtual machines are physically interconnected inside the cloud. We propose an

approach to infer the network topology connecting a set of virtual machines in IaaS clouds and exploit it for data-intensive

distributed applications. Our inference approach relies on delay-based end-to-end measurements and can be combined with

traditional IP-level topology information, if available. We evaluate the inference accuracy using the popular hypervisors KVM

and Xen, and highlight possible performance gains for distributed applications.
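Delay-based inference rests on a simple observation: co-located VMs see much lower pairwise latency than VMs behind different switches. A minimal grouping pass over a delay matrix illustrates this; the delay values, VM names, and threshold are synthetic, and the paper's actual inference method is more sophisticated.

```python
# Topology inference sketch: group VMs whose pairwise delay is small,
# assuming co-located VMs see much lower latency. Delays are synthetic.

def infer_groups(delays, threshold):
    """delays: {(a, b): rtt_ms}; returns sets of mutually 'close' VMs."""
    vms = sorted({v for pair in delays for v in pair})
    groups = []
    for vm in vms:
        for g in groups:
            # Join a group only if close to every current member.
            if all(delays.get((vm, o), delays.get((o, vm), 1e9)) < threshold
                   for o in g):
                g.add(vm)
                break
        else:
            groups.append({vm})
    return groups

delays = {("vm1", "vm2"): 0.2, ("vm1", "vm3"): 2.5,
          ("vm2", "vm3"): 2.4, ("vm3", "vm4"): 0.3,
          ("vm1", "vm4"): 2.6, ("vm2", "vm4"): 2.5}
groups = infer_groups(delays, threshold=1.0)
assert {"vm1", "vm2"} in groups and {"vm3", "vm4"} in groups
```

A data-intensive application can then schedule communication-heavy task pairs within a group, recovering locality that the virtualization layer otherwise hides.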

36 Directed Differential Connectivity Graph of Interictal Epileptiform Discharges

In this paper, we study temporal couplings between interictal events of spatially remote regions in order to localize the

leading epileptic regions from intracerebral EEG (iEEG). We aim to assess whether quantitative epileptic graph analysis

during the interictal period may be helpful to predict the seizure onset zone of ictal iEEG. Using wavelet transform,

cross-correlation coefficients, and multiple hypothesis testing, we propose a differential connectivity graph (DCG) to represent the

connections that change significantly between epileptic and nonepileptic states as defined by the interictal events.

Post-processing based on mutual information and multiobjective optimization is proposed to localize the leading epileptic

regions through the DCG. The suggested approach is applied to iEEG recordings of five patients suffering from focal epilepsy.

Quantitative comparisons of the proposed epileptic regions within ictal onset zones detected by visual inspection and using

electrically stimulated seizures, reveal good performance of the present method.

37 Managing distributed files with RNS in heterogeneous Data Grids

This paper describes the management of files distributed in heterogeneous Data Grids by using RNS (Resource Namespace

Service). RNS provides hierarchical namespace management for name-to-resource mapping as a key technology to use Grid

resources for different kinds of middleware. RNS directory entries and junction entries can contain their own XML messages

as metadata. We define attribute expressions in XML for the RNS entries and give an algorithm to access distributed files

stored within different kinds of Data Grids. The example in this paper shows how our Grid application can retrieve the actual

locations of files from the RNS server. An application can also access the distributed files as though they were files in the


local file system without worrying about the underlying Data Grids. This approach can be used in a Grid computing system

to handle distributed Grid resources.
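The core of such a namespace service is name-to-resource mapping: an application addresses a virtual hierarchical path, and the service resolves it to the file's actual grid location. The sketch below shows that mapping in its simplest form; the paths, endpoint URL, and metadata fields are invented examples, not the RNS wire protocol.

```python
# Hierarchical name-to-resource mapping in the spirit of RNS: a virtual
# path resolves to the file's actual grid location. Entries are invented.
namespace = {}   # virtual path -> {"location": ..., "metadata": ...}

def register(path, location, metadata=None):
    namespace[path] = {"location": location, "metadata": metadata or {}}

def resolve(path):
    """Return the actual location behind a virtual path, or None."""
    entry = namespace.get(path)
    return entry["location"] if entry else None

register("/grids/physics/run42/data.h5",
         "gsiftp://se01.example.org/store/ab/data.h5",
         metadata={"size": 1_048_576, "replicas": 2})

# The application addresses the virtual path only; the Data Grid behind
# the mapping can change without the application noticing.
assert resolve("/grids/physics/run42/data.h5").startswith("gsiftp://")
assert resolve("/grids/physics/missing") is None
```

Because entries carry their own metadata (XML in real RNS), middleware can attach per-file attributes, such as replica counts, without touching the underlying storage systems.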

38 Multi-Cloud Deployment of Computing Clusters for Loosely-Coupled MTC Applications

Cloud computing is gaining acceptance in many IT organizations, as an elastic, flexible and variable-cost way to deploy their

service platforms using outsourced resources. Unlike traditional utilities where a single provider scheme is a common

practice, the ubiquitous access to cloud resources easily enables the simultaneous use of different clouds. In this paper we

explore this scenario to deploy a computing cluster on top of a multi-cloud infrastructure, for solving loosely-coupled Many-

Task Computing (MTC) applications. In this way, the cluster nodes can be provisioned with resources from different clouds

to improve the cost-effectiveness of the deployment, or to implement high-availability strategies. We prove the viability of

such solutions by evaluating the scalability, performance, and cost of different configurations of a Sun Grid Engine

cluster, deployed on a multi-cloud infrastructure spanning a local data-center and three different cloud sites: Amazon EC2

Europe, Amazon EC2 USA, and ElasticHosts. Although the testbed deployed in this work is limited to a reduced number of

computing resources (due to hardware and budget limitations), we have complemented our analysis with a simulated

infrastructure model, which includes a larger number of resources, and runs larger problem sizes. Data obtained by

simulation show that performance and cost results can be extrapolated to large-scale problems and cluster infrastructures.
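The cost-effectiveness argument above can be sketched with a toy model: mixing nodes from sites with different price/performance points can beat a single-cloud deployment on tasks completed per dollar. All site names and figures below are hypothetical, not data from the paper.

```python
# Toy model of the cost/performance trade-off in a multi-cloud cluster.
# Each node is (site, tasks/hour, $/hour); values are illustrative only.

def cluster_throughput(nodes):
    """Aggregate task throughput (tasks/hour) of the deployment."""
    return sum(speed for _, speed, _ in nodes)

def cluster_cost(nodes):
    """Total hourly cost of the deployment."""
    return sum(cost for _, _, cost in nodes)

def cost_efficiency(nodes):
    """Tasks completed per unit of money spent."""
    return cluster_throughput(nodes) / cluster_cost(nodes)

local   = [("local", 10.0, 0.05)] * 4    # cheap, already-owned nodes
cloud_a = [("cloud-a", 8.0, 0.10)] * 4   # slower but inexpensive cloud
cloud_b = [("cloud-b", 12.0, 0.20)] * 4  # fastest, most expensive cloud

single_cloud = cloud_b                   # all nodes from one provider
multi_cloud  = local + cloud_a           # mix local and cheap cloud nodes
```

Under these made-up prices the mixed deployment completes twice as many tasks per dollar, which is the kind of trade-off the evaluated configurations explore.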

39 Dynamic Brain Phantom for Intracranial Volume Measurements

Knowledge of intracranial ventricular volume is important for the treatment of hydrocephalus, a disease in which

cerebrospinal fluid (CSF) accumulates in the brain. Current monitoring options involve MRI or pressure monitors (InSite,

Medtronic). However, there are no existing methods for continuous cerebral ventricle volume measurements. In order to test

a novel impedance sensor for direct ventricular volume measurements, we present a model that emulates the expansion of

the lateral ventricles seen in hydrocephalus. To quantify the ventricular volume, sensor prototypes were fabricated and

tested with this experimental model. Fluid was injected and withdrawn cyclically in a controlled manner and volume

measurements were tracked over 8 h. Pressure measurements were also comparable to conditions seen clinically. The

results from the bench-top model served to calibrate the sensor for preliminary animal experiments. A hydrocephalic rat

model was used to validate a scaled-down, microfabricated prototype sensor. CSF was removed from the enlarged ventricles

and a dynamic volume decrease was properly recorded. This method of testing new designs on brain phantoms prior to

animal experimentation accelerates medical device design by determining sensor specifications and optimization in a

rational process.

40 Multiple Services Throughput Optimization in a Hierarchical Middleware

Accessing the power of distributed resources can nowadays easily be done using a middleware based on a client/server

approach. Several architectures exist for those middlewares. The most scalable ones rely on a hierarchical design.

Determining the best shape for the hierarchy, the one giving the best throughput of services, is not an easy task. We first

propose a computation and communication model for such hierarchical middleware. Our model takes into account the

deployment of several services in the hierarchy. Then, based on this model, we propose algorithms for automatically

constructing a hierarchy on two kinds of heterogeneous platforms: communication homogeneous/computation

heterogeneous platforms, and fully heterogeneous platforms. The proposed algorithms aim at offering the users the best

obtained-to-requested throughput ratio, while providing fairness on this ratio for the different kinds of services, and using as

few resources as possible for the hierarchy. For each kind of platform, we compare our model with experimental results on


a real middleware called DIET (Distributed Interactive Engineering Toolbox).

41 Network-Friendly One-Sided Communication Through Multinode Cooperation on Petascale Cray XT5 Systems

One-sided communication is important to enable asynchronous communication and data movement for Global Address

Space (GAS) programming models. Such communication is typically realized through direct messages between initiator and

target processes. For petascale systems with 10,000s of nodes and 100,000s of cores, these direct messages require

dedicated communication buffers and/or channels, which can lead to significant scalability challenges for GAS programming

models. In this paper, we describe a network-friendly communication model, multinode cooperation, to enable indirect one-

sided communication. Compute nodes work together to handle one-sided requests through (1) request forwarding, in which

one node can intercept a request and forward it to a target node, and (2) request aggregation in which one node can

aggregate many requests to a target node. We have implemented multinode cooperation for a popular GAS runtime library,

Aggregate Remote Memory Copy Interface (ARMCI). Our experimental results on a large-scale Cray XT5 system demonstrate

that multinode cooperation is able to greatly increase memory scalability by reducing communication buffers required on

each node. In addition, multinode cooperation improves the resiliency of the GAS runtime system to network contention.

Furthermore, multinode cooperation can benefit the performance of scientific applications. In one case, it reduces the total

execution time of an NWChem application by 52%.
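The request-aggregation idea can be illustrated in a few lines: a cooperating node queues one-sided requests by target and emits one batched message per target instead of one message per request. The class and method names below are illustrative, not the ARMCI API.

```python
# Sketch of request aggregation for one-sided communication: batch
# requests per target node to cut the number of network messages.
from collections import defaultdict

class AggregatingForwarder:
    def __init__(self, send):
        self.send = send                  # send(target, batch): one network message
        self.pending = defaultdict(list)  # target node -> queued requests

    def put(self, target, request):
        """Queue a one-sided request instead of sending it immediately."""
        self.pending[target].append(request)

    def flush(self):
        """Emit one aggregated message per target and return the message count."""
        messages = 0
        for target, batch in self.pending.items():
            self.send(target, batch)
            messages += 1
        self.pending.clear()
        return messages

sent = []
fwd = AggregatingForwarder(lambda target, batch: sent.append((target, batch)))
for i in range(100):
    fwd.put(i % 4, ("PUT", i))  # 100 requests spread over 4 target nodes
messages = fwd.flush()          # 4 aggregated messages instead of 100
```

The same batching is what reduces per-node communication buffers: buffers are sized per target, not per outstanding request.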

42 Non-Cooperative Scheduling Considered Harmful in Collaborative Volunteer Computing Environments

Advances in inter-networking technology and computing components have enabled Volunteer Computing (VC) systems that

allow volunteers to donate their computers’ idle CPU cycles to a given project. BOINC is the most popular VC infrastructure

today with over 580,000 hosts that deliver over 2,300 TeraFLOP per day. BOINC projects usually have hundreds of thousands

of independent tasks and are interested in overall throughput. Each project has its own server which is responsible for

distributing work units to clients, recovering results and validating them. The BOINC scheduling algorithms are complex and

have been used for many years now. Their efficiency and fairness have been assessed in the context of throughput oriented

projects. Yet, recently, burst projects, with fewer tasks and interested in response time, have emerged. Many works have

proposed new scheduling algorithms to optimize individual response time but their use may be problematic in the presence of

other projects. In this article we show that the commonly used BOINC scheduling algorithms are unable to enforce fairness

and project isolation. Burst projects may dramatically impact the performance of all other projects (burst or non-burst). To

study such interactions, we perform a detailed, multi-player and multi-objective game theoretic study. Our analysis and

experiments provide a good understanding of the impact of the different scheduling parameters and show that the non-

cooperative optimization may result in an inefficient and unfair share of the resources.

43 Finite-Element-Based Discretization and Regularization Strategies for 3-D Inverse Electrocardiography

We consider the inverse electrocardiographic problem of computing epicardial potentials from a body-surface potential map.

We study how to improve numerical approximation of the inverse problem when the finite-element method is used. Being ill-

posed, the inverse problem requires different discretization strategies from its corresponding forward problem. We propose

refinement guidelines that specifically address the ill-posedness of the problem. The resulting guidelines necessitate the use

of hybrid finite elements composed of tetrahedra and prism elements. Also, in order to maintain consistent numerical quality

when the inverse problem is discretized into different scales, we propose a new family of regularizers using the variational

principle underlying finite-element methods. These variational-formed regularizers serve as an alternative to the traditional

Tikhonov regularizers, but preserve the L2 norm and thereby achieve consistent regularization in multiscale simulations.

The variational formulation also enables a simple construction of the discrete gradient operator over irregular meshes, which

is difficult to define in traditional discretization schemes. We validated our hybrid element technique and the variational

regularizers by simulations on a realistic 3-D torso/heart model with empirical heart data. Results show that discretization

based on our proposed strategies mitigates the ill-conditioning and improves the inverse solution, and that the variational


formulation may benefit a broader range of potential-based bioelectric problems.

44 On the Performance Variability of Production Cloud Services

Cloud computing is an emerging infrastructure paradigm that promises to eliminate the need for companies to maintain

expensive computing hardware. Through the use of virtualization and resource time-sharing, clouds serve a large user base

with diverse needs using a single set of physical resources. Thus, clouds have the potential to provide their owners the

benefits of an economy of scale and, at the same time, become an alternative for both the industry and the scientific

community to self-owned clusters, grids, and parallel production environments. For this potential to become reality, the first

generation of commercial clouds needs to be proven dependable. In this work we analyze the dependability of cloud

services. Towards this end, we analyze long-term performance traces from Amazon Web Services and Google App Engine,

currently two of the largest commercial clouds in production. We find that the performance of about half of the cloud

services we investigate exhibits yearly and daily patterns, but also that most services have periods of especially stable

performance. Last, through trace-based simulation we assess the impact of the variability observed for the studied cloud

services on three large-scale applications, job execution in scientific computing, virtual goods trading in social networks,

and state management in social gaming. We show that the impact of performance variability depends on the application, and

give evidence that performance variability can be an important factor in cloud provider selection.

45 On the Relation Between Congestion Control, Switch Arbitration and Fairness

In lossless interconnection networks such as InfiniBand, congestion control (CC) can be an effective mechanism to achieve

high performance and good utilization of network resources. The InfiniBand standard describes CC functionality for

detecting and resolving congestion, but the design decisions on how to implement this functionality are left to the hardware

designer. One must be cautious when making these design decisions not to introduce fairness problems, as our study

shows. In this paper we study the relationship between congestion control, switch arbitration, and fairness. Specifically, we

look at fairness among different traffic flows arriving at a hot spot switch on different input ports, as CC is turned on. In

addition we study the fairness among traffic flows at a switch where some flows are exclusive users of their input ports

while other flows are sharing an input port (the parking lot problem). Our results show that the implementation of congestion

control in a switch is vulnerable to unfairness if care is not taken. In detail, we found that a threshold hysteresis of more than

one MTU is needed to resolve arbitration unfairness. Furthermore, to fully solve the parking lot problem, proper

configuration of the CC parameters is required.

46 On the Scheduling of Checkpoints in Desktop Grids

Frequent resource failures are a major challenge for the rapid completion of batch jobs. Checkpointing and migration is

one approach to accelerating job completion while avoiding deadlock. We study the problem of scheduling checkpoints of

sequential jobs in the context of Desktop Grids, consisting of volunteered distributed resources. We craft a checkpoint

scheduling algorithm that is provably optimal for discrete time when failures obey any general probability distribution. We

show using simulations with parameters based on real-world systems that this optimal strategy scales and outperforms

other strategies significantly in terms of checkpointing costs and batch completion times.
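The paper's algorithm handles general failure distributions; as a much simpler classical baseline, Young's first-order approximation gives the checkpoint interval that minimizes expected overhead under exponential failures. The sketch below is that baseline, not the paper's optimal strategy, and the cost/MTBF figures are illustrative.

```python
# Young's approximation for the optimal checkpoint interval:
# tau = sqrt(2 * C * MTBF), where C is the checkpoint cost and
# MTBF the mean time between failures (exponential failure model).
import math

def young_interval(checkpoint_cost, mtbf):
    """Checkpoint interval minimizing first-order expected overhead."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def waste_fraction(tau, checkpoint_cost, mtbf):
    """First-order overhead: checkpointing cost plus expected rework."""
    return checkpoint_cost / tau + tau / (2.0 * mtbf)

# 60 s to write a checkpoint, one failure every 6 hours on average
tau = young_interval(checkpoint_cost=60.0, mtbf=6 * 3600.0)  # ~27 minutes
```

Checking neighboring intervals confirms `tau` sits at the minimum of the waste function, which is the kind of optimality the paper establishes for general distributions in discrete time.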

47 Parameter Exploration in Science and Engineering Using Many-Task Computing

Robust scientific methods require the exploration of the parameter space of a system (some of which can be run in parallel

on distributed resources), and may involve complete state space exploration, experimental design, or numerical optimization


techniques. Many-Task Computing (MTC) provides a framework for performing robust design, because it supports the

execution of a large number of otherwise independent processes. Further, scientific workflow engines facilitate the

specification and execution of complex software pipelines, such as those found in real science and engineering design

problems. However, most existing workflow engines do not support a wide range of experimentation techniques, nor do they

support a large number of independent tasks. In this paper, we discuss Nimrod/K—a set of add-in components and a new

run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the tagged

dataflow concepts developed in the 1980s for highly parallel machines. This is embodied in a new Kepler “Director” that

supports many-task computing by orchestrating the execution of tasks on clusters, Grids, and Clouds. Further, Nimrod/K

provides a set of “Actors” that facilitate the various modes of parameter exploration discussed above. We demonstrate the

power of Nimrod/K to solve real problems in cardiac science.
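A minimal many-task parameter sweep, in the spirit of the exploration "Actors" described above: every point in the Cartesian product of parameter ranges becomes an independent task. The `model` function is a hypothetical stand-in for a real simulation, and plain Python threads stand in for a workflow engine.

```python
# Complete state-space exploration as many independent tasks.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def model(alpha, beta):
    """Hypothetical simulation kernel; a real sweep would run a solver here."""
    return alpha * alpha + beta

def sweep(alphas, betas, workers=4):
    """Run the model at every point of the parameter grid, in parallel."""
    points = list(product(alphas, betas))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda p: model(*p), points))
    return dict(zip(points, results))

results = sweep(alphas=[0, 1, 2], betas=[10, 20])  # 6 independent tasks
```

Because the tasks are independent, the same pattern scales from a thread pool to clusters, Grids, and Clouds; only the executor changes.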

48 Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing

Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining

expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time

sharing, clouds serve a large user base with different needs using a single set of physical resources. Thus, clouds have the

potential to provide to their owners the benefits of an economy of scale and, at the same time, become an alternative for

scientists to clusters, grids, and parallel production environments. However, the current commercial clouds have been built

to support web and small database workloads, which are very different from typical scientific computing workloads.

Moreover, the use of virtualization and resource time sharing may introduce significant performance penalties for the

demanding scientific computing workloads. In this work, we analyze the performance of cloud computing services for

scientific computing workloads. We quantify the presence in real scientific computing workloads of Many-Task Computing

(MTC) users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific

goals. Then, we perform an empirical evaluation of the performance of four commercial cloud computing services including

Amazon EC2, which is currently the largest commercial cloud. Last, we compare through trace-based simulation the

performance characteristics and cost models of clouds and other scientific computing platforms, for general and MTC-based

scientific computing workloads. Our results indicate that the current clouds need an order of magnitude in performance

improvement to be useful to the scientific community, and show which improvements should be considered first to address

this discrepancy between offer and demand.

49 Predictive Data Grouping and Placement for Cloud-based Elastic Server Infrastructures

Workload variations on Internet platforms such as YouTube, Flickr, and LastFM require novel approaches to dynamic resource

provisioning in order to meet QoS requirements, while reducing the Total Cost of Ownership (TCO) of the infrastructures.

The economy of scale promise of cloud computing is a great opportunity to approach this problem, by developing elastic

large scale server infrastructures. However, a proactive approach to dynamic resource provisioning requires prediction

models forecasting future load patterns. On the other hand, unexpected volume and data spikes require reactive

provisioning for serving unexpected surges in workloads. When the workload cannot be predicted, adequate data grouping and

placement algorithms may facilitate agile scaling up and down of an infrastructure. In this paper, we analyze a dynamic

workload of an on-line music portal and present an elastic Web infrastructure that adapts to workload variations by

dynamically scaling up and down servers. The workload is predicted by an autoregressive model capturing trends and

seasonal patterns. Further, for enhancing data locality, we propose a predictive data grouping based on the history of

content access of a user community. Finally, in order to facilitate agile elasticity, we present a data placement based on

workload and access pattern prediction. The experimental results demonstrate that our forecasting model predicts workload

with a high precision. Further, the predictive data grouping and placement methods provide high locality, load balance and

high utilization of resources, allowing a server infrastructure to scale up and down depending on workload.
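A toy illustration of seasonal workload forecasting: a seasonal-naive predictor (repeat the observation from one period earlier) stands in for the paper's autoregressive model, and the hourly trace with a daily peak is synthetic.

```python
# Seasonal-naive forecasting: predict the next value as the observation
# one full season earlier. This captures daily patterns that a plain
# "repeat the last value" predictor misses.

def seasonal_naive(trace, period):
    """Forecast the next sample as the one observed one period ago."""
    return trace[-period]

def plain_naive(trace):
    """Forecast the next sample as the most recent one."""
    return trace[-1]

period = 24  # hourly samples, daily seasonality
# Synthetic trace: baseline load 100 with a spike of 150 at hour 20 each day.
trace = [100 + 50 * (h % period == 20) for h in range(165)]

# The trace ends just after a daily spike; the true next value is 100.
seasonal_forecast = seasonal_naive(trace, period)  # correctly predicts 100
naive_forecast = plain_naive(trace)                # wrongly repeats the spike, 150
```

An autoregressive model with seasonal terms, as used in the paper, generalizes this by weighting several past seasons and a trend instead of copying a single lagged value.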


50 Resource Allocation for Security Services in Mobile Cloud Computing

Mobile cloud is a machine-to-machine service model, where a mobile device can use the cloud for searching, data mining,

and multimedia processing. To protect the processed data, security services, i.e., encryption, decryption, authentications,

etc., are performed in the cloud. In general, we can classify cloud security services in two categories: Critical Security (CS)

service and Normal Security (NS) service. CS service provides strong security protection such as using longer key size,

strict security access policies, isolations for protecting data, and so on. The CS service usually occupies more cloud

computing resources; however, it generates more rewards for the cloud provider, since CS service users pay more

for using the CS service. As the number of CS and NS service users increases, it is important to allocate cloud

resources to maximize the system rewards while accounting for cloud resource consumption and the income

generated from cloud users. To address this issue, we propose a Security Service Admission Model (SSAM) based on Semi-

Markov Decision Process to model the system reward for the cloud provider. We, first, define system states by a tuple

represented by the numbers of cloud users and their associated security service categories, and current event type (i.e.,

arrival or departure). We then derive the system steady-state probability and service request blocking probability by using

the proposed SSAM. Numerical results show that the obtained theoretic probabilities are consistent with our simulation

results.
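For intuition on blocking probabilities, the classical Erlang-B formula, a far simpler model than the SMDP-based SSAM, gives the probability that a request finds all of a pool's resource units busy under Poisson arrivals. It is shown here only as a reference point; the parameters are illustrative.

```python
# Erlang-B blocking probability for c servers at offered load a = lambda/mu,
# computed with the numerically stable recursion
#   B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1)).

def erlang_b(servers, offered_load):
    """Probability that an arriving request finds all servers busy."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 10 resource units serving an offered load of 8 Erlangs -> ~12% blocking
p_block = erlang_b(servers=10, offered_load=8.0)
```

Adding capacity drives blocking down sharply, which is the basic trade-off an admission model weighs against the extra resource cost.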

51 Resource and Revenue Sharing with Coalition Formation of Cloud Providers: Game Theoretic Approach

In cloud computing, multiple cloud providers can cooperate to establish a resource pool to support internal users and to

offer services to public cloud users. In this paper, we study the cooperative behavior of multiple cloud providers. The

hierarchical cooperative game model is presented. First, given a group (i.e., coalition) of cloud providers, the resource and

revenue sharing of a resource pool is presented. To obtain the solution, we develop the stochastic linear programming

game model which takes the uncertainty of internal users from each provider into account. We show that the solution of the

stochastic linear programming game is the core of cooperation. Second, we analyze the stability of the coalition formation

among cloud providers based on coalitional game. The dynamic model of coalition formation is used to obtain stable

coalitional structures. The resource and revenue sharing and coalition formation of cloud providers are intertwined, and

the proposed hierarchical cooperative game model can be used to obtain the solution. An extensive performance

evaluation is performed to investigate the decision making of cloud providers when cooperation can lead to higher

profit.

52 Robust Execution of Service Workflows Using Redundancy and Advance Reservations

In this paper, we develop a novel algorithm that allows service consumers to execute business processes (or workflows) of

interdependent services in a dependable manner within tight time-constraints. In particular, we consider large

interorganizational service-oriented systems, where services are offered by external organizations that demand financial

remuneration and where their use has to be negotiated in advance using explicit service-level agreements (as is common in

Grids and cloud computing). Here, different providers often offer the same type of service at varying levels of quality and

price. Furthermore, some providers may be less trustworthy than others, possibly failing to meet their agreements. To

control this unreliability and ensure end-to-end dependability while maximizing the profit obtained from completing a

business process, our algorithm automatically selects the most suitable providers. Moreover, unlike existing work, it

reasons about the dependability properties of a workflow, and it controls these by using service redundancy for critical

tasks and by planning for contingencies. Finally, our algorithm reserves services for only parts of its workflow at any time, in

order to retain flexibility when failures occur. We show empirically that our algorithm consistently outperforms existing

approaches, achieving up to a 35-fold increase in profit and successfully completing most workflows, even when the

majority of providers fail.
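The redundancy argument can be quantified back-of-the-envelope: running a critical task on k independent providers, each succeeding with probability p, raises the task's success probability to 1 - (1-p)^k. The figures below are illustrative, not results from the paper.

```python
# Reliability gain from k-way service redundancy on critical tasks,
# assuming independent provider failures.

def task_success(p, k):
    """Probability at least one of k redundant providers succeeds."""
    return 1.0 - (1.0 - p) ** k

def workflow_success(p, k, n_critical):
    """A workflow of n_critical independent critical tasks, each k-way redundant."""
    return task_success(p, k) ** n_critical

# 5 critical tasks, providers honor agreements 90% of the time:
single = workflow_success(p=0.9, k=1, n_critical=5)     # no redundancy, ~0.59
redundant = workflow_success(p=0.9, k=2, n_critical=5)  # duplicated tasks, ~0.95
```

Doubling only the critical tasks lifts end-to-end dependability substantially, which is why the algorithm spends redundancy budget there rather than across the whole workflow.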


53 Modified Kinematic Technique for Measuring Pathological Hyperextension and Hypermobility of the Interphalangeal Joints

Dynamic finger joint motion is difficult to measure using optical motion analysis techniques due to the limited surface

area allowed for adequate marker placement. This paper describes an extension of a previously validated kinematic

measurement technique using a reduced surface marker set and outlines the required calculations based on a specific

surface marker placement to calculate flexion/extension and hyperextension of the metacarpophalangeal, proximal

interphalangeal, and distal interphalangeal joints. The modified technique has been assessed for accuracy using a

series of static reference frames (absolute residual error = ±3.7◦, cross correlation between new method and reference

frames; r = 0.99). The method was then applied to a small group of participants with rheumatoid arthritis (seven females,

one male; mean age = 62.8 years ± 12.04) and illustrated congruent strategies of movement for a participant and a large

range of finger joint movement over the sample (5.8–71.1◦, smallest to largest active range of motion). This method used

alongside the previous paper [1] provides a comprehensive, validated method for calculating 3-D wrist, hand, finger,

and thumb kinematics, and a valuable measurement tool for clinical research.

54 Role-Based Access-Control Using Reference Ontology in Clouds

In cloud computing, security is an important issue due to the increasing scale of users. Current approaches to access

control on clouds do not scale well to multi-tenancy requirements because they are mostly based on individual user IDs

at different granularity levels. However, the number of users can be enormous and causes significant overhead in

managing security. RBAC (Role-Based Access Control) is attractive because the number of roles is significantly smaller,

and users can be classified according to their roles. This paper proposes an RBAC model using a role ontology for Multi-

Tenancy Architecture (MTA) in clouds. The ontology is used to build up the role hierarchy for a specific domain.

Algorithms for ontology transformation operations are provided to compare the similarity of different ontologies. The

proposed framework can ease the design of security systems in the cloud and reduce the complexity of system design and

implementation.
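A minimal sketch of hierarchical RBAC, the mechanism the proposal builds on: a senior role inherits the permissions of its junior roles, so permissions are managed per role rather than per user. Role and permission names are made up for illustration; the paper's ontology machinery is not modeled.

```python
# Role-based access control with a role hierarchy: permission checks
# walk from a user's assigned roles down through inherited junior roles.

class Rbac:
    def __init__(self):
        self.juniors = {}     # role -> junior roles whose permissions it inherits
        self.perms = {}       # role -> directly granted permissions
        self.user_roles = {}  # user -> assigned roles

    def add_role(self, role, juniors=(), perms=()):
        self.juniors[role] = set(juniors)
        self.perms[role] = set(perms)

    def _reachable(self, role):
        """All roles whose permissions `role` inherits (including itself)."""
        seen, stack = set(), [role]
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack.extend(self.juniors.get(r, ()))
        return seen

    def allowed(self, user, perm):
        """True if any role reachable from the user's roles grants perm."""
        return any(perm in self.perms.get(r, set())
                   for assigned in self.user_roles.get(user, ())
                   for r in self._reachable(assigned))

rbac = Rbac()
rbac.add_role("employee", perms={"read"})
rbac.add_role("manager", juniors={"employee"}, perms={"approve"})
rbac.user_roles["alice"] = {"manager"}  # inherits "read" via the hierarchy
```

Because access decisions key on roles, adding a tenant's users only touches `user_roles`; the permission set stays fixed, which is the scalability point the abstract makes.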

55 SLA-based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments

Cloud computing has been considered as a solution for solving enterprise application distribution and configuration

challenges in the traditional software sales model. Migrating from traditional software to Cloud enables on-going

revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to

either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers

will incur extra costs. While minimizing the cost of resources, it is also important to satisfy a minimum service level for

customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize

infrastructure cost and SLA violations. Our proposed algorithms are designed in a way to ensure that SaaS providers are

able to manage the dynamic change of customers, mapping customer requests to infrastructure level parameters and

handling heterogeneity of Virtual Machines. We take into account the customers’ Quality of Service parameters such as

response time, and infrastructure level parameters such as service initiation time. This paper also presents an extensive

evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider’s cost and the

number of SLA violations in a dynamic resource-sharing Cloud environment.


56 Small Discrete Fourier Transforms on GPUs

Efficient implementations of the Discrete Fourier Transform (DFT) for GPUs provide good performance with large data

sizes, but are not competitive with CPU code for small data sizes. On the other hand, several applications perform

multiple DFTs on small data sizes. In fact, even algorithms for large data sizes use a divide-and-conquer approach, where

eventually small DFTs need to be performed. We discuss our DFT implementation, which is efficient for multiple small

DFTs. One feature of our implementation is the use of the asymptotically slow matrix multiplication approach for small

data sizes, which improves performance on the GPU due to its regular memory access and computational patterns. We

combine this algorithm with the mixed radix algorithm for 1-D, 2-D, and 3-D complex DFTs. We also demonstrate the

effect of different optimization techniques. When GPUs are used to accelerate a component of an application running on

the host, it is important that decisions taken to optimize the GPU performance not affect the performance of the rest of

the application on the host. One feature of our implementation is that we use a data layout that is not optimal for the

GPU so that the overall effect on the application is better. Our implementation performs up to two orders of magnitude

faster than cuFFT on an NVIDIA GeForce 9800 GTX GPU and up to one to two orders of magnitude faster than FFTW on a

CPU for multiple small DFTs. Furthermore, we show that our implementation can accelerate the performance of a

Quantum Monte Carlo application for which cuFFT is not effective. The primary contributions of this work lie in

demonstrating the utility of the matrix multiplication approach and also in providing an implementation that is efficient

for small DFTs when a GPU is used to accelerate an application running on the host.
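To make the matrix-multiplication idea concrete, the following NumPy sketch (not the paper's GPU code) shows how a batch of small DFTs reduces to a single dense matrix product. That regular, branch-free access pattern is what maps well onto GPU hardware, even though it costs O(n^2) per transform instead of O(n log n).

```python
import numpy as np

def dft_matrix(n):
    # Dense DFT matrix F with F[j, k] = exp(-2*pi*i*j*k / n).
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

def batched_small_dft(batch):
    # batch: (m, n) array holding m signals of length n.  One dense
    # matrix product transforms the whole batch at once; for small n
    # this regular pattern is what favors the GPU in the paper's setting.
    F = dft_matrix(batch.shape[1])
    return batch @ F.T
```

The matrix F can of course be precomputed once and reused across all batches of the same size.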

57 Neural Control of Posture During Small Magnitude Perturbations: Effects of Aging and Localized Muscle Fatigue

This study investigated the effects of aging and localized muscle fatigue on the neural control of upright stance during small postural perturbations. Sixteen young (aged 18–24 years) and 16 older (aged 55–74 years) participants were exposed to small magnitude, anteriorly-directed postural perturbations before and after fatiguing exercises (lumbar extensors and ankle plantar flexors). A single degree of freedom model of the human body was used to simulate recovery kinematics following the perturbations. Central to the model was a simulated neural controller that multiplied time-delayed kinematics by invariant feedback gains. Feedback gains and time delay were optimized for each participant based on measured kinematics, and a novel delay margin analysis was performed to assess system robustness. A 10.9% longer effective time delay (p = 0.010) was found among the older group, who also showed a greater reliance upon velocity feedback information (31.1% higher differential gain, p = 0.001) to control upright stance. Based on delay margins, older participants adopted a more robust control scheme to accommodate the small perturbations, potentially compensating for longer time delays or degraded sensory feedback. No fatigue-induced changes in neural controller gains, time delay, or delay margin were found in either age group, indicating that integration of this feedback information was not altered by muscle fatigue. The sensitivity of this approach to changes with fatigue may have been limited by model simplifications.
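A minimal sketch of the kind of model described: a single degree of freedom inverted pendulum stabilized by a controller that multiplies time-delayed angle and angular velocity by fixed gains. All parameter values below are hypothetical placeholders, not the study's per-participant fitted values.

```python
import numpy as np

# Illustrative single-DOF stance model with delayed feedback control.
I, MGH = 80.0, 600.0      # body inertia (kg*m^2), gravitational stiffness (N*m/rad)
KP, KD = 900.0, 300.0     # proportional and differential feedback gains
DELAY, DT = 0.15, 0.001   # effective neural time delay (s), integration step (s)

def simulate(theta0=0.02, duration=3.0):
    n, lag = int(duration / DT), int(DELAY / DT)
    theta, omega = np.zeros(n), np.zeros(n)
    theta[0] = theta0                             # small initial lean (rad)
    for t in range(n - 1):
        td = max(t - lag, 0)                      # controller sees delayed kinematics
        torque = -KP * theta[td] - KD * omega[td] # invariant gains * delayed state
        alpha = (MGH * theta[t] + torque) / I     # gravity destabilizes upright stance
        omega[t + 1] = omega[t] + alpha * DT      # semi-implicit Euler step
        theta[t + 1] = theta[t] + omega[t + 1] * DT
    return theta
```

With these placeholder gains the simulated body returns toward upright; shrinking the gains or lengthening the delay past its margin makes the same loop diverge, which is the robustness trade-off the delay margin analysis quantifies.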

58 Techniques for fine-grained, multi-site computation offloading

Increasingly, mobile devices are becoming the preferred computation platform for many users. Unfortunately, their resource limitations in battery life, computation power, and storage restrict the richness of the applications that can be run on such devices. To alleviate these concerns, a popular approach that has gained currency in recent years is computation offloading, where a portion of an application is run off-site, leveraging the far greater resources of the cloud. Prior work in this area has focused on a constrained form of the problem: a single mobile device offloading computation to a single server. However, with the increased popularity of cloud computing and storage, it is more common for the data that an application accesses to be distributed among several servers. This paper describes algorithmic approaches for performing fine-grained, multi-site offloading. This allows portions of an application to be offloaded in a data-centric manner, even if the data exists at multiple sites. Our approach is based on a novel partitioning algorithm and a novel data representation. We demonstrate that our partitioning algorithm outperforms existing multi-site offloading algorithms, and that our data representation provides for more efficient, fine-grained offloading than prior approaches.

59 The Benefits of Estimated Global Information in DHT Load Balancing

Distributed hash tables (DHT) often rely on uniform hashing for balancing the load among their nodes. However, the most overloaded node may still have a load up to O(logN) times higher than the average load [1]. DHTs with support for range queries cannot rely on hashing to fairly balance the system's load, since hashing destroys the order of the stored items. Ensuring a fair load distribution is vital to avoid individual nodes becoming overloaded, potentially leading to node crashes or an incentive not to participate in the system. In both scenarios, explicit load balancing schemes can help to spread the load more evenly. In this paper, we improve on existing algorithms for item-based active load balancing by relying on approximations of global properties. We show that the algorithms can be made more efficient by incorporating estimates of properties such as the average load and the standard deviation. Our algorithms reduce the network traffic induced by load balancing while achieving a better load balance than standard algorithms. We also show that these improvements can be applied to passive load balancing algorithms. Compared to DHTs without explicit load balancing, both variants are able to reduce the total maintenance traffic, i.e. item movements due to churn and load balancing, by up to 18%. At the same time, the system achieves a better load distribution.
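As a rough illustration of the core idea (not the paper's algorithms), the sketch below has a node trigger costly item movement only when its load sits well above an estimated global mean, using an estimated standard deviation as the threshold. The rebalancing policy and the threshold parameter `k` are hypothetical.

```python
import statistics

def should_move(own_load, est_mean, est_std, k=1.0):
    # Trigger (costly) item movement only when this node's load is
    # k standard deviations above the *estimated* global average,
    # instead of comparing blindly against a sampled neighbour.
    return own_load > est_mean + k * est_std

def rebalance_round(loads, k=1.0):
    # Toy round: each overloaded node hands its surplus above the
    # estimated mean to the currently lightest node.
    est_mean = statistics.mean(loads)
    est_std = statistics.pstdev(loads)
    new, moved = list(loads), 0.0
    for i in range(len(new)):
        if should_move(new[i], est_mean, est_std, k):
            surplus = new[i] - est_mean
            j = min(range(len(new)), key=lambda x: new[x])  # lightest node
            new[i] -= surplus
            new[j] += surplus
            moved += surplus
    return new, moved
```

In a real DHT the mean and deviation would be gossip-based estimates rather than exact values; the point is that a global-aware threshold suppresses item movements that a purely local comparison would perform needlessly.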

60 Towards Real-Time, Volunteer Distributed Computing

Many large-scale distributed computing applications demand real-time responses within soft deadlines. To enable such real-time task distribution and execution on volunteer resources, we previously proposed the design of a real-time volunteer computing platform called RT-BOINC. The system gives a low O(1) worst-case execution time for task-management operations, such as task scheduling, state transitioning, and validation. In this work, we present a full implementation of RT-BOINC, adding new features including a deadline timer and parameter-based admission control. We evaluate RT-BOINC at large scale using two real-time applications, namely the games Go and Chess. The results of our case study show that RT-BOINC provides much better performance than the original BOINC in terms of average and worst-case response time, scalability, and efficiency.
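RT-BOINC's actual data structures are not reproduced here; this sketch shows one standard way to obtain O(1) worst-case task-management operations: a preallocated slot pool with a free-list and an index, which also yields a natural admission-control point when the pool is full. The class and field names are illustrative.

```python
class TaskPool:
    # Preallocated fixed-capacity pool: admit, transition, and finish
    # are all O(1), bounding worst-case latency for task management.
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.free = list(range(capacity))   # stack of free slot indices
        self.index = {}                     # task_id -> slot number

    def admit(self, task_id, deadline):
        if not self.free:
            return False                    # admission control: pool is full
        slot = self.free.pop()
        self.slots[slot] = {"id": task_id, "deadline": deadline, "state": "READY"}
        self.index[task_id] = slot
        return True

    def transition(self, task_id, state):
        # O(1) state transition via the index, no list scans.
        self.slots[self.index[task_id]]["state"] = state

    def finish(self, task_id):
        slot = self.index.pop(task_id)
        self.slots[slot] = None
        self.free.append(slot)
```

The key design choice is that no operation ever iterates over the task set, so the worst case does not grow with the number of volunteers or queued tasks.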

61 Towards Reliable, Performant Workflows for Streaming Applications on Cloud Platforms

Scientific workflows are commonplace in eScience applications. Yet the lack of integrated support for data models, including streaming data, structured collections, and files, limits the ability of workflows to support emerging applications in energy informatics that are stream oriented. This is compounded by the absence of Cloud data services that support reliable and performant streams. In this paper, we propose and present a scientific workflow framework that supports streams as first-class data and is optimized for performant and reliable execution across desktop and Cloud platforms. The workflow framework's features and its empirical evaluation on a private Eucalyptus Cloud are presented.

62 Towards Secure and Dependable Storage Services in Cloud Computing

Cloud storage enables users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks to the correctness of the data in the cloud. To address this problem and achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing homomorphic tokens and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of the misbehaving server. Since cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
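The scheme itself is not reproduced here, but the flavor of token-based auditing with error localization can be sketched as follows: the user precomputes, per server, a secret random linear combination of the stored blocks, then later challenges each server to recompute it. A mismatch both detects corruption and identifies which server misbehaved. This toy uses plain modular arithmetic, not the paper's actual construction.

```python
import random

P = 2**31 - 1  # toy prime modulus; illustrative only

def make_token(blocks, coeffs):
    # Token: a secret random linear combination of the data blocks,
    # precomputed by the user before outsourcing.
    return sum(c * b for c, b in zip(coeffs, blocks)) % P

def audit(server_blocks, coeffs, expected_tokens):
    # Challenge every server for the combination over its own share;
    # a mismatch localizes the misbehaving server by index.
    return [i for i, (blocks, tok) in enumerate(zip(server_blocks, expected_tokens))
            if make_token(blocks, coeffs) != tok]
```

Because the combination is linear, any single-block change shifts the token by a nonzero multiple of the corresponding secret coefficient, so silent corruption cannot pass the audit.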

63 Utilizing “Opaque” Resources for Revenue Enhancement on Clouds and Grids

Clouds and grids are distributed resource infrastructures that are subject to both On Demand (OD) and Advance Reservation (AR) requests. This paper focuses on the revenue that can be earned on cloud and grid systems using different matchmaking strategies that map requests to resources. Existing research on matchmaking uses a priori knowledge of the local scheduling policy used at each resource. However, in these dynamic and heterogeneous systems, having detailed a priori knowledge about all the resources is not always feasible. Thus, traditional matchmaking strategies cannot use the resources for which the local scheduling policies are unknown. This paper discusses how a novel matchmaking strategy that does not use knowledge of the resources' local scheduling policies may be deployed in order to utilize these unused resources and improve the revenue earned by the service-providing enterprise. Insights into system behaviour and revenue earned, gained from analytic bounds and simulation, are presented.
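As a purely hypothetical sketch of why opaque resources matter for revenue (the paper's actual strategy is not reproduced here): a matchmaker that falls back to resources with unknown local policies, instead of rejecting requests once the well-understood resources are exhausted, captures revenue a traditional matchmaker forgoes. The prices and discount are illustrative.

```python
def matchmake(num_requests, known_free, opaque_free,
              price=1.0, opaque_discount=0.8):
    # Prefer resources whose local scheduling policy is known; once those
    # are exhausted, fall back to "opaque" resources at a hypothetical
    # risk-adjusted price rather than rejecting the request outright.
    revenue = 0.0
    for _ in range(num_requests):
        if known_free > 0:
            known_free -= 1
            revenue += price
        elif opaque_free > 0:
            opaque_free -= 1
            revenue += price * opaque_discount
        # else: request rejected, no revenue earned
    return revenue
```

A traditional matchmaker corresponds to `opaque_free = 0`: with the same request stream it simply earns less once its known resources fill up.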
