
Author's Accepted Manuscript

A study on critical analysis of computational offloading frameworks for mobile cloud computing

Muhammad Shiraz, Mehdi Sookhak, Abdullah Gani, Syed Adeel Ali Shah

PII: S1084-8045(14)00210-0
DOI: http://dx.doi.org/10.1016/j.jnca.2014.08.011
Reference: YJNCA1295

To appear in: Journal of Network and Computer Applications

Received date: 17 January 2014
Revised date: 24 August 2014
Accepted date: 26 August 2014

Cite this article as: Muhammad Shiraz, Mehdi Sookhak, Abdullah Gani, Syed Adeel Ali Shah, A study on critical analysis of computational offloading frameworks for mobile cloud computing, Journal of Network and Computer Applications, http://dx.doi.org/10.1016/j.jnca.2014.08.011

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

www.elsevier.com/locate/jnca


A Study on Critical Analysis of Computational Offloading Frameworks for Mobile Cloud Computing

Muhammad Shiraz*, Mehdi Sookhak, Abdullah Gani, Syed Adeel Ali Shah

Mobile Cloud Computing Research Lab, Faculty of Computer Science and Information Technology

University of Malaya, Kuala Lumpur, Malaysia

Email: *[email protected], [email protected], [email protected], [email protected]

Abstract

Despite substantial technological advancements in recent years, Smart Mobile Devices (SMDs) remain computing devices with limited potential. Mobile Cloud Computing (MCC) therefore deploys computational offloading to augment SMDs. Contemporary Computational Offloading Frameworks (COFs) implement resource-intensive procedures for computational offloading, which involve the overhead of transmitting the application binary code and deploying a distributed platform at runtime. As a result, the energy consumption cost and turnaround time of the mobile application and the overhead of data transmission increase. The resource-limited nature of SMDs, however, requires lightweight techniques for leveraging the application processing services of computational clouds. This paper critically analyzes the resource-intensive nature of the latest existing computational offloading techniques for MCC and highlights technical issues in the establishment of a distributed application processing platform at runtime. A prototype application is evaluated with different computational intensities in a real MCC environment. Analysis of the results shows that additional computing resources are utilized in deploying the distributed application processing platform at runtime: 31.6% additional energy is consumed, 39% additional time is taken and 13241.2 KB of data is transmitted in offloading different computationally intensive components of the prototype mobile application. Finally, we highlight technical issues in the existing computational offloading techniques for MCC, which draw attention to future research in computational offloading for MCC and assist in developing lightweight procedures for computational offloading in MCC.

Keywords: Mobile Cloud Computing, Distributed Systems, Computational offloading, Resource Intensive, Lightweight

1 Introduction

The latest developments in mobile computing technology have changed user preferences for computing. A Juniper Research report states that the consumer and enterprise market for cloud based mobile applications was expected to reach $9.5 billion by 2014 [1], which is evidence of the increasing use of MCC. Recently, a number of computing and communication devices, such as PDAs, digital cameras, Internet browsing devices, and global positioning systems, have been replaced by smartphones as all-in-one ubiquitous computing devices [2]. Human dependency on contemporary smartphones has increased rapidly in various domains including enterprise, e-learning, entertainment, gaming, management information systems, and healthcare [3]. Mobile devices are predicted to become the dominant future computing devices, with high user expectations for accessing computationally intensive applications analogous to powerful stationary computing machines. However, in spite of all the advancements of recent years, mobile applications on the latest generation of smartphones and tablets are still restricted by the battery power, CPU potential and memory capacity of SMDs [4]. The latest developments in cloud computing make it possible to increase the computing capabilities of resource-constrained client devices by accessing leased infrastructure and software applications. Computational clouds employ diverse IT business models for the provisioning of computing services, such as on-demand, pay-as-you-go, and utility computing [5, 6]. For example, Amazon web services are utilized to store personal data through the Simple Storage Service (S3) [7], and the Elastic Compute Cloud is employed for application processing services. MCC enables computationally intensive and ubiquitous mobile applications by leveraging the services of computational clouds.

MCC utilizes the application processing services of computational clouds for the processing of computationally intensive mobile applications. Recently, a number of COFs have been proposed for the processing of computationally intensive mobile applications in MCC [8-11]. For instance, Apple iCloud [12] and the Amazon Silk [13] browser are recent mobile applications which leverage the services of computational clouds for application processing. Computational offloading is employed as a significant application layer solution for enabling intensive applications on SMDs [4]. For instance, MAUI saves 27% energy for a video game and 45% for chess [8]. Similarly, by employing the ASM framework [11] for computational offloading, RAM utilization on the mobile device is reduced by 72%, CPU utilization is decreased by 99%, the turnaround time of the application is reduced by 45% and the energy consumption cost of the application is reduced by up to 33%.

However, the establishment of an ad-hoc distributed application processing platform and runtime component migration in current COFs result in additional computing resource utilization on SMDs. Runtime intensive component offloading involves the cost of migrating the components of the mobile application [8-11, 15], which includes computational resource utilization in transferring the application binary file and the data file of the running instances of the mobile application. Similarly, a number of application offloading frameworks implement dynamic application profiling and partitioning techniques for application offloading [8, 15-17], which increase memory allocation, the turnaround time of the application and energy consumption on the mobile device. This paper critically analyzes the resource-intensive nature of existing computational offloading frameworks [10, 11] for MCC and highlights technical issues in the establishment of a distributed application processing platform at runtime. Computational offloading is employed in a real distributed MCC environment and the prototype application is evaluated with different computational intensities.

The following are the significant contributions of this paper. (a) Establishing the fact that additional computing resources are utilized in the deployment of the distributed application processing platform at runtime, which increases the size of data transmission, the energy consumption cost and the turnaround time of the application. Analysis of the results shows that 31.6% additional energy is consumed, 39% additional time is taken and 13241.2 KB of data is transmitted in offloading different computationally intensive components of the prototype mobile application. (b) Highlighting the addressable technical issues in the deployment and management of the distributed application processing platform in computational offloading for MCC, which assists in exploring optimal solutions for leveraging the application processing services of computational clouds for augmenting SMDs.

The paper is organized into the following sections. Section 2 explains fundamental background concepts and terminologies including MCC, computational offloading, runtime component migration, and the distributed application processing platform. Section 3 presents a review of current offloading frameworks for MCC. Section 4 discusses the methodology used for experimentation and evaluation of the overhead in runtime computational offloading. Section 5 presents the analytical findings by evaluating experimental results. Experimental results are categorized into three different sections, one for each measurement parameter: section 5.1 analyzes the energy consumption cost, section 5.2 investigates the timing cost and section 5.3 presents the data transmission cost of offloading three components of the prototype application at runtime. Section 6 highlights the technical issues in computational offloading from the perspective of the deployment of the delegated mobile application on the cloud server node and the resource-intensive features of current COFs. Finally, section 7 draws concluding remarks and future directions.

2 Background

Mobile cloud computing is the latest distributed computing model, which extends the utility computing vision of computational clouds to SMDs. MCC bridges the disparity between the computing resources of SMDs and the processing requirements of intensive applications on SMDs. Recently, a number of computational offloading algorithms have been proposed for outsourcing intensive applications to remote servers. Renowned examples of distributed models for offloading algorithms are: the decentralized virtual cloud computing environment for mobile devices, the centralized cloud computing environment for mobile devices, and the cloud datacenter based cloud computing environment [4]. In the first two paradigms, mobile devices are enabled to provide distributed computing services; whereas in the third paradigm traditional cloud services are leveraged, wherein diverse service models of the computational cloud are utilized for mitigating resource limitations of SMDs. Centralized applications, services and resources are accessed over wireless network technologies by employing a web browser on the SMD. MCC attracts the attention of businesspersons as a profitable business option that reduces the development and execution cost of mobile applications, and mobile users can conveniently acquire the latest technology on an on-demand basis. MCC enables a rich experience of a variety of cloud services for SMDs at low cost on the move. The datacenter based MCC architecture is composed of three major components, SMDs, the Internet and the computational cloud, as shown in Fig. 1. In MCC, the distributed services provided by cloud datacenters include off-device storage, processing, queuing capabilities, and security mechanisms to integrate mobile devices with the cloud environment.


Fig. 1 General Model of Mobile Cloud Computing [4]

The mechanism of outsourcing a computationally intensive application (entirely or partially) to a remote server is called computational offloading. In MCC, application offloading is deployed to cope with the challenge of executing computationally intensive applications on SMDs. In recent years, a number of computational offloading frameworks have been proposed [10, 11, 18, 19] for computationally intensive mobile applications which are elastic in nature. Elastic mobile applications are attributed with the feature of runtime partitioning: they are partitioned at different granularity levels at runtime for the establishment of the distributed processing platform. The distributed application processing platform is composed of the SMD, which runs the mobile application locally, the wireless network medium, which can be either a cellular network (3G, LTE) or a datagram network (Wi-Fi), and the remote cloud server node, as shown in Fig. 1. Partitions of the application are offloaded (migrated) to remote machines for remote execution, which augments the computing capabilities of SMDs. Application offloading is performed while considering different objective functions including energy saving, processing power, memory storage, and fast execution. Current dynamic partitioning approaches analyze the resource utilization on the SMD and the computational requirements of the mobile application, and search for a runtime solution to resource limitations on the SMD. The profiling mechanism evaluates the computing resource requirements of the mobile application and the availability of resources on the SMD. In the scenario of insufficient resources on the SMD, the elastic mobile application is partitioned and the computationally intensive components of the application are offloaded dynamically at runtime. SMDs negotiate with cloud servers for the selection of an appropriate server node, and the partitions of the application are migrated to the remote server node for remote processing.
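The offload-or-not decision at the heart of such dynamic partitioning can be sketched as a simple cost comparison. The following is an illustrative sketch only, not an algorithm taken from any of the surveyed frameworks; all class names, method names and cost figures are hypothetical placeholders.

```java
// Illustrative sketch (hypothetical, not from any surveyed COF): a
// profiler-style decision that offloads a component only when the
// estimated offloading energy undercuts local execution energy.
public class OffloadDecision {

    // Estimated energy (J) to execute a component locally on the SMD.
    static double localEnergy(double cpuCycles, double joulesPerCycle) {
        return cpuCycles * joulesPerCycle;
    }

    // Estimated energy (J) to offload: transmitting input and result
    // data over the wireless medium, plus idle energy while waiting.
    static double offloadEnergy(double bytesToSend, double bytesToReceive,
                                double joulesPerByte, double idleJoules) {
        return (bytesToSend + bytesToReceive) * joulesPerByte + idleJoules;
    }

    // Offload only when remote execution is estimated to cost less.
    static boolean shouldOffload(double eLocal, double eOffload) {
        return eOffload < eLocal;
    }

    public static void main(String[] args) {
        double eLocal = localEnergy(2.0e9, 1.0e-8);                 // 20 J
        double eOff = offloadEnergy(500_000, 50_000, 1.0e-5, 3.0);  // 8.5 J
        System.out.println(shouldOffload(eLocal, eOff) ? "offload" : "local");
    }
}
```

In practice the profiler feeds such estimates to a solver continuously or periodically, which is precisely the monitoring overhead this paper quantifies.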
Application offloading is regarded as a possible solution for enabling computationally intensive applications on SMDs in the distributed MCC environment. However, computational offloading is obstructed by a number of unresolved challenges. The latest computational offloading frameworks [8, 9, 12, 28] emphasize the configuration of the distributed application processing platform dynamically at runtime. SMDs negotiate with the cloud dynamically at runtime for the selection of the cloud server node. Therefore, the establishment of the distributed processing platform at runtime is a resource-intensive and energy consuming mechanism [4].

3 Review of Current Computational Offloading Frameworks for MCC

MCC utilizes the computing power of cloud datacenters by offloading the computational load to cloud server nodes [22-24]. Recently, a number of cloud server based application offloading frameworks have been proposed for outsourcing the computationally intensive components of mobile applications, partially or entirely, to cloud datacenters [8-11]. Current COFs implement computational offloading at different granularity levels such as object, class, component, bundle, thread, method and task. In [10, 11] module level offloading is employed for the migration of an entire module of the application. In [8] method level granularity is employed for the migration of pre-annotated intensive methods of the application. In [18, 19] thread level granularity is used for separating the intensive logic of the application at runtime. In [23, 24] class level granularity is employed for runtime computational offloading. In [25, 26] component level partitioning is employed, which indicates that a group of classes is offloaded to the remote server at runtime. Fine-grained granularity requires a highly intensive monitoring mechanism on the SMD and an intensive synchronization mechanism between the SMD and the remote cloud server node. The coarse level of granularity


results in a simple offloading mechanism [11, 18]. Nevertheless, coarse level granularity increases the data transmission overhead between the SMD and remote servers [11]. COFs implement application partitioning either statically or dynamically. Static application partitioning involves a one-time application partitioning mechanism for the distribution of workload between the SMD and the cloud server node. The intensive components of the application are partitioned and transferred to the remote server node. For example, MISCO [27] implements a static partitioning approach for the classification of the application processing load between Map and Reduce functions. The Map function is applied on the set of input data and produces <key, value> pairs which are grouped into a number of partitions. The intermediate results of every partition are passed to a Reduce function which returns the final results. Application developers classify the functionalities of the application as Map and Reduce functions. Mobile devices serve as worker nodes and are monitored through a centralized master server. The worker nodes provide the services of processing Map and Reduce functions. Static partitioning is a lightweight mechanism for the distribution of workload between the SMD and the cloud server node. However, it cannot cope with the dynamic processing load of the mobile device. Therefore, the latest application offloading frameworks implement dynamic partitioning mechanisms [18]. Dynamic partitioning techniques evaluate the availability of resources on the mobile device and the requirements of the mobile application continuously or periodically. In [14] a middleware framework is proposed for the dynamic distribution of the application processing load between the SMD and the cloud server node. The framework deploys application partitioning in optimal mode and dynamically determines the execution location for the modules of the mobile application. The processing workload is distributed as per the statistics of the resources available on the SMD.
The framework determines an optimal solution to an optimization problem in order to optimize different objective functions including interaction time, communication cost, and memory consumption. However, the runtime partitioning strategy consumes additional computing resources in the dynamic application profiling and partitioning mechanism.

Mobile Assistance Using Infrastructure (MAUI) [8] focuses on energy saving for the SMD. Application developers identify the local and remote components of the application at design time. The MAUI profiler determines the feasibility of each remotely annotated method for offload processing. Each time a method is called, the profiler component evaluates it for energy saving, which utilizes additional computing resources (CPU, battery) on the SMD. The MAUI solver operates on the input provided by the application profiler and determines the execution destination for each method annotated as remote. MAUI implements application level partitioning for outsourcing the computational load of the SMD. However, the mechanism of application profiling and solving at runtime involves additional computing resource utilization for application partitioning. Development of applications on the basis of MAUI requires additional development effort for annotating the execution pattern of each individual method of the application. MAUI involves the overhead of dynamic application profiling, solving, partitioning, migration, and reintegration on the SMD. In CloneCloud [28], the partitioning and reintegration of the application occur at the application level. The partitioning phase of the framework includes static analysis, dynamic application profiling, and an optimization solution. A preprocess migratory thread is implemented on mobile devices to assist in the partitioning and reintegration of the thread states. The elastic application model [9] provides a middleware framework for mobile applications. The application is dynamically partitioned into weblets which are migrated dynamically to the cloud server node. The framework implements different elastic patterns for the replication of weblets on the remote cloud.
It considers different parameters for the offloading of weblets, including the status of the mobile device, the cloud, application performance measures and user preferences, which comprise power saving mode, high speed mode, low cost mode and offload mode. The framework implements a resource-intensive mechanism for runtime application partitioning and the migration of weblets between the SMD and remote cloud nodes. It involves additional resource utilization on the SMD in the process of application profiling, dynamic runtime partitioning, weblet migration and reintegration, and continuous synchronization with the cloud server node for the entire duration of application processing.

Current COFs accomplish the distributed application processing platform for the processing of intensive mobile applications in diverse modes. Several approaches deploy VM migration [15, 30]; others focus on which part(s) of the application to offload [8, 16]. A number of approaches implement dynamic application partitioning [8, 9], whereas others focus on static partitioning [27]. Diverse objective functions are considered [4]: saving processing power, efficient bandwidth utilization, saving energy consumption [8], user preferences and execution cost [9]. Recently, a number of mobile cloud applications have emerged which employ cloud computing to mitigate the resource constraints of SMDs [30]. Apple's iCloud [12] provides automatic on-demand access to applications such as music, photos, apps, calendars and documents. Amazon EC2 and Microsoft Azure host the application store of Apple's iCloud. Similarly, Amazon released the Silk application [13], a cloud-accelerated web browser. Silk is a "split browser" which resides on both the Kindle Fire and EC2. For each web page request, Silk dynamically determines the distribution of the computational load between the local SMD and remote Amazon EC2. Silk considers the objective functions of network conditions, page complexity and the location of any cached content.


Computational offloading is employed as a significant application layer solution for enabling intensive applications on SMDs. However, the establishment of the distributed application processing platform and runtime component migration in current COFs result in additional computing resource utilization on SMDs. Runtime intensive component offloading involves the cost of migrating the components of the mobile application [8-11, 15], which includes computational resource utilization in transferring the application binary file and the data file of the running instances of the mobile application. Similarly, a number of application offloading frameworks implement dynamic application profiling and partitioning techniques for application offloading [8, 15-17], which increase memory allocation, the turnaround time of the application and energy consumption on the mobile device. For instance, it is shown that application profiling requires an additional 8192 KB of RAM for maintaining a temporary trace file [18]. Current COFs focus on which components of the application to offload, how to offload and where to offload the intensive components of the application [18]. Therefore, a resource-intensive distributed platform is established at runtime, which results in high energy consumption and a longer turnaround time for intensive mobile applications. This paper critically analyzes the resource-intensive nature of existing computational offloading techniques for MCC and highlights technical issues in the establishment of the distributed application processing platform at runtime. We analyze additional computing resource utilization by deploying the latest existing computational offloading frameworks [10, 11] for MCC. Computational offloading is employed in a real distributed MCC environment and the prototype application is evaluated with different computational intensities.

4 Methodology

We evaluate the additional computing resource utilization in runtime computational offloading by testing a prototype application for the Android platform in a real MCC environment. The experimental setup is composed of a remote server node, a Wi-Fi wireless network and a Samsung Galaxy S II mobile device. A virtual machine instance is created on the cloud server node, which runs a virtual device instance of the Android Virtual Device (AVD) employed on the server machine for the execution of the delegated components of the application. The mobile device accesses the wireless network via a Wi-Fi connection of radio type 802.11g, with an available physical layer data rate of 54 Mbps. The Samsung smartphone runs Android 4.0.3 on a dual core ARMv7 application processor with 1.2 GHz speed, 16 GB memory capacity and a 1650 mAh battery. The Java based Android software development kit (Android SDK) is deployed for the development of the prototype application. Monitoring tools such as the Android Debug Bridge (ADB) and the Dalvik Debug Monitor Server (DDMS) are used for the measurement of resource utilization (CPU and RAM), whereas the PowerTutor tool [31] is used for the measurement of battery power consumption in distributed application processing. Fig. 2 shows a general model of the experimental setup for deploying the prototype application. A virtual machine instance is created on the cloud server node, wherein a virtual mobile device instance is deployed for the execution of the delegated components of the mobile application. The application on the mobile device arbitrates with the cloud server node for offloading intensive components and synchronizing the execution of the mobile application in the distributed MCC environment.

Fig. 2 Model of Experimental Setup

The prototype application is composed of three computationally intensive service components, as follows. (a) The sorting service component implements the logic of bubble sort for sorting a linear list of integer values. The sorting logic of the application is tested with 30 different computational intensities (list lengths of 11000-40000). (b) The matrix multiplication service of the application implements the logic of computing the product of 2-D arrays of integer values. The matrix multiplication logic of the application is tested with 30 different computational intensities by varying the size of the 2-D array between 160*160 and 450*450. (c) The power compute service of the application implements the logic of computing b^e, wherein b is the base and e is the exponent. The power compute logic of the application is tested with 30 different computational intensities by varying the exponent between 1000000 and 200000000.
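The three workloads above can be sketched as follows. This is our own minimal reconstruction for illustration (the paper does not publish the prototype's source); the method names are ours, and the power service is shown with long arithmetic, which overflows for the large exponents above and is meant only to reproduce the computational load, not exact values.

```java
import java.util.Arrays;

// Minimal sketches of the three service components (assumed shapes, not
// the paper's actual code): bubble sort, matrix product, and b^e.
public class PrototypeServices {

    // (a) Sorting service: bubble sort over a linear list of integers.
    static void bubbleSort(int[] list) {
        for (int i = 0; i < list.length - 1; i++)
            for (int j = 0; j < list.length - 1 - i; j++)
                if (list[j] > list[j + 1]) {
                    int tmp = list[j];
                    list[j] = list[j + 1];
                    list[j + 1] = tmp;
                }
    }

    // (b) Matrix multiplication service: product of two n*n int matrices.
    static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // (c) Power compute service: b^e by repeated multiplication.
    // Long arithmetic overflows for the exponents used in the experiments;
    // only the loop's computational intensity matters here.
    static long power(long base, long exponent) {
        long result = 1;
        for (long i = 0; i < exponent; i++) result *= base;
        return result;
    }

    public static void main(String[] args) {
        int[] list = {5, 1, 4, 2, 3};
        bubbleSort(list);
        System.out.println(Arrays.toString(list)); // [1, 2, 3, 4, 5]
        System.out.println(power(2, 10));          // 1024
    }
}
```

All three are deliberately naive (O(n^2) sort, triple-loop product, linear-time power) so that the computational intensity scales predictably with the input-size parameters listed above.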

Empirical data are collected by sampling all computational intensities of the application in 30 different experiments, and the sample mean is reported with a 99% confidence interval for the sample space of 30 values in each experiment. The additional overhead in runtime component offloading results in high energy consumption and a longer turnaround time for intensive mobile applications. Therefore, the measurement parameters for evaluating the overhead of runtime computational offloading include the total Energy consumption Cost (Ec), the Timing Cost (Tc) and the size of data transmission. Computational offloading is implemented by offloading running instances of the service components of the Android application.
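As an illustration of this reporting, a sample mean with a 99% confidence interval over 30 repetitions can be computed as follows. This sketch uses the normal-approximation critical value z = 2.576; the paper does not state which critical value it uses, so treat that constant as an assumption.

```java
// Sketch of reporting a sample mean with a 99% confidence interval
// (normal approximation, z = 2.576) over n = 30 measurements.
public class ConfidenceInterval {

    static double mean(double[] x) {
        double sum = 0;
        for (double v : x) sum += v;
        return sum / x.length;
    }

    // Half-width of the 99% CI: z * s / sqrt(n), with sample std dev s.
    static double ci99HalfWidth(double[] x) {
        double m = mean(x), ss = 0;
        for (double v : x) ss += (v - m) * (v - m);
        double sd = Math.sqrt(ss / (x.length - 1));
        return 2.576 * sd / Math.sqrt(x.length);
    }

    public static void main(String[] args) {
        double[] samples = new double[30];
        for (int i = 0; i < 30; i++) samples[i] = 25.0 + (i % 3); // toy data
        System.out.printf("%.2f (+/-) %.2f J%n",
                mean(samples), ci99HalfWidth(samples));
    }
}
```

This is the form behind values such as "8.5 (±1) J" in section 5.1: a point estimate plus the interval half-width.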

5 Evaluation

This section discusses the experimental results for the evaluation of the additional overhead in runtime computational offloading for MCC [10, 11]. We evaluate the cost of the additional energy consumed, the additional time taken and the size of the data transmitted for offloading the computationally intensive components of the prototype application.

5.1 Analysis of Energy Consumption Cost

The additional energy consumed in runtime computational offloading is evaluated by the Energy consumption Cost (Ec) parameter in units of Joules (J). The Energy consumption Cost (Ec) includes the energy consumed in runtime component migration, the energy consumed in saving the data states of the running instance of the mobile application, the energy consumed in uploading the data file to the remote server node and the energy consumed in returning the resultant data files to the local mobile device. The total energy consumption cost for each component offloaded at runtime is represented by equation (1).

Ec = Em + Es + Eu + Ed (1)

(a) Em represents the energy consumed in transferring the binary code of the component of the mobile application which is being offloaded. (b) Es represents the energy consumed in saving the running instances of the mobile application. (c) Eu represents the energy consumed in uploading the data file (known as the preferences file) to the remote server node at runtime. (d) Ed represents the energy consumed in downloading the resultant data file (preferences file) to the local mobile device.

Let E be the finite set of the energy consumption costs of the components of the mobile application which are offloaded at runtime, and let Eca represent the energy consumption cost of offloading a single component of the mobile application at runtime, where a = 1, 2, …, n.

∴ E= {Ec1, Ec2, …, Ecn}

Let the total energy consumption of runtime application offloading be represented by αe, which is the sum of the energy consumed in all instances Eca, a = 1, 2, …, n, of runtime component offloading. Therefore, αe is represented as follows.

∴ αe = (Ec1 + Ec2 + … + Ecn) ⇒ ∀ Eca ∈ E ∧ |E| ≥ 1, where a = 1, …, n

Using summation notation, the total energy consumption cost (αe) of the runtime computational offloading of the mobile application is represented as follows.

αe = ∑(a=1 to n) Eca ⇒ ∀ Eca ∈ E ∧ |E| ≥ 1 (2)

The energy consumption cost (Ec1) of offloading the sorting service component of the application at runtime is evaluated for 30 different computational intensities of the sorting operation (11000-40000). It is found that in all instances of offloading the binary code of the application, the size of the binary application file (.apk) remains constant (44.4 KB). Therefore, Em is determined as 6.1 (±0.6) J, which remains constant in offloading the sorting service of the application with different intensities. The energy consumption cost of saving the data states (preferences file) on the mobile device (Es) is measured as 8.5 (±1) J. Similarly, the energy consumption cost of uploading the preferences file (Eu) to the cloud server node is measured as 36 (±0.9) mJ, whereas the energy consumption cost of downloading the resultant preferences file (Ed) from the remote server node to the local mobile device is determined as 10.9 (±0.3) J. The total additional Energy consumption Cost (Ec1) of runtime computational offloading of the sorting service is computed by using equation (1) and is determined as 25.5 (±1.9) J.
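As a sanity check, the measured component costs reported above can be substituted directly into equation (1); note that Eu must be converted from millijoules to joules before summing.

```java
// Sanity check of equation (1), Ec = Em + Es + Eu + Ed, against the
// measured component costs reported for the sorting service.
public class EnergyCost {

    static double totalCost(double em, double es, double eu, double ed) {
        return em + es + eu + ed;
    }

    public static void main(String[] args) {
        double em = 6.1;    // J, component binary migration
        double es = 8.5;    // J, saving data states (preferences file)
        double eu = 0.036;  // J, uploading preferences file (36 mJ)
        double ed = 10.9;   // J, downloading resultant preferences file
        // Sums to about 25.5 J, matching the reported Ec1 within rounding.
        System.out.printf("Ec1 = %.1f J%n", totalCost(em, es, eu, ed));
    }
}
```

The agreement (25.536 J against the reported 25.5 ±1.9 J) confirms that the per-component figures and the total are mutually consistent.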


Fig. 3 shows the increase in Ec1 for offloading the sorting service at runtime. It is examined that Em and Es remain constant in offloading the sorting service for varying sizes of the sorting list. However, the size of the preferences file increases with the length of the sort list; therefore, the cost of Eu and Ed increases accordingly. For example, it is found that the average cost of Eu is 9.8 mJ for uploading the preferences file of a sorting list of 11000 values, whereas the average cost of Eu is 54 mJ for a sorting list of 40000 values. Hence, Eu increases 81.9 % for uploading the preferences file of a sort list of 40000 values as compared to 11000 values. Similarly, the cost of Ed increases according to the size of the preferences file. For instance, 7.5 J of energy is consumed in downloading the preferences file for a sorting list of 11000 values, whereas 15.3 J is consumed for a sorting list of 40000 values. The cost of Ed thus increases 51 % in downloading the preferences file for the sorting list of 40000 values as compared to 11000 values. This shows that the increase in Ec1 results from uploading and downloading larger preferences files.
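The percentage increases quoted throughout this section appear to be computed relative to the larger (later) value rather than the smaller baseline; a small Python sketch of that convention, reproduced with the Eu and Ed figures above:

```python
def pct_increase(old, new):
    """Percentage increase relative to the larger (new) value,
    the convention the reported figures appear to follow."""
    return (new - old) / new * 100

print(round(pct_increase(9.8, 54), 1))    # 81.9 -> Eu increase
print(round(pct_increase(7.5, 15.3), 1))  # 51.0 -> Ed increase
```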

Fig.  3  Energy Consumption Cost of Offloading Sorting Service Component of the Application

Fig. 4 shows the comparison of the ECC of processing the sorting service on the local mobile device and the ECC of executing the sorting service on the cloud server node by deploying existing offloading techniques [10, 11]. Further, it illustrates the difference in ECC between local and remote application execution. The ECC of the sorting service increases with the length of the sorting operation. For instance, the ECC of sorting 11000 values is 21.1 J in local processing and 49.8 J in cloud based application processing, whereas the ECC of sorting 40000 values is 68.6 J in local processing and 201.4 J in cloud based application processing. Hence, the ECC of sorting the list of 40000 values as compared to 11000 values is 69.2 % higher in local processing and 75.3 % higher in cloud based application processing. Similarly, the ECC of cloud based application processing is higher than that of local application execution because of the additional energy consumed in the runtime computational offloading mechanism. The ECC for cloud based processing of the sorting service increases 57.6 % for the sorting list of 11000 values, 61.1 % for 25000 values and 65.9 % for 40000 values. The overall ECC for cloud based processing of the sorting list of 40000 values is 78.3 % higher as compared to the sorting list of 11000 values.

[Chart for Fig. 3: y-axis Energy Consumption (J), 0-50; x-axis Length of Sorting List]


Fig.  4 Comparison of ECC for Sorting Service Execution on Local Mobile Device and Cloud Server Node

The total energy consumption cost (Ec2) for offloading the matrix multiplication service component of the application at runtime is evaluated for 30 different computational intensities of the matrix multiplication operation (160*160-450*450). The energy consumption cost of transferring the application binary code (Em) for the matrix multiplication service is evaluated in 30 experiments by offloading the service with 30 different computational intensities. It is found that in all instances of offloading the binary code of the application, the size of the binary application file (.apk) remains constant (46 KB); therefore, Em is found to be 15.2(+/-)2.1 J, which remains constant in offloading the matrix multiplication service of the application with different intensities.

The energy consumption cost of saving the data states (Es) on the local mobile device is examined as 4.6(+/-)0.9 J. The energy consumption cost of uploading the preferences file (Eu) to the cloud server node is examined as 273.9(+/-)1.4 mJ. The energy consumption cost of downloading the resultant preferences file (Ed) from the remote server node to the local mobile device is determined as 9.3(+/-)1.4 J for the matrix lengths 160*160-450*450. By using equation (1), the energy consumption cost of offloading the matrix multiplication service (Ec2) is determined as 29.3(+/-)4.4 J.

Fig. 5 shows the increase in Ec2 for offloading the matrix multiplication service component with different computational intensities. It is found that the costs of Em and Es remain constant in offloading the matrix multiplication service with varying matrix lengths. However, the size of the preferences file increases with the length of the matrices; therefore, the cost of Eu and Ed increases accordingly. It is examined that the average cost of Eu is 37.7 mJ for uploading the preferences file of matrices of length 160*160, whereas the average cost of Eu is 136.6 mJ for matrices of length 450*450. Hence, Eu increases 72.4 % for uploading the preferences file of matrices of length 450*450 as compared to 160*160. Similarly, it is examined that Ed increases according to the size of the preferences file. For instance, 4.5 J of energy is consumed in downloading the preferences file for matrices of length 160*160, whereas 16.2 J is consumed for matrices of length 450*450. The cost of Ed thus increases 72.2 % in downloading the preferences file for matrices of length 450*450 as compared to 160*160. This shows that the increase in Ec2 results from uploading and downloading larger preferences files.

[Chart for Fig. 4: y-axis Energy Consumption Cost (J), 0-250; x-axis Length of Sorting List; series: ECC in Local Application Processing, ECC in Cloud Based Application Processing By Using Traditional Computational Offloading, Difference in ECC]


Fig.  5 Energy Consumption Cost of Offloading Matrix Multiplication Service Component of the Application

Fig. 6 shows the comparison of the ECC of processing the matrix multiplication service on the local mobile device and the ECC of executing the matrix multiplication service on the cloud server node by employing existing offloading techniques [10, 11]. Furthermore, it illustrates the difference in ECC between local and remote application execution. The ECC of the matrix multiplication service increases with the size of the matrices. For instance, the ECC of multiplying 160*160 values is 11.5 J in local processing and 40 J in cloud based application processing, whereas the ECC of multiplying 450*450 values is 69.9 J in local processing and 131.7 J in cloud based application processing. Hence, the ECC of multiplying matrices of 450*450 values is 83.5 % higher in local processing and 69.6 % higher in cloud based application processing as compared to multiplying matrices of length 160*160 values. Similarly, the figure shows that the ECC of cloud based application processing is higher than that of local application execution because of the additional energy consumed in the runtime computational offloading mechanism. The ECC for cloud based processing of the matrix multiplication service increases 71 % for multiplying matrices of 160*160 values, 66 % for 300*300 values and 47 % for 450*450 values. The increasing trend of the difference in ECC of the matrix multiplication service indicates the additional overhead of the runtime computational offloading mechanism. The trend of ECC in cloud based application processing indicates that the relative difference in ECC decreases for higher intensities of the matrix multiplication service. For instance, the increase in ECC of cloud based processing of the matrix multiplication service of length 450*450 as compared to 160*160 is 33.8 % smaller. However, Fig. 5 shows that the additional overhead of runtime offloading of the matrix multiplication service increases correspondingly with the length of the matrices. For instance, the total overhead of offloading the matrix multiplication service of length 450*450 is 69.3 % higher as compared to the additional ECC of offloading the service of length 160*160.

0

10

20

30

40

50

60

Ene

rgy

Con

sum

ptio

n (J

)

Length of Matrix


Fig.  6 Comparison of ECC of Matrix Multiplication Service Execution on Local Mobile Device and Cloud Server Node

The energy consumption cost (Ec3) for offloading the power compute service is evaluated with 30 different computational intensities (2^1000000-2^2000000000). The power compute service does not involve energy consumption for preferences uploading and downloading; therefore, Es, Eu and Ed are found to be zero. The average energy consumption cost of transferring the binary code (.apk file) of the power compute service component is found to be 4.7 J.

Fig. 7 shows the comparison of the ECC of processing the power compute service on the local mobile device and the ECC of executing the power compute service on the cloud server node by using the runtime computational offloading mechanism. Furthermore, it illustrates the difference in ECC between local and remote application execution. The ECC of the power compute service increases with the length of the power compute operation. For instance, the ECC of computing 2^1000000 is 2.2 J in local processing and 5.4 J in cloud based application processing, whereas the ECC of computing 2^2000000000 is 67 J in local processing and 351 J in cloud based application processing. Hence, the ECC of computing 2^2000000000 is 96.7 % higher in local processing and 98.8 % higher in cloud based application processing as compared to computing 2^1000000. Similarly, the ECC of cloud based application processing is higher than that of local application execution because of the additional energy consumed in the runtime computational offloading mechanism. The ECC for cloud based processing of the power compute service increases 59.3 % for computing 2^1000000, 76.7 % for computing 2^60000000 and 80.9 % for computing 2^2000000000. The overall increase in ECC for cloud based processing of the power compute service is 28.8 % higher for computing 2^2000000000 as compared to computing 2^1000000.

Fig.  7 Comparison of ECC for Power Compute Service Execution on Local Mobile Device and Cloud Server Node

[Chart for Fig. 6: y-axis Energy Consumption Cost (J), 0-140; x-axis Size of Matrix (160*160-450*450); series: ECC in Local Application Processing, ECC in Cloud Based Application Processing By Using Traditional Computational Offloading, Difference in ECC]

[Chart for Fig. 7: y-axis Energy Consumption Cost (J), 0-400; x-axis Compute Length (2^1000000-2^2000000000); series: ECC in Local Application Processing, ECC in Cloud Based Application Processing By Using Traditional Computational Offloading, Difference in ECC]


The increasing trend of the difference in ECC of the power compute service indicates the additional overhead of the runtime computational offloading mechanism. Analysis of the results indicates that runtime computational offloading increases the ECC of distributed application execution considerably. For instance, the additional Ec1 in contemporary offloading of the sorting service is 59.6 % for a sort list of length 11000, 44 % for 20000, 30.1 % for 30000 and 22.5 % for 40000. The average increase in Ec1 of runtime computational offloading for the sorting service is 37.1 % for sort lists of length 11000-40000. The additional Ec2 in contemporary offloading of the matrix multiplication service is 72 % for multiplying matrices of length 160*160, 62.3 % for 260*260, 47.6 % for 340*340 and 40 % for 450*450. The average increase in Ec2 of runtime computational offloading for the matrix multiplication service is 53.2 % for matrices of length 160*160-450*450. The average energy consumption cost of offloading the sorting service (Ec1) is 25.5 J, the average energy consumption cost of offloading the matrix multiplication service (Ec2) is 29.3 J, and the average energy consumption cost of offloading the power compute service (Ec3) is 4.7 J. Hence, by using equation (2), the total energy consumption cost (αe) of runtime computational offloading for the mobile application is calculated as 59.5 J. It is found that the total ECC of offloaded processing of the prototype application is 257 J, which includes the cost of runtime computational offloading of 59.5 J. This shows that 23.1 % additional energy is consumed in offloading the three computation-intensive components of the mobile application at runtime.
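The aggregation in equation (2) can be cross-checked against the per-component averages reported above; a minimal Python sketch:

```python
# Sketch of equation (2): total offloading energy cost as the sum of the
# per-component costs Ec1..Ec3 reported in the text (all in joules).
E = [25.5, 29.3, 4.7]  # sorting, matrix multiplication, power compute
alpha_e = sum(E)
print(round(alpha_e, 1))  # 59.5 J, matching the reported total

total_ecc = 257.0  # J, total ECC of offloaded processing of the app
share = alpha_e / total_ecc * 100
print(round(share))  # ~23 % additional energy, as reported
```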

5.2 Analysis of Timing Cost

The additional time taken in runtime component offloading is evaluated by using the timing cost (Tc) parameter in units of milliseconds (ms). Tc involves the preferences saving time (Tps), the binary code offloading time of the application (Tcm), the time taken in uploading the data states of the mobile application to the remote server node (Tpu), the application download time to the remote virtual device instance on the cloud server node (Tdv), the application reconfiguration and resuming time on the remote server node (Trr), and the time taken in returning the resultant data file to the local mobile device (Tpd). Therefore, the total timing cost of a single component of the mobile application which is offloaded at runtime is given by equation (3).

Tc = Tcm + Tps + Tpu + Tdv + Trr + Tpd (3)

Let T be the finite set of the offloading times of the components of the mobile application which are offloaded at

runtime. Let Tca represent the total timing cost of offloading a single component of the mobile application at runtime, where a = 1, 2, …, n.

∴ T = {Tc1, Tc2, …, Tcn}

Let the total additional time taken in runtime application offloading be represented by αt, which is the sum of the timing costs of all instances Tca, a = 1, 2, …, n, of runtime component offloading. Therefore, αt is represented as follows.

∴ αt = (Tc1 + Tc2 + … + Tcn) ⇒ ∀ Tca ∈ T ∧ |T| ≥ 1, where a = 1, …, n

By using summation notation, the total additional time taken in runtime computational offloading of the mobile application is represented by equation (4).

αt = ∑_(a=1)^n Tca ⇒ ∀ Tca ∈ T ∧ |T| ≥ 1 (4)

The timing cost (Tc1) for offloading the sorting service component of the application at runtime is evaluated for 30 different computational intensities of the sorting operation (11000-40000). The time taken in transferring the application binary code (Tcm) is evaluated in 30 experiments by offloading the sorting service. It is examined that in all instances of offloading the binary code of the application, the size of the binary application file (.apk) remains constant (44.4 KB); therefore, Tcm is found to be 77(+/-)16 ms and remains constant in offloading the sorting service of the application with different computational intensities. The timing cost of saving the data states (Tps) on the local mobile device is examined as 5076(+/-)568 ms. The timing cost of uploading the preferences file (Tpu) to the cloud server node is examined as 608(+/-)94 ms. The timing cost of downloading the application file to the remote virtual device instance (Tdv) is determined as 241(+/-)113 ms. The timing cost of application reconfiguration and resuming on the remote server node (Trr) is found to be 6662(+/-)884 ms. The timing cost of downloading the resultant preferences file (Tpd) of the sorting service component is examined as 11113(+/-)1813 ms. By using equation (3), the total timing cost (Tc1) of runtime computational offloading of the sorting service is computed as 23777(+/-)3488 ms.
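The total in equation (3) can be cross-checked by summing the measured averages for the sorting service; a minimal Python sketch using the values quoted above:

```python
# Sketch of equation (3): total timing cost of offloading the sorting
# service component, summing the measured averages (all in ms).
T_cm = 77     # binary code offloading
T_ps = 5076   # saving data states (preferences file)
T_pu = 608    # uploading preferences file
T_dv = 241    # application download to remote virtual device instance
T_rr = 6662   # reconfiguration and resuming on the remote server
T_pd = 11113  # downloading the resultant preferences file

T_c1 = T_cm + T_ps + T_pu + T_dv + T_rr + T_pd
print(T_c1)  # 23777 ms, matching the reported total
```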

Fig. 8 shows the increase in the timing cost (Tc1) for offloading the sorting service at runtime. It is examined that Tcm and Tdv remain constant in offloading the sorting service with varying sort list sizes. However, the size of the preferences file increases with the length of the sort list; therefore, the costs of Tps, Tpu and Tpd increase accordingly. It is examined that the average Tps cost is 2438 ms for saving the preferences file of a sorting list of 11000 values, whereas the average Tps cost is 6739 ms for a sorting list of 40000 values. Hence, the Tps cost increases 63.8 % for saving the preferences file of a sort list of 40000 values as compared to 11000 values. Similarly, it is examined that the Tpu cost increases according to the size of the preferences file. For instance, 253 ms is taken in uploading the preferences file for a sorting list of 11000 values, whereas 873 ms is taken for a sorting list of 40000 values. Hence, the Tpu cost increases 71 % in uploading the preferences file for the sorting list of 40000 values as compared to 11000 values.

It is found that the Tpd cost increases according to the size of the preferences file downloaded to the mobile device. For instance, 4620 ms is taken in downloading the resultant preferences file for a sorting list of 11000 values, whereas 16294 ms is taken for a sorting list of 40000 values. Hence, the Tpd cost increases 71.6 % in downloading the preferences file for the sorting list of 40000 values as compared to 11000 values.

Fig.  8 Timing Cost of Offloading Sort Service Component of the Application

Analysis of the results indicates that in computational offloading techniques [10, 11], saving preferences on the mobile device, uploading preferences, reconfiguration on the remote server node and downloading the resultant file to the mobile device adversely affect the execution time and therefore increase the turnaround time of the application. Fig. 9 shows the comparison of the Turnaround Time (TT) of processing the sorting service on the local mobile device and the TT of executing the sorting service on the cloud server node by deploying existing computational offloading techniques [10, 11]. Furthermore, it illustrates the difference in TT between local and remote application execution. The TT of the sorting service increases with the length of the sorting operation. For example, the TT of sorting 11000 values is 4876 ms in local processing and 24331 ms in cloud based application processing, whereas the TT of sorting 40000 values is 31207 ms in local processing and 166457 ms in cloud based application processing. Hence, the TT of sorting the list of 40000 values as compared to 11000 values is 84.4 % higher in local processing and 85.4 % higher in cloud based application processing. Similarly, the TT of cloud based application processing is higher than that of local application execution because of the additional timing cost (Tc) of the runtime computational offloading mechanism. The TT for cloud based processing of the sorting service increases 80 % for the sorting list of 11000 values, 78 % for 25000 values and 81 % for 40000 values.

[Chart for Fig. 8: y-axis Timing Cost (ms), 0-40000; x-axis Length of Sorting List]


Fig.  9 Comparison of Turnaround Time of Sorting Service Execution on Local Mobile Device and Cloud Server Node

The timing cost (Tc2) for offloading the matrix multiplication service component of the application at runtime is evaluated for 30 different computational intensities. The time taken in transferring the application binary code (Tcm) is evaluated in 30 experiments by offloading the matrix multiplication service with 30 different computational intensities. It is examined that in all instances of offloading the binary code of the application, the size of the binary application file (.apk) remains constant (46 KB); therefore, Tcm is found to be 52(+/-)5 ms and remains constant in offloading the matrix multiplication service of the application with different computational intensities. The timing cost of saving the data states (Tps) on the local mobile device is determined as 28152(+/-)11141 ms. The timing cost of uploading the preferences file (Tpu) to the cloud server node is examined as 7177(+/-)3048 ms. The timing cost of downloading the application file to the remote virtual device instance (Tdv) is determined as 205(+/-)15 ms. The timing cost of application reconfiguration and resuming on the remote server node (Trr) is found to be 10349(+/-)2307 ms. The timing cost of downloading the resultant preferences file (Tpd) from the remote server node to the local mobile device is examined as 11238(+/-)2753 ms. The total timing cost (Tc2) of runtime computational offloading of the matrix multiplication service is computed by using equation (3) and is found to be 57173(+/-)19269 ms.

Fig. 10 shows the increase in the timing cost (Tc2) for offloading the matrix multiplication service at runtime. It is examined that Tcm and Tdv remain constant in offloading the matrix multiplication service with varying matrix sizes. However, the size of the preferences file increases with the length of the matrices; therefore, the costs of Tps, Tpu and Tpd increase accordingly. The average Tps cost is examined as 3294 ms for saving the preferences file of matrices of length 160*160, whereas the average Tps cost is 91038 ms for matrices of length 450*450. Therefore, the Tps cost increases 96.3 % for saving the preferences file of matrices of length 450*450 as compared to 160*160. Similarly, it is examined that the Tpu cost increases according to the size of the preferences file. For instance, 1518 ms is taken in uploading the preferences file for matrices of length 160*160, whereas 20878 ms is taken for matrices of length 450*450. Hence, the Tpu cost increases 92.7 % in uploading the preferences file for matrices of length 450*450 as compared to 160*160.

It is examined that the Tpd cost increases according to the size of the preferences file downloaded to the local mobile device. For instance, 3400 ms is taken in downloading the resultant preferences file for matrices of length 160*160, whereas 23015 ms is taken for matrices of length 450*450. Hence, the Tpd cost increases 85.2 % in downloading the preferences file for matrices of length 450*450 as compared to 160*160.

[Chart for Fig. 9: y-axis Turnaround Time (ms), 0-180000; x-axis Length of Sort List (11000-40000); series: TT on Local Mobile Device, TT in Cloud Based Processing By Using Traditional Computational Offloading, Difference in TT]


Fig.  10 Total Timing Cost (Tc2) of Offloading Matrix Multiplication Service Component of the Application

Fig. 11 shows the comparison of the TT of processing the matrix multiplication service on the local mobile device and the TT of executing the matrix multiplication service on the cloud server node by using the runtime computational offloading mechanism. Furthermore, it illustrates the difference in TT between local and remote application execution. The TT of the matrix multiplication service increases with the size of the matrices being multiplied. For instance, the TT of multiplying 160*160 values is 3653 ms in local processing and 16431 ms in cloud based application processing, whereas the TT of multiplying 450*450 values is 99286 ms in local processing and 262697 ms in cloud based application processing. Hence, the TT of multiplying matrices of 450*450 values as compared to 160*160 values is 96.3 % higher in local processing and 93.7 % higher in cloud based application processing. Similarly, the figure shows that the TT of cloud based application processing is higher than that of local application execution because of the additional timing cost of the runtime computational offloading mechanism.

The TT for cloud based processing of the matrix multiplication service increases 78 % for multiplying matrices of 160*160 values, 69 % for 300*300 values and 62 % for 450*450 values. The trend of TT in cloud based application processing indicates that the relative difference in TT decreases for higher intensities of the matrix multiplication service. For instance, the increase in TT is 20.5 % smaller for multiplying matrices of length 450*450 as compared to 160*160 in cloud based application processing. However, Fig. 11 shows that the additional overhead of runtime offloading of the matrix multiplication service increases correspondingly with the length of the matrices. For instance, the total timing cost of offloading the matrix multiplication service of length 450*450 is 85.3 % higher as compared to the timing cost of offloading the service of length 160*160. The decrease in the relative TT difference arises because, for higher intensities of the matrix multiplication operation, the amount and duration of resource utilization increase in both local and remote execution scenarios, whereas the timing cost of runtime computational offloading grows only moderately.

[Chart for Fig. 10: y-axis Timing Cost (ms), 0-180000; x-axis Length of Matrix]


Fig.  11 Comparison of Turnaround Time for Matrix Multiplication Service Execution on Local Mobile Device and Cloud Server Node

The timing cost (Tc3) for offloading the power compute service component of the application at runtime is evaluated for 30 different computational intensities. As stated in section 4.1, offloading the power compute service does not involve the overhead of transmitting preferences files; therefore, Tps, Tpu and Tpd are determined to be zero. The offloading time of the power compute service (Tcm) is determined as 52(+/-)4 ms. The download time of the service to the remote virtual device instance (Tdv) is determined as 212(+/-)18 ms. The reconfiguration and resuming time (Trr) of the power compute service on the remote server node is determined as 6349(+/-)312 ms. By using equation (3), the total additional timing cost of offloading the power compute service (Tc3) is found to be 6613(+/-)334 ms.

Fig. 12 shows the comparison of the TT of processing the power compute service on the local mobile device and the TT of remote service execution by using the runtime computational offloading mechanism. Furthermore, it illustrates the difference in TT between local and remote application execution. The TT of the power compute service increases with the length of the power compute operation. For instance, the TT of computing 2^1000000 is 51 ms in local processing and 7175 ms in cloud based application processing, whereas the TT of computing 2^2000000000 is 69044 ms in local processing and 265724 ms in cloud based application processing. Hence, the TT of computing 2^2000000000 as compared to computing 2^1000000 is 99.9 % higher in local processing and 97.2 % higher in cloud based application processing. Similarly, the TT of cloud based application processing is higher than that of local application execution because of the additional timing cost of the runtime computational offloading mechanism. The TT for cloud based processing of the power compute service increases 99.3 % for computing 2^1000000, 90.6 % for computing 2^40000000 and 74 % for computing 2^2000000000.

[Chart for Fig. 11: y-axis Turnaround Time (ms), 0-300000; x-axis Matrix Length (160*160-450*450); series: TT on Local Mobile Device, TT in Cloud Based Processing By Using Traditional Computational Offloading, Difference in TT]


Fig.  12 Comparison of Turnaround Time for Power Compute Service Execution on Local Mobile Device and Cloud Server Node

Runtime computational offloading increases the TT of distributed application execution considerably. The increasing trend of the difference in TT of cloud based application execution indicates the additional overhead of the runtime computational offloading mechanism. For instance, the additional timing cost (Tc1) in contemporary offloading of the sorting service is 45 % for a sort list of length 11000, 36 % for 20000, 26 % for 30000 and 20 % for 40000. The average increase in the timing cost (Tc1) of runtime computational offloading for the sorting service is 31.1 % for sort lists of length 11000-40000.

Similarly, the timing cost (Tc2) in contemporary offloading of the matrix multiplication service is 73 % for multiplying matrices of length 160*160, 67.5 % for 260*260, 61.2 % for 340*340 and 58.8 % for 450*450. The average increase in Tc2 of runtime computational offloading for the matrix multiplication service constitutes 65.2 % for matrices of length 160*160-450*450. The average timing cost (Tc3) of the power compute service is determined as 6613 ms for the sample space of 900 values in 30 different experiments. The average additional timing cost of offloading the sorting service (Tc1) is determined as 23777 ms, the average additional timing cost of offloading the matrix multiplication service (Tc2) is 57173 ms, and the average timing cost of offloading the power compute service (Tc3) is 6613 ms. Therefore, by using equation (4), the total timing cost (αt) of runtime computational offloading for the mobile application is calculated as 87563 ms, which means that in contemporary computational offloading, 39 % additional time is taken in offloading the components of the mobile application at runtime.
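As a cross-check of equation (4), the three per-component timing costs reported above sum to the stated total; a minimal Python sketch:

```python
# Sketch of equation (4): total additional timing cost across the three
# offloaded components (all in ms).
T = [23777, 57173, 6613]  # Tc1 (sorting), Tc2 (matrix mult.), Tc3 (power)
alpha_t = sum(T)
print(alpha_t)  # 87563 ms, matching the reported total
```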

5.3 Analysis of Data Transmission in Computational Offloading

Data transmission (Ds) in runtime computational offloading involves the size of the application binary file migrated at runtime (Da), the size of the preferences file uploaded to the cloud server node (Dpu) and the size of the resultant preferences file downloaded to the local mobile device (Dpd). Therefore, the total size of data transmission of a single component of the mobile application which is offloaded at runtime is given by equation (5).

Ds = Da + Dpu + Dpd (5)

Let D be the finite set of the sizes of data transmission of the components of the mobile application which are offloaded at runtime, and let Dsa represent the total size of data transmission in offloading a single component of the mobile application at runtime, where a = 1, 2, …, n.

∴ D = { Ds1, Ds2, …, Dsn }

Let the total size of data transmission in runtime application offloading be represented by αd, which is the sum of the sizes of data transmission of all the instances Dsa, a = 1, 2, …, n, of runtime component offloading. Therefore, αd is represented as follows.

∴ αd = (Ds1 + Ds2 + … + Dsn) ⇒ ∀ Dsa ∈ D ∧ |D| ≥ 1, where a = 1, …, n

By using summation notation, the total size of data transmission of the runtime application offloading of the mobile application is represented as follows:

[Fig. 12 data: x-axis Compute Length (2^1000000 … 2^2000000000); y-axis Turnaround Time (ms), 0-300000; series: TT on Local Mobile Device, TT in Cloud Based Processing by Using Traditional Computational Offloading, Difference in TT]


αd = ∑ (a = 1 … n) Dsa ⇒ ∀ Dsa ∈ D ∧ |D| ≥ 1 (6)

The total size of data transmission in offloading the sorting service (Ds1) and the total size of data transmission in offloading the matrix multiplication service (Ds2) of the application at runtime are computed by using equation (5).

The size of data transferred for the application binary code and preferences file is evaluated in 30 different experiments for offloading both the sorting service and the matrix multiplication service of the application. In all instances of sorting service and matrix multiplication service offloading, the size of the binary application file (.apk) remains 44.4 KB for the sorting service and 46 KB for the matrix multiplication service. However, the size of the preferences file uploaded to the cloud server node (Dpu) and the size of the resultant preferences file downloaded to the local mobile device (Dpd) vary with the length of both operations. Fig. 13 shows the size of data transmission in offloading the sorting service component of the application at runtime. The size of data transmission in sorting service offloading is 752.4 KB for sort list length 11000, 1360.4 KB for sort list length 20000 and 2645.36 KB for sort list length 40000. Hence, the size of data transmission increases by 71.6% in offloading the sorting service with sort list length 40000 as compared to sort list length 11000. The average size of data transmission (Ds1) for offloading the sorting service with sort list lengths 11000-40000 is determined as 1722.2 KB.
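Equations (5) and (6) can be sketched as follows. Only the .apk sizes (44.4 KB, 46 KB, 42.7 KB) and the 1 KB preferences files of the power compute service are taken from the measurements; the Dpu/Dpd splits for the other two services are hypothetical placeholders.

```python
# Sketch of equations (5) and (6): per-component data transmission size
# Ds = Da + Dpu + Dpd, and the total alpha_d summed over all offloaded
# components.

def component_data_size(Da_kb, Dpu_kb, Dpd_kb):
    """Equation (5): data moved for one offloaded component (KB)."""
    return Da_kb + Dpu_kb + Dpd_kb

def total_data_size(component_sizes_kb):
    """Equation (6): alpha_d, total data moved across all components (KB)."""
    assert len(component_sizes_kb) >= 1  # |D| >= 1
    return sum(component_sizes_kb)

# Hypothetical Dpu/Dpd splits for the first two services:
Ds1 = component_data_size(44.4, 350.0, 358.0)   # sorting service
Ds2 = component_data_size(46.0, 1200.0, 1300.0) # matrix multiplication service
Ds3 = component_data_size(42.7, 1.0, 1.0)       # power compute service: 44.7 KB

alpha_d = total_data_size([Ds1, Ds2, Ds3])
```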

Fig. 13 Size of Data Transmission in Offloading Sorting Service Component of the Application

Fig. 14 shows the size of data transmission in offloading the matrix multiplication service component of the application. The size of data transmission in matrix multiplication service offloading is 5739.44 KB for matrices of size 160*160, 15426.5 KB for 260*260 and 46740 KB for 450*450. Hence, the size of data transmission increases by 87.8% for offloading the matrix multiplication service with matrices of size 450*450 as compared to 160*160. The average size of data transmission (Ds2) for offloading the matrix multiplication service with matrix sizes 160*160-450*450 is determined as 11474.3 KB.
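The growth figures reported for Figs. 13 and 14 are the increase of the largest case over the smallest case, taken relative to the larger value; a minimal sketch of that calculation:

```python
# Relative growth of data transmission with input size: percentage by which
# the largest measured case exceeds the smallest, relative to the largest.

def percent_increase(small_kb, large_kb):
    """Percentage by which large_kb exceeds small_kb, relative to large_kb."""
    return (large_kb - small_kb) / large_kb * 100

sort_growth = percent_increase(752.4, 2645.36)      # ~71.6%
matrix_growth = percent_increase(5739.44, 46740.0)  # ~87.8%
```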

Fig. 14 Size of Data Transmission in Offloading Matrix Multiplication Service Component of the Application

[Fig. 13 data: x-axis Length of Sorting List; y-axis Size of Data Transmission (KB), 0-3000]
[Fig. 14 data: x-axis Matrix size; y-axis Size of Data Transmission (KB), 0-50000]


The size of data transmission for offloading the power compute service (Ds3) at runtime is evaluated in 30 different experiments. In all instances of offloading the power compute service, the size of the binary application file (.apk) remains 42.7 KB, the size of the preferences file uploaded to the cloud server node (Dpu) is 1 KB and the size of the resultant preferences file downloaded to the local mobile device (Dpd) is 1 KB. Hence, by using equation (5) the total size of data transmission (Ds3) is 44.7 KB for offloading the power compute service at runtime. By using equation (6) the total size of data transmission (αd) of runtime computational offloading for the mobile application is calculated as 13241.2 KB. Analysis of the size of data transmission in computational offloading indicates that current COFs involve the additional cost of migrating the application binary file and data files of the components of the application, whereas transmission over the wireless network medium is an energy-intensive mechanism. The cost of data transmission over the wireless network includes the energy consumed and the time taken in uploading the application binary file, uploading the preferences file and downloading the resultant file from the cloud server node to the local mobile device. It is found that 6.1(+/-)0.6 J of energy is consumed in transferring the binary file of the sorting service component (44.4 KB), 15.2(+/-)2.1 J in transferring the binary file of the matrix multiplication service component (46 KB), and 6.3(+/-)0.5 J in transferring the power compute service component of the application (42.7 KB). Similarly, 0.5(+/-)0.6 J of energy is consumed in downloading a 354 KB data file, 15.3(+/-)0.5 J in downloading a 1.27 MB data file and 16.2(+/-)0.3 J in downloading a 22.8 MB data file.
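The per-instance energy cost of transmission combines the upload of the binary file with the download of the resultant preferences file; a minimal sketch using the mean values reported above (the pairing of a particular binary with a particular preferences file is illustrative):

```python
# Sketch: transmission energy for one offloading instance, as the sum of the
# energy to upload the component's binary file and the energy to download the
# resultant preferences file (mean values from the measurements above).

def transmission_energy_j(upload_j, download_j):
    """Total transmission energy for one offloading instance (joules)."""
    return upload_j + download_j

# e.g. matrix multiplication: 15.2 J to upload the 46 KB binary file,
# 15.3 J to download a 1.27 MB preferences file
E_matrix = transmission_energy_j(15.2, 15.3)  # ~30.5 J
```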

Additional data transmission in computational offloading adversely affects the turnaround time of the application. The timing cost of data transmission is examined as 77(+/-)16 ms in transferring the binary file of the sorting service component, 52(+/-)5 ms in transferring the binary file of the matrix multiplication service component and 52(+/-)4 ms in transferring the power compute service component of the application. Similarly, the timing cost of returning the preferences file to the mobile device is found as 4620(+/-)64 ms in downloading a 354 KB data file, 16294(+/-)111 ms in downloading a 1.27 MB data file and 23015(+/-)729 ms in downloading a 22.8 MB data file. Further, in all instances of component offloading at runtime the timing cost of transferring the binary application file (.apk) remains approximately constant, whereas the cost of uploading the preferences file to the cloud server node and the cost of downloading the resultant preferences file to the local mobile device vary with the size of the preferences file of the component of the mobile application. For instance, the timing cost of downloading the 22.8 MB preferences file is 53.7% larger as compared to downloading the 1.27 MB preferences file, and 79.9% larger as compared to downloading the 354 KB preferences file.

Table 1 summarizes the additional overhead in offloading different components of the mobile application. The additional ECC attribute shows the percentage of additional energy consumed in offloading the components of the mobile application. The average additional ECC for varying intensities of the components of the mobile application is found to be 61.1% for the sorting service, 63.1% for the matrix multiplication service and 70.5% for the power compute service. Similarly, the timing cost attribute indicates the percentage of average additional time taken in remote processing of the components of the mobile application. The execution time of the application is longer in offloaded processing as compared to local application execution. The average additional timing cost for varying intensities of the application is found to be 79.2% for the sorting service, 69% for the matrix multiplication service and 87.9% for the power compute service. Computational offloading involves the transmission of the application binary file and data files, as represented by equation (6). Therefore, the additional data transmission attribute shows the amount of additional data that is transmitted in offloading different components of the application. On average, 1722.2 KB of data is transmitted while offloading the sorting service, 11474.3 KB while offloading the matrix multiplication service, and 44.7 KB while offloading the power compute service component of the application. The average data transmission for the sorting service and matrix multiplication components is larger as compared to the power compute service because of the size of the uploaded and downloaded preferences data files, whereas the power compute service involves the transmission of only the application binary file and 1 KB preferences files.

Table 1 Summary of Additional Overhead in Runtime Computational Offloading for MCC

Components of the Mobile Application | Additional ECC (%) | Additional Timing Cost (%) | Additional Data Transmission (KB)
Sort Service (11000-40000) | 61.1 | 79.2 | 1722.2
Matrix Multiplication Service (160*160-450*450) | 63.1 | 69 | 11474.3
Power Compute Service (2^1000000-2^2000000000) | 70.5 | 87.9 | 44.7


6 Issues in Computational Offloading for MCC

In [4, 21-24], we discuss the general issues and challenges in leveraging the services of computational clouds for client devices. However, the latest frameworks for computational offloading [8-11] involve technical issues which are discussed as follows. The issues are examined from the perspective of the resources intensive features of existing COFs and the deployment of the delegated mobile application on the remote cloud server node.

6.1 Resources Intensive Distributed Platform

The resources intensive distributed platform is a critical issue in current COFs for MCC. The offloading frameworks [8-11] require the configuration of an ad-hoc distributed platform and the partitioning of the mobile application at runtime, which is resources intensive and time consuming [18]. The remote server node is selected temporarily for each instance of component offloading at runtime [8, 9], which increases the energy consumption cost and turnaround time of the application. The partitioning mechanism utilizes additional computing resources in runtime application profiling and solving, which increases the computational load on the mobile device [18]. As a result, the computing resources (RAM, CPU) and battery of the mobile device are utilized abundantly and for a longer period of time.

Traditional COFs implement the outsourcing of running instances of the mobile application [8, 10], which includes the additional cost of saving the data states of the running application on the mobile device and reconfiguring the application on the remote server. The management of the runtime distributed platform requires continuous synchronization between the local SMD and the remote cloud server node. The implementation of an uninterrupted synchronization mechanism over the wireless network medium requires keeping the SMD in active state, which is an energy-consuming mechanism. Furthermore, traditional computational offloading involves runtime transmission of the binary code of the application and data files, which increases the overhead of data transmission over the wireless network medium. The VM migration based application offloading frameworks involve the overhead of VM deployment and management on the SMD, which results in additional resources and battery power utilization on the SMD [29]. Further, the migration of running instances of the application (partially or entirely) which are encapsulated in a VM raises the issue of vulnerability to network attacks. Hence, the proposal of lightweight frameworks for computational offloading is a challenging research perspective in MCC, for the reasons of unique hardware architectures, heterogeneous operating system platforms and the intrinsic limitations associated with the wireless network medium.

6.2 Deployment of Virtual Mobile Devices on Cloud Server Node

The deployment of the delegated application on the virtual machine of the cloud server node is a challenging aspect of runtime computational offloading. Current COFs [9, 10] focus on partitioning the elastic mobile application dynamically and offloading the intensive partitions at runtime. A critical aspect of current COFs is that the delegated application needs to be reconfigured on the virtual device instance on the cloud server node. Therefore, the execution of the applications offloaded to the cloud server nodes requires the deployment of virtual phone instance(s) on the virtual machine of the cloud datacenter. Since the hardware architectures and operating system platforms of mobile devices differ, the operating system platforms implement platform-specific application frameworks.

The diversity in mobile device operating system platforms requires the deployment of different virtual phone instances on the cloud server node. For example, the offloaded component of an Android application requires an Android Virtual Device (AVD) instance on the remote virtual machine of the cloud server node to execute the offloaded Android application. Similarly, the execution of a BlackBerry application requires a virtual instance of a BlackBerry device to execute the delegated mobile application on the VM instance of the cloud server node. Furthermore, in current COFs the delegated application is reconfigured and the active states of the application are resumed on the virtual device instance of the remote server node. Experimental analysis shows that reconfiguration of the delegated mobile application increases the turnaround time and resources utilization cost of the mobile application. It is imperative to provide a homogeneous platform for the configuration of delegated applications from heterogeneous mobile device platforms. The diversity in mobile device hardware architectures, operating system platforms, application architectures and software development tools is a challenging perspective for the deployment of a homogeneous virtual device instance on the cloud server node in elastic computational offloading for MCC.

6.3 Accessibility of Virtual Mobile Devices

Accessibility of the virtual devices on the remote cloud server node is another challenging aspect of current COFs [8-14]. Mobile applications which run in the virtual device instance of the cloud server node are capable of connecting to the external network. However, virtual devices access the external network through the emulator, not directly


through the hardware interface, and the emulator runs as a normal application on the cloud server node. Therefore, communication with the virtual device instance is subject to being blocked by a firewall program running on the remote server node. Furthermore, each instance of the virtual mobile device runs behind a virtual router/firewall service that isolates it from the network interfaces and settings of the virtual machine of the cloud server node and from the internet. Similarly, communication with the virtual device can be blocked by another (physical) firewall/router to which the cloud server node is connected. Depending on the environment, the virtual device on the cloud server node may not be able to support other protocols, such as ICMP, IGMP or multicast. The virtual device on the virtual machine of the cloud server node is connected through Ethernet to a router/firewall.

The virtual instances of the mobile device which execute the delegated mobile application are not directly accessible to the mobile device. For example, the IP address of the AVD instance of the Android virtual phone is not accessible from a remote computing device. Therefore, it is required to set up network redirection on the virtual router for communication with the virtual device instance of the cloud server node. Mobile clients connect to a specified guest port on the router, and the router directs traffic to/from that port to the emulated device's host port. Current COFs employ a proxy server in the form of additional agent software or a cloud manager component on the cloud server node [8-11]. The deployment of an additional proxy application to enable communication between the virtual phone instance and the remote mobile device results in additional overhead in the form of execution time and communication delay. Hence, it is challenging to employ optimal procedures for accessing the virtual device instances on the cloud server node for computational offloading in MCC.
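For an Android Virtual Device, this redirection is commonly set up with the standard emulator tooling; a sketch follows, in which the host, console, and guest port numbers are illustrative assumptions.

```shell
# On the cloud server node hosting the AVD: redirect a host port to a guest
# port inside the emulated device (all port numbers are illustrative).

# Option 1: via adb, forward host TCP port 6100 to port 7100 in the AVD
adb forward tcp:6100 tcp:7100

# Option 2: via the emulator console (default console port 5554)
telnet localhost 5554
# at the console prompt:
#   redir add tcp:5000:6000   # host port 5000 -> guest port 6000
```

Either way, external clients still reach only the host-side port; traffic to the emulated device itself always passes through this redirection layer, which is why current COFs interpose a proxy or cloud manager component.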

6.4 Constraints on Computing Resources of the Virtual Mobile Device

The cloud datacenter is composed of powerful and resources rich computing nodes; however, traditional COFs [8-14] require the deployment of a virtual mobile device instance for the execution of the offloaded mobile application. Therefore, the performance of remote application execution is restricted to the computing potential of the virtual phone which is deployed on the virtual machine of the cloud server node for the execution of the delegated mobile application. For instance, the computing potential of the virtual machine deployed on the cloud server node may be 2 GHz while it implements four virtual phone instances of 700 MHz potential each. In this regard, the delegated mobile application is executed on the virtual phone instances which run on the VM instance of the cloud server node. Such a mechanism affects the performance of the remote system from three perspectives. (a) The physical resources of the cloud server node are shared among virtual machines, which results in the deployment and management overhead of VMs on the cloud server node [29]. (b) The computing resources which are allocated to the virtual machine are scheduled for the virtual instances of the mobile devices which run on the VM of the cloud server node. (c) The turnaround time of the offloaded mobile application is restricted to the computing capabilities of the virtual phone rather than the virtual machine on the cloud server node. Furthermore, the deployment of the virtual device instance on the virtual machine of the cloud server node adds a level of physical resources scheduling on the cloud server node. Therefore, the deployment of the runtime application offloading mechanism results in additional resources utilization and adversely affects the performance of the application execution mechanism at two additional levels. It is challenging to employ a hypervisor for the creation of virtual device instances directly on the cloud server node rather than deploying the virtual phone instance on the virtual machine of the cloud server node.

7 Conclusion and Future Work

The latest frameworks for computational offloading deploy resources intensive procedures for the configuration and management of the distributed application execution platform in MCC, which results in the additional cost of application file and data file migration, high energy consumption in distributed application processing and longer turnaround time of the mobile application. Analysis of the experimental results shows the resources intensive nature of existing computational offloading techniques. It is found that 31.6% additional energy is consumed, 39% additional time is taken and 13241.2 KB of data is transmitted in offloading three computationally intensive components of the prototype mobile application. Runtime computational offloading is useful in decentralized distributed platforms (such as mobile ad-hoc networks) because of the unavailability of rich resources and centralized service provisioning models; there, remote server nodes are unpredictable and computational offloading is performed on an ad-hoc basis at runtime. In computational clouds, by contrast, the availability of rich resources, the scalability of services and centralized service provisioning models in the form of IaaS, PaaS and SaaS motivate accessing preconfigured services on demand instead of dynamic component migration.

The compact design, size, limited computing resources and wireless access medium features of SMDs necessitate lightweight frameworks for the processing of intensive mobile applications in MCC. It is challenging to leverage cloud resources and services for mobile devices with lightweight access techniques, for the reasons of


unique hardware architectures, heterogeneous operating system platforms and the intrinsic limitations associated with the wireless network medium. Future research will focus on lightweight procedures for reducing the overhead of runtime computational offloading. The incorporation of the SaaS and IaaS service provisioning models is to be investigated for computational offloading in MCC.

Acknowledgement

This research is carried out as part of the Mobile Cloud Computing research project funded by the Malaysian Ministry of Higher Education under the University of Malaya High Impact Research Grant with reference UM.C/HIR/MOHE/FCSIT/03.

References

[1]. R. Holman, Mobile Cloud Computing: $9.5 billion by 2014, 2010. URL http://www.juniperresearch.com/analyst-xpress-blog/2010/01/26/mobile-cloud-application-revenues-to-hit-95-billion-by-2014-driven-by-converged-mobile-services/ accessed on 18th August 2011.
[2]. Prosper Mobile Insights, Smartphone/tablet user survey. URL http://prospermobileinsights.com/Default.aspx?pg=19, accessed on 20th July 2011.
[3]. C. Albanesius, Smartphone shipments surpass PC shipments for first time. What's next? http://www.pcmag.com/article2/, accessed on 15th December 2011.
[4]. M. Shiraz, A. Gani, R. H. Khokhar, R. Buyya, A Review on Distributed Application Processing Frameworks in Smart Mobile Devices for Mobile Cloud Computing, IEEE Communications Surveys & Tutorials, 15(3), pp. 1294-1313, July 2013.
[5]. R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, I. Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility", Future Generation Computer Systems, 25(6), pp. 599-616, 2009.
[6]. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, M. Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing", Electrical Engineering and Computer Sciences, University of California at Berkeley, February 10, 2009.
[7]. Amazon S3, http://status.aws.amazon.com/s3-20080720.html accessed on 20th July 2011.
[8]. E. Cuervo, A. Balasubramanian, D. K. Cho, A. Wolman, S. Saroiu, R. Chandra, P. Bahl, "MAUI: Making Smartphones Last Longer with Code Offload", MobiSys'10, San Francisco, California, USA, June 15-18, 2010.
[9]. X. Zhang, A. Kunjithapatham, S. Jeong, S. Gibbs, Towards an Elastic Application Model for Augmenting the Computing Capabilities of Mobile Devices with Cloud Computing, Mobile Networks & Applications, 16(3), pp. 270-285, June 2011.
[10]. S. H. Hung, C. S. Shih, J. P. Shieh, C. P. Lee, Y. H. Huang, "Executing Mobile Applications on the Cloud: Framework and Issues", Computers and Mathematics with Applications, 63(2), pp. 573-587, January 2012.
[11]. M. Shiraz, A. Gani, A Lightweight Active Service Migration Framework for Computational Offloading in Mobile Cloud Computing, Journal of Supercomputing, DOI: 10.1007/s11227-013-1076-7, in press, December 2013.
[12]. Apple iCloud, www.apple.com/icloud/ accessed on 1st January 2013.
[13]. Introducing Amazon Silk, http://amazonsilk.wordpress.com/2011/09/28/introducing-amazon-silk/ accessed on 25th December 2013.
[14]. A. Messer, I. Greenberg, P. Bernadat, D. Milojicic, D. Chen, T. J. Giuli, X. Gu, Towards a Distributed Platform for Resource-Constrained Devices, Hewlett-Packard Company, 2002.
[15]. I. Giurgiu, O. Riva, D. Juric, I. Krivulev, G. Alonso, "Calling the Cloud: Enabling Mobile Phones As Interfaces To Cloud Applications", Middleware'09: Proceedings of the ACM/IFIP/USENIX 10th International Conference on Middleware, ACM Press, pp. 83-102, 2009.
[16]. B. G. Chun, P. Maniatis, Augmented Smartphone Applications Through Clone Cloud Execution, Intel Research Berkeley, 2009.
[17]. D. Kovachev, R. Klamma, "Framework for Computation Offloading in Mobile Cloud Computing", International Journal of Interactive Multimedia and Artificial Intelligence, 1(7), pp. 6-15, 2012.
[18]. M. Shiraz, E. Ahmed, A. Gani, Q. Han, Investigation on Runtime Partitioning of Elastic Mobile Applications for Mobile Cloud Computing, Journal of Supercomputing, 67(1), pp. 84-103, January 2014.
[19]. S. Goyal, J. Carter, A Lightweight Secure Cyber Foraging Infrastructure for Resource-Constrained Devices, WMCSA 2004: 6th IEEE Workshop, IEEE Publisher, 2-3 December 2004.
[20]. M. Satyanarayanan, "Pervasive Computing: Vision and Challenges", IEEE Personal Communications, 8(4), pp. 10-17, 2001.
[21]. S. Abolfazli, Z. Sanaei, M. Alizadeh, A. Gani, F. Xia, An Experimental Analysis on Cloud-based Mobile Augmentation in Mobile Cloud Computing, IEEE Transactions on Consumer Electronics, 40(1), pp. 146-154.
[22]. Z. Sanaei, S. Abolfazli, A. Gani, R. Buyya, Heterogeneity in Mobile Cloud Computing: Taxonomy and Open Challenges, IEEE Communications Surveys and Tutorials, 16(1), pp. 369-392, 2014.
[23]. A. Gani, G. M. Nayeem, M. Shiraz, M. Sookhak, M. Whaiduzzaman, S. Khan, A Review on Interworking and Mobility Techniques for Seamless Connectivity in Mobile Cloud Computing, Journal of Network and Computer Applications, 43, pp. 84-102, 2014.
[24]. S. Abolfazli, Z. Sanaei, E. Ahmed, A. Gani, R. Buyya, Cloud-based Augmentation for Mobile Devices: Motivation, Taxonomies, and Open Issues, IEEE Communications Surveys and Tutorials, 16(1), pp. 337-368, 2014.
[25]. X. Gu, K. Nahrstedt, A. Messer, I. Greenberg, D. Milojicic, "Adaptive Offloading Inference for Delivering Applications in Pervasive Computing Environments", in Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), pp. 107-114, 2003.
[26]. H. Chu, H. Song, C. Wong, S. Kurakake, M. Katagiri, "Roam, a Seamless Application Framework", Journal of Systems and Software, 69(3), pp. 209-226, 2004.
[27]. A. Dou, V. Kalogeraki, D. Gunopulos, T. Mielikainen, V. H. Tuulos, "Misco: A MapReduce Framework for Mobile Systems", PETRA'10, Samos, Greece, ACM Press, June 23-25, 2010.
[28]. B. G. Chun, S. Ihm, P. Maniatis, M. Naik, A. Patti, "CloneCloud: Elastic Execution between Mobile Device and Cloud", EuroSys'11, Salzburg, Austria, ACM Press, April 10-13, 2011.
[29]. M. Shiraz, S. Abolfazli, Z. Sanaei, A. Gani, A Study on Virtual Machine Deployment for Application Outsourcing in Mobile Cloud Computing, The Journal of Supercomputing, 63(3), pp. 946-964, March 2013.
[30]. Y. Begum, M. Mohamed, "A DHT-based Process Migration Policy for Mobile Clusters", in 7th International Conference on Information Technology, Las Vegas, 2010, pp. 934-938.
[31]. PowerTutor, available at http://ziyang.eecs.umich.edu/projects/powertutor/ accessed on 15th April 2012.