
Received April 8, 2018, accepted May 10, 2018, date of publication May 22, 2018, date of current version July 25, 2018.

Digital Object Identifier 10.1109/ACCESS.2018.2839582

Reward-Aided Sensing Task Execution in Mobile Crowdsensing Enabled by Energy Harvesting

JIEJUN HU 1, KUN YANG 2,3, (Senior Member, IEEE), LIANG HU 1, AND KEZHI WANG 4, (Member, IEEE)
1 Department of Computer Science and Technology, Jilin University, Changchun 130012, China
2 School of Computer Sciences and Electrical Engineering, University of Essex, Colchester CO4 3SQ, U.K.
3 School of Communication and Information Engineering, University of Electronic Science and Technology of China, Chengdu, China
4 Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE2 1XE, U.K.

Corresponding author: Kun Yang ([email protected])

This work was supported in part by the China Scholarship Council, in part by the National Key Research and Development Plan of China under Grant 2017YFA0604500, in part by the National Sci-Tech Support Plan of China under Grant 2014BAH02F00, in part by the National Natural Science Foundation of China under Grant 61701190, in part by the Youth Science Foundation of Jilin Province of China under Grants 20160520011JH and 20180520021JH, in part by the Youth Sci-Tech Innovation Leader and Team Project of Jilin Province of China under Grant 20170519017JH, in part by the Key Technology Innovation Cooperation Project of Government and University for the whole Industry Demonstration under Grant SXGJSF2017-4, in part by the China Postdoctoral Science Foundation under Grant 2018M631873, and in part by the Key Scientific and Technological Research and Development Plan of Jilin Province of China under Grant 20180201103GX.

ABSTRACT Mobile crowdsensing (MCS) is a new sensing framework that empowers normal mobile devices to participate in sensing tasks. The key challenge that degrades the performance of MCS is selfish mobile users who conserve the resources (e.g., CPU, battery, and bandwidth) of their devices. We therefore introduce energy harvesting (EH) as a reward in MCS, which opens more possibilities for improving the quality of service (QoS) of the system. In this paper, we propose a game theoretic approach for achieving sustainable and higher quality sensing task execution in MCS. The proposed solution is implemented as a two-stage game. The first stage is the system reward game, in which the system is the leader, who allocates the task and reward, and the mobile devices are the followers who execute the tasks. The second stage is called the participant decision-making game, in which we consider both the network channel condition and the participants' abilities. We analyze the features of the second stage and show that the game admits a Nash equilibrium (NE). Based on the NE of the second stage, the system can reach a Stackelberg equilibrium, at which its utility is maximized. Simulation results demonstrate that the proposed mechanism achieves a better QoS and prolongs the system lifetime while also providing a proper incentive mechanism for MCS.

INDEX TERMS Task execution, energy harvest, game theory, mobile crowdsensing.

I. INTRODUCTION
Nowadays, smartphones have changed people's daily lives. The functions of smartphones become increasingly powerful every day, and they have rich sensory capabilities. Smartphones not only allow us to communicate with each other but also offer the possibility of sensing the environment and collecting, processing and sharing information. These technologies empower the development of mobile crowdsensing (MCS). MCS is widely applied in our daily lives and in areas such as environment monitoring, personal healthcare, virtual reality entertainment, transportation monitoring and smart city applications [1]-[4]. It has the advantages of mobility, scalability and cost effectiveness compared to the traditional Internet of Things. Moreover, the integration of human intelligence into the mobile sensing and computing process is also a special aspect of MCS. This means that people can decide how and when to be a member of MCS.

In a typical MCS procedure, the MCS server first publishes the sensing task to a specific area. Then, the participant recruiting procedure, based on performance, reputation, etc., is triggered. All participants then begin to execute the sensing task: sensing, processing the sensing data and returning the results to the server. When the sensing task is relatively easy, sensing and execution do not cause heavy consumption of energy or other resources on the mobile devices. However, when a tough sensing task is allocated, such as multimedia data sensing and mining, participants know they have to spend more resources to accomplish the task, so some of them may choose to leave without a proper task execution plan and incentive mechanism.

The incentive mechanism is one of the most important issues to be considered in MCS. A proper incentive mechanism ensures that the participants in MCS donate their resources toward a common interest. To stimulate mobile users to become participants and to maintain the number of participants, researchers have designed extensive methods to provide incentive mechanisms, adopting money, reputation and credit as rewards to participants to guarantee the sensing quality [5], [6].

In our work, energy harvesting (EH) is envisioned as a promising way to address the challenge of the incentive mechanism. EH can capture recyclable external energy, including solar, indoor lighting, vibrational, chemical and human motion energies [7]. The utilization of EH devices in MCS provides new features, such as self-sustainability, better coverage of the sensing area and higher quality of the sensing results. In this paper, we adopt EH devices in MCS: the system uses the energy that can be harvested by the participants as a reward.

In traditional MCS sensing task execution, participants can only choose to be a member and donate their own resources or choose not to be involved at all. This may discourage potential participants, because it offers only two choices: to do or not to do. In our work, we introduce computation offloading [8], [9] into MCS sensing task execution, so that participants can make different choices by analyzing their resources. With the addition of EH, participants' batteries can be maintained and the sensing lifetime of the system can be prolonged.

In this paper, we adopt a game theoretic solution to address reward-aided sensing task execution in MCS, together with the QoS and system lifetime of MCS. Game theory focuses on the features of participant competition and interactions [7], [10], [11]. In an MCS sensing environment, any action taken by a participant affects the decisions of others in the mobile network; thus, game theory is a natural mathematical approach for studying this interaction. In particular, the Stackelberg leadership model is a promising way to provide an incentive mechanism and solve the reward allocation problem, enabling the system and the participants to maximize their utilities at the same time [12], [13].

In this paper, we study the issue of reward-aided sensing task execution, together with a sustainable system lifetime in MCS, and model it as a two-stage Stackelberg game. EH as a reward also acts as the incentive mechanism in MCS. This approach reduces the energy consumption of mobile users, and by giving different participants different rewards, it helps to control the sensing quality of the data. In summary, the contributions of this paper are as follows:
• Reward-aided sensing task execution game formulation: The proposed Stackelberg game consists of two stages. The first stage is a system reward game (SRG). In the SRG, the system that allocates the tasks and reward is the leader, and the mobile devices that execute the tasks are the followers. The second stage is called a participant decision-making game (PDG); both the system and the mobile devices are players in the second stage. The strategy of the system is the optimal reward, and the optimal strategy of the participants is how to execute the task.

• Properties of the game: The PDG in the framework ensures that the SRG has an optimal result. Because the sub-game has an NE, all the mobile devices can achieve mutual satisfaction. As a result of the sub-game, the participants in MCS are separated into two groups: those who offload the task and those who execute the task locally. Based on their different decisions in sensing task execution, the system allocates the reward to the different participants.

• Reward-aided sensing task execution mechanism and main algorithms: The proposed Stackelberg game is realized by two main algorithms, which operate in tandem. In the PDG, Algorithm 1 takes communication interference, computation time and energy into account and achieves the Nash equilibrium of the decision-making game. In the SRG, based on the result of Algorithm 1, Algorithm 2 computes the optimal reward amount and allocates the reward.

The remainder of this paper is organized as follows. In Section II, we discuss the related works. The system model is introduced in Section III. The Stackelberg game and the reward-aided sensing task execution mechanism are presented in Sections IV and V, respectively. The performance and simulation of the mechanism are analyzed in Section VI. In Section VII, we conclude the paper.

II. RELATED WORK
The incentive mechanism and sensing quality play crucial roles in MCS. Related works have presented different incentive mechanisms that adopt money, reputation and credit as rewards for participants [5], [6].

In [14], two types of incentive mechanisms are introduced: a user-centric model based on the Stackelberg game and a system-centric model based on auction. Kantarci et al. [15] proposed a new crowdsensing framework, namely, social network assisted trustworthiness assurance (SONATA), which aims to maximize the crowdsensing platform utility and minimize the manipulation probability through vote-based trustworthiness analysis in a dynamic social network architecture. Reference [16] designed an incentive mechanism for discrete crowdsensing in which each user has a uniform sensing subtask length; the objective of that work is to maximize the platform utility and achieve a perfect Bayesian equilibrium. Reference [17] studied incentive mechanisms for a novel mobile crowdsensing scheduling problem, which achieved desirable truthfulness, individual rationality and computational efficiency. However, no existing work takes energy as a reward in MCS. In our work, energy plays a significant role in MCS, because energy as a reward can guarantee the number of participants and thereby sustain the coverage of the sensing area, all of which improve the QoS in MCS.

Regarding the sensing quality of MCS, Jin et al. [18] proposed a reverse auction approach for the incentive mechanism of MCS based on quality of information (QoI). Marjanović et al. [19] presented a framework for green mobile crowdsensing that utilizes a quality-driven sensor management function to continuously select the k-best sensors for a predefined sensing task. In [20], a novel approach was proposed that uses evolutionary algorithms to determine the optimal trade-off between data quality and cost. In our work, we adopt computation offloading as an additional option for the participants; computation offloading makes it possible to execute a heavy sensing task in the cloud and return the result directly to the server. The budget in MCS is also an essential parameter when the MCS platform maximizes the sensing quality. Reference [21] proposed a novel task allocation framework called CrowdTasker for MCS, which aims to maximize the coverage quality of the sensing task under a budget constraint using a greedy algorithm. References [22] and [23] proposed frameworks to achieve the optimal coverage quality under a budget constraint.

EH has been widely applied in many scenarios in wireless sensor networks [24]-[27]. Wang et al. [28] studied joint channel and power allocation to improve the energy efficiency of user equipment by analyzing their batteries. Mao et al. [29] investigated a green mobile-edge computing system with energy harvesting devices and developed an effective computation offloading strategy. In [30], a tractable model was developed for analyzing the performance of downlink heterogeneous cellular networks with both power-grid-connected base stations and energy harvesting small cell access points. In [31], energy harvesting was also taken into consideration in cognitive radio sensor networks (CRSNs); the authors addressed a network utility maximization problem that is greatly impacted by sampling rate control and channel access scheduling, under harvested energy, channel capacity and interference constraints.

Our work is inspired by works on the interference among users in mobile networks and on game theoretic approaches for maximizing the system utility [14], [32]. However, our work differs from the above-mentioned related works in the following ways: (a) we consider interference among participants in MCS to propose an efficient sensing task execution approach; (b) we adopt EH as a reward in the incentive mechanism, which prolongs the system lifetime of MCS; and (c) our work faces a multi-parameter environment in which the participants' information is multi-dimensional.

III. SYSTEM MODEL
We introduce the system model of MCS in this section. As shown in Fig. 1, we consider a framework that consists of a set of participants P = {1, 2, ..., p}, where participant i is assigned a sensing task T_i = {B_i, D_i} published by the system. Here, B_i denotes the data size of the computation input data (including the sensing data and the execution code), and D_i denotes the required CPU cycles of participant i. There is a wireless base station that allows participants to offload computations to the cloud. Participants in MCS can decide whether to execute locally or to offload the task to the cloud via the wireless network. We denote by d_i the sensing task execution decision of participant i. Specifically, d_i = 0 if the participant executes the sensing task locally, and d_i = 1 if the participant chooses to offload the task. The decision profile is d = {d_1, d_2, ..., d_n}. In this scenario, EH is a special feature of the MCS participants. Hence, the sensing task execution model, the EH model and the network conditions in MCS are discussed below. We focus on the efficient sensing task execution procedure and the energy-aided incentive mechanism in MCS.

FIGURE 1. Mobile crowdsensing with energy harvesting scenario.

A. TASK EXECUTION MODEL
In MCS, there are two potential ways to execute the sensing tasks. Participants can choose either of them to maximize their utility. In this section, we introduce the sensing task execution model in MCS.

1) LOCAL EXECUTION MODEL
We consider that each i ∈ P has a sensing task T_i, which is published by the system. We discuss the execution delay and energy consumption of the local execution model.

For local execution, a participant executes the task with the local resources of the mobile device and generates the sensing result r_i with data size B^r_i, which can be larger or smaller than the sensing task size B_i depending on the sensing task. The computational capacity of participant i is denoted as F_i. The local execution delay T^{l1}_i and energy consumption E^{l1}_i can be expressed as follows:

$$T^{l1}_i = \frac{D_i}{F_i} \qquad (1)$$

$$E^{l1}_i = \varphi D_i \qquad (2)$$

where φ = K F_i^2 denotes the energy consumption per CPU cycle and K is an energy coefficient determined by the structure of the chip [33]. In the sensing result transmission process, the transmission delay T^{l2}_i and energy cost E^{l2}_i can be expressed as follows:

$$T^{l2}_i = \frac{B^r_i}{R_i} \qquad (3)$$


where R_i is the data transmission rate of participant i. Based on (3), the energy consumption of the result transmission stage is

$$E^{l2}_i = \frac{P_i B^r_i}{R_i} \qquad (4)$$

where P_i is the transmission power of participant i. According to (1), (2), (3) and (4), we can obtain the local sensing task execution cost C^l_i as

$$C^l_i = w_1\big(T^{l1}_i + T^{l2}_i\big) + w_2\big(E^{l1}_i + E^{l2}_i\big) = w_1\Big(\frac{D_i}{F_i} + \frac{B^r_i}{R_i}\Big) + w_2\Big(\varphi D_i + \frac{P_i B^r_i}{R_i}\Big) \qquad (5)$$

where w_1, w_2 ∈ (0, 1) are the coefficients of the execution delay and energy consumption. R_i is the data rate of participant i, which we simply assume to be the same for every local-execution participant, since interference can be neglected in this case. A participant computes the sensing cost before deciding whether to execute the sensing task locally or in the cloud. Note that execution delay and energy consumption are parameters with different scales, so we use a normalization method to convert them to a common scale.

2) CLOUD EXECUTION MODEL
In the cloud execution model, the sensing task is executed in the cloud. For this procedure, the cost of cloud execution includes two parts: the delay contributed by transmission in the network and execution in the cloud, and the energy consumption of offloading the sensing task. The execution delay of the cloud execution model is denoted as T^o_i, which is given by

$$T^o_i = \frac{B_i}{R_i(d)} + \frac{D_i}{F^c_i} \qquad (6)$$

where R_i(d) differs from R_i in (4), because interference occurs when participants decide to offload the sensing task, and F^c_i is the computational capacity of the cloud, which we assume the system offers equally to every participant. We denote the energy consumption in the cloud execution model as E^o_i:

$$E^o_i = \frac{P_i B_i}{R_i(d)} \qquad (7)$$

According to (6) and (7), we can compute the cost of cloud execution, which is denoted as C^o_i:

$$C^o_i = w_1\Big(\frac{B_i}{R_i(d)} + \frac{D_i}{F^c_i}\Big) + w_2 \frac{P_i B_i}{R_i(d)} \qquad (8)$$
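The two cost functions in (5) and (8) can be evaluated directly once the task, device and channel parameters are known. The following is a minimal Python sketch of both cost models; the Participant dataclass and the parameter names are our own illustrative assumptions, not part of the paper.

from dataclasses import dataclass

@dataclass
class Participant:
    B: float      # task input size B_i (bits)
    B_r: float    # result size B_i^r (bits)
    D: float      # required CPU cycles D_i
    F: float      # local CPU frequency F_i (cycles/s)
    P: float      # transmission power P_i (W)
    K: float      # chip energy coefficient K

def local_cost(p: Participant, R: float, w1: float, w2: float) -> float:
    """Local execution cost C^l_i from Eq. (5)."""
    t_exec = p.D / p.F                     # Eq. (1)
    e_exec = p.K * p.F ** 2 * p.D          # Eq. (2), phi = K * F_i^2
    t_tx = p.B_r / R                       # Eq. (3)
    e_tx = p.P * p.B_r / R                 # Eq. (4)
    return w1 * (t_exec + t_tx) + w2 * (e_exec + e_tx)

def cloud_cost(p: Participant, R_d: float, F_cloud: float,
               w1: float, w2: float) -> float:
    """Cloud execution cost C^o_i from Eq. (8); R_d is R_i(d)."""
    t = p.B / R_d + p.D / F_cloud          # Eq. (6)
    e = p.P * p.B / R_d                    # Eq. (7)
    return w1 * t + w2 * e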

B. NETWORK MODEL
In the MCS scenario, when a large number of mobile devices begin to offload their sensing tasks to the cloud, the communication interference among the participants needs to be considered. In the mobile network, the base station manages the communications of all mobile users, including uplink and downlink communications. The data transmission rate of participant i is a logarithmic function of the SINR, which is denoted as γ_i(d) [32], [34]:

$$R_i(d) = f(\gamma_i(d)) = f\!\left(\frac{P_i H_{i,b}}{\sigma^2 + \sum_{m \in P, m \neq i, d_m = 1} P_m H_{m,b}}\right) = W \log_2\!\left(1 + \frac{P_i H_{i,b}}{\sigma^2 + \sum_{m \in P, m \neq i, d_m = 1} P_m H_{m,b}}\right) \qquad (9)$$

where P_i and H_{i,b} represent the transmission power of participant i and the channel gain between participant i and the base station, respectively, and σ² denotes the noise power level. According to the equations above, if too many participants decide to offload the sensing task, the SINR decreases and interference leads to low data rates, which negatively affects MCS. Thus, in the offloading phase, the interference needs to be taken into consideration.

FIGURE 2. Stackelberg game.
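Equation (9) only needs the set of offloading participants to evaluate the uplink rate. A minimal sketch is given below; the list-based interface is an assumption for illustration.

import math

def data_rate(i: int, d: list[int], P: list[float], H: list[float],
              W: float, noise: float) -> float:
    """Uplink rate R_i(d) of Eq. (9): W * log2(1 + SINR_i).

    d[m] = 1 if participant m offloads, P[m] * H[m] is its received power
    at the base station, and noise is sigma^2.
    """
    interference = sum(P[m] * H[m]
                       for m in range(len(d)) if m != i and d[m] == 1)
    sinr = P[i] * H[i] / (noise + interference)
    return W * math.log2(1.0 + sinr)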

IV. REWARD-AIDED SENSING TASK EXECUTION GAME
In MCS, the system is interested in maximizing its utility while publishing the tasks and rewards for the participants. At the same time, the participants who own the mobile devices are both selfish and rational; hence, they also want to maximize their own utility. A participant computes the cost of execution based on the communication interference, energy consumption and battery level; if the system offers a reward that is not less than this cost, the participant takes part in MCS. In our work, we assume that the system would like more participants to offload the sensing task to the cloud, owing to the capacity of the cloud and the transmission delay of local execution; thus, more reward is allocated to the offloading participants.

The proposed reward-aided sensing task execution mechanism is realized as a Stackelberg game, where the system is the leader who moves first and the participants are the followers. The Stackelberg game has two stages: an SRG, in which the system provides a reward for the users who participate in computation, and a PDG, in which participants decide by themselves whether to offload the sensing task or execute it locally according to the reward from the system. In order to solve the whole Stackelberg game, we need to solve the second stage of the game (PDG) first, based on the reward from the system [35].

TABLE 1. Notation and descriptions.

Before starting, there are several important questions regarding the game, the first of which is how to choose a reasonable utility function (UF). After a UF has been selected, we must determine what kind of strategy a participant can choose to maximize or minimize the UF; in other words, what is the best response of a participant? Moreover, will a stable state (NE) exist for all the participants? If the game can reach an NE, will it be unique? In this section, we address these questions while formulating the two-stage game.

A. UTILITY FUNCTIONS
1) STAGE ONE OF STACKELBERG GAME
In the SRG, the leader is the system, which makes the first move in the game, and the followers are the mobile users. The UF of the system is the cost of the system after allocating the reward to the participants, which can be formulated as

$$U_S = \sum_{i=1}^{p} E_i \, I_{(d_i = 1)} + \sum_{i=1}^{p} E'_i \, I_{(d_i = 0)} \qquad (10)$$

where I is an indicator function that equals 1 when its condition holds and 0 otherwise. In (10), E_i denotes the reward for offloading participants and E'_i the reward for local participants, with E'_i = εE_i, where ε is a constant. In the first stage of the Stackelberg game, the goal is to minimize the cost of the system, which implies

$$\min_{d, E_i} U_S \quad \text{s.t.} \quad E_i \ge E^o_i, \;\; E'_i \ge E^{l1}_i + E^{l2}_i, \;\; \forall i \in P \qquad (11)$$

2) STAGE TWO OF STACKELBERG GAME
Participant i's utility function is the profit obtained from the system's reward minus the cost of sensing task execution:

$$U^P_i = \begin{cases} E_i - C^o_i & \text{if } d_i = 1 \\ E'_i - C^l_i & \text{if } d_i = 0 \end{cases} \qquad (12)$$

where E_i and E'_i are the different rewards a participant can obtain under the different sensing task execution plans. In the second stage of the Stackelberg game, the utility function should be maximized for each participant:

$$\max_{d} U^P_i \qquad (13)$$
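The two utilities in (10) and (12) can be computed straightforwardly from a decision profile. Below is a minimal sketch under the paper's notation; the function signatures are our own illustrative assumptions.

def participant_utility(d_i: int, E_i: float, E_local: float,
                        C_cloud: float, C_local: float) -> float:
    """U^P_i of Eq. (12): reward minus execution cost.

    E_i is the offloading reward and E_local = epsilon * E_i the local reward.
    """
    return (E_i - C_cloud) if d_i == 1 else (E_local - C_local)

def system_cost(d: list[int], E: list[float], eps: float) -> float:
    """U_S of Eq. (10): total reward paid out, with E'_i = eps * E_i."""
    return sum(E[i] if d[i] == 1 else eps * E[i] for i in range(len(d)))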

B. GAME FORMULATION AND PROPERTY
Definition 1 (Stackelberg Equilibrium): (d*, E*_i) is a Stackelberg equilibrium for the proposed game if it satisfies the following conditions for any value of (d, E_i):

$$U_S(E^*_i, d^*) \ge U_S(E_i, d^*), \qquad U^P_i(d^*_i, d_{-i}) \ge U^P_i(d'_i, d_{-i}) \qquad (14)$$

According to Definition 1, the Stackelberg equilibrium can be obtained as follows: in the PDG, the equilibrium depends on the followers' optimal responses in their sensing task execution plans, from which they obtain the optimal strategy profile d*. In the SRG, the system uses the optimal strategy profile of the PDG to obtain the optimal reward E*_i for the participants. Therefore, we need to analyze the PDG first in order to solve the SRG.

First, we define the best response (BR) in the PDG. The BR is a central concept in game theory: it is the strategy that produces the maximum profit for a player, given the other players' strategies.
Definition 2 (Best Response in a Decision-Making Game): Participant i's strategy d*_i is the best response to the strategies d_{-i} of the other participants if

$$U^P_i(d^*_i, d_{-i}) > U^P_i(d'_i, d_{-i}), \quad \forall d_i \in d, \; d^*_i \neq d'_i \qquad (15)$$

In the PDG, all the participants act according to their BR when playing the game. Hence, the PDG eventually reaches a stable point, which we call the NE.

C. SOLUTION OF STACKELBERG GAME
1) STAGE 2 – PDG
In order to solve the Stackelberg game, we apply backward induction: we first solve the second stage of the game, taking the rewards E_i and E'_i as given in Stage 2.
Theorem 1 (Best Response Strategy): According to Definition 2, the action set of participant i is d_i = {0, 1}, where 0 means local execution and 1 means cloud execution. Here we discuss which action of participant i is the best response to the other participants' actions. We assume the best response of participant i is d*_i, given the fixed strategy profile d_{-i} of the other participants. Thus we have

$$U^P_i(d^*_i, d_{-i}) = \begin{cases} E_i - C^o_i + U^P_i(d_{-i}) & \text{if } d^*_i = 1 \\ E'_i - C^l_i + U^P_i(d_{-i}) & \text{if } d^*_i = 0 \end{cases} \qquad (16)$$

Since in a BR d*_i ≠ d'_i, assume first that d*_i = 1; then

$$U^P_i(1, d_{-i}) > U^P_i(0, d_{-i})$$

from which we have

$$E_i - C^o_i + U^P_i(d_{-i}) \ge E'_i - C^l_i + U^P_i(d_{-i}) \;\;\Longrightarrow\;\; C^o_i - C^l_i \le E_i - E'_i$$

On the other hand, when d*_i = 0, we have

$$C^o_i - C^l_i \ge E_i - E'_i \qquad (17)$$

As (17) shows, we need to discuss the relation between C^l_i and C^o_i. According to (5) and (8), the left side of (17) contains

$$w_1\Big(\frac{B_i}{R_i(d)} + \frac{D_i}{F^c_i}\Big) + w_2\frac{P_i B_i}{R_i(d)} - C^l_i$$

from which we can see that the data rate achieved when participants offload the sensing task to the cloud strongly affects the cost of sensing task execution. Based on (9), when all the participants want to offload the task, they suffer high interference in the channel, which increases the cost of offloading. Thus, in the PDG, the best response is for participants to choose cloud execution whenever it is profitable, which implies

$$d^*_i = \begin{cases} 1 & \text{if } C^o_i - C^l_i \le E_i - E'_i \\ 0 & \text{otherwise} \end{cases} \qquad (18)$$

To simplify the problem, we assume all the participants are homogeneous, i.e., they are under the same channel condition and use the same transmission power; thus P_1 H_{1,b} = P_2 H_{2,b} = ... = P_p H_{p,b} = k. Suppose d*_i = 1 is the best response of participant i. According to (9) and (18), we obtain

$$0 < \sum_{i=1}^{p} k\, I_{(d_i = 1)} \le \frac{k}{2^{\frac{w_1 B_i + w_2 B_i P_i}{(C^l_i + E_i - E'_i - w_1 \frac{D_i}{F^c_i})W}} - 1} - \sigma^2 \qquad (19)$$

When the sum of k I_{(d_i=1)} exceeds the range of (19), the utility function decreases, so the participants choose to execute locally. We can thus obtain the threshold of the strategy profile d* of the participants:

$$\sum_{i=1}^{p} I_{(d_i = 1)} = \frac{1}{2^{\frac{w_1 B_i + w_2 B_i P_i}{(C^l_i + E_i - E'_i - w_1 \frac{D_i}{F^c_i})W}} - 1} - \frac{\sigma^2}{k} \qquad (20)$$
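Condition (18) together with the interference bound (19) gives a simple numerical test for how many homogeneous participants can profitably offload. The sketch below evaluates that bound; it is a rough illustration under the homogeneity assumption (equal k = P_i*H_{i,b} for all i), and the variable names are ours.

def max_offloaders(k: float, noise: float, W: float,
                   w1: float, w2: float, B: float, P: float,
                   C_local: float, E_off: float, E_loc: float,
                   D: float, F_cloud: float) -> float:
    """Right-hand side of Eq. (20): the threshold on the number of
    offloading participants for which d_i = 1 remains a best response."""
    A = w1 * B + w2 * B * P                       # A_i = w1*B_i + w2*B_i*P_i
    gain = C_local + E_off - E_loc - w1 * D / F_cloud
    exponent = A / (gain * W)
    return 1.0 / (2.0 ** exponent - 1.0) - noise / k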

2) STAGE 1 – SRG
From Stage 2, we obtain the optimal strategy profile d*. Now we solve Stage 1. According to (10) and (19), together with E'_i = εE_i, we obtain

$$\min_{d^*, E_i} \; \sum_{i=1}^{n} E_i I_{(d_i=1)} + \sum_{i=1}^{n} \varepsilon E_i I_{(d_i=0)} = \left(\frac{1}{2^{\frac{A_i}{B_i + C_i E_i}} - 1} - \frac{\sigma^2}{k}\right) E_i + \left(n - \left(\frac{1}{2^{\frac{A_i}{B_i + C_i E_i}} - 1} - \frac{\sigma^2}{k}\right)\right)\varepsilon E_i = (1-\varepsilon)\left(\frac{1}{2^{\frac{A_i}{B_i + C_i E_i}} - 1} - \frac{\sigma^2}{k}\right) E_i + \varepsilon n E_i \qquad (21)$$

where

$$A_i = w_1 B_i + w_2 B_i P_i, \qquad B_i = \Big(C^l_i - w_1 \frac{D_i}{F^c_i}\Big) W, \qquad C_i = (1 - \varepsilon) W$$

and n is the total number of participants in MCS. In Stage 1, we want to find the smallest value of (21); thus, the derivative of (21) is

$$\frac{d U_S}{d E_i} = \frac{(1-\varepsilon)\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)}{\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)^2} + \frac{A_i C_i \ln 2\,(1-\varepsilon)\, E_i\, 2^{\frac{A_i}{B_i + C_i E_i}} \frac{1}{(B_i + C_i E_i)^2}}{\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)^2} - \left[\frac{(1-\varepsilon)\sigma^2}{k} - \varepsilon n\right] \qquad (22)$$


where ε ∈ (0, 1) and A_i, B_i, C_i > 0. We then need to examine the second derivative of U_S, namely

$$\frac{d^2 U_S}{d E_i^2} = \frac{d}{d E_i}\left[\frac{(1-\varepsilon)\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)}{\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)^2} + \frac{A_i C_i \ln 2\,(1-\varepsilon)\, E_i\, 2^{\frac{A_i}{B_i + C_i E_i}} \frac{1}{(B_i + C_i E_i)^2}}{\Big(2^{\frac{A_i}{B_i + C_i E_i}} - 1\Big)^2}\right] \qquad (23)$$

According to (23), the second derivative d²U_S/dE_i² is positive; thus, U_S is convex in E_i. Setting the first derivative of U_S to 0, we obtain the estimated E_i:

$$E^{\min}_i = \frac{A_i - B_i}{C_i\Big(\frac{\sigma^2}{k} - \frac{\varepsilon n}{1-\varepsilon}\Big)} - \frac{B_i}{C_i} \qquad (24)$$

Taking into account the feasible range of E_i, namely the constraints in (11), we have

$$E_i = \begin{cases} \max\big[E^o_i,\; E^{l1}_i + E^{l2}_i\big] & \text{if } E^{\min}_i < E^o_i \text{ or } E^{\min}_i < E^{l1}_i + E^{l2}_i \\ E^{\min}_i & \text{otherwise} \end{cases} \qquad (25)$$

According to (25), we can obtain the optimal reward from the system for every participant individually, based on how the participant executes the sensing task.
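The reward rule in (24)-(25) is a closed-form candidate clamped to the cost constraints of (11). The sketch below mirrors that rule; it relies on our reconstruction of (24) above, so treat it as illustrative rather than definitive.

def optimal_reward(A: float, B: float, C: float, noise: float, k: float,
                   eps: float, n: int, E_cloud: float, E_local: float) -> float:
    """Candidate reward E_i from Eq. (24), clamped as in Eq. (25).

    E_cloud = E^o_i, the energy cost of cloud execution.
    E_local = E^{l1}_i + E^{l2}_i, the energy cost of local execution.
    """
    e_min = (A - B) / (C * (noise / k - eps * n / (1.0 - eps))) - B / C
    if e_min < E_cloud or e_min < E_local:
        return max(E_cloud, E_local)      # enforce the constraints of Eq. (11)
    return e_min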

D. ENERGY HARVEST MODEL
We now introduce the battery level update of each participant after the reward is allocated. The mobile devices in MCS are equipped with EH components.

We use E^C_i, E^H_i, E^N_i, E^I_i and E^B_i to denote the energy consumption, the energy that can be harvested as a reward, the new energy level, the initial energy level and the battery capacity of participant i, respectively. According to (2), (4) and (7), we have

$$E^C_i = \begin{cases} E^o_i & \text{if } d_i = 1 \\ E^{l1}_i + E^{l2}_i & \text{if } d_i = 0 \end{cases} \qquad (26)$$

According to the energy-aided reward in MCS, we have

$$E^N_i = \begin{cases} E^B_i & \text{if } E^I_i - E^C_i + E^H_i \ge E^B_i \\ E^I_i - E^C_i + E^H_i & \text{otherwise} \end{cases} \qquad (27)$$

Equation (27) gives the new battery level of a participant after energy harvesting.
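The battery update in (26)-(27) is a clamp of harvest-minus-consumption at the battery capacity. A minimal sketch follows; the argument names are ours.

def update_battery(initial: float, consumed: float, harvested: float,
                   capacity: float) -> float:
    """New battery level E^N_i per Eq. (27): the level after spending the
    execution energy E^C_i and harvesting the reward E^H_i, capped at E^B_i."""
    new_level = initial - consumed + harvested
    return min(new_level, capacity)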

V. REWARD-AIDED SENSING TASK EXECUTION MECHANISM
In this section, we introduce the reward-aided sensing task execution mechanism and the main algorithms. We suppose that the system publishes a sensing task that requires the mobile devices in a specific area to record a video for data mining.

A. MECHANISM DESIGN
• The system announces the task to all the participants in the specific area, including all the parameters of the sensing task (the size of the task, required CPU, etc.) and the rewards offered for the different sensing task execution plans;
• Based on the parameters of the sensing task, the parameters of the participants (battery level, bandwidth, data plan, and CPU), the channel in the mobile network and the reward, the participants decide whether to execute the sensing task locally or offload it to the server;
• The system allocates the rewards to the participants based on their sensing task execution plans. The participants receive the energy and renew their battery levels;
• The cost of the system is optimized and the lifetime of the system is prolonged.

B. MAIN ALGORITHMS
Based on Theorem 1, Algorithm 1 is designed to achieve the optimal strategy profile in the PDG. In other words, after Algorithm 1 has been carried out, all the participants are separated into two groups. In this algorithm, the costs of execution are first calculated and ranked in ascending order; then, based on the best response, the participants make their decisions. Algorithm 1 thus yields the optimal decision profile that is fed into Stage 1 of the Stackelberg game. Algorithm 2 determines the reward that is allocated to the participants; it is based on the utility function of the SRG.

Algorithm 1 Participant Decision-Making Game
1: Input: metrics of the channel, the task, the participants' abilities and the reward
2: Output: d*
3: Let C^o_i[c^o_1 ... c^o_i], C^l_i[c^l_1 ... c^l_i] and d[d_1 ... d_i] be new arrays
4: Let i = 1
5: for i = 1 to n do
6:   C_i = C^o_i − C^l_i
7: end for
8: Sort C_i in ascending order
9: Let j = 1
10: for j = 1 to n do
11:   if $\sum_{i=1}^{n} k I_{(d_i=1)} \le \frac{k}{2^{\frac{w_1 B_i + w_2 B_i P_i}{(C^l_i + E_i - E'_i - w_1 D_i/F^c_i)W}} - 1} - \sigma^2$ then
12:     d_j = 1
13:     d* = d_j ∪ d
14:   end if
15: end for
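For concreteness, the following Python sketch mirrors the structure of Algorithm 1: it ranks participants by the cost gap C^o_i − C^l_i and greedily admits offloaders while the interference bound of (19)-(20) still holds. It is our own illustrative rendering (the helper max_offloaders is the threshold sketch given after (20)), not the authors' implementation.

def pdg_decisions(cost_cloud: list[float], cost_local: list[float],
                  thresholds: list[float]) -> list[int]:
    """Greedy rendering of Algorithm 1.

    thresholds[i] is participant i's offloading bound from Eq. (20),
    e.g. computed with max_offloaders(); participants with the smallest
    cost gap C^o_i - C^l_i are allowed to offload first.
    """
    n = len(cost_cloud)
    order = sorted(range(n), key=lambda i: cost_cloud[i] - cost_local[i])
    d = [0] * n
    offloaders = 0
    for i in order:
        # admit participant i as an offloader only while the number of
        # concurrent offloaders still respects the interference bound
        if offloaders + 1 <= thresholds[i]:
            d[i] = 1
            offloaders += 1
    return d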

Algorithm 2 System Reward Allocation
1: Input: d
2: Output: reward E_i and the new battery level E^N_i
3: Let reward = 1
4: for i = 1 to n do
5:   if d_i ≠ 0 then
6:     i++
7:   end if
8: end for
9: $E_i = \max\Big(\frac{A_i}{\frac{\sigma^2}{k} - \frac{\varepsilon n}{1-\varepsilon}} - \frac{B_i}{C_i},\; \frac{P_i B_i}{R_i(d)},\; \frac{1}{\varepsilon}\big(\varphi D_i + \frac{P_i B^r_i}{R_i}\big)\Big)$
10: for j = 1 to n do
11:   if d_i = 1 then
12:     E^N_i = E^I_i − E^C_i + E_i
13:   else
14:     E^N_i = E^I_i − E^C_i + εE_i
15:   end if
16: end for

Here, we analyze the battery levels of all the participants to determine whether the proposed mechanism can achieve a longer system lifetime. Although system lifetime is an important parameter in MCS, there has been little work in this area; therefore, a deeper collaborative approach for energy harvesting in MCS will be developed in future work.
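A compact Python rendering of Algorithm 2's reward allocation and battery update is sketched below. It reuses update_battery from the earlier sketch, takes the per-participant rewards as input (e.g., from optimal_reward), and is an illustrative reading of the pseudocode, not the authors' code.

def allocate_rewards(d: list[int], E_opt: list[float], eps: float,
                     E_init: list[float], E_cons: list[float],
                     capacity: list[float]) -> list[float]:
    """Rendering of Algorithm 2: pay E_i to offloaders and eps*E_i to
    local executors, then update every battery level via Eq. (27).

    E_opt[i] is the reward for participant i; E_cons[i] is the execution
    energy E^C_i of Eq. (26).
    """
    new_levels = []
    for i in range(len(d)):
        harvested = E_opt[i] if d[i] == 1 else eps * E_opt[i]
        new_levels.append(update_battery(E_init[i], E_cons[i],
                                         harvested, capacity[i]))
    return new_levels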

VI. PERFORMANCE EVALUATION
We evaluate the performance of the reward-aided sensing task execution mechanism and its algorithms through simulations. The simulation results are obtained using MATLAB on a computer with an Intel i5 at 1.3 GHz.

A. SIMULATION SETUP
We simulate an MCS environment that consists of 20 participants and a sensing task with a data size of 420 KB and 1000 Megacycles of required CPU. The computational capability of each participant is randomly selected from 1.0, 0.8 and 0.5 GHz. The bandwidth of the channel is set to 5.5 MHz, the transmission power is 100 mW, and the background noise is −100 dBm. The capability of the cloud is 100 GHz.

To represent the sensing quality, we propose the sensuality metric to measure the QoS of the system [36], which is denoted by ψ:

$$\psi = \frac{\sum_{i=1}^{p} \sigma_1 F^c_i I_{(d_i=1)} + \sum_{i=1}^{p} \sigma_2 F_i I_{(d_i=0)}}{Expected} \qquad (28)$$

where Expected is the expected quality level of the MCS system; here, we set Expected = 100. This QoS index indicates how well MCS executes the sensing task; since the cloud has more computation power for executing the sensing task, we set 1 > σ_1 > σ_2 > 0.
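The QoS metric of (28) is a weighted sum of the computational capacities engaged by the chosen execution plans. A one-function sketch, with our own argument names:

def sensing_quality(d: list[int], F_local: list[float], F_cloud: float,
                    sigma1: float, sigma2: float,
                    expected: float = 100.0) -> float:
    """QoS index psi of Eq. (28), with 1 > sigma1 > sigma2 > 0."""
    total = sum(sigma1 * F_cloud if d[i] == 1 else sigma2 * F_local[i]
                for i in range(len(d)))
    return total / expected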

B. PERFORMANCE EVALUATION OF THE REWARD-AIDED SENSING TASK EXECUTION MECHANISM
The following simulation results provide insight into the performance of the reward-aided sensing task execution mechanism. The metrics of interest include the utilities of the system and the participants, the system lifetime and the sensing quality, along with the effects of the number of participants, the sensing task size and the reward. The performance of the proposed mechanism is also compared with that of the traditional MCS sensing task execution strategy, in which all tasks are sensed and executed locally. In addition to the local strategy, we also adopt a non-cooperative game as a further comparison [37]. In the non-cooperative game, based on the participant's utility function in (12), the participants compete under a fixed reward to maximize their utility by offloading the sensing task. Thus, we set the non-cooperative algorithm and local sensing task execution as the benchmarks.

FIGURE 3. System/participants utility and task size.

FIGURE 4. System utility and participants.

In Fig. 4, we show the utilities of the system and the participants in a scenario in which the MCS system consists of 20 participants with a fixed reward, for task sizes ranging from 0 KB to 5000 KB. According to the figure, when the task size is small, the utilities are stable because most of the participants execute a small task locally. As the task size increases, the participants' utility decreases, while the system's utility increases. Note that the system's and participants' utilities intersect when the task size is approximately 1250 KB, which means that in this scenario this task size is optimal for the MCS system. Compared to the proposed mechanism, the utility of the traditional sensing task execution method is much smaller due to the low computational capability.


FIGURE 5. System lifetime and task size.

FIGURE 6. System lifetime and reward.

FIGURE 7. System lifetime and participants.

System lifetime is a crucial metric in MCS and indicates how healthy an MCS is. In this paper, we adopt the reward-aided mechanism to prolong the system lifetime of MCS. We define the system lifetime in terms of the lowest battery level among the participants. Figs. 5-7 show the impacts of the sensing task size, the reward and the number of participants, respectively, on the system lifetime. Fig. 5 shows the results of both mechanisms, with and without energy harvesting; the proposed mechanism achieves a slower decrease and a better lifetime than the other methods. Fig. 7 shows that with the same reward, the proposed mechanism achieves a longer lifetime than the traditional one and the non-cooperative game. Fig. 7 also shows how the number of participants affects the system lifetime: as the number of participants increases, the system lifetime stabilizes.

FIGURE 8. Sensing quality and reward.

Fig. 8 shows the relation between reward and sensingquality. According to the figure, the sensing quality increaseswhen the reward increases because more participants willoffload the sensing task for execution, which achieves bet-ter quality. However, after a certain point, as the rewardincreases, the sensing quality does not change. This is due tothe interference of the channel, as no more participants canoffload the sensing task.

VII. CONCLUSION
In this paper, a reward-aided sensing task execution mechanism for MCS has been proposed. The proposed mechanism aims to improve the sensing quality and prolong the system lifetime by assigning energy as a reward to the participants. The proposed mechanism adopts game theory to solve this multi-objective optimization problem. The mechanism presented in this paper considers both the network channel conditions and the participants' abilities and adopts energy as the reward for the participants. Extensive simulations have demonstrated the advantages of the proposed mechanism, which yields a better QoS and a longer system lifetime for MCS.

In future work, we plan to enable deeper collaboration between energy harvesting and MCS. We will investigate sensing data and energy transmission in MCS to achieve a better QoS for MCS.

REFERENCES
[1] B. Guo et al., "Mobile crowd sensing and computing: The review of an emerging human-powered sensing paradigm," ACM Comput. Surv., vol. 48, no. 1, p. 7, 2015.
[2] D. Zhao, H. Ma, and S. Tang, "Coupon: Cooperatively building sensing maps in mobile opportunistic networks," in Proc. IEEE 10th Int. Conf. Mobile Ad-Hoc Sensor Syst. (MASS), Oct. 2013, pp. 295-303.
[3] Y. Tobe, I. Usami, Y. Kobana, J. Takahashi, G. Lopez, and N. Thepvilojanapong, "vCity Map: Crowdsensing towards visible cities," in Proc. IEEE SENSORS, Nov. 2014, pp. 17-20.
[4] S.-T. Wang, C.-L. Fan, Y.-Y. Huang, and C.-H. Hsu, "Toward optimal crowdsensing video quality for wearable cameras in smart cities," in Proc. IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), Apr. 2015, pp. 624-629.
[5] L. G. Jaimes, I. J. Vergara-Laurens, and A. Raij, "A survey of incentive techniques for mobile crowd sensing," IEEE Internet Things J., vol. 2, no. 5, pp. 370-380, Oct. 2015.
[6] C. U. Saraydar, N. B. Mandayam, and D. J. Goodman, "Efficient power control via pricing in wireless data networks," IEEE Trans. Commun., vol. 50, no. 2, pp. 291-303, Feb. 2002.
[7] F. Meshkati, H. V. Poor, S. C. Schwartz, and R. V. Balan, "Energy-efficient resource allocation in wireless networks with quality-of-service constraints," IEEE Trans. Commun., vol. 57, no. 11, pp. 3406-3414, Nov. 2009.
[8] Y. Wang, R. Chen, and D.-C. Wang, "A survey of mobile cloud computing applications: Perspectives and challenges," Wireless Pers. Commun., vol. 80, no. 4, pp. 1607-1623, 2015.
[9] K. Kumar, J. Liu, Y.-H. Lu, and B. Bhargava, "A survey of computation offloading for mobile systems," Mobile Netw. Appl., vol. 18, no. 1, pp. 129-140, Feb. 2013.
[10] A. Galeotti, S. Goyal, M. O. Jackson, F. Vega-Redondo, and L. Yariv, "Network games," Rev. Econ. Stud., vol. 77, no. 1, pp. 218-244, 2010.
[11] H.-Y. Shi, W.-L. Wang, N.-M. Kwok, and S.-Y. Chen, "Game theory for wireless sensor networks: A survey," Sensors, vol. 12, no. 7, pp. 9055-9097, 2012.
[12] H. Von Stackelberg, Market Structure and Equilibrium. Springer, 2010.
[13] J. Nash, "Non-cooperative games," Ann. Math., vol. 54, no. 2, pp. 286-295, 1951.
[14] D. Yang, G. Xue, X. Fang, and J. Tang, "Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing," in Proc. 18th Annu. Int. Conf. Mobile Comput. Netw., 2012, pp. 173-184.
[15] B. Kantarci, K. G. Carr, and C. D. Pearsall, "SONATA: Social network assisted trustworthiness assurance in smart city crowdsensing," Int. J. Distrib. Syst. Technol., vol. 7, no. 1, pp. 59-78, 2016. [Online]. Available: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=s10.4018/IJDST.2016010104
[16] S. Ji and T. Chen, "Incentive mechanisms for discretized mobile crowdsensings," IEEE Trans. Wireless Commun., vol. 15, no. 1, pp. 146-161, Jan. 2016.
[17] K. Han, C. Zhang, J. Luo, M. Hu, and B. Veeravalli, "Truthful scheduling mechanisms for powering mobile crowdsensing," IEEE Trans. Comput., vol. 65, no. 1, pp. 294-307, Jan. 2016.
[18] H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, "Quality of information aware incentive mechanisms for mobile crowd sensing systems," in Proc. 16th ACM Int. Symp. Mobile Ad Hoc Netw. Comput., 2015, pp. 167-176.
[19] M. Marjanović, L. Skorin-Kapov, K. Pripužić, A. Antonić, and I. P. Žarko, "Energy-aware and quality-driven sensor management for green mobile crowd sensing," J. Netw. Comput. Appl., vol. 59, pp. 95-108, Jan. 2016.
[20] H. N. Pham, B. S. Sim, and H. Y. Youn, "A novel approach for selecting the participants to collect data in participatory sensing," in Proc. IEEE/IPSJ 11th Int. Symp. Appl. Internet (SAINT), Jul. 2011, pp. 50-55.
[21] H. Xiong, D. Zhang, G. Chen, and L. Wang, "CrowdTasker: Maximizing coverage quality in piggyback crowdsensing under budget constraint," in Proc. IEEE Int. Conf. Pervasive Comput. Commun., Mar. 2015, pp. 55-62.
[22] M. Zhang et al., "Quality-aware sensing coverage in budget-constrained mobile crowdsensing networks," IEEE Trans. Veh. Technol., vol. 65, no. 9, pp. 7698-7707, Sep. 2016.
[23] Z. Zheng, F. Wu, X. Gao, H. Zhu, G. Chen, and S. Tang, "A budget feasible incentive mechanism for weighted coverage maximization in mobile crowdsensing," IEEE Trans. Mobile Comput., vol. 16, no. 9, pp. 2392-2407, Sep. 2017.
[24] S. Ulukus et al., "Energy harvesting wireless communications: A review of recent advances," IEEE J. Sel. Areas Commun., vol. 33, no. 3, pp. 360-381, Apr. 2015.
[25] K. Yang, Q. Yu, S. Leng, B. Fan, and F. Wu, "Data and energy integrated communication networks for wireless big data," IEEE Access, vol. 4, pp. 713-723, 2016.
[26] K. Huang, M. Kountouris, and V. O. K. Li, "Renewable powered cellular networks: Energy field modeling and network coverage," IEEE Trans. Wireless Commun., vol. 14, no. 8, pp. 4234-4247, Aug. 2015.
[27] B. Gurakan, O. Ozel, J. Yang, and S. Ulukus, "Energy cooperation in energy harvesting two-way communications," in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2013, pp. 3126-3130.
[28] F. Wang, C. Xu, L. Song, and Z. Han, "Energy-efficient resource allocation for device-to-device underlay communication," IEEE Trans. Wireless Commun., vol. 14, no. 4, pp. 2082-2092, Apr. 2015.
[29] Y. Mao, J. Zhang, Z. Chen, and K. B. Letaief, "Dynamic computation offloading for mobile-edge computing with energy harvesting devices," IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3590-3605, Dec. 2016.
[30] P.-S. Yu, J. Lee, T. Q. S. Quek, and Y.-W. P. Hong, "Traffic offloading in heterogeneous networks with energy harvesting personal cells-network throughput and energy efficiency," IEEE Trans. Wireless Commun., vol. 15, no. 2, pp. 1146-1161, Feb. 2016.
[31] J. Ren, Y. Zhang, R. Deng, N. Zhang, D. Zhang, and X. Shen, "Joint channel access and sampling rate control in energy harvesting cognitive radio sensor networks," IEEE Trans. Emerg. Topics Comput., to be published.
[32] X. Chen, "Decentralized computation offloading game for mobile cloud computing," IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 4, pp. 974-983, Apr. 2015.
[33] M. Xiao, N. B. Shroff, and E. K. P. Chong, "A utility-based power-control scheme in wireless cellular systems," IEEE/ACM Trans. Netw., vol. 11, no. 2, pp. 210-221, Apr. 2003.
[34] T. S. Rappaport, Wireless Communications: Principles and Practice, vol. 2. Upper Saddle River, NJ, USA: Prentice-Hall, 1996.
[35] W. Saad, Z. Han, M. Debbah, A. Hjørungnes, and T. Basar, "Coalitional game theory for communication networks," IEEE Signal Process. Mag., vol. 26, no. 5, pp. 77-97, May 2009.
[36] L. Xiao, T. Chen, C. Xie, H. Dai, and V. Poor, "Mobile crowdsensing games in vehicular networks," IEEE Trans. Veh. Technol., vol. 67, no. 2, pp. 1535-1545, Feb. 2017.
[37] H. Shah-Mansouri, V. W. S. Wong, and J. Huang, "An incentive framework for mobile data offloading market under price competition," IEEE Trans. Mobile Comput., vol. 16, no. 11, pp. 2983-2999, Nov. 2017.

JIEJUN HU received the B.E. degree in computer science from Jilin University, China, in 2012, where she is currently pursuing the Ph.D. degree. Her current research areas are computer networks, future network technologies, resource allocation in networks, and related applications.

KUN YANG (M'02-SM'08) received the Ph.D. degree from the Department of Electronic and Electrical Engineering, University College London (UCL), U.K. He is currently a Full Professor and the Head of the Network Convergence Laboratory, School of Computer Science and Electronic Engineering, University of Essex, U.K. Before joining the University of Essex in 2003, he was with UCL on several European Union research projects, such as FAIN, MANTRIP, and CONTEXT. His current research interests include heterogeneous wireless networks, fixed mobile convergence, and cloud computing. He has authored over 150 journal and conference papers in these fields. He serves on the editorial boards of both IEEE and non-IEEE journals. He is a Fellow of the IET.


LIANG HU received the B.S. degree in computer systems from the Harbin Institute of Technology in 1993, and the Ph.D. degree in computer software and theory in 1999. He is currently a Professor and Ph.D. Supervisor with the College of Computer Science and Technology, Jilin University, China. His main research interests include distributed systems, computer networks, communications technology, and information security systems. As a person in charge or a principal participant, he has finished more than 20 national, provincial, and ministerial level research projects of China.

KEZHI WANG received the B.E. and M.E. degrees from the College of Automation, Chongqing University, China, in 2008 and 2011, respectively, and the Ph.D. degree from the University of Warwick, U.K., in 2015. He was a Senior Research Officer with the University of Essex, U.K. He is currently a Lecturer with the Department of Computer and Information Sciences, Northumbria University. His research interests include wireless communication, signal processing, and mobile cloud computing.
