Cooperative Computing in Sensor Networks (discolab.rutgers.edu/sm/papers/coopcomp03.pdf)


Cooperative Computing in Sensor Networks

Liviu Iftode, Cristian Borcea, and Porlin Kang
Division of Computer and Information Sciences

Rutgers University
Piscataway, NJ 08854, USA

{iftode, borcea, kangp}@cs.rutgers.edu

Abstract

We envision that during the next decade the advances in technology will make sensor networks more powerful. Therefore, they will become part of a larger class of networks of embedded systems that have sufficient computing, communication, and energy resources to support distributed applications. The current software architectures and programming models are not suitable for these new computing environments.

We present a distributed computing model, Cooperative Computing, and the Smart Messages software architecture for programming large networks of embedded systems. In Cooperative Computing, distributed applications are dynamic collections of migratory execution units, called Smart Messages, working to achieve a common goal. Virtually any user-defined distributed application can be implemented using our model. To illustrate this flexibility, we describe the implementation, using Smart Messages, of two previously proposed applications for sensor networks, SPIN and Directed Diffusion. The simulation results for these applications, together with micro-benchmark results for our prototype implementation, demonstrate that Cooperative Computing is a viable solution for programming networks of embedded systems.

1. Introduction

As the cost of embedding computing becomes negligible compared to the actual cost of goods, there is a trend toward incorporating computing and wireless communication capabilities into most consumer products. Therefore, we believe that the next generation of computing systems will be embedded, in a virtually unbounded number, and dynamically connected. Although these systems will penetrate every possible domain of our daily life, the expectation is that they will operate outside our normal cognizance, requiring far less attention from human users than desktop computers do today.

The first illustration of these systems to have received considerable interest in the last couple of years is sensor networks [12, 10, 11]. These networks have severe resource limitations in terms of processing power, amount of available memory, network bandwidth, and energy. We envision, however, that during the next decade sensor networks will become part of a larger class of networks of embedded systems (NES) that have sufficient computing, communication, and energy resources to support distributed applications. For instance, there are already companies that propose computer systems, embedded into cars or video cameras, which are able to communicate with each other [3, 2]. For some of these networks, such as a network of intelligent cameras performing object tracking over a large geographical area, it might be beneficial to perform local computations and to cooperate in order to execute a global task. The nodes may perform sophisticated filtering of data at the node that acquired an image, or even distributed object tracking, rather than running a centralized algorithm at a server. The challenge we face is how to program NES, namely, what the appropriate computing model is, and what system support is necessary to execute distributed applications in these networks.

NES pose a unique set of challenges that make traditional distributed computing models difficult to employ in programming them. The number of devices working together to achieve a common goal is orders of magnitude greater than anything seen so far. These systems are heterogeneous in their hardware architectures, since each embedded system is tailored to perform a specific task. Unlike the Internet, NES are typically deployed in environments void of human attention, where it is unacceptable to expect a human to hit a “reset” button to recover from a failure. NES are inherently fragile, with node and connection failures being the norm rather than the exception. The availability of nodes may vary greatly over time; nodes can become unreachable due to mobility, depletion of energy resources, or catastrophic failures.

The nodes in NES communicate through wireless network interfaces. Hence, they can communicate directly only with nodes within their transmission range. As in most ad hoc networks, the separation between hosts and routers disappears (i.e., each node has to perform routing). However, the scale and heterogeneity encountered in NES, as well as differing application requirements, preclude the existence of a common routing support. Therefore, the flexibility to use multiple routing algorithms in the same network is desirable.

The applications running in NES target specific data or properties within the network, not individual nodes. From an application's point of view, nodes with the same properties are interchangeable. Fixed naming schemes, such as IP addressing, are inappropriate in most situations. The need to target specific data or properties within the network calls for a different naming scheme with dynamic bindings between names and node addresses. A naming scheme based on content or properties is more appropriate for NES than a fixed naming scheme [9].

We propose a distributed computing model, Cooperative Computing, and a software architecture for NES based on execution migration. Cooperative Computing applications consist of migratory execution units, called Smart Messages (SMs), working together to accomplish a distributed task. SMs are user-defined distributed programs (composed of code, data, and execution control state) that migrate through the network searching for nodes of interest (i.e., nodes on which the program needs to run) and execute their own routing at each node in the path. We believe that distributed computing based on execution migration is more suitable for NES than data migration (message passing) due to the volatility and dynamic binding of names to nodes specific to these networks. Cooperative Computing provides flexible support for a wide variety of applications, ranging from data collection and dissemination to content-based routing and object tracking.

Nodes in the network support SMs by providing: (1) a name-based shared memory (tag space) for inter-SM communication and access to the host system, and (2) an architecturally independent environment for SM execution. SMs are self-routing, namely, they are responsible for determining their own paths through the network. SMs name the nodes of interest by properties and self-route to them using other nodes as “stepping stones”. Applications in Cooperative Computing are able to adapt to adverse network conditions by changing their routing dynamically.

To validate the Cooperative Computing model, we have designed and implemented a prototype by modifying Sun Microsystems' Java KVM (Kilobyte Virtual Machine) [1]. We report micro-benchmark results for this prototype running over a testbed consisting of Linux-based HP iPAQs equipped with 802.11 cards for wireless communication. These results indicate that Cooperative Computing is a feasible solution for programming real-world applications. For larger-scale evaluation, we have developed a simulator that executes SMs and allows us to account for both execution and communication time. In this simulator, we have implemented two previously proposed applications for data collection and data dissemination in sensor networks, Directed Diffusion [12] and SPIN [10]. The simulation results show that our model provides high flexibility for user-defined distributed applications while limiting the increase in response time to at most 15% over traditional non-active communication implementations.

The rest of this chapter is organized as follows. The next section describes Cooperative Computing. Section 3 presents the node architecture for our model. In Section 4, we discuss the details of Smart Messages, and Section 5 presents the API for Cooperative Computing. Section 6 shows micro-benchmark results for our prototype implementation. Section 7 describes the applications implemented using SMs, and their simulation results are presented in Section 8. Section 9 discusses related work. We conclude in Section 10.

2. The Cooperative Computing Model

Cooperative Computing is a distributed computing model for large-scale, ad hoc NES. In this model, distributed applications are defined as dynamic collections of migratory execution units, called Smart Messages (SMs), that cooperate in achieving a common goal. The execution of an SM is described in terms of computation and migration phases. The execution performed at each step is determined by the particular properties of that node. On nodes of interest to the current computation, the SM may read and process data. On intermediate nodes, the SM executes only its routing algorithm. During migrations, SMs carry mobile data, the code missing at the destination, and a lightweight execution state.

Nodes in the network cooperate by providing an architecturally independent programming environment (virtual machine) for SM execution and a name-based shared memory (tag space) for inter-SM communication and interaction with the host system. SMs, along with the system support provided by nodes, form the Cooperative Computing infrastructure, which allows programming user-defined distributed applications in NES.

In our model, a new distributed application can be developed without a priori knowledge of the scale and topology of the network, or of the specific functionality of each node. Placing intelligence in SMs provides this flexibility and also avoids the problem of deploying a new application or protocol in NES, which is difficult or even impossible using conventional approaches [9].

SMs are resilient to network volatility. Over time, certain nodes may become unavailable due to mobility or energy depletion, but SMs are able to adapt by controlling their routing. SMs can carry multiple routing procedures and choose the most appropriate one based on the conditions encountered in the network. Using this feature, SMs can discover routes to nodes of interest even in adverse network conditions.

Moving the execution to the source of data improves performance for applications that need to process large amounts of data. For example, instead of transferring large images through the network for an object tracking application, an SM can perform the analysis of the images at the nodes that acquired them. Thus, it reduces network bandwidth and energy consumption and, at the same time, improves the user-perceived response time. The performance impact of transferring code can be limited by caching code at the nodes.
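To make the bandwidth argument above concrete, the following back-of-envelope sketch compares shipping every image to a central server against migrating a small SM to the data. All numbers (image size, SM size, result size) and all names are illustrative assumptions of ours, not measurements from this work.

```java
// Illustrative traffic comparison; all sizes are assumptions, not measurements.
public class TrafficEstimate {
    // Bytes sent if every image crosses the network to a central server.
    static long centralized(int images, long bytesPerImage) {
        return images * bytesPerImage;
    }

    // Bytes sent if an SM (code + state) visits each node and carries back
    // only a small result; the SM is transferred once per visited node.
    static long cooperative(int nodes, long smBytes, long resultBytes) {
        return nodes * smBytes + resultBytes;
    }

    public static void main(String[] args) {
        long central = centralized(100, 500_000);     // 100 images x 500 KB each
        long coop = cooperative(100, 20_000, 4_000);  // 20 KB SM, 4 KB result
        System.out.println("centralized: " + central + " B, cooperative: " + coop + " B");
    }
}
```

Under these assumed sizes the cooperative strategy moves roughly 25 times fewer bytes; code caching would reduce the per-node SM transfer further.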

Figure 1: Distributed Object Tracking Using Cooperative Computing (the figure shows a tracking SM injected by the user, cameras used just for routing, the camera that performs tracking, and a response SM that migrates back once tracking completes)

Figure 1 shows a simple application that illustrates the novel aspects of computation and communication in Cooperative Computing. The application performs object tracking over a large area (e.g., a campus, airport, or urban highway system) using a network of mobile robots with attached cameras [16]. In the figure, the target is represented by a person who moves across a given geographical region. A user can inject the tracking SM into any node of the network. The SM migrates to a node that acquired an image of a possible target object, analyzes this image, and then may decide to follow the object. The network maintains no routing infrastructure, and the SM is responsible for determining its path to the cameras that detected the object. The SM can use the direction of motion and geographical information to “chase” the object. Once the SM arrives at a new node that has a picture of the object, it generates a task to further analyze the object and its motion. The SM may migrate to neighbor nodes to obtain pictures of the object from different angles or lighting conditions. When the tracking completes, the SM generates a response SM that transports the gathered information back to the user node.

3. Node Architecture

The goal of the SM software architecture is to keep the support required from nodes in the network to a minimum, placing intelligence in SMs rather than in individual nodes. Figure 2 shows the common system support provided by nodes for Cooperative Computing. The admission manager receives incoming SMs, decides whether or not to accept them, and stores these messages in the SM ready queue. The code cache stores frequently used code to reduce the amount of traffic in the network. The virtual machine (VM) acts as a hardware abstraction layer for scheduling and executing tasks generated by incoming SMs. The tag space is a name-based shared memory that stores data objects persistent across SM executions and offers a unique interface to the host's OS and I/O system.

3.1. Admission Manager

To prevent excessive use of its resources (energy, memory, bandwidth), a node needs to perform admission control. Each SM presents its resource requirements within a resource table. The admission manager is responsible for receiving incoming messages and storing them in the SM ready queue, subject to admission restrictions.

Figure 2: Node Architecture (components: admission manager, Smart Messages ready queue, virtual machine, code cache, tag space)

3.2. Code Cache

Commonly, the applications executing in NES have a localized behavior, exhibiting spatial and temporal locality. Therefore, we cache frequently used SM code in order to amortize, over time, the initial cost of transferring the code through the network.

3.3. Virtual Machine

The virtual machine (VM) schedules, executes, and migrates SMs. To migrate an SM, the VM captures the execution state and sends it along with the code and data to the next hop. The VM at the destination resumes the SM from the instruction following the migration invocation. The VM also ensures that an SM conforms to its declared resource estimates; otherwise, the SM can be removed from the system.

3.4. Tag Space

Each node that supports SMs manages a name-based shared memory, called the tag space, consisting of tags that are persistent across SM executions. The tag space contains two types of tags: application tags, which are created by SMs, and I/O tags, which are provided by the system. The I/O tags define the basic hardware of the node and provide SMs with a unique interface to the local OS and I/O system. SMs are allowed to read and write both types of tags, but they can create or delete only application tags.

Figure 3: The Structure of Tags (I/O tag: identifier, access control information, pointer to I/O handler; application tag: identifier, lifetime, access control information, data)

Figure 3 illustrates the structure of application and I/O tags. The identifier is the name of the tag and is similar to a file name in a file system. This identifier is used by SMs for content-based node naming. The access of SMs to tags is restricted based on the access control information associated with each tag. For application tags, the VM associates the access control information carried by the SM that created the tag (i.e., the owner of the tag). For I/O tags, the owner of the device sets the access control information.¹ Application tags and I/O tags differ in terms of functionality and lifetime. Application tags offer persistent memory for a limited lifetime (i.e., application tags are still “alive”, for a certain amount of time, after the SMs that created them have finished execution at the local node); after this time interval, the tags expire, and the node reclaims their memory. I/O tags, on the other hand, are permanent and provide a pointer to an I/O handler (i.e., a system call or an external process) that is capable of serving I/O requests. Below, we list the possible uses of tags:

• Naming: SMs name the nodes of interest using tag identifiers.

• Data storage: An SM can store data in the network by creating its own tags.

• Data exchange and data sharing: Exchanging data through the tag space is the only communication channel among different SMs.

• Routing: SMs may create routing tags at visited nodes to store routing information in the data portion of these tags.

• Synchronization: An SM can block on a specific tag pending a write of this tag by another SM. Once the tag is written, all SMs blocked on it are woken up and made ready for execution.

• Interaction with the host system: An SM can issue commands to, or request data from, the host OS and I/O devices using I/O tags.

¹ More information about access control, protection domains, and SM security in general can be found in [27].
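The tag-space semantics described above can be sketched as follows. This is a simplified single-node model of our own making: access control is omitted, time is a logical clock advanced explicitly by the caller, and a blocked SM is modeled as a callback rather than a suspended task; only the operation names mirror the actual API, and the class name TagSpaceSketch is ours.

```java
import java.util.*;

// Simplified single-node tag space: named tags with lifetimes, plus
// write-triggered wakeup of blocked SMs (modeled here as callbacks).
public class TagSpaceSketch {
    static final class Tag {
        Object data; long expiresAt;
        Tag(Object d, long e) { data = d; expiresAt = e; }
    }

    private final Map<String, Tag> tags = new HashMap<>();
    private final Map<String, List<Runnable>> blocked = new HashMap<>();
    private long now = 0;  // logical clock; advanced by the caller

    public void tick(long dt) { now += dt; }

    public void createTag(String name, long lifetime, Object data) {
        tags.put(name, new Tag(data, now + lifetime));
    }

    public void deleteTag(String name) { tags.remove(name); }

    // Expired tags are reclaimed lazily on access and read as null.
    public Object readTag(String name) {
        Tag t = tags.get(name);
        if (t == null || t.expiresAt <= now) { tags.remove(name); return null; }
        return t.data;
    }

    // A write updates the tag data and wakes every SM blocked on the tag.
    public void writeTag(String name, Object value) {
        Tag t = tags.get(name);
        if (t != null) t.data = value;
        for (Runnable r : blocked.getOrDefault(name, List.of())) r.run();
        blocked.remove(name);
    }

    public void blockSM(String name, Runnable onWake) {
        blocked.computeIfAbsent(name, k -> new ArrayList<>()).add(onWake);
    }
}
```

The lifetime handling mirrors the application-tag semantics (persistence for a bounded interval, then reclamation), and writeTag waking all blocked SMs mirrors the synchronization use listed above.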


4. Smart Messages

SMs are execution units that migrate through the network to execute on nodes of interest, routing themselves at each node in the path toward a node of interest. SMs are comprised of code and data sections (referred to as “bricks”), a lightweight execution state, and a resource table. The code and data bricks can be dynamically used to assemble new, possibly smaller SMs. The ability to incorporate only the necessary code and data bricks in the new SMs can reduce the size of these SMs and, consequently, the amount of traffic in the network (i.e., the code and data carried by SMs are divided into bricks solely for this purpose). The execution state contains the execution context necessary to resume the SM after a successful migration. The resource table consists of resource estimates: execution time, tags to be accessed or created, memory requirements, and network traffic. These resource estimates set a bound on the expected needs of an SM at a node; they are used by the admission manager to make the admission decision.

The SM computation is embodied in tasks. During its execution, a task may modify the data bricks of the SM as well as the local tags to which it has access. It can also migrate, create new SMs, or block on tags of interest. A collection of SMs cooperating toward a common goal forms a distributed application.

4.1. Smart Message Life Cycle

Each SM has a well-defined life cycle at a node: (1) it is subject to admission control, (2) upon admission, a task is generated out of the SM's code and data bricks and scheduled for execution, and (3) after completion at a node, the SM may terminate or may decide to migrate to other nodes of interest.

4.1.1. Admission. To avoid unnecessary resource consumption, the admission manager executes a three-way handshake protocol for transferring SMs between neighbor nodes. First, only the resource table is sent to the destination for admission control. If the SM admission fails, the task is informed and can decide on subsequent actions.

If the SM is accepted, the admission manager checks, using the code bricks' IDs (computed offline by applying a hash function to the code itself), whether the code bricks belonging to this SM are cached locally. It then informs the source to transfer only the missing code bricks (these code bricks are also cached upon arrival).
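The destination side of this handshake can be sketched as follows. This is a simplification of ours: the resource table is reduced to a single memory figure, brick IDs stand in for the code hashes, and the class and method names are illustrative, not taken from the prototype.

```java
import java.util.*;

// Destination-side sketch of the three-way admission handshake:
// (1) decide from the resource table alone, (2) report which code
// bricks are missing from the cache, (3) cache transferred bricks.
public class AdmissionSketch {
    private final long memoryBudget;
    private final Set<String> codeCache = new HashSet<>();  // keyed by brick ID (code hash)

    AdmissionSketch(long memoryBudget) { this.memoryBudget = memoryBudget; }

    // Step 1: admission decision based only on the declared resource needs.
    boolean admit(long requestedMemory) {
        return requestedMemory <= memoryBudget;
    }

    // Step 2: for an accepted SM, tell the source which bricks to send.
    List<String> missingBricks(List<String> brickIds) {
        List<String> missing = new ArrayList<>();
        for (String id : brickIds)
            if (!codeCache.contains(id)) missing.add(id);
        return missing;
    }

    // Step 3: transferred bricks are cached upon arrival.
    void receiveBricks(List<String> brickIds) { codeCache.addAll(brickIds); }
}
```

The point of the protocol shows up in missingBricks: once a brick ID is cached, subsequent SMs carrying the same code brick transfer nothing for it.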

4.1.2. Scheduling and Execution. Upon admission, an SM becomes a task, which is scheduled (in FIFO order) for execution. The execution is non-preemptive; new SMs can be accepted, but they will not be dispatched for execution until the current SM terminates. An executing SM can yield the VM, however, by blocking on a tag. The execution time is bounded by the estimated running time presented during admission (i.e., the VM may terminate an SM that does not respect the admission contract).

We use non-preemptive scheduling for three reasons. First, the execution time of SMs is usually short (often a node is used merely as a “stepping stone” en route to a node of interest). Thus, context switching would incur too much overhead relative to the total execution time of the SM. Second, there is no need to support multiprogramming for interactive programs (unlike traditional computer systems, embedded systems commonly operate unattended). Third, communication always terminates the current SM (i.e., the only form of communication in Cooperative Computing is a migration invocation), and consequently, the idea of using multiple threads in one application to overlap communication and computation does not apply to SM programs. On the other hand, non-preemptive scheduling makes inter-SM synchronization and sharing particularly simple to implement.
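The scheduling discipline can be sketched as follows. This is our simplification: a task merely reports whether it finished or blocked on a tag, migration is not modeled, and the names SmSchedulerSketch and SmTask are illustrative.

```java
import java.util.*;

// FIFO, non-preemptive scheduler sketch: a task leaves the VM only by
// finishing (or blocking on a tag), never by preemption.
public class SmSchedulerSketch {
    interface SmTask {
        boolean run();  // true = finished; false = blocked on a tag
    }

    private final Deque<SmTask> ready = new ArrayDeque<>();
    private final List<SmTask> blocked = new ArrayList<>();

    void enqueue(SmTask t) { ready.addLast(t); }

    // Run the ready queue to exhaustion in FIFO order; returns the number
    // of tasks that ran to completion.
    int drain() {
        int completed = 0;
        while (!ready.isEmpty()) {
            SmTask t = ready.removeFirst();
            if (t.run()) completed++;
            else blocked.add(t);  // yielded the VM by blocking on a tag
        }
        return completed;
    }
}
```

Because each run() call executes without interruption, no locking is needed around tag accesses, which is exactly the inter-SM sharing simplification argued for above.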

4.1.3. Migration. If the current computation does not complete at the local node, the task may continue its execution at another node. The current execution state is captured and migrated along with the code and data bricks. Since a task accesses only mobile data and tags, we have been able to implement an efficient migration, in which only a small part of the entire execution context is saved and transferred through the network. Essentially, we transfer the instruction and stack pointers for all the stack frames corresponding to the current task. It is important to note that migration is explicit (i.e., the programmer calls a “migration” primitive when needed), and that the data transferred during a migration is specified by the programmer as data bricks.


Category               Primitives
Tag Space Operations   createTag(tag_name, lifetime, data); deleteTag(tag_name); readTag(tag_name); writeTag(tag_name, value)
SM Creation            createSMFromFiles(program_files); createSM(code_bricks, data_bricks); spawnSM()
SM Synchronization     blockSM(tag_name, timeout)
SM Migration           migrateSM(tag_names, timeout); sys_migrate(next_hop)

Table 1: Cooperative Computing API

4.2. Smart Message Self-Routing

SMs are self-routing, i.e., they are responsible for determining their own paths through the network. No system support is required by SMs for routing; the entire process takes place at application level. An SM names its destinations in terms of tag identifiers and executes its routing algorithm at each node in the path. SMs may create routing tags at intermediate nodes in the network to store routing information. If routing information is not locally available, an SM may create other SMs for route discovery and block on a routing tag. A write on this tag unblocks the SM, which then resumes its migration. Since tags are persistent for their lifetime, the routing information, once acquired, can be used by subsequent SMs, thus amortizing the route discovery effort.

Each SM has to include at least one routing brick among its code bricks. A single routing algorithm, however, might not always reach a node of interest in the presence of highly dynamic network configurations. Therefore, an SM can carry multiple routing algorithms and switch among them during execution according to the current network conditions. For instance, an SM can use a proactive routing algorithm in a stable and relatively dense network and an on-demand algorithm in a volatile and sparse network. In this way, the SM may complete even if the network conditions change significantly during its execution. A complete description of the self-routing mechanism can be found in [5].
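The use of routing tags can be sketched as follows. This is our simplification: routing tags are modeled as a per-node map from a destination tag name to a next hop, a missing tag stands in for blocking and launching route discovery, and the class name SelfRoutingSketch is illustrative.

```java
import java.util.*;

// Self-routing sketch: an SM consults routing tags hop by hop; a missing
// tag is where it would block and spawn a route discovery SM.
public class SelfRoutingSketch {
    // routingTags.get(node).get(dest) -> next hop, as left by earlier SMs
    static String nextHop(Map<String, Map<String, String>> routingTags,
                          String node, String dest) {
        return routingTags.getOrDefault(node, Map.of()).get(dest);
    }

    // Follow routing tags from src toward dest; returns the visited path,
    // or null if a hop is missing or the hop budget is exhausted.
    static List<String> route(Map<String, Map<String, String>> routingTags,
                              String src, String dest, int maxHops) {
        List<String> path = new ArrayList<>(List.of(src));
        String cur = src;
        while (!cur.equals(dest) && path.size() <= maxHops) {
            String nh = nextHop(routingTags, cur, dest);
            if (nh == null) return null;  // would block on a routing tag here
            path.add(nh);
            cur = nh;
        }
        return cur.equals(dest) ? path : null;
    }
}
```

Because the routing tags persist for their lifetime, a later SM heading to the same destination finds the table already populated, which is the amortization argued for above.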

5. Programming Interface

The API for the Cooperative Computing model, given in Table 1, provides simple yet powerful primitives. SMs can access the tag space, dynamically create new SMs, synchronize on tags, and migrate to nodes of interest.

createTag, deleteTag, readTag, and writeTag. These operations allow SMs to create, delete, or access existing tags. As mentioned in Section 3, these operations are subject to access control. The same interface is used to access the I/O tags: SMs can issue commands to I/O devices by writing into I/O tags, or can obtain I/O data by reading I/O tags.

createSMFromFiles, createSM, and spawnSM. An SM is created by injecting a program file at a node; this program calls createSMFromFiles with a list of program file names to build the new SM structure. An SM may use createSM during execution to assemble a new SM from a subset of its code and data bricks. A createSM call is commonly used to create a route discovery SM when routing information is not locally available. An SM that needs to clone itself calls spawnSM; this primitive returns true in the “parent” and false in the “child” SM. Typically, spawnSM is invoked when the current computation needs to migrate a copy of itself to nodes of interest while continuing the execution at the local node. A newly created SM is inserted into the SM ready queue.

blockSM. This primitive implements the update-based synchronization mechanism. An SM blocks on a tag waiting for a write. To prevent deadlocks, blockSM takes a timeout as a parameter. If no SM writes the tag within the timeout interval, the VM returns control to the blocked SM. A typical example is an SM that blocks on a routing tag while waiting for a route discovery SM to bring a new route.

migrateSM and sys_migrate. The migrateSM primitive implements a high-level content-based migration, provided usually as a library function. It allows applications to name the nodes of interest by tag names and to bound the migration time. When migrateSM returns normally (no timeout), the SM is guaranteed to resume its execution at a node of interest. In case of timeout, the SM regains control at one of the intermediate nodes in the path. Figure 4 presents an example of a typical SM that uses migrateSM. For instance, this SM can be used in the object tracking application described in Section 2. The SM migrates to nodes hosting the tag of interest and executes on these nodes until a certain quality of result is achieved. When this is done, the SM migrates back to the node that injected it into the network.

Typical_SM(tag) {
    do
        migrateSM(tag, timeout);
        <do computation>
    until (<quality of result>);
    migrateSM(back, timeout);
}

Figure 4: Code Skeleton for Typical Smart Message

The migrateSM function implements routing using routing tags, the low-level primitive called sys_migrate, and possibly other SMs for route discovery. An SM can choose among multiple migrateSM functions, which correspond to different routing algorithms. The sys_migrate primitive is used to migrate SMs between neighbor nodes. The entire migration protocol of capturing the execution state and sending the SM to the next hop is implemented in sys_migrate.

6. Prototype Implementation and Evaluation

We have implemented our SM prototype in Java over Linux, thus harnessing well-developed and well-supported Java application development tools and knowledge base.² Specifically, we have modified Sun Microsystems' KVM (Kilobyte Virtual Machine) [1], since it has a small memory footprint (as little as 160 KB, which makes it suitable for resource-constrained devices) and its source code is publicly available.

The SM API is encapsulated in two Java classes: SmartMessage and TagSpace. For efficiency, we have implemented the API as Java native methods. We have also implemented our own serialization mechanism since KVM does not support serialization. Besides the KVM interpreter thread, we have introduced two additional threads for admission control and local code injection. The design of the SM computing platform is not specific to any hardware or software environment. It can be implemented on any virtual machine (e.g., Mate [19], Scylla [25]), any language, or underlying operating system.

² The SM software distribution is freely available at http://discolab.rutgers.edu

In the following, we report micro-benchmark results for our SM prototype. Specifically, we have evaluated the cost of one-hop migration and the cost of tag space operations. Our testbed consists of HP iPAQ 3870s running Linux 2.4.18-rmk3-hh24. Each iPAQ contains an Intel StrongARM 1110 206 MHz 32-bit RISC processor, 32 MB flash memory, and 64 MB RAM. For communication, we use Lucent Orinoco 802.11b Silver PC Cards in ad hoc mode. To factor out the cost of Java method call overhead (approximately 6 µs), we have inserted the code for measuring the costs inside the native methods associated with the SM API.

6.1. Cost of SM Migration

The one-hop migration has three phases: execution capture at the source, SM transfer, and execution resumption at the destination. To be capable of capturing and resuming an SM, we convert the SM into a machine-independent representation. Since the code bricks are already in machine-independent Java class format, only the data bricks and execution state need to be converted. This conversion is done using our simple object serialization mechanism. The serialization of the execution state does not have a significant impact because we do not capture and transfer the local variables, but only the execution control state. Therefore, the important factors that determine the cost of one-hop migration are the data brick serialization, the SM transfer, and the data brick de-serialization.

6.1.1. Data Brick Serialization and De-Serialization. To study the effect of data brick serialization, we have used a fixed size code brick (1197 bytes) and have varied the data brick from 2 KB to 16 KB. The stack frames have also been kept constant (131 bytes for two activation records). The cost of serializing these two stack frames is 0.235 ms. Commonly, the data bricks in an SM consist of a mixture of objects and primitive types. We have used two types of data bricks in this evaluation which represent practical lower and upper bounds for typical data bricks: an array of integers, and an array of objects. The object array represents an upper bound since each of its elements causes a call to the top-level VM serialization method, while the integer array represents a lower bound since there is only one call to the top-level VM serialization method.
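The KVM-level serializer itself is not shown in the paper, but the reason these two data bricks bound the cost can be sketched by counting top-level serialization calls: an integer array can be flattened in a single call, while an object array dispatches once per element. The counter and method names below are our own illustration.

```java
import java.io.ByteArrayOutputStream;

// Illustration (not the paper's KVM code) of why int[] and Object[] data
// bricks bound the serialization cost: one top-level call flattens the
// whole integer array, while each object element needs its own call.
public class SerializeBounds {
    static int topLevelCalls = 0;

    // One top-level call serializes the entire primitive array in bulk.
    static void serializeIntArray(ByteArrayOutputStream out, int[] a) {
        topLevelCalls++;
        for (int v : a) {
            out.write(v >>> 24); out.write(v >>> 16);
            out.write(v >>> 8);  out.write(v);
        }
    }

    // Each object element goes through the top-level serializer separately.
    static void serializeObject(ByteArrayOutputStream out, Integer v) {
        topLevelCalls++;
        int x = v;
        out.write(x >>> 24); out.write(x >>> 16);
        out.write(x >>> 8);  out.write(x);
    }

    static int callsForIntArray(int n) {
        topLevelCalls = 0;
        serializeIntArray(new ByteArrayOutputStream(), new int[n]);
        return topLevelCalls;
    }

    static int callsForObjectArray(int n) {
        topLevelCalls = 0;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < n; i++) serializeObject(out, i);
        return topLevelCalls;
    }

    public static void main(String[] args) {
        System.out.println("int[1024]:     " + callsForIntArray(1024) + " top-level call(s)");
        System.out.println("Integer[1024]: " + callsForObjectArray(1024) + " top-level calls");
    }
}
```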


Figure 5: Cost of Data Brick Serialization

Figure 6: Cost of Data Brick De-Serialization

Figure 5 shows that the serialization cost is below 6 ms for data bricks as large as 16 KB. Commonly, the SMs process the data at the source, and therefore, they carry small amounts of data. The applications that we developed carry less than 2 KB, which costs less than 1 ms to serialize. Figure 6 presents the cost of de-serialization for the same data bricks. We observe that this cost is as much as 30% larger than the cost of serialization. This increase is caused by the memory allocation costs during object de-serialization.

6.1.2. SM Transfer. To evaluate the total cost of migrating an SM (serialization, transfer, de-serialization), we have performed two sets of experiments. In the first, we have varied the code brick size while keeping the data brick size and stack frame size fixed at 53 bytes and 131 bytes, respectively. In the second, we have varied the data brick size while keeping the code brick size and stack frame size fixed at 1197 bytes and 131 bytes.

Figure 7: Effect of Code Brick Size on Single Hop Migration

Figure 8: Effect of Data Brick Size on Single Hop Migration

Figures 7 and 8 show the results of these two experiments for two cases: the code is not cached, and the code is cached. In Figure 7, the time to transfer the SM when the code is cached represents, essentially, the overhead of the three-way handshake protocol since the size of the data bricks and stack frames is small. Figure 8 demonstrates that the data brick size contributes significantly to the total cost of migration. Thus, it is important to have a serialization mechanism with minimal space overhead.

6.2. Cost of Tag Space Operations

Table 2 shows the cost of the tag space operations for application tags. The readTag primitive has the lowest cost since it performs the smallest number of


Tag Space Operation   Time (µs)
createTag             43.4
deleteTag             55.9
readTag               20.8
writeTag              31.7
blockSM               45.8

Table 2: Time for Tag Space Operations

Tag Name                 Time (ms)
gps location             0.20
neighbor list            0.34
image capture (32 KB)    341.23
light sensor             0.11
battery lifetime         25.63
system time              0.09
free memory              0.12

Table 3: Cost of Reading I/O Tags

operations. When an SM reads a tag, the VM interpreter acquires a lock, performs a lookup in the tag space, and returns the data to the SM. The writeTag operation has a slightly higher cost since the interpreter has to check for and unblock any SMs blocked on the tag. The createTag primitive involves an additional step to register a timer for the tag lifetime, while blockSM needs to append the SM to the queue and suspend the current task. The deleteTag primitive has the highest cost since the interpreter needs to wake up all SMs blocked on the tag, remove the timer for the tag lifetime, and remove the tag structure from the tag space.
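The blocking semantics described above can be sketched with Java monitors. This is our own simplification, not the prototype's native-method implementation: lifetime timers and admission control are omitted, and blocked SMs are modeled as blocked threads.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the tag space semantics described above: readTag is a
// locked lookup, writeTag updates the tag and wakes blocked SMs, blockSM
// suspends until the tag is written again or the timeout expires. Lifetime
// timers and admission control from the prototype are omitted.
public class TagSpaceSketch {
    private final Map<String, Object> tags = new HashMap<>();
    private final Map<String, Long> writes = new HashMap<>();

    public synchronized void createTag(String name, Object value) {
        tags.put(name, value); // the prototype also registers a lifetime timer here
    }

    public synchronized Object readTag(String name) {
        return tags.get(name); // lock, lookup, return
    }

    public synchronized void writeTag(String name, Object value) {
        tags.put(name, value);
        writes.merge(name, 1L, Long::sum);
        notifyAll(); // unblock any SMs waiting on this tag
    }

    public synchronized void deleteTag(String name) {
        tags.remove(name);
        notifyAll(); // wake all SMs blocked on the tag
    }

    // Suspend the caller until the tag is written again or timeoutMs passes.
    public synchronized Object blockSM(String name, long timeoutMs)
            throws InterruptedException {
        long seen = writes.getOrDefault(name, 0L);
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (writes.getOrDefault(name, 0L) == seen) {
            long left = deadline - System.currentTimeMillis();
            if (left <= 0) return null; // timeout
            wait(left);
        }
        return tags.get(name);
    }

    public static void main(String[] args) throws InterruptedException {
        TagSpaceSketch ts = new TagSpaceSketch();
        ts.createTag("temperature", 20);
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            ts.writeTag("temperature", 25);
        }).start();
        System.out.println("unblocked with: " + ts.blockSM("temperature", 1000));
    }
}
```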

Table 3 presents the access times for several I/O tags that are currently implemented in our prototype: GPS location query, neighbor discovery, camera image capture, light sensor, and system status inquiry (battery lifetime, system time, and amount of free memory). A typical node with a video camera and a GPS receiver attached is shown in Figure 9. The gps location tag is updated by a user-level process which reads from the GPS serial interface. The locations of the neighbors along with their identifiers can be returned by reading the neighbor list tag. This tag is typically used by geographical routing algorithms carried and executed by SMs. To get the information about neighbor nodes, we have implemented a neighbor discovery protocol which maintains a cache of known neighbors. For the image capture tag, the system also performs YUYV to

Figure 9: Prototype Node with Video Camera and GPS Receiver Attached

RGB format conversion on the captured image before returning it to the tag reader. All the other tag values are obtained directly from Linux using system calls.

7. Applications

To prove that virtually any protocol or application can be written using SMs, we have implemented two previously proposed applications: SPIN [10] and Directed Diffusion [12]. They represent different paradigms for content-based communication and computation in sensor networks: SPIN is a protocol for data dissemination, and Directed Diffusion implements data collection.

7.1. SPIN using Smart Messages

SPIN is a family of adaptive protocols that disseminate information among the nodes of a sensor network. We present an implementation of SPIN-1, which is a three-stage handshake protocol for data dissemination. Each time a node obtains new data, it disseminates this data in the network by sending an advertisement to its neighbors. A node receiving the advertisement checks whether it has already received or requested that data. If not, it sends a request message back to the sender asking for the advertised data. The initiator sends the requested data, and then the process is executed recursively over the entire network.
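Before looking at the SM version in Figure 10, the per-node decision logic of this three-stage handshake can be sketched as plain message handlers. The handler names and return conventions below are ours; SPIN-1 itself specifies only the ADV/REQ/DATA exchange.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the per-node SPIN-1 handshake logic (ADV -> REQ -> DATA).
// The handler names and return conventions are ours, not from [10].
public class SpinNode {
    private final Set<String> have = new HashSet<>();      // data already held
    private final Set<String> requested = new HashSet<>(); // data already asked for

    // A node with new data advertises it to its neighbors.
    public String onNewData(String name) {
        have.add(name);
        return "ADV " + name;
    }

    // On an advertisement, request the data only if it was neither
    // received nor requested before; otherwise stay silent.
    public String onAdv(String name) {
        if (have.contains(name) || requested.contains(name)) return null;
        requested.add(name);
        return "REQ " + name;
    }

    // When the data arrives, the process repeats recursively: the node
    // advertises the data to its own neighbors.
    public String onData(String name) {
        requested.remove(name);
        return onNewData(name);
    }

    public static void main(String[] args) {
        SpinNode n = new SpinNode();
        System.out.println(n.onAdv("reading-17"));  // first ADV triggers a REQ
        System.out.println(n.onAdv("reading-17"));  // duplicate ADV is ignored
        System.out.println(n.onData("reading-17")); // data arrival re-advertises
    }
}
```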

As an example of a Cooperative Computing program, Figure 10 presents the code for our implementation of SPIN using SMs. The tag space at each node hosts two tags: the value of the most recent data received (tagData), and the timestamp associated with this data (tagTimestamp).


 1 DisseminateSM(String tag, int timeout){
 2   int timestamp;
 3   Data data;
 4   String tagData = tag + "data";
 5   String tagTimestamp = tag + "timestamp";
 6   Address src, dest;
 7   while(true){ // SM at source
 8     TagSpace.blockSM(tagData, timeout);
 9     timestamp = TagSpace.readTag(tagTimestamp);
10     if (!SmartMessage.spawnSM()){ // child SM
11       while(true){ // SM at every node
12         src = SmartMessage.getLocalAddress();
13         SmartMessage.sys_migrate(all); // migrate to all neighbors
14         if (timestamp.compareTo((Integer)TagSpace.readTag(tagTimestamp)) <= 0){
15           System.exit(0); // the same or more recent data exists at this node
16         }
17         TagSpace.writeTag(tagTimestamp, timestamp);
18         dest = SmartMessage.getLocalAddress();
19         SmartMessage.sys_migrate(src); // migrate back to source
20         data = TagSpace.readTag(tagData);
21         SmartMessage.sys_migrate(dest); // bring data to destination
22         TagSpace.writeTag(tagData, data);
23       }
24     }
25   }
26 }

Figure 10: SPIN with Smart Messages

The protocol is initiated by injecting a Disseminate SM into a node that produces data. This SM blocks on tagData (line 8) waiting for new data. Each time new data is produced, the SM reads the tagTimestamp and spawns itself (lines 9-10). The “child” SM migrates to the neighbors to advertise the new data (line 13). If a destination node does not have this data or more recent data, the “child” SM updates the tagTimestamp and migrates back to the source to bring the data (lines 14-22). Upon data arrival, the “child” SM recursively executes the same algorithm until the data is disseminated in the entire network.

7.2. Directed Diffusion using Smart Messages

In Directed Diffusion, a sink node requests data by sending “interests” for named data. Data matching an interest is then “drawn” from source nodes toward the sink node. Intermediate nodes can cache and aggregate data; they may also direct interests based on previously cached data. At the beginning, the sink may receive data over multiple paths, but after a while it will reinforce the path providing the best data rate. All future data will arrive on the reinforced path only.

For the implementation of Directed Diffusion using SMs, the tag space at each node hosts three tags: the most recent data value (tagData), the best data rate available at that node (tagDataRate), and the best next hop toward the source (tagBestRoute). Directed Diffusion is initiated by injecting an SM at the sink. The execution of this SM has two main phases: (1) exploration starts at the sink and floods the network to find data of interest, and (2) reinforcement chooses the best path and brings data from source to sink.

If the information of interest is not locally available (no tagDataRate value), the explore SM spawns itself; the “child” SM migrates to all neighbors, while the “parent” SM blocks on tagDataRate. This operation is performed recursively at every node until an SM reaches a node containing the tagDataRate. At this point, the “child” SM migrates back to its parent carrying the discovered data rate. If the new data rate is better than the value stored in tagDataRate, the SM updates tagDataRate with the new value and tagBestRoute with its source node as the best next hop toward the source of data. This update unblocks the “parent” SM, which carries the data rate one hop back. Eventually, the sink node is reached and the reinforcement phase begins.
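The net effect of this exploration phase can be sketched by flattening the spawn/block/migrate mechanics into a recursive graph walk; this is our own simplification, not the paper's SM code. After the walk, each visited node holds the best data rate it has seen (tagDataRate) and the best next hop toward the source (tagBestRoute).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the exploration phase described above, with the spawn/block/
// migrate mechanics flattened into a recursive graph walk (our own
// simplification). After explore(sink) returns, each visited node holds
// tagDataRate (best rate seen) and tagBestRoute (best next hop to source).
public class ExploreSketch {
    final Map<Integer, List<Integer>> neighbors = new HashMap<>();
    final Map<Integer, Double> sourceRate = new HashMap<>();  // data sources
    final Map<Integer, Double> tagDataRate = new HashMap<>();
    final Map<Integer, Integer> tagBestRoute = new HashMap<>();

    // Returns the best data rate reachable from 'node'. A child SM carrying
    // a better rate back corresponds to the tag updates performed here.
    double explore(int node, Set<Integer> visited) {
        if (sourceRate.containsKey(node)) return sourceRate.get(node);
        visited.add(node); // flooding visits each node once
        double best = 0.0;
        for (int n : neighbors.getOrDefault(node, new ArrayList<>())) {
            if (visited.contains(n)) continue;
            double rate = explore(n, visited);
            if (rate > best) { // better rate discovered through neighbor n
                best = rate;
                tagBestRoute.put(node, n);
            }
        }
        tagDataRate.put(node, best);
        return best;
    }

    public static void main(String[] args) {
        ExploreSketch e = new ExploreSketch();
        e.neighbors.put(0, List.of(1));  // node 0 is the sink
        e.neighbors.put(1, List.of(0, 2));
        e.neighbors.put(2, List.of(1));  // node 2 is the source
        e.sourceRate.put(2, 5.0);
        double rate = e.explore(0, new HashSet<>());
        System.out.println("best rate " + rate + ", routes " + e.tagBestRoute);
    }
}
```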

During the reinforcement phase, a collect SM migrates to the best next hop starting from the sink.


Figure 11: Directed Diffusion using Smart Messages

At each intermediate node, this SM spawns; the “child” SM migrates to the best next hop, while the “parent” SM blocks waiting for data. When the SM reaches the source, it spawns new SMs to carry the data one hop back at the promised data rate. Recursively, each blocked SM is awakened by the data arrival and carries the data back until it reaches the sink.
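The path-following behavior of the reinforcement phase can be sketched as follows; the maps and method names stand in for per-node tags and SM hops, and are our own illustration rather than the paper's API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of the reinforcement phase described above: the collect SM follows
// the tagBestRoute tags from the sink to the source, and data is then
// relayed back hop by hop along the same path. The maps and method names
// stand in for per-node tags and SM hops; they are not the paper's API.
public class ReinforceSketch {
    // Follow the best-next-hop tag at each node until the source is reached.
    static List<Integer> reinforcedPath(Map<Integer, Integer> tagBestRoute,
                                        int sink, int source) {
        List<Integer> path = new ArrayList<>();
        int node = sink;
        path.add(node);
        while (node != source) {
            Integer next = tagBestRoute.get(node);
            if (next == null)
                throw new IllegalStateException("no route at node " + node);
            node = next;
            path.add(node);
        }
        return path;
    }

    // Data travels the reinforced path in reverse, one hop at a time.
    static List<Integer> dataPath(List<Integer> path) {
        List<Integer> back = new ArrayList<>(path);
        Collections.reverse(back);
        return back;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> route = Map.of(0, 1, 1, 2); // sink 0 -> 1 -> source 2
        List<Integer> path = reinforcedPath(route, 0, 2);
        System.out.println("collect path " + path + ", data path " + dataPath(path));
    }
}
```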

8. Simulation Results

For large scale evaluation, we have developed an event-driven simulator extended with support for SM execution. The simulator is written in Java to allow rapid prototyping of applications. To get accurate results, both the communication and the execution time have to be accounted for. The simulator provides accurate measurements of the execution time by counting, at the VM level, the number of cycles per VM instruction. To account for the execution time, we have simulated each node with a Java thread, and we have implemented a new mechanism for scheduling these threads inside the JVM. The communication model used in our simulator is “generic wireless”, with contention resolved at the message level. Before any transmission, a node “senses” the medium and backs off in case of contention.

The main goal in conducting the simulation experiments was to quantify the data convergence time for our implementations of SPIN and Directed Diffusion using SMs and to compare these results with the results for traditional message passing implementations. We define the data convergence time as the time when a certain percentage of the total number of nodes have received the data (SPIN) or the data rate (Directed Diffusion). In both cases,

Figure 12: SPIN using Smart Messages

due to flooding, all nodes end up receiving the data and the data rate. SPIN completes after all nodes have received the data, while Directed Diffusion starts the reinforcement phase after all nodes have received the data rate. We use the same network configuration for all experiments. The network has 256 nodes distributed uniformly over a square area, and each node has the same transmission range. The average number of neighbors per node is 4.
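The data convergence time defined above reduces to a small computation over per-node receipt times; the helper below, with hypothetical receipt times, is our own illustration of that definition.

```java
import java.util.Arrays;

// The data convergence time defined above, as a small computation: given
// the time at which each node received the data (or data rate), return the
// time by which a given fraction of the nodes has received it.
public class ConvergenceTime {
    static double convergenceTime(double[] receiveTimes, double fraction) {
        double[] t = receiveTimes.clone();
        Arrays.sort(t);
        int needed = (int) Math.ceil(fraction * t.length); // nodes required
        return t[needed - 1]; // time by which 'needed' nodes have the data
    }

    public static void main(String[] args) {
        double[] times = {0.2, 0.5, 0.9, 1.4}; // hypothetical per-node receipt times (s)
        System.out.println("50% convergence:  " + convergenceTime(times, 0.5) + " s");
        System.out.println("100% convergence: " + convergenceTime(times, 1.0) + " s");
    }
}
```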

The first set of experiments evaluates the data convergence time when only one SM is injected into the network. Figure 11 presents the data convergence time for a single Directed Diffusion SM, with the sink and source located at diagonally opposite corners of the square region. We plot the data convergence time for three different cases of the same SM and a base case for the same application using passive communication (no SM). The top curve shows the time when code caching is not used. In the second curve, we can see an improvement of more than 4 times in performance when code caching is activated during the first execution of the SM in the network. The code is cached when an SM visits a node for the first time and will be used by subsequent SMs during the same execution. The effects of caching are very important in this case because the SMs visit a node multiple times in Directed Diffusion: they travel the network both forward (looking for the source) and backward (diffusing the data rate). In the third curve we can observe a 30% decrease in the completion time when the code is already cached at all nodes. The fourth curve shows the data convergence time for a traditional implementation: the protocol is implemented at each node, only data is transferred through the network, and the execution time is not accounted for. We observe that the degradation in performance


Figure 13: Directed Diffusion - Multiple Smart Messages

for our implementation, when the code is cached at all nodes, compared to the traditional implementation is only 5%. We believe that this is a reasonable price for the flexibility to program any user-defined distributed application in NES.

Figure 12 plots the same curves for a single SPIN SM launched in the network at a node located in a corner of the square area. During the first execution, code caching leads to a 3 times improvement in performance (i.e., reducing the size of SMs is essential for a protocol based on flooding and three-stage communication). The third curve shows a 30% decrease in the completion time (similar to Directed Diffusion) when the code is already cached at all nodes. The completion time increases by 10% to 15% compared to the traditional implementation.

The second set of experiments quantifies the performance of our applications when multiple SMs run simultaneously in the network. Figures 13 and 14 show the data convergence time for both Directed Diffusion and SPIN with the code already cached at the nodes. For these experiments, the data convergence time is the time when a certain percentage of nodes have received the data (or data rate) for all the SMs running in parallel. The nodes at which the SMs start are distributed uniformly in the network. The results show that the data convergence time increases with the number of SMs, but only during the initial flooding phase, because of increased contention in the network. After that, the shapes of the curves are the same, independent of the number of SMs. The results also indicate that SPIN completes faster than Directed Diffusion in all cases (i.e., 2.3 s compared to 3.4 s for the top curves in the figures). The cause is that SPIN floods only the neighbors and then brings the data to them, while Directed Diffusion needs to flood the entire network until it finds the source and then bring the data rate back to all nodes. In the initial phase Directed Diffusion generates more messages in the network, leading to higher contention, but its performance improves as soon as the reinforcement phase begins.

Figure 14: SPIN - Multiple Smart Messages

9. Related Work

SMs have been influenced by the design of mobile agents for IP-based networks [26, 17, 8, 20]. A mobile agent may be viewed as a task that explicitly migrates from node to node, assuming that the underlying network assures its transport between them. SMs apply the general idea of code migration, but focus more on flexibility, scalability, re-programmability, and the ability to perform distributed computing over unattended NES. Unlike mobile agents, SMs are responsible for their own routing in the network. A mobile agent names nodes by fixed addresses and commonly knows the network configuration a priori, while an SM names nodes by content and discovers the network configuration dynamically. Furthermore, the SM software architecture defines the common system support that each node must provide. The goal of this architecture is to reduce the support required from nodes, since nodes in NES possess limited resources.

The SM self-routing mechanism shares some of the design goals of, and leverages work done in, active networks (AN) [7, 24, 21]. SMs differ, however, from AN in several key features. The main difference between them is in terms of programmability. Unlike AN, which target faster communication in IP-based networks, Cooperative Computing defines a distributed computing model for NES whereby several SMs can cooperate, exchange data, and synchronize with each other through the tag space. Additionally, the AN model does not include the migration of execution state, whereas our model does. The migration of execution state for SMs trades off overhead for flexibility in programming sophisticated tasks which require cooperation and synchronization among several entities. For example, this execution state allows SMs to make routing decisions based on the results of the computation done at previously visited nodes.

Research in mobile ad hoc networking [14, 22, 18, 13] has resulted in numerous routing protocols. These protocols have generally been designed for IP-based networks and have primarily targeted traditional mobile computing applications over networks of mobile personal computers. We have leveraged some of these protocols in the routing algorithms used by the SM self-routing mechanism.

Sensor networks represent the first step toward large networks of embedded systems. Most of the research in this area has focused on hardware [15, 23], operating systems [11], or network protocols [12, 10, 4]. Cooperative Computing provides a solution for developing user-defined distributed applications in sensor networks, a crucial issue which has been only marginally tackled so far. As we have demonstrated, Cooperative Computing provides enough flexibility to enable the implementation of previously proposed protocols over our computing platform.

SensorWare [6] is similar to Cooperative Computing in the sense that both are frameworks for programmable NES based on code migration. Therefore, both are suitable for re-programming the network. However, SensorWare supports mobile control scripts and accesses resources through virtual devices, whereas Cooperative Computing supports mobile Java code (i.e., Java is supported on many embedded systems today [1]), execution state migration, and uniform access to resources through tags.

Mate [19] is an efficient virtual machine for sensor networks which can significantly simplify the code development and dissemination effort. The main difference between Cooperative Computing and this research is that Mate targets just the re-programmability of the network, while the programming model remains traditional message passing. SMs, on the other hand, are based on execution migration. An SM transfers not only the code, but also the execution state through the network.

10. Conclusions

We have described a programming model for large scale networks of embedded systems, in which distributed applications are implemented as collections of Smart Messages. The model overcomes the scale, heterogeneity, and connectivity issues encountered in these networks by using execution migration, content-based naming, and self-routing. The experimental results for our prototype implementation demonstrate the feasibility of Cooperative Computing. The implementation and simulation results for two sensor network applications show that our model represents a flexible, yet simple solution for programming large networks of embedded systems.

Acknowledgments

This work was supported in part by the NSF under grant ANI-0121416. The authors would like to thank Ulrich Kremer and Chalermek Intanagonwiwat for our frequent discussions regarding the design of Cooperative Computing. We would also like to thank Philip Stanley-Marbell, Deepa Iyer, and Akhilesh Saxena for their contributions at various stages of this project.

References

[1] Java 2 Platform, Micro Edition (J2ME). http://java.sun.com/j2me/.

[2] Axis Communications. http://www.axis.com.

[3] Sensoria Corporation. http://www.sensoria.com.

[4] Blum, B., Nagaraddi, P., Wood, A., Abdelzaher, T., Son, S., and Stankovic, J. An Entity Maintenance and Connection Service for Sensor Networks. In Proceedings of the First International Conference on Mobile Systems, Applications, and Services (MobiSys 2003) (San Francisco, CA, May 2003), pp. 201–214.

[5] Borcea, C., Intanagonwiwat, C., Saxena, A., and Iftode, L. Self-Routing in Pervasive Computing Environments using Smart Messages. In Proceedings of the 1st IEEE International Conference on Pervasive Computing and Communications (PerCom 2003) (Dallas-Fort Worth, TX, March 2003), pp. 87–96.

[6] Boulis, A., Han, C., and Srivastava, M. Design and Implementation of a Framework for Efficient and Programmable Sensor Networks. In Proceedings of the First International Conference on Mobile Systems, Applications, and Services (MobiSys 2003) (San Francisco, CA, May 2003), pp. 187–200.


[7] Wetherall, D. Active Network Vision and Reality: Lessons from a Capsule-based System. In Proceedings of the 17th ACM Symposium on Operating Systems Principles (SOSP 1999) (Charleston, SC, December 1999), ACM Press, New York, NY, pp. 64–79.

[8] Gray, R., Cybenko, G., Kotz, D., and Rus, D. Mobile Agents: Motivations and State of the Art. In Handbook of Agent Technology, J. Bradshaw, Ed. AAAI/MIT Press, 2002.

[9] Heidemann, J., Silva, F., Intanagonwiwat, C., Govindan, R., Estrin, D., and Ganesan, D. Building Efficient Wireless Sensor Networks with Low-Level Naming. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP 2001) (Banff, Canada, October 2001), ACM Press, New York, NY, pp. 146–159.

[10] Heinzelman, W. R., Kulik, J., and Balakrishnan, H. Adaptive Protocols for Information Dissemination in Wireless Sensor Networks. In Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 1999) (Seattle, WA, August 1999), ACM Press, New York, NY, pp. 174–185.

[11] Hill, J., Szewczyk, R., Woo, A., Hollar, S., Culler, D., and Pister, K. System Architecture Directions for Networked Sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX) (Cambridge, MA, November 2000), ACM Press, New York, NY, pp. 93–104.

[12] Intanagonwiwat, C., Govindan, R., and Estrin, D. Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000) (Boston, MA, August 2000), ACM Press, New York, NY, pp. 56–67.

[13] Li, J., Jannotti, J., De Couto, D., Karger, D. R., and Morris, R. A Scalable Location Service for Geographic Ad Hoc Routing. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000) (August 2000), pp. 120–130.

[14] Johnson, D., and Maltz, D. Dynamic Source Routing in Ad Hoc Wireless Networks. In Mobile Computing, T. Imielinski and H. Korth, Eds. Kluwer Academic Publishers, 1996.

[15] Juang, P., Oki, H., Wang, Y., Martonosi, M., Peh, L., and Rubenstein, D. Energy-Efficient Computing for Wildlife Tracking: Design Tradeoffs and Early Experiences with ZebraNet. In Proceedings of the Tenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-X) (San Jose, CA, October 2002), ACM Press, New York, NY, pp. 96–107.

[16] Jung, B., and Sukhatme, G. S. Cooperative Tracking using Mobile Robots and Environment-Embedded, Networked Sensors. In Proceedings of the 2001 IEEE International Symposium on Computational Intelligence in Robotics and Automation.

[17] Karnik, N., and Tripathi, A. Agent Server Architecture for the Ajanta Mobile-Agent System. In Proceedings of the 1998 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'98) (Las Vegas, NV, July 1998), pp. 66–73.

[18] Karp, B., and Kung, H. Greedy Perimeter Stateless Routing for Wireless Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000) (Boston, MA, August 2000), ACM Press, New York, NY, pp. 243–254.

[19] Levis, P., and Culler, D. Mate: A Tiny Virtual Machine for Sensor Networks. In Proceedings of the Tenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-X) (San Jose, CA, October 2002), ACM Press, New York, NY, pp. 85–95.

[20] Milojicic, D., LaForge, W., and Chauhan, D. Mobile Objects and Agents. In USENIX Conference on Object-Oriented Technologies and Systems (1998), pp. 1–14.

[21] Moore, J., Hicks, M., and Nettles, S. Practical Programmable Packets. In Proceedings of the 20th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2001) (Anchorage, AK, April 2001), pp. 41–50.

[22] Perkins, C., and Royer, E. Ad Hoc On-Demand Distance Vector Routing. In Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 1999) (New Orleans, LA, February 1999), pp. 90–100.

[23] Priyantha, N., Miu, A., Balakrishnan, H., and Teller, S. The Cricket Compass for Context-Aware Mobile Applications. In Proceedings of the 7th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2001) (July 2001), ACM Press, New York, NY, pp. 1–14.

[24] Schwartz, B., Jackson, A., Strayer, W., Zhou, W., Rockwell, R., and Partridge, C. Smart Packets: Applying Active Networks to Network Management. ACM Transactions on Computer Systems 18, 1 (2000), 67–88.

[25] Stanley-Marbell, P., and Iftode, L. Scylla: A Smart Virtual Machine for Mobile Embedded Systems. In Proceedings of the 3rd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA 2000) (Monterey, CA, December 2000), pp. 41–50.

[26] White, J. Mobile Agents. In Software Agents, J. M. Bradshaw, Ed. MIT Press, 1997.

[27] Xu, G., Borcea, C., and Iftode, L. Toward a Security Architecture for Smart Messages: Challenges, Solutions, and Open Issues. In Proceedings of the 1st International Workshop on Mobile Distributed Computing (MDC'03) (May 2003).