
Progressive Joint Coding, Estimation and Transmission Censoring in Energy-Centric Wireless Data Gathering Networks

Sheryl L. Howard and Paul Flikkema
EE Department
Northern Arizona University
Flagstaff, AZ 86001
sheryl.howard,[email protected]

    Abstract

Energy-constrained wireless sensor networks often are designed to measure a spatio-temporal process that is correlated in space, time, or both. The goal of these data-gathering networks is a description of the process that provides the required fidelity with a minimum expenditure of energy. Our approach combines (1) channel coding and estimation/decision processing of coded messages for in-network data censoring with (2) estimation of the encoded and censored information at a fusion center where energy is plentiful. Nodes examine their own messages together with messages from preceding nodes, and compute the fidelity of the estimate at the next node as a function of data censoring proposals. Our algorithm exploits redundancy of two types: intra-message redundancy from channel coding, and inter-message redundancy due to spatio-temporal correlation of the samples. This redundancy is used to alleviate overall energy consumption and message congestion near the fusion center by allowing relaying nodes to censor messages that might otherwise be forwarded, if those messages can be inferred from other messages, given the correlation model. The effect of censoring on fidelity and energy consumption is characterized, and our censoring algorithm is shown to provide significant energy savings.

    1 Introduction

Many wireless sensor network applications involve the estimation of a time-varying vector field at a destination node. Sensor samples are forwarded, often via multihop networking, to the destination or fusion center where the samples are processed into an estimate of the field. The estimate is formed under severe resource constraints, the primary constraint being the energy available to each node. This, along with economic pressure to limit node costs, reduces communication reliability. The goal is an estimate with the best precision possible given limited energy and unreliable communication.

This paper views the problem from a distributed signal processing perspective, and integrates tools normally separated into distinct layers of the network protocol stack.

Source coding, and, for networks, distributed source coding (see the review [1]) is a presentation- or application-layer technique that approaches the problem as one of minimizing uncoded communication rates. Channel coding, an essential link layer technique, plays a role as well due to the poor link quality in practical sensor networks [2]. It is well-known that the separation theorem justifying independent design of source and channel codes breaks down (i) for the small block sizes often found in sensor networks and (ii) when multiple access (not just a collection of point-to-point links) exists. Clearly, source-channel coding should play a role. This is true in both distributed fusion problems (e.g., the consensus problem, where all nodes form an estimate of the entire field) and the centralized fusion problem (where samples are funneled to the destination) considered here. This latter problem is not a special case of the distributed fusion problem, since the distributed fusion problem can be solved with gossiping algorithms [3] leading to uniform traffic loads across the network. In centralized fusion without source coding, traffic, and hence congestion, rapidly increases as the samples merge on their way to the fusion center. The problem is not limited to the highest and lowest layers, but directly challenges medium-access design as well. A recent paper [4] treats congestion at the MAC layer in isolation; our cross-layer approach tackles the root cause of congestion and energy inefficiency.

This paper is organized as follows. Section 2 discusses our philosophy of reducing energy consumption in the network. Section 3 presents related work, and Section 4 presents the source model used for this paper. Section 5 discusses combined source-channel (SC) decoding/estimation at the fusion center, and Section 6 proposes censoring within the network. Section 7 presents the simulation model used in this paper. Section 8 discusses a relay network, and shows results for combined SC decoding/estimation without transmission censoring in a relay network. Section 9 presents results for combined SC decoding/estimation with censoring. Section 10 concludes this paper.

2 Approach

We assume that a prior model for the measured field exists in the form of a multivariate distribution over a defined set of samples in space and/or time. This model represents knowledge about the field that can be used in a Bayesian inference model, often implemented as a simple MAP estimator. This assumption is completely general in that the prior model may be uninformative. More generally, bounds can usually be established with high confidence in the absence of a completely specified distribution. It was shown in [5] how the channel-coded samples received at the destination form a SC product codeword (formed from stacking the received samples) that combines deterministic parity constraints of the channel codes with probabilistic constraints encoding the correlation of the prior model for the measured field.

This paper extends [5] to the case in which every node in the network serves as a relay. Relaying nodes can exploit the redundancy in the information they receive (and collect themselves) to make informed decisions on whether to forward samples, or censor a portion of them. Our algorithm spans layers since it employs SC decoding to evaluate proposed censoring decisions. Censoring is a form of distributed adaptive signal processing, as the decision is a function of the model, the data, and the targeted fidelity. From a coding perspective, the algorithm progressively punctures a dynamic SC product codeword as it is shuttled to the destination. The resulting benefits are two-fold. First, the amount of data transmitted, and hence energy consumption, is reduced. Congestion is also relieved: as the samples travel to the destination, more redundant information may be available, so transmission rates are naturally throttled where most needed.

    3 Related Work

Several researchers have addressed the question of what tools to use in getting the required information from a sensor network with the minimum expenditure of energy. The problem is complex because it touches almost every traditional layer of wireless networking. Our adaptive SC censoring algorithm shares a goal of estimate and forward (EF) techniques (see [6] and the references therein) studied in cooperative communication: maximizing the amount of information recovered at the destination. However, our approach is different: we take into account the limited capabilities of low-cost radios for which sophisticated physical-layer strategies (such as maximal-ratio combining detection) are not possible, and our goal is energy efficiency, not capacity. In our strategy a node does not relay in the sense of cooperative communication; it instead makes a decision on what information to send, if any, based in part on received multicasts from neighboring nodes. Application-layer Bayesian inference techniques have been explored in [7, 8] and [9]; [10] describes a technique that combines channel coding and opportunistic listening. Our approach also couples the application and logical link layers in that we exploit prior knowledge of the sensed environment jointly with channel coding.

An error recovery approach in wireless sensor networks was proposed in [11] that progressively increases the code rate as data nears the fusion center, reflecting the fact that the overall source-destination channel is a composition of single-hop channels. Their criterion is throughput instead of energy efficiency, and the proposed algorithm treats only channel coding. Related work includes correlated data gathering [12] and censoring in detection [13].

    4 Model

Many wireless sensor network (WSN) applications, particularly environmental monitoring, involve multi-node deployments that measure multiple environmental processes correlated in space and time. The goal is a faithful representation of the data with respect to a model at minimum energy cost. In some applications (e.g., camera networks), bandwidth may also be limited. However, in this paper we assume the low data rates found in many environmental sensing applications, and neglect bandwidth efficiency.

Let $x_i$ be a sample computed from some physical process measured by a transducer of a sensor node. Here $i$ is a general index subsuming sensor node, transducer, and time indices. The measurement $m_i$ of $x_i$ is corrupted by transducer noise and then quantized into the information word $u_i$, which is encoded into a codeword $v_i$. The codeword is forwarded to the destination, which receives $y_i = v_i + z_i$, where $z_i$ is a binary error vector with addition mod 2.
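To make the chain from sample to received word concrete, the following minimal Python sketch walks one sample through transducer noise, quantization, (7,4) Hamming encoding, and a binary symmetric channel. The 4-bit quantizer range, the systematic generator matrix, and the crossover probability are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Systematic generator matrix for the (7,4) Hamming code (one common choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def quantize(m, lo=8.0, hi=12.0, bits=4):
    """Uniformly quantize measurement m over an assumed ADC range [lo, hi)."""
    level = int(np.clip((m - lo) / (hi - lo) * (2 ** bits), 0, 2 ** bits - 1))
    return np.array([(level >> b) & 1 for b in reversed(range(bits))], dtype=int)

def hamming_encode(u):
    """Encode a 4-bit information word u into a 7-bit codeword v (mod-2 arithmetic)."""
    return (u @ G) % 2

def bsc(v, eps, rng):
    """Binary symmetric channel: flip each bit independently with probability eps."""
    z = (rng.random(v.shape) < eps).astype(int)   # binary error vector z_i
    return (v + z) % 2                            # y_i = v_i + z_i (mod 2)

x = 10.0 + np.sqrt(0.2) * rng.standard_normal()   # sample of the physical process
m = x + 0.0 * rng.standard_normal()               # transducer noise (zero here)
u = quantize(m)                                   # information word u_i
v = hamming_encode(u)                             # codeword v_i
y = bsc(v, eps=0.05, rng=rng)                     # received word y_i
print(u, v, y)
```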

The entire collection of data samples is $x$, and similarly for $m$, $u$, $v$, and $y$. From Bayes' theorem, we have

$$p(x,m,u,v \,|\, y) \propto p(y \,|\, x,m,u,v)\, p(x,m,u,v) = p(y \,|\, v)\, p(v \,|\, u)\, p(u \,|\, m)\, p(m \,|\, x)\, p(x), \qquad (1)$$

where $p(x)$ is the prior information included in the source model as $p(v, x)$ or $p(u)$.

But the mapping of the per-measurement channel code from $m$ to $v$ is deterministic, so we can use

$$p(x,m,u,v \,|\, y) \propto p(y \,|\, v)\, p(v \,|\, x)\, p(x). \qquad (2)$$

The model can be simplified depending on the sensing scenario. For example, in most cases of interest transducer noise and channel error processes are independent.


Due to space limitations, this paper emphasizes the effects of channel errors and hence focuses on inference of the collection of quantized samples $u$. We thus re-define $u$ to incorporate the properties of the source statistics and measurement noises in a symbol- or bit-level probabilistic model. Then the MAP estimate of the transmitted SC product codeword is

$$\hat{v} = \arg\max_{v} p(v \,|\, y) = \arg\max_{v} p(y \,|\, v)\, p(v). \qquad (3)$$

There is a direct mapping from $v$ to $u$; the most likely measurements $m$ are then found directly from $u$.
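When the candidate set is small, the MAP decision in (3) can be evaluated by exhaustive search. The sketch below is a minimal Python illustration that scores each (7,4) Hamming codeword by its BSC likelihood times a prior; the uniform prior and the crossover probability are placeholders, and the search is shown per codeword rather than over the full SC product codeword.

```python
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

# All 16 codewords of the (7,4) Hamming code.
CODEBOOK = [(np.array(u) @ G) % 2 for u in product([0, 1], repeat=4)]

def bsc_likelihood(y, v, eps):
    """p(y | v) for a BSC with crossover probability eps."""
    d = int(np.sum(y != v))                         # Hamming distance
    return (eps ** d) * ((1 - eps) ** (len(y) - d))

def map_decode(y, prior, eps=0.05):
    """arg max_v p(y | v) p(v) over the codebook, as in (3), per codeword."""
    scores = [bsc_likelihood(y, v, eps) * prior[i] for i, v in enumerate(CODEBOOK)]
    return CODEBOOK[int(np.argmax(scores))]

# Example: uniform prior (an uninformative source model) and one received word.
prior = np.full(len(CODEBOOK), 1.0 / len(CODEBOOK))
y = np.array([1, 0, 0, 0, 1, 0, 0])
print(map_decode(y, prior))
```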

    5 Processing at the Fusion Center

The fusion center receives $y$ and estimates the samples $v$. In the most general, computationally expensive case, it computes the posterior probability law according to equation (3), or an approximation thereof. For reduced complexity, an iterative inference algorithm coupling source and channel estimator/decoders [5] can be used to compute a point estimate. Iterative source-channel decoders were first explored in [14], and later for a single first-order correlated source over an AWGN (additive white Gaussian noise) channel in [15].

Both the full Bayesian search and the iterative algorithm incorporate prior knowledge of the source statistics, channel code structure, and network reliability, as well as missing (censored) samples. Note that the symbols of censored codewords are treated as completely ambiguous.

The iterative decoder is comprised of component decoders for both the channel codes and the source model. Probabilities are exchanged between the soft-decision channel decoders and the source decoder, similar to turbo decoding [16], allowing for multiple iterations of decoding to improve performance. The source model decoder takes in the a posteriori channel decoder output, and converts bit probabilities $p(v_{ij} \,|\, y_{ij})$ for bit $j$ of codeword $v_i$ to symbol probabilities on $m_i$ for its a priori input. These a priori symbol probabilities $p(m_i)$ are further combined to form product symbol probabilities $p(M_{/k})$, where $M_{/k}$ is the combined message word for all samples, excluding $k$. These product symbol probabilities are assumed symmetric for all $k$, i.e., invariant with respect to measurement ordering. Finally, the source decoder combines these product symbol probabilities $p(M_{/k})$ with conditional probabilities $p(m_i = M \,|\, M_{/k})$ from the source model as

$$p(m_i = M) = \sum_{M_{/k}} p(m_i = M \,|\, M_{/k})\, p(M_{/k}). \qquad (4)$$

The source decoder then outputs these (hopefully) improved symbol probabilities $p(m_i)$, which are then converted back to bit probabilities $p(v_{ij})$ for a priori input to each channel decoder $i$. A reduced set of conditional bitwise probabilities could be used instead of symbol probabilities, but would introduce further suboptimality. This simplification was used in [5], which did not include results for a source decoder using symbol probabilities.
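A minimal Python sketch of the source-decoder update (4) is given below, assuming the conditional probabilities $p(m_i = M \,|\, M_{/k})$ and the product probabilities $p(M_{/k})$ are already available as arrays; the toy numbers are illustrative only.

```python
import numpy as np

def source_update(p_cond, p_excl):
    """
    Eq. (4): p(m_i = M) = sum over M_/k of p(m_i = M | M_/k) * p(M_/k).

    p_cond : array of shape (num_excluded_words, num_symbols);
             row j holds p(m_i = M | M_/k = j) over symbol values M.
    p_excl : array of shape (num_excluded_words,); the product symbol
             probabilities p(M_/k).
    Returns the updated symbol distribution p(m_i = M).
    """
    p_m = p_excl @ p_cond        # marginalize over the excluded message words
    return p_m / p_m.sum()       # renormalize for numerical safety

# Toy example: 3 excluded-word configurations, 4 symbol values.
p_cond = np.array([[0.70, 0.10, 0.10, 0.10],
                   [0.25, 0.25, 0.25, 0.25],
                   [0.10, 0.60, 0.20, 0.10]])
p_excl = np.array([0.5, 0.3, 0.2])
print(source_update(p_cond, p_excl))
```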

In contrast, the full Bayesian search algorithm is non-iterative. This decoder chooses the most likely product codeword $v$ with the MAP decision of (3), and is termed the MAP decoder. The a priori product codeword probabilities $p(v)$ may be found from the source model. The MAP decoder is used as a performance benchmark, as it provides optimal decoding results. The iterative decoder is slightly suboptimal, as it is composed of separate decoders for channel codes and source model. The ability to iteratively improve its performance allows the iterative decoder to approach the optimal performance of the MAP decoder.

    6 In-Network Processing

The potential for recovering censored samples motivates the question: What information should be sent to the fusion center? More specifically, given a limited energy supply in each node of the network, what samples should any node forward toward the fusion center, and for what samples should transmission be censored?

We propose an adaptive censoring algorithm: it weighs the value of the data in terms of its informativeness in the context of the prior model and transmission cost. Large excursions from the model are clearly more useful in characterizing the process than samples near expected values, and their transmission is a better use of energy.

At a receiving node, the censoring decision of a sender is viewed as a puncturing of the SC product codeword. Individual samples are encoded using an $(n, k, d)$ channel code. If a node has $N$ potential samples to send but $L$ are sent (and thus $N - L$ are censored), then the formal source-product code rate is $kN/(nL)$, assuming the decoder infers all $N$ samples. Note that the code rate is not fully descriptive since some samples are more valuable. For example, two product codewords of the same rate corresponding to two censoring decisions have the same energy cost, but may not be equally informative. For this reason the censoring decision must rely on the data itself in addition to the rate.

The algorithm begins when a node receives a noisy product codeword $y$. The censoring decisions made by the transmitters are known, since it is assumed that every measurement is associated with a unique label (for example, the node/transducer ID and time). In practice, the label (suitably defined, compressed, and code-protected) can apply to all the information in the transmitted product codeword.

The node performs the inference computation of Section 5 on the received product codeword to generate an estimate of the samples. Once computed, this a posteriori distribution becomes the prior distribution for the node's censoring algorithm. While the complete APP distribution $p(v \,|\, y, c)$ could be used, in this paper computational energy cost is limited by using the singleton MAP decision $v_0 = \arg\max_{v} p(v \,|\, y, c)$ as an approximation.

The censoring algorithm has two steps: (1) computing an estimate of the fidelity and energy cost of a set of censoring proposals, and (2) choosing one proposal as the censoring decision based on a censoring policy.

How should the estimate be computed? Here is where the algorithm's model of the network becomes important. The simplest approach is to use the data as-is; this implicitly assumes that the data communicated as a result of the estimation/decision algorithm will arrive without error. In our algorithm, the estimate is predictive in the sense that the node accounts for the unreliability of the channel to the next node, and beyond that, to the fusion center. In this paper, we assume a model of local (one-hop) collaboration: the transmitting node requires only knowledge of the quality of the hop to the next node in the data gathering tree, and assumes that the next node will take appropriate action based on its local knowledge.

The overall goal of the censoring policy is to decide which codewords in the product codeword to censor. Each censoring proposal $c_l$, $l = 1, \ldots, 2^N$, censors a unique pattern of codewords in $v_0$. The posterior distribution

$$p(v \,|\, y, c_l) = \frac{p(y \,|\, v_0, c_l)\, p(v)}{p(y, c_l)} \qquad (5)$$

is computed by the node for each proposal $c_l$, where $p(v)$ is the prior distribution over the information represented by the product codeword.

The subset of evaluated proposals is determined as part of the censoring policy. Another aspect of the policy is the criterion for ranking the censoring proposals with respect to a summary measure of the distribution. One of many possible measures is $\mathrm{Cov}(V_l, V) = E(V_l - V)^2$, where $V$ is the random variable associated with the distribution $p(v \,|\, y, c)$ and $V_l$ is the random variable associated with the posterior conditional distribution (5). Alternatively, and more simply, we can consider the variance of the posterior about the MAP estimate, $\mathrm{Var}\, V_l = E(V_l - v_0)^2$. In this paper, we consider the simplest measure, computed as the squared difference $(v_l - v_0)^2$ between the MAP estimate of the node's in-hand data $v_0$ and the point estimate $v_l = \arg\max_{v} p(v \,|\, y, c_l)$ from (5) for each proposal $c_l$.

The chosen proposal is the decision; the non-censored samples are encoded and transmitted to the next node. The receiving node repeats the processing, computing $p(v \,|\, y, c)$ (or an approximation) for its received product codeword and running the censoring algorithm.
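A minimal sketch of this proposal evaluation, using the squared-difference measure, might look as follows; decode_estimate stands in for the decoder/estimator of Section 5 applied under a censoring proposal, and is a hypothetical callable rather than code from the paper.

```python
import numpy as np
from itertools import combinations

def rank_proposals(v0, decode_estimate, n_rows):
    """
    Evaluate every censoring proposal c_l (a subset of rows of the product
    codeword) by the squared difference (v_l - v0)^2 between the point
    estimate under the proposal and the node's in-hand MAP estimate, and
    return the proposals ranked from least to most distortion.

    v0              : the node's in-hand MAP estimate (2-D array, one row per codeword).
    decode_estimate : callable(censored_rows) -> v_l; a stand-in for computing
                      arg max p(v | y, c_l) as in (5).
    n_rows          : number of codewords (rows) eligible for censoring.
    """
    ranked = []
    for k in range(n_rows + 1):
        for rows in combinations(range(n_rows), k):
            v_l = decode_estimate(frozenset(rows))
            distortion = float(np.sum((v_l - v0) ** 2))
            ranked.append((distortion, frozenset(rows)))
    ranked.sort(key=lambda t: t[0])   # least distortion first
    return ranked
```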

    7 Simulation Experiments

A symmetric multivariate Gaussian source model is used for the sensed process $x$. The a priori information uncertainty $\sigma_x^2$ is assumed uniform; the covariance matrix for the source model has diagonal elements $\sigma_{x_i}^2 = \Sigma_{ii} = \sigma_x^2$ for all $i$. The cross-correlation $r$ is found as $r_{ij} = \Sigma_{ij} / \Sigma_{ii}$ for all $i \neq j$. The information mean is known a priori. In all simulations, $\sigma_x^2 = 0.2$, the mean is 10, and $r_{ij} = 0.9$.

The transducer noise $t$ is assumed Gaussian; thus $p(m_i \,|\, x_i)$ is also Gaussian with mean $x_i$ and variance $\sigma_t^2$. The noisy measurement $m_i = x_i + t_i$ is then quantized to a data sequence $u_i$; $R_{U_i}$ is the range of the analog-to-digital converter (ADC) input. Encoding $u_i$ results in the codeword $v_i$. Due to the deterministic mapping from $m$ to $v$, the conditional codeword probability $p(v_i \,|\, x_i)$ is given by $p(v_i \,|\, x_i) = p(v_i = V_i \,|\, x_i) = \int_{R_{U_i}} f_m(s)\, ds$, where $f_m(\cdot) \sim N(x_i, \sigma_t^2)$. In all simulation results, $\sigma_t^2 = 0$ to clearly show the effect of channel noise.
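A realization of this source model can be drawn as in the short Python sketch below, which builds the stated covariance (variance 0.2 on the diagonal, correlation 0.9 off the diagonal, mean 10); the six-sample size is an arbitrary example.

```python
import numpy as np

def sample_source(n_samples, var_x=0.2, mean_x=10.0, r=0.9, rng=None):
    """
    Draw one realization of the symmetric multivariate Gaussian source:
    the covariance has var_x on the diagonal and r * var_x off the diagonal.
    """
    rng = rng or np.random.default_rng()
    cov = var_x * (r * np.ones((n_samples, n_samples)) + (1 - r) * np.eye(n_samples))
    return rng.multivariate_normal(mean_x * np.ones(n_samples), cov)

x = sample_source(6, rng=np.random.default_rng(1))
print(x)   # six highly correlated samples around 10
```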

    surement is a ( n,k,d min ) = (7,4,3) Hamming code. Thislow-complexity, short block-length code was used to min-

    imize sensing node computational load and assess useful-ness of the approach.

All simulations model a single hop as a binary symmetric channel (BSC) with a given crossover probability. The BSC models the hard-decision output of most available single-chip radio transceivers for sensor networks.

The fusion center decodes the received noisy product codeword $y$. Both an iterative SC decoder, described in [5], and a MAP decoder, also described in [5], are examined in the simulations. The individual channel decoders used in the iterative decoder are trellis decoders implementing the BCJR [17] algorithm on the (7,4,3) Hamming parity-check trellis. Our figure of merit is the mean-squared error (MSE) $E[(m_i - \hat{m}_i)^2]$ between the measurements and the estimated measurements at the hub. Each MSE value is calculated using 50,000 Monte Carlo simulations.
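The figure of merit can be estimated with a Monte Carlo loop of the following form; encode_chain and fusion_decode are hypothetical stand-ins for the simulation blocks described above, and the trial count is reduced from 50,000 for brevity.

```python
import numpy as np

def monte_carlo_mse(encode_chain, fusion_decode, n_trials=1000, rng=None):
    """
    Estimate MSE = E[(m_i - m_hat_i)^2] between the measurements and the
    measurements recovered at the fusion center, averaged over trials.

    encode_chain  : callable(rng) -> (m, y); the true measurement vector and
                    the noisy product codeword arriving at the fusion center.
    fusion_decode : callable(y) -> m_hat; the recovered measurement vector.
    Both callables are placeholders for the simulation blocks above.
    """
    rng = rng or np.random.default_rng()
    errors = []
    for _ in range(n_trials):
        m, y = encode_chain(rng)
        m_hat = fusion_decode(y)
        errors.append(np.mean((m - m_hat) ** 2))
    return float(np.mean(errors))
```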

    8 Relay Network

In a relay or line network, the sensor nodes form a relay chain or line that forwards messages to the fusion center. Each node takes its own measurements and transmits them along with messages it receives from the preceding node.

8.1 Relay Network without Transmission Censoring

A relay network with $N = 3$ forwarding nodes and a fusion center is considered. Each node takes $M = 2$ measurements of its own, and forwards those as well as any received messages on to the next node. The fusion center receives a correlated product codeword of size $6 \times 7$, where the channel code length is 7. This section examines performance when each forwarding node simply forwards all measurements to the next node, with no transmission censoring, and only the destination node performs any decoding. This provides us with a baseline performance for maximal energy cost.

    8.2 Results

Each relay node encodes its own data with a (7,4,3) Hamming code. MSE results for MAP decoding of the SC product code, the iterative SC decoder after 5 decoding iterations, and channel decoding alone (with ML decoding of each received codeword) are compared to an uncoded system with hard-decision decoding. Two different iterative decoders are examined, one using conditional symbol probabilities, and a suboptimal version using bitwise-only conditional probabilities.

[Figure 1. MSE vs. BSC crossover probability for the 3-node relay network without transmission censoring. Curves: uncoded, channel decoding only, iterative decoding with bit probabilities (5 iterations), iterative decoding with symbol probabilities (5 iterations), and MAP decoding.]

The performance improvement possible with either the iterative or MAP SC decoders on the relay network is significant, as shown in Figure 1. The bottom curve displays the performance of the optimum MAP SC decoder. The iterative SC decoder with conditional symbol probabilities shows some loss due to the suboptimality of separate decoders for source and channel. A lower complexity approach [5] uses bitwise-only conditional probabilities in the iterative source decoder, eliminating the need for word assembly and disassembly at each iteration. This approach shows noticeable performance loss compared to the iterative decoder with conditional symbol probabilities. All SC decoders perform much better than channel decoding alone or the uncoded system.

    9 Relay Network: Transmission Censoring

In this section we assess the performance of inference-based censoring in the relay network. With an eye toward minimum energy cost for computations, we consider a highly constrained censoring policy with a simple decision criterion. Each relay node uses a self-only censoring policy, where it considers proposals that censor only subsets of its own samples; all data received from other nodes is forwarded. Proposals are considered in order of energy cost from least to most, so the proposal that censors all of the node's own samples is considered first. Each proposal $c_l$ is evaluated by computing the predicted codeword $\hat{v}$ generated by the receiving node's decoder/estimator when the proposal's associated product codeword $v_l$ is transmitted. If $\hat{v} = v$ (where $v$ is the node's true, in-hand product codeword), the search is terminated and $v_l$ is sent. To minimize complexity, the full MAP prediction is not computed; instead, the node uses one iteration of the source decoder. Since the effect of channel errors is considered, but the gain of the channel code is not, our results are pessimistic but simplicity is gained.
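A minimal sketch of this self-only policy is shown below; predict_receiver_estimate is a hypothetical stand-in for the one-iteration source-decoder prediction at the receiving node, and the data structures are illustrative.

```python
import numpy as np
from itertools import combinations

def self_only_censoring(v_true, own_rows, predict_receiver_estimate):
    """
    Self-only censoring policy: consider only proposals that censor subsets of
    the node's own rows of the product codeword, ordered from most censoring
    (cheapest) to least, and stop at the first proposal whose predicted
    receiver estimate matches the node's in-hand product codeword.

    v_true                    : the node's in-hand product codeword (2-D array).
    own_rows                  : indices of the node's own samples.
    predict_receiver_estimate : callable(censored_rows) -> predicted codeword;
                                a stand-in for one iteration of the source
                                decoder at the receiving node.
    """
    for n_censored in range(len(own_rows), -1, -1):       # cheapest proposals first
        for rows in combinations(own_rows, n_censored):
            v_hat = predict_receiver_estimate(frozenset(rows))
            if np.array_equal(v_hat, v_true):
                return frozenset(rows)                     # cheapest acceptable decision
    return frozenset()                                     # censor nothing: forward all
```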

    9.1 Results

A 3-node relay network is examined; each relay node encodes its own data with a (7,4,3) Hamming code, and each hop is modeled as a BSC with a given crossover probability.

Figure 2 compares MSE performance with and without censoring for two inference algorithms at the fusion center: the MAP SC decoder, and the iterative SC decoder using symbol probabilities. As expected, MAP decoding without censoring provides an upper performance bound, but MAP decoding with censoring is surprisingly close. At most channel reliabilities, iterative decoding without censoring is better than with censoring, as expected. However, when using the iterative decoder in very poor channels (crossover probability 0.4), it is better to censor. It is well-known that most channel codes perform worse than uncoded communication in poor channels. This is exactly what is occurring here: the bit LLRs of censored samples input to the channel decoder are set to zero. The resulting channel decoder output LLRs are also zeros in the first iteration; in effect, no decoding is performed.

The effect of censoring on both MSE performance and energy efficiency is shown in Figure 3, where the MSE is plotted against the fraction of messages censored for different decoders at a crossover probability of 0.05. These decoders include the MAP SC decoder, the iterative SC decoder using bitwise conditional probabilities, and the iterative SC decoder using conditional symbol probabilities. The uncoded case is also shown for comparison. MSE results without censoring are also shown; these fall on the y-axis, as the fraction of censored messages is zero for no censoring.

For the optimum MAP SC decoder at the fusion center, the MSE performance loss due to censoring is negligible, with a huge gain in energy efficiency: 80% of the messages are censored. As expected, the iterative SC decoder has poorer fidelity than the MAP decoder; but its energy efficiency is similar despite lower complexity. Note that the MAP SC decoder is likely to be feasible in terms of computational load and energy consumption.


[Figure 2. MSE vs. crossover probability for the 3-node relay network with transmission censoring. Curves: iterative decoding with symbol probabilities (censoring and no censoring) and MAP decoding (censoring and no censoring).]

[Figure 3. MSE vs. fraction of messages censored for the relay network. Curves: uncoded (no censoring); iterative decoding with bit probabilities, iterative decoding with symbol probabilities, and MAP decoding, each with and without censoring.]

    10 Conclusion

In a range of important applications, wireless sensor networks are tasked to gather correlated data with minimum energy expenditure. We have proposed an approach that integrates channel coding of samples and an adaptive strategy of data sample censoring. The in-network progressive prediction of the fidelity and energy cost of censoring proposals exploits decoding/estimation using the intra-codeword redundancy of channel coding and the inter-codeword redundancy of the information, and allows an explicit trade-off of energy cost and fidelity that can be chosen in the context of the data gathering application. A related benefit of censoring is the management of congestion as data flows toward the fusion center. The significant performance improvement and energy efficiency of this approach were demonstrated in simulations.

    Acknowledgment

The work of the first author (Howard) was supported by an NAU Intramural Grant. The work of the second author (Flikkema) was supported in part by NSF grant CNS-0540414 and an NAU ERDENE grant.

    References

[1] Z. Xiong, A. Liveris, and S. Cheng, "Distributed source coding for sensor networks," IEEE Signal Proc. Mag., pp. 80-94, Sep. 2004.

[2] Y. Zhao and R. Govindan, "Understanding packet delivery performance in dense wireless sensor networks," in Proc. SenSys '03, 2003.

[3] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Proc. IPSN '05, 2005.

[4] G.-S. Ahn et al., "Funneling-MAC: A localized, sink-oriented MAC for boosting fidelity in sensor networks," in Proc. SenSys '06, 2006.

[5] S. Howard and P. Flikkema, "Integrated source-channel decoding for correlated data-gathering sensor networks," in Proc. IEEE WCNC '08, 2008.

[6] A. Chakrabarti, E. Erkip, A. Sabharwal, and B. Aazhang, "Code designs for cooperative communication," IEEE Signal Proc. Mag., pp. 16-26, Sep. 2007.

[7] S. Mukhopadhyay, D. Panigrahi, and S. Dey, "Model based error correction for wireless sensor networks," in Proc. IEEE SECON 2004, 2004, pp. 575-584.

[8] G. Hartl and B. Li, "infer: A Bayesian inference approach towards energy efficient data collection in dense sensor networks," in Proc. IEEE ICDCS 2005, 2005, pp. 371-380.

[9] A. Silberstein et al., "Suppression and failures in sensor networks: A Bayesian approach," in Proc. VLDB 2007, 2007.

[10] H. Dubois-Ferrière, D. Estrin, and M. Vetterli, "Packet combining in sensor networks," in Proc. SenSys 2005, 2005.

[11] S. Qaisar and H. Radha, "OPERA: An optimal progressive error recovery algorithm for wireless sensor networks," in Proc. IEEE SECON '07, 2007.

[12] R. Cristescu and B. Beferull-Lozano, "Lossy network correlated data gathering with high-resolution coding," in Proc. IPSN '05, 2005.

[13] R. Jiang and B. Chen, "Fusion of censored decisions in wireless sensor networks," IEEE Trans. Wireless Comm., vol. 4, no. 6, pp. 2668-2673, Nov. 2005.

[14] N. Görtz, "On the iterative approximation of optimal joint source-channel decoding," IEEE Journal Sel. Areas in Comm., vol. 19, pp. 1662-1670, Sep. 2001.

[15] J. Hagenauer and N. Görtz, "The turbo principle in joint source-channel coding," in Proc. IEEE Inform. Theory Workshop (ITW) '03, 2003, pp. 275-278.

[16] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Trans. Comm., vol. 44, pp. 1261-1271, Oct. 1996.

[17] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. 20, no. 2, pp. 284-287, Mar. 1974.