Abstract
A novel framework for the online learning of expected cost-to-go functions characterizing wireless network performance is proposed. The framework is based on the observation that wireless protocols induce structured and correlated behavior of the finite state machine (FSM) modeling the operations of the network. As a result, a significant dimension reduction can be achieved by projecting the cost-to-go function onto a graph wavelet basis set capturing typical substructures of the graph associated with the FSM. Sparse approximation with random projection is then used to identify a concise set of coefficients representing the cost-to-go function in the wavelet domain. This Compressed Sensing (CS) approach enables a considerable reduction in the number of observations needed to achieve an accurate estimate of the cost-to-go function. The proposed method is characterized via stability analysis. In particular, we prove that the standard CS approach of the Least Absolute Shrinkage and Selection Operator (LASSO) will not provide stability. We also determine a connection between the structure of the FSM induced by the wireless protocols and the restricted isometry property of the effective projection matrix. Simulation results of our approximation method show that 15 wavelet functions can accurately represent a cost-to-go function defined on a state space of 2000 states. Moreover, the number of state-cost observations needed to estimate the cost-to-go function is orders of magnitude smaller than that required by traditional online learning techniques.
Introduction
Given the recent explosion in the number and types of wireless devices, new design and optimization paradigms are needed to effectively manage the complex and heterogeneous nature of modern wireless networks. We propose a novel approach for the online learning of cost-to-go functions in networks modeled via large finite state machines (FSMs). Typical cost functions measure performance metrics such as throughput, packet delivery probability, and delay. Cost-to-go functions measure the expected long-term cost incurred by the network from any state of the FSM. Estimation of cost-to-go functions is instrumental for the optimization of network control strategies. Our estimation approach is based on the observation that wireless networking protocols induce a structured behavior of the FSM, enabling dimension reduction of its state space via wavelet projection and compressed sensing-like techniques. The sparse approximation approach proposed herein considerably reduces the length of the trajectory of the FSM required to achieve an accurate estimate of the cost-to-go function compared to traditional learning techniques.
Markov models have been widely used for the analysis and optimization of wireless networks [1-9]. In one of the earliest works on protocol modeling [1], a Markov chain is proposed to analyze the saturation throughput of the IEEE 802.11 medium access control. The FSM models the backoff countdown counter controlling channel sensing and access of a wireless terminal and the retransmission index of the packet under transmission. In general, the Markov chains defined in these models track the logical state of the wireless protocols (e.g., the retransmission index of the packet being transmitted, the number of packets in the buffer, and the backoff counter) as well as environmental variables (e.g., the channel state).
The online optimization of control strategies based on these models requires the estimation of cost-to-go functions from a sample-path of state-cost observations [10-12]. However, the immense size of the state space of FSMs associated with practical wireless networks limits the applicability of traditional online learning techniques to toy networks and extremely simple case studies. In fact, the estimation of cost-to-go functions in traditional online learning (e.g., Q-learning and reinforcement learning [10-12]) requires observing a sample-path long enough to hit all the states of the FSM a large number of times. Approximations of cost-to-go functions [13,14] are generally based on oversimplified models and thus cannot be accurately used in general practical networks. For instance, the fluid approximation proposed in [14] is based on the assumption that the cost-to-go function is smooth over the state space of the FSM, meaning that only small variations of its value computed in neighboring states are allowed. This assumption is suitable for simple cases (e.g., buffer models and cost functions modeling buffer congestion), but does not hold for more complex FSM models and general cost functions.
This work provides the following contributions: (1) we present a framework based on CS for the approximation of cost-to-go functions; (2) we analyze the structure of the FSMs modeling wireless networks based on their decomposition into fundamental components; (3) we connect the structure of the FSM to the restricted isometry property of the effective projection matrix; (4) we analyze the stability of CS in this context via perturbation analysis; (5) we present a methodology for the use of Diffusion Wavelets (DW) in online learning; and (6) we present numerical results illustrating the performance of the proposed approach.
The framework proposed herein is not tailored to a specific canonical network example, but is rather based on the inherent structure of the FSMs modeling the operations of general wireless networks. The fundamental observation behind our framework is that the directed graph associated with the temporal evolution of the state of the FSM is inherently regular and local. As a consequence, typical trajectories on the FSM can be described by a number of graph substructures considerably smaller than the number of possible edges between states. Figure 1 depicts a schematic of the proposed online learning algorithm. A trajectory of the FSM associated with the operations of the physical network is used to estimate the transition probability matrix and the cost function and to formulate the estimation problem. The cost-to-go function is projected onto a graph wavelet basis set capturing relevant and typical substructures of the graph. Sparse approximation (in particular, the least-squares CS (LS-CS) algorithm [15]) is then employed to identify a concise set of substructures to represent the cost function of interest.
Figure 1. Schematic of the algorithm. Graphical representation of the proposed approach. The physical network is a collection of terminals (gray circles) connected by wireless links: data (solid lines) and interference (dashed lines) links. The state of the terminals is defined by a collection of variables whose value evolves over time. The temporal evolution of the state of the terminals and of the links is modeled by the logical graph of the network. A sample-path of the network on the logical graph generates a sequence of observations, which are used to estimate the transition matrix and the cost function.
We characterize the performance of sparse approximation applied to the estimation problem addressed herein in terms of the minimum number of states that need to be observed to achieve an accurate estimate of the cost-to-go function. Our analysis is based on the decomposition of the FSM into fundamental structures we refer to as subchains. The transition matrix associated with the individual subchains is analyzed to measure the incoherence^{a} of the overall transition matrix, which is exploited to determine the conditions under which the restricted isometry property (RIP) [16] holds for our effective random projection matrix.
Note that whereas most prior work on sparse approximation focuses on static scenarios, the framework considered in this article addresses the problem of learning in dynamical systems. The inclusion of states visited a small number of times in the sample-path of the FSM results in instability of the estimation algorithm. To reduce this effect, we use the LS-CS algorithm proposed in [15]. LS-CS correlates the output of the sparse approximation algorithm over time by constraining variations in the support of the solution.
Relevant to the approach proposed herein, Mahadevan et al. in [17] proposed the use of DWs [18] as a projection basis for the sparse approximation of cost-to-go functions. In [17], offline estimation of the cost-to-go function is considered; however, no performance analysis is undertaken. In contrast, we examine online learning and provide a detailed analysis to assess the performance of sparse approximation applied to Markov models of wireless networks. Compressed sensing-based techniques have been previously applied to estimation problems in networks [19-24]. These works address graphs related to the physical connectivity of the network, where nodes are terminals and links are specific wired or wireless links, or modeled by undirected graphs. We address the fundamentally different problem of estimating functions defined on the state space of the FSM, i.e., the logical graph of the wireless network modeling the temporal evolution of the network, from a small number of state observations.
Numerical results for a case of interest show that a small number of graph wavelets (∼15) is sufficient to accurately approximate a cost-to-go function defined on a state space of approximately 2000 states. Moreover, the proposed algorithm can estimate the cost-to-go function by observing a trajectory of the state of the FSM that visits only a small subset of the state space.
The rest of this article is organized as follows. Section ‘System model and problem formulation’ describes the model of the network and defines the estimation problem. The sparse learning algorithm is presented in Section ‘Sparse estimation of cost functions’. Section ‘Structure of the graph’ proposes the decomposition of the overall graph into subchains and analyzes the properties of the transition probability matrix. Section ‘Perturbation analysis and performance bounds’ discusses the stability of sparse approximation applied in our context and characterizes the performance of the learning algorithm in terms of how the number of state observations grows with the network size. Numerical results are presented in Section ‘Numerical results’. Section ‘Conclusions’ concludes the article. The proofs of the stated theorems are in Appendices 1 and 2.
System model and problem formulation
The network is modeled as a FSM whose state evolves within the state space 𝒮 = {s_1,…,s_N} according to the transition probabilities

p(s,s′) = P(S(t + 1) = s′ | S(t) = s), (1)

where P(·) denotes the probability of an event. The performance of the network is measured by a function c(s,s^{′}) that assigns a positive and bounded cost to the transition from state s to state s^{′}.^{b} The average cost from state s is

c(s) = E[c(s, S(t + 1)) | S(t) = s] = ∑_{s′∈𝒮} p(s,s′) c(s,s′).
The function

v(s) = E[∑_{τ=0}^{∞} γ^{τ} c(S(τ), S(τ + 1)) | S(0) = s],

where E[·] denotes expectation and γ ∈ (0,1) is the discount factor, is the expected discounted long-term cost. This function is also known as the cost-to-go function and is central to DP and optimal control [10].
For any fixed s ∈ 𝒮, the cost-to-go function can be rewritten as

v(s) = ∑_{τ=0}^{∞} γ^{τ} ∑_{s′∈𝒮} p_{τ}(s,s′) c(s′),

where

p_{τ}(s,s′) = P(S(t + τ) = s′ | S(t) = s)

is the τ-step transition probability from state s to s^{′}.^{c} Consider the graph associated with the FSM, where vertices are states in 𝒮 and a directed edge connects state s to state s^{′} if p(s,s′) > 0.
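To make the definition concrete, the following minimal sketch (ours, not from the article) computes the exact cost-to-go of a toy chain using the identity v = ∑_τ γ^τ P^τ c = (I − γP)^{−1}c; all transition probabilities and costs are hypothetical:

```python
import numpy as np

def cost_to_go(P, c, gamma):
    """Exact discounted cost-to-go: the solution v of (I - gamma*P) v = c,
    equivalent to the infinite discounted sum in the definition above."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P, c)

# Toy 3-state chain (illustrative numbers only).
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0]])
c = np.array([1.0, 0.2, 0.0])   # average cost from each state
print(cost_to_go(P, c, gamma=0.9))
```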
In online learning, the function v is estimated from a sample-path of state-cost observations generated by the operating network.
The main challenge to achieving an accurate estimate of v is the size of the state space of the FSMs modeling practical networks: traditional techniques require a sample-path that hits every state a large number of times, which is infeasible at this scale.
Sparse estimation of cost functions
We now present an algorithm for the online learning of cost-to-go functions in wireless networks from the observation of a state-cost trajectory of the associated FSM. The baseline observation is that networking protocols induce a structured behavior of the network, which is reflected in a structured graph associated with the FSM. Thus, every state-cost observation conveys information about multiple states due to the correlated behavior of the network. As a result, we can propose an algorithm to estimate the cost-to-go function in three steps:
• observation: the transition probabilities and the cost function c(·) are estimated by observing a state-cost sample-path;
• projection: the cost-to-go function is projected onto a diffusion wavelet basis set computed from the estimated transition matrix;
• sparse estimation of v: a concise set of wavelet coefficients representing the cost-to-go function is identified via sparse approximation, as detailed below.
We define the N × N transition probability matrix P, where P[s,s^{′}] = p(s,s^{′}) as in Equation (1).
The long-term cost satisfies the Bellman equation v = c + γPv, where v = [v(s_1),…,v(s_N)]^T and c = [c(s_1),…,c(s_N)]^T.^{d} Thus, v = (I − γP)^{−1}c. The transition probabilities and the average costs are estimated from the observed sample-path as

p̂(s,s′) = [∑_t 1(S(t) = s, S(t + 1) = s′)] / [∑_t 1(S(t) = s)],  ĉ(s) = [∑_t 1(S(t) = s) c(S(t), S(t + 1))] / [∑_t 1(S(t) = s)],

where 1(·) is the indicator function. More refined estimators can be employed to reduce the sampling rate [25].
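A minimal sketch (the function and variable names are ours) of these frequency-count estimators applied to a recorded state-cost sample-path:

```python
import numpy as np

def estimate_model(states, costs, N):
    """Frequency-count estimates of the transition matrix P and the
    average cost c from a sample-path: states[t] is the state index at
    time t, costs[t] the cost of the observed transition t -> t+1."""
    counts = np.zeros((N, N))
    cost_sum = np.zeros(N)
    visits = np.zeros(N)
    for t in range(len(states) - 1):
        s, s_next = states[t], states[t + 1]
        counts[s, s_next] += 1
        cost_sum[s] += costs[t]
        visits[s] += 1
    hit = visits > 0                      # states actually observed
    P_hat = np.zeros((N, N))
    P_hat[hit] = counts[hit] / visits[hit, None]
    c_hat = np.zeros(N)
    c_hat[hit] = cost_sum[hit] / visits[hit]
    return P_hat, c_hat, hit
```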
The estimates P̂ and ĉ are affected by significant noise for states that are visited only a few times in the sample-path, and accurately estimating every entry would require an impractically long observation window.
To cope with this issue, we exploit the fact that the FSMs modeling the operations of wireless networks and their associated graphs present a very regular connectivity structure, and that the transition probabilities are determined by a limited set of parameters, e.g., the packet arrival probability at the buffer of the nodes and the packet failure probability (see Section ‘Structure of the graph’). By regular, we mean that the connectivity structure from many nodes of the graph to their 1-hop neighbors is similar. Thus, the representation of the graph provided by the transition matrix is intrinsically redundant, and trajectories of the network on the graph can presumably be described by a small number of functions capturing typical substructures of the graph. We observe that these substructures involve neighborhoods of states at different numbers of hops, corresponding to different temporal distances between observations in the sample-path.
A fundamental element of the proposed framework is the projection of the cost-to-go function v onto a set of diffusion wavelet (DW) functions computed on a symmetrized version of the transition matrix,

P_symm = (1/2)(P + P^T). (9)
The symmetrization step is required as DWs presume symmetric diffusion operators. The design of wavelet functions tailored to the compression of directed graphs will further improve the performance of the algorithm proposed herein.
Define W as a diffusion wavelet basis set computed on P_{symm}, where the DW functions are the columns of W. We have then v = Wx, where x is the vector of coefficients representing v in the wavelet domain.
The main idea behind the estimation paradigm proposed herein is that the DW set of functions is a sparsifying basis for the cost-to-go function, i.e., only a small number of the entries of x are significant. The coefficients are estimated by solving the ℓ1-regularized least-squares (LASSO [26]) problem

x^{∗} = argmin_x ∥(I − γP̂)Wx − ĉ∥_2^2 + λ∥x∥_1, (11)

where λ > 0 controls the sparsity of the solution and only the components of the residual corresponding to observed states are retained.
In the Compressed Sensing (CS) literature, the matrix multiplying the unknown sparse vector, here (I − γP̂)W, is referred to as the sensing matrix; its properties, analyzed in the following sections, determine the quality of the sparse reconstruction.
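A minimal sketch of step (11), assuming scikit-learn is available and W is a precomputed wavelet basis (the function name and the scaling convention are ours; sklearn's Lasso normalizes the residual by the number of rows, hence the rescaled regularizer):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_value_fit(P_hat, c_hat, W, gamma, lam):
    """Sparse fit of the Bellman residual in the wavelet domain:
    minimize ||(I - gamma*P_hat) W x - c_hat||_2^2 + lam*||x||_1,
    then reconstruct v_hat = W x."""
    N = P_hat.shape[0]
    A = (np.eye(N) - gamma * P_hat) @ W   # effective sensing matrix
    # sklearn minimizes (1/(2N))||y - Ax||^2 + alpha*||x||_1,
    # so alpha = lam / (2N) matches the objective above.
    fit = Lasso(alpha=lam / (2 * N), fit_intercept=False, max_iter=10000)
    fit.fit(A, c_hat)
    x = fit.coef_
    return W @ x, x
```

In practice the rows of A corresponding to unobserved states would be dropped before the fit, as described above.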
Structure of the graph
Wireless networking protocols induce a very structured temporal evolution of the network and, thus, a very structured graph associated with the FSM. This structure is key to showing some general properties of the transition matrix P that determine the performance of the sparse reconstruction in terms of the minimum number of states that need to be included in Equation (11) to achieve an accurate reconstruction. Our analysis is based on the decomposition of the overall graph into smaller graphs, which we refer to as subchains. The good incoherence properties of the transition matrices associated with the subchains are reflected in good incoherence of the overall transition matrix and, thus, result in good performance of the sparse reconstruction.
The decomposition into subchains of the complex graph associated with the FSM modeling the temporal evolution of wireless networks results from the observation that the state of the network is the collection of many individual descriptors tracking counters and variables associated with the functioning of protocols and the environment. The temporal evolution of each individual descriptor follows simple rules that can be easily analyzed to retrieve properties of the overall graph. We then define S(t) = {S_{1}(t),…,S_{D}(t)}, where S_{d}(t) is the state of the dth subchain at time t. We denote by N_{d} the number of states of the dth subchain.
The subchains track the evolution of the individual components of the state space. Although in the overall FSM the transition probabilities are a function of the overall state of the network, the connectivity structure of the subchains is preserved in the overall FSM. In fact, the state transition from S(t) to S(t + 1) is allowed in the overall FSM only if the associated transitions from S_{d}(t) to S_{d}(t + 1) are allowed in all the individual subchains.^{f}
In stochastic models for wireless networks, two classes of subchains can be identified:
• Counter-like subchains (see Figure 2a,b): the FSM is associated with a counter. The value of the counter increments/decrements until it is reset to a given value. Examples of counter-like subchains are the number of retransmissions of a packet in ARQ protocols, the backoff timers in DCF, and the transmission windows and timers in TCP. This class can be further divided into forward counters (Figure 2a) and backward counters (Figure 2b), depending on whether the counter is incremented or decremented until being reset to a predefined value;
Figure 2. Subchains. (a) Counter-like subchain, forward counter. (b) Counter-like subchain, backward counter. (c) Random walk subchain.
• Random walk subchains (see Figure 2c): the value of the descriptor variable is subject to random, but constrained, increments and decrements. Examples of random walk subchains are channel state descriptors and variables tracking the number of packets in a buffer.
For instance, in the pioneering work [1], the Markov chain used to analyze the network is the composition of a random walk and a counter-like subchain. It can be observed that counter-like and random walk subchains present a very local and regular connectivity structure. By local, we mean that every state connects to a small neighborhood of states. Regularity implies that states connect to their 1-hop neighbors in a similar fashion. For instance, in counter-like subchains, states connect to the state corresponding to a reset counter and to the state associated with an incremented or decremented value (possibly plus a self-loop). As a result, the overall graph is regular and local. This property is instrumental towards having an efficient compression in the wavelet domain, meaning that only a limited number of notable substructures is needed to model the temporal evolution of the state of the network.
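A sketch (ours) of the two subchain classes and of their composition; the parameterizations are illustrative, since in the actual FSM the transition probabilities depend on the joint state and only the connectivity structure factorizes:

```python
import numpy as np

def forward_counter(N, p_reset):
    """Counter-like subchain: increment with prob 1 - p_reset,
    reset to 0 with prob p_reset (illustrative parameterization)."""
    P = np.zeros((N, N))
    for i in range(N):
        P[i, 0] = p_reset
        P[i, min(i + 1, N - 1)] += 1 - p_reset
    return P

def random_walk(N, p_up, p_down):
    """Random-walk subchain with unit increments/decrements,
    e.g., the number of packets in a buffer."""
    P = np.zeros((N, N))
    for i in range(N):
        P[i, min(i + 1, N - 1)] += p_up
        P[i, max(i - 1, 0)] += p_down
        P[i, i] += 1 - p_up - p_down
    return P

# Composition of independent subchains: Kronecker product of the factors.
P = np.kron(forward_counter(5, 0.3), random_walk(6, 0.2, 0.5))
assert np.allclose(P.sum(axis=1), 1.0)   # rows still sum to one
```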
Define an indexing in 𝒮 such that each state i corresponds to the tuple (i_1,…,i_D) of subchain states, and assume that the transitions follow the natural random walk on the graph, so that the transition probability factorizes as

p(i,j) = ∏_{d=1}^{D} p_{d}(i_{d},j_{d}),

where p_{d}(i_{d},j_{d}) is the transition probability from state i_{d} to state j_{d} in the state space of the dth subchain. Then, the inner product between the ith and the jth columns of P is

⟨P_i, P_j⟩ = ∑_{k} ∏_{d=1}^{D} p_{d}(k_{d},i_{d}) p_{d}(k_{d},j_{d}) = ∏_{d=1}^{D} ∑_{k_{d}} p_{d}(k_{d},i_{d}) p_{d}(k_{d},j_{d}),

where i_{d}, j_{d}, and k_{d} are the states of the dth subchain associated with states i, j, and k, respectively.
We then need to compute the inner products of the columns of the transition matrices associated with the two classes of subchains. The average inner products between distinct columns of the backward counter, forward counter, and random walk subchains can be computed in closed form, where in a random walk subchain the transition probability from state i_{d} to state j_{d} is larger than zero only if |j_{d} − i_{d}| ≤ ℓ and we assume N_{d} ≫ 2ℓ + 1. We observe that all these mean inner products are of order 1/N_{d}.
We remark that the value of the transition probability p_{d}(i_{d},j_{d}) is the inverse of the number of outgoing links, i.e., allowed transitions, from i_{d}. The average inner product is thus computed with respect to the natural random walk associated with the connectivity structure of the subchain.
The average inner products of the subchains decrease on average with the number of states of the associated FSM. Although in the general case the probability of transition from a state to its neighbors may be much different from that provided by the natural random walk associated with the graph structure, the locality and regularity of the structure of the subchains cause the average overlap of the sets of states reachable from two distinct states to be small.
The average inner product of a column with itself is also relevant to the performance of the sparse reconstruction (these means appear in the mean of the Gram matrix of the effective random projection). It is easy to compute that the average of this quantity for the counter-like backward subchain, counter-like forward subchain, and random walk subchain is of the form C/N_{d}, where C is a positive constant smaller than 1. We observe that each of these means can thus be expressed as a constant divided by the number of states of the associated subchain.
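The claimed decay can be checked numerically; this sketch (ours) reuses the illustrative forward-counter constructor from the earlier snippet:

```python
import numpy as np

def forward_counter(N, p_reset):
    # Same illustrative counter-like subchain as in the earlier sketch.
    P = np.zeros((N, N))
    for i in range(N):
        P[i, 0] = p_reset
        P[i, min(i + 1, N - 1)] += 1 - p_reset
    return P

def avg_cross_inner(P):
    """Average inner product between distinct columns of P,
    the incoherence proxy used in the analysis above."""
    G = P.T @ P
    n = G.shape[0]
    return (G.sum() - np.trace(G)) / (n * (n - 1))

for N in (50, 100, 200, 400):
    print(N, avg_cross_inner(forward_counter(N, 0.3)))
# The printed averages shrink roughly in proportion to 1/N.
```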
Perturbation analysis and performance bounds
In this section, we characterize the performance of the sparse approximation of the cost-to-go function proposed herein. We first discuss the stability of the solution of (11) and then determine how much compression is possible while ensuring good reconstruction of the value function v. The number of observations required for good reconstruction directly translates to the learning rate of our proposed algorithm. An exact analysis of the transition matrix is challenging; however, by exploiting the average behavior of several key structures, we can determine the relationship between the minimum number of observations for this compressed sensing problem and the size of the logical graph.
Perturbation analysis
We discuss in this section how estimation noise in the sensing matrix affects the stability of the solution of the sparse approximation problem.
Define the sensing matrix A ≜ (I − γP̂)W and let y denote the vector of observations. In [27], Xu et al. have shown that the regularized regression problem of LASSO,

min_x ∥y − Ax∥_2 + λ∥x∥_1,

is equivalent to the robust regression (RR) problem, stated as

min_x max_{ΔA∈𝒰} ∥y − (A + ΔA)x∥_2,

where

𝒰 ≜ {[δ_1,…,δ_n] : ∥δ_i∥_2 ≤ λ, i = 1,…,n}

is the set of admissible perturbations of the sensing matrix.
The following theorem addresses the instability of the solution to the RR problem. In particular, the theorem shows that the inclusion of a new sample may result in suboptimal solutions to the RR problem. Moreover, due to the equivalence between LASSO and the RR problem, the same instability result applies to LASSO as well.
Theorem 1
Let x^{∗} be the solution of the problem
where
with
Denote the support of y^{∗ }and x^{∗} as
where
The proof of the theorem is in Appendix 1.
Minimum number of observations
In Section ‘Numerical results’, we will employ the LS-CS residual algorithm [15] to minimize the Bellman residual subject to a sparsity constraint:

min_x ∥ĉ − (I − γP̂)Wx∥_2^2 s.t. ∥x∥_0 ≤ S.

We will observe the temporal evolution of the Markov chain over multiple time steps; as such, we will not observe the cost at every state. Thus, coupled with additional random mixing to exploit the benefits of compressed sensing, we will optimize the following modified Bellman residual:

min_x ∥R(T)(ĉ − (I − γP̂)Wx)∥_2^2 s.t. ∥x∥_0 ≤ S,

where R is a random matrix to be defined in the sequel and R(T) is the submatrix formed by retaining the columns of R indexed by states hit in the observation interval T. If the matrix R(T)(I − γP)W satisfies the so-called restricted isometry property, defined below, then the squared ℓ_2 error between x and its estimate is bounded [15].
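A sketch (ours) of how the compressed residual matrix might be assembled; the ±1/√K binary ensemble is our assumption (any sub-Gaussian ensemble would serve), and zeroing the unvisited columns is one way to realize the restriction to the observation interval T while keeping the dimensions compatible:

```python
import numpy as np

def compressed_sensing_matrix(P_hat, W, gamma, visited, rng):
    """Sketch of B = R(T) (I - gamma*P_hat) W: R has i.i.d. +-1/sqrt(K)
    entries, and the columns of R corresponding to unvisited states are
    zeroed out so that unobserved costs never enter the residual."""
    N = P_hat.shape[0]
    K = len(visited)
    R = rng.choice((-1.0, 1.0), size=(K, N)) / np.sqrt(K)
    mask = np.zeros(N)
    mask[visited] = 1.0
    R_T = R * mask                        # keep only visited-state columns
    return R_T @ (np.eye(N) - gamma * P_hat) @ W

# Usage: rng = np.random.default_rng(0); B = compressed_sensing_matrix(...).
```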
Our proof exploits arguments from [31] with appropriate tailoring to our framework. We begin with the definition of the properties we wish to show.
Definition 1
(Restricted Isometry Property): The observation matrix B is said to satisfy the restricted isometry property of order S with constant δ_{S} ∈ (0,1), denoted RIP(S,δ_{S}), if

(1 − δ_{S})∥z∥_2^2 ≤ ∥Bz∥_2^2 ≤ (1 + δ_{S})∥z∥_2^2

holds for all vectors z with at most S nonzero entries.
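As a quick numerical sanity check (ours, not part of the article's analysis), the RIP constant of a given matrix B can be probed by sampling random sparse vectors; this yields only a lower bound on δ_S, since certifying RIP exactly is combinatorial:

```python
import numpy as np

def rip_lower_bound(B, S, trials=2000, rng=None):
    """Monte Carlo probe of RIP(S, delta_S): the largest observed
    deviation of ||Bz||^2 / ||z||^2 from 1 over random S-sparse z
    lower-bounds the true RIP constant of order S."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = B.shape[1]
    worst = 0.0
    for _ in range(trials):
        z = np.zeros(n)
        support = rng.choice(n, size=S, replace=False)
        z[support] = rng.standard_normal(S)
        Bz = B @ z
        worst = max(worst, abs(Bz @ Bz / (z @ z) - 1.0))
    return worst
```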
We have the following result.
Theorem 2
The matrix R_{H}(I−γP) does not satisfy RIP(S,δ_{S}) with the following probability bound,
if
We observe that this result states that if the number of observations K is of order
Numerical results
In this section, we present numerical results for an example of a wireless network to demonstrate the potential of the compressed sensing approach. We consider a wireless network where terminals store packets in a finite buffer of size Q and employ Automatic Repeat reQuest (ARQ) to improve the delivery rate of packets. Time is divided into slots of fixed duration. For the sake of simplicity, we assume that the transmission of a packet occurs within the duration of a time slot and that channel coefficients in the various slots are i.i.d. Terminals with a non-empty buffer access the channel in a time slot with fixed probability α. The failure probability of a packet transmitted by a terminal is a function of the set of terminals concurrently transmitting in the same slot. Packet arrivals in the buffer of the terminals are modeled according to a Poisson process of intensity σ.
The FSM tracking the state of each individual terminal (see Figure 3) is composed of two subchains: a random walk-like subchain tracking the number of packets in the buffer (state space {0,1,…,Q}) and a forward counter-like subchain tracking the retransmission index of the packet being transmitted (state space {0,1,…,F}, where F is the maximum number of transmissions of a packet). An additional binary variable is added to the FSM to track transmission/idleness of the terminal. The FSM tracking the state of the overall network is the composition of the FSMs of the individual terminals. The transition probabilities of the Markov process determining the trajectory of the state of the network in the state space of the FSM are a function of the packet arrival rate, of the failure probability function, and of the transmission probability α.
Figure 3. Example of composition of subchains. FSM of a terminal in the considered network: a forward-counter subchain (packet retransmission) and a random walk subchain (number of packets in the buffer). On the right-hand side, the fundamental connectivity structure of most of the states in the state space is shown.
The cost function c measures the normalized cost in terms of throughput loss with respect to the saturation throughput achieved by the terminals in the absence of interference. In particular, the cost function is defined as the sum, over all the terminals, of one minus the failure probability of the transmitted packets. Idleness is assigned a cost equal to 1.
For Q = 5, F = 4, and 2 terminals, the size of the state space is 1681. The transition matrix P is used to compute P_{symm} defined in Equation (9) and the associated set of DW functions W [18]. DW basis sets are overcomplete; in order to keep complexity low, the columns of W are subsampled. In particular, we select 400 wavelet functions at different time scales.
Figures 4 and 5 depict the exact and reconstructed value function using LASSO mapped on the state-action space and the sorted magnitude of the coefficients x^{∗}, respectively. In these figures, the exact vector c and transition matrix P are used in order to show the properties of the sparse reconstruction based on DW.
An accurate approximation of the value function is achieved using only ∼15 DW coefficients.
Figure 4. Value function and its approximation. Value function (green) and its approximation (blue).
Figure 5. Magnitude of the coefficients of x^{∗}.
Figure 6 plots the reconstruction error (norm-2 of the difference between the real and reconstructed value functions, weighted by the steady-state distribution) as a function of T achieved by LASSO for different values of the sampling rate. The estimated transition probability matrix P̂ is used in the reconstruction.
Figure 6. Reconstruction error as a function of the time slot achieved by LASSO with different sampling rates. The number in the legend corresponds to K, that is, the number of states used in the reconstruction.
Figure 7 depicts the reconstruction error achieved by the LS-CS-based framework and that of standard Q-learning [10] as a function of the length of the observed sample-path. All states visited by the process are included in the Bellman residual. To improve stability, we used the LS-CS algorithm to generate this plot. LS-CS correlates x^{∗}(T) with x^{∗}(T − 1) by constraining changes in the support of the representation vector. Interested readers are referred to [15] for a detailed description and performance characterization of the algorithm.
Figure 7. Reconstruction error as a function of time. Comparison between the reconstruction error as a function of the time slot achieved by the proposed algorithm and Q-learning. All states visited by the sample path are used in the estimation.
The proposed algorithm achieves considerable accuracy in the estimation of the cost-to-go function while observing a number of state-cost pairs orders of magnitude smaller than that required by Q-learning.
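For context, the per-state baseline against which the sparse method is compared updates one state at a time; a minimal TD(0)-style sketch (our simplification of the Q-learning baseline, for a fixed control policy) makes the contrast explicit, since it learns nothing about states the sample-path never visits:

```python
import numpy as np

def td0_value(states, costs, N, gamma, alpha=0.1):
    """Tabular TD(0) estimate of the cost-to-go from a state-cost
    sample-path; states never visited keep their initial value."""
    v = np.zeros(N)
    for t in range(len(states) - 1):
        s, s_next = states[t], states[t + 1]
        v[s] += alpha * (costs[t] + gamma * v[s_next] - v[s])
    return v
```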
Conclusions
A novel framework for the online estimation of cost-to-go functions in wireless networks was proposed. We showed that the inherent regular and local structure of the graph associated with the FSM modeling the operations of wireless networks enables the sparse representation of cost-to-go functions. Our analysis, based on the decomposition of the overall graph into smaller fundamental structures, connects the structure of the FSM to the RIP of the transition probability matrix. Numerical results show that sparse approximation and projection onto DW basis sets enable a considerable reduction in the number of observations needed to estimate cost-to-go functions in wireless networks, and have the potential to make online learning practical in this context.
Endnotes
^{a}The incoherence of the transition matrix is connected to the magnitude of the inner products of its columns.
^{b}Note that c(s,s^{′}) can be generalized to be a random variable. In this case the expectation is over all the possible values of c(s,s^{′}).
^{c}Control can be included in the model by defining statistics and cost functions conditioned on a control action.
^{d}The indexing in the vector is based on a univocal map between the states in 𝒮 and the indices {1,…,N}.
^{e}Note that this assumption does not reduce the applicability of the proposed algorithm. In fact, the connectivity structure of the transition matrix is determined by standard protocols that are shared and known by all the nodes.
^{f}By allowed, we mean that the state transition has probability larger than zero for some set of parameters.
^{g}We note that numerical evaluations of incoherence for many typical Markov chains has revealed that incoherence holds on average.
^{h}In this analysis we assume that W is an orthonormal set of basis functions. We are aware that DWs are overcomplete and, thus, W is not an orthonormal basis set. The design of orthonormal wavelet bases tailored to FSMs modeling wireless networks is an important research direction.
Appendix 1
Proof of Theorem 1
Fix the index k in the support
and the vector
with ∥u∥_{2 }= 1 and
If
and
then
Therefore, if conditions (34) and (35) hold, then due to Theorem 5 in [27], we have y_{k}^{∗} = 0.
Define
where
Then, we find u that maximizes the left-hand side of (35) as the solution of
Let M = QSZ be the singular value decomposition of M; then (41) is equal to
where
where
and using the Schwarz inequality we obtain
where the equality holds if
Appendix 2
Proof of Theorem 2
In this appendix, we prove the result on the minimum number of observed states needed for perfect reconstruction of the cost-to-go function.
Lemma 1
(Geršgorin) The eigenvalues of an m × m matrix G all lie in the union of the m discs d_{i} = d_{i}(c_{i},r_{i}), i = 1,2,…,m, centered at c_{i} = G_{ii} and with radius r_{i} = ∑_{j≠i} |G_{ij}|.
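A small numerical check of the lemma (ours, for illustration only):

```python
import numpy as np

def gershgorin_contains_eigvals(G, tol=1e-9):
    """Verify that every eigenvalue of G lies in at least one disc
    centered at G[i, i] with radius sum_{j != i} |G[i, j]|."""
    centers = np.diag(G)
    radii = np.abs(G).sum(axis=1) - np.abs(centers)
    return all(
        any(abs(lam - c) <= r + tol for c, r in zip(centers, radii))
        for lam in np.linalg.eigvals(G)
    )

rng = np.random.default_rng(0)
assert gershgorin_contains_eigvals(rng.standard_normal((6, 6)))
```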
We will apply Geršgorin's lemma to the Gram matrix G = B^{T}B of the observation matrix B.
For the sake of exposition, we assume that R(T) is a K × n matrix whose components are drawn i.i.d. from a binary distribution, i.e., each entry takes the values ±1/√K with equal probability.
We shall show that every element of the Gram matrix G is bounded as follows, where m_{ij} = E[G_{ij}],
The dimension of the statespace,
where b_{i} is the ith column of B and
Lemma 2
(McDiarmid) Consider independent random variables X_{1},…,X_{m} and a function f(X_{1},…,X_{m}) such that changing the value of any single variable X_{j} changes the value of f by at most c_{j}. Then, for any ε > 0,

P(|f − E[f]| ≥ ε) ≤ 2 exp(−2ε^2 / ∑_{j=1}^{m} c_{j}^2).
For our functions of interest, m = n^2. We let R^{′} = R + Δ, where Δ_{j,l} = 0 except for j = j_{0}, l = l_{0}; i.e., R^{′} differs from R in a single entry.
We have that
The same bound holds for the off-diagonal elements of the Gram matrix. Using these values of the bounded differences c_{j} in McDiarmid's inequality and applying Geršgorin's lemma, we obtain Equation (60).
Equation (60) can be manipulated to yield the following relationship between the number of samples K and the size of the logical network, n: the RIP holds with high probability if K scales with n as stated in the condition of Theorem 2.
Competing interests
The authors declare that they have no competing interests.
Acknowledgements
The study was supported by AFOSR under Grants FA9550-08-0480 and FA9550-12-1-0215, and by the National Science Foundation (NSF) under Grant CCF-0917343.
References

1. G Bianchi, Performance analysis of the IEEE 802.11 distributed coordination function. IEEE J. Sel. Areas Commun. 18(3), 535–547 (2000)

2. A Konrad, B Zhao, A Joseph, R Ludwig, A Markov-based channel model algorithm for wireless networks. Wirel. Netw. 9(3), 189–199 (2003)

3. H Wu, Y Peng, K Long, S Cheng, J Ma, Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. Proceedings of IEEE INFOCOM 2002 (New York, USA, June 23–27, 2002), vol. 2, pp. 599–607

4. M Zorzi, RR Rao, On the use of renewal theory in the analysis of ARQ protocols. IEEE Trans. Commun. 44(9), 1077–1081 (1996)

5. L Badia, M Levorato, M Zorzi, Markov analysis of selective repeat type II hybrid ARQ using block codes. IEEE Trans. Commun. 56(9), 1434–1441 (2008)

6. E Modiano, An adaptive algorithm for optimizing the packet size used in wireless ARQ protocols. Wirel. Netw. 5(4), 279–286 (1999)

7. H Zhai, Y Kwon, Y Fang, Performance analysis of IEEE 802.11 MAC protocols in wireless LANs. Wirel. Commun. Mob. Comput. 4(8), 917–931 (2004)

8. H Su, X Zhang, Cross-layer based opportunistic MAC protocols for QoS provisionings over cognitive radio wireless networks. IEEE J. Sel. Areas Commun. 26, 118–129 (2008)

9. M Dianati, X Ling, K Naik, X Shen, A node-cooperative ARQ scheme for wireless ad hoc networks. IEEE Trans. Veh. Technol. 55(3), 1032–1044 (2006)

10. DP Bertsekas, Dynamic Programming and Optimal Control (Athena Scientific, Belmont, MA, 2001)

11. S Mahadevan, Average reward reinforcement learning: foundations, algorithms, and empirical results. Mach. Learn. 22, 159–195 (1996)

12. A Schwartz, A reinforcement learning method for maximizing undiscounted rewards. Proceedings of the Tenth International Conference on Machine Learning (Amherst, Massachusetts, 1993), pp. 298–305

13. F Fu, M van der Schaar, Structure-aware stochastic control for transmission scheduling. ArXiv preprint arXiv:1003.2471 (2010)

14. W Chen, D Huang, A Kulkarni, J Unnikrishnan, Q Zhu, P Mehta, S Meyn, A Wierman, Approximate dynamic programming using fluid and diffusion approximations with applications to power management. Proceedings of the 48th IEEE Conference on Decision and Control (Shanghai, China, December 16–18, 2009), pp. 3575–3580

15. N Vaswani, LS-CS-residual (LS-CS): compressive sensing on least squares residual. IEEE Trans. Signal Process. 58(8), 4108–4120 (2010)

16. E Candes, MB Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008)

17. M Maggioni, S Mahadevan, A multiscale framework for Markov decision processes using diffusion wavelets (2006). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.8956&rep=rep1&type=pdf

18. RR Coifman, M Maggioni, Diffusion wavelets. Appl. Comput. Harmonic Anal. 21, 53–94 (2006)

19. M Crovella, E Kolaczyk, Graph wavelets for spatial traffic analysis. Proceedings of IEEE INFOCOM 2003 (San Francisco, CA, USA, March 30–April 3, 2003), vol. 3, pp. 1848–1857

20. M Firooz, S Roy, Network tomography via compressed sensing. Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM) (Miami, Florida, USA, December 6–10, 2010), pp. 1–5

21. Y Chen, D Bindel, RH Katz, Tomography-based overlay network monitoring. Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (Miami Beach, FL, USA, 2003), pp. 216–231

22. J Haupt, WU Bajwa, M Rabbat, R Nowak, Compressed sensing for networked data. IEEE Signal Process. Mag. 25(2), 92–101 (2008)

23. M Wang, W Xu, E Mallada, A Tang, Sparse recovery with graph constraints: fundamental limits and measurement construction. ArXiv preprint arXiv:1108.0443 (2011), to appear in Proceedings of IEEE INFOCOM 2012

24. W Xu, E Mallada, A Tang, Compressive sensing over graphs. Proceedings of the 30th IEEE International Conference on Computer Communications (IEEE INFOCOM) (Shanghai, China, April 10–15, 2011), pp. 2087–2095

25. C Sherlaw-Johnson, S Gallivan, J Burridge, Estimating a Markov transition matrix from observational data. J. Operat. Res. Soc. 46(3), 405–410 (1995)

26. R Tibshirani, Regression shrinkage and selection via the Lasso. J. Royal Stat. Soc. Ser. B 58, 267–288 (1996)

27. H Xu, C Caramanis, S Mannor, Robust regression and Lasso. IEEE Trans. Inf. Theory 56(7), 3561–3574 (2010)

28. E Candes, T Tao, The Dantzig selector: statistical estimation when p is much larger than n. Annals Stat. 35(6), 2313–2351 (2007)

29. CH Zhang, J Huang, The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals Stat. 36(4), 1567–1594 (2008)

30. T Zhang, Some sharp performance bounds for least squares regression with L1 regularization. Annals Stat. 37(5A), 2109–2144 (2009)

31. J Haupt, W Bajwa, G Raz, R Nowak, Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans. Inf. Theory 56(11), 5862–5875 (2010)