
This article is part of the series Recent advances in optimization techniques in wireless communication networks.

Open Access Research

Structure-based learning in wireless networks via sparse approximation

Marco Levorato 1,2,*, Urbashi Mitra 1 and Andrea Goldsmith 2

Author Affiliations

1 Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA

2 Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA


EURASIP Journal on Wireless Communications and Networking 2012, 2012:278  doi:10.1186/1687-1499-2012-278

The electronic version of this article is the complete one and can be found online at: http://jwcn.eurasipjournals.com/content/2012/1/278


Received: 16 February 2012
Accepted: 18 July 2012
Published: 30 August 2012

© 2012 Levorato et al; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel framework is proposed for the online learning of expected cost-to-go functions characterizing the performance of wireless networks. The framework is based on the observation that wireless protocols induce structured and correlated behavior of the finite state machine (FSM) modeling the operations of the network. As a result, a significant dimension reduction can be achieved by projecting the cost-to-go function on a graph wavelet basis set capturing typical sub-structures in the graph associated with the FSM. Sparse approximation with random projection is then used to identify a concise set of coefficients representing the cost-to-go function in the wavelet domain. This Compressed Sensing (CS) approach enables a considerable reduction in the number of observations needed to achieve an accurate estimate of the cost-to-go function. The proposed method is characterized via stability analysis. In particular, we prove that the standard CS approach of the Least Absolute Shrinkage and Selection Operator (LASSO) will not provide stability. We also establish a connection between the structure of the FSM induced by the wireless protocols and the restricted isometry property of the effective projection matrix. Simulation results show that 15 wavelet functions can accurately represent a cost-to-go function defined on a state space of 2000 states. Moreover, the number of state-cost observations needed to estimate the cost-to-go function is orders of magnitude smaller than that required by traditional online learning techniques.

Introduction

Given the recent explosion in the number and types of wireless devices, new design and optimization paradigms are needed to effectively manage the complex and heterogeneous nature of modern wireless networks. We propose a novel approach for the online learning of cost-to-go functions in networks modeled via large finite state machines (FSMs). Typical cost functions measure performance metrics such as throughput, packet delivery probability and delay. Cost-to-go functions measure the expected long-term cost incurred by the network from any state of the FSM. Estimation of cost-to-go functions is instrumental for the optimization of network control strategies. Our estimation approach is based on the observation that wireless networking protocols induce a structured behavior of the FSM, enabling dimension reduction of its state space via wavelet projection and compressed sensing-like techniques. The sparse approximation approach proposed herein considerably reduces the length of the FSM trajectory required to achieve an accurate estimate of the cost-to-go function compared to traditional learning techniques.

Markov models have been widely used for the analysis and optimization of wireless networks [1-9]. In one of the earliest works on protocol modeling [1], a Markov chain is proposed to analyze the saturation throughput of IEEE 802.11 medium access control. The FSM models the backoff countdown counter controlling channel sensing and access of a wireless terminal and the retransmission index of the packet under transmission. In general, the Markov chains defined in these models track the logical state of the wireless protocols (e.g., the retransmission index of the packet being transmitted, the number of packets in the buffer and the backoff counter) as well as environmental variables (e.g., the channel state).

The online optimization of control strategies based on these models requires the estimation of cost-to-go functions from a sample-path of state-cost observations [10-12]. However, the immense size of the state space of FSMs associated with practical wireless networks limits the applicability of traditional online learning techniques to toy networks and extremely simple case studies. In fact, the estimation of cost-to-go functions in traditional online learning (e.g., Q-learning and Reinforcement learning [10-12]) requires sufficient observation of a sample-path such that it hits all the states of the FSM a large number of times. Approximations of cost-to-go functions [13,14] are generally based on oversimplified models and thus cannot be accurately used in general practical networks. For instance, the fluid approximation proposed in [14] is based on the assumption that the cost-to-go function is smooth in the state space of the FSM, meaning that only small variations of its value computed in neighboring states are allowed. This assumption is suitable for simple cases (e.g., buffer models and cost functions modeling buffer congestion), but does not hold for more complex FSM models and general cost functions.
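To make the sample-complexity issue concrete, the sketch below evaluates a cost-to-go function with tabular TD(0), the basic one-update-per-transition scheme underlying the online learning methods of [10-12]. The chain, costs, and step size are hypothetical, purely for illustration: even on this small 50-state example, each state must be visited thousands of times before the estimate settles.

```python
import numpy as np

rng = np.random.default_rng(2)
N, gamma, alpha = 50, 0.9, 0.05

# Hypothetical FSM: random row-stochastic transition matrix and
# random per-transition costs (for illustration only).
P = rng.random((N, N))
P /= P.sum(axis=1, keepdims=True)
C = rng.random((N, N))

# Tabular TD(0): one update per observed transition. Every state must be
# visited many times before the estimate is accurate, which is what limits
# these methods on the huge state spaces of practical networks.
J_hat = np.zeros(N)
s = 0
for _ in range(100_000):
    s_next = rng.choice(N, p=P[s])
    J_hat[s] += alpha * (C[s, s_next] + gamma * J_hat[s_next] - J_hat[s])
    s = s_next

# Ground truth from the Bellman equation J = c + gamma * P * J.
c = (P * C).sum(axis=1)
J_true = np.linalg.solve(np.eye(N) - gamma * P, c)
```

With roughly 2000 visits per state the tabular estimate is accurate; scaling the same visit count to the thousands of states of a realistic network FSM is what makes direct tabular learning impractical.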

This work provides the following contributions: 1) we present a framework based on CS for the approximation of cost-to-go functions; 2) we analyze the structure of the FSMs modeling wireless networks based on their decomposition into fundamental components; 3) we connect the structure of the FSM to the Restricted Isometry properties of the effective projection matrix; 4) we analyze the stability of CS in this context via perturbation analysis; 5) we present a methodology for the use of Diffusion Wavelets (DW) in online learning; 6) we present numerical results illustrating the performance of the proposed approach.

The framework proposed herein is not tailored to a specific canonical network example, but is rather based on the inherent structure of the FSMs modeling the operations of general wireless networks. The fundamental observation behind our framework is that the directed graph associated with the temporal evolution of the state of the FSM is inherently regular and local. As a consequence, typical trajectories on the FSM can be described by a number of graph sub-structures considerably smaller than the number of possible edges between states. Figure 1 depicts a schematic of the proposed online learning algorithm. A trajectory of the FSM associated with the operations of the physical network is used to estimate the transition probability matrix and the cost function and formulate the estimation problem. The cost-to-go function is projected onto a graph wavelet basis set capturing relevant and typical substructures in the graph. Sparse approximation (and in particular the least-squares CS (LS CS) algorithm [15]) is then employed to identify a concise set of substructures to represent the cost function of interest.

Figure 1. Schematic of the algorithm. Graphical representation of the proposed approach. The physical network is a collection of terminals (gray circles) connected by wireless links: data (solid lines) and interference (dashed lines) links. The state of the terminals is defined by a collection of variables whose value evolves over time. The temporal evolution of the state of the terminals and of the links is modeled by the logical graph of the network. A sample-path of the network on the logical graph generates a sequence of observations, which are used to estimate the transition matrix P̂ and the cost vector ĉ. The cost function is projected onto a diffusion wavelet basis. A concise representation in the wavelet domain of the long-term cost function J is found as the solution of a sparse approximation problem.

We characterize the performance of sparse approximation applied to the estimation problem addressed herein in terms of the minimum number of states that need to be observed to achieve an accurate estimate of the cost-to-go function. Our analysis is based on the decomposition of the FSM into fundamental structures we refer to as sub-chains. The transition matrix associated with the individual sub-chains is analyzed to measure the incoherence of the overall transition matrix, which is exploited to determine the conditions under which the restricted isometry property (RIP) [16] holds for our effective random projection matrix.

Note that whereas most prior work on sparse approximation focuses on static scenarios, the framework considered in this article addresses the problem of learning in dynamical systems. The inclusion of states visited a small number of times in the sample-path of the FSM results in instability of the estimation algorithm. To reduce this effect, we use the LS CS algorithm proposed in [15]. LS CS correlates the output of the sparse approximation algorithm by constraining variations in the support of the solution.

Relevant to the approach proposed herein, Mahadevan et al. [17] proposed the use of DWs [18] as a projection basis for the sparse approximation of cost-to-go functions. In [17], offline estimation of the cost-to-go function is considered; however, no performance analysis is undertaken. In contrast, we examine online learning and provide a detailed analysis to assess the performance of sparse approximation applied to Markov models of wireless networks. Compressed sensing-based techniques have been previously applied to estimation problems in networks [19-24]. These works address graphs related to the physical connectivity of the network, where nodes are terminals and links are specific wired or wireless links, or are modeled by undirected graphs. We address the fundamentally different problem of estimating functions defined on the state space of the FSM, i.e., the logical graph of the wireless network modeling the temporal evolution of the network, from a small number of state observations.

Numerical results for a case of interest show that a small number of graph wavelets (∼15) are sufficient to accurately approximate a cost-to-go function defined on a state space of approximately 2000 states. Moreover, the proposed algorithm can estimate the cost-to-go function by observing a trajectory of the state of the FSM visiting a small subset of states in the state space.

The rest of this article is organized as follows. Section ‘System model and problem formulation’ describes the model of the network and defines the estimation problem. The sparse learning algorithm is presented in Section ‘Sparse estimation of cost functions’. Section ‘Structure of the graph’ proposes the decomposition of the overall graph into sub-chains and analyzes the properties of the transition probability matrix. Section ‘Perturbation analysis and performance bounds’ discusses the stability of sparse approximation applied in our context and characterizes the performance of the learning algorithm in terms of how the number of state observations grows with the network size. Numerical results are presented in Section ‘Numerical results’. Section ‘Conclusions’ concludes the article. The proofs of the stated theorems are given in Appendices 1 and 2.

System model and problem formulation

The network is modeled as an FSM whose state evolves within the state space S = {s1,…,sN}, with N = |S|. Define S(t) ∈ S as the state of the FSM at time t = 0,1,2,…. We assume that the sequence S = {S(0),S(1),S(2),…} is a Markov process with transition probabilities

p(s,s′) = P(S(t + 1) = s′ | S(t) = s)   (1)

where P(·) denotes the probability of an event. The performance of the network is measured by a function c(s,s′) that assigns a positive and bounded cost to the transition from state s to state s′. The average cost from state s is

c(s) = ∑_{s′∈S} p(s,s′) c(s,s′)   (2)

The function

J(s) = E[ ∑_{t=0}^{∞} γ^t c(S(t), S(t+1)) | S(0) = s ]   (3)

where E[·] denotes expectation and γ ∈ (0,1) is the discount factor, is the expected discounted long-term cost. This function is also known as the cost-to-go function and is central to dynamic programming (DP) and optimal control [10].
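As a concrete sketch, the discounted long-term cost of a toy chain (the matrix and costs below are hypothetical, purely for illustration) can be computed either by truncating the discounted sum over temporal offsets or, equivalently, by solving the linear fixed-point system implied by the definition:

```python
import numpy as np

# Hypothetical 3-state FSM (illustrative only).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.1, 0.0, 0.9]])
c = np.array([1.0, 2.0, 0.5])  # average per-state cost c(s), as in Eq. (2)
gamma = 0.9

# Truncated discounted sum over time offsets tau, using the tau-step
# transition probabilities P^tau.
J_sum = sum((gamma ** tau) * (np.linalg.matrix_power(P, tau) @ c)
            for tau in range(300))

# Equivalent fixed-point view: J = c + gamma * P * J, i.e. (I - gamma*P) J = c.
J = np.linalg.solve(np.eye(3) - gamma * P, c)
```

The two computations agree; the fixed-point form is the one the estimation algorithm later exploits.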

For any fixed s ∈ S, the function J(s) is independent of the time index t and can be rewritten as

J(s) = ∑_{τ=0}^{∞} γ^τ ∑_{s′∈S} p_τ(s,s′) c(s′)   (4)

where

p_τ(s,s′) = P(S(t + τ) = s′ | S(t) = s)   (5)

is the τ-step transition probability from state s to s′. Consider the graph associated with the FSM, where vertices are states in S and edges are state transitions with non-zero probability. The temporal distance τ in the evolution of the FSM translates into a number of hops in the graphical representation. Starting from a vertex s, J(s) is computed by sequentially summing the discounted and weighted cost of the reachable vertices for an increasing number of hops in the graph. This representation of the temporal evolution of the network as a graph is the key to the sparse learning algorithm proposed herein.

In online learning, the function J is estimated from a sample-path of state-cost observations. The sample-path OT of observations up to time T includes the sequence of states {S(0),S(1),…,S(T)} and state transition costs {c(S(0),S(1)),…,c(S(T−1),S(T))}. Denote by J the vector collecting the expected long-term cost J(s) for all s ∈ S, i.e., J = [J(s1),…,J(sN)]^T. The objective is to build an estimator of J based on the observations OT minimizing a distance metric such as ‖J − Ĵ_T‖_2, where Ĵ_T is the estimate at time T.

The main challenge in achieving an accurate estimate of J in wireless networks is the enormous size of the state space S underlying the associated FSM. In traditional online learning, an accurate estimate of J requires a sample-path of observations in which each state in S is visited a considerable number of times; indeed, all the allowed state transitions need to be observed a sufficient number of times to estimate their probability.

Sparse estimation of cost functions

We now present an algorithm for the online learning of cost-to-go functions in wireless networks from the observation of a state-cost trajectory of the associated FSM. The baseline observation is that networking protocols induce a structured behavior of the network, which is reflected in a structured graph associated with the FSM. Thus, every state-cost observation conveys information about multiple states due to the correlated behavior of the network. As a result, we propose an algorithm that exploits this structure to estimate J from fewer observations than in traditional learning. The algorithm is composed of three elements:

observation: the transition probabilities and cost function c(·) are estimated by observing a state-cost sample-path;

projection: J is projected onto a graph wavelet basis set capturing typical structures in the graph;

sparse estimation of J: a sparse estimation algorithm is used to identify a concise set of basis functions providing the best fit with the estimated transition probabilities and cost function.

We define the N × N matrix P to be the transition probability matrix, where P[s,s′] = p(s,s′) as in Equation (1).

The long-term cost vector J can be rewritten as

J = c + γPJ   (6)

Thus, J can be computed as the fixed-point solution J = T(J) of the operator T(x) = c + γPx. The transition matrix P and cost vector c are not known a priori and need to be estimated from observation. At time T, the sample-path OT is used to compute the estimates P̂_T and ĉ_T of P and c. We use the estimators

P̂_T[s,s′] = ( ∑_{t=0}^{T−1} 1(S(t) = s, S(t+1) = s′) ) / ( ∑_{t=0}^{T−1} 1(S(t) = s) )   (7)

ĉ_T(s) = ( ∑_{t=0}^{T−1} 1(S(t) = s) c(S(t), S(t+1)) ) / ( ∑_{t=0}^{T−1} 1(S(t) = s) )   (8)

where 1(·) is the indicator function. More refined estimators can be employed to reduce the sampling rate [25].
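A minimal sketch of these empirical estimators follows; the chain below is hypothetical, and Equations (7)-(8) are implemented as plain counts over a simulated sample-path:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 5000

# Hypothetical ground truth: random row-stochastic P and per-transition costs.
P = rng.random((N, N))
P /= P.sum(axis=1, keepdims=True)
C = rng.random((N, N))

# Generate the sample-path O_T = {S(0), ..., S(T)}.
S = [0]
for _ in range(T):
    S.append(rng.choice(N, p=P[S[-1]]))

# Eqs. (7)-(8): normalized transition counts and per-state average costs.
visits = np.zeros(N)
P_hat = np.zeros((N, N))
c_hat = np.zeros(N)
for t in range(T):
    s, s_next = S[t], S[t + 1]
    visits[s] += 1
    P_hat[s, s_next] += 1
    c_hat[s] += C[s, s_next]
P_hat /= np.maximum(visits, 1)[:, None]   # guard against unvisited states
c_hat /= np.maximum(visits, 1)
```

On this tiny 4-state example every state is visited often, so the estimates converge quickly; the point of the surrounding discussion is that this stops being true as N grows.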

The estimates P̂_T and ĉ_T may be very noisy and incomplete estimates of P and c even for T ≫ N. In fact, an accurate estimate of the transition probabilities from a state s in S and of the cost function c(s) may require a considerable number of visits to s. For asymptotically large T, the average number of times the FSM is in state s is Tπ(s), where π(s) is the steady-state probability of s. Note that the average steady-state probability of the states in S is 1/N, and thus the average number of visits to a state is T/N.
However, in a finite-length sample-path the trajectory of the state of the FSM may remain confined to a region of S even for lengths T larger than N, and the number of states visited may be much smaller than T. Thus, due to the large size of the state space of FSMs modeling wireless networks, the number of observations needed to achieve an accurate estimate of the cost-to-go function is generally enormous, and may exceed the coherence time of the network: the statistics of the stochastic process modeling the operation of the network may change before the learning process achieves a meaningful estimate of J.

To cope with this issue, we exploit the fact that the FSMs modeling the operations of wireless networks, and their associated graphs, present a very regular connectivity structure, and that the transition probabilities are determined by a limited set of parameters, e.g., the packet arrival probability at the buffer of the nodes and the packet failure probability (see Section ‘Structure of the graph’). By regular, we mean that the connectivity structure from many nodes of the graph to their 1-hop neighbors is similar. The representation of the graph provided by the transition matrix is thus intrinsically redundant, and trajectories of the network on the graph can presumably be described by a small number of functions capturing typical substructures of the graph. We observe that these substructures involve neighborhoods of states at different numbers of hops, corresponding to different temporal distances between observations in the sample-path.

A fundamental element of the proposed framework is the projection of the cost-to-go function J on a set of basis functions capturing the typical substructures of the graph at various time scales. In fact, every new observation updates the estimate of all the substructures including the observed transition. We employ the recently proposed DWs [18] as a basis set for the projection. DWs are a multiresolution geometric construction for the multiscale analysis of operators on graphs. DW functions are computed by sequentially applying a diffusion operator (for instance, the transition matrix P) at the current scale k, compressing the range via a local orthonormalization procedure, representing the operator in the compressed range, and computing P^(2^k) on this range. Functions defined on the support space are analyzed in multiresolution fashion, where dyadic powers of the diffusion operator correspond to dilations, and projections correspond to downsampling. Even if P is not known a priori, we assume that the location of the non-zero elements of P, that is, the connectivity structure of P, is known. Define I(P) = sgn(P + P^T). The basis set W is then computed on Psymm, where the ith row of Psymm is

Psymm[i,j] = I(P)[i,j] / √(d_i d_j),  j = 1,…,N,  where d_i = ∑_{j′} I(P)[i,j′]   (9)

The symmetrization step is required because DWs presume symmetric diffusion operators. The design of wavelet functions tailored to the compression of directed graphs could further improve the performance of the algorithm proposed herein.
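The symmetrization step can be sketched as follows; the degree-based normalization shown here is one common way to obtain a symmetric diffusion operator from a connectivity pattern and is an assumption of this sketch, not necessarily the paper's exact construction:

```python
import numpy as np

# Directed transition matrix of a small hypothetical FSM (illustrative only).
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

# I(P) = sgn(P + P^T): the undirected connectivity pattern of the graph.
I_P = np.sign(P + P.T)

# Degree-normalized symmetric diffusion operator (assumed normalization):
# P_symm[i, j] = I(P)[i, j] / sqrt(d_i * d_j), with d_i the degree of node i.
d = I_P.sum(axis=1)
P_symm = I_P / np.sqrt(np.outer(d, d))
```

The resulting operator is symmetric, as the DW construction requires, and retains only the connectivity structure of P, which is assumed known.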

Define W as a diffusion wavelet basis set computed on Psymm, where the DW functions are the columns of W. We then have J = Wx, where x is the representation vector collecting the coefficients of the wavelet functions in W. Given P and c, the representation vector x providing the most accurate approximation of J on W minimizes the Bellman residual ‖Wx − (c + γPWx)‖_2. We then have

x* = arg min_x ‖(I − γP)Wx − c‖_2   (10)

The main idea behind the estimation paradigm proposed herein is that the DW set of functions is a sparsifying basis for the cost-to-go function J. Due to the structured behavior defined by networking protocols, a small number of functions can represent the evolution, and thus the collected cost, from large groups of states. The Least Absolute Shrinkage and Selection Operator (LASSO) algorithm [26] minimizes the norm of the residual plus a regularization term. For the considered problem, the LASSO is formulated as

x̂_T = arg min_x ‖R(T) [ (I − γP̂_T) W x − ĉ_T ]‖_2^2 + λ‖x‖_1   (11)

where R(T) is a random projection matrix. The ℓ1 regularization term λ‖x‖_1 is a sparsity-promoting term, meaning that the least significant coefficients in x are pushed toward zero.

In the Compressed Sensing (CS) literature, the matrix <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M55','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M55">View MathML</a> and W in the above equation are generally referred to as the sensing and representation matrices, respectively. Note that the elements in the rows of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M56','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M56">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M57','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M57">View MathML</a> corresponding to states not visited in the trajectory OT are set to zero and can be eliminated in the projection.
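As an illustration of how the sparse program in Equation (11) can be solved, the following sketch minimizes a LASSO objective of the form ‖(I − γP)Wx − c‖² + λ‖x‖1 by iterative soft-thresholding (ISTA), a standard solver for LASSO-type problems. The matrices P and W and the sparse vector x_true below are synthetic placeholders, not the paper's FSM or diffusion wavelet basis:

```python
import numpy as np

def soft_threshold(z, t):
    """Component-wise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=5000):
    """Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# Toy instance of Eq. (11): sensing matrix (I - gamma*P)W, observations c.
rng = np.random.default_rng(0)
N = 60
P = rng.random((N, N))
P /= P.sum(axis=1, keepdims=True)                  # row-stochastic stand-in chain
gamma = 0.9
W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal stand-in for the DW basis
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
c = (np.eye(N) - gamma * P) @ (W @ x_true)         # cost consistent with v = W x_true
x_hat = lasso_ista((np.eye(N) - gamma * P) @ W, c, lam=1e-3)
```

With a noiseless cost vector and a small λ, the recovered x_hat is close to the sparse x_true, mirroring the behavior the paper exploits.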

Structure of the graph

Wireless networking protocols induce a very structured temporal evolution of the network and, thus, a very structured graph associated with the FSM. This structure is the key to establishing general properties of the transition matrix P that determine the performance of the sparse reconstruction, in terms of the minimum number of states that need to be included in Equation (11) to achieve an accurate reconstruction. Our analysis is based on the decomposition of the overall graph into smaller graphs, which we refer to as sub-chains. The good incoherence properties of the transition matrices associated with the sub-chains are reflected in good incoherence of the overall transition matrix and, thus, result in good performance of the sparse reconstruction.

The decomposition into sub-chains of the complex graph associated with the FSM modeling the temporal evolution of wireless networks results from the observation that the state of the network is the collection of many individual descriptors tracking counters and variables associated with the functioning of protocols and the environment. The temporal evolution of each individual descriptor follows simple rules that can be easily analyzed to retrieve properties of the overall graph. We then define S(t) = {S1(t),…,SD(t)}, where Sd(t) is the state of the dth sub-chain at time t. We denote by <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M58','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M58">View MathML</a> the state space of the dth sub-chain, with <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M59','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M59">View MathML</a>.

The sub-chains track the evolution of the individual components of the state space. Although in the overall FSM the transition probabilities are a function of the overall state of the network, the connectivity structure of the sub-chains is preserved in the overall FSM. In fact, the state transition from <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M60','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M60">View MathML</a> to <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M61','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M61">View MathML</a> in the overall FSM is allowed only if the state transition from sd to <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M62','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M62">View MathML</a> is allowed in the corresponding sub-chain, for all d = 1,…,D. Thus, the properties of the connectivity structure of the sub-chains are inherited by the overall graph.

In stochastic models for wireless networks, two classes of sub-chains can be identified:

Counter-like sub-chains (see Figure 2a,b): the FSM is associated with a counter. The value of the counter increments/decrements until it is reset to a given value. Examples of counter-like sub-chains are: the number of retransmissions of a packet in ARQ protocols, the backoff timers in DCF, and the transmission windows and timers in TCP. This class can be further divided into forward counters (Figure 2a) and backward counters (Figure 2b), depending on whether the counter is incremented or decremented until being reset to a predefined value;

Figure 2. Sub-chains. (a) Counter-like sub-chain, forward counter. (b) Counter-like sub-chain, backward counter. (c) Random walk sub-chain.

Random walk sub-chains (see Figure 2c): the value of the descriptor variable is subject to random, but constrained, increments and decrements. Examples of random walk sub-chains are channel state descriptors and variables tracking the number of packets in a buffer.

For instance, in the pioneering work [1], the Markov chain used to analyze the network is the composition of a random walk and a counter-like sub-chain. It can be observed that counter-like and random walk sub-chains present a very local and regular connectivity structure. By local, we mean that every state connects to a small neighborhood of states. By regular, we mean that states connect to their 1-hop neighbors in a similar fashion. For instance, in counter-like sub-chains, states connect to the state corresponding to a reset counter and to the state associated with an incremented or decremented value (possibly plus a self-loop). As a result, the overall graph is regular and local. This property is instrumental to efficient compression in the wavelet domain, meaning that only a limited number of notable substructures is needed to model the temporal evolution of the state of the network.
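The two sub-chain classes can be instantiated as natural random-walk transition matrices, as sketched below. The reset targets chosen here (state 0 for the forward counter, the top state for the backward counter) are representative modeling assumptions, not the paper's exact matrices:

```python
import numpy as np

def forward_counter(N):
    """Forward counter: each state either increments or resets to state 0.
    Natural random walk: equal probability on the allowed transitions."""
    P = np.zeros((N, N))
    for i in range(N - 1):
        P[i, i + 1] = 0.5   # increment the counter
        P[i, 0] = 0.5       # reset event
    P[N - 1, 0] = 1.0       # the last state can only reset
    return P

def backward_counter(N):
    """Backward counter: decrement until 0, then reset (here, to the top state)."""
    P = np.zeros((N, N))
    for i in range(1, N):
        P[i, i - 1] = 0.5   # decrement
        P[i, N - 1] = 0.5   # reset event
    P[0, N - 1] = 1.0
    return P

def random_walk(N, ell=2):
    """Bounded random walk: uniform over neighbors within distance ell."""
    P = np.zeros((N, N))
    for i in range(N):
        lo, hi = max(0, i - ell), min(N - 1, i + ell)
        P[i, lo:hi + 1] = 1.0 / (hi - lo + 1)
    return P
```

Each matrix is row-stochastic, and every row has only a handful of non-zero entries, which is the locality property discussed above.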

Define an indexing in <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M63','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M63">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M64','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M64">View MathML</a> assigning an integer in {1,2,…,N} to each state <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M65','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M65">View MathML</a> and an integer in {1,2,…,Nd} to each state <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M66','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M66">View MathML</a>. Define also <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M67','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M67">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M68','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M68">View MathML</a> as all the states of which id is a 1-hop neighbor and the 1-hop neighbors of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M69','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M69">View MathML</a>, respectively. 
Note that the sets <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M70','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M70">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M71','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M71">View MathML</a> do not coincide due to the directionality of the graph modeling the behavior of the network. In fact, Markov processes modeling the operations of wireless networks are not reversible. In the counter-like and random walk structures, the connections from most of the states are repetitions of the same (local) connectivity structure. Thus, except for a few particular states, whose effect decreases as the dimension of the Markov chain increases, given two states id,jd ∈ {1,…,Nd}, with id ≠ jd, the number of states in <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M72','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M72">View MathML</a> is generally small and so is the inner product <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M73','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M73">View MathML</a>. Since the connectivity structure of the overall graph results from the composition of the connectivity structures of the sub-chains, the inner product of the columns of P is intuitively small. In order to perform a quantitative analysis of the inner products of P, we focus on the natural random walk defined on the sub-chains, that is, we assign equal probability to all the allowed transitions from a given state. We have, then,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M74','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M74">View MathML</a>

(12)

where pd(id,jd) is the transition probability from state id to state jd in the state space of the dth sub-chain. Then, the inner product between the ith and the jth columns of P is

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M75','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M75">View MathML</a>

(13)

where id, jd, and kd are the states of the dth sub-chain associated with the overall states i, j, and k, respectively.
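When the sub-chains evolve independently, the overall transition matrix is the Kronecker product of the sub-chain matrices, and the per-sub-chain factorization of the column inner products in Equation (13) can be verified numerically. Independence is an assumption made here for illustration; in the overall FSM the transition probabilities generally depend on the full state:

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2 = 5, 7
# Two arbitrary row-stochastic sub-chain matrices (placeholders).
P1 = rng.random((N1, N1)); P1 /= P1.sum(axis=1, keepdims=True)
P2 = rng.random((N2, N2)); P2 /= P2.sum(axis=1, keepdims=True)
P = np.kron(P1, P2)  # overall chain if the two sub-chains move independently

# Overall state i maps to sub-chain states (i1, i2) with i = i1*N2 + i2.
i1, i2, j1, j2 = 2, 3, 4, 1
i, j = i1 * N2 + i2, j1 * N2 + j2
lhs = P[:, i] @ P[:, j]                                   # inner product in the overall chain
rhs = (P1[:, i1] @ P1[:, j1]) * (P2[:, i2] @ P2[:, j2])   # product of sub-chain inner products
assert np.isclose(lhs, rhs)
```

The identity holds for every column pair, since a column of a Kronecker product is the Kronecker product of the corresponding sub-chain columns.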

We then need to compute the inner products of the columns of the transition matrices associated with the classes of sub-chains. The average inner products of the backward counter, forward counter and random walk sub-chains are, respectively,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M76','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M76">View MathML</a>

(14)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M77','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M77">View MathML</a>

(15)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M78','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M78">View MathML</a>

(16)

where in a random walk sub-chain the transition probability from state id to state jd is larger than zero only if |jd − id| ≤ ℓ, and we assume Nd ≫ 2ℓ + 1. We observe that all these mean inner products are of order <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M79','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M79">View MathML</a>. As an example of how these quantities are computed, consider the forward counter-like sub-chain. The associated transition matrix is

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M80','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M80">View MathML</a>

(17)

We remark that the value of the transition probability pd(id,jd) is the inverse of the number of outgoing links, i.e., allowed transitions, from id. The average inner product <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M81','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M81">View MathML</a> is calculated by sequentially considering all the columns indexed by id = 1,…,Nd−1 and computing the product with the columns indexed by jd > id.

The average inner products of the sub-chains decrease with the number of states of the associated FSM. Although in the general case the probability of transition from a state to its neighbors may differ substantially from that provided by the natural random walk associated with the graph structure, the locality and regularity of the structure of the sub-chains cause the average overlap of the sets <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M82','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M82">View MathML</a> to vanish as Nd increases. Thus, sufficiently large sub-chains are associated with incoherent transition matrices.

The average inner product of a column with itself is also relevant to the performance of the sparse reconstruction (these means appear in the mean of the Gram matrix in the effective random projection). It is easy to compute that the average of this quantity for the counter-like backward sub-chain, counter-like forward sub-chain and random walk sub-chain is

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M83','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M83">View MathML</a>

(18)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M84','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M84">View MathML</a>

(19)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M85','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M85">View MathML</a>

(20)

where C is a positive constant smaller than 1. We observe that each of these means can be expressed as <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M86','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M86">View MathML</a>, where αd = 1/4, 2/3, and 1/(2ℓ + 1) for the backward counter, forward counter, and random walk, respectively. The fact that αd < 1 for all the sub-chains is critical for the performance analysis of the CS approach.
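The decay of the average column inner products with Nd can be reproduced numerically on simple instances of the three sub-chain classes. The reset rules below are representative choices rather than the exact matrices behind Equations (14)–(20), so the constants differ from the αd above, but the same vanishing behavior emerges:

```python
import numpy as np

def forward_counter(N):
    """Forward counter with equal-probability increment/reset (illustrative variant)."""
    P = np.zeros((N, N))
    for i in range(N - 1):
        P[i, i + 1] = 0.5; P[i, 0] = 0.5
    P[N - 1, 0] = 1.0
    return P

def backward_counter(N):
    """Backward counter with equal-probability decrement/reset (illustrative variant)."""
    P = np.zeros((N, N))
    for i in range(1, N):
        P[i, i - 1] = 0.5; P[i, N - 1] = 0.5
    P[0, N - 1] = 1.0
    return P

def random_walk(N, ell=2):
    """Bounded random walk, uniform over neighbors within distance ell."""
    P = np.zeros((N, N))
    for i in range(N):
        lo, hi = max(0, i - ell), min(N - 1, i + ell)
        P[i, lo:hi + 1] = 1.0 / (hi - lo + 1)
    return P

def mean_offdiag_inner(P):
    """Average inner product between distinct columns of P."""
    G = P.T @ P
    N = P.shape[0]
    return (G.sum() - np.trace(G)) / (N * (N - 1))

# Average off-diagonal inner product at two chain sizes, per sub-chain class.
decays = {build.__name__: (mean_offdiag_inner(build(50)), mean_offdiag_inner(build(200)))
          for build in (forward_counter, backward_counter, random_walk)}
```

Quadrupling Nd reduces the average inner product by roughly the same factor, consistent with the O(1/Nd) behavior.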

Perturbation analysis and performance bounds

In this section, we characterize the performance of the sparse approximation of the cost-to-go function proposed herein. We first discuss the stability of the solution of (11) and then determine how much compression is possible to ensure good reconstruction of the value function v. The number of observations required for good reconstruction directly translates to the learning rate of our proposed algorithm. An exact analysis of the transition matrix is challenging; however, by exploiting the average behavior of several key structures, we can determine the relationship between the minimum number of observations for this compressed sensing problem and the size of the logical graph.

Perturbation analysis

We discuss in this section how estimation noise in the sensing matrix <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M87','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M87">View MathML</a> affects the stability of the reconstruction provided by (11) as new states are visited by the sample-path. We show that the inclusion in (11) of a row in <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M88','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M88">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M89','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M89">View MathML</a> associated with a state hit by the FSM a small number of times may result in a dramatic change of x(T) with respect to x(T−1). As we are looking for the fixed point solution of the operator Ω, instability and large variations of the sparse solution are highly undesirable. In order to improve the stability of the algorithm, in the numerical results presented in Section ‘Numerical results’ we use the LS CS algorithm proposed in [15].

Define

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M90','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M90">View MathML</a>

(21)

In [27], Xu et al. have shown that the regularized regression problem of LASSO:

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M91','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M91">View MathML</a>

(22)

is equivalent to the robust regression (RR) problem, stated as

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M92','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M92">View MathML</a>

(23)

where <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M93','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M93">View MathML</a>, <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M94','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M94">View MathML</a> is the set of perturbation matrices <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M95','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M95">View MathML</a> and m is the number of columns of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M96','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M96">View MathML</a>. We drop the dependence on T of the estimated matrices and vectors. Thus, the vector x is optimized for a worst case perturbation whose range is determined by the parameter λ, that is, the algorithm is robust to perturbations. Interestingly, the larger the value of λ, the sparser the output vector x, but also the larger the set of perturbations considered. In [27], it was also shown that LASSO is not stable, meaning that small variations of the sensing matrix <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M97','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M97">View MathML</a> may lead to significantly different output vectors.

The following theorem addresses the instability issues in the solution to the RR problem. In particular, it shows that the inclusion of a new sample may result in suboptimal solutions to the RR problem. Moreover, due to the equivalence between LASSO and the RR problem, the same instability result applies to LASSO as well.

Theorem 1

Let x be the solution of the problem

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M98','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M98">View MathML</a>

(24)

where <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M99','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M99">View MathML</a> and assume <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M100','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M100">View MathML</a>. Define y as the solution of

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M101','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M101">View MathML</a>

(25)

with <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M102','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M102">View MathML</a> and

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M103','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M103">View MathML</a>

(26)

Denote the supports of y and x as <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M104','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M104">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M105','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M105">View MathML</a>, respectively. If <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M106','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M106">View MathML</a> such that

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M107','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M107">View MathML</a>

(27)

where <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M108','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M108">View MathML</a>, and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M109','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M109">View MathML</a> then <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M110','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M110">View MathML</a>. Thus, the support of the solution of LASSO changes if a new state meeting the hypothesis is added to the Bellman residual.

The proof of the theorem is in Appendix 1.

Minimum number of observations

In Section ‘Numerical results’, we will employ the LS CS residual algorithm [15] to minimize the Bellman residual subject to a sparsity constraint:

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M111','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M111">View MathML</a>

(28)

We will observe the temporal evolution of the Markov chain over multiple time steps; as such, we will not observe the cost at every state. Thus, coupled with additional random mixing to exploit the benefits of compressed sensing, we will optimize the following modified Bellman residual:

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M112','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M112">View MathML</a>

(29)

where R is a random matrix to be defined in the sequel and R(T) is the submatrix formed by retaining the columns of R indexed by the states hit in the observation interval T. If the matrix R(T)(I − γP)W satisfies the so-called restricted isometry property, defined below, then the squared error between x and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M113','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M113">View MathML</a> achieved by the Dantzig selector [28] can be bounded; in particular, there are guarantees on correctly estimating the locations of the non-zero components of x, and thus the squared error is proportional to the number of non-zero components and the noise variance. Comparable analysis can be made for LASSO [29,30]. We focus on properties of R(T)(I − γP), recognizing that projecting onto an orthonormal W would be an isometric operation. We note that a negative result regarding RIP would call the use of our approach into question. However, a positive RIP result suggests that our method works. Analysis of LS CS also relies on RIP parameters.
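The construction of the effective sensing matrix in Equation (29) is mechanical and can be sketched as follows. All quantities here (the estimated chain, costs, basis, and visited-state set) are synthetic placeholders chosen only to show the shapes and indexing:

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, K, gamma = 40, 40, 15, 0.9

# Placeholder estimates gathered along the sample path.
P_hat = rng.random((N, N))
P_hat /= P_hat.sum(axis=1, keepdims=True)          # estimated transition matrix
c_hat = rng.random(N)                              # estimated per-state costs
W, _ = np.linalg.qr(rng.standard_normal((N, m)))   # stand-in for the DW basis

visited = np.sort(rng.choice(N, size=25, replace=False))  # states hit in the interval T
R = rng.standard_normal((K, N)) / np.sqrt(K)              # random mixing matrix
R_T = R[:, visited]                                       # keep columns of visited states

# Effective sensing matrix and compressed observations for Eq. (29):
# only the rows of (I - gamma*P_hat) and entries of c_hat for visited states enter.
A = R_T @ ((np.eye(N) - gamma * P_hat)[visited, :] @ W)
b = R_T @ c_hat[visited]
```

A and b can then be passed to any sparse solver (LASSO, Dantzig selector, or the LS CS residual algorithm used in the numerical section).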

Our proof exploits arguments from [31] with appropriate tailoring to our framework. We begin with the definition of the properties we wish to show.

Definition 1

(Restricted Isometry Property): The observation matrix B is said to satisfy the restricted isometry property of order <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M114','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M114">View MathML</a> with parameter δS ∈ (0,1), i.e., RIP(S,δS), if

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M115','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M115">View MathML</a>

(30)

holds for all <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M116','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M116">View MathML</a> having no more than S non-zero entries. Note that B is a K × N matrix. RIP implies that B is approximately an isometry for S-sparse signals.
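The RIP condition can be probed empirically by evaluating the ratio ‖Bx‖²/‖x‖² on randomly drawn S-sparse vectors. Note that this only lower-bounds δS over the sampled vectors (certifying RIP exactly is combinatorial); the Gaussian matrix below is an illustrative stand-in for the effective sensing matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
K, N, S = 100, 256, 5
B = rng.standard_normal((K, N)) / np.sqrt(K)  # columns have unit norm in expectation

# Sample random S-sparse vectors and record how far B is from an isometry on them.
ratios = []
for _ in range(2000):
    x = np.zeros(N)
    x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)
    ratios.append(np.linalg.norm(B @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirical lower bound on the RIP constant delta_S of Eq. (30).
delta_hat = max(1.0 - min(ratios), max(ratios) - 1.0)
```

For K sufficiently large relative to S, the ratios concentrate around 1 and the empirical δS stays well below 1, which is the regime in which the recovery guarantees apply.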

We have the following result.

Theorem 2

The matrix RH(I − γP) does not satisfy RIP(S,δS) with the following probability bound,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M117','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M117">View MathML</a>

if <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M118','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M118">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M119','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M119">View MathML</a>. The matrix P is the transition probability matrix for a Markov chain formed by concatenating forward and backward counter-like and random walk sub-chains and γ < 1. The proof of Theorem 2 can be found in Appendix 2.

We observe that this result states that if the number of observations K is of order <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M120','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M120">View MathML</a> then RIP is satisfied with high probability as the network grows large. We contrast this with the more typical results seen in, say, channel estimation problems, where the order is O(S² log n). We remark that our result on the RIP is not limited to LASSO, but leads to the more general conclusion that sparse estimation algorithms can be used to approximate cost-to-go functions of wireless networks. Furthermore, the proof of Theorem 2 shows that an arbitrary concatenation of sub-chains does not affect the RIP in the limit of large wireless networks.

Numerical results

In this section, we present numerical results for an example of a wireless network to demonstrate the potential of the compressed sensing approach. We consider a wireless network where terminals store packets in a finite buffer of size Q and employ Automatic Retransmission reQuest (ARQ) to improve the packet delivery rate. Time is divided into slots of fixed duration. For the sake of simplicity, we assume that the transmission of a packet occurs within the duration of a time slot and that the channel coefficients in the various slots are i.i.d. Terminals with a non-empty buffer access the channel in a time slot with fixed probability α. The failure probability of a packet transmitted by a terminal is a function of the set of terminals concurrently transmitting in the same slot. Packet arrivals at the buffer of each terminal are modeled according to a Poisson process of intensity σ.

The FSM tracking the state of each individual terminal (see Figure 3) is composed of two sub-chains: a random walk-like sub-chain tracking the number of packets in the buffer (state space {0,1,…,Q}) and a forward counter-like sub-chain tracking the retransmission index of the packet being transmitted (state space {0,1,…,F}, where F is the maximum number of transmissions of a packet). An additional binary variable is added to the FSM to track transmission/idleness of the terminal. The FSM tracking the state of the overall network is the composition of the FSMs of the individual terminals. The transition probabilities of the Markov process determining the trajectory of the state of the network in the state space of the FSM are a function of the packet arrival rate, of the failure probability function and of the transmission probability α.
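A minimal simulation of the per-terminal FSM described above is sketched below, under simplifying assumptions: Bernoulli arrivals per slot in place of the Poisson process, and a fixed failure probability in place of the interference-dependent one. All parameter names are illustrative:

```python
import numpy as np

def simulate_terminal(T, Q=5, F=4, alpha=0.5, sigma=0.3, p_fail=0.4, seed=3):
    """Simulate one terminal's (buffer, retransmission-counter) sub-chain pair
    in a slotted ARQ system and return the set of visited (q, f) states.
    Bernoulli arrivals and a constant failure probability are simplifying
    assumptions; in the paper the failure probability depends on concurrent
    transmissions."""
    rng = np.random.default_rng(seed)
    q, f = 0, 0                                  # buffer occupancy, retransmission index
    visited = set()
    for _ in range(T):
        visited.add((q, f))
        if q > 0 and rng.random() < alpha:       # terminal accesses the channel
            if rng.random() < p_fail:            # transmission fails
                f += 1
                if f > F:                        # ARQ gives up: drop the packet
                    q -= 1; f = 0
            else:                                # success: packet delivered
                q -= 1; f = 0
        if q < Q and rng.random() < sigma:       # arrival (buffer not full)
            q += 1
    return visited

states = simulate_terminal(5000)
```

The visited pairs stay inside the product state space {0,…,Q} × {0,…,F}, matching the random walk and forward-counter sub-chains of Figure 3.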

Figure 3. Example of composition of sub-chains. FSM of a terminal in the considered network: a forward counter sub-chain (packet retransmission) and a random walk sub-chain (number of packets in the buffer). On the right-hand side, the fundamental connectivity structure of most of the states in the state space <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M121','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M121">View MathML</a>.

The cost function c measures the normalized cost in terms of throughput loss with respect to the saturation throughput achieved by the terminals in the absence of interference. In particular, the cost function is defined as the sum, over all terminals, of one minus the failure probability of the transmitted packets. Idleness is assigned a cost equal to 1.

For Q = 5, F = 4, and 2 terminals, the size of the state space is 1681. The transition matrix P is used to compute Psymm, defined in Equation (9), and the associated set of DW functions W [18]. DW basis sets are overcomplete. In order to keep complexity low, the columns of W are subsampled. In particular, we select 400 wavelet functions at different time scales.

Figures 4 and 5 depict the exact and reconstructed value function obtained with LASSO, mapped on the state-action space, and the sorted magnitudes of the coefficients x, respectively. In these figures, the exact vector c and transition matrix P are used in order to show the properties of the sparse reconstruction based on DW. An accurate approximation of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M122','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M122">View MathML</a> is achieved with approximately 15 active coefficients in x. This result shows that the temporal evolution of complex wireless networks can be effectively represented by a small number of wavelet functions capturing typical substructures in the graph.

Figure 4. Value function and its approximation. Value function (green) and its approximation (blue).

Figure 5. Magnitude of the coefficients of x.

Figure 6 plots the reconstruction error (the norm-2 of the difference between the real and reconstructed value functions, weighted by the steady-state distribution) achieved by LASSO as a function of T for different values of the sampling rate. The estimated transition probability matrix <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M123','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M123">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M124','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M124">View MathML</a> are used for the estimation of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M125','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M125">View MathML</a>. States are sampled by randomly eliminating rows (among the states visited by the sample path) of the cost vector <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M126','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M126">View MathML</a> and transition matrix <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M127','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M127">View MathML</a>. The legend reports the maximum number of states included in the Bellman residual. As expected, if only a few states are included in the estimation, LASSO achieves very poor performance, irrespective of the accuracy of the estimates of the cost vector and transition matrix.
However, if all the states are included in the Bellman residual, then poorly estimated states introduce rows affected by large noise in both <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M128','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M128">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M129','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M129">View MathML</a>. As shown in Section Perturbation analysis, noisy rows of P may destabilize the support of x and lead to poor reconstruction. Moreover, the performance of LASSO is very sensitive to the sampling rate, and it is unclear how to compute its optimal value.

Figure 6. Reconstruction error as a function of the time slot achieved by LASSO with different sampling rates. The number in the legend corresponds to K, that is, the number of states used in the reconstruction.

Figure 7 depicts the reconstruction error achieved by the LS CS-based framework and that of standard Q-learning [10] as a function of the length of the observed sample path. All states visited by the process are included in the Bellman residual. To improve stability, we used the LS CS algorithm to generate this plot. LS CS correlates x(T) with x(T−1) by constraining changes in the support of the representation vector. Interested readers are referred to [15] for a detailed description and performance characterization of the algorithm.
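A simplified sketch of the support-then-least-squares idea behind LS CS follows: first detect the support of x from a thresholded sparse estimate, then re-solve by unbiased least squares restricted to that support. The matrices below are synthetic stand-ins (not the paper's Bellman-residual system), and the thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, s = 60, 120, 8
A = rng.standard_normal((K, n)) / np.sqrt(K)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 2.0 + rng.random(s)  # strong coefficients
y = A @ x_true + 0.01 * rng.standard_normal(K)                 # noisy observations

# step 1: crude sparse (CS) estimate via ISTA, kept short on purpose
lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x_cs = np.zeros(n)
for _ in range(2000):
    z = x_cs - step * (A.T @ (A @ x_cs - y))
    x_cs = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# step 2: least squares restricted to the detected support removes the
# shrinkage bias of the sparse estimate
S = np.flatnonzero(np.abs(x_cs) > 0.1)
x_ls = np.zeros(n)
x_ls[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
err_ls = np.linalg.norm(x_ls - x_true) / np.linalg.norm(x_true)
```

The full LS CS algorithm of [15] additionally tracks the support across time steps, constraining how it may change from x(T−1) to x(T); the sketch shows only a single time step.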

Figure 7. Reconstruction error as a function of time. Comparison between the reconstruction error as a function of the time slot achieved by the proposed algorithm and Q-learning. All states visited by the sample path are used in the estimation.

The proposed algorithm achieves considerable accuracy in the estimation of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M130','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M130">View MathML</a> after a small number of state-cost observations, whereas standard learning converges slowly to <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M131','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M131">View MathML</a>. Moreover, the solution is extremely stable, and the LS CS-based algorithm appears to be very robust to estimation noise.
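For context, the slow tabular baseline can be sketched as TD(0)-style online estimation of a cost-to-go function on a small synthetic chain (discounting is assumed here for simplicity). Every state must be revisited many times before the estimate converges, which is what makes the tabular approach impractical on the ~2000-state FSMs considered above; the chain, costs, discount, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma, alpha = 20, 0.9, 0.05
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)        # random dense stochastic matrix
c = rng.random(n)                        # per-state cost
v_exact = np.linalg.solve(np.eye(n) - gamma * P, c)  # exact cost-to-go

v = np.zeros(n)
s = 0
for _ in range(100000):                  # one long observed sample path
    s_next = rng.choice(n, p=P[s])
    v[s] += alpha * (c[s] + gamma * v[s_next] - v[s])  # TD(0) update
    s = s_next

rel_err = np.linalg.norm(v - v_exact) / np.linalg.norm(v_exact)
```

Even on this 20-state toy chain, on the order of 10^5 state-cost observations are consumed; the CS-based method sidesteps this by estimating only the few active wavelet coefficients.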

Conclusions

A novel framework for the online estimation of cost-to-go functions in wireless networks was proposed. We showed that the inherent regular and local structure of the graph associated with the FSM modeling the operations of wireless networks enables the sparse representation of cost-to-go functions. Our analysis, based on the decomposition of the overall graph into fundamental smaller structures, connects the structure of the FSM to the RIP of the transition probability matrix. Numerical results show that sparse approximation and projection onto DW basis sets enable a considerable reduction in the number of observations needed to estimate cost-to-go functions in wireless networks, and have the potential to make online learning practical in this context.

Endnotes

aThe incoherence of the transition matrix is connected to the magnitude of the inner products of its columns.

bNote that c(s,s′) can be generalized to be a random variable. In this case, the expectation is over all the possible values of c(s,s′).

cControl can be included in the model by defining statistics and cost functions conditioned on a control action.

dThe indexing in the vector is based on a univocal map between <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M132','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M132">View MathML</a> and {1,2,3,…,N}.

eNote that this assumption does not reduce the applicability of the proposed algorithm. In fact, the connectivity structure of the transition matrix is determined by standard protocols that are shared and known by all the nodes.

fBy not allowed, we mean that the state transition has probability equal to zero for any set of parameters.

gWe note that numerical evaluations of incoherence for many typical Markov chains have revealed that incoherence holds on average.

hIn this analysis we assume that W is an orthonormal set of basis functions. We are aware that DW are overcomplete and, thus, W is not an orthonormal basis set. The design of orthonormal wavelet bases tailored to FSMs modeling wireless networks is an important research direction.

Appendix 1

Proof of Theorem 1

Fix the index k in the support <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M133','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M133">View MathML</a> of x and define

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M134','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M134">View MathML</a>

(31)

and the vector

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M135','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M135">View MathML</a>

(32)

with ∥u∥2 = 1 and

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M136','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M136">View MathML</a>

(33)

If

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M137','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M137">View MathML</a>

(34)

and

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M138','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M138">View MathML</a>

(35)

then

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M139','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M139">View MathML</a>

(36)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M140','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M140">View MathML</a>

(37)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M141','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M141">View MathML</a>

(38)

Therefore, if the conditions (34) and (35) hold, then due to Theorem 5 in [27], we have yk∗ = 0 and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M142','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M142">View MathML</a>.

Define

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M143','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M143">View MathML</a>

(39)

where

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M144','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M144">View MathML</a>

(40)

Then, we find u that maximizes the left-hand side of (35) as the solution of

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M145','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M145">View MathML</a>

(41)

Let M = QSZ be the singular value decomposition of M; then, (41) is equal to

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M146','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M146">View MathML</a>

(42)

where <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M147','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M147">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M148','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M148">View MathML</a>. Note that

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M149','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M149">View MathML</a>

(43)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M150','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M150">View MathML</a>

(44)

where <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M151','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M151">View MathML</a>. Then, from

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M152','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M152">View MathML</a>

(45)

and using the Schwarz inequality we obtain

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M153','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M153">View MathML</a>

(46)

where the equality holds if <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M154','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M154">View MathML</a>. Therefore,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M155','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M155">View MathML</a>

(47)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M156','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M156">View MathML</a>

(48)

Appendix 2

Proof of Theorem 2

In this appendix, we prove the result on the minimum number of observed states needed for perfect reconstruction of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M157','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M157">View MathML</a>. We first state the following lemma:

Lemma 1

(Geršgorin) The eigenvalues of an m × m matrix G all lie in the union of the m discs di = di(ci,ri), i = 1,2,…,m, centered at ci = Gii and with radius

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M158','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M158">View MathML</a>

(49)

We will apply Geršgorin’s lemma to the following Gram matrix,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M159','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M159">View MathML</a>

(50)

For the sake of exposition, we assume that R(T) is a K × n matrix whose components are drawn i.i.d. from a binary distribution, i.e., <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M160','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M160">View MathML</a> with probability <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M161','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M161">View MathML</a>; thus, the R(T)ik are zero mean and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M162','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M162">View MathML</a>. Other properly constrained distributions for R(T) can be handled. The other matrices specified in (50) are square and of dimension n × n.
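The two ingredients of the proof can be checked numerically: a binary random projection R with i.i.d. ±1/√K entries, and Geršgorin's lemma applied to the resulting Gram matrix. In the sketch below, B is a generic synthetic matrix standing in for the paper's combination of the transition matrix and the wavelet basis, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, m = 50, 80, 30
# binary random projection: i.i.d. entries +/- 1/sqrt(K), zero mean
R = rng.choice([-1.0, 1.0], size=(K, n)) / np.sqrt(K)
B = rng.standard_normal((n, m)) / np.sqrt(n)   # generic stand-in matrix
G = (R @ B).T @ (R @ B)                        # Gram matrix of the projected columns

# Gersgorin: every eigenvalue of G lies in some disc centred at G_ii,
# with radius the sum of the off-diagonal magnitudes in row i
eigs = np.linalg.eigvalsh(G)
centres = np.diag(G)
radii = np.abs(G).sum(axis=1) - np.abs(centres)
covered = [any(abs(lam - centres[i]) <= radii[i] + 1e-9 for i in range(m))
           for lam in eigs]
```

Bounding the diagonal entries near 1 and the off-diagonal entries (disc radii) near 0, as done below via McDiarmid's inequality, is exactly what forces all eigenvalues of G into a narrow band around 1, i.e., the RIP.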

We shall show that every element of the Gram matrix G is bounded as follows, where mij = E[Gij],

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M163','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M163">View MathML</a>

(51)

The dimension of the state space, <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M164','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M164">View MathML</a>, is approximately <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M165','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M165">View MathML</a>; thus, from the analysis of the inner products/norms of the columns of the transition matrix (see Equations (14)–(16)), we find that <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M166','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M166">View MathML</a> and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M167','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M167">View MathML</a>, where pi is the ith column of P. We note that α < 1 for all three sub-chain structures examined, and thus <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M168','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M168">View MathML</a>. In fact, if we concatenate several sub-chains we have that <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M169','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M169">View MathML</a>, where |αd| < 1. Thus, E[∥pi∥2] diminishes, and eventually vanishes, as the number of concatenated sub-chains increases.
Additionally, <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M170','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M170">View MathML</a>. Thus we can show that,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M171','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M171">View MathML</a>

(52)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M172','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M172">View MathML</a>

(53)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M173','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M173">View MathML</a>

(54)

where bi is the ith column of B and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M174','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M174">View MathML</a>. The equality (a) follows from the independence of the probability transition matrix P and the random projection matrix R. Equations (52) and (54) follow from Equations (13), (14)–(16), and (18)–(20). The next inequality we need is the following.

Lemma 2

(McDiarmid) Consider independent random variables <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M175','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M175">View MathML</a> and a function <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M176','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M176">View MathML</a>. If for all i ∈ {1,…,m} and for all <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M177','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M177">View MathML</a>, the function υ satisfies

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M178','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M178">View MathML</a>

(55)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M179','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M179">View MathML</a>

(56)

For our functions of interest, m = n². We let R′ = R + Δ, where Δj,l = 0 except for j = j0, l = l0, i.e., <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M180','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M180">View MathML</a> with probability <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M181','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M181">View MathML</a>, due to our assumptions on R. For clarity, we have dropped the subscript T on R(T). Thus, for the diagonal elements of the Gram matrix, Gii = υii(R) for some R,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M182','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M182">View MathML</a>

We have that <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M183','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M183">View MathML</a>, where δK(·) is the Kronecker delta function, and <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M184','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M184">View MathML</a>. We set <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M185','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M185">View MathML</a>, and observe from the statement of McDiarmid’s inequalities that we need to evaluate

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M186','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M186">View MathML</a>

The same bound holds for the off-diagonal elements of the Gram matrix. Using these values of <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M187','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M187">View MathML</a> we invoke McDiarmid’s inequality to show the following, wherein (a) and (b) follow from union bound arguments and (c) follows from setting <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M188','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M188">View MathML</a>

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M189','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M189">View MathML</a>

(57)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M190','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M190">View MathML</a>

(58)

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M191','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M191">View MathML</a>

(59)

Then,

<a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M192','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M192">View MathML</a>

(60)

Equation (60) can be manipulated to yield the following relationship between the number of samples K and the size of the logical network n: the RIP holds with high probability if <a onClick="popup('http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M193','MathML',630,470);return false;" target="_blank" href="http://jwcn.eurasipjournals.com/content/2012/1/278/mathml/M193">View MathML</a>, where c1 is a constant selected to ensure that the denominator of the previous expression is positive, and the theorem follows.

Competing interests

The authors declare that they have no competing interests.

Acknowledgements

The study was supported by AFOSR under grants FA9550-08-0480 and FA9550-12-1-0215 and by the National Science Foundation (NSF) under Grant CCF-0917343.

References

  1. G Bianchi, Performance analysis of the IEEE 802.11 distributed coordination function. IEEE J. Sel. Areas Commun. 18(3), 535–547 (2000)

  2. A Konrad, B Zhao, A Joseph, R Ludwig, A Markov-based channel model algorithm for wireless networks. Wirel. Netw. 9(3), 189–199 (2003)

  3. H Wu, Y Peng, K Long, S Cheng, J Ma, Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. Proceedings of IEEE INFOCOM 2002, vol. 2 (New York, USA, June 23–27, 2002), pp. 599–607

  4. M Zorzi, RR Rao, On the use of renewal theory in the analysis of ARQ protocols. IEEE Trans. Commun. 44(9), 1077–1081 (1996)

  5. L Badia, M Levorato, M Zorzi, Markov analysis of selective repeat type II hybrid ARQ using block codes. IEEE Trans. Commun. 56(9), 1434–1441 (2008)

  6. E Modiano, An adaptive algorithm for optimizing the packet size used in wireless ARQ protocols. Wirel. Netw. 5(4), 279–286 (1999)

  7. H Zhai, Y Kwon, Y Fang, Performance analysis of IEEE 802.11 MAC protocols in wireless LANs. Wirel. Commun. Mob. Comput. 4(8), 917–931 (2004)

  8. H Su, X Zhang, Cross-layer based opportunistic MAC protocols for QoS provisionings over cognitive radio wireless networks. IEEE J. Sel. Areas Commun. 26, 118–129 (2008)

  9. M Dianati, X Ling, K Naik, X Shen, A node-cooperative ARQ scheme for wireless ad hoc networks. IEEE Trans. Veh. Technol. 55(3), 1032–1044 (2006)

  10. DP Bertsekas, Dynamic Programming and Optimal Control (Athena Scientific, Belmont, MA, 2001)

  11. S Mahadevan, Average reward reinforcement learning: foundations, algorithms, and empirical results. Mach. Learn. 22, 159–195 (1996)

  12. A Schwartz, A reinforcement learning method for maximizing undiscounted rewards. Proceedings of the Tenth International Conference on Machine Learning (Amherst, Massachusetts, 1993), pp. 298–305

  13. F Fu, MVD Schaar, Structure-aware stochastic control for transmission technology. ArXiv preprint arXiv:1003.2471 (2010)

  14. W Chen, D Huang, A Kulkarni, J Unnikrishnan, Q Zhu, P Mehta, S Meyn, A Wierman, Approximate dynamic programming using fluid and diffusion approximations with applications to power management. Proceedings of the 48th IEEE Conference on Decision and Control (Shanghai, China, December 16–18, 2009), pp. 3575–3580

  15. N Vaswani, LS-CS-residual (LS-CS): compressive sensing on least squares residual. IEEE Trans. Signal Process. 58(8), 4108–4120 (2010)

  16. E Candes, MB Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008)

  17. M Maggioni, S Mahadevan, A multiscale framework for Markov decision processes using diffusion wavelets (2006). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.74.8956&rep=rep1&type=pdf

  18. RR Coifman, M Maggioni, Diffusion wavelets. Appl. Comput. Harmonic Anal. 21, 53–94 (2006)

  19. M Crovella, E Kolaczyk, Graph wavelets for spatial traffic analysis. Proceedings of IEEE INFOCOM 2003, vol. 3 (San Francisco, CA, USA, March 30–April 3, 2003), pp. 1848–1857

  20. M Firooz, S Roy, Network tomography via compressed sensing. Proceedings of IEEE GLOBECOM 2010 (Miami, FL, USA, December 6–10, 2010), pp. 1–5

  21. Y Chen, D Bindel, RH Katz, Tomography-based overlay network monitoring. Proceedings of the 3rd ACM SIGCOMM Conference on Internet Measurement (Miami Beach, FL, USA, August 25–29, 2003), pp. 216–231

  22. J Haupt, WU Bajwa, M Rabbat, R Nowak, Compressed sensing for networked data. IEEE Signal Process. Mag. 25(2), 92–101 (2008)

  23. M Wang, W Xu, E Mallada, A Tang, Sparse recovery with graph constraints: fundamental limits and measurement construction. ArXiv preprint arXiv:1108.0443 (2011); to appear in Proceedings of IEEE INFOCOM 2012

  24. W Xu, E Mallada, A Tang, Compressive sensing over graphs. Proceedings of the 30th IEEE International Conference on Computer Communications (IEEE INFOCOM 2011) (Shanghai, China, April 10–15, 2011), pp. 2087–2095

  25. C Sherlaw-Johnson, S Gallivan, J Burridge, Estimating a Markov transition matrix from observational data. J. Oper. Res. Soc. 46(3), 405–410 (1995)

  26. R Tibshirani, Regression shrinkage and selection via the Lasso. J. Royal Stat. Soc. Ser. B 58, 267–288 (1996)

  27. H Xu, C Caramanis, S Mannor, Robust regression and Lasso. IEEE Trans. Inf. Theory 56(7), 3561–3574 (2010)

  28. E Candes, T Tao, The Dantzig selector: statistical estimation when p is much larger than n. Annals Stat. 35(6), 2313–2351 (2007)

  29. CH Zhang, J Huang, The sparsity and bias of the Lasso selection in high-dimensional linear regression. Annals Stat. 36(4), 1567–1594 (2008)

  30. T Zhang, Some sharp performance bounds for least squares regression with L1 regularization. Annals Stat. 37(5A), 2109–2144 (2009)

  31. J Haupt, W Bajwa, G Raz, R Nowak, Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Trans. Inf. Theory 56(11), 5862–5875 (2010)