Regular Track

Session Regular-4

Resource Allocation, Scheduling, Wireless Networks II

Oct 20 Wed, 10:30 AM — 12:30 PM EDT

Experimental measurement of the capacity region of wireless networks

Yiannis Thomas, Nikolaos Smyrnioudis and Stavros Toumpis (Athens University of Economics and Business, Greece)

We present a method for experimentally measuring the capacity region of a wireless network, defined here as the set of all combinations of link transmission rates that are simultaneously achievable. Because of the inherent complexity of the problem, measuring the capacity region exactly typically requires a prohibitively large number of experiments. Our method is therefore based on a judicious, tunable algorithm, employing online machine learning, that aims to reduce the number of experiments conducted without significantly compromising the accuracy with which the capacity region is established. To evaluate the efficiency of the method, and to demonstrate the usefulness of having the capacity region available, we apply it to a network of Raspberry Pi nodes placed inside a building and communicating over IEEE 802.11. We then compare the performance of an optimal, centrally organized communication scheme, computed by using the measured capacity region within a network optimization formulation, with that of a simple communication scheme based on multihop TCP flows.

Distributed Learning for Proportional-Fair Resource Allocation in Coexisting WiFi Networks

Piotr Gawlowicz (Technical University of Berlin, Germany), Jean Walrand (University of California, Berkeley, USA) and Adam Wolisz (Technical University of Berlin, Germany)

WiFi networks suffer from severe network utility degradation due to the usage of diverse modulation and coding schemes. Proportional-fair allocation, which has been shown to be a good remedy, can be enforced through proper selection of contention-window values. So far this has been achieved for centralized systems, either by explicitly solving an optimization problem or, as proposed recently, by following a learning-based approach. In this paper, we present the first fully distributed solution, in which each WiFi node independently tunes its contention window to achieve proportional fairness. Our solution is therefore also applicable to a set of collocated, unconnected WiFi networks. We compare the throughput and airtime allocation achieved by this algorithm with the values achieved by standard WiFi binary exponential backoff and by known centralized algorithms.
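As background for why contention-window tuning can help (a minimal sketch, not the paper's algorithm; the rates and shares below are made-up illustrative values): with saturated nodes transmitting at fixed PHY rates r_i, proportional fairness maximizes the sum of log throughputs over airtime shares a_i with sum a_i = 1. Because the rates enter the objective only as additive constants log r_i, equal airtime is optimal regardless of rate diversity:

```python
import math

def pf_utility(shares, rates):
    """Proportional-fair objective: sum of log throughputs, where
    node i's throughput is its airtime share times its PHY rate."""
    return sum(math.log(a * r) for a, r in zip(shares, rates))

rates = [54.0, 6.0, 1.0]    # heterogeneous PHY rates (Mbit/s), hypothetical
equal = [1 / 3] * 3         # equal airtime shares (the PF optimum)
skewed = [0.5, 0.3, 0.2]    # any unequal split scores strictly lower
```

This is why schemes that equalize airtime, e.g. via contention-window selection, restore the utility that rate diversity degrades under throughput-fair access.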

Optimal Precoder Design for MIMO-OFDM-based Joint Automotive Radar-Communication Networks

Ceyhun Deniz Ozkaptan, Eylem Ekici (The Ohio State University, USA), Chang-Heng Wang and Onur Altintas (Toyota Motor North America, USA)

Large-scale deployment of connected vehicles with cooperative awareness technologies increases the demand for vehicle-to-everything (V2X) communication spectrum in the 5.9 GHz band, which is mainly allocated for the exchange of safety messages. To supplement V2X communication and support the high data rates needed by broadband applications, the millimeter-wave (mmWave) automotive radar spectrum at 76-81 GHz can be utilized. For this purpose, joint radar-communication systems have been proposed in the literature to perform both functions using the same waveform and hardware. While multiple-input multiple-output (MIMO) communication with multiple users enables independent data streaming for high throughput, MIMO radar processing provides high-resolution imaging that is crucial for safety-critical systems. However, employing conventional precoding methods designed for communication generates directional beams that impair MIMO radar imaging and target-tracking capabilities during data streaming. In this paper, we propose a MIMO joint automotive radar-communication (JARC) framework based on the orthogonal frequency division multiplexing (OFDM) waveform. First, we show that the MIMO-OFDM preamble can be exploited both for MIMO radar processing and for estimation of the communication channel. Then, we propose an optimal precoder design method that enables high-accuracy target tracking while transmitting independent data streams to multiple receivers. The proposed methods provide high-resolution radar imaging and high-throughput capabilities for MIMO JARC networks. Finally, we evaluate the efficacy of the proposed methods through numerical simulations.

BEAMWAVE: Cross-Layer Beamforming and Scheduling for Superimposed Transmissions in Industrial IoT mmWave Networks

Luis F. Abanto-Leon, Matthias Hollick and Gek Hong Sim (Technische Universität Darmstadt, Germany)

The omnipresence of IoT devices in Industry 4.0 is expected to foster higher reliability, safety, and efficiency. However, interconnecting a large number of wireless devices without jeopardizing system performance proves challenging. To address the requirements of future industries, we investigate the cross-layer design of beamforming and scheduling for layered-division multiplexing (LDM) systems in millimeter-wave bands. Scheduling is crucial, as the number of devices in industrial settings is expected to proliferate rapidly, and highly performant beamforming is necessary to ensure scalability. By adopting LDM, multiple transmissions can be non-orthogonally superimposed. Specifically, we consider a superior-importance control multicast message that must reach all devices and inferior-importance private unicast messages targeting a subset of scheduled devices. Since the joint problem is NP-hard, we propose BEAMWAVE, which decomposes it into beamforming and scheduling subproblems. Through simulations, we show that BEAMWAVE attains near-optimality and outperforms other competing schemes.

Opportunistic Overlapping: Joint scheduling of uplink URLLC/eMBB traffic in NOMA-based Wireless Systems

Arjun Anand (Intel Labs, USA), Gustavo De Veciana (The University of Texas at Austin, USA), Derya Malak (Rensselaer Polytechnic Institute, USA), Ayman Elezabi (The American University in Cairo, Egypt) and Aniruddh Venkatakrishnan (The University of Texas at Austin, USA)

We consider the joint scheduling of uplink URLLC and eMBB user traffic in a cellular system. The central challenge is coordinating uplink transmissions of URLLC users whose traffic requires extremely low latency, i.e., scheduling at timescales smaller than the typically used frame boundaries. To avoid collisions and minimize access delay, we propose to pre-optimize layouts of non-overlapping transmission opportunities for URLLC users, which may or may not be used depending on their traffic. To increase efficiency, we propose to leverage Non-Orthogonal Multiple Access (NOMA) based opportunistic scheduling of overlapping eMBB user traffic, accounting both for channel variations and for the power control policy required to protect possible URLLC transmissions from overlapping eMBB traffic. We propose a simple, practical URLLC layout and an opportunistic overlapping scheduler which, subject to meeting URLLC users' requirements for timely access to transmission opportunities, optimizes the eMBB users' sum utility. Extensive discrete-event simulations were conducted to explore the performance impact of system parameters associated with URLLC traffic requirements, eMBB power control, etc. Depending on the traffic scenario, we show gains of 5-45% in sum eMBB throughput and/or 5th-percentile throughput relative to an orthogonal multiple access baseline.

Session Chair

Stavros Toumpis (Athens University of Economics and Business, Greece)

Session Regular-5

Age of Information

Oct 20 Wed, 2:00 PM — 4:00 PM EDT

Age of Sensed Information in a Cognitive Radio Network

Clement Kai-Ming Kam, Sastry Kompella (U.S. Naval Research Laboratory, USA) and Anthony Ephremides (University of Maryland, USA)

Age of information is often studied as a primary objective to be optimized, but for problems where age is not the primary objective, it can still have a major role that can be utilized. This work studies a two-user, single-channel cognitive radio network, where the primary user's transmit/idle dynamics are modeled as a binary Markov chain, and the secondary user decides to either sense or transmit. Under this setup, the age of the information sensed by the secondary user has a direct impact on its performance. The secondary user aims to maximize its throughput subject to a constraint on the probability of collision experienced by the primary. Using the Markov chain model of the primary user, the secondary user decides on its transmission and sensing strategy based on the estimated evolution of the primary user transmission state. For a stationary randomized transmission policy that depends on the sensed state, we derive the secondary throughput and the collision probability. Due to the complexity of the resulting expressions, we develop an alternative formulation of the problem by recognizing that the throughput and collision probability are functions of the age of each type of sensed information. Therefore, we transform the problem by converting the randomized policy to its induced age distribution function. As a result, the age distribution-based formulation results in a linear program, which can be solved efficiently. We include numerical results and simulations, and discuss the role of the age distribution and other related qualities of the information.

When To Pull Data for Minimum Age Penalty

Orhan Tahir Yavaşcan, Elif Tugce Ceran, Zeynep Cakir (METU, Turkey), Onur Kaya (Isik University, Turkey) and Elif Uysal (METU, Turkey)

A communication receiver that wants to pull data from a remote sensor by exploiting wireless energy transfer is considered. The receiver has a long-term average energy budget for this operation, and its goal is to keep the time average of a general age penalty function as small as possible. The channel from the source to the receiver is a two-state (ON/OFF) communication link whose state is IID or Markovian, and is known instantaneously by the receiver. Modeling the problem as a constrained Markov decision process, we obtain a randomized threshold-based decision policy that achieves the minimum possible average age penalty. We determine the optimal time-average Age of Information and age violation probabilities by exploiting the optimality of the derived policy.
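A minimal discrete-time sketch of the kind of threshold policy studied here (illustrative only; the parameters are hypothetical, the channel is i.i.d. ON/OFF, and the age penalty is taken to be the age itself): the receiver pulls an update, at unit energy cost, only when the channel is ON and the current age has reached a threshold tau, trading average age against pull (energy) rate:

```python
import random

def simulate_threshold_pull(p_on, tau, horizon=100_000, seed=1):
    """Pull only when age >= tau and the ON/OFF channel (ON w.p. p_on,
    i.i.d. across slots) is ON; a successful pull resets the age to 1.
    Returns (time-average age, fraction of slots with a pull)."""
    rng = random.Random(seed)
    age, total_age, pulls = 1, 0, 0
    for _ in range(horizon):
        if age >= tau and rng.random() < p_on:
            pulls += 1
            age = 1          # fresh sample delivered
        else:
            age += 1         # no pull, or channel OFF
        total_age += age
    return total_age / horizon, pulls / horizon
```

Raising tau lowers the pull (energy) rate at the cost of a higher average age; the paper's contribution is choosing a randomized threshold optimally under an explicit energy budget and a general age penalty.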

Online Energy Minimization Under A Peak Age of Information Constraint

Kumar Saurav and Rahul Vaze (Tata Institute of Fundamental Research, Mumbai, India)

We consider a node where packets of fixed size are generated at arbitrary intervals. The node is required to maintain the peak age of information (AoI) at the monitor below a threshold by transmitting potentially only a subset of the generated packets. At any time, depending on packet availability and current AoI, the node can choose the packet to transmit, and its transmission speed. We consider a power function (rate of energy consumption) that is increasing and convex in transmission speed, and the objective is to minimize the energy consumption so as to satisfy the peak AoI constraint at all times. For this problem, we propose a (customized) greedy policy and derive an upper bound on its competitive ratio (CR) that depends on the power function, but is independent of the packet generation times as well as the time horizon. We also derive a lower bound on the CR of all causal policies, and show that the dependence of the CR of the proposed greedy policy on the system parameters (such as packet size, peak AoI and power function) is similar to that of an optimal causal policy.

Age of Information Minimization with Power and Distortion Constraints in Multiple Access Channels

Gagan G B, Jayanth S and Rajshekhar V Bhat (IIT Dharwad, India)

Emerging fifth-generation and beyond networks are expected to deliver accurate information that is as fresh as possible. In this work, we consider a wireless fading multiple access channel in which M users communicate with a base station (BS) in a time-slotted system. Each user can sample an information packet in any slot of interest, compress it to a finite number of bits, and then transmit the compressed packet to the BS, where the compression and transmission result in distortion and power consumption, respectively. Using the age of information (AoI) metric to quantify freshness, we consider minimization of a long-term weighted-average AoI across the users, subject to average power and distortion constraints at each user, in order to obtain the number of bits a user should transmit in a given slot. We cast the problem as a constrained Markov decision process (CMDP) and solve it via Lagrange relaxation. We show that a threshold-type policy is optimal for the relaxed problem. We also propose a convex optimization problem to obtain a suboptimal but simpler stationary randomized policy, whose minimum achievable average AoI is within twice that of the optimal policy. Via numerical simulations, we illustrate the threshold structure of the CMDP-based optimal solution and study the variation of the average AoI achieved by the proposed policies as the bounds in the average power and distortion constraints are varied.

Distributional Properties of Age of Information in Energy Harvesting Status Update Systems

Mohamed A. Abd-Elmagid and Harpreet S. Dhillon (Virginia Tech, USA)

This paper considers an energy harvesting (EH) real-time status update system in which an EH-powered transmitter node sends status updates about some physical process of interest to a destination node. The status update and harvested energy packets are assumed to arrive at the transmitter according to independent Poisson processes, and the service time of each status update is assumed to be exponentially distributed. We quantify the freshness of status updates when they reach the destination using the concept of Age of Information (AoI). Unlike most existing analyses of AoI, which focus on characterizing its average value when the transmitter has a reliable energy source and is hence not powered by EH (henceforth referred to as a non-EH transmitter), our analysis is focused on understanding the distributional properties of AoI through the characterization of its moment generating function (MGF). In particular, we use the stochastic hybrid systems (SHS) framework to derive closed-form expressions for the MGF of AoI under both non-preemptive and preemptive-in-service queueing disciplines at the transmitter. We demonstrate the generality of this analysis by recovering several known results for the corresponding system with a non-EH transmitter as special cases of the new results. Our numerical results verify the analytical findings and demonstrate the necessity of incorporating the higher moments of AoI in the implementation and optimization of real-time status update systems, rather than relying only on its average value.

Session Chair

Sastry Kompella (U.S. Naval Research Laboratory, USA)

Session Regular-6


Oct 20 Wed, 4:30 PM — 6:30 PM EDT

A Coupon Collector based approximation for LRU cache hits under Zipf requests

Pawan Poojary (Northwestern University, USA), Sharayu Moharir (Indian Institute of Technology Bombay, India) and Krishna Jagannathan (Indian Institute of Technology Madras, India)

The Least Recently Used (LRU) policy is widely used in caching, since it is computationally inexpensive and can be implemented 'on-the-fly.' However, existing analyses of content-wise hit-rates under LRU have expressions whose complexity grows rapidly with the buffer size. In this paper, we derive a simple yet accurate approximation for the LRU content-wise hit-rates under Zipf-distributed requests, in the regime of a large content population. To this end, we map the characteristic time of a content in the LRU policy to the classical Coupon Collector's Problem (CCP). We justify the accuracy of these approximations by showing analytically that the characteristic time concentrates sharply around its mean. Our bounds highlight and quantify the impact of cache-size scaling as well as the variations in content popularity on the accuracy of the hit-rate estimates. Specifically, we show that these estimates become more accurate with a decrease in Zipf parameter β, or an increase in the cache-size scaling. Finally, our analysis of the CCP with Zipf-distributed coupons could be of independent interest.
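The characteristic-time idea can be sketched numerically (an illustrative sketch of the folklore characteristic-time, or Che, approximation; the function names and parameters are ours, and the paper's contribution is the CCP mapping and concentration analysis, not this recipe): the characteristic time T solves Σ_i (1 − exp(−p_i T)) = C, and content i's hit-rate is then approximated by 1 − exp(−p_i T):

```python
import math
import random
from collections import OrderedDict
from itertools import accumulate

def zipf_probs(n, beta):
    """Zipf(beta) request probabilities over n contents."""
    w = [1 / (i + 1) ** beta for i in range(n)]
    s = sum(w)
    return [x / s for x in w]

def lru_hit_rate(probs, cache_size, n_requests=50_000, seed=0):
    """Empirical overall hit-rate of an LRU cache under i.i.d. requests."""
    rng = random.Random(seed)
    cum = list(accumulate(probs))
    requests = rng.choices(range(len(probs)), cum_weights=cum, k=n_requests)
    cache, hits = OrderedDict(), 0
    for item in requests:
        if item in cache:
            cache.move_to_end(item)     # refresh recency
            hits += 1
        else:
            cache[item] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / n_requests

def che_hit_rate(probs, cache_size):
    """Characteristic-time approximation: solve sum_i (1 - e^{-p_i T}) = C
    for T by bisection, then popularity-weight the per-content hit-rates."""
    lo, hi = 0.0, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(1 - math.exp(-p * mid) for p in probs) < cache_size:
            lo = mid
        else:
            hi = mid
    T = (lo + hi) / 2
    return sum(p * (1 - math.exp(-p * T)) for p in probs)
```

For moderate populations and Zipf demand the two quantities agree closely, which is the behavior the paper's concentration bounds explain and quantify.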

Memory-Rate Tradeoff for Decentralized Caching under Nonuniform File Popularity

Yong Deng and Min Dong (Ontario Tech University, Canada)

We study the memory-rate tradeoff for decentralized caching under nonuniform file popularity. We formulate the cache placement optimization problem for a recently proposed decentralized modified coded caching scheme (D-MCCS) to minimize the average rate. To solve this non-convex optimization problem, we develop two algorithms: a successive Geometric Programming (GP) approximation algorithm, which guarantees convergence to a stationary point but has a high computational complexity, and a low-complexity approach based on a two-file-group-based placement strategy. We further propose a lower bound on the average rate for decentralized caching under nonuniform file popularity. The lower bound is given as a non-convex optimization problem, for which we propose a similar successive GP approximation algorithm to compute a stationary point. We show that the optimized MCCS attains the lower bound in the special case of no more than two active users requesting files, or in the general case when a special condition is satisfied. Thus, the optimized MCCS characterizes the exact memory-rate tradeoff for decentralized caching in these cases. In general, our numerical results show that the optimized D-MCCS performs close to the lower bound.

Optimal Load-Splitting and Distributed-Caching for Dynamic Content

Bahman Abolhassani (The Ohio State University, USA), John Tadrous (Gonzaga University, USA) and Atilla Eryilmaz (The Ohio State University, USA)

In this work, we consider the problem of 'fresh' caching, at distributed (front-end) local caches, of content that is subject to 'dynamic' updates at the (back-end) database. We first provide new models and analyses of the average operational cost of a network of distributed edge-caches that utilizes wireless multicast to refresh aging content. We attack the problems of what to cache in each edge-cache and how to split the incoming demand among them (called "load-splitting" in the rest of the paper) in order to minimize the operational cost. While the general form of the problem contains an NP-hard Knapsack subproblem, we are able to solve it completely by judiciously choosing the number of edge-caches deployed over the network. Interestingly, our findings reveal that the optimal caching policy necessitates unequal load-splitting over the edge-caches even when all conditions are symmetric. Moreover, we find that edge-caches with higher load generally cache fewer but relatively more popular contents. We further investigate the tradeoffs between cost reduction and cache savings when employing equal and optimal load-splitting for demand with a Zipf(z) popularity distribution. Our analysis reveals that for less predictable demand (z < 2), equal load-splitting over the edge-caches achieves close-to-optimal cost while also saving in cache size. On the other hand, for more predictable demand (z > 2), optimal load-splitting yields substantial cost gains while decreasing cache occupancy.

Session Chair

Min Dong (Ontario Tech University, Canada)

