The 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt 2020)

The 4th International Workshop on Content Caching and Delivery in Wireless Networks (CCDWN)

Session CCDWN-Papers


Conference
12:01 AM — 11:59 PM EEST
Local
Jun 18 Thu, 5:01 PM — 4:59 PM EDT

Age of Information Aware Cache Updating with File- and Age-Dependent Update Durations

Haoyue Tang (Beijing National Research Center for Information Science and Technology), Philippe Ciblat (Telecom Paris, Institut Polytechnique de Paris, Paris, France), Jintao Wang (Beijing National Research Center for Information Science and Technology and Research Institute of Tsinghua University in Shenzhen, Shenzhen, China), Michele Wigger (Telecom Paris, Institut Polytechnique de Paris, Paris, France), Roy Yates (Rutgers University, New Brunswick, NJ, USA)

We consider a system consisting of a library of time-varying files, a server that at all times observes the current version of all files, and a cache that initially stores the current versions of all files but afterwards has to update these files from the server. Unlike previous works, the update duration is not constant but depends on the file and its Age of Information (AoI), i.e., the time elapsed since it was last updated. The goal of this work is to design an update policy that minimizes the average AoI of all files with respect to a given popularity distribution. A relaxed problem, close to the original optimization problem, is solved and a practical update policy is derived. The update policy relies on the file popularity and on the functions that characterize the update durations of the files depending on their AoI. Numerical simulations show a significant improvement of this new update policy over the so-called square-root policy, which is optimal under file-independent and constant update durations.
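The square-root baseline mentioned in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: under constant, file-independent update durations, allocating update rates proportional to the square root of each file's popularity is the optimal baseline that the paper improves upon.

```python
import math

def square_root_rates(popularity):
    """Allocate update rates proportional to sqrt(p_i): the classic
    baseline that is optimal when update durations are constant and
    file-independent. Rates are normalized to sum to 1."""
    weights = [math.sqrt(p) for p in popularity]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical popularity distribution over three files.
rates = square_root_rates([0.64, 0.32, 0.04])
```

Note how the square root flattens the allocation: the most popular file gets a smaller share of updates than its raw popularity would suggest, freeing update slots for less popular files.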

Stochastic D2D Caching with Energy Harvesting Nodes

Homa Nikbakht (LTCI, Telecom Paris, France), Sarah Kamel (LTCI, Telecom Paris, France), Michele Wigger (LTCI, Telecom Paris, France) and Aylin Yener (Wireless Communications and Networking Laboratory, The Pennsylvania State University)

Consider a stochastic wireless device-to-device (D2D) caching network with nodes that harvest energy from external sources at random times. Each node is equipped with a cache memory, where the node prefetches maximum distance separable (MDS) coded packets of the files from a given library. When a node requests a file from this library, neighbouring nodes are asked to send the relevant missing subfiles over noisy channels. This work presents different selection strategies to determine which neighbouring nodes should transmit which missing subfiles. The strategies can roughly be divided into three categories: sequential strategies, where transmission stops when the requesting node has correctly decoded enough subfiles; coordinated strategies, where the requesting node is informed about the other nodes' cache contents and centrally decides which node should send which subfile; and adaptive strategies, where the requesting node sequentially decides which subfiles should be sent as a function of the subfiles it previously decoded correctly. Our numerical simulations show that at moderate energy levels or when there are many file requests, sequential strategies perform significantly worse than coordinated or adaptive strategies. On the other hand, at high energy levels sequential strategies (or even completely decentralized strategies) perform as well or even better. These latter strategies should thus be preferred, as they come with less synchronization overhead and delay. The same applies to environments with only few transmission errors (i.e., high-quality channels).
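The sequential strategy described above can be illustrated with a toy sketch (an assumption-laden reconstruction, not the authors' simulator): under MDS coding, any k correctly received subfiles reconstruct the file, so the requesting node stops as soon as k decode correctly.

```python
import random

def sequential_fetch(k, n_neighbours, success_prob, rng=random):
    """Sequential-strategy sketch: ask neighbours one at a time for a
    single MDS-coded subfile over a noisy channel. Under MDS, any k
    correctly received subfiles suffice. Returns the number of
    transmissions used, or None if neighbours are exhausted first."""
    decoded, transmissions = 0, 0
    for _ in range(n_neighbours):
        transmissions += 1
        if rng.random() < success_prob:  # subfile survives the channel
            decoded += 1
        if decoded == k:
            return transmissions
    return None
```

With a perfect channel exactly k transmissions are used; with a very lossy channel the request fails, which is why the paper's coordinated and adaptive strategies matter at moderate energy levels.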

Impact of Popular Content Relational Structure on Joint Caching and Recommendation Policies

Marina Costantini and Thrasyvoulos Spyropoulos (EURECOM, Sophia-Antipolis, France)

Recent work has shown that the performance of caching systems can be boosted by the delivery of alternative but related content. In this setup, recommendation systems are exploited to offer the user appealing alternatives if the original request is not found in the cache. This framework relies on the assumption that a content item can partially or completely substitute for another if the two are sufficiently similar. In this work we model these similarity relations as a graph, where each content item is a node and related contents are linked by an edge. We then study how the characteristics of this content graph constrain the gains of jointly designing the caching and the recommendations, with respect to a simple baseline, in the soft-cache-hits setup. We start by selecting a number of descriptive graph features that we expect to play a role in the performance of related-content delivery policies. We then analyze the effect of each of these features on the policies' performance through extensive simulations with synthetic data. Our results confirm that features such as the degree distribution and the clustering coefficient of the graph are crucial for deciding whether the close-to-optimal algorithm will perform significantly better than the simple baseline. Our experiments with four distinct real-world datasets further support these observations. Motivated by these clear dependencies, we conclude by showing that we can train a classifier to predict the gains attainable by a "smart" policy with respect to a low-complexity baseline, using only a few macroscopic graph features as predictor variables.
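One of the graph features highlighted above, the local clustering coefficient, has a standard definition that can be computed directly from an edge list. The sketch below is self-contained and uses a hypothetical toy graph, not the paper's datasets:

```python
from collections import defaultdict

def clustering_coefficient(edges, node):
    """Local clustering coefficient of `node`: the fraction of pairs
    of its neighbours that are themselves connected by an edge."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# Toy content graph: a triangle (0, 1, 2) plus a pendant node 3.
edges = [(0, 1), (0, 2), (1, 2), (0, 3)]
```

For node 0 only one of its three neighbour pairs is connected, giving a coefficient of 1/3; a highly clustered content graph is precisely the regime where the paper finds joint caching-and-recommendation design pays off differently from the baseline.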

In-network packet-level caching for error recovery in ICN

Yannis Thomas, George Xylomenos, and George C. Polyzos (Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, Greece)

In-network packet-level caching is one of the salient features of Information-Centric Networking (ICN) architectures, offering reduced communication latency, sender load, and network traffic. The literature on ICN caching is quite extensive, exploring in depth different aspects of caching, such as the positioning of storage resources in the network, the caching policies of individual caches, and the interaction of caching and routing. However, many researchers have questioned the value of in-network caching, arguing that the limited storage resources of ICN routers cannot eliminate a significant amount of redundant transmissions, hence the performance advantages of in-network caching may be similar to those of traditional edge caches. Nevertheless, there is another potential use of in-network caching that poses less challenging storage requirements: exploiting it to enhance the efficiency of error recovery by speeding up retransmissions of lost packets. In this paper, we investigate the impact of in-network packet-level caching on transport-layer error recovery. We first sketch a retransmission-oriented packet-level caching scheme, highlighting the differences in performance requirements compared to general-purpose caches. Then we model and assess its expected gains in terms of latency and network traffic reduction, and finally we provide a reality check of the approach based on current network requirements and hardware specifications.
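The latency gain from cache-assisted retransmission can be illustrated with a back-of-the-envelope expected-latency formula. This is our simplified single-retransmission model with hypothetical numbers, not the paper's analysis: a lost packet is recovered either from an on-path packet cache (closer, lower RTT) or from the original sender.

```python
def expected_recovery_latency(hit_prob, rtt_cache, rtt_sender):
    """Expected retransmission latency for a lost packet: with
    probability hit_prob the packet is still held by an on-path
    cache; otherwise the original sender must retransmit it."""
    return hit_prob * rtt_cache + (1 - hit_prob) * rtt_sender

# Hypothetical numbers: cache at 10 ms round-trip, sender at 50 ms.
latency = expected_recovery_latency(0.8, 10.0, 50.0)
```

Even a modest cache hit probability shifts the expected recovery latency well below the sender RTT, which is the intuition behind using small packet-level caches purely for error recovery.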

Partial Service Caching at the Edge

Rudrabhotla Sri Prakash, Nikhil Karamchandani, Veeraruna Kavitha, and Sharayu Moharir (Indian Institute of Technology Bombay)

We consider the problem of service caching at the edge, in which an application provider can rent resources at the edge to cache its code/libraries to serve incoming requests. A key characteristic of current edge computing platforms is that they provide pay-as-you-go flexibility via short-term contracts. This implies that the client can obtain significant cost savings by dynamically adjusting its caching decisions depending on the intensity of service requests. A novel feature of this work, in contrast to the literature, is that we allow the service to be partially cached at the edge. The probability that the edge is able to serve an incoming request is assumed to be an increasing function of the fraction of the service cached. The focus of this work is on characterizing the benefits of partial service caching at the edge. The key insight is that partial service caching can bring down the cost of service only if there is at least one intermediate caching level at which the probability that an incoming request can be served at the edge is strictly greater than the fraction of the service cached, and the cost of renting edge storage resources falls within a range of values that depends on the statistics of the request arrival process. We use these insights to design near-optimal service caching policies.
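The abstract's key condition — an intermediate caching level where the edge hit probability strictly exceeds the cached fraction — is easy to check numerically for a given hit-probability curve. The sketch below is an illustration of the condition on assumed example curves, not the paper's policy:

```python
def has_useful_partial_level(hit_prob, grid=100):
    """Check whether some intermediate caching fraction x in (0, 1)
    gives an edge hit probability strictly greater than x, where
    hit_prob is an (assumed increasing) function on [0, 1]."""
    return any(hit_prob(i / grid) > i / grid for i in range(1, grid))

concave = lambda x: x ** 0.5  # hit probability rises quickly at first
linear = lambda x: x          # proportional: no intermediate gain
```

A concave curve like the square root satisfies the condition everywhere in (0, 1), so partial caching can help; a purely proportional curve never exceeds the cached fraction strictly, so only full or no caching is worthwhile.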

Content Preference-Aware User Association and Caching in Cellular Networks

George Darzanos, Livia Elena Chatzieleftheriou, Merkourios Karaliopoulos, and Iordanis Koutsopoulos (Athens University of Economics and Business (AUEB), Athens, Greece)

The ever-growing trend of mobile users consuming high-quality video on their devices has pushed the backhaul network to its limits. Caching at Small-cell Base Stations (SBSs) has been established as an effective mechanism for alleviating this burden. Next-generation mobile networks promise high access-network density, hence multiple SBS association options may exist per mobile user. In this paper, we study the joint problem of associating mobile users to SBSs and placing content in SBS-collocated caches, aiming to further optimize the utilization of the backhaul network. The problem is solved periodically, considering time intervals during which the users' locations are assumed to be fixed. First, we decompose the joint problem into two sub-problems that are solved sequentially, namely the content preference similarity-driven user association sub-problem and the demand-aware content placement sub-problem. We then propose a heuristic that consists of multiple phases. In particular, in a preparation phase, it clusters users based on the similarity of their content preferences, accounting also for geographical constraints. The resulting clusters are then used for the demand-aware association of users to the SBSs, while the placement of content is driven by the resulting local demand at each SBS and takes place at the end. We demonstrate the effectiveness of our heuristic by evaluating its performance against multiple schemes that either lack a preparation phase or do not account for geographical constraints. As the numerical results show, the user clustering performed during the preparation phase can increase the overall cache hit ratio by up to 20%.
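A preference-similarity measure that could drive the preparation-phase clustering described above is cosine similarity between users' content preference vectors. The sketch below uses hypothetical toy data and a generic similarity function; the paper's actual clustering method and features may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical preference scores of three users over four content items.
prefs = {
    "u1": [5, 4, 0, 0],
    "u2": [4, 5, 0, 0],   # similar tastes to u1
    "u3": [0, 0, 5, 4],   # disjoint tastes
}
sim_close = cosine(prefs["u1"], prefs["u2"])
sim_far = cosine(prefs["u1"], prefs["u3"])
```

Users u1 and u2 score near 1 and would land in the same cluster (and hence could be steered to the same SBS cache), while u3's disjoint preferences yield similarity 0.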

Caching Policies over Unreliable Channels

Paulo Sena (Dept. of Computer Science, UFPA, Belem, Brazil), Igor Carvalho (Dept. of Computer Science, UFPA, Belem, Brazil), Antonio Abelem (Dept. of Computer Science, UFPA, Belem, Brazil), Gyorgy Dan (Dept. of Computer Science, EECS, KTH, Stockholm, Sweden), Daniel Menasche (Dept. of Computer Science, UFRJ, Rio de Janeiro, Brazil), Don Towsley (School of Information and Computer Sciences, UMass, Amherst)

Recently, there has been substantial progress in the formal understanding of how caching resources should be allocated when multiple caches each deploy the common LRU policy. Nonetheless, the role played by caching policies beyond LRU in a networked setting, where content may be replicated across multiple caches and where channels are unreliable, is still poorly understood. In this paper, we investigate this issue by first analyzing the cache miss rate in a system with two caches of unit size each, for the LRU and LFU caching policies and their combination. Our analytical results show that joint use of the two policies outperforms LRU, while LFU outperforms all these policies whenever resource pooling is not optimal. We provide empirical results with larger caches to show that simple alternative policies, such as LFU, provide superior performance compared to LRU even if the space allocation is not fine-tuned. We envision that fine-tuning the cache space used by such policies may lead to promising additional gains.
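The LRU/LFU comparison can be made concrete with minimal single-cache implementations of both policies. This is an illustrative sketch, not the paper's networked, unreliable-channel model: on a trace where a popular item is separated by scans longer than the cache, LFU keeps the popular item while LRU repeatedly evicts it.

```python
from collections import OrderedDict, Counter

def misses_lru(trace, size):
    """Count misses of an LRU cache of the given size on a trace."""
    cache, misses = OrderedDict(), 0
    for item in trace:
        if item in cache:
            cache.move_to_end(item)
        else:
            misses += 1
            if len(cache) >= size:
                cache.popitem(last=False)  # evict least recently used
            cache[item] = True
    return misses

def misses_lfu(trace, size):
    """Count misses of an LFU cache of the given size on a trace."""
    cache, freq, misses = set(), Counter(), 0
    for item in trace:
        freq[item] += 1
        if item not in cache:
            misses += 1
            if len(cache) >= size:
                cache.remove(min(cache, key=lambda x: freq[x]))  # least frequent
            cache.add(item)
    return misses

# One hot item, then repeated scans of three cold items between hot hits.
trace = ["hot"] * 3 + ["c1", "c2", "c3", "hot"] * 5
```

With a cache of size 2, the scan of three cold items flushes "hot" out of the LRU cache before its next request, whereas LFU's frequency counts pin "hot" in place, matching the abstract's observation that LFU can beat LRU without fine-tuned space allocation.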

Private Cache-Aided Interference Alignment for Multiuser Private Information Retrieval

Xiang Zhang (Department of Electrical and Computer Engineering, University of Utah), Kai Wan (Department of Electrical Engineering and Computer Science, Technische Universität Berlin), Hua Sun (Department of Electrical Engineering, University of North Texas), Mingyue Ji (Department of Electrical and Computer Engineering, University of Utah), and Giuseppe Caire (Department of Electrical Engineering and Computer Science, Technische Universität Berlin)

In the problem of cache-aided Multiuser Private Information Retrieval (MuPIR), a set of Ku cache-aided users wish to download their desired messages from a set of N distributed non-colluding databases, each holding a library of K independent messages. The communication load of this problem is defined as the total number of bits downloaded (normalized by the message length) by the users. The goal is to find the optimal memory-load trade-off under the constraint of user demand privacy, which ensures that any individual database learns nothing about the demands of the users. In this paper, for the MuPIR problem with K = 2 messages, Ku = 2 users and N ≥ 2 databases, we provide achievability for the memory-load pairs ((N−1)/(2N), (N+1)/N) and (2(N−1)/(2N−1), (N+1)/(2N−1)) by constructing specific achievable schemes based on the novel idea of Private Cache-aided Interference Alignment (PCIA). We prove that the proposed scheme is optimal if the cache placement is uncoded (i.e., users directly cache a subset of the library bits). Computer-aided investigation also shows that the proposed schemes are optimal in general when N = 2, 3.

Session Chair

Not Needed — Asynchronous Q&A throughout the conference

Session CCDWN-Keynote-1

Keynote

Conference
4:30 PM — 5:15 PM EEST
Local
Jun 19 Fri, 9:30 AM — 10:15 AM EDT

Coded Caching and Distributed Computing over Wireless Channels

Petros Elia (Dept. of Mobile Communications, Eurecom, France)

This presentation will discuss cache-aided clique-based communications as a means of achieving unparalleled performance in a variety of communication settings, while also briefly touching upon ramifications in distributed computing.
We will discuss some of the novel challenges that arise when combining caching with PHY, load balancing, and content prediction. Central to all this is the idea of non-separability: we show scenarios where the joint treatment of caching and communications can far outperform separable approaches that simply 'concatenate' existing caching techniques with existing communication methods. The presentation can be of interest to any caching or distributed computing specialist who wishes to see how "things change" when considering cache-aided wireless (and thus interference-limited) channels, as well as to communications and networking experts who can see how existing approaches (e.g. multi-antenna or cooperative methods) can be adapted so that a splash of caching provides unparalleled performance gains.
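The gain from cache-aided multicasting can be illustrated with the textbook two-user, two-file coded caching example (a standard construction in the coded caching literature, shown here as a sketch and not as the speaker's specific schemes): each user caches half of each file, and a single XOR-coded broadcast then serves both users' demands at once.

```python
def xor(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Library: two files A and B, each split into two equal subfiles.
A = (b"A1", b"A2")
B = (b"B1", b"B2")

# Placement (each user caches half of each file):
# user 1 stores (A1, B1), user 2 stores (A2, B2).
cache1 = {"A1": A[0], "B1": B[0]}
cache2 = {"A2": A[1], "B2": B[1]}

# Delivery for demands (user 1 wants A, user 2 wants B): one coded
# broadcast replaces two separate unicast subfile transmissions.
coded = xor(A[1], B[0])  # server broadcasts A2 XOR B1

# Each user cancels the part it already caches to recover its file.
user1_A = cache1["A1"] + xor(coded, cache1["B1"])  # A1 then A2
user2_B = xor(coded, cache2["A2"]) + cache2["B2"]  # B1 then B2
```

The single transmission is useful to both users simultaneously — the clique-based multicasting gain that, as the talk argues, interacts non-separably with the PHY when the broadcast channel is wireless.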

Session Chair

Mari Kobayashi (CentraleSupélec) and George Iosifidis (Trinity College Dublin)

Session CCDWN-Keynote-2

Keynote

Conference
6:00 PM — 7:15 PM EEST
Local
Jun 19 Fri, 11:00 AM — 12:15 PM EDT

New Challenges in Caching

Emilio Leonardi (Department of Electronics, Politecnico di Torino, Italy)

In my talk, I will first briefly review some challenging aspects related to the design and the analysis of caching systems. Then I will focus on similarity caching systems, where a user request for an object o that is not in the cache can be (partially) satisfied by a similar stored object o', at the cost of a loss of user utility that depends on a "distance" between o and o'. Similarity caching systems can be effectively employed in several application areas, like multimedia retrieval, recommender systems, genome study, and machine learning training/serving. However, despite their relevance, the behavior of such systems is far from being well understood. I will present some recent work in the area that provides a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. Similarity caching raises significant new challenges; some dynamic policies with optimality guarantees have recently been proposed. The behavior of similarity caching systems deviates significantly from exact caching, leading to NP-hard problems even in cases where optimal policies for exact caching have a simple structure.
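The core similarity-caching lookup described above — serve a request from the closest cached object if it is within an acceptable distance, otherwise count a miss — can be sketched in a few lines. This is an illustrative reconstruction with objects as points on a line, not any specific policy from the talk:

```python
def similarity_lookup(cache, request, dist, threshold):
    """Return the cached object closest to `request` if it lies
    within `threshold` (a soft hit), otherwise None (a miss)."""
    if not cache:
        return None
    best = min(cache, key=lambda o: dist(o, request))
    return best if dist(best, request) <= threshold else None

dist = lambda a, b: abs(a - b)  # a toy "distance" between objects
cache = {2.0, 7.5}
```

A request near a cached object is answered approximately (with some utility loss growing in the distance), while a request far from everything in the cache is a miss — the exact-caching special case corresponds to a threshold of zero.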

Session Chair

Mari Kobayashi (CentraleSupélec) and George Iosifidis (Trinity College Dublin)

