Resource Allocation at the Network Edge
Research axis: ComEx - Intelligent Network Structures
Topic: Resource Allocation at the Network Edge
Thesis supervisors: Mohamad Assaad, L2S, and Andrea Araldo, SAMOVAR
Institution: CentraleSupelec
Managing laboratory: L2S
PhD student: Alessio Scalinghi
Start: 2019
Scientific outputs:
Introduction to the problem and context
To achieve its ambitious goals, 5G is pushing toward "intelligent networks", which are able not only to transfer flows of data but also to serve computational and storage needs. Edge Computing (EC) is expected to be the next stage of intelligent networks and distributed services. Under the EC paradigm, the Network Provider (NP) deploys processing capabilities and storage directly in the access network, thus allowing greater responsiveness and a better use of the bandwidth. However, resources at the Edge are constrained, and thus the problem of allocating them to different Service Providers (SPs) arises. The goal of the PhD is to design optimization strategies to solve this problem, which presents challenges that are novel and not yet fully solved. Since EC is still a young field compared, for example, with Cloud Computing, the ambition of the PhD is to contribute to shaping what EC will look like in the future generation of the Internet.
The increasing capacity available in access networks (in particular in the context of 5G) will, on the one hand, enable the emergence of novel Internet services and, on the other hand, allow users and machines to generate a rapidly growing amount of traffic on the Internet. To scale their services, Service Providers (SPs, e.g., Youtube, Netflix, Google, etc.) need to distribute their intelligence up to the access networks, whence the emergence of EC [14]. The French Government and the H2020 programme [1, 2] claim that EC is one of the keys to the evolution of the Internet and will help the development of AI. Indeed, the node proximity guaranteed by EC promises to reduce the distance between the equipment and the processing nodes, thus reducing bandwidth usage and helping achieve the 1 ms latency requirement for the time-critical applications targeted by 5G. Orange and Bouygues, as well as other Internet giants, are already investing in and experimenting with Edge Computing. The NP can use the resources deployed at the edge either to perform network management tasks or to make them available to the SPs to run part of their applications at the edge. We focus in particular on the latter scenario.

Resource allocation at the Edge has recently raised interest in the research community. Most work [8, 15] assumes that users submit tasks to be executed to the NP, which decides how to allocate resources to each task. However, this model is not appropriate for EC, since it assumes that the NP observes the tasks and handles them. This is not the case, since all the traffic from users to SPs is encrypted to maintain confidentiality. Moreover, users do not really submit tasks but continuously interact with the service. This is why, in our vision, EC must allow SPs to deploy their micro-servers at the Edge, which end-users or machines then interact with. Therefore, in our vision the contention for resources is between SPs, not between tasks submitted by users.
For these reasons, in our view of EC the NP owns the resources (bandwidth, processing and storage) and lets SPs run their applications without observing their traffic. The role of the NP is thus to decide how to allocate the resources among the SPs.
The proposed PhD is devoted to studying resource allocation in Edge Computing. The Network Provider (NP) owns a set of resources, namely storage, processing capabilities and bandwidth, distributed over a heterogeneous set of nodes, e.g., central offices, gNodeBs, users' WiFi access points, etc. The NP dynamically allocates them to third-party Service Providers (SPs) in order to optimize its own utility function, which can include inter-domain bandwidth savings, revenue, users' quality of experience, or fairness. The goal of the PhD is to design dynamic allocation strategies for the NP. We now present some of the challenges our allocation strategies must face, which make the problem both relevant and novel. For each of them we cite the most relevant work, thus framing this proposal within the state of the art.
- The rich literature on resource allocation in Cloud Computing assumes that each SP has access to infinite resources, as long as it is willing to pay for them. This is not true at the edge [15], where contention emerges between different SPs over limited resources.
- We assume SPs adapt to the resources that the NP has allocated to them [11, 8] and change their behavior accordingly, which complicates the allocation problem.
- We want to guarantee the confidentiality of SPs and users, which has been overlooked in network resource allocation problems with few exceptions [3]. Therefore, our allocation must be computed solely from measured performance metrics [13].
- Services with heterogeneous requirements (e.g., high bandwidth, low latency) coexist and need to be satisfied simultaneously. Resources are distributed over heterogeneous nodes, e.g., central offices, gNodeBs, users' WiFi access points, with different performance and capabilities. Moreover, different allocations may induce different traffic between nodes.
- Due to the proximity of resources, the requirements of each service dynamically change based on the requests of the local population of users or machines served. Services may need to use local resources for just a few seconds and then disappear from the Edge.

The outcome of the PhD is expected to be novel and coherent resource allocation strategies for Edge Computing, which tackle the challenges above and are based on a solid theoretical formulation and
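To make the contention between SPs concrete, the following is a minimal toy sketch (not the method to be developed in the PhD) of allocating a single edge node's capacity among SPs. Assuming each SP i derives a weighted logarithmic utility w_i * log(x_i) from its allocation x_i, maximizing the sum of utilities subject to the capacity constraint has the well-known proportional-fair closed form x_i = C * w_i / sum_j w_j; the SP names and weights below are illustrative.

```python
# Toy sketch of edge resource contention between SPs (illustrative only,
# not the allocation strategy of the proposal).
# With utilities U_i(x_i) = w_i * log(x_i), the problem
#   max sum_i w_i * log(x_i)   s.t.   sum_i x_i <= C,  x_i >= 0
# has the proportional-fair optimum x_i = C * w_i / sum_j w_j.

def proportional_fair(capacity, weights):
    """Split `capacity` among SPs in proportion to their weights."""
    total = sum(weights.values())
    return {sp: capacity * w / total for sp, w in weights.items()}

# Hypothetical SPs competing for 100 units of capacity at one edge node.
alloc = proportional_fair(100.0, {"SP-A": 3.0, "SP-B": 1.0, "SP-C": 1.0})
# SP-A receives 60 units, SP-B and SP-C receive 20 each: the whole
# capacity is shared, unlike in the Cloud, where each SP could simply
# buy more resources.
```

This toy model already exposes the first challenge above: at the edge, one SP's allocation directly reduces what remains for the others, which is exactly the contention absent from the Cloud setting.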