Demos

Cloud-native programming and the serverless paradigm can revolutionize software development and the operation of distributed applications. However, latency-sensitive applications pose additional challenges to the underlying networks and cloud platforms. Moving compute resources to the edge is an inevitable step, but further mechanisms and novel components are also required to enable such services in a serverless environment. In this demonstration, we present a novel system providing soft latency control for serverless applications, and we showcase our proof-of-concept prototype supervising microservices operated on Amazon Web Services and its edge extension, Greengrass. Our main objective is cost-optimal operation while meeting average latency requirements, which is achieved by dynamically changing the software layout and serverless artifacts based on live monitoring.
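The core idea above, switching the serverless layout when monitored latency drifts away from its soft target, can be sketched roughly as follows. This is an illustrative toy, not the actual prototype: the class name, the placement labels and the thresholds are all hypothetical.

```python
# Hypothetical sketch of a latency-driven layout controller: keep a
# moving average of monitored invocation latencies and decide whether
# a function should run in the remote cloud (cheap) or at the edge
# (fast but more expensive). Names and thresholds are illustrative.

from collections import deque

class LayoutController:
    def __init__(self, target_ms, window=20):
        self.target_ms = target_ms
        self.samples = deque(maxlen=window)
        self.placement = "cloud"  # start with the cheapest layout

    def observe(self, latency_ms):
        """Feed one monitored latency sample and re-decide the placement."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.target_ms:
            self.placement = "edge"   # soft bound violated: move closer
        elif avg < 0.5 * self.target_ms:
            self.placement = "cloud"  # ample headroom: fall back to the cheap layout
        return self.placement
```

A hysteresis-style gap between the two thresholds avoids oscillating between layouts when the average hovers near the target.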

The cloud-native paradigm has become a widely applied approach to ensure elasticity and reliability of applications running in the cloud. One recurrent motif is the stateless design of applications, which aims to decouple the life-cycle of application states from the life-cycle of individual application instances. Application data is written to and read from cloud databases, deployed close to the application code to ensure low latency bounds on state access. When applying a stateless design, the performance of the cloud service is often limited by the cloud database. In order not to become a bottleneck, database instances are distributed on multiple hosts, and strive to ensure data locality for all application instances. However, the shared nature of certain states, and the inevitable dynamics of the application workload necessarily lead to inter-host data access within the data center or even across data centers and edge servers, if the service is geographically distributed. To minimize the service performance loss due to the stateless design of applications, we propose a latency and access pattern aware state storage design and adapt it to a key-value store from academia to showcase a proof-of-concept that is ideal to store network function states. To foster further research in this area, we make our solution open-source.

GitHub
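The access-pattern-aware placement described above can be illustrated with a minimal sketch: put each state object on the host that minimizes the access-frequency-weighted inter-host latency. This is a toy model under assumed inputs, not the store's actual algorithm; the function name and data shapes are hypothetical.

```python
# Toy sketch of latency- and access-pattern-aware state placement:
# choose, among the hosts accessing a key, the one that minimizes the
# total expected access latency. Inputs are assumed, illustrative shapes.

def place_key(access_freq, latency):
    """
    access_freq: {host: number of accesses to this key from that host}
    latency:     {(src, dst): measured delay in ms, latency[(h, h)] == 0}
    Returns the host minimizing the frequency-weighted access latency.
    """
    def cost(candidate):
        return sum(freq * latency[(src, candidate)]
                   for src, freq in access_freq.items())
    return min(access_freq, key=cost)
```

In this simplified form the candidate set is restricted to the accessing hosts; a real store would also weigh replication, consistency and migration cost.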

Kubernetes Delay-Aware Scheduler

Kubernetes has become the most popular cluster manager of the past five years. It is used primarily for orchestrating data center deployments running web applications. Its powerful features, e.g., self-healing and scaling, have attracted a huge community, which, in turn, is fueling the meteoric rise of this open-source project. We venture to shape Kubernetes to suit edge infrastructure. As mostly delay-sensitive applications are to be deployed at the edge, a topology-aware Kubernetes is needed, extending its widely used feature set with regard to network latency. Moreover, as edge infrastructure is highly prone to failures and is considered expensive to build and maintain, self-healing features must receive more emphasis than in baseline Kubernetes. We therefore designed a custom Kubernetes scheduler that makes its decisions with applications' delay constraints and edge reliability in mind. In this demonstration, we show the novel features of our Kubernetes extension and describe the solution, which we release as open source.

GitHub
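A delay-aware scheduling decision of the kind described above can be sketched as a filter-and-score step: discard nodes that cannot meet a pod's delay constraint toward its target zone, then prefer the lowest-delay survivor. This is an illustrative simplification, not the extension's actual code; the function signature and data shapes are assumed.

```python
# Illustrative filter-and-score sketch of delay-aware scheduling
# (hypothetical, not the actual Kubernetes extension): filter out
# nodes violating the pod's delay limit, then pick the closest node.

def schedule(pod_delay_limit_ms, target, nodes):
    """
    nodes: {node_name: {zone: measured delay in ms}}
    Returns the chosen node name, or None if no node meets the limit.
    """
    feasible = {name: delays[target]
                for name, delays in nodes.items()
                if delays.get(target, float("inf")) <= pod_delay_limit_ms}
    if not feasible:
        return None  # pod stays pending / triggers rescheduling
    return min(feasible, key=feasible.get)
```

Returning None when no node qualifies mirrors the self-healing angle: a failed or degraded node drops out of the feasible set and the pod is rescheduled elsewhere.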

Towards Human-Robot Collaboration: An Industry 4.0 VR Platform with Clouds Under the Hood

Safe and efficient Human-Robot Collaboration (HRC) is an essential feature of future Industry 4.0 production systems, and it requires sophisticated collision avoidance mechanisms with intense computational demands. Digital twins provide a novel way to test the impact of different control decisions in a simulated virtual environment, even in parallel. In addition, Virtual/Augmented Reality (VR/AR) applications can revolutionize future industry environments. Each component requires extreme computational power, which can be provided by cloud platforms, but at the cost of higher delay and jitter. Moreover, clouds bring a versatile set of novel techniques easing the life of both developers and operators. Can these applications be realized and operated on today's systems? In this demonstration, we answer this question via real experiments.

Controlling Drones from 5G Networks

Envisioned 5G applications are key drivers of the evolution of network and cloud architectures. These novel services pose several challenges to the underlying infrastructure in terms of latency, reliability and capacity, to mention just a few. Controlling or coordinating both indoor and outdoor drones from future networks is a potential application of significant importance. Today's network softwarization technologies, such as Software Defined Networking (SDN) and Network Function Virtualization (NFV), enable a novel way to create and provision such services. In this demonstration, we showcase an Industry 4.0 use case: a local factory equipped with drones and local cloud and network facilities connecting to remote cloud resources. The envisioned service is realized by a Service Function Chain (SFC) consisting of Virtual Network Functions (VNFs) and logical connections between them with special requirements. In addition, the service is integrated with our multi-domain resource orchestration system and, as a result, can be controlled, deployed and monitored from that framework. The use case and the demo illustrate several aspects and challenges that future 5G systems must address.

Resource Allocation Algorithm for Distributed Cloud Environments

Today's most widely used Virtual Infrastructure Manager (VIM) is OpenStack, which is responsible for managing compute, storage and virtual network resources. Because OpenStack's current scheduler does not take the underlying physical network characteristics into account, deploying network services (NSs) on a geographically distributed infrastructure requires multiple VIMs, with an NFV Orchestrator (NFVO) on top of them, for resource management. In contrast to this setup, we show a novel solution that merges the functionality of the VIM and NFVO under one common OpenStack domain: our solution is capable of

  • measuring the bandwidth and delay characteristics of the underlying physical network among compute nodes,

  • creating a topology model that contains both compute- and network-related features,

  • mapping incoming services, and re-mapping already deployed ones, to the underlying resources with our novel orchestration algorithm,

  • deploying and migrating services via OpenStack API calls.
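The steps above can be sketched as a single delay-constrained placement decision: given the measured pairwise delays and the free capacity of each compute node, pick a node that satisfies the service's delay bound toward an anchor point and has enough room. The names, data shapes and the greedy "closest feasible node" rule are illustrative assumptions, not the actual orchestration algorithm.

```python
# Minimal sketch of network-aware placement (hypothetical names):
# from measured delays between nodes, select a compute node with enough
# free vCPUs within the service's delay bound toward an anchor node
# (e.g., the gateway the service must stay close to).

def map_service(service, nodes, delay):
    """
    service: {"vcpus": int, "max_delay_ms": float, "anchor": node name}
    nodes:   {node name: free vCPUs}
    delay:   {(a, b): measured delay in ms}
    Returns the chosen node name, or None if no node is feasible.
    """
    candidates = [n for n, free in nodes.items()
                  if free >= service["vcpus"]
                  and delay[(service["anchor"], n)] <= service["max_delay_ms"]]
    # prefer the closest feasible node to minimize added latency
    return min(candidates,
               key=lambda n: delay[(service["anchor"], n)],
               default=None)
```

Re-mapping a deployed service amounts to re-running the same decision against refreshed measurements and, if the result changes, migrating via the corresponding OpenStack API calls.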

FERO: Fast and Efficient Resource Orchestrator for a Data Plane Built on Docker and DPDK

Future services and applications, such as the Tactile Internet, coordinated remote driving or wirelessly controlled exoskeletons, pose serious challenges to the underlying networks and IT platforms. For such services, virtualization is a key enabler from both technological and economic perspectives, and it has significantly reshaped the IT and networking ecosystem. On the one hand, cloud computing and the services built on it are evident results of recent years' efforts; on the other hand, networking is in the middle of a momentous revolution, driven mainly by Network Function Virtualization (NFV) and Software Defined Networking (SDN). In order to enable carrier-grade network services with strict QoS requirements, we need a novel data plane supporting high performance and flexible, fine-grained programmability and control.

As the network functions (implemented by virtual machines or containers) use the same hardware resources (CPU, memory) as the components responsible for networking, we need a low-level resource orchestrator capable of jointly controlling these resources. We propose such a novel resource orchestrator (RO) for a data plane built on open-source components such as Docker, DPDK and OVS.

FERO operates on a novel data plane resource model capable of abstracting several hardware architectures. Additionally, we provide an adapter module that can automatically discover the underlying hardware and build the model on the fly. FERO's core mapping module is based on our recently proposed Service Graph embedding engine. As a proof of concept, two software switches (OVS and ERFS) have already been adapted, and different hardware platforms have been evaluated.
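The joint-control idea can be illustrated with a toy version of such a resource model: the CPU cores of a node form a single pool shared by the software switch and the VNF containers, so pinning cores to one side directly reduces what the other can get. Class and owner names are hypothetical, not FERO's actual model.

```python
# Toy joint data-plane resource model (hypothetical, not FERO's actual
# model): one shared core pool for both the software switch and VNFs,
# so a single orchestrator sees and arbitrates all allocations.

class DataPlaneNode:
    def __init__(self, cores):
        self.free_cores = set(range(cores))
        self.assigned = {}  # owner -> set of pinned core ids

    def allocate(self, owner, n):
        """Pin n cores to 'owner' (e.g., 'ovs' or a VNF); None if short."""
        if len(self.free_cores) < n:
            return None
        picked = {self.free_cores.pop() for _ in range(n)}
        self.assigned.setdefault(owner, set()).update(picked)
        return picked
```

Because the switch and the VNFs draw from the same pool, the model makes the CPU trade-off between packet forwarding and function processing explicit to the mapping module.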

The source code is available on our GitHub page