ARCloud platform

Introduction

The next step for the SolAR Framework is to distribute some spatial computing pipelines into the cloud. The idea is to offer access to efficient and valuable pipelines to any AR application designer:

  • whatever their location

  • whatever their hardware configuration

  • whatever their development platform

  • whatever their target equipment

while continuing to benefit from the advantages of the SolAR Framework (see document SolAR Architecture).

Moreover, the "cloud" solution provided by SolAR Framework must take into account the following requirements:

  • performance: pipeline components often require specific CPU, GPU, memory and storage capacities, which the physical machines hosting these components must provide

  • latency: AR applications must be very responsive, so spatial computing pipelines must return their processed data as quickly as possible, especially when running in the cloud

  • robustness: pipelines deployed in the cloud must be able to support a large number of calls from different AR applications, sometimes with a significant volume of data (images, point clouds, etc.)

  • availability: pipelines in the cloud must always be available for AR applications, regardless of the location of the end-user device, and at any time

  • scalability: pipelines in the cloud should be easy to update, and new pipelines should be easy to deploy, with minimal impact on existing AR applications

  • security: the integrity of spatial computing pipelines deployed in the cloud must be guaranteed

With this objective, the SolAR Framework must evolve to offer new features to its users:

  • communication layer: to handle calls to pipelines in the cloud, synchronous or asynchronous, including complex data sending or receiving to and from these pipelines
    ⇒ based on gRPC framework and Boost library for serialization (by default)

  • automatic generation of deployable pipeline packages: to help developers to prepare their pipelines to be deployed in the cloud, benefiting from the environment of the SolAR Framework
    ⇒ based on Docker technology

  • cloud deployment orchestration: to make pipelines deployment in the cloud as easy as possible, and to offer tools to manage the lifecycle of these pipelines
    ⇒ based on Kubernetes (K8s) platform

This evolution of the SolAR Framework is called the ARCloud platform.

In the remainder of this document, the term "service" will be used to designate a SolAR pipeline deployed on the ARCloud platform in a docker container, including the remote communication layer.

The purpose of this documentation is to describe the ARCloud platform, which offers a complete solution for the deployment, maintenance and upgrade of SolAR services in the Cloud. In addition, this document presents, with concrete examples, how to use the ARCloud platform to design AR client applications.

ARCloud: a generic cloud architecture for SolAR

Deploying SolAR pipelines in the cloud is based on the need to offer any AR application the ability to rely on powerful spatial computing services that run not locally, but somewhere in the cloud, over an Internet connection.

The next diagram shows a global and high level vision of connections between AR devices and spatial computing services hosted on cloud servers:

SolAR Framework global cloud vision
Figure 1. SolAR Framework global cloud vision

Each service has its own characteristics:

  • it can be called synchronously or asynchronously, or in streaming mode

  • its processing may require specific technical criteria applied to the CPU, GPU, memory…​

  • it may need to store a local context linked to a specific device

  • it may also need to store permanent local data

  • it can be chained to another service (i.e. service chaining must be managed by the platform)

  • etc.

All these aspects will be presented later in this document.

External tools used by the ARCloud platform

As presented previously, some open source tools are used to provide the new features of the SolAR Framework that are necessary for the implementation of this cloud architecture:

  • Boost: As described in the SolAR Architecture document, this C++ library provides functionalities to deconstruct a set of data structures to a sequence of bytes (serialization), and to reconstitute an equivalent structure in another program context (deserialization).
    More information on: https://www.boost.org/doc/libs/1_74_0/libs/serialization/doc/index.html
    Note: Boost serialization is the default solution provided by the XPCF remoting framework to ensure serialization and deserialization of data structures, but the framework has been designed so that any other solution can be used for this feature if needed. A minimal serialization sketch is shown after this list.

  • gRPC: This open source remote procedure call (RPC) system uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages.
    More information on: https://www.grpc.io/
    Note: This tool is embedded in the XPCF remoting framework.

  • Docker: This set of Platform as a Service (PaaS) products uses OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines.
    More information on: https://www.docker.com/
    Note: The container generation is managed by the XPCF remoting framework.

  • Kubernetes: K8s is an open-source container-orchestration system. It defines a set of building blocks, which collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, GPU, memory or custom metrics. K8s is loosely coupled and extensible to meet different workloads.
    More information on: https://kubernetes.io/
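
To illustrate the Boost serialization mechanism mentioned above, here is a minimal, self-contained sketch (the Point3D structure is hypothetical and not taken from the SolAR code base) showing how a data structure can be deconstructed to a sequence of bytes and reconstituted:

#include <fstream>
#include <vector>
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/vector.hpp>

// Hypothetical data structure: a 3D point with a descriptor (not a SolAR type)
struct Point3D {
    float x, y, z;
    std::vector<float> descriptor;

    // Boost intrusive serialization: the same method handles both
    // serialization and deserialization
    template <class Archive>
    void serialize(Archive & ar, const unsigned int /*version*/) {
        ar & x & y & z & descriptor;
    }
};

int main() {
    const Point3D point{1.f, 2.f, 3.f, {0.1f, 0.2f}};

    // Serialization: deconstruct the structure to a sequence of bytes (here, a file)
    {
        std::ofstream ofs("point.txt");
        boost::archive::text_oarchive oa(ofs);
        oa << point;
    }

    // Deserialization: reconstitute an equivalent structure in another context
    Point3D restored;
    {
        std::ifstream ifs("point.txt");
        boost::archive::text_iarchive ia(ifs);
        ia >> restored;
    }
    return 0;
}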

All of these tools will work together to constitute the ARCloud platform based on the SolAR Framework, as shown on this diagram:

Open source tools used by the ARCloud platform based on the SolAR Framework
Figure 2. Open source tools used by the ARCloud platform based on the SolAR Framework

Remote access to SolAR pipelines using XPCF remoting framework

The SolAR Framework relies heavily on XPCF framework features (component configuration and injection, module loading at runtime, interface introspection…​), as described in the SolAR Architecture document.
Since every SolAR component is based on the XPCF interface and component design, a common approach can be used to automate the distribution of any XPCF component (including SolAR components) across the network, through its interfaces.
The goal is to provide the ability to distribute the SolAR Framework (either a single component, or meta-components such as pipelines) across the network, with each part becoming a service, as depicted in the following diagrams.

This approach led to improving the XPCF framework so that it provides remote support from any interface declaration, through a new version: the XPCF remoting framework.

A detailed presentation of the XPCF remoting framework (version 2.5.x) and its code are available on this GitHub repository: https://github.com/b-com-software-basis/xpcf/tree/develop#xpcf-remoting-architecture

XPCF remoting framework global architecture

This diagram shows the generic software architecture of the XPCF remoting framework used to enable remote access to any SolAR spatial computing pipeline (based on XPCF components) deployed in the cloud:

XPCF remoting architecture overview
Figure 3. XPCF remoting architecture overview

The client application can be any AR application running on a device (computer, mobile phone, tablet, AR smartglasses…​) and relying on certain SolAR spatial computing pipelines available in the cloud.

The server application is an XPCF application, running a gRPC server on a provided url.

SolAR Framework elements

SolAR component interfaces correspond to the interfaces proposed by the SolAR pipelines to any client application (i.e. entry points to remote ARCloud services). They are based on SolAR Framework pipeline interfaces (see API reference: https://solarframework.github.io/create/api/).

SolAR component implementations correspond to components that run the algorithms of the spatial computing pipelines. They are not visible to client applications.

Important:

The remoting functionality works for any interface that:

  • uses data structures (see document SolAR Architecture) that declare one of the serialization methods exposed through XPCF specifications (one of the methods provided in xpcf/remoting/ISerializable.h)
    Note: in version 2.5.0 only Boost serialization is handled; upcoming minor releases of XPCF will provide support for the other methods

  • uses basic C++ types only

Pointers are not supported, as a pointer can reference an array: it would then need an associated size, and no assumption can be made to detect such cases automatically. In any case, interfaces represent contracts between a user and a service, so they should be self-explanatory. In this context, relying on pointers does not make the expected behavior explicit; interfaces should therefore rely on structured types, standard types (string, vectors…) or user-defined types (structures, classes).
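
As an illustration, here is a hedged sketch (a hypothetical interface, not part of the SolAR API) contrasting a method signature that cannot be remoted with an equivalent one relying on standard types:

// Hypothetical interface used only to illustrate the constraint on remotable signatures
#include <cstddef>
#include <cstdint>
#include <vector>

class IImageBuffer {
public:
    virtual ~IImageBuffer() = default;

    // NOT remotable: a raw pointer carries no size information by itself,
    // so the remoting code generator cannot know how much data to serialize
    virtual void setPixels(const uint8_t * pixels, std::size_t size) = 0;

    // Remotable: a standard container is self-describing and can be
    // serialized/deserialized by the generated proxy and server components
    virtual void setPixels(const std::vector<uint8_t> & pixels) = 0;
};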

XPCF Framework elements

The GrpcServerManager uses XPCF to find every component implementing the IGrpcService interface and registers them. It is used to start the server application.

The IGrpcService interface defines the methods needed to manage any XPCF server component, and is used to register these components at run time (i.e. each XPCF server component implements this interface).

Elements generated by the XPCF remoting framework

The client configuration file contains information needed to manage gRPC communications on client side, such as channel urls (IP address and port of the server hosting the concrete code, for each interface).

An XPCF proxy component is created for each SolAR interface and implements this interface. This component transforms the interface input/output methods into gRPC requests/responses, managing the serialization and deserialization of the data. It calls the gRPC service and RPC methods corresponding to the methods of the interface. This component is configured with a channel url (given by the configuration file) and a gRPC credential.

The server configuration file contains the server channel url for each interface. It also contains the parameters of each component used on the server side.

An XPCF server component is created for each SolAR interface, and implements this interface. It inherits from xpcf::IGrpcService, which allows the GrpcServerManager to register it. Moreover, each RPC method of the xpcf::IGrpcService class is implemented so that gRPC requests/responses are translated to the SolAR component input/output methods with the correct data types (serialization/deserialization). The concrete SolAR component is injected into this component.

gRPC proto files (text files with .proto extension) are generated by the XPCF remoting framework for each SolAR interface. These files define the structure of the data to serialize. They will be used by the protocol buffer compiler protoc to generate gRPC Stubs and gRPC Services.

Process to generate a remote SolAR pipeline

  1. Generate the compilation database from the SolAR project (for example, using Qt Creator: Build→Generate Compilation Database).

  2. From this compilation database, generate the remoting code for the SolAR Framework interfaces, using the xpcf_grpc_gen tool included in the XPCF remoting framework.
    This tool parses all the SolAR interfaces and creates a Qt Creator development project to compile all the generated code into an XPCF module. It also creates a client and a server XPCF configuration file (see next diagram).
    The xpcf_grpc_gen tool uses a forked version of cppast (with customization around templates parsing) and libclang.

  3. Build the generated XPCF remoting modules, using your IDE.

  4. Adapt client and server configuration files with needed services.

  5. Launch the xpcf_grpc_server application with the server configuration file and the modules.

XPCF remoting code generation
Figure 4. XPCF remoting code generation

xpcf_grpc_gen syntax:

xpcf_grpc_gen -n <project name> -v <project version> -r <repository type> -u <host repository url> --std <C++ standard> --database_dir <compilation database path> --remove_comments_in_macro -o <destination folder> -g <message format>

Example:

xpcf_grpc_gen -n SolARFramework -v 0.11.0 -r build@github -u https://github.com/SolarFramework/SolARFramework --database_dir /home/user/Dev/SolAR/core/build-SolARFramework-Desktop_Qt_5_15_2_GCC_64bit-Release/ --std c++1z --remove_comments_in_macro -g protobuf -o ~/grpcGenSolar

The project generated by xpcf_grpc_gen for the entire SolAR Framework is available on GitHub: https://github.com/SolarFramework/SolARFrameworkGRPCRemote/tree/0.11.0

XPCF remoting Domain Specific Language

The XPCF remoting framework provides a Domain Specific Language (DSL) to ease the code generation.

This DSL is:

  • based on C++ user defined attributes (those attributes can be used on interfaces or on their methods)

  • made to assist/extend the code generation

  • usable in SolAR interface declarations

Currently, the DSL helps to:

  • remove ambiguities

  • maintain proxy and server components UUID stability

  • specify interfaces to ignore

In future releases, the DSL will also allow specifying different remoting modes for methods (asynchronous, streaming…).

Table 1. XPCF remoting DSL description

Attribute | Argument type | Apply to | Semantic
[[xpcf::ignore]] | none | class or method level | specify that the corresponding class/method must be ignored while generating remoting code
[[xpcf::clientUUID("uuid_string")]] | string | class level | specify the XPCF remoting client UUID
[[xpcf::serverUUID("uuid_string")]] | string | class level | specify the XPCF remoting server UUID
[[grpc::server_streaming]], [[grpc::client_streaming]] or [[grpc::streaming]] | none | method level | specify that the corresponding method uses gRPC streaming (server-side, client-side or bidirectional)
[[grpc::request("requestMessageName")]] | string | method level | optionally set the gRPC request message name
[[grpc::response("responseMessageName")]] | string | method level | optionally set the gRPC response message name
[[grpc::rpcName("rpcname")]] | string | method level | optionally set the gRPC method RPC name

For example, the following code shows how to set the client and server UUIDs for an XPCF remote interface:

class [[xpcf::clientUUID("fe931815-479c-4659-96c3-09074747796d")]]
      [[xpcf::serverUUID("2ee889a4-bb14-46a0-b6e9-a391f61e4ad0")]]
    IMessage: virtual public org::bcom::xpcf::IComponentIntrospect {
      ...
};
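
As a complementary illustration, the following sketch (a hypothetical interface; names, UUIDs and message names are placeholders, and includes are omitted) shows how the method-level attributes of the table above can be combined with the class-level UUIDs:

// Hypothetical interface used only to illustrate the DSL attributes;
// names, UUIDs and message names are placeholders
class [[xpcf::clientUUID("00000000-0000-0000-0000-000000000001")]]
      [[xpcf::serverUUID("00000000-0000-0000-0000-000000000002")]]
    IImageStream : virtual public org::bcom::xpcf::IComponentIntrospect {
public:
    // ignored while generating remoting code
    [[xpcf::ignore]]
    virtual void debugDump() = 0;

    // custom gRPC request/response message names and RPC name
    [[grpc::request("PushImageRequest")]]
    [[grpc::response("PushImageResponse")]]
    [[grpc::rpcName("PushImage")]]
    virtual FrameworkReturnCode pushImage(const SRef<datastructure::Image> image) = 0;
};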

Runtime behaviour

This diagram shows the tasks performed when starting the XPCF client and server applications:

XPCF remoting runtime behaviour: initialization
Figure 5. XPCF remoting behaviour: initialization

On client side:

  1. the client configuration file is used to configure the channel url of each XPCF proxy component

  2. XPCF proxy components are injected into corresponding SolAR component interfaces

On server side:

  1. the server configuration file is used to:

    • configure each SolAR component with its dedicated parameters

    • configure the channel url of each XPCF server component

  2. SolAR components are injected into corresponding XPCF server components interfaces

  3. server components are injected as IGrpcService services

  4. the GrpcServerManager registers every IGrpcService to gRPC

  5. the GrpcServerManager runs the gRPC server on the given address:port (either with a configuration parameter or through the environment variable XPCF_GRPC_SERVER_URL)
    ⇒ without configuration, the server listens to "0.0.0.0:50051"

Then, the next diagram shows the tasks performed by the XPCF client and server applications at runtime:

XPCF remoting behaviour: runtime
Figure 6. XPCF remoting behaviour: runtime
  1. the client application calls a service through its SolAR component interface

  2. the interface is bound to the XPCF proxy component: the service is called on the proxy and forwarded to gRPC

  3. the gRPC request is caught by the corresponding XPCF server component

  4. the call is forwarded from the XPCF server component to the concrete SolAR component implementation (through its interface)

If the function called from the SolAR component interface returns a value, this value will then be sent back to the client, following the reverse path. Additionally, if some parameters of this function are output parameters, their new values will also be sent back to the client. The call to such a service is synchronous, which means that the client will be blocked until it retrieves the response.

Containerization of SolAR pipelines using Docker

Introduction to containers and Docker Engine

Before we can deploy a SolAR spatial computing pipeline in the cloud, we need to make that pipeline run in an independent software unit that contains both its code and all of its dependencies, and also offers a way to communicate with it. This will be possible using a docker container.

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure. Container images become containers at runtime, when they run on Docker Engine.

Applications running in docker containers
Figure 7. Applications running in docker containers

Docker Engine acts as a client-server application that hosts images, containers, networks and storage volumes. The engine also provides a client-side Command-Line Interface (CLI) that enables users to interact with the daemon through the Docker Engine API.

Docker Engine overview
Figure 8. Docker Engine overview

To allow external access to a docker container (i.e. to a spatial computing service interface), it is possible to expose its port by mapping it to an external port on the host. Docker containers have an internal network, and each container is associated with an IP address that can be accessed from the Docker host machine. Being an internal IP, it cannot be used to access the containers from an external network, but the Docker host machine’s main IP is accessible from outside. So, the solution is to bind the internal port of the Docker container to a port on the host machine.

Docker port mapping
Figure 9. Docker port mapping

Docker is free software, licensed under the Apache v2 licence, and distributed as an open source project since 2013.

More information on: https://www.docker.com/

Using docker containers for SolAR pipelines

As explained in the paragraph XPCF remoting framework global architecture, the XPCF framework offers a fully functional software architecture to manage SolAR pipelines as server applications:

  • XPCF provides a server application named "xpcf_grpc_server" to host server side components (the spatial computing pipeline)

  • the xpcf_grpc_server application automatically registers the pipeline interface to gRPC, to allow remote calls from clients

  • the xpcf_grpc_server application automatically runs a gRPC server using predefined url and credentials ("0.0.0.0:50051" with “insecure” channel credentials by default)

So, this software bundle can then be easily deployed in a docker container, as shown in the following diagram:

SolAR spatial computing pipeline running in a docker using XPCF
Figure 10. SolAR spatial computing pipeline running in a docker using XPCF

In order to deploy any remote pipeline, three elements must be provided to the xpcf_grpc_server application:

  • the XPCF server configuration file defining:

    • the list of XPCF remoting server components (implementing IGrpcService interface)

    • the declarations and configurations of concrete modules and components (SolAR spatial computing pipeline)

  • the XPCF remoting server module, which provides the server side stubs, implements the gRPC interfaces and will receive, through injection, the concrete implementation of SolAR components

  • the concrete modules and components of the SolAR pipeline

The generation of the application is managed by the automated bundling of modules offered by XPCF, based on remaken mechanisms (see SolAR Architecture document). The resulting bundle can then be easily deployed to a docker container.

remaken syntax for XPCF bundle generation:

remaken bundleXpcf -d <destination directory> <XPCF xml declaration file>

Then, the application build directory must be copied to the new bundle directory, and the app is ready to run.

Management of ARCloud services using Kubernetes

Introduction to Kubernetes

Kubernetes (commonly called K8s) is an open source orchestrator for deploying containerized applications. This platform provides the software necessary to successfully build and deploy reliable, scalable distributed systems. It works with a variety of containerization tools, and is often used with Docker (see Introduction to containers and Docker Engine).

Clusters

The main abstraction on which Kubernetes is based is the cluster, which is the group of machines running Kubernetes and the containers it manages.

Nodes

Each cluster contains Kubernetes nodes. They can be physical or virtual machines, and are considered "worker machines". Nodes run pods: the most basic Kubernetes objects that can be created or managed.

Pods

The basic unit of scheduling in K8s is called a pod, which is an abstract view of containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the same host machine and can share its resources. Each pod in K8s has a unique IP address (inside the cluster), which allows applications to use machine ports without risk of conflict. A pod can define a volume, such as a directory on a local drive or on the network, and expose it to the containers in that pod. Pods can be defined through the Kubernetes API. Their management can also be delegated to a controller. Pods are attached to the node that deploys them until they expire or are deleted. If a pod is faulty, a new pod with the same properties will be scheduled on another available node.

The following figure shows an overview of a cluster:

Kubernetes: overview of a cluster
Figure 11. Kubernetes: overview of a cluster

The next figure zooms inside a pod to show how containers are managed, and how they can access local volumes and communicate with external applications using port exposition:

Kubernetes: containers inside pods
Figure 12. Kubernetes: containers inside pods

Labels and selectors

Kubernetes allows clients (users and internal components) to attach key-value pairs called labels to any API object in the system, such as pods and nodes. Correspondingly, label selectors are queries made on the labels attached to objects. Labels and selectors are the primary grouping mechanism in K8s, and are used to determine which components an operation applies to.

Controllers

A controller is a reconciliation loop that drives the current state of a cluster towards its desired state. It does this by managing a set of pods. One type of controller, the replication controller, handles replication and scaling by launching a specified number of copies of a pod on the cluster. It also handles the creation of replacement pods if the underlying node fails. Two other controllers that are part of the Kubernetes core system are the DaemonSet controller, which launches a single pod on each machine (or a subset of machines), and the Job controller, which launches pods that have a specific purpose (e.g. scripts). The set of pods that a controller manages is determined by the label selectors that are part of the controller definition.

Services

A Kubernetes service is a group of pods working together, for example one layer in a multi-layered application. All the pods that make up a service are defined by a label selector. Kubernetes provides a discovery and routing service by assigning an IP address and a domain name to a service, and load-balances the traffic addressed to it across all pods matching the selector. By default, a service is exposed inside a cluster, but it can also be exposed outside of a cluster.

Ingress

An Ingress is an API object that manages external access to the services in a cluster. In that way, Ingress exposes HTTP or HTTPS routes from outside the cluster to services within the cluster. Moreover, Ingress may provide load balancing and manage traffic routing based on rules defined on its resource.

The following figure shows how Ingress routes incoming requests to services, and therefore to pods:

Kubernetes: Ingress as a router to pods
Figure 13. Kubernetes: Ingress as a router to pods

Using Kubernetes to manage ARCloud services

Once SolAR pipelines have been generated as "remote" bundles, integrating gRPC remote access management and data serialization (see Remote access to SolAR pipelines using XPCF remoting framework), and encapsulated in Docker containers (see Containerization of SolAR pipelines using Docker), these pipelines are ready to be deployed as ARCloud services using the Kubernetes platform.

To do that, each SolAR pipeline bundle will be deployed on a dedicated pod, and its interface will then be exposed to other services or client applications through a Kubernetes service (a service can bundle several SolAR pipelines to provide more powerful spatial computing services to AR client applications). External requests to ARCloud services will be managed by Ingress objects.

This implementation of Kubernetes for ARCloud can be schematized as follows:

Kubernetes: ARCloud implementation
Figure 14. Kubernetes: ARCloud implementation

Service chaining management with Istio

A particular need for ARCloud is to be able to chain together multiple ARCloud services to provide more powerful spatial computing processing to AR client applications, as shown in the following figure:

ARCloud service chaining
Figure 15. ARCloud service chaining

At this time, to deploy ARCloud services at the Kubernetes level, these rules have been defined:

  • one Pod runs one service (i.e. one container): many similar pods are possible (replicas), and the placement of services on computing machines is based on declarative conditions

  • a Service exposes the gRPC endpoint of the first component of the service (interface of the SolAR pipeline)

  • an Ingress exposes the gRPC endpoint of the first service of the chain (the external interface of the multi-pipeline spatial computing processing)

So, respecting these rules, the expected behavior for chained services can be represented like this:

ARCloud Kubernetes chaining
Figure 16. ARCloud Kubernetes chaining

But this solution is not sufficient to fully meet the need to chain services in the cloud. Indeed, several AR devices can simultaneously use the same service, so each chained service must be able to keep the context linked to the calling AR device, in order to guarantee the consistency of the data processed throughout the chaining.

As service instantiations can handle a specific context (related to a given AR device), chaining must be defined at the service instantiation level (pods) and not only at the service level. For this purpose, a Unique Identifier (UID) will be defined for each AR device, and used to chain service instances together, keeping the context of the device. Note that all these services are asynchronous.

To implement this solution, the ARCloud platform uses Istio, an open source service mesh offering tools to connect, secure, control and observe services. Istio is able to manage microservices as a service mesh, defining links between these services and managing interactions between them. Istio is also able to handle service chaining at pod level to ensure that service instantiations dedicated to a given AR device can be chained together and that the instantiation of services can run in an asynchronous way.

More information about Istio on the official web site: https://istio.io/latest/docs/concepts/what-is-istio/

Using Istio to handle service chaining while keeping the context of the calling AR device, the ARCloud architecture can be represented like this:

Service chaining based on Istio
Figure 17. Service chaining based on Istio

The first resource deployed is an Istio gateway that describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.

An Istio virtual service can then be bound to the gateway to control the forwarding of traffic arriving at a particular host/service or gateway port. Each virtual service resource will forward incoming HTTP traffic, i.e. gRPC requests, to a destination service defined by the corresponding Istio destination rule resource.

The other Istio virtual services define the chaining of the Kubernetes services inside the cluster.

Istio destination rule resources define policies that apply to traffic intended for Kubernetes services after routing has occurred. A destination rule is defined for each service, to load balance incoming gRPC requests across the attached pods. A consistent-hash-based load balancing policy, declared in the load balancing rule, is used to provide soft session affinity based on the AR device UID (sent in the HTTP header). All gRPC requests sent by a particular device will thus be handled by the same pod at AR service level.

ARCloud deployment over SUPRA

What is SUPRA?

For a cloud platform like ARCloud, taking into account current trends for this type of platform, some requirements are particularly relevant:

  • optimized edge computing infrastructure to minimize bandwidth between AR devices and ARCloud services, and reduce latency

  • Kubernetes to manage container orchestration

  • bare metal servers to optimize performance and reduce costs

  • GPU in addition to CPU to increase performance

To meet these requirements, SUPRA is a complete cloud stack architecture defined by b<>com for Telco, AI, AR, VR and e-Health applications on bare metal infrastructure. In particular, SUPRA incorporates the following concepts, essential for ARCloud:

  • Container as a Service (CaaS) for cloud native applications

  • Monitoring as a Service (MaaS)

  • GPU as a Service to access GPU resources to accelerate specific computations

but also:

  • Field Programmable Gate Arrays (FPGA) as a Service

  • AI as a Service

ARCloud platform over SUPRA infrastructure
Figure 18. ARCloud platform over SUPRA infrastructure

Wireless Edge Factory

Wireless Edge Factory (WEF) is a cloud native wireless private network solution offering end-to-end tailored communication services that are deployed and run in a fully secure manner. The solution makes it possible to build optimized, cooperative and coordinated networks and Edge Clouds for verticals.

Designed by b<>com, WEF provides:

  • a performant Cloud Native 5G end-to-end network solution, optimized to leverage Edge infrastructure

  • orchestration of business services (Networking + Bus Apps)

  • adaptive user plane:

    • hardware/software depending on context (traffic, use cases, …)

    • cloud native: localization, dynamic placement, RAN aggregation

  • methods and algorithms for dynamic and adaptive placement of Cloud Native Network Functions in multiple Edge Clouds

The WEF platform offers a set of Cloud Network Functions (CNF) deployed on Kubernetes Virtual Infrastructure Manager (VIM).

The ultimate goal of this solution is to propose full 5G SA connectivity.

WEF is used in the ARCloud context to manage connections between 5G-compatible AR devices and the cloud computers that host SolAR spatial computing services.

WEF global architecture
Figure 19. WEF global architecture

Functional architecture

The ARCloud services must be deployed on several machines, according to the technical criteria necessary for their optimum processing (CPU, GPU, memory…​). To easily operate these deployments, SUPRA provides common functions, such as:

  • Container orchestration across multiple hosts

  • Service discovery and load balancing

  • Secret and configuration management

  • Storage orchestration

  • Control of external access to services

  • Observability

Moreover, AR devices should be able to access ARCloud services over a 5G network, to benefit from a fast and reliable connection to the cloud. That is the role of the Wireless Edge Factory (WEF) solution.

Thus, by applying the functional architecture principles recommended by SUPRA, integrating 5G access with WEF, the deployment of ARCloud services can be represented as follows:

ARCloud over SUPRA: functional architecture
Figure 20. ARCloud over SUPRA: functional architecture

Technical architecture

The ARCloud platform consists of several services interconnected with each other, providing high-level and powerful spatial computing processing, running on the cloud platform. This aims to speed up the computation and to avoid depending on the resources of AR devices. Instead of performing complicated calculations on the device, the AR application simply transfers data (such as a video stream or a set of images) to the ARCloud platform, which then sends back the resulting data (for example, a rendered video stream to display to the end-user).

Technically, to meet the requirements defined for the ARCloud platform, the whole solution must provide tools to:

  • chain AR services (pods) at network level, without impacting pod configuration

  • manage session affinity (to keep an AR device context when requesting a spatial computing service)

The following figure shows the technical elements required for the operation of the ARCloud platform:

ARCloud over SUPRA: technical architecture
Figure 21. ARCloud over SUPRA: technical architecture

A single ARCloud service sample: the mapping pipeline

Mapping pipeline description

This sample is based on a spatial computing pipeline previously developed using the SolAR Framework (see document SolAR Architecture).

The objective of this "mapping" pipeline is to build a 3D sparse map surrounding the AR device for visualization and relocalization, based on images and their poses provided by the tracking system of the AR device.

A sparse map component in SolAR Framework consists of the following storage components:

  • Identification

  • Coordinate system

  • Point cloud manager

  • Keyframe manager

  • Covisibility graph

  • Keyframe retriever

The figure below shows the class diagram of the sparse map, including component interfaces and implementations. Boost library serialization is used to save and load map data to external storage files. Moreover, this serialization allows map data to be easily exchanged between services.

Class diagram of the sparse map developed using SolAR Framework
Figure 22. Class diagram of the sparse map developed using SolAR Framework

The processing of this pipeline is divided into three main steps:

  1. The bootstrap: This step aims to define the first two keyframes and to triangulate the initial point cloud from the matching keypoints between these keyframes. Before that, this step computes the 3D transformation between the fiducial marker and the AR device; this transformation is then applied to express the poses given by the device in the reference coordinate system of the fiducial marker.

  2. The mapping: Firstly, this step updates the visibilities of the features in each input frame with the 3D point cloud. Then, it tries to detect new keyframes based on these visibilities. Once a new keyframe is found, it is triangulated with its neighboring keyframes to create new 3D map points. Finally, a local bundle adjustment is performed to optimize the local map points and the poses of keyframes.

  3. The loop closure: Although the tracking system built into the AR device provides highly accurate poses, pose errors still accumulate over time. Therefore, the loop closure process optimizes the map and the keyframe poses, and avoids creating a redundant map when AR devices return to an already mapped area. The loop closure process includes three components: loop detection, loop correction and loop optimization.

So, to initialize the mapping pipeline processing, a device must first provide two pieces of data:

  • the characteristics of the camera it uses (resolution, focal length)

  • the description of the fiducial marker to track (matrix and dimensions)

Then, the pipeline is able to process images and poses. To do this, some input data are needed:

  • the images captured by the device (images are sent to the pipeline one by one)

  • the relative position and orientation of the capture device, for each image

And finally, after the pipeline processing, the output data are:

  • the current map of the place calculated by the pipeline from the first image to the current one (in fact, a point cloud)

  • the recalculated positions and orientations of the capture device, inside this point cloud, from the first image to the current one (in fact, only for some keyframes determined by the pipeline).

To facilitate the use of this pipeline by any client application embedded in a device, it offers a simple interface based on the SolAR::api::pipeline::IMappingPipeline class (see https://solarframework.github.io/create/api/ for interface definition and data structures).
This interface is defined as follows:

/// @brief Set the camera parameters
/// @param[in] cameraParams: the camera parameters (its resolution and its focal)
/// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
virtual FrameworkReturnCode setCameraParameters(const datastructure::CameraParameters & cameraParams) = 0;
/// @brief Request to the mapping pipeline to process a new image/pose
/// Retrieve the new image (and pose) to process, in the current pipeline context
/// (camera configuration, fiducial marker, point cloud, key frames, key points)
/// @param[in] image: the input image to process
/// @param[in] pose: the input pose to process
/// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
virtual FrameworkReturnCode mappingProcessRequest(const SRef<datastructure::Image> image,
                                                  const datastructure::Transform3Df & pose) = 0;
/// @brief Provide the current data from the mapping pipeline context for visualization
/// (resulting from all mapping processing since the start of the pipeline)
/// @param[out] outputPointClouds: pipeline current point clouds
/// @param[out] keyframePoses: pipeline current keyframe poses
/// @return FrameworkReturnCode::_SUCCESS if data are available, else FrameworkReturnCode::_ERROR_
virtual FrameworkReturnCode getDataForVisualization(std::vector<SRef<datastructure::CloudPoint>> & outputPointClouds,
                                                    std::vector<datastructure::Transform3Df> & keyframePoses) const = 0;
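
To show how a client application could drive this interface, here is a hedged sketch; the include path, the way the IMappingPipeline instance is obtained (resolved and injected through XPCF), and the acquireImageAndPose() capture function are assumptions for illustration only:

// Hedged sketch: driving the remote mapping pipeline through its SolAR interface
#include <vector>
#include "api/pipeline/IMappingPipeline.h"   // assumed include path

using namespace SolAR;
using namespace SolAR::datastructure;

// hypothetical function provided by the AR device integration layer
bool acquireImageAndPose(SRef<Image> & image, Transform3Df & pose);

FrameworkReturnCode runMapping(api::pipeline::IMappingPipeline & pipeline,
                               const CameraParameters & cameraParams)
{
    // 1. initialize the pipeline with the camera characteristics (resolution, focal)
    if (pipeline.setCameraParameters(cameraParams) != FrameworkReturnCode::_SUCCESS)
        return FrameworkReturnCode::_ERROR_;

    // 2. send the captured images and poses one by one; each call is synchronous,
    //    so the client is blocked until the remote service answers
    SRef<Image> image;
    Transform3Df pose;
    while (acquireImageAndPose(image, pose)) {
        pipeline.mappingProcessRequest(image, pose);
    }

    // 3. retrieve the current point cloud and keyframe poses for visualization
    std::vector<SRef<CloudPoint>> pointCloud;
    std::vector<Transform3Df> keyframePoses;
    return pipeline.getDataForVisualization(pointCloud, keyframePoses);
}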

To complete the description of the mapping pipeline, the following diagram shows the different steps it implements, from initialization to the constitution of the point cloud:

Mapping pipeline description
Figure 23. Mapping pipeline description

The mapping pipeline sample is available on GitHub (multithreading version): https://github.com/SolarFramework/Sample-Mapping/tree/0.11.0/Mapping/SolARPipeline_Mapping_Multi

The mapping pipeline in ARCloud architecture

Deploying the mapping pipeline in the cloud, based on the ARCloud architecture, gives AR device applications an easy way to use this powerful SolAR Framework spatial computing algorithm, regardless of the type, location, and hardware/software configuration of the device, while taking advantage of the computing capacity of remote machines dedicated to this resource-intensive processing, with latency low enough to allow almost real-time user feedback.

Using the XPCF remoting framework, the mapping pipeline will be embedded into a server application handling external requests with a gRPC Service, based on the SolAR Framework interface implemented by this pipeline. On the AR client application side, a gRPC Stub, also based on the same SolAR Framework interface, will manage requests to the remote mapping pipeline. This software architecture is illustrated in the following diagram:

ARCloud mapping service description
Figure 24. ARCloud mapping service description

Based on this architecture, the data flows between AR client applications and ARCloud remote components are described in the following sequence diagram:

ARCloud mapping service sequence diagram
Figure 25. ARCloud mapping service sequence diagram

The multithreaded mapping pipeline remote server project is available on GitHub: https://github.com/SolarFramework/Service-Mapping/tree/0.11.0/SolARService_Mapping_Multi

Two test clients, one sending images to the server and the other getting the point cloud and poses from the server, are also available on GitHub.

A full ARCloud service: the Map Update service

The Map Update service description

The Map Update service is actually a set of SolAR Framework based microservices deployed on the ARCloud platform to provide a global map management solution for any AR application. Its principle is to collect images captured by AR devices to calculate the map (point cloud) of the real environment in which each device moves, and then to build, by enrichment, a global map of all places where AR applications have been used. Then, this service will be able to quickly relocalize a device in this global map. Moreover, it will allow the image capture device to be precisely repositioned and oriented in the point cloud that surrounds it.

Thus, this figure represents the different microservices that contribute to the Map Update service, as well as the data flows between them:

ARCloud map update service description
Figure 26. ARCloud map update service description
  1. The Relocalization Service gets the whole global map from the Global Map service to initialize its context

  2. The AR Device starts sending pairs of images and poses to the Tracking Service

  3. The Tracking Service requests the Relocalization Service to find the initial pose of the device

  4. If the relocalization is successful, the Tracking Service updates the pose provided by the AR Device to define it relative to the coordinate system of the global map using the pose provided by the relocalization service. Then, it sends the corrected poses to the AR Device application, and the pairs of images and corrected poses to the Mapping Service

  5. The Mapping Service attempts to create a local map in the global map coordinate system:

    • if successful, it sends this map to the Map Update Service then cleans the local map

    • in case of failure, it creates a floating map, sends it to the Map Overlap Detection Service and then cleans this map

  6. The Map Update Service gets the global map from the Global Map service, and the local maps from the Mapping Service or the Map Overlap Detection Service to:

    • merge the local maps into the global map (fusion)

    • detect and update map changes (update)

    • remove redundant cloud points and keyframes (pruning)

    • process loop closure and bundle adjustment (optimization)

  7. The Map Overlap Detection Service gets the global map from the Global Map service and the floating maps from the Mapping Service to:

    • detect map overlaps

    • process map transformations

    • and finally transform the floating maps into local maps and send them to the Map Update Service

Conclusion

The ARCloud architecture, based on the SolAR Framework, provides a complete and powerful solution for designers of AR applications through an efficient set of always up-to-date spatial computing pipelines, deployed in the cloud, and available anytime, anywhere. Furthermore, these remote pipelines give users of AR applications a better experience, by pooling all the individual experiences to build the beginnings of a global digital twin.

For creators of AR vision components and pipeline assemblers, the ARCloud solution offers a complete environment, from the writing of complex spatial computing algorithms to full vision pipeline deployment:

  • a rich framework, SolAR, offering ready-to-use components and all the tools to design new components based on predefined functionalities, with documentation and support

  • processes and tools to:

    • organise components in modules, and assemble them to create high-level pipelines

    • make these pipelines accessible remotely, managing the serialization of complex data structures

    • generate remote pipeline bundles to encapsulate them in autonomous docker containers

    • deploy and manage spatial computing services, using Kubernetes, on a complete cloud stack architecture, SUPRA, ready for 5G access

But the major asset of the ARCloud solution lies in its community which will make this platform ever richer and more efficient.

Glossary

b<>com

Technology Research Institute focused on innovation in areas like Artificial Intelligence, immersive video and audio, content protection, 5G networks, Internet of Things, cognitive technologies (see https://b-com.com/en)

AI

Artificial Intelligence

API

Application Programming Interface

AR

Augmented Reality

BA

Bundle Adjustment

CaaS

Container as a Service

CLI

Command-Line Interface

CNF

Cloud Network Functions

DLL

Dynamic Link Library

DSL

Domain Specific Language

GPU

Graphics Processing Unit

gRPC

general-purpose Remote Procedure Calls

GUI

Graphical User Interface

IDE

Integrated Development Environment (Qt Creator, Visual Studio…​)

IT

Information Technology

K8s

Kubernetes

MaaS

Monitoring as a Service

PaaS

Platform as a Service

SDK

Software Development Kit

SLAM

Simultaneous Localization And Mapping

SolAR

Solution for Augmented Reality (b<>com)

SWIG

Simplified Wrapper and Interface Generator

UUID

Universally Unique IDentifier

VIM

Virtual Infrastructure Manager

VR

Virtual Reality

WEF

Wireless Edge Factory (b<>com)

XML

Extensible Markup Language

XPCF

Cross-Platform Component Framework (b<>com)