Generic cloud architecture for SolAR

Deploying SolAR pipelines in the cloud allows any AR application to rely on powerful spatial computing services that run not locally, but on remote servers reached over an internet connection.

The next diagram shows a global, high-level view of the connections between AR devices and spatial computing services hosted on cloud servers:

SolAR Framework global cloud vision

Each service has its own characteristics:

  • it can be called synchronously or asynchronously, or in streaming mode

  • its processing may require specific technical criteria applied to the CPU, GPU, memory…​

  • it may need to store a local context linked to a specific device

  • it may also need to store permanent local data

  • it can be chained to another service (i.e. service chaining must be managed by the platform)

  • etc.

External tools used to generate services

  • Boost: this C++ library provides facilities to deconstruct a set of data structures into a sequence of bytes (serialization), and to reconstitute an equivalent structure in another program context (deserialization).
    More information on: https://www.boost.org/doc/libs/1_74_0/libs/serialization/doc/index.html
    Note: Boost serialization is the default solution provided by the XPCF remoting framework to ensure serialization and deserialization of data structures, but the framework has been designed so that any other solution can be used for this feature if needed.

  • gRPC: This open source remote procedure call (RPC) system uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages.
    More information on: https://www.grpc.io/
    Note: This tool is embedded in the XPCF remoting framework.

  • Docker: This set of Platform as a Service (PaaS) products uses OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines.
    More information on: https://www.docker.com/
    Note: The container generation is managed by the XPCF remoting framework.

  • Kubernetes: K8s is an open-source container-orchestration system. It defines a set of building blocks, which collectively provide mechanisms that deploy, maintain, and scale applications based on CPU, GPU, memory or custom metrics. K8s is loosely coupled and extensible to meet different workloads.
    More information on: https://kubernetes.io/
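For instance, deploying a containerized SolAR pipeline service on Kubernetes could rely on a Deployment manifest such as the following sketch (the image name, port and resource figures are purely illustrative, not part of the SolAR platform):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: solar-mapping-pipeline          # illustrative service name
spec:
  replicas: 2                           # Kubernetes scales this number up or down
  selector:
    matchLabels:
      app: solar-mapping-pipeline
  template:
    metadata:
      labels:
        app: solar-mapping-pipeline
    spec:
      containers:
      - name: pipeline
        image: registry.example.com/solar/mapping-pipeline:latest  # hypothetical image
        ports:
        - containerPort: 50051          # gRPC server port
        resources:
          limits:
            cpu: "4"                    # technical criteria applied to CPU, memory...
            memory: 8Gi
```

Scaling based on CPU, GPU, memory or custom metrics is then a matter of adjusting `replicas`, or of attaching an autoscaler to this Deployment.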

All of these tools will work together to constitute the cloud platform based on the SolAR Framework, as shown on this diagram:

Open source tools used by the cloud platform based on the SolAR Framework

Remote access to SolAR pipelines using XPCF remoting framework

The SolAR Framework relies heavily on XPCF framework features (component configuration and injection, module loading at runtime, interface introspection…​), as described in the SolAR Framework architecture documentation.
Since every SolAR component follows the XPCF interface and component design, a common approach can be used to automate the distribution of any XPCF component (including SolAR components) across the network, through its interfaces.
The goal is to provide the ability to distribute the SolAR Framework (either a single component, or meta-components such as pipelines) across the network with each part becoming a service.

This approach has led to improvements to the XPCF framework so that remote support can be generated from any interface declaration, resulting in a new version: the XPCF remoting framework.

XPCF remoting framework (version 2.5.0) detailed presentation and code are available on this GitHub repository: https://github.com/b-com-software-basis/xpcf/

XPCF remoting framework global architecture

This diagram shows the generic software architecture of the XPCF remoting framework used to enable remote access to any SolAR spatial computing pipeline (based on XPCF components) deployed in the cloud:

XPCF remoting architecture overview

The client application can be any AR application running on a device (computer, mobile phone, tablet, AR smartglasses…​) and relying on certain SolAR spatial computing pipelines available in the cloud.

The server application is an XPCF application running a gRPC server at a given URL.

SolAR Framework elements

SolAR component interfaces correspond to the interfaces proposed by the SolAR pipelines to any client application (i.e. entry points to remote services). They are based on SolAR Framework pipeline interfaces (see API reference).

SolAR component implementations correspond to components that run the algorithms of the spatial computing pipelines. They are not visible to client applications.

The remoting functionality works for any interface that:

  • uses data structures (see SolAR Framework architecture documentation) that declare one of the serialization methods exposed through XPCF specifications (one of the methods provided in xpcf/remoting/ISerializable.h)
    Note: in version 2.5.0 only Boost serialization is handled, upcoming minor releases of XPCF will provide support for the other methods

  • uses basic C++ types only

Pointers are not supported: a pointer may reference an array, which would require an associated size, and no reliable assumption can be made to detect such cases. Moreover, interfaces represent contracts between a user and a service, so they should be self-explanatory; relying on pointers does not make the expected behavior explicit. Interfaces should therefore rely on structured types, standard types (string, vectors…​) or user-defined types (structures, classes).

XPCF Framework elements

The GrpcServerManager uses XPCF to find every component implementing the IGrpcService interface and registers them. It is used to start the server application.

The IGrpcService interface defines the methods needed to manage any XPCF server component, and is used to register these components at run time (i.e. each XPCF server component implements this interface).

Elements generated by the XPCF remoting framework

The client configuration file contains the information needed to manage gRPC communications on the client side, such as channel URLs (the IP address and port of the server hosting the concrete code, for each interface).

An XPCF proxy component is created for each SolAR interface and implements this interface. This component transforms the interface input/output methods into gRPC requests/responses, managing the serialization and deserialization of the data. It calls the gRPC service and RPC methods corresponding to the methods of the interface. This component is configured with a channel URL (given by the configuration file) and a gRPC credential.

The server configuration file contains the server channel URL for each interface, as well as the parameters of each component used on the server side.

An XPCF server component is created for each SolAR interface, and implements this interface. It inherits from xpcf::IGrpcService, which allows the GrpcServerManager to register it. Moreover, each RPC method of the xpcf::IGrpcService class is implemented so that gRPC requests/responses are translated to the SolAR component input/output methods with the correct data types (serialization/deserialization). The concrete SolAR component is injected into this component.

gRPC proto files (text files with .proto extension) are generated by the XPCF remoting framework for each SolAR interface. These files define the structure of the data to serialize. They will be used by the protocol buffer compiler protoc to generate gRPC Stubs and gRPC Services.
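As an illustration, such a proto file could look like the following sketch (the package, message and service names are hypothetical, not the actual generator output):

```proto
syntax = "proto3";

package grpcIExamplePipeline;   // hypothetical package derived from the interface name

message initRequest {
  string configuration = 1;     // serialized method parameter
}

message initResponse {
  int32 status = 1;             // serialized return value
}

service grpcIExamplePipelineService {
  rpc init (initRequest) returns (initResponse);
}
```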

Process to generate a remote SolAR pipeline

  1. Generate the compilation database from the SolAR project (for example, using Qt Creator: Build→Generate Compilation Database).

  2. From this compilation database, generate the remoting code for the SolAR Framework interfaces, using the xpcf_grpc_gen tool included in the XPCF remoting framework.
    This tool parses all the SolAR interfaces and creates a Qt Creator development project to compile all the generated code in an XPCF module. It also creates a client and a server XPCF configuration file (see next diagram).
    The xpcf_grpc_gen tool uses a forked version of cppast (with customization around templates parsing) and libclang.

  3. Build the generated XPCF remoting modules, using your IDE.

  4. Adapt client and server configuration files with needed services.

  5. Launch the xpcf_grpc_server application with the server configuration file and the modules.

XPCF remoting code generation

xpcf_grpc_gen syntax:

xpcf_grpc_gen -n <project name> -v <project version> -r <repository type> -u <host repository url> --std <C++ standard> --database_dir <compilation database path> --remove_comments_in_macro -o <destination folder> -g <message format>

Example:

xpcf_grpc_gen -n SolARFramework -v 0.10.0 -r build@github -u https://github.com/SolarFramework/SolARFramework --database_dir /home/user/Dev/SolAR/core/build-SolARFramework-Desktop_Qt_5_15_2_GCC_64bit-Release/ --std c++1z --remove_comments_in_macro -g protobuf -o ~/grpcGenSolar

The project generated by xpcf_grpc_gen for the entire SolAR Framework is available on GitHub: https://github.com/SolarFramework/SolARFrameworkGRPCRemote/tree/0.10.0

XPCF remoting Domain Specific Language

The XPCF remoting framework provides a Domain Specific Language (DSL) to ease the code generation.

This DSL is:

  • based on C++ user defined attributes (those attributes can be used on interfaces or on their methods)

  • made to assist/extend the code generation

  • usable in SolAR interface declarations

Currently, the DSL helps to:

  • remove ambiguities

  • maintain proxy and server components UUID stability

  • specify interfaces to ignore

In future releases, the DSL will make it possible to specify different remoting modes for methods (asynchronous, streaming…).

The DSL attributes are listed below, with their argument type, the level at which they apply, and their semantic:

  • [[xpcf::ignore]] (no argument, class or method level): specifies that the corresponding class/method must be ignored while generating remoting code

  • [[xpcf::clientUUID("uuid_string")]] (string, class level): specifies the XPCF remoting client UUID

  • [[xpcf::serverUUID("uuid_string")]] (string, class level): specifies the XPCF remoting server UUID

  • [[grpc::server_streaming]], [[grpc::client_streaming]] or [[grpc::streaming]] (no argument, method level): specifies the gRPC streaming mode of the method (server-side, client-side or bidirectional)

  • [[grpc::request("requestMessageName")]] (string, method level): optionally sets the gRPC request message name

  • [[grpc::response("responseMessageName")]] (string, method level): optionally sets the gRPC response message name

  • [[grpc::rpcName("rpcname")]] (string, method level): optionally sets the gRPC RPC name of the method

  • [[grpc::client_receiveSize("size")]] (string, class level): optionally sets the gRPC maximum size for received messages ("-1" for the highest value)

  • [[grpc::client_sendSize("size")]] (string, class level): optionally sets the gRPC maximum size for sent messages ("-1" for the highest value)

For example, the following code shows how to set the client and server UUID for a XPCF remote interface:

class [[xpcf::clientUUID("fe931815-479c-4659-96c3-09074747796d")]]
      [[xpcf::serverUUID("2ee889a4-bb14-46a0-b6e9-a391f61e4ad0")]]
    IMessage: virtual public org::bcom::xpcf::IComponentIntrospect {
      ...
};

Runtime behaviour

This diagram shows the tasks performed when starting the XPCF client and server applications:

XPCF remoting runtime behaviour: initialization

On client side:

  1. the client configuration file is used to configure the channel URL of each XPCF proxy component

  2. XPCF proxy components are injected into corresponding SolAR component interfaces

On server side:

  1. the server configuration file is used to:

    • configure each SolAR component with its dedicated parameters

    • configure the channel URL of each XPCF server component

  2. SolAR components are injected into the corresponding XPCF server components

  3. server components are injected as IGrpcService services

  4. the GrpcServerManager registers every IGrpcService to gRPC

  5. the GrpcServerManager runs the gRPC server at the given address:port (set either with a configuration parameter or through the environment variable XPCF_GRPC_SERVER_URL)

Then, the next diagram shows the tasks performed by the XPCF client and server applications at runtime:

XPCF remoting behaviour: runtime
  1. the client application calls a service through its SolAR component interface

  2. the interface is bound to the XPCF proxy component: the service is called on the proxy and forwarded to gRPC

  3. the gRPC request is caught by the corresponding XPCF server component

  4. the call is forwarded from the XPCF server component to the concrete SolAR component implementation (through its interface)

If the function called from the SolAR component interface returns a value, this value will then be sent back to the client, following the reverse path. Additionally, if some parameters of this function are output parameters, their new values will also be sent back to the client. The call to such a service is synchronous, which means that the client will be blocked until it retrieves the response.

Containerization of SolAR pipelines using Docker

Introduction to containers and Docker Engine

Before we can deploy a SolAR spatial computing pipeline in the cloud, we need to make sure that this pipeline runs in an independent software unit that contains both its code and all of its dependencies, and that also offers a way to communicate with it. This is made possible by a Docker container.

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure. Container images become containers at runtime, when they run on Docker Engine.
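As an illustration, packaging a SolAR pipeline server could rely on a Dockerfile such as this sketch (the base image, file paths and command-line arguments are hypothetical; only the XPCF_GRPC_SERVER_URL environment variable is part of the framework):

```dockerfile
# Hypothetical base image providing the required runtime libraries
FROM ubuntu:20.04

# Copy the pipeline server, its XPCF modules and its configuration into the image
COPY xpcf_grpc_server /usr/local/bin/
COPY modules/ /usr/local/lib/solar/modules/
COPY xpcf_server_config.xml /etc/solar/

# Address and port on which the embedded gRPC server listens
ENV XPCF_GRPC_SERVER_URL=0.0.0.0:50051
EXPOSE 50051

# Launch the server application with its configuration (illustrative arguments)
CMD ["xpcf_grpc_server", "/etc/solar/xpcf_server_config.xml"]
```

The resulting image bundles the pipeline code and all of its dependencies, and exposes a single gRPC port as its communication channel.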

Applications running in docker containers

Docker Engine acts as a client-server application that hosts images, containers, networks and storage volumes. The engine also provides a client-side Command-Line Interface (CLI) that enables users to interact with the daemon through the Docker Engine API.