Introduction

The purpose of this documentation is to present the ARCloud Services: a set of computer vision pipelines dedicated to Augmented Reality (AR) applications and ready to be deployed on a Cloud architecture.

First, we will present the pipelines on which these remote services are based. Then, we will describe how to configure and deploy the services on your Cloud architecture. Finally, we will explain how to test these services using sample applications.

Overview

Currently, ARCloud Services consist of six services working together to provide complete vision processing to AR client applications:

  • Mapping and Mapping No Drop: calculate the point cloud (sparse map) corresponding to a series of images captured by a camera, as well as the poses of the camera along its journey

  • Map Update: builds and consolidates a global map (sparse map) from all the partial maps resulting from the Mapping services

  • Relocalization: allows precise re-estimation of camera poses from captured images, based on the sparse submap provided by the Map Update service

  • Relocalization Markers: allows precise re-estimation of camera poses from captured images, based on marker detection (fiducial markers or QR Codes)

  • Mapping and Relocalization Front End: front-end service that relies on the Map Update, Relocalization, Relocalization Markers and Mapping services to calculate the sparse map corresponding to an end-user journey (Mapping), to transpose the corresponding point cloud into the SolAR coordinate system (Relocalization/Relocalization Markers), and finally to merge it into the global map (Map Update).

These services are therefore strongly linked, resulting in the data exchanges presented in the following diagram:

ARCloud services
Figure 1. ARCloud Services

The advantage of this set of services is to allow users to be precisely located in the 3D environment in which they move, using a camera integrated into their device (a mixed reality device such as a Hololens headset, a tablet, a mobile phone, etc.). Moreover, this set of services is capable of gradually building an accurate map of the real world to enable increasingly precise and fast localization of AR application users. Above all, this computer vision service greatly reduces the processing costs on the user’s device, since it runs in the Cloud, on servers with much higher processing capacities.

In this latest version, some services have been enhanced with GPU processing based on the Compute Unified Device Architecture (CUDA) interface provided by Nvidia. Thus, the ARCloud services set now contains CUDA versions of the Map Update, Relocalization, Mapping and Mapping No Drop services. These services have been tested on Nvidia A100 and Tesla T4 GPUs.

Global processing of ARCloud Services

This diagram shows, in chronological order, all the processing that is carried out to process a sequence of images captured by a camera:

ARCloud services processing
Figure 2. ARCloud Services processing
  1. Initializes the services by providing, among other things, the camera parameters.

  2. Calculates the first camera pose and the initial transformation matrix. If the global map is empty, tries to detect a marker in the captured images to determine the first camera pose (marker-based relocalization). Otherwise, tries to determine the first camera pose from the captured images using point matching (sparse-map-based relocalization). Then deduces the initial transformation matrix between the client’s coordinate system and that of ARCloud, and determines the sparse submap to use for mapping.

  3. Determines the submap for mapping. Initializes the mapping service with the sparse submap previously determined, to help merge the maps at the end of the process.

  4. Iteratively calculates the transformation matrix. Every “n” images, tries to calculate the transformation matrix using the marker-based and sparse-map-based Relocalization services.

  5. Sends images and transformed poses to mapping. Provides the mapping service with the images and poses given by the client, after applying the transformation matrix (see the sketch after this list).

  6. Merges the new partial sparse map. At the end of the mapping process, sends the new partial sparse map to the Map Update service to merge it into the current global map.
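
As a minimal illustration of steps 4 and 5, the pose correction applied before sending data to the Mapping service amounts to composing the client pose with the client-to-ARCloud transformation. The sketch below uses the SolAR Transform3Df type (based on Eigen); the left-multiplication convention is an assumption for illustration, not a statement about the service internals.

    // Hedged sketch of the pose correction in steps 4-5: once the transformation from
    // the client coordinate system to the ARCloud one is known, each device pose is
    // corrected before being sent to the Mapping service.
    SolAR::datastructure::Transform3Df transform3D;  // client -> ARCloud (from Relocalization)
    SolAR::datastructure::Transform3Df devicePose;   // pose provided by the client
    SolAR::datastructure::Transform3Df correctedPose = transform3D * devicePose;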

Contents of the ARCloud Services packages

This set of services is delivered as six packages (one for each service) containing all the files you need to deploy these computer vision pipelines on your own Cloud architecture.

These packages are complemented by Helm chart files, which provide all the information needed to deploy each ARCloud service on a Cloud architecture using Kubernetes, and by script files to launch these services on a local computer.

Additionally, test applications are available to test the ARCloud Services once deployed; they are described in this document.

The ARCloud Service Docker images, based on Ubuntu 18.04, are completely independent of external resources, and can be deployed on any Cloud infrastructure that supports Docker containers.

The Map Update Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-service:x.y.z)

  • the Docker image of the CUDA version of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-cuda-service:x.y.z)

The Relocalization Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/relocalization/0.11.0/relocalization-service:x.y.z)

  • the Docker image of the CUDA version of the remote service, available on a public Docker repository (repository.solarframework.org/docker/relocalization/0.11.0/relocalization-cuda-service:x.y.z)

The Relocalization Service depends on the Map Update Service, which must therefore already be deployed on your Cloud infrastructure (in its CUDA version if needed).

The Relocalization Markers Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/relocalization/0.11.0/relocalization-markers-service:x.y.z)

The Mapping Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-service:x.y.z)

  • the Docker image of the CUDA version of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-cuda-service:x.y.z)

The Mapping Service depends on the Map Update Service and the Relocalization Service, which must therefore already be deployed on your Cloud infrastructure (in their CUDA versions if needed).

The Mapping No Drop Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-service:x.y.z)

  • the Docker image of the CUDA version of the remote service, available on a public Docker repository (repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-cuda-service:x.y.z)

The Mapping No Drop Service depends on the Map Update Service and the Relocalization Service, which must therefore already be deployed on your Cloud infrastructure (in their CUDA versions if needed).

The Mapping and Relocalization Front End Service package

This package includes:

  • the Docker image of the remote service, available on a public Docker repository (repository.solarframework.org/docker/relocalization/0.11.0/mappingandrelocalizationfrontend-service:x.y.z)

The Mapping and Relocalization Front End Service depends on the Map Update, Relocalization, Relocalization Markers and Mapping services, which must therefore already be deployed on your Cloud infrastructure (this Front End Service can refer to either the CUDA or non-CUDA versions of these services).

The test applications package

This package includes the test application Docker images.

These images, based on Ubuntu 18.04, are completely independent of external resources, and can be run on any computer that supports the Docker engine.

Overview of computer vision pipelines

These computer vision pipelines have been developed using the SolAR Framework, an open-source framework under Apache v2 license dedicated to Augmented Reality.

The Map Update Pipeline

The objective of this map update pipeline is to build and maintain a global map from all local maps provided by each AR device, using the Mapping Service.

This pipeline is able to process sparse maps made up of the following data:

  • identification: to uniquely identify a map

  • coordinate system: the coordinate system used by the map

  • point cloud: set of 3D points constituting the sparse map

  • keyframes: set of keyframes selected among captured images to approximately cover all viewpoints of the navigation space

  • covisibility graph: graph whose nodes are keyframes, with an edge between two keyframes when they share a minimum number of common map points

  • keyframe retriever: set of nearest images (the most similar to a reference image) used to accelerate feature matching

This data is defined in a dedicated Map data structure, detailed in the API section of the SolAR website: https://solarframework.github.io/create/api/

For each local map, the processing of this pipeline is divided into three main steps:

  1. The overlap detection: if no relative transformation is known from the local map coordinate system to the global map coordinate system (in this case, the local map is considered a floating map), the pipeline tries to detect an overlap area between the two maps, in order to calculate the transformation to apply in the next step.

  2. The map fusion: this step merges the local map into the global map, using the same coordinate system. For that, the pipeline performs the following operations:

    • change local map coordinate system to that of the global map

    • detect duplicated cloud points between the two maps

    • merge cloud points and fuse duplicated points

    • merge keyframes and update the covisibility graph

    • merge keyframe retriever models

    • perform a global bundle adjustment

  3. The map update: This step ensures maximum consistency between the global map and the current state of the real world, by regularly updating the map to adjust to real-world evolutions (taking into account lighting conditions, moving objects, etc.).

To initialize the map update pipeline processing, a device must provide the characteristics of the camera it uses (resolution, focal length).

Then, the pipeline is able to process maps. To do this, the local map calculated from the images and poses captured by the camera of the device must be sent to it.

At any time, the current global map can be requested from the Map Update pipeline, which then returns a map data structure.

To facilitate the use of this pipeline by any client application embedded in a device, it offers a simple interface based on the SolAR::api::pipeline::IMapUpdatePipeline class (see https://solarframework.github.io/create/api/ for interface definition and data structures).
This interface is defined as follows:

    /// @brief Initialization of the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the init succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init() override;
    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode setCameraParameters(const datastructure::CameraParameters & cameraParams) override;
    /// @brief Start the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the start succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode start() override;
    /// @brief Stop the pipeline.
    /// @return FrameworkReturnCode::_SUCCESS if the stop succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode stop() override;
    /// @brief Request to the map update pipeline to update the global map from a local map
    /// @param[in] map: the input local map to process
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode mapUpdateRequest(const SRef<datastructure::Map> map) override;
    /// @brief Request to the map update pipeline to get the global map
    /// @param[out] map: the output global map
    /// @return FrameworkReturnCode::_SUCCESS if the global map is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getMapRequest(SRef<SolAR::datastructure::Map> & map) const override;
    /// @brief Request to the map update pipeline to get a submap based on a query frame.
    /// @param[in] frame the query frame
    /// @param[out] map the output submap
    /// @return FrameworkReturnCode::_SUCCESS if a submap is found, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getSubmapRequest(const SRef<SolAR::datastructure::Frame> frame,
                                         SRef<SolAR::datastructure::Map> & map) const override;
    /// @brief Reset the map stored by the map update pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the map is correctly reset, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode resetMap() override;
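
As an illustration, here is a minimal client-side sketch of a typical call sequence. The XPCF component resolution and the configuration file name are assumptions to adapt to your own setup; only the IMapUpdatePipeline calls come from the interface above.

    #include "xpcf/api/IComponentManager.h"
    #include "api/pipeline/IMapUpdatePipeline.h"

    namespace xpcf = org::bcom::xpcf;

    int main()
    {
        // Resolve the pipeline component through XPCF (the configuration file name is hypothetical)
        SRef<xpcf::IComponentManager> componentMgr = xpcf::getComponentManagerInstance();
        componentMgr->load("SolARServiceTest_MapUpdate_conf.xml");
        auto mapUpdatePipeline = componentMgr->resolve<SolAR::api::pipeline::IMapUpdatePipeline>();

        if (mapUpdatePipeline->init() != SolAR::FrameworkReturnCode::_SUCCESS)
            return -1;

        SolAR::datastructure::CameraParameters camParams; // fill in the device resolution and focal
        mapUpdatePipeline->setCameraParameters(camParams);
        mapUpdatePipeline->start();

        SRef<SolAR::datastructure::Map> localMap;   // a local map produced by the Mapping Service
        mapUpdatePipeline->mapUpdateRequest(localMap);

        SRef<SolAR::datastructure::Map> globalMap;  // the current global map, available at any time
        mapUpdatePipeline->getMapRequest(globalMap);

        mapUpdatePipeline->stop();
        return 0;
    }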

The map update pipeline project is available on the SolAR Framework GitHub: https://github.com/SolarFramework/Sample-MapUpdate/tree/0.11.0/SolARPipeline_MapUpdate

The Relocalization Pipeline

The goal of the Relocalization Pipeline is to estimate the pose of a camera from an image it has captured. For that, this service must carry out the following steps:

  1. Get a submap: this submap is managed by the Map Update Service (which must therefore already be deployed on your Cloud infrastructure). The relocalization pipeline requests the Map Update Service for the submap, and all associated data, corresponding to the frame given by the client's camera. This step is repeated until a submap is returned.

  2. Keyframe retrieval: each time the relocalization pipeline receives an image for which it has to estimate the pose, it tries to find the most similar keyframes stored in the global map data. Several candidate keyframes, which should have been captured from viewpoints close to that of the input image, can be retrieved if the similarity score is high enough.

  3. Keypoint matching: for each candidate keyframe, the relocalization pipeline tries to match its keypoints with those detected in the input image. The matches are found thanks to the similarity of the descriptors attached to the keypoints.

To initialize the relocalization pipeline processing, a device must provide the characteristics of the camera it uses (resolution, focal length).

Then, the pipeline is able to process images. To do this, the following input data is required:

  • the images captured by the device (images are sent to the pipeline one by one)

And, if the processing is successful, the output data is:

  • the new pose calculated by the pipeline

  • the confidence score of this new pose

To facilitate the use of this pipeline by any client application embedded in a device, it offers a simple interface based on the SolAR::api::pipeline::IRelocalizationPipeline class (see https://solarframework.github.io/create/api/ for interface definition and data structures).
This interface is defined as follows:

    /// @brief Initialization of the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the init succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init() override;
    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode setCameraParameters(const SolAR::datastructure::CameraParameters & cameraParams) override;
    /// @brief Get the camera parameters
    /// @param[out] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly returned, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getCameraParameters(SolAR::datastructure::CameraParameters & cameraParams) const override;
    /// @brief Start the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the start succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode start() override;
    /// @brief Stop the pipeline.
    /// @return FrameworkReturnCode::_SUCCESS if the stop succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode stop() override;
    /// @brief Request the relocalization pipeline to process a new image to calculate the corresponding pose
    /// @param[in] image: the image to process
    /// @param[out] pose: the new calculated pose
    /// @param[out] confidence: the confidence score
    /// @return FrameworkReturnCode::_SUCCESS if the processing is successful, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode relocalizeProcessRequest(const SRef<SolAR::datastructure::Image> image, SolAR::datastructure::Transform3Df& pose, float_t & confidence) override;
    /// @brief Request to the relocalization pipeline to get the map
    /// @param[out] map the output map
    /// @return FrameworkReturnCode::_SUCCESS if the map is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getMapRequest(SRef<SolAR::datastructure::Map> & map) const override;
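
As a sketch, a single relocalization request could then look as follows (component resolution through XPCF as in the Map Update example above; the image is assumed to be captured by the device):

    // 'componentMgr' and 'camParams' as in the Map Update example above
    auto relocPipeline = componentMgr->resolve<SolAR::api::pipeline::IRelocalizationPipeline>();
    relocPipeline->init();
    relocPipeline->setCameraParameters(camParams);
    relocPipeline->start();

    SRef<SolAR::datastructure::Image> image;  // image captured by the device camera (assumption)
    SolAR::datastructure::Transform3Df pose;
    float_t confidence = 0.f;

    // Images are sent one by one; on success, 'pose' holds the estimated camera pose
    if (relocPipeline->relocalizeProcessRequest(image, pose, confidence)
            == SolAR::FrameworkReturnCode::_SUCCESS) {
        // use 'pose' and its confidence score
    }

    relocPipeline->stop();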

The relocalization pipeline project is available on the SolAR Framework GitHub (multithreading version): https://github.com/SolarFramework/Sample-Relocalization/tree/0.11.0/SolARPipeline_Relocalization

The Relocalization Markers Pipeline

The goal of the Relocalization Markers Pipeline is to estimate the pose of a camera from an image it has captured, based on the detection of markers such as fiducial markers or QR Codes. For that, this service must carry out the following step:

  1. Marker detection: each time the relocalization markers pipeline receives an image, it tries to detect a predefined marker (fiducial or QR Code) in this image. If a marker is detected, the service uses its known position to calculate the precise pose of the camera.

To initialize the relocalization markers pipeline processing, a device must provide the characteristics of the camera it uses (resolution, focal length).

Then, the pipeline is able to process images. To do this, the following input data is required:

  • the images captured by the device (images are sent to the pipeline one by one)

And, if the processing is successful, the output data is:

  • the new pose calculated by the pipeline

  • the confidence score of this new pose

To facilitate the use of this pipeline by any client application embedded in a device, it offers a simple interface based on the SolAR::api::pipeline::IRelocalizationPipeline class (see https://solarframework.github.io/create/api/ for interface definition and data structures).
This interface is defined as follows:

    /// @brief Initialization of the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the init succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init() override;
    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode setCameraParameters(const SolAR::datastructure::CameraParameters & cameraParams) override;
    /// @brief Get the camera parameters
    /// @param[out] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly returned, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getCameraParameters(SolAR::datastructure::CameraParameters & cameraParams) const override;
    /// @brief Start the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the start succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode start() override;
    /// @brief Stop the pipeline.
    /// @return FrameworkReturnCode::_SUCCESS if the stop succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode stop() override;
    /// @brief Request the relocalization pipeline to process a new image to calculate the corresponding pose
    /// @param[in] image: the image to process
    /// @param[out] pose: the new calculated pose
    /// @param[out] confidence: the confidence score
    /// @return FrameworkReturnCode::_SUCCESS if the processing is successful, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode relocalizeProcessRequest(const SRef<SolAR::datastructure::Image> image, SolAR::datastructure::Transform3Df& pose, float_t & confidence) override;
    /// @brief Request to the relocalization pipeline to get the map
    /// @param[out] map the output map
    /// @return FrameworkReturnCode::_SUCCESS if the map is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getMapRequest(SRef<SolAR::datastructure::Map> & map) const override;

The relocalization markers pipeline project is available on the SolAR Framework GitHub (multithreading version): https://github.com/SolarFramework/Sample-Relocalization/tree/0.11.0/SolARPipeline_RelocalizationMarker

The Mapping Pipeline

The objective of this mapping pipeline is to build a 3D sparse map surrounding the AR device for visualization and relocalization, based on images and their poses provided by the tracking system of the AR device.

The processing of this pipeline is divided into three main steps:

  1. The bootstrap: This step aims to define the first two keyframes and to triangulate the initial point cloud from the matching keypoints between these keyframes.

  2. The mapping: Firstly, this step updates the visibilities of the features in each input frame with the 3D point cloud. Then, it tries to detect new keyframes based on these visibilities. Once a new keyframe is found, it is triangulated with its neighboring keyframes to create new 3D map points. Finally, a local bundle adjustment is performed to optimize the local map points and the poses of keyframes.

  3. The loop closure: although the tracking system built into the AR device provides highly accurate poses, pose errors still accumulate over time. The loop closure process therefore optimizes the map and the keyframe poses, and avoids creating a redundant map when AR devices return to a previously mapped area.

To initialize the mapping pipeline processing, a device must provide the characteristics of the camera it uses (resolution, focal length).

Then, the pipeline is able to process images and poses. To do this, the following input data is required:

  • the images captured by the device (images are sent to the pipeline one by one)

  • the relative position and orientation of the capture device, for each image

And finally, after the pipeline processing, the output data is:

  • the current map of the place calculated by the pipeline from the first image to the current one (in fact, a point cloud)

  • the recalculated positions and orientations of the capture device, inside this point cloud, from the first image to the current one (in fact, only for some keyframes determined by the pipeline).

To facilitate the use of this pipeline by any client application embedded in a device, it offers a simple interface based on the SolAR::api::pipeline::IMappingPipeline class (see https://solarframework.github.io/create/api/ for interface definition and data structures).
This interface is defined as follows:

    /// @brief Initialization of the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the init succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init() override;
    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode setCameraParameters(const datastructure::CameraParameters & cameraParams) override;
    /// @brief Start the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the start succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode start() override;
    /// @brief Stop the pipeline.
    /// @return FrameworkReturnCode::_SUCCESS if the stop succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode stop() override;
    /// @brief Request to the mapping pipeline to process a new image/pose
    /// Retrieve the new image (and pose) to process, in the current pipeline context
    /// (camera configuration, fiducial marker, point cloud, key frames, key points)
    /// @param[in] image the input image to process
    /// @param[in] pose the input pose in the device coordinate system
    /// @param[in] transform the transformation matrix from the device coordinate system to the world coordinate system
    /// @param[out] updatedTransform the refined transformation by a loop closure detection
    /// @param[out] status the current status of the mapping pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode mappingProcessRequest(const SRef<SolAR::datastructure::Image> image,
                                              const SolAR::datastructure::Transform3Df & pose,
                                              const SolAR::datastructure::Transform3Df & transform,
                                              SolAR::datastructure::Transform3Df & updatedTransform,
                                              MappingStatus & status) override;
    /// @brief Provide the current data from the mapping pipeline context for visualization
    /// (resulting from all mapping processing since the start of the pipeline)
    /// @param[out] outputPointClouds: pipeline current point clouds
    /// @param[out] keyframePoses: pipeline current keyframe poses
    /// @return FrameworkReturnCode::_SUCCESS if data are available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getDataForVisualization(std::vector<SRef<datastructure::CloudPoint>> & outputPointClouds,
                                                std::vector<datastructure::Transform3Df> & keyframePoses) const override;
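
As a sketch, feeding the pipeline with a captured frame could look as follows (same XPCF assumptions as in the previous examples; 'transform3D' is the device-to-world transformation, for instance obtained through relocalization):

    // 'componentMgr' and 'camParams' as in the previous examples
    auto mappingPipeline = componentMgr->resolve<SolAR::api::pipeline::IMappingPipeline>();
    mappingPipeline->init();
    mappingPipeline->setCameraParameters(camParams);
    mappingPipeline->start();

    SRef<SolAR::datastructure::Image> image;         // captured image (assumption)
    SolAR::datastructure::Transform3Df pose;         // pose in the device coordinate system
    SolAR::datastructure::Transform3Df transform3D;  // device-to-world transformation
    SolAR::datastructure::Transform3Df updatedTransform;
    SolAR::api::pipeline::MappingStatus status;

    // Send each image/pose pair; 'updatedTransform' may be refined by loop closure
    mappingPipeline->mappingProcessRequest(image, pose, transform3D, updatedTransform, status);

    // Retrieve the current point cloud and keyframe poses for visualization
    std::vector<SRef<SolAR::datastructure::CloudPoint>> pointCloud;
    std::vector<SolAR::datastructure::Transform3Df> keyframePoses;
    mappingPipeline->getDataForVisualization(pointCloud, keyframePoses);

    mappingPipeline->stop();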

To complete the description of the mapping pipeline, the following diagram shows the different steps it implements, from initialization to the constitution of the point cloud:

Mapping pipeline processing
Figure 3. Mapping pipeline processing

The mapping pipeline project is available on the SolAR Framework GitHub (multithreading version): https://github.com/SolarFramework/Sample-Mapping/tree/0.11.0/Mapping/SolARPipeline_Mapping_Multi

ARCloud Services deployment

These services are remote versions of the pipelines presented previously, designed to be deployed on a Cloud architecture to provide AR device applications with an easy way to benefit from the powerful spatial computing algorithms of the SolAR Framework.

To help you easily deploy the services on your own infrastructure, we have already performed some preliminary steps:

  • creation of a remote version of each pipeline:

    • by adding a gRPC server based on the interface of the pipeline (which becomes the interface of the service), as described in the previous paragraph

    • by managing the serialization and deserialization of all the data structures that can be used to request the service through its interface

  • encapsulation of the remote pipeline in a Docker container, which exposes its interface through a default port

  • generation of the corresponding Docker image, which is available on a public Docker repository

You must check the available versions of the ARCloud Services on our public repository to be able to deploy them to your servers:
https://repository.solarframework.org/generic/solar-services-delivery/Docker_images_index.txt

You are now ready to deploy the services on your Cloud infrastructure, using Kubernetes.

Kubernetes is an open source system for automating the deployment, scaling and management of containerized applications.

But before that, you must have prepared your Cloud architecture: set up the Kubernetes environment with a traffic manager, a load balancer, an application gateway, a firewall, servers, clusters, nodes, etc.

In order to be able to perform the instructions presented in this part, you must have installed the Kubernetes command-line tool, kubectl, on your computer:
https://kubernetes.io/docs/tasks/tools/

Helm charts

To help you deploy the ARCloud services on your own cloud architecture, we provide you with some Helm packages (charts) giving all the information needed to deploy each of these services.

Helm is a tool that helps you manage Kubernetes applications - Helm charts help you define, install, and upgrade even the most complex Kubernetes application. Helm is the package manager supported and recommended by Kubernetes.

In order to be able to perform the instructions presented in this part, you must have installed the Helm CLI on your computer:
https://helm.sh/docs/intro/install/

The ARCloud Helm charts are organized as follows:

[Helm charts name-version]
|_ Chart.yaml
|_ values.yaml
|_ templates
  |_ mapupdate
  |_ mapupdatecuda
  |_ relocalization
  |_ relocalizationcuda
  |_ relocalizationmarkers
  |_ mapping
  |_ mappingcuda
  |_ mappingnodrop
  |_ mappingnodropcuda
  |_ mappingandrelocalizationfrontend
  |_ mappingandrelocalizationfrontendcuda

Chart.yaml

This yaml file defines, among other things, the Helm charts name, version and description.

apiVersion: v2
appVersion: "2.0"
description: [description]
name: [Helm charts name]
type: application
version: [Helm charts version]

values.yaml

This yaml file defines the default values to use for all ARCloud services during deployment. These values will be described later for each service.

templates

This folder contains a subfolder for each service to deploy (with the name of this service). Each subfolder is organized identically:

[service name]
|_ NOTES.txt              => defines some variables needed for deployment
|_ _helpers.tpl           => defines labels used in the 'values.yaml' file
|_ deployment.yaml        => gives deployment information such as Docker image,
                          environment variables, ports, replicas, etc.
|_ ingress.yaml           => sets ingress configuration (host, path,
                          service name and port, etc.)
|_ service.yaml           => gives service information such as name, labels,
                          internal/external ports, network protocol, etc.
|_ serviceaccount.yaml    => gives service account information (name, labels, annotations)

To use these Helm charts to deploy the ARCloud services, here are the commands you need to run on your computer:

  • first, to set your Kubernetes configuration (remote server IP and port, user, certificates, cluster, etc.):

    export KUBECONFIG=[/path/to/config/file]
  • to add ARCloud Helm charts repository to your Helm environment:

    helm repo add solar-helm-virtual https://repository.solarframework.org/docker/solar-helm-virtual
  • then, to sync your local Helm environment with the remote ARCloud repository (to run before each new deployment):

    helm repo update
  • and, finally, to deploy the ARCloud services on your server:

    helm upgrade --install --namespace [your name space] [Helm charts name] solar-helm-virtual/[Helm charts package] --version [Helm charts version]
You must check the available versions of the Helm charts on our public repository to be able to deploy the ARCloud services to your servers:
https://repository.solarframework.org/generic/solar-services-delivery/Helm_charts_index.txt

The Map Update Service

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-service:x.y.z

The Map Update Service default values are defined in the values.yaml file included in the Helm charts, in the 'mapUpdateService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mapUpdateService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32050 (*)
  ...

environment variables

  ...
  settings:
    solarLogLevel: INFO             (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"    (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "-1"    (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Map Update Cuda Service (GPU version)

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-cuda-service:x.y.z

The Map Update Cuda Service default values are defined in the values.yaml file included in the Helm charts, in the 'mapUpdateCudaService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mapUpdateCudaService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-cuda-service (*)
    pullPolicy: Always
    tag: "3.0.3" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32150 (*)
  ...

GPU configuration:

for NVIDIA Tesla T4 (virtual GPU):

  resources:
    limits:
      k8s.kuartis.com/vgpu: '1'

for NVIDIA A100 (GPU card partitioning):

  resources:
    limits:
      nvidia.com/mig-1g.5gb: "1"

environment variables

  ...
  settings:
    solarLogLevel: INFO             (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"    (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "-1"    (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Relocalization Service

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-service:x.y.z

The Relocalization Service default values are defined in the values.yaml file included in the Helm charts, in the 'relocalizationService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

relocalizationService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32051 (*)
  ...

environment variables

  ...
  settings:
    solarLogLevel: INFO             (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"    (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "-1"    (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Relocalization Cuda Service (GPU version)

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-cuda-service:x.y.z

The Relocalization Cuda Service default values are defined in the values.yaml file included in the Helm charts, in the 'relocalizationCudaService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

relocalizationCudaService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-cuda-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32151 (*)
  ...

GPU configuration:

for NVIDIA Tesla T4 (virtual GPU):

  resources:
    limits:
      k8s.kuartis.com/vgpu: '1'

for NVIDIA A100 (GPU card partitioning):

  resources:
    limits:
      nvidia.com/mig-1g.5gb: "1"

environment variables

  ...
  settings:
    solarLogLevel: INFO             (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"    (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "-1"    (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Relocalization Markers Service

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-markers-service:x.y.z

The Relocalization Markers Service default values are defined in the values.yaml file included in the Helm charts, in the 'relocalizationMarkersService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

relocalizationMarkersService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/relocalization/0.11.0/relocalization-markers-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32052 (*)
  ...

environment variables

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "20000"   (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Mapping Service

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-service:x.y.z

The Mapping Service default values are defined in the values.yaml file included in the Helm charts, in the 'mappingService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mappingService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32053 (*)
  ...

environment variables

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"      (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Mapping Cuda Service (GPU version)

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-cuda-service:x.y.z

The Mapping Cuda Service default values are defined in the values.yaml file included in the Helm charts, in the 'mappingCudaService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mappingCudaService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-cuda-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32153 (*)
  ...

GPU configuration:

for NVIDIA Tesla T4 (virtual GPU):

  resources:
    limits:
      k8s.kuartis.com/vgpu: '1'

for NVIDIA A100 (GPU card partitioning):

  resources:
    limits:
      nvidia.com/mig-1g.5gb: "1"

environment variables

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"      (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Mapping No Drop Service

This service behaves in much the same way as the Mapping Service (it is also based on the Mapping Pipeline), except that it has an image buffer to store the images sent by the client before processing them. This mechanism allows it to process these images at its own rate, without loss (no drop), even if the client sends its requests at a higher rate (within the limit of the maximum size of the buffer).

The idea is to make this service more robust to tracking loss than the Mapping Service presented above. On the other hand, with this service, image processing is no longer done in real time.
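
The sketch below illustrates this buffering principle. It is not the actual service implementation, just a minimal bounded producer/consumer buffer showing why requests are delayed rather than dropped when the client is faster than the mapping process:

    // Illustrative sketch (not the actual service code): a bounded buffer that blocks
    // the producer instead of dropping frames, which is the behavior distinguishing
    // the "no drop" service from the real-time Mapping Service.
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename Frame>
    class NoDropBuffer {
    public:
        explicit NoDropBuffer(size_t maxSize) : m_maxSize(maxSize) {}

        // Called by the request handler: blocks when the buffer is full,
        // so requests are delayed, never lost
        void push(Frame frame) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_notFull.wait(lock, [this] { return m_queue.size() < m_maxSize; });
            m_queue.push(std::move(frame));
            m_notEmpty.notify_one();
        }

        // Called by the mapping thread at its own processing rate
        Frame pop() {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_notEmpty.wait(lock, [this] { return !m_queue.empty(); });
            Frame frame = std::move(m_queue.front());
            m_queue.pop();
            m_notFull.notify_one();
            return frame;
        }

    private:
        size_t m_maxSize;
        std::queue<Frame> m_queue;
        std::mutex m_mutex;
        std::condition_variable m_notFull, m_notEmpty;
    };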

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-service:x.y.z

The Mapping No Drop Service default values are defined in the values.yaml file included in the Helm charts, in the 'mappingNoDropService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mappingNoDropService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32054 (*)
  ...

environment variables

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"      (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Mapping No Drop Cuda Service (GPU version)

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-cuda-service:x.y.z

The Mapping No Drop Cuda Service default values are defined in the values.yaml file included in the Helm charts, in the 'mappingNoDropCudaService:' part.

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mappingNoDropCudaService:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment:

  ...
  image:
    repository: repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-nodrop-cuda-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external port for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    nodePort: 32154 (*)
  ...

GPU configuration:

for NVIDIA Tesla T4 (virtual GPU):

  resources:
    limits:
      k8s.kuartis.com/vgpu: '1'

for NVIDIA A100 (GPU card partitioning):

  resources:
    limits:
      nvidia.com/mig-1g.5gb: "1"

environment variables

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "-1"      (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

The Mapping and Relocalization Front End Service

This service is quite different from those presented above. In fact, it is not based on computer vision processing embedded in a SolAR pipeline. As its name suggests, this service is used as a "front end" to receive requests from end-user applications (running on Hololens glasses, for example). It then relies on the other ARCloud services (Map Update, Relocalization, Relocalization Markers and Mapping) to process the received data and return results to the calling application.

The processing of this service is as follows:

  1. it receives pairs of images (captured by the camera of the client) and poses (calculated by the client in its own coordinate system)

  2. it tries to get an initial pose of the camera in the SolAR coordinate system, by sending images to the Relocalization Service (sparse map based) and the Relocalization Markers Service (marker based), for the mapping process which will start just after
    Currently, the marker definitions are embedded in the Relocalization Markers Service image (this will be handled differently in future versions)
    For your tests, a fiducial marker image is available here: https://github.com/SolarFramework/Service-Relocalization/blob/0.11.0/SolARService_MappingAndRelocalizationFrontend/markerA.png

  3. once the origin of the coordinate system has been determined, it uses the Relocalization Service and the Relocalization Markers Service to attempt to relocalize some images in the ARCloud map, in order to determine the corrected pose of these images in the SolAR coordinate system and to calculate the 3D transformation matrix between the client coordinate system and that of SolAR

  4. at the same time, it sends each pair of image and pose (corrected using the 3D transformation matrix) to the Mapping Service to calculate the point cloud of the client journey

  5. when no more images are sent (end of the client journey), the point cloud calculated by the Mapping Service is sent to the Map Update Service to be merged with the current global map (thus making the ARCloud services more and more efficient)

Client applications based on the SolAR Framework

This service can be directly requested by C++ applications based on the SolAR Framework. For that, it exposes a specific interface, IAsyncRelocalizationPipeline, on a dedicated port (see the values.yaml description below). This interface is defined as follows:

    /// @brief Initialization of the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the init succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init() override;
    /// @brief Init the pipeline and specify the mode for the pipeline processing
    /// @param[in] pipelineMode: mode to use for pipeline processing
    /// @return FrameworkReturnCode::_SUCCESS if the mode is correctly initialized, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode init(PipelineMode pipelineMode) override;
    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode setCameraParameters(const SolAR::datastructure::CameraParameters & cameraParams) override;
    /// @brief Get the camera parameters
    /// @param[out] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly returned, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getCameraParameters(SolAR::datastructure::CameraParameters & cameraParams) const override;
    /// @brief Start the pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the start succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode start() override;
    /// @brief Stop the pipeline.
    /// @return FrameworkReturnCode::_SUCCESS if the stop succeeds, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode stop() override;
    /// @brief Request the asynchronous relocalization pipeline to process a new image to calculate
    /// the corresponding 3D transformation to the SolAR coordinates system
    /// @param[in] image: the image to process
    /// @param[in] pose: the original pose in the client coordinates system
    /// @param[in] timestamp: the timestamp of the image
    /// @param[out] transform3DStatus: the status of the current 3D transformation matrix
    /// @param[out] transform3D : the current 3D transformation matrix (if available)
    /// @param[out] confidence: the confidence score of the 3D transformation matrix
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode relocalizeProcessRequest(const SRef<SolAR::datastructure::Image> image,
                                                 const SolAR::datastructure::Transform3Df & pose,
                                                 const std::chrono::system_clock::time_point & timestamp,
                                                 TransformStatus & transform3DStatus,
                                                 SolAR::datastructure::Transform3Df & transform3D,
                                                 float_t & confidence) override;
    /// @brief Request the asynchronous relocalization pipeline to get the 3D transform offset
    /// between the device coordinate system and the SolAR coordinate system
    /// @param[out] transform3DStatus: the status of the current 3D transformation matrix
    /// @param[out] transform3D : the current 3D transformation matrix (if available)
    /// @param[out] confidence: the confidence score of the 3D transformation matrix
    /// @return FrameworkReturnCode::_SUCCESS if the 3D transform is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode get3DTransformRequest(TransformStatus & transform3DStatus,
                                              SolAR::datastructure::Transform3Df & transform3D,
                                              float_t & confidence) override;
    /// @brief Return the last pose processed by the pipeline
    /// @param[out] pose: the last pose if available
    /// @param[in] poseType: the type of the requested pose
    ///            - in the SolAR coordinate system (by default)
    ///            - in the device coordinate system
    /// @return FrameworkReturnCode::_SUCCESS if the last pose is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getLastPose(SolAR::datastructure::Transform3Df & pose,
                                    const PoseType poseType = SOLAR_POSE) const override;
    /// @brief Reset the map stored by the map update pipeline
    /// @return FrameworkReturnCode::_SUCCESS if the map is correctly reset, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode resetMap() const override;
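
As a sketch, a client could use this interface as follows (XPCF resolution and variable setup are assumptions, as in the previous examples):

    // 'componentMgr' and 'camParams' as in the previous examples
    auto frontEnd = componentMgr->resolve<SolAR::api::pipeline::IAsyncRelocalizationPipeline>();
    frontEnd->init();
    frontEnd->setCameraParameters(camParams);
    frontEnd->start();

    SRef<SolAR::datastructure::Image> image;        // captured by the device (assumption)
    SolAR::datastructure::Transform3Df devicePose;  // pose in the client coordinate system
    SolAR::datastructure::Transform3Df transform3D;
    TransformStatus transformStatus;
    float_t confidence = 0.f;

    // Asynchronous request: the 3D transformation (client coordinate system to SolAR
    // coordinate system) is refined in the background as images are processed
    frontEnd->relocalizeProcessRequest(image, devicePose,
                                       std::chrono::system_clock::now(),
                                       transformStatus, transform3D, confidence);

    // The current transformation can also be polled at any time
    frontEnd->get3DTransformRequest(transformStatus, transform3D, confidence);

    frontEnd->stop();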

Mapping And Relocalization Frontend Proxy

For clients that are unable to use the SolAR framework directly (e.g. unsupported hardware platforms), SolAR provides a proxy for the Mapping and Relocalization Front End service that can be called via a dedicated gRPC interface. Here is an excerpt of the proto file defining the gRPC service:

service SolARMappingAndRelocalizationProxy
{
    rpc Init(PipelineModeValue) returns (Empty);
    rpc Start(Empty) returns (Empty);
    rpc Stop(Empty) returns (Empty);
    rpc SetCameraParameters(CameraParameters) returns (Empty);
    rpc RelocalizeAndMap(Frame) returns (RelocalizationResult);
    rpc Get3DTransform(Empty) returns (RelocalizationResult);
    rpc Reset(Empty) returns (Empty);
}
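
For example, a C++ client generated from this proto could be used as follows. This is a hedged sketch: the generated header name, the message type scope and the server address are assumptions that depend on the actual proto package and on your deployment (32060 is the default first proxy port, see the service configuration below):

    #include <grpcpp/grpcpp.h>
    #include "solar_mapping_and_relocalization_proxy.grpc.pb.h" // assumed generated header

    int main()
    {
        // Connect to one of the proxy ports of the frontend service (address is an example)
        auto channel = grpc::CreateChannel("frontend.example.com:32060",
                                           grpc::InsecureChannelCredentials());
        auto stub = SolARMappingAndRelocalizationProxy::NewStub(channel);

        // Each RPC call needs its own client context
        grpc::ClientContext initContext;
        PipelineModeValue mode;  // choose the processing mode
        Empty reply;
        if (!stub->Init(&initContext, mode, &reply).ok())
            return -1;

        grpc::ClientContext startContext;
        stub->Start(&startContext, Empty(), &reply);

        // ... then SetCameraParameters(), and RelocalizeAndMap() for each captured frame
        return 0;
    }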

The proxy comes as a C++ gRPC service, handling the client requests and passing them to the SolAR frontend service.

The gRPC proto can be used to generate clients for any gRPC-supported language, which opens access to the Mapping and Relocalization Front End service to a wide range of platforms. The generated C# client can, for instance, be used in Unity projects, so that applications embedded in mobile devices (Android, iOS, VR/AR headsets, etc.) can use the Mapping and Relocalization Front End service without needing the SolAR framework as a dependency.

The proxy is packaged alongside the frontend service in the same Docker image, and is accessible via a different port from the one used for direct access to the service, as shown in the next figure.

Hololens 2 Unity plugin

For Hololens 2 AR glasses, SolAR provides a native C++ Unity plugin able to retrieve sensor data. With it, one can collect images, poses and timestamps from the RGB (PV) camera, as well as from the tracking (VLC) cameras and depth sensors, provided that Research Mode is enabled on the device. This native library can then be installed and used in a Unity project, thanks to a C# wrapper of the plugin API.

For convenience, SolAR also provides a Unity package containing a prefab encapsulating the use of the aforementioned plugin and the C# gRPC client for the frontend proxy.

This prefab is parameterized by:

  • The URL of the Mapping And Relocalization frontend

  • The Hololens 2 camera(s) to enable

  • A Unity game object of the scene to which the returned transformation will be applied

The prefab can then be controlled to start and stop the collection of sensor data, which is sent to the SolAR Mapping And Relocalization Front End service; the service in turn returns the transformation to apply to the given GameObject, so that the scene is properly aligned.

Here is the global architecture of the Mapping and Relocalization Front End service:

Global architecture of the Mapping and Relocalization Front End Service
Figure 4. Global architecture of the Mapping and Relocalization Front End Service

Service configuration

The Docker image of this service is available on this public Docker repository: repository.solarframework.org/docker/relocalization/0.11.0/mappingandrelocalizationfrontend-service:x.y.z

The Mapping and Relocalization Front End Service default values are defined in the values.yaml file included in the Helm charts, in the 'mappingAndRelocalizationFrontend:' part and in the 'mappingAndRelocalizationFrontendCuda:' part (for ARCloud Services using GPU).

You can modify some of these values according to your own configuration:

enable/disable the service deployment:

mappingAndRelocalizationFrontend:
  enabled: true (*)
  replicaCount: 1
...

or (GPU)

mappingAndRelocalizationFrontendCuda:
  enabled: true (*)
  replicaCount: 1
...

path and version of the Docker image to use for deployment (same image for Cuda/GPU version):

  ...
  image:
    repository: repository.solarframework.org/docker/relocalization/0.11.0/mappingandrelocalizationfrontend-service (*)
    pullPolicy: Always
    tag: "3.0.2" (*)
  ...

external ports for remote clients:

  ...
  service:
    type: NodePort
    port: 80
    httpNodePort: 32055   (*) Port for C++ app clients
    proxyNodePort0: 32060 (*) Ports for Hololens/Unity clients
    proxyNodePort1: 32061 (*)
    proxyNodePort2: 32062 (*)
    proxyNodePort3: 32063 (*)
    proxyNodePort4: 32064 (*)
    proxyNodePort5: 32065 (*)
    proxyNodePort6: 32066 (*)
    proxyNodePort7: 32067 (*)
    proxyNodePort8: 32068 (*)
    proxyNodePort9: 32069 (*)
  ...

or (GPU)

  ...
  service:
    type: NodePort
    port: 80
    httpNodePort: 32155   (*) Port for C++ app clients
    proxyNodePort0: 32160 (*) Ports for Hololens/Unity clients
    proxyNodePort1: 32161 (*)
    proxyNodePort2: 32162 (*)
    proxyNodePort3: 32163 (*)
    proxyNodePort4: 32164 (*)
    proxyNodePort5: 32165 (*)
    proxyNodePort6: 32166 (*)
    proxyNodePort7: 32167 (*)
    proxyNodePort8: 32168 (*)
    proxyNodePort9: 32169 (*)
  ...

GPU configuration (for GPU version):

for NVIDIA Tesla T4 (virtual GPU):

  resources:
    limits:
      k8s.kuartis.com/vgpu: '1'

for NVIDIA A100 (GPU card partitioning):

  resources:
    limits:
      nvidia.com/mig-1g.5gb: "1"

environment variables:

  ...
  settings:
    solarLogLevel: INFO               (*) Log level
                                        (DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING)
    xpcfGRPCMaxSendMsgSize: "20000"   (*) maximum size of gRPC sent messages
                                        (-1 for maximum value)
    xpcfGRPCMaxRecvMsgSize: "7000000" (*) maximum size of gRPC received messages
                                        (-1 for maximum value)

kubectl commands

To deploy or re-deploy a service in your infrastructure, when your manifest file is correctly filled in, you can use the kubectl command-line tool to:

  • define your Kubernetes context in a .conf file (cluster name, namespace, server address and port, user, certificates, etc.) like this:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ...
        server: https://...
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        namespace: artwin
        user: kubernetes-admin
      name: cluster-mapping
    current-context: cluster-mapping
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: ...
        client-key-data: ...

and set it as default configuration:

export KUBECONFIG=path/to/your_configuration.conf
  • set the context to use:

    kubectl config use-context [context name]

you can also specify the default namespace:

kubectl config set-context --current --namespace=[namespace]
  • deploy your service:

    kubectl apply -f [path/to/your_manifest.yaml]
  • check your deployments:

    kubectl get deployments
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    map-update-service        1/1     1            1           27d
    relocalization-service    1/1     1            1           25d
    mapping-service           1/1     1            1           21d
  • check your services:

    kubectl get services
    NAME                      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
    map-update-service        NodePort   10.110.144.63    <none>        80:31888/TCP     27d
    relocalization-service    NodePort   10.108.177.214   <none>        80:31889/TCP     28d
    mapping-service           NodePort   10.107.117.85    <none>        80:31887/TCP     30d
  • check your pods:

    kubectl get pods
    NAME                                       READY   STATUS    RESTARTS   AGE
    map-update-service-57ff9b9888-65hth        1/1     Running   0          14d
    relocalization-service-598bc78d96-rnwmx    1/1     Running   0          21d
    mapping-service-84bc74d954-6vc7v           1/1     Running   1          7d15h
  • visualize the logs of a pod:

    kubectl logs -f [pod name]
  • restart a pod: for example, change an environment variable of its deployment to force the pod to restart

    kubectl set env deployment [deployment name] SOLAR_LOG_LEVEL=INFO
You can find more kubectl commands on this website: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

ARCloud Services on a local computer (with Docker engine)

If you want to test our ARCloud Services on your own computer, without any Cloud deployment, you can easily pull their images to your local Docker engine and then launch the services using predefined scripts.

To run these services, you must first install a Docker engine to manage the Docker images (https://docs.docker.com/engine/install/).
Then, you have to add our repository to the "insecure-registries" of your Docker Engine configuration, like this:
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "debug": true,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "insecure-registries": [
    ...
    "repository.solarframework.org/docker"
  ],
  "registry-mirrors": []
}

1- Download the Service images you need to test on your own Docker engine

You must check the available versions of the ARCloud Services on our public repository to be able to pull their images to your local Docker engine:
https://repository.solarframework.org/generic/solar-services-delivery/Docker_images_index.txt

For example:

docker pull repository.solarframework.org/docker/mapupdate/0.11.0/mapupdate-service:x.y.z
docker pull repository.solarframework.org/docker/relocalization/0.11.0/relocalization-service:x.y.z
docker pull repository.solarframework.org/docker/relocalization/0.11.0/relocalization-markers-service:x.y.z
docker pull repository.solarframework.org/docker/mapping/0.11.0/mapping-multi-service:x.y.z
docker pull repository.solarframework.org/docker/relocalization/0.11.0/mappingandrelocalizationfrontend-service:x.y.z

2- Get the launch scripts corresponding to your Operating System (Windows or Linux) from the public repository

You must check the available scripts on our public repository to be able to download them to your local computer:
https://repository.solarframework.org/generic/solar-services-delivery/Scripts_index.txt

For example:

Linux:

https://repository.solarframework.org/generic/solar-service-scripts/services/launch_relocalization_markers.sh

Windows:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service ('latest') with the current one.

3- If needed, modify the launch script of the ARCloud service you want to test

By default, the log level of each service is set to INFO, but you can change it in the corresponding script file (available values are DEBUG, CRITICAL, ERROR, INFO, TRACE, WARNING):
export SOLAR_LOG_LEVEL=INFO
By default, a specific port is set for each service, but you can change it in the corresponding script file (last line):
docker run -d -p default_port:8080 ...

The default ports currently defined for the ARCloud Services are:

Services without GPU (no Cuda):
  • Map Update: 50053

  • Relocalization: 50052

  • Relocalization Markers: 50050

  • Mapping/Mapping No Drop: 50051

  • Front End: 50055 / 5000→5009 (for Unity clients)

Services with GPU (Cuda):
  • Map Update: 60053

  • Relocalization: 60052

  • Mapping/Mapping No Drop: 60051

  • Front End: 60055 / 5100→5109 (for Unity clients)

4- Execute the launch script of the ARCloud service you want to test

ARCloud Services depend on each other and you must launch them in a specific order to test them all:
Map Update → Relocalization → Relocalization Markers → Mapping → Front End

As some ARCloud Services depend on others, you sometimes need to pass the URLs (IP:port) of those services as script parameters. To check whether a script needs such parameters, just run it: a help message will be displayed if any are missing:

launch_relocalization.sh
You need to give Map Update Service URL as parameter!

A specific message is displayed for each required parameter.

To give the local URL of a service, use the Docker bridge IP address (the "Gateway" value returned by the following command) and the port defined in the launch script:

docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "7ec85993b0ab533552db22a0a61794f0051df5782703838948cdac82d4069674",
        "Created": "2021-09-22T06:51:14.2139067Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
...

For example:

launch_relocalization.sh 172.17.0.1:50053

5- Check the logs of the service

You can display the logs of each service using its Docker name:

docker logs [-f] [service_name]

For example:

docker logs -f solarservicemapupdate

You should see something like this:

[2021-09-22 07:24:56:772970] [info] [    7] [main():137] Load modules configuration file: /.xpcf/SolARService_MapUpdate_modules.xml
[2021-09-22 07:24:56:789855] [info] [    7] [main():146] Load properties configuration file: /.xpcf/SolARService_MapUpdate_properties.xml
[2021-09-22 07:25:00:716371] [info] [    7] [main():161] LOG LEVEL: INFO
[2021-09-22 07:25:00:716410] [info] [    7] [main():163] GRPC SERVER ADDRESS: 0.0.0.0:8080
[2021-09-22 07:25:00:716429] [info] [    7] [main():165] GRPC SERVER CREDENTIALS: 0
[2021-09-22 07:25:00:716432] [info] [    7] [main():171] GRPC MAX RECEIVED MESSAGE SIZE: 18446744073709551615
[2021-09-22 07:25:00:716435] [info] [    7] [main():178] GRPC MAX SENT MESSAGE SIZE: 18446744073709551615
[2021-09-22 07:25:00:716438] [info] [    7] [main():182] XPCF gRPC server listens on: 0.0.0.0:8080

To get all service names (for running containers), use this command:

docker container ls

Here is the exhaustive list of ARCloud services (Docker containers):

  • solarservicemapupdate

  • solarservicemapupdatecuda

  • solarservicerelocalization

  • solarservicerelocalizationcuda

  • solarservicerelocalizationmarkers

  • solarservicemappingmulti

  • solarservicemappingmulticuda

  • solarservicemappingmultinodrop

  • solarservicemappingmultinodropcuda

  • solarservicemappingandrelocalizationfrontend

  • solarservicemappingandrelocalizationfrontendcuda

Test applications

To verify that your services are deployed correctly and are running as expected, we provide test client applications embedded in Docker containers and delivered as pre-built Docker images, so that you can run them on any Linux or Windows computer.

You must check the available versions of the ARCloud test applications on our public repository to be able to pull them to your own computer:
https://repository.solarframework.org/generic/solar-services-delivery/Docker_images_index.txt

These test images are available on a public Docker repository:

repository.solarframework.org/docker/tests/0.11.0/map-update-client:x.y.z

repository.solarframework.org/docker/tests/0.11.0/map-update-display-map-client:x.y.z

repository.solarframework.org/docker/tests/0.11.0/mapping-multi-producer:x.y.z

repository.solarframework.org/docker/tests/0.11.0/mapping-multi-viewer:x.y.z

repository.solarframework.org/docker/tests/0.11.0/relocalization-client:x.y.z

repository.solarframework.org/docker/tests/0.11.0/mapping-multi-relocalization-client:x.y.z

repository.solarframework.org/docker/tests/0.11.0/mappingandrelocalizationfrontend-client:x.y.z

To run these test applications, you must first install a Docker engine to manage the Docker images (https://docs.docker.com/engine/install/).
Then, you have to add our repository to the "insecure-registries" of your Docker Engine configuration, like this:
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "debug": true,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "insecure-registries": [
    ...
    "repository.solarframework.org/docker"
  ],
  "registry-mirrors": []
}
You must have an X server running on your system to manage the graphical outputs of the test applications (example for Windows: https://sourceforge.net/projects/vcxsrv/)
To run these test applications on a Linux Virtual Machine hosted on a Windows computer, you need to disable some Windows features (Programs and Features→Turn Windows features on or off) as shown in this screenshot:
Windows features to turn off for Linux VM configurations
Figure 5. Windows features to turn off for Linux VM configurations

The Map Update Service test application

Overview

This sample application is used to test the processing of local maps carried out remotely by the Map Update Service. It uses a predefined local map dataset, included in this sample package.

The algorithm in this example is quite simple:

1- It first requests the current global map from the Map Update service and, if available, displays it in a dedicated window (in the form of a point cloud). To do that, the application uses this method from the pipeline interface:

    /// @brief Request to the map update pipeline to get the global map
    /// @param[out] map: the output global map
    /// @return FrameworkReturnCode::_SUCCESS if the global map is available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getMapRequest(SRef<SolAR::datastructure::Map> & map) const override;

2- Then, it reads the local map dataset stored in the ./data/maps folder, and sends it for processing to the Map Update Service using this method from its interface:

    /// @brief Request to the map update pipeline to update the global map from a local map
    /// @param[in] map: the input local map to process
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode mapUpdateRequest(const SRef<datastructure::Map> map) override;

3- Finally, it requests the new global map from the Map Update service and, if available, displays it in a dedicated window (in the form of a point cloud).
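
Putting these three steps together, the core of the test could be sketched as follows (mapUpdateService, loadLocalMap() and displayPointCloud() are hypothetical names; error handling is omitted):

    SRef<SolAR::datastructure::Map> globalMap, localMap;

    // 1- get and display the current global map
    if (mapUpdateService->getMapRequest(globalMap) == FrameworkReturnCode::_SUCCESS)
        displayPointCloud(globalMap);

    // 2- load the local map dataset (./data/maps) and send it for processing
    loadLocalMap(localMap);
    mapUpdateService->mapUpdateRequest(localMap);

    // 3- get and display the new global map
    if (mapUpdateService->getMapRequest(globalMap) == FrameworkReturnCode::_SUCCESS)
        displayPointCloud(globalMap);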

Traces are displayed at runtime to follow the progress of the application.

Launch the test application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/map-update-client:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Map Update Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch.sh [map_update_service_url] [host_IP_address]
launch_vm.sh [map_update_service_url]
launch.bat [map_update_service_url] [host_IP_address]

For example:

launch.bat 172.17.0.1:50053 192.168.56.1
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemapupdateclient

And you will see something like this:

MAPUPDATE_SERVICE_URL defined: 0.0.0.0:50053
Try to replace the MapUpdate Service URL in the XML configuration file...
XML configuration file ready
[2021-07-28 16:19:29:451534] [info] [12204] [main():87] Get component manager instance
[2021-07-28 16:19:29:451821] [info] [12204] [main():91] Load Client Remote Map Update Service configuration file: SolARServiceTest_MapUpdate_conf.xml
[2021-07-28 16:19:29:452551] [info] [12204] [main():99] Resolve IMapUpdatePipeline interface
[IMapUpdatePipeline_grpcProxy::onConfigured()]Check if remote map update service URL is defined in MAPUPDATE_SERVICE_URL
[2021-07-28 16:19:29:482257] [info] [12204] [main():102] Resolve other components
[2021-07-28 16:19:29:506398] [info] [12204] [main():106] Client components loaded
[2021-07-28 16:19:29:506478] [info] [12204] [main():110] Initialize map update service
[2021-07-28 16:19:30:481731] [info] [12204] [main():118] Set camera parameters
[2021-07-28 16:19:30:482769] [info] [12204] [main():125] Start map update service
[2021-07-28 16:19:30:596887] [info] [12204] [main():136] Try to get initial global map from Map Update remote service
[2021-07-28 16:19:37:196074] [info] [12204] [main():151]
==> Display initial global map

You will also see the global map given by the Map Update Service in a dedicated window:

Image viewer of the first global map
Figure 6. Image viewer of the first global map
[2021-07-28 16:32:11:961857] [info] [12264] [main():163] Load local map
[2021-07-28 16:32:11:961959] [info] [12264] [loadFromFile():289] Loading the map from file...
[2021-07-28 16:32:12:945752] [info] [12264] [loadFromFile():346] Load done!
[2021-07-28 16:32:12:945831] [info] [12264] [main():170] Send map request for local map
[2021-07-28 16:32:12:945871] [info] [12264] [main():174] Nb points: 8338
[2021-07-28 16:32:31:572424] [info] [12264] [main():188]
==> Display new global map (after local map processing): press Ctrl+C to stop test

And you will see the new global map given by the Map Update Service in a dedicated window:

Image viewer of the new global map
Figure 7. Image viewer of the new global map

The Map Update Display Map test application

Overview

This sample application is just used to display the map currently stored in the Map Update service. To do that, it requests the service to get its global map, and displays it in a dedicated window if available (in the form of a point cloud).

Launch the test application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/map-update-display-map-client:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Map Update Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch.sh [map_update_service_url] [host_IP_address]
launch_vm.sh [map_update_service_url]
launch.bat [map_update_service_url] [host_IP_address]

For example:

launch.bat 172.17.0.1:50053 192.168.56.1
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemapupdatedisplaymapclient

And you will see something like this:

MAPUPDATE_SERVICE_URL defined: 0.0.0.0:50053
Try to replace the MapUpdate Service URL in the XML configuration file...
XML configuration file ready
[2021-12-09 14:27:40:754955] [info] [    9] [main():120] Get component manager instance
[2021-12-09 14:27:40:755155] [info] [    9] [main():124] Load Client Remote Map Update Pipeline configuration file: /.xpcf/SolARServiceTest_MapUpdate_DisplayMap_conf.xml
[2021-12-09 14:27:40:755522] [info] [    9] [main():132] Resolve IMapUpdatePipeline interface
[2021-12-09 14:27:40:758871] [info] [    9] [main():135] Resolve other components
[2021-12-09 14:27:40:766994] [info] [    9] [main():137] Client components loaded
[2021-12-09 14:27:40:767020] [info] [    9] [main():142] Initialize map update pipeline
[2021-12-09 14:27:40:782673] [info] [    9] [main():150] Set camera parameters
[2021-12-09 14:27:40:783056] [info] [    9] [main():157] Start map update pipeline
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
[2021-12-09 14:27:40:900158] [info] [    9] [main():168] Try to get current global map from Map Update remote service
[2021-12-09 14:27:43:430034] [info] [    9] [main():177] Map Update request terminated
[2021-12-09 14:27:43:452630] [info] [    9] [main():186] ==> Display current global map: press ESC on the map display window to end test

You will also see the global map given by the Map Update Service in a dedicated window:

Image viewer of the first global map
Figure 8. Image viewer of the first global map

The Mapping Service test application: Producer client

Overview

This application is used to test the processing of images and poses carried out remotely by the Mapping Service. It uses a full set of pre-captured images taken from a Hololens device, included in this sample package.

The algorithm in this example is quite simple: it reads each image/pose pair from the Hololens data files and requests the Mapping Service to process it, using this method from its interface:

    /// @brief Request to the mapping pipeline to process a new image/pose
    /// Retrieve the new image (and pose) to process, in the current pipeline context
    /// (camera configuration, point cloud, key frames, key points)
    /// @param[in] image: the input image to process
    /// @param[in] pose: the input pose to process
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode mappingProcessRequest(const SRef<datastructure::Image> image,
                                              const datastructure::Transform3Df & pose) override;
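
In condensed form, the producer loop could be sketched as follows (datasetReader, mappingService and displayImage() are hypothetical names standing in for the AR device component, the pipeline proxy and the image viewer):

    SRef<SolAR::datastructure::Image> image;
    SolAR::datastructure::Transform3Df pose;

    // Read each (image, pose) pair captured by the Hololens device
    while (datasetReader->getNextPose(image, pose) == FrameworkReturnCode::_SUCCESS) {
        // Send the pair to the remote Mapping service
        mappingService->mappingProcessRequest(image, pose);
        // Display the current image to follow the device path
        displayImage(image);
    }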

Traces are displayed at runtime to follow the progress of the application. In addition, each image is displayed in a dedicated window as it is read, so that you can visually follow the path of the Hololens device.

Launch the producer application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/mapping-multi-producer:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Mapping Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch_producer.sh [mapping_service_url] [host_IP_address]
launch_producer_vm.sh [mapping_service_url]
launch_producer.bat [mapping_service_url] [host_IP_address]

For example:

launch_producer.bat 172.17.0.1:50051 192.168.56.1

You can also change the default Hololens image dataset used for the test by specifying the "B" dataset ("A" is used by default)

For example:

launch_producer.bat 172.17.0.1:50051 192.168.56.1 B
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemappingmultiproducer

And you will see something like this:

MAPPING_SERVICE_URL defined: 172.17.0.1:50051
Try to replace the Mapping Service URL in the XML configuration file...
HOLOLENS_DATA_SET defined: B
Path to image set to: ./data/data_hololens/loop_desktop_B
Replace the path to image data in the XML configuration file...
XML configuration file ready
[2021-07-09 17:02:58:985680] [info] [  824] [main():197] Get component manager instance
[2021-07-09 17:02:58:985973] [info] [  824] [main():201] Load Client Remote Mapping Pipeline configuration file: SolARServiceTest_Mapping_Multi_Producer_conf.xml
[2021-07-09 17:02:59:000145] [info] [  824] [main():205] Resolve IMappingPipeline interface
[2021-07-09 17:02:59:094552] [info] [  824] [main():216] Remote producer client: AR device component created
[2021-07-09 17:02:59:094801] [info] [  824] [main():219] Remote producer client: AR device component created
[2021-07-09 17:03:00:335424] [info] [  824] [main():230] Remote producer client: Init mapping pipeline result = SUCCESS
[2021-07-09 17:03:00:336157] [info] [  824] [main():234] Remote producer client: Set mapping pipeline camera parameters result = SUCCESS
[2021-07-09 17:03:00:336257] [info] [  824] [main():236] Remote producer client: Start remote mapping pipeline
[2021-07-09 17:03:00:338818] [info] [  824] [main():239] Start remote producer client thread
[2021-07-09 17:03:00:338965] [info] [  824] [main():253]
***** Control+C to stop *****
[2021-07-09 17:03:00:726830] [info] [  835] [operator()():73] Producer client: Send (image, pose) num 1 to mapping pipeline
[2021-07-09 17:03:02:717961] [info] [  835] [operator()():73] Producer client: Send (image, pose) num 2 to mapping pipeline
[2021-07-09 17:03:03:075171] [info] [  835] [operator()():73] Producer client: Send (image, pose) num 3 to mapping pipeline
[2021-07-09 17:03:03:465519] [info] [  835] [operator()():73] Producer client: Send (image, pose) num 4 to mapping pipeline
[2021-07-09 17:03:03:783061] [info] [  835] [operator()():73] Producer client: Send (image, pose) num 5 to mapping pipeline
...

And you will see the images loaded from the Hololens dataset in a dedicated window:

Image viewer of the producer application
Figure 9. Image viewer of the producer application

The Mapping Service test application: Viewer client

Overview

This sample application is used to check the result of the processing of images and poses carried out remotely by the Mapping Service. It continually requests this service to get the current point cloud and keyframe poses resulting from all previously processed images, using this method from its interface:

    /// @brief Provide the current data from the mapping pipeline context for visualization
    /// (resulting from all mapping processing since the start of the pipeline)
    /// @param[out] outputPointClouds: pipeline current point clouds
    /// @param[out] keyframePoses: pipeline current keyframe poses
    /// @return FrameworkReturnCode::_SUCCESS if data are available, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode getDataForVisualization(std::vector<SRef<datastructure::CloudPoint>> & outputPointClouds,
                                                std::vector<datastructure::Transform3Df> & keyframePoses) const override;
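
A minimal polling loop for this viewer could look like this (mappingService, viewer and the running flag are hypothetical names):

    std::vector<SRef<SolAR::datastructure::CloudPoint>> pointCloud;
    std::vector<SolAR::datastructure::Transform3Df> keyframePoses;

    // Periodically request the current map data and refresh the 3D display
    while (running) {
        if (mappingService->getDataForVisualization(pointCloud, keyframePoses)
                == FrameworkReturnCode::_SUCCESS)
            viewer->display(pointCloud, keyframePoses); // hypothetical 3D viewer call
    }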

Traces are displayed at runtime to follow the progress of the application. In addition, the current point cloud and keyframe poses are displayed in a dedicated window to visualize the result of the Mapping Service processing.

Launch the viewer application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/mapping-multi-viewer:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Mapping Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch_viewer.sh [mapping_service_url] [host_IP_address]
launch_viewer_vm.sh [mapping_service_url]
launch_viewer.bat [mapping_service_url] [host_IP_address]

For example:

launch_viewer.bat 172.17.0.1:50051 192.168.56.1
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemappingmultiviewer

And you will see something like this:

MAPPING_SERVICE_URL defined: 172.17.0.1:50051
Try to replace the Mapping Service URL in the XML configuration file...
XML configuration file ready
[2021-07-09 17:51:17:466319] [info] [20783] [main():134] Get component manager instance
[2021-07-09 17:51:17:466693] [info] [20783] [main():138] Load Client Remote Mapping Pipeline configuration file: SolARServiceTest_Mapping_Multi_Viewer_conf.xml
[2021-07-09 17:51:17:467416] [info] [20783] [main():142] Resolve IMappingPipeline interface
[2021-07-09 17:51:17:579575] [info] [20783] [main():150] Start viewer client thread
[2021-07-09 17:51:17:579735] [info] [20783] [main():155]
***** Control+C to stop *****
[2021-07-09 17:51:18:052272] [info] [20807] [operator()():69] Viewer client: I3DPointsViewer component created
...

And you will see the current point cloud and keyframe poses in a dedicated window:

Display window of the viewer application
Figure 10. Display window of the viewer application

The Relocalization Service test application

Overview

This application is used to test the pose calculation carried out remotely by the Relocalization Service. It uses a full set of pre-captured images taken from a Hololens device, included in this sample package.

To test the service, this application reads each image from the Hololens data files and, every "x" images, requests the Relocalization Service to process the current one, using this method from its interface:

    /// @brief Request the relocalization pipeline to process a new image to calculate the corresponding pose
    /// @param[in] image: the image to process
    /// @param[out] pose: the new calculated pose
    /// @param[out] confidence: the confidence score
    /// @return FrameworkReturnCode::_SUCCESS if the processing is successful, else FrameworkReturnCode::_ERROR_
    virtual FrameworkReturnCode relocalizeProcessRequest(const SRef<SolAR::datastructure::Image> image, SolAR::datastructure::Transform3Df& pose, float_t & confidence) override;
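
The "every x images" logic could be sketched like this (datasetReader, relocService and relocPeriod are hypothetical names):

    SRef<SolAR::datastructure::Image> image;
    SolAR::datastructure::Transform3Df pose;
    float_t confidence;
    int frameCount = 0;
    const int relocPeriod = 10; // hypothetical value for "x"

    while (datasetReader->getNextImage(image) == FrameworkReturnCode::_SUCCESS) {
        if (++frameCount % relocPeriod != 0)
            continue; // only every "x" images are sent to the service
        if (relocService->relocalizeProcessRequest(image, pose, confidence)
                == FrameworkReturnCode::_SUCCESS) {
            // "New pose calculated by relocalization pipeline"
        } else {
            // "Failed to calculate pose for this image"
        }
    }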

Traces are displayed at runtime to follow the progress of the application. In addition, each image is displayed in a dedicated window as it is read, so that you can visually follow the path of the Hololens device.

Launch the test application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/relocalization-client:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Relocalization Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch.sh [relocalization_service_url] [host_IP_address]
launch_vm.sh [relocalization_service_url]
launch.bat [relocalization_service_url] [host_IP_address]

For example:

launch.bat 172.17.0.1:50052 192.168.56.1
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicerelocalizationclient

And you will see something like this:

RELOCALIZATION_SERVICE_URL defined: 172.17.0.1:50052
Try to replace the Relocalization Service URL in the XML configuration file...
XML configuration file ready
[2021-10-20 14:52:34:711132] [info] [    8] [main():101] Get component manager instance
[2021-10-20 14:52:34:711374] [info] [    8] [main():105] Load Client Remote Relocalization Service configuration file: /.xpcf/SolARServiceTest_Relocalization_conf.xml
[2021-10-20 14:52:34:711910] [info] [    8] [main():109] Resolve IRelocalizationPipeline interface
[2021-10-20 14:52:35:270449] [info] [    8] [main():114] Resolve components used
[2021-10-20 14:52:35:302757] [info] [    8] [main():121] Set relocalization service camera parameters
[2021-10-20 14:52:35:307598] [info] [    8] [main():129] Start relocalization pipeline process
[2021-10-20 14:52:35:310974] [info] [    8] [main():136]
***** Control+C to stop *****
[2021-10-20 14:52:35:716387] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:52:43:432279] [info] [    8] [main():169] Failed to calculate pose for this image
[2021-10-20 14:52:57:486997] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:52:58:317535] [info] [    8] [main():169] Failed to calculate pose for this image
[2021-10-20 14:53:11:988718] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:53:12:932932] [info] [    8] [main():169] Failed to calculate pose for this image
[2021-10-20 14:53:26:661736] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:53:27:699770] [info] [    8] [main():169] Failed to calculate pose for this image
[2021-10-20 14:53:41:562936] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:53:42:334819] [info] [    8] [main():169] Failed to calculate pose for this image
[2021-10-20 14:53:48:776327] [info] [    8] [main():162] Send an image to relocalization pipeline
[2021-10-20 14:53:49:454503] [info] [    8] [main():165] New pose calculated by relocalization pipeline
...

And you will see the images loaded from the Hololens dataset in a dedicated window.

The Relocalization, Mapping (and Map Update) Services test application

Overview

This application can be used to test both the Relocalization and Mapping services. In addition, as these two services rely on the Map Update service, this third service can also be tested using this example. This program uses a full set of pre-captured images taken from a Hololens device, included in the sample package.

First of all, this test program tries to determine the transformation matrix between the Hololens coordinate system and that of the global map. To do that, it uses the first 'n' images from the Hololens dataset to request pose calculation from the Relocalization service, based on the global map provided by the Map Update service. If this step succeeds, the program can compare the original Hololens poses with the new poses given by the Relocalization service, and deduce the transformation.

For this step, this method of the Relocalization Service interface is called:

    /// @brief Request the relocalization pipeline to process a new image to calculate the corresponding pose
    /// @param[in] image: the image to process
    /// @param[out] pose: the new calculated pose
    /// @param[out] confidence: the confidence score
    /// @return FrameworkReturnCode::_SUCCESS if the processing is successful, else FrameworkReturnCode::_ERROR_
    virtual FrameworkReturnCode relocalizeProcessRequest(const SRef<SolAR::datastructure::Image> image, SolAR::datastructure::Transform3Df& pose, float_t & confidence) override;

Then, the test application sends each image/pose pair from the Hololens dataset to the Mapping service, applying the transformation matrix if known. The Mapping pipeline can thus build a new local sparse map, together with the precise poses corresponding to the movements of the Hololens device.

For this step, this method of the Mapping Service interface is called:

    /// @brief Request to the mapping pipeline to process a new image/pose
    /// Retrieve the new image (and pose) to process, in the current pipeline context
    /// (camera configuration, point cloud, key frames, key points)
    /// @param[in] image: the input image to process
    /// @param[in] pose: the input pose to process
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    FrameworkReturnCode mappingProcessRequest(const SRef<datastructure::Image> image,
                                              const datastructure::Transform3Df & pose) override;
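
Combining the two steps, the pose correction could be sketched as follows (variable names are illustrative; Transform3Df composition and inversion are assumed to follow Eigen semantics):

    // Poses of the same image in both coordinate systems: poseHololens comes from
    // the device, poseWorld from the Relocalization service. The Hololens-to-world
    // transformation is deduced as:
    SolAR::datastructure::Transform3Df transformHololensToWorld =
        poseWorld * poseHololens.inverse();

    // Every subsequent Hololens pose is then corrected before being sent for mapping:
    mappingService->mappingProcessRequest(image, transformHololensToWorld * pose);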

At the end of the program, when all the Hololens images have been analyzed, the Mapping service sends the new local map to the Map Update service, to consolidate and enrich the global sparse map.

Traces are displayed at runtime to follow the progress of the application. In addition, each image is displayed in a dedicated window as it is read, so that you can visually follow the path of the Hololens device.

At any time during the execution of this example, you can use the Mapping Service viewer client to visualize the current local map being created.
At the end of this application, you can use the Map Update Service test application to view the new global map.

Launch the test application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/mapping-multi-relocalization-client:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Relocalization Service URL, your Mapping Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch.sh [relocalization_service_url] [mapping_service_url] [host_IP_address]
launch_vm.sh [relocalization_service_url] [mapping_service_url]
launch.bat [relocalization_service_url] [mapping_service_url] [host_IP_address]

For example:

launch.bat 172.17.0.1:50052 172.17.0.1:50051 192.168.56.1

You can also change the default Hololens image dataset used for the test by specifying the "B" dataset ("A" is used by default)

For example:

launch.bat 172.17.0.1:50052 172.17.0.1:50051 192.168.56.1 B
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemappingmultirelocalizationclient

And you will see something like this:

RELOCALIZATION_SERVICE_URL defined: 172.17.0.1:50052
Replace the Relocalization Service URL in the XML configuration file...
MAPPING_SERVICE_URL defined: 172.17.0.1:50051
Replace the Mapping Service URL in the XML configuration file...
HOLOLENS_DATA_SET defined: A
Path to image set to: ./data/data_hololens/loop_desktop_A
Replace the path to image data in the XML configuration file...
XML configuration file ready
[2021-08-02 16:42:29:986811] [info] [24091] [main():228] Get component manager instance
[2021-08-02 16:42:29:987168] [info] [24091] [main():232] Load configuration file: SolARServiceTest_Mapping_Multi_Relocalization_conf.xml
[2021-08-02 16:42:29:987681] [info] [24091] [main():236] Resolve IMappingPipeline interface
[2021-08-02 16:42:30:018976] [info] [24091] [main():239] Resolve IRelocalizationPipeline interface
[2021-08-02 16:42:30:041947] [info] [24091] [main():250] Remote producer client: AR device component created
[2021-08-02 16:42:30:042243] [info] [24091] [main():253] Remote producer client: AR device component created
[2021-08-02 16:42:30:055965] [info] [24091] [main():264] Remote mapping client: Init mapping pipeline result = SUCCESS
[2021-08-02 16:42:30:056461] [info] [24091] [main():273] Remote mapping client: Set mapping pipeline camera parameters result = SUCCESS
[2021-08-02 16:42:30:487307] [info] [24091] [main():277] Remote relocalization client: Init relocalization pipeline result = SUCCESS
[2021-08-02 16:42:30:489220] [info] [24091] [main():286] Remote relocalization client: Set relocalization pipeline camera parameters result = SUCCESS
[2021-08-02 16:42:30:489261] [info] [24091] [main():288] Remote mapping client: Start remote mapping pipeline
[2021-08-02 16:42:30:490200] [info] [24091] [main():292] Start remote mapping client thread
[2021-08-02 16:42:30:490391] [info] [24091] [main():297] Start remote relocalization client thread
[2021-08-02 16:42:30:490508] [info] [24091] [main():302] Read images and poses from hololens files
[2021-08-02 16:42:30:490538] [info] [24091] [main():304]
***** Control+C to stop *****
[2021-08-02 16:42:31:082000] [info] [24091] [main():330] Add image to input drop buffer for relocalization
[2021-08-02 16:42:31:082029] [info] [24091] [main():341] Add pair (image, pose) to input drop buffer for mapping
[2021-08-02 16:42:31:082150] [info] [24102] [operator()():82] Mapping client: Send (image, pose) to mapping pipeline
[2021-08-02 16:42:31:082599] [info] [24103] [operator()():107] Relocalization client: Send image to relocalization pipeline
[2021-08-02 16:42:31:389366] [info] [24091] [main():341] Add pair (image, pose) to input drop buffer for mapping
[2021-08-02 16:42:31:389448] [info] [24102] [operator()():82] Mapping client: Send (image, pose) to mapping pipeline
...

If the relocalization processing is successful, you will see the following trace:

[2021-08-02 16:42:32:544970] [info] [24103] [operator()():111] => Relocalization succeeded
[2021-08-02 16:42:32:544999] [info] [24103] [operator()():113] Hololens pose:    0.996649  -0.0777834   0.0252523  -0.0062405
  -0.045278   -0.781977   -0.621658 -0.00958775
  0.0681019    0.618432   -0.782881   -0.143014
          0           0           0           1
[2021-08-02 16:42:32:545107] [info] [24103] [operator()():114] World pose:    0.996321  -0.0810918    0.027727 -0.00972971
 -0.0469172   -0.786846   -0.615364  -0.0186298
  0.0717178    0.611799   -0.787756   -0.138227
          0           0           0           1
[2021-08-02 16:42:32:545174] [info] [24103] [operator()():118] Transformation matrix from Hololens to World:    0.999993   0.0010635 -0.00400569 -0.00405193
  -0.001096    0.999969 -0.00805077  -0.0102006
 0.00399664  0.00805406     0.99996  0.00488381
          0           0           0           1

At any time, you will see the images loaded from the Hololens dataset in a dedicated window:

Image viewer of the test application
Figure 11. Image viewer of the test application

The Mapping and Relocalization Front End test application

Overview

This application is used to test the Mapping and Relocalization Front End service. In addition, as this service relies on the Relocalization, Mapping and Map Update services, these can also be tested using this example. This program uses a full set of pre-captured images taken from a Hololens device, included in the sample package.

The algorithm of this test application is very simple: it first initializes the Mapping and Relocalization Front End service, giving it the parameters of the camera used to capture the Hololens set of images.

For this step, this method of the Front End Service interface is called:

    /// @brief Set the camera parameters
    /// @param[in] cameraParams: the camera parameters (its resolution and its focal)
    /// @return FrameworkReturnCode::_SUCCESS if the camera parameters are correctly set, else FrameworkReturnCode::_ERROR_
    virtual FrameworkReturnCode setCameraParameters(const SolAR::datastructure::CameraParameters & cameraParams) = 0;

Then, the test application sends each pair of image and pose from the Hololens dataset to the Front End service.

For this step, this method of the Mapping And Relocalization Front End Service interface is called:

    /// @brief Request the asynchronous relocalization pipeline to process a new image to calculate
    /// the corresponding 3D transformation to the SolAR coordinates system
    /// @param[in] image: the image to process
    /// @param[in] pose: the original pose in the client coordinates system
    /// @param[in] timestamp: the timestamp of the image
    /// @param[out] transform3DStatus: the status of the current 3D transformation matrix
    /// @param[out] transform3D : the current 3D transformation matrix (if available)
    /// @param[out] confidence: the confidence score of the 3D transformation matrix
    /// @return FrameworkReturnCode::_SUCCESS if the data are ready to be processed, else FrameworkReturnCode::_ERROR_
    virtual FrameworkReturnCode relocalizeProcessRequest(
                  const SRef<SolAR::datastructure::Image> image,
                  const SolAR::datastructure::Transform3Df & pose,
                  const std::chrono::system_clock::time_point & timestamp,
                  TransformStatus & transform3DStatus,
                  SolAR::datastructure::Transform3Df & transform3D,
                  float_t & confidence) = 0;
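
The per-frame call could be sketched as follows (frontendService is a hypothetical name, and the timestamp would normally come from the dataset rather than the current clock):

    TransformStatus transform3DStatus;
    SolAR::datastructure::Transform3Df transform3D;
    float_t confidence;

    // Send each (image, pose) pair together with its capture timestamp
    frontendService->relocalizeProcessRequest(image, pose,
                                              std::chrono::system_clock::now(),
                                              transform3DStatus, transform3D, confidence);

    // When transform3DStatus indicates that a 3D transformation is available,
    // transform3D can be applied to align the client scene with the SolAR world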

Traces are displayed at runtime to follow the progress of the application. In addition, each image is displayed in a dedicated window as it is read, so that you can visually follow the path of the Hololens device.

At any time during the execution of this example, you can use the Mapping Service viewer client to visualize the current local map being created.
At the end of this application, you can use the Map Update Service test application to view the new global map.

Launch the test application

First, you have to install the test application image on your Docker engine:

docker pull repository.solarframework.org/docker/tests/0.11.0/mappingandrelocalizationfrontend-client:x.y.z

Then, get the script file used to launch the Docker container of the test application from one of these URLs:

Windows: https://repository.solarframework.org/generic/solar-service-scripts/clients/launch_mappingandrelocalizationfrontend_client.bat

At the end of the "docker run" command (last line of the scripts), you should replace the default version of the service with the current one.

You can now launch the Docker container using the script file, giving as parameters your Mapping and Relocalization Front End Service URL and, if you do not use a Linux Virtual Machine, your computer's local IP address (to export the graphical display):

launch.sh [mapping_and_relocalization_frontend_service_url] [host_IP_address]
launch_vm.sh [mapping_and_relocalization_frontend_service_url]
launch.bat [mapping_and_relocalization_frontend_service_url] [host_IP_address]

For example:

launch.bat 172.17.0.1:50055 192.168.56.1
On a Windows host, do not forget to start the X server before running the test application

Then, you can verify that the application is running correctly by looking at its traces, with this Docker command:

docker logs [-f] solarservicemappingandrelocalizationfrontendclient

And you will see something like this:

MAPPINGANDRELOCALIZATIONFRONTEND_SERVICE_URL defined: 0.0.0.0:50055
Try to replace the MappingAndRelocalizationFrontend Service URL in the XML configuration file...
XML configuration file ready
[2021-12-09 15:20:32:915391] [info] [    9] [main():106] Get component manager instance
[2021-12-09 15:20:32:915620] [info] [    9] [main():114] Load Client Remote Service configuration file: /.xpcf/SolARServiceTest_MappingAndRelocalizationFrontend_conf.xml
[2021-12-09 15:20:32:919253] [info] [    9] [main():121] Mapping and relocalization front end service component created
[2021-12-09 15:20:32:919272] [info] [    9] [main():128] Initialize the service
[2021-12-09 15:20:32:931482] [info] [    9] [main():136] Producer client: AR device component created
[2021-12-09 15:20:32:931621] [info] [    9] [main():139] Remote producer client: AR device component created
[2021-12-09 15:20:32:939194] [info] [    9] [main():147] Set camera paremeters for the service
[2021-12-09 15:20:32:940457] [info] [    9] [main():154] Start the service
[2021-12-09 15:20:32:941359] [info] [    9] [main():161] Read images and poses from hololens files
[2021-12-09 15:20:32:941386] [info] [    9] [main():162]
***** Control+C to stop *****
[2021-12-09 15:20:33:055380] [info] [    9] [main():179] Send image and pose to service
[2021-12-09 15:20:33:248617] [info] [    9] [main():179] Send image and pose to service
[2021-12-09 15:20:33:397523] [info] [    9] [main():179] Send image and pose to service
...

If the 3D transformation matrix between the client coordinate system and the SolAR coordinate system is found (by the Relocalization Service), you will see the following trace:

[2021-12-09 15:20:34:810304] [info] [    9] [main():189] New 3D transformation =
  -0.989591    0.00656944  -0.143773    -0.153553
   0.143782    0.000883323 -0.989609    -0.716952
  -0.00637388 -0.99998     -0.00181631  -0.796267
   0           0            0            1

At any time, you will see the images loaded from the Hololens dataset in a dedicated window.