Deploying your Service

How to deploy your service.

Once the service is developed well enough, you can deploy it. Here is a quick visual recap of a regional deployment:

The large block on the right/top side (most of the image) is the SPEKTRA Edge-based service. Below, you have various applications (web browsers or Edge agents) that can communicate with the service or with core SPEKTRA Edge services (left).

Service backend deployment is what we focus on in this part. The blue parts in this block are the elements you had to develop: the three binaries we discussed in this guide.

You will need to set up the Networking & Ingress elements. Inside the cluster, you will need Deployments for API servers, controllers, and db-controllers, plus:

  • A database, of course. Core SPEKTRA Edge provides databases for logging and monitoring metrics, but for document storage a NoSQL database is needed. We typically recommend MongoDB as a cloud-agnostic option, but Firestore may also be an option on GCP.
  • A Redis instance is needed by the Node Managers of all Controllers (for sharding!). Although the arrows are missing in the diagram, Redis can optionally also be used as a cache for the database.

If possible, in a Kubernetes environment it is highly recommended to use a HorizontalPodAutoscaler for the Deployments.
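For illustration, a minimal HorizontalPodAutoscaler sketch for an API server Deployment might look as follows; the Deployment name inventory-manager-apiserver and the CPU target are placeholders you should adjust to your own manifests:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-manager-apiserver
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory-manager-apiserver  # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70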

In the Inventory Manager example we will talk about, we assume everything runs on a Kubernetes cluster. Configuring kubectl is assumed to be general knowledge and is not edgelq-specific; refer to online documentation for how to create a Kubernetes cluster and configure kubectl.

In the future, though, we may ship edgelq-lite images with instructions for local Kubernetes deployment.

Building images

We use docker build to ship images for backend services. We will use dockerfiles from Inventory Manager as examples.

You will need to build 4 images:

  • API Server (that you coded)
  • API Server Envoy proxy (part of API Server)
  • Controller
  • DbController

For the API Server, each pod must contain two containers. One is the server image, which handles all gRPC calls. But as we mentioned many times, we also need to support:

  • webGRPC, so web browsers can access the server too, not just native gRPC clients
  • REST API, for those who prefer this way of communication

This is handled by the Envoy proxy (https://www.envoyproxy.io/). They provide ready-made images. With the proper config, it handles webGRPC pretty much out of the box. The REST API requires a little more work. We need to come back to the regenerate.sh file, as in the InventoryManager example (https://github.com/cloudwan/inventory-manager-example/blob/master/regenerate.sh).

Find the following part:

protoc \
    -I "${PROTOINCLUDE}" \
    "--descriptor_set_out=${INVENTORYMANAGERROOT}/proto/inventory_manager.pb" \
    "--include_source_info" \
    "--include_imports" \
    "${INVENTORYMANAGERROOT}"/proto/v1/*_service.proto \
    "${DIAGNOSTICSPATH}"/proto/v1/*_service.proto

This generates the file inventory_manager.pb, which contains service descriptors from all files in the service, plus optionally the diagnostics mixin (part of the SPEKTRA Edge repository), if you want the gRPC health check to be available via REST.

This generated pb file must be passed to the Envoy proxy image. See the dockerfile for this: https://github.com/cloudwan/inventory-manager-example/blob/master/build/serviceproxy.dockerfile

It requires the build argument SERVICE_PB_FILE, which must point to that pb file. During the image build, it is copied to /var/envoy. This concludes building the Envoy proxy image for a service.

The remaining three images can often be built with the same dockerfile. For InventoryManager, we have: https://github.com/cloudwan/inventory-manager-example/blob/master/build/servicebk.dockerfile

This example, however, is quite generic and may fit many services. It is a two-stage docker build. The first stage is for building: we use an image with the desired Golang version already installed, ensuring the build dependencies are present. This build stage copies the code repository and builds the main binary. Note also the FIXTURES_DIR argument, which MAY contain the path to the fixtures directory for your service. It must be passed when building the controller image, not necessarily for the server/db-controller ones.

In the second stage (the service image), we construct a simple image with a minimal environment, the runtime binary, and optionally the fixtures directory (/etc/lqd/fixtures).

For a reference on how these variables may be populated, see the skaffold file example (we use skaffold for our builds). It is a good tool that we recommend, though not mandatory: https://github.com/cloudwan/inventory-manager-example/blob/master/skaffold.yaml.
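As a rough illustration of how those build arguments might be wired up, here is a minimal skaffold sketch. The registry, image names, and dockerfile paths are placeholders, and the skaffold.yaml linked above is the authoritative reference:

apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: true
  artifacts:
  - image: myregistry.example.com/inventorymanagerserverproxy  # placeholder registry/name
    docker:
      dockerfile: build/serviceproxy.dockerfile
      buildArgs:
        SERVICE_PB_FILE: proto/inventory_manager.pb
  - image: myregistry.example.com/inventorymanagercontroller
    docker:
      dockerfile: build/servicebk.dockerfile
      buildArgs:
        FIXTURES_DIR: fixtures  # only needed for the controller image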

Note that we are passing the .gitconfig file there. This is mandatory to access private repositories (your service may be private, and at the moment of this writing, goten and edgelq are also private!). See also the main README for SPEKTRA Edge: https://github.com/cloudwan/edgelq/blob/main/README.md, with more info about building. Since the process may be the same, you may need to configure your own .gitconfig.

Note that skaffold can be configured to push images to Azure, GCP, AWS, you name it.

Cluster preparedness

In your cluster, you need to prepare some machines that will host:

  • API Server with envoy proxy
  • Controller
  • DbController
  • Redis instance

You can also deploy MongoDB inside the cluster, or use a managed service like MongoDB Atlas. If you use a managed cloud, MongoDB Atlas can deploy instances running in the same data center as your cluster.

When you get the MongoDB instance, note its endpoint and obtain an authentication certificate. The Mongo user must be given admin privileges: it will not only read/write regular resources, but also create databases and collections, configure those collections, and create and manage indices (derived from the proto declarations). This requires full access. It is recommended to keep MongoDB closed and reachable from your cluster only!

The authentication certificate will be needed later during deployment, so keep it as a PEM file.

If you use Firestore instead of MongoDB, you will need a service account with Firestore admin rights, including access to index management. You will need its Google credentials, and note the Google project ID.

Networking

When you made a reservation for the SPEKTRA Edge service domain (Service project and service domain name), you reserved the domain name of your service in the SPEKTRA Edge namespace, but it is not an actual networking domain. For example, iam.edgelq.com is the name of a Service object in meta.goten.com, but this name is universal, shared by all production, staging, and development environments. To reach IAM, you use a specific endpoint for a specific environment. For example, one of our common staging environments has the domain stg01b.edgelq.com, and the IAM endpoint there is iam.stg01b.edgelq.com.

Therefore, if you reserved custom.edgelq.com on the SPEKTRA Edge platform, you will want a domain like someorg.com. Then, optionally, you may define subdomains per environment type:

  • dev.someorg.com

    The full endpoint may then be custom.dev.someorg.com for the development custom.edgelq.com service.

  • stg.someorg.com

    The full endpoint may then be custom.stg.someorg.com for the staging custom.edgelq.com service.

  • someorg.com

    The full endpoint may then be custom.someorg.com for the production custom.edgelq.com service.

You will need to purchase the domain separately; this domain can then serve potentially many environments and applications reserved on the SPEKTRA Edge platform (custom, custom2, another…). You may host them on a single cluster as well.

Once you purchase, say, someorg.com and decide to use stg.someorg.com for staging environments, you will need to configure at least two endpoints for each SPEKTRA Edge service: one global and one regional.

Since SPEKTRA Edge is multi-region at its core, both endpoints are required. Suppose you have the custom.edgelq.com service reserved on the SPEKTRA Edge platform and you bought someorg.com; you will need the following endpoints:

  • custom.someorg.com

    global endpoint for your service

  • custom.<REGION>.someorg.com

    regional endpoint for your service in a specified region.

If your service is single-regional, you will need two endpoints in total. If you have two regions, you will need three endpoints, and so on.

To recap so far:

  • You will need to reserve a SPEKTRA Edge domain name (like custom.edgelq.com) on the SPEKTRA Edge platform. Then you may reserve more, like another.edgelq.com. Those will be just resources on the SPEKTRA Edge platform.

  • You will need to purchase a domain from the proper provider (like someorg.com), then optionally configure more subdomains to accommodate more env types if needed.

  • You will need to configure a global endpoint for each service (like custom.someorg.com, another.someorg.com).

  • You will need to configure a regional endpoint for each service in each region (like custom.eastus2.someorg.com, another.eastus2.someorg.com).

Note that the domain for global endpoints here is someorg.com; for eastus2, it is eastus2.someorg.com.

Even if you don't intend to have more than one region, a regional domain is required - you can just use a CNAME record to point both at the same place.

Let’s move to the public IPs part.

Regional and global domains must resolve to public IP addresses you own/rent. Note that each regional endpoint must resolve to a different IP address. The global endpoint may:

  • Use a separate IP address from the regional ones. This separate address would be an anycast address; it should still route traffic to the nearest regional cluster.

  • Use a DNS solution and let the global domain resolve to one of the regional IP addresses according to the best local performance.

For a single-region setup, you may have the regional and global domains use the same IP address and just add a CNAME record.

Meaning, if you have endpoints:

  • custom.someorg.com, another.someorg.com

    These global endpoints need to resolve to a single IP address. That address may be separate from, or equal to, one of the regional ones.

  • custom.eastus2.someorg.com, another.eastus2.someorg.com

    These are regional endpoints and need a single regional IP address. If you have more regions, each requires a different IP address.

For each region, you will need a separate cluster deployment. Inside each cluster, you will need an Ingress object with all the necessary certificates.

The networking setup is up to the service maintainers and may vary significantly depending on the cloud provider or on-premise environment. The parts required from SPEKTRA Edge's point of view revolve around the domain names.

Config files preparation

With images constructed, you need to prepare the following config files:

  • API Server config
  • Envoy proxy
  • Controller
  • Db Controller

As the Inventory Manager example uses Kubernetes declarations, this influences some aspects of the config files! You will see some variables here and there. Refer to this file for more context along the way: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/env.properties

API Server

Example of API Server config for Inventory Manager: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/api-server-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/apiserver.proto

Review this config file along with this document.

From the top, by convention, we start with the sharding information. We use a ring size of 16 as the standard; others are optional. You need to use the same naming conventions. Note that:

  • byName is mandatory ALWAYS
  • byProjectId is mandatory because in InventoryManager we use Project related resources
  • byServiceId is mandatory because in InventoryManager we use Service related resources
  • byIamScope is mandatory because we use byProjectId or byServiceId.

Below, you have a “common” config, which applies to servers, controllers, and db-controllers, although some elements are specific to only one kind. There, we specify the gRPC server config (the most important part is the port, of course). There is also an experimental WebSockets part (for bidi-streaming support for web browsers exclusively). It must run on a separate port, but the underlying libraries/techniques are experimental and may or may not work. You may skip this if you don't need bidi-streaming calls for web browsers.

After grpcServer, you can see the databases (dbs) part. Note the namespace convention:

  • The envs/$(ENV_NAME)-$(EDGELQ_REGION) part ensures that we can potentially run a single database for various environments on a single cluster. We adopted this from development environments; you may skip this part entirely if you are certain you will run just a single environment in a single cluster.

  • The second part, inventory-manager/v1-1, first specifies the application (in case you have multiple SPEKTRA Edge apps), then the version and revision (v1-1). “v1” refers to the API version of the service, and “-1” to the revision. If there is a completely new API version, we will need to synchronize (copy) databases during an upgrade. The revision suffix, -1, exists because an internal database format upgrade is also possible without API changes.

Other notable parts of the database config (a hedged sketch follows this list):

  • We use the “mongo” backend.
  • We must specify an API version matching this DB.
  • You will need to provide the MONGO_ENDPOINT variable; Mongo deployment itself is not covered in this example.
  • Note that the URL specifies /etc/lqd/mongo/mongodb.pem. As of now, this file must be mounted on the pod at startup. In the future, it may be provided in different ways.
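For comparison with the Firestore variant below, a Mongo entry might look roughly like this. The exact field names under mongo and the URL options are assumptions, so treat the linked api-server-config.yaml and apiserver.proto as the source of truth:

dbs:
- namespace: "envs/$(ENV_NAME)-$(EDGELQ_REGION)/inventory-manager/v1-1"
  backend: "mongo"
  apiVersion: "v1"
  connectionPoolSize: $(INVENTORY_MANAGER_DB_CONN_POOL_SIZE)
  mongo:
    # Assumed field name and URL form; the PEM path must be mounted on the pod.
    endpoint: "mongodb+srv://$(MONGO_ENDPOINT)/?tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"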

Instead of Mongo, you may also configure firestore:

dbs:
- namespace: "envs/$(ENV_NAME)-$(EDGELQ_REGION)/inventory-manager/v1-1"
  backend: "firestore"
  apiVersion: "v1"
  connectionPoolSize: $(INVENTORY_MANAGER_DB_CONN_POOL_SIZE)
  firestore:
    projectId: "$(GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

Of course, you will need to have these credentials and use them later in deployment.

Later, you have the dbCache configuration. We support only Redis for now. Note also the endpoint: for deployments like this, it should be an internal endpoint available only inside the cluster.
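A rough sketch of what that might look like; the field names here are illustrative assumptions, so check the example api-server-config.yaml for the exact schema:

dbCache:
  backend: "redis"
  redis:
    # Internal, cluster-only endpoint; REDIS_ENDPOINT is a placeholder variable.
    endpoint: "$(REDIS_ENDPOINT)"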

Further on, you have the authenticator part. The values AUTH0_TENANT, AUTH0_CLIENT_ID, and EDGELQ_DOMAIN must match those provided by the SPEKTRA Edge cluster you are deploying for. But you need to pay more attention to the serviceAccountIdTokenAudiencePrefixes value. There, you need to provide all private and public endpoints your service may be reached at. The example provides:

  • one private endpoint visible inside the Kubernetes cluster only (the one ending in -service).
  • public regional endpoint
  • public global endpoint

Public endpoints must match those configured during the Networking stage!
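To make that concrete, the authenticator section might look roughly like this. The field names other than serviceAccountIdTokenAudiencePrefixes are assumptions derived from the variable names, and the endpoints (including whether a scheme prefix is used) are placeholders; the linked api-server-config.yaml shows the exact form:

authenticator:
  auth0Tenant: "$(AUTH0_TENANT)"        # assumed field name
  auth0ClientId: "$(AUTH0_CLIENT_ID)"   # assumed field name
  edgelqDomain: "$(EDGELQ_DOMAIN)"      # assumed field name
  serviceAccountIdTokenAudiencePrefixes:
  - "https://inventory-manager-service:8443"        # private, in-cluster endpoint (placeholder)
  - "https://inventory-manager.eastus2.someorg.com"  # public regional endpoint (placeholder)
  - "https://inventory-manager.someorg.com"          # public global endpoint (placeholder)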

After the authenticator, you have the observability settings. You can configure the logger, Audit, and Usage there. The last two use audit.edgelq.com and monitoring.edgelq.com. You can also add a tracing exporter. As of now, it can work with Jaeger and Google Tracing (GCP only):

Stackdriver example (note that you are responsible for providing the Google credentials path):

observability:
  tracing:
    exporter: "stackdriver"
    sample_probability: 0.001
    stackdriver:
      projectId: "$(GCP_PROJECT_ID)"
      credentialsFilePath: "/etc/lqd/gcloud/google-credentials.json"

Jaeger example, BUT as of now it has hardcoded endpoints:

  • agentEndpointURI = "jaeger-agent:6831"
  • collectorEndpointURI = "http://jaeger-collector:14268/api/traces"

observability:
  tracing:
    exporter: "jaeger"
    sample_probability: 0.001

This means you will need to deploy Jaeger manually. Furthermore, you should be careful with sampling: a low value is preferred, but it makes tracing a poor tool for bug hunting. SPEKTRA Edge currently uses obsolete tracing instrumentation, but a proper replacement is on the roadmap; with it, this example will be enhanced.

After observability, you should see clientEnvironment. This used to be responsible for connecting to other services: it took the domain part and prepended short service names. With a multi-domain environment, this is now obsolete. It remains for compatibility reasons and should point to your domain; it may be dropped in the future. The replacement is envRegistry, just below.

The env registry config (envRegistry) is one of the more important parts. You need to specify the current instance type and the region information: which region the current deployment is for, and which is the default one for your service. The default region must be the first one you deploy your service to. The service sub-param must be the same as the service domain name you reserved on the SPEKTRA Edge platform. Then you must provide the global and regional (for this region) endpoints of your service. You may also provide a private regional endpoint along with localNetworkId. The latter param can have any value of your choice; it is not equal to any resource ID created anywhere. It must only be the same in all config files for all runtimes running on the same cluster, so they know they can safely use the private endpoint (for performance reasons). Finally, scoreCalculator and location are used for multi-region middleware routing: if it detects a request that needs to be routed somewhere else, and "somewhere else" could be more than one region, it uses these options to pick the best one.

The next part, bootstrap, is necessary to configure EnvRegistry in the first place; it must point to the meta service endpoint, from which information about the whole SPEKTRA Edge environment is obtained.
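A hedged sketch of the envRegistry and bootstrap parts for a single-region deployment follows. The exact field names (and the API server value of instanceType) are assumptions, and the endpoints, region ID, and localNetworkId value are placeholders; the linked api-server-config.yaml and apiserver.proto are authoritative:

envRegistry:
  instanceType: API_SERVER                                   # assumed value; CONTROLLER / DB_CONTROLLER in the other configs
  region: "eastus2"                                           # region of this deployment (placeholder)
  defaultRegion: "eastus2"                                    # first region the service was deployed to
  service: "inventory-manager.edgelq.com"                     # service domain reserved on the SPEKTRA Edge platform
  globalEndpoint: "inventory-manager.someorg.com"             # placeholder
  regionalEndpoint: "inventory-manager.eastus2.someorg.com"   # placeholder
  privateRegionalEndpoint: "inventory-manager-service:8443"   # optional, in-cluster only
  localNetworkId: "examples-eastus2-cluster"                  # any value, same for all runtimes in this cluster
bootstrap:
  endpoint: "meta.$(EDGELQ_DOMAIN):443"                       # meta service endpoint of the SPEKTRA Edge environment (placeholder)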

The last common config parts are:

  • disableAuth: you should leave this false, but you may set it to true for some local debugging.

  • disableLimits: an old option used in the past for development; it typically needs to be false. It has no effect if limits integration was not done for the service.

  • Option enableStrictNaming enables strict IDs (32 chars max per ID; only a-z, 0-9, - and _ are allowed). This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

  • avoidResourceCreationOverride

    if true, then an attempt to send a Create request for an existing resource will result in an AlreadyExists error. This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

  • allowNotFoundOnResourceDeletion

    if true, then an attempt to send a Delete request for a non-existing resource will result in a NotFound error. This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

The nttCredentialsFile param is a very important one: it must contain the path to the NTT credentials file you obtained when reserving the service on the SPEKTRA Edge platform.

Envoy proxy

Example of the Envoy proxy config for Inventory Manager: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/envoy.yaml

From a protocol point of view, the task of the envoy proxy is to:

  • Passthrough gRPC traffic
  • Convert webGRPC calls (made by web browsers) to gRPC ones.
  • Convert REST API (HTTP 1.1) calls to gRPC ones.

It also adds a TLS layer between the Ingress and the API Server! Note that when a client outside the cluster communicates with your service, it does not connect to the service directly, but to the Ingress Controller sitting at the entry to your cluster. This Ingress handles TLS with the client, but a separate TLS connection to the API server is also required: the Ingress maintains two connections, one to the end client and the other to the API server. The Envoy proxy, sitting in the same Pod as the API Server, handles the upstream part of that TLS. Note that in envoy.yaml you have the /etc/envoy/pem/ directory with TLS certs. You will need to provision them separately, in addition to the public certificate for the Ingress.

Refer to the Envoy proxy documentation for these files. From SPEKTRA Edge's point of view, you may copy and paste this file from service to service. You will, however, need to:

  • Replace all “inventory-manager” strings with your service name.
  • Configure REST API transcoding on a case-by-case basis.

For this REST API, see the following config part:

- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    proto_descriptor: /var/envoy/inventory_manager.pb
    services:
    - ntt.inventory_manager.v1.ProjectService
    - ntt.inventory_manager.v1.DeviceModelService
    - ntt.inventory_manager.v1.DeviceOrderService
    - ntt.inventory_manager.v1.ReaderAgentService
    - ntt.inventory_manager.v1.RoomService
    - ntt.inventory_manager.v1.SiteService
    - ntt.mixins.diagnostics.v1.UtilityService
    print_options:
      add_whitespace: false
      always_print_primitive_fields: true
      always_print_enums_as_ints: false
      preserve_proto_field_names: false
- name: envoy.filters.http.grpc_web
- name: envoy.filters.http.router

If you come back to the Building images part of this document, you can see that we created the inventory_manager.pb file and included it when building the Envoy proxy image. We need to ensure this file is referenced in envoy.yaml, and that all services are listed. For your service, find all service declarations in the protobuf files and put them in this list. As of now, the Utility service offers just this one API group.

If you study envoy.yaml as well, you should see that it has two listeners:

  • On port 8091, we serve WebSockets (experimental; you should omit this if you don't need bidi-streaming support for web browsers over WebSockets).
  • On port 8443, we serve the rest of the protocols (gRPC, webGRPC, REST API).

It forwards (proxies) traffic to the following ports (the cluster settings):

  • 8080 for gRPC
  • 8092 for websockets-grpc

Note that these numbers match those in the API server config file! But when you configure the Kubernetes Service, you will need to use the Envoy ports.
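For example, a hedged sketch of such a Service; the selector label and the WebSockets port mapping are placeholders, and the real one is generated from the example's kustomization (with the inventory-manager- namePrefix described in the Deployment manifests section):

apiVersion: v1
kind: Service
metadata:
  name: service              # becomes inventory-manager-service after the kustomize namePrefix
  namespace: examples
spec:
  selector:
    app: apiserver           # placeholder label of the API server pods
  ports:
  - name: https
    port: 443
    targetPort: 8443         # Envoy listener for gRPC, webGRPC and REST
  - name: websockets
    port: 8091
    targetPort: 8091         # experimental, may be omitted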

Controller

Look at the Inventory Manager example: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/controller-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/controller.proto

The top part, serverEnvironment, is very similar (actually the same) to the commonConfig part in the API server config; we just specify fewer options, AND instanceType in envRegistry needs a different value (CONTROLLER). We don't specify databases, gRPC servers, cache, or authenticator, and observability is smaller.

The next part, nodeRegistry, is required. It specifies the Redis instance that controller nodes use to detect each other. Make sure to provide a unique namespace; don't blindly copy and paste it to different controllers if you have more service backends!

Next, businessLogicNodes is required if you have a business logic controller in use. It is relatively simple: typically we need to provide just the node's name (for Redis registration purposes) and, most importantly, the sharding ring, which must match a value used in the backend. You can also specify the number of (virtual) nodes that fit into a single runtime process.

The limitNodes param is required if you use limits integration; you should just copy-paste those values, with the rings specified as in the example.

Finally, fixtureNodes were discussed in the SPEKTRA Edge registration doc, so we skip them here.
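To recap the node-related parts, here is a hedged sketch of nodeRegistry and businessLogicNodes. The field names and values are illustrative assumptions; treat the linked controller-config.yaml and controller.proto as the source of truth:

nodeRegistry:
  redis:
    endpoint: "$(REDIS_ENDPOINT)"                                               # placeholder
  namespace: "envs/$(ENV_NAME)-$(EDGELQ_REGION)/inventory-manager/controller"   # keep it unique per controller
businessLogicNodes:
  nodeName: "business-logic"     # assumed field name; used for Redis registration
  shardingRing: "byProjectId"    # must match a ring used by the backend
  nodesCount: 16                 # number of virtual nodes per runtime process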

Db controller

Look at the Inventory Manager example: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/db-controller-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/dbcontroller.proto

The top part, serverEnvironment, is very similar to those of the API server and controller. Unlike the server, it has no gRPC server or authenticator parts, but it does have database and cache options, because those are needed for database upgrades and multi-region syncing. The instanceType param in envRegistry must be equal to DB_CONTROLLER, but otherwise all is the same.

It needs a nodeRegistry config because it uses sharding with other db-controllers in the same region and service.

The nodesCfg config is standard and must be used as in the example.

TLS

Let’s start with the TLS part.

There are two encrypted connections:

  • Between end client and Ingress (Downstream for Ingress, External)
  • Between Ingress and API Server (via Envoy - Upstream for Ingress, Internal).

This means we have separate connections, and each one needs encryption. For the external connection, we need a public certificate signed by a trusted authority. There are many ways to obtain one: in the clouds, we can likely get managed certificates, or we can use LetsEncrypt (cloud-agnostic). It is up to service developers to decide how to get them; certificates are needed for the regional and global endpoints. Refer to the LetsEncrypt documentation for how to set it up with your Ingress, along with your choice of Ingress in the first place.

For the internal certificate, for connections to the API Server Envoy runtime, we need just a self-signed certificate. If we are in a Kubernetes cluster and have a ClusterIssuer for self-signed certs, we can create the following (assuming the Inventory Manager service, namespace examples, and region ID eastus2):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: inventory-manager.eastus2.examples-cert
  namespace: examples
spec:
  secretName: inventory-manager.eastus2.examples-cert
  duration: 87600h # 10 years
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
  usages:
  - server auth
  - digital signature
  - key encipherment
  dnsNames:
  - "inventory-manager.examples.svc.cluster.local"
  - "inventory-manager.examples.pod.cluster.local"
  - "inventory-manager.eastus2.examples.dev04.nttclouds.co"
  - "inventory-manager.examples.dev04.nttclouds.co"
  issuerRef:
    name: selfsigned-clusterissuer
    kind: ClusterIssuer

Note that you need the selfsigned-clusterissuer component ready; there are examples on the internet of how to create a cluster issuer like that.
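For completeness, a standard cert-manager self-signed ClusterIssuer looks like this (assuming cert-manager is already installed in the cluster):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-clusterissuer
spec:
  selfSigned: {}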

With the created Certificate, you can get pem/crt files:

kubectl get secret "inventory-manager.eastus2.examples-cert" --namespace examples -o json | jq -r '.data."tls.key"' | base64 --decode > "./server-key.pem"
kubectl get secret "inventory-manager.eastus2.examples-cert" --namespace examples -o json | jq -r '.data."tls.crt"' | base64 --decode > "./server.crt"

You will need these files for the upstream TLS connection, so keep them.

Deployment manifests

For the Inventory Manager example, we should start examining the deployment from the kustomization file: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/kustomization.yaml

This contains the full deployment (except secret files and the Ingress object); you may copy, understand, and modify its contents for your case. The Ingress requires additional configuration.

Images

In the given example, the code points at my development image registry, so you will need to replace it with your own images. Otherwise, it is straightforward to understand.

Resources - Deployments and main Service

We have full YAML Deployments for all runtimes. Note that the apiserver.yaml file has a Deployment with two containers: one for the API Server and the other for the Envoy proxy.

All Deployments have relevant pod auto-scalers (except Redis, to avoid synchronization across pods). You may also deploy Redis as a managed service; just replace the endpoint in the YAML config files for the API server, controller, and db-controller!

In this file, you also have a Service object at the bottom that exposes two ports: one HTTPS (443) that redirects traffic to the Envoy proxy on 8443 and serves gRPC, grpc-web, and REST API; the other is experimental, for WebSockets only, and may be omitted. This is the Service you will need to point the Ingress at for a full setup (a hedged Ingress sketch follows below). When you construct an Ingress, you will need to redirect traffic to the “inventory-manager-service” k8s Service (but replace the inventory-manager- prefix with something valid for you). If you wonder why, given that metadata.name is just service, the reason is the following line in kustomization.yaml:

namePrefix: inventory-manager-

This is prepended to all resource names in this directory.
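As mentioned above, the Ingress itself is not part of the example repository. A hedged sketch of what it might look like for a single region follows; the hostnames, TLS secret name, and any ingress-controller-specific annotations (which you will likely need for HTTPS/gRPC upstreams) are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: inventory-manager-ingress
  namespace: examples
  # Ingress-controller-specific annotations (e.g. for the HTTPS/gRPC upstream
  # protocol towards inventory-manager-service) will likely be needed here.
spec:
  tls:
  - hosts:
    - inventory-manager.eastus2.someorg.com   # regional endpoint (placeholder)
    - inventory-manager.someorg.com           # global endpoint (placeholder)
    secretName: inventory-manager-public-tls  # public certificate, e.g. from LetsEncrypt
  rules:
  - host: inventory-manager.eastus2.someorg.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: inventory-manager-service
            port:
              number: 443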

When adopting these files, you need to:

  • Replace the “inventory-manager-” prefix in all places with a valid value for your service.

  • Fix container image names (inventorymanagerserverproxy, inventorymanagerserver, inventorymanagercontroller, inventorymanagerdbcontroller) in yaml files AND kustomization.yaml

    images should point to your image registry!

Config generator, configuration, and vars

In the kustomization, you should see a config map generator that generates ConfigMaps from the config files of all four runtimes. However, we also need to take care of all the variables in $(VAR_NAME) format. First, we declare configurations pointing to params.yaml. Then we declare the full list of vars. These are populated via the config map generator:

- name: examplesenv
  envs:
  - env.properties

And now these variables can be replaced in the config files with values from env.properties.

Secrets recap

The secretGenerator param in kustomization.yaml recaps all the secret files we need (a hedged sketch follows this list):

  • We have two TLS files for the self-signed certificate, for the internal connection between the Ingress and the API Server Envoy.

  • We have the MongoDB credentials. You may opt for Firestore if you can and prefer, in which case you need to replace them with Google credentials.

  • Finally, we have the NTT credentials.

    These must have been obtained when you initially reserved the Service on the SPEKTRA Edge platform, using the UI or cuttle - see Setting up Environment.
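A hedged sketch of such a secretGenerator; the secret names and file paths below are placeholders, and the real kustomization.yaml in the example repository shows the exact entries expected by the Deployment manifests:

secretGenerator:
- name: apiserver-tls            # self-signed cert for the Ingress -> Envoy TLS (placeholder name)
  files:
  - certs/server-key.pem
  - certs/server.crt
- name: mongodb-credentials      # MongoDB PEM (or Google creds if you use Firestore)
  files:
  - certs/mongodb.pem
- name: ntt-credentials          # credentials obtained when reserving the service
  files:
  - ntt-credentials.json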