Third-Party Service Developer Guide

How to develop third-party services running on the SPEKTRA Edge platform.

1 - Quick Start

Quickly start service development with the Goten framework.

Prerequisites

  • Git distributed version control software.

    For installation instructions, see Git’s Installation guide.

  • clang-format code formatter.

    Install clang-format through the OS package manager and create a clang-format-12 symlink to it as below:

    $ ln -s $(which clang-format) $(dirname $(which clang-format))/clang-format-12
    

Get the example code

The example code is part of the goten repo.

  1. Clone the repo:

    $ git clone https://github.com/cloudwan/goten
    
  2. Change to the top directory and install the Protocol Buffer compiler (protoc, version 3), the Go plug-ins for the protocol compiler, and the JavaScript packages with the install-proto-deps.sh script:

    $ cd goten
    $ ./scripts/install-proto-deps.sh
    
  3. Change to the quick start example directory:

    $ cd example/helloworld
    

Run the example

From the example/helloworld directory:

  1. Compile and execute the server code:

    $ go run cmd/greeter_server/main.go
    
  2. From a different terminal, compile and execute the client code to see the client output:

    $ go run cmd/greeter_client/main.go -name Goten
    Greeting: Hello Goten
    

Congratulations! You’ve just run a client-server application with Goten.

What’s Next

2 - Third-Party Service Developer Guide

Understanding how to develop third-party services.

In this section, you will find an in-depth explanation of the SPEKTRA Edge platform for developing your own services. For the quickstart guide and an example service built on top of SPEKTRA Edge, it is highly recommended to start from https://github.com/cloudwan/inventory-manager-example

This documentation provides a full reference to all possible options and is aimed at more advanced users.

To develop code with SPEKTRA Edge, it is necessary to first understand the relevant technologies: mostly gRPC and protobuf. Others depend on your specific use case, but our tools and libraries already wrap many of them, and hopefully they cover your case too.

The Goten and SPEKTRA Edge frameworks are written in Golang, and it is recommended to know this language. It is usually plain simple, although channels and concurrency techniques may require a bit more work.

2.1 - Preparing your Environment

How to prepare your development environment.

2.1.1 - Prerequisites

What you need to know before developing SPEKTRA Edge services.

gRPC and Protocol buffers

All core services on the SPEKTRA Edge platform use gRPC/Protocol buffer technologies to communicate with each other. You can read about them here: https://grpc.io/docs/what-is-grpc.

We are using exclusively the proto3 version.

However, to put things more simply: gRPC is a high-level protocol where client and server can communicate in the following ways:

  • Unary request-response, for example, GET Object, LIST Objects etc.
  • Server streaming: The client initiates the connection, sends a first message and then the server keeps sending a series of messages one after another. Use case: WATCH changes on Object “X”.
  • Client streaming: The client initiates the connection and then keeps sending messages. Example use case: Logging
  • Bidi-streaming: The client initiates the connection and then keeps exchanging messages with the server until the connection is closed.

In other words, gRPC defines methods, which can be unary or streaming. It is built on top of HTTP2, and requests, including streams, have HTTP headers. Messages (payloads) themselves use a binary format, so you can’t just use JSON with curl or Postman. The format of the messages is defined by protocol buffers. However, those messages can always be dumped into a human-readable format - Cuttle is an example.

Message definitions (or structures) are written in a human-friendly way. First, you need to define the structure in proto files (proto is the file name extension). Basic primitive field types are self-describing - string, int64, uint32, bool, etc. You can add “repeated” before a type to make an array, for example: repeated int32 integers = 1;. To declare a key-value collection, use map<key, value>. You can also declare message <name> { <body> } inside messages to define a child structure. There are also “enum” and “oneof”. You can find plenty of examples on the internet, in our services, and in our example inventory-manager app.
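To make this concrete, here is a minimal illustrative proto3 message using these constructs (all names here are made up for this sketch):

syntax = "proto3";

message Inventory {
  // primitive field; "1" is the binary field ID used for serialization
  string display_name = 1;

  // "repeated" turns the field into an array
  repeated int32 integers = 2;

  // map declares a key-value collection
  map<string, string> labels = 3;

  // a message declared inside a message defines a child structure
  message Item {
    string sku = 1;
    int64 quantity = 2;
  }
  repeated Item items = 4;

  // enum example
  enum State {
    STATE_UNSPECIFIED = 0;
    ACTIVE = 1;
    DISABLED = 2;
  }
  State state = 5;

  // oneof example: at most one of these fields is set
  oneof source {
    string manual_entry = 6;
    string imported_from = 7;
  }
}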

In proto files, on top of messages, you also define a list of APIs and methods. Again, see the inventory-manager example, some of our services, or check the internet. They should be simple to understand, and with practice you will get everything. The only thing that may look strange at the beginning is the “numbers” assigned to all fields in all messages. They merely declare the binary field IDs used for serialization. They are not something to be particularly worried about; however, once code is released into production, the numbers need to stay as they are. Changing them would render the API incompatible, as messages passed between clients and servers would break.

When developing in Goten, you will need to model requests/responses and resources using protocol buffers. Once you have proto files, you can use a proto compiler that creates the relevant source code files (C++, JavaScript, Golang, Python, etc.). It will create code for messages (structs, getters, setters, standard util functions, etc.) and clients (the client itself is an interface, which contains the same list of methods as defined in proto files for your service).

Some pseudo-golang example:

connectionHandle := EstablishConnection(
   "service.address.com",
   credentialsObject,
)

// make a client
client := NewYourAppServiceClient(connectionHandle)

// unary request example
response := client.UnaryMethodName(requestObject)

// streaming example
stream := client.StreamingMethodName()
stream.Send(clientMsg)
serverMsg := stream.Recv()

Protoc for Golang (or any other language) will create a constructor for the client (NewYourAppServiceClient), with all the methods.

Goten also enables a REST API - incoming JSON requests are converted into protobuf, and outgoing messages are converted back to JSON. This has, however, some performance penalty.
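For illustration only, a REST call to such a transcoded API might look roughly like this - the exact URL layout depends on the generated transcoding rules of your service, so treat the host and path below as hypothetical:

curl -H "Authorization: Bearer $TOKEN" \
  "https://your-service.example.com/v1/projects/my-project/someResources"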

Goten

Goten is a service development framework for building resource-oriented APIs on top of grpc/protobuf. Service built with Goten consists of:

  • Resources
  • APIs (multiple API groups)
  • Methods (each method belongs to some API group).

All core SPEKTRA Edge services were built using Goten, and it is also required for any third-party application.

Developers first need to define resources - with relationships to each other. Goten implicitly creates an API group per each one. For example, in service iam.edgelq.com we have a resource called “Permission”. Therefore, service iam.edgelq.com has an API called “PermissionService”, which contains a set of methods operating on Permission resources. By default, those <Resource>Service APIs contain basic CRUD methods (example for PermissionService):

  • GetPermission
  • BatchGetPermissions
  • ListPermissions
  • WatchPermission
  • WatchPermissions
  • CreatePermission
  • UpdatePermission
  • DeletePermission

Developers can attach more methods (custom ones) to those implicitly created API groups. On top of that, they can create custom additional API groups with custom methods - each may operate on different resources.

The benefit of splitting a single service into many API groups is packaging - we can have a single package per resource and per API group (set of methods). Client modules can pick which parts of the service they are interested in, and compiled binaries should be smaller. Smaller packages also make modules smaller. Still, within code, we can access all resources defined within the service using the same connection/database handle - and consequently, we can access multiple resources within the service in a single transaction.

Goten is more a toolbox than a single tool: it contains a set of tools generating code for a service based on YAML/protobuf files, definitions of reusable protobuf types, and a set of runtime libraries linked during compilation.

Service built with Goten will support communication with the following APIs:

  • gRPC

    Recommended as the most native protocol and having the best performance.

  • web-gRPC

    It is gRPC for web browsers (as they can’t use native gRPC).

  • HTTP REST API

    Uses gRPC transcoding, where requests/responses are converted from/to JSON before being passed for processing.

Request processing by SPEKTRA Edge backends

A SPEKTRA Edge-based service can accept gRPC (regular request-response or streams), web-gRPC (gRPC for web browsers, with websockets for streaming), and HTTP REST API (request-response only, no streams).

When a request is received by the backend, it first checks the protocol. Any non-gRPC messages are converted to gRPC before the backend handler is called. The handler then identifies the method and sets up observability components for auditing and usage metrics such as latency and request/response sizes. If configured, it also initiates tracing spans. The request/stream passes through common interceptors that are universal for each method. These interceptors are often simple: adding tags, catching exceptions, and configuring the logger.

One notable component is the Authenticator, a module provided by the SPEKTRA Edge framework and built into every server runtime during compilation and linking. The Authenticator retrieves the authorization header and attempts to identify the holder, referred to as the Principal. There are two primary types of principals: ServiceAccount (optimized for bots) and human (User). If the holder cannot be classified, the Principal is classified as Anonymous.

The Authenticator then checks with its cache. If the principal is stored there, it validates the claims and proceeds if there is a match. Otherwise, it sends a request to the iam.edgelq.com service to inquire about the identity of the principal (GetPrincipal). If the IAM service itself is making requests, it queries its own database. During the GetPrincipal execution, IAM identifies the Principal and validates that the service requesting the principal has the right to access it. The basic rule is that if a User or ServiceAccount has any RoleBindings for a given service, then the service is assumed to be allowed to access that principal. In this case, IAM returns the Principal data with the corresponding User or ServiceAccount. The returned data is cached for faster execution in subsequent requests.

Note: In a multi-regional environment, GetPrincipal has additional tricks though - the Authenticator needs to identify, from the authorization token, which regions of iam.edgelq.com will know the given principal, but this is already provided in the Authenticator code.

After authentication finishes, the request/stream reaches a set of code-generated middlewares (layers) specific to this method.

The first middleware MAY be the transformer middleware - if the method called by the user uses an older API version, the request/stream is upgraded to the higher version before proceeding further. If the request/stream already uses the newest API, it goes straight to the next part.

The next middleware (and the first one, if no versioning was needed) is the multi-region routing middleware. It inspects the stream/request and decides whether the request can be executed locally, or should be proxied to the same service in a different region.

The next middleware is the Authorization middleware. It extracts the Principal object from the current processing context and checks if the given Principal is allowed to execute this request. Authorization middleware uses an Authorizer component that is linked in during the server runtime build process. The Authorizer grabs the relevant RoleBindings for the user and validates them against the method. When possible, RoleBindings are extracted from the local cache; otherwise, another request is sent to iam.edgelq.com, and the returned RoleBindings are cached. The Authorizer eventually decides whether the request/stream can be processed further.

If all is good, the next middleware is the database transaction middleware - it grabs either a SNAPSHOT-transaction or NO-transaction handle. In the case of the snapshot one, it needs an additional IO operation on the database.

The next middleware is the “outer” middleware - it performs basic request/stream validation; for update operations it will verify previous resources and execute CAS (Compare And Swap) if specified. For creations, it will verify resources did not exist; for deletions, it will verify they existed.

Finally, outer middleware passes the request/stream to the proper processing part. It can be two things:

  • Code-generated server core

    it handles all CRUD operations or returns Unimplemented errors for custom actions.

  • Custom middleware, written by engineer/developer.

    This custom middleware must be written like a middleware: it must have a handle to the code-generated server core. Custom middleware must fully handle all custom methods (requests or streams) and not pass them to the server core, because the core would return an Unimplemented error.

For CRUD operations, custom middleware MAY implement additional processing executed BEFORE or AFTER the server core, but eventually it must pass handling to the code-generated server core. During this proper processing, the server can get/save resources from the database, or connect with other services for more complex cases. Note that the database handle provided by the Goten framework MAY not only connect with the actual database BUT also connect with other services (in the same or different regions) if we are executing the save/deletion of a resource with references to other services and/or regions. This ensures that service database schemas remain in sync, even if we have cross-service or cross-region references. For cross-service requests, other services will also use the Authorizer to confirm that the requesting user/service is allowed to reference the resources they own!
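To illustrate the shape of such custom middleware, here is a minimal Go sketch. All type and method names are hypothetical placeholders, not actual Goten-generated identifiers:

package server

import (
	"context"
)

// Placeholder request/response types, standing in for generated messages.
type (
	CreateItemRequest      struct{}
	Item                   struct{}
	ReconcileItemsRequest  struct{}
	ReconcileItemsResponse struct{}
)

// InventoryServiceServer stands in for the code-generated server interface.
type InventoryServiceServer interface {
	CreateItem(ctx context.Context, req *CreateItemRequest) (*Item, error)
	ReconcileItems(ctx context.Context, req *ReconcileItemsRequest) (*ReconcileItemsResponse, error)
}

// customMiddleware keeps a handle to the code-generated server core.
type customMiddleware struct {
	core InventoryServiceServer
}

// CreateItem is a CRUD method: custom logic may run before or after,
// but handling must eventually be passed to the server core.
func (m *customMiddleware) CreateItem(ctx context.Context, req *CreateItemRequest) (*Item, error) {
	// ... optional custom pre-processing (defaulting, validation) ...
	resp, err := m.core.CreateItem(ctx, req)
	// ... optional custom post-processing ...
	return resp, err
}

// ReconcileItems is a custom action: the middleware must implement it
// fully and NOT delegate to the core, which would return Unimplemented.
func (m *customMiddleware) ReconcileItems(ctx context.Context, req *ReconcileItemsRequest) (*ReconcileItemsResponse, error) {
	// ... full custom implementation here ...
	return &ReconcileItemsResponse{}, nil
}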

Once the request/stream is processed by the proper handler (optional custom middleware + server core), the request/stream goes through the middlewares in reverse order (unwrapping). Here most of the middlewares don’t do anything. For example, the outer middleware will just pass the response/exit stream error further back.

More important things happen with the transaction middleware when exiting: if a snapshot transaction was used, all resource updates/deletions are submitted to the database at this moment.

When a request comes back to the routing middleware, usually nothing happens; the middleware propagates the response or exit stream code further back. However, if it was decided that the request should have been executed by MORE THAN ONE REGION, the middleware will wait until the other servers in different regions return their responses. Once that happens, the final response is merged from the multiple regional ones.

If there was a transformer middleware for versioning before the routing middleware, it converts the response to the older API, if needed. Interceptors are then called in reverse, but as of now, this is a simple pass-through. Eventually, the response or stream exit code is returned by the server. At this point, observability components are also concluded:

  • Audit (if used for this call) will be notified about the activity.
  • Monitoring service will get usage metrics.
  • Tracing may optionally get spans.

The protocol will be adjusted back to REST API/web-gRPC if needed after the response exits the server.

Goten/SPEKTRA Edge provides all of these interceptors/middlewares/procedures based on the YAML/protobuf prototypes written by the user. The code you are required to write is the custom middleware, at least when it comes to backend services.

2.1.2 - Setting up your Development Environment

How to setup your development environment.

Setting up development environment

If you do not have the Go SDK, you should download and configure it. To check the version required by SPEKTRA Edge, see this file - the top shows the required minimum version. As of this writing, it is 1.21, but it may change. Ensure the Go SDK is installed.

You will need access to the following repositories; ensure you can clone them:

  • https://github.com/cloudwan/goten
  • https://github.com/cloudwan/edgelq

With the Go SDK installed, check the $GOPATH variable: echo $GOPATH. Ensure the following paths exist (clone or symlink the repositories there):

  • $GOPATH/src/github.com/cloudwan/goten
  • $GOPATH/src/github.com/cloudwan/edgelq

Export variables, as they are referenced by various scripts:

export GOTENPATH=$GOPATH/src/github.com/cloudwan/goten
export EDGELQROOT=$GOPATH/src/github.com/cloudwan/edgelq

You may export them permanently.

Goten/SPEKTRA Edge comes with its own dependencies and plugins; install them:

$GOTENPATH/scripts/install-proto-deps.sh
$GOTENPATH/scripts/install-plugins.sh
$EDGELQROOT/scripts/install-edgelq-plugins.sh

You need a repository for your code, like github.com/some-namespace/some-repo. Ensure that this repository is located at $GOPATH/src/some-namespace/some-repo, OR is sym-linked there, as shown below.
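For example, assuming your working copy lives somewhere else, a symlink is enough (the paths below are illustrative):

mkdir -p $GOPATH/src/some-namespace
ln -s ~/workspace/some-repo $GOPATH/src/some-namespace/some-repo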

Reserving service on the SPEKTRA Edge platform

Before you begin, ensure that you have access to some Organization where you have permission to create service projects - typically this means involving an administrator. You also need cuttle configured - at the least, you should go through the user guide up to the IAM chapter.

All service resources on the SPEKTRA Edge platform belong to some IAM project. An IAM Project that is capable of “hosting” services is called a “Service Project”. You may create many services under a single service project, but you need to make the first one.

# If you don't plan to create devices/applications resources under your service project, you should skip them
# using core-edgelq-service-opt-outs. It DOES NOT MEAN that final tenant projects using your service will not use
# those services, or that your service won't be able to use devices/applications. It merely says that your project
# will not use them.
cuttle iam setup-service-project project --name 'projects/$SERVICE_PROJECT_ID' --title 'Service Project' \
  --parent-organization 'organizations/$MY_ORGANIZATION_ID' --multi-region-policy '{"enabledRegions":["$REGION_ID"],"defaultControlRegion":"$REGION_ID"}' \
  --core-edgelq-service-opt-outs 'services/devices.edgelq.com' --core-edgelq-service-opt-outs 'services/applications.edgelq.com'

To clarify: A service project is just a container for services, and is used for some simple cases like usage metrics storage or service accounts. Therefore, eventually, you will have two resources:

  • IAM Service project: projects/$SERVICE_PROJECT_ID - it will contain credentials of ServiceAccounts with access to your service or usage metrics.
  • Meta Service: services/$YOUR_SERVICE

The service project is a type of IAM project, but your service tenants will have their own projects. A service on its own is a separate entity from the project it belongs to. Therefore, unless your project needs some devices/applications resources directly, it is recommended to opt out from those services using the --core-edgelq-service-opt-outs arguments. Your service will still be able to import/use devices/applications, and so will the tenants using your service.

You need to decide on $SERVICE_PROJECT_ID and $REGION_ID, where your service will run. As of this moment, the SPEKTRA Edge platform is single-regional, so you will only have one region at your disposal, but that should change in the future. It will be possible to expand your service project (and therefore services) to more regions later on.

You will need to replace the $MY_ORGANIZATION_ID variable with one you have access to.

Once you have service project created, you will need to reserve a service:

cuttle iam reserve-service-name project --name 'projects/$SERVICE_PROJECT_ID' --service 'services/$YOUR_SERVICE_NAME' \
  --admin-account 'projects/$SERVICE_PROJECT_ID/regions/$REGION_ID/serviceAccounts/svc-admin' \
  --admin-key '{"name":"projects/$SERVICE_PROJECT_ID/regions/$REGION_ID/serviceAccounts/svc-admin/serviceAccountKeys/key", "algorithm":"RSA_2048"}' \
  -o json

Now you will need to determine the value of $YOUR_SERVICE_NAME - our 3rd party services are watchdog.edgelq.com and ztna.edgelq.com. Those look like domains, but the actual public domain you can decide/reserve later on.

Argument --admin-account determines the ServiceAccount resource that will be allowed to create a given Service, and it will be responsible for its future management. If it does not exist, it will be created. You should be able to see it with:

cuttle iam get service-account 'projects/$SERVICE_PROJECT_ID/regions/$REGION_ID/serviceAccounts/svc-admin' -o json

The argument --admin-key is more important as it will create a ServiceAccountKey resource under the specified admin account. However, if the key already exists, you will receive an AlreadyExists error. This may occur if you were already making reservations for different services. If both ServiceAccount and ServiceAccountKey already exist in a given service project, you should skip using the --admin-key argument altogether and simply use previously obtained credentials. The same ServiceAccount can be used for many services. However, if you wish, you can decide to create another --admin-account by providing a different name than what was used before.

If you provide the --admin-key argument, you can do this in two ways:

  --admin-key '{"name":"projects/$SERVICE_PROJECT_ID/regions/$REGION_ID/serviceAccounts/svc-admin/serviceAccountKeys/key", "algorithm":"RSA_2048"}'

OR

  --admin-key '{"name":"projects/$SERVICE_PROJECT_ID/regions/$REGION_ID/serviceAccounts/svc-admin/serviceAccountKeys/key", "publicKeyData":"$DATA"}'

In the case of the first example, the response will contain the private key data that you will need. In the second case, you create a private/public pair yourself and supply the public data (the $DATA param). Use this version if you prefer that the private key is never known to SPEKTRA Edge services.
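If you go with the second variant, you can generate the pair locally, for example with openssl (assuming an RSA 2048 key and PEM-encoded public key data - verify the exact format expected before use):

openssl genrsa -out svc-admin-private.pem 2048
openssl rsa -in svc-admin-private.pem -pubout -out svc-admin-public.pem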

You should pay attention to the response returned from cuttle iam reserve-service-name project. More specifically, field nttAdminCredentials:

{
  "nttAdminCredentials": {
    "type": "<TYPE>",
    "client_email": "<CLIENT_EMAIL>",
    "private_key_id": "<KEY_ID>",
    "private_key": "<PRIVATE_KEY>"
  }
}

You should take this value and save it in your own ntt-credentials.json file (you can name the file however you like):

{
  "type": "<TYPE>",
  "client_email": "<CLIENT_EMAIL>",
  "private_key_id": "<KEY_ID>",
  "private_key": "<PRIVATE_KEY>"
}

Note that, if you created the --admin-key with the public key (not the algorithm), then <PRIVATE_KEY> will not be present in the response. Instead, when saving the ntt-credentials.json file, you should populate this value yourself with your private key.

The credentials must be kept safe and not lost. If they are lost, you can use the DeleteServiceAccountKey method and create a new key. Note that the admin ServiceAccount for services is just a regular ServiceAccount in the iam.edgelq.com service - and you have full CRUD over its ServiceAccountKey instances.

When reserving a service for the first time, you may also decide what Role will be assigned to the ServiceAccount for your service. By default, the ServiceAccount will be an admin in the Service namespace, but it will have a limited role assigned in the projects/$SERVICE_PROJECT_ID scope - by default, services/iam.edgelq.com/roles/default-admin-project-role. More advanced users can pass a custom role, for example, full ownership:

cuttle iam reserve-service-name project --admin-account-project-role 'services/iam.edgelq.com/roles/scope-admin' <OTHER ARGS>

You may manage service projects & services on the SPEKTRA Edge dashboard as well. To do the same via Cuttle, see:

cuttle iam list-my-service-projects projects --help  # To see service projects
cuttle iam list-service-reservations project --help  # To see existing service reservations under specific service project.
cuttle iam delete-service-reservation project --help # To delete service reservation
cuttle iam list-project-services project --help      # To see already created services under specific service project

# This command is more advanced and should be used when expanding a service
# project to new regions. It will be covered in more detail in later docs;
# for now it's just FYI.
cuttle iam add-regional-admin-account-for-services service-account --help

With service reserved, you should continue with the normal development.

2.2 - Declaring your Service

How to declare your SPEKTRA Edge service.

2.2.1 - Service Specification

How to declare your services.

Before continuing with this chapter, it is worth first mentioning the naming conventions we use: https://cloud.google.com/apis/design/naming_convention

Example api-skeleton for a 3rd party app: https://github.com/cloudwan/inventory-manager-example/blob/master/proto/api-skeleton-v1.yaml. You can also see API skeletons in the edgelq repository.

This document describes api-skeletons in greater detail than the quick start.

When you start writing a service (you have a new, empty directory for it), you first need to do two things:

  • Create a subdirectory called “proto” (convention used in all created goten services).

  • In the proto directory, create the file api-skeleton-$SERVICE_VERSION.yaml.

    In place of $SERVICE_VERSION, you should put a version of your service, for example, v1alpha for a start.

The API skeleton file is used by Goten to bootstrap the initial proto files for your service - some of them will be initialized only once, while others will always be overwritten by subsequent regeneration.

JSON schema

The API-skeleton schema is based on the protobuf file itself: https://github.com/cloudwan/goten/blob/main/annotations/bootstrap.proto. You may check this file to see all possible options.

There is a useful trick you can do with your IDE so that it understands the schema and can aid you with prototyping: https://github.com/cloudwan/goten/blob/main/schemas/api-skeleton.schema.json

In your IDE, find JSON Schema mappings and give the path to this file (for example, to your cloned copy of goten). Match it with the file pattern api-skeleton. This way, the IDE can help with writing it.

Generating protobuf files

Once the API-skeleton file is ready, you can generate protobuf files with:

goten-bootstrap -i "${SERVICEPATH}/proto/api-skeleton-$VERSION.yaml" \
  -o "${SERVICEPATH}/proto"
clang-format-12 -i "${SERVICEPATH}"/proto/$VERSION/**.proto

This utility is provided by the Goten repository - you should set up development env first to have this tool.

The SERVICEPATH variable must point to the directory of your service, and VERSION must match the service version. It is highly recommended to run the clang formatter on generated files - there is no particular version requirement; 12 merely reflects the current state of the scripts.

Note that you should re-generate every time something changes in api-skeleton.

Header part

The header shows basic information about your service:

name: $SERVICE_NAME
proto:
  package:
    name: $PROTO_PACKAGE_PREFIX
    currentVersion: $SERVICE_VERSION
    goPackage: $GITHUB_LINK_TO_YOUR_SERVICE
    protoImportPathPrefix: $DIRECTORY_NAME_WITH_SERVICE_CODE/proto
  service:
    name: $SERVICE_SHORT_NAME
    defaultHost: $SERVICE_NAME
    oauthScopes: https://apis.edgelq.com

  • $SERVICE_NAME

    It must be exactly equal to the service you reserved (see introduction to developer guide).

  • $PROTO_PACKAGE_PREFIX

    It will be used as a prefix for a proto package containing the whole of your service.

  • $SERVICE_VERSION

    It shows a version of your service. It will also be used as a suffix for a proto package of your service.

  • $GITHUB_LINK_TO_YOUR_SERVICE

    It must be an actual GitHub link.

  • $DIRECTORY_NAME_WITH_SERVICE_CODE

    It must be equal to the directory name of your code.

  • $SERVICE_SHORT_NAME

    It should be some short service name, not in “domain format”.

The header simply declares what service and what version is being offered. It is advisable to configure your IDE to include $DIRECTORY_NAME_WITH_SERVICE_CODE/proto in your proto paths - it will make traversing the code through the IDE much simpler.
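For illustration, a filled-in header might look like this (all values are hypothetical):

name: myservice.example.com
proto:
  package:
    name: examplecorp.myservice
    currentVersion: v1alpha
    goPackage: github.com/example-org/myservice
    protoImportPathPrefix: myservice/proto
  service:
    name: MyService
    defaultHost: myservice.example.com
    oauthScopes: https://apis.edgelq.com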

Imported services

Very often you will need to declare services your service imports, this is done usually below the header in the API-skeleton:

imports:
- $IMPORTED_SERVICE_NAME

You must list a service in imports if at least one of the below applies:

  • One of the resources you declared in your service has a parent resource in the imported service.
  • One of the resources you declared in your service has a reference to a resource in the imported service

You do NOT NEED to declare a service you are just “using” via its API. For example, if your client runtimes use proxies.edgelq.com for tunneling but you don’t use proxies.edgelq.com on the schema level, then you don’t need to import it.

Goten operates on NoSQL, non-relational databases (MongoDB, Firestore), so it provides its own relationship mechanism. The benefit of Goten is that it can provide relationships not only between resources within your service but also across services. However, it needs to know in advance which services are going to be used. Runtime libraries/modules provided by Goten/SPEKTRA Edge ensure that databases across services are synchronized (for example, that we don’t have a dangling reference that was supposed to block deletion).

When you import a service, you must modify your goten-bootstrap call. For example, if you imported meta.goten.com service in version v1, then you need to run commands like:

goten-bootstrap -i "${SERVICEPATH}/proto/api-skeleton-$VERSION.yaml" \
  -o "${SERVICEPATH}/proto" \
  --import "${GOTENPATH}/meta-service/proto/api-skeleton-v1.yaml"
clang-format-12 -i "${SERVICEPATH}"/proto/$VERSION/**.proto

Note that GOTENPATH must point to the Goten code directory - this path must reflect the current state of the code.

If you imported let’s say iam.edgelq.com, which imports meta.goten.com, you will need to provide import paths to all relevant API-skeletons:

goten-bootstrap -i "${SERVICEPATH}/proto/api-skeleton-$VERSION.yaml" \
  -o "${SERVICEPATH}/proto" \
  --import "${GOTENPATH}/meta-service/proto/api-skeleton-v1.yaml" \
  --import "${EDGELQROOT}/iam/proto/api-skeleton-v1.yaml"
clang-format-12 -i "${SERVICEPATH}"/proto/$VERSION/**.proto

As of now, goten-bootstrap needs file paths to all directly and indirectly imported services.

Resources

Goten is resource-oriented, so you should organize your service around resources:

resources:
- name: $RESOURCE_SINGULAR_NAME # It should be in UpperCamelCase format
  plural: $RESOURCE_PLURAL_NAME # If not provided, it is $RESOURCE_SINGULAR_NAME with 's' added at the end.
  parents:
  - $RESOURCE_PARENT_SERVICE/$RESOURCE_PARENT_NAME # $RESOURCE_PARENT_SERVICE/ can be skipped if $RESOURCE_PARENT_NAME is declared in same service
  scopeAttributes:
  - $SCOPE_ATTRIBUTE_NAME
  idPattern: $ID_PATTERN_REGEX

Certain more advanced elements were omitted from above.

Standard Resource represents an object with the following characteristics:

  • It has a name field that makes a unique identifier.
  • It has a metadata field that contains meta information (sharding, lifecycle, etc.)
  • It has an associated API group with the same name as Resource. That API contains CRUD actions for this resource and custom ones, added to the resource in the API-skeleton file.
  • It has a collection in the database. That collection can be accessed via the CRUD actions added implicitly by Goten, OR more directly from server code via a database handle.

Resources can be in parent-child relationships - including multiple-parent support, as you may see in the case of the “Message” resource. Note also that the resource “Comment”, despite having only one possible parent, “Message”, can still have multiple ancestry paths.

Resource naming

Each resource has a unique identifier, the name - it is stored in the “name” field. Resource naming is a very important topic in Goten and therefore deserves a good explanation. The format of any name is the following:

$PARENT_NAME_BLOCK$SCOPE_ATTRIBUTES_NAME_BLOCK$SELF_IF_BLOCK

There are 3 blocks: $PARENT_NAME_BLOCK, then $SCOPE_ATTRIBUTES_NAME_BLOCK, and finally $SELF_IF_BLOCK.

Let us start with $SELF_IF_BLOCK, which has the following format: $resourcePluralNameCamelCase/$resourceId. It is always present and cannot be skipped. The first part, $resourcePluralNameCamelCase, is derived from the $RESOURCE_PLURAL_NAME variable, with the first letter lower-cased. The variable $resourceId is assigned during creation and can never be updated. It must comply with the regex supplied with the variable $ID_PATTERN_REGEX in the api-skeleton file. The idPattern param can be skipped from the resource definition - in that case, the default value will be applied: [a-z][a-z0-9\\-]{0,28}[a-z0-9].

The middle block, $SCOPE_ATTRIBUTES_NAME_BLOCK, will be EMPTY if no scopeAttributes were defined for a resource in the api-skeleton file. By syntax, scopeAttributes is an array of 0 to N elements, like:

scopeAttributes:
- AttributeOne
- AttributeTwo

Block $SCOPE_ATTRIBUTES_NAME_BLOCK will be a concatenation of all scope attributes in the declared order; for this example, it will be: attributeOnes/$attributeOneId/attributeTwos/$attributeTwoId/. The last ‘/’ ensures it can be concatenated with $SELF_IF_BLOCK. Scope attributes also have singular/plural names and ID pattern regexes.

As of now, Goten provides only one built-in scope attribute that can be attached to a resource: Region. It means, that resource like:

name: SomeName
scopeAttributes:
- Region

will have the following name pattern: regions/$regionId/someNames/$someNameId. This built-in attribute Region is very special and has a significant impact on resources, but in essence: it shows that a resource has a specific, unmodifiable region it belongs to. All write requests for it will have to be executed by the region the resource belongs to. Region attributes should be considered when modeling for multi-region deployments. More details later in this doc.

Finally, we have the block $PARENT_NAME_BLOCK - it is empty if the parents param was not present for the given resource in the API skeleton. Unlike scope attributes, where all are active, a single resource instance can only have one active parent at a time. When we specify multiple parents in the API skeleton, we just say that there are many alternate values of $PARENT_NAME_BLOCK. This block holds the name of the parent resource: each value of $PARENT_NAME_BLOCK is structured as the parent’s own $PARENT_NAME_BLOCK$SCOPE_ATTRIBUTES_NAME_BLOCK$SELF_IF_BLOCK, followed by /. The last / ensures that it can be concatenated with $SCOPE_ATTRIBUTES_NAME_BLOCK, or with $SELF_IF_BLOCK if the former is blank.

Top parent resource must have no parents at all.

It is possible to have an optional parent resource as well if we specify the empty string "" as a parent:

parents:
- SomeOptionalParent
- ""

In the above case, $PARENT_NAME_BLOCK will either be empty or end with someOptionalParents/$someOptionalParentId/.

Note that a resource parent is a special kind of reference to a different resource type. However, unlike regular references:

  • The name of the resource contains actual references to ALL ancestral resources.

  • Regular references are somewhere in the resource body, not in the identifier.

    Therefore, for example, a GET request automatically shows us not only the resource we want to get but also its whole ancestry path.

  • If the parent is deleted, all child resources must be automatically deleted, asynchronously or in-transaction.

    Unlike a regular reference, a parent cannot be “unset”.

  • Scope attributes from parents are automatically inherited by all child resources. It is not the case for regular references.

Throughout Goten, you may encounter some additional things about resource names:

  • Wildcards

    for example, the name someResources/- indicates ANY resource of the SomeResource kind.

  • Parent names

    parent name is like name, but without $SELF_IF_BLOCK part. It indicates just the parent collection of resources.

Let’s consider a well-known resource in the iam.edgelq.com service: RoleBinding. It has the following API-skeleton definition:

name: RoleBinding
parents:
- meta.goten.com/Service # Service is declared in different service
- Project
- Organization
- ""

From above, there are 4 valid name patterns RoleBinding can have:

  • services/{service}/roleBindings/{roleBinding}
  • projects/{project}/roleBindings/{roleBinding}
  • organizations/{organization}/roleBindings/{roleBinding}
  • roleBindings/{roleBinding}

Then, PARENT NAME patterns that are valid are (same order):

  • services/{service}
  • projects/{project}
  • organizations/{organization}
  • "" - just empty string

With wildcards, we can define (just examples):

  • services/{service}/roleBindings/- - This indicates ANY RoleBinding from specific service
  • services/-/roleBindings/- - This indicates ANY RoleBinding from ANY service
  • services/-/roleBindings/{roleBinding} - This would pick all RoleBindings across all services having the same final ID.

Wildcards can be specified in parent names too.

Resource opt-outs

Developers can opt-out from specific standard features offered by Goten. In most cases, we need to do this for specific CRUD actions that are attached to all resources. For example:

resources:
- name: SomeResourceName
  optOuts:
    basicActions:
    - CreateSomeResourceName

Other cases are more tricky. Standard Goten CRUD access, resourceChange, and metadata are used intensively by the Goten/SPEKTRA Edge framework, even if you don’t use them yourself. For 3rd party developers, we recommend not disabling anything beyond basic actions. The other opt-outs exist for resources that are NOT using the standard database (but a custom one, where developers provide their own driver). Such resources escape the normal schema system (references to them will not work as usual).

Resource opt-ins

Some features are optional - as of now, we have just one opt-in: a Search addition to the standard CRUD for resource objects. We can enable it in the API-skeleton:

resources:
- name: SomeResourceName
  optIns:
    searchable: true

With the above, Goten will add the SearchSomeResourceNames action to the standard CRUD. The search method is very similar to List, but users can also specify a search phrase on top of the filter, field mask, and standard paging fields.

Note that enabling in api-skeleton is not sufficient. The developer will also have to:

  • Specify (in protobuf files) a list of fields for search-text indexing
  • Configure search store backend during deployment

As of now, we support Algolia search, but we plan to extend this to MongoDB too. In that case, we may be able to use the same database for records and searches.
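As a purely hypothetical sketch, a Search request mirrors the corresponding List request plus a search phrase; the exact generated field names may differ, so treat these as assumptions:

syntax = "proto3";

message SearchSomeResourceNamesRequest {
  // sub-collection to search, as in the List request (assumed field)
  string parent = 1;

  // search phrase applied on top of the filter (assumed field)
  string phrase = 2;

  // filter, field mask, and standard paging fields follow, as in List
}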

Tenant and authorization separation when prototyping resources

There are certain requirements that service developers must follow when developing on the SPEKTRA Edge platform.

The consideration here is tenant/authorization separation. SPEKTRA Edge services are designed for multi-tenancy at heart. Tenants are organized as two resources in the iam.edgelq.com service: Organization and Project. An Organization is meant to be a container for child Organizations and Projects. A Project is a final tenant. These resource types are top-level: they don’t have any parents or scope attributes. Their name patterns are, therefore:

  • iam.edgelq.com/Project: projects/{project}
  • iam.edgelq.com/Organization: organizations/{organization}

Most of the resources in all services should belong either to a Project as the final tenant consumer or, less typically, to an Organization. Project is preferable due to the stronger integration:

  • Monitoring time series can go to project only
  • Limits are capable of limiting resource instances only within projects
  • Usage metrics are counted per project.

For the above reasons, it is recommended to let Organization be a container for Projects on the core SPEKTRA Edge platform, and to use the Project resource as a parent for further resources. Therefore, you should apply the following practice in your service API skeleton:

resources:
# You should declare resource Project in your service!
# It must have specific multiRegion setup.
- name: Project
  multiRegion:
    isPolicyHolder: true

# Under project, you can define resource types that will belong to
# each tenant. Those are to be defined by you.
- name: $CUSTOM_RESOURCE_NAME_1
  parents:
  - Project

- name: $CUSTOM_RESOURCE_NAME_2
  parents:
  - Project

# This resource is still descending from Project, so its fine.
- name: $CUSTOM_RESOURCE_NAME_3
  parents:
  - $CUSTOM_RESOURCE_NAME_1
  - $CUSTOM_RESOURCE_NAME_2

The above setup already integrates with iam.edgelq.com Authorization: the IAM service recognizes 4 authorization scopes:

  • System level (root): /
  • Organization level: organizations/
  • Project level: projects/
  • Service level: services/

This list of scopes matches the possible parents of RoleBinding resources. By declaring resources under Project, we are utilizing a well-known scope, and project admins can manage RoleBindings on their own - and forbid other projects from seeing/modifying their data.

Even if a service in development is meant to be used by a single user (like a private service), it is still recommended to use the Project resource - we will just have one instance in existence.

When you deploy such a service, you will need to configure synchronization between 2 collections: iam.edgelq.com/Project AND your.service.com/Project. You should copy into your collection those projects that are interested in your service.

The reason is that projects follow a multi-service design: their administrators should freely choose which services are used by their projects and which are not. If we have a copy of the Project resource from the iam.edgelq.com service, we can:

  • Ensure that all projects in your service are those enabling the given service.
  • If the project leaves your service, all child resources will be garbage-collected.
  • Project defined by your service can have additional fields not present in iam.edgelq.com/Project.

Synchronization between collections across services will be explained in the fixture controller document.

It is of course recognized that not all resource types are suitable to be put under a Project tenant - some resource types are meant to be commonly shared across many tenants, probably in read-only mode, with write access reserved for service administrators. If you have resources like this, the best option may be to declare them under the meta.goten.com/Service resource:

imports:
- meta.goten.com # Needed in order to use Service as a parent

resources:
- name: $SOME_SERVICE_RESOURCE
  parents:
  - meta.goten.com/Service

Specifically, we should NOT have something like:

resources:
- name: $SOME_GLOBAL_SERVICE_RESOURCE

The reason is that, by declaring a global service resource at the root level, the IAM permissions required for any CRUD will be at the root level. As a system owner, you will have access to this resource type, BUT the users of your service will not, and you won’t be able to grant them any permissions - because RoleBindings at the root / level cannot be created by anyone but SPEKTRA Edge platform administrators. However, 3rd party service admins will be able to create RoleBindings under the Service resource: services/$your_service_name/roleBindings/$rbId. From this position, service admins can grant users permissions to access those service-level resources.

You may optionally declare a Service resource, just like for Project:

resources:
- name: Service
  multiRegion:
    isPolicyHolder: true

- name: $SOME_SERVICE_RESOURCE
  parents:
  - Service

However, this will cause the generation of a Service resource in your service, and you will need to copy your service record from meta.goten.com to your service. But this has some benefits:

  • It is in some sense clearer.

    While meta.goten.com contains all services in the system, your service will contain only the services that use your service. In this case, it will be a one-element collection.

  • Other services than yours can become tenants of your service as well!

    You will need to copy the subset of services from meta.goten.com that use your service. You can then add extra fields not available in the meta.goten.com service.

If you plan to expose your service to other services, you should declare your own Service resource and set up a controller to synchronize meta.goten.com/Service with your.service.com/Service. You should synchronize a subset of services only.

You may wonder why we do not handle Project resources in the same way:

imports:
- iam.edgelq.com

resources:
- name: $SOME_RESOURCE
  parents:
  - iam.edgelq.com/Project

In the above pattern, the benefit is that you don’t have a Projects collection in your service - IAM already has one. However, it is not suitable if you plan to have a multi-tenant service where the tenant is a Project. As was mentioned, a Project is meant to be able to enable/disable the services it uses at a whim. By using a synchronized, owned collection of Projects, we can ensure that the child resources of projects in your service are properly cleaned up.

However, if you are certain that your service is meant to be used by private project(s) that are always going to use your service and cannot disable it, then it is in fact a valid choice to use iam.edgelq.com/Project directly.

API

Before explaining API, let me explain the relationships between services, resources, APIs, and methods:

  • A Service package contains a set of resources and APIs; each resource and API belongs to a single Service package.
  • An API contains a set of actions, and each action belongs to a single API only. Each action can also optionally be associated with a single resource (the primary resource for the action).
  • APIs can be either “developer-defined” or “provided with resources”. The primary difference between them is that developer-defined API is explicitly described in the API skeleton, while the other kind is implicit and provided by Goten itself, implicitly, per each resource. For example, for the resource “Permission”, there will be a corresponding “PermissionService”.

Developer-defined APIs are typically declared below resources:

apis:
- name: $API_NAME # Should be UpperCamelCase format.

We did not put an equals sign between “Service package” and “API” in order to achieve smaller code packages and better granularity. Each resource and API has its own code package. Custom actions are grouped by the service developer, who should decide what makes more sense or is more convenient. APIs can be considered a “namespace” for actions.

Action

An action represents a single gRPC method. It can be attached to an API or a resource:

resources:
- name: $SOME_RESOURCE_NAME
  actions:
  - name: $ACTION_NAME
  # ... CONTINUED HERE ...

apis:
- name: $SOME_API_NAME
  actions:
  - name: $ACTION_NAME
  # ... CONTINUED HERE ...

Below a resource or an API, some common properties of an Action are:

actions:
- name: $ACTION_NAME
  verb: $ACTION_VERB # You can skip this, and $ACTION_VERB will be equal to $ACTION_NAME, lowerCamelCased.
  opResourceInfo:
    name: $RESOURCE_ACTION_OPERATES_ON # Skip-able, if action is defined within resource already
    isCollection: $TRUE_IF_ACTION_OPERATES_ON_COLLECTION
    isPlural: $TRUE_IF_ACTION_OPERATES_ON_MULTIPLE_RESOURCES
    skipResourceInRequest: $TRUE_IF_REQUEST_DOES_NOT_CONTAIN_RESOURCE_NAME_OR_PARENT_NAME
    requestPaths: $PATHS_TO_RESOURCE_IN_REQUEST   # You can skip if defaults are used
    responsePaths: $PATHS_TO_RESOURCE_IN_RESPONSE # You can skip if not needed
  requestName: $REQUEST_NAME   # You can skip for default, which is ${ACTION_NAME}Request
  responseName: $RESPONSE_NAME # You can skip for default, which is ${ACTION_NAME}Response
  skipRequestMsgGen: $TRUE_IF_YOU_WANT_TO_SKIP_REQUEST_GEN_IN_PROTO_FILE
  skipResponseMsgGen: $TRUE_IF_YOU_WANT_TO_SKIP_RESPONSE_GEN_IN_PROTO_FILE
  streamingRequest: $TRUE_IF_CLIENT_IS_STREAMING
  streamingResponse: $TRUE_IF_SERVER_IS_STREAMING
  withStoreHandle:
    transaction: $LEVEL_FOR_TX
    readOnly: $TRUE_IF_NO_WRITES_EXPECTED

Boolean fields can be skipped if you plan to have “false”, unless you like explicit declarations.

Action - transaction

The fields you must set are name and withStoreHandle. The transaction part decides what happens in the transaction middleware when the action is being processed by the backend. There are 3 transaction types to choose from (the third, MANUAL, is described after the list):

  • NONE

    we declare that no transaction is needed and all database requests should be handled without a transaction. Suitable for read-only requests.

  • SNAPSHOT

    we declare that we will be making writes/deletions to the database after the reads. When the transaction is concluded, the database must guarantee that all reads WOULD be repeatable (meaning, no one modified any resource/collection we read!). Note that this also includes the “collection” part.

For Goten transactions and API skeleton, the word “SNAPSHOT” is a bit misleading, because what Goten offers here is SERIALIZABLE, since we also protect against write skews (which ARE NOT protected by SNAPSHOT). See https://en.wikipedia.org/wiki/Snapshot_isolation for more details, or https://www.cockroachlabs.com/blog/what-write-skew-looks-like/.

The transaction level can also be set to MANUAL - in this case, Goten will not generate code starting a read-only session or snapshot transaction (the generated transaction middleware in server code will not be present for the given action). This is useful if we deal with some special action for which we want, for example, many separate transactions executed, and we want to give the developer full control over when and how a transaction is started.

Action - request and response objects

The group of Action API-skeleton options that should be considered together are requestName, responseName, skipRequestMsgGen, and skipResponseMsgGen. These are all optional, but it’s important to make informed decisions about them. While the default request/response names are fine in many cases, occasionally you may want to use some specific, existing object for a request or response. Consider an action like CreateRoleBinding in the iam.edgelq.com service. The request name is CreateRoleBindingRequest, but the response is just RoleBinding. For the action DeleteRoleBinding, the request is DeleteRoleBindingRequest, but the response name is google.protobuf.Empty (since we don’t need anything from the response). In those cases, the object is already defined and we don’t need to generate it. We would need to write something like:

- name: CreateRoleBinding
  responseName: RoleBinding
  skipResponseMsgGen: true
- name: DeleteRoleBinding
  responseName: google.protobuf.Empty
  skipResponseMsgGen: true

Action - unary, client streaming, server streaming, or bidi-streaming

The next important decision about an Action in the API skeleton is what kind of action we are defining (a YAML sketch follows after this list):

  • Is it unary type, meaning single request and single response?

    If so, params streamingRequest and streamingResponse must be equal to false. In this case, you don’t need to write it, since false is a default.

  • Sometimes what is needed is server streaming gRPC calls.

    Example case: WatchRoleBindings in iam.edgelq.com. The client first sends a single request, then the server keeps responding with responses (many). In this case, streamingRequest must remain false, but you must set streamingResponse to true.

  • It is very rare (as of this writing, theoretical), but an action can be exclusively client streaming.

    The client opens the stream and keeps sending requests. Example use case: continuous logging submission. In that case, the param streamingRequest must be set to true.

  • Occasionally we need full bidirectional streaming

    In this case, both streamingResponse and streamingRequest must be set to true.
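For example, a server-streaming custom action (a hypothetical log-tailing method) could be declared like this:

actions:
- name: TailDeviceLogs    # hypothetical custom action
  streamingRequest: false # single client request
  streamingResponse: true # server keeps sending messages
  withStoreHandle:
    transaction: NONE
    readOnly: true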

Action - operated resource

Almost every action interacts with some resource. For example, the action CreateRoleBinding in iam.edgelq.com operates on the RoleBinding resource. We need to define how an action behaves on a resource using the opResourceInfo annotation. The most basic property there is name - we can skip it for an Action defined within a resource, but we need to specify it if the action is defined under a custom API.

There are many modes in which an action can operate on a resource. For example, CreateRoleBinding operates on a single RoleBinding resource, but ListRoleBindings operates on a collection (or sub-collection). In Goten, we define 4 modes:

  • Action operating on the single resource in isolation from the collection.

    Examples: Any Update or Delete operation. When you send for example UpdateRoleBinding, only a single instance of RoleBinding is affected, and the rest of the collection is isolated.

  • Action operating on a single resource, but affecting collection (or sub-collection).

    An example of such an action is CreateRoleBinding. It creates only a single instance, but it DOES affect collection. Create operations mean you are inserting something into the collection, and Goten needs to check if the name is unique. The act of creating means that you are reserving some name within the collection namespace, affecting anyone else there.

  • Actions operating on multiple resources in isolation from the collection.

    A good example here is BatchGetRoleBindings. You specify many specific instances, but still specific, with isolation to non-specified items.

  • Actions operating on multiple resources affecting collection.

    A classic example is ListRoleBindings. You get many instances from a collection (or sub-collection).

You need to pick the active mode by using the fields isCollection and isPlural within opResourceInfo.

A very important part of the action is the requestPaths param. It contains information on how to retrieve information about the resource(s) from the request object (in the case of streaming, the client message). A lot of code-generated parts/framework modules rely on this information. For example, the Authorizer will extract the resource name(s)/collection to determine if the caller has the right to execute an action in a given context. As another example, the auditing component needs to know what resource(s) are affected by the action, so it can correctly define the associated activity logs.

Object requestPaths has defaults depending on params isCollection and isPlural.

If isCollection is true, then by default Goten assumes that the request object contains a “parent” field pointing to the affected sub-collection.

For example, ListRoleBindings is like:

syntax = "proto3";

message ListRoleBindings {
  // This annotation enforces that value of this string conforms to parent
  // name patterns of RoleBinding, see resource naming chapter in this
  // document.
  string parent = 1 [(goten.annotations.type).parent_name.resource = "RoleBinding"];
  
  // other fields ...
}

Note that Create requests normally have a parent field!

For collection requests then, the default value of requestPaths is:

opResourceInfo:
  isCollection: true
  requestPaths:
    resourceParent:
    - parent

However, if the resource has no parents whatsoever, then the resourceParent slice is empty.

If isCollection is false, then Goten looks at isPlural to determine the default. If the action is of the plural type, then the following default applies:

opResourceInfo:
  isCollection: false
  isPlural: true
  requestPaths:
    resourceName:
    - names

Note that this matches any BatchGet request:

syntax = "proto3";

message BatchGetRoleBindings {
  // This annotation enforces that each value of this slice conforms to
  // name patterns of RoleBinding, see resource naming chapter in this
  // document.
  repeated string names = 1 [(goten.annotations.type).name.resource = "RoleBinding"];
  
  // other fields ...
}

For the non-collection and non-plural actions, the default is:

opResourceInfo:
  isCollection: false
  isPlural: false
  requestPaths:
    resourceName:
    - name

And the request object is like:

syntax = "proto3";

message GetRoleBinding {
  // This annotation enforces that value of this string conforms to name
  // patterns of RoleBinding, see resource naming chapter in this document.
  string name = 1 [(goten.annotations.type).name.resource = "RoleBinding"];
  
  // other fields ...
}

Note that plural and singular actions both use the resourceName annotation - Goten can figure out whether it deals with a repeated or a single string.

Because requestPaths are so important for an Action (Authorization, Auditing, Usage tracking…), specifying them is de facto mandatory. Even if the default is used, during code generation Goten will fail and complain whenever requestPaths in the API skeleton don't match those in the actual request objects. For initial prototyping though, it is fine to fail first, then define all fields in the request object, and finally correct the api-skeleton and re-generate everything again.

Note that in requestPaths you can specify multiple possible field paths - the first populated one will be picked. This handles cases where you use the oneof protobuf keyword. You may also specify resource body paths if entire resource objects are present in the request.

If an action is associated with a resource, but the request has no explicit field paths pointing at it, it is necessary to indicate that with the option skipResourceInRequest:

opResourceInfo:
  skipResourceInRequest: true # If so, then requestPaths will be considered empty

This however renders certain aspects like Authorization more tricky. By default, only system admins may execute such an action.

Param responsePaths is optional - it may be used by Audit/Usage metrics if it contains some field paths. However, it should not be entirely overlooked. For example, if special authorization is required to read some specific and sensitive fields in resources returned by the response, indicating the field paths containing such resources will help the authorization middleware clear those values from the response (before returning it to the user)!
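As a sketch, assuming a hypothetical SomeAction whose response returns resources in a some_resources field (the resourceBody key mirrors the response_paths/resource_body annotation visible in generated proto files):

actions:
- name: SomeAction
  opResourceInfo:
    name: SomeResource
    responsePaths:
      resourceBody:
      - some_resources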

Implicit APIs and actions

For each resource, Goten declares an implicit API with the same name as the resource. Those implicit APIs will have CRUD actions:

  • Create<ResourceName>
  • Update<ResourceName>
  • Delete<ResourceName>
  • Get<ResourceName>
  • BatchGet<ResourcePluralName>
  • List<ResourcePluralName>
  • Watch<ResourceName>
  • Watch<ResourcePluralName>

Names of those actions should be generally self-explanatory; the only exception may be Watch. Note there are two versions of it - for a single resource and for a collection. The singular version is in a way similar to Get<ResourceName>; the plural is similar to List<ResourcePluralName>. The significant difference is that while Get/List are unary, their Watch equivalents are server-streaming. After the first client request and the first server response, the client simply maintains the connection and receives updates of the resource/collection (as diff messages) in real time.
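As a sketch, the difference shows up in the generated method signatures - Watch methods are server-streaming (the request/response message names here are assumptions following the usual <Action>Request/Response convention):

rpc WatchRoleBinding(WatchRoleBindingRequest) returns (stream WatchRoleBindingResponse);
rpc WatchRoleBindings(WatchRoleBindingsRequest) returns (stream WatchRoleBindingsResponse);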

Multi-region Design

Goten comes with a framework for setting up multi-region environments. Even a single-region setup is considered just a special case of multi-regional (NumRegions is simply 1). Considering this, it is recommended to prepare an API-skeleton with multi-region in mind, but multi-region features can still be skipped. If desired, you can specify the following:

name: $SERVICE_NAME
proto:
  ## Stuff here...

disableMultiRegion: true

If you do this, then you don’t need to specify any multiRegion spec in any Action or Resource. You will still need to specify in which region your service will be running, but you can program without it. Your service will not have multi-region routing middleware either.

When it comes to the MultiRegion setup, the most important elements are concentrated around resources, because resources are the actual state of the service. Code runs in every region the service is deployed to, and actions that don't operate on any resources can be executed without issues in any region. But there are important decisions to make about resources.

One important rule is that:

Each resource within a Service MUST belong to one particular region only. In many cases this is simple: edge devices, sites, data centers, etc. normally have some location which we can easily pinpoint. Others, like various "policies", are more tricky, because we assume they should apply to all regions. In the case of those non-regional resources, it is necessary to specify the primary region responsible for them. Ultimately, we need to ensure database consistency across regions, and transactions cannot provide the guarantees they should if a single resource could be written to by two regions.

Goten ensures that:

  • All resources belong to a single region
  • Read-only copies are asynchronously copied to all relevant regions (described later)!

The above ensures that resource writes never conflict, while reads are executed in the nearest possible region.

In the api-skeleton, we need to decide which resources are regional. We do this by setting the proper scopeAttributes:

resources:
# This resource will be considered non-regional
- name: SomeResourceName1

# This resource will be considered regional
- name: SomeResourceName2
  scopeAttributes:
  - Region

# This resource will be considered regional, because parent is!
- name: SomeResourceName3
  parents:
  - SomeResourceName2

Note that the Region scope attribute, which is inherited automatically by all child resources, adds the regions/{region}/ block to all resource names! Therefore, whenever you see a resource name with such a block, it means this is a regional resource, and the name itself reveals which region the resource belongs to.
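For example, the regional resources declared above would get name patterns roughly like these (a sketch; the collection segments are assumptions derived from the resource names):

regions/{region}/someResourceName2s/{someResourceName2}
regions/{region}/someResourceName2s/{someResourceName2}/someResourceName3s/{someResourceName3}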

The next thing to learn about designing with multi-region in mind is an object called MultiRegionPolicy. This is a common protobuf object type defined in the Goten framework. A resource that has this object among its fields is called a multi-region policy-holder. Those types of resources need a special annotation in the api-skeleton file. You should have seen this already, in fact, when we described tenant separation in this document. This annotation is very common for projects:
resources:
- name: Project
  multiRegion:
    isPolicyHolder: true

In SPEKTRA Edge, we specify three well-known resource types that are multi-region policy-holders:

  • meta.goten.com/Service
  • iam.edgelq.com/Organization
  • iam.edgelq.com/Project

As you may have noticed, we recommend declaring a Project resource for your service already; if you don't, import iam.edgelq.com explicitly and make other resources children of iam.edgelq.com/Project. This way, once you have annotated which resources are regional, you may have completed prototyping your service for multi-region - at least in the API skeleton file, and provided you do not have any tricky actions with complex multi-region implications!

MultiRegionPolicy object specifies:

  • Primary region ID
  • All enabled region IDs
  • Cross-region database synchronization criteria (by default, all resources under policy-holder are synchronized across all enabled regions).

The definition of the MultiRegionPolicy object in Goten can be found here: https://github.com/cloudwan/goten/blob/main/types/multi_region_policy.proto.

The MultiRegionPolicy object defines multi-region settings for CHILD resources of the policy-holder. It does not affect the policy-holder itself! It should be easy to understand why: note that the Project resource is a top-level one; it does not have any parent resources. Its name pattern is simply projects/{project}.

If we have two create requests like below, we will have issues:

createProjectRequests:
- {"project": {"name": "projects/projectId", "multiRegionPolicy": {"defaultControlRegion": "us-west2", "enabledRegions": ["eastus2", "us-west2"]}}}
- {"project": {"name": "projects/projectId", "multiRegionPolicy": {"defaultControlRegion": "eastus2", "enabledRegions": ["eastus2", "us-west2"]}}}

Region us-west2 will accept the creation of projectId, and eastus2 will accept the same project in its region. Because transactions cannot guarantee uniqueness here, we are facing a conflict during asynchronous multi-region synchronization!

Therefore, policy-holder resources define the multi-region policy for their child resources only, never for themselves. The Project resource itself is considered global for a service - it is automatically synchronized across all regions enabled for the service, and its instances are "owned" by the primary region of the service. Note that a Service itself is a policy-holder. When you create your service, you need to pick its primary region and deploy it to all regions where you want it to run. This is the reason why the meta.goten.com/Service resource is a policy-holder too - its enabledRegions field is automatically updated whenever you create a new deployment, and defaultControlRegion is set to the primary region of your service.

Let's wrap this up with some examples. Let's define a service custom.edgelq.com, whose MultiRegionPolicy will be:

{"defaultControlRegion": "us-west2", "enabledRegions": ["eastus2", "japaneast", "us-west2"]}

The API skeleton part is:

name: custom.edgelq.com

imports:
- meta.goten.com

resources:
# This collection is synchronized with iam.edgelq.com/Project
- name: Project
  multiRegion:
    isPolicyHolder: true

- name: EdgeDevice
  parents:
  - Project
  scopeAttributes:
  - Region

- name: Interface
  parents:
  - EdgeDevice

- name: AccessPolicy
  plural: AccessPolicies
  parents:
  - Project

# This resource type is managed by service admins, so child of Service
- name: DeviceType
  parents:
  - meta.goten.com/Service

Let’s declare 2 Project resources with this:

- name: projects/p1
  multiRegionPolicy:
    defaultControlRegion: us-west2
    enabledRegions: [japaneast, us-west2]
- name: projects/p2
  multiRegionPolicy:
    defaultControlRegion: eastus2
    enabledRegions: [eastus2, japaneast]

Let's work out the multi-region syncing/ownership situation of those projects. First of all, the Project resource is a global resource; therefore its situation is defined by the MultiRegionPolicy of the custom.edgelq.com Service record!

Therefore:

  • Project projects/p1 will belong to us-west2, and its read-only copies will be distributed to regions “eastus2” and “japaneast”.
  • Project projects/p2 will belong to us-west2, and its read-only copies will be distributed to regions “eastus2” and “japaneast”.

Note that the multiRegionPolicy object of the Project is not a factor here; MultiRegionPolicy applies to descendant resources only, never to policy-holders. Every resource not descending from a policy-holder is subject to the MultiRegionPolicy defined for the Service itself.

Now, let’s define some EdgeDevice instances:

- name: projects/p1/regions/japaneast/edgeDevices/dId
- name: projects/p1/regions/us-west2/edgeDevices/dId
- name: projects/p2/regions/japaneast/edgeDevices/dId
- name: projects/p2/regions/eastus2/edgeDevices/dId

What will happen is that:

  • Resource projects/p1/regions/japaneast/edgeDevices/dId will belong to Region japaneast, and its read-only copy will go to us-west2.
  • Resource projects/p1/regions/us-west2/edgeDevices/dId will belong to Region us-west2, and its read-only copy will go to japaneast.
  • Resource projects/p2/regions/japaneast/edgeDevices/dId will belong to Region japaneast, and its read-only copy will go to eastus2.
  • Resource projects/p2/regions/eastus2/edgeDevices/dId will belong to Region eastus2, and its read-only copy will go to japaneast.

Service will disallow creation of projects/p1/regions/eastus2/edgeDevices/- or projects/p2/regions/us-west2/edgeDevices/-. Note that read-only copies are distributed to all regions indicated by MultiRegionPolicy of Project ancestor, but ownership is indicated by name.

If we define some Interfaces:

- name: projects/p1/regions/japaneast/edgeDevices/dId/interfaces/ix
- name: projects/p1/regions/us-west2/edgeDevices/dId/interfaces/ix
- name: projects/p2/regions/japaneast/edgeDevices/dId/interfaces/ix
- name: projects/p2/regions/eastus2/edgeDevices/dId/interfaces/ix

What will happen is that:

  • Resource projects/p1/regions/japaneast/edgeDevices/dId/interfaces/ix will belong to Region japaneast, and its read-only copy will go to us-west2.
  • Resource projects/p1/regions/us-west2/edgeDevices/dId/interfaces/ix will belong to Region us-west2 and its read-only copy will go to japaneast.
  • Resource projects/p2/regions/japaneast/edgeDevices/dId/interfaces/ix will belong to Region japaneast, and its read-only copy will go to eastus2.
  • Resource projects/p2/regions/eastus2/edgeDevices/dId/interfaces/ix will belong to Region eastus2 and its read-only copy will go to japaneast.

Note that interfaces basically inherit region ownership from EdgeDevice resource, and syncing regions are provided still by MultiRegionPolicy of Projects.

For AccessPolicy resources:

- name: projects/p1/accessPolicies/ap
- name: projects/p2/accessPolicies/ap

What will happen is that:

  • Resource projects/p1/accessPolicies/ap will belong to Region us-west2, and its read-only copy will go to japaneast.
  • Resource projects/p2/accessPolicies/ap will belong to Region eastus2, and its read-only copy will go to japaneast.

Note that ownership of AccessPolicies is decided by the defaultControlRegion field in the MultiRegionPolicy object of the relevant parent resource. Read-only copies are distributed to the remaining enabled regions for a project.

Finally, for a DeviceType resource like:

- name: services/custom.edgelq.com/deviceTypes/d1

All DeviceType instances will belong to the region us-west2, and their read-only copies will be distributed to japaneast and eastus2, because this is what the MultiRegionPolicy of services/custom.edgelq.com tells us.

Note that the regions projects can use are limited to those defined for the service. Resources under a project are limited to the regions the project specifies. This way, tenants within the service only keep resources within their chosen regions.

Whenever the server gets any request, routing middleware will try to use the request to deduce the actual regions that should execute it. Routing middleware is code-generated based on API skeleton annotations. Goten has at its disposal a set of known templates that can handle 99% of the cases. Normally this autopilot handles most cases, but if there is some tricky part, it is recommended to continue reading this part of the documentation, especially the API skeleton MultiRegion annotations for Actions.

MultiRegion customizations for Resource

MultiRegionPolicy also contains an additional field, criteriaForDisabledSync; see the MultiRegionPolicy documentation for details. It means that read-only copies can be prevented from being shared with other regions, even though the default is to always sync to all enabled regions. If it is important, from the application point of view, to enforce syncing across enabled regions, it can be done via the API skeleton for a resource:

resources:
- name: SomeName
  multiRegion:
    syncType: ALWAYS

Value syncType can also be NEVER, in which case no read-only copies will be made. This is useful, for example, when we know particular resources do not need to be copied, so we can reduce some workload.

Another API skeleton customization we can make for resources is to disable code-generated multi-region routing for specific CRUD functions. We can do this with:

resources:
- name: SomeName
  multiRegion:
    skipCodeGenBasedRoutingBasicActions:
    - CreateSomeName

With an annotation like this, it is also possible to write the code for multi-region routing by hand.

MultiRegion customizations for Action (and defaults explained)

Handling actions, unlike resources, is much more difficult. For a resource, we can just take its name and deduce which region owns it and where we can expect read-only copies; syncing is straightforward. But request routing/stream proxying is a more difficult topic. Naturally, CRUD actions have defaults generated. For custom actions, code generation works on a best-effort basis.

There are generally 3 models of request/stream handling in middleware for region-routing:

  • If the server receives a request that can be executed locally (write to the owned resource, or read from locally available resources), then middleware just passes the stream/request to the next one.

  • If the server receives a request that has to be executed somewhere else, then it opens a connection to another region, sends a request, or opens a streaming proxy (for streaming requests). In this case, middleware does not pass anything to the next local middleware, it just passes data elsewhere. Ideally, we should avoid this, as an extra proxy just adds unnecessary latency.

  • There is also a possibility that a given request should be executed by more than one region. For example, imagine a ListDevices request across all projects (not specific ones). Since projects can be located in different regions, no single region is guaranteed to have devices from all projects. Routing middleware will then broadcast the request to all regions, but before doing so, it will modify the filter field to indicate that it is interested in devices from a particular region only. Once the routing middleware gets responses from ALL regions, it merges them into one. If orderBy was specified, it will need to sort the merged results and apply paging again. Then the response can be returned. Note that this particular model is more extreme and should be avoided with proper queries. However, this approach has some specific use cases and is supported by multi-region routing middleware.

Two things that need to be analyzed are:

  • What is the resource name on which the action operates?

    Or, in the case of a plural action, the resource names? If it is a collection action, what is the collection (or sub-collection) value? This value needs to be extracted from the request.

  • Once we get resource or collection name(s), should this be executed on the region owning it, or on the region just having its read-only copies?

Goten can deduce those values based on the following API skeleton properties of an Action: withStoreHandle AND opResourceInfo.requestPaths. The rule is simple: if the transaction indicates there will be database writes (SNAPSHOT or MANUAL), then Goten will assume the action must be executed in the region owning the resource. Similarly, the requestPaths annotation is used to determine the field paths leading to the resource parent name/name/names.

Note: As of now, Goten does not support write actions working on multiple resources, it has to be single.

If we want to ensure Goten will generate routing middleware code that will force action to be executed on the owning region, we can provide the following annotation:

actions:
- name: SomeName
  multiRegionRouting:
    executeOnOwningRegion: true

It is first recommended to provide paths via opResourceInfo.requestPaths annotation in action, as this is common for many things, including Authorization, etc. However, if we want to use separate resource name/parent field paths, specifically for multi-region routing, we can:

Use this annotation for single resource actions:

actions:
- name: SomeName
  multiRegionRouting:
    resourceFieldPaths:
    - some.path_to.resource_name
    - alternative.path

If you have collection type actions (isPlural is true), then use the scopeFieldPaths annotation instead of resourceFieldPaths.

If you just have an explicit field path in a request object indicating a specific region ID that should execute the request, like:

syntax = "proto3";

message SomeRequest {
  // Region ID where request must be routed.
  string executing_region_id = 1;
}

In this case, you should use the following annotation:

actions:
- name: SomeName
  multiRegionRouting:
    regionIdFieldPaths:
    - executing_region_id

If code-gen multi-region routing is not possible in your case, you may need to explicitly disable it:

actions:
- name: SomeName
  multiRegionRouting:
    skipCodeGenBasedRouting: true

If you disable code-gen-based routing, you can write your handler manually later on, in Go.

On top of that, there is a special caveat regarding streams and multi-region routing: the first client message received from a stream must be sufficient to determine the routing behavior. If the stream needs routing, the middleware will open a stream to the proxy region and forward the first message.

gRPC transcoding customizations

We have a gRPC transcoding feature that allows gRPC services to support REST API clients. You can read more in this document: https://cloud.google.com/endpoints/docs/grpc/transcoding

URL paths for each action are provided in protobuf files via the google.api.http annotation. We will come back to this topic in the document about prototyping a service in protobuf files. But again, the api-skeleton is used to create the first set of protobuf files, and many of those files must stay as code-generated - including those defining gRPC transcoding. Therefore, all developer customizations for gRPC transcoding can be done in the api-skeleton only. After we re-generate protobuf files from the api-skeleton, we just need to verify correctness by looking at these files. In this part, we will explain the defaults and potential customizations, but it is worth noting that customizations are rarely needed; normally Goten can derive proper defaults.

Let’s first look at the transcoding table for all actions, all types, CRUD, optional Search, and custom ones:

gRPC Method        | Attrs          | HTTP Path pattern                                 | Body
Get<Res>           |                | GET /$version/{name=$name}                        |
BatchGet<Res>      |                | GET /$version/$collection:batchGet                |
List<Collection>   | With parent    | GET /$version/{parent=$parent}/$collection        |
List<Collection>   | Without parent | GET /$version/$collection                         |
Watch<Res>         |                | POST /$version/{name=$name}:watch                 |
Watch<Collection>  | With parent    | POST /$version/{parent=$parent}/$collection:watch |
Watch<Collection>  | Without parent | POST /$version/$collection:watch                  |
Create<Res>        | With parent    | POST /$version/{parent=$parent}/$collection       | $resource
Create<Res>        | Without parent | POST /$version/$collection                        | $resource
Update<Res>        |                | PUT /$version/{$resource.name=$name}              | $resource
Delete<Res>        |                | DELETE /$version/{name=$name}                     |
Search<Res>        | With parent    | GET /$version/{parent=$parent}/$collection:search |
Search<Res>        | Without parent | GET /$version/$collection:search                  |
<CustomCollection> | With parent    | POST /$version/{parent=$parent}/$collection:$verb |
<CustomCollection> | Without parent | POST /$version/$collection:$verb                  |
<CustomSingular>   |                | POST /$version/{name=$name}:$verb                 |
<CustomOther>      |                | POST /$version:$verb                              |

Of course, the HTTP method and pattern combination must be unique. For this reason, as a standard, each mapping combines the HTTP method with a pattern containing the service version, the resource's name/parent (when relevant), and finally the verb.

Simple examples:

ListRoleBindings in iam.edgelq.com service (version v1), for project p1 will have a REST API path:

/v1/projects/p1/roleBindings - note that $parent is a valid RoleBinding parent name and therefore contains the "projects/" prefix too.

ListProjects, since Projects don't have a parent, would have this path: /v1/projects

SearchAlertingPolicies from monitoring (version v4) would have this path: /v4/projects/p1/regions/-/alertingPolicies:search. It assumes the parent is projects/p1/regions/-, therefore policies from the specific project but all regions.

Note that :$verb is often used to distinguish the proper action - it uses the verb param from the Action annotation.

If you don't have any specific issues/needs, you can stop reading the gRPC transcoding part here; otherwise, check the special customizations that can be made:

REST API paths - complete overrides.

If we need it, we can use the nuclear option and completely define the path for an action on our own. To achieve this, we use the httpPathOverrides option for an Action. Example:

actions:
- name: SomeCustomMethod
  grpcTranscoding:
    httpPathOverrides:
    - /very/custom/path
    - /other/custom/path

Goten bootstrap will produce the following annotation in proto files:

option (google.api.http) = {
 post : "/very/custom/path"
 additional_bindings : {
  post : "/other/custom/path"
 }
};

HTTP prefix

Suppose we have a mixin service “health” that:

  • Has its own versioning (v1, v2, v3…).
  • Can be attached to other services, but generally its API is "separated".
  • We want to be able to add it to any other service.

When a user sends a request GET some.edgelq.com/v1alpha/topics, we are calling the ListTopics method. Suppose that something is wrong with the connection, and we want to debug it. We can do that using the health endpoint. The user should then send the following request: GET some.edgelq.com/v1:healthCheck. We assume that the health service serves a method with the verb healthCheck. However, this is not a very nice way of doing this, because "v1" and "v1alpha" are "on the top" and may look like different versions of the same service. To separate the mixin from the proper service, we can put an additional prefix in the path. For example, this looks better: GET some.edgelq.com/health/v1:healthCheck. This is how we can do it in the API skeleton for the health service:

name: health.edgelq.com
proto:
 package:
   name: ntt.health
   currentVersion: v1
   goPackage: github.com/example/health
   protoImportPathPrefix: health/proto
 service:
   name: Health
   defaultHost: health.edgelq.com
   oauthScopes: https://apis.edgelq.com
   httpNamespacePrefix: health

apis:
- name: Health
  actions:
  - name: HealthCheck
    verb: healthCheck
    withStoreHandle:
      transaction: NONE

See the field proto.service.httpNamespacePrefix. It will decorate the HTTP patterns of ALL methods in this mixin service.

Custom reference path capture

Almost every method associated with some resource contains name=$name or parent=$parent in its HTTP pattern. This is called here a "captured reference path". Those two variants are the most common (which one exactly depends on the isCollection param), but they are not set in stone. As an example, we can look at the update request, which has a different path: $resource.name. Generally, the field path in the HTTP pattern, if any, MUST reflect a real field path in the request object. Note that this is determined by the requestPaths annotation for an action. Custom single-resource, non-collection actions default to "name", and collection ones to "parent". If you change the field path name, names, or parent to something else, the HTTP capture path will also change.

Example (we assume API Version is v1):

message SomeActionRequest {
 string custom_name = 1 [ (goten.annotations.type).name.resource = "SomeResource" ];
}

Api-skeleton file:

apis:
- name: SomeApi
  actions:
  - name: SomeAction
    opResourceInfo:
      name: SomeResource
      requestPaths:
        resourceName: [ "custom_name" ]

Annotation for REST API:

option (google.api.http) = {
 post : "/v1/{custom_name=someResources/*}:someAction"
};

If we have multiple alternatives, we can provide multiple items as resource name:

apis:
- name: SomeApi
  actions:
  - name: SomeAction
    opResourceInfo:
      name: SomeResource
      requestPaths:
        resourceName: [ "custom_name", "other_name" ]

In this case, goten-bootstrap will produce the following REST API path:

option (google.api.http) = {
 post : "/v1:someAction"
};

The reason is that we can't have "OR" in those patterns. To specify the exact reference, the client simply needs to populate the request body. The same story applies to any BatchGet request that has a "names" field (an array). The URL has no place for arrays like that, so the path pattern is simply $version/$collection:batchGet.

HTTP method

By default, every custom action uses the POST method. It can be changed simply with:

actions:
- name: SomeCustomMethod
  grpcTranscoding:
    httpMethod: PUT

How to remove :$verb from HTTP path:

actions:
- name: SomeCustomMethod
  grpcTranscoding:
    isBasic: true

However, this option should reasonably be used only for standard CRUD methods; it is provided here more for the completeness of this guide. The verb is the best way of ensuring path uniqueness for custom methods.

How to customize the HTTP body field:

By default, the body field is equal to the whole request. It is a bit different for create/update requests, though, where the body is mapped to the resource field only. If we want the user to be able to specify a selected field only, we can use the following API skeleton option:

actions:
- name: SomeAction
  grpcTranscoding:
    httpBodyField: some_request_field

This is based on the assumption that SomeActionRequest contains a field called some_request_field. Note that this will prevent users from setting other fields though.
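Under that assumption, the generated google.api.http annotation would look roughly like this (the path itself is illustrative):

option (google.api.http) = {
 post : "/v1/{name=someResources/*}:someAction"
 body : "some_request_field"
};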

2.2.2 - Auto-Generated Protobuf Files

How to understand the auto-generated service Protobuf files.

Protobuf files describe:

  • Resources - models, database indices, name patterns, views, etc.
  • Request/Response object definitions (bodies)
  • API groups, each with a list of methods
  • Service package metadata information

Note that you can learn just about everything about an API by looking at its protobuf files.

Example proto files for a 3rd party app: https://github.com/cloudwan/inventory-manager-example/tree/master/proto/v1. You can also see the files in the edgelq repository.

Resource protobuf files

For each resource in the API specification, Goten will create 2 protobuf files:

  • <resource_name>.proto

    This file will contain the proto definition of a single resource.

  • <resource_name>_change.proto

    This file will contain the proto definition of the DIFF object of a resource.

The protobuf file with the Change object is used for Watch requests (real-time subscriptions).

Be aware that:

  • goten-bootstrap will always overwrite <resource_name>_change.proto

    You should never write to it.

  • File <resource_name>.proto will be generated for the first time only.

    If you change anything in the API skeleton later that would affect the proto file, you will need to either update the file manually the way the bootstrap utility would, or rename the file and let a new one be generated; you will then need to copy all manually written modifications back to the newly generated file. Typically this means resource fields and additional imports.

When a resource file is generated for the first time, it will have name and metadata fields, plus special annotations applicable for resources only. You will need to replace TODO sections in the resource.

The first notable annotation is google.api.resource, like:

option (google.api.resource) = {
  type : "inventory-manager.examples.edgelq.com/Site"
  pattern : "projects/{project}/regions/{region}/sites/{site}"
};

You should note that this annotation will always show you a list of all possible name patterns. Whenever you change something later in the API specification (parents or scopeAttributes), you will need to modify this annotation manually.

Second, a more important annotation is the one provided by Goten, for example:

option (goten.annotations.resource) = {
  id_pattern : "[a-zA-Z0-9_.-]{1,128}"              // This is default value, this is set initially from api-skeleton idPattern param!
  collection : "sites"                              // Always plural and lowerCamelJson
  plural : "sites"                                  // Equal to collection
  parents : "Project"                               // If there are many parents, we will have many "parents:"
  on_parent_deleted_behavior : ASYNC_CASCADE_DELETE // Highly recommended, typical in SPEKTRA Edge
  scope_attributes : "goten.annotations/Region"     // Set for regional resources
  async_deletion : false                            // If set to true, resource will not disappear immediately after deletion.
};

This one shows basic properties like the list of parents, scope attributes, and what happens when a parent is deleted. The parent deletion behavior always needs to be set for each resource. From SPEKTRA Edge's perspective, we recommend cascade deletion (preferably done asynchronously). You may use in-transaction deletion if you are certain there will be no more than 10 child resources at once. Children of Project especially should use asynchronous cascade deletion. We strive to make project deletion a rather smooth process (although warning: the SOFT delete option is not implemented yet).

Parameter async_deletion deserves an additional note: when a resource is deleted, by default its record is removed from the database immediately. However, if async_deletion is true, then it will stay until all backreferences are cleaned up (no resource points at it anymore). In some cases this may take considerable time, for example large project deletion.

We recommend setting async_deletion to true for top resources, like Project.

References to other resources

Setting a reference to other resources is pretty straightforward, it follows this pattern:

message SomeResource {
  option (google.api.resource) = { ... };

  option (goten.annotations.resource) = { ... };
  
  string reference_to_resource_from_current_service = 3 [
    (goten.annotations.type).reference = {
      resource: "OtherResource"
      target_delete_behavior : BLOCK
    }
  ];

  string reference_to_resource_from_different_service = 4 [
    (goten.annotations.type).reference = {
      resource: "different.edgelq.com/DifferentResource"
      target_delete_behavior : BLOCK
    }
  ];
}

Note you always need to specify target deletion behavior. If you just want to hold the resource name, but it is not supposed to be a true reference, then you should use (goten.annotations.type).name.resource annotation.

References to resources from different services or different regions will implicitly switch to ASYNC versions of UNSET/CASCADE_DELETE!

Views

Reading methods (Get, BatchGet, List, Watch, and Search if enabled) normally have a field_mask field in their request bodies. The field mask selects which fields should be returned in the response or, in the case of watch, in the incremental real-time updates. Apart from the field mask field, there is another one: view. A view indicates the default field mask that should be applied. If both view and field_mask are specified in a request, their masks are merged.

The following view types are available: NAME, BASIC, DETAIL, and FULL. The first one is a two-element field mask, with the fields name and display_name (if it is defined in the resource!). The last one should be self-explanatory. The two other ones are undefined by default, and if they are used, they will work as FULL. Developers can define any of the 4, even NAME and FULL - those will just be overwritten. This can be done using the goten.annotations.resource annotation:

message SomeResource {
  option (goten.annotations.resource) = {
    ...
    views : [
      {
        view : BASIC
        fields : [
          {path : "name"},
          {path : "some_field"},
          {path : "other_field"}
        ]
      },
      {
        view : DETAIL
        fields : [
          {path : "name"},
          {path : "some_field"},
          {path : "other_field"},
          {path : "outer.nested"}
        ]
      }
    ]
  };
}

Note that you need to specify fields using snake_case. You can specify nested fields too.
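To illustrate the merging described above, a hypothetical Get request combining both fields (a text-proto-style sketch; the request type name is an assumption):

GetSomeResourceRequest {
  name : "someResources/s1"
  view : BASIC                            // contributes name, some_field, other_field
  field_mask { paths : "outer.nested" }   // merged on top of the BASIC mask
}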

Database indices

List/Watch requests work on a “best effort” basis in principle. However, sometimes indices are needed for performance, or, like in the case of Firestore, to make certain queries even possible.

Database indices are declared in protobuf definitions in each resource. During startup, db-controller runtime uses libraries provided by Goten to ensure indices in protobuf match those in the database. Note that you should not create indices on your own unless for experimentation.

Let’s define some examples, for simplicity we show just name patterns and indices annotations, fields can be imagined:

message Device {
  option (google.api.resource) = {
    type : "example.edgelq.com/Device"
    pattern : "projects/{project}/devices/{device}"
  };

  option (goten.annotations.indices) = {
    composite : {
      sorting_groups : [
        {
          name : "byDisplayName",
          order_by : "display_name",
          scopes : [ "projects/{project}/devices/-" ]
        },
        {
          name : "bySerialNumber"
          order_by : "info.serial_number"
          scopes : [
            "projects/-/devices/-",
            "projects/{project}/devices/-"
          ]
        }
      ]
      filters : [
        {
          field_path : "info.model"
          required : true
          restricted_sorting_groups : [ "bySerialNumber" ]
        },
        {
          field_path : "info.maintainer_group"
          reference_patterns : [ "projects/{project}/maintanenceGroups/{maintanenceGroup}" ]
        }
      ]
    }
    single : [ {field_path : "machine_type"} ]
  };
}

There are two index types: single-field and composite. Single-field should be pretty straightforward: you specify just the field path (it can be nested, with dots), and the index will be usable for that field. Composite indices are generated based on sorting groups combined with filters.

Composite indices are optimized for sorting - but as of now, only one sorting field is supported. However, if the sorting field is different from the name, then "name" is additionally appended to ensure sorting is stable. In the above example, composite indices can be divided into two groups - those sorting by display_name, and those sorting by info.serial_number.

Note that the sorting field path is also usable for filtering; therefore, if you just need a specific composite index over multiple filter fields, you can pick one field that may optionally be used for sorting too. Apart from that, each sorting group has built-in filter support for name fields, for the specified patterns only (scopes).

Attached filters can either be required (if such a filter is not specified in a query, the query will not use the index) or optional (each non-required filter doubles the number of generated indices).

Based on the above example, generated composite indices will be:

  • filter (name.projectId) orderBy (display_name ASC, name.deviceId ASC)
  • filter (name.projectId) orderBy (display_name DESC, name.deviceId DESC)
  • filter (name.projectId, info.maintainer_group) orderBy (display_name ASC, name.deviceId ASC)
  • filter (name.projectId, info.maintainer_group) orderBy (display_name DESC, name.deviceId DESC)
  • filter (info.model) orderBy (info.serial_number ASC, name.projectId ASC, name.deviceId ASC)
  • filter (info.model) orderBy (info.serial_number DESC, name.projectId DESC, name.deviceId DESC)
  • filter (name.projectId, info.model) orderBy (info.serial_number ASC, name.deviceId ASC)
  • filter (name.projectId, info.model) orderBy (info.serial_number DESC, name.deviceId DESC)
  • filter (info.model, info.maintainer_group) orderBy (info.serial_number ASC, name.projectId ASC, name.deviceId ASC)
  • filter (info.model, info.maintainer_group) orderBy (info.serial_number DESC, name.projectId DESC, name.deviceId DESC)
  • filter (name.projectId, info.model, info.maintainer_group) orderBy (info.serial_number ASC, name.deviceId ASC)
  • filter (name.projectId, info.model, info.maintainer_group) orderBy (info.serial_number DESC, name.deviceId DESC)

When we sort by display_name, to utilize the composite index, we should also filter by the projectId part of the name field. Additional sorting by name.deviceId part is added implicitly to any order. If we add info.maintainer_group to the filter, we will switch to a different composite index.

If we just filter by display_name (we can use > or < operators too!), and add filter by projectId part of the name, then one of those first composite indices will be used too.

When defining indices, be aware of multiplications. Each sorting group has two sort directions as a multiplier; the next multiplier is the number of name patterns we add (scopes). Finally, for each non-required filter field, we multiply the number of indices by 2. Here we generated 12 composite indices and 1 single-field one. The number of indices is important from the perspective of the database used: in Firestore we can typically have 200 indices per database, and in Mongo 64 per collection.
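As a sanity check, here is how the 12 composite indices above multiply out (each sorting direction counts as a separate index, and the required info.model filter does not multiply):

byDisplayName:  1 scope  × 2 (optional info.maintainer_group) × 2 directions = 4 indices
bySerialNumber: 2 scopes × 2 (optional info.maintainer_group) × 2 directions = 8 indices
Total:          4 + 8 = 12 composite indices, plus 1 single-field index (machine_type)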

Cache indices

To improve performance & reduce database usage, Goten & SPEKTRA Edge utilize Redis as a database cache.

Service developers should carefully analyze which queries are used the most, what the update rate is, etc. With the Goten cache, we support:

  • Get/BatchGet queries

    caching is done by resource name. Invalidation happens per specific instance when resources are updated or deleted.

  • List/Search queries

    we cache by all query params (filter, parent name, order by, page, phrase in case of search, field mask). If a resource is updated/deleted/created, then we invalidate whole cached query groups by filter only. We will explain more with examples.

We don’t support cache for Watch requests.

To enable cache support for service it is required to:

  • Provide cache annotation for each relevant resource in their proto files.
  • In server code, during initialization, construct store objects with cache, it’s a very short amount of code.

Let’s define some indices, for simplicity, we show just name patterns and annotations specific to the cache:

message Comment {
  option (google.api.resource) = {
    type : "forum.edgelq.com/Comment"
    pattern : "messages/{message}/comments/{comment}"
    pattern : "topics/{topic}/messages/{message}/comments/{comment}"
  };

  option (goten.annotations.cache) = {
    queries : [
      {eq_field_paths : ["name"]},
      {eq_field_paths : ["name", "user"]}
    ]
    query_reference_patterns : [{
      field_path : "name",
      patterns : [
        "messages/-/comments/-",
        "topics/{topic}/messages/-/comments/-"
      ]
    }]
  };
};

By default, Goten generates this proto annotation for every resource when the resource is generated for the first time, but a very minimal one, with an index for the name field only.

We will support caching for:

  • Get/BatchGet requests

    it is enabled by default, and the goten.annotations.cache annotation only provides a way to disable it. Users do not need to do anything here.

  • Following List/Search queries which filter/parent SATISFY following filter conditions:

    • Group 1: name = "messages/-/comments/-"
    • Group 2: name = "topics/{topicId}/messages/-/comments/-"
    • Group 3: name = "messages/-/comments/-" AND user = "users/{userId}"
    • Group 4: name = "topics/{topicId}/messages/-/comments/-" AND user = "users/{userId}"

Since caching by exact name is very simple, we will be discussing only list/search queries.

We have 4 groups of indices. This is because:

  • We have 2 query sets.

    one for name, and the other for name with user. The name field has 2 name patterns.

  • Multiply 2 by 2, you have 4.

As a reminder, the presence of the “parent” field in List/Search requests already implies that the final filter will contain the “name” field.

Let’s put some example queries and how invalidation works then. Queries that will be cache-able:

  • LIST { parent = 'topics/t1/messages/m1' filter = '' }

    It will belong to group 2.

  • LIST { parent = 'topics/t1/messages/-' filter = '' }

    It will belong to group 2.

  • LIST { parent = 'messages/-' filter = '' }

    It will belong to group 1.

  • LIST { parent = 'messages/m1' filter = '' }

    It will belong to group 1.

  • LIST { parent = 'topics/t1/messages/m1' filter = 'user="users/u1"' }

    It will belong to groups 2 and 4.

  • LIST { parent = 'topics/t1/messages/-' filter = 'user="users/-"' }

    It will belong to group 2.

This query will not be cached: LIST { parent = 'topics/-/messages/-' filter = '' }

Note that exact queries may belong to more than one group. Also note that groups 3 and 4, which require a user, must be given the full user reference without wildcards. If we wanted to enable caching with wildcard users too, we would need to provide the following annotation:

 option (goten.annotations.cache) = {
   queries : [
     {eq_field_paths : [ "name" ]},
     {eq_field_paths : [ "name", "user" ]}
   ]
   query_reference_patterns : [ {
     field_path : "name",
     patterns : [
       "messages/-/comments/-",
       "topics/{topic}/messages/-/comments/-"
     ]
   }, {
     field_path : "user",
     patterns : [ "users/-" ]
   } ]
 };

The param that allows us to decide to what degree we allow wildcards is query_reference_patterns. This param is effectively "present" for every name/reference field within the resource body that appears in the queries param. If the developer does not provide it, Goten assumes a default: allow ALL name patterns, but allow only the last segment of the name field to be a wildcard. In other words, the following annotations are equivalent:

 option (goten.annotations.cache) = {
   queries : [
     {eq_field_paths : [ "name" ]},
     {eq_field_paths : [ "name", "user" ]}
   ]
 };


 option (goten.annotations.cache) = {
   queries : [
     {eq_field_paths : [ "name" ]},
     {eq_field_paths : [ "name", "user" ]}
   ]
   query_reference_patterns : [ {
     field_path : "name",
     patterns : [
       "messages/{message}/comments/-",
       "topics/{topic}/messages/{message}/comments/-"
     ]
   }, {
     field_path : "user",
     patterns : [ "users/{user}" ]
   } ]
 };

Going back to our original 4 groups, let's explain how invalidation works. Suppose that the following resource is created: Comment { name: "topics/t1/messages/m1/comments/c1", user: "users/u1" }.

Goten will need to delete the following cached query sets:

  • CACHED QUERY SET { name: “topics/t1/messages/-/comments/-” }

    filter group 2

  • CACHED QUERY SET { name: “topics/t1/messages/m1/comments/-” }

    filter group 2

  • CACHED QUERY SET { name: “topics/t1/messages/m1/comments/c1” }

    filter group 2

  • CACHED QUERY SET { name: “topics/t1/messages/m1/comments/c1” user: “users/u1” }

    filter group 4

  • CACHED QUERY SET { name: “topics/t1/messages/m1/comments/-” user: “users/u1” }

    filter group 4

  • CACHED QUERY SET { name: “topics/t1/messages/-/comments/-” user: “users/u1” }

    filter group 4

You can notice that 2 cached query sets may actually belong to the same filter group - one with a wildcard and one with the message specified. All cached query sets are generated from the created comment. If the topic/message/user were different, then we would also have different query sets.

We can say that we have 2 query field groups, multiplied by 2 patterns for the name field, multiplied by 1 pattern for the user field, multiplied by 3 variants with wildcards in the name pattern. That gives 12 cached query sets for 4 filter groups.

A List/Search query is also classified into query sets. For example, a request SEARCH { phrase = "Error" parent: "topics/t1/messages/m1" filter: "user = users/u2 AND metadata.tags CONTAINS xxx" } would be put in the following cached query set: CACHED QUERY SET { name: "topics/t1/messages/m1/comments/-" user: "users/u2" }

Note that, unlike for resource instances, we pick the biggest possible (most specific) cached query set for actual queries. Thanks to that, if there is some update of a comment for a specific user and message, then cached queries for the same message and OTHER users will not be invalidated. It's worth considering this when designing the proto annotation. If a collection gets a lot of updates, we get a lot of invalidations. In that case, it's worth putting in more possible query field sets, so we are less affected by the high write rate. The more fields are specified, the less likely an update will cause invalidation.

The last remaining thing to mention regarding the cache is what kinds of filter conditions are supported. At this moment we cache by two conditions: equality (=) and IN. In other words, the request SEARCH { phrase = "Error" parent: "topics/t1/messages/m1" filter: "user IN [users/u2, users/u3] AND metadata.tags CONTAINS xxx" } would be put in the following cached query sets:

CACHED QUERY SET { name: "topics/t1/messages/m1/comments/-" user: "users/u2" }
CACHED QUERY SET { name: "topics/t1/messages/m1/comments/-" user: "users/u3" }

Note that IN queries have a bigger chance of invalidation, because an update of a comment from either of the 2 users would cause it. But it's still better than all users.

Search Indices

If the search feature was enabled in the API specification for a given resource, it is necessary to add an annotation to the resource to make it work.

We need to tell:

  • Which fields should be fully searchable
  • Which fields should be sortable
  • Which fields should be filterable only

Each of those field groups can be defined via the search specification in the resource. For example, let's define the search spec for an imaginary resource called "Message" (it should be easy to understand):

message Message {
 option (google.api.resource) = {
   type : "forum.edgelq.com/Message"
   pattern : "messages/{message}"
   pattern : "topics/{topic}/messages/{message}"
 };

 option (goten.annotations.search) = {
  fully_searchable : [
   "name",                 // Name is also a string
   "user",                 // Some reference field (still string)
   "content",              // string
   "metadata.labels",      // map<string, string>
   "metadata.annotations", // map<string, string>
   "metadata.tags"         // []string
  ]
  filterable_only : [
   "views_count",          // integer
   "metadata.create_time"  // timestamp
  ]
  sortable : [
   "views_count",          // integer
   "metadata.create_time"  // timestamp
  ]
 };
}

Fully searchable fields will be text-indexed AND filterable. They not only support string fields (name, content, user); they can also support more complex structures that contain strings internally (metadata tags, annotations, labels), but generally they should focus on strings. Filterable-only fields, on the other hand, can contain non-string elements like numbers, timestamps, booleans, etc. They will not be text-indexed but can still be used in filters. As a general rule, developers should put string fields (and objects with strings) in the fully searchable category, and everything else in "filterable only". Sortable fields are of course self-explanatory: they enable sorting on specific fields in both directions. However, during actual queries, only one field can be sorted at once.
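For instance, given the Message annotation above, a hypothetical search query could combine a text phrase with filterable-only and sortable fields (notation follows the earlier cache examples):

SEARCH { phrase = "error" parent = "topics/t1" filter = "views_count > 100" orderBy = "metadata.create_time DESC" }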

Search backend in use may be different from service to service. However, it is the responsibility of the developer to ensure that their chosen backend will support ALL declared search annotations for all relevant resources.

API Group Protobuf Files

A SPEKTRA Edge-based service, in a specific version, is represented by a single protobuf package. It contains multiple API groups, each containing a set of gRPC methods. By default, Goten creates one API group per resource, with a name equal to that of the resource; it contains the CRUD actions, but the developer can add custom ones too in the API-skeleton file.

Files created by goten-bootstrap for each API group are the following:

  • <api_name>_service.proto

    This file contains the definition of an API object with its actions from api-skeleton (with CRUD if applicable).

  • <api_name>_custom.proto

    This file will contain definitions of requests/responses for custom actions. Each object contains a TODO section because again, this is something that goten cannot fully provide. Those custom files are created only when there are custom actions in the first place.

Files <api_name>_service.proto are generated each time goten-bootstrap is invoked, but <api_name>_custom.proto is generated for the first time only. If you, for example, add a custom action after the file already exists, the request/response pair will not be generated. Instead, you will either need to rename (temporarily) the existing file or add the full objects manually. It is not a big issue, however, because code-gen just provides empty messages with optionally a single field inside, and a TODO section to populate the rest of the request/response body.

All API groups within the same service will of course share the same endpoint, they will just have different paths and generated code will be packaged per API.

Files ending with _service.proto are worth inspecting for beginners, or for debugging/verification, as those contain the action annotations that influence how a request is executed. Consider this example (a snippet from the inventory manager):

rpc ListReaderAgents(ListReaderAgentsRequest) returns (ListReaderAgentsResponse) {
  option (google.api.http) = {
    get : "/v1/{parent=projects/*/regions/*}/readerAgents"
  };
  option (goten.annotations.method) = {
    resource : "ReaderAgent"
    is_collection : true
    is_plural : true
    verb : "list"
    request_paths : {resource_parent : [ "parent" ]}
    response_paths : {resource_body : [ "reader_agents" ]}
  };
  option (goten.annotations.tx) = {
    read_only : true
    transaction : NONE
  };
  option (goten.annotations.multi_region_routing) = {
    skip_code_gen_based_routing : false
    execute_on_owning_region : false
  };
}

This declaration defines:

  • What is the request, what is the response

  • gRPC Transcoding via google.api.http annotation

    you can see HTTP method, URL path, capture reference. In this example, we could send HTTP GET /v1/projects/p1/regions/us-west2/readerAgents to get a list of agents in project p1, region us-west2. It would set the value of the “parent” field in ListReaderAgentsRequest to projects/p1/regions/us-west2

  • Annotation goten.annotations.method provides basic information (usually self-explanatory). Important fields are those for request_paths and response_paths

    Usage, Auditing, Authorization, and MultiRegion routing depend on these fields, and they need to exist in request/response objects.

  • Annotation goten.annotations.tx defines what the transaction middleware does

    How the database handle is opened. NONE uses the current connection handle. SNAPSHOT will need a separate session.

  • Annotation goten.annotations.multi_region_routing tells how the request is routed and if code-gen is used for it at all.

    In this case, since this is a reading request (List), we do not require a request to be executed on the region owning agents, it can be executed in the region where read-only copies are also available.

Note that all of this is copied/derived from the API specification.

Service Package Definition

Finally, among the generated protobuf files there is one last file wrapping up information about the service package (with one version): <service_name>.proto. It looks like:

// Goten Service InventoryManager
option (goten.annotations.service_pkg) = {
  // Human friendly short name
  name : "ServiceName"
  
  // We will have meta.goten.com/Service resource with name services/service-name.edgelq.com
  domain : "service-name.edgelq.com"

  // Current version
  version : "v1"
  
  // All imported services
  imported_services : {
    domain : "imported.edgelq.com"
    version : "v1"
    proto_pkg : "ntt.imported.v1"
  }
};

There can be only one such file within a proto package.

Goten Protobuf Types and other Annotations

When modeling a service in Goten with protobuf files, you just use normal proto3 syntax. Still, there are additional elements worth considering:

Set of custom types (you should have seen many of them in standard CRUD):

message ExampleSet {
  // This string must conform to naming pattern of specified resource.
  string name_type = 1 [(goten.annotations.type).name.resource = "ResourceName"];
  
  // This string must conform to the naming pattern of specified resource. Also,
  // references in Goten are validated against actual resources (if specified within
  // resource).
  string reference_type = 2 [(goten.annotations.type).reference = {
    resource : "ResourceName"
    target_delete_behavior : ASYNC_CASCADE_DELETE
  }];
  
  // This string must conform to parent naming pattern of specified resource.
  string parent_name_type = 3 [(goten.annotations.type).parent_name.resource = "ResourceName"];
  
  // This string contains token used for pagination (list/search/watch queries). Its contents
  // are validated into specific value required by ResourceName.
  string cursor_type = 4 [(goten.annotations.type).pager_cursor.resource = "ResourceName"];
  
  // This should contain value like "field_name ASC". Field name must exist within specified ResourceName.
  string order_by_type = 5 [(goten.annotations.type).order_by.resource = "ResourceName"];
  
  // This should contain string with conditions using AND condition: We support  equality conditions (like ==, >),
  // IN, CONTAINS, CONTAINS-ANY, NOT IN, IS NULL... some specific queries may be unsupported by underlying
  // database though. Field paths used must exist within ResourceName.
  string filter_type = 6 [(goten.annotations.type).filter.resource = "ResourceName"];
  
  // This is the only non-string custom type. This annotation forces all values within
  // this mask to be valid within ResourceName.
  google.protobuf.FieldMask field_mask_type = 7 [(goten.annotations.type).field_mask.resource = "ResourceName"];
}

When modeling resources/requests/responses, it is important to keep input validation in mind, to guard against bugs or malicious input. You should use annotations from here: https://github.com/cloudwan/goten/blob/main/annotations/validate.proto

An example is here: https://github.com/cloudwan/goten/blob/main/compiler/validate/example.proto

As of now, we don’t apply default maximum string lengths (we may in the future), so it is worth specifying them upfront.

2.3 - Developing your Service

How to develop your SPEKTRA Edge service.

2.3.1 - Developing your Service

How to develop your service.

Full example of sample service: https://github.com/cloudwan/inventory-manager-example

Service development preparation steps (as described in the introduction):

  • Reserving the service name (domain format) using the IAM API.
  • Creating a repository for the service.
  • Installing the Go SDK, at least in the minimum required version (ideally the latest).
  • Setting up development - cloning edgelq and goten, setting env variables.

In your repository, you should first:

  • Create a proto directory.
  • Create a proto/api-skeleton-$VERSION.yaml file.
  • Get familiar with the API Skeleton doc and write some minimal skeleton. You can always come back to it later.
  • Generate protobuf files using the goten-bootstrap tool, which is described in the api-skeleton doc.

After this, service can be worked on, using this document and examples.

Further initialization

With the api-skeleton and protobuf files you can model your service, but at this point, you need to start preparing some other common files. The first is the go.mod file, which should start with something like this:

module github.com/your_organization/your_repository # REPLACE ME!

go 1.22

require (
	github.com/cloudwan/edgelq v1.X.X # PUT CURRENT VERSIONS
	github.com/cloudwan/goten v1.X.X  # PUT CURRENT VERSIONS
)

replace (
	cloud.google.com/go/firestore => github.com/cloudwan/goten-firestore v1.9.0
	google.golang.org/protobuf => github.com/cloudwan/goten-protobuf v1.26.1
)

Note that we have two special forks that are required in SPEKTRA Edge-based services.

The next crucial file is regenerate.sh, which we typically put at the top of the code repository. Refer to the InventoryManager example application.

It includes the following steps:

  • Setting up the PROTOINCLUDE variable (also via a script in the SPEKTRA Edge repo).
  • Calling goten-bootstrap with the clang formatter to create protobuf files.
  • Generating server/client libraries (a set of protoc calls).
  • Generating the descriptor for REST API transcoding.
  • Generating controller code (if a business logic controller is needed).
  • Generating code for config files.

For the startup part, you may skip business logic controller generation, as you may not have (or need) it. For config files (in the config directory), start by copying from the example here: https://github.com/cloudwan/inventory-manager-example/tree/master/config.

You should copy all *.proto files and config.go. You may need to remove the business logic controller config part if you don’t need it.

Another note about config files concerns resource sharding: in the API server config, you must specify the following sharding:

  • byName (always)
  • byProjectId (if you have any resources where the parent contains Project)
  • byServiceId (if you have any resources where parent contains Service)
  • byOrgId (if you have any resources where parent contains Organization)
  • byIamScope (if you have resources where the parent contains either Project, Service, or Organization - de facto always).

Once this is done, you should execute regenerate.sh, and you will have almost all the code for the server, controllers and CLI utility ready.

Whenever you modify Golang code, or after a regenerate.sh call, you may need to run:

go mod tidy # Ensures dependencies are all good

This will update the go.mod and go.sum files; you need to ensure all dependencies are in sync.

At this point, you are ready to start implementing your service. In the next parts, we will describe what you can find in generated code, and provide various advice on how to write code for your apps yourself.

Generated code

All generated Golang files have .pb. in their file names. Developers can, and in some cases should, extend generated code (structs) with handwritten files using non-pb names; those will not be deleted on regeneration.

We will describe briefly generated code packages and mention where manually written files have to be added.

Resource packages

The first directory where you can explore generated code is the resources directory; it contains one package per resource per API version. Within a single resource package, we can find:

  • <resource_name>.pb.access.go

    Contains access interface for a resource collection (CRUD). It may be implemented by a database handle or API client.

  • <resource_name>.pb.collections.go

    Generated collections, developed from times before generics were introduced into Golang. We have standard maps/lists.

  • <resource_name>.pb.descriptor.go

    This file is crucial for the development of generic components. It contains a definition of a descriptor that is tied to a specific resource. It was inspired by the protobuf library, where each proto message has its descriptor. Here we do the same, but this descriptor more focuses on creating resource-specific objects without knowing the type. Descriptors are also registered globally, see github.com/cloudwan/goten/runtime/resource/registry.go.

  • <resource_name>.pb.fieldmask.go

    Contains generated type-safe field mask for a specific resource. Paths should be built with a builder, see below.

  • <resource_name>.pb.fieldpath.go

    Contains generated type-safe field path for a specific resource. Users don’t necessarily need to know its workings, apart from interfaces. Each path should be built with the builder and IDE should help show what is possible to do with field paths.

  • <resource_name>.pb.fieldpathbuilder.go

    Developers are recommended to use this file and its builder. It allows the construction of field paths, also with value variants.

  • <resource_name>.pb.filter.go

    Contains generated type-safe filter for a specific resource. Developers should rather not attempt to build filters directly from this but rather use a builder.

  • <resource_name>.pb.filterbuilder.go

    Developers are recommended to use the filter builder in this file. It allows simple concatenation of conditions using functions like Where().Path.Eq(value). A combined usage sketch for the builder files follows after this list.

  • <resource_name>.pb.go

    Contains generated resource model in Golang, with getters and setters.

  • <resource_name>.pb.name.go

    Contains Name and Reference objects generated for a specific resource. Note that those types are struct in Go, but string in protobuf. However, this allows much easier manipulation of names/references compared to standard strings.

  • <resource_name>.pb.namebuilder.go

    Contains an easy-to-use builder for name/reference/parent name types.

  • <resource_name>.pb.object_ext.go

    Contains additional generated utility functions for copying, merging, and diffing.

  • <resource_name>.pb.pagination.go

    Contains types used by pagination components. Usually, developers don’t need to worry about them, but the function MakePagerQuery is often helpful to construct an initial pager.

  • <resource_name>.pb.parentname.go

    It is like its name equivalent but contains a name object for the parent. This file exists for resources with possible parents.

  • <resource_name>.pb.query.go

    Contains query objects for CRUD operations. Should be used with the Access interface.

  • <resource_name>.pb.validate.go

    Generated validation functions (based on goten annotations). They are automatically called by the generated server code.

  • <resource_name>.pb.view.go

    Contains function to generate default field mask from view object.

  • <resource_name>_change.pb.change.go

    Contains additional utility functions for the ResourceChange object.

  • <resource_name>_change.pb.go

    Contains model of change object in Golang.

  • <resource_name>_change.pb.validate.go

    Generated validation functions (based on goten annotations) but for Change object.
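
To illustrate the builder files mentioned above, here is a minimal usage sketch. It assumes a hypothetical ReaderAgent resource package generated into resources/v1/reader_agent; the exact builder method names depend on the resource's name pattern and fields, so treat everything below as illustrative rather than as the definitive generated API:

package example

import (
	readeragent "github.com/your_organization/your_repository/resources/v1/reader_agent" // hypothetical import path
)

func buildQueryInputs() {
	// Name builder: assembles projects/p1/regions/us-west2/readerAgents/agent-1
	// segment by segment, in a type-safe way (method names are illustrative).
	name := readeragent.NewNameBuilder().
		SetProjectId("p1").
		SetRegionId("us-west2").
		SetId("agent-1").
		Name()

	// Filter builder: concatenates conditions in the Where().Path.Eq(value)
	// style described above (the DisplayName accessor is illustrative).
	fltr := readeragent.NewFilterBuilder().
		Where().DisplayName().Eq("edge-agent-01").
		Filter()

	// Type-safe field mask built from generated field paths.
	mask := readeragent.NewReaderAgentFieldMask(
		readeragent.NewReaderAgentFieldPathBuilder().DisplayName().FieldPath(),
	)

	_, _, _ = name, fltr, mask
}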

Generated types often implement common interfaces as defined in the package github.com/cloudwan/goten/runtime/resource. Notable interfaces: Access, Descriptor, Filter, Name, Reference, PagerQuery, Query, Resource, Registry (global registry of descriptors).

Field mask/Field path base interfaces can be found in module github.com/cloudwan/goten/runtime/object.

While by default resource packages are considered complete and can be used out of the box, often some additional methods extending resource structs are implemented in separate files.

Client packages

Modules at a higher level than resources can be found in the client directory; this is typically the second directory to explore. It contains one package per API group, plus one final glue package for the whole service (in a specific version).

API group package directory contains:

  • <api_name>_service.pb.go

    Contains definitions of request/response objects, but excluding those from _custom.proto files.

  • <api_name>_custom.pb.go

    Contains definitions of request/response objects from _custom.proto files.

  • <api_name>_service.pb.validate.go

    Contains validation utilities of request/response objects, excluding those from _custom.proto files.

  • <api_name>_custom.pb.validate.go

    Contains validation utilities of request/response objects from _custom.proto files.

  • <api_name>_service.pb.client.go

    Contains a wrapper around the gRPC connection object. The wrapper exposes all actions offered by an API group in a type-safe manner.

  • <api_name>_service.pb.descriptors.go

    Contains descriptor per each method and one per whole API group.

Usually, developers will need to use just the client wrapper and request/response objects.

Descriptors in this case are more useful for maintainers building generic modules; modules responsible for things like Auditing and usage tracking use method descriptors. Those often use annotations derived from the API skeleton, like requestPaths.

Client modules contain one final “all” package - it is under a directory named after the short service name. It typically contains two files:

  • <service_short_name>.pb.client.go

    Combines API wrappers from all API groups together as one bundle.

  • <service_short_name>.pb.descriptor.go

    Contains descriptor for the whole service in a specific version, with all metadata. Used for generic modules.

When developing applications, developers are encouraged to maintain a single gRPC Connection object and use only those wrappers (clients for API groups) that are needed. This should reduce compiled binary sizes.
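
As a sketch of this advice (import paths, constructor, and request field names below are illustrative, mirroring the generated file layout described above):

package main

import (
	"context"

	"google.golang.org/grpc"

	// Hypothetical generated packages for one API group and the resource.
	raclient "github.com/your_organization/your_repository/client/v1/reader_agent"
	ra "github.com/your_organization/your_repository/resources/v1/reader_agent"
)

func listAgents(ctx context.Context, conn *grpc.ClientConn, parent *ra.ParentName) ([]*ra.ReaderAgent, error) {
	// One shared gRPC connection; we construct only the API group wrapper
	// we actually need, keeping the compiled binary smaller.
	client := raclient.NewReaderAgentServiceClient(conn)
	resp, err := client.ListReaderAgents(ctx, &raclient.ListReaderAgentsRequest{
		Parent: parent,
	})
	if err != nil {
		return nil, err
	}
	return resp.GetReaderAgents(), nil
}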

Client packages can be usually considered complete - developers don’t need to provide anything there.

Store packages

The store directory contains packages building on top of resources, used by the server binary. There is one package per resource, plus one final wrapper for the whole service.

Within the resource store package we can find:

  • <resource_name>.pb.cache.go

    It has generated code specifically for cache. It is based on cache protobuf annotations.

  • <resource_name>.pb.store_access.go

    It is a wrapper that takes the store handle in the constructor. It provides convenient CRUD access to resources. Note that this implements interface Access defined in <resource_name>.pb.access.go files (In the resources directory).

One common package for the whole service has a name equal to the short service name. It contains files:

  • <service_short_name>.pb.cache.go

    It wraps up all cache descriptors from all resources.

  • <service_short_name>.pb.go

    It takes a generic store handle in the constructor and wraps it to provide an interface with CRUD for all resources within the service.

There are no known cases where a custom implementation ever had to be provided within those packages; they can be considered complete on their own.

Server packages

Goten/SPEKTRA Edge strives to provide as much ready-to-use code as possible, and this includes almost full server code in the server directory. Each API group has a separate package, but there is one additional overarching package gluing all API groups together.

For each API group, we have for the server side:

  • <api_name>_service.pb.grpc.go

    Server handler interfaces (per each API group)

  • <api_name>_service.pb.middleware.routing.go

    MultiRegion middleware layer.

  • <api_name>_service.pb.middleware.authorization.go

    Authorization middleware layer (but see more in IAM integration)

  • <api_name>_service.pb.middleware.tx.go

    Transaction middleware layer, regulating access to the store for the call.

  • <api_name>_service.pb.middleware.outer.go

    Outer middleware layer - with validation, Compare And Swap checks, etc.

  • <api_name>_service.pb.server.core.go

    Core server that handles all CRUD functions already.

Note that for CRUD, everything is provided fully out of the box, but often there are custom actions, or some extra steps are required around basic CRUD. In that case, it is recommended to write custom middleware sitting between the outer middleware and the core, as in the sketch below.
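
A minimal custom middleware sketch (type and package names are illustrative; the real server interface comes from the generated <api_name>_service.pb.grpc.go file):

package server

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	// Hypothetical generated packages.
	raclient "github.com/your_organization/your_repository/client/v1/reader_agent"
	ra "github.com/your_organization/your_repository/resources/v1/reader_agent"
	raserver "github.com/your_organization/your_repository/server/v1/reader_agent"
)

// readerAgentCustomMiddleware sits between the outer middleware and the core
// server. Embedding the next handler delegates all methods we don't override.
type readerAgentCustomMiddleware struct {
	raserver.ReaderAgentServiceServer
}

func NewReaderAgentCustomMiddleware(next raserver.ReaderAgentServiceServer) raserver.ReaderAgentServiceServer {
	return &readerAgentCustomMiddleware{ReaderAgentServiceServer: next}
}

func (m *readerAgentCustomMiddleware) CreateReaderAgent(
	ctx context.Context, req *raclient.CreateReaderAgentRequest,
) (*ra.ReaderAgent, error) {
	// Extra step before the core CRUD runs (hypothetical field check).
	if req.GetReaderAgent().GetDisplayName() == "" {
		return nil, status.Error(codes.InvalidArgument, "display name is required")
	}
	return m.ReaderAgentServiceServer.CreateReaderAgent(ctx, req)
}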

Directory server contains also a glue package for a whole service in a specific version, with files:

  • <service_short_name>.pb.grpc.go

    Constructs server interface by gluing interfaces from all API groups

  • <service_short_name>.pb.middleware.routing.go

    Glue for the multiRegion middleware layer.

  • <service_short_name>.pb.middleware.authorization.go

    Glue for the authorization middleware layer.

  • <service_short_name>.pb.middleware.tx.go

    Glue for the transaction middleware layer.

  • <service_short_name>.pb.middleware.outer.go

    Glue for the outer middleware layer.

  • <service_short_name>.pb.server.core.go

    Glue for the core server that handles all CRUD functions.

This last directory with glue will also need manually written code files, like https://github.com/cloudwan/inventory-manager-example/blob/master/server/v1/inventory_manager/inventory_manager.go.

Note that this example server constructor shows the order of middleware execution. It corresponds to the process described in the prerequisites.

Be aware that the transaction middleware MAY execute more than once for the SNAPSHOT transaction type: if we get an ABORTED error, the transaction is retried a couple of (typically 10) times. This means that all middleware after TX must contain code that can safely execute more than once. The database is guaranteed to reverse any write changes, BUT it is important to keep other state in check (for example, if we send requests to other services and the transaction fails, those won't be reversed!). If we change the request body, the changes will also be present in the request object on the second run!
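
A sketch of what "safe to execute more than once" means in practice for middleware running below the transaction layer (the Order types here are hypothetical stand-ins for generated ones):

package server

import "context"

// Hypothetical stand-ins for generated request/resource types.
type Order struct{ Source string }

func (o *Order) GetSource() string {
	if o == nil {
		return ""
	}
	return o.Source
}

type CreateOrderRequest struct{ Order *Order }

func (r *CreateOrderRequest) GetOrder() *Order {
	if r == nil {
		return nil
	}
	return r.Order
}

type orderServer interface {
	CreateOrder(ctx context.Context, req *CreateOrderRequest) (*Order, error)
}

type ordersCustomMiddleware struct{ next orderServer }

// CreateOrder may run several times for a single API call, because the TX
// middleware above it retries SNAPSHOT transactions on ABORTED errors.
func (m *ordersCustomMiddleware) CreateOrder(ctx context.Context, req *CreateOrderRequest) (*Order, error) {
	// Safe: check-before-set keeps the request mutation idempotent; the
	// request object is shared between retries.
	if req.GetOrder().GetSource() == "" {
		req.Order.Source = "api"
	}
	// Unsafe here: calling another service would repeat on retry and would
	// not be rolled back if the transaction ultimately aborts; delegate such
	// side effects to an asynchronous controller instead.
	return m.next.CreateOrder(ctx, req)
}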

Apart from this, developers need to provide files only if there is a need for custom middleware (which, in fairness, is almost always needed to some extent).

cli packages

The cli packages are used to create a simple CLI utility based on cuttle. They are complete; only a main.go file will be needed later on (to be explained later).

audit handlers packages

The audithandlers directory contains one package per version, with handlers generated for all audited methods and resources of the whole service. It is complete, and only some minor customizations are needed; see the Audit integration document. These packages need only be included in the main file, during server initialization. It is not necessary to understand their internal workings.

access packages

Packages for the access directory contain modules that are built around client ones. There are two differences here.

First, while client contains basic objects for client-side code, access delivers higher-level modules that are not necessarily needed by all clients. Splitting them into separate packages lets clients pick smaller dependencies.

Second, while client packages are built one per API group, access packages are built one per resource type, focused on CRUD functionality only.

In access, each resource has its own package, and finally, we have one glue package for the whole service.

Files generated for each resource:

  • <resource_name>.pb.api_access.go

    This implements interface Access defined in <resource_name>.pb.access.go files (In the resources directory). In the constructor, it takes the client interface as defined in <resource_name>_service.pb.client.go file (In the client directory), the one containing CRUD methods.

  • <resource_name>.pb.query_watcher.go

    Lower level watcher built around Watch<CollectionName> method. It takes the client interface and channel where it will be supplying events in real-time. It simplifies the handling of Watch calls for collections. It hides some level of complexity associated with stateless watch calls like soft resets or partial changes.

  • <resource_name>.pb.watcher.go

    High-level watcher components built around the Watch<CollectionName> method. It can support multiple queries and hides all complexity associated with stateless watch calls (resets, snapshot checks, partial snapshots, partial changes, etc.).

Files generated for a glue package for the whole service in a specific version:

  • <service_short_name>.pb.api_access.go

    Glues all access interfaces for each resource.

Watcher components require special attention and are best suited for observing database updates in real time. They are used heavily in our applications to provide system reactions in real time. They can be used by web browsers to provide dynamic changes to a view, by client applications to react swiftly to configuration updates, or by controllers to keep data in sync. We will cover this more in the real-time updates topic.
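
A rough sketch of consuming such a watcher; the real event types live in the generated <resource_name>.pb.query_watcher.go and <resource_name>.pb.watcher.go files, so the types below are illustrative stubs only:

package main

// Illustrative stand-ins for the generated watcher event types.
type ReaderAgentChange struct{}

type QueryWatcherEvent struct {
	Changes []*ReaderAgentChange
	// the real event also carries sync/lost-sync and reset information
}

func applyToLocalState(c *ReaderAgentChange) { /* keep local caches in sync */ }

// consume drains consolidated change sets from the watcher's channel; the
// watcher hides soft resets and partial changes from the consumer.
func consume(evtC <-chan *QueryWatcherEvent) {
	for evt := range evtC {
		for _, change := range evt.Changes {
			applyToLocalState(change)
		}
	}
}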

Fixtures

Inside the fixtures directory, you will find some base files containing definitions of various resources that will have to be bootstrapped for your service. Usually, fixtures are created:

  • for the service itself.
  • per each project that enables a given service (dynamic creation).

Those files are not “code” in any form, but some of those fixtures are still generated, and it is worth mentioning them here for completeness. We will come back to them in the SPEKTRA Edge migration document.

Main files (runtime entry points)

SPEKTRA Edge-based service backend consists of:

  • Server runtime, which handles all incoming gRPC, webGRPC, and REST API calls.
  • DbController runtime, which executes all asynchronous database tasks (like Garbage Collecting, multi-region syncing, etc).
  • Controller runtime, that executes all asynchronous tasks related to business logic to keep the system working. It also handles various bootstrapping tasks, like for IAM integration.

For each runtime, it is necessary to write one main.go file.

Apart from the backend, it is very advisable to create a CLI tool that at least allows developers to quickly experiment with the backend. It should use the generated cli packages.

Clients for web browsers and agents are not covered by this document, but examples provide some insights into how to create a client agent application running on the edge.

All main file examples can be found here: https://github.com/cloudwan/inventory-manager-example/tree/master/cmd.

Service developers should create a cmd directory with relevant runtimes.

Server

In the main file for the server, we need:

  • Initialize the EnvRegistry component, responsible for interacting with the wider SPEKTRA Edge platform (includes the discovery of endpoints, real-time changes, etc.).

  • Initialize observability components

    SPEKTRA Edge provides Audit for recording many API calls, Monitoring for usage tracking (it can also be used to monitor error counters). It is also possible to initialize tracing.

  • Run the server in the selected version (as detected by envRegistry).

In function running a server in a specific version:

  • We are initializing the database access handle. Note that it needs to support collections for your resources, but also for mixins.

  • We need to initialize a multi-region policy store

    It will observe all resources that are multi-region policy-holders for your service. If you use policyholders from imported services, you may need to add a filter that will guarantee you are not trying to access resources unavailable for your service.

  • We need to initialize AuthInfoProvider, which is common for Authenticator and then Authorization.

  • We need to initialize the Authenticator module.

  • Finally, we initialize the gRPC server object. It does not contain any registered handlers on its own yet.

    Only common interceptors like Authentication are installed.

For the gRPC server instance, we need to create and register handlers. It is required to provide server handlers for your particular service, then for the mandatory mixins:

  • schema mixin is mandatory, and it provides all methods related to database/schema consistency across multi-service, multi-region, and multi-version environments.
  • limits mixin is mandatory if you want to use Limits integration for your service.
  • Diagnostics mixin is optional for now, but this may change once EnvRegistry gets proper health checks based on gRPC. It should be included.

Mixins provide their API methods - they are separate “services” with their own API skeletons and protobuf files.

Refer to the example for instructions on how to provide your main file for the server.

Note: As of now, webGRPC or REST API is handled not by server runtime, but by envoyproxy component. Examples include configuration example file (And Kubernetes deployment declaration).

Controller

In the main file for the controller, we need:

  • Initialize the EnvRegistry component, responsible for interacting with the wider SPEKTRA Edge platform (includes the discovery of endpoints, real-time changes, etc.).
  • Initialize observability components.
  • Run controller in selected version (as detected by envRegistry).

For a selected version of the controller, we need to:

  • Create business logic controller virtual nodes manager

    This step is necessary if you have business logic nodes, otherwise, you can skip. You can refer to the business logic controller document for more information on what it is and how to use it.

  • Limits-mixin controller virtual nodes manager is mandatory if you include the Limits feature in your service. You can skip this module if you don’t need Limits. Otherwise, it is needed to execute common Limits logic.

  • Fixtures controller nodes are necessary to:

    • Bootstrap resources related to the service itself (like IAM permissions).
    • Bootstrap resources related to projects enabling the given service, like metric descriptors. Note that this means the controller needs to dynamically create new resources and watch project resources appearing in the service (tenants).

The fixtures controller is described more in the document about SPEKTRA Edge integration. However, since some fixtures are mandatory, it is a practically mandatory component to include.

Refer to the example for instructions on how to provide your main file for the controller.

DbController

Db-Controller is a set of modules executing tasks related to the database:

  • MultiRegion syncing.

  • Search database syncing

    If the search is enabled and uses a separate database.

  • Schema consistency (like asynchronous cascade unsets/deletions when some resources are deleted).

In the main file for the db-controller, we need:

  • Initialize the EnvRegistry component, responsible for interacting with the wider SPEKTRA Edge platform (includes the discovery of endpoints, real-time changes, etc.).
  • Initialize observability components.
  • Configure the database and search DB indices, as described in proto files.
  • Run the db-syncing controller for all syncing-related tasks.
  • Run the db-constraint controller for all schema consistency tasks.

All db-controller modules are provided by the Goten framework, so developers need to provide just the main file.

Refer to the example for instructions on how to provide your main file for the db-controller.

CLI

If you have Cuttle installed, you can use core SPEKTRA Edge services with it. However, especially when developing a service, it is useful to have a similar tool for your own service too. Goten generates a CLI module in the cli directory; developers need only provide their main file for the CLI. Refer to the inventory-manager example.

In that example, we include the example service and add some mixins: schema-mixin and limits-mixin. These CLI objects can access the mixin APIs exposed by your service; they can be skipped to reduce code size if you prefer. They contain calls that are relevant for service developers or maintainers. Mixins contain internal APIs and, if there are no bugs, even service developers don't have to know their internals (and if there is a bug, they can submit an issue). Mixins operate on their own APIs and should do all the work themselves.

Inclusion of Audit is recommended: the default cuttle provided by SPEKTRA Edge will not be able to decode Audit messages for custom services, but a CLI utility with all the service's types registered will be.

Refer to the example for instructions on how to provide your main file for the CLI.

Note: The compiled CLI will only work if cuttle is locally installed and initialized. Apart from that, you need to add an endpoint for your service separately to the environment, if your cuttle environment points to the core SPEKTRA Edge platform.

For example: Suppose that the domain for SPEKTRA Edge is beta.apis.edgelq.com:

cuttle config environment get staging-env
Environment:  staging-env
Domain:  beta.apis.edgelq.com
Auth data:
    ...
Endpoint specific configs:
+--------------+----------+--------------+-----------------+------------------+---------------+
| SERVICE NAME | ENDPOINT | TLS DISABLED | TLS SKIP VERIFY | TLS SERVICE NAME | TLS CERT FILE |
+--------------+----------+--------------+-----------------+------------------+---------------+
+--------------+----------+--------------+-----------------+------------------+---------------+

Therefore, the connection to the IAM service will be iam.beta.apis.edgelq.com, because this is the default domain. Considering that 3rd party services use different domains, you will need to add endpoint-specific settings like:

cuttle config environment set-endpoint \
  staging-env $SERVICE_SHORT_NAME --endpoint $SERVICE_ENDPOINT

The variable $SERVICE_SHORT_NAME should be snake_cased; it is derived from the short name of the service in the api-skeleton. For the Inventory Manager example, it is inventory_manager (in the api-skeleton, the short name is InventoryManager). See https://github.com/cloudwan/inventory-manager-example/blob/master/proto/api-skeleton-v1.yaml, field proto.service.name.

The variable $SERVICE_ENDPOINT must point to your service, like inventory-manager.examples.custom.domain.com:443. Note that you must include the port number, but not the scheme (like https://).

2.3.2 - Developing your Business Logic in Controller

How to develop your business logic in controller.

The API Server can execute very little actual work: all read requests are limited in size (they can fetch a page), and write requests will stop working if you start saving/deleting too many resources in a single transaction. Multiple transactions will also make users wonder if something is stuck. Some actions are intense; for example, when a user creates a Distribution resource in applications.edgelq.com that matches thousands of Edge devices, the system needs to create thousands of Pod resources. A transaction on the request side is practically impossible; pods must be created asynchronously for the service to operate correctly.

Since we are using NoSQL databases, which don't have cross-collection joins, we sometimes need to denormalize data and make copies, to be able to read all the necessary data from a single collection.

Service development very often requires developing a business logic controller - it is designed to execute all additional write tasks asynchronously.

We also need to acknowledge that:

  • Some write requests may fail, and some parts of the system may be unavailable. We need reasonable retries.

  • The system may be in constant motion

    Actions changing the desired state may arrive asynchronously. Tasks may change dynamically even before they are completed.

  • For various reasons (a mistake, perhaps) users may delete objects that need to exist. We need to handle interruptions and correct errors.

The business logic controller was designed to react in real time, handle failures, cancel or amend actions when necessary, and heal the system toward the desired state.

Desired state/Observed state are the key concepts here. Controllers are optimized first for Create/Update/Delete operations, trying to match the desired state with the observed one. The pattern is the following: the Controller uses Watchers to know the current system state (ideally, it should watch only the subset it needs). The observed state of some resources is used to compute the desired state. The desired state is then compared with the relevant part of the observed state, and any mismatch is handled with a Create/Update/Delete operation. Although this is not the only way the controller can operate, it is the most common.

Since the exact tasks of the business logic controller are service-specific, SPEKTRA Edge/Goten provides a framework for building it. This is different compared to db-controller, where we have just ready modules to use.

Of course, there are some ready controller node managers, like limits mixin, which need to be included in each controller runtime if the limits feature is used. This document however provides explanations of how to create own one.

Some example tasks

Going back to Distributions, Devices, and Pods: The Controller should, for any matching combination of Device + Distribution, create a Pod resource. If the Device is deleted, all its pods must be deleted. If Distribution is deleted, then similarly pods need to be deleted from all devices.

Observed states are Devices, Pods, and Distributions. The observed state of Distributions and Devices is used to compute the desired pod set. This is then compared with observed pods to create action points.

Another example: imagine we have a collection of Orders and Products; one order can point to one product, but a product can be pointed to by many orders. Imagine that we want to display a view of orders, where each item also has short product info. Since we have NoSQL without joins, we need to copy short info from a product into each order. We can do this when the Order is created/updated: get the Product resource and copy its info into the Order record. It may be questionable whether we want to update existing orders when the product is updated; for the sake of this example, suppose we need to support this case. We then observe products and orders as the observed state. For each observed order, we compute the desired one by checking the current product info. If there is any mismatch, we issue an Update to the server. Note that we have both an observed and a desired state of orders here.
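
A sketch of that desired-state computation (the types are illustrative stubs, not a real generated package):

package main

// Illustrative stand-ins for the resources described above.
type ProductShortInfo struct {
	DisplayName string
	UnitPrice   int64
}

type Product struct {
	DisplayName string
	UnitPrice   int64
}

type Order struct {
	ProductInfo *ProductShortInfo
}

// desiredOrder computes the desired Order record from the observed Order and
// the current Product. The syncer then compares desired vs observed and
// issues an Update only when they differ.
func desiredOrder(observed *Order, product *Product) *Order {
	want := *observed // copy the observed record
	want.ProductInfo = &ProductShortInfo{
		DisplayName: product.DisplayName,
		UnitPrice:   product.UnitPrice,
	}
	return &want
}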

Architecture Overview

When it comes to controllers, we define a thing called a “Processor”. The processor is a module that accepts the observed state as the input. Inside, it computes the desired state. Note that observed and desired states can potentially consist of many resource types and many collections. Still, it should concentrate on an isolated business logic task; for example, management of Pods based on Distributions and Devices is such a task. Still inside the processor, the desired and observed states are provided to internal syncers that ensure a valid system state. The processor does not have any output; it is a rather high-level and large object. However, processors are scoped.

Resource models in SPEKTRA Edge are concentrated around tenants: Services, Organizations, and Projects - usually the last one. This is what each processor is scoped around: a selected Service, Organization, or Project. In total, we have as many processor instances as there are tenant resources. This is for safety reasons, to ensure that tenants are separated. It would not be good if by mistake we matched a Distribution with Devices from a different project; then one tenant could schedule pods in another one…

Therefore, we need to remember that the Processor in the Business Logic Controller is a unit scoped by a tenant (usually a Project) and focused on executing a single, developer-defined business logic task. This task may produce as many desired states (one per collection) as the developer deems necessary.

Above a Processor, we have a “Node”. Node contains:

  • Set of processors, one per tenant it sees.
  • Set of watchers, one per each input (observed) collection.

Node is responsible for:

  • Management of processors, for example, if a new project is created, it should create a new processor object. If the project is deleted, then the processor must also be deleted.
  • Running set of watchers. By using common watchers for processors, we ensure that we do not have too many streams to the servers (multiple small projects are a thing here).
  • Distributing observed state changes to the processors, each change set should be split if necessary and provided to the relevant processors.

The node should be considered self-contained and generally the highest-level object, although we have things like “Node Managers”, which manage a fixed set of nodes, typically one to four of them. We will come back to this in the scaling considerations topic.

Going back to a Processor instance: each has a “heart”, a single primary goroutine that runs all the internal computations and handles events; only one, to avoid multi-threading issues as much as possible. Those “events” include all observed state changes provided by the Node to the Processor. This “heart” is called the processor runner here. Its responsibilities include computing the desired state.

Modules-wise, each processor consists of (typically):

  • Set of input objects. They are queues where the Node pushes observed state changes on the producer side. On the consumer side, the processor runner extracts those updates and pushes them into “stores”.
  • Set of stores, one per observed collection; stores are stateful and contain full snapshots of the observed collection. When the processor runner gets an update from the input object, it applies the change to the “store”. This is where we decide whether there was an update, deletion, or creation.
  • Set of transformers. They observe one or many stores, which are responsible for propagating changes in real-time to them. Transformers contain code responsible for computing the desired state based on the observed one.
  • Set of syncers. Each has two inputs: one is some store with the observed state, the other is the transformer producing the desired state. In some cases though, it is possible to provide more than one transformer to the desired-state input of a syncer.

All of these components are run by the processor runner goroutine, with one exception: Syncers internally have additional goroutines that execute the actual updates (create, update, and delete operations). Those are IO operations, so it is necessary to delegate them away from the processor runner.

An important note: the processor runner MUST NOT execute any IO work; it should always be fast. If necessary, the framework allows running additional goroutines in the processor, which can execute longer operations (or those that can return errors).

One final thing to discuss about processors is initial synchronization. When a Node boots up, its number of processors is 0 and the watchers for the observed state are empty. First, watchers need to start observing the relevant input collections, as instructed. When they start, before getting real-time updates, they get a current snapshot of the data. Only then do we start getting real-time updates that happened after the snapshot's point in time. The Node is responsible for creating as many processors as there are tenants it observes. Events from different collections may be out of sync: sometimes we may get tenants after other collections, sometimes before, often both. It is also possible for a watcher to lose connectivity with a server. If the disconnection is long enough, it may opt to request a snapshot again after a successful reconnection. A full snapshot for each tenant is delivered to each corresponding processor. Therefore, when the Node provides an “event” to a processor, it must include “Sync” or “LostSync” flags too. In the case of “Sync”, the processor is responsible for generating its own diff using its internal Store holding the previous snapshot.

Note that each observed input of the processor will get its own “sync” event, and we can’t control the order here. It is considered that:

  • Sync/LostSync events must be propagated from inputs/stores to syncers.
  • Transformer must send a “Sync” signal when all of its inputs (or stores it uses) are in sync. If at least one gets a LostSync event, then it must propagate LostSync to the Syncer’s desired state.
  • Syncer’s desired state is in sync/non-sync depending on events from the transformer(s).
  • Syncer’s observed state is in sync/non-sync depending on the sync/lostSync event from the store it observes.
  • Syncer’s updater executes updates only when both desired and observed states are in sync. When they both gain a sync event, the syncer executes a fresh snapshot of Create/Update/Delete operations; all previous operations are discarded.
  • Syncer’s updater must stop actions when either observed or the desired state loses sync.
  • Transformers may postpone desired state calculation till all inputs achieve sync state (developer decides).

Prototyping controllers with proto annotations

In Goten, we first define the structure of the business logic controller (or rather, what is possible) in protobuf files: we define the structure of Nodes, Processors, and their components.

A full reference can be found here: https://github.com/cloudwan/goten/blob/main/annotations/controller.proto. We will discuss some examples here to provide some more clarity.

By convention, in proto/$VERSION we create a controller subdirectory for proto files. In regenerate.sh we add the relevant protoc compiler call, as in https://github.com/cloudwan/inventory-manager-example/blob/master/regenerate.sh (find --goten-controller_out).

When going through examples, we will explore some common patterns and techniques.

Inventory manager example - Processor and Node definitions.

Let's review some examples, starting with the Inventory Manager's definition of a Processor: https://github.com/cloudwan/inventory-manager-example/blob/master/proto/v1/controller/agent_phantom_processor.proto

We can start from the top of the file (imports and top options):

  • See go_package annotation - this is the location where generated files will be put. Directory controller/$version/$processor_module is a convention we use and recommend for Processors.
  • Import of goten.proto and controller.proto from goten/annotations is required.
  • We need to import the service packages' main files for the versions we intend to use. For this example, we want to use monitoring in v4 and the Inventory Manager in v1; the relevant imports were added.
  • We also import “common components”, but we will return to it later.

In this file, we define a processor called “AgentPhantomProcessor”. We MAY then optionally specify the types we want to use. This one (CommonInventoryManagerControllerTypes) is specified in the imported file mentioned above; we will come back to it later.

The next important part is definitions. In Goten, resource-type names are fully qualified with the format $SERVICE_DOMAIN/$RESOURCE_NAME. This is how we need to specify resources. Definitions can be used to alias long names into shorter ones. With the next example, we will also demonstrate another use case.

In AgentPhantomProcessor, we would like to generate a single PhantomTimeSerie resource per each ReaderAgent in existence. So this is a very simple business logic task: make one additional resource for every member of another collection.

Since both ReaderAgent and PhantomTimeSerie are project-scoped resources, we want processors to operate per project. Therefore, we declare that “Project” is the scope object in the processor. Then we define two inputs: ReaderAgent and PhantomTimeSerie. Code-wise, each protobuf “input” will consist of two components: an Input and a Store object (as described in the architecture overview).

We define a single transformer object: AgentPhantomTransformer. There, we declare that this transformer produces the desired collection of PhantomTimeSerie instances, where each is owned by some ReaderAgent. This simplifies cleanup: if a ReaderAgent is deleted, the transformer will delete its PhantomTimeSerie from the desired collection. The best transformer type in such a situation is owner_ownee, where each output resource belongs to a separate parent.

After transformers, we define syncers; we have one instance, PhantomTimeSerieSyncer. It takes PhantomTimeSerie from the input list as the observed input, while the desired collection comes from AgentPhantomTransformer.

This Processor definition shows us what connects with what; we constructed the structure in a declarative way.

Now let's come back to types. As we said in the architecture overview, the Processor consists of input, store, transformer, and syncer objects. While transformers can be specified only in Processor definitions, the rest of those smaller elements can be delegated to type sets (here, CommonInventoryManagerControllerTypes). This is optional; type_sets are not needed very often, including here. If they were not defined, the compiler would generate all the necessary components implicitly in the same directory indicated by go_package, alongside the processor. If type_sets are defined, it will try to find types elsewhere before deciding to generate some on its own.

Separate type_sets can be used for example to reduce unnecessary generated code, especially if we have multiple processors using similar underlying types. In the Inventory manager, it was done for demonstration purposes only. Let’s see this file though: https://github.com/cloudwan/inventory-manager-example/blob/master/proto/v1/controller/common_components.proto.

We define here input, store, and syncer components. Note that go_package is different compared to the one in the processor file. It means that generated components will reside in a different directory than the processor. The only benefit here is this separation, but it’s not strictly required.

Finally, note that in the processor we only indicated what the controller is doing, and the connections. However, the implementation is not here yet; it will be in Golang. For now, let's jump to the Node declaration, which can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/proto/v1/controller/inventory_manager_ctrl_node.proto

Node is a component managing Processor instances and is responsible for dispatching real-time updates from all watchers to processors, which are scoped in this example by an Inventory Manager Project (inventory-manager.edgelq.com/Project).

In this example file, we declare a Node called “InventoryManagerCtrl”. The processors we want to attach form a one-element array containing AgentPhantomProcessor. We could potentially attach more processors, under one condition: all must be scoped by exactly the same object. Since AgentPhantomProcessor is scoped by a Project (inventory-manager.edgelq.com/Project), other processors would need the same.

The compiler parsing such a Node definition will automatically detect the Scope and all Input resources. What we need to define is:

  • The sharding method: since the scope is a Project, the standard sharding for it is “byProjectId”. For an organization it would be “byOrgId”, for a service, “byServiceId”. All three can optionally be replaced with “byIamScope”. We will return to this when talking about scaling.
  • Dispatchment: When Node gets snapshot + real-time updates from watchers for all resources (Scope + Input), it needs to also know how resources should be grouped.
    • Param scope_grouping tells us how the Project is identified. Normally, we want to define the Project ID using its name; if you are unsure, just pass method: NAME for scope_grouping. As a result, the Node will extract the name field from a Project and use it as a Processor Identifier.
    • Param input_groupings is defined per each detected input resource. In the processor, we defined monitoring.edgelq.com/PhantomTimeSerie and inventory-manager.edgelq.com/ReaderAgent (shortened to PhantomTimeSerie and ReaderAgent). Input groupings instruct the Node how each resource instance of a given type should be classified - that is, how to extract the ID of the corresponding processor instance. Resource ReaderAgent is a child of an inventory-manager.edgelq.com/Project instance according to the api-skeleton; therefore, we indicate that the grouping method is of the “NAME” type, and the Node can figure out the rest. Resource PhantomTimeSerie is a bit more tricky, because its parent resource is not inventory-manager.edgelq.com/Project, but monitoring.edgelq.com/Project. Still, the Node needs a method to extract the name of the inventory-manager.edgelq.com/Project from the monitoring.edgelq.com/PhantomTimeSerie instance. Because this can't be done in a declarative way (as of now, the compiler does not figure out things by string value, as the IAM Authorizer does), we must pass the CUSTOM method. It means that in Golang we provide our own function for getting the processor ID.

When deciding on the dispatchment annotation, we need to know that the Node has a customizable way of defining the Processor Identifier. We need to provide a way to map a <Resource Instance> into a <Processor Identifier>, and we need to do this for the Scope AND all Input resources. Method NAME passed for either the scope or an input resource means that the Node should just call the GetName() function on the resource instance to get an Identifier. This works for same-service resources, but not for others like PhantomTimeSerie - the name returned by GetName would point to the Project in the monitoring service.

Although the GetName() method on the ReaderAgent instance returns the Name of the ReaderAgent rather than a Project, the Node is able to notice that the Name of the ReaderAgent also contains the name of the project.
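
A sketch of such a CUSTOM grouping function (import paths and accessors are illustrative): it maps a monitoring.edgelq.com/PhantomTimeSerie instance to the name of the inventory-manager.edgelq.com/Project whose processor should receive it, relying on both services sharing the projects/{project} segment:

package agentphantom

import (
	// Hypothetical import paths for the generated resource packages.
	pts "github.com/cloudwan/edgelq/monitoring/resources/v4/phantom_time_serie"
	improject "github.com/your_organization/your_repository/resources/v1/project"
)

// phantomTimeSerieProcessorId extracts the processor identifier: the name of
// the inventory-manager Project owning this PhantomTimeSerie. The project ID
// is shared between both services' name patterns.
func phantomTimeSerieProcessorId(res *pts.PhantomTimeSerie) string {
	projectId := res.GetName().GetProjectId() // illustrative accessor
	return improject.NewNameBuilder().SetId(projectId).Name().String()
}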

Applications example - Processor and Node definitions.

We have a more interesting example of a Controller in applications.edgelq.com. We have a Controller processor responsible for Pod management; we say that “Pod management” is a business logic task. There are two things we want from such a processor:

  • Create a Pod instance per each matched Device and Distribution.
  • Whenever the Device goes offline, we want to mark all its Pods as offline.

Business notes:

  • One Device can host multiple Pods, and Distribution can create Pods across many devices. Still, Device+Distribution can have one Pod instance at most.
  • Pod can be deployed manually, not via Distribution. Not all pods are of distribution type.
  • When the Device is online, it updates pod statuses itself; but when it goes offline, the controller needs to do it. Note that this means the controller basically needs to track pods on offline devices.
  • If the device is offline, the pod status should be “Unknown”.
  • Resources Pod, Distribution, and Device are project-scoped; however, Device belongs to the devices.edgelq.com service, and the other resources to applications.edgelq.com. Still, the Project is our primary scope.

With this knowledge, we can draft the following Processor declaration: https://github.com/cloudwan/edgelq/blob/main/applications/proto/v1/controller/pods_processor.proto

Compared to the previous example, goten.controller.type_set is declared in the same file, but for now let's skip this part and talk about the processor first. There, we have the PodsProcessor type defined. As we can deduce from the business notes, the Scope resource should be “Project”, and the inputs should be clear too. Then we have two transformers, one per business task defined. You should also note that we have two additional definitions of the applications.edgelq.com/Pod instance: one is DistributionPod, the other is UnknownStatePod. As mentioned in the business notes, not all pods belong to a distribution, and pods with unknown states are also a subset of all pods. Those extra definitions can be used to differentiate between types and help write proper controllers.

Transformer DistributionController is of the already known type, owner/ownee. In this case, each Pod instance is owned by a unique combination of Distribution and Device. When either of the parents is deleted, all associated pods are automatically deleted.

Another transformer, UnknownStateTracker, is of a different kind: Generic. This type of transformer takes some number of inputs and produces some number of outputs. In this particular case, the inputs are Devices and Pods, where each Pod belongs to a specific Device. For each offline Device, we want to mark its Pods as being in the Unknown state. The generic type requires more code; developers need to handle all input events - additions, updates, and deletions. For each change in the input resources, a new snapshot of the output (or a DIFF against the snapshot) is required.

One alternative we could have used is a Decorator:

{
  name : "UnknownStateStatusDecorator"
  decorator : {
    resource : "Pod"
    additional_inputs : [ "Device" ]
  }
}

The decorator takes the same resource on the output; in this case, when a Pod changes, the decorator function is called to decorate the Pod resource. There, we could get the Device record owning the pod, check the Device status, and then mark the Pod status. If the device changes, it would trigger a re-compute of all Pods belonging to it (the decorator is called again). We did not use a decorator here, because the Controller should only mark the Pod status as UNKNOWN when the Device is offline; when the Device is online, it manages its Pod statuses itself. This “shared” ownership means that the decorator was not exactly suitable. Instead, we use the “generic” type and output only pods that have UNKNOWN status. The controller needs to run UpdatePod only for pods of offline devices. If a device gets online, the controller should “forget” about its pods. What we mean is: UnknownStateTracker DELETES pods from the output collection if the device becomes online (which is not the same as actually deleting the pods!). This is why the output from UnknownStateTracker is UnknownStatePod, not Pod - we want to show that the output contains pods with unknown status, not all pods. We will come back to this when commenting on the implementation in Go.

We will also be re-checking offline pods periodically, producing snapshots (per scope project) after each period. By default, the transformer would be triggered only when some Pod/Device changes (create, update, delete).

Now going back to goten.controller.type_set - there, we defined only the Store component for the Pod resource, with one custom index, even though we have multiple resource definitions in the processor. As we mentioned, this type set is an optional annotation, and the compiler can generate the missing bits on its own. In this particular case, we wanted a custom index for pods: the field path spec.node points at the Device owning the Pod. This index just gives us some convenience in the code later on. Anyway, this is another use case for type sets: the ability to enhance the default types we would get from the code-generation compiler.

Node definition can be found here: https://github.com/cloudwan/edgelq/blob/main/applications/proto/v1/controller/applications_ctrl_node.proto

However, in this case, it is pretty much the same as in Inventory Manager.

Overview of generated code and implementing missing bits

The best way to discuss controller code is, again, by examples; we will check the Inventory Manager and Applications examples.

Inventory manager

In InventoryManager, we want the following feature: a time series showing the history of online/offline changes per agent. First, each agent runtime should be sending an online signal within a fixed interval (1 minute), using the CreateTimeSeries call from monitoring. When an agent goes offline, though, it cannot send an “offline” signal itself - instead, we need to generate a PhantomTimeSerie object per each agent, so it can generate data when the original metrics are missing. This is how we obtain the online/offline history: zeroes fill the offline periods, “ones” the online parts. This is the task we implemented for the Inventory Manager.

The controller code can be found here: https://github.com/cloudwan/inventory-manager-example/tree/master/controller/v1.

As with the rest of the packages, files with .pb. in their names are generated; the others are handwritten. The common directory there contains only generated types, as pointed out by the proto file for type_sets. More interesting is the agent phantom processor, found here: https://github.com/cloudwan/inventory-manager-example/tree/master/controller/v1/agent_phantom.

We should start examining examples from there.

The first file is https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/agent_phantom_processor.pb.go.

It contains the Processor object and all its methods. We can notice the following:

  • In the constructor NewAgentPhantomProcessor, we are creating all processor components as described by the protobuf file for a processor. Connections are done automatically.
  • Constructor gets an instance of AgentPhantomProcessorCustomizer, which we will need to implement.
  • The processor has a “runner” object, this is the “heart” of the processor handling all the events.
  • Processor has a set of getters for all components, including runner and scope object.
  • Processor has AddExtraRunner function, where we can add extra procedures running on separate goroutines, doing some extra tasks not predicted by processor proto definition.
  • Interface AgentPhantomProcessorCustomizer has an extra default partial implementation.

In the customizer, we can:

  • Add PreInit and PostInit handlers

    PreInit is called on a processor whose internal components are not yet initialized. PostInit is called after the initial construction is completed (but before the processor runs).

  • We have StoreConfig calls, which can be used to additionally customize Store objects. You can check the code to see the options; one option is to provide an additional filter applied to the store, so we don't see all resources.

  • Functions ending with ConfigAndHandlers are for Syncer objects. We will have to implement them. This is for the final configuration & tuning of Syncers.

  • Functions ending with ConfigAndImpl must be used to customize transformers.

  • We can also hook a handler in case the Scope object changes itself (like, some fields in the Project). Usually, it is left empty, but we may hit some use cases for it still.

After reviewing the processor file, you should see the processor customizer implementation. This is a handwritten file, here example for InventoryManager: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/agent_phantom_processor.go.

Constructor we can define however we want. Then, for implementation notes:

  • For PhantomTimeSerieStoreConfig, we want to filter out PhantomTimeSeries that are not of the specific metric type, or that don't have the specific meta owner type. This may often be redundant, because we can define the proper filter for the PhantomTimeSerie objects themselves (in a different file; we will come back to it).

  • In the function AgentPhantomTransformerConfigAndImpl, we need to return an implementation handler that satisfies the specific interface required by the transformer. In the config object, we usually provide a reconciliation mask. These masks prevent triggering the transformer function for non-interesting updates. In this example, we are checking the field paths online, activation.status, and location. It means that if any of those fields change in a ReaderAgent, we trigger the transformer to recompute PhantomTimeSerie objects (for this agent only). The reconciliation mask helps reduce unnecessary work: if someone changed, say, the display name of the agent, no work would be triggered.

  • In the function PhantomTimeSerieSyncerConfigAndHandlers we are customizing the Syncer for PhantomTimeSerie objects. In the config part, we almost always need to provide an update mask: the fields that are maintained by the controller. We may also specify what to do in case of duplicated resource detection - by default we delete duplicates, but it may be OK to provide this value explicitly (AllowDuplicates is false). Apart from that, there is a quirk about PhantomTimeSerie instances:

    The fields resource and metric are non-updatable. Because of that, we need to disable updates (UpdatesDisabled). It is recommended to review all options in the code itself to see what else can be changed. Handlers for the syncer are a bit tricky here: we could have just returned NewDefaultPhantomTimeSerieSyncerHandlers, but we need some special handling, which is common for PhantomTimeSerie instances. We will come back to it later.

  • In the function PostInit we are providing an extra goroutine, ConnectionTracker. It does work not predicted by the controller framework and needs some IO. For those reasons, it is highly recommended to delegate this work to a separate goroutine. This component will also get updates from the ReaderAgent store (create, update, delete).

Let’s first discuss the phantomTimeSerieSyncerHandlers object. It extends the generated common.PhantomTimeSerieSyncerHandlers. Custom handlers are quite powerful tools: we can customize even how the object is created/updated/deleted. By default, it uses the standard Create/Update/Delete methods, but it does not need to be this way. In this particular case, we want to customize identifier extraction from the PhantomTimeSerie resource instance. We created a key instance for this, defined here: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/agent_phantom_key.go.

By default, the identifier of a resource is simply extracted from the name field. However, PhantomTimeSerie is very special in this regard: this resource has a non-predictable name! CreatePhantomTimeSerie requests must not specify a name; it is assigned by the server during creation. This has nothing to do with the controller; it is part of the PhantomTimeSerie spec in monitoring. For this reason, we are extracting fields that we know will be unique. Since we know that for a given ReaderAgent we will generate only one “Online” metric, we use just the agent name extracted from metadata along with the metric type value. This customized syncer will then match desired and observed PhantomTimeSerie resources using those custom IDs, as in the sketch below.
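To make the idea concrete, here is a minimal, self-contained sketch of such a key type; the names are hypothetical stand-ins for what agent_phantom_key.go actually defines:

// phantomTimeSerieKey sketches a custom syncer identifier: since the server
// assigns PhantomTimeSerie names, desired and observed instances are matched
// by fields known to be unique rather than by the name field.
type phantomTimeSerieKey struct {
    AgentName  string // extracted from metadata.ownerReferences
    MetricType string // extracted from metric.type
}

// String renders the key, so the syncer can compare desired vs observed IDs.
func (k phantomTimeSerieKey) String() string {
    return k.AgentName + "/" + k.MetricType
}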

The connection tracker defined here: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/connection_tracker.go shows an example of a controller task that was not predicted by the controller framework. It runs on a separate goroutine; however, OnReaderAgentSet and OnReaderAgentDeleted are called by the processor runner, the main goroutine of the processor. This mandates some protection. Golang’s channels could perhaps have been used, but note that they have limited capacity: if they get full, the processing thread stalls. Maps with traditional locks are safer in this regard and are often used in SPEKTRA Edge; this solved some issues when there were sudden floods of updates. The benefit of maps is that they can merge multiple updates at once (overrides). With channels, we would need to process every individual element. The pattern is sketched below.
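Below is a minimal sketch of this map-with-lock pattern; all types are placeholders, not the actual connection tracker code:

package tracker

import "sync"

// Agent stands in for the ReaderAgent resource in this sketch.
type Agent struct {
    Name   string
    Online bool
}

// Tracker merges updates per agent name: the processor runner writes into the
// map, while the tracker's own goroutine periodically drains it. Repeated
// updates for the same agent override each other instead of queuing up.
type Tracker struct {
    mu      sync.Mutex
    pending map[string]*Agent
}

func NewTracker() *Tracker {
    return &Tracker{pending: map[string]*Agent{}}
}

func (t *Tracker) OnAgentSet(current *Agent) {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.pending[current.Name] = current // never blocks, unlike a full channel
}

// drain hands the merged updates to the tracker's goroutine and resets the map.
func (t *Tracker) drain() map[string]*Agent {
    t.mu.Lock()
    defer t.mu.Unlock()
    out := t.pending
    t.pending = map[string]*Agent{}
    return out
}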

Going back to the comments about implementation: as we said, we are ensuring monitoring has a time series per agent, showing whether the agent was online or offline at a given time point. However, to synchronize the “online” flag, we periodically ask monitoring for time series for all agents, then flip flags if they mismatch the desired value.

Let’s move forward to the files for the transformer. The generated one can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/agent_phantom_transformer.pb.go.

Notes:

  • The interface we need to notice is AgentPhantomTransformerImpl; this one needs an implementation from us.
  • The config structure AgentPhantomTransformerConfig needs to be provided by us.
  • The transformer code already handles all events related to input resources, including deletions. This reduces the required interface of AgentPhantomTransformerImpl to a minimum: we just need to compute the desired resources for a given input.
  • Note that this config and impl are provided by your customizer implementation for the processor.

The file with the implementation for the transformer is here: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/agent_phantom/agent_phantom_transformer.go.

Notes:

  • For a single ReaderAgent, we may potentially have N output resources (PhantomTimeSerie here).
  • When we create a DESIRED PhantomTimeSerie, note that we provide only the parent part of the name field: when we call NewNameBuilder, we are NOT calling SetId (see the sketch after this list). As part of the PhantomTimeSerie spec, we can only provide parent names, but never our own ID; this must be generated by the server. Note that this combines with the custom PhantomTimeSerie Syncer handlers, where we extract the ID from metadata.ownerReferences and metric.type.
  • PhantomTimeSerie is constructed with service ownership info and the ReaderAgent. This ensures that we will own this resource, not another service. Metadata ensures PhantomTimeSeries will be cleaned up (this is an additional cleanup guarantee, as a transformer of the owner/ownee type can provide the same functionality).
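Here is a toy, self-contained illustration of the parent-only name idea; nameBuilder is a stand-in for the generated NameBuilder, not the real API:

package phantom

import "fmt"

// nameBuilder mimics the generated NameBuilder just enough to show why we
// never call SetId for a DESIRED PhantomTimeSerie.
type nameBuilder struct {
    projectID, id string
}

func (b *nameBuilder) SetProjectId(p string) *nameBuilder { b.projectID = p; return b }
func (b *nameBuilder) SetId(id string) *nameBuilder       { b.id = id; return b }

// Name renders the resource name; with no SetId call, only the parent part is
// produced, and the server completes the ID during creation.
func (b *nameBuilder) Name() string {
    if b.id == "" {
        return fmt.Sprintf("projects/%s", b.projectID)
    }
    return fmt.Sprintf("projects/%s/phantomTimeSeries/%s", b.projectID, b.id)
}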

To summarize, when implementing a Processor, it is necessary to (at the minimum):

  • Provide all transformer implementations and define their configs.
  • Provide an implementation of the processor customizer; at the minimum, it needs to provide objects for syncers and transformers.

We need to provide the missing implementations not just for processors, but for nodes too. For the InventoryManager example, you can typically find three code-generated files for nodes.

The main file to review is the one ending with pb.node.go. In the constructor, it creates watcher instances for all scope and input resources of all processors. It manages a set of processors per project (in this case we have one processor, but more could be available). All updates from watchers are distributed to the relevant processors. It is quite a big file; initially, you may just remember that this component watches all collections in real-time and pushes updates to Processors, so they can react. However, at the top of the file, there are four types you need to see:

  • InventoryManagerCtrlFieldMasks

    Its generic name is <NodeName>FieldMasks.

  • InventoryManagerCtrlFilters

    Its generic name is <NodeName>Filters.

  • InventoryManagerCtrlCleaner

    Its generic name is <NodeName>Cleaner.

  • InventoryManagerCtrlNodeCustomizer

    Its generic name is <NodeName>NodeCustomizer.

Of these types, the most important for developers is the NodeCustomizer. Developers should implement its functions:

  • Filters() needs to return filters for all input resources (from all processors) and scope resources. This is important: the controller should only know the resources it needs to know!
  • FieldMasks() needs to return field masks for all input resources (from all processors) and scope resources. It is very beneficial to return only the fields the controller needs to know, especially considering that the controller will need to keep those objects in RAM! However, be sure to include all needed fields: those needed by dispatchment (typically name), those needed by transformers and reconciliation masks, and all fields required by syncers (update masks!).
  • The function GetScopeIdentifierForPhantomTimeSerie (or GetScopeIdentifierFor<Resource>) was generated because in protobuf, in the dispatchment annotation for PhantomTimeSerie, we declared that the identifier uses the CUSTOM method!
  • The function CustomizedCleaner should return a cleaner that handles orphaned resources in case a Scope resource (Project here) is deleted, but some child resources remain. However, in 99.99% of cases, this functionality is not needed: when the Project is deleted, all child resources are cleaned up asynchronously by the db-controller.
  • The function AgentPhantomProcessorCustomizer must return a customizer for each Processor and scope object.

Developers need to implement a customizer; for the inventory manager we have the file: https://github.com/cloudwan/inventory-manager-example/blob/master/controller/v1/inventory_manager_ctrl_node.go.

Notes:

  • For GetScopeIdentifierForPhantomTimeSerie, we need to return the name object of the InventoryManager project. Deriving it from the name of a PhantomTimeSerie is quite easy though. We may add some autodetection in the future: if the name pattern matches across resources, the developer won’t need to provide those simple functions.
  • In the FieldMasks call, the mask for ReaderAgent needs to be checked against all fields used in the processor - the reconciliation mask and the connection tracker. The name should always be included.
  • In the Filters call, we need to consider a couple of things (see the sketch after this list):
    • We may have a multi-region environment, and each region will have its own controllers. Typically, for regional resources, we should get those belonging to our region (ReaderAgents or PhantomTimeSeries). For Projects, we should get those that can be in our region, so we filter by enabled regions.
    • Resources from core SPEKTRA Edge services should be filtered by our service, ideally by owning service. Otherwise, we would get PermissionDenied.
  • The customizer-for-processor construction should be straightforward. Any extra params were provided upfront and passed to the node customizer.
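The sketch below expresses those filter rules as plain filter strings; the field paths are illustrative assumptions, not exact schema paths:

package node

import "fmt"

const service = "services/inventory-manager.edgelq.com"

// exampleFilters sketches the filtering rules above. Regional resources would
// additionally be narrowed to our own region with a region condition taken
// from the resource schema.
func exampleFilters() map[string]string {
    return map[string]string{
        // Core SPEKTRA Edge resources (PhantomTimeSerie): only those owned by
        // our service; otherwise we would get PermissionDenied.
        "PhantomTimeSerie": fmt.Sprintf("metadata.services.owningService = %q", service),
        // Projects: only tenants that enabled our service.
        "Project": fmt.Sprintf("enabledServices CONTAINS %q", service),
    }
}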

To see a final top-level implementation bit for the business logic controller for InventoryManager, see https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagercontroller/main.go.

Find the NewInventoryManagerCtrlNodeManager call; this is how we construct the node manager and pass our node customizer there. This concludes the example.

Applications controller implementation

Controller implementation for applications can be found here: https://github.com/cloudwan/edgelq/tree/main/applications/controller/v1.

It contains additional elements compared to the Inventory Manager, so let’s go through it, skipping the parts it shares with the previous example.

Starting from the processor: as described in the protobuf part, we have essentially two transformers and two syncers, for two different sub-tasks of general pod processing.

Let’s start with the transformer called DistributionController. As a quick recap: this transformer produces Pods based on combined Device and Distribution resources; each matched Device + Distribution pair should produce a Pod, called a DistributionPod in protobuf. Not all pods belong to a Distribution though! Some may be deployed manually by clients.

You should start examining the code from the processor customizer (the pods_processor.go file).

In the customizer function DistributionControllerConfigAndImpl we are creating a config that reacts to specific field path changes in Distributions and Devices. As of now, a distribution is matched with a device based solely on the metadata.labels field path in Device, so this is what we check in Device. For Distribution, we want to recompute pods if the selector or pod template changes; other updates to Distributions should not trigger Pod re-computation! Also, note that the implementation object can have Store instances, so we can access the current state. This will be necessary.

In the transformer, https://github.com/cloudwan/edgelq/blob/main/applications/controller/v1/pods/distribution_controller.go, there are additional things to learn. Since this transformer is of the meta-ownee type, BUT we have two owners, we must implement two functions: one computing Pods for a Distribution across all matched devices, the other computing Pods for a Device across Distributions. Note that each DESIRED generated pod does not have a clear static name value; GenerateResourceIdFromElements may have non-deterministic elements. We will need to reflect this when configuring the Syncer for the DistributionPod type.

This is how pods are generated. To continue analyzing the behavior of Distribution pods, go back to the customizer for the processor and find the function PodFactoryConfigAndHandlers. A config with only an update mask seems ordinary, but there is an example of limits integration here. First, we construct the default handlers. Then, we attach the limits guard. Note that Pods are subject to limits! There is a possibility that Devices/Distributions fit within the plan, but we would exceed the limit for Pods.

In such a situation, Syncer must:

  • Stop executing CreatePod if we hit a limit.
  • UpdatePod should continue being executed as normal.
  • DeletePod should be executed as normal.

Once we have free limit (as a result of a plan change or other pods being deleted), creation should resume. The limits guard is a component that must be used if we may be creating resources covered by a limits plan! Note also that in the PostInit call, we must additionally configure the limits guard. A toy sketch of this behavior follows.
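A toy sketch of those rules (placeholder types, not the real limits guard API):

// limitsGuard sketches the behavior listed above: creations stop at the
// limit, while updates and deletions always proceed.
type limitsGuard struct {
    used, limit int
}

func (g *limitsGuard) CanCreate() bool { return g.used < g.limit }
func (g *limitsGuard) OnCreated()      { g.used++ }
func (g *limitsGuard) OnDeleted()      { g.used-- } // freed limit lets creations resume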

For the pod factory syncer, we also provided some other syncer customizations:

  • Identifiers of pods have a custom implementation, since the pod name may be non-deterministic.
  • We must be very careful about what we delete! Note that in the protobuf section for the PodFactory Syncer, the desired state takes pods from the DistributionController transformer, but the observed state contains ALL pods! To prevent wrong deletions, we must provide an additional CanProceedWithDeletion handler.

Let’s move on to the next transformer and syncer, handling unknown-state pods. As a recap: the controller must mark pods whose device went offline with UNKNOWN status. The set of unknown pods is a collection of its own (UnknownStatePod). When the Device gets back online, we will need to remove the pods belonging to it. We want to recompute the snapshot of unknown-state pods periodically - so this is what we declared in protobuf.

Starting with the transformer: the function UnknownStateTrackerConfigAndImpl used to customize it can be examined in the Customizer implementation for PodsProcessor. Note that the config object now has a SnapshotTimeout variable. This timeout decides how often the desired collection is re-computed (in this case!). Note that we declared this transformer as the periodic-snapshot generic type.

See the generated transformer file and the handwritten customization.

From the PB file, note that the minimum required implementation is CalculateSnapshot, which is called periodically as configured.

However, if you examine it carefully, you can notice code like this:

onDeviceSetExec, ok := t.Implementation.(interface {
    OnDeviceSet(
        ctx context.Context,
        current, previous *device.Device,
    ) *UnknownStateTrackerDiffActions
})
if ok {
    t.ProcessActions(
        ctx,
        onDeviceSetExec.OnDeviceSet(ctx, current, previous),
    )
}

Basically, all generic transformers allow additional custom interfaces for implementations: generally, On<Resource>Set and On<Resource>Deleted calls for each input resource. Those allow us to update desired collections much faster!

There is also an additional benefit of implementing those optional methods:

  • For generic transformers without periodic snapshots, implementing these methods avoids the CalculateSnapshot call entirely. In regular generic transformers, if the implementation does not implement the On<Resource><SetOrDeleted> call, the snapshot is triggered with a delay specified by the SnapshotTimeout variable (a different behavior than the periodic-snapshot type!). To avoid extra CPU work, it is recommended to implement the optional methods.

For this particular transformer, in the file https://github.com/cloudwan/edgelq/blob/main/applications/controller/v1/pods/unknown_state_tracker.go, we implemented basic snapshot computation, where we get all pods with unknown statuses based on the last heartbeat from devices. However, we also implemented OnDeviceSet and OnDeviceDeleted (sketched below). The set handler is especially important: when the Device gets online, we want to remove pods with unknown states from the desired collection ASAP. If we waited for the timeout (more than 10 seconds), there is a possibility the Device will mark its pods online, but our controller would mark them unknown until the timeout happens. This mechanism may be improved in the future though; even now we risk two to three unnecessary additional updates.
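A toy sketch of these fast-path handlers (placeholder types; the real code returns generated diff-action objects rather than mutating a map):

// Device stands in for the devices.edgelq.com Device resource.
type Device struct {
    Name   string
    Online bool
}

// tracker keeps the desired unknown-state collection keyed by device name.
type tracker struct {
    unknownPodsByDevice map[string][]string
}

// OnDeviceSet removes a device's pods from the desired unknown-state
// collection as soon as the device comes back online, instead of waiting
// for the next periodic snapshot.
func (t *tracker) OnDeviceSet(current, previous *Device) {
    if current.Online {
        delete(t.unknownPodsByDevice, current.Name)
    }
}

// OnDeviceDeleted drops the device's entries entirely.
func (t *tracker) OnDeviceDeleted(previous *Device) {
    delete(t.unknownPodsByDevice, previous.Name)
}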

Going back to the customizer (file pods_processor.go), finally see UnknownStateMaintainerConfigAndHandlers. We are again using a Syncer for pods, but it’s a separate instance with a different update mask: we want to control only specific fields, related to the status. Note that, as with Distribution pods, the observed state contains ALL pods, but the desired state contains only those with unknown status. To avoid bad deletions, we are disabling deletions entirely - creations too, as we don’t need them.

We can now leave the processor and examine the node customizer, which can be seen here: https://github.com/cloudwan/edgelq/blob/main/applications/controller/v1/applications_ctrl_node.go.

It is very similar to the customizer for the Inventory Manager, with some additional notes:

  • Note we are passing a limits observer instance for the limits guard integration. We will return to it shortly.
  • For the Filters() call, note that the Distribution resource is non-regional - in fact, its instances are copied to all regions where the project is present in a multi-region environment. In those situations, we should filter by the metadata.syncing.regions field. This will return all distributions for all projects enabled in our region, which is exactly what we need.

For the limits guard integration, also see the controller main.go file: https://github.com/cloudwan/edgelq/blob/main/applications/cmd/applicationscontroller/main.go.

Note that there we are constructing the limits observer:

limitsObserver := v1ctrllg.NewLimitTrackersObserver(ctx, envRegistry)

We also need to run it:

g.Go(func() error {
    return limitsObserver.Run(gctx)
})

The limits observer instance should be global for the whole NodesManager and be declared beforehand, in main!

Scaling considerations

Note that processors, to work properly, need to have:

  • Scope object (Project typically)
  • All input collections (snapshot of each of them)
  • All desired collections

In the case of multiple processors, input collections may be shared, but that’s not the point here. The point is that the controller will need sufficient RAM, at least for now. This may be improved in the future, for example with disk usage. It won’t change the fact that the Controller node needs to handle all assigned scope objects and their collections.

First, to minimize the memory footprint, provide field masks for all collections, but be careful to include all necessary paths - there have been bugs because of missing values! Then we need some horizontal scaling. For this, we use sharding.

Sharding divides resources into groups. Note that in both examples we used byProjectId, declared explicitly in the protobuf files for the Inventory Manager and Applications controllers. This project ID sharding means that each Node instance will get a share of projects, not all of them. If we have only one Node, it will contain data for all projects. But if we have multiple Nodes, projects will be spread across them. Sharding by project also guarantees that resources belonging to one project will always belong to the same shard; this is why it is called byProjectId. For each resource, we take its name, extract the project ID part from it, and hash it. The hash is some large integer value, like int64. We need to know how big the ring is: 16, 256, 4096… We take the hash modulo the ring size and get the shard number. For example, the byProjectId hash mod 16 gives us the byProjectIdMod16 shard key (see the sketch below). Those values are saved in metadata.shards for each resource. This is done by sharding store plugins on the server side. Note that the field metadata.shards is a map<string, int64>. See https://github.com/cloudwan/goten/blob/main/types/meta.proto.
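A minimal sketch of this computation, assuming an FNV-1a hash (the actual hash function used by the sharding plugins may differ):

package sharding

import "hash/fnv"

const ringSize = 16

// ByProjectIdMod16 hashes a project ID and folds it onto the ring, yielding
// the value stored under metadata.shards["byProjectIdMod16"].
func ByProjectIdMod16(projectID string) int64 {
    h := fnv.New64a()
    _, _ = h.Write([]byte(projectID))
    return int64(h.Sum64() % ringSize)
}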

The ring size we use everywhere is 16 for now, meaning we could potentially divide the work across 16 nodes for any controller.

When the first node starts, it gets assigned all 16 shards, values from 0 to 15. If a second node starts, it will get some random starting point, say shards 10-15, while the first node keeps 0-9. When a third node starts, it grabs some new random range, like 0-3. The remaining nodes are left with 4-9 and 10-15. This can continue until we are blocked by the ring size and scaling is no longer effective.

Note that when a node starts, it can lower the pressure on at most two existing nodes, not all of them. For this reason, we have a thing called Node Managers in all controllers, in all examples. We build node managers in the main.go files in the first place. Node managers start with one to four virtual node instances; the most common is two. This way, when a new runtime starts, we have a good chance of taking pressure off more instances.

Node managers are responsible for communicating with each other and assigning shards to their nodes. As of now, we use a Redis instance for this purpose. If you examined the generated files for nodes, you saw that each Node has a method for updating the shard range. Shard ranges add additional filter conditions to the filters passed from the node customizer instance.

With the Kubernetes Horizontal Pod Autoscaler we solve some scaling issues by splitting projects across more instances. This gives us some room to breathe. But we have two remaining issues:

  • A super large project could potentially outgrow the controller.
  • Super large shards (lots of projects assigned to the same value) can be too massive.

For the first issue, we could leverage a multi-region environment, as we already did in the example: we get resources mostly from our own region, so large projects can be further split across regions. Still, we may get hit with a large project-region.

For the second issue, we could switch to a larger ring size, like 256. However, it means we would have lots of controller instances, like 20 or more. Controllers also induce their own overhead, meaning that we would waste plenty of resources just on the large number of instances.

The presented techniques still provide us with some flexibility and horizontal scaling. To scale further, we can:

  • Introduce improvements in the framework, so it can compress data, use disk, or even “forget” data and retrieve it on demand.
  • Use diagonal scaling: use horizontal autoscaling first (like in Kubernetes); then, if the number of instances hits some alert (like 4 pods), we can increase the assigned memory in the YAML declaration and redeploy.

Diagonal autoscaling with automation in one axis may be the most efficient, even though it requires a little reaction from the operator to handle the alert and increase values in the YAML. Note, however, that this simple action also has a potential for automation.

2.3.3 - Registering your Service to the SPEKTRA Edge platform

How to register your service to the SPEKTRA Edge platform.

While Goten provides a framework for building services, SPEKTRA Edge provides a ready environment with a common, pre-defined set of services. This document describes a selected set of specific registrations needed by the developer; other services can and should typically be used with the standard API approach.

Integration with SPEKTRA Edge is practically enforced/recommended on multiple levels:

  • Your service needs to register itself in meta.goten.com, otherwise it simply can’t work.
  • Your resource model must be organized around the following top resources:
    • meta.goten.com/Service
    • iam.edgelq.com/Organization
    • iam.edgelq.com/Project
      • For multi-tenant cases, you need to have your own Project resource in the api-skeleton.
  • You need to follow the authentication & authorization model of iam.edgelq.com.
  • Although you may skip it, it is highly recommended to use audit.edgelq.com to audit the usage of your service, and monitoring.edgelq.com to track its usage. Your service activities in core SPEKTRA Edge are monitored by those services.
  • If a service needs to control the amount of resources, limits.edgelq.com is highly recommended.

The above list contains mandatory or highly recommended registrations, but practically all services are at your disposal. SPEKTRA Edge also provides edge devices with their own OS, where you can deploy your agent applications. Hardware and containers are managed using the services devices.edgelq.com and applications.edgelq.com.

A service with a high level of registration is the example: https://github.com/cloudwan/inventory-manager-example

It provides more insights into how custom services can be integrated with core SPEKTRA Edge services.

Fixtures controller

Before jumping into the individual SPEKTRA Edge registrations, one element common to all of them is the fixtures controller.

The fixtures controller is responsible for creating & updating resources in various services that are needed for:

  • Correct operation of a SPEKTRA Edge Service. Example: service iam.edgelq.com needs a list of permissions from each service that describe what users can do in the given service. If permissions are not maintained in IAM, then SPEKTRA Edge will have trouble helping with Authorization, which would render the Service non-operable. As part of bootstrapping a Service, Permission fixtures must be submitted by the interested Service.

  • Correct operation of a Project or Organization.

    Example: The user who created a given Project/Organization automatically gets an administrator RoleBinding resource in the created Project or Organization. Without it, the creator of a Project/Organization would not be able to access their entity. It would render it non-operable.

Some fixtures are a bit more dynamic. For example, when an existing Project enables some particular service, that Service automatically gets a RoleBinding in the project, which allows the Service to manage its resources associated with it. Without it, the Service would not be able to serve the project, rendering it non-operable.

Those cases are handled by the fixtures controller; by convention, the fixtures controller is part of the controller runtime.

Be aware that the fixtures controller does not only keep resources in sync by creating/updating them. It also detects UNNEEDED fixtures: if a resource exists but is not defined, it is deleted. This is necessary to clean up garbage, as garbage also has the potential to make the Service/Project/Organization non-operable and full of errors.

The fixtures controller works this way: it computes a DESIRED set of resources. Then it uses CRUD to get the observed state, compares it with the desired one, and finally executes a set of Create/Update/Delete calls as necessary. If there is a dynamic change in the desired state, the controller computes & executes a new set of commands. If there is a dynamic change in the observed state, the fixtures controller will attempt to fix it. A toy sketch of this loop follows.
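A toy sketch of that reconciliation loop, with plain maps standing in for the real resource types and CRUD calls:

// reconcile compares desired fixtures against the observed state and returns
// the names to create, update, and delete to make the two converge.
func reconcile(desired, observed map[string]string) (creates, updates, deletes []string) {
    for name, want := range desired {
        got, ok := observed[name]
        switch {
        case !ok:
            creates = append(creates, name)
        case got != want:
            updates = append(updates, name)
        }
    }
    for name := range observed {
        if _, ok := desired[name]; !ok {
            deletes = append(deletes, name) // undefined fixtures are garbage
        }
    }
    return creates, updates, deletes
}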

Fixtures are a set of YAML files in the fixtures directory. They are either completely static or templated (with <VARIABLE> elements). Templated fixtures are created <FOR EACH> Project, Organization, or Service - typically, but not limited to those. Those “for each” fixtures provide a source of dynamic updates to the desired state.

Fixtures are built into the controller image during compilation. Then the config file decides the rest, like how variables are resolved. See the basic fixtures controller config in: https://github.com/cloudwan/inventory-manager-example/blob/master/config/controller.proto.

For fixtures, for every resource type, it is necessary to include the access package for the related resources. For example, see https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagercontroller/main.go, and find the type imports needed by the fixtures controller! This list must include all resource types the controller can create OR requires (via the forEach directive in the config file).

In the registration sections below, we will discuss various fixtures:

  • For IAM registration, we will define some static fixtures
  • For adding projects, we will show examples of “synchronized collections” - dynamic fixtures.
  • For Monitoring registration, we will show some static and dynamic (per project) fixtures.
  • For Logging registration, we will show again some per-project fixtures.
  • For Limits, we use plans as static fixtures.

What your service is authorized to do

Your service uses an IAM ServiceAccount, which will have its own assigned RoleBindings. For any SPEKTRA Edge-based service, you will be allowed to:

  • Do anything in your service namespace: services/{service}/. Your ServiceAccount will be marked as the owner, so you will be able to do anything there. This applies to Service resources in all services, including core SPEKTRA Edge.
  • Do anything in the root scope (/), AS LONG AS the permissions are related to your service. So for example, if your ServiceAccount wants to execute some Create for a resource belonging to your service, it will be able to. But not in other services, and especially not in core SPEKTRA Edge services.
  • The ServiceAccount can do some things in projects/$SERVICE_PROJECT_ID, depending on the role you were assigned when making the initial service reservation, as described in the preparation section.
  • For core SPEKTRA Edge services, you will have read access to all resources in the root scope (/), as long as they satisfy the following filter condition: metadata.services.allowedServices CONTAINS $YOUR_SERVICE.
  • For core SPEKTRA Edge services, you will have write access to all resources in the root scope (/), as long as they satisfy the following filter condition: metadata.services.owningService == $YOUR_SERVICE.
  • You will be able to create resources in projects that enable your service in their enabled_services field. But they will have to specify metadata.services.owningService = $YOUR_SERVICE if we talk about core SPEKTRA Edge services’ resources.

These rough permissions must be remembered when you start making requests from your service. The limitations are reflected in various examples (for example, when you create a ServiceAccount for a project, you need to specify the proper metadata.services).

IAM registration

Introduction

Service iam.edgelq.com handles all actor collections (Users, ServiceAccounts, Groups), tenants (Organizations, Projects), permission-related resources (Permissions, Roles), and finally binds actors with permissions within tenant scopes (RoleBinding).

The only tenant-type resource not in iam.edgelq.com is Service, which resides in meta.goten.com. It is still treated as a kind of tenant from the IAM point of view.

The primary point of registration between IAM and any SPEKTRA Edge-based service is permission-related. Permissions are generated for all services, for each API method (of any API group). The typical format of a permission name is: services/$SERVICE_NAME/permissions/$COLLECTION_NAME.$ACTION_VERB. If a method has no resource configured in the API skeleton (no opResourceInfo value!), then the permission name has this format: services/$SERVICE_NAME/permissions/$ACTION_VERB.

The variable $SERVICE_NAME is naturally the service name in domain format, $COLLECTION_NAME is the lowerPluralCamelJson form of the resource collection (examples: roleBindings, devices…), and finally $ACTION_VERB is equal to the verb of the method in the api-skeleton file. For example, the action CreateRoleBinding operates on the roleBindings collection, the verb is create, and the service where the action is defined is iam.edgelq.com. Therefore, the permission name is services/iam.edgelq.com/permissions/roleBindings.create, as assembled in the sketch below.
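As a sketch, assembling these names mechanically looks like this:

// permissionName builds a permission name from the convention above. An empty
// collection models a method without opResourceInfo in the api-skeleton.
func permissionName(service, collection, verb string) string {
    if collection == "" {
        return "services/" + service + "/permissions/" + verb
    }
    return "services/" + service + "/permissions/" + collection + "." + verb
}

// permissionName("iam.edgelq.com", "roleBindings", "create") returns
// "services/iam.edgelq.com/permissions/roleBindings.create".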

Another popular permission type is the “attach” kind. Even if the permission holder can create/update a resource, if that resource has references to different ones, then authorization must also validate that the actor can create the reference relationship. For example, the caller can create a RoleBinding thanks to the services/iam.edgelq.com/permissions/roleBindings.create permission, but the reference to a Role requires that the holder also has the permission services/iam.edgelq.com/roles!attach.

You should already be familiar with the IAM model from its README.

What is provided during generation

During Service code generation, the IAM protoc plugin analyzes the service and collects all permissions that need to exist. It creates a file in the fixtures directory with the name <service_short_name>.pb.permissions.yaml. Apart from that, it also generates Authorization middleware specifically for your server.

Authorization middleware extracts WHAT for each call:

  • Collection (typically the parent field) for collection-type methods (isCollection = true in the api-skeleton)
  • Resource name (typically the name field) for single-resource non-collection methods (isCollection and isPlural = false)
  • Resource names (typically the names field) for plural non-collection methods (BatchGet, for example; isCollection is false, isPlural is true).

To get this WHAT, it uses by default the values provided in the API skeleton: the param opResourceInfo.requestPaths in an Action declaration. Note that CRUD has implicit built-ins. The middleware gets the authenticated principal from the current context object (associated with the call) and attaches the permission related to the current call. It uses the generic Authorizer component to verify whether the request should pass or be denied.

Minimal registration required from developers

This whole registration works almost out of the box. The minimal elements to provide are:

  • Developers need to create an appropriate main.go file for the server, with Auth-related modules. In the constructor for the main service server handlers, Authorization middleware must be added to the chain, all according to the InventoryManager example.
  • Developers are highly recommended to write their own role fixtures for their service (a static fixture). Roles are necessary to bind users with permissions and should be well-thought-out. The inventory manager has basic roles for users and examples of specifically limited roles for the agent application, with access to clearly defined resources within a tenant project. Although a fixture called <service_short_name>.pb.default.roles.yaml is provided, the generated roles are very limited and usually a “bad guess”. Usually, we create a file called <service_short_name>_roles.yaml for manually written ones.
  • Developers must configure at the minimum two fixture files: <service_short_name>_roles.yaml (or <service_short_name>.pb.default.roles.yaml), then <service_short_name>.pb.permissions.yaml.

The fixtures controller registration requires two parts. First, in the main.go file for a controller, it is required to import github.com/cloudwan/edgelq/iam/access/v1/permission and github.com/cloudwan/edgelq/iam/access/v1/role. Those packages contain modules that are imported by the fixtures controller framework provided by Goten/SPEKTRA Edge. The fixtures controller analyzes YAML files and tries to find the associated types in the global registry; without them, the program will crash.

Second, in a config file of the controller, you need to define fixture file paths. You can copy-paste them from the inventory manager example, like:

fixtureNodes:
  global:
    manifests:
    - file: "/etc/lqd/fixtures/v1/inventory_manager.pb.permissions.yaml"
      groupName: "inventory-manager.edgelq.com/Permissions/CodeGen"
      parent: "services/inventory-manager.edgelq.com"
    - file: "/etc/lqd/fixtures/v1/inventory_manager_roles.yaml"
      groupName: "inventory-manager.edgelq.com/Roles"
      parent: "services/inventory-manager.edgelq.com"

It will be mentioned in the deployment document, but by convention, the fixtures directory is placed in the /etc/lqd path.

Two notes:

  1. groupName is mandatory and generally should be unique. This helps when there is more than one fixture file for the same resource type, to ensure they don’t clash. Still, resource names must also be unique.
  2. The parent field is mandatory in this particular case too. Here, the fixtures controller gets a guarantee that all Roles and Permissions have the same parent resource, called exactly services/inventory-manager.edgelq.com (in this case). Note that a Service only has access to the scopes it owns. Without this parent value specified, we would get a PermissionDenied error. We would also get a PermissionDenied error if, in the fixture file, we attempted to create a Role or Permission with a different parent.

Using this example, we should clarify yet another thing: the fixtures controller not only creates/updates resources that are defined in the fixtures, it also DELETES those that are not defined within them. This is why we have groupName and parent. For example, if there was a Role whose groupName is equal to inventory-manager.edgelq.com/Roles, and whose parent is equal to services/inventory-manager.edgelq.com, and it did not exist within the fixture file defined by /etc/lqd/fixtures/v1/inventory_manager_roles.yaml, it WOULD BE DELETED. This is why the groupName and parent params play an important role here, and why we would get PermissionDenied without a parent. The fixtures controller always gets the observed state to compare against the desired one. This observed state is obtained using regular CRUD, and this is why we need to specify a parent for Roles/Permissions: the service will not be authorized if it tries to get resources from ALL services.

So far we have explained the mandatory part of IAM registration. The first common additional registration, although a very small one, is to declare some actions of a Service public. An example is here: https://github.com/cloudwan/inventory-manager-example/blob/master/fixtures/v1/inventory_manager_role_bindings.yaml

We are granting some public role to all authenticated users, regardless of who they are (as long as they are users of our service). This requires a separate entry in fixtures and an import in main.go for the RoleBinding access package.

More advanced IAM registration

In this area, two extra things are offered:

  1. IAM provides a way to OVERRIDE the generated Authorization middleware. Developers can define additional protobuf files with special annotations in their proto/$VERSION directory, which will be merged on top of the generated/assumed defaults.
  2. Some fields in resources can be considered sensitive from a reading or writing perspective. Developers can define custom IAM permissions that must be owned in order to write to or read from them. Permissions and protected fields can be defined in protobuf files.

Starting with the first part: overriding Authorization defaults. By convention, we create an authorization.proto file along with the others. Some simple examples:

The example service provides a first basic example: to disable Authorization altogether for a given action, you just need to provide the skip_authorization annotation flag for a specific method in a specific API group. Since this example is a little too simple, the more interesting examples below come from Audit and IAM.

For example, take the ListActivityLogs method:

{
  name : "ListActivityLogs"
  action_checks : [ {
    field_paths : [ "parents" ]
    permission : "activityLogs.list"
  } ]
}

There is an important problem with this particular method: SPEKTRA Edge code generation supports collection, single-resource, or multi-resource request types. However, in ListActivityLogsRequest we have a plural parents field, because we are enabling users to query multiple collections at once. This is a kind of isPluralCollection type, but such an annotation does not exist in the api-skeleton. However, there is some level of enhancement: we can explicitly tell IAM to use the “parents” field path, and it will authorize all individual paths from this field. If the user does not have access to even one of the parents, they will receive a PermissionDenied error, as in the sketch below.
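A toy sketch of the all-parents check; the authorize callback is a placeholder for the real Authorizer component:

// allowedInAllParents authorizes every parent scope; the whole request is
// denied if even a single parent fails the check.
func allowedInAllParents(authorize func(scope, permission string) bool, parents []string) bool {
    const perm = "services/audit.edgelq.com/permissions/activityLogs.list"
    for _, parent := range parents {
        if !authorize(parent, perm) {
            return false // PermissionDenied for the whole request
        }
    }
    return true
}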

There is also the possibility to provide multiple field paths (but only one will be used).

Another interesting example is CreateProject:

{
  name : "CreateProject"
  action_checks : [ {
    field_paths : [ "project.parent_organization" ]
    permission : "projects.create"
  } ]
}

In the api-skeleton, Project and Organization are both “top” resources. Their name patterns are projects/{project} and organizations/{organization}. Judging by these, project creation should require permission on the system level, and the same for organizations. However, in practice we want projects to be the final tenants and organizations to be intermediaries. Note that the Organization and Project resources both have a parent_organization field. Especially for organization resources, it is not possible to specify that the parent of an Organization is “Organization”; the name pattern cannot be like organizations/{organization}/organizations/{organization}/.... Therefore, from a naming perspective, both projects and organizations are considered “top” resources. However, when it comes to creation, IAM Authorization middleware should make an exception and take the authorization scope object (WHERE) from a different field path; in the case of CreateProject, it must be project.parent_organization. This changes the generated Authorization code for CreateProject, and the permission is required in the parent organization scope instead.

To declare sensitive fields in resources, it is necessary to use the annotations.iam.auth_checks annotation. There are no current examples in InventoryManager, but there are some in secrets.edgelq.com:

As of now, there is:

option (annotations.iam.auth_checks) = {
  read_checks : [
    {permission : "mask_encrypted_data" paths : "enc_data"},
    {permission : "secrets.sensitiveData" paths : "data"}
  ]
};

Note that you also need to include the edgelq/iam/annotations/iam.proto import in the resource proto file.

When a secret is being read, additional permissions may be checked:

  • services/secrets.edgelq.com/permissions/mask_encrypted_data: if denied, the field path enc_data will be cleared from the response object.
  • services/secrets.edgelq.com/permissions/secrets.sensitiveData: if denied, the field path data will be cleared from the response object.

Those read checks apply to all methods that contain resource bodies in responses; therefore, even UpdateSecret or CreateSecret responses would have fields cleared. However, it will mostly be used to clear values from List/Search/Get/BatchGet responses.

The set_checks param works just like read_checks, but in reverse - it guards writing to the fields instead of reading them.

Note that you can specify multiple paths.

Users are generally free to pick any permission name for set/read checks, but it is recommended to follow the secrets.sensitiveData convention rather than mask_encrypted_data.

For the full document about IAM-related protobuf annotations, see: https://github.com/cloudwan/edgelq/blob/main/iam/annotations/iam.proto.

Adding projects (tenants) to the service

For multi-tenant cases, it is recommended to copy Project resources from iam.edgelq.com into the 3rd party service. You need a Project resource declared in your api-skeleton. This copying, or syncing, was already mentioned in some places in the developer guide as collection synchronization.

A service based on SPEKTRA Edge should copy only those projects which enable that particular service (in the enabled_services list). Note that services based on SPEKTRA Edge can only filter the projects/organizations that use their particular service.

Once the project instance copy is in the service database, it is assumed that the project is now able to use the service. If a project removes the service from its enabled list, then its copy is removed from the service database (garbage collecting).

An example of this registration is in InventoryManager.

Let’s copy and paste part of the config and discuss it more:

fixtureNodes:
  global:
    manifests:
    - file: "/etc/lqd/fixtures/v1/inventory_manager_project.yaml"
      groupName: "inventory-manager.edgelq.com/Projects"
      createForEach:
      - kind: iam.edgelq.com/Project
        version: v1
        filter: "enabledServices CONTAINS \"services/inventory-manager.edgelq.com\""
        varRef: project
      withVarReplacements:
      - placeholder: <project>
        value: $project.Name.ProjectId
      - placeholder: <multiRegionPolicy>
        value: $project.MultiRegionPolicy
      - placeholder: <metadataLabels>
        value: $project.Metadata.Labels
      - placeholder: <metadataServices>
        value: $project.Metadata.Services
      - placeholder: <title>
        value: $project.Title

As always, we need to provide the file and groupName variables. Note that the resource we are creating in this fixture belongs to our service: inventory-manager.edgelq.com/Project. Because it is ours, the service does not need an additional parent or filter to be authorized correctly, so those parameters are not necessary here.

We have some new elements though. The first is the createForEach directive. It instructs the controller to create the fixtures defined in the mentioned file for each combination of input resources. In this case, we have one input resource, and its type is iam.edgelq.com/Project, in version v1. Our service cannot list all IAM projects, but it can list those that enable our service, therefore we are passing the proper filter param. Besides, we should create project copies only for projects interested in our service anyway. Each instance of iam.edgelq.com/Project is remembered as the project variable (as indicated by varRef).

When fixtures are evaluated from the file /etc/lqd/fixtures/v1/inventory_manager_project.yaml per each IAM project, all variables are replaced, so the final YAML is produced. The example above should be relatively self-explanatory. You may note, however, that you can extract IDs from names, and take full objects (fixture variables are not limited to primitives), maps, or slices.

There is however one more important aspect: project admins cannot by default add your service to their enabled list. This is to prevent the attachment of a private service to a project, as it may be against the service maintainer’s wishes. To allow someone to create/update a project/organization using your service, you will need to create a RoleBinding:

cuttle iam create role-binding \
  --service $YOUR_SERVICE \
  --role 'services/iam.edgelq.com/service-user' \
  --member $ADMIN_OF_ORGS_AND_PROJECTS

From now on, the provided user can create new organizational entities that use your service.

Audit registration

Overview

SPEKTRA Edge provides a LogsExporter component, which is part of observability. It records selected API calls (unary and streams) and submits them to audit.edgelq.com. All activity or resource change logs are classified as service, organization, or project scoped. Out of these three, service scope is the default, used when the method call was classified as neither project nor organization scoped.

Scope classification is relatively simple: when a unary request arrives, the logs exporter analyzes the request, extracts the resource name(s) and collection, and decides what the scope of the request is (project, organization, or service). Resource change logs are submitted just before the transaction is concluded; if the logs could not be sent, the transaction fails. This ensures that we always track resource change logs at least. Activity logs are submitted within seconds after the request finishes, which allows some degree of lost messages. In practice, it does not happen often.

For streams, Audit examines client and server messages before deciding what the activity logs should look like.

Resource change logs are submitted based on the transaction lifespan, regardless of the gRPC method streaming kind.

Minimal registration

Audit requires minimal effort from developers in its default form. They just need to put a little initialization in the main.go file for the server runtime, as in the example InventoryManager service. You can see it in https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagerserver/main.go.

Find the following strings:

  • NewAuditStorePlugin is necessary to add to a store handle. It is a plugin that observes changes on the DB.
  • InitAuditing is necessary to initialize the Audit Logs exporter that your server will use. You need to pass all relevant handlers (code-generated).

Audit handlers are code-generated based on method annotations (therefore, the api-skeleton normally decides). There are the following defaults:

  • The api-skeleton annotations opResourceInfo.requestPaths and opResourceInfo.responsePaths are used to determine which field paths in request/response objects contain values that would be interesting from an Audit point of view.
  • Audit by default focuses on auditing all writing calls. It checks the api-skeleton annotation withStoreHandle in each action. If the transaction type is SNAPSHOT or MANUAL, then the call will be logged, not otherwise.
  • By default, activity log types will always be classified as some kind of write. Other kinds require manual configuration.

From this point on Audit will work, and service developers will be able to query for logs from their service. Let’s discuss the list of possible customizations.

Customizations on the proto files level

The full set of proto customizations can be found here: https://github.com/cloudwan/edgelq/blob/main/audit/annotations/audit.proto. You will need to include the edgelq/audit/annotations/audit.proto import to use any audit annotations.

The most common customization is the categorization of write activities for a resource. Activity logs have these categories: Operations, Creations, Deletions, Spec Updates, State Updates, Meta Updates, Internal, Rejected, Client and Server errors, and Reads.

Note that there are quite a few write categories: creations, deletions, and three different update kinds. Creations and deletions are easy to classify, but updates are not so much. When a resource is updated, the Audit Logs exporter compares the old and new objects and determines which fields changed and which did not. To determine the update kind, it needs to know which fields are related to spec, which to state, and which to meta. This has to be defined within the resource protobuf definition.

It looks like the following:

message ResourceName {
  option (ntt.annotations.audit.fields) = {
    spec_fields : [
      "name",
      "spec_field",
      "other_spec_field"
    ]
    state_fields : [ "some_state_field" ]
    meta_fields : [ "metadata", "other_meta" ]
    hidden_fields : [ "sensitive_field", "too_big_field" ]
  };
}

We must classify all fields. Normally, we put “name” as a spec field and “metadata” as a meta field. Other choices are up to the developer. On top of spec/state/meta, we can also hide some fields from Audit entirely (especially if they are sensitive, or big, and we want to minimize log sizes).

Note that hidden_fields can also be defined for any message, including request/response objects. An example from SPEKTRA Edge: https://github.com/cloudwan/edgelq/blob/main/common/api/credentials.proto. See the annotations for ServiceAccount: we are hiding private key objects, for example, as they would be too sensitive to include in Audit logs. Be aware of what is being logged!

You can define field specifications on the resource level, or any nested object too.

Going back to update requests: spec updates take precedence over state, then state over meta. Therefore, if we detect an update that modifies one meta, two state, and one spec field, the update is classified as a spec update, as in the sketch below.
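A sketch of that precedence rule:

// classifyUpdate applies the precedence above: spec beats state, state beats
// meta. The inputs say which field groups the computed diff touched.
func classifyUpdate(specChanged, stateChanged, metaChanged bool) string {
    switch {
    case specChanged:
        return "SpecUpdate"
    case stateChanged:
        return "StateUpdate"
    case metaChanged:
        return "MetaUpdate"
    }
    return "" // no audited field changed
}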

Another part of customization developers may find useful is the ability to attach labels to activity/resource change logs. Those logs can be queried (filtered) by service, method name, API version, resource name/type on which the method operates (or which changed), category, and request ID… However, you can notice that resource change and activity logs also have a “labels” field, which is a generic map of strings. This can hold any labels defined by developers. The most common way of defining labels is in request/response objects:

message ActionNameRequest {
  option (ntt.annotations.audit.fields) = {
    labels : [
      { path : "field_a", key: "label_a" },
      { path : "field_b", key: "label_b" }
    ]
    
    promoted_labels : [
      { label_keys : [ "label_a" ] }
    ]
  };
  
  string field_a = 1;
  
  string field_b = 2;
}

With this, you can start querying Activity logs like:

{parents: ["projects/tenant1"], filter: "service.name = \"custom.edgelq.com\" AND labels.field_a = \"V1\""}

The query above will also be optimized (an index will be created, according to the promoted_labels value).

Note that each promoted label set also requires the service name and parent to be indexed!

Apart from field customization, developers can customize how the Audit Logs Exporter handles method calls. We typically create the file auditing.proto in the proto/$VERSION directory for a given service. There, we declare the file-level annotation ntt.annotations.audit.service_audit_customizations.

Examples in SPEKTRA Edge:

Starting with the devices service, for example the ProvisioningPolicyService and its method ProvisionDeviceViaPolicy. As of now, we have annotations like:

{
  name : "ProvisionDeviceViaPolicy"
  activity_type : WriteType
  response_resource_field_paths : [ "device.name" ]
}

Method ProvisionDeviceViaPolicy has in api-skeleton:

actions:
- name: ProvisionDeviceViaPolicy
  verb: provision_device_via_policy
  withStoreHandle:
    readOnly: false
    transaction: SNAPSHOT

By default, opResourceInfo has these values for the action:

opResourceInfo:
  name: ProvisioningPolicy # Because this action is defined for this resource!
  isCollection: false      # Default is false
  isPlural: false          # Default is false
  # For single resource non-collection requests, defaults for paths are determined like below:
  requestPaths:
    resourceName: [ "name" ]
  responsePaths: {}

You can find request/response object definitions in: https://github.com/cloudwan/edgelq/blob/main/devices/proto/v1/provisioning_policy_custom.proto

This method primarily operates on the ProvisioningPolicy resource, and the exact resource can be extracted from the “name” field in the request. By default, Audit would decide that the primary resource for Activity logs for these calls is ProvisioningPolicy. The following Audit specification would be implicitly assumed:

{
  name : "ProvisionDeviceViaPolicy"
  activity_type : WriteType                 # Because withStoreHandle api-skeleton annotation tells it is a SNAPSHOT
  request_resource_field_paths : [ "name" ] # Because this is what requestPaths api-skeleton annotation tells us.
}

However, we know that this method takes the ProvisioningPolicy object but creates a Device resource, and the response object contains the Device instance. To ensure that the field resource.name in Activity logs points to a Device, not a ProvisioningPolicy, we specify that response_resource_field_paths should point to device.name.

To still be able to query Activity logs by ProvisioningPolicy, we also attach an annotation to the request object:

option (annotations.audit.fields) = {
  labels : [ {key : "provisioning_policy_name" path : "name"} ]
};

This is one example of modifying the default behavior.

We can also disable auditing for particular methods entirely. Again, in auditing.proto for the Devices service you may see:

{
  name : "DeviceService"
  methods : [ {name : "UpdateDevice" disable_logging : true} ]
},

The reason in this case is that, as of now, all devices send UpdateDevice every minute. To avoid too many requests to Audit, we have disabled this for now, until a solution is found (perhaps this part is already gone from the auditing file for devices).

In the auditing.proto file for the Proxies service ( https://github.com/cloudwan/edgelq/blob/main/proxies/proto/v1/auditing.proto ), you may see something different too:

{
  name : "BrokerService"
  methods : [
    {name : "Connect" activity_type : OperationType},
    {name : "Listen" activity_type : OperationType}
  ]
}

In the Broker API in the api-skeleton, you can see that Connect and Listen are streaming calls. Listen is used by an Edge agent to provide access to other actors, and Connect is used by an actor to connect to an Edge agent. Those calls are non-writing and therefore would not be audited by default. To force auditing and classify them as the Operation kind, we specify this directly in the auditing file.

A final example that is good to see, is the auditing file for monitoring: https://github.com/cloudwan/edgelq/blob/main/monitoring/proto/v4/auditing.proto.

First, you can see that we are classifying some resources as INTERNAL types, like RecoveryStoreShardingInfo. This means that any writes to these resources are not classified as writes, but as "internal". This changes the category in Activity logs, making it easier to filter out. Finally, we are enabling read auditing for the ListTimeSeries call:

{
  name : "TimeSerieService"
  methods : [ {
    name : "ListTimeSeries"
    scope_field_paths : [ "parent" ]
    activity_type : ReadType
    disable_logging : false
  } ]
}

Before finishing, it is worth noting that we have some extra customizations in the code for ListTimeSeries calls.

Customizations of Audit in Golang code

There is a package github.com/cloudwan/edgelq/common/serverenv/auditing with some functions that can be used.

Most common examples can be summarized like this:

package some_server

import (
	"context"

	"google.golang.org/grpc/status"

	"github.com/cloudwan/edgelq/common/serverenv/auditing"
)

func (srv *CustomMiddleware) SomeStreamingCall(
	stream StreamName,
) error {
	ctx := stream.Context()

	firstRequestObject, err := stream.Recv()
	if err != nil {
		return status.Errorf(status.Code(err), "Error receiving first client msg: %s", status.Convert(err).Message())
	}

	// Let's assume that the request contains a project ID, but it is somehow encoded AND not
	// available from a field in a straightforward way. Because of this, we cannot provide a
	// protobuf annotation. We can do this from code, however:
	projectId := firstRequestObject.ExtractProjectId()
	auditing.SetCustomScope(ctx, "projects/"+projectId) // Now we ensure this is where the log parent is.

	// We can also set some custom labels, because these were not available as any direct fields.
	// However, to have it working, we will still need to declare labels in protobuf:
	//
	// message StreamNameRequest {
	//   option (ntt.annotations.audit.fields) = {
	//     labels : [ { key: "custom_label" } ]
	//   };
	// }
	//
	// Note we specify only the key, not a path! But if we do this, we can then do:
	auditing.SetCustomLabel(ctx, "custom_label", firstRequestObject.ComputeSomething())

	// Now, we want to inform the Audit Logs Exporter that this stream is exportable. If we did
	// not do this, then Audit would export Activity logs only AFTER THE STREAM FINISHES (this
	// function exits!). If this stream is long-running (like several minutes, or maybe hours),
	// then it may not be the best option. It would be better to send Activity logs NOW. However,
	// be aware that you should not call SetCustomLabel or SetCustomScope after exporting the
	// stream - activity logs are "concluded" and labels can no longer be modified. New activity
	// log events may still be appended for each client and server message, though!
	auditing.MarkStreamAsExportable(ctx)

	firstServerMsg := srv.makeFirstResp(stream, firstRequestObject)
	if err = stream.Send(firstServerMsg); err != nil {
		return status.Errorf(status.Code(err), "Error sending first server msg: %s", status.Convert(err).Message())
	}

	// There may be multiple Recv/Send here ...

	return nil
}

By default, Activity logs record all client/server messages; each represents an Activity Log Event object appended to the existing Activity Log. This may not always be the best choice if objects are large. For example, for ListTimeSeries, which is audited, we don't need responses. The request object contains elements like the filter or parent, so we can predict/check what data was returned from monitoring. In such a case, we can disable appending to the ActivityLog (also, ListTimeSeriesResponse can be very large!):

func (r *ListTimeSeriesResponse) AuditShouldRecord() bool {
	return false
}

The function AuditShouldRecord can be defined for any request/response object. Audit Logs Exporter will examine if they implement this method to act accordingly.

We can also sample logs, which we do for ListTimeSeries. Since those methods are executed quite often, we don't want too many activity logs for them. We implemented the following functions for request objects:

func (r *ListTimeSeriesRequest) ShouldSample(
	ctx context.Context,
	sampler handlers.Sampler,
) bool {
	return sampler.ShouldSample(ctx, r)
}

func (r *ListTimeSeriesRequest) SamplingKey() string {
	// For illustration only - the actual implementation may differ. A
	// reasonable key combines the fields that identify "similar" requests:
	return r.GetParent() + "/" + r.GetFilter()
}

First, we need to implement ShouldSample, which gets the default sampler. If ShouldSample returns true, then the activity is logged. The default sampler requires the object to implement SamplingKey() string. It ensures that "new" requests are logged, rather than those similar to previous ones (at least until the TTL expires or the cache loses the entry).

Also, if some streaming calls are heavy (like downloading a multi-GB image), make sure these requests/responses are not logged at all! Otherwise, Audit logs may grow excessively large.

Monitoring registration (and usage notes)

Monitoring is a simpler case than IAM or Audit. Unlike them, it does not integrate on the protobuf level and does not inject any code. The common registration is via metric/resource descriptors, followed by periodic time series submission.

It is up to the service to decide whether it needs time-series numeric data with aggregations. If it does, then service developers need to:

  • Declare MonitoredResourceDescriptor instances via a fixtures file. Those resources are defined for the whole service.
  • Declare MetricDescriptor instances via a fixtures file. Those resources must be created per each project using the service.

With descriptors created by the fixture controller, clients can start submitting time series via CreateTimeSeries calls. It is recommended to use the cached client from Monitoring: https://github.com/cloudwan/edgelq/blob/main/monitoring/metrics_client/v4/tsh_cached_client.go

This is typically used by agents running on edge devices; it is the responsibility of service developers to create the relevant code. The InventoryManager example is a good reference.

Fixture files for this example service can be found in the fixtures directory of the InventoryManager repository.

Notes:

  • For MetricDescriptors, it is mandatory to provide a value for metadata.services. The reason is that a project is a separate entity from a Service and can enable/disable the services it uses. Given its limited access, the service should declare ownership of the metric descriptors it creates in a project.
  • As of now, in this example, the fixtures controller will forbid modifications of MetricDescriptors by project admins; for example, if they add some label or index, the changes will be reverted to reflect the fixtures. However, in the future, we plan to give some flexibility to mix user changes with fixtures. This can enable use cases like additional indices usable for specific projects only, allowing per-tenant customizations. This is a good reason to keep MetricDescriptors defined per project rather than per service.
  • Because metric descriptors are created per each project, we call them dynamic fixtures.

The main.go file for a controller will need to import the relevant Go packages from Monitoring. An example is in https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagercontroller/main.go.

Packages needed are:

  • github.com/cloudwan/edgelq/monitoring/access/v4/metric_descriptor
  • github.com/cloudwan/edgelq/monitoring/access/v4/monitored_resource_descriptor

In this config file ( https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/controller-config.yaml ) we can find usage of these two fixture files. Note that MonitoredResourceDescriptor instances are declared with a parent. This, again like in IAM registration, ensures that the fixtures controller only gets the observed state from this particular sub-collection. MetricDescriptor resources don't specify the parent field (we have multiple projects!). Therefore, we must provide a different mechanism to ensure we get access to the metric descriptors we can access. We do this with the filter param: we filter by the metadata.services.owningService value. This way we are guaranteed to see resources we have write access to.

Another notable element for MetricDescriptors is how we are filtering input projects:

createForEach:
- kind: inventory-manager.edgelq.com/Project
  version: v1
  filter: multiRegionPolicy.defaultControlRegion="$myRegionId"
  varRef: project

First, we use inventory-manager.edgelq.com/Project instances, not iam.edgelq.com/Project. This way we can be sure we don't get PermissionDenied (it is our service, after all), and we can skip the enabledServices CONTAINS filter.

Another notable element is the filter: we get projects from our own region only. It is recommended to create per-project fixtures this way in a multi-region environment. If our service runs in many regions, then each region will take its share of projects.

The last element is where the variable $myRegionId comes from. It is defined in the main.go file for the controller; take a look at the example: https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagercontroller/main.go.

In the versioned constructor, you can find the following:

vars := map[string]interface{}{
    "myRegionId": envRegistry.MyRegionId(),
}

This is an example of passing some custom variables to the fixture controller.

A simplified example of a client submitting time series can be found in the function keepSendingConnectivityMetrics: https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/simple-agent-simulator/agent.go
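For orientation, below is a minimal sketch of such a periodic submission loop. The TimeSeriesSubmitter interface is a stand-in invented for this sketch (the real agent uses the monitoring client packages, e.g. the cached client mentioned above); consult keepSendingConnectivityMetrics for the authoritative usage.

import (
	"context"
	"time"
)

// TimeSeriesSubmitter abstracts whichever monitoring client is used; this
// interface shape is an assumption for illustration only.
type TimeSeriesSubmitter interface {
	SubmitTimeSerie(ctx context.Context, projectParent string, serie interface{}) error
}

// keepSendingMetrics periodically submits one pre-built time series on
// behalf of a project, in the spirit of keepSendingConnectivityMetrics.
func keepSendingMetrics(ctx context.Context, submitter TimeSeriesSubmitter, projectParent string, serie interface{}) {
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// Errors are ignored for brevity; real agents should log and
			// retry according to their own policy.
			_ = submitter.SubmitTimeSerie(ctx, projectParent, serie)
		}
	}
}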

Usage registration

Service monitoring.edgelq.com, apart from the optional registration described above, has some specific built-in registration already. We are talking here about usage metrics:

  • Number of open calls being currently processed and not concluded (more useful for long-running streams!)
  • Request and response byte sizes (uncompressed protobufs)
  • Call durations, in the form of Distributions, to catch all individual values.
  • Database read and write counts.
  • Database resource counters (but these are limited only to those tracked by Limits service).

The SPEKTRA Edge platform creates metric descriptors for each service separately in a dedicated fixture file. Resource descriptors are also defined per service. This way, we can have separate resource types like:

  • custom.edgelq.com/server
  • another.edgelq.com/server
  • etc.

From these fixtures, you can learn what metrics your backend service will be submitting to monitoring.edgelq.com.

Notable things:

  • All usage metrics go to your service project, where the service belongs (along with its ServiceAccount).
  • To track usage by each tenant project, all metric descriptors have a user_project_id label. This will contain the project ID (without the projects/ prefix) for which a call is accounted for.
  • User project ID labels for calls are computed from request objects, based on the requestPaths api-skeleton annotation!

To ensure the backend sends usage metrics, it is necessary to include this in the main.go file. For example, for Inventory Manager, in the server main.go we have an InitServerUsageReporter call; find it in https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagerserver/main.go. When constructing a store, you need to add a store and cache plugin, NewUsageStorePlugin. You can grep for this string in the main.go file as well.

This describes all minimum registration needed from the developer.

Some coding customization is available, though: it is possible to customize how user_project_id is extracted. By default, the usage component uses auto-generated method descriptors (in client packages), which are generated based on requestPaths in api-skeletons. It is possible to customize this by implementing additional functions on generated objects. An example can be found here: https://github.com/cloudwan/edgelq/blob/main/monitoring/client/v4/time_serie/time_serie_service_descriptors.go.

For a client message handle, we can define the UsageOverrideExtractUserProjectIds function, which extracts from a request object the project ID the usage is attributed to. When possible, however, it is better to stick to the defaults derived from the api-skeleton.
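To illustrate the idea only, here is a hypothetical sketch; the receiver type below is a stand-in, and the exact signature expected by the usage component should be taken from the linked time_serie_service_descriptors.go file.

import (
	"strings"
)

// createTimeSeriesRequest stands in for the real request type; in practice
// this method is defined for generated request objects.
type createTimeSeriesRequest struct {
	Parent string // e.g. "projects/<id>"
}

// UsageOverrideExtractUserProjectIds derives the tenant project the usage
// is attributed to, overriding the requestPaths-based default.
func (r *createTimeSeriesRequest) UsageOverrideExtractUserProjectIds() []string {
	if !strings.HasPrefix(r.Parent, "projects/") {
		return nil
	}
	return []string{strings.TrimPrefix(r.Parent, "projects/")}
}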

Logging registration

Logging registration is another optional one and is even simpler than monitoring. It is recommended to use logging.edgelq.com if there is a need for non-numerical, time-series-like data (logs).

Service developer needs to:

  • Define fixtures with LogDescriptor instances to be created per each project (optionally for service or organization). Defining per project may enable in the future some per-project customizations.
  • The main.go file for the controller will, traditionally, need the relevant Go package (now it is github.com/cloudwan/edgelq/logging/access/v1/log_descriptor).
  • Complete configuration of fixtures in controller config.
  • Use logging API from Edge agent runtime (or even any runtime if they want/need it, edge agents are just the most typical).

InventoryManager provides an example; it is similar to monitoring, but simpler.

Limits registration

Service limits.edgelq.com allows limiting the number of resources that can be created in a Project, to avoid system overload, or because of contractual agreements.

Limitations:

  • Only resources under projects can be limited
  • Limit object is created per unique combination of Project, Region, and Resource type.

Therefore, when integrating with limits, it is highly recommended (again) to work primarily with Projects, and to model resources keeping in mind that only their total count (in a region) is limited. For example, we can't limit the number of "items in an array in a resource". If we need to, we should create a child resource type instead, so that the number of its instances created in a project/region can be limited.

With those pre-conditions, the remaining steps are rather simple to follow, we will go one by one.

First, we need to define service plans. It is necessary to provide default plans for organizations and projects too. This should again be done with fixtures, as we have in this example: https://github.com/cloudwan/inventory-manager-example/blob/master/fixtures/v1/inventory_manager_plans.yaml.

As always, this requires importing the relevant package in main.go and an entry in the config file, as in https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/controller-config.yaml.

The service plan will be assigned to the service automatically during initial bootstrapping by limits.edgelq.com. Organization plans will at least be used by "top" organizations (those without parent organizations); they will have one of the organization plans assigned. From this point, organizations can define their own plans or continue using the defaults provided by the service via fixtures.

When someone creates a resource under a project, the server needs to check whether it exceeds its limit; if it does, then the server must reject the call with a ResourceExhausted error. Similarly, when the resource is deleted, limit usage should decrease. This must happen on the Store level, not the API server: resources can often be created or deleted not via standard Create/Delete calls, but via custom methods. We need to track each Save/Delete call on the store level. SPEKTRA Edge provides the relevant modules already, though. If you look at the file here: https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagerserver/main.go, you should notice that, when we construct a store (via NewStoreBuilder), we are adding a relevant plugin (find NewV1ResourceAllocatorStorePlugin). It injects the necessary behavior: it checks the local limit tracker and ensures its value is in sync. Version 'v1' corresponds to the limits service version, not the 3rd party service version.

There is also a need to maintain synchronization between the SPEKTRA Edge-based service using Limits and limits.edgelq.com itself. Ultimately, it is limits.edgelq.com where limit configuration happens. For this reason, the service using Limits is required to expose an API that Limits can understand. This is why, in the main.go file for the server runtime, you can find the mixin limits server instantiation (find NewLimitsMixinServer). It needs to be included.

Also, for limit synchronization, we need a controller module provided by the SPEKTRA Edge framework. By convention, this is a part of the business logic controller. You can find an example here: https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/inventorymanagercontroller/main.go

Find the NewLimitsMixinNodeManager call - it must be included, and the created manager must be run along with the others.

The Limits mixin node manager needs its entry in the controller config, as in https://github.com/cloudwan/inventory-manager-example/blob/master/config/controller.proto.

There is one very common customization required for limits registration only. By default, if the limits service is enabled, then ALL resources under projects are tracked. This may not always be intended, and some resources should not be limited. As of now, we can handle this via code: we need to provide a function for the resource allocator.

We have an example in InventoryManager again: https://github.com/cloudwan/inventory-manager-example/blob/master/resource_allocator/resource_allocator.go.

In this example, we are creating an allocator that does not count usage if the resource type is ReaderAgent. It is also possible to filter out specific fields and so on. This function is called for any creation, update (if, for some reason, a resource switches between counted and non-counted!), or deletion.

This ResourceAllocator is used in main.go in the server runtime; we pass it to the store plugin.
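To illustrate the idea (not the actual interface of the linked file), the filtering predicate boils down to something like this hypothetical helper:

// shouldCountResource mirrors the idea from resource_allocator.go:
// ReaderAgent resources are excluded from limits accounting. The real
// allocator interface in the linked file may differ.
func shouldCountResource(resourceTypeName string) bool {
	return resourceTypeName != "inventory-manager.edgelq.com/ReaderAgent"
}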

2.3.4 - Developing the Sample Service

Let’s develop the sample service.

When writing code for your service, it is important to know some Goten/SPEKTRA Edge-specific components and how to use them. This part contains notable examples and advice.

Some examples here apply to edge runtimes too, as they often describe methods of accessing service backends.

Basic CRUD functionality

Unit tests are often a good way to show the possibilities of Goten/SPEKTRA Edge. While the example service implementation shows something more "real" and "full", various use cases are better represented in shortened form by tests. In Goten, we have CRUD tests: https://github.com/cloudwan/goten/blob/main/example/library/integration_tests/crud_test.go and pagination tests: https://github.com/cloudwan/goten/blob/main/example/library/integration_tests/pagination_test.go

Client modules will always be used by edge applications, and often by servers too, since the backend, on top of storage access, will always need some access to other services/regions.

Using field paths/masks generated by Goten

Goten generates plenty of code related to field masks and paths. Those can be used for various techniques.

import (
	// Imaginary resource, but you can still use the example
	resmodel "github.com/cloudwan/some-repo/resources/v1/some_resource"
)

func DemoExampleFieldPathsUsage() {
	// Construction of some field mask
	fieldMaskObject := &resmodel.SomeResource_FieldMask{Paths: []resmodel.SomeResource_FieldPath{
		resmodel.NewSomeResourceFieldPathBuilder().SomeField().FieldPath(),
		resmodel.NewSomeResourceFieldPathBuilder().OtherField().NestedField().FieldPath(),
	}}

	// We can also set a value on an object... if a path item is equal to NIL, it is allocated
	// on the way.
	res := &resmodel.SomeResource{}
	resmodel.NewSomeResourceFieldPathBuilder().OtherField().NestedField().WithValue("SomeValue").SetTo(&res)
	resmodel.NewSomeResourceFieldPathBuilder().IntArrayField().WithValue([]int32{4, 3, 2, 1}).SetTo(&res)

	// You can access items from a field path... we also support this if there is an array on
	// the path. But this time we need to cast.
	for _, iitem := range resmodel.NewSomeResourceFieldPathBuilder().ObjectField().ArrayOfObjectsField().ItemFieldOfStringType().Get(res) {
		item := iitem.(string) // If we know that "item_field_of_string_type" is a string, we can safely do that!
		_ = item               // Do something with item here...
	}
}

It is worth looking at the FieldMask and FieldPath interfaces in the github.com/cloudwan/goten/runtime/object module. These interfaces are implemented for all resource-related objects. Many of their methods have strongly-typed equivalents.

With field path objects you can:

  • Set a value on a resource
  • Extract a value (or values) from a resource
  • Compare a value with the one in a resource
  • Clear a value from a resource
  • Get the default value for a field path (you may need reflection, though)

With field masks, you can (a minimal sketch follows below):

  • Project a resource (shallow copy for selected paths)
  • Merge resources with a field mask
  • Copy selected field paths from one resource to another
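Continuing the snippet above, a minimal projection sketch is shown below; it assumes the generated mask exposes a Project method (see the linked tests for the authoritative API).

func DemoProjection(res *resmodel.SomeResource, mask *resmodel.SomeResource_FieldMask) {
	// Project is assumed to return a shallow copy of res containing only
	// the paths selected in the mask (see fieldmask_test.go).
	projected := mask.Project(res)
	_ = projected
}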

You can explore some examples also in the unit tests: https://github.com/cloudwan/goten/blob/main/runtime/object/fieldmask_test.go and https://github.com/cloudwan/goten/blob/main/runtime/object/object_test.go

The tests for objects also show more possibilities related to field paths: we can use those modules for general deep cloning, diffing, or merging.

Creating resources with meta-owner references

In inventory-manager there is a notable example of creating a Secret resource; see the CreateDeviceOrder custom implementation. Before the DeviceOrder resource is created, we connect to the secrets.edgelq.com service and create a Secret resource. We create it with a populated metadata.ownerReferences value; as an argument, we pass a meta OwnerReference object, which contains the name of the DeviceOrder being created, along with the region ID where it is being created.

This is the file with the code we describe: https://github.com/cloudwan/inventory-manager-example/blob/master/server/v1/device_order/device_order_service.go.

Find implementation for the CreateDeviceOrder method there.

Meta-owner references are a different kind of reference compared to those defined in the schema. Mainly:

  • They are considered "soft", and can never block deletion of the pointed resources.
  • You cannot, unfortunately, filter by them.
  • During creation (or when making an Update request with a new meta owner), a meta owner reference does not need to point to an existing resource (yet - see below).
  • They have specific deletion behavior (see below).

The resource pointed to by a meta owner reference we call the "meta owner"; the pointing one is the "meta ownee".

Meta owner refs have, however, the following deletion properties:

  • When the meta owner resource is being deleted, then the meta owner reference is unset in an asynchronous manner.
  • If the meta owner resource does not exist, then after some time (minutes), the meta owner reference is removed from the meta ownee.
  • If the field metadata.ownerReferences becomes an empty array due to the removal of the last meta owner, the meta ownee resource is automatically deleted!

Therefore, you may consider that the meta ownee has a specific ASYNC_CASCADE_DELETE behavior - except that it needs all parents to be deleted.

When it is possible, it is much better to use schema references, declared in the protobuf files. However, it is not always possible, like here, because the InventoryManager service imports secrets.edgelq.com, not the other way around. The Secrets service cannot possibly know about the existence of the InventoryManager resource model; therefore, a Secret resource cannot have any reference to a DeviceOrder. Instead, when we want to create a Secret resource and associate it with the lifecycle of a DeviceOrder (we want the Secret to be garbage collected), we should use precisely meta ownership.

This way, we can ensure that "child" resources from lower-level services like Secrets are automatically cleaned up. It will also happen if, after successful Secret creation, we fail to create the DeviceOrder (let's say something happened and the database rejected our transaction without a retry option). This is because meta owner references time out when the meta owner fails to come into existence within a couple of minutes of the meta owner reference attachment.

There is one super corner case, though: it is possible that the Secret resource will be successfully created, BUT the transaction saving the DeviceOrder will fail with an Aborted code, and this error type can be retried. As a result, the whole transaction will be repeated, including another CreateSecret call. After the second attempt, we will have two Secrets pointing to the same DeviceOrder, but the DeviceOrder will have only one reference to one of those Secrets. The other is stale. This particular case is handled by the option WithRequiresOwnerReference passed to the meta owner: it means that the meta owner reference is removed from the meta ownee also when the parent resource has no "hard" reference pointing at the meta ownee. In this case, one of the Secrets would not be pointed to by the DeviceOrder and would be automatically cleaned up asynchronously.

It is advised to always use a meta owner reference with the WithRequiresOwnerReference option if the parent resource can have a schema reference to the meta ownee - like in this case, where DeviceOrder has a reference to a Secret. It follows the principle where the owner has a reference to the ownee. Note that we are creating a kind of reference loop here, but it is allowed in this case.

Creating resources from the 3rd party service

Any 3rd party service can create resources in SPEKTRA Edge core services; however, there is a condition attached: they must mark resources with service ownership information.

In the method CreateDeviceOrder from https://github.com/cloudwan/inventory-manager-example/blob/master/server/v1/device_order/device_order_service.go, look again at the CreateSecret call and see the metadata.services field of the Secret to create. We need to pass on the following information:

  • Which service owns this particular resource

    and we must point to our service.

  • List of allowed services that can read this resource

    we should point to our service, but we may optionally include other services too if this is needed.

Setting this field is a common requirement when 3rd party service needs to create a resource owned by it.

It is assumed that a Service should not have full access to the project. Users, however, can create resources without this restriction.
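Conceptually, the ownership marking looks like the hedged sketch below; the Go type here is a simplified stand-in, and the real field names should be taken from the metadata.services object in the linked example.

// serviceInfo is a simplified stand-in for the real metadata.services
// object; see the CreateDeviceOrder example for the actual types.
type serviceInfo struct {
	OwningService   string
	AllowedServices []string
}

// ownershipForCustomService declares that our service owns the resource
// and is allowed to read it; more reader services can be appended.
func ownershipForCustomService() *serviceInfo {
	return &serviceInfo{
		OwningService:   "custom.edgelq.com",
		AllowedServices: []string{"custom.edgelq.com"},
	}
}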

Accessing service from the client

Services on SPEKTRA Edge typically have Edge clients: devices/applications running with a ServiceAccount registered in IAM, connecting to the SPEKTRA Edge/third-party service via API.

An example is provided with inventory-manager here: https://github.com/cloudwan/inventory-manager-example/blob/master/cmd/simple-agent-simulator/dialer.go

Note that you can skip WithPerRPCCredentials to have anonymous access. The Authenticator will classify the principal as Anonymous, and the Authorizer will then likely reject the request with a PermissionDenied code. It may still be useful, for example during activation, when a service account is being created and credential keys are allocated; the backend will need to allow anonymous access, though, and custom security needs to be provided. See Edge agent activation in this document.

The created gRPC connection can be wrapped with the client interfaces generated in the client packages for your service (or any SPEKTRA Edge-based service).
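As an illustration, a bare-bones dialer might look like the sketch below. The endpoint is an example borrowed from the networking section later in this document, and the construction of per-RPC credentials is service-specific (see the linked dialer.go):

import (
	"crypto/tls"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func dialService(perRPCCreds credentials.PerRPCCredentials) (*grpc.ClientConn, error) {
	opts := []grpc.DialOption{
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
	}
	if perRPCCreds != nil {
		// Omit this option entirely for anonymous access.
		opts = append(opts, grpc.WithPerRPCCredentials(perRPCCreds))
	}
	// Example regional endpoint; use your own service endpoint here.
	return grpc.Dial("custom.eastus2.someorg.com:443", opts...)
}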

Edge agent activation

A SPEKTRA Edge-based service typically has human users (represented by the User resource in iam.edgelq.com) or agents running on the edge (represented by the ServiceAccount resource in iam.edgelq.com). Users typically access SPEKTRA Edge via a web browser or CLI and get access to the service via invitation.

A common problem with Edge devices is that, during the first startup, they don’t have credentials yet (typically).

If you have an agent runtime running on the edge, and it needs to self-activate by connecting to the backend and requesting credentials, this part is for you to read.

Activation can be done with a token: the client establishes a gRPC connection without RPC credentials. Then it can connect to a special API method for activation. During activation, it sends a token for identification. In this exchange, credentials are created and returned by the server. There is a plan to have a generic Activation module in the SPEKTRA Edge framework, but it's not ready yet.

The inventory manager provides an example implementation. It is a fairly complex example, though, which is also why a generic Activation module is planned for the future.

The token for activation is created by the DeviceOrderService when an order for edge devices is created. We store the token value using the Secrets service, to ensure its value is not stored in any database, just in case. This token is then needed during the Activation stream.

The activation method is bidi-streaming, as seen in the api-skeleton. The client initializes activation with a first request containing the token value. The server responds with credentials, but to activate, the client needs to send an additional confirmation. Because of the multiple requests made by the client/server side, it was necessary to make this call a streaming type.

When implementing activation, there is another issue: the ActivationRequest sent by the client has no region ID information; if there are multiple regions for a given service and the agent connects to the wrong region, the backend will have issues during execution. The region ID is, however, encoded in the token itself. As of now, code-generated multi-region routing does not support methods where the region ID is encoded in some field of the request. For now, it is necessary to disable multi-region routing here and implement the method by hand, as shown in the example file.

During the proper implementation of Activation (examine the example file activation_service.go), we are:

  • Using secrets service to validate token first

  • We are opening a transaction to create an initial record for the agent object. This part may be more service-specific

    in this case, we are associating an agent with a device from a different project, which is not typical here! More likely, you would need to associate the agent with a device from the same project.

  • We are creating several resources for our agent: a logging bucket, a metrics bucket, and finally a ServiceAccount with a key and RoleBinding.

  • We then ask the client to confirm activation; if confirmed, we save the agent in another transaction to associate it with the created objects (buckets and service account).

This activation example is, however, good at showing how to implement custom middleware, interact with other services, and create resources there.

Notable elements:

  • When creating a ServiceAccount, it is not usable at the beginning: you also need to create a ServiceAccountKey, along with a RoleBinding, so this ServiceAccount can do anything useful. We will discuss this example more in the IAM integration document.
  • Note that the ServiceAccount object has a meta owner reference set, pointing to the agent resource. It also gets the WithRequiresOwnerReference() attribute. It is highly advisable to create resources here in this way. The ServiceAccount is thereby bound to the agent resource: when the agent is deleted, the ServiceAccount is also deleted. Also, if Activation failed and the ServiceAccount was created, then the ServiceAccount will be cleaned up, along with the ServiceAccountKey and RoleBinding. Note that we talked about this when describing meta-owner references.
  • Logging and metrics buckets are also created using meta owner references; if the agent record is deleted, they will be cleaned up automatically. The usage of buckets specified per agent is required to ensure that agents cannot read data owned by others. This topic will be covered more in the document describing SPEKTRA Edge integration. If logging and/or metrics are not needed by the agent, they can be skipped.
  • All resources in SPEKTRA Edge created by Activation require the metadata.services field populated.

EnvRegistry usage and accessing other services/regions from the server backend

The envRegistry component is used for connecting the current runtime with other services/regions. It can also provide real-time updates on changes (like dynamic deployment of a service in a new region). Although those events are rare, dynamic updates help in those cases: we should not need to redeploy clusters in existing regions when we add a new deployment in a new region.

EnvRegistry can be used to find regional deployments and services.

It is worth recalling the difference between Deployment and Service: while a Service represents the service as a whole, with a public domain, a Deployment is a regional instance of the Service (a specific cluster).

The interface of EnvRegistry can be found here: https://github.com/cloudwan/goten/blob/main/runtime/env_registry/env_registry.go

You will encounter EnvRegistry usage throughout the examples; it is always constructed in the main file.

A notable thing about EnvRegistry is that all dial functions also have "FCtx" equivalents (like DialServiceInRegion and DialServiceInRegionFCtx). FCtx stands for Forward Context: we pass various headers from the previous call to the next one, like authorization or call ID. Usually, it is called from MultiRegion middleware, when headers need to be passed to the new call (especially Authorization). It has some restrictions, though: since services do not necessarily trust each other, forwarding authorization to another service may be rejected. MultiRegion routing is a different case, because a request is routed between different regions of the same service, meaning that the service being called stays the same.

As of now, envRegistry is available only for backend services. It may be enhanced in the future, so clients can just pass a bootstrap endpoint (the meta.goten.com service) and all other endpoints are discovered.
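For a feel of the API, here is a hedged sketch. The method name DialServiceInRegion comes from the interface linked above, but the import path, interface name, and argument list are assumptions; check env_registry.go for the real signatures.

import (
	"context"

	"google.golang.org/grpc"
)

// Assumed import, based on the linked file:
//   envregistry "github.com/cloudwan/goten/runtime/env_registry"
func dialIAMInRegion(ctx context.Context, envReg envregistry.EnvRegistry) (*grpc.ClientConn, error) {
	// The (service domain, region ID) argument list is an assumption.
	return envReg.DialServiceInRegion(ctx, "iam.edgelq.com", "eastus2")
}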

Store usage (database)

In the main.go files for servers, you will see a call to NewStoreBuilder. We typically add a cache and a constraints layer. Then we must add plugins (this list is for server runtimes):

  • Mandatory: MetaStorePlugin, plus the various sharding plugins (for all sharding types used)
  • Highly recommended: AuditStorePlugin and UsageStorePlugin.
  • Mandatory if multi-region features are used: SyncingDecoratorStorePlugin
  • Mandatory if you use the Limits service integration: V1ResourceAllocatorStorePlugin for the v1 limits version.

Such a constructed store handle already has all the functionality: Get, Search, Query, Save, Delete, List, Watch... However, it does not have type-safe equivalents for individual resources, like SaveRoleBinding, DeleteRoleBinding, etc. To have a nice wrapper, we have a set of As<ServiceShortName>Store functions that decorate a given store handle (a sketch follows below). Note that all collections must exist within a specified namespace.
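Following that convention, InventoryManager code could decorate the handle as sketched below; the package alias and interface name are assumptions derived from the naming scheme, not verified imports.

// Assumed imports:
//   gotenstore "github.com/cloudwan/goten/runtime/store"
//   ivmstore   - the hypothetical generated store package of the service.
func decorateStore(sh gotenstore.Store) ivmstore.InventoryManagerStore {
	// AsInventoryManagerStore follows the As<ServiceShortName>Store
	// convention and adds typed methods like GetDeviceOrder/SaveDeviceOrder.
	return ivmstore.AsInventoryManagerStore(sh)
}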

You need to call the WithStoreHandleOpts function on the Store interface before you can access the database. Typically, you should use one of the following: a snapshot transaction, or a cache-enabled no-transaction session:

import (
	"context"

	gotenstore "github.com/cloudwan/goten/runtime/store"
)

func withSnapshotTransaction(
	ctx context.Context,
	sh gotenstore.Store,
) error {
	return sh.WithStoreHandleOpts(ctx, func(ctx context.Context) error {
		var err error
		//
		// Here we use all Get, List, Save, Delete etc.
		//
		return err
	}, gotenstore.WithTransactionLevel(gotenstore.TransactionSnapshot))
}

func withNoTransaction(
	ctx context.Context,
	sh gotenstore.Store,
) error {
	return sh.WithStoreHandleOpts(ctx, func(ctx context.Context) error {
		var err error
		//
		// Here we use all Get, List etc.
		//
		return err
	}, gotenstore.WithReadOnly(), gotenstore.WithTransactionLevel(gotenstore.NoTransaction), gotenstore.WithCacheEnabled(true))
}

If you look at any transaction middleware, like here: https://github.com/cloudwan/inventory-manager-example/blob/master/server/v1/site/site_service.pb.middleware.tx.go, you should note that typically a transaction is already set per each call. It may be different if, in the api-skeleton file, you set the MANUAL type:

actions:
- name: SomeActionName
  withStoreHandle:
    transaction: MANUAL

In this case, the transaction middleware does not set anything, and you need to call WithStoreHandleOpts yourself. The MANUAL type is useful if you plan to have multiple micro transactions, as sketched below.
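A minimal sketch of such a manual flow, using the same WithStoreHandleOpts API as above (the body comments mark where service-specific logic goes):

import (
	"context"

	gotenstore "github.com/cloudwan/goten/runtime/store"
)

func manualFlow(ctx context.Context, sh gotenstore.Store) error {
	// First micro transaction. Remember that, within a transaction, all
	// reads must be collected before any writes.
	if err := sh.WithStoreHandleOpts(ctx, func(ctx context.Context) error {
		// ... read resources, then save the initial record ...
		return nil
	}, gotenstore.WithTransactionLevel(gotenstore.TransactionSnapshot)); err != nil {
		return err
	}

	// ... interact with other services between the micro transactions ...

	// Second micro transaction: conclude by saving the final state.
	return sh.WithStoreHandleOpts(ctx, func(ctx context.Context) error {
		// ... save final resources ...
		return nil
	}, gotenstore.WithTransactionLevel(gotenstore.TransactionSnapshot))
}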

Notes:

  • All Watch calls (singular and for collection) do NOT require WithStoreHandleOpts calls. They do not provide any transaction properties at all.
  • All read calls (Get, List, BatchGet, Search) must NOT be executed after ANY write (Save or Delete). You need to always collect all reads before making any writes.

Example usages can be found in https://github.com/cloudwan/inventory-manager-example/blob/master/server/v1/activation/activation_service.go

Note that the Activation service uses the MANUAL type; the middleware does not set any transaction.

Watching real-time updates

SPEKTRA Edge-based services heavily utilize the real-time watch functionality offered by Goten. There are three types of watches:

  • Single resource watch

    The client picks a specific resource by name and subscribes for real-time updates of it. Initially, it gets the current data object, then it gets an update whenever there is a change to it.

  • Stateful watch

    A stateful watch is used to watch a specific PAGE of resources in a given collection (ORDER BY + PAGE SIZE + CURSOR), where CURSOR typically means an offset from the beginning (but is more performant). This is more useful for web applications, when there is a need to show real-time updates of the page the user is on. It is possible to specify a filter object.

  • Stateless watch

    It is used to watch ALL resources matching a specified optional filter object. It is not possible to specify order or paging. Note this may overload the client with a large changeset if the filter is not set carefully.

For each resource, if you look at the <resource_name>_service.proto files, the API offers Watch<Single> or Watch<Collection>. The first one is for a single resource watch and is relatively simple to use (a sketch follows below). The collection watch type requires you to specify a param: STATELESS or STATEFUL. We recommend STATEFUL for web-type applications because of its paging features. STATELESS is recommended for edge applications that need to watch some sub-collection of resources. However, we do not recommend using the direct API in this particular case: the STATELESS watch, while powerful, may require clients to handle cases like resets or snapshot size checks. To hide this level of complexity, it is recommended to use the Watcher modules in access packages; each resource has a type-safe generated class.
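For a single resource watch, the flow is simple enough to sketch directly. All generated names below (client interface, request/response types, and the resapi package) are hypothetical lookalikes for an imaginary resource:

import (
	"context"
)

// watchSingleResource subscribes to real-time updates of one resource.
func watchSingleResource(ctx context.Context, client resapi.SomeResourceServiceClient, name string) error {
	stream, err := client.WatchSomeResource(ctx, &resapi.WatchSomeResourceRequest{Name: name})
	if err != nil {
		return err
	}
	for {
		resp, err := stream.Recv()
		if err != nil {
			return err // stream closed or broken
		}
		// The first message carries the current resource state; subsequent
		// messages carry modifications.
		_ = resp
	}
}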

This is reflected in tests from https://github.com/cloudwan/goten/blob/main/example/library/integration_tests/crud_test.go

There are 3 unit tests for various watches, and TestStatelessWatchAuthorsWithWatcher shows usage with the watcher.

Multi-Region development advice (for server AND clients)

Most of the multi-region features and instructions were discussed with the api-skeleton functionality. If you stick to the cases mentioned in the api-skeleton, then typically the code-generated multi-region routing will handle all the quirks. Similarly, the db-controller and MultiRegionPolicy objects will handle all cross-region synchronization.

Common advice for servers:

  • Easiest for multi-region routing are actions where isCollection and isPlural are both false.
  • Cases where isPlural is true and isCollection is false are not supported; we have built-in support for BatchGet, but custom methods will not fit. It is advised to avoid them if possible.
  • Plural and collection requests are somewhat supported: we support Watch, List, and Search requests, and customizations based on them are the easiest to support. You can look at an example like the ListPublicDevices method in the devices.edgelq.com service. There are certain conditions, however: the request object needs standard fields like parent and filter - the code-generation tool looks for these to implement multi-region routing. Pagination fields are optional. In the response, it is necessary to include an array of the returned resources. In the api-skeleton, it is necessary to provide responsePaths and point to the path where this list of resources is. If those conditions are met, you can implement various List variations yourself.
  • For streaming calls, you must allow multi-region routing using the first request from the client.

Common advice for clients:

It is advisable to avoid queries that will be routed or, worse, split & merged across multiple regions. Those queries should be exceptional, not the rule. One easy way to avoid splitting & merging is to query for resources within a single policy-holder resource (Service, Organization, or Project). For example, if you query for Distributions in a specific project, they will likely be synced across all project regions; if not, they will at least reside in the primary region for the project. This way, one or more regions will be able to execute the request fully.

If you query (with filter) across projects/organizations/services, you can:

  • For resources attached to regions (like the Device resource in the devices.edgelq.com service), you can query just a specific region across projects: ListDevices WHERE parent = "projects/-/regions/us-west2/devices/-". Note that the project is a wildcard, but the region is specific.
  • There is a syncing object within the metadata object of each resource. You can find it here: https://github.com/cloudwan/goten/blob/main/types/meta.proto - see the SyncingMeta object and its description. Now, if you filter by owningRegion, regardless of resource type and regardless of whether the resource is regional or not, a request with metadata.syncing.owningRegion will be routed to that specific region. Similarly, if you query with a metadata.syncing.regions CONTAINS condition, you can also ensure the request will be routed to a specific region. A query with a CONTAINS condition ensures that the client will see resources that the region can see anyway. The filter for owningRegion takes precedence over regions CONTAINS.

2.4 - Operating your Service

How to operate your SPEKTRA Edge service.

2.4.1 - Deploying your Service

How to deploy your service.

Once the service is developed well enough, you can deploy it. Here is a quick visual recap (regional deployment):

The large block on the right/top side (most of the image) is the SPEKTRA Edge-based service. Below, you have various applications (web browsers or Edge agents) that can communicate with the service or with core SPEKTRA Edge services (left).

Service backend deployment is what we focus on in this part. The blue parts in this block are the elements you had to develop: the three different binaries we discussed in this guide.

You will need to set up Networking & Ingress elements. Inside the cluster, you will need Deployments for API servers, controllers, and db-controllers. You will also need:

  • A database, of course. Core SPEKTRA Edge provides a database for logging and monitoring metrics, but for document storage, a NoSQL database is needed. We typically recommend MongoDB as a cloud-agnostic option, but Firestore may also be available for GCP.
  • A Redis instance, needed by the Node Managers of all Controllers (sharding!). Although the arrows are missing in the diagram, Redis can optionally also be used as a cache for the DB.

If possible, in a Kubernetes environment, it is highly recommended to use a HorizontalPodAutoscaler for the deployments.

In the Inventory Manager example, which we will talk about, we assume everything runs on a Kubernetes cluster. Configuration of kubectl is assumed to be part of general knowledge and is not SPEKTRA Edge specific. Refer to the documentation online for how to create a Kubernetes cluster and how to configure kubectl.

In the future, we may ship edgelq-lite images though, with instructions for local Kubernetes deployment.

Building images

We use docker build to ship images for backend services. We will use dockerfiles from Inventory Manager as examples.

You will need to build 4 images:

  • API Server (that you coded)
  • API Server Envoy proxy (part of API Server)
  • Controller
  • DbController

When making an API Server, each pod must contain two containers: one is the image of the server, which handles all gRPC calls. But, as we mentioned many times, we also need to support:

  • webGRPC, so web browsers can access the server too, not just native gRPC clients
  • REST API, for those who prefer this way of communication

This may be handled by an envoy proxy (https://www.envoyproxy.io/). They provide ready image sets. It handles webGRPC pretty much out of the box with a proper config. The REST API requires a little more work. We need to come back to the regenerate.sh file, as in the InventoryManager example (https://github.com/cloudwan/inventory-manager-example/blob/master/regenerate.sh).

Find the following part:

protoc \
    -I "${PROTOINCLUDE}" \
    "--descriptor_set_out=${INVENTORYMANAGERROOT}/proto/inventory_manager.pb" \
    "--include_source_info" \
    "--include_imports" \
    "${INVENTORYMANAGERROOT}"/proto/v1/*_service.proto \
    "${DIAGNOSTICSPATH}"/proto/v1/*_service.proto

This generates the file inventory_manager.pb, which contains service descriptors from all files in a service, plus optionally diagnostics (part of the SPEKTRA Edge repository) - if you want the gRPC service health check available from REST.

This generated pb file must be passed to the created envoy proxy image. See the dockerfile for this: https://github.com/cloudwan/inventory-manager-example/blob/master/build/serviceproxy.dockerfile

We require the argument SERVICE_PB_FILE, which must point to that pb file. During image building, it will be copied to /var/envoy. This concludes the process of building an envoy proxy for a service.

The remaining three images can be constructed often with the same dockerfile. For InventoryManager, we have: https://github.com/cloudwan/inventory-manager-example/blob/master/build/servicebk.dockerfile

This example, however, is quite generic and may fit many services. We have two Docker stages there. The first is for building: we use an image with the desired Golang version installed already, providing some build dependencies. This build stage must copy the code repository and execute the build of the main binary. Notice also the FIXTURES_DIR param, which MAY contain the path to the fixtures directory for your service. This must be passed when building controller images, not necessarily for server/db-controller ones.

In the second stage (service), we construct a simple image with a minimal env, plus the runtime binary, plus optionally the fixtures directory (/etc/lqd/fixtures).

For a reference on how variables may be populated, see the skaffold file example (we use Skaffold for our builds; it is a good tool that we recommend, though not mandatory): https://github.com/cloudwan/inventory-manager-example/blob/master/skaffold.yaml.

Note that we are passing the .gitconfig file there. This is mandatory to access private repositories (your service may be private; also, at the moment of this writing, goten and edgelq are private too!). You may also see the main README for SPEKTRA Edge: https://github.com/cloudwan/edgelq/blob/main/README.md, with more info about building. Since your process may be similar, you may need to configure your own .gitconfig.

Note that skaffold can be configured to push images to Azure, GCP, AWS - you name it.

Cluster preparedness

In your cluster, you need to prepare some machines that will host:

  • API Server with envoy proxy
  • Controller
  • DbController
  • Redis instance

You can deploy MongoDB yourself inside the cluster, or use managed services like MongoDB Atlas. If you use a managed cloud, then MongoDB Atlas can be used to deploy instances running in the same data center as your cluster.

When you get the MongoDB instance, remember its endpoint and get an authentication certificate. It is required to give admin privileges to the Mongo user: it will not only need to make reads/writes of regular resources, but also create databases and collections, configure these collections, and create and manage indices (from proto declarations to Mongo). This requires full access. It is recommended to make MongoDB closed and available from your cluster only!

An authentication certificate will be needed later during deployment, so keep it - as a PEM file.

If you use Firestore instead of MongoDB, you will need a service account that is also an admin in Firestore and has access to index management. You will need to get the Google credentials and remember the Google project ID.

Networking

When you made a reservation for the SPEKTRA Edge service domain (Service project and service domain name), you reserved the domain name of your service in the SPEKTRA Edge namespace, but it’s not an actual networking domain. For example, iam.edgelq.com is the name of a Service object in meta.goten.com, but this name is universal, shared by all production, staging, and development environments. To reach IAM, you will have a specific endpoint for a specific environment. For example, one common staging environment we have has the domain stg01b.edgelq.com - and the IAM endpoint is iam.stg01b.edgelq.com.

Therefore, if you reserved custom.edgelq.com on the SPEKTRA Edge platform, you may want to have a domain like someorg.com. Then, optionally you may have subdomains defined, per various env types:

  • dev.someorg.com

    and full endpoint may be custom.dev.someorg.com for development custom.edgelq.com service

  • stg.someorg.com

    and full endpoint may be custom.stg.someorg.com for staging custom.edgelq.com service

  • someorg.com

    and full endpoint may be custom.someorg.com for production custom.edgelq.com service

You will need to purchase the domain separately and this domain can be used for potentially many environments and applications reserved on the SPEKTRA Edge platform (custom, custom2, another…). You may host them on a single cluster as well.

Once you purchase let’s say someorg.com, and decide you want to use stg.someorg.com for staging environments, you will need to configure at least 2 endpoints for each SPEKTRA Edge service. One endpoint is a global one, the other one is a regional one.

Since SPEKTRA Edge is multi-region in its core, it is required to provide these two endpoints. Suppose you have custom.edgelq.com service reserved on SPEKTRA Edge platform, and you bought someorg.com, you will need the following endpoints:

  • custom.someorg.com

    global endpoint for your service

  • custom.<REGION>.someorg.com

    regional endpoint for your service in a specified region.

If your service is single-regional, then you will need in total two endpoints for a service. If you have 2 regions, then you will need three endpoints and so on.

To recap so far:

  • You will need to reserve an SPEKTRA Edge domain name (like custom.edgelq.com) on the SPEKTRA Edge platform. Then you may reserve more, like another.edgelq.com. Those will be just resources on the SPEKTRA Edge platform.

  • You will need to purchase a domain from the proper provider (like someorg.com), then optionally configure more subdomains to accommodate more env types if needed.

  • You will need to configure a global endpoint per each service (like custom.someorg.com, another.someorg.com).

  • You will need to configure a regional endpoint per each region (like custom.eastus2.someorg.com, another.eastus2.someorg.com).

Note that the domain for global endpoints here is someorg.com, for eastus2 it is eastus2.someorg.com.

Even if you don’t intend to have more than one region, it is required to have a regional domain - you can just use a CNAME record to make it resolve to the same address.

Let’s move to the public IPs part.

Regional and global domains must be resolved into public IP addresses you own/rent. Note that regional endpoints must be resolved into different IP addresses. The global endpoint may:

  • Use a separate IP address from the regional ones. This separate IP address will be an anycast address. It should still route the traffic to the nearest regional cluster.

  • Use DNS solution and allow the global domain to be resolved into one of the regional IP addresses according to the best local performance.

For a single-regional setup, you may make regional and global domains use the same IP address, and make a CNAME record.

Meaning, if you have endpoints:

  • custom.someorg.com, another.someorg.com

    They need to resolve to a single IP address. This IP address may be different, or equal to one of the regional endpoints.

  • custom.eastus2.someorg.com, another.eastus2.someorg.com

    those are regional endpoints, and each needs its own regional IP address. If you have more regions, then each requires a different IP address.

For each region, you will need different cluster deployments. Inside each cluster, you will need an Ingress object with all necessary certifications.

Networking setup is up to service maintainers; the setup may vary significantly depending on the cloud provider or on-premise setup. The required parts from SPEKTRA Edge’s point of view are around domain names.

Config files preparation

With images constructed, you need to prepare the following config files:

  • API Server config
  • Envoy proxy
  • Controller
  • Db Controller

As the Inventory manager example uses Kubernetes declarations, this may influence some aspects of config files! You will see some variables here and there. Refer to this file for more explanation along the way: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/env.properties

API Server

Example of API Server config for Inventory Manager: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/api-server-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/apiserver.proto

Review this config file along with this document.

From the top, by convention, we start with sharding information. We use ring size 16 as the standard; others are optional. You need to use the same naming conventions. Note that:

  • byName is mandatory ALWAYS
  • byProjectId is mandatory because in InventoryManager we use Project related resources
  • byServiceId is mandatory because in InventoryManager we use Service related resources
  • byIamScope is mandatory because we use byProjectId or byServiceId.

Below you have a “common” config, which applies to servers, controllers, and db-controllers, although some elements are specific to only one kind. There, we specify the gRPC server config (the most important part is the port, of course). There is an experimental web sockets part (for bidi-streaming support for web browsers exclusively). It needs to run on separate ports, but the underlying libraries/techniques are experimental and may or may not work. You may skip this if you don’t need bidi-streaming calls for web browsers.

After grpcServer, you can see the databases (dbs) part. Note that namespace convention:

  • The envs/$(ENV_NAME)-$(EDGELQ_REGION) part ensures that we may potentially run a single database for various environments on a single cluster. We adopted this from development environments, but you may skip this part entirely if you are certain you will run just a single environment in a single cluster.

  • The second part, inventory-manager/v1-1, first specifies the application (if you have multiple SPEKTRA Edge apps), then the version and revision (v1-1). “v1” refers to the API version of the service; “-1” refers to the revision part. If there is a completely new API version, we will need to synchronize (copy) databases during an upgrade. The revision part, -1, exists because there is also a possibility of an internal database format upgrade without API changes.

Other notable parts of the database:

  • We used the “mongo” backend.
  • We must specify an API version matching this DB.
  • You will need to provide the MONGO_ENDPOINT variable, Mongo deployment is not covered in this example.
  • Note that in the URL you have /etc/lqd/mongo/mongodb.pem specified. As of now, this file must be mounted on the pod during startup. In the future, it may be provided in different ways, though.

Instead of Mongo, you may also configure firestore:

dbs:
- namespace: "envs/$(ENV_NAME)-$(EDGELQ_REGION)/inventory-manager/v1-1"
  backend: "firestore"
  apiVersion: "v1"
  connectionPoolSize: $(INVENTORY_MANAGER_DB_CONN_POOL_SIZE)
  firestore:
    projectId: "$(GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

Of course, you will need to have these credentials and use them later in deployment.

Later, you have the dbCache configuration. We only support Redis for now. Note also the endpoint: for deployments like this, it should be an internal endpoint, reachable only from inside the cluster.
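A minimal sketch of what this may look like (key names are hypothetical; see the linked example config for the exact schema):

# Illustrative shape only - key names are hypothetical:
dbCache:
  backend: "redis"
  redis:
    endpoint: "redis:6379"  # internal endpoint, not exposed outside the cluster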

Further on, you have the authenticator part. The values AUTH0_TENANT, AUTH0_CLIENT_ID, and EDGELQ_DOMAIN must match those provided by the SPEKTRA Edge cluster you are deploying for. But you need to pay more attention to the serviceAccountIdTokenAudiencePrefixes value. There, you need to provide all private and public endpoints your service may encounter. The example provides:

  • one private endpoint visible inside the Kubernetes cluster only (the one ending in -service).
  • public regional endpoint
  • public global endpoint
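For illustration, such a list could look as below. The values are hypothetical placeholders (the domains reuse the ones from the TLS example later in this document) and must be replaced with your actual endpoints:

serviceAccountIdTokenAudiencePrefixes:
- "https://inventory-manager-service"                              # private, in-cluster
- "https://inventory-manager.eastus2.examples.dev04.nttclouds.co"  # public regional
- "https://inventory-manager.examples.dev04.nttclouds.co"          # public global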

Public endpoints must match those configured during the Networking stage!

After authenticator, you have observability settings. You can configure logger, Audit, and Usage there. The last two use audit.edgelq.com and monitoring.edgelq.com. You can also add tracing deployment. As of now, it can work for Jaeger and Google Tracing (GCP only):

Stackdriver example (note that you are responsible for providing the Google credentials path):

observability:
  tracing:
    exporter: "stackdriver"
    sample_probability: 0.001
    stackdriver:
      projectId: "$(GCP_PROJECT_ID)"
      credentialsFilePath: "/etc/lqd/gcloud/google-credentials.json"

The Jaeger variant, which as of now has hardcoded endpoints:

  • agentEndpointURI = “jaeger-agent:6831”
  • collectorEndpointURI = “http://jaeger-collector:14268/api/traces”
observability:
  tracing:
    exporter: "jaeger"
    sample_probability: 0.001

This means you will need to deploy Jaeger manually. Furthermore, you should be careful with sampling: a low probability is preferred, but it makes tracing an unsuitable tool for bug hunting. SPEKTRA Edge currently uses obsolete tracing instrumentation, but a proper one is on the roadmap. With it, this example will be enhanced.

After observability, you should see clientEnvironment. This used to be responsible for connecting with other services: it took the domain part and pre-pended short service names. With multi-domain environments, this is now obsolete. It remains for compatibility reasons and should point to your domain; it may be dropped in the future. The replacement is envRegistry, which is just below.

The env registry config (envRegistry) is one of the more important parts. You need to specify the current instance type and region information: which region is the current deployment’s, and which is the default one for your service. The default one must be the first region you deploy your service to. The sub-param service must be the same as the service domain name you reserved on the SPEKTRA Edge platform. Then you must provide global and regional (for this region) endpoints for your service. You may provide a private regional endpoint along with localNetworkId. The latter param has a value of your own choice; it is not equal to any resource ID created anywhere. It must merely be the same in all config files for all runtimes running on the same cluster, so they know they can safely use the private endpoint (for performance reasons). Finally, scoreCalculator and location are used for multi-region middleware routing: if it detects a request that needs to be routed somewhere else, and “somewhere else” can be more than one region, it uses these options to pick the best one.
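A rough sketch of this section is below. Only instanceType, service, and localNetworkId are taken from the description above; the remaining key names and all values are hypothetical, so treat this as orientation and follow the linked api-server-config.yaml:

# Illustrative shape only - most key names are hypothetical:
envRegistry:
  instanceType: "API_SERVER"  # CONTROLLER / DB_CONTROLLER in the other runtimes
  service: "inventory-manager.examples.dev04.nttclouds.co"  # reserved service domain
  region: "eastus2"           # hypothetical key: region of this deployment
  defaultRegion: "eastus2"    # hypothetical key: first region you deployed to
  globalEndpoint: "inventory-manager.examples.dev04.nttclouds.co"            # hypothetical key
  regionalEndpoint: "inventory-manager.eastus2.examples.dev04.nttclouds.co"  # hypothetical key
  privateEndpoint: "inventory-manager-service"  # hypothetical key, optional
  localNetworkId: "examples-cluster-eastus2"    # any value, same across this cluster's configs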

The next part, bootstrap, is necessary to configure EnvRegistry in the first place; it must point to the meta service endpoint, from which information about the whole SPEKTRA Edge environment is obtained.

The last common config parts are:

  • disableAuth: you should leave this false, but you may set it to true for some local debugging.

  • disableLimits: an old option used in the past for development; it typically needs to be false. It has no effect if limits integration was not done for a service.

  • Option enableStrictNaming enables strict IDs (32 chars max per ID; only a-z, 0-9, - and _ are allowed). This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

  • avoidResourceCreationOverride

    if true, then an attempt to send a Create request for an existing resource will result in an AlreadyExists error. This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

  • allowNotFoundOnResourceDeletion

    if true, then an attempt to send a Delete request for a non-existing resource will result in a NotFound error. This must always be true. The option exists only because of legacy SPEKTRA Edge environments.

Param nttCredentialsFile is a very important one: it must contain the path to the NTT credentials file you obtained when reserving the service on the SPEKTRA Edge platform.
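Collected together, the tail of the common config looks something like this (the credentials mount path is just an example; place these params wherever the example config places them):

disableAuth: false
disableLimits: false
enableStrictNaming: true
avoidResourceCreationOverride: true
allowNotFoundOnResourceDeletion: true
nttCredentialsFile: "/etc/lqd/ntt/credentials.json"  # example mount path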

Envoy proxy

Example of the Envoy proxy config for Inventory Manager: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/envoy.yaml

From a protocol point of view, the tasks of the envoy proxy are to:

  • Passthrough gRPC traffic
  • Convert webGRPC calls (made by web browsers) to gRPC ones.
  • Convert REST API (HTTP 1.1) calls to gRPC ones.

It also adds a TLS layer between Ingress and the API Server! Note that when a client outside the cluster communicates with your service, it does not connect to the service directly, but to the Ingress Controller sitting at the entry to your cluster. This Ingress handles TLS with the client, but a separate TLS connection to the API server is also required. Ingress maintains two connections: one to the end client, and the other to the API server. The Envoy proxy, sitting in the same Pod as the API Server, handles the upstream part of TLS. Note that in envoy.yaml you have the /etc/envoy/pem/ directory with TLS certs. You will need to provision them separately, in addition to the public certificate for Ingress.

Refer to the envoy proxy documentation for these files. From SPEKTRA Edge’s point of view, you may copy and paste this file from service to service. You will need, though, to:

  • Replace all “inventory-manager” strings with the proper service name.
  • Configure REST API transcoding on a case-by-case basis.

For this REST API, see the following config part:

- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    proto_descriptor: /var/envoy/inventory_manager.pb
    services:
    - ntt.inventory_manager.v1.ProjectService
    - ntt.inventory_manager.v1.DeviceModelService
    - ntt.inventory_manager.v1.DeviceOrderService
    - ntt.inventory_manager.v1.ReaderAgentService
    - ntt.inventory_manager.v1.RoomService
    - ntt.inventory_manager.v1.SiteService
    - ntt.mixins.diagnostics.v1.UtilityService
    print_options:
      add_whitespace: false
      always_print_primitive_fields: true
      always_print_enums_as_ints: false
      preserve_proto_field_names: false
- name: envoy.filters.http.grpc_web
- name: envoy.filters.http.router

If you go back to the Building images documentation part for the envoy proxy, you can see that we created the inventory_manager.pb file, which we included during the build process. We need to ensure this file is referenced in our envoy.yaml file, and that all services are listed. For your service, find all its API services and put them in this list; you can find them in the protobuf files. As of now, the Utility service contributes just this one extra API group.

If you study envoy.yaml as well, you should see that it has two listeners:

  • Port 8091 is for websockets (experimental; you should omit this if you don’t need bidi-streaming support for web browsers over websockets).
  • On port 8443 we serve the rest of the protocols (gRPC, webGRPC, REST API).

It forwards (proxies) traffic to the following ports (via the clusters settings):

  • 8080 for gRPC
  • 8092 for websockets-grpc

Note that those numbers match the ones in the API server config file! But when you configure the Kubernetes Service, you will need to use the envoy ports.

Controller

Look at the Inventory Manager example: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/controller-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/controller.proto

The top part, serverEnvironment, is very similar (actually, almost the same) to the commonConfig part in the API server config; we just specify fewer options, AND instanceType in envRegistry needs a different value (CONTROLLER). We don’t specify databases, gRPC servers, cache, or authenticator, and observability is smaller.

The next part, nodeRegistry, is required. It specifies the Redis instance that will be used by controller nodes to detect each other. Make sure to provide a unique namespace; don’t blindly copy and paste it to different controllers if you have more service backends!

Next, businessLogicNodes is required if you have a business logic controller in use. It is relatively simple: typically, we need to provide just the node’s name (for Redis registration purposes) and, most importantly, the sharding ring, which must match some value in the backend. You can specify the number of (virtual) nodes that fit into a single runtime process. A sketch of these parts follows below.

Param limitNodes is required if you use limits integration; you should just copy-paste those values, with the rings specified as in the example.

Finally, fixtureNodes were discussed in the SPEKTRA Edge registration doc, so we skip them here.
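For orientation, the controller config parts described above might look roughly like this sketch. The key names inside each section are hypothetical; consult the linked controller-config.yaml and controller.proto for the exact schema:

# Illustrative shape only - key names are hypothetical:
nodeRegistry:
  redis:
    endpoint: "redis:6379"
  namespace: "inventory-manager/controller"  # must be unique per controller type
businessLogicNodes:
- name: "business-logic"
  ring: "byProjectId"  # must match a sharding ring configured in the backend
  nodesCount: 16       # virtual nodes within a single runtime process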

Db controller

Look at the Inventory Manager example: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/db-controller-config.yaml

The proto-model can be found here: https://github.com/cloudwan/inventory-manager-example/blob/master/config/dbcontroller.proto

The top part, serverEnvironment, is very similar to the ones in the API server and controller. Unlike the server’s, it does not have parts for the gRPC server or authenticator. But it has database and cache options, because those are needed for database upgrades and multi-region syncing. Param instanceType in envRegistry must be equal to DB_CONTROLLER; otherwise, all is the same.

It needs a nodeRegistry config because it uses sharding with the other db-controllers in the same region and service.

The nodesCfg config is standard and must be used as in the example.

TLS

Let’s start with the TLS part.

There are two encrypted connections:

  • Between end client and Ingress (Downstream for Ingress, External)
  • Between Ingress and API Server (via Envoy - Upstream for Ingress, Internal).

It means we have separate connections, and each one needs encryption. For the external connection, we need a certificate that is public and signed by a trusted authority. There are many ways to obtain one; on clouds, we can likely get managed certificates, or optionally use LetsEncrypt services (cloud-agnostic). It is up to service developers to decide how to get them. They need to issue certificates for the regional and global endpoints. Refer to the LetsEncrypt documentation for how to set it up with Ingress if you need it, along with your choice of Ingress in the first place.

For the internal certificate, for connections to the API Server’s Envoy runtime, we need just a self-signed certificate. If we are in a Kubernetes cluster and have a ClusterIssuer for self-signed certs, we can create one (assuming the Inventory Manager service, namespace examples, and region ID eastus2):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: inventory-manager.eastus2.examples-cert
  namespace: examples
spec:
  secretName: inventory-manager.eastus2.examples-cert
  duration: 87600h # 10 years
  renewBefore: 360h # 15 days
  privateKey:
    algorithm: RSA
    size: 2048
  usages:
  - server auth
  - digital signature
  - key encipherment
  dnsNames:
  - "inventory-manager.examples.svc.cluster.local"
  - "inventory-manager.examples.pod.cluster.local"
  - "inventory-manager.eastus2.examples.dev04.nttclouds.co"
  - "inventory-manager.examples.dev04.nttclouds.co"
  issuerRef:
    name: selfsigned-clusterissuer
    kind: ClusterIssuer

Note that you need the selfsigned-clusterissuer component ready; on the internet, there are examples of how to make a cluster issuer like that.

With the created Certificate, you can get pem/crt files:

kubectl get secret "inventory-manager.eastus2.examples-cert" --namespace examples -o json | jq -r '.data."tls.key"' | base64 --decode > "./server-key.pem"
kubectl get secret "inventory-manager.eastus2.examples-cert" --namespace examples -o json | jq -r '.data."tls.crt"' | base64 --decode > "./server.crt"

You will need these files for the upstream TLS connection - keep them.

Deployment manifests

For the Inventory Manager example, we should start examining the deployment from the kustomization file: https://github.com/cloudwan/inventory-manager-example/blob/master/deployment/kustomization.yaml

This contains the full deployment (except secret files and the Ingress object); you may copy, study, and modify its contents for your case. Ingress requires additional configuration (a sketch is provided later in this section).

Images

In the given example, the code contains my development image registry, so you will need to replace the images with your own. Otherwise, it is straightforward to understand.

Resources - Deployments and main Service

We have full yaml deployments for all runtimes - note that the apiserver.yaml file has a Deployment with 2 containers, one for the API Server and the other for the Envoy proxy.

All deployments have relevant pod auto-scalers (except Redis, to avoid synchronization across pods). You may, though, also deploy Redis as a managed service; in the yaml config files for the API server, controller, and db-controller, just replace the endpoint!

In this file, you also have a Service object at the bottom, which exposes two ports: one HTTPS (443), which redirects traffic to the envoy proxy on 8443 and serves gRPC, grpc-web, and the REST API; the other is experimental, for websockets only, and may be omitted. This is the Service you will need to provide to Ingress to have a full setup. When you construct an Ingress, you will need to redirect traffic to the “inventory-manager-service” k8s Service (but replace the inventory-manager- prefix with something valid for you). If you ask why, since metadata.name is service, the reason is the following line in kustomization.yaml:

namePrefix: inventory-manager-

This is pre-pended to all resource names in this directory.
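Since the Ingress object is not part of the example manifests, below is a rough sketch of what one could look like when using ingress-nginx (the annotation set depends on your Ingress controller of choice, and the domain names are placeholders reusing the TLS example above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: inventory-manager-ingress
  namespace: examples
  annotations:
    # ingress-nginx: the upstream (envoy) terminates TLS and speaks gRPC/HTTP2
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "inventory-manager.eastus2.examples.dev04.nttclouds.co"
    - "inventory-manager.examples.dev04.nttclouds.co"
    secretName: inventory-manager-public-cert  # public cert, e.g. from LetsEncrypt
  rules:
  - host: "inventory-manager.eastus2.examples.dev04.nttclouds.co"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: inventory-manager-service  # the k8s Service discussed above
            port:
              number: 443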

When adopting these files, you need to:

  • Replace the “inventory-manager-” prefix in all places with a valid value for your service.

  • Fix container image names (inventorymanagerserverproxy, inventorymanagerserver, inventorymanagercontroller, inventorymanagerdbcontroller) in yaml files AND kustomization.yaml

    images should point to your image registry!

Config generator, configuration, and vars

In the kustomization, you should see a config generator that loads config maps for all 4 images. However, we also need to take care of all the variables using the $(VAR_NAME) format. First, we declare configurations pointing to params.yaml. Then we declare a full list of vars. These will be populated with the config map generator:

- name: examplesenv
  envs:
  - env.properties

And now we can use config files for replacements.

Secrets recap

Param secretGenerator in kustomization.yaml should recap all secret files we need:

  • We have 2 TLS files for self-signed certificates, for the internal connection between Ingress and the API Server’s Envoy.

  • We have credentials to MongoDB. These must be obtained for Mongo. You may opt for Firestore if you can and prefer it, in which case you need to replace them with Google creds.

  • Finally, we have the NTT credentials.

    These must have been obtained when you initially reserved the Service on the SPEKTRA Edge platform, using the UI or cuttle - see Setting up Environment.

2.4.2 - Migrating your Service to a New Version

How to migrate your service to the new version.

When we talk about Service versioning, we don’t mean simple cases like:

  • Adding a new field to an existing request/response/resource, as long as it does not break old logic.
  • Removing a field from an existing request/response/resource, as long as the backend service does not need it.

These operations can be done directly in the protobuf files:

message SomeRequest {
  string old_field_a = 1;
  
  int32 old_field_b = 2;
  
  // We can add new field just by assigning new proto wire number to it:
  repeated int32 new_array_field = 3;
}

This is an example of how to properly remove a field:

message SomeRequest {
  // This is the former old_field_a. Reserving its number ensures that new
  // fields won't take the '1' ID.
  reserved 1;
  
  int32 old_field_b = 2;
  
  repeated int32 new_array_field = 3;
}

In protobufs, fields are identified by their numbers, and the format is designed to simplify adding/removing elements. When we remove a field, we should just make its number reserved. The backend service will ignore the field with ID “1”, even if the client app is still sending it. Marking the number reserved also ensures that developers do not introduce yet another field reusing a previously discarded proto number while old clients may still be running, avoiding clashes.

What else can be done in a backward-compatible manner:

  • You can add a new resource type entirely
  • You can add a new API group or action in the API-skeleton.
  • You can add a new parent to an existing resource, unless it is the first parent (i.e., previously the resource was a root). Note this does not extend to scope attributes! Adding them is a breaking change.

You can even:

  • Rename ENUM values in protobuf (as long as you don’t change their numbers)
  • Rename field names in protobuf (we support this for MongoDB only though!)

These renamings will break code if someone updates the library version, but the API will stay compatible, including the database format if you use MongoDB.

If your changes are limited to the cases mentioned above, you can stop here and just modify your service normally, in the current API-skeleton and current proto files.

Here, we discuss API breaking changes, like:

  • Adding the first resource parent to an existing resource (Status: Supported, but may require some little tricks)
  • Changing a resource name (Supported)
  • Replacing one field type with another, or splitting/merging fields somehow (Supported, but can be improved)
  • Merging two resource types into one, or splitting one resource type into two (NOT YET Supported)
  • Adding a new scope attribute; for example, previously non-regional resources can now be regional (Supported)
  • Splitting one resource instance into many, or the other way around (with some hacks; we can provide tips on how to do it)
  • Upgrading the version of an imported service, which is considered a breaking change (Supported, BUT has hidden traps; needs improvement)

These things are hard - while in Goten we strive to provide a framework for them, there is still much to do. Some of those cases are not even supported yet; they are only planned.

Therefore, you need to know that some things WILL BE changed in future Goten releases to improve the versioning experience. We will try to do this before any actual 3rd party needs serious versioning; as of this moment, versioning has been needed only internally, and there are no official 3rd party services yet (we have not even released 3rd parties to production as of this writing).

We don’t have an “example” of versioning for Inventory Manager; there is only one version there. But we can show you the last simple versioning example in the core SPEKTRA Edge services: the secrets service, where we upgraded v1alpha2 to v1: https://github.com/cloudwan/edgelq/tree/main/secrets. You may read this document while observing how it was done in Secrets.

How does it work

When we have a breaking change, the service backend actually “doubles” in size: it will start to offer the new API while the old one is still running. Essentially, it will expose two APIs. Since the version number is present in ALL URL paths, the gRPC server can support all of them at once. However, you will need to somehow maintain 2 instances. The new API you maintain and develop normally; the old API may need bug fixes only, with no new development.

Once you upgrade your server, your service will process requests in the OLD API in the following way:

  • The client sends a request/stream call in the old API. It reaches envoy, which passes it through, or converts it to gRPC if needed, as it always did (no change here). The request/stream reaches your service.
  • The server gets a request/stream in the old API. It goes through interceptors as normal (including Authentication), since interceptors are common to all calls, regardless of method and version. This is the same as in the old processing.
  • The first difference is the first middleware. New API requests get normally to the first middleware: multi-region routing. BUT old API requests instead hit the TRANSFORMER middleware. During this process, all requests are converted to the new API version, or you can provide your own handling. The transformer middleware then passes the request in the new format to the multi-region routing middleware of the NEW server. When the multi-region middleware of the new server returns a response, it is converted to the OLD version by the transformer middleware of the old server. Streams are also converted: the old stream is wrapped with a transforming stream that converts all requests/responses on the fly. Again, the transformer middleware does all the conversions.

Note the significance of this transformer - basically, all old API requests are treated as new API ones. When they access the database in read or write mode, they operate on new resource instances. A database upgrade is a separate thing to consider, and it is described later in this document, in the Upgrade process.

There are some notable remarks about the observability modules:

  • The Usage component hidden in the framework will still count usage using the old version, despite the transformer middleware. This is helpful because we can easily check if someone is still using the old API.
  • Audit is altered significantly. Resource change logs will be reported only using the new API (unfortunately for projects using the old version, perhaps). But Activity Logs will contain requests/responses in the older format.

Audit is very tricky - once the format of a request/response/resource is saved in the Audit storage, it is there. Audit does not know about service versioning and does not know how to transform between versions. It is assumed that projects/organizations may switch to the new APIs on their own. If they use the old version, Activity logs will use the old format, and that is the format they will see. Resource change logs will require further work. Once a project/organization switches, they should be aware of both versions and can therefore read both formats.

Defining new API-skeleton and prototyping versioning

Breaking changes cannot normally be accepted - therefore, we track versions in api-skeletons. We must always provide the currentVersion param. Suppose we have the v1 version, and now we want v2. First, we need to open the api-skeleton-v1.yaml file and provide the following param:

name: somename.edgelq.com
proto:
  # Rest of the fields are omitted...
  package:
    currentVersion: v1
    nextVersion: v2

We must at least indicate what the next version is. In the regenerate.sh file, we need to actually call bootstrap twice:

goten-bootstrap -i "${SERVICEPATH}/proto/api-skeleton-v1.yaml" \
  -o "${SERVICEPATH}/proto" \
  -n "${SERVICEPATH}/proto/api-skeleton-v2.yaml" [... OLD imports here...]

goten-bootstrap -i "${SERVICEPATH}/proto/api-skeleton-v2.yaml" \
  -o "${SERVICEPATH}/proto" [... NEW imports here...]

# Your life will be easier if you also format them:
clang-format-12 -i "${SERVICEPATH}"/proto/v1/*.proto
clang-format-12 -i "${SERVICEPATH}"/proto/v2/*.proto

Note that, when we call bootstrap for the older file, we must provide a path to the new one. The new api-skeleton file must be written as a new file; there should be no annotations or traces of the old API-skeleton (other than accommodating what is needed to keep supporting the old API).

During version upgrades, we can (and it is highly recommended to) upgrade the versions of the services we import. This can be done only in the context of the upgraded API version.

This describes the minimal updates to the old api-skeleton file. However, we can achieve some level of customization of the versioning by modifying the old api-skeleton.

We can define extra instructions for versioning. For resources, we can:

resources:
- name: OldResourceName
  versioning:
    # This can be omitted, if we don't change resource name, or we want to discontinue resource.
    replacement: NewResourceName

    # In practice, I don't know of any cases where the options below were actually
    # needed by us, but we can potentially opt out of some automatic versioning...

    # With this, Goten will not provide automatic versioning of create request at all. This is more likely
    # to be needed by developers, if there is some special handling there.
    skipTransformersBasicActions:
    - CreateOldResourceName
  
    # Old store access by default will always try to support all store operations on old API resources, it provides
    # automatic conversion. But you can opt out here:
    skipAccessTransformer: true

    # You can skip the automatic conversion of OldResourceNameChange objects... it
    # will render Watch methods non-working, though... Personally, I think it may
    # even be removed as an option.
    skipResourceChangeTransformers: true

For actions in the API-skeleton, if we want to change their names, we can point this out to the Goten compiler using the API-skeleton again (the old API-skeleton file):

actions:
- name: OldActionName
  versioning:
    # This can be omitted, if we don't change action name, or we want to discontinue action at all.
    # NewApiGroupName may be omitted if this is same resource/api group as before.
    replacement: NewApiGroupName/NewActionName

Let’s review quickly what was done for the Secrets service (v1alpha2 - v1) upgrade. This is v1alpha2 api-skeleton: https://github.com/cloudwan/edgelq/blob/main/secrets/proto/api-skeleton-v1alpha2.yaml.

Note that nextVersion points to v1. We did not do any customizations here, it was not needed. Then we defined v1 api-skeleton: https://github.com/cloudwan/edgelq/blob/main/secrets/proto/api-skeleton-v1.yaml.

What we changed in a breaking way:

  • Resource Secret is now regional. Therefore, if we had a resource like projects/p0/secrets/s0, it would now be projects/p0/regions/some/secrets/s0.

We need to think about how to handle this kind of change. What is some? How do we convert Get and BatchGet requests? How do we convert existing resources, or handle List requests using filter fields? We have LIST WHERE parent = projects/p0, which now needs to become LIST WHERE parent = projects/p0/regions/- or maybe LIST WHERE parent = projects/p0/regions/some? Also, if there is another service importing us, and they upgrade the version of the secrets they import, how is this handled?

We used a trick here: we know that, during the upgrade of Secrets from v1alpha2 to v1, all our environments were single-regional. Therefore, we can assume that region is some constant value. We provide this in the transformer converting secret references to the new format. All old clients will keep using secrets from their existing single regions, while new clients in new regions will be required to use the new API only. The same trick can be done for services that started single-region but have second thoughts when going multi-region.

We also added a CryptoKey resource, but that is non-breaking: this new resource type is available only in the new API anyway. In the regenerate.sh file, we added a second call to goten-bootstrap: https://github.com/cloudwan/edgelq/blob/main/secrets/regenerate.sh.

Versioning on proto annotations level

Once you have the new API-skeleton, have provided the necessary changes to the old API-skeleton, have modified the calls to goten-bootstrap, and finally have called goten-bootstrap for BOTH API-skeletons, you will have generated:

  • A full set of proto files in the proto/$NEW_VERSION directory. You will need to fill in all request/response/resource bodies as normal. This is not covered here; you will probably need to copy contents from the old files to the new ones and make modifications where necessary.
  • In the proto/$OLD_VERSION directory, you should discover a new file: <service_short_name>_versioning.proto.

Give it a short examination. There is a file-level annotation describing the versioning of this service:

option (goten.annotations.service_versioning) = {
  // We will have more methods generated, for each API group, for each method...
  methods : [{
    original_method : "$OLD_API_GROUP/$OLD_METHOD"
    replacement : "$NEW_API_GROUP/$NEW_METHOD"
  }]
  
  // Again, we may have many proto objects provided, but template for single one.
  // Object may be an instance of request, response, resource, or anything else!
  //
  // For any object NOT mentioned here, the following default is assumed, provided that
  // new object is found somewhere in new API proto package:
  //
  // {
  //  object: $OBJECT_NAME
  //  replacement: $OBJECT_NAME
  //  transformation_direction: BIDIRECTIONAL
  // }
  objects : [
    {
      // We can assume that old and new object name usually are same, but not always.
      object : "$OLD_OBJECT_NAME"
      replacement : "$NEW_OBJECT_NAME"
      
      // To reduce generated transformers code, we can use FROM_NEW_TO_OLD or FROM_OLD_TO_NEW.
      // This is used typically for responses/requests objects. We will need to convert old API
      // request to new API, but never other way around. Therefore, no need for extra generation.
      // DISABLED should be used to explicitly disable conversion of particular object.
      // BIDIRECTIONAL should be used by resources and all sub-types they use.
      transformation_direction : BIDIRECTIONAL // OR may be FROM_NEW_TO_OLD, FROM_OLD_TO_NEW, DISABLED
      
      // These options below probably should be considered obsolete and not used!
      // If this is true, then field path helper objects are not transformed...
      // If you don't understand, probably you dont need this option.
      skip_field_path_transformers : false

      // Skip generation of transformer for Store access.
      skip_resource_access_transformer : true
    }
  ]
};

This versioning file is generated only once, based on the api-skeleton; it is assumed that the developer may modify it manually. If you made further changes to the api-skeleton and you don’t have manual modifications, you should delete this file first, so it is regenerated.

Once you have filled in all proto files in the new API and ensured you are happy with the versioning in general, you should further modify the regenerate.sh file: you must include a new protoc plugin in the list, PLUS add the list of new proto files as input!

protoc \
    -I "${PROTOINCLUDE}" \
    "--goten-go_out=:${GOGENPATH}" \
    "--goten-validate_out=${GOGENPATH}" \
    "--goten-object_out=:${GOGENPATH}" \
    "--goten-resource_out=:${GOGENPATH}" \
    "--goten-store_out=datastore=firestore:${GOGENPATH}" \
    "--goten-client_out=${GOGENPATH}" \
    "--goten-access_out=${GOGENPATH}" \
    "--goten-server_out=lang=:${GOGENPATH}" \
    "--goten-cli_out=${GOGENPATH}" \
    "--edgelq-doc_out=service=${SERVICE_SHORT_NAME}:${SERVICEPATH}/docs/apis" \
    "--ntt-iam_out=lang=:${GOGENPATH}" \
    "--ntt-audit_out=:${GOGENPATH}" \
    "--goten-versioning_out=:${GOGENPATH}" \
    "${SERVICEPATH}"/proto/v1/*.proto "${SERVICEPATH}"/proto/v2/*.proto

There are 2 additions:

  • You must have "--goten-versioning_out=:${GOGENPATH}" in the list!
  • Besides "${SERVICEPATH}"/proto/v1/*.proto, you also MUST include the new version proto files: "${SERVICEPATH}"/proto/v2/*.proto.

When you generate the pb file for the REST API descriptors, you now also need to provide both directories:

protoc \
    -I "${PROTOINCLUDE}" \
    "--descriptor_set_out=${SERVICEPATH}/proto/${SERVICE_SHORT_NAME_LOWER_CASE}.pb" \
    "--include_source_info" \
    "--include_imports" \
    "${SERVICEPATH}"/proto/v1/*_service.proto \
    "${SERVICEPATH}"/proto/v2/*_service.proto \
    "${DIAGNOSTICSPATH}"/proto/v1/*_service.proto

With the new pb file, to enable the REST API for both versions, you will need to modify envoy.yaml and provide the list of API services for transcoding. Unfortunately, envoy is not able to figure this out by itself. You may need to maintain multiple envoy.yaml files: for backends with the new version and for backends without it.

This is all regarding the regenerate.sh file.

Let’s have a quick view of the Secrets versioning we described before. Here you can see the versioning proto file: https://github.com/cloudwan/edgelq/blob/main/secrets/proto/v1alpha2/secrets_versioning.proto. Then again see the regenerate file, with extra protoc calls and more files provided: https://github.com/cloudwan/edgelq/blob/main/secrets/regenerate.sh.

During this upgrade, we also bumped the diagnostics mixin API, but it’s not important here.

Overview of generated code and implementation

Once you regenerate the service, you will have “double” the code size. Several directories of your service repository will have two subdirectories, for example v1 and v2. Those directories are access, audithandlers, cli, client, fixtures, proto, resources, server, and store.

Treat the directories for the new version as already-known topics; it is the task of the older version to know how to transform to the new version, not the other way around. In this regard, you should first provide an implementation in the new version directories: resources, client, server, etc. You may start by copying handwritten Go files from the old version directories to the new ones, then make all necessary modifications. Ideally, you should have the new version fully developed first, without looking at the old one (apart from keeping in mind that you will later need to provide transformers for compatibility). Do not touch the cmd/ directory yet; it’s the last part you should work on.

Versioning module and transformers

When you have the new version in place, you may first look at all the new files that appeared for the old version. First, look at the new directory created: versioning/v1 (if v1 is the old version). It has several subdirectories, for all resources and API groups. API groups may be a little less visible at first, because for each resource we have an implicit API group sharing the same name. But if you examine the files, you should see the following pattern:

versioning/:
  $OLD_VERSION/
    $API_GROUP_NAME/
      <api_name>_service.pb.transformer.go
    $RESOURCE_NAME/
      <resource_name>.pb.access.go
      <resource_name>.pb.transformer.go
      <resource_name>_change.pb.transformer.go

Since each resource has an API group with the same name, you will often see a directory with four generated files. You can look around the versioning for secrets, since it is simple: https://github.com/cloudwan/edgelq/tree/main/secrets/versioning/v1alpha2.

All files ending with pb.transformer.go are standard transformer files. They contain one transformer struct definition per protobuf object defined in the corresponding proto file. Therefore, the <api_name>_service.pb.transformer.go files contain transformers for requests and responses, the <resource_name>.pb.transformer.go files contain transformers for resources, and <resource_name>_change.pb.transformer.go for Change objects.

Let’s start with the resource transformer, for the Secret resource: https://github.com/cloudwan/edgelq/blob/main/secrets/versioning/v1alpha2/secret/secret.pb.transformer.go.

Note that we first have an interface, and then a default implementation of that interface. The main parts to look at:

var (
    registeredSecretTransformer SecretTransformer
)

func SetSecretTransformer(transformer SecretTransformer) {
    ...
}

func GetSecretTransformer() SecretTransformer {
    ...
}

type SecretTransformer interface {
	...
}

type secretTransformer struct{}

We have a global transformer (for the package), and we can get/set it via functions. There is a reason for that, which will be explained shortly.

If you look at the interface, though, you will see transformer functions for the Secret resource, between versions v1 and v1alpha2. Additionally, you will also see functions for transforming all “helper” objects: name, reference, field path, field mask, filter, field path value, etc. All those functions are also doubled for full bidirectional support. Still, they concentrate on a single object.

Before we jump to some transformation examples, let’s recap one thing about Golang: it has interfaces, and you can “cast” an implementing struct into an interface, but you don’t have polymorphism. Suppose you defined a struct “inheriting” another one and “overrode” one of its methods; let’s call it A. Now, imagine that the parent struct has a method called B, which calls A internally. With polymorphism, your implementation would be called, but not in Golang. With this in mind, let’s see the current function for transforming a Secret resource from v1alpha2 to v1:

func (t *secretTransformer) SecretToV1(
    ctx context.Context,
    src *secret.Secret,
) (*v1_secret.Secret, error) {
	if src == nil {
		return nil, nil
	}
	dst := &v1_secret.Secret{}
	trName, err := GetSecretTransformer().SecretNameToV1(ctx, src.GetName())
	if err != nil {
		return nil, err
	}
	dst.Name = trName
	dst.EncData = src.GetEncData()
	dst.Data = src.GetData()
	dst.Metadata = src.GetMetadata()
	return dst, nil
}

If we subclassed secretTransformer and overrode SecretNameToV1, then inside SecretToV1 the old implementation would still be called if the code were written like:

trName, err := t.SecretNameToV1(ctx, src.GetName())

Since this is not desired, we decided to always get the globally registered transformer when calling other transformer functions, including our own. Therefore, transformers use a global registry (although they are still packaged per version). There may have been another solution, perhaps, but this works fine.

When you want to override a transformer, you need to create another file and implement the transformer there, inheriting from the base one. You should provide the minimal required implementation. Your custom transformer will need to be exported.
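As an illustration, a custom transformer applying the constant-region trick mentioned earlier for Secrets could look roughly like the sketch below. It is hypothetical: the Set/Get functions and the embedded interface come from the generated package shown above, but the import paths, the Reference helpers (String, ParseReference), and the exact signature of SecretReferenceToV1 are assumptions, not the actual Secrets code:

package secret

import (
	"context"
	"fmt"
	"strings"

	// hypothetical import paths:
	secret "github.com/cloudwan/edgelq/secrets/resources/v1alpha2/secret"
	v1_secret "github.com/cloudwan/edgelq/secrets/resources/v1/secret"
)

// All environments were single-regional at upgrade time, so the region
// segment can be a constant.
const constRegionID = "some"

// CustomSecretTransformer embeds the registered default implementation and
// overrides only the functions that need special handling.
type CustomSecretTransformer struct {
	SecretTransformer // generated interface, satisfied by the default transformer
}

func (t *CustomSecretTransformer) SecretReferenceToV1(
	ctx context.Context, src *secret.Reference,
) (*v1_secret.Reference, error) {
	if src == nil {
		return nil, nil
	}
	// v1alpha2: projects/{p}/secrets/{s} -> v1: projects/{p}/regions/{r}/secrets/{s}
	parts := strings.SplitN(src.String(), "/secrets/", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("unexpected secret reference: %s", src.String())
	}
	name := fmt.Sprintf("%s/regions/%s/secrets/%s", parts[0], constRegionID, parts[1])
	return v1_secret.ParseReference(name) // hypothetical helper name
}

It could then be registered, for example from an init function, with:

SetSecretTransformer(&CustomSecretTransformer{SecretTransformer: GetSecretTransformer()})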

If you look at the other files across the Secrets versioning (the other transformers, not the pb.access.go file!), you should see that they implement much smaller interfaces - usually just objects back and forth. Resources are the ones with the largest number of methods, but they follow the same principles.

Overall, you should notice that there is some hierarchy in these transformation calls.

For example, SecretToV1 needs SecretNameToV1, because the name field is part of the resource. SecretNameToV1 actually needs SecretReferenceToV1. Then SecretFieldMaskToV1 needs SecretFieldPathToV1. Next, SecretFilterToV1 needs SecretFieldPathValueToV1 etc.

Filters and field masks are especially important - transformations like ListSecretsRequestToV1 rely on them! In other words, if we have some special conversion of a specific field path within the resource, and we want to support filter conversions (and field masks), then we need to override the relevant transformer functions:

  • <Name>To<Version>

    for object transformation itself. We need to convert fields that cannot be code-generated.

  • <Name>FieldPathTo<Version>

    for field mask transformations, we need to provide mapping for field paths that were not auto-generated.

  • <Name>FieldPathValueTo<Version>

    for filter COMPARE conditions, for non-auto generated field path values.

  • <Name>FieldPathArrayOfValuesTo<Version>

    for filter IN conditions (!), for non-auto generated field path values.

  • <Name>FieldPathArrayItemValueTo<Version>

    for filter CONTAINS conditions, if the field needing special treatment is an array and code-gen was not available.

Filters are pretty complex; they are, after all, a set of conditions, and each condition is a combination of some field path value with an operator!

For secrets, we did not change any fields; we changed just the name field patterns, by adding a region segment. Because of this, we needed to override only the Reference and ParentReference transformers (for both versions). Name transformers call the reference ones, so we skipped them. WARNING: to be honest, it should be the other way around; name is basic, and reference is on top. This is one of the versioning parts that will be subject to change, at least while versioning is used only by our team and no 3rd party services exist at this point. The ParentReference type is also considered obsolete and will be removed entirely.

What is at least good is that those Reference/Name transformers will be used by resource transformers, filters, requests in all CRUD methods, etc.

Also, our transformer functions will be used by all resources having references to the Secret resource! This means that if we have a resource like:

message OtherResource {
  string secret_ref = 1 [(goten.annotations.type).reference = {
    resource: "secrets.edgelq.com/Secret"
    target_delete_behavior : BLOCK
  }];
}

If this resource belonged to a service that is upgrading the Secrets version, the maintainers of that service would not have to worry about the transformation at all. Instead, they would need to import our versioning package, and it would be done for them.

Still, there are some areas for improvement. Note that field changes within resources, if they are breaking changes, require plenty of work - up to five transformer functions (those field paths, field path values for filters…), and even 10, because we need bidirectional transformation. In the future, we will have a special transformation function mapping one field path to another with value transformation - for both directions, so two functions in total. All those transformer functions will then use it.

When it comes to transformers, the code-gen compiler tries to match fields in the following way: if they share the same type (like int32 to int32, string to string, repeated string to repeated string) and proto wire number, then we have a match. Fields are allowed to change names. Any number/type change requires a handwritten transformation.
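To make this matching rule concrete, a hypothetical pair of versions might behave like this:

// v1:
message Device {
  string display_name = 5;
  int32 port = 6;
}

// v2:
message Device {
  // Same type and wire number as display_name: matched automatically,
  // despite the rename.
  string title = 5;

  // Same number, but int32 -> string: requires handwritten transformer
  // functions.
  string port = 6;
}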

Reference/name transformations require the same underlying type and name pattern.

Using transformers, we can construct access objects, like in https://github.com/cloudwan/edgelq/blob/main/secrets/versioning/v1alpha2/secret/secret.pb.access.go.

It takes the access interface of the new objects and wraps it to provide the old ones.

Transformers provide some flexibility in transforming objects into larger or smaller ones, but they lack plenty of abilities. You cannot convert one object into two or more, or the other way around. Access to the database during transformation was possible in the past, but so far it has not been necessary and, what is more problematic, it is prone to bugs. The roadmap now predicts different mechanisms, and it is advised to provide the transformations that are possible: they should convert one item to another, and any “sub” item should be delegated to another transformer.

Once you have all transformers for the given version, it is highly recommended to wrap their initialization in a single module. For example, for secrets we have https://github.com/cloudwan/edgelq/blob/main/secrets/versioning/v1alpha2/secrets/registration.go.

We import all versioning packages. If there is any registration using the Go init function, we can “dummy” import with an underscore “_”. Otherwise, we need a registration function with arguments. Any runtime that needs those transformers will have to call this whole-service registration function. Those runtimes are the server and dbController of the versioned service AND all servers/dbControllers of the importing services. For example, the service applications.edgelq.com imports secrets.edgelq.com, so its server and dbController will need to load the secrets versioning modules.

Store transforming

It may be useful to have a handle to the new store and “cast” it to the old one. This way, you can interact with the new database data via the old API. Goten generates a structure that provides exactly that. For secrets, you can see it here: https://github.com/cloudwan/edgelq/blob/main/secrets/store/v1alpha2/secrets/secrets.pb.transformer.go. It takes the interface to the new store and provides the old one. It uses the generated transformers from the versioning packages.

This is an extra file Goten provides in older version packages.

Normally, if you have good transformers, this does not need any extra work.

Server transformer middleware

Server transformer middleware may be considered a final part of API transformation. It receives requests in the old format, transforms them into new ones, and passes them to the new middleware chain. It uses transformer objects from versioning packages.

Goten generates this middleware automatically in each of the server packages. For the secrets service, you can find it, along with the glue of transformer middlewares, in the old-version server packages.

Once you have this glue, you may provide a constructor for the server object as simple as in this example: https://github.com/cloudwan/edgelq/blob/main/secrets/server/v1alpha2/secrets/secrets.go.

There, we pass the new service handlers object and wrap it with a transformer accepting the older API. See the function NewTransformedSecretsServer. This is a very simple example: when you have the new server object, just wrap it with transformers.

If you wonder why we left NewSecretsServer, which returns the old server, we will explain that when we talk about the need to run 2 versions. This is important: when you create a constructor for old server handlers that wrap a new server, you must leave the old constructor in place.

If you inspect the generated transformers, you may see that everything is wrapped in “transformation sessions”. Those are used by Audit, which needs to be notified about every converted message. If you are curious, check https://github.com/cloudwan/goten/blob/main/runtime/versioning/transformation_session.go and see the ApiCommunicationTransformationObserver interface. It allows interested parties to observe whether there was any change of version.

If you were able to provide full versioning with transformers only, you can conclude the main work here. If, however, you need some extra IO work, need to split requests, or must do anything more complicated, you may want to either:

  • Disable server transformations

    for example, by disabling them in the api-skeleton! You can check (read again) the part about skipTransformersBasicActions. Then you can implement your own transforming actions for the transformation middleware.

  • Amend the transformer middleware by providing an additional custom middleware in front of the generated one, or after it if you prefer.

In your transformer middleware, you may also use a store object to extract additional data from a database, but it should be done in NO-TRANSACTION, read-only mode.

If you use transformations, you need to wrap them with functions from the Goten module:

  • WithUnaryRequestTransformationSession
  • WithUnaryResponseTransformationSession
  • WithStreamClientMsgTransformationSession
  • WithStreamServerMsgTransformationSession

These are defined in https://github.com/cloudwan/goten/blob/main/runtime/versioning/transformation_session.go.

By leveraging custom transformer middlewares, you may even construct a “server” instance differently. Let’s go back to the “server” construction with a transformer, like here (https://github.com/cloudwan/edgelq/blob/main/secrets/server/v1alpha2/secrets/secrets.go); it does not necessarily need to be as simple as:

func NewTransformedSecretsServer(
    newServer v1server.SecretsServer,
) SecretsServer {
	return WithTransformerMiddleware(newServer)
}

Instead, you can get a store handle for the new database, the API server config, authInfoProvider, and so on. Then, you may construct the server handlers chain in the following way:

  • Old API middleware for multi-region routing
  • Old API middleware for authorization
  • Old API middleware for transaction
  • Old API middleware for outer
  • Transformation of middleware to new API - with special customizations
  • New API Custom middleware (if present)
  • New API Core server

Inside transformer middleware, you are guaranteed to be in a transaction. This may enable new cases, like splitting one Update request (for old API) into multiple Updates (for new API).

In the future, this may actually become the recommended approach, with new Goten/SPEKTRA Edge upgrades. Note that if you have changed, let’s say, a resource name, the permission names from the new and old APIs may be incompatible. You can make sure your roles have permissions for both cases, but it will be more difficult once we update our IAM so that users have their own roles! It would be unreasonable to expect project admins to update their roles for new permissions, or to upgrade them automatically, since roles are stored in the IAM database.

Note that you can easily wrap the new store handle into the old one using the store transformer (from the store package we mentioned!).

If you need to, you can take the wrapped old store handle and construct the old API Server completely as it was before, using its middlewares only, without the transformer one. Then transformations will happen only at the store level, sparing perhaps some tricky custom methods.

There is practically no cost in constructing two “server” objects for the new and old APIs; those are rather stateless, light objects. They are not actual servers, just sets of server handlers. If you use the old authorization middleware, however, make sure the permission names are the same, or that you added the old permissions to the new roles too! This way, new roles can handle both the new and old APIs. Authorization does not necessarily care about versioning there.

Custom middlewares are powerful, with the possibility to execute extra IO work, split requests, or maybe even change one request entirely into a completely different, unexpected type. However, there are limitations:

  • You still need good transformers for full resource bodies, outside the context of requests/responses. The reason is that, during the upgrade process, resource transformers are used to convert one into the other!

  • References are still tricky. You need to consider that other services (or even your own) may have references to resources with tricky name transformations. When those other services upgrade their resources, they will need some instruction on how to convert the problematic references. For example, if you have resource A, which you want to split into B and C, then perhaps resource D referencing A will suddenly need two references, to B and C, in the next version.

    Also, filter conditions like WHERE ref_to_a = "$value" may need to be transformed into something like WHERE ref_to_b = "$value1" AND ref_to_c = "$value2".

Fixtures

With multiple versions, you will see multiple directories for fixtures. When building images, you should include both in your controller’s final image.

When you create fixtures, especially roles, consider whether they will work for both the new and old APIs. For example, if some action changed its name, or if a resource changed its name, your role for the new API should have permissions for both the older and newer names.

It may be problematic if project admins define their own roles (which will be supported in the future). In this case, we recommend using the older API Authorization in the first place; the transformer middleware should come after it.

When you write new fixtures, you should avoid updating the old ones! Old things must stay as they are!

Ensuring server and controller can work on two versions

With new and old APIs implemented, we need to handle the following:

  • main.go files - with support for both versions
  • fixtures for new and old versions.

Once you have the “code” in place, you need to acknowledge the fact that, right now, the old API is running in your environments, with old resources and an old database. Once you push your images, they will inherit the old database, which is incompatible with the new one. The safest process is to keep the old database as it is AND prepare a new database in its own namespace. Without revealing all the details yet: your service backend will be using two databases to achieve a smooth transition.

When your new servers/controllers start, they first need to check which version is “operating now”. The first time they do that, they will see that the database is old, so the new servers cannot run yet. Instead, they will need to run the old server handlers, the old controller, and the old db-controller. This is why, when you make a new server, you will need to do plenty of copying and pasting.

While your backend service upgrades the database, it will keep serving the old content in the old way. Once the database upgrade finishes, your backend service will flip the primary database version. It will offer the new API in its full form, and the old API will be transformed to the new API on the fly by the servers. This ensures you don’t need to maintain the old database anymore.

Now you should look carefully at the main.go files for the secrets service:

Server

Starting with the server, note that we use the vrunner object, which is given functions for constructing different servers, depending on whether the old or new API version is currently “active”. In this case, runV1Alpha2MainVersionServer is of course the old one. If you look at runV1Alpha2MainVersionServer, you should note:

  • The v1alpha2 store handle is constructed normally.
  • The v1 store handle is constructed in read-only mode.
  • We construct two multi-region policy store handlers. The old one we construct as we normally would.
  • The v1alpha2 server uses the OLD handlers constructor, without wrapping the new server at all!
  • The v1 server is constructed like a standalone one, but uses the read-only store handle. This means that all write requests will fail, but it is already possible to “read” immediately. Read requests may not, however, return valid data yet.

It is assumed that, at the moment of the server upgrade, no client is using the new version yet. Therefore, the API server will serve the old API as normal, and the old database will be written to/read from. A background database upgrade will be running, and read requests will gradually become more consistent with reality.

In this example, please ignore NewMetaMixinServer (the old schema-mixin) and NewLimitsMixinServer. During this particular upgrade of secrets, we also upgraded the schema mixin (former meta-mixin) and the limits mixin. In the case of 3rd party services, if you use just the v1 schema and limits mixins for both versions, construct the same instances as always, but give them the old API store.

For example, if you had some custom service using the limits and schema mixins in v1, and you upgraded it from v1 to v2, you should construct the following servers when running in v1 mode:

  • Limits mixin in v1, with access to v1 store handle, and multi-region policy for v1 version.
  • Schema mixin in v1, with access to v1 store handle.
  • Your service in v1, with access to v1 store handle.
  • Your service in v2, with access to v2 read-only store handle, and multi-region policy for v2.

When we upgrade the mixins, we will describe the upgrade procedure, but nothing is planned on the roadmap yet.

Your service will automatically detect when the switch happens. In that case, the old server is canceled, and vrunner automatically calls the constructor for the new version server. In the case of secrets, runV1MainVersionServer would be called.

If you look at this constructor, we build a read-write v1 store handle and discard the old store entirely. Now the limits and schema mixins use the new store handle, and the old API server is a wrapped version of the new one. We still serve the old API, but the database has switched completely.

We also need to prepare the API-server config file to support two database versions. This is the snippet for the secrets service:

dbs:
- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v1"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1alpha2"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"
- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v2"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

Note that we have two database entries, with different namespaces and different apiVersion values assigned! For historical reasons, there is a mismatch between the version in the namespace and the apiVersion, so don’t worry about this part (the v1 DB namespace serves the v1alpha2 API, and the v2 namespace serves the v1 API).

Controller

As with the server, the controller runtime also uses the vrunner object, as you can see in the secrets controller main.go. If the old version is active, it runs just the old controller, with old fixtures. This means that, once you upgrade your images, your controller should run as it always did. However, when a version switch is detected, the old controller is canceled and a new one is deployed in its place.

Note that the config object has 2 fixture sets: v1alpha2 and v1. If you look at the config file: https://github.com/cloudwan/edgelq/blob/main/secrets/config/controller.proto, you will also see 2 fixture configs accordingly. Any multi-version service should have this.

It also means that new fixtures for projects and your service will only be deployed when the actual version changes.

For your config file, ensure you provide a fixture set for each of the two versions.

Db-Controller

Now, the db-controller is quite different from the controller and server. You don’t have any vrunner. Instead, you should see that we are calling NewVersionedStorage twice, for the different databases. We are even passing both to dbSyncerCtrlManager. Be aware of the multiple tasks happening in the db syncer controller module:

  • It handles multi-region syncing.
  • It handles search db syncing if you use a different search backend than a primary database.
  • It handles database upgrades too!

We don’t use vrunner in the db-controller, because it is already used internally by db-syncer-ctrl and db-constraint-ctrl. They switch automatically on version change, so it’s not visible in the main.go file.

When the db-controller starts and detects that the old version is active, it continues executing regular tasks for the old service. In the background, however, it starts copying the database from the old namespace to the new one!

The config file for the db-controller needs, like the server instance, two entries for dbs. This is a snippet from our secrets:

dbs:
- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v1"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1alpha2"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"
  disabled: $(V1_ALPHA2_DB_DISABLED)
- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v2"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

The new element not present on the server side is disabled: $(V1_ALPHA2_DB_DISABLED). If you look back at main.go for the db-controller, you should see the following code:

func main() {
    ...

    var v1Alpha2Storage *node.VersionedStorage
    dbSyncingCtrlCfg := db_syncing_ctrl.NewDefaultControllerNodeConfig()
    if serverEnvCfg.DbVersionEnabled("v1alpha2") || envRegistry.MyRegionalDeploymentInfo().GetCurrentVersion() == "v1alpha2" {
        ...

        dbSyncingCtrlCfg.EnableDowngradeDbSyncing = serverEnvCfg.DbVersionEnabled("v1alpha2")
    }
}

It means that:

  • If the currently detected version is v1alpha2 (the old one), the second boolean check passes, and we add the v1alpha2 storage regardless of serverEnvCfg.DbVersionEnabled("v1alpha2"). However, if that call returns false, then dbSyncingCtrlCfg.EnableDowngradeDbSyncing is false.
  • If the current version is v1, and serverEnvCfg.DbVersionEnabled("v1alpha2") returns false, then the v1alpha2 storage is not visible at all anymore.

We will discuss this when talking about the Upgrade process.

Versioning transformers

If you look carefully, you should notice the following lines in the two main.go files for secrets (server and db-controller):

import (
    vsecrets "github.com/cloudwan/edgelq/secrets/versioning/v1alpha2/secrets"
)

func main() {
    ...

    vsecrets.RegisterCustomSecretsTransformers(envRegistry.MyRegionId())

    ...
}

This import is necessary for the correct working of the server and db-controller. The former needs the transformers for API transformation, the latter for the database upgrade. If you don’t have any custom transformers and use just the init function, you will at least need to make a “dummy” import:

import (
    _ "github.com/cloudwan/edgelq/secrets/versioning/v1alpha2/secrets"
)

For non-dummy imports, the transformers are also needed by all importing services. For example, since the service applications.edgelq.com imports secrets.edgelq.com, we also had to load the same versioning transformers in its main.go files. Note that we are calling vsecrets.RegisterCustomSecretsTransformers(envRegistry.MyRegionId()) there too! This is necessary to transform references to Secret resources. When you upgrade imported services, make sure to import their transformers.

Upgrading process

By now you should know that, when you upgrade images, your service backend continues operating on the old API version and old database, while the db-controller quietly upgrades the database by copying data from one namespace to another.

The information about which version is active comes from the meta.goten.com service. Each Deployment resource has a field called currentVersion. This also means that each region controls its own version, and you need to run the upgrade process for each region of a service (Deployment).

Therefore, we focus on a single region at a time. First, you pick a region to upgrade, upload images, and restart the backend services to use them. They will start serving the old version first, and begin upgrading the database.

But they won’t switch on their own; they will just sync the database, then keep syncing indefinitely for every write request happening on the old version. To trigger an upgrade with the version switch, send the BeginUpgrade request to the Meta service. For example, if you are upgrading the service custom.edgelq.com in region us-west2, you may use cuttle. Let us assume you are upgrading from v1 to v2.

cuttle meta begin-upgrade deployment \
  --name 'services/custom.edgelq.com/deployments/us-west2' \
  --total-shards-count 16 \
  --target-version 'v2'

The total shards count value, 16, comes from the number of byName shards you have in the db-controller; see the db-controller config, sharding settings. These values must match. In the future, we may provide sharding info via meta service resources rather than config files. Ring size 16 is the current standard.

You may find the request proto definition here: https://github.com/cloudwan/goten/blob/main/meta-service/proto/v1/deployment_custom.proto.

Once you start the upgrade, monitor services/custom.edgelq.com/deployments/us-west2 with periodic GET or WATCH requests.

The upgrade_state field of the Deployment will be updated, and you should see data like this (other fields are omitted):

{
  "name": "services/custom.edgelq.com/deployments/us-west2",
  "currentVersion": "v1",
  "upgradeState": {
    "targetVersion": "v2",
    "pendingShards": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    "state": "INITIAL_SYNCING"
  }
}

The INITIAL_SYNCING state may be a bit misleading, because initial syncing already starts automatically; the difference is that the db-controller is now reporting progress. For each completed shard, it updates the resource:

{
  "name": "services/custom.edgelq.com/deployments/us-west2",
  "currentVersion": "v1",
  "upgradeState": {
    "targetVersion": "v2",
    "readyShards": [0, 2],
    "pendingShards": [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    "state": "INITIAL_SYNCING"
  }
}

Once all shards have moved to ready, the state changes and all ready shards become pending again:

{
  "name": "services/custom.edgelq.com/deployments/us-west2",
  "currentVersion": "v1",
  "upgradeState": {
    "targetVersion": "v2",
    "pendingShards": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    "state": "SWITCHING"
  }
}

When this happens, the API servers will reject write requests. This ensures that the db-controller does not need to play a catch-up game with writes that may still be happening; instead, it can focus on stabilizing the database and finishing the remaining writes.

At this point, roughly 99.5% of the data should already be in the new database. Initial syncing completes when the db-controller has reached parity at least for a moment. Active writes may still make it unsafe to switch databases, so it is necessary to disable writes for a moment, up to one minute. Of course, reading, as well as writing to other services, continues as usual, so the disruption should be relatively minimal.

Pending shards will start moving to ready. Once all of them have moved, you should see:

{
  "name": "services/custom.edgelq.com/deployments/us-west2",
  "currentVersion": "v2"
}

This concludes the upgrade process. All backend runtimes will automatically switch to the new version.

If you believe the db-controller is stuck, check the logs: if there is a bug, it may be crashing, which requires a fix or a rollback, depending on the environment type. If everything looks fine, it may have deadlocked, which happened in our case in some dev environments. This upgrade mechanism is still being worked on, but restarting the db-controller normally fixes the issue, and the upgrade continues without any problems. So far we have upgraded a couple of environments like this without breaking anything, but still be careful, as it is an experimental feature. The worst case, however, should be averted thanks to the database namespace separation; other means of db upgrades are riskier.

Let’s talk about rollback options.

First, note that there is another task the db-controller starts in the background, depending on settings, ONCE it switches from the old database to the new one: it starts syncing from the new database to the old, in the reverse direction. This may be beneficial if you need to revert after several days and want to keep the updates made in the new database. If this is not desired, and you prefer a quick rollback by just updating pods to the old images, you can modify the disabled field in the db-controller config:

dbs:
- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v1"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1alpha2"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

  ## If this value is "true" and this API version is inactive, DbController WILL NOT try to sync updates
  ## from the new database to the old one. The old DB will fall further behind the new version for each
  ## day the new version is active.
  disabled: $(V1_ALPHA2_DB_DISABLED)

- namespace: "envs/$(LQDENV)-$(EDGELQ_REGION)/secrets/v2"
  backend: "$(DB_BACKEND)"
  apiVersion: "v1"
  connectionPoolSize: $(SECRETS_DB_CONN_POOL_SIZE)
  mongo:
    endpoint: "mongodb+srv://$(MONGO_DOMAIN)/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority&tlsCertificateKeyFile=/etc/lqd/mongo/mongodb.pem"
  firestore:
    projectId: "$(FIRESTORE_GCP_PROJECT_ID)"
    credentialsFilePath: "/etc/lqd/gcloud/db-google-credentials.json"

Note the particular code we have in the db-controller (shown again):

func main() {
    ...

    var v1Alpha2Storage *node.VersionedStorage
    dbSyncingCtrlCfg := db_syncing_ctrl.NewDefaultControllerNodeConfig()
    if serverEnvCfg.DbVersionEnabled("v1alpha2") || envRegistry.MyRegionalDeploymentInfo().GetCurrentVersion() == "v1alpha2" {
        ...

        dbSyncingCtrlCfg.EnableDowngradeDbSyncing = serverEnvCfg.DbVersionEnabled("v1alpha2")
    }
}

If serverEnvCfg.DbVersionEnabled("v1alpha2") returns false, then either the db-controller does not even get access to the old database, or, if the current version is v1alpha2, dbSyncingCtrlCfg.EnableDowngradeDbSyncing will be false when the switch happens. This ensures that the db-controller will not start syncing in the reverse new -> old direction after the switch, which makes a quick rollback safer, without resorting to a database backup.

Perhaps it is best to start this way: disable the old version when inactive, and perform the upgrade. Then check whether everything is fine; if an emergency rollback is needed, you can deploy the old pods and apply updates quickly. If you do that, remember, however, to send an UpdateDeployment request to meta.goten.com to ensure the currentVersion field points to the old version. If everything is good, you may optionally enable new -> old DB syncing in the config, in case a rollback is needed after some number of days and you are confident enough that it won’t corrupt the old database.

Other upgrade information:

  • SearchDB, if present, is automatically synced during the upgrade, but may lag a bit behind (a matter of seconds).
  • For resources not owned by the local database (read copies resulting from multi-region syncing), the db-controller will not attempt to sync old -> new. Instead, it just sends watch requests to other regions, separately for the old and new APIs. Copies are made asynchronously and don’t influence the “switch”.
  • Meta owner references, throughout all services, are updated asynchronously once the service switches to the new version. Unlike hard schema references pointing to our service, meta owner references are assumed to be owned by the service they point to, and they must use the version currently used by that service/region.

3 - Goten Framework Guide

Understanding the Goten compiler and runtime framework.

Goten is a set of runtime libraries and tools for scaffolding a SPEKTRA Edge service out of the yaml specification file.

3.1 - Goten Organization

Understanding the Goten directory structure and libraries.

In the SPEKTRA Edge repository, we have directories for each service:

edgelq/
 applications/
 audit/
 devices/
 iam/
 limits/
 logging/
 meta/
 monitoring/
 proxies/
 secrets/
 ztp/

All names of these services end with the .edgelq.com suffix, except meta. The full name of the meta service is meta.goten.com. The reason is that the core of this service is not in the SPEKTRA Edge repository; it is in the Goten repository:

goten/
 meta-service/

This is where meta’s api-skeleton is, along with its Protocol buffers files, almost all the code-generated modules, and the server implementation. The reason we talk about the meta service first is quite important: it also teaches the difference between SPEKTRA Edge and Goten.

Goten is called a framework for SPEKTRA Edge, but this framework has two main tool sets:

  1. Compiler

    It takes the schema of your service, and generates all the boilerplate code.

  2. Runtime

    Runtime libraries, which are referenced by generated code, are used heavily throughout all services based on Goten.

Goten provides its own schema language on top of Protocol Buffers: it introduces the concepts of Service packages (with versions), API groups, actions, and resources. Unlike raw Protocol Buffers, we have a full-blown schema with references that can point across regions and services, and those services can also vary in versions. Resources can reference each other, and services can import each other.

Goten balances between code generation and runtime libraries operating on “resources” or “methods”. It is usually a tradeoff between performance, type safety, code size, maintainability, and readability.

If you look at the meta-service, you will see that it has four resource types:

  1. Region
  2. Service
  3. Resource
  4. Deployment

This is pretty much exactly what Goten provides. To be a robust framework and deliver on its promises (multi-region, multi-service, and multi-version), Goten needs the concept of a service that contains information about regions, services, etc.

SPEKTRA Edge provides various services on a higher level, but Goten provides the baseline for them and allows relationships between them. The only reason the “meta” directory also exists in SPEKTRA Edge is that the meta service needs extra SPEKTRA Edge integration, like the authorization layer. In the SPEKTRA Edge repo, we have additional components added to meta, and finally, we have meta’s main.go files. If you look at the files the meta service has in the SPEKTRA Edge repo (for the v1 version, not v1alpha2), you will see that the edgelq version wraps what Goten provides. We also have a full “v1alpha2” service, from the time before we moved meta to Goten. In those past times, SPEKTRA Edge was overriding half of the functionality provided by Goten, and it was heading in a terrible direction from there.

Goten Directory Structure

As a framework, goten provides:

  1. Modules related to service schema and prototyping (API skeleton and proto files).
  2. Compilers that generate code based on schema
  3. Runtime libraries linked during the compilation

For schema & prototyping, we have directories:

  • schemas

    This directory contains generated JSON schema for api-skeleton files. It is generated based on file annotations/bootstrap.proto.

  • annotations

    Protobuf is already a kind of language for building APIs, but Goten provides a higher-level one. This directory contains various proto options (extra decorations) that enhance the standard protobuf language. There is one exceptional file though: bootstrap.proto, which DOES NOT define any options; instead, it describes the api-skeleton schema in protobuf. The file in the schemas directory is just a compilation of this file. The annotations directory also contains generated Golang code describing those proto options. You can normally ignore it.

  • types

    Contains a set of reusable protobuf messages that are used in services using Goten; for example, the “Meta” object (file types/meta.proto) is used in almost every resource type. The difference between annotations and types is that annotations describe options we can attach to files/proto messages/fields/enums etc., while types contain just reusable objects/enums. Apart from that, each proto file has compiled Golang objects in the relevant directory.

  • contrib/protobuf/google

    This directory, as far as I understand, allows us to avoid downloading the full protobuf deps from Google; it just has the bits we decided to take. The api subdirectory maps to our annotations, and type to types. There is a weird exception, because distribution.proto matches the type directory more than api in this manner, but let it be. Perhaps it can be deleted entirely, as I am not sure where we use it, if at all. However, I told you one small lie: SPEKTRA Edge contributors DO HAVE to download some protocol buffers (mentioned in the scripts directory). The problem is that this downloaded library is more lightweight and does not contain the types we put in contrib/protobuf/google.

All the above directories can be considered a form of Goten-protobuf language that you should know from the developer guide.

For compilers (code-generators), we have directories:

  • compiler

    Each subdirectory (well, almost each) contains a specific compiler that generates one set of the files Goten generates as a whole. For example, compiler/server generates the server middleware.

  • cmd

    Goten does not come with any runtime on its own. This directory provides main.go files for all compilers (code-generators) Goten has.

Compilers generate code you should already know from the developer guide as well.

Runtime libraries have just a single directory, plus the compiled types:

  • runtime

    Contains various modules for clients, servers, controllers… Each will be talked about separately in various topic-oriented documents.

  • Compiled types

    types/meta/ and types/multi_region_policy/ may be considered part of the runtime; they map to the types objects. You may say that, while the resource proto schema imports goten/types/meta.proto, the generated code refers to the Go package goten/types/meta/.

In the developer guide, we briefly mentioned some base runtime types, but we were treating them as black boxes; in this document set, we will dive in.

Other directories in Goten:

  • example

    Contains some typical services developed on Goten, but without SPEKTRA Edge. Their current purpose is only to run some integration tests though.

  • prototests

    Contains just some basic tests over the base types extended by Goten, but does not delve as deep as the tests in the example directory.

  • meta-service

    It contains the full meta service without the SPEKTRA Edge components and main files. It is supposed to be wrapped by Goten users, SPEKTRA Edge in our case.

  • scripts

    Contains one-off scripts for installing development tools, reusable scripts for other scripts, and the regeneration script that regenerates files from the current goten directory (regenerate.sh).

  • src

    This directory name is the most confusing here. It does not contain anything for the framework. It contains generated Java code for the annotations and types directories in Goten, generated for the local pom.xml file. This Java module is just an import dependency for Goten, so Java code can use the protobuf types defined by Goten. We have some Java code in the SPEKTRA Edge repository, so for this purpose we have a small Java package in Goten.

  • tools

    Just some dummy imports to ensure dependencies are present in the go.mod/go.sum files in goten.

  • webui

    Some generic UI for a Goten service, but note that this is abandoned, as our front-end teams no longer develop the generic UI, focusing only on specialized ones.

Regarding files other than the obvious ones:

  • pom.xml

    This is for building a Java package containing Goten protobuf types.

  • sdk-config.yaml

    This is used to generate the goten-sdk repository (the public one), since goten itself is private. Nobody wants to keep manually copying public files from goten to goten-sdk, so we have this done for us.

  • tools.go

    It just ensures we have deps in go.mod. It is unclear why it is separated from the tools directory.

SPEKTRA Edge Directory Structure

SPEKTRA Edge is the home repository for all core SPEKTRA Edge services and the adaptation of meta.goten.com, meaning that its subdirectories should be familiar, and you should navigate their code well enough, since they are “typical” Goten-built services.

We also have a common directory, with the more important elements being:

  • api and rpc

    Those directories contain extra reusable protobuf types. You will most likely interact with api.ServiceAccount (not to be confused with the iam.edgelq.com/ServiceAccount resource)!

  • cli_configv1, cli_configv2

    The second directory is used by the cuttle CLI utility, and will be needed for all cuttles for 3rd parties.

  • clientenv

    Those contain the obsolete config for the client env, but their grpc dialers and authclients (for user authentication) are still in use. Needs some cleanup.

  • consts

    It has a set of various common constants in SPEKTRA Edge.

  • doc

    It wraps protoc-gen-goten-doc with additional functionality, to display needed permissions for actions.

  • fixtures_controller

    It is the full fixtures controller module.

  • serverenv

    It contains a common set for backend runtimes provided by SPEKTRA Edge (typically server, but some elements are used by controllers too).

  • widecolumn

    It contains a storage alternative to the Goten store, for some advanced cases; a separate design document will cover this.

Other directories:

  • healthcheck

    It contains a simple image that polls health checks of core SPEKTRA Edge services.

  • mixins

    It contains a set of mixins, they will be discussed via separate topics.

  • protoc-gen-npm-apis

    It is a Typescript compiler for the frontend team, maintained by the backend. You should read more about compilers here

  • npm

    It is where code generated by protoc-gen-npm-apis goes.

  • scripts

    A set of common scripts; developers must primarily learn to use regenerate-all.sh whenever they change any api-skeleton or proto file.

  • src

    It contains some “soon legacy” Java-generated code for the Monitoring Pipeline, which will get separate documentation.

3.1.1 - Goten Server Library

Understanding the Goten server library.

The server should be more or less already known from the developer guide. We will provide only some missing bits here.

When we talk about servers, we can distinguish:

  • gRPC Server instance that is listening on a TCP port.
  • Server handler sets that implement some Service GRPC interface.

To underline what I mean, look at the following code snippet from IAM:

grpcServer := grpcserver.NewGrpcServer(
  authenticator.AuthFunc(),
  commonCfg.GetGrpcServer(),
  log,
)

v1LimMixinServer := v1limmixinserver.NewLimitsMixinServer(
  commonCfg,
  limMixinStore,
  authInfoProvider,
  envRegistry,
  policyStore,
)
v1alpha2LimMixinServer := v1alpha2limmixinserver.NewTransformedLimitsMixinServer(
  v1LimMixinServer,
)
schemaServer := v1schemaserver.NewSchemaMixinServer(
  commonCfg,
  schemaStore,
  v1Store,
  policyStore,
  authInfoProvider,
  v1client.GetIAMDescriptor(),
)
v1alpha2MetaMixinServer := metamixinserver.NewMetaMixinTransformerServer(
  schemaServer,
  envRegistry,
)
v1Server := v1server.NewIAMServer(
  ctx,
  cfg,
  v1Store,
  authenticator,
  authInfoProvider,
  envRegistry,
  policyStore,
)
v1alpha2Server := v1alpha2server.NewTransformedIAMServer(
  cfg,
  v1Server,
  v1Store,
  authInfoProvider,
)

v1alpha2server.RegisterServer(
  grpcServer.GetHandle(),
  v1alpha2Server,
)
v1server.RegisterServer(grpcServer.GetHandle(), v1Server)

metamixinserver.RegisterServer(
  grpcServer.GetHandle(),
  v1alpha2MetaMixinServer,
)
v1alpha2limmixinserver.RegisterServer(
  grpcServer.GetHandle(),
  v1alpha2LimMixinServer,
)
v1limmixinserver.RegisterServer(
  grpcServer.GetHandle(),
  v1LimMixinServer,
)
v1schemaserver.RegisterServer(
  grpcServer.GetHandle(),
  schemaServer,
)
v1alpha2diagserver.RegisterServer(
  grpcServer.GetHandle(),
  v1alpha2diagserver.NewDiagnosticsMixinServer(),
)
v1diagserver.RegisterServer(
  grpcServer.GetHandle(),
  v1diagserver.NewDiagnosticsMixinServer(),
)

There, the instance called grpcServer is an actual gRPC Server instance listening on a TCP port. If you dive into this implementation, you should notice we are constructing an EdgelqGrpcServer structure. It may actually consist of two port-listening instances:

  • googleGrpcServer *grpc.Server, which is initialized with a set of unary and stream interceptors, optional TLS.
  • websocketHTTPServer *http.Server, which is initialized only if the websocket port was set. It delegates handling to improbableGrpcwebServer, which uses googleGrpcServer.

This Google server is the primary one and handles regular gRPC calls. The reason for the additional HTTP server is that we need to support web browsers, which cannot support native gRPC protocol. Instead:

  • grpcweb is needed to handle unary and server-streaming calls.
  • websockets are needed for bidirectional streaming calls.

Additionally, we have REST API support…

We have an envoy proxy sidecar, a separate container running next to the server instance. It handles all REST API traffic, converting it to native gRPC. It converts grpcweb into native grpc too, but has issues with websockets. For this reason, we added a Golang HTTP server with an improbable gRPC web instance. This improbable grpc web instance can handle both grpcweb and websockets, but we use it for websockets only, since websocket support is missing from the envoy proxy.

In theory, an improbable web server would be able to handle ALL protocols, but there is a drawback: native gRPC calls would be less performant than with the native grpc server (and ServeHTTP is less maintained). It is recommended to keep them separate, so we will stick with 2 ports. We may have some opportunity to remove the envoy proxy though.

Returning to the googleGrpcServer instance, we have all stream/unary interceptors that are common for all calls, but this does not implement the actual interface we expect from gRPC servers. Each service version provides a complete interface to implement. For example, see the IAMServer interface in this file: https://github.com/cloudwan/edgelq/blob/main/iam/server/v1/iam/iam.pb.grpc.go.

Those server interfaces are in files ending with pb.grpc.go.

To have a full server, we need to combine the gRPC Server instance for SPEKTRA Edge (EdgelqGrpcServer) with, let’s make up some name for it, a business logic server instance (a set of handlers). In the iam.pb.grpc.go file this business logic instance is iamServer. Going back to the main.go snippet provided above, we are registering eight business logic servers (handler sets) on the provided *grpc.Server instance. As long as paths are unique across all of them, it is fine to register as many as we need. Typically, we must include the primary service in all versions, then all mixins in all versions.

Those business logic servers provide code-generated middleware, typically executed in this order:

  • Multi-region routing middleware (may redirect processing somewhere else, or split across many regions).
  • Authorization middleware (may use a local cache, or send a request to IAM to obtain fresh role bindings).
  • Transaction middleware (configures access to the database, for snapshot transactions and establishes new session).
  • Outer middleware, which provides validation, and common outer operations for certain CRUD requests. For example, for update calls, it will ensure the resource exists and apply an update mask to achieve the final resource to save.
  • Optional custom middleware and server code - which are responsible for final execution.

Transaction middleware may also repeat the execution of all inner middlewares and the core server, if the transaction needs to be repeated.

There are also “initial handlers” in the generated pb.grpc.go files. For example, see this file: https://github.com/cloudwan/edgelq/blob/main/iam/server/v1/group/group_service.pb.grpc.go. You can see _GroupService_GetGroup_Handler as an example for unary calls, and _GroupService_WatchGroup_Handler as an example for streaming calls.

It is worth mentioning how interceptors play with middleware and these “initial handlers”. Let’s copy and paste interceptors from the current edgelq/common/serverenv/grpc/server.go file:

grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(
    grpc_ctxtags.StreamServerInterceptor(),
    grpc_logrus.StreamServerInterceptor(
      log,
      grpc_logrus.WithLevels(codeToLevel),
    ),
    grpc_recovery.StreamServerInterceptor(
      grpc_recovery.WithRecoveryHandlerContext(recoveryHandler),
    ),
    RespHeadersStreamServerInterceptor(),
    grpc_auth.StreamServerInterceptor(authFunc),
    PayloadStreamServerInterceptor(log, PayloadLoggingDecider),
    grpc_validator.StreamServerInterceptor(),
)),
grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
    grpc_ctxtags.UnaryServerInterceptor(),
    grpc_logrus.UnaryServerInterceptor(
      log,
      grpc_logrus.WithLevels(codeToLevel),
    ),
    grpc_recovery.UnaryServerInterceptor(
      grpc_recovery.WithRecoveryHandlerContext(recoveryHandler),
    ),
    RespHeadersUnaryServerInterceptor(),
    grpc_auth.UnaryServerInterceptor(authFunc),
    PayloadUnaryServerInterceptor(log, PayloadLoggingDecider),
    grpc_validator.UnaryServerInterceptor(),
)),

Unary requests are executed in the following way:

  • Function _GroupService_GetGroup_Handler is called first! It calls the first interceptor, but before that, it creates a handler that wraps the first middleware and passes it to the interceptor chain.
  • The first interceptor is grpc_ctxtags.UnaryServerInterceptor(). It calls the handler passed to it, which is the next interceptor.
  • The next interceptor is grpc_logrus.UnaryServerInterceptor, and so on. At some point, we call the interceptor executing authentication.
  • The last interceptor (grpc_validator.UnaryServerInterceptor()) finally calls the handler created by _GroupService_GetGroup_Handler.
  • The first middleware is called. The call is executed through the middleware chain; it may reach the core server, but may also return earlier.
  • Interceptors unwrap in reverse order.

You can see how this is called if you look at the ChainUnaryServer implementation.
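
If the ordering is hard to visualize, this small self-contained program (not SPEKTRA Edge code) imitates how chaining wraps handlers and prints the call order:

package main

import "fmt"

type handler func()

// wrap returns an "interceptor" that logs around the next handler,
// imitating how grpc_middleware.ChainUnaryServer nests interceptors.
func wrap(name string, next handler) handler {
    return func() {
        fmt.Println("enter", name)
        next()
        fmt.Println("leave", name)
    }
}

func main() {
    h := handler(func() { fmt.Println("middleware chain + core server") })
    // Wrap innermost-first, so ctxtags ends up as the outermost layer.
    for _, name := range []string{"validator", "auth", "logrus", "ctxtags"} {
        h = wrap(name, h)
    }
    h()
}

Running it prints enter ctxtags, enter logrus, enter auth, enter validator, the core line, and then the matching leave lines in reverse order.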

Streaming calls are a bit different because we start from the interceptors themselves:

  • The gRPC Server instance takes the function _GroupService_WatchGroup_Handler and casts it into the grpc.StreamHandler type.
  • Object grpc.StreamHandler, which is the handler for our method, is passed to the interceptor chain. During the chaining process, grpc.StreamHandler is wrapped with all streaming interceptors, starting from the last. Therefore, the innermost StreamHandler will be _GroupService_WatchGroup_Handler.
  • grpc_ctxtags.StreamServerInterceptor() is the entry point! It then invokes the next interceptors, and we go further and further, till we reach _GroupService_WatchGroup_Handler, which is called by the last stream interceptor, grpc_validator.StreamServerInterceptor().
  • Middlewares are executed in the same way as always.

See the ChainStreamServer implementation if you don’t believe it.

In total, this should give an idea of how the server works and what are the layers.

3.1.2 - Goten Controller Library

Understanding the Goten controller library.

You should already know the controller design from the developer guide. Here we give a small recap of the controller with tips about code paths.

The controller framework is part of the wider Goten framework. It has annotations and compiler parts; you can read more about them in the Goten compiler section. For now, in this place, we will talk just about the generated controllers.

There are some runtime elements for all controller components (NodeManager, Node, Processor, Syncer…) in the runtime/controller directory in the Goten repo: https://github.com/cloudwan/goten/tree/main/runtime/controller.

In the config.proto, we have node registry access config and nodes manager configs, which you should already know from controller/db-controller config proto files.

Node managers are a bit more interesting. As was said in the Developer Guide, we scale horizontally by adding more nodes. To have more nodes in a single pod, which increases the chance of fairer workload distribution, we often have more than one Node instance per type. We organize them with Node Managers. See the file runtime/controller/node_management/manager.go.

Each Node must implement:

type Node interface {
  Run(ctx context.Context) error
  UpdateShardRange(ctx context.Context, newRange ShardRange)
}

The Node Manager component creates on startup as many Nodes as specified in the config. Next, it runs all of them, but they don’t get any share of shards yet. Therefore, they are idle. The manager registers all nodes in the registry, where all node IDs across all pods are collected. The registry is responsible for returning the shard range assigned to each node. Whenever a pod dies or a new one is deployed, the node registry notifies the manager about the new shard ranges per Node. The manager then notifies the relevant Node via the UpdateShardRange call.
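
For illustration, a trivial Node implementation could look like the sketch below. Only the interface itself comes from Goten; the ShardRange re-declaration and the channel-based hand-off are assumptions made to keep the example self-contained:

import "context"

// ShardRange is re-declared here only to keep this sketch self-contained;
// the real type lives in Goten's controller runtime.
type ShardRange struct{ First, Last int64 }

type myNode struct {
    rangeCh chan ShardRange
}

func (n *myNode) Run(ctx context.Context) error {
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case newRange := <-n.rangeCh:
            // The shard range changed: drop work outside the range and
            // (re)start watching resources whose shard falls inside it.
            _ = newRange
        }
    }
}

func (n *myNode) UpdateShardRange(ctx context.Context, newRange ShardRange) {
    // Called by the Node Manager when the registry re-balances shards.
    n.rangeCh <- newRange
}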

The registry for Redis uses periodic polling, so in theory there is a chance of two controllers executing the same work for a couple of seconds. This could probably be improved, but we design controllers around the observed/desired state, so duplicating the same request may produce some temporary warning errors, but they should be harmless. Still, it is a field for improvement.

See the NodeRegistry component (in file registry.go, we use Redis).

Apart from the node management directory in runtime/controller, you can see the processor package. The more notable elements there are:

  • Runner module, which is the processor runner goroutine. It is the component for executing all events in a thread-safe manner; developers must not do any IO inside it.
  • Syncer module, which is generic and based on interfaces, although we generate type-safe wrappers in all controllers. It is quite large: it consists of the Desired/Observed state objects (file syncer_states.go), an updater that operates on its own goroutine (file syncer_updater.go), and finally the central Syncer object, defined in syncer.go. It compares the desired vs the observed state and pushes updates to the syncer updater.
  • In synchronizable we have structures responsible for propagating sync/lostSync events across Processor modules, so ideally developers don’t need to handle them themselves.

Syncer is fairly complex; it needs to handle failures/recoveries, resets, and bursts of updates. Note that it does not use Go channels because:

  • They have a limited (fixed) capacity. This is not nice considering we have IO work there.
  • Maps are best if there are multiple updates to a single resource, because they allow merging multiple events (overwriting previous ones). Channels would force consuming all items from the queue.
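
The map-based coalescing can be shown with a few lines of plain Go (an illustration, not the actual Syncer code); multiple pending updates to the same resource collapse into one entry, so the updater always picks up only the latest state:

package main

import "fmt"

func main() {
    // Pending updates keyed by resource name; later events overwrite
    // earlier ones, so the updater never processes stale intermediate states.
    pending := map[string]string{}
    events := []struct{ name, state string }{
        {"devices/d1", "v1"},
        {"devices/d2", "v1"},
        {"devices/d1", "v2"}, // overwrites the previous d1 event
    }
    for _, ev := range events {
        pending[ev.name] = ev.state
    }
    // A channel would have forced us to consume all three events;
    // the map leaves only the two that still matter.
    for name, state := range pending {
        fmt.Println("update", name, "to", state)
    }
}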

3.1.3 - Goten Data Store Library

Understanding the Goten data store library.

The developer guide gives some examples of simple interaction with the Store interface, but hides all implementation details, which we will cover here now, at least partially.

The store should provide:

  • Read and write access to resources according to the resource Access interface, and transactions, which guarantee that resources that have been read from the database (or queried collections) will not change before the transaction is committed. This is provided by the core store module, described in this doc.
  • Transparent cache layer, reducing pressure on the database, managed by “cache” middleware, described in this doc.
  • Transparent constraint layer handling references to other resources, and handling blocking references. This is a more complex topic, and we will discuss this in different documents (multi-region, multi-service, multi-version design).
  • Automatic resource sharding by various criteria, managed by store plugins, covered in this doc.
  • Automatic resource metadata updates (generation, update time…), managed by store plugins, covered in this doc.
  • Observability is provided automatically (we will come back to it in the Observability document).

The above list should at least give an idea that interface calls may often be complex and require interactions with various components using IO operations! In general, a call to the Store interface may involve:

  • Calling underlying database (mongo, firestore…), for transactions (write set), non-cached reads…
  • Calling cache layer (redis), for reads or invalidation purposes.
  • Calling other services or regions in case of references to resources in other services and regions. This is not covered by this document, but in the multi-region, multi-service, multi-version design documents.

Store implementation resides in Goten, here: https://github.com/cloudwan/goten/tree/main/runtime/store.

The primary file is store.go, with the following interfaces:

  • Store is the public store interface for developers.
  • Backend and TxSession are to be implemented by specific backend implementations like Firestore and Mongo. They are not exposed to end-service developers.
  • SearchBackend is like Backend, but just for search, which is often provided separately. Example: Algolia, though in the future we may introduce Mongo, combining both search and regular backend implementations.

The store is also actually a “middleware” chain like a server. In the file store.go we have store struct type, which wraps the backend and provides the first core implementation of the Store interface. This wrapper does:

  • Add tracing spans for all operations
  • For transactions, store an observability tracker in the current ctx object.
  • Invokes all relevant store plugin functions, so custom code can be injected apart from “middlewares”.
  • Accumulates resources to save/delete; it does not trigger updates immediately. They are executed at the end of the transaction.

You can consider it equivalent to a server core module (in the middleware chain).

To study the store, you should at least check the implementation of WithStoreHandleOpts.

  • You can see that plugins are notified about new and finished transactions.
  • Function runCore is a RETRY-ABLE function that may be invoked again for the aborted transaction. However, this can happen only for SNAPSHOT transactions. This also implies that all logic within a transaction must be repeatable.
  • runCore executes a function passed to the transaction. In terms of server middleware chains, it means we are executing outer + custom middleware (if present) and/or core server.
  • Store plugins are notified when a transaction is attempted (perhaps again), and get a chance to inject logic just before committing. They also have a chance to cancel the entire operation.
  • You should also note that the Store Save/Delete implementations do not apply any changes to the backend. Instead, creations, updates, and deletions are accumulated and passed in a batch commit inside WithStoreHandleOpts.

Notable things for Save/Delete implementations:

  • They don’t make any changes yet; the changes are just added to the change set to be applied (inside WithStoreHandleOpts).
  • For Save, we extract the current resource from the database, and this is how we detect whether it is an update or a creation.
  • For Delete, we also get the current object state, so we know the full resource body we are about to delete.
  • Store plugins get a chance to see created/updated/deleted resource bodies. For updates, we can see before/after.

To see the plugin interface, check the plugin.go file. Some simple store plugins you could check are those in the store_plugins directory:

  • metaStorePlugin in meta.go must always be the first store plugin inserted. It ensures the metadata object is initialized and tracks the last update.
  • You should also see the sharding plugins (by_name_sharding.go and by_service_id_sharding.go).

Multi-region plugins and design will be discussed in another document.

Store Cache middleware

The core store module, as described in store.go, is wrapped with cache “middleware”, see subdirectory cache, file cached_store.go, which implements the Store interface and wraps the lower level:

  • WithStoreHandleOpts decorates the function passed to it to include a cache session restart, in case we have writes that invalidate the cache. After the inner WithStoreHandleOpts finishes, we need to push the invalidated objects to the worker. It will either invalidate them, or mark itself as bad if the invalidation fails.
  • All read requests (Get, BatchGet, Query, Search) first try to get data from the cache, and fall back to the inner store in case of failure, a cache miss, or a non-cache-able request.
  • Struct cachedStore implements not only the Store interface but the store plugin interface as well. In the constructor NewCachedStore, you should see that it adds itself as a plugin. The reason is that cachedStore is interested in the created/updated (pre + post) and deleted resource bodies. Save provides only the current resource body, and Delete provides only the name to delete. To utilize the fact that the core store already extracts the “previous” resource state, we implement cachedStore as a plugin.

Note that watches are non-cacheable. The cached store also needs a separate backend, we support as of now Redis implementation only.

The reason we invalidate references/query groups after the transaction concludes (WithStoreHandleOpts) is that we want the new changes to be already in the database. If we invalidate after the writes, then when the cache is refreshed, it will be with data from after the transaction. This is one safeguard, but not sufficient on its own.

The cache is written to during non-transactional reads (gets or queries). If the results were not in the cache, we fall back to the inner store, using the main database. With the results obtained, we save them in the cache, but this is a bit less simple:

  • When we first try to READ from the cache but face a cache MISS, we write a “reservation indicator” for the given cache key.
  • When we get results from the actual database, we have fresh results… but there is a small chance that a write transaction is underway that has just finished and invalidated the cache (deleted keys).
  • The cache backend writer must update the cache only if the data was not invalidated: if the reservation indicator was not deleted, then no write transaction happened, and we can safely update the cache.

This reservation is not done in cached_store.go; it is required behavior from the backend, see the store/cache/redis/redis.go file. It uses SETXX (SET with the XX option) when updating the cache, meaning we write only if the key already exists (the reservation marker is present). This behavior is the second safeguard for a valid cache.
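
The pattern can be sketched with the go-redis client as follows. This illustrates the reservation idea only and is not Goten’s actual cache code; the key layout, TTLs, and the readFromMainDatabase helper are made up:

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

func readThroughCache(ctx context.Context, rdb *redis.Client, key string) (string, error) {
    if val, err := rdb.Get(ctx, key).Result(); err == nil {
        return val, nil // cache hit
    }
    // Cache miss: write a reservation marker, only if the key is absent (NX).
    rdb.SetNX(ctx, key, "__reserved__", time.Minute)
    val, err := readFromMainDatabase(ctx, key) // hypothetical helper
    if err != nil {
        return "", err
    }
    // SET with XX: write only if the key still exists. If a concurrent write
    // transaction invalidated (deleted) the reservation, this update is skipped.
    rdb.SetXX(ctx, key, val, 24*time.Hour)
    return val, nil
}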

The remaining issue may potentially be with 2 reads and one writing transaction:

  • The first read request faces a cache miss and makes a reservation.
  • The first read request gets old data from the database.
  • A transaction just concluded, overwriting the old data and deleting the reservation.
  • The second read also faces a cache miss, and makes a reservation.
  • The second read gets new data from the database.
  • The second read updates the cache with the new data.
  • The first request updates the cache with the old data, because the key exists (redis only supports an “if key exists” condition)!

This is a known scenario that can cause an issue. It relies, however, on the first read request being suspended for quite a long time: long enough for a transaction to conclude and invalidate the cache (which happens with extra delay after the write), and furthermore for a full flow of another read request. As of now, the probability may be comparable to serial accidental lotto wins, so we still allow for the long-lived cache. The cache update happens in the code just after getting results from the database, so the first read flow would have to be suspended by the CPU scheduler for a very long time and then starved a bit.

It would be better if we found a Redis alternative that can do a proper compare-and-swap: a cache update could then only happen for the reservation key, and this key would have to be unique across read requests. It means the first request would only write if the cache contains the reservation key with the proper unique ID relevant to the first request. If it contains full data or the wrong ID, it means another read updated the reservation. If some read has a cache miss but sees a reservation mark, then it must skip the cache update.

The cached store relies on the ResourceCacheImplementation interface, which is implemented by code generation; see any <service>/store/<version>/<resource> directory, where there is a cache implementation in a dedicated file, generated based on the cache annotations passed in a resource.

Using a centralized cache (redis), we can support very long cache lifetimes, lasting even days.

Resource Metadata

Each resource has a metadata object, as defined in https://github.com/cloudwan/goten/blob/main/types/meta.proto.

The following fields are managed by store modules:

  • create_time, update_time, and delete_time. The first two are updated by the Meta store plugin; delete is a bit special, since we don’t yet have a soft-delete function. We have asynchronous deletion instead, and it is handled by the constraint store layer, not covered by this document.
  • resource_version is updated by Meta store plugin.
  • shards are updated by various store plugins, but can accept client sharding too (as long as they don’t clash).
  • syncing is provided by a store plugin, it will be described in multi-region, multi-service, multi-version design doc.
  • lifecycle is managed by a constraint layer, again, it will be described in multi-region, multi-service, multi-version design doc.

Users can manage exclusively: tags, labels, annotations, and owner_references, although the last one may be managed by services when creating lower-level resources for themselves.

Field services is often a mix: each resource may apply its own rules. The Meta service populates this field itself. For IAM, it depends on the kind: for example, Roles and RoleBindings detect their contents and decide which services own them and which can read them. When a 3rd party service creates some resource in core SPEKTRA Edge, it must annotate its service. For some resources, like Device in devices.edgelq.com, it is the client that decides which services can read it.

Fields generation and uuid are almost dead. We may, however, fix this at some point. Originally, Meta was copied and pasted from Kubernetes, and not all the fields were implemented.

Auxiliary search functionality

The store can provide the Search functionality if it is configured. By default, FailedPrecondition is returned if no search backend exists. As of now, the only backend we support is Algolia, but we may add Mongo as well in the future.

If you check the implementation of Search in store.go and cache/cached_store.go, it is pretty much like List, but allows an additional search phrase.

Since the search database is additional to the main one, there is a problem to resolve: syncing from the main database to search. This is an asynchronous process, and a Search query right after Save/Delete is not guaranteed to be accurate. Algolia says it may even take minutes in some cases. Moreover, this synchronization must not happen within transactions, because there is a chance the search backend accepts updates while the primary database does not.

The design decisions regarding search:

  • Updates to the search backend are happening asynchronously after the Store’s successful transaction.
  • Search backend needs separate cache keys (they are prefixed), to avoid mixing.
  • Updates to the search backend must be retried in case of failures because we cannot allow the search to stay out of sync for too long.
  • Because of the potentially long search updates and their asynchronous nature, we decided that search writes are NOT executed by Store components at all! The store only makes search queries.
  • We dedicated a separate SearchUpdater interface (see the store/search_updater.go file) to updating the Search backend. It is not a part of the Store!
  • The SearchUpdater module is used by db-controllers, which observe changes on the Store in real-time and update the search backend accordingly, taking potential failures into account; writes must be retried.
  • The cache for the search backend needs invalidation too. Therefore, there is a store/cache/search_updater.go file as well, which wraps the inner SearchUpdater for the specific backend.
  • To summarize: the Store (used by Server modules) makes search queries; the DbController, using SearchUpdater, makes writes and invalidates the search cache.

Other useful store interface wrappers

To make the database entirely read-only, use the NewReadOnlyStore wrapper in with_read_only.go.

Normally, the store interface rejects even reads when no transaction was set (WithStoreHandleOpts was not used). This is to prevent people from using the DB after forgetting to set a transaction explicitly. It can be corrected by using the WithAutomaticReadOnlyTx wrapper in auto_read_tx_store.go.

To also be able to write to the database without a transaction set explicitly via WithStoreHandleOpts, it is possible to use the WithAutomaticTx wrapper in auto_tx_store.go, but it is advised to consider other approaches first.
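
Hypothetical usage of these wrappers, assuming each one simply decorates an inner store handle (check the listed files for the actual signatures):

// Assumed single-argument decorators; see with_read_only.go,
// auto_read_tx_store.go, and auto_tx_store.go for the real signatures.
roStore := store.NewReadOnlyStore(innerStore)          // rejects all writes
autoRead := store.WithAutomaticReadOnlyTx(innerStore)  // reads without an explicit tx
autoRW := store.WithAutomaticTx(innerStore)            // writes too; use with care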

Db configuration and store handle construction

Store handle construction and database configuration are separated.

The store needs configuration because:

  • Collections may need pre-initialization.
  • Store indices may need configuration too.

By convention, configuration tasks are executed by the db-controller runtimes. Typically, in main.go files, we have something like:

senvstore.ConfigureStore(
    ctx,
    serverEnvCfg,
    v1Desc.GetVersion(),
    v1Desc,
    schemaclient.GetSchemaMixinDescriptor(),
    v1limmixinclient.GetLimitsMixinDescriptor(),
)
senvstore.ConfigureSearch(ctx, serverEnvCfg, v1Desc)

The store is configured given the main service descriptor plus all the mixins, so they can configure additional collections. If the search feature is used, it needs a separate configuration.

Configuration functions are in the edgelq/common/serverenv/store/configurator.go file, and they refer to further files in goten:

  • goten/runtime/store/db_configurator.go
  • goten/runtime/store/search_configurator.go

Configuration therefore happens at db-controller startup, separately from store handle construction.

Then, the store handle is constructed in the server and db-controller runtimes. It is done by a builder from the edgelq repository; see the edgelq/common/serverenv/store/builder.go file. If you have seen any server initialization file (I mean main.go), you can see how the store builder constructs the “middlewares” (WithCacheLayer, WithConstraintLayer) and adds plugins executing various functions.
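
In rough, hedged form, the construction looks like the sketch below; WithCacheLayer and WithConstraintLayer are the methods mentioned above, while the constructor name and the rest of the chain are assumptions:

// Rough shape only: the constructor and option details are illustrative.
storeHandle := senvstore.NewStoreBuilder(serverEnvCfg).
    WithCacheLayer( /* cache backend config */ ).
    WithConstraintLayer().
    Build(ctx)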

3.2 - Goten as a Compiler

Understanding the compiler aspect of the Goten framework.

This document explains how the Goten code generators work and, by extension, helps you contribute to them.

3.2.1 - goten-bootstrap

What is the goten-bootstrap executable?

Utility goten-bootstrap is a tool that generates proto files from the specification file, also known as api-skeleton. In the goten repository, you can find the following files for the api-skeleton schema:

  • annotations/bootstrap.proto, with the JSON schema generated from it in schemas/api-skeleton.schema.json. This is the place where you can modify the input to bootstrap.

The runtime entry (main.go) can be found in the cmd/goten-bootstrap directory. It imports the package in the compiler/bootstrap directory, which pretty much contains the whole code of the goten-bootstrap utility. This is the place to explore if you want to modify the generated protobuf files.

In main.go you can see two primary steps:

  1. Initialize the Service package object and pass it to the generator.

    During initialization, we validate input, populate defaults, and deduce all values.

  2. It then attaches all implicit API groups to each resource.
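
Schematically, the flow of main.go could be summarized as follows; the function names come from the paragraphs in this section, but their exact signatures are simplified for the sketch:

// Simplified flow of cmd/goten-bootstrap/main.go; exact signatures differ.
svc, err := bootstrap.ParseServiceSkeletonFiles(skeletonPath) // load + parse YAML
if err != nil {
    log.Fatal(err)
}
if err := svc.Init(); err != nil { // validate, apply defaults, add implicit APIs
    log.Fatal(err)
}
if err := bootstrap.NewGenerator(svc).Generate(); err != nil { // render templates
    log.Fatal(err)
}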

First look at the Generator object initialized with NewGenerator.

The relevant file is compiler/bootstrap/generate.go, which contains the ServiceGenerator struct with a single public method, Generate. It takes the parsed, validated, and initialized Service object (as described in the service.go file), then just generates all relevant files, iterating over API groups and resources using regular for loops. The template protobuf files are all in the tmpl subdirectory.

See the initTmpls function of ServiceGenerator: it collects all template files as giant strings (because those are strings…), parses them, and adds a set of functions that can be used within {{ }}. Those big strings are “render-able” objects; see https://pkg.go.dev/text/template for more details, but normally I find them self-explanatory. In those template strings, you often see:

  • {{functionName <ARGS>}}

    The word is some function. It may be built-in like define, range, if, or it may be a function we provided. From initTmpls you may see functions like uniqueResources, formatActionReplacement etc. Those are our functions. They may take arguments.

  • {{$variable}}

    This variable must be initialized somewhere using := operator. Those are Golang objects under the hood! You can access even sub-fields with dots ., or even call functions (but without arguments).

  • {{.}}

    This is the special “current” active variable. At any given moment, only one variable is active. You may capture its properties into regular variables, like {{ $otherVar := .Field1.Field2 }}.

  • With {{ or }}

    you may see dashes: {{- or -}}. Their purpose is to remove whitespace (typically a newline) before or after them. It makes the output nicer, but may occasionally render the code non-compilable.
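
Here is a short, self-contained illustration of those constructs. The upperAll helper stands in for Goten functions like uniqueResources; it is invented for this sketch:

package main

import (
	"os"
	"strings"
	"text/template"
)

// Toy stand-ins for the real Service/Resource objects passed to templates.
type Service struct {
	Name      string
	Resources []string
}

// upperAll plays the role of custom helpers registered in initTmpls.
func upperAll(in []string) []string {
	out := make([]string, 0, len(in))
	for _, s := range in {
		out = append(out, strings.ToUpper(s))
	}
	return out
}

const demoTmpl = `
{{- define "resourceListFile" -}}
{{- $svc := . }}
Service: {{ $svc.Name }}
{{- range upperAll $svc.Resources }}
  - {{ . }}
{{- end }}
{{ end -}}
`

func main() {
	t := template.Must(template.New("demo").
		Funcs(template.FuncMap{"upperAll": upperAll}).
		Parse(demoTmpl))
	err := t.ExecuteTemplate(os.Stdout, "resourceListFile", Service{
		Name:      "helloworld.example.com",
		Resources: []string{"Greeting", "Greeter"},
	})
	if err != nil {
		panic(err)
	}
}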

In generate.go, see svcgen.tmpl.ExecuteTemplate(file, tmplName, data). The first argument is the file writer object to which the generated protobuf file will be written. The second argument is a string, for example, resourceSchemaFile. The third argument is the active variable that can be accessed as {{.}}, as mentioned above. For example, you should see the following piece of code there:

if err := svcgen.genFile(
	"resourceSchemaFile",
	resource.Service.Proto.Package.CurrentVersion,
	fileName,
	resource,
	svcgen.override,
); err != nil {
	return fmt.Errorf("error generating resource file %s: %s", fileName, err)
}

The function genFile passes resourceSchemaFile as the second argument to tmpl.ExecuteTemplate, and the resource object is passed as the last argument. This resource object is of type Resource, which you can see in the resource.go file.

How Golang templates are executed: the runtime tries to find the following piece of the template:

{{ define "resourceSchemaFile" }} ... {{ end }}

In this instance, you can find it in the tmpl/resource.tmpl.go file, which starts with:

package tmpl

// language=gohtml
const ResourceTmplString = `
{{- define "resourceSchemaFile" -}}
{{- /*gotype: github.com/cloudwan/goten/annotations/bootstrap.Resource*/ -}}
{{- $resource := . }}

... stuff here....

{{ end }}

By convention, we try to document what kind of object was passed as the dot . under a define, at least for the main templates. Since range loops override the dot value, to avoid losing the resource reference (and for clarity), we often save the current dot into a named variable.

Rendering proceeds from the beginning of a define until it reaches the matching {{ end }}. When the engine sees {{ template "..." <ARG> }}, it calls another define and passes the arg as the next “dot”. To pass multiple arguments, we often provide a dictionary using the dict function: {{ template "... name ..." dict <KEY1> <VALUE1> ... }}. Dict accepts N arguments and makes a single object out of them. You can see that we implemented this function in initTmpls! The final generated string is written to the specified file writer. This is how all protobuf files are generated.
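
For reference, a typical implementation of such a dict helper looks like this (the actual function registered in initTmpls may differ in details):

package tmpl

import "fmt"

// dict turns ("key1", val1, "key2", val2, ...) into a single map, so a
// template can pass multiple values as one "dot":
//
//	{{ template "someTmpl" dict "Resource" $resource "Service" $svc }}
//
// Inside "someTmpl", the values are then available as {{ .Resource }}
// and {{ .Service }}.
func dict(values ...interface{}) (map[string]interface{}, error) {
	if len(values)%2 != 0 {
		return nil, fmt.Errorf("dict requires an even number of arguments")
	}
	m := make(map[string]interface{}, len(values)/2)
	for i := 0; i < len(values); i += 2 {
		key, ok := values[i].(string)
		if !ok {
			return nil, fmt.Errorf("dict keys must be strings, got %T", values[i])
		}
		m[key] = values[i+1]
	}
	return m, nil
}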

Note that ServiceGenerator skips certain templates depending on the overrideFile argument. This is why resource files and custom files for API groups are generated only once, to avoid overriding developer code. Perhaps in the future we will be able to do some merging. That's all regarding the generation part.

Also very important is the parsing of the api-skeleton service package schema and wrapping it with the Service object as defined in the service.go file. Note that the YAML in the api-skeleton contains the definition of a Service not from compiler/bootstrap/service.go, but from the annotations/bootstrap/bootstrap.pb.go file. See the function ParseServiceSkeletonFiles in compiler/bootstrap/utils.go. It loads the base bootstrap objects from YAML and parses them according to the protobuf definition, but then we wrap them with the proper Service object.

After we load all the Service objects (including the next version and the imported ones), we call the Init function of a Service. This is where we validate all the input properly, and where we put the default values missing from the api-skeleton. The largest example is the function InitMainApi, which is called from service.go for each resource owned by the service. It adds our implicit APIs with full CRUD methods; it should be visible there how all those “implicit” features play out. We also try to validate as much input as possible. Any error must be wrapped with another error, so we return the full message at the top.

3.2.2 - Goten Protobuf Compilers

Understanding the Goten protobuf compilers.

Protobuf was developed by Google, and it has been implemented in many languages, including Golang. Each supported language provides a protoc compiler. For Golang, there exists protoc-gen-go, which takes protobuf files and generates Golang files. This tool, however, is massively insufficient compared to what we need in Goten: we have custom types, extra functionality, and a full-blown framework generating almost all the server code. We developed our own protoc compilers, which replace the standard protoc-gen-go. We have many protoc compilers; see the cmd directory:

  • protoc-gen-goten-go

    This is the main replacement of the standard protoc-gen-go. It generates all the base Go files you can see, typically, throughout the resources/ and client/ modules: all the generated files ending with .pb.go.

  • protoc-gen-goten-client

    It compiles some files in the client directory, except those ending with .pb.go, which contain basic types.

  • protoc-gen-goten-server

    It generates those middleware and default core files under the server directory.

  • protoc-gen-goten-controller

    It generates controller packages, as described in the developer guide.

  • protoc-gen-goten-store

    It generates files under the store directory.

  • protoc-gen-goten-access

    It compiles files in the access directory.

  • protoc-gen-goten-resource

    It focuses on protobuf objects annotated as resources, but does not generate anything for objects “under” resources. It produces most of the files in the resources directory for each service. This includes pb.access.go, pb.collections.go, pb.descriptor.go, pb.filter.go, pb.filterbuilder.go, pb.name.go, pb.namebuilder.go, pb.pagination.go, pb.query.go, pb.view.go, and pb.change.go.

  • protoc-gen-goten-object

    It provides additional optional types on top of protoc-gen-goten-go: FieldPath, FieldMask, plus additional methods for merging, cloning, and diffing objects. You can see them in files ending with pb.fieldmask.go, pb.fieldpath.go, pb.fieldpathbuilder.go, and pb.object_ext.go. This is done for resources or sub-objects used by resources. For example, in the goten repository, you can see files from this protoc compiler under the types/meta/ directory.

  • protoc-gen-goten-cli

    It compiles files in the cli directory.

  • protoc-gen-goten-validate

    It generates the pb.validate.go files you can typically find in the resources directory, but it's not necessarily limited to it.

  • protoc-gen-goten-versioning

    It generates all versioning transformers under the versioning directory.

  • protoc-gen-goten-doc

    It generates markdown documentation files based on proto files (often docs directory).

  • protoc-gen-goten-jsonschema

    It is a separate compiler for parsing bootstrap.proto into API skeleton JSON schema.

Depending on which files you want to be generated differently, or which you want to study, you need to start with the relevant compiler.

Pretty much any compiler in the cmd directory maps to some module in the compiler directory (there are exceptions like the ast package!). For example:

  • cmd/protoc-gen-goten-go maps to compiler/gengo.
  • cmd/protoc-gen-goten-client maps to compiler/client.

Each of these compilers takes a set of protobuf files as the input. When you see some bash code like:

protoc \
    -I "${PROTOINCLUDE}" \
    "--goten-go_out=:${GOGENPATH}" \
    "--goten-validate_out=${GOGENPATH}" \
    "--goten-object_out=:${GOGENPATH}" \
    "--goten-resource_out=:${GOGENPATH}" \
    "--goten-access_out=:${GOGENPATH}" \
    "--goten-cli_out=${GOGENPATH}" \
    "--goten-versioning_out=:${GOGENPATH}" \
    "--goten-store_out=datastore=firestore:${GOGENPATH}" \
    "--goten-server_out=lang=:${GOGENPATH}" \
    "--goten-client_out=:${GOGENPATH}" \
    "--goten-doc_out=service=Meta:${SERVICEPATH}/docs/apis" \
    "${SERVICEPATH}"/proto/v1/*.proto

It simply means we are calling many of those protoc plugins: each --<name>_out flag makes protoc invoke the protoc-gen-<name> binary found on the PATH. With the -I flag we pass the proto include paths, so protos can be parsed correctly and linked to each other. In the last line, we pass all the files for which we want code to be generated; in this case, all the files in the ${SERVICEPATH}/proto/v1 directory.

3.2.3 - Abstract Syntax Tree

Understanding the internals of the Goten compiler.

Let’s analyze one of the modules, protoc-gen-goten-object, as an example, to understand the internals of the Goten protobuf compiler.

func main() {
	pgs.Init(pgs.DebugEnv("DEBUG_PGV")).
		RegisterModule(object.New()).
		RegisterPostProcessor(utils.CalmGoFmt()).
		Render()
}

For a start, we utilize the https://github.com/lyft/protoc-gen-star library. It does the initial parsing of all proto objects for us. Then it invokes a module for file generation. We pass our object module, from the compiler/object/object.go file:

func New() *Module {
	return &Module{ModuleBase: &pgs.ModuleBase{}}
}

Once the modules finish generating files, everything is formatted by gofmt. Note that the main processing unit of each compiler is always in the compiler/<name>/<name>.go file. The pgs library calls the following functions on the passed module, like the one in the object directory: InitContext, then Execute.
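
As a minimal sketch, a pgs module following this lifecycle looks roughly like the skeleton below; the real module lives in compiler/object/object.go, and the comments mark what Goten does at each step:

package object

import (
	pgs "github.com/lyft/protoc-gen-star"
)

type Module struct {
	*pgs.ModuleBase
}

func (m *Module) Name() string { return "object" }

// Called first by pgs; Goten uses it to build its GoContext
// and to load lower-level modules such as gengo.
func (m *Module) InitContext(c pgs.BuildContext) {
	m.ModuleBase.InitContext(c)
}

// Called once all proto inputs are parsed; Goten enriches
// targets/packages into its own AST here, then registers the
// files to generate, which pgs collects via the ModuleBase.
func (m *Module) Execute(
	targets map[string]pgs.File,
	packages map[string]pgs.Package,
) []pgs.Artifact {
	return m.Artifacts()
}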

Inside InitContext, we get some BuildContext, but it only carries arguments that we need to pass to the primary context object. All protobuf compilers (access, cli, client, controller, gengo, object, resource…) use the GoContext object we define in the compiler/gengo/context.go file. It is questionable whether this file should be in the gengo directory; it is there because the gengo compiler is the most basic of all the Golang compilers. GoContext also inherits a Context from the compiler/shared directory. The idea is that we could potentially support other programming languages in some limited way. We do: in SPEKTRA Edge we have a specialized compiler for TypeScript.

Regardless, GoContext is necessary during compilation. In a traditional Golang fashion, it provides a single interface that behaves a bit differently depending on “who is asking and under what circumstances”.

Next (still in the InitContext function), you can see that the Module in object.go imports the gengo module from compiler/gengo using NewWithContext. This function is used by us, never by the pgs library. We always load lower-level modules from higher-level ones, because we will need them. This concludes the InitContext analysis.

The pgsgo library then parses all proto files that were given as the input. All parsed input files are remembered as “targets” (map[string]pgs.File). The library also collects information from all the other files that were mentioned via import statements. It accumulates them in the packages variable of type map[string]pgs.Package. Then it calls the Execute method of the initial module. In object.go you can see things like:

func (m *Module) Execute(
	targets map[string]pgs.File,
	packages map[string]pgs.Package,
) []pgs.Artifact {
	m.ctx.InitGraph(targets, packages)

	for _, file := range m.ctx.GetGraph().TargetFiles() {
		...
	}
}

Goten provides its own annotation system on top of protobuf, and we start seeing its effect here: normally, an InitGraph call would not be needed; we should be able to just produce the generated artifacts from the given input. However, in Goten, we call InitGraph to enrich all the targets/packages that were passed from pgsgo. One of the non-compiler directories in the compiler directory is ast. The file compiler/ast/graph.go is the entry file, which uses visitors to enrich all types.

Let’s put the object.go file aside and jump to the ast library for now.

The visitor wrapper, invoked first, wraps pgsgo types like:

  • pgsgo.Entity is wrapped as ast.Entity.

    It is a generic object, it can be a package, file, message, etc.

  • pgsgo.Package is wrapped with ast.Package.

    If the proto package contains the Goten Service definition, it becomes ast.ServicePackage. It describes the Goten-based service in a specific version!

  • pgsgo.Message, which represents just a normal protobuf message

    It becomes ast.Object in Goten. If this ast.Object specifies the Resource annotation, it becomes ast.Resource, or ast.ResourceChange if it describes a Change!

  • pgsgo.Service (API group in api-skeleton, service in proto)

    It becomes ast.API in Goten ast package.

  • pgsgo.Method (Action in api-skeleton)

    It becomes ast.Method in Goten ast package.

  • pgsgo.Enum becomes ast.Enum

  • pgsgo.File becomes ast.File

  • pgsgo.Field becomes ast.Field

and so on.

The visitor wrapper also introduces our Goten-specific types. For example, look at this:

message SomeObject {
  string some_ref_field = 1 [(goten.annotations.type).reference = {
    resource : "SomeResource"
    target_delete_behavior : BLOCK
  }];
}

The pgsgo library will classify this field type as pgs.FieldType string. However, if you look at any Golang file generated by us, you will see something like:

type SomeObject struct {
	SomeRefField *some_resource.Reference
}

This is another Goten-specific change compared to plain protoc-gen-go. For this reason, in our AST library, we have structs like ast.Reference, ast.Name, ast.Filter, ast.FieldMask, ast.ParentName, etc.

After the visitor wrapper finishes its task, we have a visitor hydrator that establishes relationships between wrapped entities. As of the moment of this writing, there is a rooting visitor, but it’s not needed and I simply forgot to delete it. If you don’t see it, it means it’s already deleted.

This ast library is very important because, in our templates for Golang, we want to use the enriched types, according to the Goten language! You should be able to deduce the rest of the ast library when you need it. For now, let's go back to the compiler/object/object.go file, to the Execute function.

Once we have our enriched graph, we can start generating files: we check each of the files we were given as targets. We filter out those that have no objects defined, or whose objects do not need the extended functionality; note that in the client packages of any service we don't define any pb.fieldpath.go files and so on. We generate only for resources and their sub-objects.

The next crucial element of Golang file generation is the call to InitTemplate. It gets the current module name (for friendly error messages) and the entity target for which we want to generate files. For example, let's say we have resource SomeResource in the some_resource.proto file. This is our target file (as ast.File). We will generate four files based on this single proto file:

  1. some_resource.pb.fieldpath.go
  2. some_resource.pb.fieldpathbuilder.go
  3. some_resource.pb.fieldmask.go
  4. some_resource.pb.object_ext.go

Note that if this proto file contains some other objects defined, they will also be provided in the generated files! For this reason, we pass the whole ast.File to this InitTemplate call.

It is worth looking inside InitTemplate. There are some notable elements:

  • We create a discardable additional GoContext for the current template set.
  • There is an imports loader object, which automatically loads all dependencies of the passed target object. By default, it is enough to load the direct entities; for example, for ast.File, those are the files directly imported via import statements.
  • We iterate over all modules - ours and those we imported. In the case of object.go, we load the Object and Gengo modules. We call WithContext for each; basically, we enrich our temporary GoContext and initialize the Golang template object we should already know well from the bootstrap utility topic.

If you see some WithContext call, like in object.go:

func (m *Module) WithContext(ctx *gengo.GoContext) *gengo.GoContext {
	ctx.AddHelpers(&objectHelpers{ctx: ctx})
	return ctx
}

What we do is add a helper object. Struct objectHelpers is defined in the compiler/object/funcs.go file. Since the Object module also loads Gengo, we see that we have gengo helpers as well:

func (m *Module) WithContext(ctx *GoContext) *GoContext {
	ctx.AddHelpers(&gengoHelpers{ctx: ctx})
	return ctx
}

If you follow more deeply, you should reach the compiler/shared/context.go file; see AddHelpers and InitTemplate. When we add helpers, we store them in a map using the module namespace as a key. In InitTemplate we use this map to provide all the functions that can be used in the Golang large string templates! It means that if you see the following function calls in templates:

  • {{ object.FieldPathInterface … }}

    It means we are calling the method “FieldPathInterface” on the “objectHelpers” object.

  • {{ gengo.GoFieldType … }}

    It means we are calling the method “GoFieldType” on the “gengoHelpers” object.

We typically have one such “namespace” per compiler: gengo, object, resource, client, server… This is the method by which a large set of functions is made available in templates, as the toy example below demonstrates.
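
The following toy program demonstrates the trick: exposing a helper struct through a niladic template function makes {{ gengo.GoFieldType ... }} resolve to a method call on that struct. The helper here is invented for illustration; it is not the real gengoHelpers:

package main

import (
	"os"
	"text/template"
)

// A stand-in for the real gengoHelpers from compiler/gengo.
type gengoHelpers struct{}

func (h *gengoHelpers) GoFieldType(protoType string) string {
	if protoType == "string" {
		return "string"
	}
	return "interface{}" // toy fallback, not the real mapping logic
}

func main() {
	helpers := &gengoHelpers{}
	t := template.Must(template.New("demo").
		Funcs(template.FuncMap{
			// The "gengo" function returns the helper object; the
			// template engine resolves .GoFieldType as a method call.
			"gengo": func() *gengoHelpers { return helpers },
		}).
		Parse("field type: {{ gengo.GoFieldType \"string\" }}\n"))
	if err := t.Execute(os.Stdout, nil); err != nil {
		panic(err)
	}
}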

Going back to the compiler/object/object.go file: after InitTemplate returns a template, we parse all the relevant templates from the tmpl subdirectory that we want to use. Then, using AddGeneratorTemplateFile, we add all the files to generate. We give the full file path (the context detects the exact modules), then we pass the exact template to call; a lookup tries to find the matching template name: {{ define "... name ..." }}. The last argument to AddGeneratorTemplateFile is the current object (which we can reference with the dot .). The rest should be well known already from the bootstrap part.

This concludes protoc generation. All the other protoc compilers, while they may be more complicated in detail, follow the same design. Knowing which files are generated by which compiler, you should be able to reach the part of the code you want to change.

3.2.4 - Goten protobuf-go Extension

Understanding the protobuf-go Goten extension.

Goten builds on top of protobuf, but the basic library for proto is, of course, not provided by us. The popular protobuf library for Golang can be found here: https://github.com/protocolbuffers/protobuf-go. It provides lots of utilities around protobuf:

  • Parsing proto messages to and from binary format (proto wire).
  • Parsing proto messages to and from JSON format (for being human-friendly)
  • Copying, merging, comparing
  • Access to proto option annotations

Parsing messages to binary format and back is especially important; this is how we send/receive messages over the network. However, it does not exactly work for Goten, because of our custom types.

If we have:

message SomeResource {
  string name = 1 [(goten.annotations.type).name.resource = "SomeResource" ];
}

The native protobuf-go library would map the “name” field into the Go “string” type. But in Goten, we interpret this as a pointer to the Name struct in the package relevant to the resource. Reference, Filter, OrderBy, Cursor, FieldMask, and ParentName are the other custom types. The problematic part is strings: how do we map them to non-string types when they have special annotations?

For this reason, we developed a fork called goten-protobuf: https://github.com/cloudwan/goten-protobuf.

The most important bit is the ProtoStringer interface defined in this file: https://github.com/cloudwan/goten-protobuf/blob/main/reflect/protoreflect/value.go.

This is the key difference between our fork and the official implementation. It's worth mentioning, more or less, how it works.

Look at any gengo-generated file, like https://github.com/cloudwan/goten/blob/main/meta-service/resources/v1/service/service.pb.go. If you scroll somewhere near the bottom, to the init() function, you can see that we register all the types we generated in this file. We also pass raw descriptors. This is how we pass information to the protobuf library. It then populates its registry with all the proto descriptors and matches (via reflection) protobuf declarations with our Golang struct definitions. If it detects that some field is a string in protobuf, but a struct in the implementation, it will try to match it with ProtoStringer, which should work, as long as the interface matches.

We tried to make minimal changes in our fork, but unfortunately, we sometimes need to sync from the main one.

By the way, we can use the following protobuf functions (the proto.Message interface is implemented by ALL Go structs generated from protobuf message types), using the google.golang.org/protobuf/proto import; a runnable sketch follows the list:

  • proto.Size(proto.Message)

    To detect the size of the message in binary format (proto wire)

  • proto.Marshal(proto.Message)

    Serialize to the binary format.

  • proto.Unmarshal(in []byte, out proto.Message)

    De-serialize from binary format.

  • proto.Merge(dst, src proto.Message)

    It merges src into dst.

  • proto.Clone(proto.Message)

    It makes a deep copy.

  • proto.Equal(a, b proto.Message)

    It compares messages.
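
Here is a runnable sketch of those helpers, using a well-known wrapper type so it does not depend on any Goten resource:

package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	a := wrapperspb.String("hello")

	b := proto.Clone(a).(*wrapperspb.StringValue) // deep copy
	fmt.Println(proto.Equal(a, b))                // true
	fmt.Println(proto.Size(a))                    // size in proto wire format

	data, err := proto.Marshal(a) // serialize to binary
	if err != nil {
		panic(err)
	}
	c := &wrapperspb.StringValue{}
	if err := proto.Unmarshal(data, c); err != nil { // de-serialize
		panic(err)
	}
	fmt.Println(c.GetValue()) // "hello"

	proto.Merge(c, wrapperspb.String("world")) // merge src into dst
	fmt.Println(c.GetValue())                  // "world"
}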

More interestingly, we can extract annotations in Golang with this library, like:

import (
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/descriptorpb"

	resourceann "github.com/cloudwan/goten/annotations/resource"
	"github.com/cloudwan/goten/runtime/resource"
)

func IsRegionalResource(res resource.Resource) bool {
	msgOpts := res.ProtoReflect().Descriptor().
		Options().(*descriptorpb.MessageOptions)
	resSpec := proto.GetExtension(msgOpts, resourceann.E_Resource).(*resourceann.ResourceSpec)
	// ... Now we have an instance of ResourceSpec -> see the
	// goten/annotations/resource.proto file!
}

The ProtoReflect() function is often used to reach for object descriptors. It sometimes gives a nice alternative to regular reflection in Go (but it has some corner cases where it breaks on our types… and then we fall back to reflect).

3.2.5 - Goten TypeScript compiler

Understanding the TypeScript Goten compiler module.

In SPEKTRA Edge, we also maintain a TypeScript compiler that generates modules for the front end based on protobuf. It is fairly limited compared to the generated Golang, though. You can find the compiler code in the SPEKTRA Edge repository: https://github.com/cloudwan/edgelq/tree/main/protoc-gen-npm-apis.

It generates code to https://github.com/cloudwan/edgelq/tree/main/npm.

3.3 - Goten as a Runtime

Understanding the runtime aspect of the Goten framework.

The runtime directory contains various libraries linked during compilation. Many of the more complex cases will be discussed throughout this guide; here is rather a quick recap of some common/simpler ones.

runtime/goten

It is rather tiny and mostly defines the GotenMessage interface, which just merges the fmt.Stringer and proto.Message interfaces. Any message generated by protoc-gen-goten-go implements this interface. We could use it to figure out whether a message was generated by Goten.
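
A minimal sketch of such a check; the import path follows the directory layout described here, and the alias is our assumption:

import (
	gotenruntime "github.com/cloudwan/goten/runtime/goten"
	"google.golang.org/protobuf/proto"
)

// isGotenGenerated reports whether the message was generated by
// protoc-gen-goten-go: only those implement GotenMessage.
func isGotenGenerated(msg proto.Message) bool {
	_, ok := msg.(gotenruntime.GotenMessage)
	return ok
}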

runtime/object

For resources and many objects, excluding requests/responses, Goten generates additional helper types. This directory contains the interfaces for them. Also, for each proto message that has those helper types, Goten generates an implementation, as described by the GotenObjectExt interface.

  • FieldPath

    Describes some path valid within the associated object.

  • FieldMask

    Set of FieldPath objects, all valid for the same object.

  • FieldPathValue

    Combination of FieldPath and valid underlying value.

  • FieldPathArrayOfValues

    Combination of FieldPath and valid list of underlying values.

  • FieldPathArrayItemValue

    Combination of FieldPath describing slice and a valid underlying item value.

runtime/resource

This directory contains multiple interfaces related to resource objects. The most important interface is Resource, which is implemented by every proto message with the Goten resource annotation; see the resource.go file. The next most important is probably Descriptor, as defined in the descriptor.go file. You can access the proto descriptor using a ProtoReflect().Descriptor() call on any proto message; this descriptor contains additional functionality for resources.

Then, you have plenty of helper interfaces like Name, Reference, Filter, OrderBy, Cursor, and PagerQuery.

In the access.go file you have an interface that can be implemented by a store or an API client, by using the proper wrappers.

Note that resources have a global registry.

runtime/client

It contains important descriptors: for methods, API groups, and the whole service, all within a single version. It has some narrow use cases, for example in observability components, where we get request/response objects and need to use descriptors to extract something useful.

More often we use service descriptors, mostly for convenience when finding methods or, even more often, when iterating resource descriptors.

It contains a global registry for these descriptors.

runtime/access

This directory is connected with the access packages in generated services, but it is relatively thin, because those packages are pretty much fully code-generated. It mostly contains interfaces for watcher-related components. It has, however, a powerful registry component. If you have a connection to the service (just grpc.ClientConnInterface) and a descriptor of the resource, you can construct a basic API Access (CRUD) component or a high-level Watcher component (or the lower-level QueryWatcher). See the runtime/access/registry.go file for the actual implementation.

Note that this global registry needs to be populated, though. When any specific access package is imported (I mean, <service>/access/<version>/<resource>), its init function calls this global registry and stores the constructors.

This is the reason we have so many “dummy” imports, whose sole purpose is to invoke init functions, so that generic modules can create the access objects they need.
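
For example, a main.go may contain blank imports like the one below; the path here is hypothetical, but follows the <service>/access/<version>/<resource> convention:

import (
	// Imported purely for the side effect: the package's init()
	// function registers its access constructors in the global
	// registry from runtime/access.
	_ "example.com/myservice/access/v1/device"
)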

runtime/clipb

This directory contains a set of common functions/types used by CLI tools, like cuttle.

runtime/utils

This directory is worth mentioning for its proto utility functions, like:

  • GetFieldTypeForOneOf

    From the given proto message (which can be an empty dummy), it extracts the actual reflection type under the specified oneof path: not the interface, but the final concrete type. Normally it takes some effort to get it…

  • GetValueFromProtoPath

    From the given proto object and path, it extracts the single current value. It takes into account all the Goten-specific types, including those in oneofs. If the last item is an array, it returns the array as a single object.

  • GetValuesFromProtoPath

    Like GetValueFromProtoPath, but it returns multiple values. If the field path points to a single object, the result is a one-element array. If the field path points to some array, then it contains an array of those values. If the last field path item is NOT an array, but some middle field path item is, it returns all the values, making it more powerful than GetValueFromProtoPath.

  • SetFieldPathValueToProtoMsg

    It sets a value in a proto message under the given path. It allocates all the intermediate sub-objects if they are missing, and resets oneofs on the path.

  • SetFieldValueToProtoMsg

    It sets a value on a field specified by its descriptor.

3.3.1 - runtime/observability

Understanding the observability module in the Goten runtime.

In the Goten repo, there is an observability module located at runtime/observability. This module is responsible for:

  • Storing tracing spans (Jaeger and Google Tracing are supported)
  • Audit (service audit.edgelq.com)
  • Monitoring usage (metrics are stored in monitoring.edgelq.com)

In the Goten repo, this module is rather small; in observer.go we have the Observer for spans. Goten also stores in the context an object for the current gRPC call, called CallTracker (call_tracker.go). This generic tracker is used by the Audit and Monitoring usage reporters.

Goten also provides a global registry, where listeners can tap in to monitor all calls.

The mentioned module is, however, just a small base; the more substantial code is in the SPEKTRA Edge repository, in the common/serverenv/observability directory:

  • InitCloudTracing

    It initializes span tracing. It registers a global instance, which is picked up when we register the proper module, in the common/serverenv/grpc/server.go file. See the NewGrpcServer function, option grpc.StatsHandler. This is where tracing is added to the server. TODO: We will need to migrate the Audit- and Monitoring-related parts there, too. Also, I believe we should move the CallTracker initialization there altogether!

  • InitServerUsageReporter

    It initializes usage tracking. First, it stores a global usage reporter that periodically sends usage time series data. It also stores the standard observers: usage for the store and for the API. They are registered in the Goten observability module, to catch all calls.

  • InitAuditing

    It creates a logs exporter, as defined in the audit/logs_exporter module, then registers it within the Goten observability module.

Usage tracking

In the file common/serverenv/observability/observability.go, inside function InitServerUsageReporter, we initialize two modules:

  1. Usage reporter

    It is a periodic job that checks all the recorders from time to time and exports usage as time series. It is defined in common/serverenv/usage/reporter.go.

  2. usageCallObserver object

    With the RegisterStdUsageReporters call, it is registered within Goten observability (gotenobservability.RegisterCallObserver). It is defined in common/serverenv/usage/std_recorders.

The Reporter is supposed to be generic; there is the possibility to add more recorders. In the std_recorders directory, we just add the standard observers for API calls: we track usage on the API server AND local store usage.

If you look at the Reporter implementation, note that we always use the same Project ID. This is the Service Project ID, global for the whole Service, shared across all Deployments of this service. Each Service maintains usage metrics in its own project. By convention, if we want to distinguish usage across user projects, we have a label for it: user_project_id. This is a common convention. See the std_recorders/api_recorder.go and std_recorders/storage_recorder.go files and find the RetrieveResults calls. We provide a user_project_id label for all time series.

Let’s describe the standard usage trackers. The central point here is usageCallObserver, defined in the call_observer.go file. If you look at it, it catches all unary requests/responses plus streams (new/closed streams, new client or server messages). Its responsibilities are:

  • Insert store usage tracker in the context (via CallTracker).
  • Extract usage project IDs from requests or responses (where possible).
  • Notify the API and storage recorders when necessary; the storage recorder especially needs periodic flushing for streaming.

To track actual store usage, there is a dedicated store plugin; see the common/store_plugins/usage_observer.go file in the SPEKTRA Edge repository. It gets the store usage tracker and increments values when necessary!

In summary, this implementation provides the metrics for the fixtures defined in monitoring/fixtures/v4/per_service_metric_descriptor.yaml.

Audit

Auditing is initialized in the common/serverenv/observability/observability.go file, inside the InitAuditing function. It calls NewLogsExporter from the audit/logs_exporter package.

Then, inside RegisterExporter, defined in the common/serverenv/auditing/exporter.go file, we hook up two objects into the Goten observability module:

  • auditMsgVersioningObserver

    It’s responsible for catching all request/response versioning transformations.

  • auditCallObserver

    It’s responsible for catching all unary and streaming calls.

Of course, tracking API calls and versioning is not enough; we also need to export ResourceChangeLogs somehow. For this, we have an additional store plugin, in the common/store_plugins/audit_observer.go file! It tracks changes happening in the store and pings the Exporter when necessary. When the transaction is about to be committed, we call MarkTransactionAsReady. It may look a bit innocent, but it is not; see the implementation. We call OnPreCommit, which creates the ResourceChangeLog resources! If we do not succeed, we return an error, which breaks the entire transaction as a result. This is to ensure that ResourceChangeLogs are always present, even if we fail to commit ActivityLogs later on, so that something is still there in the audit.

The reason why we have a common/serverenv/auditing directory separate from the audit service was the idea that we should have an interface in the “common” part, with the implementation living elsewhere. This was an unnecessary abstraction, especially since we don't expect other exporters here (and we want to maintain the functionality and be able to break it). But for now, it is still there and probably will stay, due to the low harm.

The implementation of the audit log exporter should be fairly simple; see the audit/logs_exporter/exporter.go file in the SPEKTRA Edge repository.

Basically:

  • IsStreamReqAuditable and IsUnaryReqAuditable are used to determine whether we want to track this call. If not, no further calls will be made.

  • OnPreCommit and OnCommitResult are called to send a ResourceChangeLog. Those are synchronous calls; they do not return until the Audit finishes processing. Note that this will extend the duration of store transactions a bit!

  • OnUnaryReqStarted and OnUnaryReqFinished are called for unary requests and responses.

  • OnRequestVersioning and OnResponseVersioning are called for unary requests when their bodies are transformed between API versions. Their function is to extract potential labels from the updated requests or responses. The recorded activity logs will still be for the old version.

  • OnStreamStarted and OnStreamFinished should be self-explanatory.

  • OnStreamExportable notifies when ActivityLog can be generated. It is used to send ActivityLogs before the call finishes.

  • OnStreamClientMessage and OnStreamServerMessage add client/server messages to ActivityLogs.

  • OnStreamClientMsgVersioning and OnStreamServerMsgVersioning notify the exporter when client or server messages are transformed to different API versions.

Notable elements:

  • The audit log exporter can sample unary requests when deciding whether to audit or not.
  • While a ResourceChangeLog is sent synchronously and extends the call duration, ActivityLogs do not. The audit exporter maintains a set of workers for streaming and unary calls; they have slightly different implementations. They work asynchronously.
  • Stream and unary log workers will try to accumulate a small batch of activity logs before sending them, to save on IO work. They have timeouts based on log size and time.
  • Stream and unary log workers will retry failed logs, but if they accumulate too many, they will start dropping them.
  • Unary log workers send ActivityLogs for finished calls only.
  • Streaming log workers can send ActivityLogs for ongoing calls. If this happens, many ActivityLog fields, like labels, are no longer updatable. But requests/responses and exit codes will still be appended as Activity Log Events.

3.4 - Goten Design

Understanding the core concept of the Goten design.

The goten framework is designed for services to be:

  • spreading across multiple clusters in different regions
  • running with different versions at the same time

which means, Goten is aware of:

  • multi-services
  • multi-regions
  • multi-versions

We must think of it as a protocol between those entities.

Protocol, because there must be some established communication that enforces database schema stability despite swimming in this three-dimensional environment. We can't rely on any database features: even global databases with regional replication would not work, because services are not even guaranteed to run on the same database backend. Goten was shown to be a kind of language on top of protobuf because of its extended types. Now we see it also needs some protocol on top of gRPC, to ensure global correctness.

Since this is all integrated, we will also describe how multi-region design works from an implementation point of view. We assume you have a basic knowledge of the multi-region design as explained in the developer guide, which describes:

  1. regional resources
  2. MultiRegionPolicy object
  3. the region information included in the name field

The developer guide also explains the multi-version concept in the migration section.

With that knowledge in place, we will discuss four important concepts:

  1. Meta service as the service registry service
  2. EnvRegistry as the service discovery object
  3. the resource metadata for the service synchronization
  4. the multi-region policy store

with the protocol call flows and the actual implementation.

3.4.1 - Goten Design Concepts

Understanding the Goten design concepts.

3.4.1.1 - Meta Service as Service Registry

Understanding the role of Meta service as a service registry.

To build a multi-service framework, we first need a special service that provides a service registry. Using it, we must be able to discover:

  • List of existing Regions
  • List of existing Services
  • List of existing Resources per Service
  • List of existing regional Deployments per Service.

This is provided by the meta.goten.com Service, in the Goten repository, in the meta-service directory. It follows the typical structure of any service, but has no cmd directory or fixtures, as Goten provides only the basic parts. The final implementation is in the edgelq repository; see the meta directory. The SPEKTRA Edge version of meta contains an old version of the service, v1alpha2, which is obsolete and irrelevant to this document. For our purposes, ignore the v1alpha2 elements.

Still, the resource model for the Meta service resides in the Goten repository; see the regular protobuf files. For Goten, we made the following design decisions, which are reflected in the fields we have in the protobuf files (you can and should look at them):

  • The list of regions in the Meta service must show all possible regions where services can be deployed, not necessarily where they actually are deployed.
  • Each Service must be fairly independent. It must be able to specify its global network endpoint where it is reachable. It must display a list of API versions it has. For each API version, it must tell which services it imports, and which versions of them. It must tell what services it would like to use as a client too (but not import).
  • Every Deployment describes an instance of a service in a region. It must be able to specify its regional network endpoint and tell which service version it operates on (the current maximum version). It is assumed it can support lower versions too. Deployments of a single service do not need to upgrade to a new version all at once, but it's recommended not to wait too long.
  • Deployments can be added to a Service dynamically, meaning, service owners can expand by just adding new Deployment in Meta service.
  • Each Service manages its multi-region setup. Meaning: Each Service decides which region is “primary” for them. Then list of Deployment resources describes what regions are available.
  • Each region manages its network endpoints, but it is recommended to have the same domain for global and regional endpoints, with each regional endpoint having the region ID as part of the subdomain, before the main part.
  • For Service A to import Service B, we require that Service B is available in all regions where Service A is deployed. This should be the only limitation Services must follow for multi-region setup.

All those design decisions are reflected in the protobuf files and in the server implementation (custom middlewares); see the custom middlewares under meta-service/server/v1/ in the goten repository, they are fairly simple.

For SPEKTRA Edge, design decisions are that:

  • All core SPEKTRA Edge services (iam, meta adaptation, audit, monitoring, etc.) are always deployed to all regions and are deployed together.
  • It means, that 3rd party services can always import any SPEKTRA Edge core service because it is guaranteed to be in all regions needed by 3rd party.
  • All core SPEKTRA Edge services will point to the same primary region.
  • All core SPEKTRA Edge services will have the same network domain: iam.apis.edgelq.com, monitoring.apis.edgelq.com, etc. If you replace the first word with another, it will be valid.
  • If core SPEKTRA Edge services are upgraded in some regions, then they will be upgraded at once.
  • All core SPEKTRA Edge services will be public: Anyone authenticated will be able to read its roles, permissions, and plans, or be able to import them.
  • All 3rd party services will be assumed to be users of core SPEKTRA Edge services (no cost if no actual use).
  • Service resources can be created by a ServiceAccount only. It is assumed that it will be managing this Service.
  • A Service will belong to the Project where the ServiceAccount that created it belongs.

Users may think of the core edgelq services as a service bundle. Most of these SPEKTRA Edge rules are declarations, but I believe the deployment workflows enforce them anyway. The decision that all 3rd parties are considered users of all core SPEKTRA Edge services, and that each Service must belong to some project, is reflected in the additional custom middleware we have for the meta service in the edgelq repository; see the meta/server/v1/service/service_service.go file. In this extra middleware, executed before the custom middleware in the goten repository (meta-service/server/v1/service/service_service.go), we add the core SPEKTRA Edge services to the used services array. We also assign a project owning the Service. This is where the management of ServiceAccounts is, or where the usage metrics will go.

This concludes the description of the Meta service's workings, where we can find information about services and the relationships between them.

3.4.1.2 - EnvRegistry as Service Discovery

Understanding the role of EnvRegistry module in Meta service.

The Meta service provides an API allowing inspection of the global environment, but we also need a side library, called EnvRegistry:

  • It must allow a Deployment to register itself in a Meta service, so others can see it.
  • It must allow the discovery of other services with their deployments and resources.
  • It must provide a way to obtain real-time updates of what is happening in the environment.

Those three items above are the responsibilities of the EnvRegistry module.

In the goten repo, this module is defined in the runtime/env_registry/env_registry.go file.

As of now, it can only be used by the server, controller, and db-controller runtimes. It may someday prove beneficial for client runtimes too, but there we would opt out of the “registration” responsibility: since the client is not part of the backend, it cannot self-register in the Meta service.

One of the design decisions regarding EnvRegistry is that it must block until initialization is completed, meaning:

  • The user of an EnvRegistry instance must complete self-registration in the Meta Service.
  • EnvRegistry must obtain the current state of services and deployments.

Note that no backend service works in isolation; as part of the Goten design, it is essential that:

  • any backend runtime knows its surroundings before executing its tasks.
  • all backend runtimes are able to see the other services and deployments that are relevant to them.
  • all backend runtimes initialize and run the EnvRegistry component, and that this is one of the first things done in the main.go file.

This means that a backend service that cannot successfully pass initialization will be blocked from doing any useful work. If you check all the run functions in EnvRegistry, you should see that they lead to the runInBackground function. It runs several goroutines, then waits for a signal showing all is fine. After this, EnvRegistry can be safely used to find other services and deployments, and to make networking connections.

This also guarantees that the Meta service contains the relevant records for services; in other words, EnvRegistry registration initializes regions, services, deployments, and resources. Note, however:

  • The region resources can be created/updated by the meta.goten.com service only. Since meta is the first service, it is responsible for initializing this resource.
  • The service resource is created by the first deployment of a given service. So, if we release custom.edgelq.com for the first time, in the first region, it will send a CreateService request. The next deployment of the same service, in the next region, will just send UpdateService. This update must have a new MultiRegionPolicy, where the enabled-regions field contains the new region ID.
  • Each deployment is responsible for its deployment resource in Meta.
  • All deployments of a given service are responsible for the Resource instances. If a new service is deployed with the server, controller, and db-controller pods, they may initially send clashing create requests. We are fine with those minor races, since transactions in the Meta service, coupled with the CAS requests made by EnvRegistry, ensure eventual consistency.

Visit the runInit function, which is one of the goroutines of EnvRegistry executed by runInBackground. It contains the procedure for the registration of Meta resources, and it finishes after a successful run.

Another design property of EnvRegistry emerging from this process is that it is aware of its context: it knows which Service and Deployment it is associated with. Therefore, it has getters for its own Deployment and Service.

Let’s stay for a while with this run process, as it shows the other goroutines that run forever:

  • One goroutine keeps running runDeploymentsWatch.
  • A second goroutine keeps running runServicesWatch.
  • The final goroutine is the main one, runMainSync.

We don’t need real-time watch updates of regions and resources; we need services and their regional deployments only. Normally a watch requires a separate goroutine, and it is the same case here. To synchronize the actual event processing across multiple real-time updates, we need a “main synchronization loop”, which unites all the Go channels.

In the main sync goroutine, we:

  • Process changes detected by runServicesWatch.
  • Process changes detected by runDeploymentsWatch.
  • Catch the initialization signal from the runInit function, which guarantees that the information about our service is stored in Meta.
  • Attach new real-time subscribers. When they attach, they must get a snapshot of past events.
  • Detach real-time subscribers.

As an additional note: since EnvRegistry is self-aware, it gets only the Services and Deployments that are relevant. Those are:

  • Services and Deployments of its Service (obviously)
  • Services and Deployments that are used/imported by the current Service
  • Services and Deployments that are using the current Service

The last two items are important: they mean that the EnvRegistry of the top service (like meta.goten.com) is aware of all Services and Deployments. Services higher up will see all those below and above them, but they won't be able to see “neighbors”. The higher the tree grows, the fewer services there are above and the more below, but the proportion of neighbors will get higher and higher.

It should not be a problem, though, unless we reach a scale of thousands of Services; the core SPEKTRA Edge services will, however, be under more pressure than all the upstream ones, for various reasons.

In the context of SPEKTRA Edge, we made additional implementation decisions for the SPEKTRA Edge platform deployments:

  • Each service, except meta.goten.com itself, must connect to the regional meta service in its EnvRegistry.

    For example, iam.edgelq.com in us-west2 must connect to the Meta service in us-west2. The custom.edgelq.com service in eastus2 must connect to the Meta service in eastus2.

  • The server instance of meta.goten.com must use a local-mode EnvRegistry. The reason is that it can't connect to itself via its API, especially since it must complete EnvRegistry initialization before running its API server.

  • The DbController instance of meta.goten.com is special, and shows the asymmetric nature of the SPEKTRA Edge core services regarding regions. As a whole, the core SPEKTRA Edge services point to the same primary region; any other region is secondary. Therefore, the DbController instance of meta.goten.com must:

    • In the primary region, connect to the API server of meta.goten.com in the primary region (intra-region).
    • In a secondary region, connect to the API server of meta.goten.com in the primary region (the secondary region connects to the primary).

Therefore, when we add a new region, the meta-db-controller in the secondary region registers itself in the primary region's Meta service. This way the primary region becomes aware of the next region's creation. There is more behind the choice of meta-db-controller for this responsibility: meta-db-controller will also be responsible for syncing the secondary region's meta database from the primary one. This will be discussed in a following section of this guide. For now, we have just described the conventions of where EnvRegistry must source its information from.

3.4.1.3 - Resource Metadata

Understanding the resource metadata for the service synchronization

As a protocol, Goten needs to have protocol-like properties. One of them is the requirement that the resource types of all Services managed by Goten must contain metadata objects. It was already mentioned multiple times, but here is the link to the Meta object again: https://github.com/cloudwan/goten/blob/main/types/meta.proto.

A resource type managed by Goten must satisfy these interface methods (you can see them in the Resource interface defined in the runtime/resource/resource.go file):

GetMetadata() *meta.Meta
EnsureMetadata() *meta.Meta

There is, of course, an option to opt out: the Descriptor interface has the method SupportsMetadata() bool. If it returns false, it means the resource type is not managed by Goten and will be omitted from the Goten design! However, it is important to be able to recognize whether a resource type is subject to this design or not, including programmatically, as in the sketch below.
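
A minimal sketch of such a programmatic check; GetMetadata and SupportsMetadata are quoted above, while the getters on the syncing metadata (GetSyncing, GetOwningRegion, GetRegions) are assumed names derived from the field descriptions that follow:

import (
	"fmt"

	"github.com/cloudwan/goten/runtime/resource"
)

// describeOwnership is a sketch only; it reports whether the resource
// type participates in the Goten design, and if so, its region setup.
func describeOwnership(d resource.Descriptor, res resource.Resource) string {
	if !d.SupportsMetadata() {
		return "resource type is not managed by the Goten design"
	}
	m := res.GetMetadata()
	if m == nil {
		return "metadata not populated yet"
	}
	return fmt.Sprintf("owned by region %q, read copies in %v",
		m.GetSyncing().GetOwningRegion(), m.GetSyncing().GetRegions())
}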

To summarize: as a protocol, Goten requires resources to satisfy this interface. It is important to note what information is stored in resource metadata in the context of the Goten design:

  • The syncing field, of type SyncingMeta, must always describe which region owns a resource and which regions have a read copy of it. SyncingMeta must always be populated for each resource, regardless of its type.

  • The services field, of type ServicesInfo, must tell us which service owns a given resource and list the services for which this resource is relevant. Unlike syncing, services may not necessarily be populated; in that case, the Service defining the resource type is responsible for explaining how it works. In the future this will probably change slightly:

    If services is not populated at the moment of the resource save, it will point to the current service as the owner, and the allowed services will be a one-element array containing the current service too. This in fact should be assumed by default, but it is not enforced globally, which we will explain now.

First, the meta.goten.com service always ensures that the services field is populated in the following cases:

  • Instances of meta.goten.com/Service must have ServicesInfo where:
    • Field owning_service is equal to the current service itself.
    • Field allowed_services contains the current service, all imported/used services, AND all services using or importing this service! Note that this may change dynamically: if a new service is deployed, it will update the ServicesInfo fields of all the services it uses/imports.
  • Instances of meta.goten.com/Deployment and meta.goten.com/Resource must have their ServicesInfo synchronized with parent meta.goten.com/Service instance.
  • Instances of meta.goten.com/Region do not typically have ServicesInfo populated. However, in the SPEKTRA Edge context, we have a public RoleBinding that allows all users to read from this collection (but never write). Because of this private/public nature, there was no need to populate the service information there.

Note that this implies that the meta.goten.com service is responsible for syncing the ServicesInfo of meta.goten.com/Deployment and meta.goten.com/Resource instances. This is done by a controller implemented in the Goten repository, in the meta-service/controller directory. It is relatively simple.

However, while meta.goten.com can detect what ServicesInfo should be populated, this is often not the case at all. For example, when the iam.edgelq.com service receives a CreateServiceAccount request, it does not necessarily know whom this ServiceAccount is for. Multiple services may own ServiceAccount resources, and the resource type itself does not have a dedicated “service” field in its schema. The only way services can annotate ServiceAccount resources is by providing the necessary metadata information. Furthermore, if some custom service wants to make a ServiceAccount instance available for other services to see, it may need to provide multiple items in the allowed_services array. This should explain why service information must be determined at the business logic level. For this reason, it is allowed to have empty service information, but in many cases SPEKTRA Edge will enforce its presence, where the business logic requires it. A hypothetical sketch of such an annotation follows.
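
For illustration, a hypothetical custom service creating a ServiceAccount it owns might annotate it roughly as below. The Go identifiers are assumed from the protobuf field names mentioned here (services, owning_service, allowed_services), and the service names are invented:

import (
	"github.com/cloudwan/goten/runtime/resource"
	meta "github.com/cloudwan/goten/types/meta"
)

// annotateServiceAccount marks the ServiceAccount as owned by
// custom.edgelq.com, while letting other.edgelq.com see it too.
func annotateServiceAccount(sa resource.Resource) {
	sa.EnsureMetadata().Services = &meta.ServicesInfo{
		OwningService:   "custom.edgelq.com",
		AllowedServices: []string{"custom.edgelq.com", "other.edgelq.com"},
	}
}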

The situation for the other meta field, syncing, is much easier: its value can be determined at the schema level. Instructions are already given in the multi-region design section of the developer guide.

The region setup can always be determined based on the resource name alone:

  • If it is a regional resource (it has a region/ segment in the name), the name strictly tells which region owns it. The list of regions that get a read-only copy is decided by the resource name properties described below.
  • If it contains a well-known policy-holder in the name, then the policy-holder defines what regions get a read copy. If the resource is non-regional, then MultiRegionPolicy also tells what region owns it (default control region).
  • If the resource is not subject to MultiRegionPolicy (like Region, or User in iam.edgelq.com), then it is a subject of MultiRegionPolicy defined in the relevant meta.goten.com/Service instance (for this service).

Now the trick is: all policy-holder resources are well-known. Although we try not to hardcode anything anywhere, Goten provides utility functions for detecting whether a resource contains a MultiRegionPolicy field in its schema. This also must be defined in the Goten specification. By detecting which resource types are policy-holders, Goten can provide components that can easily extract the regional information from a given resource by its name only.

Versioning information does not need to be specified in the resource body. Having an instance, it is easy to get the Descriptor instance and check the API version. All schema references are clear in this regard too: if resource A has a reference field to resource B, then from the reference object we can get the Descriptor instance of B and obtain the version. The only place where this is not possible is meta owner references. Therefore, in the metadata.owner_references field, each instance must contain the name, the owning service, the API version, and the region (just in case it is not provided in the name field). When talking about meta references, it is important to mention other differences compared to schema-level references:

  • schema references are owned by a Service that owns resources with references.
  • meta owner references are owned by a Service to which references are pointing!

This ownership has an implication: when Deployment D1 in Service S1 upgrades from v1 to v2 (for example), and there is some resource X in Deployment D2 from Service S2, and this X has a meta owner reference to some resource owned by D1, then D1 will be responsible for sending an Update request to D2, so that the meta owner reference is updated.

3.4.1.4 - Multi-Region Policy Store

Understanding the design of the multi-region policy store.

We mentioned the MultiRegion policy-holder resources and their importance when it comes to evaluating region syncing information based on the resource name. There is a need for a MultiRegion PolicyStore object that, for any given resource name, returns the managing MultiRegionPolicy object. This object is defined in the Goten repository, in the runtime/multi_region/policy_store.go file. This file is important for this design and worth remembering. As of now, it returns a nil object for global resources, though; in this case the caller should take the MultiRegionPolicy from the EnvRegistry component of the relevant Service.

It uses a cache that accumulates the policy objects, so normally we should not need any IO operations, except initially. We have watch-based invalidation, which allows us to have a long-lived cache.

We have some code generation that provides the functions needed to initialize the PolicyStore for a given Service in a given version, but the caller is responsible for remembering to include them (all those main.go files for the server runtimes!).

In this file, you can also see functions that set/get a MultiRegionPolicy from a context object. In the multi-region design, server code is required to store the MultiRegionPolicy object in the context if there will be updates to the database!
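
A minimal sketch of how a write handler might use these pieces; the names GetMultiRegionPolicy, CtxWithPolicy, and prepareWriteCtx are assumptions standing in for the actual helpers in runtime/multi_region/policy_store.go:

```go
package server

import "context"

// MultiRegionPolicy and PolicyStore are illustrative stand-ins for the types
// in runtime/multi_region.
type MultiRegionPolicy struct{ EnabledRegions []string }

type PolicyStore interface {
	// Cached and watch-invalidated; IO happens mostly on first use.
	GetMultiRegionPolicy(ctx context.Context, resName string) (*MultiRegionPolicy, error)
}

type ctxKey struct{}

// CtxWithPolicy mirrors the set-policy-on-context helper described above.
func CtxWithPolicy(ctx context.Context, p *MultiRegionPolicy) context.Context {
	return context.WithValue(ctx, ctxKey{}, p)
}

// prepareWriteCtx must run before any database updates in the call.
func prepareWriteCtx(ctx context.Context, store PolicyStore, resName string) (context.Context, error) {
	policy, err := store.GetMultiRegionPolicy(ctx, resName)
	if err != nil {
		return nil, err
	}
	// policy may be nil for global resources; the caller should then take
	// the MultiRegionPolicy from EnvRegistry (not shown here).
	return CtxWithPolicy(ctx, policy), nil
}
```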

3.4.2 - Goten Protocol Flows

Understanding the Goten protocol flows.

Design decisions include:

  1. services are isolated, but they can use/import services on lower levels only, and they can support only a subset of regions available from these used/imported services.
  2. deployments within the Service must be isolated in the context of versioning. Therefore, they don’t need to point to the same primary API version and each Service version may import different services in different versions.
  3. references may point across services only if the Service imports another service. References across regions are fine, it is assumed regions for the same Service trust each other, at least for now.
  4. all references must carry region, version, and service information to maintain a full global environment.
  5. We have schema and meta owner references. Schema refs define a region by name, version, and service by context. Meta refs have separate fields for region, service, and version.
  6. Schema references may be of blocking type, use cascade deletion, or unset.
  7. Meta references must trigger cascade deletion if all owners disappear.
  8. Each Deployment (a Service + Region pair) is responsible for maintaining the metadata.syncing fields of resources it owns.
  9. Each Deployment is responsible for catching up with read-copies from other regions available for them.
  10. Each Deployment is responsible for local database schema and upgrades.
  11. Each Deployment is responsible for Meta owner references in all service regions if they point to the Deployment (via Kind and Region fields!).
  12. Every time cross-region/service references are established, the other side may reject this relationship.

We have several components in API servers and db controllers for maintaining order in this graph. Points one to three are enforced by the Meta service and EnvRegistry components. EnvRegistry uses descriptors generated from the Goten specification to populate the Meta service. If someone is “cheating”, see point twelve: the other side may reject the relationship.

3.4.2.1 - API Server Flow

Understanding the API server flow.

To enforce general schema consistency, we must first properly handle requests coming from users, especially writing ones.

The following rules are executed when API servers get a write call:

  • when a writing request is sent to the server, the multi-region routing middleware must inspect the request and ensure that all resources that will be written to (or deleted) are owned by the current region. It must store the MultiRegionPolicy object in the context associated with the current call.
  • write requests can only execute write updates for resources under a single multi-region policy! This means that writing across, say, two projects is not allowed; writing operations to global resources are still allowed, though. If there is an attempt to write to multiple resources across different policy-holders in a single transaction, the Store object must reject the write (see the sketch after this list).
  • Store object must populate the metadata.syncing field when saving. It should use MultiRegionPolicy from context.
  • When the server calls the Save or Delete function on the store interface (for whatever Service resource), the following things happen:
    • If this is a creation/update, and the new resource has schema references that were not there before, then the Store is responsible for connecting to those Services and ensuring that resources exist, the relationship is established, and it is allowed to establish references in general. For references to local resources, it also needs to check if all is fine.
    • If this is deletion, the Store is obliged to check if there are any blocking back-references. It needs to connect with Deployments where references may exist, including self. For local synchronous cascade deletion & unset, it must execute them.
    • When Deployment connects with others, it must respect their API versions used.
  • Meta owner references are not checked, because it is assumed they may be created later. Meta owner references are asynchronously checked by the system after the request is completed.
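
A hedged sketch of the first two rules; the helper functions and error handling are hypothetical, not the actual routing middleware:

```go
package server

import (
	"context"
	"fmt"
)

// owningRegionOf and policyHolderOf are assumed helpers that resolve region
// and policy-holder from the resource name alone (see the earlier sketch).
func owningRegionOf(name string) string { return "" /* ... */ }
func policyHolderOf(name string) string { return "" /* ... */ }

// CheckWrite enforces local ownership of every written resource and at most
// one multi-region policy-holder per transaction.
func CheckWrite(ctx context.Context, currentRegion string, written []string) error {
	holder := ""
	for _, name := range written {
		if r := owningRegionOf(name); r != currentRegion {
			return fmt.Errorf("%s is owned by region %s, not %s", name, r, currentRegion)
		}
		h := policyHolderOf(name)
		if h == "" {
			continue // global resources may be mixed into the transaction
		}
		if holder == "" {
			holder = h
		} else if holder != h {
			return fmt.Errorf("write spans two policy-holders: %s and %s", holder, h)
		}
	}
	return nil
}
```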

This is a designed flow for API Servers, but we have a couple more flows regarding schema consistency. First, let’s define some corner cases when it comes to blocking references across regions/services. Scenario:

  • Deployment D1 gets a write (Creation) to resource R1. Establishes SNAPSHOT transaction.
  • R1 references (blocking) R2 in Deployment D2, therefore, on the Save call, D1 must ensure everything is valid.
  • Deployment D1 sends a request to establish a blocking reference to R2 for R1. D2 can see R2 is here.
  • D2 blocks resource R2 in its SNAPSHOT transaction. Then sends a signal to D1 that all is good.

Two things can happen:

  • D1 may fail to save R1 because of the failure of its local transaction. Resource R2 may be left with some blockade.
  • There is a small chance that, after a successful blockade on R2, D2 gets a request to delete R2 while R1 still does not exist, because D1 did not finish its transaction yet. If D2 asks D1 about R1, D1 will say nothing exists. R2 will be deleted, but then R1 may appear.

Therefore, when D2 blocks resource R2, it is a special tentative blockade with a timeout of up to 5 minutes, if I recall the amount correctly. This is way more than enough, since transactions are configured to time out after one minute. It means R2 cannot be deleted during this period (a minimal model follows after the list below). The protocol then continues:

  • If D1 fails its transaction, D2 is responsible for asynchronously removing the tentative blockade from R2.
  • If D1 succeeds with its transaction, then D1 is responsible for asynchronously informing D2 that the tentative blockade on R2 is confirmed.
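
An illustrative model of the tentative blockade; the struct and the exact timeout value are assumptions based on this description:

```go
package server

import "time"

// tentativeBlockade marks a ResourceShadow as not deletable while a remote
// transaction that references it may still be in flight.
type tentativeBlockade struct {
	ReferencingResource string    // e.g. the name of R1
	ExpiresAt           time.Time // roughly creation time + 5 minutes
}

// canDelete refuses deletion of R2 while any unexpired blockade exists;
// confirmed blockades become regular back reference sources instead.
func canDelete(blockades []tentativeBlockade, now time.Time) bool {
	for _, b := range blockades {
		if now.Before(b.ExpiresAt) {
			return false
		}
	}
	return true
}
```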

3.4.2.2 - Meta Owner Flow

Understanding the meta owner flow.

Let’s define some terminologies:

  • Meta Owner

    It is a resource that is pointed to by a Meta owner reference object.

  • Meta Ownee

    It is a resource that points to another resource by the metadata.owner_references field.

  • Meta Owner Deployment

    Deployment to which Meta Owner belongs.

  • Meta Ownee Deployment

    Deployment to which Meta Ownee belongs.

  • Meta Owner Reference

    It is an item in metadata.owner_references array field.

We have three known cases where action is required:

  1. API Server calls the Save method of Store, and the saved resource has non-empty meta owner refs. The API Server must schedule asynchronous tasks to be executed after the resource is saved locally (we trust that meta owner refs are valid). Then, asynchronously:

    • the Deployment owning the meta ownee resource must periodically check whether the meta owners exist in the target Deployments.
    • if after some timeout it is detected that a meta owner reference is not valid, it must be removed. If this empties the whole meta owner refs array, the resource itself must be deleted.
    • if a meta owner reference is valid, the Deployment with the meta ownee resource is responsible for sending a notification to the Deployment with the meta owner resource; for a valid reference, this notification will succeed.
    • if the Deployment with the meta ownee detects during validation that the version of a meta owner reference is too old, it must upgrade it.

    Note that in this flow the Deployment with the meta ownee resource is the actor initiating action; it must ask the Deployments with the meta owners whether its meta ownee is valid.

  2. API Server calls the Save method of Store, and the saved resource is known to be the meta-owner of some resources in various Deployments. In this case, the meta owner Deployment is responsible for the actions, asynchronously:

    • it must iterate over the Deployments where meta ownees may be, and verify whether they are affected by the latest save. If not, no action is needed. Why might meta ownees be affected? Let's list the cases below.
    • sometimes, a meta owner reference has a flag telling that the meta owner must have a schema reference to the meta ownee resource. If this is the case, and we see that the meta owner lost its reference to a meta ownee, the meta ownee must be forced to clean up its meta owner refs. This may trigger its deletion.
    • if there was a Meta Owner Deployment version upgrade, this Deployment is responsible for updating all meta ownee resources. Meta ownees must have meta owner references using the current version of the target Deployment.
  3. API Server calls the Delete method of Store, and the deleted resource is KNOWN to be the meta-owner of some resources in various Deployments. The Deployment owning the deleted meta owner resource is responsible for the following asynchronous actions:

    • It must iterate over Deployments where meta ownees may exist, and list them.
    • For each meta ownee, the Meta Owner Deployment must notify the Meta Ownee Deployment about the deletion.
    • API Server of meta ownee deployment is responsible for removing meta owner reference from the array list. It may trigger the deletion of meta ownee if there are no more meta owner references.

Note that all flows are pretty much asynchronous, but they still ensure consistency of meta owner references. In some cases it is the meta owner Deployment reaching out, in others the other way around; it depends on which resource was updated last.

3.4.2.3 - Cascade Deletion Flow

Understanding the cascade deletion flow.

When some resource is deleted, and the API Server accepts deletion, it means there are no blocking references anywhere. This is ensured. However, there may be resources pointing to deleted ones with asynchronous deletion (or unset).

In these flows we talk only about schema references; meta owner references are fully covered already.

When a Deployment deletes some resource, all Deployments affected by this deletion must take asynchronous action. It means that if Deployment D0-1 from Service S0 imports Services S1 and S2, and S1 + S2 have deployments D1-1, D1-2, D2-1, D2-2, then D0-1 must maintain four real-time watches asking for any deletions it needs to handle! In some cases, I remember a service importing five others. If there were 50 regions, it would mean 250 watch instances, but that would be a very large deployment with sufficient resources for goroutines.

Suppose that D1-1 had some resource RX that was deleted. The following happens:

  • D1-1 must notify all interested deployments that RX is deleted by inspecting back reference sources.
  • Suppose that RX had some back-references in Deployment D0-1, Deployment D1-1 can see that.
  • D1-1, after notifying D0-1, periodically checks if there are still active back-references from D0-1.
  • Deployment D0-1, which points to D1-1 as an importer, is notified about the deleted resource.
  • D0-1 grabs all local resources that need cascade deletion or unset. For unsets, it needs to execute regular updates. For deletions, it needs to delete (or mark for deletion if there are still some other back-references pointing, which may be blocking).
  • Once D0-1 deals with all local resources pointing to RX, it is done, it has no work anymore.
  • At some point, D0-1 will be asked by D1-1 whether RX still has back refs. If it does not, D0-1 will confirm all is clear, and D1-1 will finally clean up what remains of RX.

Note that:

  • This deletion spree may be deep for large object deletions, like projects. It may involve multiple levels of Deployments and Services.

  • If there is an error in the schema, some pending deletion may be stuck forever. By error in the schema, we mean situations like:

    • Resource A is deleted, and is back referenced from B and C (async cascade delete).
    • Normally B and C should be deleted, but there may be a problem if C is, say, blocked by D, and D has no relationship with A, so it will never be deleted. In this case, B is deleted, but C is stuck, blocked by D. Unfortunately, as of now Goten does not detect schema errors like this; perhaps detecting them would be a good idea, although it is not certain it is possible.
    • It will be the service developers’ responsibility to fix schema errors.
  • In the flow, D0-1 imports the Service to which D1-1 belongs. Therefore, we know that D0-1 knows the full service schema of D1-1, but not the other way around. We need to consider this when D1-1 asks D0-1 whether RX still has back refs.

3.4.2.4 - Multi-Region Sync Flow

Understanding the multi-region synchronization flow.

First, each Deployment must keep updating metadata.syncing for all resources it owns. To watch owned resources, it must:

  • WATCH <Resource> WHERE metadata.syncing.owningRegion = <SELF>.

    It will be getting updates in real-time.

The API Server already ensures that the metadata.syncing field is synced on every resource update! However, we have an issue when a MultiRegionPolicy object changes. This is where the Deployment must asynchronously update all resources that are subject to this policy-holder. It must therefore send Watch requests for ALL resources that can be policy-holders. For example, the Deployment of iam.edgelq.com will need to have three watches:

  1. Watch Projects WHERE multi_region_policy.enabled_regions CONTAINS <MyRegion>

    by iam.edgelq.com service.

  2. Watch Organizations WHERE multi_region_policy.enabled_regions CONTAINS <MyRegion>

    by iam.edgelq.com service.

  3. Watch Services WHERE multi_region_policy.enabled_regions CONTAINS <MyRegion>

    by meta.goten.com service.

Simpler services like devices.edgelq.com would need to watch only Projects, because they do not have other resources subject to multi-region policies.

Each Deployment needs to watch the policy-holders that are relevant in its region.

Flow is now the following:

  • When Deployment gets a notification about the update of MultiRegionPolicy, it needs to accumulate all resources subject to this policy.
  • Then it needs to send an Update request for each; the API server ensures that metadata.syncing is updated accordingly.

The above description ensures that metadata.syncing is up-to-date.

The next part is the actual multi-region syncing. Here, the Deployments of each Service MUST have one active watch on every other Deployment from the same family. For example, if we have iam.edgelq.com in regions japaneast, eastus2, and us-west2, then the following watches must be maintained:

The Deployment of iam.edgelq.com in us-west2 has two active watches, one sent to the japaneast region, the other to eastus2:

  • WATCH <Resources> WHERE metadata.syncing.owningRegion = japaneast AND metadata.syncing.regions CONTAINS us-west2
  • WATCH <Resources> WHERE metadata.syncing.owningRegion = eastus2 AND metadata.syncing.regions CONTAINS us-west2

Deployments in japaneast and eastus2 will also have two similar watches each. We have a full mesh of connections.

Then, when some resource in us-west2 gets created with metadata.syncing.regions = [eastus2, japaneast], one copy will be sent to each of these regions. This copying runs pretty much continuously.
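
A small sketch of how the full-mesh filters above might be composed; the filter-string form mirrors the WATCH pseudo-queries and is illustrative:

```go
package main

import "fmt"

// crossRegionFilters builds, for the given region, one watch filter per
// foreign region of the same Service family.
func crossRegionFilters(self string, allRegions []string) []string {
	var filters []string
	for _, region := range allRegions {
		if region == self {
			continue
		}
		filters = append(filters, fmt.Sprintf(
			"metadata.syncing.owningRegion = %s AND metadata.syncing.regions CONTAINS %s",
			region, self))
	}
	return filters
}

func main() {
	// Yields the two filters listed above for the us-west2 deployment.
	for _, f := range crossRegionFilters("us-west2", []string{"japaneast", "eastus2", "us-west2"}) {
		fmt.Println(f)
	}
}
```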

Now, on the startup, it is necessary to mention the following procedure:

  • Deployment should check all lists of currently held resources owned by other regions, but syncable locally.
  • Grab a snapshot of these resources from the other regions and compare whether anything is missing, or whether we hold too much (missed deletions). If so, it should execute the missing actions to bring the system into sync.
  • During the initial snapshot comparison, it is still valuable to keep copying real-time updates from other regions. It may take some time for the snapshot to be completed.

3.4.2.5 - Database Migration Flow

Understanding the database migration flow.

When a Deployment boots up after an image upgrade, it will detect that the currently active version is lower than the version it can support. In that case, the API Server will serve the older version normally, but the new version API will become available in read-only mode. The Deployment is responsible for asynchronous, background syncing of the higher-version database with the current-version database. Clients are expected to use the older version anyway, so they won't necessarily see the incomplete higher version. Besides, it's fine, because what matters is the current version pointed out by the Deployment.

It is expected that all Deployments will get new images first before we start switching to the next versions. Each Deployment will be responsible for silent copying.

For the multi-region case, when multiple deployments of the same service are on version v1 but run on images that can support version v2, they will still be synced with each other, on both versions: v1 and v2. When images are being deployed region by region (Deployment by Deployment), they may experience Unimplemented error messages, but only until images are updated in all regions. We may improve this and try to detect “available” versions first, before making cross-region watches.

Anyway, it will be required that new images are deployed to all regions before the upgrade procedure is triggered on any Regional deployment.

The upgrade can then be done one Deployment at a time, using the procedure described in the migration section of the developer guide.

When one Deployment is officially upgraded to the new version but still primarily uses the old one, all deployments still watch each other on both versions, for the sake of multi-region syncing. However, a Deployment using the newer version may already opt out of pulling older API resources from other Deployments at this point.

Meta owner references are owned by the Deployment they point to. It means that they are upgraded asynchronously after that Deployment switches its current version to the newer one.

3.4.3 - Goten Flow Implementation

Understanding the Goten flow implementation.

All components for described flows are implemented in the Goten repository, we have several places where implementation can be found:

  • In runtime/schema-mixin we have a mixin service directory, which must be part of all services using Goten.
  • In runtime/store/constraint we have another “middleware” for Store, which is aware of the cross-service and cross-regional nature of schemas. This middleware must be used in all API Servers.
  • In runtime/db_constraint_ctrl we have a controller that handles asynchronous schema-related tasks like asynchronous cascade deletions, meta owner references management, etc.
  • In runtime/db_syncing_ctrl we have a controller that handles all tasks related to DB syncing: Cross-region syncing, metadata.syncing updates, database upgrades, and search database syncing as well.

3.4.3.1 - Schema Mixin

Understanding the schema mixin implementation.

Mixins are special kinds of services that are supposed to be mixed/blended with proper services. Like any service, they have an api-skeleton, protobuf files, resources, and server handlers. What they don’t get is an independent deployment. They don’t exist in the Meta Service registry. Instead, their resources and API groups are mixed with the proper service’s resources.

Moreover, for schema mixins, we do not validate references to other resources; they are excluded from this mechanism, and it’s up to the developer to keep them valid.

The Goten repository provides the schema mixin, under runtime/schema-mixin. If you look at this mixin service, you will see that it has the ResourceShadow resource. By mixing the schema mixin with, let’s say, the Meta service, which formally has four resource types and four API groups, we get in total a Meta service with:

  • Resources: Region, Service, Deployment, Resource, ResourceShadow
  • API Groups: Region, Service, Deployment, Resource, ResourceShadow (CRUD plus custom actions).

If you inspect the Meta service database, you will have five collections (unless there are more mixins).

See api-skeleton: https://github.com/cloudwan/goten/blob/main/runtime/schema-mixin/proto/api-skeleton-v1.yaml.

By requiring that ALL services attach the schema-mixin to themselves, we can guarantee that all services can access each other via the schema-mixin. This is one of the key ingredients of Goten’s protocol. Some common service is always needed because, to enable circular communication between two services that can’t possibly know each other’s schemas, they need some kind of common protocol.

Take a look at the resource_shadow.proto file. Just a note: you can ignore target_delete_behavior; it is there more for informative purposes, as Goten does not provide schema management for mixins. ResourceShadow is a very special kind of resource, and it exists for every other resource in a deployment (except other mixins). To illustrate, let’s take a look at the list of resources that may exist in the Deployment of Meta service in region us-west2:

  • regions/us-west2 (Kind: meta.goten.com/Region)
  • services/meta.goten.com (Kind: meta.goten.com/Service)
  • services/meta.goten.com/resources/Region (Kind: meta.goten.com/Resource)
  • services/meta.goten.com/resources/Deployment (Kind: meta.goten.com/Resource)
  • services/meta.goten.com/resources/Service (Kind: meta.goten.com/Resource)
  • services/meta.goten.com/resources/Resource (Kind: meta.goten.com/Resource)
  • services/meta.goten.com/deployments/us-west2 (Kind: meta.goten.com/Deployment)

If those resources exist in the database for meta.goten.com in us-west2, then collection ResourceShadow will have the following resources:

  • resourceShadows/regions/us-west2
  • resourceShadows/services/meta.goten.com
  • resourceShadows/services/meta.goten.com/resources/Region
  • resourceShadows/services/meta.goten.com/resources/Deployment
  • resourceShadows/services/meta.goten.com/resources/Service
  • resourceShadows/services/meta.goten.com/resources/Resource
  • resourceShadows/services/meta.goten.com/deployments/us-west2

Basically it’s a one-to-one mapping, with the following exceptions:

  • if there are other mixin resources, they don’t get ResourceShadows.
  • synced read-only copies from other regions do not get ResourceShadows. For example, resource regions/us-west2 will exist in region us-west2, and resourceShadows/regions/us-west2 will also exist in us-west2. But, if regions/us-west2 is copied to other regions, like eastus2, then resourceShadows/regions/us-west2 WILL NOT exist in eastus2.

This makes Resource shadows rather “closed” within their Deployment.

ResourceShadow instances are created/updated along with the resource they represent, during each transaction. This ensures that they are always in sync with the resource. They contain all references to other resources, and they contain all back reference source deployments. The reason we store back reference deployments, not an exact list of back references, is that the full list would be massive; imagine a Project instance and 10000 Devices pointing to it. Instead, if, let’s say, those devices are spread across four regions, the ResourceShadow for the Project will have four back reference sources, which is more manageable.

Now, with ResourceShadows, we can provide the abstraction needed to facilitate communication between services. However, note that we don’t use standard CRUD at all (for shadows). We did in the past, but the problem with CRUD is that the requests don’t contain the “API Version” field.

For example, we have the secrets.edgelq.com service in versions v1alpha2 and v1. In the older version, we have a Secret resource with the name pattern projects/{project}/secrets/{secret}. With the v1 upgrade, the name pattern changed to projects/{project}/regions/{region}/secrets/{secret}. Note that this means the ResourceShadow name changes too!

Suppose there are services S1 and S2. S1 imports secrets in v1alpha2, and S2 imports secrets in v1. Suppose both S1 and S2 want to create resources concerning some Secret instance. In this case, they would use the schema-mixin API and would produce conflicting resource shadow names, but this conflict arises from the version difference, not from a bug. S1 would try to establish a reference to the shadow for projects/{project}/secrets/{secret}, and S2 would use the version with the region.

This problem repeats across the whole CRUD for ResourceShadow, so we don’t use it. Instead, we developed a set of custom actions that you can see in the api-skeleton of schema-mixin, like EstablishReferences, ConfirmBlockades, etc. All those requests contain a version field, and the API Server can use versioning transformers to convert names between versions.

Now, coming back to the custom actions for ResourceShadows: see the api-skeleton along with the protobuf request objects (recommended)!

We described a flow for how references are established when API Servers handle writing requests; this is where the schema mixin API is in use.

EstablishReferences is used by Store modules in API Servers when they save resources with cross-region/service references. It is called DURING the transaction of the Store in the API Server. It ensures that referenced resources will not be deleted for the next few minutes, by creating tentative blockades in the ResourceShadow instances on the other side. You may check the implementation in the Goten repo, file runtime/schema-mixin/server/v1/resource_shadow/resource_shadow_service.go. When the transaction concludes, the Deployment asynchronously sends ConfirmBlockades to remove the tentative blockade from the referenced ResourceShadow in the target Service. It leaves a back reference source in place, though!

For deletion requests, the API Server must call CheckIfResourceIsBlocked before proceeding with resource deletion. It must also block deletion if there are tentative blockades in ResourceShadow.

We also described the Meta owner flows, with three cases. When the Meta Ownee Deployment tries to confirm the meta owner, it must issue a ConfirmMetaOwner call to the Meta Owner Deployment instance. If all is fine, we get a successful response. If there is a version mismatch, the Meta Ownee Deployment will send an UpgradeMetaOwnerVersion request to itself (its API Server), so the meta owner reference finally reaches the desired state. If ConfirmMetaOwner discovers that the Meta Owner does not confirm ownership, then the Meta Ownee Deployment should use the RemoveMetaOwnerReference call.

When it is Meta Owner Deployment that needs to initiate actions (cases two and three), it needs to use ListMetaOwnees to get meta ownees. When relevant, it will need to call UpgradeMetaOwnerVersion or RemoveMetaOwnerReference, depending on the context of why we are iterating meta ownees.

In the asynchronous deletion handling we described, the most important schema-mixin API action is WatchImportedServiceDeletions. This is a real-time watch subscription with versioning support. For example, if we have Services S1 and S2 importing secrets.edgelq.com in versions v1alpha2 and v1, then when some Secret is deleted (with a name pattern containing the region in v1 only), a separate WatchImportedServiceDeletionsResponse is sent to the S1 and S2 Deployments, each containing the shadow ID of the secret in the version that Service desires.

In the deletion flow, we also use CheckIfHasMetaOwnee and CheckIfResourceHasDeletionSubscriber. These methods are generally used when waiting for back-references to be deleted.

Since the schema-mixin Server is mixed with the proper service, we can also access the original resources from the Store interface! Overall, the schema mixin is a powerful utility for Goten as a protocol.

We still need CRUD in ResourceShadows, because:

  • Update, Delete, and Watch functions are used within Deployment itself (where we know all runtimes use the same version).
  • debugging purposes. Developers can use read requests when some bug needs investigation.

3.4.3.2 - Metadata Syncing Decorator

Understanding the metadata synchronization decorator.

As we said, when the resource is saved in the Store, the metadata.syncing field is refreshed according to the MultiRegionPolicy. See the decorator component in the Goten repository: runtime/multi_region/syncing_decorator.go. This is wrapped up by a store plugin, runtime/store/store_plugins/multiregion_syncing_decorator.go.

This plugin is added to the stores of all API Servers. It can be opted out of only if multi-region features are not used at all. When a Deployment sees that metadata.syncing is not up-to-date with the MultiRegionPolicy, an empty update can fix this. Thanks to this, we could annotate this field as output only (in the protobuf file), and users cannot make any mistakes there.
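
What the plugin does on save can be sketched as follows; the types and the hook name are illustrative stand-ins for runtime/store/store_plugins, not the exact interface:

```go
package storeplugins

import "context"

// Illustrative stand-ins for Goten's actual types.
type Syncing struct {
	OwningRegion string
	Regions      []string
}

type Resource interface {
	GetName() string
	SetSyncing(Syncing)
}

type MultiRegionPolicy struct {
	DefaultControlRegion string
	EnabledRegions       []string
}

// policyFromCtx retrieves the policy stored in the context by the routing
// middleware earlier in the call (assumed helper).
func policyFromCtx(ctx context.Context) *MultiRegionPolicy { return nil }

// onPreSave refreshes metadata.syncing from the governing policy; because the
// store always recomputes it, the field can stay output-only for users.
func onPreSave(ctx context.Context, res Resource) {
	if p := policyFromCtx(ctx); p != nil {
		res.SetSyncing(Syncing{
			OwningRegion: p.DefaultControlRegion,
			Regions:      p.EnabledRegions,
		})
	}
}
```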

3.4.3.3 - Constraint Store

Understanding the constraint store.

As was said, the Store is a series of middlewares, like the Server, but the base document in the Contributor guide only showed the core and cache layers. An additional layer is Constraints; you can see it in the Goten repo, runtime/store/constraints/constraint_store.go.

It focuses mostly on decorating the Save/Delete methods. When saving, it grabs the current ResourceShadow instance for the saved resource. Then it ensures references are up-to-date. Note that it calls the processUpdate function, which repopulates shadow instances. For each new reference that was not there before, it needs to connect with the relevant Deployment and confirm the relationship. All new references are grouped into Service & Region buckets. For each foreign Service or Region, it sends an EstablishReferences call. It needs to consider versioning too, because shadow names may change.

Note that we have a “Lifecycle” object, where we store any flags indicating if asynchronous tasks are pending on the resource. State PENDING shows that there are some asynchronous tasks to execute.

The EstablishReferences method is not called for local references. Instead, at the end of the transaction, preCommitExec is called to connect with local resources in a single transaction. This is optimal, and in fact the only possible option. Imagine that in a single transaction we create resources A and B, where A has a reference to B. If we used EstablishReferences, it would fail because B does not exist yet. By skipping this call for local resources, we fix this problem (see the sketch below).
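
The Save path described above might look roughly like this; the types and the establishReferences wrapper are hypothetical, only the bucket-per-Service-and-Region grouping and the local-versus-remote split follow the prose:

```go
package constraints

import "context"

// Reference and deploymentKey are illustrative, not the Goten definitions.
type Reference struct {
	Service string // e.g. "secrets.edgelq.com"
	Region  string // e.g. "us-west2"
	Name    string
}

type deploymentKey struct{ Service, Region string }

// establishReferences stands in for the schema-mixin EstablishReferences call
// toward one foreign Deployment (version-mapped shadow names included).
func establishReferences(ctx context.Context, key deploymentKey, refs []Reference) error {
	return nil // ...
}

// confirmNewReferences groups references that did not exist before into
// Service & Region buckets. Remote buckets get EstablishReferences during the
// transaction; local references are returned for preCommitExec to check in
// the same transaction.
func confirmNewReferences(ctx context.Context, self deploymentKey, newRefs []Reference) ([]Reference, error) {
	buckets := map[deploymentKey][]Reference{}
	for _, r := range newRefs {
		k := deploymentKey{Service: r.Service, Region: r.Region}
		buckets[k] = append(buckets[k], r)
	}
	var local []Reference
	for k, refs := range buckets {
		if k == self {
			local = append(local, refs...)
			continue
		}
		if err := establishReferences(ctx, k, refs); err != nil {
			return nil, err
		}
	}
	return local, nil
}
```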

When deleting, the Constraint store layer uses processDeletion, where we need to check if the resource is not blocked. We also may need to iterate over other back reference sources (foreign Deployments). When we do it, we must verify versioning, because other Deployments may use a lower version of our API, resulting in different resource shadow names.

For deletion, we also may trigger synchronous cascade deletions (or unsets).

Also, note something additional about deletions: they may delete the actual resource instance (unless we have a case like the async deletion annotation), but they won’t delete the ResourceShadow instance. Instead, they set the deletion time and put the Lifecycle into the DELETING state. This is a special signal that is distributed to all Deployments that have resources with references pointing at the deleted resource; this is how they execute any cascade deletions (or unsets). Only when all back-references are cleared is the remaining ResourceShadow finally deleted.

This is the last layer in Store objects, along with cache and core. Now you should see in full how the Store actually works, what it does, and what it interacts with (the actual database, the local cache, AND other Deployments). Using the schema mixin API, it achieves a “global” database across services, regions, and versions.

3.4.3.4 - Database Constraint Controller

Understanding the database constraint controller.

Each db-controller instance consists mainly of two Node manager modules; one of them is the DbConstraint Controller. Its tasks include the execution of all asynchronous tasks related to the local database (Deployment). There are three groups of tasks:

  • Handling of owned (by Deployment) resources in PENDING state (Lifecycle)
  • Handling of owned (by Deployment) resources in DELETING state (Lifecycle)
  • Handling of all subscribed (from current and each foreign Deployment) resources in the DELETING state (Lifecycle)

The module is found in the Goten repository, module runtime/db_constraint_ctrl. As with any other controller, it uses a Node Manager instance. This Node Manager, apart from running Nodes, must also keep a map of interesting deployments. What does this mean? We know that iam.edgelq.com imports meta.goten.com. Suppose we have regions us-west2 and eastus2. In that case, the Deployment of iam.edgelq.com in the us-west2 region will need to remember four Deployment instances:

  1. meta.goten.com in us-west2
  2. meta.goten.com in eastus2
  3. iam.edgelq.com in us-west2
  4. iam.edgelq.com in eastus2

This map is useful for the third task group: handling subscribed resources in the deleting state. As IAM imports meta and no other service, and also because IAM resources can reference each other, we can deduce the following: resources of iam.edgelq.com in region us-west2 can only reference resources from meta.goten.com and iam.edgelq.com, and only from regions us-west2 and eastus2. If we need to handle the cascade deletions (or unsets), then we need to watch these deployments (a sketch of this deduction follows below). See file node_manager.go in db_constraint_ctrl; we utilize EnvRegistry to get dynamic updates about interesting Deployments. In the function createAndRunInnerMgr we use the ServiceDescriptor instance to get information about the Services we import; this is how we know which deployments we need to watch.
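
A sketch of that deduction under the IAM example; the names are illustrative:

```go
package dbconstraintctrl

// deploymentKey is an illustrative Service + Region pair.
type deploymentKey struct{ Service, Region string }

// watchedDeployments lists the deployments this Deployment must watch for
// deletions: its own service plus every imported service, in every region.
func watchedDeployments(selfService string, imported []string, regions []string) []deploymentKey {
	services := append([]string{selfService}, imported...)
	var out []deploymentKey
	for _, svc := range services {
		for _, region := range regions {
			out = append(out, deploymentKey{Service: svc, Region: region})
		}
	}
	return out
}

// watchedDeployments("iam.edgelq.com", []string{"meta.goten.com"},
//     []string{"us-west2", "eastus2"})
// yields the four deployments listed above.
```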

As you can see, we utilize EnvRegistry to initiate DbConstraintCtrl correctly in the first place, and then we maintain it. We also handle version switches. If this happens, we stop the current inner node manager and deploy a new one.

When we watch other deployments, we are interested only in schema references, not meta ones. Meta references are harder to predict, because services don’t need to import each other. For this reason, responsibility for managing meta owner references is split between the Deployments on both sides, Meta Owner and Meta Ownee, as described by the flows.

The most important files in runtime/db_constraint_ctrl/node directory are:

  • owned_deleting_handler.go
  • owned_pending_handler.go
  • subscribed_deleting_handler.go

Those files handle all the asynchronous tasks described by many of the flows: establishing references to other resources (confirming/removing expired tentative blockades), meta owner reference management, and cascade deletions or unsets. I tried to document the steps they take and why, so refer to the code for more information.

For other notable elements in this module:

  • For subscribed deleting resource shadows, we have a wrapped watcher, which uses a different method than the standard WatchResourceShadows. The reason is that other Deployments may vary in the API versions they support. We use the dedicated schema mixin API method, WatchImportedServiceDeletions.
  • Subscribed deleting resource shadow events are sent to a common channel (in the controller_node.go file), but they are still grouped per Deployment (along with tasks).

Note that this module is also responsible for upgrading meta owner references after Deployment upgrades its current version field! This is an asynchronous process, and is executed by owned_pending_handler.go, function executeCheckMetaOwnees.

3.4.3.5 - Database Syncer Controller

Understanding the database syncer controller.

Another big db-controller module is the DbSyncer Controller. In the Goten repository, see the runtime/db_syncing_ctrl module. It is responsible for:

  • Maintaining the metadata.syncing field when the corresponding MultiRegionPolicy changes.
  • Syncing resources from other Deployments in the same Service for the current local database (read copies).
  • Syncing resources from other Deployments and current Deployment for Search storage.
  • Database upgrade of local Deployment

It mixes multi-version and multi-region features, but the reason is that we share many common structures and patterns regarding DB syncing here. Version syncing is still copying from one database to another, even if it is a bit special, since we need to “modify” the resources we are copying.

This module is interested in dynamic Deployment updates, but only for the current Service. See the node_manager.go file. We utilize EnvRegistry to get the current setup. Normally we initiate the inner node manager when we get SyncEvent, and then we support dynamic updates via DeploymentSetEvent and DeploymentRemovedEvent. We just need to verify that the Deployment belongs to our service. If it does, something changed there and we should refresh. Perhaps we could obtain the “previous” state, but it is fine to make a NOOP refresh too. Anyway, we need to ensure that the Node is aware of all foreign Deployments, because those are potential candidates to sync from. Now let’s dive into a single Node instance.

DbSyncingCtrl can be quite complex, even though it just copies resource instances across databases. First, check the ControllerNode struct in the controller_node.go file, which represents a single Node responsible for copying data. Breaking it down:

  • it may have two instances of VersionedStorage, one for the older and one for the newer API. Generally, we support only the last two versions for DbSyncer; more should not be needed, and it would make the already complex structure more difficult. This is necessary for database upgrades.
  • We have two instances of syncingMetaSet, one for each versioned storage. These contain SyncingMeta objects per multi-region policy-holder and resource type pair. An instance of syncingMetaSet is used by localDataSyncingNode instances. To be honest, if ControllerNode had just one localDataSyncingNode object, not many, then syncingMetaSet would be part of it!
  • We have then rangedLocalDataNodes and rangedRemoteDataNodes maps.

Now, object localDataSyncingNode is responsible for:

  • Maintaining syncing.metadata, it must use the syncingMetaSet passed instance for real-time updates.
  • Syncing local resources to Search storage (read copies).
  • Upgrading local database.

Then, remoteDataSyncingNode is responsible for:

  • Syncing resources from other Deployments in the same Service for the current local database (read copies).
  • Syncing resources from other Deployments for Search storage.

For each foreign Deployment, we will have separate remoteDataSyncingNode instances.

It is worth asking why we have a map of syncing nodes (local and remote) per shard range. The reason is that we split them so each node handles at most ten shards; often we still end up with maps of one sub-shard range. Why ten? Because in Firestore, which is a supported database, we can pass a maximum of ten shard numbers in a single request (filter)! Therefore, we need to make separate watch queries, and it’s easier to have separate nodes then. Now we can guarantee that a single local/remote node will be able to send its query successfully to the backend. However, because of this split, we needed to move syncingMetaSet away from localDataSyncingNode and put it directly in ControllerNode.

Since we have syncingMetaSet separated, let’s describe what it does first: basically, it observes all multi-region policy-holders a Service uses and computes SyncingMeta objects per policy-holder/resource type pair. For example, Service iam.edgelq.com has resources belonging to Service, Organization, and Project, so it watches these three resource types. Service devices.edgelq.com only uses Project, so it watches Project instances, and so on. It uses the ServiceDescriptor passed in the constructor to detect all policy-holders.

When syncingMetaSet runs, it collects the first snapshot of all SyncingMeta instances and then maintains it. It sends events to subscribers in real-time (See ConnectSyncingMetaUpdatesListener). This module is not responsible for updating the metadata.syncing field yet, but it is an important first step. It will be triggering localDataSyncingNode when new SyncingMeta is detected, so it can run its updates.

The next important module is the resVersionsSet object, defined in file res_versions_set.go. It is a central component in both local and remote nodes, so perhaps it is worth explaining how it works.

This set contains all resource names with their versions in a tree structure. By version, I don’t mean the API version of the resource, but the literal resource version; we have a field in metadata for that, metadata.resource_version. This value is a string, but it can contain only an integer that increments with every update. This is the basis for comparing resources across databases. How do we know that? Well, the “main” database owning a resource contains the newest version; the field metadata.resource_version is the highest there. However, we have other databases. For example, the search database may be separate, like Algolia; in that case, metadata.resource_version may be lower. We also have a syncing database (for example, across regions); the database in another region, which gets just read-only copies, also can at best match the origin database (a comparison helper is sketched after the list below). resVersionsSet has important functions:

  • SetSourceDbRes and DelSourceDbRes are called by original database owning resource.
  • SetSearchRes and DelSearchRes are called by the search database.
  • SetSyncDbRes and DelSyncDbRes are called by syncing database (for example cross-region syncing).
  • CollectMatchingResources collects all resource names matched by prefix. This is used by metadata.syncing updates. When policy-holder resource updates its MultiRegionPolicy, we will need to collect all resources subject to it!
  • CheckSourceDbSize is necessary for Firestore, which is known to be able to “lose” some deletions. If the size is incorrect, we will need to reset the source DB (original) and provide a snapshot.
  • SetSourceDbSyncFlag is used by the original DB to signal that it supplied all updates to resVersionsSet and now continues with real-time updates only.
  • Run: resVersionsSet is used in a multi-threaded environment, so it runs on a separate goroutine and uses Go channels for synchronization, with callbacks when necessary.
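
Since metadata.resource_version is a stringified, monotonically incremented integer, the staleness check reduces to an integer comparison; a minimal helper:

```go
package main

import (
	"fmt"
	"strconv"
)

// isStale reports whether a copy (search or cross-region read copy) lags the
// source database; a copy's version can at best match the source.
func isStale(copyVersion, sourceVersion string) (bool, error) {
	c, err := strconv.ParseInt(copyVersion, 10, 64)
	if err != nil {
		return false, err
	}
	s, err := strconv.ParseInt(sourceVersion, 10, 64)
	if err != nil {
		return false, err
	}
	return c < s, nil
}

func main() {
	stale, _ := isStale("41", "42")
	fmt.Println(stale) // true: the copy needs to be refreshed
}
```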

resVersionsSet also supports listeners where necessary; it triggers when the source DB updates/deletes a resource, or when the syncing database reaches equivalence with the original database. We don’t provide similar signals for the search DB, simply because we don’t need them, but we do for the syncing DB. We will explain why later.
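
Gathering the methods above into one illustrative interface (the actual signatures in res_versions_set.go may differ):

```go
package dbsyncingctrl

import "context"

// ResVersionsSet summarizes the role of resVersionsSet; the signatures are
// assumptions for illustration.
type ResVersionsSet interface {
	// Called by the original database owning the resources.
	SetSourceDbRes(name string, version int64)
	DelSourceDbRes(name string)
	// Called by the search database.
	SetSearchRes(name string, version int64)
	DelSearchRes(name string)
	// Called by the syncing database (e.g. cross-region syncing).
	SetSyncDbRes(name string, version int64)
	DelSyncDbRes(name string)
	// Collects all resource names matched by prefix, for metadata.syncing
	// updates after a MultiRegionPolicy change.
	CollectMatchingResources(prefix string) []string
	// Firestore can "lose" deletions; a size mismatch forces a source reset.
	CheckSourceDbSize(expected int) bool
	// Signals that the source DB supplied all updates and continues live-only.
	SetSourceDbSyncFlag()
	// Notifies when the syncing database reaches equivalence with the source.
	ConnectSyncReadyListener(listener func(resourceType string))
	// Runs on its own goroutine, synchronizing via channels and callbacks.
	Run(ctx context.Context)
}
```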

Now let’s talk about local and remote nodes, starting with local.

See the local_data_syncing_node.go file, which constructs all modules responsible for the mentioned tasks. First, analyze the newShardRangedLocalDataSyncingNode constructor up to the if needsVersioning condition, where we create modules for database versioning. Before this condition, we create the modules for search DB syncing and metadata.syncing maintenance. Note how we use the activeVsResVSet object (of type resVersionsSet): we connect it to the search syncer and syncing meta updater modules. For each resource type, we create an instance of a source DB watcher, which gets access to the resource version set. It should be clear now: the source DB, which is our local deployment’s database, keeps updating activeVsResVSet, which in turn passes updates to activeVsSS and activeVsMU. For activeVsMU, we also connect it to activeVsSyncMS, so we have the two necessary signal sources for maintaining the metadata.syncing object.

So, you should know now that:

  • search_syncer.go

    It is used to synchronize the Search database, for local resources in this case.

  • syncing_meta_updater.go

    It is used to synchronize the metadata.syncing field for all local resources.

  • base_syncer.go

    It is actually a common base implementation for search_syncer.go, but not limited to it.

Let’s dive deeper and explain the synchronization protocol between source and destination. Maybe you noticed that sourceDbWatcher contains two watchers, one for live data and one for snapshots. Also, why is there a wait before running a snapshot? Did you see that in the OnInitialized function of localDataSyncingNode, we run a snapshot only when we have received a sync signal? There are reasons for all of that. Let’s discuss the design here.

When the DbSyncingCtrl node instance is initiated for the first time, or when the shard range changes, we need to re-download all resources from the current or foreign database, compare them with the synced database, and execute the necessary creations, updates, and deletions. Moreover, we need to ask for a snapshot of the data on the destination database. This may take time; downloading potentially millions of items may not be the fastest operation. It means that whenever nodes change (upscaling, downscaling, reboots, whatever), we would need to suspend database syncing, possibly for a minute or longer, with no clear upper limit. If we don’t sync fast, this lag becomes quite visible to users.

It is better if we start separate watchers for live data directly. Then we sync from the live database to the destination (like the search DB), providing almost immediate sync most of the time. In the meantime, we collect a snapshot of the data from the destination database. See the base_syncer.go file, function synchronizeInitialData. When we are done with initialization, we trigger a signal that notifies the relevant instance (local or remote syncing node). In the file local_data_syncing_node.go, function OnInitialized, we check whether all components are ready, and then we run RunOrResetSnapshot for our source DB watchers. This is when the full snapshot is taken, and if there were any “missing” updates during the handover, we execute them. Ideally, there are none; the live watcher goes back by one minute when it starts watching, so some updates may even be repeated! But it is still necessary to provide this guarantee, of course. I hope this explains the protocol:

  • Live data immediately is copying records from source to destination database…
  • In the meantime, the destination database collects snapshots…
  • And when the snapshot is collected, we start the snapshot from the source database…
  • We execute anything missing and continue with live data only.

Another reason for this design, and why we use QueryWatcher instances (not Watchers), is simple: RAM. DbSyncingCtrl needs to watch practically all database updates and needs full resource bodies. Note we also use access.QueryWatcher instances in sourceDbWatcher. QueryWatcher is a lower-level object compared to a plain Watcher: it can’t support multiple queries, and it does not handle resets or snapshot size checks (Firestore only). This is also the reason why in ControllerNode we have a map of localDataSyncingNode instances per shard range; a Watcher would be able to split queries and hide this complexity. But QueryWatcher has a benefit:

  • It does not store watched resources in its internal memory!

Imagine millions of resources whose whole resource bodies are kept in RAM by a Watcher instance. That goes in the wrong direction; DbSyncingCtrl is supposed to be slim. In resVersionsSet we keep only version numbers and resource names, in tree form. We also try to compress all syncer modules into one place, so syncingMetaUpdater and searchUpdater live together. If there is some update, we don’t need to split it further and increase pressure on the infrastructure.

This concludes the local data syncing node discussion in terms of multi-region replication and search DB syncing for LOCAL nodes. We will describe remote data syncing nodes later in this doc. First, however, let’s continue with the local data syncing node and talk about its other task: database upgrades.

Object localDataSyncingNode now needs to consider up to four databases:

  1. Local database for API Version currently active (1)
  2. Local database for API Version to which we sync to (2)
  3. Local Search database for API Version currently active (3)
  4. Local Search database for API Version to which we sync to (4)

Let’s introduce two terms: the Active database and the Syncing database. When we are upgrading to a new API version, the Active database contains old data and the Syncing database contains new data. When we synchronize in the other direction, for rollback purposes (just in case), the Active database contains new data and the Syncing database contains old data.

And extra SyncingMetaUpdaters:

  • syncingMetaUpdater for the currently active version (5)
  • syncingMetaUpdater for synced version (6)

We need sync connections:

  • Point 1 to Point 2 (This is most important for database upgrade)
  • Point 1 to Point 3
  • Point 2 to Point 4
  • Point 1 to Point 5 (plus extra signal input from syncingMetaSet active instance)
  • Point 2 to Point 6 (plus extra signal input from the syncingMetaSet syncing instance)

This is complex and needs careful code writing, which is sometimes lacking here. We will need to carefully add some tests and clean up the code, but deadlines were deadlines.

Go back to the function newShardRangedLocalDataSyncingNode in local_data_syncing_node.go, and see the line with if needsVersioning and what is below it. This constructs the extra elements. First, note that we create a syncingVsResVSet object, another resVersionsSet. This set is responsible for syncing between the syncing database and the search store. It is also used to keep signaling the syncing version to syncingMetaUpdater, but I see now this was a mistake, because we don’t need this element: it is enough for the Active database to keep running its syncingMetaUpdater. We know those updates will be reflected in the syncing database, because we already sync in that direction! We do, however, need to keep the second, additional search database syncing: when we finish upgrading the database to the new version, we don’t want an empty search store from the first moment, which would not go unnoticed. Therefore, we have this search syncing for the “Syncing database” too.

But let’s focus on the most important bit: the actual database upgrade, from the Active to the Syncing local main storage. Find the function called newResourceVersioningSyncer and see where it is called. It receives access to the syncing database, and it gets access to the node.activeVsResVSet object, which contains resources from the active database. The object responsible for upgrading resources is resourceVersioningSyncer, in file resource_versioning_syncer.go. It works like the other “syncers” and inherits from the base syncer, but it also needs to transform resources. It uses transformers from the versioning packages. When it uses resVersionsSet, it calls SetSyncDbRes and DelSyncDbRes, to compare with the original database. We can safely require that metadata.resourceVersion stays the same between the old and new resource instances; transformation cannot change it. Because syncDb and searchDb are different, we are fine with having the search syncer and the versioning syncer use the same resource versions set.

Object resourceVersioningSyncer also performs extra ResourceShadow upgrades; transformed resources MAY have different references after the changes, therefore we need to refresh them! This makes this syncer even more special.

However, we have a little issue with ResourceShadow instances: they don’t have a metadata.syncing field, and they are only partially covered by resourceVersioningSyncer, which does not populate some fields, like back reference sources. As this is special, we need shadowsSyncer, defined in file shadows_versioning_syncer.go. It also synchronizes ResourceShadow instances, but covers the fields that cannot be populated by resourceVersioningSyncer.

During database version syncing, localDataSyncingNode receives signals (per resource type) when there is a synchronization event between the source database and the syncing database. See that we have the ConnectSyncReadyListener method in resVersionsSet. This is how syncDb (here, the syncing database!) notifies when the two databases match. This is used by localDataSyncingNode to coordinate the Deployment version switch. See the function runDbVersionSwitcher for the full procedure. This is basically the place where a Deployment can switch from one version to another. When this happens, all backend services flip their instances.

That is all about local data syncing nodes. Let us switch to remote nodes: a remote node (object remoteDataSyncingNode, file remote_data_syncing_node.go) syncs between the local database and a foreign regional one. It is at least simpler than the local node. It synchronizes:

  • From remote database to local database
  • From remote database to local search database

If there are two API versions, it is assumed that both regions may be in the middle of upgrading. Then, we have two extra syncs:

  • From the remote database in the other version to the local database
  • From remote database in the other version to local search database

When we are upgrading, new images must be deployed to the first region, then the second, the third, and so on, until the last region gets the new images. However, we must not switch the version of any region until all regions have the new images. While switching and deploying can each be done one by one, those two stages need separation. This is required for these nodes to work correctly. Also, if we switched the Deployment version in one region before upgrading images in the other regions, there is a high chance users would use the new API and see significant gaps in resources. Therefore, the versioning upgrade needs to be coordinated across regions too.

Again, we may be operating on four local databases and two remote APIs in total, but at least this is symmetric. Remote syncing nodes also don’t deal with mixins, so there is no ResourceShadow cross-DB syncing. If you study newShardRangedRemoteDataSyncingNode, you can see that it uses searchSyncer and dbSyncer (db_syncer.go).