Database Constraint Controller

Understanding the database constraint controller.

Each db-controller instance consists mainly of two Node Manager modules. One of them is the DbConstraint Controller. Its tasks include the execution of all asynchronous tasks related to the local database (Deployment). There are three groups of tasks (a short sketch in Go follows the list):

  • Handling of owned (by Deployment) resources in the PENDING state (Lifecycle)
  • Handling of owned (by Deployment) resources in the DELETING state (Lifecycle)
  • Handling of all subscribed (from current and each foreign Deployment) resources in the DELETING state (Lifecycle)
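
To make the grouping concrete, here is a minimal sketch of how tasks could be routed into these three groups. All types and names here (ResourceShadowTask, LifecycleState, dispatch) are hypothetical illustrations, not the actual Goten code:

```go
package main

import "fmt"

// LifecycleState mirrors the PENDING/DELETING states mentioned above.
type LifecycleState int

const (
	StatePending LifecycleState = iota
	StateDeleting
)

// ResourceShadowTask is a hypothetical unit of asynchronous work.
type ResourceShadowTask struct {
	Name  string
	Owned bool // owned by the local Deployment, or subscribed from another one
	State LifecycleState
}

// dispatch routes a task to one of the three groups described above.
func dispatch(task ResourceShadowTask) {
	switch {
	case task.Owned && task.State == StatePending:
		fmt.Println("owned PENDING handler:", task.Name)
	case task.Owned && task.State == StateDeleting:
		fmt.Println("owned DELETING handler:", task.Name)
	case !task.Owned && task.State == StateDeleting:
		fmt.Println("subscribed DELETING handler:", task.Name)
	}
}

func main() {
	dispatch(ResourceShadowTask{Name: "owned-resource-1", Owned: true, State: StatePending})
	dispatch(ResourceShadowTask{Name: "foreign-resource-2", Owned: false, State: StateDeleting})
}
```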

The module is found in the Goten repository, in module runtime/db_constraint_ctrl. As with any other controller, it uses a Node Manager instance. This Node Manager, apart from running Nodes, must also keep a map of interested Deployments. What does this mean? We know that iam.edgelq.com imports meta.goten.com. Suppose we have regions us-west2 and eastus2. In that case, the Deployment of iam.edgelq.com in the us-west2 region will need to remember four Deployment instances (a minimal sketch of this map follows the list):

  1. meta.goten.com in us-west2
  2. meta.goten.com in eastus2
  3. iam.edgelq.com in us-west2
  4. iam.edgelq.com in eastus2
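
A minimal sketch of such a map, using a hypothetical DeploymentKey type rather than the real Node Manager structures:

```go
package main

import "fmt"

// DeploymentKey identifies a Deployment by service and region.
type DeploymentKey struct {
	Service string
	Region  string
}

func main() {
	// The four Deployments remembered by iam.edgelq.com in us-west2.
	interested := map[DeploymentKey]bool{
		{Service: "meta.goten.com", Region: "us-west2"}: true,
		{Service: "meta.goten.com", Region: "eastus2"}:  true,
		{Service: "iam.edgelq.com", Region: "us-west2"}: true,
		{Service: "iam.edgelq.com", Region: "eastus2"}:  true,
	}
	for dep := range interested {
		fmt.Printf("interested in %s @ %s\n", dep.Service, dep.Region)
	}
}
```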

This map is useful for the third task group: handling of subscribed resources in the DELETING state. Since IAM imports meta and no other service, and since IAM resources can also reference each other, we can deduce the following: resources of iam.edgelq.com in region us-west2 can only reference resources from meta.goten.com and iam.edgelq.com, and only from regions us-west2 and eastus2. If we need to handle cascade deletions (or unsets), then we need to watch these Deployments. See the file node_manager.go in db_constraint_ctrl: we utilize EnvRegistry to get dynamic updates about interesting Deployments. In the function createAndRunInnerMgr we use the ServiceDescriptor instance to get information about the Services we import; this is how we know which Deployments we need to watch.
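The idea can be sketched as follows; the types below (Deployment, watchedDeployments) are hypothetical stand-ins for what node_manager.go derives from the ServiceDescriptor and EnvRegistry:

```go
package main

import "fmt"

type Deployment struct {
	Service string
	Region  string
}

// watchedDeployments keeps every known Deployment whose service is either the
// local service or one of the services it imports (as the descriptor reports).
func watchedDeployments(local string, imported []string, known []Deployment) []Deployment {
	relevant := map[string]bool{local: true}
	for _, svc := range imported {
		relevant[svc] = true
	}
	var out []Deployment
	for _, dep := range known {
		if relevant[dep.Service] {
			out = append(out, dep)
		}
	}
	return out
}

func main() {
	known := []Deployment{
		{"meta.goten.com", "us-west2"}, {"meta.goten.com", "eastus2"},
		{"iam.edgelq.com", "us-west2"}, {"iam.edgelq.com", "eastus2"},
		{"other.example.com", "us-west2"}, // not imported, so not watched
	}
	for _, dep := range watchedDeployments("iam.edgelq.com", []string{"meta.goten.com"}, known) {
		fmt.Printf("watch %s @ %s\n", dep.Service, dep.Region)
	}
}
```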

As you can see, we utilize EnvRegistry to initiate DbConstraintCtrl correctly in the first place, and then to maintain it. We also handle version switches: when one happens, we stop the current inner node manager and deploy a new one.
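A minimal sketch of that maintenance loop, with hypothetical event and manager types (the real EnvRegistry API differs):

```go
package main

import "fmt"

// envEvent is a hypothetical stand-in for an EnvRegistry notification.
type envEvent struct {
	Kind    string // "deployment-updated" or "version-switched"
	Version string
}

type innerNodeManager struct{ version string }

func (m *innerNodeManager) run()  { fmt.Println("inner node manager running for", m.version) }
func (m *innerNodeManager) stop() { fmt.Println("inner node manager stopped for", m.version) }

func main() {
	current := &innerNodeManager{version: "v1"}
	current.run()

	for _, ev := range []envEvent{
		{Kind: "deployment-updated", Version: "v1"},
		{Kind: "version-switched", Version: "v2"},
	} {
		switch ev.Kind {
		case "deployment-updated":
			fmt.Println("refreshing the map of interested Deployments")
		case "version-switched":
			current.stop() // drop the old inner node manager...
			current = &innerNodeManager{version: ev.Version}
			current.run() // ...and deploy a new one for the new version
		}
	}
}
```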

When we watch other Deployments, we are interested only in schema references, not meta references. Meta references are more difficult to predict because services don't need to import each other. For this reason, responsibility for managing meta owner references is split between the Deployments on both sides, Meta Owner and Meta Ownee, as described by the flows.

The most important files in runtime/db_constraint_ctrl/node directory are:

  • owned_deleting_handler.go
  • owned_pending_handler.go
  • subscribed_deleting_handler.go

These files handle all the asynchronous tasks described by many of the flows: establishing references to other resources (confirming or removing expired tentative blockades), managing meta owner references, and performing cascade deletions or unsets. The steps they perform, and the reasons behind them, are documented there, so refer to the code for more information.
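As one illustration of the owned-PENDING group, here is a minimal sketch of the tentative-blockade decision, with hypothetical types that simplify what owned_pending_handler.go actually does: a blockade is confirmed once the referencing resource is committed, removed once it expires, and left alone otherwise.

```go
package main

import (
	"fmt"
	"time"
)

// tentativeBlockade is a hypothetical record of a reference being established.
type tentativeBlockade struct {
	Target    string    // referenced resource protected from deletion
	PlacedAt  time.Time // when the tentative blockade was written
	Committed bool      // has the referencing resource been saved successfully?
}

func handlePendingBlockade(b tentativeBlockade, now time.Time, ttl time.Duration) string {
	switch {
	case b.Committed:
		return "confirm blockade on " + b.Target
	case now.Sub(b.PlacedAt) > ttl:
		return "remove expired blockade on " + b.Target
	default:
		return "keep waiting for " + b.Target
	}
}

func main() {
	now := time.Now()
	fmt.Println(handlePendingBlockade(tentativeBlockade{"referenced-resource-a", now.Add(-time.Minute), true}, now, 5*time.Minute))
	fmt.Println(handlePendingBlockade(tentativeBlockade{"referenced-resource-b", now.Add(-time.Hour), false}, now, 5*time.Minute))
}
```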

Other notable elements in this module:

  • For subscribed deleting resource shadows, we have a wrapped watcher, which uses a different method than the standard WatchResourceShadows. The reason is that other Deployments may differ in the API versions they support. We use the dedicated method of the schema mixin API, WatchImportedServiceDeletions.
  • Subscribed deleting resource shadow events are sent to a common channel (in the controller_node.go file), but they are still grouped per Deployment (along with tasks); see the sketch after this list.
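
A minimal sketch of the common-channel pattern from the second point, using hypothetical event types rather than the real wrapped watcher:

```go
package main

import (
	"fmt"
	"sync"
)

// deletionEvent carries its source Deployment, so work stays grouped per Deployment.
type deletionEvent struct {
	Deployment string
	Resource   string
}

func main() {
	events := make(chan deletionEvent)
	var wg sync.WaitGroup

	// One goroutine per watched Deployment stands in for one wrapped watcher.
	for _, dep := range []string{"meta.goten.com@us-west2", "iam.edgelq.com@eastus2"} {
		wg.Add(1)
		go func(dep string) {
			defer wg.Done()
			events <- deletionEvent{Deployment: dep, Resource: dep + "/deleting-resource"}
		}(dep)
	}
	go func() { wg.Wait(); close(events) }()

	// Single consumer on the common channel, grouping tasks per source Deployment.
	perDeployment := map[string][]string{}
	for ev := range events {
		perDeployment[ev.Deployment] = append(perDeployment[ev.Deployment], ev.Resource)
	}
	for dep, resources := range perDeployment {
		fmt.Println(dep, "->", resources)
	}
}
```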

Note that this module is also responsible for upgrading meta owner references after a Deployment upgrades its current version field! This is an asynchronous process, executed by owned_pending_handler.go in the function executeCheckMetaOwnees.
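
The gist of that fix-up can be sketched as follows; the types and the rewrite rule below are hypothetical simplifications, not the real executeCheckMetaOwnees:

```go
package main

import "fmt"

// metaOwnerRef is a hypothetical, simplified meta owner reference.
type metaOwnerRef struct {
	OwnerName  string
	APIVersion string
}

// upgradeOwnerRefs rewrites references still pointing at an older API version
// so that they match the Deployment's new current version.
func upgradeOwnerRefs(refs []metaOwnerRef, currentVersion string) []metaOwnerRef {
	out := make([]metaOwnerRef, 0, len(refs))
	for _, r := range refs {
		if r.APIVersion != currentVersion {
			r.APIVersion = currentVersion // asynchronous fix-up after the version switch
		}
		out = append(out, r)
	}
	return out
}

func main() {
	refs := []metaOwnerRef{
		{OwnerName: "owner-resource-1", APIVersion: "v1"},
		{OwnerName: "owner-resource-2", APIVersion: "v2"},
	}
	fmt.Println(upgradeOwnerRefs(refs, "v2"))
}
```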