How to use GitOps to create microservice products


by Fabio Lonegro

GitOps helps development and operations teams work together. In the second of a two-part series, Fabio Lonegro, Technical Director at Deltatre, tells us how it helps our teams develop and deliver microservice applications securely and progressively


by Fabio Lonegro

GitOps helps development and operations teams work together. In the second of a two-part series, Fabio Lonegro, Technical Director at Deltatre, tells us how it helps our teams develop and deliver microservice applications securely and progressively

At Deltatre, we use the GitOps approach for many of our projects, for example, when creating a distributed system built from microservices. A microservice architecture splits an application into a collection of small services that work together to meet the final requirements.

Prototype: High-level setup

Let's say we create a distributed system where the frontend is a web app with a simple UI to save and retrieve saved items. The frontend talks to the backend over server-to-server (REST) APIs, and the backend uses Redis as its persistence layer.
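To make the setup concrete, here is a minimal sketch of how one of these components, the backend API, might be described to Kubernetes. The names, image, and Redis hostname are all hypothetical, not taken from the real project.

```yaml
# Hypothetical sketch: the backend API as a Kubernetes Deployment.
# The image, labels and Redis hostname are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: items-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: items-api
  template:
    metadata:
      labels:
        app: items-api
    spec:
      containers:
        - name: api
          image: example.azurecr.io/items-api:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_HOST        # the backing Redis service
              value: redis-master:6379
```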

Flux v2 is our GitOps controller. Flux, created by Weaveworks (who introduced the concept of the GitOps workflow), is built on top of the GitOps Toolkit: a set of components that work together to implement the GitOps workflow in the environment. We will also use a remote Azure Kubernetes Service (AKS) cluster.





Git repositories configuration

I used one code repository for both the frontend application and the APIs. This is not usually good practice, as there is a risk of coupling too many components, which is the opposite of a microservices architecture. However, in this case, as long as I am aware of this and keep the components separated, condensing both in the same place increases speed.

Next, every service has its own Helm chart repository: a collection of files that describe a related set of Kubernetes resources. There is also one GitOps repository, which sits between development and operations; all changes to it follow the GitOps approach.
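As an illustration of how the GitOps repository could reference a per-service Helm chart, here is a hedged sketch using Flux's HelmRepository and HelmRelease objects. The repository URL, chart name, and namespace are assumptions, not the project's real values.

```yaml
# Hypothetical sketch: the GitOps repo pointing Flux at a per-service
# Helm chart. The URL and chart name are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: items-api-charts
  namespace: flux-system
spec:
  interval: 5m
  url: https://example.github.io/items-api-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: items-api
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: items-api
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: items-api-charts
```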

The high level setup

Bootstrapping Flux

Flux is a set of controllers (also called operators) that run inside a Kubernetes cluster. To get started, Flux provides a Command-line Interface (CLI) that bootstraps the target cluster: it creates a connection with the Git repo and delivers the Flux operator and its components inside the cluster, using the GitOps approach itself.

Flux can monitor any Git repo but needs credentials to interact with the repo itself. We use GitHub, so we provide a GitHub token as an environment variable to ensure the Flux CLI can grant the correct permissions to the controller.
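After bootstrapping, Flux commits sync objects of roughly this shape to the repository and then reconciles the cluster against them. The repo URL, branch, and path below are placeholders, a sketch rather than the project's actual manifests.

```yaml
# Sketch of the sync objects created by `flux bootstrap`;
# URL, branch and path are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/gitops-repo
  ref:
    branch: main
  secretRef:
    name: flux-system   # credentials created from the GitHub token
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```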

Secrets Management

Next, I would like to introduce Redis authentication and enable an API key on the REST APIs. Even though these are not visible to the outside world, they add an extra level of security.

The frontend component needs the API key to talk to the APIs. The API needs the Redis credentials to talk to Redis, plus the API key to validate every incoming request. Finally, the Redis Helm chart needs the Redis credentials.

Secrets should only be provided to the components that need them. For example, it would be risky to provide Redis authentication to the frontend application. There are several ways to approach this.

The two we use are:

Sealed Secrets

Sealed Secrets is a Kubernetes operator. When delivered inside the cluster, it creates an RSA key pair: a public and private key.

Using a companion CLI (kubeseal), you can retrieve the public key from the Sealed Secrets operator. The public key is safe to store inside the Git repo and can be used to encrypt standard Kubernetes secrets.
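The result of that encryption is a SealedSecret resource that can live safely in Git. Here is a sketch of one holding the API key; the `encryptedData` value is a placeholder standing in for real ciphertext produced by `kubeseal`.

```yaml
# Hypothetical SealedSecret for the API key. The encryptedData value
# is a placeholder, not real ciphertext.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: api-key
  namespace: default
spec:
  encryptedData:
    API_KEY: AgBy3i4OJSWK...   # ciphertext, safe to commit to Git
  template:
    metadata:
      name: api-key
      namespace: default
```

Inside the cluster, the Sealed Secrets operator decrypts this with its private key and produces a standard Kubernetes Secret named `api-key`.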




HashiCorp Vault

Another approach is to use HashiCorp Vault, a tool for storing secrets. With Vault, it is possible to define access policies based on the identity of its clients.

Going back to our example, we want to prevent the frontend from having the Redis password. With HashiCorp Vault, we can combine our defined policies with additional authentication mechanisms. For example, we can identify people using identity systems such as Active Directory or Facebook, and we can authenticate services using a service account.

Other features of HashiCorp Vault include:

  • Dynamic secrets injection: We can inject secrets in various ways; for example, we use deployment annotations. When a pod with specific annotations starts, Vault can inject a sidecar container into the pod. This container performs the authentication workflow against Vault and requests a specific secret. For this to work, you need to specify a service account.
  • Secrets revocation: Secrets can also be revoked. HashiCorp Vault keeps a full audit log of all secret requests, making it possible to track when a secret was leaked.
  • Decoupling: Secrets can be decoupled from the applications cluster. For example, if you run the vault in a Kubernetes cluster, it doesn't have to be the same cluster where the applications are running; it is considered good practice to keep them separate.
  • No impact on application development: Applications don't know where secrets come from; they only require a specific environment variable or configuration file.
  • Cryptographic APIs: If you don't want to implement your own cryptography, HashiCorp Vault can provide cryptographic APIs.
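The dynamic-injection mechanism described in the first bullet is driven by annotations on the pod template. Here is a sketch of what that could look like for our API; the Vault role name, secret path, and service account are assumptions for illustration.

```yaml
# Fragment of a pod template using the Vault agent injector.
# Role name, secret path and service account are placeholders.
template:
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/role: "items-api"
      vault.hashicorp.com/agent-inject-secret-redis: "secret/data/items/redis"
  spec:
    serviceAccountName: items-api   # the identity Vault authenticates
```

When this pod starts, the injected Vault agent container authenticates using the `items-api` service account, fetches the secret, and writes it to a file the application can read.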

High-level workflow of how HashiCorp Vault can provide dynamic secrets to running applications

In a nutshell, HashiCorp Vault provides a way to manage who can access what. We can enforce policies on specific secrets, ranging from basic read-only policies to more complex ones.

Which approach is better?

Both can work together. We use HashiCorp Vault as the secrets provider because of the access policies we require. When HashiCorp Vault is initialised, it creates five cryptographic key shares used to unseal it. These shares can be encrypted with a key stored in a cloud-specific key store, for example, Azure Key Vault or AWS KMS. At least three of the shares are needed to open the vault.

When working with Kubernetes, pods are often killed and new ones created, and we don't want to provide three key shares by hand each time a new Vault pod starts. Instead, we can couple Vault with a cloud key store: a single cryptographic key, held in the key store, encrypts the keys needed to unseal. A new pod can then communicate with the key store to fetch and use that key to start automatically.
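This auto-unseal coupling is expressed in Vault's configuration. As a sketch, the Vault Helm chart lets you pass a `seal` stanza pointing at Azure Key Vault; the tenant ID, vault name, and key name below are placeholders, and in practice Vault also needs credentials to reach Azure.

```yaml
# Hypothetical Helm values enabling Vault auto-unseal via Azure Key Vault.
# All identifiers are placeholders.
server:
  standalone:
    config: |
      storage "file" {
        path = "/vault/data"
      }
      seal "azurekeyvault" {
        tenant_id  = "00000000-0000-0000-0000-000000000000"
        vault_name = "example-keyvault"
        key_name   = "vault-unseal-key"
      }
```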

To provide access to the vault, we still need a secret for the initial communication. For this, we use Sealed Secrets.






Progressive delivery

Finally, while you have to validate your software components during all phases of development, there also needs to be a safe way to roll out applications in production, whether in the form of a canary release, blue/green deployment, or A/B testing, for example.

To automate the rollout process, we use a tool called Flagger, a reconciliation controller that lives inside the cluster.

When there is a request to deliver a new version of an application, Flagger intercepts the request and applies traffic splitting. Flagger makes sure that the rollout of the application only completes when all KPIs are met; for example, latency should be below 5 milliseconds.

If the requirements are not met, Flagger rolls back the delivery. Otherwise, Flagger rolls out the new version and removes the old one.
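Those KPIs and traffic steps are declared in a Flagger Canary resource. Here is a hedged sketch for our API; the target name, port, step weights, and latency threshold are illustrative values, not the project's real configuration.

```yaml
# Hypothetical Flagger Canary with a latency KPI.
# Names, port and thresholds are illustrative.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: items-api
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: items-api
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 5        # failed checks before rollback
    stepWeight: 10      # traffic shifted to the canary per step
    maxWeight: 50
    metrics:
      - name: request-duration
        thresholdRange:
          max: 500      # latency ceiling in milliseconds
        interval: 1m
```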

In conclusion, GitOps provides a way for teams to collaborate and release applications progressively and securely. Git's version control system ensures that all team members work with the correct version of the application and can track and manage changes easily.

