
Automating feature environments pays off with faster feature delivery

Published in Technology

Written by

Mika Majakorpi
Chief Technologist

Mika Majakorpi works as Chief Technologist at Nitor. Mika has been there and done that in various architecture and development roles for years in the Finnish IT scene. Lately he's into Event Driven Architecture and data driven solutions.

Article

1 March 2022 · 5 min read

A fundamental tenet of DevOps is to automate everything you can. Those who take automation the furthest stand to gain significant benefits by eliminating human errors and reducing lead times for software releases. 

The CI/CD pipeline is where this automation happens. Automation related to various testing steps in the pipeline is a mature practice these days. Tests are run automatically when code is pushed to a repository and when a pre-production runtime environment is updated with new code.

However, there is a bottleneck in getting new features to this well-refined part of the pipeline. The issue surfaces when the team works on multiple features in parallel and each feature needs an environment where it can be tested in isolation before integration.

Development environment shortcomings

With cloud-based platform services, it’s no longer always feasible to spin up a complete local development environment on a developer’s laptop. Although solutions such as LocalStack can emulate cloud services locally, they don’t always cover all the needed platform services or apply to each specific use case. When things get complex enough, you’ll run out of options if you rely on local environments that mimic real cloud services.

A quick feedback loop is essential for efficient software development. Developers want to stay focused on the task. Integration test runs should be quick to complete after changing the code. Sharing environments can lead to an unclear baseline state for testing and arbitrary test failures.

Deploying a new environment from scratch solves the issues mentioned above, but it has traditionally meant going through a manual list of steps that could take a long time. This could, of course, be done once for each developer so that they’d have their own sandbox environments. But such environments tend to drift from the state defined in infrastructure code and cause issues in the long run.

Figure: visualisation of development environment shortcomings

Automation to the rescue

With infrastructure as code and improved tooling such as the AWS Cloud Development Kit (CDK), these problems are a thing of the past. Automated feature branch environments are becoming more and more common in the CI/CD pipeline. Here’s how we use them in our projects!

The basic idea is a simple 3-step process: 

  1. When a pull request is created in the code repository, a pipeline job is triggered to provision the infrastructure environment for the branch. To avoid creating an environment for every branch, we typically use a naming convention: only branches prefixed with feature/ or f/ get an environment (see the sketch after this list).

  2. The regular deployment pipeline runs after the environment provisioning completes and on each new push to the branch after that.

  3. When work on the feature concludes and the pull request is closed, an infrastructure deprovisioning job is triggered to destroy the branch environment.
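To make step 1 concrete, here is a minimal sketch of how the naming convention can map a branch to its own CDK stack. It assumes the pipeline passes the branch name in a BRANCH environment variable; the AppStack construct and its envName prop are hypothetical placeholders, not our actual project code.

```typescript
// bin/app.ts: a minimal sketch, assuming the pipeline exports the branch
// name as BRANCH. AppStack and its envName prop are hypothetical.
import { App } from 'aws-cdk-lib';
import { AppStack } from '../lib/app-stack';

const app = new App();

const branch = process.env.BRANCH ?? 'main';

// Step 1: only branches matching the naming convention get an environment.
const isFeature = /^(feature|f)\//.test(branch);

// Derive a stack name from the branch,
// e.g. feature/login-form -> app-login-form.
const envName = isFeature
  ? branch.split('/').pop()!.toLowerCase().replace(/[^a-z0-9-]/g, '-')
  : 'main';

new AppStack(app, `app-${envName}`, { envName });
```

The provisioning job can then run cdk deploy for the resulting stack name, and the deprovisioning job in step 3 runs cdk destroy against the same name, which keeps creation and teardown symmetric.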

Figure: the automated feature branch environment process

The pull request functions as a central location for peer review and decision making. Feedback from the deployment pipeline jobs, indicating whether deployments succeed and tests pass, is brought back to the pull request. Some workflows, such as software component dependency updates, can be fully automated by combining tools like Renovate with this feedback loop from the pipeline back to the code repository.

Infrastructure provisioning can get complex for many reasons, such as dependencies between environment resources and steps in build and test automation. An excellent way to bring structure to such complexity is to split a big deployment pipeline into separate stages. Infrastructure is bootstrapped stage by stage, with other tasks, like building application artefacts, done at the right moment once their dependencies have been provisioned.
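As a sketch of what this can look like with CDK Pipelines (the stage names, repository, connection ARN, and build commands below are all hypothetical):

```typescript
import { App, Stack, Stage, StageProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

// Each Stage groups the stacks that must exist before anything in the
// next stage can be deployed on top of them.
class SharedInfraStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    // networking, database and messaging stacks would be defined here
  }
}

class ApplicationStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    // application-level stacks would be defined here
  }
}

const app = new App();
const pipelineStack = new Stack(app, 'pipeline');

const pipeline = new CodePipeline(pipelineStack, 'Pipeline', {
  synth: new ShellStep('Synth', {
    input: CodePipelineSource.connection('my-org/my-app', 'main', {
      connectionArn: 'arn:aws:codestar-connections:eu-west-1:123456789012:connection/example',
    }),
    commands: ['npm ci', 'npm run build', 'npx cdk synth'],
  }),
});

// Shared infrastructure is bootstrapped first...
pipeline.addStage(new SharedInfraStage(app, 'Shared'));

// ...and the application stage deploys after it, with the artefact build
// running only once its dependencies exist.
pipeline.addStage(new ApplicationStage(app, 'App'), {
  pre: [new ShellStep('BuildArtefacts', { commands: ['npm run package'] })],
});
```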

One problem teams could face is increased cost. Provisioning the full stack of infrastructure for each environment can be unnecessarily expensive. Another negative aspect is the time it takes to provision complex resources. Databases and other costly resources might not need to be naively provisioned from scratch for each environment. Using schemas with branch-specific namespaces can help with cost optimisation and provide faster deployment times as you won’t have to wait for database resources to spin up during environment creation.
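Here is a sketch of the schema-per-branch idea, assuming a shared PostgreSQL instance reachable through a DATABASE_URL connection string and the node-postgres client; the naming scheme is a hypothetical example:

```typescript
import { Client } from 'pg';

// Normalise the branch name into a safe schema identifier,
// e.g. feature/login-form -> feature_login_form.
const branch = process.env.BRANCH ?? 'main';
const schema = branch.toLowerCase().replace(/[^a-z0-9]+/g, '_');

async function ensureSchema(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // Idempotent, so re-running on each push to the branch is safe. The
    // database instance itself lives in the shared environment and is
    // never recreated per branch; only the schema is branch-specific.
    await client.query(`CREATE SCHEMA IF NOT EXISTS "${schema}"`);
  } finally {
    await client.end();
  }
}

ensureSchema().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The matching deprovisioning job would drop the schema (DROP SCHEMA ... CASCADE) when the pull request closes, leaving the shared database instance untouched.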

To enable the reuse of expensive resources, we split the concept of an environment into two levels. A shared environment contains low-level infrastructure such as networking, databases, and messaging middleware, while an application environment is concerned with application-level resources. We can choose to deploy a branch with the full stack of shared and application infrastructure, or reuse a shared environment for multiple application deployments, each from a separate feature branch.
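In CDK terms, the split can be as simple as two stack types, with each application stack receiving shared resources as props. A sketch with hypothetical names:

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

// The long-lived shared environment: networking, databases, messaging.
class SharedEnvStack extends Stack {
  readonly vpc: ec2.Vpc;

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    this.vpc = new ec2.Vpc(this, 'Vpc');
    // database cluster, messaging middleware, etc. would also live here
  }
}

interface AppEnvStackProps extends StackProps {
  vpc: ec2.IVpc;
}

// A short-lived application environment, deployed per feature branch.
class AppEnvStack extends Stack {
  constructor(scope: Construct, id: string, props: AppEnvStackProps) {
    super(scope, id, props);
    // application-level resources (services, functions, queues) are
    // wired into the shared VPC through props.vpc
  }
}

const app = new App();
const shared = new SharedEnvStack(app, 'shared-env');

// Several feature branch deployments can reuse the same shared environment:
new AppEnvStack(app, 'app-login-form', { vpc: shared.vpc });
new AppEnvStack(app, 'app-checkout', { vpc: shared.vpc });
```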

Figure: the concept of an environment split into shared and application environments


Increased delivery speed

Achieving automated feature branch environments is a milestone for DevOps maturity. Once you have this capability, you’ll find that your lead time to release new features decreases significantly! You’ll also have the flexibility to create additional environments on demand, e.g. for load testing and other scenarios. Blue-green deployments in production are also within reach: it’s just a matter of switching over from one application environment to another on top of a shared environment that contains your production data.
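As one hypothetical way to implement the switch-over, a small script can repoint the production DNS record from the blue application environment to the green one. The sketch below uses the AWS SDK for JavaScript v3 in an ES module (top-level await); the hosted zone ID and record names are placeholders:

```typescript
import {
  ChangeResourceRecordSetsCommand,
  Route53Client,
} from '@aws-sdk/client-route-53';

const client = new Route53Client({});

// Repoint production traffic to the green application environment. Both
// environments run on top of the same shared environment, and therefore
// against the same production data.
await client.send(
  new ChangeResourceRecordSetsCommand({
    HostedZoneId: 'Z0123456789EXAMPLE', // placeholder
    ChangeBatch: {
      Comment: 'Blue-green switch to the green application environment',
      Changes: [
        {
          Action: 'UPSERT',
          ResourceRecordSet: {
            Name: 'app.example.com',
            Type: 'CNAME',
            TTL: 60,
            ResourceRecords: [{ Value: 'green.app.example.com' }],
          },
        },
      ],
    },
  }),
);
```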

The finesse of the deployment pipeline comes at the cost of increased effort to develop and maintain the code that implements it. It’s good to think about the level of automation and optimisation you want to achieve versus the time you end up spending on the pipeline itself. A balanced approach, based on the specific goals of each software product, is key to the successful use of automated feature environments.

Your tooling may vary

This pattern is generic and applicable to all cloud environments. The implementation details will differ based on the platforms and tools you use. On AWS, we recommend the CodeCommit, CodePipeline, and CodeBuild services for DevOps automation, orchestrated with the AWS Cloud Development Kit (CDK). Tooling is getting better, but expect to write a fair amount of scripting to implement automated feature branch environments for your specific needs.
