
Codified DevOps Culture: A look at our open-source multi-cloud deployment toolkit

Published under Technology

Author

Mika Majakorpi
Chief Technologist

Mika Majakorpi works as Chief Technologist at Nitor. Mika has been there and done that in various architecture and development roles for years in the Finnish IT scene. Lately he's into Event-Driven Architecture and data-driven solutions.

Article

5 April 2022 · 7 min read

DevOps is first and foremost a cultural trait. It’s easy for companies to say they are doing DevOps, but the real proof is in the culture of each organisation.

Like many aspects of organisational culture, DevOps at Nitor started small and eventually reached a point where scaling it up benefited from a systematic description. We codified our take on infrastructure code and automated deployments by writing an open-source CLI toolkit. Let’s take a look at Nameless Deploy Tools (ndt) and how the project has evolved from an AWS tool to a multi-cloud toolkit.

Each cloud platform has its native tooling for infrastructure provisioning: AWS has CloudFormation and the Cloud Development Kit (CDK), Azure has Resource Manager and Google has Cloud Deployment Manager. Then there are multi-cloud tools like Terraform and Serverless Framework. Working with and mixing & matching these within each cloud is fair enough, but there are often glitches, small shortcomings or flat-out oversights that lead to complicated workarounds for deployments. There are also as many ways to organise a code repository and the deployment pipeline as there are developers. It’s for these kinds of situations that we developed ndt: to help us get things done more efficiently and to reduce the cognitive load of passing DevOps responsibilities to others.

Getting started on the cloud typically means choosing a preferred cloud provider and going all-in on their offering for efficiency. There are rarely compelling reasons to start with a multi-cloud setup. But as organisations expand their cloud usage, it’s only a matter of time before they end up in a multi-cloud scenario by way of acquisitions, siloed business units or internal startups choosing their own platform. It doesn’t have to be one of these “victim of circumstance” scenarios at all: maybe one of the clouds they didn’t choose to begin with simply offers a compelling product for a specific need.

Automation helps manage the increasing complexity of cloud infrastructure. Multi-cloud adds to that complexity, but that’s where ndt comes to the rescue. Here’s how we work with infrastructure across clouds using ndt.

Developer Experience

The following are examples of ways ndt makes DevOps work easier:

  • Use your favourite tool to deploy to each cloud. You can choose from the following at the time of writing:

    • AWS: CloudFormation, CDK, Serverless Framework, Terraform

    • Azure: Resource Manager, Bicep, Serverless Framework, Terraform

    • Google: Serverless Framework, Terraform

  • Dynamic parameter lookup across clouds or between deployed components within the same cloud. No need to copy-paste parameter values or write separate query scripts (see the example after this list).

  • Tab completion for commands makes it easier to type them correctly.

  • Credentials and session management: ndt can automatically activate the correct cloud credentials, for example when you enter a repository directory on the command line.

  • Opinionated take on directory structure for infrastructure code. This enables quick uptake of projects by people who are familiar with the tooling.
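
As an example of dynamic parameter lookup, here is how the demo project later in this post pulls an Azure deployment output into an AWS project’s infra.properties file:

    VISION_API_KEY={AzRef: {component: vision, azure: vision, paramName: visionApiKey}}

ndt resolves the reference at deployment time, so the value never needs to be copied by hand.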

Deployment Pipeline

For pipeline automation, ndt has the following key features:

  • A branch-to-environment workflow model makes feature development environments easy to use, and enables release promotion by merging as well as advanced control of environment-specific changes.

  • Dynamic parameter lookup means there is one source of truth for parameter values.

  • AWS Organizations based account creation for landing zone management.

  • Automatic build and deployment job creation for Jenkins and AWS CodeBuild based on the ndt directory structure. Of course, you could use ndt with any other CI/CD pipeline solution like Azure DevOps/GitHub Actions or Google Cloud Build too.

These features accelerate setting up a fully automated pipeline following the thinking previously discussed in our blog post on automated feature environments.

The preferred working model for large projects is as follows:

  • Feature branches and the main branch are mapped to a development environment, with developers sharing the same dependency resources and APIs.

  • Pipeline milestone environments (integration, staging, QA, production) are mapped to specifically named branches. Changes are merged to the main branch via a peer-reviewed merge request and deployed to the dev environment. The same process is repeated one environment at a time to the milestone environments until the changes reach production.

Small projects can make do with dev and main branches that map to a testing environment and production respectively.
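
In practice, the mapping shows up as branch-specific property files. Here is a minimal sketch for the small-project model, with hypothetical property names and values (the branch file mechanism itself is described in step 5 of the demo below):

    # infra.properties: defaults shared by all branches (hypothetical example)
    paramEnvName=dev
    # infra-main.properties: overrides applied when deploying from the main branch
    paramEnvName=prod

ndt picks up the infra-${branchName}.properties file matching the branch being deployed, so promoting a change to production is just a merge from dev to main.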

That’s a good list of ndt benefits, but enough theory already! Let’s take a look at a real proof of concept multi-cloud deployment with ndt!

Demo App

Our demo app nFinder is a game concept where you take photos of things and get scored based on what the Azure Computer Vision API recognizes in the picture. It’s deployed across AWS, Azure and Google Cloud as a tech demo of ndt’s multi-cloud capability. The game UI is rather minimalistic, but we’re open to pull requests! The repo with the app code and infra code is on GitHub at https://github.com/NitorCreations/nfinder.

The demo is set up as a monorepo with application and infrastructure code all in one place. This works well to an extent, but you might want to consider splitting the app and infra into separate repos if you need a build-once approach for your deployables.

Here’s the high-level architecture:

NDT architecture illustration

The architecture translates to the following ndt directory structure:

ndt repository structure diagram

The first-level directories are called components in ndt. Components can contain multiple bake or deployment projects, or subcomponents. Bake projects produce a Docker or virtual machine image, while deployment projects provision cloud infrastructure and deploy applications.
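
Based on the deploy commands in the steps below, the nFinder layout looks roughly like this (a sketch; the exact structure is in the repo):

    nfinder/
      infra.properties          # main ndt configuration file
      vision/
        azure-vision/           # Azure Resource Manager & Bicep project for the Vision API
      firebase/
        terraform-firebase/     # Terraform project for the Firebase app
      aws/
        cdk-api/                # AWS CDK project for the backend
        cdk-frontend/           # AWS CDK project for the frontend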

Here’s how nFinder is deployed:

  1. Install ndt with pipx, for example:
    pipx install nameless-deploy-tools

  2. Install AWS CDK, Terraform and Azure CLI. These are the deployment tools ndt depends on for this project.
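
    One way to install these, assuming macOS with Homebrew and Node.js available (use your platform’s preferred method otherwise):
    npm install -g aws-cdk
    brew tap hashicorp/tap && brew install hashicorp/tap/terraform
    brew install azure-cli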

  3. Clone the repo:
    git clone https://github.com/NitorCreations/nfinder

  4. Set up an active CLI session for each cloud. ndt supports credential profiles for AWS and Azure, which helps here:
    ndt enable-profile -a my-aws-profile
    ndt enable-profile -s my-azure-subscription

    Configuring the profiles is a separate step described in the ndt documentation. For Google, activate a service account for your CLI session:
    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json

    For AWS and Azure, this step can be automated to happen when you enter the project directory.

  5. Set properties in infra.properties to match your cloud environment. This is the main ndt configuration file. Further properties can be set and overridden in an infra.properties file in the component and project directories. Branch-specific values can be given in infra-${branchName}.properties files at all levels.

    The demo setup assumes you have an existing Google billing account and a folder where your service account is allowed to create a project. For AWS, it assumes you have an existing domain and a Route 53 hosted zone.
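
    For illustration, the relevant properties could look something like this (hypothetical key names and values; the actual keys are defined in the nfinder repo):
    # hypothetical example values
    paramDomainName=nfinder.example.com
    paramHostedZone=example.com.
    paramGcpBillingAccount=012345-6789AB-CDEF01
    paramGcpFolder=folders/123456789012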

  6. Deploy the Azure Vision API:
    ndt deploy-azure vision vision

  7. Deploy the Firebase app:
    ndt deploy-terraform firebase firebase

    Google’s infrastructure-as-code support is a bit lacking here. You’ll need to go to your Firebase web console and enable Google authentication support for this project. There’s no way to do this programmatically with a public API!

  8. Deploy the AWS backend:
    ndt deploy-cdk aws api

    Note how the API deployment references an API key output value from the Vision API deployment using an ndt AzRef entry in aws/cdk-api/infra.properties:

    VISION_API_KEY={AzRef: {component: vision, azure: vision, paramName: visionApiKey}}

    The API key is set as an environment variable for the image handler Lambda function so that it can access the Azure Computer Vision API.

  9. Deploy the AWS frontend:
    ndt deploy-cdk aws frontend

That’s it: you should have nFinder available at the domain name you chose in infra.properties! If you’d rather see the app we deployed, go have a look at https://nfinder.nitorio.us!

Takeaways from the demo and future developments

The demo gives a glimpse of how infra projects are structured and how they are deployed with similar commands regardless of the underlying deployment tool you choose for each project. It also shows parameter value references between infra projects that use different deployment tools: the Vision API key is passed from Azure Bicep & Resource Manager outputs to an AWS CDK project, and the frontend CDK project references the API CDK project for the S3 bucket name.

We like how ndt unifies infrastructure code under a similar structure for each cloud and each supported deployment technology, and how it makes passing parameter values between components and projects easy!

AWS is best supported at the moment, with Azure and Google support developing further as we need them. One other thing we are discussing is adding an orchestration capability where ndt would understand dependencies based on cross-project references and deploy projects in the right order. For now, this is done by writing a deployment script that runs ndt per project, but adding the dependency tree concept to ndt would enable easier feature environment setup, for example.
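
For the nFinder demo, such a script is just the deploy commands from above run in dependency order, for example:

    #!/usr/bin/env bash
    set -e
    # Vision API first: the AWS backend references its API key output
    ndt deploy-azure vision vision
    ndt deploy-terraform firebase firebase
    # Backend before frontend: the frontend references the backend's S3 bucket
    ndt deploy-cdk aws api
    ndt deploy-cdk aws frontend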

Acknowledgements

Our toolkit has been around for many years, and many people have contributed to it along the way. Thanks to all the contributors, and a special shout-out to Pasi Niemi with 1478 commits at the time of writing!


Our recipe for success with DevOps

We live and breathe DevOps both in our organisational culture and technical viewpoints. We’re partners with the major cloud platforms AWS, Azure and Google Cloud, and ready to help you with their respective DevOps tooling, or others from Jenkins to New Relic. We help our clients choose the right tools for the job on a case-by-case basis, rather than advocating any specific technology.