Setting up Yandex Cloud Provider with Terraform and Terragrunt

Here's a practical guide to managing Terraform provider configurations for different Yandex Cloud regions with Terragrunt, which generates provider configs dynamically.

What you'll need

  • Terraform >= 1.9.7
  • Terragrunt >= 0.67.16
  • Yandex Cloud Provider >= 0.129.0
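
If you want Terragrunt to enforce these minimum versions as well, the root terragrunt.hcl can pin them; a minimal sketch using Terragrunt's built-in version-constraint settings:

   # root terragrunt.hcl
   terraform_version_constraint  = ">= 1.9.7"
   terragrunt_version_constraint = ">= 0.67.16"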

Setup Steps

Let's look at how to use Terragrunt to dynamically create provider configs for Yandex Cloud. I'll break this down into digestible pieces:

  1. Basic provider setup

    First, we'll set up the base Yandex Cloud config in the root terragrunt.hcl. This will automatically generate versions.tf for each module:

   locals {
     tf_providers = {
       yandex = ">= 0.129.0"
     }
   }

   generate "providers_versions" {
     path      = "versions.tf"
     if_exists = "overwrite"
     contents  = <

  2. Region settings

    For regions such as the newly created KZ region, additional endpoints have to be specified because the provider's defaults target the RU region. You can set them at the project level, for example in env.hcl, and have providers.tf generated dynamically for each module:

   locals {
     cloud_id         = "SOME_ID"
     folder_id        = "SOME_ID"
     sa_key_file      = "${get_repo_root()}/key.json"
     endpoint         = "api.yandexcloud.kz:443"       # Region-Specific
     storage_endpoint = "storage.yandexcloud.kz"       # Region-Specific
   }

   generate "providers_configs" {
     path      = "providers.tf"
     if_exists = "overwrite_terragrunt"
     contents  = <
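
    If the locals above live in a separate env.hcl at the project level while the generate block sits in the root terragrunt.hcl, the two can be wired together with read_terragrunt_config; a minimal sketch of that split (the env local name is just an example):

   # root terragrunt.hcl: pull the region settings out of the nearest env.hcl
   locals {
     env = read_terragrunt_config(find_in_parent_folders("env.hcl"))
   }
   # then reference the values as local.env.locals.cloud_id, local.env.locals.endpoint, etc.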

  3. Additional providers

    If you're working with Kubernetes / kubectl / Helm in Terraform, you'll need these additional provider configs to manage your cluster. The simplest solution is to pass cluster_id from a Terragrunt dependency into the called module:

   dependencies {
     paths = ["path/to/your/mks"]
   }

   dependency "mks" {
     config_path = "path/to/your/mks"

     mock_outputs_allowed_terraform_commands = ["init", "validate", "plan", "destroy"]
     mock_outputs_merge_strategy_with_state  = "shallow"
     mock_outputs = {
       cluster_id = "cluster_id"
     }
   }

   terraform {
     source = "path/to/your/module"
   }

   inputs = {
     cluster_id = dependency.mks.outputs.cluster_id
     # ...
   }

Then use data sources in the module to configure the providers:

variable "cluster_id" {
  type        = string
  default     = null
  description = "Managed Kubernetes Service cluster ID"
}

data "yandex_kubernetes_cluster" "this" {
  cluster_id = var.cluster_id
}

data "yandex_client_config" "this" {}

provider "kubernetes" {
  host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}

provider "helm" {
  kubernetes {
    host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
    cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
    token                  = data.yandex_client_config.this.iam_token
  }
}

provider "kubectl" {
  host                   = data.yandex_kubernetes_cluster.this.master.0.external_v4_endpoint
  cluster_ca_certificate = data.yandex_kubernetes_cluster.this.master.0.cluster_ca_certificate
  token                  = data.yandex_client_config.this.iam_token
}
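
For Terraform to resolve these providers, the module (or the versions.tf generated in step 1) also needs matching required_providers entries. A minimal sketch; the kubectl provider is a community one, and gavinbunney/kubectl is assumed here as its source:

terraform {
  required_providers {
    yandex = {
      source = "yandex-cloud/yandex"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubectl = {
      # assumption: community provider; alekc/kubectl is a maintained fork
      source = "gavinbunney/kubectl"
    }
  }
}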

Notes

  • Using Terragrunt for configuration management:

    Terragrunt simplifies configuration management for multiple environments by dynamically generating provider configurations via the generate block in the .hcl files. This setup allows for easy handling of multi-region deployments from a single configuration source.

  • Set up the JSON key for Terragrunt:

    To access Yandex Cloud resources, place the service account's JSON key in the root directory of your project and don't forget to add it to .gitignore. Alternatively, you can use a static access key.

  • Configuring the module:

    Remember that even if you don't maintain the Terraform module itself, you can almost always override its configuration with a generate block when calling the module from Terragrunt, as in the sketch below.
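
    A minimal sketch of such an override: generating a versions_override.tf alongside the called module to pin the Yandex provider (Terraform merges files ending in _override.tf into the module's own configuration):

   generate "versions_override" {
     path      = "versions_override.tf"
     if_exists = "overwrite_terragrunt"
     contents  = <<-EOF
       terraform {
         required_providers {
           yandex = {
             source  = "yandex-cloud/yandex"
             version = ">= 0.129.0"
           }
         }
       }
     EOF
   }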

Conclusion

This setup gives you a clean way to manage Terraform configs across different Yandex Cloud regions. It handles authentication properly and works well whether you're just using basic cloud resources or diving into Kubernetes and Helm deployments.
