
Get Started With Terraform and Cisco Modeling Labs


Infrastructure as Code (IaC) is a hot topic these days, and the IaC tool of choice is Terraform by HashiCorp. Terraform is a cloud provisioning product that provides infrastructure for any application. You can choose from a long list of providers for any target platform.

Terraform's list of providers now includes Cisco Modeling Labs (CML) 2, so we can use Terraform to control virtual network infrastructure running on CML2. Keep reading to learn how to get started with Terraform and CML, from the initial configuration through its advanced features.

How does Terraform work? 

Terraform uses code to describe the desired state of the required infrastructure and to track this state over the infrastructure's lifetime. This code is written in HashiCorp Configuration Language (HCL). When the code changes, Terraform figures out all the differences (state changes) needed to update the infrastructure and reach the new state. Eventually, when the infrastructure is no longer needed, Terraform can destroy it.
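
As a minimal sketch of this declarative approach (using the CML2 provider introduced below), the following HCL describes a single empty lab; applying it creates the lab, and destroying the configuration removes it again:

# Desired state: one (empty) CML2 lab exists.
# Terraform compares this description with its recorded state and
# creates, updates, or destroys the lab so that reality matches.
resource "cml2_lab" "demo" {
}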

A Terraform provider offers resources (things that have state) and data sources (read-only data without state).

In CML2 terms, examples include:

  • Resources: labs, nodes, links
  • Data sources: labs, nodes, and links, as well as available node and image definitions, available bridges for external connectors, user lists and groups, and so on.

NOTE: At the moment, only a few data sources are implemented.

Getting started with Terraform and CML

To get started with Terraform and CML, you'll need the following:

  • A running CML2 instance that you can reach over the network, plus credentials for it
  • The Terraform CLI installed on your workstation (see the references at the end of this post)

Define and initialize a workspace

First, we'll create a new directory and change into it as follows:

$ mkdir tftest
$ cd tftest 

All the configuration and state required by Terraform stays in this directory.

The code snippets provided need to go into a Terraform configuration file, typically a file called main.tf. However, configuration blocks can also be spread across multiple files, as Terraform combines all files with the .tf extension in the current working directory.

The following code block tells Terraform that we want to use the CML2 provider. It will download and install the latest available version from the registry at initialization. We add this to a new file called main.tf:

terraform {
  required_providers {
    cml2 = {
      source = "registry.terraform.io/ciscodevnet/cml2"
    }
  }
} 
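
If you want reproducible runs, the provider can also be pinned to a version range; the constraint shown here is only an example:

terraform {
  required_providers {
    cml2 = {
      source  = "registry.terraform.io/ciscodevnet/cml2"
      # version constraint is optional; shown here as an example
      version = "~> 0.4"
    }
  }
}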

With the provider defined, we can now initialize the environment. This will download the provider binary from the HashiCorp registry and install it on the local computer. It will also create various files and a directory that holds additional Terraform configuration and state.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of ciscodevnet/cml2...
- Installing ciscodevnet/cml2 v0.4.1...
- Installed ciscodevnet/cml2 v0.4.1 (self-signed, key ID A97E6292972408AB)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
$ 

Configure the provider

The CML2 Terraform provider needs credentials to access CML2. These credentials are configured as shown in the following example. Of course, address, username, and password have to match the actual environment:

supplier "cml2" {
  tackle     = "https://cml-controller.cml.lab"
  username    = "admin"
  password    = "supersecret"
  # skip_verify = true
} 

The skip_verify attribute is commented out in the example. You may want to uncomment it to work with the default certificate that ships with the product, which is signed by the Cisco CML CA. Consider installing a trusted certificate chain on the controller instead.
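
If you stay with the shipped self-signed certificate for a quick test, the same provider block with skip_verify enabled would look like this (credentials are placeholders):

provider "cml2" {
  address     = "https://cml-controller.cml.lab"
  username    = "admin"
  password    = "supersecret"
  # accept the self-signed certificate shipped with the product
  skip_verify = true
}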

While the above works fine, it's not advisable to put clear-text credentials in files that may end up in source code management (SCM). A better approach is to use environment variables, ideally in combination with some tooling like direnv. As a prerequisite, the variables have to be defined within the configuration:

variable "tackle" {
  description = "CML controller tackle"
  sort        = string
  default     = "https://cml-controller.cml.lab"
}

variable "username" {
  description = "cml2 username"
  sort        = string
  default     = "admin"
}

variable "password" {
  description = "cml2 password"
  sort        = string
  delicate   = true
} 

NOTE: Adding the "sensitive" attribute ensures that this value isn't printed in any output.
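
With the variables in place, the provider block can reference them instead of hard-coded literals:

provider "cml2" {
  # values are resolved from TF_VAR_* environment variables or the defaults
  address  = var.address
  username = var.username
  password = var.password
}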

We can now create a direnv configuration that inserts values from the environment into our provider configuration by creating a .envrc file. You can also achieve this by manually sourcing the file via source .envrc. The benefit of direnv is that this happens automatically when changing into the directory.

TF_VAR_address="https://cml-controller.cml.lab"
TF_VAR_username="admin"
TF_VAR_password="secret"

export TF_VAR_username TF_VAR_password TF_VAR_address 
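
With direnv installed, the .envrc file has to be approved once; afterwards the variables are exported automatically whenever you enter the directory (the output below is a sketch and may differ slightly):

$ direnv allow
direnv: loading .envrc
direnv: export +TF_VAR_address +TF_VAR_password +TF_VAR_username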

This decouples the Terraform configuration files from the credentials/dynamic values so that they can easily be added to SCM, like Git, without exposing sensitive values such as passwords or addresses.

Define the CML2 lab infrastructure

With the basic configuration done, we can now describe our CML2 lab infrastructure. We have two options:

  1. Import-mode
  2. Define-mode

Import-mode 

This imports an existing CML2 lab YAML topology file as a Terraform lifecycle resource. It is the "one-stop" solution, defining all nodes, links, and interfaces in one go. In addition, you can use Terraform templating to modify properties of the imported lab (see below).

Import-mode example

Here's a simple import-mode example:

useful resource "cml2_lifecycle" "this" {
  topology = file("topology.yaml")
} 

The file topology.yaml will be imported into CML2 and then started. We now have to "plan" the change:

$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cml2_lifecycle.this will be created
  + resource "cml2_lifecycle" "this" {
      + booted   = (known after apply)
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {
        } -> (known after apply)
      + state    = (known after apply)
      + topology = (sensitive value)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
$ 

Then apply it (-auto-approve is a shortcut and should be handled with care):

$ terraform apply -auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:

  # cml2_lifecycle.this will be created
  + resource "cml2_lifecycle" "this" {
      + booted   = (known after apply)
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {
        } -> (known after apply)
      + state    = (known after apply)
      + topology = (sensitive value)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
cml2_lifecycle.this: Creating...
cml2_lifecycle.this: Still creating... [10s elapsed]
cml2_lifecycle.this: Still creating... [20s elapsed]
cml2_lifecycle.this: Creation complete after 25s [id=b75992ec-d345-4638-a6fd-2c0b640a3c22]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ 

We can now take a look at the state:

$ terraform show
# cml2_lifecycle.this:
resource "cml2_lifecycle" "this" {
    booted   = true
    id       = "b75992ec-d345-4638-a6fd-2c0b640a3c22"
    nodes    = {
        # (3 unchanged elements hidden)
    }
    state    = "STARTED"
    topology = (sensitive value)
}
$ terraform console
> keys(cml2_lifecycle.this.nodes)
tolist([
  "0504773c-5396-44ff-b545-ccb734e11691",
  "22271a81-1d3a-4403-97de-686ebf0f36bc",
  "2bccca61-d4ee-459a-81bd-96b32bdaeaed",
])
> cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545-ccb734e11691"].interfaces[0].ip4[0]
"192.168.122.227"
> exit  
$ 

Simple import example with a template

This example is similar to the one above, but this time we import the topology using templatefile(), which allows templating of the topology. Assuming that the CML2 topology YAML file starts with

lab:
  description: "description"
  notes: "notes"
  timestamp: 1606137179.2951126
  title: ${toponame}
  version: 0.0.4
nodes:
  - id: n0
[...] 

then using this HCL

useful resource "cml2_lifecycle" "this" {
  topology = templatefile("topology.yaml", { toponame = "yolo lab" })
} 

will substitute the title: ${toponame} in the YAML with the content of the string "yolo lab" at import time. Note that instead of a string literal, it's perfectly fine to use a variable like var.toponame or other HCL features!
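
For example, the lab title could come from a Terraform variable instead of a literal; the variable name below is arbitrary:

variable "toponame" {
  description = "title for the imported lab"
  type        = string
  default     = "yolo lab"
}

resource "cml2_lifecycle" "this" {
  # the template value is now driven by the variable
  topology = templatefile("topology.yaml", { toponame = var.toponame })
}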

Define-mode usage

Define-mode starts with the definition of a lab resource and then adds node and link resources. In this mode, resources will only be created. If we want to control the runtime state (e.g., start/stop/wipe the lab), then we need to tie these elements to a lifecycle resource.

Here's an example:

useful resource "cml2_lab" "this" {
}

useful resource "cml2_node" "ext" {
  lab_id         = cml2_lab.this.id
  nodedefinition = "external_connector"
  label          = "Web"
  configuration  = "bridge0"
}

useful resource "cml2_node" "r1" {
  lab_id         = cml2_lab.this.id
  label          = "R1"
  nodedefinition = "alpine"
}

useful resource "cml2_link" "l1" {
  lab_id = cml2_lab.this.id
  node_a = cml2_node.ext.id
  node_b = cml2_node.r1.id
} 

This will create the lab, the nodes, and the link between them. Without further configuration, nothing will be started. If these resources should be started, then you'll need a CML2 lifecycle resource:

useful resource "cml2_lifecycle" "prime" {
  lab_id = cml2_lab.this.id
  components = [
    cml2_node.ext.id,
    cml2_node.r2.id,
    cml2_link.l1.id,
  ]
} 

Here's what this looks like after applying the combined plan.

NOTE: For brevity, some attributes are omitted and have been replaced by [...]:

$ terraform apply -auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cml2_lab.this will be created
  + resource "cml2_lab" "this" {
      + created     = (known after apply)
      + description = (known after apply)
      + groups      = [
        ] -> (known after apply)
      + id          = (known after apply)
      [...]
      + title       = (known after apply)
    }

  # cml2_lifecycle.top will be created
  + resource "cml2_lifecycle" "top" {
      + booted   = (known after apply)
      + elements = [
          + (known after apply),
          + (known after apply),
          + (known after apply),
        ]
      + id       = (known after apply)
      + lab_id   = (known after apply)
      + nodes    = {
        } -> (known after apply)
      + state    = (known after apply)
    }

  # cml2_link.l1 will be created
  + resource "cml2_link" "l1" {
      + id               = (known after apply)
      + interface_a      = (known after apply)
      + interface_b      = (known after apply)
      + lab_id           = (known after apply)
      + label            = (known after apply)
      + link_capture_key = (known after apply)
      + node_a           = (known after apply)
      + node_a_slot      = (known after apply)
      + node_b           = (known after apply)
      + node_b_slot      = (known after apply)
      + state            = (known after apply)
    }

  # cml2_node.ext will be created
  + resource "cml2_node" "ext" {
      + configuration   = (known after apply)
      + cpu_limit       = (known after apply)
      + cpus            = (known after apply)
      [...]
      + x               = (known after apply)
      + y               = (known after apply)
    }

  # cml2_node.r1 will be created
  + resource "cml2_node" "r1" {
      + configuration   = (known after apply)
      + cpu_limit       = (known after apply)
      + cpus            = (known after apply)
      [...]
      + x               = (known after apply)
      + y               = (known after apply)
    }

Plan: 5 to add, 0 to change, 0 to destroy.
cml2_lab.this: Creating...
cml2_lab.this: Creation complete after 0s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]
cml2_node.ext: Creating...
cml2_node.r1: Creating...
cml2_node.ext: Creation complete after 1s [id=32f187bf-4f53-462a-8e36-43cd9b6e17a4]
cml2_node.r1: Creation complete after 1s [id=5d59a0d3-70a1-45a1-9b2a-4cecd9a4e696]
cml2_link.l1: Creating...
cml2_link.l1: Creation complete after 0s [id=a083c777-abab-47d2-95c3-09d897e01d2e]
cml2_lifecycle.top: Creating...
cml2_lifecycle.top: Still creating... [10s elapsed]
cml2_lifecycle.top: Still creating... [20s elapsed]
cml2_lifecycle.top: Creation complete after 22s [id=306f3ebf-c819-4b89-a99d-138a58ca7195]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

$ 

The elements lifecycle attribute is required to tie the individual nodes and links into the lifecycle resource. This ensures the correct sequence of operations based on the dependencies between the resources.

NOTE: It's not possible to use both topology import and elements at the same time. In addition, when importing a topology using the topology attribute, a lab_id can't be set.

Advanced usage

The lifecycle resource has a few additional configuration parameters that control advanced features. Here's a list of those parameters and what they do; a combined sketch follows the list:

  • configs is a map of strings. The keys are node labels, and the values are node configurations. When these are present, the provider checks all node labels to see whether they match and, if they do, replaces the node's configuration with the provided one. This lets you "inject" configurations into a topology file. The base topology file might have no configurations at all, in which case the actual configurations can be provided via file("node1-config") or a literal configuration string, as shown here:
configs = {
  "node-1": file("node1-config")
  "node-2": "hostname node2"
}
  • staging defines the node start sequence when the lab is started. Node tags are used to achieve this. Here's an example:
staging = {
    stages = ["infra", "core", "site-1"]
    start_remaining = true
}

The given example ensures that nodes with the tag "infra" are started first. The provider waits until all nodes with this tag are marked as "booted." Then, all nodes with the tag "core" are started, and so on. If, after the end of the stage list, there are still stopped nodes, the start_remaining flag determines whether they should remain stopped or be started as well (the default is true, i.e., they will all be started).

  • state defines the runtime state of the lab. By default this is STARTED, which means the lab will be started. Options are STARTED, STOPPED, and DEFINED_ON_CORE:

–    STARTED is the default

–    STOPPED can be set if the lab is currently started; otherwise it will produce a failure

–    DEFINED_ON_CORE wipes the lab if the current state is either STARTED or STOPPED

  • timeouts can be used to set different timeouts for operations. This might be necessary for large labs that take a long time to start. The defaults are set to 2h.
  • wait is a boolean flag that defines whether the provider should wait for convergence (for example, when the lab starts and this is set to false, the provider will start the lab but will not wait until all nodes within the lab are "ready").
  • id is a read-only computed attribute. A UUIDv4 will be auto-generated at create time and assigned to this ID.
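
Putting several of these parameters together, here's a hedged sketch of a lifecycle resource for the define-mode lab above; all values are examples only:

resource "cml2_lifecycle" "top" {
  lab_id = cml2_lab.this.id
  elements = [
    cml2_node.ext.id,
    cml2_node.r1.id,
    cml2_link.l1.id,
  ]

  # inject a configuration into the node labeled "R1"
  configs = {
    "R1" : "hostname r1"
  }

  # start nodes tagged "infra" first, then "core"; start the rest afterwards
  staging = {
    stages          = ["infra", "core"]
    start_remaining = true
  }

  state = "STARTED" # STARTED (default), STOPPED, or DEFINED_ON_CORE
  wait  = true      # wait until the lab has converged before returning
}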

CRUD operations

Of the four basic operations of resource management, create, read, update, and delete (CRUD), the previous sections mainly described the create and read side. But Terraform can also deal with update and delete.

Plans can be modified, new resources can be added, and existing resources can be removed or changed. This is always a result of editing/changing your Terraform configuration files and then having Terraform figure out the required state changes via terraform plan, followed by a terraform apply once you're happy with those changes.

Updating resources

It's possible to update resources, but not every combination is seamless. Here are a few things to consider:

  • Only a few node attributes can be changed seamlessly; examples are coordinates (x/y), label, and configuration
  • Some plan changes will re-create resources. For example, running nodes will be destroyed and restarted if the node definition is changed (see the sketch after this list)
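
For example, changing only a node's label is applied in place, while changing its node definition forces a re-creation; a small sketch of such an edit:

resource "cml2_node" "r1" {
  lab_id         = cml2_lab.this.id
  label          = "R1-renamed" # label changes are applied in place
  nodedefinition = "alpine"     # changing this would force re-creation
}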

Deleting resources

Finally, a terraform destroy will delete all created resources from the controller.
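
For example, tearing down the define-mode lab from above would look roughly like this (output abridged):

$ terraform destroy -auto-approve
[...]
Destroy complete! Resources: 5 destroyed.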

Data sources

As opposed to resources, data sources don't hold any state. They're used to read data from the controller. This data can then be used to reference elements in other data sources or resources. A good example, although not yet implemented, would be a list of available node and image definitions. By reading these into a data source, the HCL defining the infrastructure could take the available definitions into account.

There are, however, a few data sources implemented (a hedged sketch follows this list):

  • Node: Reads a node by providing a lab ID and a node ID
  • Lab: Reads a lab by providing either a lab ID or a lab title
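
A hedged sketch of how these data sources could be referenced; the attribute names are assumptions and should be checked against the provider documentation:

# look up an existing lab by its title (assumed attribute name)
data "cml2_lab" "existing" {
  title = "yolo lab"
}

# read one node of that lab by its ID (assumed attribute names)
data "cml2_node" "r1" {
  lab_id = data.cml2_lab.existing.id
  id     = "0504773c-5396-44ff-b545-ccb734e11691"
}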

Output 

All data in resources and data sources can be used to drive output from Terraform. A useful example in the context of CML2 is the retrieval of IP addresses from running nodes. Here's how to do it, assuming that the lifecycle resource is called this and also assuming that R1 is able to acquire an IP address via an external connector:

cml2_lifecycle.this.nodes["0504773c-5396-44ff-b545-ccb734e11691"].interfaces[0].ip4[0]

Note, however, that output is also computed when resources might not exist, so the above will give an error due to the node not being found or the interface list being empty. To guard against this, you can use HCL conditionals:

output "r1_ip_address" {
  worth = (
    cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4 == null ?
    "undefined" : (
      size(cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4) > 0 ?
      cml2_lifecycle.prime.nodes[cml2_node.r1.id].interfaces[0].ip4[0] :
      "no ip"
    )
  )
} 

Output: 

r1_ip_address = "192.168.255.115" 

Conclusion 

The CML2 provider fits nicely into the overall Terraform ecosystem. With the flexibility HCL provides and by combining it with other Terraform providers, it's never been easier to automate virtual network infrastructure within CML2. What will you do with these new capabilities? We're curious to hear about it! Let's continue the conversation on the Cisco Learning Network's Cisco Modeling Labs community.

Individual users can purchase Cisco Modeling Labs – Personal and Cisco Modeling Labs – Personal Plus licenses from the Cisco Learning Network Store. For teams, explore CML – Enterprise and CML – Higher Education licensing and contact us to learn how Cisco Modeling Labs can power your NetDevOps transformation.


Join the Cisco Learning Network today for free.

Follow Cisco Learning & Certifications

Twitter | Facebook | LinkedIn | Instagram

Use #CiscoCert to join the conversation.

 

References 

  • https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli 
  • https://github.com/CiscoDevNet/terraform-provider-cml2 
  • https://registry.terraform.io/providers/CiscoDevNet/cml2
  • https://developer.hashicorp.com/terraform/language 
  • https://direnv.net/
  • Image by DALL-E (https://labs.openai.com/)
