




10 Terraform Best Practices for Better Infrastructure Provisioning

Let’s go over some of the best practices that should be followed while using Terraform.

Terraform is a very popular open-source IaC (infrastructure as code) tool used to define and provision the entire infrastructure.

Terraform was launched in 2014, and the adoption of this tool has grown globally since then. More and more developers are learning Terraform to deploy infrastructure in their organizations.

If you have started using Terraform, you should adopt these best practices for better production infrastructure provisioning.

If you are a newbie, check out the Terraform for Beginners article below.

Structuring

When you are working on a large production infrastructure project using Terraform, you should follow a proper directory structure to take care of the complexities that may occur in the project. It is best to have separate directories for different purposes.

For example, if you are using Terraform in development, staging, and production environments, have separate directories for each of them.

Even the Terraform configurations should be kept separate, because over time the configurations of a growing infrastructure become complex.

For example, you could write all of your Terraform code (modules, resources, variables, outputs) inside the main .tf file itself, but having separate files for variables and outputs makes the code more readable and easier to understand.
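
For instance, a directory layout along these lines (the names below are only an illustration, not a requirement) keeps each environment and concern separate:

terraform-project/
    dev/
        main.tf
        variables.tf
        outputs.tf
    staging/
        main.tf
        variables.tf
        outputs.tf
    prod/
        main.tf
        variables.tf
        outputs.tf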

Naming Convention

Naming conventions are used in Terraform to make things easily understandable.

For example, let’s say you want to create three different workspaces for different environments in a project. Instead of naming them env1, env2, env3, you should call them dev, stage, and prod. From the names alone, it becomes pretty clear that there are three different workspaces, one for each environment.

Similar conventions should also be followed for resources, variables, modules, etc. A resource name in Terraform should start with the provider name, followed by an underscore and other details.

For example, the resource name for creating a Terraform object for a routing table in AWS would be aws_route_table.

So, if you follow the naming conventions correctly, it will be easier to understand even complex code.
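
As a rough sketch (the VPC reference and tag values are illustrative assumptions, not part of the original article), a consistently named resource might look like this:

resource "aws_route_table" "private" {
  # The resource type begins with the provider name (aws), followed by what it manages.
  vpc_id = aws_vpc.main.id   # assumes an aws_vpc named "main" defined elsewhere

  tags = {
    Name        = "dev-private-route-table"
    Environment = "dev"
  }
}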

Use Shared Modules

It is strongly suggested to use the official Terraform modules that are already available. There is no need to reinvent a module that already exists, and it saves a lot of time and pain. The Terraform Registry has plenty of modules readily available; make changes to the existing modules as per your needs.

Also, each module should cover just one aspect of the infrastructure, such as creating an AWS EC2 instance, setting up a MySQL database, etc.

For example, if you want to use an AWS VPC in your Terraform code, you can use the simple VPC module from the Terraform Registry.
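
A minimal sketch, assuming the widely used community module terraform-aws-modules/vpc/aws (check its Registry page for the current inputs and version; the values below are placeholders):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}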

Latest Version

The Terraform development community is very active, and new functionality is released frequently. It is recommended to stay on the latest version of Terraform, upgrading as and when a new major release happens. You can easily upgrade to the latest version.

If you skip multiple major releases, upgrading will become very complex.
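
One way to keep a configuration honest about the version it expects is to pin it in the terraform block (the exact constraint below is just an example; adjust it to your setup):

terraform {
  # Require a minimum Terraform version this configuration has been tested with.
  required_version = ">= 1.5.0"
}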

Backup System State

Always back up the state files of Terraform.

These files keep track of the metadata and resources of the infrastructure. By default, this file, called terraform.tfstate, is stored locally inside the workspace directory.

Without these files, Terraform will not be able to figure out which resources are deployed on the infrastructure. So, it is essential to have a backup of the state file. By default, a file named terraform.tfstate.backup is created to keep a backup of the state file.

If you want to store the backup state file in another location, use the -backup flag in the terraform command and provide the location path.

Most of the time, there will be multiple developers working on a project. So, to give them access to the state file, it should be stored at a remote location and shared using the terraform_remote_state data source.

The following example will take a backup to S3.
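
A minimal sketch of what that could look like, assuming an S3 bucket (the names below are placeholders). The remote location itself is configured through a backend block, which other configurations can then read with the terraform_remote_state data source:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # placeholder bucket name
    key    = "prod/terraform.tfstate"   # path of the state file inside the bucket
    region = "us-east-1"
  }
}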

Lock State File

There can be multiple scenarios where more than one developer tries to run the Terraform configuration at the same time. This can lead to corruption of the Terraform state file or even data loss. A locking mechanism helps prevent such scenarios. It makes sure that, at any given time, only one person is running the Terraform configuration, so there is no conflict.

Here is an example of locking the state file, which is at a remote location, using DynamoDB.
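
A hedged sketch, assuming an S3 backend (bucket and table names are placeholders); the dynamodb_table argument is what enables the lock:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"   # placeholder table; its primary key must be named LockID
  }
}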

When multiple users attempt to access the state file, the DynamoDB table name and primary key will be used for state locking and maintaining consistency.

Note: Not all backends support locking.

Use self Variable

The self variable is a special kind of variable that is used when you don’t know the value of the variable before deploying the infrastructure.

Let’s say you want to use the IP address of an instance which will be deployed only after the terraform apply command, so you don’t know the IP address until it is up and running.

In such cases, you use self variables, and the syntax is self.ATTRIBUTE. So, in this case, you would use self.ipv4_address as a self variable to get the IP address of the instance. These variables are only allowed in connection and provisioner blocks of the Terraform configuration.
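
A hedged sketch, assuming a DigitalOcean droplet (which is what exposes the ipv4_address attribute used above); for other providers the attribute name differs, for example public_ip on an AWS instance:

resource "digitalocean_droplet" "web" {
  name   = "web-1"
  image  = "ubuntu-22-04-x64"   # placeholder image slug
  region = "nyc3"
  size   = "s-1vcpu-1gb"

  # self.* is only valid inside connection and provisioner blocks and
  # refers to this resource's own attributes once it has been created.
  connection {
    type        = "ssh"
    user        = "root"
    host        = self.ipv4_address
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = ["echo Connected to ${self.ipv4_address}"]
  }
}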

Minimize Blast Radius

The blast radius is nothing but the measure of damage that can happen if things do not go as planned.

For example, if you are deploying some Terraform configurations on the infrastructure and the configuration does not get applied correctly, what will be the amount of damage to the infrastructure?

So, to minimize the blast radius, it is always suggested to push only a few configurations to the infrastructure at a time. That way, if something goes wrong, the damage to the infrastructure will be minimal and can be corrected quickly. Deploying many configurations at once is very risky.

Use var-file

In Terraform, you can create a file with the extension .tfvars and pass this file to the terraform apply command using the -var-file flag. This helps you in passing those variables which you don’t want to put in the Terraform configuration code.

It is always suggested to pass variables such as a password, secret key, etc. locally through -var-file rather than saving them inside the Terraform configurations or in a remote version control system.

For example, if you want to launch an EC2 instance using Terraform, you can pass the access key and secret key using -var-file.

Create a file terraform.tfvars and put the keys in this file.
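
A hedged sketch of how this could look (the variable names and values are placeholders):

# variables.tf — declare the variables
variable "access_key" {
  type      = string
  sensitive = true
}

variable "secret_key" {
  type      = string
  sensitive = true
}

# terraform.tfvars — keep this file out of version control
access_key = "YOUR-ACCESS-KEY"   # placeholder value
secret_key = "YOUR-SECRET-KEY"   # placeholder value

# Pass it at apply time:
#   terraform apply -var-file="terraform.tfvars"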

Use Docker

When you are running a CI/CD pipeline build job, it is suggested to use Docker containers. Terraform provides official Docker containers that can be used. In case you change the CI/CD server, you can easily pass the infrastructure inside a container.

You get portable, reusable, and repeatable infrastructure.

Conclusion

I hope these best practices will help you in writing better Terraform configurations. Go ahead and start implementing them in your Terraform projects for better results.

Terraform for Beginners

Wondering what Terraform is? Let’s find out.

Infrastructure as Code (IaC) is a widespread term among DevOps professionals. It is the process of managing and provisioning the complete IT infrastructure (comprising both physical and virtual machines) using machine-readable definition files. It is a software engineering approach toward operations, and it helps in automating the complete data center through programming scripts.

With all the features that Infrastructure as Code provides, it also comes with multiple challenges:

Need to learn to code

Don’t know the change impact.

Need to revert the change

Can’t track changes

Can’t automate a resource

Multiple environments for infrastructure

Terraform has been created to solve these challenges.

What is Terraform?

Terraform is an open-source infrastructure as code tool developed by HashiCorp. It is used to define and provision the complete infrastructure using an easy-to-learn declarative language.

It is an infrastructure provisioning tool where you can store your cloud infrastructure setup as code. It is very similar to tools like CloudFormation, which you would use to automate your AWS infrastructure, but CloudFormation can only be used on AWS.

Terraform, on the other hand, can be used on other cloud platforms as well.

Below are some of the advantages of using Terraform.

Does orchestration, not just configuration management

Supports multiple providers like AWS, Azure, GCP, DigitalOcean, and lots of more

Provides immutable infrastructure where configuration changes happen smoothly

Uses an easy-to-understand language, HCL (HashiCorp Configuration Language)

Easily portable to any other provider

Supports client-only architecture, so there is no need for extra configuration management on a server

Terraform Core concepts

Below are the core concepts/terminologies used in Terraform (a short example tying them together follows this list):

Variables: Also known as input variables; these are key-value pairs used by Terraform modules to allow customization.

Provider: A plugin to interact with the APIs of a service and access its related resources.

Module: A folder with Terraform templates where all the configurations are defined.

State: Cached information about the infrastructure managed by Terraform and the related configurations.

Resources: Blocks of one or more infrastructure objects (compute instances, virtual networks, etc.) that are used in configuring and managing the infrastructure.

Data Source: Implemented by providers to return information about external objects to Terraform.

Output Values: Return values of a Terraform module that can be used by other configurations.

Plan: One of the stages, where Terraform determines what needs to be created, updated, or destroyed to move from the real/current state of the infrastructure to the desired state.

Apply: One of the stages, where Terraform applies the changes to the real/current state of the infrastructure in order to move it to the desired state.
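
As a rough sketch of how these pieces fit together (the AWS provider and the names below are purely illustrative):

# Variable: allows customization of the module.
variable "region" {
  type    = string
  default = "us-east-1"
}

# Provider: the plugin that talks to the service's API.
provider "aws" {
  region = var.region
}

# Resource: an infrastructure object managed by Terraform.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Data source: reads information about external objects.
data "aws_availability_zones" "available" {}

# Output value: returned by the module for other configurations to use.
output "vpc_id" {
  value = aws_vpc.main.id
}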

Terraform Lifecycle

The Terraform lifecycle consists of init, plan, apply, and destroy.

terraform init initializes the working directory, which contains all the configuration files.

terraform plan is used to create an execution plan to reach the desired state of the infrastructure. It works out which changes are needed to achieve the state defined in the configuration files.

terraform apply then makes the changes to the infrastructure as defined in the plan, and the infrastructure reaches the desired state.

terraform destroy is used to delete all the old infrastructure resources, which are marked tainted after the apply phase.

How Terraform Works

Terraform has two main components that make up its architecture:

Terraform Core

Providers

Terraform Core

Terraform Core uses two input sources to do its job.

The first input source is the Terraform configuration that you, as a user, write. Here, you define what needs to be created or provisioned. The second input source is the state, where Terraform keeps an up-to-date picture of what the current setup of the infrastructure looks like.

So, what Terraform Core does is take this input and figure out a plan of what needs to be done. It compares the state, i.e., the current state, with the configuration that you desire as the outcome. It figures out what needs to be done to get to that desired state described in the configuration file: what needs to be created, what needs to be updated, and what needs to be deleted to create and provision the infrastructure.

Providers

The second component of the architecture is providers for specific technologies. These can be cloud providers like AWS, Azure, and GCP, or other infrastructure-as-a-service platforms. There are also providers for higher-level components like Kubernetes or other platform-as-a-service tools, and even for some software-as-a-service tools.

This gives you the possibility to create infrastructure on different levels.

For example, create an AWS infrastructure, then deploy Kubernetes on top of it, and then create services/components inside that Kubernetes cluster.

Terraform has over 100 providers for different technologies, and each provider gives the Terraform user access to its resources. So through the AWS provider, for example, you have access to hundreds of AWS resources like EC2 instances, AWS users, etc. With the Kubernetes provider, you have access to resources like services, deployments, namespaces, etc.

So, this is how Terraform works, and in this way, it tries to help you provision and cover the entire application setup, from the infrastructure all the way to the application.

Let’s do some practical stuff. 👨‍💻

We will install Terraform on Ubuntu and provision a very basic infrastructure.

Install Terraform

Download the latest Terraform package.

Refer to the official download page to get the latest version for your respective OS.
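
Once Terraform is installed, a very basic configuration might look something like the sketch below; the provider, AMI ID, and region are placeholders rather than the exact example from the original walkthrough:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # placeholder region
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"
}

# Typical workflow once the file is saved as main.tf:
#   terraform init
#   terraform plan
#   terraform apply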

How to Perform GCP Security Scanning to Find Misconfigurations?

Cloud infrastructure has benefits like flexibility, scalability, high performance, and affordability. Once you subscribe to a service, such as the Google Cloud Platform (GCP), you do not need to worry about the high capital and maintenance costs of an equivalent in-house data center and associated infrastructure. However, traditional on-premise security practices do not provide sufficient and prompt security for virtual environments.

Unlike an on-premise data center, where perimeter security protects the whole installation and resources, the nature of the cloud environment, with diverse technologies and locations, requires a different approach.

Usually, the decentralized and dynamic nature of the cloud environment results in an increased attack surface.

In particular, misconfigurations in the cloud platforms and components expose the assets while increasing hidden security risks. Sometimes, developers may open a data store when developing a piece of software but then leave it open when releasing the application to the market.

As such, in addition to following security best practices, there is a need to ensure proper configurations as well as the ability to provide continuous monitoring, visibility, and compliance. Luckily, there are several tools to help you improve security by detecting and preventing misconfigurations, providing visibility into the security posture of the GCP, as well as identifying and addressing other vulnerabilities.