Using AI to generate Terraform Code from actual AWS resources

The world is changing, with new AI tools emerging every day. One such tool that has been making waves recently is ChatGPT. It is only the first of many such tools hitting the market, and it forces us to think about the future of our work. I recently used it to help with a standard task I often perform and was amazed by how well it helped me automate it.

There has been no shortage of talk about AI, ChatGPT, and LLMs in general since OpenAI released the first public version of their tool. I have never been one to push hype, but this topic really hasn’t left my attention. I have been following developments in AI for quite some time and actually wrote my master’s thesis about practical applications that AI could have “in the future” and how they would impact the world we live and work in. That was in 2019, and I couldn’t have imagined how quickly that future would become reality.

This blog post is about my most effective use of an AI tool to date. It is already changing my day-to-day job as an AWS consultant and should get all of us thinking about our role in the future.

The Task: Migrating AWS VPCs, Subnets, and Routes into Terraform Code

The task at hand was straightforward and mundane, and I knew from experience that GPT-4 would surely be capable of it. Still, I was surprised by how well it followed my proposal for approaching the task step by step.

The task was as follows: some AWS resources had been created manually in the past, and we now wanted to manage them in Terraform as Infrastructure as Code. Specifically, this concerned VPCs, their subnets, and the related route tables. Not a big deal, but I knew it would require tedious yet very simple work: first, extract the current configuration details via the AWS console or CLI; then write the corresponding Terraform code; and finally, execute terraform import for each resource. If done correctly, terraform plan would confirm that the state matches the configuration.
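In shell terms, the manual workflow looks roughly like the sketch below. The resource addresses and IDs are placeholders for illustration, not the actual values from the project:

    aws ec2 describe-vpcs                        # 1. gather the current configuration
    # ...write matching aws_vpc / aws_subnet / aws_route_table resources by hand...
    terraform import aws_vpc.main vpc-0abc123    # 2. import each resource into the state
    terraform plan                               # 3. verify that state and code match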

Process

Breaking Down the Task: Utilizing ChatGPT for Step-by-Step Execution

I knew I could speed it up with ChatGPT and figured that I needed to break it down into simple steps. I learned in prior experiments that trying to accomplish complex tasks in one step would often lead to errors and frustration.

Here’s my approach: I first asked it to provide a list of the steps that needed to be done. Then I would go through the steps one by one, using the results of previous steps as input for the next. I hoped I would only have to double-check and copy/paste the respective text, and it worked. I knew that ChatGPT is designed to consider the chat thread as context during text generation, but seeing it applied still amazed me, because it really followed through on my instructions.

Here’s the full first prompt:

I want to migrate an existing AWS VPC, subnets, and route into Terraform code. Please provide the steps for this in an ordered list.
skip the tf basics. i am a pro.
provide the aws cli commands to gather the intel and then provide the tf code + tf commands for import.
However, please start with the ordered list of the steps. Later we will go through each step and I will provide the output of each. Based on that output you can then provide the input for the next step

please keep it short

What followed was a correct list of all the steps I would have done manually:

Steps

From AWS CLI Commands to Terraform Code: ChatGPT Generates the Solution

After double-checking it, I started the process:

Get VPCs

The command was correct, and it returned a JSON document that I then pasted without comment as the next prompt. ChatGPT knew it had to proceed with the second step and immediately jumped into it. I found that impressive.
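The screenshot isn’t reproduced here, but the first command was presumably nothing more exotic than:

    aws ec2 describe-vpcs --output json

The JSON output contains the VPC IDs, CIDR blocks, and tags needed for the following steps.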

Get Subnets

Again, I posted the results without comment, and was then asked for the route tables.
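The actual commands from the chat aren’t shown here, but they would look something like this, with vpc-0abc123 standing in for the real VPC ID from the previous output:

    aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-0abc123" --output json
    aws ec2 describe-route-tables --filters "Name=vpc-id,Values=vpc-0abc123" --output json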

Next, without further ado (remember, I didn’t tell it to do the next step; it simply “remembered”), it gave me perfectly fine Terraform code that included all the required information and sensible naming.

Generate Code
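The generated code isn’t reproduced here verbatim; the following is a minimal sketch of what such output typically looks like, with hypothetical resource names, CIDR ranges, and IDs:

    resource "aws_vpc" "main" {
      cidr_block           = "10.0.0.0/16"   # value taken from the describe-vpcs output
      enable_dns_support   = true
      enable_dns_hostnames = true

      tags = {
        Name = "main-vpc"
      }
    }

    resource "aws_subnet" "public_a" {
      vpc_id            = aws_vpc.main.id
      cidr_block        = "10.0.1.0/24"      # value taken from the describe-subnets output
      availability_zone = "eu-central-1a"

      tags = {
        Name = "public-a"
      }
    }

    resource "aws_route_table" "public" {
      vpc_id = aws_vpc.main.id

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "igw-0abc123"           # gateway ID from the describe-route-tables output
      }

      tags = {
        Name = "public"
      }
    }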

After I checked it, I saved it into a new Terraform project. Finally, the last step is to perform the actual terraform import, which simply brings the existing resources under Terraform management:

Generate TF Imports
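Again, the exact commands aren’t reproduced here; the resource addresses and IDs below are placeholders matching the sketch above:

    terraform import aws_vpc.main vpc-0abc123
    terraform import aws_subnet.public_a subnet-0def456
    terraform import aws_route_table.public rtb-0ghi789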

The Results: Terraform Plan, Import, and Success

On to the moment of truth. As ChatGPT stated, terraform plan should now report that the infrastructure is “up to date”, indicating success.

TF Plan

And indeed, it worked without any errors:

Result
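For reference, a successful run on a current Terraform version prints something along these lines (the exact wording varies slightly between Terraform versions):

    $ terraform plan
    ...
    No changes. Your infrastructure matches the configuration.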

Reflection

Let’s recap my own role in this process. Clearly, ChatGPT controlled the entire process; I was merely the agent executing the steps.

Interaction and Steps

The precision and effectiveness of this approach still amaze me and have once again been eye-opening. It will change the way we approach such clerical tasks in the future, and it got me thinking about my own role as an IT professional. Future tools will surely be able to execute the steps autonomously, without a human in the loop. This opens up a lot of questions about AI safety, as it will become more and more difficult to control the actions of the AI.

All this may still seem like “child’s play,” and surely I’m happy that it doesn’t yet make me expendable. But we should think of GPT-4 as a very early version; basically, it is still a toddler. Now imagine what it will do to our jobs once it evolves further, which will happen sooner than we currently think. Further iterations are just around the corner, they will change many of our day-to-day tasks, and we had better prepare for them.
