The declarative vs imperative Infrastructure as Code discussion is flawed



“Infrastructure definition has to be declarative”. Let’s see where this presumption gets us.

My guess as to why some ops guys prefer pure Terraform or CloudFormation is that these languages seem easier to understand: there is precisely one way of creating a specific resource in the language. If you use a programming language, there are many ways to solve one specific problem.

A problem that can occur later in the project is that both declarative languages have boundaries in what they can do; with a programming language you do not have these boundaries. The price is the larger number of possible solutions, and one approach to tame that is to define a preferred way of doing things in the programming language.

If you keep plain code snippets, e.g. for the VPC in the project, and do not add interface layers, dependency injection, indirection and so on, then the programming language can be read nearly as easily as a declarative language.
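
For illustration, here is a minimal sketch in CDK TypeScript of such a plain VPC snippet (the stack name, construct id and property values are made up for this example). Written without any extra abstraction layers, it reads almost like a declarative template:

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';

export class NetworkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // One flat resource definition, no interface layers or indirection
    new ec2.Vpc(this, 'ProjectVpc', {
      cidr: '10.0.0.0/16',   // example values only
      maxAzs: 2,
      natGateways: 1,
    });
  }
}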

That Terraform and CloudFormation both include some aspects of programming languages, like variables, if statements and loops (Terraform only), makes them stripped-down programming languages.

Programming language features

When we look at the features of the two “declarative” frameworks, CloudFormation and Terraform, we see that both are evolving towards a programming language (Terraform at a faster pace). But they both implement only a small fraction of programming language features, which makes them seem easier to learn. These restrictions lead to implementing additional features as plugins or external scripting.

With a programming-language-first approach you do not need some of these plugins or external scripts, because you can script inside the same language.
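
As a sketch of what “scripting inside the same language” could look like (the stage names and bucket naming are made up for this example), an ordinary TypeScript loop and conditional replace what would otherwise need count tricks, plugins or an external script:

import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';

export class StorageStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Plain TypeScript instead of plugins or external scripts:
    // loop over stages and apply conditional logic in the same language.
    const stages = ['dev', 'test', 'prod'];
    for (const stage of stages) {
      new s3.Bucket(this, `DataBucket-${stage}`, {
        versioned: stage === 'prod', // only version the production bucket
      });
    }
  }
}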

The definition “is declarative” could mean the opposite…

Which framework is declarative, which is imperative?

That’s an easy question: CloudFormation and Terraform are declarative, CDK is imperative. But only at first glance!

If you define a Lambda resource in Terraform, what happens?

Terraform is imperative, restricted by declarative HCL

An HCL Lambda function like:

resource "aws_lambda_function" "this" {
  ...
  runtime                        = var.runtime
  handler                        = var.handler
  ...
}

will be interpreted by this Go code:

func resourceAwsLambdaFunction() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsLambdaFunctionCreate,
		Read:   resourceAwsLambdaFunctionRead,
		Update: resourceAwsLambdaFunctionUpdate,
		Delete: resourceAwsLambdaFunctionDelete,
		...

From https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_lambda_function.go

Aha, Terraform is purely imperative… :)

Terraform is imperative

If you want to extend functionality beyond the boundaries of the Terraform AWS provider, you have to “leave” the Terraform domain and add external scripting.

So you could see Terraform as an imperative language restricted by a declarative language (HCL).

CDK is declarative, broadened by imperative TypeScript

In the CDK, we define a Lambda function:

new lambda.Function(this, 'HelloHandler', {
  ...
  handler: 'hello.handler',
  runtime: lambda.Runtime.NODEJS_10_X,
  ...
});

which will be translated to something like:

    "Type": "AWS::Lambda::Function",
      "Properties": {
       ...
      "Handler": "hello.handler",
      "Runtime": "nodejs10.x",

CDK is declarative

If you have to add some functionality that is not included in CloudFormation, you may implement it directly in the CDK construct source. jsii cannot translate this TypeScript code, so you can't use it in your modules. But you can take an embedded approach for extended functionality.
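
One way such an embedded approach could look is sketched below with the CDK's AwsCustomResource construct, which wraps a plain AWS SDK call in a CloudFormation custom resource (the stack name and the SSM parameter name are made up for this example):

import * as cdk from '@aws-cdk/core';
import * as cr from '@aws-cdk/custom-resources';

export class ExtendedStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Call an AWS API that plain CloudFormation does not cover;
    // behind the scenes the CDK generates a custom resource backed by a Lambda function.
    const lookup = new cr.AwsCustomResource(this, 'SsmLookup', {
      onUpdate: {
        service: 'SSM',
        action: 'getParameter',
        parameters: { Name: '/myproject/config' }, // hypothetical parameter name
        physicalResourceId: cr.PhysicalResourceId.of('SsmLookup'),
      },
      policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });

    // The API response can be used elsewhere in the stack.
    const configValue = lookup.getResponseField('Parameter.Value');
  }
}

This way the extended functionality stays embedded in the same CDK code base instead of living in an external script.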

Conclusion

So neither is Terraform declarative, nor is the CDK imperative.

So we can move the discussion about which framework fits the project better towards the real questions: can the framework implement what the project needs, which approach is more agile, and which one do you simply like more?


Thank you

Image Credits

Photo by Hermes Rivera on Unsplash
