Articles tagged with "dynamodb"

Cross Account Kafka Streaming Part 1: Use Amazon MSK and Terraform to build a real-time data analytics pipeline.

When discussing high-performance real-time event streaming, Apache Kafka immediately comes to mind. Optimized for ingesting and transforming real-time streaming data in a reliable and scalable manner, Apache Kafka powers the mission-critical applications and data analytics pipelines of a great number of companies today. In this blog series, I'd like to show you how to leverage Amazon MSK and Terraform to set up a fully managed, cross-account Apache Kafka streaming pipeline on AWS. In this first part, we will set up the MSK cluster and its producers. The second part will show you how to set up distributed Kafka clients in different AWS accounts and communicate with the MSK cluster via AWS VPC Endpoints.
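As a preview of where part 1 ends up, here is a minimal sketch of a Python producer talking to an MSK cluster over TLS, using the kafka-python library. The bootstrap broker address and the topic name are placeholders for values the Terraform setup would output, not the article's actual configuration.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Placeholder bootstrap broker; in practice this comes from the Terraform
# output of the MSK cluster (e.g. aws_msk_cluster bootstrap_brokers_tls).
BOOTSTRAP_SERVERS = [
    "b-1.example-cluster.abc123.kafka.eu-central-1.amazonaws.com:9094",
]

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_SERVERS,
    security_protocol="SSL",  # MSK serves TLS traffic on port 9094
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# Send a sample event to a hypothetical topic and wait for delivery.
producer.send("clickstream-events", value={"user_id": 42, "action": "page_view"})
producer.flush()
```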

Cross Account Kafka Streaming Part 2: Use Amazon MSK and Terraform to build a real-time data analytics pipeline.

When discussing high-performance real-time event streaming, Apache Kafka immediately comes to mind. Optimized for ingesting and transforming real-time streaming data in a reliable and scalable manner, Apache Kafka powers the mission-critical applications and data analytics pipelines of a great number of companies today. In this blog series, I'd like to show you how to leverage Amazon MSK and Terraform to set up a fully managed, cross-account Apache Kafka streaming pipeline on AWS. In the first part, we already set up the MSK cluster and its producers. In this second part, I will show you how to set up distributed Kafka clients in different AWS accounts and let them communicate with the MSK cluster via AWS VPC Endpoints.
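To give a taste of the consuming side, here is a minimal sketch of a Kafka consumer in Python. The DNS name standing in for the VPC endpoint and the topic name are hypothetical, and the sketch glosses over the endpoint wiring the article covers in detail.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical DNS name resolving to the VPC endpoint in the client
# account, which forwards to the MSK brokers in the cluster account.
BOOTSTRAP_SERVERS = ["kafka.internal.example.com:9094"]

consumer = KafkaConsumer(
    "clickstream-events",          # hypothetical topic from part 1
    bootstrap_servers=BOOTSTRAP_SERVERS,
    security_protocol="SSL",
    auto_offset_reset="earliest",  # start from the oldest record on first run
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Print every event as it arrives.
for message in consumer:
    print(f"partition={message.partition} offset={message.offset} value={message.value}")
```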

Implementing Pessimistic Locking with DynamoDB and Python

I will show you how to implement pessimistic locking using Python, with DynamoDB as our backend. Before we start, we'll review the basics and discuss some of the design criteria we're looking for. In an earlier post, I showed you how to implement optimistic locking using DynamoDB and explained some of the reasons why locking is useful and which issues it can prevent. If you're unfamiliar with the topic, I suggest you check that one out first.
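To make the idea concrete up front: the core of a pessimistic lock on DynamoDB is a conditional write that only succeeds if nobody else holds the lock. Here is a minimal sketch, assuming a hypothetical table named locks with a lock_id partition key; the full article builds a more complete design around this primitive.

```python
import time
import uuid

import boto3
from botocore.exceptions import ClientError

# Hypothetical table with a partition key named "lock_id".
table = boto3.resource("dynamodb").Table("locks")


def acquire_lock(lock_id, lease_seconds=30):
    """Try to acquire the lock; return an owner token on success, None otherwise."""
    owner = str(uuid.uuid4())
    try:
        table.put_item(
            Item={
                "lock_id": lock_id,
                "owner": owner,
                # An expiry lets locks of crashed owners be reclaimed later.
                "expires_at": int(time.time()) + lease_seconds,
            },
            # The write fails if an item with this lock_id already exists.
            ConditionExpression="attribute_not_exists(lock_id)",
        )
        return owner
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return None  # somebody else holds the lock
        raise
```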

Getting a near-real-time view of a DynamoDB stream with Python

DynamoDB streams help you respond to changes in your tables, a capability commonly used to create aggregations or trigger other workflows once data is updated. Getting a near-real-time view into these streams can also be helpful while developing or debugging a serverless application on AWS. Today, I will share a Python script that I built to hook into DynamoDB streams. Before we begin, I suggest you read my blog post with a deep dive into DynamoDB streams and how they're implemented, because we'll be using those concepts today.
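As a taste of what the script does, here is a minimal sketch that tails new records from a single shard of a table's stream using boto3. The table name is a placeholder, and a real implementation also has to follow shard splits over time, which the full script takes care of.

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")
streams = boto3.client("dynamodbstreams")

# Placeholder table name; a stream must be enabled on the table.
stream_arn = dynamodb.describe_table(TableName="my-table")["Table"]["LatestStreamArn"]

# For brevity we tail only the first shard; streams split into many shards.
shard_id = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]["Shards"][0]["ShardId"]

iterator = streams.get_shard_iterator(
    StreamArn=stream_arn,
    ShardId=shard_id,
    ShardIteratorType="LATEST",  # only records written after we start
)["ShardIterator"]

while iterator:
    response = streams.get_records(ShardIterator=iterator)
    for record in response["Records"]:
        print(record["eventName"], record["dynamodb"].get("Keys"))
    iterator = response.get("NextShardIterator")
    time.sleep(1)  # GetRecords is throttled per shard, so poll gently
```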