Have you ever wondered how you can ingest a large amount of data in real time, durably store it, and make it available for consumption? Or how you could decouple and scale microservices, distributed systems, and serverless applications while storing your data in real time? Maybe you have wondered how you can reliably load streaming data into data lakes, data stores, and analytics tools. The good news is that all of this can be done with the aid of Amazon Kinesis, SQS (Simple Queue Service), and Firehose. In this short tutorial, I will walk you through creating all of this using a Terraform module.
First, let's create a Terraform file that declares our provider. Let us call this provider.tf:
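The original code for this file was shown in an image; a minimal sketch looks like the following (the region value is an assumption, so replace it with the one you want to deploy into):

```hcl
# provider.tf
# Declares the AWS provider and the region to deploy into.
provider "aws" {
  region = "us-east-1" # hypothetical region; use your own
}
```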
By doing this, we have specified the region into which we want our services to be deployed. Next, we need to create a main.tf file that includes the Kinesis stream resource, the SQS resource, and the Firehose resource. Below is an image that shows the code for that:
The image above depicts the AWS Kinesis resource, with comments explaining what each line of code does.
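For readers who cannot see the image, a sketch of a Kinesis stream resource might look like this (the name, shard count, and retention period are assumptions, not the exact values from the original):

```hcl
# main.tf (excerpt)
# A Kinesis data stream that ingests records in real time.
resource "aws_kinesis_stream" "stream" {
  name             = "terraform-kinesis-stream" # hypothetical name
  shard_count      = 1                          # one shard is enough for a demo
  retention_period = 24                         # hours records stay in the stream
}
```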
The image above depicts the SQS resource, with each line of code supported by a comment.
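A comparable sketch of a Standard SQS queue, with assumed names and default-style settings rather than the exact values in the image:

```hcl
# main.tf (excerpt)
# A Standard SQS queue (best-effort ordering, at-least-once delivery).
resource "aws_sqs_queue" "queue" {
  name                      = "terraform-sqs-queue" # hypothetical name
  delay_seconds             = 0                     # deliver messages immediately
  max_message_size          = 262144                # 256 KB, the SQS maximum
  message_retention_seconds = 345600                # keep messages for 4 days
}
```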
NOTE: AWS SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. In this tutorial I am using the Standard queue.
Kinesis Firehose is a member of the AWS Kinesis family that captures, transforms, and loads data streams into AWS data stores for near real-time analytics with existing business intelligence tools. Basically, we use this in data analytics. The code for creating a Firehose, along with comments, is also in the image above. In creating a Firehose, you need a destination, which can be S3, Extended S3, Redshift, Elasticsearch, or Splunk. For this project I used S3 because it is easy to configure.
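A hedged sketch of a Firehose delivery stream reading from the Kinesis stream and delivering to S3. The resource names, the S3 bucket, and the IAM role referenced here are assumptions for illustration (recent AWS provider versions use the "extended_s3" destination for S3 delivery):

```hcl
# main.tf (excerpt)
# An S3 bucket to hold the delivered records (hypothetical name).
resource "aws_s3_bucket" "bucket" {
  bucket = "terraform-firehose-demo-bucket"
}

# Firehose delivery stream: reads from the Kinesis stream, writes to S3.
resource "aws_kinesis_firehose_delivery_stream" "firehose" {
  name        = "terraform-firehose-stream" # hypothetical name
  destination = "extended_s3"

  # Use the Kinesis stream above as the source of records.
  kinesis_source_configuration {
    kinesis_stream_arn = aws_kinesis_stream.stream.arn
    role_arn           = aws_iam_role.firehose_role.arn
  }

  # Deliver the records to the S3 bucket.
  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose_role.arn
    bucket_arn = aws_s3_bucket.bucket.arn
  }
}
```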
After creating the main.tf file, we then create the iam.tf file, which includes the IAM role and IAM policy.
PS: IAM means Identity and Access Management.
The image above shows how I created the role, which could also be done in the AWS console (that tutorial will be shared later).
Then I created the policy for the IAM role. For Kinesis I allowed the use of all Kinesis resources, and the same applies to SQS and S3.
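The role and policy described above might be sketched like this. The role and policy names are assumptions; the policy mirrors the text by allowing all actions on all Kinesis, SQS, and S3 resources (fine for a tutorial, but you would scope this down in production):

```hcl
# iam.tf
# Role that Firehose assumes to read from Kinesis and write to S3.
resource "aws_iam_role" "firehose_role" {
  name = "firehose-role" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "firehose.amazonaws.com" }
    }]
  })
}

# Policy allowing all Kinesis, SQS, and S3 actions, as in the post.
resource "aws_iam_role_policy" "firehose_policy" {
  name = "firehose-policy" # hypothetical name
  role = aws_iam_role.firehose_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kinesis:*", "sqs:*", "s3:*"]
      Resource = "*"
    }]
  })
}
```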
With all of that written out, I then created a folder called kinesis and saved all the Terraform files inside it. Outside this folder is where I created my module.tf file. The reason I am using a module is that modules can be used to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture rather than directly in terms of physical objects.
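With that layout, the module.tf file only needs to point at the folder. A minimal sketch, assuming the folder is named kinesis and sits next to module.tf:

```hcl
# module.tf
# Pulls in all the resources defined in the ./kinesis folder.
module "kinesis" {
  source = "./kinesis"
}
```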
With this, I was done with everything that needs to be created. All you need to do now is run terraform init, terraform plan, and terraform apply, and then you should see the output below:
The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration.
The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the predetermined set of actions generated by a terraform plan execution plan.
And there you have it: you have successfully created a Terraform module that creates Kinesis, Firehose, and SQS.
The code for this post can be found at: https://github.com/adefemi171/AWS-Kinesis-firehose-sqs