1. Using EKS eu-airflow-prod with eu-kafka_mt-master

    To set up an EKS cluster and deploy a Kafka topic within it, we'll walk you through the process using Pulumi in TypeScript. You will need AWS credentials configured wherever your Pulumi CLI runs for these commands to succeed.
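    As a quick sanity check (assuming the AWS CLI is installed alongside Pulumi), you can confirm which AWS identity your shell will use and set the region Pulumi should target:

    ```shell
    # Verify that working AWS credentials are available in this shell.
    aws sts get-caller-identity

    # Tell Pulumi which AWS region to use (for example eu-west-1).
    pulumi config set aws:region eu-west-1
    ```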

    The following program will create:

    1. An Amazon EKS (Elastic Kubernetes Service) cluster named eu-airflow-prod.
    2. A Kafka cluster on Confluent Cloud, plus a Kafka topic called eu-kafka_mt-master within it.

    We will use the @pulumi/eks package to create the EKS cluster because it provides higher-level abstractions for EKS. For Kafka, we'll use the @pulumi/confluentcloud package to interact with Confluent Cloud's managed Kafka service.

    First, let's start with the EKS cluster setup:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";
    import * as eks from "@pulumi/eks";
    import * as aws from "@pulumi/aws";
    import * as k8s from "@pulumi/kubernetes";
    import * as confluentcloud from "@pulumi/confluentcloud";

    // Create an EKS cluster with the default configuration.
    const cluster = new eks.Cluster("eu-airflow-prod", {});

    // Export the cluster's kubeconfig.
    export const kubeconfig = cluster.kubeconfig;
    ```

    With the cluster created and the kubeconfig exported, you can interact with your Kubernetes cluster using kubectl configured with the exported kubeconfig.
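    For example (assuming kubectl is installed), one way to use the exported kubeconfig after a deployment is:

    ```shell
    # Save the exported kubeconfig and point kubectl at it.
    pulumi stack output kubeconfig > kubeconfig.json
    kubectl --kubeconfig ./kubeconfig.json get nodes
    ```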

    Next, we'll create a Kafka topic on Confluent Cloud. Before doing this, make sure you have signed up for a Confluent Cloud account and have the necessary credentials.
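    One way to supply those credentials, rather than hard-coding them in the program, is via the provider's configuration keys stored as Pulumi secrets (this assumes the standard confluentcloud provider configuration; you will be prompted for each value):

    ```shell
    # Store Confluent Cloud API credentials as encrypted Pulumi config.
    pulumi config set --secret confluentcloud:cloudApiKey
    pulumi config set --secret confluentcloud:cloudApiSecret
    ```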

    ```typescript
    // Provision a Kafka cluster on Confluent Cloud.
    // This uses a Basic cluster for the example; you might want
    // a Standard or Dedicated cluster for production workloads.
    const kafkaCluster = new confluentcloud.KafkaCluster("eu-kafka_mt-master", {
        displayName: "Airflow Production Kafka Cluster",
        availability: "SINGLE_ZONE",
        cloud: "AWS",
        region: "eu-west-1", // Replace with the appropriate AWS region
        basic: {},
        environment: {
            id: "env-XXXXXXXX", // Replace with your Confluent Cloud environment ID
        },
    });

    const kafkaTopic = new confluentcloud.KafkaTopic("eu-kafka_mt-master-topic", {
        kafkaCluster: {
            id: kafkaCluster.id,
        },
        topicName: "eu-kafka_mt-master",
        partitionsCount: 3, // Basic example, adjust as needed
        restEndpoint: kafkaCluster.restEndpoint,
        config: {
            "cleanup.policy": "delete",
            "min.insync.replicas": "2",
            "retention.ms": "259200000", // Example value, adjust as needed
            "segment.ms": "259200000",   // Example value, adjust as needed
        },
        // A Kafka API key with access to this cluster
        credentials: {
            key: "<Confluent_Cloud_API_Key>",
            secret: "<Confluent_Cloud_API_Secret>",
        },
    });

    // Export the Kafka cluster and topic names.
    export const kafkaClusterName = kafkaCluster.displayName;
    export const kafkaTopicName = kafkaTopic.topicName;
    ```

    In the above code we:

    • Provisioned a Kafka cluster on Confluent Cloud using the Basic cluster type, which is suitable for development purposes; production workloads usually call for a Standard or Dedicated cluster.
    • Created a Kafka topic with the name eu-kafka_mt-master, which has settings such as the cleanup policy and retention time specified. You should replace these with the values that best suit your requirements.
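    Millisecond values like the retention time above are hard to read at a glance. The following small standalone helper (hypothetical, not part of the Pulumi program) converts a day count into the millisecond strings Kafka topic configs expect:

    ```typescript
    // Convert a day count to the millisecond string used in Kafka topic configs.
    function daysToMs(days: number): string {
        return String(days * 24 * 60 * 60 * 1000);
    }

    // The retention.ms and segment.ms values above correspond to 3 days:
    console.log(daysToMs(3)); // "259200000"
    ```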

    Remember, you will need to replace the placeholders such as <Confluent_Cloud_API_Key>, <Confluent_Cloud_API_Secret>, and env-XXXXXXXX with your actual Confluent Cloud API credentials and environment ID.

    After setting up the Kafka cluster and topic, you can deploy the Pulumi program using the Pulumi CLI. Running pulumi up previews and then provisions the resources described in the program. If you wish to destroy the resources, you can use pulumi destroy.
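    A typical workflow with this program looks like the following (the output name assumes the exports shown above):

    ```shell
    pulumi up                          # preview, then provision the resources
    pulumi stack output kafkaTopicName # inspect an exported value
    pulumi destroy                     # tear down everything the program created
    ```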