

# Kafka Data Source

The Kafka data source provides information about an existing Aiven Kafka service.

## Example Usage

```csharp
using Pulumi;
using Aiven = Pulumi.Aiven;

class MyStack : Stack
{
    public MyStack()
    {
        // Look up an existing Kafka service by project and service name.
        var kafka1 = Output.Create(Aiven.GetKafka.InvokeAsync(new Aiven.GetKafkaArgs
        {
            Project = "my-project", // name of the project the service belongs to
            ServiceName = "my-kafka1",
        }));
    }
}
```

```go
package main

import (
    "github.com/pulumi/pulumi-aiven/sdk/v3/go/aiven"
    "github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        // Look up an existing Kafka service by project and service name.
        _, err := aiven.LookupKafka(ctx, &aiven.LookupKafkaArgs{
            Project:     "my-project", // name of the project the service belongs to
            ServiceName: "my-kafka1",
        }, nil)
        if err != nil {
            return err
        }
        return nil
    })
}
```

```python
import pulumi
import pulumi_aiven as aiven

# Look up an existing Kafka service by project and service name.
kafka1 = aiven.get_kafka(project="my-project",  # name of the project the service belongs to
    service_name="my-kafka1")
```

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

// Look up an existing Kafka service by project and service name.
const kafka1 = aiven.getKafka({
    project: "my-project", // name of the project the service belongs to
    serviceName: "my-kafka1",
});
```

## Using getKafka

```typescript
function getKafka(args: GetKafkaArgs, opts?: InvokeOptions): Promise<GetKafkaResult>
```

```python
def get_kafka(cloud_name: Optional[str] = None,
              components: Optional[Sequence[GetKafkaComponentArgs]] = None,
              default_acl: Optional[bool] = None,
              kafka: Optional[GetKafkaKafkaArgs] = None,
              kafka_user_config: Optional[GetKafkaKafkaUserConfigArgs] = None,
              maintenance_window_dow: Optional[str] = None,
              maintenance_window_time: Optional[str] = None,
              plan: Optional[str] = None,
              project: Optional[str] = None,
              project_vpc_id: Optional[str] = None,
              service_host: Optional[str] = None,
              service_integrations: Optional[Sequence[GetKafkaServiceIntegrationArgs]] = None,
              service_name: Optional[str] = None,
              service_password: Optional[str] = None,
              service_port: Optional[int] = None,
              service_type: Optional[str] = None,
              service_uri: Optional[str] = None,
              service_username: Optional[str] = None,
              state: Optional[str] = None,
              termination_protection: Optional[bool] = None,
              opts: Optional[InvokeOptions] = None) -> GetKafkaResult
```

```go
func LookupKafka(ctx *Context, args *LookupKafkaArgs, opts ...InvokeOption) (*LookupKafkaResult, error)
```

Note: This function is named LookupKafka in the Go SDK.

```csharp
public static class GetKafka {
    public static Task<GetKafkaResult> InvokeAsync(GetKafkaArgs args, InvokeOptions? opts = null)
}
```
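
For example, the Node.js form returns a promise that can be chained to reach the result's output properties (a minimal sketch; the project and service names below are placeholders):

```typescript
import * as aiven from "@pulumi/aiven";

// Resolve the data source and surface a couple of its output properties.
const kafka = aiven.getKafka({
    project: "my-project",     // placeholder project name
    serviceName: "my-kafka1",  // placeholder service name
});

export const kafkaHost = kafka.then(k => k.serviceHost);
export const kafkaPort = kafka.then(k => k.servicePort);
```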

The following arguments are supported:

Project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, reference the project resource or data source rather than a literal project name. The project cannot be changed later without destroying and re-creating the service.

ServiceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be chosen based on the intended service usage rather than on current attributes.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

Components List<GetKafkaComponentArgs>
DefaultAcl bool
Kafka GetKafkaKafkaArgs

Kafka server provided values:

KafkaUserConfig GetKafkaKafkaUserConfigArgs

defines Kafka specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

ServiceHost string

Kafka hostname.

ServiceIntegrations List<GetKafkaServiceIntegrationArgs>
ServicePassword string

Password used for connecting to the Kafka service, if applicable.

ServicePort int

Kafka port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka service.

ServiceUsername string

Username used for connecting to the Kafka service, if applicable.

State string

Service state.

TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

Project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, reference the project resource or data source rather than a literal project name. The project cannot be changed later without destroying and re-creating the service.

ServiceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be chosen based on the intended service usage rather than on current attributes.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

Components []GetKafkaComponent
DefaultAcl bool
Kafka GetKafkaKafka

Kafka server provided values:

KafkaUserConfig GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

ServiceHost string

Kafka hostname.

ServiceIntegrations []GetKafkaServiceIntegration
ServicePassword string

Password used for connecting to the Kafka service, if applicable.

ServicePort int

Kafka port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka service.

ServiceUsername string

Username used for connecting to the Kafka service, if applicable.

State string

Service state.

TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, reference the project resource or data source rather than a literal project name. The project cannot be changed later without destroying and re-creating the service.

serviceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be chosen based on the intended service usage rather than on current attributes.

cloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

components GetKafkaComponent[]
defaultAcl boolean
kafka GetKafkaKafka

Kafka server provided values:

kafkaUserConfig GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

maintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

maintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

projectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

serviceHost string

Kafka hostname.

serviceIntegrations GetKafkaServiceIntegration[]
servicePassword string

Password used for connecting to the Kafka service, if applicable.

servicePort number

Kafka port.

serviceType string
serviceUri string

URI for connecting to the Kafka service.

serviceUsername string

Username used for connecting to the Kafka service, if applicable.

state string

Service state.

terminationProtection boolean

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

project str

identifies the project the service belongs to. To set up a proper dependency between the project and the service, reference the project resource or data source rather than a literal project name. The project cannot be changed later without destroying and re-creating the service.

service_name str

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be chosen based on the intended service usage rather than on current attributes.

cloud_name str

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

components Sequence[GetKafkaComponentArgs]
default_acl bool
kafka GetKafkaKafkaArgs

Kafka server provided values:

kafka_user_config GetKafkaKafkaUserConfigArgs

defines Kafka specific additional configuration options. The following configuration options are available:

maintenance_window_dow str

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

maintenance_window_time str

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan str

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

project_vpc_id str

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

service_host str

Kafka hostname.

service_integrations Sequence[GetKafkaServiceIntegrationArgs]
service_password str

Password used for connecting to the Kafka service, if applicable.

service_port int

Kafka port.

service_type str
service_uri str

URI for connecting to the Kafka service.

service_username str

Username used for connecting to the Kafka service, if applicable.

state str

Service state.

termination_protection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

## getKafka Result

The following output properties are available:

Components List<GetKafkaComponent>
Id string

The provider-assigned unique ID for this managed resource.

Kafka GetKafkaKafka

Kafka server provided values:

Project string
ServiceHost string

Kafka hostname.

ServiceName string
ServicePassword string

Password used for connecting to the Kafka service, if applicable.

ServicePort int

Kafka port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka service.

ServiceUsername string

Username used for connecting to the Kafka service, if applicable.

State string

Service state.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

DefaultAcl bool
KafkaUserConfig GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

ServiceIntegrations List<GetKafkaServiceIntegration>
TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

Components []GetKafkaComponent
Id string

The provider-assigned unique ID for this managed resource.

Kafka GetKafkaKafka

Kafka server provided values:

Project string
ServiceHost string

Kafka hostname.

ServiceName string
ServicePassword string

Password used for connecting to the Kafka service, if applicable.

ServicePort int

Kafka port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka service.

ServiceUsername string

Username used for connecting to the Kafka service, if applicable.

State string

Service state.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

DefaultAcl bool
KafkaUserConfig GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

ServiceIntegrations []GetKafkaServiceIntegration
TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

components GetKafkaComponent[]
id string

The provider-assigned unique ID for this managed resource.

kafka GetKafkaKafka

Kafka server provided values:

project string
serviceHost string

Kafka hostname.

serviceName string
servicePassword string

Password used for connecting to the Kafka service, if applicable.

servicePort number

Kafka port.

serviceType string
serviceUri string

URI for connecting to the Kafka service.

serviceUsername string

Username used for connecting to the Kafka service, if applicable.

state string

Service state.

cloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

defaultAcl boolean
kafkaUserConfig GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

maintenanceWindowDow string

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

maintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

projectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

serviceIntegrations GetKafkaServiceIntegration[]
terminationProtection boolean

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.

components Sequence[GetKafkaComponent]
id str

The provider-assigned unique ID for this managed resource.

kafka GetKafkaKafka

Kafka server provided values:

project str
service_host str

Kafka hostname.

service_name str
service_password str

Password used for connecting to the Kafka service, if applicable.

service_port int

Kafka port.

service_type str
service_uri str

URI for connecting to the Kafka service.

service_username str

Username used for connecting to the Kafka service, if applicable.

state str

Service state.

cloud_name str

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value triggers a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name. Region names are documented in each cloud provider's own support articles.

default_acl bool
kafka_user_config GetKafkaKafkaUserConfig

defines Kafka specific additional configuration options. The following configuration options are available:

maintenance_window_dow str

day of week when maintenance operations should be performed (monday, tuesday, wednesday, etc.).

maintenance_window_time str

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan str

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

project_vpc_id str

optionally specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to the VPC resource (rather than a literal ID) to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into or out of a VPC after creation, but doing so triggers a migration to new servers, which can take a significant amount of time if the service holds a lot of data.

service_integrations Sequence[GetKafkaServiceIntegration]
termination_protection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup if it is accidentally deleted.
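
Because service_uri and service_password carry connection credentials, it can be helpful to wrap them as secrets when exporting them from a stack. A minimal TypeScript sketch (project and service names are placeholders):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

const kafka = aiven.getKafka({
    project: "my-project",     // placeholder
    serviceName: "my-kafka1",  // placeholder
});

// Mark credential-bearing outputs as secrets so Pulumi encrypts them in state and CLI output.
export const kafkaUri = pulumi.secret(kafka.then(k => k.serviceUri));
export const kafkaPassword = pulumi.secret(kafka.then(k => k.servicePassword));
```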

## Supporting Types

### GetKafkaComponent

Component string
Host string
KafkaAuthenticationMethod string
Port int
Route string
Ssl bool
Usage string
Component string
Host string
KafkaAuthenticationMethod string
Port int
Route string
Ssl bool
Usage string
component string
host string
kafkaAuthenticationMethod string
port number
route string
ssl boolean
usage string
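
Each entry in the components list describes one endpoint of the service. As an illustration, the broker component's advertised host and port could be assembled into a bootstrap address like this (a sketch; using "kafka" as the component name is an assumption about Aiven's component naming, not something this page confirms):

```typescript
import * as aiven from "@pulumi/aiven";

// Build a host:port bootstrap address from the service's component list.
export const bootstrapServers = aiven.getKafka({
    project: "my-project",     // placeholder
    serviceName: "my-kafka1",  // placeholder
}).then(k => {
    // "kafka" as a component name is an assumption.
    const broker = k.components.find(c => c.component === "kafka");
    return broker ? `${broker.host}:${broker.port}` : undefined;
});
```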

### GetKafkaKafka

AccessCert string

The Kafka client certificate

AccessKey string

The Kafka client certificate key

ConnectUri string

The Kafka Connect URI, if any

RestUri string

The Kafka REST URI, if any

SchemaRegistryUri string

The Schema Registry URI, if any

AccessCert string

The Kafka client certificate

AccessKey string

The Kafka client certificate key

ConnectUri string

The Kafka Connect URI, if any

RestUri string

The Kafka REST URI, if any

SchemaRegistryUri string

The Schema Registry URI, if any

accessCert string

The Kafka client certificate

accessKey string

The Kafka client certificate key

connectUri string

The Kafka Connect URI, if any

restUri string

The Kafka REST URI, if any

schemaRegistryUri string

The Schema Registry URI, if any

access_cert str

The Kafka client certificate

access_key str

The Kafka client certificate key

connect_uri str

The Kafka Connect URI, if any

rest_uri str

The Kafka REST URI, if any

schema_registry_uri str

The Schema Registry URI, if any
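
The client certificate and key returned under the kafka block authenticate TLS clients, so they deserve the same secret handling as passwords. A minimal TypeScript sketch (names are placeholders):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

const kafka = aiven.getKafka({
    project: "my-project",     // placeholder
    serviceName: "my-kafka1",  // placeholder
});

// Keep the client certificate and key encrypted in state and CLI output.
export const kafkaClientCert = pulumi.secret(kafka.then(k => k.kafka.accessCert));
export const kafkaClientKey = pulumi.secret(kafka.then(k => k.kafka.accessKey));
```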

### GetKafkaKafkaUserConfig

CustomDomain string

Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.

IpFilters List<string>

Allow incoming connections from CIDR address block, e.g. ‘10.20.0.0/16’.

Kafka GetKafkaKafkaUserConfigKafkaArgs

Kafka server provided values:

KafkaAuthenticationMethods GetKafkaKafkaUserConfigKafkaAuthenticationMethodsArgs

Kafka authentication methods

KafkaConnect string

Enable kafka_connect

KafkaConnectConfig GetKafkaKafkaUserConfigKafkaConnectConfigArgs

Kafka Connect configuration values

KafkaRest string

Enable kafka_rest

KafkaRestConfig GetKafkaKafkaUserConfigKafkaRestConfigArgs

Kafka-REST configuration

KafkaVersion string

Kafka major version

PrivateAccess GetKafkaKafkaUserConfigPrivateAccessArgs

Allow access to selected service ports from private networks

PrivatelinkAccess GetKafkaKafkaUserConfigPrivatelinkAccessArgs

Allow access to selected service components through Privatelink

PublicAccess GetKafkaKafkaUserConfigPublicAccessArgs

Allow access to selected service ports from the public Internet

SchemaRegistry string

Enable schema_registry

SchemaRegistryConfig GetKafkaKafkaUserConfigSchemaRegistryConfigArgs

Schema Registry configuration

CustomDomain string

Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.

IpFilters []string

Allow incoming connections from CIDR address block, e.g. ‘10.20.0.0/16’.

Kafka GetKafkaKafkaUserConfigKafka

Kafka server provided values:

KafkaAuthenticationMethods GetKafkaKafkaUserConfigKafkaAuthenticationMethods

Kafka authentication methods

KafkaConnect string

Enable kafka_connect

KafkaConnectConfig GetKafkaKafkaUserConfigKafkaConnectConfig

Kafka Connect configuration values

KafkaRest string

Enable kafka_rest

KafkaRestConfig GetKafkaKafkaUserConfigKafkaRestConfig

Kafka-REST configuration

KafkaVersion string

Kafka major version

PrivateAccess GetKafkaKafkaUserConfigPrivateAccess

Allow access to selected service ports from private networks

PrivatelinkAccess GetKafkaKafkaUserConfigPrivatelinkAccess

Allow access to selected service components through Privatelink

PublicAccess GetKafkaKafkaUserConfigPublicAccess

Allow access to selected service ports from the public Internet

SchemaRegistry string

Enable schema_registry

SchemaRegistryConfig GetKafkaKafkaUserConfigSchemaRegistryConfig

Schema Registry configuration

customDomain string

Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.

ipFilters string[]

Allow incoming connections from CIDR address block, e.g. ‘10.20.0.0/16’.

kafka GetKafkaKafkaUserConfigKafka

Kafka server provided values:

kafkaAuthenticationMethods GetKafkaKafkaUserConfigKafkaAuthenticationMethods

Kafka authentication methods

kafkaConnect string

Enable kafka_connect

kafkaConnectConfig GetKafkaKafkaUserConfigKafkaConnectConfig

Kafka Connect configuration values

kafkaRest string

Enable kafka_rest

kafkaRestConfig GetKafkaKafkaUserConfigKafkaRestConfig

Kafka-REST configuration

kafkaVersion string

Kafka major version

privateAccess GetKafkaKafkaUserConfigPrivateAccess

Allow access to selected service ports from private networks

privatelinkAccess GetKafkaKafkaUserConfigPrivatelinkAccess

Allow access to selected service components through Privatelink

publicAccess GetKafkaKafkaUserConfigPublicAccess

Allow access to selected service ports from the public Internet

schemaRegistry string

Enable schema_registry

schemaRegistryConfig GetKafkaKafkaUserConfigSchemaRegistryConfig

Schema Registry configuration

custom_domain str

Serve the web frontend using a custom CNAME pointing to the Aiven DNS name.

ip_filters Sequence[str]

Allow incoming connections from CIDR address block, e.g. ‘10.20.0.0/16’.

kafka GetKafkaKafkaUserConfigKafkaArgs

Kafka server provided values:

kafka_authentication_methods GetKafkaKafkaUserConfigKafkaAuthenticationMethodsArgs

Kafka authentication methods

kafka_connect str

Enable kafka_connect

kafka_connect_config GetKafkaKafkaUserConfigKafkaConnectConfigArgs

Kafka Connect configuration values

kafka_rest str

Enable kafka_rest

kafka_rest_config GetKafkaKafkaUserConfigKafkaRestConfigArgs

Kafka-REST configuration

kafka_version str

Kafka major version

private_access GetKafkaKafkaUserConfigPrivateAccessArgs

Allow access to selected service ports from private networks

privatelink_access GetKafkaKafkaUserConfigPrivatelinkAccessArgs

Allow access to selected service components through Privatelink

public_access GetKafkaKafkaUserConfigPublicAccessArgs

Allow access to selected service ports from the public Internet

schema_registry str

Enable schema_registry

schema_registry_config GetKafkaKafkaUserConfigSchemaRegistryConfigArgs

Schema Registry configuration
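
When read back from an existing service, these user-config values can be inspected directly; for instance the Kafka version and whether Schema Registry is enabled (a sketch; property names follow the TypeScript listing above, and the project and service names are placeholders):

```typescript
import * as aiven from "@pulumi/aiven";

const kafkaInfo = aiven.getKafka({
    project: "my-project",     // placeholder
    serviceName: "my-kafka1",  // placeholder
});

// Inspect a few user-config values returned for the existing service.
export const kafkaVersion = kafkaInfo.then(k => k.kafkaUserConfig?.kafkaVersion);
export const schemaRegistryEnabled = kafkaInfo.then(k => k.kafkaUserConfig?.schemaRegistry);
```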

### GetKafkaKafkaUserConfigKafka

AutoCreateTopicsEnable string

Enable auto creation of topics

CompressionType string

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (‘gzip’, ‘snappy’, ‘lz4’, ‘zstd’). It additionally accepts ‘uncompressed’ which is equivalent to no compression; and ‘producer’ which means retain the original compression codec set by the producer.

ConnectionsMaxIdleMs string

Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.

DefaultReplicationFactor string

Replication factor for autocreated topics

GroupMaxSessionTimeoutMs string

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

GroupMinSessionTimeoutMs string

The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

LogCleanerDeleteRetentionMs string
LogCleanerMaxCompactionLagMs string

The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.

LogCleanerMinCleanableRatio string

Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.

LogCleanerMinCompactionLagMs string

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

LogCleanupPolicy string

The default cleanup policy for segments beyond the retention window.

LogFlushIntervalMessages string

The number of messages accumulated on a log partition before messages are flushed to disk.

LogFlushIntervalMs string

The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

LogIndexIntervalBytes string

The interval with which Kafka adds an entry to the offset index.

LogIndexSizeMaxBytes string

The maximum size in bytes of the offset index.

LogMessageDownconversionEnable string

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

LogMessageTimestampDifferenceMaxMs string

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message

LogMessageTimestampType string

Define whether the timestamp in the message is message create time or log append time.

LogPreallocate string

Whether to preallocate the file when creating a new segment.

LogRetentionBytes string

The maximum size of the log before deleting messages

LogRetentionHours string

The number of hours to keep a log file before deleting it.

LogRetentionMs string

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

LogRollJitterMs string

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.

LogRollMs string

The maximum time before a new log segment is rolled out (in milliseconds).

LogSegmentBytes string

The maximum size of a single log file

LogSegmentDeleteDelayMs string

The amount of time to wait before deleting a file from the filesystem.

MaxConnectionsPerIp string

The maximum number of connections allowed from each ip address (defaults to 2147483647).

MaxIncrementalFetchSessionCacheSlots string

The maximum number of incremental fetch sessions that the broker will maintain.

MessageMaxBytes string

The maximum size of message that the server can receive.

MinInsyncReplicas string

When a producer sets acks to ‘all’ (or ‘-1’), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.

NumPartitions string

Number of partitions for autocreated topics

OffsetsRetentionMinutes string

Log retention window in minutes for offsets topic.

ProducerPurgatoryPurgeIntervalRequests string

The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).

ReplicaFetchMaxBytes string

The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.

ReplicaFetchResponseMaxBytes string

Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.

SocketRequestMaxBytes string

The maximum number of bytes in a socket request (defaults to 104857600).

TransactionRemoveExpiredTransactionCleanupIntervalMs string

The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).

TransactionStateLogSegmentBytes string

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).

AutoCreateTopicsEnable string

Enable auto creation of topics

CompressionType string

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (‘gzip’, ‘snappy’, ‘lz4’, ‘zstd’). It additionally accepts ‘uncompressed’ which is equivalent to no compression; and ‘producer’ which means retain the original compression codec set by the producer.

ConnectionsMaxIdleMs string

Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.

DefaultReplicationFactor string

Replication factor for autocreated topics

GroupMaxSessionTimeoutMs string

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

GroupMinSessionTimeoutMs string

The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

LogCleanerDeleteRetentionMs string
LogCleanerMaxCompactionLagMs string

The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.

LogCleanerMinCleanableRatio string

Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.

LogCleanerMinCompactionLagMs string

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

LogCleanupPolicy string

The default cleanup policy for segments beyond the retention window.

LogFlushIntervalMessages string

The number of messages accumulated on a log partition before messages are flushed to disk.

LogFlushIntervalMs string

The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

LogIndexIntervalBytes string

The interval with which Kafka adds an entry to the offset index.

LogIndexSizeMaxBytes string

The maximum size in bytes of the offset index.

LogMessageDownconversionEnable string

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

LogMessageTimestampDifferenceMaxMs string

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message

LogMessageTimestampType string

Define whether the timestamp in the message is message create time or log append time.

LogPreallocate string

Whether to preallocate the file when creating a new segment.

LogRetentionBytes string

The maximum size of the log before deleting messages

LogRetentionHours string

The number of hours to keep a log file before deleting it.

LogRetentionMs string

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

LogRollJitterMs string

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.

LogRollMs string

The maximum time before a new log segment is rolled out (in milliseconds).

LogSegmentBytes string

The maximum size of a single log file

LogSegmentDeleteDelayMs string

The amount of time to wait before deleting a file from the filesystem.

MaxConnectionsPerIp string

The maximum number of connections allowed from each ip address (defaults to 2147483647).

MaxIncrementalFetchSessionCacheSlots string

The maximum number of incremental fetch sessions that the broker will maintain.

MessageMaxBytes string

The maximum size of message that the server can receive.

MinInsyncReplicas string

When a producer sets acks to ‘all’ (or ‘-1’), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.

NumPartitions string

Number of partitions for autocreated topics

OffsetsRetentionMinutes string

Log retention window in minutes for offsets topic.

ProducerPurgatoryPurgeIntervalRequests string

The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).

ReplicaFetchMaxBytes string

The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.

ReplicaFetchResponseMaxBytes string

Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.

SocketRequestMaxBytes string

The maximum number of bytes in a socket request (defaults to 104857600).

TransactionRemoveExpiredTransactionCleanupIntervalMs string

The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).

TransactionStateLogSegmentBytes string

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).

autoCreateTopicsEnable string

Enable auto creation of topics

compressionType string

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (‘gzip’, ‘snappy’, ‘lz4’, ‘zstd’). It additionally accepts ‘uncompressed’ which is equivalent to no compression; and ‘producer’ which means retain the original compression codec set by the producer.

connectionsMaxIdleMs string

Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.

defaultReplicationFactor string

Replication factor for autocreated topics

groupMaxSessionTimeoutMs string

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

groupMinSessionTimeoutMs string

The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

logCleanerDeleteRetentionMs string
logCleanerMaxCompactionLagMs string

The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.

logCleanerMinCleanableRatio string

Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.

logCleanerMinCompactionLagMs string

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

logCleanupPolicy string

The default cleanup policy for segments beyond the retention window.

logFlushIntervalMessages string

The number of messages accumulated on a log partition before messages are flushed to disk.

logFlushIntervalMs string

The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

logIndexIntervalBytes string

The interval with which Kafka adds an entry to the offset index.

logIndexSizeMaxBytes string

The maximum size in bytes of the offset index.

logMessageDownconversionEnable string

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

logMessageTimestampDifferenceMaxMs string

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message

logMessageTimestampType string

Define whether the timestamp in the message is message create time or log append time.

logPreallocate string

Whether to preallocate the file when creating a new segment.

logRetentionBytes string

The maximum size of the log before deleting messages

logRetentionHours string

The number of hours to keep a log file before deleting it.

logRetentionMs string

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

logRollJitterMs string

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.

logRollMs string

The maximum time before a new log segment is rolled out (in milliseconds).

logSegmentBytes string

The maximum size of a single log file

logSegmentDeleteDelayMs string

The amount of time to wait before deleting a file from the filesystem.

maxConnectionsPerIp string

The maximum number of connections allowed from each ip address (defaults to 2147483647).

maxIncrementalFetchSessionCacheSlots string

The maximum number of incremental fetch sessions that the broker will maintain.

messageMaxBytes string

The maximum size of message that the server can receive.

minInsyncReplicas string

When a producer sets acks to ‘all’ (or ‘-1’), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.

numPartitions string

Number of partitions for autocreated topics

offsetsRetentionMinutes string

Log retention window in minutes for offsets topic.

producerPurgatoryPurgeIntervalRequests string

The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).

replicaFetchMaxBytes string

The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.

replicaFetchResponseMaxBytes string

Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.

socketRequestMaxBytes string

The maximum number of bytes in a socket request (defaults to 104857600).

transactionRemoveExpiredTransactionCleanupIntervalMs string

The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).

transactionStateLogSegmentBytes string

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).

auto_create_topics_enable str

Enable auto creation of topics

compression_type str

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (‘gzip’, ‘snappy’, ‘lz4’, ‘zstd’). It additionally accepts ‘uncompressed’ which is equivalent to no compression; and ‘producer’ which means retain the original compression codec set by the producer.

connections_max_idle_ms str

Idle connections timeout: the server socket processor threads close the connections that idle for longer than this.

default_replication_factor str

Replication factor for autocreated topics

group_max_session_timeout_ms str

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

group_min_session_timeout_ms str

The minimum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

log_cleaner_delete_retention_ms str
log_cleaner_max_compaction_lag_ms str

The maximum amount of time a message will remain uncompacted. Only applicable for logs that are being compacted.

log_cleaner_min_cleanable_ratio str

Controls log compactor frequency. Larger value means more frequent compactions but also more space wasted for logs. Consider setting log.cleaner.max.compaction.lag.ms to enforce compactions sooner, instead of setting a very high value for this option.

log_cleaner_min_compaction_lag_ms str

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

log_cleanup_policy str

The default cleanup policy for segments beyond the retention window.

log_flush_interval_messages str

The number of messages accumulated on a log partition before messages are flushed to disk.

log_flush_interval_ms str

The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

log_index_interval_bytes str

The interval with which Kafka adds an entry to the offset index.

log_index_size_max_bytes str

The maximum size in bytes of the offset index.

log_message_downconversion_enable str

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

log_message_timestamp_difference_max_ms str

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message

log_message_timestamp_type str

Define whether the timestamp in the message is message create time or log append time.

log_preallocate str

Whether to preallocate the file when creating a new segment.

log_retention_bytes str

The maximum size of the log before deleting messages

log_retention_hours str

The number of hours to keep a log file before deleting it.

log_retention_ms str

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

log_roll_jitter_ms str

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used.

log_roll_ms str

The maximum time before a new log segment is rolled out (in milliseconds).

log_segment_bytes str

The maximum size of a single log file

log_segment_delete_delay_ms str

The amount of time to wait before deleting a file from the filesystem.

max_connections_per_ip str

The maximum number of connections allowed from each ip address (defaults to 2147483647).

max_incremental_fetch_session_cache_slots str

The maximum number of incremental fetch sessions that the broker will maintain.

message_max_bytes str

The maximum size of message that the server can receive.

min_insync_replicas str

When a producer sets acks to ‘all’ (or ‘-1’), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.

num_partitions str

Number of partitions for autocreated topics

offsets_retention_minutes str

Log retention window in minutes for offsets topic.

producer_purgatory_purge_interval_requests str

The purge interval (in number of requests) of the producer request purgatory (defaults to 1000).

replica_fetch_max_bytes str

The number of bytes of messages to attempt to fetch for each partition (defaults to 1048576). This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made.

replica_fetch_response_max_bytes str

Maximum bytes expected for the entire fetch response (defaults to 10485760). Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum.

socket_request_max_bytes str

The maximum number of bytes in a socket request (defaults to 104857600).

transaction_remove_expired_transaction_cleanup_interval_ms str

The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing (defaults to 3600000 (1 hour)).

transaction_state_log_segment_bytes str

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads (defaults to 104857600 (100 mebibytes)).
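
Note that these broker-level settings are returned as strings even when they are numeric, so they may need to be parsed before use. A small TypeScript sketch under that assumption (project and service names are placeholders):

```typescript
import * as aiven from "@pulumi/aiven";

const kafkaInfo = aiven.getKafka({
    project: "my-project",     // placeholder
    serviceName: "my-kafka1",  // placeholder
});

// Numeric broker settings come back as strings; convert them when a number is needed.
export const logRetentionHours = kafkaInfo.then(k => {
    const v = k.kafkaUserConfig?.kafka?.logRetentionHours;
    return v !== undefined ? Number(v) : undefined;
});
```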

GetKafkaKafkaUserConfigKafkaAuthenticationMethods

Certificate string

Enable certificate/SSL authentication

Sasl string

Enable SASL authentication

certificate string

Enable certificate/SSL authentication

sasl string

Enable SASL authentication

certificate str

Enable certificate/SSL authentication

sasl str

Enable SASL authentication
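
As a quick illustration (a sketch only; it assumes the nested field kafkaUserConfig.kafkaAuthenticationMethods and that the enable flags come back as the strings shown above), you could check whether SASL authentication is enabled:

import * as aiven from "@pulumi/aiven";

const kafka1 = aiven.getKafka({
    project: "my-project",       // placeholder
    serviceName: "my-kafka1",    // placeholder
});

// The enable flags are listed as strings, so compare against "true".
export const saslEnabled = kafka1.then(k =>
    k.kafkaUserConfig?.kafkaAuthenticationMethods?.sasl === "true");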

GetKafkaKafkaUserConfigKafkaConnectConfig

ConnectorClientConfigOverridePolicy string

Defines what client configurations can be overridden by the connector. Default is None

ConsumerAutoOffsetReset string

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

ConsumerFetchMaxBytes string

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

ConsumerIsolationLevel string

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

ConsumerMaxPartitionFetchBytes string

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

ConsumerMaxPollIntervalMs string

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

ConsumerMaxPollRecords string

The maximum number of records returned in a single call to poll() (defaults to 500).

OffsetFlushIntervalMs string

The interval at which to try committing offsets for tasks (defaults to 60000).

OffsetFlushTimeoutMs string

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

ProducerMaxRequestSize string

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

SessionTimeoutMs string

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

connectorClientConfigOverridePolicy string

Defines what client configurations can be overridden by the connector. Default is None

consumerAutoOffsetReset string

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

consumerFetchMaxBytes string

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

consumerIsolationLevel string

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

consumerMaxPartitionFetchBytes string

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

consumerMaxPollIntervalMs string

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

consumerMaxPollRecords string

The maximum number of records returned in a single call to poll() (defaults to 500).

offsetFlushIntervalMs string

The interval at which to try committing offsets for tasks (defaults to 60000).

offsetFlushTimeoutMs string

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

producerMaxRequestSize string

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

sessionTimeoutMs string

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

connector_client_config_override_policy str

Defines what client configurations can be overridden by the connector. Default is None

consumer_auto_offset_reset str

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

consumer_fetch_max_bytes str

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

consumer_isolation_level str

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

consumer_max_partition_fetch_bytes str

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

consumer_max_poll_interval_ms str

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

consumer_max_poll_records str

The maximum number of records returned in a single call to poll() (defaults to 500).

offset_flush_interval_ms str

The interval at which to try committing offsets for tasks (defaults to 60000).

offset_flush_timeout_ms str

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

producer_max_request_size str

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

session_timeout_ms str

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
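
A sketch of surfacing a couple of the Kafka Connect worker settings as stack outputs, assuming the result exposes them under kafkaUserConfig.kafkaConnectConfig with the camelCase names listed above. This variant wraps the promise in pulumi.output so the values can be combined with other outputs.

import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

// Wrap the lookup in pulumi.output so the values behave like regular outputs.
const kafka1 = pulumi.output(aiven.getKafka({
    project: "my-project",       // placeholder
    serviceName: "my-kafka1",    // placeholder
}));

export const connectIsolationLevel = kafka1.apply(k =>
    k.kafkaUserConfig?.kafkaConnectConfig?.consumerIsolationLevel);
export const connectMaxPollRecords = kafka1.apply(k =>
    k.kafkaUserConfig?.kafkaConnectConfig?.consumerMaxPollRecords);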

GetKafkaKafkaUserConfigKafkaRestConfig

ConsumerEnableAutoCommit string

If true, the consumer’s offset will be periodically committed to Kafka in the background.

ConsumerRequestMaxBytes string

Maximum number of bytes in unencoded message keys and values in a single request.

ConsumerRequestTimeoutMs string

The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached

ProducerAcks string

The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to ‘all’ or ‘-1’, the leader will wait for the full set of in-sync replicas to acknowledge the record.

ProducerLingerMs string

Wait for up to the given delay to allow batching records together

SimpleconsumerPoolSizeMax string

Maximum number of SimpleConsumers that can be instantiated per broker.

consumerEnableAutoCommit string

If true, the consumer’s offset will be periodically committed to Kafka in the background.

consumerRequestMaxBytes string

Maximum number of bytes in unencoded message keys and values in a single request.

consumerRequestTimeoutMs string

The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached

producerAcks string

The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to ‘all’ or ‘-1’, the leader will wait for the full set of in-sync replicas to acknowledge the record.

producerLingerMs string

Wait for up to the given delay to allow batching records together

simpleconsumerPoolSizeMax string

Maximum number of SimpleConsumers that can be instantiated per broker.

consumer_enable_auto_commit str

If true, the consumer’s offset will be periodically committed to Kafka in the background.

consumer_request_max_bytes str

Maximum number of bytes in unencoded message keys and values in a single request.

consumer_request_timeout_ms str

The maximum total time to wait for messages for a request if the maximum number of messages has not yet been reached

producer_acks str

The number of acknowledgments the producer requires the leader to have received before considering a request complete. If set to ‘all’ or ‘-1’, the leader will wait for the full set of in-sync replicas to acknowledge the record.

producer_linger_ms str

Wait for up to the given delay to allow batching records together

simpleconsumer_pool_size_max str

Maximum number of SimpleConsumers that can be instantiated per broker.

GetKafkaKafkaUserConfigPrivateAccess

Prometheus string

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

prometheus string

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

prometheus str

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

GetKafkaKafkaUserConfigPrivatelinkAccess

Kafka string

Enable kafka

KafkaConnect string

Enable kafka_connect

KafkaRest string

Enable kafka_rest

SchemaRegistry string

Enable schema_registry

kafka string

Enable kafka

kafkaConnect string

Enable kafka_connect

kafkaRest string

Enable kafka_rest

schemaRegistry string

Enable schema_registry

kafka str

Enable kafka

kafka_connect str

Enable kafka_connect

kafka_rest str

Enable kafka_rest

schema_registry str

Enable schema_registry

GetKafkaKafkaUserConfigPublicAccess

Kafka string

Enable kafka

KafkaConnect string

Enable kafka_connect

KafkaRest string

Enable kafka_rest

Prometheus string

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

SchemaRegistry string

Enable schema_registry

kafka string

Enable kafka

kafkaConnect string

Enable kafka_connect

kafkaRest string

Enable kafka_rest

prometheus string

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

schemaRegistry string

Enable schema_registry

kafka str

Enable kafka

kafka_connect str

Enable kafka_connect

kafka_rest str

Enable kafka_rest

prometheus str

Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network

schema_registry str

Enable schema_registry
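
For example (again a sketch, assuming the flags appear under kafkaUserConfig.publicAccess as strings, mirroring the listings above), you can inspect which interfaces are reachable from the public internet:

import * as aiven from "@pulumi/aiven";

const kafka1 = aiven.getKafka({
    project: "my-project",       // placeholder
    serviceName: "my-kafka1",    // placeholder
});

// Flags indicating which public-facing interfaces are enabled on this service.
export const publicKafkaRest = kafka1.then(k =>
    k.kafkaUserConfig?.publicAccess?.kafkaRest);
export const publicPrometheus = kafka1.then(k =>
    k.kafkaUserConfig?.publicAccess?.prometheus);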

GetKafkaKafkaUserConfigSchemaRegistryConfig

LeaderEligibility string

If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to ‘true’.

TopicName string

The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It’s only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to ‘_schemas’.

leaderEligibility string

If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to ‘true’.

topicName string

The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It’s only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to ‘_schemas’.

leader_eligibility str

If true, Karapace / Schema Registry on the service nodes can participate in leader election. It might be needed to disable this when the schemas topic is replicated to a secondary cluster and Karapace / Schema Registry there must not participate in leader election. Defaults to ‘true’.

topic_name str

The durable single partition topic that acts as the durable log for the data. This topic must be compacted to avoid losing data due to retention policy. Please note that changing this configuration in an existing Schema Registry / Karapace setup leads to previous schemas being inaccessible, data encoded with them potentially unreadable and schema ID sequence put out of order. It’s only possible to do the switch while Schema Registry / Karapace is disabled. Defaults to ‘_schemas’.
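
One last sketch, assuming the nested result field is kafkaUserConfig.schemaRegistryConfig: the Karapace / Schema Registry settings can be read the same way.

import * as aiven from "@pulumi/aiven";

const kafka1 = aiven.getKafka({
    project: "my-project",       // placeholder
    serviceName: "my-kafka1",    // placeholder
});

// The schemas topic defaults to "_schemas" unless it has been overridden.
export const schemasTopic = kafka1.then(k =>
    k.kafkaUserConfig?.schemaRegistryConfig?.topicName);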

GetKafkaServiceIntegration

Package Details

Repository
https://github.com/pulumi/pulumi-aiven
License
Apache-2.0
Notes
This Pulumi package is based on the aiven Terraform Provider.