

# Kafka Connect Data Source

The Kafka Connect data source provides information about an existing Aiven Kafka Connect service.

## Example Usage

### C#

```csharp
using Pulumi;
using Aiven = Pulumi.Aiven;

class MyStack : Stack
{
    public MyStack()
    {
        var kc1 = Output.Create(Aiven.GetKafkaConnect.InvokeAsync(new Aiven.GetKafkaConnectArgs
        {
            // Name of an existing Aiven project ("my-project" is a placeholder).
            Project = "my-project",
            ServiceName = "my-kc1",
        }));
    }
}
```

### Go

```go
package main

import (
    "github.com/pulumi/pulumi-aiven/sdk/v3/go/aiven"
    "github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        // Name of an existing Aiven project ("my-project" is a placeholder).
        _, err := aiven.LookupKafkaConnect(ctx, &aiven.LookupKafkaConnectArgs{
            Project:     "my-project",
            ServiceName: "my-kc1",
        }, nil)
        if err != nil {
            return err
        }
        return nil
    })
}
```

### Python

```python
import pulumi
import pulumi_aiven as aiven

# "my-project" is a placeholder for the name of an existing Aiven project.
kc1 = aiven.get_kafka_connect(project="my-project",
    service_name="my-kc1")
```

### TypeScript

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

// "my-project" is a placeholder for the name of an existing Aiven project.
const kc1 = aiven.getKafkaConnect({
    project: "my-project",
    serviceName: "my-kc1",
});
```

## Using getKafkaConnect

```typescript
function getKafkaConnect(args: GetKafkaConnectArgs, opts?: InvokeOptions): Promise<GetKafkaConnectResult>
```

```python
def get_kafka_connect(cloud_name: Optional[str] = None, components: Optional[Sequence[GetKafkaConnectComponentArgs]] = None, kafka_connect: Optional[GetKafkaConnectKafkaConnectArgs] = None, kafka_connect_user_config: Optional[GetKafkaConnectKafkaConnectUserConfigArgs] = None, maintenance_window_dow: Optional[str] = None, maintenance_window_time: Optional[str] = None, plan: Optional[str] = None, project: Optional[str] = None, project_vpc_id: Optional[str] = None, service_host: Optional[str] = None, service_integrations: Optional[Sequence[GetKafkaConnectServiceIntegrationArgs]] = None, service_name: Optional[str] = None, service_password: Optional[str] = None, service_port: Optional[int] = None, service_type: Optional[str] = None, service_uri: Optional[str] = None, service_username: Optional[str] = None, state: Optional[str] = None, termination_protection: Optional[bool] = None, opts: Optional[InvokeOptions] = None) -> GetKafkaConnectResult
```

```go
func LookupKafkaConnect(ctx *Context, args *LookupKafkaConnectArgs, opts ...InvokeOption) (*LookupKafkaConnectResult, error)
```

Note: This function is named LookupKafkaConnect in the Go SDK.

```csharp
public static class GetKafkaConnect {
    public static Task<GetKafkaConnectResult> InvokeAsync(GetKafkaConnectArgs args, InvokeOptions? opts = null)
}
```
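In the Node.js SDK the invoke returns a plain promise. A minimal sketch of consuming the result, assuming an existing project and service named my-project and my-kc1 (both placeholder names):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

// Look up the existing Kafka Connect service; the names are placeholders.
const kc1 = aiven.getKafkaConnect({
    project: "my-project",
    serviceName: "my-kc1",
});

// getKafkaConnect returns a Promise<GetKafkaConnectResult>; wrapping it with
// pulumi.output() lifts the result's properties so they can be exported or
// passed on to other resources.
export const kafkaConnectUri = pulumi.output(kc1).serviceUri;
export const kafkaConnectHost = pulumi.output(kc1).serviceHost;
```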

The following arguments are supported:

Project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, refer to the project as shown in the example above. The project cannot be changed later without destroying and re-creating the service.

ServiceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on the intended service usage rather than current attributes.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

Components List<GetKafkaConnectComponentArgs>
KafkaConnect GetKafkaConnectKafkaConnectArgs

Kafka Connect specific server provided values.

KafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfigArgs

defines Kafka Connect specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

ServiceHost string

Kafka Connect hostname.

ServiceIntegrations List<GetKafkaConnectServiceIntegrationArgs>
ServicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

ServicePort int

Kafka Connect port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka Connect service.

ServiceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

State string

Service state.

TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

Project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, refer to the project as shown in the example above. The project cannot be changed later without destroying and re-creating the service.

ServiceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on the intended service usage rather than current attributes.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

Components []GetKafkaConnectComponent
KafkaConnect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

KafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

ServiceHost string

Kafka Connect hostname.

ServiceIntegrations []GetKafkaConnectServiceIntegration
ServicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

ServicePort int

Kafka Connect port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka Connect service.

ServiceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

State string

Service state.

TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

project string

identifies the project the service belongs to. To set up a proper dependency between the project and the service, refer to the project as shown in the example above. The project cannot be changed later without destroying and re-creating the service.

serviceName string

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on the intended service usage rather than current attributes.

cloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

components GetKafkaConnectComponent[]
kafkaConnect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

kafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

maintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

maintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

projectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

serviceHost string

Kafka Connect hostname.

serviceIntegrations GetKafkaConnectServiceIntegration[]
servicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

servicePort number

Kafka Connect port.

serviceType string
serviceUri string

URI for connecting to the Kafka Connect service.

serviceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

state string

Service state.

terminationProtection boolean

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

project str

identifies the project the service belongs to. To set up a proper dependency between the project and the service, refer to the project as shown in the example above. The project cannot be changed later without destroying and re-creating the service.

service_name str

specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on the intended service usage rather than current attributes.

cloud_name str

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

components Sequence[GetKafkaConnectComponentArgs]
kafka_connect GetKafkaConnectKafkaConnectArgs

Kafka Connect specific server provided values.

kafka_connect_user_config GetKafkaConnectKafkaConnectUserConfigArgs

defines Kafka Connect specific additional configuration options. The following configuration options are available:

maintenance_window_dow str

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

maintenance_window_time str

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan str

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

project_vpc_id str

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

service_host str

Kafka Connect hostname.

service_integrations Sequence[GetKafkaConnectServiceIntegrationArgs]
service_password str

Password used for connecting to the Kafka Connect service, if applicable.

service_port int

Kafka Connect port.

service_type str
service_uri str

URI for connecting to the Kafka Connect service.

service_username str

Username used for connecting to the Kafka Connect service, if applicable.

state str

Service state.

termination_protection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.
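As the project argument notes, it is preferable to refer to the project rather than hard-code its name. A hedged TypeScript sketch of that pattern, assuming the provider's getProject data source and a placeholder project name my-project:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

// Resolve the existing project first ("my-project" is a placeholder name),
// then reference its name when looking up the Kafka Connect service so the
// two lookups stay consistent.
const kc1 = aiven.getProject({ project: "my-project" }).then(pr1 =>
    aiven.getKafkaConnect({
        project: pr1.project,
        serviceName: "my-kc1",
    }));

export const kafkaConnectState = pulumi.output(kc1).state;
```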

## getKafkaConnect Result

The following output properties are available:

Components List<GetKafkaConnectComponent>
Id string

The provider-assigned unique ID for this managed resource.

KafkaConnect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

Project string
ServiceHost string

Kafka Connect hostname.

ServiceName string
ServicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

ServicePort int

Kafka Connect port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka Connect service.

ServiceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

State string

Service state.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

KafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

ServiceIntegrations List<GetKafkaConnectServiceIntegration>
TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

Components []GetKafkaConnectComponent
Id string

The provider-assigned unique ID for this managed resource.

KafkaConnect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

Project string
ServiceHost string

Kafka Connect hostname.

ServiceName string
ServicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

ServicePort int

Kafka Connect port.

ServiceType string
ServiceUri string

URI for connecting to the Kafka Connect service.

ServiceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

State string

Service state.

CloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

KafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

MaintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

MaintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

Plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

ProjectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

ServiceIntegrations []GetKafkaConnectServiceIntegration
TerminationProtection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

components GetKafkaConnectComponent[]
id string

The provider-assigned unique ID for this managed resource.

kafkaConnect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

project string
serviceHost string

Kafka Connect hostname.

serviceName string
servicePassword string

Password used for connecting to the Kafka Connect service, if applicable.

servicePort number

Kafka Connect port.

serviceType string
serviceUri string

URI for connecting to the Kafka Connect service.

serviceUsername string

Username used for connecting to the Kafka Connect service, if applicable.

state string

Service state.

cloudName string

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

kafkaConnectUserConfig GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

maintenanceWindowDow string

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

maintenanceWindowTime string

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan string

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

projectVpcId string

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

serviceIntegrations GetKafkaConnectServiceIntegration[]
terminationProtection boolean

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.

components Sequence[GetKafkaConnectComponent]
id str

The provider-assigned unique ID for this managed resource.

kafka_connect GetKafkaConnectKafkaConnect

Kafka Connect specific server provided values.

project str
service_host str

Kafka Connect hostname.

service_name str
service_password str

Password used for connecting to the Kafka Connect service, if applicable.

service_port int

Kafka Connect port.

service_type str
service_uri str

URI for connecting to the Kafka Connect service.

service_username str

Username used for connecting to the Kafka Connect service, if applicable.

state str

Service state.

cloud_name str

defines the cloud provider and region where the service is hosted. This can be changed freely after the service is created, but changing the value will trigger a potentially lengthy migration process for the service. The format is the cloud provider name (aws, azure, do, google, upcloud, etc.), a dash, and the cloud provider specific region name, for example google-europe-west1. The available region names are documented in each cloud provider's own support articles.

kafka_connect_user_config GetKafkaConnectKafkaConnectUserConfig

defines Kafka Connect specific additional configuration options. The following configuration options are available:

maintenance_window_dow str

day of week when maintenance operations should be performed: monday, tuesday, wednesday, etc.

maintenance_window_time str

time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.

plan str

defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when switching to a smaller plan: the new plan must have a sufficient amount of disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The exact options can be seen in the Aiven web console's Create Service dialog.

project_vpc_id str

optionally specifies the VPC the service should run in. If the value is not set, the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. The service can be freely moved into and out of a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.

service_integrations Sequence[GetKafkaConnectServiceIntegration]
termination_protection bool

prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.
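The output properties listed above can be combined like any other Pulumi values. A small TypeScript sketch (placeholder project and service names) that derives an HTTPS URL from serviceHost and servicePort:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

const kc1 = pulumi.output(aiven.getKafkaConnect({
    project: "my-project",   // placeholder: an existing Aiven project
    serviceName: "my-kc1",   // placeholder: an existing Kafka Connect service
}));

// serviceHost and servicePort are ordinary output properties of the result;
// pulumi.all + apply combines them into a single URL.
export const connectUrl = pulumi
    .all([kc1.serviceHost, kc1.servicePort])
    .apply(([host, port]) => `https://${host}:${port}`);
```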

## Supporting Types

### GetKafkaConnectComponent

Component string
Host string
KafkaAuthenticationMethod string
Port int
Route string
Ssl bool
Usage string
Component string
Host string
KafkaAuthenticationMethod string
Port int
Route string
Ssl bool
Usage string
component string
host string
kafkaAuthenticationMethod string
port number
route string
ssl boolean
usage string
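Each component entry describes one network endpoint exposed by the service. A hedged TypeScript sketch that picks the host and port of a particular component; the component name "kafka_connect" is assumed here purely for illustration:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

const kc1 = pulumi.output(aiven.getKafkaConnect({
    project: "my-project",   // placeholder project name
    serviceName: "my-kc1",   // placeholder service name
}));

// Pick the endpoint advertised for the "kafka_connect" component.
// Component names vary by service; this one is assumed for illustration.
export const connectEndpoint = kc1.components.apply(components => {
    const c = components.find(comp => comp.component === "kafka_connect");
    return c ? `${c.host}:${c.port}` : undefined;
});
```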

### GetKafkaConnectKafkaConnectUserConfig

### GetKafkaConnectKafkaConnectUserConfigKafkaConnect

ConnectorClientConfigOverridePolicy string

Defines what client configurations can be overridden by the connector. Default is None.

ConsumerAutoOffsetReset string

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

ConsumerFetchMaxBytes string

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

ConsumerIsolationLevel string

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

ConsumerMaxPartitionFetchBytes string

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

ConsumerMaxPollIntervalMs string

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

ConsumerMaxPollRecords string

The maximum number of records returned by a single poll.

OffsetFlushIntervalMs string

The interval at which to try committing offsets for tasks (defaults to 60000).

OffsetFlushTimeoutMs string

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

ProducerMaxRequestSize string

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

SessionTimeoutMs string

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

ConnectorClientConfigOverridePolicy string

Defines what client configurations can be overridden by the connector. Default is None.

ConsumerAutoOffsetReset string

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

ConsumerFetchMaxBytes string

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

ConsumerIsolationLevel string

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

ConsumerMaxPartitionFetchBytes string

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

ConsumerMaxPollIntervalMs string

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

ConsumerMaxPollRecords string

The maximum number of records returned by a single poll.

OffsetFlushIntervalMs string

The interval at which to try committing offsets for tasks (defaults to 60000).

OffsetFlushTimeoutMs string

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

ProducerMaxRequestSize string

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

SessionTimeoutMs string

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

connectorClientConfigOverridePolicy string

Defines what client configurations can be overridden by the connector. Default is None.

consumerAutoOffsetReset string

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

consumerFetchMaxBytes string

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

consumerIsolationLevel string

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

consumerMaxPartitionFetchBytes string

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

consumerMaxPollIntervalMs string

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

consumerMaxPollRecords string

The maximum number of records returned by a single poll.

offsetFlushIntervalMs string

The interval at which to try committing offsets for tasks (defaults to 60000).

offsetFlushTimeoutMs string

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

producerMaxRequestSize string

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

sessionTimeoutMs string

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).

connector_client_config_override_policy str

Defines what client configurations can be overridden by the connector. Default is None.

consumer_auto_offset_reset str

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.

consumer_fetch_max_bytes str

Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum.

consumer_isolation_level str

Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.

consumer_max_partition_fetch_bytes str

Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress.

consumer_max_poll_interval_ms str

The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).

consumer_max_poll_records str

The maximum number of records returned by a single poll.

offset_flush_interval_ms str

The interval at which to try committing offsets for tasks (defaults to 60000).

offset_flush_timeout_ms str

Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).

producer_max_request_size str

This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

session_timeout_ms str

The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
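These options mirror the kafka_connect block of the service's user configuration. As an illustration only, a hedged TypeScript sketch of setting a few of them on the managed aiven.KafkaConnect resource; the resource name, plan, and cloud are placeholder assumptions, and this data source itself only reads the values back:

```typescript
import * as aiven from "@pulumi/aiven";

// Illustrative only: the user-config options above are typically set on the
// aiven.KafkaConnect resource; the data source merely reports them.
const kc = new aiven.KafkaConnect("kc1", {
    project: "my-project",             // placeholder project name
    cloudName: "google-europe-west1",  // placeholder cloud/region
    plan: "startup-4",                 // placeholder plan name
    serviceName: "my-kc1",
    kafkaConnectUserConfig: {
        kafkaConnect: {
            consumerIsolationLevel: "read_committed",
            offsetFlushIntervalMs: "30000",
            sessionTimeoutMs: "10000",
        },
    },
});
```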

### GetKafkaConnectKafkaConnectUserConfigPrivateAccess

KafkaConnect string

Kafka Connect specific server provided values.

Prometheus string
KafkaConnect string

Kafka Connect specific server provided values.

Prometheus string
kafkaConnect string

Kafka Connect specific server provided values.

prometheus string
kafka_connect str

Kafka Connect specific server provided values.

prometheus str

### GetKafkaConnectKafkaConnectUserConfigPublicAccess

KafkaConnect string

Kafka Connect specific server provided values.

Prometheus string
KafkaConnect string

Kafka Connect specific server provided values.

Prometheus string
kafkaConnect string

Kafka Connect specific server provided values.

prometheus string
kafka_connect str

Kafka Connect specific server provided values.

prometheus str

### GetKafkaConnectServiceIntegration

## Package Details

Repository: https://github.com/pulumi/pulumi-aiven

License: Apache-2.0

Notes: This Pulumi package is based on the aiven Terraform Provider.