Aiven v6.50.0 published on Friday, Feb 27, 2026 by Pulumi

    Lists Kafka topics for a service.

    Using getKafkaTopicList

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getKafkaTopicList(args: GetKafkaTopicListArgs, opts?: InvokeOptions): Promise<GetKafkaTopicListResult>
    function getKafkaTopicListOutput(args: GetKafkaTopicListOutputArgs, opts?: InvokeOptions): Output<GetKafkaTopicListResult>
    def get_kafka_topic_list(project: Optional[str] = None,
                             service_name: Optional[str] = None,
                             timeouts: Optional[GetKafkaTopicListTimeouts] = None,
                             topics: Optional[Sequence[GetKafkaTopicListTopic]] = None,
                             opts: Optional[InvokeOptions] = None) -> GetKafkaTopicListResult
    def get_kafka_topic_list_output(project: Optional[pulumi.Input[str]] = None,
                             service_name: Optional[pulumi.Input[str]] = None,
                             timeouts: Optional[pulumi.Input[GetKafkaTopicListTimeoutsArgs]] = None,
                             topics: Optional[pulumi.Input[Sequence[pulumi.Input[GetKafkaTopicListTopicArgs]]]] = None,
                             opts: Optional[InvokeOptions] = None) -> Output[GetKafkaTopicListResult]
    func GetKafkaTopicList(ctx *Context, args *GetKafkaTopicListArgs, opts ...InvokeOption) (*GetKafkaTopicListResult, error)
    func GetKafkaTopicListOutput(ctx *Context, args *GetKafkaTopicListOutputArgs, opts ...InvokeOption) GetKafkaTopicListResultOutput

    > Note: This function is named GetKafkaTopicList in the Go SDK.

    public static class GetKafkaTopicList 
    {
        public static Task<GetKafkaTopicListResult> InvokeAsync(GetKafkaTopicListArgs args, InvokeOptions? opts = null)
        public static Output<GetKafkaTopicListResult> Invoke(GetKafkaTopicListInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetKafkaTopicListResult> getKafkaTopicList(GetKafkaTopicListArgs args, InvokeOptions options)
    public static Output<GetKafkaTopicListResult> getKafkaTopicList(GetKafkaTopicListArgs args, InvokeOptions options)
    
    fn::invoke:
      function: aiven:index/getKafkaTopicList:getKafkaTopicList
      arguments:
        # arguments dictionary
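
    As a concrete sketch, a Pulumi YAML program can use the fn::invoke form above to look up the topics and export them (the project and service names below are placeholders):

    ```yaml
    variables:
      kafkaTopics:
        fn::invoke:
          function: aiven:index/getKafkaTopicList:getKafkaTopicList
          arguments:
            project: my-project      # placeholder project name
            serviceName: my-kafka    # placeholder Kafka service name
    outputs:
      # Export the full list of topic objects returned by the invoke.
      topics: ${kafkaTopics.topics}
    ```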

    The following arguments are supported. Property names follow each SDK's casing conventions: PascalCase in C# and Go, camelCase in TypeScript and Java, snake_case in Python.

    project string
    Project name.
    serviceName string
    Service name.
    timeouts GetKafkaTopicListTimeouts
    topics List<GetKafkaTopicListTopic>
    List of Kafka topics.

    getKafkaTopicList Result

    The following output properties are available. (Names and types follow each SDK's own conventions.)

    id string
    Resource ID composed as: project/service_name.
    project string
    Project name.
    serviceName string
    Service name.
    timeouts GetKafkaTopicListTimeouts
    topics List<GetKafkaTopicListTopic>
    List of Kafka topics.
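
    The id output simply concatenates the two lookup arguments. A minimal sketch of splitting such an ID back into its parts (the helper name is hypothetical, not part of the SDK):

    ```python
    def parse_topic_list_id(resource_id: str) -> tuple[str, str]:
        """Split a 'project/service_name' resource ID into (project, service_name)."""
        # Split on the first '/' only, in case the service name ever contains one.
        project, service_name = resource_id.split("/", 1)
        return project, service_name
    ```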

    Supporting Types

    GetKafkaTopicListTimeouts

    read string
    A string that can be parsed as a duration consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), and "h" (hours).
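
    The accepted duration syntax can be illustrated with a small validator. This regex is only an illustration of the format described above (number + unit groups with units s, m, h), not the provider's actual parser:

    ```python
    import re

    # One or more number+unit groups; the valid units are s, m, and h.
    _DURATION = re.compile(r"^(\d+[smh])+$")

    def is_valid_read_timeout(value: str) -> bool:
        """Check a duration string such as '30s' or '2h45m'."""
        return bool(_DURATION.fullmatch(value))
    ```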

    GetKafkaTopicListTopic

    (Field names follow each SDK's casing conventions; types map per language.)

    cleanupPolicy string
    The retention policy to use on old segments. Possible values are delete, compact, or a comma-separated list of both. The default policy (delete) discards old segments when their retention time or size limit has been reached; the compact setting enables log compaction on the topic.
    disklessEnable boolean
    Whether diskless storage is enabled. Available only for BYOC services with the Diskless feature enabled.
    minInsyncReplicas integer
    When a producer sets acks to all (or -1), this configuration specifies the minimum number of replicas that must acknowledge a write for it to be considered successful. If this minimum cannot be met, the producer raises an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). Used together, min.insync.replicas and acks enforce greater durability guarantees. A typical scenario is to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of all; this ensures the producer raises an exception if a majority of replicas do not receive a write.
    ownerUserGroupId string
    The user group that owns this topic.
    partitions integer
    Number of partitions.
    remoteStorageEnable boolean
    Whether tiered storage is enabled. Available only for services with the Tiered Storage feature enabled.
    replication integer
    Number of replicas.
    retentionBytes integer
    Controls the maximum size a partition (which consists of log segments) can grow to before old log segments are discarded to free up space under the delete retention policy. By default there is no size limit, only a time limit. Because this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.
    retentionHours integer
    Retention period (hours).
    state string
    Topic state. The possible values are ACTIVE, CONFIGURING, and DELETING.
    topicDescription string
    Topic description.
    topicName string
    Topic name.
    tags List<GetKafkaTopicListTopicTag>
    Topic tags.
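
    Since topics is a plain list of records, downstream code can filter it. A sketch over dict-shaped records (field names follow the Python SDK's snake_case; the actual SDK returns typed objects with attribute access, so this is an illustration of the filtering logic only):

    ```python
    def active_compacted_topics(topics: list[dict]) -> list[str]:
        """Return names of ACTIVE topics whose cleanup policy includes 'compact'."""
        return [
            t["topic_name"]
            for t in topics
            # cleanup_policy may be a comma-separated list such as "compact,delete".
            if t["state"] == "ACTIVE" and "compact" in t["cleanup_policy"].split(",")
        ]
    ```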

    GetKafkaTopicListTopicTag

    key string
    Tag key.
    value string
    Tag value.
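
    Tags arrive as a list of key/value records; collapsing them into a mapping is a one-liner (again a sketch over dict-shaped records):

    ```python
    def tags_to_dict(tags: list[dict]) -> dict[str, str]:
        """Convert a list of {'key': ..., 'value': ...} tag records to a dict."""
        return {t["key"]: t["value"] for t in tags}
    ```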

    Package Details

    Repository
    Aiven pulumi/pulumi-aiven
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the aiven Terraform Provider.