getDataLakes

mongodbatlas.getDataLakes describes all Data Lakes in the specified project.

NOTE: Groups and projects are synonymous terms. You may find groupId in the official documentation.

Example Usage

using System.Collections.Generic;
using Pulumi;
using Mongodbatlas = Pulumi.Mongodbatlas;

return await Deployment.RunAsync(() => 
{
    var test = Mongodbatlas.GetDataLakes.Invoke(new()
    {
        ProjectId = "PROJECT ID",
    });

});
package main

import (
	"github.com/pulumi/pulumi-mongodbatlas/sdk/v3/go/mongodbatlas"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := mongodbatlas.LookupDataLakes(ctx, &mongodbatlas.LookupDataLakesArgs{
			ProjectId: "PROJECT ID",
		}, nil)
		if err != nil {
			return err
		}
		return nil
	})
}
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.mongodbatlas.MongodbatlasFunctions;
import com.pulumi.mongodbatlas.inputs.GetDataLakesArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        final var test = MongodbatlasFunctions.getDataLakes(GetDataLakesArgs.builder()
            .projectId("PROJECT ID")
            .build());

    }
}
import pulumi
import pulumi_mongodbatlas as mongodbatlas

test = mongodbatlas.get_data_lakes(project_id="PROJECT ID")
import * as pulumi from "@pulumi/pulumi";
import * as mongodbatlas from "@pulumi/mongodbatlas";

const test = pulumi.output(mongodbatlas.getDataLakes({
    projectId: "PROJECT ID",
}));
variables:
  test:
    Fn::Invoke:
      Function: mongodbatlas:getDataLakes
      Arguments:
        projectId: PROJECT ID

Using getDataLakes

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getDataLakes(args: GetDataLakesArgs, opts?: InvokeOptions): Promise<GetDataLakesResult>
function getDataLakesOutput(args: GetDataLakesOutputArgs, opts?: InvokeOptions): Output<GetDataLakesResult>
def get_data_lakes(project_id: Optional[str] = None,
                   opts: Optional[InvokeOptions] = None) -> GetDataLakesResult
def get_data_lakes_output(project_id: Optional[pulumi.Input[str]] = None,
                   opts: Optional[InvokeOptions] = None) -> Output[GetDataLakesResult]
func LookupDataLakes(ctx *Context, args *LookupDataLakesArgs, opts ...InvokeOption) (*LookupDataLakesResult, error)
func LookupDataLakesOutput(ctx *Context, args *LookupDataLakesOutputArgs, opts ...InvokeOption) LookupDataLakesResultOutput

> Note: This function is named LookupDataLakes in the Go SDK.

public static class GetDataLakes 
{
    public static Task<GetDataLakesResult> InvokeAsync(GetDataLakesArgs args, InvokeOptions? opts = null)
    public static Output<GetDataLakesResult> Invoke(GetDataLakesInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetDataLakesResult> getDataLakes(GetDataLakesArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: mongodbatlas:index/getDataLakes:getDataLakes
  arguments:
    # arguments dictionary
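To illustrate the direct invocation form described above, the sketch below stubs out get_data_lakes with a canned response shaped like GetDataLakesResult; it is an illustration only, and a real program would import pulumi_mongodbatlas and let the call reach the Atlas API.

```python
# Illustrative stub only: mimics the shape of GetDataLakesResult so the
# direct-form call pattern can be shown without contacting Atlas.
def get_data_lakes(project_id):
    # A real program would call mongodbatlas.get_data_lakes(project_id=...).
    return {
        "id": project_id,
        "project_id": project_id,
        "results": [
            {"name": "my-lake", "state": "ACTIVE", "hostnames": ["lake0.example.net"]},
        ],
    }

# The direct form blocks and hands back a plain value that can be indexed at once;
# the output form would instead wrap the same shape in an Output.
result = get_data_lakes(project_id="PROJECT ID")
first_lake = result["results"][0]["name"]
print(first_lake)  # my-lake
```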

The following arguments are supported:

ProjectId string

The unique ID for the project to get all data lakes.

ProjectId string

The unique ID for the project to get all data lakes.

projectId String

The unique ID for the project to get all data lakes.

projectId string

The unique ID for the project to get all data lakes.

project_id str

The unique ID for the project to get all data lakes.

projectId String

The unique ID for the project to get all data lakes.

getDataLakes Result

The following output properties are available:

Id string

The provider-assigned unique ID for this managed resource.

ProjectId string
Results List<GetDataLakesResult>

A list where each element represents a Data Lake.

Id string

The provider-assigned unique ID for this managed resource.

ProjectId string
Results []GetDataLakesResult

A list where each element represents a Data Lake.

id String

The provider-assigned unique ID for this managed resource.

projectId String
results List<GetDataLakesResult>

A list where each element represents a Data Lake.

id string

The provider-assigned unique ID for this managed resource.

projectId string
results GetDataLakesResult[]

A list where each element represents a Data Lake.

id str

The provider-assigned unique ID for this managed resource.

project_id str
results Sequence[GetDataLakesResult]

A list where each element represents a Data Lake.

id String

The provider-assigned unique ID for this managed resource.

projectId String
results List<Property Map>

A list where each element represents a Data Lake.
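Once resolved, the results list can be filtered like any ordinary collection. The snippet below assumes an already-resolved result shaped like the schema above (stubbed data, not fetched from Atlas) and collects the hostnames of active lakes:

```python
# Stubbed, already-resolved result mirroring the documented output shape.
resolved = {
    "project_id": "PROJECT ID",
    "results": [
        {"name": "sales-lake", "state": "ACTIVE", "hostnames": ["a.example.net"]},
        {"name": "old-lake", "state": "DELETED", "hostnames": []},
    ],
}

# Collect the hostnames of every lake whose state is ACTIVE.
active_hostnames = [
    host
    for lake in resolved["results"]
    if lake["state"] == "ACTIVE"
    for host in lake["hostnames"]
]
print(active_hostnames)  # ['a.example.net']
```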

Supporting Types

GetDataLakesResult

Aws List<GetDataLakesResultAw>
DataProcessRegions List<GetDataLakesResultDataProcessRegion>

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
Hostnames List<string>

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

Name string
ProjectId string

The unique ID for the project to get all data lakes.

State string

Current state of the Atlas Data Lake.

StorageDatabases List<GetDataLakesResultStorageDatabase>

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
StorageStores List<GetDataLakesResultStorageStore>

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
Aws []GetDataLakesResultAw
DataProcessRegions []GetDataLakesResultDataProcessRegion

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
Hostnames []string

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

Name string
ProjectId string

The unique ID for the project to get all data lakes.

State string

Current state of the Atlas Data Lake.

StorageDatabases []GetDataLakesResultStorageDatabase

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
StorageStores []GetDataLakesResultStorageStore

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
aws List<GetDataLakesResultAw>
dataProcessRegions List<GetDataLakesResultDataProcessRegion>

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
hostnames List<String>

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

name String
projectId String

The unique ID for the project to get all data lakes.

state String

Current state of the Atlas Data Lake.

storageDatabases List<GetDataLakesResultStorageDatabase>

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
storageStores List<GetDataLakesResultStorageStore>

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
aws GetDataLakesResultAw[]
dataProcessRegions GetDataLakesResultDataProcessRegion[]

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
hostnames string[]

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

name string
projectId string

The unique ID for the project to get all data lakes.

state string

Current state of the Atlas Data Lake.

storageDatabases GetDataLakesResultStorageDatabase[]

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
storageStores GetDataLakesResultStorageStore[]

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
aws Sequence[GetDataLakesResultAw]
data_process_regions Sequence[GetDataLakesResultDataProcessRegion]

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
hostnames Sequence[str]

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

name str
project_id str

The unique ID for the project to get all data lakes.

state str

Current state of the Atlas Data Lake.

storage_databases Sequence[GetDataLakesResultStorageDatabase]

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
storage_stores Sequence[GetDataLakesResultStorageStore]

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
aws List<Property Map>
dataProcessRegions List<Property Map>

The cloud provider region to which Atlas Data Lake routes client connections for data processing.

  • data_process_region.0.cloud_provider - Name of the cloud service provider.
  • data_process_region.0.region - Name of the region to which Data Lake routes client connections for data processing.
hostnames List<String>

The list of hostnames assigned to the Atlas Data Lake. Each string in the array is a hostname assigned to the Atlas Data Lake.

name String
projectId String

The unique ID for the project to get all data lakes.

state String

Current state of the Atlas Data Lake.

storageDatabases List<Property Map>

Configuration details for mapping each data store to queryable databases and collections.

  • storage_databases.#.name - Name of the database to which Data Lake maps the data contained in the data store.
  • storage_databases.#.collections - Array of objects where each object represents a collection and the data sources that map to a storage_stores data store.
  • storage_databases.#.collections.#.name - Name of the collection.
  • storage_databases.#.collections.#.data_sources - Array of objects where each object represents a storage_stores data store to map with the collection.
  • storage_databases.#.collections.#.data_sources.#.store_name - Name of a data store to map to the <collection>.
  • storage_databases.#.collections.#.data_sources.#.default_format - Default format that Data Lake assumes if it encounters a file without an extension while searching the storeName.
  • storage_databases.#.collections.#.data_sources.#.path - Controls how Atlas Data Lake searches for and parses files in the storeName before mapping them to the <collection>.
  • storage_databases.#.views - Array of objects where each object represents an aggregation pipeline on a collection.
  • storage_databases.#.views.#.name - Name of the view.
  • storage_databases.#.views.#.source - Name of the source collection for the view.
  • storage_databases.#.views.#.pipeline - Aggregation pipeline stage(s) to apply to the source collection.
storageStores List<Property Map>

Each object in the array represents a data store. Data Lake uses the storage.databases configuration details to map data in each data store to queryable databases and collections.

  • storage_stores.#.name - Name of the data store.
  • storage_stores.#.provider - Defines where the data is stored.
  • storage_stores.#.region - Name of the AWS region in which the S3 bucket is hosted.
  • storage_stores.#.bucket - Name of the AWS S3 bucket.
  • storage_stores.#.prefix - Prefix Data Lake applies when searching for files in the S3 bucket.
  • storage_stores.#.delimiter - The delimiter that separates storage_databases.#.collections.#.data_sources.#.path segments in the data store.
  • storage_stores.#.include_tags - Determines whether or not to use S3 tags on the files in the given path as additional partition attributes.
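The storage_databases / storage_stores nesting described above can be traversed to answer questions such as "which stores back a given collection". A minimal sketch over stubbed data (all names are invented for illustration):

```python
# Stubbed storage configuration mirroring the documented nesting:
# databases -> collections -> data_sources -> store_name.
storage_databases = [
    {
        "name": "analytics",
        "collections": [
            {
                "name": "events",
                "data_sources": [
                    {"store_name": "s3-events", "path": "/events/*", "default_format": ".json"},
                ],
            },
        ],
    },
]

def stores_for_collection(databases, db_name, coll_name):
    """Return the store_name of every data source mapped to db_name.coll_name."""
    return [
        source["store_name"]
        for db in databases
        if db["name"] == db_name
        for coll in db["collections"]
        if coll["name"] == coll_name
        for source in coll["data_sources"]
    ]

print(stores_for_collection(storage_databases, "analytics", "events"))  # ['s3-events']
```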

GetDataLakesResultAw

GetDataLakesResultDataProcessRegion

CloudProvider string
Region string
CloudProvider string
Region string
cloudProvider String
region String
cloudProvider string
region string
cloudProvider String
region String

GetDataLakesResultStorageDatabase

GetDataLakesResultStorageDatabaseCollection

GetDataLakesResultStorageDatabaseCollectionDataSource

DefaultFormat string
Path string
StoreName string
DefaultFormat string
Path string
StoreName string
defaultFormat String
path String
storeName String
defaultFormat string
path string
storeName string
defaultFormat String
path String
storeName String

GetDataLakesResultStorageDatabaseView

Name string
Pipeline string
Source string
Name string
Pipeline string
Source string
name String
pipeline String
source String
name string
pipeline string
source string
name str
pipeline str
source str
name String
pipeline String
source String
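The pipeline field of a view is returned as a string. Assuming it holds JSON-encoded aggregation stages (as Atlas Data Lake storage configurations typically do; this is an assumption, and the stubbed view below is invented for illustration), it can be decoded for inspection:

```python
import json

# Stubbed view entry shaped like GetDataLakesResultStorageDatabaseView.
view = {
    "name": "recent_events",
    "source": "events",
    "pipeline": '[{"$match": {"year": 2023}}, {"$limit": 100}]',
}

# The pipeline arrives as a string; assuming it is JSON-encoded,
# decode it to inspect the aggregation stages programmatically.
stages = json.loads(view["pipeline"])
stage_names = [next(iter(stage)) for stage in stages]
print(stage_names)  # ['$match', '$limit']
```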

GetDataLakesResultStorageStore

AdditionalStorageClasses List<string>
Bucket string
Delimiter string
IncludeTags bool
Name string
Prefix string
Provider string
Region string
AdditionalStorageClasses []string
Bucket string
Delimiter string
IncludeTags bool
Name string
Prefix string
Provider string
Region string
additionalStorageClasses List<String>
bucket String
delimiter String
includeTags Boolean
name String
prefix String
provider String
region String
additionalStorageClasses string[]
bucket string
delimiter string
includeTags boolean
name string
prefix string
provider string
region string
additionalStorageClasses List<String>
bucket String
delimiter String
includeTags Boolean
name String
prefix String
provider String
region String
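To audit where data lives, storage stores can be grouped by provider and region. A sketch over stubbed entries shaped like GetDataLakesResultStorageStore (store and bucket names are invented):

```python
from collections import defaultdict

# Stubbed stores mirroring the documented fields.
storage_stores = [
    {"name": "s3-events", "provider": "s3", "region": "us-east-1", "bucket": "events-bkt"},
    {"name": "s3-logs", "provider": "s3", "region": "eu-west-1", "bucket": "logs-bkt"},
]

# Group store names by (provider, region) for a quick inventory.
by_location = defaultdict(list)
for store in storage_stores:
    by_location[(store["provider"], store["region"])].append(store["name"])

print(dict(by_location))
# {('s3', 'us-east-1'): ['s3-events'], ('s3', 'eu-west-1'): ['s3-logs']}
```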

Package Details

Repository
https://github.com/pulumi/pulumi-mongodbatlas
License
Apache-2.0
Notes

This Pulumi package is based on the mongodbatlas Terraform Provider.