1. On-demand Query Execution with Azure Synapse Serverless SQL Pool

    Python

    To set up an on-demand query execution environment using Azure Synapse serverless SQL pools, you can use Pulumi with the Azure Native provider. Azure Synapse Analytics lets you run SQL queries over data in the Azure cloud without provisioning databases or managing infrastructure. You pay per query, based on the amount of data each query processes, which suits intermittent querying patterns where continuously provisioned compute would sit idle.
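For context, serverless queries typically read files directly from the data lake with OPENROWSET. The sketch below only assembles such a query as a string; the storage URL is a placeholder, and the commented-out section shows how it might be executed with the third-party pyodbc package (an assumption for illustration, not something the Pulumi program requires):

```python
# Assemble an ad-hoc serverless query over Parquet files in the data lake.
# The storage URL below is a placeholder; replace it with your own.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://yourstorageaccount.dfs.core.windows.net/your_file_system_name/data/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
"""

# With the pyodbc package and the Microsoft ODBC driver installed, the query
# could be run against the workspace's serverless endpoint, for example:
#
# import pyodbc
# conn = pyodbc.connect(
#     "DRIVER={ODBC Driver 18 for SQL Server};"
#     "SERVER=<workspace>-ondemand.sql.azuresynapse.net;"
#     "DATABASE=master;UID=sqladminuser;PWD=<password>"
# )
# rows = conn.cursor().execute(query).fetchall()

print(query.strip().splitlines()[0])  # → SELECT TOP 10 *
```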

    Below, I'll provide you with a Pulumi program that creates an Azure Synapse workspace using azure_native.synapse.Workspace. Note that in Synapse, the serverless SQL pool (named "Built-in") is provisioned automatically when the workspace is created, so no separate pool resource is needed for on-demand queries; the azure_native.synapse.SqlPool resource is used only for dedicated SQL pools, which require a provisioned SKU.

    Here is a Pulumi program written in Python that sets up these resources:

    import pulumi
    import pulumi_azure_native as azure_native

    # Create an Azure Resource Group to hold the Synapse resources
    resource_group = azure_native.resources.ResourceGroup("synapse_resource_group")

    # Create an Azure Synapse Workspace. Provisioning a workspace automatically
    # creates the built-in serverless SQL pool, so no separate pool resource is
    # required for on-demand queries.
    synapse_workspace = azure_native.synapse.Workspace(
        "synapse_workspace",
        resource_group_name=resource_group.name,
        location=resource_group.location,
        default_data_lake_storage=azure_native.synapse.DataLakeStorageAccountDetailsArgs(
            account_url="https://yourstorageaccount.dfs.core.windows.net",  # Replace with your ADLS Gen2 account URL
            filesystem="your_file_system_name",  # Replace with your file system (container) name
        ),
        identity=azure_native.synapse.ManagedIdentityArgs(
            type="SystemAssigned",
        ),
        sql_administrator_login="sqladminuser",  # Replace with your admin username
        sql_administrator_login_password="ComplexP@ssw0rd!",  # Replace with your admin password (use a Pulumi secret)
    )

    # Export resource properties, including the serverless (on-demand) SQL endpoint
    pulumi.export("resource_group_name", resource_group.name)
    pulumi.export("synapse_workspace_name", synapse_workspace.name)
    pulumi.export(
        "serverless_sql_endpoint",
        synapse_workspace.connectivity_endpoints.apply(lambda eps: eps.get("sqlOnDemand")),
    )

    In this program:

    • We start by creating an Azure Resource Group, which is a container that holds related resources for an Azure solution.
    • Then we create an Azure Synapse Workspace, which is an analytics service that brings together enterprise data warehousing and Big Data analytics.
    • We specify the default_data_lake_storage settings (the ADLS Gen2 account_url and filesystem) along with the SQL administrator credentials (sql_administrator_login, sql_administrator_login_password). You'll need to replace the placeholder values with your actual storage account URL and file system name.
    • The serverless SQL pool itself is not declared as a separate resource: every Synapse workspace includes a built-in serverless pool, and there is no SKU to choose because billing is based on the data processed by each query rather than on provisioned compute.
    • Finally, we export the names of the created resources so you can identify them, reference them from other Pulumi programs, and connect to the serverless SQL endpoint.
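One detail worth knowing when consuming these exports: the serverless (on-demand) SQL endpoint is derived from the workspace name, following the pattern <workspace>-ondemand.sql.azuresynapse.net. A minimal sketch:

```python
def serverless_sql_endpoint(workspace_name: str) -> str:
    """Build the serverless (on-demand) SQL endpoint for a Synapse workspace."""
    return f"{workspace_name}-ondemand.sql.azuresynapse.net"

print(serverless_sql_endpoint("myworkspace"))  # → myworkspace-ondemand.sql.azuresynapse.net
```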

    Note that for sensitive values such as sql_administrator_login_password, it is advisable to use Pulumi configuration secrets rather than hard-coding them, which is beyond the scope of this script. You would store the secret securely in your stack configuration and reference it from within your Pulumi program.
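For example, with the Pulumi CLI you could store the password as an encrypted stack secret (the key name sqlAdminPassword here is just an illustration):

```shell
# Store the admin password encrypted in the stack configuration
pulumi config set --secret sqlAdminPassword 'ComplexP@ssw0rd!'
```

Inside the program, pulumi.Config().require_secret("sqlAdminPassword") then returns the value as a secret Output that can be passed to sql_administrator_login_password without appearing in plain text in state or logs.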

    To run this Pulumi program, you need Pulumi installed, a backend to log in to (the Pulumi Cloud service or local state), and the Azure CLI installed and authenticated with credentials that can access your Azure subscription.
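Assuming those prerequisites are in place, a typical session might look like this (the backend choice is up to you):

```shell
az login              # authenticate the Azure CLI against your subscription
pulumi login          # use the Pulumi Cloud backend (or: pulumi login --local)
pulumi up             # preview and deploy the resources
pulumi stack output   # inspect the exported values
```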

    This program serves as a starting point, and you can further customize it by adding more configurations and automations as per your requirements.