1. Setting User Permissions for MLflow in Databricks


    To set user permissions for MLflow in Databricks with Pulumi, you would typically use the databricks.Permissions resource. This resource sets permissions on various Databricks objects for users, groups, and service principals. Depending on your needs, you adjust the access_controls property to define the appropriate permissions for MLflow.

    Here's a Pulumi program in Python that demonstrates how to set user permissions for an MLflow experiment in Databricks. (For a registered MLflow model, the same resource takes a registered_model_id instead of an experiment_id.) This program assumes that you have already set up the Databricks provider and the MLflow experiment you want to configure permissions for.

    Let's go step-by-step through what the code is doing:

    1. Importing necessary libraries: We start by importing the Pulumi library along with the Databricks provider.
    2. Defining MLflow permissions: We use the databricks.Permissions resource to define permissions on a Databricks MLflow experiment for a particular user by specifying the user name and permission level.
    3. Exporting the permission object: Finally, we use pulumi.export to output the created permissions object for further reference.

    Here is the Python program:

    import pulumi
    import pulumi_databricks as databricks

    # Create a Permissions resource for an MLflow experiment
    mlflow_model_permissions = databricks.Permissions(
        "mlflow-model-permissions",
        experiment_id="<MLFLOW_EXPERIMENT_ID>",  # Replace with your MLflow experiment ID
        access_controls=[
            databricks.PermissionsAccessControlArgs(
                user_name="<USERNAME>",         # Replace with the username to grant permissions
                permission_level="CAN_MANAGE",  # Set the permission level as appropriate
            )
        ],
    )

    # Export the permissions object
    pulumi.export("mlflowModelPermissions", mlflow_model_permissions.id)

    Replace <MLFLOW_EXPERIMENT_ID> with the actual MLflow experiment ID and <USERNAME> with the Databricks username of the individual or service principal you’re granting permissions to. The permission_level should match the level of access you wish to grant; for MLflow experiments, the available levels are CAN_READ, CAN_EDIT, and CAN_MANAGE.
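    To illustrate how access_controls can mix principals and levels, here is a plain-Python sketch (no Pulumi runtime needed) of the structure you would pass; with Pulumi you would wrap each entry in databricks.PermissionsAccessControlArgs. The user and group names are hypothetical, and the levels shown are the experiment-level ones:

```python
# Sketch of an access_controls list as plain dictionaries.
# "mlops-admin@example.com" and "data-scientists" are hypothetical principals.
access_controls = [
    {"user_name": "mlops-admin@example.com", "permission_level": "CAN_MANAGE"},
    {"group_name": "data-scientists", "permission_level": "CAN_EDIT"},
]

# Each entry names exactly one principal (user, group, or service principal)
# and one permission level valid for the target object.
def validate(entry):
    principals = [k for k in ("user_name", "group_name", "service_principal_name") if k in entry]
    return len(principals) == 1 and entry.get("permission_level") in {"CAN_READ", "CAN_EDIT", "CAN_MANAGE"}

assert all(validate(e) for e in access_controls)
```

    This mirrors the shape the provider expects: one principal key per entry, never several at once.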

    It is important to consult the Databricks Permissions documentation for the specific requirements and properties that match your use case, as there are a variety of object types (notebook_id, directory_id, registered_model_id, etc.) that can be controlled with this resource, each with its own ID parameter.
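    As a quick reference, the sketch below pairs a few common object types with the ID argument that selects them on databricks.Permissions. Names follow the pulumi_databricks provider, but verify them against the documentation for your provider version:

```python
# Object type -> the databricks.Permissions argument that targets it.
# Double-check these names against your pulumi_databricks version's docs.
PERMISSIONS_ID_ARGS = {
    "MLflow experiment": "experiment_id",
    "Registered MLflow model": "registered_model_id",
    "Notebook": "notebook_id",
    "Directory": "directory_id",
    "Cluster": "cluster_id",
    "Job": "job_id",
}

# Example lookup: which argument targets a notebook?
print(PERMISSIONS_ID_ARGS["Notebook"])  # notebook_id
```

    Exactly one of these ID arguments should be set per Permissions resource, matching the single object whose access you are managing.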

    Also, note that before running this Pulumi program, you must have the Databricks provider configured in your Pulumi project. This usually involves setting up the appropriate API token and host URL as environment variables or in your Pulumi stack configuration. The exact setup may vary depending on whether you are using Databricks in AWS, Azure, or GCP.
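    For example, two common ways to supply those credentials are stack configuration and environment variables; the workspace URL below is a placeholder:

```shell
# Option 1: Pulumi stack configuration (token stored encrypted as a secret)
pulumi config set databricks:host https://<your-workspace>.cloud.databricks.com
pulumi config set --secret databricks:token <DATABRICKS_PERSONAL_ACCESS_TOKEN>

# Option 2: environment variables picked up by the Databricks provider
export DATABRICKS_HOST=https://<your-workspace>.cloud.databricks.com
export DATABRICKS_TOKEN=<DATABRICKS_PERSONAL_ACCESS_TOKEN>
```

    Prefer the --secret form over plain config so the token is not stored in cleartext in your stack file.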

    After defining your Pulumi program, you can run pulumi up in your terminal to deploy the changes. Pulumi will show you a preview of the actions to be taken and, upon confirmation, will apply those changes to set the permissions for the MLflow model in your Databricks workspace.
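    The deployment flow described above amounts to two commands:

```shell
pulumi preview   # show the planned permission changes without applying them
pulumi up        # review the preview, then confirm to apply the changes
```
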