
Using a DIY Backend

    A DIY (“Do It Yourself”) backend stores Pulumi state on your local filesystem or in a cloud object store instead of in Pulumi Cloud. With a DIY backend, state management (including backup, sharing, and synchronizing team access) is up to you to implement. A basic file-based locking system is enabled by default for all DIY backends.

    Both Pulumi Cloud and DIY backends provide reliable state management. DIY backends include built-in state locking and history tracking. However, most users find Pulumi Cloud to be the easiest way to get started and scale. Pulumi Cloud handles operational concerns automatically, including backup and recovery, team collaboration, RBAC, and audit logging. It also provides a transactional API that offers stronger guarantees than blob storage protocols. With DIY backends, you will need to implement your own backup procedures and manage access control. For a full comparison, see Pulumi Cloud vs. OSS. For advanced state management concepts, see Advanced state.

    To use a DIY backend, specify a storage endpoint URL as pulumi login’s <backend-url> argument: s3://<bucket-path>, azblob://<container-path>, gs://<bucket-path>, or file://<fs-path>. This will tell Pulumi to store state in AWS S3, Azure Blob Storage, Google Cloud Storage, or the local filesystem, respectively. Checkpoint files are stored in a relative .pulumi directory. For example, if you were using the Amazon S3 DIY backend, your checkpoint files would be stored at s3://my-pulumi-state-bucket/.pulumi where my-pulumi-state-bucket represents the name of your S3 bucket.

    The .pulumi folder contains the following file and subdirectories:

    1. meta.yaml: This is the metadata file. It does not hold information about the stacks but rather information about the backend itself.
    2. stacks/: Active state files for each stack (e.g. dev.json or proj/dev.json if the stack is scoped to a project).
    3. locks/: Lock files for each stack if the stack is currently being operated on by a Pulumi operation (e.g. dev/$lock.json or proj/dev/$lock.json where $lock is a unique identifier for the lock).
    4. history/: History for each stack (e.g. dev/dev-$timestamp.history.json or proj/dev/dev-$timestamp.history.json where $timestamp records the time the history file was created).
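    The layout above can be sketched on disk as follows. The project name proj, stack name dev, and the lock and history file names are all hypothetical placeholders chosen for illustration:

```shell
# Sketch of a DIY backend's .pulumi layout. The project "proj", stack
# "dev", and the lock/history file names are hypothetical placeholders.
mkdir -p .pulumi/stacks/proj .pulumi/locks/proj/dev .pulumi/history/proj/dev
touch .pulumi/meta.yaml                                     # backend metadata
touch .pulumi/stacks/proj/dev.json                          # active stack state
touch .pulumi/locks/proj/dev/lock-0001.json                 # held during an operation
touch .pulumi/history/proj/dev/dev-1700000000.history.json  # a prior update
find .pulumi -type f | sort
# prints:
#   .pulumi/history/proj/dev/dev-1700000000.history.json
#   .pulumi/locks/proj/dev/lock-0001.json
#   .pulumi/meta.yaml
#   .pulumi/stacks/proj/dev.json
```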

    The exact format of <backend-url> differs by backend, and each backend supports different options (such as how to authenticate), as described below.

    Local filesystem

    To use the filesystem backend to store your state files locally on your machine, pass the --local flag when logging in:

    $ pulumi login --local
    

    You will see Logged into <my-machine> as <my-user> (file://~), where <my-machine> and <my-user> are your machine and user names, respectively. All subsequent stack state will be stored as JSON files locally on your machine.

    The default directory for these JSON files is ~/.pulumi. To store state files in an alternative location, specify a file://<path> URL instead, where <path> is the full path to the target directory where state files will be stored. For instance, to store state underneath /app/data/.pulumi/ instead, run:

    $ pulumi login file:///app/data
    
    If you use a relative path (e.g. file://./einstein), it will be relative to the current working directory.

    Note that pulumi login --local is syntactic sugar for pulumi login file://~.
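    As an alternative to running pulumi login, Pulumi also honors the PULUMI_BACKEND_URL environment variable, which selects a backend for the current shell session. The path below is a hypothetical example:

```shell
# Select a DIY backend for this shell session without running
# `pulumi login`. The path is a hypothetical example.
export PULUMI_BACKEND_URL="file:///app/data"
echo "$PULUMI_BACKEND_URL"
# prints: file:///app/data
```

    This is convenient in CI, where each job can point at its backend without mutating shared login state.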

    AWS S3

    To use the AWS S3 backend, pass s3://<bucket-name> as your <backend-url>:

    $ pulumi login s3://<bucket-name>
    

    As of Pulumi CLI v3.33.1, instead of specifying the AWS profile through your environment, you can add awssdk=v2 along with the region and profile to the query string. Quote the URL so the shell does not interpret the & operator:

    $ pulumi login 's3://<bucket-name>?region=us-east-1&awssdk=v2&profile=<profile-name>'
    
    The bucket-name value can include multiple folders, such as my-bucket/app/project1. This is useful when storing multiple projects’ state in the same bucket.
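    For instance, a single bucket can hold state for several projects under separate prefixes. The bucket and project names below are hypothetical:

```shell
# One bucket, one folder per project; all names are hypothetical.
BUCKET="my-bucket"
for PROJECT in project1 project2; do
  echo "pulumi login s3://${BUCKET}/app/${PROJECT}"
done
# prints:
#   pulumi login s3://my-bucket/app/project1
#   pulumi login s3://my-bucket/app/project2
```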

    To configure credentials and authorize access, please see the AWS Session documentation. For additional configuration options, see AWS Setup. If you’re new to AWS S3, see the AWS documentation.

    This backend also supports alternative object storage servers with S3-compatible REST APIs, including MinIO, Ceph, and SeaweedFS. To use such a server, pass the endpoint, disableSSL, and s3ForcePathStyle query string parameters in your <backend-url>, as follows:

    $ pulumi login 's3://<bucket-name>?endpoint=my.minio.local:8080&disableSSL=true&s3ForcePathStyle=true'
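    As a sketch, such a URL can be assembled from its parts. The bucket and endpoint below are hypothetical, and disableSSL=true is only appropriate for local testing:

```shell
# Assemble a backend URL for an S3-compatible server (MinIO here).
# Bucket and endpoint are hypothetical; disableSSL=true is for local
# testing only.
S3_BUCKET="pulumi-state"
S3_ENDPOINT="my.minio.local:8080"
BACKEND_URL="s3://${S3_BUCKET}?endpoint=${S3_ENDPOINT}&disableSSL=true&s3ForcePathStyle=true"
# Single-quote the URL on the command line so the shell does not treat
# '&' as a background operator:
echo "pulumi login '${BACKEND_URL}'"
```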
    

    Azure Blob Storage

    To use the Azure Blob Storage backend, pass azblob://<container-path> as your <backend-url>:

    $ pulumi login azblob://<container-path>
    

    Set the AZURE_STORAGE_ACCOUNT environment variable to specify which Azure storage account to use. For authentication, you may set AZURE_STORAGE_KEY (a storage account access key) or AZURE_STORAGE_SAS_TOKEN (a shared access signature token). If neither is provided, the backend authenticates using Azure SDK for Go’s DefaultAzureCredential, which attempts a series of methods in order, including managed identity, workload identity federation, service principal credentials (AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET), and the Azure CLI. If you’re new to Azure Blob Storage, see the Azure documentation.

    This backend authenticates using Azure SDK for Go, not the Pulumi Azure provider’s authentication mechanism. The Azure provider’s environment variables — such as ARM_TENANT_ID, ARM_CLIENT_ID, and ARM_USE_OIDC — are not supported. Use the Azure SDK’s own environment variables (AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET) for service principal authentication.
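    For service principal authentication, that setup might look like the following sketch. Every value here is a placeholder:

```shell
# Service-principal auth for the azblob backend uses the Azure SDK's
# AZURE_* variables (not the Azure provider's ARM_* ones). All values
# below are placeholders.
export AZURE_STORAGE_ACCOUNT="mystateaccount"
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_ID="11111111-1111-1111-1111-111111111111"
export AZURE_CLIENT_SECRET="placeholder-secret"
echo "pulumi login azblob://pulumi-state"
```

    In practice, read AZURE_CLIENT_SECRET from a secret store rather than hard-coding it in a script.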

    As of Pulumi CLI v3.41.1, you can also specify the storage account directly in the backend URL after authenticating with az login:

    $ pulumi login azblob://<container-path>?storage_account=account_name
    
    The Azure account must have the Storage Blob Data Contributor role or an equivalent role with permissions to read, write, and delete blobs.

    Google Cloud Storage

    To use the Google Cloud Storage backend, pass gs://<bucket-path> as your <backend-url>:

    $ pulumi login gs://<bucket-path>
    

    To configure credentials for this backend, see Application Default Credentials. For additional configuration options, see Google Cloud Setup. If you’re new to Google Cloud Storage, see the Google Cloud documentation.

    PostgreSQL

    To use the PostgreSQL backend, pass postgres://<username>:<password>@<hostname>:<port>/<database> as your <backend-url>:

    $ pulumi login postgres://<username>:<password>@<hostname>:<port>/<database>
    
    Avoid including credentials directly in commands. Consider using environment variables or other secure credential management methods.
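    One way to keep the password off the command line is to assemble the URL from environment variables. The host, database, and credentials below are placeholders:

```shell
# Build the backend URL from environment variables so the password is
# not typed literally into the command. All values are placeholders;
# in practice, read PG_PASSWORD from a secret store rather than
# hard-coding it.
export PG_USER="pulumi"
export PG_PASSWORD="placeholder-password"
BACKEND_URL="postgres://${PG_USER}:${PG_PASSWORD}@db.internal:5432/pulumi_state"
echo "pulumi login '${BACKEND_URL}'"
```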

    For additional configuration options, see the README ↗.

    Scoping

    Versions of Pulumi prior to v3.61.0 placed stacks in a global namespace in DIY backends. This meant that you couldn’t share stack names (e.g. dev, prod, staging) across multiple projects in the same DIY backend. With Pulumi v3.61.0 and later, stacks created in new or empty DIY backends are scoped by project by default—same as the Pulumi Cloud backend.

    Existing DIY backends will continue to use the global namespace for stacks. You can upgrade an existing DIY backend to use project-scoped stacks using the pulumi state upgrade command. This command will upgrade all stacks in the backend to be scoped by project.

    pulumi state upgrade will make upgraded stacks inaccessible to older versions of Pulumi. This is a one-way operation. Once you have upgraded your backend, you cannot downgrade to the previous version.
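    The difference between the two layouts can be sketched as follows, using hypothetical project and stack names:

```shell
# Legacy (global-namespace) vs. project-scoped stack layout in a DIY
# backend's stacks/ directory. "proj" and "dev" are hypothetical names.
mkdir -p legacy/.pulumi/stacks scoped/.pulumi/stacks/proj
touch legacy/.pulumi/stacks/dev.json        # global: one "dev" per backend
touch scoped/.pulumi/stacks/proj/dev.json   # scoped: one "dev" per project
find legacy scoped -name dev.json | sort
# prints:
#   legacy/.pulumi/stacks/dev.json
#   scoped/.pulumi/stacks/proj/dev.json
```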