Viewing docs for DigitalOcean v4.65.0
published on Wednesday, Apr 29, 2026 by Pulumi
Returns the supported GPU and model compatibility matrix for dedicated inference endpoints. Use this data source to discover which models can be deployed on which GPU types.
Example Usage
import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";
const available = digitalocean.getDedicatedInferenceGpuModelConfig({});
export const gpuModelConfigs = available.then(available => available.gpuModelConfigs);
import pulumi
import pulumi_digitalocean as digitalocean
available = digitalocean.get_dedicated_inference_gpu_model_config()
pulumi.export("gpuModelConfigs", available.gpu_model_configs)
package main
import (
"github.com/pulumi/pulumi-digitalocean/sdk/v4/go/digitalocean"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
	available, err := digitalocean.GetDedicatedInferenceGpuModelConfig(ctx)
if err != nil {
return err
}
ctx.Export("gpuModelConfigs", available.GpuModelConfigs)
return nil
})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using DigitalOcean = Pulumi.DigitalOcean;
return await Deployment.RunAsync(() =>
{
    var available = DigitalOcean.GetDedicatedInferenceGpuModelConfig.Invoke();
return new Dictionary<string, object?>
{
["gpuModelConfigs"] = available.Apply(getDedicatedInferenceGpuModelConfigResult => getDedicatedInferenceGpuModelConfigResult.GpuModelConfigs),
};
});
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.digitalocean.DigitaloceanFunctions;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
        final var available = DigitaloceanFunctions.getDedicatedInferenceGpuModelConfig();
        ctx.export("gpuModelConfigs", available.applyValue(getDedicatedInferenceGpuModelConfigResult -> getDedicatedInferenceGpuModelConfigResult.gpuModelConfigs()));
}
}
variables:
available:
fn::invoke:
function: digitalocean:getDedicatedInferenceGpuModelConfig
arguments: {}
outputs:
gpuModelConfigs: ${available.gpuModelConfigs}
Using getDedicatedInferenceGpuModelConfig
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getDedicatedInferenceGpuModelConfig(opts?: InvokeOptions): Promise<GetDedicatedInferenceGpuModelConfigResult>
function getDedicatedInferenceGpuModelConfigOutput(opts?: InvokeOptions): Output<GetDedicatedInferenceGpuModelConfigResult>
def get_dedicated_inference_gpu_model_config(opts: Optional[InvokeOptions] = None) -> GetDedicatedInferenceGpuModelConfigResult
def get_dedicated_inference_gpu_model_config_output(opts: Optional[InvokeOptions] = None) -> Output[GetDedicatedInferenceGpuModelConfigResult]
func GetDedicatedInferenceGpuModelConfig(ctx *Context, opts ...InvokeOption) (*GetDedicatedInferenceGpuModelConfigResult, error)
func GetDedicatedInferenceGpuModelConfigOutput(ctx *Context, opts ...InvokeOption) GetDedicatedInferenceGpuModelConfigResultOutput
> Note: This function is named GetDedicatedInferenceGpuModelConfig in the Go SDK.
public static class GetDedicatedInferenceGpuModelConfig
{
public static Task<GetDedicatedInferenceGpuModelConfigResult> InvokeAsync(InvokeOptions? opts = null)
public static Output<GetDedicatedInferenceGpuModelConfigResult> Invoke(InvokeOptions? opts = null)
}
public static CompletableFuture<GetDedicatedInferenceGpuModelConfigResult> getDedicatedInferenceGpuModelConfig(InvokeOptions options)
public static Output<GetDedicatedInferenceGpuModelConfigResult> getDedicatedInferenceGpuModelConfig(InvokeOptions options)
fn::invoke:
function: digitalocean:index/getDedicatedInferenceGpuModelConfig:getDedicatedInferenceGpuModelConfig
arguments:
    # arguments dictionary
getDedicatedInferenceGpuModelConfig Result
The following output properties are available:
- GpuModelConfigs List<Pulumi.DigitalOcean.Outputs.GetDedicatedInferenceGpuModelConfigGpuModelConfig> - The list of supported GPU and model combinations. Each element contains:
- Id string - The provider-assigned unique ID for this managed resource.
- GpuModelConfigs []GetDedicatedInferenceGpuModelConfigGpuModelConfig - The list of supported GPU and model combinations. Each element contains:
- Id string - The provider-assigned unique ID for this managed resource.
- gpuModelConfigs List<GetDedicatedInferenceGpuModelConfigGpuModelConfig> - The list of supported GPU and model combinations. Each element contains:
- id String - The provider-assigned unique ID for this managed resource.
- gpuModelConfigs GetDedicatedInferenceGpuModelConfigGpuModelConfig[] - The list of supported GPU and model combinations. Each element contains:
- id string - The provider-assigned unique ID for this managed resource.
- gpu_model_configs Sequence[GetDedicatedInferenceGpuModelConfigGpuModelConfig] - The list of supported GPU and model combinations. Each element contains:
- id str - The provider-assigned unique ID for this managed resource.
- gpuModelConfigs List<Property Map> - The list of supported GPU and model combinations. Each element contains:
- id String - The provider-assigned unique ID for this managed resource.
Supporting Types
GetDedicatedInferenceGpuModelConfigGpuModelConfig
- GpuSlugs List<string> - The GPU slugs that support this model.
- IsModelGated bool - Whether the model requires gated access (e.g. a HuggingFace token).
- ModelName string - The human-readable name of the model.
- ModelSlug string - The slug identifier for the model.
- GpuSlugs []string - The GPU slugs that support this model.
- IsModelGated bool - Whether the model requires gated access (e.g. a HuggingFace token).
- ModelName string - The human-readable name of the model.
- ModelSlug string - The slug identifier for the model.
- gpuSlugs List<String> - The GPU slugs that support this model.
- isModelGated Boolean - Whether the model requires gated access (e.g. a HuggingFace token).
- modelName String - The human-readable name of the model.
- modelSlug String - The slug identifier for the model.
- gpuSlugs string[] - The GPU slugs that support this model.
- isModelGated boolean - Whether the model requires gated access (e.g. a HuggingFace token).
- modelName string - The human-readable name of the model.
- modelSlug string - The slug identifier for the model.
- gpu_slugs Sequence[str] - The GPU slugs that support this model.
- is_model_gated bool - Whether the model requires gated access (e.g. a HuggingFace token).
- model_name str - The human-readable name of the model.
- model_slug str - The slug identifier for the model.
- gpuSlugs List<String> - The GPU slugs that support this model.
- isModelGated Boolean - Whether the model requires gated access (e.g. a HuggingFace token).
- modelName String - The human-readable name of the model.
- modelSlug String - The slug identifier for the model.
Package Details
- Repository
- DigitalOcean pulumi/pulumi-digitalocean
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the digitalocean Terraform Provider.