Viewing docs for gcore 2.0.0-alpha.3
published on Monday, Mar 30, 2026 by g-core
Inference flavors define the GPU and CPU resource configurations available for inference deployments.
Example Usage
TypeScript:
import * as pulumi from "@pulumi/pulumi";
import * as gcore from "@pulumi/gcore";

const exampleCloudInferenceFlavors = gcore.getCloudInferenceFlavors({});
Python:
import pulumi
import pulumi_gcore as gcore

example_cloud_inference_flavors = gcore.get_cloud_inference_flavors()
Go:
package main

import (
	"github.com/pulumi/pulumi-terraform-provider/sdks/go/gcore/v2/gcore"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := gcore.GetCloudInferenceFlavors(ctx, &gcore.GetCloudInferenceFlavorsArgs{}, nil)
		if err != nil {
			return err
		}
		return nil
	})
}
C#:
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Gcore = Pulumi.Gcore;

return await Deployment.RunAsync(() =>
{
    var exampleCloudInferenceFlavors = Gcore.GetCloudInferenceFlavors.Invoke();
});
Java:
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.gcore.GcoreFunctions;
import com.pulumi.gcore.inputs.GetCloudInferenceFlavorsArgs;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        final var exampleCloudInferenceFlavors = GcoreFunctions.getCloudInferenceFlavors(GetCloudInferenceFlavorsArgs.builder()
            .build());
    }
}
YAML:
variables:
  exampleCloudInferenceFlavors:
    fn::invoke:
      function: gcore:getCloudInferenceFlavors
      arguments: {}
Using getCloudInferenceFlavors
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
TypeScript:
function getCloudInferenceFlavors(args: GetCloudInferenceFlavorsArgs, opts?: InvokeOptions): Promise<GetCloudInferenceFlavorsResult>
function getCloudInferenceFlavorsOutput(args: GetCloudInferenceFlavorsOutputArgs, opts?: InvokeOptions): Output<GetCloudInferenceFlavorsResult>

Python:
def get_cloud_inference_flavors(max_items: Optional[float] = None,
                                opts: Optional[InvokeOptions] = None) -> GetCloudInferenceFlavorsResult
def get_cloud_inference_flavors_output(max_items: Optional[pulumi.Input[float]] = None,
                                       opts: Optional[InvokeOptions] = None) -> Output[GetCloudInferenceFlavorsResult]

Go:
func GetCloudInferenceFlavors(ctx *Context, args *GetCloudInferenceFlavorsArgs, opts ...InvokeOption) (*GetCloudInferenceFlavorsResult, error)
func GetCloudInferenceFlavorsOutput(ctx *Context, args *GetCloudInferenceFlavorsOutputArgs, opts ...InvokeOption) GetCloudInferenceFlavorsResultOutput

> Note: This function is named GetCloudInferenceFlavors in the Go SDK.

C#:
public static class GetCloudInferenceFlavors
{
    public static Task<GetCloudInferenceFlavorsResult> InvokeAsync(GetCloudInferenceFlavorsArgs args, InvokeOptions? opts = null)
    public static Output<GetCloudInferenceFlavorsResult> Invoke(GetCloudInferenceFlavorsInvokeArgs args, InvokeOptions? opts = null)
}

Java:
public static CompletableFuture<GetCloudInferenceFlavorsResult> getCloudInferenceFlavors(GetCloudInferenceFlavorsArgs args, InvokeOptions options)
public static Output<GetCloudInferenceFlavorsResult> getCloudInferenceFlavors(GetCloudInferenceFlavorsArgs args, InvokeOptions options)
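The distinction between the two forms can be illustrated with a small self-contained sketch. This is plain Python standing in for the Pulumi SDK, not the SDK itself: the `Output` class, the stub functions, and the flavor name below are all invented for illustration. The direct form returns a plain result object you can use immediately; the output form wraps the result so transformations can be attached before the value resolves.

```python
# Conceptual sketch of Pulumi's two invocation forms, using plain-Python
# stand-ins rather than the real SDK. The dict plays the role of
# GetCloudInferenceFlavorsResult; "Output" loosely mimics pulumi.Output.

class Output:
    """Minimal stand-in for pulumi.Output: a value plus chainable transforms."""
    def __init__(self, value):
        self._value = value

    def apply(self, fn):
        # Real Outputs resolve asynchronously; here we transform eagerly.
        return Output(fn(self._value))

    def resolve(self):
        # Helper for this sketch only; real Outputs are resolved by the engine.
        return self._value

def get_cloud_inference_flavors_direct(max_items=1000):
    # Direct form: blocks and returns a plain result (flavor name is made up).
    return {"items": [{"name": "example-1xgpu-flavor"}], "max_items": max_items}

def get_cloud_inference_flavors_output(max_items=1000):
    # Output form: wraps the same result so callers can chain .apply()
    # before the value is available.
    return Output(get_cloud_inference_flavors_direct(max_items))

# Direct form: use the result immediately.
direct = get_cloud_inference_flavors_direct(max_items=10)
first_name = direct["items"][0]["name"]

# Output form: compose a transformation that runs once the result resolves.
names = get_cloud_inference_flavors_output(max_items=10).apply(
    lambda r: [item["name"] for item in r["items"]]
)
```

In real Pulumi programs the output form is what lets the invoke's arguments themselves be Outputs of other resources, since the engine defers the call until those inputs resolve.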
YAML:
fn::invoke:
  function: gcore:index/getCloudInferenceFlavors:getCloudInferenceFlavors
  arguments:
    # arguments dictionary

The following arguments are supported:
- maxItems (MaxItems double in C#, MaxItems float64 in Go, maxItems Double in Java, maxItems number in TypeScript, max_items float in Python, maxItems Number in YAML) - Max items to fetch, default: 1000
getCloudInferenceFlavors Result
The following output properties are available (property casing and numeric types vary by language: PascalCase in C# and Go, camelCase in Java, TypeScript, and YAML, snake_case in Python):

- id (string / String / str) - The provider-assigned unique ID for this managed resource.
- items (List<GetCloudInferenceFlavorsItem> in C# and Java, []GetCloudInferenceFlavorsItem in Go, GetCloudInferenceFlavorsItem[] in TypeScript, Sequence[GetCloudInferenceFlavorsItem] in Python, List<Property Map> in YAML) - The items returned by the data source
- maxItems (double / float64 / Double / number / float / Number) - Max items to fetch, default: 1000
Supporting Types
GetCloudInferenceFlavorsItem
Property casing and numeric types vary by language in the same way as above (PascalCase in C# and Go, camelCase in Java, TypeScript, and YAML, snake_case in Python):

- cpu (double / float64 / Double / number / float / Number) - Inference flavor cpu count.
- description (string / String / str) - Inference flavor description.
- gpu (double / float64 / Double / number / float / Number) - Inference flavor gpu count.
- gpuComputeCapability (string / String / str) - Inference flavor gpu compute capability.
- gpuMemory (double / float64 / Double / number / float / Number) - Inference flavor gpu memory in Gi.
- gpuModel (string / String / str) - Inference flavor gpu model.
- isGpuShared (bool / Boolean / boolean) - Inference flavor is gpu shared.
- memory (double / float64 / Double / number / float / Number) - Inference flavor memory in Gi.
- name (string / String / str) - Inference flavor name.
Package Details
- Repository
- gcore g-core/terraform-provider-gcore
- License
- Notes
- This Pulumi package is based on the gcore Terraform Provider.