This is the latest version of Azure Native. Use the Azure Native v1 docs if using the v1 version of this package.
Azure Native v2.76.0 published on Friday, Dec 6, 2024 by Pulumi

azure-native.media.getJob



    Gets a Job. Azure REST API version: 2022-07-01.

    Using getJob

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getJob(args: GetJobArgs, opts?: InvokeOptions): Promise<GetJobResult>
    function getJobOutput(args: GetJobOutputArgs, opts?: InvokeOptions): Output<GetJobResult>
    def get_job(account_name: Optional[str] = None,
                job_name: Optional[str] = None,
                resource_group_name: Optional[str] = None,
                transform_name: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetJobResult
    def get_job_output(account_name: Optional[pulumi.Input[str]] = None,
                       job_name: Optional[pulumi.Input[str]] = None,
                       resource_group_name: Optional[pulumi.Input[str]] = None,
                       transform_name: Optional[pulumi.Input[str]] = None,
                       opts: Optional[InvokeOptions] = None) -> Output[GetJobResult]
    func LookupJob(ctx *Context, args *LookupJobArgs, opts ...InvokeOption) (*LookupJobResult, error)
    func LookupJobOutput(ctx *Context, args *LookupJobOutputArgs, opts ...InvokeOption) LookupJobResultOutput

    > Note: This function is named LookupJob in the Go SDK.

    public static class GetJob 
    {
        public static Task<GetJobResult> InvokeAsync(GetJobArgs args, InvokeOptions? opts = null)
        public static Output<GetJobResult> Invoke(GetJobInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetJobResult> getJob(GetJobArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: azure-native:media:getJob
      arguments:
        # arguments dictionary
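As a sketch of the YAML form above, the skeleton can be filled in like this (all resource names here are hypothetical placeholders, not values from this document):

```yaml
variables:
  job:
    fn::invoke:
      function: azure-native:media:getJob
      arguments:
        accountName: myMediaAccount        # hypothetical Media Services account name
        transformName: myTransform         # hypothetical Transform name
        jobName: myJob                     # hypothetical Job name
        resourceGroupName: myResourceGroup # hypothetical resource group name
outputs:
  jobState: ${job.state}
```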

    The following arguments are supported:

    AccountName string
    The Media Services account name.
    JobName string
    The Job name.
    ResourceGroupName string
    The name of the resource group within the Azure subscription.
    TransformName string
    The Transform name.
    AccountName string
    The Media Services account name.
    JobName string
    The Job name.
    ResourceGroupName string
    The name of the resource group within the Azure subscription.
    TransformName string
    The Transform name.
    accountName String
    The Media Services account name.
    jobName String
    The Job name.
    resourceGroupName String
    The name of the resource group within the Azure subscription.
    transformName String
    The Transform name.
    accountName string
    The Media Services account name.
    jobName string
    The Job name.
    resourceGroupName string
    The name of the resource group within the Azure subscription.
    transformName string
    The Transform name.
    account_name str
    The Media Services account name.
    job_name str
    The Job name.
    resource_group_name str
    The name of the resource group within the Azure subscription.
    transform_name str
    The Transform name.
    accountName String
    The Media Services account name.
    jobName String
    The Job name.
    resourceGroupName String
    The name of the resource group within the Azure subscription.
    transformName String
    The Transform name.

    getJob Result

    The following output properties are available:

    Created string
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    EndTime string
    The UTC date and time at which this Job finished processing.
    Id string
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    Input Pulumi.AzureNative.Media.Outputs.JobInputAssetResponse | Pulumi.AzureNative.Media.Outputs.JobInputClipResponse | Pulumi.AzureNative.Media.Outputs.JobInputHttpResponse | Pulumi.AzureNative.Media.Outputs.JobInputSequenceResponse | Pulumi.AzureNative.Media.Outputs.JobInputsResponse
    The inputs for the Job.
    LastModified string
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    Name string
    The name of the resource
    Outputs List<Pulumi.AzureNative.Media.Outputs.JobOutputAssetResponse>
    The outputs for the Job.
    StartTime string
    The UTC date and time at which this Job began processing.
    State string
    The current state of the job.
    SystemData Pulumi.AzureNative.Media.Outputs.SystemDataResponse
    The system metadata relating to this resource.
    Type string
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    CorrelationData Dictionary<string, string>
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    Description string
    Optional customer supplied description of the Job.
    Priority string
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
    Created string
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    EndTime string
    The UTC date and time at which this Job finished processing.
    Id string
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    Input JobInputAssetResponse | JobInputClipResponse | JobInputHttpResponse | JobInputSequenceResponse | JobInputsResponse
    The inputs for the Job.
    LastModified string
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    Name string
    The name of the resource
    Outputs []JobOutputAssetResponse
    The outputs for the Job.
    StartTime string
    The UTC date and time at which this Job began processing.
    State string
    The current state of the job.
    SystemData SystemDataResponse
    The system metadata relating to this resource.
    Type string
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    CorrelationData map[string]string
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    Description string
    Optional customer supplied description of the Job.
    Priority string
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
    created String
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    endTime String
    The UTC date and time at which this Job finished processing.
    id String
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    input JobInputAssetResponse | JobInputClipResponse | JobInputHttpResponse | JobInputSequenceResponse | JobInputsResponse
    The inputs for the Job.
    lastModified String
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    name String
    The name of the resource
    outputs List<JobOutputAssetResponse>
    The outputs for the Job.
    startTime String
    The UTC date and time at which this Job began processing.
    state String
    The current state of the job.
    systemData SystemDataResponse
    The system metadata relating to this resource.
    type String
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    correlationData Map<String,String>
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    description String
    Optional customer supplied description of the Job.
    priority String
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
    created string
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    endTime string
    The UTC date and time at which this Job finished processing.
    id string
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    input JobInputAssetResponse | JobInputClipResponse | JobInputHttpResponse | JobInputSequenceResponse | JobInputsResponse
    The inputs for the Job.
    lastModified string
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    name string
    The name of the resource
    outputs JobOutputAssetResponse[]
    The outputs for the Job.
    startTime string
    The UTC date and time at which this Job began processing.
    state string
    The current state of the job.
    systemData SystemDataResponse
    The system metadata relating to this resource.
    type string
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    correlationData {[key: string]: string}
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    description string
    Optional customer supplied description of the Job.
    priority string
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
    created str
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    end_time str
    The UTC date and time at which this Job finished processing.
    id str
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    input JobInputAssetResponse | JobInputClipResponse | JobInputHttpResponse | JobInputSequenceResponse | JobInputsResponse
    The inputs for the Job.
    last_modified str
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    name str
    The name of the resource
    outputs Sequence[JobOutputAssetResponse]
    The outputs for the Job.
    start_time str
    The UTC date and time at which this Job began processing.
    state str
    The current state of the job.
    system_data SystemDataResponse
    The system metadata relating to this resource.
    type str
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    correlation_data Mapping[str, str]
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    description str
    Optional customer supplied description of the Job.
    priority str
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
    created String
    The UTC date and time when the customer has created the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    endTime String
    The UTC date and time at which this Job finished processing.
    id String
    Fully qualified resource ID for the resource. Ex - /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}
    input Property Map | Property Map | Property Map | Property Map | Property Map
    The inputs for the Job.
    lastModified String
    The UTC date and time when the customer has last updated the Job, in 'YYYY-MM-DDThh:mm:ssZ' format.
    name String
    The name of the resource
    outputs List<Property Map>
    The outputs for the Job.
    startTime String
    The UTC date and time at which this Job began processing.
    state String
    The current state of the job.
    systemData Property Map
    The system metadata relating to this resource.
    type String
    The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
    correlationData Map<String>
    Customer-provided key/value pairs that will be returned in Job and JobOutput state events.
    description String
    Optional customer supplied description of the Job.
    priority String
    Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
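The Created, LastModified, StartTime, and EndTime properties above all use the 'YYYY-MM-DDThh:mm:ssZ' format. A minimal sketch of parsing such a value with Python's standard library (the sample timestamp is made up for illustration):

```python
from datetime import datetime, timezone

def parse_job_timestamp(value: str) -> datetime:
    """Parse a 'YYYY-MM-DDThh:mm:ssZ' timestamp into a timezone-aware UTC datetime."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

created = parse_job_timestamp("2024-12-06T10:15:30Z")
print(created.year, created.tzinfo)  # 2024 UTC
```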

    Supporting Types

    AacAudioResponse

    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Profile string
    The encoding profile to be used when encoding audio with AAC.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Profile string
    The encoding profile to be used when encoding audio with AAC.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    bitrate Integer
    The bitrate, in bits per second, of the output encoded audio.
    channels Integer
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    profile String
    The encoding profile to be used when encoding audio with AAC.
    samplingRate Integer
    The sampling rate to use for encoding in hertz.
    bitrate number
    The bitrate, in bits per second, of the output encoded audio.
    channels number
    The number of channels in the audio.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    profile string
    The encoding profile to be used when encoding audio with AAC.
    samplingRate number
    The sampling rate to use for encoding in hertz.
    bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    channels int
    The number of channels in the audio.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    profile str
    The encoding profile to be used when encoding audio with AAC.
    sampling_rate int
    The sampling rate to use for encoding in hertz.
    bitrate Number
    The bitrate, in bits per second, of the output encoded audio.
    channels Number
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    profile String
    The encoding profile to be used when encoding audio with AAC.
    samplingRate Number
    The sampling rate to use for encoding in hertz.

    AbsoluteClipTimeResponse

    Time string
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
    Time string
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
    time String
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
    time string
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
    time str
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
    time String
    The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.

    AudioAnalyzerPresetResponse

    AudioLanguage string
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    ExperimentalOptions Dictionary<string, string>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    AudioLanguage string
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    ExperimentalOptions map[string]string
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage String
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions Map<String,String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode String
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage string
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions {[key: string]: string}
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audio_language str
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimental_options Mapping[str, str]
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode str
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage String
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions Map<String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode String
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.

    AudioOverlayResponse

    InputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    AudioGainLevel double
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    End string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    FadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (the same as PT0S).
    FadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (the same as PT0S).
    Start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    InputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    AudioGainLevel float64
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    End string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    FadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (the same as PT0S).
    FadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (the same as PT0S).
    Start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel String
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel Double
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    end String
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration String
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (the same as PT0S).
    fadeOutDuration String
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (the same as PT0S).
    start String
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel number
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    end string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (the same as PT0S).
    fadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (the same as PT0S).
    start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    input_label str
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audio_gain_level float
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    end str
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fade_in_duration str
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (the same as PT0S).
    fade_out_duration str
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (the same as PT0S).
    start str
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel String
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel Number
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    end String
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration is greater than the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration String
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (same as PT0S).
    fadeOutDuration String
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (same as PT0S).
    start String
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
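    The start, end, and fade fields above all take ISO 8601 durations. A minimal Python sketch, using a hypothetical helper that is not part of the SDK, of how such values might be assembled:

    ```python
    # Hypothetical helpers (not SDK code): format overlay timing values as the
    # ISO 8601 durations expected by the start/end/fade fields above.
    def iso8601_seconds(seconds: float) -> str:
        """Render a duration in seconds as ISO 8601, e.g. 30 -> 'PT30S'.

        Note: 'PT5S' and the docs' 'PT05S' are both valid ISO 8601 spellings.
        """
        if seconds < 0:
            raise ValueError("durations must be non-negative")
        value = int(seconds) if float(seconds).is_integer() else seconds
        return f"PT{value}S"

    def overlay_timing(start_s=None, end_s=None, fade_in_s=None, fade_out_s=None):
        """Build the timing portion of an overlay, omitting unset fields so the
        service defaults (start of video, no fade in or out) apply."""
        fields = {
            "start": start_s,
            "end": end_s,
            "fadeInDuration": fade_in_s,
            "fadeOutDuration": fade_out_s,
        }
        return {k: iso8601_seconds(v) for k, v in fields.items() if v is not None}
    ```

    Leaving a field unset preserves the defaults described above: the overlay starts at the beginning of the input video and has no fade in or fade out.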

    AudioResponse

    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    bitrate Integer
    The bitrate, in bits per second, of the output encoded audio.
    channels Integer
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate Integer
    The sampling rate to use for encoding in hertz.
    bitrate number
    The bitrate, in bits per second, of the output encoded audio.
    channels number
    The number of channels in the audio.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate number
    The sampling rate to use for encoding in hertz.
    bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    channels int
    The number of channels in the audio.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    sampling_rate int
    The sampling rate to use for encoding in hertz.
    bitrate Number
    The bitrate, in bits per second, of the output encoded audio.
    channels Number
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate Number
    The sampling rate to use for encoding in hertz.
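    As an illustration of the properties above, a small Python sketch (hypothetical, not SDK code) that builds an Audio-codec-shaped settings dictionary; the default argument values here are assumptions for the example, not documented service defaults:

    ```python
    # Hypothetical sketch (not SDK code) of Audio-shaped codec settings using
    # the properties above. The defaults (128 kbps stereo at 48 kHz) are
    # illustrative assumptions only.
    def audio_settings(bitrate: int = 128_000, channels: int = 2,
                       sampling_rate: int = 48_000, label: str = None) -> dict:
        if channels < 1:
            raise ValueError("audio needs at least one channel")
        settings = {
            "bitrate": bitrate,             # bits per second of the encoded output
            "channels": channels,
            "samplingRate": sampling_rate,  # hertz
        }
        if label is not None:
            settings["label"] = label       # optional; used to control muxing
        return settings
    ```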

    AudioTrackDescriptorResponse

    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    channelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    channel_mapping str
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.

    BuiltInStandardEncoderPresetResponse

    PresetName string
    The built-in preset to be used for encoding videos.
    Configurations Pulumi.AzureNative.Media.Inputs.PresetConfigurationsResponse
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
    PresetName string
    The built-in preset to be used for encoding videos.
    Configurations PresetConfigurationsResponse
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
    presetName String
    The built-in preset to be used for encoding videos.
    configurations PresetConfigurationsResponse
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
    presetName string
    The built-in preset to be used for encoding videos.
    configurations PresetConfigurationsResponse
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
    preset_name str
    The built-in preset to be used for encoding videos.
    configurations PresetConfigurationsResponse
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
    presetName String
    The built-in preset to be used for encoding videos.
    configurations Property Map
    Optional configuration settings for the encoder. Configurations is only supported for the ContentAwareEncoding and H265ContentAwareEncoding BuiltInStandardEncoderPreset presets.
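    The constraint above (configurations only for the two content-aware presets) can be sketched as a small validation helper. This is illustrative Python, not SDK code:

    ```python
    from typing import Optional

    # Illustrative check (not SDK code) of the documented constraint:
    # `configurations` is only valid for the ContentAwareEncoding and
    # H265ContentAwareEncoding built-in presets.
    CONFIGURABLE_PRESETS = {"ContentAwareEncoding", "H265ContentAwareEncoding"}

    def built_in_preset(preset_name: str,
                        configurations: Optional[dict] = None) -> dict:
        if configurations is not None and preset_name not in CONFIGURABLE_PRESETS:
            raise ValueError(
                f"configurations is not supported for preset {preset_name!r}")
        preset = {"presetName": preset_name}
        if configurations is not None:
            preset["configurations"] = configurations
        return preset
    ```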

    CopyAudioResponse

    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.

    CopyVideoResponse

    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.

    DDAudioResponse

    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    Bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    Channels int
    The number of channels in the audio.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    SamplingRate int
    The sampling rate to use for encoding in hertz.
    bitrate Integer
    The bitrate, in bits per second, of the output encoded audio.
    channels Integer
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate Integer
    The sampling rate to use for encoding in hertz.
    bitrate number
    The bitrate, in bits per second, of the output encoded audio.
    channels number
    The number of channels in the audio.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate number
    The sampling rate to use for encoding in hertz.
    bitrate int
    The bitrate, in bits per second, of the output encoded audio.
    channels int
    The number of channels in the audio.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    sampling_rate int
    The sampling rate to use for encoding in hertz.
    bitrate Number
    The bitrate, in bits per second, of the output encoded audio.
    channels Number
    The number of channels in the audio.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    samplingRate Number
    The sampling rate to use for encoding in hertz.

    DeinterlaceResponse

    Mode string
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    Parity string
    The field parity for de-interlacing, defaults to Auto.
    Mode string
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    Parity string
    The field parity for de-interlacing, defaults to Auto.
    mode String
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    parity String
    The field parity for de-interlacing, defaults to Auto.
    mode string
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    parity string
    The field parity for de-interlacing, defaults to Auto.
    mode str
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    parity str
    The field parity for de-interlacing, defaults to Auto.
    mode String
    The deinterlacing mode. Defaults to AutoPixelAdaptive.
    parity String
    The field parity for de-interlacing, defaults to Auto.

    FaceDetectorPresetResponse

    BlurType string
    Blur type
    ExperimentalOptions Dictionary<string, string>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Mode string
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    Resolution string
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
    BlurType string
    Blur type
    ExperimentalOptions map[string]string
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Mode string
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    Resolution string
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
    blurType String
    Blur type
    experimentalOptions Map<String,String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode String
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    resolution String
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
    blurType string
    Blur type
    experimentalOptions {[key: string]: string}
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode string
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    resolution string
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
    blur_type str
    Blur type
    experimental_options Mapping[str, str]
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode str
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    resolution str
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
    blurType String
    Blur type
    experimentalOptions Map<String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    mode String
    This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction.
    resolution String
    Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
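    The "StandardDefinition" behavior described above (downscale only when the input exceeds SD, preserving aspect ratio, e.g. 1920x1080 to 640x360) can be sketched as follows. This is an illustrative approximation, not the service's implementation; the 640x360 target box is an assumption taken from the 16:9 example in the description:

    ```python
    # Illustrative sketch (not SDK code) of the "StandardDefinition" resize
    # rule: scale down to fit an assumed 640x360 SD box while preserving the
    # aspect ratio, and only when the input is larger than that box.
    SD_WIDTH, SD_HEIGHT = 640, 360  # assumed 16:9 target per the docs' example

    def analysis_resolution(width: int, height: int) -> tuple:
        scale = min(SD_WIDTH / width, SD_HEIGHT / height)
        if scale >= 1:  # already SD or smaller: keep the original resolution
            return width, height
        return round(width * scale), round(height * scale)
    ```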

    FadeResponse

    Duration string
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    FadeColor string
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    Start string
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
    Duration string
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    FadeColor string
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    Start string
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
    duration String
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    fadeColor String
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    start String
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
    duration string
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    fadeColor string
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    start string
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
    duration str
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    fade_color str
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    start str
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
    duration String
    The duration of the fade effect in the video. The value can be in ISO 8601 format (for example, PT05S to fade in/out over 5 seconds), a frame count (for example, 10 to fade 10 frames from the start time), or a value relative to the stream duration (for example, 10% to fade over 10% of the stream duration).
    fadeColor String
    The color for the fade in/out. It can be one of the CSS Level 1 colors (https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/color_keywords) or an RGB/hex value, for example rgb(255,0,0), 0xFF0000, or #FF0000.
    start String
    The position in the input video from where to start the fade. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Default is 0.
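    A Fade start or duration value thus comes in one of three forms: an ISO 8601 duration, an absolute frame count, or a percentage of the stream duration. A small illustrative classifier (not SDK code) for telling them apart:

    ```python
    # Illustrative classifier (not SDK code) for the three value forms a Fade
    # start/duration accepts, per the descriptions above.
    def fade_value_kind(value: str) -> str:
        if value.startswith("P"):   # ISO 8601 duration, e.g. "PT5S"
            return "iso8601"
        if value.endswith("%"):     # relative to stream duration, e.g. "10%"
            return "percent"
        if value.isdigit():         # absolute frame count, e.g. "10"
            return "frames"
        raise ValueError(f"unrecognized fade value: {value!r}")
    ```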

    FiltersResponse

    Crop Pulumi.AzureNative.Media.Inputs.RectangleResponse
    The parameters for the rectangular window with which to crop the input video.
    Deinterlace Pulumi.AzureNative.Media.Inputs.DeinterlaceResponse
    The de-interlacing settings.
    FadeIn Pulumi.AzureNative.Media.Inputs.FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    FadeOut Pulumi.AzureNative.Media.Inputs.FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    Overlays List<Union<Pulumi.AzureNative.Media.Inputs.AudioOverlayResponse, Pulumi.AzureNative.Media.Inputs.VideoOverlayResponse>>
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    Rotation string
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
    Crop RectangleResponse
    The parameters for the rectangular window with which to crop the input video.
    Deinterlace DeinterlaceResponse
    The de-interlacing settings.
    FadeIn FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    FadeOut FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    Overlays []interface{}
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    Rotation string
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
    crop RectangleResponse
    The parameters for the rectangular window with which to crop the input video.
    deinterlace DeinterlaceResponse
    The de-interlacing settings.
    fadeIn FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    fadeOut FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    overlays List<Either<AudioOverlayResponse,VideoOverlayResponse>>
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    rotation String
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
    crop RectangleResponse
    The parameters for the rectangular window with which to crop the input video.
    deinterlace DeinterlaceResponse
    The de-interlacing settings.
    fadeIn FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    fadeOut FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    overlays (AudioOverlayResponse | VideoOverlayResponse)[]
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    rotation string
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
    crop RectangleResponse
    The parameters for the rectangular window with which to crop the input video.
    deinterlace DeinterlaceResponse
    The de-interlacing settings.
    fade_in FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    fade_out FadeResponse
    Describes the properties of a Fade effect applied to the input media.
    overlays Sequence[Union[AudioOverlayResponse, VideoOverlayResponse]]
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    rotation str
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
    crop Property Map
    The parameters for the rectangular window with which to crop the input video.
    deinterlace Property Map
    The de-interlacing settings.
    fadeIn Property Map
    Describes the properties of a Fade effect applied to the input media.
    fadeOut Property Map
    Describes the properties of a Fade effect applied to the input media.
    overlays List<Property Map | Property Map>
    The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    rotation String
    The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
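    The Filters settings above compose crop, deinterlace, fade, overlay, and rotation options. A hypothetical Python sketch (not SDK code) assembling Fade- and Filters-shaped dictionaries; field names mirror the camelCase properties:

    ```python
    # Hypothetical sketch (not SDK code): composing Filters-shaped settings
    # from the pieces above.
    def fade(duration: str, start: str = "0", fade_color: str = "#000000") -> dict:
        """A Fade-shaped dict; start defaults to 0 per the description above.
        The black fade_color default is an assumption for this example."""
        return {"duration": duration, "start": start, "fadeColor": fade_color}

    def filters(rotation: str = "Auto", fade_in: dict = None,
                fade_out: dict = None) -> dict:
        """A Filters-shaped dict; unset effects are omitted so service
        defaults apply. Rotation defaults to Auto per the table above."""
        result = {"rotation": rotation}
        if fade_in is not None:
            result["fadeIn"] = fade_in
        if fade_out is not None:
            result["fadeOut"] = fade_out
        return result
    ```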

    FromAllInputFileResponse

    IncludedTracks List<object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    IncludedTracks []interface{}
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks List<Object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks (AudioTrackDescriptorResponse | SelectAudioTrackByAttributeResponse | SelectAudioTrackByIdResponse | SelectVideoTrackByAttributeResponse | SelectVideoTrackByIdResponse | VideoTrackDescriptorResponse)[]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    included_tracks Sequence[Union[AudioTrackDescriptorResponse, SelectAudioTrackByAttributeResponse, SelectAudioTrackByIdResponse, SelectVideoTrackByAttributeResponse, SelectVideoTrackByIdResponse, VideoTrackDescriptorResponse]]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks List<Property Map | Property Map | Property Map | Property Map | Property Map | Property Map>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.

    FromEachInputFileResponse

    IncludedTracks List<object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    IncludedTracks []interface{}
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks List<Object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks (AudioTrackDescriptorResponse | SelectAudioTrackByAttributeResponse | SelectAudioTrackByIdResponse | SelectVideoTrackByAttributeResponse | SelectVideoTrackByIdResponse | VideoTrackDescriptorResponse)[]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    included_tracks Sequence[Union[AudioTrackDescriptorResponse, SelectAudioTrackByAttributeResponse, SelectAudioTrackByIdResponse, SelectVideoTrackByAttributeResponse, SelectVideoTrackByIdResponse, VideoTrackDescriptorResponse]]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    includedTracks List<Property Map | Property Map | Property Map | Property Map | Property Map | Property Map>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
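    The includedTracks value is a union of the audio and video track descriptor types listed above. A minimal client-side sketch of partitioning such a list, assuming each descriptor is a plain dict carrying an "odataType" discriminator key (that key name is an assumption for illustration, not confirmed by this page):

    ```python
    # Descriptor type names taken from the union members documented above; the
    # "odataType" discriminator key on each dict is a hypothetical convention.
    AUDIO_TYPES = {
        "AudioTrackDescriptor",
        "SelectAudioTrackByAttribute",
        "SelectAudioTrackById",
    }
    VIDEO_TYPES = {
        "VideoTrackDescriptor",
        "SelectVideoTrackByAttribute",
        "SelectVideoTrackById",
    }

    def partition_tracks(included_tracks):
        """Split a mixed includedTracks list into audio and video descriptors."""
        audio, video = [], []
        for track in included_tracks:
            kind = track.get("odataType", "")
            if kind in AUDIO_TYPES:
                audio.append(track)
            elif kind in VIDEO_TYPES:
                video.append(track)
        return audio, video
    ```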

    H264LayerResponse

    Bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    AdaptiveBFrame bool
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    BFrames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    BufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    Crf double
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    EntropyMode string
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    FrameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    MaxBitrate int
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    Profile string
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    ReferenceFrames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    Slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    Bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    AdaptiveBFrame bool
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    BFrames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    BufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    Crf float64
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    EntropyMode string
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    FrameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    MaxBitrate int
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    Profile string
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    ReferenceFrames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    Slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    bitrate Integer
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    adaptiveBFrame Boolean
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames Integer
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow String
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf Double
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    entropyMode String
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    frameRate String
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level String
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate Integer
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile String
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    referenceFrames Integer
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices Integer
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    bitrate number
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    adaptiveBFrame boolean
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames number
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf number
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    entropyMode string
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    frameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate number
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile string
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    referenceFrames number
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices number
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    adaptive_b_frame bool
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    b_frames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    buffer_window str
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf float
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    entropy_mode str
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    frame_rate str
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height str
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label str
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level str
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    max_bitrate int
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile str
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    reference_frames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width str
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    bitrate Number
    The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
    adaptiveBFrame Boolean
    Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames Number
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow String
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf Number
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 23.
    entropyMode String
    The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
    frameRate String
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level String
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate Number
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile String
    We currently support Baseline, Main, High, High422, High444. Default is Auto.
    referenceFrames Number
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices Number
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
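    Two H264LayerResponse fields carry structured string values: BufferWindow is an ISO 8601 duration constrained to [0.1, 100] seconds (default PT5S), and FrameRate is either a ratio like 30000/1001 or a plain number like 29.97. A sketch of client-side parsing for these two formats (the helper names are illustrative, not part of the SDK):

    ```python
    import re

    def parse_buffer_window(value, default=5.0):
        """Parse the PT<n>S form of an ISO 8601 duration, e.g. 'PT5S'.

        Only the simple seconds form used by BufferWindow is handled; the value
        must fall in the documented [0.1, 100] second range.
        """
        if not value:
            return default
        match = re.fullmatch(r"PT(\d+(?:\.\d+)?)S", value)
        if not match:
            raise ValueError(f"unsupported duration: {value!r}")
        seconds = float(match.group(1))
        if not 0.1 <= seconds <= 100:
            raise ValueError(f"BufferWindow out of range [0.1, 100]: {seconds}")
        return seconds

    def parse_frame_rate(value):
        """Parse a frame rate given as 'M/N' (e.g. '30000/1001') or a number."""
        if "/" in value:
            numerator, denominator = value.split("/", 1)
            return int(numerator) / int(denominator)
        return float(value)
    ```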

    H264VideoResponse

    Complexity string
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers List<Pulumi.AzureNative.Media.Inputs.H264LayerResponse>
    The collection of output H.264 layers to be produced by the encoder.
    RateControlMode string
    The video rate control mode.
    SceneChangeDetection bool
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    SyncMode string
    The video sync mode.
    Complexity string
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers []H264LayerResponse
    The collection of output H.264 layers to be produced by the encoder.
    RateControlMode string
    The video rate control mode.
    SceneChangeDetection bool
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    SyncMode string
    The video sync mode.
    complexity String
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<H264LayerResponse>
    The collection of output H.264 layers to be produced by the encoder.
    rateControlMode String
    The video rate control mode.
    sceneChangeDetection Boolean
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
    complexity string
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    layers H264LayerResponse[]
    The collection of output H.264 layers to be produced by the encoder.
    rateControlMode string
    The video rate control mode.
    sceneChangeDetection boolean
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode string
    The video sync mode.
    complexity str
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    key_frame_interval str
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    layers Sequence[H264LayerResponse]
    The collection of output H.264 layers to be produced by the encoder.
    rate_control_mode str
    The video rate control mode.
    scene_change_detection bool
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretch_mode str
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    sync_mode str
    The video sync mode.
    complexity String
    Tells the encoder how to choose its encoding settings. The default value is Balanced.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, where the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<Property Map>
    The collection of output H.264 layers to be produced by the encoder.
    rateControlMode String
    The video rate control mode.
    sceneChangeDetection Boolean
    Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
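    The KeyFrameInterval description above has an interaction worth modeling: the setting is ignored under VideoSyncMode.Passthrough (the input source setting wins), otherwise it defaults to PT2S and must lie in [0.5, 20] seconds. A small sketch of that rule, assuming the same PT<n>S duration form (the function is illustrative, not part of the SDK):

    ```python
    import re

    def effective_key_frame_interval(sync_mode, key_frame_interval=None):
        """Return the key frame interval the encoder would honor, or None.

        Under Passthrough the configured value is ignored and the input source
        setting is followed; otherwise the default is PT2S and the value must
        be in the [0.5, 20] second range.
        """
        if sync_mode == "Passthrough":
            return None  # KeyFrameInterval follows the input source setting
        value = key_frame_interval or "PT2S"
        match = re.fullmatch(r"PT(\d+(?:\.\d+)?)S", value)
        if not match or not 0.5 <= float(match.group(1)) <= 20:
            raise ValueError(f"KeyFrameInterval out of range [0.5, 20]: {value!r}")
        return value
    ```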

    H265LayerResponse

    Bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 Kbps or 3 Mbps means this value should be 3000000. This is a required field.
    AdaptiveBFrame bool
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    BFrames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    BufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    Crf double
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    FrameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    MaxBitrate int
    The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    Profile string
    We currently support Main. Default is Auto.
    ReferenceFrames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    Slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    Bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 Kbps or 3 Mbps means this value should be 3000000. This is a required field.
    AdaptiveBFrame bool
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    BFrames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    BufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    Crf float64
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    FrameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    MaxBitrate int
    The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    Profile string
    We currently support Main. Default is Auto.
    ReferenceFrames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    Slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
    bitrate Integer
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 kbps or 3 Mbps corresponds to a value of 3000000. This is a required field.
    adaptiveBFrame Boolean
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames Integer
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow String
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf Double
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    frameRate String
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level String
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 level. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate Integer
    The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile String
    We currently support Main. Default is Auto.
    referenceFrames Integer
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices Integer
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
    bitrate number
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 kbps or 3 Mbps corresponds to a value of 3000000. This is a required field.
    adaptiveBFrame boolean
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames number
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow string
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf number
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    frameRate string
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level string
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 level. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate number
    The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile string
    We currently support Main. Default is Auto.
    referenceFrames number
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices number
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
    bitrate int
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 kbps or 3 Mbps corresponds to a value of 3000000. This is a required field.
    adaptive_b_frame bool
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    b_frames int
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    buffer_window str
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf float
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    frame_rate str
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height str
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label str
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level str
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 level. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    max_bitrate int
    The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile str
    We currently support Main. Default is Auto.
    reference_frames int
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices int
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width str
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
    bitrate Number
    The average bitrate in bits per second at which to encode the input video when generating this layer. For example, a target bitrate of 3000 kbps or 3 Mbps corresponds to a value of 3000000. This is a required field.
    adaptiveBFrame Boolean
    Specifies whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
    bFrames Number
    The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
    bufferWindow String
    The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
    crf Number
    The value of CRF to be used when encoding this layer. This setting takes effect when RateControlMode of video codec is set at CRF mode. The range of CRF value is between 0 and 51, where lower values would result in better quality, at the expense of higher file sizes. Higher values mean more compression, but at some point quality degradation will be noticed. Default value is 28.
    frameRate String
    The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    level String
    We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.265 level. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
    maxBitrate Number
    The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
    profile String
    We currently support Main. Default is Auto.
    referenceFrames Number
    The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
    slices Number
    The number of slices to be used when encoding this layer. If not specified, the default is zero, which means the encoder will use a single slice for each frame.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
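
    The layer constraints above can be sketched as a small validation helper. The function below is an illustrative, hypothetical example (not part of any SDK): it checks that bitrate is present, that crf falls in [0, 51] with the documented default of 28, that bufferWindow is an ISO 8601 duration in the [0.1, 100] second range, and that maxBitrate defaults to the same value as bitrate.

```python
import re

def validate_h265_layer(layer: dict) -> dict:
    """Return a normalized copy of `layer`, raising ValueError on bad input.

    Hypothetical helper mirroring the documented H265Layer constraints.
    """
    if "bitrate" not in layer:
        raise ValueError("bitrate is a required field")
    out = dict(layer)

    # CRF defaults to 28 and must lie in [0, 51].
    crf = out.get("crf", 28)
    if not 0 <= crf <= 51:
        raise ValueError("crf must be between 0 and 51")
    out["crf"] = crf

    # bufferWindow is ISO 8601 (e.g. PT5S); documented range is [0.1, 100] s.
    window = out.get("bufferWindow", "PT5S")
    m = re.fullmatch(r"PT(\d+(?:\.\d+)?)S", window)
    if not m or not 0.1 <= float(m.group(1)) <= 100:
        raise ValueError("bufferWindow must be an ISO 8601 duration in [0.1, 100] seconds")

    # maxBitrate defaults to the same value as bitrate.
    out.setdefault("maxBitrate", out["bitrate"])
    return out
```

    For example, validating `{"bitrate": 3000000}` fills in `crf` 28 and `maxBitrate` 3000000, while an out-of-range `crf` raises an error.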

    H265VideoResponse

    Complexity string
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers List<Pulumi.AzureNative.Media.Inputs.H265LayerResponse>
    The collection of output H.265 layers to be produced by the encoder.
    SceneChangeDetection bool
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    SyncMode string
    The video sync mode.
    Complexity string
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers []H265LayerResponse
    The collection of output H.265 layers to be produced by the encoder.
    SceneChangeDetection bool
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    SyncMode string
    The video sync mode.
    complexity String
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<H265LayerResponse>
    The collection of output H.265 layers to be produced by the encoder.
    sceneChangeDetection Boolean
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
    complexity string
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    layers H265LayerResponse[]
    The collection of output H.265 layers to be produced by the encoder.
    sceneChangeDetection boolean
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode string
    The video sync mode.
    complexity str
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    key_frame_interval str
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    layers Sequence[H265LayerResponse]
    The collection of output H.265 layers to be produced by the encoder.
    scene_change_detection bool
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretch_mode str
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    sync_mode str
    The video sync mode.
    complexity String
    Tells the encoder how to choose its encoding settings. Quality provides a higher compression ratio, at a higher cost and longer compute time. Speed produces a relatively larger file but is faster and more economical. The default value is Balanced.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<Property Map>
    The collection of output H.265 layers to be produced by the encoder.
    sceneChangeDetection Boolean
    Specifies whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
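
    The keyFrameInterval above interacts with a layer's frameRate: a PT2S interval at 30000/1001 fps spans about 60 frames. The following hypothetical helpers (assumed names, not SDK code) parse both frameRate forms described earlier - "M/N" and a plain number - and compute the key-frame distance in frames.

```python
from fractions import Fraction
import re

def parse_frame_rate(value: str) -> float:
    """Accepts '30000/1001', '30', or '29.97', as the frameRate docs describe."""
    if "/" in value:
        num, den = value.split("/")
        return float(Fraction(int(num), int(den)))
    return float(value)

def key_frame_distance(frame_rate: str, key_frame_interval: str = "PT2S") -> int:
    """Number of frames between key frames; PT2S is the documented default."""
    m = re.fullmatch(r"PT(\d+(?:\.\d+)?)S", key_frame_interval)
    if not m:
        raise ValueError("expected an ISO 8601 duration like PT2S")
    return round(parse_frame_rate(frame_rate) * float(m.group(1)))
```

    For example, key_frame_distance("30000/1001") and key_frame_distance("30") both come out to 60 frames at the default 2-second interval.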

    ImageFormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name of the input video file (the file suffix is not included) is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
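
    The macro expansion described above can be sketched roughly as follows. This is a hypothetical helper, not the service implementation: known macros are substituted, {Basename} is capped at 32 characters, and any unsubstituted macros are collapsed and removed.

```python
import re

def expand_filename_pattern(pattern: str, values: dict) -> str:
    """Rough sketch of filenamePattern macro expansion per the docs above."""
    values = dict(values)
    if "Basename" in values:
        # Documented behavior: base names longer than 32 characters are
        # truncated to the first 32 characters.
        values["Basename"] = values["Basename"][:32]

    def repl(match: re.Match) -> str:
        # Unknown/unsubstituted macros collapse to an empty string.
        return str(values.get(match.group(1), ""))

    return re.sub(r"\{(\w+)\}", repl, pattern)
```

    For instance, expanding "{Basename}_{Label}{Index}{Extension}" with a video-source basename, a label, and an extension but no thumbnail index drops the {Index} macro from the resulting filename.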

    ImageResponse

    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), a frame count (For example, 10 to start at the 10th frame), or a value relative to stream duration (For example, 10% to start at 10% of stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Range string
    The position, relative to the transform preset start time in the input video, at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), a frame count (For example, 300 to stop at the 300th frame from the frame at start time; if this value is 1, only one thumbnail is produced at start time), or a value relative to the stream duration (For example, 50% to stop at half of the stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), a frame count (For example, 30 for one image every 30 frames), or a value relative to stream duration (For example, 10% for one image every 10% of stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from start time as the first output. Since the default value is 10%, if the stream has a long duration the first generated thumbnail might be far from the one specified at start time. Select a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    SyncMode string
    The video sync mode.
    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), a frame count (For example, 10 to start at the 10th frame), or a value relative to stream duration (For example, 10% to start at 10% of stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Range string
    The position, relative to the transform preset start time in the input video, at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), a frame count (For example, 300 to stop at the 300th frame from the frame at start time; if this value is 1, only one thumbnail is produced at start time), or a value relative to the stream duration (For example, 50% to stop at half of the stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), a frame count (For example, 30 for one image every 30 frames), or a value relative to stream duration (For example, 10% for one image every 10% of stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from start time as the first output. Since the default value is 10%, if the stream has a long duration the first generated thumbnail might be far from the one specified at start time. Select a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize
    SyncMode string
    The Video Sync Mode
    start String
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    range String
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds after start time), a frame count (for example, 300 to stop at the 300th frame after the frame at start time; a value of 1 produces only one thumbnail, at start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration after start time). The default value is 100%, which means to stop at the end of the stream.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between start time and the Step position after start time as the first output. Since the default value is 10%, the first thumbnail of a long stream might be generated far from the position specified at start time. Select a reasonable Step value if the first thumbnail is expected to be close to start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    stretchMode String
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
    start string
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    range string
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds after start time), a frame count (for example, 300 to stop at the 300th frame after the frame at start time; a value of 1 produces only one thumbnail, at start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration after start time). The default value is 100%, which means to stop at the end of the stream.
    step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between start time and the Step position after start time as the first output. Since the default value is 10%, the first thumbnail of a long stream might be generated far from the position specified at start time. Select a reasonable Step value if the first thumbnail is expected to be close to start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    stretchMode string
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode string
    The video sync mode.
    start str
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    key_frame_interval str
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    range str
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds after start time), a frame count (for example, 300 to stop at the 300th frame after the frame at start time; a value of 1 produces only one thumbnail, at start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration after start time). The default value is 100%, which means to stop at the end of the stream.
    step str
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between start time and the Step position after start time as the first output. Since the default value is 10%, the first thumbnail of a long stream might be generated far from the position specified at start time. Select a reasonable Step value if the first thumbnail is expected to be close to start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    stretch_mode str
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    sync_mode str
    The video sync mode.
    start String
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; it produces only one thumbnail, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    range String
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds after start time), a frame count (for example, 300 to stop at the 300th frame after the frame at start time; a value of 1 produces only one thumbnail, at start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration after start time). The default value is 100%, which means to stop at the end of the stream.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between start time and the Step position after start time as the first output. Since the default value is 10%, the first thumbnail of a long stream might be generated far from the position specified at start time. Select a reasonable Step value if the first thumbnail is expected to be close to start time, or set the Range value to 1 if only one thumbnail is needed at start time.
    stretchMode String
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
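The Start, Step, and Range fields above all accept the same three value formats, plus the {Best} macro for Start. A minimal, hypothetical Python helper (not part of any SDK) that distinguishes them:

```python
import re

# Hypothetical helper, not part of any Pulumi/Azure SDK: classifies a
# thumbnail position value into the accepted formats described above.
def classify_thumbnail_position(value: str) -> str:
    if value == "{Best}":
        return "macro"     # encoder picks the best frame; only one thumbnail
    if re.fullmatch(r"PT[\dHMS.]+", value):
        return "iso8601"   # e.g. PT05S = 5 seconds
    if value.endswith("%"):
        return "percent"   # relative to stream duration, e.g. 10%
    if value.isdigit():
        return "frames"    # absolute frame count, e.g. 10
    raise ValueError(f"unrecognized position value: {value!r}")
```

The same classification applies to Step and Range, except that {Best} is only meaningful for Start.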

    InputFileResponse

    Filename string
    Name of the file that this input definition applies to.
    IncludedTracks List<object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    Filename string
    Name of the file that this input definition applies to.
    IncludedTracks []interface{}
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    filename String
    Name of the file that this input definition applies to.
    includedTracks List<Object>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    filename string
    Name of the file that this input definition applies to.
    includedTracks (AudioTrackDescriptorResponse | SelectAudioTrackByAttributeResponse | SelectAudioTrackByIdResponse | SelectVideoTrackByAttributeResponse | SelectVideoTrackByIdResponse | VideoTrackDescriptorResponse)[]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    filename str
    Name of the file that this input definition applies to.
    included_tracks Sequence[Union[AudioTrackDescriptorResponse, SelectAudioTrackByAttributeResponse, SelectAudioTrackByIdResponse, SelectVideoTrackByAttributeResponse, SelectVideoTrackByIdResponse, VideoTrackDescriptorResponse]]
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
    filename String
    Name of the file that this input definition applies to.
    includedTracks List<Property Map | Property Map | Property Map | Property Map | Property Map | Property Map>
    The list of TrackDescriptors which define the metadata and selection of tracks in the input.
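As a sketch of the shape described above, an InputFile pairs a filename with its TrackDescriptors. The dict below is illustrative only; the `@odata.type` discriminator and `trackId` field follow the Azure Media REST convention and are assumptions here, not guaranteed SDK output.

```python
# Illustrative only: an InputFile-style definition per the fields above.
# The "@odata.type" discriminator and "trackId" field are assumed from
# the Azure Media REST convention; verify against your SDK.
input_file = {
    "filename": "audio_layer.mp4",  # hypothetical filename
    "includedTracks": [
        {"@odata.type": "#Microsoft.Media.SelectAudioTrackById", "trackId": 0},
    ],
}

def track_count(definition: dict) -> int:
    """Number of tracks selected by an InputFile-style definition."""
    return len(definition.get("includedTracks", []))
```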

    JobErrorDetailResponse

    Code string
    Code describing the error detail.
    Message string
    A human-readable representation of the error.
    Code string
    Code describing the error detail.
    Message string
    A human-readable representation of the error.
    code String
    Code describing the error detail.
    message String
    A human-readable representation of the error.
    code string
    Code describing the error detail.
    message string
    A human-readable representation of the error.
    code str
    Code describing the error detail.
    message str
    A human-readable representation of the error.
    code String
    Code describing the error detail.
    message String
    A human-readable representation of the error.

    JobErrorResponse

    Category string
    Helps with categorization of errors.
    Code string
    Error code describing the error.
    Details List<Pulumi.AzureNative.Media.Inputs.JobErrorDetailResponse>
    An array of details about specific errors that led to this reported error.
    Message string
    A human-readable language-dependent representation of the error.
    Retry string
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
    Category string
    Helps with categorization of errors.
    Code string
    Error code describing the error.
    Details []JobErrorDetailResponse
    An array of details about specific errors that led to this reported error.
    Message string
    A human-readable language-dependent representation of the error.
    Retry string
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
    category String
    Helps with categorization of errors.
    code String
    Error code describing the error.
    details List<JobErrorDetailResponse>
    An array of details about specific errors that led to this reported error.
    message String
    A human-readable language-dependent representation of the error.
    retry String
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
    category string
    Helps with categorization of errors.
    code string
    Error code describing the error.
    details JobErrorDetailResponse[]
    An array of details about specific errors that led to this reported error.
    message string
    A human-readable language-dependent representation of the error.
    retry string
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
    category str
    Helps with categorization of errors.
    code str
    Error code describing the error.
    details Sequence[JobErrorDetailResponse]
    An array of details about specific errors that led to this reported error.
    message str
    A human-readable language-dependent representation of the error.
    retry str
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
    category String
    Helps with categorization of errors.
    code String
    Error code describing the error.
    details List<Property Map>
    An array of details about specific errors that led to this reported error.
    message String
    A human-readable language-dependent representation of the error.
    retry String
    Indicates whether it may be possible to retry the Job. If a retry is unsuccessful, contact Azure support via the Azure portal.
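A hedged sketch of consuming a JobErrorResponse-shaped value: the helper below flattens the error and its details into a message and checks the retry hint. The literal value `MayRetry` is assumed from the JobRetry enum in this API version; verify against your SDK before relying on it.

```python
# Hypothetical error-handling sketch for a JobErrorResponse-shaped dict.
# The retry values "MayRetry"/"DoNotRetry" are assumptions taken from the
# JobRetry enum; check your SDK's enum before relying on them.
def summarize_job_error(error: dict) -> str:
    lines = [f"{error['code']} ({error['category']}): {error['message']}"]
    for detail in error.get("details", []):
        lines.append(f"  - {detail['code']}: {detail['message']}")
    if error.get("retry") == "MayRetry":
        lines.append("Resubmitting the Job may succeed.")
    return "\n".join(lines)
```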

    JobInputAssetResponse

    AssetName string
    The name of the input Asset.
    End Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files List<string>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions List<object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    AssetName string
    The name of the input Asset.
    End AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files []string
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions []interface{}
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    assetName String
    The name of the input Asset.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    assetName string
    The name of the input Asset.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files string[]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions (FromAllInputFileResponse | FromEachInputFileResponse | InputFileResponse)[]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    asset_name str
    The name of the input Asset.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files Sequence[str]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    input_definitions Sequence[Union[FromAllInputFileResponse, FromEachInputFileResponse, InputFileResponse]]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label str
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    assetName String
    The name of the input Asset.
    end Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Property Map | Property Map | Property Map>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.

    JobInputClipResponse

    End Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files List<string>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions List<object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    End AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files []string
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions []interface{}
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files string[]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions (FromAllInputFileResponse | FromEachInputFileResponse | InputFileResponse)[]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files Sequence[str]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    input_definitions Sequence[Union[FromAllInputFileResponse, FromEachInputFileResponse, InputFileResponse]]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label str
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    end Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Property Map | Property Map | Property Map>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
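The Start and End fields above take AbsoluteClipTime values expressed as ISO 8601 durations. A minimal, hypothetical parser (not part of the SDK) showing how such values bound the processed portion of the input:

```python
import re

# Minimal ISO 8601 time-duration parser (PT...H...M...S only), to show
# how AbsoluteClipTime start/end values bound the processed portion of
# the input media. A hypothetical helper, not part of any SDK.
def parse_pt_seconds(value: str) -> float:
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", value)
    if not m or value == "PT":
        raise ValueError(f"not an ISO 8601 time duration: {value!r}")
    h, mi, s = (float(g) if g else 0.0 for g in m.groups())
    return h * 3600 + mi * 60 + s

def clip_length_seconds(start: str, end: str) -> float:
    """Length of the portion between absolute start and end clip times."""
    return parse_pt_seconds(end) - parse_pt_seconds(start)
```

For instance, a clip with start PT10S and end PT1M covers 50 seconds of the input.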

    JobInputHttpResponse

    BaseUri string
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the provided file list is assumed to contain fully qualified URIs. Maximum length of 4000 characters. The query strings will not be returned in service responses to prevent sensitive data exposure.
    End Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files List<string>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions List<object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip, that is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start Pulumi.AzureNative.Media.Inputs.AbsoluteClipTimeResponse | Pulumi.AzureNative.Media.Inputs.UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    BaseUri string
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the entries in the file list are assumed to be fully qualified URIs. Maximum length of 4000 characters. Query strings will not be returned in service responses, to prevent sensitive data exposure.
    End AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    Files []string
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    InputDefinitions []interface{}
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    Label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference in the Transform. For example, a Transform can be authored to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    Start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    baseUri String
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the entries in the file list are assumed to be fully qualified URIs. Maximum length of 4000 characters. Query strings will not be returned in service responses, to prevent sensitive data exposure.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Object>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference in the Transform. For example, a Transform can be authored to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    baseUri string
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the entries in the file list are assumed to be fully qualified URIs. Maximum length of 4000 characters. Query strings will not be returned in service responses, to prevent sensitive data exposure.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files string[]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions (FromAllInputFileResponse | FromEachInputFileResponse | InputFileResponse)[]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label string
    A label that is assigned to a JobInputClip and is used to satisfy a reference in the Transform. For example, a Transform can be authored to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    base_uri str
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the entries in the file list are assumed to be fully qualified URIs. Maximum length of 4000 characters. Query strings will not be returned in service responses, to prevent sensitive data exposure.
    end AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files Sequence[str]
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    input_definitions Sequence[Union[FromAllInputFileResponse, FromEachInputFileResponse, InputFileResponse]]
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label str
    A label that is assigned to a JobInputClip and is used to satisfy a reference in the Transform. For example, a Transform can be authored to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start AbsoluteClipTimeResponse | UtcClipTimeResponse
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
    baseUri String
    Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the entries in the file list are assumed to be fully qualified URIs. Maximum length of 4000 characters. Query strings will not be returned in service responses, to prevent sensitive data exposure.
    end Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
    files List<String>
    List of files. Required for JobInputHttp. Maximum of 4000 characters each. Query strings will not be returned in service responses to prevent sensitive data exposure.
    inputDefinitions List<Property Map | Property Map | Property Map>
    Defines a list of InputDefinitions. For each InputDefinition, it defines a list of track selections and related metadata.
    label String
    A label that is assigned to a JobInputClip and is used to satisfy a reference in the Transform. For example, a Transform can be authored to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
    start Property Map | Property Map
    Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
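    To make the BaseUri/Files interaction concrete, here is a minimal Python sketch of how a client might resolve a JobInputHttp file list, assuming the simple concatenation and query-string stripping described above (the helper name and sample URLs are hypothetical, not part of the SDK):

    ```python
    from urllib.parse import urlsplit, urlunsplit

    def resolve_http_inputs(base_uri, files):
        """Illustrative only: resolve a JobInputHttp file list as the docs
        describe it. If base_uri is given it is concatenated with each file
        name; otherwise each entry is treated as a fully qualified URI.
        Query strings are stripped, mirroring the service's behavior of not
        echoing them back (e.g. SAS tokens)."""
        resolved = []
        for name in files:
            full = (base_uri or "") + name
            parts = urlsplit(full)
            # Drop the query string to avoid exposing sensitive data.
            resolved.append(urlunsplit((parts.scheme, parts.netloc, parts.path, "", "")))
        return resolved

    # Hypothetical values: a base URI plus a relative file name with a query string.
    print(resolve_http_inputs("https://example.com/media/", ["intro.mp4?sv=token"]))
    # → ['https://example.com/media/intro.mp4']
    ```
    
    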

    JobInputSequenceResponse

    Inputs []JobInputClipResponse
    JobInputs that make up the timeline.
    inputs List<JobInputClipResponse>
    JobInputs that make up the timeline.
    inputs JobInputClipResponse[]
    JobInputs that make up the timeline.
    inputs Sequence[JobInputClipResponse]
    JobInputs that make up the timeline.
    inputs List<Property Map>
    JobInputs that make up the timeline.

    JobInputsResponse

    Inputs List<object>
    List of inputs to a Job.
    Inputs []interface{}
    List of inputs to a Job.
    inputs List<Object>
    List of inputs to a Job.
    inputs (JobInputAssetResponse | JobInputClipResponse | JobInputHttpResponse | JobInputSequenceResponse | JobInputsResponse)[]
    List of inputs to a Job.
    inputs Sequence[Union[JobInputAssetResponse, JobInputClipResponse, JobInputHttpResponse, JobInputSequenceResponse, JobInputsResponse]]
    List of inputs to a Job.
    inputs List<Property Map | Property Map | Property Map | Property Map | Property Map>
    List of inputs to a Job.
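    The nested inputs of a JobInputs are heterogeneous; in the underlying REST payload each entry carries an `@odata.type` discriminator. A minimal Python sketch of such a payload, and of grouping entries by their discriminator (the asset and file names are hypothetical sample values):

    ```python
    # Sketch of a JobInputs REST payload: the nested inputs are heterogeneous
    # and each entry carries an "@odata.type" discriminator (values follow the
    # Media Services naming convention; names below are hypothetical).
    job_inputs = {
        "@odata.type": "#Microsoft.Media.JobInputs",
        "inputs": [
            {"@odata.type": "#Microsoft.Media.JobInputAsset", "assetName": "input-asset"},
            {
                "@odata.type": "#Microsoft.Media.JobInputHttp",
                "baseUri": "https://example.com/media/",
                "files": ["overlay.png"],
                "label": "xyz",  # satisfies a matching label reference in the Transform
            },
        ],
    }

    # Client-side handling: branch on the short type name of each nested input.
    kinds = [entry["@odata.type"].rsplit(".", 1)[-1] for entry in job_inputs["inputs"]]
    print(kinds)  # → ['JobInputAsset', 'JobInputHttp']
    ```
    
    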

    JobOutputAssetResponse

    AssetName string
    The name of the output Asset.
    EndTime string
    The UTC date and time at which this Job Output finished processing.
    Error Pulumi.AzureNative.Media.Inputs.JobErrorResponse
    If the JobOutput is in the Error state, it contains the details of the error.
    Progress int
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    StartTime string
    The UTC date and time at which this Job Output began processing.
    State string
    Describes the state of the JobOutput.
    Label string
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    PresetOverride Pulumi.AzureNative.Media.Inputs.AudioAnalyzerPresetResponse | Pulumi.AzureNative.Media.Inputs.BuiltInStandardEncoderPresetResponse | Pulumi.AzureNative.Media.Inputs.FaceDetectorPresetResponse | Pulumi.AzureNative.Media.Inputs.StandardEncoderPresetResponse | Pulumi.AzureNative.Media.Inputs.VideoAnalyzerPresetResponse
    A preset used to override the preset in the corresponding transform output.
    AssetName string
    The name of the output Asset.
    EndTime string
    The UTC date and time at which this Job Output finished processing.
    Error JobErrorResponse
    If the JobOutput is in the Error state, it contains the details of the error.
    Progress int
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    StartTime string
    The UTC date and time at which this Job Output began processing.
    State string
    Describes the state of the JobOutput.
    Label string
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    PresetOverride AudioAnalyzerPresetResponse | BuiltInStandardEncoderPresetResponse | FaceDetectorPresetResponse | StandardEncoderPresetResponse | VideoAnalyzerPresetResponse
    A preset used to override the preset in the corresponding transform output.
    assetName String
    The name of the output Asset.
    endTime String
    The UTC date and time at which this Job Output finished processing.
    error JobErrorResponse
    If the JobOutput is in the Error state, it contains the details of the error.
    progress Integer
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    startTime String
    The UTC date and time at which this Job Output began processing.
    state String
    Describes the state of the JobOutput.
    label String
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    presetOverride AudioAnalyzerPresetResponse | BuiltInStandardEncoderPresetResponse | FaceDetectorPresetResponse | StandardEncoderPresetResponse | VideoAnalyzerPresetResponse
    A preset used to override the preset in the corresponding transform output.
    assetName string
    The name of the output Asset.
    endTime string
    The UTC date and time at which this Job Output finished processing.
    error JobErrorResponse
    If the JobOutput is in the Error state, it contains the details of the error.
    progress number
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    startTime string
    The UTC date and time at which this Job Output began processing.
    state string
    Describes the state of the JobOutput.
    label string
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    presetOverride AudioAnalyzerPresetResponse | BuiltInStandardEncoderPresetResponse | FaceDetectorPresetResponse | StandardEncoderPresetResponse | VideoAnalyzerPresetResponse
    A preset used to override the preset in the corresponding transform output.
    asset_name str
    The name of the output Asset.
    end_time str
    The UTC date and time at which this Job Output finished processing.
    error JobErrorResponse
    If the JobOutput is in the Error state, it contains the details of the error.
    progress int
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    start_time str
    The UTC date and time at which this Job Output began processing.
    state str
    Describes the state of the JobOutput.
    label str
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    preset_override AudioAnalyzerPresetResponse | BuiltInStandardEncoderPresetResponse | FaceDetectorPresetResponse | StandardEncoderPresetResponse | VideoAnalyzerPresetResponse
    A preset used to override the preset in the corresponding transform output.
    assetName String
    The name of the output Asset.
    endTime String
    The UTC date and time at which this Job Output finished processing.
    error Property Map
    If the JobOutput is in the Error state, it contains the details of the error.
    progress Number
    If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
    startTime String
    The UTC date and time at which this Job Output began processing.
    state String
    Describes the state of the JobOutput.
    label String
    A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, and hence your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as the TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
    presetOverride Property Map | Property Map | Property Map | Property Map | Property Map
    A preset used to override the preset in the corresponding transform output.
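    The default-label rule described above ('{presetName}_{outputIndex}') can be sketched in a few lines of Python; the helper is illustrative and not part of the SDK:

    ```python
    def default_output_labels(outputs):
        """Illustrative helper (not SDK API): apply the documented default label
        '{presetName}_{outputIndex}' to any JobOutput without an explicit label.
        The index is the output's relative position within the Job, which matches
        the position of the corresponding TransformOutput in the Transform."""
        return [
            out.get("label") or f"{out['presetName']}_{index}"
            for index, out in enumerate(outputs)
        ]

    # Hypothetical preset names; only the second output has an explicit label.
    print(default_output_labels([
        {"presetName": "AdaptiveStreaming"},
        {"presetName": "AACGoodQualityAudio", "label": "audio-only"},
    ]))  # → ['AdaptiveStreaming_0', 'audio-only']
    ```
    
    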

    JpgFormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, it will be used as-is. If it exceeds 32 characters, it is truncated to the first 32 characters. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
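    A simplified Python sketch of the macro expansion described above, including the 32-character truncation of {Basename} and the removal of unsubstituted macros (the helper and sample values are hypothetical; the real substitution is performed by the encoder):

    ```python
    import re

    def expand_filename_pattern(pattern, values):
        """Simplified sketch of filenamePattern expansion: substitute known
        macros, truncate {Basename} to 32 characters as documented, and remove
        any macro that has no substitution value."""
        values = dict(values)
        if "Basename" in values:
            values["Basename"] = values["Basename"][:32]
        return re.sub(r"\{(\w+)\}", lambda m: values.get(m.group(1), ""), pattern)

    # {Bitrate} has no value here, so it is removed from the result (sample values).
    print(expand_filename_pattern(
        "{Basename}_{Label}{Bitrate}_{Index}{Extension}",
        {"Basename": "a" * 40, "Label": "thumb", "Index": "000001", "Extension": ".jpg"},
    ))
    ```
    
    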

    JpgImageResponse

    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers List<Pulumi.AzureNative.Media.Inputs.JpgLayerResponse>
    A collection of output JPEG image layers to be produced by the encoder.
    Range string
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    SpriteColumn int
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a proper value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as its first output. Since the default value is 10%, a stream with a long duration may have its first generated thumbnail far from the one specified at the start time. Choose a reasonable Step value if the first thumbnail is expected to be close to the start time, or set Range to 1 if only one thumbnail is needed at the start time.
    StretchMode string
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode
    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers []JpgLayerResponse
    A collection of output JPEG image layers to be produced by the encoder.
    Range string
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    SpriteColumn int
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a proper value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as its first output. Since the default value is 10%, a stream with a long duration may have its first generated thumbnail far from the one specified at the start time. Choose a reasonable Step value if the first thumbnail is expected to be close to the start time, or set Range to 1 if only one thumbnail is needed at the start time.
    StretchMode string
    The resizing mode: how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode
    start String
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, and specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<JpgLayerResponse>
    A collection of output JPEG image layers to be produced by the encoder.
    range String
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    spriteColumn Integer
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a sensible value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration the first generated thumbnail may be far from the one specified at the start time. Choose a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at the start time.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
    start string
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, and specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    layers JpgLayerResponse[]
    A collection of output JPEG image layers to be produced by the encoder.
    range string
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    spriteColumn number
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a sensible value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration the first generated thumbnail may be far from the one specified at the start time. Choose a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at the start time.
    stretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode string
    The video sync mode.
    start str
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    key_frame_interval str
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, and specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    layers Sequence[JpgLayerResponse]
    A collection of output JPEG image layers to be produced by the encoder.
    range str
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    sprite_column int
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a sensible value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    step str
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration the first generated thumbnail may be far from the one specified at the start time. Choose a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at the start time.
    stretch_mode str
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    sync_mode str
    The video sync mode.
    start String
    The position in the input video from which to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a value relative to the stream duration (for example, 10% to start at 10% of the stream duration). Also supports the macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video; in that case only one thumbnail is produced, regardless of the Step and Range settings. The default value is the macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, and specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<Property Map>
    A collection of output JPEG image layers to be produced by the encoder.
    range String
    The position, relative to the transform preset start time, in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; a value of 1 produces only one thumbnail, at the start time), or a value relative to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
    spriteColumn Number
    Sets the number of columns used in the thumbnail sprite image. The number of rows is calculated automatically, and a VTT file is generated with the coordinate mappings for each thumbnail in the sprite. Note: this value should be a positive integer, and a sensible value is recommended so that the output image resolution does not exceed the JPEG maximum pixel resolution limit of 65535x65535.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a value relative to the stream duration (for example, 10% for one image every 10% of the stream duration). Note: the Step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the Step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration the first generated thumbnail may be far from the one specified at the start time. Choose a reasonable value for Step if the first thumbnail is expected to be close to the start time, or set the Range value to 1 if only one thumbnail is needed at the start time.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
    syncMode String
    The video sync mode.
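    The start, step, and range properties above all accept the same three encodings: an ISO 8601 duration, an integer frame count, or a percentage of the stream duration (start additionally accepts the {Best} macro). As a minimal sketch of how these encodings can be told apart (a hypothetical helper for illustration, not part of the Azure SDK):

```python
import re

def classify_position(value: str) -> str:
    """Classify a thumbnail start/step/range value by its encoding.

    Returns one of: 'macro', 'percent', 'frames', 'iso8601'.
    Hypothetical helper for illustration; not part of the Azure SDK.
    """
    if value == "{Best}":            # macro, valid for 'start' only
        return "macro"
    if value.endswith("%"):          # e.g. "10%" -> relative to stream duration
        return "percent"
    if re.fullmatch(r"\d+", value):  # e.g. "30" -> a frame count
        return "frames"
    # e.g. "PT5M30S" -> an ISO 8601 duration
    if re.fullmatch(r"P(\d+D)?(T(\d+H)?(\d+M)?(\d+(\.\d+)?S)?)?", value):
        return "iso8601"
    raise ValueError(f"unrecognized position value: {value!r}")
```

    For instance, classify_position("PT05S") returns "iso8601", while classify_position("300") returns "frames".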

    JpgLayerResponse

    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Quality int
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Quality int
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    quality Integer
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
    height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    quality number
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
    height str
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label str
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    quality int
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    width str
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    quality Number
    The compression quality of the JPEG output. The range is from 0 to 100, and the default is 70.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (as a percentage). For example, 50% means the output video has half as many pixels in width as the input.
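    A layer's Width and Height accept either an absolute pixel count or a percentage of the corresponding input dimension. A small sketch of that resolution rule (a hypothetical helper for illustration, not part of the Azure SDK):

```python
def resolve_dimension(value: str, input_pixels: int) -> int:
    """Resolve a layer Width/Height value against the input dimension.

    A percentage such as "50%" is taken relative to the input size,
    while a plain number such as "640" is an absolute pixel count.
    Hypothetical helper for illustration; not part of the Azure SDK.
    """
    if value.endswith("%"):
        return input_pixels * int(value[:-1]) // 100
    return int(value)
```

    For a 1920x1080 input, resolve_dimension("50%", 1080) gives 540 pixels of height, matching the "half as many pixels" example above.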

    Mp4FormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles List<Pulumi.AzureNative.Media.Inputs.OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles []OutputFileResponse
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles OutputFileResponse[]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    output_files Sequence[OutputFileResponse]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<Property Map>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
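    The filenamePattern macro expansion described above can be sketched as follows, including the 32-character base-name truncation and the removal of unsubstituted macros (a hypothetical helper for illustration, not the encoder's actual implementation):

```python
import re

def expand_filename_pattern(pattern: str, values: dict) -> str:
    """Approximate the macro expansion described in the docs above.

    values maps macro names (e.g. "Basename", "Label") to strings.
    The base name is truncated to 32 characters, and any macros left
    unsubstituted are collapsed and removed from the filename.
    Hypothetical helper for illustration; not the encoder's actual code.
    """
    if "Basename" in values:
        values = {**values, "Basename": values["Basename"][:32]}
    out = pattern
    for name, val in values.items():
        out = out.replace("{" + name + "}", val)
    # Collapse and remove any unsubstituted {Macro} occurrences.
    return re.sub(r"\{[A-Za-z]+\}", "", out)
```

    For example, expanding "{Basename}_{Label}{Extension}" with only Basename "movie" and Extension ".mp4" supplied yields "movie_.mp4", since the unsubstituted {Label} macro is removed.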

    MultiBitrateFormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles List<Pulumi.AzureNative.Media.Inputs.OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles []OutputFileResponse
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles OutputFileResponse[]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    output_files Sequence[OutputFileResponse]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<Property Map>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.

    OutputFileResponse

    Labels List<string>
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    Labels []string
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    labels List<String>
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    labels string[]
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    labels Sequence[str]
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    labels List<String>
    The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
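    The label-muxing rule above can be sketched in Python. This is a hypothetical illustration, not the Azure Media Services encoder logic; `plan_mux`, the layer names (`v1`, `v2`, `a1` follow the example in the description), and the bitrate strings are all made up for the sketch:

    ```python
    # Hypothetical sketch: how an OutputFile's label list selects layers to mux.
    def plan_mux(available_layers, output_files):
        """For each output file (a list of labels), return the matched layers.

        Raises ValueError if an output file references an unknown label.
        """
        plans = []
        for labels in output_files:
            missing = [l for l in labels if l not in available_layers]
            if missing:
                raise ValueError(f"unknown layer label(s): {missing}")
            plans.append({l: available_layers[l] for l in labels})
        return plans

    layers = {"v1": "video@3000kbps", "v2": "video@1200kbps", "a1": "audio@128kbps"}
    # '[v1, a1]' -> one file with the v1 video track and the a1 audio track
    plans = plan_mux(layers, [["v1", "a1"], ["v2", "a1"]])
    ```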

    PngFormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
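    The macro rules above (substitute known macros, truncate {Basename} to 32 characters, remove unsubstituted macros) can be approximated with a short Python sketch. `expand_filename_pattern` is a hypothetical helper written for this illustration; the real Azure encoder's expansion may differ in details:

    ```python
    import re

    def expand_filename_pattern(pattern, values):
        """Illustrative sketch of the documented macro rules (not the actual
        Azure encoder): known macros are substituted, {Basename} is truncated
        to 32 characters, and unsubstituted macros are removed."""
        if "Basename" in values:
            values = dict(values, Basename=values["Basename"][:32])
        def sub(match):
            # Unknown macros collapse to the empty string, i.e. are removed.
            return str(values.get(match.group(1), ""))
        return re.sub(r"\{(\w+)\}", sub, pattern)

    name = expand_filename_pattern(
        "{Basename}_{Label}{Index}{Extension}",
        {"Basename": "input-video", "Label": "thumb", "Index": 1, "Extension": ".png"},
    )
    ```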

    PngImageResponse

    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers List<Pulumi.AzureNative.Media.Inputs.PngLayerResponse>
    A collection of output PNG image layers to be produced by the encoder.
    Range string
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode.
    Start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    Layers []PngLayerResponse
    A collection of output PNG image layers to be produced by the encoder.
    Range string
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    Step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode.
    start String
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<PngLayerResponse>
    A collection of output PNG image layers to be produced by the encoder.
    range String
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode String
    The Video Sync Mode.
    start string
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    layers PngLayerResponse[]
    A collection of output PNG image layers to be produced by the encoder.
    range string
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    step string
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    stretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode string
    The Video Sync Mode.
    start str
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    key_frame_interval str
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    layers Sequence[PngLayerResponse]
    A collection of output PNG image layers to be produced by the encoder.
    range str
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    step str
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    stretch_mode str
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    sync_mode str
    The Video Sync Mode.
    start String
    The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value will follow the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    layers List<Property Map>
    A collection of output PNG image layers to be produced by the encoder.
    range String
    The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
    step String
    The intervals at which thumbnails are generated. The value can be in ISO 8601 format (For example, PT05S for one image every 5 seconds), or a frame count (For example, 30 for one image every 30 frames), or a relative value to stream duration (For example, 10% for one image every 10% of stream duration). Note: Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and Step position from start time as the first output. As the default value is 10%, it means if stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select reasonable value for Step if the first thumbnail is expected close to start time, or set Range value at 1 if only one thumbnail is needed at start time.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode String
    The Video Sync Mode.
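    The Start/Step/Range arithmetic described above can be sketched for the percent-of-duration form (the ISO 8601 and frame-count forms, and the {Best} macro, are omitted). `thumbnail_times` is a hypothetical helper written for this illustration, not Azure encoder behavior:

    ```python
    def thumbnail_times(duration_s, start, step, stop):
        """Hedged sketch: thumbnail timestamps from percent-based Start, Step,
        and Range values. Range is measured relative to the start time, and
        generation never runs past the end of the stream."""
        def pct(v):  # "10%" -> that fraction of the stream duration, in seconds
            return duration_s * float(v.rstrip("%")) / 100.0
        t, times = pct(start), []
        end = pct(start) + pct(stop)  # Range is relative to transform start time
        while t <= min(end, duration_s):
            times.append(t)
            t += pct(step)
        return times
    ```

    For a 100-second stream with start "10%", step "10%", and range "50%", this yields thumbnails at 10 s through 60 s, matching the "stop at half of stream duration from start time" example.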

    PngLayerResponse

    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    Height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    Label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    Width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    height string
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label string
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    width string
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    height str
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label str
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    width str
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
    height String
    The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
    label String
    The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
    width String
    The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
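    The absolute-vs-percentage rule for layer width and height can be shown in a few lines. `resolve_dimension` is a hypothetical helper for this illustration only; it assumes percentages apply to the input's pixel count, as the description states:

    ```python
    def resolve_dimension(value, input_pixels):
        """Sketch of the documented rule: '50%' yields half the input's pixels
        in that dimension; a plain number is an absolute pixel count."""
        if value.endswith("%"):
            return round(input_pixels * float(value[:-1]) / 100.0)
        return int(value)
    ```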

    PresetConfigurationsResponse

    Complexity string
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    InterleaveOutput string
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    KeyFrameIntervalInSeconds double
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    MaxBitrateBps int
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    MaxHeight int
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    MaxLayers int
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    MinBitrateBps int
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    MinHeight int
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
    Complexity string
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    InterleaveOutput string
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    KeyFrameIntervalInSeconds float64
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    MaxBitrateBps int
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    MaxHeight int
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    MaxLayers int
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    MinBitrateBps int
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    MinHeight int
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
    complexity String
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    interleaveOutput String
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    keyFrameIntervalInSeconds Double
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    maxBitrateBps Integer
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    maxHeight Integer
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    maxLayers Integer
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    minBitrateBps Integer
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    minHeight Integer
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
    complexity string
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    interleaveOutput string
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    keyFrameIntervalInSeconds number
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    maxBitrateBps number
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    maxHeight number
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    maxLayers number
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    minBitrateBps number
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    minHeight number
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
    complexity str
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    interleave_output str
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    key_frame_interval_in_seconds float
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    max_bitrate_bps int
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    max_height int
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    max_layers int
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    min_bitrate_bps int
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    min_height int
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
    complexity String
    Allows you to configure the encoder settings to control the balance between speed and quality. Example: set Complexity as Speed for faster encoding but less compression efficiency.
    interleaveOutput String
    Sets the interleave mode of the output to control how audio and video are stored in the container format. Example: set InterleaveOutput as NonInterleavedOutput to produce audio-only and video-only outputs in separate MP4 files.
    keyFrameIntervalInSeconds Number
    The key frame interval in seconds. Example: set KeyFrameIntervalInSeconds as 2 to reduce the playback buffering for some players.
    maxBitrateBps Number
    The maximum bitrate in bits per second (threshold for the top video layer). Example: set MaxBitrateBps as 6000000 to avoid producing very high bitrate outputs for content with high complexity.
    maxHeight Number
    The maximum height of output video layers. Example: set MaxHeight as 720 to produce output layers up to 720P even if the input is 4K.
    maxLayers Number
    The maximum number of output video layers. Example: set MaxLayers as 4 to make sure at most 4 output layers are produced to control the overall cost of the encoding job.
    minBitrateBps Number
    The minimum bitrate in bits per second (threshold for the bottom video layer). Example: set MinBitrateBps as 200000 to have a bottom layer that covers users with low network bandwidth.
    minHeight Number
    The minimum height of output video layers. Example: set MinHeight as 360 to avoid output layers of smaller resolutions like 180P.
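The constraints implied by the properties above (the minimum thresholds must not exceed the maximum ones) can be sketched as a client-side check. This is illustrative only, with assumed constraint rules; the service performs its own validation.

```python
def validate_preset_configurations(cfg: dict) -> list:
    """Return a list of human-readable problems found in a
    PresetConfigurations-style dictionary (illustrative sketch)."""
    problems = []
    # Bottom-layer threshold must not exceed top-layer threshold.
    if (cfg.get("minBitrateBps") is not None and cfg.get("maxBitrateBps") is not None
            and cfg["minBitrateBps"] > cfg["maxBitrateBps"]):
        problems.append("minBitrateBps must not exceed maxBitrateBps")
    # Smallest output resolution must not exceed the largest.
    if (cfg.get("minHeight") is not None and cfg.get("maxHeight") is not None
            and cfg["minHeight"] > cfg["maxHeight"]):
        problems.append("minHeight must not exceed maxHeight")
    if cfg.get("maxLayers") is not None and cfg["maxLayers"] < 1:
        problems.append("maxLayers must be at least 1")
    return problems

cfg = {
    "complexity": "Speed",
    "interleaveOutput": "NonInterleavedOutput",
    "keyFrameIntervalInSeconds": 2.0,
    "maxBitrateBps": 6_000_000,
    "minBitrateBps": 200_000,
    "maxHeight": 720,
    "minHeight": 360,
    "maxLayers": 4,
}
assert validate_preset_configurations(cfg) == []
```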

    RectangleResponse

    Height string
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Left string
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Top string
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Width string
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Height string
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Left string
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Top string
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    Width string
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    height String
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    left String
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    top String
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    width String
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    height string
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    left string
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    top string
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    width string
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    height str
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    left str
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    top str
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    width str
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    height String
    The height of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    left String
    The number of pixels from the left margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    top String
    The number of pixels from the top margin. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
    width String
    The width of the rectangular region in pixels. This can be an absolute pixel value (e.g., 100) or relative to the size of the video (e.g., 50%).
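Each Rectangle dimension is a string that is either an absolute pixel count ("100") or a percentage of the video size ("50%"). A minimal sketch of resolving such a value against a reference dimension (the helper name is hypothetical, not part of the SDK):

```python
def rectangle_dim_to_pixels(value: str, reference: int) -> int:
    """Resolve a Rectangle dimension string to pixels.

    value     -- either an absolute count like "100" or a percentage
                 of the video size like "50%"
    reference -- the video width or height the percentage is relative to
    """
    value = value.strip()
    if value.endswith("%"):
        return round(reference * float(value[:-1]) / 100.0)
    return int(value)

# A 1920x1080 input: "50%" of the width resolves to 960 pixels.
assert rectangle_dim_to_pixels("100", 1920) == 100
assert rectangle_dim_to_pixels("50%", 1920) == 960
```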

    SelectAudioTrackByAttributeResponse

    Attribute string
    The TrackAttribute to filter the tracks by.
    Filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    FilterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
    Attribute string
    The TrackAttribute to filter the tracks by.
    Filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    FilterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
    attribute String
    The TrackAttribute to filter the tracks by.
    filter String
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    filterValue String
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
    attribute string
    The TrackAttribute to filter the tracks by.
    filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    channelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    filterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
    attribute str
    The TrackAttribute to filter the tracks by.
    filter str
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    channel_mapping str
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    filter_value str
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
    attribute String
    The TrackAttribute to filter the tracks by.
    filter String
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    filterValue String
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property.
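The Filter/FilterValue relationship above can be sketched as plain Python. The AttributeFilter names used here (All, Top, Bottom, ValueEquals) follow the Media Services enum; the track dictionaries and helper name are hypothetical, for illustration only.

```python
def select_tracks(tracks, attribute, filter_, filter_value=None):
    """Select tracks by an attribute, mimicking the semantics described
    above. Only the ValueEquals branch consults filter_value."""
    key = attribute[0].lower() + attribute[1:]  # e.g. "Bitrate" -> "bitrate"
    if filter_ == "All":
        return list(tracks)
    values = sorted(t[key] for t in tracks)
    if filter_ == "Top":
        return [t for t in tracks if t[key] == values[-1]]
    if filter_ == "Bottom":
        return [t for t in tracks if t[key] == values[0]]
    if filter_ == "ValueEquals":
        return [t for t in tracks if str(t[key]) == str(filter_value)]
    raise ValueError(f"unknown filter: {filter_}")

tracks = [{"bitrate": 1_500_000}, {"bitrate": 128_000}]
assert select_tracks(tracks, "Bitrate", "Top") == [{"bitrate": 1_500_000}]
assert select_tracks(tracks, "Bitrate", "ValueEquals", "128000") == [{"bitrate": 128_000}]
```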

    SelectAudioTrackByIdResponse

    TrackId double
    Track identifier to select.
    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    TrackId float64
    Track identifier to select.
    ChannelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    trackId Double
    Track identifier to select.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    trackId number
    Track identifier to select.
    channelMapping string
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    track_id float
    Track identifier to select.
    channel_mapping str
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.
    trackId Number
    Track identifier to select.
    channelMapping String
    Optional designation for single channel audio tracks. Can be used to combine the tracks into stereo or multi-channel audio tracks.

    SelectVideoTrackByAttributeResponse

    Attribute string
    The TrackAttribute to filter the tracks by.
    Filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    FilterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.
    Attribute string
    The TrackAttribute to filter the tracks by.
    Filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    FilterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.
    attribute String
    The TrackAttribute to filter the tracks by.
    filter String
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    filterValue String
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.
    attribute string
    The TrackAttribute to filter the tracks by.
    filter string
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    filterValue string
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.
    attribute str
    The TrackAttribute to filter the tracks by.
    filter str
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    filter_value str
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.
    attribute String
    The TrackAttribute to filter the tracks by.
    filter String
    The type of AttributeFilter to apply to the TrackAttribute in order to select the tracks.
    filterValue String
    The value to filter the tracks by. Only used when AttributeFilter.ValueEquals is specified for the Filter property. For TrackAttribute.Bitrate, this should be an integer value in bits per second (e.g., '1500000'). The TrackAttribute.Language is not supported for video tracks.

    SelectVideoTrackByIdResponse

    TrackId double
    Track identifier to select.
    TrackId float64
    Track identifier to select.
    trackId Double
    Track identifier to select.
    trackId number
    Track identifier to select.
    track_id float
    Track identifier to select.
    trackId Number
    Track identifier to select.

    StandardEncoderPresetResponse

    Codecs List<object>
    The list of codecs to be used when encoding the input video.
    Formats List<object>
    The list of outputs to be produced by the encoder.
    ExperimentalOptions Dictionary<string, string>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Filters Pulumi.AzureNative.Media.Inputs.FiltersResponse
    One or more filtering operations that are applied to the input media before encoding.
    Codecs []interface{}
    The list of codecs to be used when encoding the input video.
    Formats []interface{}
    The list of outputs to be produced by the encoder.
    ExperimentalOptions map[string]string
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    Filters FiltersResponse
    One or more filtering operations that are applied to the input media before encoding.
    codecs List<Object>
    The list of codecs to be used when encoding the input video.
    formats List<Object>
    The list of outputs to be produced by the encoder.
    experimentalOptions Map<String,String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    filters FiltersResponse
    One or more filtering operations that are applied to the input media before encoding.
    codecs (AacAudioResponse | AudioResponse | CopyAudioResponse | CopyVideoResponse | DDAudioResponse | H264VideoResponse | H265VideoResponse | ImageResponse | JpgImageResponse | PngImageResponse | VideoResponse)[]
    The list of codecs to be used when encoding the input video.
    formats (ImageFormatResponse | JpgFormatResponse | Mp4FormatResponse | MultiBitrateFormatResponse | PngFormatResponse | TransportStreamFormatResponse)[]
    The list of outputs to be produced by the encoder.
    experimentalOptions {[key: string]: string}
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    filters FiltersResponse
    One or more filtering operations that are applied to the input media before encoding.
    codecs Sequence[Union[AacAudioResponse, AudioResponse, CopyAudioResponse, CopyVideoResponse, DDAudioResponse, H264VideoResponse, H265VideoResponse, ImageResponse, JpgImageResponse, PngImageResponse, VideoResponse]]
    The list of codecs to be used when encoding the input video.
    formats Sequence[Union[ImageFormatResponse, JpgFormatResponse, Mp4FormatResponse, MultiBitrateFormatResponse, PngFormatResponse, TransportStreamFormatResponse]]
    The list of outputs to be produced by the encoder.
    experimental_options Mapping[str, str]
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    filters FiltersResponse
    One or more filtering operations that are applied to the input media before encoding.
    codecs List<Property Map | Property Map | Property Map | Property Map | Property Map | Property Map | Property Map | Property Map | Property Map | Property Map | Property Map>
    The list of codecs to be used when encoding the input video.
    formats List<Property Map | Property Map | Property Map | Property Map | Property Map | Property Map>
    The list of outputs to be produced by the encoder.
    experimentalOptions Map<String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    filters Property Map
    One or more filtering operations that are applied to the input media before encoding.
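The codecs and formats unions above can be seen in a plain-data sketch of a StandardEncoderPreset payload. The "#Microsoft.Media.*" discriminator strings follow Azure Media Services conventions, but the specific codec fields shown (bitrate, keyFrameInterval) are assumptions for illustration, not a complete preset.

```python
# A plain-data sketch of a StandardEncoderPreset: one audio codec, one
# video codec, and an MP4 output format. Property names follow the
# TypeScript listing above; field values are illustrative only.
preset = {
    "odataType": "#Microsoft.Media.StandardEncoderPreset",
    "codecs": [
        # AacAudio entry (assumed fields for illustration)
        {"odataType": "#Microsoft.Media.AacAudio", "bitrate": 128000},
        # H264Video entry; keyFrameInterval is an ISO 8601 duration
        {"odataType": "#Microsoft.Media.H264Video", "keyFrameInterval": "PT2S"},
    ],
    "formats": [
        {"odataType": "#Microsoft.Media.Mp4Format",
         "filenamePattern": "{Basename}_{Bitrate}{Extension}"},
    ],
    "experimentalOptions": {},
}

# Every union member is distinguished by its discriminator string.
assert all("odataType" in c for c in preset["codecs"])
```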

    SystemDataResponse

    CreatedAt string
    The timestamp of resource creation (UTC).
    CreatedBy string
    The identity that created the resource.
    CreatedByType string
    The type of identity that created the resource.
    LastModifiedAt string
    The timestamp of resource last modification (UTC).
    LastModifiedBy string
    The identity that last modified the resource.
    LastModifiedByType string
    The type of identity that last modified the resource.
    CreatedAt string
    The timestamp of resource creation (UTC).
    CreatedBy string
    The identity that created the resource.
    CreatedByType string
    The type of identity that created the resource.
    LastModifiedAt string
    The timestamp of resource last modification (UTC).
    LastModifiedBy string
    The identity that last modified the resource.
    LastModifiedByType string
    The type of identity that last modified the resource.
    createdAt String
    The timestamp of resource creation (UTC).
    createdBy String
    The identity that created the resource.
    createdByType String
    The type of identity that created the resource.
    lastModifiedAt String
    The timestamp of resource last modification (UTC).
    lastModifiedBy String
    The identity that last modified the resource.
    lastModifiedByType String
    The type of identity that last modified the resource.
    createdAt string
    The timestamp of resource creation (UTC).
    createdBy string
    The identity that created the resource.
    createdByType string
    The type of identity that created the resource.
    lastModifiedAt string
    The timestamp of resource last modification (UTC).
    lastModifiedBy string
    The identity that last modified the resource.
    lastModifiedByType string
    The type of identity that last modified the resource.
    created_at str
    The timestamp of resource creation (UTC).
    created_by str
    The identity that created the resource.
    created_by_type str
    The type of identity that created the resource.
    last_modified_at str
    The timestamp of resource last modification (UTC).
    last_modified_by str
    The identity that last modified the resource.
    last_modified_by_type str
    The type of identity that last modified the resource.
    createdAt String
    The timestamp of resource creation (UTC).
    createdBy String
    The identity that created the resource.
    createdByType String
    The type of identity that created the resource.
    lastModifiedAt String
    The timestamp of resource last modification (UTC).
    lastModifiedBy String
    The identity that last modified the resource.
    lastModifiedByType String
    The type of identity that last modified the resource.

    TransportStreamFormatResponse

    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles List<Pulumi.AzureNative.Media.Inputs.OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    FilenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    OutputFiles []OutputFileResponse
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<OutputFileResponse>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern string
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles OutputFileResponse[]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filename_pattern str
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    output_files Sequence[OutputFileResponse]
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    filenamePattern String
    The file naming pattern used for the creation of output files. The following macros are supported in the file name: {Basename} - An expansion macro that will use the name of the input video file. If the base name (the file suffix is not included) of the input video file is less than 32 characters long, the base name of the input video file will be used. If the length of the base name of the input video file exceeds 32 characters, the base name is truncated to the first 32 characters in total length. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {AudioStream} - The string "Audio" plus the audio stream number (starting from 1). {Bitrate} - The audio/video bitrate in kbps. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. {Resolution} - The video resolution. Any unsubstituted macros will be collapsed and removed from the filename.
    outputFiles List<Property Map>
    The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
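The macro substitution described for FilenamePattern can be sketched in a few lines. This is an illustration of the documented behavior (known macros are substituted, unsubstituted macros are removed), not the service's implementation; the helper name is hypothetical.

```python
import re

def expand_filename_pattern(pattern: str, values: dict) -> str:
    """Substitute {Macro} placeholders in a FilenamePattern-style string.
    Macros without a value are removed, per the description above."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(values.get(m.group(1), "")),
                  pattern)

# All macros present: each placeholder is replaced in turn.
assert expand_filename_pattern(
    "{Basename}_{Bitrate}{Extension}",
    {"Basename": "video", "Bitrate": 3000, "Extension": ".ts"},
) == "video_3000.ts"

# An unsubstituted macro ({Index}) is simply removed.
assert expand_filename_pattern(
    "{Basename}{Index}{Extension}",
    {"Basename": "thumb", "Extension": ".jpg"},
) == "thumb.jpg"
```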

    UtcClipTimeResponse

    Time string
    The time position on the timeline of the input media, based on UTC time.
    Time string
    The time position on the timeline of the input media, based on UTC time.
    time String
    The time position on the timeline of the input media, based on UTC time.
    time string
    The time position on the timeline of the input media, based on UTC time.
    time str
    The time position on the timeline of the input media, based on UTC time.
    time String
    The time position on the timeline of the input media, based on UTC time.
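The Time property carries an absolute UTC timestamp as a string. A minimal sketch of parsing such a value into a timezone-aware datetime (the helper name and sample timestamp are illustrative):

```python
from datetime import datetime

def parse_utc_clip_time(time_str: str) -> datetime:
    """Parse an ISO 8601 UTC timestamp like '2023-01-10T10:00:00Z'
    into a timezone-aware datetime."""
    # datetime.fromisoformat does not accept the trailing 'Z' on
    # older Python versions, so normalize it to an explicit offset.
    return datetime.fromisoformat(time_str.replace("Z", "+00:00"))

t = parse_utc_clip_time("2023-01-10T10:00:00Z")
assert t.tzinfo is not None and t.utcoffset().total_seconds() == 0
```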

    VideoAnalyzerPresetResponse

    AudioLanguage string
    The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g., 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription would fall back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    ExperimentalOptions Dictionary<string, string>
    Dictionary containing key value pairs for parameters not exposed in the preset itself
    InsightsToExtract string
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated. Similarly if the input is video only, then only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only; or use VideoInsightsOnly if you expect some of your inputs to be audio only. Your Jobs in such conditions would error out.
    Mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    AudioLanguage string
    The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or is set to null, automatic language detection will choose the first language detected and process the file with that language for its duration; dynamically switching between languages after the first language is detected is not currently supported. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    ExperimentalOptions map[string]string
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    InsightsToExtract string
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated; similarly, if the input is video only, only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only; Jobs in those conditions would error out.
    Mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage String
    The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or is set to null, automatic language detection will choose the first language detected and process the file with that language for its duration; dynamically switching between languages after the first language is detected is not currently supported. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions Map<String,String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    insightsToExtract String
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated; similarly, if the input is video only, only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only; Jobs in those conditions would error out.
    mode String
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage string
    The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or is set to null, automatic language detection will choose the first language detected and process the file with that language for its duration; dynamically switching between languages after the first language is detected is not currently supported. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions {[key: string]: string}
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    insightsToExtract string
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated; similarly, if the input is video only, only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only; Jobs in those conditions would error out.
    mode string
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audio_language str
    The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or is set to null, automatic language detection will choose the first language detected and process the file with that language for its duration; dynamically switching between languages after the first language is detected is not currently supported. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimental_options Mapping[str, str]
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    insights_to_extract str
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated; similarly, if the input is video only, only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only; Jobs in those conditions would error out.
    mode str
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
    audioLanguage String
    The language for the audio payload in the input, using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). If you know the language of your content, it is recommended that you specify it. The language must be specified explicitly for AudioAnalysisMode::Basic, since automatic language detection is not included in basic mode. If the language isn't specified or is set to null, automatic language detection will choose the first language detected and process the file with that language for its duration; dynamically switching between languages after the first language is detected is not currently supported. Automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
    experimentalOptions Map<String>
    Dictionary containing key-value pairs for parameters not exposed in the preset itself.
    insightsToExtract String
    Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated; similarly, if the input is video only, only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only; Jobs in those conditions would error out.
    mode String
    Determines the set of audio analysis operations to be performed. If unspecified, the Standard AudioAnalysisMode would be chosen.
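
    The insightsToExtract guidance above (avoid AudioInsightsOnly for video-only inputs and VideoInsightsOnly for audio-only inputs, or the Job errors out) can be sketched as a client-side pre-check. The function and the has_audio/has_video flags are illustrative assumptions, not part of the SDK.

    ```python
    ALLOWED_INSIGHTS = {"AudioInsightsOnly", "VideoInsightsOnly", "AllInsights"}

    def validate_insights(insights_to_extract: str,
                          has_audio: bool, has_video: bool) -> str:
        """Reject insight selections that the description above says would
        make the Job error out. Purely illustrative client-side check."""
        if insights_to_extract not in ALLOWED_INSIGHTS:
            raise ValueError(f"unknown value: {insights_to_extract}")
        if insights_to_extract == "AudioInsightsOnly" and not has_audio:
            raise ValueError("AudioInsightsOnly requested but input has no audio track")
        if insights_to_extract == "VideoInsightsOnly" and not has_video:
            raise ValueError("VideoInsightsOnly requested but input has no video track")
        return insights_to_extract

    # AllInsights degrades gracefully: audio-only input yields audio insights only.
    validate_insights("AllInsights", has_audio=True, has_video=False)
    ```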

    VideoOverlayResponse

    InputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    AudioGainLevel double
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    CropRectangle Pulumi.AzureNative.Media.Inputs.RectangleResponse
    An optional rectangular window used to crop the overlay image or video.
    End string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    FadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    FadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    Opacity double
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    Position Pulumi.AzureNative.Media.Inputs.RectangleResponse
    The location in the input video where the overlay is applied.
    Start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    InputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    AudioGainLevel float64
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    CropRectangle RectangleResponse
    An optional rectangular window used to crop the overlay image or video.
    End string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    FadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    FadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    Opacity float64
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    Position RectangleResponse
    The location in the input video where the overlay is applied.
    Start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel String
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel Double
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    cropRectangle RectangleResponse
    An optional rectangular window used to crop the overlay image or video.
    end String
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration String
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    fadeOutDuration String
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    opacity Double
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    position RectangleResponse
    The location in the input video where the overlay is applied.
    start String
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel string
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel number
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    cropRectangle RectangleResponse
    An optional rectangular window used to crop the overlay image or video.
    end string
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration string
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    fadeOutDuration string
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    opacity number
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    position RectangleResponse
    The location in the input video where the overlay is applied.
    start string
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    input_label str
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audio_gain_level float
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    crop_rectangle RectangleResponse
    An optional rectangular window used to crop the overlay image or video.
    end str
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fade_in_duration str
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    fade_out_duration str
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    opacity float
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    position RectangleResponse
    The location in the input video where the overlay is applied.
    start str
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
    inputLabel String
    The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
    audioGainLevel Number
    The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
    cropRectangle Property Map
    An optional rectangular window used to crop the overlay image or video.
    end String
    The end position, with reference to the input video, at which the overlay ends. The value should be in ISO 8601 format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, or if the value is greater than the input video duration, the overlay is applied until the end of the input video when the overlay media duration exceeds the input video duration; otherwise, the overlay lasts as long as the overlay media duration.
    fadeInDuration String
    The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade in (same as PT0S).
    fadeOutDuration String
    The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is no fade out (same as PT0S).
    opacity Number
    The opacity of the overlay. This is a value in the range [0, 1.0]. The default is 1.0, which means the overlay is opaque.
    position Property Map
    The location in the input video where the overlay is applied.
    start String
    The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
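
    The Start, End, FadeInDuration, and FadeOutDuration values above are ISO 8601 durations such as PT05S or PT30S. A minimal parser for the simple PT…H…M…S form (a sketch, not a full ISO 8601 implementation) might look like:

    ```python
    import re

    def parse_iso_duration(value: str) -> float:
        """Parse a simple ISO 8601 duration like 'PT5S', 'PT1M30S' or 'PT0.5S'
        into seconds. Only the time component is handled; this is a sketch,
        not a complete ISO 8601 parser."""
        m = re.fullmatch(
            r"PT(?:(\d+(?:\.\d+)?)H)?(?:(\d+(?:\.\d+)?)M)?(?:(\d+(?:\.\d+)?)S)?",
            value)
        if not m or not any(m.groups()):
            raise ValueError(f"unsupported duration: {value}")
        hours, minutes, seconds = (float(g) if g else 0.0 for g in m.groups())
        return hours * 3600 + minutes * 60 + seconds

    print(parse_iso_duration("PT30S"))    # → 30.0
    print(parse_iso_duration("PT1M30S"))  # → 90.0
    ```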

    VideoResponse

    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode
    KeyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    Label string
    An optional label for the codec. The label can be used to control muxing behavior.
    StretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    SyncMode string
    The Video Sync Mode
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode String
    The Video Sync Mode
    keyFrameInterval string
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label string
    An optional label for the codec. The label can be used to control muxing behavior.
    stretchMode string
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode string
    The Video Sync Mode
    key_frame_interval str
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label str
    An optional label for the codec. The label can be used to control muxing behavior.
    stretch_mode str
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    sync_mode str
    The Video Sync Mode
    keyFrameInterval String
    The distance between two key frames. The value should be non-zero, in the range [0.5, 20] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S). Note that this setting is ignored if VideoSyncMode.Passthrough is set, in which case the KeyFrameInterval value follows the input source setting.
    label String
    An optional label for the codec. The label can be used to control muxing behavior.
    stretchMode String
    The resizing mode - how the input video will be resized to fit the desired output resolution(s). The default is AutoSize.
    syncMode String
    The Video Sync Mode
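
    As a sketch of the keyFrameInterval constraint above (non-zero, in [0.5, 20] seconds, ISO 8601, default PT2S), a hypothetical client-side validator might look like this; only the simple PT&lt;seconds&gt;S form is handled:

    ```python
    import re

    def validate_key_frame_interval(value: str = "PT2S") -> float:
        """Validate a keyFrameInterval such as 'PT2S' or 'PT0.5S' against the
        documented [0.5, 20] second range. Illustrative client-side check
        only; the service performs the authoritative validation."""
        m = re.fullmatch(r"PT(\d+(?:\.\d+)?)S", value)
        if not m:
            raise ValueError(f"expected a PT<seconds>S duration, got {value!r}")
        seconds = float(m.group(1))
        if not 0.5 <= seconds <= 20:
            raise ValueError(
                f"keyFrameInterval must be in [0.5, 20] seconds, got {seconds}")
        return seconds

    print(validate_key_frame_interval())         # → 2.0
    print(validate_key_frame_interval("PT0.5S"))  # → 0.5
    ```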

    VideoTrackDescriptorResponse

    Package Details

    Repository
    Azure Native pulumi/pulumi-azure-native
    License
    Apache-2.0