1. Video Indexer for Cognitive Services with Azure Media

    Python

    To set up a Video Indexer for Cognitive Services with Azure Media using Pulumi, you'll need a sequence of resources configured to handle media assets, process them using Azure's Media Services, and then apply cognitive services to extract insights from the video content.

    Here's how you can orchestrate this with Pulumi in Python:

    1. Azure Cognitive Services Account: This serves as the foundation for applying cognitive services such as the Video Indexer; you create it as an instance of Azure Cognitive Services.

    2. Azure Media Services Account: Azure Media Services is used to upload, encode, and process media files. It provides the infrastructure needed for media playback and indexing.

    3. Assets and Jobs within Media Services: You'll need to define assets, which represent the media files you plan to index, and then create jobs that actually process those files. The jobs use the cognitive services to analyze and index the video content.
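
    All of these resources live inside an Azure resource group. The program below assumes an existing group named "my-resource-group"; if you want Pulumi to manage that group as well, a minimal sketch might look like this (the name and region are placeholders):

    import pulumi_azure_native.resources as resources

    # Hypothetical resource group holding the media and cognitive resources;
    # the name and location should match the rest of your deployment.
    resource_group = resources.ResourceGroup(
        "video-indexer-rg",
        resource_group_name="my-resource-group",
        location="West US",
    )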

    The following program will illustrate the necessary steps to configure these resources using Pulumi:

    import pulumi
    import pulumi_azure_native.media as media
    import pulumi_azure_native.cognitiveservices as cognitiveservices

    # Configure the Azure region to deploy the resources.
    azure_region = "West US"

    # Create an Azure Cognitive Services account for video indexing.
    cognitive_services_account = cognitiveservices.Account(
        "VideoIndexerCognitiveServicesAccount",
        resource_group_name="my-resource-group",
        # Use a unique account name; a placeholder is used here.
        account_name="mycognitiveaccount",
        location=azure_region,
        sku=cognitiveservices.SkuArgs(name="S1"),
        kind="VideoIndexer",
        properties=cognitiveservices.AccountPropertiesArgs(
            # Additional properties can be configured as needed.
        ),
    )

    # Create an Azure Media Services account.
    media_services_account = media.MediaService(
        "MediaServicesAccount",
        resource_group_name="my-resource-group",
        account_name="mymediaservices",
        location=azure_region,
        # No managed identity here; switch to "SystemAssigned" for identity-based authentication.
        identity=media.MediaServiceIdentityArgs(type="None"),
    )

    # Create a new asset within the Media Services account for the video to be indexed.
    asset = media.Asset(
        "MediaAsset",
        resource_group_name="my-resource-group",
        account_name=media_services_account.name,
        asset_name="my-video-asset",
        # The asset can also be configured with alternate_id, description, etc.
    )

    # Create a job to process the video asset.
    job = media.Job(
        "MediaJob",
        resource_group_name="my-resource-group",
        account_name=media_services_account.name,
        # The name of the transform with the desired analysis settings.
        transform_name="MyTransform",
        job_name="my-job",
        input=media.JobInputAssetArgs(
            odata_type="#Microsoft.Media.JobInputAsset",
            asset_name=asset.name,
        ),
        # In practice the output is usually a separate asset that receives the results.
        outputs=[
            media.JobOutputAssetArgs(
                odata_type="#Microsoft.Media.JobOutputAsset",
                asset_name=asset.name,
            ),
        ],
    )

    # Export relevant information.
    pulumi.export("cognitive_services_account_id", cognitive_services_account.id)
    pulumi.export("media_services_account_id", media_services_account.id)
    pulumi.export("asset_id", asset.id)
    pulumi.export("job_id", job.id)

    In this program:

    • We begin by importing the necessary Pulumi modules.
    • We create a Cognitive Services account for video indexing.
    • We create a Media Services account, which will be used to manage the assets and the job processing.
    • We define an asset; in a real-world scenario, this would be the video file you want to index.
    • We create a job which processes the video file using the settings defined in a transform (not shown in the program; a sketch follows this list).
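
    The transform referenced as "MyTransform" is not part of the program above. A minimal sketch of what it could look like, assuming you want the built-in video analyzer preset (the resource names and the insights_to_extract value are assumptions, not taken from the original program):

    # A sketch of a transform that runs the built-in video analyzer preset;
    # all names used here are placeholders.
    transform = media.Transform(
        "MediaTransform",
        resource_group_name="my-resource-group",
        account_name=media_services_account.name,
        transform_name="MyTransform",
        outputs=[
            media.TransformOutputArgs(
                preset=media.VideoAnalyzerPresetArgs(
                    odata_type="#Microsoft.Media.VideoAnalyzerPreset",
                    insights_to_extract="AllInsights",
                ),
            ),
        ],
    )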

    This is a foundational setup; you would still need to add further details such as the actual video files, the transform definition, and the connection between Media Services and the Cognitive Services account for video analysis. Be sure to replace placeholders such as resource group names, account names, and transform details with values suitable for your deployment.
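
    If you would rather not hard-code those placeholders, one option is to read them from the Pulumi stack configuration. A brief sketch (the configuration keys below are assumptions, not part of the original program):

    import pulumi

    # Hypothetical configuration keys; set them with `pulumi config set <key> <value>`.
    config = pulumi.Config()
    resource_group_name = config.require("resourceGroupName")
    media_account_name = config.get("mediaAccountName") or "mymediaservices"
    transform_name = config.get("transformName") or "MyTransform"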

    You can access the full Pulumi documentation for these resources here: