1. Protecting AI Model Serving Endpoints

    Python

    To protect AI model serving endpoints, you need to enforce authenticated and authorized access, typically through an Identity and Access Management (IAM) system, and encrypt data both in transit and at rest.
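    For instance, authorization can be scoped so that only specific principals may invoke a given endpoint. The following sketch builds a least-privilege IAM policy document that grants only sagemaker:InvokeEndpoint on a single endpoint ARN; the region, account ID, and endpoint name used here are placeholders, not values from this program.

```python
import json


def invoke_only_policy(region: str, account_id: str, endpoint_name: str) -> str:
    """Return a least-privilege policy allowing invocation of one endpoint."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sagemaker:InvokeEndpoint",
            # Restrict the grant to a single endpoint ARN
            "Resource": f"arn:aws:sagemaker:{region}:{account_id}:endpoint/{endpoint_name}",
        }],
    }
    return json.dumps(policy, indent=2)


# Placeholder values for illustration only
print(invoke_only_policy("us-west-2", "123456789012", "my-endpoint"))
```

    Attaching a policy like this to the calling principal, instead of a broad SageMaker grant, keeps the blast radius small if credentials leak.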

    The following Pulumi program demonstrates how to deploy an AI model serving endpoint on AWS SageMaker with secured access. We will use an aws.sagemaker.EndpointConfiguration along with an aws.sagemaker.Endpoint resource for this purpose. The EndpointConfiguration resource lets us specify the structure and access controls of the AI model endpoint we are deploying, and the Endpoint resource creates the actual endpoint instance from which predictions are served.

    In this example, we're defining an endpoint that uses AWS KMS for key management, where the KmsKeyArn references a key used to encrypt the endpoint's storage. We also set up data capture, specifying an S3 bucket to record the request/response data sent to the model, which is useful for monitoring the endpoint or retraining the model later.

    Here's how you can create such a resource:

    import pulumi
    import pulumi_aws as aws

    # Define the role for SageMaker to assume.
    # Make sure the role has the necessary permissions.
    sagemaker_role = aws.iam.Role("sagemakerRole",
        assume_role_policy="""{
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "sagemaker.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }]
        }"""
    )

    # Attach a policy to the role to allow full access to SageMaker
    sagemaker_policy_attachment = aws.iam.RolePolicyAttachment("sagemakerPolicyAttachment",
        role=sagemaker_role.name,
        policy_arn=aws.iam.ManagedPolicy.AMAZON_SAGEMAKER_FULL_ACCESS
    )

    # Create a KMS key to encrypt the SageMaker endpoint data
    kms_key = aws.kms.Key("kmsKey", description="KMS key for SageMaker endpoint")

    # Define the SageMaker model.
    # This assumes you have already built and uploaded your model artifacts.
    model = aws.sagemaker.Model("model",
        execution_role_arn=sagemaker_role.arn,
        primary_container={
            "image": "174872318107.dkr.ecr.us-west-2.amazonaws.com/kmeans:1",  # For illustration purposes; use your own image URI
            "model_data_url": "s3://my-bucket/model.tar.gz",  # Replace with your model data URL
        }
    )

    # Define the SageMaker endpoint configuration.
    # Capture mode "All" captures all payloads; omitting data_capture_config disables capturing.
    endpoint_config = aws.sagemaker.EndpointConfiguration("endpointConfig",
        kms_key_arn=kms_key.arn,
        production_variants=[{
            "variant_name": "AllTraffic",
            "model_name": model.name,
            "initial_instance_count": 1,
            "instance_type": "ml.m4.xlarge",
        }],
        data_capture_config={
            "enable_capture": True,
            "initial_sampling_percentage": 100,
            "destination_s3_uri": "s3://my-bucket/data-capture/",  # Replace with your S3 bucket URI
            "capture_options": [{"capture_mode": "All"}],
        }
    )

    # Finally, deploy the model as an endpoint
    endpoint = aws.sagemaker.Endpoint("endpoint",
        endpoint_config_name=endpoint_config.name
    )

    # Export the endpoint name
    pulumi.export("endpointName", endpoint.name)

    In the above code, replace my-bucket, model.tar.gz, and the kmeans:1 container image URI with your actual S3 bucket, model artifact, and image details. Choose the instance type (ml.m4.xlarge here) based on your specific modeling needs and cost considerations.

    This program exports the endpoint name, which you can use to interact with your AI model endpoint securely. Ensure that you call the endpoint from an authenticated context whose IAM permissions allow invocations on the SageMaker endpoint.
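    As a sketch of the client side, the snippet below shows one way to call the deployed endpoint using boto3's sagemaker-runtime client. The endpoint name and CSV payload are placeholder assumptions; the caller's AWS credentials must include sagemaker:InvokeEndpoint permission, for example via a policy like the one shown earlier.

```python
def build_invoke_args(endpoint_name: str, payload: bytes,
                      content_type: str = "text/csv") -> dict:
    """Assemble the keyword arguments for sagemaker-runtime invoke_endpoint."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": content_type,
        "Body": payload,
    }


def invoke(endpoint_name: str, payload: bytes) -> bytes:
    """Call the endpoint; requires credentials with sagemaker:InvokeEndpoint."""
    import boto3  # imported here so build_invoke_args stays usable without boto3
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(**build_invoke_args(endpoint_name, payload))
    return response["Body"].read()


# Example usage (requires a deployed endpoint and valid AWS credentials):
# prediction = invoke("endpoint-xxxx", b"1.0,2.0,3.0")
```

    Because the request is signed with the caller's IAM credentials, access control stays centralized in IAM rather than in application-level secrets.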