1. Edge Computing for AI Inference with Akamai EdgeWorkers


    Edge computing brings computation and data storage closer to where they are needed, which improves response times and saves bandwidth. One application of edge computing is AI inference, where a model is served directly at the edge rather than in a centralized cloud. This suits tasks that require low-latency decision making, such as content customization and real-time analytics. To implement this, Akamai EdgeWorkers lets you run JavaScript code on Akamai's edge platform, close to your users.

    In this Pulumi program, we are going to create an Akamai EdgeWorker which can be used for AI inference at the edge. We'll deploy a generic EdgeWorker; the AI inference specifics live in the JavaScript code, which is written separately and uploaded as the EdgeWorker bundle.

    We'll go through the following steps:

    1. Create an EdgeWorker resource for running user-provided JavaScript at the edge.
    2. Activate the EdgeWorker on the Akamai network.

    Here’s a high-level program using Pulumi with the Akamai provider to deploy this infrastructure:

    import pulumi
    import pulumi_akamai as akamai

    # Configuration for the Akamai EdgeWorker
    edgeworker_name = "ai-inference-worker"
    resource_tier_id = 123  # Obtain this from your Akamai account
    group_id = 456          # Identifier for the group on Akamai

    # Assuming you have the EdgeWorker bundle (the JavaScript code for
    # AI inference) available locally as a .tgz archive, specify its path.
    local_bundle_path = "./path-to-your-edgeworker-bundle.tgz"

    # Create an Akamai EdgeWorker resource and attach the local AI
    # inference bundle to it via the local_bundle argument.
    edgeworker = akamai.EdgeWorker(
        edgeworker_name,
        name=edgeworker_name,
        resource_tier_id=resource_tier_id,
        group_id=group_id,
        local_bundle=local_bundle_path,
    )

    # Activate the EdgeWorker version on the Akamai network.
    activation = akamai.EdgeWorkersActivation(
        f"{edgeworker_name}-activation",
        network="PRODUCTION",  # or "STAGING", depending on your requirements
        version=edgeworker.version,
        edgeworker_id=edgeworker.edgeworker_id,
    )

    # Export the EdgeWorker and activation identifiers for later reference.
    pulumi.export("edgeworker_id", edgeworker.edgeworker_id)
    pulumi.export("edgeworker_activation_id", activation.activation_id)


    • We import the pulumi and pulumi_akamai modules to interact with the Akamai platform.
    • Next, we define the name of our EdgeWorker along with the resource tier and group identifiers, which are specific to your Akamai account.
    • We then create an EdgeWorker resource using akamai.EdgeWorker. This resource represents the EdgeWorker that will run on the Akamai network, where the AI inference model will be served.
    • The JavaScript code for AI inference is assumed to be packaged in a .tgz archive located at ./path-to-your-edgeworker-bundle.tgz, and this bundle is attached to the EdgeWorker resource.
    • The EdgeWorkersActivation resource activates our EdgeWorker version on either the production or staging Akamai network.
    • Lastly, we export the EdgeWorker and activation identifiers for later reference.

    Keep in mind that this program sets up the infrastructure and deployment for the EdgeWorker. The JavaScript code delivering the AI inference functionality needs to be provided as a .tgz file and is out of scope for this Pulumi example. The actual inference logic and management of AI models would be contained within the JavaScript executed at the edge.