Generative AI: Amazon Bedrock using the CLI

October 28th, 2023 377 Words

With AWS re:Invent 2023 just around the corner, the frequency of AWS press releases is increasing. Generative AI is a hot topic everywhere, tools like Midjourney and ChatGPT lower the barrier to entry for non-technical people, and I wonder how and when AWS will introduce a more solution-like Generative AI service.

Amazon Bedrock Settings

Amazon Bedrock is a flexible baseline for future Generative AI offerings by AWS or Amazon. With unified access to foundation models from various providers, it allows for flexible services in the future.

Amazon Bedrock Foundation Models

The available foundation models depend on the AWS region in use and your individual AWS account settings.

Stability AI's Stable Diffusion enables image generation on AWS with an on-demand pricing model, while all other models support text generation. For a simple CLI example, this post uses AI21 Labs' Jurassic-2 Mid to generate a short Haiku about software engineering.

Using the AWS CLI

After enabling access to the desired Amazon Bedrock models, common functionality is available using the AWS Command Line Interface. With the current version, you can list the available foundation models:

$ > aws bedrock list-foundation-models

{
    "modelSummaries": [
        {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-tg1-large",
            "modelId": "amazon.titan-tg1-large",
            "modelName": "Titan Text Large",
            "providerName": "Amazon",
            "inputModalities": [
                "TEXT"
            ],
            "outputModalities": [
                "TEXT"
            ],
            "responseStreamingSupported": true,
            "customizationsSupported": [
                "FINE_TUNING"
            ],
            "inferenceTypesSupported": [
                "ON_DEMAND"
            ]
        },
        …
    ]
}
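The full list is rather long, so a JMESPath query helps to narrow it down. As a small sketch (assuming AI21's entries carry the provider name AI21 Labs, analogous to the providerName field shown above), this prints only the AI21 model IDs:

$ > aws bedrock list-foundation-models \
    --query "modelSummaries[?providerName=='AI21 Labs'].modelId" \
    --output text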

Once you have access to one of the available models, you can invoke it with the AWS CLI. First, create a local file with your input payload:

// payload.json

{
    "prompt": "Please generate a funny Haiku about software engineering.",
    "maxTokens": 300,
    "temperature": 0.5,
    "topP": 0.9
}
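Keep in mind that the request body is model-specific: the fields above follow AI21's Jurassic-2 parameters. Other providers expect a different shape; for Amazon's Titan text models it looks roughly like the following (a sketch based on the Bedrock inference parameter documentation, so double-check the exact field names before using it):

// titan-payload.json

{
    "inputText": "Please generate a funny Haiku about software engineering.",
    "textGenerationConfig": {
        "maxTokenCount": 300,
        "temperature": 0.5,
        "topP": 0.9
    }
}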

With a local payload.json file, you can send the request to Amazon Bedrock:

$ > aws bedrock-runtime invoke-model \
    --model-id ai21.j2-mid-v1 \
    --body "$(base64 -i payload.json)" \
    response.json
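The base64 -i syntax works on macOS; on Linux, base64 payload.json does the same. Alternatively, a recent AWS CLI v2 can handle the encoding itself via the generic --cli-binary-format option, which should look roughly like this (untested variant):

$ > aws bedrock-runtime invoke-model \
    --model-id ai21.j2-mid-v1 \
    --body file://payload.json \
    --cli-binary-format raw-in-base64-out \
    response.json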

The response is written to response.json and includes the generated text:

$ > cat response.json | jq -r '.completions[0].data.text'

Debugging code
Is like solving a puzzle
With infinite pieces

That’s it; now you can use Generative AI on AWS to request simple text generations and get started with prompt design for text-based foundation models.


Enjoy! 🎉 Soon, Bedrock Agents will enable you to chain and orchestrate individual prompts and automate complex tasks …