Allow to set TransformAmiVersion #5204

Open
@martinber

Description

Describe the feature you'd like

As I understand it, it is now possible to set TransformAmiVersion when launching transform jobs with the AWS SDK (see, for example, TransformResources in the boto3 docs). However, it does not yet seem possible to set this parameter through the Transformer class (or at least it is not mentioned in the docs).

I need it because I want to start using TransformAmiVersion=al2-ami-sagemaker-batch-gpu-535.

How would this feature be used? Please describe.
It depends on how you want to make the parameter available; for example:

transformer = Transformer(
    base_transform_job_name="",
    model_name="my-model",
    instance_type="ml.g4dn.2xlarge",
    transform_ami_version="al2-ami-sagemaker-batch-gpu-535",
    instance_count=1,
    strategy="SingleRecord",
    assemble_with="Line",
    output_path="s3://.../out.json",
    env={
        "MODEL_SERVER_TIMEOUT": "7200",
        "LOCAL_WEIGHTS_PATH": "/opt/ml/model/model.pth",
        "MODEL_SERVER_WORKERS": "1",
    },
)
transformer.transform(
    data="s3://.../in.json",
    data_type="S3Prefix",
    split_type="Line",
    job_name="my-job",
    compression_type=None,
    content_type="application/jsonlines",
    model_client_config={
        "InvocationsTimeoutInSeconds": 3600,
        "InvocationsMaxRetries": 0,
    },
    logs=False,
    wait=False,
)

Describe alternatives you've considered
Otherwise, I would have to stop using this library and call boto3 directly.
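
A minimal sketch of that boto3 workaround, for reference. The request is built as a plain dict so the TransformAmiVersion placement is visible; the job name, model name, and S3 paths are placeholders, not values from a real account:

```python
# Sketch of calling the SageMaker API directly instead of using Transformer.
# boto3's create_transform_job already accepts TransformAmiVersion inside
# TransformResources. All names and S3 URIs below are placeholders.
request = {
    "TransformJobName": "my-job",
    "ModelName": "my-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/in.json",
            }
        },
        "ContentType": "application/jsonlines",
        "SplitType": "Line",
    },
    "TransformOutput": {
        "S3OutputPath": "s3://my-bucket/out",
        "AssembleWith": "Line",
    },
    "TransformResources": {
        "InstanceType": "ml.g4dn.2xlarge",
        "InstanceCount": 1,
        # The parameter this issue asks to expose through Transformer:
        "TransformAmiVersion": "al2-ami-sagemaker-batch-gpu-535",
    },
}

# With AWS credentials configured, the job would be started via:
#   import boto3
#   boto3.client("sagemaker").create_transform_job(**request)
```
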

Additional context
When launching a transform job with Transformer() and printing the nvidia-smi output, I can see that it still launches with TransformAmiVersion=al2-ami-sagemaker-batch-gpu-470 (driver 470):

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.256.02   Driver Version: 470.256.02   CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   31C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
