
End-to-End example for *NON* multi model deployment #53

@nvs-abhilash

Description


What did you find confusing? Please describe.
Based on the documentation and the complete example multi_model_bring_your_own, it seems that sagemaker-inference-toolkit is only for multi-model requirements. But I have also seen links to sagemaker_pytorch_serving_container, which suggests that is not the case.

There is no clear instruction in the documentation, and no end-to-end example, indicating that it can also be used for single-model hosting scenarios.

Describe how documentation can be improved
You could provide one more end-to-end example for single-model hosting, along with some points in favor of using this Python package instead of designing our own Docker containers from scratch.
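As a starting point for such an example, here is a minimal sketch of a single-model handler script following the model_fn / input_fn / predict_fn / output_fn convention that the SageMaker serving containers use. The model format and payload shapes below are purely illustrative assumptions, not the toolkit's actual example:

```python
import json

# Hypothetical inference.py for single-model hosting. A real handler would
# load an actual model artifact (e.g. a PyTorch checkpoint) in model_fn;
# here a trivial dict stands in so the flow is self-contained.

def model_fn(model_dir):
    # Load the model from model_dir (called once at container startup).
    return {"scale": 2.0}

def input_fn(request_body, content_type):
    # Deserialize the incoming request payload.
    if content_type == "application/json":
        return json.loads(request_body)
    raise ValueError(f"Unsupported content type: {content_type}")

def predict_fn(data, model):
    # Run inference on the deserialized input.
    return [x * model["scale"] for x in data["inputs"]]

def output_fn(prediction, accept):
    # Serialize the prediction for the response.
    return json.dumps({"predictions": prediction})
```

Invoked in sequence, `output_fn(predict_fn(input_fn('{"inputs": [1, 2]}', "application/json"), model_fn("/opt/ml/model")), "application/json")` would produce `{"predictions": [2.0, 4.0]}` for this stand-in model.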

Additional context
