What did you find confusing? Please describe.
Based on the documentation and the complete example multi_model_bring_your_own, it seems that sagemaker-inference-toolkit
is only for multi-model requirements. But I have also seen links to sagemaker_pytorch_serving_container, which suggests that this is not the case.
There is no clear instruction in the documentation, nor a link to an end-to-end example, indicating that the toolkit can also be used for single-model hosting scenarios.
Describe how documentation can be improved
You could provide one more end-to-end example covering single-model hosting, along with some points in favor of using this Python package instead of designing our own Docker containers from scratch.
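For context, a minimal sketch of the kind of single-model example being requested: SageMaker serving containers follow a four-function handler contract (model_fn / input_fn / predict_fn / output_fn). The stand-in "model" (a scaling dict) and the sample payload below are illustrative assumptions, not code from the toolkit's documentation.

```python
import json

# Sketch of the model_fn / input_fn / predict_fn / output_fn handler
# contract used by SageMaker serving containers. The "model" here is a
# stand-in dict; a real handler would load framework artifacts.

def model_fn(model_dir):
    # Load model artifacts from model_dir (stubbed as a scaling factor).
    return {"scale": 2.0}

def input_fn(request_body, content_type):
    # Deserialize the request payload into model input.
    if content_type == "application/json":
        return json.loads(request_body)["inputs"]
    raise ValueError(f"Unsupported content type: {content_type}")

def predict_fn(data, model):
    # Run inference on the deserialized input.
    return [x * model["scale"] for x in data]

def output_fn(prediction, accept):
    # Serialize the prediction into the response format.
    if accept == "application/json":
        return json.dumps({"predictions": prediction})
    raise ValueError(f"Unsupported accept type: {accept}")

# Simulate one request cycle, as the model server would do per invocation.
model = model_fn("/opt/ml/model")
data = input_fn('{"inputs": [1.0, 2.0, 3.0]}', "application/json")
response = output_fn(predict_fn(data, model), "application/json")
print(response)
```

An end-to-end example in the docs would show where these handlers plug into the toolkit's model server for the single-model case, mirroring what the multi-model example already does.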