
Commit 42e47d5

cdoern authored and ashwinb committed
fix: Ensure a better error stack trace when llama-stack is not built (llamastack#950)
# What does this PR do?

Currently, this is the output when you run a distribution locally without first running `llama stack build`:

```
Traceback (most recent call last):
  File "/Users/charliedoern/Documents/llama-sdk.py", line 25, in <module>
    models = client.models.list()
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 107, in list
    raise exc
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 95, in list
    return self._get(
           ^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/_base_client.py", line 1212, in get
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 168, in request
    return asyncio.run(self.async_client.request(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 258, in request
    if not self.endpoint_impls:
           ^^^^^^^^^^^^^^^^^^^
AttributeError: 'AsyncLlamaStackAsLibraryClient' object has no attribute 'endpoint_impls'
```

The intended exception is never raised. Initialize `self.endpoint_impls` to `None` up front so that when users call things like `models.list()` on a client that was never properly built, the existing guard fires and a more useful error is printed telling them that the client is not properly initialized.

## Test Plan

- I ran the script found here: https://llama-stack.readthedocs.io/en/latest/getting_started/index.html#run-inference-with-python-sdk locally with the changes in this PR, and the exception was caught successfully.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

---------

Signed-off-by: Charlie Doern <[email protected]>
Co-authored-by: Ashwin Bharambe <[email protected]>
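For reference, a minimal reproduction resembles the snippet below. This is a sketch, not code from the PR: the sync `LlamaStackAsLibraryClient` class is inferred from the traceback above (which shows its async counterpart), and the `"ollama"` template name passed to the constructor is an illustrative assumption.

```python
# Reproduction sketch; the names below are assumptions, not taken from the PR.
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("ollama")  # illustrative template name
client.initialize()            # fails if `llama stack build` was never run
models = client.models.list()  # before this fix: AttributeError on endpoint_impls
```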
1 parent: d4cb624 · commit: 42e47d5

File tree

1 file changed: +1, -0 lines


llama_stack/distribution/library_client.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -198,6 +198,7 @@ def __init__(
 
     async def initialize(self) -> bool:
         try:
+            self.endpoint_impls = None
             self.impls = await construct_stack(self.config, self.custom_provider_registry)
         except ModuleNotFoundError as _e:
             cprint(_e.msg, "red")
```
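Why one assignment is enough: the guard at `library_client.py:258` tests `self.endpoint_impls`, and an attribute assigned only on the success path simply does not exist when `construct_stack` raises. Below is a self-contained sketch of the pattern with simplified stand-in names and messages (not the actual llama-stack code):

```python
def construct_stack_stub():
    # Mimics construct_stack() failing because `llama stack build` was never run.
    raise ModuleNotFoundError("No module named 'some_provider'")


class ClientSketch:
    def initialize(self) -> bool:
        try:
            self.endpoint_impls = None          # the fix: attribute always exists now
            self.impls = construct_stack_stub()
            self.endpoint_impls = {"/v1/models": object()}  # populated on success
        except ModuleNotFoundError as e:
            print(e)
            return False
        return True

    def request(self):
        # With endpoint_impls pre-set to None, this guard raises the intended,
        # readable error; without the fix it raised AttributeError instead.
        if not self.endpoint_impls:
            raise ValueError("Client not initialized: please call initialize() first")


client = ClientSketch()
client.initialize()  # prints the ModuleNotFoundError message, returns False
client.request()     # raises ValueError with a clear message, not AttributeError
```

Setting the attribute first thing inside `initialize()` means every later access finds it, so a failed build surfaces as a readable "not initialized" error rather than a raw `AttributeError`.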
