
fastapi error? #1143


Closed
silvacarl2 opened this issue Jan 31, 2024 · 6 comments

Comments

@silvacarl2

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

uvicorn llama_cpp.server.app:app --reload

returns this error:

INFO: Will watch for changes in these directories: ['/mnt/d/Dropbox/Developer-Tools/llama-cpp-python/examples/high_level_api']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [601969] using WatchFiles
ERROR: Error loading ASGI app. Attribute "app" not found in module "llama_cpp.server.app".

Current Behavior

Please provide a detailed written description of what llama-cpp-python did, instead.

Doesn't work.

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using, e.g. for Linux:

$ lscpu

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 141
Model name: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
Stepping: 1
CPU MHz: 2303.999
BogoMIPS: 4607.99
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 10 MiB
L3 cache: 24 MiB
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx
fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good
nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm
pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand
hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced
tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw
avx512vl xsaveopt xsavec xgetbv1 xsaves avx512vbmi umip avx512_vbmi2 gfni vaes
vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm
avx512_vp2intersect flush_l1d arch_capabilities

  • Operating System, e.g. for Linux:

$ uname -a

Linux DESKTOP-UD5405F 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  • SDK version, e.g. for Linux:
$ python3 --version
Python 3.8.10
$ make --version
GNU Make 4.2.1
$ g++ --version
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. just run fastapi_server.py

Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.

Try the following (a consolidated shell sketch follows the list):

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python -m pip install .
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp
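
A consolidated shell sketch of those steps (the model path and prompt are placeholders; see llama.cpp's own README for its exact build commands):

$ git clone https://github.com/abetlen/llama-cpp-python
$ cd llama-cpp-python
$ rm -rf _skbuild/   # delete any old builds
$ python -m pip install .
$ cd ./vendor/llama.cpp
$ mkdir build && cd build && cmake .. && cmake --build . --config Release
$ ./bin/main -m /path/to/model.gguf -p "Hello"   # same arguments you previously passed to llama-cpp-python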

Failure Logs

Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.

Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.

Example environment info:

llama-cpp-python$ git log | head -1
commit 47b0aa6e957b93dbe2c29d53af16fbae2dd628f2

llama-cpp-python$ python3 --version
Python 3.10.10

llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
fastapi                  0.95.0
numpy                    1.24.3
sse-starlette            1.3.3
uvicorn                  0.21.1

llama-cpp-python/vendor/llama.cpp$ git log | head -3
commit 66874d4fbcc7866377246efbcee938e8cc9c7d76
Author: Kerfuffle <[email protected]>
Date:   Thu May 25 20:18:01 2023 -0600
@abetlen
Owner

abetlen commented Jan 31, 2024

Hey @silvacarl2, sorry about that. The command to run the server via the uvicorn CLI changed, but I think I forgot to update it everywhere. I'll fix that.

You should be able to run the server via uvicorn --reload --factory llama_cpp.server.app:create_app. You'll also need to set the MODEL environment variable to the path of the gguf file you plan to serve, or set CONFIG_FILE if you plan to serve multiple models.
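
For example, a minimal sketch of that command (the gguf path below is a placeholder):

$ export MODEL=/path/to/model.gguf
$ uvicorn --reload --factory llama_cpp.server.app:create_app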

@silvacarl2
Author

awesome, thx, let me know when we can check it out.

@abetlen
Owner

abetlen commented Jan 31, 2024

I updated the example. You should be able to use the command from my comment above to run the server; let me know if you still have any issues.

@silvacarl2
Author

AWESOME THX!!!! CHECKING IT OUT!!!!!!!!!

@silvacarl2
Author

JUST FYI THIS IS TOTALLY AWESOME. WOW.

@abetlen
Owner

abetlen commented Feb 1, 2024

Haha happy to hear!
