
Could not load model: SIGILL: illegal instruction #1447

@Taronyuu

Description


LocalAI version:

quay.io/go-skynet/local-ai:master-cublas-cuda12-core

Environment, CPU architecture, OS, and Version:

Linux user-Z68X-UD3P-B3 6.2.0-39-generic #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

docker-compose.yaml:

version: '3.6'

services:
  api:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    image: quay.io/go-skynet/local-ai:master-cublas-cuda12-core
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
      - ./images/:/tmp/generated/images/
    command: ["/usr/bin/local-ai" ]
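The .env the compose file loads isn't shown here; a minimal sketch of what I believe the relevant entries look like (DEBUG must be enabled, given the DBG lines in the logs; REBUILD and CMAKE_ARGS are the knobs the startup banner below suggests, which I have not set yet since the logs show "Skipping rebuild"):

# .env (sketch; values assumed except where the logs confirm them)
DEBUG=true
# Workaround suggested by the startup banner: rebuild in-container with the
# unsupported instruction sets disabled (this CPU has AVX but no AVX2/FMA/F16C).
REBUILD=true
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"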
nvidia-smi
Fri Dec 15 22:13:12 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        On  | 00000000:01:00.0 Off |                  N/A |
|  0%   43C    P8              25W / 350W |      3MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Describe the bug
Every .gguf model I try fails with the error shown below. I've downloaded TheBloke's CodeLlama-13B (GGUF) and it failed; I've also tried the 7B Llama model, the Luna model (as shown in the docs), and now TinyLlama, and they all fail. I know the CUDA integration with Docker is working as expected, because I've run the NVIDIA sample workload and Axolotl training jobs just fine inside Docker.

Furthermore, if I remove the backend setting altogether, LocalAI tries every backend; none of them work either.

To Reproduce

Execute this curl request; any model fails the same way.

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'
user@user-Z68X-UD3P-B3:~/LocalAI/models$ pwd
/home/user/LocalAI/models
user@user-Z68X-UD3P-B3:~/LocalAI/models$ ls -al
total 4388344
drwxrwxr-x  2 user user       4096 Dec 15 21:29 .
drwxrwxr-x 17 user user       4096 Dec 15 21:29 ..
-rw-r--r--  1 root root        253 Dec 15 21:23 tinyllama.yaml
-rw-r--r--  1 root root  667822976 Dec 15 21:23 tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
cat models/tinyllama.yaml
context_size: 1024
name: thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
  temperature: 0.2
  top_k: 80
  top_p: 0.7
template:
  chat: chat
  completion: completion
backend: llama
f16: true
gpu_layers: 30

I've also tried llama-stable as backend, but that didn't help.
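To double-check the host CPU itself (an i5-2500K is Sandy Bridge: AVX yes, AVX2/FMA/F16C no), this one-liner confirms what LocalAI's own CPU check prints in the logs below:

# Split the CPU flags into one per line and look for avx2 exactly:
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -x 'avx2' || echo 'no avx2'
# On this machine it prints "no avx2", matching "CPU: no AVX2 found" in the logs.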

Expected behavior
I would expect the model to return a response, or at the very least a reasonable error. (I don't think the error shown is directly related to LocalAI itself.)

Logs

docker compose up
[+] Running 22/22
 ✔ api 21 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                              756.3s
   ✔ d1da99c2f148 Already exists                                                                                                                               0.0s
   ✔ 577ff23cfe55 Already exists                                                                                                                               0.0s
   ✔ c7b1e60e9d5a Already exists                                                                                                                               0.0s
   ✔ 714cd879eb99 Already exists                                                                                                                               0.0s
   ✔ 2bd8b252ec0a Pull complete                                                                                                                               28.0s
   ✔ 6ef0790763b3 Pull complete                                                                                                                                0.7s
   ✔ 44bdf02e4a01 Pull complete                                                                                                                               40.7s
   ✔ 77491a53669e Pull complete                                                                                                                                1.9s
   ✔ 05ae0f4a5fe4 Pull complete                                                                                                                                3.4s
   ✔ 4f4fb700ef54 Pull complete                                                                                                                                4.1s
   ✔ 14601617e69c Pull complete                                                                                                                              632.7s
   ✔ 6e3a4bd4a7f0 Pull complete                                                                                                                              154.3s
   ✔ 63661a91fb39 Pull complete                                                                                                                               42.1s
   ✔ c414c2c4015d Pull complete                                                                                                                               43.5s
   ✔ ffae41ac74b5 Pull complete                                                                                                                               46.2s
   ✔ 7bbc1461a8b5 Pull complete                                                                                                                              603.3s
   ✔ 5801e1ec273c Pull complete                                                                                                                              354.6s
   ✔ 30952fbd13a3 Pull complete                                                                                                                              511.2s
   ✔ 8f06b863e302 Pull complete                                                                                                                              582.3s
   ✔ 5b07b6742079 Pull complete                                                                                                                              588.2s
   ✔ ea25b4a47834 Pull complete                                                                                                                              594.2s
WARN[0756] Found orphan containers ([localai-local-ai-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 1/1
 ✔ Container localai-api-1  Recreated                                                                                                                          7.0s
Attaching to localai-api-1
localai-api-1  | @@@@@
localai-api-1  | Skipping rebuild
localai-api-1  | @@@@@
localai-api-1  | If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
localai-api-1  | If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
localai-api-1  | CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
localai-api-1  | see the documentation at: https://localai.io/basics/build/index.html
localai-api-1  | Note: See also https://github.com/go-skynet/LocalAI/issues/288
localai-api-1  | @@@@@
localai-api-1  | CPU info:
localai-api-1  | model name	: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
localai-api-1  | flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
localai-api-1  | CPU:    AVX    found OK
localai-api-1  | CPU: no AVX2   found
localai-api-1  | CPU: no AVX512 found
localai-api-1  | @@@@@
localai-api-1  | 9:03PM INF Starting LocalAI using 2 threads, with models path: /models
localai-api-1  | 9:03PM INF LocalAI version: fb6a5bc (fb6a5bc620cc39657e03ef958b09230acdf977a0)
localai-api-1  | 9:03PM DBG Model: thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf (config: {PredictionOptions:{Model:tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf Language: N:0 TopP:0.7 TopK:80 Temperature:0.2 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf F16:true Threads:0 Debug:false Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:chat ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:30 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false})
localai-api-1  | 9:03PM DBG Extracting backend assets files to /tmp/localai/backend_data
localai-api-1  |
localai-api-1  |  ┌───────────────────────────────────────────────────┐
localai-api-1  |  │                   Fiber v2.50.0                   │
localai-api-1  |  │               http://127.0.0.1:8080               │
localai-api-1  |  │       (bound on host 0.0.0.0 and port 8080)       │
localai-api-1  |  │                                                   │
localai-api-1  |  │ Handlers ............ 74  Processes ........... 1 │
localai-api-1  |  │ Prefork ....... Disabled  PID ................ 14 │
localai-api-1  |  └───────────────────────────────────────────────────┘
localai-api-1  |
localai-api-1  | 9:04PM DBG Request received:
localai-api-1  | 9:04PM DBG Configuration read: &{PredictionOptions:{Model:tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf F16:true Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:chat ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:30 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false}
localai-api-1  | 9:04PM DBG Parameters: &{PredictionOptions:{Model:tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:thebloke__tinyllama-1.1b-chat-v0.3-gguf__tinyllama-1.1b-chat-v0.3.q4_k_m.gguf F16:true Threads:2 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:chat ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:30 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false}
localai-api-1  | 9:04PM DBG Prompt (before templating): How are you?
localai-api-1  | 9:04PM DBG Template found, input modified to: Below is an instruction that describes a task. Write a response that appropriately completes the request.
localai-api-1  |
localai-api-1  | ### Instruction:
localai-api-1  | How are you?
localai-api-1  |
localai-api-1  | ### Response:
localai-api-1  | 9:04PM DBG Prompt (after templating): Below is an instruction that describes a task. Write a response that appropriately completes the request.
localai-api-1  |
localai-api-1  | ### Instruction:
localai-api-1  | How are you?
localai-api-1  |
localai-api-1  | ### Response:
localai-api-1  | 9:04PM DBG Loading model llama from tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
localai-api-1  | 9:04PM DBG Loading model in memory from file: /models/tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
localai-api-1  | 9:04PM DBG Loading Model tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf with gRPC (file: /models/tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf) (backend: llama): {backendString:llama model:tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf threads:2 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0004281e0 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
localai-api-1  | 9:04PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
localai-api-1  | 9:04PM DBG GRPC Service for tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf will be running at: '127.0.0.1:42591'
localai-api-1  | 9:04PM DBG GRPC Service state dir: /tmp/go-processmanager721992787
localai-api-1  | 9:04PM DBG GRPC Service Started
localai-api-1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42591: connect: connection refused"
localai-api-1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42591: connect: connection refused"
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 2023/12/15 21:04:11 gRPC Server listening at 127.0.0.1:42591
localai-api-1  | 9:04PM DBG GRPC Service Ready
localai-api-1  | 9:04PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf ContextSize:1024 Seed:0 NBatch:512 F16Memory:true MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:30 MainGPU: TensorSplit: Threads:2 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0}
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr SIGILL: illegal instruction
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr PC=0x8a06bc m=5 sigcode=2
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr signal arrived during cgo execution
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr instruction bytes: 0xc4 0xe3 0x7d 0x39 0x8c 0x24 0x18 0x3 0x0 0x0 0x1 0x66 0x89 0x84 0x24 0x0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 50 [syscall]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.cgocall(0x823240, 0xc0000f54d8)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/cgocall.go:157 +0x4b fp=0xc0000f54b0 sp=0xc0000f5478 pc=0x41960b
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr github.com/go-skynet/go-llama%2ecpp._Cfunc_load_model(0x7fa350000cd0, 0x400, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1e, 0x200, ...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	_cgo_gotypes.go:267 +0x4f fp=0xc0000f54d8 sp=0xc0000f54b0 pc=0x815b2f
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr github.com/go-skynet/go-llama%2ecpp.New({0xc00002c0c0, 0x2c}, {0xc00010bd00, 0x9, 0x938460?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/sources/go-llama/llama.go:39 +0x385 fp=0xc0000f56e8 sp=0xc0000f54d8 pc=0x816525
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr main.(*LLM).Load(0xc0000a4618, 0xc00012ed20)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/backend/go/llm/llama/llama.go:87 +0xc9c fp=0xc0000f5900 sp=0xc0000f56e8 pc=0x82049c
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr github.com/go-skynet/LocalAI/pkg/grpc.(*server).LoadModel(0xc000098d50, {0xc00012ed20?, 0x50c886?}, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/pkg/grpc/server.go:50 +0xe6 fp=0xc0000f59b0 sp=0xc0000f5900 pc=0x81dce6
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr github.com/go-skynet/LocalAI/pkg/grpc/proto._Backend_LoadModel_Handler({0x9a9900?, 0xc000098d50}, {0xa90570, 0xc000024cc0}, 0xc00010a380, 0x0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/pkg/grpc/proto/backend_grpc.pb.go:264 +0x169 fp=0xc0000f5a08 sp=0xc0000f59b0 pc=0x80afa9
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001d61e0, {0xa90570, 0xc00028e120}, {0xa93a98, 0xc000007ba0}, 0xc0002a0000, 0xc0001dec90, 0x11895b0, 0x0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:1343 +0xe03 fp=0xc0000f5df0 sp=0xc0000f5a08 pc=0x7f3f23
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).handleStream(0xc0001d61e0, {0xa93a98, 0xc000007ba0}, 0xc0002a0000)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:1737 +0xc4c fp=0xc0000f5f78 sp=0xc0000f5df0 pc=0x7f8e8c
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).serveStreams.func1.1()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:986 +0x86 fp=0xc0000f5fe0 sp=0xc0000f5f78 pc=0x7f1e26
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000f5fe8 sp=0xc0000f5fe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by google.golang.org/grpc.(*Server).serveStreams.func1 in goroutine 7
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:997 +0x145
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 1 [IO wait]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0x42b828?, 0x7fa36032a8f8?, 0x78?, 0x9b?, 0x4e8e3d?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc0001c9b08 sp=0xc0001c9ae8 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.netpollblock(0xc0001c9b98?, 0x418da6?, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc0001c9b40 sp=0xc0001c9b08 pc=0x446857
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.runtime_pollWait(0x7fa3603b1e58, 0x72)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc0001c9b60 sp=0xc0001c9b40 pc=0x477885
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*pollDesc).wait(0xc00019a600?, 0x0?, 0x0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc0001c9b88 sp=0xc0001c9b60 pc=0x4e1aa7
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*pollDesc).waitRead(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*FD).Accept(0xc00019a600)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac fp=0xc0001c9c30 sp=0xc0001c9b88 pc=0x4e6f8c
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*netFD).accept(0xc00019a600)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/net/fd_unix.go:172 +0x29 fp=0xc0001c9ce8 sp=0xc0001c9c30 pc=0x642969
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*TCPListener).accept(0xc0000da4a0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e fp=0xc0001c9d10 sp=0xc0001c9ce8 pc=0x65993e
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*TCPListener).Accept(0xc0000da4a0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/net/tcpsock.go:315 +0x30 fp=0xc0001c9d40 sp=0xc0001c9d10 pc=0x658af0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).Serve(0xc0001d61e0, {0xa8fb80?, 0xc0000da4a0})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:852 +0x462 fp=0xc0001c9e80 sp=0xc0001c9d40 pc=0x7f0a82
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr github.com/go-skynet/LocalAI/pkg/grpc.StartServer({0x7ffe142a4aa5?, 0xc00009c130?}, {0xa941c0?, 0xc0000a4618})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/pkg/grpc/server.go:178 +0x17d fp=0xc0001c9f10 sp=0xc0001c9e80 pc=0x81f6dd
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr main.main()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/build/backend/go/llm/llama/main.go:20 +0x85 fp=0xc0001c9f40 sp=0xc0001c9f10 pc=0x822a45
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.main()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc0001c9fe0 sp=0xc0001c9f40 pc=0x44d95b
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0001c9fe8 sp=0xc0001c9fe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 2 [force gc (idle)]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00004cfa8 sp=0xc00004cf88 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goparkunlock(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:404
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.forcegchelper()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc00004cfe0 sp=0xc00004cfa8 pc=0x44dc33
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00004cfe8 sp=0xc00004cfe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by runtime.init.6 in goroutine 1
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:310 +0x1a
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 3 [GC sweep wait]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00004d778 sp=0xc00004d758 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goparkunlock(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:404
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.bgsweep(0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgcsweep.go:280 +0x94 fp=0xc00004d7c8 sp=0xc00004d778 pc=0x439cd4
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gcenable.func1()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc00004d7e0 sp=0xc00004d7c8 pc=0x42ee85
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00004d7e8 sp=0xc00004d7e0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by runtime.gcenable in goroutine 1
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgc.go:200 +0x66
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 4 [GC scavenge wait]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0xc000034070?, 0xa88d58?, 0x1?, 0x0?, 0xc0000071e0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00004df70 sp=0xc00004df50 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goparkunlock(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:404
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.(*scavengerState).park(0x11d2aa0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00004dfa0 sp=0xc00004df70 pc=0x4375a9
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.bgscavenge(0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgcscavenge.go:653 +0x3c fp=0xc00004dfc8 sp=0xc00004dfa0 pc=0x437b3c
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gcenable.func2()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc00004dfe0 sp=0xc00004dfc8 pc=0x42ee25
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00004dfe8 sp=0xc00004dfe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by runtime.gcenable in goroutine 1
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mgc.go:201 +0xa5
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 18 [finalizer wait]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0x198?, 0x9d3860?, 0x1?, 0xef?, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00004c620 sp=0xc00004c600 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.runfinq()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00004c7e0 sp=0xc00004c620 pc=0x42dea7
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00004c7e8 sp=0xc00004c7e0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by runtime.createfing in goroutine 1
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/mfinal.go:163 +0x3d
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 5 [select]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0xc000165f00?, 0x2?, 0x1e?, 0x0?, 0xc000165ed4?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000165d80 sp=0xc000165d60 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.selectgo(0xc000165f00, 0xc000165ed0, 0x78b1f6?, 0x0, 0xc000150000?, 0x1)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000165ea0 sp=0xc000165d80 pc=0x45d805
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000100550, 0x1)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:418 +0x113 fp=0xc000165f30 sp=0xc000165ea0 pc=0x76a053
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001401c0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:552 +0x86 fp=0xc000165f90 sp=0xc000165f30 pc=0x76a766
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func2()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:336 +0xd5 fp=0xc000165fe0 sp=0xc000165f90 pc=0x780fb5
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000165fe8 sp=0xc000165fe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport in goroutine 34
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:333 +0x1acc
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 6 [select]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0xc000048770?, 0x4?, 0x0?, 0x69?, 0xc0000486c0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000048528 sp=0xc000048508 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.selectgo(0xc000048770, 0xc0000486b8, 0xf?, 0x0, 0xc000048690?, 0x1)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000048648 sp=0xc000048528 pc=0x45d805
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.(*http2Server).keepalive(0xc000007ba0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:1152 +0x225 fp=0xc0000487c8 sp=0xc000048648 pc=0x788265
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func4()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:339 +0x25 fp=0xc0000487e0 sp=0xc0000487c8 pc=0x780ea5
localai-api-1  | [172.19.0.1]:55268 500 - POST /v1/chat/completions
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000487e8 sp=0xc0000487e0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport in goroutine 34
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:339 +0x1b0e
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr goroutine 7 [IO wait]:
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.gopark(0x11eac00?, 0xb?, 0x0?, 0x0?, 0x6?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000061aa0 sp=0xc000061a80 pc=0x44ddae
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.netpollblock(0x4c6d18?, 0x418da6?, 0x0?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/netpoll.go:564 +0xf7 fp=0xc000061ad8 sp=0xc000061aa0 pc=0x446857
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.runtime_pollWait(0x7fa3603b1d60, 0x72)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/netpoll.go:343 +0x85 fp=0xc000061af8 sp=0xc000061ad8 pc=0x477885
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*pollDesc).wait(0xc000316000?, 0xc000148000?, 0x0)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc000061b20 sp=0xc000061af8 pc=0x4e1aa7
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*pollDesc).waitRead(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr internal/poll.(*FD).Read(0xc000316000, {0xc000148000, 0x8000, 0x8000})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a fp=0xc000061bb8 sp=0xc000061b20 pc=0x4e2d9a
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*netFD).Read(0xc000316000, {0xc000148000?, 0x1060100000000?, 0x8?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/net/fd_posix.go:55 +0x25 fp=0xc000061c00 sp=0xc000061bb8 pc=0x640945
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*conn).Read(0xc000318000, {0xc000148000?, 0xc000061c90?, 0x3?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/net/net.go:179 +0x45 fp=0xc000061c48 sp=0xc000061c00 pc=0x651065
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr net.(*TCPConn).Read(0x0?, {0xc000148000?, 0xc000061ca0?, 0x46bcad?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	<autogenerated>:1 +0x25 fp=0xc000061c78 sp=0xc000061c48 pc=0x663805
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr bufio.(*Reader).Read(0xc0000767e0, {0xc000158040, 0x9, 0xc1574db2f3113641?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/bufio/bufio.go:244 +0x197 fp=0xc000061cb0 sp=0xc000061c78 pc=0x5bbed7
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr io.ReadAtLeast({0xa8d5e0, 0xc0000767e0}, {0xc000158040, 0x9, 0x9}, 0x9)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/io/io.go:335 +0x90 fp=0xc000061cf8 sp=0xc000061cb0 pc=0x4c0ed0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr io.ReadFull(...)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/io/io.go:354
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr golang.org/x/net/http2.readFrameHeader({0xc000158040, 0x9, 0xc000288030?}, {0xa8d5e0?, 0xc0000767e0?})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x65 fp=0xc000061d48 sp=0xc000061cf8 pc=0x756ac5
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr golang.org/x/net/http2.(*Framer).ReadFrame(0xc000158000)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:498 +0x85 fp=0xc000061df0 sp=0xc000061d48 pc=0x757205
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams(0xc000007ba0, 0x1?)
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_server.go:636 +0x145 fp=0xc000061f00 sp=0xc000061df0 pc=0x784105
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).serveStreams(0xc0001d61e0, {0xa93a98?, 0xc000007ba0})
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:979 +0x1c2 fp=0xc000061f80 sp=0xc000061f00 pc=0x7f1bc2
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr google.golang.org/grpc.(*Server).handleRawConn.func1()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:920 +0x45 fp=0xc000061fe0 sp=0xc000061f80 pc=0x7f1425
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr runtime.goexit()
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000061fe8 sp=0xc000061fe0 pc=0x47c961
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr created by google.golang.org/grpc.(*Server).handleRawConn in goroutine 34
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr 	/go/pkg/mod/google.golang.org/[email protected]/server.go:919 +0x185
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rax    0x0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rbx    0xab7900
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rcx    0x7fa35bff51a0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rdx    0x7fa3d2c616d8
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rdi    0x7fa3d2c616c8
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rsi    0x7fa3d2c59e38
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rbp    0x7fa35bff52c0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rsp    0x7fa35bff4f40
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r8     0x0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r9     0x7fa350000080
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r10    0xfffffffffffffd8c
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r11    0x7fa3d2b64990
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r12    0x1
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r13    0x7fa35bff5060
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r14    0x7fa35bff4ff0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr r15    0x7fa35bff5160
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rip    0x8a06bc
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr rflags 0x10246
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr cs     0x33
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr fs     0x0
localai-api-1  | 9:04PM DBG GRPC(tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf-127.0.0.1:42591): stderr gs     0x0
localai-api-1  | [127.0.0.1]:40982 200 - GET /readyz
localai-api-1  | [127.0.0.1]:53278 200 - GET /readyz

Additional context
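
The instruction bytes in the SIGILL report begin with 0xc4, a three-byte VEX prefix. Decoding the first 11 bytes gives vextracti128, an AVX2 instruction that a Sandy Bridge CPU cannot execute; the remaining bytes belong to the next instruction, since the runtime simply dumps 16 bytes from the faulting PC. A sketch to verify the decode (assumes GNU binutils is available):

# Write the faulting instruction bytes to a file and disassemble them raw.
# Expected output: vextracti128 XMMWORD PTR [rsp+0x318],ymm1,0x1
printf '\xc4\xe3\x7d\x39\x8c\x24\x18\x03\x00\x00\x01' > /tmp/sigill.bin
objdump -D -b binary -m i386:x86-64 /tmp/sigill.bin

If that decode is right, the pre-compiled llama backend in this image contains AVX2 code even though LocalAI's own check prints "CPU: no AVX2 found", and rebuilding with REBUILD=true plus the CMAKE_ARGS from the startup banner should work around it.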

Labels: bug