
CUDA error 217 at ggml-cuda.cu:6292: peer access is not supported between these two devices #3230


Closed
quarterturn opened this issue Sep 17, 2023 · 12 comments · Fixed by #3231

@quarterturn

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

The server should load the model, offload all 83 layers across the three Tesla P40s, and start serving requests.

Current Behavior

execution of:

./server --numa -m ./models/llama2-70b-chat-ggml/ggml-chat-model-q6.bin --n-gpu-layers 83 -c 4096

fails with:

CUDA error 217 at ggml-cuda.cu:6292: peer access is not supported between these two devices

NOTE: I've also run it without "--numa"; the results are the same.

~/llama.cpp$ ./server --numa -m ./models/llama2-70b-chat-ggml/ggml-chat-model-q6.bin --n-gpu-layers 83 -c 4096
ggml_init_cublas: found 3 CUDA devices:
  Device 0: Tesla P40, compute capability 6.1
  Device 1: Tesla P40, compute capability 6.1
  Device 2: Tesla P40, compute capability 6.1
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
{"timestamp":1694971640,"level":"INFO","function":"main","line":1294,"message":"build info","build":1253,"commit":"111163e"}
{"timestamp":1694971640,"level":"INFO","function":"main","line":1296,"message":"system info","n_threads":28,"total_threads":56,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
llama_model_loader: loaded meta data with 15 key-value pairs and 723 tensors from ./models/llama2-70b-chat-ggml/ggml-chat-model-q6.bin (version GGUF V1 (support until nov 2023))
llama_model_loader: - tensor    0:                token_embd.weight q6_K     [  8192, 32000,     1,     1 ]
...
llama_model_loader: - tensor  722:           blk.79.ffn_norm.weight f32      [  8192,     1,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str
llama_model_loader: - kv   1:                               general.name str
llama_model_loader: - kv   2:                       llama.context_length u32
llama_model_loader: - kv   3:                     llama.embedding_length u32
llama_model_loader: - kv   4:                          llama.block_count u32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32
llama_model_loader: - kv   7:                 llama.attention.head_count u32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv  10:                       tokenizer.ggml.model str
llama_model_loader: - kv  11:                      tokenizer.ggml.tokens arr
llama_model_loader: - kv  12:                      tokenizer.ggml.scores arr
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr
llama_model_loader: - kv  14:               general.quantization_version u32
llama_model_loader: - type  f32:  161 tensors
llama_model_loader: - type q6_K:  562 tensors
llm_load_print_meta: format         = GGUF V1 (support until nov 2023)
llm_load_print_meta: arch           = llama
llm_load_print_meta: vocab type     = SPM
llm_load_print_meta: n_vocab        = 32000
llm_load_print_meta: n_merges       = 0
llm_load_print_meta: n_ctx_train    = 4096
llm_load_print_meta: n_ctx          = 4096
llm_load_print_meta: n_embd         = 8192
llm_load_print_meta: n_head         = 64
llm_load_print_meta: n_head_kv      = 8
llm_load_print_meta: n_layer        = 80
llm_load_print_meta: n_rot          = 128
llm_load_print_meta: n_gqa          = 8
llm_load_print_meta: f_norm_eps     = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff           = 28672
llm_load_print_meta: freq_base      = 10000.0
llm_load_print_meta: freq_scale     = 1
llm_load_print_meta: model type     = 70B
llm_load_print_meta: model ftype    = mostly Q6_K (guessed)
llm_load_print_meta: model params   = 68.98 B
llm_load_print_meta: model size     = 52.70 GiB (6.56 BPW)
llm_load_print_meta: general.name   = LLaMA
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.23 MB
llm_load_tensors: using CUDA for GPU acceleration
ggml_cuda_set_main_device: using device 0 (Tesla P40) as main device
llm_load_tensors: mem required  =  205.31 MB (+ 1280.00 MB per state)
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloading v cache to GPU
llm_load_tensors: offloading k cache to GPU
llm_load_tensors: offloaded 83/83 layers to GPU
llm_load_tensors: VRAM used: 55041 MB
....................................................................................................
llama_new_context_with_model: kv self size  = 1280.00 MB
llama_new_context_with_model: compute buffer total size =  561.47 MB
llama_new_context_with_model: VRAM scratch buffer: 560.00 MB

CUDA error 217 at ggml-cuda.cu:6292: peer access is not supported between these two devices

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using, e.g. for Linux:

~/llama.cpp$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         46 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  56
  On-line CPU(s) list:   0-55
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
    CPU family:          6
    Model:               79
    Thread(s) per core:  2
    Core(s) per socket:  14
    Socket(s):           2
    Stepping:            1
    CPU(s) scaling MHz:  40%
    CPU max MHz:         3300.0000
    CPU min MHz:         1200.0000
    BogoMIPS:            4788.56
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fx
                         sr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_go
                         od nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est
                         tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer
                          aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_sing
                         le pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase ts
                         c_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt c
                         qm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   896 KiB (28 instances)
  L1i:                   896 KiB (28 instances)
  L2:                    7 MiB (28 instances)
  L3:                    70 MiB (2 instances)
NUMA:
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-13,28-41
  NUMA node1 CPU(s):     14-27,42-55
Vulnerabilities:
  Gather data sampling:  Not affected
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                   Mitigation; Clear CPU buffers; SMT vulnerable
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Mitigation; Clear CPU buffers; SMT vulnerable
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS
                         Not affected
  Srbds:                 Not affected
  Tsx async abort:       Mitigation; Clear CPU buffers; SMT vulnerable
  • Operating System, e.g. for Linux:
    Debian 12
    $ uname -a
Linux t7910 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux

llama.cpp$ git log | head -1
commit 111163e

CUDA info:
NVIDIA-SMI 525.125.06 Driver Version: 525.125.06 CUDA Version: 12.0

nvidia-smi -a | grep Multi
    MultiGPU Board                        : No
    MultiGPU Board                        : No
    MultiGPU Board                        : No

NOTE: I have another dual-Xeon system that also reports "No" as above, and it does not have this issue.
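For diagnosing which device pairs are affected, a minimal standalone check along these lines should work (a sketch of my own, not part of llama.cpp; the file name peer_check.cu is hypothetical). It simply asks the CUDA runtime, via cudaDeviceCanAccessPeer, whether each ordered pair of devices supports peer access:

// peer_check.cu -- hypothetical standalone diagnostic, not part of llama.cpp
// Build, for example: nvcc -o peer_check peer_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n_devices = 0;
    cudaError_t err = cudaGetDeviceCount(&n_devices);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < n_devices; ++i) {
        for (int j = 0; j < n_devices; ++j) {
            if (i == j) continue;
            int can_access = 0;
            // Reports whether device i can directly address memory on device j.
            cudaDeviceCanAccessPeer(&can_access, i, j);
            printf("peer access %d -> %d: %s\n", i, j, can_access ? "yes" : "NO");
        }
    }
    return 0;
}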

@Ph0rk0z

Ph0rk0z commented Sep 17, 2023

Put them on the same CPU and try again?

@quarterturn
Author

Two are on CPU1 and one is on CPU2, due to the available slots on the motherboard. On my R720 system I definitely have GPUs split between CPUs and I don't have this issue, but I also haven't updated that system in the past week or so.

@quarterturn
Author

Looks like #2470 got merged to master, if I read that correctly. Is there a compile flag to disable it?

@Ph0rk0z

Ph0rk0z commented Sep 17, 2023

Yeah, set the limit lower than your n_batch; that should mean it never gets enabled.

@quarterturn
Author

What option are you referring to? I'm using '-c 4096'; is it an option not shown via '--help'?

@JohannesGaessler
Collaborator

I pushed a fix: #3231. Please check whether it works.

What option are you referring to? I'm using '-c 4096', is it an option not shown via '--help'?

It's a compile option, LLAMA_CUDA_PEER_MAX_BATCH_SIZE. I chose to do it like that because I think that, long term, I will be able to come up with a better solution for determining whether or not to enable peer access.

@Ph0rk0z

Ph0rk0z commented Sep 17, 2023

Heh.. I just hit the same problem. My 3090s are devices 0 and 1; NVLink always enables 0->1 and 1->0, but with this change it failed on 1-> when running Falcon along with the P40s. So saying it "only" works with the main device runs a bit counter to what I am seeing.

I will try the fix.

edit: it does indeed work.

@quarterturn
Author

"make clean && make -j LLAMA_CUBLAS=1 LLAMA_CUDA_PEER_MAX_BATCH_SIZE=2048" still results in "CUDA error 217 at ggml-cuda.cu:6292: peer access is not supported between these two devices
current device: 0
"
Trying the suggested fix resulted in a CUDA ordinal error and a hard GPU lockup requiring a power-cycle.

@Ph0rk0z

Ph0rk0z commented Sep 17, 2023

Setting the limit high means it will always be enabled. Set it to something like 64 if your batch size is 512. At least that's how I think it works.
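For example (my own sketch, not taken from the PR): assuming peer access is only turned on when the batch fits under this compile-time limit, building with the limit at 0 should keep it disabled entirely:

make clean && make -j LLAMA_CUBLAS=1 LLAMA_CUDA_PEER_MAX_BATCH_SIZE=0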

I also merged the PR that was posted before.

@JohannesGaessler
Collaborator

Trying the suggested fix resulted in a CUDA ordinal error and a hard GPU lockup requiring a power-cycle.

Just to be clear: you are not talking about the PR I posted, are you?

@quarterturn
Author

Trying the suggested fix resulted in a CUDA ordinal error and a hard GPU lockup requiring a power-cycle.

Just to be clear: you are not talking about the PR I posted, are you?

I was referring to #3231

@quarterturn
Author

I pulled main again just now and it's working. Thanks!
