$ ./llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 6650 XT, gfx1030 (0x1030), VMM: no, Wave Size: 32
version: 5097 (fe5b78c)
built with cc (Ubuntu 14.2.0-4ubuntu2) 14.2.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
The multiline feature caused #491 to regress. I expect Ctrl+D to behave exactly as it does in bash. Instead, it behaves as if it were Enter, and Ctrl+C doesn't work when it is held down by /dev/null either.
...
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to the AI.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
- Not using system message. To change it, set a different value via -sys PROMPT
> Hi
Hi there! How’s your day going so far? Is there anything I can help you with today? 😊
Do you want to:
* Chat about something?* Get some information?* Brainstorm ideas?>* Play a game?>>>>>
llama_perf_sampler_print: sampling time = 0.16 ms / 1 runs ( 0.16 ms per token, 6172.84 tokens per second)
llama_perf_context_print: load time = 793.98 ms
llama_perf_context_print: prompt eval time = 185.32 ms / 10 tokens ( 18.53 ms per token, 53.96 tokens per second)
llama_perf_context_print: eval time = 4524.32 ms / 62 runs ( 72.97 ms per token, 13.70 tokens per second)
llama_perf_context_print: total time = 13648.78 ms / 72 tokens
Interrupted by user
...
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to the AI.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
- Not using system message. To change it, set a different value via -sys PROMPT
> The 2023 NFL Draft is less than two weeks away, and the anticipation^C
> is building^C
>. Here’^C
> s a breakdown^C
> of the top^C
> storylines, key^C
> players, and^C
> potential surprises heading into^C
> the event^C
>:**^C
> 1.^C
> The Quarterback^C
> Carousel:**^C
>***^C
> The Big Three^C
> :** Caleb Williams^C
> (USC),^\Quit (core dumped)
This restores the behavior from ggml-org#491. This does not affect Ctrl+D's ability to
terminate --multiline-input lines (ggml-org#1040).
This also actually implements ggml-org#587: "If the user wants the text to end in a
newline, this should be accomplished by explicitly adding a newline by using
\ followed by return, then returning control by pressing return again."
Fixes ggml-org#12949
pockers21 pushed a commit to pockers21/llama.cpp that referenced this issue on Apr 28, 2025
Which llama.cpp modules do you know to be affected?
llama-cli
First Bad Commit
Perhaps 41654ef