Conversation

tc-wolf
Contributor

@tc-wolf tc-wolf commented Aug 1, 2024

Upstream changed how grammar handling works in ggml-org/llama.cpp#8508 and ggml-org/llama.cpp#8093. We may also want to check whether the llama_grammar_init return value is NULL, since upstream now returns NULL on failure rather than throwing an error.
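
A minimal sketch of such a NULL check, assuming the low-level llama_cpp.llama_grammar_init binding with the upstream (rules, n_rules, start_rule_index) signature; the wrapper function itself is hypothetical, not llama-cpp-python's actual code:

import llama_cpp

def init_grammar_checked(rules, n_rules, start_rule_index):
    # Hypothetical helper: upstream now returns NULL for an invalid
    # grammar instead of throwing, so check before using the handle.
    grammar = llama_cpp.llama_grammar_init(rules, n_rules, start_rule_index)
    if not grammar:  # ctypes maps a NULL pointer to a falsy value
        raise ValueError("llama_grammar_init returned NULL (invalid grammar)")
    return grammar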

Old argument order for llama_grammar_accept_token was:
llama_grammar_accept_token(ctx, grammar, token)

Now it is:
llama_grammar_accept_token(grammar, ctx, token)
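
For reference, a minimal sketch of what the corresponding ctypes binding update looks like; the type aliases and library loading below are illustrative assumptions, not llama-cpp-python's actual binding code:

import ctypes

# Illustrative aliases mirroring the low-level pointer/token types (assumption):
llama_grammar_p = ctypes.c_void_p
llama_context_p = ctypes.c_void_p
llama_token = ctypes.c_int32

_lib = ctypes.CDLL("libllama.so")  # library name/path is platform-dependent

# Old argtypes order (pre-#8508) was [llama_context_p, llama_grammar_p, llama_token]
_lib.llama_grammar_accept_token.argtypes = [
    llama_grammar_p,  # grammar now comes first, matching upstream
    llama_context_p,
    llama_token,
]
_lib.llama_grammar_accept_token.restype = None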

Can test with:

import llama_cpp

model = llama_cpp.Llama(
    "bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
    offload_kqv=True,
    n_batch=1024,
    n_threads=12,
    n_threads_batch=12,
    verbose=True,
)
model.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that outputs in JSON.",
        },
        {"role": "user", "content": "Who won the world series in 2020"},
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {"team_name": {"type": "string"}},
            "required": ["team_name"],
        },
    },
    temperature=0.7,
)
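
With the argument order fixed, this call should return a chat completion whose message content is a JSON object matching the schema (a single required team_name string), rather than crashing the Python process as reported in #1623.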

Should fix #1623

@abetlen
Owner

abetlen commented Aug 4, 2024

@tc-wolf thank you so much!
