|
1 | 1 | {
|
2 | 2 | "cells": [
|
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Functions\n", |
| 8 | + "\n", |
| 9 | + "The OpenAI-compatible web server in `llama-cpp-python` supports function calling.\n", |
| 10 | + "\n", |
| 11 | + "Function calling allows API clients to specify a schema that the model's response should conform to.\n", |
| 12 | + "Function calling in `llama-cpp-python` works by combining models pretrained for function calling such as [`functionary`](https://huggingface.co/abetlen/functionary-7b-v1-GGUF) with constrained sampling to produce a response that is compatible with the schema.\n", |
| 13 | + "\n", |
| 14 | + "Note, however, that constrained sampling improves the likelihood that the response matches the schema but does not guarantee it.\n", |
| 15 | + "\n", |
| 16 | + "## Requirements\n", |
| 17 | + "\n", |
| 18 | + "Before we begin you will need the following:\n", |
| 19 | + "\n", |
| 20 | + "- A running `llama-cpp-python` server with a function calling compatible model. [See here](https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling)\n", |
| 21 | + "- The OpenAI Python Client `pip install openai`\n", |
| 22 | + "- (Optional) The Instructor Python Library `pip install instructor`\n", |
| 23 | + "\n", |
| 24 | + "## Function Calling with OpenAI Python Client\n", |
| 25 | + "\n", |
| 26 | + "We'll start with a basic demo that only uses the OpenAI Python Client." |
| 27 | + ] |
| 28 | + }, |
3 | 29 | {
|
4 | 30 | "cell_type": "code",
|
5 |
| - "execution_count": 29, |
| 31 | + "execution_count": 4, |
6 | 32 | "metadata": {},
|
7 | 33 | "outputs": [
|
8 | 34 | {
|
9 | 35 | "name": "stdout",
|
10 | 36 | "output_type": "stream",
|
11 | 37 | "text": [
|
12 |
| - "ChatCompletion(id='chatcmpl-b6dcbb47-1120-4761-8cd9-83542c97647b', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content=\"The current temperature in San Francisco is 72 degrees Fahrenheit. It's a sunny day with clear skies, making it perfect for outdoor activities.\\n \", role='assistant', function_call=None, tool_calls=None))], created=1699602158, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=38, prompt_tokens=135, total_tokens=173))\n" |
| 38 | + "ChatCompletion(id='chatcmpl-a2d9eb9f-7354-472f-b6ad-4d7a807729a3', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content='The current weather in San Francisco is **72°F** (22°C).\\n ', role='assistant', function_call=None, tool_calls=None))], created=1699638365, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=22, prompt_tokens=136, total_tokens=158))\n" |
13 | 39 | ]
|
14 | 40 | }
|
15 | 41 | ],
|
|
20 | 46 | "\n",
|
21 | 47 | "client = openai.OpenAI(\n",
|
22 | 48 | " api_key = \"sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\", # can be anything\n",
|
23 |
| - " base_url = \"http://100.64.159.73:8000/v1\"\n", |
| 49 | + " base_url = \"http://100.64.159.73:8000/v1\" # NOTE: Replace with IP address and port of your llama-cpp-python server\n", |
24 | 50 | ")\n",
|
25 | 51 | "\n",
|
26 | 52 | "# Example dummy function hard coded to return the same weather\n",
|
|
100 | 126 | "print(run_conversation())"
|
101 | 127 | ]
|
102 | 128 | },
|
| 129 | + { |
| 130 | + "cell_type": "markdown", |
| 131 | + "metadata": {}, |
| 132 | + "source": [ |
| 133 | + "# Function Calling with Instructor\n", |
| 134 | + "\n", |
| 135 | + "The above example is a bit verbose and requires you to manually verify the schema.\n", |
| 136 | + "\n", |
| 137 | + "For our next examples we'll use the `instructor` library to simplify the process and accomplish a number of different tasks with function calling.\n", |
| 138 | + "\n", |
| 139 | + "You'll first need to install the [`instructor`](https://github.com/jxnl/instructor/) library.\n", |
| 140 | + "\n", |
| 141 | + "You can do so by running the following command in your terminal:\n", |
| 142 | + "\n", |
| 143 | + "```bash\n", |
| 144 | + "pip install instructor\n", |
| 145 | + "```\n", |
| 146 | + "\n", |
| 147 | + "Below we'll go through a few basic examples taken directly from the [instructor cookbook](https://jxnl.github.io/instructor/).\n", |
| 148 | + "\n", |
| 149 | + "## Basic Usage" |
| 150 | + ] |
| 151 | + }, |
103 | 152 | {
|
104 | 153 | "cell_type": "code",
|
105 |
| - "execution_count": 30, |
| 154 | + "execution_count": 5, |
106 | 155 | "metadata": {},
|
107 | 156 | "outputs": [
|
108 | 157 | {
|
|
139 | 188 | "print(user)"
|
140 | 189 | ]
|
141 | 190 | },
|
| 191 | + { |
| 192 | + "cell_type": "markdown", |
| 193 | + "metadata": {}, |
| 194 | + "source": [ |
| 195 | + "## Text Classification\n", |
| 196 | + "\n", |
| 197 | + "### Single-Label Classification" |
| 198 | + ] |
| 199 | + }, |
142 | 200 | {
|
143 | 201 | "cell_type": "code",
|
144 |
| - "execution_count": 31, |
| 202 | + "execution_count": 7, |
145 | 203 | "metadata": {},
|
146 |
| - "outputs": [], |
| 204 | + "outputs": [ |
| 205 | + { |
| 206 | + "name": "stdout", |
| 207 | + "output_type": "stream", |
| 208 | + "text": [ |
| 209 | + "class_label=<Labels.SPAM: 'spam'>\n" |
| 210 | + ] |
| 211 | + } |
| 212 | + ], |
147 | 213 | "source": [
|
148 | 214 | "import enum\n",
|
149 | 215 | "\n",
|
|
172 | 238 | " ) # type: ignore\n",
|
173 | 239 | "\n",
|
174 | 240 | "prediction = classify(\"Hello there I'm a Nigerian prince and I want to give you money\")\n",
|
175 |
| - "assert prediction.class_label == Labels.SPAM" |
| 241 | + "assert prediction.class_label == Labels.SPAM\n", |
| 242 | + "print(prediction)" |
| 243 | + ] |
| 244 | + }, |
| 245 | + { |
| 246 | + "cell_type": "markdown", |
| 247 | + "metadata": {}, |
| 248 | + "source": [ |
| 249 | + "### Multi-Label Classification" |
176 | 250 | ]
|
177 | 251 | },
|
178 | 252 | {
|
179 | 253 | "cell_type": "code",
|
180 |
| - "execution_count": 32, |
| 254 | + "execution_count": 12, |
181 | 255 | "metadata": {},
|
182 | 256 | "outputs": [
|
183 | 257 | {
|
184 | 258 | "name": "stdout",
|
185 | 259 | "output_type": "stream",
|
186 | 260 | "text": [
|
187 |
| - "class_labels=[<MultiLabels.BILLING: 'billing'>, <MultiLabels.TECH_ISSUE: 'tech_issue'>]\n" |
| 261 | + "class_labels=[<MultiLabels.TECH_ISSUE: 'tech_issue'>, <MultiLabels.BILLING: 'billing'>]\n" |
188 | 262 | ]
|
189 | 263 | }
|
190 | 264 | ],
|
|
223 | 297 | "print(prediction)"
|
224 | 298 | ]
|
225 | 299 | },
|
| 300 | + { |
| 301 | + "cell_type": "markdown", |
| 302 | + "metadata": {}, |
| 303 | + "source": [ |
| 304 | + "## Self-Critique" |
| 305 | + ] |
| 306 | + }, |
226 | 307 | {
|
227 | 308 | "cell_type": "code",
|
228 |
| - "execution_count": 33, |
| 309 | + "execution_count": 13, |
229 | 310 | "metadata": {},
|
230 | 311 | "outputs": [
|
231 | 312 | {
|
232 | 313 | "name": "stdout",
|
233 | 314 | "output_type": "stream",
|
234 | 315 | "text": [
|
235 |
| - "question='What is the meaning of life?' answer='The meaning of life, according to the Devil, is to live a life of sin and debauchery.'\n" |
| 316 | + "question='What is the meaning of life?' answer='According to the Devil, the meaning of life is to live a life of sin and debauchery.'\n", |
| 317 | + "1 validation error for QuestionAnswerNoEvil\n", |
| 318 | + "answer\n", |
| 319 | + " Assertion failed, The statement promotes sin and debauchery, which can be considered objectionable. [type=assertion_error, input_value='According to the Devil, ... of sin and debauchery.', input_type=str]\n", |
| 320 | + " For further information visit https://errors.pydantic.dev/2.3/v/assertion_error\n" |
236 | 321 | ]
|
237 | 322 | }
|
238 | 323 | ],
|
|
294 | 379 | " print(e)"
|
295 | 380 | ]
|
296 | 381 | },
|
| 382 | + { |
| 383 | + "cell_type": "markdown", |
| 384 | + "metadata": {}, |
| 385 | + "source": [ |
| 386 | + "## Answering Questions with Validated Citations" |
| 387 | + ] |
| 388 | + }, |
297 | 389 | {
|
298 | 390 | "cell_type": "code",
|
299 | 391 | "execution_count": 42,
|
|
366 | 458 | "qa = ask_ai(question, context)\n",
|
367 | 459 | "print(qa)"
|
368 | 460 | ]
|
369 |
| - }, |
370 |
| - { |
371 |
| - "cell_type": "code", |
372 |
| - "execution_count": null, |
373 |
| - "metadata": {}, |
374 |
| - "outputs": [], |
375 |
| - "source": [] |
376 | 461 | }
|
377 | 462 | ],
|
378 | 463 | "metadata": {
|
|