Commit d6acb3e

Clean-up template READMEs (#12403)
Normalize, and update notebooks.

1 parent 4254028 commit d6acb3e

8 files changed: +88 −161 lines


templates/extraction-openai-functions/README.md (1 addition, 29 deletions)

@@ -10,32 +10,4 @@ By default, it will extract the title and author of papers.
 
 This template will use `OpenAI` by default.
 
-Be sure that `OPENAI_API_KEY` is set in your environment.
-
-## Adding the template
-
-Install the langchain package
-```
-pip install -e packages/extraction_openai_functions
-```
-
-Edit app/server.py to add that package to the routes
-```
-from fastapi import FastAPI
-from langserve import add_routes
-from extraction_openai_functions.chain import chain
-
-app = FastAPI()
-add_routes(app, chain)
-```
-
-Run the app
-```
-python app/server.py
-```
-
-You can use this template in the Playground:
-
-http://127.0.0.1:8000/extraction-openai-functions/playground/
-
-Also, see Jupyter notebook `openai_functions` for various other ways to connect to the template.
+Be sure that `OPENAI_API_KEY` is set in your environment.

templates/extraction-openai-functions/openai_functions.ipynb renamed to templates/extraction-openai-functions/extraction_openai_functions.ipynb (11 additions, 25 deletions)

@@ -29,22 +29,10 @@
 "source": [
 "## Run Template\n",
 "\n",
-"\n",
-"As shown in the README, add template and start server:\n",
-"```\n",
-"langchain serve add openai-functions\n",
-"langchain start\n",
+"In `server.py`, set -\n",
 "```\n",
-"\n",
-"We can now look at the endpoints:\n",
-"\n",
-"http://127.0.0.1:8000/docs#\n",
-"\n",
-"And specifically at our loaded template:\n",
-"\n",
-"http://127.0.0.1:8000/docs#/default/invoke_openai_functions_invoke_post\n",
-" \n",
-"We can also use remote runnable to call it."
+"add_routes(app, chain_ext, path=\"/extraction_openai_functions\")\n",
+"```"
 ]
 },
 {
@@ -55,40 +43,38 @@
 "outputs": [],
 "source": [
 "from langserve.client import RemoteRunnable\n",
-"oai_function = RemoteRunnable('http://localhost:8000/openai-functions')"
+"oai_function = RemoteRunnable('http://0.0.0.0:8001/extraction_openai_functions')"
 ]
 },
 {
 "cell_type": "markdown",
 "id": "68046695",
 "metadata": {},
 "source": [
-"The function call will perform tagging:\n",
-"\n",
-"* summarize\n",
-"* provide keywords\n",
-"* provide language"
+"The function will extract paper titles and authors from an input."
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": 8,
 "id": "6dace748",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"AIMessage(content='', additional_kwargs={'function_call': {'name': 'Overview', 'arguments': '{\\n \"summary\": \"This article discusses the concept of building agents with LLM (large language model) as their core controller. It explores the potentiality of LLM as a general problem solver and describes the key components of an LLM-powered autonomous agent system, including planning, memory, and tool use. The article also presents case studies and challenges related to building LLM-powered agents.\",\\n \"language\": \"English\",\\n \"keywords\": \"LLM, autonomous agents, planning, memory, tool use, case studies, challenges\"\\n}'}})"
+"[{'title': 'Chain of Thought', 'author': 'Wei et al. 2022'},\n",
+" {'title': 'Tree of Thoughts', 'author': 'Yao et al. 2023'},\n",
+" {'title': 'LLM+P', 'author': 'Liu et al. 2023'}]"
 ]
 },
-"execution_count": 3,
+"execution_count": 8,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
-"oai_function.invoke(text[0].page_content[0:1500])"
+"oai_function.invoke({\"input\":text[0].page_content[0:4000]})"
 ]
 }
],
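The updated invocation passes a dict payload and now returns a plain list of `{'title', 'author'}` records rather than an `AIMessage`. A minimal, stdlib-only sketch of consuming that output shape (the records mirror the result above; `format_citations` is a hypothetical helper, not part of the template):

```python
# Sketch: consuming the extraction output shown in the diff above.
# The sample records mirror the notebook's result; `format_citations`
# is a hypothetical helper, not part of this template.
papers = [
    {"title": "Chain of Thought", "author": "Wei et al. 2022"},
    {"title": "Tree of Thoughts", "author": "Yao et al. 2023"},
    {"title": "LLM+P", "author": "Liu et al. 2023"},
]

def format_citations(records):
    """Render each {'title', 'author'} record as 'Title (Author)'."""
    return [f"{r['title']} ({r['author']})" for r in records]

print(format_citations(papers))
```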

templates/rag-chroma-private/README.md (1 addition, 21 deletions)

@@ -26,24 +26,4 @@ This template will create and add documents to the vector database in `chain.py`
 
 By default, this will load a popular blog post on agents.
 
-However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
-
-## Adding the template
-
-Create your LangServe app:
-```
-langchain serve new my-app
-cd my-app
-```
-
-Add template:
-```
-langchain serve add rag-chroma-private
-```
-
-Start server:
-```
-langchain start
-```
-
-See Jupyter notebook `rag_chroma_private` for various ways to connect to the template.
+However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
New file (61 additions, 0 deletions)

@@ -0,0 +1,61 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "232fd40d-cf6a-402d-bcb8-414184a8e924",
+ "metadata": {},
+ "source": [
+ "## Run Template\n",
+ "\n",
+ "In `server.py`, set -\n",
+ "```\n",
+ "add_routes(app, chain_private, path=\"/rag_chroma_private\")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "888494ca-0509-4070-b36f-600a042f352c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "' Based on the given context, the answer to the question \"How does agent memory work?\" can be inferred as follows:\\n\\nAgent memory refers to the long-term memory module of an autonomous agent system, which records a comprehensive list of agents\\' experiences in natural language. Each element is an observation or event directly provided by the agent, and inter-agent communication can trigger new natural language statements. The retrieval model surfaces the context to inform the agent\\'s behavior according to relevance, recency, and importance.\\n\\nIn other words, the agent memory is a component of the autonomous agent system that stores and manages the agent\\'s experiences and observations in a long-term memory module, which is based on natural language processing and generation capabilities of a large language model (LLM). The memory is used to inform the agent\\'s behavior and decision-making, and it can be triggered by inter-agent communication.'"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langserve.client import RemoteRunnable\n",
+ "rag_app = RemoteRunnable('http://0.0.0.0:8001/rag_chroma_private/')\n",
+ "rag_app.invoke(\"How does agent memory work?\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.16"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

templates/rag-chroma/README.md (0 additions, 20 deletions)

@@ -13,23 +13,3 @@ These documents can be loaded from [many sources](https://python.langchain.com/d
 ## LLM
 
 Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.
-
-## Adding the template
-
-Create your LangServe app:
-```
-langchain serve new my-app
-cd my-app
-```
-
-Add template:
-```
-langchain serve add rag-chroma
-```
-
-Start server:
-```
-langchain start
-```
-
-See Jupyter notebook `rag_chroma` for various ways to connect to the template.

templates/rag-conversation/rag_conversation.ipynb (14 additions, 26 deletions)

@@ -7,38 +7,26 @@
 "source": [
 "## Run Template\n",
 "\n",
-"\n",
-"As shown in the README, add template and start server:\n",
-"```\n",
-"langchain serve add rag-conversation\n",
-"langchain start\n",
+"In `server.py`, set -\n",
 "```\n",
-"\n",
-"We can now look at the endpoints:\n",
-"\n",
-"http://127.0.0.1:8000/docs#\n",
-"\n",
-"And specifically at our loaded template:\n",
-"\n",
-"http://127.0.0.1:8000/docs#/default/invoke_rag_conversation_invoke_post\n",
-" \n",
-"We can also use remote runnable to call it."
+"add_routes(app, chain_rag_conv, path=\"/rag_conversation\")\n",
+"```"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 24,
+"execution_count": 2,
 "id": "5f521923",
 "metadata": {},
 "outputs": [],
 "source": [
 "from langserve.client import RemoteRunnable\n",
-"rag_app = RemoteRunnable('http://localhost:8000/rag-conversation')"
+"rag_app = RemoteRunnable('http://0.0.0.0:8001/rag_conversation')"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 26,
+"execution_count": 5,
 "id": "679bd83b",
 "metadata": {},
 "outputs": [],
@@ -52,17 +40,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 27,
+"execution_count": 8,
 "id": "94a05616",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"AIMessage(content=\"Agent memory works by utilizing both short-term memory and long-term memory mechanisms. \\n\\nShort-term memory allows the agent to learn and retain information within the current context or task. This in-context learning helps the agent handle complex tasks efficiently. \\n\\nOn the other hand, long-term memory enables the agent to retain and recall an unlimited amount of information over extended periods. This is achieved by leveraging an external vector store, such as a memory stream, which serves as a comprehensive database of the agent's past experiences in natural language. The memory stream records observations and events directly provided by the agent, and inter-agent communication can also trigger new natural language statements to be added to the memory.\\n\\nTo access and utilize the stored information, a retrieval model is employed. This model determines the context that is most relevant, recent, and important to inform the agent's behavior. By retrieving information from memory, the agent can reflect on past actions, learn from mistakes, and refine its behavior for future steps, ultimately improving the quality of its results.\")"
+"'Based on the given context, it is mentioned that the design of generative agents combines LLM (which stands for language, learning, and memory) with memory mechanisms. However, the specific workings of agent memory are not explicitly described in the given context.'"
 ]
 },
-"execution_count": 27,
+"execution_count": 8,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -73,12 +61,12 @@
 },
 {
 "cell_type": "code",
-"execution_count": 29,
+"execution_count": 9,
 "id": "ce206c8a",
 "metadata": {},
 "outputs": [],
 "source": [
-"chat_history = [(question, answer.content)]\n",
+"chat_history = [(question, answer)]\n",
 "answer = rag_app.invoke({\n",
 " \"question\": \"What are the different types?\",\n",
 " \"chat_history\": chat_history,\n",
@@ -87,17 +75,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 30,
+"execution_count": 10,
 "id": "4626f167",
 "metadata": {},
 "outputs": [
 {
 "data": {
 "text/plain": [
-"AIMessage(content='The different types of memory utilized by the agent are sensory memory, short-term memory, and long-term memory.')"
+"\"Based on the given context, two types of memory are mentioned: short-term memory and long-term memory. \\n\\n1. Short-term memory: It refers to the ability of the agent to retain and recall information for a short period. In the context, short-term memory is described as the in-context learning that allows the model to learn.\\n\\n2. Long-term memory: It refers to the capability of the agent to retain and recall information over extended periods. In the context, long-term memory is described as the ability to retain and recall infinite information by leveraging an external vector store and fast retrieval.\\n\\nIt's important to note that these are just the types of memory mentioned in the given context. There may be other types of memory as well, depending on the specific design and implementation of the agent.\""
 ]
 },
-"execution_count": 30,
+"execution_count": 10,
 "metadata": {},
 "output_type": "execute_result"
 }
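The `chat_history` hunk above changes `(question, answer.content)` to `(question, answer)`: the conversational chain now returns a plain string rather than an `AIMessage`, so there is no `.content` attribute to read. A small stdlib-only sketch of the resulting payload shape, with illustrative strings standing in for real model output:

```python
# Sketch of the request shape after this change: chat_history holds
# (human, ai) string pairs, and the raw string answer goes in directly.
question = "How does agent memory work?"
answer = "Agents combine short-term and long-term memory."  # illustrative stand-in

chat_history = [(question, answer)]
payload = {
    "question": "What are the different types?",
    "chat_history": chat_history,
}

# Every element of the history is a plain string, not an AIMessage.
assert all(isinstance(turn, str) for pair in chat_history for turn in pair)
```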

templates/rag-pinecone/README.md (0 additions, 20 deletions)

@@ -15,23 +15,3 @@ Be sure that you have set a few env variables in `chain.py`:
 ## LLM
 
 Be sure that `OPENAI_API_KEY` is set in order to use the OpenAI models.
-
-## Installation
-
-Create your LangServe app:
-```
-langchain serve new my-app
-cd my-app
-```
-
-Add template:
-```
-langchain serve add rag-pinecone
-```
-
-Start server:
-```
-langchain start
-```
-
-See Jupyter notebook `rag_pinecone` for various ways to connect to the template.

templates/summarize-anthropic/README.md (0 additions, 20 deletions)

@@ -14,23 +14,3 @@ To do this, we can use various prompts from LangChain hub, such as:
 This template will use `Claude2` by default.
 
 Be sure that `ANTHROPIC_API_KEY` is set in your environment.
-
-## Adding the template
-
-Create your LangServe app:
-```
-langchain serve new my-app
-cd my-app
-```
-
-Add template:
-```
-langchain serve add summarize-anthropic
-```
-
-Start server:
-```
-langchain start
-```
-
-See Jupyter notebook `summarize_anthropic` for various ways to connect to the template.
