Ask an LLM anything about Lex Fridman Podcast videos on YouTube
1. Create a virtual environment

```bash
python -m venv venv
source venv/bin/activate
```
2. Install dependencies

```bash
pip install -r requirements.txt
```
3. Set up a local instance of Qdrant. The simplest way to do this is with Docker. Run the following commands in the terminal:

```bash
docker pull qdrant/qdrant
docker run -p 6333:6333 qdrant/qdrant
```
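Once the container is running, it can help to confirm that Qdrant is reachable before moving on. Below is a minimal, standard-library-only sketch of such a check; the URL simply reflects the default port mapping from the `docker run` command above:

```python
import urllib.request
import urllib.error


def qdrant_is_up(url: str = "http://localhost:6333", timeout: float = 2.0) -> bool:
    """Return True if a Qdrant instance answers at the given base URL.

    The root endpoint of a running Qdrant server responds with a small
    JSON payload containing version information.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout means no server is listening yet.
        return False


if __name__ == "__main__":
    print("Qdrant reachable:", qdrant_is_up())
```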
4. Set up configuration variables in the `.env` file

- Rename the `.env.example` file to `.env`.
- In the current version of `lexllm`, the only mandatory environment variables are `CHATNBX_KEY` and `OPENAI_API_KEY`:
  - `CHATNBX_KEY` is used to invoke the ChatNBX API, which generates the answers.
  - `OPENAI_API_KEY` is used to invoke the OpenAI Embeddings API, which converts the query into an embedding that is in turn used to fetch relevant documents from the Qdrant vector DB.
- `QDRANT_CLOUD_KEY` and `QDRANT_DB_URL` are optional, since we store the embeddings in a local instance of Qdrant. I was facing some issues (ERROR 403) while creating a collection in the Qdrant cloud DB.
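As a sanity check for this step, the variables named above can be loaded and validated before starting the app. The sketch below uses only the standard library; a real project would more likely rely on `python-dotenv`, so treat this simplified `.env` parser as an illustration rather than the project's actual loading code:

```python
import os

# Mandatory and optional variables, as listed in the step above.
REQUIRED_KEYS = ["CHATNBX_KEY", "OPENAI_API_KEY"]
OPTIONAL_KEYS = ["QDRANT_CLOUD_KEY", "QDRANT_DB_URL"]


def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded


def missing_required(env: dict) -> list:
    """Return the mandatory keys that are missing or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

If `missing_required(os.environ)` returns an empty list after loading, the mandatory configuration is in place.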
5. Store embeddings in Qdrant DB

- Inside `embed.py`, you can set `CREATE_EMBEDDING` to `True` if you want to recreate the embeddings.
- Since the embeddings are already provided, I recommend keeping it unchanged.
```bash
python embed.py
```
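For readers who do want to recreate the embeddings, the heart of such a script is splitting each transcript into overlapping chunks before sending them to the embeddings API. The following is an illustrative sketch, not the actual `embed.py` logic; the chunk size and overlap are arbitrary assumptions:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split a transcript into overlapping character chunks.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from either neighbouring chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks
```

Each chunk would then be embedded (e.g. via the OpenAI Embeddings API) and upserted into a Qdrant collection together with its source metadata.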
- In order to interact with the chatbot, I have provided a notebook called `chat.ipynb`.
- It also contains some example questions that can be asked of the chatbot.
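Under the hood, the chat flow described above is a standard retrieval-augmented loop: embed the question with the OpenAI Embeddings API, search Qdrant for the nearest transcript chunks, then send the question plus the retrieved context to ChatNBX. The prompt-assembly step can be sketched as follows; the function name and template are illustrative assumptions, not the project's actual code:

```python
def build_prompt(question: str, documents: list) -> str:
    """Combine retrieved transcript chunks with the user's question.

    Numbering the excerpts makes it easy for the model (and the user)
    to refer back to a specific retrieved passage.
    """
    context = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using only the podcast excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The resulting string would be sent as the user message in the ChatNBX API call, with the model's reply shown in the notebook.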