- The AI Debate Simulator is a dynamic application that pits an AI proponent against an AI opponent in structured debates. Real-time updates keep users engaged, while a streamlined user interface makes it easy to explore a variety of debate topics.
- Real-time argument generation by AI agents.
- Turn-based debate flow managed by LangGraph (a minimal graph sketch follows this list).
- Utilizes WebSockets for real-time communication between the backend and frontend (an endpoint sketch also follows the list).
- Employs a cloud-based model, GPT-3.5-turbo, for the proponent.
- Leverages a local LLM, Mistral-7B-v0.1, for the opponent.
- Argument generation is split across the cloud-based and local LLMs, so each side's responses come from its own model.
- Interactive UI with round indicators and progress tracking.
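To make the turn-based flow concrete, below is a minimal sketch of what a LangGraph debate graph could look like, with the proponent backed by GPT-3.5-turbo and the opponent by the local Mistral GGUF model. The node names, state fields, and three-round limit are illustrative assumptions, not the project's exact implementation.

```python
# debate_graph.py -- illustrative sketch only; node names, state fields,
# and the round limit are assumptions, not the repository's actual code.
from typing import TypedDict, List

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from llama_cpp import Llama


class DebateState(TypedDict):
    topic: str
    round: int
    transcript: List[str]


# Cloud model for the proponent, local GGUF model for the opponent.
proponent_llm = ChatOpenAI(model="gpt-3.5-turbo")
opponent_llm = Llama(model_path="backend/models/mistral-7b-v0.1.Q4_K_M.gguf", n_ctx=2048)


def proponent_turn(state: DebateState) -> DebateState:
    prompt = f"Argue FOR: {state['topic']}\n" + "\n".join(state["transcript"])
    reply = proponent_llm.invoke(prompt).content
    return {**state, "transcript": state["transcript"] + [f"Proponent: {reply}"]}


def opponent_turn(state: DebateState) -> DebateState:
    prompt = f"Argue AGAINST: {state['topic']}\n" + "\n".join(state["transcript"])
    reply = opponent_llm(prompt, max_tokens=256)["choices"][0]["text"]
    return {**state,
            "transcript": state["transcript"] + [f"Opponent: {reply}"],
            "round": state["round"] + 1}


def next_step(state: DebateState) -> str:
    # Stop after three full rounds (an assumed limit).
    return END if state["round"] >= 3 else "proponent"


graph = StateGraph(DebateState)
graph.add_node("proponent", proponent_turn)
graph.add_node("opponent", opponent_turn)
graph.set_entry_point("proponent")
graph.add_edge("proponent", "opponent")
graph.add_conditional_edges("opponent", next_step)
debate = graph.compile()

final_state = debate.invoke({"topic": "AI debates improve critical thinking",
                             "round": 0, "transcript": []})
print("\n\n".join(final_state["transcript"]))
```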
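Similarly, here is a minimal sketch of the kind of FastAPI WebSocket endpoint that can stream each argument to the frontend as soon as it is generated; the route path and message shape are assumptions, not the backend's actual API.

```python
# websocket sketch -- illustrative only; the route path and message format are assumed.
from fastapi import FastAPI, WebSocket

app = FastAPI()


@app.websocket("/ws/debate")
async def debate_socket(websocket: WebSocket):
    await websocket.accept()
    topic = await websocket.receive_text()  # the frontend sends the chosen topic first
    for round_no in range(1, 4):            # assumed three rounds
        for speaker in ("proponent", "opponent"):
            # In the real app the argument would come from the debate graph;
            # a placeholder string keeps this sketch self-contained.
            argument = f"{speaker} argument on '{topic}' for round {round_no}"
            await websocket.send_json({"round": round_no, "speaker": speaker, "argument": argument})
    await websocket.close()
```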
Pre-requisites:
- Python 3.9+
- Conda or venv set up for virtual environment management.
- CUDA-enabled GPU (for optimal local LLM performance).
- Mistral-7B-v0.1 GGUF model: download the quantized model file (e.g., `mistral-7b-v0.1.Q4_K_M.gguf`) from TheBloke's Hugging Face repository and place it in the `backend/models/` directory (a download sketch follows this list).
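If you prefer to fetch the file from a script, here is a minimal sketch using `huggingface_hub`; the repository id is an assumption, so verify it against TheBloke's page before running.

```python
# download sketch -- illustrative only; verify the repo id on Hugging Face before using it.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GGUF",  # assumed repository id
    filename="mistral-7b-v0.1.Q4_K_M.gguf",   # the quantization mentioned above
    local_dir="backend/models",               # where the app expects the model file
)
print(f"Model saved to {path}")
```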
Installation:
- Clone the repository:
  ```bash
  git clone https://github.com/Data-Science-Community-SRM/Debate-Simulator.git
  cd Debate-Simulator
  ```
- Create and activate a virtual environment:
  ```bash
  conda create -n debate-sim python=3.9
  conda activate debate-sim
  ```
- Install the required packages:
  ```bash
  pip install -r requirements.txt
  ```
Execution:
- Configure the environment variables (a sketch of how the backend can read these keys follows this list):
  ```bash
  cp .env.example .env
  # Edit .env with your actual OPENAI_API_KEY and SERPAPI_KEY
  ```
- Run the FastAPI backend:
  ```bash
  cd backend
  uvicorn app.main:app --reload --port 8000
  ```
- Run the Flask frontend:
  ```bash
  cd flask_frontend
  flask run --port 5000
  ```
- We use ChromaDB with its SQLite3 backend for storage (a short usage sketch appears below).
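For the environment-variable step above, this is a minimal sketch of how the backend can load the keys with `python-dotenv`; the project's actual configuration code may differ.

```python
# settings sketch -- illustrative only; the project's actual configuration code may differ.
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file created with `cp .env.example .env`

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
SERPAPI_KEY = os.getenv("SERPAPI_KEY")

if not OPENAI_API_KEY or not SERPAPI_KEY:
    raise RuntimeError("Missing OPENAI_API_KEY or SERPAPI_KEY; check your .env file")
```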
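And a minimal sketch of how generated arguments could be stored and retrieved with ChromaDB, which persists to a local SQLite3-backed directory; the collection name and persistence path here are assumptions.

```python
# chroma sketch -- illustrative only; the collection name and path are assumed.
import chromadb

# PersistentClient keeps its data in a local SQLite-backed directory.
client = chromadb.PersistentClient(path="backend/chroma_db")
collection = client.get_or_create_collection("debate_arguments")

# Store one generated argument with a little metadata.
collection.add(
    ids=["round1-proponent"],
    documents=["Renewable energy reduces long-term costs."],
    metadatas=[{"round": 1, "speaker": "proponent"}],
)

# Later, retrieve the most similar previous arguments for context.
results = collection.query(query_texts=["energy costs"], n_results=3)
print(results["documents"])
```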
Made with ❤️ by DS Community SRM