A minimal Python web app that provides a mobile-friendly chat interface backed by Google Gemini. Built with Flask and vanilla HTML/CSS/JS with streaming responses.
- macOS, Linux, or Windows
- Python 3.9+
- A Google AI Studio API key with access to a Gemini chat-capable model (e.g., `gemini-2.0-flash`)
- Install Apple Command Line Tools (required by Homebrew):

  ```shell
  xcode-select --install
  ```

  If they are already installed, you'll see a message indicating so. You can verify with `xcode-select -p`, which should print a path like `/Library/Developer/CommandLineTools`.
- Install Homebrew if you do not already have it.
- Verify whether Python is already available:

  ```shell
  python3 --version
  pip3 --version
  ```

- If either command fails, install Python (which includes pip) via Homebrew:

  ```shell
  brew update
  brew install python
  ```

- Open a new terminal (or reload your shell) and confirm the installation again:

  ```shell
  python3 --version
  pip3 --version
  ```
- Clone the repository:

  ```shell
  git clone https://github.com/mcough2/chatbox-example.git chatbot-example
  cd chatbot-example
  ```

- Create and activate a virtual environment (recommended):

  ```shell
  python3 -m venv .venv
  source .venv/bin/activate
  ```

- Install project dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Create a Google AI Studio API key:
  - Visit https://aistudio.google.com/app/apikey
  - Generate a key and copy it (you can revoke or rotate later).
- Add your Gemini API key:

  ```shell
  cp .env.example .env
  ```

  - Open the new `.env` file in your editor and replace `your-gemini-api-key` with the key you created earlier. On macOS you can run `open .env` to edit it with TextEdit from the terminal.
  - Save the changes after updating the file.
  - Leave `GEMINI_MODEL` as `gemini-2.0-flash` unless you've enabled and prefer another Gemini model. The included `python-dotenv` dependency loads this file automatically on startup.
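To make the `.env` step concrete, here is a minimal stdlib-only sketch of roughly what `python-dotenv` does at startup. The file contents below mirror the assumed `.env.example`; the key value is the placeholder, not a real key.

```python
# Example .env contents; the key value is a placeholder, not a real key.
env_text = """\
GEMINI_API_KEY=your-gemini-api-key
GEMINI_MODEL=gemini-2.0-flash
"""

# A tiny stand-in for python-dotenv's load_dotenv(), which parses KEY=VALUE
# lines like these and exports them into the process environment at startup.
env = {}
for line in env_text.splitlines():
    line = line.strip()
    if line and not line.startswith("#") and "=" in line:
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()

print(env["GEMINI_MODEL"])  # -> gemini-2.0-flash
```

The real `load_dotenv()` places these values in `os.environ`, which is why the app can read the key without you exporting it manually.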
- Run the development server:

  ```shell
  flask --app app run
  # or: python app.py  (inside the virtualenv, `python` points to Python 3)
  ```

- Open the app in your browser:
  - Navigate to http://127.0.0.1:5000
  - Type a message and watch Gemini stream back its response live.
- `app.py` exposes two routes: `/` serves the front-end template, and `/api/chat` relays chat requests to Gemini, streaming generation chunks back to the browser as NDJSON.
- The front-end (vanilla JS + CSS) sends the full conversation history on each request so the backend can preserve context, and it updates the UI incrementally as streaming chunks arrive.
- Environment variables keep credentials out of source control and make it easy to run the app on another machine: just add your own key.
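The NDJSON relay described above can be sketched roughly as follows. The `{"text": ...}` field name and the chunk values are illustrative assumptions, not the app's actual wire format:

```python
import json

def stream_ndjson(chunks):
    # Frame each generation chunk as one JSON object per line (NDJSON),
    # so the browser can parse every line as soon as it arrives.
    for text in chunks:
        yield json.dumps({"text": text}) + "\n"

# Simulate three chunks streamed back from the model.
body = "".join(stream_ndjson(["Hello", ", ", "world!"]))
lines = [json.loads(line) for line in body.splitlines()]
print(lines)
```

In the real app, a generator like this would be wrapped in a Flask `Response` so each chunk flushes to the client as Gemini produces it.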
- Update `static/styles.css` to tweak the look and feel.
- Swap `GEMINI_MODEL` to any chat-capable Gemini model you have access to.
- Adjust `static/chat.js` if you prefer a different streaming protocol (e.g., Server-Sent Events or WebSockets).
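Swapping models is purely a configuration change; a sketch of how a backend would typically pick it up (the fallback-to-default pattern here is an assumption, not necessarily what `app.py` does):

```python
import os

# Read the configured model, falling back to the README default when the
# variable is absent; edit GEMINI_MODEL in .env to swap models.
model = os.getenv("GEMINI_MODEL") or "gemini-2.0-flash"
print(model)
```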
- If you see `GEMINI_API_KEY is not set`, double-check that the variable is exported in the same shell where you run Flask, or use a `.env` file.
- If you hit `ModuleNotFoundError` for `flask` or `google.generativeai`, install the dependencies with `pip install -r requirements.txt`.
- Errors returned from Gemini surface in the chat window, which is handy for diagnosing quota limits or auth issues.
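When debugging the missing-key error, a quick sketch like this, run in the same shell and virtualenv you start Flask from, shows whether the variable is actually visible to a Python process (the printed messages are illustrative, not the app's exact output):

```python
import os

# Report whether the key is visible to this process. The app fails with a
# similar "GEMINI_API_KEY is not set" error when it cannot find the key.
api_key = os.getenv("GEMINI_API_KEY")
status = "set" if api_key else "missing"
print(f"GEMINI_API_KEY is {status}")
if status == "missing":
    print("Export the variable in this shell or add it to .env")
```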