Add make setup target #23

Open · wants to merge 4 commits into base: main
38 changes: 38 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,38 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Architecture

- Frontend: React application with TypeScript, Vite, and Tailwind CSS
- Backend: Python-based LangGraph agent leveraging AWS Bedrock foundation models for iterative research
- Agent Flow:
1. Uses AWS Bedrock Claude to generate search queries from user input
2. Performs web research via AWS search capabilities
3. Uses AWS Bedrock Claude to reflect on results and identify knowledge gaps
4. Uses AWS Bedrock Claude for iterative query refinement
5. Uses AWS Bedrock Claude to synthesize answers with citations
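
To make the flow concrete, here is a minimal sketch of one such Bedrock call, assuming the agent wraps the model with `ChatBedrock` from `langchain-aws`; the prompt and surrounding code are illustrative, not the repository's actual implementation:

```python
from langchain_aws import ChatBedrock

# Illustrative query-generation call; the model ID matches the default
# query_generator_model in backend/src/agent/configuration.py.
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-west-2",  # assumes Bedrock access is enabled in this region
)

response = llm.invoke(
    "Generate three web search queries for: latest advances in battery recycling"
)
print(response.content)
```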

## Development Commands

Initial Setup:
```bash
make setup # Install all dependencies for frontend and backend
```

Development:
```bash
make dev # Run both frontend and backend dev servers
npm run build # Build frontend for production (in frontend/)
npm run lint # Run frontend ESLint
ruff check . # Run backend linter (in backend/)
mypy . # Run backend type checker (in backend/)
```

## Environment Setup

Required environment variables:
- AWS_ACCESS_KEY_ID: AWS access key for Bedrock services
- AWS_SECRET_ACCESS_KEY: AWS secret access key for Bedrock services
- AWS_REGION: AWS region where Bedrock models are deployed (e.g., us-west-2)
- LANGSMITH_API_KEY: LangSmith API key (for production)
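
A quick way to verify these variables are picked up is to create a Bedrock client directly. This is a sanity-check sketch using `boto3` and `python-dotenv` (both backend dependencies); the snippet itself is not part of the repository:

```python
import os

import boto3
from dotenv import load_dotenv

load_dotenv()  # reads backend/.env into the environment

# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY automatically;
# the region is passed explicitly here.
bedrock = boto3.client("bedrock", region_name=os.environ["AWS_REGION"])

# Fails fast if credentials or region are wrong.
for summary in bedrock.list_foundation_models()["modelSummaries"][:5]:
    print(summary["modelId"])
```
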
12 changes: 9 additions & 3 deletions Makefile
@@ -1,4 +1,10 @@
.PHONY: help dev-frontend dev-backend dev
.PHONY: help setup dev-frontend dev-backend dev

setup:
@echo "Installing frontend dependencies..."
@cd frontend && npm install
@echo "Installing backend dependencies..."
@cd backend && uv sync

help:
@echo "Available commands:"
@@ -12,9 +18,9 @@ dev-frontend:

dev-backend:
@echo "Starting backend development server..."
@cd backend && langgraph dev
@cd backend && uv run langgraph dev

# Run frontend and backend concurrently
dev:
@echo "Starting both frontend and backend development servers..."
	@make dev-frontend & make dev-backend
80 changes: 38 additions & 42 deletions README.md
@@ -1,25 +1,25 @@
# Gemini Fullstack LangGraph Quickstart
# AWS Bedrock Fullstack LangGraph Quickstart

This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent is designed to perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search, reflecting on the results to identify knowledge gaps, and iteratively refining its search until it can provide a well-supported answer with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and Google's Gemini models.
This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent leverages AWS Bedrock foundation models to perform comprehensive research on user queries, dynamically generating search terms, performing web searches, and iteratively refining its research until providing well-supported answers with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and AWS Bedrock.

![Gemini Fullstack LangGraph](./app.png)
![AWS Bedrock Fullstack LangGraph](./app.png)

## Features

- 💬 Fullstack application with a React frontend and LangGraph backend.
- 🧠 Powered by a LangGraph agent for advanced research and conversational AI.
- 🔍 Dynamic search query generation using Google Gemini models.
- 🌐 Integrated web research via Google Search API.
- 🤔 Reflective reasoning to identify knowledge gaps and refine searches.
- 📄 Generates answers with citations from gathered sources.
- 🔄 Hot-reloading for both frontend and backend development during development.
- 💬 Fullstack application with a React frontend and LangGraph backend
- 🧠 Powered by AWS Bedrock foundation models for advanced conversational AI
- 🔍 Dynamic search query generation using AWS Bedrock Claude
- 🌐 Integrated web research capabilities powered by AWS
- 🤔 Reflective reasoning using AWS Bedrock models to identify knowledge gaps
- 📄 Answer synthesis with citations using AWS Bedrock
- 🔄 Hot-reloading for both frontend and backend development

## Project Structure

The project is divided into two main directories:

- `frontend/`: Contains the React application built with Vite.
- `backend/`: Contains the LangGraph/FastAPI application, including the research agent logic.
- `frontend/`: Contains the React application built with Vite
- `backend/`: Contains the LangGraph/FastAPI application leveraging AWS Bedrock

## Getting Started: Development and Local Testing

@@ -29,25 +29,21 @@ Follow these steps to get the application running locally for development and te

- Node.js and npm (or yarn/pnpm)
- Python 3.11+
- **`GEMINI_API_KEY`**: The backend agent requires a Google Gemini API key.
1. Navigate to the `backend/` directory.
2. Create a file named `.env` by copying the `backend/.env.example` file.
3. Open the `.env` file and add your Gemini API key: `GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"`
- uv (https://docs.astral.sh/uv/)
- **AWS Credentials:**
1. Navigate to the `backend/` directory
2. Create a file named `.env` by copying the `backend/.env.example` file
3. Add your AWS credentials:
```
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_region
```

**2. Install Dependencies:**

**Backend:**

```bash
cd backend
pip install .
```

**Frontend:**

```bash
cd frontend
npm install
make setup
```

**3. Run Development Servers:**
@@ -57,21 +53,21 @@
```bash
make dev
```
This will run the backend and frontend development servers. Open your browser and navigate to the frontend development server URL (e.g., `http://localhost:5173/app`).

_Alternatively, you can run the backend and frontend development servers separately. For the backend, open a terminal in the `backend/` directory and run `langgraph dev`. The backend API will be available at `http://127.0.0.1:2024`. It will also open a browser window to the LangGraph UI. For the frontend, open a terminal in the `frontend/` directory and run `npm run dev`. The frontend will be available at `http://localhost:5173`._

## How the Backend Agent Works (High-Level)

The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It follows these steps:
The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It leverages AWS Bedrock foundation models at each step:

![Agent Flow](./agent.png)

1. **Generate Initial Queries:** Based on your input, it generates a set of initial search queries using a Gemini model.
2. **Web Research:** For each query, it uses the Gemini model with the Google Search API to find relevant web pages.
3. **Reflection & Knowledge Gap Analysis:** The agent analyzes the search results to determine if the information is sufficient or if there are knowledge gaps. It uses a Gemini model for this reflection process.
4. **Iterative Refinement:** If gaps are found or the information is insufficient, it generates follow-up queries and repeats the web research and reflection steps (up to a configured maximum number of loops).
5. **Finalize Answer:** Once the research is deemed sufficient, the agent synthesizes the gathered information into a coherent answer, including citations from the web sources, using a Gemini model.
1. **Generate Initial Queries:** Uses AWS Bedrock Claude to analyze the input and generate targeted search queries
2. **Web Research:** Performs web searches using AWS capabilities to find relevant information
3. **Reflection & Knowledge Gap Analysis:** Uses AWS Bedrock Claude to analyze search results and identify knowledge gaps
4. **Iterative Refinement:** Generates follow-up queries using AWS Bedrock Claude and repeats research if needed
5. **Answer Synthesis:** Uses AWS Bedrock Claude to create a comprehensive answer with citations
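
The actual graph lives in `backend/src/agent/graph.py`; the sketch below only illustrates the loop structure, with hypothetical node names and stub bodies standing in for the real Bedrock and web-search calls:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class ResearchState(TypedDict):
    queries: list[str]
    results: list[str]
    loops: int
    answer: str


# Stub nodes; the real ones call AWS Bedrock Claude and web search.
def generate_queries(state: ResearchState) -> dict:
    return {"queries": ["example query"], "loops": 0}


def web_research(state: ResearchState) -> dict:
    return {"results": state["results"] + ["snippet"], "loops": state["loops"] + 1}


def reflect(state: ResearchState) -> dict:
    return {}


def finalize_answer(state: ResearchState) -> dict:
    return {"answer": "synthesized answer with citations"}


def should_continue(state: ResearchState) -> str:
    # Loop back until knowledge gaps are closed or the loop cap is reached.
    return "web_research" if state["loops"] < 2 else "finalize_answer"


builder = StateGraph(ResearchState)
builder.add_node("generate_queries", generate_queries)
builder.add_node("web_research", web_research)
builder.add_node("reflect", reflect)
builder.add_node("finalize_answer", finalize_answer)

builder.add_edge(START, "generate_queries")
builder.add_edge("generate_queries", "web_research")
builder.add_edge("web_research", "reflect")
builder.add_conditional_edges("reflect", should_continue)
builder.add_edge("finalize_answer", END)

graph = builder.compile()
print(graph.invoke({"queries": [], "results": [], "loops": 0, "answer": ""}))
```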

## Deployment

@@ -85,24 +81,24 @@ _Note: If you are not running the docker-compose.yml example or exposing the bac

Run the following command from the **project root directory**:
```bash
docker build -t gemini-fullstack-langgraph -f Dockerfile .
docker build -t aws-bedrock-fullstack-langgraph -f Dockerfile .
```
**2. Run the Production Server:**

```bash
GEMINI_API_KEY=<your_gemini_api_key> LANGSMITH_API_KEY=<your_langsmith_api_key> docker-compose up
AWS_ACCESS_KEY_ID=your_access_key AWS_SECRET_ACCESS_KEY=your_secret_key AWS_REGION=your_region LANGSMITH_API_KEY=your_langsmith_api_key docker-compose up
```

Open your browser and navigate to `http://localhost:8123/app/` to see the application. The API will be available at `http://localhost:8123`.

## Technologies Used

- [React](https://reactjs.org/) (with [Vite](https://vitejs.dev/)) - For the frontend user interface.
- [Tailwind CSS](https://tailwindcss.com/) - For styling.
- [Shadcn UI](https://ui.shadcn.com/) - For components.
- [LangGraph](https://github.com/langchain-ai/langgraph) - For building the backend research agent.
- [Google Gemini](https://ai.google.dev/models/gemini) - LLM for query generation, reflection, and answer synthesis.
- [React](https://reactjs.org/) (with [Vite](https://vitejs.dev/)) - For the frontend user interface
- [Tailwind CSS](https://tailwindcss.com/) - For styling
- [Shadcn UI](https://ui.shadcn.com/) - For components
- [LangGraph](https://github.com/langchain-ai/langgraph) - For building the backend research agent
- [AWS Bedrock](https://aws.amazon.com/bedrock/) - Foundation models for query generation, reflection, and answer synthesis

## License

This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
5 changes: 3 additions & 2 deletions backend/pyproject.toml
@@ -11,13 +11,14 @@ requires-python = ">=3.11,<4.0"
dependencies = [
"langgraph>=0.2.6",
"langchain>=0.3.19",
"langchain-google-genai",
"langchain-aws>=0.1.0",
"langchain-community>=0.0.24",
"python-dotenv>=1.0.1",
"langgraph-sdk>=0.1.57",
"langgraph-cli",
"langgraph-api",
"fastapi",
"google-genai",
"boto3>=1.34.0",
]


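The dependency swap above implies a matching import swap wherever the model is constructed. A before/after sketch (the package names are the published ones; the surrounding code is illustrative):

```python
# Before: Gemini via langchain-google-genai
# from langchain_google_genai import ChatGoogleGenerativeAI
# llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")

# After: Bedrock via langchain-aws
from langchain_aws import ChatBedrock

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
```
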
18 changes: 6 additions & 12 deletions backend/src/agent/configuration.py
@@ -9,24 +9,18 @@ class Configuration(BaseModel):
"""The configuration for the agent."""

query_generator_model: str = Field(
default="gemini-2.0-flash",
metadata={
"description": "The name of the language model to use for the agent's query generation."
},
default="anthropic.claude-3-sonnet-20240229-v1:0",
metadata={"description": "AWS Bedrock model for query generation."},
)

reflection_model: str = Field(
default="gemini-2.5-flash-preview-04-17",
metadata={
"description": "The name of the language model to use for the agent's reflection."
},
default="us.anthropic.claude-3-5-sonnet-20240620-v1:0",
metadata={"description": "AWS Bedrock model for reflection."},
)

answer_model: str = Field(
default="gemini-2.5-pro-preview-05-06",
metadata={
"description": "The name of the language model to use for the agent's answer."
},
default="us.anthropic.claude-3-5-haiku-20241022-v1:0",
metadata={"description": "AWS Bedrock model for answer generation."},
)

number_of_initial_queries: int = Field(
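
A sketch of how these per-stage defaults might be consumed downstream (the `Configuration` fields are real; the wiring is illustrative):

```python
from langchain_aws import ChatBedrock

# Assumes backend/src is on the path, as when running via `langgraph dev`.
from agent.configuration import Configuration

config = Configuration()

# Each pipeline stage can get its own Bedrock model, e.g. the reflection step:
reflection_llm = ChatBedrock(model_id=config.reflection_model)

print(config.query_generator_model)  # anthropic.claude-3-sonnet-20240229-v1:0
```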