# 🐺 AutoWerewolf

LLM-driven Werewolf game agents: a system where multiple LLM agents play the Werewolf game against each other.

AutoWerewolf implements the full 12-player Werewolf rules with LLM-powered agents. The system supports:

- 12-player games with standard role compositions
- Two role sets:
  - Set A: Seer, Witch, Hunter, Guard
  - Set B: Seer, Witch, Hunter, Village Idiot
- Multiple model backends: HTTP API models (OpenAI, etc.) and local Ollama models
- LangChain integration for all agent logic
- Agent memory system for strategic reasoning and fact tracking
- Werewolf coordination via shared memory or discussion chains
- Performance profiles for optimized simulation speed
- Comprehensive analytics for multi-game statistics
- Web UI for interactive gameplay and observation
- Human player mode - play alongside AI agents

## Features

### Game Engine

- Complete 12-player Werewolf rules implementation
- Night action resolution (Seer, Witch, Guard, Hunter, Werewolves)
- Day phase with speeches, voting, and lynch resolution
- Sheriff election and badge passing/tearing mechanics
- Configurable rule variants (witch self-heal, guard rules, win conditions)

### Agents

- Role-specific agents: Werewolf, Villager, Seer, Witch, Hunter, Guard, Village Idiot
- LangChain-based chains with structured output parsing
- Per-agent memory system (conversation + game facts)
- Werewolf camp coordination (shared memory or multi-agent discussion)
- Output corrector for improved response quality

### Web UI

- Real-time game observation via WebSocket
- Human player participation mode
- Interactive game creation and configuration
- Multi-language support (i18n)
- Responsive UI design

### Performance

- Model profiles: `fast_local`, `balanced`, `cloud_strong`
- Performance presets: `minimal`, `standard`, `fast`, `simulation`
- Batch execution for parallel agent calls
- Configurable verbosity and narration levels

### Logging & Analysis

- Structured game logs (JSON persistence)
- Game replay and analysis tools
- Multi-game statistics and win rate analysis
- Timeline visualization

## Installation

```bash
# Basic installation (rules engine only)
pip install -e .

# With LLM support
pip install -e ".[llm]"

# With CLI support
pip install -e ".[cli]"

# With Web UI support
pip install -e ".[web]"

# Full installation (all features + development)
pip install -e ".[all]"

# Install with uv
uv pip install -e ".[all]"

# Or use uv sync
uv sync
```

## Quick Start

### CLI

```bash
# Run a single game with Ollama
autowerewolf run-game --backend ollama --model llama3
# Run a game with a specific role set
autowerewolf run-game --role-set B --seed 42
# Use a performance profile
autowerewolf run-game --profile fast_local
# Run multiple simulations
autowerewolf simulate 10 --backend ollama --model llama3 --fast
# Analyze saved game logs
autowerewolf analyze ./game_logs/
# Replay a specific game
autowerewolf replay ./game_logs/game_0001.json --timeline
```

### Web UI

```bash
# Start the web server
autowerewolf serve --host 0.0.0.0 --port 8000
# Specify custom config file paths
autowerewolf serve --model-config ./my_models.yaml --game-config ./my_game.yaml
# Then open http://localhost:8000 in your browser
```

The Web UI supports:

- 🎭 Watch Mode: Observe AI agents play against each other
- 🎮 Play Mode: Join the game as a human player alongside AI agents
- ⚙️ Configuration: Customize game rules, model settings, and more
- 📡 Real-time Updates: Watch the game unfold via WebSocket
- 📂 Auto-load Configs: Automatically loads default values from config files
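
The same WebSocket stream the browser uses can also be consumed from a script. The sketch below is illustrative only: the endpoint path and message format are assumptions, so check `autowerewolf/web/server.py` for the actual routes and `autowerewolf/web/schemas.py` for the payload schemas.

```python
# Illustrative WebSocket observer; requires `pip install websockets`.
import asyncio
import json

import websockets


async def watch(game_id: str) -> None:
    # Hypothetical route; see autowerewolf/web/server.py for the real path.
    uri = f"ws://localhost:8000/ws/games/{game_id}"
    async with websockets.connect(uri) as ws:
        async for message in ws:
            event = json.loads(message)  # assumes JSON-encoded game events
            print(event)


if __name__ == "__main__":
    asyncio.run(watch("demo"))
```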

### Python API

#### Using the Rules Engine

```python
from autowerewolf.engine import (
    create_game_state,
    GameConfig,
    RoleSet,
    resolve_night_actions,
    check_win_condition,
)

# Create a game with role set A
config = GameConfig(role_set=RoleSet.A, random_seed=42)
state = create_game_state(config)

# Game state contains 12 players with assigned roles
for player in state.players:
    print(f"{player.name}: {player.role.value}")
```
from autowerewolf.engine.state import GameConfig
from autowerewolf.engine.roles import RoleSet
from autowerewolf.config.models import AgentModelConfig, ModelConfig, ModelBackend
# Configure the game
game_config = GameConfig(role_set=RoleSet.A, random_seed=42)
# Configure the model
model_config = AgentModelConfig(
default=ModelConfig(
backend=ModelBackend.OLLAMA,
model_name="llama3",
temperature=0.7,
)
)
# Create and run the orchestrator
orchestrator = GameOrchestrator(
config=game_config,
agent_models=model_config,
)
result = orchestrator.run_game()
print(f"Winner: {result.winning_team.value}")| Command | Description |

## CLI Reference

| Command | Description |
|---|---|
| `run-game` | Run a single Werewolf game with LLM agents |
| `simulate N` | Run N games and collect statistics |
| `replay <log>` | Replay and analyze a saved game log |
| `analyze <dir>` | Analyze multiple game logs for aggregate statistics |
| `serve` | Start the web server for browser-based gameplay |

### Common Options

| Option | Description |
|---|---|
| `--backend` | Model backend: `ollama` or `api` |
| `--model` | Model name (e.g., `llama3`, `gpt-4`) |
| `--role-set` | Role set: `A` (Guard) or `B` (Village Idiot) |
| `--seed` | Random seed for reproducibility |
| `--profile` | Model profile: `fast_local`, `balanced`, `cloud_strong` |
| `--performance` | Performance preset: `minimal`, `standard`, `fast`, `simulation` |
| `--output` | Output file/directory for game logs |
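
Since game logs are persisted as plain JSON, they can also be inspected directly without the CLI. A minimal sketch; the top-level field names depend on the log schema defined in `autowerewolf/io/persistence.py`.

```python
import json
from pathlib import Path

# Load a saved game log, e.g. one written by `autowerewolf run-game --output ...`
log_path = Path("./game_logs/game_0001.json")
log = json.loads(log_path.read_text())

# Inspect the top-level structure; exact keys depend on the log schema.
print(sorted(log.keys()))
```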

## Project Structure

```
autowerewolf/
├── autowerewolf/
│   ├── config/                       # Configuration models
│   │   ├── models.py                 # Model and agent configuration
│   │   ├── game_rules.py             # Game rules configuration
│   │   └── performance.py            # Performance profiles and presets
│   ├── engine/                       # Game rules and state
│   │   ├── roles.py                  # Role enums and constants
│   │   ├── state.py                  # Pydantic models for game state
│   │   └── rules.py                  # Core game logic
│   ├── agents/                       # LangChain-based agents
│   │   ├── backend.py                # Model backend abstraction
│   │   ├── batch.py                  # Batch execution for parallel calls
│   │   ├── memory.py                 # Agent memory management
│   │   ├── moderator.py              # Moderator chain for narration
│   │   ├── player_base.py            # Base player agent class
│   │   ├── human.py                  # Human player agent
│   │   ├── output_corrector.py       # Output correction for LLM responses
│   │   ├── prompts.py                # Prompt templates
│   │   ├── schemas.py                # Pydantic output schemas
│   │   └── roles/                    # Role-specific agents
│   │       ├── werewolf.py
│   │       ├── villager.py
│   │       ├── seer.py
│   │       ├── witch.py
│   │       ├── hunter.py
│   │       ├── guard.py
│   │       └── village_idiot.py
│   ├── orchestrator/                 # Game loop management
│   │   └── game_orchestrator.py
│   ├── io/                           # Logging and persistence
│   │   ├── logging.py                # Structured logging
│   │   ├── persistence.py            # Game log save/load
│   │   └── analysis.py               # Statistics and analysis
│   ├── web/                          # Web interface
│   │   ├── server.py                 # FastAPI server
│   │   ├── session.py                # Game session management
│   │   ├── schemas.py                # Web API schemas
│   │   ├── i18n.py                   # Internationalization
│   │   ├── templates/                # HTML templates
│   │   └── static/                   # CSS/JS assets
│   └── cli/                          # Command-line interface
│       └── main.py
├── tests/                            # Unit tests
├── tools/                            # Utility tools
│   └── game_replay.py                # Game replay tool
├── docs/                             # Documentation
├── logs/                             # Game logs directory
├── autowerewolf_config.yaml          # Game configuration
├── autowerewolf_models_example.yaml  # Model configuration example
└── pyproject.toml                    # Project configuration
```

## Configuration

AutoWerewolf uses YAML configuration files for game rules and model settings:

- `autowerewolf_config.yaml` - Game rules and variants
- `autowerewolf_models.yaml` - Model backend configuration (copy from `autowerewolf_models_example.yaml`)

When started via `autowerewolf serve`, the web server automatically searches for configuration files.

Model config search order:

1. `autowerewolf_models.yaml`
2. `autowerewolf_models.yml`
3. `config/models.yaml`
4. `config/models.yml`

Game config search order:

1. `autowerewolf_config.yaml`
2. `autowerewolf_config.yml`
3. `config/game.yaml`
4. `config/game.yml`

You can also specify custom paths:

```bash
autowerewolf serve --model-config /path/to/models.yaml --game-config /path/to/game.yaml
```

### Model Configuration Example

```yaml
# autowerewolf_models.yaml
default:
  backend: "ollama"
  model_name: "llama3"
  temperature: 0.7
  max_tokens: 1024
  ollama_base_url: "http://localhost:11434"

# Output corrector configuration
output_corrector:
  enabled: true
  max_retries: 2
  # Optional: use a separate model for correction
  # model_config_override:
  #   backend: "api"
  #   model_name: "gpt-4o-mini"
  #   api_base: "https://api.openai.com/v1"
  #   api_key: "your-api-key"

# Optional: Role-specific model overrides
# werewolf:
#   backend: "api"
#   model_name: "gpt-4"
#   api_base: "https://api.openai.com/v1"
#   api_key: "your-api-key"
```
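
The YAML above maps onto the same config models used in the Python API. A minimal loading sketch, assuming `AgentModelConfig` is a Pydantic model whose fields mirror the YAML layout (see `autowerewolf/config/models.py` for the actual definition):

```python
import yaml  # pip install pyyaml

from autowerewolf.config.models import AgentModelConfig

# Parse the YAML and validate it against the config model.
with open("autowerewolf_models.yaml") as f:
    raw = yaml.safe_load(f)

agent_models = AgentModelConfig(**raw)  # assumes fields mirror the YAML keys
print(agent_models.default.model_name)
```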
pip install -e ".[dev]"
# Run tests
pytest
# Run tests with coverage
pytest --cov=autowerewolf
```

```bash
# Format code
black autowerewolf tests
# Lint
ruff check autowerewolf tests
# Type check
mypy autowerewolf
```

## Requirements

- Python 3.10+
- For LLM features: LangChain, LangGraph
- For local models: Ollama installed with models pulled
- For Web UI: FastAPI, uvicorn, WebSockets

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

MIT License