AutoWerewolf

๐Ÿบ LLM-driven Werewolf Game Agents - A system where multiple LLM agents play the Werewolf game against each other.


Overview

AutoWerewolf implements the full 12-player Werewolf ruleset with LLM-powered agents. The system supports:

  • 12-player games with standard role compositions
  • Two role sets:
    • Set A: Seer, Witch, Hunter, Guard
    • Set B: Seer, Witch, Hunter, Village Idiot
  • Multiple model backends: HTTP API models (OpenAI, etc.) and local Ollama models
  • LangChain integration for all agent logic
  • Agent memory system for strategic reasoning and fact tracking
  • Werewolf coordination via shared memory or discussion chains
  • Performance profiles for optimized simulation speed
  • Comprehensive analytics for multi-game statistics
  • Web UI for interactive gameplay and observation
  • Human player mode - play alongside AI agents

Features

🎮 Core Game Engine

  • Complete 12-player Werewolf rules implementation
  • Night action resolution (Seer, Witch, Guard, Hunter, Werewolves)
  • Day phase with speeches, voting, and lynch resolution
  • Sheriff election and badge passing/tearing mechanics
  • Configurable rule variants (witch self-heal, guard rules, win conditions)
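To make the night-resolution step above concrete, here is a minimal sketch of how the standard interactions can play out, assuming common rules where the guard's protection and the witch's antidote each independently cancel the werewolf kill. The function name and signature are illustrative, not the project's actual engine API, and rule variants (e.g. guard + heal on the same target) are ignored:

```python
# Hypothetical sketch of night resolution under common Werewolf rules
# (not AutoWerewolf's actual API).
def resolve_night(wolf_target, guard_target=None,
                  witch_heal_target=None, witch_poison_target=None):
    """Return the set of player ids who die this night."""
    deaths = set()
    if wolf_target is not None:
        protected = wolf_target == guard_target       # guard blocked the kill
        healed = wolf_target == witch_heal_target     # witch used the antidote
        if not (protected or healed):
            deaths.add(wolf_target)
    if witch_poison_target is not None:
        deaths.add(witch_poison_target)               # poison is unconditional
    return deaths

print(resolve_night(wolf_target=3, guard_target=3))         # guard saves player 3
print(resolve_night(wolf_target=3, witch_poison_target=7))  # two deaths
```

Variants like "guarded and healed on the same night still dies" would need an extra branch here, which is presumably what the configurable rule variants control.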

🤖 LLM-Powered Agents

  • Role-specific agents: Werewolf, Villager, Seer, Witch, Hunter, Guard, Village Idiot
  • LangChain-based chains with structured output parsing
  • Per-agent memory system (conversation + game facts)
  • Werewolf camp coordination (shared memory or multi-agent discussion)
  • Output corrector for improved response quality
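As an illustration of how structured output parsing plus a correction fallback can work (the function and schema names here are hypothetical, not the project's actual chains), an agent's reply is expected to be JSON, and a corrector pass salvages a JSON object from chatty model output when strict parsing fails:

```python
import json
import re

# Illustrative sketch, not AutoWerewolf's actual parser: expect a JSON
# object like {"target": 5, "reason": "..."} from the agent.
def parse_vote(raw_reply: str) -> dict:
    try:
        return json.loads(raw_reply)
    except json.JSONDecodeError:
        # "Output corrector" fallback: extract the first JSON object
        # embedded in surrounding prose, then retry.
        match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise

print(parse_vote('{"target": 5, "reason": "inconsistent claims"}'))
print(parse_vote('Sure! Here is my vote: {"target": 5}'))
```

In the real system this role is presumably played by LangChain's structured output parsers plus the output_corrector module, which may also re-prompt the model rather than just regex-salvage.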

๐ŸŒ Web Interface

  • Real-time game observation via WebSocket
  • Human player participation mode
  • Interactive game creation and configuration
  • Multi-language support (i18n)
  • Responsive UI design

⚡ Performance & Optimization

  • Model profiles: fast_local, balanced, cloud_strong
  • Performance presets: minimal, standard, fast, simulation
  • Batch execution for parallel agent calls
  • Configurable verbosity and narration levels
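Batch execution matters because LLM calls are I/O-bound: a day phase where every living player speaks can issue all requests concurrently instead of serially. A minimal sketch of the idea, with a stand-in call_agent instead of a real model backend (names are hypothetical, not the project's batch.py API):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a network round-trip to an LLM backend.
def call_agent(player_id: int) -> str:
    return f"player {player_id} speech"

# Run one call per player on a thread pool; results come back in input order.
def run_batch(player_ids):
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(call_agent, player_ids))

print(run_batch([1, 2, 3]))
```

With real backends the same shape applies, though an async client would be the other common design choice.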

📊 Logging & Analysis

  • Structured game logs (JSON persistence)
  • Game replay and analysis tools
  • Multi-game statistics and win rate analysis
  • Timeline visualization
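Because game logs persist as JSON, multi-game statistics reduce to a fold over log files. A sketch of win-rate aggregation, assuming a top-level "winning_team" field in each log (the schema is an assumption, not the project's documented format):

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical win-rate aggregation over a directory of JSON game logs.
def win_rates(log_dir: str) -> dict:
    wins = Counter()
    total = 0
    for path in Path(log_dir).glob("*.json"):
        record = json.loads(path.read_text())
        wins[record["winning_team"]] += 1   # assumed log field
        total += 1
    return {team: count / total for team, count in wins.items()}
```

The bundled analyze command presumably computes this and more (e.g. per-role survival, vote accuracy) from the same logs.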

Installation

# Basic installation (rules engine only)
pip install -e .

# With LLM support
pip install -e ".[llm]"

# With CLI support
pip install -e ".[cli]"

# With Web UI support
pip install -e ".[web]"

# Full installation (all features + development)
pip install -e ".[all]"

Using uv (Recommended)

# Install with uv
uv pip install -e ".[all]"

# Or use uv sync
uv sync

Quick Start

Using the CLI

# Run a single game with Ollama
autowerewolf run-game --backend ollama --model llama3

# Run a game with a specific role set
autowerewolf run-game --role-set B --seed 42

# Use a performance profile
autowerewolf run-game --profile fast_local

# Run multiple simulations
autowerewolf simulate 10 --backend ollama --model llama3 --fast

# Analyze saved game logs
autowerewolf analyze ./game_logs/

# Replay a specific game
autowerewolf replay ./game_logs/game_0001.json --timeline

Using the Web UI

# Start the web server
autowerewolf serve --host 0.0.0.0 --port 8000

# Specify custom config file paths
autowerewolf serve --model-config ./my_models.yaml --game-config ./my_game.yaml

# Then open http://localhost:8000 in your browser

The Web UI supports:

  • 🎭 Watch Mode: Observe AI agents play against each other
  • 🎮 Play Mode: Join the game as a human player alongside AI agents
  • ⚙️ Configuration: Customize game rules, model settings, and more
  • 📜 Real-time Updates: Watch the game unfold via WebSocket
  • 📁 Auto-load Configs: Automatically loads default values from config files

Using the Python API

from autowerewolf.engine import (
    create_game_state,
    GameConfig,
    RoleSet,
    resolve_night_actions,
    check_win_condition,
)

# Create a game with role set A
config = GameConfig(role_set=RoleSet.A, random_seed=42)
state = create_game_state(config)

# Game state contains 12 players with assigned roles
for player in state.players:
    print(f"{player.name}: {player.role.value}")

Running Full Games with LLM Agents

from autowerewolf.orchestrator.game_orchestrator import GameOrchestrator
from autowerewolf.engine.state import GameConfig
from autowerewolf.engine.roles import RoleSet
from autowerewolf.config.models import AgentModelConfig, ModelConfig, ModelBackend

# Configure the game
game_config = GameConfig(role_set=RoleSet.A, random_seed=42)

# Configure the model
model_config = AgentModelConfig(
    default=ModelConfig(
        backend=ModelBackend.OLLAMA,
        model_name="llama3",
        temperature=0.7,
    )
)

# Create and run the orchestrator
orchestrator = GameOrchestrator(
    config=game_config,
    agent_models=model_config,
)
result = orchestrator.run_game()

print(f"Winner: {result.winning_team.value}")

CLI Commands

Command        Description
run-game       Run a single Werewolf game with LLM agents
simulate N     Run N games and collect statistics
replay <log>   Replay and analyze a saved game log
analyze <dir>  Analyze multiple game logs for aggregate statistics
serve          Start the web server for browser-based gameplay

Common Options

Option         Description
--backend      Model backend: ollama or api
--model        Model name (e.g., llama3, gpt-4)
--role-set     Role set: A (Guard) or B (Village Idiot)
--seed         Random seed for reproducibility
--profile      Model profile: fast_local, balanced, cloud_strong
--performance  Performance preset: minimal, standard, fast, simulation
--output       Output file/directory for game logs

Documentation

Project Structure

autowerewolf/
├── autowerewolf/
│   ├── config/              # Configuration models
│   │   ├── models.py        # Model and agent configuration
│   │   ├── game_rules.py    # Game rules configuration
│   │   └── performance.py   # Performance profiles and presets
│   ├── engine/              # Game rules and state
│   │   ├── roles.py         # Role enums and constants
│   │   ├── state.py         # Pydantic models for game state
│   │   └── rules.py         # Core game logic
│   ├── agents/              # LangChain-based agents
│   │   ├── backend.py       # Model backend abstraction
│   │   ├── batch.py         # Batch execution for parallel calls
│   │   ├── memory.py        # Agent memory management
│   │   ├── moderator.py     # Moderator chain for narration
│   │   ├── player_base.py   # Base player agent class
│   │   ├── human.py         # Human player agent
│   │   ├── output_corrector.py  # Output correction for LLM responses
│   │   ├── prompts.py       # Prompt templates
│   │   ├── schemas.py       # Pydantic output schemas
│   │   └── roles/           # Role-specific agents
│   │       ├── werewolf.py
│   │       ├── villager.py
│   │       ├── seer.py
│   │       ├── witch.py
│   │       ├── hunter.py
│   │       ├── guard.py
│   │       └── village_idiot.py
│   ├── orchestrator/        # Game loop management
│   │   └── game_orchestrator.py
│   ├── io/                  # Logging and persistence
│   │   ├── logging.py       # Structured logging
│   │   ├── persistence.py   # Game log save/load
│   │   └── analysis.py      # Statistics and analysis
│   ├── web/                 # Web interface
│   │   ├── server.py        # FastAPI server
│   │   ├── session.py       # Game session management
│   │   ├── schemas.py       # Web API schemas
│   │   ├── i18n.py          # Internationalization
│   │   ├── templates/       # HTML templates
│   │   └── static/          # CSS/JS assets
│   └── cli/                 # Command-line interface
│       └── main.py
├── tests/                   # Unit tests
├── tools/                   # Utility tools
│   └── game_replay.py       # Game replay tool
├── docs/                    # Documentation
├── logs/                    # Game logs directory
├── autowerewolf_config.yaml         # Game configuration
├── autowerewolf_models_example.yaml # Model configuration example
└── pyproject.toml           # Project configuration

Configuration

AutoWerewolf uses YAML configuration files for game rules and model settings:

  • autowerewolf_config.yaml - Game rules and variants
  • autowerewolf_models.yaml - Model backend configuration (copy from autowerewolf_models_example.yaml)

Web UI Auto-load Configuration

When started with autowerewolf serve, the web server automatically searches for configuration files in the following locations:

Model config search order:

  1. autowerewolf_models.yaml
  2. autowerewolf_models.yml
  3. config/models.yaml
  4. config/models.yml

Game config search order:

  1. autowerewolf_config.yaml
  2. autowerewolf_config.yml
  3. config/game.yaml
  4. config/game.yml

You can also specify custom paths:

autowerewolf serve --model-config /path/to/models.yaml --game-config /path/to/game.yaml

Example Model Configuration

# autowerewolf_models.yaml
default:
  backend: "ollama"
  model_name: "llama3"
  temperature: 0.7
  max_tokens: 1024
  ollama_base_url: "http://localhost:11434"
  
# Output corrector configuration
output_corrector:
  enabled: true
  max_retries: 2
  # Optional: use a separate model for correction
  # model_config_override:
  #   backend: "api"
  #   model_name: "gpt-4o-mini"
  #   api_base: "https://api.openai.com/v1"
  #   api_key: "your-api-key"
  
# Optional: Role-specific model overrides
# werewolf:
#   backend: "api"
#   model_name: "gpt-4"
#   api_base: "https://api.openai.com/v1"
#   api_key: "your-api-key"

Development

Running Tests

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run tests with coverage
pytest --cov=autowerewolf

Code Quality

# Format code
black autowerewolf tests

# Lint
ruff check autowerewolf tests

# Type check
mypy autowerewolf

Requirements

  • Python 3.10+
  • For LLM features: LangChain, LangGraph
  • For local models: Ollama installed with models pulled
  • For Web UI: FastAPI, uvicorn, WebSockets

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT License
