🧠 LLM Application – A modular Java backend for interacting with Large Language Models using Clean Architecture, Spring Boot, and WebFlux. Offers a REST API with mock and OpenAI providers, input validation, health checks, and Swagger docs for quick integration and testing.


LLM Application

A Spring Boot application for interacting with Large Language Models using Clean Architecture principles.

Overview

This LLM application provides a REST API for generating text using different providers (mock for development, OpenAI for production). Built with Java 17, Spring Boot 3.5.4, and Clean Architecture patterns.

Features

  • πŸ—οΈ Clean Architecture - Separated layers (Domain, Application, Infrastructure)
  • πŸ”Œ Multiple LLM Providers - Mock and OpenAI adapters with conditional loading
  • πŸ”’ Secure Configuration - Environment variables with .env file support
  • πŸ“Š Health Monitoring - Actuator endpoints for monitoring
  • πŸ“š API Documentation - Swagger/OpenAPI 3.0 with interactive UI
  • βœ… Input Validation - Request validation with comprehensive error handling
  • πŸ”„ Reactive HTTP Client - WebFlux WebClient for external API calls

Tech Stack

  • Java 17 + Spring Boot 3.5.4 (Web, WebFlux, Actuator, Validation)
  • OpenAPI 3.0 (SpringDoc) + Lombok + dotenv-java + Maven

Architecture

src/main/java/edu/study/llm_application/
├── domain/                # Domain Layer (Business Logic)
│   ├── entities/          # Domain Entities
│   ├── ports/             # Interfaces (in/out)
│   └── usecases/          # Business Logic Implementation
├── application/           # Application Layer (API)
│   ├── controllers/       # REST Controllers
│   ├── dtos/              # Data Transfer Objects
│   └── mappers/           # DTO ↔ Domain Mapping
└── infrastructure/        # Infrastructure Layer
    ├── adapters/          # External Service Adapters
    └── config/            # Configuration Classes
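The hexagonal layout above centers on an outbound port that the domain layer owns and the infrastructure adapters implement. The interface and class names below follow the folder tree, but the exact signatures are assumptions, sketched here as plain Java:

```java
// Hypothetical sketch of the domain port and its mock adapter; the real
// interface in domain/ports may use different method names and signatures.
public class PortSketch {

    /** Outbound port the use cases depend on (domain/ports). */
    interface LlmProviderPort {
        String generate(String prompt, String model);
    }

    /** Infrastructure adapter selected when OPENAI_MOCK_ENABLED=true. */
    static class MockLlmProvider implements LlmProviderPort {
        @Override
        public String generate(String prompt, String model) {
            // Canned response so the API can be exercised without an API key
            return "[mock:" + model + "] response for: " + prompt;
        }
    }

    public static void main(String[] args) {
        LlmProviderPort provider = new MockLlmProvider();
        System.out.println(provider.generate("Hello", "gpt-3.5-turbo"));
    }
}
```

Because the use cases only see `LlmProviderPort`, swapping the mock for the OpenAI adapter requires no change to the domain layer.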

Quick Start

Prerequisites

  • Java 17+ and Maven 3.6+

Setup

  1. Clone and configure:

git clone https://github.com/AnderssonProgramming/llm-application.git
cd llm-application
cp .env.example .env

  2. Edit the .env file with your configuration:

# For development (free, no API key needed)
OPENAI_MOCK_ENABLED=true

# For production (requires OpenAI API key)
OPENAI_MOCK_ENABLED=false
OPENAI_API_KEY=sk-your-actual-api-key-here
OPENAI_API_URL=https://api.openai.com/v1
OPENAI_TIMEOUT_SECONDS=30

  3. Run the application:

./mvnw spring-boot:run

The application starts on http://localhost:8081

Configuration

Variable                 Default                      Description
OPENAI_MOCK_ENABLED      true                         false = real OpenAI API, true = mock responses
OPENAI_API_KEY           your-openai-api-key-here     Your OpenAI API key (required when mock=false)
OPENAI_API_URL           https://api.openai.com/v1    OpenAI API base URL
OPENAI_TIMEOUT_SECONDS   30                           Request timeout in seconds
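The defaults in the table can be illustrated with a small resolution helper. The real project loads these values through dotenv-java and Spring's configuration mechanism; this dependency-free sketch only shows the fallback behavior:

```java
// Sketch of environment-variable resolution with the defaults from the
// table above; the production code wires these via dotenv-java + Spring.
public class LlmConfigSketch {

    /** Returns the environment value, or the fallback when unset/blank. */
    static String env(String key, String fallback) {
        String v = System.getenv(key);
        return (v == null || v.isBlank()) ? fallback : v;
    }

    public static void main(String[] args) {
        boolean mockEnabled = Boolean.parseBoolean(env("OPENAI_MOCK_ENABLED", "true"));
        String apiUrl       = env("OPENAI_API_URL", "https://api.openai.com/v1");
        int timeoutSeconds  = Integer.parseInt(env("OPENAI_TIMEOUT_SECONDS", "30"));
        System.out.println(mockEnabled + " " + apiUrl + " " + timeoutSeconds);
    }
}
```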

πŸ”’ Security: Never commit your .env file. It's already in .gitignore.

API Endpoints

Generate Text

POST /api/v1/llm/generate
Content-Type: application/json

{
  "prompt": "Explain artificial intelligence",
  "model": "gpt-3.5-turbo",
  "max_tokens": 150,
  "temperature": 0.7
}
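On the application side, that JSON body maps to a request DTO in application/dtos. The real DTO presumably carries Bean Validation annotations (the README advertises request validation); this sketch mirrors the JSON field names but stands in for the annotations with a manual check, and the accepted temperature range is an assumption:

```java
// Hypothetical request DTO for POST /api/v1/llm/generate; field names
// mirror the JSON body above, the validation rules are assumptions.
public class GenerateRequestSketch {

    record GenerateRequest(String prompt, String model,
                           Integer maxTokens, Double temperature) {

        /** Manual stand-in for @NotBlank / range validation annotations. */
        boolean isValid() {
            return prompt != null && !prompt.isBlank()
                    && (temperature == null
                        || (temperature >= 0.0 && temperature <= 2.0));
        }
    }

    public static void main(String[] args) {
        var req = new GenerateRequest("Explain artificial intelligence",
                "gpt-3.5-turbo", 150, 0.7);
        System.out.println(req.isValid());
    }
}
```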

Other Endpoints

  • GET /api/v1/llm/models - List available models
  • GET /api/v1/llm/health - Provider health check

Development

Switching Between Providers

Mock Provider (default, free):

# In .env file
OPENAI_MOCK_ENABLED=true

OpenAI Provider (requires API key):

# In .env file
OPENAI_MOCK_ENABLED=false
OPENAI_API_KEY=sk-your-actual-api-key-here

Get your API key from the OpenAI Platform. Note: real API usage requires billing setup.

Testing

./mvnw test

Adding New Providers

  1. Implement LlmProviderPort interface
  2. Add implementation in infrastructure/adapters/llm/
  3. Configure with @ConditionalOnProperty
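In the real application, Spring's @ConditionalOnProperty decides at startup which adapter bean gets wired behind the port. The same selection logic can be illustrated without Spring; the class and method names below are hypothetical:

```java
// Dependency-free illustration of conditional provider selection. In the
// app itself this is done by @ConditionalOnProperty on each adapter bean.
public class ProviderSelectionSketch {

    interface LlmProviderPort {
        String generate(String prompt);
    }

    static class MockProvider implements LlmProviderPort {
        public String generate(String prompt) { return "[mock] " + prompt; }
    }

    static class OpenAiProvider implements LlmProviderPort {
        public String generate(String prompt) { return "[openai] " + prompt; }
    }

    /** Mirrors the effect of @ConditionalOnProperty on OPENAI_MOCK_ENABLED. */
    static LlmProviderPort select(boolean mockEnabled) {
        return mockEnabled ? new MockProvider() : new OpenAiProvider();
    }

    public static void main(String[] args) {
        System.out.println(select(true).generate("ping"));
    }
}
```

A new provider would slot in the same way: implement the port, then add one more conditional branch (in Spring terms, one more annotated bean).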

Example Usage

# Generate text
curl -X POST http://localhost:8081/api/v1/llm/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a haiku about coding", "model": "gpt-3.5-turbo"}'

# Get models
curl http://localhost:8081/api/v1/llm/models

# Health check
curl http://localhost:8081/api/v1/llm/health

Error Handling

  • 400 Bad Request - Validation errors with detailed field information
  • 500 Internal Server Error - LLM provider errors with descriptive messages
  • Standardized Error Format - Consistent error response structure
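The standardized error format itself is not spelled out in this README, so the payload below is an assumption: a minimal sketch of what a consistent error body for the 400 case might carry (status, message, per-field details, timestamp).

```java
import java.time.Instant;
import java.util.Map;

// Hypothetical standardized error payload; the project's actual error
// schema is not shown in the README, so these field names are assumptions.
public class ErrorFormatSketch {

    record ApiError(int status, String error, String message,
                    Map<String, String> fieldErrors, Instant timestamp) {}

    /** Builds the 400 response body for a failed request validation. */
    static ApiError validationError(Map<String, String> fieldErrors) {
        return new ApiError(400, "Bad Request", "Validation failed",
                fieldErrors, Instant.now());
    }

    public static void main(String[] args) {
        ApiError e = validationError(Map.of("prompt", "must not be blank"));
        System.out.println(e);
    }
}
```

In the Spring app, a @RestControllerAdvice-style handler would typically produce this body for both the 400 and 500 cases listed above.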

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
