A Spring Boot application for interacting with Large Language Models using Clean Architecture principles.
This LLM application exposes a REST API for generating text through interchangeable providers (a mock for development, OpenAI for production). Built with Java 17, Spring Boot 3.5.4, and Clean Architecture patterns.
- **Clean Architecture** - separated layers (Domain, Application, Infrastructure)
- **Multiple LLM Providers** - mock and OpenAI adapters with conditional loading
- **Secure Configuration** - environment variables with `.env` file support
- **Health Monitoring** - Actuator endpoints for monitoring
- **API Documentation** - Swagger/OpenAPI 3.0 with interactive UI
- **Input Validation** - request validation with comprehensive error handling
- **Reactive HTTP Client** - WebFlux `WebClient` for external API calls
- Java 17 + Spring Boot 3.5.4 (Web, WebFlux, Actuator, Validation)
- OpenAPI 3.0 (SpringDoc) + Lombok + dotenv-java + Maven
```
src/main/java/edu/study/llm_application/
├── domain/              # Domain Layer (business logic)
│   ├── entities/        # Domain entities
│   ├── ports/           # Interfaces (in/out)
│   └── usecases/        # Business logic implementation
├── application/         # Application Layer (API)
│   ├── controllers/     # REST controllers
│   ├── dtos/            # Data Transfer Objects
│   └── mappers/         # DTO ↔ domain mapping
└── infrastructure/      # Infrastructure Layer
    ├── adapters/        # External service adapters
    └── config/          # Configuration classes
```
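To make the port/adapter boundary concrete, here is a minimal plain-Java sketch of how the outbound port and a mock adapter could relate. The method signatures are assumptions inferred from the package layout and the API below; the repository's actual `LlmProviderPort` may differ.

```java
import java.util.List;

public class PortSketch {
    // Outbound port: the domain layer depends only on this interface,
    // never on a concrete provider. (Signature is illustrative.)
    interface LlmProviderPort {
        String generate(String prompt, String model, int maxTokens, double temperature);
        List<String> availableModels();
    }

    // Infrastructure adapter: a mock provider for local development.
    static class MockLlmProvider implements LlmProviderPort {
        @Override
        public String generate(String prompt, String model, int maxTokens, double temperature) {
            // Echo the prompt instead of calling a real API.
            return "[mock:" + model + "] echo of: " + prompt;
        }

        @Override
        public List<String> availableModels() {
            return List.of("gpt-3.5-turbo", "gpt-4");
        }
    }

    public static void main(String[] args) {
        LlmProviderPort provider = new MockLlmProvider();
        System.out.println(provider.generate("Hello", "gpt-3.5-turbo", 150, 0.7));
    }
}
```

Because use cases in `domain/usecases/` talk only to the port, swapping the mock for the OpenAI adapter requires no domain-layer changes.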
- Java 17+ and Maven 3.6+
- Clone and configure:

```bash
git clone https://github.com/AnderssonProgramming/llm-application.git
cd llm-application
cp .env.example .env
```
- Edit the `.env` file with your configuration:
```bash
# For development (free, no API key needed)
OPENAI_MOCK_ENABLED=true

# For production (requires an OpenAI API key)
OPENAI_MOCK_ENABLED=false
OPENAI_API_KEY=sk-your-actual-api-key-here
OPENAI_API_URL=https://api.openai.com/v1
OPENAI_TIMEOUT_SECONDS=30
```
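Since the project uses dotenv-java alongside regular environment variables, settings are typically resolved with a precedence order: real environment variables win over `.env` values, which win over defaults. The `resolve` helper below is an illustrative plain-Java sketch of that lookup chain, not the application's actual code:

```java
import java.util.Map;

public class ConfigSketch {
    // Illustrative precedence: process environment, then values parsed from
    // a .env file, then a hard-coded default.
    static String resolve(String key, Map<String, String> env,
                          Map<String, String> dotenv, String defaultValue) {
        if (env.containsKey(key)) return env.get(key);
        if (dotenv.containsKey(key)) return dotenv.get(key);
        return defaultValue;
    }

    public static void main(String[] args) {
        Map<String, String> dotenv = Map.of("OPENAI_MOCK_ENABLED", "true");
        // No real env var set, so the .env value wins:
        System.out.println(resolve("OPENAI_MOCK_ENABLED", Map.of(), dotenv, "false"));
        // Unset everywhere, so the default wins:
        System.out.println(resolve("OPENAI_TIMEOUT_SECONDS", Map.of(), dotenv, "30"));
    }
}
```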
- Run the application:

```bash
./mvnw spring-boot:run
```
The application starts on http://localhost:8081
| Variable | Default | Description |
|----------|---------|-------------|
| `OPENAI_MOCK_ENABLED` | `true` | `true` = mock responses, `false` = real OpenAI API |
| `OPENAI_API_KEY` | `your-openai-api-key-here` | Your OpenAI API key (required when mock is disabled) |
| `OPENAI_API_URL` | `https://api.openai.com/v1` | OpenAI API base URL |
| `OPENAI_TIMEOUT_SECONDS` | `30` | Request timeout in seconds |
> **Security:** Never commit your `.env` file. It is already listed in `.gitignore`.
```http
POST /api/v1/llm/generate
Content-Type: application/json

{
  "prompt": "Explain artificial intelligence",
  "model": "gpt-3.5-turbo",
  "max_tokens": 150,
  "temperature": 0.7
}
```
- Models: `GET /api/v1/llm/models`
- Health: `GET /api/v1/llm/health`
- Swagger UI: http://localhost:8081/swagger-ui.html
- Actuator: http://localhost:8081/actuator/health
**Mock Provider** (default, free):

```bash
# In .env file
OPENAI_MOCK_ENABLED=true
```
**OpenAI Provider** (requires an API key):

```bash
# In .env file
OPENAI_MOCK_ENABLED=false
OPENAI_API_KEY=sk-your-actual-api-key-here
```
Get your API key from OpenAI Platform. Note: Real API usage requires billing setup.
```bash
./mvnw test
```
- Implement the `LlmProviderPort` interface
- Add the implementation in `infrastructure/adapters/llm/`
- Configure it with `@ConditionalOnProperty`
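The steps above might come together as in the sketch below. The class name, the `generate` signature, and the `openai.mock.enabled` property name are all assumptions (Spring Boot's relaxed binding would map `OPENAI_MOCK_ENABLED` to such a property); only the `@ConditionalOnProperty` usage itself follows Spring's actual API, so check the existing adapters for the real conventions.

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Component;

// Hypothetical new adapter: registered as a bean only when the mock is
// disabled, so it never conflicts with the mock provider.
@Component
@ConditionalOnProperty(name = "openai.mock.enabled", havingValue = "false")
public class MyCustomLlmProvider implements LlmProviderPort {

    @Override
    public String generate(String prompt, String model) {
        // Call the external service here, e.g. with a WebFlux WebClient.
        return "...";
    }
}
```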
```bash
# Generate text
curl -X POST http://localhost:8081/api/v1/llm/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a haiku about coding", "model": "gpt-3.5-turbo"}'

# Get models
curl http://localhost:8081/api/v1/llm/models

# Health check
curl http://localhost:8081/api/v1/llm/health
```
- `400 Bad Request` - validation errors with detailed field information
- `500 Internal Server Error` - LLM provider errors with descriptive messages
- Standardized error format - consistent error response structure
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.