- Samarth Bhat Y (PES1UG22CS507)
- Samar Garg (PES1UG22CS506)
- Vibhav Tiwari (PES1UG22CS686)
This project is a distributed logging application designed for robust and efficient log handling in a microservice-based architecture. The system is built from components for log generation, broadcasting, accumulation, and querying. It features real-time log processing with a dedicated fast path for critical logs, scalable log storage, and a CLI for querying logs.
Microservices:
- Origin Server: Simulates the main server in a CDN.
- Cache: Represents a caching layer.
- Router/Client: Sends requests to the system.

These microservices generate logs during their operation.
Log Processing:
- Kafka: Acts as the message broker with two topics:
  - `logs`: For normal info logs, processed via Fluentd.
  - `critical_logs`: For critical logs sent directly to the Kafka broker.
- Fluentd: Accumulates and processes normal info logs.
- Elasticsearch DB: Stores logs for querying.
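The two-topic split can be sketched as a small routing rule in Go. The `LogEntry` fields and the `topicFor` function are illustrative assumptions, not the project's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// LogEntry is a hypothetical log record; the field names are
// assumptions for illustration, not the project's real schema.
type LogEntry struct {
	Service   string    `json:"service"`
	Level     string    `json:"level"`
	Message   string    `json:"message"`
	Timestamp time.Time `json:"timestamp"`
}

// topicFor sends critical logs to the critical_logs topic (bypassing
// Fluentd) and everything else to the logs topic.
func topicFor(level string) string {
	if level == "CRITICAL" {
		return "critical_logs"
	}
	return "logs"
}

func main() {
	e := LogEntry{Service: "cache", Level: "CRITICAL", Message: "upstream unreachable", Timestamp: time.Now()}
	payload, _ := json.Marshal(e)
	fmt.Println(topicFor(e.Level)) // critical_logs
	fmt.Println(len(payload) > 0)  // true
}
```

The actual produce call to the Kafka broker is omitted so the sketch stays self-contained.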
Server:
- Implements timeout functionality for log handling.
- Consumes logs from Kafka and stores them in Elasticsearch.
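The server's consume-and-store loop body might look roughly like the following. `Sink` stands in for the Elasticsearch client so the sketch runs without a broker or cluster; the Kafka read itself is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sink abstracts the log store. The real server would index into
// Elasticsearch; InMemorySink is a stand-in so the sketch is runnable.
type Sink interface {
	Index(doc []byte) error
}

type InMemorySink struct{ Docs [][]byte }

func (s *InMemorySink) Index(doc []byte) error {
	s.Docs = append(s.Docs, doc)
	return nil
}

// handleMessage mirrors one iteration of the server loop: validate a
// raw Kafka message and store it. Malformed payloads are dropped.
func handleMessage(msg []byte, sink Sink) error {
	if !json.Valid(msg) {
		return fmt.Errorf("dropping malformed log: %q", msg)
	}
	return sink.Index(msg)
}

func main() {
	sink := &InMemorySink{}
	handleMessage([]byte(`{"level":"INFO","message":"cache hit"}`), sink)
	handleMessage([]byte(`not json`), sink)
	fmt.Println(len(sink.Docs)) // only the valid message is stored
}
```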
CLI:
- Enables querying of logs by type (`info`, `alerts`, or `all`) and allows specifying a limit on the number of logs fetched.
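A sketch of how the CLI might validate its arguments and build the Elasticsearch query body. The type names come from this README; the `level` field and the `match`/`match_all` shapes are assumptions about the index mapping:

```go
package main

import "fmt"

// validQueryType checks the CLI's log categories (info, alerts, all).
func validQueryType(t string) bool {
	switch t {
	case "info", "alerts", "all":
		return true
	}
	return false
}

// buildQuery assembles an Elasticsearch query body; "size" caps the
// number of logs fetched, matching the CLI's limit flag.
func buildQuery(logType string, limit int) map[string]interface{} {
	q := map[string]interface{}{"size": limit}
	if logType == "all" {
		q["query"] = map[string]interface{}{"match_all": map[string]interface{}{}}
	} else {
		q["query"] = map[string]interface{}{
			"match": map[string]interface{}{"level": logType},
		}
	}
	return q
}

func main() {
	fmt.Println(validQueryType("alerts")) // true
	q := buildQuery("info", 10)
	fmt.Println(q["size"]) // 10
}
```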
Logger:
- Implemented in `logger.go`.
- Generates and broadcasts logs across the system.
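A minimal sketch of the logger's surface, assuming an injectable `Send` hook in place of the real Kafka producer (the method names and struct fields here are hypothetical, not copied from `logger.go`):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Broadcast is the hook the real logger would point at a Kafka
// producer; injecting it keeps the sketch runnable standalone.
type Broadcast func(topic string, payload []byte)

type Logger struct {
	Service string
	Send    Broadcast
}

func (l *Logger) log(topic, level, msg string) {
	payload, _ := json.Marshal(map[string]string{
		"service": l.Service,
		"level":   level,
		"message": msg,
		"time":    time.Now().Format(time.RFC3339),
	})
	l.Send(topic, payload)
}

// Info goes to the logs topic (picked up by Fluentd); Critical goes
// straight to critical_logs, matching the two-topic split above.
func (l *Logger) Info(msg string)     { l.log("logs", "INFO", msg) }
func (l *Logger) Critical(msg string) { l.log("critical_logs", "CRITICAL", msg) }

func main() {
	var lastTopic string
	l := &Logger{Service: "origin", Send: func(t string, _ []byte) { lastTopic = t }}
	l.Critical("disk full")
	fmt.Println(lastTopic) // critical_logs
}
```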
Real-time Logging:
- Logs generated by microservices are broadcast via Kafka.
- Critical logs bypass Fluentd for immediate processing.
Scalable Storage:
- Logs are stored in Elasticsearch for efficient querying.
Querying:
- The CLI provides flexibility to search logs by category and limit.
Timeout Handling:
- Ensures reliable log storage even under high load.
Setup:
- Install and set up the following:
  - Kafka
  - Fluentd
  - Elasticsearch
  - Golang
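If Docker is available, the backing services can be brought up for local testing roughly as below. The image names, tags, and flags are common public defaults, not taken from this project; Go itself is installed separately from go.dev:

```shell
# Single-node Elasticsearch for local development (security disabled for simplicity)
docker run -d --name es -p 9200:9200 \
  -e discovery.type=single-node \
  -e xpack.security.enabled=false \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.0

# Kafka in KRaft mode (no ZooKeeper required)
docker run -d --name kafka -p 9092:9092 apache/kafka:3.7.0

# Fluentd with its default configuration
docker run -d --name fluentd -p 24224:24224 fluent/fluentd:v1.16-1
```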