SmartBlogger is an AI-powered blogging assistant that helps you create high-quality, original content with integrated research capabilities and plagiarism detection.
- AI-Powered Content Generation: Create blog posts from code, documents, or ideas
- Multi-Source Research: Research from Arxiv, Web, GitHub, and Substack
- Plagiarism Detection: Built-in originality checking
- Local LLM Support: Works with Ollama for fully local AI processing
- Image Processing: Extract text from images with DeepSeek-OCR (vLLM-accelerated) and perform general image understanding
- Customizable Writing Style: Adjust tone, audience, and content preferences
- Modern UI/UX: Professional interface with shadcn UI components and enhanced visual design
- Real-time Feedback: Progress indicators and status updates during content generation
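The local LLM support above goes through Ollama's HTTP API on its default port. As a minimal sketch (the helper name and prompt are illustrative, not part of SmartBlogger's code), a request to Ollama's `/api/generate` endpoint can be built like this:

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default endpoint


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request against Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With Ollama running and the model pulled, you could send it:
# req = build_generate_request("llama3.1:8b", "Draft a blog intro about vector search.")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because `stream` is set to `False`, the endpoint returns one JSON object with the full completion rather than newline-delimited chunks.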
Install dependencies using uv (recommended):
uv sync
Or using pip:
pip install -e .
Install Ollama from https://ollama.ai
Start the application:
streamlit run app.py
Or using the project script:
uv run streamlit run app.py
SmartBlogger supports comprehensive image processing:
- Optical Character Recognition (OCR): Extract text from images using the DeepSeek-OCR model with vLLM acceleration. Upload images containing text (screenshots of documents, handwritten notes, charts, etc.) and have the text automatically extracted and processed.
- General Image Understanding: Describe and analyze images using multimodal vision models.
Supported image formats: PNG, JPEG, BMP, TIFF
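A simple way to pre-check an upload against the supported formats above is an extension check; the `is_supported_image` helper and extension set below are illustrative, not part of SmartBlogger's API:

```python
from pathlib import Path

# Extensions for the supported formats listed above (PNG, JPEG, BMP, TIFF).
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"}


def is_supported_image(filename: str) -> bool:
    """Return True if the file's extension matches a supported image format."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS


# is_supported_image("notes.PNG")  -> True  (case-insensitive)
# is_supported_image("scan.webp")  -> False
```

Checking the extension case-insensitively avoids rejecting otherwise valid uploads like `SCAN.JPG`.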
For more details on image processing capabilities, see Image Processing Guide.
For Apple Silicon users, see Apple Silicon Optimization Guide.
For development setup and contribution guidelines, see Development Guide.
- Python 3.11+
- Ollama (for local LLM support)
- Internet connection (for research and initial model download)
- uv - Ultra-fast Python package installer and resolver
- NVIDIA GPUs: Full support with vLLM acceleration
- Apple Silicon (M1/M2/M3): Optimized processing with MLX acceleration (included by default)
- Intel CPUs: Standard CPU processing
Set environment variables in a .env file:
OLLAMA_BASE_URL=http://localhost:11434
LOCAL_WRITER_MODEL=llama3.1:8b
LOCAL_RESEARCHER_MODEL=llama3.1:8b
TAVILY_API_KEY=your_tavily_api_key
PERPLEXITY_API_KEY=your_perplexity_api_key
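The variables above can be read with standard environment lookups once the `.env` file is loaded (e.g. via python-dotenv). This sketch shows one way to do that with defaults matching a local Ollama install; the `load_settings` helper is hypothetical, not SmartBlogger's actual configuration code:

```python
import os


def load_settings() -> dict:
    """Read SmartBlogger-style settings from the environment.

    Assumes the .env file has already been loaded (e.g. with python-dotenv).
    Unset model/endpoint keys fall back to local defaults; API keys have no
    defaults, so research features that need them can be skipped when unset.
    """
    return {
        "ollama_base_url": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        "writer_model": os.getenv("LOCAL_WRITER_MODEL", "llama3.1:8b"),
        "researcher_model": os.getenv("LOCAL_RESEARCHER_MODEL", "llama3.1:8b"),
        "tavily_api_key": os.getenv("TAVILY_API_KEY"),
        "perplexity_api_key": os.getenv("PERPLEXITY_API_KEY"),
    }
```

Keeping defaults for the local pieces means the app can start against a stock Ollama install with an empty `.env`.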
License: MIT