Adapters for integrating Model Context Protocol (MCP) tools with LangChain.js applications, supporting both stdio and SSE transports.
A discovery and compression tool for your Python codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project | Code structure visualization | LLM Context Window Efficiency | Static analysis for AI | Large Language Model tooling #LLM #AI #Python #CodeAnalysis #ContextWindow #DeveloperTools
CodeWhisper: AI-Powered End-to-End Task Implementation & blazingly fast Codebase-to-LLM Context Bridge
A lightweight tool to optimize your Javascript / Typescript project for LLM context windows by using a knowledge graph | AI code understanding | LLM context enhancement | Code structure visualization | Static analysis for AI | Large Language Model tooling #LLM #AI #JavaScript #TypeScript #CodeAnalysis #ContextWindow #DeveloperTools
An MCP server that provides safe, read-only access to SQLite databases through the Model Context Protocol. Built with the FastMCP framework, it lets LLMs explore and query SQLite databases with built-in safety features and query validation.
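For the SQLite server above, a minimal sketch of how such a read-only query tool can be exposed with the FastMCP API; the database path, tool name, and the SELECT-only check are illustrative assumptions rather than the project's actual code:

# Hypothetical sketch: a read-only SQLite query tool served over MCP via FastMCP.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sqlite-explorer")

@mcp.tool()
def read_query(sql: str) -> list[dict]:
    """Run a single SELECT statement and return rows as dictionaries."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    # Open the database in read-only mode so writes are impossible at the connection level.
    conn = sqlite3.connect("file:example.db?mode=ro", uri=True)
    conn.row_factory = sqlite3.Row
    try:
        return [dict(row) for row in conn.execute(sql).fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport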
A lightweight tool to optimize your C# project for LLM context windows by using a knowledge graph | Code structure visualization | Static analysis for AI | Large Language Model tooling | .NET ecosystem support #LLM #AI #CSharp #DotNet #CodeAnalysis #ContextWindow #DeveloperTools
A discovery and compression tool for your Java codebase. Creates a knowledge graph for an LLM context window, efficiently outlining your project #LLM #AI #Java #CodeAnalysis #ContextWindow #DeveloperTools #StaticAnalysis #CodeVisualization
An open-source, AI-powered Functions-as-a-Service (FaaS) platform that enables LLMs like ChatGPT to perform real-world tasks through dynamic function execution. By connecting LLMs to function calls, Ingra unlocks automation, complex integrations, and real-time context loading, paving the way towards true AGI potential and beyond.
Community-built Qwen AI Provider for Vercel AI SDK - Integrate Alibaba Cloud's Qwen models with Vercel's AI application framework
🌐 OpenCrawl: An ethical, high-performance web crawler built for scale. A powerful web crawling library that respects robots.txt and rate limits while leveraging Kafka for high-throughput data processing. Built with ethics and efficiency in mind.
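As a generic illustration of the two behaviors OpenCrawl advertises (robots.txt compliance and rate limiting), here is a standard-library Python sketch; it is not OpenCrawl's API, the user agent and delay are assumed values, and the Kafka hand-off is omitted:

# Generic sketch (not OpenCrawl's code): fetch pages only when robots.txt allows it,
# and pause between requests as a crude rate limit.
import time
import urllib.parse
import urllib.request
import urllib.robotparser

USER_AGENT = "example-crawler/0.1"   # assumed identifier
DELAY_SECONDS = 1.0                  # assumed politeness delay

def allowed(url: str) -> bool:
    """Check the target host's robots.txt before fetching."""
    parts = urllib.parse.urlparse(url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(USER_AGENT, url)

def crawl(urls):
    """Yield (url, body) for each permitted URL, sleeping between requests."""
    for url in urls:
        if not allowed(url):
            continue  # robots.txt disallows this path; skip it
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            yield url, resp.read()
        time.sleep(DELAY_SECONDS)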
This repository contains a modular, scalable chatbot system leveraging LLMs (Large Language Models) with a microservices architecture. The chatbot is built for retrieval-augmented generation (RAG), enabling intelligent, context-aware responses.
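For the RAG pattern this chatbot is built on, a toy Python sketch of the retrieve-then-generate step; the word-overlap retriever and prompt template are placeholders, not the repository's implementation:

# Toy RAG sketch (not this project's code): rank stored documents against the question,
# then feed the best matches to the model as context.
def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (stand-in for a vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble the context-augmented prompt that would be sent to the LLM service."""
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"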
Hexel is an AI-native OS enabling autonomous decision-making and multi-agent collaboration across various deployments.
This React application allows users to enter their GitHub token and repository URL to generate a comprehensive Markdown file structure. Each file’s code is listed sequentially, facilitating seamless copying for LLM integration. Enhance your development workflow with this efficient tool!
RotinaPy: Simplify your daily life and maximize productivity with an integrated app for task management, study tracking, flashcards, and more. Built with Streamlit and Python.
PRISM: A Multi-Perspective AI Alignment Framework for Ethical AI (Demo: https://app.prismframework.ai | Paper: https://arxiv.org/abs/2503.04740)
This project focuses on automating the analysis and reporting of bibliometric data, specifically targeting the annual production of academic articles. The primary goal is to understand trends, anomalies, and patterns in bibliometric data through a combination of statistical modeling and exploratory data analysis.
XllamaOS: A powerful and scalable AI interface for seamless interaction with large language models (LLMs), designed to empower developers, researchers, and innovators to unlock the full potential of AI-powered applications.
A TypeScript-based autonomous agent framework with modular systems for memory, planning, and tool integration. Features vector-based recall, multi-strategy planning, and extensible tools for AI agent development.
A scalable Q&A web application designed for coursework discussions, featuring real-time caching, LLM-generated answers, end-to-end testing, and performance optimization. Built with Astro, Svelte, Deno, PostgreSQL, and Redis.
A very simple Python API to control access to an LLM or an AI model.
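The description is brief, so purely to illustrate the idea, here is a hypothetical sketch of gating a model call behind an API-key check; the key store and the backend call are invented placeholders, not this project's code:

# Hypothetical sketch: forward a prompt to the model only if the caller presents a known key.
ALLOWED_KEYS = {"team-a-key", "team-b-key"}  # assumed key store

def generate(prompt: str) -> str:
    # Placeholder for the real model call (e.g. an HTTP request to an LLM endpoint).
    return f"echo: {prompt}"

def guarded_generate(api_key: str, prompt: str) -> str:
    """Reject unknown callers before the prompt ever reaches the model."""
    if api_key not in ALLOWED_KEYS:
        raise PermissionError("Unknown API key: access to the model is denied")
    return generate(prompt)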