1 change: 1 addition & 0 deletions README.md
@@ -1,4 +1,5 @@
# MCP Server for Milvus
[![Trust Score](https://archestra.ai/mcp-catalog/api/badge/quality/zilliztech/mcp-server-milvus)](https://archestra.ai/mcp-catalog/zilliztech__mcp-server-milvus)

> The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.

8 changes: 8 additions & 0 deletions src 2/mcp_server_milvus/__init__.py
@@ -0,0 +1,8 @@
from . import server

def main():
    """Main entry point for the package."""
    server.main()

# Optionally expose other important items at package level
__all__ = ['main', 'server']
18 changes: 18 additions & 0 deletions src 2/mcp_server_milvus/blogpost.md
@@ -0,0 +1,18 @@
We've moved from simple chatbots to sophisticated AI agents that can reason, plan, and execute complex tasks with minimal human intervention.
Agents can now perceive their environment, make decisions, and take actions to achieve specific goals, which has a particularly big impact on how we build applications.
To help with this, Anthropic proposed the Model Context Protocol (MCP), a standard for how applications provide context to LLMs. It makes it easier to build complex workflows on top of LLMs.
# What is Model Context Protocol (MCP)?
MCP is an open protocol that aims to standardize how AI models connect to different data sources and tools.
The idea is to help you build agents and complex workflows on top of LLMs, making them even smarter. It provides:
- A list of pre-built integrations that LLMs can directly plug into
- The flexibility to switch between LLM providers and vendors
MCP follows a client-server architecture, in which a host application can connect to multiple servers (a minimal server sketch follows the list below):
[Image: MCP client-server architecture]
- MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
- MCP Clients: Protocol clients that maintain 1:1 connections with servers
- MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
- Local Data Sources: Your computer’s files, databases, and services that MCP servers can securely access
- Remote Services: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
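
To make the server side concrete, here is a minimal sketch of an MCP server written with the official MCP Python SDK (the `mcp` package and its `FastMCP` helper). The server name and the `greet` tool are illustrative assumptions, not tools from this repository:

```python
# Minimal MCP server sketch using the official Python SDK.
# The server name and the greet tool are illustrative assumptions only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def greet(name: str) -> str:
    """Return a short greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs over stdio by default, so an MCP host such as Claude Desktop
    # can launch and talk to this process directly.
    mcp.run()
```

An MCP host discovers the `greet` tool through the protocol and can call it on behalf of the model, without the model knowing anything about how the server is implemented.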

# Using Milvus with MCP
Milvus is an open-source vector database built for high-performance similarity search over large collections of embedding vectors. An MCP server for Milvus exposes Milvus operations as MCP tools, so MCP hosts such as Claude Desktop or an AI-powered IDE can work with your vector data directly.
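
As a hedged sketch of what such an integration can look like, the snippet below exposes a Milvus query as an MCP tool. The server name, tool name, and parameters are assumptions for illustration and not necessarily the tools this project ships:

```python
# Illustrative sketch: exposing a Milvus query as an MCP tool.
# Names and parameters are assumptions, not this repository's actual tool set.
from mcp.server.fastmcp import FastMCP
from pymilvus import MilvusClient

mcp = FastMCP("milvus-demo")
client = MilvusClient(uri="http://localhost:19530")  # local Milvus instance assumed

@mcp.tool()
def milvus_query(collection_name: str, filter_expr: str, limit: int = 5) -> list:
    """Run a scalar-filter query against a Milvus collection and return matching rows."""
    return client.query(collection_name=collection_name, filter=filter_expr, limit=limit)

if __name__ == "__main__":
    mcp.run()
```

With a server like this registered in an MCP host, the model can ask for rows matching a filter such as `color == "red"` without any Milvus-specific code on the host side.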
3 changes: 3 additions & 0 deletions src 2/mcp_server_milvus/example.env
@@ -0,0 +1,3 @@
MILVUS_URI=""
MILVUS_TOKEN=""
MILVUS_DB=""
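
These three variables cover everything a Milvus connection needs: the endpoint, an auth token, and the database name. As a minimal sketch (assuming the `python-dotenv` and `pymilvus` packages), they could be loaded and used like this:

```python
# Sketch of loading example.env and opening a Milvus connection with pymilvus.
# The use of python-dotenv here is an assumption for illustration.
import os

from dotenv import load_dotenv      # pip install python-dotenv
from pymilvus import MilvusClient   # pip install pymilvus

load_dotenv("example.env")  # populates MILVUS_URI, MILVUS_TOKEN, MILVUS_DB

client = MilvusClient(
    uri=os.environ["MILVUS_URI"],              # e.g. "http://localhost:19530"
    token=os.environ.get("MILVUS_TOKEN", ""),  # empty if auth is disabled
    db_name=os.environ.get("MILVUS_DB") or "default",
)

print(client.list_collections())  # quick connectivity check
```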