Commit 424e563

LLM Chat Interface with MCP Enabled Tool Orchestration (#1202)
* MCP client implemented as new tab LLM Chat
* Improvements
* Experience changes
* Improved MCP client backend
* Fix scrolling to the latest message
* Fixed JWT token for servers
* Require auth token for team- and private-level servers
* Proper error handling
* Thinking functionality display
* Responsive tabs
* Code standard fixes
* Updated env example
* Added new optional dependencies
* Fixed web linting
* Fix linting issues for PR #1202
  - Fix bandit B105 false positive in llmchat_router.py by adding a nosec comment for the empty-string check
  - Fix CSS linting issues in admin.css:
    - Rename keyframes to kebab-case (messageSlideIn -> message-slide-in, fadeIn -> fade-in, thinkingStepIn -> thinking-step-in)
    - Replace deprecated word-break: break-word with overflow-wrap: break-word
    - Split single-line declaration blocks into multi-line format
    - Remove duplicate @keyframes fade-in and .animate-fade-in selectors
    - Merge duplicate .thinking-step selector with animation property
  - All pytest tests pass (3392 passed, 45 skipped)
  - All quality checks pass (flake8, bandit, interrogate, pylint, verify)
  - All web linting passes (stylelint, htmlhint)
* Fix doctest failures by adding import guards for optional LLM dependencies
  Wrapped the langchain imports in a try/except block to handle environments where the optional LLM chat dependencies are not installed, which lets doctest run without requiring them.
  - Added an _LLMCHAT_AVAILABLE flag to track availability
  - Set the imports to None (with type: ignore) when unavailable
  - Prevents ModuleNotFoundError during doctest runs
* Linting and test fixes
* Web lint

Signed-off-by: Keval Mahajan <[email protected]>
Signed-off-by: Mihai Criveti <[email protected]>
Co-authored-by: Mihai Criveti <[email protected]>
1 parent c7330e0 commit 424e563

34 files changed, +11017 -4802 lines changed
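The doctest fix described in the commit message uses the standard optional-dependency import guard. A minimal sketch of the pattern follows; the guarded symbols here are illustrative, not necessarily the exact ones in the gateway's source:

```python
# Guard optional LLM chat imports so the module (and its doctests) still
# load when the extras are not installed.
try:
    from langchain_core.messages import HumanMessage
    from langchain_openai import AzureChatOpenAI

    _LLMCHAT_AVAILABLE = True
except ImportError:  # also covers ModuleNotFoundError when extras are missing
    HumanMessage = None  # type: ignore[assignment, misc]
    AzureChatOpenAI = None  # type: ignore[assignment, misc]
    _LLMCHAT_AVAILABLE = False
```

Code paths that need the dependencies can then check `_LLMCHAT_AVAILABLE` and raise a clear error instead of failing at import time.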

.env.example

Lines changed: 70 additions & 0 deletions

@@ -840,3 +840,73 @@ REQUIRE_STRONG_SECRETS=false
# Set to false to allow startup with security warnings
# NOT RECOMMENDED for production!
# REQUIRE_STRONG_SECRETS=false

#####################################
# LLM Chat MCP Client Configuration
#####################################

# Enable the LLM Chat functionality (true/false)
# When disabled, LLM chat features will be completely hidden from the UI and APIs
# Default: false (must be explicitly enabled)
LLMCHAT_ENABLED=false

# LLM Provider Selection
# Options: azure_openai, openai, anthropic, aws_bedrock, ollama
# Default: azure_openai
# LLM_PROVIDER=azure_openai

#####################################
# Azure OpenAI Configuration
#####################################
# Use for Microsoft Azure OpenAI Service deployments
# AZURE_OPENAI_API_KEY=<your_api_key>
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
# AZURE_OPENAI_API_VERSION=2024-02-15-preview
# AZURE_OPENAI_DEPLOYMENT=gpt-4o
# AZURE_OPENAI_MODEL=gpt-4o
# AZURE_OPENAI_TEMPERATURE=0.7

#####################################
# OpenAI Configuration
#####################################
# Use for direct OpenAI API access (non-Azure) or OpenAI-compatible endpoints
# OPENAI_API_KEY=sk-...
# OPENAI_MODEL=gpt-4o-mini
# OPENAI_BASE_URL=https://api.openai.com/v1  # Optional: for OpenAI-compatible endpoints
# OPENAI_TEMPERATURE=0.7
# OPENAI_MAX_RETRIES=2

#####################################
# Anthropic Claude Configuration
#####################################
# Use for the Anthropic Claude API
# Requires: pip install langchain-anthropic
# ANTHROPIC_API_KEY=sk-ant-...
# ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
# ANTHROPIC_TEMPERATURE=0.7
# ANTHROPIC_MAX_TOKENS=4096
# ANTHROPIC_MAX_RETRIES=2

#####################################
# AWS Bedrock Configuration
#####################################
# Use for AWS Bedrock LLM services
# Requires: pip install langchain-aws boto3
# Note: Uses the AWS credential chain if credentials are not explicitly provided
# AWS_BEDROCK_MODEL_ID=anthropic.claude-v2
# AWS_BEDROCK_REGION=us-east-1
# AWS_BEDROCK_TEMPERATURE=0.7
# AWS_BEDROCK_MAX_TOKENS=4096
# Optional AWS credentials (uses the default credential chain if not provided):
# AWS_ACCESS_KEY_ID=<your_access_key>
# AWS_SECRET_ACCESS_KEY=<your_secret_key>
# AWS_SESSION_TOKEN=<your_session_token>

#####################################
# Ollama Configuration
#####################################
# Use for local or self-hosted Ollama instances
# OLLAMA_MODEL=llama3
# OLLAMA_BASE_URL=http://localhost:11434
# OLLAMA_TEMPERATURE=0.7
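These variables drive provider selection at runtime. A hedged sketch of how such env-driven dispatch can look — the factory name and structure are assumptions for illustration, not the gateway's actual code; the langchain classes and env var names are real:

```python
import os


def make_chat_model():
    """Illustrative factory: build a langchain chat model from the env vars above."""
    provider = os.getenv("LLM_PROVIDER", "azure_openai")
    # Each provider's temperature var follows the PROVIDER_TEMPERATURE naming above.
    temperature = float(os.getenv(provider.upper() + "_TEMPERATURE", "0.7"))

    if provider == "azure_openai":
        # Reads AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT from the environment.
        from langchain_openai import AzureChatOpenAI

        return AzureChatOpenAI(
            azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
            api_version=os.getenv("AZURE_OPENAI_API_VERSION", "2024-02-15-preview"),
            temperature=temperature,
        )
    if provider == "openai":
        from langchain_openai import ChatOpenAI  # reads OPENAI_API_KEY

        return ChatOpenAI(
            model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
            base_url=os.getenv("OPENAI_BASE_URL"),
            temperature=temperature,
        )
    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic  # pip install langchain-anthropic

        return ChatAnthropic(
            model=os.getenv("ANTHROPIC_MODEL", "claude-3-5-sonnet-20241022"),
            temperature=temperature,
        )
    if provider == "aws_bedrock":
        from langchain_aws import ChatBedrock  # pip install langchain-aws boto3

        return ChatBedrock(
            model_id=os.getenv("AWS_BEDROCK_MODEL_ID"),
            region_name=os.getenv("AWS_BEDROCK_REGION", "us-east-1"),
        )
    if provider == "ollama":
        from langchain_ollama import ChatOllama  # pip install langchain-ollama

        return ChatOllama(
            model=os.getenv("OLLAMA_MODEL", "llama3"),
            base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        )
    raise ValueError(f"Unknown LLM_PROVIDER: {provider}")
```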

README.md

Lines changed: 79 additions & 0 deletions

@@ -1182,6 +1182,85 @@ You can get started by copying the provided [.env.example](https://github.com/IB
- `MCPGATEWAY_A2A_ENABLED=false`: Completely disables A2A features (API endpoints return 404, admin tab hidden)
- `MCPGATEWAY_A2A_METRICS_ENABLED=false`: Disables metrics collection while keeping functionality

### LLM Chat MCP Client

The LLM Chat MCP Client lets you interact with MCP servers through conversational AI from multiple LLM providers, enabling natural-language access to the tools, resources, and prompts those servers expose.

| Setting           | Description                   | Default        | Options |
| ----------------- | ----------------------------- | -------------- | ------- |
| `LLMCHAT_ENABLED` | Enable LLM Chat functionality | `false`        | bool    |
| `LLM_PROVIDER`    | LLM provider selection        | `azure_openai` | `azure_openai`, `openai`, `anthropic`, `aws_bedrock`, `ollama` |

**Azure OpenAI Configuration:**

| Setting                    | Description                  | Default              | Options         |
| -------------------------- | ---------------------------- | -------------------- | --------------- |
| `AZURE_OPENAI_ENDPOINT`    | Azure OpenAI endpoint URL    | (none)               | string          |
| `AZURE_OPENAI_API_KEY`     | Azure OpenAI API key         | (none)               | string          |
| `AZURE_OPENAI_DEPLOYMENT`  | Azure OpenAI deployment name | (none)               | string          |
| `AZURE_OPENAI_API_VERSION` | Azure OpenAI API version     | `2024-02-15-preview` | string          |
| `AZURE_OPENAI_TEMPERATURE` | Sampling temperature         | `0.7`                | float (0.0-2.0) |
| `AZURE_OPENAI_MAX_TOKENS`  | Maximum tokens to generate   | (none)               | int             |

**OpenAI Configuration:**

| Setting              | Description                              | Default       | Options         |
| -------------------- | ---------------------------------------- | ------------- | --------------- |
| `OPENAI_API_KEY`     | OpenAI API key                           | (none)        | string          |
| `OPENAI_MODEL`       | OpenAI model name                        | `gpt-4o-mini` | string          |
| `OPENAI_BASE_URL`    | Base URL for OpenAI-compatible endpoints | (none)        | string          |
| `OPENAI_TEMPERATURE` | Sampling temperature                     | `0.7`         | float (0.0-2.0) |
| `OPENAI_MAX_RETRIES` | Maximum number of retries                | `2`           | int             |

**Anthropic Claude Configuration:**

| Setting                 | Description                | Default                      | Options         |
| ----------------------- | -------------------------- | ---------------------------- | --------------- |
| `ANTHROPIC_API_KEY`     | Anthropic API key          | (none)                       | string          |
| `ANTHROPIC_MODEL`       | Claude model name          | `claude-3-5-sonnet-20241022` | string          |
| `ANTHROPIC_TEMPERATURE` | Sampling temperature       | `0.7`                        | float (0.0-1.0) |
| `ANTHROPIC_MAX_TOKENS`  | Maximum tokens to generate | `4096`                       | int             |
| `ANTHROPIC_MAX_RETRIES` | Maximum number of retries  | `2`                          | int             |

**AWS Bedrock Configuration:**

| Setting                   | Description                      | Default     | Options         |
| ------------------------- | -------------------------------- | ----------- | --------------- |
| `AWS_BEDROCK_MODEL_ID`    | Bedrock model ID                 | (none)      | string          |
| `AWS_BEDROCK_REGION`      | AWS region name                  | `us-east-1` | string          |
| `AWS_BEDROCK_TEMPERATURE` | Sampling temperature             | `0.7`       | float (0.0-1.0) |
| `AWS_BEDROCK_MAX_TOKENS`  | Maximum tokens to generate       | `4096`      | int             |
| `AWS_ACCESS_KEY_ID`       | AWS access key ID (optional)     | (none)      | string          |
| `AWS_SECRET_ACCESS_KEY`   | AWS secret access key (optional) | (none)      | string          |
| `AWS_SESSION_TOKEN`       | AWS session token (optional)     | (none)      | string          |

**Ollama Configuration:**

| Setting              | Description          | Default                  | Options         |
| -------------------- | -------------------- | ------------------------ | --------------- |
| `OLLAMA_BASE_URL`    | Ollama base URL      | `http://localhost:11434` | string          |
| `OLLAMA_MODEL`       | Ollama model name    | `llama3.2`               | string          |
| `OLLAMA_TEMPERATURE` | Sampling temperature | `0.7`                    | float (0.0-2.0) |

> 🤖 **LLM Chat Integration**: Chat with MCP servers using natural language powered by Azure OpenAI, OpenAI, Anthropic Claude, AWS Bedrock, or Ollama
> 🔧 **Flexible Providers**: Switch between different LLM providers without changing your MCP integration
> 🔒 **Security**: API keys and credentials are securely stored and never exposed in responses
> 🎛️ **Admin UI**: Dedicated LLM Chat tab in the admin interface for interactive conversations

**LLM Chat Configuration Effects:**

- `LLMCHAT_ENABLED=false` (default): Completely disables LLM Chat features (API endpoints return 404, admin tab hidden)
- `LLMCHAT_ENABLED=true`: Enables LLM Chat functionality with the selected provider

**Provider Requirements:**

- **Azure OpenAI**: Requires `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `AZURE_OPENAI_DEPLOYMENT`
- **OpenAI**: Requires `OPENAI_API_KEY`
- **Anthropic**: Requires `ANTHROPIC_API_KEY` and `pip install langchain-anthropic`
- **AWS Bedrock**: Requires `AWS_BEDROCK_MODEL_ID` and `pip install langchain-aws boto3`; falls back to the AWS credential chain when explicit credentials are not provided
- **Ollama**: Requires a running local or self-hosted Ollama instance (default: `http://localhost:11434`)

**Documentation:**

- [LLM Chat Guide](https://ibm.github.io/mcp-context-forge/manage/llm-chat/) - Complete LLM Chat setup and provider configuration
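All of these providers sit behind langchain's common chat-model interface, which is what makes switching providers transparent to the MCP integration. A minimal usage sketch, reusing the hypothetical `make_chat_model()` factory sketched in the .env section above:

```python
from langchain_core.messages import HumanMessage

# make_chat_model() is the illustrative env-driven factory from the sketch above.
llm = make_chat_model()
reply = llm.invoke([HumanMessage(content="What tools does this MCP server expose?")])
print(reply.content)
```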
### Email-Based Authentication & User Management

| Setting | Description | Default | Options |
Lines changed: 1 addition & 1 deletion

@@ -1,2 +1,2 @@
 nav:
-  - langflow-server.md
+  - langflow-server.md
docs/docs/using/servers/third-party/langflow-server.md

Lines changed: 1 addition & 1 deletion

@@ -753,4 +753,4 @@ def get_workflow_endpoint(workflow_id, version="latest"):
 ### Community and Support
 - [Langflow Community](https://github.com/logspace-ai/langflow/discussions) - Community discussions
 - [Langflow Discord](https://discord.gg/langflow) - Real-time community support
-- [MCP Context Forge Issues](https://github.com/IBM/mcp-context-forge/issues) - Report integration issues
+- [MCP Context Forge Issues](https://github.com/IBM/mcp-context-forge/issues) - Report integration issues
