Arbitrary code execution in LLMMathChain #8363

@jan-kubena

Description

System Info

Langchain version: 0.0.244
Numexpr version: 2.8.4
Python version: 3.10.11

Who can help?

@hwchase17 @vowelparrot

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

Numexpr's evaluate function, which Langchain uses in LLMMathChain, is susceptible to arbitrary code execution via eval in the latest released version. See this issue, where a PoC for numexpr's evaluate is also provided.

This vulnerability allows arbitrary code execution, that is, running code and commands on the target machine, via LLMMathChain's run method with the right prompt. I'd like to ask Langchain's maintainers to confirm whether they want a full PoC with Langchain posted here publicly.
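
For context, a minimal sketch of the untrusted data flow described above (this is illustrative, not Langchain's exact implementation; `evaluate_llm_math` and its arguments are placeholders, and the exploit payload itself is deliberately omitted):

```python
# Sketch only: shows how an LLM-produced expression reaches numexpr.evaluate.
# numexpr.evaluate is designed for numeric expressions, not untrusted input;
# certain expressions reach Python's eval machinery (see the linked numexpr
# issue for a PoC, intentionally not reproduced here).
import math
import numexpr


def evaluate_llm_math(llm_output: str) -> str:
    # `llm_output` is whatever expression the model returns for the user's
    # prompt, i.e. attacker-influenced text.
    expression = llm_output.strip()
    result = numexpr.evaluate(
        expression,
        global_dict={},                           # intended to limit the namespace
        local_dict={"pi": math.pi, "e": math.e},  # expected math constants
    )
    return str(result)
```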

Expected behavior

Numerical expressions should be evaluated securely, without allowing arbitrary code execution.
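
As an illustration of one possible mitigation (a sketch, not a proposed patch; `safe_eval` is a hypothetical helper): restrict evaluation to an allow-listed subset of Python's AST so only plain arithmetic can ever run.

```python
# Evaluate arithmetic only, by walking the AST and rejecting everything else.
import ast
import operator

_ALLOWED_BINOPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.Mod: operator.mod,
}
_ALLOWED_UNARYOPS = {ast.UAdd: operator.pos, ast.USub: operator.neg}


def safe_eval(expression: str) -> float:
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_BINOPS:
            return _ALLOWED_BINOPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _ALLOWED_UNARYOPS:
            return _ALLOWED_UNARYOPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression element: {ast.dump(node)}")

    return _eval(ast.parse(expression, mode="eval"))


# safe_eval("2**10 + 3.5") == 1027.5, while safe_eval("__import__('os')")
# raises ValueError instead of executing anything.
```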
