In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
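The core issue is that the chain hands model-generated text to the Python interpreter. The sketch below is a minimal illustration of that vulnerability class, not LangChain's actual source: `fake_llm`, `vulnerable_math_chain`, and the injected payload are all hypothetical stand-ins, assumed here only to show why executing untrusted LLM output with `exec` leads to arbitrary code execution.

```python
# Minimal sketch of the vulnerability class (illustrative, not LangChain's code):
# a "math chain" that trusts the model's reply and runs it with exec().

def fake_llm(prompt: str) -> str:
    # Stand-in for the model. A prompt-injection attack steers the model
    # into returning arbitrary code instead of a math expression.
    return "import os; result = os.popen('whoami').read()"

def vulnerable_math_chain(question: str) -> str:
    """Asks the 'LLM' to translate a question into Python, then runs the reply."""
    code = fake_llm(f"Translate this math question into Python code: {question}")
    scope: dict = {}
    exec(code, scope)  # untrusted model output executed directly
    return str(scope.get("result"))

if __name__ == "__main__":
    # The attacker controls the question, the question shapes the model output,
    # and the chain executes whatever comes back.
    print(vulnerable_math_chain("What is 2 + 2?"))
```

The fix referenced in the pull request below moves away from executing raw model output; in general, the downstream evaluator must treat LLM output as untrusted input rather than code.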
References
| Link | Resource |
|---|---|
| https://github.com/hwchase17/langchain/issues/1026 | Issue Tracking |
| https://github.com/hwchase17/langchain/issues/814 | Exploit, Issue Tracking, Patch |
| https://github.com/hwchase17/langchain/pull/1119 | Patch |
| https://twitter.com/rharang/status/1641899743608463365/photo/1 | Exploit |
Configurations
History
No history.
Information
Published : 2023-04-05 02:15
Updated : 2023-04-17 16:57
NVD link : CVE-2023-29374
Mitre link : CVE-2023-29374
CVE.ORG link : CVE-2023-29374
Products Affected
langchain
- langchain
CWE
CWE-74
Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')