Description
I'm splitting this off from #45 because the input-processing side is ready and I want to submit it.
This task will focus on modifying the outputs, e.g., making sure that the code the LLM generates does not contain malicious packages.
Given an output, we need logic to block or allow it and/or provide extra context (e.g., flag that a referenced package is malicious).
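A minimal sketch of what this output step could look like, assuming Python and a hard-coded denylist. The names `MALICIOUS_PACKAGES`, `Verdict`, and `scan_output` are hypothetical, and a real version would consult a package-intelligence source and cover more ecosystems than Python imports:

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical denylist for illustration only; a real implementation
# would query a package-intelligence feed, not a hard-coded set.
MALICIOUS_PACKAGES = {"evil-package", "requestss"}

@dataclass
class Verdict:
    allow: bool                    # False means the output should be blocked
    context: Optional[str] = None  # Extra context to surface to the user

def scan_output(generated_code: str) -> Verdict:
    """Scan LLM-generated code for references to known-malicious packages."""
    # Naive extraction of package names from import and pip-install lines.
    pattern = re.compile(
        r"^\s*(?:import|from)\s+([\w\-]+)|pip\s+install\s+([\w\-]+)",
        re.MULTILINE,
    )
    referenced = {m.group(1) or m.group(2) for m in pattern.finditer(generated_code)}
    hits = sorted(referenced & MALICIOUS_PACKAGES)
    if hits:
        return Verdict(
            allow=False,
            context=f"Blocked: malicious package(s) referenced: {', '.join(hits)}",
        )
    return Verdict(allow=True)

if __name__ == "__main__":
    sample = "import requestss\nprint('hello')\n"
    print(scan_output(sample))  # Verdict(allow=False, context='Blocked: ...')
```

Instead of a hard block, the same `Verdict` could carry an allow-with-warning result, so the extra context ("this pkg is malicious") is shown to the user without suppressing the output.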