Generate games and programs using OpenAI agents. Built on top of Microsoft Autogen.
⚠️ Work In Progress
The current code works but:
- THERE ARE, AND THERE WILL BE, BREAKING CHANGES:
  - Always check that your hidden `.env.jsonc` file matches the latest `env.sample.jsonc` structure.
  - Always update your dependencies via `poetry install`.
- A lot of things need to be optimized in order to drastically reduce token usage: caching, step-by-step processing, conversation splitting, better prompts.
- The code needs some cleaning up.
- Microsoft Autogen is still in early stage and contains a few bugs.
- A lot of hard-coded stuff could be customizable via config files.
- I will only focus on a few programming languages at first.
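One of the optimizations listed above, caching, could be sketched roughly like this. Everything here is hypothetical: `cached_completion` and `call_llm` are illustrative names, not part of OADS or Autogen.

```python
import hashlib
import json

# Hypothetical in-memory cache keyed by a hash of the prompt and model,
# so that identical requests never hit the API (and your wallet) twice.
_cache: dict[str, str] = {}

def cached_completion(prompt: str, model: str, call_llm) -> str:
    """Return a cached completion when available; otherwise call the API.

    `call_llm` is a placeholder for whatever function actually sends the
    request (e.g. an OpenAI client call) -- it is NOT a real OADS function.
    """
    key = hashlib.sha256(json.dumps([prompt, model]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt, model)
    return _cache[key]
```

A persistent variant (e.g. backed by a file or SQLite) would survive restarts, which matters for multi-step generation runs.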
There are some amazing projects doing similar things, but I hope to find a way to tackle the generation of more ambitious programs.
You either need an OpenAI API key or an Azure OpenAI API key.
Do not rely on GPT-3.5, whether turbo or standard, for more than just "sample" programs.
If you're aiming for more complex applications, GPT-4 is a must, preferably even GPT-4-32k.
Using the OpenAI API might quickly exhaust your token limit. For more extensive projects, Azure OpenAI API is recommended.
Be mindful of costs if you have ambitious goals! Always monitor token usage and what your agents are doing. While AI can be a powerful tool, it isn't necessarily cheaper than hiring real developers — yet 🙄.
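A rough cost estimate can be computed from token counts and per-1K-token prices. The prices in the example below are illustrative placeholders, not current OpenAI rates — always check the official pricing page.

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Rough cost estimate in USD; prices are supplied by the caller
    because they change frequently and differ per model."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# With placeholder prices of $0.03/1K input and $0.06/1K output:
# estimate_cost(10_000, 2_000, 0.03, 0.06) is about $0.42
```

Logging such an estimate after each agent turn makes runaway conversations visible before the bill does.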
```shell
conda create -n autogen python=3.10
conda activate autogen
pip install poetry
poetry install
cp env.sample.jsonc env.jsonc
```

Edit your `env.jsonc` to add your API keys and customize your installation.
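Since breaking changes can add new keys to `env.sample.jsonc`, a quick structural check of your copy can save debugging time. This is a rough sketch under assumptions: it only strips full-line `//` comments (not a complete JSONC parser) and compares top-level keys, while the real file layout may be nested.

```python
import json
from pathlib import Path

def jsonc_keys(path: str) -> set[str]:
    """Return the top-level keys of a JSONC file.

    Only full-line // comments are stripped -- a rough sketch,
    not a complete JSONC parser.
    """
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    text = "\n".join(l for l in lines if not l.lstrip().startswith("//"))
    return set(json.loads(text))

def check_env(env: str = "env.jsonc", sample: str = "env.sample.jsonc") -> None:
    """Fail loudly if env.jsonc lacks keys present in the sample."""
    missing = jsonc_keys(sample) - jsonc_keys(env)
    if missing:
        raise SystemExit(f"env.jsonc is missing keys: {sorted(missing)}")
```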
Just:

```shell
make run
```

OADS will automatically generate the program source code in the `./project` directory.
You can clean it via:

```shell
make clean
```

IMPORTANT: Functions will NOT work.
From what I've tested, Autogen seems to work with any open-source LLM supported by Text generation web UI.
You just have to enable the openai extension in the "Session" tab of the web UI:
Be sure to have your 5001 port open or bound if it's a remote server,
since this is where the OpenAI-compatible API will be exposed.
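Before pointing the agents at a remote instance, you can sanity-check that the port is actually reachable with a plain TCP probe. The host name below is a placeholder for your own server:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("my-runpod-instance.example.com", 5001)
```

Note that this only proves the port is open, not that the OpenAI-like API is actually responding behind it.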
I personally deploy my current models on RunPod (not affiliated)
and use the `thebloke/cuda11.8.0-ubuntu22.04-oneclick:latest` image,
even though it seems a bit outdated regarding llama.cpp & co.
