- Make sure you have set up the project and run `rye sync`.
- Modify `.prompt.md` to provide instructions to the LLM.
  - Example prompt: Rename `AskUserTool` in `ask_user_tool.py` to `AskUserTool2`.
- Run `main_coder.py` or `main_manager.py` to fulfill that prompt.
- `rye sync`
- `./setup.sh` for system dependencies (if necessary)
- Install rye; make sure it is activated and synced, and add it to your startup script.
- Create a new `.env.secret` file and add your secrets, especially:
  - `ANTHROPIC_API_KEY` (for Claude)
  - `OPENAI_API_KEY` (optional, for GPT-4)
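A minimal `.env.secret` might look like the following. The values are placeholders, not real keys; replace them with your own.

```shell
# .env.secret -- placeholder values, replace with your own keys
ANTHROPIC_API_KEY=your-anthropic-key-here
OPENAI_API_KEY=your-openai-key-here
```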
- Run `./setup.sh` to get started.
- Try some local/offline models:
  - e.g. `ollama run codellama:70b`
- NOTE: RAG (Retrieval-Augmented Generation) or any proper local/offline models will need a GPU and/or a powerful setup. We'll have to investigate that.
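Besides the interactive CLI above, Ollama also serves a local REST API, which is how a program like this one could talk to an offline model. A hedged sketch of the request body its `/api/generate` endpoint accepts (model name taken from the example above; the endpoint and field names come from Ollama's API, not from this project):

```python
import json

# Build the JSON body for Ollama's /api/generate endpoint.
payload = {
    "model": "codellama:70b",
    "prompt": "Rename AskUserTool in ask_user_tool.py to AskUserTool2.",
    "stream": False,  # ask for one JSON response instead of a token stream
}
body = json.dumps(payload)
# To use it: POST `body` to http://localhost:11434/api/generate
```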
- How can we make sure the agent learns and stops repeating rather specific types of mistakes?
  - For example, when adding a tool, it always wants to return things and to convert exceptions into return values. Terrible pattern.
  - Usually, the SYSTEM_PROMPT should help with this, but sometimes the mistakes are too context-sensitive.