# research
- started in late march after reading papers about automated design of agentic systems, ai researcher, dreamcoder, bayesian program learning, prompt breeder
- realized that metalearning is the only way to make ai that adapts to the user in real time, and that in-context learning, not pretraining or fine-tuning, is the way to do it
- realized that in-context learning and its counterpart, prompt engineering, are vastly underestimated and that no one knows how to write system prompts, see https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
- since even the best prompt engineers treat a prompt as a wall-of-text dump, while it needs to be treated as a program, modular and composable, with language models treated as interpreters
- realized that ai will be better at prompting ai than humans ever could be, as per the bitter lesson
- defined in-context computation and in-context patterns as its primitives (aka functions executed by a language model)
- defined basic grammar and syntax of an in-context language, a programming language for llms
- defined metapatterns as a category of pattern factories that create new task-specific patterns on the fly, aka: user query /over @metapattern --> @pattern /run by llm
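the pattern / metapattern distinction above can be sketched in python, treating the language model as an interpreter. `llm` here is a hypothetical stand-in for a real model call (canned responses keep the sketch self-contained), and the template text is illustrative, not an actual pattern from the project:

```python
from typing import Callable

# Hypothetical stand-in for a real model call (an API client in practice).
# Canned responses keep the sketch self-contained and offline.
def llm(prompt: str) -> str:
    if "write a prompt template" in prompt.lower():
        return "Summarize the following meeting notes in three bullets:\n{input}"
    return f"[model response to: {prompt}]"

# An in-context pattern: a reusable prompt template whose interpreter
# is the language model, i.e. a function executed by the llm.
def make_pattern(template: str) -> Callable[[str], str]:
    def pattern(user_input: str) -> str:
        return llm(template.format(input=user_input))
    return pattern

# A metapattern: a pattern factory. Given a task, it asks the model to
# write a new task-specific template, then wraps it as a pattern.
def metapattern(task: str) -> Callable[[str], str]:
    template = llm(f"Write a prompt template with an {{input}} slot for: {task}")
    return make_pattern(template)

# user query /over @metapattern --> @pattern /run by llm
summarize = metapattern("summarize meeting notes")
print(summarize("notes from monday standup"))
```

the point of the factoring: patterns are first-class values, so metapatterns can generate, compose, and return them like any higher-order function.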
# prototyping
- built several metapatterns as bots (prompter-3000, botmaker, metaevaluator)
- generated ~30 specialized bots with them (for customers + internal processes)
- designed 'agent origin', a collection of patterns / prompts that allows a seed agent to instantiate a new agent in a tool like cursor or windsurf
- built 10+ simple agents with the openai sdk to learn more about orchestration, delegation, etc.
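the orchestration / delegation structure those agents explored can be reduced to a minimal sketch. the openai sdk call is stubbed out with `llm` so the example runs offline; the agent names, prompts, and keyword routing are illustrative assumptions (a real orchestrator would ask the model which specialist to use):

```python
# Stub for the real model call so the sketch is self-contained.
def llm(prompt: str) -> str:
    return f"[response to: {prompt}]"

class Agent:
    def __init__(self, name: str, system_prompt: str):
        self.name = name
        self.system_prompt = system_prompt

    def run(self, task: str) -> str:
        # Each agent is just a system prompt plus the interpreter (the llm).
        return llm(f"{self.system_prompt}\n\nTask: {task}")

class Orchestrator(Agent):
    """Routes each incoming task to the most suitable specialist agent."""

    def __init__(self, name: str, system_prompt: str, team: list[Agent]):
        super().__init__(name, system_prompt)
        self.team = {a.name: a for a in team}

    def delegate(self, task: str) -> str:
        # In a real system the orchestrator would query the llm to pick
        # a specialist; a keyword check stands in for that here.
        choice = "coder" if "code" in task else "writer"
        return self.team[choice].run(task)

team = [Agent("coder", "You write code."), Agent("writer", "You write prose.")]
boss = Orchestrator("boss", "You delegate tasks.", team)
print(boss.delegate("write code for a parser"))
```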
# next steps
- build a prototype of a self-replicating agent (@agent.maker)
- recreate 3 old customer projects to verify quality
- build a metalearning engine for evolutionary learning
- reverse engineer the commit history of open source repos to train reasoning models on process rewards
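the metalearning engine for evolutionary learning can be sketched as a promptbreeder-style loop over prompts. `score` and `mutate` are stand-ins for an llm-based evaluator and an llm-based mutation pattern; the fitness function and mutation list below are toy assumptions, only the loop structure carries over:

```python
import random

random.seed(0)

# Stand-in fitness: in a real engine this would be an llm-based evaluator
# (e.g. a metaevaluator pattern scoring outputs on held-out tasks).
def score(prompt: str) -> float:
    return len(set(prompt.split()))

# Stand-in mutation: in a real engine an llm would rewrite the prompt.
def mutate(prompt: str) -> str:
    extras = ["be concise", "think step by step", "cite sources"]
    return prompt + " " + random.choice(extras)

def evolve(seed: str, generations: int = 5, pop_size: int = 4) -> str:
    population = [seed] + [mutate(seed) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]  # keep the fittest half
        # refill the population with mutated children of surviving parents
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=score)

best = evolve("summarize the text")
print(best)
```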
with this in mind, let's consider why the
--> [[market is full of broken stuff]]
---