# metaprompting, program search, library learning, and tuning are ways to make ai that self-adapts
- metaprompting separates thinking from doing: it focuses the @metathinker's token usage on planning, reflection, real-time rewrites of agent scaffolding, and workflow adaptation, while the @runner grabs library functions, writes code, and ships to the interface for formatting and integrations.
- program search lets the ai develop and test new prompts and code at "sleep time", verifying candidates against already-completed tasks
- library learning lets the ai start from prompt templates, code repos, and workflow libraries, and bundle those into standalone agents that can be distributed in the cogit.store
- tuning incorporates user-generated traces into model behavior, producing task-specific reasoning models
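the "sleep time" program search above can be sketched as a small evolutionary loop over prompt candidates, scored by replaying an archive of already-completed tasks. everything below is a toy illustration, not cogit's implementation: `run_agent` stands in for a real llm call, and the fragments, archive, and scoring rule are hypothetical.

```python
import random

# toy archive of already-completed tasks: (input, expected output) pairs
# reused as a verification set (hypothetical data)
ARCHIVE = [("2+2", "4"), ("3*3", "9"), ("10-7", "3")]

# candidate prompt fragments to recombine (illustrative only)
FRAGMENTS = [
    "think step by step.",
    "answer with only the number.",
    "double-check the arithmetic.",
    "be concise.",
]

def run_agent(prompt: str, task: str) -> str:
    # stand-in for a real llm call: this toy "model" only returns a bare
    # numeric answer when the prompt explicitly demands one
    if "answer with only the number." in prompt:
        return str(eval(task))
    return "the answer is " + str(eval(task))

def fitness(prompt: str) -> int:
    # score a prompt by replaying archived tasks and counting exact matches
    return sum(run_agent(prompt, t) == want for t, want in ARCHIVE)

def evolve(generations: int = 5, pop_size: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    # initial population: random pairs of fragments
    pop = [" ".join(rng.sample(FRAGMENTS, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # mutate survivors by appending a random fragment
        children = [s + " " + rng.choice(FRAGMENTS) for s in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best_prompt = evolve()
```

a real version would swap `run_agent` for an llm call and `fitness` for task-level verifiers, but the loop structure (population, replay-based scoring, selection, mutation) stays the same.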
in concert, these metalearning methods create a self-referential learning loop
therefore, the research program for **cogit** development is focused on the above four areas.
--> [[overview of recent progress]]
==note==: so far i have tested metaprompting and library learning, incl. the development of an ai-first language. it works great, but i need a co-founder to help me build a low-level agent framework and to get into program search with evolutionary algorithms on prompts, because that's where my data science coding expertise breaks down
==note==: inspired by papers such as dreamcoder, automated design of agentic systems, ai researcher, autoagent, the s1 reasoner, deepscaler, universal transformers, and work on the turing completeness of prompting
---