# 15 principles of ai product design
1. [[#product design is physical, even for pixels on the screen]]
2. [[#you build within a system]]
3. [[#automate what people hate]]
4. [[#start with familiar tools]]
5. [[#ai is a hammer!]]
6. [[#2 ways to conceptualize ai interfaces]]
7. [[#chat is not a universal interface]]
8. [[#build simple things first]]
9. [[#design for modularity, hierarchy, sequences and graphs]]
10. [[#name your products]]
11. [[#every feedback is a prompt]]
12. [[#build with change in mind]]
13. [[#observe and evaluate agents & track the metrics]]
14. [[#the user is still king]]
15. but [[#build for agent experience and agent stories]]
---
# product design is physical, even for pixels on the screen
design requires deep reflection, but you cannot design a software application through thinking and talking about it alone.
ai tools offer you an unprecedented opportunity to develop products quickly and test them with real users.
just try it out, for example in lovable:
```instruction
build a website for the company
as the style example, apply the following link: https://cogit.systems/
maintain most of the design ideas
get creative in how you position elements
for content, apply: http://jeffclune.com/
```
![[Pasted image 20251117184857.png]]
[on Lovable](https://jeff-clune.lovable.app)
---
# you build within a system
who are you building for?
- internal or external users?
- are you on microsoft or google?
- do you use sap or business central?
- is communication email-based or chat-based?
whatever your answers to these questions are, they form a system you're developing for.
in your ai tools and agents, you must be aware of these systems.
they must be able to pull data from them and write back.
![[Pasted image 20251117185414.png]]
> try builtwith to understand your system
> https://builtwith.com/
---
# automate what people hate
you might think you're building an ai product, but you're simply building a product. every product exists to solve a specific problem.
however, the problem itself doesn't exist in isolation; it's always part of a user experience. this applies to all product design, but especially to ai products.
that's why it's important to automate things people hate.
one of my guiding questions in product discovery is:
> "what tasks in your daily work do you hate?"
people usually have a very **emotional reaction** to this. they really want to get rid of certain tasks and never deal with them again.
delegation to ai is one possible solution for this.
furthermore, the question of **fear of job loss** through ai becomes irrelevant, because people simply want this task automated.
---
# start with familiar tools
one of the biggest obstacles to adoption is when companies try to introduce five new tools per week to unsuspecting employees.
honestly, they don't need it, they don't want it, and employee resistance is the biggest thing that causes any ai product to fail.
so start with familiar tools they already use.
run your ai in an email
![[Pasted image 20251117002428.png]]
help them create custom gpts in chatgpt because they already know how it works.
build a business app in sap fiori, salesforce app exchange, service now or jira because they're already in these systems.
or connect an ai workflow to a business intelligence tool like celonis, tableau, or powerbi.
---
# ai is a hammer!
so you see nails everywhere.
we're in the ai seminar, so we'll talk about ai products. but this is obviously not always the best solution for every problem. if you can build something with deterministic code, do it. it will be cheaper, faster and more reliable.
if you can use a standard product with or without ai that solves 80% of your problem, try it.
but there's always that 20% we call "**last-mile automation**": problems that will never be solved by standard products, never fit your workflows, never work as intended.
you can never define specific rules, or they're constantly changing. you might need automated decision-making, robust extraction, or infinitely scalable analysis capabilities.
in that case, grab the ai hammer and hit as hard as you can.
---
# 2 ways to conceptualize ai interfaces
## left-chattedness
chat is positioned or conceptualized as a central element of the product experience; often the primary interface or core workflow.
![[Pasted image 20251117002615.png]]
## right-chattedness
chat is positioned as a supplement or auxiliary panel within an existing product, especially in ides; it complements the main workflow rather than defining it.
![[Pasted image 20251117002637.png]]
---
# chat is not a universal interface
also known as the "make it look like what it does" principle.
we all know this: chatgpt or microsoft copilot try to solve all our problems through a chat window where we're condemned to endless conversations with a chatbot that doesn't care about our problems. however, chat is not a universal interface for problem-solving.
email, for example, is often ignored or overlooked. a lot of work – especially for small and medium-sized businesses – is still done via email.
for some use cases, a simple web form is necessary. for many, an excel document is the interface.
if you need to analyze a patent and this analysis
- takes over 30 minutes,
- generates about 200 pages of text,
- involves over 30 prompts in the architecture, including multiple llm-as-judge steps,
why the hell would you want to design that as chat?
---
# build simple things first
can you solve it with a simple in-context pattern and one llm call, maybe with an attached document?
many use cases for extraction like order entry or invoice processing work exactly like this.
1. the core is just a prompt, so you don't have to build the entire solution to test it – you can create a bot (aka custom gpt, aka agent in copilot). write a prompt with a meta-prompting tool like [[bot - @botmaker]] and test it within 30 minutes.
![[Pasted image 20251117003136.png]]
furthermore, rough mock-ups and messy prototypes encourage others to give you feedback. so make sure your first attempts don't look too good.
2. next, you can create a simple backend automation by connecting to outlook, reusing the same prompt, and sketching an api to write back to a source system like sap or business central.
![[Pasted image 20241112151101.png]]
3. and only if you need additional features like a dashboard, multiple users and access rights, or branching across different agents, would you need a full-fledged cloud-based solution.
![[Pasted image 20251117185307.png]]
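the step-1 pattern above – a single llm call wrapped around one prompt – can be sketched in a few lines. `call_llm` is a stand-in for whatever model api or custom gpt you use, stubbed here so the example runs offline; the field names and sample email are made up for illustration:

```python
# minimal sketch of the "one prompt, one llm call" extraction pattern.
# call_llm is a stub; a real version would hit your model provider's api.

import json

def call_llm(prompt: str) -> str:
    # stub: pretend the model extracted the fields and answered in json
    return json.dumps({"customer": "acme gmbh", "order_id": "4711", "quantity": 3})

def extract_order(email_body: str) -> dict:
    prompt = (
        "extract customer, order_id and quantity from the email below.\n"
        "respond with json only.\n\n"
        f"email:\n{email_body}"
    )
    return json.loads(call_llm(prompt))

order = extract_order("hi, please ship 3 units, order 4711. best, acme gmbh")
print(order["order_id"])  # → 4711
```

the whole "product" at this stage is the prompt string – which is exactly why you can test it in a chat window before writing any backend.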
---
# design for modularity, hierarchy, sequences and graphs
## modularity & hierarchy -- super agents and tools
an agent is just a language model running in a loop and calling tools until it doesn't want to anymore. every interaction is essentially an llm call that looks at what happened and decides what to do next.
![[Pasted image 20251117003741.png]]
every tool is a function call, and anything can be in there. a function can be as simple as converting every character in a string to lowercase or as complex as provisioning an entire new azure resource group with 10 resources, running 1,000 agents in parallel in a simulated environment to process and generate legal contracts, or whatever.
this means your solution must be inherently modular – these modules must be interchangeable. you might wait eight hours for something and only get half a page of response back. so you need to compose your agentic systems into flexible hierarchies with super-agents and sub-agents that can handle this unpredictability.
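the "language model in a loop calling tools" idea can be sketched like this. the model is replaced by a scripted decision function so the loop runs offline; the tool and the decision logic are made up for illustration:

```python
# minimal sketch of an agent: an llm in a loop, deciding on tool calls
# until it decides to finish. fake_model stands in for a real llm call.

def lowercase(text: str) -> str:
    return text.lower()

TOOLS = {"lowercase": lowercase}

def fake_model(history: list) -> dict:
    # stub: looks at what happened so far and decides what to do next
    if not history:
        return {"action": "tool", "tool": "lowercase", "input": "HELLO AGENT"}
    return {"action": "finish", "output": history[-1]["result"]}

def run_agent(model, tools, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        decision = model(history)
        if decision["action"] == "finish":
            return decision["output"]
        result = tools[decision["tool"]](decision["input"])
        history.append({"tool": decision["tool"], "result": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent(fake_model, TOOLS))  # → hello agent
```

note that modularity falls out naturally: `TOOLS` is just a dict, so any tool – or the model itself – can be swapped without touching the loop.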
## processes as prompt sequences
now you need to define sequences of actions based on your actual processes – sequences that are translated into sequences of prompts. think of the predefined processes from whatever company you want to replicate with an ai agent, but don't get caught up in designing the thing node by node.
see [[sequence - cv questions]] or [[sequence - company research]]
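a prompt sequence can be as simple as piping each step's output into the next step's template. `llm` is stubbed and the step prompts are illustrative, not taken from the linked sequences:

```python
# sketch of a process as a prompt sequence: each step's output
# becomes the {input} of the next prompt template.

def llm(prompt: str) -> str:
    return f"answer({prompt})"  # stub standing in for a real model call

def run_sequence(steps: list, initial_input: str) -> str:
    result = initial_input
    for template in steps:
        result = llm(template.format(input=result))
    return result

steps = [
    "summarize the company: {input}",
    "list three risks based on: {input}",
]
print(run_sequence(steps, "cogit systems"))
```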
## dynamic computational graphs
here's the thing – when you think about agents, you need to think about how agents traverse the nodes of a computational graph. this is not the same as a workflow graph in tools like n8n or make, because computational graphs for agents are dynamic. they can change, create new nodes, and remove nodes completely during execution.
this is a non-deterministic computing paradigm and it must be modular and hierarchical. so design with the thought that agents modify their own execution paths in real-time while maintaining some kind of organizational structure.
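the difference to a fixed workflow graph can be shown in a toy example: here a node spawns new nodes *while the graph is executing*, which a static n8n/make-style definition cannot do. the node logic is made up for illustration:

```python
# sketch of a dynamic computational graph: nodes may append new
# nodes to the graph during execution, and the runner picks them up.

from collections import deque

def run(graph: list) -> list:
    ctx = []
    queue = deque(graph)
    graph.clear()
    while queue:
        name, payload = queue.popleft()
        if name == "split":
            # this node modifies the graph at runtime: it spawns two workers
            graph.extend([("work", "part-a"), ("work", "part-b")])
        else:
            ctx.append(payload)
        # pull in any nodes that were created during this step
        while graph:
            queue.append(graph.pop(0))
    return ctx

print(run([("split", None)]))  # → ['part-a', 'part-b']
```

the initial graph contained a single node; the executed graph had three. design your orchestration so this is normal, not an error.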
---
# name your products
this may seem obvious, but you need to name your products, even if they're in a very early stage. if you can name it, you can build it – this is especially true for such intangible, non-deterministic applications like ai agents.
name it based on your use case to simplify understanding and discoverability.
i use the following pattern for my ai products. you can choose yours:
- @contract.helper
- @scoper
- @bot.maker
---
# every feedback is a prompt
especially in early stages of product development, it's important to get feedback from your users, buyers, and other stakeholders.
but unlike in normal product development, in hyper-development every feedback is a prompt.
every meeting note can be turned into a feature request, into a plan, into a coding agent that works on implementation for 10 minutes and submits a pull request for it.
so again: your meeting notes, emails from your ceo, support tickets from customers – everything is a prompt that you can feed into your scoping.
---
# build with change in mind
ai tools in 2025 are still quite fragile. you might have your extraction module running on claude 3.7 sonnet. then you read an announcement that it will be discontinued early next year. so you understand that you need to re-evaluate all your extractions on 100 documents. you need to change prompts and redeploy the solution for a variety of customers.
![[Pasted image 20251117003919.png]]
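re-evaluating after a model swap can be as simple as rerunning a gold set and comparing scores before shipping. `extract` is stubbed here; a real version would call the respective model api, and the gold set is made up for illustration:

```python
# sketch of a regression eval for a model swap: rerun the extraction
# on a small gold set and refuse to ship if accuracy drops.

def extract(model: str, document: str) -> str:
    # stub: pretend any model reads the order id after the word "order"
    return document.split("order ")[1].split()[0]

gold = [
    ("please ship order 4711 today", "4711"),
    ("order 0815 is delayed", "0815"),
]

def evaluate(model: str) -> float:
    hits = sum(extract(model, doc) == expected for doc, expected in gold)
    return hits / len(gold)

assert evaluate("new-model") >= evaluate("old-model"), "regression: don't ship"
print(evaluate("new-model"))  # → 1.0
```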
every week, new models, tools, agentic frameworks and products are added, while old ones are quickly discontinued.
recently we had to rebuild the entire retrieval augmented generation module for a client because openai announced they were discontinuing their assistants api, on which we had built the prototypes. we didn't want to have to rebuild after only about a year – but it was a necessity.
![[Pasted image 20251117004012.png]]
**perhaps even more important:**
the development of capabilities is rapid – these new models and tools aren't just released; they're often better in the dimensions of quality, speed and price. so you effectively have no choice but to adopt them or be left behind.
---
# observe and evaluate agents & track the metrics
there's a variety of metrics you want to understand, and of course you also want to observe the traces of the agents and ai tools you create.
whatever tool you use – whether it's openai directly or something like phoenix or langfuse – make sure you connect it early in the project and track a variety of metrics.
![[Pasted image 20251117004255.png]]
technical metrics like pass@1 (how often your system solves a question on the first try) are very useful, as are factual accuracy and relevance for rag-based use cases and information retrieval, recall and precision, and other llm-as-judge evaluations.
![[Pasted image 20251117004201.png]]
for complex agentic systems, you may even need to create your own benchmarks specific to your use case. there are also other metrics like token consumption (tokens per user, tokens per month), price per mega-token and so on. make sure you know which ones are important for your product and track/observe them.
![[Pasted image 20251117004338.png]]
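two of the metrics above are cheap to compute once you log your runs. the results and document sets below are hard-coded stand-ins for real run logs:

```python
# sketch: pass@1 over a set of eval questions, plus precision/recall
# for a retrieval step, computed from (stand-in) run logs.

# did the first attempt pass, per eval question?
first_try_results = [True, True, False, True]
pass_at_1 = sum(first_try_results) / len(first_try_results)

# retrieval quality for one rag query
retrieved = {"doc1", "doc2", "doc3"}
relevant = {"doc2", "doc3", "doc4"}
precision = len(retrieved & relevant) / len(retrieved)
recall = len(retrieved & relevant) / len(relevant)

print(pass_at_1, precision, recall)
```

token consumption and price-per-million-tokens come straight from your provider's usage endpoint or your tracing tool – the point is to pick the handful that matter for your product and watch them continuously.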
---
# the user is still king
whatever you design, research and develop, it still won't matter if the user doesn't like it.
all the fanciest ai tools, workflows, databases, agents and automated evaluations are worthless if they don't address the user's problem.
just like with the design of any product (physical or digital), you need to start with the user. they own the problem; you own the solution.
it's just that many software solutions nowadays are ai-based. the only reason for this is that language models have opened up a variety of new use cases in non-deterministic computing.
consider it a tool, just as you would consider a frontend framework, a devops pipeline, or a clever source system integration. it must serve a purpose to be valuable. it must resolve a situation or alleviate the user's pain for anyone to care.
be mindful of this.
---
// 17 nov 2025, hamburg
---
# build for agent experience and agent stories
have you heard about the agent experience, or [[ax]]?
i used to think it was bullshit and just another piece of jargon to separate the uninitiated
but after a while, i realized that most users of the software of the future will be ai agents -- not people
frankly, do we really want to sit and type ephemeral symbols on a photon-emitting surface?
## design your website such that it is findable by an ai research agent
![[Pasted image 20251120223506.png]]
my attempt to make it easy is based on the obvious -- publishing markdown files on a website that search bots can index.
check out my [blog](https://incontext.computer/), training for [decision-makers (in german for now)](https://entscheider.ai/), and training for developers on [how to build ai inhouse](https://inhouse.bot/) for examples of llm-friendly publishing. some other pages, like [our landing](https://cogit.systems/), are much more human-focused.
one thing is clear
> designers will create communication interfaces between humans and ai agents en masse.
>
> those will be different, weird, wild, but pursuing one purpose -- to understand what is the best way to talk to ai
## humans have user stories, what have agents got?
you don't have to think for long to understand that ai roaming around the internet will need more than robots.txt to navigate the world
manifest, library, and registry are mechanisms to set the rules of agentic systems, while agent stories are semi-formalizations of an agent's needs regarding the product
> product design didn't just fart and end with the advent of ai coding agents
>
> it became the only true differentiator
>
> as i always say,
> chatgpt, claude, and gemini aren't ai models --
> they are ai products.
and obviously, they need agent stories:
```agent_story
as an llm-based ai agent called [[@name]],
i want to be able to instantly read all context data related to the case,
access all tools necessary for working through the procedure, incl trees, loops, constraints of the user and of the environment
i as [[@name]] am a [[superagent]] -- an ai entity that has
- a runnable python definition that is guaranteed to run
- an extensive markdown instruction created by a specialized @instruction.maker
- a purposefully designed configuration yaml by the @config.maker
- and in the future, @tool, @app, @org from the {makers} family
```
sounds weird, huh?
that is precisely the reason why i wrapped it into an `agent_story` tag, as it is a clear token-level signature -- kinda confusing for humans, but we already know that llms are better than 9X.XYZ% of humans on a ton of tasks
today, in my short workshop on using ai, a participant sent the following prompt:
```instructions
Request:
Complete the missing definitions according to its number status:
1= Shipped
2= Full Shipment
3 = Partially Shipped
1, 3 = Shipped, Partially Shipped
2, 1 = Full Shipment, Shipped
3, 1 = Partially Shipped,
1, 2 = , Full Shipment
Answer:
Here’s the completion of the missing definitions based on the pattern you provided:
1, 3 = Shipped, Partially Shipped
2, 1 = Full Shipment, Shipped
3, 1 = Partially Shipped, Shipped
1, 2 = Shipped, Full Shipment
Let me know if you need these in another format or if you’d like more combinations!
```
---