# consumer electronics device for ai

do you trust your fridge? how about your washing machine or a tv set? if the latter is "smart", you might have a reason not to. as mitko vasilev keeps saying over on linkedin:

> make sure you own your ai. ai in the cloud is not aligned with you; it's aligned with the company that owns it.

the more critical ai workloads become for our daily work, the more the question of ownership arises - what if you don't want microsoft, google, or apple to be able to switch off any ai capability you rely on?

none of us owns the ai we use, at least not yet. but the price of running ai models is going down, and the quality of models one can run on a laptop is going up. extrapolate that trend and soon even consumer-grade laptops will run models on the level of the chatgpt or claude of mid-2025. the catch: you probably won't be able to use the laptop for anything else at the same time. and your partner, kids, and friends will likely also wanna use it - giving some 5-7 people physical access to one laptop is infeasible, and serving them all would require more powerful hardware anyway.

but recently i started asking myself - why am i so stuck on thinking about laptops? they have been the prevalent form of personal computing for 20 years (yes, most work is still done on laptops, with smartphones mostly serving communication, entertainment, navigation, and photography), but running 10,000 ai agents on your laptop likely won't work.

while privacy and security are of utmost importance, most consumers don't understand that, and don't appreciate that their perceived freedom to choose where to live, what to buy, and which schools to send their kids to is only an illusion, completely dependent on the current form of government in their country. this is well exemplified by china, which modern western media love to portray as a villain (though many countries across the globe are happy to call out the usa too), but also by the likes of north korea, iran, myanmar, egypt, and many others that are quick to block internet access, messengers, social media, and vpns. proudly democratic [spain has started arresting owners of google pixel phones](https://news.ycombinator.com/item?id=44473694) under suspicion of drug trafficking - only because one can install the highly secure grapheneos on a pixel. which i did too, btw, and highly recommend to anyone who doesn't like being permanently spied on.

at the same time, i fully acknowledge that no one gives a fuck about privacy, even when people say they do. heck, i am typing this on a windows machine - a surveillance paradise in full control of microsoft, and if they wanna brick my laptop, they can do so in an instant.

so what gives? can't we just run our 10k agents in the cloud using the big 3 of aws, azure, and gcp? well, go ahead and try, and watch the eyes of your spouse staring at the cloud bill at the end of the month. 10k agents, which i'll call [[a10k]] for short, can easily consume 1 billion tokens per month - already today it's easy to generate 10k tokens per call, and if each agent does 10 such calls, you get the billion right there. 1b tokens per month isn't a lot. at current prices, it would cost 10-15k USD per month. in 2 years it might drop to 100-300 USD per month (if the trend of dropping [[cost of intelligence]] continues) - but this is still a lot of money, more than most people are ready to spend on a utility on top of all the other bills they already have.
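to make that back-of-the-envelope arithmetic concrete, here is a minimal python sketch. the agent count, calls per agent, and tokens per call are the figures from the text; the blended per-million-token prices are my own illustrative assumptions, chosen only to land in the ranges mentioned above, not any provider's published price list.

```python
# rough a10k token-bill estimate - illustrative numbers only

agents = 10_000            # the a10k fleet
calls_per_agent = 10       # api calls per agent (over a month, to match the 1b/month figure)
tokens_per_call = 10_000   # easy to generate per call already today

tokens_per_month = agents * calls_per_agent * tokens_per_call
print(f"{tokens_per_month:,} tokens per month")   # 1,000,000,000

# assumed blended price per 1M tokens (input + output averaged) - not real pricing
price_today_usd = 12.0     # roughly consistent with 10-15k USD/month
price_future_usd = 0.20    # if the cost-of-intelligence trend continues

for label, price in [("today", price_today_usd), ("in ~2 years", price_future_usd)]:
    bill_usd = tokens_per_month / 1_000_000 * price
    print(f"{label}: ~{bill_usd:,.0f} USD per month")
# today: ~12,000 USD per month
# in ~2 years: ~200 USD per month
```

tweak the assumed prices or call volume and the bill moves linearly; the point is the order of magnitude, not the exact figure.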
besides, for $300/month one can buy a car. or an insanely powerful ai computing device that can serve 5-10 people and cost 100 bucks a month over 36 months of leasing, not subscription - with the added benefit that you can sell unused capacity to your neighbors who want to run local inference.

![[ai device.png]]

this device has to be truly consumer-grade - as simple as a dishwasher, as capable as the latest llms, hosted in living rooms, späti kiosks, and computer clubs, which gain new meaning in the age of ai.

sam altman talks about ai that is too cheap to meter; the usa and china are building out massive data centers requiring massive power plants; people equate ai's impact to electricity and predict ai becoming a utility, and foundation models a commodity. and while i agree with that sentiment, i wonder why no one grapples with the second-order consequences of ai becoming a utility. utilities are widely available, cheap and low-margin, decentralized, local goods. there won't be any "too cheap to meter" if only 8 companies globally can produce ai worth using - but they won't stay the only ones.

the economics of pretraining and inference favor scale, but a unit of model performance keeps getting cheaper and faster to train. new architectures will be more data- and compute-efficient, driving the cost down further. nvidia's monopoly will attract more and more startups to the parallel compute space (there are plenty already - read the semianalysis blog!), and when the dust settles, every willing country, county, and city district will have its own ai model and its own server rack, either in the basement of the city hall or as rented metal from a provider. it will just keep getting cheaper to train, to run, and to buy hardware, while the ability to build your own models, specialized for your community, will become easily accessible.

this scenario has a lot of hidden assumptions, which may or may not materialize - feel free to write them out. but it is also based on the law of least resistance: as soon as the pain of spending money and having no control becomes great enough (as people realize their dependence on ai vendors), and the cost of the alternative becomes low enough (through a simple consumer device for ai), the change is bound to happen.

---

// 20 july 2025, berlin