“AI” is not there to help you

I’m not writing this post to convince anybody; I’m writing it mostly to formulate my thoughts, and so that I can refer to it later saying “called it”.

First of all, what do I have against AI, and why is the first word of the title in quotes? Not much, actually; it’s just that what gets hyped as AI nowadays is far from it, hence the quotes. It can do things, and sometimes it can even do them well, but in general it is far from actual intelligence.

IMO it’s more accurate to call them artificial managers, since they do what your typical manager does: spew completely meaningless bullshit, take your work and reword it in corporate-speak, plagiarise somebody else’s work and take credit for it. Also, maybe it’s acceptable for a typical USian to never learn anything, but normally a human is expected to keep learning and re-evaluating things throughout their whole life.

Of course I’m no AI scientist (and so my opinion does not matter), but I believe that a proper AI should have two feedback loops: an inner loop that controls what is being done, and an outer loop that adjusts knowledge based on new experience. The inner feedback loop means that while executing a task you try to understand what you got and how it relates to the goal, and then adjust what you’re doing if necessary. It’s like the famous joke about the difference between physicists and mathematicians, asked to boil water in a kettle that is already full and on the stove: the physicist will simply light the fire, while the mathematician will take the kettle off the stove and pour the water out, thus reducing the task to the already-solved one. The outer feedback loop means learning from experience. For example, LLMs apparently still make the same mistake as small children when asked which is larger, 4.9 or 4.71; unlike small children, they don’t learn from it, so next time they will give the same answer or make the same mistake on some other numbers.

I reckon implementing both loops is feasible, even if the inner loop would require an order of magnitude more resources (for reverse-engineering its own output, calculating some metric for the deviation from the goal, and re-doing the work if needed). The outer loop is much worse, since it would mean going over the whole knowledge base (model weights, whatever) and adjusting it (by reinforcing some parts and demoting or even deleting others).
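
To make the two loops a bit more concrete, here is a minimal sketch in Python of how such an agent could be structured. Everything in it (the Agent class, generate(), distance_to_goal(), update_knowledge()) is hypothetical and merely illustrates the idea above, not any real system:

    # Hypothetical sketch of an agent with the two feedback loops described above.
    class Agent:
        def __init__(self, knowledge):
            self.knowledge = knowledge  # stands in for model weights / a knowledge base

        def solve(self, goal, max_attempts=5, tolerance=0.1):
            result = self.generate(goal)
            for _ in range(max_attempts):
                # inner loop: measure how far the current result is from the goal
                if self.distance_to_goal(result, goal) <= tolerance:
                    break
                # and adjust what is being done based on what was actually produced
                result = self.generate(goal, feedback=result)
            # outer loop: fold this experience back into the knowledge base
            self.update_knowledge(goal, result)
            return result

        def generate(self, goal, feedback=None):
            ...  # produce an answer from the current knowledge (and the previous attempt, if any)

        def distance_to_goal(self, result, goal):
            ...  # some metric for the deviation from the goal

        def update_knowledge(self, goal, result):
            ...  # reinforce what worked, demote or delete what did not

The sketch also shows why the outer loop is the expensive part: update_knowledge() has to go over the whole knowledge base, while the inner loop only re-does a single task.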

So if I believe it can be improved, why do I claim it’s not helpful? What I’m saying is that while in its current state it may still be useful to you, it is not being developed to make your life easier. It should be obvious that developing such a system takes an enormous effort (all the input data to collect and process, let alone the R&D and overseeing the training), so it’s something that can be done only by a large community or a large company (the latter often stealing the results of the former). And companies do things not to advance human well-being but to make profit, “dishonestly, if we can; honestly if we must” (bonus points for recognising which sketch this quote is from). I consider the current situation to be a kind of arms race: somebody managed to convince somebody else that AI will be the ultimate solution, so the first company to get a practical solution will gain an extreme advantage over its competitors; thus the current multi-billion budgets are driven mostly by fear of missing out.

What follows from the fact that AI is being developed by large companies in pursuit of commercial interests? Only that its goal is not to provide a free service but rather to recoup the investment and make a profit. And the profit from replacing an expensive workforce is much higher (and more real) than whatever you might get from just offering some service to random users (especially if you do it for free). Hence the apt observation that “AI” takes over creative (i.e. highly paid) work instead of household chores, while people would rather have it the other way round.

As a result, if things go the way the companies developing AI want, a lot of people will become rather superfluous. There will be no need for developers, and no need for people doing menial tasks like providing information, performing moderation and such (we can observe that to a large extent even now). There will be no reason for those free-to-play games either, as the non-paying players there exist only to create a background for the whales (so called because they spend insane amounts of money on the game). Essentially the whole world will become like the Web of Bullshit, with people being rather a nuisance.

Of course this is just an attempt to model how events will develop based on incomplete data. Yet I remain an optimist and expect humanity to drive itself to an early grave before AI poses any serious threat.

2 Responses to ““AI” is not there to help you”

  1. Paul says:

    One of the popular “AI”s helped me rewrite a video filter to better quality.
    You still need to always review and sometimes babysit the “AI reasoning process”.

  2. Kostya says:

    In other words, just add some actual intelligence to make it work.

    I still prefer the old-fashioned way since it allows me to learn stuff (which is also the reason why most people don’t like it).
