Thoughts on AI

by Mindaugas Rudokas

[Illustration: a humanoid robot sitting on the floor, leaning against the wall, battery-depleted, with legs missing. Hand-drawn in ink and markers.]

When I returned to social media last year (after a decade or so of abstinence), my intention and expectation was to mostly post about UI design and SwiftUI. That’s what I professionally care most about these days. But when, at some point during the summer of 2025, I took a sample of my Mastodon timeline for analysis, the distribution of posts by topic looked something like this: 55% about AI, 44% bashing the macOS Tahoe design, and ~1% about SwiftUI.

I guess critiquing Tahoe still counts as design talk. But the AI topic reached its dominant position without any explicit intent or planning. It just gradually sneaked in and hijacked a significant portion of my attention. Apparently it’s hard to keep your head out of the tech bubble, even if you rationally think of it as a bubble. The media washes you with a stream of muddy hype water. FOMO forces you to pay attention. ADHD looks for green fields of opportunity. A perfect combo.

At the beginning of 2025 I still had a rather reserved attitude towards generative AI. But then Claude Code dropped, and it started to look like it was time to dive in and explore code generation more seriously. The agent-in-the-background mode of interaction was definitely a big trigger for me (and I’m obviously not alone in this). It sold itself as a materializing sci-fi dream, a new kind of Sims game. The “only” issue is that the backbone behind this new shell is still the same level of “intelligence”.

When you deviate from your original plans and spend time on something else, you do it in hopes of a greater outcome. But I came out of this period disappointed. In retrospect it feels like I squandered time familiarizing myself with new and shiny tech that overpromised and underdelivered. So far the ROI for me personally is horrible. At the very least it’s still too early in terms of quality and ability. But I also find that the agentic workflow makes the process of building software far more scattered and less enjoyable. Keep in mind that this is coming from a perfectionist design engineer’s point of view.

For better or worse, I keep reinforcing the realization that my personal traits, combined with decades of professional wandering, have pushed me far away from the statistical average participant of the software industry. Oh well…

The Bad Fit

I’m in Apple’s native development boat, which means Swift. Swift is certainly not among the most popular languages in open source, and the amount of available code is no match for all the web’s pages and apps. That alone seems to have a noticeable effect on the quality of results you can expect from code generation models.

On top of that, I currently have the luxury of working with everything newest — the latest Swift enhancements and everything SwiftUI delivered at WWDC 2025. And new information, by definition, lags in appearing in the data sets models get trained on. Even after the cutoff date moves forward, there is always far more code around that uses old APIs, so the oldies will for a long time have larger probabilities burned into the model’s brain. You need to hunt down all the cases important to you and manually compose them into rules. And when the LLM “forgets” the rules during a longer session, you need to remind it to reread the damn rules. In other words, models are primed to generate technical debt. If you at least have a requirement to support a couple of OS versions back, your experience may be somewhat better in this regard, but that doesn’t mean you have zero need to adopt new APIs either.
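To make this concrete: the rules I’m talking about are essentially a list of “prefer the new API” reminders. A hypothetical excerpt from such a rules file (the file name, format, and exact wording here are my own sketch, not any canonical convention) might look like this:

```markdown
# SwiftUI rules (excerpt)

- Target the current OS release only. Do NOT add availability
  checks or fallbacks for older OS versions.
- Use `foregroundStyle(_:)`, never the deprecated `foregroundColor(_:)`.
- Use `NavigationStack`, never the deprecated `NavigationView`.
- Prefer the `@Observable` macro over `ObservableObject`/`@Published`.
- Reread this file before writing any view code.
```

The annoying part, as noted above, is that nothing guarantees such a list actually stays in the model’s working context for a whole session.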

And that’s before we even get to agentic coding — I find it hard to get anything useful even when asking models narrow, specific SwiftUI questions. At least that’s my experience: constant hallucinations. It can’t be just me — if AI were any good at this, there would be no Slack channels full of SwiftUI problem solving.

Now, regardless of the frequency of hallucinations, the fact that it is never zero — and seemingly cannot be reduced to zero with the current LLM architecture — completely eliminates any near-term chance of true spec-based development becoming possible. By “spec-based” I mean the “code will no longer matter” vision for all kinds of coding, not only for vibe-coded prototypes. It is an entertaining idea, but currently it’s just pushed as part of a strategy to promote a hypothetical future (i.e. science fiction) for hype-building purposes.

This means that, for the foreseeable future, the agentic coding workflow for anything more serious will continue to require both writing as detailed a spec as possible and reviewing every single line of generated code. Most often that feels like at least 2x the work to me. Emphasis on “feels” — and I think it’s mostly rooted in my hatred of indirect modes of work.

First, I need to detail the spec to a level that I would never need for myself. It’s always hard to guess where to stop — will this be enough information already, or does that stupid thing need more guidance? In the process you realize how much knowledge — edge cases, assumptions, defaults, rules — you have accumulated over the years, and now all this autopilot-level knowledge needs to be identified, extracted from your head, and written down. As a generalist working solo, you are used to doing a lot in your mind — you only write rough plans and tasks. Skipping the externalization of communication between product manager, designer, programmer, and tester by housing them all in a single brain is already a huge efficiency multiplier — should I necessarily chase the multiplier for how fast the code gets typed by losing time on externalizing that communication? The cumulative gains and losses here are hard to measure, and they obviously vary wildly case by case, depending on how easy different concepts are to describe in inherently precision-lacking natural language. But it always feels bad to give up certain efficiency in exchange for probable efficiency.

And second, the “reviewing every single line” part. I was humbly reminded that I never liked doing code reviews or analyzing code bases. And why would reading code of average quality (at best), produced by a non-reasoning entity, be any more pleasant? It’s not. It’s worse, because of the zero trust you need to maintain. It’s like “self-driving” where you still need to oversee the steering. Maintaining attention on a task you don’t enjoy — that’s just a “perfect” recipe for the productivity of an ADHD brain.

When failures come up, the negative multiplier grows even more. I hear people saying things like “I vibe code for 1 hour, then do 4 hours of cleanup”, and that it feels like a huge performance multiplier to them. I have zero understanding of how that works. Oh, and apparently some people on LinkedIn now advertise themselves as “vibe code cleanup specialists”. What?! Here is my personally preferred method of cleaning up the mess: delete everything. Done! It’s clean now! Different brains, different approaches.

Speaking of ADHD brains — there are way more opportunities to lose time.

You can get excited about a new tool and play an endless game of tweaking, changing, optimizing. Next month there will be yet another new model or agentic feature. Everything changes fast; there is an endless supply of newness.

The agentic workflow is also a lot of babysitting — an additional source of interruptions. You cannot go into deep work mode while on babysitting duty. That’s true even for non-ADHD brains — increasing multitasking always has a cost. And for ADHDers the cost is way higher.

For a person who prefers working solo to avoid all the distractions and inefficiencies of the office job, introducing an agentic workflow is in some ways like getting back into a team — but with the current level of “intelligence”, you get a disproportionate share of teamwork’s negative aspects. It’s like managing one or more needy interns, only a lot worse. For now, an AI agent is like an employee with half a brain missing: no real understanding, no ability to learn. Composing the memory and improving the agent’s behavior is your job. Pointing out the mess it created is also your job. Reading “You are absolutely right!” as an “apology” afterwards is also part of the deal. I lack the patience for these recurring fights with a probabilistic system. It’s basically babysitting an entity that far too often insults you with its stupidity. In the end you start questioning who is helping whom here. All this patching and experimenting feels like volunteering your efforts to improve the AI companies’ training data sets. Shouldn’t I be paid for this, then?

Which brings us to the last aspect I despise about generative coding: the missing privacy. My codebase gets sucked up and stored on somebody else’s server. I guess that’s fine if your project is already open source. But for private code, I do not want to give up the assumption that private means “my eyes only”. Accidents and leaks happen, and we cannot trust any of the AI companies anyway. You have to assume the privacy policy means nothing. Oh, you unchecked the “use my data for training” checkbox? Well, who cares? Just assume your code is weakly public. I am so not ready for the zero-privacy world.

Let’s switch gears now from code to design. This part will be rather short.

Currently I see no use for AI in my UI design workflows. A picture is worth a thousand words — if you have mastery of a design tool, you will obviously not revert to describing your UIs in natural language. And I usually know what I want — I don’t need a visual brainstorming partner for this.

Overall, far too little attention is given to vector graphics file formats as output from the models, which would have far more potential — you could take over at any point and fix things in editable files. Obviously there will never be as much public data in Illustrator/Photoshop/Sketch formats as there is for bitmaps and for web UIs expressed as HTML/CSS. So maybe that’s a wall in the general case; I’m not sure what to expect here. Except that we all know where 98% of the world’s design files are hidden: on Figma’s servers. They are in the perfect position to produce the best models for anything in 2D design; they are already doing some things, and will do more, I’m sure. But as neither a Figma user nor a fan of web apps in general — little do I care.

And not that this is any sort of disappointment — design has no boring parts, so why would I complain about the robot not taking over? I’m fine and happy dragging rectangles.

So, to conclude this entire tirade — my current evaluation of everything generative, for my own purposes, is: meh.

Good For You/Them

We all work on different things, we have different goals, our brains work differently, and our standards differ too. Different models give different results. The same model gives different results for the same prompt. Obviously it’s not possible for everyone to have the same experiences and opinions about this tech.

Maybe web devs and Python aficionados are already living the dream? At least it sounds like it. Not that it’s possible to distinguish that impression from the overall hype. The situation certainly must be better there, given the amount of available training data. I can’t confirm it, but good for them, if true.

Another big cohort is startups, where testing the idea is all that matters. That practically means allowing yourself to ship prototypes to the public. Now you can maybe do that even faster. Less time spent on what will likely be a failure is certainly a win. And if you survive, a full rewrite of a mostly vibe-coded monster is not a big deal. Good for you.

If you are writing boring software that has already been written thousands upon thousands of times, and you call most of it boilerplate — generative tools must excel at this. We should all do less boring stuff. Good for you.

What I’m really stoked about is prototyping in the design field. Designers who don’t code can now create working prototypes way faster. That must feel like superpowers. Good for you.

Also, if you are not a designer but need a UI, I can see the generative approach being empowering. Good for you.

And for folks who are just at the beginning of learning to code — I believe AI assistance can greatly speed up the learning process. Good for you.

It’s just that I’m not in any of these boats. And that’s fine.

Still Useful

Very localized code generation (at the function level) can be useful sometimes. Even when it fails, fixing, salvaging, or retrying does not carry as much burden as an agent-generated PR with thousands of lines of code. But having constant AI auto-complete enabled is annoying at the same time (ADHD nuances again); I prefer invoke-on-demand. Overall, the UX could use more innovation in this domain — slapping a chat window inside an IDE is not design.

Generating mock data — avatar pictures, lists and tables of data — is a great case. I imagine it could be very helpful. I’ve just never needed it so far.

Of course I sometimes use it for preliminary search and research. I just never trust it. And it certainly hasn’t replaced Stack Overflow for me.

I would use it for brainstorming if I felt the need, but I always have too many ideas anyway.

What I probably use it for most often is looking up English language things, as a non-native speaker. This is probably the only thing I trust LLMs to do well enough for my needs without double-checking.

And, for sure, translation in general. That’s nothing new for this age, but I like it getting better. Perfect it, make it super accessible — like Apple’s Live Translation feature, but for all existing languages, on any hardware. That would be a positive change.

AI can be good for triage tasks. I would love to apply it to some information channels to filter out AI slop and bots. It’s just that those platforms are not incentivized to make that easy.

For visuals, I quite like the ability to generate textures or make them tileable, for example. Upscaling can also be handy. In essence, it’s the new Magic Wand–like tools that are welcome.

Sounds pretty boring? Yes. And I think I’d prefer it that way. While there isn’t enough intelligence, just double down on the paths paved by machine learning and improve the tools by staying mostly invisible.

It Might Get Better

To some degree it likely will, as the money is still flowing and the bubble has not popped yet. But it is also way too easy to speculate about the future of this tech, as the hype itself proves. So keeping it at “it might” feels like the most accurate prediction.

At the very least, the tooling and interfaces will surely get better over time, even if advances in “intelligence” levels stall. One could even argue that this is mostly what’s happening already — even the companies producing foundation models are focusing more and more on building products, while incremental model updates are getting less hype-worthy.

Now, if coding models ever become really good — cloning-myself good — then I would love to play that Sims game for real at some point; like most indies, I have more ideas than time. It would not change the fact that natural language is more efficient at describing only a small part of what I usually do, but I’m sure I would find use for a digital employee that can reason, learn, and be trusted. The director’s role of building the product vision and doing the design are the most fun parts anyway — I would gladly delegate the coding to the perfect coding god. It’s just that I don’t believe that future is happening soon.

Are We Better Off Overall So Far?

No. So far it does not feel better at all, although it’s not a straightforward conclusion. I think what bothers my mind the most is the double-edged nature of the situation. The ethics/copyright angle is fucked; forget climate change, who cares about those pesky humans or the entire planet — the big money developing this tech is behaving like big money always has. Disruptions of various kinds are here, more are probably coming, the consequences are unknown — we are in an experiment. And still, it is very cool applied math, interesting latent-space exploration tools, possible automation and productivity gains. It’s a mixture of good, bad, and unknown.

Part of the issue is that so many things were going down the enshittification route long before the current AI sprint started. But now AI acts as an additional accelerator of the negative trends, seemingly in far more ways than it helps with positive progress.

What will happen to software at large? The average quality level has been on a downward slope for years (decades?) already. And now random code is definitely diffusing into the veins of the industry. The overuse of code generation is very likely to lead to a codebase obesity epidemic. It will not get better, will it? I’m not optimistic.

The general population gives way too much trust to AI chatbots. What will 2% consumed lies do to global knowledge? And it’s not even particularly hidden — e.g. OpenAI, in its ChatGPT Agent introduction, bragged about achieving 98% correctness on a financial data processing task. Enough said.

Slop of all forms is flooding the internet. So much noise. Discoverability of good content is nosediving — and not only on social networks; look at what’s happening with Etsy or Spotify. And it’s not only about consuming — the same applies to your own content and marketing efforts as a creator.

As if TikTok weren’t enough, we got SoraSlop. How will we raise the next generation with this shit available? What about their taste development (let alone the dopamine-based issues)?

We don’t know how learning will be affected. Will the utopia of perfect personal tutoring materialize or will it just enable laziness by providing many shortcuts to skip the thinking part altogether?

Scammers are over-empowered.

Bots are everywhere and harder to distinguish at first glance. Disinformation and propaganda machines are way easier to create and scale.

And all of this is part of the much-touted “democratization of skills” package. That’s a very interesting definition of “democratization”: basically, the software industry’s kings stealing everything that hard-working professionals have created, distilling it into automated “skills”, and distributing it to the unskilled population (bad actors not excluded) while collecting money for it.

No, really, just think for a second — who “contributed” the best-quality data for AI training? Obviously the highest-grade illustrators, writers, photographers, designers, coders. And what do they get in return? An average stochastic simulated assistant parody. Maybe some helper tools eventually (for a price, definitely). But also reduced demand for their services. And a polluted internet. A very “fair” and “democratic” deal indeed. Meanwhile, those without the skills get something to play with (or do damage with), most often lacking the taste (or morals) to judge the outcomes. Are their lives enriched? Well, scammers are definitely happy. But it is very hard to empathize when it comes to normies and generative “art”. For example: I am nothing when it comes to music composition, but it is a cool skill, and all I could care about is how to learn the craft — I have zero interest in generating a random track with AI. Where does this urge to create slop come from? Does it pay in attention? Is it just fun? Or is it mostly ad-revenue-driven “business” attempts in this attention economy of ours?

We have always strived for automation through invention. But it’s not exactly shoveling we are dealing with here. It’s too much removal of the human touch from where it actually matters. And in the process, livelihoods are affected. Human labor will not be replaced to the level the hype would love us to think, but there are real effects in some sectors. The closer the output of your work is to bitmap images or written/spoken language, the harder you’re hit. Demand for illustrative work has certainly dropped — artists are having the hardest time ever. Then think about voice actors, translators, copywriters. Are they all on UBI programs yet? Are they all looking for new hobbies to enjoy instead of doing those “uncreative and boring” jobs? I don’t think so.

So is there more empowerment or more disempowerment? More productivity or more plain slop? In my eyes, it’s a net negative so far.

Renewed Intention

Honestly, having finished writing this, I sort of regret it. Did I really need a therapy session of writing this out? Maybe. Or is it just more hours lost to the AI topic? I hope it will at least be interesting for future me to reread at some point.

In the meantime, I have already been actively avoiding AI usage, news, and discussions for over four months now. And I still have zero interest in hearing about the next model and how well it performs on benchmarks, or about the next VS Code clone with an integrated chat. OK, the recent Xcode 26.3 release was significant for our circle in this regard, but that only proves the point — everything is in a state of flux. And I’m inclined to let it brew for a while longer — let Apple and the enthusiasts put in the work. So, everybody, please clone your knowledge and upload it to the clouds together with your Swift codebases, so that somewhere around the fall there is something worth trying again. Or is September like a million years away on the exponential timescale of AI progress? Somehow I doubt it.

While complete ignorance is impossible to achieve without moving into the woods, the intention not to spend attention on AI is there, and it is going quite well so far — I like it; I feel rested and neue-retro!

2026 with less distraction, please. And more human touch in everything. Let’s go!