Why I refrain from fully committing to AI tools

There’s been a lot of noise in the developer community and on tech Twitter ever since GPTs were released to the public. The introduction of AI frameworks and AI-powered IDEs, the recent announcement of MCP, and the subsequent rise of agentic software development workflows have only fueled the hype train further. Influential figures, from Nvidia CEO Jensen Huang to politicians such as former US president Barack Obama, have made statements emphasizing AI and how it is going to affect every job.

The pressure to keep up is real. In this blog post, I share my opinion and talk about why I find myself hesitating to fully embrace AI tools in my day-to-day software development workflow. This post is more of a summary, a reflection on why I’m intentionally cautious about going all-in.

Avoiding the Hype Train

I might not agree with Theo (t3.gg) and his way of approaching tech all the time, but he is undoubtedly a thoughtful engineer who has successfully launched projects like T3 Chat. In one of his YouTube videos he made a good point about how he avoided the FOMO of the React ecosystem and joined only after it matured with hooks, Next.js, and React Server Components, skipping the chaos of its earlier phases. I find myself in a similar position now with AI tools. I'd rather miss the initial spark than commit too early to a volatile style of writing software.

Vibe Coding Isn't My Thing

"Vibe coding," coined by OpenAI cofounder Andrej Karpathy in February 2025, is the idea of relying on AI suggestions and code generation with little to no actions or correction by the person prompting—just keep throwing prompts and patching it until it works.

One viral instance of "vibe coding" came from the controversial indie hacker Pieter Levels, who live-streamed the creation of "fly.pieter.com", a functional flight simulator built using only AI. While he rapidly produced a playable version, the process was riddled with buggy, inefficient AI-generated code. And not everyone has Levels' experience in developing software. What fueled the hype train further was Elon Musk retweeting Levels' original post, Levels' large fanbase, and his knack for marketing: he didn't even bother to choose a name for it.

I like to be intentional with how I build systems, and I’ve found that AI tools often break down beneath the surface. They hallucinate functions, use inefficient or unsafe logic, and miss edge cases that I’d rather not debug at 2 a.m.

A great example of this was when Prime (the YouTuber with the mustache) and his group of friends gathered in a tower and attempted to "vibe code" a tower defense game in a week using Cursor. It started off looking promising, but quickly turned into a tangled mess of code. In the end, they had to rewrite most of the project by hand.

That experiment confirmed that AI can be helpful, but letting it steer the ship is not a wise decision.

Code ≠ Art

Generating code is not the same as generating art.

With AI-generated images, a misplaced pixel doesn’t matter. But a misplaced ! in a condition can open a security hole. A wrong parameter can crash production. In code, small mistakes aren’t aesthetic—they’re catastrophic.
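
To make that concrete, here is a contrived sketch (the names and scenario are mine, invented for illustration):

```typescript
// Contrived example: one stray "!" turns an authorization check
// into an authorization bypass.
interface User {
  isAdmin: boolean;
}

function canDeleteAccount(user: User): boolean {
  // Intended check: only admins may delete accounts.
  // return user.isAdmin;

  // With one misplaced "!", the kind of slip that is easy to skim past
  // in generated code, the condition inverts and everyone else gets access:
  return !user.isAdmin;
}

console.log(canDeleteAccount({ isAdmin: false })); // true: a security hole
```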

Even with the "rules" files in Cursor and the context you provide to these tools, they still manage to abuse regular expressions when simpler solutions exist, division by zero cases slip past their logic and their famous use of hallucinated libraries and non-existent functions is one of worst ways to spend your time correcting them and going into a feedback loop that often ends up starting again from scratch.

Tons of Prompts

One common assumption about AI coding tools is that they dramatically cut down the time it takes to write software. But in reality, much of the "typing time" isn’t actually saved; it’s just shifted. Instead of writing code directly, you’re now writing prompts, reviewing AI output, fixing subtle bugs, clarifying instructions, and occasionally raging at the model (or your keyboard) over a dumb decision it made.

Jarred Sumner, the founder of Bun, summed this up in a recent tweet:

This reflects a truth many software developers have started to feel: AI-assisted coding does not eliminate the work, it just reframes it.

Not Everyone Needs to Be a Developer

Oftentimes, AI-assisted development evangelists will tell you that AI will allow "everyone to build software." Nvidia's CEO even went as far as making this controversial statement:

...everybody in the world is now a programmer. This is the miracle of artificial intelligence. (World Government Summit in Dubai)

While that sounds nice on paper, history shows us that it's not realistic at scale.

We’ve already seen a version of this with low-code and no-code platforms. They made it easy for non-developers to create basic apps and landing pages. But when it comes to building robust, scalable, secure, and maintainable software, especially something like a full-fledged SaaS product, those platforms quickly hit their limits.

Software engineering is not just about writing code; it’s about making trade-offs, understanding edge cases, and managing long-term complexity. And while AI might assist in that process, the idea that typing a few prompts can take development to the point where everyone becomes a software builder is more fantasy than future, at least for now.

A Second Brain for Decisions

There are moments where AI tools genuinely shine. One of the most helpful ways I use AI is as a second brain—a thinking partner for making technical decisions.

Whether I’m considering the trade-offs between libraries, exploring how a certain pattern scales, or asking for the pros and cons of a specific architectural choice, AI often helps frame the problem clearly. It won’t make the decision for me. Sometimes, just getting a summarized table of the pros and cons implied by a specific technology helps me realize what I truly care about, especially as someone who's often haunted by perfectionism and the tendency to overengineer solutions. In fact, I always inject my AI assistant with a prompt to check whether the solution is overkill for the task at hand.
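
Concretely, that standing instruction looks something like this (a sketch of the idea, not my exact wording):

```
# Excerpt from an assistant rules file
Before proposing a solution, state whether it is overkill for the task at hand.
Prefer the simplest approach that satisfies the stated requirements, and flag
any abstraction, dependency, or pattern the task does not strictly need.
```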

This ability to brainstorm quickly, and maybe even spin up a quick demo to validate a decision, helps free up mental bandwidth for other tasks.

A Smarter Rubber Duck

I also use AI tools as an intelligent debugging partner. (For context, "rubber duck debugging" is a practice where you explain your code out loud—often to an inanimate object—to clarify your thinking.) Now, I describe it to an AI, which often points out the obvious thing I’ve missed when I am stressed, or even suggests a path I hadn’t considered.

On top of that, AI saves time on research. Instead of going through outdated docs or digging into Stack Overflow threads from 2013, I can ask the model for a summary. This works especially well when I'm onboarding to a new library or working in a domain I don’t deeply care to master.

It can dramatically cut down the time I spend context-switching during development.

AI is here to stay, and I agree with that. But it doesn't replace the deep thinking, careful problem-solving, and sheer understanding that go into building solid, scalable systems.