Friday, March 13, 2026

Programming in 2026 with Artificial Intelligence tools

Some context:

I work mostly on legacy applications. Some are gigantic mono-repos, some are running on Python 2, and some are a mash of half-solutions and ideas (for example, half the UI is in JavaScript and the other half in TypeScript, made to work together with a kludgy, brittle build script).

I've been using mostly Cursor. It was provided by my employer and I'm familiar with its VS Code-based editor. I also installed their CLI for Windows Terminal. I've tried other tools like Copilot (not a fan, it was too intrusive for me) and Gemini.

Here are a couple of things I've learned so far:

1. The Cursor agent for coding will get you almost there, I'd say 90% of the way. It will often misinterpret some nuance, but I think that's probably more about the prompt used than the agent itself. You can "chat" with the agent to get what you want, but be aware that this has limitations - i.e. context.

2. Speaking of prompts, the Cursor agent works better if you provide detailed instructions rather than letting it guess. Too often, if you give it a vague enough prompt, it will use the latest or most "popular" approach. For example, it picked the Set collection for a JS codebase that predates 2015. Sets are only supported from ECMAScript 6, which was released in 2015. But then again, that project's codebase was a mess of shims and Babel scripts.
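To illustrate the kind of fix that mismatch forces: on an ES5-only codebase, deduplicating values with a Set isn't an option, so you fall back to a plain object as a lookup table. This is just a sketch of that ES5-safe pattern (the `unique` helper name is mine, not from the project):

```javascript
// ES5-safe deduplication: no Set, no arrow functions, no let/const.
// A plain object stands in for the Set's membership check.
function unique(items) {
  var seen = {};   // object keys act as the "set"
  var result = [];
  for (var i = 0; i < items.length; i++) {
    var key = String(items[i]); // ES5 object keys are strings
    if (!seen.hasOwnProperty(key)) {
      seen[key] = true;
      result.push(items[i]);
    }
  }
  return result;
}

console.log(unique([1, 2, 2, 3, 3, 3])); // [1, 2, 3]
```

The string-keyed lookup means `1` and `"1"` collide, which is a known limitation of this pattern and part of why ES6 added Set in the first place.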

3. Setting up a project, especially a long-term one, needs some time and thought. For example, Cursor supports things like Rules to provide project- to system-level instructions. You can add hooks to do things like run formatters or scan for bad practices like hard-coding secrets or API keys. The small downside is that senior developers have to make conscious decisions about these things, because you can include these rules and hooks in the codebase.
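As a rough sketch of what a checked-in project rule might look like for a legacy codebase like the ones above (the exact file location and frontmatter fields come from Cursor's rules format and may differ by version; the contents are illustrative, not from any real project):

```markdown
---
description: Conventions for the legacy UI code
alwaysApply: true
---

- Target ES5 only: no Set/Map, arrow functions, or template literals
  unless the file already goes through Babel.
- Never hard-code secrets or API keys; read them from environment
  variables or the existing config layer.
- Match the file's existing language (JavaScript vs TypeScript);
  do not convert files between the two.
```

Because a file like this lives in the repo, it becomes a team decision rather than a personal preference, which is exactly the "conscious decision" point above.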

4. I don't like the feeling of giving the Cursor agent "full control" of my PC outside of the editor, like issuing Git commands or making az or gcloud calls. I didn't auto-allow those types of commands. If the agent wants to execute them, I have to allow it manually.

Personally, I still don't fully trust AI to do my job. Don't get it twisted though: my job is to solve business problems, NOT to write code. Writing code is just a side effect. But not fully trusting AI tools doesn't mean not using them. It just means using or viewing them differently from what social media or the "tech bros" are saying.


