Reading the tech-adjacent internet, be it Hacker News, tech X, tech Reddit, or random news outlets, feels increasingly weird these days. Between tech CEOs repeating that AI will take our jobs in six months, in yet another six-month cycle, engineers worried about their livelihoods, and new versions of large language models popping up left and right the way JavaScript frontend frameworks used to, we are definitely living in interesting times. On one hand, the never-ending stream of “it’s all over for software development” news might be pushing some people into panic mode, understandably so. On the other, it feels like just another reason to learn something new, in a profession that requires learning something new all the time to stay on top of the latest developments in our field.
I am allowed, and even encouraged, to use genAI by the company I work for. I use it for planning, researching new areas (like my recent first foray into the world of mobile AR), analysing lengthy logs or writing unit tests. Sometimes I even use it to help me find a particularly stealthy bug - or more precisely, to help me get unstuck when I feel I’m looking in the wrong place.
I use genAI enough to know not to trust it.
Even though using LLMs in different capacities has made prototyping way faster, the time saved goes instead into thoroughly reviewing AI-generated code. In my experience, if you’re even slightly vague when defining the rules your LLM of choice should follow while generating code, you’ll inevitably end up with something unreadable, nonsensical or straight up dangerous. Recent developments in agentic coding have undoubtedly made it easier to set rigid rules for the AI to abide by, but even with MCPs, lengthy config files and you overseeing its every move, LLMs still tend to be rebellious, owing to their next-token prediction nature.
Unless we insist on writing code on our own, our job as software engineers has therefore shifted from being able to churn out code to being able to read and understand code thoroughly, and to describe requirements and expectations clearly enough for the AI to follow.
If anything, I don’t believe using AI impairs our ability to write software, unless we allow it to. I believe it helps us develop a much-needed skill: understanding code we did not write ourselves. And it’s a skill that sets us up well for mentoring roles.
I still enjoy writing “grass fed, free range code” by hand, and I do it pretty much daily, both at work and in my personal projects. But I’m also trying to stay aware of the potential paradigm shift and, quite frankly, I find it thrilling to explore new ways of working with AI.
It’s just fun to work with new toys, the way it always has been. That’s what has kept me interested in coding all these years.