
The AI Dilemma

This is more of a brain dump than a well-constructed blog post. Just a warning. Since the inception of LLMs and their quirky ways, we, as a society, have been seeing the magical letters ‘AI’ shoehorned into any space they will fit. So far I have seen all sorts, from AI toothbrushes to AI code reviewers, and I do not expect that this is where the metaphorical buck stops.

Now, don’t get me wrong, I love innovation. I have even utilised some LLM-based tools in my own workflow. However, there seems to be a massive divide between what is being promised and what is actually delivered. In my experience, which mostly dwells in the realm of software development, the output of these tools is very hit-and-miss.

If you ask an LLM agent to do something trivial, like scaffold a Next.js app with a familiar tech stack, it will probably nail it. But, then again, there is probably a template which will generate the same output via a simple npx script. The same is true for more bespoke prompts: if you ask the LLM to solve a logic-based problem which has been solved time and time again, it will probably nail that too. However, if you ask it to solve a problem which is extremely off-piste, it will probably fail.
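For example, a single command along these lines (a sketch; the exact flags are illustrative and may vary between create-next-app releases, so check its docs) will scaffold a working app with no LLM involved:

    npx create-next-app@latest my-app --typescript --eslint

The template does deterministically, and in seconds, what the agent does probabilistically.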

You could spend the time feeding in all the background information and context required for the LLM to fumble its way to a solution, but that raises a more philosophical question: what is the benefit? Sure, it may generate some code which works, but at what cost? By delegating the responsibility of finding a solution to the LLM, we have essentially forfeited our opportunity to learn. This is a problem.

Learning in the ‘AI’ era

If we allow ourselves to continue down this wayward path of just asking a black box for an answer to every complex issue, how will we, as a species, progress? Further to that, how will the LLMs themselves ever improve? If you consider that LLMs rely on being fed unfathomably large corpora of data in order to ‘learn’, it seems obvious that they will hit a wall at around the same time that humans stop trying to learn.

If we allow ourselves to stop seeking knowledge and building our own understanding of the world, we will surely end up in a situation where humans start to intellectually regress. I have started witnessing this first-hand when conducting code reviews: I have seen developers submitting code for review without even bothering to test that it works.

This kind of non-issue should not exist in 2025, but the rise of LLMs seems to be driving a rise in human complacency. Instead of learning how to read a stack trace to find the crux of an issue, we do a bit of copy/pasta and let an LLM fix it. This is dangerous. Think about the bigger picture: we are paving the way for software development and engineering to become a field dominated by opaque boxes owned by VC-backed companies that only have the interests of their investors at heart. Didn’t we learn anything from the absolute obliteration of the human attention span at the hands of anti-social media?

It would be easy to dismiss this concern as one which only affects software developers, but I believe it has the scope to damage the entire human race. If we allow ourselves to be robbed of the opportunity to learn and, as a result, damage our actual ability to learn, we will drastically widen the divide between the haves and have-nots. We will end up in a situation where the path out of poverty is so steep and obscured that people give up hope before they even leave school (school kids are already submitting LLM outputs as homework instead of actually studying).

This may well seem like a long old rant against LLMs; it isn’t. It is more a call for us all to use them responsibly. Before automatically reaching for an LLM, try to find the solution yourself. In doing so you will be improving your own reasoning skills. Simply pasting a stack trace into an LLM and letting it fix your app robs you of the euphoria of solving a complex issue yourself.

Try not to treat the LLM as a genius assistant that knows more than you and will do your work for you. Treat it as a tool in your toolbox: use it for rubber duck debugging, construct your own solutions and then ask the LLM to assess them. If you do use an LLM to generate a solution for you (there’s nothing wrong with that), make sure you know what it is doing and, more importantly, why it is the solution.

I guess the main point I am trying to convey is that we should not allow ourselves to become reliant on LLMs at our own expense. I doubt they will keep advancing on the trajectory demonstrated over the course of the past year, but they will keep advancing. As we see the rise of fascism and right-wing ideologies, it has never been more important for humans to retain their humanity.

If we allow all our learning to be controlled by VC-backed companies, it is only fair to assume that our learning will be entirely in the interests of said VC-backed companies. Based on their track record, that can only be bad for everyone except the people cashing the cheques and the political parties they fund.