AI Observations: 2025

Introduction

There are at least three groups I identify in the software industry when the topic of AI comes up: Zealots, Skeptical Implementers, & Deniers. I will present characteristics and perceptions of each group, validate and refute their positions, and then give my own observations.

Deniers

The AI Denier believes AI has few or no benefits. In fact, it has more negative side effects than positive. The industry, and world, would be better off rejecting it.

Denier Validations

Deniers recognize there are various additional market factors at play and plenty of blame to go around, but AI is the layoff scapegoat. Companies purport performance increases when implementing AI. A company has never lied about the reasons it reduces its workforce; it is certainly not because of poor market conditions, often the result of its own decisions, but because it is simply winning. So, while AI may not be the direct cause of job loss, it is a proximate cause.

Data centers are bad for the environment. AI training operations require enormous amounts of computation (read: energy), and prompt responses require non-trivial amounts. I won't entertain the "number of bottles of water" argument, because there is no real measure. Instead, we can correlate the demand for data centers with the speculation around AI needs.

Denier Refutations

Certainly there are use cases and benefits. If we unravel the marketing of AI, we discover Natural Language Processing, which has implementations in Language Models and then Large Language Models, the more significant and recent development within "AI". NLP techniques have been used successfully for many years in various applications, though not quite in the "Large" category until more recently. As such, I would refute the claim that there are no, or even few, benefits, if we consider AI to be an inclusive overarching field that includes NLP in software applications. A denier may overlook this history and the existing working use-cases.

Software engineering workflows, whether we resist them or not, are demonstrably changing as these tools are implemented. The denier may claim that this is because engineers are forced to utilize these tools under duress. The DORA report (video) disagrees, if you consider it a valid source (Google definitely has a conflict of interest, but DORA has a good reputation). Engineers are integrating everything from autocompletion to "agentic" flows to prompting for information in various mediums, and perceiving some benefits; perhaps not as beneficial as touted by Jensen Huang, but perceived benefits nonetheless.

Skeptical Implementer

There have been and will be software hype cycles, some claiming to be the 'end' of software engineering. We are at the peak of inflated expectations. Some hype cycles resolve into useful artifacts from which you've benefited. Experimentation is important, but we must adopt what works and reject what doesn't.

Skeptical Implementer Validations

The skeptic emphasizes critical thinking, which protects against misinformation. They are suspicious of absolute statements from either extreme of the AI spectrum, allowing them reasonable evaluation and usage. Is AI a bust? Well, the skeptic probably learned something valuable. Is AI God? Well, the skeptic likely implemented enough to be saved.

Understanding the non-deterministic nature of AI allows the skeptic to see practical implementations. This includes the implementation of specific models and use-cases, as well as developer workflows.

Skeptical Implementer Refutations

There is an extremely large amount of capital being infused into the AI space. A skeptic who is too slow to adopt or implement may miss career and financial opportunities.

The skeptical implementer may find themselves focusing too much on generalized software implementations and miss out on specific-case implementations. When a skeptic hears "AI", they are most likely to immediately think of IDE integrations, code generation, and general-purpose uses, overlooking specific use-cases and the improvements those might bring.

Zealots

Numbers, data, technology, and progression are saviors. In order for there to be progress, we must advance, and AI is the incarnation of our advancement and our salvation. Don't sweat the details of functionality, limitations, or social implications.

Technology should govern everything.

Zealot Validations

New markets are being created under the umbrella of AI. In the event AI pays off, the zealot will be at the forefront, having earlier connections and insights into the new markets.

The technology is no doubt fascinating. There seem to be excellent niche use-cases that, when applied, render irrefutable results.

Zealot Refutations

The zealot claims engineering performance is through the roof with AI implementations, but the industry doesn't even know how to measure performance in the first place, so how can it be determined that AI-enabled engineering increases performance?

AI is currently a risky investment unless you're providing server hosting for AI solutions or building hardware; effectively everyone else is currently losing. As such, uncritical adoption, or forcing others to adopt it, will likely end up as a negative ROI.

The zealot suspends understanding of AI limitations and black-box functionality. Zealots may have mismatched expectations of functionality and blame others for unachieved promised results.


My Opinion

AI is not inherently bad, but the societal hyperfixation on it is. Going further, the Western world's unbounded hyperfixation on misused data contexts emboldens poor claims around AI. I think we should land somewhere between the skeptic and the zealot, while paying close attention to the rationale of deniers.

My Approach

I utilize generative AI in my daily workflows. If I cannot immediately recall some technical detail that I hitherto knew, I can prompt for that information and refresh my memory of it. If there are clear, well-documented (and thus trained-upon) code cases, I can reasonably expect a good output.

I still have never worked on "A Simple CRUD" app. Even so, I've tried agentic workflows in my production systems at least 25 times, and have been dissatisfied with the output each time. The level of effort to correct, move files, reimplement standards, etc. outweighs the time and energy of having done it "the old way". I am told that this approach does work on simpler or narrower cases, but again, there is usually a large amount of context on any single thing I've worked on.

There are some simple use cases, like adding just a column or some such, that go well, but even those require substantial structural effort beforehand, and the result is still not guaranteed to be the output I expect.

I think the concept of intelligence will remain biological. As such I am doubling down on acquisition of knowledge and the application thereof in the software world.

Predictions

I don’t foresee the human in the loop going away.

Regarding code-generative use cases

I predict that the hardware limitations we have currently hit, combined with further software optimizations, will result in lower-quality outputs. I predict that LLMs will be poisoned, and that we've effectively stolen and trained upon all available information; that trademark holders will find ways to protect new innovations from theft. I predict that when the loss-leader and free-token era ends, the cost will outweigh the benefits. I use AI regularly, and I have seen next to no improvement in its output over the last year and change.

What's next for code-generative AI

I predict that the UX of AI will be the next frontier, and that what we have will need to be refined and presented in a way that makes its use-case in generative AI for code irrefutable. However, I am skeptical the cost will be matched by the benefits.