As you may have heard, the end is near. AI will do all of the software engineering and we will be redundant. Except, that’s just not quite true, is it?
Note: Any mention of AI in this post refers strictly to the generative AI currently being hyped, not traditional ML work! These two topics get mixed together a lot, and my opinion here is only about generative AI.
Hey! I’m Allie and I am a senior software engineer. Like most (maybe all) devs, I have been following the developments in Gen AI and have ridden the emotional rollercoaster, wondering if my career is going to be replaced. I felt a bit insane for a while, because I just wasn’t getting it. I am a perpetual Redditor and have seen so many posts talking about how AI is amazing and will take all of the jobs. I’ve seen tech CEOs talk about the future where AI will do everything and us humans will be showered in abundance. Because that’s how this typically works, right?
With the great divide between pro-AI and anti-AI folks, it was hard to get a feel for how these tools were actually performing. Pro-AI people will talk all day about how amazing the tools are and how great it is that they will take over the world. Anti-AI people will talk all day about how every single thing the tools produce is slop and completely worthless. But like everything in life, the answer is probably somewhere in the middle.
I felt a lot of anxiety reading the pro-AI responses, because I have used the tools, and while they can sometimes be nice, they are not revolutionary in my day-to-day work. With how passionately people talk about them, I felt like I must be missing something. So I went on a hunt to investigate the claims and form an opinion based on my findings and experience. We are a few years into these tools now and are getting a good idea of what's possible.
My take: Generative AI is a tool that can help developers learn faster, maybe write some boilerplate code, do janitorial engineering for us, and be a sounding board. Generative AI cannot do the job of a software engineer, and that includes the work of a junior engineer.
Let’s talk about junior developers for a minute
My heart breaks for the junior engineers trying to get their careers started right now. We are in the middle of companies moving jobs overseas, laying off workers for cost savings, and still correcting for over-hiring during COVID. But none of those reasons sound sexy to shareholders, so they are doing it in the name of AI. This paints a picture of AI being more effective than it truly is. However, regardless of the reason, the job market is tough. And when the job market is tough, it will of course be hardest on entry-level roles.
When I read that AI can do the job of a junior developer, my skin crawls. Sure, entry-level devs do not yet have the foundation that senior+ developers have, but their role is much more than writing basic code. This is a career with nearly no formal training in the craft, one that relies on devs teaching themselves basically everything. I think of the junior role as an apprenticeship: being surrounded by more experienced workers to learn the ropes and grow into an effective engineer.
Do juniors take on easier tickets? Yeah. Do we hire juniors so that we can have someone to do the easy parts of the job? Hell no! For every easy ticket a junior pulls, a mid-level can do it better, faster, and with less hand holding. The purpose of hiring an entry level engineer is not so that we can have an assistant to do these tasks, and saying that AI can take the job of a junior is implying just that. The mentality is backwards. We don’t have juniors to complete easier tickets, we have easier tickets so that junior engineers have approachable work to do while they work on their actual job duty. What is that job duty? Being an apprentice and learning their craft.
When I was a junior, my very first ticket was reordering a menu on our website. Could an AI do this task? Absolutely. Was I hired because my team desperately needed someone willing to do things like reorder the menu? No. This ticket was probably created by someone because they were like, “hey, I can do this in 2 seconds as part of my current PR, but that new engineer needs something easy to start learning how we do things around here”.
That ticket taught me how we run our unit tests. It taught me that pull request templates exist and how to fill out our specific template. It taught me the process we have for requesting reviews and that we need two approvers. It taught me how to set up my dev environment and where these files lived in the codebase. It taught me our ticketing software and process. It taught me what the ship-it squirrel was. And it gave me the confidence to keep going, because my entire team showered the PR with their approvals.
The person who made that ticket for me could have done this work in seconds without needing a dedicated ticket for it. But that wasn’t the point. The point was that I was learning to be a developer on a team and I would not have become the software engineer I am today without the lessons I learned doing that menial ticket. It wasn’t a trivial task we should give the AI. It was a building block required in the making of a new engineer.
Years later, I am the senior engineer leading teams, and I pay it forward. I actively break new features up into buckets of increasing difficulty. I have trivial tasks new engineers can work on, tasks that I'd expect a mid-level to put some work into, and tasks that I expect the engineer to get buy-in from the team on. And the first people I try to push to pick up these tickets are the ones who will benefit the most from the learning, if time allows. Of course, there is always the business to think about, and sometimes it's crunch time and we have to get moving.
My point here is that the job of a junior engineer exists to churn out future mid, senior, and staff+ engineers. These are the people who will be leading technical efforts in the future, and they are worth investing in. The last thing we want is a bunch of code zombies who can prompt the AI to do a thing, and have nothing but AI generated answers when support comes paging about issues.
So how effective is AI in the workplace?
With the industry pushing developers to stop writing code in favor of using AI, we run into productivity pitfalls. Microsoft's CEO claims that 20-30% of the company's code is now AI-generated, and its CTO estimates that figure will reach 95% by 2030 1. But if those current figures are accurate, is it making a huge difference? Developers interviewed say they find AI tools great at writing boilerplate and tests, explaining unfamiliar pieces of code, and fixing bugs 2. They can also help with prototyping and getting started.
A METR study found that while developers forecast and perceived productivity gains from AI, their observed productivity was actually lower than the baseline without AI use 3. Developer surveys indicate that nearly 70% say they spend more time debugging AI-generated code and resolving AI-related security vulnerabilities 4.
So what gives? Is it writing a large chunk of our code and making us vastly more productive? Yes and no. Boilerplate and unit tests tend to take up a large portion of our code, so I can believe that 30% figure might be pretty accurate. The claim that it will reach 95% is a harder sell, unless we get to some crazy breakthrough in AI code generation.
Pitfalls of AI code generation
Code generation is great to aim for, but the job of a software engineer is way more than just code. When I get days where I can throw my headphones on and code all day, I can't stop talking about it, because that almost never happens! Most of my time is dedicated to meetings, conversations with other teams, alignment across multiple projects, mentoring other developers, etc. So even if AI handled 100% of my coding, the most it would save me is 20% of my work day. I wish it was more. With AI more realistically handling 30% of code generation, that gives me an estimated 6% productivity increase. If things go well.
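To make that math explicit, here is the back-of-envelope calculation as a quick sketch, using the rough numbers above (20% of my day spent coding, 30% of that code generated by AI), not anything measured:

```python
# Back-of-envelope estimate using the rough numbers from the paragraph above.
coding_share_of_day = 0.20   # fraction of my work day spent actually writing code
ai_generated_share = 0.30    # fraction of that code AI plausibly generates today

estimated_time_saved = coding_share_of_day * ai_generated_share
print(f"Estimated time saved: {estimated_time_saved:.0%}")  # prints: Estimated time saved: 6%
```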
But there are downsides that come along with this time savings. We already saw that the generated code needs much more debugging time. A survey by Faros found that PR review times increased by 91% with AI-assisted coding, and METR found that 61% of AI suggestions need modification 5. So while it feels fast, we are actually spending a lot of time fixing it!
On top of that, a worse downside of relying on AI code generation is losing our ability to code well. The more we delegate code to a machine, the less we are writing it ourselves and the less muscle memory we keep. Maybe this wouldn't be a big deal if AI were writing 100% of our code, but we just learned that we need a human in the loop to correct the AI. Now we have a less effective human reviewing the code to correct it, and the problems will just keep getting worse. Until AI can generate code with an accuracy that does not need human intervention, it is vital we keep our coding abilities at a level where we are ready to step in at any moment.
I worry about the new generation of developers who started off their learning with AI tools and never built a foundation, but that’s a topic for another time.
And I know, the Claude coders will come out of the woodwork at some point and explain how I obviously have not used Claude Code because that is the revolutionary product that changed everything. I agree, Claude Code is pretty neat. It does a great job at getting context around your entire codebase and tends to do well with generating projects. However, I have noticed a bit of a trend with projects written with this tool.
Usually I see two categories: 1) projects completely “vibe coded” by a prompter who does not understand the craft, and 2) projects where a software engineer is controlling the tool and constantly using domain-specific terminology that a layperson would not use. I think this is a point a lot of people miss when they argue for these tools. A non-developer will not be nearly as effective with them, because of the limited context they can provide to the tool and the fact that they will not be able to review the output. And if we require a trained developer to use the tool effectively, how is it ever going to replace our jobs? And if we lean on the tool and lose our development skills, how does the quality of the code not naturally degrade over time?
So where does that leave us?
AI has uses that can make us more productive. I am working on another post detailing how I use it to be more productive, which will come soon! But as a preview, it's generally not code generation. I use AI tools to help me learn by creating roadmaps, explaining and reviewing code, and acting as a sounding board.
It also helps me with admin tasks, such as feedback at work. I don't generate feedback! That is incredibly insincere. I tend to struggle with adding actionable feedback for my peers, but I want to provide value when feedback is requested. So I use prompts like, “I have a coworker I am providing feedback to, and I want there to be high quality, actionable feedback. I am struggling with coming up with good ideas. Can you ask me a series of questions to learn more about what this coworker does and explore ideas I can give them? Afterwards, please don't generate feedback for me, just give me bullet points I can work with”. The AI will ask a series of questions and end up with a list based on my answers. Sometimes items on the list are awful and sometimes they are pretty good. But the benefit here is getting the brain juices flowing. Sometimes this exercise will jog my brain to think of things I would not have otherwise come up with.
I will sometimes send it a code snippet in a language I am learning and ask if it follows best practices, or how it would review that piece of code if it were doing a code review at work. This has been great for getting me to think of things in an ecosystem I have little experience in.
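To give a sense of what I mean, here is an entirely made-up example of the kind of snippet and question I might send while picking up Python:

```python
# A made-up snippet I might paste in, along with the question:
# "Does this follow best practices? How would you review it in a code review at work?"

def add_tag(item, tags=[]):   # a reviewer (human or AI) should flag this mutable default argument
    tags.append(item)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b'] (surprising if you expected a fresh list each call)
```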
With that being said, I don't foresee AI doing the job of a software engineer unless something drastic changes. In order to have confidence in anything it outputs, we need a trained software engineer in the loop. If we remove all of our junior engineers and have seniors monitor the AI, we will be reducing the number of qualified workers who can review this output year over year, without replacing that talent. Further, coding is such a small part of the job that we would need to take time away from the more important tasks to make room for reviewing and directing the AI output. I can't find the logic in how this would work.
As with all forecasts, who knows what the future will bring.
Other good reads
https://bootcamps.cs.cmu.edu/blog/will-ai-replace-software-engineers-reality-check
Footnotes
1. https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/
2. https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/
3. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
4. https://www.prnewswire.com/news-releases/harness-releases-its-state-of-software-delivery-report-developers-excited-by-promise-of-ai-to-combat-burnout-but-security-and-governance-gaps-persist-302345391.html
5. https://www.softwareseni.com/the-hidden-quality-costs-of-ai-generated-code-and-how-to-manage-them/