Will AI Replace Developers? A Vibe Coding Reality Check 2025

 

Transcript:

AI will kill developers. Cool story, but let's talk about what's really happening. AI right now is on a wave of hype. But we know that waves of hype happen. The previous one was about blockchain. This one is obviously different because we, you and I, see the difference. And this difference looks huge. But over the course of my career, over the last 17 years, I saw at least one wave that changed my life too, and it was the introduction of Stack Overflow.

Hi, my name is Pasha. I work as a developer advocate for BellSoft, and I've been writing code for at least the last 17 years. Like many of us, I have a strong opinion on AI. The question I get a lot at conferences, meetups, and other events is: Pasha, aren't you afraid that AI will replace us all? And my answer is no. There are several things I want to talk about: code assistance, data generation, documentation creation, DevOps, economics, and of course we should address the main elephant in the room, the most fearful one — vibe coding.

Let's start with vibe coding because this is the most frightening topic. There are many ideas about vibe coding. Small robots can replace junior developers. You can write a whole project with vibe coding. It will replace us eventually. All of this is simply not true. Let's start with junior developers. What is the main goal of a junior developer's existence? Not as a person, but as a developer. The main goal is to become a staff or principal engineer, or a manager, or to realize that they do not want to be a developer, which is also fine. This is not something robots can achieve for you. Your junior robot will not become a staff robot any time soon.

AWS understands this and never stops hiring junior developers. Today they might not need junior developers because they generate a lot of code, at least this is what they say. But tomorrow they need new senior, staff, and principal engineers — L5, L6, L7. It is impossible to become one without actual production experience. You have to experience failures. You have to make mistakes and learn from them. This is something robots are not capable of.

Now let's talk about creating full projects from scratch because this is another huge promise. I tried many vibe coding tools. The idea looks interesting. I am lazy, and I would prefer everything to be generated for me. But the thing is, I am a good developer. I have reasonably large experience, and every time I see a large generated project, I see huge flaws in it. It is often overengineered. It has interfaces where they are not needed. It has a very complicated domain structure. It is extremely hard to refactor. It is hard to refactor not only for me, but even for the vibe coding tool itself.

Where vibe coding is very good is static content. For example, if you have a small café and you want a simple website with a menu, phone number, email, and some basic information, vibe coding is perfect. It is usually one to five HTML files, and it will look reasonably good. But this is not a real project. This is not what we usually create as developers. This is where the world does not really need us, and that is fine. These tools allow us to focus on something more important and more complex. Handling complexity is one of the main skills of an engineer.

Now that we covered the hottest topic, let's switch to more common ones. AI code assistance, for example. Many of us use AI code assistance without even knowing it. JetBrains IDEs have used AI for years to prioritize code autocompletion suggestions. When you see a small popup with autocomplete options, it is already powered by a small AI built into the IDE.

There are other tools too: full-line completion, AI assistants, and Junie. Honestly, I turn off full-line completion in IntelliJ IDEA because it rarely guesses what I want to type, and I end up deleting a lot of generated code. AI assistance is still amazing. It can help with small parts of an algorithm. For example, if I do not remember how to write a bubble sort, AI can help me, even though I never write bubble sort in real projects.

Junie is an agentic AI built into the IDE. It helps me when I need to create a project, but it has downsides. I use Junie to generate things I do not want to think about, such as configuration, controllers, and boilerplate. But I do not allow it to write services for me because services are the most critical part of my system. Controllers are easy to fix. Business logic is not. It contains domain rules and corner cases, and AI does not know about my corner cases or the specifics of my business.

After that, everything must be tested. I can try to generate tests with AI, but I do not like the results. It is very good at creating test structure, but the tests themselves are usually incomplete. When I use TDD and write tests first, and then AI writes code for me, sometimes it works great, but sometimes it does not. Sometimes AI specializes the code for a specific test case. It inspects the stack trace and returns exactly what the test expects, which is definitely not what I want.

Guidelines help, but they never fully solve the problem. AI will not stop and say, "Sorry, I cannot do this because of your guidelines." It will work around them somehow, and that is not acceptable, even from a junior developer.

That said, there are several very useful tools. Today I want to talk about Tessle and Context7 in particular, but there are many others. Both of them are MCP servers. MCP stands for Model Context Protocol. They allow AI to avoid hallucinating about library APIs and to know which versions of libraries you are using. If you use version 1.0 of a library, AI will not generate code for version 2.0 or 0.7. This is very important because at least your code will compile, and in dynamically typed languages like Python or JavaScript, it will not fail at runtime.

Speaking of MCPs, they are amazing, but there is a problem. Every additional MCP server you add to the context makes the context bigger, and when the context becomes bigger, the model becomes worse. If you use 20 libraries and put all information about them into the context, the model will perform very poorly. You can enable one or two MCP servers, but you cannot enable all of them if you want useful answers. This applies not only to local models but also to large ones. In vibe coding and agentic scenarios, attaching many MCP tools results in worse answers while consuming a lot of tokens, and tokens cost money.

We should also remember that AI is only as good as its training data, and that data is average by definition. It cannot be all top-notch material. You want your code to be better than average. AI does not have tools like Sonar or other static analyzers built in to judge code quality. At best, it checks whether the code works, and often not even that.

There are a few more things to mention. In DevOps, AI is sometimes very good at generating GitHub Actions workflows. I have also heard that it is good at creating TeamCity pipelines. This makes sense because these configurations are highly standardized and based on best practices. Even if plugin versions are not perfect, the result usually works, which is often what we care about at early stages.

There are also frameworks for AI development. For example, Simon Martin developed something called the Unified AI Process, which provides guidelines and guardrails for predictable results. However, it is tied to a specific stack: JDK, Vaadin, Spring. These are great technologies, but what if you choose something else? Will the results still be good? I am ready to bet that Simon tightly controls what code is being generated and changes the process when the result is not good enough. That level of control is the pinnacle of the whole approach.

We need experts who really understand code and know how to write it well. AI cannot replace them because AI is average. We need better than average. That is why AI will not replace junior developers, and it will not replace senior, staff, or principal engineers either. Developers are not going anywhere any time soon. Our skills are shifting slightly, from just writing code to also reviewing and understanding it. These are the same skills required to work with legacy code. Prompting is simply another new skill.

AI is already essential in our workflows. Recently, I asked on Twitter whether someone would hire a developer without AI skills, and the answer was yes, if they are willing to learn. AI provides a huge productivity boost, but it does not replace us. If we refuse to use it, we are in trouble. We should use all the tools available to us. We used Stack Overflow before. Now we also use AI.

To summarize, AI rewards architectural knowledge, code-reading skills, and critical thinking. It punishes blind trust. That is everything I wanted to share today. If you like the video, like it, subscribe to the channel, leave a comment, and argue with me, because I might be wrong. Pasha out.

Summary

This talk argues that AI will not replace developers, but will change how they work. While AI is powerful for code assistance, scaffolding, documentation, and DevOps automation, it struggles with deep system design, business logic, and handling real-world complexity. Tools like AI assistants and MCP servers can reduce hallucinations and improve accuracy, but overusing them degrades model quality and increases costs. AI is inherently average because of its training data, so expert developers are still required to guide, review, and validate code. Ultimately, AI is a productivity multiplier that rewards architectural thinking and critical reasoning, but punishes blind trust.

Social Media

Videos
card image
Feb 6, 2026
Backend Developer Roadmap 2026: What You Need to Know

Backend complexity keeps growing, and frameworks can't keep up. In 2026, knowing React or Django isn't enough. You need fundamentals that hold up when systems break, traffic spikes, or your architecture gets rewritten for the third time.I've been building production systems for 15 years. This roadmap covers three areas that separate people who know frameworks from people who can actually architect backend systems: data, architecture, and infrastructure. This is about how to think, not what tools to install.

Videos
card image
Jan 29, 2026
JDBC Connection Pools in Microservices. Why They Break Down (and What to Do Instead)

In this livestream, Catherine is joined by Rogerio Robetti, the founder of Open J Proxy, to discuss why traditional JDBC connection pools break down when teams migrate to microservices, and what is a more efficient and reliable approach to organizing database access with microservice architecture.

Further watching

Videos
card image
Feb 27, 2026
Spring Developer Roadmap 2026: What You Need to Know

Spring Boot is powerful. But knowing the framework isn’t the same as understanding backend engineering. In this video, I walk through the roadmap I believe matters for a Spring developer in 2026. We start with data. That means real SQL — CTEs, window functions, normalization trade-offs — and understanding what ACID and BASE actually imply for system guarantees. Spring Data JPA is useful, but you still need to know what happens underneath. Then architecture: microservices vs modular monolith, serverless, CQRS, and when HTTP, gRPC, Kafka, or WebSockets make sense. Not as buzzwords — but as design choices with trade-offs. Security and infrastructure follow: OWASP Top 10, AuthN vs AuthZ, encryption in transit and at rest, Docker, Kubernetes, Infrastructure as Code, and observability with Micrometer, OpenTelemetry, and Grafana. This roadmap isn’t about mastering every tool. It’s about knowing what affects reliability in production.

Videos
card image
Feb 18, 2026
Build Typed AI Agents in Java with Embabel

Most Java AI demos stop at prompt loops. That doesn't scale in production. In this video, we integrate Embabel into an existing Spring Boot application and build a multi-step, goal-driven agent for incident triage. Instead of manually orchestrating prompt → tool → prompt cycles, we define typed actions and let the agent plan across deterministic and LLM-powered steps. We parse structured input with Ollama, query MongoDB deterministically, classify risk using explicit thresholds, rank affected implants, generate a constrained root cause hypothesis, and produce a bounded containment plan. LLM handles reasoning. Java enforces rules. This is about controlled AI workflows on the JVM — not prompt glue code.

Videos
card image
Feb 12, 2026
Spring Data MongoDB: From Repositories to Aggregations

Spring Data MongoDB breaks down fast once CRUD meets production—real queries, actual data volumes, analytics. What looks simple at first quickly turns into unreadable repository methods, overfetching, and slow queries. In this video, I walk through building a production-style Spring Boot application using Spring Data MongoDB — starting with basic setup and repositories, then moving into indexing, projections, custom queries, and aggregation pipelines. You'll see how MongoDB's document model changes data design compared to SQL, when embedding helps, and when it becomes a liability. We cover where repository method naming stops scaling, how to use @Query safely, when to switch to MongoTemplate, and how to reduce payload size with projections and DTOs. Finally, we implement real MongoDB aggregations to calculate analytics directly in the database and test everything against a real MongoDB instance using Testcontainers. This is not another MongoDB overview. It's a practical guide to actually using Spring Data MongoDB in production without fighting the database.