The Entry-Level Problem Nobody in AI Wants to Talk About
Over the past two years, I have facilitated AI at Work workshops for organizations across a range of industries, including financial services, healthcare, manufacturing, logistics, professional services, and the public sector. The audiences vary. The questions vary. But one pattern shows up in almost every session, regardless of the room.
When I ask participants to think about which tasks in their current role they’d most like AI to handle, the answers cluster quickly around the same category: the routine, repetitive, time-consuming work that fills the margins of their day. Data entry. Report generation. First-pass research. Meeting summaries. Basic customer inquiries. The stuff they do because someone has to, not because it requires their full capability.
That’s a reasonable and honest answer. And it points directly at the problem nobody in the AI conversation wants to sit with for too long.
That work — the routine, repetitive, entry-level work that experienced professionals are eager to hand off — is the same work that has historically served as the on-ramp to a career. It is how junior professionals learn the underlying mechanics of their field before they are trusted with higher-stakes responsibilities. It is how organizations build the talent pipeline that eventually becomes their senior leadership.
And with the growth of AI, this repetitive yet career-developing work is disappearing.
What the Data Says
Deloitte’s 2026 State of AI in the Enterprise report puts numbers to what many practitioners already feel on the ground. Within one year, 36% of surveyed companies expect at least 10% of their jobs to be fully automated. Looking out three years, that figure rises to 82%. The report specifically calls out entry-level roles, such as data entry, reconciliation, and first-level customer support, as the first targets for automation.
What makes this finding particularly sharp is what Deloitte says next: these roles are often the starting point for longer careers, and organizations will likely need to develop alternate pathways for professional advancement so that employees still build expertise in foundational processes.
That sentence is doing a lot of work. It acknowledges a serious structural problem while leaving the solution almost entirely undefined. And that gap between acknowledgment and action is exactly where most organizations are sitting right now.
The external data reinforces the concern. Entry-level hiring at the 15 largest tech firms fell 25% from 2023 to 2024, according to research from SignalFire. Entry-level job postings dropped 15% year over year in 2025. The World Economic Forum’s Future of Jobs Report 2025 found that 40% of employers expect to reduce their workforce in areas where AI can automate tasks. And 49% of US Gen Z job seekers now believe AI has reduced the value of their college education in the job market.
These are not fringe statistics. They reflect a structural shift that is already in motion.
What I See in the Workshop Room
The data matters, but data alone doesn’t capture what this actually looks like from where I stand. So let me tell you what I see in the room.
In introductory AI workshops, the participants who engage most enthusiastically with AI tools are almost always the more experienced professionals. They grasp quickly how to redirect the time they save toward higher-value work. A senior analyst who used to spend half her week pulling and formatting data can now spend that time on interpretation and recommendations. A project manager who used to draft status reports from scratch can now review and refine an AI-generated draft in a fraction of the time. In one Copilot Cowork demo with two senior PMs, the presentation hadn’t even finished before we were talking about how the tools could automate a number of their daily and weekly workloads. The productivity gains are real and visible, and participants leave these sessions genuinely energized.
The junior participants — the analysts two years into their career, the coordinators fresh out of school, the associates still learning how their organization actually works — tend to engage differently. Some are enthusiastic. But a meaningful number are quietly anxious in a way they rarely voice directly. They are doing the math. The tasks AI is best at are the tasks they were hired to do. And they don’t yet have the experience base to pivot into the higher-order work that AI is supposedly freeing everyone up for.
In intermediate workshops, where participants have usually been using AI tools for six months to a year, a different pattern emerges. I start to see the gap between those who have used AI to accelerate their learning and those who have used it to avoid learning. The first group has built something: genuine understanding of their domain, developed through AI-augmented practice. The second group has a dependency: they can produce outputs, but they can’t always explain them, defend them, or adapt when something goes wrong.
That dependency is the long-term version of the entry-level problem. And it doesn’t resolve itself on its own.
The Ladder Rung Nobody Is Replacing
Here is the framing I keep coming back to, and that I think deserves more direct attention than the industry is currently giving it:
Entry-level work has always served two functions simultaneously. The obvious function is output: getting necessary tasks done. The less obvious but arguably more important function is formation: building the foundational knowledge, professional judgment, and contextual understanding that makes a junior person capable of becoming a senior person.
A loan officer who spent three years manually reviewing credit applications develops intuitions about risk that are difficult to acquire any other way. A financial analyst who built hundreds of models from scratch understands what can go wrong in ways that an analyst who only reviewed AI-generated models doesn’t. A customer service rep who handled ten thousand calls builds a model of human behavior and customer psychology that informs everything they do later in their career.
As one analysis put it, we risk creating a generation of architects who have never laid a brick. The concern isn’t that AI will make junior work impossible to find. It’s that if the current generation of early-career professionals never grapples with the foundational challenges of their field because AI solves those challenges automatically, they may never develop the deep intuition and tacit knowledge required for senior roles.
Here’s a parallel most people will recognize: overreliance on smartphones has created a population of people who no longer remember phone numbers and can’t navigate anywhere without turn-by-turn directions. If we don’t use a skill, we lose it.
Most organizations have not thought carefully about how they are going to replace that formation function. They are automating the output without preserving the learning. And in three to five years, the consequences of that choice will be visible in ways they aren’t today.
What the Deloitte Report Misses
The Deloitte report flags the problem clearly and then moves past it relatively quickly, recommending that organizations develop alternate pathways for professional advancement. That recommendation is correct as far as it goes. But it understates how difficult that actually is to design and how few organizations are actively working on it.
The report’s talent strategy data is revealing. When asked how they are adjusting talent strategies because of AI, 53% of organizations said they are educating the broader workforce to raise AI fluency. Only 33% said they are redesigning career paths and career mobility strategies. Only 19% said they are changing the balance between full-time, contract, and gig workers.
In other words, most organizations are teaching people to use AI tools without fundamentally rethinking the structure of how careers are built. They are adjusting for the output function of entry-level work without addressing the formation function. And those are not the same problem.
What Responsible Organizations Are Starting to Do
I want to be careful not to present this as a problem without any constructive direction, because there are organizations approaching it thoughtfully. They are the minority, but the patterns are worth naming.
The most effective approaches I’ve seen share a few characteristics. They are deliberate about preserving learning experiences even when AI could handle the task more efficiently. A junior analyst might still be asked to build a model manually before being allowed to work with AI-generated versions, not because the manual version is more efficient, but because the manual process builds understanding that the AI-assisted process doesn’t.
They invest in structured mentorship that is explicitly designed to transfer the tacit knowledge that used to be absorbed through junior work. Senior professionals aren’t just reviewing AI outputs; they are narrating their reasoning, surfacing the judgment calls that don’t show up in the final product, and creating structured opportunities for junior staff to practice that reasoning themselves.
They are rethinking what entry-level roles look like when routine tasks are automated. Rather than simply eliminating those roles, they are asking what remains — what the human layer of that work looks like when the AI handles the repetitive core — and building role definitions around that remainder. The job changes shape rather than disappearing.
And they are honest with their junior staff about what is happening and why, rather than leaving people to figure it out through ambient anxiety. That honesty alone is rarer than it should be.
A Word to the People Who Train People
If you design training programs, manage learning and development, or facilitate AI workshops, this problem sits at the center of your work, whether you’ve named it that way or not. [Update: Check out my article on making change “stick.”]
The workshops I run increasingly have to hold two things simultaneously. On one hand, I’m helping participants use AI tools more effectively: building genuine capability with the technology, reducing friction, expanding what people can accomplish. On the other hand, I’m watching carefully for the dependency pattern, the place where efficiency tips over into atrophy, where people start producing outputs they don’t fully understand.
That balance is not something you can set and forget. It requires intentional curriculum design, regular reassessment, and a willingness to slow down the productivity narrative long enough to ask whether learning is actually happening alongside the output gains.
The organizations that get this right will have a meaningful advantage in five years — not just because their AI deployments will be more effective, but because their people will understand what the AI is doing well enough to catch it when it goes wrong, adapt when the tools change, and apply judgment in situations the AI wasn’t designed for.
The organizations that get it wrong will have efficient processes and a talent pipeline that is shallower than they realize. They won’t notice until they need someone to step into a senior role and discover that the formation work was never done.
Let’s Talk About Your Training Program
I run introductory and intermediate AI at Work workshops for organizations navigating exactly these questions: how to build genuine AI capability without creating dependency, how to help experienced professionals leverage AI’s efficiency gains, and how to think about the junior talent pipeline in a world where the traditional on-ramp is changing shape.
If your organization is working through any of this, I’m happy to share my current syllabi and talk through what might fit your context. Whether you’re looking for a virtual session, an in-person workshop, or something in between, feel free to reach out directly.
You can connect with me through LinkedIn. I’d rather have the conversation early than have you discover the gaps after the fact.