A VP of engineering called me last month and said, “I need an ML engineer. Budget is flexible, start date is yesterday.” I asked one question: “Computer vision, NLP, MLOps, or something else?” She paused for ten seconds, then said, “Oh. I have not thought about it that way.” If you are hiring ML talent in 2026 and you have not thought about it that way, this post is for you. I have spent the last several years placing specialized AI/ML talent, and one-size-fits-all recruiting is the fastest path to a thirty-day slip in your timeline and a lukewarm finalist.
Why one-size-fits-all ML recruiting fails in 2026
Five years ago, a strong generalist ML engineer could cover most of the work a growing team needed. In 2026 the surface area has split. The engineer who is brilliant at fine-tuning a vision transformer is rarely the same engineer who can design a resilient production inference pipeline, and neither of them is the engineer you want leading an LLM retrieval-augmented generation project. Treating these as interchangeable roles costs you either the wrong hire or an extremely expensive and lengthy search.
Computer vision engineers: signal vs. noise in the resume
Computer vision engineers split into three rough camps: traditional image-processing engineers who have moved into deep learning, academic vision researchers transitioning into industry, and product ML engineers who have specialized in vision over time. Signals that matter on the resume: experience with real-world image quality and preprocessing pain, familiarity with video as well as still images, and hands-on work with a detection or segmentation architecture in production. Noise: a long list of ImageNet accuracy numbers with no discussion of deployment.
NLP and LLM engineers: the hottest and hardest role to fill
Six out of every ten open ML requisitions I work on right now touch NLP or LLMs. The talent pool has not caught up. The strongest NLP engineers I place typically have at least one of: meaningful pre-LLM-era work on sequence models, current production experience with one of the major foundation model providers, or hands-on work with retrieval-augmented generation systems that run in front of real users. Do not confuse someone who has used the OpenAI API with someone who has shipped a production NLP system. The pay gap between the two is enormous, and so is the performance gap.
MLOps and ML platform engineers: the role everyone forgets until production breaks
Clients remember to hire MLOps engineers the week after an inference outage. The job is a hybrid of DevOps, software engineering, and ML literacy. The strongest candidates have production experience with at least one of Kubeflow, Ray, Vertex AI, SageMaker, or a homegrown equivalent, understand model-monitoring and drift, and can have a grown-up conversation about cost-per-inference. MLOps candidates are often sourced from site reliability engineering and data engineering backgrounds, not from ML research.
Reinforcement learning and research engineers: a different hiring bar
RL and pure research roles sit on a different hiring curve. The best candidates are often coming out of PhD programs or frontier labs, and the evaluation looks more like academic hiring than product hiring: publications, citations, and depth in one specific contribution. These are the roles where a doctorate genuinely moves the needle. They are also the roles where a poor pairing of candidate to problem space fails fastest.
Where each specialty actually lives (geography + communities)
Geography still matters when you are sourcing specialties. Computer vision depth clusters around Boston, Pittsburgh, the Bay Area, and any city with a strong robotics or autonomous vehicle presence. NLP and LLM talent skews toward the Bay Area, Seattle, and New York, with rising density in Toronto and Austin. MLOps engineers are the most geographically distributed of the three, because their skills transfer cleanly from adjacent infrastructure work. Online communities matter as much as cities: Papers with Code, the Hugging Face community, the MLOps Community Slack, and specialty conference Discords are where real practitioners gather.
Specialty-specific interview questions that work
Generic ML interviews fail to reveal specialty depth. A few examples of questions that sort the field:
- Computer Vision: “Walk me through how you debugged a dataset where your validation metric looked great and production performance was terrible.”
- NLP / LLM: “How did you evaluate your retrieval system’s quality independent of the generation model’s quality?”
- MLOps: “Describe a time a model in production started drifting. What monitoring caught it, what did you do, and what did you change after?”
- RL / Research: “Describe a paper you tried to reproduce. What did the paper leave out, and how did you fill the gap?”
Every one of those questions is impossible to answer well without real experience, and impossible to answer poorly without revealing the gap.
Pay bands by specialty, and where the premiums are
The premium rankings I see in 2026, in order of highest-to-lowest compensation for equivalent seniority:
- Foundation model and frontier LLM research: the ceiling is effectively uncapped.
- Senior NLP/LLM applied engineers: twenty to forty percent above generalist ML.
- Senior computer vision engineers: roughly in line with generalist ML, with a premium in robotics and autonomous vehicles.
- MLOps engineers: in line with or slightly above generalist ML at senior levels, because the pool is smaller than the demand.
- Applied reinforcement learning roles: variable; high in specific industries (logistics, recommendation, gaming).
If you are building a team and budgeting against a single ML engineer band, you are almost certainly underpaying one of these specialties. Split your bands, or expect to lose finalists. Specialized machine learning recruitment and staffing partners exist precisely because generalist recruiting misses these distinctions.