How to Build Recruiting Workflows with Claude

Learn how to connect Claude to a candidate database, build reusable search skills, and score candidates against job specs. Step-by-step recruiting workflow guide.

Published: May 2, 2026
Written by: Abhilash Chowdhary
Reviewed by: Manmohit
Read time: 7 minutes

Most recruiting teams have already tried SaaS sourcing tools and found the same failure mode. The tools return hundreds of results, but a huge portion are noise. A two-person executive recruiting firm told us that their sourcing tool labels someone a "100% match" even when the candidate's background clearly does not fit the role, because the matching operates on keyword overlap rather than the nuanced criteria that actually matter for the search.

Claude changes what is possible here. The shift we're seeing is teams connecting Claude to a candidate database with up-to-date data, encoding their search criteria as a reusable skill, and scoring candidates against a job spec so the whole workflow compounds across roles instead of resetting for every new search.

What follows covers each layer of that system, from what it replaces through how to search, score, and extend the workflow over time.

What a Claude Recruiting Workflow Replaces

A typical sourcing workflow today stacks four or five tools on top of each other. LinkedIn Recruiter for the initial Boolean search. An enrichment provider like Apollo or ContactOut for emails and phone numbers. A spreadsheet or ATS for tracking. And a recruiter's own judgment, applied manually to every single profile, with no documented criteria.

These problems compound across the stack. Boolean strings are brittle, so small syntax errors silently exclude qualified candidates. Enrichment data goes out of date within weeks because most providers refresh on a quarterly cycle. And the recruiter who knows exactly what "good" looks like for a niche role carries that knowledge in their notes and Slack messages. One talent intelligence team at a chip company found that the insights from their recruiter conversations lived and died in internal notes, and every new search started from zero even when the role was nearly identical to one they had filled three months earlier.

Claude recruiting workflows replace this stack with a single interface where search, enrichment, scoring, and institutional knowledge all live in the same place. The recruiter defines what they are looking for once, and the system handles the data retrieval, filtering, enrichment, and scoring without requiring the recruiter to switch tools, export files, or re-enter criteria at each step.

Two Ways to Connect Claude to Recruiting Data: MCP vs Claude Code

Claude on its own does not have access to candidate databases. If you open Claude right now and ask it to find senior engineers in Austin, it will give you general advice about sourcing strategies and, at best, a handful of profiles it cannot verify, with no reliable work history or contact information behind them. To get real results, you need to connect Claude to an up-to-date people database.

There are two ways to do this, and which one fits depends on whether you have engineering resources or want to do everything through a visual interface.

Claude Cowork: the recruiter's path

Claude Cowork is the agentic mode in the Claude desktop app. It can connect to external tools through MCP (Model Context Protocol), save and invoke skills, chain multi-step workflows together, read and write local files, and schedule tasks to run later. For a recruiter, MCP is the part that matters most: it is a plugin system that lets Claude connect directly to external data sources. Once you add a people data connector, Claude can search a candidate database with up-to-date data from inside the same interface where you are already working.

Here is how to set it up with a people data provider like Crustdata:

  1. Get a Claude Pro or Team subscription ($20/month or $30/seat/month). The free tier does not support MCP connections.

  2. Open Claude Desktop and navigate to Settings, then the Integrations section.

  3. Add the MCP connector by pasting the configuration URL from your data provider. For Crustdata, this connects Claude to an up-to-date database of over one billion professional profiles.

  4. Authenticate by logging into your data provider account when prompted. This is a one-time step.

  5. Search. Open any new conversation and type what you are looking for. Claude now has direct access to the candidate database and will return actual profiles.

From this point forward, every Claude conversation can search, filter, and return real candidate data. You describe what you need the same way you would explain it to a colleague sitting next to you.

Because Cowork also supports skills and scheduling, you can save a search workflow as a reusable skill (covered in the next section) and schedule it to run on a recurring basis, all without writing code.

Claude Code: the engineering path

Claude Code runs in the terminal and can execute code. It also connects to MCP servers and supports skills, but the key differences are that it can hit any API even without an MCP connector, write custom Python scripts for complex scoring logic and batch processing across thousands of records, and store workflows in a git repository so they are version-controlled, reviewable, and shareable across a team.

This matters for recruiting when you need logic that goes beyond what MCP tool capabilities can express. For example: writing a Python script that pulls candidates from the People Discovery API, cross-references each one against your internal ATS database to check for prior contact, applies a proprietary scoring algorithm your team developed, enriches only the top-scoring candidates via the People Enrichment API, and writes the results directly into your Greenhouse or Ashby pipeline through their APIs. Claude Code builds that entire pipeline, tests it, and commits it to your repo so any engineer on the team can run or modify it.
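That pipeline logic can be sketched in a few lines. Everything below is illustrative: the candidate fields, scoring weights, and shortlist size are stand-ins rather than any provider's or ATS's actual schema, and a real version would call the People Discovery and Enrichment APIs instead of operating on an in-memory list:

```python
# Hypothetical sourcing-pipeline sketch: drop prior contacts, score the rest,
# and keep only the top candidates for (paid) enrichment. Field names and
# weights are invented for illustration.

def run_pipeline(candidates, prior_contact_ids, score_fn, enrich_top_n=2):
    """Filter out prior contacts, rank by score, return the enrichment shortlist."""
    fresh = [c for c in candidates if c["id"] not in prior_contact_ids]
    ranked = sorted(fresh, key=score_fn, reverse=True)
    return ranked[:enrich_top_n]  # only these go on to enrichment

def score_fn(c):
    score = 0
    if "Kubernetes" in c["skills"]:
        score += 2  # required-skill bonus
    if c["years_experience"] >= 8:
        score += 1  # seniority bonus
    return score

candidates = [
    {"id": 1, "skills": ["Kubernetes", "Go"], "years_experience": 9},
    {"id": 2, "skills": ["Java"], "years_experience": 12},
    {"id": 3, "skills": ["Kubernetes"], "years_experience": 5},
    {"id": 4, "skills": ["Python"], "years_experience": 3},
]
shortlist = run_pipeline(candidates, prior_contact_ids={2}, score_fn=score_fn)
print([c["id"] for c in shortlist])  # → [1, 3]
```

The enrichment gate is the design point worth copying: scoring is cheap, enrichment usually is not, so the pipeline spends enrichment credits only on candidates that clear the bar.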

The practical distinction: Cowork is for recruiters who want to search, score, build skills, and automate through a visual interface without writing code. Claude Code is for teams that need custom scoring algorithms, want to connect to internal tools that lack MCP connectors, or need version-controlled workflows that any engineer can run, modify, and review through pull requests.

How to Source Candidates with Natural-Language Search

With the MCP connection in place, sourcing works like a conversation. You type something like: "Find senior backend engineers in Austin who have worked at companies with over 500 employees, changed jobs in the last six months, and have experience with Kubernetes or distributed systems."

Claude takes that plain-language description and translates it into a structured query against the People Search API. The database supports over 60 filters covering title, seniority, function, company size, geography, skills, education, job tenure, and more. What comes back are structured profiles you can read and act on immediately, with the candidate's current role, full work history, skills, education, and available contact information, all pulled from up-to-date data rather than a quarterly database export.
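To make that translation concrete, here is a rough sketch of how one plain-language sentence might decompose into a structured filter payload. The filter names and operators are hypothetical placeholders, not the People Search API's actual schema:

```python
# Illustrative decomposition of a natural-language search into structured
# filters. Field names and operators are invented, not the real API schema.

def build_search_query(title, location, min_company_size,
                       months_since_job_change, skills_any):
    return {
        "filters": [
            {"field": "current_title", "op": "contains", "value": title},
            {"field": "location", "op": "equals", "value": location},
            {"field": "company_headcount", "op": "gte", "value": min_company_size},
            {"field": "months_in_current_role", "op": "lte",
             "value": months_since_job_change},
            {"field": "skills", "op": "any_of", "value": skills_any},
        ]
    }

query = build_search_query(
    title="Senior Backend Engineer",
    location="Austin",
    min_company_size=500,
    months_since_job_change=6,
    skills_any=["Kubernetes", "distributed systems"],
)
print(len(query["filters"]))  # → 5 structured filters from one sentence
```

The point of the sketch: every clause in the sentence maps to exactly one structured filter, which is the translation work Claude does for you when the MCP connection is in place.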

The filtering depth depends entirely on the data source you connect Claude to. Claude itself is just the interface. If the underlying database has limited or rigid filters, your search will be limited regardless of how well you describe what you want. This is why the choice of data provider matters. The filter that recruiters hit limits on most often is past employer combined with title. Most providers let you filter by current title or current company, but cannot answer "show me people who held a VP of Operations role specifically at a robotics company." That requires nested filtering, where the title filter applies within the context of a specific past employer rather than across the entire profile. A database like Crustdata with 60+ structured filters supports this kind of nested logic, along with seniority level, function, geography down to city level, company headcount range, and whether the candidate has changed roles recently.

You can also iterate on results in real time. If the first search returns too many generalists, you narrow it in the same conversation: "Only show me people who have worked at companies with fewer than 200 employees and have Kubernetes listed as a skill." Claude adjusts the query and returns a refined set. This back-and-forth is what makes MCP sourcing feel like working with a researcher rather than wrestling with a search interface.

For teams using Claude Code for automated sourcing, the same filters are available programmatically. You can also pass a list of profiles you have already sourced so the API does not return or charge for them again, which keeps costs predictable when you are running multiple searches for related roles across the same talent pool.

When Crustdata is used as the underlying data source, the profiles Claude returns through either path are up to date. The contact information is enriched in real time, and running the same search next week returns updated results reflecting job changes, new skills, and work history updates that happened since your last search.

How to Build a Claude Skill That Makes Your Search Repeatable

A single search is useful, but making that search repeatable across roles, recruiters, and time is what turns Claude from a tool into a system.

What a Claude skill is

A Claude skill is a saved set of instructions that lives as a markdown file in your project. Instead of explaining your requirements from scratch every time you open a new role, you invoke the skill by name and it runs the same structured workflow with the same quality bar. Skills work in both Cowork (through the desktop interface) and Claude Code (as files in a .claude/skills/ directory).

Each skill has a SKILL.md file with two parts: a YAML header that tells Claude when to use the skill and what it does, followed by the actual instructions Claude follows when the skill runs. The instructions can include your search criteria, scoring weights, output format, and any special logic for how to evaluate candidates.

Why skills matter more than good prompts

When recruiters type free-form queries, the results are inconsistent. One recruiter asks for "senior engineers in fintech" and gets a broad list. Another asks for "people who built payment systems at Series B companies" and gets a narrower, better-matched set. A Dutch recruitment SaaS builder found that some recruiters ask unstructured questions that break the underlying tool calls, producing no results or the wrong results. Skills eliminate that variance by encoding the structured query logic once and letting anyone on the team invoke it.

Two types of skills for recruiting

Reference skills add knowledge Claude applies to your current work without you needing to invoke them directly. For recruiting, this could be a skill that contains your firm's candidate evaluation criteria, your ideal candidate profile for a recurring role type, or your company's outreach tone and messaging guidelines. Claude loads this context automatically when it detects relevance to what you are doing.

Task skills give Claude step-by-step instructions for a specific action you trigger manually. For recruiting, this is a search-and-score workflow you invoke with /executive-search or /bulk-sourcing to run the full pipeline: query the database, filter results, score against a rubric, and output a ranked shortlist.

How to build a recruiting skill step by step

  1. Create the skill directory. In Cowork, navigate to your project settings and create a new skill. In Claude Code, create a folder at .claude/skills/executive-search/ with a SKILL.md file inside it.

  2. Write the YAML header. This tells Claude when to activate the skill:

    ---
    name: executive-search
    description: Search and score executive candidates against a role spec. Use when running VP+ searches or when asked to find senior leaders.
    ---
  3. Write the instructions. This is where you encode everything a recruiter would otherwise type from scratch each time:

    • The search criteria (target title, minimum experience, required industry exposure, geography, company stage)

    • The scoring rubric (must-haves scored pass/fail, nice-to-haves like board experience or public company background scored on a weighted scale)

    • The output format (a ranked list showing match percentage, reasoning behind the score, and flagged criteria the candidate does not meet)

    • The data source (which API to query, whether to enrich with contact data, how many results to return)

  4. Add supporting files if needed. For complex skills, you can add a reference.md with detailed scoring criteria or an examples/ folder showing what good output looks like. Keep the main SKILL.md under 500 lines and move reference material to separate files.

  5. Test and refine. Invoke the skill with /executive-search [role description] and review the output. Adjust the scoring weights and criteria based on which candidates you actually reach out to.

A two-person executive recruiting firm built exactly this kind of task skill so they could trust the output enough to put it directly into candidate sequences across 10 to 20 different searches without rewriting the prompt each time. The skill captured what "good" looked like for their specific practice, the niche criteria that no generic Boolean template could express, and ensured that every search started from that baseline instead of from a blank prompt.

Once the skill exists, any recruiter on the team can invoke it. The search criteria, the quality standards, and the output format stay consistent even as the people running the searches change. Over time, you refine the skill based on which candidates convert, and Claude's memory layer can learn recruiter-specific preferences on top of the shared skill, so the system gets better with use rather than resetting.

How to Score and Match Candidates Against Open Roles

Sourcing generates a list of candidates, but scoring is what tells you which ones deserve your time.

Inside a Claude skill, you can define a scoring rubric that evaluates each candidate against the open role's requirements. The rubric specifies must-have criteria (each is a pass/fail gate), weighted nice-to-haves (scored on a scale), and deal-breakers (automatic disqualification). Claude reads each profile, evaluates it against the rubric, and returns a match score with an explanation of how it arrived at that number.
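A minimal sketch of that rubric structure, with invented criteria and weights: deal-breakers disqualify outright, must-haves gate pass/fail, and nice-to-haves fill in a weighted score above a floor (the 60-point floor for passing every must-have is one way to produce the tiers the article describes):

```python
# Illustrative rubric scorer. Criteria, weights, and the 60-point floor are
# assumptions for the sketch, not a prescribed methodology.

def score_candidate(candidate, must_haves, nice_to_haves, deal_breakers):
    """Return (score, reasons). must_haves/deal_breakers map name -> predicate;
    nice_to_haves maps name -> (predicate, weight)."""
    reasons = []
    for name, check in deal_breakers.items():
        if check(candidate):
            return 0, [f"disqualified: {name}"]
    for name, check in must_haves.items():
        if not check(candidate):
            return 0, [f"failed must-have: {name}"]
        reasons.append(f"met must-have: {name}")
    total = sum(weight for _, weight in nice_to_haves.values()) or 1
    earned = 0
    for name, (check, weight) in nice_to_haves.items():
        if check(candidate):
            earned += weight
            reasons.append(f"met nice-to-have: {name}")
    # Passing all must-haves earns a 60% floor; nice-to-haves fill the rest.
    return round(60 + 40 * earned / total), reasons

candidate = {"years_vp": 6, "board_experience": True,
             "public_company": False, "industry": "robotics"}
score, reasons = score_candidate(
    candidate,
    must_haves={"5+ years VP experience": lambda c: c["years_vp"] >= 5,
                "robotics background": lambda c: c["industry"] == "robotics"},
    nice_to_haves={"board experience": (lambda c: c["board_experience"], 2),
                   "public company background": (lambda c: c["public_company"], 1)},
    deal_breakers={"no leadership experience": lambda c: c["years_vp"] == 0},
)
print(score)  # → 87
```

Returning the reasons list alongside the number is what makes the score explainable: each entry documents exactly which criterion was met or missed.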

The explanation matters as much as the number itself, because recruiting teams that have implemented AI matching, particularly those operating under EU regulations that require explainable and auditable AI decisions, need to show why a candidate scored the way they did. A match score without reasoning is a black box that no recruiter will trust and no compliance team will approve.

In practice, the scoring creates three tiers that determine how a recruiter spends their review time:

90%+ matches meet every must-have and most nice-to-haves. A recruiter still reviews these before outreach, but the review is a quick confirmation rather than deep evaluation because the rubric has already validated alignment on the criteria that matter.

60 to 80% matches pass the must-have gates but are mixed on nice-to-haves. These are the candidates where recruiters add the most value, applying the kind of judgment that no rubric captures: whether a candidate's career trajectory suggests they are ready for the step up, whether the timing is right given a recent role change, whether their background translates across industries even if it does not map cleanly to the criteria.

Below 60% means one or more must-have criteria failed. A recruiter can still scan these for edge cases the rubric missed, but in most cases they represent profiles that would not have made the shortlist manually either.

The scoring does not remove the recruiter from the process. It reorganizes their attention so the deepest evaluation time goes to the candidates where human judgment matters most, rather than spending equal time on every profile in the list. This happens inside the same Claude session as the search, using the same skill, so the recruiter sees the shortlist with scores and reasoning and moves candidates forward without exporting to a separate tool.

What to Build Next: Outreach Sequencing, Signal Monitoring, and Market Intelligence

The workflow described above handles active sourcing, where you have an open role, run a search, score the results, and reach out to the top candidates. But recruiting does not stop when you fill a position, and the strongest workflows extend into three additional capabilities.

Outreach sequencing connects match scores to personalized messaging. Candidates who score above your threshold can receive outreach that references the specific criteria they matched on, generated by Claude using their profile data. Claude can reference a candidate's actual project history, recent role change, or published work because it has the full enriched profile in context, which produces outreach that reads as specific to each person rather than templated.
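As a sketch of that data flow: in practice Claude drafts the message itself from the full enriched profile, but the structure of the inputs, a candidate record plus the criteria they matched on, looks roughly like this. The names and fields are invented:

```python
# Hypothetical sketch: feed the matched rubric criteria into an opener so the
# outreach references why this candidate scored well. Fields are invented.

def draft_opener(candidate, matched_criteria):
    """Build a first line that cites the candidate's top matched criteria."""
    hooks = " and ".join(matched_criteria[:2])  # lead with the strongest two
    return (f"Hi {candidate['first_name']}, your background in {hooks} "
            f"stood out for a {candidate['target_role']} search we're running.")

msg = draft_opener(
    {"first_name": "Dana", "target_role": "VP of Engineering"},
    ["payments infrastructure", "Series B scaling"],
)
print(msg)
```

The design choice worth keeping even when Claude writes the prose: pass the matched criteria explicitly rather than the whole score, so every message states the specific reasons the candidate made the shortlist.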

Signal monitoring replaces the manual habit of re-running searches every few weeks to check if anything changed. The Watcher API lets you set webhook triggers on candidate profiles or companies and receive a notification only when something changes. You can track job changes (a candidate at a competitor leaves, signaling they may be open to outreach), new job postings (a target company opens three engineering roles, suggesting a new team is being built), or profile updates (a passive candidate adds new skills or updates their headline). Instead of searching for candidates on a schedule, the relevant candidates surface themselves through the signals they emit. One talent intelligence team told us that they did not want to dig for data anymore; they wanted the data to come and find them.
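A minimal sketch of how such webhook events might be routed once they arrive. The event shape here is invented; check your provider's Watcher API documentation for the real payload:

```python
# Illustrative signal router: map a change event to a recruiting action.
# The "type" values and payload fields are assumptions for the sketch.

def route_signal(event):
    """Return the follow-up action for one watcher event."""
    kind = event.get("type")
    if kind == "job_change":
        return f"add {event['person']} to outreach queue (left {event['old_company']})"
    if kind == "new_job_posting":
        return f"flag {event['company']}: hiring for {event['role']}"
    if kind == "profile_update":
        return f"re-score {event['person']} against open roles"
    return "ignore"

action = route_signal({"type": "job_change", "person": "A. Lee",
                       "old_company": "Acme"})
print(action)  # → add A. Lee to outreach queue (left Acme)
```

Unknown event types fall through to "ignore" so new signal kinds from the provider never break the handler.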

Market intelligence uses the same data layer for strategic questions that recruiting leaders need answered before a search even starts. How many qualified candidates for this role exist in a given geography? What compensation ranges are companies posting for similar positions? Which competitors are hiring aggressively for the same profile, and which are decreasing in headcount? These answers inform whether a role is fillable at the planned level and compensation band, whether the search should expand to additional geographies, and how competitive the market is for a specific talent pool. Because the data comes from the same source that powers the sourcing workflow, the intelligence is grounded in actual candidate supply rather than survey estimates.

Each of these extends the same system, because the data source, the skills, and the scoring rubric you built for active sourcing carry forward into monitoring and intelligence without requiring any rebuild.

To start building your own recruiting workflow with Claude, connect to an up-to-date people database and run your first natural-language search. The system starts with one search and compounds from there.
