How to Automate Candidate Sourcing with Claude Code (for Recruiters and Talent Teams)

Step-by-step guide to automating candidate sourcing with Claude Code and MCP. Natural-language people search, profile enrichment, and contact info with real API examples.

Published

Apr 25, 2026

Written by

Chris Pisarski

Reviewed by

Manmohit Grewal

Read time

7 minutes

Most sourcing workflows look the same from the inside. Open LinkedIn Recruiter, type a title and location, scroll through 50 profiles one at a time, click into each to check whether they're actually still in the role, copy the promising ones into a spreadsheet, then start the separate process of finding email addresses. By the end, you've spent a full day building a shortlist of 20 candidates, and half the contact info will bounce by the time you send outreach.

Claude Code, connected to a live people data API over MCP, collapses that loop into a single natural-language query. You describe the candidate you want, Claude searches a database of over a billion profiles, enriches the matches with current role, work history, and verified email, and writes the results to a formatted file you can import into your ATS or hand to a hiring manager. No tab switching, no manual exports, no out-of-date profiles.

This guide walks through how to build that workflow step by step, starting with your search criteria in a CLAUDE.md file, then running live API calls against a people database, and finishing with a formatted shortlist ready for outreach.

What This Workflow Actually Does

Before the steps, the architecture. The workflow has four stages:

  1. Search: query a people database for candidates matching your role requirements

  2. Enrich: pull full profiles for the top matches, including current employer, work history, skills, and verified email

  3. Format: export the results as a structured file (CSV or JSON) ready for your ATS or hiring manager

  4. Monitor: set up persistent watchers on target profiles or companies so you're notified when a candidate changes jobs or a target company posts a new role

Claude Code handles the orchestration. A people data API, connected via MCP, handles the data. You write the search criteria once in plain language and Claude executes the lookup each time you run it.

Why Claude Code, not Claude.ai? Claude.ai runs in a browser with no persistent state between sessions. Claude Code runs in your terminal with access to your local file system, your project context, and MCP servers that connect it to external APIs. That's what makes it useful for sourcing. It can call a people search API, read a list of open roles from a file, write candidate shortlists to disk, and maintain your search criteria across sessions.

Who this is for: Recruiters running their own sourcing, agency founders building internal tooling, and talent ops teams with repetitive search patterns. You don't need to write code from scratch, but you do need to be comfortable running Claude Code from a terminal and editing a config file.

Step 1: Write Your CLAUDE.md (The Recruiting Brain)

Every Claude Code project starts with a CLAUDE.md file. This is the persistent instruction set Claude reads at the start of every session. For recruiting, it's your role requirements, your exclusion rules, and your outreach constraints, all in one place that Claude actually references when it runs searches.

A generic CLAUDE.md won't work here. You need one written specifically for the roles you're filling. Here's what to include:

Role requirements, in skills and signal language

Don't write "Senior Backend Engineer." Write what the hiring manager actually needs:

## Target Candidate Profile

Role: Senior Backend Engineer
Location: San Francisco Bay Area, Seattle, or remote US
Experience: 5+ years backend development
Must-have skills: Python, distributed systems, PostgreSQL or similar RDBMS
Nice-to-have: Kubernetes, event-driven architecture, prior startup experience (Series A–C)
Seniority: Senior or Staff level
Current employer size: 50–1,000 employees (mid-stage startup or growth-stage company)

## Exclusion Rules

- Candidates already in our ATS (check against ats-export.csv before adding)
- Candidates at companies where we have an active client relationship
- Anyone who changed roles in the last 3 months (likely not open to moving again)
- Contractors, consultants, and agency employees

## Outreach Rules

- One personalized detail per message tied to their background or recent work
- Never reference salary expectations or compensation data
- Draft all messages to outreach/drafts/ for review before sending

Client or hiring manager context

Two or three sentences about what the company does and why the role is open. Claude uses this to match candidates whose experience is actually relevant to the role.

The CLAUDE.md takes 20 to 30 minutes to write well the first time. After that, you update it per role and every session starts with full context about what you're looking for and what to exclude.

Step 2: Give Claude Code Access To A People Data API

There are two ways to connect Claude Code to a people data API. You can use either one, and the rest of this guide works the same regardless of which path you choose.

Option A: Connect via MCP

MCP (Model Context Protocol) lets Claude Code call external APIs as tools during a conversation. To add the Crustdata MCP server to Claude Code, run:

claude mcp add --transport http crustdata "https://mcp.crustdata.com/mcp"

Claude Code will prompt you to authenticate with your Crustdata API key. Once connected, verify with /mcp and you should see the Crustdata tools available. From that point on, Claude can call people search and enrichment endpoints directly without you writing any API code.

Option B: Drop the API docs into your project folder

If you prefer not to use MCP, or your data provider doesn't have an MCP server, you can drop the API documentation into your project directory and Claude Code will read it and write the correct scripts for you. Create an api-docs/ folder, save the relevant endpoint documentation as markdown or text files, and reference them in your CLAUDE.md:

## Data Sources

API documentation is in the api-docs/ folder.
When I ask you to search for candidates or enrich profiles, read the docs
and write the appropriate API calls using curl or Python requests.
Write the API token as a placeholder (YOUR_API_TOKEN) in any script you generate.
Do not read or access API keys from environment variables directly.

With this approach, Claude reads the docs and writes a script with placeholder credentials. You paste in your API key before running it, or export the environment variable yourself and run the script from your terminal. Claude never touches your actual key. The trade-off compared to MCP is that you handle authentication manually, and each search refinement requires Claude to re-read the docs and rewrite the script rather than iterating conversationally.
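To make the doc-folder path concrete, here's a minimal Python sketch of the kind of script Claude typically generates under those rules. It targets the Crustdata search endpoint used in Step 3, and YOUR_API_TOKEN is the placeholder you swap in yourself before running:

```python
import json
import urllib.request

API_URL = "https://api.crustdata.com/screener/persondb/search"
API_TOKEN = "YOUR_API_TOKEN"  # placeholder: paste your key before running

def build_search_payload(title, region, min_years, skills, limit=25):
    """Assemble the filter JSON the search endpoint expects."""
    conditions = [
        {"filter_type": "current_employers.title", "type": "(.)", "value": title},
        {"filter_type": "region", "type": "[.]", "value": region},
        {"filter_type": "years_of_experience_raw", "type": ">", "value": min_years},
    ]
    # Each required skill becomes its own AND condition
    conditions += [
        {"filter_type": "skills", "type": "(.)", "value": s} for s in skills
    ]
    return {
        "filters": {"op": "and", "conditions": conditions},
        "sorts": [{"column": "years_of_experience_raw", "order": "desc"}],
        "limit": limit,
    }

if __name__ == "__main__":
    payload = build_search_payload(
        "Senior Backend Engineer", "San Francisco Bay Area", 5,
        ["Python", "distributed systems"],
    )
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Token {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

The payload-building logic lives in its own function so each refinement is a one-line change to the arguments rather than a rewrite of the request code.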

Other options: Crustdata is the worked example throughout this guide, but the architecture works with any people data provider. Apollo has a native MCP connector. The steps below work the same regardless of which data source you connect, only the filter names and response fields will differ.

Step 3: Run Your First Candidate Search

With CLAUDE.md written and MCP connected, you're ready to source. Open Claude Code and describe the candidate you want in plain language:

Find senior backend engineers in the San Francisco Bay Area with 5+ years of
experience, currently at companies with 50–500 employees. They should have
Python and distributed systems in their skill set. Return the top 25 sorted
by years of experience

Claude translates this into an API call against the people database. Using Crustdata's People Discovery API, that looks like:

curl --request POST \
  --url 'https://api.crustdata.com/screener/persondb/search' \
  --header "Authorization: Token $CRUSTDATA_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{
    "filters": {
      "op": "and",
      "conditions": [
        {"filter_type": "current_employers.title", "type": "(.)", "value": "Senior Backend Engineer"},
        {"filter_type": "region", "type": "[.]", "value": "San Francisco Bay Area"},
        {"filter_type": "years_of_experience_raw", "type": ">", "value": 5},
        {"filter_type": "current_employers.employee_count_range", "type": "in", "value": ["51-200", "201-500"]},
        {"filter_type": "skills", "type": "(.)", "value": "Python"},
        {"filter_type": "skills", "type": "(.)", "value": "distributed systems"}
      ]
    },
    "sorts": [{"column": "years_of_experience_raw", "order": "desc"}],
    "limit": 25
  }'

The response comes back as structured JSON with each candidate's name, title, company, location, LinkedIn URL, skills, and years of experience. Claude saves this to a file in your project directory.
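If you want to see what "saves this to a file" amounts to in code, here's a small Python sketch. The profiles key and the field names inside it are assumptions for illustration; the exact response schema comes from the provider's API docs:

```python
import json
from datetime import date

def shortlist_rows(search_response):
    """Pull the skimmable fields out of the raw search response.
    Field names here are illustrative; check your provider's schema."""
    rows = []
    for person in search_response.get("profiles", []):
        rows.append({
            "name": person.get("name"),
            "title": person.get("title"),
            "company": person.get("company"),
            "linkedin_url": person.get("linkedin_profile_url"),
            "years_experience": person.get("years_of_experience_raw"),
        })
    return rows

def save_shortlist(rows, role_slug="backend-eng"):
    """Write the trimmed rows to a dated JSON file in the project directory."""
    path = f"candidates-{role_slug}-{date.today().isoformat()}.json"
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)
    return path
```

Trimming to a handful of fields at this stage makes the "read the first 10 results and check them" pass much faster than scanning raw API output.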

You don't need to write that API call yourself. You describe the criteria and Claude writes and executes it. What you do need to verify is that the results match your actual requirements. Read the first 10 results and check them. If you're getting DevOps engineers instead of backend engineers, tighten the title filter. If the seniority skews junior, add a years_of_experience_raw floor. The first search is rarely perfect, but refining a natural-language query is faster than rebuilding Boolean strings in LinkedIn Recruiter.

The MCP path vs. the direct API path: Everything above happens inside Claude Code through MCP. If you prefer to run the API calls from a Python or Node script instead, the same endpoint and filters work identically. The difference is that MCP lets you iterate on the search criteria conversationally ("narrow that to only people with Kubernetes experience"), while a script requires editing code for each refinement.

Step 4: Enrich Candidate Profiles

The People Search tells you who's out there, but to reach out you need more than a name and title. You need a verified email, complete work history, education, and confirmation that they're actually still at the company listed in the search results.

Ask Claude:

For the top 15 candidates from the search results, pull their full profile
including work history, education, skills, and business email

Claude runs the People Enrichment API against each matched LinkedIn profile:

curl --request GET \
  --url 'https://api.crustdata.com/screener/person/enrich?linkedin_profile_url=https://linkedin.com/in/example-profile&fields=business_email' \
  --header "Authorization: Token $CRUSTDATA_API_TOKEN"

The enriched profile returns current employer, title, full work history, education, skills, location, and a verified business email where available. Claude writes these to a candidates-[date].json file alongside your search results.

Why real-time enrichment matters for recruiting: The most common complaint from recruiting teams using batch-enrichment tools is that job changes from two or three months ago haven't propagated. You send outreach to someone's old company email because the cached data is behind. Real-time enrichment fetches the profile at the moment of the call, not from a cached copy that's months old. For a handful of high-priority candidates, a manual spot-check against LinkedIn before outreach is still worth the two minutes. For a batch of 25, the real-time lookup handles it.

Credit cost: Database enrichment costs 3 credits per profile. Adding business email costs 2 additional credits per profile. Enriching 25 candidates with email uses 125 credits total. Check Crustdata pricing for the per-credit rate on your plan.
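The arithmetic is easy to fumble mid-batch, so it's worth encoding once. A sketch using the rates quoted above (3 credits per profile, plus 2 for business email):

```python
ENRICH_CREDITS = 3  # per profile, database enrichment
EMAIL_CREDITS = 2   # additional per profile, business email

def batch_cost(num_profiles, with_email=True):
    """Credits a batch will consume at the rates above."""
    per_profile = ENRICH_CREDITS + (EMAIL_CREDITS if with_email else 0)
    return num_profiles * per_profile

def max_affordable(balance, with_email=True):
    """How many profiles the current credit balance covers."""
    per_profile = ENRICH_CREDITS + (EMAIL_CREDITS if with_email else 0)
    return balance // per_profile
```

So `batch_cost(25)` is the 125 credits mentioned above, and `max_affordable` tells you up front whether a batch will run to completion.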

Step 5: Export A Formatted Shortlist

Raw JSON is useful for scripts but not for hiring managers. Ask Claude to format the results:

Take the enriched candidate profiles and create two outputs:

1. A CSV file with columns: Name, Current Title, Current Company, Years in Role,
   Location, Key Skills (top 5), LinkedIn URL, Email. Save to shortlists/backend-eng-[date].csv

2. A one-paragraph summary for each candidate highlighting why they match the role
   requirements from CLAUDE.md. Save to shortlists/backend-eng-summaries-[date].md

Claude reads the enriched data, cross-references it against the role requirements in CLAUDE.md, and generates both files. The CSV is ready for ATS import. The summaries give the hiring manager enough context to decide who to prioritize without opening 15 LinkedIn tabs.

What the summary looks like:

## Sarah Chen, Staff Engineer, Acme Corp (4 years)
Python and distributed systems background with 8 years total experience.
Previously at a Series B fintech startup (DataPay) where she led the migration
from monolith to microservices. Currently managing a team of 6 at Acme (280 employees,
SF-based). Strong Kubernetes and PostgreSQL experience. Business email available

This step is where Claude Code's file system access pays off. It reads the enriched profiles, applies the filtering logic from your CLAUDE.md, and writes structured output to files that integrate with your existing workflow. No copy-pasting between tabs, no manual formatting.
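If you'd rather generate the CSV deterministically instead of re-prompting each time, the export step is a few lines of Python. The profile keys here are illustrative stand-ins for whatever your enriched JSON actually contains:

```python
import csv

COLUMNS = ["Name", "Current Title", "Current Company", "Years in Role",
           "Location", "Key Skills (top 5)", "LinkedIn URL", "Email"]

def to_row(profile):
    """Flatten one enriched profile into a CSV row.
    Key names are illustrative; map them to your provider's schema."""
    return [
        profile.get("name", ""),
        profile.get("title", ""),
        profile.get("company", ""),
        profile.get("years_in_role", ""),
        profile.get("location", ""),
        "; ".join(profile.get("skills", [])[:5]),  # top 5 skills only
        profile.get("linkedin_url", ""),
        profile.get("email", ""),
    ]

def write_shortlist(profiles, path):
    """Write the ATS-ready CSV with the column order above."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        writer.writerows(to_row(p) for p in profiles)
```

A fixed column order also means repeat exports import cleanly into the same ATS mapping every time.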

Step 6: Monitor Your Bench For Intent Signals

Most recruiters keep a bench, a running list of strong candidates for roles they hire repeatedly. The problem is knowing when someone on that bench might be ready to move. Instead of waiting for them to apply or respond to cold outreach, you can watch for signals that suggest they're open before they've made it official.

Recruiters call this the "intent signal hire." The idea is that certain company-level and profile-level changes indicate a candidate may be ready to move, even if they haven't turned on "Open to Work" yet.

The signals worth watching:

  • Their company just laid off people in another department (anxiety spreads across the org, even to teams that weren't affected)

  • Their company's Glassdoor reviews dropped significantly over the last 3 months (internal morale problems tend to show up in reviews before they show up in attrition)

  • They updated their LinkedIn profile recently but aren't flagged as "Open to Work" (profile updates without the badge often mean they're testing the waters quietly)

  • Their company's headcount growth is flat or declining, especially a department-level drop in engineering or sales headcount (a possible quiet-layoff indicator)

  • A key leader at their company just left (when a VP of Engineering or CTO departs, senior ICs often start looking within weeks)

When you spot these signals, you're reaching out to someone who's already feeling friction at their current company, which means the conversation starts differently than a cold message to someone who's happy in their role.
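One way to operationalize these signals is a simple weighted score over your bench, so the candidates with the strongest combination of signals surface first. The weights below are illustrative, not calibrated; tune them against your own response rates:

```python
# Illustrative weights per intent signal; adjust to your own hit rates.
SIGNAL_WEIGHTS = {
    "recent_layoffs": 3,
    "glassdoor_drop": 2,
    "profile_update_no_badge": 2,
    "headcount_decline": 2,
    "leader_departure": 3,
}

def intent_score(signals):
    """Sum the weights of whichever signals are currently firing."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def prioritize(bench):
    """bench: list of (candidate_name, [signals]); strongest score first."""
    return sorted(bench, key=lambda c: intent_score(c[1]), reverse=True)
```

Even a crude score like this turns "watch these five things across forty people" into a ranked list you can act on each morning.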

Set up a Watcher on your bench companies:

curl --request POST \
  --url 'https://api.crustdata.com/watcher/watch' \
  --header "Authorization: Token $CRUSTDATA_API_TOKEN" \
  --header 'Content-Type: application/json' \
  --data '{
    "entity_type": "company",
    "entity_identifiers": [
      {"domain": "acmecorp.com"},
      {"domain": "anothercompany.com"}
    ],
    "triggers": [
      {"event_type": "headcount_growth", "filter": {"threshold_percent": -10, "period_months": 3}},
      {"event_type": "job_posting", "filter": {"title_keyword": "VP Engineering OR CTO OR Head of Engineering"}}
    ],
    "webhook_url": "https://your-endpoint.com/webhook"
  }'

You can also monitor individual candidates by watching for social posts. If someone on your bench posts about their company restructuring, their team being dissolved, or simply starts engaging with content about "career transitions," that's a signal worth acting on. Claude Code can pull their recent posts via the social posts API and flag anything that suggests they're open to a conversation.

When a trigger fires, pass the context to Claude Code:

A bench candidate's company just showed a 15% headcount drop in engineering
over the last quarter. Here's the context:

Company: [name], [headcount trend], [recent Glassdoor rating if available]
Candidate: [name], Senior Backend Engineer, on our bench since [date]

Check if they've posted anything recently that suggests they're considering
a move. Draft a warm outreach message that acknowledges the market without
referencing the layoffs directly. Save to outreach/drafts/[name]-[date].txt

The Watcher runs continuously in the background. When a relevant signal fires at a bench company, the data comes to you and Claude drafts outreach timed to the actual trigger rather than a weekly check-in schedule.
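The webhook endpoint itself can be very small. Here's a sketch using Python's standard library that writes each trigger payload into a signals/ folder for Claude Code to read on its next run. The payload keys (domain, event_type) are assumptions about the webhook body, so check them against what your watcher actually sends:

```python
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

def event_filename(event):
    """Stable filename for a trigger payload so Claude can pick it up later.
    Payload keys are illustrative; check your watcher's webhook schema."""
    company = re.sub(r"[^a-z0-9]+", "-", event.get("domain", "unknown").lower())
    return f"signals/{company}-{event.get('event_type', 'event')}.json"

class WatcherHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the watcher POSTs when a trigger fires
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body)
        with open(event_filename(event), "w") as f:  # assumes signals/ exists
            json.dump(event, f, indent=2)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WatcherHandler).serve_forever()
```

Dropping each event to disk keeps the watcher and Claude Code decoupled: the endpoint just records, and Claude drafts the outreach when you next open a session.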

What Breaks (And What To Watch)

A few things will go wrong on your first run.

Credit exhaustion. Most people data APIs bill per call and cut off when credits run out, with no auto-overage by default. If you're enriching 50 candidates and credits run out at 35, you'll end up with an incomplete shortlist and no warning. Check your credit balance at the start of each batch (GET /user/credits) and set a stop condition before running large enrichments.
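The stop condition can be as simple as trimming the batch to what the balance covers before the loop starts, rather than discovering the shortfall at candidate 36. A sketch, assuming the 5-credits-per-profile rate from Step 4:

```python
def preflight(balance, batch_size, per_profile=5):
    """Trim the batch to what the credit balance covers.
    Returns (profiles_to_run, warning_or_None)."""
    affordable = balance // per_profile
    if affordable >= batch_size:
        return batch_size, None
    return affordable, (
        f"Only {balance} credits left: enriching {affordable} of "
        f"{batch_size} candidates. Top up before running the rest."
    )
```

Surfacing the warning up front means an underfunded run produces a smaller but complete shortlist instead of a silently truncated one.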

Profile accuracy. Real-time enrichment fetches the latest version of a LinkedIn profile, but LinkedIn itself has propagation delays. Someone who changed jobs two weeks ago may not have updated their profile yet. For high-priority candidates, a quick manual check before outreach is still worth the time.

Over-personalization. Claude has access to a lot of profile context, and left unconstrained it will reference work history, education, skills, and recent activity in a single message, which reads less like research and more like you've been watching someone. The CLAUDE.md rule of "one personalized detail" is there for a reason.

Human-in-the-loop is not optional. In the EU, the AI Act requires explainability and human oversight for any AI system used in recruiting decisions. Even outside the EU, fully automated sourcing without review creates risk. Claude drafts messages to a review folder. Read what it wrote before sending.

Conclusion

Most of the time you spend sourcing goes to what happens after you find someone promising: pulling their full profile, tracking down a verified email, and drafting a personalized message while switching between LinkedIn, your ATS, an enrichment tool, and a spreadsheet.

Claude Code sits at the center of that workflow. It reads your role requirements from CLAUDE.md, calls the people data API for matching candidates, enriches the profiles, formats the output for your ATS, and drafts outreach following your rules. The Watcher handles ongoing monitoring so you're notified when something changes at a target company or with a tracked candidate.

The setup takes a few hours the first time. After that, what used to take a full day of manual sourcing runs while you're on a client call.

If you want to test the people data layer, Crustdata's People Discovery and Enrichment APIs are the endpoints used in the examples above. The API documentation has the full filter reference. For teams already running sourcing workflows and wanting to add real-time candidate monitoring, the Watcher API is where to start.
