Blog

Build a lead enrichment pipeline with Claude and Stekpad

Combine the Stekpad MCP server with Claude Desktop to build an end-to-end enrichment pipeline in under an hour. Claude reads a domain list, scrapes the LinkedIn profile for each domain, and writes enriched rows back to your sheet.

The setup: one Stekpad recipe for LinkedIn profile scraping, one Anthropic Messages API call per domain, and one Stekpad destination pointing to your CRM sheet. Total cost: €12/month for Stekpad plus pennies in Anthropic tokens.
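
To see the shape of the loop before wiring it up in Claude Desktop, here is a minimal Python sketch of the per-domain step, assuming the Anthropic Python SDK. Only the Messages API call is a real API; the two Stekpad helpers (`run_stekpad_recipe`, `write_to_sheet`) and the `domains.csv` file are placeholders for your own recipe and destination, since in the Claude Desktop version Claude calls the recipe itself over MCP.

```python
# Sketch of the per-domain enrichment loop. Only the Anthropic Messages API
# call is a real API; the Stekpad helpers below are hypothetical placeholders
# for your recipe and destination.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20241022"

def run_stekpad_recipe(domain: str) -> dict:
    """Placeholder: run the LinkedIn scraping recipe for one domain."""
    raise NotImplementedError("swap in your Stekpad recipe call")

def write_to_sheet(row: dict) -> None:
    """Placeholder: append one enriched row to the CRM sheet."""
    raise NotImplementedError("swap in your Stekpad destination")

with open("domains.csv") as f:
    domains = [line.strip() for line in f if line.strip()]

for domain in domains:
    profile = run_stekpad_recipe(domain)
    msg = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                f"Summarise this LinkedIn profile for a CRM row. Domain: {domain}\n"
                f"{json.dumps(profile)}\n"
                "Reply with JSON only, keys: company, headcount, industry, notes."
            ),
        }],
    )
    # Assumes Claude replies with bare JSON; production code should validate.
    write_to_sheet({"domain": domain, **json.loads(msg.content[0].text)})
```

Swap the two placeholders for your Stekpad recipe and destination and this loop is the entire pipeline.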

Keep exploring

blog (contrarian)

Agents Need Live Data. Most Still Don't Have It.

**Use the contrarian voice from `docs/brand-voice.md`.** The core argument: every AI agent — Claude, GPT-4o, Gemini — has a training cutoff. The web moves daily. A company changes its pricing, a person changes jobs, a product launches, a competitor drops a feature — and your agent still knows the old version. Retrieval-augmented generation helps for documents you index. It does nothing for a live LinkedIn profile, a Google Maps listing, or a competitor's pricing page that changed yesterday. Name the gap directly: agents without live web access are answering from a snapshot, not the present. Stekpad's MCP server is the minimal-friction solution: register the server, call a recipe, get a structured response from the live page in two seconds. Show three concrete examples with the exact prompts.
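
If the eventual post wants to make "register the server, call a recipe" concrete, a minimal sketch of the registration step could look like the following, assuming Claude Desktop's standard `mcpServers` config format. The `stekpad` entry name, the `npx` command, and the package name are assumptions (check the Stekpad docs for the real ones), and the path shown is the macOS location of `claude_desktop_config.json`.

```python
# Hypothetical registration sketch: add a "stekpad" entry to Claude Desktop's
# claude_desktop_config.json. The mcpServers block is Claude Desktop's standard
# MCP config format; the server name, command, and package name are assumptions.
import json
from pathlib import Path

config_path = (
    Path.home() / "Library" / "Application Support" / "Claude"
    / "claude_desktop_config.json"
)  # macOS location; adjust for your OS

config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["stekpad"] = {
    "command": "npx",                      # assumption: server launched via npx
    "args": ["-y", "stekpad-mcp-server"],  # assumption: package name
}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print("Registered Stekpad MCP server; restart Claude Desktop to pick it up.")
```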

blog (contrarian)

Beyond Cron Jobs: Why Scraping Schedules Are the Wrong Model

**Use the contrarian voice from `docs/brand-voice.md`.** Take a strong position: cron-based scraping is a cargo cult from the server-side ETL era, not a design choice appropriate for 2026 workflows. Name the problem specifically: you schedule a 6am job, the data you need arrives at 3am — or a user's Claude session needs a live answer at 2pm and the next cron run is in 4 hours. Contrast two models: batch (cron, browse.ai robots, Apify schedules) vs on-demand (MCP calls, Zapier triggers, user-initiated). Argue that the only scraping model that fits agents, sales reps, and real-time pipelines is on-demand — triggered by the thing that needs the data. Stekpad supports both, but on-demand is the default because it matches how people actually work.

blog

MCP Explained for Growth Teams: Give Claude Live Web Data

Plain-English explanation of the Model Context Protocol for a non-developer growth audience. Covers: what MCP is (Claude's way of calling external tools), why it matters for web data (live results vs stale training data), how the Stekpad MCP server works in practice (install once, call a recipe from Claude, get structured rows back), and three concrete growth workflows (enrich a CRM, monitor competitor pricing, build a lead list). No code required in any example.

blog (contrarian)

Why Every Scraper Built for Cron Is Broken for Agents

**Use the contrarian voice from `docs/brand-voice.md`.** State strong positions and name targets: Apify, Firecrawl, browse.ai — all built for scheduled batch jobs, not synchronous agent calls. Back every claim with specifics: Apify actor cold-start times, Firecrawl's server-side rendering pipeline latency, browse.ai's robot-definition paradigm. The core argument: agents need a call-and-response data layer, not a pipeline. Stekpad's browser-native architecture is the only design that matches that requirement — a Claude session calls a recipe and gets rows back in under 2 seconds from the page the user has open. No cloud proxy. No phantom credits. No cron.

Try Stekpad free

The extension is free forever. Pro at €12/month or €99 lifetime.
