Feature

The MCP server for web scraping

Stekpad ships a local Model Context Protocol server that exposes every saved recipe as a callable tool. Claude Desktop, Cursor, ChatGPT Desktop, and any MCP-compatible client can invoke recipes and receive structured rows in seconds.

This is the first web scraper built natively for AI agents. Your agent calls a recipe instead of waiting for an overnight CSV export. Live data, two-second latency, zero credentials shared.
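Registering the server is one config entry in an MCP client. A minimal sketch for Claude Desktop's `claude_desktop_config.json` — the `mcpServers` shape is Claude Desktop's standard format, but the `stekpad-mcp` command name here is an assumption, not Stekpad's documented launcher:

```json
{
  "mcpServers": {
    "stekpad": {
      "command": "stekpad-mcp"
    }
  }
}
```

After a restart, every saved recipe appears as a callable tool in the client's tool list; no API keys leave your machine because the server runs locally.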

Keep exploring


blog (contrarian)

Agents Need Live Data. Most Still Don't Have It.

**Use the contrarian voice from `docs/brand-voice.md`.** The core argument: every AI agent — Claude, GPT-4o, Gemini — has a training cutoff. The web moves daily. A company changes its pricing, a person changes jobs, a product launches, a competitor drops a feature — and your agent still knows the old version. Retrieval-augmented generation helps for documents you index. It does nothing for a live LinkedIn profile, a Google Maps listing, or a competitor's pricing page that changed yesterday. Name the gap directly: agents without live web access are answering from a snapshot, not the present. Stekpad's MCP server is the minimal-friction solution: register the server, call a recipe, get a structured response from the live page in two seconds. Show three concrete examples with the exact prompts.

blog (contrarian)

Beyond Cron Jobs: Why Scraping Schedules Are the Wrong Model

**Use the contrarian voice from `docs/brand-voice.md`.** Take a strong position: cron-based scraping is a cargo cult from the server-side ETL era, not a design choice appropriate for 2026 workflows. Name the problem specifically: you schedule a 6am job, the data you need arrives at 3am — or a user's Claude session needs a live answer at 2pm and the next cron run is in 4 hours. Contrast two models: batch (cron, browse.ai robots, Apify schedules) vs on-demand (MCP calls, Zapier triggers, user-initiated). Argue that the only scraping model that fits agents, sales reps, and real-time pipelines is on-demand — triggered by the thing that needs the data. Stekpad supports both, but on-demand is the default because it matches how people actually work.

blog

Build a Data Enrichment Pipeline with Claude and Stekpad

Practical walkthrough of a three-step enrichment pipeline: (1) configure the Stekpad MCP server in Claude Desktop, (2) write a Claude prompt that calls a recipe and processes the returned rows (summarize, classify, score), (3) output the enriched data to Google Sheets. Uses a concrete example: scrape a list of companies from LinkedIn, pass each to Claude for ICP scoring, write scored rows to a Sheet. Includes the exact Claude prompt template. No Python. Non-developers can follow.

blog

MCP Explained for Growth Teams: Give Claude Live Web Data

Plain-English explanation of the Model Context Protocol for a non-developer growth audience. Covers: what MCP is (Claude's way of calling external tools), why it matters for web data (live results vs stale training data), how the Stekpad MCP server works in practice (install once, call a recipe from Claude, get structured rows back), and three concrete growth workflows (enrich a CRM, monitor competitor pricing, build a lead list). No code required in any example.

Try Stekpad free

The extension is free forever. Pro at €12/month or €99 lifetime.
