| Endpoint | Cost | What it does |
|---|---|---|
| scrape | 1 credit | Fetch one URL. Returns Markdown, JSON, HTML, or a screenshot. |
| map | 1 credit per 1,000 URLs returned | Discover all the URLs on a site without fetching the bodies. |
map walks /robots.txt, every sitemap referenced there, and sitemap indexes recursively, and can optionally run a shallow link crawl. It returns only the URLs, with minimal metadata. It does not render pages or transfer bodies; it is the fast, cheap pre-step before a targeted crawl.
A common pattern: map → filter → crawl. Run map first to find every product URL, drop the ones you do not want, and pass the survivors to crawl.
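The map → filter → crawl pattern can be sketched in Python. Note the assumptions: only `/v1/map` is documented on this page, so the `/v1/crawl` request shape and the helper names below are illustrative, not part of the official API.

```python
import fnmatch
import requests

API = "https://api.stekpad.com/v1"
HEADERS = {"Authorization": "Bearer stkpd_live_..."}  # your API key

def run_map(domain: str) -> list[str]:
    """Step 1: discover URLs without fetching any bodies."""
    resp = requests.post(f"{API}/map", headers=HEADERS,
                         json={"url": domain, "include_paths": ["/product/**"]})
    resp.raise_for_status()
    return [row["url"] for row in resp.json()["urls"]]

def keep(urls: list[str], pattern: str) -> list[str]:
    """Step 2: drop the URLs you do not want (client-side glob filter)."""
    return [u for u in urls if fnmatch.fnmatch(u, pattern)]

def run_crawl(urls: list[str]) -> str:
    """Step 3: pass the survivors to crawl.
    The /v1/crawl payload shape here is an assumption, not documented above."""
    resp = requests.post(f"{API}/crawl", headers=HEADERS, json={"urls": urls})
    resp.raise_for_status()
    return resp.json()["run_id"]
```

Filtering client-side on top of `include_paths` costs nothing extra: map already charged per URL returned, so any pruning before crawl only saves credits.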
Pick your language. Every snippet is a real, runnable example.
```shell
curl -X POST https://api.stekpad.com/v1/map \
  -H "Authorization: Bearer stkpd_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "sources": ["sitemap", "robots"],
    "include_paths": ["/product/**"],
    "max_urls": 10000
  }'
```

| Name | Type | Required | Description |
|---|---|---|---|
| url | string | required | Seed URL or domain. |
| sources | string[] | optional | Any of sitemap, robots, links. Defaults to ["sitemap", "robots"]. |
| include_paths | string[] | optional | Glob filter applied to the result. |
| exclude_paths | string[] | optional | Glob filter applied to the result. |
| max_urls | int | optional | Hard cap. Default 50,000. |
| follow_sitemap_index | boolean | optional | Walk sitemap indexes recursively. Default true. |
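The path filters are globs. The server's exact glob semantics are not specified above, so the sketch below is a plausible client-side equivalent, assuming `**` matches any depth and that a URL must match some include pattern and no exclude pattern:

```python
from fnmatch import fnmatch

def passes(path: str, include: list[str], exclude: list[str]) -> bool:
    # A path survives if it matches any include glob (or include is empty)
    # and matches no exclude glob. fnmatch treats `*` as matching slashes
    # too, so "/product/**" matches any depth under /product/.
    included = not include or any(fnmatch(path, g) for g in include)
    excluded = any(fnmatch(path, g) for g in exclude)
    return included and not excluded

passes("/product/42", ["/product/**"], [])                      # True
passes("/blog/post", ["/product/**"], [])                       # False
passes("/product/old/1", ["/product/**"], ["/product/old/*"])   # False
```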
```json
{
  "run_id": "run_01HZ...",
  "urls": [
    { "url": "https://example.com/product/1", "source": "sitemap", "last_modified": "2026-04-01" },
    { "url": "https://example.com/product/2", "source": "sitemap" }
  ],
  "total": 4231,
  "credits_charged": 5
}
```

| Code | When |
|---|---|
| no_sitemap_found | The site has neither a sitemap nor a usable robots.txt, and links was not in sources. |
| sitemap_too_large | A single sitemap exceeded max_urls and follow_sitemap_index was false. |
| target_unreachable | The robots.txt request failed. |
Every error includes a code, a human message, and a guidance field with the exact action to take.
Get an API key, paste the curl, watch the row land in your dataset.