A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
firecrawl_scrape
Scrape content from a single webpage with advanced options for content extraction. Supports various formats including markdown, HTML, and screenshots, and can execute custom actions like clicking or scrolling before scraping.
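An illustrative tool call (argument names follow the Firecrawl scrape API; treat the exact parameters as assumptions for your server version):

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "actions": [{ "type": "scroll", "direction": "down" }]
  }
}
```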
firecrawl_batch_scrape
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing. Returns a job ID that can be used to check status.
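A sketch of a batch request, assuming the tool accepts a `urls` array plus shared scrape `options`:

```json
{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example.com/a", "https://example.com/b"],
    "options": { "formats": ["markdown"], "onlyMainContent": true }
  }
}
```

The returned job ID is what you pass to `firecrawl_check_batch_status`.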
firecrawl_check_batch_status
Check the status of a batch scraping operation. Takes a batch job ID as input to track progress.
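For example, using an `id` value returned by `firecrawl_batch_scrape` (the ID shown here is a placeholder):

```json
{
  "name": "firecrawl_check_batch_status",
  "arguments": { "id": "batch_1" }
}
```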
firecrawl_map
Discover URLs from a starting point using both sitemap.xml and HTML link discovery. Supports search filtering, subdomain inclusion, and URL limits.
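A possible mapping call; the `search`, `includeSubdomains`, and `limit` parameter names mirror the Firecrawl map API and should be checked against your server version:

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com",
    "search": "docs",
    "includeSubdomains": false,
    "limit": 100
  }
}
```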
firecrawl_crawl
Start an asynchronous crawl of multiple pages from a starting URL. Supports depth control, path filtering, and webhook notifications.
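An example crawl with depth and page-count limits (parameter names assumed from the Firecrawl crawl API):

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 50
  }
}
```

Because the crawl is asynchronous, the response is a job ID rather than page content.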
firecrawl_check_crawl_status
Check the status of a crawl operation. Takes a crawl job ID to monitor progress and retrieve results.
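A minimal status check, with a placeholder job ID:

```json
{
  "name": "firecrawl_check_crawl_status",
  "arguments": { "id": "crawl_123" }
}
```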
firecrawl_search
Search the web and optionally extract content from search results. Returns SERP results by default, or full page content when scrapeOptions are provided.
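A sketch of a search call that also scrapes each result; `scrapeOptions` here takes the same shape as the scrape tool's options (assumed, not verified against every version):

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers",
    "limit": 5,
    "scrapeOptions": { "formats": ["markdown"], "onlyMainContent": true }
  }
}
```

Omitting `scrapeOptions` returns only the SERP results.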
firecrawl_extract
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction with JSON schema support.
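An illustrative extraction request combining a natural-language `prompt` with a JSON schema (field names in the schema are made up for the example):

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/products"],
    "prompt": "Extract the product name and price from the page",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" }
      },
      "required": ["name", "price"]
    }
  }
}
```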
firecrawl_deep_research
Conduct deep research on a query using web crawling, search, and AI analysis. Supports configurable depth, time limits, and URL analysis limits.
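A possible research call showing the three limit knobs; the parameter names (`maxDepth`, `timeLimit` in seconds, `maxUrls`) are assumptions to verify against your version:

```json
{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}
```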
firecrawl_generate_llmstxt
Generate a standardized `llms.txt` file for a given URL. Provides context about how LLMs should interact with the website.
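An example generation call; `maxUrls` and `showFullText` are assumed parameter names for capping the pages analyzed and including full page text:

```json
{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}
```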
To run the server with an MCP client, add it to your client's configuration:

```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "mcp-server-firecrawl"],
      "env": {
        "FIRE_CRAWL_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```