Scraping Tools
1. firecrawl_scrape
Scrape content from a single webpage with advanced options for content extraction. Supports various formats including markdown, HTML, and screenshots, and can execute custom actions like clicking or scrolling before scraping.
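As a sketch, a single-page scrape call might take arguments like the following. The parameter names (`formats`, `actions`, the action `type` values) are illustrative assumptions; consult the tool's schema for the exact fields:

```python
# Hypothetical argument payload for firecrawl_scrape (names are illustrative).
scrape_args = {
    "url": "https://example.com",
    "formats": ["markdown", "html"],   # desired output formats
    # Optional actions executed on the page before content is extracted.
    "actions": [
        {"type": "click", "selector": "#accept-cookies"},
        {"type": "scroll", "direction": "down"},
    ],
}
```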
2. firecrawl_batch_scrape
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing. Returns a job ID that can be used to check status.
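A minimal sketch of a batch-scrape payload, assuming a `urls` list plus shared `options` (illustrative names, not a confirmed schema):

```python
# Hypothetical batch-scrape payload: many URLs sharing one set of options.
batch_args = {
    "urls": [
        "https://example.com/a",
        "https://example.com/b",
    ],
    "options": {"formats": ["markdown"]},
}
# The tool responds with a job ID (e.g. {"id": "batch_123"}) used for polling.
```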
3. firecrawl_check_batch_status
Check the status of a batch scraping operation. Takes a batch job ID as input to track progress.
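Because batch jobs are asynchronous, callers typically poll until completion. A minimal polling sketch, with a stub standing in for the actual status-check call (the `status`/`completed` response shape is an assumption):

```python
import time

def poll_status(check_fn, job_id, interval=2.0, max_tries=5):
    """Poll a status-check function until the job reports completion."""
    status = {}
    for _ in range(max_tries):
        status = check_fn(job_id)
        if status.get("status") == "completed":
            return status
        time.sleep(interval)
    return status

# Stubbed responses standing in for real firecrawl_check_batch_status calls.
fake_responses = iter([{"status": "scraping"}, {"status": "completed", "data": []}])
result = poll_status(lambda _id: next(fake_responses), "batch_123", interval=0.0)
```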
Discovery and Crawling Tools
4. firecrawl_map
Discover URLs from a starting point using both sitemap.xml and HTML link discovery. Supports search filtering, subdomain inclusion, and URL limits.
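A sketch of the mapping arguments, assuming illustrative parameter names (`search`, `includeSubdomains`, `limit`); the server's schema is authoritative:

```python
# Hypothetical firecrawl_map arguments (names are illustrative).
map_args = {
    "url": "https://example.com",
    "search": "docs",           # keep only URLs matching a search term
    "includeSubdomains": True,  # also discover links on subdomains
    "limit": 100,               # cap the number of URLs returned
}
```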
5. firecrawl_crawl
Start an asynchronous crawl of multiple pages from a starting URL. Supports depth control, path filtering, and webhook notifications.
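The depth, path, and webhook controls might be expressed like this (parameter names such as `maxDepth` and `excludePaths` are assumptions for illustration):

```python
# Hypothetical firecrawl_crawl arguments showing depth and path control.
crawl_args = {
    "url": "https://example.com",
    "maxDepth": 2,                 # follow links at most two levels deep
    "limit": 50,                   # stop after 50 pages
    "excludePaths": ["/blog/.*"],  # skip paths matching these patterns
    "webhook": "https://example.com/hooks/crawl-done",  # notify on completion
}
```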
6. firecrawl_check_crawl_status
Check the status of a crawl operation. Takes a crawl job ID to monitor progress and retrieve results.
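A possible request/response shape for status checks (the `completed`/`total` fields are assumed for illustration), showing how a caller might compute progress:

```python
# Hypothetical status request and a possible response shape.
status_args = {"id": "crawl_456"}
status_response = {
    "status": "scraping",
    "completed": 12,  # pages finished so far
    "total": 50,      # pages queued overall
}
progress = status_response["completed"] / status_response["total"]
```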
Search and Analysis Tools
7. firecrawl_search
Search the web and optionally extract content from search results. Returns SERP results by default, or full page content when scrapeOptions are provided.
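A sketch of the two modes: omitting `scrapeOptions` yields plain SERP entries, while including it requests full scraped content (field names are illustrative):

```python
# Hypothetical search arguments; scrapeOptions switches the result from
# plain SERP entries to full scraped page content.
search_args = {
    "query": "firecrawl mcp server",
    "limit": 5,
    "scrapeOptions": {"formats": ["markdown"]},
}
# SERP-only variant: the same call without scrapeOptions.
serp_only = {k: v for k, v in search_args.items() if k != "scrapeOptions"}
```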
8. firecrawl_extract
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction with JSON schema support.
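A sketch of an extraction payload pairing a natural-language prompt with a JSON schema for the desired output (the `urls`/`prompt`/`schema` field names are assumptions; the schema itself is standard JSON Schema):

```python
# Hypothetical extract payload: a prompt plus a JSON schema for the result.
extract_args = {
    "urls": ["https://example.com/pricing"],
    "prompt": "Extract the product plans and their monthly prices.",
    "schema": {
        "type": "object",
        "properties": {
            "plans": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "price": {"type": "number"},
                    },
                },
            }
        },
    },
}
```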
9. firecrawl_deep_research
Conduct deep research on a query using web crawling, search, and AI analysis. Supports configurable depth, time limits, and URL analysis limits.
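The three limits mentioned above might be expressed as follows (parameter names `maxDepth`, `timeLimit`, and `maxUrls` are illustrative assumptions):

```python
# Hypothetical deep-research arguments bounding depth, time, and URL count.
research_args = {
    "query": "state of WebAssembly server-side runtimes",
    "maxDepth": 3,     # levels of follow-up exploration
    "timeLimit": 120,  # seconds before the research run stops
    "maxUrls": 50,     # cap on pages analyzed
}
```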
10. firecrawl_generate_llmstxt
Generate a standardized LLMs.txt file for a given URL. Provides context about how LLMs should interact with the website.
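A sketch of the generation payload (the `maxUrls` parameter is an assumption) alongside a minimal example of what an LLMs.txt file conventionally looks like:

```python
# Hypothetical generation payload and a minimal example of LLMs.txt output.
llmstxt_args = {"url": "https://example.com", "maxUrls": 10}
example_llmstxt = "\n".join([
    "# Example Site",
    "> One-line summary of the site for LLM consumers.",
    "## Docs",
    "- [Getting Started](https://example.com/docs/start)",
])
```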