Crawl websites and convert their pages into clean, readable Markdown content using Mozilla's Readability and Turndown.
This package combines website crawling with Mozilla's Readability (the same technology behind Firefox's Reader View) and Turndown to:
- Crawl websites and follow links up to a specified depth
- Extract the main content from each page, removing clutter and ads
- Convert the cleaned content into well-formatted Markdown
- Preserve the site structure in the output files
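Conceptually, the pipeline looks like the sketch below. This is only an illustration of the Readability + Turndown flow, not crawldown's actual internals (crawldown drives a real browser via Playwright); the use of jsdom and fetch here is an assumption made to keep the example self-contained.

```javascript
// Sketch of the Readability + Turndown pipeline (illustrative only).
import { Readability } from "@mozilla/readability";
import { JSDOM } from "jsdom";
import TurndownService from "turndown";

const url = "https://example.com";
const html = await (await fetch(url)).text();

// Parse the HTML and let Readability strip navigation, ads, and sidebars.
const dom = new JSDOM(html, { url });
const article = new Readability(dom.window.document).parse();
if (!article) throw new Error("Readability could not extract an article");

// Convert the cleaned article HTML into Markdown.
const markdown = new TurndownService().turndown(article.content);
console.log(`# ${article.title}\n\n${markdown}`);
```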
Features:

- Intelligent website crawling with configurable depth
- Clean extraction of main content from web pages using Mozilla's Readability
- Conversion of readable HTML to Markdown using Turndown
- Removal of ads, navigation, sidebars, and other distracting elements
- Preservation of important content structure and formatting
- Website hierarchy mirrored in the output directory structure
- Flexible output options including single file or directory structure
- Smart URL handling and deduplication during crawling
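For a sense of what the URL handling and deduplication involve, here is a minimal sketch of normalizing links and skipping repeats; the normalization rules shown are assumptions for illustration, not crawldown's exact behavior.

```javascript
// Illustration of URL normalization and deduplication during a crawl.
const seen = new Set();

function shouldVisit(link, baseUrl) {
  let url;
  try {
    url = new URL(link, baseUrl);
  } catch {
    return false; // skip malformed links
  }
  url.hash = "";                            // "#section" anchors point to the same page
  const key = url.href.replace(/\/$/, "");  // treat a trailing slash as equivalent
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}

console.log(shouldVisit("/docs/", "https://example.com"));                          // true
console.log(shouldVisit("https://example.com/docs#intro", "https://example.com"));  // false (duplicate)
```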
Install the package with npm, pnpm, or yarn:

npm install crawldown
# or
pnpm add crawldown
# or
yarn add crawldown
The package provides a command-line interface for easy conversion:
# Basic usage
npx crawldown https://example.com
# Specify crawl depth (default: 0)
npx crawldown https://example.com -d 2
# Custom output directory (when not using --single-file)
npx crawldown https://example.com -o my-docs
# Output to a single file (will create result.md)
npx crawldown https://example.com --single-file -o result
# Output to a single file with explicit .md extension
npx crawldown https://example.com --single-file -o result.md
# Force scraping even if page hasn't loaded completely
npx crawldown https://example.com --force
# Force scraping with short flag
npx crawldown https://example.com -f
# Set custom navigation timeout (default: 10000ms)
npx crawldown https://example.com --timeout 30000
# Set timeout with short flag
npx crawldown https://example.com -t 30000
# Enable verbose logging
npx crawldown https://example.com -v
# Specify custom browser path
npx crawldown https://example.com --browser-path /path/to/chrome
# Set number of concurrent pages (default: 4)
npx crawldown https://example.com -c 8
# Define a specific scope URL for crawling
npx crawldown https://example.com --scope-url https://example.com/docs
Options:

- `url`: URL to scrape (required, positional argument)
- `-d, --depth`: Number of levels to crawl (default: "0")
- `-v, --verbose`: Enable verbose logging (default: false)
- `--browser-path`: Path to the browser executable. Uses the Playwright default if not provided
- `-o, --output`: When used with `--single-file`, specifies the output filename; otherwise specifies the output directory (default: "output")
- `--single-file`: Output all results to a single Markdown file instead of a directory structure (default: false)
- `-c, --concurrency`: Number of concurrent pages to use (default: "4")
- `--scope-url`: URL that defines the crawling scope. Links outside this scope are ignored
- `-f, --force`: If a page is taking too long to load, capture its content 1 second before the timeout would occur, so content is retrieved even if the page never fully loads (default: false)
- `-t, --timeout`: Navigation timeout in milliseconds (default: 10000)
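To clarify what `--scope-url` does, the sketch below shows prefix-based scope filtering. The `isInScope` helper and the exact matching rule are illustrative assumptions, not crawldown's actual implementation.

```javascript
// Rough illustration of scope filtering: a link is only queued for crawling
// if it falls under the scope URL.
const scopeUrl = "https://example.com/docs";

function isInScope(link) {
  try {
    const resolved = new URL(link, scopeUrl).href;
    return resolved.startsWith(scopeUrl);
  } catch {
    return false; // ignore malformed links
  }
}

console.log(isInScope("https://example.com/docs/getting-started")); // true
console.log(isInScope("https://example.com/blog/post"));            // false
```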
You can also use the package programmatically in your Node.js applications:
import { crawl } from "crawldown";
async function main() {
const results = await crawl({
url: "https://example.com", // Required: URL to crawl
depth: 1, // Optional: how deep to crawl links (default: 0)
browserPath: "/path/to/chrome", // Optional: custom browser executable path
concurrency: 4, // Optional: number of concurrent pages (default: 4)
scopeUrl: "https://example.com/docs", // Optional: URL that defines crawling scope
force: false, // Optional: force scraping before timeout (default: false)
timeout: 10000, // Optional: navigation timeout in ms (default: 10000)
});
// Each result contains:
// - url: string
// - title: string
// - markdown: string
for (const result of results) {
console.log(`Title: ${result.title}`);
console.log(`URL: ${result.url}`);
console.log(`Content:\n${result.markdown}`);
}
}

main();
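Building on the example above, a typical follow-up is to write each result to disk. The URL-to-filename mapping below is purely illustrative; the CLI's own directory layout may differ.

```javascript
import { mkdir, writeFile } from "node:fs/promises";
import path from "node:path";
import { crawl } from "crawldown";

const results = await crawl({ url: "https://example.com", depth: 1 });

for (const { url, title, markdown } of results) {
  // Derive a filename from the URL path; this mapping is illustrative only.
  const { hostname, pathname } = new URL(url);
  const slug = pathname.replace(/\/+$/, "").replaceAll("/", "_") || "index";
  const file = path.join("output", hostname, `${slug}.md`);

  await mkdir(path.dirname(file), { recursive: true });
  await writeFile(file, `# ${title}\n\n${markdown}\n`);
}
```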
This project was inspired by:
- repomix - an awesome CLI tool that combines whole repositories into a single Markdown file
- crawl4ai - a feature-rich Python web crawler
Mozilla Public License 2.0