ts-web-scraper

A powerful, type-safe web scraping library for TypeScript and Bun with zero external dependencies. Built entirely on Bun's native APIs for maximum performance and minimal footprint.

Features

Core Scraping

  • πŸš€ Zero Dependencies - Built entirely on Bun native APIs
  • πŸ’ͺ Fully Typed - Complete TypeScript support with type inference
  • ⚑️ High Performance - Optimized for speed with native Bun performance
  • 🎨 Client-Side Rendering - Support for JavaScript-heavy sites (React, Vue, Next.js)
  • 🌐 Pagination - Automatic pagination detection and traversal
  • πŸ€– Ethical Scraping - Robots.txt support and user-agent management

Data Extraction & Analysis

  • πŸ“Š Content Extraction - Readability-style main content extraction
  • πŸ“§ Contact Information - Automatic extraction of emails, phones, addresses, social profiles
  • 🏷️ Metadata Extraction - Open Graph, Twitter Cards, Schema.org structured data
  • 🌍 Language Detection - Multi-language detection with confidence scoring
  • β™Ώ Accessibility Analysis - WCAG compliance checking with scoring
  • ⚑ Performance Metrics - Resource analysis and optimization hints
  • πŸ€– ML-Ready Features - Sentiment analysis, entity extraction, text statistics
  • πŸ” Change Detection - Track content changes over time with diff algorithms

Performance & Reliability

  • πŸ”„ Rate Limiting - Built-in token bucket rate limiter with burst support
  • πŸ’Ύ Smart Caching - LRU cache with TTL support and disk persistence
  • πŸ” Automatic Retries - Exponential backoff retry logic with budgets
  • πŸ“ˆ Monitoring - Performance metrics and analytics
  • πŸͺ Session Management - Cookie jar and session persistence

Data Processing

  • πŸ”§ Pipeline Architecture - Powerful pipeline-based data extraction and transformation
  • 🎯 Validation - Built-in schema validation for extracted data
  • πŸ“ Multiple Export Formats - JSON, CSV, XML, YAML, Markdown, HTML
  • πŸ” Security Tested - Comprehensive XSS, injection, and edge case testing

Installation

bun add ts-web-scraper

Quick Start

import { createScraper } from 'ts-web-scraper'

// Create a scraper instance
const scraper = createScraper({
  rateLimit: { requestsPerSecond: 2 },
  cache: { enabled: true, ttl: 60000 },
  retry: { maxRetries: 3 },
})

// Scrape a website
const result = await scraper.scrape('https://example.com', {
  extract: doc => ({
    title: doc.querySelector('title')?.textContent,
    headings: Array.from(doc.querySelectorAll('h1')).map(h => h.textContent),
  }),
})

console.log(result.data)
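
The result object's shape is not spelled out here, but inferring from the fields this README uses, it looks roughly like the following sketch (not the library's actual type declaration):

// Approximate result shape, inferred from the examples in this README
interface ScrapeResult<T> {
  success: boolean // false when extraction or validation failed
  data: T // whatever your extract callback returned
  error?: unknown // set on failure (exact type assumed)
  changed?: boolean // only present when trackChanges is enabled
}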

Core Concepts

Scraper

The main scraper class provides a unified API for all scraping operations:

import { createScraper } from 'ts-web-scraper'

const scraper = createScraper({
  // Rate limiting
  rateLimit: {
    requestsPerSecond: 2,
    burstSize: 5
  },

  // Caching
  cache: {
    enabled: true,
    ttl: 60000,
    maxSize: 100
  },

  // Retry logic
  retry: {
    maxRetries: 3,
    initialDelay: 1000
  },

  // Performance monitoring
  monitor: true,

  // Change tracking
  trackChanges: true,

  // Cookies & sessions
  cookies: { enabled: true },
})

Data Extraction

Extract and transform data using pipelines:

import { extractors, pipeline } from 'ts-web-scraper'

const extractProducts = pipeline()
  .step(extractors.structured('.product', {
    name: '.product-name',
    price: '.product-price',
    rating: '.rating',
  }))
  .map('parse-price', p => ({
    ...p,
    price: Number.parseFloat(p.price.replace(/[^0-9.]/g, '')),
  }))
  .filter('in-stock', p => p.price > 0)
  .sort('by-price', (a, b) => a.price - b.price)

const result = await extractProducts.execute(document)
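
Because scrape() hands your extract callback the parsed document and a pipeline exposes execute(doc), the two compose directly. A sketch, assuming extract callbacks may return a promise:

const result = await scraper.scrape('https://example.com/shop', {
  extract: doc => extractProducts.execute(doc),
})
console.log(result.data) // in-stock products, sorted by price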

Change Detection

Track content changes over time:

const scraper = createScraper({ trackChanges: true })

// First scrape
const result1 = await scraper.scrape('https://example.com', {
  extract: doc => ({ price: doc.querySelector('.price')?.textContent }),
})
// result1.changed === undefined (no previous snapshot)

// Second scrape
const result2 = await scraper.scrape('https://example.com', {
  extract: doc => ({ price: doc.querySelector('.price')?.textContent }),
})
// result2.changed === false (if price hasn't changed)
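
A straightforward use is a polling loop that re-scrapes on an interval and reacts only when the snapshot differs; a sketch using nothing beyond the changed flag shown above:

// Re-check every 5 minutes and log only when the price changes
while (true) {
  const result = await scraper.scrape('https://example.com', {
    extract: doc => ({ price: doc.querySelector('.price')?.textContent }),
  })
  if (result.changed)
    console.log('Price changed:', result.data.price)
  await Bun.sleep(5 * 60 * 1000)
}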

Export Data

Export scraped data to multiple formats:

import { exportData, saveExport } from 'ts-web-scraper'

// Export to JSON
const json = exportData(data, { format: 'json', pretty: true })

// Export to CSV
const csv = exportData(data, { format: 'csv' })

// Save to file (format auto-detected from extension)
await saveExport(data, 'output.csv')
await saveExport(data, 'output.json')
await saveExport(data, 'output.xml')
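
The remaining formats from the feature list presumably follow the same pattern; the format keys below are assumed to mirror the file extensions:

const yaml = exportData(data, { format: 'yaml' }) // format key assumed
const markdown = exportData(data, { format: 'markdown' }) // format key assumed
await saveExport(data, 'report.html')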

Advanced Features

Pagination

Automatically traverse paginated content:

for await (const page of scraper.scrapeAll('https://example.com/posts', {
  extract: async doc => ({
    posts: await extractors.structured('article', {
      title: 'h2',
      content: '.content',
    }).execute(doc),
  }),
}, { maxPages: 10 })) {
  console.log(`Page ${page.pageNumber}:`, page.data)
}
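
Since scrapeAll() is an async iterator, gathering every page's results into a single array takes one push per page:

// Collect post titles from up to 10 pages into one array
const allTitles: (string | null)[] = []
for await (const page of scraper.scrapeAll('https://example.com/posts', {
  extract: doc => Array.from(doc.querySelectorAll('article h2')).map(h => h.textContent),
}, { maxPages: 10 })) {
  allTitles.push(...page.data)
}
console.log(`Collected ${allTitles.length} titles across all pages`)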

Performance Monitoring

Track and analyze scraping performance:

const scraper = createScraper({ monitor: true })

await scraper.scrape('https://example.com')
await scraper.scrape('https://example.com/page2')

const stats = scraper.getStats()
console.log(stats.totalRequests) // 2
console.log(stats.averageDuration) // Average time per request
console.log(stats.cacheHitRate) // Cache effectiveness

const report = scraper.getReport()
console.log(report) // Formatted performance report

Content Validation

Validate extracted data against schemas:

const result = await scraper.scrape('https://example.com', {
  extract: doc => ({
    title: doc.querySelector('title')?.textContent,
    price: Number.parseFloat(doc.querySelector('.price')?.textContent || '0'),
  }),
  validate: {
    title: { type: 'string', required: true },
    price: { type: 'number', min: 0, required: true },
  },
})

if (result.success) {
  // Data is valid and typed
  console.log(result.data.title, result.data.price)
}
else {
  console.error(result.error)
}

Documentation

For full documentation, visit https://ts-web-scraper.netlify.app

Testing

bun test

The test suite provides comprehensive coverage of:

  • Core scraping functionality (static & client-side rendered)
  • Content extraction (main content, contact info, metadata)
  • Analysis features (accessibility, performance, ML, language detection)
  • Rate limiting, caching, and retry logic
  • Data extraction pipelines and validation
  • Change detection and monitoring
  • Export formats and session management
  • Security (XSS, injection attacks, sanitization)
  • Edge cases (malformed HTML, extreme values, encoding issues)

Changelog

Please see our releases page for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Community

For help, discussion about best practices, or any other conversation that would benefit from being searchable:

Discussions on GitHub

For casual chit-chat with others using this package:

Join the Stacks Discord Server

Postcardware

"Software that is free, but hopes for a postcard." We love receiving postcards from around the world showing where Stacks is being used! We showcase them on our website too.

Our address: Stacks.js, 12665 Village Ln #2306, Playa Vista, CA 90094, United States 🌎

Sponsors

We would like to extend our thanks to the following sponsors for funding Stacks development. If you are interested in becoming a sponsor, please reach out to us.

License

The MIT License (MIT). Please see LICENSE for more information.

Made with πŸ’™
