Infinity Crawler

A simple but powerful web crawler library in C#


Features

  • Obeys robots.txt (crawl delay & allow/disallow)
  • Obeys in-page robots rules (X-Robots-Tag header and <meta name="robots" /> tag)
  • Uses sitemap.xml to seed the initial crawl of the site
  • Built around a parallel task async/await system (see the sketch after this list)
  • Swappable request and content processors, allowing greater customisation
  • Auto-throttling (see below)
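
Because the crawler is built on async/await, a crawl is just an awaitable task and composes with standard task combinators. Below is a minimal sketch of crawling two sites concurrently, using only the API shown in the usage example further down; the URLs are placeholders, and it assumes CrawlSettings has usable defaults when constructed without arguments.

using System;
using System.Linq;
using System.Threading.Tasks;
using InfinityCrawler;

// Run two crawls concurrently; Crawl returns an awaitable task, so
// standard composition via Task.WhenAll applies. One Crawler instance
// is created per crawl to keep each crawl independent.
var crawls = new[] { "http://example.org/", "http://example.net/" }
    .Select(url => new Crawler().Crawl(new Uri(url), new CrawlSettings()));
var results = await Task.WhenAll(crawls);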

Polite Crawling

The crawler is built around fast but "polite" crawling of websites. This is achieved through a set of settings that let you tune delays and throttling behaviour.

You can control:

  • The number of simultaneous requests
  • The delay between requests starting (note: if robots.txt defines a crawl-delay for the User-agent, that value becomes the minimum)
  • Artificial "jitter" in request delays (so requests seem less "robotic")
  • The timeout for a request before throttling applies to new requests
  • The throttling request backoff: the amount of time added to the delay to throttle requests (this is cumulative)
  • The minimum number of requests that must complete within the throttle timeout before the throttle is gradually removed
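
These knobs live on the RequestProcessorOptions passed through CrawlSettings, as in the usage example further below. The following is a minimal sketch; apart from MaxNumberOfSimultaneousRequests, which appears in that example, the property names here are assumptions mirroring the list above, so check RequestProcessorOptions in the source for the exact members and defaults.

using System;
using InfinityCrawler;

var settings = new CrawlSettings
{
    RequestProcessorOptions = new RequestProcessorOptions
    {
        // At most 5 requests in flight at once
        MaxNumberOfSimultaneousRequests = 5,
        // Assumed property names below; verify against the library source.
        DelayBetweenRequestStart = TimeSpan.FromSeconds(1),      // minimum gap between request starts
        DelayJitter = TimeSpan.FromMilliseconds(500),            // random extra delay per request
        TimeoutBeforeThrottle = TimeSpan.FromSeconds(2.5),       // responses slower than this trigger throttling
        ThrottlingRequestBackoff = TimeSpan.FromSeconds(2.5),    // cumulative delay added while throttled
        MinSequentialSuccessesToMinimiseThrottling = 5           // fast responses needed before easing off
    }
};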

Other Settings

  • Control the UserAgent used in the crawling process
  • Set additional host aliases you want the crawling process to follow (for example, subdomains)
  • Set the maximum number of retries for a specific URI
  • Set the maximum number of redirects to follow
  • Set the maximum number of pages to crawl
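
A minimal sketch combining these on CrawlSettings follows. UserAgent matches the usage example below; the other property names are assumptions based on the list above and should be verified against the CrawlSettings source.

using InfinityCrawler;

var settings = new CrawlSettings
{
    UserAgent = "MyVeryOwnWebCrawler/1.0",
    // Assumed property names below; verify against the library source.
    HostAliases = new[] { "www.example.org" }, // extra hosts/subdomains to follow
    NumberOfRetries = 3,                       // max retries for a specific URI
    MaxNumberOfRedirects = 3,                  // max redirects to follow
    MaxNumberOfPagesToCrawl = 1000             // overall crawl cap
};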

Example Usage

using System;
using InfinityCrawler;

var crawler = new Crawler();

// Crawl the site from its root URI; robots.txt rules are obeyed and
// sitemap.xml is used to seed the initial crawl.
var result = await crawler.Crawl(new Uri("http://example.org/"), new CrawlSettings {
    // Identify your crawler to the sites you visit
    UserAgent = "MyVeryOwnWebCrawler/1.0",
    RequestProcessorOptions = new RequestProcessorOptions
    {
        // Politeness cap: at most 5 requests in flight at once
        MaxNumberOfSimultaneousRequests = 5
    }
});
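
The awaited result aggregates everything the crawl visited. Here is a minimal sketch of reading it back; the member names (CrawledUris, Location) are assumptions for illustration, so inspect the CrawlResult type for the actual shape.

using System;

// Enumerate the crawled pages. CrawledUris and Location are assumed
// member names; check the CrawlResult type in the library source.
foreach (var crawledUri in result.CrawledUris)
{
    Console.WriteLine(crawledUri.Location);
}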