Introduction
The classes implement a basic web spider (also called a “web robot” or “web crawler”) that grabs web pages (including resources such as images and CSS), downloads them locally, and adjusts any resource hyperlinks to point to the locally downloaded copies. The classes support both synchronous and asynchronous download of web pages. To parse a document, they use the SGMLReader DLL.
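To give a feel for what such a spider does, here is a minimal, self-contained sketch of the core idea: fetch one page, download the resources it references, and rewrite the links to point at the local copies. This is not the article's code or its API; it uses only standard .NET types (`HttpClient`, `Regex`), a hypothetical start URL, and a simple regex instead of SGMLReader-based parsing, purely for illustration.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class MiniSpider
{
    static async Task Main()
    {
        var http = new HttpClient();
        var pageUri = new Uri("https://example.com/"); // hypothetical start page
        string html = await http.GetStringAsync(pageUri);

        // Find src="..." attributes (images, scripts), download each resource,
        // and point the attribute at the locally saved file instead.
        // A real spider would use a proper HTML parser (the article uses SGMLReader).
        html = await ReplaceAsync(html, new Regex("src=\"(?<u>[^\"]+)\""), async m =>
        {
            var resUri = new Uri(pageUri, m.Groups["u"].Value); // resolve relative URLs
            string local = Path.GetFileName(resUri.LocalPath);
            File.WriteAllBytes(local, await http.GetByteArrayAsync(resUri));
            return $"src=\"{local}\"";
        });

        File.WriteAllText("index.html", html);
    }

    // Regex.Replace has no async overload, so apply async replacements manually.
    static async Task<string> ReplaceAsync(string input, Regex rx,
        Func<Match, Task<string>> eval)
    {
        var sb = new StringBuilder();
        int last = 0;
        foreach (Match m in rx.Matches(input))
        {
            sb.Append(input, last, m.Index - last);
            sb.Append(await eval(m));
            last = m.Index + m.Length;
        }
        sb.Append(input, last, input.Length - last);
        return sb.ToString();
    }
}
```

Because the downloads are `await`ed `Task`s, the same fetch logic can be driven synchronously (blocking on the task) or asynchronously, which mirrors the sync/async choice the library offers.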
Via: A Web Spider Library in C# – The Code Project – ASP.NET