Commit Graph

5 Commits

naddy  25989f293a  2004-04-16 15:06:48 +00:00
fix build; from Robert Nagy <thuglife@bsd.ru>

sturm  04a09edf10  2004-01-10 08:33:11 +00:00
fix db dependencies to ensure db/v3 is installed
ensure db/v3 is used
also fixes build on NO_SHARED_ARCHS

with conceptual help from brad@

sturm  0d2f5634f8  2003-12-08 17:42:34 +00:00
use new databases/db layout
db update and these modifications by
Aleksander Piotrowski <aleksander dot piotrowski at nic dot com dot pl>

brad  db1541d1f7  2002-06-23 18:23:03 +00:00
- remove unnecessary patches
- use DESTDIRNAME

obecian  4b01747aa5  2001-09-09 21:57:12 +00:00
crawl-0.1b import - provos@ ok

The crawl utility starts a depth-first traversal of the web at the
specified URLs. It stores all JPEG images that match the configured
constraints. Crawl is fairly fast and allows for graceful termination.
After terminating crawl, it is possible to restart it at exactly the
same spot where it was terminated. Crawl keeps a persistent database
that allows multiple crawls without revisiting sites.

The main reason for writing crawl was the lack of simple open source
web crawlers. Crawl is only a few thousand lines of code and fairly
easy to debug and customize.

Features

+ Saves encountered JPEG images
+ Image selection based on regular expressions and size constraints
+ Resume previous crawl after graceful termination
+ Persistent database of visited URLs
+ Very small and efficient code
+ Supports robots.txt
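
The import message above outlines crawl's core design: a depth-first
traversal that saves JPEG images and keeps a persistent database of
visited URLs (Berkeley DB, hence the databases/db dependency work in
the later commits). What follows is a minimal sketch of that strategy,
not crawl's actual code: it is Python rather than crawl's C, the names
MIN_JPEG_BYTES, URL_PATTERN, LinkParser, and visited.db are
hypothetical, and Python's dbm module stands in for Berkeley DB.

    import dbm
    import re
    import urllib.parse
    import urllib.request
    from html.parser import HTMLParser

    MIN_JPEG_BYTES = 10_000                  # hypothetical size constraint
    URL_PATTERN = re.compile(r"^https?://")  # hypothetical URL filter

    class LinkParser(HTMLParser):
        """Collect href/src attribute values from an HTML page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

    def crawl(start_urls, db_path="visited.db"):
        # dbm provides a persistent key-value store, so a restarted
        # crawl skips every URL recorded by a previous run.
        with dbm.open(db_path, "c") as visited:
            stack = list(start_urls)  # LIFO stack => depth-first order
            while stack:
                url = stack.pop()
                if url.encode() in visited or not URL_PATTERN.match(url):
                    continue
                visited[url.encode()] = b"1"
                try:
                    with urllib.request.urlopen(url, timeout=10) as resp:
                        body = resp.read()
                        ctype = resp.headers.get("Content-Type", "")
                except OSError:
                    continue
                if ctype.startswith("image/jpeg"):
                    if len(body) >= MIN_JPEG_BYTES:
                        # store the image under a name derived from its URL
                        name = urllib.parse.quote(url, safe="") + ".jpg"
                        with open(name, "wb") as f:
                            f.write(body)
                elif ctype.startswith("text/html"):
                    parser = LinkParser()
                    parser.feed(body.decode(errors="replace"))
                    for link in parser.links:
                        stack.append(urllib.parse.urljoin(url, link))

    if __name__ == "__main__":
        crawl(["https://example.com/"])

Unlike crawl, the sketch does not checkpoint its pending-URL stack (so
a restart skips already-visited pages but does not resume mid-queue),
does not apply user-supplied regular expressions beyond the scheme
check, and does not consult robots.txt; urllib.robotparser could cover
that last point.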