[Netarchivesuite-users] Your URI/sec and KB/sec figures?

Peter Svanberg Peter.Svanberg at kb.se
Mon Jun 24 14:53:49 CEST 2019


Thank you, Tue - this is very interesting information!

About 70 TB in how many days?

You emphasize "harvested" - do you mean that more data is downloaded than is actually archived (with duplicates and irrelevant content sorted out)?
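(To make my question concrete, here is the back-of-envelope arithmetic behind it - a rough sketch only: the instance count is guessed from your fleet description, and the per-instance rate is our own selective-harvest average quoted further down, not your broad-crawl rate.)

# Illustrative throughput estimate; none of these are Tue's actual figures.
TOTAL_TIB = 70        # reported broad-crawl volume, upper end
INSTANCES = 90        # guessed number of active Heritrix instances
KB_PER_SEC = 313      # our own selective-harvest average, as a stand-in

bytes_per_day = INSTANCES * KB_PER_SEC * 1024 * 86400
days = TOTAL_TIB * 1024**4 / bytes_per_day
print(f"~{days:.0f} days")  # roughly 31 days under these assumptions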

I'll return when I have gathered corresponding info on our environment.

Regards,

-----

Peter Svanberg

National Library of Sweden
Phone: +46 10 709 32 78

E-mail: peter.svanberg at kb.se
Web: www.kb.se

From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Tue Hejlskov Larsen
Sent: Monday, June 24, 2019 12:22 PM
To: netarchivesuite-users at ml.sbforge.org
Subject: Re: [Netarchivesuite-users] Your URI/sec and KB/sec figures?

Hi Peter

We currently have only minor performance issues during harvesting. We have almost finished our second broad crawl this year - it will end up at between 60 and 70 TB of harvested pages.
Our harvesting capacity is 90-100 Heritrix harvesters, including some virtual Umbra harvesters.
We are using physical servers for the broad-crawl harvesters and virtual servers for the selective harvesters.
The 5 physical servers each have:
32 GB RAM, 24 CPUs, 4 TB local storage
The 5 virtual servers, using NFS, each have:
20 GB RAM, 8 CPUs and 3 TB NFS storage
On each server we have between 8 and 10 Heritrix instances running - not counting the Umbra harvesters, of which there is only one per server.
Between the harvesters and the web we have a firewall, and we have throttling agreements with about 5 web hosting providers, because they had blocked or throttled our harvesters.
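(As a rough cross-check of the fleet size above - a sketch that simplifies by assuming one Umbra instance alongside the regular harvesters on each of the ten servers:)

# Rough fleet-size arithmetic from the per-server figures above.
SERVERS = 5 + 5                    # physical + virtual
HERITRIX_PER_SERVER = (8, 10)      # instances per server (range)
UMBRA_PER_SERVER = 1               # simplifying assumption: one everywhere

low = SERVERS * HERITRIX_PER_SERVER[0]
high = SERVERS * (HERITRIX_PER_SERVER[1] + UMBRA_PER_SERVER)
print(f"{low}-{high} instances")   # 80-110, consistent with "90-100"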

Best regards
Tue


From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Peter Svanberg
Sent: Monday, June 24, 2019 11:39 AM
To: netarchivesuite-users at ml.sbforge.org
Subject: [Netarchivesuite-users] Your URI/sec and KB/sec figures?

Hello!

I discovered a Heritrix mailing list(*). Among some interesting tips on making crawls faster, I also read some speed figures far beyond anything we ever get. So I ask you: what speed values do you get?

Our latest 19 selective harvests show the following figures (from crawl-report.txt in each job's metadata WARC file):

URIs/sec: slowest job 0.83; fastest job 9.8; average 5.11
KB/sec: slowest 34; fastest 863; average 313

(I realize that, besides the NAS/Heritrix configuration, this depends heavily on hardware, memory, disk I/O, network capacity etc., but I don't know which of those figures are most relevant to add to the comparison. Suggestions?)
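For anyone who wants to pull out the same figures, something like the following should work - a minimal sketch assuming the Python warcio library, that the report is stored as a record whose WARC-Target-URI contains "crawl-report.txt", and that the rate labels match Heritrix's "Processed docs/sec" / "Bandwidth in Kbytes/sec" (the exact labels vary between Heritrix versions):

import re
import sys
from warcio.archiveiterator import ArchiveIterator

# Matches rate lines such as "Processed docs/sec: 5.11" and
# "Bandwidth in Kbytes/sec: 313"; accepts the locale form "5,11" too.
RATE_RE = re.compile(r'((?:docs|URIs)/sec|Kbytes/sec):\s*([\d.,]+)',
                     re.IGNORECASE)

def crawl_rates(metadata_warc_path):
    """Return {label: value} from crawl-report.txt in one metadata WARC."""
    rates = {}
    with open(metadata_warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            uri = record.rec_headers.get_header('WARC-Target-URI') or ''
            if 'crawl-report.txt' not in uri:
                continue
            text = record.content_stream().read().decode('utf-8', 'replace')
            for match in RATE_RE.finditer(text):
                rates[match.group(1)] = float(match.group(2).replace(',', '.'))
    return rates

if __name__ == '__main__':
    for path in sys.argv[1:]:
        print(path, crawl_rates(path))

warcio reads gzipped WARCs transparently, so the script can be pointed at the jobs' metadata files as-is.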

* https://groups.yahoo.com/neo/groups/archive-crawler/conversations/messages