[Netarchivesuite-users] Timelimit – usage and NAS problems

Tue Hejlskov Larsen tlr at kb.dk
Sat Oct 21 15:00:22 CEST 2023


Hi Peter

Our broad crawl is based on all domains in the NAS harvest database - dead or alive. We are not forced to postprocess or clean a specific broad-crawl domain list before a broad crawl to limit the number of processed domains.

The total domain list in the database has been updated before each broad crawl with the new “currently paid” domains from the Danish TLD provider <https://punktum.dk/en> since 2005.
Every domain in the harvest database gets updated or "touched" 4 times per year, because domains appear and disappear very frequently in DK (30-40K over 3 months).
We are not aware of any problems with the way NAS does the updates.

Timelimit and time spent on job follow-up:

You can choose to decrease specific H3 template timeouts for your defaultorder template, use harvest timelimits, or bytelimit a crawl step to avoid hanging jobs (and make traffic-limit agreements with the hosting centers, too).
In DK every broad-crawl harvester instance is configured to have only 50 toethreads per job. A broad-crawl job with a long seed list is harvested in parallel within the 50-toethread limit.
If you set the toethreads limit higher you can process more seeds in parallel, but you need more HW (CPU, RAM, IO etc.) and database power because of the harvest reporting, and perhaps also fewer harvester instances (or just one) per server.
It is a weighted balance depending on your HW setup.
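
For illustration, here is a minimal sketch (not our production setup) of where these knobs live in a job's Heritrix 3 order template; it assumes the stock bean ids crawlController and crawlLimiter and their documented property names, so adjust it to your own template:

```python
# Sketch: print the toethread and crawl-limit settings from a Heritrix 3
# crawler-beans.cxml order template. Assumes the stock bean ids
# "crawlController" and "crawlLimiter"; properties that are commented out
# in the template simply will not show up.
import xml.etree.ElementTree as ET

WATCHED = {'maxToeThreads', 'maxBytesDownload', 'maxDocumentsDownload', 'maxTimeSeconds'}

def localname(tag):
    # Strip the Spring beans default namespace, e.g. "{...}bean" -> "bean".
    return tag.rsplit('}', 1)[-1]

def show_limits(template_path):
    root = ET.parse(template_path).getroot()
    for bean in root.iter():
        if localname(bean.tag) != 'bean' or bean.get('id') not in ('crawlController', 'crawlLimiter'):
            continue
        for prop in bean:
            if localname(prop.tag) == 'property' and prop.get('name') in WATCHED:
                print(bean.get('id'), prop.get('name'), '=', prop.get('value'))

show_limits('crawler-beans.cxml')
```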

In the broad crawl we only follow up on the big step 2 jobs and a few big selective domain harvests containing only domains which hit the 50 MB limit in step 1 -
e.g. sometimes we take very big domains out of step 2 and put them into focused selective crawls with a faster harvest setup, using the selective pool of harvesters.

The job follow-up is hard and time-consuming but “pays back” in QA of the biggest part of our harvested content – it gives a good understanding of our harvesting quality and of what the currently most important harvesting problems and new crawler traps are.

If you don’t have the time for that, you can use time limits to get rid of hanging jobs and finish a broad crawl on time. It can be an appropriate choice.

Best regards
Tue

From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Peter Svanberg
Sent: Friday, 20 October 2023 18.46
To: netarchivesuite-users at ml.sbforge.org
Subject: Re: [Netarchivesuite-users] Timelimit – usage and NAS problems


Yes, the harvesting stops when the timelimit is reached -- what else did you expect? :-)

The purpose of setting a timelimit is, I suppose, to stop very slow or trapped/looping harvests. Maybe you meant that you don't have any need for that? Or did you expect some other timelimit behaviour?



And as I said (and maybe you alluded to), there is a problem with false reporting of timelimit for already completed domains. It can be fixed in the database, but you have to gather info from the metadata WARC files first, to know which domains were actually stopped. I have Python scripts for this. But it could be fixed quite easily in NAS.
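
(Roughly, the idea is something like this - a minimal sketch, not the actual scripts; it assumes the warcio library and that the hosts report record can be recognised by "hosts-report.txt" in its WARC-Target-URI:)

```python
# Sketch: pull the Heritrix hosts report out of a NAS metadata WARC file, so it
# can be checked which domains still had URIs queued when the job was stopped.
# Assumes the 'warcio' library and that the report record's target URI contains
# "hosts-report.txt" - adjust the test to your own metadata layout.
from warcio.archiveiterator import ArchiveIterator

def extract_hosts_report(metadata_warc_path):
    with open(metadata_warc_path, 'rb') as stream:
        for record in ArchiveIterator(stream):
            uri = record.rec_headers.get_header('WARC-Target-URI') or ''
            if 'hosts-report.txt' in uri:
                return record.content_stream().read().decode('utf-8', 'replace')
    return None

# Placeholder file name for illustration only.
print(extract_hosts_report('1234-metadata-1.warc.gz'))
```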



Thanks for the update on your current schedule and figures!



Side track: We also observed that NAS reports "Domain completed" even if the domain doesn't exist (DNS error). Maybe that case should be a separate stop reason?

-----

Peter Svanberg, Sweden





-----Original Message-----

From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Tue Hejlskov Larsen

Sent: 3 October 2023 17:13

To: netarchivesuite-users at ml.sbforge.org

Subject: Re: [Netarchivesuite-users] Timelimit – usage and NAS problems



If we use the timelimit for a broad-crawl job it cuts off the crawling of the domain seed queues, even though the harvesting of all the domains in the job is not finished. You can see it in the Heritrix seeds or hosts reports. We are using a harvesting policy different from BNF's.

So if we used a timelimit for a job we would lose a lot of content, especially in step 2. We have tried it - it was stopped after a couple of days. We have jobs in step 2 which run for several weeks because of the size of the domains.

We have up to 10K domains in each job (domains are grouped e.g. by order templates, hops etc.) in our 2 broad crawl steps.

Each step has an overall maxbyte limit per domain (even though maxbyte can be set higher on the domain level): 50 MB for step 1 and 16 GB for step 2. In step 1 all domains in the job database (about 3 million) are crawled, whether they are inactive or active. Only the domains which hit the 50 MB limit are included in step 2, with an overall 16 GB maxbyte limit per domain. Lower bytelimits on the domain level have higher priority than the overall step limit.

The 2 steps schedule about 500-800 jobs. Step 1 runs for about 10 days on 110 crawlers in parallel without job curating and harvests about 12-15 TB. Step 2 runs on the same number of harvesters for about 6 weeks with job curating and harvests about 80-100 TB before dedup and compression. A broad crawl is done 4 times per year with "maintenance windows" - between 2 weeks and 1 month per quarter.
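
(To spell out the limit precedence, a rough sketch - the domain-level bytelimit only wins when it is lower than the step's overall limit; decimal byte units are an assumption here:)

```python
# Sketch of the effective per-domain byte limit described above: the
# domain-level limit is only honoured when it is LOWER than the step limit.
STEP1_LIMIT = 50 * 10**6    # 50 MB overall per-domain limit in step 1
STEP2_LIMIT = 16 * 10**9    # 16 GB overall per-domain limit in step 2

def effective_byte_limit(domain_limit, step_limit):
    """domain_limit is the per-domain maxbytes from the domain configuration,
    or None if the domain has no limit of its own."""
    if domain_limit is None:
        return step_limit
    return min(domain_limit, step_limit)

assert effective_byte_limit(10 * 10**6, STEP1_LIMIT) == 10 * 10**6    # lower domain limit wins
assert effective_byte_limit(100 * 10**9, STEP2_LIMIT) == STEP2_LIMIT  # higher domain limit is capped
```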



Best regards

Tue



-----Original Message-----

From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Peter Svanberg

Sent: Tuesday, 3 October 2023 15.35

To: 'netarchivesuite-users at ml.sbforge.org' <netarchivesuite-users at ml.sbforge.org>

Subject: Re: [Netarchivesuite-users] Timelimit – usage and NAS problems



(Continuation after Zoom meeting:)

So BNF uses timelimit but is not affected by the NAS insufficiency, and Denmark does not use it. Does anyone else use it?



And what did you mean, Tue, about the queue? At the timelimit all harvesters are stopped, what was harvested so far is saved in the WARC, and the job is DONE -- or? What was the problem? (Besides that some domains are falsely reported as timelimit-stopped. But you can correct this in the database, which we did in one case.)

---

Peter Svanberg, Sweden





-----Original Message-----

From: NetarchiveSuite-users <netarchivesuite-users-bounces at ml.sbforge.org> On Behalf Of Peter Svanberg

Sent: 2 October 2023 13:37

To: netarchivesuite-users at ml.sbforge.org

Subject: [Netarchivesuite-users] Timelimit – usage and NAS problems



Two things about timelimits.



1) When and how do you use timelimits in harvesting? It’s another way to limit the jobs. I suppose it mainly stops slow hosts – hosts that maybe have politeness figures (crawl-delay) in robots.txt, if you allow that to have influence – or hosts with many small objects, each adding a delay.



2) NAS has limitations in handling jobs stopped by timelimit. It checks for mentions of "timelimit" on the last line in some Heritrix report and then reports timelimit for all domains which have not already been stopped by data or object limits. Hence the statistics get wrong. In our current broad crawl (pass 3) just 11 % of the domains were not ready when the jobs were timelimit-stopped. Also, if there is another pass, all those falsely timelimit-reported domains are unnecessarily harvested again.



This can be corrected in two ways:



A) NAS could look in the hosts-report file's "remaining" column to check which domains were stopped or not (see the sketch after option B below).



B) We could suggest a fix to Heritrix so that it adds a line to the log, with a new Heritrix code, when the queue for a domain becomes empty. That could then easily be used in NAS, just like the object and data limit codes.
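
A minimal sketch of option A, assuming the standard Heritrix 3 hosts-report.txt whose header line names a "host" and a "#remaining" column (the column positions are read from the header itself, so the exact layout is not hard-coded):

```python
# Sketch of option A: from a Heritrix 3 hosts-report.txt, list the hosts that
# still had URIs queued ("remaining" > 0) when the job stopped, i.e. the
# domains that genuinely hit the timelimit. Everything else in the job can be
# treated as completed (or as stopped by an object/byte limit).
def unfinished_hosts(hosts_report_text):
    lines = [l for l in hosts_report_text.splitlines() if l.strip()]
    # Header looks like "[#urls] [#bytes] [host] [#robots] [#remaining] ..." -
    # strip the brackets and locate the columns we need.
    header = lines[0].replace('[', '').replace(']', '').split()
    host_col, remaining_col = header.index('host'), header.index('#remaining')
    unfinished = []
    for line in lines[1:]:
        fields = line.split()
        if len(fields) > max(host_col, remaining_col) and int(fields[remaining_col]) > 0:
            unfinished.append(fields[host_col])
    return unfinished
```

Combined with a hosts report extracted from the metadata WARC (as sketched earlier in the thread), that list is what one would compare against the stop reasons recorded in the NAS database.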



I would appreciate answers and comments.



Peter Svanberg

Sweden





_______________________________________________

NetarchiveSuite-users mailing list

NetarchiveSuite-users at ml.sbforge.org

https://ml.sbforge.org/mailman/listinfo/netarchivesuite-users