[Netarchivesuite-users] Questions about deduplication (and reduplication)

Peter Svanberg Peter.Svanberg at kb.se
Mon Jan 13 23:05:39 CET 2020


Thanks, Sara!

So, when reduplicating, e.g. for access through Wayback or pywb, all potential revisit target files must be reachable – is that not a problem? Kristinn mentioned that generating (content) indexes can take much longer, as the tool has to look things up in URL indexes and open a lot of files. Is that something you (or others) have experienced?
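
To make sure I understand the cost: for every revisit record, the replay or indexing tool roughly has to do something like the sketch below (plain Python, all names purely illustrative – this is not the actual pywb or OpenWayback code):

    # Rough sketch of resolving one revisit record at replay time.
    # All names are illustrative; not actual pywb/OpenWayback code.
    import gzip

    def resolve_revisit(revisit_headers, cdx_index, warc_dir):
        # cdx_index is assumed to map (url, timestamp) to
        # (warc_filename, offset, length) for the original capture.
        target_uri = revisit_headers["WARC-Refers-To-Target-URI"]
        target_date = revisit_headers["WARC-Refers-To-Date"]

        # 1) Look the original capture up in the URL/CDX index.
        warc_filename, offset, length = cdx_index[(target_uri, target_date)]

        # 2) Open the (possibly years older) WARC file that holds the
        #    original record. Every such file must still be reachable.
        with open(f"{warc_dir}/{warc_filename}", "rb") as f:
            f.seek(offset)
            compressed_record = f.read(length)

        # 3) Decompress the gzip member; a real tool would then parse the
        #    WARC and HTTP headers before serving the payload.
        return gzip.decompress(compressed_record)

So an extra index lookup plus an extra file open per deduplicated URL – is that what slows the index generation down?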

Do you have any idea how old the oldest revisit target referenced from recent WARC files could be? Five, maybe ten years?

And I add a fifth question:

5) How much space do you save – just approximately?

      Peter


On 13 Jan 2020, at 17:47, "sara.aubry at bnf.fr" <sara.aubry at bnf.fr> wrote:

Hi Peter,

For BnF,
1) yes
2) you probably mean focused crawls: yes
3) URL
4) Only when there is a major change in the crawler or the data format – which means as rarely as possible.
Deduplication really saves a lot of space, and we also don't worry about the intervals between WARC files: that is exactly what WARC revisit records were made for.
Deduplication also sometimes restarts incidentally, when the previous capture of a harvest did not finish (either at the crawl stage or the post-processing stage) or crashed.
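
(To make the revisit mechanism concrete: a revisit record stores no payload, at most the new HTTP response headers, plus headers that point back to the original capture, roughly like this – all values below are made up:

    WARC/1.1
    WARC-Type: revisit
    WARC-Target-URI: https://www.example.org/style.css
    WARC-Date: 2020-01-13T10:00:00Z
    WARC-Payload-Digest: sha1:EXAMPLEDIGEST
    WARC-Profile: http://netpreserve.org/warc/1.1/revisit/identical-payload-digest
    WARC-Refers-To-Target-URI: https://www.example.org/style.css
    WARC-Refers-To-Date: 2015-06-01T08:30:00Z

so however far apart the WARC files are written, the link back to the original capture is explicit.)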

Best,

Sara




From:        "Peter Svanberg" <Peter.Svanberg at kb.se>
To:        "netarchivesuite-users at ml.sbforge.org" <netarchivesuite-users at ml.sbforge.org>
Date:        13/01/2020 17:31
Subject:        [Netarchivesuite-users] Questions about deduplication (and reduplication)
Sent by:        "NetarchiveSuite-users" <netarchivesuite-users-bounces at ml.sbforge.org>
________________________________



Hello!

I’m trying to understand how NAS and Heritrix handle deduplication, which led to an internal discussion about its overall pros and cons. I then found Kristinn Sigurðsson’s interesting web archiving blog, where he has written about de- and reduplication: https://kris-sigur.blogspot.com/2015/01/the-downside-of-web-archive.html

Some short questions about deduplication in NAS (is.hi.bok.deduplicator.DeDuplicator) that I would appreciate quick answers to (from all NAS user sites):

1)      Do you use deduplication for snapshot harvests (broad crawls)?
2)      Do you use deduplication for selective harvests?
3)      Which matching method do you use – DIGEST or URL? (See my sketch below of how I understand the difference.)
4)      Do you “restart” the deduplication at intervals? If so, how long are the intervals?

By (4) I mean occasionally doing a harvest with no deduplication, to limit the number of dependencies between WARC files. (Somewhat like full and incremental backups.) Maybe you only deduplicate between the 2–3 steps of a broad crawl? Or between the last X broad crawls?
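
For question 3, what I mean by the two methods is roughly this (plain Python, just to make the question concrete – not the actual is.hi.bok.deduplicator.DeDuplicator code):

    # Rough illustration of the two matching strategies in question 3.
    # Not the actual DeDuplicator logic, only my reading of the difference.

    def is_duplicate_url(url, digest, url_index):
        # URL matching: look the URL up in the index of earlier crawls and
        # compare content digests; only an unchanged payload at the *same*
        # URL counts as a duplicate.
        earlier_digest = url_index.get(url)      # url -> digest of earlier capture
        return earlier_digest is not None and earlier_digest == digest

    def is_duplicate_digest(digest, digest_index):
        # DIGEST matching: the content hash alone decides; the same payload
        # seen at *any* URL in earlier crawls counts as a duplicate.
        return digest in digest_index            # set of earlier payload digests

If I have understood it correctly, DIGEST matching should catch more duplicates (the same file served under several URLs), at the price of revisit records that point to captures of other URLs.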

Regards,
-----

Peter Svanberg

National Library of Sweden
Phone: +46 10 709 32 78

E-mail: peter.svanberg at kb.se
Web: www.kb.se




_______________________________________________
NetarchiveSuite-users mailing list
NetarchiveSuite-users at ml.sbforge.org
https://ml.sbforge.org/mailman/listinfo/netarchivesuite-users

