From: Jackson, A. <And...@bl...> - 2015-07-21 21:52:39
Hi Alex,

That's a very interesting piece of work. I hope you don't mind, but I've added a reference to it here:

https://github.com/iipc/openwayback/wiki/Advanced-configuration#coping-with-very-large-indexes

Do you know if there's any chance the RocksDB-JNI main project might support compression, rather than having to build a forked version?

Thanks,
Andy

-----Original Message-----
From: Alex Osborne [mailto:AOS...@nl...]
Sent: 14 July 2015 02:59
To: Jones, Gina; arc...@li...
Subject: Re: [Archive-access-discuss] Compressed and surted CDX files

Hi Gina,

I don't know how immediately helpful this is to you, but I figured I'd share our approach.

Our largest index is around ~2.5 TB when stored in uncompressed CDX files. We're currently storing it in a packed binary format in RocksDB with Snappy block compression (much faster than gzip, though not as much compression). This gives us near-realtime incremental updates and more than sufficient performance, and it reduces the index size to ~550 GB.

Our general hardware infrastructure consists of a small number of fairly powerful servers with local consumer-level SSDs (which we use heavily for Solr indexes) and a big NFS disk array for bulk content storage. We've experimented on and off with Hadoop but generally found it more of a complication than a help at our scale.

Until recently we'd been manually sorting together CDX files stored on NFS, but as we increased the frequency of Heritrix crawling it quickly became difficult to manage. We considered building some tooling to automate CDX file management, but what we really wanted was a centralised index server that we could incrementally dump records into and later query from multiple tools. We also wanted to reduce the size of the index so that we could comfortably fit it on SSD for fast queries.

While we've heard bad things about large Wayback BDB indexes, there are a number of other key-value stores available now, and SSD storage can be a game changer as to what's practical. We first experimented with LevelDB and then moved to RocksDB, which we found worked a bit better with larger indexes, particularly during the initial data load.

The source code for our index server is here:

https://github.com/nla/tinycdxserver

While we are using it in production, it's still rather experimental and lacking a lot of functionality you'd expect in an out-of-the-box application (like deletes!).

Cheers,

Alex

--
Alex Osborne
National Library of Australia
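For anyone wanting to try the same approach: enabling Snappy block compression through the RocksJava Options API looks roughly like the sketch below, assuming a build of the bindings that bundles the Snappy native library (not necessarily the forked build Alex used). The database path and record contents here are invented for illustration.

    import org.rocksdb.CompressionType;
    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class CdxStoreSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options()
                    .setCreateIfMissing(true)
                    // Snappy: faster than gzip at a lower compression ratio,
                    // the trade-off Alex describes above.
                    .setCompressionType(CompressionType.SNAPPY_COMPRESSION);
                 RocksDB db = RocksDB.open(options, "/tmp/cdx-index")) {
                // RocksDB keys sort lexicographically, so SURT-ordered CDX keys
                // can be range-scanned just like lines in a sorted CDX file.
                db.put("org,example)/page 20150714000000".getBytes(),
                       "packed-binary-record".getBytes());
            }
        }
    }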
________________________________
From: Jones, Gina [gj...@lo...]
Sent: Tuesday, July 14, 2015 1:40 AM
To: arc...@li...
Subject: [Archive-access-discuss] Compressed and surted CDX files

We are looking at making a giant leap in access to our content. Uncompressed CDXs would currently be ~3.5 to 4 TB and will continue to grow, and since we cat and sort indexes into bigger indexes for Wayback efficiency, this is somewhat of a concern to us.

I did web searches to see if I could find any information on how to structure ourselves into a compressed world. It looks like IA, BA, and Common Crawl are using compressed indexes, and from the discussions I found, we would use what is currently configurable for the CDX server to manage access. Beyond that, I don't have a clue how to create compressed indexes during the indexing process. It doesn't seem efficient to uncompress to cat/sort and then compress back up.

We just have a plain vanilla Wayback VM running Java 7. We don't have a Hadoop infrastructure for ZipNum clusters. We will be approaching 1 PB of content soon.

Any recommendations or pointers to information on how we can more efficiently index, store, and serve up our content? Or possibly a volunteer to help mentor us to move in the right direction and develop best practices? Either to help us figure out what we need to do to get up and running, or to help us document requirements to submit to our information technology services if we need better infrastructure?

Thanks,
Gina

Gina Jones
Web Archiving Team
Library of Congress
202-707-6604
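On Gina's cat/sort concern: because CDX files are already sorted, they can be merged as streams, so the uncompressed data never needs to land on disk. Below is a rough sketch of such a streaming merge over gzipped CDX files (the file names are hypothetical, and this uses plain gzip rather than the ZipNum block layout). It only ever holds one line per input file in memory, and it runs on Java 7.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;
    import java.util.PriorityQueue;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class CdxMergeSketch {
        // One open reader plus its current line, ordered by that line.
        static class Source implements Comparable<Source> {
            final BufferedReader reader;
            String line;

            Source(String path) throws IOException {
                reader = new BufferedReader(new InputStreamReader(
                        new GZIPInputStream(new FileInputStream(path)),
                        StandardCharsets.UTF_8));
                line = reader.readLine();
            }

            public int compareTo(Source other) {
                return line.compareTo(other.line);
            }
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical inputs: individually sorted, gzipped CDX files.
            String[] inputs = {"crawl-a.cdx.gz", "crawl-b.cdx.gz"};
            PriorityQueue<Source> heap = new PriorityQueue<>();
            for (String path : inputs) {
                Source s = new Source(path);
                if (s.line != null) heap.add(s);
                else s.reader.close();
            }
            try (BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                    new GZIPOutputStream(new FileOutputStream("merged.cdx.gz")),
                    StandardCharsets.UTF_8))) {
                // Repeatedly emit the smallest current line so the output
                // stays sorted, recompressing on the fly.
                while (!heap.isEmpty()) {
                    Source s = heap.poll();
                    out.write(s.line);
                    out.newLine();
                    s.line = s.reader.readLine();
                    if (s.line != null) heap.add(s);
                    else s.reader.close();
                }
            }
        }
    }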