I use a copy of Libarchive in Pcompress to do archiving. Pcompress prepares the list of pathnames to be archived and calls low-level Libarchive APIs to archive them, which gives fine-grained control. For example, the pathnames are sorted via fairly involved heuristics. The output from Libarchive is passed to the compressor stage. In addition, a few file-type-specific filters are applied before file data is written via the Libarchive APIs.
The Libarchive output stream is split into buffers which are compressed in parallel, which is the basic approach in Pcompress. This results in a solid-compression mode that achieves a high compression ratio. However, this also has a few problems:
- Libarchive produces streaming archives, so archive metadata is inline within the data stream. This causes breaks within the actual data stream and pollutes the compression context, slightly reducing the compression ratio.
- Simply listing the archive members, without actually extracting them to disk, requires decompressing all of the data.
- Similarly, extracting a single member means decompressing everything before it (which is the problem with formats like tar.gz anyway).
So separating data and metadata would be beneficial. Metadata can be kept in a compressed stream separate from the data, which allows fast access in the two use cases above. To achieve this, Libarchive needs to indicate to the client callbacks the type of request: data or metadata. This is tricky. Libarchive's model is to simply request some data, as in:
- When extracting, Libarchive invokes the callback. The callback returns a blob of data along with its length. When Libarchive has consumed all of the blob, it requests more.
- When archiving, it invokes the callback with a block of data to write and its length.
There is no differentiation between data and metadata, as the formats are typically streaming formats. Pcompress uses PAX. The immediate thought is to have Libarchive indicate, via a flag in the archive structure, whether the request is for data or metadata. This works fine when archiving, since blocks of data and metadata are written separately: the client callback can use an API call on the archive structure to determine whether data or metadata is being written. The trouble arises during extraction.
This simple technique of using a flag within Libarchive is not sufficient when extracting an archive. Looking into the archive_read code, one notices a filter structure that keeps track of the current buffer passed from the client callback and the internal cache, implementing a virtually zero-copy architecture. Filter structures can be cascaded if there are multiple filters in the chain; the root filter is obviously the NONE filter. See archive_read_open1() in archive_read.c. Once all data in the client buffer is consumed, the callback is invoked again to request more.

So if data and metadata are stored separately, the client callback would have to stitch them together and re-create the original archive stream to pass back to Libarchive. This is complicated and very expensive in practice. In particular, the buffer copying required would defeat Libarchive's zero-copy approach.

Eventually, the solution I landed upon is to introduce a secondary filter structure accompanying each normal filter struct. I call this the shadow filter. It is identical to the main filter struct and is only initialized if metadata streaming is being done. Libarchive can then use the shadow filter structure to keep track of the separate metadata stream. Whenever a metadata request is made, the client can return a metadata buffer which is tagged onto the shadow filter. Data requests are handled via the normal filter structure.
The changes needed in Libarchive to get this working were, in fact, smaller than I anticipated and are all inside the Libarchive copy in Pcompress trunk (changeset 1, changeset 2). A couple of obvious new API calls are needed: archive_set_metadata_streaming() and archive_request_is_metadata().
The current Pcompress trunk packs metadata into 3MB chunks and marks them as metadata chunks. These chunks are compressed using the Delta2 filter and Bzip2 compression. During extraction, it opens two handles to the archive file: one handle is used by the metadata thread to read metadata chunks and skip data chunks, while the main thread does the reverse, reading data chunks and skipping metadata chunks. This, of course, precludes pipe-mode streaming operation.
Currently, listing archive contents is extremely fast, as Pcompress just has to decompress the metadata chunks and hand them to Libarchive; the data sections are simply skipped. I am still working on a mechanism for optimized selective extraction. For this I will have to store additional metadata indicating which data chunk holds the start of each file's contents, so that the data decompression thread can quickly skip forward to it.
So what’s the difference in compression ratio and timings, with and without metadata streaming? Using the 10GB compression benchmark from Matt Mahoney, here are the results (83435 archive members) on a late-2013 MacBook Pro:
- Metadata Streaming:
- pcompress -a -l14 -s60m -t5 10gb 10gb.pz —> archive size: 2936436327 bytes
- Without metadata streaming (notice the -T flag):
- pcompress -a -l14 -s60m -t5 -T 10gb 10gb.pz —> archive size: 2939247636 bytes
- Time to list archive contents with metadata stream:
- ./pcompress -i 10gb.pz 0.90s user 0.10s system 84% cpu 1.184 total
- Time to list archive contents without metadata stream:
- ./pcompress -i 10gb.pz 1106.16s user 28.63s system 346% cpu 5:27.38 total
So compression done using inline metadata (without metadata streaming) is about 2.7MB, or just under 0.1%, larger. It is a small amount given the archive size, but a difference nevertheless. Larger relative benefits will be visible for archives with a large number of small files. However, the real massive benefit is in the total time to list archive contents: 1.2 seconds vs 5.5 minutes. It really is a no-brainer.