
Boot Environments on Linux

I have been busy working on a bunch of exciting tech for the last few years, with very little time for anything else apart from family. The typical startup grind. However, better late than never, I found a bit of time to write up something I have been hacking on (along with a couple of others) that has been put out as open source.

The concept of Boot Environments first appeared with ZFS on Solaris. Because the filesystem supports snapshotting and cloning, it is possible to create another virtual copy of the entire root filesystem as a clone of the currently booted root. It is then possible to mount that clone, upgrade packages inside that prefix and finally reboot into the upgraded clone after adding a grub entry for it. There are several advantages to this approach.

  • The operation of cloning the filesystem is instantaneous due to the Copy-On-Write design. There is no need to perform a costly copy of the entire root filesystem image.
  • During the upgrade the box is up and running and in production use because the currently booted environment remains untouched. The only downtime is when we reboot.
  • Since the currently booted environment is untouched the upgrade is very safe. Any failures or breakage during upgrade would mean destroying the clone and starting again from the beginning, in the worst case.
  • If we discover issues after booting into the updated clone, we can easily reboot back to the previous working Boot Environment.
  • Because of the COW design the upgraded clone only occupies space for the packages that were actually upgraded. Unchanged package content is shared with the parent dataset of the clone, which makes this approach space efficient.
  • It is possible to have multiple boot environments on the box without having to worry about partitioning.
  • Because of the pooled storage design all datasets or clones share the same pooled storage space avoiding hard partitioning and fixed space allocation headaches.
  • Multiple boot environments are *extremely* useful in a testing and development setup where multiple software releases can reside on the same box. To select a certain release for testing or experimenting, just boot into the appropriate BE (Boot Environment). Messing around is also easy: create another BE from the current one, boot into it and mess to your heart’s content. The original BE remains safe, as long as you do not wreck the disk or the bootloader of course!

I can go on some more, but let’s draw a line here. We wanted to get the same capability on Linux, and it is possible given we have the nice beast called Btrfs. Until some time back Btrfs was controversial and criticised quite a bit. However, I noticed that it has been maturing of late: the number of serious issues has dropped to a negligible level and many fixes and improvements have landed. All of the rants I found via Google were at least a couple of years old, and most were older than that. This gave us the confidence to start testing it and eventually use it in the products my employer sells. We did have to go through a learning curve getting to grips with the nitty-gritties and idiosyncrasies of Btrfs. It caused a bit of initial teething trouble and a few deployment issues, but it was manageable.

I looked around to see if the BE capability already existed on Linux but found nothing. I came across things like apt-snapshot and Snapper, which are similar but not quite the same. Our hard requirement was that an upgrade must not touch the running environment. So, in the end, we came up with our own scripts to implement the BE feature.

Since our Linux environment is based on Ubuntu, the root subvolume is ‘@’. We then create a writable snapshot of ‘@’ as our initial root subvolume and that becomes our first BE. Subsequent upgrades create writable snapshots of the currently booted subvolume. In addition, the ‘@’ subvolume is always mounted under /.rootbe in our environment and all the BE subvolumes are mounted under it, including the currently booted one, which is of course also mounted at ‘/’.
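As a rough illustration (a minimal sketch rather than the actual beadm script, which does more), creating a new BE boils down to taking a writable snapshot of the currently booted subvolume under /.rootbe; the BE names here are made up:

    # Minimal sketch, assuming the currently booted BE is mounted at / and
    # the '@' subvolume is mounted at /.rootbe as described above.
    NEW_BE=/.rootbe/be-$(date +%Y%m%d)
    btrfs subvolume snapshot / "$NEW_BE"    # instant, writable, COW clone of the booted root
    btrfs subvolume list /.rootbe           # list the subvolumes, i.e. the BEs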

Btrfs has the concept of a default subvolume; however, we do not change that. Rather, we just use the ‘rootflags=subvol=…’ kernel parameter. This allows us to keep the primary grub menu in a single place and always access it via /.rootbe/boot/grub/grub.cfg.
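For illustration, a grub entry for a BE could be appended along the following lines, assuming the stock 41_custom hook that sources a custom.cfg sitting next to grub.cfg; the BE name, device and subvolume paths are purely illustrative and not taken from the actual scripts:

    # Hypothetical sketch only; the real scripts manage their grub entries themselves.
    FSUUID=$(blkid -s UUID -o value /dev/sda2)     # assumption: Btrfs root pool on /dev/sda2
    {
        echo "menuentry 'Boot Environment: be-new' {"
        echo "    insmod btrfs"
        echo "    search --no-floppy --fs-uuid --set=root $FSUUID"
        echo "    linux  /@/be-new/boot/vmlinuz root=UUID=$FSUUID rootflags=subvol=@/be-new ro"
        echo "    initrd /@/be-new/boot/initrd.img"
        echo "}"
    } >> /.rootbe/boot/grub/custom.cfg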

The entire show is managed via two shell scripts: one does the BE management (create, delete, list, etc.) and is called ‘beadm’, and the other upgrades the current environment by creating a new BE and is called ‘pn-apt-get’. Both of them are available at this URL: https://github.com/PluribusNetworks/pluribus_linux_userland/tree/master/components/bootenv-tools

The ‘pn-apt-get’ script creates a new BE and runs ‘apt-get dist-upgrade’ inside a chroot. It assumes that ‘sources.list’ has been set up properly.
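A minimal sketch of that flow, with hypothetical BE names (the real pn-apt-get script also handles grub entries, error handling and cleanup):

    NEW_BE=/.rootbe/be-upgrade-$(date +%Y%m%d)
    btrfs subvolume snapshot / "$NEW_BE"             # clone the booted root; instant COW copy
    for fs in dev proc sys; do mount --bind /$fs "$NEW_BE/$fs"; done
    chroot "$NEW_BE" apt-get update
    chroot "$NEW_BE" apt-get -y dist-upgrade         # the upgrade touches only the clone
    for fs in dev proc sys; do umount "$NEW_BE/$fs"; done
    # ...then add a grub entry for $NEW_BE and reboot into it at a convenient time.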

In addition to all this, I wanted to optimize space usage by having dpkg not replace files in a package being upgraded if the new file is identical to the one already installed. I needed a small dpkg patch to achieve this: https://github.com/PluribusNetworks/pluribus_linux_userland/blob/master/components/dpkg-ubuntu/debian/patches/skip_unchanged.patch

All this allows us to do safe, fast, in-production upgrades and only incur downtime during the reboot. One quirk I had to deal with was BE space usage reporting. I had to turn on the ‘quota’ feature of Btrfs to get accurate space accounting, even though I do not use quotas in practice. This also meant that I hit a couple of obscure quota bugs (especially after subvolume delete) in the 4.4 Ubuntu Xenial kernel release that we have been using (logistics issues). To work around them I found it sufficient to do a periodic “quota rescan” every few hours.
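For reference, the workaround amounts to something like the following; the path and the scheduling interval are illustrative:

    btrfs quota enable /.rootbe        # qgroups only for accurate space accounting, no limits set
    btrfs quota rescan -w /.rootbe     # -w waits for the rescan to complete
    # Re-run the rescan periodically (e.g. from cron) to refresh the accounting
    # after subvolume deletes:
    #   0 */4 * * * root /usr/bin/btrfs quota rescan /.rootbe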

Little Oxalis caught on Galaxy S6

The Galaxy S6 camera is close to the top end as far as mobile cameras go. It can get nice close-ups. Here’s a wallpaper-grade close-up of a cluster of Oxalis, or Wood Sorrel, with a tiny flower just after a sprinkler watering. I have edited it lightly with Seashore. I am placing this under a Creative Commons license.

[Photo: close-up of Oxalis]

The TATA Nano Comes Home

Picked up the Nano Twist, primarily for wifey. I used to have a dim view of the Nano. It never made my shortlist of small cars; buying one was a big NO NO. My wife has a minimalist view: she likes to go for the minimum that meets her needs. All she needed were 4 wheels to go from point A to point B in relative comfort, as opposed to 2 wheels. The Nano fitted that operating model, so she forced me to take a test drive. That changed my whole opinion.

It’s been more than a month since I picked it up and I am loving the little wonder. It is a fantastic car for crowded city roads. Easy to drive, almost like a go-kart! Has all the creature comforts. Great fuel efficiency. Smooth ride quality. Little but peppy engine, quick on its feet. Electronic power steering smooth as butter. Pedals are light on the feet. Spacious and comfortable interiors; it seats 5 average-sized adults with ease. There is good legroom and under-thigh support. Great ground clearance: I am yet to scrape the bottom, even when going over mountain-sized speed breakers. Smallest turning radius; it can do next-to-impossible U-turns in a single go, with no reverse-and-forward monkey business. If you do not believe me, take a test drive. Super easy to park. What more does one ask for? The regular service schedules are infrequent. In addition, TATA has a scheme whereby I paid a small amount up front and the next three years of regular services are free, as in free beer! I will probably have to dole out small amounts for a few odd consumables. Combined with high fuel efficiency and low insurance fees for a vehicle of this segment, it represents exceptional value for money.

The engine sound has been tamed down, and it now sounds more car-like rather than the earlier commercial-vehicle drone. There are a few minor cons. The outside rear view mirror is too small. There are quality issues with the finishing on some of the panel edges, though panel gaps are consistent. While I get curious stares on the road, the biggest problem is that I do not get respect on the road! I am used to driving my usual SUV, which gets ample respect, like a gorilla. On the Nano I am suddenly a mouse instead of a gorilla! However the Nano makes up for it with its agility; think of Stuart Little on a busy road. The other big con is the lack of front disc brakes. Braking from speed requires a harder push, which can take some getting used to. However these cons do not take anything away from the value proposition that the Nano represents.

Apart from disc brakes, I do wish that TATA would come out with an automatic transmission variant, probably using the AMT from Magneti Marelli. That would be an excellent addition. It would push the price up, but there is no harm in having a high-end Nano with a whole bunch of bells and whistles. The age-old marketing technique: attract the customer to the showroom with the lower-end, cheaper models and sell them the high-end ones 🙂

Anyone looking at cars in the same segment, or at a second-hand Santro or some such, definitely needs to give the Nano serious consideration. One cannot get better value for money than this vehicle in its “Twist” avatar. It deserves 10x the sales that it is unfortunately garnering today. This is the car that TATA Motors should have launched on day one, but as we all know, Indian car manufacturers like to use customers as test engineers, sometimes with disastrous results! Adding to the Nano’s woes, TATA got the initial marketing, product positioning and branding wrong.

The children are really excited about the Nano. The little ones with the “Little One”: [photo]

OH Calcutta!

Calcutta Maidan (by Subhasis Chakraborty (own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons)

Back from sweltering, HOT, HUMID, stuffy Calcutta after a couple of weeks. Record temperatures and record humidity left me gasping; even the locals were crying. However, it was definitely not the Calcutta I had expected.

I could see change all around. No sign of the dirty, stinking, cesspooled streets, no sign of the broken, moth-eaten street lamps, no sign of the grimy railings, no sign of the pot-holed roads, no sign of the traffic cop whiling away time by the roadside. Decorative lamp shades on the street lights, swanky department stores and malls. Organized and orderly traffic, taxis with no-refusal signs: they won’t refuse a hire just because the destination does not suit them. Active, alert cops, some of them even smart. Yes, I am not joking. Active clean-up and concreting of the various horrid open drains is ongoing. Large modern residential complexes are springing up. No power cuts, not at all, nada – one can take power for granted. Yes, the roads are still too narrow, the bylanes are downright claustrophobic, there are too many people on the roads, and many independent houses are still being constructed to early-1990s standards, but still, there was this change. I could feel a wasting giant stirring, rebuilding, repairing, rising phoenix-like, brushing aside decades of exploitation, neglect and decay.

The previous time I had visited was almost 2.5 years back, so this coincides with the change in government. I am happy to see real impact from the current administration. There is still a long way to go, and I hope to see more changes the next time I go there.

Scaling Deduplication in Pcompress – Petascale in RAM

My core objective with the deduplication experiments in Pcompress has been to achieve large-scale data handling capability using tiny chunks (4KB and even 2KB). This has traditionally been extremely difficult, to the point of being nearly impractical. The algorithms in Pcompress can now achieve this without loss of performance, at least for archival data streams. Read on to find out more.

Approach

I have in the past alluded to an indexing algorithm in Pcompress that allows me to deduplicate large data using only a tiny memory-resident index. I finally found some time to write about it. The basic idea is quite simple and is not entirely new. What is new is the way certain components of the approach are implemented. The steps are simple:

  1. Break the dataset into large segments, such that each segment is a collection of adjacent variable-length chunks derived using a chunking algorithm.
  2. For each segment create a chunk-list file on disk which contains the list of chunk hashes.
  3. For each segment compute one or more similarity identifiers and store in an index.
  4. If one or more similarity identifiers for two segments match then we know that the segments have at least a few chunks in common. In that case load the chunk lists of the segments from disk and deduplicate the chunks.

If the segments are large enough then a relatively tiny similarity index can address large amounts of data. This scheme requires one disk read per bunch of chunks, reducing lookup and access times. Also, each chunk list is one file and is read sequentially, improving read times. The chunk list files can also be stored on SSDs to speed things up even more. The similarity index can be a RAM-resident hashtable, giving the fastest possible lookup times. Since each segment is a collection of chunks, I experimented with different chunk counts and found excellent results with 2048 chunks per segment. Thus the segment size is variable: for an average chunk size of 4KB we get 8MB segments on average, for 8KB chunks we get 16MB segments, and so on.
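To make the control flow concrete, here is a toy sketch of the lookup in shell pseudocode (the actual Pcompress implementation is in C); the helper names and file layout are invented for illustration:

    declare -A SIM_INDEX        # RAM-resident hashtable: similarity id -> segment id

    process_segment() {
        local seg_id=$1; shift          # remaining arguments: the segment's similarity ids
        local sim_id
        for sim_id in "$@"; do
            if [[ -n ${SIM_INDEX[$sim_id]:-} ]]; then
                # A likely-similar segment exists: load its chunk-list file from
                # disk and deduplicate this segment's chunks against it.
                echo "segment $seg_id: dedupe against chunklists/${SIM_INDEX[$sim_id]}"
                break
            fi
        done
        for sim_id in "$@"; do SIM_INDEX[$sim_id]=$seg_id; done   # index the new segment
    }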

Many similarity-based deduplication approaches use files as the objects for checking similarity. However, this suffers from the problem that very large files can give rise to loss of accuracy and a poor deduplication ratio. Conversely, too many tiny files can bloat the similarity index. The approach described above avoids both extremes. A somewhat similar approach has been described in the SiLo scheme.

The old guard – MinHash

The effectiveness of the approach described above hinges on the effectiveness of the similarity indexing scheme. Here I based my implementation on the good old MinHash technique. This technique has been used commonly in data mining and web search to determine similar documents, detect plagiarism, perform clustering and so on.

Essentially we break up a document into a set of semantic pieces, compute multiple hashes per piece and identify the subset of pieces that possess the numerically lowest hash values. If such k-min subsets of two documents match then we can say that the documents are somewhat similar to each other. In this deduplication use case a document is nothing but a segment. Since each segment is a set of chunks, each semantic piece is a chunk as determined by the chunking algorithm. The question is: what hashes do we compute for each chunk in order to apply the MinHash algorithm? (See MinHash for Dummies for a layman’s explanation of MinHash.)

After lots of experimentation and head scratching it turns out that the cryptographic hashes computed per chunk can themselves be used, via truncation: truncated cryptographic hashes are themselves independent hashes. I took this idea to its logical extreme and split each cryptographic hash into a set of smaller hashes. These can be thought of as the permutations of the hash function. Then I can sort these hashes numerically and select the K lowest values to get my K-min-values sketch. If one or more elements of the KMV sketches of two segments match then the segments are deemed to have at least a few chunks in common. How small can the truncated hashes be? It turns out that 64-bit hashes provide a good approximation. So if we are using, say, SHA-256, then we get 4 smaller hashes from each chunk hash. These are then numerically sorted and the 20 lowest unique values are chosen. Pictorially this approach can be depicted as below.
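A small shell sketch of the idea, assuming the chunks of one segment are passed in as files (the real implementation lives inside Pcompress and is written in C):

    #!/bin/bash
    # For each chunk: take its SHA-256, split the 64-hex-char digest into four
    # 16-hex-char (64-bit) pieces, then keep the 20 numerically lowest unique
    # values across the whole segment as its K-min-values sketch.
    # (A lexicographic sort of fixed-width lowercase hex equals a numeric sort.)
    K=20
    for chunk in "$@"; do
        h=$(sha256sum "$chunk" | cut -d' ' -f1)
        for off in 0 16 32 48; do
            echo "${h:$off:16}"
        done
    done | sort -u | head -n "$K"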

[Figure: deriving a K-min-values sketch from the truncated chunk hashes of a segment]

The number 20, like the other thresholds, was arrived at experimentally to give a good balance of memory usage vs deduplication effectiveness. Increasing beyond 20 hashes results in diminishing returns on dedupe effectiveness while linearly increasing memory consumption. The point of inflection actually comes at 16 hashes, below which dedupe effectiveness falls rapidly. The chart below shows the results of testing on a 360GB dataset that I have used on previous occasions.

[Chart: Duplicate Elimination Ratio vs. number of similarity hashes per segment, 360GB dataset]

Here, Duplicate Elimination Ratio $= \frac{\text{input stream length}}{\text{total size of chunks stored} + \text{metadata size}}$.

The interesting thing here is that this approach achieves high deduplication effectiveness, to the extent of 90% to 95% of the effectiveness of straightforward exact dedupe using a simple chunk index. The choice of the hash function (SHA2, SHA3, BLAKE2 etc.) has little bearing on the outcome. Here is another chart showing a comparison of this similarity-based deduplication (using various datasets and hash functions) with the exact deduplication baseline.

[Chart: similarity-based deduplication vs. the exact deduplication baseline, across datasets and hash functions]

Memory Usage

Let’s consider the case of 4KB average chunk size. Each segment contains approximately 2048 chunks, which gives us an 8MB average segment size. For each chunk we derive a 256-bit crypto hash, which yields 4 64-bit similarity hashes per chunk. We select the lowest-valued 20 unique hashes per segment, and these form the similarity indicators. So we need 160 bytes of similarity indicators per segment. In addition we need to store 64-bit pointers to the actual segment data on disk (the segment data is basically the chunk hash and data pointer list stored on disk), which doubles the storage requirement to 320 bytes. The similarity indicators for a segment are kept in a hashtable, so we need to account for some data structure overhead as well. Assuming 4 bytes of overhead per entry on average, we arrive at a final storage requirement of 400 bytes per segment.

Now, assuming an 8MB segment size and one petabyte of 100% random data with no duplicates (the worst case), we would need 134217728 segments. This translates to 50GB of worst-case memory usage. If we use an 8KB chunk size, the calculations lead to a 25GB RAM requirement for the worst case. These memory values are not excessive by present-day standards, and typically data will have duplicates, so the RAM requirement will come down by the extent of the duplicates present, especially when using the deduplicating storage architecture I had described earlier. If we limit data handling to, say, 500TB, then even smaller 2KB chunks can be used. These are practical resource limitations; the algorithm does not have any inherent limits. If we use a well-endowed server with, say, 256GB or more of RAM then petascale data can be handled using 2KB chunks as well, though performance will of course be lower.
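The arithmetic is easy to sanity-check with a few lines of shell (using binary units, as above):

    SEG=$((8 * 1024 * 1024))                     # 8 MiB average segment size
    TOTAL=$((1024 ** 5))                         # 1 PiB of wholly unique data (worst case)
    SEGS=$((TOTAL / SEG))                        # 134217728 segments
    BYTES=$((SEGS * 400))                        # ~400 bytes of in-memory index per segment
    echo "$SEGS segments -> index ~$((BYTES / 1024 / 1024 / 1024)) GiB"   # ~50 GiB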

To the best of my knowledge there is as yet no dedupe product that can handle a petabyte of data using small 4KB chunks and an in-memory index. I may be wrong, so please add comments if you know of such a system. If you are looking at less than a petabyte then even 2KB chunks are possible – virtually no one does this today.

This indexing scheme is actually implemented in Pcompress today and works quite well. It can reach a raw dedupe processing throughput of up to 500MB/s (discounting I/O) on my average laptop. Of course there are many other optimizations, both algorithmic and architectural, some of which I have posted about earlier. I am presently working on the persistent similarity index part, which would allow creating versioned incremental archives.
