Wednesday, July 16, 2014

About HBase flushes and compactions

By Lars Hofhansl

The topic of flushes and compactions comes up frequently when using HBase. There are somewhat obscure configuration options around this, you hear terms like "write amplification", and you might see scary messages about blocked writes in the logs until a flush has finished.

Let's step back for a minute and explore what HBase is actually doing. The configuration parameters will then make more sense.

Unlike most traditional databases HBase stores its data in "Log Structured Merge" (LSM) Trees.

There are numerous academic treatments concerning LSM trees, so I won't go into details here.

Basically in HBase it works something like this:
  1. Edits (Puts, etc) are collected and sorted in memory (using a skip list specifically). HBase calls this the "memstore"
  2. When the memstore reaches a certain size (hbase.hregion.memstore.flush.size) it is written (or flushed) to disk as a new "HFile"
  3. There is one memstore per region and column family
  4. Upon read, HBase performs a merge sort between all - partially sorted - memstore disk images (i.e. the HFiles)
From a correctness perspective that is all that is needed... But note that HBase would need to consider every memstore image ever written for sorting. Obviously that won't work. Each file needs to be seeked and read in order to find the next key in the sort. Hence eventually some of the HFiles need to be cleaned up and/or combined: compactions.
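To make that concrete, here is a minimal toy sketch of the idea (plain Java, not HBase code; all class and variable names are made up for illustration): edits go into a sorted in-memory map, the map is flushed to an immutable sorted "file" once it grows past a threshold, and a read has to consult the memstore plus every flushed file.

import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

/** Toy LSM store illustrating the concept above; not HBase code. */
public class ToyLsmStore {
  private final long flushSize;
  private NavigableMap<String, String> memstore = new ConcurrentSkipListMap<>();
  // Each flushed "HFile" is modeled as an immutable sorted map.
  private final List<NavigableMap<String, String>> hfiles = new ArrayList<>();

  public ToyLsmStore(long flushSize) { this.flushSize = flushSize; }

  public void put(String key, String value) {
    memstore.put(key, value);
    if (memstore.size() >= flushSize) {
      flush();
    }
  }

  /** Write the memstore out as a new immutable "file" and start a fresh one. */
  private void flush() {
    hfiles.add(new TreeMap<>(memstore));
    memstore = new ConcurrentSkipListMap<>();
  }

  /** A read must consider the memstore and every flushed file (newest wins). */
  public String get(String key) {
    String v = memstore.get(key);
    if (v != null) return v;
    for (int i = hfiles.size() - 1; i >= 0; i--) {
      v = hfiles.get(i).get(key);
      if (v != null) return v;
    }
    return null;
  }
}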

A compaction asynchronously reads two or more existing HFiles and rewrites the data into a single new HFile. The source HFiles are then deleted.

This reduces the work to be done at read time at the expense of rewriting the same data multiple times - this effect is called "write amplification". (There are some more nuances like major and minor compactions, which files to collect, etc, but that is beside the point for this discussion.)
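Continuing the toy sketch above, a compaction then boils down to merging several of those sorted "files" into a single new one (newer entries winning), so that a read no longer has to visit each of them:

import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Toy compaction for the sketch above; not HBase code. */
public class ToyCompactor {
  /**
   * Rewrites the given "HFiles" (oldest first) into a single new one.
   * Newer entries overwrite older ones for the same key - exactly the
   * merge a read would otherwise have to perform on every lookup.
   */
  public static NavigableMap<String, String> compact(
      List<NavigableMap<String, String>> files) {
    NavigableMap<String, String> merged = new TreeMap<>();
    for (NavigableMap<String, String> file : files) {
      merged.putAll(file); // later (newer) files win
    }
    return merged;
  }
}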

This can be tweaked to optimize either reads or writes.

If you let HBase accumulate many HFiles without compacting them, you'll achieve better write performance (the data is rewritten less frequently). If on the other hand you instruct HBase to compact many HFiles sooner you'll have better read performance, but now the same data is read and rewritten more often.

HBase lets you tweak when to start compacting HFiles and what is considered the maximum number of HFiles, to ensure acceptable read performance.

Generally flushes and compactions can proceed in parallel. A scenario of particular interest is when clients write to HBase faster than the IO (disk and network) can absorb, i.e. faster than compactions can reduce the number of HFiles - manifesting in an ever-growing number of HFiles that eventually reaches the specified limit.
When this happens the memstores can continue to buffer the incoming data, but they cannot grow indefinitely - RAM is limited.

What should HBase do in this case? What can it do?
The only option is to disallow writes, and that is exactly what HBase does.

There are various knobs to tweak flushes and compactions:
  • hbase.hregion.memstore.flush.size
    The size a single memstore is allowed to reach before it is flushed to disk.
  • hbase.hregion.memstore.block.multiplier
    A memstore is temporarily allowed to grow to the maximum size times this factor.
  • hbase.regionserver.global.memstore.lowerLimit
    JVM global limit on the aggregate memstore size before some of the memstores are force-flushed (in % of the heap).
  • hbase.regionserver.global.memstore.upperLimit
    JVM memstore size limit before writes are blocked (in % of the heap)
  • hbase.hstore.compactionThreshold
    When a store (region and column family) has reached this many HFiles, HBase will start compacting HFiles
  • hbase.hstore.blockingStoreFiles
    HBase disallows further flushes until compactions have reduced the number of HFiles at least to this value. That means the memstores now need to buffer all writes, and hence clients are eventually blocked if compactions cannot keep up.
  • hbase.hstore.compaction.max
    The maximum number of HFiles a single - minor - compaction will consider.
  • hbase.hregion.majorcompaction
    Time interval between timed - major - compactions. HBase will trigger a compaction with this frequency even when no changes occurred.
  • hbase.hstore.blockingWaitTime
    Maximum time clients are blocked. After this time writes will be allowed again.
So when hbase.hstore.blockingStoreFiles HFiles are reached and the memstores are full (each reaching hbase.hregion.memstore.flush.size * hbase.hregion.memstore.block.multiplier, or their aggregate size reaching hbase.regionserver.global.memstore.upperLimit), writes are blocked for hbase.hstore.blockingWaitTime milliseconds.
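As a rough illustration of where these knobs live, here is a hedged sketch using the HBase client Configuration API. In practice these settings go into hbase-site.xml; the values below are purely illustrative, not recommendations.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FlushCompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Flush a single memstore once it reaches 128 MB (value illustrative).
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
    // Allow a memstore to temporarily grow to 4x the flush size.
    conf.setInt("hbase.hregion.memstore.block.multiplier", 4);
    // Start compacting a store once it has 3 HFiles ...
    conf.setInt("hbase.hstore.compactionThreshold", 3);
    // ... and block further flushes once it has 10.
    conf.setInt("hbase.hstore.blockingStoreFiles", 10);
    // Give up blocking after 90 seconds even if compactions are still behind.
    conf.setInt("hbase.hstore.blockingWaitTime", 90000);
  }
}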

Note that this is not a flaw of HBase but simply physics. When disks/network are too slow, at some point clients need to be slowed down.

Tuesday, July 8, 2014

Keynote at HBaseCon 2014

By Lars Hofhansl

HBaseCon 2014 in May was a blast again as usual.
My keynote is now online (together with Google's and Facebook's keynotes - Facebook's for some reason was not recorded).

(And that from somebody who just three years ago would quite literally rather have died than do any public speaking - I think you can do and learn anything if you put your mind to it.)

Also check out the sessions that actually have some content: http://hbasecon.com/archive.html.

Monday, March 3, 2014

HBase GC tuning observations

By Lars Hofhansl

Java garbage collection tuning depends heavily on the application.

HBase is somewhat special: It produces a lot of short-lived objects that typically do not outlive a single RPC request; at the same time most of the heap is (and should be) used for the block cache and the memstores, which hold objects that typically live for a while.
In most setups HBase also requires reasonably short response times; many seconds of garbage collection are not acceptable.

Quick GC primer 

There is a lot of (better informed) literature out there about this topic, so here just the very basics.
  • In typical applications most objects "die young" (they are created and soon are no longer needed). This is certainly true for HBase. This is called the "weak generational hypothesis". Most modern garbage collectors accommodate this by organizing the heap into generations.
  • HotSpot manages four generations:
    1. Eden for all new objects
    2. Survivor I and II where surviving objects are promoted when eden is collected
    3. Tenured space. Objects surviving a few rounds (16 by default) of eden/survivor collection are promoted into the tenured space
    4. Perm gen for classes, interned strings, and other more or less permanent objects.
  • The principal costs to the garbage collector are (1) tracing all objects from the "root" objects and (2) collecting all unreachable objects.
    Obviously #1 is expensive when many objects need to be traced, and #2 is expensive when objects have to be moved (for example to reduce memory fragmentation)

Back to HBase

So what does that mean for HBase?

Most of the heap should be set aside for the memstores and the blockcache as that is the principal data that HBase manages in memory.

HBase already attempts to minimize the number of objects used (see cost #1 above) and keeps them at the same size as much as possible (cost #2 above).

The memstore is allocated in chunks of 2 MB (by default), and the block cache stores blocks of 64 KB (also by default).
With that HBase keeps the number of objects small and avoids heap fragmentation (since all objects are roughly the same size and hence new objects can fit into "holes" created by previously released objects).
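Purely for illustration, here is a hedged sketch of where those two sizes can be adjusted. The MSLAB chunk-size key and the values shown are assumptions based on the defaults mentioned above, and in most cases they are best left alone.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;

public class ObjectSizingSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Memstore (MSLAB) chunk size; 2 MB is the default, shown only for illustration.
    conf.setInt("hbase.hregion.memstore.mslab.chunksize", 2 * 1024 * 1024);

    // HFile block size is a per-column-family setting; 64 KB is the default.
    // The column family name "d" is made up for this example.
    HColumnDescriptor cf = new HColumnDescriptor("d");
    cf.setBlocksize(64 * 1024);
  }
}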

Most other objects and data structures do not outlive a single request and can (and should) be collected quickly.

That means we want to keep eden as small as possible - just big enough to handle the garbage from a few concurrent requests. That allows us to use most of the heap for the useful data and keep minor collections short.
We also want to optimize for latency rather than throughput.
And we'd want GC settings where we do not move the objects in the tenured generation around and also collect new garbage as quickly as possible.
The young generation should be moving to avoid fragmentation (in fact all young gen collectors in Hotspot are moving).

TL;DR: With that all in mind here are the typical GC settings that I would recommend:

-Xmn256m - small eden space (maybe -Xmn512m, but not more than that)
-XX:+UseParNewGC - collect eden in parallel
-XX:+UseConcMarkSweepGC - use the non-moving CMS collector
-XX:CMSInitiatingOccupancyFraction=70 - start collecting when 70% of the tenured gen is full, to avoid collection under pressure
-XX:+UseCMSInitiatingOccupancyOnly - do not let the JVM adjust this CMS threshold on its own


Those should be good settings for starters. You should find average GC times around 50ms, 99th percentile GC times around 150ms, and an absolute worst case of 1500ms or so; all on contemporary hardware with heaps of 32GB or less.

There are efforts to move cached data off the Java heap to reduce these pauses, especially the worst case.

Wednesday, January 1, 2014

More HBase performance profiling

By Lars Hofhansl

Continuing my sporadic profiling sessions I identified a few more issues in HBase.

 

First, a note about profiling

"Instrumentating" profilers often distort the actual results and frequently lead to premature optimization.
For example in HBase that lead to caching of redundant information on each KeyValue wasting heap space. Using a sampler showed that none of the optimized methods were hotspots.
The instrumentation is still useful, but the result should always be validated by a "sampler" or, even better, real "performance counters".

I simply used the Sampler in jVisualVM, which ships with the JDK.

Here are some new issues I found and fixed: 

 

HBASE-9915 Performance: isSeeked() in EncodedScannerV2 always returns false
HBase optimizes seeks and ensures that (re)seeks that would land on a key on the current block continue scanning forward inside the block, rather than (1) consulting the HFile index again, (2) loading the block, and (3) scanning from the beginning of that block again.
There was a bug in HBase where this optimization did not happen for block-encoded HFiles. Nobody noticed that this never worked.
A performance improvement of 2-3x (200-300%) when block encoding is enabled.
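Schematically the optimization looks like this (a minimal sketch of the shape of the fix, not the actual HBase code; all interfaces and method names are made up):

// Schematic sketch of the reseek optimization described above; not actual HBase code.
public class ReseekSketch {
  interface BlockScanner {
    byte[] lastKeyOfCurrentBlock();
    boolean scanForwardTo(byte[] key);   // cheap: stays inside the loaded block
  }

  interface FileReader {
    BlockScanner loadBlockContaining(byte[] key); // expensive: index lookup + block load
  }

  /**
   * If the target key still falls inside the block we are positioned on,
   * just scan forward; only otherwise go back to the index and load a block.
   */
  static boolean reseek(FileReader reader, BlockScanner current, byte[] key) {
    if (current != null && compare(key, current.lastKeyOfCurrentBlock()) <= 0) {
      return current.scanForwardTo(key);
    }
    return reader.loadBlockContaining(key).scanForwardTo(key);
  }

  static int compare(byte[] a, byte[] b) {
    for (int i = 0; i < Math.min(a.length, b.length); i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }
}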

HBASE-9807 block encoder unnecessarily copies the key for each reseek
Here we copied the entire row key just to compare it with the first key of the next index block when the HFile is block encoded.
10% improvement by itself. Much more together with HBASE-9915 mentioned above.

HBASE-10015 Replace intrinsic locking with explicit locks in StoreScanner
This one was somewhat surprising. The issue is that we're locking inside the StoreScanner on each operation of next/seek/etc - millions of times per second, just so that a parallel compaction/flush can invalidate the HFile readers, which happens a few times per hour.
According to conventional wisdom intrinsic locking (synchronized) should be preferred since the JVM does biased locking. In my tests, which were confirmed across JVMs and architectures, I found that simply replacing intrinsic locks with ReentrantLocks yielded a 20-30% overall performance gain.
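The pattern is roughly the following (a minimal sketch, not the actual StoreScanner code; class and field names are made up): a very hot read path and a rare writer path share one lock, and an explicit ReentrantLock turned out to be the cheaper choice in these tests.

import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the locking pattern described above; not HBase code.
public class LockSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private Object[] readers = new Object[0];

  public Object next() {            // hot path: taken for every cell scanned
    lock.lock();
    try {
      return readers.length == 0 ? null : readers[0];
    } finally {
      lock.unlock();
    }
  }

  public void updateReaders(Object[] newReaders) {  // rare: after a flush/compaction
    lock.lock();
    try {
      readers = newReaders;
    } finally {
      lock.unlock();
    }
  }
}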

HBASE-10117 Avoid synchronization in HRegionScannerImpl.isFilterDone
Another issue with synchronization. As described here the synchronization penalty is not just blocked threads, but also involves memory read and write fences when the lock is completely uncontended. It turns out that in the typical case the synchronization is not needed.
20-25% improvement

More to come.
Edit: Some spelling/grammar corrections.

Monday, July 29, 2013

HBase and data locality

By Lars Hofhansl

A quick note about data locality.

In distributed systems data locality is an important property. Data locality, here, refers to the ability to move the computation close to where the data is.

This is one of the key concepts of MapReduce in Hadoop. The distributed file system provides the framework with locality information, which is then used to split the computation into chunks with maximum locality.

When data is written in HDFS, one copy is written locally, another is written to another node in a different rack (if possible) and a third copy is written to another node in the same rack. For all practical purposes the two extra copies are written to random nodes in the cluster.

In typical HBase setups a RegionServer is co-located with an HDFS DataNode on the same physical machine. Thus every write is written locally and then to the two other nodes as mentioned above. As long as the regions are not moved between RegionServers there is good data locality: a RegionServer can serve most reads just from the local disk (and cache), provided short circuit reads are enabled (see HDFS-2246 and HDFS-347).

When regions are moved some data locality is lost and the RegionServers in question need to request the data over the network from remote DataNodes, until the data is rewritten locally (for example by running some compactions).

There are four basic events that can potentially destroy data locality:
  1. The HBase balancer decides to move a region to balance data sizes across RegionServers.
  2. A RegionServer dies. All its regions need to be relocated to another server.
  3. A table is disabled and re-enabled.
  4. A cluster is stopped and restarted.
Case #1 is a trade-off. Either a RegionServer hosts a disproportionately large portion of the data, or some of the data loses its locality.

In case #2 there is not much choice. That RegionServer and likely its machine died, so the regions need to be migrated to somewhere else where not all data is local (remember the "random" writes to the two other data nodes).

In case #4 HBase reads the previous locality information from the .META. table (whose regions - currently only one - are assigned to a RegionServer first), and then attempts to assign the regions of all tables to the same RegionServers (fixed with HBASE-4402).

In case #3 the regions are reassigned in a round-robin fashion...

Wait, what? If a table is simply disabled and then re-enabled its regions are just randomly assigned?

Yep. In a large enough cluster you'll end up with close to 0% data locality, which means all data for all requests is pulled remotely from another machine. In some setups that means an outage: a period of unacceptably slow read performance.

This was brought up on the mailing lists again this week and we're fixing the old issue now (HBASE-6143, HBASE-9080).

Tuesday, July 2, 2013

Protecting HBase against data center outages

By Lars Hofhansl

As HBase relies on HDFS for durability many of HDFS's idiosyncrasies need to be considered when deploying HBase into a production environment.

HBASE-5954 is still not finished, and in any event it is not clear whether the performance penalty incurred by syncing each log entry (i.e. each individual change) to disk is an acceptable performance trade off.

There are however a few practical considerations that can be applied right now.
It is important to keep in mind how HBase works:
  1. new data is collected in memory
  2. the memory contents are written into new immutable files (flush)
  3. smaller files are combined into larger immutable files and then deleted (compaction)
#3 is interesting when it comes to power outages. As described here, data written to HDFS is typically only guaranteed to have reached N datanodes (3 by default), but not guaranteed to be persistent on disk. If the wrong N machines fail at the same time - as can happen during a data center power outage - recently written data can be lost.

In case #3 above, HBase rewrites an arbitrary amount of old data into new files and then deletes the old files. This in turn means that during a DC outage an arbitrary amount of data can be lost and that more specifically the data loss is not limited to the newest data.

Since version 1.1.1 HDFS supports syncing a closed block to disk (HDFS-1539). When used together with HBase this guarantees that new files are persisted to disk before the old files are deleted, and thus compactions would no longer lead to data loss following DC outages.

Of course there is a price to pay. The client now needs to wait until the data is synced on the N datanodes, and more problematically this is likely to happen all at once when the data block (64 MB or larger) is closed. I found that with this setting new file creation takes between 50 and 100ms!

This performance penalty can be alleviated to some extent by syncing the data to disk early and asynchronously. Syncing early has no detrimental effect here, as all data written by HBase is immutable (a delay is only beneficial if a block is changed multiple times).

Linux/Posix can do that automatically with the proper fadvise and sync_file_range calls.
HDFS supports these since version 1.1.0 (HDFS-2465). Specifically, dfs.datanode.sync.behind.writes is useful here, since it will smooth out the sync cost as much as possible.

So... TL;DR: In our cluster we enable
  • dfs.datanode.sync.behind.writes
    and
  • dfs.datanode.synconclose
for HDFS, in order to get some amount of safety against DC power outages with acceptable performance.
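These are DataNode settings and normally belong in hdfs-site.xml on every DataNode; purely for illustration, here is the same thing expressed via the Hadoop Configuration API:

import org.apache.hadoop.conf.Configuration;

public class DataNodeSyncSettings {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Sync data to disk eagerly and asynchronously as it is written (HDFS-2465).
    conf.setBoolean("dfs.datanode.sync.behind.writes", true);
    // fsync a block's data when the block is closed (HDFS-1539).
    conf.setBoolean("dfs.datanode.synconclose", true);
  }
}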

Monday, May 6, 2013

HBase durability guarantees

By Lars Hofhansl

Like most other databases HBase logs changes to a write ahead log (WAL) before applying them (i.e. making them visible).

I wrote in more detail about HDFS' flush and sync semantics here. Also check out HDFS-744 and HBASE-5954.

Recently I got back to this area in the code and committed HBASE-7801 to HBase 0.94.7+, which streamlines the durability API for the HBase client.
The existing API was a mess, with some settings configured only via the table on the server (such as deferred log flush) and some only via the client per Mutation (i.e. Put or Delete), such as Mutation.setWriteToWAL(...). In addition some table settings could be overridden by a client and some could not.

HBASE-7801 includes the necessary changes on both server and client to support a new API: Mutation.setDurability(Durability).
A client can now control durability per Mutation - via passing a Durability Enum - as follows:
  1. USE_DEFAULT: do whatever the default is for the table. This is the client default.
  2. SKIP_WAL: do not write to the WAL
  3. ASYNC_WAL: write to the WAL asynchronously as soon as possible
  4. SYNC_WAL: write WAL synchronously
  5. FSYNC_WAL: write to the WAL synchronously and guarantee that the edit is on disk (not currently supported)
Prior to this change only the equivalent of USE_DEFAULT and SKIP_WAL was supported.
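For example, a client that wants a synchronous WAL write for one particular Put could do something like the following (a minimal sketch against the 0.94.7+ client API; the table, column family, and values are made up for this example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DurabilityExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Table and column names are hypothetical.
    HTable table = new HTable(conf, "test_table");
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      // Request a synchronous WAL write for this mutation only.
      put.setDurability(Durability.SYNC_WAL);
      table.put(put);
    } finally {
      table.close();
    }
  }
}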

The changes are backward compatible. A server prior to 0.94.7 will gracefully ignore any client settings that it does not support.

The HTable.setWriteToWAL(boolean) API is deprecated in 0.94.7+ and was removed in 0.95+.

I filed a follow-up jira, HBASE-8375, to do the same refactoring for the table descriptors.