Consistent Hashing – A simplified explanation


Technology / Monday, January 1st, 2018

At the mention of the word ‘Hashing’, the first thing my mind conjures up is an image like the one below: an array of size n whose slots form the buckets of the HashTable, with each bucket pointing to a linked-list-like structure that holds the (key, value) pairs.

To map a key into the HashTable, a hash function is applied to the key and a modulo operation on the resulting hash value selects the bucket the key maps into.

i.e. hashFunction(key) mod n

This approach works fine as long as the number of buckets remains unchanged.
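To make the mapping concrete, here is a minimal Python sketch, assuming SHA-256 as the hash function (the bucket_for helper is just an illustrative name):

import hashlib

def bucket_for(key: str, n: int) -> int:
    # hashFunction(key) mod n
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n

# As long as n stays the same, each key always lands in the same bucket.
for key in ["alice", "bob", "carol"]:
    print(key, "->", bucket_for(key, 4))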

Things get more interesting when the number of buckets changes, either because a bucket goes away or a new bucket is added. So what happens when the number of buckets ‘n‘ changes? An obvious issue is that instead of applying modulo n, we now have to use n + 1 (assuming a new bucket was added). This means that n/(n+1) of the keys need to be rehashed, i.e. almost all of them.
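A rough sketch of this effect, reusing the same hypothetical bucket_for helper: count how many keys out of a sample change buckets when the bucket count grows from n to n + 1.

import hashlib

def bucket_for(key: str, n: int) -> int:
    # hashFunction(key) mod n
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % n

keys = [f"key-{i}" for i in range(10_000)]
n = 8
moved = sum(1 for k in keys if bucket_for(k, n) != bucket_for(k, n + 1))
# Expect roughly n/(n+1) of the keys to move, i.e. about 88.9% here.
print(f"{moved / len(keys):.1%} of keys changed buckets")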

If you are thinking that this is uncommon, you are correct for most cases, except when it comes to distributed systems. Imagine a distributed system that has a frontend load balancer serving N backend nodes. The load balancer distributes the load among these N nodes based on some algorithm. Assume that the system services some stateful requests and that the requests from a single client must all be routed to the same backend server for a certain duration. Applying the hashing technique described above, one way of doing this would be to associate some sort of connection id with each request and hash the request to a particular backend server.

hashFunction(connection id) mod N
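As a rough sketch of that routing decision (the backend names and the connection-id format here are hypothetical):

import hashlib

BACKENDS = ["backend-0", "backend-1", "backend-2"]  # N = 3 hypothetical servers

def route(connection_id: str) -> str:
    # hashFunction(connection id) mod N
    h = int(hashlib.sha256(connection_id.encode()).hexdigest(), 16)
    return BACKENDS[h % len(BACKENDS)]

# The same connection id is always routed to the same backend, as long as N is fixed.
print(route("client-42:session-7"))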

However, at any point one of the nodes could go down for any number of reasons, or new nodes could be added dynamically to scale. Given that this is a distributed system, a good design should always factor in this inherent instability. One way to mitigate it is through a hashing mechanism called ‘Consistent Hashing’, which minimizes the number of rehashes required when the nodes in the system change. This is exactly what Google does in its frontend load balancers (Source).

So how does consistent hashing work? Consider a common hash function such as SHA-256. Its output ranges from 0 to 2^256 - 1. Now imagine that this range of values forms a circle, where Rm = the maximum value of the hash. Suppose we have 3 nodes (n = 3). The nodes themselves are hashed onto this ring, as shown below.
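A small sketch of what “hashing the nodes onto the ring” means (the node names are hypothetical; positions are printed as a fraction of the way around the circle):

import hashlib

RING_SIZE = 2**256  # SHA-256 outputs lie in [0, 2^256 - 1], i.e. Rm = 2^256 - 1

def ring_position(name: str) -> int:
    return int(hashlib.sha256(name.encode()).hexdigest(), 16)

for node in ["node1", "node2", "node3"]:
    print(node, "at", round(ring_position(node) / RING_SIZE, 3), "of the way around the circle")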

These points create 3 partitions along the circle, each of which contains a continuous range of values. To hash a key k, we compute hashFunction(k) and move clockwise along the circle until we find the first node; that is the node the key maps to. If, for example, node1 is removed, the keys that belonged to it need to be moved to the next node clockwise (node3 in the figure). In general, the fraction of keys that need to be rehashed is 1/(n+1) (if a node is added). Notice that this is much smaller than rehashing n/(n+1) of the keys, and hence the scheme is considered more stable.
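Here is a minimal sketch of such a ring in Python (the class and method names are my own; a real implementation would also support adding and removing nodes incrementally):

import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes):
        # Each node is placed on the circle at the position given by its hash.
        points = sorted((ring_hash(n), n) for n in nodes)
        self._hashes = [p for p, _ in points]
        self._nodes = [n for _, n in points]

    def node_for(self, key: str) -> str:
        # Move clockwise from hashFunction(key) to the first node; wrap around at the end.
        i = bisect.bisect_right(self._hashes, ring_hash(key)) % len(self._nodes)
        return self._nodes[i]

ring = ConsistentHashRing(["node1", "node2", "node3"])
print(ring.node_for("some-key"))

# Removing node1 only remaps the keys in node1's partition;
# keys owned by node2 and node3 stay where they are.
print(ConsistentHashRing(["node2", "node3"]).node_for("some-key"))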

One of the caveats of this approach is that the partition sizes are arbitrary, since they depend on the hash values of the nodes. For instance, two nodes can map very close to each other on the circle, in which case one of the nodes ends up with far fewer keys than the other. If a more uniform partitioning is desired, one common solution is to map multiple points for each node onto the circle and interleave the points corresponding to different nodes. The multiplicity is achieved by creating many virtual nodes for a single physical node, so all the keys that map to the virtual nodes are in fact assigned to that one physical node. This leads to a more uniform distribution of keys. Another advantage is that when a node is removed, its keys are distributed evenly among several remaining nodes, whereas earlier they would all be moved to just one other node.
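A sketch of the virtual-node variant (the “#i” suffix scheme and the replica count of 100 are arbitrary choices of mine):

import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class VirtualNodeRing:
    def __init__(self, nodes, replicas=100):
        # Place every physical node on the circle many times, once per virtual node.
        points = sorted(
            (ring_hash(f"{node}#{i}"), node) for node in nodes for i in range(replicas)
        )
        self._hashes = [p for p, _ in points]
        self._nodes = [n for _, n in points]

    def node_for(self, key: str) -> str:
        i = bisect.bisect_right(self._hashes, ring_hash(key)) % len(self._nodes)
        return self._nodes[i]

# Keys now spread far more evenly across node1..node3, and removing one node
# redistributes its keys across all the remaining nodes rather than just one.
ring = VirtualNodeRing(["node1", "node2", "node3"])
print(ring.node_for("some-key"))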

Amazon Dynamo and memcached use consistent hashing. It is also used in databases when partitioning data across several nodes.

Further reading:

The original consistent hashing paper: David Karger, Eric Lehman, Tom Leighton, et al., “Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web,” 29th Annual ACM Symposium on Theory of Computing (STOC), 1997.

 
