As the recent article "Google: Taming the Long Latency Tail - When More Machines Equals Worse Results" points out, latency variability has a greater impact in large-scale clusters, where a typical request fans out into many distributed/parallel sub-requests. The overall response time degrades dramatically if the latency of each sub-request is not consistently low.
In dynamically scalable partitioned storage systems, whether a NoSQL database, a filesystem or an in-memory data grid, changes in the cluster (adding or removing a node) can trigger large data movements over the network to re-balance the cluster. Re-balancing is needed for both primary and backup data on those nodes. If a node crashes, for example, the dead node's data has to be re-owned (become primary) by other nodes, and a new backup of that data has to be created immediately to be fail-safe again. Shuffling MBs of data around has a negative effect on the cluster, as it consumes valuable resources such as network, CPU and RAM. It can also increase the latency of your operations during that period.
With the 2.0 release, Hazelcast, an open-source clustering and highly scalable data-distribution platform written in Java, focuses on latency and makes it easier to cache/share/operate on terabytes of data in-memory. Storing terabytes of data in-memory is not the problem; avoiding GC pauses to achieve predictable low latency, and staying resilient to crashes, are the big challenges. By default, Hazelcast stores your distributed data (map entries, queue items) in the Java heap, which is subject to garbage collection. As your heap gets bigger, garbage collection may pause your application for tens of seconds, badly affecting performance and response times. Elastic Memory is Hazelcast's off-heap memory storage, designed to avoid GC pauses. Even if you keep terabytes of cache in-memory with lots of updates, GC will have almost no effect, resulting in more predictable latency and throughput.
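For illustration, here is a minimal configuration sketch of how off-heap storage might be enabled, assuming the 2.x-era hazelcast.elastic.memory.* system properties and the MapConfig storage-type setting; the exact property names and value formats should be verified against the Hazelcast documentation.

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ElasticMemoryConfigSketch {
    public static void main(String[] args) {
        Config config = new Config();

        // Reserve 40 GB of off-heap (direct) memory per JVM, split into 1 KB chunks.
        // Property names/values follow the Hazelcast 2.x-era documentation (assumption).
        config.setProperty("hazelcast.elastic.memory.enabled", "true");
        config.setProperty("hazelcast.elastic.memory.total.size", "40G");
        config.setProperty("hazelcast.elastic.memory.chunk.size", "1K");

        // The map itself must also be told to keep its values off-heap.
        MapConfig mapConfig = config.getMapConfig("default");
        mapConfig.setStorageType(MapConfig.StorageType.OFFHEAP);

        // The JVM also needs enough direct memory, e.g. -XX:MaxDirectMemorySize.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```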
The Elastic Memory implementation uses NIO DirectByteBuffers and doesn't require any defragmentation. Here is how it works: the user defines the amount of off-heap storage per JVM, say 40 GB. Hazelcast then creates 40 DirectByteBuffers, each with 1 GB capacity. If you have, say, 100 nodes, you have a total of 4 TB of off-heap storage capacity. Each buffer is divided into configurable chunks (blocks); the default chunk size is 1 KB. Hazelcast keeps a queue of available (writable) blocks. A 3 KB value, for example, is stored in 3 blocks. When the value is removed, its blocks are returned to the available-blocks queue so they can be reused to store another value.
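The following is a simplified sketch of that block-allocation scheme, not Hazelcast's actual code: a few DirectByteBuffers are carved into fixed-size blocks, a queue tracks the free ones, and values simply borrow and return blocks, which is why no defragmentation step is needed. Sizes are scaled down so the example can run anywhere.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Simplified illustration of the storage scheme described above; not Hazelcast's implementation.
public class OffHeapBlockStore {

    static final int BUFFER_SIZE = 1 << 20;   // 1 MB per DirectByteBuffer here (1 GB in Hazelcast)
    static final int BLOCK_SIZE  = 1 << 10;   // 1 KB blocks (the default chunk size)

    private final List<ByteBuffer> buffers = new ArrayList<>();
    private final Queue<int[]> freeBlocks = new ArrayDeque<>(); // each entry: {bufferIndex, offset}

    public OffHeapBlockStore(int bufferCount) {
        for (int b = 0; b < bufferCount; b++) {
            buffers.add(ByteBuffer.allocateDirect(BUFFER_SIZE));
            for (int offset = 0; offset < BUFFER_SIZE; offset += BLOCK_SIZE) {
                freeBlocks.add(new int[]{b, offset});
            }
        }
    }

    /** Writes the value across as many free blocks as needed and returns their addresses. */
    public List<int[]> put(byte[] value) {
        List<int[]> used = new ArrayList<>();
        for (int pos = 0; pos < value.length; pos += BLOCK_SIZE) {
            int[] block = freeBlocks.poll();
            if (block == null) {
                throw new IllegalStateException("off-heap storage exhausted");
            }
            ByteBuffer buf = buffers.get(block[0]).duplicate();
            buf.position(block[1]);
            buf.put(value, pos, Math.min(BLOCK_SIZE, value.length - pos));
            used.add(block);
        }
        return used;
    }

    /** Returns the blocks of a removed value to the free queue so they can be reused. */
    public void remove(List<int[]> blocks) {
        freeBlocks.addAll(blocks);
    }
}
```

A 3 KB value borrows three blocks from the queue; removing it hands them straight back, so fragmentation never accumulates.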
With the new backup implementation, the data owned by a node is divided into chunks and evenly backed up by all the other nodes. In other words, every node takes equal responsibility for backing up every other node. This leads to better memory usage and less disruption to the cluster when you add or remove nodes.
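A conceptual sketch of that idea, with hypothetical node names and nothing taken from Hazelcast's partitioning code: one node's data chunks are assigned round-robin to all other members, so each peer holds an equal share of the backups.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual illustration only: spreading one node's backup chunks evenly over all other members.
public class BackupDistributionSketch {

    /** Maps each chunk of the owner's data to one of the other cluster members, round-robin. */
    static Map<Integer, String> assignBackups(String owner, List<String> members, int chunkCount) {
        List<String> others = new ArrayList<>(members);
        others.remove(owner);
        Map<Integer, String> backupOwner = new HashMap<>();
        for (int chunk = 0; chunk < chunkCount; chunk++) {
            backupOwner.put(chunk, others.get(chunk % others.size()));
        }
        return backupOwner;
    }

    public static void main(String[] args) {
        List<String> members = List.of("node1", "node2", "node3", "node4");
        // node1's 9 chunks end up backed up evenly across node2, node3 and node4.
        System.out.println(assignBackups("node1", members, 9));
    }
}
```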
To demonstrate the capabilities of Elastic Memory, the Hazelcast team ran a demo on 100 EC2 m2.4xlarge instances, using the SimpleMapTest.java application available in the distribution. Initially the application loads the grid with a total of 500M entries, each with a 4 KB value. The redundancy level is 2 by default, so there are two copies of each entry in the cluster. This makes a total of 1B entries, occupying 4 TB in memory.
After loading the 500M entries, the test performs 95% gets and 5% puts against random keys. Later, an instance is terminated to show that no data is lost thanks to the backups, and that key ownership remains well balanced. The total throughput of the cluster was over 1.3M distributed operations per second.
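The sketch below mimics the described workload at a much smaller scale. It is not the actual SimpleMapTest.java shipped with the distribution; the entry count is reduced and the run is bounded so it can execute on a single machine.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

import java.util.Random;

// Simplified, scaled-down sketch of the benchmark workload described above.
public class MapWorkloadSketch {

    static final int ENTRY_COUNT   = 100_000;     // 500M in the actual demo
    static final int VALUE_SIZE    = 4 * 1024;    // 4 KB values
    static final double GET_RATIO  = 0.95;        // 95% get, 5% put
    static final int OPERATIONS    = 1_000_000;   // bounded run for the example

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(new Config());
        IMap<Integer, byte[]> map = hz.getMap("default");
        byte[] value = new byte[VALUE_SIZE];

        // Load phase: fill the grid with fixed-size values.
        for (int key = 0; key < ENTRY_COUNT; key++) {
            map.put(key, value);
        }

        // Mixed phase: 95% reads and 5% writes against random keys.
        Random random = new Random();
        for (int i = 0; i < OPERATIONS; i++) {
            int key = random.nextInt(ENTRY_COUNT);
            if (random.nextDouble() < GET_RATIO) {
                map.get(key);
            } else {
                map.put(key, value);
            }
        }

        hz.getLifecycleService().shutdown();
    }
}
```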
Source: http://highscalability.com/blog/2012/4/3/hazelcast-20-big-data-in-memory.html