Please see the Todo page for planned features.
2.0.4
Fixed issues: 96, 98, 131, 132, 135, 140, 166
2.0.3
Fixed issues: 99, 102, 103, 104, 109, 114, 117, 119, 127, 128
2.0
New Elastic Memory (Enterprise Edition only): By default, Hazelcast stores your distributed data (map entries, queue items) in the Java heap, which is subject to garbage collection. As your heap gets bigger, garbage collection may pause your application for tens of seconds, badly affecting performance and response times. Elastic Memory is Hazelcast with off-heap memory storage that avoids GC pauses. Even if you have terabytes of cache in-memory with lots of updates, GC will have almost no effect, resulting in more predictable latency and throughput.
Security Framework (Enterprise Edition only): Hazelcast Security is a JAAS-based pluggable security framework that can be used to authenticate both cluster members and clients and to perform access control checks on client operations. With the security framework, you control who can be part of the cluster or connect as a client, and which operations are allowed.
Native C# Client (Enterprise Edition only): Just like our Native Java Client, it supports all map, multimap, queue, and topic operations, including listeners and queries.
Distributed Backups: Data owned by a member is now evenly backed up by all the other members. In other words, every member takes equal responsibility for backing up every other node. This leads to better memory usage and less disruption to the cluster when you add or remove nodes. The new backup system also makes it possible to form backup groups, so that backups and owners fall into different groups.
Parallel IO: The number of socket selector threads can be configured. You can have more IO threads if you have a good number of CPUs/cores and a high-throughput network.
Connection Management: Hazelcast 2.0 is more tolerant of connection failures. On connection failure it tries to repair the connection before declaring the member dead, so short socket disconnections are now fine. No problem if your virtual server migrates to a new host.
Listeners such as migration and membership listeners, as well as map indexes, can now be added via configuration.
New Event Objects: Event listeners for Queue/List/Set/Topic used to deliver the item itself to the event methods, so the items had to be deserialized by Hazelcast threads before the listeners were invoked. Sometimes this caused class loader problems as well. With 2.0, we have introduced new event containers for Queue/List/Set and Topic, just like Map has EntryEvent. The new listeners now receive ItemEvent and Message objects respectively, and the actual items are deserialized only if you call the appropriate get method on the event object. This is where we break compatibility with older versions of Hazelcast.
ClientConfig API: We had too many factory methods for instantiating a HazelcastClient. Now all we need is HazelcastClient.newHazelcastClient(ClientConfig) (see the example after this list).
SSL communication support among cluster nodes.
Distributed MultiMap value collection can be either List or Set.
SuperClient is renamed to LiteMember to avoid confusion. Be careful! It is a member, not a client.
New IMap.set(key, value, ttl, TimeUnit) implementation: an optimized put(key, value), as set doesn't return the old value.
HazelcastInstance.getLifecycleService().kill() will forcefully kill the node. Useful for testing.
forceUnlock, to unlock the locked entry from any node and any thread, regardless of the owner.
Enum type query support. For example: new SqlPredicate("level = Level.WARNING").
Fixed issues (on http://code.google.com/p/hazelcast/issues/list): 430, 459, 471, 567, 574, 582, 629, 632, 646, 666, 686, 669, 690, 692, 693, 695, 698, 705, 708, 710, 711, 712, 713, 714, 715, 719, 721, 722, 724, 727, 728, 729, 730, 731, 732, 733, 735, 738, 739, 740, 741, 742, 747, 751, 752, 754, 756, 758, 759, 760, 761, 765, 767, 770, 773, 779, 781, 782, 783, 787, 790, 795, 796
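For illustration, a minimal sketch of the 2.0 client, set(), and enum query APIs named above. The address, map name, and LogEntry class are hypothetical, and the ClientConfig setter name is an assumption for the 2.x client API:

    import com.hazelcast.client.ClientConfig;
    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.query.SqlPredicate;
    import java.io.Serializable;
    import java.util.Collection;
    import java.util.concurrent.TimeUnit;

    public class Client20Sketch {
        public enum Level { INFO, WARNING }

        public static class LogEntry implements Serializable {
            public Level level = Level.WARNING; // field referenced by the query
        }

        public static void main(String[] args) {
            ClientConfig config = new ClientConfig();
            config.addAddress("10.0.0.1:5701"); // hypothetical member address; setter name assumed
            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

            IMap<String, LogEntry> logs = client.getMap("logs"); // hypothetical map name
            // set() is an optimized put(): it does not return the old value
            logs.set("key1", new LogEntry(), 0, TimeUnit.SECONDS); // ttl 0 assumed to mean no expiry

            // enum type query, as in the notes above
            Collection<LogEntry> warnings =
                    logs.values(new SqlPredicate("level = Level.WARNING"));
            System.out.println(warnings.size());
        }
    }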
1.9.4
New WAN Replication (synchronization of separate active clusters)
New Data Affinity (co-location of related entries) feature (see the sketch after this list).
New EC2 Auto Discovery for your Hazelcast cluster running on Amazon EC2 platform.
Improvement: Distribution contains HTML and PDF documentation besides Javadoc.
Improvement: Better TCP/IP and multicast join support. Handling more edge cases like multiple nodes starting at the same time.
Improvement: Memcache protocol: Better integration between Java and Memcache clients. Put from memcache, get from Java client.
Monitoring Tool is removed from the project.
200+ commits, 25+ bug fixes, and several other enhancements.
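Data affinity is typically expressed by making the key partition-aware, so related entries land on the same member. A hedged sketch, assuming the PartitionAware interface and hypothetical OrderKey/customerId names (a real key would also override equals and hashCode):

    import com.hazelcast.core.PartitionAware;
    import java.io.Serializable;

    // Orders keyed this way are stored on the same partition as the owning
    // customer entry, so related data is processed on the same member.
    public class OrderKey implements PartitionAware, Serializable {
        private final long orderId;    // hypothetical fields
        private final long customerId;

        public OrderKey(long orderId, long customerId) {
            this.orderId = orderId;
            this.customerId = customerId;
        }

        public Object getPartitionKey() {
            return customerId; // co-locate with the customer entry
        }
    }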
1.9.3
Re-implementation of distributed queue.
Configurable backup-count and synchronous backup.
Persistence support based on backing MapStore
Auto-recovery from backing MapStore on startup.
Re-implementation of distributed list supporting index based operations.
New distributed semaphore implementation.
Optimized IMap.putAll for much faster bulk writes.
New IMap.getAll for bulk reads, calling MapLoader.loadAll if necessary (see the example after this list).
New IMap.tryLockAndGet and IMap.putAndUnlock APIs.
New IMap.putTransient API for storing only in-memory.
New IMap.addLocalEntryListener() for listening to locally owned entry events.
New IMap.flush() for flushing the dirty entries into MapStore.
New MapLoader.getAllKeys API for auto-pre-populating the map when the cluster starts.
Support for minimum initial cluster size to enable an equally partitioned start.
Graceful shutdown.
Faster dead-member detection.
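A minimal sketch of the bulk operations above; the map name and entries are hypothetical:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.IMap;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.TimeUnit;

    public class BulkOpsSketch {
        public static void main(String[] args) {
            IMap<String, String> map = Hazelcast.getMap("customers"); // hypothetical map name

            Map<String, String> batch = new HashMap<String, String>();
            batch.put("1", "Alice");
            batch.put("2", "Bob");
            map.putAll(batch); // one bulk write instead of many put() calls

            Set<String> keys = new HashSet<String>();
            keys.add("1");
            keys.add("2");
            // bulk read; calls MapLoader.loadAll for missing keys if a loader is configured
            Map<String, String> read = map.getAll(keys);

            // stored only in-memory: never written to a configured MapStore
            map.putTransient("3", "Carol", 0, TimeUnit.SECONDS);
        }
    }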
1.9
Memcache interface support. Memcache clients written in any language can access a Hazelcast cluster.
RESTful access support: http://<ip>:5701/hazelcast/rest/maps/mymap/key1
Split-brain (network partitioning) handling
New LifecycleService API to restart, pause Hazelcast instances and listen for the lifecycle events.
New asynchronous put and get support for IMap via IMap.asyncPut() and IMap.asyncGet() (see the example after this list)
New AtomicNumber API; a distributed implementation of java.util.concurrent.atomic.AtomicLong
So many bug fixes.
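A minimal sketch of the async map calls and AtomicNumber named above. The map/counter names are hypothetical, and the Future-based return type is assumed from this era's API:

    import com.hazelcast.core.AtomicNumber;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.IMap;
    import java.util.concurrent.Future;

    public class Async19Sketch {
        public static void main(String[] args) throws Exception {
            IMap<String, String> map = Hazelcast.getMap("default");

            Future<String> oldValue = map.asyncPut("a", "1"); // does not block the caller
            Future<String> value = map.asyncGet("a");
            System.out.println(value.get()); // block only when the result is needed

            AtomicNumber counter = Hazelcast.getAtomicNumber("orders"); // cluster-wide counter
            long id = counter.incrementAndGet();
        }
    }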
1.8.4
Significant performance gain for multi-core servers. Higher CPU utilization and lower latency.
Reduced the cost of map entries by 50%.
Better thread management. No more idle threads.
Queue Statistics API and the queue statistics panel on the Monitoring Tool.
Monitoring Tool enhancements. More responsive and robust.
Distribution contains hazelcast-all-<version>.jar to simplify jar dependency.
So many bug fixes.
1.8.3
Bug fixes
Sorted index optimization for map queries.
1.8.2
A major bug fix
Minor optimizations
1.8.1
Hazelcast Cluster Monitoring Tool (see the hazelcast-monitor-1.8.1.war in the distro)
New Partition API. Partition and key owner, migration listeners.
New IMap.lockMap() API.
New Multicast+TCP/IP join feature. Try multicast first; if the cluster is not found, fall back to TCP/IP.
New Hazelcast.getExecutorService(name) API. Have separate named ExecutorServices; do not let your big tasks block your small ones (see the example after this list).
New Logging API. Build your own logging, simply use Log4j, or get logs as LogEvents.
New MapStatistics API. Get statistics for your Map operations and entries.
HazelcastClient automatically updates the member list; no need to pass all members.
Ability to start the cluster members evenly partitioned, so no migration.
So many bug fixes and enhancements.
There are some minor Config API changes; just make sure to re-compile.
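For illustration, a sketch of the named executor services and the Partition API from this release. The executor names and key are hypothetical, and the PartitionService accessor is an assumption for this era:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.Member;
    import com.hazelcast.partition.Partition;
    import com.hazelcast.partition.PartitionService;
    import java.util.concurrent.ExecutorService;

    public class Partition181Sketch {
        public static void main(String[] args) {
            // separate pools so big tasks cannot starve small ones
            ExecutorService heavy = Hazelcast.getExecutorService("heavy-tasks");
            ExecutorService light = Hazelcast.getExecutorService("light-tasks");

            // find which member owns a given key
            PartitionService partitionService = Hazelcast.getPartitionService();
            Partition partition = partitionService.getPartition("some-key");
            Member owner = partition.getOwner();
            System.out.println("Owner of some-key: " + owner);
        }
    }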
1.8
Java clients for accessing the cluster remotely. (C# is next)
Distributed Query for maps. Both Criteria API and SQL support (see the example after this list).
Near cache for distributed maps.
TTL (time-to-live) for each individual map entry: IMap.put(key, value, ttl, timeunit) and IMap.putIfAbsent(key, value, ttl, timeunit).
Many bug fixes.
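A minimal sketch of the per-entry TTL puts and an SQL-style query; the Employee class and map contents are hypothetical:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.IMap;
    import com.hazelcast.query.SqlPredicate;
    import java.io.Serializable;
    import java.util.Collection;
    import java.util.concurrent.TimeUnit;

    public class Query18Sketch {
        public static class Employee implements Serializable {
            public boolean active = true; // fields referenced by the query
            public int age = 27;
        }

        public static void main(String[] args) {
            IMap<String, Employee> map = Hazelcast.getMap("employees");

            // entry-level TTL: this entry expires 10 seconds after the put
            map.put("e1", new Employee(), 10, TimeUnit.SECONDS);
            map.putIfAbsent("e2", new Employee(), 1, TimeUnit.MINUTES);

            // SQL-style distributed query
            Collection<Employee> result =
                    map.values(new SqlPredicate("active AND age < 30"));
        }
    }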
1.7.1
Multiple Hazelcast members on the same JVM. New HazelcastInstance API (see the example after this list).
Better API based configuration support.
Many performance optimizations. Fastest Hazelcast ever!
Smoother data migration enables better response times during joins.
Many bug fixes.
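A minimal sketch of running several members in one JVM via the new HazelcastInstance API; the map name and values are hypothetical:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class MultiInstanceSketch {
        public static void main(String[] args) {
            Config config = new Config(); // programmatic (API-based) configuration

            // two independent members inside the same JVM
            HazelcastInstance member1 = Hazelcast.newHazelcastInstance(config);
            HazelcastInstance member2 = Hazelcast.newHazelcastInstance(config);

            member1.getMap("default").put("k", "v");
            System.out.println(member2.getMap("default").get("k")); // same cluster data
        }
    }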
1.7
Persistence via Loader/Store interface for distributed map.
Socket level encryption. Both symmetric and asymmetric encryption supported.
New JMX support. (many thanks to Marco)
New Hibernate second level cache provider (many thanks to Leo)
Instance events for getting notified when a data structure instance (map, queue, topic etc.) is created or destroyed.
Eviction listener: EntryListener.entryEvicted(EntryEvent) (see the example after this list).
Fully 'maven'ized.
Modularized...
hazelcast (core library)
hazelcast-wm (http session clustering tool)
hazelcast-ra (JCA adaptor)
hazelcast-hibernate (hibernate cache provider)
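For illustration, a sketch of listening for evictions with the EntryListener callback named above; the generic signatures shown are from the later 2.x form of the interface, and the map name is hypothetical:

    import com.hazelcast.core.EntryEvent;
    import com.hazelcast.core.EntryListener;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.IMap;

    public class Eviction17Sketch {
        public static void main(String[] args) {
            IMap<String, String> map = Hazelcast.getMap("cache");

            map.addEntryListener(new EntryListener<String, String>() {
                public void entryAdded(EntryEvent<String, String> event) { }
                public void entryRemoved(EntryEvent<String, String> event) { }
                public void entryUpdated(EntryEvent<String, String> event) { }
                public void entryEvicted(EntryEvent<String, String> event) {
                    System.out.println("Evicted: " + event.getKey());
                }
            }, true); // true: include values in the events
        }
    }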
1.6
Support for synchronous backups and configurable backup-count for maps.
Eviction support. Timed eviction for queues. LRU, LFU and time based eviction for maps.
Statistics/history for entries: create/update time, number of hits, cost. See IMap.getMapEntry(key).
MultiMap implementation, similar to the google-collections and apache-commons-collections MultiMap but distributed and thread-safe (see the example after this list).
Being able to destroy() the data structures when not needed anymore.
Being able to Hazelcast.shutdown() the local member.
Get the list of all data structure instances via Hazelcast.getInstances().
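A minimal sketch of the distributed MultiMap and destroy() named above; the map name and entries are hypothetical:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.MultiMap;
    import java.util.Collection;

    public class MultiMap16Sketch {
        public static void main(String[] args) {
            MultiMap<String, String> phones = Hazelcast.getMultiMap("phones");

            // multiple values under one key, unlike a regular map
            phones.put("alice", "555-1234");
            phones.put("alice", "555-5678");

            Collection<String> alicePhones = phones.get("alice"); // both values
            System.out.println(alicePhones);

            phones.destroy(); // release the structure cluster-wide when done
        }
    }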
1.5
Major internal refactoring
Full implementation of java.util.concurrent.BlockingQueue (see the example after this list).
Now queues can have configurable capacity limits.
Super Clients (a.k.a. LiteMember): members with no storage. If the -Dhazelcast.super.client=true JVM parameter is set, that JVM will join the cluster as a 'super client', which will not be a 'data partition' (no data on that node) but will have super fast access to the cluster just like any regular member does.
Http Session sharing support for Hazelcast Web Manager. Different webapps can share the same sessions.
Ability to separate clusters by creating groups. See ConfigGroup.
java.util.logging support.
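A minimal sketch of the distributed queue as a java.util.concurrent.BlockingQueue; the queue name and values are hypothetical:

    import com.hazelcast.core.Hazelcast;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class Queue15Sketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> tasks = Hazelcast.getQueue("tasks");

            tasks.put("task-1");                        // blocks if the queue is at capacity
            tasks.offer("task-2", 5, TimeUnit.SECONDS); // bounded wait
            String next = tasks.take();                 // blocks until an item is available
            System.out.println(next);
        }
    }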
1.4
Add, remove and update events for queue, map, set and list
Distributed Topic for pub/sub messaging (see the example after this list)
Integration with J2EE transactions via a JCA compliant resource adapter
ExecutionCallback interface for distributed tasks
Cluster-wide unique id generator
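For illustration, a sketch of publishing to a topic and drawing cluster-wide unique ids; the topic and generator names are hypothetical, and the generic ITopic form shown is from later 2.x releases:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.IdGenerator;
    import com.hazelcast.core.ITopic;

    public class Topic14Sketch {
        public static void main(String[] args) {
            ITopic<String> news = Hazelcast.getTopic("news"); // hypothetical topic name
            news.publish("hello subscribers"); // delivered to all registered listeners

            IdGenerator ids = Hazelcast.getIdGenerator("order-ids");
            long uniqueId = ids.newId(); // unique across the whole cluster
            System.out.println(uniqueId);
        }
    }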
1.3
Transactional Distributed Queue, Map, Set and List
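A minimal sketch of the transactional API as it existed in the 1.x/2.x line (Hazelcast.getTransaction()); the map/queue names and values are hypothetical:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.Transaction;
    import java.util.Map;
    import java.util.Queue;

    public class Txn13Sketch {
        public static void main(String[] args) {
            Map<String, String> map = Hazelcast.getMap("accounts");
            Queue<String> queue = Hazelcast.getQueue("events");

            Transaction txn = Hazelcast.getTransaction();
            txn.begin();
            try {
                map.put("a1", "credit");
                queue.offer("a1-credited");
                txn.commit();   // both operations become visible atomically
            } catch (Exception e) {
                txn.rollback(); // neither operation is applied
            }
        }
    }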
1.2
Distributed Executor Service
Multi member executions
Key based execution routing (see the example after this list)
Task cancellation support
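For illustration, a sketch of the distributed executor with key-based routing via DistributedTask, as the API looked in later 1.x/2.x releases; the Echo callable and key are hypothetical:

    import com.hazelcast.core.DistributedTask;
    import com.hazelcast.core.Hazelcast;
    import java.io.Serializable;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;

    public class Executor12Sketch {
        public static class Echo implements Callable<String>, Serializable {
            private final String input;
            public Echo(String input) { this.input = input; }
            public String call() { return input; } // runs on a remote member
        }

        public static void main(String[] args) throws Exception {
            ExecutorService executor = Hazelcast.getExecutorService();

            // route the task to the member that owns the key "order-42";
            // a running task can also be cancelled via task.cancel(true)
            DistributedTask<String> task =
                    new DistributedTask<String>(new Echo("hello"), "order-42");
            executor.execute(task);
            System.out.println(task.get()); // DistributedTask extends FutureTask
        }
    }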
1.1
Session Clustering with Hazelcast Webapp Manager
Full TCP/IP clustering support
1.0
Distributed implementation of java.util.{Queue,Map,Set,List}
Distributed implementation of java.util.concurrent.locks.Lock (see the example after this list)
Cluster Membership Events
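A minimal sketch of the distributed lock; the lock name is hypothetical:

    import com.hazelcast.core.Hazelcast;
    import java.util.concurrent.locks.Lock;

    public class Lock10Sketch {
        public static void main(String[] args) {
            Lock lock = Hazelcast.getLock("my-resource"); // cluster-wide lock

            lock.lock();
            try {
                // only one thread in the whole cluster runs this at a time
            } finally {
                lock.unlock();
            }
        }
    }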