Class PartitionGroupConfig

java.lang.Object
com.hazelcast.config.PartitionGroupConfig

public class PartitionGroupConfig extends Object
With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical Members.

Hazelcast will always place partitions on different partition groups so as to provide redundancy. There are seven partition group schemes defined in PartitionGroupConfig.MemberGroupType: PER_MEMBER, HOST_AWARE, CUSTOM, ZONE_AWARE, NODE_AWARE, PLACEMENT_AWARE, SPI.

In all cases, the copies of a partition (the primary and its backups) will never be placed in the same group. If more copies are defined than there are partition groups, then only as many copies as there are partition groups will be created. For example, if you define 2 backups, then together with the primary that makes 3 copies. If there are only two partition groups, only two copies will be created.
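For illustration, the same rule can be sketched programmatically. The snippet below enables partition grouping and requests 2 backups; the HOST_AWARE group type and the map name are arbitrary choices for this example:

 import com.hazelcast.config.Config;
 import com.hazelcast.config.PartitionGroupConfig.MemberGroupType;

 Config config = new Config();
 // Enable partition grouping; HOST_AWARE is only an illustrative choice here.
 config.getPartitionGroupConfig()
       .setEnabled(true)
       .setGroupType(MemberGroupType.HOST_AWARE);
 // Requesting 2 backups asks for 3 copies per partition (primary + 2 backups),
 // but only as many copies as there are partition groups can actually be kept.
 config.getMapConfig("default").setBackupCount(2);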

PER_MEMBER Partition Groups

This is the default partition scheme and is used if no other scheme is defined. Each Member is in a group of its own.

Partitions (primaries and backups) will be distributed randomly, but two copies of the same partition will never be placed on the same Member.

 <partition-group enabled="true" group-type="PER_MEMBER"/>
 
This provides good redundancy when Members are on separate hosts, but not if multiple instances are run on the same host.
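The same scheme can also be set programmatically on this class; a brief sketch (PER_MEMBER is already the default, so setting it explicitly only documents the intent):

 import com.hazelcast.config.Config;
 import com.hazelcast.config.PartitionGroupConfig.MemberGroupType;

 Config config = new Config();
 config.getPartitionGroupConfig()
       .setEnabled(true)
       .setGroupType(MemberGroupType.PER_MEMBER);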

HOST_AWARE Partition Groups

In this scheme, a group corresponds to a host, based on its IP address. Copies of the same partition will not be written to more than one member on the same host.

This scheme provides good redundancy when multiple instances are being run on the same host.

 <partition-group enabled="true" group-type="HOST_AWARE"/>
 

CUSTOM Partition Groups

In this scheme, IP addresses, or IP address ranges, are allocated to groups. Partitions are not written to the same group. This is very useful for ensuring partitions are written to different racks or even availability zones.

For example, members in data center 1 have IP addresses in the range 10.10.1.*, and members in data center 2 have IP addresses in the range 10.10.2.*. You can achieve HA by configuring a CUSTOM partition group as follows:

 <partition-group enabled="true" group-type="CUSTOM">
      <member-group>
          <interface>10.10.1.*</interface>
      </member-group>
      <member-group>
          <interface>10.10.2.*</interface>
      </member-group>
 </partition-group>
 
The interfaces can be configured with wildcards ('*') and with address ranges, e.g. '10-20'. Each member-group can have an unlimited number of interfaces.

You can define as many member-groups as you want. Hazelcast will always store backups in a different member-group to the primary partition.
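For reference, a programmatic sketch equivalent to the XML example above, using MemberGroupConfig; the interface ranges are the example data-center ranges, not required values:

 import com.hazelcast.config.Config;
 import com.hazelcast.config.MemberGroupConfig;
 import com.hazelcast.config.PartitionGroupConfig.MemberGroupType;

 Config config = new Config();
 config.getPartitionGroupConfig()
       .setEnabled(true)
       .setGroupType(MemberGroupType.CUSTOM)
       // One member-group per data center, identified by its IP range.
       .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.1.*"))
       .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.2.*"));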

ZONE_AWARE Partition Groups

In this scheme, groups are allocated according to the metadata provided by the Discovery SPI. This metadata includes the availability zone, rack and host. The backups of a partition are not placed in the same group as its primary, so this scheme is very useful for ensuring partitions are placed in different availability zones without having to provide the IP addresses in the configuration ahead of time.
 <partition-group enabled="true" group-type="ZONE_AWARE"/>
 

NODE_AWARE Partition Groups

In this scheme, groups are allocated according to the node name metadata provided by the Discovery SPI. In container orchestration tools like Kubernetes and Docker Swarm, a node is the machine that containers/pods run on; a node may be a virtual or physical machine. The backups of a partition are not placed in the same group as its primary, so this scheme is very useful for ensuring partitions are placed on different nodes without having to provide the IP addresses in the configuration ahead of time.
 <partition-group enabled="true" group-type="NODE_AWARE"/>
 

PLACEMENT_AWARE Partition Groups

In this scheme, groups are allocated according to the placement metadata provided by the Discovery SPI. Depending on the cloud provider, this metadata indicates the placement information (rack, fault domain, etc.) of a VM within a zone. This scheme provides a finer granularity than ZONE_AWARE and is useful for providing good redundancy when running members within a single availability zone.
 <partition-group enabled="true" group-type="PLACEMENT_AWARE"/>
 

SPI Aware Partition Groups

In this scheme, groups are allocated according to the implementation provided by the Discovery SPI.
 <partition-group enabled="true" group-type="SPI"/>
 

Overlapping Groups

Care should be taken when selecting overlapping groups, e.g.
 <partition-group enabled="true" group-type="CUSTOM">
      <member-group>
          <interface>10.10.1.1</interface>
          <interface>10.10.1.2</interface>
      </member-group>
      <member-group>
          <interface>10.10.1.1</interface>
          <interface>10.10.1.3</interface>
      </member-group>
 </partition-group>
 
In this example there are two groups, but because the interface 10.10.1.1 is shared between the two groups, the member with that address may end up storing both the primary and a backup of the same partition.