20.3.2 Group Replication Limitations
The following known limitations exist for Group Replication. Note that the limitations and issues described for multi-primary mode groups can also apply in single-primary mode clusters during a failover event, while the newly elected primary flushes out its applier queue from the old primary.
Group Replication is built on GTID based replication, therefore you should also be aware of Section 19.1.3.7, “Restrictions on Replication with GTIDs”.
- --upgrade=MINIMAL option. Group Replication cannot be started following a MySQL Server upgrade that uses the MINIMAL option (--upgrade=MINIMAL), which does not upgrade system tables on which the replication internals depend.
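  For example, a member's configuration could keep the server's default upgrade behavior rather than MINIMAL (an illustrative my.cnf fragment; AUTO is the default for the --upgrade option):

    [mysqld]
    # Avoid upgrade=MINIMAL on group members: AUTO (the default)
    # also upgrades the system tables that the replication internals depend on.
    upgrade=AUTO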
- Gap Locks. Group Replication's certification process for concurrent transactions does not take into account gap locks, as information about gap locks is not available outside of InnoDB. See Gap Locks for more information.

  Note: For a group in multi-primary mode, unless you rely on REPEATABLE READ semantics in your applications, we recommend using the READ COMMITTED isolation level with Group Replication. InnoDB does not use gap locks in READ COMMITTED, which aligns the local conflict detection within InnoDB with the distributed conflict detection performed by Group Replication. For a group in single-primary mode, only the primary accepts writes, so the READ COMMITTED isolation level is not important to Group Replication.
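  For example, the recommended READ COMMITTED isolation level could be made the server-wide default on each member (an illustrative statement; the same setting can equally be placed in my.cnf):

    -- READ COMMITTED avoids InnoDB gap locks, aligning local and distributed
    -- conflict detection; SET PERSIST also persists the setting across restarts.
    SET PERSIST transaction_isolation = 'READ-COMMITTED';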
- Table Locks and Named Locks. The certification process does not take into account table locks (see Section 15.3.6, “LOCK TABLES and UNLOCK TABLES Statements”) or named locks (see GET_LOCK()).
- Binary Log Checksums. Group Replication in MySQL 8.4 supports checksums, so group members may use the default setting binlog_checksum=CRC32. The setting for binlog_checksum does not have to be the same for all members of a group.

  When checksums are available, Group Replication does not use them to verify incoming events on the group_replication_applier channel, because events are written to that relay log from multiple sources and before they are actually written to the originating server's binary log, which is when a checksum is generated. Checksums are used to verify the integrity of events on the group_replication_recovery channel and on any other replication channels on group members.
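  For example, the default checksum setting could be stated explicitly in a member's configuration (an illustrative my.cnf fragment; CRC32 is already the server default):

    [mysqld]
    # Binary log checksums are supported by Group Replication in MySQL 8.4,
    # and members of the same group may use different binlog_checksum settings.
    binlog_checksum=CRC32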
- SERIALIZABLE Isolation Level. SERIALIZABLE isolation level is not supported in multi-primary groups by default. Setting a transaction isolation level to SERIALIZABLE configures Group Replication to refuse to commit the transaction.
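  As a hypothetical illustration of this behavior on a member of a multi-primary group (the table and column names are illustrative only):

    -- The isolation level itself can be set, but Group Replication
    -- refuses to commit the transaction.
    SET SESSION transaction_isolation = 'SERIALIZABLE';
    START TRANSACTION;
    UPDATE t1 SET c1 = c1 + 1 WHERE id = 1;  -- t1 is a hypothetical table
    COMMIT;                                  -- refused by Group Replication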
- Concurrent DDL versus DML Operations. Concurrent data definition statements and data manipulation statements executing against the same object but on different servers are not supported in multi-primary mode. While a Data Definition Language (DDL) statement executes on an object, executing concurrent Data Manipulation Language (DML) on the same object on a different server instance carries the risk that conflicting DDL executing on different instances is not detected.
- Foreign Keys with Cascading Constraints. Multi-primary mode groups (members all configured with group_replication_single_primary_mode=OFF) do not support tables with multi-level foreign key dependencies, specifically tables that have defined CASCADING foreign key constraints. This is because foreign key constraints that result in cascading operations executed by a multi-primary mode group can result in undetected conflicts and lead to inconsistent data across the members of the group. Therefore we recommend setting group_replication_enforce_update_everywhere_checks=ON on server instances used in multi-primary mode groups to avoid undetected conflicts.

  In single-primary mode this is not a problem, as it does not allow concurrent writes to multiple members of the group, and thus there is no risk of undetected conflicts.
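  For example, the recommended check could be enabled in each member's configuration before Group Replication is started (an illustrative my.cnf fragment; the setting is expected to be the same on all members of a multi-primary group):

    [mysqld]
    # Enable update-everywhere checks in multi-primary mode to avoid
    # undetected conflicts on tables with cascading foreign key constraints.
    group_replication_enforce_update_everywhere_checks=ON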
- Multi-primary Mode Deadlock. When a group is operating in multi-primary mode, SELECT .. FOR UPDATE statements can result in a deadlock. This is because the lock is not shared across the members of the group, so the locking behavior expected of such a statement might not be achieved.
- Replication Filters. Global replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state. Channel specific replication filters can be used on replication channels that are not directly involved with Group Replication, such as where a group member also acts as a replica to a source that is outside the group. They cannot be used on the group_replication_applier or group_replication_recovery channels.
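  For example, a channel specific filter could be applied to a channel that replicates from a source outside the group (a hypothetical sketch; the channel and schema names are illustrative, and the filter must not be applied to the Group Replication channels):

    -- Assumes the channel was already created with
    -- CHANGE REPLICATION SOURCE TO ... FOR CHANNEL 'from_external_source'.
    CHANGE REPLICATION FILTER
        REPLICATE_IGNORE_DB = (external_archive_db)
        FOR CHANNEL 'from_external_source';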
- Encrypted Connections. Support for the TLSv1.3 protocol is available in MySQL, provided that it was compiled using OpenSSL 1.1.1 or higher. Group Replication supports TLSv1.3, where it can be used for group communication connections and distributed recovery connections. group_replication_recovery_tls_version and group_replication_recovery_tls_ciphersuites can be used to configure client support for any selection of ciphersuites, including only non-default ciphersuites if desired. See Section 8.3.2, “Encrypted Connection TLS Protocols and Ciphers”.
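  For example, the distributed recovery connection could be restricted to TLSv1.3 with a chosen set of ciphersuites (an illustrative sketch; the ciphersuite list is only an example, and the settings can also be placed in my.cnf):

    -- Require TLS for distributed recovery and limit it to TLSv1.3 ciphersuites.
    SET GLOBAL group_replication_recovery_use_ssl = ON;
    SET GLOBAL group_replication_recovery_tls_version = 'TLSv1.3';
    SET GLOBAL group_replication_recovery_tls_ciphersuites =
        'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256';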
- Cloning Operations. Group Replication initiates and manages cloning operations for distributed recovery, but group members that have been set up to support cloning may also participate in cloning operations that a user initiates manually. You can initiate a cloning operation manually if the operation involves a group member on which Group Replication is running, provided that the cloning operation does not remove and replace the data on the recipient. The statement to initiate the cloning operation must therefore include the DATA DIRECTORY clause if Group Replication is running. See Section 20.5.4.2.4, “Cloning for Other Purposes”.
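  For example, a manually initiated cloning operation involving a member on which Group Replication is running could look like the following (a hypothetical sketch; the user, host, and directory names are illustrative):

    -- The DATA DIRECTORY clause is required while Group Replication is running,
    -- so the recipient's own data is not removed and replaced.
    CLONE INSTANCE FROM 'clone_user'@'group_member_host':3306
        IDENTIFIED BY 'clone_password'
        DATA DIRECTORY = '/var/lib/mysql-clones/member_copy';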
The maximum number of MySQL servers that can be members of a single replication group is 9. If further members attempt to join the group, their request is refused. This limit has been identified from testing and benchmarking as a safe boundary where the group performs reliably on a stable local area network.
If an individual transaction results in message contents which are large enough that the message cannot be copied between group members over the network within a 5-second window, members can be suspected of having failed, and then expelled, just because they are busy processing the transaction. Large transactions can also cause the system to slow due to problems with memory allocation. To avoid these issues, use the following mitigations; a combined configuration sketch follows the list:
- If unnecessary expulsions occur due to large messages, use the system variable group_replication_member_expel_timeout to allow additional time before a member under suspicion of having failed is expelled. You can allow up to an hour after the initial 5-second detection period before a suspect member is expelled from the group. An additional 5 seconds is allowed by default.
- Where possible, try to limit the size of your transactions before they are handled by Group Replication. For example, split up files used with LOAD DATA into smaller chunks.
- Use the system variable group_replication_transaction_size_limit to specify a maximum transaction size that the group accepts. The default maximum transaction size is 150000000 bytes (approximately 143 MB); transactions above this size are rolled back and are not sent to Group Replication's Group Communication System (GCS) for distribution to the group. Adjust the value of this variable depending on the maximum message size that you need the group to tolerate, bearing in mind that the time taken to process a transaction is proportional to its size.
- Use the system variable group_replication_compression_threshold to specify a message size above which compression is applied. This system variable defaults to 1000000 bytes (1 MB), so large messages are automatically compressed. Compression is carried out by Group Replication's Group Communication System (GCS) when it receives a message that was permitted by the group_replication_transaction_size_limit setting but exceeds the group_replication_compression_threshold setting. For more information, see Section 20.7.4, “Message Compression”.
- Use the system variable group_replication_communication_max_message_size to specify a message size above which messages are fragmented. This system variable defaults to 10485760 bytes (10 MiB), so large messages are automatically fragmented. GCS carries out fragmentation after compression if the compressed message still exceeds the group_replication_communication_max_message_size limit. For more information, see Section 20.7.5, “Message Fragmentation”.
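As a combined illustration of these mitigations, a member's configuration might include settings like the following (a hypothetical my.cnf sketch; the expel timeout value is illustrative, and the remaining values are the defaults described above):

    [mysqld]
    # Allow 30 seconds beyond the initial 5-second detection period
    # before a suspect member is expelled.
    group_replication_member_expel_timeout=30
    # Roll back transactions larger than this size (the default, about 143 MB)
    # instead of distributing them to the group.
    group_replication_transaction_size_limit=150000000
    # Compress messages larger than this size (the default, 1 MB).
    group_replication_compression_threshold=1000000
    # Fragment messages larger than this size (the default, 10 MiB).
    group_replication_communication_max_message_size=10485760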
The maximum transaction size, message compression, and message fragmentation can all be deactivated by specifying a zero value for the relevant system variable. If you have deactivated all these safeguards, the upper size limit for a message that can be handled by the applier thread on a member of a replication group is the value of the member's replica_max_allowed_packet system variable, which has a default and maximum value of 1073741824 bytes (1 GB). A message that exceeds this limit fails when the receiving member attempts to handle it. The upper size limit for a message that a group member can originate and attempt to transmit to the group is 4294967295 bytes (approximately 4 GB). This is a hard limit on the packet size that is accepted by the group communication engine for Group Replication (XCom, a Paxos variant), which receives messages after GCS has handled them. A message that exceeds this limit fails when the originating member attempts to broadcast it.