(RDMA: Remote Direct Memory Access)
RDMA Live Migration Specification, Version # 1
==============================================
Wiki: https://wiki.qemu.org/Features/RDMALiveMigration
* RDMA Migration Protocol Description
* Versioning and Capabilities
* QEMUFileRDMA Interface
* Migration of VM's ram
* Error handling
* TODO
because the RDMA I/O architecture reduces the number of interrupts and
data copies by bypassing the host networking stack. In particular, a TCP-based
migration, under certain types of memory-bound workloads, may take a more
unpredictable amount of time to complete the migration if the amount of
memory tracked during each live migration iteration round cannot keep pace
with the rate of dirty memory produced by the workload.
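
As a rough illustration of that convergence problem, each pre-copy round
retransmits whatever was dirtied during the previous round, so the backlog
only shrinks when the link outpaces the workload. The bandwidth, dirty rate,
and RAM size below are made-up figures, not measurements:

    /* Illustrative only: back-of-the-envelope convergence check for the
     * iterative pre-copy phase.  All figures are invented examples.      */
    #include <stdio.h>

    int main(void)
    {
        double link_mbps  = 10000.0;   /* assumed 10 Gbps migration link  */
        double dirty_mbps =  4000.0;   /* assumed rate of dirtied memory  */
        double ram_mbytes = 16384.0;   /* assumed 16 GB of guest RAM      */

        if (dirty_mbps >= link_mbps) {
            printf("Pre-copy cannot converge: memory is dirtied "
                   "faster than it can be sent.\n");
            return 0;
        }

        /* Each round retransmits what was dirtied during the previous
         * round, so the backlog shrinks geometrically by dirty/link.     */
        double remaining_mbytes = ram_mbytes;
        int rounds = 0;
        while (remaining_mbytes > 1.0 && rounds < 64) {
            double secs = remaining_mbytes * 8.0 / link_mbps;
            remaining_mbytes = secs * dirty_mbps / 8.0;
            rounds++;
        }
        printf("Backlog drops below 1 MB after %d rounds.\n", rounds);
        return 0;
    }
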
message is that SEND messages cause notifications
to be posted to the completion queue (CQ) on the
infiniband receiver side, whereas RDMA messages (used
for VM's ram) do not (to behave like an actual DMA).
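
For readers unfamiliar with the verbs API, the sketch below contrasts the two
kinds of work request using plain libibverbs. It is not code taken from
migration/rdma.c; the queue pair, registered memory region, and the remote
address/rkey are assumed to have been exchanged already during connection
setup, and error handling is omitted:

    /* Sketch only: contrast IBV_WR_SEND with IBV_WR_RDMA_WRITE.          */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    static int post_chunk(struct ibv_qp *qp, struct ibv_mr *mr,
                          void *buf, size_t len,
                          uint64_t remote_addr, uint32_t rkey, int use_rdma)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = len,
            .lkey   = mr->lkey,   /* buffer must already be registered    */
        };
        struct ibv_send_wr wr, *bad_wr = NULL;

        memset(&wr, 0, sizeof(wr));
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.send_flags = IBV_SEND_SIGNALED; /* sender still polls its own CQ */

        if (use_rdma) {
            /* RDMA write: lands directly in remote memory and posts no
             * completion on the receiver's CQ - like an actual DMA.      */
            wr.opcode              = IBV_WR_RDMA_WRITE;
            wr.wr.rdma.remote_addr = remote_addr;
            wr.wr.rdma.rkey        = rkey;
        } else {
            /* SEND: consumes a posted receive on the peer and generates a
             * completion there, so it suits protocol control messages.   */
            wr.opcode = IBV_WR_SEND;
        }
        return ibv_post_send(qp, &wr, &bad_wr);
    }
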
Messages in infiniband require two things:
listed above and issue another "QEMU File" protocol command,
asking for a new SEND message to re-fill the buffer.
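
A minimal sketch of that drain-and-refill read path is shown below. The helper
functions it calls (recv_buffer_bytes_left, request_next_send,
copy_from_recv_buffer) are hypothetical placeholders for the real
receive-buffer plumbing; they do not exist in QEMU:

    /* Sketch: serve reads from the last SEND's buffer and, when it runs
     * dry, issue another "QEMU File" protocol command asking the peer to
     * post a new SEND that re-fills it.  Helpers below are hypothetical. */
    #include <stddef.h>
    #include <stdint.h>

    size_t recv_buffer_bytes_left(void *opaque);                /* hypothetical */
    int    request_next_send(void *opaque);                     /* hypothetical */
    size_t copy_from_recv_buffer(void *opaque, uint8_t *dst, size_t max);

    size_t rdma_buffered_read(void *opaque, uint8_t *dst, size_t want)
    {
        size_t done = 0;

        while (done < want) {
            if (recv_buffer_bytes_left(opaque) == 0) {
                /* Buffer exhausted: ask for a new SEND message.          */
                if (request_next_send(opaque) < 0) {
                    break;
                }
                continue;
            }
            done += copy_from_recv_buffer(opaque, dst + done, want - done);
        }
        return done;
    }
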
Migration of VM's ram:
======================
At the beginning of the migration, (migration-rdma.c),
TODO:
=====
1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
   are not compatible with infiniband memory pinning and will result in
   an aborted migration (but with the source VM left unaffected). A
   pre-flight check of this limit is sketched after this list.
2. Use of the recent /proc/<pid>/pagemap would likely speed up
the use of KSM and ballooning while using RDMA.
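
The pre-flight check referenced in item 1 could look roughly like the
following. This only sketches how the 'ulimit -l' (RLIMIT_MEMLOCK) ceiling
might be compared against the amount of guest RAM to be pinned; it is not
something QEMU currently does:

    /* Illustrative pre-flight check: compare the locked-memory rlimit
     * against the number of bytes we intend to pin for RDMA.             */
    #include <stdio.h>
    #include <sys/resource.h>

    /* Returns 1 if the limit looks sufficient, 0 if not, -1 on error.    */
    int mlock_limit_sufficient(unsigned long long bytes_to_pin)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            return -1;
        }
        if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < bytes_to_pin) {
            fprintf(stderr,
                    "mlock limit %llu < %llu bytes: pinning would abort\n",
                    (unsigned long long)rl.rlim_cur, bytes_to_pin);
            return 0;
        }
        return 1;
    }
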