qemu-devel.nongnu.org archive mirror
* Issue Report: When VM memory is extremely large, downtime for RDMA migration is high. (64G mem --> extra 400ms)
@ 2021-04-15  1:54 LIZHAOXIN1 [李照鑫]
From: LIZHAOXIN1 [李照鑫] @ 2021-04-15  1:54 UTC (permalink / raw)
  To: qemu-devel@nongnu.org, quintela@redhat.com, dgilbert@redhat.com
  Cc: LIZHAOXIN1 [李照鑫], sunhao2 [孙昊], DENGLINWEN [邓林文], YANGFENG1 [杨峰]

Hi:
When I tested RDMA live migration, I found that the downtime increased as the VM's memory size increased.

My Mellanox network card is a ConnectX-4 LX and the driver is MLNX-5.2. My VM's memory size is 64 GB, and the downtime is 430 ms when I migrate with the following command:
virsh migrate --live --p2p --persistent --copy-storage-inc --auto-converge --verbose --listen-address 0.0.0.0 --rdma-pin-all --migrateuri rdma://192.168.0.2 [VM] qemu+tcp://192.168.0.2/system

The extra time, about 400 ms, is how long RDMA takes to deregister the pinned memory (the function ibv_dereg_mr) after the memory migration is complete. This deregistration happens before qmp_cont, so it counts toward downtime.
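
For reference, the deregistration cost can be reproduced outside of QEMU with a small libibverbs test. The following is only a minimal standalone sketch (not QEMU code); the 1 GiB buffer size and the choice of the first verbs device are illustrative assumptions, and the ibv_dereg_mr time grows with the amount of pinned memory:

/*
 * Standalone sketch: measure how long ibv_dereg_mr() takes for a large
 * pinned region. Build with: gcc dereg_time.c -o dereg_time -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t len = 1ULL << 30;            /* 1 GiB test buffer (illustrative) */
    void *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 0, len);                /* fault pages in before pinning */

    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) return 1;
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) return 1;

    /* Registration pins the pages, like --rdma-pin-all does for guest RAM */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) return 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ibv_dereg_mr(mr);                   /* the call that lands in downtime */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("ibv_dereg_mr(%zu bytes) took %.1f ms\n", len, ms);

    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}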

How can we reduce this downtime? For example, could the memory be deregistered somewhere else? A rough sketch of that idea follows below.
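
One illustrative way to do the "deregister somewhere else" idea (this is only a sketch, not QEMU's actual implementation, and whether it is safe depends on when the RDMA channel may be torn down) would be to hand the memory regions to a worker thread, so ibv_dereg_mr() runs after the VM has been resumed instead of inside the downtime window:

/* Sketch: move ibv_dereg_mr() off the critical path into a detached thread. */
#include <infiniband/verbs.h>
#include <pthread.h>
#include <stdlib.h>

struct dereg_job {
    struct ibv_mr **mrs;
    int nr_mrs;
};

static void *dereg_worker(void *opaque)
{
    struct dereg_job *job = opaque;
    for (int i = 0; i < job->nr_mrs; i++) {
        ibv_dereg_mr(job->mrs[i]);      /* slow part, now after the VM resumed */
    }
    free(job->mrs);
    free(job);
    return NULL;
}

/* Hypothetical helper, called once the memory transfer is complete. */
static int dereg_mrs_async(struct ibv_mr **mrs, int nr_mrs)
{
    struct dereg_job *job = malloc(sizeof(*job));
    if (!job) return -1;
    job->mrs = mrs;
    job->nr_mrs = nr_mrs;

    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    int ret = pthread_create(&tid, &attr, dereg_worker, job);
    pthread_attr_destroy(&attr);
    return ret;
}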

If anything is wrong, please point it out.
Thanks!


