From: Zhanghailiang <zhang.zhanghailiang@huawei.com>
To: zhengchuan <zhengchuan@huawei.com>,
"quintela@redhat.com" <quintela@redhat.com>,
"dgilbert@redhat.com" <dgilbert@redhat.com>
Cc: "Chenzhendong \(alex\)" <alex.chen@huawei.com>,
yubihong <yubihong@huawei.com>,
"wanghao \(O\)" <wanghao232@huawei.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Xiexiangyou <xiexiangyou@huawei.com>
Subject: RE: [PATCH v3 00/18] Support Multifd for RDMA migration
Date: Wed, 21 Oct 2020 09:25:28 +0000 [thread overview]
Message-ID: <2ea09ca2cc8c494390b506877f6e5e2c@huawei.com> (raw)
In-Reply-To: <1602908748-43335-1-git-send-email-zhengchuan@huawei.com>
Hi zhengchuan,
> -----Original Message-----
> From: zhengchuan
> Sent: Saturday, October 17, 2020 12:26 PM
> To: quintela@redhat.com; dgilbert@redhat.com
> Cc: Zhanghailiang <zhang.zhanghailiang@huawei.com>; Chenzhendong (alex)
> <alex.chen@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com>; wanghao
> (O) <wanghao232@huawei.com>; yubihong <yubihong@huawei.com>;
> fengzhimin1@huawei.com; qemu-devel@nongnu.org
> Subject: [PATCH v3 00/18] Support Multifd for RDMA migration
>
> I am continuing the work to support multifd for RDMA migration, based on my colleague
> zhiming's earlier work :)
>
> The previous RFC patches are listed below:
> v1:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg669455.html
> v2:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg679188.html
>
> As described in the previous RFC, RDMA bandwidth is not fully utilized on 25 Gbps and
> faster NICs because RDMA migration uses only a single channel.
> This patch series adds multifd support for RDMA migration, based on the existing multifd
> framework.
>
> The comparison between original and multifd RDMA migration was re-tested for v3.
> The VM specifications for migration are as follows:
> - the VM uses 4 KiB pages;
> - the number of vCPUs is 4;
> - the total memory is 16 GB;
> - the 'mempress' tool is used to apply memory pressure inside the VM (mempress 8000 500);
> - a 25 Gbps network card is used for migration;
>
> For original RDMA and multifd RDMA migration, the total VM migration times are as
> follows:
> +---------------+------------------+--------------+
> |               | NOT rdma-pin-all | rdma-pin-all |
> +---------------+------------------+--------------+
> | original RDMA |       26 s       |     29 s     |
> +---------------+------------------+--------------+
> | multifd RDMA  |       16 s       |     17 s     |
> +---------------+------------------+--------------+
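As a rough sanity check (my own estimate, not part of the measurements above): even if the whole 16 GB of guest memory were transferred in the 26 s single-channel run, that would be only about 5 Gbit/s on a 25 Gbit/s link, so a single channel clearly leaves bandwidth unused; the 16 s multifd result narrows that gap but does not yet close it.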
>
> The multifd RDMA migration can be tested like this:
> virsh migrate --live --multiFd --migrateuri
There is no '--multiFd' option in upstream virsh; it seems this private option was added for internal use only.
It would be better to describe a testing method that uses QEMU commands directly (see the sketch after the quoted command below).
Thanks.
> rdma://192.168.1.100 [VM] --listen-address 0.0.0.0
> qemu+tcp://192.168.1.100/system --verbose
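For reference, here is a minimal sketch of how the same scenario might be driven directly from QEMU, without libvirt. This is only an illustration: the IP, port and channel count are placeholders, and whether the generic 'multifd' capability can be combined with an rdma: URI depends on this series being applied.

Destination host (start QEMU listening for an incoming RDMA migration):

    qemu-system-x86_64 ... -incoming rdma:0.0.0.0:4444

Source host (HMP monitor):

    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 4
    (qemu) migrate -d rdma:192.168.1.100:4444
    (qemu) info migrate

For the rdma-pin-all case, additionally run 'migrate_set_capability rdma-pin-all on' on the source before starting the migration.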
>
> v2 -> v3:
> create multifd ops for both tcp and rdma
> do not export rdma internals, to avoid cluttering the multifd code
> fix build issue for non-RDMA builds
> fix some coding-style issues and buggy code
>
> Chuan Zheng (18):
> migration/rdma: add the 'migrate_use_rdma_pin_all' function
> migration/rdma: judge whether or not the RDMA is used for migration
> migration/rdma: create multifd_setup_ops for Tx/Rx thread
> migration/rdma: add multifd_setup_ops for rdma
> migration/rdma: do not need sync main for rdma
> migration/rdma: export MultiFDSendParams/MultiFDRecvParams
> migration/rdma: add rdma field into multifd send/recv param
> migration/rdma: export getQIOChannel to get QIOchannel in rdma
> migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma
> migration/rdma: Create the multifd recv channels for RDMA
> migration/rdma: record host_port for multifd RDMA
> migration/rdma: Create the multifd send channels for RDMA
> migration/rdma: Add the function for dynamic page registration
> migration/rdma: register memory for multifd RDMA channels
> migration/rdma: only register the memory for multifd channels
> migration/rdma: add rdma_channel into Migrationstate field
> migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all
> mode
> migration/rdma: RDMA cleanup for multifd migration
>
> migration/migration.c | 24 +++
> migration/migration.h | 11 ++
> migration/multifd.c | 97 +++++++++-
> migration/multifd.h | 24 +++
> migration/qemu-file.c | 5 +
> migration/qemu-file.h | 1 +
> migration/rdma.c | 503 +++++++++++++++++++++++++++++++++++++++++++++++++-
> 7 files changed, 653 insertions(+), 12 deletions(-)
>
> --
> 1.8.3.1
Thread overview: 31+ messages
2020-10-17 4:25 [PATCH v3 00/18] Support Multifd for RDMA migration Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 01/18] migration/rdma: add the 'migrate_use_rdma_pin_all' function Chuan Zheng
2020-11-10 11:52 ` Dr. David Alan Gilbert
2020-10-17 4:25 ` [PATCH v3 02/18] migration/rdma: judge whether or not the RDMA is used for migration Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 03/18] migration/rdma: create multifd_setup_ops for Tx/Rx thread Chuan Zheng
2020-11-10 12:11 ` Dr. David Alan Gilbert
2020-11-11 7:51 ` Zheng Chuan
2020-10-17 4:25 ` [PATCH v3 04/18] migration/rdma: add multifd_setup_ops for rdma Chuan Zheng
2020-11-10 12:30 ` Dr. David Alan Gilbert
2020-11-11 7:56 ` Zheng Chuan
2020-10-17 4:25 ` [PATCH v3 05/18] migration/rdma: do not need sync main " Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 06/18] migration/rdma: export MultiFDSendParams/MultiFDRecvParams Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 07/18] migration/rdma: add rdma field into multifd send/recv param Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 08/18] migration/rdma: export getQIOChannel to get QIOchannel in rdma Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 09/18] migration/rdma: add multifd_rdma_load_setup() to setup multifd rdma Chuan Zheng
2020-11-10 16:51 ` Dr. David Alan Gilbert
2020-10-17 4:25 ` [PATCH v3 10/18] migration/rdma: Create the multifd recv channels for RDMA Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 11/18] migration/rdma: record host_port for multifd RDMA Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 12/18] migration/rdma: Create the multifd send channels for RDMA Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 13/18] migration/rdma: Add the function for dynamic page registration Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 14/18] migration/rdma: register memory for multifd RDMA channels Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 15/18] migration/rdma: only register the memory for multifd channels Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 16/18] migration/rdma: add rdma_channel into Migrationstate field Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 17/18] migration/rdma: send data for both rdma-pin-all and NOT rdma-pin-all mode Chuan Zheng
2020-10-17 4:25 ` [PATCH v3 18/18] migration/rdma: RDMA cleanup for multifd migration Chuan Zheng
2020-10-21 9:25 ` Zhanghailiang [this message]
2020-10-21 9:33 ` [PATCH v3 00/18] Support Multifd for RDMA migration Zheng Chuan
2020-10-23 19:02 ` Dr. David Alan Gilbert
2020-10-25 2:29 ` Zheng Chuan
2020-12-15 7:28 ` Zheng Chuan
2020-12-18 20:01 ` Dr. David Alan Gilbert