From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5176CADB.4000304@linux.vnet.ibm.com>
Date: Tue, 23 Apr 2013 13:54:35 -0400
From: "Michael R. Hines"
MIME-Version: 1.0
References: <1366682139-22122-1-git-send-email-mrhines@linux.vnet.ibm.com> <87li89dtmc.fsf@codemonkey.ws>
In-Reply-To: <87li89dtmc.fsf@codemonkey.ws>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v5 00/12] rdma: migration support
To: Anthony Liguori
Cc: quintela@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, Bulent Abali, Michael R Hines, Gokul B Kandiraju, pbonzini@redhat.com

On 04/23/2013 01:50 PM, Anthony Liguori wrote:
> mrhines@linux.vnet.ibm.com writes:
>
>> From: "Michael R. Hines"
>>
>> Juan, Please pull.
>
> I assume this is actually v6, not v5?
>
> I don't see collected Reviewed-bys...
>
> That said, we're pretty close to hard freeze. I think this should wait
> until 1.6 opens up although I'm open to suggestion if people think this
> is low risk. I don't like the idea of adding a new protocol this close
> to the end of a cycle.
>
> Regards,
>
> Anthony Liguori

There are no instructions/procedures documented on the qemu.org website
on how to automatically generate "Reviewed-by" signatures.

How do you guys do that?
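[Editor's note: the usual answer, sketched here as an assumption rather than documented QEMU process, is that the submitter or maintainer copies the Reviewed-by lines reviewers post on-list into each commit message as trailers before the next revision, often scripted around `git commit --amend` or `git interpret-trailers`. The reviewer below is a placeholder, not a real tag from this thread.]

```shell
# Hypothetical sketch: append a Reviewed-by tag collected from the
# mailing list to a commit message before re-sending the series.
# Here the commit message is just held in a shell variable so the
# trailer-appending step can be shown on its own.
msg='rdma: core logic

Signed-off-by: Michael R. Hines <mrhines@linux.vnet.ibm.com>'

# Append the trailer on its own line at the end of the message.
printf '%s\nReviewed-by: Example Reviewer <reviewer@example.com>\n' "$msg"
```

In a real workflow this printf step would be replaced by rewriting the commit itself (e.g. `git commit --amend` per patch, or a rebase script), so the tags land in the commits that go out with v6.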
- Michael

>> Changes since v4:
>>
>> - Re-ran checkpatch.pl
>> - Added new QEMUFileOps function: qemu_get_max_size()
>> - Renamed capability to x-pin-all, disabled by default
>> - Added numbers for x-pin-all to performance section in docs/rdma.txt
>> - Included performance numbers in this cover letter
>> - Converted throughput patch to a MigrationStats statistic in QMP
>> - Better QMP error message delivery
>> - Updated documentation
>> - Moved docs/rdma.txt up to top of patch series
>> - Fixed all v4 changes requested
>> - Finished additional cleanup requests
>> - Updated copyright for migration-rdma.c
>>
>> Wiki: http://wiki.qemu.org/Features/RDMALiveMigration
>> Github: git@github.com:hinesmr/qemu.git
>>
>> Here is a brief summary of total migration time and downtime using RDMA:
>>
>> Worst-case stress test over a 40 Gbps Infiniband link, with an 8GB RAM
>> virtual machine, using the following command:
>>
>> $ apt-get install stress
>> $ stress --vm-bytes 7500M --vm 1 --vm-keep
>>
>> RESULTS:
>>
>> 1. Migration throughput: 26 gigabits/second.
>> 2. Downtime (stop time) varies between 15 and 100 milliseconds.
>>
>> EFFECTS of memory registration on bulk phase round:
>>
>> In the same 8GB RAM example, with all 8GB of memory in active use but
>> the VM itself otherwise completely idle, over the same 40 Gbps
>> Infiniband link:
>>
>> 1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
>> 2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
>>
>> These numbers would of course scale with whatever size virtual machine
>> you have to migrate using RDMA.
>>
>> Enabling this feature does *not* have any measurable effect on
>> migration *downtime*. This is because, without this feature, all of
>> the memory will already have been registered in advance during the
>> bulk round and does not need to be re-registered during the successive
>> iteration rounds.
>>
>> Michael R. Hines (12):
>>   rdma: add documentation
>>   rdma: export yield_until_fd_readable()
>>   rdma: export throughput w/ MigrationStats QMP
>>   rdma: introduce qemu_get_max_size()
>>   rdma: introduce qemu_file_mode_is_not_valid()
>>   rdma: export qemu_fflush()
>>   rdma: introduce ram_handle_compressed()
>>   rdma: introduce qemu_ram_foreach_block()
>>   rdma: new QEMUFileOps hooks
>>   rdma: introduce capability x-rdma-pin-all
>>   rdma: core logic
>>   rdma: send pc.ram
>>
>>  Makefile.objs                 |    1 +
>>  arch_init.c                   |   59 +-
>>  configure                     |   29 +
>>  docs/rdma.txt                 |  404 ++++++
>>  exec.c                        |    9 +
>>  hmp.c                         |    2 +
>>  include/block/coroutine.h     |    6 +
>>  include/exec/cpu-common.h     |    5 +
>>  include/migration/migration.h |   24 +
>>  include/migration/qemu-file.h |   44 +
>>  migration-rdma.c              | 2727 +++++++++++++++++++++++++++++++++++++++++
>>  migration.c                   |   22 +-
>>  qapi-schema.json              |   12 +-
>>  qemu-coroutine-io.c           |   23 +
>>  savevm.c                      |  133 +-
>>  15 files changed, 3451 insertions(+), 49 deletions(-)
>>  create mode 100644 docs/rdma.txt
>>  create mode 100644 migration-rdma.c
>>
>> --
>> 1.7.10.4
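[Editor's note: for readers trying the series, the capability and the RDMA transport discussed above would, per the docs/rdma.txt added by this patch set, be exercised from the QEMU monitor roughly as below. The destination host/port are placeholders, and the `x-` prefixes reflect the experimental naming in this version of the series.]

```
(qemu) migrate_set_capability x-rdma-pin-all on    # optional; disabled by default
(qemu) migrate -d x-rdma:192.168.1.1:4444
```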