Message-ID: <51477873.2030005@linux.vnet.ibm.com>
Date: Mon, 18 Mar 2013 16:26:27 -0400
From: "Michael R. Hines"
In-Reply-To: <5146D6C8.5040709@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 07/10] connection-establishment for RDMA
To: Paolo Bonzini
Cc: aliguori@us.ibm.com, mst@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com

Acknowledged.

On 03/18/2013 04:56 AM, Paolo Bonzini wrote:
> In a "socket-like" abstraction, all of these steps except the initial
> listen are part of "accept". Please move them to
> qemu_rdma_migrate_accept (possibly renaming the existing
> qemu_rdma_migrate_accept to a different name). Similarly, perhaps you
> can merge qemu_rdma_server_prepare and qemu_rdma_migrate_listen. Try
> to make the public API between modules as small as possible (but not
> smaller :)), so that you can easily document it without too many
> references to RDMA concepts.
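If I'm reading the suggestion right, the public surface of the module
would shrink to a listen/accept pair that mirrors the sockets API.
A rough sketch of the shape I have in mind -- the signatures are
hypothetical; only the two function names come from your mail:

/*
 * Hypothetical consolidated API, modeled on listen(2)/accept(2).
 * qemu_rdma_migrate_listen would absorb today's qemu_rdma_server_init
 * and qemu_rdma_server_prepare; qemu_rdma_migrate_accept would take
 * over every step that currently runs between the rdma_cm connect
 * request and the first byte of migration traffic.
 */
int qemu_rdma_migrate_listen(RDMAData *rdma, Error **errp);
int qemu_rdma_migrate_accept(RDMAData *rdma, Error **errp);

That would keep ibverbs/rdma_cm types out of the header, so the rest
of the migration code only ever sees listen/accept/read/write.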
Thanks, Paolo

>> +    rdma->total_bytes = 0;
>> +    rdma->enabled = 1;
>> +    qemu_rdma_dump_gid("server_connect", rdma->rdma_ctx.cm_id);
>> +    return 0;
>> +
>> +err_rdma_server_wait:
>> +    qemu_rdma_cleanup(rdma);
>> +    return -1;
>> +
>> +}
>> +
>> +int rdma_start_incoming_migration(const char * host_port, Error **errp)
>> +{
>> +    RDMAData *rdma = g_malloc0(sizeof(RDMAData));
>> +    QEMUFile *f;
>> +    int ret;
>> +
>> +    if ((ret = qemu_rdma_data_init(rdma, host_port, errp)) < 0)
>> +        return ret;
>> +
>> +    ret = qemu_rdma_server_init(rdma, NULL);
>> +
>> +    DPRINTF("Starting RDMA-based incoming migration\n");
>> +
>> +    if (!ret) {
>> +        DPRINTF("qemu_rdma_server_init success\n");
>> +        ret = qemu_rdma_server_prepare(rdma, NULL);
>> +
>> +        if (!ret) {
>> +            DPRINTF("qemu_rdma_server_prepare success\n");
>> +
>> +            ret = rdma_accept_incoming_migration(rdma, NULL);
>> +            if(!ret)
>> +                DPRINTF("qemu_rdma_accept_incoming_migration success\n");
>> +            f = qemu_fopen_rdma(rdma, "rb");
>> +            if (f == NULL) {
>> +                fprintf(stderr, "could not qemu_fopen RDMA\n");
>> +                ret = -EIO;
>> +            }
>> +
>> +            process_incoming_migration(f);
>> +        }
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +void rdma_start_outgoing_migration(void *opaque, const char *host_port, Error **errp)
>> +{
>> +    RDMAData *rdma = g_malloc0(sizeof(RDMAData));
>> +    MigrationState *s = opaque;
>> +    int ret;
>> +
>> +    if (qemu_rdma_data_init(rdma, host_port, errp) < 0)
>> +        return;
>> +
>> +    ret = qemu_rdma_client_init(rdma, NULL);
>> +    if(!ret) {
>> +        DPRINTF("qemu_rdma_client_init success\n");
>> +        ret = qemu_rdma_client_connect(rdma, NULL);
>> +
>> +        if(!ret) {
>> +            s->file = qemu_fopen_rdma(rdma, "wb");
>> +            DPRINTF("qemu_rdma_client_connect success\n");
>> +            migrate_fd_connect(s);
>> +            return;
>> +        }
>> +    }
>> +
>> +    migrate_fd_error(s);
>> +}
>> +
>> +size_t save_rdma_page(QEMUFile *f, ram_addr_t block_offset, ram_addr_t offset, int cont, size_t size)
>> +{
>> +    int ret;
>> +    size_t bytes_sent = 0;
>> +    ram_addr_t current_addr;
>> +    RDMAData * rdma = migrate_use_rdma(f);
>> +
>> +    current_addr = block_offset + offset;
>> +
>> +    /*
>> +     * Add this page to the current 'chunk'. If the chunk
>> +     * is full, an actual RDMA write will occur.
>> +     */
>> +    if ((ret = qemu_rdma_write(rdma, current_addr, size)) < 0) {
>> +        fprintf(stderr, "rdma migration: write error! %d\n", ret);
>> +        return ret;
>> +    }
>> +
>> +    /*
>> +     * Drain the Completion Queue if possible.
>> +     * If not, the end of the iteration will do this
>> +     * again to make sure we don't overflow the
>> +     * request queue.
>> +     */
>> +    while (1) {
>> +        int ret = qemu_rdma_poll(rdma);
>> +        if (ret == RDMA_WRID_NONE) {
>> +            break;
>> +        }
>> +        if (ret < 0) {
>> +            fprintf(stderr, "rdma migration: polling error! %d\n", ret);
>> +            return ret;
>> +        }
>> +    }
>> +
>> +    bytes_sent += size;
>> +    return bytes_sent;
>> +}
>> +
>> +size_t qemu_rdma_fill(void * opaque, uint8_t *buf, int size)
>> +{
>> +    RDMAData * rdma = opaque;
>> +    size_t len = 0;
>> +
>> +    if(rdma->qemu_file_len) {
>> +        DPRINTF("RDMA %" PRId64 " of %d bytes already in buffer\n",
>> +                rdma->qemu_file_len, size);
>> +
>> +        len = MIN(size, rdma->qemu_file_len);
>> +        memcpy(buf, rdma->qemu_file_curr, len);
>> +        rdma->qemu_file_curr += len;
>> +        rdma->qemu_file_len -= len;
>> +    }
>> +
>> +    return len;
>> +}
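For the record, here is roughly how the incoming path above would
collapse once accept owns connection establishment. This is only a
sketch against the hypothetical listen/accept pair from earlier in
the thread, with the error handling folded into one exit path:

int rdma_start_incoming_migration(const char *host_port, Error **errp)
{
    RDMAData *rdma = g_malloc0(sizeof(*rdma));
    QEMUFile *f;
    int ret;

    ret = qemu_rdma_data_init(rdma, host_port, errp);
    if (ret < 0) {
        return ret;
    }

    /*
     * Hypothetical merged API: listen absorbs server_init +
     * server_prepare, accept absorbs the remaining
     * connection-establishment steps.
     */
    if (qemu_rdma_migrate_listen(rdma, errp) < 0 ||
        qemu_rdma_migrate_accept(rdma, errp) < 0) {
        qemu_rdma_cleanup(rdma);
        return -1;
    }

    f = qemu_fopen_rdma(rdma, "rb");
    if (f == NULL) {
        fprintf(stderr, "could not qemu_fopen RDMA\n");
        qemu_rdma_cleanup(rdma);
        return -EIO;
    }

    process_incoming_migration(f);
    return 0;
}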