From: Paolo Bonzini
Date: Tue, 25 Oct 2016 23:59:57 +0200
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Message-ID: <8bcb6620-a810-8e6f-8249-2d3b1c5f2937@redhat.com>
To: Ketan Nilangekar
Cc: "kwolf@redhat.com", "Venkatesha M.G.", Ashish Mittal, Stefan Hajnoczi, "jcody@redhat.com", "qemu-devel@nongnu.org", "armbru@redhat.com", Abhijit Dey, "famz@redhat.com", Ashish Mittal

On 25/10/2016 23:53, Ketan Nilangekar wrote:
> We need to confirm the perf numbers, but it really depends on the way
> we do failover outside qemu.
>
> We are looking at a VIP-based failover implementation which may need
> some handling code in qnio, but that overhead should be minimal (at
> least no more than the current impl in the qemu driver).

Then it's not outside QEMU's address space, it's only outside
block/vxhs.c... I don't understand.

Paolo

> IMO, the real benefit of qemu + qnio perf comes from:
> 1. the epoll-based I/O multiplexer
> 2. 8 epoll threads
> 3. Zero buffer copies in userland code
> 4. Minimal locking
>
> We are also looking at replacing the existing qnio socket code with
> the memory readv/writev calls available in the latest kernel for even
> better performance.
>
> Ketan
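(For illustration only, not taken from qnio: a minimal sketch of the
general shape of an epoll-based I/O multiplexer loop as described in
point 1 above. handle_io(), multiplexer_loop() and MAX_EVENTS are
placeholder names.)

    /* Sketch of an epoll-based I/O multiplexer loop; one such loop
     * would run per epoll thread (point 2 above mentions eight). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    static void handle_io(int fd, uint32_t events)
    {
        /* placeholder: do the actual readv()/writev() on fd here,
         * ideally straight into caller-owned iovecs so no
         * intermediate userland copy is needed (point 3 above) */
        (void)fd;
        (void)events;
    }

    static void *multiplexer_loop(void *arg)
    {
        int epfd = *(int *)arg;   /* epoll instance from epoll_create1() */
        struct epoll_event events[MAX_EVENTS];

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            if (n < 0) {
                perror("epoll_wait");
                break;
            }
            for (int i = 0; i < n; i++) {
                handle_io(events[i].data.fd, events[i].events);
            }
        }
        return NULL;
    }

Running several such threads on independent epoll instances, each
touching only its own file descriptors, is also what keeps locking
minimal (point 4).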
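(The "memory readv/writev calls" presumably refer to
process_vm_readv(2)/process_vm_writev(2), available since Linux 3.2.
Under that assumption, a minimal sketch of pulling a remote buffer in a
single syscall follows; read_remote() is a hypothetical helper, and the
pid and remote address would have to be communicated by the peer
process.)

    /* Sketch: copy len bytes from remote_addr in process pid into
     * local_buf with one syscall, skipping the socket path entirely. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/uio.h>

    static ssize_t read_remote(pid_t pid, void *local_buf,
                               void *remote_addr, size_t len)
    {
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

        /* requires CAP_SYS_PTRACE or ordinary ptrace-access
         * permission towards the target process */
        return process_vm_readv(pid, &local, 1, &remote, 1, 0);
    }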
>> On Oct 25, 2016, at 1:01 PM, Paolo Bonzini wrote:
>>
>>> On 25/10/2016 07:07, Ketan Nilangekar wrote:
>>> We are able to derive significant performance from the qemu block
>>> driver as compared to nbd/iscsi/nfs. We have prototyped NFS- and
>>> NBD-based I/O taps in the past, and the performance of the qemu
>>> block driver is significantly better. Hence we would like to go
>>> with the vxhs driver for now.
>>
>> Is this still true with failover implemented outside QEMU (which
>> requires I/O to be proxied, if I'm not mistaken)? If so, where does
>> the benefit come from? Is it the threaded backend and performing
>> multiple connections to the same server?
>>
>> Paolo
>>
>>> Ketan
>>>
>>>> On Oct 24, 2016, at 4:24 PM, Paolo Bonzini wrote:
>>>>
>>>>> On 20/10/2016 03:31, Ketan Nilangekar wrote:
>>>>> This way the failover logic will be completely out of qemu
>>>>> address space. We are considering use of some of our proprietary
>>>>> clustering/monitoring services to implement service failover.
>>>>
>>>> Are you implementing a different protocol just for the sake of
>>>> QEMU, in other words, and forwarding from that protocol to your
>>>> proprietary code?
>>>>
>>>> If that is what you are doing, you don't need a vxhs driver in
>>>> QEMU at all. Just implement NBD or iSCSI on your side; QEMU
>>>> already has drivers for those.
>>>>
>>>> Paolo
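(For reference, consuming an NBD export through QEMU's existing driver
needs no new block driver code on the QEMU side; the host, port and
export name below are placeholders:

    qemu-system-x86_64 ... \
        -drive file=nbd://127.0.0.1:10809/vol0,format=raw,if=virtio

An iSCSI target can be attached similarly with
file=iscsi://<portal>/<target-iqn>/<lun>, provided QEMU is built with
libiscsi.)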