From: Paolo Bonzini
Subject: Re: [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Date: Tue, 1 Dec 2015 17:02:19 +0100
Message-ID: <565DC48B.6030903@redhat.com>
In-Reply-To: <1448925639.27669.7.camel@ssi>
References: <1447978868-17138-1-git-send-email-mlin@kernel.org> <56506D95.70101@redhat.com> <1448266667.18175.5.camel@hasee> <56531F5F.3050709@redhat.com> <1448925639.27669.7.camel@ssi>
To: Ming Lin
Cc: qemu-devel@nongnu.org, Christoph Hellwig, linux-nvme@lists.infradead.org, virtualization@lists.linux-foundation.org

On 01/12/2015 00:20, Ming Lin wrote:
> qemu-nvme: 148MB/s
> vhost-nvme + google-ext: 230MB/s
> qemu-nvme + google-ext + eventfd: 294MB/s
> virtio-scsi: 296MB/s
> virtio-blk: 344MB/s
>
> "vhost-nvme + google-ext" didn't get good enough performance.
> Still tuning.

I'd expect it to be on par with qemu-nvme + ioeventfd, but the question
is: why should it be better? For vhost-net, the answer is that more
zerocopy can be done if you put the data path in the kernel. But
qemu-nvme is already using io_submit for the data path, so perhaps
there's not much to gain from vhost-nvme...

Paolo
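
P.S. For anyone not familiar with the data path in question, below is a
minimal sketch of an io_submit-based read using Linux AIO (build with
"gcc -laio"). The device path and queue depth are placeholders, and this
is not QEMU's actual block-layer code; it only illustrates the pattern of
submitting asynchronously and reaping completions later.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    void *buf;
    int fd, ret;

    /* O_DIRECT requires an aligned buffer */
    if (posix_memalign(&buf, 4096, 4096))
        return 1;

    fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* placeholder device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ret = io_setup(128, &ctx);  /* one AIO context, up to 128 in flight */
    if (ret < 0) {
        fprintf(stderr, "io_setup: %s\n", strerror(-ret));
        return 1;
    }

    /* Queue a 4K read at offset 0; io_submit returns as soon as the
     * request has been handed to the kernel. */
    io_prep_pread(&cb, fd, buf, 4096, 0);
    ret = io_submit(ctx, 1, cbs);
    if (ret != 1) {
        fprintf(stderr, "io_submit: %s\n", strerror(-ret));
        return 1;
    }

    /* Reap the completion; QEMU instead gets woken up in its event
     * loop through an eventfd attached with io_set_eventfd(). */
    ret = io_getevents(ctx, 1, 1, &ev, NULL);
    if (ret != 1) {
        fprintf(stderr, "io_getevents: %s\n", strerror(-ret));
        return 1;
    }

    printf("read %lu bytes\n", ev.res);
    io_destroy(ctx);
    return 0;
}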