From: Ming Lin <mlin@kernel.org>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Christoph Hellwig <hch@lst.de>,
linux-nvme@lists.infradead.org,
virtualization@lists.linux-foundation.org
Subject: Re: [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Date: Tue, 01 Dec 2015 08:26:11 -0800
Message-ID: <1448987171.3041.2.camel@hasee>
In-Reply-To: <565DC48B.6030903@redhat.com>
On Tue, 2015-12-01 at 17:02 +0100, Paolo Bonzini wrote:
>
> On 01/12/2015 00:20, Ming Lin wrote:
> > qemu-nvme: 148MB/s
> > vhost-nvme + google-ext: 230MB/s
> > qemu-nvme + google-ext + eventfd: 294MB/s
> > virtio-scsi: 296MB/s
> > virtio-blk: 344MB/s
> >
> > "vhost-nvme + google-ext" didn't get good enough performance.
>
> I'd expect it to be on par with qemu-nvme with ioeventfd, but the question
> is: why should it be better? For vhost-net, the answer is that more
> zerocopy can be done if you put the data path in the kernel.
>
> But qemu-nvme is already using io_submit for the data path, perhaps
> there's not much to gain from vhost-nvme...
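
For context on that point: with aio=native, QEMU drives the image through
Linux AIO, so the userspace data path is already asynchronous, and with
O_DIRECT it avoids an extra copy through the page cache. A minimal libaio
sketch of that submit/reap path follows; the device path and the 4K size are
placeholders for illustration, not anything taken from this thread.

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        io_context_t ctx = 0;
        struct iocb cb, *cbs[1] = { &cb };
        struct io_event ev;
        void *buf;
        int fd;

        /* O_DIRECT: the driver DMAs straight into our aligned buffer,
         * no extra copy through the page cache */
        fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* placeholder device */
        if (fd < 0) { perror("open"); return 1; }

        if (posix_memalign(&buf, 4096, 4096)) return 1;
        if (io_setup(128, &ctx) < 0) { perror("io_setup"); return 1; }

        /* queue one 4K read at offset 0, then reap its completion */
        io_prep_pread(&cb, fd, buf, 4096, 0);
        if (io_submit(ctx, 1, cbs) != 1) { perror("io_submit"); return 1; }
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { perror("io_getevents"); return 1; }

        printf("read completed, res=%ld\n", (long)ev.res);
        io_destroy(ctx);
        close(fd);
        return 0;
}

(build with -laio)
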
What do you think about virtio-nvme + vhost-nvme?

I also have patches for virtio-nvme:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-split/virtio

vhost-nvme would just need to be changed to work with it.
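
For the "eventfd" variant in the numbers above, the mechanism is KVM's
ioeventfd: the submission-queue doorbell address is registered with
KVM_IOEVENTFD so a guest doorbell write just signals an eventfd inside the
kernel instead of taking a full MMIO exit to userspace, and whichever backend
owns the queue (a QEMU iothread or a vhost worker) drains it when the fd
fires. A rough sketch; the helper name and the doorbell_gpa parameter are
made up for illustration.

#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

/* Turn guest writes to one NVMe submission-queue doorbell into eventfd
 * signals handled in the kernel, instead of full MMIO exits to userspace.
 * vm_fd is the KVM VM file descriptor; doorbell_gpa is the guest-physical
 * address of the doorbell register. */
static int register_doorbell_eventfd(int vm_fd, uint64_t doorbell_gpa)
{
        struct kvm_ioeventfd args = { 0 };
        int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

        if (efd < 0)
                return -1;

        args.addr  = doorbell_gpa;  /* MMIO address to trap */
        args.len   = 4;             /* NVMe doorbell writes are 32-bit */
        args.fd    = efd;
        args.flags = 0;             /* no datamatch: any written value fires */

        if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0) {
                close(efd);
                return -1;
        }

        /* the queue's backend thread can now poll() efd and process the
         * submission queue whenever it fires */
        return efd;
}

Presumably a kernel-side vhost-nvme backend would consume a similar eventfd
from its worker thread, but that is an assumption about the series, not
something stated in this message.
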
>
> Paolo
>
> > Still tuning.
Thread overview: 30+ messages
2015-11-20 0:20 [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 1/9] nvme-vhost: add initial commit Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 2/9] nvme-vhost: add basic ioctl handlers Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 3/9] nvme-vhost: add basic nvme bar read/write Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 4/9] nvmet: add a controller "start" hook Ming Lin
2015-11-20 5:13 ` Christoph Hellwig
2015-11-20 5:31 ` Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 5/9] nvme-vhost: add controller "start" callback Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 6/9] nvmet: add a "parse_extra_admin_cmd" hook Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 7/9] nvme-vhost: add "parse_extra_admin_cmd" callback Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 8/9] nvme-vhost: add vhost memory helpers Ming Lin
2015-11-20 0:21 ` [Qemu-devel] [RFC PATCH 9/9] nvme-vhost: add nvme queue handlers Ming Lin
2015-11-20 5:16 ` [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target Christoph Hellwig
2015-11-20 5:33 ` Ming Lin
2015-11-21 13:11 ` Paolo Bonzini
2015-11-23 8:17 ` Ming Lin
2015-11-23 14:14 ` Paolo Bonzini
2015-11-24 7:27 ` Ming Lin
2015-11-24 8:23 ` Ming Lin
2015-11-24 10:51 ` Paolo Bonzini
2015-11-24 19:25 ` Ming Lin
2015-11-25 11:27 ` Paolo Bonzini
2015-11-25 18:51 ` Ming Lin
2015-11-25 19:32 ` Paolo Bonzini
2015-11-30 23:20 ` Ming Lin
2015-12-01 16:02 ` Paolo Bonzini
2015-12-01 16:26 ` Ming Lin [this message]
2015-12-01 16:59 ` Paolo Bonzini
2015-12-02 5:13 ` Ming Lin
2015-12-02 10:07 ` Paolo Bonzini