qemu-devel.nongnu.org archive mirror
From: Ming Lin <mlin@kernel.org>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, Christoph Hellwig <hch@lst.de>,
	linux-nvme@lists.infradead.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Date: Mon, 23 Nov 2015 23:27:54 -0800	[thread overview]
Message-ID: <1448350074.6696.3.camel@hasee> (raw)
In-Reply-To: <56531F5F.3050709@redhat.com>

On Mon, 2015-11-23 at 15:14 +0100, Paolo Bonzini wrote:
> 
> On 23/11/2015 09:17, Ming Lin wrote:
> > On Sat, 2015-11-21 at 14:11 +0100, Paolo Bonzini wrote:
> >>
> >> On 20/11/2015 01:20, Ming Lin wrote:
> >>> One improvement could be to use Google's NVMe vendor extension that
> >>> I sent in another thread, also here:
> >>> https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-google-ext
> >>>
> >>> Qemu side:
> >>> http://www.minggr.net/cgit/cgit.cgi/qemu/log/?h=vhost-nvme.0
> >>> Kernel side also here:
> >>> https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=vhost-nvme.0
> >>
> >> How much do you get with vhost-nvme plus vendor extension, compared to
> >> 190 MB/s for QEMU?
> > 
> > There is still some bug. I'll update.
> 
> Sure.
> 
> >> Note that in all likelihood, QEMU can actually do better than 190 MB/s,
> >> and gain more parallelism too, by moving the processing of the
> >> ioeventfds to a separate thread.  This is similar to
> >> hw/block/dataplane/virtio-blk.c.
> >>
> >> It's actually pretty easy to do.  Even though
> >> hw/block/dataplane/virtio-blk.c is still using some old APIs, all memory
> >> access in QEMU is now thread-safe.  I have pending patches for 2.6 that
> >> cut that file down to a mere 200 lines of code, NVMe would probably be
> >> about the same.
> > 
> > Is there a git tree for your patches?
> 
> No, not yet.  I'll post them today or tomorrow, will make sure to Cc you.
> 
> > Did you mean something like the pseudo-code below?
> > 1. Do we need an iothread for each cq/sq?
> > 2. Do we need an AioContext for each cq/sq?
> > 
> >  hw/block/nvme.c | 32 ++++++++++++++++++++++++++++++--
> >  hw/block/nvme.h |  8 ++++++++
> >  2 files changed, 38 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index f27fd35..fed4827 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -28,6 +28,8 @@
> >  #include "sysemu/sysemu.h"
> >  #include "qapi/visitor.h"
> >  #include "sysemu/block-backend.h"
> > +#include "sysemu/iothread.h"
> > +#include "qom/object_interfaces.h"
> >  
> >  #include "nvme.h"
> >  
> > @@ -558,9 +560,22 @@ static void nvme_init_cq_eventfd(NvmeCQueue *cq)
> >      uint16_t offset = (cq->cqid*2+1) * (4 << NVME_CAP_DSTRD(n->bar.cap));
> >  
> >      event_notifier_init(&cq->notifier, 0);
> > -    event_notifier_set_handler(&cq->notifier, nvme_cq_notifier);
> >      memory_region_add_eventfd(&n->iomem,
> >          0x1000 + offset, 4, false, 0, &cq->notifier);
> > +
> > +    object_initialize(&cq->internal_iothread_obj,
> > +                      sizeof(cq->internal_iothread_obj),
> > +                      TYPE_IOTHREAD);
> > +    user_creatable_complete(OBJECT(&cq->internal_iothread_obj), &error_abort);
> 
> For now, you have to use one iothread for all cq/sq of a single NVMe
> device; multiqueue block layer is planned for 2.7 or 2.8.  Otherwise
> yes, it's very close to just these changes.

Here is the call stack of the iothread for virtio-blk-dataplane:

handle_notify (qemu/hw/block/dataplane/virtio-blk.c:126)
aio_dispatch (qemu/aio-posix.c:329)
aio_poll (qemu/aio-posix.c:474)
iothread_run (qemu/iothread.c:45)
start_thread (pthread_create.c:312)
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)

I think I'll have an "nvme_dev_notify" similar to "handle_notify":

static void nvme_dev_notify(EventNotifier *e)
{
    ....
}

But then how can I know whether this notification is for a cq or an sq?


Thread overview: 30+ messages
2015-11-20  0:20 [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 1/9] nvme-vhost: add initial commit Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 2/9] nvme-vhost: add basic ioctl handlers Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 3/9] nvme-vhost: add basic nvme bar read/write Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 4/9] nvmet: add a controller "start" hook Ming Lin
2015-11-20  5:13   ` Christoph Hellwig
2015-11-20  5:31     ` Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 5/9] nvme-vhost: add controller "start" callback Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 6/9] nvmet: add a "parse_extra_admin_cmd" hook Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 7/9] nvme-vhost: add "parse_extra_admin_cmd" callback Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 8/9] nvme-vhost: add vhost memory helpers Ming Lin
2015-11-20  0:21 ` [Qemu-devel] [RFC PATCH 9/9] nvme-vhost: add nvme queue handlers Ming Lin
2015-11-20  5:16 ` [Qemu-devel] [RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target Christoph Hellwig
2015-11-20  5:33   ` Ming Lin
2015-11-21 13:11 ` Paolo Bonzini
2015-11-23  8:17   ` Ming Lin
2015-11-23 14:14     ` Paolo Bonzini
2015-11-24  7:27       ` Ming Lin [this message]
2015-11-24  8:23         ` Ming Lin
2015-11-24 10:51         ` Paolo Bonzini
2015-11-24 19:25           ` Ming Lin
2015-11-25 11:27             ` Paolo Bonzini
2015-11-25 18:51               ` Ming Lin
2015-11-25 19:32                 ` Paolo Bonzini
2015-11-30 23:20       ` Ming Lin
2015-12-01 16:02         ` Paolo Bonzini
2015-12-01 16:26           ` Ming Lin
2015-12-01 16:59             ` Paolo Bonzini
2015-12-02  5:13               ` Ming Lin
2015-12-02 10:07                 ` Paolo Bonzini
