From: mlin@kernel.org (Ming Lin)
Subject: [RFC PATCH 0/2] virtio nvme
Date: Thu, 10 Sep 2015 10:28:18 -0700
Message-ID: <1441906098.18716.21.camel@ssi>
In-Reply-To: <CAJSP0QUmtgoh23G3u64xRnBbJkAoO19AEi=vkyPPyg9n9jqC2Q@mail.gmail.com>
On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin@kernel.org> wrote:
> > These 2 patches added virtio-nvme to kernel and qemu,
> > basically modified from virtio-blk and nvme code.
> >
> > As the title says, this is a request for your comments.
> >
> > Play it in Qemu with:
> > -drive file=disk.img,format=raw,if=none,id=D22 \
> > -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
> >
> > The goal is to have a full NVMe stack from VM guest (virtio-nvme)
> > to host (vhost_nvme) to a LIO NVMe-over-fabrics target.
>
> Why is a virtio-nvme guest device needed? I guess there must either
> be NVMe-only features that you want to pass through, or you think the
> performance will be significantly better than virtio-blk/virtio-scsi?
It simply passes NVMe commands through.
Right now performance is poor. Performance tuning is on my todo list.
It should eventually be as good as virtio-blk/virtio-scsi.
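To make "passes through" concrete, here is a rough sketch of the kind of
request layout I have in mind (all names are illustrative only, not the
actual layout from the RFC patches): the guest places an unmodified
64-byte NVMe submission-queue entry on the virtqueue, and the host
completes it with raw NVMe status, so nothing is translated to a block
or SCSI abstraction in between.

#include <stdint.h>

struct virtio_nvme_req {
	/* NVMe submission-queue entry, passed through untranslated
	 * (opcode, nsid, PRP/SGL pointers, cdw10..15, ...). */
	uint8_t  sqe[64];

	/* Data buffers follow as separate virtqueue descriptors. */

	/* Completion written back by the host: raw NVMe status code
	 * plus the command-specific result dword. */
	uint16_t status;
	uint32_t result;
};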
>
> At first glance it seems like the virtio_nvme guest driver is just
> another block driver like virtio_blk, so I'm not clear why a
> virtio-nvme device makes sense.
I think the future "LIO NVMe target" will only speak the NVMe protocol.
Nick (CCed), could you correct me if I'm wrong?
For the SCSI stack, we have:
  virtio-scsi (guest)
  tcm_vhost (or vhost_scsi, host)
  LIO SCSI target
For the NVMe stack, we'll have similar components, sketched below:
  virtio-nvme (guest)
  vhost_nvme (host)
  LIO NVMe target
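To illustrate the host side (hypothetical names only, building on the
request sketch above; vq_pop()/vq_push()/lio_nvme_execute() are not
real APIs from the patches), vhost_nvme would be little more than a
loop that forwards raw SQEs to the LIO target:

struct virtqueue;           /* opaque transport handle */
struct virtio_nvme_req;     /* request layout sketched earlier */

/* Hypothetical helpers, not APIs from the patches: */
struct virtio_nvme_req *vq_pop(struct virtqueue *vq);
void vq_push(struct virtqueue *vq, struct virtio_nvme_req *req);
int lio_nvme_execute(struct virtio_nvme_req *req); /* fills status/result */

static void vhost_nvme_handle_vq(struct virtqueue *vq)
{
	struct virtio_nvme_req *req;

	/* Drain guest submissions: hand each raw NVMe SQE to the LIO
	 * NVMe target and complete it back to the guest with raw NVMe
	 * status.  No SCSI translation at any layer. */
	while ((req = vq_pop(vq)) != NULL) {
		lio_nvme_execute(req);
		vq_push(vq, req);
	}
}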
>
> > Now there is a lot of code duplicated between linux/nvme-core.c and qemu/nvme.c.
> > The ideal result is a multi-level NVMe stack (similar to SCSI),
> > so we can reuse the NVMe code, for example:
> >
> > .-------------------------.
> > | NVMe device register |
> > Upper level | NVMe protocol process |
> > | |
> > '-------------------------'
> >
> >
> >
> > .-----------. .-----------. .------------------.
> > Lower level | PCIe | | VIRTIO | |NVMe over Fabrics |
> > | | | | |initiator |
> > '-----------' '-----------' '------------------'
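Roughly, the split could look like this (hypothetical names, loosely
modeled on how the SCSI midlayer abstracts its transports; this is a
sketch of the idea, not code from the patches). The upper level owns
device registration and NVMe protocol state; each lower level (PCIe,
virtio, fabrics initiator) implements only transport-specific queue
handling behind a small ops table:

struct nvme_lower_ops {
	int  (*create_queues)(void *transport, unsigned int nr_queues);
	int  (*submit_sqe)(void *transport, const void *sqe,
			   void *data, unsigned int len);
	void (*delete_queues)(void *transport);
};

/* The shared upper level registers a controller against whichever
 * transport is bound and routes all commands through its ops. */
int nvme_register_ctrl(void *transport, const struct nvme_lower_ops *ops);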
>
> You mentioned LIO and SCSI. How will NVMe over Fabrics be integrated
> into LIO? If it is mapped to SCSI then using virtio_scsi in the guest
> and tcm_vhost should work.
I think it's not mapped to SCSI.
Nick, would you share more here?
>
> Please also post virtio draft specifications documenting the virtio device.
I'll do this later.
>
> Stefan