From: Ming Lin <mlin@kernel.org>
Subject: [RFC PATCH 0/2] virtio nvme
Date: Fri, 11 Sep 2015 10:21:41 -0700
Message-ID: <1441992101.802.11.camel@ssi>
In-Reply-To: <CAJSP0QXbjY5kbz0jTJSDyo=Zh9B+DRZorTY0m3bRC8yhJT=Nvw@mail.gmail.com>

On Fri, 2015-09-11 at 08:48 +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 10, 2015 at 6:28 PM, Ming Lin <mlin@kernel.org> wrote:
> > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> >> On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin@kernel.org> wrote:
> >> > These 2 patches added virtio-nvme to kernel and qemu,
> >> > basically modified from virtio-blk and nvme code.
> >> >
> >> > As title said, request for your comments.
> >> >
> >> > Play it in Qemu with:
> >> > -drive file=disk.img,format=raw,if=none,id=D22 \
> >> > -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
> >> >
> >> > The goal is to have a full NVMe stack from VM guest(virtio-nvme)
> >> > to host(vhost_nvme) to LIO NVMe-over-fabrics target.
> >>
> >> Why is a virtio-nvme guest device needed? I guess there must either
> >> be NVMe-only features that you want to pass through, or you think the
> >> performance will be significantly better than virtio-blk/virtio-scsi?
> >
> > It simply passes through NVMe commands.
>
> I understand that. My question is why the guest needs to send NVMe commands?
>
> If the virtio_nvme.ko guest driver only sends read/write/flush then
> there's no advantage over virtio-blk.
>
> There must be something you are trying to achieve which is not
> possible with virtio-blk or virtio-scsi. What is that?

I actually learned from your virtio-scsi work:
http://www.linux-kvm.org/images/f/f5/2011-forum-virtio-scsi.pdf

Then I thought a full NVMe stack from guest to host to target seemed
reasonable.
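
For anyone who wants to try the guest side, the quoted -drive/-device
options drop into an ordinary QEMU invocation along these lines (apart
from those two options, everything here is just a placeholder, not part
of the patches):

  qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
      -drive file=guest.img,format=qcow2,if=virtio \
      -drive file=disk.img,format=raw,if=none,id=D22 \
      -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4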

I'm trying to achieve similar things to virtio-scsi, but entirely over the
NVMe protocol:
- Effective NVMe passthrough (see the sketch after this list)
- Multiple target choices: QEMU, LIO-NVMe (vhost_nvme)
- Almost unlimited scalability. Thousands of namespaces per PCI device
- True NVMe device
- End-to-end Protection Information
- ....
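
To make the passthrough point concrete, here is a rough sketch of the
guest-side idea: queue the raw NVMe command, the data buffer and a
completion area on a virtqueue, much like virtio-blk queues its requests.
The request layout and the helper below are illustrative assumptions, not
the code from the posted patches; only the virtio core calls
(sg_init_one, virtqueue_add_sgs, virtqueue_kick) are real kernel APIs.

/*
 * Illustrative sketch only -- not the code from the posted patches.
 * Queue one raw NVMe command, its data buffer and a completion area
 * on a virtqueue, the same general shape virtio-blk uses for requests.
 */
#include <linux/scatterlist.h>
#include <linux/virtio.h>

struct virtio_nvme_req {		/* assumed per-request layout */
	u8 sqe[64];			/* NVMe submission entry, guest -> host */
	u8 cqe[16];			/* NVMe completion entry, host -> guest */
};

static int virtio_nvme_submit(struct virtqueue *vq,
			      struct virtio_nvme_req *req,
			      void *data, unsigned int len, bool is_write)
{
	struct scatterlist sqe, payload, cqe;
	struct scatterlist *sgs[3];
	unsigned int out = 0, in = 0;
	int err;

	/* The command itself always goes guest -> host. */
	sg_init_one(&sqe, req->sqe, sizeof(req->sqe));
	sgs[out++] = &sqe;

	/* Data buffer direction depends on the command. */
	if (len) {
		sg_init_one(&payload, data, len);
		if (is_write)
			sgs[out++] = &payload;
		else
			sgs[out + in++] = &payload;
	}

	/* Completion entry is written back by the host. */
	sg_init_one(&cqe, req->cqe, sizeof(req->cqe));
	sgs[out + in++] = &cqe;

	err = virtqueue_add_sgs(vq, sgs, out, in, req, GFP_ATOMIC);
	if (err)
		return err;

	virtqueue_kick(vq);
	return 0;
}

On the host side, QEMU (or later vhost_nvme backed by the LIO target)
would pull the 64-byte SQE off the ring, execute it against the backing
namespace and write the completion back.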