From: james_p_freyensee@linux.intel.com (J Freyensee)
Subject: [RFC PATCH 0/2] virtio nvme
Date: Fri, 11 Sep 2015 10:46:29 -0700
Message-ID: <1441993589.7919.28.camel@linux.intel.com>
In-Reply-To: <1441904521.18716.4.camel@ssi>
On Thu, 2015-09-10 at 10:02 -0700, Ming Lin wrote:
> On Thu, 2015-09-10 at 14:02 +0000, Keith Busch wrote:
> > On Wed, 9 Sep 2015, Ming Lin wrote:
> > > The goal is to have a full NVMe stack from the VM guest
> > > (virtio-nvme) to the host (vhost_nvme) to a LIO NVMe-over-fabrics
> > > target.
> > >
> > > Right now a lot of code is duplicated between linux/nvme-core.c
> > > and qemu/nvme.c. The ideal result is a multi-level NVMe stack
> > > (similar to SCSI), so we can re-use the nvme code, for example:
> > >
> > >               .-------------------------.
> > >               | NVMe device register    |
> > > Upper level   | NVMe protocol process   |
> > >               |                         |
> > >               '-------------------------'
> > >
> > >               .-----------. .-----------. .-------------------.
> > > Lower level   | PCIe      | | VIRTIO    | | NVMe over Fabrics |
> > >               |           | |           | | initiator         |
> > >               '-----------' '-----------' '-------------------'
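
The way I read the diagram, the upper level would drive each lower
level through a small per-transport ops table. A rough C sketch of
that split (every name here is made up for illustration; none of this
is taken from the posted patches):

/* Hypothetical sketch only -- not from the RFC patches.  The generic
 * upper level speaks the NVMe protocol and calls into one of these
 * per-transport ops tables; PCIe, virtio, and the fabrics initiator
 * would each supply an implementation. */
struct nvme_ctrl;
struct nvme_queue;
struct nvme_command;

struct nvme_transport_ops {
	const char *name;
	int  (*create_queue)(struct nvme_ctrl *ctrl, int qid, int depth);
	void (*delete_queue)(struct nvme_ctrl *ctrl, int qid);
	int  (*submit_cmd)(struct nvme_queue *q, struct nvme_command *cmd);
};

/* One instance per lower-level box in the diagram; the function
 * pointers are left unset here since this is only a shape sketch. */
static const struct nvme_transport_ops nvme_pci_ops = {
	.name = "pcie",		/* MMIO doorbells, MSI-X, ... */
};
static const struct nvme_transport_ops nvme_virtio_ops = {
	.name = "virtio",	/* virtqueue-backed queues */
};
static const struct nvme_transport_ops nvme_fabrics_ops = {
	.name = "fabrics",	/* NVMe-over-fabrics initiator */
};

That is roughly how the SCSI midlayer sits on top of its low-level
drivers, which I take to be what "similar to SCSI" means here.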
> > >
> > > todo:
> > > - tune performance; should be as good as virtio-blk/virtio-scsi
> > > - support discard/flush/integrity
> > > - need Red Hat's help for the VIRTIO_ID_NVME pci id
> > > - multi-level NVMe stack
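
On the VIRTIO_ID_NVME todo item above: once an ID is reserved, the
guest-side plumbing to claim it is mostly boilerplate. A minimal
sketch, with an assumed placeholder value of 26 (not an assigned ID)
and hypothetical virtnvme_* names:

/* Sketch only -- not from the posted patches.  VIRTIO_ID_NVME's
 * value below is a placeholder until an ID is officially reserved. */
#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_ids.h>

#ifndef VIRTIO_ID_NVME
#define VIRTIO_ID_NVME 26	/* placeholder, not an assigned ID */
#endif

static const struct virtio_device_id id_table[] = {
	{ VIRTIO_ID_NVME, VIRTIO_DEV_ANY_ID },
	{ 0 },
};
MODULE_DEVICE_TABLE(virtio, id_table);

static int virtnvme_probe(struct virtio_device *vdev)
{
	/* find the virtqueues and register the NVMe controller */
	return 0;
}

static void virtnvme_remove(struct virtio_device *vdev)
{
	/* quiesce the device and tear the queues down */
}

static struct virtio_driver virtio_nvme_driver = {
	.driver.name	= "virtio-nvme",
	.driver.owner	= THIS_MODULE,
	.id_table	= id_table,
	.probe		= virtnvme_probe,
	.remove		= virtnvme_remove,
};
module_virtio_driver(virtio_nvme_driver);

MODULE_LICENSE("GPL");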
> >
> > Hi Ming,
>
> Hi Keith,
>
> >
> > I'll be out for travel for the next week, so I won't have much
> > time to do a proper review till the following week.
> >
> > I think it'd be better to get this hierarchy set up to make the
> > most reuse possible than to have this much code duplication
> > between the existing driver and the emulated qemu nvme. For
> > better or worse, I think the generic nvme layer is where things
> > are going. Are you signed up with the fabrics contributors?
>
> No. How do I sign up?
Ming,

Here is the email Keith sent out on this list that says how to sign
up:

http://lists.infradead.org/pipermail/linux-nvme/2015-September/002331.html

Jay
>
>
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 25+ messages
2015-09-10 5:48 [RFC PATCH 0/2] virtio nvme Ming Lin
2015-09-10 5:48 ` [RFC PATCH 1/2] virtio_nvme(kernel): virtual NVMe driver using virtio Ming Lin
2015-09-10  5:48 ` [RFC PATCH 2/2] virtio-nvme(qemu): NVMe device using virtio Ming Lin
2015-09-10 14:02 ` [RFC PATCH 0/2] virtio nvme Keith Busch
2015-09-10 17:02 ` Ming Lin
2015-09-11 4:55 ` Ming Lin
2015-09-11 17:46 ` J Freyensee [this message]
2015-09-10 14:38 ` Stefan Hajnoczi
2015-09-10 17:28 ` Ming Lin
2015-09-11 7:48 ` Stefan Hajnoczi
2015-09-11 17:21 ` Ming Lin
2015-09-11 17:53 ` Stefan Hajnoczi
2015-09-11 18:54 ` Ming Lin
2015-09-17 6:10 ` Nicholas A. Bellinger
2015-09-17 18:18 ` Ming Lin
2015-09-17 21:43 ` Nicholas A. Bellinger
2015-09-17 23:31 ` Ming Lin
2015-09-18 0:55 ` Nicholas A. Bellinger
2015-09-18 18:12 ` Ming Lin
2015-09-18 21:09 ` Nicholas A. Bellinger
2015-09-18 23:05 ` Ming Lin
2015-09-23 22:58 ` Ming Lin
2015-09-27 5:01 ` Nicholas A. Bellinger
2015-09-27 6:49 ` Ming Lin
2015-09-28 5:58 ` Hannes Reinecke