public inbox for linux-nvme@lists.infradead.org
From: Clay Mayers <Clay.Mayers@kioxia.com>
To: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: RE: [PATCH V2 0/2] nvme: Support for fused NVME_IOCTL_SUBMIT_IO
Date: Tue, 26 Jan 2021 21:14:16 +0000	[thread overview]
Message-ID: <70bbf9a1054e4d5e97078bbab201296f@kioxia.com> (raw)
In-Reply-To: <BYAPR04MB49652EA8F4CF06D41C708E2386BC9@BYAPR04MB4965.namprd04.prod.outlook.com>

> From: Chaitanya Kulkarni <Chaitanya.Kulkarni@wdc.com>
> Sent: Tuesday, January 26, 2021 11:01 AM
> 
> On 1/26/21 10:17 AM, Clay Mayers wrote:
> >>
> >> On 1/25/21 12:03, clay.mayers@kioxia.com wrote:
> >>> Local pci device fused support is also necessary for NVMeOF targets
> >>> to support fused operation.
> >> Please explain the use case and the application of the NVMeOF fuse
> >> command feature.
> > NVMeOF devices are used to create disaggregated storage systems where
> > compute and storage are connected over a fabric.  Fused compare/write
> > can be used to arbitrate shared access to NVMeOF devices w/o a central
> > authority.
> >
> > A specific example of how fused compare/write is used is the clustered
> > file system VMFS.  It uses the SCSI version of compare/write to manage
> > meta data on shared SAN systems.  File system meta data is updated
> > using locks stored on the storage media.  Those locks are grabbed
> > using fused compare/write operations as an atomic test & set.  VMFS
> > originally used device reserve, which is a coarser-grained locking
> > mechanism, but it doesn't scale as well as an atomic test & set.
> If I understand correctly, VMFS is an out-of-tree filesystem, is it?
I seem to have misunderstood your request for a use case.  This patch
series is not about NVMeOF; it is about pci support for the fused command.
NVMeOF is the use case for pci fused support.

But how strong a use case is NVMeOF?  I offered clustered file systems
and the public example of VMware's VMFS to illustrate the usefulness.
Here VMware is the target and Linux is the host serving up storage over
NVMeOF.  That requires fused support in both the target/host and pci.  At
a past company I worked for, we used SPDK to get this functionality for
disaggregated storage.  That's right for some solutions but not all.

Our actual goal is direct device access without a framework like SPDK.
We think io_uring is the correct solution.  Jens, just before his winter
PTO, tweeted about adding ioctl support to io_uring.  We hope to extend
that to support fused operations as well.  Exposing it through an ioctl
makes the pci patch useful now.  The one example I have is for nvme-cli,
as requested on GitHub.

https://github.com/linux-nvme/nvme-cli/issues/318

I thought this was better than folding an nvme change in with an io_uring
patch series.  I'm trying to find the balance between a small, isolated
unit of change and something compelling.
> Can you please explain the setup in detail?  What kind of interface is
> the file-system using to issue the command?
> Based on your description it looks like the target is connected to the
> VMware-based system, and the host is a VMware-based host rather than the
> Linux host which is present in this series.
No - the idea is to be standards-based and use NVMeOF for target and host
data exchange.  In one example, the target would be running vSphere.  The
host, as a Linux machine, would expose its attached devices with NVMeOF.
vSphere would expect fused command support from the Linux machine.
> Also, what are the other applications, or is this the only application?
The application is disaggregated storage on NVMeOF, both consuming it
and publishing it.  I don't have any specific set of applications to offer.


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview: 22+ messages
2021-01-05 22:49 [PATCH 0/2] nvme: Support for fused NVME_IOCTL_SUBMIT_IO klayph
2021-01-05 22:49 ` [PATCH 1/2] nvme: support fused nvme requests klayph
2021-01-05 23:52   ` Keith Busch
2021-01-06 14:55     ` Clay Mayers
2021-01-06  0:35   ` James Smart
2021-01-06 15:01     ` Clay Mayers
2021-01-06  7:59   ` Christoph Hellwig
2021-01-25 19:58   ` [PATCH V2 0/2] nvme: Support for fused NVME_IOCTL_SUBMIT_IO clay.mayers
2021-01-26  1:43     ` Chaitanya Kulkarni
2021-01-26 18:17       ` Clay Mayers
2021-01-26 19:00         ` Chaitanya Kulkarni
2021-01-26 21:14           ` Clay Mayers [this message]
2021-02-09  0:53           ` Clay Mayers
2021-02-09  3:12             ` Keith Busch
2021-02-09 15:24               ` Bart Van Assche
2021-02-09 15:38               ` Clay Mayers
2021-02-09  7:54             ` Christoph Hellwig
2021-02-09 15:53               ` Clay Mayers
2021-01-25 19:58   ` [PATCH V2 1/2] nvme: support fused pci nvme requests clay.mayers
2021-01-25 19:58   ` [PATCH V2 2/2] nvme: support fused NVME_IOCTL_SUBMIT_IO clay.mayers
2021-01-05 22:49 ` [PATCH " klayph
2021-01-05 23:04 ` [PATCH 0/2] nvme: Support for " James Smart
