public inbox for linux-nvme@lists.infradead.org
From: Mike Christie <michael.christie@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: chaitanyak@nvidia.com, kbusch@kernel.org, sagi@grimberg.me,
	joao.m.martins@oracle.com, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, kwankhede@nvidia.com,
	alex.williamson@redhat.com, mlevitsk@redhat.com,
	Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver
Date: Thu, 13 Mar 2025 12:17:23 -0500	[thread overview]
Message-ID: <d1104f67-30e3-4397-bee0-5d8e81439fc3@oracle.com> (raw)
In-Reply-To: <20250313064743.GA10198@lst.de>

On 3/13/25 1:47 AM, Christoph Hellwig wrote:
> On Thu, Mar 13, 2025 at 12:18:01AM -0500, Mike Christie wrote:
>>
>> If we agree on a new virtual NVMe driver being ok, why mdev vs vhost?
>> =====================================================================
>> The problem with a vhost nvme is:
>>
>> 2.1. If we do a fully vhost nvmet solution, it will require new guest
>> drivers that present NVMe interfaces to userspace then perform the
>> vhost spec on the backend like how vhost-scsi does.
>>
>> I don't want to implement a windows or even a linux nvme vhost
>> driver. I don't think anyone wants the extra headache.
> 
> As in a nvme-virtio spec?  Note that I suspect you could use the
> vhost infrastructure for something that isn't virtio, but it would
> be a fair amount of work.

Yeah, for this option 2.1 I meant a full nvme-virtio spec.

(forgot to cc Hannes, so cc'ing him now)

And you can use the vhost infrastructure for something that's not virtio.
Hannes did that for vhost megasas:

https://github.com/Datera/rts-megasas/blob/master/rts_megasas-fabric-v6.patch

but the perf is not good, it adds extra userspace code, and I think it's
just a little more messy because it requires the extra
QEMU code, which I know those engineers didn't want.



Thread overview: 29+ messages
2025-03-13  5:18 [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver Mike Christie
2025-03-13  5:18 ` [PATCH RFC 01/11] nvmet: Remove duplicate uuid_copy Mike Christie
2025-03-13  6:36   ` Christoph Hellwig
2025-03-13  8:59   ` Damien Le Moal
2025-03-13 17:20   ` Keith Busch
2025-03-13  5:18 ` [PATCH RFC 02/11] nvmet: Export nvmet_add_async_event and add definitions Mike Christie
2025-03-13  6:36   ` Christoph Hellwig
2025-03-13 17:50     ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 03/11] nvmet: Add nvmet_fabrics_ops flag to indicate SGLs not supported Mike Christie
2025-03-13  6:37   ` Christoph Hellwig
2025-03-13  9:02   ` Damien Le Moal
2025-03-13  9:13     ` Christoph Hellwig
2025-03-13  9:16       ` Damien Le Moal
2025-03-13 17:19         ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 04/11] nvmet: Add function to get nvmet_fabrics_ops from trtype Mike Christie
2025-03-13  9:03   ` Damien Le Moal
2025-03-13  5:18 ` [PATCH RFC 05/11] nvmet: Add function to print trtype Mike Christie
2025-03-13  5:18 ` [PATCH RFC 06/11] nvmet: Allow nvmet_alloc_ctrl users to specify the cntlid Mike Christie
2025-03-13  5:18 ` [PATCH RFC 07/11] nvmet: Add static controller support to configfs Mike Christie
2025-03-13  5:18 ` [PATCH RFC 08/11] nvmet: Add shadow doorbell support Mike Christie
2025-03-13  5:18 ` [PATCH RFC 09/11] nvmet: Add helpers to find and get static controllers Mike Christie
2025-03-13  5:18 ` [PATCH RFC 10/11] nvmet: Add addr fam and trtype for mdev pci driver Mike Christie
2025-03-13  6:42   ` Christoph Hellwig
2025-03-13 17:56     ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 11/11] nvmet: Add nvmet-mdev-pci driver Mike Christie
2025-03-13  5:32 ` [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver Damien Le Moal
2025-03-13  6:47 ` Christoph Hellwig
2025-03-13 17:17   ` Mike Christie [this message]
2025-03-14  8:31     ` Hannes Reinecke
