public inbox for kvm@vger.kernel.org
From: Hannes Reinecke <hare@suse.de>
To: Mike Christie <michael.christie@oracle.com>,
	Christoph Hellwig <hch@lst.de>
Cc: chaitanyak@nvidia.com, kbusch@kernel.org, sagi@grimberg.me,
	joao.m.martins@oracle.com, linux-nvme@lists.infradead.org,
	kvm@vger.kernel.org, kwankhede@nvidia.com,
	alex.williamson@redhat.com, mlevitsk@redhat.com
Subject: Re: [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver
Date: Fri, 14 Mar 2025 09:31:23 +0100	[thread overview]
Message-ID: <d6df4f45-8cef-423d-af7d-1df19cdef010@suse.de> (raw)
In-Reply-To: <d1104f67-30e3-4397-bee0-5d8e81439fc3@oracle.com>

On 3/13/25 18:17, Mike Christie wrote:
> On 3/13/25 1:47 AM, Christoph Hellwig wrote:
>> On Thu, Mar 13, 2025 at 12:18:01AM -0500, Mike Christie wrote:
>>>
>>> If we agree on a new virtual NVMe driver being ok, why mdev vs vhost?
>>> =====================================================================
>>> The problem with a vhost nvme is:
>>>
>>> 2.1. If we do a fully vhost nvmet solution, it will require new guest
>>> drivers that present NVMe interfaces to userspace then perform the
>>> vhost spec on the backend like how vhost-scsi does.
>>>
>>> I don't want to implement a windows or even a linux nvme vhost
>>> driver. I don't think anyone wants the extra headache.
>>
>> As in a nvme-virtio spec?  Note that I suspect you could use the
>> vhost infrastructure for something that isn't virtio, but it would
>> be a fair amount of work.
> 
> Yeah, for this option 2.1 I meant a full nvme-virtio spec.
> 
> (forgot to cc Hannes's so cc'ing him now)
> 
And it really is a bit pointless. An nvme-virtio spec would, at the
end of the day, result in a virtio PCI driver in the guest, which then
speaks nvme over the virtio protocol.

But we already _have_ an nvme-pci driver, so the benefits of that
approach would be ... questionable.
OTOH, virtio-nvme really should be a fabrics driver, as it would be
running nvme over another transport protocol.
Then you could do proper SGL mapping etc.
_But_ you would need another guest driver for that, which brings its
own set of problems. Not to mention that you would have to update the
spec, as you would need another transport identifier.
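For reference, a minimal sketch of the transport identifier (TRTYPE)
values the NVMe-oF spec currently defines in discovery log entries; a
virtio-nvme fabrics transport would need a new code allocated here.
The helper name is illustrative, not from any existing tool:

```python
# TRTYPE codes per the NVMe over Fabrics specification (see also the
# NVMF_TRTYPE_* constants in Linux include/linux/nvme.h). A virtio
# transport has no assigned value, so the spec would need updating.
TRTYPES = {
    0x01: "RDMA",
    0x02: "Fibre Channel",
    0x03: "TCP",
    0xFE: "Intra-host (loopback)",
}

def trtype_name(code: int) -> str:
    """Return a human-readable name for a TRTYPE code."""
    return TRTYPES.get(code, f"unassigned (0x{code:02x})")
```

A hypothetical virtio transport would show up as unassigned until a
code is reserved, e.g. trtype_name(0x10) reports "unassigned (0x10)".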

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

Thread overview: 29+ messages
2025-03-13  5:18 [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver Mike Christie
2025-03-13  5:18 ` [PATCH RFC 01/11] nvmet: Remove duplicate uuid_copy Mike Christie
2025-03-13  6:36   ` Christoph Hellwig
2025-03-13  8:59   ` Damien Le Moal
2025-03-13 17:20   ` Keith Busch
2025-03-13  5:18 ` [PATCH RFC 02/11] nvmet: Export nvmet_add_async_event and add definitions Mike Christie
2025-03-13  6:36   ` Christoph Hellwig
2025-03-13 17:50     ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 03/11] nvmet: Add nvmet_fabrics_ops flag to indicate SGLs not supported Mike Christie
2025-03-13  6:37   ` Christoph Hellwig
2025-03-13  9:02   ` Damien Le Moal
2025-03-13  9:13     ` Christoph Hellwig
2025-03-13  9:16       ` Damien Le Moal
2025-03-13 17:19         ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 04/11] nvmet: Add function to get nvmet_fabrics_ops from trtype Mike Christie
2025-03-13  9:03   ` Damien Le Moal
2025-03-13  5:18 ` [PATCH RFC 05/11] nvmet: Add function to print trtype Mike Christie
2025-03-13  5:18 ` [PATCH RFC 06/11] nvmet: Allow nvmet_alloc_ctrl users to specify the cntlid Mike Christie
2025-03-13  5:18 ` [PATCH RFC 07/11] nvmet: Add static controller support to configfs Mike Christie
2025-03-13  5:18 ` [PATCH RFC 08/11] nvmet: Add shadow doorbell support Mike Christie
2025-03-13  5:18 ` [PATCH RFC 09/11] nvmet: Add helpers to find and get static controllers Mike Christie
2025-03-13  5:18 ` [PATCH RFC 10/11] nvmet: Add addr fam and trtype for mdev pci driver Mike Christie
2025-03-13  6:42   ` Christoph Hellwig
2025-03-13 17:56     ` Mike Christie
2025-03-13  5:18 ` [PATCH RFC 11/11] nvmet: Add nvmet-mdev-pci driver Mike Christie
2025-03-13  5:32 ` [PATCH RFC 00/11] nvmet: Add NVMe target mdev/vfio driver Damien Le Moal
2025-03-13  6:47 ` Christoph Hellwig
2025-03-13 17:17   ` Mike Christie
2025-03-14  8:31     ` Hannes Reinecke [this message]
