Linux-NVME Archive on lore.kernel.org
From: Damien Le Moal <dlemoal@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: linux-nvme@lists.infradead.org, "Keith Busch" <kbusch@kernel.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	"Manivannan Sadhasivam" <manivannan.sadhasivam@linaro.org>,
	"Krzysztof Wilczyński" <kw@linux.com>,
	"Kishon Vijay Abraham I" <kishon@kernel.org>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
	linux-pci@vger.kernel.org,
	"Rick Wertenbroek" <rick.wertenbroek@gmail.com>,
	"Niklas Cassel" <cassel@kernel.org>
Subject: Re: [PATCH v2 4/5] PCI: endpoint: Add NVMe endpoint function driver
Date: Mon, 14 Oct 2024 19:41:18 +0900	[thread overview]
Message-ID: <79e57ebb-eef7-48b1-b337-845d2ef6ff49@kernel.org> (raw)
In-Reply-To: <20241014084424.GC23780@lst.de>

On 10/14/24 17:44, Christoph Hellwig wrote:
> For one please keep nvme target code in drivers/nvme/  PCI endpoint is
> just another transport and should not have device class logic.
> 
> But I also really fail to understand the architecture of the whole
> thing.  It is a target driver and should in no way tie into the NVMe
> host code, the host code runs on the other side of the PCIe wire.

Nope, it is not a target driver. It is a PCI endpoint function driver which
turns the host running it into a PCIe NVMe device. But the NVMe part of the
implementation is minimal. Instead, I use an endpoint-local fabrics host
controller which is itself connected to whatever target you want (loop,
TCP, ...).

Overall, it looks like this:

         +-----------------------------------+
         | PCIe Host Machine (Root-Complex)  |
         | (BIOS, Grub, Linux, Windows, ...) |
         |                                   |
         |       +------------------+        |
         |       | NVMe PCIe driver |        |
         +-------+------------------+--------+
                           |
                 PCIe bus  |
                           |
        +----+---------------------------+-----+
        |    | PCIe NVMe endpoint driver |     |
        |    | (Handles BAR registers,   |     |
        |    | doorbells, IRQs, SQs, CQs |     |
        |    | and DMA transfers)        |     |
        |    +---------------------------+     |
        |                  |                   |
        |    +---------------------------+     |
        |    |     NVMe fabrics host     |     |
        |    +---------------------------+     |
        |                  |                   |
        |    +---------------------------+     |
        |    |     NVMe fabrics target   |     |
        |    |      (loop, TCP, ...)     |     |
        |    +---------------------------+     |
        |                                      |
        | PCIe Endpoint Machine (e.g. Rock 5B) |
        +--------------------------------------+
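
To give an idea of the layering on the endpoint side, the function driver's
private data ties the PCI endpoint function to the endpoint-local fabrics
controller, roughly along these lines (a simplified sketch, not the actual
structure from the patch; the field names are illustrative):

struct pci_epf_nvme {
        struct pci_epf          *epf;      /* PCI endpoint function */
        void __iomem            *reg_bar;  /* BAR exposing the NVMe register set */
        struct nvme_ctrl        *fctrl;    /* endpoint-local fabrics host controller */
        struct delayed_work     poll_sqs;  /* poll doorbells and fetch SQ entries */
};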

The NVMe target can be anything that the PCI endpoint machine can support.
With a small board like the Rock 5B, that means a loop target (file or block
device backend), a TCP target, or NVMe passthrough (using the PCIe Gen2 M.2
E-Key slot).
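
For illustration, the fabrics host on the endpoint could be attached to these
backends using option strings in the usual nvmf format (the NQNs and address
below are placeholders, not values from the patch series):

static const char * const epf_nvme_backend_opts[] = {
        /* Loop target backed by a local file or block device */
        "transport=loop,nqn=nqn.2024-10.io.example:epf-loop",
        /* Target reached over TCP through the board's network interface */
        "transport=tcp,traddr=192.168.1.100,trsvcid=4420,"
        "nqn=nqn.2024-10.io.example:epf-tcp",
};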

Unless I am mistaken, if I used a PCI transport as the base for the endpoint
driver, I would be able to connect only to a PCIe NVMe device as the backend,
no? With the above design, I can use anything supported by nvmf as the
backend and expose it to the root-complex host through the NVMe endpoint PCIe
driver.
To do that, the PCI endpoint driver mostly only needs to create the fabrics
host with nvmf_create_ctrl(), which connects to the target; the endpoint
driver can then execute the NVMe commands with __nvme_submit_sync_cmd().
Only some admin commands need special handling (e.g. create SQ/CQ).
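
As an example, the admin command path could look roughly like this (a
simplified sketch assuming the current mainline __nvme_submit_sync_cmd()
signature; the pci_epf_nvme_* names, including the
pci_epf_nvme_handle_queue_cmd() helper, are illustrative and not the actual
patch code):

static int pci_epf_nvme_exec_admin_cmd(struct nvme_ctrl *fctrl,
                                       struct nvme_command *cmd,
                                       void *buf, unsigned int buflen)
{
        switch (cmd->common.opcode) {
        case nvme_admin_create_sq:
        case nvme_admin_create_cq:
        case nvme_admin_delete_sq:
        case nvme_admin_delete_cq:
                /* The queues live on the endpoint: handle these locally. */
                return pci_epf_nvme_handle_queue_cmd(cmd);
        default:
                /*
                 * Forward everything else to the endpoint-local fabrics host
                 * controller, which relays it to the loop/TCP/passthrough
                 * target.
                 */
                return __nvme_submit_sync_cmd(fctrl->admin_q, cmd, NULL,
                                              buf, buflen, NVME_QID_ANY, 0);
        }
}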

-- 
Damien Le Moal
Western Digital Research


Thread overview: 16+ messages
2024-10-11 12:19 [PATCH v2 0/5] NVMe PCI endpoint function driver Damien Le Moal
2024-10-11 12:19 ` [PATCH v2 1/5] nvmet: rename and move nvmet_get_log_page_len() Damien Le Moal
2024-10-14  6:24   ` Chaitanya Kulkarni
2024-10-11 12:19 ` [PATCH v2 2/5] nvmef: export nvmef_create_ctrl() Damien Le Moal
2024-10-14  6:32   ` Chaitanya Kulkarni
2024-10-14  8:42   ` Christoph Hellwig
2024-10-14  9:10     ` Damien Le Moal
2024-10-14 11:45       ` Christoph Hellwig
2024-10-11 12:19 ` [PATCH v2 3/5] nvmef: Introduce the NVME_OPT_HIDDEN_NS option Damien Le Moal
2024-10-14  8:42   ` Christoph Hellwig
2024-10-14  9:12     ` Damien Le Moal
2024-10-11 12:19 ` [PATCH v2 4/5] PCI: endpoint: Add NVMe endpoint function driver Damien Le Moal
2024-10-14  8:44   ` Christoph Hellwig
2024-10-14 10:41     ` Damien Le Moal [this message]
2024-10-14 11:38       ` Christoph Hellwig
2024-10-11 12:19 ` [PATCH v2 5/5] PCI: endpoint: Document the " Damien Le Moal
