public inbox for linux-nvme@lists.infradead.org
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Wilfred Mallawa <wilfred.opensource@gmail.com>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>,
	Sagi Grimberg <sagi@grimberg.me>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: "dlemoal@kernel.org" <dlemoal@kernel.org>,
	"alistair.francis@wdc.com" <alistair.francis@wdc.com>,
	"cassel@kernel.org" <cassel@kernel.org>,
	Wilfred Mallawa <wilfred.mallawa@wdc.com>
Subject: Re: [PATCH 0/5] pci: nvmet: support completion queue sharing by multiple submission queues
Date: Sun, 11 May 2025 23:05:17 +0000	[thread overview]
Message-ID: <1c0b76bc-87d8-4e56-b17b-beaf37dbadd6@nvidia.com> (raw)
In-Reply-To: <20250424051352.7980-2-wilfred.opensource@gmail.com>

On 4/23/25 22:13, Wilfred Mallawa wrote:
> From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>
> Hi all,
>
> For the NVMe PCI transport, the NVMe specification allows different
> submission queues (SQs) to share a completion queue (CQ); however,
> this is not supported by the current NVMe target implementation.
> Until now, the nvmet target has enforced a 1:1 relationship between
> SQs and CQs, which is not specification compliant for the NVMe
> PCI transport.
>
> This patch series adds support for CQ sharing between multiple SQs in the
> NVMe target driver, in line with the NVMe PCI transport specification.
> This series implements reference counting for completion queues to ensure
> proper lifecycle management when shared across multiple submission queues.
> This ensures that a CQ is retained until all SQs referencing it have
> been deleted, thereby avoiding premature CQ deletion.
>
> Sample callchain with CQ refcounting for the PCI endpoint target
> (pci-epf)
> =================================================================
>
> i.   nvmet_execute_create_cq -> nvmet_pci_epf_create_cq -> nvmet_cq_create
>       -> nvmet_cq_init			[cq refcount = 1]
>
> ii.  nvmet_execute_create_sq -> nvmet_pci_epf_create_sq -> nvmet_sq_create
>       -> nvmet_sq_init -> nvmet_cq_get	[cq refcount = 2]
>
> iii. nvmet_execute_delete_sq -> nvmet_pci_epf_delete_sq ->
>       nvmet_sq_destroy -> nvmet_cq_put	[cq refcount = 1]
>
> iv.  nvmet_execute_delete_cq -> nvmet_pci_epf_delete_cq -> nvmet_cq_put
> 					[cq refcount = 0]
>
> For NVMe over fabrics, CQ sharing is not supported per specification,
> however, the fabrics drivers are updated to integrate the new
> API changes. No functional change is intended here.
>
> Testing
> =======
>
> Core functionality changes were tested with a Rockchip-based Rock5B PCIe
> endpoint setup using the pci-epf driver. The host kernel was modified to
> support queue sharing. In the test setup, this resulted in IO SQs 1 & 2
> sharing IO CQ 1 and IO SQs 3 & 4 sharing IO CQ 2.
>
> Testing methodology includes:
>
> For PCI:
>
> 1. Boot up host
> 2. Assert that the endpoint device is detected as an NVMe drive
>     (IO CQs/SQs are created)
> 3. Run fio workloads
> 4. Unbind NVMe driver (IO SQs then CQs are deleted)
> 5. Rebind NVMe driver (IO SQs then CQs are created)
> 6. Run fio workloads
>
> For NVMe over Fabrics, using the NVMe loop driver:
>
> Note that there is no queue sharing supported for fabrics.
>
> 1. Connect command (IO queues are created)
> 2. Run FIOs
> 3. Disconnect command (IO queues are deleted)
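The loop-driver steps above map onto nvme-cli roughly as follows. This is a
sketch, not the exact commands used: the subsystem NQN and the resulting
namespace device name are placeholders, and it assumes a loop subsystem has
already been configured via nvmet configfs and that you are running as root.

```shell
# 1. Connect (controller and IO queues are created).
#    The NQN below is a placeholder for the configured loop subsystem.
nvme connect -t loop -n nqn.2025-04.org.example:testsubsys

# 2. Run fio against the new namespace (the device name may differ).
fio --name=verify --filename=/dev/nvme1n1 --rw=randrw --bs=4k \
    --size=64M --direct=1 --time_based --runtime=30

# 3. Disconnect (IO queues are deleted).
nvme disconnect -n nqn.2025-04.org.example:testsubsys
```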
>
> Thanks!

FWIW,

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck




Thread overview: 20+ messages
2025-04-24  5:13 [PATCH 0/5] pci: nvmet: support completion queue sharing by multiple submission queues Wilfred Mallawa
2025-04-24  5:13 ` [PATCH 1/5] nvmet: add a helper function for cqid checking Wilfred Mallawa
2025-05-07  7:20   ` Damien Le Moal
2025-04-24  5:13 ` [PATCH 2/5] nvmet: cq: prepare for completion queue sharing Wilfred Mallawa
2025-05-07  7:25   ` Damien Le Moal
2025-05-07  7:34     ` Wilfred Mallawa
2025-05-07  7:38       ` Damien Le Moal
2025-05-07  7:47         ` Wilfred Mallawa
2025-04-24  5:13 ` [PATCH 3/5] nvmet: fabrics: add CQ init and destroy Wilfred Mallawa
2025-05-07  7:27   ` Damien Le Moal
2025-04-24  5:13 ` [PATCH 4/5] nvmet: support completion queue sharing Wilfred Mallawa
2025-05-07  7:29   ` Damien Le Moal
2025-04-24  5:13 ` [PATCH 5/5] nvmet: Simplify nvmet_req_init() interface Wilfred Mallawa
2025-05-07  7:29   ` Damien Le Moal
2025-04-24 13:16 ` [PATCH 0/5] pci: nvmet: support completion queue sharing by multiple submission queues Niklas Cassel
2025-04-25  1:22   ` Damien Le Moal
2025-04-29 12:56   ` Christoph Hellwig
2025-05-07  7:32 ` Damien Le Moal
2025-05-09  5:05 ` Christoph Hellwig
2025-05-11 23:05 ` Chaitanya Kulkarni [this message]
