public inbox for linux-pci@vger.kernel.org
From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: linux-nvme@lists.infradead.org, "Christoph Hellwig" <hch@lst.de>,
	"Keith Busch" <kbusch@kernel.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	linux-pci@vger.kernel.org, "Krzysztof Wilczyński" <kw@linux.com>,
	"Kishon Vijay Abraham I" <kishon@kernel.org>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
	"Rick Wertenbroek" <rick.wertenbroek@gmail.com>,
	"Niklas Cassel" <cassel@kernel.org>
Subject: Re: [PATCH v7 00/18] NVMe PCI endpoint target driver
Date: Fri, 20 Dec 2024 17:56:56 +0530	[thread overview]
Message-ID: <20241220122656.mb7bs47pfw2xbadr@thinkpad> (raw)
In-Reply-To: <20241220095108.601914-1-dlemoal@kernel.org>

On Fri, Dec 20, 2024 at 06:50:50PM +0900, Damien Le Moal wrote:
> This patch series implements an NVMe target driver for the PCI transport
> using the PCI endpoint framework.
> 
> The first 5 patches of this series move and cleanup some nvme code that
> will be reused in following patches.
> 
> Patch 6 introduces the PCI transport type to allow setting up ports for
> the new PCI target controller driver. Patches 7 to 10 improve the
> target core code to allow creating the PCI controller and processing
> its NVMe commands without relying on fabrics commands such as the
> connect command to create the admin and I/O queues.
> 
> Patch 11 relaxes the SGL check in nvmet_req_init() to allow for PCI
> admin commands (which must use PRPs).
> 
> Patches 12 to 16 improve the set/get feature support of the target code
> to get closer to achieving NVMe specification compliance. These patches
> do not, however, implement support for some mandatory features.
> 
> Patch 17 is the main patch which introduces the NVMe PCI endpoint target
> driver. This patch's commit message provides an overview of the driver
> design and operation.
> 
> Finally, patch 18 documents the NVMe PCI endpoint target driver and
> provides a user guide explaining how to set up an NVMe PCI endpoint
> device.
> 
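For a feel of what the guide's target setup looks like, the nvmet side follows the standard nvmet configfs flow, sketched here with the "pci" transport type from patch 6 and a null_blk backing device. The subsystem NQN and device names below are arbitrary placeholders, and the PCI endpoint-function binding steps on the controller side are omitted:

```shell
# Sketch only: create an nvmet subsystem backed by a null_blk namespace
# and expose it through a PCI transport port. Run as root on the
# endpoint device; needs configfs mounted at /sys/kernel/config.
modprobe null_blk nr_devices=1
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2024-12.io.test:pci0
echo 1 > subsystems/nqn.2024-12.io.test:pci0/attr_allow_any_host
mkdir subsystems/nqn.2024-12.io.test:pci0/namespaces/1
echo /dev/nullb0 > subsystems/nqn.2024-12.io.test:pci0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-12.io.test:pci0/namespaces/1/enable
mkdir ports/1
echo pci > ports/1/addr_trtype
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-12.io.test:pci0 \
      ports/1/subsystems/nqn.2024-12.io.test:pci0
```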
> The patches are based on Linus' 6.13-rc3 tree.
> 
> This driver has been extensively tested using a Radxa Rock5B board
> (RK3588 Arm SoC). Some tests have also been done using a Pine64
> RockPro64 board. However, this board does not support DMA channels for
> the PCI endpoint controller, leading to very poor performance.
> 
> Using the Radxa Rock5B board and setting up a controller with 4 queue
> pairs and a null_blk block device loop target, performance was measured
> using fio as follows:
> 
>  +----------------------------------+------------------------+
>  | Workload                         | IOPS (BW)              |
>  +----------------------------------+------------------------+
>  | Rand read, 4KB, QD=1, 1 job      | 14.3k IOPS             |
>  | Rand read, 4KB, QD=32, 1 job     | 80.8k IOPS             |
>  | Rand read, 4KB, QD=32, 4 jobs    | 131k IOPS              |
>  | Rand read, 128KB, QD=32, 1 job   | 16.7k IOPS (2.18 GB/s) |
>  | Rand read, 128KB, QD=32, 4 jobs  | 17.4k IOPS (2.27 GB/s) |
>  | Rand read, 512KB, QD=32, 1 job   | 5380 IOPS (2.82 GB/s)  |
>  | Rand read, 512KB, QD=32, 4 jobs  | 5206 IOPS (2.27 GB/s)  |
>  | Rand write, 128KB, QD=32, 1 job  | 9617 IOPS (1.26 GB/s)  |
>  | Rand write, 128KB, QD=32, 4 jobs | 8405 IOPS (1.10 GB/s)  |
>  +----------------------------------+------------------------+
> 
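As a quick sanity check on the table, IOPS multiplied by the block size reproduces the reported bandwidth. For the 512 KB, QD=32, single-job random read row:

```shell
# 5380 IOPS * 512 KiB per I/O, expressed in bytes per second
bytes_per_sec=$(( 5380 * 512 * 1024 ))
echo "${bytes_per_sec} bytes/s"   # 2820669440 bytes/s, i.e. ~2.82 GB/s
```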
> These results use the NVMe endpoint driver's default MDTS of 512 KB.
> 
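For reference, the single-job 4 KB random read numbers above correspond to the kind of fio invocation sketched below; the device path is an assumption and depends on how the host enumerates the endpoint's namespace:

```shell
# Sketch: 4 KB random reads at QD=32, one job, direct I/O.
# /dev/nvme0n1 is a placeholder for the host-enumerated namespace.
fio --name=randread --filename=/dev/nvme0n1 --direct=1 \
    --ioengine=libaio --rw=randread --bs=4k \
    --iodepth=32 --numjobs=1 --runtime=30 --time_based \
    --group_reporting
```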
> This driver is not intended for production use but rather to be a
> playground for learning NVMe and exploring/testing new NVMe features
> while providing reasonably good performance.
> 

Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

- Mani

-- 
மணிவண்ணன் சதாசிவம்

Thread overview: 25+ messages
2024-12-20  9:50 [PATCH v7 00/18] NVMe PCI endpoint target driver Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 01/18] nvme: Move opcode string helper functions declarations Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 02/18] nvmet: Add vendor_id and subsys_vendor_id subsystem attributes Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 03/18] nvmet: Export nvmet_update_cc() and nvmet_cc_xxx() helpers Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 04/18] nvmet: Introduce nvmet_get_cmd_effects_admin() Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 05/18] nvmet: Add drvdata field to struct nvmet_ctrl Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 06/18] nvme: Add PCI transport type Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 07/18] nvmet: Improve nvmet_alloc_ctrl() interface and implementation Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 08/18] nvmet: Introduce nvmet_req_transfer_len() Damien Le Moal
2024-12-20  9:50 ` [PATCH v7 09/18] nvmet: Introduce nvmet_sq_create() and nvmet_cq_create() Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 10/18] nvmet: Add support for I/O queue management admin commands Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 11/18] nvmet: Do not require SGL for PCI target controller commands Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 12/18] nvmet: Introduce get/set_feature controller operations Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 13/18] nvmet: Implement host identifier set feature support Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 14/18] nvmet: Implement interrupt coalescing " Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 15/18] nvmet: Implement interrupt config " Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 16/18] nvmet: Implement arbitration " Damien Le Moal
2024-12-20  9:51 ` [PATCH v7 17/18] nvmet: New NVMe PCI endpoint function target driver Damien Le Moal
2024-12-20 12:18   ` Manivannan Sadhasivam
2024-12-20 16:19   ` Krzysztof Wilczyński
2024-12-23  9:31     ` Niklas Cassel
2024-12-24 23:34   ` kernel test robot
2024-12-20  9:51 ` [PATCH v7 18/18] Documentation: Document the NVMe PCI endpoint " Damien Le Moal
2024-12-20 12:22 ` [PATCH v7 00/18] " Manivannan Sadhasivam
2024-12-20 12:26 ` Manivannan Sadhasivam [this message]
