From: Keith Busch <kbusch@kernel.org>
To: Caleb Sander Mateos <csander@purestorage.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
Sagi Grimberg <sagi@grimberg.me>,
Kanchan Joshi <joshi.k@samsung.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 2/2] nvme/pci: make PRP list DMA pools per-NUMA-node
Date: Tue, 22 Apr 2025 10:34:05 -0600 [thread overview]
Message-ID: <aAfE_eofwrOIQ3Sw@kbusch-mbp.dhcp.thefacebook.com> (raw)
In-Reply-To: <20250422161959.1958205-3-csander@purestorage.com>
On Tue, Apr 22, 2025 at 10:19:59AM -0600, Caleb Sander Mateos wrote:
> NVMe commands with more than 4 KB of data allocate PRP list pages from
> the per-nvme_device dma_pool prp_page_pool or prp_small_pool. Each call
> to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool spinlock.
> These device-global spinlocks are a significant source of contention
> when many CPUs are submitting to the same NVMe devices. On a workload
> issuing 32 KB reads from 16 CPUs (8 hypertwin pairs) across 2 NUMA nodes
> to 23 NVMe devices, we observed 2.4% of CPU time spent in
> _raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.
>
> Ideally, the dma_pools would be per-hctx to minimize
> contention. But that could impose considerable resource costs in a
> system with many NVMe devices and CPUs.
>
> As a compromise, allocate per-NUMA-node PRP list DMA pools. Map each
> nvme_queue to the set of DMA pools corresponding to its device and its
> hctx's NUMA node. This reduces the _raw_spin_lock_irqsave overhead by
> about half, to 1.2%. Preventing the sharing of PRP list pages across
> NUMA nodes also makes them cheaper to initialize.
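
(For anyone skimming the thread, the shape described above comes down to
roughly the following; the struct and function names here are illustrative
only, not necessarily what the patch uses:)

	/* Sketch only: names are hypothetical, not taken from the patch. */
	struct nvme_prp_dma_pools {
		struct dma_pool *large;	/* one-page PRP list allocations */
		struct dma_pool *small;	/* 256-byte allocations for short lists */
	};

	/*
	 * One pool pair per NUMA node, so dma_pool_alloc()/dma_pool_free()
	 * only contend with CPUs on the same node as the submitting hctx.
	 */
	static struct nvme_prp_dma_pools *
	nvme_node_prp_pools(struct nvme_dev *dev, int node)
	{
		return &dev->prp_pools[node];	/* array of nr_node_ids entries */
	}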
I was hoping for an even greater improvement, but still good. Maybe we
can do slightly better if we also pass the numa_node to
dma_pool_create() and allocate the 'struct dma_pool' on the same node
that's going to use it.
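
Something along these lines is what I have in mind; dma_pool_create() takes
no node argument today, so the _node() variant below is only a hypothetical
signature:

	/* Hypothetical node-aware constructor -- not an existing API. */
	struct dma_pool *dma_pool_create_node(const char *name, struct device *dev,
					      size_t size, size_t align,
					      size_t boundary, int node);

	/* Per-node setup could then keep each pool's metadata node-local: */
	int node;

	for_each_online_node(node)
		dev->prp_pools[node].large =
			dma_pool_create_node("prp list page", dev->dev,
					     NVME_CTRL_PAGE_SIZE,
					     NVME_CTRL_PAGE_SIZE, 0, node);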
Looks good,
Reviewed-by: Keith Busch <kbusch@kernel.org>