From: Caleb Sander Mateos <csander@purestorage.com>
To: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Cc: Kanchan Joshi <joshi.k@samsung.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
Caleb Sander Mateos <csander@purestorage.com>
Subject: [PATCH v3 0/2] nvme/pci: PRP list DMA pool partitioning
Date: Mon, 21 Apr 2025 10:55:23 -0600
Message-ID: <20250421165525.1618434-1-csander@purestorage.com>

NVMe commands with more than 4 KB of data allocate PRP list pages from
the per-nvme_device dma_pool prp_page_pool or prp_small_pool. Each call
to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool spinlock.
These device-global spinlocks are a significant source of contention
when many CPUs are submitting to the same NVMe devices. On a workload
issuing 32 KB reads from 16 CPUs (8 hypertwin, i.e. SMT sibling, pairs) across 2 NUMA nodes
to 23 NVMe devices, we observed 2.4% of CPU time spent in
_raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.
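For context, here is a simplified sketch of the current allocation path
(not the exact driver code; the call site and GFP flags are approximations):

    /* One pair of dma_pools per nvme_dev, shared by every queue: */
    struct nvme_dev {
            ...
            struct dma_pool *prp_page_pool;
            struct dma_pool *prp_small_pool;
            ...
    };

    /*
     * Every PRP list page comes from these shared pools, and
     * dma_pool_alloc()/dma_pool_free() take the pool's internal
     * spinlock, so all CPUs submitting to the device serialize here.
     */
    prp_list = dma_pool_alloc(dev->prp_page_pool, GFP_ATOMIC, &prp_dma);
    ...
    dma_pool_free(dev->prp_page_pool, prp_list, prp_dma);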
Ideally, the dma_pools would be per-hctx to minimize
contention. But that could impose considerable resource costs in a
system with many NVMe devices and CPUs.

As a compromise, allocate per-NUMA-node PRP list DMA pools. Map each
nvme_queue to the set of DMA pools corresponding to its device and its
hctx's NUMA node. This reduces the _raw_spin_lock_irqsave overhead by
about half, to 1.2%. Preventing the sharing of PRP list pages across
NUMA nodes also makes them cheaper to initialize.
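A rough sketch of the resulting layout (the nvme_prp_dma_pools and
prp_pools names come from the patches; the member and helper names
below are illustrative):

    /* One pair of PRP list pools per (device, NUMA node): */
    struct nvme_prp_dma_pools {
            struct dma_pool *large;
            struct dma_pool *small;
    };

    struct nvme_dev {
            ...
            /* indexed by NUMA node, nr_node_ids entries long */
            struct nvme_prp_dma_pools *prp_pools;
            ...
    };

    /*
     * At hctx init, point the nvme_queue at the pools for the hctx's
     * NUMA node (illustrative helper name). Queues on the same node
     * still share a pool lock, but contention no longer crosses nodes.
     */
    nvmeq->prp_pools = nvme_setup_prp_pools(dev, hctx->numa_node);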
Caleb Sander Mateos (2):
  nvme/pci: factor out nvme_init_hctx() helper
  nvme/pci: make PRP list DMA pools per-NUMA-node

 drivers/nvme/host/pci.c | 171 +++++++++++++++++++++++-----------------
 1 file changed, 98 insertions(+), 73 deletions(-)

v3: simplify nvme_release_prp_pools() (Keith)

v2:
- Initialize admin nvme_queue's nvme_prp_dma_pools (Kanchan)
- Shrink nvme_dev's prp_pools array from MAX_NUMNODES to nr_node_ids (Kanchan)
--
2.45.2