* [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath
@ 2026-04-16 21:26 Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Chaitanya Kulkarni @ 2026-04-16 21:26 UTC (permalink / raw)
To: song, yukuai, linan122, kbusch, axboe, hch, sagi
Cc: linux-raid, linux-nvme, kmodukuri, Chaitanya Kulkarni
Hi,
This patch series extends PCI peer-to-peer DMA (P2PDMA) support to enable
direct data transfers between PCIe devices through RAID and NVMe multipath
block layers.
The current Linux kernel P2PDMA infrastructure supports direct peer-to-peer
transfers, but this support is not propagated through stacking drivers such
as MD RAID and NVMe multipath. This series adds a block-core prep patch plus
patches for MD RAID 0/1/10 and NVMe multipath to propagate P2PDMA support
through the storage stack.
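The pattern all three patches rely on is small; condensed from the diffs
posted in this thread (a sketch of the relevant lines, not the literal hunks
and not compilable on its own):

    /* Stacking driver (md raid0/1/10, nvme multipath head): advertise the
     * feature unconditionally while building the queue limits. */
    lim.features |= BLK_FEAT_PCI_P2PDMA;

    /* Block core, blk_stack_limits(): drop the feature again as soon as a
     * stacked member lacks it, mirroring BLK_FEAT_NOWAIT / BLK_FEAT_POLL. */
    if (!(b->features & BLK_FEAT_PCI_P2PDMA))
            t->features &= ~BLK_FEAT_PCI_P2PDMA;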
All four positive test scenarios demonstrate that P2PDMA capabilities are
correctly propagated through both the MD RAID layer (patch 2/3) and the NVMe
multipath layer (patch 3/3). Direct peer-to-peer transfers complete
successfully with
full data integrity verification, confirming that:
1. RAID devices properly inherit P2PDMA capability from member devices
2. NVMe multipath devices correctly expose P2PDMA support
3. P2P memory buffers can be used for transfers involving both types
4. Data integrity is maintained across all transfer combinations
I've added the patch-specific tests and the blktests log at the end.
Repo:-
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux.git
Branch HEAD:-
commit 88a57e15861997dd6fa98154ad087f7831bbead1 (origin/for-next)
Merge: 81a0a2e4e535 36446de0c30c
Author: Jens Axboe <axboe@kernel.dk>
Date: Fri Apr 10 07:02:42 2026 -0600
Merge branch 'for-7.1/block' into for-next
* for-7.1/block:
ublk: fix tautological comparison warning in ublk_ctrl_reg_buf
-ck
Changes from V2:-
1. Set BLK_FEAT_PCI_P2PDMA unconditionally for md and nvme multipath.
(Christoph)
2. Add a prep patch to disable BLK_FEAT_PCI_P2PDMA in blk_stack_limits().
(Christoph)
Changes from V1:-
- Update patch 1 to explicitly support MD RAID 0/1/10.
- Fix signoff chain order for patch 2.
- Clear BLK_FEAT_PCI_P2PDMA in nvme_mpath_add_disk() when a newly
added path does not support it, to handle multipath across different
transports.
- Add nvme multipath test log for mixed transport TCP and PCIe.
Chaitanya Kulkarni (1):
block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for
non-supporting devices
Kiran Kumar Modukuri (2):
md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device
nvme-multipath: enable PCI P2PDMA for multipath devices
block/blk-settings.c | 2 ++
drivers/md/raid0.c | 1 +
drivers/md/raid1.c | 1 +
drivers/md/raid10.c | 1 +
drivers/nvme/host/multipath.c | 2 +-
5 files changed, 6 insertions(+), 1 deletion(-)
========================================================
* MD RAID Personalities and NVMe testing :-
========================================================
================================================================================
P2PDMA Comprehensive Test Report
================================================================================
Date: Thu Apr 16, 2026 19:00:32 UTC
Patch Series Under Test:
1/3 blk: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits
2/3 nvme-multipath: expose BLK_FEAT_PCI_P2PDMA on head disk
3/3 md: raid0/1/10: expose BLK_FEAT_PCI_P2PDMA on array disk
================================================================================
1. System Information
================================================================================
Kernel:
Linux vm70 7.0.0-rc2-p2pdma-v2+ #19 SMP PREEMPT Thu Apr 16 18:01:55 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux
Kernel patches (git log above baseline):
5bf19d9 md: raid0/1/10: expose BLK_FEAT_PCI_P2PDMA on array disk
02dc9a6 nvme-multipath: expose BLK_FEAT_PCI_P2PDMA on head disk
ba22b62 blk: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits
NVMe Modules:
nvme_fabrics 24576 0
MD Modules loaded:
(none yet -- will be loaded on demand)
--------------------------------------------------------------------------------
1.1 NVMe Device Inventory (nvme list -v)
--------------------------------------------------------------------------------
Subsystem Subsystem-NQN Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys1 nqn.2019-08.org.qemu:nqn.2019-08.org.qemu:shared-ns nvme0, nvme1
nvme-subsys2 nqn.2019-08.org.qemu:nvme3 nvme2
nvme-subsys3 nqn.2019-08.org.qemu:nvme4 nvme3
nvme-subsys4 nqn.2019-08.org.qemu:nvme5 nvme4
nvme-subsys5 nqn.2019-08.org.qemu:nvme6 nvme5
Device SN MN FR TxPort Address Slot Subsystem Namespaces
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme0 shared2 QEMU NVMe Ctrl 1.0 pcie 0000:0a:00.0 nvme-subsys1 nvme1n1
nvme1 shared2 QEMU NVMe Ctrl 1.0 pcie 0000:0b:00.0 nvme-subsys1 nvme1n1
nvme2 nvme3 QEMU NVMe Ctrl 1.0 pcie 0000:0c:00.0 nvme-subsys2 nvme2n1
nvme3 nvme4 QEMU NVMe Ctrl 1.0 pcie 0000:0d:00.0 nvme-subsys3 nvme3n1
nvme4 nvme5 QEMU NVMe Ctrl 1.0 pcie 0000:0e:00.0 nvme-subsys4 nvme4n1
nvme5 nvme6 QEMU NVMe Ctrl 1.0 pcie 0000:10:00.0 nvme-subsys5 nvme5n1
Device Generic NSID Usage Format Controllers
------------ ------------ ---------- -------------------------- ---------------- ----------------
/dev/nvme1n1 /dev/ng1n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme0, nvme1
/dev/nvme2n1 /dev/ng2n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme2
/dev/nvme3n1 /dev/ng3n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme3
/dev/nvme4n1 /dev/ng4n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme4
/dev/nvme5n1 /dev/ng5n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme5
--------------------------------------------------------------------------------
1.2 Shared Namespace Configuration (nvme-subsys1)
--------------------------------------------------------------------------------
nvme-subsys1 - NQN=nqn.2019-08.org.qemu:nqn.2019-08.org.qemu:shared-ns
hostnqn=nqn.2014-08.org.nvmexpress:uuid:148a9e69-3f22-420f-afea-bfd5d4b77f36
iopolicy=numa
\
+- nvme0 pcie 0000:0a:00.0 live optimized
+- nvme1 pcie 0000:0b:00.0 live optimized
--------------------------------------------------------------------------------
1.3 PCI P2PDMA / CMB Configuration
--------------------------------------------------------------------------------
CMB-enabled NVMe controllers:
0000:0a:00.0 (nvme0) - CMB 64 MB
0000:0b:00.0 (nvme1) - CMB 64 MB
0000:0c:00.0 (nvme2) - CMB 64 MB
dmesg (P2P):
[ 7.283954] nvme 0000:0a:00.0: added peer-to-peer DMA memory 0x1808000000-0x180bffffff
[ 7.288711] nvme 0000:0c:00.0: added peer-to-peer DMA memory 0x1800000000-0x1803ffffff
[ 7.293117] nvme 0000:0b:00.0: added peer-to-peer DMA memory 0x1804000000-0x1807ffffff
--------------------------------------------------------------------------------
1.4 Standalone NVMe Devices (for RAID tests)
--------------------------------------------------------------------------------
/dev/nvme2n1 ( 10G)
/dev/nvme3n1 ( 10G)
/dev/nvme4n1 ( 10G)
/dev/nvme5n1 ( 10G)
P2PMEM for multipath tests: /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate
P2PMEM for RAID tests: /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate
================================================================================
2. Test 1: NVMe Multipath P2PDMA (Patch 2/3)
================================================================================
Objective:
Verify BLK_FEAT_PCI_P2PDMA is set on the multipath head when all paths
support P2PDMA (PCIe-only), and cleared when a non-P2P path (TCP) is added.
Clearing is handled by blk_stack_limits() in the block core (Patch 1/3).
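Mechanically (condensed from the nvme-multipath and block-core patches in
this thread; a sketch, not the exact hunks), the head gendisk claims the
feature up front and loses it when a non-P2P path's limits are stacked:

    /* nvme_mpath_alloc_disk(): head disk advertises P2PDMA unconditionally. */
    lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT |
                    BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES | BLK_FEAT_PCI_P2PDMA;

    /* nvme_update_ns_info_block() stacks each path's limits onto the head via
     * queue_limits_stack_bdev() -> blk_stack_limits(), which clears
     * BLK_FEAT_PCI_P2PDMA when a path (here, the TCP one) lacks it. */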
Test tool: /home/lab/p2pmem-test/p2pmem-test
P2PMEM buffer: /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate
Target device: /dev/nvme1n1 (multipath head, nvme-subsys1)
--------------------------------------------------------------------------------
2.1 Test 1a: P2PDMA with PCIe-only Multipath Paths (Expect PASS)
--------------------------------------------------------------------------------
Paths before test:
\
+- nvme0 pcie 0000:0a:00.0 live optimized
+- nvme1 pcie 0000:0b:00.0 live optimized
All paths are PCIe with CMB -> P2PDMA supported.
Patch 2/3 sets BLK_FEAT_PCI_P2PDMA unconditionally in nvme_mpath_alloc_disk().
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/nvme1n1 /dev/nvme1n1 /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
Running p2pmem-test: reading /dev/nvme1n1 (10.74GB): writing /dev/nvme1n1 (10.74GB): p2pmem buffer /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f30b32dc000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366032
MATCH on data check, 0x23039cdb = 0x23039cdb.
Transfer:
4.10kB in 860.0 us 4.76MB/s
Exit code: 0
Result: PASS
P2PDMA transfer succeeded with data verification.
--------------------------------------------------------------------------------
2.2 Test 1b: Add NVMe-oF TCP Path, Then Test P2PDMA (Expect FAIL)
--------------------------------------------------------------------------------
Setting up NVMe-oF TCP target (nvmet) on loopback...
Subsystem NQN: nqn.2019-08.org.qemu:nqn.2019-08.org.qemu:shared-ns
Namespace 1: backed by /dev/nvme1n1
Device UUID: 00000000-0000-0000-0000-000000000000 (matches QEMU quirk)
Transport: TCP, 127.0.0.1:4420
CNTLID min: 10
Paths after TCP connection:
\
+- nvme0 pcie 0000:0a:00.0 live optimized
+- nvme1 pcie 0000:0b:00.0 live optimized
+- nvme6 tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 live optimized
NVMe device inventory (nvme list -v) confirming TCP path in shared namespace:
Subsystem Subsystem-NQN Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys1 nqn.2019-08.org.qemu:nqn.2019-08.org.qemu:shared-ns nvme0, nvme1, nvme6
nvme-subsys2 nqn.2019-08.org.qemu:nvme3 nvme2
nvme-subsys3 nqn.2019-08.org.qemu:nvme4 nvme3
nvme-subsys4 nqn.2019-08.org.qemu:nvme5 nvme4
nvme-subsys5 nqn.2019-08.org.qemu:nvme6 nvme5
Device SN MN FR TxPort Address Slot Subsystem Namespaces
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme0 shared2 QEMU NVMe Ctrl 7.0.0-rc pcie 0000:0a:00.0 nvme-subsys1 nvme1n1
nvme1 shared2 QEMU NVMe Ctrl 7.0.0-rc pcie 0000:0b:00.0 nvme-subsys1 nvme1n1
nvme2 nvme3 QEMU NVMe Ctrl 1.0 pcie 0000:0c:00.0 nvme-subsys2 nvme2n1
nvme3 nvme4 QEMU NVMe Ctrl 1.0 pcie 0000:0d:00.0 nvme-subsys3 nvme3n1
nvme4 nvme5 QEMU NVMe Ctrl 1.0 pcie 0000:0e:00.0 nvme-subsys4 nvme4n1
nvme5 nvme6 QEMU NVMe Ctrl 1.0 pcie 0000:10:00.0 nvme-subsys5 nvme5n1
nvme6 shared2 QEMU NVMe Ctrl 7.0.0-rc tcp traddr=127.0.0.1,trsvcid=4420,src_addr=127.0.0.1 nvme-subsys1 nvme1n1
Device Generic NSID Usage Format Controllers
------------ ------------ ---------- -------------------------- ---------------- ----------------
/dev/nvme1n1 /dev/ng1n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme0, nvme1, nvme6
/dev/nvme2n1 /dev/ng2n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme2
/dev/nvme3n1 /dev/ng3n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme3
/dev/nvme4n1 /dev/ng4n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme4
/dev/nvme5n1 /dev/ng5n1 0x1 10.74 GB / 10.74 GB 512 B + 0 B nvme5
TCP path lacks PCI P2PDMA. Patch 1/3 causes blk_stack_limits() to clear
BLK_FEAT_PCI_P2PDMA when the TCP path's limits are stacked onto the head.
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/nvme1n1 /dev/nvme1n1 /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
pread: Remote I/O error
Running p2pmem-test: reading /dev/nvme1n1 (10.74GB): writing /dev/nvme1n1 (10.74GB): p2pmem buffer /sys/bus/pci/devices/0000:0a:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f9851d3d000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366033
Exit code: 1
Result: PASS (expected failure)
P2PDMA transfer correctly rejected -- BLK_FEAT_PCI_P2PDMA was
cleared because a non-P2P-capable component is present.
Cleaning up TCP path...
TCP path removed.
================================================================================
3. Test 2: MD RAID0 P2PDMA (Patch 3/3)
================================================================================
Objective:
Verify BLK_FEAT_PCI_P2PDMA propagates through RAID0. Patch 3/3 sets
the flag in raid0_set_limits(); blk_stack_limits() preserves it when
all member devices support P2PDMA.
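Condensed from the md patch in this series (a sketch, not the exact hunk),
raid0_set_limits() advertises the feature before stacking the member limits,
so blk_stack_limits() can veto it per member:

    /* raid0_set_limits(): */
    lim.features |= BLK_FEAT_ATOMIC_WRITES;
    lim.features |= BLK_FEAT_PCI_P2PDMA;
    /* Stacks each member rdev's limits; any member without P2PDMA support
     * clears the bit again via blk_stack_limits(). */
    err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);

raid1_set_limits() and raid10_set_queue_limits() get the same one-line
addition; the parity (raid456) personalities are deliberately left out.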
Members: /dev/nvme2n1 /dev/nvme3n1
--------------------------------------------------------------------------------
3.1 Test 2a: P2PDMA on RAID0 Array (Expect PASS)
--------------------------------------------------------------------------------
mdadm create output:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/p2p-test started.
RAID0 device: /dev/md127
Array detail (mdadm --detail):
/dev/md127:
Version : 1.2
Creation Time : Thu Apr 16 19:00:35 2026
Raid Level : raid0
Array Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Apr 16 19:00:35 2026
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : original
Chunk Size : 512K
Consistency Policy : none
Name : vm70:p2p-test (local to host vm70)
UUID : 84f669e0:6b24971e:680cdf64:ca4087e0
Events : 0
Number Major Minor RaidDevice State
0 259 4 0 active sync /dev/nvme2n1
1 259 5 1 active sync /dev/nvme3n1
/proc/mdstat:
Personalities : [raid0] [raid1] [raid4] [raid5] [raid6] [raid10]
md127 : active raid0 nvme3n1[1] nvme2n1[0]
20953088 blocks super 1.2 512k chunks
unused devices: <none>
P2PMEM buffer: /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/md127 /dev/md127 /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
Running p2pmem-test: reading /dev/md127 (21.46GB): writing /dev/md127 (21.46GB): p2pmem buffer /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f87e5cd4000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366036
MATCH on data check, 0x3db0cfbb = 0x3db0cfbb.
Transfer:
4.10kB in 747.0 us 5.48MB/s
Exit code: 0
Result: PASS
P2PDMA transfer succeeded with data verification.
RAID0 array stopped.
================================================================================
4. Test 3: MD RAID1 P2PDMA (Patch 3/3)
================================================================================
Objective:
Verify BLK_FEAT_PCI_P2PDMA propagates through RAID1. Patch 3/3 sets
the flag in raid1_set_limits(); blk_stack_limits() preserves it when
all member devices support P2PDMA.
Members: /dev/nvme2n1 /dev/nvme3n1
--------------------------------------------------------------------------------
4.1 Test 3a: P2PDMA on RAID1 Array (Expect PASS)
--------------------------------------------------------------------------------
mdadm create output:
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/p2p-test started.
RAID1 device: /dev/md127
Array detail (mdadm --detail):
/dev/md127:
Version : 1.2
Creation Time : Thu Apr 16 19:00:38 2026
Raid Level : raid1
Array Size : 10476544 (9.99 GiB 10.73 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Apr 16 19:00:38 2026
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Resync Status : 2% complete
Name : vm70:p2p-test (local to host vm70)
UUID : 7917cc10:d47660cb:46454ecd:ccc8f946
Events : 0
Number Major Minor RaidDevice State
0 259 4 0 active sync /dev/nvme2n1
1 259 5 1 active sync /dev/nvme3n1
/proc/mdstat:
Personalities : [raid0] [raid1] [raid4] [raid5] [raid6] [raid10]
md127 : active raid1 nvme3n1[1] nvme2n1[0]
10476544 blocks super 1.2 [2/2] [UU]
[>....................] resync = 2.0% (215936/10476544) finish=0.7min speed=215936K/sec
unused devices: <none>
P2PMEM buffer: /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/md127 /dev/md127 /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
Running p2pmem-test: reading /dev/md127 (10.73GB): writing /dev/md127 (10.73GB): p2pmem buffer /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f0947c68000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366040
MATCH on data check, 0x1b7cad6e = 0x1b7cad6e.
Transfer:
4.10kB in 2.8 ms 1.44MB/s
Exit code: 0
Result: PASS
P2PDMA transfer succeeded with data verification.
RAID1 array stopped.
================================================================================
5. Test 4: MD RAID10 P2PDMA (Patch 3/3)
================================================================================
Objective:
Verify BLK_FEAT_PCI_P2PDMA propagates through RAID10. Patch 3/3 sets
the flag in raid10_set_limits(); blk_stack_limits() preserves it when
all member devices support P2PDMA.
Members: /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
--------------------------------------------------------------------------------
5.1 Test 4a: P2PDMA on RAID10 Array (Expect PASS)
--------------------------------------------------------------------------------
mdadm create output:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/p2p-test started.
RAID10 device: /dev/md127
Array detail (mdadm --detail):
/dev/md127:
Version : 1.2
Creation Time : Thu Apr 16 19:00:43 2026
Raid Level : raid10
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Apr 16 19:00:43 2026
State : clean, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Resync Status : 1% complete
Name : vm70:p2p-test (local to host vm70)
UUID : 0a6b9ded:3f38328a:61352e9c:6f5f7853
Events : 0
Number Major Minor RaidDevice State
0 259 4 0 active sync set-A /dev/nvme2n1
1 259 5 1 active sync set-B /dev/nvme3n1
2 259 3 2 active sync set-A /dev/nvme4n1
3 259 6 3 active sync set-B /dev/nvme5n1
/proc/mdstat:
Personalities : [raid0] [raid1] [raid4] [raid5] [raid6] [raid10]
md127 : active raid10 nvme5n1[3] nvme4n1[2] nvme3n1[1] nvme2n1[0]
20953088 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 1.8% (387200/20953088) finish=0.8min speed=387200K/sec
unused devices: <none>
P2PMEM buffer: /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/md127 /dev/md127 /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
Running p2pmem-test: reading /dev/md127 (21.46GB): writing /dev/md127 (21.46GB): p2pmem buffer /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f2061b9a000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366044
MATCH on data check, 0x744f0811 = 0x744f0811.
Transfer:
4.10kB in 484.9 us 8.45MB/s
Exit code: 0
Result: PASS
P2PDMA transfer succeeded with data verification.
RAID10 array stopped.
================================================================================
6. Test 5: MD RAID4 P2PDMA -- Negative Test
================================================================================
Objective:
Verify that P2PDMA does NOT work on RAID4. Parity RAID levels (4/5/6)
require CPU access to data pages for XOR/parity computation, which is
incompatible with P2P mappings. Patch 3/3 intentionally does NOT add
BLK_FEAT_PCI_P2PDMA to raid456 personalities.
Members: /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
--------------------------------------------------------------------------------
6.1 Test 5a: P2PDMA on RAID4 Array (Expect FAIL)
--------------------------------------------------------------------------------
mdadm create output:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/p2p-test started.
RAID4 device: /dev/md127
Array detail (mdadm --detail):
/dev/md127:
Version : 1.2
Creation Time : Thu Apr 16 19:00:47 2026
Raid Level : raid4
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Thu Apr 16 19:00:47 2026
State : clean, resyncing
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : resync
Resync Status : 1% complete
Name : vm70:p2p-test (local to host vm70)
UUID : b9fb0c5f:d6471fd5:4704f465:88bd6425
Events : 0
Number Major Minor RaidDevice State
0 259 4 0 active sync /dev/nvme2n1
1 259 5 1 active sync /dev/nvme3n1
2 259 3 2 active sync /dev/nvme4n1
/proc/mdstat:
Personalities : [raid0] [raid1] [raid4] [raid5] [raid6] [raid10]
md127 : active raid4 nvme4n1[2] nvme3n1[1] nvme2n1[0]
20953088 blocks super 1.2 level 4, 512k chunk, algorithm 0 [3/3] [UUU]
[>....................] resync = 1.6% (172572/10476544) finish=0.9min speed=172572K/sec
unused devices: <none>
P2PMEM buffer: /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate
Command:
/home/lab/p2pmem-test/p2pmem-test /dev/md127 /dev/md127 /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate -c 1 -s 4k --check
Output:
pread: Remote I/O error
Running p2pmem-test: reading /dev/md127 (21.46GB): writing /dev/md127 (21.46GB): p2pmem buffer /sys/bus/pci/devices/0000:0c:00.0/p2pmem/allocate.
chunk size = 4096 : number of chunks = 1: total = 4.096kB : thread(s) = 1 : overlap = OFF.
skip-read = OFF : skip-write = OFF : duration = INF sec.
buffer = 0x7f4f647b8000 (p2pmem): mmap = 4.096kB
PAGE_SIZE = 4096B
checking data with seed = 1776366048
Exit code: 1
Result: PASS (expected failure)
P2PDMA transfer correctly rejected -- the raid4 personality never sets
BLK_FEAT_PCI_P2PDMA, so the array does not advertise P2PDMA support.
RAID4 array stopped.
================================================================================
7. Test Summary
================================================================================
Test Description Expected Actual Result
------ ----------------------------------------------- --------- ------- ------
1a NVMe multipath P2PDMA (PCIe-only paths) PASS PASS OK
1b NVMe multipath P2PDMA (PCIe + TCP paths) FAIL FAIL OK
2a MD RAID0 P2PDMA PASS PASS OK
3a MD RAID1 P2PDMA PASS PASS OK
4a MD RAID10 P2PDMA PASS PASS OK
5a MD RAID4 P2PDMA (negative test) FAIL FAIL OK
Totals: 6 tests, 6 passed, 0 failed
All tests PASSED.
================================================================================
========================================================
* BLKTEST Testing :- nvme, block, and md category
========================================================
blktests (master) # ./test-nvme.sh
++ for t in loop tcp
++ echo '################NVMET_TRTYPES=loop############'
################NVMET_TRTYPES=loop############
++ NVME_IMG_SIZE=1G
++ NVME_NUM_ITER=1
++ NVMET_TRTYPES=loop
++ ./check nvme
nvme/002 (tr=loop) (create many subsystems and test discovery) [passed]
runtime 36.834s ... 35.055s
nvme/003 (tr=loop) (test if we're sending keep-alives to a discovery controller) [passed]
runtime 10.240s ... 10.233s
nvme/004 (tr=loop) (test nvme and nvmet UUID NS descriptors) [passed]
runtime 0.684s ... 0.656s
nvme/005 (tr=loop) (reset local loopback target) [passed]
runtime 0.981s ... 0.970s
nvme/006 (tr=loop bd=device) (create an NVMeOF target) [passed]
runtime 0.091s ... 0.097s
nvme/006 (tr=loop bd=file) (create an NVMeOF target) [passed]
runtime 0.086s ... 0.083s
nvme/008 (tr=loop bd=device) (create an NVMeOF host) [passed]
runtime 0.645s ... 0.631s
nvme/008 (tr=loop bd=file) (create an NVMeOF host) [passed]
runtime 0.655s ... 0.632s
nvme/010 (tr=loop bd=device) (run data verification fio job) [passed]
runtime 9.238s ... 9.716s
nvme/010 (tr=loop bd=file) (run data verification fio job) [passed]
runtime 47.943s ... 39.701s
nvme/012 (tr=loop bd=device) (run mkfs and data verification fio) [passed]
runtime 52.603s ... 47.570s
nvme/012 (tr=loop bd=file) (run mkfs and data verification fio) [passed]
runtime 41.337s ... 38.148s
nvme/014 (tr=loop bd=device) (flush a command from host) [passed]
runtime 9.359s ... 9.667s
nvme/014 (tr=loop bd=file) (flush a command from host) [passed]
runtime 9.076s ... 8.428s
nvme/016 (tr=loop) (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
runtime 0.141s ... 0.123s
nvme/017 (tr=loop) (create/delete many file-ns and test discovery) [passed]
runtime 0.146s ... 0.140s
nvme/018 (tr=loop) (unit test NVMe-oF out of range access on a file backend) [passed]
runtime 0.632s ... 0.623s
nvme/019 (tr=loop bd=device) (test NVMe DSM Discard command) [passed]
runtime 0.643s ... 0.626s
nvme/019 (tr=loop bd=file) (test NVMe DSM Discard command) [passed]
runtime 0.636s ... 0.623s
nvme/021 (tr=loop bd=device) (test NVMe list command) [passed]
runtime 0.662s ... 0.635s
nvme/021 (tr=loop bd=file) (test NVMe list command) [passed]
runtime 0.645s ... 0.635s
nvme/022 (tr=loop bd=device) (test NVMe reset command) [passed]
runtime 0.991s ... 0.993s
nvme/022 (tr=loop bd=file) (test NVMe reset command) [passed]
runtime 1.009s ... 0.997s
nvme/023 (tr=loop bd=device) (test NVMe smart-log command) [passed]
runtime 0.647s ... 0.620s
nvme/023 (tr=loop bd=file) (test NVMe smart-log command) [passed]
runtime 0.653s ... 0.618s
nvme/025 (tr=loop bd=device) (test NVMe effects-log) [passed]
runtime 0.649s ... 0.626s
nvme/025 (tr=loop bd=file) (test NVMe effects-log) [passed]
runtime 0.665s ... 0.627s
nvme/026 (tr=loop bd=device) (test NVMe ns-descs) [passed]
runtime 0.649s ... 0.639s
nvme/026 (tr=loop bd=file) (test NVMe ns-descs) [passed]
runtime 0.641s ... 0.620s
nvme/027 (tr=loop bd=device) (test NVMe ns-rescan command) [passed]
runtime 0.675s ... 0.641s
nvme/027 (tr=loop bd=file) (test NVMe ns-rescan command) [passed]
runtime 0.673s ... 0.637s
nvme/028 (tr=loop bd=device) (test NVMe list-subsys) [passed]
runtime 0.648s ... 0.626s
nvme/028 (tr=loop bd=file) (test NVMe list-subsys) [passed]
runtime 0.640s ... 0.614s
nvme/029 (tr=loop) (test userspace IO via nvme-cli read/write interface) [passed]
runtime 1.002s ... 0.925s
nvme/030 (tr=loop) (ensure the discovery generation counter is updated appropriately) [passed]
runtime 0.438s ... 0.420s
nvme/031 (tr=loop) (test deletion of NVMeOF controllers immediately after setup) [passed]
runtime 5.997s ... 5.825s
nvme/038 (tr=loop) (test deletion of NVMeOF subsystem without enabling) [passed]
runtime 0.034s ... 0.034s
nvme/040 (tr=loop) (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
runtime 7.032s ... 7.002s
nvme/041 (tr=loop) (Create authenticated connections) [passed]
runtime 0.701s ... 0.681s
nvme/042 (tr=loop) (Test dhchap key types for authenticated connections) [passed]
runtime 3.906s ... 3.772s
nvme/043 (tr=loop) (Test hash and DH group variations for authenticated connections) [passed]
runtime 4.987s ... 4.815s
nvme/044 (tr=loop) (Test bi-directional authentication) [passed]
runtime 1.482s ... 1.259s
nvme/045 (tr=loop) (Test re-authentication) [passed]
runtime 1.643s ... 1.566s
nvme/047 (tr=loop) (test different queue types for fabric transports) [not run]
nvme_trtype=loop is not supported in this test
nvme/048 (tr=loop) (Test queue count changes on reconnect) [not run]
nvme_trtype=loop is not supported in this test
nvme/051 (tr=loop) (test nvmet concurrent ns enable/disable) [passed]
runtime 1.390s ... 1.330s
nvme/052 (tr=loop) (Test file-ns creation/deletion under one subsystem) [passed]
runtime 6.363s ... 6.276s
nvme/054 (tr=loop) (Test the NVMe reservation feature) [passed]
runtime 0.775s ... 0.742s
nvme/055 (tr=loop) (Test nvme write to a loop target ns just after ns is disabled) [not run]
kernel option DEBUG_ATOMIC_SLEEP has not been enabled
nvme/056 (tr=loop) (enable zero copy offload and run rw traffic) [not run]
Remote target required but NVME_TARGET_CONTROL is not set
nvme_trtype=loop is not supported in this test
kernel option ULP_DDP has not been enabled
module nvme_tcp does not have parameter ddp_offload
KERNELSRC not set
Kernel sources do not have tools/net/ynl/cli.py
NVME_IFACE not set
nvme/057 (tr=loop) (test nvme fabrics controller ANA failover during I/O) [passed]
runtime 27.176s ... 27.143s
nvme/058 (tr=loop) (test rapid namespace remapping) [passed]
runtime 4.517s ... 4.414s
nvme/060 (tr=loop) (test nvme fabrics target reset) [not run]
nvme_trtype=loop is not supported in this test
nvme/061 (tr=loop) (test fabric target teardown and setup during I/O) [not run]
nvme_trtype=loop is not supported in this test
nvme/062 (tr=loop) (Create TLS-encrypted connections) [not run]
nvme_trtype=loop is not supported in this test
nvme/063 (tr=loop) (Create authenticated TCP connections with secure concatenation) [not run]
nvme_trtype=loop is not supported in this test
nvme/065 (tr=loop) (test unmap write zeroes sysfs interface with nvmet devices) [passed]
runtime 2.356s ... 2.340s
++ for t in loop tcp
++ echo '################NVMET_TRTYPES=tcp############'
################NVMET_TRTYPES=tcp############
++ NVME_IMG_SIZE=1G
++ NVME_NUM_ITER=1
++ NVMET_TRTYPES=tcp
++ ./check nvme
nvme/002 (tr=tcp) (create many subsystems and test discovery) [not run]
nvme_trtype=tcp is not supported in this test
nvme/003 (tr=tcp) (test if we're sending keep-alives to a discovery controller) [passed]
runtime 10.245s ... 10.241s
nvme/004 (tr=tcp) (test nvme and nvmet UUID NS descriptors) [passed]
runtime 0.380s ... 0.362s
nvme/005 (tr=tcp) (reset local loopback target) [passed]
runtime 0.450s ... 0.431s
nvme/006 (tr=tcp bd=device) (create an NVMeOF target) [passed]
runtime 0.104s ... 0.098s
nvme/006 (tr=tcp bd=file) (create an NVMeOF target) [passed]
runtime 0.096s ... 0.093s
nvme/008 (tr=tcp bd=device) (create an NVMeOF host) [passed]
runtime 0.385s ... 0.361s
nvme/008 (tr=tcp bd=file) (create an NVMeOF host) [passed]
runtime 0.389s ... 0.353s
nvme/010 (tr=tcp bd=device) (run data verification fio job) [passed]
runtime 75.462s ... 76.437s
nvme/010 (tr=tcp bd=file) (run data verification fio job) [passed]
runtime 126.440s ... 117.933s
nvme/012 (tr=tcp bd=device) (run mkfs and data verification fio) [passed]
runtime 88.197s ... 81.428s
nvme/012 (tr=tcp bd=file) (run mkfs and data verification fio) [passed]
runtime 120.398s ... 117.412s
nvme/014 (tr=tcp bd=device) (flush a command from host) [passed]
runtime 9.931s ... 10.182s
nvme/014 (tr=tcp bd=file) (flush a command from host) [passed]
runtime 9.745s ... 9.867s
nvme/016 (tr=tcp) (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
nvme_trtype=tcp is not supported in this test
nvme/017 (tr=tcp) (create/delete many file-ns and test discovery) [not run]
nvme_trtype=tcp is not supported in this test
nvme/018 (tr=tcp) (unit test NVMe-oF out of range access on a file backend) [passed]
runtime 0.380s ... 0.351s
nvme/019 (tr=tcp bd=device) (test NVMe DSM Discard command) [passed]
runtime 0.385s ... 0.344s
nvme/019 (tr=tcp bd=file) (test NVMe DSM Discard command) [passed]
runtime 0.382s ... 0.339s
nvme/021 (tr=tcp bd=device) (test NVMe list command) [passed]
runtime 0.371s ... 0.368s
nvme/021 (tr=tcp bd=file) (test NVMe list command) [passed]
runtime 0.396s ... 0.359s
nvme/022 (tr=tcp bd=device) (test NVMe reset command) [passed]
runtime 0.482s ... 0.451s
nvme/022 (tr=tcp bd=file) (test NVMe reset command) [passed]
runtime 0.453s ... 0.443s
nvme/023 (tr=tcp bd=device) (test NVMe smart-log command) [passed]
runtime 0.379s ... 0.349s
nvme/023 (tr=tcp bd=file) (test NVMe smart-log command) [passed]
runtime 0.361s ... 0.339s
nvme/025 (tr=tcp bd=device) (test NVMe effects-log) [passed]
runtime 0.391s ... 0.371s
nvme/025 (tr=tcp bd=file) (test NVMe effects-log) [passed]
runtime 0.380s ... 0.380s
nvme/026 (tr=tcp bd=device) (test NVMe ns-descs) [passed]
runtime 0.373s ... 0.352s
nvme/026 (tr=tcp bd=file) (test NVMe ns-descs) [passed]
runtime 0.356s ... 0.351s
nvme/027 (tr=tcp bd=device) (test NVMe ns-rescan command) [passed]
runtime 0.417s ... 0.380s
nvme/027 (tr=tcp bd=file) (test NVMe ns-rescan command) [passed]
runtime 0.396s ... 0.383s
nvme/028 (tr=tcp bd=device) (test NVMe list-subsys) [passed]
runtime 0.353s ... 0.348s
nvme/028 (tr=tcp bd=file) (test NVMe list-subsys) [passed]
runtime 0.351s ... 0.341s
nvme/029 (tr=tcp) (test userspace IO via nvme-cli read/write interface) [passed]
runtime 0.762s ... 0.722s
nvme/030 (tr=tcp) (ensure the discovery generation counter is updated appropriately) [passed]
runtime 0.410s ... 0.377s
nvme/031 (tr=tcp) (test deletion of NVMeOF controllers immediately after setup) [passed]
runtime 3.055s ... 3.008s
nvme/038 (tr=tcp) (test deletion of NVMeOF subsystem without enabling) [passed]
runtime 0.044s ... 0.042s
nvme/040 (tr=tcp) (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
runtime 6.492s ... 6.454s
nvme/041 (tr=tcp) (Create authenticated connections) [passed]
runtime 0.418s ... 0.388s
nvme/042 (tr=tcp) (Test dhchap key types for authenticated connections) [passed]
runtime 1.827s ... 1.780s
nvme/043 (tr=tcp) (Test hash and DH group variations for authenticated connections) [passed]
runtime 2.504s ... 2.343s
nvme/044 (tr=tcp) (Test bi-directional authentication) [passed]
runtime 0.776s ... 0.709s
nvme/045 (tr=tcp) (Test re-authentication) [passed]
runtime 1.338s ... 1.311s
nvme/047 (tr=tcp) (test different queue types for fabric transports) [passed]
runtime 1.887s ... 1.792s
nvme/048 (tr=tcp) (Test queue count changes on reconnect) [passed]
runtime 5.531s ... 4.498s
nvme/051 (tr=tcp) (test nvmet concurrent ns enable/disable) [passed]
runtime 1.344s ... 1.375s
nvme/052 (tr=tcp) (Test file-ns creation/deletion under one subsystem) [not run]
nvme_trtype=tcp is not supported in this test
nvme/054 (tr=tcp) (Test the NVMe reservation feature) [passed]
runtime 0.506s ... 0.459s
nvme/055 (tr=tcp) (Test nvme write to a loop target ns just after ns is disabled) [not run]
nvme_trtype=tcp is not supported in this test
kernel option DEBUG_ATOMIC_SLEEP has not been enabled
nvme/056 (tr=tcp) (enable zero copy offload and run rw traffic) [not run]
Remote target required but NVME_TARGET_CONTROL is not set
kernel option ULP_DDP has not been enabled
module nvme_tcp does not have parameter ddp_offload
KERNELSRC not set
Kernel sources do not have tools/net/ynl/cli.py
NVME_IFACE not set
nvme/057 (tr=tcp) (test nvme fabrics controller ANA failover during I/O) [passed]
runtime 25.972s ... 25.924s
nvme/058 (tr=tcp) (test rapid namespace remapping) [passed]
runtime 3.612s ... 2.850s
nvme/060 (tr=tcp) (test nvme fabrics target reset) [passed]
runtime 19.437s ... 19.330s
nvme/061 (tr=tcp) (test fabric target teardown and setup during I/O) [passed]
runtime 8.645s ... 8.580s
nvme/062 (tr=tcp) (Create TLS-encrypted connections) [failed]
runtime 5.242s ... 5.176s
--- tests/nvme/062.out 2026-01-28 12:04:48.888356244 -0800
+++ /mnt/sda/blktests/results/nodev_tr_tcp/nvme/062.out.bad 2026-04-16 10:33:14.946941197 -0700
@@ -2,9 +2,13 @@
Test unencrypted connection w/ tls not required
disconnected 1 controller(s)
Test encrypted connection w/ tls not required
-disconnected 1 controller(s)
+FAIL: nvme connect return error code
+WARNING: connection is not encrypted
+disconnected 0 controller(s)
...
(Run 'diff -u tests/nvme/062.out /mnt/sda/blktests/results/nodev_tr_tcp/nvme/062.out.bad' to see the entire diff)
nvme/063 (tr=tcp) (Create authenticated TCP connections with secure concatenation) [passed]
runtime 2.026s ... 1.919s
nvme/065 (tr=tcp) (test unmap write zeroes sysfs interface with nvmet devices) [passed]
runtime 1.767s ... 1.726s
++ ./manage-rdma-nvme.sh --cleanup
====== RDMA NVMe Cleanup ======
[INFO] Disconnecting NVMe RDMA controllers...
[INFO] No NVMe RDMA controllers to disconnect
[INFO] Removing RDMA links...
[INFO] No RDMA links to remove
[INFO] Unloading NVMe RDMA modules...
[INFO] Unloading module: nvmet
[ERROR] Failed to unload module nvmet after 10 attempts
[WARN] Failed to unload 1 NVMe module(s)
[WARN] Some NVMe modules could not be unloaded
[INFO] Unloading soft-RDMA modules...
[INFO] Soft-RDMA modules unloaded successfully
[INFO] Verifying cleanup...
[INFO] Verification passed
[INFO] RDMA cleanup completed successfully
====== RDMA Network Configuration Status ======
Loaded Modules:
nvmet 258048
RDMA Links:
None
Network Interfaces (RDMA-capable):
None
blktests Configuration:
Not configured (run --setup first)
NVMe RDMA Controllers:
None
=================================================
++ ./manage-rdma-nvme.sh --setup
====== RDMA NVMe Setup ======
RDMA Type: siw
Interface: auto-detect
[INFO] Checking prerequisites...
[INFO] Prerequisites check passed
[INFO] Loading RDMA module: siw
[INFO] Module siw loaded successfully
[INFO] Creating RDMA links...
[INFO] Creating RDMA link: ens5_siw
[INFO] Created RDMA link: ens5_siw -> ens5
++ ./manage-rdma-nvme.sh --status
====== RDMA Configuration Status ======
====== RDMA Network Configuration Status ======
Loaded Modules:
siw 217088
nvmet 258048
RDMA Links:
link ens5_siw/1 state ACTIVE physical_state LINK_UP netdev ens5
Network Interfaces (RDMA-capable):
Interface: ens5
IPv4: 192.168.0.46
IPv6: fe80::5054:98ff:fe76:5440%ens5
blktests Configuration:
Transport Address: 192.168.0.46:4420
Transport Type: rdma
Command: NVMET_TRTYPES=rdma ./check nvme/
NVMe RDMA Controllers:
None
=================================================
++ echo '################NVMET_TRTYPES=rdma############'
################NVMET_TRTYPES=rdma############
++ NVME_IMG_SIZE=1G
++ NVME_NUM_ITER=1
++ nvme_trtype=rdma
++ ./check nvme
nvme/002 (tr=rdma) (create many subsystems and test discovery) [not run]
nvme_trtype=rdma is not supported in this test
nvme/003 (tr=rdma) (test if we're sending keep-alives to a discovery controller) [passed]
runtime 10.343s ... 10.315s
nvme/004 (tr=rdma) (test nvme and nvmet UUID NS descriptors) [passed]
runtime 0.691s ... 0.695s
nvme/005 (tr=rdma) (reset local loopback target) [passed]
runtime 1.021s ... 0.974s
nvme/006 (tr=rdma bd=device) (create an NVMeOF target) [passed]
runtime 0.147s ... 0.136s
nvme/006 (tr=rdma bd=file) (create an NVMeOF target) [passed]
runtime 0.148s ... 0.127s
nvme/008 (tr=rdma bd=device) (create an NVMeOF host) [passed]
runtime 0.706s ... 0.684s
nvme/008 (tr=rdma bd=file) (create an NVMeOF host) [passed]
runtime 0.704s ... 0.672s
nvme/010 (tr=rdma bd=device) (run data verification fio job) [passed]
runtime 35.986s ... 35.255s
nvme/010 (tr=rdma bd=file) (run data verification fio job) [passed]
runtime 61.777s ... 68.570s
nvme/012 (tr=rdma bd=device) (run mkfs and data verification fio) [passed]
runtime 42.996s ... 47.482s
nvme/012 (tr=rdma bd=file) (run mkfs and data verification fio) [passed]
runtime 65.456s ... 61.407s
nvme/014 (tr=rdma bd=device) (flush a command from host) [passed]
runtime 9.546s ... 9.855s
nvme/014 (tr=rdma bd=file) (flush a command from host) [passed]
runtime 9.791s ... 9.919s
nvme/016 (tr=rdma) (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
nvme_trtype=rdma is not supported in this test
nvme/017 (tr=rdma) (create/delete many file-ns and test discovery) [not run]
nvme_trtype=rdma is not supported in this test
nvme/018 (tr=rdma) (unit test NVMe-oF out of range access on a file backend) [passed]
runtime 0.710s ... 0.654s
nvme/019 (tr=rdma bd=device) (test NVMe DSM Discard command) [passed]
runtime 0.699s ... 0.664s
nvme/019 (tr=rdma bd=file) (test NVMe DSM Discard command) [passed]
runtime 0.686s ... 0.649s
nvme/021 (tr=rdma bd=device) (test NVMe list command) [passed]
runtime 0.725s ... 0.676s
nvme/021 (tr=rdma bd=file) (test NVMe list command) [passed]
runtime 0.703s ... 0.673s
nvme/022 (tr=rdma bd=device) (test NVMe reset command) [passed]
runtime 1.048s ... 1.021s
nvme/022 (tr=rdma bd=file) (test NVMe reset command) [passed]
runtime 1.032s ... 1.015s
nvme/023 (tr=rdma bd=device) (test NVMe smart-log command) [passed]
runtime 0.699s ... 0.633s
nvme/023 (tr=rdma bd=file) (test NVMe smart-log command) [passed]
runtime 0.687s ... 0.656s
nvme/025 (tr=rdma bd=device) (test NVMe effects-log) [passed]
runtime 0.701s ... 0.706s
nvme/025 (tr=rdma bd=file) (test NVMe effects-log) [passed]
runtime 0.696s ... 0.688s
nvme/026 (tr=rdma bd=device) (test NVMe ns-descs) [passed]
runtime 0.703s ... 0.686s
nvme/026 (tr=rdma bd=file) (test NVMe ns-descs) [passed]
runtime 0.684s ... 0.669s
nvme/027 (tr=rdma bd=device) (test NVMe ns-rescan command) [passed]
runtime 0.727s ... 0.703s
nvme/027 (tr=rdma bd=file) (test NVMe ns-rescan command) [passed]
runtime 0.715s ... 0.707s
nvme/028 (tr=rdma bd=device) (test NVMe list-subsys) [passed]
runtime 0.688s ... 0.680s
nvme/028 (tr=rdma bd=file) (test NVMe list-subsys) [passed]
runtime 0.672s ... 0.670s
nvme/029 (tr=rdma) (test userspace IO via nvme-cli read/write interface) [passed]
runtime 1.067s ... 1.077s
nvme/030 (tr=rdma) (ensure the discovery generation counter is updated appropriately) [passed]
runtime 0.541s ... 0.511s
nvme/031 (tr=rdma) (test deletion of NVMeOF controllers immediately after setup) [passed]
runtime 5.793s ... 5.869s
nvme/038 (tr=rdma) (test deletion of NVMeOF subsystem without enabling) [passed]
runtime 0.086s ... 0.084s
nvme/040 (tr=rdma) (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
runtime 7.059s ... 7.024s
nvme/041 (tr=rdma) (Create authenticated connections) [passed]
runtime 0.742s ... 0.718s
nvme/042 (tr=rdma) (Test dhchap key types for authenticated connections) [passed]
runtime 3.781s ... 3.759s
nvme/043 (tr=rdma) (Test hash and DH group variations for authenticated connections) [passed]
runtime 4.797s ... 4.487s
nvme/044 (tr=rdma) (Test bi-directional authentication) [passed]
runtime 1.346s ... 1.279s
nvme/045 (tr=rdma) (Test re-authentication) [passed]
runtime 1.806s ... 1.837s
nvme/047 (tr=rdma) (test different queue types for fabric transports) [passed]
runtime 2.688s ... 2.642s
nvme/048 (tr=rdma) (Test queue count changes on reconnect) [passed]
runtime 6.846s ... 5.810s
nvme/051 (tr=rdma) (test nvmet concurrent ns enable/disable) [passed]
runtime 1.399s ... 1.479s
nvme/052 (tr=rdma) (Test file-ns creation/deletion under one subsystem) [not run]
nvme_trtype=rdma is not supported in this test
nvme/054 (tr=rdma) (Test the NVMe reservation feature) [passed]
runtime 0.831s ... 0.799s
nvme/055 (tr=rdma) (Test nvme write to a loop target ns just after ns is disabled) [not run]
nvme_trtype=rdma is not supported in this test
kernel option DEBUG_ATOMIC_SLEEP has not been enabled
nvme/056 (tr=rdma) (enable zero copy offload and run rw traffic) [not run]
Remote target required but NVME_TARGET_CONTROL is not set
nvme_trtype=rdma is not supported in this test
kernel option ULP_DDP has not been enabled
module nvme_tcp does not have parameter ddp_offload
KERNELSRC not set
Kernel sources do not have tools/net/ynl/cli.py
NVME_IFACE not set
nvme/057 (tr=rdma) (test nvme fabrics controller ANA failover during I/O) [passed]
runtime 27.023s ... 26.984s
nvme/058 (tr=rdma) (test rapid namespace remapping) [passed]
runtime 4.478s ... 4.444s
nvme/060 (tr=rdma) (test nvme fabrics target reset) [passed]
runtime 20.821s ... 20.696s
nvme/061 (tr=rdma) (test fabric target teardown and setup during I/O) [passed]
runtime 15.509s ... 15.375s
nvme/062 (tr=rdma) (Create TLS-encrypted connections) [not run]
nvme_trtype=rdma is not supported in this test
nvme/063 (tr=rdma) (Create authenticated TCP connections with secure concatenation) [not run]
nvme_trtype=rdma is not supported in this test
nvme/065 (tr=rdma) (test unmap write zeroes sysfs interface with nvmet devices) [passed]
runtime 2.331s ... 2.369s
++ ./manage-rdma-nvme.sh --cleanup
====== RDMA NVMe Cleanup ======
[INFO] Disconnecting NVMe RDMA controllers...
[INFO] No NVMe RDMA controllers to disconnect
[INFO] Removing RDMA links...
[INFO] No RDMA links to remove
[INFO] Unloading NVMe RDMA modules...
[INFO] Unloading module: nvme_rdma
[INFO] Module nvme_rdma unloaded
[INFO] Unloading module: nvmet_rdma
[INFO] Module nvmet_rdma unloaded
[INFO] Unloading module: nvmet
[ERROR] Failed to unload module nvmet after 10 attempts
[WARN] Failed to unload 1 NVMe module(s)
[WARN] Some NVMe modules could not be unloaded
[INFO] Unloading soft-RDMA modules...
[INFO] Unloading module: siw
[INFO] Module siw unloaded
[INFO] Soft-RDMA modules unloaded successfully
[INFO] Verifying cleanup...
[INFO] Verification passed
[INFO] RDMA cleanup completed successfully
====== RDMA Network Configuration Status ======
Loaded Modules:
nvmet 258048
RDMA Links:
None
Network Interfaces (RDMA-capable):
None
blktests Configuration:
Not configured (run --setup first)
NVMe RDMA Controllers:
None
=================================================
blktests (master) #
--
2.39.5
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
@ 2026-04-16 21:26 ` Chaitanya Kulkarni
2026-04-17 7:52 ` Christoph Hellwig
2026-04-17 10:11 ` Nitesh Shetty
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2 siblings, 2 replies; 9+ messages in thread
From: Chaitanya Kulkarni @ 2026-04-16 21:26 UTC (permalink / raw)
To: song, yukuai, linan122, kbusch, axboe, hch, sagi
Cc: linux-raid, linux-nvme, kmodukuri, Chaitanya Kulkarni
BLK_FEAT_NOWAIT and BLK_FEAT_POLL are cleared in blk_stack_limits()
when an underlying device does not support them. Apply the same
treatment to BLK_FEAT_PCI_P2PDMA: stacking drivers set it
unconditionally and rely on the core to clear it whenever a
non-supporting member device is stacked.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
block/blk-settings.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 78c83817b9d3..8274631290db 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -795,6 +795,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->features &= ~BLK_FEAT_NOWAIT;
if (!(b->features & BLK_FEAT_POLL))
t->features &= ~BLK_FEAT_POLL;
+ if (!(b->features & BLK_FEAT_PCI_P2PDMA))
+ t->features &= ~BLK_FEAT_PCI_P2PDMA;
t->flags |= (b->flags & BLK_FLAG_MISALIGNED);
--
2.39.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
@ 2026-04-17 7:52 ` Christoph Hellwig
2026-04-17 10:11 ` Nitesh Shetty
1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2026-04-17 7:52 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: song, yukuai, linan122, kbusch, axboe, hch, sagi, linux-raid,
linux-nvme, kmodukuri
On Thu, Apr 16, 2026 at 02:26:31PM -0700, Chaitanya Kulkarni wrote:
> BLK_FEAT_NOWAIT and BLK_FEAT_POLL are cleared in blk_stack_limits()
> when an underlying device does not support them. Apply the same
> treatment to BLK_FEAT_PCI_P2PDMA: stacking drivers set it
> unconditionally and rely on the core to clear it whenever a
> non-supporting member device is stacked.
>
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-04-17 7:52 ` Christoph Hellwig
@ 2026-04-17 10:11 ` Nitesh Shetty
1 sibling, 0 replies; 9+ messages in thread
From: Nitesh Shetty @ 2026-04-17 10:11 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: song, yukuai, linan122, kbusch, axboe, hch, sagi, linux-raid,
linux-nvme, kmodukuri
[-- Attachment #1: Type: text/plain, Size: 1635 bytes --]
On 16/04/26 02:26PM, Chaitanya Kulkarni wrote:
>BLK_FEAT_NOWAIT and BLK_FEAT_POLL are cleared in blk_stack_limits()
>when an underlying device does not support them. Apply the same
>treatment to BLK_FEAT_PCI_P2PDMA: stacking drivers set it
>unconditionally and rely on the core to clear it whenever a
>non-supporting member device is stacked.
>
>Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>---
> block/blk-settings.c | 2 ++
> 1 file changed, 2 insertions(+)
>
>diff --git a/block/blk-settings.c b/block/blk-settings.c
>index 78c83817b9d3..8274631290db 100644
>--- a/block/blk-settings.c
>+++ b/block/blk-settings.c
>@@ -795,6 +795,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
> t->features &= ~BLK_FEAT_NOWAIT;
> if (!(b->features & BLK_FEAT_POLL))
> t->features &= ~BLK_FEAT_POLL;
>+ if (!(b->features & BLK_FEAT_PCI_P2PDMA))
>+ t->features &= ~BLK_FEAT_PCI_P2PDMA;
>
> t->flags |= (b->flags & BLK_FLAG_MISALIGNED);
I think you need the patch below [1] as well so this can be unset here.
Also, it would be better to include Mike, Mikulas and the dm-devel mailing list.
Thanks,
Nitesh
[1]
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index dc2eff6b739d..0442c1f4c686 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -590,7 +590,8 @@ int dm_split_args(int *argc, char ***argvp, char *input)
static void dm_set_stacking_limits(struct queue_limits *limits)
{
blk_set_stacking_limits(limits);
- limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
+ limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL |
+ BLK_FEAT_PCI_P2PDMA;
}
[-- Attachment #2: Type: text/plain, Size: 0 bytes --]
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
@ 2026-04-16 21:26 ` Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2 siblings, 1 reply; 9+ messages in thread
From: Chaitanya Kulkarni @ 2026-04-16 21:26 UTC (permalink / raw)
To: song, yukuai, linan122, kbusch, axboe, hch, sagi
Cc: linux-raid, linux-nvme, kmodukuri, Chaitanya Kulkarni
From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
MD RAID does not propagate BLK_FEAT_PCI_P2PDMA from member devices to
the RAID device, preventing peer-to-peer DMA through the RAID layer even
when all underlying devices support it.
Enable BLK_FEAT_PCI_P2PDMA unconditionally in raid0, raid1 and raid10
personalities during queue limits setup. blk_stack_limits() clears it
automatically if any member device lacks support, consistent with how
BLK_FEAT_NOWAIT and BLK_FEAT_POLL are handled in the block core.
Parity RAID personalities (raid4/5/6) are excluded because they require
CPU access to data pages for parity computation, which is incompatible
with P2P mappings.
Tested with RAID0/1/10 arrays containing multiple NVMe devices with
P2PDMA support, confirming that peer-to-peer transfers work correctly
through the RAID layer.
Signed-off-by: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/md/raid0.c | 1 +
drivers/md/raid1.c | 1 +
drivers/md/raid10.c | 1 +
3 files changed, 3 insertions(+)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 5e38a51e349a..2cdaf7495d92 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -392,6 +392,7 @@ static int raid0_set_limits(struct mddev *mddev)
lim.io_opt = lim.io_min * mddev->raid_disks;
lim.chunk_sectors = mddev->chunk_sectors;
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index ba91f7e61920..422ad4786569 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3215,6 +3215,7 @@ static int raid1_set_limits(struct mddev *mddev)
lim.max_hw_wzeroes_unmap_sectors = 0;
lim.logical_block_size = mddev->logical_block_size;
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 4901ebe45c87..07a5b734c8f3 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3939,6 +3939,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
lim.chunk_sectors = mddev->chunk_sectors;
lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
--
2.39.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
@ 2026-04-17 7:53 ` Christoph Hellwig
0 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2026-04-17 7:53 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: song, yukuai, linan122, kbusch, axboe, hch, sagi, linux-raid,
linux-nvme, kmodukuri
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
@ 2026-04-16 21:26 ` Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-17 10:42 ` Nitesh Shetty
2 siblings, 2 replies; 9+ messages in thread
From: Chaitanya Kulkarni @ 2026-04-16 21:26 UTC (permalink / raw)
To: song, yukuai, linan122, kbusch, axboe, hch, sagi
Cc: linux-raid, linux-nvme, kmodukuri, Chaitanya Kulkarni
From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
NVMe multipath does not expose BLK_FEAT_PCI_P2PDMA on the head disk
even when all underlying controllers support it.
Set BLK_FEAT_PCI_P2PDMA unconditionally in nvme_mpath_alloc_disk()
alongside the other features. nvme_update_ns_info_block() already
calls queue_limits_stack_bdev() to stack each path's limits onto the
head disk, which routes through blk_stack_limits(). The core now
clears BLK_FEAT_PCI_P2PDMA automatically if any path (e.g., FC) does
not support it, consistent with how BLK_FEAT_NOWAIT and BLK_FEAT_POLL
are handled.
Signed-off-by: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/nvme/host/multipath.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index ba00f0b72b85..957e39c4795d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -734,7 +734,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
blk_set_stacking_limits(&lim);
lim.dma_alignment = 3;
lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT |
- BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES;
+ BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES | BLK_FEAT_PCI_P2PDMA;
if (head->ids.csi == NVME_CSI_ZNS)
lim.features |= BLK_FEAT_ZONED;
--
2.39.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
@ 2026-04-17 7:53 ` Christoph Hellwig
2026-04-17 10:42 ` Nitesh Shetty
1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2026-04-17 7:53 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: song, yukuai, linan122, kbusch, axboe, hch, sagi, linux-raid,
linux-nvme, kmodukuri
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
@ 2026-04-17 10:42 ` Nitesh Shetty
1 sibling, 0 replies; 9+ messages in thread
From: Nitesh Shetty @ 2026-04-17 10:42 UTC (permalink / raw)
To: Chaitanya Kulkarni
Cc: song, yukuai, linan122, kbusch, axboe, hch, sagi, linux-raid,
linux-nvme, kmodukuri
[-- Attachment #1: Type: text/plain, Size: 1498 bytes --]
On 16/04/26 02:26PM, Chaitanya Kulkarni wrote:
>From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
>
>NVMe multipath does not expose BLK_FEAT_PCI_P2PDMA on the head disk
>even when all underlying controllers support it.
>
>Set BLK_FEAT_PCI_P2PDMA unconditionally in nvme_mpath_alloc_disk()
>alongside the other features. nvme_update_ns_info_block() already
>calls queue_limits_stack_bdev() to stack each path's limits onto the
>head disk, which routes through blk_stack_limits(). The core now
>clears BLK_FEAT_PCI_P2PDMA automatically if any path (e.g., FC) does
>not support it, consistent with how BLK_FEAT_NOWAIT and BLK_FEAT_POLL
>are handled.
>
>Signed-off-by: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
>Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>---
> drivers/nvme/host/multipath.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>index ba00f0b72b85..957e39c4795d 100644
>--- a/drivers/nvme/host/multipath.c
>+++ b/drivers/nvme/host/multipath.c
>@@ -734,7 +734,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
> blk_set_stacking_limits(&lim);
> lim.dma_alignment = 3;
> lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT |
>- BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES;
>+ BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES | BLK_FEAT_PCI_P2PDMA;
> if (head->ids.csi == NVME_CSI_ZNS)
> lim.features |= BLK_FEAT_ZONED;
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>
[-- Attachment #2: Type: text/plain, Size: 0 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2026-04-17 10:47 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-04-17 7:52 ` Christoph Hellwig
2026-04-17 10:11 ` Nitesh Shetty
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-17 10:42 ` Nitesh Shetty