* [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
@ 2025-04-01 17:22 Farhan Ali
2025-04-01 17:22 ` [PATCH v3 1/3] util: Add functions for s390x mmio read/write Farhan Ali
` (3 more replies)
0 siblings, 4 replies; 18+ messages in thread
From: Farhan Ali @ 2025-04-01 17:22 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, qemu-s390x, stefanha, fam, philmd, kwolf, hreitz,
thuth, alifm, mjrosato, schnelle
Hi,
Recently on s390x we have enabled mmap support for vfio-pci devices [1].
This allows us to take advantage of that support and use userspace drivers on
s390x. However, s390x has special instructions for MMIO access. Starting with
z15 (and newer platforms) there are new PCI Memory I/O (MIO) instructions which
operate on virtually mapped PCI memory spaces and can be used from userspace.
On older platforms we fall back to the existing system calls for MMIO access.
This patch series introduces support for the PCI MIO instructions and enables
the userspace NVMe driver on s390x. I would appreciate any review/feedback on
the patches.
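As a rough sketch of how the pieces fit together (simplified from the
patches; see them for the real definitions):

    /* block/nvme.c (patch 3) accesses BAR registers through the new API... */
    cap = host_pci_mmio_read_64(&regs->cap);

    /* ...which on s390x (patch 2) dispatches to the patch 1 helpers, and
     * on other hosts remains a volatile dereference of the mmap'ed BAR:
     */
    #ifdef __s390x__
        ret = s390x_pci_mmio_read_64(ioaddr); /* PCILGI, or syscall on pre-z15 */
    #else
        ret = *((volatile uint64_t *)ioaddr);
    #endif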
Thanks
Farhan
[1] https://lore.kernel.org/linux-s390/20250226-vfio_pci_mmap-v7-0-c5c0f1d26efd@linux.ibm.com/
ChangeLog
---------
v2 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06847.html
v2 -> v3
- Update the PCI MMIO APIs to reflect that it's PCI MMIO access on the host,
as suggested by Stefan (patch 2)
- Move s390x ifdef check to s390x_pci_mmio.h as suggested by Philippe (patch 1)
- Add R-bs for the respective patches.
v1 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06596.html
v1 -> v2
- Add 8 and 16 bit reads/writes for completeness (patch 1)
- Introduce new QEMU PCI MMIO read/write API as suggested by Stefan (patch 2)
- Update NVMe userspace driver to use QEMU PCI MMIO functions (patch 3)
Farhan Ali (3):
util: Add functions for s390x mmio read/write
include: Add a header to define host PCI MMIO functions
block/nvme: Use host PCI MMIO API
block/nvme.c | 37 +++++----
include/qemu/host-pci-mmio.h | 116 ++++++++++++++++++++++++++
include/qemu/s390x_pci_mmio.h | 24 ++++++
util/meson.build | 2 +
util/s390x_pci_mmio.c | 148 ++++++++++++++++++++++++++++++++++
5 files changed, 311 insertions(+), 16 deletions(-)
create mode 100644 include/qemu/host-pci-mmio.h
create mode 100644 include/qemu/s390x_pci_mmio.h
create mode 100644 util/s390x_pci_mmio.c
--
2.43.0
* [PATCH v3 1/3] util: Add functions for s390x mmio read/write
2025-04-01 17:22 [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
@ 2025-04-01 17:22 ` Farhan Ali
2025-04-01 17:22 ` [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
` (2 subsequent siblings)
3 siblings, 0 replies; 18+ messages in thread
From: Farhan Ali @ 2025-04-01 17:22 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, qemu-s390x, stefanha, fam, philmd, kwolf, hreitz,
thuth, alifm, mjrosato, schnelle
Starting with z15 (or newer) we can execute MMIO
instructions from userspace. On older platforms,
where these instructions are not available, we
fall back to using system calls to access the
PCI-mapped resources.
This patch adds helper functions for MMIO reads
and writes on s390x.
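A caller-side sketch (the BAR pointer and register offset below are
hypothetical; only the s390x_pci_mmio_* calls are the interface added
here):

    /* Read a 32-bit register at a hypothetical offset 0x1c of an
     * mmap()ed PCI BAR. Uses the MIO load instruction on z15+, or the
     * s390_pci_mmio_read syscall on older machines. */
    static uint32_t read_reg32(uint8_t *bar)
    {
        return s390x_pci_mmio_read_32(bar + 0x1c);
    }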
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
include/qemu/s390x_pci_mmio.h | 24 ++++++
util/meson.build | 2 +
util/s390x_pci_mmio.c | 148 ++++++++++++++++++++++++++++++++++
3 files changed, 174 insertions(+)
create mode 100644 include/qemu/s390x_pci_mmio.h
create mode 100644 util/s390x_pci_mmio.c
diff --git a/include/qemu/s390x_pci_mmio.h b/include/qemu/s390x_pci_mmio.h
new file mode 100644
index 0000000000..c5f63ecefa
--- /dev/null
+++ b/include/qemu/s390x_pci_mmio.h
@@ -0,0 +1,24 @@
+/*
+ * s390x PCI MMIO definitions
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#ifndef S390X_PCI_MMIO_H
+#define S390X_PCI_MMIO_H
+
+#ifdef __s390x__
+uint8_t s390x_pci_mmio_read_8(const void *ioaddr);
+uint16_t s390x_pci_mmio_read_16(const void *ioaddr);
+uint32_t s390x_pci_mmio_read_32(const void *ioaddr);
+uint64_t s390x_pci_mmio_read_64(const void *ioaddr);
+
+void s390x_pci_mmio_write_8(void *ioaddr, uint8_t val);
+void s390x_pci_mmio_write_16(void *ioaddr, uint16_t val);
+void s390x_pci_mmio_write_32(void *ioaddr, uint32_t val);
+void s390x_pci_mmio_write_64(void *ioaddr, uint64_t val);
+#endif /* __s390x__ */
+
+#endif /* S390X_PCI_MMIO_H */
diff --git a/util/meson.build b/util/meson.build
index 780b5977a8..acb21592f9 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -131,4 +131,6 @@ elif cpu in ['ppc', 'ppc64']
util_ss.add(files('cpuinfo-ppc.c'))
elif cpu in ['riscv32', 'riscv64']
util_ss.add(files('cpuinfo-riscv.c'))
+elif cpu == 's390x'
+ util_ss.add(files('s390x_pci_mmio.c'))
endif
diff --git a/util/s390x_pci_mmio.c b/util/s390x_pci_mmio.c
new file mode 100644
index 0000000000..820458a026
--- /dev/null
+++ b/util/s390x_pci_mmio.c
@@ -0,0 +1,148 @@
+/*
+ * s390x PCI MMIO definitions
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include <unistd.h>
+#include <sys/syscall.h>
+#include "qemu/s390x_pci_mmio.h"
+#include "elf.h"
+
+union register_pair {
+ unsigned __int128 pair;
+ struct {
+ uint64_t even;
+ uint64_t odd;
+ };
+};
+
+static bool is_mio_supported;
+
+static __attribute__((constructor)) void check_is_mio_supported(void)
+{
+ is_mio_supported = !!(qemu_getauxval(AT_HWCAP) & HWCAP_S390_PCI_MIO);
+}
+
+static uint64_t s390x_pcilgi(const void *ioaddr, size_t len)
+{
+ union register_pair ioaddr_len = { .even = (uint64_t)ioaddr,
+ .odd = len };
+ uint64_t val;
+ int cc;
+
+ asm volatile(
+ /* pcilgi */
+ ".insn rre,0xb9d60000,%[val],%[ioaddr_len]\n"
+ "ipm %[cc]\n"
+ "srl %[cc],28\n"
+ : [cc] "=d"(cc), [val] "=d"(val),
+ [ioaddr_len] "+&d"(ioaddr_len.pair) :: "cc");
+
+ if (cc) {
+ val = -1ULL;
+ }
+
+ return val;
+}
+
+static void s390x_pcistgi(void *ioaddr, uint64_t val, size_t len)
+{
+ union register_pair ioaddr_len = {.even = (uint64_t)ioaddr, .odd = len};
+
+ asm volatile (
+ /* pcistgi */
+ ".insn rre,0xb9d40000,%[val],%[ioaddr_len]\n"
+ : [ioaddr_len] "+&d" (ioaddr_len.pair)
+ : [val] "d" (val)
+ : "cc", "memory");
+}
+
+uint8_t s390x_pci_mmio_read_8(const void *ioaddr)
+{
+ uint8_t val = 0;
+
+ if (is_mio_supported) {
+ val = s390x_pcilgi(ioaddr, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+ }
+ return val;
+}
+
+uint16_t s390x_pci_mmio_read_16(const void *ioaddr)
+{
+ uint16_t val = 0;
+
+ if (is_mio_supported) {
+ val = s390x_pcilgi(ioaddr, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+ }
+ return val;
+}
+
+uint32_t s390x_pci_mmio_read_32(const void *ioaddr)
+{
+ uint32_t val = 0;
+
+ if (is_mio_supported) {
+ val = s390x_pcilgi(ioaddr, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+ }
+ return val;
+}
+
+uint64_t s390x_pci_mmio_read_64(const void *ioaddr)
+{
+ uint64_t val = 0;
+
+ if (is_mio_supported) {
+ val = s390x_pcilgi(ioaddr, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+ }
+ return val;
+}
+
+void s390x_pci_mmio_write_8(void *ioaddr, uint8_t val)
+{
+ if (is_mio_supported) {
+ s390x_pcistgi(ioaddr, val, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+ }
+}
+
+void s390x_pci_mmio_write_16(void *ioaddr, uint16_t val)
+{
+ if (is_mio_supported) {
+ s390x_pcistgi(ioaddr, val, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+ }
+}
+
+void s390x_pci_mmio_write_32(void *ioaddr, uint32_t val)
+{
+ if (is_mio_supported) {
+ s390x_pcistgi(ioaddr, val, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+ }
+}
+
+void s390x_pci_mmio_write_64(void *ioaddr, uint64_t val)
+{
+ if (is_mio_supported) {
+ s390x_pcistgi(ioaddr, val, sizeof(val));
+ } else {
+ syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+ }
+}
+
--
2.43.0
* [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions
2025-04-01 17:22 [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
2025-04-01 17:22 ` [PATCH v3 1/3] util: Add functions for s390x mmio read/write Farhan Ali
@ 2025-04-01 17:22 ` Farhan Ali
2025-04-02 14:09 ` Stefan Hajnoczi
2025-04-01 17:22 ` [PATCH v3 3/3] block/nvme: Use host PCI MMIO API Farhan Ali
2025-04-02 15:51 ` [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Stefan Hajnoczi
3 siblings, 1 reply; 18+ messages in thread
From: Farhan Ali @ 2025-04-01 17:22 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, qemu-s390x, stefanha, fam, philmd, kwolf, hreitz,
thuth, alifm, mjrosato, schnelle
Add a generic API for host PCI MMIO reads/writes
(e.g. Linux VFIO BAR accesses). The functions access
little-endian memory and return the result in
host CPU endianness.
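A minimal usage sketch (the register offset is hypothetical; patch 3
converts the NVMe driver to real callers):

    #include "qemu/host-pci-mmio.h"

    /* The device register is little-endian; the helper performs the
     * MMIO-safe access and byte-swaps, returning a host-endian value. */
    static uint32_t read_device_reg(uint8_t *bar0)
    {
        return host_pci_mmio_read_32(bar0 + 0x08);
    }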
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
include/qemu/host-pci-mmio.h | 116 +++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 include/qemu/host-pci-mmio.h
diff --git a/include/qemu/host-pci-mmio.h b/include/qemu/host-pci-mmio.h
new file mode 100644
index 0000000000..c26426524f
--- /dev/null
+++ b/include/qemu/host-pci-mmio.h
@@ -0,0 +1,116 @@
+/*
+ * API for host PCI MMIO accesses (e.g. Linux VFIO BARs)
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef HOST_PCI_MMIO_H
+#define HOST_PCI_MMIO_H
+
+#include "qemu/bswap.h"
+#include "qemu/s390x_pci_mmio.h"
+
+
+static inline uint8_t host_pci_mmio_read_8(const void *ioaddr)
+{
+ uint8_t ret = 0;
+#ifdef __s390x__
+ ret = s390x_pci_mmio_read_8(ioaddr);
+#else
+ /* Prevent the compiler from optimizing away the load */
+ ret = *((volatile uint8_t *)ioaddr);
+#endif
+
+ return ret;
+}
+
+static inline uint16_t host_pci_mmio_read_16(const void *ioaddr)
+{
+ uint16_t ret = 0;
+#ifdef __s390x__
+ ret = s390x_pci_mmio_read_16(ioaddr);
+#else
+ /* Prevent the compiler from optimizing away the load */
+ ret = *((volatile uint16_t *)ioaddr);
+#endif
+
+ return le16_to_cpu(ret);
+}
+
+static inline uint32_t host_pci_mmio_read_32(const void *ioaddr)
+{
+ uint32_t ret = 0;
+#ifdef __s390x__
+ ret = s390x_pci_mmio_read_32(ioaddr);
+#else
+ /* Prevent the compiler from optimizing away the load */
+ ret = *((volatile uint32_t *)ioaddr);
+#endif
+
+ return le32_to_cpu(ret);
+}
+
+static inline uint64_t host_pci_mmio_read_64(const void *ioaddr)
+{
+ uint64_t ret = 0;
+#ifdef __s390x__
+ ret = s390x_pci_mmio_read_64(ioaddr);
+#else
+ /* Prevent the compiler from optimizing away the load */
+ ret = *((volatile uint64_t *)ioaddr);
+#endif
+
+ return le64_to_cpu(ret);
+}
+
+static inline void host_pci_mmio_write_8(void *ioaddr, uint8_t val)
+{
+
+#ifdef __s390x__
+ s390x_pci_mmio_write_8(ioaddr, val);
+#else
+ /* Prevent the compiler from optimizing away the store */
+ *((volatile uint8_t *)ioaddr) = val;
+#endif
+}
+
+static inline void host_pci_mmio_write_16(void *ioaddr, uint16_t val)
+{
+ val = cpu_to_le16(val);
+
+#ifdef __s390x__
+ s390x_pci_mmio_write_16(ioaddr, val);
+#else
+ /* Prevent the compiler from optimizing away the store */
+ *((volatile uint16_t *)ioaddr) = val;
+#endif
+}
+
+static inline void host_pci_mmio_write_32(void *ioaddr, uint32_t val)
+{
+ val = cpu_to_le32(val);
+
+#ifdef __s390x__
+ s390x_pci_mmio_write_32(ioaddr, val);
+#else
+ /* Prevent the compiler from optimizing away the store */
+ *((volatile uint32_t *)ioaddr) = val;
+#endif
+}
+
+static inline void host_pci_mmio_write_64(void *ioaddr, uint64_t val)
+{
+ val = cpu_to_le64(val);
+
+#ifdef __s390x__
+ s390x_pci_mmio_write_64(ioaddr, val);
+#else
+ /* Prevent the compiler from optimizing away the store */
+ *((volatile uint64_t *)ioaddr) = val;
+#endif
+}
+
+#endif
--
2.43.0
* [PATCH v3 3/3] block/nvme: Use host PCI MMIO API
2025-04-01 17:22 [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
2025-04-01 17:22 ` [PATCH v3 1/3] util: Add functions for s390x mmio read/write Farhan Ali
2025-04-01 17:22 ` [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
@ 2025-04-01 17:22 ` Farhan Ali
2025-04-02 15:51 ` [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Stefan Hajnoczi
3 siblings, 0 replies; 18+ messages in thread
From: Farhan Ali @ 2025-04-01 17:22 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-block, qemu-s390x, stefanha, fam, philmd, kwolf, hreitz,
thuth, alifm, mjrosato, schnelle
Use the host PCI MMIO functions to read and write
NVMe registers, rather than accessing them
directly.
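As a before/after sketch of the conversion pattern (taken from the
controller reset sequence in the diff below):

    /* before: volatile dereference plus explicit byte swapping */
    regs->cc = cpu_to_le32(le32_to_cpu(regs->cc) & 0xFE);

    /* after: the host PCI MMIO helpers do the access and the swapping */
    cc = host_pci_mmio_read_32(&regs->cc);
    host_pci_mmio_write_32(&regs->cc, cc & 0xFE);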
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
block/nvme.c | 37 +++++++++++++++++++++----------------
1 file changed, 21 insertions(+), 16 deletions(-)
diff --git a/block/nvme.c b/block/nvme.c
index bbf7c23dcd..ba66fbc93a 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -18,6 +18,7 @@
#include "qobject/qstring.h"
#include "qemu/defer-call.h"
#include "qemu/error-report.h"
+#include "qemu/host-pci-mmio.h"
#include "qemu/main-loop.h"
#include "qemu/module.h"
#include "qemu/cutils.h"
@@ -60,7 +61,7 @@ typedef struct {
uint8_t *queue;
uint64_t iova;
/* Hardware MMIO register */
- volatile uint32_t *doorbell;
+ uint32_t *doorbell;
} NVMeQueue;
typedef struct {
@@ -100,7 +101,7 @@ struct BDRVNVMeState {
QEMUVFIOState *vfio;
void *bar0_wo_map;
/* Memory mapped registers */
- volatile struct {
+ struct {
uint32_t sq_tail;
uint32_t cq_head;
} *doorbells;
@@ -292,7 +293,7 @@ static void nvme_kick(NVMeQueuePair *q)
assert(!(q->sq.tail & 0xFF00));
/* Fence the write to submission queue entry before notifying the device. */
smp_wmb();
- *q->sq.doorbell = cpu_to_le32(q->sq.tail);
+ host_pci_mmio_write_32(q->sq.doorbell, q->sq.tail);
q->inflight += q->need_kick;
q->need_kick = 0;
}
@@ -441,7 +442,7 @@ static bool nvme_process_completion(NVMeQueuePair *q)
if (progress) {
/* Notify the device so it can post more completions. */
smp_mb_release();
- *q->cq.doorbell = cpu_to_le32(q->cq.head);
+ host_pci_mmio_write_32(q->cq.doorbell, q->cq.head);
nvme_wake_free_req_locked(q);
}
@@ -460,7 +461,7 @@ static void nvme_process_completion_bh(void *opaque)
* so notify the device that it has space to fill in more completions now.
*/
smp_mb_release();
- *q->cq.doorbell = cpu_to_le32(q->cq.head);
+ host_pci_mmio_write_32(q->cq.doorbell, q->cq.head);
nvme_wake_free_req_locked(q);
nvme_process_completion(q);
@@ -749,9 +750,10 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
int ret;
uint64_t cap;
uint32_t ver;
+ uint32_t cc;
uint64_t timeout_ms;
uint64_t deadline, now;
- volatile NvmeBar *regs = NULL;
+ NvmeBar *regs = NULL;
qemu_co_mutex_init(&s->dma_map_lock);
qemu_co_queue_init(&s->dma_flush_queue);
@@ -779,7 +781,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
/* Perform initialize sequence as described in NVMe spec "7.6.1
* Initialization". */
- cap = le64_to_cpu(regs->cap);
+ cap = host_pci_mmio_read_64(&regs->cap);
trace_nvme_controller_capability_raw(cap);
trace_nvme_controller_capability("Maximum Queue Entries Supported",
1 + NVME_CAP_MQES(cap));
@@ -805,16 +807,17 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
bs->bl.request_alignment = s->page_size;
timeout_ms = MIN(500 * NVME_CAP_TO(cap), 30000);
- ver = le32_to_cpu(regs->vs);
+ ver = host_pci_mmio_read_32(&regs->vs);
trace_nvme_controller_spec_version(extract32(ver, 16, 16),
extract32(ver, 8, 8),
extract32(ver, 0, 8));
/* Reset device to get a clean state. */
- regs->cc = cpu_to_le32(le32_to_cpu(regs->cc) & 0xFE);
+ cc = host_pci_mmio_read_32(&regs->cc);
+ host_pci_mmio_write_32(&regs->cc, cc & 0xFE);
/* Wait for CSTS.RDY = 0. */
deadline = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timeout_ms * SCALE_MS;
- while (NVME_CSTS_RDY(le32_to_cpu(regs->csts))) {
+ while (NVME_CSTS_RDY(host_pci_mmio_read_32(&regs->csts))) {
if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) > deadline) {
error_setg(errp, "Timeout while waiting for device to reset (%"
PRId64 " ms)",
@@ -843,19 +846,21 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
s->queues[INDEX_ADMIN] = q;
s->queue_count = 1;
QEMU_BUILD_BUG_ON((NVME_QUEUE_SIZE - 1) & 0xF000);
- regs->aqa = cpu_to_le32(((NVME_QUEUE_SIZE - 1) << AQA_ACQS_SHIFT) |
- ((NVME_QUEUE_SIZE - 1) << AQA_ASQS_SHIFT));
- regs->asq = cpu_to_le64(q->sq.iova);
- regs->acq = cpu_to_le64(q->cq.iova);
+ host_pci_mmio_write_32(&regs->aqa,
+ ((NVME_QUEUE_SIZE - 1) << AQA_ACQS_SHIFT) |
+ ((NVME_QUEUE_SIZE - 1) << AQA_ASQS_SHIFT));
+ host_pci_mmio_write_64(&regs->asq, q->sq.iova);
+ host_pci_mmio_write_64(&regs->acq, q->cq.iova);
/* After setting up all control registers we can enable device now. */
- regs->cc = cpu_to_le32((ctz32(NVME_CQ_ENTRY_BYTES) << CC_IOCQES_SHIFT) |
+ host_pci_mmio_write_32(&regs->cc,
+ (ctz32(NVME_CQ_ENTRY_BYTES) << CC_IOCQES_SHIFT) |
(ctz32(NVME_SQ_ENTRY_BYTES) << CC_IOSQES_SHIFT) |
CC_EN_MASK);
/* Wait for CSTS.RDY = 1. */
now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
deadline = now + timeout_ms * SCALE_MS;
- while (!NVME_CSTS_RDY(le32_to_cpu(regs->csts))) {
+ while (!NVME_CSTS_RDY(host_pci_mmio_read_32(&regs->csts))) {
if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) > deadline) {
error_setg(errp, "Timeout while waiting for device to start (%"
PRId64 " ms)",
--
2.43.0
* Re: [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions
2025-04-01 17:22 ` [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
@ 2025-04-02 14:09 ` Stefan Hajnoczi
0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2025-04-02 14:09 UTC (permalink / raw)
To: Farhan Ali
Cc: qemu-devel, qemu-block, qemu-s390x, fam, philmd, kwolf, hreitz,
thuth, mjrosato, schnelle
On Tue, Apr 01, 2025 at 10:22:45AM -0700, Farhan Ali wrote:
> Add a generic API for host PCI MMIO reads/writes
> (e.g. Linux VFIO BAR accesses). The functions access
> little endian memory and returns the result in
> host cpu endianness.
>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
> ---
> include/qemu/host-pci-mmio.h | 116 +++++++++++++++++++++++++++++++++++
> 1 file changed, 116 insertions(+)
> create mode 100644 include/qemu/host-pci-mmio.h
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-01 17:22 [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
` (2 preceding siblings ...)
2025-04-01 17:22 ` [PATCH v3 3/3] block/nvme: Use host PCI MMIO API Farhan Ali
@ 2025-04-02 15:51 ` Stefan Hajnoczi
2025-04-03 7:47 ` Niklas Schnelle
3 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2025-04-02 15:51 UTC (permalink / raw)
To: Alex Williamson
Cc: qemu-devel, qemu-block, qemu-s390x, fam, philmd, kwolf, hreitz,
thuth, mjrosato, schnelle, Farhan Ali
On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> Hi,
>
> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
Hi Alex,
I wanted to bring this to your attention. Feel free to merge it through
the VFIO tree, otherwise I will merge it once you have taken a look.
Thanks,
Stefan
> This allows us to take advantage and use userspace drivers on s390x. However,
> on s390x we have special instructions for MMIO access. Starting with z15
> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> operate on virtually mapped PCI memory spaces, and can be used from userspace.
> On older platforms we would fallback to using existing system calls for MMIO access.
>
> This patch series introduces support the PCI MIO instructions, and enables s390x
> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> on the patches.
>
> Thanks
> Farhan
>
> [1] https://lore.kernel.org/linux-s390/20250226-vfio_pci_mmap-v7-0-c5c0f1d26efd@linux.ibm.com/
>
> ChangeLog
> ---------
> v2 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06847.html
> v2 -> v3
> - Update the PCI MMIO APIs to reflect that its PCI MMIO access on host
> as suggested by Stefan(patch 2)
> - Move s390x ifdef check to s390x_pci_mmio.h as suggested by Philippe (patch 1)
> - Add R-bs for the respective patches.
>
> v1 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06596.html
> v1 -> v2
> - Add 8 and 16 bit reads/writes for completeness (patch 1)
> - Introduce new QEMU PCI MMIO read/write API as suggested by Stefan (patch 2)
> - Update NVMe userspace driver to use QEMU PCI MMIO functions (patch 3)
>
> Farhan Ali (3):
> util: Add functions for s390x mmio read/write
> include: Add a header to define host PCI MMIO functions
> block/nvme: Use host PCI MMIO API
>
> block/nvme.c | 37 +++++----
> include/qemu/host-pci-mmio.h | 116 ++++++++++++++++++++++++++
> include/qemu/s390x_pci_mmio.h | 24 ++++++
> util/meson.build | 2 +
> util/s390x_pci_mmio.c | 148 ++++++++++++++++++++++++++++++++++
> 5 files changed, 311 insertions(+), 16 deletions(-)
> create mode 100644 include/qemu/host-pci-mmio.h
> create mode 100644 include/qemu/s390x_pci_mmio.h
> create mode 100644 util/s390x_pci_mmio.c
>
> --
> 2.43.0
>
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-02 15:51 ` [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Stefan Hajnoczi
@ 2025-04-03 7:47 ` Niklas Schnelle
2025-04-03 15:44 ` Stefan Hajnoczi
0 siblings, 1 reply; 18+ messages in thread
From: Niklas Schnelle @ 2025-04-03 7:47 UTC (permalink / raw)
To: Stefan Hajnoczi, Alex Williamson
Cc: qemu-devel, qemu-block, qemu-s390x, fam, philmd, kwolf, hreitz,
thuth, mjrosato, Farhan Ali
On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> > Hi,
> >
> > Recently on s390x we have enabled mmap support for vfio-pci devices [1].
>
> Hi Alex,
> I wanted to bring this to your attention. Feel free to merge it through
> the VFIO tree, otherwise I will merge it once you have taken a look.
>
> Thanks,
> Stefan
>
> > This allows us to take advantage and use userspace drivers on s390x. However,
> > on s390x we have special instructions for MMIO access. Starting with z15
> > (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> > operate on virtually mapped PCI memory spaces, and can be used from userspace.
> > On older platforms we would fallback to using existing system calls for MMIO access.
> >
> > This patch series introduces support the PCI MIO instructions, and enables s390x
> > support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> > on the patches.
> >
> > Thanks
> > Farhan
Hi Stefan,
the kernel patch actually made it into Linus' tree for v6.15 already as
commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
for ISM devices") plus prerequisites. This went via the PCI tree
because they included a change to struct pci_dev and also enabled
mmap() on PCI resource files. Alex reviewed an earlier version and was
the one who suggested to also enable mmap() on PCI resources.
Thanks,
Niklas
> >
> > [1] https://lore.kernel.org/linux-s390/20250226-vfio_pci_mmap-v7-0-c5c0f1d26efd@linux.ibm.com/
> >
> > ChangeLog
> > ---------
> > v2 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06847.html
> > v2 -> v3
> > - Update the PCI MMIO APIs to reflect that its PCI MMIO access on host
> > as suggested by Stefan(patch 2)
> > - Move s390x ifdef check to s390x_pci_mmio.h as suggested by Philippe (patch 1)
> > - Add R-bs for the respective patches.
> >
> > v1 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06596.html
> > v1 -> v2
> > - Add 8 and 16 bit reads/writes for completeness (patch 1)
> > - Introduce new QEMU PCI MMIO read/write API as suggested by Stefan (patch 2)
> > - Update NVMe userspace driver to use QEMU PCI MMIO functions (patch 3)
> >
> > Farhan Ali (3):
> > util: Add functions for s390x mmio read/write
> > include: Add a header to define host PCI MMIO functions
> > block/nvme: Use host PCI MMIO API
> >
> > block/nvme.c | 37 +++++----
> > include/qemu/host-pci-mmio.h | 116 ++++++++++++++++++++++++++
> > include/qemu/s390x_pci_mmio.h | 24 ++++++
> > util/meson.build | 2 +
> > util/s390x_pci_mmio.c | 148 ++++++++++++++++++++++++++++++++++
> > 5 files changed, 311 insertions(+), 16 deletions(-)
> > create mode 100644 include/qemu/host-pci-mmio.h
> > create mode 100644 include/qemu/s390x_pci_mmio.h
> > create mode 100644 util/s390x_pci_mmio.c
> >
> > --
> > 2.43.0
> >
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 7:47 ` Niklas Schnelle
@ 2025-04-03 15:44 ` Stefan Hajnoczi
2025-04-03 16:27 ` Alex Williamson
0 siblings, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2025-04-03 15:44 UTC (permalink / raw)
To: Niklas Schnelle
Cc: Alex Williamson, qemu-devel, qemu-block, qemu-s390x, fam, philmd,
kwolf, hreitz, thuth, mjrosato, Farhan Ali
On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
> > On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> > > Hi,
> > >
> > > Recently on s390x we have enabled mmap support for vfio-pci devices [1].
> >
> > Hi Alex,
> > I wanted to bring this to your attention. Feel free to merge it through
> > the VFIO tree, otherwise I will merge it once you have taken a look.
> >
> > Thanks,
> > Stefan
> >
> > > This allows us to take advantage and use userspace drivers on s390x. However,
> > > on s390x we have special instructions for MMIO access. Starting with z15
> > > (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> > > operate on virtually mapped PCI memory spaces, and can be used from userspace.
> > > On older platforms we would fallback to using existing system calls for MMIO access.
> > >
> > > This patch series introduces support the PCI MIO instructions, and enables s390x
> > > support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> > > on the patches.
> > >
> > > Thanks
> > > Farhan
>
> Hi Stefan,
>
> the kernel patch actually made it into Linus' tree for v6.15 already as
> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
> for ISM devices") plus prerequisites. This went via the PCI tree
> because they included a change to struct pci_dev and also enabled
> mmap() on PCI resource files. Alex reviewed an earlier version and was
> the one who suggested to also enable mmap() on PCI resources.
The introduction of a new QEMU API for accessing MMIO BARs in this
series is something Alex might be interested in as QEMU VFIO maintainer.
That wouldn't have been part of the kernel patch review.
If he's aware of the new API he can encourage other VFIO users to use it
in the future so that you won't need to convert them to work on s390x
again.
Stefan
>
> Thanks,
> Niklas
>
> > >
> > > [1] https://lore.kernel.org/linux-s390/20250226-vfio_pci_mmap-v7-0-c5c0f1d26efd@linux.ibm.com/
> > >
> > > ChangeLog
> > > ---------
> > > v2 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06847.html
> > > v2 -> v3
> > > - Update the PCI MMIO APIs to reflect that its PCI MMIO access on host
> > > as suggested by Stefan(patch 2)
> > > - Move s390x ifdef check to s390x_pci_mmio.h as suggested by Philippe (patch 1)
> > > - Add R-bs for the respective patches.
> > >
> > > v1 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06596.html
> > > v1 -> v2
> > > - Add 8 and 16 bit reads/writes for completeness (patch 1)
> > > - Introduce new QEMU PCI MMIO read/write API as suggested by Stefan (patch 2)
> > > - Update NVMe userspace driver to use QEMU PCI MMIO functions (patch 3)
> > >
> > > Farhan Ali (3):
> > > util: Add functions for s390x mmio read/write
> > > include: Add a header to define host PCI MMIO functions
> > > block/nvme: Use host PCI MMIO API
> > >
> > > block/nvme.c | 37 +++++----
> > > include/qemu/host-pci-mmio.h | 116 ++++++++++++++++++++++++++
> > > include/qemu/s390x_pci_mmio.h | 24 ++++++
> > > util/meson.build | 2 +
> > > util/s390x_pci_mmio.c | 148 ++++++++++++++++++++++++++++++++++
> > > 5 files changed, 311 insertions(+), 16 deletions(-)
> > > create mode 100644 include/qemu/host-pci-mmio.h
> > > create mode 100644 include/qemu/s390x_pci_mmio.h
> > > create mode 100644 util/s390x_pci_mmio.c
> > >
> > > --
> > > 2.43.0
> > >
>
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 15:44 ` Stefan Hajnoczi
@ 2025-04-03 16:27 ` Alex Williamson
2025-04-03 17:33 ` Farhan Ali
0 siblings, 1 reply; 18+ messages in thread
From: Alex Williamson @ 2025-04-03 16:27 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Niklas Schnelle, qemu-devel, qemu-block, qemu-s390x, fam, philmd,
kwolf, hreitz, thuth, mjrosato, Farhan Ali, Cédric Le Goater
On Thu, 3 Apr 2025 11:44:42 -0400
Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
> > On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
> > > On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> > > > Hi,
> > > >
> > > > Recently on s390x we have enabled mmap support for vfio-pci devices [1].
> > >
> > > Hi Alex,
> > > I wanted to bring this to your attention. Feel free to merge it through
> > > the VFIO tree, otherwise I will merge it once you have taken a look.
> > >
> > > Thanks,
> > > Stefan
> > >
> > > > This allows us to take advantage and use userspace drivers on s390x. However,
> > > > on s390x we have special instructions for MMIO access. Starting with z15
> > > > (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> > > > operate on virtually mapped PCI memory spaces, and can be used from userspace.
> > > > On older platforms we would fallback to using existing system calls for MMIO access.
> > > >
> > > > This patch series introduces support the PCI MIO instructions, and enables s390x
> > > > support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> > > > on the patches.
> > > >
> > > > Thanks
> > > > Farhan
> >
> > Hi Stefan,
> >
> > the kernel patch actually made it into Linus' tree for v6.15 already as
> > commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
> > for ISM devices") plus prerequisites. This went via the PCI tree
> > because they included a change to struct pci_dev and also enabled
> > mmap() on PCI resource files. Alex reviewed an earlier version and was
> > the one who suggested to also enable mmap() on PCI resources.
>
> The introduction of a new QEMU API for accessing MMIO BARs in this
> series is something Alex might be interested in as QEMU VFIO maintainer.
> That wouldn't have been part of the kernel patch review.
>
> If he's aware of the new API he can encourage other VFIO users to use it
> in the future so that you won't need to convert them to work on s390x
> again.
I don't claim any jurisdiction over the vfio-nvme driver. In general
vfio users should be using either vfio_region_ops, ram_device_mem_ops,
or directly mapping MMIO into the VM address space. The first uses
pread/write through the region offset, irrespective of the type of
memory, the second provides the type of access used here where we're
dereferencing into an mmap, and the last is of course the preferred
mechanism where available.
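A sketch of the first two access styles (the fd, offsets, and pointer
names here are hypothetical):

    uint32_t val;

    /* 1. vfio_region_ops-style: pread() through the region offset */
    pread(vfio_device_fd, &val, sizeof(val), bar0_region_offset + 0x1c);

    /* 2. ram_device_mem_ops-style: dereference into an established mmap */
    val = *(volatile uint32_t *)(bar0_mmap + 0x1c);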
It is curious that the proposal here doesn't include any changes to
ram_device_mem_ops for more generically enabling MMIO access on s390x.
Thanks,
Alex
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 16:27 ` Alex Williamson
@ 2025-04-03 17:33 ` Farhan Ali
2025-04-03 18:05 ` Alex Williamson
0 siblings, 1 reply; 18+ messages in thread
From: Farhan Ali @ 2025-04-03 17:33 UTC (permalink / raw)
To: Alex Williamson, Stefan Hajnoczi
Cc: Niklas Schnelle, qemu-devel, qemu-block, qemu-s390x, fam, philmd,
kwolf, hreitz, thuth, mjrosato, Cédric Le Goater
On 4/3/2025 9:27 AM, Alex Williamson wrote:
> On Thu, 3 Apr 2025 11:44:42 -0400
> Stefan Hajnoczi <stefanha@redhat.com> wrote:
>
>> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
>>> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
>>>> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
>>>>> Hi,
>>>>>
>>>>> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
>>>> Hi Alex,
>>>> I wanted to bring this to your attention. Feel free to merge it through
>>>> the VFIO tree, otherwise I will merge it once you have taken a look.
>>>>
>>>> Thanks,
>>>> Stefan
>>>>
>>>>> This allows us to take advantage and use userspace drivers on s390x. However,
>>>>> on s390x we have special instructions for MMIO access. Starting with z15
>>>>> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
>>>>> operate on virtually mapped PCI memory spaces, and can be used from userspace.
>>>>> On older platforms we would fallback to using existing system calls for MMIO access.
>>>>>
>>>>> This patch series introduces support the PCI MIO instructions, and enables s390x
>>>>> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
>>>>> on the patches.
>>>>>
>>>>> Thanks
>>>>> Farhan
>>> Hi Stefan,
>>>
>>> the kernel patch actually made it into Linus' tree for v6.15 already as
>>> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
>>> for ISM devices") plus prerequisites. This went via the PCI tree
>>> because they included a change to struct pci_dev and also enabled
>>> mmap() on PCI resource files. Alex reviewed an earlier version and was
>>> the one who suggested to also enable mmap() on PCI resources.
>> The introduction of a new QEMU API for accessing MMIO BARs in this
>> series is something Alex might be interested in as QEMU VFIO maintainer.
>> That wouldn't have been part of the kernel patch review.
>>
>> If he's aware of the new API he can encourage other VFIO users to use it
>> in the future so that you won't need to convert them to work on s390x
>> again.
> I don't claim any jurisdiction over the vfio-nvme driver. In general
> vfio users should be using either vfio_region_ops, ram_device_mem_ops,
> or directly mapping MMIO into the VM address space. The first uses
> pread/write through the region offset, irrespective of the type of
> memory, the second provides the type of access used here where we're
> dereferencing into an mmap, and the last if of course the preferred
> mechanism where available.
>
> It is curious that the proposal here doesn't include any changes to
> ram_device_mem_ops for more generically enabling MMIO access on s390x.
> Thanks,
>
> Alex
Hi Alex,
From my understanding, ram_device_mem_ops sets up the BAR access for
a guest passthrough device. Unfortunately, today an s390x KVM guest
neither supports nor uses these MIO instructions. We wanted to use
this series as an initial test vehicle for the mmap support.
Thanks
Farhan
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 17:33 ` Farhan Ali
@ 2025-04-03 18:05 ` Alex Williamson
2025-04-03 20:33 ` Farhan Ali
0 siblings, 1 reply; 18+ messages in thread
From: Alex Williamson @ 2025-04-03 18:05 UTC (permalink / raw)
To: Farhan Ali
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato,
Cédric Le Goater
On Thu, 3 Apr 2025 10:33:52 -0700
Farhan Ali <alifm@linux.ibm.com> wrote:
> On 4/3/2025 9:27 AM, Alex Williamson wrote:
> > On Thu, 3 Apr 2025 11:44:42 -0400
> > Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >
> >> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
> >>> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
> >>>> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> >>>>> Hi,
> >>>>>
> >>>>> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
> >>>> Hi Alex,
> >>>> I wanted to bring this to your attention. Feel free to merge it through
> >>>> the VFIO tree, otherwise I will merge it once you have taken a look.
> >>>>
> >>>> Thanks,
> >>>> Stefan
> >>>>
> >>>>> This allows us to take advantage and use userspace drivers on s390x. However,
> >>>>> on s390x we have special instructions for MMIO access. Starting with z15
> >>>>> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> >>>>> operate on virtually mapped PCI memory spaces, and can be used from userspace.
> >>>>> On older platforms we would fallback to using existing system calls for MMIO access.
> >>>>>
> >>>>> This patch series introduces support the PCI MIO instructions, and enables s390x
> >>>>> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> >>>>> on the patches.
> >>>>>
> >>>>> Thanks
> >>>>> Farhan
> >>> Hi Stefan,
> >>>
> >>> the kernel patch actually made it into Linus' tree for v6.15 already as
> >>> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
> >>> for ISM devices") plus prerequisites. This went via the PCI tree
> >>> because they included a change to struct pci_dev and also enabled
> >>> mmap() on PCI resource files. Alex reviewed an earlier version and was
> >>> the one who suggested to also enable mmap() on PCI resources.
> >> The introduction of a new QEMU API for accessing MMIO BARs in this
> >> series is something Alex might be interested in as QEMU VFIO maintainer.
> >> That wouldn't have been part of the kernel patch review.
> >>
> >> If he's aware of the new API he can encourage other VFIO users to use it
> >> in the future so that you won't need to convert them to work on s390x
> >> again.
> > I don't claim any jurisdiction over the vfio-nvme driver. In general
> > vfio users should be using either vfio_region_ops, ram_device_mem_ops,
> > or directly mapping MMIO into the VM address space. The first uses
> > pread/write through the region offset, irrespective of the type of
> > memory, the second provides the type of access used here where we're
> > dereferencing into an mmap, and the last if of course the preferred
> > mechanism where available.
> >
> > It is curious that the proposal here doesn't include any changes to
> > ram_device_mem_ops for more generically enabling MMIO access on s390x.
> > Thanks,
> >
> > Alex
>
>
> Hi Alex,
> From my understanding the ram_device_mem_ops sets up the BAR access for
> a guest passthrough device. Unfortunately today an s390x KVM guest
> doesn't use and have support for these MIO instructions. We wanted to
> use this series as an initial test vehicle of the mmap support.
Right, ram_device_mem_ops is what we'll use to access a BAR that
supports mmap but for whatever reason we're accessing it directly
through the mmap. For instance if an overlapping quirk prevents the
page from being mapped to the VM or we have some back channel mechanism
where the VMM is interacting with the BAR.
I bring it up here because it's effectively the same kind of access
you're adding with these helpers and would need to be addressed if this
were generically enabling vfio mmap access on s390x.
Prior to commit 2b8fe81b3c2e ("system/memory: use ldn_he_p/stn_he_p")
the mmio helpers here might have been a drop-in replacement for the
dereferencing of mmap offsets, but something would need to be done
about the explicit PCI assumption introduced here and the possibility
of unaligned accesses that the noted commit tries to resolve. Thanks,
Alex
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 18:05 ` Alex Williamson
@ 2025-04-03 20:33 ` Farhan Ali
2025-04-03 21:24 ` Alex Williamson
2025-04-04 7:05 ` Cédric Le Goater
0 siblings, 2 replies; 18+ messages in thread
From: Farhan Ali @ 2025-04-03 20:33 UTC (permalink / raw)
To: Alex Williamson
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato,
Cédric Le Goater
On 4/3/2025 11:05 AM, Alex Williamson wrote:
> On Thu, 3 Apr 2025 10:33:52 -0700
> Farhan Ali <alifm@linux.ibm.com> wrote:
>
>> On 4/3/2025 9:27 AM, Alex Williamson wrote:
>>> On Thu, 3 Apr 2025 11:44:42 -0400
>>> Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>>
>>>> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
>>>>> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
>>>>>> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
>>>>>> Hi Alex,
>>>>>> I wanted to bring this to your attention. Feel free to merge it through
>>>>>> the VFIO tree, otherwise I will merge it once you have taken a look.
>>>>>>
>>>>>> Thanks,
>>>>>> Stefan
>>>>>>
>>>>>>> This allows us to take advantage and use userspace drivers on s390x. However,
>>>>>>> on s390x we have special instructions for MMIO access. Starting with z15
>>>>>>> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
>>>>>>> operate on virtually mapped PCI memory spaces, and can be used from userspace.
>>>>>>> On older platforms we would fallback to using existing system calls for MMIO access.
>>>>>>>
>>>>>>> This patch series introduces support the PCI MIO instructions, and enables s390x
>>>>>>> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
>>>>>>> on the patches.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Farhan
>>>>> Hi Stefan,
>>>>>
>>>>> the kernel patch actually made it into Linus' tree for v6.15 already as
>>>>> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
>>>>> for ISM devices") plus prerequisites. This went via the PCI tree
>>>>> because they included a change to struct pci_dev and also enabled
>>>>> mmap() on PCI resource files. Alex reviewed an earlier version and was
>>>>> the one who suggested to also enable mmap() on PCI resources.
>>>> The introduction of a new QEMU API for accessing MMIO BARs in this
>>>> series is something Alex might be interested in as QEMU VFIO maintainer.
>>>> That wouldn't have been part of the kernel patch review.
>>>>
>>>> If he's aware of the new API he can encourage other VFIO users to use it
>>>> in the future so that you won't need to convert them to work on s390x
>>>> again.
>>> I don't claim any jurisdiction over the vfio-nvme driver. In general
>>> vfio users should be using either vfio_region_ops, ram_device_mem_ops,
>>> or directly mapping MMIO into the VM address space. The first uses
>>> pread/write through the region offset, irrespective of the type of
>>> memory, the second provides the type of access used here where we're
>>> dereferencing into an mmap, and the last if of course the preferred
>>> mechanism where available.
>>>
>>> It is curious that the proposal here doesn't include any changes to
>>> ram_device_mem_ops for more generically enabling MMIO access on s390x.
>>> Thanks,
>>>
>>> Alex
>>
>> Hi Alex,
>> From my understanding the ram_device_mem_ops sets up the BAR access for
>> a guest passthrough device. Unfortunately today an s390x KVM guest
>> doesn't use and have support for these MIO instructions. We wanted to
>> use this series as an initial test vehicle of the mmap support.
> Right, ram_device_mem_ops is what we'll use to access a BAR that
> supports mmap but for whatever reason we're accessing it directly
> through the mmap. For instance if an overlapping quirk prevents the
> page from being mapped to the VM or we have some back channel mechanism
> where the VMM is interacting with the BAR.
>
> I bring it up here because it's effectively the same kind of access
> you're adding with these helpers and would need to be addressed if this
> were generically enabling vfio mmap access on s390x.
On s390x the use of the MIO instructions is limited to PCI access only,
so I am not sure we should generically apply this to all vfio mmap
access (i.e. for non-PCI devices).
>
> Prior to commit 2b8fe81b3c2e ("system/memory: use ldn_he_p/stn_he_p")
> the mmio helpers here might have been a drop-in replacement for the
> dereferencing of mmap offsets, but something would need to be done
> about the explicit PCI assumption introduced here and the possibility
> of unaligned accesses that the noted commit tries to resolve. Thanks,
>
> Alex
AFAICT in QEMU today, ram_device_mem_ops is used for non-PCI vfio
mmap cases. For s390x these helpers should be restricted to PCI
accesses. For the unaligned accesses (thanks for pointing out that
commit!), are you suggesting we use the ld*_he_p/st*_he_p functions in
the helpers I defined? Those functions don't seem to be doing volatile
accesses, though.
Thanks
Farhan
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 20:33 ` Farhan Ali
@ 2025-04-03 21:24 ` Alex Williamson
2025-04-10 16:07 ` Farhan Ali
2025-04-04 7:05 ` Cédric Le Goater
1 sibling, 1 reply; 18+ messages in thread
From: Alex Williamson @ 2025-04-03 21:24 UTC (permalink / raw)
To: Farhan Ali
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato,
Cédric Le Goater, venture, crauer, pefoley, david
On Thu, 3 Apr 2025 13:33:17 -0700
Farhan Ali <alifm@linux.ibm.com> wrote:
> On 4/3/2025 11:05 AM, Alex Williamson wrote:
> > On Thu, 3 Apr 2025 10:33:52 -0700
> > Farhan Ali <alifm@linux.ibm.com> wrote:
> >
> >> On 4/3/2025 9:27 AM, Alex Williamson wrote:
> >>> On Thu, 3 Apr 2025 11:44:42 -0400
> >>> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> >>>
> >>>> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
> >>>>> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
> >>>>>> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
> >>>>>> Hi Alex,
> >>>>>> I wanted to bring this to your attention. Feel free to merge it through
> >>>>>> the VFIO tree, otherwise I will merge it once you have taken a look.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Stefan
> >>>>>>
> >>>>>>> This allows us to take advantage and use userspace drivers on s390x. However,
> >>>>>>> on s390x we have special instructions for MMIO access. Starting with z15
> >>>>>>> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
> >>>>>>> operate on virtually mapped PCI memory spaces, and can be used from userspace.
> >>>>>>> On older platforms we would fallback to using existing system calls for MMIO access.
> >>>>>>>
> >>>>>>> This patch series introduces support the PCI MIO instructions, and enables s390x
> >>>>>>> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
> >>>>>>> on the patches.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>> Farhan
> >>>>> Hi Stefan,
> >>>>>
> >>>>> the kernel patch actually made it into Linus' tree for v6.15 already as
> >>>>> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
> >>>>> for ISM devices") plus prerequisites. This went via the PCI tree
> >>>>> because they included a change to struct pci_dev and also enabled
> >>>>> mmap() on PCI resource files. Alex reviewed an earlier version and was
> >>>>> the one who suggested to also enable mmap() on PCI resources.
> >>>> The introduction of a new QEMU API for accessing MMIO BARs in this
> >>>> series is something Alex might be interested in as QEMU VFIO maintainer.
> >>>> That wouldn't have been part of the kernel patch review.
> >>>>
> >>>> If he's aware of the new API he can encourage other VFIO users to use it
> >>>> in the future so that you won't need to convert them to work on s390x
> >>>> again.
> >>> I don't claim any jurisdiction over the vfio-nvme driver. In general
> >>> vfio users should be using either vfio_region_ops, ram_device_mem_ops,
> >>> or directly mapping MMIO into the VM address space. The first uses
> >>> pread/write through the region offset, irrespective of the type of
> >>> memory, the second provides the type of access used here where we're
> >>> dereferencing into an mmap, and the last if of course the preferred
> >>> mechanism where available.
> >>>
> >>> It is curious that the proposal here doesn't include any changes to
> >>> ram_device_mem_ops for more generically enabling MMIO access on s390x.
> >>> Thanks,
> >>>
> >>> Alex
> >>
> >> Hi Alex,
> >> From my understanding the ram_device_mem_ops sets up the BAR access for
> >> a guest passthrough device. Unfortunately today an s390x KVM guest
> >> doesn't use and have support for these MIO instructions. We wanted to
> >> use this series as an initial test vehicle of the mmap support.
> > Right, ram_device_mem_ops is what we'll use to access a BAR that
> > supports mmap but for whatever reason we're accessing it directly
> > through the mmap. For instance if an overlapping quirk prevents the
> > page from being mapped to the VM or we have some back channel mechanism
> > where the VMM is interacting with the BAR.
> >
> > I bring it up here because it's effectively the same kind of access
> > you're adding with these helpers and would need to be addressed if this
> > were generically enabling vfio mmap access on s390x.
>
> On s390x the use of the MIO instructions is limited to only PCI access.
> So i am not sure if we should generically apply this to all vfio mmap
> access (for non PCI devices).
>
>
> >
> > Prior to commit 2b8fe81b3c2e ("system/memory: use ldn_he_p/stn_he_p")
> > the mmio helpers here might have been a drop-in replacement for the
> > dereferencing of mmap offsets, but something would need to be done
> > about the explicit PCI assumption introduced here and the possibility
> > of unaligned accesses that the noted commit tries to resolve. Thanks,
> >
> > Alex
>
> AFAICT in qemu today the ram_device_mem_ops is used for non PCI vfio
> mmap cases. For s390x these helpers should be restricted to PCI
> accesses. For the unaligned accesses (thanks for pointing out that
> commmit!), are you suggesting we use the ld*_he_p/st*_he_p functions in
> the helpers i defined? Though those functions don't seem to be doing
> volatile accesses.
TBH, it's not clear to me that 2b8fe81b3c2e is correct. We implemented
the ram_device MemoryRegion specifically to avoid memory access
optimizations that are not compatible with MMIO, but I see that these
{ld,st}*_he_p operations are using __builtin_memcpy. I'm not a
compiler aficionado, but is __builtin_memcpy guaranteed to use an
instruction set compatible with MMIO?
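For reference, a simplified sketch of the two shapes in question (not
the verbatim QEMU code):

    /* ldn_he_p-style: the compiler may lower __builtin_memcpy to
     * whatever instruction sequence it considers best for ordinary RAM. */
    static inline uint32_t ldl_he_p_sketch(const void *ptr)
    {
        uint32_t r;
        __builtin_memcpy(&r, ptr, sizeof(r));
        return r;
    }

    /* ram_device-style: a single load of the requested width, which is
     * the behavior 4a2e242bbb30 intended for MMIO-backed memory. */
    static inline uint32_t ram_device_ldl_sketch(const void *ptr)
    {
        return *(volatile const uint32_t *)ptr;
    }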
Cc: folks related to that commit.
The original issue that brought us ram_device was a very obscure
alignment of a memory region versus a device quirk only seen with
assignment of specific RTL NICs.
The description for commit 4a2e242bbb30 ("memory: Don't use memcpy for
ram_device regions") also addresses unaligned accesses, we don't expect
drivers to use them and we don't want them to work differently in a VM
than they might on bare metal. We can debate whether that's valid or
not, but that was the intent.
Have we re-introduced the chance that we're using optimized
instructions only meant to target RAM here or is __builtin_memcpy
implicitly safe for MMIO? Thanks,
Alex
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 20:33 ` Farhan Ali
2025-04-03 21:24 ` Alex Williamson
@ 2025-04-04 7:05 ` Cédric Le Goater
1 sibling, 0 replies; 18+ messages in thread
From: Cédric Le Goater @ 2025-04-04 7:05 UTC (permalink / raw)
To: Farhan Ali, Alex Williamson
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato
On 4/3/25 22:33, Farhan Ali wrote:
>
> On 4/3/2025 11:05 AM, Alex Williamson wrote:
>> On Thu, 3 Apr 2025 10:33:52 -0700
>> Farhan Ali <alifm@linux.ibm.com> wrote:
>>
>>> On 4/3/2025 9:27 AM, Alex Williamson wrote:
>>>> On Thu, 3 Apr 2025 11:44:42 -0400
>>>> Stefan Hajnoczi <stefanha@redhat.com> wrote:
>>>>> On Thu, Apr 03, 2025 at 09:47:26AM +0200, Niklas Schnelle wrote:
>>>>>> On Wed, 2025-04-02 at 11:51 -0400, Stefan Hajnoczi wrote:
>>>>>>> On Tue, Apr 01, 2025 at 10:22:43AM -0700, Farhan Ali wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Recently on s390x we have enabled mmap support for vfio-pci devices [1].
>>>>>>> Hi Alex,
>>>>>>> I wanted to bring this to your attention. Feel free to merge it through
>>>>>>> the VFIO tree, otherwise I will merge it once you have taken a look.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Stefan
>>>>>>>> This allows us to take advantage and use userspace drivers on s390x. However,
>>>>>>>> on s390x we have special instructions for MMIO access. Starting with z15
>>>>>>>> (and newer platforms) we have new PCI Memory I/O (MIO) instructions which
>>>>>>>> operate on virtually mapped PCI memory spaces, and can be used from userspace.
>>>>>>>> On older platforms we would fallback to using existing system calls for MMIO access.
>>>>>>>>
>>>>>>>> This patch series introduces support the PCI MIO instructions, and enables s390x
>>>>>>>> support for the userspace NVMe driver on s390x. I would appreciate any review/feedback
>>>>>>>> on the patches.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Farhan
>>>>>> Hi Stefan,
>>>>>>
>>>>>> the kernel patch actually made it into Linus' tree for v6.15 already as
>>>>>> commit aa9f168d55dc ("s390/pci: Support mmap() of PCI resources except
>>>>>> for ISM devices") plus prerequisites. This went via the PCI tree
>>>>>> because they included a change to struct pci_dev and also enabled
>>>>>> mmap() on PCI resource files. Alex reviewed an earlier version and was
>>>>>> the one who suggested to also enable mmap() on PCI resources.
>>>>> The introduction of a new QEMU API for accessing MMIO BARs in this
>>>>> series is something Alex might be interested in as QEMU VFIO maintainer.
>>>>> That wouldn't have been part of the kernel patch review.
>>>>>
>>>>> If he's aware of the new API he can encourage other VFIO users to use it
>>>>> in the future so that you won't need to convert them to work on s390x
>>>>> again.
>>>> I don't claim any jurisdiction over the vfio-nvme driver. In general
>>>> vfio users should be using either vfio_region_ops, ram_device_mem_ops,
>>>> or directly mapping MMIO into the VM address space. The first uses
>>>> pread/write through the region offset, irrespective of the type of
>>>> memory, the second provides the type of access used here where we're
>>>> dereferencing into an mmap, and the last if of course the preferred
>>>> mechanism where available.
>>>>
>>>> It is curious that the proposal here doesn't include any changes to
>>>> ram_device_mem_ops for more generically enabling MMIO access on s390x.
>>>> Thanks,
>>>>
>>>> Alex
>>>
>>> Hi Alex,
>>> From my understanding the ram_device_mem_ops sets up the BAR access for
>>> a guest passthrough device. Unfortunately today an s390x KVM guest
>>> doesn't use and have support for these MIO instructions. We wanted to
>>> use this series as an initial test vehicle of the mmap support.
>> Right, ram_device_mem_ops is what we'll use to access a BAR that
>> supports mmap but for whatever reason we're accessing it directly
>> through the mmap. For instance if an overlapping quirk prevents the
>> page from being mapped to the VM or we have some back channel mechanism
>> where the VMM is interacting with the BAR.
>>
>> I bring it up here because it's effectively the same kind of access
>> you're adding with these helpers and would need to be addressed if this
>> were generically enabling vfio mmap access on s390x.
>
> On s390x the use of the MIO instructions is limited to PCI access only. So I am not sure if we should generically apply this to all vfio mmap access (for non-PCI devices).
>
>
>>
>> Prior to commit 2b8fe81b3c2e ("system/memory: use ldn_he_p/stn_he_p")
>> the mmio helpers here might have been a drop-in replacement for the
>> dereferencing of mmap offsets, but something would need to be done
>> about the explicit PCI assumption introduced here and the possibility
>> of unaligned accesses that the noted commit tries to resolve. Thanks,
>>
>> Alex
>
> AFAICT in qemu today the ram_device_mem_ops is used for non-PCI vfio mmap cases. For s390x these helpers should be restricted to PCI accesses. For the unaligned accesses (thanks for pointing out that commit!), are you suggesting we use the ld*_he_p/st*_he_p functions in the helpers I defined? Though those functions don't seem to be doing volatile accesses.
I think that's fine. We had the same problem to deal with for the XIVE
ESB MMIO pages. See xive_esb_rw() in hw/intc/spapr_xive_kvm.c.
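
To make the contrast concrete, here is a minimal sketch (illustrative
only -- not the actual xive_esb_rw() or QEMU bswap.h source) of the two
access styles being discussed:

#include <stdint.h>
#include <stdlib.h>

/* ldl_he_p-style: a fixed-size __builtin_memcpy into a local. Correct
 * for RAM, but it carries no volatile semantics for the MMIO case. */
static inline uint32_t sketch_ldl_he_p(const void *ptr)
{
    uint32_t r;
    __builtin_memcpy(&r, ptr, sizeof(r));
    return r;
}

/* Size-dispatched volatile access: every call is a single, naturally
 * aligned load, which is the property wanted for MMIO. */
static inline uint64_t sketch_mmio_read(void *addr, unsigned size)
{
    switch (size) {
    case 1: return *(volatile uint8_t *)addr;
    case 2: return *(volatile uint16_t *)addr;
    case 4: return *(volatile uint32_t *)addr;
    case 8: return *(volatile uint64_t *)addr;
    default: abort();
    }
}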
Thanks,
C.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-03 21:24 ` Alex Williamson
@ 2025-04-10 16:07 ` Farhan Ali
2025-04-11 22:28 ` Alex Williamson
0 siblings, 1 reply; 18+ messages in thread
From: Farhan Ali @ 2025-04-10 16:07 UTC (permalink / raw)
To: Alex Williamson, Stefan Hajnoczi
Cc: Niklas Schnelle, qemu-devel, qemu-block, qemu-s390x, fam, philmd,
kwolf, hreitz, thuth, mjrosato, Cédric Le Goater, venture,
crauer, pefoley, david
On 4/3/2025 2:24 PM, Alex Williamson wrote:
> On Thu, 3 Apr 2025 13:33:17 -0700
> Farhan Ali <alifm@linux.ibm.com> wrote:
--- snip ---
> TBH, it's not clear to me that 2b8fe81b3c2e is correct. We implemented
> the ram_device MemoryRegion specifically to avoid memory access
> optimizations that are not compatible with MMIO, but I see that these
> {ld,st}*_he_p operations are using __builtin_memcpy. I'm not a
> compiler aficionado, but is __builtin_memcpy guaranteed to use an
> instruction set compatible with MMIO?
>
> Cc: folks related to that commit.
>
> The original issue that brought us ram_device was a very obscure
> alignment of a memory region versus a device quirk only seen with
> assignment of specific RTL NICs.
>
> The description for commit 4a2e242bbb30 ("memory: Don't use memcpy for
> ram_device regions") also addresses unaligned accesses: we don't expect
> drivers to use them, and we don't want them to work differently in a VM
> than they might on bare metal. We can debate whether that's valid or
> not, but that was the intent.
>
> Have we re-introduced the chance that we're using optimized
> instructions only meant to target RAM here or is __builtin_memcpy
> implicitly safe for MMIO? Thanks,
>
> Alex
Hi Stefan, Alex,

Polite ping. Following up to understand how we should proceed with this
series. Please let me know if there are any concerns that I haven't
addressed.
Thanks
Farhan
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-10 16:07 ` Farhan Ali
@ 2025-04-11 22:28 ` Alex Williamson
2025-04-11 23:28 ` Farhan Ali
2025-04-15 7:28 ` Niklas Schnelle
0 siblings, 2 replies; 18+ messages in thread
From: Alex Williamson @ 2025-04-11 22:28 UTC (permalink / raw)
To: Farhan Ali
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato,
Cédric Le Goater, venture, crauer, pefoley, david
On Thu, 10 Apr 2025 09:07:51 -0700
Farhan Ali <alifm@linux.ibm.com> wrote:
--- snip ---
>
> Hi Stefan, Alex,
>
>
> Polite ping. Following up to understand how we should proceed with this
> series. Please let me know if there are any concerns that I haven't
> addressed.
I disassembled the current implementation using ldn_he_p/stn_he_p on
x86_64 and it doesn't appear to introduce any of the mmx/sse optimized
code that we were trying to get away from in introducing the
ram_device MemoryRegion and getting away from memcpy. I wish I had
some assurance that __builtin_memcpy won't invoke such operations, but
it seems unlikely that it would for the discrete, fundamental size
operations we're asking of it. Therefore, maybe it is advisable to use
the ld*_he_p/st*_he_p helpers rather than open code the memory derefs.

It's unfortunate that s390x needs to specifically restrict this access
to PCI memory, but maybe that means that a PCI-specific version of these
helpers is only created for s390x and elsewhere #define'd to the
generic ld/st helpers, which maybe means the main interface should be a
host_pci_{ld,st}n_he_p (maybe "le" given the implementation) type
function. I don't know if we'd then create a ram_pci_device variant
memory region ops for use in vfio-pci, but it should probably be coded
with that in mind. Thanks,

Alex
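
The shape of the interface suggested above could look like this (a
sketch under assumptions: s390x_pci_mmio_read_32() stands in for the
s390x helper from patch 1, and ldl_le_p()/le32_to_cpu() are QEMU's
generic byte-order helpers; the final names and endianness handling may
differ):

#include "qemu/bswap.h"

#ifdef __s390x__
#include "qemu/s390x_pci_mmio.h"

static inline uint32_t host_pci_ldl_le_p(const void *addr)
{
    /* assumed helper: MIO load on z15+, syscall fallback otherwise */
    return le32_to_cpu(s390x_pci_mmio_read_32(addr));
}
#else
/* on other hosts an mmap'd PCI BAR can be loaded like normal memory */
#define host_pci_ldl_le_p(addr) ldl_le_p(addr)
#endif

The point of the #define on non-s390x hosts is that existing callers pay
no extra cost and need no new code paths; only s390x routes through the
PCI-specific implementation.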
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-11 22:28 ` Alex Williamson
@ 2025-04-11 23:28 ` Farhan Ali
2025-04-15 7:28 ` Niklas Schnelle
1 sibling, 0 replies; 18+ messages in thread
From: Farhan Ali @ 2025-04-11 23:28 UTC (permalink / raw)
To: Alex Williamson
Cc: Stefan Hajnoczi, Niklas Schnelle, qemu-devel, qemu-block,
qemu-s390x, fam, philmd, kwolf, hreitz, thuth, mjrosato,
Cédric Le Goater, venture, crauer, pefoley, david
On 4/11/2025 3:28 PM, Alex Williamson wrote:
> On Thu, 10 Apr 2025 09:07:51 -0700
> Farhan Ali <alifm@linux.ibm.com> wrote:
>
--- snip ---
> I disassembled the current implementation using ldn_he_p/stn_he_p on
> x86_64 and it doesn't appear to introduce any of the mmx/sse optimized
> code that we were trying to get away from in introducing the
> ram_device MemoryRegion and getting away from memcpy. I wish I had
> some assurance that __builtin_memcpy won't invoke such operations, but
> it seems unlikely that it would for the discrete, fundamental size
> operations we're asking of it. Therefore, maybe it is advisable to use
> the ld*_he_p/st*_he_p helpers rather than open code the memory derefs.
>
> It's unfortunate that s390x needs to specifically restrict this access
> to PCI memory, but maybe that means that a PCI-specific version of these
> helpers is only created for s390x and elsewhere #define'd to the
> generic ld/st helpers, which maybe means the main interface should be a
> host_pci_{ld,st}n_he_p (maybe "le" given the implementation) type
> function. I don't know if we'd then create a ram_pci_device variant
> memory region ops for use in vfio-pci, but it should probably be coded
> with that in mind. Thanks,
>
> Alex
Hi Alex,
Yes, I can update the interface to use the generic ld/st helpers for
non-s390x cases.
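
For example, a caller in block/nvme.c would go from a raw dereference to
the helper along these lines (a sketch only, reusing the illustrative
host_pci_ldl_le_p() name from above):

volatile NvmeBar *regs;   /* the mmap'd BAR 0 of the NVMe controller */
uint32_t csts;

/* before: plain dereference of the mmap'd register */
csts = le32_to_cpu(regs->csts);

/* after: host PCI MMIO helper -- MIO instruction on s390x, an ordinary
 * little-endian load on other hosts */
csts = host_pci_ldl_le_p(&regs->csts);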
Thanks
Farhan
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x
2025-04-11 22:28 ` Alex Williamson
2025-04-11 23:28 ` Farhan Ali
@ 2025-04-15 7:28 ` Niklas Schnelle
1 sibling, 0 replies; 18+ messages in thread
From: Niklas Schnelle @ 2025-04-15 7:28 UTC (permalink / raw)
To: Alex Williamson, Farhan Ali
Cc: Stefan Hajnoczi, qemu-devel, qemu-block, qemu-s390x, fam, philmd,
kwolf, hreitz, thuth, mjrosato, Cédric Le Goater, venture,
crauer, pefoley, david
On Fri, 2025-04-11 at 16:28 -0600, Alex Williamson wrote:
> > >
--- snip ---
> > > Cc: folks related to that commit.
> > >
> > > The original issue that brought us ram_device was a very obscure
> > > alignment of a memory region versus a device quirk only seen with
> > > assignment of specific RTL NICs.
> > >
> > > The description for commit 4a2e242bbb30 ("memory: Don't use memcpy for
> > > ram_device regions") also addresses unaligned accesses: we don't expect
> > > drivers to use them, and we don't want them to work differently in a VM
> > > than they might on bare metal. We can debate whether that's valid or
> > > not, but that was the intent.
> > >
> > > Have we re-introduced the chance that we're using optimized
> > > instructions only meant to target RAM here or is __builtin_memcpy
> > > implicitly safe for MMIO? Thanks,
> > >
> > > Alex
> >
> >
> > Hi Stefan, Alex,
> >
> >
> > Polite ping. Following up to understand how we should proceed with this
> > series. Please let me know if there are any concerns that I haven't
> > addressed.
>
> I disassembled the current implementation using ldn_he_p/stn_he_p on
> x86_64 and it doesn't appear to introduce any of the mmx/sse optimized
> code that we were trying to get away from in introducing the
> ram_device MemoryRegion and getting away from memcpy. I wish I had
> some assurance that __builtin_memcpy won't invoke such operations, but
> it seems unlikely that it would for the discrete, fundamental size
> operations we're asking of it. Therefore, maybe it is advisable to use
> the ld*_he_p/st*_he_p helpers rather than open code the memory derefs.
>
> It's unfortunate that s390x needs to specifically restrict this access
> to PCI memory, but maybe that means that a PCI-specific version of these
> helpers is only created for s390x and elsewhere #define'd to the
> generic ld/st helpers, which maybe means the main interface should be a
> host_pci_{ld,st}n_he_p (maybe "le" given the implementation) type
> function. I don't know if we'd then create a ram_pci_device variant
> memory region ops for use in vfio-pci, but it should probably be coded
> with that in mind. Thanks,
>
> Alex
>
Hi Alex,
Just as a clarification, because it was unclear earlier in the thread:
while it is true that the PCI instructions are restricted to PCI, there
is also no other kind of MMIO on s390x. This might not really help here,
though, because the PCI instructions cannot be used to access
main/normal memory. So it's really a distinction between normal memory
and MMIO, where MMIO, if such devices were available, would also include
things like GPU VRAM.
Thanks,
Niklas
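
For background on the fallback path mentioned in the cover letter, the
availability check and syscall fallback could be sketched as below. The
s390_pci_mmio_read() syscall is documented; the HWCAP bit value and the
MIO wrapper are assumptions for illustration only:

#include <stdint.h>
#include <sys/auxv.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef HWCAP_S390_PCI_MIO
#define HWCAP_S390_PCI_MIO (1UL << 21)   /* assumed bit, see asm/elf.h */
#endif

/* assumed inline-asm wrapper around the PCI Load (MIO) instruction */
extern uint32_t s390x_mio_load_32(volatile void *addr);

/* are the MIO instructions usable from userspace? (z15 and newer) */
static int have_pci_mio(void)
{
    return (getauxval(AT_HWCAP) & HWCAP_S390_PCI_MIO) != 0;
}

static uint32_t sketch_pci_read32(volatile void *mmio_addr)
{
    uint32_t val;

    if (have_pci_mio()) {
        val = s390x_mio_load_32(mmio_addr);
    } else {
        /* older machines: s390-specific syscall on the mmap'd address */
        syscall(__NR_s390_pci_mmio_read, (unsigned long)mmio_addr,
                &val, sizeof(val));
    }
    return val;
}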
^ permalink raw reply [flat|nested] 18+ messages in thread
Thread overview: 18+ messages
2025-04-01 17:22 [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
2025-04-01 17:22 ` [PATCH v3 1/3] util: Add functions for s390x mmio read/write Farhan Ali
2025-04-01 17:22 ` [PATCH v3 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
2025-04-02 14:09 ` Stefan Hajnoczi
2025-04-01 17:22 ` [PATCH v3 3/3] block/nvme: Use host PCI MMIO API Farhan Ali
2025-04-02 15:51 ` [PATCH v3 0/3] Enable QEMU NVMe userspace driver on s390x Stefan Hajnoczi
2025-04-03 7:47 ` Niklas Schnelle
2025-04-03 15:44 ` Stefan Hajnoczi
2025-04-03 16:27 ` Alex Williamson
2025-04-03 17:33 ` Farhan Ali
2025-04-03 18:05 ` Alex Williamson
2025-04-03 20:33 ` Farhan Ali
2025-04-03 21:24 ` Alex Williamson
2025-04-10 16:07 ` Farhan Ali
2025-04-11 22:28 ` Alex Williamson
2025-04-11 23:28 ` Farhan Ali
2025-04-15 7:28 ` Niklas Schnelle
2025-04-04 7:05 ` Cédric Le Goater