* [PATCH v4 0/3] Enable QEMU NVMe userspace driver on s390x
@ 2025-04-14 21:36 Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 1/3] util: Add functions for s390x mmio read/write Farhan Ali
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-14 21:36 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-s390x, alifm, mjrosato, schnelle, stefanha, philmd, kwolf,
	hreitz, thuth, fam, alex.williamson

Hi,

Recently on s390x we have enabled mmap support for vfio-pci devices [1].
This allows us to take advantage of that support and use userspace drivers
on s390x. However, s390x has special instructions for MMIO access. Starting
with z15 (and newer platforms) there are new PCI Memory I/O (MIO)
instructions which operate on virtually mapped PCI memory spaces and can be
used from userspace. On older platforms we fall back to the existing system
calls for MMIO access.

This patch series introduces support for the PCI MIO instructions and
enables the userspace NVMe driver on s390x. I would appreciate any
review/feedback on the patches.
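
As an example of the resulting call pattern (a minimal sketch, assuming the
series is applied; regs is the mmap'ed NvmeBar of the device's BAR 0, as in
patch 3):

    #include "qemu/host-pci-mmio.h"

    uint32_t ver = host_pci_ldl_le_p(&regs->vs);   /* LE register -> host endianness */
    uint32_t cc  = host_pci_ldl_le_p(&regs->cc);
    host_pci_stl_le_p(&regs->cc, cc & 0xFE);       /* host value -> LE register */

On s390x these helpers use the MIO instructions when the hardware supports
them (falling back to the s390_pci_mmio_read/write syscalls otherwise); on
all other hosts they compile down to ordinary little-endian loads and stores.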

Thanks
Farhan

[1] https://lore.kernel.org/linux-s390/20250226-vfio_pci_mmap-v7-0-c5c0f1d26efd@linux.ibm.com/

ChangeLog
---------
v3 series https://lore.kernel.org/qemu-devel/20250401172246.2688-1-alifm@linux.ibm.com/
v3 -> v4
    - Use generic ld/st functions for non-s390x PCI access, as suggested by Alex (patch 2).
    - Removed R-b for patch 2, as the host PCI MMIO access API changed for non-s390x.
      Would appreciate review on this again.

v2 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06847.html
v2 -> v3
    - Update the PCI MMIO APIs to reflect that they perform PCI MMIO accesses on the
      host, as suggested by Stefan (patch 2)
    - Move s390x ifdef check to s390x_pci_mmio.h as suggested by Philippe (patch 1)
    - Add R-bs for the respective patches.

v1 series https://mail.gnu.org/archive/html/qemu-devel/2025-03/msg06596.html
v1 -> v2
    - Add 8 and 16 bit reads/writes for completeness (patch 1)
    - Introduce new QEMU PCI MMIO read/write API as suggested by Stefan (patch 2)
    - Update NVMe userspace driver to use QEMU PCI MMIO functions (patch 3)

Farhan Ali (3):
  util: Add functions for s390x mmio read/write
  include: Add a header to define host PCI MMIO functions
  block/nvme: Use host PCI MMIO API

 block/nvme.c                  |  41 +++++-----
 include/qemu/host-pci-mmio.h  | 141 ++++++++++++++++++++++++++++++++
 include/qemu/s390x_pci_mmio.h |  24 ++++++
 util/meson.build              |   2 +
 util/s390x_pci_mmio.c         | 148 ++++++++++++++++++++++++++++++++++
 5 files changed, 338 insertions(+), 18 deletions(-)
 create mode 100644 include/qemu/host-pci-mmio.h
 create mode 100644 include/qemu/s390x_pci_mmio.h
 create mode 100644 util/s390x_pci_mmio.c

-- 
2.43.0




* [PATCH v4 1/3] util: Add functions for s390x mmio read/write
  2025-04-14 21:36 [PATCH v4 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
@ 2025-04-14 21:36 ` Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 3/3] block/nvme: Use host PCI MMIO API Farhan Ali
  2 siblings, 0 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-14 21:36 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-s390x, alifm, mjrosato, schnelle, stefanha, philmd, kwolf,
	hreitz, thuth, fam, alex.williamson

Starting with z15 (or newer) we can execute MMIO
instructions from userspace. On older platforms,
where these instructions are not available, we
fall back to using system calls to access the
mapped PCI resources.

This patch adds helper functions for MMIO reads
and writes on s390x.
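
For example, a userspace driver that has mmap'ed a PCI BAR (e.g. through
vfio-pci) could read a 32-bit register with (illustrative snippet only;
bar is a hypothetical uint8_t * to the mapped region):

    uint32_t val = s390x_pci_mmio_read_32(bar + offset);

This executes PCILGI when HWCAP_S390_PCI_MIO is advertised in the hwcaps
and falls back to the s390_pci_mmio_read syscall otherwise.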

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
 include/qemu/s390x_pci_mmio.h |  24 ++++++
 util/meson.build              |   2 +
 util/s390x_pci_mmio.c         | 148 ++++++++++++++++++++++++++++++++++
 3 files changed, 174 insertions(+)
 create mode 100644 include/qemu/s390x_pci_mmio.h
 create mode 100644 util/s390x_pci_mmio.c

diff --git a/include/qemu/s390x_pci_mmio.h b/include/qemu/s390x_pci_mmio.h
new file mode 100644
index 0000000000..c5f63ecefa
--- /dev/null
+++ b/include/qemu/s390x_pci_mmio.h
@@ -0,0 +1,24 @@
+/*
+ * s390x PCI MMIO definitions
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#ifndef S390X_PCI_MMIO_H
+#define S390X_PCI_MMIO_H
+
+#ifdef __s390x__
+uint8_t s390x_pci_mmio_read_8(const void *ioaddr);
+uint16_t s390x_pci_mmio_read_16(const void *ioaddr);
+uint32_t s390x_pci_mmio_read_32(const void *ioaddr);
+uint64_t s390x_pci_mmio_read_64(const void *ioaddr);
+
+void s390x_pci_mmio_write_8(void *ioaddr, uint8_t val);
+void s390x_pci_mmio_write_16(void *ioaddr, uint16_t val);
+void s390x_pci_mmio_write_32(void *ioaddr, uint32_t val);
+void s390x_pci_mmio_write_64(void *ioaddr, uint64_t val);
+#endif /* __s390x__ */
+
+#endif /* S390X_PCI_MMIO_H */
diff --git a/util/meson.build b/util/meson.build
index 780b5977a8..acb21592f9 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -131,4 +131,6 @@ elif cpu in ['ppc', 'ppc64']
   util_ss.add(files('cpuinfo-ppc.c'))
 elif cpu in ['riscv32', 'riscv64']
   util_ss.add(files('cpuinfo-riscv.c'))
+elif cpu == 's390x'
+  util_ss.add(files('s390x_pci_mmio.c'))
 endif
diff --git a/util/s390x_pci_mmio.c b/util/s390x_pci_mmio.c
new file mode 100644
index 0000000000..820458a026
--- /dev/null
+++ b/util/s390x_pci_mmio.c
@@ -0,0 +1,148 @@
+/*
+ * s390x PCI MMIO definitions
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include <unistd.h>
+#include <sys/syscall.h>
+#include "qemu/s390x_pci_mmio.h"
+#include "elf.h"
+
+union register_pair {
+    unsigned __int128 pair;
+    struct {
+        uint64_t even;
+        uint64_t odd;
+    };
+};
+
+static bool is_mio_supported;
+
+static __attribute__((constructor)) void check_is_mio_supported(void)
+{
+    is_mio_supported = !!(qemu_getauxval(AT_HWCAP) & HWCAP_S390_PCI_MIO);
+}
+
+static uint64_t s390x_pcilgi(const void *ioaddr, size_t len)
+{
+    union register_pair ioaddr_len = { .even = (uint64_t)ioaddr,
+                                       .odd = len };
+    uint64_t val;
+    int cc;
+
+    asm volatile(
+        /* pcilgi */
+        ".insn   rre,0xb9d60000,%[val],%[ioaddr_len]\n"
+        "ipm     %[cc]\n"
+        "srl     %[cc],28\n"
+        : [cc] "=d"(cc), [val] "=d"(val),
+        [ioaddr_len] "+&d"(ioaddr_len.pair) :: "cc");
+
+    if (cc) {
+        val = -1ULL;
+    }
+
+    return val;
+}
+
+static void s390x_pcistgi(void *ioaddr, uint64_t val, size_t len)
+{
+    union register_pair ioaddr_len = {.even = (uint64_t)ioaddr, .odd = len};
+
+    asm volatile (
+        /* pcistgi */
+        ".insn   rre,0xb9d40000,%[val],%[ioaddr_len]\n"
+        : [ioaddr_len] "+&d" (ioaddr_len.pair)
+        : [val] "d" (val)
+        : "cc", "memory");
+}
+
+uint8_t s390x_pci_mmio_read_8(const void *ioaddr)
+{
+    uint8_t val = 0;
+
+    if (is_mio_supported) {
+        val = s390x_pcilgi(ioaddr, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+    }
+    return val;
+}
+
+uint16_t s390x_pci_mmio_read_16(const void *ioaddr)
+{
+    uint16_t val = 0;
+
+    if (is_mio_supported) {
+        val = s390x_pcilgi(ioaddr, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+    }
+    return val;
+}
+
+uint32_t s390x_pci_mmio_read_32(const void *ioaddr)
+{
+    uint32_t val = 0;
+
+    if (is_mio_supported) {
+        val = s390x_pcilgi(ioaddr, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+    }
+    return val;
+}
+
+uint64_t s390x_pci_mmio_read_64(const void *ioaddr)
+{
+    uint64_t val = 0;
+
+    if (is_mio_supported) {
+        val = s390x_pcilgi(ioaddr, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_read, ioaddr, &val, sizeof(val));
+    }
+    return val;
+}
+
+void s390x_pci_mmio_write_8(void *ioaddr, uint8_t val)
+{
+    if (is_mio_supported) {
+        s390x_pcistgi(ioaddr, val, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+    }
+}
+
+void s390x_pci_mmio_write_16(void *ioaddr, uint16_t val)
+{
+    if (is_mio_supported) {
+        s390x_pcistgi(ioaddr, val, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+    }
+}
+
+void s390x_pci_mmio_write_32(void *ioaddr, uint32_t val)
+{
+    if (is_mio_supported) {
+        s390x_pcistgi(ioaddr, val, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+    }
+}
+
+void s390x_pci_mmio_write_64(void *ioaddr, uint64_t val)
+{
+    if (is_mio_supported) {
+        s390x_pcistgi(ioaddr, val, sizeof(val));
+    } else {
+        syscall(__NR_s390_pci_mmio_write, ioaddr, &val, sizeof(val));
+    }
+}
+
-- 
2.43.0




* [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions
  2025-04-14 21:36 [PATCH v4 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 1/3] util: Add functions for s390x mmio read/write Farhan Ali
@ 2025-04-14 21:36 ` Farhan Ali
  2025-04-14 22:07   ` Farhan Ali
  2025-04-15  6:43   ` Philippe Mathieu-Daudé
  2025-04-14 21:36 ` [PATCH v4 3/3] block/nvme: Use host PCI MMIO API Farhan Ali
  2 siblings, 2 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-14 21:36 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-s390x, alifm, mjrosato, schnelle, stefanha, philmd, kwolf,
	hreitz, thuth, fam, alex.williamson

Add a generic API for host PCI MMIO reads/writes
(e.g. Linux VFIO BAR accesses). The functions access
little-endian memory and return the result in
host CPU endianness.
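
For illustration, a caller with a little-endian device register mapped at
ioaddr might do (hypothetical snippet, not part of the diff below):

    uint32_t status = host_pci_ldl_le_p(ioaddr);    /* returned in host endianness */
    host_pci_stl_le_p(ioaddr, status | 0x1);        /* stored as little endian */

On s390x this routes through the s390x_pci_mmio_* helpers added in the
previous patch; on other hosts it uses the existing ld*_le_p/st*_le_p
helpers from bswap.h.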

Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
 include/qemu/host-pci-mmio.h | 141 +++++++++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)
 create mode 100644 include/qemu/host-pci-mmio.h

diff --git a/include/qemu/host-pci-mmio.h b/include/qemu/host-pci-mmio.h
new file mode 100644
index 0000000000..de17d67e3a
--- /dev/null
+++ b/include/qemu/host-pci-mmio.h
@@ -0,0 +1,141 @@
+/*
+ * API for host PCI MMIO accesses (e.g. Linux VFIO BARs)
+ *
+ * Copyright 2025 IBM Corp.
+ * Author(s): Farhan Ali <alifm@linux.ibm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef HOST_PCI_MMIO_H
+#define HOST_PCI_MMIO_H
+
+#include "qemu/bswap.h"
+#include "qemu/s390x_pci_mmio.h"
+
+
+static inline uint8_t host_pci_ldub_p(const void *ioaddr)
+{
+    uint8_t ret = 0;
+#ifdef __s390x__
+    ret = s390x_pci_mmio_read_8(ioaddr);
+#else
+    ret = ldub_he_p(ioaddr);
+#endif
+
+    return ret;
+}
+
+static inline uint16_t host_pci_lduw_le_p(const void *ioaddr)
+{
+    uint16_t ret = 0;
+#ifdef __s390x__
+    ret = le16_to_cpu(s390x_pci_mmio_read_16(ioaddr));
+#else
+    ret = lduw_le_p(ioaddr);
+#endif
+
+    return ret;
+}
+
+static inline uint32_t host_pci_ldl_le_p(const void *ioaddr)
+{
+    uint32_t ret = 0;
+#ifdef __s390x__
+    ret = le32_to_cpu(s390x_pci_mmio_read_32(ioaddr));
+#else
+    ret = (uint32_t)ldl_le_p(ioaddr);
+#endif
+
+    return ret;
+}
+
+static inline uint64_t host_pci_ldq_le_p(const void *ioaddr)
+{
+    uint64_t ret = 0;
+#ifdef __s390x__
+    ret = le64_to_cpu(s390x_pci_mmio_read_64(ioaddr));
+#else
+    ret = ldq_le_p(ioaddr);
+#endif
+
+    return ret;
+}
+
+static inline void host_pci_stb_le_p(void *ioaddr, uint8_t val)
+{
+
+#ifdef __s390x__
+    s390x_pci_mmio_write_8(ioaddr, val);
+#else
+    stb_p(ioaddr, val);
+#endif
+}
+
+static inline void host_pci_stw_le_p(void *ioaddr, uint16_t val)
+{
+
+#ifdef __s390x__
+    s390x_pci_mmio_write_16(ioaddr, cpu_to_le16(val));
+#else
+    stw_le_p(ioaddr, val);
+#endif
+}
+
+static inline void host_pci_stl_le_p(void *ioaddr, uint32_t val)
+{
+
+#ifdef __s390x__
+    s390x_pci_mmio_write_32(ioaddr, cpu_to_le32(val));
+#else
+    stl_le_p(ioaddr, val);
+#endif
+}
+
+static inline void host_pci_stq_le_p(void *ioaddr, uint64_t val)
+{
+
+#ifdef __s390x__
+    s390x_pci_mmio_write_64(ioaddr, cpu_to_le64(val));
+#else
+    stq_le_p(ioaddr, val);
+#endif
+}
+
+static inline uint64_t host_pci_ldn_le_p(const void *ioaddr, int sz)
+{
+    switch (sz) {
+    case 1:
+        return host_pci_ldub_p(ioaddr);
+    case 2:
+        return host_pci_lduw_le_p(ioaddr);
+    case 4:
+        return host_pci_ldl_le_p(ioaddr);
+    case 8:
+        return host_pci_ldq_le_p(ioaddr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static inline void host_pci_stn_le_p(void *ioaddr, int sz, uint64_t v)
+{
+    switch (sz) {
+    case 1:
+        host_pci_stb_le_p(ioaddr, v);
+        break;
+    case 2:
+        host_pci_stw_le_p(ioaddr, v);
+        break;
+    case 4:
+        host_pci_stl_le_p(ioaddr, v);
+        break;
+    case 8:
+        host_pci_stq_le_p(ioaddr, v);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+#endif
-- 
2.43.0




* [PATCH v4 3/3] block/nvme: Use host PCI MMIO API
  2025-04-14 21:36 [PATCH v4 0/3] Enable QEMU NVMe userspace driver on s390x Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 1/3] util: Add functions for s390x mmio read/write Farhan Ali
  2025-04-14 21:36 ` [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
@ 2025-04-14 21:36 ` Farhan Ali
  2 siblings, 0 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-14 21:36 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-s390x, alifm, mjrosato, schnelle, stefanha, philmd, kwolf,
	hreitz, thuth, fam, alex.williamson

Use the host PCI MMIO functions to read and write
NVMe registers, rather than accessing them
directly.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
---
 block/nvme.c | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index bbf7c23dcd..8df53ee4ca 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -18,6 +18,7 @@
 #include "qobject/qstring.h"
 #include "qemu/defer-call.h"
 #include "qemu/error-report.h"
+#include "qemu/host-pci-mmio.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 #include "qemu/cutils.h"
@@ -60,7 +61,7 @@ typedef struct {
     uint8_t  *queue;
     uint64_t iova;
     /* Hardware MMIO register */
-    volatile uint32_t *doorbell;
+    uint32_t *doorbell;
 } NVMeQueue;
 
 typedef struct {
@@ -100,7 +101,7 @@ struct BDRVNVMeState {
     QEMUVFIOState *vfio;
     void *bar0_wo_map;
     /* Memory mapped registers */
-    volatile struct {
+    struct {
         uint32_t sq_tail;
         uint32_t cq_head;
     } *doorbells;
@@ -292,7 +293,7 @@ static void nvme_kick(NVMeQueuePair *q)
     assert(!(q->sq.tail & 0xFF00));
     /* Fence the write to submission queue entry before notifying the device. */
     smp_wmb();
-    *q->sq.doorbell = cpu_to_le32(q->sq.tail);
+    host_pci_stl_le_p(q->sq.doorbell, q->sq.tail);
     q->inflight += q->need_kick;
     q->need_kick = 0;
 }
@@ -441,7 +442,7 @@ static bool nvme_process_completion(NVMeQueuePair *q)
     if (progress) {
         /* Notify the device so it can post more completions. */
         smp_mb_release();
-        *q->cq.doorbell = cpu_to_le32(q->cq.head);
+        host_pci_stl_le_p(q->cq.doorbell, q->cq.head);
         nvme_wake_free_req_locked(q);
     }
 
@@ -460,7 +461,7 @@ static void nvme_process_completion_bh(void *opaque)
      * so notify the device that it has space to fill in more completions now.
      */
     smp_mb_release();
-    *q->cq.doorbell = cpu_to_le32(q->cq.head);
+    host_pci_stl_le_p(q->cq.doorbell, q->cq.head);
     nvme_wake_free_req_locked(q);
 
     nvme_process_completion(q);
@@ -749,9 +750,10 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     int ret;
     uint64_t cap;
     uint32_t ver;
+    uint32_t cc;
     uint64_t timeout_ms;
     uint64_t deadline, now;
-    volatile NvmeBar *regs = NULL;
+    NvmeBar *regs = NULL;
 
     qemu_co_mutex_init(&s->dma_map_lock);
     qemu_co_queue_init(&s->dma_flush_queue);
@@ -779,7 +781,7 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     /* Perform initialize sequence as described in NVMe spec "7.6.1
      * Initialization". */
 
-    cap = le64_to_cpu(regs->cap);
+    cap = host_pci_ldq_le_p(&regs->cap);
     trace_nvme_controller_capability_raw(cap);
     trace_nvme_controller_capability("Maximum Queue Entries Supported",
                                      1 + NVME_CAP_MQES(cap));
@@ -805,16 +807,17 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     bs->bl.request_alignment = s->page_size;
     timeout_ms = MIN(500 * NVME_CAP_TO(cap), 30000);
 
-    ver = le32_to_cpu(regs->vs);
+    ver = host_pci_ldl_le_p(&regs->vs);
     trace_nvme_controller_spec_version(extract32(ver, 16, 16),
                                        extract32(ver, 8, 8),
                                        extract32(ver, 0, 8));
 
     /* Reset device to get a clean state. */
-    regs->cc = cpu_to_le32(le32_to_cpu(regs->cc) & 0xFE);
+    cc = host_pci_ldl_le_p(&regs->cc);
+    host_pci_stl_le_p(&regs->cc, cc & 0xFE);
     /* Wait for CSTS.RDY = 0. */
     deadline = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timeout_ms * SCALE_MS;
-    while (NVME_CSTS_RDY(le32_to_cpu(regs->csts))) {
+    while (NVME_CSTS_RDY(host_pci_ldl_le_p(&regs->csts))) {
         if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) > deadline) {
             error_setg(errp, "Timeout while waiting for device to reset (%"
                              PRId64 " ms)",
@@ -843,19 +846,21 @@ static int nvme_init(BlockDriverState *bs, const char *device, int namespace,
     s->queues[INDEX_ADMIN] = q;
     s->queue_count = 1;
     QEMU_BUILD_BUG_ON((NVME_QUEUE_SIZE - 1) & 0xF000);
-    regs->aqa = cpu_to_le32(((NVME_QUEUE_SIZE - 1) << AQA_ACQS_SHIFT) |
-                            ((NVME_QUEUE_SIZE - 1) << AQA_ASQS_SHIFT));
-    regs->asq = cpu_to_le64(q->sq.iova);
-    regs->acq = cpu_to_le64(q->cq.iova);
+    host_pci_stl_le_p(&regs->aqa,
+                        ((NVME_QUEUE_SIZE - 1) << AQA_ACQS_SHIFT) |
+                        ((NVME_QUEUE_SIZE - 1) << AQA_ASQS_SHIFT));
+    host_pci_stq_le_p(&regs->asq, q->sq.iova);
+    host_pci_stq_le_p(&regs->acq, q->cq.iova);
 
     /* After setting up all control registers we can enable device now. */
-    regs->cc = cpu_to_le32((ctz32(NVME_CQ_ENTRY_BYTES) << CC_IOCQES_SHIFT) |
-                           (ctz32(NVME_SQ_ENTRY_BYTES) << CC_IOSQES_SHIFT) |
-                           CC_EN_MASK);
+    host_pci_stl_le_p(&regs->cc,
+                      (ctz32(NVME_CQ_ENTRY_BYTES) << CC_IOCQES_SHIFT) |
+                      (ctz32(NVME_SQ_ENTRY_BYTES) << CC_IOSQES_SHIFT) |
+                      CC_EN_MASK);
     /* Wait for CSTS.RDY = 1. */
     now = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     deadline = now + timeout_ms * SCALE_MS;
-    while (!NVME_CSTS_RDY(le32_to_cpu(regs->csts))) {
+    while (!NVME_CSTS_RDY(host_pci_ldl_le_p(&regs->csts))) {
         if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) > deadline) {
             error_setg(errp, "Timeout while waiting for device to start (%"
                              PRId64 " ms)",
-- 
2.43.0




* Re: [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions
  2025-04-14 21:36 ` [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
@ 2025-04-14 22:07   ` Farhan Ali
  2025-04-15  6:43   ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-14 22:07 UTC (permalink / raw)
  To: qemu-devel
  Cc: qemu-s390x, mjrosato, schnelle, stefanha, philmd, kwolf, hreitz,
	thuth, fam, alex.williamson


On 4/14/2025 2:36 PM, Farhan Ali wrote:
> Add a generic API for host PCI MMIO reads/writes
> (e.g. Linux VFIO BAR accesses). The functions access
> little endian memory and returns the result in
> host cpu endianness.
>
> Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
> ---
>   include/qemu/host-pci-mmio.h | 141 +++++++++++++++++++++++++++++++++++
>   1 file changed, 141 insertions(+)
>   create mode 100644 include/qemu/host-pci-mmio.h
>
> diff --git a/include/qemu/host-pci-mmio.h b/include/qemu/host-pci-mmio.h
> new file mode 100644
> index 0000000000..de17d67e3a
> --- /dev/null
> +++ b/include/qemu/host-pci-mmio.h
> @@ -0,0 +1,141 @@
> +/*
> + * API for host PCI MMIO accesses (e.g. Linux VFIO BARs)
> + *
> + * Copyright 2025 IBM Corp.
> + * Author(s): Farhan Ali <alifm@linux.ibm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef HOST_PCI_MMIO_H
> +#define HOST_PCI_MMIO_H
> +
> +#include "qemu/bswap.h"
> +#include "qemu/s390x_pci_mmio.h"
> +
> +
> +static inline uint8_t host_pci_ldub_p(const void *ioaddr)
> +{
> +    uint8_t ret = 0;
> +#ifdef __s390x__
> +    ret = s390x_pci_mmio_read_8(ioaddr);
> +#else

Noticed a typo below: it should be ldub_p instead of ldub_he_p (it did not
cause any compiler errors). I will fix this, but will wait for other
feedback before sending a new revision.

> +    ret = ldub_he_p(ioaddr);
> +#endif
> +
> +    return ret;
> +}
> +
> +static inline uint16_t host_pci_lduw_le_p(const void *ioaddr)
> +{
> +    uint16_t ret = 0;
> +#ifdef __s390x__
> +    ret = le16_to_cpu(s390x_pci_mmio_read_16(ioaddr));
> +#else
> +    ret = lduw_le_p(ioaddr);
> +#endif
> +
> +    return ret;
> +}
> +
> +static inline uint32_t host_pci_ldl_le_p(const void *ioaddr)
> +{
> +    uint32_t ret = 0;
> +#ifdef __s390x__
> +    ret = le32_to_cpu(s390x_pci_mmio_read_32(ioaddr));
> +#else
> +    ret = (uint32_t)ldl_le_p(ioaddr);
> +#endif
> +
> +    return ret;
> +}
> +
> +static inline uint64_t host_pci_ldq_le_p(const void *ioaddr)
> +{
> +    uint64_t ret = 0;
> +#ifdef __s390x__
> +    ret = le64_to_cpu(s390x_pci_mmio_read_64(ioaddr));
> +#else
> +    ret = ldq_le_p(ioaddr);
> +#endif
> +
> +    return ret;
> +}
> +
> +static inline void host_pci_stb_le_p(void *ioaddr, uint8_t val)
> +{
> +
> +#ifdef __s390x__
> +    s390x_pci_mmio_write_8(ioaddr, val);
> +#else
> +    stb_p(ioaddr, val);
> +#endif
> +}
> +
> +static inline void host_pci_stw_le_p(void *ioaddr, uint16_t val)
> +{
> +
> +#ifdef __s390x__
> +    s390x_pci_mmio_write_16(ioaddr, cpu_to_le16(val));
> +#else
> +    stw_le_p(ioaddr, val);
> +#endif
> +}
> +
> +static inline void host_pci_stl_le_p(void *ioaddr, uint32_t val)
> +{
> +
> +#ifdef __s390x__
> +    s390x_pci_mmio_write_32(ioaddr, cpu_to_le32(val));
> +#else
> +    stl_le_p(ioaddr, val);
> +#endif
> +}
> +
> +static inline void host_pci_stq_le_p(void *ioaddr, uint64_t val)
> +{
> +
> +#ifdef __s390x__
> +    s390x_pci_mmio_write_64(ioaddr, cpu_to_le64(val));
> +#else
> +    stq_le_p(ioaddr, val);
> +#endif
> +}
> +
> +static inline uint64_t host_pci_ldn_le_p(const void *ioaddr, int sz)
> +{
> +    switch (sz) {
> +    case 1:
> +        return host_pci_ldub_p(ioaddr);
> +    case 2:
> +        return host_pci_lduw_le_p(ioaddr);
> +    case 4:
> +        return host_pci_ldl_le_p(ioaddr);
> +    case 8:
> +        return host_pci_ldq_le_p(ioaddr);
> +    default:
> +        g_assert_not_reached();
> +    }
> +}
> +
> +static inline void host_pci_stn_le_p(void *ioaddr, int sz, uint64_t v)
> +{
> +    switch (sz) {
> +    case 1:
> +        host_pci_stb_le_p(ioaddr, v);
> +        break;
> +    case 2:
> +        host_pci_stw_le_p(ioaddr, v);
> +        break;
> +    case 4:
> +        host_pci_stl_le_p(ioaddr, v);
> +        break;
> +    case 8:
> +        host_pci_stq_le_p(ioaddr, v);
> +        break;
> +    default:
> +        g_assert_not_reached();
> +    }
> +}
> +
> +#endif



* Re: [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions
  2025-04-14 21:36 ` [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions Farhan Ali
  2025-04-14 22:07   ` Farhan Ali
@ 2025-04-15  6:43   ` Philippe Mathieu-Daudé
  2025-04-15 14:19     ` Richard Henderson
  2025-04-15 16:08     ` Farhan Ali
  1 sibling, 2 replies; 8+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-04-15  6:43 UTC (permalink / raw)
  To: Farhan Ali, qemu-devel
  Cc: qemu-s390x, mjrosato, schnelle, stefanha, kwolf, hreitz, thuth,
	fam, alex.williamson

Hi,

On 14/4/25 23:36, Farhan Ali wrote:
> Add a generic API for host PCI MMIO reads/writes
> (e.g. Linux VFIO BAR accesses). The functions access
> little endian memory and returns the result in
> host cpu endianness.
> 
> Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
> ---
>   include/qemu/host-pci-mmio.h | 141 +++++++++++++++++++++++++++++++++++
>   1 file changed, 141 insertions(+)
>   create mode 100644 include/qemu/host-pci-mmio.h
> 
> diff --git a/include/qemu/host-pci-mmio.h b/include/qemu/host-pci-mmio.h
> new file mode 100644
> index 0000000000..de17d67e3a
> --- /dev/null
> +++ b/include/qemu/host-pci-mmio.h
> @@ -0,0 +1,141 @@
> +/*
> + * API for host PCI MMIO accesses (e.g. Linux VFIO BARs)
> + *
> + * Copyright 2025 IBM Corp.
> + * Author(s): Farhan Ali <alifm@linux.ibm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef HOST_PCI_MMIO_H
> +#define HOST_PCI_MMIO_H
> +
> +#include "qemu/bswap.h"
> +#include "qemu/s390x_pci_mmio.h"
> +
> +
> +static inline uint8_t host_pci_ldub_p(const void *ioaddr)

Is it really worth inlining?

> +{
> +    uint8_t ret = 0;
> +#ifdef __s390x__
> +    ret = s390x_pci_mmio_read_8(ioaddr);
> +#else
> +    ret = ldub_he_p(ioaddr);
> +#endif
> +
> +    return ret;
> +}




* Re: [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions
  2025-04-15  6:43   ` Philippe Mathieu-Daudé
@ 2025-04-15 14:19     ` Richard Henderson
  2025-04-15 16:08     ` Farhan Ali
  1 sibling, 0 replies; 8+ messages in thread
From: Richard Henderson @ 2025-04-15 14:19 UTC (permalink / raw)
  To: qemu-devel

On 4/14/25 23:43, Philippe Mathieu-Daudé wrote:
>> +static inline uint8_t host_pci_ldub_p(const void *ioaddr)
> 
> Is it really worth inlining?
> 
>> +{
>> +    uint8_t ret = 0;
>> +#ifdef __s390x__
>> +    ret = s390x_pci_mmio_read_8(ioaddr);
>> +#else
>> +    ret = ldub_he_p(ioaddr);
>> +#endif

Since the non-s390x path is a single byte load, yes, it probably is worth inlining.


r~




* Re: [PATCH v4 2/3] include: Add a header to define host PCI MMIO functions
  2025-04-15  6:43   ` Philippe Mathieu-Daudé
  2025-04-15 14:19     ` Richard Henderson
@ 2025-04-15 16:08     ` Farhan Ali
  1 sibling, 0 replies; 8+ messages in thread
From: Farhan Ali @ 2025-04-15 16:08 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: qemu-s390x, mjrosato, schnelle, stefanha, kwolf, hreitz, thuth,
	fam, alex.williamson


On 4/14/2025 11:43 PM, Philippe Mathieu-Daudé wrote:
> Hi,
>
> On 14/4/25 23:36, Farhan Ali wrote:
>> Add a generic API for host PCI MMIO reads/writes
>> (e.g. Linux VFIO BAR accesses). The functions access
>> little endian memory and returns the result in
>> host cpu endianness.
>>
>> Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
>> ---
>>   include/qemu/host-pci-mmio.h | 141 +++++++++++++++++++++++++++++++++++
>>   1 file changed, 141 insertions(+)
>>   create mode 100644 include/qemu/host-pci-mmio.h
>>
>> diff --git a/include/qemu/host-pci-mmio.h b/include/qemu/host-pci-mmio.h
>> new file mode 100644
>> index 0000000000..de17d67e3a
>> --- /dev/null
>> +++ b/include/qemu/host-pci-mmio.h
>> @@ -0,0 +1,141 @@
>> +/*
>> + * API for host PCI MMIO accesses (e.g. Linux VFIO BARs)
>> + *
>> + * Copyright 2025 IBM Corp.
>> + * Author(s): Farhan Ali <alifm@linux.ibm.com>
>> + *
>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> + */
>> +
>> +#ifndef HOST_PCI_MMIO_H
>> +#define HOST_PCI_MMIO_H
>> +
>> +#include "qemu/bswap.h"
>> +#include "qemu/s390x_pci_mmio.h"
>> +
>> +
>> +static inline uint8_t host_pci_ldub_p(const void *ioaddr)
>
> Is it really worth inlining?
>
Hi Philippe,

I think so; we inline the generic ld/st helper functions in bswap.h.
Curious, why do you think it's not necessary?

Thanks

Farhan

>> +{
>> +    uint8_t ret = 0;
>> +#ifdef __s390x__
>> +    ret = s390x_pci_mmio_read_8(ioaddr);
>> +#else
>> +    ret = ldub_he_p(ioaddr);
>> +#endif
>> +
>> +    return ret;
>> +}
>
>


