* [PULL 0/4] Block layer patches
@ 2023-11-28 14:09 Kevin Wolf
  2023-11-28 14:09 ` [PULL 1/4] iotests: fix default machine type detection Kevin Wolf
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Kevin Wolf @ 2023-11-28 14:09 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

The following changes since commit e867b01cd6658a64c16052117dbb18093a2f9772:

  Merge tag 'qga-pull-2023-11-25' of https://github.com/kostyanf14/qemu into staging (2023-11-27 08:59:00 -0500)

are available in the Git repository at:

  https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 6e081324facf9aeece9c286774bab5af3b8d6099:

  ide/via: Fix BAR4 value in legacy mode (2023-11-28 14:56:32 +0100)

----------------------------------------------------------------
Block layer patches

- ide/via: Fix BAR4 value in legacy mode
- export/vhost-user-blk: Fix consecutive drains
- vmdk: Don't corrupt desc file in vmdk_write_cid
- iotests: fix default machine type detection

----------------------------------------------------------------
Andrey Drobyshev (1):
      iotests: fix default machine type detection

BALATON Zoltan (1):
      ide/via: Fix BAR4 value in legacy mode

Fam Zheng (1):
      vmdk: Don't corrupt desc file in vmdk_write_cid

Kevin Wolf (1):
      export/vhost-user-blk: Fix consecutive drains

 include/qemu/vhost-user-server.h     |  1 +
 block/export/vhost-user-blk-server.c |  9 +++++++--
 block/vmdk.c                         | 28 ++++++++++++++++++--------
 hw/ide/via.c                         | 17 ++++++++++------
 util/vhost-user-server.c             | 39 ++++++++++++++++++++++++++++--------
 tests/qemu-iotests/testenv.py        |  2 +-
 tests/qemu-iotests/059               |  2 ++
 tests/qemu-iotests/059.out           |  4 ++++
 8 files changed, 77 insertions(+), 25 deletions(-)




* [PULL 1/4] iotests: fix default machine type detection
  2023-11-28 14:09 [PULL 0/4] Block layer patches Kevin Wolf
@ 2023-11-28 14:09 ` Kevin Wolf
  2023-11-28 14:09 ` [PULL 2/4] vmdk: Don't corrupt desc file in vmdk_write_cid Kevin Wolf
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Kevin Wolf @ 2023-11-28 14:09 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

From: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>

The machine type is detected based on the "-M help" output by searching
for the line ending with " (default)".  However, in a downstream build
one of the machine types marked as deprecated might become the default,
in which case this logic breaks because the line would then end with
" (default) (deprecated)".  To avoid such issues, relax the requirement
and detect the mere presence of " (default)" anywhere in the line
instead.
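
To illustrate, here is a minimal standalone sketch of the two matching
strategies (plain Python; the "-M help" lines are made up to model a
downstream build where the default machine is also deprecated):

    # Hypothetical "-M help" output; machine names are illustrative only.
    machines = [
        "none                 empty machine",
        "pc-i440fx-8.2        Standard PC (i440FX + PIIX, 1996) (default) (deprecated)",
        "q35                  Standard PC (Q35 + ICH9, 2009)",
    ]

    # Old logic: misses the default because the line no longer ends
    # with " (default)".
    assert next((m for m in machines if m.endswith(' (default)')), None) is None

    # New logic: matches " (default)" anywhere in the line.
    default = next(m for m in machines if ' (default)' in m)
    assert default.split(' ', 1)[0] == 'pc-i440fx-8.2'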

Signed-off-by: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>
Message-ID: <20231122121538.32903-1-andrey.drobyshev@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/qemu-iotests/testenv.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index e67ebd254b..3ff38f2661 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -40,7 +40,7 @@ def get_default_machine(qemu_prog: str) -> str:
 
     machines = outp.split('\n')
     try:
-        default_machine = next(m for m in machines if m.endswith(' (default)'))
+        default_machine = next(m for m in machines if ' (default)' in m)
     except StopIteration:
         return ''
     default_machine = default_machine.split(' ', 1)[0]
-- 
2.43.0




* [PULL 2/4] vmdk: Don't corrupt desc file in vmdk_write_cid
  2023-11-28 14:09 [PULL 0/4] Block layer patches Kevin Wolf
  2023-11-28 14:09 ` [PULL 1/4] iotests: fix default machine type detection Kevin Wolf
@ 2023-11-28 14:09 ` Kevin Wolf
  2023-11-28 14:09 ` [PULL 3/4] export/vhost-user-blk: Fix consecutive drains Kevin Wolf
  2023-11-28 14:09 ` [PULL 4/4] ide/via: Fix BAR4 value in legacy mode Kevin Wolf
  3 siblings, 0 replies; 5+ messages in thread
From: Kevin Wolf @ 2023-11-28 14:09 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

From: Fam Zheng <fam@euphon.net>

If the text description file is larger than DESC_SIZE, we force the
last byte in the buffer to be 0 and write out only that truncated
buffer.

This corrupts the description file.

Allocate a buffer big enough for the entire file in this case instead.
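
A minimal standalone model of the buffer sizing this patch introduces
(plain Python, not QEMU code; DESC_SIZE is assumed to be QEMU's
20-sector descriptor area, and the 16 MiB cap matches the patch below):

    DESC_SIZE = 20 * 512      # fixed region used for embedded descriptors
    MAX_DESC_SIZE = 16 << 20  # refuse absurdly large standalone desc files

    def desc_buf_size(desc_offset: int, file_length: int) -> int:
        if desc_offset == 0:
            # Standalone text descriptor: the whole file is the
            # descriptor, so size the buffer from the file length
            # instead of truncating it at DESC_SIZE.
            if file_length > MAX_DESC_SIZE:
                raise OSError("VMDK description file too big")  # -EFBIG
            return file_length
        # Descriptor embedded at an offset inside a sparse extent:
        # the fixed DESC_SIZE region is all there is to rewrite.
        return DESC_SIZE

    assert desc_buf_size(0, 4096) == 4096           # descriptor-only file
    assert desc_buf_size(0x200, 10**6) == DESC_SIZE # embedded descriptor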

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1923

Signed-off-by: Fam Zheng <fam@euphon.net>
Message-ID: <20231124115654.3239137-1-fam@euphon.net>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/vmdk.c               | 28 ++++++++++++++++++++--------
 tests/qemu-iotests/059     |  2 ++
 tests/qemu-iotests/059.out |  4 ++++
 3 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/block/vmdk.c b/block/vmdk.c
index d87f6d9aaa..d6971c7067 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -351,29 +351,41 @@ vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
     BDRVVmdkState *s = bs->opaque;
     int ret = 0;
 
-    desc = g_malloc0(DESC_SIZE);
-    tmp_desc = g_malloc0(DESC_SIZE);
-    ret = bdrv_co_pread(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
+    size_t desc_buf_size;
+
+    if (s->desc_offset == 0) {
+        desc_buf_size = bdrv_getlength(bs->file->bs);
+        if (desc_buf_size > 16ULL << 20) {
+            error_report("VMDK description file too big");
+            return -EFBIG;
+        }
+    } else {
+        desc_buf_size = DESC_SIZE;
+    }
+
+    desc = g_malloc0(desc_buf_size);
+    tmp_desc = g_malloc0(desc_buf_size);
+    ret = bdrv_co_pread(bs->file, s->desc_offset, desc_buf_size, desc, 0);
     if (ret < 0) {
         goto out;
     }
 
-    desc[DESC_SIZE - 1] = '\0';
+    desc[desc_buf_size - 1] = '\0';
     tmp_str = strstr(desc, "parentCID");
     if (tmp_str == NULL) {
         ret = -EINVAL;
         goto out;
     }
 
-    pstrcpy(tmp_desc, DESC_SIZE, tmp_str);
+    pstrcpy(tmp_desc, desc_buf_size, tmp_str);
     p_name = strstr(desc, "CID");
     if (p_name != NULL) {
         p_name += sizeof("CID");
-        snprintf(p_name, DESC_SIZE - (p_name - desc), "%" PRIx32 "\n", cid);
-        pstrcat(desc, DESC_SIZE, tmp_desc);
+        snprintf(p_name, desc_buf_size - (p_name - desc), "%" PRIx32 "\n", cid);
+        pstrcat(desc, desc_buf_size, tmp_desc);
     }
 
-    ret = bdrv_co_pwrite_sync(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
+    ret = bdrv_co_pwrite_sync(bs->file, s->desc_offset, desc_buf_size, desc, 0);
 
 out:
     g_free(desc);
diff --git a/tests/qemu-iotests/059 b/tests/qemu-iotests/059
index 2bcb1f7f9c..0634c7bd92 100755
--- a/tests/qemu-iotests/059
+++ b/tests/qemu-iotests/059
@@ -84,6 +84,8 @@ echo
 echo "=== Testing big twoGbMaxExtentFlat ==="
 _make_test_img -o "subformat=twoGbMaxExtentFlat" 1000G
 _img_info --format-specific | _filter_img_info --format-specific
+$QEMU_IO -c "write 990G 512 -P 89" "$TEST_IMG" | _filter_qemu_io
+$QEMU_IO -c "read 990G 512 -P 89" "$TEST_IMG" | _filter_qemu_io
 _cleanup_test_img
 
 echo
diff --git a/tests/qemu-iotests/059.out b/tests/qemu-iotests/059.out
index 2b83c0c8b6..275ee7c778 100644
--- a/tests/qemu-iotests/059.out
+++ b/tests/qemu-iotests/059.out
@@ -2032,6 +2032,10 @@ Format specific information:
             virtual size: 2147483648
             filename: TEST_DIR/t-f500.IMGFMT
             format: FLAT
+wrote 512/512 bytes at offset 1063004405760
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 512/512 bytes at offset 1063004405760
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
 === Testing malformed VMFS extent description line ===
 qemu-img: Could not open 'TEST_DIR/t.IMGFMT': Invalid extent line: RW 12582912 VMFS "dummy.IMGFMT" 1
-- 
2.43.0




* [PULL 3/4] export/vhost-user-blk: Fix consecutive drains
  2023-11-28 14:09 [PULL 0/4] Block layer patches Kevin Wolf
  2023-11-28 14:09 ` [PULL 1/4] iotests: fix default machine type detection Kevin Wolf
  2023-11-28 14:09 ` [PULL 2/4] vmdk: Don't corrupt desc file in vmdk_write_cid Kevin Wolf
@ 2023-11-28 14:09 ` Kevin Wolf
  2023-11-28 14:09 ` [PULL 4/4] ide/via: Fix BAR4 value in legacy mode Kevin Wolf
  3 siblings, 0 replies; 5+ messages in thread
From: Kevin Wolf @ 2023-11-28 14:09 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

The vhost-user-blk export implements AioContext switches in its drain
implementation. This means that on drain_begin, it detaches the server
from its AioContext and on drain_end, attaches it again and schedules
the server->co_trip coroutine in the updated AioContext.

However, nothing guarantees that server->co_trip is even safe to
schedule. Not only is it unclear whether the coroutine is actually in a
state where it can be reentered externally without causing problems, but
with two consecutive drains it is possible that the scheduled coroutine
hasn't had a chance to run yet, and scheduling an already scheduled
coroutine a second time crashes with an assertion failure.

Following the model of NBD, this commit makes the vhost-user-blk export
shut down server->co_trip during drain so that resuming the export means
creating and scheduling a new coroutine, which is always safe.

There is one exception: If the drain call didn't poll (for example, this
happens in the context of bdrv_graph_wrlock()), then the coroutine
didn't have a chance to shut down. However, in this case the AioContext
can't have changed; changing the AioContext always involves a polling
drain. So in this case we can simply assert that the AioContext is
unchanged and just leave the coroutine running or wake it up if it has
yielded to wait for the AioContext to be attached again.
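
The intended lifecycle can be modeled with a small standalone sketch
(plain Python, no QEMU APIs; the field names mirror the patch, the rest
is illustrative only):

    class Server:
        """Toy model of VuServer's co_trip lifecycle across drains."""
        def __init__(self):
            self.co_trip = object()  # stand-in for a running coroutine
            self.ctx = "ctx0"        # stand-in for the attached AioContext
            self.quiescing = False
            self.in_flight = 0

        def drained_begin(self):
            self.quiescing = True    # ask co_trip to terminate itself

        def drained_poll(self):
            # Drain now also waits for co_trip to shut down, not just
            # for in-flight requests.
            return self.co_trip is not None or self.in_flight > 0

        def co_trip_runs(self):
            if self.quiescing:
                self.co_trip = None  # coroutine terminates (aio_wait_kick)

        def drained_end(self, ctx):
            # Models vu_blk_drained_end() plus the attach logic.
            self.quiescing = False
            if self.co_trip is not None:
                # Non-polling drain: the coroutine never got to run, so
                # the AioContext must be unchanged and it can keep going.
                assert ctx == self.ctx
            else:
                self.co_trip = object()  # create/schedule a fresh coroutine
            self.ctx = ctx

    s = Server()
    s.drained_begin()
    assert s.drained_poll()   # still waiting: co_trip hasn't shut down yet
    s.co_trip_runs()          # a polling drain lets the coroutine finish
    assert not s.drained_poll()
    s.drained_end("ctx1")     # safe even in a new context: fresh coroutine

    s.drained_begin()
    s.drained_end(s.ctx)      # non-polling drain: context unchanged, kept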

Fixes: e1054cd4aad03a493a5d1cded7508f7c348205bf
Fixes: https://issues.redhat.com/browse/RHEL-1708
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231127115755.22846-1-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/qemu/vhost-user-server.h     |  1 +
 block/export/vhost-user-blk-server.c |  9 +++++--
 util/vhost-user-server.c             | 39 ++++++++++++++++++++++------
 3 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/include/qemu/vhost-user-server.h b/include/qemu/vhost-user-server.h
index 64ad701015..0417ec0533 100644
--- a/include/qemu/vhost-user-server.h
+++ b/include/qemu/vhost-user-server.h
@@ -45,6 +45,7 @@ typedef struct {
     /* Protected by ctx lock */
     bool in_qio_channel_yield;
     bool wait_idle;
+    bool quiescing;
     VuDev vu_dev;
     QIOChannel *ioc; /* The I/O channel with the client */
     QIOChannelSocket *sioc; /* The underlying data channel with the client */
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index fe2cee3a78..16f48388d3 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -283,6 +283,7 @@ static void vu_blk_drained_begin(void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
+    vexp->vu_server.quiescing = true;
     vhost_user_server_detach_aio_context(&vexp->vu_server);
 }
 
@@ -291,19 +292,23 @@ static void vu_blk_drained_end(void *opaque)
 {
     VuBlkExport *vexp = opaque;
 
+    vexp->vu_server.quiescing = false;
     vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
 }
 
 /*
- * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
+ * Ensures that bdrv_drained_begin() waits until in-flight requests complete
+ * and the server->co_trip coroutine has terminated. It will be restarted in
+ * vhost_user_server_attach_aio_context().
  *
  * Called with vexp->export.ctx acquired.
  */
 static bool vu_blk_drained_poll(void *opaque)
 {
     VuBlkExport *vexp = opaque;
+    VuServer *server = &vexp->vu_server;
 
-    return vhost_user_server_has_in_flight(&vexp->vu_server);
+    return server->co_trip || vhost_user_server_has_in_flight(server);
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 5ccc6d24a0..a9a48fffb8 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -132,8 +132,7 @@ vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
                     qio_channel_yield(ioc, G_IO_IN);
                     server->in_qio_channel_yield = false;
                 } else {
-                    /* Wait until attached to an AioContext again */
-                    qemu_coroutine_yield();
+                    return false;
                 }
                 continue;
             } else {
@@ -201,8 +200,16 @@ static coroutine_fn void vu_client_trip(void *opaque)
     VuServer *server = opaque;
     VuDev *vu_dev = &server->vu_dev;
 
-    while (!vu_dev->broken && vu_dispatch(vu_dev)) {
-        /* Keep running */
+    while (!vu_dev->broken) {
+        if (server->quiescing) {
+            server->co_trip = NULL;
+            aio_wait_kick();
+            return;
+        }
+        /* vu_dispatch() returns false if server->ctx went away */
+        if (!vu_dispatch(vu_dev) && server->ctx) {
+            break;
+        }
     }
 
     if (vhost_user_server_has_in_flight(server)) {
@@ -353,8 +360,7 @@ static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
 
     qio_channel_set_follow_coroutine_ctx(server->ioc, true);
 
-    server->co_trip = qemu_coroutine_create(vu_client_trip, server);
-
+    /* Attaching the AioContext starts the vu_client_trip coroutine */
     aio_context_acquire(server->ctx);
     vhost_user_server_attach_aio_context(server, server->ctx);
     aio_context_release(server->ctx);
@@ -413,8 +419,25 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
                            NULL, NULL, vu_fd_watch);
     }
 
-    assert(!server->in_qio_channel_yield);
-    aio_co_schedule(ctx, server->co_trip);
+    if (server->co_trip) {
+        /*
+         * The caller didn't fully shut down co_trip (this can happen on
+         * non-polling drains like in bdrv_graph_wrlock()). This is okay as long
+         * as it no longer tries to shut it down and we're guaranteed to still
+         * be in the same AioContext as before.
+         *
+         * co_ctx can still be NULL if we get multiple calls and only just
+         * scheduled a new coroutine in the else branch.
+         */
+        AioContext *co_ctx = qemu_coroutine_get_aio_context(server->co_trip);
+
+        assert(!server->quiescing);
+        assert(!co_ctx || co_ctx == ctx);
+    } else {
+        server->co_trip = qemu_coroutine_create(vu_client_trip, server);
+        assert(!server->in_qio_channel_yield);
+        aio_co_schedule(ctx, server->co_trip);
+    }
 }
 
 /* Called with server->ctx acquired */
-- 
2.43.0




* [PULL 4/4] ide/via: Fix BAR4 value in legacy mode
  2023-11-28 14:09 [PULL 0/4] Block layer patches Kevin Wolf
                   ` (2 preceding siblings ...)
  2023-11-28 14:09 ` [PULL 3/4] export/vhost-user-blk: Fix consecutive drains Kevin Wolf
@ 2023-11-28 14:09 ` Kevin Wolf
  3 siblings, 0 replies; 5+ messages in thread
From: Kevin Wolf @ 2023-11-28 14:09 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, stefanha, qemu-devel

From: BALATON Zoltan <balaton@eik.bme.hu>

Return the default value for BAR4 in legacy mode when it is unset. This
can't be done in the reset method because BARs are cleared on reset, so
instead we return the default when the BARs are read in legacy mode.
This fixes UDMA on the amigaone machine with AmigaOS.
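
A rough standalone model of the read-back behaviour after this patch
(plain Python; it assumes aligned 4-byte reads, whereas the real code
masks the overlapping bytes individually):

    PCI_BASE_ADDRESS_0 = 0x10
    PCI_BASE_ADDRESS_4 = 0x20
    PCI_BASE_ADDRESS_SPACE_IO = 0x1

    def cfg_read_legacy(addr: int, raw_val: int) -> int:
        # BARs 0-3 (0x10..0x1f) always read back zero in legacy mode.
        if PCI_BASE_ADDRESS_0 <= addr < PCI_BASE_ADDRESS_0 + 16:
            return 0
        # An unset BAR4 reads as just the I/O space bit after reset;
        # report the default bus-master base 0xcc00 instead.
        if addr == PCI_BASE_ADDRESS_4 and raw_val == PCI_BASE_ADDRESS_SPACE_IO:
            return 0xcc00 | PCI_BASE_ADDRESS_SPACE_IO
        return raw_val

    assert cfg_read_legacy(PCI_BASE_ADDRESS_0, 0x1f1) == 0       # BAR0 hidden
    assert cfg_read_legacy(PCI_BASE_ADDRESS_4, 0x1) == 0xcc01    # BAR4 default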

Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-ID: <20231125140135.AF6A075A4C3@zero.eik.bme.hu>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/ide/via.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/hw/ide/via.c b/hw/ide/via.c
index 2d3124ebd7..3f3c484253 100644
--- a/hw/ide/via.c
+++ b/hw/ide/via.c
@@ -163,14 +163,19 @@ static uint32_t via_ide_cfg_read(PCIDevice *pd, uint32_t addr, int len)
     uint32_t val = pci_default_read_config(pd, addr, len);
     uint8_t mode = pd->config[PCI_CLASS_PROG];
 
-    if ((mode & 0xf) == 0xa && ranges_overlap(addr, len,
-                                              PCI_BASE_ADDRESS_0, 16)) {
-        /* BARs always read back zero in legacy mode */
-        for (int i = addr; i < addr + len; i++) {
-            if (i >= PCI_BASE_ADDRESS_0 && i < PCI_BASE_ADDRESS_0 + 16) {
-                val &= ~(0xffULL << ((i - addr) << 3));
+    if ((mode & 0xf) == 0xa) {
+        if (ranges_overlap(addr, len, PCI_BASE_ADDRESS_0, 16)) {
+            /* BARs 0-3 always read back zero in legacy mode */
+            for (int i = addr; i < addr + len; i++) {
+                if (i >= PCI_BASE_ADDRESS_0 && i < PCI_BASE_ADDRESS_0 + 16) {
+                    val &= ~(0xffULL << ((i - addr) << 3));
+                }
             }
         }
+        if (addr == PCI_BASE_ADDRESS_4 && val == PCI_BASE_ADDRESS_SPACE_IO) {
+            /* BAR4 default value if unset */
+            val = 0xcc00 | PCI_BASE_ADDRESS_SPACE_IO;
+        }
     }
 
     return val;
-- 
2.43.0


