* [PULL 0/9] Block layer fixes for 8.2.0-rc1
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
The following changes since commit af9264da80073435fd78944bc5a46e695897d7e5:
Merge tag '20231119-xtensa-1' of https://github.com/OSLL/qemu-xtensa into staging (2023-11-20 05:25:19 -0500)
are available in the Git repository at:
https://repo.or.cz/qemu/kevin.git tags/for-upstream
for you to fetch changes up to debb4911667b1f8213ca8760ae83afcf3b3579e0:
hw/ide/via: implement legacy/native mode switching (2023-11-21 12:45:21 +0100)
----------------------------------------------------------------
Block layer patches
- Fix graph lock related deadlocks with the stream job
- ahci: Fix legacy software reset
- ide/via: Fix switch between compatibility and native mode
----------------------------------------------------------------
Kevin Wolf (4):
block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
block: Fix deadlocks in bdrv_graph_wrunlock()
stream: Fix AioContext locking during bdrv_graph_wrlock()
iotests: Test two stream jobs in a single iothread
Mark Cave-Ayland (4):
ide/ioport: move ide_portio_list[] and ide_portio2_list[] definitions to IDE core
ide/pci: introduce pci_ide_update_mode() function
ide/via: don't attempt to set default BAR addresses
hw/ide/via: implement legacy/native mode switching
Niklas Cassel (1):
hw/ide/ahci: fix legacy software reset
include/block/graph-lock.h | 15 ++++-
include/hw/ide/internal.h | 3 +
include/hw/ide/pci.h | 1 +
block.c | 39 ++++++++-----
block/backup.c | 2 +-
block/blklogwrites.c | 4 +-
block/blkverify.c | 2 +-
block/block-backend.c | 12 +++-
block/commit.c | 10 ++--
block/graph-lock.c | 23 +++++++-
block/mirror.c | 14 ++---
block/qcow2.c | 2 +-
block/quorum.c | 4 +-
block/replication.c | 10 ++--
block/snapshot.c | 2 +-
block/stream.c | 10 ++--
block/vmdk.c | 10 ++--
blockdev.c | 4 +-
blockjob.c | 8 +--
hw/ide/ahci.c | 27 ++++++++-
hw/ide/core.c | 12 ++++
hw/ide/ioport.c | 12 ----
hw/ide/pci.c | 84 +++++++++++++++++++++++++++
hw/ide/via.c | 44 +++++++++++---
tests/unit/test-bdrv-drain.c | 20 +++----
tests/unit/test-bdrv-graph-mod.c | 10 ++--
scripts/block-coroutine-wrapper.py | 2 +-
tests/qemu-iotests/tests/iothreads-stream | 74 +++++++++++++++++++++++
tests/qemu-iotests/tests/iothreads-stream.out | 11 ++++
29 files changed, 374 insertions(+), 97 deletions(-)
create mode 100755 tests/qemu-iotests/tests/iothreads-stream
create mode 100644 tests/qemu-iotests/tests/iothreads-stream.out
* [PULL 1/9] hw/ide/ahci: fix legacy software reset
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
From: Niklas Cassel <niklas.cassel@wdc.com>
Legacy software contains a standard mechanism for generating a reset to a
Serial ATA device - setting the SRST (software reset) bit in the Device
Control register.
Serial ATA has a more robust mechanism called COMRESET, also referred to
as port reset. A port reset is the preferred mechanism for error
recovery and should be used in place of software reset.
Commit e2a5d9b3d9c3 ("hw/ide/ahci: simplify and document PxCI handling")
improved the handling of PxCI, such that PxCI gets cleared after handling
a non-NCQ or NCQ command (instead of incorrectly clearing PxCI after
receiving anything - even a FIS that failed to parse, which should NOT
clear PxCI, so that software can see which command slot caused the error).
However, simply clearing PxCI after a non-NCQ or NCQ command is not
enough; we also need to clear PxCI when receiving an SRST in the Device
Control register.
A legacy software reset is performed by the host sending two H2D FISes:
the first H2D FIS asserts SRST, and the second H2D FIS deasserts SRST.
The first H2D FIS will not get a D2H reply; it requires the command
header to have the C bit ("Clear Busy upon R_OK") set to one, such that
the HBA itself will clear the bit in PxCI.
The second H2D FIS will get a D2H reply once the diagnostic is completed.
Ideally, the clearing of the bit in PxCI for this command would be done
in ahci_init_d2h(), but only if the reset was caused by a legacy software
reset (a COMRESET does not use a command slot). However, since the reset
value for PxCI is 0, modify ahci_reset_port() to actually clear PxCI to 0;
that way we can avoid complex logic in ahci_init_d2h().
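As an illustration, a minimal sketch of this sequence as a host driver
might issue it (the helpers and the exact layout here are assumptions for
illustration, not QEMU code; ATA_SRST and AHCI_CMD_CLR_BUSY mirror the
constants used in the patch):

    /* FIS 1: assert SRST. The device sends no D2H reply, so the command
     * header must set "Clear Busy upon R_OK" (the C bit) for the HBA to
     * clear PxCI for this slot itself. */
    build_h2d_control_fis(&slot0, ATA_SRST);   /* hypothetical helper */
    slot0.opts |= AHCI_CMD_CLR_BUSY;
    issue_command_slot(port, &slot0);          /* hypothetical helper */

    /* FIS 2: deassert SRST. The device executes its diagnostic and then
     * sends the first D2H FIS, which clears PxCI via the normal path. */
    build_h2d_control_fis(&slot1, 0);
    issue_command_slot(port, &slot1);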
This fixes an issue for FreeBSD where the device would fail to reset.
The problem was not noticed in Linux, because Linux uses a COMRESET
instead of a legacy software reset by default.
Fixes: e2a5d9b3d9c3 ("hw/ide/ahci: simplify and document PxCI handling")
Reported-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Message-ID: <20231108222657.117984-1-nks@flawful.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
hw/ide/ahci.c | 27 ++++++++++++++++++++++++++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 7676e2d871..afdc44b8e0 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -623,9 +623,13 @@ static void ahci_init_d2h(AHCIDevice *ad)
return;
}
+ /*
+ * For simplicity, do not call ahci_clear_cmd_issue() for this
+ * ahci_write_fis_d2h(). (The reset value for PxCI is 0.)
+ */
if (ahci_write_fis_d2h(ad, true)) {
ad->init_d2h_sent = true;
- /* We're emulating receiving the first Reg H2D Fis from the device;
+ /* We're emulating receiving the first Reg D2H FIS from the device;
* Update the SIG register, but otherwise proceed as normal. */
pr->sig = ((uint32_t)ide_state->hcyl << 24) |
(ide_state->lcyl << 16) |
@@ -663,6 +667,7 @@ static void ahci_reset_port(AHCIState *s, int port)
pr->scr_act = 0;
pr->tfdata = 0x7F;
pr->sig = 0xFFFFFFFF;
+ pr->cmd_issue = 0;
d->busy_slot = -1;
d->init_d2h_sent = false;
@@ -1242,10 +1247,30 @@ static void handle_reg_h2d_fis(AHCIState *s, int port,
case STATE_RUN:
if (cmd_fis[15] & ATA_SRST) {
s->dev[port].port_state = STATE_RESET;
+ /*
+ * When setting SRST in the first H2D FIS in the reset sequence,
+ * the device does not send a D2H FIS. Host software thus has to
+ * set the "Clear Busy upon R_OK" bit such that PxCI (and BUSY)
+ * gets cleared. See AHCI 1.3.1, section 10.4.1 Software Reset.
+ */
+ if (opts & AHCI_CMD_CLR_BUSY) {
+ ahci_clear_cmd_issue(ad, slot);
+ }
}
break;
case STATE_RESET:
if (!(cmd_fis[15] & ATA_SRST)) {
+ /*
+ * When clearing SRST in the second H2D FIS in the reset
+ * sequence, the device will execute diagnostics. When this is
+ * done, the device will send a D2H FIS with the good status.
+ * See SATA 3.5a Gold, section 11.4 Software reset protocol.
+ *
+ * This D2H FIS is the first D2H FIS received from the device,
+ * and is received regardless if the reset was performed by a
+ * COMRESET or by setting and clearing the SRST bit. Therefore,
+ * the logic for this is found in ahci_init_d2h() and not here.
+ */
ahci_reset_port(s, port);
}
break;
--
2.42.0
* [PULL 2/9] block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
While not all callers of blk_remove_bs() are correct in this respect,
the assumption in the function is that callers hold the AioContext lock
of the BlockBackend (this is required by the drain calls in it).
In order to avoid deadlock in the nested event loop, bdrv_graph_wrlock()
must then be called with the root BlockDriverState as its parameter
instead of NULL, so that this AioContext lock is temporarily dropped.
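A minimal sketch of the resulting locking pattern, assuming a generic
caller (simplified, not a verbatim caller from the tree):

    AioContext *ctx = blk_get_aio_context(blk);

    /* Contract: the caller holds the BlockBackend's AioContext lock.
     * blk_remove_bs() passes the root node to bdrv_graph_wrlock() so
     * that this lock is temporarily dropped around the nested event
     * loop instead of deadlocking it. */
    aio_context_acquire(ctx);
    blk_remove_bs(blk);     /* internally: bdrv_graph_wrlock(root->bs) */
    aio_context_release(ctx);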
Fixes: https://issues.redhat.com/browse/RHEL-1761
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-2-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/block-backend.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/block-backend.c b/block/block-backend.c
index 4053134781..f6f05ead28 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -882,6 +882,8 @@ BlockBackend *blk_by_public(BlockBackendPublic *public)
/*
* Disassociates the currently associated BlockDriverState from @blk.
+ *
+ * The caller must hold the AioContext lock for the BlockBackend.
*/
void blk_remove_bs(BlockBackend *blk)
{
@@ -916,7 +918,7 @@ void blk_remove_bs(BlockBackend *blk)
root = blk->root;
blk->root = NULL;
- bdrv_graph_wrlock(NULL);
+ bdrv_graph_wrlock(root->bs);
bdrv_root_unref_child(root);
bdrv_graph_wrunlock();
}
--
2.42.0
* [PULL 3/9] block: Fix deadlocks in bdrv_graph_wrunlock()
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that
have a nested event loop. Nested event loops can depend on other
iothreads making progress, so in order to allow them to make progress,
the caller must not hold the AioContext lock of another thread while
calling aio_poll().
This introduces a @bs parameter to bdrv_graph_wrunlock() whose
AioContext is temporarily dropped (which matches bdrv_graph_wrlock()),
and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState
doesn't necessarily exist any more when unlocking.
This also requires a change to bdrv_schedule_unref(), which was relying
on the incorrectly taken lock. It needs to take the lock itself now.
While this is a separate bug, it can't be fixed in a separate patch,
because otherwise the intermediate state would either deadlock or try to
release a lock that we don't even hold.
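The intended usage of the two unlock variants, as a short sketch that
mirrors the hunks below:

    /* Normal case: the node is still valid at unlock time, so pass the
     * same node that was passed to bdrv_graph_wrlock(); its AioContext
     * is temporarily released while the unlock polls. */
    bdrv_graph_wrlock(bs);
    /* ... modify the graph ... */
    bdrv_graph_wrunlock(bs);

    /* When the node may no longer exist at unlock time, capture its
     * AioContext up front and use the _ctx variant, as blk_remove_bs()
     * now does: */
    AioContext *ctx = bdrv_get_aio_context(root->bs);
    bdrv_graph_wrlock(root->bs);
    bdrv_root_unref_child(root);   /* may free root->bs */
    bdrv_graph_wrunlock_ctx(ctx);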
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: Fixed up bdrv_schedule_unref()]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/block/graph-lock.h | 15 +++++++++++-
block.c | 39 ++++++++++++++++++------------
block/backup.c | 2 +-
block/blklogwrites.c | 4 +--
block/blkverify.c | 2 +-
block/block-backend.c | 8 ++++--
block/commit.c | 10 ++++----
block/graph-lock.c | 23 +++++++++++++++++-
block/mirror.c | 14 +++++------
block/qcow2.c | 2 +-
block/quorum.c | 4 +--
block/replication.c | 10 ++++----
block/snapshot.c | 2 +-
block/stream.c | 8 +++---
block/vmdk.c | 10 ++++----
blockdev.c | 4 +--
blockjob.c | 8 +++---
tests/unit/test-bdrv-drain.c | 20 +++++++--------
tests/unit/test-bdrv-graph-mod.c | 10 ++++----
scripts/block-coroutine-wrapper.py | 2 +-
20 files changed, 122 insertions(+), 75 deletions(-)
diff --git a/include/block/graph-lock.h b/include/block/graph-lock.h
index 6f1cd12745..22b5db1ed9 100644
--- a/include/block/graph-lock.h
+++ b/include/block/graph-lock.h
@@ -123,8 +123,21 @@ bdrv_graph_wrlock(BlockDriverState *bs);
* bdrv_graph_wrunlock:
* Write finished, reset global has_writer to 0 and restart
* all readers that are waiting.
+ *
+ * If @bs is non-NULL, its AioContext is temporarily released.
+ */
+void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
+bdrv_graph_wrunlock(BlockDriverState *bs);
+
+/*
+ * bdrv_graph_wrunlock_ctx:
+ * Write finished, reset global has_writer to 0 and restart
+ * all readers that are waiting.
+ *
+ * If @ctx is non-NULL, its lock is temporarily released.
*/
-void bdrv_graph_wrunlock(void) TSA_RELEASE(graph_lock) TSA_NO_TSA;
+void no_coroutine_fn TSA_RELEASE(graph_lock) TSA_NO_TSA
+bdrv_graph_wrunlock_ctx(AioContext *ctx);
/*
* bdrv_graph_co_rdlock:
diff --git a/block.c b/block.c
index eac105a504..bfb0861ec6 100644
--- a/block.c
+++ b/block.c
@@ -1713,7 +1713,7 @@ open_failed:
bdrv_unref_child(bs, bs->file);
assert(!bs->file);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(bs->opaque);
bs->opaque = NULL;
@@ -3577,7 +3577,7 @@ int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
bdrv_drained_begin(drain_bs);
bdrv_graph_wrlock(backing_hd);
ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(backing_hd);
bdrv_drained_end(drain_bs);
bdrv_unref(drain_bs);
@@ -3796,7 +3796,7 @@ BdrvChild *bdrv_open_child(const char *filename,
child = bdrv_attach_child(parent, bs, bdref_key, child_class, child_role,
errp);
aio_context_release(ctx);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
return child;
}
@@ -4652,7 +4652,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
bdrv_graph_wrlock(NULL);
tran_commit(tran);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
QTAILQ_FOREACH_REVERSE(bs_entry, bs_queue, entry) {
BlockDriverState *bs = bs_entry->state.bs;
@@ -4671,7 +4671,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
abort:
bdrv_graph_wrlock(NULL);
tran_abort(tran);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
QTAILQ_FOREACH_SAFE(bs_entry, bs_queue, entry, next) {
if (bs_entry->prepared) {
@@ -4857,7 +4857,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
tran, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(ctx);
if (old_ctx != ctx) {
aio_context_release(ctx);
@@ -5216,7 +5216,7 @@ static void bdrv_close(BlockDriverState *bs)
assert(!bs->backing);
assert(!bs->file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
g_free(bs->opaque);
bs->opaque = NULL;
@@ -5511,7 +5511,7 @@ int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
bdrv_drained_begin(child_bs);
bdrv_graph_wrlock(bs);
ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
bdrv_drained_end(child_bs);
return ret;
@@ -5593,7 +5593,7 @@ out:
tran_finalize(tran, ret);
bdrv_refresh_limits(bs_top, NULL, NULL);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs_top);
bdrv_drained_end(bs_top);
bdrv_drained_end(bs_new);
@@ -5631,7 +5631,7 @@ int bdrv_replace_child_bs(BdrvChild *child, BlockDriverState *new_bs,
tran_finalize(tran, ret);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(new_bs);
bdrv_drained_end(old_bs);
bdrv_drained_end(new_bs);
bdrv_unref(old_bs);
@@ -5720,7 +5720,7 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
bdrv_drained_begin(new_node_bs);
bdrv_graph_wrlock(new_node_bs);
ret = bdrv_replace_node(bs, new_node_bs, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(new_node_bs);
bdrv_drained_end(new_node_bs);
bdrv_drained_end(bs);
bdrv_unref(bs);
@@ -6015,7 +6015,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
* That's a FIXME.
*/
bdrv_replace_node_common(top, base, false, false, &local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
if (local_err) {
error_report_err(local_err);
@@ -6052,7 +6052,7 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
goto exit;
exit_wrlock:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
exit:
bdrv_drained_end(base);
bdrv_unref(top);
@@ -7254,6 +7254,16 @@ void bdrv_unref(BlockDriverState *bs)
}
}
+static void bdrv_schedule_unref_bh(void *opaque)
+{
+ BlockDriverState *bs = opaque;
+ AioContext *ctx = bdrv_get_aio_context(bs);
+
+ aio_context_acquire(ctx);
+ bdrv_unref(bs);
+ aio_context_release(ctx);
+}
+
/*
* Release a BlockDriverState reference while holding the graph write lock.
*
@@ -7267,8 +7277,7 @@ void bdrv_schedule_unref(BlockDriverState *bs)
if (!bs) {
return;
}
- aio_bh_schedule_oneshot(qemu_get_aio_context(),
- (QEMUBHFunc *) bdrv_unref, bs);
+ aio_bh_schedule_oneshot(qemu_get_aio_context(), bdrv_schedule_unref_bh, bs);
}
struct BdrvOpBlocker {
diff --git a/block/backup.c b/block/backup.c
index 5bad7d116f..8aae5836d7 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -499,7 +499,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
bdrv_graph_wrlock(target);
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target);
return &job->common;
diff --git a/block/blklogwrites.c b/block/blklogwrites.c
index a0d70729bb..3678f6cf42 100644
--- a/block/blklogwrites.c
+++ b/block/blklogwrites.c
@@ -253,7 +253,7 @@ fail_log:
if (ret < 0) {
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->log_file = NULL;
}
fail:
@@ -268,7 +268,7 @@ static void blk_log_writes_close(BlockDriverState *bs)
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->log_file);
s->log_file = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/blkverify.c b/block/blkverify.c
index a96905db35..9b17c46644 100644
--- a/block/blkverify.c
+++ b/block/blkverify.c
@@ -154,7 +154,7 @@ static void blkverify_close(BlockDriverState *bs)
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->test_file);
s->test_file = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int64_t coroutine_fn GRAPH_RDLOCK
diff --git a/block/block-backend.c b/block/block-backend.c
index f6f05ead28..ec21148806 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -889,6 +889,7 @@ void blk_remove_bs(BlockBackend *blk)
{
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
BdrvChild *root;
+ AioContext *ctx;
GLOBAL_STATE_CODE();
@@ -918,9 +919,10 @@ void blk_remove_bs(BlockBackend *blk)
root = blk->root;
blk->root = NULL;
+ ctx = bdrv_get_aio_context(root->bs);
bdrv_graph_wrlock(root->bs);
bdrv_root_unref_child(root);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(ctx);
}
/*
@@ -931,6 +933,8 @@ void blk_remove_bs(BlockBackend *blk)
int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
{
ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
+ AioContext *ctx = bdrv_get_aio_context(bs);
+
GLOBAL_STATE_CODE();
bdrv_ref(bs);
bdrv_graph_wrlock(bs);
@@ -938,7 +942,7 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
blk->perm, blk->shared_perm,
blk, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(ctx);
if (blk->root == NULL) {
return -EPERM;
}
diff --git a/block/commit.c b/block/commit.c
index eb3dc01f45..69cc75be0c 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -102,7 +102,7 @@ static void commit_abort(Job *job)
bdrv_drained_begin(commit_top_backing_bs);
bdrv_graph_wrlock(commit_top_backing_bs);
bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(commit_top_backing_bs);
bdrv_drained_end(commit_top_backing_bs);
bdrv_unref(s->commit_top_bs);
@@ -370,19 +370,19 @@ void commit_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
iter_shared_perms, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
goto fail;
}
}
if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
goto fail;
}
s->chain_frozen = true;
ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
if (ret < 0) {
goto fail;
@@ -436,7 +436,7 @@ fail:
bdrv_drained_begin(top);
bdrv_graph_wrlock(top);
bdrv_replace_node(commit_top_bs, top, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(top);
bdrv_drained_end(top);
}
}
diff --git a/block/graph-lock.c b/block/graph-lock.c
index e5525ee2db..079e878d9b 100644
--- a/block/graph-lock.c
+++ b/block/graph-lock.c
@@ -161,11 +161,21 @@ void no_coroutine_fn bdrv_graph_wrlock(BlockDriverState *bs)
}
}
-void bdrv_graph_wrunlock(void)
+void no_coroutine_fn bdrv_graph_wrunlock_ctx(AioContext *ctx)
{
GLOBAL_STATE_CODE();
assert(qatomic_read(&has_writer));
+ /*
+ * Release only non-mainloop AioContext. The mainloop often relies on the
+ * BQL and doesn't lock the main AioContext before doing things.
+ */
+ if (ctx && ctx != qemu_get_aio_context()) {
+ aio_context_release(ctx);
+ } else {
+ ctx = NULL;
+ }
+
WITH_QEMU_LOCK_GUARD(&aio_context_list_lock) {
/*
* No need for memory barriers, this works in pair with
@@ -187,6 +197,17 @@ void bdrv_graph_wrunlock(void)
* progress.
*/
aio_bh_poll(qemu_get_aio_context());
+
+ if (ctx) {
+ aio_context_acquire(ctx);
+ }
+}
+
+void no_coroutine_fn bdrv_graph_wrunlock(BlockDriverState *bs)
+{
+ AioContext *ctx = bs ? bdrv_get_aio_context(bs) : NULL;
+
+ bdrv_graph_wrunlock_ctx(ctx);
}
void coroutine_fn bdrv_graph_co_rdlock(void)
diff --git a/block/mirror.c b/block/mirror.c
index 2096fade90..cd9d3ad4a8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -773,7 +773,7 @@ static int mirror_exit_common(Job *job)
"would not lead to an abrupt change of visible data",
to_replace->node_name, target_bs->node_name);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target_bs);
bdrv_drained_end(to_replace);
if (local_err) {
error_report_err(local_err);
@@ -798,7 +798,7 @@ static int mirror_exit_common(Job *job)
block_job_remove_all_bdrv(bjob);
bdrv_graph_wrlock(mirror_top_bs);
bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(mirror_top_bs);
bdrv_drained_end(target_bs);
bdrv_unref(target_bs);
@@ -1920,7 +1920,7 @@ static BlockJob *mirror_start_job(
BLK_PERM_CONSISTENT_READ,
errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
@@ -1965,17 +1965,17 @@ static BlockJob *mirror_start_job(
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
iter_shared_perms, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
QTAILQ_INIT(&s->ops_in_flight);
@@ -2006,7 +2006,7 @@ fail:
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort);
bdrv_replace_node(mirror_top_bs, bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
bdrv_drained_end(bs);
bdrv_unref(mirror_top_bs);
diff --git a/block/qcow2.c b/block/qcow2.c
index cf2468858f..13e032bd5e 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -2809,7 +2809,7 @@ qcow2_do_close(BlockDriverState *bs, bool close_data_file)
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, s->data_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->data_file = NULL;
bdrv_graph_rdlock_main_loop();
}
diff --git a/block/quorum.c b/block/quorum.c
index d3ffc2ee33..505b8b3e18 100644
--- a/block/quorum.c
+++ b/block/quorum.c
@@ -1044,7 +1044,7 @@ close_exit:
}
bdrv_unref_child(bs, s->children[i]);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->children);
g_free(opened);
exit:
@@ -1061,7 +1061,7 @@ static void quorum_close(BlockDriverState *bs)
for (i = 0; i < s->num_children; i++) {
bdrv_unref_child(bs, s->children[i]);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->children);
}
diff --git a/block/replication.c b/block/replication.c
index 43e259444b..5ded5f1ca9 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -568,7 +568,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
&local_err);
if (local_err) {
error_propagate(errp, local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
aio_context_release(aio_context);
return;
}
@@ -579,7 +579,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
BDRV_CHILD_DATA, &local_err);
if (local_err) {
error_propagate(errp, local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
aio_context_release(aio_context);
return;
}
@@ -592,7 +592,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
if (!top_bs || !bdrv_is_root_node(top_bs) ||
!check_top_bs(top_bs, bs)) {
error_setg(errp, "No top_bs or it is invalid");
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
reopen_backing_file(bs, false, NULL);
aio_context_release(aio_context);
return;
@@ -600,7 +600,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
bdrv_op_block_all(top_bs, s->blocker);
bdrv_op_unblock(top_bs, BLOCK_OP_TYPE_DATAPLANE, s->blocker);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
s->backup_job = backup_job_create(
NULL, s->secondary_disk->bs, s->hidden_disk->bs,
@@ -696,7 +696,7 @@ static void replication_done(void *opaque, int ret)
s->secondary_disk = NULL;
bdrv_unref_child(bs, s->hidden_disk);
s->hidden_disk = NULL;
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
s->error = 0;
} else {
diff --git a/block/snapshot.c b/block/snapshot.c
index 55974273ae..ec8cf4810b 100644
--- a/block/snapshot.c
+++ b/block/snapshot.c
@@ -292,7 +292,7 @@ int bdrv_snapshot_goto(BlockDriverState *bs,
/* .bdrv_open() will re-attach it */
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, fallback);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
ret = bdrv_snapshot_goto(fallback_bs, snapshot_id, errp);
open_ret = drv->bdrv_open(bs, options, bs->open_flags, &local_err);
diff --git a/block/stream.c b/block/stream.c
index 0b92410c00..e3aa696289 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -101,7 +101,7 @@ static int stream_prepare(Job *job)
bdrv_graph_wrlock(base);
bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(base);
/*
* This call will do I/O, so the graph can change again from here on.
@@ -369,7 +369,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
bdrv_graph_wrlock(bs);
if (block_job_add_bdrv(&s->common, "active node", bs, 0,
basic_flags | BLK_PERM_WRITE, errp)) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
@@ -389,11 +389,11 @@ void stream_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
basic_flags, errp);
if (ret < 0) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
goto fail;
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
s->base_overlay = base_overlay;
s->above_base = above_base;
diff --git a/block/vmdk.c b/block/vmdk.c
index dda783f06b..d87f6d9aaa 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -283,7 +283,7 @@ static void vmdk_free_extents(BlockDriverState *bs)
bdrv_unref_child(bs, e->file);
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_free(s->extents);
}
@@ -1237,7 +1237,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1256,7 +1256,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1267,7 +1267,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
goto out;
}
@@ -1277,7 +1277,7 @@ vmdk_parse_extents(const char *desc, BlockDriverState *bs, QDict *options,
bdrv_graph_rdunlock_main_loop();
bdrv_graph_wrlock(NULL);
bdrv_unref_child(bs, extent_file);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_graph_rdlock_main_loop();
ret = -ENOTSUP;
goto out;
diff --git a/blockdev.c b/blockdev.c
index 5bc921236c..4c1177e8db 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -1613,7 +1613,7 @@ static void external_snapshot_abort(void *opaque)
bdrv_drained_begin(state->new_bs);
bdrv_graph_wrlock(state->old_bs);
bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(state->old_bs);
bdrv_drained_end(state->new_bs);
bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
@@ -3692,7 +3692,7 @@ void qmp_x_blockdev_change(const char *parent, const char *child,
}
out:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
BlockJobInfoList *qmp_query_block_jobs(Error **errp)
diff --git a/blockjob.c b/blockjob.c
index af44322cbe..b7a29052b9 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -212,7 +212,7 @@ void block_job_remove_all_bdrv(BlockJob *job)
g_slist_free_1(l);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock_ctx(job->job.aio_context);
}
bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs)
@@ -523,7 +523,7 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs),
flags, cb, opaque, errp);
if (job == NULL) {
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
return NULL;
}
@@ -563,11 +563,11 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
goto fail;
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
return job;
fail:
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(bs);
job_early_fail(&job->job);
return NULL;
}
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index 8d05538bf6..704d1a3f36 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -809,7 +809,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
bdrv_graph_wrlock(target);
block_job_add_bdrv(job, "target", target, 0, BLK_PERM_ALL, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(target);
switch (result) {
case TEST_JOB_SUCCESS:
@@ -995,7 +995,7 @@ static void bdrv_test_top_close(BlockDriverState *bs)
QLIST_FOREACH_SAFE(c, &bs->children, next, next_c) {
bdrv_unref_child(bs, c);
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static int coroutine_fn GRAPH_RDLOCK
@@ -1088,7 +1088,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* This child will be the one to pass to requests through to, and
* it will stall until a drain occurs */
@@ -1101,7 +1101,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
&child_of_bds,
BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* This child is just there to be deleted
* (for detach_instead_of_delete == true) */
@@ -1110,7 +1110,7 @@ static void do_test_delete_by_drain(bool detach_instead_of_delete,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(bs, null_bs, "null-child", &child_of_bds, BDRV_CHILD_DATA,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
blk = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
blk_insert_bs(blk, bs, &error_abort);
@@ -1200,7 +1200,7 @@ static void no_coroutine_fn detach_indirect_bh(void *opaque)
data->child_c = bdrv_attach_child(data->parent_b, data->c, "PB-C",
&child_of_bds, BDRV_CHILD_DATA,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
}
static void coroutine_mixed_fn detach_by_parent_aio_cb(void *opaque, int ret)
@@ -1308,7 +1308,7 @@ static void TSA_NO_TSA test_detach_indirect(bool by_parent_cb)
bdrv_attach_child(parent_a, a, "PA-A",
by_parent_cb ? &child_of_bds : &detach_by_driver_cb_class,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
g_assert_cmpint(parent_a->refcnt, ==, 1);
g_assert_cmpint(parent_b->refcnt, ==, 1);
@@ -1735,7 +1735,7 @@ static void test_drop_intermediate_poll(void)
&chain_child_class, BDRV_CHILD_COW, &error_abort);
}
}
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
job = block_job_create("job", &test_simple_job_driver, NULL, job_node,
0, BLK_PERM_ALL, 0, 0, NULL, NULL, &error_abort);
@@ -1985,7 +1985,7 @@ static void do_test_replace_child_mid_drain(int old_drain_count,
bdrv_graph_wrlock(NULL);
bdrv_attach_child(parent_bs, old_child_bs, "child", &child_of_bds,
BDRV_CHILD_COW, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
parent_s->setup_completed = true;
for (i = 0; i < old_drain_count; i++) {
@@ -2018,7 +2018,7 @@ static void do_test_replace_child_mid_drain(int old_drain_count,
bdrv_drained_begin(new_child_bs);
bdrv_graph_wrlock(NULL);
bdrv_replace_node(old_child_bs, new_child_bs, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_drained_end(new_child_bs);
bdrv_drained_end(old_child_bs);
g_assert(parent_bs->quiesce_counter == new_drain_count);
diff --git a/tests/unit/test-bdrv-graph-mod.c b/tests/unit/test-bdrv-graph-mod.c
index 878544dbd5..074adcbb93 100644
--- a/tests/unit/test-bdrv-graph-mod.c
+++ b/tests/unit/test-bdrv-graph-mod.c
@@ -140,7 +140,7 @@ static void test_update_perm_tree(void)
bdrv_graph_wrlock(NULL);
bdrv_attach_child(filter, bs, "child", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
ret = bdrv_append(filter, bs, NULL);
@@ -210,7 +210,7 @@ static void test_should_update_child(void)
g_assert(target->backing->bs == bs);
bdrv_attach_child(filter, target, "target", &child_of_bds,
BDRV_CHILD_DATA, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
bdrv_append(filter, bs, &error_abort);
aio_context_release(qemu_get_aio_context());
@@ -260,7 +260,7 @@ static void test_parallel_exclusive_write(void)
&error_abort);
bdrv_replace_node(fl1, fl2, &error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
bdrv_drained_end(fl2);
bdrv_drained_end(fl1);
@@ -380,7 +380,7 @@ static void test_parallel_perm_update(void)
bdrv_attach_child(fl2, base, "backing", &child_of_bds,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
/* Select fl1 as first child to be active */
s->selected = c_fl1;
@@ -438,7 +438,7 @@ static void test_append_greedy_filter(void)
bdrv_attach_child(top, base, "backing", &child_of_bds,
BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
&error_abort);
- bdrv_graph_wrunlock();
+ bdrv_graph_wrunlock(NULL);
aio_context_acquire(qemu_get_aio_context());
bdrv_append(fl, base, &error_abort);
diff --git a/scripts/block-coroutine-wrapper.py b/scripts/block-coroutine-wrapper.py
index a601c3c672..a38e5833fb 100644
--- a/scripts/block-coroutine-wrapper.py
+++ b/scripts/block-coroutine-wrapper.py
@@ -262,7 +262,7 @@ def gen_no_co_wrapper(func: FuncDecl) -> str:
graph_unlock=' bdrv_graph_rdunlock_main_loop();'
elif func.graph_wrlock:
graph_lock=' bdrv_graph_wrlock(NULL);'
- graph_unlock=' bdrv_graph_wrunlock();'
+ graph_unlock=' bdrv_graph_wrunlock(NULL);'
return f"""\
/*
--
2.42.0
* [PULL 4/9] stream: Fix AioContext locking during bdrv_graph_wrlock()
2023-11-21 11:52 [PULL 0/9] Block layer fixes for 8.2.0-rc1 Kevin Wolf
` (2 preceding siblings ...)
2023-11-21 11:52 ` [PULL 3/9] block: Fix deadlocks in bdrv_graph_wrunlock() Kevin Wolf
@ 2023-11-21 11:52 ` Kevin Wolf
2023-11-21 11:52 ` [PULL 5/9] iotests: Test two stream jobs in a single iothread Kevin Wolf
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Kevin Wolf @ 2023-11-21 11:52 UTC (permalink / raw)
To: qemu-block; +Cc: kwolf, qemu-devel
In stream_prepare(), we need to temporarily drop the AioContext lock
that job_prepare_locked() took for us while calling the graph write lock
functions which can poll.
All block nodes related to this block job are in the same AioContext, so
we can pass any of them to bdrv_graph_wrlock()/bdrv_graph_wrunlock().
Unfortunately, the one that we picked is base, which can be NULL - and
in this case the AioContext lock is not released and deadlocks can
occur.
Fix this by passing s->target_bs, which is never NULL.
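The hazard in miniature - a simplified sketch of the guard inside the
graph lock functions, shown here only to make the failure mode explicit:

    /* With bs == NULL there is no AioContext to release, so the lock
     * taken by job_prepare_locked() stays held while the graph lock
     * functions poll - and another iothread waiting for that lock can
     * no longer make progress. */
    if (bs && ctx != qemu_get_aio_context()) {
        aio_context_release(ctx);   /* skipped entirely when bs is NULL */
    }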
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-4-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
block/stream.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/stream.c b/block/stream.c
index e3aa696289..01fe7c0f16 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -99,9 +99,9 @@ static int stream_prepare(Job *job)
}
}
- bdrv_graph_wrlock(base);
+ bdrv_graph_wrlock(s->target_bs);
bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
- bdrv_graph_wrunlock(base);
+ bdrv_graph_wrunlock(s->target_bs);
/*
* This call will do I/O, so the graph can change again from here on.
--
2.42.0
* [PULL 5/9] iotests: Test two stream jobs in a single iothread
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
This tests two parallel stream jobs that will complete around the same
time and run on two different disks in the same iothread. It is loosely
based on the bug report at https://issues.redhat.com/browse/RHEL-1761.
For me, this test hangs reliably with the originally reported bug in
blk_remove_bs(). After fixing it, the test intermittently hangs because of
the bugs fixed after it: the missing AioContext unlocking in
bdrv_graph_wrunlock() and in stream_prepare(). The deadlocks seem to
happen more frequently when the test directory is on tmpfs.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-5-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
tests/qemu-iotests/tests/iothreads-stream | 74 +++++++++++++++++++
tests/qemu-iotests/tests/iothreads-stream.out | 11 +++
2 files changed, 85 insertions(+)
create mode 100755 tests/qemu-iotests/tests/iothreads-stream
create mode 100644 tests/qemu-iotests/tests/iothreads-stream.out
diff --git a/tests/qemu-iotests/tests/iothreads-stream b/tests/qemu-iotests/tests/iothreads-stream
new file mode 100755
index 0000000000..503f221f16
--- /dev/null
+++ b/tests/qemu-iotests/tests/iothreads-stream
@@ -0,0 +1,74 @@
+#!/usr/bin/env python3
+# group: rw quick auto
+#
+# Copyright (C) 2023 Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+# Creator/Owner: Kevin Wolf <kwolf@redhat.com>
+
+import iotests
+
+iotests.script_initialize(supported_fmts=['qcow2'],
+ supported_platforms=['linux'])
+iotests.verify_virtio_scsi_pci_or_ccw()
+
+with iotests.FilePath('disk1.img') as base1_path, \
+ iotests.FilePath('disk1-snap.img') as snap1_path, \
+ iotests.FilePath('disk2.img') as base2_path, \
+ iotests.FilePath('disk2-snap.img') as snap2_path, \
+ iotests.VM() as vm:
+
+ img_size = '10M'
+
+ # Only one iothread for both disks
+ vm.add_object('iothread,id=iothread0')
+ vm.add_device('virtio-scsi,iothread=iothread0')
+
+ iotests.log('Preparing disks...')
+ for i, base_path, snap_path in ((0, base1_path, snap1_path),
+ (1, base2_path, snap2_path)):
+ iotests.qemu_img_create('-f', iotests.imgfmt, base_path, img_size)
+ iotests.qemu_img_create('-f', iotests.imgfmt, '-b', base_path,
+ '-F', iotests.imgfmt, snap_path)
+
+ iotests.qemu_io_log('-c', f'write 0 {img_size}', base_path)
+
+ vm.add_blockdev(f'file,node-name=disk{i}-base-file,'
+ f'filename={base_path}')
+ vm.add_blockdev(f'qcow2,node-name=disk{i}-base,file=disk{i}-base-file')
+ vm.add_blockdev(f'file,node-name=disk{i}-file,filename={snap_path}')
+ vm.add_blockdev(f'qcow2,node-name=disk{i},file=disk{i}-file,'
+ f'backing=disk{i}-base')
+ vm.add_device(f'scsi-hd,drive=disk{i}')
+
+ iotests.log('Launching VM...')
+ vm.launch()
+
+ iotests.log('Starting stream jobs...')
+ iotests.log(vm.qmp('block-stream', device='disk0', job_id='job0'))
+ iotests.log(vm.qmp('block-stream', device='disk1', job_id='job1'))
+
+ finished = 0
+ while True:
+ try:
+ ev = vm.event_wait('JOB_STATUS_CHANGE', timeout=0.1)
+ if ev is not None and ev['data']['status'] == 'null':
+ finished += 1
+ # The test is done once both jobs are gone
+ if finished == 2:
+ break
+ except TimeoutError:
+ pass
+ vm.cmd('query-jobs')
diff --git a/tests/qemu-iotests/tests/iothreads-stream.out b/tests/qemu-iotests/tests/iothreads-stream.out
new file mode 100644
index 0000000000..ef134165e5
--- /dev/null
+++ b/tests/qemu-iotests/tests/iothreads-stream.out
@@ -0,0 +1,11 @@
+Preparing disks...
+wrote 10485760/10485760 bytes at offset 0
+10 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+wrote 10485760/10485760 bytes at offset 0
+10 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+Launching VM...
+Starting stream jobs...
+{"return": {}}
+{"return": {}}
--
2.42.0
* [PULL 6/9] ide/ioport: move ide_portio_list[] and ide_portio2_list[] definitions to IDE core
From: Kevin Wolf @ 2023-11-21 11:52 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
These definitions are present in ioport.c, which is currently only built
when CONFIG_IDE_ISA is enabled. Move them to the IDE core so that they can
be made available to PCI IDE controllers that support switching to legacy
mode.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20231116103355.588580-2-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/hw/ide/internal.h | 3 +++
hw/ide/core.c | 12 ++++++++++++
hw/ide/ioport.c | 12 ------------
3 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/include/hw/ide/internal.h b/include/hw/ide/internal.h
index 2bfa7533d6..3bdcc75597 100644
--- a/include/hw/ide/internal.h
+++ b/include/hw/ide/internal.h
@@ -354,6 +354,9 @@ enum ide_dma_cmd {
extern const char *IDE_DMA_CMD_lookup[IDE_DMA__COUNT];
+extern const MemoryRegionPortio ide_portio_list[];
+extern const MemoryRegionPortio ide_portio2_list[];
+
#define ide_cmd_is_read(s) \
((s)->dma_cmd == IDE_DMA_READ)
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 63ba665f3d..8a0579bff4 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -81,6 +81,18 @@ static const char *IDE_DMA_CMD_str(enum ide_dma_cmd enval)
static void ide_dummy_transfer_stop(IDEState *s);
+const MemoryRegionPortio ide_portio_list[] = {
+ { 0, 8, 1, .read = ide_ioport_read, .write = ide_ioport_write },
+ { 0, 1, 2, .read = ide_data_readw, .write = ide_data_writew },
+ { 0, 1, 4, .read = ide_data_readl, .write = ide_data_writel },
+ PORTIO_END_OF_LIST(),
+};
+
+const MemoryRegionPortio ide_portio2_list[] = {
+ { 0, 1, 1, .read = ide_status_read, .write = ide_ctrl_write },
+ PORTIO_END_OF_LIST(),
+};
+
static void padstr(char *str, const char *src, int len)
{
int i, v;
diff --git a/hw/ide/ioport.c b/hw/ide/ioport.c
index e2ecc6230c..0b283ac783 100644
--- a/hw/ide/ioport.c
+++ b/hw/ide/ioport.c
@@ -28,18 +28,6 @@
#include "hw/ide/internal.h"
#include "trace.h"
-static const MemoryRegionPortio ide_portio_list[] = {
- { 0, 8, 1, .read = ide_ioport_read, .write = ide_ioport_write },
- { 0, 1, 2, .read = ide_data_readw, .write = ide_data_writew },
- { 0, 1, 4, .read = ide_data_readl, .write = ide_data_writel },
- PORTIO_END_OF_LIST(),
-};
-
-static const MemoryRegionPortio ide_portio2_list[] = {
- { 0, 1, 1, .read = ide_status_read, .write = ide_ctrl_write },
- PORTIO_END_OF_LIST(),
-};
-
int ide_init_ioport(IDEBus *bus, ISADevice *dev, int iobase, int iobase2)
{
int ret;
--
2.42.0
* [PULL 7/9] ide/pci: introduce pci_ide_update_mode() function
From: Kevin Wolf @ 2023-11-21 11:53 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
This function reads the value of the PCI_CLASS_PROG register for PCI IDE
controllers and configures the PCI BARs and/or IDE ioports accordingly.
In the case where we switch to legacy mode, the PCI BARs are set to return
zero (as suggested in the "PCI IDE Controller" specification), the legacy
IDE ioports are enabled, and the PCI interrupt pin is cleared to indicate
legacy IRQ routing. Conversely, when we switch to native mode, the legacy
IDE ioports are disabled and the PCI interrupt pin is set to indicate
native IRQ routing. The contents of the PCI BARs are unspecified, but this
is not an issue: if a PCI IDE controller has been switched to native mode,
its BARs will need to be programmed anyway.
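The mode values tested by pci_ide_update_mode() come from the
programming-interface byte defined in the PCI IDE Controller
specification; a small decoding sketch (the helper name is illustrative,
not part of the patch):

    /*
     * Low nibble of PCI_CLASS_PROG for a PCI IDE controller:
     *   bit 0: primary channel in native mode
     *   bit 1: primary channel mode is programmable
     *   bit 2: secondary channel in native mode
     *   bit 3: secondary channel mode is programmable
     * Hence the two values handled below: 0xa (both channels legacy,
     * both programmable) and 0xf (both channels native, both
     * programmable).
     */
    static bool ide_channel_is_native(uint8_t prog_if, int channel)
    {
        return prog_if & (channel ? 0x04 : 0x01);
    }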
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20231116103355.588580-3-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/hw/ide/pci.h | 1 +
hw/ide/pci.c | 84 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 85 insertions(+)
diff --git a/include/hw/ide/pci.h b/include/hw/ide/pci.h
index 1ff469de87..a814a0a7c3 100644
--- a/include/hw/ide/pci.h
+++ b/include/hw/ide/pci.h
@@ -61,6 +61,7 @@ void bmdma_cmd_writeb(BMDMAState *bm, uint32_t val);
void bmdma_status_writeb(BMDMAState *bm, uint32_t val);
extern MemoryRegionOps bmdma_addr_ioport_ops;
void pci_ide_create_devs(PCIDevice *dev);
+void pci_ide_update_mode(PCIIDEState *s);
extern const VMStateDescription vmstate_ide_pci;
extern const MemoryRegionOps pci_ide_cmd_le_ops;
diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index a25b352537..810c6b6d98 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -104,6 +104,90 @@ const MemoryRegionOps pci_ide_data_le_ops = {
.endianness = DEVICE_LITTLE_ENDIAN,
};
+void pci_ide_update_mode(PCIIDEState *s)
+{
+ PCIDevice *d = PCI_DEVICE(s);
+ uint8_t mode = d->config[PCI_CLASS_PROG];
+
+ /*
+ * This function only configures the BARs/ioports for now: PCI IDE
+ * controllers must manage their own IRQ routing
+ */
+
+ switch (mode & 0xf) {
+ case 0xa:
+ /* Both channels legacy mode */
+
+ /*
+ * TODO: according to the PCI IDE specification the BARs should
+ * be completely disabled, however Linux for the pegasos2
+ * machine still accesses the BAR addresses after switching to legacy
+ * mode. Hence we leave them active for now.
+ */
+
+ /* Clear interrupt pin */
+ pci_config_set_interrupt_pin(d->config, 0);
+
+ /* Add legacy IDE ports */
+ if (!s->bus[0].portio_list.owner) {
+ portio_list_init(&s->bus[0].portio_list, OBJECT(d),
+ ide_portio_list, &s->bus[0], "ide");
+ portio_list_add(&s->bus[0].portio_list,
+ pci_address_space_io(d), 0x1f0);
+ }
+
+ if (!s->bus[0].portio2_list.owner) {
+ portio_list_init(&s->bus[0].portio2_list, OBJECT(d),
+ ide_portio2_list, &s->bus[0], "ide");
+ portio_list_add(&s->bus[0].portio2_list,
+ pci_address_space_io(d), 0x3f6);
+ }
+
+ if (!s->bus[1].portio_list.owner) {
+ portio_list_init(&s->bus[1].portio_list, OBJECT(d),
+ ide_portio_list, &s->bus[1], "ide");
+ portio_list_add(&s->bus[1].portio_list,
+ pci_address_space_io(d), 0x170);
+ }
+
+ if (!s->bus[1].portio2_list.owner) {
+ portio_list_init(&s->bus[1].portio2_list, OBJECT(d),
+ ide_portio2_list, &s->bus[1], "ide");
+ portio_list_add(&s->bus[1].portio2_list,
+ pci_address_space_io(d), 0x376);
+ }
+ break;
+
+ case 0xf:
+ /* Both channels native mode */
+
+ /* Set interrupt pin */
+ pci_config_set_interrupt_pin(d->config, 1);
+
+ /* Remove legacy IDE ports */
+ if (s->bus[0].portio_list.owner) {
+ portio_list_del(&s->bus[0].portio_list);
+ portio_list_destroy(&s->bus[0].portio_list);
+ }
+
+ if (s->bus[0].portio2_list.owner) {
+ portio_list_del(&s->bus[0].portio2_list);
+ portio_list_destroy(&s->bus[0].portio2_list);
+ }
+
+ if (s->bus[1].portio_list.owner) {
+ portio_list_del(&s->bus[1].portio_list);
+ portio_list_destroy(&s->bus[1].portio_list);
+ }
+
+ if (s->bus[1].portio2_list.owner) {
+ portio_list_del(&s->bus[1].portio2_list);
+ portio_list_destroy(&s->bus[1].portio2_list);
+ }
+ break;
+ }
+}
+
static IDEState *bmdma_active_if(BMDMAState *bmdma)
{
assert(bmdma->bus->retry_unit != (uint8_t)-1);
--
2.42.0
* [PULL 8/9] ide/via: don't attempt to set default BAR addresses
From: Kevin Wolf @ 2023-11-21 11:53 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
The via-ide device currently attempts to set the default BAR addresses to
the values shown in the datasheet, but this doesn't work for two reasons:
firstly, the datasheet values for BARs 0-3 do not have the bottom bits set
to PCI_BASE_ADDRESS_SPACE_IO, and secondly, the initial PCI bus reset
clears the values of all PCI device BARs after the device itself has been
reset.
Remove the setting of the default BAR addresses from via_ide_reset() to ensure
there is no doubt that these values are never exposed to the guest.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20231116103355.588580-4-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
hw/ide/via.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/hw/ide/via.c b/hw/ide/via.c
index fff23803a6..87b134083a 100644
--- a/hw/ide/via.c
+++ b/hw/ide/via.c
@@ -132,11 +132,6 @@ static void via_ide_reset(DeviceState *dev)
pci_set_word(pci_conf + PCI_STATUS, PCI_STATUS_FAST_BACK |
PCI_STATUS_DEVSEL_MEDIUM);
- pci_set_long(pci_conf + PCI_BASE_ADDRESS_0, 0x000001f0);
- pci_set_long(pci_conf + PCI_BASE_ADDRESS_1, 0x000003f4);
- pci_set_long(pci_conf + PCI_BASE_ADDRESS_2, 0x00000170);
- pci_set_long(pci_conf + PCI_BASE_ADDRESS_3, 0x00000374);
- pci_set_long(pci_conf + PCI_BASE_ADDRESS_4, 0x0000cc01); /* BMIBA: 20-23h */
pci_set_long(pci_conf + PCI_INTERRUPT_LINE, 0x0000010e);
/* IDE chip enable, IDE configuration 1/2, IDE FIFO Configuration*/
--
2.42.0
* [PULL 9/9] hw/ide/via: implement legacy/native mode switching
From: Kevin Wolf @ 2023-11-21 11:53 UTC
To: qemu-block; +Cc: kwolf, qemu-devel
From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Allow the VIA IDE controller to switch between legacy and native mode by
calling pci_ide_update_mode() to reconfigure the device whenever
PCI_CLASS_PROG is updated.
This patch moves the initial setting of PCI_CLASS_PROG from via_ide_realize() to
via_ide_reset(), and removes the direct setting of PCI_INTERRUPT_PIN during PCI
bus reset since this is now managed by pci_ide_update_mode(). This ensures that
the device configuration is always consistent with respect to the currently
selected mode.
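From the guest's point of view the switch is a plain config space write; a
hedged illustration (config_writeb() stands in for whatever config space
accessor the guest OS provides, and is not a QEMU function):

    config_writeb(dev, PCI_CLASS_PROG, 0x8f);  /* both channels native */
    /* ... program BARs 0-3 with I/O addresses, use the PCI interrupt ... */
    config_writeb(dev, PCI_CLASS_PROG, 0x8a);  /* back to legacy mode */
    /* The BARs now read back as zero, the legacy ports at 0x1f0/0x3f6
     * and 0x170/0x376 are active again, and legacy IRQ routing is in
     * effect. Note that wmask[PCI_CLASS_PROG] = 5 below means a guest
     * can only toggle bits 0 and 2, i.e. the per-channel native bits. */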
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-ID: <20231116103355.588580-5-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
hw/ide/via.c | 39 +++++++++++++++++++++++++++++++++++++--
1 file changed, 37 insertions(+), 2 deletions(-)
diff --git a/hw/ide/via.c b/hw/ide/via.c
index 87b134083a..2d3124ebd7 100644
--- a/hw/ide/via.c
+++ b/hw/ide/via.c
@@ -28,6 +28,7 @@
#include "hw/pci/pci.h"
#include "migration/vmstate.h"
#include "qemu/module.h"
+#include "qemu/range.h"
#include "sysemu/dma.h"
#include "hw/isa/vt82c686.h"
#include "hw/ide/pci.h"
@@ -128,11 +129,14 @@ static void via_ide_reset(DeviceState *dev)
ide_bus_reset(&d->bus[i]);
}
+ pci_config_set_prog_interface(pci_conf, 0x8a); /* legacy mode */
+ pci_ide_update_mode(d);
+
pci_set_word(pci_conf + PCI_COMMAND, PCI_COMMAND_IO | PCI_COMMAND_WAIT);
pci_set_word(pci_conf + PCI_STATUS, PCI_STATUS_FAST_BACK |
PCI_STATUS_DEVSEL_MEDIUM);
- pci_set_long(pci_conf + PCI_INTERRUPT_LINE, 0x0000010e);
+ pci_set_byte(pci_conf + PCI_INTERRUPT_LINE, 0xe);
/* IDE chip enable, IDE configuration 1/2, IDE FIFO Configuration*/
pci_set_long(pci_conf + 0x40, 0x0a090600);
@@ -154,6 +158,36 @@ static void via_ide_reset(DeviceState *dev)
pci_set_long(pci_conf + 0xc0, 0x00020001);
}
+static uint32_t via_ide_cfg_read(PCIDevice *pd, uint32_t addr, int len)
+{
+ uint32_t val = pci_default_read_config(pd, addr, len);
+ uint8_t mode = pd->config[PCI_CLASS_PROG];
+
+ if ((mode & 0xf) == 0xa && ranges_overlap(addr, len,
+ PCI_BASE_ADDRESS_0, 16)) {
+ /* BARs always read back zero in legacy mode */
+ for (int i = addr; i < addr + len; i++) {
+ if (i >= PCI_BASE_ADDRESS_0 && i < PCI_BASE_ADDRESS_0 + 16) {
+ val &= ~(0xffULL << ((i - addr) << 3));
+ }
+ }
+ }
+
+ return val;
+}
+
+static void via_ide_cfg_write(PCIDevice *pd, uint32_t addr,
+ uint32_t val, int len)
+{
+ PCIIDEState *d = PCI_IDE(pd);
+
+ pci_default_write_config(pd, addr, val, len);
+
+ if (range_covers_byte(addr, len, PCI_CLASS_PROG)) {
+ pci_ide_update_mode(d);
+ }
+}
+
static void via_ide_realize(PCIDevice *dev, Error **errp)
{
PCIIDEState *d = PCI_IDE(dev);
@@ -161,7 +195,6 @@ static void via_ide_realize(PCIDevice *dev, Error **errp)
uint8_t *pci_conf = dev->config;
int i;
- pci_config_set_prog_interface(pci_conf, 0x8a); /* legacy mode */
pci_set_long(pci_conf + PCI_CAPABILITY_LIST, 0x000000c0);
dev->wmask[PCI_INTERRUPT_LINE] = 0;
dev->wmask[PCI_CLASS_PROG] = 5;
@@ -216,6 +249,8 @@ static void via_ide_class_init(ObjectClass *klass, void *data)
/* Reason: only works as function of VIA southbridge */
dc->user_creatable = false;
+ k->config_read = via_ide_cfg_read;
+ k->config_write = via_ide_cfg_write;
k->realize = via_ide_realize;
k->exit = via_ide_exitfn;
k->vendor_id = PCI_VENDOR_ID_VIA;
--
2.42.0
* Re: [PULL 0/9] Block layer fixes for 8.2.0-rc1
From: Stefan Hajnoczi @ 2023-11-21 16:54 UTC
To: Kevin Wolf; +Cc: qemu-block, kwolf, qemu-devel
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/8.2 for any user-visible changes.