From: "Alex Bennée" <alex.bennee@linaro.org>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
Doug Gale <doug16k@gmail.com>, qemu-devel <qemu-devel@nongnu.org>
Subject: Re: Lockup with --accel tcg,thread=single
Date: Mon, 30 Sep 2019 20:20:58 +0100
Message-ID: <87ftkdlhet.fsf@linaro.org>
In-Reply-To: <96703fc4-cbeb-5487-89b2-78c37b40237a@redhat.com>
Paolo Bonzini <pbonzini@redhat.com> writes:
> On 30/09/19 17:37, Peter Maydell wrote:
>> Not sure currently what the best fix is here.
>
> Since thread=single uses just one queued work list, could it be as
> simple as
Does it? I thought that was the case, but:
static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
{
    qemu_mutex_lock(&cpu->work_mutex);
    if (cpu->queued_work_first == NULL) {
        cpu->queued_work_first = wi;
    } else {
        cpu->queued_work_last->next = wi;
    }
    cpu->queued_work_last = wi;
    wi->next = NULL;
    wi->done = false;
    qemu_mutex_unlock(&cpu->work_mutex);
    qemu_cpu_kick(cpu);
}
Does seem to imply each vCPU's CPUState carries its own queue. That's not
to say there shouldn't be a single work queue for thread=single.
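
To make that concrete, here is a minimal stand-alone sketch (plain C, not
QEMU code; vcpu_t, queue_work() and rr_would_notice_work() are names
invented for the example) of why a per-vCPU list matters to a single
round-robin thread: if the loop condition only tests the list of the vCPU
it is about to run, work queued on a different vCPU's list is invisible
to it.

/* Illustrative only -- not QEMU code; vcpu_t, queue_work() and
 * rr_would_notice_work() are invented names for this sketch. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct work_item {
    struct work_item *next;
} work_item_t;

typedef struct vcpu {
    work_item_t *queued_work_first;   /* one list per vCPU, as in CPUState */
    work_item_t *queued_work_last;
} vcpu_t;

/* Same shape as queue_work_on_cpu() above: append to *this* vCPU's list. */
static void queue_work(vcpu_t *cpu, work_item_t *wi)
{
    if (cpu->queued_work_first == NULL) {
        cpu->queued_work_first = wi;
    } else {
        cpu->queued_work_last->next = wi;
    }
    cpu->queued_work_last = wi;
    wi->next = NULL;
}

/* A round-robin loop condition that only looks at the vCPU it is
 * about to run cannot see work sitting on another vCPU's list. */
static bool rr_would_notice_work(const vcpu_t *running)
{
    return running->queued_work_first != NULL;
}

int main(void)
{
    vcpu_t cpu0 = {0}, cpu1 = {0};
    work_item_t wi = {0};

    queue_work(&cpu0, &wi);   /* work lands on cpu0's list ... */
    printf("loop is on cpu1, work visible: %d\n",
           rr_would_notice_work(&cpu1));   /* ... prints 0 */
    return 0;
}

Built with gcc and run, it prints 0: the work queued on cpu0 never shows
up while the loop is only looking at cpu1.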
>
> diff --git a/cpus.c b/cpus.c
> index d2c61ff..314f9aa 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1539,7 +1539,7 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
>              cpu = first_cpu;
>          }
>
> -        while (cpu && !cpu->queued_work_first && !cpu->exit_request) {
> +        while (cpu && !first_cpu->queued_work_first && !cpu->exit_request) {
>
>              atomic_mb_set(&tcg_current_rr_cpu, cpu);
>              current_cpu = cpu;
>
> or something like that?
>
>> Side note -- this use of run_on_cpu() means that we now drop
>> the iothread lock within memory_region_snapshot_and_clear_dirty(),
>> which we didn't before. This means that a vCPU thread can now
>> get in and execute an access to the device registers of
>> hw/display/vga.c, updating its state in a way I suspect that the
>> device model code is not expecting... So maybe the right answer
>> here should be to come up with a fix for the race that 9458a9a1
>> addresses that doesn't require us to drop the iothread lock in
>> memory_region_snapshot_and_clear_dirty() ? Alternatively we need
>> to audit the callers and flag in the documentation that this
>> function has the unexpected side effect of briefly dropping the
>> iothread lock.
>
> Yes, this is intended. There shouldn't be side effects other than
> possibly a one-frame flash for all callers.
>
> Paolo
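
As a side note on the locking pattern being discussed: here is a minimal
stand-alone pthread sketch (not the actual run_on_cpu() /
memory_region_snapshot_and_clear_dirty() code; bql, do_work_and_wait()
and vcpu_thread() are invented names) of why the caller ends up dropping
the big lock while it waits for queued work, and of the window that
opens for another thread while it does.

/* Illustrative only -- not QEMU code; bql, do_work_and_wait() and
 * vcpu_thread() are invented names for this sketch. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;  /* the "iothread lock" */
static pthread_cond_t  work_done = PTHREAD_COND_INITIALIZER;
static bool done;

/* Called with bql held.  Waiting on the condition variable releases
 * bql until the work is flagged done -- that is the window in which
 * another thread can grab the lock. */
static void do_work_and_wait(void)
{
    while (!done) {
        pthread_cond_wait(&work_done, &bql);
    }
}

static void *vcpu_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&bql);   /* only succeeds inside the wait window */
    printf("vCPU thread holds the lock: a device register access could run here\n");
    done = true;                /* protected by bql in this sketch */
    pthread_cond_signal(&work_done);
    pthread_mutex_unlock(&bql);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&bql);         /* e.g. inside the display update */
    pthread_create(&t, NULL, vcpu_thread, NULL);
    do_work_and_wait();               /* drops bql while waiting, retakes it */
    pthread_mutex_unlock(&bql);
    pthread_join(t, NULL);
    return 0;
}

The printf in vcpu_thread() can only fire inside that window, which is
the window Peter is worried a vCPU could use to poke the VGA registers
mid-snapshot.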
--
Alex Bennée
Thread overview: 11+ messages
2019-09-30 13:15 Lockup with --accel tcg,thread=single Doug Gale
2019-09-30 15:37 ` Peter Maydell
2019-09-30 16:38 ` Alex Bennée
2019-09-30 17:47 ` Paolo Bonzini
2019-09-30 19:20 ` Alex Bennée [this message]
2019-09-30 20:40 ` Paolo Bonzini
2019-10-01 8:42 ` Alex Bennée
2019-10-01 9:16 ` Paolo Bonzini
2019-10-01 15:28 ` Alex Bennée
2019-10-01 12:10 ` Peter Maydell