From: Richard Henderson <richard.henderson@linaro.org>
To: Jamie Iles <quic_jiles@quicinc.com>, qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: Re: [PATCH v3 0/2] accel/tcg/tcg-accel-ops-rr: ensure fairness with icount
Date: Sat, 29 Apr 2023 10:28:03 +0100
Message-ID: <7277aa1e-413d-2f7a-37ce-23ea1f54c09a@linaro.org>
In-Reply-To: <20230427020925.51003-1-quic_jiles@quicinc.com>
On 4/27/23 03:09, Jamie Iles wrote:
> From: Jamie Iles <jiles@qti.qualcomm.com>
>
> The round-robin scheduler will iterate over the CPU list with an
> assigned budget until the next timer expiry and may exit early because
> of a TB exit.  This is fine under normal operation, but with icount
> enabled and SMP it is possible for a CPU to be starved of run time and
> for the system to live-lock.
>
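A minimal sketch of the loop shape being described, assuming placeholder
helpers for the timer deadline and per-CPU execution (neither name exists
in QEMU); the real loop lives in accel/tcg/tcg-accel-ops-rr.c:

/*
 * Illustrative only: a stripped-down view of the round-robin loop.
 * next_timer_deadline_ns() and exec_cpu_with_budget() are placeholders,
 * not real QEMU helpers.
 */
#include "qemu/osdep.h"
#include "hw/core/cpu.h"

static void rr_loop_sketch(void)
{
    for (;;) {
        /* One budget covering everything up to the next virtual timer. */
        int64_t budget = next_timer_deadline_ns();        /* placeholder */

        for (CPUState *cpu = first_cpu; cpu; cpu = CPU_NEXT(cpu)) {
            /*
             * Without per-CPU splitting, the first runnable CPU (the
             * busy-waiting one in the example below) can consume the
             * whole budget here, leaving nothing for the CPUs after it.
             */
            budget -= exec_cpu_with_budget(cpu, budget);  /* placeholder */
        }
    }
}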
> For example, when booting a riscv64 platform with '-icount
> shift=0,align=off,sleep=on -smp 2' we observe a livelock once the
> kernel has timers enabled and starts performing TLB shootdowns.  In
> this case CPU 0 is in M-mode with interrupts disabled, sending an IPI
> to CPU 1.  As we enter the TCG loop, we assign the icount budget up to
> the next timer interrupt to CPU 0 and begin executing; the guest sits
> in a busy loop and exhausts all of the budget before we try to execute
> CPU 1, the target of the IPI.  CPU 1 is then left with no budget with
> which to execute, and the process repeats.
>
> We add some fairness here by splitting the budget evenly across all of
> the CPUs on the thread before entering each one.  The CPU count is
> cached, keyed on the CPU list generation ID, to avoid iterating the
> list on each loop iteration.  With this change it is possible to boot
> an SMP rv64 guest with icount enabled and no hangs.
>
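A hedged sketch of that idea, reusing QEMU's existing QEMU_LOCK_GUARD and
cpu_list_generation_id_get() facilities; this is a reconstruction of the
approach described above, not the patch itself:

#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "exec/cpu-common.h"
#include "hw/core/cpu.h"

/*
 * Count the CPUs once per CPU-list generation instead of on every
 * scheduler iteration.  Patch 1/2 exposes qemu_cpu_list_lock so that
 * QEMU_LOCK_GUARD can be used here.
 */
static unsigned int rr_cpu_count(void)
{
    static unsigned int last_gen_id = ~0;
    static unsigned int count;
    CPUState *cpu;

    QEMU_LOCK_GUARD(&qemu_cpu_list_lock);

    if (cpu_list_generation_id_get() != last_gen_id) {
        count = 0;
        CPU_FOREACH(cpu) {
            count++;
        }
        last_gen_id = cpu_list_generation_id_get();
    }
    return count;
}

/* Give each CPU an equal share of the budget to the next timer deadline. */
static int64_t rr_per_cpu_budget(int64_t total_budget)
{
    return total_budget / MAX(rr_cpu_count(), 1);
}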
> New in v3 (addressing feedback from Richard Henderson):
> - Additional patch to use QEMU_LOCK_GUARD with qemu_cpu_list_lock where
> appropriate
> - Move rr_cpu_count() call to be conditional on icount_enabled()
> - Initialize cpu_budget to 0
>
> Jamie Iles (2):
> cpu: expose qemu_cpu_list_lock for lock-guard use
> accel/tcg/tcg-accel-ops-rr: ensure fairness with icount
It appears as if one of these two patches causes a failure in replay, e.g.
https://gitlab.com/rth7680/qemu/-/jobs/4200609234#L4162
Would you have a look, please?
r~
Thread overview: 10+ messages
2023-04-27 2:09 [PATCH v3 0/2] accel/tcg/tcg-accel-ops-rr: ensure fairness with icount Jamie Iles
2023-04-27 2:09 ` [PATCH v3 1/2] cpu: expose qemu_cpu_list_lock for lock-guard use Jamie Iles
2023-04-27 7:44 ` Richard Henderson
2023-04-28 23:05 ` Philippe Mathieu-Daudé
2023-04-27 2:09 ` [PATCH v3 2/2] accel/tcg/tcg-accel-ops-rr: ensure fairness with icount Jamie Iles
2023-04-27 7:46 ` Richard Henderson
2023-04-29 9:28 ` Richard Henderson [this message]
2023-04-29 12:23 ` [PATCH v3 0/2] " Philippe Mathieu-Daudé
2023-05-03 9:44 ` Jamie Iles
2023-05-10 15:01 ` Richard Henderson