From: Nicholas Piggin <npiggin@gmail.com>
To: qemu-devel@nongnu.org
Cc: "Nicholas Piggin" <npiggin@gmail.com>,
qemu-ppc@nongnu.org,
"Daniel Henrique Barboza" <danielhb413@gmail.com>,
"Cédric Le Goater" <clg@kaod.org>,
"David Gibson" <david@gibson.dropbear.id.au>,
"Greg Kurz" <groug@kaod.org>,
"Harsh Prateek Bora" <harshpb@linux.ibm.com>,
qemu-stable@nongnu.org
Subject: [PATCH] hw/ppc: Fix clock update drift
Date: Thu, 29 Jun 2023 12:07:13 +1000
Message-ID: <20230629020713.327745-1-npiggin@gmail.com>
The clock update logic reads the clock twice to compute the new clock
value: a value derived from the later read is subtracted from a value
derived from the earlier read, so the delta between the two reads is
lost on each update.

This can ultimately result in time becoming unsynchronized between CPUs,
which can cause OS lockups, timeouts, watchdogs, etc. It can be observed
when running a KVM guest (which generates lots of TB updates) on a
powernv SMP machine.

Fix this by reading the clock once and using that single value for both
the read and the store.
Cc: qemu-stable@nongnu.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
I also made a test case that can trigger this with kvm-unit-tests, but
it's been taking me a while to get that upstreamed.
Thanks,
Nick
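
For reference, the arithmetic at fault can be modeled outside QEMU. The
standalone sketch below is not QEMU code: get_tb()/store_tb(), the
tb_freq value and the clock values are illustrative stand-ins, and it
assumes cpu_ppc_store_tb() derives the new offset as the requested value
minus the tick count computed from its clock argument (which the hunk
context suggests but does not show). It only demonstrates why two clock
reads lose the delta between them:

/* Standalone model of the offset arithmetic; illustrative, not QEMU code. */
#include <stdint.h>
#include <stdio.h>

#define NS_PER_SEC 1000000000ULL

static uint64_t tb_freq = 512000000;    /* illustrative timebase frequency */
static int64_t tb_offset;

/* Shaped like cpu_ppc_get_tb(): ticks derived from the clock, plus offset. */
static uint64_t get_tb(int64_t clock_ns)
{
    return (uint64_t)clock_ns * tb_freq / NS_PER_SEC + tb_offset;
}

/* Shaped like cpu_ppc_store_tb(): pick an offset so that get_tb() at this
 * clock value would return 'value'. */
static void store_tb(int64_t clock_ns, uint64_t value)
{
    tb_offset = (int64_t)(value - (uint64_t)clock_ns * tb_freq / NS_PER_SEC);
}

int main(void)
{
    int64_t t1 = 5000000000;            /* first clock read, in ns */
    int64_t t2 = t1 + 1000;             /* second read, 1000 ns later */

    /* Buggy sequence: tb comes from the earlier read, the new offset from
     * the later one, so the timebase silently jumps back by the ticks that
     * elapsed between the two reads. */
    uint64_t tb = get_tb(t1);
    store_tb(t2, tb);
    printf("ticks lost per update: %llu\n",
           (unsigned long long)((uint64_t)(t2 - t1) * tb_freq / NS_PER_SEC));

    /* Fixed sequence: one read feeds both steps, nothing is lost. */
    uint64_t tb2 = get_tb(t2);
    store_tb(t2, tb2);
    return 0;
}

Compiled and run, the model prints the number of ticks that disappear on
each store; with frequent TB writes (as a KVM guest generates) the
per-CPU timebases drift apart, matching the symptom described above.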
hw/ppc/ppc.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index 82e4408c5c..6233f43c01 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -535,23 +535,24 @@ static inline void cpu_ppc_store_tb(ppc_tb_t *tb_env, uint64_t vmclk,
void cpu_ppc_store_tbl (CPUPPCState *env, uint32_t value)
{
ppc_tb_t *tb_env = env->tb_env;
+ int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
uint64_t tb;
- tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
+ tb = cpu_ppc_get_tb(tb_env, clock, tb_env->tb_offset);
tb &= 0xFFFFFFFF00000000ULL;
- cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- &tb_env->tb_offset, tb | (uint64_t)value);
+ cpu_ppc_store_tb(tb_env, clock, &tb_env->tb_offset, tb | (uint64_t)value);
}
static inline void _cpu_ppc_store_tbu(CPUPPCState *env, uint32_t value)
{
ppc_tb_t *tb_env = env->tb_env;
+ int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
uint64_t tb;
- tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->tb_offset);
+ tb = cpu_ppc_get_tb(tb_env, clock, tb_env->tb_offset);
tb &= 0x00000000FFFFFFFFULL;
- cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- &tb_env->tb_offset, ((uint64_t)value << 32) | tb);
+ cpu_ppc_store_tb(tb_env, clock, &tb_env->tb_offset,
+ ((uint64_t)value << 32) | tb);
}
void cpu_ppc_store_tbu (CPUPPCState *env, uint32_t value)
@@ -584,23 +585,24 @@ uint32_t cpu_ppc_load_atbu (CPUPPCState *env)
void cpu_ppc_store_atbl (CPUPPCState *env, uint32_t value)
{
ppc_tb_t *tb_env = env->tb_env;
+ int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
uint64_t tb;
- tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
+ tb = cpu_ppc_get_tb(tb_env, clock, tb_env->atb_offset);
tb &= 0xFFFFFFFF00000000ULL;
- cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- &tb_env->atb_offset, tb | (uint64_t)value);
+ cpu_ppc_store_tb(tb_env, clock, &tb_env->atb_offset, tb | (uint64_t)value);
}
void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value)
{
ppc_tb_t *tb_env = env->tb_env;
+ int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
uint64_t tb;
- tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL), tb_env->atb_offset);
+ tb = cpu_ppc_get_tb(tb_env, clock, tb_env->atb_offset);
tb &= 0x00000000FFFFFFFFULL;
- cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- &tb_env->atb_offset, ((uint64_t)value << 32) | tb);
+ cpu_ppc_store_tb(tb_env, clock, &tb_env->atb_offset,
+ ((uint64_t)value << 32) | tb);
}
uint64_t cpu_ppc_load_vtb(CPUPPCState *env)
@@ -622,14 +624,13 @@ void cpu_ppc_store_vtb(CPUPPCState *env, uint64_t value)
void cpu_ppc_store_tbu40(CPUPPCState *env, uint64_t value)
{
ppc_tb_t *tb_env = env->tb_env;
+ int64_t clock = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
uint64_t tb;
- tb = cpu_ppc_get_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- tb_env->tb_offset);
+ tb = cpu_ppc_get_tb(tb_env, clock, tb_env->tb_offset);
tb &= 0xFFFFFFUL;
tb |= (value & ~0xFFFFFFUL);
- cpu_ppc_store_tb(tb_env, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
- &tb_env->tb_offset, tb);
+ cpu_ppc_store_tb(tb_env, clock, &tb_env->tb_offset, tb);
}
static void cpu_ppc_tb_stop (CPUPPCState *env)
--
2.40.1