From: Nicholas Piggin <npiggin@gmail.com>
To: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: "Nicholas Piggin" <npiggin@gmail.com>,
"Cédric Le Goater" <clg@kaod.org>,
"David Gibson" <david@gibson.dropbear.id.au>,
"Greg Kurz" <groug@kaod.org>,
"Harsh Prateek Bora" <harshpb@linux.ibm.com>,
qemu-ppc@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH 4/6] hw/ppc: Avoid decrementer rounding errors
Date: Thu, 27 Jul 2023 04:22:28 +1000
Message-ID: <20230726182230.433945-5-npiggin@gmail.com>
In-Reply-To: <20230726182230.433945-1-npiggin@gmail.com>
The decrementer register contains a relative time in timebase units.
When writing to DECR, this is converted and stored as an absolute value
in nanosecond units; reading DECR converts it back to a relative time
in timebase units.

The tb<->ns conversion of the relative part can cause rounding, such
that a value written to the decrementer can read back as a different
value while time is held constant. This is a particular problem for
deterministic icount and record-replay traces.
Fix this by storing the absolute value in timebase units rather than
nanoseconds. The math before:
store: decr_next = now_ns + decr * ns_per_sec / tb_per_sec
load: decr = (decr_next - now_ns) * tb_per_sec / ns_per_sec
load(store): decr = decr * ns_per_sec / tb_per_sec * tb_per_sec / ns_per_sec
After:
store: decr_next = now_ns * tb_per_sec / ns_per_sec + decr
load: decr = decr_next - now_ns * tb_per_sec / ns_per_sec
load(store): decr = decr
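
To make the rounding concrete, here is a standalone sketch (not QEMU
code; the 512 MHz timebase and the simplified muldiv64() are only
illustrative stand-ins for tb_env->decr_freq and QEMU's helper). With
the old math a DECR write of 1000 reads back as 999; with the new math
it reads back as 1000:

/* Standalone illustration only -- not part of this patch. */
#include <inttypes.h>
#include <stdio.h>

/* Truncating a * b / c, as QEMU's helper does (needs GCC/Clang __int128). */
static uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
{
    return (uint64_t)((unsigned __int128)a * b / c);
}

int main(void)
{
    const uint32_t tb_per_sec = 512000000;  /* illustrative decr frequency */
    const uint32_t ns_per_sec = 1000000000;
    const uint64_t now_ns = 0;              /* time held constant */
    const uint64_t decr = 1000;             /* value written to DECR */

    /* Before: store absolute nanoseconds, convert back to tb on load. */
    uint64_t next_ns  = now_ns + muldiv64(decr, ns_per_sec, tb_per_sec);
    uint64_t read_old = muldiv64(next_ns - now_ns, tb_per_sec, ns_per_sec);

    /* After: store absolute timebase units, load is a plain subtraction. */
    uint64_t next_tb  = muldiv64(now_ns, tb_per_sec, ns_per_sec) + decr;
    uint64_t read_new = next_tb - muldiv64(now_ns, tb_per_sec, ns_per_sec);

    /* Prints: wrote 1000, old read 999, new read 1000 */
    printf("wrote %" PRIu64 ", old read %" PRIu64 ", new read %" PRIu64 "\n",
           decr, read_old, read_new);
    return 0;
}
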
Fixes: 9fddaa0c0cab ("PowerPC merge: real time TB and decrementer - faster and simpler exception handling (Jocelyn Mayer)")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/ppc/ppc.c | 41 ++++++++++++++++++++++++-----------------
1 file changed, 24 insertions(+), 17 deletions(-)
diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
index 0e0a3d93c3..fa60f76dd4 100644
--- a/hw/ppc/ppc.c
+++ b/hw/ppc/ppc.c
@@ -686,16 +686,17 @@ bool ppc_decr_clear_on_delivery(CPUPPCState *env)
 static inline int64_t _cpu_ppc_load_decr(CPUPPCState *env, uint64_t next)
 {
     ppc_tb_t *tb_env = env->tb_env;
-    int64_t decr, diff;
+    uint64_t now, n;
+    int64_t decr;
 
-    diff = next - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-    if (diff >= 0) {
-        decr = muldiv64(diff, tb_env->decr_freq, NANOSECONDS_PER_SECOND);
-    } else if (tb_env->flags & PPC_TIMER_BOOKE) {
+    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    n = muldiv64(now, tb_env->decr_freq, NANOSECONDS_PER_SECOND);
+    if (next < n && tb_env->flags & PPC_TIMER_BOOKE) {
         decr = 0;
-    } else {
-        decr = -muldiv64(-diff, tb_env->decr_freq, NANOSECONDS_PER_SECOND);
+    } else {
+        decr = next - n;
     }
+
     trace_ppc_decr_load(decr);
 
     return decr;
@@ -834,11 +835,11 @@ static void __cpu_ppc_store_decr(PowerPCCPU *cpu, uint64_t *nextp,
     /* Calculate the next timer event */
     now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-    next = now + muldiv64(value, NANOSECONDS_PER_SECOND, tb_env->decr_freq);
-    *nextp = next;
+    next = muldiv64(now, tb_env->decr_freq, NANOSECONDS_PER_SECOND) + value;
+    *nextp = next; /* nextp is in timebase units */
 
     /* Adjust timer */
-    timer_mod(timer, next);
+    timer_mod(timer, muldiv64(next, NANOSECONDS_PER_SECOND, tb_env->decr_freq));
 }
static inline void _cpu_ppc_store_decr(PowerPCCPU *cpu, target_ulong decr,
@@ -1153,14 +1154,20 @@ static void start_stop_pit (CPUPPCState *env, ppc_tb_t *tb_env, int is_excp)
     } else {
         trace_ppc4xx_pit_start(ppc40x_timer->pit_reload);
         now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-        next = now + muldiv64(ppc40x_timer->pit_reload,
-                              NANOSECONDS_PER_SECOND, tb_env->decr_freq);
-        if (is_excp)
-            next += tb_env->decr_next - now;
-        if (next == now)
-            next++;
+
+        if (is_excp) {
+            tb_env->decr_next += ppc40x_timer->pit_reload;
+        } else {
+            tb_env->decr_next = muldiv64(now, tb_env->decr_freq,
+                                         NANOSECONDS_PER_SECOND)
+                                + ppc40x_timer->pit_reload;
+        }
+        next = muldiv64(tb_env->decr_next, NANOSECONDS_PER_SECOND,
+                        tb_env->decr_freq);
+        if (next <= now) {
+            next = now + 1;
+        }
         timer_mod(tb_env->decr_timer, next);
-        tb_env->decr_next = next;
     }
 }
--
2.40.1
Thread overview: 14+ messages
2023-07-26 18:22 [PATCH 0/6] ppc fixes possibly for 8.1 Nicholas Piggin
2023-07-26 18:22 ` [PATCH 1/6] target/ppc: Implement ASDR register for ISA v3.0 for HPT Nicholas Piggin
2023-07-27 13:22 ` Cédric Le Goater
2023-07-26 18:22 ` [PATCH 2/6] target/ppc: Fix VRMA page size for ISA v3.0 Nicholas Piggin
2023-07-27 13:07 ` Cédric Le Goater
2023-07-26 18:22 ` [PATCH 3/6] target/ppc: Fix pending HDEC when entering PM state Nicholas Piggin
2023-07-27 12:57 ` Cédric Le Goater
2023-07-26 18:22 ` Nicholas Piggin [this message]
2023-07-26 18:22 ` [PATCH 5/6] hw/ppc: Always store the decrementer value Nicholas Piggin
2023-07-27 12:26 ` Cédric Le Goater
2023-07-30 9:40 ` Nicholas Piggin
2023-07-30 16:18 ` Cédric Le Goater
2023-07-26 18:22 ` [PATCH 6/6] target/ppc: Migrate DECR SPR Nicholas Piggin
2023-07-28 20:05 ` [PATCH 0/6] ppc fixes possibly for 8.1 Daniel Henrique Barboza