* [PATCH] accel/tcg: Hoist first page lookup above pointer_wrap
@ 2025-10-04 19:24 Richard Henderson
  2025-10-08 14:38 ` Michael Tokarev
  2025-10-13 14:28 ` Philippe Mathieu-Daudé
  0 siblings, 2 replies; 3+ messages in thread
From: Richard Henderson @ 2025-10-04 19:24 UTC (permalink / raw)
  To: qemu-devel; +Cc: mjt, qemu-stable

For strict alignment targets we registered cpu_pointer_wrap_notreached,
but generic code used it before recognizing the alignment exception.
Hoist the first page lookup, so that the alignment exception happens first.

Cc: qemu-stable@nongnu.org
Buglink: https://bugs.debian.org/1112285
Fixes: a4027ed7d4be ("target: Use cpu_pointer_wrap_notreached for strict align targets")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 2a6aa01c57..a09c2ed857 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1744,6 +1744,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
                        uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
 {
     bool crosspage;
+    vaddr last;
     int flags;
 
     l->memop = get_memop(oi);
@@ -1753,13 +1754,15 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
 
     l->page[0].addr = addr;
     l->page[0].size = memop_size(l->memop);
-    l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK;
+    l->page[1].addr = 0;
     l->page[1].size = 0;
-    crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK;
 
+    /* Lookup and recognize exceptions from the first page. */
+    mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
+
+    last = addr + l->page[0].size - 1;
+    crosspage = (addr ^ last) & TARGET_PAGE_MASK;
     if (likely(!crosspage)) {
-        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
-
         flags = l->page[0].flags;
         if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) {
             mmu_watch_or_dirty(cpu, &l->page[0], type, ra);
@@ -1769,18 +1772,18 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         }
     } else {
         /* Finish compute of page crossing. */
-        int size0 = l->page[1].addr - addr;
+        vaddr addr1 = last & TARGET_PAGE_MASK;
+        int size0 = addr1 - addr;
         l->page[1].size = l->page[0].size - size0;
         l->page[0].size = size0;
-
         l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
-                                                         l->page[1].addr, addr);
+                                                         addr1, addr);
 
         /*
-         * Lookup both pages, recognizing exceptions from either.  If the
-         * second lookup potentially resized, refresh first CPUTLBEntryFull.
+         * Lookup and recognize exceptions from the second page.
+         * If the lookup potentially resized the table, refresh the
+         * first CPUTLBEntryFull pointer.
          */
-        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
         if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
             uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
             l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
-- 
2.43.0
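
To make the ordering problem concrete, here is a minimal standalone C
sketch; it is not QEMU code.  toy_lookup_page() and
toy_pointer_wrap_notreached() are hypothetical stand-ins for
mmu_lookup1() and cpu_pointer_wrap_notreached(), and only the control
flow mirrors mmu_lookup().  On a strict-alignment target an access of
size N must be N-aligned, so (for N no larger than the page size) it
can never actually cross a page, which is why the pointer_wrap hook is
registered as "not reached".  Before this patch, pointer_wrap was
consulted before the first-page lookup, so a misaligned page-crossing
access tripped the not-reached assertion instead of raising the guest
alignment fault; the sketch shows the fixed ordering:

/*
 * Standalone toy model of the ordering fixed by this patch.
 * Not QEMU code: the names below are hypothetical stand-ins for
 * mmu_lookup1() and cpu_pointer_wrap_notreached().
 */
#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_PAGE_MASK  (~(uint64_t)0xfff)   /* 4 KiB pages */

/* Strict-alignment targets never wrap a page-crossing pointer,
 * so reaching this hook is a bug (cf. cpu_pointer_wrap_notreached). */
static uint64_t toy_pointer_wrap_notreached(uint64_t addr)
{
    assert(!"pointer_wrap called on a strict-alignment target");
    return addr;
}

/* Stand-in for mmu_lookup1(): recognize the alignment exception.
 * Returns false when the access faults (QEMU would longjmp out). */
static bool toy_lookup_page(uint64_t addr, int size)
{
    if (addr & (size - 1)) {
        printf("alignment fault at 0x%" PRIx64 "\n", addr);
        return false;
    }
    return true;
}

static void toy_access(uint64_t addr, int size)
{
    /* The fix: look up (and possibly fault on) the first page
     * *before* asking the target how to wrap the second page. */
    if (!toy_lookup_page(addr, size)) {
        return;
    }

    uint64_t last = addr + size - 1;
    if ((addr ^ last) & TOY_PAGE_MASK) {
        /* Only reached by an access that is aligned yet crosses a page,
         * which cannot happen when size <= alignment <= page size. */
        toy_pointer_wrap_notreached(last & TOY_PAGE_MASK);
    }
}

int main(void)
{
    toy_access(0x1fff, 4);   /* misaligned and page-crossing: faults cleanly */
    toy_access(0x2000, 4);   /* aligned: never consults the wrap hook */
    return 0;
}

For example, with 4 KiB pages, addr = 0x1fff and a 4-byte access spans
0x1fff..0x2002 and is misaligned: with the old ordering the wrap hook
was consulted for that access before the alignment fault could be
recognized; with the new ordering the fault is raised first and the
hook is never reached.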




* Re: [PATCH] accel/tcg: Hoist first page lookup above pointer_wrap
  2025-10-04 19:24 [PATCH] accel/tcg: Hoist first page lookup above pointer_wrap Richard Henderson
@ 2025-10-08 14:38 ` Michael Tokarev
  2025-10-13 14:28 ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 3+ messages in thread
From: Michael Tokarev @ 2025-10-08 14:38 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: qemu-stable

On 10/4/25 22:24, Richard Henderson wrote:
> For strict alignment targets we registered cpu_pointer_wrap_notreached,
> but generic code used it before recognizing the alignment exception.
> Hoist the first page lookup, so that the alignment exception happens first.
> 
> Cc: qemu-stable@nongnu.org
> Buglink: https://bugs.debian.org/1112285
> Fixes: a4027ed7d4be ("target: Use cpu_pointer_wrap_notreached for strict align targets")
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Hi Richard!

This seems to fix the reported issue.  But I don't have any means to
test it beyond the reproducer already provided, which, I guess, is
similar to what you used to test this change too.  So my testing is
useless :)

Thank you for the fix!

/mjt



* Re: [PATCH] accel/tcg: Hoist first page lookup above pointer_wrap
  2025-10-04 19:24 [PATCH] accel/tcg: Hoist first page lookup above pointer_wrap Richard Henderson
  2025-10-08 14:38 ` Michael Tokarev
@ 2025-10-13 14:28 ` Philippe Mathieu-Daudé
  1 sibling, 0 replies; 3+ messages in thread
From: Philippe Mathieu-Daudé @ 2025-10-13 14:28 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: mjt, qemu-stable, Pierrick Bouvier

On 4/10/25 21:24, Richard Henderson wrote:
> For strict alignment targets we registered cpu_pointer_wrap_notreached,
> but generic code used it before recognizing the alignment exception.
> Hoist the first page lookup, so that the alignment exception happens first.
> 
> Cc: qemu-stable@nongnu.org
> Buglink: https://bugs.debian.org/1112285
> Fixes: a4027ed7d4be ("target: Use cpu_pointer_wrap_notreached for strict align targets")
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   accel/tcg/cputlb.c | 23 +++++++++++++----------
>   1 file changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 2a6aa01c57..a09c2ed857 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1744,6 +1744,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>                          uintptr_t ra, MMUAccessType type, MMULookupLocals *l)
>   {
>       bool crosspage;
> +    vaddr last;
>       int flags;
>   
>       l->memop = get_memop(oi);
> @@ -1753,13 +1754,15 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>   
>       l->page[0].addr = addr;
>       l->page[0].size = memop_size(l->memop);
> -    l->page[1].addr = (addr + l->page[0].size - 1) & TARGET_PAGE_MASK;
> +    l->page[1].addr = 0;
>       l->page[1].size = 0;
> -    crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK;
>   
> +    /* Lookup and recognize exceptions from the first page. */
> +    mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
> +
> +    last = addr + l->page[0].size - 1;
> +    crosspage = (addr ^ last) & TARGET_PAGE_MASK;
>       if (likely(!crosspage)) {
> -        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
> -
>           flags = l->page[0].flags;
>           if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) {
>               mmu_watch_or_dirty(cpu, &l->page[0], type, ra);
> @@ -1769,18 +1772,18 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
>           }
>       } else {
>           /* Finish compute of page crossing. */
> -        int size0 = l->page[1].addr - addr;
> +        vaddr addr1 = last & TARGET_PAGE_MASK;
> +        int size0 = addr1 - addr;
>           l->page[1].size = l->page[0].size - size0;
>           l->page[0].size = size0;
> -
>           l->page[1].addr = cpu->cc->tcg_ops->pointer_wrap(cpu, l->mmu_idx,
> -                                                         l->page[1].addr, addr);
> +                                                         addr1, addr);
>   
>           /*
> -         * Lookup both pages, recognizing exceptions from either.  If the
> -         * second lookup potentially resized, refresh first CPUTLBEntryFull.
> +         * Lookup and recognize exceptions from the second page.
> +         * If the lookup potentially resized the table, refresh the
> +         * first CPUTLBEntryFull pointer.
>            */
> -        mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
>           if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
>               uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
>               l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>


