* [PULL 00/13] loongarch queue
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel
The following changes since commit d6dfd8d40cebebc3378d379cd28879e0345fbf91:
  Merge tag 'pull-target-arm-20250926' of https://gitlab.com/pm215/qemu into staging (2025-09-26 13:27:01 -0700)
are available in the Git repository at:
  https://github.com/bibo-mao/qemu.git tags/pull-loongarch-20250928
for you to fetch changes up to 8d26856fabf8faac60de03a2e0fc35e5338e248e:
  target/loongarch: Only flush one TLB entry in helper_invtlb_page_asid() (2025-09-28 16:10:34 +0800)
----------------------------------------------------------------
pull-loongarch-20250928 queue
----------------------------------------------------------------
Bibo Mao (13):
      target/loongarch: Use mmu idx bitmap method when flush TLB
      target/loongarch: Add parameter tlb pointer with fill_tlb_entry
      target/loongarch: Reduce TLB flush with helper_tlbwr
      target/loongarch: Update TLB index selection method
      target/loongarch: Fix page size set issue with CSR_STLBPS
      target/loongarch: Add tlb search callback in loongarch_tlb_search()
      target/loongarch: Add common API loongarch_tlb_search_cb()
      target/loongarch: Change return value type with loongarch_tlb_search_cb()
      target/loongarch: Use loongarch_tlb_search_cb in helper_invtlb_page_asid_or_g
      target/loongarch: Use loongarch_tlb_search_cb in helper_invtlb_page_asid
      target/loongarch: Invalid tlb entry in invalidate_tlb()
      target/loongarch: Only flush one TLB entry in helper_invtlb_page_asid_or_g()
      target/loongarch: Only flush one TLB entry in helper_invtlb_page_asid()
 target/loongarch/cpu-csr.h        |   1 +
 target/loongarch/tcg/csr_helper.c |   5 +-
 target/loongarch/tcg/tlb_helper.c | 205 +++++++++++++++++++++++---------------
 3 files changed, 131 insertions(+), 80 deletions(-)
* [PULL 01/13] target/loongarch: Use mmu idx bitmap method when flush TLB
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
The tlb_flush_range_by_mmuidx() API expects a bitmap of MMU indexes in its
idxmap argument rather than a single index value. In addition, when flushing
the TLB, use the bitmap covering MMU_KERNEL_IDX and MMU_USER_IDX rather than
only the bit for the currently running MMU index.
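For illustration only (not part of this patch), the declaration is roughly:

    void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr, vaddr len,
                                   uint16_t idxmap, unsigned bits);

    /* each set bit of idxmap selects one MMU index to flush, so covering
     * both the kernel and user views of a page looks like: */
    int idxmap = BIT(MMU_KERNEL_IDX) | BIT(MMU_USER_IDX);
    tlb_flush_range_by_mmuidx(env_cpu(env), addr, pagesize, idxmap,
                              TARGET_LONG_BITS);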
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 9365860c8c..0c31a346fe 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -101,8 +101,7 @@ static void invalidate_tlb_entry(CPULoongArchState *env, int index)
     target_ulong addr, mask, pagesize;
     uint8_t tlb_ps;
     LoongArchTLB *tlb = &env->tlb[index];
-
-    int mmu_idx = cpu_mmu_index(env_cpu(env), false);
+    int idxmap = BIT(MMU_KERNEL_IDX) | BIT(MMU_USER_IDX);
     uint8_t tlb_v0 = FIELD_EX64(tlb->tlb_entry0, TLBENTRY, V);
     uint8_t tlb_v1 = FIELD_EX64(tlb->tlb_entry1, TLBENTRY, V);
     uint64_t tlb_vppn = FIELD_EX64(tlb->tlb_misc, TLB_MISC, VPPN);
@@ -120,12 +119,12 @@ static void invalidate_tlb_entry(CPULoongArchState *env, int index)
 
     if (tlb_v0) {
         tlb_flush_range_by_mmuidx(env_cpu(env), addr, pagesize,
-                                  mmu_idx, TARGET_LONG_BITS);
+                                  idxmap, TARGET_LONG_BITS);
     }
 
     if (tlb_v1) {
         tlb_flush_range_by_mmuidx(env_cpu(env), addr + pagesize, pagesize,
-                                  mmu_idx, TARGET_LONG_BITS);
+                                  idxmap, TARGET_LONG_BITS);
     }
 }
 
-- 
2.43.5
* [PULL 02/13] target/loongarch: Add parameter tlb pointer with fill_tlb_entry
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
The function fill_tlb_entry() updates an entry of the emulated LoongArch TLB
from the relevant TLB CSR registers. Add a TLB entry pointer parameter in
place of the index, so the routine can fill any entry it is handed, not only
a slot of the env->tlb array.
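A minimal sketch of the two call styles this enables (illustrative only):

    LoongArchTLB new = {};

    fill_tlb_entry(env, env->tlb + index);   /* fill a slot of the TLB array */
    fill_tlb_entry(env, &new);               /* or fill a scratch entry first,
                                                as a later patch in this series
                                                does to compare old and new   */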
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 0c31a346fe..25dbbd0d77 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -143,9 +143,8 @@ static void invalidate_tlb(CPULoongArchState *env, int index)
     invalidate_tlb_entry(env, index);
 }
 
-static void fill_tlb_entry(CPULoongArchState *env, int index)
+static void fill_tlb_entry(CPULoongArchState *env, LoongArchTLB *tlb)
 {
-    LoongArchTLB *tlb = &env->tlb[index];
     uint64_t lo0, lo1, csr_vppn;
     uint16_t csr_asid;
     uint8_t csr_ps;
@@ -312,7 +311,7 @@ void helper_tlbwr(CPULoongArchState *env)
         return;
     }
 
-    fill_tlb_entry(env, index);
+    fill_tlb_entry(env, env->tlb + index);
 }
 
 void helper_tlbfill(CPULoongArchState *env)
@@ -350,7 +349,7 @@ void helper_tlbfill(CPULoongArchState *env)
     }
 
     invalidate_tlb(env, index);
-    fill_tlb_entry(env, index);
+    fill_tlb_entry(env, env->tlb + index);
 }
 
 void helper_tlbclr(CPULoongArchState *env)
-- 
2.43.5
* [PULL 03/13] target/loongarch: Reduce TLB flush with helper_tlbwr
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_tlbwr() updates the specified LoongArch TLB entry. One TLB entry maps
two PTE pages, called the even and odd pages. Suppose the even/odd pages are
in the valid/invalid state and the odd page is then added: the entry becomes
valid/valid while the even page is unchanged.
In this situation it is not necessary to flush the QEMU TLB, since the even
page is unchanged and the odd page is newly added. Check whether each PTE
page is unchanged; the TLB flush can be skipped if both pages are either
unchanged or newly added.
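For illustration, the skip condition added below boils down to: same
ASID/VPPN, and every half that was previously valid is unchanged:

    bool same_misc = old->tlb_misc == new.tlb_misc;                /* ASID/VPPN */
    bool even_ok   = !FIELD_EX64(old->tlb_entry0, TLBENTRY, V) ||
                     new.tlb_entry0 == old->tlb_entry0;            /* even page */
    bool odd_ok    = !FIELD_EX64(old->tlb_entry1, TLBENTRY, V) ||
                     new.tlb_entry1 == old->tlb_entry1;            /* odd page  */
    bool skip_inv  = same_misc && even_ok && odd_ok;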
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 25dbbd0d77..88dba9eb3e 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -302,16 +302,35 @@ void helper_tlbrd(CPULoongArchState *env)
 void helper_tlbwr(CPULoongArchState *env)
 {
     int index = FIELD_EX64(env->CSR_TLBIDX, CSR_TLBIDX, INDEX);
+    LoongArchTLB *old, new = {};
+    bool skip_inv = false;
+    uint8_t tlb_v0, tlb_v1;
 
-    invalidate_tlb(env, index);
-
+    old = env->tlb + index;
     if (FIELD_EX64(env->CSR_TLBIDX, CSR_TLBIDX, NE)) {
-        env->tlb[index].tlb_misc = FIELD_DP64(env->tlb[index].tlb_misc,
-                                              TLB_MISC, E, 0);
+        invalidate_tlb(env, index);
+        old->tlb_misc = FIELD_DP64(old->tlb_misc, TLB_MISC, E, 0);
         return;
     }
 
-    fill_tlb_entry(env, env->tlb + index);
+    fill_tlb_entry(env, &new);
+    /* Check whether ASID/VPPN is the same */
+    if (old->tlb_misc == new.tlb_misc) {
+        /* Check whether both even/odd pages is the same or invalid */
+        tlb_v0 = FIELD_EX64(old->tlb_entry0, TLBENTRY, V);
+        tlb_v1 = FIELD_EX64(old->tlb_entry1, TLBENTRY, V);
+        if ((!tlb_v0 || new.tlb_entry0 == old->tlb_entry0) &&
+            (!tlb_v1 || new.tlb_entry1 == old->tlb_entry1)) {
+            skip_inv = true;
+        }
+    }
+
+    /* flush tlb before updating the entry */
+    if (!skip_inv) {
+        invalidate_tlb(env, index);
+    }
+
+    *old = new;
 }
 
 void helper_tlbfill(CPULoongArchState *env)
-- 
2.43.5
* [PULL 04/13] target/loongarch: Update TLB index selection method
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_tlbfill() is called when no suitable TLB entry exists: a new entry is
added and one old entry has to be flushed. The old entry's index is currently
chosen at random; instead the selection can be optimized as follows (see the
sketch after this list):
  1. Prefer an invalid TLB entry, if one exists.
  2. Otherwise prefer an entry belonging to another ASID.
  3. Fall back to random selection as a last resort.
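A condensed sketch of that order (illustrative only; the patch applies it
separately to the eight ways of one STLB set and to the MTLB range, so first
and last here merely stand for the bounds of whichever range is scanned):

    int index = -1;
    for (i = first; i <= last; i++) {
        tlb = &env->tlb[i];
        if (!FIELD_EX64(tlb->tlb_misc, TLB_MISC, E)) {
            index = i;      /* 1. an invalid entry wins immediately       */
            break;
        }
        if (FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID) != asid) {
            index = i;      /* 2. remember an entry with a different ASID */
        }
    }
    if (index < 0) {
        index = get_random_tlb(first, last);    /* 3. random as a last resort */
    }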
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 49 ++++++++++++++++++++++++++-----
 1 file changed, 42 insertions(+), 7 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 88dba9eb3e..b46621f203 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -336,8 +336,11 @@ void helper_tlbwr(CPULoongArchState *env)
 void helper_tlbfill(CPULoongArchState *env)
 {
     uint64_t address, entryhi;
-    int index, set, stlb_idx;
+    int index, set, i, stlb_idx;
     uint16_t pagesize, stlb_ps;
+    uint16_t asid, tlb_asid;
+    LoongArchTLB *tlb;
+    uint8_t tlb_e;
 
     if (FIELD_EX64(env->CSR_TLBRERA, CSR_TLBRERA, ISTLBR)) {
         entryhi = env->CSR_TLBREHI;
@@ -351,20 +354,52 @@ void helper_tlbfill(CPULoongArchState *env)
 
     /* Validity of stlb_ps is checked in helper_csrwr_stlbps() */
     stlb_ps = FIELD_EX64(env->CSR_STLBPS, CSR_STLBPS, PS);
+    asid = FIELD_EX64(env->CSR_ASID, CSR_ASID, ASID);
     if (pagesize == stlb_ps) {
         /* Only write into STLB bits [47:13] */
         address = entryhi & ~MAKE_64BIT_MASK(0, R_CSR_TLBEHI_64_VPPN_SHIFT);
-
-        /* Choose one set ramdomly */
-        set = get_random_tlb(0, 7);
-
-        /* Index in one set */
+        set = -1;
         stlb_idx = (address >> (stlb_ps + 1)) & 0xff; /* [0,255] */
+        for (i = 0; i < 8; ++i) {
+            tlb = &env->tlb[i * 256 + stlb_idx];
+            tlb_e = FIELD_EX64(tlb->tlb_misc, TLB_MISC, E);
+            if (!tlb_e) {
+                set = i;
+                break;
+            }
+
+            tlb_asid = FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID);
+            if (asid != tlb_asid) {
+                set = i;
+            }
+        }
 
+        /* Choose one set randomly */
+        if (set < 0) {
+            set = get_random_tlb(0, 7);
+        }
         index = set * 256 + stlb_idx;
     } else {
         /* Only write into MTLB */
-        index = get_random_tlb(LOONGARCH_STLB, LOONGARCH_TLB_MAX - 1);
+        index = -1;
+        for (i = LOONGARCH_STLB; i < LOONGARCH_TLB_MAX; i++) {
+            tlb = &env->tlb[i];
+            tlb_e = FIELD_EX64(tlb->tlb_misc, TLB_MISC, E);
+
+            if (!tlb_e) {
+                index = i;
+                break;
+            }
+
+            tlb_asid = FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID);
+            if (asid != tlb_asid) {
+                index = i;
+            }
+        }
+
+        if (index < 0) {
+            index = get_random_tlb(LOONGARCH_STLB, LOONGARCH_TLB_MAX - 1);
+        }
     }
 
     invalidate_tlb(env, index);
-- 
2.43.5
* [PULL 05/13] target/loongarch: Fix page size set issue with CSR_STLBPS
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Song Gao
When the CSR_STLBPS register is written, the page size to validate and store
should come from the input parameter rather than from the register's old
value.
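The fixed logic in condensed form (illustrative; per cpu-csr.h the PS field
occupies bits [4:0] of CSR_STLBPS and everything above it is reserved):

    uint8_t tlb_ps = FIELD_EX64(val, CSR_STLBPS, PS);  /* e.g. val = 0x2000000e -> 14 */
    if (check_ps(env, tlb_ps)) {
        /* keep only the PS field, reserved bits become zero */
        env->CSR_STLBPS = FIELD_DP64(val, CSR_STLBPS, RESERVE, 0);  /* -> 0xe */
    }

Previously the PS value was read back from the old register contents, so a new
page size written by the guest was effectively dropped.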
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
---
 target/loongarch/cpu-csr.h        | 1 +
 target/loongarch/tcg/csr_helper.c | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/target/loongarch/cpu-csr.h b/target/loongarch/cpu-csr.h
index 0834e91f30..1a311bf06a 100644
--- a/target/loongarch/cpu-csr.h
+++ b/target/loongarch/cpu-csr.h
@@ -106,6 +106,7 @@ FIELD(CSR_PWCH, DIR4_WIDTH, 18, 6)
 
 #define LOONGARCH_CSR_STLBPS         0x1e /* Stlb page size */
 FIELD(CSR_STLBPS, PS, 0, 5)
+FIELD(CSR_STLBPS, RESERVE, 5, 27)
 
 #define LOONGARCH_CSR_RVACFG         0x1f /* Reduced virtual address config */
 FIELD(CSR_RVACFG, RBITS, 0, 4)
diff --git a/target/loongarch/tcg/csr_helper.c b/target/loongarch/tcg/csr_helper.c
index 0d99e2c92b..eb60fefa82 100644
--- a/target/loongarch/tcg/csr_helper.c
+++ b/target/loongarch/tcg/csr_helper.c
@@ -26,13 +26,14 @@ target_ulong helper_csrwr_stlbps(CPULoongArchState *env, target_ulong val)
      * The real hardware only supports the min tlb_ps is 12
      * tlb_ps=0 may cause undefined-behavior.
      */
-    uint8_t tlb_ps = FIELD_EX64(env->CSR_STLBPS, CSR_STLBPS, PS);
+    uint8_t tlb_ps = FIELD_EX64(val, CSR_STLBPS, PS);
     if (!check_ps(env, tlb_ps)) {
         qemu_log_mask(LOG_GUEST_ERROR,
                       "Attempted set ps %d\n", tlb_ps);
     } else {
         /* Only update PS field, reserved bit keeps zero */
-        env->CSR_STLBPS = FIELD_DP64(old_v, CSR_STLBPS, PS, tlb_ps);
+        val = FIELD_DP64(val, CSR_STLBPS, RESERVE, 0);
+        env->CSR_STLBPS = val;
     }
 
     return old_v;
-- 
2.43.5
* [PULL 06/13] target/loongarch: Add tlb search callback in loongarch_tlb_search()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
loongarch_tlb_search() looks up the TLB entry for a specified virtual address;
the only part that varies between potential users is how an entry is matched
against the ASID and the global bit. Add a match callback so that the
ASID/global selection policy can be supplied by the caller.
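A minimal sketch of the callback shape (illustrative only):

    typedef bool (*tlb_match)(bool global, int asid, int tlb_asid);

    /* the policy used by the normal lookup: global entries always match */
    static bool tlb_match_any(bool global, int asid, int tlb_asid)
    {
        return global || tlb_asid == asid;
    }

Inside the search loop, the hard-coded test against the global bit and the
ASID then becomes a call through the callback.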
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index b46621f203..fda81f190a 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -21,6 +21,13 @@
 #include "cpu-csr.h"
 #include "tcg/tcg_loongarch.h"
 
+typedef bool (*tlb_match)(bool global, int asid, int tlb_asid);
+
+static bool tlb_match_any(bool global, int asid, int tlb_asid)
+{
+    return global || tlb_asid == asid;
+}
+
 bool check_ps(CPULoongArchState *env, uint8_t tlb_ps)
 {
     if (tlb_ps >= 64) {
@@ -201,12 +208,15 @@ static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
 {
     LoongArchTLB *tlb;
     uint16_t csr_asid, tlb_asid, stlb_idx;
-    uint8_t tlb_e, tlb_ps, tlb_g, stlb_ps;
+    uint8_t tlb_e, tlb_ps, stlb_ps;
+    bool tlb_g;
     int i, compare_shift;
     uint64_t vpn, tlb_vppn;
+    tlb_match func;
 
+    func = tlb_match_any;
     csr_asid = FIELD_EX64(env->CSR_ASID, CSR_ASID, ASID);
-   stlb_ps = FIELD_EX64(env->CSR_STLBPS, CSR_STLBPS, PS);
+    stlb_ps = FIELD_EX64(env->CSR_STLBPS, CSR_STLBPS, PS);
     vpn = (vaddr & TARGET_VIRT_MASK) >> (stlb_ps + 1);
     stlb_idx = vpn & 0xff; /* VA[25:15] <==> TLBIDX.index for 16KiB Page */
     compare_shift = stlb_ps + 1 - R_TLB_MISC_VPPN_SHIFT;
@@ -218,9 +228,9 @@ static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
         if (tlb_e) {
             tlb_vppn = FIELD_EX64(tlb->tlb_misc, TLB_MISC, VPPN);
             tlb_asid = FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID);
-            tlb_g = FIELD_EX64(tlb->tlb_entry0, TLBENTRY, G);
+            tlb_g = !!FIELD_EX64(tlb->tlb_entry0, TLBENTRY, G);
 
-            if ((tlb_g == 1 || tlb_asid == csr_asid) &&
+            if (func(tlb_g, csr_asid, tlb_asid) &&
                 (vpn == (tlb_vppn >> compare_shift))) {
                 *index = i * 256 + stlb_idx;
                 return true;
@@ -239,7 +249,7 @@ static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
             tlb_g = FIELD_EX64(tlb->tlb_entry0, TLBENTRY, G);
             compare_shift = tlb_ps + 1 - R_TLB_MISC_VPPN_SHIFT;
             vpn = (vaddr & TARGET_VIRT_MASK) >> (tlb_ps + 1);
-            if ((tlb_g == 1 || tlb_asid == csr_asid) &&
+            if (func(tlb_g, csr_asid, tlb_asid) &&
                 (vpn == (tlb_vppn >> compare_shift))) {
                 *index = i;
                 return true;
-- 
2.43.5
* [PULL 07/13] target/loongarch: Add common API loongarch_tlb_search_cb()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
Add the common API loongarch_tlb_search_cb(), which searches for a TLB entry
matching a specified address using a caller-supplied ASID and match callback.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index fda81f190a..5b78146769 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -203,19 +203,16 @@ static uint32_t get_random_tlb(uint32_t low, uint32_t high)
  * field in tlb entry contains bit[47:13], so need adjust.
  * virt_vpn = vaddr[47:13]
  */
-static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
-                                 int *index)
+static bool loongarch_tlb_search_cb(CPULoongArchState *env, vaddr vaddr,
+                                    int *index, int csr_asid, tlb_match func)
 {
     LoongArchTLB *tlb;
-    uint16_t csr_asid, tlb_asid, stlb_idx;
+    uint16_t tlb_asid, stlb_idx;
     uint8_t tlb_e, tlb_ps, stlb_ps;
     bool tlb_g;
     int i, compare_shift;
     uint64_t vpn, tlb_vppn;
-    tlb_match func;
 
-    func = tlb_match_any;
-    csr_asid = FIELD_EX64(env->CSR_ASID, CSR_ASID, ASID);
     stlb_ps = FIELD_EX64(env->CSR_STLBPS, CSR_STLBPS, PS);
     vpn = (vaddr & TARGET_VIRT_MASK) >> (stlb_ps + 1);
     stlb_idx = vpn & 0xff; /* VA[25:15] <==> TLBIDX.index for 16KiB Page */
@@ -259,6 +256,17 @@ static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
     return false;
 }
 
+static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
+                                 int *index)
+{
+    int csr_asid;
+    tlb_match func;
+
+    func = tlb_match_any;
+    csr_asid = FIELD_EX64(env->CSR_ASID, CSR_ASID, ASID);
+    return loongarch_tlb_search_cb(env, vaddr, index, csr_asid, func);
+}
+
 void helper_tlbsrch(CPULoongArchState *env)
 {
     int index, match;
-- 
2.43.5
* [PULL 08/13] target/loongarch: Change return value type with loongarch_tlb_search_cb()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Song Gao
Change the return type of loongarch_tlb_search_cb() from bool to a
LoongArchTLB * pointer, so that future callers can use the matched entry
directly.
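For illustration, the pointer return value lets a caller act on the entry
directly, or recover the array index with plain pointer arithmetic when it is
still needed:

    LoongArchTLB *tlb = loongarch_tlb_search_cb(env, vaddr, csr_asid, func);
    if (tlb) {
        int index = tlb - env->tlb;     /* the index, if a caller wants it */
        /* ... or operate on *tlb directly, e.g. clear its E bit */
        tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
    }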
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
---
 target/loongarch/tcg/tlb_helper.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 5b78146769..c7f30eaf15 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -203,8 +203,9 @@ static uint32_t get_random_tlb(uint32_t low, uint32_t high)
  * field in tlb entry contains bit[47:13], so need adjust.
  * virt_vpn = vaddr[47:13]
  */
-static bool loongarch_tlb_search_cb(CPULoongArchState *env, vaddr vaddr,
-                                    int *index, int csr_asid, tlb_match func)
+static LoongArchTLB *loongarch_tlb_search_cb(CPULoongArchState *env,
+                                             vaddr vaddr, int csr_asid,
+                                             tlb_match func)
 {
     LoongArchTLB *tlb;
     uint16_t tlb_asid, stlb_idx;
@@ -229,8 +230,7 @@ static bool loongarch_tlb_search_cb(CPULoongArchState *env, vaddr vaddr,
 
             if (func(tlb_g, csr_asid, tlb_asid) &&
                 (vpn == (tlb_vppn >> compare_shift))) {
-                *index = i * 256 + stlb_idx;
-                return true;
+                return tlb;
             }
         }
     }
@@ -248,12 +248,11 @@ static bool loongarch_tlb_search_cb(CPULoongArchState *env, vaddr vaddr,
             vpn = (vaddr & TARGET_VIRT_MASK) >> (tlb_ps + 1);
             if (func(tlb_g, csr_asid, tlb_asid) &&
                 (vpn == (tlb_vppn >> compare_shift))) {
-                *index = i;
-                return true;
+                return tlb;
             }
         }
     }
-    return false;
+    return NULL;
 }
 
 static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
@@ -261,10 +260,17 @@ static bool loongarch_tlb_search(CPULoongArchState *env, vaddr vaddr,
 {
     int csr_asid;
     tlb_match func;
+    LoongArchTLB *tlb;
 
     func = tlb_match_any;
     csr_asid = FIELD_EX64(env->CSR_ASID, CSR_ASID, ASID);
-    return loongarch_tlb_search_cb(env, vaddr, index, csr_asid, func);
+    tlb = loongarch_tlb_search_cb(env, vaddr, csr_asid, func);
+    if (tlb) {
+        *index = tlb - env->tlb;
+        return true;
+    }
+
+    return false;
 }
 
 void helper_tlbsrch(CPULoongArchState *env)
-- 
2.43.5
* [PULL 09/13] target/loongarch: Use loongarch_tlb_search_cb in helper_invtlb_page_asid_or_g
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_invtlb_page_asid_or_g() currently searches the TLB entries one by one.
Instead, the STLB can be searched first using the hashed set index, and only
the MTLB then needs a one-by-one scan.
Use the common API loongarch_tlb_search_cb() in
helper_invtlb_page_asid_or_g() to do this.
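A rough sketch of the resulting search order (illustrative; entry_matches() is
a placeholder for the VPPN/ASID comparison done by loongarch_tlb_search_cb(),
and LOONGARCH_STLB/LOONGARCH_TLB_MAX bound the MTLB range):

    /* STLB first: the set index is derived from the address, so only the
     * eight ways of one set are probed */
    stlb_idx = ((addr & TARGET_VIRT_MASK) >> (stlb_ps + 1)) & 0xff;
    for (i = 0; i < 8; i++) {
        if (entry_matches(&env->tlb[i * 256 + stlb_idx], addr, asid)) {
            return &env->tlb[i * 256 + stlb_idx];
        }
    }
    /* then the MTLB, whose entries have per-entry page sizes and are still
     * scanned one by one */
    for (i = LOONGARCH_STLB; i < LOONGARCH_TLB_MAX; i++) {
        if (entry_matches(&env->tlb[i], addr, asid)) {
            return &env->tlb[i];
        }
    }
    return NULL;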
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index c7f30eaf15..0a87354ba9 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -559,31 +559,16 @@ void helper_invtlb_page_asid(CPULoongArchState *env, target_ulong info,
 void helper_invtlb_page_asid_or_g(CPULoongArchState *env,
                                   target_ulong info, target_ulong addr)
 {
-    uint16_t asid = info & 0x3ff;
-
-    for (int i = 0; i < LOONGARCH_TLB_MAX; i++) {
-        LoongArchTLB *tlb = &env->tlb[i];
-        uint8_t tlb_g = FIELD_EX64(tlb->tlb_entry0, TLBENTRY, G);
-        uint16_t tlb_asid = FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID);
-        uint64_t vpn, tlb_vppn;
-        uint8_t tlb_ps, compare_shift;
-        uint8_t tlb_e = FIELD_EX64(tlb->tlb_misc, TLB_MISC, E);
-
-        if (!tlb_e) {
-            continue;
-        }
-
-        tlb_ps = FIELD_EX64(tlb->tlb_misc, TLB_MISC, PS);
-        tlb_vppn = FIELD_EX64(tlb->tlb_misc, TLB_MISC, VPPN);
-        vpn = (addr & TARGET_VIRT_MASK) >> (tlb_ps + 1);
-        compare_shift = tlb_ps + 1 - R_TLB_MISC_VPPN_SHIFT;
+    int asid = info & 0x3ff;
+    LoongArchTLB *tlb;
+    tlb_match func;
 
-        if ((tlb_g || (tlb_asid == asid)) &&
-            (vpn == (tlb_vppn >> compare_shift))) {
-            tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
-        }
+    func = tlb_match_any;
+    tlb = loongarch_tlb_search_cb(env, addr, asid, func);
+    if (tlb) {
+        tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
+        tlb_flush(env_cpu(env));
     }
-    tlb_flush(env_cpu(env));
 }
 
 bool loongarch_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
-- 
2.43.5
* [PULL 10/13] target/loongarch: Use loongarch_tlb_search_cb in helper_invtlb_page_asid
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_invtlb_page_asid() currently searches the TLB entries one by one.
Instead, the STLB can be searched first using the hashed set index, and only
the MTLB then needs a one-by-one scan.
Use the common API loongarch_tlb_search_cb() in helper_invtlb_page_asid()
to do this.
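For illustration, the only difference from helper_invtlb_page_asid_or_g() is
the match policy handed to loongarch_tlb_search_cb():

    /*                        global entry    non-global, same ASID
     * tlb_match_any              hit                  hit
     * tlb_match_asid             miss                 hit
     */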
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 36 +++++++++++--------------------
 1 file changed, 13 insertions(+), 23 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 0a87354ba9..7a85d9ca55 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -28,6 +28,11 @@ static bool tlb_match_any(bool global, int asid, int tlb_asid)
     return global || tlb_asid == asid;
 }
 
+static bool tlb_match_asid(bool global, int asid, int tlb_asid)
+{
+    return !global && tlb_asid == asid;
+}
+
 bool check_ps(CPULoongArchState *env, uint8_t tlb_ps)
 {
     if (tlb_ps >= 64) {
@@ -529,31 +534,16 @@ void helper_invtlb_all_asid(CPULoongArchState *env, target_ulong info)
 void helper_invtlb_page_asid(CPULoongArchState *env, target_ulong info,
                              target_ulong addr)
 {
-    uint16_t asid = info & 0x3ff;
-
-    for (int i = 0; i < LOONGARCH_TLB_MAX; i++) {
-        LoongArchTLB *tlb = &env->tlb[i];
-        uint8_t tlb_g = FIELD_EX64(tlb->tlb_entry0, TLBENTRY, G);
-        uint16_t tlb_asid = FIELD_EX64(tlb->tlb_misc, TLB_MISC, ASID);
-        uint64_t vpn, tlb_vppn;
-        uint8_t tlb_ps, compare_shift;
-        uint8_t tlb_e = FIELD_EX64(tlb->tlb_misc, TLB_MISC, E);
-
-        if (!tlb_e) {
-            continue;
-        }
-
-        tlb_ps = FIELD_EX64(tlb->tlb_misc, TLB_MISC, PS);
-        tlb_vppn = FIELD_EX64(tlb->tlb_misc, TLB_MISC, VPPN);
-        vpn = (addr & TARGET_VIRT_MASK) >> (tlb_ps + 1);
-        compare_shift = tlb_ps + 1 - R_TLB_MISC_VPPN_SHIFT;
+    int asid = info & 0x3ff;
+    LoongArchTLB *tlb;
+    tlb_match func;
 
-        if (!tlb_g && (tlb_asid == asid) &&
-           (vpn == (tlb_vppn >> compare_shift))) {
-            tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
-        }
+    func = tlb_match_asid;
+    tlb = loongarch_tlb_search_cb(env, addr, asid, func);
+    if (tlb) {
+        tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
+        tlb_flush(env_cpu(env));
     }
-    tlb_flush(env_cpu(env));
 }
 
 void helper_invtlb_page_asid_or_g(CPULoongArchState *env,
-- 
2.43.5
* [PULL 11/13] target/loongarch: Invalid tlb entry in invalidate_tlb()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Song Gao
Mark the TLB entry invalid (clear its E bit) within invalidate_tlb() itself,
so that callers no longer have to clear it separately and the function is
simpler to use.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
---
 target/loongarch/tcg/tlb_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 7a85d9ca55..b777f68f71 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -123,6 +123,7 @@ static void invalidate_tlb_entry(CPULoongArchState *env, int index)
         return;
     }
 
+    tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
     tlb_ps = FIELD_EX64(tlb->tlb_misc, TLB_MISC, PS);
     pagesize = MAKE_64BIT_MASK(tlb_ps, 1);
     mask = MAKE_64BIT_MASK(0, tlb_ps + 1);
@@ -338,7 +339,6 @@ void helper_tlbwr(CPULoongArchState *env)
     old = env->tlb + index;
     if (FIELD_EX64(env->CSR_TLBIDX, CSR_TLBIDX, NE)) {
         invalidate_tlb(env, index);
-        old->tlb_misc = FIELD_DP64(old->tlb_misc, TLB_MISC, E, 0);
         return;
     }
 
-- 
2.43.5
* [PULL 12/13] target/loongarch: Only flush one TLB entry in helper_invtlb_page_asid_or_g()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_invtlb_page_asid_or_g() invalidates only one entry of the emulated
LoongArch TLB, so it is not necessary to flush the whole QEMU TLB; flushing
the address range covered by that one LoongArch TLB entry is enough. Call
invalidate_tlb(), which goes through invalidate_tlb_entry(), so that only the
QEMU TLB entries for the specified address range are flushed.
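For illustration, the change only affects how much of the QEMU (host-side) TLB
is dropped; the emulated LoongArch entry is cleared either way:

    /* before: throw away every cached translation for this vCPU */
    tlb_flush(env_cpu(env));

    /* after: invalidate_tlb() -> invalidate_tlb_entry() flushes only the
     * even/odd pages mapped by the one LoongArch TLB entry */
    tlb_flush_range_by_mmuidx(env_cpu(env), addr, pagesize, idxmap,
                              TARGET_LONG_BITS);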
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index b777f68f71..243f945612 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -556,8 +556,7 @@ void helper_invtlb_page_asid_or_g(CPULoongArchState *env,
     func = tlb_match_any;
     tlb = loongarch_tlb_search_cb(env, addr, asid, func);
     if (tlb) {
-        tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
-        tlb_flush(env_cpu(env));
+        invalidate_tlb(env, tlb - env->tlb);
     }
 }
 
-- 
2.43.5
* [PULL 13/13] target/loongarch: Only flush one TLB entry in helper_invtlb_page_asid()
From: Bibo Mao @ 2025-09-28  8:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Richard Henderson
helper_invtlb_page_asid() invalidates only one entry of the emulated
LoongArch TLB, so it is not necessary to flush the whole QEMU TLB; flushing
the address range covered by that one LoongArch TLB entry is enough. Call
invalidate_tlb(), which goes through invalidate_tlb_entry(), so that only the
QEMU TLB entries for the specified address range are flushed.
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/loongarch/tcg/tlb_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/target/loongarch/tcg/tlb_helper.c b/target/loongarch/tcg/tlb_helper.c
index 243f945612..8cfce48a29 100644
--- a/target/loongarch/tcg/tlb_helper.c
+++ b/target/loongarch/tcg/tlb_helper.c
@@ -541,8 +541,7 @@ void helper_invtlb_page_asid(CPULoongArchState *env, target_ulong info,
     func = tlb_match_asid;
     tlb = loongarch_tlb_search_cb(env, addr, asid, func);
     if (tlb) {
-        tlb->tlb_misc = FIELD_DP64(tlb->tlb_misc, TLB_MISC, E, 0);
-        tlb_flush(env_cpu(env));
+        invalidate_tlb(env, tlb - env->tlb);
     }
 }
 
-- 
2.43.5
* Re: [PULL 00/13] loongarch queue
From: Richard Henderson @ 2025-09-28 19:25 UTC (permalink / raw)
  To: Bibo Mao, qemu-devel
On 9/28/25 01:47, Bibo Mao wrote:
> The following changes since commit d6dfd8d40cebebc3378d379cd28879e0345fbf91:
> [...]
Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/10.2 as appropriate.
r~