* [PATCH v2 0/2] lib: sbi: Flush cache entries after writing PMP CSRs
@ 2026-02-26 12:34 cp0613

From: cp0613 @ 2026-02-26 12:34 UTC (permalink / raw)
To: opensbi; +Cc: anup, samuel.holland, guoren, Chen Pei

From: Chen Pei <cp0613@linux.alibaba.com>

As the privileged specification states, after writing to the PMP CSRs,
an SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
and rs2=x0 to flush all address translation cache entries.

The original implementation does not cover all possible cases. For
example, the unconfigure and map_range/unmap_range functions of
sbi_hart_protection call pmp_set but do not execute the SFENCE.VMA
instruction. This series covers these cases, ensuring that dbtr, sse,
and other modules can safely update pmpcfg.

Considering the performance cost of flushing all address translation
cache entries, sbi_hart_pmp_fence_vma is introduced to flush only the
entries corresponding to a given address and size.

Changes in v2:
- Introduce sbi_hart_pmp_fence_vma
- Use sbi_hart_pmp_fence_vma when calling map_range/unmap_range in
  sbi_hart_protection to avoid the performance issues caused by using
  sbi_hart_pmp_fence_all.

Chen Pei (2):
  lib: sbi: Introduce sbi_hart_pmp_fence_vma
  lib: sbi: Flush cache entries after writing PMP CSRs

 include/sbi/sbi_hart_pmp.h       |  3 ++-
 lib/sbi/sbi_hart_pmp.c           | 41 ++++++++++++++++++++++++++++----
 platform/generic/eswin/eic770x.c |  2 +-
 3 files changed, 40 insertions(+), 6 deletions(-)

-- 
2.50.1

-- 
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* [PATCH v2 1/2] lib: sbi: Introduce sbi_hart_pmp_fence_vma
  2026-02-26 12:34 ` cp0613

From: cp0613 @ 2026-02-26 12:34 UTC (permalink / raw)
To: opensbi; +Cc: anup, samuel.holland, guoren, Chen Pei

From: Chen Pei <cp0613@linux.alibaba.com>

The original sbi_hart_pmp_fence implementation actually flushes all
address translation cache entries. sbi_hart_pmp_fence_vma is introduced
to flush only the entries corresponding to a given address and size.

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
---
 include/sbi/sbi_hart_pmp.h       |  3 ++-
 lib/sbi/sbi_hart_pmp.c           | 30 +++++++++++++++++++++++++++---
 platform/generic/eswin/eic770x.c |  2 +-
 3 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/include/sbi/sbi_hart_pmp.h b/include/sbi/sbi_hart_pmp.h
index a7765d17..5d359764 100644
--- a/include/sbi/sbi_hart_pmp.h
+++ b/include/sbi/sbi_hart_pmp.h
@@ -15,7 +15,8 @@ unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
 unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
 unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
 bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
-void sbi_hart_pmp_fence(void);
+void sbi_hart_pmp_fence_all(void);
+void sbi_hart_pmp_fence_vma(unsigned long start, unsigned long size);
 int sbi_hart_pmp_init(struct sbi_scratch *scratch);

 #endif
diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
index 02a3b3c4..80407110 100644
--- a/lib/sbi/sbi_hart_pmp.c
+++ b/lib/sbi/sbi_hart_pmp.c
@@ -62,7 +62,7 @@ bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
 	return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
 }

-void sbi_hart_pmp_fence(void)
+void sbi_hart_pmp_fence_all(void)
 {
 	/*
 	 * As per section 3.7.2 of privileged specification v1.12,
@@ -86,6 +86,30 @@
 	}
 }

+void sbi_hart_pmp_fence_vma(unsigned long start, unsigned long size)
+{
+	if ((start == 0 && size == 0) || (size == SBI_TLB_FLUSH_ALL)) {
+		sbi_hart_pmp_fence_all();
+	} else {
+		/* Flush TLB entries for the specified address range */
+		if (misa_extension('S')) {
+			unsigned long i;
+
+			for (i = 0; i < size; i += PAGE_SIZE) {
+				__asm__ __volatile__("sfence.vma %0"
+						     :
+						     : "r"(start + i)
+						     : "memory");
+			}
+
+			if (misa_extension('H')) {
+				for (i = 0; i < size; i += PAGE_SIZE)
+					__sbi_hfence_gvma_gpa((start + i) >> 2);
+			}
+		}
+	}
+}
+
 static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
 				struct sbi_domain *dom,
 				struct sbi_domain_memregion *reg,
@@ -213,7 +237,7 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
 	 * Keep the RLB bit so that dynamic mappings can be done.
 	 */

-	sbi_hart_pmp_fence();
+	sbi_hart_pmp_fence_all();

 	return 0;
 }
@@ -293,7 +317,7 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
 	for (; pmp_idx < pmp_count; pmp_idx++)
 		pmp_disable(pmp_idx);

-	sbi_hart_pmp_fence();
+	sbi_hart_pmp_fence_all();

 	return 0;
 }
diff --git a/platform/generic/eswin/eic770x.c b/platform/generic/eswin/eic770x.c
index 7330df9f..d5564d35 100644
--- a/platform/generic/eswin/eic770x.c
+++ b/platform/generic/eswin/eic770x.c
@@ -347,7 +347,7 @@ static int eswin_eic7700_pmp_configure(struct sbi_scratch *scratch)
 	while (pmp_idx < pmp_max)
 		pmp_disable(pmp_idx++);

-	sbi_hart_pmp_fence();
+	sbi_hart_pmp_fence_all();

 	return 0;
 no_more_pmp:
 	sbi_printf("%s: insufficient PMP entries\n", __func__);
-- 
2.50.1
* [PATCH v2 2/2] lib: sbi: Flush cache entries after writing PMP CSRs
  2026-02-26 12:34 ` cp0613

From: cp0613 @ 2026-02-26 12:34 UTC (permalink / raw)
To: opensbi; +Cc: anup, samuel.holland, guoren, Chen Pei

From: Chen Pei <cp0613@linux.alibaba.com>

As the privileged specification states, after writing to the PMP CSRs,
an SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
and rs2=x0 to flush all address translation cache entries.

Considering the performance issues caused by flushing all address
translation cache entries, sbi_hart_pmp_fence_vma is used.

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
---
 lib/sbi/sbi_hart_pmp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
index 80407110..dc7f61fb 100644
--- a/lib/sbi/sbi_hart_pmp.c
+++ b/lib/sbi/sbi_hart_pmp.c
@@ -270,14 +270,21 @@ static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
 			       pmp_flags, base, order);
 	pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);

+	sbi_hart_pmp_fence_vma(addr, size);
+
 	return SBI_OK;
 }

 static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
 				       unsigned long addr, unsigned long size)
 {
+	int ret;
+
 	sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
-	return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+	ret = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+	sbi_hart_pmp_fence_vma(addr, size);
+
+	return ret;
 }

 static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
@@ -333,6 +340,8 @@ static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
 		sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
 		pmp_disable(i);
 	}
+
+	sbi_hart_pmp_fence_all();
 }

 static struct sbi_hart_protection pmp_protection = {
-- 
2.50.1
* Re: [PATCH v2 2/2] lib: sbi: Flush cache entries after writing PMP CSRs
  2026-02-27  1:21 ` Guo Ren

From: Guo Ren @ 2026-02-27  1:21 UTC (permalink / raw)
To: cp0613; +Cc: opensbi, anup, samuel.holland

On Thu, Feb 26, 2026 at 8:34 PM <cp0613@linux.alibaba.com> wrote:
>
> From: Chen Pei <cp0613@linux.alibaba.com>
>
> As the privileged specification states, after writing to the PMP CSRs,
> an SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
> and rs2=x0 to flush all address translation cache entries.
>
> Considering the performance issues caused by flushing all address
> translation cache entries, sbi_hart_pmp_fence_vma is used.

This is not aligned with the spec description: PMP is about the PA, but
SFENCE.VMA is about the VA, so we don't know whether other VAs have been
mapped to the PA that the PMP entry covers. That's why the spec requires
"rs1=x0 and rs2=x0 to flush all address translation cache entries."

> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
> ---
>  lib/sbi/sbi_hart_pmp.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
> index 80407110..dc7f61fb 100644
> --- a/lib/sbi/sbi_hart_pmp.c
> +++ b/lib/sbi/sbi_hart_pmp.c
> @@ -270,14 +270,21 @@ static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
>  			       pmp_flags, base, order);
>  	pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
>
> +	sbi_hart_pmp_fence_vma(addr, size);
> +
>  	return SBI_OK;
>  }
>
>  static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
>  				       unsigned long addr, unsigned long size)
>  {
> +	int ret;
> +
>  	sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> -	return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> +	ret = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> +	sbi_hart_pmp_fence_vma(addr, size);
> +
> +	return ret;
>  }
>
>  static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
> @@ -333,6 +340,8 @@ static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
>  		sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
>  		pmp_disable(i);
>  	}
> +
> +	sbi_hart_pmp_fence_all();
>  }
>
>  static struct sbi_hart_protection pmp_protection = {
> --
> 2.50.1
>

-- 
Best Regards
Guo Ren