public inbox for opensbi@lists.infradead.org
* [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
@ 2025-09-04  9:54 cp0613
  2025-10-20  4:36 ` Anup Patel
  0 siblings, 1 reply; 8+ messages in thread
From: cp0613 @ 2025-09-04  9:54 UTC (permalink / raw)
  To: opensbi; +Cc: Chen Pei

From: Chen Pei <cp0613@linux.alibaba.com>

As the privileged specification states, after writing to the PMP CSRs,
an SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
and rs2=x0 to flush all address translation cache entries.

The original implementation does not cover all possible cases. For
example, the sbi_hart_map_saddr function calls pmp_set but does not
execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
safely update pmpcfg.

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
---
 lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 21 deletions(-)

diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index 6a2d7d6..33bba38 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
 	return pmp_flags;
 }
 
+static void pmp_flush(void)
+{
+	/*
+	 * As per section 3.7.2 of privileged specification v1.12,
+	 * virtual address translations can be speculatively performed
+	 * (even before actual access). These, along with PMP translations,
+	 * can be cached. This can pose a problem with CPU hotplug
+	 * and non-retentive suspend scenarios because PMP states are
+	 * not preserved.
+	 * It is advisable to flush the caching structures under such
+	 * conditions.
+	 */
+	if (misa_extension('S')) {
+		__asm__ __volatile__("sfence.vma");
+
+		/*
+		 * If hypervisor mode is supported, flush caching
+		 * structures in guest mode too.
+		 */
+		if (misa_extension('H'))
+			__sbi_hfence_gvma_all();
+	}
+}
+
 static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
 				struct sbi_domain *dom,
 				struct sbi_domain_memregion *reg,
@@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
 			     pmp_flags, base, order);
 	pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
 
+	pmp_flush();
+
 	return SBI_OK;
 }
 
 int sbi_hart_unmap_saddr(void)
 {
+	int rc;
 	struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
 
 	if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
 		return SBI_OK;
 
 	sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
-	return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+	rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+
+	pmp_flush();
+
+	return rc;
 }
 
 int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
@@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
 		rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
 						pmp_log2gran, pmp_addr_max);
 
-	/*
-	 * As per section 3.7.2 of privileged specification v1.12,
-	 * virtual address translations can be speculatively performed
-	 * (even before actual access). These, along with PMP traslations,
-	 * can be cached. This can pose a problem with CPU hotplug
-	 * and non-retentive suspend scenario because PMP states are
-	 * not preserved.
-	 * It is advisable to flush the caching structures under such
-	 * conditions.
-	 */
-	if (misa_extension('S')) {
-		__asm__ __volatile__("sfence.vma");
-
-		/*
-		 * If hypervisor mode is supported, flush caching
-		 * structures in guest mode too.
-		 */
-		if (misa_extension('H'))
-			__sbi_hfence_gvma_all();
-	}
+	pmp_flush();
 
 	return rc;
 }
-- 
2.49.0


-- 
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi


* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-09-04  9:54 [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs cp0613
@ 2025-10-20  4:36 ` Anup Patel
  2025-10-29 22:32   ` Samuel Holland
  0 siblings, 1 reply; 8+ messages in thread
From: Anup Patel @ 2025-10-20  4:36 UTC (permalink / raw)
  To: cp0613; +Cc: opensbi

On Thu, Sep 4, 2025 at 4:26 PM <cp0613@linux.alibaba.com> wrote:
>
> From: Chen Pei <cp0613@linux.alibaba.com>
>
> As the privileged specification states, after writing to the PMP CSRs,
> a SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
> and rs2=x0 to flush all address translation cache entries.
>
> The original implementation does not cover all possible cases. For
> example, the sbi_hart_map_saddr function calls pmp_set but does not
> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
> safely update pmpcfg.

The temporary mapping created using sbi_hart_map/unmap_saddr() is
only accessed by M-mode and does not impact the S-mode TLB
entries in any way.

Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
slows down many SBI calls that use shared memory.

NACK to this patch.

Regards,
Anup

>
> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
> ---
>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
>  1 file changed, 33 insertions(+), 21 deletions(-)
>
> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> index 6a2d7d6..33bba38 100644
> --- a/lib/sbi/sbi_hart.c
> +++ b/lib/sbi/sbi_hart.c
> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
>         return pmp_flags;
>  }
>
> +static void pmp_flush(void)
> +{
> +       /*
> +        * As per section 3.7.2 of privileged specification v1.12,
> +        * virtual address translations can be speculatively performed
> +        * (even before actual access). These, along with PMP traslations,
> +        * can be cached. This can pose a problem with CPU hotplug
> +        * and non-retentive suspend scenario because PMP states are
> +        * not preserved.
> +        * It is advisable to flush the caching structures under such
> +        * conditions.
> +        */
> +       if (misa_extension('S')) {
> +               __asm__ __volatile__("sfence.vma");
> +
> +               /*
> +                * If hypervisor mode is supported, flush caching
> +                * structures in guest mode too.
> +                */
> +               if (misa_extension('H'))
> +                       __sbi_hfence_gvma_all();
> +       }
> +}
> +
>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
>                                 struct sbi_domain *dom,
>                                 struct sbi_domain_memregion *reg,
> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
>                              pmp_flags, base, order);
>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
>
> +       pmp_flush();
> +
>         return SBI_OK;
>  }
>
>  int sbi_hart_unmap_saddr(void)
>  {
> +       int rc;
>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
>
>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
>                 return SBI_OK;
>
>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> +
> +       pmp_flush();
> +
> +       return rc;
>  }
>
>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
>                                                 pmp_log2gran, pmp_addr_max);
>
> -       /*
> -        * As per section 3.7.2 of privileged specification v1.12,
> -        * virtual address translations can be speculatively performed
> -        * (even before actual access). These, along with PMP traslations,
> -        * can be cached. This can pose a problem with CPU hotplug
> -        * and non-retentive suspend scenario because PMP states are
> -        * not preserved.
> -        * It is advisable to flush the caching structures under such
> -        * conditions.
> -        */
> -       if (misa_extension('S')) {
> -               __asm__ __volatile__("sfence.vma");
> -
> -               /*
> -                * If hypervisor mode is supported, flush caching
> -                * structures in guest mode too.
> -                */
> -               if (misa_extension('H'))
> -                       __sbi_hfence_gvma_all();
> -       }
> +       pmp_flush();
>
>         return rc;
>  }
> --
> 2.49.0
>
>
> --
> opensbi mailing list
> opensbi@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/opensbi



* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-20  4:36 ` Anup Patel
@ 2025-10-29 22:32   ` Samuel Holland
  2025-10-30  5:50     ` Anup Patel
  0 siblings, 1 reply; 8+ messages in thread
From: Samuel Holland @ 2025-10-29 22:32 UTC (permalink / raw)
  To: Anup Patel, cp0613; +Cc: opensbi

Hi Anup,

On 2025-10-19 11:36 PM, Anup Patel wrote:
> On Thu, Sep 4, 2025 at 4:26 PM <cp0613@linux.alibaba.com> wrote:
>>
>> From: Chen Pei <cp0613@linux.alibaba.com>
>>
>> As the privileged specification states, after writing to the PMP CSRs,
>> a SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
>> and rs2=x0 to flush all address translation cache entries.
>>
>> The original implementation does not cover all possible cases. For
>> example, the sbi_hart_map_saddr function calls pmp_set but does not
>> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
>> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
>> safely update pmpcfg.
> 
> The temporary mapping created using sbi_hart_map/unmap_saddr() is
> only accessed by M-mode and does not impact the S-mode TLB
> entries in anyway.

That doesn't match my interpretation of this paragraph from the privileged ISA:

Implementations with virtual memory are permitted to perform address
translations speculatively and earlier than required by an explicit memory
access, and are permitted to cache them in address translation cache
structures—including possibly caching the identity mappings from effective
address to physical address used in Bare translation modes and M-mode. The PMP
settings for the resulting physical address may be checked (and possibly cached)
at any point between the address translation and the explicit memory access.
Hence, when the PMP settings are modified, M-mode software must synchronize the
PMP settings with the virtual memory system and any PMP or address-translation
caches. This is accomplished by executing an SFENCE.VMA instruction with rs1=x0
and rs2=x0, after the PMP CSRs are written. See Section 19.5.3 for additional
synchronization requirements when the hypervisor extension is implemented.

Specifically, note "including possibly caching the identity mappings from
effective address to physical address used in Bare translation modes and M-mode".

So hardware is allowed to (speculatively) fill the TLB with entries that apply
to M-mode, and tag those M-mode TLB entries with the permissions from the PMPs.
The sfence.vma is required to clear those entries, or else accesses _even from
M-mode_ may erroneously use the previous PMP permissions.

Regards,
Samuel

> Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
> slows-down many SBI calls which use shared memory.
> 
> NACK to this patch.
> 
> Regards,
> Anup
> 
>>
>> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
>> ---
>>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
>>  1 file changed, 33 insertions(+), 21 deletions(-)
>>
>> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
>> index 6a2d7d6..33bba38 100644
>> --- a/lib/sbi/sbi_hart.c
>> +++ b/lib/sbi/sbi_hart.c
>> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
>>         return pmp_flags;
>>  }
>>
>> +static void pmp_flush(void)
>> +{
>> +       /*
>> +        * As per section 3.7.2 of privileged specification v1.12,
>> +        * virtual address translations can be speculatively performed
>> +        * (even before actual access). These, along with PMP traslations,
>> +        * can be cached. This can pose a problem with CPU hotplug
>> +        * and non-retentive suspend scenario because PMP states are
>> +        * not preserved.
>> +        * It is advisable to flush the caching structures under such
>> +        * conditions.
>> +        */
>> +       if (misa_extension('S')) {
>> +               __asm__ __volatile__("sfence.vma");
>> +
>> +               /*
>> +                * If hypervisor mode is supported, flush caching
>> +                * structures in guest mode too.
>> +                */
>> +               if (misa_extension('H'))
>> +                       __sbi_hfence_gvma_all();
>> +       }
>> +}
>> +
>>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
>>                                 struct sbi_domain *dom,
>>                                 struct sbi_domain_memregion *reg,
>> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
>>                              pmp_flags, base, order);
>>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
>>
>> +       pmp_flush();
>> +
>>         return SBI_OK;
>>  }
>>
>>  int sbi_hart_unmap_saddr(void)
>>  {
>> +       int rc;
>>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
>>
>>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
>>                 return SBI_OK;
>>
>>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
>> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
>> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
>> +
>> +       pmp_flush();
>> +
>> +       return rc;
>>  }
>>
>>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
>> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
>>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
>>                                                 pmp_log2gran, pmp_addr_max);
>>
>> -       /*
>> -        * As per section 3.7.2 of privileged specification v1.12,
>> -        * virtual address translations can be speculatively performed
>> -        * (even before actual access). These, along with PMP traslations,
>> -        * can be cached. This can pose a problem with CPU hotplug
>> -        * and non-retentive suspend scenario because PMP states are
>> -        * not preserved.
>> -        * It is advisable to flush the caching structures under such
>> -        * conditions.
>> -        */
>> -       if (misa_extension('S')) {
>> -               __asm__ __volatile__("sfence.vma");
>> -
>> -               /*
>> -                * If hypervisor mode is supported, flush caching
>> -                * structures in guest mode too.
>> -                */
>> -               if (misa_extension('H'))
>> -                       __sbi_hfence_gvma_all();
>> -       }
>> +       pmp_flush();
>>
>>         return rc;
>>  }
>> --
>> 2.49.0
>>
>>
>> --
>> opensbi mailing list
>> opensbi@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/opensbi
> 




* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-29 22:32   ` Samuel Holland
@ 2025-10-30  5:50     ` Anup Patel
  2025-10-30 10:36       ` Samuel Holland
  0 siblings, 1 reply; 8+ messages in thread
From: Anup Patel @ 2025-10-30  5:50 UTC (permalink / raw)
  To: Samuel Holland; +Cc: cp0613, opensbi

On Thu, Oct 30, 2025 at 4:02 AM Samuel Holland
<samuel.holland@sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-10-19 11:36 PM, Anup Patel wrote:
> > On Thu, Sep 4, 2025 at 4:26 PM <cp0613@linux.alibaba.com> wrote:
> >>
> >> From: Chen Pei <cp0613@linux.alibaba.com>
> >>
> >> As the privileged specification states, after writing to the PMP CSRs,
> >> a SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
> >> and rs2=x0 to flush all address translation cache entries.
> >>
> >> The original implementation does not cover all possible cases. For
> >> example, the sbi_hart_map_saddr function calls pmp_set but does not
> >> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
> >> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
> >> safely update pmpcfg.
> >
> > The temporary mapping created using sbi_hart_map/unmap_saddr() is
> > only accessed by M-mode and does not impact the S-mode TLB
> > entries in anyway.
>
> That doesn't match my interpretation of this paragraph from the privileged ISA:
>
> Implementations with virtual memory are permitted to perform address
> translations speculatively and earlier than required by an explicit memory
> access, and are permitted to cache them in address translation cache
> structures—including possibly caching the identity mappings from effective
> address to physical address used in Bare translation modes and M-mode. The PMP
> settings for the resulting physical address may be checked (and possibly cached)
> at any point between the address translation and the explicit memory access.
> Hence, when the PMP settings are modified, M-mode software must synchronize the
> PMP settings with the virtual memory system and any PMP or address-translation
> caches. This is accomplished by executing an SFENCE.VMA instruction with rs1=x0
> and rs2=x0, after the PMP CSRs are written. See Section 19.5.3 for additional
> synchronization requirements when the hypervisor extension is implemented.
>
> Specifically, note "including possibly caching the identity mappings from
> effective address to physical address used in Bare translation modes and M-mode".
>
> So hardware is allowed to (speculatively) fill the TLB with entries that apply
> to M-mode, and tag those M-mode PMP entries with the permissions from the PMPs.
> The sfence.vma is required to clear those entries, or else accesses _even from
> M-mode_ may erroneously use the previous PMP permissions.

If hardware is filling TLB entries for M-mode, then it must tag these TLB
entries with the privilege level so that they don't match for S-mode;
otherwise, a simple M-mode to S-mode switch at boot time would allow
bootloaders to access M-mode memory with M-mode permissions.

Regards,
Anup

>
> Regards,
> Samuel
>
> > Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
> > slows-down many SBI calls which use shared memory.
> >
> > NACK to this patch.
> >
> > Regards,
> > Anup
> >
> >>
> >> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
> >> ---
> >>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
> >>  1 file changed, 33 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> >> index 6a2d7d6..33bba38 100644
> >> --- a/lib/sbi/sbi_hart.c
> >> +++ b/lib/sbi/sbi_hart.c
> >> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
> >>         return pmp_flags;
> >>  }
> >>
> >> +static void pmp_flush(void)
> >> +{
> >> +       /*
> >> +        * As per section 3.7.2 of privileged specification v1.12,
> >> +        * virtual address translations can be speculatively performed
> >> +        * (even before actual access). These, along with PMP traslations,
> >> +        * can be cached. This can pose a problem with CPU hotplug
> >> +        * and non-retentive suspend scenario because PMP states are
> >> +        * not preserved.
> >> +        * It is advisable to flush the caching structures under such
> >> +        * conditions.
> >> +        */
> >> +       if (misa_extension('S')) {
> >> +               __asm__ __volatile__("sfence.vma");
> >> +
> >> +               /*
> >> +                * If hypervisor mode is supported, flush caching
> >> +                * structures in guest mode too.
> >> +                */
> >> +               if (misa_extension('H'))
> >> +                       __sbi_hfence_gvma_all();
> >> +       }
> >> +}
> >> +
> >>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
> >>                                 struct sbi_domain *dom,
> >>                                 struct sbi_domain_memregion *reg,
> >> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
> >>                              pmp_flags, base, order);
> >>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> >>
> >> +       pmp_flush();
> >> +
> >>         return SBI_OK;
> >>  }
> >>
> >>  int sbi_hart_unmap_saddr(void)
> >>  {
> >> +       int rc;
> >>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
> >>
> >>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> >>                 return SBI_OK;
> >>
> >>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> >> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >> +
> >> +       pmp_flush();
> >> +
> >> +       return rc;
> >>  }
> >>
> >>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
> >>                                                 pmp_log2gran, pmp_addr_max);
> >>
> >> -       /*
> >> -        * As per section 3.7.2 of privileged specification v1.12,
> >> -        * virtual address translations can be speculatively performed
> >> -        * (even before actual access). These, along with PMP traslations,
> >> -        * can be cached. This can pose a problem with CPU hotplug
> >> -        * and non-retentive suspend scenario because PMP states are
> >> -        * not preserved.
> >> -        * It is advisable to flush the caching structures under such
> >> -        * conditions.
> >> -        */
> >> -       if (misa_extension('S')) {
> >> -               __asm__ __volatile__("sfence.vma");
> >> -
> >> -               /*
> >> -                * If hypervisor mode is supported, flush caching
> >> -                * structures in guest mode too.
> >> -                */
> >> -               if (misa_extension('H'))
> >> -                       __sbi_hfence_gvma_all();
> >> -       }
> >> +       pmp_flush();
> >>
> >>         return rc;
> >>  }
> >> --
> >> 2.49.0
> >>
> >>
> >> --
> >> opensbi mailing list
> >> opensbi@lists.infradead.org
> >> http://lists.infradead.org/mailman/listinfo/opensbi
> >
>



* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-30  5:50     ` Anup Patel
@ 2025-10-30 10:36       ` Samuel Holland
  2025-10-31  3:49         ` Anup Patel
  2025-11-07  7:50         ` cp0613
  0 siblings, 2 replies; 8+ messages in thread
From: Samuel Holland @ 2025-10-30 10:36 UTC (permalink / raw)
  To: Anup Patel; +Cc: cp0613, opensbi

Hi Anup,

On 2025-10-30 12:50 AM, Anup Patel wrote:
> On Thu, Oct 30, 2025 at 4:02 AM Samuel Holland
> <samuel.holland@sifive.com> wrote:
>> On 2025-10-19 11:36 PM, Anup Patel wrote:
>>> On Thu, Sep 4, 2025 at 4:26 PM <cp0613@linux.alibaba.com> wrote:
>>>>
>>>> From: Chen Pei <cp0613@linux.alibaba.com>
>>>>
>>>> As the privileged specification states, after writing to the PMP CSRs,
>>>> a SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
>>>> and rs2=x0 to flush all address translation cache entries.
>>>>
>>>> The original implementation does not cover all possible cases. For
>>>> example, the sbi_hart_map_saddr function calls pmp_set but does not
>>>> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
>>>> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
>>>> safely update pmpcfg.
>>>
>>> The temporary mapping created using sbi_hart_map/unmap_saddr() is
>>> only accessed by M-mode and does not impact the S-mode TLB
>>> entries in anyway.
>>
>> That doesn't match my interpretation of this paragraph from the privileged ISA:
>>
>> Implementations with virtual memory are permitted to perform address
>> translations speculatively and earlier than required by an explicit memory
>> access, and are permitted to cache them in address translation cache
>> structures—including possibly caching the identity mappings from effective
>> address to physical address used in Bare translation modes and M-mode. The PMP
>> settings for the resulting physical address may be checked (and possibly cached)
>> at any point between the address translation and the explicit memory access.
>> Hence, when the PMP settings are modified, M-mode software must synchronize the
>> PMP settings with the virtual memory system and any PMP or address-translation
>> caches. This is accomplished by executing an SFENCE.VMA instruction with rs1=x0
>> and rs2=x0, after the PMP CSRs are written. See Section 19.5.3 for additional
>> synchronization requirements when the hypervisor extension is implemented.
>>
>> Specifically, note "including possibly caching the identity mappings from
>> effective address to physical address used in Bare translation modes and M-mode".
>>
>> So hardware is allowed to (speculatively) fill the TLB with entries that apply
>> to M-mode, and tag those M-mode PMP entries with the permissions from the PMPs.
>> The sfence.vma is required to clear those entries, or else accesses _even from
>> M-mode_ may erroneously use the previous PMP permissions.
> 
> If hardware is filling TLB entries for M-mode then it must tag these TLB
> entries with privilege level so that they don't match for S-mode otherwise
> a simple M-mode to S-mode switch at boot-time will allow bootloaders
> to access M-mode memory with M-mode permissions.

Yes, agreed, the TLB entries must be tagged with the privilege mode, but that's
not the problem here. The problem has nothing at all to do with privilege mode
transitions, and nothing to do with executing code in S-mode, either. The bug
can be observed purely within the scope of the function calling
sbi_hart_map_saddr().

sbi_hart_map_saddr() rewrites a PMP range to allow M-mode access to some address
range that it previously did not have permission to access. The problem is that
this PMP change may not be observed until software executes a sfence.vma
covering at least this address range. So without this patch, it is
architecturally valid to receive an access fault when M-mode attempts to access
this address range (i.e. on the very next line of code after calling
sbi_hart_map_saddr()).

Regards,
Samuel

>>> Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
>>> slows-down many SBI calls which use shared memory.
>>>
>>> NACK to this patch.
>>>
>>> Regards,
>>> Anup
>>>
>>>>
>>>> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
>>>> ---
>>>>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
>>>>  1 file changed, 33 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
>>>> index 6a2d7d6..33bba38 100644
>>>> --- a/lib/sbi/sbi_hart.c
>>>> +++ b/lib/sbi/sbi_hart.c
>>>> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
>>>>         return pmp_flags;
>>>>  }
>>>>
>>>> +static void pmp_flush(void)
>>>> +{
>>>> +       /*
>>>> +        * As per section 3.7.2 of privileged specification v1.12,
>>>> +        * virtual address translations can be speculatively performed
>>>> +        * (even before actual access). These, along with PMP traslations,
>>>> +        * can be cached. This can pose a problem with CPU hotplug
>>>> +        * and non-retentive suspend scenario because PMP states are
>>>> +        * not preserved.
>>>> +        * It is advisable to flush the caching structures under such
>>>> +        * conditions.
>>>> +        */
>>>> +       if (misa_extension('S')) {
>>>> +               __asm__ __volatile__("sfence.vma");
>>>> +
>>>> +               /*
>>>> +                * If hypervisor mode is supported, flush caching
>>>> +                * structures in guest mode too.
>>>> +                */
>>>> +               if (misa_extension('H'))
>>>> +                       __sbi_hfence_gvma_all();
>>>> +       }
>>>> +}
>>>> +
>>>>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
>>>>                                 struct sbi_domain *dom,
>>>>                                 struct sbi_domain_memregion *reg,
>>>> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
>>>>                              pmp_flags, base, order);
>>>>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
>>>>
>>>> +       pmp_flush();
>>>> +
>>>>         return SBI_OK;
>>>>  }
>>>>
>>>>  int sbi_hart_unmap_saddr(void)
>>>>  {
>>>> +       int rc;
>>>>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
>>>>
>>>>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
>>>>                 return SBI_OK;
>>>>
>>>>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
>>>> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
>>>> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
>>>> +
>>>> +       pmp_flush();
>>>> +
>>>> +       return rc;
>>>>  }
>>>>
>>>>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
>>>> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
>>>>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
>>>>                                                 pmp_log2gran, pmp_addr_max);
>>>>
>>>> -       /*
>>>> -        * As per section 3.7.2 of privileged specification v1.12,
>>>> -        * virtual address translations can be speculatively performed
>>>> -        * (even before actual access). These, along with PMP traslations,
>>>> -        * can be cached. This can pose a problem with CPU hotplug
>>>> -        * and non-retentive suspend scenario because PMP states are
>>>> -        * not preserved.
>>>> -        * It is advisable to flush the caching structures under such
>>>> -        * conditions.
>>>> -        */
>>>> -       if (misa_extension('S')) {
>>>> -               __asm__ __volatile__("sfence.vma");
>>>> -
>>>> -               /*
>>>> -                * If hypervisor mode is supported, flush caching
>>>> -                * structures in guest mode too.
>>>> -                */
>>>> -               if (misa_extension('H'))
>>>> -                       __sbi_hfence_gvma_all();
>>>> -       }
>>>> +       pmp_flush();
>>>>
>>>>         return rc;
>>>>  }
>>>> --
>>>> 2.49.0
>>>>
>>>>
>>>> --
>>>> opensbi mailing list
>>>> opensbi@lists.infradead.org
>>>> http://lists.infradead.org/mailman/listinfo/opensbi
>>>
>>


-- 
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-30 10:36       ` Samuel Holland
@ 2025-10-31  3:49         ` Anup Patel
  2025-11-07  8:13           ` cp0613
  2025-11-07  7:50         ` cp0613
  1 sibling, 1 reply; 8+ messages in thread
From: Anup Patel @ 2025-10-31  3:49 UTC (permalink / raw)
  To: Samuel Holland; +Cc: Anup Patel, cp0613, opensbi

On Thu, Oct 30, 2025 at 4:06 PM Samuel Holland
<samuel.holland@sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-10-30 12:50 AM, Anup Patel wrote:
> > On Thu, Oct 30, 2025 at 4:02 AM Samuel Holland
> > <samuel.holland@sifive.com> wrote:
> >> On 2025-10-19 11:36 PM, Anup Patel wrote:
> >>> On Thu, Sep 4, 2025 at 4:26 PM <cp0613@linux.alibaba.com> wrote:
> >>>>
> >>>> From: Chen Pei <cp0613@linux.alibaba.com>
> >>>>
> >>>> As the privileged specification states, after writing to the PMP CSRs,
> >>>> a SFENCE.VMA or HFENCE.GVMA instruction should be executed with rs1=x0
> >>>> and rs2=x0 to flush all address translation cache entries.
> >>>>
> >>>> The original implementation does not cover all possible cases. For
> >>>> example, the sbi_hart_map_saddr function calls pmp_set but does not
> >>>> execute the SFENCE.VMA instruction. This patch covers sbi_hart_map_saddr
> >>>> and sbi_hart_unmap_saddr, ensuring that dbtr, sse and other modules can
> >>>> safely update pmpcfg.
> >>>
> >>> The temporary mapping created using sbi_hart_map/unmap_saddr() is
> >>> only accessed by M-mode and does not impact the S-mode TLB
> >>> entries in any way.
> >>
> >> That doesn't match my interpretation of this paragraph from the privileged ISA:
> >>
> >> Implementations with virtual memory are permitted to perform address
> >> translations speculatively and earlier than required by an explicit memory
> >> access, and are permitted to cache them in address translation cache
> >> structures—including possibly caching the identity mappings from effective
> >> address to physical address used in Bare translation modes and M-mode. The PMP
> >> settings for the resulting physical address may be checked (and possibly cached)
> >> at any point between the address translation and the explicit memory access.
> >> Hence, when the PMP settings are modified, M-mode software must synchronize the
> >> PMP settings with the virtual memory system and any PMP or address-translation
> >> caches. This is accomplished by executing an SFENCE.VMA instruction with rs1=x0
> >> and rs2=x0, after the PMP CSRs are written. See Section 19.5.3 for additional
> >> synchronization requirements when the hypervisor extension is implemented.
> >>
> >> Specifically, note "including possibly caching the identity mappings from
> >> effective address to physical address used in Bare translation modes and M-mode".
> >>
> >> So hardware is allowed to (speculatively) fill the TLB with entries that apply
> >> to M-mode, and tag those M-mode PMP entries with the permissions from the PMPs.
> >> The sfence.vma is required to clear those entries, or else accesses _even from
> >> M-mode_ may erroneously use the previous PMP permissions.
> >
> > If hardware is filling TLB entries for M-mode then it must tag these TLB
> > entries with privilege level so that they don't match for S-mode otherwise
> > a simple M-mode to S-mode switch at boot-time will allow bootloaders
> > to access M-mode memory with M-mode permissions.
>
> Yes, agreed, the TLB entries must be tagged with the privilege mode, but that's
> not the problem here. The problem has nothing at all to do with privilege mode
> transitions, and nothing to do with executing code in S-mode, either. The bug
> can be observed purely within the scope of the function calling
> sbi_hart_map_saddr().

The reserved PMP entry under consideration is disabled while in S-mode
before sbi_hart_map_saddr() and sbi_hart_unmap_saddr() so there is no
way the reserved PMP entry impacts S-mode. In other words, we are
not changing any PMP entry which impacts address ranges accessed by
S-mode.

>
> sbi_hart_map_saddr() rewrites a PMP range to allow M-mode access to some address
> range that it previously did not have permission to access. The problem is that
> this PMP change may not be observed until software executes a sfence.vma
> covering at least this address range. So without this patch, it is
> architecturally valid to receive an access fault when M-mode attempts to access
> this address range (i.e. on the very next line of code after calling
> sbi_hart_map_saddr()).

This is a fair point, but we don't need to nuke the entire TLB. To address
this, an sfence.vma targeting the particular address range is sufficient.
Plus, hfence is not required at all. I think this patch needs a complete
rewrite, with clear comments explaining the sfence.vma after the PMP entry
change in sbi_hart_map_saddr().
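
For illustration, such a range-limited flush might look like the sketch
below. The helper name pmp_flush_range() is hypothetical, the page size is
assumed to be 4 KiB, and the code is RISC-V-only (it will not build or run
on other hosts); it is a sketch of the suggestion, not the actual
implementation.

```c
/*
 * Hypothetical sketch: flush only the address range covered by the
 * reserved PMP entry, one sfence.vma per 4 KiB page, with rs2=x0
 * (all address spaces). Avoids the cost of a full "sfence.vma x0, x0".
 */
static inline void pmp_flush_range(unsigned long base, unsigned long size)
{
	unsigned long addr;
	unsigned long end = base + size;

	for (addr = base & ~0xfffUL; addr < end; addr += 0x1000)
		__asm__ __volatile__("sfence.vma %0, zero"
				     : : "r"(addr) : "memory");
}
```

sbi_hart_map_saddr() could then call pmp_flush_range(base, size) right
after pmp_set(), instead of flushing the whole TLB.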

Regards,
Anup

>
> Regards,
> Samuel
>
> >>> Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
> >>> slows-down many SBI calls which use shared memory.
> >>>
> >>> NACK to this patch.
> >>>
> >>> Regards,
> >>> Anup
> >>>
> >>>>
> >>>> Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
> >>>> ---
> >>>>  lib/sbi/sbi_hart.c | 54 ++++++++++++++++++++++++++++------------------
> >>>>  1 file changed, 33 insertions(+), 21 deletions(-)
> >>>>
> >>>> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> >>>> index 6a2d7d6..33bba38 100644
> >>>> --- a/lib/sbi/sbi_hart.c
> >>>> +++ b/lib/sbi/sbi_hart.c
> >>>> @@ -364,6 +364,30 @@ static unsigned int sbi_hart_get_smepmp_flags(struct sbi_scratch *scratch,
> >>>>         return pmp_flags;
> >>>>  }
> >>>>
> >>>> +static void pmp_flush(void)
> >>>> +{
> >>>> +       /*
> >>>> +        * As per section 3.7.2 of privileged specification v1.12,
> >>>> +        * virtual address translations can be speculatively performed
> >>>> +        * (even before actual access). These, along with PMP traslations,
> >>>> +        * can be cached. This can pose a problem with CPU hotplug
> >>>> +        * and non-retentive suspend scenario because PMP states are
> >>>> +        * not preserved.
> >>>> +        * It is advisable to flush the caching structures under such
> >>>> +        * conditions.
> >>>> +        */
> >>>> +       if (misa_extension('S')) {
> >>>> +               __asm__ __volatile__("sfence.vma");
> >>>> +
> >>>> +               /*
> >>>> +                * If hypervisor mode is supported, flush caching
> >>>> +                * structures in guest mode too.
> >>>> +                */
> >>>> +               if (misa_extension('H'))
> >>>> +                       __sbi_hfence_gvma_all();
> >>>> +       }
> >>>> +}
> >>>> +
> >>>>  static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
> >>>>                                 struct sbi_domain *dom,
> >>>>                                 struct sbi_domain_memregion *reg,
> >>>> @@ -543,18 +567,25 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
> >>>>                              pmp_flags, base, order);
> >>>>         pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> >>>>
> >>>> +       pmp_flush();
> >>>> +
> >>>>         return SBI_OK;
> >>>>  }
> >>>>
> >>>>  int sbi_hart_unmap_saddr(void)
> >>>>  {
> >>>> +       int rc;
> >>>>         struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
> >>>>
> >>>>         if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> >>>>                 return SBI_OK;
> >>>>
> >>>>         sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> >>>> -       return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >>>> +       rc = pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> >>>> +
> >>>> +       pmp_flush();
> >>>> +
> >>>> +       return rc;
> >>>>  }
> >>>>
> >>>>  int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >>>> @@ -578,26 +609,7 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> >>>>                 rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
> >>>>                                                 pmp_log2gran, pmp_addr_max);
> >>>>
> >>>> -       /*
> >>>> -        * As per section 3.7.2 of privileged specification v1.12,
> >>>> -        * virtual address translations can be speculatively performed
> >>>> -        * (even before actual access). These, along with PMP traslations,
> >>>> -        * can be cached. This can pose a problem with CPU hotplug
> >>>> -        * and non-retentive suspend scenario because PMP states are
> >>>> -        * not preserved.
> >>>> -        * It is advisable to flush the caching structures under such
> >>>> -        * conditions.
> >>>> -        */
> >>>> -       if (misa_extension('S')) {
> >>>> -               __asm__ __volatile__("sfence.vma");
> >>>> -
> >>>> -               /*
> >>>> -                * If hypervisor mode is supported, flush caching
> >>>> -                * structures in guest mode too.
> >>>> -                */
> >>>> -               if (misa_extension('H'))
> >>>> -                       __sbi_hfence_gvma_all();
> >>>> -       }
> >>>> +       pmp_flush();
> >>>>
> >>>>         return rc;
> >>>>  }
> >>>> --
> >>>> 2.49.0
> >>>>
> >>>>
> >>>> --
> >>>> opensbi mailing list
> >>>> opensbi@lists.infradead.org
> >>>> http://lists.infradead.org/mailman/listinfo/opensbi
> >>>
> >>
>
>
> --
> opensbi mailing list
> opensbi@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/opensbi


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-30 10:36       ` Samuel Holland
  2025-10-31  3:49         ` Anup Patel
@ 2025-11-07  7:50         ` cp0613
  1 sibling, 0 replies; 8+ messages in thread
From: cp0613 @ 2025-11-07  7:50 UTC (permalink / raw)
  To: samuel.holland; +Cc: anup, opensbi

On 2025-10-30 10:36 AM, Samuel Holland wrote:

> > If hardware is filling TLB entries for M-mode then it must tag these TLB
> > entries with privilege level so that they don't match for S-mode otherwise
> > a simple M-mode to S-mode switch at boot-time will allow bootloaders
> > to access M-mode memory with M-mode permissions.

> Yes, agreed, the TLB entries must be tagged with the privilege mode, but that's
> not the problem here. The problem has nothing at all to do with privilege mode
> transitions, and nothing to do with executing code in S-mode, either. The bug
> can be observed purely within the scope of the function calling
> sbi_hart_map_saddr().

> sbi_hart_map_saddr() rewrites a PMP range to allow M-mode access to some address
> range that it previously did not have permission to access. The problem is that
> this PMP change may not be observed until software executes a sfence.vma
> covering at least this address range. So without this patch, it is
> architecturally valid to receive an access fault when M-mode attempts to access
> this address range (i.e. on the very next line of code after calling
> sbi_hart_map_saddr()).

Hi Samuel,

Thank you for participating in the discussion.
You pointed out the key point; this is indeed a real case.

Thanks,
Pei

> > Regards,
> > Samuel

> >>> Further, flushing TLB entries in sbi_hart_map/unmap_saddr() also
> >>> slows-down many SBI calls which use shared memory.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs
  2025-10-31  3:49         ` Anup Patel
@ 2025-11-07  8:13           ` cp0613
  0 siblings, 0 replies; 8+ messages in thread
From: cp0613 @ 2025-11-07  8:13 UTC (permalink / raw)
  To: anup; +Cc: samuel.holland, opensbi

On 2025-10-31 03:49 AM, Anup Patel wrote:

> > Yes, agreed, the TLB entries must be tagged with the privilege mode, but that's
> > not the problem here. The problem has nothing at all to do with privilege mode
> > transitions, and nothing to do with executing code in S-mode, either. The bug
> > can be observed purely within the scope of the function calling
> > sbi_hart_map_saddr().
> 
> The reserved PMP entry under consideration is disabled while in S-mode
> before sbi_hart_map_saddr() and sbi_hart_unmap_saddr() so there is no
> way the reserved PMP entry impacts S-mode. In other words, we are
> not changing any PMP entry which impacts address ranges accessed by
> S-mode.
> 
> >
> > sbi_hart_map_saddr() rewrites a PMP range to allow M-mode access to some address
> > range that it previously did not have permission to access. The problem is that
> > this PMP change may not be observed until software executes a sfence.vma
> > covering at least this address range. So without this patch, it is
> > architecturally valid to receive an access fault when M-mode attempts to access
> > this address range (i.e. on the very next line of code after calling
> > sbi_hart_map_saddr()).
> 
> This is a fair point but we don't need to nuke the entire TLB. To address this,
> sfence.vma to a particular address range is sufficient. Plus, hfence is not
> required at all. I think this patch needs a complete re-write with
> clear comments
> for sfence.vma after PMP entry change in sbi_hart_map_saddr().
> 
> Regards,
> Anup

Thank you very much for your review. The issue you mentioned earlier, that
flushing slows down many SBI calls which use shared memory, does indeed need
to be considered. I will rework the patch based on your suggestions.

Thanks,
Pei


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-11-07  8:14 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-04  9:54 [PATCH] lib: sbi: Flush cache entries after writing PMP CSRs cp0613
2025-10-20  4:36 ` Anup Patel
2025-10-29 22:32   ` Samuel Holland
2025-10-30  5:50     ` Anup Patel
2025-10-30 10:36       ` Samuel Holland
2025-10-31  3:49         ` Anup Patel
2025-11-07  8:13           ` cp0613
2025-11-07  7:50         ` cp0613

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox