* [PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.
From: Mahesh J Salgaonkar @ 2018-08-17  9:21 UTC
  To: linuxppc-dev, Michael Ellerman
  Cc: Nicholas Piggin, Aneesh Kumar K.V

From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>

With the powerpc/next commit e7e81847478 ("powerpc/64s: move machine
check SLB flushing to mm/slb.c"), the SLB error recovery is broken. That
commit missed a crucial step while rebolting: OR-ing the index value
into RB[52-63], the field that selects which SLB entry slbmte writes.
This patch fixes that.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/slb.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 0b095fa54049..6dd9913425bc 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
 
 	 /* No isync needed because realmode. */
 	for (index = 0; index < SLB_NUM_BOLTED; index++) {
+		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
+
+		rb = (rb & ~0xFFFul) | index;
 		asm volatile("slbmte  %0,%1" :
 		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
-		       "r" (be64_to_cpu(p->save_area[index].esid)));
+		       "r" (rb));
 	}
 }
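
For context on why the OR matters: the RB operand of slbmte carries the
ESID in its high bits and the target SLB entry index in its low bits
(RB[52-63] in IBM bit numbering). A toy, standalone sketch of the fixed
computation (illustrative only; the esid == 0 case models a cleared
bolted save area, per the discussion below):

	#include <stdio.h>

	/*
	 * The low 12 bits of the slbmte RB operand select which SLB
	 * entry is written. With esid == 0 (a cleared save area) the
	 * pre-fix code always targeted entry 0; OR-ing in the loop
	 * index makes each iteration target its own entry.
	 */
	int main(void)
	{
		unsigned long esid = 0;	/* cleared bolted save area */
		unsigned long index;

		for (index = 0; index < 3; index++) {
			unsigned long rb = (esid & ~0xFFFul) | index;
			printf("index %lu -> targets SLB entry %lu\n",
			       index, rb & 0xFFFul);
		}
		return 0;
	}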
 


* Re: [PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.
From: Nicholas Piggin @ 2018-08-21 10:27 UTC
  To: Mahesh J Salgaonkar; +Cc: linuxppc-dev, Michael Ellerman, Aneesh Kumar K.V

On Fri, 17 Aug 2018 14:51:47 +0530
Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:

> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> 
> With the powerpc/next commit e7e81847478 ("powerpc/64s: move machine
> check SLB flushing to mm/slb.c"), the SLB error recovery is broken. That
> commit missed a crucial step while rebolting: OR-ing the index value
> into RB[52-63], the field that selects which SLB entry slbmte writes.
> This patch fixes that.
> 
> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/slb.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 0b095fa54049..6dd9913425bc 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
>  
>  	 /* No isync needed because realmode. */
>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> +
> +		rb = (rb & ~0xFFFul) | index;
>  		asm volatile("slbmte  %0,%1" :
>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> +		       "r" (rb));
>  	}
>  }
>  
> 

I'm just looking at this again. The bolted save areas do have the
index field set. So for the OS, your patch should be equivalent to
this, right?

 static inline void slb_shadow_clear(enum slb_index index)
 {
-       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
+       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
 }

Which seems like a better fix.

PAPR says:

  Note: SLB is filled sequentially starting at index 0
  from the shadow buffer ignoring the contents of
  RB field bits 52-63

So that shouldn't be an issue.
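
For reference, the reason the bolted save areas already have the index
field set: when slb_shadow_update() populates an entry, the esid value
comes from mk_esid_data(), which ORs the index into the low bits. A
sketch, assuming the arch/powerpc/mm/slb.c helpers of the time:

	static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
						 enum slb_index index)
	{
		return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | index;
	}

So a valid bolted esid already encodes its own index, and OR-ing it in
again is a no-op; only cleared (esid == 0) entries behave differently.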

Thanks,
Nick


* Re: [PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.
From: Mahesh Jagannath Salgaonkar @ 2018-08-23  4:28 UTC
  To: Nicholas Piggin; +Cc: linuxppc-dev, Michael Ellerman, Aneesh Kumar K.V

On 08/21/2018 03:57 PM, Nicholas Piggin wrote:
> On Fri, 17 Aug 2018 14:51:47 +0530
> Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:
> 
>> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
>>
>> With the powerpc/next commit e7e81847478 ("powerpc/64s: move machine
>> check SLB flushing to mm/slb.c"), the SLB error recovery is broken. That
>> commit missed a crucial step while rebolting: OR-ing the index value
>> into RB[52-63], the field that selects which SLB entry slbmte writes.
>> This patch fixes that.
>>
>> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
>> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/mm/slb.c |    5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
>> index 0b095fa54049..6dd9913425bc 100644
>> --- a/arch/powerpc/mm/slb.c
>> +++ b/arch/powerpc/mm/slb.c
>> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
>>  
>>  	 /* No isync needed because realmode. */
>>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
>> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
>> +
>> +		rb = (rb & ~0xFFFul) | index;
>>  		asm volatile("slbmte  %0,%1" :
>>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
>> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
>> +		       "r" (rb));
>>  	}
>>  }
>>  
>>
> 
> I'm just looking at this again. The bolted save areas do have the
> index field set. So for the OS, your patch should be equivalent to
> this, right?
> 
>  static inline void slb_shadow_clear(enum slb_index index)
>  {
> -       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
> +       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
>  }
> 
> Which seems like a better fix.

Yeah this also fixes the issue. The only additional change required is
cpu_to_be64(index). As long as we maintain index in bolted save areas
(for valid/invalid entries) we should be ok. Will respin v2 with this
change.
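
For concreteness, a sketch of that respun change (assuming the helper
keeps the shape quoted above):

	 static inline void slb_shadow_clear(enum slb_index index)
	 {
	-	WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
	+	WRITE_ONCE(get_slb_shadow()->save_area[index].esid,
	+		   cpu_to_be64(index));
	 }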

Thanks,
-Mahesh.


* Re: [PATCH] powerpc/mce: Fix SLB rebolting during MCE recovery path.
From: Nicholas Piggin @ 2018-08-23  4:36 UTC
  To: Mahesh Jagannath Salgaonkar
  Cc: linuxppc-dev, Michael Ellerman, Aneesh Kumar K.V

On Thu, 23 Aug 2018 09:58:31 +0530
Mahesh Jagannath Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:

> On 08/21/2018 03:57 PM, Nicholas Piggin wrote:
> > On Fri, 17 Aug 2018 14:51:47 +0530
> > Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com> wrote:
> >   
> >> From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> >>
> >> With the powerpc/next commit e7e81847478 ("powerpc/64s: move machine
> >> check SLB flushing to mm/slb.c"), the SLB error recovery is broken. That
> >> commit missed a crucial step while rebolting: OR-ing the index value
> >> into RB[52-63], the field that selects which SLB entry slbmte writes.
> >> This patch fixes that.
> >>
> >> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
> >> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> >> ---
> >>  arch/powerpc/mm/slb.c |    5 ++++-
> >>  1 file changed, 4 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> >> index 0b095fa54049..6dd9913425bc 100644
> >> --- a/arch/powerpc/mm/slb.c
> >> +++ b/arch/powerpc/mm/slb.c
> >> @@ -101,9 +101,12 @@ void __slb_restore_bolted_realmode(void)
> >>  
> >>  	 /* No isync needed because realmode. */
> >>  	for (index = 0; index < SLB_NUM_BOLTED; index++) {
> >> +		unsigned long rb = be64_to_cpu(p->save_area[index].esid);
> >> +
> >> +		rb = (rb & ~0xFFFul) | index;
> >>  		asm volatile("slbmte  %0,%1" :
> >>  		     : "r" (be64_to_cpu(p->save_area[index].vsid)),
> >> -		       "r" (be64_to_cpu(p->save_area[index].esid)));
> >> +		       "r" (rb));
> >>  	}
> >>  }
> >>  
> >>  
> > 
> > I'm just looking at this again. The bolted save areas do have the
> > index field set. So for the OS, your patch should be equivalent to
> > this, right?
> > 
> >  static inline void slb_shadow_clear(enum slb_index index)
> >  {
> > -       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, 0);
> > +       WRITE_ONCE(get_slb_shadow()->save_area[index].esid, index);
> >  }
> > 
> > Which seems like a better fix.  
> 
> Yeah this also fixes the issue. The only additional change required is
> cpu_to_be64(index).

Ah yep.

> As long as we maintain index in bolted save areas
> (for valid/invalid entries) we should be ok. Will respin v2 with this
> change.

Cool, Reviewed-by: Nicholas Piggin <npiggin@gmail.com> in that case :)

Thanks,
Nick

