* [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-17 6:51 ` Joel Stanley
2018-09-20 4:21 ` [01/12] " Michael Ellerman
2018-09-14 15:30 ` [PATCH 02/12] powerpc/64s/hash: avoid the POWER5 < DD2.1 slb invalidate workaround on POWER8/9 Nicholas Piggin
` (10 subsequent siblings)
11 siblings, 2 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
This causes SLB alloation to start 1 beyond the start of the SLB.
There is no real problem because after it wraps it stats behaving
properly, it's just surprisig to see when looking at SLB traces.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/slb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 9f574e59d178..2f162c6e52d4 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -355,7 +355,7 @@ void slb_initialize(void)
#endif
}
- get_paca()->stab_rr = SLB_NUM_BOLTED;
+ get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
lflags = SLB_VSID_KERNEL | linear_llp;
vflags = SLB_VSID_KERNEL | vmalloc_llp;
--
2.18.0
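To make the off-by-one concrete, here is a small standalone sketch (not kernel code) of the round-robin victim selection that stab_rr feeds. It mirrors the shape of the allocation logic seen later in this series (alloc_slb_index in patch 07); the constants are illustrative only. With the old initial value, the first entry allocated after boot lands at SLB_NUM_BOLTED + 1, one beyond the first replaceable slot, until the counter wraps.

#include <stdio.h>

#define SLB_NUM_BOLTED	3	/* bolted slots before patch 04 drops this to 2 */
#define MMU_SLB_SIZE	32	/* illustrative SLB size */

static int stab_rr;

static int alloc_slb_index(void)
{
	/* Same round-robin update as the SLB miss path: advance past the
	 * last-used slot, wrapping back to the first non-bolted one. */
	if (stab_rr < MMU_SLB_SIZE - 1)
		stab_rr++;
	else
		stab_rr = SLB_NUM_BOLTED;
	return stab_rr;
}

int main(void)
{
	stab_rr = SLB_NUM_BOLTED;	/* old initialization */
	printf("old init: first slot used = %d\n", alloc_slb_index());

	stab_rr = SLB_NUM_BOLTED - 1;	/* fixed initialization */
	printf("new init: first slot used = %d\n", alloc_slb_index());
	return 0;
}

Run, this prints 4 for the old initialization and 3 for the fixed one, i.e. the fix makes the first allocation land on the first non-bolted slot.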
* Re: [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization
2018-09-14 15:30 ` [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization Nicholas Piggin
@ 2018-09-17 6:51 ` Joel Stanley
2018-09-17 7:35 ` Nicholas Piggin
2018-09-20 4:21 ` [01/12] " Michael Ellerman
1 sibling, 1 reply; 21+ messages in thread
From: Joel Stanley @ 2018-09-17 6:51 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: linuxppc-dev, aneesh.kumar
On Sat, 15 Sep 2018 at 01:03, Nicholas Piggin <npiggin@gmail.com> wrote:
>
> This causes SLB alloation to start 1 beyond the start of the SLB.
> There is no real problem because after it wraps it stats behaving
starts?
> properly, it's just surprisig to see when looking at SLB traces.
surprising
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/mm/slb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 9f574e59d178..2f162c6e52d4 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -355,7 +355,7 @@ void slb_initialize(void)
> #endif
> }
>
> - get_paca()->stab_rr = SLB_NUM_BOLTED;
> + get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
>
> lflags = SLB_VSID_KERNEL | linear_llp;
> vflags = SLB_VSID_KERNEL | vmalloc_llp;
> --
> 2.18.0
>
* Re: [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization
2018-09-17 6:51 ` Joel Stanley
@ 2018-09-17 7:35 ` Nicholas Piggin
0 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-17 7:35 UTC (permalink / raw)
To: Joel Stanley; +Cc: linuxppc-dev, aneesh.kumar
On Mon, 17 Sep 2018 16:21:51 +0930
Joel Stanley <joel@jms.id.au> wrote:
> On Sat, 15 Sep 2018 at 01:03, Nicholas Piggin <npiggin@gmail.com> wrote:
> >
> > This causes SLB alloation to start 1 beyond the start of the SLB.
allocation
> > There is no real problem because after it wraps it stats behaving
>
> starts?
>
> > properly, it's just surprisig to see when looking at SLB traces.
>
> surprising
My keyboard is dying :(
* Re: [01/12] powerpc/64s/hash: Fix stab_rr off by one initialization
2018-09-14 15:30 ` [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization Nicholas Piggin
2018-09-17 6:51 ` Joel Stanley
@ 2018-09-20 4:21 ` Michael Ellerman
1 sibling, 0 replies; 21+ messages in thread
From: Michael Ellerman @ 2018-09-20 4:21 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev; +Cc: Aneesh Kumar K . V, Nicholas Piggin
On Fri, 2018-09-14 at 15:30:45 UTC, Nicholas Piggin wrote:
> This causes SLB alloation to start 1 beyond the start of the SLB.
> There is no real problem because after it wraps it stats behaving
> properly, it's just surprisig to see when looking at SLB traces.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/09b4438db13fa83b6219aee5993711
cheers
* [PATCH 02/12] powerpc/64s/hash: avoid the POWER5 < DD2.1 slb invalidate workaround on POWER8/9
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
2018-09-14 15:30 ` [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed Nicholas Piggin
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
I only have POWER8/9 to test, so just remove it for those.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/kernel/entry_64.S | 2 ++
arch/powerpc/mm/slb.c | 8 +++++---
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 2206912ea4f0..77a888bfcb53 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -672,7 +672,9 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
isync
slbie r6
+BEGIN_FTR_SECTION
slbie r6 /* Workaround POWER5 < DD2.1 issue */
+END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
slbmte r7,r0
isync
2:
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 2f162c6e52d4..1c7128c63a4b 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -256,9 +256,11 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
__slb_flush_and_rebolt();
}
- /* Workaround POWER5 < DD2.1 issue */
- if (offset == 1 || offset > SLB_CACHE_ENTRIES)
- asm volatile("slbie %0" : : "r" (slbie_data));
+ if (!cpu_has_feature(CPU_FTR_ARCH_207S)) {
+ /* Workaround POWER5 < DD2.1 issue */
+ if (offset == 1 || offset > SLB_CACHE_ENTRIES)
+ asm volatile("slbie %0" : : "r" (slbie_data));
+ }
get_paca()->slb_cache_ptr = 0;
copy_mm_to_paca(mm);
--
2.18.0
* [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
2018-09-14 15:30 ` [PATCH 01/12] powerpc/64s/hash: Fix stab_rr off by one initialization Nicholas Piggin
2018-09-14 15:30 ` [PATCH 02/12] powerpc/64s/hash: avoid the POWER5 < DD2.1 slb invalidate workaround on POWER8/9 Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-16 22:06 ` kbuild test robot
2018-09-17 6:00 ` Aneesh Kumar K.V
2018-09-14 15:30 ` [PATCH 04/12] powerpc/64s/hash: remove the vmalloc segment from the bolted SLB Nicholas Piggin
` (8 subsequent siblings)
11 siblings, 2 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
The POWER5 < DD2.1 issue is that slbie needs to be issued more than
once. It came in with this change:
ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, david@gibson.dropbear.id.au
[PATCH] POWER5 erratum workaround
Early POWER5 revisions (<DD2.1) have a problem requiring slbie
instructions to be repeated under some circumstances. The patch below
adds a workaround (patch made by Anton Blanchard).
The extra slbie in switch_slb is done even for the case where slbia is
called (slb_flush_and_rebolt). I don't believe that is required
because there are other slb_flush_and_rebolt callers which do not
issue the workaround slbie, which would be broken if it was required.
It also seems to be fine inside the isync with the first slbie, as it
is in the kernel stack switch code.
So move this workaround to where it is required. This is not much of
an optimisation because this is the fast path, but it makes the code
more understandable and neater.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/slb.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 1c7128c63a4b..d952ece3abf7 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -226,7 +226,6 @@ static inline int esids_match(unsigned long addr1, unsigned long addr2)
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
unsigned long offset;
- unsigned long slbie_data = 0;
unsigned long pc = KSTK_EIP(tsk);
unsigned long stack = KSTK_ESP(tsk);
unsigned long exec_base;
@@ -241,7 +240,9 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
offset = get_paca()->slb_cache_ptr;
if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
offset <= SLB_CACHE_ENTRIES) {
+ unsigned long slbie_data;
int i;
+
asm volatile("isync" : : : "memory");
for (i = 0; i < offset; i++) {
slbie_data = (unsigned long)get_paca()->slb_cache[i]
@@ -251,15 +252,14 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
slbie_data |= SLBIE_C; /* C set for user addresses */
asm volatile("slbie %0" : : "r" (slbie_data));
}
- asm volatile("isync" : : : "memory");
- } else {
- __slb_flush_and_rebolt();
- }
- if (!cpu_has_feature(CPU_FTR_ARCH_207S)) {
/* Workaround POWER5 < DD2.1 issue */
- if (offset == 1 || offset > SLB_CACHE_ENTRIES)
+ if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
asm volatile("slbie %0" : : "r" (slbie_data));
+
+ asm volatile("isync" : : : "memory");
+ } else {
+ __slb_flush_and_rebolt();
}
get_paca()->slb_cache_ptr = 0;
--
2.18.0
* Re: [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed
2018-09-14 15:30 ` [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed Nicholas Piggin
@ 2018-09-16 22:06 ` kbuild test robot
2018-09-17 6:00 ` Aneesh Kumar K.V
1 sibling, 0 replies; 21+ messages in thread
From: kbuild test robot @ 2018-09-16 22:06 UTC (permalink / raw)
To: Nicholas Piggin
Cc: kbuild-all, linuxppc-dev, Aneesh Kumar K . V, Nicholas Piggin
Hi Nicholas,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.19-rc3 next-20180913]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Nicholas-Piggin/SLB-miss-conversion-to-C-and-SLB-optimisations/20180917-015458
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=7.2.0 make.cross ARCH=powerpc
Note: it may well be a FALSE warning. FWIW you are at least aware of it now.
http://gcc.gnu.org/wiki/Better_Uninitialized_Warnings
Note: the linux-review/Nicholas-Piggin/SLB-miss-conversion-to-C-and-SLB-optimisations/20180917-015458 HEAD b26bd44c74488169a0fd19eef43ea3db189a207d builds fine.
It only hurts bisectibility.
All errors (new ones prefixed by >>):
arch/powerpc/mm/slb.c: In function 'switch_slb':
>> arch/powerpc/mm/slb.c:258:4: error: 'slbie_data' may be used uninitialized in this function [-Werror=maybe-uninitialized]
asm volatile("slbie %0" : : "r" (slbie_data));
^~~
cc1: all warnings being treated as errors
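For readers unfamiliar with this class of report, a minimal standalone reduction of the flagged pattern (not the kernel code): the variable is only assigned inside a loop, and the later use is guarded by a condition (offset == 1) that in fact implies the loop ran at least once, a relationship GCC's -Wmaybe-uninitialized analysis does not track. Whether the warning fires depends on the GCC version and optimization level; the report above used gcc 7.2 with warnings treated as errors.

#include <stdio.h>

static unsigned long slb_cache[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

static void flush(unsigned long offset, int old_cpu)
{
	unsigned long slbie_data;	/* "may be used uninitialized" */
	unsigned long i;

	for (i = 0; i < offset; i++) {
		slbie_data = slb_cache[i];
		printf("slbie %lx\n", slbie_data);
	}

	/* Only reachable when offset == 1, so slbie_data was assigned
	 * above, but the compiler cannot prove that correlation. */
	if (old_cpu && offset == 1)
		printf("slbie %lx (erratum workaround)\n", slbie_data);
}

int main(void)
{
	flush(1, 1);
	return 0;
}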
vim +/slbie_data +258 arch/powerpc/mm/slb.c
465ccab9 arch/powerpc/mm/slb.c will schmidt 2007-10-31 224
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 225 /* Flush all user entries from the segment table of the current processor. */
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 226 void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 227 {
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 228 unsigned long offset;
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 229 unsigned long pc = KSTK_EIP(tsk);
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 230 unsigned long stack = KSTK_ESP(tsk);
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 231 unsigned long exec_base;
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 232
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 233 /*
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 234 * We need interrupts hard-disabled here, not just soft-disabled,
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 235 * so that a PMU interrupt can't occur, which might try to access
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 236 * user memory (to get a stack trace) and possible cause an SLB miss
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 237 * which would update the slb_cache/slb_cache_ptr fields in the PACA.
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 238 */
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 239 hard_irq_disable();
9c1e1052 arch/powerpc/mm/slb.c Paul Mackerras 2009-08-17 240 offset = get_paca()->slb_cache_ptr;
44ae3ab3 arch/powerpc/mm/slb.c Matt Evans 2011-04-06 241 if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
f66bce5e arch/powerpc/mm/slb.c Olof Johansson 2007-10-16 242 offset <= SLB_CACHE_ENTRIES) {
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 243 unsigned long slbie_data;
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 244 int i;
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 245
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 246 asm volatile("isync" : : : "memory");
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 247 for (i = 0; i < offset; i++) {
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 248 slbie_data = (unsigned long)get_paca()->slb_cache[i]
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 249 << SID_SHIFT; /* EA */
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 250 slbie_data |= user_segment_size(slbie_data)
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 251 << SLBIE_SSIZE_SHIFT;
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 252 slbie_data |= SLBIE_C; /* C set for user addresses */
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 253 asm volatile("slbie %0" : : "r" (slbie_data));
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 254 }
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 255
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 256 /* Workaround POWER5 < DD2.1 issue */
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 257 if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
1189be65 arch/powerpc/mm/slb.c Paul Mackerras 2007-10-11 @258 asm volatile("slbie %0" : : "r" (slbie_data));
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 259
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 260 asm volatile("isync" : : : "memory");
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 261 } else {
6697d605 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 262 __slb_flush_and_rebolt();
c15c9670 arch/powerpc/mm/slb.c Nicholas Piggin 2018-09-15 263 }
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 264
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 265 get_paca()->slb_cache_ptr = 0;
52b1e665 arch/powerpc/mm/slb.c Aneesh Kumar K.V 2017-03-22 266 copy_mm_to_paca(mm);
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 267
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 268 /*
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 269 * preload some userspace segments into the SLB.
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 270 * Almost all 32 and 64bit PowerPC executables are linked at
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 271 * 0x10000000 so it makes sense to preload this segment.
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 272 */
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 273 exec_base = 0x10000000;
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 274
5eb9bac0 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 275 if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 276 is_kernel_addr(exec_base))
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 277 return;
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 278
5eb9bac0 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 279 slb_allocate(pc);
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 280
5eb9bac0 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 281 if (!esids_match(pc, stack))
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 282 slb_allocate(stack);
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 283
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 284 if (!esids_match(pc, exec_base) &&
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 285 !esids_match(stack, exec_base))
de4376c2 arch/powerpc/mm/slb.c Anton Blanchard 2009-07-13 286 slb_allocate(exec_base);
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 287 }
^1da177e arch/ppc64/mm/slb.c Linus Torvalds 2005-04-16 288
:::::: The code at line 258 was first introduced by commit
:::::: 1189be6508d45183013ddb82b18f4934193de274 [POWERPC] Use 1TB segments
:::::: TO: Paul Mackerras <paulus@samba.org>
:::::: CC: Paul Mackerras <paulus@samba.org>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 23944 bytes --]
* Re: [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed
2018-09-14 15:30 ` [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed Nicholas Piggin
2018-09-16 22:06 ` kbuild test robot
@ 2018-09-17 6:00 ` Aneesh Kumar K.V
2018-09-17 8:39 ` Nicholas Piggin
1 sibling, 1 reply; 21+ messages in thread
From: Aneesh Kumar K.V @ 2018-09-17 6:00 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> The POWER5 < DD2.1 issue is that slbie needs to be issued more than
> once. It came in with this change:
>
> ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, david@gibson.dropbear.id.au
> [PATCH] POWER5 erratum workaround
>
> Early POWER5 revisions (<DD2.1) have a problem requiring slbie
> instructions to be repeated under some circumstances. The patch below
> adds a workaround (patch made by Anton Blanchard).
Thanks for extracting this. Can we add this to the code? Also I am not
sure what is repeated here. Is it that we just need one extra slbie (hence
only applicable to offset == 1), or is it that we need to make sure there
is always one extra slbie? The code does the former. Do you have a link for
that email patch?
>
> The extra slbie in switch_slb is done even for the case where slbia is
> called (slb_flush_and_rebolt). I don't believe that is required
> because there are other slb_flush_and_rebolt callers which do not
> issue the workaround slbie, which would be broken if it was required.
>
> It also seems to be fine inside the isync with the first slbie, as it
> is in the kernel stack switch code.
>
> So move this workaround to where it is required. This is not much of
> an optimisation because this is the fast path, but it makes the code
> more understandable and neater.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/mm/slb.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index 1c7128c63a4b..d952ece3abf7 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -226,7 +226,6 @@ static inline int esids_match(unsigned long addr1, unsigned long addr2)
> void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
> {
> unsigned long offset;
> - unsigned long slbie_data = 0;
> unsigned long pc = KSTK_EIP(tsk);
> unsigned long stack = KSTK_ESP(tsk);
> unsigned long exec_base;
> @@ -241,7 +240,9 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
> offset = get_paca()->slb_cache_ptr;
> if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
> offset <= SLB_CACHE_ENTRIES) {
> + unsigned long slbie_data;
> int i;
> +
> asm volatile("isync" : : : "memory");
> for (i = 0; i < offset; i++) {
> slbie_data = (unsigned long)get_paca()->slb_cache[i]
> @@ -251,15 +252,14 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
> slbie_data |= SLBIE_C; /* C set for user addresses */
> asm volatile("slbie %0" : : "r" (slbie_data));
> }
> - asm volatile("isync" : : : "memory");
> - } else {
> - __slb_flush_and_rebolt();
> - }
>
> - if (!cpu_has_feature(CPU_FTR_ARCH_207S)) {
> /* Workaround POWER5 < DD2.1 issue */
> - if (offset == 1 || offset > SLB_CACHE_ENTRIES)
> + if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
> asm volatile("slbie %0" : : "r" (slbie_data));
> +
> + asm volatile("isync" : : : "memory");
> + } else {
> + __slb_flush_and_rebolt();
> }
>
> get_paca()->slb_cache_ptr = 0;
> --
> 2.18.0
* Re: [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed
2018-09-17 6:00 ` Aneesh Kumar K.V
@ 2018-09-17 8:39 ` Nicholas Piggin
0 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-17 8:39 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: linuxppc-dev
On Mon, 17 Sep 2018 11:30:16 +0530
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
> Nicholas Piggin <npiggin@gmail.com> writes:
>
> > The POWER5 < DD2.1 issue is that slbie needs to be issued more than
> > once. It came in with this change:
> >
> > ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, david@gibson.dropbear.id.au
> > [PATCH] POWER5 erratum workaround
> >
> > Early POWER5 revisions (<DD2.1) have a problem requiring slbie
> > instructions to be repeated under some circumstances. The patch below
> > adds a workaround (patch made by Anton Blanchard).
>
> Thanks for extracting this. Can we add this to the code?
The comment? Sure.
> Also I am not
> sure what is repeated here. Is it that we just need one extra slbie (hence
> only applicable to offset == 1), or is it that we need to make sure there
> is always one extra slbie? The code does the former.
Yeah it has always done the former, so my assumption is that you just
need more than one slbie. I don't think we need to bother revisiting
that assumption unless someone can pull up something definitive.
What I did change is that slbia no longer has the additional slbie, but
I think there are strong reasons it is not needed there.
> Do you have a link for
> that email patch?
I tried looking through the archives around that date but could not
find it. That came from a bitkeeper log.
Thanks,
Nick
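Following up on the comment Aneesh asked for above, a sketch of how it could read over the workaround in switch_slb. The conditional and the slbie are taken from the patch in this thread; the comment wording is only a suggestion, not necessarily what ends up committed.

	/*
	 * Workaround for early POWER5 (< DD2.1): slbie must be repeated
	 * under some circumstances (erratum workaround originally added
	 * by Anton Blanchard, ChangeSet 1.1608, 2004-04-29), so re-issue
	 * the last slbie when only a single entry was invalidated above.
	 */
	if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
		asm volatile("slbie %0" : : "r" (slbie_data));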
* [PATCH 04/12] powerpc/64s/hash: remove the vmalloc segment from the bolted SLB
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (2 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 03/12] powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb Nicholas Piggin
` (7 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
Remove the vmalloc segment from bolted SLBEs. This is not required to
be bolted, and seems like it was added to help pre-load the SLB on
context switch. However there are now other segments like the vmemmap
segment and non-zero node memory that often take misses after a context
switch, so it is better to solve this in a more general way.
A subsequent change will track free SLB entries and use those rather
than round-robin overwriting valid entries, which makes it far less
likely for kernel SLBEs to be evicted after they are installed.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 +-
arch/powerpc/mm/slb.c | 23 ++++---------------
2 files changed, 6 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index b3520b549cba..20d9ca736bbd 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -30,7 +30,7 @@
* SLB
*/
-#define SLB_NUM_BOLTED 3
+#define SLB_NUM_BOLTED 2
#define SLB_CACHE_ENTRIES 8
#define SLB_MIN_SIZE 32
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index d952ece3abf7..a5e58f11d676 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -30,8 +30,7 @@
enum slb_index {
LINEAR_INDEX = 0, /* Kernel linear map (0xc000000000000000) */
- VMALLOC_INDEX = 1, /* Kernel virtual map (0xd000000000000000) */
- KSTACK_INDEX = 2, /* Kernel stack map */
+ KSTACK_INDEX = 1, /* Kernel stack map */
};
extern void slb_allocate(unsigned long ea);
@@ -133,13 +132,11 @@ static void __slb_flush_and_rebolt(void)
{
/* If you change this make sure you change SLB_NUM_BOLTED
* and PR KVM appropriately too. */
- unsigned long linear_llp, vmalloc_llp, lflags, vflags;
+ unsigned long linear_llp, lflags;
unsigned long ksp_esid_data, ksp_vsid_data;
linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
- vmalloc_llp = mmu_psize_defs[mmu_vmalloc_psize].sllp;
lflags = SLB_VSID_KERNEL | linear_llp;
- vflags = SLB_VSID_KERNEL | vmalloc_llp;
ksp_esid_data = mk_esid_data(get_paca()->kstack, mmu_kernel_ssize, KSTACK_INDEX);
if ((ksp_esid_data & ~0xfffffffUL) <= PAGE_OFFSET) {
@@ -157,14 +154,10 @@ static void __slb_flush_and_rebolt(void)
* the stack between the slbia and rebolting it. */
asm volatile("isync\n"
"slbia\n"
- /* Slot 1 - first VMALLOC segment */
+ /* Slot 1 - kernel stack */
"slbmte %0,%1\n"
- /* Slot 2 - kernel stack */
- "slbmte %2,%3\n"
"isync"
- :: "r"(mk_vsid_data(VMALLOC_START, mmu_kernel_ssize, vflags)),
- "r"(mk_esid_data(VMALLOC_START, mmu_kernel_ssize, VMALLOC_INDEX)),
- "r"(ksp_vsid_data),
+ :: "r"(ksp_vsid_data),
"r"(ksp_esid_data)
: "memory");
}
@@ -186,10 +179,6 @@ void slb_flush_and_rebolt(void)
void slb_vmalloc_update(void)
{
- unsigned long vflags;
-
- vflags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_vmalloc_psize].sllp;
- slb_shadow_update(VMALLOC_START, mmu_kernel_ssize, vflags, VMALLOC_INDEX);
slb_flush_and_rebolt();
}
@@ -324,7 +313,7 @@ void slb_set_size(u16 size)
void slb_initialize(void)
{
unsigned long linear_llp, vmalloc_llp, io_llp;
- unsigned long lflags, vflags;
+ unsigned long lflags;
static int slb_encoding_inited;
#ifdef CONFIG_SPARSEMEM_VMEMMAP
unsigned long vmemmap_llp;
@@ -360,14 +349,12 @@ void slb_initialize(void)
get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
lflags = SLB_VSID_KERNEL | linear_llp;
- vflags = SLB_VSID_KERNEL | vmalloc_llp;
/* Invalidate the entire SLB (even entry 0) & all the ERATS */
asm volatile("isync":::"memory");
asm volatile("slbmte %0,%0"::"r" (0) : "memory");
asm volatile("isync; slbia; isync":::"memory");
create_shadowed_slbe(PAGE_OFFSET, mmu_kernel_ssize, lflags, LINEAR_INDEX);
- create_shadowed_slbe(VMALLOC_START, mmu_kernel_ssize, vflags, VMALLOC_INDEX);
/* For the boot cpu, we're running on the stack in init_thread_union,
* which is in the first segment of the linear mapping, and also
--
2.18.0
* [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (3 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 04/12] powerpc/64s/hash: remove the vmalloc segment from the bolted SLB Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-17 6:08 ` Aneesh Kumar K.V
2018-09-14 15:30 ` [PATCH 06/12] powerpc/64s/hash: Use POWER9 SLBIA IH=3 " Nicholas Piggin
` (6 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
The SLBIA IH=1 hint will remove all non-zero SLBEs, but only
invalidate ERAT entries associated with a class value of 1 (which
Linux assigns to user addresses), for processors that support the
hint (e.g., POWER6 and newer).
This prevents kernel ERAT entries from being invalidated when
context switching (if the thread faulted in more than 8 user SLBEs).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/slb.c | 38 +++++++++++++++++++++++---------------
1 file changed, 23 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index a5e58f11d676..03fa1c663ccf 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -128,13 +128,21 @@ void slb_flush_all_realmode(void)
asm volatile("slbmte %0,%0; slbia" : : "r" (0));
}
-static void __slb_flush_and_rebolt(void)
+void slb_flush_and_rebolt(void)
{
/* If you change this make sure you change SLB_NUM_BOLTED
* and PR KVM appropriately too. */
unsigned long linear_llp, lflags;
unsigned long ksp_esid_data, ksp_vsid_data;
+ WARN_ON(!irqs_disabled());
+
+ /*
+ * We can't take a PMU exception in the following code, so hard
+ * disable interrupts.
+ */
+ hard_irq_disable();
+
linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
lflags = SLB_VSID_KERNEL | linear_llp;
@@ -160,20 +168,7 @@ static void __slb_flush_and_rebolt(void)
:: "r"(ksp_vsid_data),
"r"(ksp_esid_data)
: "memory");
-}
-void slb_flush_and_rebolt(void)
-{
-
- WARN_ON(!irqs_disabled());
-
- /*
- * We can't take a PMU exception in the following code, so hard
- * disable interrupts.
- */
- hard_irq_disable();
-
- __slb_flush_and_rebolt();
get_paca()->slb_cache_ptr = 0;
}
@@ -248,7 +243,20 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
asm volatile("isync" : : : "memory");
} else {
- __slb_flush_and_rebolt();
+ struct slb_shadow *p = get_slb_shadow();
+ unsigned long ksp_esid_data =
+ be64_to_cpu(p->save_area[KSTACK_INDEX].esid);
+ unsigned long ksp_vsid_data =
+ be64_to_cpu(p->save_area[KSTACK_INDEX].vsid);
+
+ asm volatile("isync\n"
+ PPC_SLBIA(1) "\n"
+ "slbmte %0,%1\n"
+ "isync"
+ :: "r"(ksp_vsid_data),
+ "r"(ksp_esid_data));
+
+ asm volatile("isync" : : : "memory");
}
get_paca()->slb_cache_ptr = 0;
--
2.18.0
* Re: [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb
2018-09-14 15:30 ` [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb Nicholas Piggin
@ 2018-09-17 6:08 ` Aneesh Kumar K.V
2018-09-17 8:42 ` Nicholas Piggin
0 siblings, 1 reply; 21+ messages in thread
From: Aneesh Kumar K.V @ 2018-09-17 6:08 UTC (permalink / raw)
To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin
Nicholas Piggin <npiggin@gmail.com> writes:
> The SLBIA IH=1 hint will remove all non-zero SLBEs, but only
> invalidate ERAT entries associated with a class value of 1 (which
> Linux assigns to user addresses), for processors that support the
> hint (e.g., POWER6 and newer).
>
> This prevents kernel ERAT entries from being invalidated when
> context switching (if the thread faulted in more than 8 user SLBEs).
how about renaming stuff to indicate kernel ERAT entries are kept?
something like slb_flush_and_rebolt_user()?
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/mm/slb.c | 38 +++++++++++++++++++++++---------------
> 1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index a5e58f11d676..03fa1c663ccf 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -128,13 +128,21 @@ void slb_flush_all_realmode(void)
> asm volatile("slbmte %0,%0; slbia" : : "r" (0));
> }
>
> -static void __slb_flush_and_rebolt(void)
> +void slb_flush_and_rebolt(void)
> {
> /* If you change this make sure you change SLB_NUM_BOLTED
> * and PR KVM appropriately too. */
> unsigned long linear_llp, lflags;
> unsigned long ksp_esid_data, ksp_vsid_data;
>
> + WARN_ON(!irqs_disabled());
> +
> + /*
> + * We can't take a PMU exception in the following code, so hard
> + * disable interrupts.
> + */
> + hard_irq_disable();
> +
> linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
> lflags = SLB_VSID_KERNEL | linear_llp;
>
> @@ -160,20 +168,7 @@ static void __slb_flush_and_rebolt(void)
> :: "r"(ksp_vsid_data),
> "r"(ksp_esid_data)
> : "memory");
> -}
>
> -void slb_flush_and_rebolt(void)
> -{
> -
> - WARN_ON(!irqs_disabled());
> -
> - /*
> - * We can't take a PMU exception in the following code, so hard
> - * disable interrupts.
> - */
> - hard_irq_disable();
> -
> - __slb_flush_and_rebolt();
> get_paca()->slb_cache_ptr = 0;
> }
>
> @@ -248,7 +243,20 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
>
> asm volatile("isync" : : : "memory");
> } else {
> - __slb_flush_and_rebolt();
> + struct slb_shadow *p = get_slb_shadow();
> + unsigned long ksp_esid_data =
> + be64_to_cpu(p->save_area[KSTACK_INDEX].esid);
> + unsigned long ksp_vsid_data =
> + be64_to_cpu(p->save_area[KSTACK_INDEX].vsid);
> +
> + asm volatile("isync\n"
> + PPC_SLBIA(1) "\n"
> + "slbmte %0,%1\n"
> + "isync"
> + :: "r"(ksp_vsid_data),
> + "r"(ksp_esid_data));
> +
> + asm volatile("isync" : : : "memory");
> }
>
> get_paca()->slb_cache_ptr = 0;
> --
> 2.18.0
* Re: [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb
2018-09-17 6:08 ` Aneesh Kumar K.V
@ 2018-09-17 8:42 ` Nicholas Piggin
0 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-17 8:42 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: linuxppc-dev
On Mon, 17 Sep 2018 11:38:35 +0530
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> wrote:
> Nicholas Piggin <npiggin@gmail.com> writes:
>
> > The SLBIA IH=1 hint will remove all non-zero SLBEs, but only
> > invalidate ERAT entries associated with a class value of 1 (which
> > Linux assigns to user addresses), for processors that support the
> > hint (e.g., POWER6 and newer).
> >
> > This prevents kernel ERAT entries from being invalidated when
> > context switching (if the thread faulted in more than 8 user SLBEs).
>
>
> how about renaming stuff to indicate kernel ERAT entries are kept?
> something like slb_flush_and_rebolt_user()?
User mappings aren't bolted though. I consider rebolt to mean update
the bolted kernel mappings when something has changed (like vmalloc
segment update). That doesn't need to be done here, so I think this
is okay. I can add a comment though.
Thanks,
Nick
>
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> > arch/powerpc/mm/slb.c | 38 +++++++++++++++++++++++---------------
> > 1 file changed, 23 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> > index a5e58f11d676..03fa1c663ccf 100644
> > --- a/arch/powerpc/mm/slb.c
> > +++ b/arch/powerpc/mm/slb.c
> > @@ -128,13 +128,21 @@ void slb_flush_all_realmode(void)
> > asm volatile("slbmte %0,%0; slbia" : : "r" (0));
> > }
> >
> > -static void __slb_flush_and_rebolt(void)
> > +void slb_flush_and_rebolt(void)
> > {
> > /* If you change this make sure you change SLB_NUM_BOLTED
> > * and PR KVM appropriately too. */
> > unsigned long linear_llp, lflags;
> > unsigned long ksp_esid_data, ksp_vsid_data;
> >
> > + WARN_ON(!irqs_disabled());
> > +
> > + /*
> > + * We can't take a PMU exception in the following code, so hard
> > + * disable interrupts.
> > + */
> > + hard_irq_disable();
> > +
> > linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
> > lflags = SLB_VSID_KERNEL | linear_llp;
> >
> > @@ -160,20 +168,7 @@ static void __slb_flush_and_rebolt(void)
> > :: "r"(ksp_vsid_data),
> > "r"(ksp_esid_data)
> > : "memory");
> > -}
> >
> > -void slb_flush_and_rebolt(void)
> > -{
> > -
> > - WARN_ON(!irqs_disabled());
> > -
> > - /*
> > - * We can't take a PMU exception in the following code, so hard
> > - * disable interrupts.
> > - */
> > - hard_irq_disable();
> > -
> > - __slb_flush_and_rebolt();
> > get_paca()->slb_cache_ptr = 0;
> > }
> >
> > @@ -248,7 +243,20 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
> >
> > asm volatile("isync" : : : "memory");
> > } else {
> > - __slb_flush_and_rebolt();
> > + struct slb_shadow *p = get_slb_shadow();
> > + unsigned long ksp_esid_data =
> > + be64_to_cpu(p->save_area[KSTACK_INDEX].esid);
> > + unsigned long ksp_vsid_data =
> > + be64_to_cpu(p->save_area[KSTACK_INDEX].vsid);
> > +
> > + asm volatile("isync\n"
> > + PPC_SLBIA(1) "\n"
> > + "slbmte %0,%1\n"
> > + "isync"
> > + :: "r"(ksp_vsid_data),
> > + "r"(ksp_esid_data));
> > +
> > + asm volatile("isync" : : : "memory");
> > }
> >
> > get_paca()->slb_cache_ptr = 0;
> > --
> > 2.18.0
>
* [PATCH 06/12] powerpc/64s/hash: Use POWER9 SLBIA IH=3 variant in switch_slb
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (4 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 05/12] powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 07/12] powerpc/64s/hash: convert SLB miss handlers to C Nicholas Piggin
` (5 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
POWER9 introduces SLBIA IH=3, which invalidates all SLB entries and
associated lookaside information that have a class value of 1, which
Linux assigns to user addresses. This matches what switch_slb wants,
and allows a simple fast implementation that avoids the slb_cache
complexity.
As a side-effect, the POWER5 < DD2.1 SLB invalidation workaround is
also avoided on POWER9.
Process context switching rate is improved about 2.2% for a small
process that hits the slb cache, which is the best case for the current
code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/mm/slb.c | 86 +++++++++++++++++++++++-----------------
arch/powerpc/xmon/xmon.c | 11 +++--
2 files changed, 57 insertions(+), 40 deletions(-)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 03fa1c663ccf..319c772f7cbd 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -209,7 +209,6 @@ static inline int esids_match(unsigned long addr1, unsigned long addr2)
/* Flush all user entries from the segment table of the current processor. */
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
- unsigned long offset;
unsigned long pc = KSTK_EIP(tsk);
unsigned long stack = KSTK_ESP(tsk);
unsigned long exec_base;
@@ -221,45 +220,57 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
* which would update the slb_cache/slb_cache_ptr fields in the PACA.
*/
hard_irq_disable();
- offset = get_paca()->slb_cache_ptr;
- if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
- offset <= SLB_CACHE_ENTRIES) {
- unsigned long slbie_data;
- int i;
-
- asm volatile("isync" : : : "memory");
- for (i = 0; i < offset; i++) {
- slbie_data = (unsigned long)get_paca()->slb_cache[i]
- << SID_SHIFT; /* EA */
- slbie_data |= user_segment_size(slbie_data)
- << SLBIE_SSIZE_SHIFT;
- slbie_data |= SLBIE_C; /* C set for user addresses */
- asm volatile("slbie %0" : : "r" (slbie_data));
- }
-
- /* Workaround POWER5 < DD2.1 issue */
- if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
- asm volatile("slbie %0" : : "r" (slbie_data));
+ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ /*
+ * SLBIA IH=3 invalidates all Class=1 SLBEs and their
+ * associated lookaside structures, which matches what
+ * switch_slb wants. So ARCH_300 does not use the slb
+ * cache.
+ */
+ asm volatile("isync ; " PPC_SLBIA(3)" ; isync");
- asm volatile("isync" : : : "memory");
} else {
- struct slb_shadow *p = get_slb_shadow();
- unsigned long ksp_esid_data =
- be64_to_cpu(p->save_area[KSTACK_INDEX].esid);
- unsigned long ksp_vsid_data =
- be64_to_cpu(p->save_area[KSTACK_INDEX].vsid);
-
- asm volatile("isync\n"
- PPC_SLBIA(1) "\n"
- "slbmte %0,%1\n"
- "isync"
- :: "r"(ksp_vsid_data),
- "r"(ksp_esid_data));
-
- asm volatile("isync" : : : "memory");
+ unsigned long offset = get_paca()->slb_cache_ptr;
+
+ if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
+ offset <= SLB_CACHE_ENTRIES) {
+ unsigned long slbie_data;
+ int i;
+
+ asm volatile("isync" : : : "memory");
+ for (i = 0; i < offset; i++) {
+ /* EA */
+ slbie_data = (unsigned long)
+ get_paca()->slb_cache[i] << SID_SHIFT;
+ slbie_data |= user_segment_size(slbie_data)
+ << SLBIE_SSIZE_SHIFT;
+ slbie_data |= SLBIE_C; /* user slbs have C=1 */
+ asm volatile("slbie %0" : : "r" (slbie_data));
+ }
+
+ /* Workaround POWER5 < DD2.1 issue */
+ if (!cpu_has_feature(CPU_FTR_ARCH_207S) && offset == 1)
+ asm volatile("slbie %0" : : "r" (slbie_data));
+
+ asm volatile("isync" : : : "memory");
+ } else {
+ struct slb_shadow *p = get_slb_shadow();
+ unsigned long ksp_esid_data =
+ be64_to_cpu(p->save_area[KSTACK_INDEX].esid);
+ unsigned long ksp_vsid_data =
+ be64_to_cpu(p->save_area[KSTACK_INDEX].vsid);
+
+ asm volatile("isync\n"
+ PPC_SLBIA(1) "\n"
+ "slbmte %0,%1\n"
+ "isync"
+ :: "r"(ksp_vsid_data),
+ "r"(ksp_esid_data));
+ }
+
+ get_paca()->slb_cache_ptr = 0;
}
- get_paca()->slb_cache_ptr = 0;
copy_mm_to_paca(mm);
/*
@@ -385,6 +396,9 @@ static void insert_slb_entry(unsigned long vsid, unsigned long ea,
enum slb_index index;
int slb_cache_index;
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return; /* ISAv3.0B and later does not use slb_cache */
+
/*
* We are irq disabled, hence should be safe to access PACA.
*/
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 129dcf7e3d9c..eb2e0d472eb1 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2393,10 +2393,13 @@ static void dump_one_paca(int cpu)
}
}
DUMP(p, vmalloc_sllp, "%#-*x");
- DUMP(p, slb_cache_ptr, "%#-*x");
- for (i = 0; i < SLB_CACHE_ENTRIES; i++)
- printf(" %-*s[%d] = 0x%016x\n",
- 22, "slb_cache", i, p->slb_cache[i]);
+
+ if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
+ DUMP(p, slb_cache_ptr, "%#-*x");
+ for (i = 0; i < SLB_CACHE_ENTRIES; i++)
+ printf(" %-*s[%d] = 0x%016x\n",
+ 22, "slb_cache", i, p->slb_cache[i]);
+ }
DUMP(p, rfi_flush_fallback_area, "%-*px");
#endif
--
2.18.0
* [PATCH 07/12] powerpc/64s/hash: convert SLB miss handlers to C
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (5 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 06/12] powerpc/64s/hash: Use POWER9 SLBIA IH=3 " Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 08/12] powerpc/64s/hash: remove user SLB data from the paca Nicholas Piggin
` (4 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
This patch moves SLB miss handlers completely to C, using the standard
exception handler macros to set up the stack and branch to C.
This can be done because the segment containing the kernel stack is
always bolted, so accessing it with relocation on will not cause an
SLB exception.
Arbitrary kernel memory may not be accessed when handling kernel space
SLB misses, so care should be taken there. However user SLB misses can
access any kernel memory, which can be used to move some fields out of
the paca (in later patches).
User SLB misses could quite easily reconcile IRQs and set up a first-class
kernel environment and exit via ret_from_except; however, that
doesn't seem to be necessary at the moment, so we only do that if a
bad fault is encountered.
[ Credit to Aneesh for bug fixes, error checks, and improvements to bad
address handling, etc ]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Since RFC:
- Added MSR[RI] handling
- Fixed up a register loss bug exposed by irq tracing (Aneesh)
- Reject misses outside the defined kernel regions (Aneesh)
- Added several more sanity checks and error handling (Aneesh); we may
look at consolidating these tests and tightening up the code, but for
a first pass we decided it's better to check carefully.
Since v1:
- Fixed SLB cache corruption (Aneesh)
- Fixed untidy SLBE allocation "leak" in get_vsid error case
- Now survives some stress testing on real hardware
---
arch/powerpc/include/asm/asm-prototypes.h | 2 +
arch/powerpc/include/asm/exception-64s.h | 8 -
arch/powerpc/kernel/exceptions-64s.S | 202 +++----------
arch/powerpc/mm/Makefile | 2 +-
arch/powerpc/mm/slb.c | 271 +++++++++--------
arch/powerpc/mm/slb_low.S | 335 ----------------------
6 files changed, 196 insertions(+), 624 deletions(-)
delete mode 100644 arch/powerpc/mm/slb_low.S
diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 1f4691ce4126..78ed3c3f879a 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -78,6 +78,8 @@ void kernel_bad_stack(struct pt_regs *regs);
void system_reset_exception(struct pt_regs *regs);
void machine_check_exception(struct pt_regs *regs);
void emulation_assist_interrupt(struct pt_regs *regs);
+long do_slb_fault(struct pt_regs *regs, unsigned long ea);
+void do_bad_slb_fault(struct pt_regs *regs, unsigned long ea, long err);
/* signals, syscalls and interrupts */
long sys_swapcontext(struct ucontext __user *old_ctx,
diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index a86feddddad0..47578b79f0fb 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -60,14 +60,6 @@
*/
#define MAX_MCE_DEPTH 4
-/*
- * EX_LR is only used in EXSLB and where it does not overlap with EX_DAR
- * EX_CCR similarly with DSISR, but being 4 byte registers there is a hole
- * in the save area so it's not necessary to overlap them. Could be used
- * for future savings though if another 4 byte register was to be saved.
- */
-#define EX_LR EX_DAR
-
/*
* EX_R3 is only used by the bad_stack handler. bad_stack reloads and
* saves DAR from SPRN_DAR, and EX_DAR is not used. So EX_R3 can overlap
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 9dad73722d1a..c4f372ef4842 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -567,28 +567,36 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80)
- SET_SCRATCH0(r13)
- EXCEPTION_PROLOG_0(PACA_EXSLB)
- EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
- mr r12,r3 /* save r3 */
- mfspr r3,SPRN_DAR
- mfspr r11,SPRN_SRR1
- crset 4*cr6+eq
- BRANCH_TO_COMMON(r10, slb_miss_common)
+EXCEPTION_PROLOG(PACA_EXSLB, data_access_slb_common, EXC_STD, KVMTEST_PR, 0x380);
EXC_REAL_END(data_access_slb, 0x380, 0x80)
EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
- SET_SCRATCH0(r13)
- EXCEPTION_PROLOG_0(PACA_EXSLB)
- EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
- mr r12,r3 /* save r3 */
- mfspr r3,SPRN_DAR
- mfspr r11,SPRN_SRR1
- crset 4*cr6+eq
- BRANCH_TO_COMMON(r10, slb_miss_common)
+EXCEPTION_RELON_PROLOG(PACA_EXSLB, data_access_slb_common, EXC_STD, NOTEST, 0x380);
EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
+
TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
+EXC_COMMON_BEGIN(data_access_slb_common)
+ mfspr r10,SPRN_DAR
+ std r10,PACA_EXSLB+EX_DAR(r13)
+ EXCEPTION_PROLOG_COMMON(0x380, PACA_EXSLB)
+ ld r4,PACA_EXSLB+EX_DAR(r13)
+ std r4,_DAR(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl do_slb_fault
+ cmpdi r3,0
+ bne- 1f
+ b fast_exception_return
+1: /* Error case */
+ std r3,RESULT(r1)
+ bl save_nvgprs
+ RECONCILE_IRQ_STATE(r10, r11)
+ ld r4,_DAR(r1)
+ ld r5,RESULT(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl do_bad_slb_fault
+ b ret_from_except
+
EXC_REAL(instruction_access, 0x400, 0x80)
EXC_VIRT(instruction_access, 0x4400, 0x80, 0x400)
@@ -611,160 +619,34 @@ ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80)
- SET_SCRATCH0(r13)
- EXCEPTION_PROLOG_0(PACA_EXSLB)
- EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
- mr r12,r3 /* save r3 */
- mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
- mfspr r11,SPRN_SRR1
- crclr 4*cr6+eq
- BRANCH_TO_COMMON(r10, slb_miss_common)
+EXCEPTION_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, KVMTEST_PR, 0x480);
EXC_REAL_END(instruction_access_slb, 0x480, 0x80)
EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
- SET_SCRATCH0(r13)
- EXCEPTION_PROLOG_0(PACA_EXSLB)
- EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
- mr r12,r3 /* save r3 */
- mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
- mfspr r11,SPRN_SRR1
- crclr 4*cr6+eq
- BRANCH_TO_COMMON(r10, slb_miss_common)
+EXCEPTION_RELON_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, NOTEST, 0x480);
EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
-TRAMP_KVM(PACA_EXSLB, 0x480)
-
-
-/*
- * This handler is used by the 0x380 and 0x480 SLB miss interrupts, as well as
- * the virtual mode 0x4380 and 0x4480 interrupts if AIL is enabled.
- */
-EXC_COMMON_BEGIN(slb_miss_common)
- /*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contains the saved r3,
- * r11 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * cr6.eq is set for a D-SLB miss, clear for a I-SLB miss
- * We assume we aren't going to take any exceptions during this
- * procedure.
- */
- mflr r10
- stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
- std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
-
- andi. r9,r11,MSR_PR // Check for exception from userspace
- cmpdi cr4,r9,MSR_PR // And save the result in CR4 for later
-
- /*
- * Test MSR_RI before calling slb_allocate_realmode, because the
- * MSR in r11 gets clobbered. However we still want to allocate
- * SLB in case MSR_RI=0, to minimise the risk of getting stuck in
- * recursive SLB faults. So use cr5 for this, which is preserved.
- */
- andi. r11,r11,MSR_RI /* check for unrecoverable exception */
- cmpdi cr5,r11,MSR_RI
-
- crset 4*cr0+eq
-#ifdef CONFIG_PPC_BOOK3S_64
-BEGIN_MMU_FTR_SECTION
- bl slb_allocate
-END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX)
-#endif
-
- ld r10,PACA_EXSLB+EX_LR(r13)
- lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
- mtlr r10
-
- /*
- * Large address, check whether we have to allocate new contexts.
- */
- beq- 8f
-
- bne- cr5,2f /* if unrecoverable exception, oops */
-
- /* All done -- return from exception. */
-
- bne cr4,1f /* returning to kernel */
-
- mtcrf 0x80,r9
- mtcrf 0x08,r9 /* MSR[PR] indication is in cr4 */
- mtcrf 0x04,r9 /* MSR[RI] indication is in cr5 */
- mtcrf 0x02,r9 /* I/D indication is in cr6 */
- mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
-
- RESTORE_CTR(r9, PACA_EXSLB)
- RESTORE_PPR_PACA(PACA_EXSLB, r9)
- mr r3,r12
- ld r9,PACA_EXSLB+EX_R9(r13)
- ld r10,PACA_EXSLB+EX_R10(r13)
- ld r11,PACA_EXSLB+EX_R11(r13)
- ld r12,PACA_EXSLB+EX_R12(r13)
- ld r13,PACA_EXSLB+EX_R13(r13)
- RFI_TO_USER
- b . /* prevent speculative execution */
-1:
- mtcrf 0x80,r9
- mtcrf 0x08,r9 /* MSR[PR] indication is in cr4 */
- mtcrf 0x04,r9 /* MSR[RI] indication is in cr5 */
- mtcrf 0x02,r9 /* I/D indication is in cr6 */
- mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
-
- RESTORE_CTR(r9, PACA_EXSLB)
- RESTORE_PPR_PACA(PACA_EXSLB, r9)
- mr r3,r12
- ld r9,PACA_EXSLB+EX_R9(r13)
- ld r10,PACA_EXSLB+EX_R10(r13)
- ld r11,PACA_EXSLB+EX_R11(r13)
- ld r12,PACA_EXSLB+EX_R12(r13)
- ld r13,PACA_EXSLB+EX_R13(r13)
- RFI_TO_KERNEL
- b . /* prevent speculative execution */
-
-
-2: std r3,PACA_EXSLB+EX_DAR(r13)
- mr r3,r12
- mfspr r11,SPRN_SRR0
- mfspr r12,SPRN_SRR1
- LOAD_HANDLER(r10,unrecov_slb)
- mtspr SPRN_SRR0,r10
- ld r10,PACAKMSR(r13)
- mtspr SPRN_SRR1,r10
- RFI_TO_KERNEL
- b .
-8: std r3,PACA_EXSLB+EX_DAR(r13)
- mr r3,r12
- mfspr r11,SPRN_SRR0
- mfspr r12,SPRN_SRR1
- LOAD_HANDLER(r10, large_addr_slb)
- mtspr SPRN_SRR0,r10
- ld r10,PACAKMSR(r13)
- mtspr SPRN_SRR1,r10
- RFI_TO_KERNEL
- b .
+TRAMP_KVM(PACA_EXSLB, 0x480)
-EXC_COMMON_BEGIN(unrecov_slb)
- EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
- RECONCILE_IRQ_STATE(r10, r11)
+EXC_COMMON_BEGIN(instruction_access_slb_common)
+ EXCEPTION_PROLOG_COMMON(0x480, PACA_EXSLB)
+ ld r4,_NIP(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl do_slb_fault
+ cmpdi r3,0
+ bne- 1f
+ b fast_exception_return
+1: /* Error case */
+ std r3,RESULT(r1)
bl save_nvgprs
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- bl unrecoverable_exception
- b 1b
-
-EXC_COMMON_BEGIN(large_addr_slb)
- EXCEPTION_PROLOG_COMMON(0x380, PACA_EXSLB)
RECONCILE_IRQ_STATE(r10, r11)
- ld r3, PACA_EXSLB+EX_DAR(r13)
- std r3, _DAR(r1)
- beq cr6, 2f
- li r10, 0x481 /* fix trap number for I-SLB miss */
- std r10, _TRAP(r1)
-2: bl save_nvgprs
- addi r3, r1, STACK_FRAME_OVERHEAD
- bl slb_miss_large_addr
+ ld r4,_NIP(r1)
+ ld r5,RESULT(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ bl do_bad_slb_fault
b ret_from_except
+
EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100)
.globl hardware_interrupt_hv;
hardware_interrupt_hv:
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index cdf6a9960046..892d4e061d62 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -15,7 +15,7 @@ obj-$(CONFIG_PPC_MMU_NOHASH) += mmu_context_nohash.o tlb_nohash.o \
obj-$(CONFIG_PPC_BOOK3E) += tlb_low_$(BITS)e.o
hash64-$(CONFIG_PPC_NATIVE) := hash_native_64.o
obj-$(CONFIG_PPC_BOOK3E_64) += pgtable-book3e.o
-obj-$(CONFIG_PPC_BOOK3S_64) += pgtable-hash64.o hash_utils_64.o slb_low.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o
+obj-$(CONFIG_PPC_BOOK3S_64) += pgtable-hash64.o hash_utils_64.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o
obj-$(CONFIG_PPC_RADIX_MMU) += pgtable-radix.o tlb-radix.o
obj-$(CONFIG_PPC_STD_MMU_32) += ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o
obj-$(CONFIG_PPC_STD_MMU) += tlb_hash$(BITS).o
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 319c772f7cbd..f6a5aedaae38 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -14,6 +14,7 @@
* 2 of the License, or (at your option) any later version.
*/
+#include <asm/asm-prototypes.h>
#include <asm/pgtable.h>
#include <asm/mmu.h>
#include <asm/mmu_context.h>
@@ -33,7 +34,7 @@ enum slb_index {
KSTACK_INDEX = 1, /* Kernel stack map */
};
-extern void slb_allocate(unsigned long ea);
+static long slb_allocate_user(struct mm_struct *mm, unsigned long ea);
#define slb_esid_mask(ssize) \
(((ssize) == MMU_SEGSIZE_256M)? ESID_MASK: ESID_MASK_1T)
@@ -44,13 +45,19 @@ static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | index;
}
-static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
+static inline unsigned long __mk_vsid_data(unsigned long vsid, int ssize,
unsigned long flags)
{
- return (get_kernel_vsid(ea, ssize) << slb_vsid_shift(ssize)) | flags |
+ return (vsid << slb_vsid_shift(ssize)) | flags |
((unsigned long) ssize << SLB_VSID_SSIZE_SHIFT);
}
+static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
+ unsigned long flags)
+{
+ return __mk_vsid_data(get_kernel_vsid(ea, ssize), ssize, flags);
+}
+
static inline void slb_shadow_update(unsigned long ea, int ssize,
unsigned long flags,
enum slb_index index)
@@ -284,49 +291,19 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
is_kernel_addr(exec_base))
return;
- slb_allocate(pc);
+ slb_allocate_user(mm, pc);
if (!esids_match(pc, stack))
- slb_allocate(stack);
+ slb_allocate_user(mm, stack);
if (!esids_match(pc, exec_base) &&
!esids_match(stack, exec_base))
- slb_allocate(exec_base);
-}
-
-static inline void patch_slb_encoding(unsigned int *insn_addr,
- unsigned int immed)
-{
-
- /*
- * This function patches either an li or a cmpldi instruction with
- * a new immediate value. This relies on the fact that both li
- * (which is actually addi) and cmpldi both take a 16-bit immediate
- * value, and it is situated in the same location in the instruction,
- * ie. bits 16-31 (Big endian bit order) or the lower 16 bits.
- * The signedness of the immediate operand differs between the two
- * instructions however this code is only ever patching a small value,
- * much less than 1 << 15, so we can get away with it.
- * To patch the value we read the existing instruction, clear the
- * immediate value, and or in our new value, then write the instruction
- * back.
- */
- unsigned int insn = (*insn_addr & 0xffff0000) | immed;
- patch_instruction(insn_addr, insn);
+ slb_allocate_user(mm, exec_base);
}
-extern u32 slb_miss_kernel_load_linear[];
-extern u32 slb_miss_kernel_load_io[];
-extern u32 slb_compare_rr_to_size[];
-extern u32 slb_miss_kernel_load_vmemmap[];
-
void slb_set_size(u16 size)
{
- if (mmu_slb_size == size)
- return;
-
mmu_slb_size = size;
- patch_slb_encoding(slb_compare_rr_to_size, mmu_slb_size);
}
void slb_initialize(void)
@@ -348,19 +325,9 @@ void slb_initialize(void)
#endif
if (!slb_encoding_inited) {
slb_encoding_inited = 1;
- patch_slb_encoding(slb_miss_kernel_load_linear,
- SLB_VSID_KERNEL | linear_llp);
- patch_slb_encoding(slb_miss_kernel_load_io,
- SLB_VSID_KERNEL | io_llp);
- patch_slb_encoding(slb_compare_rr_to_size,
- mmu_slb_size);
-
pr_devel("SLB: linear LLP = %04lx\n", linear_llp);
pr_devel("SLB: io LLP = %04lx\n", io_llp);
-
#ifdef CONFIG_SPARSEMEM_VMEMMAP
- patch_slb_encoding(slb_miss_kernel_load_vmemmap,
- SLB_VSID_KERNEL | vmemmap_llp);
pr_devel("SLB: vmemmap LLP = %04lx\n", vmemmap_llp);
#endif
}
@@ -389,52 +356,13 @@ void slb_initialize(void)
asm volatile("isync":::"memory");
}
-static void insert_slb_entry(unsigned long vsid, unsigned long ea,
- int bpsize, int ssize)
+static void slb_cache_update(unsigned long esid_data)
{
- unsigned long flags, vsid_data, esid_data;
- enum slb_index index;
int slb_cache_index;
if (cpu_has_feature(CPU_FTR_ARCH_300))
return; /* ISAv3.0B and later does not use slb_cache */
- /*
- * We are irq disabled, hence should be safe to access PACA.
- */
- VM_WARN_ON(!irqs_disabled());
-
- /*
- * We can't take a PMU exception in the following code, so hard
- * disable interrupts.
- */
- hard_irq_disable();
-
- index = get_paca()->stab_rr;
-
- /*
- * simple round-robin replacement of slb starting at SLB_NUM_BOLTED.
- */
- if (index < (mmu_slb_size - 1))
- index++;
- else
- index = SLB_NUM_BOLTED;
-
- get_paca()->stab_rr = index;
-
- flags = SLB_VSID_USER | mmu_psize_defs[bpsize].sllp;
- vsid_data = (vsid << slb_vsid_shift(ssize)) | flags |
- ((unsigned long) ssize << SLB_VSID_SSIZE_SHIFT);
- esid_data = mk_esid_data(ea, ssize, index);
-
- /*
- * No need for an isync before or after this slbmte. The exception
- * we enter with and the rfid we exit with are context synchronizing.
- * Also we only handle user segments here.
- */
- asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data)
- : "memory");
-
/*
* Now update slb cache entries
*/
@@ -456,58 +384,161 @@ static void insert_slb_entry(unsigned long vsid, unsigned long ea,
}
}
-static void handle_multi_context_slb_miss(int context_id, unsigned long ea)
+static enum slb_index alloc_slb_index(void)
+{
+ enum slb_index index;
+
+ /* round-robin replacement of slb starting at SLB_NUM_BOLTED. */
+ index = get_paca()->stab_rr;
+ if (index < (mmu_slb_size - 1))
+ index++;
+ else
+ index = SLB_NUM_BOLTED;
+ get_paca()->stab_rr = index;
+
+ return index;
+}
+
+static long slb_insert_entry(unsigned long ea, unsigned long context,
+ unsigned long flags, int ssize, bool kernel)
{
- struct mm_struct *mm = current->mm;
unsigned long vsid;
- int bpsize;
+ unsigned long vsid_data, esid_data;
+ enum slb_index index;
+
+ vsid = get_vsid(context, ea, ssize);
+ if (!vsid)
+ return -EFAULT;
+
+ index = alloc_slb_index();
+
+ vsid_data = __mk_vsid_data(vsid, ssize, flags);
+ esid_data = mk_esid_data(ea, ssize, index);
/*
- * We are always above 1TB, hence use high user segment size.
+ * No need for an isync before or after this slbmte. The exception
+ * we enter with and the rfid we exit with are context synchronizing.
+ * Also we only handle user segments here.
*/
- vsid = get_vsid(context_id, ea, mmu_highuser_ssize);
- bpsize = get_slice_psize(mm, ea);
- insert_slb_entry(vsid, ea, bpsize, mmu_highuser_ssize);
+ asm volatile("slbmte %0, %1" : : "r" (vsid_data), "r" (esid_data));
+
+ if (!kernel)
+ slb_cache_update(esid_data);
+
+ return 0;
}
-void slb_miss_large_addr(struct pt_regs *regs)
+static long slb_allocate_kernel(unsigned long ea, unsigned long id)
{
- enum ctx_state prev_state = exception_enter();
- unsigned long ea = regs->dar;
- int context;
+ unsigned long context;
+ unsigned long flags;
+ int ssize;
- if (REGION_ID(ea) != USER_REGION_ID)
- goto slb_bad_addr;
+ if ((ea & ~REGION_MASK) >= (1ULL << MAX_EA_BITS_PER_CONTEXT))
+ return -EFAULT;
- /*
- * Are we beyound what the page table layout supports ?
- */
- if ((ea & ~REGION_MASK) >= H_PGTABLE_RANGE)
- goto slb_bad_addr;
+ if (id == KERNEL_REGION_ID) {
+ flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_linear_psize].sllp;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ } else if (id == VMEMMAP_REGION_ID) {
+ flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_vmemmap_psize].sllp;
+#endif
+ } else if (id == VMALLOC_REGION_ID) {
+ if (ea < H_VMALLOC_END)
+ flags = get_paca()->vmalloc_sllp;
+ else
+ flags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_io_psize].sllp;
+ } else {
+ return -EFAULT;
+ }
+
+ ssize = MMU_SEGSIZE_1T;
+ if (!mmu_has_feature(MMU_FTR_1T_SEGMENT))
+ ssize = MMU_SEGSIZE_256M;
+
+ context = id - KERNEL_REGION_CONTEXT_OFFSET;
- /* Lower address should have been handled by asm code */
- if (ea < (1UL << MAX_EA_BITS_PER_CONTEXT))
- goto slb_bad_addr;
+ return slb_insert_entry(ea, context, flags, ssize, true);
+}
+
+static long slb_allocate_user(struct mm_struct *mm, unsigned long ea)
+{
+ unsigned long context;
+ unsigned long flags;
+ int bpsize;
+ int ssize;
/*
* consider this as bad access if we take a SLB miss
* on an address above addr limit.
*/
- if (ea >= current->mm->context.slb_addr_limit)
- goto slb_bad_addr;
+ if (ea >= mm->context.slb_addr_limit)
+ return -EFAULT;
- context = get_ea_context(&current->mm->context, ea);
+ context = get_ea_context(&mm->context, ea);
if (!context)
- goto slb_bad_addr;
+ return -EFAULT;
- handle_multi_context_slb_miss(context, ea);
- exception_exit(prev_state);
- return;
+ if (unlikely(ea >= H_PGTABLE_RANGE)) {
+ WARN_ON(1);
+ return -EFAULT;
+ }
-slb_bad_addr:
- if (user_mode(regs))
- _exception(SIGSEGV, regs, SEGV_BNDERR, ea);
- else
- bad_page_fault(regs, ea, SIGSEGV);
- exception_exit(prev_state);
+ ssize = user_segment_size(ea);
+
+ bpsize = get_slice_psize(mm, ea);
+ flags = SLB_VSID_USER | mmu_psize_defs[bpsize].sllp;
+
+ return slb_insert_entry(ea, context, flags, ssize, false);
+}
+
+long do_slb_fault(struct pt_regs *regs, unsigned long ea)
+{
+ unsigned long id = REGION_ID(ea);
+
+ /* IRQs are not reconciled here, so can't check irqs_disabled */
+ VM_WARN_ON(mfmsr() & MSR_EE);
+
+ if (unlikely(!(regs->msr & MSR_RI)))
+ return -EINVAL;
+
+ /*
+ * SLB kernel faults must be very careful not to touch anything
+ * that is not bolted. E.g., PACA and global variables are okay,
+ * mm->context stuff is not.
+ *
+ * SLB user faults can access all of kernel memory, but must be
+ * careful not to touch things like IRQ state because it is not
+ * "reconciled" here. The difficulty is that we must use
+ * fast_exception_return to return from kernel SLB faults without
+ * looking at possible non-bolted memory. We could test user vs
+ * kernel faults in the interrupt handler asm and do a full fault,
+ * reconcile, ret_from_except for user faults which would make them
+ * first class kernel code. But for performance it's probably nicer
+ * if they go via fast_exception_return too.
+ */
+ if (id >= KERNEL_REGION_ID) {
+ return slb_allocate_kernel(ea, id);
+ } else {
+ struct mm_struct *mm = current->mm;
+
+ if (unlikely(!mm))
+ return -EFAULT;
+
+ return slb_allocate_user(mm, ea);
+ }
+}
+
+void do_bad_slb_fault(struct pt_regs *regs, unsigned long ea, long err)
+{
+ if (err == -EFAULT) {
+ if (user_mode(regs))
+ _exception(SIGSEGV, regs, SEGV_BNDERR, ea);
+ else
+ bad_page_fault(regs, ea, SIGSEGV);
+ } else if (err == -EINVAL) {
+ unrecoverable_exception(regs);
+ } else {
+ BUG();
+ }
}
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
deleted file mode 100644
index 4ac5057ad439..000000000000
--- a/arch/powerpc/mm/slb_low.S
+++ /dev/null
@@ -1,335 +0,0 @@
-/*
- * Low-level SLB routines
- *
- * Copyright (C) 2004 David Gibson <dwg@au.ibm.com>, IBM
- *
- * Based on earlier C version:
- * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
- * Copyright (c) 2001 Dave Engebretsen
- * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
-#include <asm/processor.h>
-#include <asm/ppc_asm.h>
-#include <asm/asm-offsets.h>
-#include <asm/cputable.h>
-#include <asm/page.h>
-#include <asm/mmu.h>
-#include <asm/pgtable.h>
-#include <asm/firmware.h>
-#include <asm/feature-fixups.h>
-
-/*
- * This macro generates asm code to compute the VSID scramble
- * function. Used in slb_allocate() and do_stab_bolted. The function
- * computed is: (protovsid*VSID_MULTIPLIER) % VSID_MODULUS
- *
- * rt = register containing the proto-VSID and into which the
- * VSID will be stored
- * rx = scratch register (clobbered)
- * rf = flags
- *
- * - rt and rx must be different registers
- * - The answer will end up in the low VSID_BITS bits of rt. The higher
- * bits may contain other garbage, so you may need to mask the
- * result.
- */
-#define ASM_VSID_SCRAMBLE(rt, rx, rf, size) \
- lis rx,VSID_MULTIPLIER_##size@h; \
- ori rx,rx,VSID_MULTIPLIER_##size@l; \
- mulld rt,rt,rx; /* rt = rt * MULTIPLIER */ \
-/* \
- * powermac get slb fault before feature fixup, so make 65 bit part \
- * the default part of feature fixup \
- */ \
-BEGIN_MMU_FTR_SECTION \
- srdi rx,rt,VSID_BITS_65_##size; \
- clrldi rt,rt,(64-VSID_BITS_65_##size); \
- add rt,rt,rx; \
- addi rx,rt,1; \
- srdi rx,rx,VSID_BITS_65_##size; \
- add rt,rt,rx; \
- rldimi rf,rt,SLB_VSID_SHIFT_##size,(64 - (SLB_VSID_SHIFT_##size + VSID_BITS_65_##size)); \
-MMU_FTR_SECTION_ELSE \
- srdi rx,rt,VSID_BITS_##size; \
- clrldi rt,rt,(64-VSID_BITS_##size); \
- add rt,rt,rx; /* add high and low bits */ \
- addi rx,rt,1; \
- srdi rx,rx,VSID_BITS_##size; /* extract 2^VSID_BITS bit */ \
- add rt,rt,rx; \
- rldimi rf,rt,SLB_VSID_SHIFT_##size,(64 - (SLB_VSID_SHIFT_##size + VSID_BITS_##size)); \
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_68_BIT_VA)
-
-
-/* void slb_allocate(unsigned long ea);
- *
- * Create an SLB entry for the given EA (user or kernel).
- * r3 = faulting address, r13 = PACA
- * r9, r10, r11 are clobbered by this function
- * r3 is preserved.
- * No other registers are examined or changed.
- */
-_GLOBAL(slb_allocate)
- /*
- * Check if the address falls within the range of the first context, or
- * if we may need to handle multi context. For the first context we
- * allocate the slb entry via the fast path below. For large address we
- * branch out to C-code and see if additional contexts have been
- * allocated.
- * The test here is:
- * (ea & ~REGION_MASK) >= (1ull << MAX_EA_BITS_PER_CONTEXT)
- */
- rldicr. r9,r3,4,(63 - MAX_EA_BITS_PER_CONTEXT - 4)
- bne- 8f
-
- srdi r9,r3,60 /* get region */
- srdi r10,r3,SID_SHIFT /* get esid */
- cmpldi cr7,r9,0xc /* cmp PAGE_OFFSET for later use */
-
- /* r3 = address, r10 = esid, cr7 = <> PAGE_OFFSET */
- blt cr7,0f /* user or kernel? */
-
- /* Check if hitting the linear mapping or some other kernel space
- */
- bne cr7,1f
-
- /* Linear mapping encoding bits, the "li" instruction below will
- * be patched by the kernel at boot
- */
-.globl slb_miss_kernel_load_linear
-slb_miss_kernel_load_linear:
- li r11,0
- /*
- * context = (ea >> 60) - (0xc - 1)
- * r9 = region id.
- */
- subi r9,r9,KERNEL_REGION_CONTEXT_OFFSET
-
-BEGIN_FTR_SECTION
- b .Lslb_finish_load
-END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
- b .Lslb_finish_load_1T
-
-1:
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
- cmpldi cr0,r9,0xf
- bne 1f
-/* Check virtual memmap region. To be patched at kernel boot */
-.globl slb_miss_kernel_load_vmemmap
-slb_miss_kernel_load_vmemmap:
- li r11,0
- b 6f
-1:
-#endif /* CONFIG_SPARSEMEM_VMEMMAP */
-
- /*
- * r10 contains the ESID, which is the original faulting EA shifted
- * right by 28 bits. We need to compare that with (H_VMALLOC_END >> 28)
- * which is 0xd00038000. That can't be used as an immediate, even if we
- * ignored the 0xd, so we have to load it into a register, and we only
- * have one register free. So we must load all of (H_VMALLOC_END >> 28)
- * into a register and compare ESID against that.
- */
- lis r11,(H_VMALLOC_END >> 32)@h // r11 = 0xffffffffd0000000
- ori r11,r11,(H_VMALLOC_END >> 32)@l // r11 = 0xffffffffd0003800
- // Rotate left 4, then mask with 0xffffffff0
- rldic r11,r11,4,28 // r11 = 0xd00038000
- cmpld r10,r11 // if r10 >= r11
- bge 5f // goto io_mapping
-
- /*
- * vmalloc mapping gets the encoding from the PACA as the mapping
- * can be demoted from 64K -> 4K dynamically on some machines.
- */
- lhz r11,PACAVMALLOCSLLP(r13)
- b 6f
-5:
- /* IO mapping */
-.globl slb_miss_kernel_load_io
-slb_miss_kernel_load_io:
- li r11,0
-6:
- /*
- * context = (ea >> 60) - (0xc - 1)
- * r9 = region id.
- */
- subi r9,r9,KERNEL_REGION_CONTEXT_OFFSET
-
-BEGIN_FTR_SECTION
- b .Lslb_finish_load
-END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
- b .Lslb_finish_load_1T
-
-0: /*
- * For userspace addresses, make sure this is region 0.
- */
- cmpdi r9, 0
- bne- 8f
- /*
- * user space make sure we are within the allowed limit
- */
- ld r11,PACA_SLB_ADDR_LIMIT(r13)
- cmpld r3,r11
- bge- 8f
-
- /* when using slices, we extract the psize off the slice bitmaps
- * and then we need to get the sllp encoding off the mmu_psize_defs
- * array.
- *
- * XXX This is a bit inefficient especially for the normal case,
- * so we should try to implement a fast path for the standard page
- * size using the old sllp value so we avoid the array. We cannot
- * really do dynamic patching unfortunately as processes might flip
- * between 4k and 64k standard page size
- */
-#ifdef CONFIG_PPC_MM_SLICES
- /* r10 have esid */
- cmpldi r10,16
- /* below SLICE_LOW_TOP */
- blt 5f
- /*
- * Handle hpsizes,
- * r9 is get_paca()->context.high_slices_psize[index], r11 is mask_index
- */
- srdi r11,r10,(SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT + 1) /* index */
- addi r9,r11,PACAHIGHSLICEPSIZE
- lbzx r9,r13,r9 /* r9 is hpsizes[r11] */
- /* r11 = (r10 >> (SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT)) & 0x1 */
- rldicl r11,r10,(64 - (SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT)),63
- b 6f
-
-5:
- /*
- * Handle lpsizes
- * r9 is get_paca()->context.low_slices_psize[index], r11 is mask_index
- */
- srdi r11,r10,1 /* index */
- addi r9,r11,PACALOWSLICESPSIZE
- lbzx r9,r13,r9 /* r9 is lpsizes[r11] */
- rldicl r11,r10,0,63 /* r11 = r10 & 0x1 */
-6:
- sldi r11,r11,2 /* index * 4 */
- /* Extract the psize and multiply to get an array offset */
- srd r9,r9,r11
- andi. r9,r9,0xf
- mulli r9,r9,MMUPSIZEDEFSIZE
-
- /* Now get to the array and obtain the sllp
- */
- ld r11,PACATOC(r13)
- ld r11,mmu_psize_defs@got(r11)
- add r11,r11,r9
- ld r11,MMUPSIZESLLP(r11)
- ori r11,r11,SLB_VSID_USER
-#else
- /* paca context sllp already contains the SLB_VSID_USER bits */
- lhz r11,PACACONTEXTSLLP(r13)
-#endif /* CONFIG_PPC_MM_SLICES */
-
- ld r9,PACACONTEXTID(r13)
-BEGIN_FTR_SECTION
- cmpldi r10,0x1000
- bge .Lslb_finish_load_1T
-END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
- b .Lslb_finish_load
-
-8: /* invalid EA - return an error indication */
- crset 4*cr0+eq /* indicate failure */
- blr
-
-/*
- * Finish loading of an SLB entry and return
- *
- * r3 = EA, r9 = context, r10 = ESID, r11 = flags, clobbers r9, cr7 = <> PAGE_OFFSET
- */
-.Lslb_finish_load:
- rldimi r10,r9,ESID_BITS,0
- ASM_VSID_SCRAMBLE(r10,r9,r11,256M)
- /* r3 = EA, r11 = VSID data */
- /*
- * Find a slot, round robin. Previously we tried to find a
- * free slot first but that took too long. Unfortunately we
- * dont have any LRU information to help us choose a slot.
- */
-
- mr r9,r3
-
- /* slb_finish_load_1T continues here. r9=EA with non-ESID bits clear */
-7: ld r10,PACASTABRR(r13)
- addi r10,r10,1
- /* This gets soft patched on boot. */
-.globl slb_compare_rr_to_size
-slb_compare_rr_to_size:
- cmpldi r10,0
-
- blt+ 4f
- li r10,SLB_NUM_BOLTED
-
-4:
- std r10,PACASTABRR(r13)
-
-3:
- rldimi r9,r10,0,36 /* r9 = EA[0:35] | entry */
- oris r10,r9,SLB_ESID_V@h /* r10 = r9 | SLB_ESID_V */
-
- /* r9 = ESID data, r11 = VSID data */
-
- /*
- * No need for an isync before or after this slbmte. The exception
- * we enter with and the rfid we exit with are context synchronizing.
- */
- slbmte r11,r10
-
- /* we're done for kernel addresses */
- crclr 4*cr0+eq /* set result to "success" */
- bgelr cr7
-
- /* Update the slb cache */
- lhz r9,PACASLBCACHEPTR(r13) /* offset = paca->slb_cache_ptr */
- cmpldi r9,SLB_CACHE_ENTRIES
- bge 1f
-
- /* still room in the slb cache */
- sldi r11,r9,2 /* r11 = offset * sizeof(u32) */
- srdi r10,r10,28 /* get the 36 bits of the ESID */
- add r11,r11,r13 /* r11 = (u32 *)paca + offset */
- stw r10,PACASLBCACHE(r11) /* paca->slb_cache[offset] = esid */
- addi r9,r9,1 /* offset++ */
- b 2f
-1: /* offset >= SLB_CACHE_ENTRIES */
- li r9,SLB_CACHE_ENTRIES+1
-2:
- sth r9,PACASLBCACHEPTR(r13) /* paca->slb_cache_ptr = offset */
- crclr 4*cr0+eq /* set result to "success" */
- blr
-
-/*
- * Finish loading of a 1T SLB entry (for the kernel linear mapping) and return.
- *
- * r3 = EA, r9 = context, r10 = ESID(256MB), r11 = flags, clobbers r9
- */
-.Lslb_finish_load_1T:
- srdi r10,r10,(SID_SHIFT_1T - SID_SHIFT) /* get 1T ESID */
- rldimi r10,r9,ESID_BITS_1T,0
- ASM_VSID_SCRAMBLE(r10,r9,r11,1T)
-
- li r10,MMU_SEGSIZE_1T
- rldimi r11,r10,SLB_VSID_SSIZE_SHIFT,0 /* insert segment size */
-
- /* r3 = EA, r11 = VSID data */
- clrrdi r9,r3,SID_SHIFT_1T /* clear out non-ESID bits */
- b 7b
-
-
-_ASM_NOKPROBE_SYMBOL(slb_allocate)
-_ASM_NOKPROBE_SYMBOL(slb_miss_kernel_load_linear)
-_ASM_NOKPROBE_SYMBOL(slb_miss_kernel_load_io)
-_ASM_NOKPROBE_SYMBOL(slb_compare_rr_to_size)
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-_ASM_NOKPROBE_SYMBOL(slb_miss_kernel_load_vmemmap)
-#endif
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 08/12] powerpc/64s/hash: remove user SLB data from the paca
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (6 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 07/12] powerpc/64s/hash: convert SLB miss handlers to C Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 09/12] powerpc/64s/hash: SLB allocation status bitmaps Nicholas Piggin
` (3 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
User SLB mapping data is copied into the PACA from the mm->context so
it can be accessed by the SLB miss handlers.
After the C conversion, SLB miss handlers now run with relocation on,
and user SLB misses are able to take recursive kernel SLB misses, so
the user SLB mapping data can be removed from the paca and accessed
directly.
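As a standalone illustration of the data-flow change (a sketch only, not
kernel code; the struct and function names below are made up, loosely
following the removed paca fields):

struct mm_model   { unsigned long slb_addr_limit; };
struct paca_model { unsigned long mm_ctx_slb_addr_limit; };

static struct paca_model paca;

/* Old scheme: the context switch snapshotted the limit into the paca... */
static void old_switch(struct mm_model *mm)
{
	paca.mm_ctx_slb_addr_limit = mm->slb_addr_limit;
}

/* ...so the real-mode asm miss handler only ever touched the paca. */
static int old_miss_in_range(unsigned long ea)
{
	return ea < paca.mm_ctx_slb_addr_limit;
}

/* New scheme: the C handler can tolerate recursive kernel SLB misses,
 * so it just dereferences the mm it is handed. */
static int new_miss_in_range(struct mm_model *mm, unsigned long ea)
{
	return ea < mm->slb_addr_limit;
}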
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/include/asm/paca.h | 13 ------
arch/powerpc/kernel/asm-offsets.c | 9 ----
arch/powerpc/kernel/paca.c | 21 ---------
arch/powerpc/mm/hash_utils_64.c | 46 +++++--------------
arch/powerpc/mm/mmu_context.c | 3 +-
arch/powerpc/mm/slb.c | 20 +++++++-
arch/powerpc/mm/slice.c | 29 ++++--------
8 files changed, 40 insertions(+), 102 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 20d9ca736bbd..4c8d413ce99a 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -496,6 +496,7 @@ static inline void hpte_init_pseries(void) { }
extern void hpte_init_native(void);
extern void slb_initialize(void);
+extern void core_flush_all_slbs(struct mm_struct *mm);
extern void slb_flush_and_rebolt(void);
void slb_flush_all_realmode(void);
void __slb_restore_bolted_realmode(void);
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 4331295db0f7..8c258a057207 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -143,18 +143,6 @@ struct paca_struct {
struct tlb_core_data tcd;
#endif /* CONFIG_PPC_BOOK3E */
-#ifdef CONFIG_PPC_BOOK3S
- mm_context_id_t mm_ctx_id;
-#ifdef CONFIG_PPC_MM_SLICES
- unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
- unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
- unsigned long mm_ctx_slb_addr_limit;
-#else
- u16 mm_ctx_user_psize;
- u16 mm_ctx_sllp;
-#endif
-#endif
-
/*
* then miscellaneous read-write fields
*/
@@ -256,7 +244,6 @@ struct paca_struct {
#endif /* CONFIG_PPC_PSERIES */
} ____cacheline_aligned;
-extern void copy_mm_to_paca(struct mm_struct *mm);
extern struct paca_struct **paca_ptrs;
extern void initialise_paca(struct paca_struct *new_paca, int cpu);
extern void setup_paca(struct paca_struct *new_paca);
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 7834256585f1..43b67ead5b97 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -181,15 +181,6 @@ int main(void)
OFFSET(PACAIRQSOFTMASK, paca_struct, irq_soft_mask);
OFFSET(PACAIRQHAPPENED, paca_struct, irq_happened);
OFFSET(PACA_FTRACE_ENABLED, paca_struct, ftrace_enabled);
-#ifdef CONFIG_PPC_BOOK3S
- OFFSET(PACACONTEXTID, paca_struct, mm_ctx_id);
-#ifdef CONFIG_PPC_MM_SLICES
- OFFSET(PACALOWSLICESPSIZE, paca_struct, mm_ctx_low_slices_psize);
- OFFSET(PACAHIGHSLICEPSIZE, paca_struct, mm_ctx_high_slices_psize);
- OFFSET(PACA_SLB_ADDR_LIMIT, paca_struct, mm_ctx_slb_addr_limit);
- DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
-#endif /* CONFIG_PPC_MM_SLICES */
-#endif
#ifdef CONFIG_PPC_BOOK3E
OFFSET(PACAPGD, paca_struct, pgd);
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 0ee3e6d50f28..6752e17f0281 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -259,24 +259,3 @@ void __init free_unused_pacas(void)
paca_ptrs_size + paca_struct_size, nr_cpu_ids);
}
-void copy_mm_to_paca(struct mm_struct *mm)
-{
-#ifdef CONFIG_PPC_BOOK3S
- mm_context_t *context = &mm->context;
-
- get_paca()->mm_ctx_id = context->id;
-#ifdef CONFIG_PPC_MM_SLICES
- VM_BUG_ON(!mm->context.slb_addr_limit);
- get_paca()->mm_ctx_slb_addr_limit = mm->context.slb_addr_limit;
- memcpy(&get_paca()->mm_ctx_low_slices_psize,
- &context->low_slices_psize, sizeof(context->low_slices_psize));
- memcpy(&get_paca()->mm_ctx_high_slices_psize,
- &context->high_slices_psize, TASK_SLICE_ARRAY_SZ(mm));
-#else /* CONFIG_PPC_MM_SLICES */
- get_paca()->mm_ctx_user_psize = context->user_psize;
- get_paca()->mm_ctx_sllp = context->sllp;
-#endif
-#else /* !CONFIG_PPC_BOOK3S */
- return;
-#endif
-}
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f23a89d8e4ce..88c95dc8b141 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1088,16 +1088,16 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
}
#ifdef CONFIG_PPC_MM_SLICES
-static unsigned int get_paca_psize(unsigned long addr)
+static unsigned int get_psize(struct mm_struct *mm, unsigned long addr)
{
unsigned char *psizes;
unsigned long index, mask_index;
if (addr < SLICE_LOW_TOP) {
- psizes = get_paca()->mm_ctx_low_slices_psize;
+ psizes = mm->context.low_slices_psize;
index = GET_LOW_SLICE_INDEX(addr);
} else {
- psizes = get_paca()->mm_ctx_high_slices_psize;
+ psizes = mm->context.high_slices_psize;
index = GET_HIGH_SLICE_INDEX(addr);
}
mask_index = index & 0x1;
@@ -1105,9 +1105,9 @@ static unsigned int get_paca_psize(unsigned long addr)
}
#else
-unsigned int get_paca_psize(unsigned long addr)
+unsigned int get_psize(struct mm_struct *mm, unsigned long addr)
{
- return get_paca()->mm_ctx_user_psize;
+ return mm->context.user_psize;
}
#endif
@@ -1118,15 +1118,11 @@ unsigned int get_paca_psize(unsigned long addr)
#ifdef CONFIG_PPC_64K_PAGES
void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
{
- if (get_slice_psize(mm, addr) == MMU_PAGE_4K)
+ if (get_psize(mm, addr) == MMU_PAGE_4K)
return;
slice_set_range_psize(mm, addr, 1, MMU_PAGE_4K);
copro_flush_all_slbs(mm);
- if ((get_paca_psize(addr) != MMU_PAGE_4K) && (current->mm == mm)) {
-
- copy_mm_to_paca(mm);
- slb_flush_and_rebolt();
- }
+ core_flush_all_slbs(mm);
}
#endif /* CONFIG_PPC_64K_PAGES */
@@ -1191,22 +1187,6 @@ void hash_failure_debug(unsigned long ea, unsigned long access,
trap, vsid, ssize, psize, lpsize, pte);
}
-static void check_paca_psize(unsigned long ea, struct mm_struct *mm,
- int psize, bool user_region)
-{
- if (user_region) {
- if (psize != get_paca_psize(ea)) {
- copy_mm_to_paca(mm);
- slb_flush_and_rebolt();
- }
- } else if (get_paca()->vmalloc_sllp !=
- mmu_psize_defs[mmu_vmalloc_psize].sllp) {
- get_paca()->vmalloc_sllp =
- mmu_psize_defs[mmu_vmalloc_psize].sllp;
- slb_vmalloc_update();
- }
-}
-
/* Result code is:
* 0 - handled
* 1 - normal page fault
@@ -1239,7 +1219,7 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
rc = 1;
goto bail;
}
- psize = get_slice_psize(mm, ea);
+ psize = get_psize(mm, ea);
ssize = user_segment_size(ea);
vsid = get_user_vsid(&mm->context, ea, ssize);
break;
@@ -1327,9 +1307,6 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
WARN_ON(1);
}
#endif
- if (current->mm == mm)
- check_paca_psize(ea, mm, psize, user_region);
-
goto bail;
}
@@ -1364,15 +1341,14 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
"to 4kB pages because of "
"non-cacheable mapping\n");
psize = mmu_vmalloc_psize = MMU_PAGE_4K;
+ slb_vmalloc_update();
copro_flush_all_slbs(mm);
+ core_flush_all_slbs(mm);
}
}
#endif /* CONFIG_PPC_64K_PAGES */
- if (current->mm == mm)
- check_paca_psize(ea, mm, psize, user_region);
-
#ifdef CONFIG_PPC_64K_PAGES
if (psize == MMU_PAGE_64K)
rc = __hash_page_64K(ea, access, vsid, ptep, trap,
@@ -1460,7 +1436,7 @@ int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
#ifdef CONFIG_PPC_MM_SLICES
static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
{
- int psize = get_slice_psize(mm, ea);
+ int psize = get_psize(mm, ea);
/* We only prefault standard pages for now */
if (unlikely(psize != mm->context.user_psize))
diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
index f84e14f23e50..28ae2835db3d 100644
--- a/arch/powerpc/mm/mmu_context.c
+++ b/arch/powerpc/mm/mmu_context.c
@@ -54,8 +54,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
* MMU context id, which is then moved to SPRN_PID.
*
* For the hash MMU it is either the first load from slb_cache
- * in switch_slb(), and/or the store of paca->mm_ctx_id in
- * copy_mm_to_paca().
+ * in switch_slb(), and/or load of MMU context id.
*
* On the other side, the barrier is in mm/tlb-radix.c for
* radix which orders earlier stores to clear the PTEs vs
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index f6a5aedaae38..d782a70d4a5d 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -278,8 +278,6 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
get_paca()->slb_cache_ptr = 0;
}
- copy_mm_to_paca(mm);
-
/*
* preload some userspace segments into the SLB.
* Almost all 32 and 64bit PowerPC executables are linked at
@@ -306,6 +304,24 @@ void slb_set_size(u16 size)
mmu_slb_size = size;
}
+static void cpu_flush_slb(void *parm)
+{
+ struct mm_struct *mm = parm;
+ unsigned long flags;
+
+ if (mm != current->active_mm)
+ return;
+
+ local_irq_save(flags);
+ slb_flush_and_rebolt();
+ local_irq_restore(flags);
+}
+
+void core_flush_all_slbs(struct mm_struct *mm)
+{
+ on_each_cpu(cpu_flush_slb, mm, 1);
+}
+
void slb_initialize(void)
{
unsigned long linear_llp, vmalloc_llp, io_llp;
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 205fe557ca10..606f424aac47 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -207,23 +207,6 @@ static bool slice_check_range_fits(struct mm_struct *mm,
return true;
}
-static void slice_flush_segments(void *parm)
-{
-#ifdef CONFIG_PPC64
- struct mm_struct *mm = parm;
- unsigned long flags;
-
- if (mm != current->active_mm)
- return;
-
- copy_mm_to_paca(current->active_mm);
-
- local_irq_save(flags);
- slb_flush_and_rebolt();
- local_irq_restore(flags);
-#endif
-}
-
static void slice_convert(struct mm_struct *mm,
const struct slice_mask *mask, int psize)
{
@@ -289,6 +272,9 @@ static void slice_convert(struct mm_struct *mm,
spin_unlock_irqrestore(&slice_convert_lock, flags);
copro_flush_all_slbs(mm);
+#ifdef CONFIG_PPC64
+ core_flush_all_slbs(mm);
+#endif
}
/*
@@ -502,8 +488,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
* be already initialised beyond the old address limit.
*/
mm->context.slb_addr_limit = high_limit;
-
- on_each_cpu(slice_flush_segments, mm, 1);
+#ifdef CONFIG_PPC64
+ core_flush_all_slbs(mm);
+#endif
}
/* Sanity checks */
@@ -665,8 +652,10 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
(SLICE_NUM_HIGH &&
!bitmap_empty(potential_mask.high_slices, SLICE_NUM_HIGH))) {
slice_convert(mm, &potential_mask, psize);
+#ifdef CONFIG_PPC64
if (psize > MMU_PAGE_BASE)
- on_each_cpu(slice_flush_segments, mm, 1);
+ core_flush_all_slbs(mm);
+#endif
}
return newaddr;
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 09/12] powerpc/64s/hash: SLB allocation status bitmaps
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (7 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 08/12] powerpc/64s/hash: remove user SLB data from the paca Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 10/12] powerpc/64s: xmon do not dump hash fields when using radix mode Nicholas Piggin
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
Add 32-entry bitmaps to track the allocation status of the first 32
SLB entries, and whether they are user or kernel entries. These are
used to allocate free SLB entries first, before resorting to the round
robin allocator.
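Roughly, the allocator tries the bitmap first and only falls back to the
round-robin counter once the first 32 entries are all in use. A standalone
model of that policy (constants assumed, kernel/user accounting via
slb_kern_bitmap omitted; this is not the patch's code):

#include <stdint.h>

#define SLB_NUM_BOLTED	3	/* value assumed for illustration */

static unsigned int mmu_slb_size = 32;
static unsigned int stab_rr = SLB_NUM_BOLTED - 1;
static uint32_t slb_used_bitmap = (1u << SLB_NUM_BOLTED) - 1;

static unsigned int pick_slb_index(void)
{
	unsigned int index;

	if (slb_used_bitmap != UINT32_MAX) {
		/* a zero bit exists: take the lowest free entry */
		index = __builtin_ctz(~slb_used_bitmap);
		slb_used_bitmap |= 1u << index;
	} else {
		/* all 32 tracked entries used: old round-robin behaviour */
		index = stab_rr;
		if (index < mmu_slb_size - 1)
			index++;
		else
			index = SLB_NUM_BOLTED;
		stab_rr = index;
	}
	return index;
}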
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/paca.h | 6 ++-
arch/powerpc/kernel/asm-offsets.c | 2 +-
arch/powerpc/mm/slb.c | 62 +++++++++++++++++++++++++------
arch/powerpc/xmon/xmon.c | 4 +-
4 files changed, 58 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 8c258a057207..bf7ab59be3b8 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -113,7 +113,10 @@ struct paca_struct {
* on the linear mapping */
/* SLB related definitions */
u16 vmalloc_sllp;
- u16 slb_cache_ptr;
+ u8 slb_cache_ptr;
+ u8 stab_rr; /* stab/slb round-robin counter */
+ u32 slb_used_bitmap; /* Bitmaps for first 32 SLB entries. */
+ u32 slb_kern_bitmap;
u32 slb_cache[SLB_CACHE_ENTRIES];
#endif /* CONFIG_PPC_BOOK3S_64 */
@@ -148,7 +151,6 @@ struct paca_struct {
*/
struct task_struct *__current; /* Pointer to current */
u64 kstack; /* Saved Kernel stack addr */
- u64 stab_rr; /* stab/slb round-robin counter */
u64 saved_r1; /* r1 save for RTAS calls or PM or EE=0 */
u64 saved_msr; /* MSR saved here by enter_rtas */
u16 trap_save; /* Used when bad stack is encountered */
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 43b67ead5b97..1f79cbf3da62 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -173,7 +173,6 @@ int main(void)
OFFSET(PACAKSAVE, paca_struct, kstack);
OFFSET(PACACURRENT, paca_struct, __current);
OFFSET(PACASAVEDMSR, paca_struct, saved_msr);
- OFFSET(PACASTABRR, paca_struct, stab_rr);
OFFSET(PACAR1, paca_struct, saved_r1);
OFFSET(PACATOC, paca_struct, kernel_toc);
OFFSET(PACAKBASE, paca_struct, kernelbase);
@@ -203,6 +202,7 @@ int main(void)
#ifdef CONFIG_PPC_BOOK3S_64
OFFSET(PACASLBCACHE, paca_struct, slb_cache);
OFFSET(PACASLBCACHEPTR, paca_struct, slb_cache_ptr);
+ OFFSET(PACASTABRR, paca_struct, stab_rr);
OFFSET(PACAVMALLOCSLLP, paca_struct, vmalloc_sllp);
#ifdef CONFIG_PPC_MM_SLICES
OFFSET(MMUPSIZESLLP, mmu_psize_def, sllp);
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index d782a70d4a5d..98521fec3536 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -122,6 +122,9 @@ void slb_restore_bolted_realmode(void)
{
__slb_restore_bolted_realmode();
get_paca()->slb_cache_ptr = 0;
+
+ get_paca()->slb_kern_bitmap = (1U << SLB_NUM_BOLTED) - 1;
+ get_paca()->slb_used_bitmap = get_paca()->slb_kern_bitmap;
}
/*
@@ -129,9 +132,6 @@ void slb_restore_bolted_realmode(void)
*/
void slb_flush_all_realmode(void)
{
- /*
- * This flushes all SLB entries including 0, so it must be realmode.
- */
asm volatile("slbmte %0,%0; slbia" : : "r" (0));
}
@@ -177,6 +177,9 @@ void slb_flush_and_rebolt(void)
: "memory");
get_paca()->slb_cache_ptr = 0;
+
+ get_paca()->slb_kern_bitmap = (1U << SLB_NUM_BOLTED) - 1;
+ get_paca()->slb_used_bitmap = get_paca()->slb_kern_bitmap;
}
void slb_vmalloc_update(void)
@@ -273,10 +276,13 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
"isync"
:: "r"(ksp_vsid_data),
"r"(ksp_esid_data));
+
+ get_paca()->slb_kern_bitmap = (1U << SLB_NUM_BOLTED) - 1;
}
get_paca()->slb_cache_ptr = 0;
}
+ get_paca()->slb_used_bitmap = get_paca()->slb_kern_bitmap;
/*
* preload some userspace segments into the SLB.
@@ -349,6 +355,8 @@ void slb_initialize(void)
}
get_paca()->stab_rr = SLB_NUM_BOLTED - 1;
+ get_paca()->slb_kern_bitmap = (1U << SLB_NUM_BOLTED) - 1;
+ get_paca()->slb_used_bitmap = get_paca()->slb_kern_bitmap;
lflags = SLB_VSID_KERNEL | linear_llp;
@@ -400,17 +408,47 @@ static void slb_cache_update(unsigned long esid_data)
}
}
-static enum slb_index alloc_slb_index(void)
+static enum slb_index alloc_slb_index(bool kernel)
{
enum slb_index index;
- /* round-robin replacement of slb starting at SLB_NUM_BOLTED. */
- index = get_paca()->stab_rr;
- if (index < (mmu_slb_size - 1))
- index++;
- else
- index = SLB_NUM_BOLTED;
- get_paca()->stab_rr = index;
+ /*
+ * The allocation bitmaps can become out of synch with the SLB
+ * when the _switch code does slbie when bolting a new stack
+ * segment and it must not be anywhere else in the SLB. This leaves
+ * a kernel allocated entry that is unused in the SLB. With very
+ * large systems or small segment sizes, the bitmaps could slowly
+ * fill with these entries. They will eventually be cleared out
+ * by the round robin allocator in that case, so it's probably not
+ * worth accounting for.
+ */
+
+ /*
+ * SLBs beyond 32 entries are allocated with stab_rr only
+ * POWER7/8/9 have 32 SLB entries, this could be expanded if a
+ * future CPU has more.
+ */
+ if (get_paca()->slb_used_bitmap != U32_MAX) {
+ index = ffz(get_paca()->slb_used_bitmap);
+ get_paca()->slb_used_bitmap |= 1U << index;
+ if (kernel)
+ get_paca()->slb_kern_bitmap |= 1U << index;
+ } else {
+ /* round-robin replacement of slb starting at SLB_NUM_BOLTED. */
+ index = get_paca()->stab_rr;
+ if (index < (mmu_slb_size - 1))
+ index++;
+ else
+ index = SLB_NUM_BOLTED;
+ get_paca()->stab_rr = index;
+ if (index < 32) {
+ if (kernel)
+ get_paca()->slb_kern_bitmap |= 1U << index;
+ else
+ get_paca()->slb_kern_bitmap &= ~(1U << index);
+ }
+ }
+ BUG_ON(index < SLB_NUM_BOLTED);
return index;
}
@@ -426,7 +464,7 @@ static long slb_insert_entry(unsigned long ea, unsigned long context,
if (!vsid)
return -EFAULT;
- index = alloc_slb_index();
+ index = alloc_slb_index(kernel);
vsid_data = __mk_vsid_data(vsid, ssize, flags);
esid_data = mk_esid_data(ea, ssize, index);
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index eb2e0d472eb1..323aac8321fa 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2393,6 +2393,9 @@ static void dump_one_paca(int cpu)
}
}
DUMP(p, vmalloc_sllp, "%#-*x");
+ DUMP(p, stab_rr, "%#-*x");
+ DUMP(p, slb_used_bitmap, "%#-*x");
+ DUMP(p, slb_kern_bitmap, "%#-*x");
if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
DUMP(p, slb_cache_ptr, "%#-*x");
@@ -2415,7 +2418,6 @@ static void dump_one_paca(int cpu)
DUMP(p, __current, "%-*px");
DUMP(p, kstack, "%#-*llx");
printf(" %-*s = 0x%016llx\n", 25, "kstack_base", p->kstack & ~(THREAD_SIZE - 1));
- DUMP(p, stab_rr, "%#-*llx");
DUMP(p, saved_r1, "%#-*llx");
DUMP(p, trap_save, "%#-*x");
DUMP(p, irq_soft_mask, "%#-*x");
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 10/12] powerpc/64s: xmon do not dump hash fields when using radix mode
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (8 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 09/12] powerpc/64s/hash: SLB allocation status bitmaps Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 11/12] powerpc/64s/hash: provide arch_setup_exec hooks for hash slice setup Nicholas Piggin
2018-09-14 15:30 ` [PATCH 12/12] powerpc/64s/hash: Add a SLB preload cache Nicholas Piggin
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/xmon/xmon.c | 40 +++++++++++++++++++++-------------------
1 file changed, 21 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 323aac8321fa..5dec84aba59e 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -2378,30 +2378,32 @@ static void dump_one_paca(int cpu)
DUMP(p, cpu_start, "%#-*x");
DUMP(p, kexec_state, "%#-*x");
#ifdef CONFIG_PPC_BOOK3S_64
- for (i = 0; i < SLB_NUM_BOLTED; i++) {
- u64 esid, vsid;
+ if (!early_radix_enabled()) {
+ for (i = 0; i < SLB_NUM_BOLTED; i++) {
+ u64 esid, vsid;
- if (!p->slb_shadow_ptr)
- continue;
+ if (!p->slb_shadow_ptr)
+ continue;
- esid = be64_to_cpu(p->slb_shadow_ptr->save_area[i].esid);
- vsid = be64_to_cpu(p->slb_shadow_ptr->save_area[i].vsid);
+ esid = be64_to_cpu(p->slb_shadow_ptr->save_area[i].esid);
+ vsid = be64_to_cpu(p->slb_shadow_ptr->save_area[i].vsid);
- if (esid || vsid) {
- printf(" %-*s[%d] = 0x%016llx 0x%016llx\n",
- 22, "slb_shadow", i, esid, vsid);
+ if (esid || vsid) {
+ printf(" %-*s[%d] = 0x%016llx 0x%016llx\n",
+ 22, "slb_shadow", i, esid, vsid);
+ }
}
- }
- DUMP(p, vmalloc_sllp, "%#-*x");
- DUMP(p, stab_rr, "%#-*x");
- DUMP(p, slb_used_bitmap, "%#-*x");
- DUMP(p, slb_kern_bitmap, "%#-*x");
+ DUMP(p, vmalloc_sllp, "%#-*x");
+ DUMP(p, stab_rr, "%#-*x");
+ DUMP(p, slb_used_bitmap, "%#-*x");
+ DUMP(p, slb_kern_bitmap, "%#-*x");
- if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
- DUMP(p, slb_cache_ptr, "%#-*x");
- for (i = 0; i < SLB_CACHE_ENTRIES; i++)
- printf(" %-*s[%d] = 0x%016x\n",
- 22, "slb_cache", i, p->slb_cache[i]);
+ if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
+ DUMP(p, slb_cache_ptr, "%#-*x");
+ for (i = 0; i < SLB_CACHE_ENTRIES; i++)
+ printf(" %-*s[%d] = 0x%016x\n",
+ 22, "slb_cache", i, p->slb_cache[i]);
+ }
}
DUMP(p, rfi_flush_fallback_area, "%-*px");
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 11/12] powerpc/64s/hash: provide arch_setup_exec hooks for hash slice setup
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (9 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 10/12] powerpc/64s: xmon do not dump hash fields when using radix mode Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
2018-09-14 15:30 ` [PATCH 12/12] powerpc/64s/hash: Add a SLB preload cache Nicholas Piggin
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
This will be used by the SLB code in the next patch, but for now this
sets the slb_addr_limit to the correct size for 32-bit tasks.
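Collapsed into one standalone sketch, the dispatch added here looks like
the following (the real functions are spread over process.c,
mmu_context_book3s64.c and slice.c; the globals stand in for
radix_enabled(), is_32bit_task() and DEFAULT_MAP_WINDOW):

#include <stdbool.h>

static bool radix_enabled;			/* stand-in predicate */
static bool is_32bit_task = true;		/* assume a 32-bit exec */
static unsigned long default_map_window;	/* placeholder value */
static unsigned long slb_addr_limit;

static void slice_setup_new_exec(void)
{
	if (!is_32bit_task)
		return;
	slb_addr_limit = default_map_window;
}

static void hash__setup_new_exec(void)
{
	slice_setup_new_exec();
}

void arch_setup_new_exec(void)
{
	if (radix_enabled)
		return;
	hash__setup_new_exec();
}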
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 2 ++
arch/powerpc/include/asm/slice.h | 1 +
arch/powerpc/include/asm/thread_info.h | 6 ++++++
arch/powerpc/kernel/process.c | 9 +++++++++
arch/powerpc/mm/mmu_context_book3s64.c | 5 +++++
arch/powerpc/mm/slice.c | 14 ++++++++++++++
6 files changed, 37 insertions(+)
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 4c8d413ce99a..fc68058554fa 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -487,6 +487,8 @@ int htab_remove_mapping(unsigned long vstart, unsigned long vend,
extern void pseries_add_gpage(u64 addr, u64 page_size, unsigned long number_of_pages);
extern void demote_segment_4k(struct mm_struct *mm, unsigned long addr);
+extern void hash__setup_new_exec(void);
+
#ifdef CONFIG_PPC_PSERIES
void hpte_init_pseries(void);
#else
diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
index e40406cf5628..a595461c9cb0 100644
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -32,6 +32,7 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
void slice_init_new_context_exec(struct mm_struct *mm);
+void slice_setup_new_exec(void);
#endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 3c0002044bc9..f9a442bb5a72 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -72,6 +72,12 @@ static inline struct thread_info *current_thread_info(void)
}
extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
+
+#ifdef CONFIG_PPC_BOOK3S_64
+void arch_setup_new_exec(void);
+#define arch_setup_new_exec arch_setup_new_exec
+#endif
+
#endif /* __ASSEMBLY__ */
/*
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 913c5725cdb2..e4feb45ae4c6 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1482,6 +1482,15 @@ void flush_thread(void)
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
}
+#ifdef CONFIG_PPC_BOOK3S_64
+void arch_setup_new_exec(void)
+{
+ if (radix_enabled())
+ return;
+ hash__setup_new_exec();
+}
+#endif
+
int set_thread_uses_vas(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index dbd8f762140b..f7352c66b6b8 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -84,6 +84,11 @@ static int hash__init_new_context(struct mm_struct *mm)
return index;
}
+void hash__setup_new_exec(void)
+{
+ slice_setup_new_exec();
+}
+
static int radix__init_new_context(struct mm_struct *mm)
{
unsigned long rts_field;
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 606f424aac47..fc5b3a1ec666 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -746,6 +746,20 @@ void slice_init_new_context_exec(struct mm_struct *mm)
bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
}
+#ifdef CONFIG_PPC_BOOK3S_64
+void slice_setup_new_exec(void)
+{
+ struct mm_struct *mm = current->mm;
+
+ slice_dbg("slice_setup_new_exec(mm=%p)\n", mm);
+
+ if (!is_32bit_task())
+ return;
+
+ mm->context.slb_addr_limit = DEFAULT_MAP_WINDOW;
+}
+#endif
+
void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize)
{
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 12/12] powerpc/64s/hash: Add a SLB preload cache
2018-09-14 15:30 [PATCH 00/12] SLB miss conversion to C, and SLB optimisations Nicholas Piggin
` (10 preceding siblings ...)
2018-09-14 15:30 ` [PATCH 11/12] powerpc/64s/hash: provide arch_setup_exec hooks for hash slice setup Nicholas Piggin
@ 2018-09-14 15:30 ` Nicholas Piggin
11 siblings, 0 replies; 21+ messages in thread
From: Nicholas Piggin @ 2018-09-14 15:30 UTC (permalink / raw)
To: linuxppc-dev; +Cc: Nicholas Piggin, Aneesh Kumar K . V
When switching processes, currently all user SLBEs are cleared, and a
few (exec_base, pc, and stack) are preloaded. In trivial testing with
small apps, this tends to miss the heap and low 256MB segments, and it
will also miss commonly accessed segments on large memory workloads.
Add a simple round-robin preload cache that just inserts the last SLB
miss into the head of the cache and preloads those at context switch
time. Every 256 context switches, the oldest entry is removed from the
cache, so the cache gradually shrinks and entries that are no longer
used stop costing slbmte instructions at context switch.
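A standalone model of that ring buffer (field names follow thread_info in
the patch, but this omits the ESID encoding and the slbmte preloads done
in switch_slb(); the aging trigger is the u8 switch counter wrapping):

#define SLB_PRELOAD_NR 16u

struct preload_cache {
	unsigned char nr;			/* entries currently cached */
	unsigned char tail;			/* index of the oldest entry */
	unsigned int esid[SLB_PRELOAD_NR];
};

static _Bool preload_hit(struct preload_cache *c, unsigned int esid)
{
	unsigned char i;

	for (i = 0; i < c->nr; i++)
		if (c->esid[(c->tail + i) % SLB_PRELOAD_NR] == esid)
			return 1;
	return 0;
}

/* Insert the ESID of the latest miss at the head, evicting the oldest
 * entry once the cache is full. */
static void preload_add(struct preload_cache *c, unsigned int esid)
{
	if (preload_hit(c, esid))
		return;
	c->esid[(c->tail + c->nr) % SLB_PRELOAD_NR] = esid;
	if (c->nr == SLB_PRELOAD_NR)
		c->tail = (c->tail + 1) % SLB_PRELOAD_NR;
	else
		c->nr++;
}

/* Called when the per-thread switch counter (a u8) wraps, i.e. every
 * 256 context switches: drop the oldest entry. */
static void preload_age(struct preload_cache *c)
{
	if (!c->nr)
		return;
	c->nr--;
	c->tail = (c->tail + 1) % SLB_PRELOAD_NR;
}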
Much more could go into this, including into the SLB entry reclaim
side to track some LRU information etc, which would require a study of
large memory workloads. But this is a simple thing we can do now that
is an obvious win for common workloads.
With the full series, process switching speed on the context_switch
benchmark on POWER9/hash (with kernel speculation security measures
disabled) increases from 140K/s to 178K/s (27%).
POWER8 does not change much (within 1%); it's unclear why it does not
see a big gain like POWER9.
Booting to busybox init with 256MB segments has SLB misses go down
from 945 to 69, and with 1T segments 900 to 21. These could almost all
be eliminated by preloading a bit more carefully with ELF binary
loading.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/processor.h | 1 +
arch/powerpc/include/asm/thread_info.h | 5 +
arch/powerpc/kernel/process.c | 7 ++
arch/powerpc/mm/mmu_context_book3s64.c | 4 +
arch/powerpc/mm/slb.c | 166 +++++++++++++++++++------
5 files changed, 143 insertions(+), 40 deletions(-)
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 12b76ecdc57d..936795acba48 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -273,6 +273,7 @@ struct thread_struct {
#endif /* CONFIG_HAVE_HW_BREAKPOINT */
struct arch_hw_breakpoint hw_brk; /* info on the hardware breakpoint */
unsigned long trap_nr; /* last trap # on this thread */
+ u8 load_slb; /* Ages out SLB preload cache entries */
u8 load_fp;
#ifdef CONFIG_ALTIVEC
u8 load_vec;
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index f9a442bb5a72..9e78b7d26b64 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -29,6 +29,7 @@
#include <asm/page.h>
#include <asm/accounting.h>
+#define SLB_PRELOAD_NR 16U
/*
* low level task data.
*/
@@ -44,6 +45,10 @@ struct thread_info {
#if defined(CONFIG_VIRT_CPU_ACCOUNTING_NATIVE) && defined(CONFIG_PPC32)
struct cpu_accounting_data accounting;
#endif
+ unsigned char slb_preload_nr;
+ unsigned char slb_preload_tail;
+ u32 slb_preload_esid[SLB_PRELOAD_NR];
+
/* low level flags - has atomic operations done on it */
unsigned long flags ____cacheline_aligned_in_smp;
};
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index e4feb45ae4c6..03c2e1f134bc 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1719,6 +1719,8 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
return 0;
}
+void preload_new_slb_context(unsigned long start, unsigned long sp);
+
/*
* Set up a thread for executing a new program
*/
@@ -1726,6 +1728,10 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
{
#ifdef CONFIG_PPC64
unsigned long load_addr = regs->gpr[2]; /* saved by ELF_PLAT_INIT */
+
+#ifdef CONFIG_PPC_BOOK3S_64
+ preload_new_slb_context(start, sp);
+#endif
#endif
/*
@@ -1816,6 +1822,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
#ifdef CONFIG_VSX
current->thread.used_vsr = 0;
#endif
+ current->thread.load_slb = 0;
current->thread.load_fp = 0;
memset(&current->thread.fp_state, 0, sizeof(current->thread.fp_state));
current->thread.fp_save_area = NULL;
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index f7352c66b6b8..510f103d7813 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -53,6 +53,8 @@ int hash__alloc_context_id(void)
}
EXPORT_SYMBOL_GPL(hash__alloc_context_id);
+void slb_setup_new_exec(void);
+
static int hash__init_new_context(struct mm_struct *mm)
{
int index;
@@ -87,6 +89,8 @@ static int hash__init_new_context(struct mm_struct *mm)
void hash__setup_new_exec(void)
{
slice_setup_new_exec();
+
+ slb_setup_new_exec();
}
static int radix__init_new_context(struct mm_struct *mm)
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 98521fec3536..d200728fe41b 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -187,41 +187,119 @@ void slb_vmalloc_update(void)
slb_flush_and_rebolt();
}
-/* Helper function to compare esids. There are four cases to handle.
- * 1. The system is not 1T segment size capable. Use the GET_ESID compare.
- * 2. The system is 1T capable, both addresses are < 1T, use the GET_ESID compare.
- * 3. The system is 1T capable, only one of the two addresses is > 1T. This is not a match.
- * 4. The system is 1T capable, both addresses are > 1T, use the GET_ESID_1T macro to compare.
- */
-static inline int esids_match(unsigned long addr1, unsigned long addr2)
+static bool preload_hit(struct thread_info *ti, unsigned long esid)
{
- int esid_1t_count;
+ unsigned char i;
- /* System is not 1T segment size capable. */
- if (!mmu_has_feature(MMU_FTR_1T_SEGMENT))
- return (GET_ESID(addr1) == GET_ESID(addr2));
+ for (i = 0; i < ti->slb_preload_nr; i++) {
+ unsigned char idx;
+
+ idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
+ if (esid == ti->slb_preload_esid[idx])
+ return true;
+ }
+ return false;
+}
+
+static bool preload_add(struct thread_info *ti, unsigned long ea)
+{
+ unsigned char idx;
+ unsigned long esid;
+
+ if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) {
+ /* EAs are stored >> 28 so 256MB segments don't need clearing */
+ if (ea & ESID_MASK_1T)
+ ea &= ESID_MASK_1T;
+ }
- esid_1t_count = (((addr1 >> SID_SHIFT_1T) != 0) +
- ((addr2 >> SID_SHIFT_1T) != 0));
+ esid = ea >> SID_SHIFT;
- /* both addresses are < 1T */
- if (esid_1t_count == 0)
- return (GET_ESID(addr1) == GET_ESID(addr2));
+ if (preload_hit(ti, esid))
+ return false;
- /* One address < 1T, the other > 1T. Not a match */
- if (esid_1t_count == 1)
- return 0;
+ idx = (ti->slb_preload_tail + ti->slb_preload_nr) % SLB_PRELOAD_NR;
+ ti->slb_preload_esid[idx] = esid;
+ if (ti->slb_preload_nr == SLB_PRELOAD_NR)
+ ti->slb_preload_tail = (ti->slb_preload_tail + 1) % SLB_PRELOAD_NR;
+ else
+ ti->slb_preload_nr++;
- /* Both addresses are > 1T. */
- return (GET_ESID_1T(addr1) == GET_ESID_1T(addr2));
+ return true;
}
+static void preload_age(struct thread_info *ti)
+{
+ if (!ti->slb_preload_nr)
+ return;
+ ti->slb_preload_nr--;
+ ti->slb_preload_tail = (ti->slb_preload_tail + 1) % SLB_PRELOAD_NR;
+}
+
+void slb_setup_new_exec(void)
+{
+ struct thread_info *ti = current_thread_info();
+ struct mm_struct *mm = current->mm;
+ unsigned long exec = 0x10000000;
+
+ /*
+ * We have no good place to clear the slb preload cache on exec,
+ * flush_thread is about the earliest arch hook but that happens
+ * after we switch to the mm and have already preloaded the SLBEs.
+ *
+ * For the most part that's probably okay to use entries from the
+ * previous exec, they will age out if unused. It may turn out to
+ * be an advantage to clear the cache before switching to it,
+ * however.
+ */
+
+ /*
+ * preload some userspace segments into the SLB.
+ * Almost all 32 and 64bit PowerPC executables are linked at
+ * 0x10000000 so it makes sense to preload this segment.
+ */
+ if (!is_kernel_addr(exec)) {
+ if (preload_add(ti, exec))
+ slb_allocate_user(mm, exec);
+ }
+
+ /* Libraries and mmaps. */
+ if (!is_kernel_addr(mm->mmap_base)) {
+ if (preload_add(ti, mm->mmap_base))
+ slb_allocate_user(mm, mm->mmap_base);
+ }
+}
+
+void preload_new_slb_context(unsigned long start, unsigned long sp)
+{
+ struct thread_info *ti = current_thread_info();
+ struct mm_struct *mm = current->mm;
+ unsigned long heap = mm->start_brk;
+
+ /* Userspace entry address. */
+ if (!is_kernel_addr(start)) {
+ if (preload_add(ti, start))
+ slb_allocate_user(mm, start);
+ }
+
+ /* Top of stack, grows down. */
+ if (!is_kernel_addr(sp)) {
+ if (preload_add(ti, sp))
+ slb_allocate_user(mm, sp);
+ }
+
+ /* Bottom of heap, grows up. */
+ if (heap && !is_kernel_addr(heap)) {
+ if (preload_add(ti, heap))
+ slb_allocate_user(mm, heap);
+ }
+}
+
+
/* Flush all user entries from the segment table of the current processor. */
void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
{
- unsigned long pc = KSTK_EIP(tsk);
- unsigned long stack = KSTK_ESP(tsk);
- unsigned long exec_base;
+ struct thread_info *ti = task_thread_info(tsk);
+ unsigned char i;
/*
* We need interrupts hard-disabled here, not just soft-disabled,
@@ -244,8 +322,7 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
if (!mmu_has_feature(MMU_FTR_NO_SLBIE_B) &&
offset <= SLB_CACHE_ENTRIES) {
- unsigned long slbie_data;
- int i;
+ unsigned long slbie_data = 0;
asm volatile("isync" : : : "memory");
for (i = 0; i < offset; i++) {
@@ -285,24 +362,28 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
get_paca()->slb_used_bitmap = get_paca()->slb_kern_bitmap;
/*
- * preload some userspace segments into the SLB.
- * Almost all 32 and 64bit PowerPC executables are linked at
- * 0x10000000 so it makes sense to preload this segment.
+ * We gradually age out SLBs after a number of context switches to
+ * reduce reload overhead of unused entries (like we do with FP/VEC
+ * reload). Each time we wrap 256 switches, take an entry out of the
+ * SLB preload cache.
*/
- exec_base = 0x10000000;
+ tsk->thread.load_slb++;
+ if (!tsk->thread.load_slb) {
+ unsigned long pc = KSTK_EIP(tsk);
- if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
- is_kernel_addr(exec_base))
- return;
+ preload_age(ti);
+ preload_add(ti, pc);
+ }
- slb_allocate_user(mm, pc);
+ for (i = 0; i < ti->slb_preload_nr; i++) {
+ unsigned char idx;
+ unsigned long ea;
- if (!esids_match(pc, stack))
- slb_allocate_user(mm, stack);
+ idx = (ti->slb_preload_tail + i) % SLB_PRELOAD_NR;
+ ea = (unsigned long)ti->slb_preload_esid[idx] << SID_SHIFT;
- if (!esids_match(pc, exec_base) &&
- !esids_match(stack, exec_base))
- slb_allocate_user(mm, exec_base);
+ slb_allocate_user(mm, ea);
+ }
}
void slb_set_size(u16 size)
@@ -575,11 +656,16 @@ long do_slb_fault(struct pt_regs *regs, unsigned long ea)
return slb_allocate_kernel(ea, id);
} else {
struct mm_struct *mm = current->mm;
+ long err;
if (unlikely(!mm))
return -EFAULT;
- return slb_allocate_user(mm, ea);
+ err = slb_allocate_user(mm, ea);
+ if (!err)
+ preload_add(current_thread_info(), ea);
+
+ return err;
}
}
--
2.18.0
^ permalink raw reply related [flat|nested] 21+ messages in thread