* [PATCH AUTOSEL 5.10 14/53] cpuidle: pseries: Do not cap the CEDE0 latency in fixup_cede0_latency()
From: Sasha Levin @ 2021-09-10 0:19 UTC
To: linux-kernel, stable
Cc: Sasha Levin, Gautham R. Shenoy, linux-pm, linuxppc-dev
From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
[ Upstream commit 71737a6c2a8f801622d2b71567d1ec1e4c5b40b8 ]
Currently the fixup_cede0_latency() code fixes up the CEDE(0) exit
latency value only if the minimum advertised extended CEDE latency is
less than 10us. This was done so as not to break the expected
behaviour on POWER8 platforms, where the advertised latency was higher
than the default 10us and using it would have delayed SMT folding on
the core.
However, after the earlier patch "cpuidle/pseries: Fixup CEDE0 latency
only for POWER10 onwards", we can be sure that the fixup of CEDE0
latency is going to happen only from POWER10 onwards. Hence
unconditionally use the minimum exit latency provided by the platform.
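A minimal userspace sketch of the new selection logic, with made-up
record values (the kernel operates on xcede_latency_record entries, as
in the diff below):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

int main(void)
{
    /* hypothetical advertised extended-CEDE exit latencies, in us */
    uint64_t latency_us[] = { 0, 12, 3 };
    uint64_t min_us = UINT_MAX;

    for (int i = 0; i < 3; i++) {
        if (latency_us[i] == 0)
            continue; /* unrealistic record, skipped with a warning */
        if (latency_us[i] < min_us)
            min_us = latency_us[i];
    }

    if (min_us != UINT_MAX) /* at least one usable record was found */
        printf("CEDE(0): exit_latency=%llu us, target_residency=%llu us\n",
               (unsigned long long)min_us,
               (unsigned long long)(10 * min_us));
    return 0;
}

With the records above this prints exit_latency=3 us and
target_residency=30 us; an advertised minimum above 10us is now used
as-is rather than being capped at the old default.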
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1626676399-15975-3-git-send-email-ego@linux.vnet.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
drivers/cpuidle/cpuidle-pseries.c | 59 ++++++++++++++++---------------
1 file changed, 30 insertions(+), 29 deletions(-)
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index a2b5c6f60cf0..18747e5287c8 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -346,11 +346,9 @@ static int pseries_cpuidle_driver_init(void)
static void __init fixup_cede0_latency(void)
{
struct xcede_latency_payload *payload;
- u64 min_latency_us;
+ u64 min_xcede_latency_us = UINT_MAX;
int i;
- min_latency_us = dedicated_states[1].exit_latency; // CEDE latency
-
if (parse_cede_parameters())
return;
@@ -358,42 +356,45 @@ static void __init fixup_cede0_latency(void)
nr_xcede_records);
payload = &xcede_latency_parameter.payload;
+
+ /*
+ * The CEDE idle state maps to CEDE(0). While the hypervisor
+ * does not advertise CEDE(0) exit latency values, it does
+ * advertise the latency values of the extended CEDE states.
+ * We use the lowest advertised exit latency value as a proxy
+ * for the exit latency of CEDE(0).
+ */
for (i = 0; i < nr_xcede_records; i++) {
struct xcede_latency_record *record = &payload->records[i];
+ u8 hint = record->hint;
u64 latency_tb = be64_to_cpu(record->latency_ticks);
u64 latency_us = DIV_ROUND_UP_ULL(tb_to_ns(latency_tb), NSEC_PER_USEC);
- if (latency_us == 0)
- pr_warn("cpuidle: xcede record %d has an unrealistic latency of 0us.\n", i);
-
- if (latency_us < min_latency_us)
- min_latency_us = latency_us;
- }
-
- /*
- * By default, we assume that CEDE(0) has exit latency 10us,
- * since there is no way for us to query from the platform.
- *
- * However, if the wakeup latency of an Extended CEDE state is
- * smaller than 10us, then we can be sure that CEDE(0)
- * requires no more than that.
- *
- * Perform the fix-up.
- */
- if (min_latency_us < dedicated_states[1].exit_latency) {
/*
- * We set a minimum of 1us wakeup latency for cede0 to
- * distinguish it from snooze
+ * We expect the exit latency of an extended CEDE
+ * state to be non-zero, since it takes at least
+ * a few nanoseconds to wakeup the idle CPU and
+ * dispatch the virtual processor into the Linux
+ * Guest.
+ *
+ * So we consider only non-zero values when performing
+ * the fixup of CEDE(0) latency.
*/
- u64 cede0_latency = 1;
+ if (latency_us == 0) {
+ pr_warn("cpuidle: Skipping xcede record %d [hint=%d]. Exit latency = 0us\n",
+ i, hint);
+ continue;
+ }
- if (min_latency_us > cede0_latency)
- cede0_latency = min_latency_us - 1;
+ if (latency_us < min_xcede_latency_us)
+ min_xcede_latency_us = latency_us;
+ }
- dedicated_states[1].exit_latency = cede0_latency;
- dedicated_states[1].target_residency = 10 * (cede0_latency);
+ if (min_xcede_latency_us != UINT_MAX) {
+ dedicated_states[1].exit_latency = min_xcede_latency_us;
+ dedicated_states[1].target_residency = 10 * (min_xcede_latency_us);
pr_info("cpuidle: Fixed up CEDE exit latency to %llu us\n",
- cede0_latency);
+ min_xcede_latency_us);
}
}
--
2.30.2
* [PATCH AUTOSEL 5.10 15/53] powerpc: make the install target not depend on any build artifact
From: Sasha Levin @ 2021-09-10 0:19 UTC
To: linux-kernel, stable
Cc: Sasha Levin, Masahiro Yamada, Nick Desaulniers, linuxppc-dev
From: Masahiro Yamada <masahiroy@kernel.org>
[ Upstream commit 9bef456b20581e630ef9a13555ca04fed65a859d ]
The install target should not depend on any build artifact.
The reason is explained in commit 19514fc665ff ("arm, kbuild: make
"make install" not depend on vmlinux").
Change the PowerPC installation code in a similar way.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210729141937.445051-2-masahiroy@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/boot/Makefile | 2 +-
arch/powerpc/boot/install.sh | 14 ++++++++++++++
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index e4b364b5da9e..de18b4df3866 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -436,7 +436,7 @@ $(obj)/zImage.initrd: $(addprefix $(obj)/, $(initrd-y))
$(Q)rm -f $@; ln $< $@
# Only install the vmlinux
-install: $(CONFIGURE) $(addprefix $(obj)/, $(image-y))
+install:
sh -x $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" vmlinux System.map "$(INSTALL_PATH)"
# Install the vmlinux and other built boot targets.
diff --git a/arch/powerpc/boot/install.sh b/arch/powerpc/boot/install.sh
index b6a256bc96ee..8d669cf1ccda 100644
--- a/arch/powerpc/boot/install.sh
+++ b/arch/powerpc/boot/install.sh
@@ -21,6 +21,20 @@
# Bail with error code if anything goes wrong
set -e
+verify () {
+ if [ ! -f "$1" ]; then
+ echo "" 1>&2
+ echo " *** Missing file: $1" 1>&2
+ echo ' *** You need to run "make" before "make install".' 1>&2
+ echo "" 1>&2
+ exit 1
+ fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
--
2.30.2
* [PATCH AUTOSEL 5.10 35/53] powerpc/32: indirect function call use bctrl rather than blrl in ret_from_kernel_thread
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable; +Cc: Sasha Levin, linuxppc-dev
From: Christophe Leroy <christophe.leroy@csgroup.eu>
[ Upstream commit 113ec9ccc8049c3772f0eab46b62c5d6654c09f7 ]
Copied from commit 89bbe4c798bc ("powerpc/64: indirect function call
use bctrl rather than blrl in ret_from_kernel_thread")
blrl is not recommended for use as an indirect function call, as it may
corrupt the link stack predictor.
This is not a performance-critical path, but it should be fixed for
consistency.
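This mirrors what compilers already do for C: an indirect call on
powerpc64 is typically emitted through the count register. A small
standalone example (not from the patch) that can be built with
gcc -O2 -S on powerpc64 to inspect the generated mtctr/bctrl pair:

typedef int (*fn_t)(int);

int add_one(int x) { return x + 1; }

int call_indirect(fn_t fn, int x)
{
    return fn(x); /* powerpc64: mtctr ...; bctrl, not mtlr; blrl */
}

int main(void)
{
    volatile fn_t fp = add_one; /* keep the call genuinely indirect */
    return call_indirect(fp, 41) == 42 ? 0 : 1;
}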
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/91b1d242525307ceceec7ef6e832bfbacdd4501b.1629436472.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/kernel/entry_32.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 459f5d00b990..a3db3748172c 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -496,10 +496,10 @@ ret_from_fork:
ret_from_kernel_thread:
REST_NVGPRS(r1)
bl schedule_tail
- mtlr r14
+ mtctr r14
mr r3,r15
PPC440EP_ERR42
- blrl
+ bctrl
li r3,0
b ret_from_syscall
--
2.30.2
* [PATCH AUTOSEL 5.10 36/53] powerpc/booke: Avoid link stack corruption in several places
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable; +Cc: Sasha Levin, linuxppc-dev
From: Christophe Leroy <christophe.leroy@csgroup.eu>
[ Upstream commit f5007dbf4da729baa850b33a64dc3cc53757bdf8 ]
Use bcl 20,31,+4 instead of bl in order to preserve the link stack.
The bcl 20,31 form sets LR just as bl does, but the branch predictor
recognises it as a "get PC" sequence and does not push an entry onto
the link stack for it.
See commit c974809a26a1 ("powerpc/vdso: Avoid link stack corruption
in __get_datapage()") for details.
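For reference, the sequence being converted is the classic
position-independent "find our address" idiom. A hedged sketch of the
same idiom as GCC inline asm (illustrative only; it assembles only on
powerpc targets):

#include <stdio.h>

static unsigned long current_addr(void)
{
    unsigned long addr;

    /*
     * bcl 20,31,$+4 sets LR to the next instruction just as bl
     * does, but it is recognised as a PC-read and does not push
     * a link stack entry, so later returns stay predicted.
     */
    asm volatile("bcl 20,31,$+4\n\t"
                 "mflr %0" : "=r" (addr) : : "lr");
    return addr;
}

int main(void)
{
    printf("running near 0x%lx\n", current_addr());
    return 0;
}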
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e9fbc285eceb720e6c0e032ef47fe8b05f669b48.1629791751.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/include/asm/ppc_asm.h | 2 +-
arch/powerpc/kernel/exceptions-64e.S | 6 +++---
arch/powerpc/kernel/fsl_booke_entry_mapping.S | 8 ++++----
arch/powerpc/kernel/head_44x.S | 6 +++---
arch/powerpc/kernel/head_fsl_booke.S | 6 +++---
arch/powerpc/mm/nohash/tlb_low.S | 4 ++--
6 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 511786f0e40d..f42ba0b86a3a 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -306,7 +306,7 @@ GLUE(.,name):
/* Be careful, this will clobber the lr register. */
#define LOAD_REG_ADDR_PIC(reg, name) \
- bl 0f; \
+ bcl 20,31,$+4; \
0: mflr reg; \
addis reg,reg,(name - 0b)@ha; \
addi reg,reg,(name - 0b)@l;
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index f579ce46eef2..1bb505eeb154 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1443,7 +1443,7 @@ found_iprot:
* r3 = MAS0_TLBSEL (for the iprot array)
* r4 = SPRN_TLBnCFG
*/
- bl invstr /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
invstr: mflr r6 /* Make it accessible */
mfmsr r7
rlwinm r5,r7,27,31,31 /* extract MSR[IS] */
@@ -1512,7 +1512,7 @@ skpinv: addi r6,r6,1 /* Increment */
mfmsr r6
xori r6,r6,MSR_IS
mtspr SPRN_SRR1,r6
- bl 1f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
1: mflr r6
addi r6,r6,(2f - 1b)
mtspr SPRN_SRR0,r6
@@ -1572,7 +1572,7 @@ skpinv: addi r6,r6,1 /* Increment */
* r4 = MAS0 w/TLBSEL & ESEL for the temp mapping
*/
/* Now we branch the new virtual address mapped by this entry */
- bl 1f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
1: mflr r6
addi r6,r6,(2f - 1b)
tovirt(r6,r6)
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index 8bccce6544b5..dedc17fac8f8 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* 1. Find the index of the entry we're executing in */
- bl invstr /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
invstr: mflr r6 /* Make it accessible */
mfmsr r7
rlwinm r4,r7,27,31,31 /* extract MSR[IS] */
@@ -85,7 +85,7 @@ skpinv: addi r6,r6,1 /* Increment */
addi r6,r6,10
slw r6,r8,r6 /* convert to mask */
- bl 1f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
1: mflr r7
mfspr r8,SPRN_MAS3
@@ -117,7 +117,7 @@ skpinv: addi r6,r6,1 /* Increment */
xori r6,r4,1
slwi r6,r6,5 /* setup new context with other address space */
- bl 1f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
1: mflr r9
rlwimi r7,r9,0,20,31
addi r7,r7,(2f - 1b)
@@ -207,7 +207,7 @@ next_tlb_setup:
lis r7,MSR_KERNEL@h
ori r7,r7,MSR_KERNEL@l
- bl 1f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
1: mflr r9
rlwimi r6,r9,0,20,31
addi r6,r6,(2f - 1b)
diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index 8e36718f3167..b178b372e66e 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -70,7 +70,7 @@ _ENTRY(_start);
* address.
* r21 will be loaded with the physical runtime address of _stext
*/
- bl 0f /* Get our runtime address */
+ bcl 20,31,$+4 /* Get our runtime address */
0: mflr r21 /* Make it accessible */
addis r21,r21,(_stext - 0b)@ha
addi r21,r21,(_stext - 0b)@l /* Get our current runtime base */
@@ -861,7 +861,7 @@ _GLOBAL(init_cpu_state)
wmmucr: mtspr SPRN_MMUCR,r3 /* Put MMUCR */
sync
- bl invstr /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
invstr: mflr r5 /* Make it accessible */
tlbsx r23,0,r5 /* Find entry we are in */
li r4,0 /* Start at TLB entry 0 */
@@ -1053,7 +1053,7 @@ head_start_47x:
sync
/* Find the entry we are running from */
- bl 1f
+ bcl 20,31,$+4
1: mflr r23
tlbsx r23,0,r23
tlbre r24,r23,0
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 586a6ac501e9..daf4a72bb08f 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -79,7 +79,7 @@ _ENTRY(_start);
mr r23,r3
mr r25,r4
- bl 0f
+ bcl 20,31,$+4
0: mflr r8
addis r3,r8,(is_second_reloc - 0b)@ha
lwz r19,(is_second_reloc - 0b)@l(r3)
@@ -1195,7 +1195,7 @@ _GLOBAL(switch_to_as1)
bne 1b
/* Get the tlb entry used by the current running code */
- bl 0f
+ bcl 20,31,$+4
0: mflr r4
tlbsx 0,r4
@@ -1229,7 +1229,7 @@ _GLOBAL(switch_to_as1)
_GLOBAL(restore_to_as0)
mflr r0
- bl 0f
+ bcl 20,31,$+4
0: mflr r9
addi r9,r9,1f - 0b
diff --git a/arch/powerpc/mm/nohash/tlb_low.S b/arch/powerpc/mm/nohash/tlb_low.S
index eaeee402f96e..f849f26bfbfb 100644
--- a/arch/powerpc/mm/nohash/tlb_low.S
+++ b/arch/powerpc/mm/nohash/tlb_low.S
@@ -214,7 +214,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_476_DD2)
* Touch enough instruction cache lines to ensure cache hits
*/
1: mflr r9
- bl 2f
+ bcl 20,31,$+4
2: mflr r6
li r7,32
PPC_ICBT(0,R6,R7) /* touch next cache line */
@@ -442,7 +442,7 @@ _GLOBAL(loadcam_multi)
* Set up temporary TLB entry that is the same as what we're
* running from, but in AS=1.
*/
- bl 1f
+ bcl 20,31,$+4
1: mflr r6
tlbsx 0,r8
mfspr r6,SPRN_MAS1
--
2.30.2
* [PATCH AUTOSEL 5.10 37/53] powerpc: Avoid link stack corruption in misc asm functions
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable; +Cc: Sasha Levin, linuxppc-dev
From: Christophe Leroy <christophe.leroy@csgroup.eu>
[ Upstream commit 33e1402435cb9f3021439a15935ea2dc69ec1844 ]
bl;mflr is used in several places to get the code position.
Use bcl 20,31,+4 instead of bl in order to preserve the link stack.
See commit c974809a26a1 ("powerpc/vdso: Avoid link stack corruption
in __get_datapage()") for details.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c6eabb4fb6c156f75d56dcbcc6f243e5ac0fba42.1629791763.git.christophe.leroy@csgroup.eu
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/kernel/misc.S | 2 +-
arch/powerpc/kernel/misc_32.S | 2 +-
arch/powerpc/kernel/misc_64.S | 2 +-
arch/powerpc/kernel/reloc_32.S | 2 +-
arch/powerpc/kexec/relocate_32.S | 12 ++++++------
5 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel/misc.S b/arch/powerpc/kernel/misc.S
index 5be96feccb55..fb7de3543c03 100644
--- a/arch/powerpc/kernel/misc.S
+++ b/arch/powerpc/kernel/misc.S
@@ -29,7 +29,7 @@ _GLOBAL(reloc_offset)
li r3, 0
_GLOBAL(add_reloc_offset)
mflr r0
- bl 1f
+ bcl 20,31,$+4
1: mflr r5
PPC_LL r4,(2f-1b)(r5)
subf r5,r4,r5
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 717e658b90fd..9511f077842b 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -106,7 +106,7 @@ _GLOBAL(reloc_got2)
srwi. r8,r8,2
beqlr
mtctr r8
- bl 1f
+ bcl 20,31,$+4
1: mflr r0
lis r4,1b@ha
addi r4,r4,1b@l
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index 070465825c21..81870c1c827f 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -277,7 +277,7 @@ _GLOBAL(scom970_write)
* Physical (hardware) cpu id should be in r3.
*/
_GLOBAL(kexec_wait)
- bl 1f
+ bcl 20,31,$+4
1: mflr r5
addi r5,r5,kexec_flag-1b
diff --git a/arch/powerpc/kernel/reloc_32.S b/arch/powerpc/kernel/reloc_32.S
index 10e96f3e22fe..0508c14b4c28 100644
--- a/arch/powerpc/kernel/reloc_32.S
+++ b/arch/powerpc/kernel/reloc_32.S
@@ -30,7 +30,7 @@ R_PPC_RELATIVE = 22
_GLOBAL(relocate)
mflr r0 /* Save our LR */
- bl 0f /* Find our current runtime address */
+ bcl 20,31,$+4 /* Find our current runtime address */
0: mflr r12 /* Make it accessible */
mtlr r0
diff --git a/arch/powerpc/kexec/relocate_32.S b/arch/powerpc/kexec/relocate_32.S
index 61946c19e07c..cf6e52bdf8d8 100644
--- a/arch/powerpc/kexec/relocate_32.S
+++ b/arch/powerpc/kexec/relocate_32.S
@@ -93,7 +93,7 @@ wmmucr:
* Invalidate all the TLB entries except the current entry
* where we are running from
*/
- bl 0f /* Find our address */
+ bcl 20,31,$+4 /* Find our address */
0: mflr r5 /* Make it accessible */
tlbsx r23,0,r5 /* Find entry we are in */
li r4,0 /* Start at TLB entry 0 */
@@ -158,7 +158,7 @@ write_out:
/* Switch to other address space in MSR */
insrwi r9, r7, 1, 26 /* Set MSR[IS] = r7 */
- bl 1f
+ bcl 20,31,$+4
1: mflr r8
addi r8, r8, (2f-1b) /* Find the target offset */
@@ -202,7 +202,7 @@ next_tlb:
li r9,0
insrwi r9, r7, 1, 26 /* Set MSR[IS] = r7 */
- bl 1f
+ bcl 20,31,$+4
1: mflr r8
and r8, r8, r11 /* Get our offset within page */
addi r8, r8, (2f-1b)
@@ -240,7 +240,7 @@ setup_map_47x:
sync
/* Find the entry we are running from */
- bl 2f
+ bcl 20,31,$+4
2: mflr r23
tlbsx r23, 0, r23
tlbre r24, r23, 0 /* TLB Word 0 */
@@ -296,7 +296,7 @@ clear_utlb_entry:
/* Update the msr to the new TS */
insrwi r5, r7, 1, 26
- bl 1f
+ bcl 20,31,$+4
1: mflr r6
addi r6, r6, (2f-1b)
@@ -355,7 +355,7 @@ write_utlb:
/* Defaults to 256M */
lis r10, 0x1000
- bl 1f
+ bcl 20,31,$+4
1: mflr r4
addi r4, r4, (2f-1b) /* virtual address of 2f */
--
2.30.2
* [PATCH AUTOSEL 5.10 38/53] KVM: PPC: Book3S HV: Initialise vcpu MSR with MSR_ME
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable
Cc: Sasha Levin, Alexey Kardashevskiy, kvm-ppc, Nicholas Piggin,
linuxppc-dev
From: Nicholas Piggin <npiggin@gmail.com>
[ Upstream commit fd42b7b09c602c904452c0c3e5955ca21d8e387a ]
It is possible to create a VCPU without setting the MSR before running
it, which results in a warning in kvmhv_vcpu_entry_p9() that MSR_ME is
not set. This is pretty harmless, because the MSR_ME bit is added to
HSRR1 before the HRFID to the guest, and a normal qemu guest doesn't
hit it.
Initialise the vcpu MSR with MSR_ME set.
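A hedged sketch of the sort of userspace that could hit this
(hypothetical reproducer, error handling omitted; not from the patch):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

    /*
     * The MSR is deliberately never initialised here (no
     * KVM_SET_SREGS / KVM_SET_ONE_REG), so the guest used to be
     * entered with MSR_ME clear, tripping the warning.
     */
    ioctl(vcpu, KVM_RUN, 0);
    return 0;
}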
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210811160134.904987-2-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/kvm/book3s_hv.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index bd7350a608d4..8e16dfecbe36 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -2353,6 +2353,7 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
spin_lock_init(&vcpu->arch.vpa_update_lock);
spin_lock_init(&vcpu->arch.tbacct_lock);
vcpu->arch.busy_preempt = TB_NIL;
+ vcpu->arch.shregs.msr = MSR_ME;
vcpu->arch.intr_msr = MSR_SF | MSR_ME;
/*
--
2.30.2
* [PATCH AUTOSEL 5.10 39/53] KVM: PPC: Book3S HV P9: Fixes for TM softpatch interrupt NIP
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable; +Cc: Sasha Levin, linuxppc-dev, kvm-ppc, Nicholas Piggin
From: Nicholas Piggin <npiggin@gmail.com>
[ Upstream commit 4782e0cd0d184d727ad3b0cfe20d1d44d9f98239 ]
The softpatch interrupt sets HSRR0 to the faulting instruction + 4, so
the handler should subtract 4 to get the address of the faulting
instruction in the case of a TM softpatch interrupt (where the
instruction was not executed) that was not emulated.
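A small worked example of the bookkeeping, with a hypothetical
instruction address:

#include <stdio.h>

int main(void)
{
    unsigned long insn = 0x1000;    /* faulting TM instruction */
    unsigned long hsrr0 = insn + 4; /* what the hardware delivers */
    unsigned long nip = hsrr0 - 4;  /* rewound on entry (this patch) */

    /* emulated paths step past the instruction again ... */
    printf("emulated: resume at 0x%lx\n", nip + 4);
    /* ... and the not-emulated path reports the right address */
    printf("not emulated: fault at 0x%lx\n", nip);
    return 0;
}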
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210811160134.904987-4-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/kvm/book3s_hv_tm.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index cc90b8b82329..e7c36f8bf205 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -46,6 +46,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
u64 newmsr, bescr;
int ra, rs;
+ /*
+ * The TM softpatch interrupt sets NIP to the instruction following
+ * the faulting instruction, which is not executed. Rewind nip to the
+ * faulting instruction so it looks like a normal synchronous
+ * interrupt, then update nip in the places where the instruction is
+ * emulated.
+ */
+ vcpu->arch.regs.nip -= 4;
+
/*
* rfid, rfebb, and mtmsrd encode bit 31 = 0 since it's a reserved bit
* in these instructions, so masking bit 31 out doesn't change these
@@ -67,7 +76,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
(newmsr & MSR_TM)));
newmsr = sanitize_msr(newmsr);
vcpu->arch.shregs.msr = newmsr;
- vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+ vcpu->arch.cfar = vcpu->arch.regs.nip;
vcpu->arch.regs.nip = vcpu->arch.shregs.srr0;
return RESUME_GUEST;
@@ -100,7 +109,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
vcpu->arch.bescr = bescr;
msr = (msr & ~MSR_TS_MASK) | MSR_TS_T;
vcpu->arch.shregs.msr = msr;
- vcpu->arch.cfar = vcpu->arch.regs.nip - 4;
+ vcpu->arch.cfar = vcpu->arch.regs.nip;
vcpu->arch.regs.nip = vcpu->arch.ebbrr;
return RESUME_GUEST;
@@ -116,6 +125,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
newmsr = (newmsr & ~MSR_LE) | (msr & MSR_LE);
newmsr = sanitize_msr(newmsr);
vcpu->arch.shregs.msr = newmsr;
+ vcpu->arch.regs.nip += 4;
return RESUME_GUEST;
/* ignore bit 31, see comment above */
@@ -152,6 +162,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
msr = (msr & ~MSR_TS_MASK) | MSR_TS_S;
}
vcpu->arch.shregs.msr = msr;
+ vcpu->arch.regs.nip += 4;
return RESUME_GUEST;
/* ignore bit 31, see comment above */
@@ -189,6 +200,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
+ vcpu->arch.regs.nip += 4;
return RESUME_GUEST;
/* ignore bit 31, see comment above */
@@ -220,6 +232,7 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
vcpu->arch.shregs.msr = msr | MSR_TS_S;
+ vcpu->arch.regs.nip += 4;
return RESUME_GUEST;
}
--
2.30.2
* [PATCH AUTOSEL 5.10 40/53] KVM: PPC: Book3S HV Nested: Fix TM softpatch HFAC interrupt emulation
From: Sasha Levin @ 2021-09-10 0:20 UTC
To: linux-kernel, stable; +Cc: Sasha Levin, linuxppc-dev, kvm-ppc, Nicholas Piggin
From: Nicholas Piggin <npiggin@gmail.com>
[ Upstream commit d82b392d9b3556b63e3f9916cf057ea847e173a9 ]
Have the TM softpatch emulation code set up the HFAC interrupt and
return -1 in case an instruction was executed with HFSCR bits clear,
and have the interrupt exit handler fall through to the HFAC handler.
When the L0 is running a nested guest, this ensures the HFAC interrupt
is correctly passed up to the L1.
The "direct guest" exit handler will turn these into PROGILL program
interrupts so functionality in practice will be unchanged. But it's
possible an L1 would want to handle these in a different way.
Also rearrange the FAC interrupt emulation code to match the HFAC format
while here (mainly, adding the FSCR_INTR_CAUSE mask).
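The shape of the resulting dispatch can be sketched in isolation like
this (the constants and helpers are stand-ins, not the kernel's):

enum { TRAP_HV_SOFTPATCH, TRAP_H_FAC_UNAVAIL };
enum { DEMO_RESUME_GUEST, DEMO_RESUME_HOST };

static int tm_emulation_demo(void)
{
    return -1; /* pretend: not emulated, HFAC interrupt set up */
}

static int handle_exit_demo(int trap)
{
    int r = DEMO_RESUME_GUEST;

    switch (trap) {
    case TRAP_HV_SOFTPATCH:
        r = tm_emulation_demo();
        if (r != -1)
            break;
        /* fall through to the facility unavailable handler */
    case TRAP_H_FAC_UNAVAIL:
        r = DEMO_RESUME_HOST; /* pass the HFAC up to the host / L1 */
        break;
    }
    return r;
}

int main(void)
{
    return handle_exit_demo(TRAP_HV_SOFTPATCH) == DEMO_RESUME_HOST ? 0 : 1;
}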
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210811160134.904987-5-npiggin@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
arch/powerpc/include/asm/reg.h | 3 ++-
arch/powerpc/kvm/book3s_hv.c | 35 ++++++++++++++++----------
arch/powerpc/kvm/book3s_hv_tm.c | 44 ++++++++++++++++++---------------
3 files changed, 48 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index f4b98903064f..af07347ac61a 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -417,6 +417,7 @@
#define FSCR_TAR __MASK(FSCR_TAR_LG)
#define FSCR_EBB __MASK(FSCR_EBB_LG)
#define FSCR_DSCR __MASK(FSCR_DSCR_LG)
+#define FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */
#define SPRN_HFSCR 0xbe /* HV=1 Facility Status & Control Register */
#define HFSCR_PREFIX __MASK(FSCR_PREFIX_LG)
#define HFSCR_MSGP __MASK(FSCR_MSGP_LG)
@@ -428,7 +429,7 @@
#define HFSCR_DSCR __MASK(FSCR_DSCR_LG)
#define HFSCR_VECVSX __MASK(FSCR_VECVSX_LG)
#define HFSCR_FP __MASK(FSCR_FP_LG)
-#define HFSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */
+#define HFSCR_INTR_CAUSE FSCR_INTR_CAUSE
#define SPRN_TAR 0x32f /* Target Address Register */
#define SPRN_LPCR 0x13E /* LPAR Control Register */
#define LPCR_VPM0 ASM_CONST(0x8000000000000000)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8e16dfecbe36..c63fb61de62a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1425,6 +1425,21 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
r = RESUME_GUEST;
}
break;
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ case BOOK3S_INTERRUPT_HV_SOFTPATCH:
+ /*
+ * This occurs for various TM-related instructions that
+ * we need to emulate on POWER9 DD2.2. We have already
+ * handled the cases where the guest was in real-suspend
+ * mode and was transitioning to transactional state.
+ */
+ r = kvmhv_p9_tm_emulation(vcpu);
+ if (r != -1)
+ break;
+ fallthrough; /* go to facility unavailable handler */
+#endif
+
/*
* This occurs if the guest (kernel or userspace), does something that
* is prohibited by HFSCR.
@@ -1443,18 +1458,6 @@ static int kvmppc_handle_exit_hv(struct kvm_vcpu *vcpu,
}
break;
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
- case BOOK3S_INTERRUPT_HV_SOFTPATCH:
- /*
- * This occurs for various TM-related instructions that
- * we need to emulate on POWER9 DD2.2. We have already
- * handled the cases where the guest was in real-suspend
- * mode and was transitioning to transactional state.
- */
- r = kvmhv_p9_tm_emulation(vcpu);
- break;
-#endif
-
case BOOK3S_INTERRUPT_HV_RM_HARD:
r = RESUME_PASSTHROUGH;
break;
@@ -1552,9 +1555,15 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
* mode and was transitioning to transactional state.
*/
r = kvmhv_p9_tm_emulation(vcpu);
- break;
+ if (r != -1)
+ break;
+ fallthrough; /* go to facility unavailable handler */
#endif
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+ r = RESUME_HOST;
+ break;
+
case BOOK3S_INTERRUPT_HV_RM_HARD:
vcpu->arch.trap = 0;
r = RESUME_GUEST;
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index e7c36f8bf205..866cadd70094 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/powerpc/kvm/book3s_hv_tm.c
@@ -88,14 +88,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
}
/* check EBB facility is available */
if (!(vcpu->arch.hfscr & HFSCR_EBB)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_EBB_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if ((msr & MSR_PR) && !(vcpu->arch.fscr & FSCR_EBB)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_EBB_LG << 56);
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_EBB_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
}
@@ -138,14 +139,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
}
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
@@ -169,14 +171,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
case (PPC_INST_TRECLAIM & PO_XOP_OPCODE_MASK):
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
@@ -208,14 +211,15 @@ int kvmhv_p9_tm_emulation(struct kvm_vcpu *vcpu)
/* XXX do we need to check for PR=0 here? */
/* check for TM disabled in the HFSCR or MSR */
if (!(vcpu->arch.hfscr & HFSCR_TM)) {
- /* generate an illegal instruction interrupt */
- kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
- return RESUME_GUEST;
+ vcpu->arch.hfscr &= ~HFSCR_INTR_CAUSE;
+ vcpu->arch.hfscr |= (u64)FSCR_TM_LG << 56;
+ vcpu->arch.trap = BOOK3S_INTERRUPT_H_FAC_UNAVAIL;
+ return -1; /* rerun host interrupt handler */
}
if (!(msr & MSR_TM)) {
/* generate a facility unavailable interrupt */
- vcpu->arch.fscr = (vcpu->arch.fscr & ~(0xffull << 56)) |
- ((u64)FSCR_TM_LG << 56);
+ vcpu->arch.fscr &= ~FSCR_INTR_CAUSE;
+ vcpu->arch.fscr |= (u64)FSCR_TM_LG << 56;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_FAC_UNAVAIL);
return RESUME_GUEST;
--
2.30.2