From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from ozlabs.org (bilbo.ozlabs.org [IPv6:2401:3900:2:1::2])
	(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by lists.ozlabs.org (Postfix) with ESMTPS id 427xn62xHfzF397
	for <...>; Mon, 10 Sep 2018 15:44:10 +1000 (AEST)
From: Michael Neuling <mikey@neuling.org>
To: mpe@ellerman.id.au
Cc: linuxppc-dev@lists.ozlabs.org, Nicholas Piggin, paulus@ozlabs.org,
	Haren Myneni, mikey@neuling.org
Subject: [PATCH] powerpc: Avoid code patching freed init sections
Date: Mon, 10 Sep 2018 15:44:05 +1000
Message-Id: <20180910054405.32422-1-mikey@neuling.org>
List-Id: Linux on PowerPC Developers Mail List

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
   kvm_use_magic_page() ->
    fault_in_pages_readable() ->
     __get_user() ->
      __get_user_nocheck() ->
       barrier_nospec();

We have a code patching location at barrier_nospec(), and
kvm_guest_init() is an init function. This whole chain gets inlined, so
when we free the init section (and hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We've seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post-migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

With this patch there is a small chance of a race if we code patch
between the init section being freed and system_state being set to
SYSTEM_RUNNING (in kernel_init()), but that window seems too small and
too early for any code patching to occur in practice.

cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>
---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c, but it could go back even further than that.
In reality though, I think we can only hit this since the first
spectre/meltdown changes.
---
 arch/powerpc/lib/code-patching.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 850f3b8f4d..a2bc08bfd8 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -23,11 +23,30 @@
 #include
 #include
 
+static inline bool in_init_section(unsigned int *patch_addr)
+{
+	if (patch_addr < (unsigned int *)__init_begin)
+		return false;
+	if (patch_addr >= (unsigned int *)__init_end)
+		return false;
+	return true;
+}
+
+static inline bool init_freed(void)
+{
+	return (system_state >= SYSTEM_RUNNING);
+}
+
 static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 			       unsigned int *patch_addr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (in_init_section(patch_addr) && init_freed())
+		return 0;
+
 	__put_user_size(instr, patch_addr, 4, err);
 	if (err)
 		return err;
-- 
2.17.1