Date: Tue, 30 Apr 2019 04:20:02 -0700
From: tip-bot for Nadav Amit
To: linux-tip-commits@vger.kernel.org
Cc: kernel-hardening@lists.openwall.com, ard.biesheuvel@linaro.org,
    torvalds@linux-foundation.org, akpm@linux-foundation.org, mingo@kernel.org,
    linux-kernel@vger.kernel.org, riel@surriel.com, namit@vmware.com,
    linux_dti@icloud.com, keescook@chromium.org, bp@alien8.de, hpa@zytor.com,
    mhiramat@kernel.org, kristen@linux.intel.com, dave.hansen@intel.com,
    luto@kernel.org, deneen.t.dock@intel.com, peterz@infradead.org,
    tglx@linutronix.de, rick.p.edgecombe@intel.com, will.deacon@arm.com
In-Reply-To: <20190426001143.4983-8-namit@vmware.com>
References: <20190426001143.4983-8-namit@vmware.com>
Subject: [tip:x86/mm] x86/alternatives: Use temporary mm for text poking

Commit-ID:  b3fd8e83ada0d51b71a84297480187e2d40e5ded
Gitweb:     https://git.kernel.org/tip/b3fd8e83ada0d51b71a84297480187e2d40e5ded
Author:     Nadav Amit
AuthorDate: Thu, 25 Apr 2019 17:11:27 -0700
Committer:  Ingo Molnar
CommitDate: Tue, 30 Apr 2019 12:37:52 +0200

x86/alternatives: Use temporary mm for text poking

text_poke() can potentially compromise security as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores accidentally or maliciously, if an attacker gains the
ability to write onto kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs
are removed, the time-window in which the code is writable is not
limited if the fixmap PTEs - maliciously or accidentally - are cached
in the TLB.

To address these potential security hazards, use a temporary mm for
patching the code.

Finally, text_poke() is also not conservative enough when mapping
pages, as it always tries to map 2 pages, even when a single one is
sufficient. So try to be more conservative, and do not map more than
needed.

Signed-off-by: Nadav Amit
Signed-off-by: Rick Edgecombe
Signed-off-by: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: H. Peter Anvin
Cc: Kees Cook
Cc: Linus Torvalds
Cc: Masami Hiramatsu
Cc: Rik van Riel
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190426001143.4983-8-namit@vmware.com
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 108 +++++++++++++++++++++++++++++++++---------
 arch/x86/xen/mmu_pv.c         |   2 -
 3 files changed, 86 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 11d5c710a94f..599203876c32 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -684,41 +685,104 @@ __ro_after_init unsigned long poking_addr;
 
 static void *__text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	struct page *pages[2] = {NULL};
+	temp_mm_state_t prev;
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+	pgprot_t pgprot;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized. There is no way to recover.
	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	BUG_ON(!pages[0]);
+	/*
+	 * If something went wrong, crash and burn since recovery paths are not
+	 * implemented.
+	 */
+	BUG_ON(!pages[0] || (cross_page_boundary && !pages[1]));
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * Map the page without the global bit, as TLB flushing is done with
+	 * flush_tlb_mm_range(), which is intended for non-global PTEs.
+	 */
+	pgprot = __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_GLOBAL);
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * This must not fail; preallocated in poking_init().
+	 */
+	VM_BUG_ON(!ptep);
+
+	pte = mk_pte(pages[0], pgprot);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], pgprot);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+	if (cross_page_boundary)
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	/*
+	 * Flushing the TLB might involve IPIs, which would require enabled
+	 * IRQs, but not if the mm is not used, as it is in this point.
+	 */
+	flush_tlb_mm_range(poking_mm, poking_addr, poking_addr +
+			   (cross_page_boundary ? 2 : 1) * PAGE_SIZE,
+			   PAGE_SHIFT, false);
+
+	/*
+	 * If the text does not match what we just wrote then something is
+	 * fundamentally screwy; there's nothing we can really do about that.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
+	pte_unmap_unlock(ptep, ptl);
 	local_irq_restore(flags);
 	return addr;
 }
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index a21e1734fc1f..beb44e22afdf 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2318,8 +2318,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 #elif defined(CONFIG_X86_VSYSCALL_EMULATION)
 	case VSYSCALL_PAGE:
 #endif
-	case FIX_TEXT_POKE0:
-	case FIX_TEXT_POKE1:
 		/* All local page mappings */
 		pte = pfn_pte(phys, prot);
 		break;
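
For readers without the rest of the series at hand, the flow the patch
introduces can be summarized in a condensed sketch. This is illustrative
only, not the committed code: it keeps just the single-page path for an
address inside the core kernel image, drops the vmalloc case, the sanity
checks and the KASAN toggling, and assumes the temp_mm_state_t,
use_temporary_mm()/unuse_temporary_mm() helpers and the poking_mm/poking_addr
state introduced earlier in this patch set:

	/* Illustrative sketch of the poking flow, not the committed code. */
	static void *poke_via_temporary_mm(void *addr, const void *opcode, size_t len)
	{
		temp_mm_state_t prev;
		unsigned long flags;
		spinlock_t *ptl;
		pte_t *ptep;

		local_irq_save(flags);

		/*
		 * Map the target page into the dedicated poking_mm, without
		 * _PAGE_GLOBAL so the later flush_tlb_mm_range() covers it.
		 */
		ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
		set_pte_at(poking_mm, poking_addr, ptep,
			   mk_pte(virt_to_page(addr),
				  __pgprot(pgprot_val(PAGE_KERNEL) & ~_PAGE_GLOBAL)));

		/* Switch to the temporary mm, patch the text, switch back. */
		prev = use_temporary_mm(poking_mm);
		memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
		barrier();
		pte_clear(poking_mm, poking_addr, ptep);
		unuse_temporary_mm(prev);

		/* The writable alias only ever lived in poking_mm; flush just that mm. */
		flush_tlb_mm_range(poking_mm, poking_addr, poking_addr + PAGE_SIZE,
				   PAGE_SHIFT, false);

		pte_unmap_unlock(ptep, ptl);
		local_irq_restore(flags);
		return addr;
	}

The key property of the design is visible here: the writable mapping of
kernel text exists only in poking_mm, which is loaded on exactly one CPU
with interrupts disabled, so the window in which the text is writable is
bounded and not visible to other cores, unlike the old global fixmap PTEs.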