Date: Thu, 1 Oct 2020 18:57:45 +0100
From: Mark Rutland
To: Alexander Potapenko
Cc: Hillf Danton, "open list:DOCUMENTATION", Peter Zijlstra,
 Catalin Marinas, Dave Hansen, Linux Memory Management List,
 Eric Dumazet, "H. Peter Anvin", Christoph Lameter, Will Deacon,
 SeongJae Park, Jonathan Corbet, the arch/x86 maintainers, kasan-dev,
 Ingo Molnar, Vlastimil Babka, David Rientjes, Andrey Ryabinin,
 Marco Elver, Kees Cook, "Paul E. McKenney", Jann Horn,
 Andrey Konovalov, Borislav Petkov, Andy Lutomirski, Jonathan Cameron,
 Thomas Gleixner, Joonsoo Kim, Dmitriy Vyukov, Linux ARM,
 Greg Kroah-Hartman, LKML, Pekka Enberg, Andrew Morton
Subject: Re: [PATCH v3 03/10] arm64, kfence: enable KFENCE for ARM64
Message-ID: <20201001175716.GA89689@C02TD0UTHF1T.local>
References: <20200921132611.1700350-1-elver@google.com>
 <20200921132611.1700350-4-elver@google.com>
 <20200921143059.GO2139@willie-the-truck>
 <20200929140226.GB53442@C02TD0UTHF1T.local>

On Thu, Oct 01, 2020 at 01:24:49PM +0200, Alexander Potapenko wrote:
> Mark,
>
> > If you need virt_to_page() to work, the address has to be part of the
> > linear/direct map.
> >
> > If you need to find the struct page for something that's part of the
> > kernel image you can use virt_to_page(lm_alias(x)).
> >
> > > Looks like filling page table entries (similarly to what's being done
> > > in arch/arm64/mm/kasan_init.c) is not enough.
> > > I thought maybe vmemmap_populate() would do the job, but it didn't
> > > (virt_to_pfn() still returns invalid PFNs).
> >
> > As above, I think lm_alias() will solve the problem here. Please see
> > that and CONFIG_DEBUG_VIRTUAL.
>
> The approach you suggest works to some extent, but there are some caveats.
>
> To reiterate, we are trying to allocate the pool (2 MB by default, but
> users may want a bigger one, up to, say, 64 MB) in a way that:
> (1) the underlying page tables support 4K granularity;
> (2) is_kfence_address() (which checks that __kfence_pool <= addr <=
>     __kfence_pool + KFENCE_POOL_SIZE) does not reference memory.

What's the underlying requirement here? Is this a performance concern,
codegen/codesize, or something else?
> (3) For addresses belonging to that pool virt_addr_valid() is true
>     (SLAB/SLUB rely on that).

As I hinted at before, there's a reasonable amount of code which relies
on being able to round-trip convert (va->{pa,page}->va) allocations from
SLUB, e.g. page = virt_to_page(addr); ...; addr = page_to_virt(page).
Usually this is because the phys addr is stored in some HW register, or
in an in-memory structure shared with HW.

I'm fairly certain KFENCE will need to support this in order to be
deployable in production, and arm64 is the canary in the coal mine.

I added tests for this back when tag-based KASAN broke this property.
See commit:

  b92a953cb7f727c4 ("lib/test_kasan.c: add roundtrip tests")

... for which IIUC the kfree_via_phys() test would be broken by KFENCE,
even on x86:

| static noinline void __init kfree_via_phys(void)
| {
| 	char *ptr;
| 	size_t size = 8;
| 	phys_addr_t phys;
|
| 	pr_info("invalid-free false positive (via phys)\n");
| 	ptr = kmalloc(size, GFP_KERNEL);
| 	if (!ptr) {
| 		pr_err("Allocation failed\n");
| 		return;
| 	}
|
| 	phys = virt_to_phys(ptr);
| 	kfree(phys_to_virt(phys));
| }

... since the code will pass the linear map alias of the KFENCE VA into
kfree().

To avoid random breakage we either need to:

* Have KFENCE retain this property (which effectively requires
  allocation VAs to fall within the linear/direct map)

* Decide that round-trips are forbidden, and go modify that code
  somehow, which was deemed to be impractical in the past

... and I would strongly prefer the former as it's less liable to break
any existing code.

> On x86 we achieve (2) by making our pool a .bss array, so that its
> address is known statically. Aligning that array on 4K and calling
> set_memory_4k() ensures that (1) is also fulfilled. (3) seems to just
> happen automagically without any address translations.
>
> Now, what we are seeing on arm64 is different.
> My understanding (please correct me if I'm wrong) is that on arm64
> only the memory range at 0xffff000000000000 has valid struct pages,
> and the size of that range depends on the amount of memory on the
> system.

The way virt_to_page() works is based on there being a constant (at
runtime) offset between a linear map address and the corresponding
physical page. That makes it easy to get the PA with a subtraction, then
the PFN with a shift, then to index the vmemmap array with that to get
the page.

The x86 version of virt_to_page() automatically fixes up an image
address to its linear map alias internally.

> This probably means we cannot just pick a fixed address for our pool
> in that range, unless it is very close to 0xffff000000000000.

It would have to be part of the linear map, or we'd have to apply the
same fixup as x86 does. But as above, I'm reluctant to do that as it
only encourages writing fragile code. The only sensible way to detect
that is to disallow virt_to_*() on image addresses, since that's the
only time we can distinguish the source.

> If we allocate the pool statically in the way x86 does (assuming we
> somehow resolve (1)), we can apply lm_alias() to addresses returned by
> the KFENCE allocator, making kmalloc() always return addresses from
> the linear map and satisfying (3).
> But in that case is_kfence_address() will also need to be updated to
> compare the addresses to lm_alias(__kfence_pool), and this becomes
> more heavyweight than just reading the address from memory.

We can calculate lm_alias(__kfence_pool) at boot time, so it's only a
read from memory in the fast path.

> So it looks like it's still preferable to allocate the pool
> dynamically on arm64, unless there's a clever trick to allocate a
> fixed address in the linear map (DTS maybe?)

I'm not too worried about allocating this dynamically, but:

* The arch code needs to set up the translation tables for this, as we
  cannot safely change the mapping granularity live.
* As above, I'm fairly certain x86 needs to use a carve-out from the
  linear map to function correctly anyhow, so we should follow the same
  approach for both arm64 and x86. That might be a static carve-out
  whose aliasing we figure out, or something entirely dynamic.

Thanks,
Mark.