Subject: Re: [PATCH v2] kasan: test: add memcpy test that avoids out-of-bounds write
To: Peter Collingbourne, Will Deacon, Catalin Marinas, Andrey Konovalov, Marco Elver
Cc: Mark Rutland, Evgenii Stepanov, Alexander Potapenko, Linux ARM, linux-mm@kvack.org
References: <20210910211356.3603758-1-pcc@google.com>
From: Robin Murphy
Message-ID: <622d97a2-c7b7-d46b-dee5-cf8d9fa205da@arm.com>
Date: Mon, 13 Sep 2021 10:42:27 +0100
In-Reply-To: <20210910211356.3603758-1-pcc@google.com>
List-Id: linux-arm-kernel@lists.infradead.org

On 2021-09-10 22:13, Peter Collingbourne wrote:
> With HW tag-based KASAN,
> error checks are performed implicitly by the
> load and store instructions in the memcpy implementation. A failed check
> results in tag checks being disabled and execution will keep going. As a
> result, under HW tag-based KASAN, prior to commit 1b0668be62cf ("kasan:
> test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy
> would end up corrupting memory until it hits an inaccessible page and
> causes a kernel panic.
>
> This is a pre-existing issue that was revealed by commit 285133040e6c
> ("arm64: Import latest memcpy()/memmove() implementation") which changed
> the memcpy implementation from using signed comparisons (incorrectly,
> resulting in the memcpy being terminated early for negative sizes)
> to using unsigned comparisons.
>
> It is unclear how this could be handled by memcpy itself in a reasonable
> way. One possibility would be to add an exception handler that would force
> memcpy to return if a tag check fault is detected -- this would make the
> behavior roughly similar to generic and SW tag-based KASAN. However,
> this wouldn't solve the problem for asynchronous mode and also makes
> memcpy behavior inconsistent with manually copying data.
>
> This test was added as a part of a series that taught KASAN to detect
> negative sizes in memory operations, see commit 8cceeff48f23 ("kasan:
> detect negative size in memory operation function"). Therefore we
> should keep testing for negative sizes with generic and SW tag-based
> KASAN. But there is some value in testing small memcpy overflows, so
> let's add another test with memcpy that does not destabilize the kernel
> by performing out-of-bounds writes, and run it in all modes.

The only thing is, that's nonsense. You can't pass a negative size to
memmove()/memcpy(), any more than you could pass a negative address.
You can use the usual integer conversions to pass a very large size, but
that's no different from just passing a very large size, and the language
does not make any restrictions on the validity of very large sizes. Indeed
in general a 32-bit program could legitimately memcpy() exactly half its
address space to the other half, or memmove() a 3GB buffer a small
distance. I'm not sure what we're trying to enforce there, other than
arbitrary restrictions on how we think it makes sense to call library
functions.

The only way to say that a size is actually invalid is if it leads to an
out-of-bounds access relative to the source or destination buffer, but to
provoke that the given size only ever needs to be at least 1 byte larger
than the object - making it excessively large only generates excessively
large numbers of invalid accesses, and I fail to see what use that has.

By all means introduce KAROHWTIMSTCLFSAN, but I'm not convinced it's
meaningfully within the scope of *address* sanitisation.

Thanks,
Robin.
> Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
> Signed-off-by: Peter Collingbourne
> ---
>  lib/test_kasan.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/lib/test_kasan.c b/lib/test_kasan.c
> index 8835e0784578..aa8e42250219 100644
> --- a/lib/test_kasan.c
> +++ b/lib/test_kasan.c
> @@ -493,7 +493,7 @@ static void kmalloc_oob_in_memset(struct kunit *test)
>  	kfree(ptr);
>  }
>
> -static void kmalloc_memmove_invalid_size(struct kunit *test)
> +static void kmalloc_memmove_negative_size(struct kunit *test)
>  {
>  	char *ptr;
>  	size_t size = 64;
> @@ -515,6 +515,21 @@ static void kmalloc_memmove_invalid_size(struct kunit *test)
>  	kfree(ptr);
>  }
>
> +static void kmalloc_memmove_invalid_size(struct kunit *test)
> +{
> +	char *ptr;
> +	size_t size = 64;
> +	volatile size_t invalid_size = size;
> +
> +	ptr = kmalloc(size, GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +
> +	memset((char *)ptr, 0, 64);
> +	KUNIT_EXPECT_KASAN_FAIL(test,
> +		memmove((char *)ptr, (char *)ptr + 4, invalid_size));
> +	kfree(ptr);
> +}
> +
>  static void kmalloc_uaf(struct kunit *test)
>  {
>  	char *ptr;
> @@ -1129,6 +1144,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>  	KUNIT_CASE(kmalloc_oob_memset_4),
>  	KUNIT_CASE(kmalloc_oob_memset_8),
>  	KUNIT_CASE(kmalloc_oob_memset_16),
> +	KUNIT_CASE(kmalloc_memmove_negative_size),
>  	KUNIT_CASE(kmalloc_memmove_invalid_size),
>  	KUNIT_CASE(kmalloc_uaf),
>  	KUNIT_CASE(kmalloc_uaf_memset),

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel