Date: Fri, 4 Aug 2023 18:51:57 +0100
From: Catalin Marinas
To: Andrey Konovalov
Cc: Peter Collingbourne, Vincenzo Frascino, Alexander Potapenko,
    Evgenii Stepanov, Florian Mayer, kasan-dev, Linux ARM,
    Willem de Bruijn
Subject: Re: MTE false-positive with shared userspace/kernel mapping

Hi Andrey,

On Thu, Jul 20, 2023 at 08:28:12PM +0200, Andrey Konovalov wrote:
> Syzbot reported an issue originating from the packet sockets code [1],
> but it seems to be an MTE false-positive with a shared
> userspace/kernel mapping.
>
> The problem is that mmap_region calls arch_validate_flags to check
> VM_MTE_ALLOWED only after mapping memory for a non-anonymous mapping
> via call_mmap().

That was on purpose, as we can have some specific mmap implementation
that can set VM_MTE_ALLOWED. We only do this currently for shmem_mmap(),
but I haven't thought of the vm_insert_page() case.
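For reference, the flow in mmap_region() looks roughly like the sketch
below (heavily trimmed and from memory, so not the exact mainline code;
error handling and locking omitted):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mman.h>

/*
 * Trimmed sketch of the mm/mmap.c:mmap_region() ordering, only to show
 * why the flags check is deliberately late: the ->mmap() callback may
 * itself set VM_MTE_ALLOWED.
 */
static int mmap_region_sketch(struct file *file, struct vm_area_struct *vma)
{
	int error;

	if (file) {
		/*
		 * The driver's ->mmap() runs first. shmem_mmap() uses
		 * this to do vm_flags_set(vma, VM_MTE_ALLOWED), i.e.
		 * "this mapping may be mapped PROT_MTE". packet_mmap()
		 * instead calls vm_insert_page() here, which already
		 * writes tagged PTEs (and clears the page's tags) when
		 * VM_MTE is set.
		 */
		error = call_mmap(file, vma);
		if (error)
			return error;
	}

	/*
	 * Only now are the final vm_flags sanity-checked; on arm64 this
	 * rejects VM_MTE without VM_MTE_ALLOWED and the mapping is torn
	 * down, but any tag clearing done above has already happened.
	 */
	if (!arch_validate_flags(vma->vm_flags))
		return -EINVAL;

	return 0;
}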
> What happens in the reproducer [2] is:
>
> 1. Userspace creates a packet socket and makes the kernel allocate the
>    backing memory for a shared mapping via alloc_one_pg_vec_page.
> 2. Userspace calls mmap _with PROT_MTE_ on a packet socket file
>    descriptor.
> 3. mmap code sets VM_MTE via calc_vm_prot_bits(), as PROT_MTE has been
>    provided.
> 4. mmap code calls the packet socket mmap handler packet_mmap via
>    call_mmap() (without checking VM_MTE_ALLOWED at this point).
> 5. Packet socket code uses vm_insert_page to map the memory allocated
>    in step #1 to the userspace area.
> 6. arm64 code resets memory tags for the backing memory via
>    vm_insert_page->...->__set_pte_at->mte_sync_tags(), as the memory
>    is MT_NORMAL_TAGGED due to VM_MTE.
> 7. Only now the mmap code checks VM_MTE_ALLOWED via
>    arch_validate_flags() and unmaps the area, but the memory tags have
>    already been reset.
> 8. The packet socket code accesses the area through its tagged kernel
>    address via __packet_get_status(), which leads to a tag mismatch.

Ah, so we end up rejecting the mmap() eventually, but the damage has
already been done by clearing the tags on the kernel page via a brief
set_pte_at(). I assume the problem only triggers with kasan enabled,
though even without kasan we shouldn't allow a set_pte_at(PROT_MTE) for
a vma that does not allow MTE.

> I'm not sure what would be the best fix here. Moving
> arch_validate_flags() before call_mmap() would be an option, but maybe
> you have a better suggestion.

This would break the shmem case (though I'm not sure who's using that).
Also, since many drivers do vm_flags_set() (unrelated to MTE), it makes
more sense for arch_validate_flags() to happen after call_mmap().

Not ideal, but an easy fix is calling arch_validate_flags() in those
specific mmap functions that call vm_insert_page(), since they create a
mapping before the core code has had a chance to validate the flags.
That is, unless we find a different solution for shmem_mmap() so that
we can move the arch_validate_flags() check earlier.

-- 
Catalin
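P.S. The "easy fix" above would look roughly like the untested sketch
below, keeping packet_mmap()'s argument list (just to illustrate the
idea; the same early check would be needed in any other ->mmap()
implementation that calls vm_insert_page()):

#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/net.h>

/*
 * Untested sketch: validate the vma flags at the start of an mmap
 * handler that uses vm_insert_page(), before any PTEs are written, so
 * no tags are cleared on pages the kernel also accesses via a tagged
 * address.
 */
static int packet_mmap_sketch(struct file *file, struct socket *sock,
			      struct vm_area_struct *vma)
{
	/*
	 * On arm64 with MTE this fails for VM_MTE without VM_MTE_ALLOWED
	 * (i.e. PROT_MTE on the packet ring); other architectures always
	 * return true here.
	 */
	if (!arch_validate_flags(vma->vm_flags))
		return -EINVAL;

	/*
	 * ... the existing packet_mmap() body would follow: size checks,
	 * then vm_insert_page() for each ring page ...
	 */
	return 0;
}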