References: <20240125164256.4147-1-alexandru.elisei@arm.com> <20240125164256.4147-29-alexandru.elisei@arm.com>
In-Reply-To: <20240125164256.4147-29-alexandru.elisei@arm.com>
From: Peter Collingbourne <pcc@google.com>
Date: Thu, 1 Feb 2024 20:02:40 -0800
Subject: Re: [PATCH RFC v3 28/35] arm64: mte: swap: Handle tag restoring when missing tag storage
To: Alexandru Elisei
Cc: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev, maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com, steven.price@arm.com, anshuman.khandual@arm.com, vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com, hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 25, 2024 at 8:45 AM Alexandru Elisei wrote:
>
> Linux restores tags when a page is swapped in and there are tags associated
> with the swap entry which the new page will replace. The saved tags are
> restored even if the page will not be mapped as tagged, to protect against
> cases where the page is shared between different VMAs, and is tagged in
> some, but untagged in others. By using this approach, the process can still
> access the correct tags following an mprotect(PROT_MTE) on the non-MTE
> enabled VMA.
>
> But this poses a challenge for managing tag storage: in the scenario above,
> when a new page is allocated to be swapped in for the process where it will
> be mapped as untagged, the corresponding tag storage block is not reserved.
> mte_restore_page_tags_by_swp_entry(), when it restores the saved tags, will
> overwrite data in the tag storage block associated with the new page,
> leading to data corruption if the block is in use by a process.
>
> Get around this issue by saving the tags in a new xarray, this time indexed
> by the page pfn, and then restoring them when tag storage is reserved for
> the page.
>
> Signed-off-by: Alexandru Elisei
> ---
>
> Changes since rfc v2:
>
> * Restore saved tags **before** setting the PG_tag_storage_reserved bit to
> eliminate a brief window of opportunity where userspace can access uninitialized
> tags (Peter Collingbourne).
>
>  arch/arm64/include/asm/mte_tag_storage.h |   8 ++
>  arch/arm64/include/asm/pgtable.h         |  11 +++
>  arch/arm64/kernel/mte_tag_storage.c      |  12 ++-
>  arch/arm64/mm/mteswap.c                  | 110 +++++++++++++++++++++++
>  4 files changed, 140 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/mte_tag_storage.h b/arch/arm64/include/asm/mte_tag_storage.h
> index 50bdae94cf71..40590a8c3748 100644
> --- a/arch/arm64/include/asm/mte_tag_storage.h
> +++ b/arch/arm64/include/asm/mte_tag_storage.h
> @@ -36,6 +36,14 @@ bool page_is_tag_storage(struct page *page);
>
>  vm_fault_t handle_folio_missing_tag_storage(struct folio *folio, struct vm_fault *vmf,
>                                              bool *map_pte);
> +vm_fault_t mte_try_transfer_swap_tags(swp_entry_t entry, struct page *page);
> +
> +void tags_by_pfn_lock(void);
> +void tags_by_pfn_unlock(void);
> +
> +void *mte_erase_tags_for_pfn(unsigned long pfn);
> +bool mte_save_tags_for_pfn(void *tags, unsigned long pfn);
> +void mte_restore_tags_for_pfn(unsigned long start_pfn, int order);
>  #else
>  static inline bool tag_storage_enabled(void)
>  {
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 0174e292f890..87ae59436162 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1085,6 +1085,17 @@ static inline void arch_swap_invalidate_area(int type)
>         mte_invalidate_tags_area_by_swp_entry(type);
>  }
>
> +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE
> +#define __HAVE_ARCH_SWAP_PREPARE_TO_RESTORE
> +static inline vm_fault_t arch_swap_prepare_to_restore(swp_entry_t entry,
> +                                                      struct folio *folio)
> +{
> +       if (tag_storage_enabled())
> +               return mte_try_transfer_swap_tags(entry, &folio->page);
> +       return 0;
> +}
> +#endif
> +
>  #define __HAVE_ARCH_SWAP_RESTORE
>  static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
>  {
> diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c
> index afe2bb754879..ac7b9c9c585c 100644
> --- a/arch/arm64/kernel/mte_tag_storage.c
> +++ b/arch/arm64/kernel/mte_tag_storage.c
> @@ -567,6 +567,7 @@ int reserve_tag_storage(struct page *page, int order, gfp_t gfp)
>                 }
>         }
>
> +       mte_restore_tags_for_pfn(page_to_pfn(page), order);
>         page_set_tag_storage_reserved(page, order);
>  out_unlock:
>         mutex_unlock(&tag_blocks_lock);
> @@ -595,7 +596,8 @@ void free_tag_storage(struct page *page, int order)
>         struct tag_region *region;
>         unsigned long page_va;
>         unsigned long flags;
> -       int ret;
> +       void *tags;
> +       int i, ret;
>
>         ret = tag_storage_find_block(page, &start_block, &region);
>         if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page)))
> @@ -605,6 +607,14 @@ void free_tag_storage(struct page *page, int order)
>         /* Avoid writeback of dirty tag cache lines corrupting data. */
>         dcache_inval_tags_poc(page_va, page_va + (PAGE_SIZE << order));
>
> +       tags_by_pfn_lock();
> +       for (i = 0; i < (1 << order); i++) {
> +               tags = mte_erase_tags_for_pfn(page_to_pfn(page + i));
> +               if (unlikely(tags))
> +                       mte_free_tag_buf(tags);
> +       }
> +       tags_by_pfn_unlock();
> +
>         end_block = start_block + order_to_num_blocks(order, region->block_size_pages);
>
>         xa_lock_irqsave(&tag_blocks_reserved, flags);
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index 2a43746b803f..e11495fa3c18 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -20,6 +20,112 @@ void mte_free_tag_buf(void *buf)
>         kfree(buf);
>  }
>
> +#ifdef CONFIG_ARM64_MTE_TAG_STORAGE
> +static DEFINE_XARRAY(tags_by_pfn);
> +
> +void tags_by_pfn_lock(void)
> +{
> +       xa_lock(&tags_by_pfn);
> +}
> +
> +void tags_by_pfn_unlock(void)
> +{
> +       xa_unlock(&tags_by_pfn);
> +}
> +
> +void *mte_erase_tags_for_pfn(unsigned long pfn)
> +{
> +       return __xa_erase(&tags_by_pfn, pfn);
> +}
> +
> +bool mte_save_tags_for_pfn(void *tags, unsigned long pfn)
> +{
> +       void *entry;
> +       int ret;
> +
> +       ret = xa_reserve(&tags_by_pfn, pfn, GFP_KERNEL);

copy_highpage can be called from an atomic context, so it isn't
currently valid to pass GFP_KERNEL here. To give one example of a
possible atomic context call, copy_pte_range will take a PTE spinlock
and can call copy_present_pte, which can call copy_present_page, which
will call copy_user_highpage. To give another example,
__buffer_migrate_folio can call spin_lock(&mapping->private_lock), then
call folio_migrate_copy, which will call folio_copy.

Peter

> +       if (ret)
> +               return true;
> +
> +       tags_by_pfn_lock();
> +
> +       if (page_tag_storage_reserved(pfn_to_page(pfn))) {
> +               xa_release(&tags_by_pfn, pfn);
> +               tags_by_pfn_unlock();
> +               return false;
> +       }
> +
> +       entry = __xa_store(&tags_by_pfn, pfn, tags, GFP_ATOMIC);
> +       if (xa_is_err(entry)) {
> +               xa_release(&tags_by_pfn, pfn);
> +               goto out_unlock;
> +       } else if (entry) {
> +               mte_free_tag_buf(entry);
> +       }
> +
> +out_unlock:
> +       tags_by_pfn_unlock();
> +       return true;
> +}
> +
> +void mte_restore_tags_for_pfn(unsigned long start_pfn, int order)
> +{
> +       struct page *page = pfn_to_page(start_pfn);
> +       unsigned long pfn;
> +       void *tags;
> +
> +       tags_by_pfn_lock();
> +
> +       for (pfn = start_pfn; pfn < start_pfn + (1 << order); pfn++, page++) {
> +               tags = mte_erase_tags_for_pfn(pfn);
> +               if (unlikely(tags)) {
> +                       /*
> +                        * Mark the page as tagged so mte_sync_tags() doesn't
> +                        * clear the tags.
> +                        */
> +                       WARN_ON_ONCE(!try_page_mte_tagging(page));
> +                       mte_copy_page_tags_from_buf(page_address(page), tags);
> +                       set_page_mte_tagged(page);
> +                       mte_free_tag_buf(tags);
> +               }
> +       }
> +
> +       tags_by_pfn_unlock();
> +}
> +
> +/*
> + * Note on locking: swap in/out is done with the folio locked, which eliminates
> + * races with mte_save/restore_page_tags_by_swp_entry.
> + */
> +vm_fault_t mte_try_transfer_swap_tags(swp_entry_t entry, struct page *page)
> +{
> +       void *swap_tags, *pfn_tags;
> +       bool saved;
> +
> +       /*
> +        * mte_restore_page_tags_by_swp_entry() will take care of copying the
> +        * tags over.
> +        */
> +       if (likely(page_mte_tagged(page) || page_tag_storage_reserved(page)))
> +               return 0;
> +
> +       swap_tags = xa_load(&tags_by_swp_entry, entry.val);
> +       if (!swap_tags)
> +               return 0;
> +
> +       pfn_tags = mte_allocate_tag_buf();
> +       if (!pfn_tags)
> +               return VM_FAULT_OOM;
> +
> +       memcpy(pfn_tags, swap_tags, MTE_PAGE_TAG_STORAGE_SIZE);
> +       saved = mte_save_tags_for_pfn(pfn_tags, page_to_pfn(page));
> +       if (!saved)
> +               mte_free_tag_buf(pfn_tags);
> +
> +       return 0;
> +}
> +#endif
> +
>  int mte_save_page_tags_by_swp_entry(struct page *page)
>  {
>         void *tags, *ret;
> @@ -54,6 +160,10 @@ void mte_restore_page_tags_by_swp_entry(swp_entry_t entry, struct page *page)
>         if (!tags)
>                 return;
>
> +       /* Tags will be restored when tag storage is reserved. */
> +       if (tag_storage_enabled() && unlikely(!page_tag_storage_reserved(page)))
> +               return;
> +
>         if (try_page_mte_tagging(page)) {
>                 mte_copy_page_tags_from_buf(page_address(page), tags);
>                 set_page_mte_tagged(page);
> --
> 2.43.0
>