From: Catalin Marinas
To: Yang Shi
Cc: will@kernel.org, muchun.song@linux.dev, david@redhat.com,
	akpm@linux-foundation.org, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [v4 PATCH 1/2] hugetlb: arm64: add mte support
Date: Fri, 13 Sep 2024 18:13:35 +0100
References: <20240912204129.1432995-1-yang@os.amperecomputing.com>
In-Reply-To: <20240912204129.1432995-1-yang@os.amperecomputing.com>

On Thu, Sep 12, 2024 at 01:41:28PM -0700, Yang Shi wrote:
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..c8687ccc2633 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -18,17 +18,41 @@ void copy_highpage(struct page *to, struct page *from)
>  {
>  	void *kto = page_address(to);
>  	void *kfrom = page_address(from);
> +	struct folio *src = page_folio(from);
> +	struct folio *dst = page_folio(to);
> +	unsigned int i, nr_pages;
>
>  	copy_page(kto, kfrom);
>
>  	if (kasan_hw_tags_enabled())
>  		page_kasan_tag_reset(to);
>
> -	if (system_supports_mte() && page_mte_tagged(from)) {
> -		/* It's a new page, shouldn't have been tagged yet */
> -		WARN_ON_ONCE(!try_page_mte_tagging(to));
> -		mte_copy_page_tags(kto, kfrom);
> -		set_page_mte_tagged(to);
> +	if (system_supports_mte()) {
> +		if (folio_test_hugetlb(src) &&
> +		    folio_test_hugetlb_mte_tagged(src)) {
> +			if (!try_folio_hugetlb_mte_tagging(dst))
> +				return;
> +
> +			/*
> +			 * Populate tags for all subpages.
> +			 *
> +			 * Don't assume the first page is head page since
> +			 * huge page copy may start from any subpage.
> +			 */
> +			nr_pages = folio_nr_pages(src);
> +			for (i = 0; i < nr_pages; i++) {
> +				kfrom = page_address(folio_page(src, i));
> +				kto = page_address(folio_page(dst, i));
> +				mte_copy_page_tags(kto, kfrom);
> +			}
> +			folio_set_hugetlb_mte_tagged(dst);
> +		} else if (page_mte_tagged(from)) {
> +			/* It's a new page, shouldn't have been tagged yet */
> +			WARN_ON_ONCE(!try_page_mte_tagging(to));
> +
> +			mte_copy_page_tags(kto, kfrom);
> +			set_page_mte_tagged(to);
> +		}
>  	}
>  }

A nitpick here: I don't like that much indentation, so just do an early
return if !system_supports_mte() in this function (roughly as in the
first sketch below). Otherwise the patch looks fine to me.

I agree with David's point on an earlier version of this patch: the
naming of these functions isn't great. So, as per David's suggestion
(at least for the first two):

  folio_test_hugetlb_mte_tagged()
  folio_set_hugetlb_mte_tagged()
  folio_try_hugetlb_mte_tagging()

As for "try" vs "test_and_set_.*_lock", the original name was picked to
mimic spin_trylock(), since this function waits/spins. It's not great,
but the alternative naming is closer to test_and_set_bit_lock(), which
has different behaviour: it only sets a bit with acquire semantics,
with no waiting/spinning (see the second sketch below).

-- 
Catalin
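
For reference, the early-return shape suggested above would look
roughly like this. This is an untested sketch based on the quoted diff,
using the folio_try_hugetlb_mte_tagging() name from the list above
rather than the patch's try_folio_hugetlb_mte_tagging():

	void copy_highpage(struct page *to, struct page *from)
	{
		void *kto = page_address(to);
		void *kfrom = page_address(from);
		struct folio *src = page_folio(from);
		struct folio *dst = page_folio(to);
		unsigned int i, nr_pages;

		copy_page(kto, kfrom);

		if (kasan_hw_tags_enabled())
			page_kasan_tag_reset(to);

		/* Early return instead of indenting everything below */
		if (!system_supports_mte())
			return;

		if (folio_test_hugetlb(src) &&
		    folio_test_hugetlb_mte_tagged(src)) {
			if (!folio_try_hugetlb_mte_tagging(dst))
				return;

			/*
			 * Populate tags for all subpages; the copy may
			 * start from any subpage, not just the head page.
			 */
			nr_pages = folio_nr_pages(src);
			for (i = 0; i < nr_pages; i++) {
				kfrom = page_address(folio_page(src, i));
				kto = page_address(folio_page(dst, i));
				mte_copy_page_tags(kto, kfrom);
			}
			folio_set_hugetlb_mte_tagged(dst);
			return;
		}

		if (page_mte_tagged(from)) {
			/* It's a new page, shouldn't have been tagged yet */
			WARN_ON_ONCE(!try_page_mte_tagging(to));
			mte_copy_page_tags(kto, kfrom);
			set_page_mte_tagged(to);
		}
	}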
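
And to make the "try" vs "test_and_set" distinction concrete: the
spin_trylock()-like helper waits for a racing initialiser to finish
before reporting failure, whereas a test_and_set_bit_lock()-style
helper is a single atomic op with acquire ordering and no spinning.
The first function below paraphrases the current try_page_mte_tagging()
in arch/arm64/include/asm/mte.h; the second is a hypothetical
alternative, with a made-up name, for comparison only:

	/* spin_trylock()-like: if we lose the race, wait for the winner */
	static inline bool try_page_mte_tagging(struct page *page)
	{
		if (!test_and_set_bit(PG_mte_lock, &page->flags))
			return true;

		/*
		 * The tags are either being set or have been set already:
		 * spin until PG_mte_tagged is visible before giving up.
		 */
		smp_cond_load_acquire(&page->flags,
				      VAL & (1UL << PG_mte_tagged));
		return false;
	}

	/*
	 * test_and_set_bit_lock()-like (hypothetical name): one atomic
	 * set with acquire semantics, no waiting/spinning.
	 */
	static inline bool page_mte_test_and_set_tagging(struct page *page)
	{
		return !test_and_set_bit_lock(PG_mte_lock, &page->flags);
	}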