From: Alexandru Elisei <alexandru.elisei@arm.com>
To: Hyesoo Yu <hyesoo.yu@samsung.com>
Cc: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev,
maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org,
mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org,
rppt@kernel.org, hughd@google.com, pcc@google.com,
steven.price@arm.com, anshuman.khandual@arm.com,
vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com,
kcc@google.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: Re: [PATCH RFC v2 16/27] arm64: mte: Manage tag storage on page allocation
Date: Wed, 29 Nov 2023 13:33:37 +0000
Message-ID: <ZWc9sVTCHTBcp2Z2@raptor>
In-Reply-To: <20231129091040.GC2988384@tiffany>
Hi,
On Wed, Nov 29, 2023 at 06:10:40PM +0900, Hyesoo Yu wrote:
> On Sun, Nov 19, 2023 at 04:57:10PM +0000, Alexandru Elisei wrote:
> > [..]
> > +static int order_to_num_blocks(int order)
> > +{
> > + return max((1 << order) / 32, 1);
> > +}
> > [..]
> > +int reserve_tag_storage(struct page *page, int order, gfp_t gfp)
> > +{
> > + unsigned long start_block, end_block;
> > + struct tag_region *region;
> > + unsigned long block;
> > + unsigned long flags;
> > + unsigned int tries;
> > + int ret = 0;
> > +
> > + VM_WARN_ON_ONCE(!preemptible());
> > +
> > + if (page_tag_storage_reserved(page))
> > + return 0;
> > +
> > + /*
> > + * __alloc_contig_migrate_range() ignores gfp when allocating the
> > + * destination page for migration. Regardless, massage gfp flags and
> > + * remove __GFP_TAGGED to avoid recursion in case gfp stops being
> > + * ignored.
> > + */
> > + gfp &= ~__GFP_TAGGED;
> > + if (!(gfp & __GFP_NORETRY))
> > + gfp |= __GFP_RETRY_MAYFAIL;
> > +
> > + ret = tag_storage_find_block(page, &start_block, &region);
> > + if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page)))
> > + return 0;
> > + end_block = start_block + order_to_num_blocks(order) * region->block_size;
> > +
>
> Hello.
>
> If the page size is 4K, the block size is 2 pages (8K in bytes), and the order is 6,
> then we need 2 pages for the tags. However, according to the equation, order_to_num_blocks
> is 2 and block_size is also 2, so end_block will be incremented by 4 pages.
>
> But we actually only need 8K of tag storage for 256K, right?
> Could you explain order_to_num_blocks * region->block_size in more detail?
I think you are correct, thank you for pointing it out. The formula should
probably be something like:
static int order_to_num_blocks(int order, u32 block_size)
{
	int num_tag_pages = max((1 << order) / 32, 1);

	return DIV_ROUND_UP(num_tag_pages, block_size);
}
and that will make end_block = start_block + 2 in your scenario.
Does that look correct to you?
Thanks,
Alex
>
> Thanks,
> Regards.
>
> > + mutex_lock(&tag_blocks_lock);
> > +
> > + /* Check again, this time with the lock held. */
> > + if (page_tag_storage_reserved(page))
> > + goto out_unlock;
> > +
> > + /* Make sure existing entries are not freed from under our feet. */
> > + xa_lock_irqsave(&tag_blocks_reserved, flags);
> > + for (block = start_block; block < end_block; block += region->block_size) {
> > + if (tag_storage_block_is_reserved(block))
> > + block_ref_add(block, region, order);
> > + }
> > + xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +
> > + for (block = start_block; block < end_block; block += region->block_size) {
> > + /* Refcount incremented above. */
> > + if (tag_storage_block_is_reserved(block))
> > + continue;
> > +
> > + tries = 3;
> > + while (tries--) {
> > + ret = alloc_contig_range(block, block + region->block_size, MIGRATE_CMA, gfp);
> > + if (ret == 0 || ret != -EBUSY)
> > + break;
> > + }
> > +
> > + if (ret)
> > + goto out_error;
> > +
> > + ret = tag_storage_reserve_block(block, region, order);
> > + if (ret) {
> > + free_contig_range(block, region->block_size);
> > + goto out_error;
> > + }
> > +
> > + count_vm_events(CMA_ALLOC_SUCCESS, region->block_size);
> > + }
> > +
> > + page_set_tag_storage_reserved(page, order);
> > +out_unlock:
> > + mutex_unlock(&tag_blocks_lock);
> > +
> > + return 0;
> > +
> > +out_error:
> > + xa_lock_irqsave(&tag_blocks_reserved, flags);
> > + for (block = start_block; block < end_block; block += region->block_size) {
> > + if (tag_storage_block_is_reserved(block) &&
> > + block_ref_sub_return(block, region, order) == 1) {
> > + __xa_erase(&tag_blocks_reserved, block);
> > + free_contig_range(block, region->block_size);
> > + }
> > + }
> > + xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +
> > + mutex_unlock(&tag_blocks_lock);
> > +
> > + count_vm_events(CMA_ALLOC_FAIL, region->block_size);
> > +
> > + return ret;
> > +}
> > +
> > +void free_tag_storage(struct page *page, int order)
> > +{
> > + unsigned long block, start_block, end_block;
> > + struct tag_region *region;
> > + unsigned long flags;
> > + int ret;
> > +
> > + ret = tag_storage_find_block(page, &start_block, &region);
> > + if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page)))
> > + return;
> > +
> > + end_block = start_block + order_to_num_blocks(order) * region->block_size;
> > +
> > + xa_lock_irqsave(&tag_blocks_reserved, flags);
> > + for (block = start_block; block < end_block; block += region->block_size) {
> > + if (WARN_ONCE(!tag_storage_block_is_reserved(block),
> > + "Block 0x%lx is not reserved for pfn 0x%lx", block, page_to_pfn(page)))
> > + continue;
> > +
> > + if (block_ref_sub_return(block, region, order) == 1) {
> > + __xa_erase(&tag_blocks_reserved, block);
> > + free_contig_range(block, region->block_size);
> > + }
> > + }
> > + xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +}
> > diff --git a/fs/proc/page.c b/fs/proc/page.c
> > index 195b077c0fac..e7eb584a9234 100644
> > --- a/fs/proc/page.c
> > +++ b/fs/proc/page.c
> > @@ -221,6 +221,7 @@ u64 stable_page_flags(struct page *page)
> > #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> > u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2);
> > u |= kpf_copy_bit(k, KPF_ARCH_3, PG_arch_3);
> > + u |= kpf_copy_bit(k, KPF_ARCH_4, PG_arch_4);
> > #endif
> >
> > return u;
> > diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h
> > index 859f4b0c1b2b..4a0d719ffdd4 100644
> > --- a/include/linux/kernel-page-flags.h
> > +++ b/include/linux/kernel-page-flags.h
> > @@ -19,5 +19,6 @@
> > #define KPF_SOFTDIRTY 40
> > #define KPF_ARCH_2 41
> > #define KPF_ARCH_3 42
> > +#define KPF_ARCH_4 43
> >
> > #endif /* LINUX_KERNEL_PAGE_FLAGS_H */
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index a88e64acebfe..7915165a51bd 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -135,6 +135,7 @@ enum pageflags {
> > #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> > PG_arch_2,
> > PG_arch_3,
> > + PG_arch_4,
> > #endif
> > __NR_PAGEFLAGS,
> >
> > diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> > index 6ca0d5ed46c0..ba962fd10a2c 100644
> > --- a/include/trace/events/mmflags.h
> > +++ b/include/trace/events/mmflags.h
> > @@ -125,7 +125,8 @@ IF_HAVE_PG_HWPOISON(hwpoison) \
> > IF_HAVE_PG_IDLE(idle) \
> > IF_HAVE_PG_IDLE(young) \
> > IF_HAVE_PG_ARCH_X(arch_2) \
> > -IF_HAVE_PG_ARCH_X(arch_3)
> > +IF_HAVE_PG_ARCH_X(arch_3) \
> > +IF_HAVE_PG_ARCH_X(arch_4)
> >
> > #define show_page_flags(flags) \
> > (flags) ? __print_flags(flags, "|", \
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index f31f02472396..9beead961a65 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2474,6 +2474,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
> > #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> > (1L << PG_arch_2) |
> > (1L << PG_arch_3) |
> > + (1L << PG_arch_4) |
> > #endif
> > (1L << PG_dirty) |
> > LRU_GEN_MASK | LRU_REFS_MASK));
> > --
> > 2.42.1
> >
> >