From: Alexandru Elisei <alexandru.elisei@arm.com>
To: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev,
maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
yuzenghui@huawei.com, arnd@arndb.de, akpm@linux-foundation.org,
mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
bristot@redhat.com, vschneid@redhat.com, mhiramat@kernel.org,
rppt@kernel.org, hughd@google.com, pcc@google.com,
steven.price@arm.com, vincenzo.frascino@arm.com,
david@redhat.com, eugenis@google.com, kcc@google.com,
hyesoo.yu@samsung.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: Re: [PATCH RFC v3 08/35] mm: cma: Introduce cma_alloc_range()
Date: Tue, 30 Jan 2024 11:35:47 +0000
Message-ID: <ZbjfEzlNgprdxfxX@raptor>
In-Reply-To: <61a3dbb7-25b6-4f49-aa70-9a8aaeb53365@arm.com>

Hi,

On Tue, Jan 30, 2024 at 10:50:00AM +0530, Anshuman Khandual wrote:
>
>
> On 1/25/24 22:12, Alexandru Elisei wrote:
> > Today, cma_alloc() is used to allocate a contiguous memory region. The
> > function allows the caller to specify the number of pages to allocate, but
> > not the starting address. cma_alloc() will walk over the entire CMA region
> > trying to allocate the first available range of the specified size.
> >
> > Introduce cma_alloc_range(), which makes CMA more versatile by allowing the
> > caller to specify a particular range in the CMA region, defined by the
> > start pfn and the size.
> >
> > arm64 will make use of this function when tag storage management is
> > implemented: cma_alloc_range() will be used to reserve the tag storage
> > associated with a tagged page.
>
> Basically, you would like to pass a preferred start address, and the
> allocation would just fail if a contiguous range is not available at
> that starting address?
>
> Then why not just change cma_alloc() to take a new argument, 'start_pfn'?
> Why create a new but almost identical allocator?

I tried doing that, and I gave up because:

- It made cma_alloc() even more complex and harder to follow.
- What value should 'start_pfn' have to tell cma_alloc() to ignore it? Or,
  to put it another way, what pfn number is invalid on **all** platforms
  that Linux supports?

I can give it another go if we can come up with an invalid value for
'start_pfn'; the sketch below shows what the combined interface would
look like.
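
To make the problem concrete, the combined interface would have to look
something like this (hypothetical sketch; CMA_ANY_PFN is an invented name,
not an existing kernel API):

/*
 * Hypothetical: folding the range allocation into cma_alloc() needs a
 * sentinel that means "any pfn". Neither 0 nor ULONG_MAX is guaranteed
 * to be invalid on every platform that Linux supports, which is the
 * problem described above.
 */
#define CMA_ANY_PFN	ULONG_MAX	/* assumed "don't care" value */

extern struct page *cma_alloc(struct cma *cma, unsigned long start_pfn,
			      unsigned long count, unsigned int align,
			      bool no_warn);

/* ...and every existing caller would have to be converted to: */
page = cma_alloc(cma, CMA_ANY_PFN, count, align, false);
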
>
> But then I am wondering why this could not be done in the arm64 platform
> code itself, operating on a CMA area reserved just for tag storage. Unless
> this new allocator has uses beyond MTE, it could be implemented in the
> platform code itself.

I had the same idea in the previous iteration; David Hildenbrand suggested
this approach instead [1].

[1] https://lore.kernel.org/linux-fsdevel/2aafd53f-af1f-45f3-a08c-d11962254315@redhat.com/

Thanks,
Alex

>
> >
> > Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > ---
> >
> > Changes since rfc v2:
> >
> > * New patch.
> >
> > include/linux/cma.h | 2 +
> > include/trace/events/cma.h | 59 ++++++++++++++++++++++++++
> > mm/cma.c | 86 ++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 147 insertions(+)
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > index 63873b93deaa..e32559da6942 100644
> > --- a/include/linux/cma.h
> > +++ b/include/linux/cma.h
> > @@ -50,6 +50,8 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> > struct cma **res_cma);
> > extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
> > bool no_warn);
> > +extern int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count,
> > + unsigned tries, gfp_t gfp);
> > extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
> > extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
> >
> > diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
> > index 25103e67737c..a89af313a572 100644
> > --- a/include/trace/events/cma.h
> > +++ b/include/trace/events/cma.h
> > @@ -36,6 +36,65 @@ TRACE_EVENT(cma_release,
> > __entry->count)
> > );
> >
> > +TRACE_EVENT(cma_alloc_range_start,
> > +
> > + TP_PROTO(const char *name, unsigned long start, unsigned long count,
> > + unsigned tries),
> > +
> > + TP_ARGS(name, start, count, tries),
> > +
> > + TP_STRUCT__entry(
> > + __string(name, name)
> > + __field(unsigned long, start)
> > + __field(unsigned long, count)
> > + __field(unsigned, tries)
> > + ),
> > +
> > + TP_fast_assign(
> > + __assign_str(name, name);
> > + __entry->start = start;
> > + __entry->count = count;
> > + __entry->tries = tries;
> > + ),
> > +
> > + TP_printk("name=%s start=%lx count=%lu tries=%u",
> > + __get_str(name),
> > + __entry->start,
> > + __entry->count,
> > + __entry->tries)
> > +);
> > +
> > +TRACE_EVENT(cma_alloc_range_finish,
> > +
> > + TP_PROTO(const char *name, unsigned long start, unsigned long count,
> > + unsigned attempts, int err),
> > +
> > + TP_ARGS(name, start, count, attempts, err),
> > +
> > + TP_STRUCT__entry(
> > + __string(name, name)
> > + __field(unsigned long, start)
> > + __field(unsigned long, count)
> > + __field(unsigned, attempts)
> > + __field(int, err)
> > + ),
> > +
> > + TP_fast_assign(
> > + __assign_str(name, name);
> > + __entry->start = start;
> > + __entry->count = count;
> > + __entry->attempts = attempts;
> > + __entry->err = err;
> > + ),
> > +
> > + TP_printk("name=%s start=%lx count=%lu attempts=%u err=%d",
> > + __get_str(name),
> > + __entry->start,
> > + __entry->count,
> > + __entry->attempts,
> > + __entry->err)
> > +);
> > +
> > TRACE_EVENT(cma_alloc_start,
> >
> > TP_PROTO(const char *name, unsigned long count, unsigned int align),
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 543bb6b3be8e..4a0f68b9443b 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -416,6 +416,92 @@ static void cma_debug_show_areas(struct cma *cma)
> > static inline void cma_debug_show_areas(struct cma *cma) { }
> > #endif
> >
> > +/**
> > + * cma_alloc_range() - allocate pages in a specific range
> > + * @cma: Contiguous memory region for which the allocation is performed.
> > + * @start: Starting pfn of the allocation.
> > + * @count: Requested number of pages
> > + * @tries: Number of tries if the range is busy
> > + * @gfp: GFP flags passed to alloc_contig_range()
> > + *
> > + * This function allocates part of contiguous memory from a specific contiguous
> > + * memory area, from the specified starting address. The 'start' pfn and the
> > + * 'count' number of pages must be aligned to the CMA bitmap order per bit.
> > + */
> > +int cma_alloc_range(struct cma *cma, unsigned long start, unsigned long count,
> > + unsigned tries, gfp_t gfp)
> > +{
> > + unsigned long bitmap_maxno, bitmap_no, bitmap_start, bitmap_count;
> > + unsigned long i = 0;
> > + struct page *page;
> > + int err = -EINVAL;
> > +
> > + if (!cma || !cma->count || !cma->bitmap)
> > + goto out_stats;
> > +
> > + trace_cma_alloc_range_start(cma->name, start, count, tries);
> > +
> > + if (!count || start < cma->base_pfn ||
> > + start + count > cma->base_pfn + cma->count)
> > + goto out_stats;
> > +
> > + if (!IS_ALIGNED(start | count, 1 << cma->order_per_bit))
> > + goto out_stats;
> > +
> > + bitmap_start = (start - cma->base_pfn) >> cma->order_per_bit;
> > + bitmap_maxno = cma_bitmap_maxno(cma);
> > + bitmap_count = cma_bitmap_pages_to_bits(cma, count);
> > +
> > + spin_lock_irq(&cma->lock);
> > + bitmap_no = bitmap_find_next_zero_area(cma->bitmap, bitmap_maxno,
> > + bitmap_start, bitmap_count, 0);
> > + if (bitmap_no != bitmap_start) {
> > + spin_unlock_irq(&cma->lock);
> > + err = -EEXIST;
> > + goto out_stats;
> > + }
> > + bitmap_set(cma->bitmap, bitmap_start, bitmap_count);
> > + spin_unlock_irq(&cma->lock);
> > +
> > + for (i = 0; i < tries; i++) {
> > + mutex_lock(&cma_mutex);
> > + err = alloc_contig_range(start, start + count, MIGRATE_CMA, gfp);
> > + mutex_unlock(&cma_mutex);
> > +
> > + if (err != -EBUSY)
> > + break;
> > + }
> > +
> > + if (err) {
> > + cma_clear_bitmap(cma, start, count);
> > + } else {
> > + page = pfn_to_page(start);
> > +
> > + /*
> > + * CMA can allocate multiple page blocks, which results in
> > + * different blocks being marked with different tags. Reset the
> > + * tags to ignore those page blocks.
> > + */
> > + for (i = 0; i < count; i++)
> > + page_kasan_tag_reset(nth_page(page, i));
> > + }
> > +
> > +out_stats:
> > +	trace_cma_alloc_range_finish(cma ? cma->name : NULL, start, count, i, err);
> > +
> > + if (err) {
> > + count_vm_events(CMA_ALLOC_FAIL, count);
> > + if (cma)
> > + cma_sysfs_account_fail_pages(cma, count);
> > + } else {
> > + count_vm_events(CMA_ALLOC_SUCCESS, count);
> > + cma_sysfs_account_success_pages(cma, count);
> > + }
> > +
> > + return err;
> > +}
> > +
> > +
> > /**
> > * cma_alloc() - allocate pages from contiguous area
> > * @cma: Contiguous memory region for which the allocation is performed.
>
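
For reference, this is roughly how the arm64 tag storage code mentioned in
the commit message above might end up calling the new function (illustrative
sketch; the caller's name and the retry count are assumptions, not part of
this series):

#include <linux/cma.h>
#include <linux/gfp.h>

/*
 * Reserve the tag storage backing a tagged page. Returns 0 on success,
 * -EEXIST if (part of) the range is already allocated, or another
 * negative errno from alloc_contig_range().
 */
static int reserve_tag_storage_range(struct cma *tag_cma,
				     unsigned long tag_pfn,
				     unsigned long nr_pages)
{
	/* Retry a few times if the pages are only temporarily busy. */
	return cma_alloc_range(tag_cma, tag_pfn, nr_pages, 5, GFP_KERNEL);
}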