From: David Hildenbrand <david@redhat.com>
To: Minchan Kim <minchan@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
hyesoo.yu@samsung.com, willy@infradead.org,
iamjoonsoo.kim@lge.com, vbabka@suse.cz, surenb@google.com,
pullip.cho@samsung.com, joaodias@google.com, hridya@google.com,
sumit.semwal@linaro.org, john.stultz@linaro.org,
Brian.Starkey@arm.com, linux-media@vger.kernel.org,
devicetree@vger.kernel.org, robh@kernel.org,
christian.koenig@amd.com, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH 1/4] mm: introduce cma_alloc_bulk API
Date: Mon, 23 Nov 2020 15:15:37 +0100 [thread overview]
Message-ID: <a2c33b8f-e4fb-1f1c-7ed0-496a1256ea09@redhat.com> (raw)
In-Reply-To: <20201117181935.3613581-2-minchan@kernel.org>
On 17.11.20 19:19, Minchan Kim wrote:
> There is a need for special HW that requires bulk allocation of
> high-order pages: for example, 4800 order-4 pages at a minimum,
> and sometimes more.
>
> To meet the requirement, one option is to reserve a 300M CMA area
> and request the whole 300M as contiguous memory. However, that
> doesn't work if even one page in the range is long-term pinned,
> directly or indirectly. The other option is to repeatedly ask for a
> higher order (e.g., 2M) than the requested order (64K) until the
> driver has gathered the necessary amount of memory. This approach
> makes the allocation very slow because cma_alloc itself is slow, and
> it can get stuck on a single pageblock if it encounters an
> unmigratable page.
>
> To solve the issue, this patch introduces cma_alloc_bulk.
>
> int cma_alloc_bulk(struct cma *cma, unsigned int align,
> gfp_t gfp_mask, unsigned int order, size_t nr_requests,
> struct page **page_array, size_t *nr_allocated);
>
> Most parameters are the same as cma_alloc, but it additionally takes
> an array in which to store the allocated pages. Unlike cma_alloc, it
> skips a pageblock without waiting/stopping if that pageblock contains
> an unmovable page, so the API continues scanning other pageblocks to
> find pages of the requested order.
>
> Unlike cma_alloc, cma_alloc_bulk is a best-effort approach in that it
> skips pageblocks that contain unmovable pages. It doesn't need to be
> perfect from the beginning, at the cost of performance. Thus, the API
> takes a gfp_t so that __GFP_NORETRY can be propagated into
> alloc_contig_page to skip the high-overhead operations that increase
> the CMA allocation success ratio (e.g., migration retries, PCP and
> LRU draining per pageblock), at the cost of a lower success ratio.
> If the caller couldn't allocate enough pages with __GFP_NORETRY, it
> can call the API again without __GFP_NORETRY to increase the success
> ratio, if it is willing to pay the overhead for it.
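As a rough illustration only (not code from this patch; the helper name,
CHUNK_ORDER, and the error handling below are made up), a caller following
the two-pass scheme described above might look roughly like this:

#include <linux/cma.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

#define CHUNK_ORDER	4	/* order-4 (64K) chunks, as in the cover letter */

static int chunk_alloc_two_pass(struct cma *cma, size_t nr_needed,
				struct page **pages)
{
	size_t done = 0, got = 0;

	/* Fast pass: __GFP_NORETRY skips the expensive retry/drain work. */
	cma_alloc_bulk(cma, CHUNK_ORDER, GFP_KERNEL | __GFP_NORETRY,
		       CHUNK_ORDER, nr_needed, pages, &got);
	done += got;
	if (done == nr_needed)
		return 0;

	/*
	 * Slow pass for the remainder: accept the overhead for a
	 * higher success ratio.
	 */
	got = 0;
	cma_alloc_bulk(cma, CHUNK_ORDER, GFP_KERNEL, CHUNK_ORDER,
		       nr_needed - done, pages + done, &got);
	done += got;

	return done == nr_needed ? 0 : -ENOMEM;
}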
I'm not a friend of connecting __GFP_NORETRY to PCP and LRU draining.
Also, gfp flags apply mostly to compaction (e.g., how to allocate free
pages for migration), so this seems a little wrong.
Can we instead introduce
enum alloc_contig_mode {
	/*
	 * Normal mode:
	 *
	 * Retry page migration 5 times, ... TBD
	 *
	 */
	ALLOC_CONTIG_NORMAL = 0,
	/*
	 * Fast mode: e.g., used for bulk allocations.
	 *
	 * Don't retry page migration if it fails, don't drain PCP
	 * lists, don't drain LRU.
	 */
	ALLOC_CONTIG_FAST,
};
To be extended by ALLOC_CONTIG_HARD in the future, to be used e.g., by
virtio-mem (disable PCP, retry a couple more times) ...
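Sketch only (the extra parameter and where the checks would land are
assumptions, not an existing interface): the mode could then be passed
down explicitly instead of being derived from gfp flags, e.g.:

/* Hypothetical: alloc_contig_range() could take the mode explicitly. */
int alloc_contig_range(unsigned long start, unsigned long end,
		       unsigned int migratetype, gfp_t gfp_mask,
		       enum alloc_contig_mode mode);

/* ... and the expensive steps inside would branch on it, e.g.: */
static void drain_for_mode(enum alloc_contig_mode mode, struct zone *zone)
{
	if (mode == ALLOC_CONTIG_FAST)
		return;			/* best effort: skip draining */
	lru_add_drain_all();		/* declared in <linux/swap.h> */
	drain_all_pages(zone);		/* declared in <linux/gfp.h> */
}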
--
Thanks,
David / dhildenb