From: Baoquan He <bhe@redhat.com>
To: Jaewon Kim <jaewon31.kim@samsung.com>
Cc: vbabka@suse.cz, mgorman@techsingularity.net, minchan@kernel.org,
mgorman@suse.de, hannes@cmpxchg.org, akpm@linux-foundation.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
jaewon31.kim@gmail.com, ytk.lee@samsung.com,
cmlaika.kim@samsung.com
Subject: Re: [PATCH v4] page_alloc: consider highatomic reserve in watermark fast
Date: Fri, 19 Jun 2020 20:42:11 +0800
Message-ID: <20200619124211.GE3346@MiWiFi-R3L-srv>
In-Reply-To: <20200619235958.11283-1-jaewon31.kim@samsung.com>

On 06/20/20 at 08:59am, Jaewon Kim wrote:
...
> kswapd0-1207 [005] ...1 889.213398: mm_page_alloc: page= (null) pfn=0 order=0 migratetype=1 nr_free=3650 gfp_flags=GFP_NOWAIT|__GFP_HIGHMEM|__GFP_NOWARN|__GFP_MOVABLE
>
> Reported-by: Yong-Taek Lee <ytk.lee@samsung.com>
> Suggested-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Baoquan He <bhe@redhat.com>
> ---
> v4: change description only; typo and log
> v3: change log in description to one having reserved_highatomic
> change comment in code
> v2: factor out common part
> v1: consider highatomic reserve
> ---
> mm/page_alloc.c | 66 +++++++++++++++++++++++++++----------------------
> 1 file changed, 36 insertions(+), 30 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 48eb0f1410d4..fe83f88ce188 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3487,6 +3487,29 @@ static noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
> }
> ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE);
>
> +static inline long __zone_watermark_unusable_free(struct zone *z,
> + unsigned int order, unsigned int alloc_flags)
> +{
> + const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
> + long unusable_free = (1 << order) - 1;
> +
> + /*
> + * If the caller does not have rights to ALLOC_HARDER then subtract
> + * the high-atomic reserves. This will over-estimate the size of the
> + * atomic reserve but it avoids a search.
> + */
> + if (likely(!alloc_harder))
> + unusable_free += z->nr_reserved_highatomic;
> +
> +#ifdef CONFIG_CMA
> + /* If allocation can't use CMA areas don't use free CMA pages */
> + if (!(alloc_flags & ALLOC_CMA))
> + unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
> +#endif
> +
> + return unusable_free;
> +}
> +
> /*
> * Return true if free base pages are above 'mark'. For high-order checks it
> * will return true of the order-0 watermark is reached and there is at least
> @@ -3502,19 +3525,12 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
> const bool alloc_harder = (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
>
> /* free_pages may go negative - that's OK */
> - free_pages -= (1 << order) - 1;
> + free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
>
> if (alloc_flags & ALLOC_HIGH)
> min -= min / 2;
>
> - /*
> - * If the caller does not have rights to ALLOC_HARDER then subtract
> - * the high-atomic reserves. This will over-estimate the size of the
> - * atomic reserve but it avoids a search.
> - */
> - if (likely(!alloc_harder)) {
> - free_pages -= z->nr_reserved_highatomic;
> - } else {
> + if (unlikely(alloc_harder)) {
> /*
> * OOM victims can try even harder than normal ALLOC_HARDER
> * users on the grounds that it's definitely going to be in
> @@ -3527,13 +3543,6 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
> min -= min / 4;
> }
>
> -
> -#ifdef CONFIG_CMA
> - /* If allocation can't use CMA areas don't use free CMA pages */
> - if (!(alloc_flags & ALLOC_CMA))
> - free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
> -#endif
> -
> /*
> * Check watermarks for an order-0 allocation request. If these
> * are not met, then a high-order request also cannot go ahead
> @@ -3582,25 +3591,22 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
> unsigned long mark, int highest_zoneidx,
> unsigned int alloc_flags)
> {
> - long free_pages = zone_page_state(z, NR_FREE_PAGES);
> - long cma_pages = 0;
> + long free_pages;
> + long unusable_free;
>
> -#ifdef CONFIG_CMA
> - /* If allocation can't use CMA areas don't use free CMA pages */
> - if (!(alloc_flags & ALLOC_CMA))
> - cma_pages = zone_page_state(z, NR_FREE_CMA_PAGES);
> -#endif
> + free_pages = zone_page_state(z, NR_FREE_PAGES);
> + unusable_free = __zone_watermark_unusable_free(z, order, alloc_flags);
>
> /*
> * Fast check for order-0 only. If this fails then the reserves
> - * need to be calculated. There is a corner case where the check
> - * passes but only the high-order atomic reserve are free. If
> - * the caller is !atomic then it'll uselessly search the free
> - * list. That corner case is then slower but it is harmless.
> + * need to be calculated.
> */
> - if (!order && (free_pages - cma_pages) >
> - mark + z->lowmem_reserve[highest_zoneidx])
> - return true;
> + if (!order) {
> + long fast_free = free_pages - unusable_free;
> +
> + if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
> + return true;
> + }
>
> return __zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
> free_pages);
> --
> 2.17.1
>
>
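For readers skimming the thread, here is a rough userspace model of the
check as it looks after this patch. This is only an illustration: the
struct, field names and numbers below are made up (loosely matching the
nr_free=3650 trace quoted above), not the real kernel structures or
values.

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel types; illustration only. */
	struct zone_model {
		long nr_free_pages;
		long nr_reserved_highatomic;
		long nr_free_cma;
		long lowmem_reserve;
	};

	#define ALLOC_HARDER	0x1
	#define ALLOC_OOM	0x2
	#define ALLOC_CMA	0x4

	/*
	 * Mirrors the idea of __zone_watermark_unusable_free(): pages that
	 * sit on the free lists but must not satisfy this request.
	 */
	static long unusable_free(const struct zone_model *z, unsigned int order,
				  unsigned int alloc_flags)
	{
		long unusable = (1L << order) - 1;

		if (!(alloc_flags & (ALLOC_HARDER | ALLOC_OOM)))
			unusable += z->nr_reserved_highatomic;
		if (!(alloc_flags & ALLOC_CMA))
			unusable += z->nr_free_cma;
		return unusable;
	}

	/*
	 * Order-0 fast path after the patch: the highatomic reserve is now
	 * subtracted here too, so a zone whose free pages are mostly the
	 * reserve no longer passes the fast check only to fail in rmqueue.
	 */
	static bool watermark_fast(const struct zone_model *z, unsigned int order,
				   long mark, unsigned int alloc_flags)
	{
		long fast_free = z->nr_free_pages -
				 unusable_free(z, order, alloc_flags);

		return order == 0 && fast_free > mark + z->lowmem_reserve;
	}

	int main(void)
	{
		/* Made-up numbers: most of the free pages are the reserve. */
		struct zone_model z = {
			.nr_free_pages		= 3650,
			.nr_reserved_highatomic	= 3072,
			.nr_free_cma		= 0,
			.lowmem_reserve		= 0,
		};

		printf("fast check passes: %d\n",
		       watermark_fast(&z, 0, 1500, 0));
		return 0;
	}

The point of the refactor is that the fast order-0 path and
__zone_watermark_ok() now share the same notion of unusable free pages,
so the highatomic reserve can no longer make the fast check pass
spuriously for a non-highatomic allocation.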