From: Joonsoo Kim <js1304@gmail.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Hellwig <hch@infradead.org>,
	Roman Gushchin <guro@fb.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Subject: Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware
Date: Wed, 8 Jul 2020 16:12:40 +0900	[thread overview]
Message-ID: <20200708071224.GA16543@js1304-desktop> (raw)
In-Reply-To: <20200707113116.GH5913@dhcp22.suse.cz>

On Tue, Jul 07, 2020 at 01:31:16PM +0200, Michal Hocko wrote:
> On Tue 07-07-20 16:44:42, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > 
> > new_non_cma_page() in gup.c, which tries to allocate a migration target
> > page, needs to allocate a new page that is not on the CMA area.
> > new_non_cma_page() implements this by removing the __GFP_MOVABLE flag.
> > This works well for THP and normal pages, but not for hugetlb pages.
> > 
> > The hugetlb page allocation process consists of two steps.  The first is
> > dequeueing from the pool.  The second, if there is no available page in
> > the pool, is allocating from the page allocator.
> > 
> > new_non_cma_page() can control allocation from the page allocator by
> > specifying the correct gfp flags.  However, dequeueing could not be
> > controlled until now, so new_non_cma_page() skips dequeueing completely.
> > This is suboptimal, since new_non_cma_page() cannot utilize hugetlb pages
> > already in the pool, and this patch tries to fix that.
> > 
> > This patch makes the hugetlb dequeue function CMA aware: it skips CMA
> > pages when the newly added skip_cma argument is passed as true.
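
(For context, the two steps above live in alloc_huge_page_nodemask(); with
locking omitted it is roughly:

	if (h->free_huge_pages - h->resv_huge_pages > 0) {
		page = dequeue_huge_page_nodemask(h, gfp_mask,
						  preferred_nid, nmask);
		if (page)
			return page;
	}

	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);

and the dequeue step currently has no way to avoid CMA pages.)
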
> 
> I really dislike this as already mentioned in the previous version of
> the patch. You are making hugetlb and only one part of its allocator a
> special snowflake which is almost always a bad idea. Your changelog
> lacks any real justification for this inconsistency.
> 
> Also by doing so you are keeping an existing bug that the hugetlb
> allocator doesn't respect scope gfp flags as I have mentioned when
> reviewing the previous version. That bug likely doesn't matter now but
> it might in future and as soon as it is fixed all this is just a
> pointless exercise.
> 
> I do not have the energy and time to burn repeating that argumentation, so
> I will let Mike have the final word. Btw. you are keeping his acks even
> after considerable changes to the patches, which I am not really sure he
> is ok with.

As you replied a minute ago, Mike acked.

> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> To this particular patch.
> [...]
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 5daadae..2c3dab4 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1630,11 +1630,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
> >  #ifdef CONFIG_HUGETLB_PAGE
> >  	if (PageHuge(page)) {
> >  		struct hstate *h = page_hstate(page);
> > +
> >  		/*
> >  		 * We don't want to dequeue from the pool because pool pages will
> >  		 * mostly be from the CMA region.
> >  		 */
> > -		return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
> > +		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask, true);
> 
> Let me repeat that this whole thing is running under
> memalloc_nocma_save. So the additional parameter is bogus.

As Vlastimil said in his other reply, we are not under
memalloc_nocma_save(). Anyway, I now also think that the additional
parameter isn't needed.
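
With the parameter gone, the hugetlb branch in new_non_cma_page() would go
back to something like this (rough sketch, untested):

#ifdef CONFIG_HUGETLB_PAGE
	if (PageHuge(page)) {
		struct hstate *h = page_hstate(page);

		/*
		 * CMA pages are filtered out in the dequeue path, so
		 * using the hugetlb pool here is fine.
		 */
		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
	}
#endif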

> [...]
> > -static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
> > +static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid, bool skip_cma)
> 
> If you really insist on an additional parameter at this layer than it
> should be checking for the PF_MEMALLOC_NOCMA instead.

I will change the patch to check PF_MEMALLOC_NOCMA instead of
introducing and checking skip_cma.
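
Something along these lines for the dequeue side (untested sketch, helper
names as in the current tree):

static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
{
	struct page *page;
	bool nocma = !!(current->flags & PF_MEMALLOC_NOCMA);

	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
		/* skip CMA pages for callers that must not get CMA memory */
		if (nocma && is_migrate_cma_page(page))
			continue;

		if (PageHWPoison(page))
			continue;

		list_move(&page->lru, &h->hugepage_activelist);
		set_page_refcounted(page);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		return page;
	}

	return NULL;
}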

> [...]
> > @@ -1971,21 +1977,29 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
> >  
> >  /* page migration callback function */
> >  struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
> > -		nodemask_t *nmask, gfp_t gfp_mask)
> > +		nodemask_t *nmask, gfp_t gfp_mask, bool skip_cma)
> >  {
> > +	unsigned int flags = 0;
> > +	struct page *page = NULL;
> > +
> > +	if (skip_cma)
> > +		flags = memalloc_nocma_save();
> 
> This is pointless for a scope that is already defined up in the call
> chain and fundamentally this is breaking the expected use of the scope
> API. The primary reason for that API to exist is to define the scope and
> have it sticky for _all_ allocation contexts. So if you have to use it
> deep in the allocator then you are doing something wrong.

As mentioned above, we are not under memalloc_nocma_save(). Anyway, I
will rework the patch and attach it to Vlastimil's reply. It would be
appreciated if you could check it again.
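
For reference, the direction of the rework is to keep the nocma scope in the
gup path so that it also covers the migration target allocation, roughly
(rough sketch against the current __gup_longterm_locked(), untested):

	if (gup_flags & FOLL_LONGTERM)
		flags = memalloc_nocma_save();

	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
				     vmas_tmp, NULL, gup_flags);

	if (gup_flags & FOLL_LONGTERM) {
		if (rc > 0)
			rc = check_and_migrate_cma_pages(tsk, mm, start, rc,
							 pages, vmas_tmp,
							 gup_flags);
		memalloc_nocma_restore(flags);
	}

With that, new_non_cma_page() and the hugetlb dequeue both run inside the
scope and no extra argument is needed.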

Thanks.


Thread overview: 47+ messages
2020-07-07  7:44 [PATCH v4 00/11] clean-up the migration target allocation functions js1304
2020-07-07  7:44 ` [PATCH v4 01/11] mm/page_isolation: prefer the node of the source page js1304
2020-07-07  7:44 ` [PATCH v4 02/11] mm/migrate: move migration helper from .h to .c js1304
2020-07-07  7:44 ` [PATCH v4 03/11] mm/hugetlb: unify migration callbacks js1304
2020-07-07 11:05   ` Vlastimil Babka
2020-07-07 11:19   ` Michal Hocko
2020-07-07  7:44 ` [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware js1304
2020-07-07 11:22   ` Vlastimil Babka
2020-07-08  7:16     ` Joonsoo Kim
2020-07-08  7:41       ` Michal Hocko
2020-07-08  9:26         ` Vlastimil Babka
2020-07-08 10:57           ` Aneesh Kumar K.V
2020-07-08 11:32             ` Michal Hocko
2020-07-09  6:43         ` Michal Hocko
2020-07-09  7:03           ` Joonsoo Kim
2020-07-09  0:27       ` Mike Kravetz
2020-07-07 11:31   ` Michal Hocko
2020-07-08  6:48     ` Michal Hocko
2020-07-08  7:12     ` Joonsoo Kim [this message]
2020-07-07  7:44 ` [PATCH v4 05/11] mm/migrate: clear __GFP_RECLAIM for THP allocation for migration js1304
2020-07-07 11:40   ` Michal Hocko
2020-07-08  7:19     ` Joonsoo Kim
2020-07-08  7:48       ` Michal Hocko
2020-07-09  3:26         ` Joonsoo Kim
2020-07-07 12:17   ` Vlastimil Babka
2020-07-08  7:17     ` Joonsoo Kim
2020-07-09  7:17     ` Joonsoo Kim
2020-07-07  7:44 ` [PATCH v4 06/11] mm/migrate: make a standard migration target allocation function js1304
2020-07-07 11:43   ` Michal Hocko
2020-07-07 14:49   ` Vlastimil Babka
2020-07-07 19:00     ` Michal Hocko
2020-07-09  7:15       ` Joonsoo Kim
2020-07-09 10:28         ` Michal Hocko
2020-07-07  7:44 ` [PATCH v4 07/11] mm/gup: use a standard migration target allocation callback js1304
2020-07-07 11:46   ` Michal Hocko
2020-07-08  7:21     ` Joonsoo Kim
2020-07-07  7:44 ` [PATCH v4 08/11] mm/mempolicy: " js1304
2020-07-07  7:44 ` [PATCH v4 09/11] mm/page_alloc: remove a wrapper for alloc_migration_target() js1304
2020-07-07  7:44 ` [PATCH v4 10/11] mm/memory-failure: " js1304
2020-07-07 11:48   ` Michal Hocko
2020-07-07 15:03     ` Vlastimil Babka
2020-07-07 18:55       ` Michal Hocko
2020-07-07 15:00   ` Vlastimil Babka
2020-07-07  7:44 ` [PATCH v4 11/11] mm/memory_hotplug: " js1304
2020-07-07 11:52   ` Michal Hocko
2020-07-07 15:09   ` Vlastimil Babka
2020-07-09  3:25     ` Joonsoo Kim
