linux-mm.kvack.org archive mirror
From: Dave Hansen <dave.hansen@intel.com>
To: Michal Hocko <mhocko@suse.com>, Feng Tang <feng.tang@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	David Rientjes <rientjes@google.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Andi Kleen <ak@linux.intel.com>,
	"Williams, Dan J" <dan.j.williams@intel.com>
Subject: Re: [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit
Date: Wed, 3 Mar 2021 08:48:58 -0800
Message-ID: <d07f8675-939b-daea-c128-30ceecfac8a0@intel.com>
In-Reply-To: <20210303163141.v5wu2sfo2zj2qqsw@intel.com>

On 3/3/21 8:31 AM, Ben Widawsky wrote:
>> I haven't got to the whole series yet. The real question is whether the
>> first attempt to enforce the preferred mask is a general win. I would
>> argue that it resembles the existing single node preferred memory policy
>> because that one doesn't push heavily on the preferred node either. So
>> dropping just the direct reclaim mode makes some sense to me.
>>
>> IIRC this is something I was recommending in an early proposal of the
>> feature.
> My assumption [FWIW] is that the use cases we've outlined for multi-preferred
> would want heavier pushing on the preference mask.  However, maybe the uapi
> could dictate how hard to try/not try.

There are two things that I think are important:

1. MPOL_PREFERRED_MANY fallback away from the preferred nodes should be
   *temporary*, even in the face of the preferred set being full.  That
   means that _some_ reclaim needs to be done.  Kicking off kswapd is
   fine for this.  (A rough sketch of what I mean is below, after this
   list.)
2. MPOL_PREFERRED_MANY behavior should resemble MPOL_PREFERRED as
   closely as possible.  We're just going to confuse users if they set a
   single node in a MPOL_PREFERRED_MANY mask and get different behavior
   from MPOL_PREFERRED.
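
To be concrete about #1: what I have in mind is roughly the below.
This is only a sketch of the "drop just direct reclaim on the first
pass" idea, not the patch as posted -- the function name is made up
and it is written against the ~5.12-era __alloc_pages_nodemask()
entry point:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/nodemask.h>

/*
 * Sketch only: first try the preferred mask without direct reclaim
 * (kswapd still gets woken), then fall back to all nodes with the
 * caller's original gfp mask.
 */
static struct page *
alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
			   int nid, nodemask_t *prefmask)
{
	struct page *page;
	/*
	 * First pass: stay within the preferred set and wake kswapd,
	 * but do not stall in direct reclaim even if the preferred
	 * nodes are full.
	 */
	gfp_t light = (gfp | __GFP_KSWAPD_RECLAIM) &
		      ~__GFP_DIRECT_RECLAIM;

	page = __alloc_pages_nodemask(light, order, nid, prefmask);
	if (page)
		return page;

	/*
	 * Second pass: temporary fallback to any node with the full
	 * gfp mask, i.e. what MPOL_PREFERRED does once its preferred
	 * node is exhausted.
	 */
	return __alloc_pages_nodemask(gfp, order, nid, NULL);
}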

While it would be nice, short-term, to steer MPOL_PREFERRED_MANY
behavior toward how we expect it to be used first, I think it's a
mistake to do that at the cost of long-term divergence from
MPOL_PREFERRED.
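
On #2, the litmus test I'd apply from userspace is something like the
below (a sketch only -- it assumes the series ends up exposing the new
mode through set_mempolicy(), and the MPOL_PREFERRED_MANY value is a
placeholder since it is not in numaif.h yet; build with -lnuma).  An
application making either of these calls with a single-node mask should
not be able to tell the difference:

#include <numaif.h>
#include <stdio.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* placeholder value for illustration */
#endif

int main(void)
{
	unsigned long mask = 1UL << 0;	/* preferred set = { node 0 } */

	/* Single preferred node, the existing way... */
	if (set_mempolicy(MPOL_PREFERRED, &mask, 8 * sizeof(mask)))
		perror("set_mempolicy(MPOL_PREFERRED)");

	/*
	 * ...and via a one-node MPOL_PREFERRED_MANY mask: the allocator
	 * behavior the task sees should be identical.
	 */
	if (set_mempolicy(MPOL_PREFERRED_MANY, &mask, 8 * sizeof(mask)))
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");

	return 0;
}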


Thread overview: 35+ messages
2021-03-03 10:20 [PATCH v3 00/14] Introduced multi-preference mempolicy Feng Tang
2021-03-03 10:20 ` [PATCH v3 01/14] mm/mempolicy: Add comment for missing LOCAL Feng Tang
2021-03-10  6:27   ` Feng Tang
2021-03-03 10:20 ` [PATCH v3 02/14] mm/mempolicy: convert single preferred_node to full nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 03/14] mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 04/14] mm/mempolicy: allow preferred code to take a nodemask Feng Tang
2021-03-03 10:20 ` [PATCH v3 05/14] mm/mempolicy: refactor rebind code for PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 06/14] mm/mempolicy: kill v.preferred_nodes Feng Tang
2021-03-03 10:20 ` [PATCH v3 07/14] mm/mempolicy: handle MPOL_PREFERRED_MANY like BIND Feng Tang
2021-03-03 10:20 ` [PATCH v3 08/14] mm/mempolicy: Create a page allocator for policy Feng Tang
2021-03-03 10:20 ` [PATCH v3 09/14] mm/mempolicy: Thread allocation for many preferred Feng Tang
2021-03-03 10:20 ` [PATCH v3 10/14] mm/mempolicy: VMA " Feng Tang
2021-03-03 10:20 ` [PATCH v3 11/14] mm/mempolicy: huge-page " Feng Tang
2021-03-03 10:20 ` [PATCH v3 12/14] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY Feng Tang
2021-03-03 10:20 ` [PATCH v3 13/14] mem/mempolicy: unify mpol_new_preferred() and mpol_new_preferred_many() Feng Tang
2021-03-03 10:20 ` [PATCH v3 RFC 14/14] mm: speedup page alloc for MPOL_PREFERRED_MANY by adding a NO_SLOWPATH gfp bit Feng Tang
2021-03-03 11:39   ` Michal Hocko
2021-03-03 12:07     ` Feng Tang
2021-03-03 12:18       ` Feng Tang
2021-03-03 12:32         ` Michal Hocko
2021-03-03 13:18           ` Feng Tang
2021-03-03 13:46             ` Feng Tang
2021-03-03 13:59               ` Michal Hocko
2021-03-03 16:31                 ` Ben Widawsky
2021-03-03 16:48                   ` Dave Hansen [this message]
2021-03-10  5:19                     ` Feng Tang
2021-03-10  9:44                       ` Michal Hocko
2021-03-10 11:49                         ` Feng Tang
2021-03-03 17:14                   ` Michal Hocko
2021-03-03 17:22                     ` Ben Widawsky
2021-03-04  8:14                       ` Feng Tang
2021-03-04 12:59                         ` Michal Hocko
2021-03-05  2:21                           ` Feng Tang
2021-03-04 12:57                       ` Michal Hocko
2021-03-03 13:53             ` Michal Hocko
