From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Rik van Riel <riel@redhat.com>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Michal Hocko <mhocko@kernel.org>, Linux-MM <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 04/12] mm, page_alloc: Only check cpusets when one exists that can be mem-controlled
Date: Mon, 24 Aug 2015 14:16:16 +0100 [thread overview]
Message-ID: <20150824131616.GK12432@techsingularity.net> (raw)
In-Reply-To: <55DB1015.4080103@suse.cz>
On Mon, Aug 24, 2015 at 02:37:41PM +0200, Vlastimil Babka wrote:
> >
> >+/* Returns true if a cpuset exists that can set cpuset.mems */
> >+static inline bool cpusets_mems_enabled(void)
> >+{
> >+ return nr_cpusets() > 1;
> >+}
> >+
>
> Hm, but this loses the benefits of static key branches?
> How about something like:
>
> if (static_key_false(&cpusets_enabled_key))
>         return nr_cpusets() > 1;
> else
>         return false;
>
Will do.
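
For reference, the combined helper would then look roughly like this (a
sketch following the suggestion above; the final version may differ):

/* Returns true if a cpuset exists that can set cpuset.mems */
static inline bool cpusets_mems_enabled(void)
{
        if (static_key_false(&cpusets_enabled_key))
                return nr_cpusets() > 1;
        return false;
}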
>
>
> > static inline void cpuset_inc(void)
> > {
> > static_key_slow_inc(&cpusets_enabled_key);
> >@@ -104,7 +106,7 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
> > */
> > static inline unsigned int read_mems_allowed_begin(void)
> > {
> >- if (!cpusets_enabled())
> >+ if (!cpusets_mems_enabled())
> > return 0;
> >
> > return read_seqcount_begin(&current->mems_allowed_seq);
> >@@ -118,7 +120,7 @@ static inline unsigned int read_mems_allowed_begin(void)
> > */
> > static inline bool read_mems_allowed_retry(unsigned int seq)
> > {
> >- if (!cpusets_enabled())
> >+ if (!cpusets_mems_enabled())
> > return false;
>
> Actually I doubt it's much of benefit for these usages, even if the static
> key benefits are restored. If there's a single root cpuset, we would check
> the seqlock prior to this patch, now we'll check static key value (which
> should have the same cost?). With >1 cpusets, we would check seqlock prior
> to this patch, now we'll check static key value *and* the seqlock...
>
If cpusets are enabled between the begin and retry checks, it should
still retry. Anyway, special-casing this is overkill. It's a small
micro-optimisation.
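
For context, callers use the cookie roughly like this (a simplified
sketch of the allocator's retry loop; the variables are assumed to be
the ones in the usual allocation path):

        unsigned int cpuset_mems_cookie;
        struct page *page;

        do {
                /* Snapshot mems_allowed, attempt the allocation, and
                 * retry if the mask changed while it was in progress. */
                cpuset_mems_cookie = read_mems_allowed_begin();
                page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
        } while (!page && read_mems_allowed_retry(cpuset_mems_cookie));

so even if a cpuset appears mid-allocation, the retry path catches the
changed mems_allowed.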
> >
> > return read_seqcount_retry(&current->mems_allowed_seq, seq);
> >@@ -139,7 +141,7 @@ static inline void set_mems_allowed(nodemask_t nodemask)
> >
> > #else /* !CONFIG_CPUSETS */
> >
> >-static inline bool cpusets_enabled(void) { return false; }
> >+static inline bool cpusets_mems_enabled(void) { return false; }
> >
> > static inline int cpuset_init(void) { return 0; }
> > static inline void cpuset_init_smp(void) {}
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 62ae28d8ae8d..2c1c3bf54d15 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -2470,7 +2470,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
> > if (IS_ENABLED(CONFIG_NUMA) && zlc_active &&
> > !zlc_zone_worth_trying(zonelist, z, allowednodes))
> > continue;
> >- if (cpusets_enabled() &&
> >+ if (cpusets_mems_enabled() &&
> > (alloc_flags & ALLOC_CPUSET) &&
> > !cpuset_zone_allowed(zone, gfp_mask))
> > continue;
>
> Here the benefits are less clear. I guess cpuset_zone_allowed() is
> potentially costly...
>
> Heck, shouldn't we just start the static key on -1 (if possible), so that
> it's enabled only when there's 2+ cpusets?
It's overkill for the amount of benefit.
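
For completeness, the idea would roughly amount to not bumping the key
for the root cpuset, e.g. (a hypothetical sketch only; the helper name
is made up for illustration and is not part of this series):

/*
 * Hypothetical: only flip the static key once a non-root cpuset is
 * created, so the branch stays off with just the root cpuset present.
 */
static inline void cpuset_inc_nonroot(bool is_root)
{
        if (!is_root)
                static_key_slow_inc(&cpusets_enabled_key);
}

but as above, it is not worth the churn for the amount of benefit.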
--
Mel Gorman
SUSE Labs