From: Mel Gorman <mgorman@suse.de>
To: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
linux-kernel@vger.kernel.org, stable@vger.kernel.org,
torvalds@linux-foundation.org, akpm@linux-foundation.org,
alan@lxorguk.ukuu.org.uk, Miao Xie <miaox@cn.fujitsu.com>,
David Rientjes <rientjes@google.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Christoph Lameter <cl@linux.com>
Subject: Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3
Date: Fri, 27 Jul 2012 16:23:47 +0100
Message-ID: <20120727152347.GG612@suse.de>
In-Reply-To: <20120727150823.GD3033@herton-Z68MA-D2H-B3>
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
> > struct zone *zone;
> > enum zone_type high_zoneidx = gfp_zone(flags);
> > struct page *page;
> > + unsigned int cpuset_mems_cookie;
> >
> > /*
> > * The defrag ratio allows a configuration of the tradeoffs between
> > @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
> > get_cycles() % 1024 > s->remote_node_defrag_ratio)
> > return NULL;
> >
> > - get_mems_allowed();
> > - zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > - struct kmem_cache_node *n;
> > + do {
> > + cpuset_mems_cookie = get_mems_allowed();
> > + zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > + for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > + struct kmem_cache_node *n;
> >
> > - n = get_node(s, zone_to_nid(zone));
> > + n = get_node(s, zone_to_nid(zone));
> >
> > - if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > - n->nr_partial > s->min_partial) {
> > - page = get_partial_node(n);
> > - if (page) {
> > - put_mems_allowed();
> > - return page;
> > + if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > + n->nr_partial > s->min_partial) {
> > + page = get_partial_node(n);
> > + if (page) {
> > + /*
> > + * Return the object even if
> > + * put_mems_allowed indicated that
> > + * the cpuset mems_allowed was
> > + * updated in parallel. It's a
> > + * harmless race between the alloc
> > + * and the cpuset update.
> > + */
> > + put_mems_allowed(cpuset_mems_cookie);
> > + return page;
> > + }
> > }
> > }
> > - }
> > + } while (!put_mems_allowed(cpuset_mems_cookie));
> > put_mems_allowed();
>
> This doesn't build on 3.0; the backport left the stray put_mems_allowed()
> call above in place:
>
> linux-stable/mm/slub.c: In function 'get_any_partial':
> linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> linux-stable/include/linux/cpuset.h:108:20: note: declared here
>
That line should have been deleted; my build tests were based on slab, so
slub.c was never compiled and the error went unnoticed. My apologies.
---8<---
cpuset: mm: Reduce large amounts of memory barrier related damage fix
linux-stable/mm/slub.c: In function 'get_any_partial':
linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
linux-stable/include/linux/cpuset.h:108:20: note: declared here
Reported-by: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
diff --git a/mm/slub.c b/mm/slub.c
index 00ccf2c..ae6e80e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache *s, gfp_t flags)
}
}
} while (!put_mems_allowed(cpuset_mems_cookie));
- put_mems_allowed();
#endif
return NULL;
}