* [RFC] mempolicy: convert the shared_policy lock to a rwlock
@ 2015-11-12 17:11 Nathan Zimmer
  2015-11-12 21:10 ` David Rientjes
  0 siblings, 1 reply; 7+ messages in thread

From: Nathan Zimmer @ 2015-11-12 17:11 UTC (permalink / raw)
To: Mel Gorman, linux-kernel, linux-mm
Cc: Nathan Zimmer, Andrew Morton, Naoya Horiguchi, Aneesh Kumar K.V

When running the SPECint_rate gcc benchmark on some very large boxes it
was noticed that the system was spending lots of time in
mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
what I mostly used to chase down the issue, since I found the setup for
that easier.

To be clear, the binaries were on tmpfs because of disk I/O requirements.
We then used text replication to avoid icache misses and to keep all the
copies from banging on the memory where the instruction code resides.
This results in us hitting a bottleneck in mpol_shared_policy_lookup,
since lookup is serialised by the shared_policy lock.

I have only reproduced this on very large (3k+ cores) boxes.  The problem
starts showing up at just a few hundred ranks, getting worse until it
threatens to livelock once it gets large enough.  For example, on the
gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
ranks it consumes nearly 13%, and at 2k ranks it is over 90%.

To alleviate the contention in this area I converted the spinlock to a
rwlock, which allows the large number of lookups to happen
simultaneously.  The results were quite good, reducing consumption at
max ranks to around 2%.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
---
 include/linux/mempolicy.h |  2 +-
 mm/mempolicy.c            | 16 ++++++++--------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 3d385c8..2696c1f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -122,7 +122,7 @@ struct sp_node {
 
 struct shared_policy {
 	struct rb_root root;
-	spinlock_t lock;
+	rwlock_t lock;
 };
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 87a1779..ebf82a3 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2211,13 +2211,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
 
 	if (!sp->root.rb_node)
 		return NULL;
-	spin_lock(&sp->lock);
+	read_lock(&sp->lock);
 	sn = sp_lookup(sp, idx, idx+1);
 	if (sn) {
 		mpol_get(sn->policy);
 		pol = sn->policy;
 	}
-	spin_unlock(&sp->lock);
+	read_unlock(&sp->lock);
 	return pol;
 }
 
@@ -2360,7 +2360,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
 	int ret = 0;
 
 restart:
-	spin_lock(&sp->lock);
+	write_lock(&sp->lock);
 	n = sp_lookup(sp, start, end);
 	/* Take care of old policies in the same range.
 */
	while (n && n->start < end) {
@@ -2393,7 +2393,7 @@ restart:
 	}
 	if (new)
 		sp_insert(sp, new);
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = 0;
 
 err_out:
@@ -2405,7 +2405,7 @@ err_out:
 	return ret;
 
 alloc_new:
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = -ENOMEM;
 	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
 	if (!n_new)
@@ -2431,7 +2431,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	int ret;
 
 	sp->root = RB_ROOT;		/* empty tree == default mempolicy */
-	spin_lock_init(&sp->lock);
+	rwlock_init(&sp->lock);
 
 	if (mpol) {
 		struct vm_area_struct pvma;
@@ -2497,14 +2497,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
 
 	if (!p->root.rb_node)
 		return;
-	spin_lock(&p->lock);
+	write_lock(&p->lock);
 	next = rb_first(&p->root);
 	while (next) {
 		n = rb_entry(next, struct sp_node, nd);
 		next = rb_next(&n->nd);
 		sp_delete(p, n);
 	}
-	spin_unlock(&p->lock);
+	write_unlock(&p->lock);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-- 
1.8.2.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org.  For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [RFC] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-12 17:11 [RFC] mempolicy: convert the shared_policy lock to a rwlock Nathan Zimmer
@ 2015-11-12 21:10 ` David Rientjes
  2015-11-17 16:17   ` [PATCH] " Nathan Zimmer
  0 siblings, 1 reply; 7+ messages in thread

From: David Rientjes @ 2015-11-12 21:10 UTC (permalink / raw)
To: Nathan Zimmer
Cc: Mel Gorman, linux-kernel, linux-mm, Andrew Morton, Naoya Horiguchi,
	Aneesh Kumar K.V

On Thu, 12 Nov 2015, Nathan Zimmer wrote:

> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found the setup for
> that easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes.  The problem
> starts showing up at just a few hundred ranks, getting worse until it
> threatens to livelock once it gets large enough.  For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
> ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock, which allows the large number of lookups to happen
> simultaneously.  The results were quite good, reducing consumption at
> max ranks to around 2%.
>

There are a couple of places in the sp_lookup() comment that would need
to be fixed: note that this is no longer a spinlock, and that the caller
must hold the read lock.
The comment for sp_insert() would have to be fixed to specify that the
caller must hold the write lock.

When that's fixed, feel free to add

	Acked-by: David Rientjes <rientjes@google.com>

^ permalink raw reply	[flat|nested] 7+ messages in thread
* [PATCH] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-12 21:10 ` David Rientjes
@ 2015-11-17 16:17 ` Nathan Zimmer
  2015-11-18 13:50   ` Vlastimil Babka
  2015-12-21 13:15   ` Vlastimil Babka
  0 siblings, 2 replies; 7+ messages in thread

From: Nathan Zimmer @ 2015-11-17 16:17 UTC (permalink / raw)
Cc: Nathan Zimmer, Andrew Morton, Nadia Yvette Chambers,
	Naoya Horiguchi, Mel Gorman, Aneesh Kumar K.V, linux-kernel,
	linux-mm

When running the SPECint_rate gcc benchmark on some very large boxes it
was noticed that the system was spending lots of time in
mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
what I mostly used to chase down the issue, since I found the setup for
that easier.

To be clear, the binaries were on tmpfs because of disk I/O requirements.
We then used text replication to avoid icache misses and to keep all the
copies from banging on the memory where the instruction code resides.
This results in us hitting a bottleneck in mpol_shared_policy_lookup,
since lookup is serialised by the shared_policy lock.

I have only reproduced this on very large (3k+ cores) boxes.  The problem
starts showing up at just a few hundred ranks, getting worse until it
threatens to livelock once it gets large enough.  For example, on the
gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
ranks it consumes nearly 13%, and at 2k ranks it is over 90%.

To alleviate the contention in this area I converted the spinlock to a
rwlock, which allows the large number of lookups to happen
simultaneously.  The results were quite good, reducing consumption at
max ranks to around 2%.
Acked-by: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nadia Yvette Chambers <nyc@holomorphy.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
---
 fs/hugetlbfs/inode.c      |  2 +-
 include/linux/mempolicy.h |  2 +-
 mm/mempolicy.c            | 20 ++++++++++----------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316adb9..ab7b155 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -739,7 +739,7 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
 	/*
 	 * The policy is initialized here even if we are creating a
 	 * private inode because initialization simply creates an
-	 * an empty rb tree and calls spin_lock_init(), later when we
+	 * an empty rb tree and calls rwlock_init(), later when we
 	 * call mpol_free_shared_policy() it will just return because
 	 * the rb tree will still be empty.
 	 */
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 3d385c8..2696c1f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -122,7 +122,7 @@ struct sp_node {
 
 struct shared_policy {
 	struct rb_root root;
-	spinlock_t lock;
+	rwlock_t lock;
 };
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 87a1779..197d917 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2142,7 +2142,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
  *
  * Remember policies even when nobody has shared memory mapped.
  * The policies are kept in Red-Black tree linked from the inode.
- * They are protected by the sp->lock spinlock, which should be held
+ * They are protected by the sp->lock rwlock, which should be held
 * for any accesses to the tree.
 */
 
@@ -2179,7 +2179,7 @@ sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
 }
 
 /* Insert a new shared policy into the list. */
-/* Caller holds sp->lock */
+/* Caller holds a write lock on sp->lock */
 static void sp_insert(struct shared_policy *sp, struct sp_node *new)
 {
 	struct rb_node **p = &sp->root.rb_node;
@@ -2211,13 +2211,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
 
 	if (!sp->root.rb_node)
 		return NULL;
-	spin_lock(&sp->lock);
+	read_lock(&sp->lock);
 	sn = sp_lookup(sp, idx, idx+1);
 	if (sn) {
 		mpol_get(sn->policy);
 		pol = sn->policy;
 	}
-	spin_unlock(&sp->lock);
+	read_unlock(&sp->lock);
 	return pol;
 }
 
@@ -2360,7 +2360,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
 	int ret = 0;
 
 restart:
-	spin_lock(&sp->lock);
+	write_lock(&sp->lock);
 	n = sp_lookup(sp, start, end);
 	/* Take care of old policies in the same range. */
 	while (n && n->start < end) {
@@ -2393,7 +2393,7 @@ restart:
 	}
 	if (new)
 		sp_insert(sp, new);
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = 0;
 
 err_out:
@@ -2405,7 +2405,7 @@ err_out:
 	return ret;
 
 alloc_new:
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = -ENOMEM;
 	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
 	if (!n_new)
@@ -2431,7 +2431,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	int ret;
 
 	sp->root = RB_ROOT;		/* empty tree == default mempolicy */
-	spin_lock_init(&sp->lock);
+	rwlock_init(&sp->lock);
 
 	if (mpol) {
 		struct vm_area_struct pvma;
@@ -2497,14 +2497,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
 
 	if (!p->root.rb_node)
 		return;
-	spin_lock(&p->lock);
+	write_lock(&p->lock);
 	next = rb_first(&p->root);
 	while (next) {
 		n = rb_entry(next, struct sp_node, nd);
 		next = rb_next(&n->nd);
 		sp_delete(p, n);
 	}
-	spin_unlock(&p->lock);
+	write_unlock(&p->lock);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
-- 
1.8.2.1
^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-17 16:17 ` [PATCH] " Nathan Zimmer
@ 2015-11-18 13:50 ` Vlastimil Babka
  2015-11-18 20:05   ` Nathan Zimmer
  1 sibling, 1 reply; 7+ messages in thread

From: Vlastimil Babka @ 2015-11-18 13:50 UTC (permalink / raw)
To: Nathan Zimmer
Cc: Andrew Morton, Nadia Yvette Chambers, Naoya Horiguchi, Mel Gorman,
	Aneesh Kumar K.V, linux-kernel, linux-mm

On 11/17/2015 05:17 PM, Nathan Zimmer wrote:
> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found the setup for
> that easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes.  The problem
> starts showing up at just a few hundred ranks, getting worse until it
> threatens to livelock once it gets large enough.  For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
> ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock, which allows the large number of lookups to happen
> simultaneously.  The results were quite good, reducing consumption at
> max ranks to around 2%.

At first glance it seems that RCU would be a good fit here and achieve
even better lookup scalability, have you considered it?
^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-18 13:50 ` Vlastimil Babka
@ 2015-11-18 20:05 ` Nathan Zimmer
  2015-11-19 10:50   ` Vlastimil Babka
  0 siblings, 1 reply; 7+ messages in thread

From: Nathan Zimmer @ 2015-11-18 20:05 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, Nadia Yvette Chambers, Naoya Horiguchi, Mel Gorman,
	Aneesh Kumar K.V, linux-kernel, linux-mm

On 11/18/2015 07:50 AM, Vlastimil Babka wrote:
> On 11/17/2015 05:17 PM, Nathan Zimmer wrote:
>> When running the SPECint_rate gcc benchmark on some very large boxes it
>> was noticed that the system was spending lots of time in
>> mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
>> what I mostly used to chase down the issue, since I found the setup for
>> that easier.
>>
>> To be clear, the binaries were on tmpfs because of disk I/O requirements.
>> We then used text replication to avoid icache misses and to keep all the
>> copies from banging on the memory where the instruction code resides.
>> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
>> since lookup is serialised by the shared_policy lock.
>>
>> I have only reproduced this on very large (3k+ cores) boxes.  The problem
>> starts showing up at just a few hundred ranks, getting worse until it
>> threatens to livelock once it gets large enough.  For example, on the
>> gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
>> ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>>
>> To alleviate the contention in this area I converted the spinlock to a
>> rwlock, which allows the large number of lookups to happen
>> simultaneously.  The results were quite good, reducing consumption at
>> max ranks to around 2%.
> At first glance it seems that RCU would be a good fit here and achieve
> even better lookup scalability, have you considered it?
>

Originally that was my plan, but when I saw how good the results were
with the rwlock I chickened out and took the less mistake-prone route.

I should also note that the 2% of time left in system is not from this
lookup but from another area.

Nate

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-18 20:05 ` Nathan Zimmer
@ 2015-11-19 10:50 ` Vlastimil Babka
  0 siblings, 0 replies; 7+ messages in thread

From: Vlastimil Babka @ 2015-11-19 10:50 UTC (permalink / raw)
To: Nathan Zimmer
Cc: Andrew Morton, Nadia Yvette Chambers, Naoya Horiguchi, Mel Gorman,
	Aneesh Kumar K.V, linux-kernel, linux-mm

On 11/18/2015 09:05 PM, Nathan Zimmer wrote:
>
> On 11/18/2015 07:50 AM, Vlastimil Babka wrote:
>> At first glance it seems that RCU would be a good fit here and achieve
>> even better lookup scalability, have you considered it?
>
> Originally that was my plan, but when I saw how good the results were
> with the rwlock I chickened out and took the less mistake-prone route.
>
> I should also note that the 2% of time left in system is not from this
> lookup but from another area.

Ah, I see, thanks!

Vlastimil

> Nate

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock
  2015-11-17 16:17 ` [PATCH] " Nathan Zimmer
  2015-11-18 13:50   ` Vlastimil Babka
@ 2015-12-21 13:15 ` Vlastimil Babka
  1 sibling, 0 replies; 7+ messages in thread

From: Vlastimil Babka @ 2015-12-21 13:15 UTC (permalink / raw)
To: Nathan Zimmer
Cc: Andrew Morton, Nadia Yvette Chambers, Naoya Horiguchi, Mel Gorman,
	Aneesh Kumar K.V, linux-kernel, linux-mm

On 11/17/2015 05:17 PM, Nathan Zimmer wrote:
> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup.  The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found the setup for
> that easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes.  The problem
> starts showing up at just a few hundred ranks, getting worse until it
> threatens to livelock once it gets large enough.  For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at 512
> ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock, which allows the large number of lookups to happen
> simultaneously.  The results were quite good, reducing consumption at
> max ranks to around 2%.
>
> Acked-by: David Rientjes <rientjes@google.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply	[flat|nested] 7+ messages in thread
end of thread, other threads:[~2015-12-21 13:15 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2015-11-12 17:11 [RFC] mempolicy: convert the shared_policy lock to a rwlock Nathan Zimmer
2015-11-12 21:10 ` David Rientjes
2015-11-17 16:17   ` [PATCH] " Nathan Zimmer
2015-11-18 13:50     ` Vlastimil Babka
2015-11-18 20:05       ` Nathan Zimmer
2015-11-19 10:50         ` Vlastimil Babka
2015-12-21 13:15     ` Vlastimil Babka