From: Vladimir Davydov <vdavydov@parallels.com>
To: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@suse.cz>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm 1/3] slub: don't fail kmem_cache_shrink if slab placement optimization fails
Date: Tue, 27 Jan 2015 15:58:38 +0300
Message-ID: <20150127125838.GD5165@esperanza>
In-Reply-To: <alpine.DEB.2.11.1501261353020.16786@gentwo.org>
On Mon, Jan 26, 2015 at 01:53:32PM -0600, Christoph Lameter wrote:
> On Mon, 26 Jan 2015, Vladimir Davydov wrote:
>
> > We could do that, but IMO that would only complicate the code w/o
> > yielding any real benefits. This function is slow and called rarely
> > anyway, so I don't think there is any point to optimize out a page
> > allocation here.
>
> I think you already have the code there. Simply allow the sizing of the
> empty_page[] array. And rename it.
>
Maybe we could remove this allocation altogether then? I mean, always
distribute the slabs among a constant number of buckets, say 32, like this:
diff --git a/mm/slub.c b/mm/slub.c
index 5ed1a73e2ec8..a43b213770b4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3358,6 +3358,8 @@ void kfree(const void *x)
 }
 EXPORT_SYMBOL(kfree);
 
+#define SHRINK_BUCKETS 32
+
 /*
  * kmem_cache_shrink removes empty slabs from the partial lists and sorts
  * the remaining slabs by the number of items in use. The slabs with the
@@ -3376,19 +3378,15 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 	struct page *page;
 	struct page *t;
 	int objects = oo_objects(s->max);
-	struct list_head *slabs_by_inuse =
-		kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
+	struct list_head slabs_by_inuse[SHRINK_BUCKETS];
 	unsigned long flags;
 
-	if (!slabs_by_inuse)
-		return -ENOMEM;
-
 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n) {
 		if (!n->nr_partial)
 			continue;
 
-		for (i = 0; i < objects; i++)
+		for (i = 0; i < SHRINK_BUCKETS; i++)
 			INIT_LIST_HEAD(slabs_by_inuse + i);
 
 		spin_lock_irqsave(&n->list_lock, flags);
@@ -3400,7 +3398,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * list_lock. page->inuse here is the upper limit.
 		 */
 		list_for_each_entry_safe(page, t, &n->partial, lru) {
-			list_move(&page->lru, slabs_by_inuse + page->inuse);
+			i = DIV_ROUND_UP(page->inuse * (SHRINK_BUCKETS - 1),
+					 objects);
+			list_move(&page->lru, slabs_by_inuse + i);
 			if (!page->inuse)
 				n->nr_partial--;
 		}
@@ -3409,7 +3409,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Rebuild the partial list with the slabs filled up most
 		 * first and the least used slabs at the end.
 		 */
-		for (i = objects - 1; i > 0; i--)
+		for (i = SHRINK_BUCKETS - 1; i > 0; i--)
 			list_splice(slabs_by_inuse + i, n->partial.prev);
 
 		spin_unlock_irqrestore(&n->list_lock, flags);
@@ -3419,7 +3419,6 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			discard_slab(s, page);
 	}
 
-	kfree(slabs_by_inuse);
 	return 0;
 }
 
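For illustration, below is a minimal userspace sketch of the proposed bucket
mapping (not kernel code; the objects-per-slab value is made up). The point
is that only a completely empty slab (page->inuse == 0) lands in bucket 0,
so the final pass that discards empty slabs still sees exactly the
discardable ones, while partially used slabs are spread across the remaining
31 buckets by fill ratio.

/*
 * Minimal userspace sketch of the proposed bucket mapping -- not kernel
 * code.  BUCKETS mirrors SHRINK_BUCKETS from the patch; the objects-per-slab
 * value is a made-up example.
 */
#include <stdio.h>

#define BUCKETS 32

/* Same rounding behaviour as the kernel's DIV_ROUND_UP() macro. */
static int div_round_up(int n, int d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	int objects = 512;	/* hypothetical objects per slab */
	int inuse;

	for (inuse = 0; inuse <= objects; inuse += 64) {
		int i = div_round_up(inuse * (BUCKETS - 1), objects);

		/*
		 * Only inuse == 0 maps to bucket 0; any slab with at least
		 * one object in use is pushed to bucket 1 or higher.
		 */
		printf("inuse = %3d -> bucket %2d\n", inuse, i);
	}
	return 0;
}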
Thread overview: 24+ messages
2015-01-26 12:55 [PATCH -mm 0/3] slub: make dead caches discard free slabs immediately Vladimir Davydov
2015-01-26 12:55 ` [PATCH -mm 1/3] slub: don't fail kmem_cache_shrink if slab placement optimization fails Vladimir Davydov
2015-01-26 15:48 ` Christoph Lameter
2015-01-26 17:01 ` Vladimir Davydov
2015-01-26 18:24 ` Christoph Lameter
2015-01-26 19:36 ` Vladimir Davydov
2015-01-26 19:53 ` Christoph Lameter
2015-01-27 12:58 ` Vladimir Davydov [this message]
2015-01-27 17:02 ` Christoph Lameter
2015-01-28 15:00 ` Vladimir Davydov
2015-01-26 12:55 ` [PATCH -mm 2/3] slab: zap kmem_cache_shrink return value Vladimir Davydov
2015-01-26 15:49 ` Christoph Lameter
2015-01-26 17:04 ` Vladimir Davydov
2015-01-26 18:26 ` Christoph Lameter
2015-01-26 19:48 ` Vladimir Davydov
2015-01-26 19:55 ` Christoph Lameter
2015-01-26 20:16 ` Vladimir Davydov
2015-01-26 20:28 ` Christoph Lameter
2015-01-26 20:43 ` Vladimir Davydov
2015-01-26 12:55 ` [PATCH -mm 3/3] slub: make dead caches discard free slabs immediately Vladimir Davydov
2015-01-27 8:00 ` Joonsoo Kim
2015-01-27 8:23 ` Vladimir Davydov
2015-01-27 9:21 ` Joonsoo Kim
2015-01-27 9:28 ` Vladimir Davydov