From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] slub: correct to calculate num of acquired objects in get_partial_node()
Date: Wed, 16 Jan 2013 17:41:14 +0900
Message-ID: <20130116084114.GA13446@lge.com>
In-Reply-To: <0000013c3ee3b69a-80cfdc68-a753-44e0-ba68-511060864128-000000@email.amazonses.com>
On Tue, Jan 15, 2013 at 03:46:17PM +0000, Christoph Lameter wrote:
> On Tue, 15 Jan 2013, Joonsoo Kim wrote:
>
> > There is a subtle bug when calculating the number of acquired objects.
> > After acquire_slab() is executed for the first time, page->inuse equals
> > page->objects, so available is always 0 and we always continue to the
> > next iteration.
>
> page->inuse is always < page->objects because the partial list is not used
> for slabs that are fully allocated. page->inuse == page->objects means
> that no objects are available on the slab and therefore the slab would
> have been removed from the partial list.
Currently, we calculate "available = page->objects - page->inuse" after
acquire_slab() is called in get_partial_node(). But acquire_slab() with
mode = 1 always sets new.inuse = page->objects, so by the time we read
page->inuse it already equals page->objects. That is, in
get_partial_node() we have:
        t = acquire_slab(s, n, page, object == NULL);
        if (!t)
                break;

        if (!object) {
                c->page = page;
                stat(s, ALLOC_FROM_PARTIAL);
                object = t;
                available = page->objects - page->inuse;
                /* !!! available is always 0 here !!! */
        } else {
                available = put_cpu_partial(s, page, 0);
                stat(s, CPU_PARTIAL_NODE);
        }
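To see the ordering problem in isolation, here is a stand-alone model
(illustration only; struct page_model and acquire() are made-up names,
not kernel code):

        /*
         * acquire() with mode = 1 marks every object in use, just as
         * acquire_slab() sets new.inuse = page->objects.  The caller
         * then computes "available" from the already-updated counters
         * and gets 0, no matter how many objects were really acquired.
         */
        #include <stdio.h>

        struct page_model {
                int objects;    /* total objects in the slab */
                int inuse;      /* objects currently allocated */
        };

        static void acquire(struct page_model *page, int mode)
        {
                if (mode)
                        page->inuse = page->objects;    /* take the whole slab */
        }

        int main(void)
        {
                struct page_model page = { .objects = 16, .inuse = 5 };

                acquire(&page, 1);
                /* 11 objects were actually acquired, but this prints 0. */
                printf("available = %d\n", page.objects - page.inuse);
                return 0;
        }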
Therefore, "available > s->cpu_partial / 2" is always false after the
first acquisition, and we always continue to the next iteration no
matter how many objects we actually acquired. This patch corrects the
problem.
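The idea of the fix, roughly (sketched here; the actual diff may differ
in detail), is to make acquire_slab() report the number of acquired
objects through an out parameter, computed before new.inuse is
overwritten, and to accumulate that count in get_partial_node():

        /* In acquire_slab(), before new.inuse is overwritten: */
        *objects = new.objects - new.inuse;

        /* In get_partial_node(); available starts at 0 and
         * accumulates across the partial-list loop: */
        t = acquire_slab(s, n, page, object == NULL, &objects);
        if (!t)
                break;

        available += objects;

With that, put_cpu_partial() no longer needs to return a count, which
is why its return value can be dropped.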
> > After that, we don't need the return value of put_cpu_partial(),
> > so remove it.
>
> Hmmm... The code looks a bit easier to understand than what we have right now.
>
> Could you try to explain it better?