From: "Martin J. Bligh" <mbligh@mbligh.org>
To: Rohit Seth <rohit.seth@intel.com>
Cc: Andrew Morton <akpm@osdl.org>, Mattia Dongili <malattia@linux.it>,
linux-kernel@vger.kernel.org
Subject: Re: 2.6.14-rc2-mm1
Date: Tue, 27 Sep 2005 14:59:45 -0700
Message-ID: <985130000.1127858385@flay>
In-Reply-To: <1127857919.7258.13.camel@akash.sc.intel.com>
>> I must be being particularly dense today ... but:
>>
>>      pcp->high = batch / 2;
>>
>> Looks like half the batch size to me, not the same?
>
> pcp->batch = max(1UL, batch/2); is the line of code that is setting the
> batch value for the cold pcp list. batch is just a number that we
> computed earlier from some parameters.
Ah, OK, so I am being dense. Fair enough. But if there's a reason to do
that max, perhaps:

        pcp->batch = max(1UL, batch/2);
        pcp->high = pcp->batch;

would be more appropriate? The tradeoff is more frequent drain / fill
against better fragmentation, I suppose (at least if we don't refill
using higher-order allocs ;-)), which seems fair enough.
>> > In general, I think if a specific higher order ( > 0) request fails that
>> > has GFP_KERNEL set then at least we should drain the pcps.
>>
>> Mmmm ... so every time we fork a process with 8K stacks, or allocate a
>> frame for jumbo ethernet, or NFS, you want to drain the lists? That
>> seems to wholly defeat the purpose.
>
> Not every time there is a request for higher order pages. That surely
> will defeat the purpose of pcps. But my suggestion is only to drain
> when the global pool is not able to service the request. In the
> pathological case where the higher order and zero order requests are
> alternating, you could have thrashing in terms of pages moving to the
> pcp only to move back to the global list.
OK, seems fair enough. But there are multiple "harder and harder" attempts
within __alloc_pages ... which one are you going for? Just before we
OOM / fail the alloc? That'd be hard to argue with, though I'm unsure
what the locking is to dump out other CPUs' queues - are you going to
IPI all CPUs and ask them to do it? That'd seem to race against
refilling (as you mention).
>> Could you elaborate on what the benefits were from this change in the
>> first place? Some page colouring thing on ia64? It seems to have way more
>> downside than upside to me.
>
> The original change was to try to allocate a higher order page to
> service a batch size bulk request. This was with the hope that better
> physical contiguity would spread the data better across big caches.
OK ... but it has an impact on fragmentation. How much benefit are you
getting?
M.