From: Mel Gorman <mel@csn.ul.ie>
To: Nick Piggin <npiggin@suse.de>
Cc: linux-mm@kvack.org,
	Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>,
	Chris Mason <chris.mason@oracle.com>,
	Jens Axboe <jens.axboe@oracle.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] page-allocator: Check zone pressure when batch of pages are freed
Date: Tue, 9 Mar 2010 11:29:35 +0000
Message-ID: <20100309112934.GE4883@csn.ul.ie>
In-Reply-To: <20100309111117.GI8653@laptop>

On Tue, Mar 09, 2010 at 10:11:18PM +1100, Nick Piggin wrote:
> On Tue, Mar 09, 2010 at 10:36:08AM +0000, Mel Gorman wrote:
> > On Tue, Mar 09, 2010 at 09:23:45PM +1100, Nick Piggin wrote:
> > > On Tue, Mar 09, 2010 at 10:08:35AM +0000, Mel Gorman wrote:
> > > > On Tue, Mar 09, 2010 at 08:53:42PM +1100, Nick Piggin wrote:
> > > > > Cool, you found this doesn't hurt performance too much?
> > > > > 
> > > > 
> > > > Nothing outside the noise was measured. I didn't profile it to be
> > > > absolutely sure, but I expect it's ok.
> > > 
> > > OK. Moving the waitqueue cacheline out of the fastpath footprint
> > > and doing the flag thing might be a good idea?
> > > 
> > 
> > Probably, I'll do it as a separate micro-optimisation patch so it's
> > clear what I'm doing.
> 
> Fair enough.
> 
> > > > > Can't you remove the check from the reclaim code now? (The check
> > > > > here should give a more timely wait anyway)
> > > > > 
> > > > 
> > > > I'll try and see what the timing and total IO figures look like.
> > > 
> > > Well reclaim goes through free_pages_bulk anyway, doesn't it? So
> > > I don't see why you would have to run any test.
> > >  
> > 
> > It should be fine but no harm in double checking. The tests I'm doing
> > are not great anyway. I'm somewhat depending on people familiar with
> > IO-related performance testing to give this a whirl or tell me how they
> > typically benchmark low-memory situations.
> 
> I don't really like that logic. It makes things harder to understand
> down the road if you have double checks.

There *should* be no difference and that is my expectation. If there is,
it means I'm missing something important. Hence, the double check.
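
As an aside, the flag idea above amounts to something like the
following userspace model. This is only a sketch of the scheme being
discussed, not the patch itself: struct zone_model and the function
names are invented, and a pthread condition variable stands in for
the zone's pressure waitqueue.

#include <pthread.h>
#include <stdbool.h>

/*
 * Sketch only. The "waiters" flag would live in an already-hot
 * cacheline so the free fastpath can skip touching the waitqueue
 * cacheline entirely when nobody is asleep. Initialise lock/cond
 * with PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER.
 */
struct zone_model {
	long free_pages;
	long low_watermark;
	bool waiters;			/* tested before touching the waitqueue */
	pthread_mutex_t lock;
	pthread_cond_t pressure_wq;	/* stands in for the zone waitqueue */
};

/* Free path: called once per batch of freed pages, not per page. */
void free_pages_batch(struct zone_model *z, long nr)
{
	pthread_mutex_lock(&z->lock);
	z->free_pages += nr;
	/* Only take the waitqueue hit if someone is actually waiting. */
	if (z->waiters && z->free_pages >= z->low_watermark) {
		z->waiters = false;
		pthread_cond_broadcast(&z->pressure_wq);
	}
	pthread_mutex_unlock(&z->lock);
}

/* Allocation slow path: wait on pressure to relieve, not congestion. */
void wait_on_zone_pressure(struct zone_model *z)
{
	pthread_mutex_lock(&z->lock);
	while (z->free_pages < z->low_watermark) {
		z->waiters = true;
		pthread_cond_wait(&z->pressure_wq, &z->lock);
	}
	pthread_mutex_unlock(&z->lock);
}

The point of the flag is that the common case, nobody waiting, costs
a single test of data the free fastpath already has in cache.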

> > > > > This is good because it should eliminate most all cases of extra
> > > > > waiting. I wonder if you've also thought of doing the check in the
> > > > > allocation path too as we were discussing? (this would give a better
> > > > > FIFO behaviour under memory pressure but I could easily agree it is not
> > > > > worth the cost)
> > > > > 
> > > > 
> > > > I *could* make the check but as I noted in the leader, there isn't
> > > > really a good test case that determines if these changes are "good" or
> > > > "bad". Removing congestion_wait() seems like an obvious win but other
> > > > modifications that alter how and when processes wait are less obvious.
> > > 
> > > Fair enough. But we could be sure it increases fairness, which is a
> > > good thing. So then we'd just have to check it against performance.
> > > 
> > 
> > Ordinarily, I'd agree but we've seen bug reports before from applications
> > that depended on unfairness for good performance. dbench figures depended
> > at one point on unfair behaviour (specifically being allowed to dirty the
> > whole system). volanomark was one that suffered when the scheduler became
> > more fair (think sched_yield was also a biggie). The new behaviour was
> > better and arguably the applications were doing the wrong thing but I'd
> > still like to treat "increase fairness in the page allocator" as a
> > separate patch as a result.
> 
> Yeah sure it would be done as another patch. I don't think there is much
> question that making things fairer is better. Especially if the
> alternative is a theoretical starvation.
> 

Agreed.

> That's not to say that batching shouldn't then be used to help improve
> performance of fairly scheduled resources. But it should be done in a
> carefully designed and controlled way, so that neither the fairness /
> starvation, nor the good performance from batching, depend on timing
> and behaviours of the hardware interconnect etc.
> 

Indeed. Batching is less clear-cut in this context. We are already
batching on a per-CPU basis but not on a per-process basis. My feeling
is that the problem to watch out for with queueing in the allocation
path is 2+ processes waiting on the queue and then allocating too much
on the per-cpu lists. Easy enough to handle that one but there are
probably a few more gotchas in there somewhere. Will revisit for sure
though.
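
To make that gotcha concrete, continuing the sketch from earlier in
this mail (again, invented names and an illustrative batch size): if
two woken processes each pull a full per-CPU batch, the second pull
can push the zone straight back under the watermark. One way to
handle it is for the refill to recheck under the zone lock and take
only what is spare.

#define PCP_BATCH 31	/* illustrative per-CPU batch size */

/*
 * Refill a woken task's per-CPU list without re-creating the very
 * pressure the wakeup signalled was gone: take at most PCP_BATCH
 * pages and never dip back below the watermark.
 */
long pcp_refill(struct zone_model *z)
{
	long taken = 0;

	pthread_mutex_lock(&z->lock);
	long spare = z->free_pages - z->low_watermark;
	if (spare > 0) {
		taken = spare < PCP_BATCH ? spare : PCP_BATCH;
		z->free_pages -= taken;
	}
	pthread_mutex_unlock(&z->lock);
	return taken;	/* 0 means the caller goes back to waiting */
}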

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
