public inbox for linux-kernel@vger.kernel.org
From: Matthew Dobson <colpatch@us.ibm.com>
To: Benjamin LaHaise <bcrl@kvack.org>
Cc: Christoph Lameter <clameter@engr.sgi.com>,
	linux-kernel@vger.kernel.org, sri@us.ibm.com, andrea@suse.de,
	pavel@suse.cz, linux-mm@kvack.org
Subject: Re: [patch 0/9] Critical Mempools
Date: Thu, 26 Jan 2006 16:27:16 -0800	[thread overview]
Message-ID: <43D968E4.5020300@us.ibm.com> (raw)
In-Reply-To: <20060127000304.GG10409@kvack.org>

Benjamin LaHaise wrote:
> On Thu, Jan 26, 2006 at 03:32:14PM -0800, Matthew Dobson wrote:
> 
>>>I thought the earlier __GFP_CRITICAL was a good idea.
>>
>>Well, I certainly could have used that feedback a month ago! ;)  The
>>general response to that patchset was overwhelmingly negative.  Yours is
>>the first vote in favor of that approach, that I'm aware of.
> 
> 
> Personally, I'm more in favour of a proper reservation system.  mempools 
> are pretty inefficient.  Reservations have useful properties, too -- one 
> could reserve memory for a critical process to use, but allow the system 
> to use that memory for easy to reclaim caches or to help with memory 
> defragmentation (more free pages really helps the buddy allocator).

That's an interesting idea...  Keep track of the number of pages "reserved"
but allow them to be used for something like read-only pagecache...  Something
along those lines would certainly be easier on the page allocator, since it
wouldn't have chunks of pages "missing" for long periods of time.


>>>Gfp flag? Better memory reclaim functionality?
>>
>>Well, I've got patches that implement the GFP flag approach, but as I
>>mentioned above, that was poorly received.  Better memory reclaim is a
>>broad and general approach that I agree is useful, but will not necessarily
>>solve the same set of problems (though it would likely lessen the severity
>>somewhat).
> 
> 
> Which areas are the priorities for getting this functionality into?  
> Networking over particular sockets?  A GFP_ flag would plug into the current 
> network stack trivially, as sockets already have a field to store the memory 
> allocation flags.

The impetus for this work was getting this functionality into the
networking stack, to keep the network alive under periods of extreme VM
pressure.  Keeping track of 'criticalness' on a per-socket basis is good,
but the problem is the receive side.  Networking packets are received and
put into skbuffs before there is any concept of what socket they belong to.
So really handling incoming traffic under extreme memory pressure would
require something beyond just a per-socket flag.

I have to say I'm somewhat amused by how much support the old approach is
getting now that I've spent a few weeks going back to the drawing board and
coming up with what I thought was a more general solution! :\

-Matt


Thread overview: 16+ messages
2006-01-25 19:39 [patch 0/9] Critical Mempools Matthew Dobson
2006-01-26 17:57 ` Christoph Lameter
2006-01-26 23:01   ` Matthew Dobson
2006-01-26 23:18     ` Christoph Lameter
2006-01-26 23:32       ` Matthew Dobson
2006-01-27  0:03         ` Benjamin LaHaise
2006-01-27  0:27           ` Matthew Dobson [this message]
2006-01-27  7:35             ` Pekka Enberg
2006-01-27 10:10               ` Paul Jackson
2006-01-27 11:07                 ` Pekka Enberg
2006-01-28  0:41                   ` Matthew Dobson
2006-01-28 10:21                     ` Pekka Enberg
2006-01-30 22:38                       ` Matthew Dobson
2006-01-27 15:36             ` Jan Kiszka
2006-01-27  8:34           ` Sridhar Samudrala
2006-01-27  8:29         ` Sridhar Samudrala
