public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@linux-foundation.org>
To: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>, Christoph Lameter <clameter@sgi.com>,
	linux-kernel@vger.kernel.org, gregkh@suse.de, muli@il.ibm.com,
	asit.k.mallick@intel.com, suresh.b.siddha@intel.com,
	arjan@linux.intel.com, ashok.raj@intel.com, shaohua.li@intel.com,
	davem@davemloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
Date: Mon, 11 Jun 2007 17:30:01 -0700	[thread overview]
Message-ID: <20070611173001.e0355af3.akpm@linux-foundation.org> (raw)
In-Reply-To: <20070611235208.GC25022@linux-os.sc.intel.com>

On Mon, 11 Jun 2007 16:52:08 -0700 "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:

> On Mon, Jun 11, 2007 at 02:14:49PM -0700, Andrew Morton wrote:
> > On Mon, 11 Jun 2007 13:44:42 -0700
> > "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com> wrote:
> > 
> > > In our first implementation we used the mempool APIs to
> > > allocate memory, and we were told that mempool with GFP_ATOMIC is
> > > useless; hence in the second implementation we came up with
> > > resource pools (which are preallocated pools). Again, as I understand
> > > it, the argument is: why create another allocator when we have the slab
> > > allocator, which is similar to these resource pools?
> > 
> > Odd.  mempool with GFP_ATOMIC is basically equivalent to your
> > resource-pools, isn't it?  We'll try the slab allocator and, if that
> > fails, fall back to the reserves.
> 
> Slab allocators don't reserve memory; in other words, this memory 
> can be reclaimed by the VM under memory pressure, which we don't want in
> the IOMMU case.
> 
> No, the two are exactly opposite. 
> mempool with GFP_ATOMIC first tries to get memory from the OS, and
> if that fails, it looks for an object in the pool and returns it.
> 
> Whereas the resource pool is the exact opposite of mempool: each 
> time it looks for an object in the pool, and if one exists we 
> return that object; otherwise we try to get the memory from the OS while 
> scheduling work to grow the pool objects. In fact, the work
> is scheduled to grow the pool when the low-threshold point is hit.

I realise all that.  But I'd have thought that the mempool approach is
actually better: use the page allocator and only deplete your reserve pool
when the page allocator fails.

The refill-the-pool-in-the-background feature sounds pretty worthless to
me.  On a uniprocessor machine (for example), the kernel thread may not get
scheduled for tens of milliseconds (easily), which is far, far more than is
needed for that reserve pool to become fully consumed.


  reply	other threads:[~2007-06-12  0:30 UTC|newest]

Thread overview: 63+ messages
2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
2007-06-07 23:27   ` Andrew Morton
2007-06-08 18:21     ` Keshavamurthy, Anil S
2007-06-08 19:01       ` Andrew Morton
2007-06-08 20:12         ` Keshavamurthy, Anil S
2007-06-08 20:40           ` Siddha, Suresh B
2007-06-08 20:44           ` Andrew Morton
2007-06-08 22:33           ` Christoph Lameter
2007-06-08 22:49             ` Keshavamurthy, Anil S
2007-06-08 20:43         ` Andreas Kleen
2007-06-08 20:55           ` Andrew Morton
2007-06-08 22:31             ` Andi Kleen
2007-06-08 21:20           ` Keshavamurthy, Anil S
2007-06-08 21:42             ` Andrew Morton
2007-06-08 22:17               ` Arjan van de Ven
2007-06-08 22:18               ` Siddha, Suresh B
2007-06-08 22:38                 ` Christoph Lameter
2007-06-08 22:36           ` Christoph Lameter
2007-06-08 22:56             ` Andi Kleen
2007-06-08 22:59               ` Christoph Lameter
2007-06-09  9:47                 ` Andi Kleen
2007-06-11 20:44                   ` Keshavamurthy, Anil S
2007-06-11 21:14                     ` Andrew Morton
2007-06-11  9:46                       ` Ashok Raj
2007-06-11 22:16                       ` Andi Kleen
2007-06-11 23:28                         ` Christoph Lameter
2007-06-11 23:52                       ` Keshavamurthy, Anil S
2007-06-12  0:30                         ` Andrew Morton [this message]
2007-06-12  1:10                           ` Arjan van de Ven
2007-06-12  1:30                             ` Christoph Lameter
2007-06-12  1:35                             ` Andrew Morton
2007-06-12  1:55                               ` Arjan van de Ven
2007-06-12  2:08                                 ` Siddha, Suresh B
2007-06-13 18:40                                 ` Matt Mackall
2007-06-13 19:04                                   ` Andi Kleen
2007-06-12  0:38                         ` Christoph Lameter
2007-06-11 21:29                     ` Christoph Lameter
2007-06-11 21:40                       ` Keshavamurthy, Anil S
2007-06-11 22:25                     ` Andi Kleen
2007-06-11 11:29                       ` Ashok Raj
2007-06-11 23:15                       ` Keshavamurthy, Anil S
2007-06-08 22:32       ` Christoph Lameter
2007-06-08 22:45         ` Keshavamurthy, Anil S
2007-06-08 22:55           ` Christoph Lameter
2007-06-10 16:38             ` Arjan van de Ven
2007-06-11 16:10               ` Christoph Lameter
2007-06-06 18:57 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
2007-06-07 23:34   ` Andrew Morton
2007-06-08 18:25     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
2007-06-07 23:57   ` Andrew Morton
2007-06-08 22:30     ` Christoph Lameter
2007-06-13 20:20     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
2007-06-07 23:58   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
2007-06-08  0:01   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20070611173001.e0355af3.akpm@linux-foundation.org \
    --to=akpm@linux-foundation.org \
    --cc=ak@suse.de \
    --cc=anil.s.keshavamurthy@intel.com \
    --cc=arjan@linux.intel.com \
    --cc=ashok.raj@intel.com \
    --cc=asit.k.mallick@intel.com \
    --cc=clameter@sgi.com \
    --cc=davem@davemloft.net \
    --cc=gregkh@suse.de \
    --cc=linux-kernel@vger.kernel.org \
    --cc=muli@il.ibm.com \
    --cc=shaohua.li@intel.com \
    --cc=suresh.b.siddha@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox.