From: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
To: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>,
	linux-kernel@vger.kernel.org, gregkh@suse.de, muli@il.ibm.com,
	asit.k.mallick@intel.com, suresh.b.siddha@intel.com,
	arjan@linux.intel.com, ashok.raj@intel.com, shaohua.li@intel.com,
	davem@davemloft.net
Subject: Re: [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling
Date: Mon, 11 Jun 2007 13:44:42 -0700	[thread overview]
Message-ID: <20070611204442.GA4074@linux-os.sc.intel.com> (raw)
In-Reply-To: <200706091147.24705.ak@suse.de>

On Sat, Jun 09, 2007 at 11:47:23AM +0200, Andi Kleen wrote:
> 
> > > Now there is an anon dirty limit since a few releases, but I'm not
> > > fully convinced it solves the problem completely.
> > 
> > A gut feeling or is there more?
> 
> Lots of other subsystem can allocate a lot of memory
> and they usually don't cooperate and have similar dirty limit concepts.
> So you could run out of usable memory anyways and then have a similar
> issue.
> 
> For example a flood of network packets could always steal your
> GFP_ATOMIC pools very quickly in the background (gigabit or 10gig 
> can transfer a lot of data very quickly)
> 
> Also iirc try_to_free_pages() is not completely fair and might fail
> under extreme load for some requesters.
> 
> Not requiring memory allocation for any IO would be certainly safer.
> 
> Anyways, it's a theoretic question because you can't sleep in 
> there anyways unless something drastic changes in the driver interfaces.

Agreed; the ideal thing would be to change the driver interfaces so
that the dma_map_{single|sg} APIs are never called from interrupt
context or with a spinlock held, thereby leaving the IOMMU drivers
free to block when memory is not available. This seems to be a noble
goal, but it involves huge changes and testing beyond the scope of
the current IOMMU driver. I guess it would be ideal if this gets
discussed and resolved at the kernel summit.
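
To make the constraint concrete, here is a sketch of why the map call
must not sleep (an invented driver, not anything from this patchset;
struct nic and nic_rx_irq are made up purely for illustration):

#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

struct nic {				/* hypothetical driver state */
	struct device *dev;
	dma_addr_t rx_dma;
};

/*
 * A typical NIC refills its RX ring from hard-IRQ context, so
 * everything beneath dma_map_single() -- including the IOMMU
 * driver's page-table allocations -- runs with interrupts off
 * and must not sleep.
 */
static irqreturn_t nic_rx_irq(int irq, void *dev_id)
{
	struct nic *nic = dev_id;
	struct sk_buff *skb = dev_alloc_skb(1536);  /* itself GFP_ATOMIC */

	if (!skb)
		return IRQ_HANDLED;

	/* May recurse into the IOMMU driver; sleeping here is a bug. */
	nic->rx_dma = dma_map_single(nic->dev, skb->data, 1536,
				     DMA_FROM_DEVICE);
	/* ... hand the buffer to the hardware ... */
	return IRQ_HANDLED;
}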

Assuming that we may have to live with the above limitations for a
while, what is the best way to allocate memory in the
dma_map_{single|sg} APIs for the IOMMU drivers? (This memory is
needed to set up the IOMMU's internal page tables, etc.)
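
For reference, the allocation in question looks roughly like the
following (a minimal sketch; alloc_pgtable_page here is illustrative,
not a quote of the posted code):

#include <linux/gfp.h>

/*
 * Each level of the IOMMU page table is a single zeroed page.
 * Since the mapping path may hold spinlocks or run in interrupt
 * context, GFP_ATOMIC is the only option, and it can fail.
 */
static void *alloc_pgtable_page(void)
{
	return (void *)get_zeroed_page(GFP_ATOMIC);
}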

In our first implementation we used the mempool APIs to allocate
memory, and we were told that a mempool backed by GFP_ATOMIC is
useless. Hence, in the second implementation we came up with resource
pools (which are preallocated pools), and the argument there, as I
understand it, is: why create another allocator when slab allocation
already provides something similar to these resource pools?
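
The objection to the mempool approach can be seen from how it would
have been used (a sketch with invented names; struct iova_entry
stands in for the real structure in the patchset):

#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

struct iova_entry {
	unsigned long pfn_lo, pfn_hi;
};

static struct kmem_cache *iova_cachep;
static mempool_t *iova_mempool;

static int __init iova_pool_init(void)
{
	iova_cachep = kmem_cache_create("iova_entry",
					sizeof(struct iova_entry),
					0, 0, NULL);
	if (!iova_cachep)
		return -ENOMEM;
	/* 128 elements preallocated as the pool's reserve. */
	iova_mempool = mempool_create_slab_pool(128, iova_cachep);
	return iova_mempool ? 0 : -ENOMEM;
}

/*
 * The catch: a mempool's forward-progress guarantee comes from
 * sleeping until an element is freed back to the pool.  With
 * GFP_ATOMIC it cannot wait, so once the reserve is exhausted this
 * returns NULL -- no better than a plain slab allocation.
 */
static struct iova_entry *iova_alloc_atomic(void)
{
	return mempool_alloc(iova_mempool, GFP_ATOMIC);
}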

Hence, can I assume that the conclusion of this discussion is to use
the kmem_cache_alloc() functions to allocate memory in the
dma_map_{single|sg} APIs?
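
i.e., something along these lines (again a sketch, reusing the
illustrative struct iova_entry from above):

#include <linux/slab.h>

static struct kmem_cache *iova_cachep;	/* created once at init time */

/*
 * The slab allocator already keeps partially-filled slab pages
 * cached per kmem_cache, which serves much the same purpose as a
 * hand-rolled preallocated pool.
 */
static struct iova_entry *iova_alloc(void)
{
	/* Mapping paths may be atomic: GFP_ATOMIC, may return NULL. */
	return kmem_cache_alloc(iova_cachep, GFP_ATOMIC);
}

static void iova_free(struct iova_entry *e)
{
	kmem_cache_free(iova_cachep, e);
}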

Again, if the dma_map_{single|sg} APIs fail due to a memory
allocation failure, the only thing that can be done is to panic, as
that is what a few of the other IOMMU implementations do today.
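
Concretely, the failure path reduces to something like this (a
sketch; iommu_map_one is an invented name, and whether to panic or
instead return an error that drivers must check with
dma_mapping_error() is exactly the open question):

#include <linux/dma-mapping.h>
#include <linux/kernel.h>

static dma_addr_t iommu_map_one(struct device *dev, void *vaddr,
				size_t size)
{
	void *pte_page = alloc_pgtable_page();	/* GFP_ATOMIC inside */

	if (!pte_page)
		panic("iommu: no memory mapping %zu bytes for DMA\n", size);

	/* ... fill in the PTEs, flush, return the bus address ... */
	return 0;  /* placeholder; a real driver returns the bus address */
}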

Please advise.

Thanks,
Anil


Thread overview: 63+ messages
2007-06-06 18:56 [Intel-IOMMU 00/10] Intel IOMMU Support anil.s.keshavamurthy
2007-06-06 18:56 ` [Intel-IOMMU 01/10] DMAR detection and parsing logic anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 02/10] Library routine for pre-allocat pool handling anil.s.keshavamurthy
2007-06-07 23:27   ` Andrew Morton
2007-06-08 18:21     ` Keshavamurthy, Anil S
2007-06-08 19:01       ` Andrew Morton
2007-06-08 20:12         ` Keshavamurthy, Anil S
2007-06-08 20:40           ` Siddha, Suresh B
2007-06-08 20:44           ` Andrew Morton
2007-06-08 22:33           ` Christoph Lameter
2007-06-08 22:49             ` Keshavamurthy, Anil S
2007-06-08 20:43         ` Andreas Kleen
2007-06-08 20:55           ` Andrew Morton
2007-06-08 22:31             ` Andi Kleen
2007-06-08 21:20           ` Keshavamurthy, Anil S
2007-06-08 21:42             ` Andrew Morton
2007-06-08 22:17               ` Arjan van de Ven
2007-06-08 22:18               ` Siddha, Suresh B
2007-06-08 22:38                 ` Christoph Lameter
2007-06-08 22:36           ` Christoph Lameter
2007-06-08 22:56             ` Andi Kleen
2007-06-08 22:59               ` Christoph Lameter
2007-06-09  9:47                 ` Andi Kleen
2007-06-11 20:44                   ` Keshavamurthy, Anil S [this message]
2007-06-11 21:14                     ` Andrew Morton
2007-06-11  9:46                       ` Ashok Raj
2007-06-11 22:16                       ` Andi Kleen
2007-06-11 23:28                         ` Christoph Lameter
2007-06-11 23:52                       ` Keshavamurthy, Anil S
2007-06-12  0:30                         ` Andrew Morton
2007-06-12  1:10                           ` Arjan van de Ven
2007-06-12  1:30                             ` Christoph Lameter
2007-06-12  1:35                             ` Andrew Morton
2007-06-12  1:55                               ` Arjan van de Ven
2007-06-12  2:08                                 ` Siddha, Suresh B
2007-06-13 18:40                                 ` Matt Mackall
2007-06-13 19:04                                   ` Andi Kleen
2007-06-12  0:38                         ` Christoph Lameter
2007-06-11 21:29                     ` Christoph Lameter
2007-06-11 21:40                       ` Keshavamurthy, Anil S
2007-06-11 22:25                     ` Andi Kleen
2007-06-11 11:29                       ` Ashok Raj
2007-06-11 23:15                       ` Keshavamurthy, Anil S
2007-06-08 22:32       ` Christoph Lameter
2007-06-08 22:45         ` Keshavamurthy, Anil S
2007-06-08 22:55           ` Christoph Lameter
2007-06-10 16:38             ` Arjan van de Ven
2007-06-11 16:10               ` Christoph Lameter
2007-06-06 18:57 ` [Intel-IOMMU 03/10] PCI generic helper function anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 04/10] clflush_cache_range now takes size param anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 05/10] IOVA allocation and management routines anil.s.keshavamurthy
2007-06-07 23:34   ` Andrew Morton
2007-06-08 18:25     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 06/10] Intel IOMMU driver anil.s.keshavamurthy
2007-06-07 23:57   ` Andrew Morton
2007-06-08 22:30     ` Christoph Lameter
2007-06-13 20:20     ` Keshavamurthy, Anil S
2007-06-06 18:57 ` [Intel-IOMMU 07/10] Intel iommu cmdline option - forcedac anil.s.keshavamurthy
2007-06-07 23:58   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 08/10] DMAR fault handling support anil.s.keshavamurthy
2007-06-06 18:57 ` [Intel-IOMMU 09/10] Iommu Gfx workaround anil.s.keshavamurthy
2007-06-08  0:01   ` Andrew Morton
2007-06-06 18:57 ` [Intel-IOMMU 10/10] Iommu floppy workaround anil.s.keshavamurthy
