From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Oliver Neukum <oliver@neukum.org>
Cc: Hugh Dickins <hugh@veritas.com>,
Pete Zaitcev <zaitcev@redhat.com>,
arjanv@redhat.com, alan@redhat.com, greg@kroah.com,
linux-kernel@vger.kernel.org, riel@redhat.com, sct@redhat.com
Subject: Re: PF_MEMALLOC in 2.6
Date: Fri, 20 Aug 2004 18:06:41 +1000
Message-ID: <4125B111.2040308@yahoo.com.au>
In-Reply-To: <200408200956.50972.oliver@neukum.org>

Oliver Neukum wrote:
> On Friday, 20 August 2004 04:37, Nick Piggin wrote:
>
>>So if this thing allocates memory on behalf of a read request, then
>>it is basically a bug. In practice you could probably get away with
>>servicing all writes with PF_MEMALLOC; however, that could still lead
>>to situations where it consumes all your low memory on behalf of
>>highmem IO (though perhaps this won't deadlock if that memory is
>>going to be released as a matter of course?)
>>
>>Another thing, having it always use PF_MEMALLOC means it can easily
>>wipe out the GFP_ATOMIC reserve.
>>
>>So I'd say try to find a way to only use PF_MEMALLOC on behalf of
>>a PF_MEMALLOC thread or use a mempool or something.
>
>
> Then the SCSI layer should pass down the flag.
>
It would be ideal from the memory allocator's point of view to do it
on a per-request basis like that.

When the rubber hits the road, though, I think it is probably going to
be very troublesome to do it right that way. For example, what happens
when your usb-thingy-thread blocks on a memory allocation while handling
a read request, and then the system gets low on memory and someone tries
to free some by submitting a write request to the USB device? The write
can never complete, because the thread that must service it is stuck
waiting for the very memory the write is supposed to free.
I don't know anything about how the usb thread works, so I'm not sure.
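
To make the "only use PF_MEMALLOC on behalf of a PF_MEMALLOC thread"
idea from my previous mail concrete, here is a rough sketch (completely
untested, and the request structure and its memalloc field are invented
for illustration):

#include <linux/sched.h>
#include <linux/slab.h>

struct usb_io_request {		/* invented for illustration */
	size_t len;
	int memalloc;		/* set by a PF_MEMALLOC submitter */
};

static void *usb_io_alloc(struct usb_io_request *req)
{
	unsigned long old = current->flags & PF_MEMALLOC;
	void *p;

	/*
	 * Dip into the emergency reserve only when the request was
	 * submitted by a thread that is itself doing page reclaim.
	 */
	if (req->memalloc)
		current->flags |= PF_MEMALLOC;

	p = kmalloc(req->len, GFP_NOIO);

	/*
	 * Restore the previous state instead of clearing PF_MEMALLOC
	 * unconditionally, in case the flag was already set on entry.
	 */
	current->flags = (current->flags & ~PF_MEMALLOC) | old;
	return p;
}

The submitting path would set ->memalloc when it finds PF_MEMALLOC in
its own current->flags at queueing time.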

The mempool model seems to work well for requests in the block layer;
making a completely uneducated guess, I'd say that could be a good
option to investigate.
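
Roughly, the pattern would be something like this (again only a sketch;
the cache name, element type and pool size are made up, though
mempool_create() and friends are the real 2.6 interfaces):

#include <linux/mempool.h>
#include <linux/slab.h>
#include <linux/errno.h>

struct urb_priv {		/* invented per-request bookkeeping */
	int dummy;
};

static kmem_cache_t *urb_priv_cache;
static mempool_t *urb_priv_pool;

static int usb_pool_init(void)
{
	urb_priv_cache = kmem_cache_create("urb_priv",
			sizeof(struct urb_priv), 0, 0, NULL, NULL);
	if (!urb_priv_cache)
		return -ENOMEM;

	/*
	 * 16 elements are preallocated and reserved up front, so
	 * writeout can make progress even when the page allocator
	 * has nothing left to give.
	 */
	urb_priv_pool = mempool_create(16, mempool_alloc_slab,
			mempool_free_slab, urb_priv_cache);
	if (!urb_priv_pool)
		return -ENOMEM;
	return 0;
}

/*
 * With __GFP_WAIT in the mask (GFP_NOIO includes it), mempool_alloc()
 * does not fail: it sleeps until another element is freed back.
 */
static struct urb_priv *usb_get_priv(void)
{
	return mempool_alloc(urb_priv_pool, GFP_NOIO);
}

static void usb_put_priv(struct urb_priv *up)
{
	mempool_free(up, urb_priv_pool);
}

The important property is the guaranteed minimum: as long as every
request eventually frees its element back to the pool, submission can
always make forward progress without touching the PF_MEMALLOC reserve.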