From: Andrea Arcangeli <andrea@suse.de>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: Robert Macaulay <robert_macaulay@dell.com>,
Rik van Riel <riel@conectiva.com.br>,
Craig Kulesa <ckulesa@as.arizona.edu>,
linux-kernel@vger.kernel.org, Bob Matthews <bmatthews@redhat.com>,
Marcelo Tosatti <marcelo@conectiva.com.br>
Subject: Re: highmem deadlock fix [was Re: VM in 2.4.10(+tweaks) vs. 2.4.9-ac14/15(+stuff)]
Date: Fri, 28 Sep 2001 02:08:10 +0200
Message-ID: <20010928020810.C14277@athlon.random>
In-Reply-To: <20010928001321.L14277@athlon.random> <Pine.LNX.4.33.0109271605550.25667-100000@penguin.transmeta.com>
In-Reply-To: <Pine.LNX.4.33.0109271605550.25667-100000@penguin.transmeta.com>; from torvalds@transmeta.com on Thu, Sep 27, 2001 at 04:16:11PM -0700
On Thu, Sep 27, 2001 at 04:16:11PM -0700, Linus Torvalds wrote:
> Thinking about it, I think GFP_NOIO also implies "we must not wait for
> other buffers", because that could deadlock for _other_ things too, like
> loop and NBD (which use NOIO to make sure that they don't recurse - but
> that should also imply not waiting for themselves). The GFP_xxx approach
> should fix those deadlocks too.
I don't quite understand your point about GFP_NOIO: GFP_NOIO is a
no-brainer. loop/NBD etc. are all safe, since GFP_NOIO forbids reaching
sync_page_buffers in the first place.
The only subtle case is GFP_NOHIGHIO, which can still reach
sync_page_buffers for lowmem pages: the pagehighmem logic only protects
it against itself, so only callers that lock down highmem buffers, then
allocate nohighmem, and only then start the I/O on the highmem buffers
are subject to the highmem deadlock. The only place that locks down
highmem, then allocates nohighmem, and then starts I/O on highmem seems
to be write_some_buffers. However, I could agree if you're worried that
other places do it too; if they do, we could teach them to use the
pending-I/O information as well, so my approach could be made more
fine-grained.
Andrea
Thread overview: 22+ messages
2001-09-26 13:38 VM in 2.4.10(+tweaks) vs. 2.4.9-ac14/15(+stuff) Craig Kulesa
2001-09-26 14:03 ` Andrea Arcangeli
2001-09-26 14:23 ` Rik van Riel
2001-09-26 14:49 ` Andrea Arcangeli
2001-09-26 18:17 ` Robert Macaulay
2001-09-26 18:36 ` Andrea Arcangeli
2001-09-27 22:13 ` highmem deadlock fix [was Re: VM in 2.4.10(+tweaks) vs. 2.4.9-ac14/15(+stuff)] Andrea Arcangeli
2001-09-27 22:55 ` J . A . Magallon
2001-09-27 23:16 ` Linus Torvalds
2001-09-27 23:18 ` Linus Torvalds
2001-09-27 23:37 ` Andrea Arcangeli
2001-09-27 23:51 ` Rik van Riel
2001-09-28 1:26 ` Andrea Arcangeli
2001-09-28 1:28 ` Linus Torvalds
2001-09-27 23:47 ` Andrea Arcangeli
2001-09-28 0:03 ` Linus Torvalds
2001-09-28 0:11 ` Andrea Arcangeli
2001-09-28 2:12 ` Robert Macaulay
2001-09-28 2:24 ` Andrea Arcangeli
2001-09-28 13:36 ` Robert Macaulay
2001-09-28 14:02 ` LILO causes segmentation fault and panic [was Re: highmem deadlock fix] Robert Macaulay
2001-09-28 0:08 ` Andrea Arcangeli [this message]