From: Peter Zijlstra <peterz@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Minchan Kim <minchan@kernel.org>,
linux-mm@kvack.org, KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Subject: Re: needed lru_add_drain_all() change
Date: Wed, 27 Jun 2012 12:27:31 +0200 [thread overview]
Message-ID: <1340792851.10063.20.camel@twins> (raw)
In-Reply-To: <20120626234119.755af455.akpm@linux-foundation.org>
On Tue, 2012-06-26 at 23:41 -0700, Andrew Morton wrote:
> On Wed, 27 Jun 2012 15:33:09 +0900 Minchan Kim <minchan@kernel.org> wrote:
>
> > Anyway, let's wait further answer, especially, RT folks.
>
> rt folks said "it isn't changing", and I agree with them. It isn't
> worth breaking the rt-prio quality of service because a few odd parts
> of the kernel did something inappropriate. Especially when those
> few sites have alternatives.
I'm not exactly sure its a 'few' sites.. but yeah there's a few obvious
sites we should look at.
Afaict all lru_add_drain_all() callers do this optimistically, esp.
since there's no hard sync. against adding new entries to the per-cpu
pagevecs.
So there's no hard requirement to wait for completion, now not waiting
has obvious problems as well, but we could cheat and timeout after a few
jiffies or so.
This would avoid the DoS scenario, it will not improve the over-all
quality of the kernel though, since an unflushed pagevec can result in
compaction etc. failing.
The problem with stuffing all this in hardirq context (using
on_each_cpu() and friends) is that these people who do spin in fifo
threads generally don't like interrupt latencies forced on them either.
And I presume its currently scheduled is because its potentially quite
expensive to flush all these pages.
The only alternative I can come up with is scheduling the work like we
do now, wait for it for a few jiffies, track which CPUs completed,
cancel the others, and remote flush their pagevecs from the calling cpu.
But I can't say I like that option either...
As it stands I've always said that doing while(1) from FIFO/RR tasks is
broken and you get to keep the pieces. If we can find good solutions for
this I'm all ears, but I don't think its something we should bend over
backwards for.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
next prev parent reply other threads:[~2012-06-27 10:27 UTC|newest]
Thread overview: 20+ messages / expand[flat|nested] mbox.gz Atom feed top
2012-06-26 21:37 needed lru_add_drain_all() change Andrew Morton
2012-06-27 0:55 ` Minchan Kim
2012-06-27 1:15 ` Andrew Morton
2012-06-27 1:20 ` Minchan Kim
2012-06-27 1:29 ` Andrew Morton
2012-06-27 2:09 ` Minchan Kim
2012-06-27 5:12 ` Andrew Morton
2012-06-27 5:41 ` Minchan Kim
2012-06-27 5:55 ` Andrew Morton
2012-06-27 6:33 ` Minchan Kim
2012-06-27 6:41 ` Andrew Morton
2012-06-27 10:27 ` Peter Zijlstra [this message]
2012-06-27 6:46 ` Andrew Morton
2012-06-27 10:31 ` Peter Zijlstra
2012-06-27 12:04 ` Peter Zijlstra
2012-06-28 6:23 ` KOSAKI Motohiro
2012-06-29 3:47 ` Kamezawa Hiroyuki
2012-06-28 7:43 ` Kamezawa Hiroyuki
2012-06-28 23:42 ` Minchan Kim
2012-06-29 3:24 ` Kamezawa Hiroyuki
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=1340792851.10063.20.camel@twins \
--to=peterz@infradead.org \
--cc=akpm@linux-foundation.org \
--cc=kosaki.motohiro@gmail.com \
--cc=linux-mm@kvack.org \
--cc=minchan@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).