From: Jan Kara <jack@suse.cz>
To: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jan Kara <jack@suse.cz>,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@infradead.org>,
Dave Chinner <david@fromorbit.com>,
Wu Fengguang <fengguang.wu@intel.com>
Subject: Re: [PATCH 3/5] mm: Implement IO-less balance_dirty_pages()
Date: Fri, 11 Feb 2011 16:46:16 +0100 [thread overview]
Message-ID: <20110211154616.GK5187@quack.suse.cz> (raw)
In-Reply-To: <1296824956.26581.649.camel@laptop>
On Fri 04-02-11 14:09:16, Peter Zijlstra wrote:
> On Fri, 2011-02-04 at 02:38 +0100, Jan Kara wrote:
> > + dirty_exceeded = check_dirty_limits(bdi, &st);
> > + if (dirty_exceeded < DIRTY_MAY_EXCEED_LIMIT) {
> > + /* Wakeup everybody */
> > + trace_writeback_distribute_page_completions_wakeall(bdi);
> > + spin_lock(&bdi->balance_lock);
> > + list_for_each_entry_safe(
> > + waiter, tmpw, &bdi->balance_list, bw_list)
> > + balance_waiter_done(bdi, waiter);
> > + spin_unlock(&bdi->balance_lock);
> > + return;
> > + }
> > +
> > + spin_lock(&bdi->balance_lock);
> is there any reason this is a spinlock and not a mutex?
No. Is a mutex preferable?
> > + /*
> > + * Note: This loop can have quadratic complexity in the number of
> > + * waiters. It can be changed to a linear one if we also maintained a
> > + * list sorted by number of pages. But for now that does not seem to be
> > + * worth the effort.
> > + */
>
> That doesn't seem to explain much :/
>
> > + remainder_pages = written - bdi->written_start;
> > + bdi->written_start = written;
> > + while (!list_empty(&bdi->balance_list)) {
> > + pages_per_waiter = remainder_pages / bdi->balance_waiters;
> > + if (!pages_per_waiter)
> > + break;
>
> if remainder_pages < balance_waiters you just lost your delta, it's best
> to not set bdi->written_start until the end and leave everything not
> processed for the next round.
I haven't lost it; it will be distributed in the second loop.
> > + remainder_pages %= bdi->balance_waiters;
> > + list_for_each_entry_safe(
> > + waiter, tmpw, &bdi->balance_list, bw_list) {
> > + if (waiter->bw_to_write <= pages_per_waiter) {
> > + remainder_pages += pages_per_waiter -
> > + waiter->bw_to_write;
> > + balance_waiter_done(bdi, waiter);
> > + continue;
> > + }
> > + waiter->bw_to_write -= pages_per_waiter;
> > }
> > + }
> > + /* Distribute remaining pages */
> > + list_for_each_entry_safe(waiter, tmpw, &bdi->balance_list, bw_list) {
> > + if (remainder_pages > 0) {
> > + waiter->bw_to_write--;
> > + remainder_pages--;
> > + }
> > + if (waiter->bw_to_write == 0 ||
> > + (dirty_exceeded == DIRTY_MAY_EXCEED_LIMIT &&
> > + !bdi_task_limit_exceeded(&st, waiter->bw_task)))
> > + balance_waiter_done(bdi, waiter);
> > + }
>
> OK, I see what you're doing, but I'm not quite sure it makes complete
> sense yet.
>
> mutex_lock(&bdi->balance_mutex);
> for (;;) {
> unsigned long pages = written - bdi->written_start;
> unsigned long pages_per_waiter = pages / bdi->balance_waiters;
> if (!pages_per_waiter)
> break;
> list_for_each_entry_safe(waiter, tmpw, &bdi->balance_list, bw_list){
> unsigned long delta = min(pages_per_waiter, waiter->bw_to_write);
>
> bdi->written_start += delta;
> waiter->bw_to_write -= delta;
> if (!waiter->bw_to_write)
> balance_waiter_done(bdi, waiter);
> }
> }
> mutex_unlock(&bdi->balance_mutex);
>
> Comes close to what you wrote I think.
Yes, quite close. The difference shows only when we wake up and there are
not enough pages for all waiters: we at least "help" the waiters at the
beginning of the queue. That could have some impact when the queue grows
quickly on a slow device (something like a write fork bomb), but given that we
need only one written page per waiter, it would have to be a really horrible
situation for this to trigger anyway...
> One of the problems I have with it is that min(), it means that that
> waiter waited too long, but will not be compensated for this by reducing
> its next wait. Instead you give it away to other waiters which preserves
> fairness on the bdi level, but not for tasks.
>
> You can do that by keeping ->bw_to_write in task_struct and normalize it
> by the estimated bdi bandwidth (patch 5), that way, when you next
> increment it it will turn out to be lower and the wait will be shorter.
>
> That also removes the need to loop over the waiters.
Umm, interesting idea! Just the implication "pages_per_waiter >
->bw_to_write => we waited for too long" isn't completely right. In fact,
each waiter entered the queue sometime between the time the first waiter
entered it and the time the timer triggered. It might have entered just
shortly before the timer triggered and thus will in fact be delayed for too
short a time. So this problem is somewhat the other side of the problem you
describe, and so far I have just ignored it in the hope that it levels out
in the long term.
The trouble I see with storing the remaining written pages with each task
is that we can accumulate a significant number of pages there - from what I
see, e.g. with my SATA drive, writeback completion seems to be rather
bumpy (and it's even worse over NFS). If we then get below the dirty limit,
a process can carry a lot of written pages over to the time when the dirty
limit gets exceeded again, which reduces the effect of throttling at that
time, and we can exceed the dirty limits by more than we'd wish. We could
solve this by somehow invalidating the written pages when we stop throttling
on that bdi, but that would mean tracking something like <bonus pages, bdi>
pairs with each task - not sure we want to do that...
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR