From: Jan Kara <jack@suse.cz>
To: Wu Fengguang <fengguang.wu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Theodore Tso <tytso@mit.edu>,
Christoph Hellwig <hch@infradead.org>,
Dave Chinner <david@fromorbit.com>,
Chris Mason <chris.mason@oracle.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Li Shaohua <shaohua.li@intel.com>,
Myklebust Trond <Trond.Myklebust@netapp.com>,
"jens.axboe@oracle.com" <jens.axboe@oracle.com>,
Jan Kara <jack@suse.cz>, Nick Piggin <npiggin@suse.de>,
linux-fsdevel@vger.kernel.org,
Richard Kennedy <richard@rsk.demon.co.uk>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 01/45] writeback: reduce calls to global_page_state in balance_dirty_pages()
Date: Fri, 9 Oct 2009 17:12:31 +0200 [thread overview]
Message-ID: <20091009151230.GF7654@duck.suse.cz> (raw)
In-Reply-To: <20091007074901.251116016@intel.com>
Hi,
On Wed 07-10-09 15:38:19, Wu Fengguang wrote:
> From: Richard Kennedy <richard@rsk.demon.co.uk>
>
> Reducing the number of times balance_dirty_pages calls global_page_state
> reduces the cache references and so improves write performance on a
> variety of workloads.
>
> 'perf stats' of simple fio write tests shows the reduction in cache
> access.
> Where the test is fio 'write,mmap,600Mb,pre_read' on AMD AthlonX2 with
> 3Gb memory (dirty_threshold approx 600 Mb)
> running each test 10 times, dropping the fasted & slowest values then
> taking
> the average & standard deviation
>
> average (s.d.) in millions (10^6)
> 2.6.31-rc8 648.6 (14.6)
> +patch 620.1 (16.5)
>
> Achieving this reduction is by dropping clip_bdi_dirty_limit as it
> rereads the counters to apply the dirty_threshold and moving this check
> up into balance_dirty_pages where it has already read the counters.
>
> Also by rearrange the for loop to only contain one copy of the limit
> tests allows the pdflush test after the loop to use the local copies of
> the counters rather than rereading them.
>
> In the common case with no throttling it now calls global_page_state 5
> fewer times and bdi_stat 2 fewer.
Hmm, but the patch changes the behavior of balance_dirty_pages() in
several ways:
> -/*
> - * Clip the earned share of dirty pages to that which is actually available.
> - * This avoids exceeding the total dirty_limit when the floating averages
> - * fluctuate too quickly.
> - */
> -static void clip_bdi_dirty_limit(struct backing_dev_info *bdi,
> - unsigned long dirty, unsigned long *pbdi_dirty)
> -{
> - unsigned long avail_dirty;
> -
> - avail_dirty = global_page_state(NR_FILE_DIRTY) +
> - global_page_state(NR_WRITEBACK) +
> - global_page_state(NR_UNSTABLE_NFS) +
> - global_page_state(NR_WRITEBACK_TEMP);
> -
> - if (avail_dirty < dirty)
> - avail_dirty = dirty - avail_dirty;
> - else
> - avail_dirty = 0;
> -
> - avail_dirty += bdi_stat(bdi, BDI_RECLAIMABLE) +
> - bdi_stat(bdi, BDI_WRITEBACK);
> -
> - *pbdi_dirty = min(*pbdi_dirty, avail_dirty);
> -}
> -
> static inline void task_dirties_fraction(struct task_struct *tsk,
> long *numerator, long *denominator)
> {
> @@ -468,7 +442,6 @@ get_dirty_limits(unsigned long *pbackgro
> bdi_dirty = dirty * bdi->max_ratio / 100;
>
> *pbdi_dirty = bdi_dirty;
> - clip_bdi_dirty_limit(bdi, dirty, pbdi_dirty);
I don't see what test in balance_dirty_pages() should replace this
clipping... OTOH the clipping does not seem to have much effect on the
behavior of balance_dirty_pages() - the limit we clip to (at least
BDI_WRITEBACK + BDI_RECLAIMABLE) is large enough that we break from the
loop immediately. So just getting rid of the function is fine, but
I'd update the changelog accordingly.
> @@ -503,16 +476,36 @@ static void balance_dirty_pages(struct a
> };
>
> get_dirty_limits(&background_thresh, &dirty_thresh,
> - &bdi_thresh, bdi);
> + &bdi_thresh, bdi);
>
> nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
> - global_page_state(NR_UNSTABLE_NFS);
> - nr_writeback = global_page_state(NR_WRITEBACK);
> + global_page_state(NR_UNSTABLE_NFS);
> + nr_writeback = global_page_state(NR_WRITEBACK) +
> + global_page_state(NR_WRITEBACK_TEMP);
>
> - bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
> - bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);
> + /*
> + * In order to avoid the stacked BDI deadlock we need
> + * to ensure we accurately count the 'dirty' pages when
> + * the threshold is low.
> + *
> + * Otherwise it would be possible to get thresh+n pages
> + * reported dirty, even though there are thresh-m pages
> + * actually dirty; with m+n sitting in the percpu
> + * deltas.
> + */
> + if (bdi_thresh < 2*bdi_stat_error(bdi)) {
> + bdi_nr_reclaimable = bdi_stat_sum(bdi, BDI_RECLAIMABLE);
> + bdi_nr_writeback = bdi_stat_sum(bdi, BDI_WRITEBACK);
> + } else {
> + bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
> + bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);
> + }
> +
> + dirty_exceeded =
> + (bdi_nr_reclaimable + bdi_nr_writeback >= bdi_thresh)
> + || (nr_reclaimable + nr_writeback >= dirty_thresh);
>
> - if (bdi_nr_reclaimable + bdi_nr_writeback <= bdi_thresh)
> + if (!dirty_exceeded)
> break;
Ugh, but this is not equivalent! We would block a writer on some BDI
without any dirty data if we are over the global dirty limit. That didn't
happen before.
> /*
> @@ -521,7 +514,7 @@ static void balance_dirty_pages(struct a
> * when the bdi limits are ramping up.
> */
> if (nr_reclaimable + nr_writeback <
> - (background_thresh + dirty_thresh) / 2)
> + (background_thresh + dirty_thresh) / 2)
> break;
>
> if (!bdi->dirty_exceeded)
> @@ -539,33 +532,10 @@ static void balance_dirty_pages(struct a
> if (bdi_nr_reclaimable > bdi_thresh) {
> writeback_inodes_wbc(&wbc);
> pages_written += write_chunk - wbc.nr_to_write;
> - get_dirty_limits(&background_thresh, &dirty_thresh,
> - &bdi_thresh, bdi);
> - }
> -
> - /*
> - * In order to avoid the stacked BDI deadlock we need
> - * to ensure we accurately count the 'dirty' pages when
> - * the threshold is low.
> - *
> - * Otherwise it would be possible to get thresh+n pages
> - * reported dirty, even though there are thresh-m pages
> - * actually dirty; with m+n sitting in the percpu
> - * deltas.
> - */
> - if (bdi_thresh < 2*bdi_stat_error(bdi)) {
> - bdi_nr_reclaimable = bdi_stat_sum(bdi, BDI_RECLAIMABLE);
> - bdi_nr_writeback = bdi_stat_sum(bdi, BDI_WRITEBACK);
> - } else if (bdi_nr_reclaimable) {
> - bdi_nr_reclaimable = bdi_stat(bdi, BDI_RECLAIMABLE);
> - bdi_nr_writeback = bdi_stat(bdi, BDI_WRITEBACK);
> + /* don't wait if we've done enough */
> + if (pages_written >= write_chunk)
> + break;
> }
> -
> - if (bdi_nr_reclaimable + bdi_nr_writeback <= bdi_thresh)
> - break;
> - if (pages_written >= write_chunk)
> - break; /* We've done our duty */
> -
Here, we had an opportunity to break from the loop even if we didn't
manage to write everything ourselves (for example because the per-bdi
thread managed to write enough, or because enough IO completed while we
were trying to write). After the patch, we will sleep instead. IMHO that's
not good... I'd think that if we did all that work in
writeback_inodes_wbc(), we could spend the effort on re-reading and
re-checking the stats...
> schedule_timeout_interruptible(pause);
>
> /*
> @@ -577,8 +547,7 @@ static void balance_dirty_pages(struct a
> pause = HZ / 10;
> }
>
> - if (bdi_nr_reclaimable + bdi_nr_writeback < bdi_thresh &&
> - bdi->dirty_exceeded)
> + if (!dirty_exceeded && bdi->dirty_exceeded)
> bdi->dirty_exceeded = 0;
Here we fail to clear dirty_exceeded if we are over the global dirty limit
but not over the per-bdi dirty limit...
> @@ -593,9 +562,7 @@ static void balance_dirty_pages(struct a
> * background_thresh, to keep the amount of dirty memory low.
> */
> if ((laptop_mode && pages_written) ||
> - (!laptop_mode && ((global_page_state(NR_FILE_DIRTY)
> - + global_page_state(NR_UNSTABLE_NFS))
> - > background_thresh)))
> + (!laptop_mode && (nr_reclaimable > background_thresh)))
> bdi_start_writeback(bdi, NULL, 0);
> }
These checks might be based on rather old counter values in case we broke
from the loop after calling writeback_inodes_wbc().
Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR