From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, Wu Fengguang <fengguang.wu@intel.com>,
Christoph Hellwig <hch@lst.de>,
Trond Myklebust <Trond.Myklebust@netapp.com>,
Dave Chinner <david@fromorbit.com>, Theodore Ts'o <tytso@mit.edu>,
Chris Mason <chris.mason@oracle.com>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Mel Gorman <mel@csn.ul.ie>, Rik van Riel <riel@redhat.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Greg Thelen <gthelen@google.com>,
Minchan Kim <minchan.kim@gmail.com>,
Vivek Goyal <vgoyal@redhat.com>,
Andrea Righi <arighi@develer.com>,
Balbir Singh <balbir@linux.vnet.ibm.com>,
linux-mm <linux-mm@kvack.org>,
linux-fsdevel@vger.kernel.org,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 23/27] writeback: trace balance_dirty_pages
Date: Thu, 03 Mar 2011 14:45:28 +0800
Message-ID: <20110303074951.713350492@intel.com>
In-Reply-To: <20110303064505.718671603@intel.com>
[-- Attachment #1: writeback-trace-balance_dirty_pages.patch --]
[-- Type: text/plain, Size: 8969 bytes --]
Add the balance_dirty_pages trace event. It will be useful for analyzing
the dynamics of the throttling algorithms and for debugging user-reported
problems.
Here is an interesting test that verifies the throttling behavior with
balance_dirty_pages() tracing. On a partition that can do ~60MB/s, a sparse
file is created and 4 rsync tasks are started with different write bandwidth
limits:
dd if=/dev/zero of=/mnt/1T bs=1M count=1 seek=1024000
echo 1 > /debug/tracing/events/writeback/balance_dirty_pages/enable
rsync localhost:/mnt/1T /mnt/a --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/A --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/b --bwlimit 20000&
rsync localhost:/mnt/1T /mnt/c --bwlimit 30000&
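For offline analysis, the event stream can be captured from the ftrace buffer
with plain shell tools (a minimal sketch; it assumes debugfs is mounted on
/debug, as in the enable command above, and trace.txt is just an arbitrary
capture file):

# dump the ftrace ring buffer, keeping only the new event
grep balance_dirty_pages: /debug/tracing/trace > trace.txt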
Trace output captured within 0.1 second, grouped by task:
rsync-3824 [004] 15002.076447: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130876 gap=5340 dirtied=192 pause=20
rsync-3822 [003] 15002.091701: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130777 gap=5113 dirtied=192 pause=20
rsync-3821 [006] 15002.004667: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129570 gap=3714 dirtied=64 pause=8
rsync-3821 [006] 15002.012654: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129589 gap=3733 dirtied=64 pause=8
rsync-3821 [006] 15002.021838: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129604 gap=3748 dirtied=64 pause=8
rsync-3821 [004] 15002.091193: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129583 gap=3983 dirtied=64 pause=8
rsync-3821 [004] 15002.102729: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129594 gap=3802 dirtied=64 pause=8
rsync-3821 [000] 15002.109252: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129619 gap=3827 dirtied=64 pause=8
rsync-3823 [002] 15002.009029: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128762 gap=2842 dirtied=64 pause=12
rsync-3823 [002] 15002.021598: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128813 gap=3021 dirtied=64 pause=12
rsync-3823 [003] 15002.032973: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128805 gap=2885 dirtied=64 pause=12
rsync-3823 [003] 15002.048800: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128823 gap=2967 dirtied=64 pause=12
rsync-3823 [003] 15002.060728: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128821 gap=3221 dirtied=64 pause=12
rsync-3823 [000] 15002.073152: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128825 gap=3225 dirtied=64 pause=12
rsync-3823 [005] 15002.090111: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128782 gap=3214 dirtied=64 pause=12
rsync-3823 [004] 15002.102520: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128764 gap=3036 dirtied=64 pause=12
The data vividly show that
- the heaviest writer (--bwlimit 30000) is throttled a bit (weight=39%)
- the lighter writers run at full speed (weight=15%, 15%, 30%)
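The per-task grouping above can be reproduced from the captured trace with a
one-liner (again only a sketch, reusing the assumed trace.txt capture from the
test setup):

# count events per task within the captured window
awk '{ print $1 }' trace.txt | sort | uniq -c
# or pull out the lines of a single task, e.g. the heaviest writer
grep rsync-3823 trace.txt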
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
include/linux/writeback.h | 5 +
include/trace/events/writeback.h | 92 ++++++++++++++++++++++++++++-
mm/page-writeback.c | 22 ++++++
3 files changed, 114 insertions(+), 5 deletions(-)
--- linux-next.orig/include/trace/events/writeback.h 2011-03-03 14:44:38.000000000 +0800
+++ linux-next/include/trace/events/writeback.h 2011-03-03 14:44:39.000000000 +0800
@@ -147,9 +147,6 @@ DEFINE_EVENT(wbc_class, name, \
DEFINE_WBC_EVENT(wbc_writeback_start);
DEFINE_WBC_EVENT(wbc_writeback_written);
DEFINE_WBC_EVENT(wbc_writeback_wait);
-DEFINE_WBC_EVENT(wbc_balance_dirty_start);
-DEFINE_WBC_EVENT(wbc_balance_dirty_written);
-DEFINE_WBC_EVENT(wbc_balance_dirty_wait);
DEFINE_WBC_EVENT(wbc_writepage);
#define KBps(x) ((x) << (PAGE_SHIFT - 10))
@@ -201,6 +198,95 @@ TRACE_EVENT(dirty_throttle_bandwidth,
)
);
+TRACE_EVENT(balance_dirty_pages,
+
+ TP_PROTO(struct backing_dev_info *bdi,
+ unsigned long thresh,
+ unsigned long dirty,
+ unsigned long bdi_dirty,
+ unsigned long task_bw,
+ unsigned long dirtied,
+ unsigned long period,
+ long pause,
+ unsigned long start_time),
+
+ TP_ARGS(bdi, thresh, dirty, bdi_dirty,
+ task_bw, dirtied, period, pause, start_time),
+
+ TP_STRUCT__entry(
+ __array( char, bdi, 32)
+ __field(unsigned long, bdi_weight)
+ __field(unsigned long, task_weight)
+ __field(unsigned long, limit)
+ __field(unsigned long, goal)
+ __field(unsigned long, dirty)
+ __field(unsigned long, bdi_goal)
+ __field(unsigned long, bdi_dirty)
+ __field(unsigned long, avg_dirty)
+ __field(unsigned long, base_bw)
+ __field(unsigned long, task_bw)
+ __field(unsigned long, dirtied)
+ __field(unsigned long, period)
+ __field( long, think)
+ __field( long, pause)
+ __field(unsigned long, paused)
+ ),
+
+ TP_fast_assign(
+ long numerator;
+ long denominator;
+
+ strlcpy(__entry->bdi, dev_name(bdi->dev), 32);
+
+ bdi_writeout_fraction(bdi, &numerator, &denominator);
+ __entry->bdi_weight = 1000 * numerator / denominator;
+ task_dirties_fraction(current, &numerator, &denominator);
+ __entry->task_weight = 1000 * numerator / denominator;
+
+ __entry->limit = default_backing_dev_info.dirty_threshold;
+ __entry->goal = thresh - thresh / DIRTY_SCOPE;
+ __entry->dirty = dirty;
+ __entry->bdi_goal = bdi->dirty_threshold -
+ bdi->dirty_threshold / DIRTY_SCOPE;
+ __entry->bdi_dirty = bdi_dirty;
+ __entry->avg_dirty = bdi->avg_dirty;
+ __entry->base_bw = KBps(bdi->throttle_bandwidth) >>
+ BASE_BW_SHIFT;
+ __entry->task_bw = KBps(task_bw);
+ __entry->dirtied = dirtied;
+ __entry->think = current->paused_when == 0 ? 0 :
+ (long)(jiffies - current->paused_when) * 1000 / HZ;
+ __entry->period = period * 1000 / HZ;
+ __entry->pause = pause * 1000 / HZ;
+ __entry->paused = (jiffies - start_time) * 1000 / HZ;
+ ),
+
+
+ TP_printk("bdi %s: bdi_weight=%lu task_weight=%lu "
+ "limit=%lu goal=%lu dirty=%lu "
+ "bdi_goal=%lu bdi_dirty=%lu avg_dirty=%lu "
+ "base_bw=%lu task_bw=%lu "
+ "dirtied=%lu "
+ "period=%lu think=%ld pause=%ld paused=%lu",
+ __entry->bdi,
+ __entry->bdi_weight,
+ __entry->task_weight,
+ __entry->limit,
+ __entry->goal,
+ __entry->dirty,
+ __entry->bdi_goal,
+ __entry->bdi_dirty,
+ __entry->avg_dirty,
+ __entry->base_bw, /* base throttle bandwidth */
+ __entry->task_bw, /* task throttle bandwidth */
+ __entry->dirtied,
+ __entry->period, /* ms */
+ __entry->think, /* ms */
+ __entry->pause, /* ms */
+ __entry->paused /* ms */
+ )
+);
+
DECLARE_EVENT_CLASS(writeback_congest_waited_template,
TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed),
--- linux-next.orig/mm/page-writeback.c 2011-03-03 14:44:38.000000000 +0800
+++ linux-next/mm/page-writeback.c 2011-03-03 14:44:39.000000000 +0800
@@ -227,14 +227,14 @@ void task_dirty_inc(struct task_struct *
/*
* Obtain an accurate fraction of the BDI's portion.
*/
-static void bdi_writeout_fraction(struct backing_dev_info *bdi,
+void bdi_writeout_fraction(struct backing_dev_info *bdi,
long *numerator, long *denominator)
{
prop_fraction_percpu(&vm_completions, &bdi->completions,
numerator, denominator);
}
-static inline void task_dirties_fraction(struct task_struct *tsk,
+void task_dirties_fraction(struct task_struct *tsk,
long *numerator, long *denominator)
{
prop_fraction_single(&vm_dirties, &tsk->dirties,
@@ -1251,6 +1251,15 @@ static void balance_dirty_pages(struct a
* it may be a light dirtier.
*/
if (unlikely(-pause < HZ*10)) {
+ trace_balance_dirty_pages(bdi,
+ dirty_thresh,
+ nr_dirty,
+ bdi_dirty,
+ bw,
+ pages_dirtied,
+ period,
+ pause,
+ start_time);
if (-pause > HZ/2) {
current->paused_when = jiffies;
current->nr_dirtied = 0;
@@ -1267,6 +1276,15 @@ static void balance_dirty_pages(struct a
pause = pause_max;
pause:
+ trace_balance_dirty_pages(bdi,
+ dirty_thresh,
+ nr_dirty,
+ bdi_dirty,
+ bw,
+ pages_dirtied,
+ period,
+ pause,
+ start_time);
current->paused_when = jiffies;
__set_current_state(TASK_UNINTERRUPTIBLE);
io_schedule_timeout(pause);
--- linux-next.orig/include/linux/writeback.h 2011-03-03 14:44:38.000000000 +0800
+++ linux-next/include/linux/writeback.h 2011-03-03 14:44:39.000000000 +0800
@@ -169,6 +169,11 @@ void global_dirty_limits(unsigned long *
unsigned long bdi_dirty_limit(struct backing_dev_info *bdi,
unsigned long dirty);
+void bdi_writeout_fraction(struct backing_dev_info *bdi,
+ long *numerator, long *denominator);
+void task_dirties_fraction(struct task_struct *tsk,
+ long *numerator, long *denominator);
+
void bdi_update_bandwidth(struct backing_dev_info *bdi,
unsigned long thresh,
unsigned long dirty,
--