From: Fengguang Wu <fengguang.wu@intel.com>
To: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Sha Zhengju <handai.szj@gmail.com>,
linux-mm@kvack.org, cgroups@vger.kernel.org, gthelen@google.com,
yinghan@google.com, akpm@linux-foundation.org, mhocko@suse.cz,
linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
viro@zeniv.linux.org.uk, linux-fsdevel@vger.kernel.org,
Sha Zhengju <handai.szj@taobao.com>
Subject: Re: [PATCH 3/7] Make TestSetPageDirty and dirty page accounting in one func
Date: Sat, 7 Jul 2012 22:42:28 +0800
Message-ID: <20120707144228.GA24329@localhost>
In-Reply-To: <4FF1827A.7060806@jp.fujitsu.com>
On Mon, Jul 02, 2012 at 08:14:02PM +0900, KAMEZAWA Hiroyuki wrote:
> (2012/06/28 20:01), Sha Zhengju wrote:
> > From: Sha Zhengju <handai.szj@taobao.com>
> >
> > Commit a8e7d49a ("Fix race in create_empty_buffers() vs __set_page_dirty_buffers()")
> > extracted TestSetPageDirty from __set_page_dirty, leaving it far away from
> > account_page_dirtied. But it's better to keep the two operations in one
> > function to stay modular. So, to avoid the potential race mentioned in
> > commit a8e7d49a, we can hold private_lock until __set_page_dirty completes.
> > From a quick look, there's no deadlock between ->private_lock and ->tree_lock.
> >
> > It's a preparatory patch for the following memcg dirty page accounting patches.
> >
> > Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
>
> I think there is no problem with the lock order.
I think so, too.
> My small concern is the impact on performance. IIUC, lock contention here can be
> seen if multiple threads write to the same file in parallel.
> Do you have any numbers before/after the patch?
That would be a worthwhile test. The patch moves ->tree_lock and
->i_lock inside ->private_lock, and these are often contended locks.
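
To make the nesting concrete, here is a minimal sketch of how I read
the proposed combined path. This is my reconstruction against 3.3-era
fs/buffer.c, not the patch verbatim:

static int __set_page_dirty(struct page *page,
		struct address_space *mapping, int warn)
{
	/* Dirty-bit test/set and dirty accounting now live together. */
	if (TestSetPageDirty(page))
		return 0;

	spin_lock_irq(&mapping->tree_lock);
	if (page->mapping) {	/* Race with truncate? */
		WARN_ON_ONCE(warn && !PageUptodate(page));
		account_page_dirtied(page, mapping);
		radix_tree_tag_set(&mapping->page_tree,
				page_index(page), PAGECACHE_TAG_DIRTY);
	}
	spin_unlock_irq(&mapping->tree_lock);

	/* ->i_lock is taken in here, still under ->private_lock. */
	__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
	return 1;
}

int __set_page_dirty_buffers(struct page *page)
{
	int newly_dirty;
	struct address_space *mapping = page_mapping(page);

	if (unlikely(!mapping))
		return !TestSetPageDirty(page);

	spin_lock(&mapping->private_lock);
	if (page_has_buffers(page)) {
		struct buffer_head *head = page_buffers(page);
		struct buffer_head *bh = head;

		do {
			set_buffer_dirty(bh);
			bh = bh->b_this_page;
		} while (bh != head);
	}
	/*
	 * Hold ->private_lock across the whole dirtying path, so the
	 * create_empty_buffers() race from commit a8e7d49a stays closed
	 * while TestSetPageDirty moves into __set_page_dirty().
	 */
	newly_dirty = __set_page_dirty(page, mapping, 1);
	spin_unlock(&mapping->private_lock);

	return newly_dirty;
}

So every buffer-backed dirtying would nest private_lock -> tree_lock
(and i_lock via __mark_inode_dirty), which is exactly where any extra
contention would show up.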
For example, in the case below (12 hard disks, each running one dd
write), ->tree_lock and ->private_lock take the #1 and #2 spots for
contention.
lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.3.0/lock_stat
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
class name con-bounces contentions waittime-min waittime-max waittime-total acq-bounces acquisitions holdtime-min holdtime-max holdtime-total
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
&(&mapping->tree_lock)->rlock: 18629034 19138284 0.09 1029.32 24353812.07 49650988 482883410 0.11 186.88 260706119.09
-----------------------------
&(&mapping->tree_lock)->rlock 783 [<ffffffff81109267>] tag_pages_for_writeback+0x2b/0x9d
&(&mapping->tree_lock)->rlock 3195817 [<ffffffff81100d6c>] add_to_page_cache_locked+0xa3/0x119
&(&mapping->tree_lock)->rlock 3863710 [<ffffffff81108df7>] test_set_page_writeback+0x63/0x140
&(&mapping->tree_lock)->rlock 3311518 [<ffffffff81172ade>] __set_page_dirty+0x25/0xa5
-----------------------------
&(&mapping->tree_lock)->rlock 3450725 [<ffffffff81100d6c>] add_to_page_cache_locked+0xa3/0x119
&(&mapping->tree_lock)->rlock 3225542 [<ffffffff81172ade>] __set_page_dirty+0x25/0xa5
&(&mapping->tree_lock)->rlock 2241958 [<ffffffff81108df7>] test_set_page_writeback+0x63/0x140
&(&mapping->tree_lock)->rlock 7339603 [<ffffffff8110ac33>] test_clear_page_writeback+0x64/0x155
...............................................................................................................................................................................................
&(&mapping->private_lock)->rlock: 1165199 1191201 0.11 2843.25 1621608.38 13341420 152761848 0.10 3727.92 33559035.07
--------------------------------
&(&mapping->private_lock)->rlock 1 [<ffffffff81172913>] __find_get_block_slow+0x5a/0x135
&(&mapping->private_lock)->rlock 385576 [<ffffffff811735d6>] create_empty_buffers+0x48/0xbf
&(&mapping->private_lock)->rlock 805624 [<ffffffff8117346d>] try_to_free_buffers+0x57/0xaa
--------------------------------
&(&mapping->private_lock)->rlock 1 [<ffffffff811746dd>] __getblk+0x1b8/0x257
&(&mapping->private_lock)->rlock 952718 [<ffffffff8117346d>] try_to_free_buffers+0x57/0xaa
&(&mapping->private_lock)->rlock 238482 [<ffffffff811735d6>] create_empty_buffers+0x48/0xbf
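
A note on reading the table, in case it helps (this is the stock
lockstat interface, nothing specific to this series): "contentions"
counts acquisitions that had to wait, "acquisitions" counts total
takes, and the wait/hold times are in microseconds. The call sites in
the upper block of each class are where waiters blocked; the lower
block shows the call sites they were contending against. With
CONFIG_LOCK_STAT=y, the counters are cleared with
"echo 0 > /proc/lock_stat", toggled via /proc/sys/kernel/lock_stat,
and read back from /proc/lock_stat after the workload; see
Documentation/lockstat.txt.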
Thanks,
Fengguang