Date: Tue, 12 Oct 2010 11:09:35 +0530
From: Balbir Singh
Reply-To: balbir@linux.vnet.ibm.com
To: Greg Thelen
Cc: Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 containers@lists.osdl.org, Andrea Righi, KAMEZAWA Hiroyuki,
 Daisuke Nishimura
Subject: Re: [PATCH 04/10] memcg: disable local interrupts in lock_page_cgroup()
Message-ID: <20101012053935.GD25875@balbir.in.ibm.com>
References: <1286175485-30643-1-git-send-email-gthelen@google.com>
 <1286175485-30643-5-git-send-email-gthelen@google.com>
In-Reply-To: <1286175485-30643-5-git-send-email-gthelen@google.com>

* Greg Thelen [2010-10-03 23:57:59]:

> If pages are being migrated from a memcg, then updates to that
> memcg's page statistics are protected by grabbing a bit spin lock
> using lock_page_cgroup().  In an upcoming commit memcg dirty page
> accounting will be updating memcg page accounting (specifically:
> num writeback pages) from softirq.  Avoid a deadlocking nested
> spin lock attempt by disabling interrupts on the local processor
> when grabbing the page_cgroup bit_spin_lock in lock_page_cgroup().
> This avoids the following deadlock:
>
> statistic
>       CPU 0                          CPU 1
>                                inc_file_mapped
>                                rcu_read_lock
>   start move
>   synchronize_rcu
>                                lock_page_cgroup
>                                  softirq
>                                  test_clear_page_writeback
>                                  mem_cgroup_dec_page_stat(NR_WRITEBACK)
>                                  rcu_read_lock
>                                  lock_page_cgroup   /* deadlock */
>                                  unlock_page_cgroup
>                                  rcu_read_unlock
>                                unlock_page_cgroup
>                                rcu_read_unlock
>
> By disabling interrupts in lock_page_cgroup, nested calls
> are avoided.  The softirq would be delayed until after inc_file_mapped
> enables interrupts when calling unlock_page_cgroup().
>
> The normal, fast path of memcg page stat updates typically
> does not need to call lock_page_cgroup(), so this change does
> not affect the performance of the common-case page accounting.
>
> Signed-off-by: Andrea Righi
> Signed-off-by: Greg Thelen

It will take more convincing; all the important functions
(charge/uncharge) use lock_page_cgroup(). I'd like to see the page
fault scalability test results. I am not against this patch; I just
want to see the scalability numbers.

-- 
	Three Cheers,
	Balbir
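
To make the locking change concrete, here is a minimal sketch of the
interrupt-disabling lock_page_cgroup()/unlock_page_cgroup() pair that the
quoted changelog describes, written against the page_cgroup bit-lock
primitives of that era (struct page_cgroup, PCG_LOCK, bit_spin_lock() from
include/linux/page_cgroup.h). The flags-passing shape is an illustrative
approximation of the approach, not the verbatim patch:

	/* Sketch: these would replace the existing helpers in
	 * include/linux/page_cgroup.h; struct page_cgroup and PCG_LOCK
	 * are the definitions already present in that header. */
	#include <linux/bit_spinlock.h>
	#include <linux/irqflags.h>

	static inline void lock_page_cgroup(struct page_cgroup *pc,
					    unsigned long *flags)
	{
		/* bit_spin_lock() has no irqsave variant, so disable
		 * local interrupts by hand and return the saved flags
		 * to the caller for the matching unlock. */
		local_irq_save(*flags);
		bit_spin_lock(PCG_LOCK, &pc->flags);
	}

	static inline void unlock_page_cgroup(struct page_cgroup *pc,
					      unsigned long flags)
	{
		bit_spin_unlock(PCG_LOCK, &pc->flags);
		local_irq_restore(flags);
	}

With this shape, a softirq such as test_clear_page_writeback() firing on the
same CPU is held off until local_irq_restore() runs in unlock_page_cgroup(),
which breaks the nested acquisition shown in the diagram above. The trade-off
is that every lock_page_cgroup() caller, including charge/uncharge, now pays
for an interrupt disable/enable pair, which is where the page fault
scalability question comes in.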