From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yong Zhang
Subject: [PATCH -rt] memcg: use migrate_disable()/migrate_enable() in memcg_check_events()
Date: Wed, 16 Nov 2011 17:16:53 +0800
Message-ID: <20111116091653.GA8692@zhy>
References: <20111115084059.GA23250@zhy>
Reply-To: Yong Zhang
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Cc: LKML, linux-rt-users, Steven Rostedt, Peter Zijlstra
To: Thomas Gleixner
Return-path: Received: from mail-gx0-f174.google.com ([209.85.161.174]:39229
 "EHLO mail-gx0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1753689Ab1KPJRF (ORCPT); Wed, 16 Nov 2011 04:17:05 -0500
Content-Disposition: inline
In-Reply-To: <20111115084059.GA23250@zhy>
Sender: linux-rt-users-owner@vger.kernel.org
List-ID:

Looking at commit 4799401f [memcg: Fix race condition in memcg_check_events()
with this_cpu usage], we just want to disable migration, so use the right
API in -rt. This cures the warning below.

BUG: sleeping function called from invalid context at linux/kernel/rtmutex.c:645
in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: swapper/0
1 lock held by swapper/0/1:
 #0:  (&sig->cred_guard_mutex){+.+.+.}, at: [] prepare_bprm_creds+0x35/0x80
Pid: 1, comm: swapper/0 Not tainted 3.2.0-rc1-rt2-11311-g3c4c0e7-dirty #10
Call Trace:
 [] __might_sleep+0x12e/0x1e0
 [] rt_spin_lock+0x24/0x60
 [] memcg_check_events+0x11e/0x230
 [] T.1144+0x8a/0xf0
 [] __mem_cgroup_commit_charge_lrucare+0x56/0x180
 [] ? sub_preempt_count+0xa9/0xe0
 [] mem_cgroup_cache_charge+0xd8/0xe0
 [] add_to_page_cache_locked+0x49/0x100
 [] ? find_get_page+0xdf/0x1a0
 [] add_to_page_cache_lru+0x22/0x50
 [] do_read_cache_page+0x75/0x1a0
 [] ? nfs_follow_link+0xc0/0xc0
 [] read_cache_page_async+0x1c/0x20
 [] read_cache_page+0xe/0x20
 [] nfs_follow_link+0x59/0xc0
 [] path_openat+0x2a7/0x470
 [] do_filp_open+0x49/0xa0
 [] open_exec+0x32/0xf0
 [] load_elf_binary+0x85b/0x1d30
 [] ? __lock_acquire+0x4f5/0xbf0
 [] ? native_sched_clock+0x29/0x80
 [] ? local_clock+0x4f/0x60
 [] ? rt_spin_lock_slowunlock+0x78/0x80
 [] ? trace_hardirqs_off_caller+0x29/0x120
 [] ? put_lock_stats+0xe/0x40
 [] ? rt_spin_lock_slowunlock+0x78/0x80
 [] ? elf_map+0x1d0/0x1d0
 [] ? sub_preempt_count+0xa9/0xe0
 [] ? elf_map+0x1d0/0x1d0
 [] search_binary_handler+0x1c8/0x4b0
 [] ? search_binary_handler+0x57/0x4b0
 [] do_execve_common+0x276/0x330
 [] do_execve+0x3a/0x40
 [] sys_execve+0x4a/0x80
 [] kernel_execve+0x68/0xd0
 [] ? run_init_process+0x23/0x30
 [] init_post+0x58/0xd0
 [] kernel_init+0x156/0x160
 [] kernel_thread_helper+0x4/0x10
 [] ? finish_task_switch+0x8c/0x110
 [] ? _raw_spin_unlock_irq+0x3b/0x70
 [] ? retint_restore_args+0xe/0xe
 [] ? parse_early_options+0x20/0x20
 [] ? gs_change+0xb/0xb

Signed-off-by: Yong Zhang
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: Peter Zijlstra
---
 mm/memcontrol.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6aff93c..afa1954 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -722,7 +722,7 @@ static void __mem_cgroup_target_update(struct mem_cgroup *memcg, int target)
  */
 static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 {
-	preempt_disable();
+	migrate_disable();
 	/* threshold event is triggered in finer grain than soft limit */
 	if (unlikely(__memcg_event_check(memcg, MEM_CGROUP_TARGET_THRESH))) {
 		mem_cgroup_threshold(memcg);
@@ -742,7 +742,7 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 		}
 #endif
 	}
-	preempt_enable();
+	migrate_enable();
 }
 
 static struct mem_cgroup *mem_cgroup_from_cont(struct cgroup *cont)
-- 
1.7.5.4