From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4F582754.6050303@linux.vnet.ibm.com>
Date: Thu, 08 Mar 2012 11:28:20 +0800
From: Michael Wang
To: LKML
CC: Peter Zijlstra, Ingo Molnar, Johannes Weiner, Michal Hocko, Balbir Singh, KAMEZAWA Hiroyuki, Srivatsa Vaddagiri, Ram Pai
Subject: [BUG] soft lockup occur while using cpu cgroup
X-Mailing-List: linux-kernel@vger.kernel.org

Hi, All

I created 7 cpu-type cgroups and started kernbench in each of them, then hit a soft lockup BUG after several hours.

My testing environment is an x86 server running RedHat 6.2, using the tip kernel tree (3.3.0-rc6+); all cgroup subsystems are mounted automatically.

My testing steps are:

1. Use the command "cgcreate -g cpu:/subcgx" (x is 1~7) to create 7 cpu cgroups.
2. Start 7 shells and use the command "echo $$ > /cgroup/cpu/subcgx/tasks" to attach each shell to its cgroup.
3. Start kernbench in each shell.

I have extracted all the distinct bug logs, like:

1: BUG: soft lockup - CPU#1 stuck for 23s!
[cc1:4501]
Call Trace:
 [] native_flush_tlb_others+0xe/0x10
 [] flush_tlb_page+0x5f/0xb0
 [] ptep_clear_flush+0x41/0x60
 [] try_to_unmap_one+0xb3/0x450
 [] try_to_unmap_file+0xad/0x2c0
 [] try_to_unmap+0x47/0x80
 [] shrink_page_list+0x2d8/0x670
 [] shrink_inactive_list+0x1df/0x530
 [] shrink_mem_cgroup_zone+0x259/0x320
 [] shrink_zone+0x63/0xb0
 [] shrink_zones+0x76/0x200
 [] do_try_to_free_pages+0x9f/0x3e0
 [] try_to_free_pages+0x9b/0x120
 [] ? wakeup_kswapd+0x4a/0x140
 [] __alloc_pages_slowpath+0x31e/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_vma+0x9a/0x150
 [] read_swap_cache_async+0xf2/0x150
 [] ? valid_swaphandles+0x69/0x150
 [] swapin_readahead+0x87/0xc0
 [] do_swap_page+0x115/0x630
 [] ? do_wp_page+0x38e/0x830
 [] handle_pte_fault+0x1aa/0x210
 [] handle_mm_fault+0x1d5/0x350
 [] do_page_fault+0x13e/0x460
 [] ? mntput+0x23/0x40
 [] ? __fput+0x16b/0x240
 [] page_fault+0x25/0x30

2: BUG: soft lockup - CPU#6 stuck for 23s! [cc1:5186]
Call Trace:
 [] ? flush_tlb_others_ipi+0x130/0x140
 [] native_flush_tlb_others+0xe/0x10
 [] flush_tlb_page+0x5f/0xb0
 [] ptep_clear_flush+0x41/0x60
 [] try_to_unmap_one+0xb3/0x450
 [] ? vma_prio_tree_next+0x3d/0x70
 [] try_to_unmap_file+0xad/0x2c0
 [] ? ktime_get_ts+0xad/0xe0
 [] try_to_unmap+0x47/0x80
 [] shrink_page_list+0x2d8/0x670
 [] shrink_inactive_list+0x1df/0x530
 [] shrink_mem_cgroup_zone+0x259/0x320
 [] shrink_zone+0x63/0xb0
 [] shrink_zones+0x76/0x200
 [] do_try_to_free_pages+0x9f/0x3e0
 [] try_to_free_pages+0x9b/0x120
 [] ? wakeup_kswapd+0x4a/0x140
 [] __alloc_pages_slowpath+0x31e/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_vma+0x9a/0x150
 [] read_swap_cache_async+0xf2/0x150
 [] ? valid_swaphandles+0x69/0x150
 [] swapin_readahead+0x87/0xc0
 [] do_swap_page+0x115/0x630
 [] ? do_wp_page+0x38e/0x830
 [] handle_pte_fault+0x1aa/0x210
 [] handle_mm_fault+0x1d5/0x350
 [] do_page_fault+0x13e/0x460
 [] ? __dequeue_entity+0x30/0x50
 [] ? __switch_to+0x1a2/0x440
 [] ? __schedule+0x3f7/0x730
 [] page_fault+0x25/0x30

3: BUG: soft lockup - CPU#10 stuck for 22s! [cc1:31966]
Call Trace:
 [] ? page_alloc_cpu_notify+0x60/0x60
 [] smp_call_function+0x22/0x30
 [] on_each_cpu+0x2b/0x70
 [] drain_all_pages+0x1c/0x20
 [] __alloc_pages_slowpath+0x37a/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_vma+0x9a/0x150
 [] read_swap_cache_async+0xf2/0x150
 [] ? valid_swaphandles+0x69/0x150
 [] swapin_readahead+0x87/0xc0
 [] do_swap_page+0x115/0x630
 [] ? do_wp_page+0x38e/0x830
 [] handle_pte_fault+0x1aa/0x210
 [] handle_mm_fault+0x1d5/0x350
 [] do_page_fault+0x13e/0x460
 [] ? __dequeue_entity+0x30/0x50
 [] ? __switch_to+0x1a2/0x440
 [] ? __schedule+0x3f7/0x730
 [] page_fault+0x25/0x30

4: BUG: soft lockup - CPU#11 stuck for 23s! [cc1:9614]
Call Trace:
 [] grab_super_passive+0x25/0xa0
 [] prune_super+0x41/0x1c0
 [] shrink_slab+0xa1/0x2c0
 [] ? shrink_zones+0x9a/0x200
 [] do_try_to_free_pages+0x2e3/0x3e0
 [] try_to_free_pages+0x9b/0x120
 [] ? wakeup_kswapd+0x4a/0x140
 [] __alloc_pages_slowpath+0x31e/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_current+0xaa/0x110
 [] __page_cache_alloc+0x8f/0xb0
 [] filemap_fault+0x1af/0x4b0
 [] ? mem_cgroup_update_page_stat+0x1e/0x100
 [] __do_fault+0x72/0x5a0
 [] handle_pte_fault+0xe7/0x210
 [] handle_mm_fault+0x1d5/0x350
 [] do_page_fault+0x13e/0x460
 [] ? mntput+0x23/0x40
 [] ? __fput+0x16b/0x240
 [] page_fault+0x25/0x30

5: BUG: soft lockup - CPU#11 stuck for 23s! [gnome-session:6449]
Call Trace:
 [] put_super+0x1d/0x40
 [] drop_super+0x22/0x30
 [] prune_super+0x199/0x1c0
 [] shrink_slab+0xa1/0x2c0
 [] ? shrink_zones+0x9a/0x200
 [] do_try_to_free_pages+0x2e3/0x3e0
 [] try_to_free_pages+0x9b/0x120
 [] ? wakeup_kswapd+0x4a/0x140
 [] __alloc_pages_slowpath+0x31e/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_current+0xaa/0x110
 [] __page_cache_alloc+0x8f/0xb0
 [] filemap_fault+0x1af/0x4b0
 [] ? mem_cgroup_update_page_stat+0x1e/0x100
 [] __do_fault+0x72/0x5a0
 [] handle_pte_fault+0xe7/0x210
 [] handle_mm_fault+0x1d5/0x350
 [] do_page_fault+0x13e/0x460
 [] ? security_file_permission+0x8b/0x90
 [] page_fault+0x25/0x30

6: BUG: soft lockup - CPU#2 stuck for 40s! [kworker/2:1:17511]
Call Trace:
 [] exit_notify+0x17/0x190
 [] do_exit+0x1f9/0x470
 [] ? manage_workers+0x120/0x120
 [] kthread+0x95/0xb0
 [] kernel_thread_helper+0x4/0x10
 [] ? kthread_freezable_should_stop+0x70/0x70
 [] ? gs_change+0x13/0x13

As "mem_cgroup" appears many times, at first I thought this was caused by the mem cgroup, so I unmounted all the cgroup subsystems besides the cpu cgroup and tested again, but the issue still exists. The new log is like:

BUG: soft lockup - CPU#0 stuck for 22s! [cc1:15411]
Call Trace:
 [] ? page_alloc_cpu_notify+0x60/0x60
 [] smp_call_function+0x22/0x30
 [] on_each_cpu+0x2b/0x70
 [] drain_all_pages+0x1c/0x20
 [] __alloc_pages_slowpath+0x37a/0x710
 [] ? zone_watermark_ok+0x1f/0x30
 [] __alloc_pages_nodemask+0x1a4/0x1f0
 [] alloc_pages_current+0xaa/0x110
 [] __page_cache_alloc+0x8f/0xb0
 [] find_or_create_page+0x4f/0xb0
 [] grow_dev_page+0x3c/0x210
 [] __getblk_slow+0x9c/0x130
 [] __getblk+0x66/0x70
 [] __breadahead+0x12/0x40
 [] __ext4_get_inode_loc+0x346/0x400 [ext4]
 [] ext4_iget+0x86/0x820 [ext4]
 [] ext4_lookup+0xa5/0x120 [ext4]
 [] d_alloc_and_lookup+0x45/0x90
 [] ? d_lookup+0x35/0x60
 [] do_lookup+0x278/0x390
 [] ? security_inode_permission+0x1c/0x30
 [] do_last+0xe1/0x830
 [] path_openat+0xd6/0x3e0
 [] ? putname+0x35/0x50
 [] ? user_path_at_empty+0x63/0xa0
 [] do_filp_open+0x49/0xa0
 [] ? strncpy_from_user+0x4a/0x90
 [] ? getname_flags+0x1d0/0x280
 [] ? alloc_fd+0x4d/0x120
 [] do_sys_open+0x108/0x1e0
 [] ? __audit_syscall_entry+0xcc/0x210
 [] sys_open+0x21/0x30
 [] system_call_fastpath+0x16/0x1b

So I think this is some issue related to the combination of mm and cgroups. Please tell me if you need more info from the log :)

Regards,
Michael Wang
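
P.S. For convenience, a minimal script form of the reproduction steps above. The mount point /cgroup/cpu, cgcreate (from libcgroup), and kernbench are as described in the report; the script only prints the command sequence so it can be reviewed first, then piped to "sh" as root to actually run.

```shell
#!/bin/sh
# Prints the reproduction sequence from the report; pipe the output to
# "sh" (as root) to execute it. Assumes the cpu cgroup subsystem is
# mounted at /cgroup/cpu and that cgcreate and kernbench are installed.
for x in 1 2 3 4 5 6 7; do
    # step 1: create the cpu cgroup subcgX
    echo "cgcreate -g cpu:/subcg$x"
    # steps 2-3: attach a shell to the cgroup and run kernbench in it,
    # backgrounded so all seven builds run in parallel
    echo "sh -c 'echo \$\$ > /cgroup/cpu/subcg$x/tasks; kernbench' &"
done
echo "wait"
```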