* oom_killer crash linux system @ 2010-10-18 1:47 Figo.zhang
  2010-10-18 1:57 ` KAMEZAWA Hiroyuki
From: Figo.zhang @ 2010-10-18 1:47 UTC (permalink / raw)
To: linux-kernel, rientjes, fengguang.wu; +Cc: figo1802

Hi all,

I have a desktop running Linux 2.6.35 with 2 GB of RAM. I turned off the swap partition and opened large applications so the system consumed more and more memory. When usage exceeded about 1.7 GB, the system crashed. Looking at /var/log/messages, I see that the oom-killer killed the "Xorg" process, which brought the system down. Why does the oom-killer select fundamental processes such as Xorg and GNOME instead of the applications I opened, which had consumed huge amounts of memory?

Thanks!
Figo.zhang

Here is the message log:

Sep 28 17:27:56 myhost kernel: Mem-Info: Sep 28 17:27:56 myhost kernel: DMA per-cpu: Sep 28 17:27:56 myhost kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 28 17:27:56 myhost kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 28 17:27:56 myhost kernel: Normal per-cpu: Sep 28 17:27:56 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 0 Sep 28 17:27:56 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 0 Sep 28 17:27:56 myhost kernel: HighMem per-cpu: Sep 28 17:27:56 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 0 Sep 28 17:27:56 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 30 Sep 28 17:27:56 myhost kernel: active_anon:208615 inactive_anon:72967 isolated_anon:0 Sep 28 17:27:56 myhost kernel: active_file:93203 inactive_file:100283 isolated_file:0 Sep 28 17:27:56 myhost kernel: unevictable:5 dirty:15 writeback:0 unstable:0 Sep 28 17:27:56 myhost kernel: free:11973 slab_reclaimable:4067 slab_unreclaimable:3996 Sep 28 17:27:56 myhost kernel: mapped:85844 shmem:47072 pagetables:1776 bounce:0 Sep 28 17:27:56 myhost kernel: DMA free:7972kB min:64kB low:80kB high:96kB active_anon:1536kB inactive_anon:3972kB active_file:1076kB inactive_file:1232kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB 
writeback:0kB mapped:1144kB shmem:3104kB slab_reclaimable:24kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 28 17:27:56 myhost kernel: lowmem_reserve[]: 0 865 1980 1980 Sep 28 17:27:56 myhost kernel: Normal free:39448kB min:3728kB low:4660kB high:5592kB active_anon:326192kB inactive_anon:104140kB active_file:165600kB inactive_file:165936kB unevictable:20kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:20kB dirty:0kB writeback:0kB mapped:142524kB shmem:62444kB slab_reclaimable:16244kB slab_unreclaimable:15968kB kernel_stack:2664kB pagetables:7104kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 28 17:27:56 myhost kernel: lowmem_reserve[]: 0 0 8921 8921 Sep 28 17:27:56 myhost kernel: HighMem free:472kB min:512kB low:1712kB high:2912kB active_anon:506732kB inactive_anon:183756kB active_file:206136kB inactive_file:233964kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:0kB dirty:124kB writeback:0kB mapped:199708kB shmem:122740kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 28 17:27:56 myhost kernel: lowmem_reserve[]: 0 0 0 0 Sep 28 17:27:56 myhost kernel: DMA: 5*4kB 8*8kB 39*16kB 43*32kB 16*64kB 6*128kB 6*256kB 1*512kB 2*1024kB 0*2048kB 0*4096kB = 7972kB Sep 28 17:27:56 myhost kernel: Normal: 292*4kB 107*8kB 1161*16kB 199*32kB 45*64kB 31*128kB 8*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 39448kB Sep 28 17:27:56 myhost kernel: HighMem: 4*4kB 0*8kB 1*16kB 4*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 480kB Sep 28 17:27:56 myhost kernel: 240589 total pagecache pages Sep 28 17:27:56 myhost kernel: 0 pages in swap cache Sep 28 17:27:56 myhost kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 28 17:27:56 myhost kernel: Free swap = 0kB Sep 28 17:27:56 myhost kernel: Total swap = 0kB Sep 28 17:27:56 myhost kernel: 515070 pages RAM Sep 28 17:27:56 myhost kernel: 287745 pages HighMem Sep 28 17:27:56 myhost kernel: 8280 pages reserved Sep 28 17:27:56 myhost kernel: 433443 pages shared Sep 28 17:27:56 myhost kernel: 284256 pages non-shared Sep 28 17:28:10 myhost kernel: Xorg invoked oom-killer: gfp_mask=0x80d2, order=0, oom_adj=0 Sep 28 17:28:10 myhost kernel: Xorg cpuset=/ mems_allowed=0 Sep 28 17:28:10 myhost kernel: Pid: 1586, comm: Xorg Not tainted 2.6.35-ARCH #1 Sep 28 17:28:10 myhost kernel: Call Trace: Sep 28 17:28:10 myhost kernel: [<c10c00c9>] dump_header+0x69/0x1a0 Sep 28 17:28:10 myhost kernel: [<c11886c9>] ? ___ratelimit+0x89/0x110 Sep 28 17:28:10 myhost kernel: [<c10c0254>] oom_kill_process+0x54/0x140 Sep 28 17:28:10 myhost kernel: [<c10c064e>] ? select_bad_process +0x9e/0xd0 Sep 28 17:28:10 myhost kernel: [<c10c06d1>] __out_of_memory+0x51/0xb0 Sep 28 17:28:10 myhost kernel: [<c10c09c2>] out_of_memory+0x52/0xd0 Sep 28 17:28:10 myhost kernel: [<c10c4022>] __alloc_pages_nodemask +0x5e2/0x600 Sep 28 17:28:10 myhost kernel: [<c10dfed6>] __vmalloc_area_node +0x76/0x100 Sep 28 17:28:10 myhost kernel: [<f8613aa7>] ? 
i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<c10dfffa>] __vmalloc_node+0x9a/0xa0 Sep 28 17:28:10 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<c10e0155>] __vmalloc+0x25/0x30 Sep 28 17:28:10 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<f8613aa7>] i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:10 myhost kernel: [<f8461457>] ? drm_mm_get_block_generic +0x37/0x90 [drm] Sep 28 17:28:10 myhost kernel: [<f8615161>] i915_gem_object_bind_to_gtt +0xf1/0x260 [i915] Sep 28 17:28:10 myhost kernel: [<c12e817a>] ? __mutex_lock_slowpath +0x1ea/0x2b0 Sep 28 17:28:10 myhost kernel: [<f8616897>] i915_gem_object_pin +0xc7/0xf0 [i915] Sep 28 17:28:10 myhost kernel: [<f8616ecd>] i915_gem_do_execbuffer +0x4ed/0x1090 [i915] Sep 28 17:28:10 myhost kernel: [<f861638f>] ? i915_gem_object_set_to_gtt_domain+0x4f/0x100 [i915] Sep 28 17:28:10 myhost kernel: [<c118d862>] ? _copy_from_user+0x32/0x50 Sep 28 17:28:10 myhost kernel: [<f8617ae7>] i915_gem_execbuffer2 +0x77/0x1e0 [i915] Sep 28 17:28:10 myhost kernel: [<f8458308>] drm_ioctl+0x1b8/0x460 [drm] Sep 28 17:28:10 myhost kernel: [<f8617a70>] ? i915_gem_execbuffer2 +0x0/0x1e0 [i915] Sep 28 17:28:10 myhost kernel: [<c10f672c>] ? do_sync_read+0x9c/0xd0 Sep 28 17:28:10 myhost kernel: [<c11048e4>] vfs_ioctl+0x34/0xa0 Sep 28 17:28:10 myhost kernel: [<f8458150>] ? drm_ioctl+0x0/0x460 [drm] Sep 28 17:28:10 myhost kernel: [<c1104ff6>] do_vfs_ioctl+0x66/0x560 Sep 28 17:28:10 myhost kernel: [<c115d93f>] ? security_file_permission +0xf/0x20 Sep 28 17:28:10 myhost kernel: [<c10f6a3d>] ? rw_verify_area+0x5d/0xd0 Sep 28 17:28:10 myhost kernel: [<c10f6f8d>] ? 
vfs_read+0x11d/0x180 Sep 28 17:28:10 myhost kernel: [<c110554f>] sys_ioctl+0x5f/0x80 Sep 28 17:28:10 myhost kernel: [<c100379f>] sysenter_do_call+0x12/0x28 Sep 28 17:28:10 myhost kernel: Mem-Info: Sep 28 17:28:10 myhost kernel: DMA per-cpu: Sep 28 17:28:10 myhost kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 28 17:28:10 myhost kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 28 17:28:10 myhost kernel: Normal per-cpu: Sep 28 17:28:10 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 9 Sep 28 17:28:10 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 30 Sep 28 17:28:10 myhost kernel: HighMem per-cpu: Sep 28 17:28:10 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 2 Sep 28 17:28:10 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 30 Sep 28 17:28:10 myhost kernel: active_anon:209750 inactive_anon:72131 isolated_anon:0 Sep 28 17:28:10 myhost kernel: active_file:93065 inactive_file:100131 isolated_file:0 Sep 28 17:28:10 myhost kernel: unevictable:5 dirty:12 writeback:18 unstable:0 Sep 28 17:28:10 myhost kernel: free:11950 slab_reclaimable:4061 slab_unreclaimable:3996 Sep 28 17:28:10 myhost kernel: mapped:85924 shmem:47752 pagetables:1765 bounce:0 Sep 28 17:28:10 myhost kernel: DMA free:7972kB min:64kB low:80kB high:96kB active_anon:1536kB inactive_anon:3980kB active_file:1076kB inactive_file:1232kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:0kB mapped:1144kB shmem:3112kB slab_reclaimable:24kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:3744 all_unreclaimable? 
yes Sep 28 17:28:10 myhost kernel: lowmem_reserve[]: 0 865 1980 1980 Sep 28 17:28:10 myhost kernel: Normal free:39360kB min:3728kB low:4660kB high:5592kB active_anon:330892kB inactive_anon:100024kB active_file:165300kB inactive_file:165724kB unevictable:20kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:20kB dirty:32kB writeback:80kB mapped:142300kB shmem:63064kB slab_reclaimable:16220kB slab_unreclaimable:15968kB kernel_stack:2648kB pagetables:7060kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:500960 all_unreclaimable? yes Sep 28 17:28:10 myhost kernel: lowmem_reserve[]: 0 0 8921 8921 Sep 28 17:28:10 myhost kernel: HighMem free:468kB min:512kB low:1712kB high:2912kB active_anon:506572kB inactive_anon:184520kB active_file:205884kB inactive_file:233568kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:0kB dirty:16kB writeback:0kB mapped:200252kB shmem:124832kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:697088 all_unreclaimable? 
yes Sep 28 17:28:10 myhost kernel: lowmem_reserve[]: 0 0 0 0 Sep 28 17:28:10 myhost kernel: DMA: 5*4kB 8*8kB 39*16kB 43*32kB 16*64kB 6*128kB 6*256kB 1*512kB 2*1024kB 0*2048kB 0*4096kB = 7972kB Sep 28 17:28:10 myhost kernel: Normal: 440*4kB 124*8kB 1160*16kB 198*32kB 45*64kB 25*128kB 8*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 39360kB Sep 28 17:28:10 myhost kernel: HighMem: 17*4kB 0*8kB 3*16kB 3*32kB 2*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 468kB Sep 28 17:28:10 myhost kernel: 240978 total pagecache pages Sep 28 17:28:10 myhost kernel: 0 pages in swap cache Sep 28 17:28:10 myhost kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 28 17:28:10 myhost kernel: Free swap = 0kB Sep 28 17:28:10 myhost kernel: Total swap = 0kB Sep 28 17:28:10 myhost kernel: 515070 pages RAM Sep 28 17:28:10 myhost kernel: 287745 pages HighMem Sep 28 17:28:10 myhost kernel: 8280 pages reserved Sep 28 17:28:10 myhost kernel: 430061 pages shared Sep 28 17:28:10 myhost kernel: 286215 pages non-shared Sep 28 17:28:40 myhost kernel: conky invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0 Sep 28 17:28:40 myhost kernel: conky cpuset=/ mems_allowed=0 Sep 28 17:28:40 myhost kernel: Pid: 28231, comm: conky Not tainted 2.6.35-ARCH #1 Sep 28 17:28:40 myhost kernel: Call Trace: Sep 28 17:28:40 myhost kernel: [<c10c00c9>] dump_header+0x69/0x1a0 Sep 28 17:28:40 myhost kernel: [<c11886c9>] ? ___ratelimit+0x89/0x110 Sep 28 17:28:40 myhost kernel: [<c10c0254>] oom_kill_process+0x54/0x140 Sep 28 17:28:40 myhost kernel: [<c10c064e>] ? select_bad_process +0x9e/0xd0 Sep 28 17:28:40 myhost kernel: [<c10c06d1>] __out_of_memory+0x51/0xb0 Sep 28 17:28:40 myhost kernel: [<c10c09c2>] out_of_memory+0x52/0xd0 Sep 28 17:28:40 myhost kernel: [<c10c4022>] __alloc_pages_nodemask +0x5e2/0x600 Sep 28 17:28:40 myhost kernel: [<c10d6c65>] handle_mm_fault+0x665/0x8f0 Sep 28 17:28:40 myhost kernel: [<c1028a20>] ? 
do_page_fault+0x0/0x3b0 Sep 28 17:28:40 myhost kernel: [<c1028b70>] do_page_fault+0x150/0x3b0 Sep 28 17:28:40 myhost kernel: [<c1028a20>] ? do_page_fault+0x0/0x3b0 Sep 28 17:28:40 myhost kernel: [<c12e9fa3>] error_code+0x73/0x78 Sep 28 17:28:40 myhost kernel: [<c118d3c0>] ? __copy_to_user_ll +0x40/0x70 Sep 28 17:28:40 myhost kernel: [<c118d8ae>] copy_to_user+0x2e/0x50 Sep 28 17:28:40 myhost kernel: [<c1110cbd>] seq_read+0x24d/0x3f0 Sep 28 17:28:40 myhost kernel: [<c10da11d>] ? mmap_region+0x15d/0x400 Sep 28 17:28:40 myhost kernel: [<c1110a70>] ? seq_read+0x0/0x3f0 Sep 28 17:28:40 myhost kernel: [<c1139afe>] proc_reg_read+0x5e/0x90 Sep 28 17:28:40 myhost kernel: [<c10f6f07>] vfs_read+0x97/0x180 Sep 28 17:28:40 myhost kernel: [<c1139aa0>] ? proc_reg_read+0x0/0x90 Sep 28 17:28:40 myhost kernel: [<c10f702d>] sys_read+0x3d/0x70 Sep 28 17:28:40 myhost kernel: [<c100379f>] sysenter_do_call+0x12/0x28 Sep 28 17:28:40 myhost kernel: Mem-Info: Sep 28 17:28:40 myhost kernel: DMA per-cpu: Sep 28 17:28:40 myhost kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 28 17:28:40 myhost kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 28 17:28:40 myhost kernel: Normal per-cpu: Sep 28 17:28:40 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 2 Sep 28 17:28:40 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 30 Sep 28 17:28:40 myhost kernel: HighMem per-cpu: Sep 28 17:28:40 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 1 Sep 28 17:28:40 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 30 Sep 28 17:28:40 myhost kernel: active_anon:209853 inactive_anon:72409 isolated_anon:0 Sep 28 17:28:40 myhost kernel: active_file:93594 inactive_file:99962 isolated_file:32 Sep 28 17:28:40 myhost kernel: unevictable:5 dirty:0 writeback:0 unstable:0 Sep 28 17:28:40 myhost kernel: free:11938 slab_reclaimable:4089 slab_unreclaimable:3994 Sep 28 17:28:40 myhost kernel: mapped:86446 shmem:48138 pagetables:1765 bounce:0 Sep 28 17:28:40 myhost kernel: DMA free:7972kB min:64kB low:80kB high:96kB active_anon:1596kB inactive_anon:3864kB 
active_file:1092kB inactive_file:1228kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:0kB mapped:1160kB shmem:2996kB slab_reclaimable:24kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:3808 all_unreclaimable? yes Sep 28 17:28:40 myhost kernel: lowmem_reserve[]: 0 865 1980 1980 Sep 28 17:28:40 myhost kernel: Normal free:39324kB min:3728kB low:4660kB high:5592kB active_anon:327108kB inactive_anon:104096kB active_file:165324kB inactive_file:165648kB unevictable:20kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:20kB dirty:0kB writeback:0kB mapped:142348kB shmem:62508kB slab_reclaimable:16332kB slab_unreclaimable:15960kB kernel_stack:2632kB pagetables:7060kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:547296 all_unreclaimable? no Sep 28 17:28:40 myhost kernel: lowmem_reserve[]: 0 0 8921 8921 Sep 28 17:28:40 myhost kernel: HighMem free:456kB min:512kB low:1712kB high:2912kB active_anon:510708kB inactive_anon:181676kB active_file:207960kB inactive_file:233072kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:0kB dirty:0kB writeback:0kB mapped:202276kB shmem:127048kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:667264 all_unreclaimable? 
yes Sep 28 17:28:40 myhost kernel: lowmem_reserve[]: 0 0 0 0 Sep 28 17:28:40 myhost kernel: DMA: 7*4kB 11*8kB 31*16kB 42*32kB 16*64kB 7*128kB 6*256kB 1*512kB 2*1024kB 0*2048kB 0*4096kB = 7972kB Sep 28 17:28:40 myhost kernel: Normal: 430*4kB 163*8kB 1102*16kB 199*32kB 45*64kB 30*128kB 8*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 39376kB Sep 28 17:28:40 myhost kernel: HighMem: 2*4kB 0*8kB 4*16kB 4*32kB 2*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 456kB Sep 28 17:28:40 myhost kernel: 241743 total pagecache pages Sep 28 17:28:40 myhost kernel: 0 pages in swap cache Sep 28 17:28:40 myhost kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 28 17:28:40 myhost kernel: Free swap = 0kB Sep 28 17:28:40 myhost kernel: Total swap = 0kB Sep 28 17:28:40 myhost kernel: 515070 pages RAM Sep 28 17:28:40 myhost kernel: 287745 pages HighMem Sep 28 17:28:40 myhost kernel: 8280 pages reserved Sep 28 17:28:40 myhost kernel: 434921 pages shared Sep 28 17:28:40 myhost kernel: 282424 pages non-shared Sep 28 17:28:42 myhost kernel: Xorg invoked oom-killer: gfp_mask=0x80d2, order=0, oom_adj=0 Sep 28 17:28:42 myhost kernel: Xorg cpuset=/ mems_allowed=0 Sep 28 17:28:42 myhost kernel: Pid: 1586, comm: Xorg Not tainted 2.6.35-ARCH #1 Sep 28 17:28:42 myhost kernel: Call Trace: Sep 28 17:28:42 myhost kernel: [<c10c00c9>] dump_header+0x69/0x1a0 Sep 28 17:28:42 myhost kernel: [<c11886c9>] ? ___ratelimit+0x89/0x110 Sep 28 17:28:42 myhost kernel: [<c10c0254>] oom_kill_process+0x54/0x140 Sep 28 17:28:42 myhost kernel: [<c10c064e>] ? select_bad_process +0x9e/0xd0 Sep 28 17:28:42 myhost kernel: [<c10c06d1>] __out_of_memory+0x51/0xb0 Sep 28 17:28:42 myhost kernel: [<c10c09c2>] out_of_memory+0x52/0xd0 Sep 28 17:28:42 myhost kernel: [<c10c4022>] __alloc_pages_nodemask +0x5e2/0x600 Sep 28 17:28:42 myhost kernel: [<c10dfed6>] __vmalloc_area_node +0x76/0x100 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? 
i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<c10dfffa>] __vmalloc_node+0x9a/0xa0 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<c10e0155>] __vmalloc+0x25/0x30 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8461457>] ? drm_mm_get_block_generic +0x37/0x90 [drm] Sep 28 17:28:42 myhost kernel: [<f8615161>] i915_gem_object_bind_to_gtt +0xf1/0x260 [i915] Sep 28 17:28:42 myhost kernel: [<c12e817a>] ? __mutex_lock_slowpath +0x1ea/0x2b0 Sep 28 17:28:42 myhost kernel: [<f8616897>] i915_gem_object_pin +0xc7/0xf0 [i915] Sep 28 17:28:42 myhost kernel: [<f8616ecd>] i915_gem_do_execbuffer +0x4ed/0x1090 [i915] Sep 28 17:28:42 myhost kernel: [<f861638f>] ? i915_gem_object_set_to_gtt_domain+0x4f/0x100 [i915] Sep 28 17:28:42 myhost kernel: [<f8617ab5>] ? i915_gem_execbuffer2 +0x45/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<c118d862>] ? _copy_from_user+0x32/0x50 Sep 28 17:28:42 myhost kernel: [<f8617ae7>] i915_gem_execbuffer2 +0x77/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<f8458308>] drm_ioctl+0x1b8/0x460 [drm] Sep 28 17:28:42 myhost kernel: [<f8617a70>] ? i915_gem_execbuffer2 +0x0/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<c115d93f>] ? security_file_permission +0xf/0x20 Sep 28 17:28:42 myhost kernel: [<c10f6a3d>] ? rw_verify_area+0x5d/0xd0 Sep 28 17:28:42 myhost kernel: [<c11048e4>] vfs_ioctl+0x34/0xa0 Sep 28 17:28:42 myhost kernel: [<f8458150>] ? drm_ioctl+0x0/0x460 [drm] Sep 28 17:28:42 myhost kernel: [<c1104ff6>] do_vfs_ioctl+0x66/0x560 Sep 28 17:28:42 myhost kernel: [<c10625c2>] ? 
lock_hrtimer_base.clone.21 +0x22/0x50 Sep 28 17:28:42 myhost kernel: [<c10626b7>] ? hrtimer_try_to_cancel +0x77/0xd0 Sep 28 17:28:42 myhost kernel: [<c1048434>] ? do_setitimer+0x154/0x200 Sep 28 17:28:42 myhost kernel: [<c104858c>] ? sys_setitimer+0x4c/0xa0 Sep 28 17:28:42 myhost kernel: [<c110554f>] sys_ioctl+0x5f/0x80 Sep 28 17:28:42 myhost kernel: [<c100379f>] sysenter_do_call+0x12/0x28 Sep 28 17:28:42 myhost kernel: Mem-Info: Sep 28 17:28:42 myhost kernel: DMA per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 28 17:28:42 myhost kernel: Normal per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 1 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 42 Sep 28 17:28:42 myhost kernel: HighMem per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 0 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 32 Sep 28 17:28:42 myhost kernel: active_anon:209044 inactive_anon:73210 isolated_anon:0 Sep 28 17:28:42 myhost kernel: active_file:93609 inactive_file:99939 isolated_file:32 Sep 28 17:28:42 myhost kernel: unevictable:5 dirty:0 writeback:0 unstable:0 Sep 28 17:28:42 myhost kernel: free:11947 slab_reclaimable:4057 slab_unreclaimable:3984 Sep 28 17:28:42 myhost kernel: mapped:86447 shmem:48674 pagetables:1735 bounce:0 Sep 28 17:28:42 myhost kernel: DMA free:7972kB min:64kB low:80kB high:96kB active_anon:1596kB inactive_anon:3864kB active_file:1092kB inactive_file:1228kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:0kB mapped:1160kB shmem:2996kB slab_reclaimable:24kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:3648 all_unreclaimable? 
yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 865 1980 1980 Sep 28 17:28:42 myhost kernel: Normal free:39304kB min:3728kB low:4660kB high:5592kB active_anon:324036kB inactive_anon:107432kB active_file:165332kB inactive_file:165540kB unevictable:20kB isolated(anon):0kB isolated(file):128kB present:885944kB mlocked:20kB dirty:0kB writeback:0kB mapped:142356kB shmem:62772kB slab_reclaimable:16204kB slab_unreclaimable:15920kB kernel_stack:2624kB pagetables:6940kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:644096 all_unreclaimable? yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 0 8921 8921 Sep 28 17:28:42 myhost kernel: HighMem free:512kB min:512kB low:1712kB high:2912kB active_anon:510544kB inactive_anon:181544kB active_file:208012kB inactive_file:232988kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:0kB dirty:0kB writeback:0kB mapped:202272kB shmem:128928kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:667424 all_unreclaimable? 
yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 0 0 0 Sep 28 17:28:42 myhost kernel: DMA: 7*4kB 11*8kB 31*16kB 42*32kB 16*64kB 7*128kB 6*256kB 1*512kB 2*1024kB 0*2048kB 0*4096kB = 7972kB Sep 28 17:28:42 myhost kernel: Normal: 446*4kB 162*8kB 1106*16kB 197*32kB 45*64kB 29*128kB 8*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 39304kB Sep 28 17:28:42 myhost kernel: HighMem: 48*4kB 0*8kB 0*16kB 4*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB Sep 28 17:28:42 myhost kernel: 242288 total pagecache pages Sep 28 17:28:42 myhost kernel: 0 pages in swap cache Sep 28 17:28:42 myhost kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 28 17:28:42 myhost kernel: Free swap = 0kB Sep 28 17:28:42 myhost kernel: Total swap = 0kB Sep 28 17:28:42 myhost kernel: 515070 pages RAM Sep 28 17:28:42 myhost kernel: 287745 pages HighMem Sep 28 17:28:42 myhost kernel: 8280 pages reserved Sep 28 17:28:42 myhost kernel: 430124 pages shared Sep 28 17:28:42 myhost kernel: 285406 pages non-shared Sep 28 17:28:42 myhost kernel: Xorg invoked oom-killer: gfp_mask=0x80d2, order=0, oom_adj=0 Sep 28 17:28:42 myhost kernel: Xorg cpuset=/ mems_allowed=0 Sep 28 17:28:42 myhost kernel: Pid: 1586, comm: Xorg Not tainted 2.6.35-ARCH #1 Sep 28 17:28:42 myhost kernel: Call Trace: Sep 28 17:28:42 myhost kernel: [<c10c00c9>] dump_header+0x69/0x1a0 Sep 28 17:28:42 myhost kernel: [<c11886c9>] ? ___ratelimit+0x89/0x110 Sep 28 17:28:42 myhost kernel: [<c10c0254>] oom_kill_process+0x54/0x140 Sep 28 17:28:42 myhost kernel: [<c10c064e>] ? select_bad_process +0x9e/0xd0 Sep 28 17:28:42 myhost kernel: [<c10c06d1>] __out_of_memory+0x51/0xb0 Sep 28 17:28:42 myhost kernel: [<c10c09c2>] out_of_memory+0x52/0xd0 Sep 28 17:28:42 myhost kernel: [<c10c4022>] __alloc_pages_nodemask +0x5e2/0x600 Sep 28 17:28:42 myhost kernel: [<c10dfed6>] __vmalloc_area_node +0x76/0x100 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? 
i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<c10dfffa>] __vmalloc_node+0x9a/0xa0 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<c10e0155>] __vmalloc+0x25/0x30 Sep 28 17:28:42 myhost kernel: [<f8613aa7>] ? i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8613aa7>] i915_gem_object_get_pages +0xf7/0x1c0 [i915] Sep 28 17:28:42 myhost kernel: [<f8461457>] ? drm_mm_get_block_generic +0x37/0x90 [drm] Sep 28 17:28:42 myhost kernel: [<f8615161>] i915_gem_object_bind_to_gtt +0xf1/0x260 [i915] Sep 28 17:28:42 myhost kernel: [<c12e817a>] ? __mutex_lock_slowpath +0x1ea/0x2b0 Sep 28 17:28:42 myhost kernel: [<f8616897>] i915_gem_object_pin +0xc7/0xf0 [i915] Sep 28 17:28:42 myhost kernel: [<f8616ecd>] i915_gem_do_execbuffer +0x4ed/0x1090 [i915] Sep 28 17:28:42 myhost kernel: [<f861638f>] ? i915_gem_object_set_to_gtt_domain+0x4f/0x100 [i915] Sep 28 17:28:42 myhost kernel: [<f8617ab5>] ? i915_gem_execbuffer2 +0x45/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<c118d862>] ? _copy_from_user+0x32/0x50 Sep 28 17:28:42 myhost kernel: [<f8617ae7>] i915_gem_execbuffer2 +0x77/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<f8458308>] drm_ioctl+0x1b8/0x460 [drm] Sep 28 17:28:42 myhost kernel: [<f8617a70>] ? i915_gem_execbuffer2 +0x0/0x1e0 [i915] Sep 28 17:28:42 myhost kernel: [<c115d93f>] ? security_file_permission +0xf/0x20 Sep 28 17:28:42 myhost kernel: [<c10f6a3d>] ? rw_verify_area+0x5d/0xd0 Sep 28 17:28:42 myhost kernel: [<c11048e4>] vfs_ioctl+0x34/0xa0 Sep 28 17:28:42 myhost kernel: [<f8458150>] ? drm_ioctl+0x0/0x460 [drm] Sep 28 17:28:42 myhost kernel: [<c1104ff6>] do_vfs_ioctl+0x66/0x560 Sep 28 17:28:42 myhost kernel: [<c10625c2>] ? 
lock_hrtimer_base.clone.21 +0x22/0x50 Sep 28 17:28:42 myhost kernel: [<c10626b7>] ? hrtimer_try_to_cancel +0x77/0xd0 Sep 28 17:28:42 myhost kernel: [<c1048434>] ? do_setitimer+0x154/0x200 Sep 28 17:28:42 myhost kernel: [<c104858c>] ? sys_setitimer+0x4c/0xa0 Sep 28 17:28:42 myhost kernel: [<c110554f>] sys_ioctl+0x5f/0x80 Sep 28 17:28:42 myhost kernel: [<c100379f>] sysenter_do_call+0x12/0x28 Sep 28 17:28:42 myhost kernel: Mem-Info: Sep 28 17:28:42 myhost kernel: DMA per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 28 17:28:42 myhost kernel: Normal per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 1 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 42 Sep 28 17:28:42 myhost kernel: HighMem per-cpu: Sep 28 17:28:42 myhost kernel: CPU 0: hi: 186, btch: 31 usd: 0 Sep 28 17:28:42 myhost kernel: CPU 1: hi: 186, btch: 31 usd: 32 Sep 28 17:28:42 myhost kernel: active_anon:209044 inactive_anon:73210 isolated_anon:0 Sep 28 17:28:42 myhost kernel: active_file:93609 inactive_file:99964 isolated_file:0 Sep 28 17:28:42 myhost kernel: unevictable:5 dirty:0 writeback:0 unstable:0 Sep 28 17:28:42 myhost kernel: free:11947 slab_reclaimable:4057 slab_unreclaimable:3984 Sep 28 17:28:42 myhost kernel: mapped:86447 shmem:48674 pagetables:1735 bounce:0 Sep 28 17:28:42 myhost kernel: DMA free:7972kB min:64kB low:80kB high:96kB active_anon:1596kB inactive_anon:3864kB active_file:1092kB inactive_file:1228kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:0kB mapped:1160kB shmem:2996kB slab_reclaimable:24kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:3648 all_unreclaimable? 
yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 865 1980 1980 Sep 28 17:28:42 myhost kernel: Normal free:39304kB min:3728kB low:4660kB high:5592kB active_anon:324036kB inactive_anon:107432kB active_file:165332kB inactive_file:165640kB unevictable:20kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:20kB dirty:0kB writeback:0kB mapped:142356kB shmem:62772kB slab_reclaimable:16204kB slab_unreclaimable:15920kB kernel_stack:2624kB pagetables:6940kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:644864 all_unreclaimable? yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 0 8921 8921 Sep 28 17:28:42 myhost kernel: HighMem free:512kB min:512kB low:1712kB high:2912kB active_anon:510544kB inactive_anon:181544kB active_file:208012kB inactive_file:232988kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:0kB dirty:0kB writeback:0kB mapped:202272kB shmem:128928kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:667488 all_unreclaimable? 
yes Sep 28 17:28:42 myhost kernel: lowmem_reserve[]: 0 0 0 0 Sep 28 17:28:42 myhost kernel: DMA: 7*4kB 11*8kB 31*16kB 42*32kB 16*64kB 7*128kB 6*256kB 1*512kB 2*1024kB 0*2048kB 0*4096kB = 7972kB Sep 28 17:28:42 myhost kernel: Normal: 446*4kB 162*8kB 1106*16kB 197*32kB 45*64kB 29*128kB 8*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 39304kB Sep 28 17:28:42 myhost kernel: HighMem: 48*4kB 0*8kB 0*16kB 4*32kB 3*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 512kB Sep 28 17:28:42 myhost kernel: 242288 total pagecache pages Sep 28 17:28:42 myhost kernel: 0 pages in swap cache Sep 28 17:28:42 myhost kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 28 17:28:42 myhost kernel: Free swap = 0kB Sep 28 17:28:42 myhost kernel: Total swap = 0kB Sep 28 17:28:42 myhost kernel: 515070 pages RAM Sep 28 17:28:42 myhost kernel: 287745 pages HighMem Sep 28 17:28:42 myhost kernel: 8280 pages reserved Sep 28 17:28:42 myhost kernel: 430123 pages shared Sep 28 17:28:42 myhost kernel: 285406 pages non-shared Sep 28 17:28:42 myhost kernel: device eth0 left promiscuous mode Sep 28 17:28:42 myhost kernel: bridge-eth0: disabled promiscuous mode Sep 28 17:29:11 myhost cntlm[1523]: Connection accepted from 127.0.0.1:55605 Sep 28 17:29:23 myhost cntlm[1523]: Connection accepted from 127.0.0.1:55607 Sep 28 17:29:53 myhost chmsee: Libgcrypt warning: missing initialization - please fix the application Sep 28 17:30:18 myhost cntlm[1523]: Connection accepted from 127.0.0.1:55619 Sep 28 17:30:21 myhost cntlm[1523]: Connection accepted from 127.0.0.1:55621 Sep 28 17:30:25 myhost cntlm[1523]: The request was denied! 
Sep 28 17:31:48 myhost cntlm[1523]: Connection accepted from 127.0.0.1:37870 Sep 28 17:34:16 myhost kernel: i915 D f6639f00 0 1003 2 0x00000000 Sep 28 17:34:16 myhost kernel: f6639f10 00000046 00000002 f6639f00 f645f8c0 c2808140 f7042280 c12f2680 Sep 28 17:34:16 myhost kernel: c2888140 c1002666 c1487140 c1487140 c1487140 f645f8c0 c1487140 00000000 Sep 28 17:34:16 myhost kernel: 00000000 c1487140 f645f8c0 00000001 f70ec814 f70ec818 00000246 f6639f3c Sep 28 17:34:16 myhost kernel: Call Trace: Sep 28 17:34:16 myhost kernel: [<c1002666>] ? __switch_to+0xb6/0x180 Sep 28 17:34:16 myhost kernel: [<c12e809c>] __mutex_lock_slowpath +0x10c/0x2b0 Sep 28 17:34:16 myhost kernel: [<c12e824b>] mutex_lock+0xb/0x20 Sep 28 17:34:18 myhost kernel: [<f8613448>] i915_gem_retire_work_handler +0x28/0xc0 [i915] Sep 28 17:34:18 myhost kernel: [<f8613420>] ? i915_gem_retire_work_handler+0x0/0xc0 [i915] Sep 28 17:34:18 myhost kernel: [<c105abc0>] worker_thread+0x110/0x250 Sep 28 17:34:18 myhost kernel: [<c102f0b0>] ? __wake_up_common+0x40/0x70 Sep 28 17:34:18 myhost kernel: [<c105eb30>] ? autoremove_wake_function +0x0/0x40 Sep 28 17:34:18 myhost kernel: [<c105aab0>] ? worker_thread+0x0/0x250 Sep 28 17:34:18 myhost kernel: [<c105e71c>] kthread+0x6c/0x80 Sep 28 17:34:18 myhost kernel: [<c105e6b0>] ? 
kthread+0x0/0x80 Sep 28 17:34:18 myhost kernel: [<c1003d3e>] kernel_thread_helper +0x6/0x18 Sep 28 17:34:20 myhost kernel: Xorg D f73c1da8 0 1586 1525 0x00400004 Sep 28 17:34:20 myhost kernel: f73c1db8 00203086 00000002 f73c1da8 f62d7b80 000000db c173fe00 c12f2680 Sep 28 17:34:20 myhost kernel: c2808140 f73c1d9c c1487140 c1487140 c1487140 f707b810 c1487140 00000000 Sep 28 17:34:20 myhost kernel: 00203246 c1487140 f707b810 00000001 f70ec814 f70ec818 00203246 f73c1de4 Sep 28 17:34:20 myhost kernel: Call Trace: Sep 28 17:34:20 myhost kernel: [<c12e809c>] __mutex_lock_slowpath +0x10c/0x2b0 Sep 28 17:34:20 myhost kernel: [<c12e824b>] mutex_lock+0xb/0x20 Sep 28 17:34:20 myhost kernel: [<f8618cb0>] i915_gem_madvise_ioctl +0x40/0x130 [i915] Sep 28 17:34:20 myhost kernel: [<f8458308>] drm_ioctl+0x1b8/0x460 [drm] Sep 28 17:34:20 myhost kernel: [<f8618c70>] ? i915_gem_madvise_ioctl +0x0/0x130 [i915] Sep 28 17:34:20 myhost kernel: [<c10f672c>] ? do_sync_read+0x9c/0xd0 Sep 28 17:34:20 myhost kernel: [<c11048e4>] vfs_ioctl+0x34/0xa0 Sep 28 17:34:20 myhost kernel: [<f8458150>] ? drm_ioctl+0x0/0x460 [drm] Sep 28 17:34:20 myhost kernel: [<c1104ff6>] do_vfs_ioctl+0x66/0x560 Sep 28 17:34:20 myhost kernel: [<c115d93f>] ? security_file_permission +0xf/0x20 Sep 28 17:34:20 myhost kernel: [<c10f6a3d>] ? rw_verify_area+0x5d/0xd0 Sep 28 17:34:20 myhost kernel: [<c10f6f8d>] ? vfs_read+0x11d/0x180 Sep 28 17:34:20 myhost kernel: [<c110554f>] sys_ioctl+0x5f/0x80 Sep 28 17:34:20 myhost kernel: [<c100379f>] sysenter_do_call+0x12/0x28 ^ permalink raw reply [flat|nested] 22+ messages in thread
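On 2.6.35-era kernels like the one in this report, a specific process can be shielded from the OOM killer through its /proc/<pid>/oom_adj file (-17 is OOM_DISABLE on these kernels; the file was later superseded by oom_score_adj). A minimal sketch, assuming sufficient privileges and a placeholder pid — the helper name is ours, not a kernel or library API:

```python
# Sketch: exempt a process from OOM killing via the 2.6.35-era
# /proc/<pid>/oom_adj interface. Writing a negative value requires
# CAP_SYS_RESOURCE; the pid passed in is whatever process you want
# to protect (e.g. the Xorg pid from this log).
import pathlib

OOM_DISABLE = -17  # lowest oom_adj value on these kernels; range is -17..15

def set_oom_adj(pid: int, value: int = OOM_DISABLE) -> None:
    """Write an oom_adj value for `pid` (old-kernel interface)."""
    if not -17 <= value <= 15:
        raise ValueError("oom_adj must be in -17..15")
    pathlib.Path(f"/proc/{pid}/oom_adj").write_text(f"{value}\n")
```

This only moves the problem, of course: with no swap and the machine genuinely out of memory, some other task will be killed instead.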
* Re: oom_killer crash linux system
  2010-10-18  1:47 oom_killer crash linux system Figo.zhang
@ 2010-10-18  1:57 ` KAMEZAWA Hiroyuki
  2010-10-18  2:11   ` Wu Fengguang
  0 siblings, 1 reply; 22+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-10-18 1:57 UTC (permalink / raw)
To: Figo.zhang; +Cc: linux-kernel, rientjes, fengguang.wu, figo1802

On Mon, 18 Oct 2010 09:47:39 +0800
"Figo.zhang" <zhangtianfei@leadcoretech.com> wrote:

> hi all,
>
> i have a desktop run linux2.6.35 and have 2GB ram. i turn off the swap
> partition, and i open huge applications , let the system eat more and
> more memory.
> when the system eat more than 1.7G ram, the system crashed.

2.6.36-rc series has a completely new logic, please try.

Thanks,
-Kame

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-18  1:57 ` KAMEZAWA Hiroyuki
@ 2010-10-18  2:11   ` Wu Fengguang
  2010-10-18  8:13     ` Figo.zhang
  0 siblings, 1 reply; 22+ messages in thread
From: Wu Fengguang @ 2010-10-18 2:11 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: Figo.zhang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

On Mon, Oct 18, 2010 at 09:57:22AM +0800, KAMEZAWA Hiroyuki wrote:
> On Mon, 18 Oct 2010 09:47:39 +0800
> "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote:
>
> > hi all,
> >
> > i have a desktop run linux2.6.35 and have 2GB ram. i turn off the swap
> > partition, and i open huge applications , let the system eat more and
> > more memory.
> > when the system eat more than 1.7G ram, the system crashed.
> >
>
> 2.6.36-rc series has a completely new logic, please try.

And the new logic should help this case.

commit a63d83f427fbce97a6cea0db2e64b0eb8435cd10
Author: David Rientjes <rientjes@google.com>
Date:   Mon Aug 9 17:19:46 2010 -0700

    oom: badness heuristic rewrite
    ...
    Instead of basing the heuristic on mm->total_vm for each task, the task's
    rss and swap space is used instead.  This is a better indication of the
    amount of memory that will be freeable if the oom killed task is chosen
    and subsequently exits.  This helps specifically in cases where KDE or
    GNOME is chosen for oom kill on desktop systems instead of a memory
    hogging task.
    ...

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 22+ messages in thread
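[Editor's note: the rewritten heuristic's per-task score is exported through /proc/&lt;pid&gt;/oom_score, so the kernel's current ranking can be inspected without adding printk()s. A minimal sketch, assuming a mounted /proc; /proc/&lt;pid&gt;/comm needs 2.6.33+, and the score reflects the rss+swap heuristic only on 2.6.36+ kernels.]

```shell
# Show the five processes with the highest OOM badness score.
# Sketch: assumes Linux with /proc mounted; on pre-2.6.36 kernels
# the score still comes from the old total_vm-based heuristic.
for dir in /proc/[0-9]*; do
    score=$(cat "$dir/oom_score" 2>/dev/null) || continue
    comm=$(cat "$dir/comm" 2>/dev/null)
    printf '%s\t%s\t%s\n' "$score" "${dir#/proc/}" "$comm"
done | sort -rn | head -5
```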
* Re: oom_killer crash linux system
  2010-10-18  2:11 ` Wu Fengguang
@ 2010-10-18  8:13   ` Figo.zhang
  2010-10-18  9:10     ` KOSAKI Motohiro
  0 siblings, 1 reply; 22+ messages in thread
From: Figo.zhang @ 2010-10-18 8:13 UTC (permalink / raw)
To: Wu Fengguang
Cc: KAMEZAWA Hiroyuki, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

I want to test the oom-killer. My desktop (Dell OptiPlex 780, i686
kernel) has 2GB of RAM. I turned off the swap partition, opened huge PDF
files and applications, and let the system consume as much RAM as possible.

On 2.6.35, I can use up to 1.75GB of RAM.

On 2.6.36-rc8, I only get to 1.53GB of RAM before the system becomes
very slow and crashes after a few minutes, with very busy disk I/O: I
see disk reads of up to 8MB/s but writes of only about 400KB/s (as
shown by conky).

What changed between 2.6.35 and 2.6.36-rc8? Did page reclaim and page
writeback performance regress under high memory pressure?

Best,
Figo.zhang

On Mon, 2010-10-18 at 10:11 +0800, Wu Fengguang wrote:
> On Mon, Oct 18, 2010 at 09:57:22AM +0800, KAMEZAWA Hiroyuki wrote:
> > On Mon, 18 Oct 2010 09:47:39 +0800
> > "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote:
> >
> > > hi all,
> > >
> > > i have a desktop run linux2.6.35 and have 2GB ram. i turn off the swap
> > > partition, and i open huge applications , let the system eat more and
> > > more memory.
> > > when the system eat more than 1.7G ram, the system crashed.
> > >
> >
> > 2.6.36-rc series has a completely new logic, please try.
>
> And the new logic should help this case.
>
> commit a63d83f427fbce97a6cea0db2e64b0eb8435cd10
> Author: David Rientjes <rientjes@google.com>
> Date:   Mon Aug 9 17:19:46 2010 -0700
>
>     oom: badness heuristic rewrite
>     ...
>     Instead of basing the heuristic on mm->total_vm for each task, the task's
>     rss and swap space is used instead.  This is a better indication of the
>     amount of memory that will be freeable if the oom killed task is chosen
>     and subsequently exits.  This helps specifically in cases where KDE or
>     GNOME is chosen for oom kill on desktop systems instead of a memory
>     hogging task.
>     ...
>
> Thanks,
> Fengguang

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-18  8:13 ` Figo.zhang
@ 2010-10-18  9:10   ` KOSAKI Motohiro
  2010-10-18 15:31     ` Wu Fengguang
  2010-10-19  2:07     ` Figo.zhang
  0 siblings, 2 replies; 22+ messages in thread
From: KOSAKI Motohiro @ 2010-10-18 9:10 UTC (permalink / raw)
To: Figo.zhang
Cc: kosaki.motohiro, Wu Fengguang, KAMEZAWA Hiroyuki, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

> i want to test the oom-killer. My desktop (Dell optiplex 780, i686
> kernel)have 2GB ram, i turn off the swap partition, and open a huge pdf
> files and applications, and let the system eat huge ram.
>
> in 2.6.35, i can use ram up to 1.75GB,
>
> but in 2.6.36-rc8, i just use to 1.53GB ram , the system come very slow
> and crashed after some minutes , the DiskIO is very busy. i see the
> DiskIO read is up to 8MB/s, write just only 400KB/s, (see by conky).
>
> what change between 2.6.35 to 2.6.36-rc8? is it low performance about
> page reclaim and page writeback in high press ram useage?

Quite a lot of changes ;)
Can you please send us your crash log?

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-18  9:10 ` KOSAKI Motohiro
@ 2010-10-18 15:31   ` Wu Fengguang
  2010-10-19  2:07   ` Figo.zhang
  1 sibling, 0 replies; 22+ messages in thread
From: Wu Fengguang @ 2010-10-18 15:31 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Figo.zhang, KAMEZAWA Hiroyuki, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

On Mon, Oct 18, 2010 at 05:10:00PM +0800, KOSAKI Motohiro wrote:
> >
> > i want to test the oom-killer. My desktop (Dell optiplex 780, i686
> > kernel)have 2GB ram, i turn off the swap partition, and open a huge pdf
> > files and applications, and let the system eat huge ram.
> >
> > in 2.6.35, i can use ram up to 1.75GB,
> >
> > but in 2.6.36-rc8, i just use to 1.53GB ram , the system come very slow
> > and crashed after some minutes , the DiskIO is very busy. i see the
> > DiskIO read is up to 8MB/s, write just only 400KB/s, (see by conky).

There are many more reads than writes; it looks like some thrashing.
How do you measure the 1.75GB/1.53GB?

> > what change between 2.6.35 to 2.6.36-rc8? is it low performance about
> > page reclaim and page writeback in high press ram useage?
>
> very lots of change ;)
> can you please send us your crash log?

And there are several ways to help debug the problem.

- reduce the dirty limit

        echo 5 > /proc/sys/vm/dirty_ratio

- enable vmscan trace

        mount -t debugfs none /sys/kernel/debug
        echo 1 > /sys/kernel/debug/tracing/events/vmscan/enable
        <eat memory and wait for crash>
        cat /sys/kernel/debug/tracing/trace > trace.log

- log vmstat events

        i=1
        while true; do
                cp /proc/vmstat vmstat.$i
                let i=i+1
                sleep 1
        done

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 22+ messages in thread
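[Editor's note: once a run has been captured with the vmstat-snapshot loop above, two snapshots can be diffed to see which reclaim counters moved between them. A small sketch; vmstat.1 and vmstat.2 are the hypothetical snapshot files written by that loop.]

```shell
# Print every /proc/vmstat counter that changed between two snapshots.
# Sketch: vmstat.1 / vmstat.2 are snapshot files from the loop above;
# /proc/vmstat lines have the form "<counter-name> <value>".
awk 'NR == FNR { before[$1] = $2; next }
     $2 != before[$1] { printf "%-28s %12d -> %d\n", $1, before[$1], $2 }' \
    vmstat.1 vmstat.2
```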
* Re: oom_killer crash linux system
  2010-10-18  9:10 ` KOSAKI Motohiro
  2010-10-18 15:31   ` Wu Fengguang
@ 2010-10-19  2:07   ` Figo.zhang
  2010-10-19  2:59     ` KAMEZAWA Hiroyuki
  ` (2 more replies)
  1 sibling, 3 replies; 22+ messages in thread
From: Figo.zhang @ 2010-10-19 2:07 UTC (permalink / raw)
To: KOSAKI Motohiro
Cc: Wu Fengguang, KAMEZAWA Hiroyuki, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

>
> very lots of change ;)
> can you please send us your crash log?

I added some printk()s in select_bad_process() and oom_badness() to see
the pid, totalpages, points, memory usage, and which process is finally
selected to kill. I found that the oom-killer selects syslog-ng, mysqld,
nautilus and VirtualBox to kill, so my questions are:

1. syslog-ng, mysqld and nautilus are fundamental system processes; if
the oom-killer kills them, the system will be damaged, e.g. important
data may be lost.

2. The new oom-killer just uses the percentage of used memory as the
score for selecting the kill candidate, but how can it know whether a
process is very important to the system? As for oom_score_adj, do any
commercial Linux distributions use it to protect critical processes?
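[Editor's note on question 2: since 2.6.36 each task exposes /proc/&lt;pid&gt;/oom_score_adj, ranging from -1000 (never OOM-kill) to 1000 (prefer to kill), which an init script or distribution can use to shield critical daemons. A hedged sketch; "mysqld" and the -800 value are purely illustrative, and writing a negative value requires CAP_SYS_RESOURCE.]

```shell
# Make a critical daemon the OOM killer's last choice by lowering its
# oom_score_adj. Sketch: "mysqld" and -800 are illustrative; -1000
# would exempt the process entirely, and negative writes need
# CAP_SYS_RESOURCE (so run as root, e.g. from an init script).
for pid in $(pidof mysqld); do
    echo -800 > "/proc/$pid/oom_score_adj"
done
```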
Best, Figo.zhang here is the message log: Oct 19 09:44:08 myhost kernel: [ 618.440834] select_bad_process, pid=584, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440839] oom_badness: memoy use =52, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440842] oom_badness: pid = 1304, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440844] select_bad_process, pid=1304, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440846] select_bad_process, ===========have choose pid=1304 to kill, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440848] oom_badness: memoy use =115, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440851] oom_badness: pid = 1305, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440853] select_bad_process, pid=1305, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440855] oom_badness: memoy use =215, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440858] oom_badness: pid = 1307, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440860] select_bad_process, pid=1307, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440862] oom_badness: memoy use =214, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440865] oom_badness: pid = 1310, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440867] select_bad_process, pid=1310, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440875] oom_badness: memoy use =62, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440877] oom_badness: pid = 1311, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440879] select_bad_process, pid=1311, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440882] oom_badness: memoy use =33, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440884] oom_badness: pid = 1340, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440886] select_bad_process, pid=1340, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440888] oom_badness: 
memoy use =35, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440891] oom_badness: pid = 1354, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440893] select_bad_process, pid=1354, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440895] oom_badness: memoy use =40, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440897] oom_badness: pid = 1356, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440899] select_bad_process, pid=1356, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440902] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440904] oom_badness: pid = 1419, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440906] select_bad_process, pid=1419, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440909] oom_badness: memoy use =103, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440911] oom_badness: pid = 1450, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440913] select_bad_process, pid=1450, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440915] oom_badness: memoy use =52, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440918] oom_badness: pid = 1453, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440919] select_bad_process, pid=1453, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440922] oom_badness: memoy use =23, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440924] oom_badness: pid = 1465, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440926] select_bad_process, pid=1465, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440929] oom_badness: memoy use =23, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440931] oom_badness: pid = 1466, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440933] select_bad_process, pid=1466, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440936] oom_badness: memoy use =23, 
totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440938] oom_badness: pid = 1467, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440940] select_bad_process, pid=1467, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440943] oom_badness: memoy use =23, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440945] oom_badness: pid = 1468, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440947] select_bad_process, pid=1468, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440949] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440951] oom_badness: pid = 1469, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440953] select_bad_process, pid=1469, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440956] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440958] oom_badness: pid = 1470, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440960] select_bad_process, pid=1470, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440962] select_bad_process, pid=1547, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440965] oom_badness: memoy use =170, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440967] oom_badness: pid = 1571, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440969] select_bad_process, pid=1571, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440971] oom_badness: memoy use =23, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440974] oom_badness: pid = 1586, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440976] select_bad_process, pid=1586, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440978] oom_badness: memoy use =179, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440985] oom_badness: pid = 1592, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440987] select_bad_process, pid=1592, points=1 Oct 
19 09:44:08 myhost kernel: [ 618.440988] oom_badness: memoy use =168, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440990] oom_badness: pid = 1593, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440991] select_bad_process, pid=1593, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440993] oom_badness: memoy use =236, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440995] oom_badness: pid = 1616, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.440996] select_bad_process, pid=1616, points=1 Oct 19 09:44:08 myhost kernel: [ 618.440998] oom_badness: memoy use =61, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.440999] oom_badness: pid = 1626, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441000] select_bad_process, pid=1626, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441002] oom_badness: memoy use =7037, totalpages=506807, points=13 Oct 19 09:44:08 myhost kernel: [ 618.441004] oom_badness: pid = 1658, oom_score_adj=0, points=-17 Oct 19 09:44:08 myhost kernel: [ 618.441005] select_bad_process, pid=1658, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441007] oom_badness: memoy use =2816, totalpages=506807, points=5 Oct 19 09:44:08 myhost kernel: [ 618.441008] oom_badness: pid = 1700, oom_score_adj=0, points=5 Oct 19 09:44:08 myhost kernel: [ 618.441010] select_bad_process, pid=1700, points=5 Oct 19 09:44:08 myhost kernel: [ 618.441011] select_bad_process, ===========have choose pid=1700 to kill, points=5 Oct 19 09:44:08 myhost kernel: [ 618.441013] oom_badness: memoy use =225, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441014] oom_badness: pid = 1701, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441016] select_bad_process, pid=1701, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441017] oom_badness: memoy use =187, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441019] oom_badness: pid = 1710, 
oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441020] select_bad_process, pid=1710, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441022] oom_badness: memoy use =56, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441023] oom_badness: pid = 1715, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441024] select_bad_process, pid=1715, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441026] select_bad_process, pid=1716, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441028] oom_badness: memoy use =225, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441030] oom_badness: pid = 1738, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441031] select_bad_process, pid=1738, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441032] oom_badness: memoy use =1363, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441034] oom_badness: pid = 1739, oom_score_adj=0, points=-28 Oct 19 09:44:08 myhost kernel: [ 618.441035] select_bad_process, pid=1739, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441037] oom_badness: memoy use =1148, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441038] oom_badness: pid = 1742, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441040] select_bad_process, pid=1742, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441041] oom_badness: memoy use =40, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441043] oom_badness: pid = 1748, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441044] select_bad_process, pid=1748, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441046] oom_badness: memoy use =768, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441048] oom_badness: pid = 1749, oom_score_adj=0, points=-29 Oct 19 09:44:08 myhost kernel: [ 618.441049] select_bad_process, pid=1749, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441051] oom_badness: memoy use =1359, totalpages=506807, 
points=2 Oct 19 09:44:08 myhost kernel: [ 618.441052] oom_badness: pid = 1752, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441054] select_bad_process, pid=1752, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441055] oom_badness: memoy use =1359, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441057] oom_badness: pid = 1753, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441058] select_bad_process, pid=1753, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441060] oom_badness: memoy use =1359, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441061] oom_badness: pid = 1754, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441063] select_bad_process, pid=1754, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441064] oom_badness: memoy use =1359, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441066] oom_badness: pid = 1755, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441067] select_bad_process, pid=1755, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441069] oom_badness: memoy use =1359, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441070] oom_badness: pid = 1756, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441072] select_bad_process, pid=1756, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441074] oom_badness: memoy use =149, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441075] oom_badness: pid = 1775, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441076] select_bad_process, pid=1775, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441078] oom_badness: memoy use =24, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441080] oom_badness: pid = 1867, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441081] select_bad_process, pid=1867, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441083] oom_badness: memoy use =285, totalpages=506807, points=0 Oct 19 
09:44:08 myhost kernel: [ 618.441084] oom_badness: pid = 1893, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441086] select_bad_process, pid=1893, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441088] oom_badness: memoy use =110, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441089] oom_badness: pid = 1899, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441091] select_bad_process, pid=1899, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441092] oom_badness: memoy use =97, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441094] oom_badness: pid = 1900, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441095] select_bad_process, pid=1900, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441097] oom_badness: memoy use =134, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441099] oom_badness: pid = 1967, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441100] select_bad_process, pid=1967, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441102] oom_badness: memoy use =301, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441103] oom_badness: pid = 1986, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441104] select_bad_process, pid=1986, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441106] select_bad_process, pid=2003, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441108] oom_badness: memoy use =42, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441109] oom_badness: pid = 2013, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441110] select_bad_process, pid=2013, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441112] oom_badness: memoy use =261, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441114] oom_badness: pid = 2014, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441115] select_bad_process, pid=2014, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441117] 
oom_badness: memoy use =51, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441118] oom_badness: pid = 2016, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441119] select_bad_process, pid=2016, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441121] oom_badness: memoy use =463, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441123] oom_badness: pid = 2021, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441124] select_bad_process, pid=2021, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441126] oom_badness: memoy use =1031, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441127] oom_badness: pid = 2024, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441129] select_bad_process, pid=2024, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441131] oom_badness: memoy use =824, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441132] oom_badness: pid = 2033, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441133] select_bad_process, pid=2033, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441135] oom_badness: memoy use =70, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441137] oom_badness: pid = 2038, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441138] select_bad_process, pid=2038, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441140] oom_badness: memoy use =720, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441141] oom_badness: pid = 2041, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441143] select_bad_process, pid=2041, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441144] select_bad_process, pid=2047, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441146] oom_badness: memoy use =130, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441147] oom_badness: pid = 2051, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441149] select_bad_process, 
pid=2051, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441150] oom_badness: memoy use =1167, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441152] oom_badness: pid = 2055, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441153] select_bad_process, pid=2055, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441155] oom_badness: memoy use =146, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441157] oom_badness: pid = 2058, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441158] select_bad_process, pid=2058, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441160] oom_badness: memoy use =151, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441161] oom_badness: pid = 2060, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441162] select_bad_process, pid=2060, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441164] oom_badness: memoy use =52, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441165] oom_badness: pid = 2061, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441167] select_bad_process, pid=2061, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441168] oom_badness: memoy use =4346, totalpages=506807, points=8 Oct 19 09:44:08 myhost kernel: [ 618.441170] oom_badness: pid = 2065, oom_score_adj=0, points=8 Oct 19 09:44:08 myhost kernel: [ 618.441171] select_bad_process, pid=2065, points=8 Oct 19 09:44:08 myhost kernel: [ 618.441172] select_bad_process, ===========have choose pid=2065 to kill, points=8 Oct 19 09:44:08 myhost kernel: [ 618.441174] oom_badness: memoy use =139, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441176] oom_badness: pid = 2067, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441177] select_bad_process, pid=2067, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441179] oom_badness: memoy use =1040, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441181] oom_badness: pid = 
2075, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441182] select_bad_process, pid=2075, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441184] oom_badness: memoy use =72, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441186] oom_badness: pid = 2076, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441187] select_bad_process, pid=2076, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441189] oom_badness: memoy use =18211, totalpages=506807, points=35 Oct 19 09:44:08 myhost kernel: [ 618.441190] oom_badness: pid = 2078, oom_score_adj=0, points=35 Oct 19 09:44:08 myhost kernel: [ 618.441191] select_bad_process, pid=2078, points=35 Oct 19 09:44:08 myhost kernel: [ 618.441193] select_bad_process, ===========have choose pid=2078 to kill, points=35 Oct 19 09:44:08 myhost kernel: [ 618.441195] oom_badness: memoy use =197, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441196] oom_badness: pid = 2082, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441198] select_bad_process, pid=2082, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441199] oom_badness: memoy use =1587, totalpages=506807, points=3 Oct 19 09:44:08 myhost kernel: [ 618.441201] oom_badness: pid = 2083, oom_score_adj=0, points=3 Oct 19 09:44:08 myhost kernel: [ 618.441202] select_bad_process, pid=2083, points=3 Oct 19 09:44:08 myhost kernel: [ 618.441204] oom_badness: memoy use =403, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441206] oom_badness: pid = 2085, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441207] select_bad_process, pid=2085, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441209] oom_badness: memoy use =575, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441210] oom_badness: pid = 2091, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441211] select_bad_process, pid=2091, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441213] oom_badness: memoy use 
=366, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441214] oom_badness: pid = 2093, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441216] select_bad_process, pid=2093, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441218] oom_badness: memoy use =912, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441219] oom_badness: pid = 2094, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441221] select_bad_process, pid=2094, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441222] oom_badness: memoy use =315, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441224] oom_badness: pid = 2095, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441225] select_bad_process, pid=2095, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441227] oom_badness: memoy use =264, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441229] oom_badness: pid = 2096, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441230] select_bad_process, pid=2096, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441232] oom_badness: memoy use =274, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441233] oom_badness: pid = 2097, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441235] select_bad_process, pid=2097, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441236] oom_badness: memoy use =300, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441238] oom_badness: pid = 2098, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441239] select_bad_process, pid=2098, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441241] oom_badness: memoy use =466, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441243] oom_badness: pid = 2099, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441244] select_bad_process, pid=2099, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441245] oom_badness: memoy use =782, totalpages=506807, 
points=1 Oct 19 09:44:08 myhost kernel: [ 618.441247] oom_badness: pid = 2116, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441248] select_bad_process, pid=2116, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441250] oom_badness: memoy use =269, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441252] oom_badness: pid = 2119, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441253] select_bad_process, pid=2119, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441255] oom_badness: memoy use =362, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441256] oom_badness: pid = 2121, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441257] select_bad_process, pid=2121, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441259] oom_badness: memoy use =141, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441261] oom_badness: pid = 2130, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441262] select_bad_process, pid=2130, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441264] oom_badness: memoy use =66, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441265] oom_badness: pid = 2133, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441266] select_bad_process, pid=2133, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441268] oom_badness: memoy use =1142, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441270] oom_badness: pid = 2144, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441271] select_bad_process, pid=2144, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441273] oom_badness: memoy use =2208, totalpages=506807, points=4 Oct 19 09:44:08 myhost kernel: [ 618.441274] oom_badness: pid = 2147, oom_score_adj=0, points=-26 Oct 19 09:44:08 myhost kernel: [ 618.441276] select_bad_process, pid=2147, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441278] oom_badness: memoy use =102, totalpages=506807, points=0 Oct 19 
09:44:08 myhost kernel: [ 618.441279] oom_badness: pid = 2151, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441281] select_bad_process, pid=2151, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441282] oom_badness: memoy use =161, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441284] oom_badness: pid = 2157, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441285] select_bad_process, pid=2157, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441287] oom_badness: memoy use =30, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441288] oom_badness: pid = 2159, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441290] select_bad_process, pid=2159, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441291] oom_badness: memoy use =481, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441293] oom_badness: pid = 2161, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441294] select_bad_process, pid=2161, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441296] oom_badness: memoy use =183, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441298] oom_badness: pid = 2173, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441299] select_bad_process, pid=2173, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441301] oom_badness: memoy use =189, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441302] oom_badness: pid = 2192, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441303] select_bad_process, pid=2192, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441305] oom_badness: memoy use =41, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441307] oom_badness: pid = 2218, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441308] select_bad_process, pid=2218, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441310] oom_badness: memoy use =46, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 
618.441312] oom_badness: pid = 2219, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441313] select_bad_process, pid=2219, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441314] oom_badness: memoy use =36, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441316] oom_badness: pid = 2284, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441317] select_bad_process, pid=2284, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441319] oom_badness: memoy use =480, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441321] oom_badness: pid = 2285, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441322] select_bad_process, pid=2285, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441324] oom_badness: memoy use =288, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441325] oom_badness: pid = 2339, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441327] select_bad_process, pid=2339, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441329] oom_badness: memoy use =76, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441330] oom_badness: pid = 2403, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441331] select_bad_process, pid=2403, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441333] oom_badness: memoy use =480, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441335] oom_badness: pid = 2425, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441336] select_bad_process, pid=2425, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441338] oom_badness: memoy use =36, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441339] oom_badness: pid = 2513, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441341] select_bad_process, pid=2513, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441342] oom_badness: memoy use =455, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441344] 
oom_badness: pid = 2518, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441345] select_bad_process, pid=2518, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441347] oom_badness: memoy use =592, totalpages=506807, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441349] oom_badness: pid = 2581, oom_score_adj=0, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441350] select_bad_process, pid=2581, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441352] oom_badness: memoy use =1076, totalpages=506807, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441353] oom_badness: pid = 2590, oom_score_adj=0, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441355] select_bad_process, pid=2590, points=2 Oct 19 09:44:08 myhost kernel: [ 618.441356] oom_badness: memoy use =278247, totalpages=506807, points=549 Oct 19 09:44:08 myhost kernel: [ 618.441358] oom_badness: pid = 2646, oom_score_adj=0, points=549 Oct 19 09:44:08 myhost kernel: [ 618.441359] select_bad_process, pid=2646, points=549 Oct 19 09:44:08 myhost kernel: [ 618.441360] select_bad_process, ===========have choose pid=2646 to kill, points=549 Oct 19 09:44:08 myhost kernel: [ 618.441362] oom_badness: memoy use =12734, totalpages=506807, points=25 Oct 19 09:44:08 myhost kernel: [ 618.441364] oom_badness: pid = 2834, oom_score_adj=0, points=25 Oct 19 09:44:08 myhost kernel: [ 618.441365] select_bad_process, pid=2834, points=25 Oct 19 09:44:08 myhost kernel: [ 618.441367] oom_badness: memoy use =101, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441368] oom_badness: pid = 2838, oom_score_adj=0, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441370] select_bad_process, pid=2838, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441372] oom_badness: memoy use =23285, totalpages=506807, points=45 Oct 19 09:44:08 myhost kernel: [ 618.441373] oom_badness: pid = 2854, oom_score_adj=0, points=45 Oct 19 09:44:08 myhost kernel: [ 618.441374] select_bad_process, pid=2854, points=45 Oct 19 09:44:08 myhost kernel: [ 
618.441376] oom_badness: memoy use =15149, totalpages=506807, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441377] oom_badness: pid = 2942, oom_score_adj=0, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441379] select_bad_process, pid=2942, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441380] oom_badness: memoy use =23280, totalpages=506807, points=45 Oct 19 09:44:08 myhost kernel: [ 618.441382] oom_badness: pid = 3140, oom_score_adj=0, points=45 Oct 19 09:44:08 myhost kernel: [ 618.441383] select_bad_process, pid=3140, points=45 Oct 19 09:44:08 myhost kernel: [ 618.441384] oom_badness: memoy use =12292, totalpages=506807, points=24 Oct 19 09:44:08 myhost kernel: [ 618.441386] oom_badness: pid = 3298, oom_score_adj=0, points=24 Oct 19 09:44:08 myhost kernel: [ 618.441387] select_bad_process, pid=3298, points=24 Oct 19 09:44:08 myhost kernel: [ 618.441389] oom_badness: memoy use =15003, totalpages=506807, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441390] oom_badness: pid = 3327, oom_score_adj=0, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441392] select_bad_process, pid=3327, points=29 Oct 19 09:44:08 myhost kernel: [ 618.441394] oom_badness: memoy use =34, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441395] oom_badness: pid = 3346, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441396] select_bad_process, pid=3346, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441398] oom_badness: memoy use =35, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441399] oom_badness: pid = 3347, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441401] select_bad_process, pid=3347, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441403] oom_badness: memoy use =35, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441404] oom_badness: pid = 3349, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441405] select_bad_process, pid=3349, points=1 Oct 19 09:44:08 myhost kernel: [ 
618.441407] oom_badness: memoy use =43, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441409] oom_badness: pid = 3353, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441410] select_bad_process, pid=3353, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441412] oom_badness: memoy use =43, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441413] oom_badness: pid = 3354, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441415] select_bad_process, pid=3354, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441417] oom_badness: memoy use =56, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441418] oom_badness: pid = 3355, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441419] select_bad_process, pid=3355, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441421] oom_badness: memoy use =56, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441423] oom_badness: pid = 3356, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441424] select_bad_process, pid=3356, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441426] oom_badness: memoy use =58, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441427] oom_badness: pid = 3387, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441429] select_bad_process, pid=3387, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441430] oom_badness: memoy use =59, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441432] oom_badness: pid = 3388, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441433] select_bad_process, pid=3388, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441435] oom_badness: memoy use =87, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441436] oom_badness: pid = 3389, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441438] select_bad_process, pid=3389, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441439] 
oom_badness: memoy use =87, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441441] oom_badness: pid = 3390, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441442] select_bad_process, pid=3390, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441444] oom_badness: memoy use =17474, totalpages=506807, points=34 Oct 19 09:44:08 myhost kernel: [ 618.441445] oom_badness: pid = 3414, oom_score_adj=0, points=34 Oct 19 09:44:08 myhost kernel: [ 618.441446] select_bad_process, pid=3414, points=34 Oct 19 09:44:08 myhost kernel: [ 618.441448] oom_badness: memoy use =7391, totalpages=506807, points=14 Oct 19 09:44:08 myhost kernel: [ 618.441450] oom_badness: pid = 3536, oom_score_adj=0, points=14 Oct 19 09:44:08 myhost kernel: [ 618.441451] select_bad_process, pid=3536, points=14 Oct 19 09:44:08 myhost kernel: [ 618.441453] oom_badness: memoy use =24, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441454] oom_badness: pid = 3560, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441456] select_bad_process, pid=3560, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441457] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441459] oom_badness: pid = 3562, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441460] select_bad_process, pid=3562, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441462] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441463] oom_badness: pid = 3563, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441464] select_bad_process, pid=3563, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441466] oom_badness: memoy use =22, totalpages=506807, points=0 Oct 19 09:44:08 myhost kernel: [ 618.441467] oom_badness: pid = 3564, oom_score_adj=0, points=-30 Oct 19 09:44:08 myhost kernel: [ 618.441469] select_bad_process, pid=3564, points=1 Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd 
invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 Oct 19 09:44:08 myhost kernel: [ 618.441473] httpd cpuset=/ mems_allowed=0 Oct 19 09:44:08 myhost kernel: [ 618.441475] Pid: 1739, comm: httpd Not tainted 2.6.36testing #5 Oct 19 09:44:08 myhost kernel: [ 618.441476] Call Trace: Oct 19 09:44:08 myhost kernel: [ 618.441482] [<c10c0a20>] dump_header.clone.5+0x80/0x1e0 Oct 19 09:44:08 myhost kernel: [ 618.441486] [<c12eb566>] ? printk +0x18/0x1a Oct 19 09:44:08 myhost kernel: [ 618.441488] [<c10c0d85>] ? oom_badness+0x185/0x1a0 Oct 19 09:44:08 myhost kernel: [ 618.441490] [<c10c0dfc>] oom_kill_process+0x5c/0x1c0 Oct 19 09:44:08 myhost kernel: [ 618.441492] [<c10c102a>] ? select_bad_process.clone.7+0xca/0x100 Oct 19 09:44:08 myhost kernel: [ 618.441494] [<c10c12ff>] out_of_memory+0xbf/0x1d0 Oct 19 09:44:08 myhost kernel: [ 618.441496] [<c10c11c8>] ? try_set_zonelist_oom+0xc8/0xe0 Oct 19 09:44:08 myhost kernel: [ 618.441499] [<c10c4b88>] __alloc_pages_nodemask+0x5e8/0x600 Oct 19 09:44:08 myhost kernel: [ 618.441502] [<c10c6535>] __do_page_cache_readahead+0x105/0x230 Oct 19 09:44:08 myhost kernel: [ 618.441504] [<c10c68c1>] ra_submit +0x21/0x30 Oct 19 09:44:08 myhost kernel: [ 618.441506] [<c10beb8b>] filemap_fault+0x36b/0x3e0 Oct 19 09:44:08 myhost kernel: [ 618.441510] [<c10d5c5b>] __do_fault +0x3b/0x4f0 Oct 19 09:44:08 myhost kernel: [ 618.441512] [<c10d8cbd>] handle_mm_fault+0xfd/0x930 Oct 19 09:44:08 myhost kernel: [ 618.441515] [<c1029250>] ? do_page_fault+0x0/0x3e0 Oct 19 09:44:08 myhost kernel: [ 618.441517] [<c10293a0>] do_page_fault+0x150/0x3e0 Oct 19 09:44:08 myhost kernel: [ 618.441521] [<c1045d70>] ? child_wait_callback+0x0/0xa0 Oct 19 09:44:08 myhost kernel: [ 618.441523] [<c1048467>] ? sys_waitpid+0x27/0x30 Oct 19 09:44:08 myhost kernel: [ 618.441525] [<c1029250>] ? 
do_page_fault+0x0/0x3e0 Oct 19 09:44:08 myhost kernel: [ 618.441527] [<c12ef1ab>] error_code +0x67/0x6c Oct 19 09:44:08 myhost kernel: [ 618.441529] Mem-Info: Oct 19 09:44:08 myhost kernel: [ 618.441530] DMA per-cpu: Oct 19 09:44:08 myhost kernel: [ 618.441531] CPU 0: hi: 0, btch: 1 usd: 0 Oct 19 09:44:08 myhost kernel: [ 618.441533] CPU 1: hi: 0, btch: 1 usd: 0 Oct 19 09:44:08 myhost kernel: [ 618.441534] Normal per-cpu: Oct 19 09:44:08 myhost kernel: [ 618.441535] CPU 0: hi: 186, btch: 31 usd: 35 Oct 19 09:44:08 myhost kernel: [ 618.441536] CPU 1: hi: 186, btch: 31 usd: 14 Oct 19 09:44:08 myhost kernel: [ 618.441538] HighMem per-cpu: Oct 19 09:44:08 myhost kernel: [ 618.441539] CPU 0: hi: 186, btch: 31 usd: 98 Oct 19 09:44:08 myhost kernel: [ 618.441540] CPU 1: hi: 186, btch: 31 usd: 58 Oct 19 09:44:08 myhost kernel: [ 618.441543] active_anon:168162 inactive_anon:48949 isolated_anon:0 Oct 19 09:44:08 myhost kernel: [ 618.441544] active_file:69 inactive_file:350 isolated_file:0 Oct 19 09:44:08 myhost kernel: [ 618.441545] unevictable:10 dirty:0 writeback:5 unstable:0 Oct 19 09:44:08 myhost kernel: [ 618.441546] free:11927 slab_reclaimable:1903 slab_unreclaimable:3882 Oct 19 09:44:08 myhost kernel: [ 618.441546] mapped:267673 shmem:19397 pagetables:1721 bounce:0 Oct 19 09:44:08 myhost kernel: [ 618.441551] DMA free:7968kB min:64kB low:80kB high:96kB active_anon:3700kB inactive_anon:3752kB active_file:12kB inactive_file:252kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:4kB mapped:52kB shmem:348kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:421 all_unreclaimable? 
yes Oct 19 09:44:08 myhost kernel: [ 618.441554] lowmem_reserve[]: 0 865 1980 1980 Oct 19 09:44:08 myhost kernel: [ 618.441560] Normal free:39348kB min:3728kB low:4660kB high:5592kB active_anon:176740kB inactive_anon:25640kB active_file:84kB inactive_file:308kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:0kB dirty:0kB writeback:4kB mapped:576992kB shmem:5024kB slab_reclaimable:7612kB slab_unreclaimable:15512kB kernel_stack:2792kB pagetables:6884kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:741 all_unreclaimable? yes Oct 19 09:44:08 myhost kernel: [ 618.441563] lowmem_reserve[]: 0 0 8921 8921 Oct 19 09:44:08 myhost kernel: [ 618.441569] HighMem free:392kB min:512kB low:1712kB high:2912kB active_anon:492208kB inactive_anon:166404kB active_file:180kB inactive_file:840kB unevictable:40kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:40kB dirty:0kB writeback:12kB mapped:493648kB shmem:72216kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1552 all_unreclaimable? 
yes Oct 19 09:44:08 myhost kernel: [ 618.441572] lowmem_reserve[]: 0 0 0 0 Oct 19 09:44:08 myhost kernel: [ 618.441575] DMA: 6*4kB 9*8kB 31*16kB 18*32kB 12*64kB 9*128kB 3*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 7952kB Oct 19 09:44:08 myhost kernel: [ 618.441581] Normal: 37*4kB 176*8kB 152*16kB 63*32kB 55*64kB 25*128kB 16*256kB 10*512kB 9*1024kB 2*2048kB 1*4096kB = 39348kB Oct 19 09:44:08 myhost kernel: [ 618.441587] HighMem: 70*4kB 6*8kB 4*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 392kB Oct 19 09:44:08 myhost kernel: [ 618.441592] 19873 total pagecache pages Oct 19 09:44:08 myhost kernel: [ 618.441594] 0 pages in swap cache Oct 19 09:44:08 myhost kernel: [ 618.441595] Swap cache stats: add 0, delete 0, find 0/0 Oct 19 09:44:08 myhost kernel: [ 618.441596] Free swap = 0kB Oct 19 09:44:08 myhost kernel: [ 618.441597] Total swap = 0kB Oct 19 09:44:08 myhost kernel: [ 618.445111] 515070 pages RAM Oct 19 09:44:08 myhost kernel: [ 618.445112] 287745 pages HighMem Oct 19 09:44:08 myhost kernel: [ 618.445113] 273107 pages reserved Oct 19 09:44:08 myhost kernel: [ 618.445114] 20369 pages shared Oct 19 09:44:08 myhost kernel: [ 618.445115] 218695 pages non-shared Oct 19 09:44:08 myhost kernel: [ 618.445116] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 19 09:44:08 myhost kernel: [ 618.445123] [ 584] 0 584 574 166 0 -17 -1000 udevd Oct 19 09:44:08 myhost kernel: [ 618.445127] [ 1304] 0 1304 1280 52 1 0 0 syslog-ng Oct 19 09:44:08 myhost kernel: [ 618.445129] [ 1305] 0 1305 1358 115 1 0 0 syslog-ng Oct 19 09:44:08 myhost kernel: [ 618.445132] [ 1307] 81 1307 752 215 0 0 0 dbus-daemon Oct 19 09:44:08 myhost kernel: [ 618.445135] [ 1310] 82 1310 3757 214 1 0 0 hald Oct 19 09:44:08 myhost kernel: [ 618.445137] [ 1311] 0 1311 898 62 0 0 0 hald-runner Oct 19 09:44:08 myhost kernel: [ 618.445140] [ 1340] 0 1340 914 33 1 0 0 hald-addon-inpu Oct 19 09:44:08 myhost kernel: [ 618.445142] [ 1354] 0 1354 914 35 1 0 0 hald-addon-stor Oct 
19 09:44:08 myhost kernel: [ 618.445145] [ 1356] 82 1356 823 40 0 0 0 hald-addon-acpi Oct 19 09:44:08 myhost kernel: [ 618.445148] [ 1419] 0 1419 451 22 1 0 0 crond Oct 19 09:44:08 myhost kernel: [ 618.445150] [ 1450] 0 1450 3575 103 1 0 0 gdm-binary Oct 19 09:44:08 myhost kernel: [ 618.445153] [ 1453] 0 1453 718 52 0 0 0 mysqld_safe Oct 19 09:44:08 myhost kernel: [ 618.445156] [ 1465] 0 1465 439 23 0 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445158] [ 1466] 0 1466 439 23 1 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445161] [ 1467] 0 1467 439 23 1 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445163] [ 1468] 0 1468 439 23 1 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445166] [ 1469] 0 1469 439 22 0 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445168] [ 1470] 0 1470 439 22 1 0 0 agetty Oct 19 09:44:08 myhost kernel: [ 618.445171] [ 1547] 0 1547 1647 96 0 -17 -1000 sshd Oct 19 09:44:08 myhost kernel: [ 618.445174] [ 1571] 0 1571 2214 170 1 0 0 cupsd Oct 19 09:44:08 myhost kernel: [ 618.445176] [ 1586] 0 1586 477 23 0 0 0 cntlm Oct 19 09:44:08 myhost kernel: [ 618.445179] [ 1592] 0 1592 4391 179 1 0 0 gdm-simple-slav Oct 19 09:44:08 myhost kernel: [ 618.445181] [ 1593] 0 1593 6451 168 1 0 0 NetworkManager Oct 19 09:44:08 myhost kernel: [ 618.445184] [ 1616] 0 1616 5745 236 1 0 0 polkitd Oct 19 09:44:08 myhost kernel: [ 618.445187] [ 1626] 0 1626 2014 61 0 0 0 vmware-usbarbit Oct 19 09:44:08 myhost kernel: [ 618.445189] [ 1658] 0 1658 25307 7037 1 0 0 Xorg Oct 19 09:44:08 myhost kernel: [ 618.445192] [ 1700] 89 1700 29902 2816 1 0 0 mysqld Oct 19 09:44:08 myhost kernel: [ 618.445194] [ 1701] 0 1701 4910 225 1 0 0 smbd Oct 19 09:44:08 myhost kernel: [ 618.445197] [ 1710] 0 1710 2809 187 0 0 0 nmbd Oct 19 09:44:08 myhost kernel: [ 618.445199] [ 1715] 0 1715 1247 56 0 0 0 wpa_supplicant Oct 19 09:44:08 myhost kernel: [ 618.445202] [ 1716] 0 1716 540 137 1 -17 -1000 udevd Oct 19 09:44:08 myhost kernel: [ 618.445204] [ 1738] 0 1738 4910 225 0 0 0 smbd 
Oct 19 09:44:08 myhost kernel: [ 618.445207] [ 1739] 0 1739 5162 1363 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445209] [ 1742] 33 1742 4841 1148 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445211] [ 1748] 0 1748 946 40 0 0 0 ApplicationPool Oct 19 09:44:08 myhost kernel: [ 618.445214] [ 1749] 0 1749 3555 768 1 0 0 ruby Oct 19 09:44:08 myhost kernel: [ 618.445217] [ 1752] 33 1752 5162 1359 1 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445219] [ 1753] 33 1753 5162 1359 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445222] [ 1754] 33 1754 5162 1359 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445224] [ 1755] 33 1755 5162 1359 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445226] [ 1756] 33 1756 5162 1359 0 0 0 httpd Oct 19 09:44:08 myhost kernel: [ 618.445229] [ 1775] 0 1775 6671 149 1 0 0 console-kit-dae Oct 19 09:44:08 myhost kernel: [ 618.445232] [ 1867] 0 1867 487 24 1 0 0 dhcpcd Oct 19 09:44:08 myhost kernel: [ 618.445234] [ 1893] 120 1893 6828 285 0 0 0 polkit-gnome-au Oct 19 09:44:08 myhost kernel: [ 618.445237] [ 1899] 0 1899 3668 110 0 0 0 upowerd Oct 19 09:44:08 myhost kernel: [ 618.445239] [ 1900] 0 1900 3900 97 0 0 0 gdm-session-wor Oct 19 09:44:08 myhost kernel: [ 618.445242] [ 1967] 1000 1967 7737 134 1 0 0 gnome-keyring-d Oct 19 09:44:08 myhost kernel: [ 618.445244] [ 1986] 1000 1986 9209 301 1 0 0 gnome-session Oct 19 09:44:08 myhost kernel: [ 618.445247] [ 2013] 1000 2013 794 42 1 0 0 dbus-launch Oct 19 09:44:08 myhost kernel: [ 618.445250] [ 2014] 1000 2014 1034 261 0 0 0 dbus-daemon Oct 19 09:44:08 myhost kernel: [ 618.445252] [ 2016] 1000 2016 886 51 0 0 0 ssh-agent Oct 19 09:44:08 myhost kernel: [ 618.445255] [ 2021] 1000 2021 2645 463 0 0 0 gconfd-2 Oct 19 09:44:08 myhost kernel: [ 618.445257] [ 2024] 1000 2024 7535 1031 1 0 0 fcitx Oct 19 09:44:08 myhost kernel: [ 618.445260] [ 2033] 1000 2033 8792 824 1 0 0 gnome-settings- Oct 19 09:44:08 myhost kernel: [ 618.445263] [ 2038] 1000 2038 2168 70 1 0 0 gvfsd Oct 19 
09:44:08 myhost kernel: [ 618.445265] [ 2041] 1000 2041 8709 720 1 0 0 metacity Oct 19 09:44:08 myhost kernel: [ 618.445268] [ 2047] 0 2047 573 160 0 -17 -1000 udevd Oct 19 09:44:08 myhost kernel: [ 618.445271] [ 2051] 1000 2051 7599 130 1 0 0 gvfs-fuse-daemo Oct 19 09:44:08 myhost kernel: [ 618.445273] [ 2055] 1000 2055 46424 1167 0 0 0 gnome-panel Oct 19 09:44:08 myhost kernel: [ 618.445276] [ 2058] 1000 2058 2510 146 0 0 0 gvfs-gdu-volume Oct 19 09:44:08 myhost kernel: [ 618.445278] [ 2060] 0 2060 5721 151 0 0 0 udisks-daemon Oct 19 09:44:08 myhost kernel: [ 618.445281] [ 2061] 0 2061 1284 52 1 0 0 udisks-daemon Oct 19 09:44:08 myhost kernel: [ 618.445284] [ 2065] 1000 2065 59963 4346 0 0 0 nautilus Oct 19 09:44:08 myhost kernel: [ 618.445286] [ 2067] 1000 2067 9063 139 1 0 0 bonobo-activati Oct 19 09:44:08 myhost kernel: [ 618.445289] [ 2075] 1000 2075 43722 1040 1 0 0 wnck-applet Oct 19 09:44:08 myhost kernel: [ 618.445292] [ 2076] 1000 2076 1631 72 0 0 0 sh Oct 19 09:44:08 myhost kernel: [ 618.445294] [ 2078] 1000 2078 86505 18211 1 0 0 evolution Oct 19 09:44:08 myhost kernel: [ 618.445297] [ 2082] 1000 2082 6808 197 0 0 0 polkit-gnome-au Oct 19 09:44:08 myhost kernel: [ 618.445299] [ 2083] 1000 2083 8439 1587 1 0 0 applet.py Oct 19 09:44:08 myhost kernel: [ 618.445302] [ 2085] 1000 2085 10769 403 0 0 0 evolution-alarm Oct 19 09:44:08 myhost kernel: [ 618.445304] [ 2091] 1000 2091 42308 575 1 0 0 cpufreq-applet Oct 19 09:44:08 myhost kernel: [ 618.445307] [ 2093] 1000 2093 41468 366 1 0 0 multiload-apple Oct 19 09:44:08 myhost kernel: [ 618.445310] [ 2094] 1000 2094 43601 912 1 0 0 mixer_applet2 Oct 19 09:44:08 myhost kernel: [ 618.445312] [ 2095] 1000 2095 40975 315 0 0 0 notification-ar Oct 19 09:44:08 myhost kernel: [ 618.445315] [ 2096] 1000 2096 5037 264 0 0 0 gdu-notificatio Oct 19 09:44:08 myhost kernel: [ 618.445317] [ 2097] 1000 2097 7274 274 1 0 0 gnome-power-man Oct 19 09:44:08 myhost kernel: [ 618.445320] [ 2098] 1000 2098 7574 300 0 0 0 
vino-server Oct 19 09:44:08 myhost kernel: [ 618.445322] [ 2099] 1000 2099 72310 466 0 0 0 nm-applet Oct 19 09:44:08 myhost kernel: [ 618.445325] [ 2116] 1000 2116 45627 782 0 0 0 clock-applet Oct 19 09:44:08 myhost kernel: [ 618.445328] [ 2119] 1000 2119 7247 269 1 0 0 gnome-screensav Oct 19 09:44:08 myhost kernel: [ 618.445330] [ 2121] 1000 2121 15260 362 0 0 0 e-calendar-fact Oct 19 09:44:08 myhost kernel: [ 618.445333] [ 2130] 1000 2130 2299 141 0 0 0 gvfsd-trash Oct 19 09:44:08 myhost kernel: [ 618.445335] [ 2133] 0 2133 3290 66 1 0 0 system-tools-ba Oct 19 09:44:08 myhost kernel: [ 618.445338] [ 2144] 1000 2144 43820 1142 0 0 0 gnome-terminal Oct 19 09:44:08 myhost kernel: [ 618.445340] [ 2147] 0 2147 3296 2208 1 0 0 SystemToolsBack Oct 19 09:44:08 myhost kernel: [ 618.445343] [ 2151] 1000 2151 2201 102 1 0 0 gvfsd-burn Oct 19 09:44:08 myhost kernel: [ 618.445346] [ 2157] 1000 2157 1806 161 0 0 0 mission-control Oct 19 09:44:08 myhost kernel: [ 618.445348] [ 2159] 1000 2159 450 30 1 0 0 gnome-pty-helpe Oct 19 09:44:08 myhost kernel: [ 618.445351] [ 2161] 1000 2161 2067 481 0 0 0 bash Oct 19 09:44:08 myhost kernel: [ 618.445353] [ 2173] 1000 2173 2287 183 0 0 0 gvfsd-metadata Oct 19 09:44:08 myhost kernel: [ 618.445356] [ 2192] 1000 2192 22559 189 0 0 0 conky Oct 19 09:44:08 myhost kernel: [ 618.445358] [ 2218] 0 2218 794 41 0 0 0 dbus-launch Oct 19 09:44:08 myhost kernel: [ 618.445361] [ 2219] 0 2219 593 46 1 0 0 dbus-daemon Oct 19 09:44:08 myhost kernel: [ 618.445363] [ 2284] 1000 2284 1480 36 1 0 0 su Oct 19 09:44:08 myhost kernel: [ 618.445366] [ 2285] 0 2285 2042 480 1 0 0 bash Oct 19 09:44:08 myhost kernel: [ 618.445368] [ 2339] 0 2339 6275 288 0 0 0 vim Oct 19 09:44:08 myhost kernel: [ 618.445371] [ 2403] 0 2403 1663 76 1 0 0 sh Oct 19 09:44:08 myhost kernel: [ 618.445373] [ 2425] 1000 2425 2067 480 0 0 0 bash Oct 19 09:44:08 myhost kernel: [ 618.445376] [ 2513] 1000 2513 1480 36 0 0 0 su Oct 19 09:44:08 myhost kernel: [ 618.445378] [ 2518] 0 2518 2042 
455 1 0 0 bash Oct 19 09:44:08 myhost kernel: [ 618.445381] [ 2581] 1000 2581 2930 592 1 0 0 VBoxXPCOMIPCD Oct 19 09:44:08 myhost kernel: [ 618.445384] [ 2590] 1000 2590 4996 1076 0 0 0 VBoxSVC Oct 19 09:44:08 myhost kernel: [ 618.445386] [ 2646] 1000 2646 311501 278247 0 0 0 VirtualBox Oct 19 09:44:08 myhost kernel: [ 618.445389] [ 2834] 1000 2834 61628 12734 1 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445391] [ 2838] 1000 2838 5458 101 1 0 0 evinced Oct 19 09:44:08 myhost kernel: [ 618.445394] [ 2854] 1000 2854 61457 23285 1 0 0 acroread Oct 19 09:44:08 myhost kernel: [ 618.445396] [ 2942] 1000 2942 60581 15149 0 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445399] [ 3140] 1000 3140 81044 23280 1 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445401] [ 3298] 1000 3298 59753 12292 0 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445404] [ 3327] 1000 3327 60274 15003 0 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445406] [ 3346] 0 3346 717 34 0 0 0 sh Oct 19 09:44:08 myhost kernel: [ 618.445408] [ 3347] 0 3347 717 35 0 0 0 sh Oct 19 09:44:08 myhost kernel: [ 618.445411] [ 3349] 0 3349 717 35 1 0 0 sh Oct 19 09:44:08 myhost kernel: [ 618.445413] [ 3353] 0 3353 1185 43 0 0 0 git Oct 19 09:44:08 myhost kernel: [ 618.445416] [ 3354] 0 3354 1185 43 1 0 0 git Oct 19 09:44:08 myhost kernel: [ 618.445418] [ 3355] 0 3355 717 56 0 0 0 git-pull Oct 19 09:44:08 myhost kernel: [ 618.445421] [ 3356] 0 3356 717 56 1 0 0 git-pull Oct 19 09:44:08 myhost kernel: [ 618.445423] [ 3387] 0 3387 1187 58 1 0 0 git Oct 19 09:44:08 myhost kernel: [ 618.445426] [ 3388] 0 3388 1187 59 0 0 0 git Oct 19 09:44:08 myhost kernel: [ 618.445428] [ 3389] 0 3389 1571 87 1 0 0 ssh Oct 19 09:44:08 myhost kernel: [ 618.445430] [ 3390] 0 3390 1571 87 0 0 0 ssh Oct 19 09:44:08 myhost kernel: [ 618.445433] [ 3414] 1000 3414 63272 17474 0 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445436] [ 3536] 1000 3536 59510 7391 1 0 0 evince Oct 19 09:44:08 myhost kernel: [ 618.445438] [ 3560] 0 3560 
1016 24 0 0 0 cp Oct 19 09:44:08 myhost kernel: [ 618.445441] [ 3562] 0 3562 451 22 1 0 0 crond Oct 19 09:44:08 myhost kernel: [ 618.445443] [ 3563] 0 3563 451 22 0 0 0 crond Oct 19 09:44:08 myhost kernel: [ 618.445446] [ 3564] 0 3564 451 22 1 0 0 crond ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
2010-10-19  2:07 ` Figo.zhang
@ 2010-10-19  2:59 ` KAMEZAWA Hiroyuki
2010-10-19  5:23 ` Minchan Kim
2010-10-19  6:22 ` KAMEZAWA Hiroyuki
2010-10-19 18:43 ` David Rientjes
2 siblings, 1 reply; 22+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-10-19  2:59 UTC (permalink / raw)
To: Figo.zhang
Cc: KOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org

On Tue, 19 Oct 2010 10:07:38 +0800 "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote:
>
> >
> > quite a lot of changes ;)
> > can you please send us your crash log?
>
> I added some printk calls in select_bad_process() and oom_badness() to see the pid/totalpages/points/memory usage and the process finally selected to kill.
>
> I found that the oom-killer selects syslog-ng, mysqld, nautilus and VirtualBox to kill, so my questions are:
>
> 1. syslog-ng, mysqld and nautilus are fundamental system processes; if the oom-killer kills those processes, the system will be damaged, for example by losing important data.
>
> 2. The new oom-killer just uses the percentage of used memory as the score to select the candidate to kill, but how can it know whether a process is very important to the system?
>
The kernel can never know that. Only an admin (a human or management software) knows. Old kernels tried to guess it, but the guesses tend to be wrong, and many, many reports come in asking "why was my .... killed?". Any guesswork the kernel does is not enough, I think.

> oom_score_adj: do any commercial Linux distributions use this to protect critical processes?
>
oom_adj may be used in some systems. All my customers select panic_on_oom=1 and trigger cluster failover rather than run half-broken.

<Off topic>
Another choice for you is the memory cgroup, I think. Please see Documentation/cgroups/memory.txt or libcgroup: http://sourceforge.net/projects/libcg/
You can use some fancy controls with it.
</Off topic>

BTW, there seem to be some strange things.
(CC'ed to linux-mm)

Brief summary: an oom-kill happens in a swapless environment with 2.6.36-rc8, on a machine with 2GB of memory. The reporter says:
==
> I want to test the oom-killer. My desktop (Dell OptiPlex 780, i686 kernel) has 2GB ram. I turned off the swap partition, opened huge pdf files and applications, and let the system eat huge amounts of ram.
>
> In 2.6.35, I can use ram up to 1.75GB,
>
> but in 2.6.36-rc8, I can use only up to 1.53GB of ram; the system becomes very slow and crashes after some minutes, and the disk I/O is very busy. I see disk reads of up to 8MB/s but writes of only 400KB/s (as shown by conky).
==

The trigger of the oom-kill is an order=0 allocation. (See the original mail for the full log.)

Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0

The zone stats are:

Oct 19 09:44:08 myhost kernel: [ 618.441551] DMA free:7968kB min:64kB low:80kB high:96kB active_anon:3700kB inactive_anon:3752kB active_file:12kB inactive_file:252kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:4kB mapped:52kB shmem:348kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:421 all_unreclaimable? yes
lowmem_reserve[]: 0 865 1980 1980
Oct 19 09:44:08 myhost kernel: [ 618.441560] Normal free:39348kB min:3728kB low:4660kB high:5592kB active_anon:176740kB inactive_anon:25640kB active_file:84kB inactive_file:308kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:0kB dirty:0kB writeback:4kB mapped:576992kB shmem:5024kB slab_reclaimable:7612kB slab_unreclaimable:15512kB kernel_stack:2792kB pagetables:6884kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:741 all_unreclaimable?
yes lowmem_reserve[]: 0 0 8921 8921 Oct 19 09:44:08 myhost kernel: [ 618.441569] HighMem free:392kB min:512kB low:1712kB high:2912kB active_anon:492208kB inactive_anon:166404kB active_file:180kB inactive_file:840kB unevictable:40kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:40kB dirty:0kB writeback:12kB mapped:493648kB shmem:72216kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1552 all_unreclaimable? yes HighMem seems a bit strange: present(1141984) - active_anon - inactive_anon - inactive_file - active_file = 482352kB, but free is only 392kB. HighMem is being used for some purpose other than ordinary user pages (pagetables is 0). And, hmm, mapped:493648kB seems too large to me (active/inactive file + shmem is not enough to account for it). And "mapped" in the Normal zone is large, too. Does anyone have an idea about file-mapped-but-not-on-LRU pages? Thanks, -Kame ^ permalink raw reply [flat|nested] 22+ messages in thread
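The userspace knobs discussed in this message can be driven directly; a minimal sketch, assuming a 2.6.36-era kernel with the oom_score_adj interface (the pid 1234 is illustrative, not from the report):

```shell
# Pin a critical process so the oom-killer never selects it.
# /proc/<pid>/oom_score_adj ranges -1000..1000; -1000 disables selection
# entirely (writing a negative value requires root / CAP_SYS_RESOURCE).
echo -1000 > /proc/1234/oom_score_adj

# The policy Kame describes for his customers: panic on OOM and let the
# cluster fail over instead of running half-broken after a kill.
sysctl vm.panic_on_oom=1
```

The memory-cgroup route mentioned in the off-topic aside is the third option: put the memory hogs in a cgroup with a hard limit so an OOM inside the group never touches system daemons.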
* Re: oom_killer crash linux system 2010-10-19 2:59 ` KAMEZAWA Hiroyuki @ 2010-10-19 5:23 ` Minchan Kim 2010-10-19 5:26 ` KAMEZAWA Hiroyuki 0 siblings, 1 reply; 22+ messages in thread From: Minchan Kim @ 2010-10-19 5:23 UTC (permalink / raw) To: KAMEZAWA Hiroyuki Cc: Figo.zhang, lKOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org On Tue, Oct 19, 2010 at 11:59 AM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote: > On Tue, 19 Oct 2010 10:07:38 +0800 > "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote: > >> >> > >> > very lots of change ;) >> > can you please send us your crash log? >> >> i add some prink in select_bad_process() and oom_badness() to see >> pid/totalpages/points/memoryuseage/and finally process to selet to kill. >> >> i found it the oom-killer select: syslog-ng,mysqld,nautilus,VirtualBox >> to kill, so my question is: >> >> 1. the syslog-ng,mysqld,nautilus is the system foundamental process, so >> if oom-killer kill those process, the system will be damaged, such as >> lose some important data. >> >> 2. the new oom-killer just use percentage of used memory as score to >> select the candidate to kill, but how to know this process to very >> important for system? >> > > The kernel can never know it. Just an admin (a man or management software) knows. > Old kernel tries to guess it, but it tend to be wrong and many many report comes > "why my ....is killed..." All guesswork the kernel does is not enough, I think. > >> oom_score_adj, it is anyone commercial linux distributions to use this >> to protect the critical process. >> > oom_adj may be used in some system. All my customers select panic_at_oom=1 > and cause cluster fail over rather than half-broken. > > <Off topic> > Your another choice is memory cgroup, I think. > please see documentation/cgroup/memory.txt or libcgroup. > http://sourceforge.net/projects/libcg/ > You can use some fancy controls with it. 
> </Off topic> > > > BTW, there seems to be some strange things. > (CC'ed to linux-mm) > Brief Summary: > an oom-killer happens on swapless environment with 2.6.36-rc8. > It has 2G memory. > a reporter says > == >> i want to test the oom-killer. My desktop (Dell optiplex 780, i686 >> kernel)have 2GB ram, i turn off the swap partition, and open a huge pdf >> files and applications, and let the system eat huge ram. >> >> in 2.6.35, i can use ram up to 1.75GB, >> >> but in 2.6.36-rc8, i just use to 1.53GB ram , the system come very slow >> and crashed after some minutes , the DiskIO is very busy. i see the >> DiskIO read is up to 8MB/s, write just only 400KB/s, (see by conky). > == > > The trigger of oom-kill is order=0 allocation. (see original mail for full log) > > > Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd invoked oom-killer: > gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 > > Zone's stat is. > > Oct 19 09:44:08 myhost kernel: [ 618.441551] > DMA free:7968kB min:64kB low:80kB high:96kB active_anon:3700kB inactive_anon:3752kB > active_file:12kB inactive_file:252kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:4kB > mapped:52kB shmem:348kB slab_reclaimable:0kB slab_unreclaimable:16kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:421 all_unreclaimable? yes > lowmem_reserve[]: 0 865 1980 1980 > > Oct 19 09:44:08 myhost kernel: [ 618.441560] > Normal free:39348kB min:3728kB low:4660kB high:5592kB active_anon:176740kB > inactive_anon:25640kB active_file:84kB inactive_file:308kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB > mlocked:0kB dirty:0kB writeback:4kB mapped:576992kB shmem:5024kB > slab_reclaimable:7612kB slab_unreclaimable:15512kB kernel_stack:2792kB > pagetables:6884kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:741 all_unreclaimable? 
yes > lowmem_reserve[]: 0 0 8921 8921 > > Oct 19 09:44:08 myhost kernel: [ 618.441569] > HighMem free:392kB min:512kB low:1712kB high:2912kB active_anon:492208kB > inactive_anon:166404kB active_file:180kB inactive_file:840kB > unevictable:40kB isolated(anon):0kB isolated(file):0kB present:1141984kB > mlocked:40kB dirty:0kB writeback:12kB mapped:493648kB shmem:72216kB > slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB > pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:1552 all_unreclaimable? yes > > Highmem seems a bit strange. > present(1141984) - active_anon - inactive_anon - inactive_file - active_file > = 482352kB but free is 392kB. > > Highmem is used for some other purpose than usual user's page.(pagetable is 0.) > And, Hmm, mapped:493648kB seems too large for me. > (active/inactive-file + shmem is not enough.) > And "mapped" in NORMAL zone is large, too. > > Does anyone have idea about file-mapped-but-not-on-LRU pages ? Isn't it possible some file pages are much sharable? Please see the page_add_file_rmap. > > Thanks, > -Kame > > > > > > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> > > -- Kind regards, Minchan Kim ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system 2010-10-19 5:23 ` Minchan Kim @ 2010-10-19 5:26 ` KAMEZAWA Hiroyuki 2010-10-19 5:34 ` Minchan Kim 2010-10-20 1:35 ` Wu Fengguang 0 siblings, 2 replies; 22+ messages in thread From: KAMEZAWA Hiroyuki @ 2010-10-19 5:26 UTC (permalink / raw) To: Minchan Kim Cc: Figo.zhang, lKOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org On Tue, 19 Oct 2010 14:23:29 +0900 Minchan Kim <minchan.kim@gmail.com> wrote: > On Tue, Oct 19, 2010 at 11:59 AM, KAMEZAWA Hiroyuki > <kamezawa.hiroyu@jp.fujitsu.com> wrote: > > On Tue, 19 Oct 2010 10:07:38 +0800 > > "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote: > > > >> > >> > > >> > very lots of change ;) > >> > can you please send us your crash log? > >> > >> i add some prink in select_bad_process() and oom_badness() to see > >> pid/totalpages/points/memoryuseage/and finally process to selet to kill. > >> > >> i found it the oom-killer select: syslog-ng,mysqld,nautilus,VirtualBox > >> to kill, so my question is: > >> > >> 1. the syslog-ng,mysqld,nautilus is the system foundamental process, so > >> if oom-killer kill those process, the system will be damaged, such as > >> lose some important data. > >> > >> 2. the new oom-killer just use percentage of used memory as score to > >> select the candidate to kill, but how to know this process to very > >> important for system? > >> > > > > The kernel can never know it. Just an admin (a man or management software) knows. > > Old kernel tries to guess it, but it tend to be wrong and many many report comes > > "why my ....is killed..." All guesswork the kernel does is not enough, I think. > > > >> oom_score_adj, it is anyone commercial linux distributions to use this > >> to protect the critical process. > >> > > oom_adj may be used in some system. All my customers select panic_at_oom=1 > > and cause cluster fail over rather than half-broken. 
> > > > <Off topic> > > Your another choice is memory cgroup, I think. > > please see documentation/cgroup/memory.txt or libcgroup. > > http://sourceforge.net/projects/libcg/ > > You can use some fancy controls with it. > > </Off topic> > > > > > > BTW, there seems to be some strange things. > > (CC'ed to linux-mm) > > Brief Summary: > > an oom-killer happens on swapless environment with 2.6.36-rc8. > > It has 2G memory. > > a reporter says > > == > >> i want to test the oom-killer. My desktop (Dell optiplex 780, i686 > >> kernel)have 2GB ram, i turn off the swap partition, and open a huge pdf > >> files and applications, and let the system eat huge ram. > >> > >> in 2.6.35, i can use ram up to 1.75GB, > >> > >> but in 2.6.36-rc8, i just use to 1.53GB ram , the system come very slow > >> and crashed after some minutes , the DiskIO is very busy. i see the > >> DiskIO read is up to 8MB/s, write just only 400KB/s, (see by conky). > > == > > > > The trigger of oom-kill is order=0 allocation. (see original mail for full log) > > > > > > Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd invoked oom-killer: > > gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 > > > > Zone's stat is. > > > > Oct 19 09:44:08 myhost kernel: [ 618.441551] > > DMA free:7968kB min:64kB low:80kB high:96kB active_anon:3700kB inactive_anon:3752kB > > active_file:12kB inactive_file:252kB unevictable:0kB isolated(anon):0kB > > isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:4kB > > mapped:52kB shmem:348kB slab_reclaimable:0kB slab_unreclaimable:16kB > > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > > writeback_tmp:0kB pages_scanned:421 all_unreclaimable? 
yes > > lowmem_reserve[]: 0 865 1980 1980 > > > > Oct 19 09:44:08 myhost kernel: [ 618.441560] > > Normal free:39348kB min:3728kB low:4660kB high:5592kB active_anon:176740kB > > inactive_anon:25640kB active_file:84kB inactive_file:308kB > > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB > > mlocked:0kB dirty:0kB writeback:4kB mapped:576992kB shmem:5024kB > > slab_reclaimable:7612kB slab_unreclaimable:15512kB kernel_stack:2792kB > > pagetables:6884kB unstable:0kB bounce:0kB writeback_tmp:0kB > > pages_scanned:741 all_unreclaimable? yes > > lowmem_reserve[]: 0 0 8921 8921 > > > > Oct 19 09:44:08 myhost kernel: [ 618.441569] > > HighMem free:392kB min:512kB low:1712kB high:2912kB active_anon:492208kB > > inactive_anon:166404kB active_file:180kB inactive_file:840kB > > unevictable:40kB isolated(anon):0kB isolated(file):0kB present:1141984kB > > mlocked:40kB dirty:0kB writeback:12kB mapped:493648kB shmem:72216kB > > slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB > > pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB > > pages_scanned:1552 all_unreclaimable? yes > > > > Highmem seems a bit strange. > > present(1141984) - active_anon - inactive_anon - inactive_file - active_file > > = 482352kB but free is 392kB. > > > > Highmem is used for some other purpose than usual user's page.(pagetable is 0.) > > And, Hmm, mapped:493648kB seems too large for me. > > (active/inactive-file + shmem is not enough.) > > And "mapped" in NORMAL zone is large, too. > > > > Does anyone have idea about file-mapped-but-not-on-LRU pages ? > > Isn't it possible some file pages are much sharable? > Please see the page_add_file_rmap. > page_add_file_rmap() only counts the event where mapcount goes 0->1. Even if thousands of processes share a page, it is counted into file_mapped just once. So there really are 480MB of mapped file caches. Am I missing something? Anyway, the sum of all the LRU lists in HighMem is 480MB smaller than the present pages, and isolated(anon/file) is 0kB.
(The Normal zone has a similar problem.) Thanks, -Kame ^ permalink raw reply [flat|nested] 22+ messages in thread
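Kame's accounting gap is easy to recheck; a quick recomputation of the same subtraction from the HighMem zone figures quoted above (all values in kB):

```shell
# Recompute the HighMem gap: present pages minus everything on the LRU lists
# (figures copied from the quoted zone stats, in kB).
present=1141984
active_anon=492208
inactive_anon=166404
active_file=180
inactive_file=840
gap=$((present - active_anon - inactive_anon - active_file - inactive_file))
echo "not on any LRU: ${gap}kB (yet free is only 392kB)"
```

The roughly 480MB of pages that are neither free nor on any LRU list is exactly the anomaly the thread goes on to chase.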
* Re: oom_killer crash linux system 2010-10-19 5:26 ` KAMEZAWA Hiroyuki @ 2010-10-19 5:34 ` Minchan Kim 2010-10-20 1:35 ` Wu Fengguang 1 sibling, 0 replies; 22+ messages in thread From: Minchan Kim @ 2010-10-19 5:34 UTC (permalink / raw) To: KAMEZAWA Hiroyuki Cc: Figo.zhang, lKOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org On Tue, Oct 19, 2010 at 2:26 PM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote: > On Tue, 19 Oct 2010 14:23:29 +0900 > Minchan Kim <minchan.kim@gmail.com> wrote: > >> On Tue, Oct 19, 2010 at 11:59 AM, KAMEZAWA Hiroyuki >> <kamezawa.hiroyu@jp.fujitsu.com> wrote: >> > >> > Does anyone have idea about file-mapped-but-not-on-LRU pages ? >> >> Isn't it possible some file pages are much sharable? >> Please see the page_add_file_rmap. >> Absolutely you're right. Today, I need sleep. :( Sorry for the noise. -- Kind regards, Minchan Kim ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system 2010-10-19 5:26 ` KAMEZAWA Hiroyuki 2010-10-19 5:34 ` Minchan Kim @ 2010-10-20 1:35 ` Wu Fengguang 2010-10-20 2:06 ` Figo.zhang 1 sibling, 1 reply; 22+ messages in thread From: Wu Fengguang @ 2010-10-20 1:35 UTC (permalink / raw) To: KAMEZAWA Hiroyuki Cc: Minchan Kim, Figo.zhang, lKOSAKI Motohiro, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org On Tue, Oct 19, 2010 at 01:26:40PM +0800, KAMEZAWA Hiroyuki wrote: > On Tue, 19 Oct 2010 14:23:29 +0900 > Minchan Kim <minchan.kim@gmail.com> wrote: > > > On Tue, Oct 19, 2010 at 11:59 AM, KAMEZAWA Hiroyuki > > <kamezawa.hiroyu@jp.fujitsu.com> wrote: > > > On Tue, 19 Oct 2010 10:07:38 +0800 > > > "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote: > > > > > >> > > >> > > > >> > very lots of change ;) > > >> > can you please send us your crash log? > > >> > > >> i add some prink in select_bad_process() and oom_badness() to see > > >> pid/totalpages/points/memoryuseage/and finally process to selet to kill. > > >> > > >> i found it the oom-killer select: syslog-ng,mysqld,nautilus,VirtualBox > > >> to kill, so my question is: > > >> > > >> 1. the syslog-ng,mysqld,nautilus is the system foundamental process, so > > >> if oom-killer kill those process, the system will be damaged, such as > > >> lose some important data. > > >> > > >> 2. the new oom-killer just use percentage of used memory as score to > > >> select the candidate to kill, but how to know this process to very > > >> important for system? > > >> > > > > > > The kernel can never know it. Just an admin (a man or management software) knows. > > > Old kernel tries to guess it, but it tend to be wrong and many many report comes > > > "why my ....is killed..." All guesswork the kernel does is not enough, I think. > > > > > >> oom_score_adj, it is anyone commercial linux distributions to use this > > >> to protect the critical process. > > >> > > > oom_adj may be used in some system. 
All my customers select panic_at_oom=1 > > > and cause cluster fail over rather than half-broken. > > > > > > <Off topic> > > > Your another choice is memory cgroup, I think. > > > please see documentation/cgroup/memory.txt or libcgroup. > > > http://sourceforge.net/projects/libcg/ > > > You can use some fancy controls with it. > > > </Off topic> > > > > > > > > > BTW, there seems to be some strange things. > > > (CC'ed to linux-mm) > > > Brief Summary: > > > an oom-killer happens on swapless environment with 2.6.36-rc8. > > > It has 2G memory. > > > a reporter says > > > == > > >> i want to test the oom-killer. My desktop (Dell optiplex 780, i686 > > >> kernel)have 2GB ram, i turn off the swap partition, and open a huge pdf > > >> files and applications, and let the system eat huge ram. > > >> > > >> in 2.6.35, i can use ram up to 1.75GB, > > >> > > >> but in 2.6.36-rc8, i just use to 1.53GB ram , the system come very slow > > >> and crashed after some minutes , the DiskIO is very busy. i see the > > >> DiskIO read is up to 8MB/s, write just only 400KB/s, (see by conky). > > > == > > > > > > The trigger of oom-kill is order=0 allocation. (see original mail for full log) > > > > > > > > > Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd invoked oom-killer: > > > gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 > > > > > > Zone's stat is. > > > > > > Oct 19 09:44:08 myhost kernel: [ 618.441551] > > > DMA free:7968kB min:64kB low:80kB high:96kB active_anon:3700kB inactive_anon:3752kB > > > active_file:12kB inactive_file:252kB unevictable:0kB isolated(anon):0kB > > > isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:4kB > > > mapped:52kB shmem:348kB slab_reclaimable:0kB slab_unreclaimable:16kB > > > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > > > writeback_tmp:0kB pages_scanned:421 all_unreclaimable? 
yes > > > lowmem_reserve[]: 0 865 1980 1980 > > > > > > Oct 19 09:44:08 myhost kernel: [ 618.441560] > > > Normal free:39348kB min:3728kB low:4660kB high:5592kB active_anon:176740kB > > > inactive_anon:25640kB active_file:84kB inactive_file:308kB > > > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB > > > mlocked:0kB dirty:0kB writeback:4kB mapped:576992kB shmem:5024kB > > > slab_reclaimable:7612kB slab_unreclaimable:15512kB kernel_stack:2792kB > > > pagetables:6884kB unstable:0kB bounce:0kB writeback_tmp:0kB > > > pages_scanned:741 all_unreclaimable? yes > > > lowmem_reserve[]: 0 0 8921 8921 > > > > > > Oct 19 09:44:08 myhost kernel: [ 618.441569] > > > HighMem free:392kB min:512kB low:1712kB high:2912kB active_anon:492208kB > > > inactive_anon:166404kB active_file:180kB inactive_file:840kB > > > unevictable:40kB isolated(anon):0kB isolated(file):0kB present:1141984kB > > > mlocked:40kB dirty:0kB writeback:12kB mapped:493648kB shmem:72216kB > > > slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB > > > pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB > > > pages_scanned:1552 all_unreclaimable? yes > > > > > > Highmem seems a bit strange. > > > present(1141984) - active_anon - inactive_anon - inactive_file - active_file > > > = 482352kB but free is 392kB. > > > > > > Highmem is used for some other purpose than usual user's page.(pagetable is 0.) > > > And, Hmm, mapped:493648kB seems too large for me. > > > (active/inactive-file + shmem is not enough.) > > > And "mapped" in NORMAL zone is large, too. > > > > > > Does anyone have idea about file-mapped-but-not-on-LRU pages ? > > > > Isn't it possible some file pages are much sharable? > > Please see the page_add_file_rmap. > > > > page_add_file_rmap() just counts an event where mapcount goes 0->1. > Even if thousands process shares a page, it's just counted into file_mapped as 1. > > Then, there are 480MB of mapped file caches. Do I miss something ? 
> > Anyway, sum-of-all-lru-of-highmem is 480MB smaller than present pages. > and isolated(anon/file) is 0kB. > (NORMAL has similar problem) hugetlb files? But it's a desktop box. Figo, what's your meminfo? The GEM objects may be files not in LRU, however they should be accounted into shmem. Figo, would you run "page-types -r" for some clues? It can be compiled from the kernel tree: cd linux make Documentation/vm sudo Documentation/vm/page-types -r Thanks, Fengguang ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system 2010-10-20 1:35 ` Wu Fengguang @ 2010-10-20 2:06 ` Figo.zhang 2010-10-20 2:32 ` KOSAKI Motohiro 0 siblings, 1 reply; 22+ messages in thread From: Figo.zhang @ 2010-10-20 2:06 UTC (permalink / raw) To: Wu Fengguang Cc: KAMEZAWA Hiroyuki, Minchan Kim, lKOSAKI Motohiro, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org > > page_add_file_rmap() just counts an event where mapcount goes 0->1. > > Even if thousands process shares a page, it's just counted into file_mapped as 1. > > > > Then, there are 480MB of mapped file caches. Do I miss something ? > > > > Anyway, sum-of-all-lru-of-highmem is 480MB smaller than present pages. > > and isolated(anon/file) is 0kB. > > (NORMAL has similar problem) > > hugetlb files? But it's a desktop box. Figo, what's your meminfo? > > The GEM objects may be files not in LRU, however they should be > accounted into shmem. > > Figo, would you run "page-types -r" for some clues? It can be compiled > from the kernel tree: > > cd linux > make Documentation/vm > sudo Documentation/vm/page-types -r hi fengguang, here is the "page-types -r" result: flags page-count MB symbolic-flags long-symbolic-flags 0x0000000000000000 16494 64 __________________________________ 0x0000000100000000 8264 32 ______________________r___________ reserved 0x0000000000010000 3010 11 ________________T_________________ compound_tail 0x0000000000008000 76 0 _______________H__________________ compound_head 0x0000008000000000 1 0 _____________________________c____ uncached 0x0000000400000001 1 0 L_______________________d_________ locked,mappedtodisk 0x0000000000008014 1 0 __R_D__________H__________________ referenced,dirty,compound_head 0x0000000000010014 15 0 __R_D___________T_________________ referenced,dirty,compound_tail 0x0000000400000021 235 0 L____l__________________d_________ locked,lru,mappedtodisk 0x0000000800000024 64 0 __R__l___________________P________ referenced,lru,private 0x0000000400000028 
823 3 ___U_l__________________d_________ uptodate,lru,mappedtodisk 0x0001000400000028 2 0 ___U_l__________________d_____I___ uptodate,lru,mappedtodisk,readahead 0x000000040000002c 1 0 __RU_l__________________d_________ referenced,uptodate,lru,mappedtodisk 0x000000000000402c 3837 14 __RU_l________b___________________ referenced,uptodate,lru,swapbacked 0x0000000800000030 2 0 ____Dl___________________P________ dirty,lru,private 0x0000000800000038 2 0 ___UDl___________________P________ uptodate,dirty,lru,private 0x0000000400000038 2 0 ___UDl__________________d_________ uptodate,dirty,lru,mappedtodisk 0x000000000000403c 58 0 __RUDl________b___________________ referenced,uptodate,dirty,lru,swapbacked 0x0000000800000060 53 0 _____lA__________________P________ lru,active,private 0x0000000800000064 9 0 __R__lA__________________P________ referenced,lru,active,private 0x0000000c00000068 8 0 ___U_lA_________________dP________ uptodate,lru,active,mappedtodisk,private 0x0000000000000068 2 0 ___U_lA___________________________ uptodate,lru,active 0x000000040000006c 1 0 __RU_lA_________________d_________ referenced,uptodate,lru,active,mappedtodisk 0x0000000800000070 2 0 ____DlA__________________P________ dirty,lru,active,private 0x0000000800000074 9 0 __R_DlA__________________P________ referenced,dirty,lru,active,private 0x0000000000004078 17910 69 ___UDlA_______b___________________ uptodate,dirty,lru,active,swapbacked 0x000000000000407c 5079 19 __RUDlA_______b___________________ referenced,uptodate,dirty,lru,active,swapbacked 0x000000080000007c 1 0 __RUDlA__________________P________ referenced,uptodate,dirty,lru,active,private 0x0004000000008080 70 0 _______S_______H________________A_ slab,compound_head,slub_frozen 0x0000000000008080 870 3 _______S_______H__________________ slab,compound_head 0x0000000000000080 2505 9 _______S__________________________ slab 0x0004000000000080 51 0 _______S________________________A_ slab,slub_frozen 0x0000000800000328 1 0 
___U_l__WI_______________P________ uptodate,lru,writeback,reclaim,private 0x0000000000000400 1724 6 __________B_______________________ buddy 0x0000000000000800 1 0 ___________M______________________ mmap 0x0000000000000804 1 0 __R________M______________________ referenced,mmap 0x0000000400000828 101 0 ___U_l_____M____________d_________ uptodate,lru,mmap,mappedtodisk 0x000000040000082c 150 0 __RU_l_____M____________d_________ referenced,uptodate,lru,mmap,mappedtodisk 0x0000000000004838 4595 17 ___UDl_____M__b___________________ uptodate,dirty,lru,mmap,swapbacked 0x000000000000483c 8 0 __RUDl_____M__b___________________ referenced,uptodate,dirty,lru,mmap,swapbacked 0x0000000400000868 3 0 ___U_lA____M____________d_________ uptodate,lru,active,mmap,mappedtodisk 0x000000040000086c 799 3 __RU_lA____M____________d_________ referenced,uptodate,lru,active,mmap,mappedtodisk 0x0000000000004878 576 2 ___UDlA____M__b___________________ uptodate,dirty,lru,active,mmap,swapbacked 0x000000000000487c 73 0 __RUDlA____M__b___________________ referenced,uptodate,dirty,lru,active,mmap,swapbacked 0x0000000000005808 15 0 ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked 0x0000000000005828 74342 290 ___U_l_____Ma_b___________________ uptodate,lru,mmap,anonymous,swapbacked 0x000000000000582c 85 0 __RU_l_____Ma_b___________________ referenced,uptodate,lru,mmap,anonymous,swapbacked 0x000000020004582c 12 0 __RU_l_____Ma_b___u____m__________ referenced,uptodate,lru,mmap,anonymous,swapbacked,unevictable,mlocked 0x0000000000005838 2 0 ___UDl_____Ma_b___________________ uptodate,dirty,lru,mmap,anonymous,swapbacked 0x0000000000005868 373077 1457 ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked 0x000000000000586c 48 0 __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked total 515071 2011 ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system 2010-10-20 2:06 ` Figo.zhang @ 2010-10-20 2:32 ` KOSAKI Motohiro 2010-10-20 2:58 ` Figo.zhang 0 siblings, 1 reply; 22+ messages in thread From: KOSAKI Motohiro @ 2010-10-20 2:32 UTC (permalink / raw) To: Figo.zhang Cc: kosaki.motohiro, Wu Fengguang, KAMEZAWA Hiroyuki, Minchan Kim, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org > > > > page_add_file_rmap() just counts an event where mapcount goes 0->1. > > > Even if thousands process shares a page, it's just counted into file_mapped as 1. > > > > > > Then, there are 480MB of mapped file caches. Do I miss something ? > > > > > > Anyway, sum-of-all-lru-of-highmem is 480MB smaller than present pages. > > > and isolated(anon/file) is 0kB. > > > (NORMAL has similar problem) > > > > hugetlb files? But it's a desktop box. Figo, what's your meminfo? > > > > The GEM objects may be files not in LRU, however they should be > > accounted into shmem. > > > > Figo, would you run "page-types -r" for some clues? It can be compiled > > from the kernel tree: > > > > cd linux > > make Documentation/vm > > sudo Documentation/vm/page-types -r > > hi fengguang, > here is the "page-types -r" result: > > flags page-count MB symbolic-flags > long-symbolic-flags > 0x0000000000005828 74342 290 ___U_l_____Ma_b___________________ uptodate,lru,mmap,anonymous,swapbacked > 0x0000000000005868 373077 1457 ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked 1457+290=1747MB; that's ok, and it is a very different result from your previous oom log. Can you please 1) invoke the oom killer and then 2) run "page-types -r" again? I'm curious whether the page accounting goes missing again at oom time. I mean, please send us both the oom log and the "page-types -r" result. Thanks. ^ permalink raw reply [flat|nested] 22+ messages in thread
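The 1747MB figure can be re-derived mechanically; a sketch that sums the MB column (field 3) of the two dominant anonymous rows copied from the page-types output above (on a live system you would pipe `sudo Documentation/vm/page-types -r` in instead of the here-document):

```shell
# Sum the MB column for page-types rows whose long flags include "anonymous".
total=$(awk '/anonymous/ { mb += $3 } END { print mb }' <<'EOF'
0x0000000000005828       74342     290  ___U_l_____Ma_b___________________  uptodate,lru,mmap,anonymous,swapbacked
0x0000000000005868      373077    1457  ___U_lA____Ma_b___________________  uptodate,lru,active,mmap,anonymous,swapbacked
EOF
)
echo "anonymous pages: ${total} MB"   # 290 + 1457
```

The remaining anonymous rows in the full output all round to 0 MB, so the two big rows carry the whole total.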
* Re: oom_killer crash linux system 2010-10-20 2:32 ` KOSAKI Motohiro @ 2010-10-20 2:58 ` Figo.zhang 2010-10-20 3:24 ` KOSAKI Motohiro 0 siblings, 1 reply; 22+ messages in thread From: Figo.zhang @ 2010-10-20 2:58 UTC (permalink / raw) To: KOSAKI Motohiro Cc: Wu Fengguang, KAMEZAWA Hiroyuki, Minchan Kim, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org > > can you please try 1) invoke oom 2) get page-types -r again. I'm curious > that oom makes page accounting lost again. I mean, please send us oom > log and "page-types -r" result. > > thanks ok, i do the experiment and catch the log: Oct 20 10:39:11 myhost kernel: [ 2187.700171] oom_badness: memoy use =53, totalpages=506807, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700174] oom_badness: pid = 1280, oom_score_adj=0, points=-30 Oct 20 10:39:11 myhost kernel: [ 2187.700176] select_bad_process, ===========have choose pid=1280 to kill, points=1 Oct 20 10:39:11 myhost kernel: [ 2187.700178] oom_badness: memoy use =121, totalpages=506807, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700180] oom_badness: pid = 1281, oom_score_adj=0, points=-30 Oct 20 10:39:11 myhost kernel: [ 2187.700181] oom_badness: memoy use =229, totalpages=506807, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700183] oom_badness: pid = 1284, oom_score_adj=0, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700185] oom_badness: memoy use =239, totalpages=506807, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700186] oom_badness: pid = 1287, oom_score_adj=0, points=0 Oct 20 10:39:11 myhost kernel: [ 2187.700188] oom_badness: memoy use =71, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700190] oom_badness: pid = 1288, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700192] oom_badness: memoy use =33, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700193] oom_badness: pid = 1317, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700195] 
oom_badness: memoy use =36, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700196] oom_badness: pid = 1331, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700198] oom_badness: memoy use =49, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700200] oom_badness: pid = 1333, oom_score_adj=0, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700202] oom_badness: memoy use =26, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700203] oom_badness: pid = 1396, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700205] oom_badness: memoy use =50, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700207] oom_badness: pid = 1419, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700208] oom_badness: memoy use =116, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700210] oom_badness: pid = 1438, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700212] oom_badness: memoy use =21, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700213] oom_badness: pid = 1441, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700215] oom_badness: memoy use =21, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700217] oom_badness: pid = 1442, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700219] oom_badness: memoy use =21, totalpages=506807, points=0 Oct 20 10:39:13 myhost kernel: [ 2187.700220] oom_badness: pid = 1443, oom_score_adj=0, points=-30 Oct 20 10:39:13 myhost kernel: [ 2187.700222] oom_badness: memoy use =21, totalpages=506807, points=0 Oct 20 10:39:14 myhost kernel: [ 2187.700224] oom_badness: pid = 1444, oom_score_adj=0, points=-30 Oct 20 10:39:14 myhost kernel: [ 2187.700226] oom_badness: memoy use =20, totalpages=506807, points=0 Oct 20 10:39:14 myhost kernel: [ 2187.700227] oom_badness: pid = 1445, oom_score_adj=0, points=-30 Oct 20 10:39:14 myhost kernel: [ 
2187.700229] oom_badness: memoy use =21, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700231] oom_badness: pid = 1446, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700233] oom_badness: memoy use =174, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700234] oom_badness: pid = 1455, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700236] oom_badness: memoy use =164, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700238] oom_badness: pid = 1524, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700240] oom_badness: memoy use =251, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700241] oom_badness: pid = 1542, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700243] oom_badness: memoy use =59, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700245] oom_badness: pid = 1562, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700246] oom_badness: memoy use =2815, totalpages=506807, points=5
Oct 20 10:39:14 myhost kernel: [ 2187.700248] oom_badness: pid = 1656, oom_score_adj=0, points=5
Oct 20 10:39:14 myhost kernel: [ 2187.700250] select_bad_process, ===========have choose pid=1656 to kill, points=5
Oct 20 10:39:14 myhost kernel: [ 2187.700252] oom_badness: memoy use =56, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700253] oom_badness: pid = 1663, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700255] oom_badness: memoy use =227, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700257] oom_badness: pid = 1666, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700258] oom_badness: memoy use =177, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700260] oom_badness: pid = 1671, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700262] oom_badness: memoy use =226, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700263] oom_badness: pid = 1701, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700265] oom_badness: memoy use =1358, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700266] oom_badness: pid = 1709, oom_score_adj=0, points=-28
Oct 20 10:39:14 myhost kernel: [ 2187.700268] oom_badness: memoy use =1149, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700270] oom_badness: pid = 1716, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700272] oom_badness: memoy use =28, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700273] oom_badness: pid = 1717, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700275] oom_badness: memoy use =41, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700277] oom_badness: pid = 1723, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700278] oom_badness: memoy use =776, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700280] oom_badness: pid = 1724, oom_score_adj=0, points=-29
Oct 20 10:39:14 myhost kernel: [ 2187.700282] oom_badness: memoy use =1363, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700283] oom_badness: pid = 1727, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700285] oom_badness: memoy use =1363, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700287] oom_badness: pid = 1728, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700288] oom_badness: memoy use =1363, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700290] oom_badness: pid = 1729, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700292] oom_badness: memoy use =1363, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700293] oom_badness: pid = 1730, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700295] oom_badness: memoy use =1363, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700297] oom_badness: pid = 1731, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700298] oom_badness: memoy use =68, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700300] oom_badness: pid = 1752, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700302] oom_badness: memoy use =192, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700303] oom_badness: pid = 1754, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700305] oom_badness: memoy use =11195, totalpages=506807, points=22
Oct 20 10:39:14 myhost kernel: [ 2187.700306] oom_badness: pid = 1756, oom_score_adj=0, points=-8
Oct 20 10:39:14 myhost kernel: [ 2187.700308] oom_badness: memoy use =176, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700310] oom_badness: pid = 1775, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700312] oom_badness: memoy use =292, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700313] oom_badness: pid = 1876, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700315] oom_badness: memoy use =126, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700317] oom_badness: pid = 1882, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700319] oom_badness: memoy use =110, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700321] oom_badness: pid = 1883, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700323] oom_badness: memoy use =132, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700324] oom_badness: pid = 1950, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700326] oom_badness: memoy use =306, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700327] oom_badness: pid = 1969, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700330] oom_badness: memoy use =42, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700331] oom_badness: pid = 1996, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700333] oom_badness: memoy use =378, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700334] oom_badness: pid = 1997, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700336] oom_badness: memoy use =49, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700338] oom_badness: pid = 1999, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700340] oom_badness: memoy use =485, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700341] oom_badness: pid = 2004, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700343] oom_badness: memoy use =4515, totalpages=506807, points=8
Oct 20 10:39:14 myhost kernel: [ 2187.700344] oom_badness: pid = 2005, oom_score_adj=0, points=8
Oct 20 10:39:14 myhost kernel: [ 2187.700346] select_bad_process, ===========have choose pid=2005 to kill, points=8
Oct 20 10:39:14 myhost kernel: [ 2187.700348] oom_badness: memoy use =837, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700350] oom_badness: pid = 2014, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700351] oom_badness: memoy use =97, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700353] oom_badness: pid = 2019, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700354] oom_badness: memoy use =833, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700356] oom_badness: pid = 2022, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700358] oom_badness: memoy use =130, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700360] oom_badness: pid = 2032, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700361] oom_badness: memoy use =1322, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700363] oom_badness: pid = 2036, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700365] oom_badness: memoy use =172, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700366] oom_badness: pid = 2039, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700368] oom_badness: memoy use =166, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700370] oom_badness: pid = 2041, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700371] oom_badness: memoy use =51, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700373] oom_badness: pid = 2042, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700375] oom_badness: memoy use =5378, totalpages=506807, points=10
Oct 20 10:39:14 myhost kernel: [ 2187.700376] oom_badness: pid = 2046, oom_score_adj=0, points=10
Oct 20 10:39:14 myhost kernel: [ 2187.700378] select_bad_process, ===========have choose pid=2046 to kill, points=10
Oct 20 10:39:14 myhost kernel: [ 2187.700380] oom_badness: memoy use =140, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700381] oom_badness: pid = 2048, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700383] oom_badness: memoy use =1188, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700385] oom_badness: pid = 2056, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700386] oom_badness: memoy use =584, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700388] oom_badness: pid = 2059, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700390] oom_badness: memoy use =336, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700391] oom_badness: pid = 2062, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700393] oom_badness: memoy use =924, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700395] oom_badness: pid = 2063, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700396] oom_badness: memoy use =369, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700398] oom_badness: pid = 2065, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700400] oom_badness: memoy use =791, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700401] oom_badness: pid = 2066, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700403] oom_badness: memoy use =70, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700405] oom_badness: pid = 2067, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700406] oom_badness: memoy use =41485, totalpages=506807, points=81
Oct 20 10:39:14 myhost kernel: [ 2187.700408] oom_badness: pid = 2069, oom_score_adj=0, points=81
Oct 20 10:39:14 myhost kernel: [ 2187.700409] select_bad_process, ===========have choose pid=2069 to kill, points=81
Oct 20 10:39:14 myhost kernel: [ 2187.700411] oom_badness: memoy use =198, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700413] oom_badness: pid = 2077, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700415] oom_badness: memoy use =1599, totalpages=506807, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700416] oom_badness: pid = 2079, oom_score_adj=0, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700418] oom_badness: memoy use =416, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700420] oom_badness: pid = 2081, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700422] oom_badness: memoy use =264, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700423] oom_badness: pid = 2086, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700425] oom_badness: memoy use =287, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700426] oom_badness: pid = 2087, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700428] oom_badness: memoy use =291,
totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700430] oom_badness: pid = 2088, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700432] oom_badness: memoy use =434, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700433] oom_badness: pid = 2089, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700435] oom_badness: memoy use =269, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700436] oom_badness: pid = 2099, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700438] oom_badness: memoy use =376, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700440] oom_badness: pid = 2101, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700441] oom_badness: memoy use =995, totalpages=506807, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700443] oom_badness: pid = 2110, oom_score_adj=0, points=1
Oct 20 10:39:14 myhost kernel: [ 2187.700445] oom_badness: memoy use =276, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700446] oom_badness: pid = 2113, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700448] oom_badness: memoy use =177, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700450] oom_badness: pid = 2115, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700451] oom_badness: memoy use =69, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700453] oom_badness: pid = 2118, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700455] oom_badness: memoy use =119, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700456] oom_badness: pid = 2129, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700458] oom_badness: memoy use =2216, totalpages=506807, points=4
Oct 20 10:39:14 myhost kernel: [ 2187.700460] oom_badness: pid = 2133, oom_score_adj=0, points=-26
Oct 20 10:39:14 myhost kernel: [ 2187.700462] oom_badness: memoy use =202, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700463] oom_badness: pid = 2145, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700465] oom_badness: memoy use =213, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700466] oom_badness: pid = 2157, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700468] oom_badness: memoy use =1183, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700470] oom_badness: pid = 2189, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700471] oom_badness: memoy use =29, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700473] oom_badness: pid = 2191, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700475] oom_badness: memoy use =451, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700476] oom_badness: pid = 2193, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700478] oom_badness: memoy use =36, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700479] oom_badness: pid = 2205, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700481] oom_badness: memoy use =476, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700483] oom_badness: pid = 2206, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700484] oom_badness: memoy use =43, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700486] oom_badness: pid = 2277, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700488] oom_badness: memoy use =3431, totalpages=506807, points=6
Oct 20 10:39:14 myhost kernel: [ 2187.700489] oom_badness: pid = 2278, oom_score_adj=0, points=-24
Oct 20 10:39:14 myhost kernel: [ 2187.700491] oom_badness: memoy use =43, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700493] oom_badness: pid = 2282, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700495] oom_badness: memoy use =51, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700496] oom_badness: pid = 2283, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700498] oom_badness: memoy use =412, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700500] oom_badness: pid = 2286, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700502] oom_badness: memoy use =1891, totalpages=506807, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700503] oom_badness: pid = 2319, oom_score_adj=0, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700505] oom_badness: memoy use =21095, totalpages=506807, points=41
Oct 20 10:39:14 myhost kernel: [ 2187.700506] oom_badness: pid = 2403, oom_score_adj=0, points=41
Oct 20 10:39:14 myhost kernel: [ 2187.700508] oom_badness: memoy use =2021, totalpages=506807, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700510] oom_badness: pid = 2478, oom_score_adj=0, points=3
Oct 20 10:39:14 myhost kernel: [ 2187.700511] oom_badness: memoy use =1263, totalpages=506807, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700513] oom_badness: pid = 2487, oom_score_adj=0, points=2
Oct 20 10:39:14 myhost kernel: [ 2187.700515] oom_badness: memoy use =191, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700516] oom_badness: pid = 2614, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700518] oom_badness: memoy use =7232, totalpages=506807, points=14
Oct 20 10:39:14 myhost kernel: [ 2187.700519] oom_badness: pid = 2658, oom_score_adj=0, points=14
Oct 20 10:39:14 myhost kernel: [ 2187.700521] oom_badness: memoy use =327, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700523] oom_badness: pid = 2784, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700525] oom_badness: memoy use =479, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700526] oom_badness: pid = 2796, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700528] oom_badness: memoy use =3098, totalpages=506807, points=6
Oct 20 10:39:14 myhost kernel: [ 2187.700529] oom_badness: pid = 2882, oom_score_adj=0, points=6
Oct 20 10:39:14 myhost kernel: [ 2187.700532] oom_badness: memoy use =36, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700533] oom_badness: pid = 3319, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700535] oom_badness: memoy use =478, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700536] oom_badness: pid = 3320, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700538] oom_badness: memoy use =15539, totalpages=506807, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700540] oom_badness: pid = 3436, oom_score_adj=0, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700541] oom_badness: memoy use =118, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700543] oom_badness: pid = 3440, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700544] oom_badness: memoy use =29031, totalpages=506807, points=57
Oct 20 10:39:14 myhost kernel: [ 2187.700546] oom_badness: pid = 3446, oom_score_adj=0, points=57
Oct 20 10:39:14 myhost kernel: [ 2187.700548] oom_badness: memoy use =15735, totalpages=506807, points=31
Oct 20 10:39:14 myhost kernel: [ 2187.700549] oom_badness: pid = 3498, oom_score_adj=0, points=31
Oct 20 10:39:14 myhost kernel: [ 2187.700551] oom_badness: memoy use =66, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700552] oom_badness: pid = 3506, oom_score_adj=0, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700554] oom_badness: memoy use =4007, totalpages=506807, points=7
Oct 20 10:39:14 myhost kernel: [ 2187.700556] oom_badness: pid = 3508, oom_score_adj=0, points=7
Oct 20 10:39:14 myhost kernel: [ 2187.700557] oom_badness: memoy use =17404, totalpages=506807, points=34
Oct 20 10:39:14 myhost kernel: [ 2187.700559] oom_badness: pid = 3564, oom_score_adj=0, points=34
Oct 20 10:39:14 myhost kernel: [ 2187.700560] oom_badness: memoy use =14779, totalpages=506807, points=29
Oct 20 10:39:14 myhost kernel: [ 2187.700562] oom_badness: pid = 3622, oom_score_adj=0, points=29
Oct 20 10:39:14 myhost kernel: [ 2187.700563] oom_badness: memoy use =14386, totalpages=506807, points=28
Oct 20 10:39:14 myhost kernel: [ 2187.700565] oom_badness: pid = 3692, oom_score_adj=0, points=28
Oct 20 10:39:14 myhost kernel: [ 2187.700567] oom_badness: memoy use =13580, totalpages=506807, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700568] oom_badness: pid = 3708, oom_score_adj=0, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700570] oom_badness: memoy use =16011, totalpages=506807, points=31
Oct 20 10:39:14 myhost kernel: [ 2187.700571] oom_badness: pid = 3770, oom_score_adj=0, points=31
Oct 20 10:39:14 myhost kernel: [ 2187.700573] oom_badness: memoy use =19788, totalpages=506807, points=39
Oct 20 10:39:14 myhost kernel: [ 2187.700574] oom_badness: pid = 3889, oom_score_adj=0, points=39
Oct 20 10:39:14 myhost kernel: [ 2187.700576] oom_badness: memoy use =23319, totalpages=506807, points=46
Oct 20 10:39:14 myhost kernel: [ 2187.700577] oom_badness: pid = 3903, oom_score_adj=0, points=46
Oct 20 10:39:14 myhost kernel: [ 2187.700579] oom_badness: memoy use =14994, totalpages=506807, points=29
Oct 20 10:39:14 myhost kernel: [ 2187.700580] oom_badness: pid = 3927, oom_score_adj=0, points=29
Oct 20 10:39:14 myhost kernel: [ 2187.700582] oom_badness: memoy use =15434, totalpages=506807, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700583] oom_badness: pid = 3940, oom_score_adj=0, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700585] oom_badness: memoy use =17479, totalpages=506807, points=34
Oct 20 10:39:14 myhost kernel: [ 2187.700586] oom_badness: pid = 3954, oom_score_adj=0, points=34
Oct 20 10:39:14 myhost kernel: [ 2187.700588] oom_badness: memoy use =16979, totalpages=506807, points=33
Oct 20 10:39:14 myhost kernel: [ 2187.700589] oom_badness: pid = 3972, oom_score_adj=0, points=33
Oct 20 10:39:14 myhost kernel: [ 2187.700591] oom_badness: memoy use =13305, totalpages=506807, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700593] oom_badness: pid = 4002, oom_score_adj=0, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700594] oom_badness: memoy use =16434, totalpages=506807, points=32
Oct 20 10:39:14 myhost kernel: [ 2187.700596] oom_badness: pid = 4015, oom_score_adj=0, points=32
Oct 20 10:39:14 myhost kernel: [ 2187.700597] oom_badness: memoy use =15376, totalpages=506807, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700599] oom_badness: pid = 4028, oom_score_adj=0, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700601] oom_badness: memoy use =13587, totalpages=506807, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700602] oom_badness: pid = 4081, oom_score_adj=0, points=26
Oct 20 10:39:14 myhost kernel: [ 2187.700604] oom_badness: memoy use =36, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700605] oom_badness: pid = 4085, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700607] oom_badness: memoy use =36, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700609] oom_badness: pid = 4086, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700611] oom_badness: memoy use =35, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700612] oom_badness: pid = 4089, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700614] oom_badness: memoy use =43, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700615] oom_badness: pid = 4092, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700617] oom_badness: memoy use =42, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700619] oom_badness: pid = 4093, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700620] oom_badness: memoy use =55, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700622]
oom_badness: pid = 4096, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700624] oom_badness: memoy use =54, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700625] oom_badness: pid = 4097, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700627] oom_badness: memoy use =58, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700629] oom_badness: pid = 4128, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700630] oom_badness: memoy use =58, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700632] oom_badness: pid = 4129, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700634] oom_badness: memoy use =86, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700636] oom_badness: pid = 4130, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700637] oom_badness: memoy use =87, totalpages=506807, points=0
Oct 20 10:39:14 myhost kernel: [ 2187.700639] oom_badness: pid = 4131, oom_score_adj=0, points=-30
Oct 20 10:39:14 myhost kernel: [ 2187.700641] oom_badness: memoy use =15280, totalpages=506807, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700642] oom_badness: pid = 4141, oom_score_adj=0, points=30
Oct 20 10:39:14 myhost kernel: [ 2187.700644] acroread invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Oct 20 10:39:14 myhost kernel: [ 2187.700647] acroread cpuset=/ mems_allowed=0
Oct 20 10:39:14 myhost kernel: [ 2187.700649] Pid: 3446, comm: acroread Not tainted 2.6.36testing #7
Oct 20 10:39:14 myhost kernel: [ 2187.700650] Call Trace:
Oct 20 10:39:14 myhost kernel: [ 2187.700656] [<c10c0a20>] dump_header.clone.5+0x80/0x1e0
Oct 20 10:39:14 myhost kernel: [ 2187.700660] [<c12eb5b6>] ? printk+0x18/0x1a
Oct 20 10:39:14 myhost kernel: [ 2187.700662] [<c10c0d85>] ? oom_badness+0x185/0x1a0
Oct 20 10:39:14 myhost kernel: [ 2187.700664] [<c10c0dfc>] oom_kill_process+0x5c/0x210
Oct 20 10:39:14 myhost kernel: [ 2187.700666] [<c10c1031>] ? select_bad_process.clone.7+0x81/0x100
Oct 20 10:39:14 myhost kernel: [ 2187.700668] [<c10c134f>] out_of_memory+0xbf/0x1d0
Oct 20 10:39:14 myhost kernel: [ 2187.700670] [<c10c1218>] ? try_set_zonelist_oom+0xc8/0xe0
Oct 20 10:39:14 myhost kernel: [ 2187.700673] [<c10c4bd8>] __alloc_pages_nodemask+0x5e8/0x600
Oct 20 10:39:14 myhost kernel: [ 2187.700676] [<c10c6585>] __do_page_cache_readahead+0x105/0x230
Oct 20 10:39:14 myhost kernel: [ 2187.700678] [<c10c6911>] ra_submit+0x21/0x30
Oct 20 10:39:14 myhost kernel: [ 2187.700680] [<c10beb8b>] filemap_fault+0x36b/0x3e0
Oct 20 10:39:14 myhost kernel: [ 2187.700683] [<c10d5cab>] __do_fault+0x3b/0x4f0
Oct 20 10:39:14 myhost kernel: [ 2187.700686] [<c10d8d0d>] handle_mm_fault+0xfd/0x930
Oct 20 10:39:14 myhost kernel: [ 2187.700689] [<c1029250>] ? do_page_fault+0x0/0x3e0
Oct 20 10:39:14 myhost kernel: [ 2187.700691] [<c10293a0>] do_page_fault+0x150/0x3e0
Oct 20 10:39:14 myhost kernel: [ 2187.700694] [<c106a69f>] ? ktime_get_ts+0xff/0x130
Oct 20 10:39:14 myhost kernel: [ 2187.700697] [<c1109ce4>] ? sys_poll+0x54/0xd0
Oct 20 10:39:14 myhost kernel: [ 2187.700699] [<c1029250>] ? do_page_fault+0x0/0x3e0
Oct 20 10:39:14 myhost kernel: [ 2187.700702] [<c12ef1fb>] error_code+0x67/0x6c
Oct 20 10:39:14 myhost kernel: [ 2187.700703] Mem-Info:
Oct 20 10:39:14 myhost kernel: [ 2187.700704] DMA per-cpu:
Oct 20 10:39:14 myhost kernel: [ 2187.700706] CPU 0: hi: 0, btch: 1 usd: 0
Oct 20 10:39:14 myhost kernel: [ 2187.700707] CPU 1: hi: 0, btch: 1 usd: 0
Oct 20 10:39:14 myhost kernel: [ 2187.700708] Normal per-cpu:
Oct 20 10:39:14 myhost kernel: [ 2187.700709] CPU 0: hi: 186, btch: 31 usd: 0
Oct 20 10:39:14 myhost kernel: [ 2187.700711] CPU 1: hi: 186, btch: 31 usd: 40
Oct 20 10:39:14 myhost kernel: [ 2187.700712] HighMem per-cpu:
Oct 20 10:39:14 myhost kernel: [ 2187.700713] CPU 0: hi: 186, btch: 31 usd: 0
Oct 20 10:39:14 myhost kernel: [ 2187.700714] CPU 1: hi: 186, btch: 31 usd: 52
Oct 20 10:39:14 myhost kernel: [ 2187.700718] active_anon:398375 inactive_anon:82967 isolated_anon:0
Oct 20 10:39:14 myhost kernel: [ 2187.700718] active_file:81 inactive_file:429 isolated_file:32
Oct 20 10:39:14 myhost kernel: [ 2187.700719] unevictable:13 dirty:2 writeback:14 unstable:0
Oct 20 10:39:14 myhost kernel: [ 2187.700720] free:11942 slab_reclaimable:2391 slab_unreclaimable:3303
Oct 20 10:39:14 myhost kernel: [ 2187.700721] mapped:5617 shmem:33909 pagetables:2280 bounce:0
Oct 20 10:39:14 myhost kernel: [ 2187.700725] DMA free:7984kB min:64kB low:80kB high:96kB active_anon:3852kB inactive_anon:3968kB active_file:0kB inactive_file:52kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15788kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:16kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:36 all_unreclaimable?
no
Oct 20 10:39:14 myhost kernel: [ 2187.700728] lowmem_reserve[]: 0 865 1980 1980
Oct 20 10:39:14 myhost kernel: [ 2187.700734] Normal free:39404kB min:3728kB low:4660kB high:5592kB active_anon:732980kB inactive_anon:42036kB active_file:116kB inactive_file:780kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:0kB dirty:8kB writeback:48kB mapped:6728kB shmem:44600kB slab_reclaimable:9564kB slab_unreclaimable:13196kB kernel_stack:3200kB pagetables:9120kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1207 all_unreclaimable? no
Oct 20 10:39:14 myhost kernel: [ 2187.700737] lowmem_reserve[]: 0 0 8921 8921
Oct 20 10:39:14 myhost kernel: [ 2187.700743] HighMem free:380kB min:512kB low:1712kB high:2912kB active_anon:856668kB inactive_anon:285864kB active_file:208kB inactive_file:896kB unevictable:52kB isolated(anon):0kB isolated(file):0kB present:1141984kB mlocked:52kB dirty:0kB writeback:8kB mapped:15740kB shmem:91036kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1398 all_unreclaimable? no
Oct 20 10:39:14 myhost kernel: [ 2187.700746] lowmem_reserve[]: 0 0 0 0
Oct 20 10:39:14 myhost kernel: [ 2187.700749] DMA: 1*4kB 1*8kB 1*16kB 3*32kB 1*64kB 3*128kB 3*256kB 3*512kB 1*1024kB 0*2048kB 1*4096kB = 7996kB
Oct 20 10:39:14 myhost kernel: [ 2187.700755] Normal: 53*4kB 79*8kB 144*16kB 139*32kB 121*64kB 44*128kB 40*256kB 6*512kB 1*1024kB 0*2048kB 1*4096kB = 39404kB
Oct 20 10:39:14 myhost kernel: [ 2187.700761] HighMem: 29*4kB 17*8kB 6*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 380kB
Oct 20 10:39:14 myhost kernel: [ 2187.700767] 34452 total pagecache pages
Oct 20 10:39:14 myhost kernel: [ 2187.700768] 0 pages in swap cache
Oct 20 10:39:14 myhost kernel: [ 2187.700769] Swap cache stats: add 0, delete 0, find 0/0
Oct 20 10:39:14 myhost kernel: [ 2187.700771] Free swap = 0kB
Oct 20 10:39:14 myhost kernel: [ 2187.700771] Total swap = 0kB
Oct 20 10:39:14 myhost kernel: [ 2187.704392] 515070 pages RAM
Oct 20 10:39:14 myhost kernel: [ 2187.704393] 287745 pages HighMem
Oct 20 10:39:14 myhost kernel: [ 2187.704394] 8264 pages reserved
Oct 20 10:39:14 myhost kernel: [ 2187.704395] 29765 pages shared
Oct 20 10:39:14 myhost kernel: [ 2187.704396] 478272 pages non-shared
Oct 20 10:39:14 myhost kernel: [ 2187.704397] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
Oct 20 10:39:14 myhost kernel: [ 2187.704404] [ 566] 0 566 578 167 0 -17 -1000 udevd
Oct 20 10:39:14 myhost kernel: [ 2187.704407] [ 1280] 0 1280 1280 53 1 0 0 syslog-ng
Oct 20 10:39:14 myhost kernel: [ 2187.704410] [ 1281] 0 1281 1358 121 1 0 0 syslog-ng
Oct 20 10:39:14 myhost kernel: [ 2187.704412] [ 1284] 81 1284 780 229 0 0 0 dbus-daemon
Oct 20 10:39:14 myhost kernel: [ 2187.704415] [ 1287] 82 1287 3757 239 0 0 0 hald
Oct 20 10:39:14 myhost kernel: [ 2187.704418] [ 1288] 0 1288 898 71 0 0 0 hald-runner
Oct 20 10:39:14 myhost kernel: [ 2187.704420] [ 1317] 0 1317 914 33 1 0 0 hald-addon-inpu
Oct 20 10:39:14 myhost kernel: [ 2187.704423] [ 1331] 0 1331 914 36 1 0 0
hald-addon-stor
Oct 20 10:39:14 myhost kernel: [ 2187.704425] [ 1333] 82 1333 823 49 0 0 0 hald-addon-acpi
Oct 20 10:39:14 myhost kernel: [ 2187.704428] [ 1370] 0 1370 577 168 1 -17 -1000 udevd
Oct 20 10:39:14 myhost kernel: [ 2187.704430] [ 1396] 0 1396 451 26 1 0 0 crond
Oct 20 10:39:14 myhost kernel: [ 2187.704433] [ 1419] 0 1419 718 50 1 0 0 mysqld_safe
Oct 20 10:39:14 myhost kernel: [ 2187.704436] [ 1438] 0 1438 3575 116 0 0 0 gdm-binary
Oct 20 10:39:14 myhost kernel: [ 2187.704438] [ 1441] 0 1441 439 21 1 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704441] [ 1442] 0 1442 439 21 0 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704443] [ 1443] 0 1443 439 21 1 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704446] [ 1444] 0 1444 439 21 0 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704448] [ 1445] 0 1445 439 20 0 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704451] [ 1446] 0 1446 439 21 1 0 0 agetty
Oct 20 10:39:14 myhost kernel: [ 2187.704453] [ 1455] 0 1455 2214 174 0 0 0 cupsd
Oct 20 10:39:14 myhost kernel: [ 2187.704456] [ 1517] 0 1517 1647 98 1 -17 -1000 sshd
Oct 20 10:39:14 myhost kernel: [ 2187.704459] [ 1524] 0 1524 6451 164 1 0 0 NetworkManager
Oct 20 10:39:14 myhost kernel: [ 2187.704461] [ 1542] 0 1542 5710 251 0 0 0 polkitd
Oct 20 10:39:14 myhost kernel: [ 2187.704464] [ 1562] 0 1562 2014 59 1 0 0 vmware-usbarbit
Oct 20 10:39:14 myhost kernel: [ 2187.704466] [ 1656] 89 1656 29902 2815 0 0 0 mysqld
Oct 20 10:39:14 myhost kernel: [ 2187.704469] [ 1663] 0 1663 1247 56 0 0 0 wpa_supplicant
Oct 20 10:39:14 myhost kernel: [ 2187.704471] [ 1666] 0 1666 4910 227 0 0 0 smbd
Oct 20 10:39:14 myhost kernel: [ 2187.704474] [ 1671] 0 1671 2807 177 0 0 0 nmbd
Oct 20 10:39:14 myhost kernel: [ 2187.704476] [ 1701] 0 1701 4910 226 1 0 0 smbd
Oct 20 10:39:14 myhost kernel: [ 2187.704479] [ 1709] 0 1709 5162 1358 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704482] [ 1716] 33 1716 4841 1149 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704484] [ 1717] 0 1717 487 28 0 0 0 dhcpcd
Oct 20 10:39:14 myhost kernel: [ 2187.704487] [ 1723] 0 1723 946 41 0 0 0 ApplicationPool
Oct 20 10:39:14 myhost kernel: [ 2187.704489] [ 1724] 0 1724 3555 776 1 0 0 ruby
Oct 20 10:39:14 myhost kernel: [ 2187.704492] [ 1727] 33 1727 5162 1363 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704494] [ 1728] 33 1728 5162 1363 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704496] [ 1729] 33 1729 5162 1363 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704499] [ 1730] 33 1730 5162 1363 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704501] [ 1731] 33 1731 5162 1363 0 0 0 httpd
Oct 20 10:39:14 myhost kernel: [ 2187.704504] [ 1752] 0 1752 854 68 0 0 0 cntlm
Oct 20 10:39:14 myhost kernel: [ 2187.704506] [ 1754] 0 1754 4391 192 0 0 0 gdm-simple-slav
Oct 20 10:39:14 myhost kernel: [ 2187.704509] [ 1756] 0 1756 38002 11195 0 0 0 Xorg
Oct 20 10:39:14 myhost kernel: [ 2187.704512] [ 1775] 0 1775 6671 176 0 0 0 console-kit-dae
Oct 20 10:39:14 myhost kernel: [ 2187.704514] [ 1876] 120 1876 6572 292 0 0 0 polkit-gnome-au
Oct 20 10:39:14 myhost kernel: [ 2187.704517] [ 1882] 0 1882 3666 126 0 0 0 upowerd
Oct 20 10:39:14 myhost kernel: [ 2187.704520] [ 1883] 0 1883 3900 110 1 0 0 gdm-session-wor
Oct 20 10:39:14 myhost kernel: [ 2187.704522] [ 1950] 1000 1950 7737 132 1 0 0 gnome-keyring-d
Oct 20 10:39:14 myhost kernel: [ 2187.704525] [ 1969] 1000 1969 9233 306 1 0 0 gnome-session
Oct 20 10:39:14 myhost kernel: [ 2187.704528] [ 1996] 1000 1996 794 42 0 0 0 dbus-launch
Oct 20 10:39:14 myhost kernel: [ 2187.704530] [ 1997] 1000 1997 1249 378 0 0 0 dbus-daemon
Oct 20 10:39:14 myhost kernel: [ 2187.704533] [ 1999] 1000 1999 886 49 1 0 0 ssh-agent
Oct 20 10:39:14 myhost kernel: [ 2187.704535] [ 2004] 1000 2004 2650 485 1 0 0 gconfd-2
Oct 20 10:39:14 myhost kernel: [ 2187.704538] [ 2005] 1000 2005 11022 4515 0 0 0 fcitx
Oct 20 10:39:14 myhost kernel: [ 2187.704541] [ 2014] 1000 2014 8792 837 0 0 0 gnome-settings-
Oct 20 10:39:14 myhost kernel: [ 2187.704543] [ 2019] 1000 2019 2201 97 1 0 0 gvfsd
Oct 20 10:39:14 myhost kernel: [ 2187.704546] [ 2022] 1000 2022 40893 833 1 0 0 metacity
Oct 20 10:39:14 myhost kernel: [ 2187.704549] [ 2028] 0 2028 577 161 0 -17 -1000 udevd
Oct 20 10:39:14 myhost kernel: [ 2187.704551] [ 2032] 1000 2032 7599 130 0 0 0 gvfs-fuse-daemo
Oct 20 10:39:14 myhost kernel: [ 2187.704554] [ 2036] 1000 2036 46396 1322 0 0 0 gnome-panel
Oct 20 10:39:14 myhost kernel: [ 2187.704556] [ 2039] 1000 2039 8657 172 0 0 0 gvfs-gdu-volume
Oct 20 10:39:14 myhost kernel: [ 2187.704559] [ 2041] 0 2041 5721 166 0 0 0 udisks-daemon
Oct 20 10:39:14 myhost kernel: [ 2187.704561] [ 2042] 0 2042 1284 51 0 0 0 udisks-daemon
Oct 20 10:39:14 myhost kernel: [ 2187.704564] [ 2046] 1000 2046 63112 5378 0 0 0 nautilus
Oct 20 10:39:14 myhost kernel: [ 2187.704567] [ 2048] 1000 2048 9063 140 0 0 0 bonobo-activati
Oct 20 10:39:14 myhost kernel: [ 2187.704569] [ 2056] 1000 2056 43835 1188 1 0 0 wnck-applet
Oct 20 10:39:14 myhost kernel: [ 2187.704572] [ 2059] 1000 2059 42284 583 1 0 0 cpufreq-applet
Oct 20 10:39:14 myhost kernel: [ 2187.704574] [ 2062] 1000 2062 41008 336 1 0 0 notification-ar
Oct 20 10:39:14 myhost kernel: [ 2187.704577] [ 2063] 1000 2063 43602 924 0 0 0 mixer_applet2
Oct 20 10:39:14 myhost kernel: [ 2187.704580] [ 2065] 1000 2065 41469 369 0 0 0 multiload-apple
Oct 20 10:39:14 myhost kernel: [ 2187.704582] [ 2066] 1000 2066 45661 791 1 0 0 clock-applet
Oct 20 10:39:14 myhost kernel: [ 2187.704585] [ 2067] 1000 2067 1631 70 0 0 0 sh
Oct 20 10:39:14 myhost kernel: [ 2187.704587] [ 2069] 1000 2069 114532 41485 0 0 0 evolution
Oct 20 10:39:14 myhost kernel: [ 2187.704590] [ 2077] 1000 2077 6808 198 0 0 0 polkit-gnome-au
Oct 20 10:39:14 myhost kernel: [ 2187.704592] [ 2079] 1000 2079 8439 1599 0 0 0 applet.py
Oct 20 10:39:14 myhost kernel: [ 2187.704595] [ 2081] 1000 2081 10737 416 1 0 0 evolution-alarm
Oct 20 10:39:14 myhost kernel: [ 2187.704598] [ 2086] 1000 2086 5037 264 0 0 0 gdu-notificatio
Oct 20
10:39:14 myhost kernel: [ 2187.704600] [ 2087] 1000 2087 7018 287 1 0 0 gnome-power-man Oct 20 10:39:14 myhost kernel: [ 2187.704603] [ 2088] 1000 2088 7574 291 0 0 0 vino-server Oct 20 10:39:14 myhost kernel: [ 2187.704605] [ 2089] 1000 2089 72305 434 0 0 0 nm-applet Oct 20 10:39:14 myhost kernel: [ 2187.704608] [ 2099] 1000 2099 7247 269 1 0 0 gnome-screensav Oct 20 10:39:14 myhost kernel: [ 2187.704611] [ 2101] 1000 2101 15260 376 1 0 0 e-calendar-fact Oct 20 10:39:14 myhost kernel: [ 2187.704613] [ 2110] 1000 2110 41625 995 1 0 0 notify-osd Oct 20 10:39:14 myhost kernel: [ 2187.704616] [ 2113] 1000 2113 2414 276 0 0 0 gvfsd-trash Oct 20 10:39:14 myhost kernel: [ 2187.704618] [ 2115] 1000 2115 1806 177 1 0 0 mission-control Oct 20 10:39:14 myhost kernel: [ 2187.704621] [ 2118] 0 2118 3290 69 0 0 0 system-tools-ba Oct 20 10:39:14 myhost kernel: [ 2187.704624] [ 2129] 1000 2129 2201 119 0 0 0 gvfsd-burn Oct 20 10:39:14 myhost kernel: [ 2187.704626] [ 2133] 0 2133 3296 2216 1 0 0 SystemToolsBack Oct 20 10:39:14 myhost kernel: [ 2187.704629] [ 2145] 1000 2145 2299 202 1 0 0 gvfsd-metadata Oct 20 10:39:14 myhost kernel: [ 2187.704631] [ 2157] 1000 2157 22558 213 1 0 0 conky Oct 20 10:39:14 myhost kernel: [ 2187.704634] [ 2189] 1000 2189 43833 1183 1 0 0 gnome-terminal Oct 20 10:39:14 myhost kernel: [ 2187.704636] [ 2191] 1000 2191 450 29 1 0 0 gnome-pty-helpe Oct 20 10:39:14 myhost kernel: [ 2187.704639] [ 2193] 1000 2193 2041 451 1 0 0 bash Oct 20 10:39:14 myhost kernel: [ 2187.704641] [ 2205] 1000 2205 1480 36 1 0 0 su Oct 20 10:39:14 myhost kernel: [ 2187.704644] [ 2206] 0 2206 2041 476 1 0 0 bash Oct 20 10:39:14 myhost kernel: [ 2187.704646] [ 2277] 0 2277 1487 43 1 0 0 sudo Oct 20 10:39:14 myhost kernel: [ 2187.704649] [ 2278] 0 2278 55057 3431 0 0 0 gedit Oct 20 10:39:14 myhost kernel: [ 2187.704651] [ 2282] 0 2282 794 43 1 0 0 dbus-launch Oct 20 10:39:14 myhost kernel: [ 2187.704654] [ 2283] 0 2283 593 51 0 0 0 dbus-daemon Oct 20 10:39:14 myhost kernel: [ 
2187.704657] [ 2286] 0 2286 2590 412 1 0 0 gconfd-2 Oct 20 10:39:14 myhost kernel: [ 2187.704659] [ 2319] 1000 2319 151661 1891 1 0 0 stardict Oct 20 10:39:14 myhost kernel: [ 2187.704662] [ 2403] 1000 2403 104374 21095 0 0 0 firefox Oct 20 10:39:14 myhost kernel: [ 2187.704664] [ 2478] 1000 2478 23809 2021 1 0 0 plugin-containe Oct 20 10:39:14 myhost kernel: [ 2187.704667] [ 2487] 1000 2487 9147 1263 0 0 0 GoogleTalkPlugi Oct 20 10:39:14 myhost kernel: [ 2187.704669] [ 2614] 1000 2614 5541 191 0 0 0 dconf-service Oct 20 10:39:14 myhost kernel: [ 2187.704672] [ 2658] 1000 2658 57278 7232 1 0 0 skype Oct 20 10:39:14 myhost kernel: [ 2187.704674] [ 2784] 1000 2784 9883 327 0 0 0 gvfsd-smb Oct 20 10:39:14 myhost kernel: [ 2187.704677] [ 2796] 1000 2796 2067 479 0 0 0 bash Oct 20 10:39:14 myhost kernel: [ 2187.704680] [ 2882] 1000 2882 50484 3098 1 0 0 gedit Oct 20 10:39:14 myhost kernel: [ 2187.704682] [ 3319] 1000 3319 1480 36 1 0 0 su Oct 20 10:39:14 myhost kernel: [ 2187.704685] [ 3320] 0 3320 2041 478 1 0 0 bash Oct 20 10:39:14 myhost kernel: [ 2187.704687] [ 3436] 1000 3436 63059 15539 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704690] [ 3440] 1000 3440 5483 118 0 0 0 evinced Oct 20 10:39:14 myhost kernel: [ 2187.704692] [ 3446] 1000 3446 67154 29031 0 0 0 acroread Oct 20 10:39:14 myhost kernel: [ 2187.704695] [ 3498] 1000 3498 64269 15735 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704698] [ 3506] 1000 3506 1631 66 1 0 0 foxitreader Oct 20 10:39:14 myhost kernel: [ 2187.704700] [ 3508] 1000 3508 11968 4007 1 0 0 FoxitReader Oct 20 10:39:14 myhost kernel: [ 2187.704703] [ 3564] 1000 3564 65198 17404 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704705] [ 3622] 1000 3622 62247 14779 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704708] [ 3692] 1000 3692 64126 14386 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704710] [ 3708] 1000 3708 61564 13580 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704713] [ 3770] 1000 3770 63902 16011 
0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704715] [ 3889] 1000 3889 67617 19788 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704718] [ 3903] 1000 3903 77057 23319 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704720] [ 3927] 1000 3927 60335 14994 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704723] [ 3940] 1000 3940 63049 15434 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704725] [ 3954] 1000 3954 65562 17479 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704727] [ 3972] 1000 3972 64697 16979 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704730] [ 4002] 1000 4002 59109 13305 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704733] [ 4015] 1000 4015 64456 16434 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704735] [ 4028] 1000 4028 63322 15376 0 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704738] [ 4081] 1000 4081 59547 13587 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704740] [ 4085] 0 4085 717 36 0 0 0 sh Oct 20 10:39:14 myhost kernel: [ 2187.704743] [ 4086] 0 4086 717 36 0 0 0 sh Oct 20 10:39:14 myhost kernel: [ 2187.704745] [ 4089] 0 4089 717 35 1 0 0 sh Oct 20 10:39:14 myhost kernel: [ 2187.704748] [ 4092] 0 4092 1185 43 0 0 0 git Oct 20 10:39:14 myhost kernel: [ 2187.704750] [ 4093] 0 4093 1185 42 1 0 0 git Oct 20 10:39:14 myhost kernel: [ 2187.704753] [ 4096] 0 4096 717 55 0 0 0 git-pull Oct 20 10:39:14 myhost kernel: [ 2187.704755] [ 4097] 0 4097 717 54 1 0 0 git-pull Oct 20 10:39:14 myhost kernel: [ 2187.704758] [ 4128] 0 4128 1187 58 0 0 0 git Oct 20 10:39:14 myhost kernel: [ 2187.704760] [ 4129] 0 4129 1187 58 1 0 0 git Oct 20 10:39:14 myhost kernel: [ 2187.704763] [ 4130] 0 4130 1571 86 1 0 0 ssh Oct 20 10:39:14 myhost kernel: [ 2187.704765] [ 4131] 0 4131 1571 87 0 0 0 ssh Oct 20 10:39:14 myhost kernel: [ 2187.704767] [ 4141] 1000 4141 65072 15280 1 0 0 evince Oct 20 10:39:14 myhost kernel: [ 2187.704771] ================= Oct 20 10:39:14 myhost kernel: [ 2187.704773] oom_kill_process:kill task 
pid=2069, victim_points=0 Oct 20 10:39:14 myhost kernel: [ 2187.704776] oom_kill_task:=========kill task pid=2069 here is the page-types log: flags page-count MB symbolic-flags long-symbolic-flags 0x0000000000000000 17478 68 __________________________________ 0x0000000100000000 8264 32 ______________________r___________ reserved 0x0000000000010000 2812 10 ________________T_________________ compound_tail 0x0000000000008000 76 0 _______________H__________________ compound_head 0x0000008000000000 1 0 _____________________________c____ uncached 0x0000000400000001 3 0 L_______________________d_________ locked,mappedtodisk 0x0000000400000008 1 0 ___U____________________d_________ uptodate,mappedtodisk 0x0000000400000021 12 0 L____l__________________d_________ locked,lru,mappedtodisk 0x0000000800000024 718 2 __R__l___________________P________ referenced,lru,private 0x0000000400000028 2183 8 ___U_l__________________d_________ uptodate,lru,mappedtodisk 0x0001000400000028 4 0 ___U_l__________________d_____I___ uptodate,lru,mappedtodisk,readahead 0x000000040000002c 67 0 __RU_l__________________d_________ referenced,uptodate,lru,mappedtodisk 0x000000000000002c 4 0 __RU_l____________________________ referenced,uptodate,lru 0x000000080000002c 2169 8 __RU_l___________________P________ referenced,uptodate,lru,private 0x000000000000402c 3939 15 __RU_l________b___________________ referenced,uptodate,lru,swapbacked 0x0000000000004038 1 0 ___UDl________b___________________ uptodate,dirty,lru,swapbacked 0x000000000000403c 71 0 __RUDl________b___________________ referenced,uptodate,dirty,lru,swapbacked 0x000000080000003c 1 0 __RUDl___________________P________ referenced,uptodate,dirty,lru,private 0x0000000800000060 107 0 _____lA__________________P________ lru,active,private 0x0000000800000064 228 0 __R__lA__________________P________ referenced,lru,active,private 0x0000000c00000068 8 0 ___U_lA_________________dP________ uptodate,lru,active,mappedtodisk,private 0x0000000000000068 16 0 
___U_lA___________________________ uptodate,lru,active 0x0000000800000068 331 1 ___U_lA__________________P________ uptodate,lru,active,private 0x0000000400000068 7 0 ___U_lA_________________d_________ uptodate,lru,active,mappedtodisk 0x000000000000006c 26 0 __RU_lA___________________________ referenced,uptodate,lru,active 0x000000080000006c 21 0 __RU_lA__________________P________ referenced,uptodate,lru,active,private 0x000000040000006c 142 0 __RU_lA_________________d_________ referenced,uptodate,lru,active,mappedtodisk 0x0000000000004078 14103 55 ___UDlA_______b___________________ uptodate,dirty,lru,active,swapbacked 0x000000000000407c 4277 16 __RUDlA_______b___________________ referenced,uptodate,dirty,lru,active,swapbacked 0x0004000000008080 66 0 _______S_______H________________A_ slab,compound_head,slub_frozen 0x0000000000000080 2556 9 _______S__________________________ slab 0x0000000000008080 794 3 _______S_______H__________________ slab,compound_head 0x0004000000000080 51 0 _______S________________________A_ slab,slub_frozen 0x000000080000012c 2 0 __RU_l__W________________P________ referenced,uptodate,lru,writeback,private 0x0000000000000400 1719 6 __________B_______________________ buddy 0x0000000000000800 1 0 ___________M______________________ mmap 0x0000000000000804 2 0 __R________M______________________ referenced,mmap 0x0000000400000808 1 0 ___U_______M____________d_________ uptodate,mmap,mappedtodisk 0x0000000000000810 1 0 ____D______M______________________ dirty,mmap 0x0000000000008814 1 0 __R_D______M___H__________________ referenced,dirty,mmap,compound_head 0x0000000000010814 15 0 __R_D______M____T_________________ referenced,dirty,mmap,compound_tail 0x0000000400000828 805 3 ___U_l_____M____________d_________ uptodate,lru,mmap,mappedtodisk 0x000000040000082c 467 1 __RU_l_____M____________d_________ referenced,uptodate,lru,mmap,mappedtodisk 0x000000000000482c 1 0 __RU_l_____M__b___________________ referenced,uptodate,lru,mmap,swapbacked 
0x0000000000004838 5039 19 ___UDl_____M__b___________________ uptodate,dirty,lru,mmap,swapbacked 0x000000000000483c 126 0 __RUDl_____M__b___________________ referenced,uptodate,dirty,lru,mmap,swapbacked 0x000000020004483c 1 0 __RUDl_____M__b___u____m__________ referenced,uptodate,dirty,lru,mmap,swapbacked,unevictable,mlocked 0x0000000400000868 9 0 ___U_lA____M____________d_________ uptodate,lru,active,mmap,mappedtodisk 0x000000040000086c 3743 14 __RU_lA____M____________d_________ referenced,uptodate,lru,active,mmap,mappedtodisk 0x0000000c0000086c 7 0 __RU_lA____M____________dP________ referenced,uptodate,lru,active,mmap,mappedtodisk,private 0x0000000000004878 581 2 ___UDlA____M__b___________________ uptodate,dirty,lru,active,mmap,swapbacked 0x000000000000487c 89 0 __RUDlA____M__b___________________ referenced,uptodate,dirty,lru,active,mmap,swapbacked 0x0000000000005008 8 0 ___U________a_b___________________ uptodate,anonymous,swapbacked 0x0000000000005808 4 0 ___U_______Ma_b___________________ uptodate,mmap,anonymous,swapbacked 0x0000000000005828 83024 324 ___U_l_____Ma_b___________________ uptodate,lru,mmap,anonymous,swapbacked 0x000000000000582c 99 0 __RU_l_____Ma_b___________________ referenced,uptodate,lru,mmap,anonymous,swapbacked 0x000000020004582c 8 0 __RU_l_____Ma_b___u____m__________ referenced,uptodate,lru,mmap,anonymous,swapbacked,unevictable,mlocked 0x0000000000005838 2 0 ___UDl_____Ma_b___________________ uptodate,dirty,lru,mmap,anonymous,swapbacked 0x0000000000005868 358737 1401 ___U_lA____Ma_b___________________ uptodate,lru,active,mmap,anonymous,swapbacked 0x000000000000586c 42 0 __RU_lA____Ma_b___________________ referenced,uptodate,lru,active,mmap,anonymous,swapbacked total 515071 2011 ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-20  2:58 ` Figo.zhang
@ 2010-10-20  3:24 ` KOSAKI Motohiro
  2010-10-20  3:43 ` Figo.zhang
  0 siblings, 1 reply; 22+ messages in thread
From: KOSAKI Motohiro @ 2010-10-20 3:24 UTC (permalink / raw)
To: Figo.zhang
Cc: kosaki.motohiro, Wu Fengguang, KAMEZAWA Hiroyuki, Minchan Kim, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org

> > can you please try 1) invoke oom 2) get page-types -r again. I'm curious
> > whether the oom makes the page accounting get lost again. I mean, please send us
> > the oom log and the "page-types -r" result.
> >
> > thanks
>
> ok, i do the experiment and catch the log:

thanks.

> active_anon:398375 inactive_anon:82967 isolated_anon:0
> active_file:81 inactive_file:429 isolated_file:32
> unevictable:13 dirty:2 writeback:14 unstable:0
> free:11942 slab_reclaimable:2391 slab_unreclaimable:3303
> mapped:5617 shmem:33909 pagetables:2280 bounce:0

active_anon + inactive_anon + isolated_anon = 481342 pages ~= 1.8GB

Um, this oom doesn't make the page accounting get lost.

> here is the page-types log:
>              flags  page-count       MB  symbolic-flags                     long-symbolic-flags
>
> 0x0000000000005828       83024      324  ___U_l_____Ma_b___________________  uptodate,lru,mmap,anonymous,swapbacked
> 0x0000000000005868      358737     1401  ___U_lA____Ma_b___________________  uptodate,lru,active,mmap,anonymous,swapbacked
>              total      515071     2011

page-types shows a similar result.

The big difference is that the previous and current logs show some different processes:
only the previous one has VirtualBox, only the current one has vmware-usbarbit, etc.

Can you use the same test environment?
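[Editorial aside: KOSAKI's arithmetic above can be double-checked quickly, assuming the usual 4 KiB x86 page size:]

```python
# Sanity-check the anon-page total quoted from the Mem-Info dump.
PAGE_SIZE = 4096  # bytes; x86 default page size (assumption)

active_anon, inactive_anon, isolated_anon = 398375, 82967, 0
pages = active_anon + inactive_anon + isolated_anon
gib = pages * PAGE_SIZE / 2**30

print(pages)          # 481342
print(round(gib, 2))  # 1.84 -- i.e. roughly 1.8GB of a 2GB machine is anon memory
```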
* Re: oom_killer crash linux system
  2010-10-20  3:24 ` KOSAKI Motohiro
@ 2010-10-20  3:43 ` Figo.zhang
  2010-10-20  5:05 ` Adam Jiang
  0 siblings, 1 reply; 22+ messages in thread
From: Figo.zhang @ 2010-10-20 3:43 UTC (permalink / raw)
To: KOSAKI Motohiro, Wu Fengguang
Cc: KAMEZAWA Hiroyuki, Minchan Kim, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org

> > active_anon:398375 inactive_anon:82967 isolated_anon:0
> > active_file:81 inactive_file:429 isolated_file:32
> > unevictable:13 dirty:2 writeback:14 unstable:0
> > free:11942 slab_reclaimable:2391 slab_unreclaimable:3303
> > mapped:5617 shmem:33909 pagetables:2280 bounce:0
>
> active_anon + inactive_anon + isolated_anon = 481342 pages ~= 1.8GB
> Um, this oom doesn't make the page accounting get lost.
>
> > here is the page-types log:
> >              flags  page-count       MB  symbolic-flags                     long-symbolic-flags
> >
> > 0x0000000000005828       83024      324  ___U_l_____Ma_b___________________  uptodate,lru,mmap,anonymous,swapbacked
> > 0x0000000000005868      358737     1401  ___U_lA____Ma_b___________________  uptodate,lru,active,mmap,anonymous,swapbacked
> >              total      515071     2011
>
> page-types shows a similar result.
>
> The big difference is that the previous and current logs show some different processes:
> only the previous one has VirtualBox, only the current one has vmware-usbarbit, etc.
>
> Can you use the same test environment?

Yes, it is the same desktop, and I opened some PDF files and applications at random.

But when my desktop eats up to 1.8GB of RAM (active_anon + inactive_anon +
isolated_anon = 481342 pages >= 1.8GB), the system becomes extraordinarily slow.
When I move the mouse, the pointer can barely move on the screen. I assumed it
had "crashed", but when I ping its IP from another desktop, it answers fine.

So which aspect makes the system seem "crashed": page writeback? page reclaim?
And the oom-killer seems to be very conservative; in that condition, the
oom-killer must kill some process to release memory for new processes.
* Re: oom_killer crash linux system
  2010-10-20  3:43 ` Figo.zhang
@ 2010-10-20  5:05 ` Adam Jiang
  0 siblings, 0 replies; 22+ messages in thread
From: Adam Jiang @ 2010-10-20 5:05 UTC (permalink / raw)
To: Figo.zhang
Cc: KOSAKI Motohiro, Wu Fengguang, KAMEZAWA Hiroyuki, Minchan Kim, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802, linux-mm@kvack.org

On Wed, Oct 20, 2010 at 11:43:45AM +0800, Figo.zhang wrote:
>
> > > active_anon:398375 inactive_anon:82967 isolated_anon:0
> > > active_file:81 inactive_file:429 isolated_file:32
> > > unevictable:13 dirty:2 writeback:14 unstable:0
> > > free:11942 slab_reclaimable:2391 slab_unreclaimable:3303
> > > mapped:5617 shmem:33909 pagetables:2280 bounce:0
> >
> > active_anon + inactive_anon + isolated_anon = 481342 pages ~= 1.8GB
> > Um, this oom doesn't make the page accounting get lost.
> >
> > >              total      515071     2011
> >
> > page-types shows a similar result.
> >
> > The big difference is that the previous and current logs show some different processes:
> > only the previous one has VirtualBox, only the current one has vmware-usbarbit, etc.
> >
> > Can you use the same test environment?
>
> Yes, it is the same desktop, and I opened some PDF files and applications
> at random.
>
> But when my desktop eats up to 1.8GB of RAM (active_anon + inactive_anon +
> isolated_anon = 481342 pages >= 1.8GB), the system becomes extraordinarily
> slow. When I move the mouse, the pointer can barely move on the screen. I
> assumed it had "crashed", but when I ping its IP from another desktop, it
> answers fine.
>
> So which aspect makes the system seem "crashed": page writeback? page
> reclaim? And the oom-killer seems to be very conservative; in that
> condition, the oom-killer must kill some process to release memory for
> new processes.

I think the test simply brought the system *almost* to death but did not really trigger the oom-killer. You have 2GB RAM, right? 0.2GB is a huge amount of memory for the Linux kernel.

If you do want to test the new oom-killer, you can just write a simple program that allocates memory continuously, and run several instances that eat memory at different paces. Then you can find out who will eventually be killed first.

/Adam

> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
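[Editorial aside: the memory eater Adam describes could be sketched as below. This is a hypothetical helper, not code from the thread; the function name and parameters are made up, and step size, pace, and iteration count would be tuned per instance.]

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Allocate `steps` chunks of `step_mb` megabytes, `interval` seconds apart,
 * touching every byte so the pages are actually backed by RAM (an untouched
 * malloc() stays virtual and would not pressure the OOM killer).
 * Returns the number of bytes successfully obtained. */
size_t eat_memory(size_t step_mb, unsigned steps, unsigned interval)
{
    size_t total = 0;
    for (unsigned i = 0; i < steps; i++) {
        char *p = malloc(step_mb << 20);
        if (!p) {
            perror("malloc");        /* with overcommit disabled this can fail */
            break;
        }
        memset(p, 0x5a, step_mb << 20);  /* fault in every page */
        total += step_mb << 20;          /* intentionally never freed */
        printf("pid %d: %zu MB resident\n", getpid(), total >> 20);
        if (interval)
            sleep(interval);
    }
    return total;
}
```

Called from a trivial main() in a few concurrent instances, e.g. `eat_memory(10, 100000, 1)` versus `eat_memory(50, 100000, 1)`, the fastest-growing instance should be the first OOM victim under the usage-based heuristic, which is exactly the behavior to verify.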
* Re: oom_killer crash linux system
  2010-10-19  2:07 ` Figo.zhang
  2010-10-19  2:59 ` KAMEZAWA Hiroyuki
@ 2010-10-19  6:22 ` KAMEZAWA Hiroyuki
  2010-10-20  1:36 ` Figo.zhang
  2010-10-19 18:43 ` David Rientjes
  2 siblings, 1 reply; 22+ messages in thread
From: KAMEZAWA Hiroyuki @ 2010-10-19 6:22 UTC (permalink / raw)
To: Figo.zhang
Cc: KOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

On Tue, 19 Oct 2010 10:07:38 +0800 "Figo.zhang" <zhangtianfei@leadcoretech.com> wrote:
>
> > very lots of change ;)
> > can you please send us your crash log?
>
> i add some printk in select_bad_process() and oom_badness() to see
> pid/totalpages/points/memory usage/and finally the process selected to kill.
>
> i found the oom-killer selects syslog-ng, mysqld, nautilus, VirtualBox
> to kill, so my question is:
>

please show your patch for getting logs. It seems you added printk here:

	if (points > *ppoints) {
		chosen = p;
		*ppoints = points;
	}

So, finally _a_ chosen process is ....

> ===========have choose pid=1304 to kill, points=1
> Oct 19 09:44:08 myhost kernel: [ 618.441168] oom_badness: memoy use =4346, totalpages=506807, points=8
...
> Oct 19 09:44:08 myhost kernel: [ 618.441170] oom_badness: pid = 2065, oom_score_adj=0, points=8
> Oct 19 09:44:08 myhost kernel: [ 618.441171] select_bad_process, pid=2065, points=8
> Oct 19 09:44:08 myhost kernel: [ 618.441172] select_bad_process, ===========have choose pid=2065 to kill, points=8
...
> Oct 19 09:44:08 myhost kernel: [ 618.441189] oom_badness: memoy use =18211, totalpages=506807, points=35
> Oct 19 09:44:08 myhost kernel: [ 618.441190] oom_badness: pid = 2078, oom_score_adj=0, points=35
> Oct 19 09:44:08 myhost kernel: [ 618.441191] select_bad_process, pid=2078, points=35
> Oct 19 09:44:08 myhost kernel: [ 618.441193] select_bad_process, ===========have choose pid=2078 to kill, points=35
...
> Oct 19 09:44:08 myhost kernel: [ 618.441356] oom_badness: memoy use =278247, totalpages=506807, points=549
> Oct 19 09:44:08 myhost kernel: [ 618.441358] oom_badness: pid = 2646, oom_score_adj=0, points=549
> Oct 19 09:44:08 myhost kernel: [ 618.441359] select_bad_process, pid=2646, points=549
> Oct 19 09:44:08 myhost kernel: [ 618.441360] select_bad_process, ===========have choose pid=2646 to kill, points=549
> Oct 19 09:44:08 myhost kernel: [ 618.441470] httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
> Oct 19 09:44:08 myhost kernel: [ 618.441473] httpd cpuset=/ mems_allowed=0

PID 2646 is VirtualBox, right?

I think you have a log message like:
==
pr_err("Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB\n",
	task_pid_nr(p), p->comm,
	K(p->mm->total_vm),
	K(get_mm_counter(p->mm, MM_ANONPAGES)),
	K(get_mm_counter(p->mm, MM_FILEPAGES)));
==
... and the killed one is VirtualBox (and syslog-ng etc. aren't killed).

Thanks,
-Kame

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-19  6:22 ` KAMEZAWA Hiroyuki
@ 2010-10-20  1:36 ` Figo.zhang
  2010-10-20  1:47 ` Wu Fengguang
  0 siblings, 1 reply; 22+ messages in thread
From: Figo.zhang @ 2010-10-20 1:36 UTC (permalink / raw)
To: KAMEZAWA Hiroyuki
Cc: KOSAKI Motohiro, Wu Fengguang, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

> > > echo 5 > /proc/sys/vm/dirty_ratio
> >
> > reduce the dirty_ratio, i can use memory up to 1.75GB, and then it will
> > call oom-killer.
>
> So it helps. Are there intensive IO after reducing dirty_ratio?
>
> > > - enable vmscan trace
> > >
> > > mount -t debugfs none /sys/kernel/debug
> > > echo 1 > /sys/kernel/debug/tracing/events/vmscan/enable
> > > <eat memory and wait for crash>
> > > cat /sys/kernel/debug/tracing/trace > trace.log
> >
> > would you please help look at the trace.log? i added an attached file.
>
> There are many vmscan writes showing up in the trace.

Yes, the disk IO is not aggressive when I reduce the dirty_ratio, but when I
use memory up to 1.75GB (of 2GB total), the system suddenly crashed. When I
reboot and look at /var/log/messages, there is no useful information.

Is there another useful debugging approach to find the issue?

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-20  1:36 ` Figo.zhang
@ 2010-10-20  1:47 ` Wu Fengguang
  0 siblings, 0 replies; 22+ messages in thread
From: Wu Fengguang @ 2010-10-20 1:47 UTC (permalink / raw)
To: Figo.zhang
Cc: KAMEZAWA Hiroyuki, KOSAKI Motohiro, linux-kernel@vger.kernel.org, rientjes@google.com, figo1802

On Wed, Oct 20, 2010 at 09:36:47AM +0800, Figo.zhang wrote:
>
> > > > echo 5 > /proc/sys/vm/dirty_ratio
> > >
> > > reduce the dirty_ratio, i can use memory up to 1.75GB, and then it
> > > will call oom-killer.
> >
> > So it helps. Are there intensive IO after reducing dirty_ratio?
> >
> > > > - enable vmscan trace
> > > >
> > > > mount -t debugfs none /sys/kernel/debug
> > > > echo 1 > /sys/kernel/debug/tracing/events/vmscan/enable
> > > > <eat memory and wait for crash>
> > > > cat /sys/kernel/debug/tracing/trace > trace.log
> > >
> > > would you please help look at the trace.log? i added an attached file.
> >
> > There are many vmscan writes showing up in the trace.
>
> Yes, the disk IO is not aggressive when I reduce the dirty_ratio, but
> when I use memory up to 1.75GB (of 2GB total), the system suddenly crashed.

You seem to use the term "crash" for both "OOM-killed" and "kernel panic".
You mean the .36 kernel will panic on memory pressure while the .35 kernel
will OOM kill the Xorg task?

> When I reboot and look at /var/log/messages, there is no useful
> information.
>
> Is there another useful debugging approach to find the issue?

Documentation/networking/netconsole.txt

eg. netconsole=@:/eth0,6666@10.239.51.110/00:30:48:fe:19:94

You'll need another machine to catch the panic log.

Thanks,
Fengguang

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: oom_killer crash linux system
  2010-10-19  2:07 ` Figo.zhang
  2010-10-19  2:59 ` KAMEZAWA Hiroyuki
  2010-10-19  6:22 ` KAMEZAWA Hiroyuki
@ 2010-10-19 18:43 ` David Rientjes
  2 siblings, 0 replies; 22+ messages in thread
From: David Rientjes @ 2010-10-19 18:43 UTC (permalink / raw)
To: Figo.zhang
Cc: KOSAKI Motohiro, Wu Fengguang, KAMEZAWA Hiroyuki, linux-kernel, figo1802

On Tue, 19 Oct 2010, Figo.zhang wrote:

> > very lots of change ;)
> > can you please send us your crash log?
>
> i add some printk in select_bad_process() and oom_badness() to see
> pid/totalpages/points/memory usage/and finally the process selected to kill.
>

It shouldn't need any printk's to be added; the new heuristic is rather
predictable given the memory usage of the application. You can find what
the badness score is by checking /proc/pid/oom_score.

> i found the oom-killer selects syslog-ng, mysqld, nautilus, VirtualBox
> to kill, so my question is:
>
> 1. syslog-ng, mysqld and nautilus are fundamental system processes, so
> if the oom-killer kills those processes, the system will be damaged and
> may lose some important data.
>

The oom killer always attempts to kill the most memory-hogging task that
is eligible given the context in which the system is out of memory. That
allows the kernel to free a large amount of memory so the oom killer will
not (hopefully) have to be recalled again in the near future. Otherwise,
we end up killing everything other than the memory-hogger, and that may
turn out to be a memory leaker that we don't care about :)

> 2. the new oom-killer just uses the percentage of used memory as the
> score to select the candidate to kill, but how does it know whether a
> process is very important for the system?
>

The user has to tell it by using /proc/pid/oom_score_adj; see
Documentation/filesystems/proc.txt if you'd like to adjust how the oom
killer ranks tasks that you deem to be important and vital to your system.

^ permalink raw reply [flat|nested] 22+ messages in thread
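[Editorial aside: the score David points to can be inspected without any printk patching; a sketch using the standard procfs paths:]

```shell
#!/bin/sh
# List the OOM badness score of every running process, worst candidates first.
# /proc/<pid>/oom_score is the value the kernel's victim selection compares;
# /proc/<pid>/oom_score_adj (-1000..1000) biases it, -1000 disabling the kill.
for d in /proc/[0-9]*; do
    [ -r "$d/oom_score" ] || continue
    printf '%s\t%s\t%s\n' \
        "$(cat "$d/oom_score" 2>/dev/null)" \
        "${d#/proc/}" \
        "$(cat "$d/comm" 2>/dev/null)"
done | sort -rn | head -20

# To protect a vital task (e.g. Xorg) from the OOM killer:
#   echo -1000 > /proc/<xorg-pid>/oom_score_adj
```

Watching this list while running a memory hog makes the "predictable heuristic" visible: the hog's score climbs toward the top as its share of memory grows.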
end of thread, other threads:[~2010-10-20 5:05 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-10-18  1:47 oom_killer crash linux system Figo.zhang
2010-10-18  1:57 ` KAMEZAWA Hiroyuki
2010-10-18  2:11 ` Wu Fengguang
2010-10-18  8:13 ` Figo.zhang
2010-10-18  9:10 ` KOSAKI Motohiro
2010-10-18 15:31 ` Wu Fengguang
2010-10-19  2:07 ` Figo.zhang
2010-10-19  2:59 ` KAMEZAWA Hiroyuki
2010-10-19  5:23 ` Minchan Kim
2010-10-19  5:26 ` KAMEZAWA Hiroyuki
2010-10-19  5:34 ` Minchan Kim
2010-10-20  1:35 ` Wu Fengguang
2010-10-20  2:06 ` Figo.zhang
2010-10-20  2:32 ` KOSAKI Motohiro
2010-10-20  2:58 ` Figo.zhang
2010-10-20  3:24 ` KOSAKI Motohiro
2010-10-20  3:43 ` Figo.zhang
2010-10-20  5:05 ` Adam Jiang
2010-10-19  6:22 ` KAMEZAWA Hiroyuki
2010-10-20  1:36 ` Figo.zhang
2010-10-20  1:47 ` Wu Fengguang
2010-10-19 18:43 ` David Rientjes