From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <30416394.post@talk.nabble.com>
Date: Thu, 9 Dec 2010 05:17:03 -0800 (PST)
From: blacknred
Subject: Re: kernel panic-xfs errors
In-Reply-To: <4D005E99.2030400@sandeen.net>
References: <30397503.post@talk.nabble.com> <20101207222558.GC29333@dastard> <30403823.post@talk.nabble.com> <20101209005944.GD32766@dastard> <4D005E99.2030400@sandeen.net>
List-Id: XFS Filesystem from SGI
To: xfs@oss.sgi.com

> which is NOT a rhel 5.0 kernel, and it says x86_64.
> But the addresses are all 32 bits?
My apologies there, somehow it all got jumbled up, pasting it again:

BUG: unable to handle kernel NULL pointer dereference at virtual address 00000098
 printing eip:
*pde = 2c621001
Oops: 0000 [#1]
SMP
CPU:    2
EIP:    0060:[]    Tainted: GF     VLI
EFLAGS: 00010282   (2.6.18-164.11.1.el5PAE #1)
EIP is at do_page_fault+0x205/0x607
eax: ec6de000   ebx: 00000000   ecx: ec6de074   edx: 0000000d
esi: 00014005   edi: ec6de0a4   ebp: 00000014   esp: ec6de054
ds: 007b   es: 007b   ss: 0068
Process bm (pid: 2910, ti=ec6dd000 task=ec6e3550 task.ti=ec6dd000)
Stack: 00000000 00000000 ec6de0a4 00000014 00000098 f7180000 00000001 00000000
       ec6de0a4 c0639439 00000000 0000000e 0000000b 00000000 00000000 00000000
       00014005 c0619b9c 00000014 c0405a89 00000000 ec6de0f8 0000000d 00014005
Call Trace:
 [] do_page_fault+0x0/0x607
 [] error_code+0x39/0x40
 [] do_page_fault+0x205/0x607
 [] elv_next_request+0x127/0x134
 [] do_cciss_request+0x398/0x3a3 [cciss]
 [] do_page_fault+0x0/0x607
 [] error_code+0x39/0x40
 [] do_page_fault+0x205/0x607
 [] deadline_set_request+0x16/0x57
 [] do_page_fault+0x0/0x607
 [] error_code+0x39/0x40
 [] do_page_fault+0x205/0x607
 [] do_page_fault+0x0/0x607
 [] error_code+0x39/0x40
 [] do_page_fault+0x205/0x607
 [] do_page_fault+0x0/0x607
 [] error_code+0x39/0x40
 [] __down+0x2b/0xbb
 [] default_wake_function+0x0/0xc
 [] __down_failed+0x7/0xc
 [] .text.lock.xfs_buf+0x17/0x5f [xfs]
 [] xfs_buf_read_flags+0x48/0x76 [xfs]
 [] xfs_trans_read_buf+0x1bb/0x2c0 [xfs]
 [] xfs_btree_read_bufl+0x96/0xb3 [xfs]
 [] xfs_bmbt_lookup+0x135/0x478 [xfs]
 [] xfs_bmap_add_extent+0xd2b/0x1e30 [xfs]
 [] xfs_alloc_update+0x3a/0xbc [xfs]
 [] xfs_alloc_fixup_trees+0x217/0x29a [xfs]
 [] xfs_trans_log_buf+0x49/0x6c [xfs]
 [] xfs_alloc_search_busy+0x20/0xae [xfs]
 [] xfs_iext_bno_to_ext+0xd8/0x191 [xfs]
 [] kmem_zone_zalloc+0x1d/0x41 [xfs]
 [] xfs_bmapi+0x15fe/0x2016 [xfs]
 [] xfs_iext_bno_to_ext+0x48/0x191 [xfs]
 [] xfs_bmap_search_multi_extents+0x8a/0xc5 [xfs]
 [] xfs_iomap_write_allocate+0x29c/0x469 [xfs]
 [] lock_timer_base+0x15/0x2f
 [] del_timer+0x41/0x47
 [] xfs_iomap+0x409/0x71d [xfs]
 [] xfs_map_blocks+0x29/0x52 [xfs]
 [] xfs_page_state_convert+0x37b/0xd2e [xfs]
 [] xfs_bmap_add_extent+0x1dcf/0x1e30 [xfs]
 [] xfs_bmap_search_multi_extents+0x8a/0xc5 [xfs]
 [] xfs_bmapi+0x272/0x2016 [xfs]
 [] xfs_bmapi+0x1853/0x2016 [xfs]
 [] find_get_pages_tag+0x30/0x75
 [] xfs_vm_writepage+0x8f/0xc2 [xfs]
 [] mpage_writepages+0x1a7/0x310
 [] xfs_vm_writepage+0x0/0xc2 [xfs]
 [] do_writepages+0x20/0x32
 [] __writeback_single_inode+0x170/0x2af
 [] write_inode_now+0x66/0xa7
 [] file_fsync+0xf/0x6c
 [] moddw_ioctl+0x420/0x669 [mod_dw]
 [] __cond_resched+0x16/0x34
 [] do_ioctl+0x47/0x5d
 [] vfs_ioctl+0x47b/0x4d3
 [] sys_ioctl+0x48/0x5f
 [] sysenter_past_esp+0x56/0x79

Thanks, sorry for the confusion....

Eric Sandeen-3 wrote:
>
> On 12/8/10 6:59 PM, Dave Chinner wrote:
>> On Wed, Dec 08, 2010 at 01:39:10AM -0800, blacknred wrote:
>>>
>>>> You've done a forced module load. No guarantee your kernel is in any
>>>> sane shape if you've done that....
>>>
>>> Agree, but I'm reasonably convinced that module isn't the issue,
>>> because it works fine with my other servers......
>>>
>>>> Strange failure. Hmmm - i386 arch and fedora - are you running with
>>>> 4k stacks? If so, maybe it blew the stack...
>>>
>>> i386 arch, rhel 5.0
>>
>> Yup, 4k stacks. This is definitely smelling like a stack blowout.
>
> well, hang on.  The oops said:
>
> EIP:    0060:[]    Tainted: GF     VLI
> EFLAGS: 00010272   (2.6.33.3-85.fc13.x86_64 #1)
> EIP is at do_page_fault+0x245/0x617
> eax: ec5ee000   ebx: 00000000   ecx: eb5de084   edx: 0000000e
> esi: 00013103   edi: ec5de0b3   ebp: 00000023   esp: ec5de024
> ds: 008b   es: 008b   ss: 0078
>
> which is NOT a rhel 5.0 kernel, and it says x86_64.
>
> But the addresses are all 32 bits?
>
> So what's going on here?
>
>> esi: 00013103   edi: ec5de0b3   ebp: 00000023   esp: ec5de024
>> ds: 008b   es: 008b   ss: 0078
>> Process bm (pid: 3210, ti=ec622000 task=ec5e3450 task.ti=ec6ee000)
>
> end of the stack is ec6ee000, stack grows up, esp is at ec5de024,
> well past it (i.e. yes, overrun) if I remember my stack math
> right... but that's a pretty huge difference so either I have it
> wrong, or things are really a huge mess here.
>
> -Eric

-- 
View this message in context: http://old.nabble.com/kernel-panic-xfs-errors-tp30397503p30416394.html
Sent from the Xfs - General mailing list archive at Nabble.com.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs