public inbox for linux-xfs@vger.kernel.org
* XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
@ 2012-08-02 17:44 Eric Sandeen
  2012-08-17 18:02 ` Christoph Hellwig
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2012-08-02 17:44 UTC (permalink / raw)
  To: xfs-oss; +Cc: Christoph Hellwig

Putting this on the list for posterity; I'll try to work it out, but I don't want to lose the issue.

When running with slab debugging, particularly memory poisoning, I hit an oops on test 137:


[ 6734.901318] general protection fault: 0000 [#1] SMP 
[ 6734.906337] CPU 1 
[ 6734.908183] Modules linked in:[ 6734.911258]  ext4 jbd2 ext2 xfs sunrpc ip6table_filter ip6_tables binfmt_misc vhost_net macvtap macvlan tun iTCO_wdt iTCO_vendor_support dcdbas microcode i2c_i801 lpc_ich mfd_core tg3 shpchp i3000_edac edac_core ext3 jbd mbcache ata_generic pata_acpi pata_sil680 radeon ttm drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]

[ 6734.942722] Pid: 19860, comm: umount Not tainted 3.5.0-rc6+ #1 Dell Computer Corporation PowerEdge 860/0RH817
[ 6734.952696] RIP: 0010:[<ffffffffa0348780>]  [<ffffffffa0348780>] xfs_buf_do_callbacks+0x20/0x50 [xfs]
[ 6734.962011] RSP: 0018:ffff880058e178f8  EFLAGS: 00010202
[ 6734.967334] RAX: 6b6b6b6b6b6b6b6b RBX: ffff880058088e00 RCX: 00000001001d001b
[ 6734.974472] RDX: 00000001001d001c RSI: ffffea0001603a10 RDI: 0000000000000246
[ 6734.981613] RBP: ffff880058e17908 R08: ffffea0001603a18 R09: 0000000000000000
[ 6734.988752] R10: 0000000000000000 R11: 00000000000000e8 R12: ffff880058088e00
[ 6734.995893] R13: ffffffffa02e8086 R14: 0000000000000000 R15: 0000000000000000
[ 6735.003034] FS:  00007f7a0b77d740(0000) GS:ffff88007d000000(0000) knlGS:0000000000000000
[ 6735.011125] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 6735.016879] CR2: 00007f7a0ade3400 CR3: 000000007991a000 CR4: 00000000000007e0
[ 6735.024017] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6735.031158] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 6735.038299] Process umount (pid: 19860, threadinfo ffff880058e16000, task ffff8800588ecce0)
[ 6735.046647] Stack:
[ 6735.048676]  ffffffffa03496e4 ffff880058088e00 ffff880058e17938 ffffffffa0348bce
[ 6735.056188]  ffff880058088e30 ffff880058088e00 ffff880058088e00 ffffffffa03496e4
[ 6735.063682]  ffff880058e17958 ffffffffa02e8086 ffff880058e17988 ffff880058088e00
[ 6735.071185] Call Trace:
[ 6735.080337]  [<ffffffffa0348bce>] xfs_buf_iodone_callbacks+0x3e/0x310 [xfs]
[ 6735.093982]  [<ffffffffa02e8086>] xfs_buf_iodone_work+0x26/0x50 [xfs]
[ 6735.100454]  [<ffffffffa02e811b>] xfs_buf_ioend+0x6b/0x1b0 [xfs]
[ 6735.106505]  [<ffffffffa03496e4>] xfs_buf_item_unpin+0x94/0x2e0 [xfs]
[ 6735.118564]  [<ffffffffa034265d>] xfs_trans_committed_bulk+0x1bd/0x2a0 [xfs]
[ 6735.136898]  [<ffffffffa0347add>] xlog_cil_committed+0x3d/0x100 [xfs]
[ 6735.143380]  [<ffffffffa0347edb>] xlog_cil_push+0x33b/0x410 [xfs]
[ 6735.149515]  [<ffffffffa0348617>] xlog_cil_force_lsn+0x167/0x170 [xfs]
[ 6735.156084]  [<ffffffffa034531d>] _xfs_log_force+0x6d/0x250 [xfs]
[ 6735.162216]  [<ffffffffa034569a>] xfs_log_force+0x2a/0x100 [xfs]
[ 6735.168257]  [<ffffffffa02fb323>] xfs_quiesce_data+0x23/0x70 [xfs]
[ 6735.174470]  [<ffffffffa02f8b80>] xfs_fs_sync_fs+0x30/0x60 [xfs]
[ 6735.180486]  [<ffffffff811f5150>] __sync_filesystem+0x30/0x60
[ 6735.186238]  [<ffffffff811f51cb>] sync_filesystem+0x4b/0x70
[ 6735.191821]  [<ffffffff811c4a6b>] generic_shutdown_super+0x3b/0xf0
[ 6735.198011]  [<ffffffff811c4b51>] kill_block_super+0x31/0x80
[ 6735.203680]  [<ffffffff811c50fd>] deactivate_locked_super+0x3d/0xa0
[ 6735.209964]  [<ffffffff811c60ea>] deactivate_super+0x4a/0x70
[ 6735.215631]  [<ffffffff811e3cd2>] mntput_no_expire+0xd2/0x130
[ 6735.221384]  [<ffffffff811e495e>] sys_umount+0x7e/0x3c0
[ 6735.226619]  [<ffffffff816509a9>] system_call_fastpath+0x16/0x1b
[ 6735.232628] Code: 00 90 8b 34 a0 c9 c3 0f 1f 40 00 55 48 89 e5 53 48 83 ec 08 66 66 66 66 90 48 8b 87 d8 01 00 00 48 89 fb 48 85 c0 74 2b 0f 1f 00 <48> 8b 50 38 48 89 c6 48 89 df 48 89 93 d8 01 00 00 48 c7 40 38 
[ 6735.252741] RIP  [<ffffffffa0348780>] xfs_buf_do_callbacks+0x20/0x50 [xfs]
[ 6735.259681]  RSP <ffff880058e178f8>
[ 6735.278084] ---[ end trace a626b9b4cafd61da ]---

It's dying due to a use-after-free; RAX / bp->b_fspriv / lip is 0x6b6b6b6b6b6b6b6b (POISON_FREE).

STATIC void
xfs_buf_do_callbacks(
        struct xfs_buf          *bp)
{
   712a5:       48 89 fb                mov    %rdi,%rbx
        struct xfs_log_item     *lip;

        while ((lip = bp->b_fspriv) != NULL) {
   712a8:       48 85 c0                test   %rax,%rax
   712ab:       74 2b                   je     712d8 <xfs_buf_do_callbacks+0x48>
   712ad:       0f 1f 00                nopl   (%rax)
                bp->b_fspriv = lip->li_bio_list;
   712b0:       48 8b 50 38             mov    0x38(%rax),%rdx	<--- HERE

The behavior started with:

commit 960c60af8b9481595e68875e79b2602e73169c29
Author: Christoph Hellwig <hch@infradead.org>
Date:   Mon Apr 23 15:58:38 2012 +1000

    xfs: do not add buffers to the delwri queue until pushed
    
    Instead of adding buffers to the delwri list as soon as they are logged,
    even if they can't be written until commited because they are pinned
    defer adding them to the delwri list until xfsaild pushes them.  This
    makes the code more similar to other log items and prepares for writing
    buffers directly from xfsaild.
    
    The complication here is that we need to fail buffers that were added
    but not logged yet in xfs_buf_item_unpin, borrowing code from
    xfs_bioerror.
    
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Mark Tinguely <tinguely@sgi.com>
    Signed-off-by: Ben Myers <bpm@sgi.com>

I'm guessing it's a problem with the handling in xfs_buf_item_unpin(), but I'm not sure yet.

-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
  2012-08-02 17:44 XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137 Eric Sandeen
@ 2012-08-17 18:02 ` Christoph Hellwig
  2012-08-17 18:15   ` Eric Sandeen
  0 siblings, 1 reply; 5+ messages in thread
From: Christoph Hellwig @ 2012-08-17 18:02 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs-oss

I'd bet this is my new code added to xfs_buf_item_unpin, but I don't
quite understand why.  It's been a long time since I wrote that code,
but I had to add it to make sure we clear all buffers during
a forced shutdown.  Can you test whether the problem goes away if you
just remove it (even if that causes other hangs)?



* Re: XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
  2012-08-17 18:02 ` Christoph Hellwig
@ 2012-08-17 18:15   ` Eric Sandeen
  2012-09-03  0:45     ` Raghavendra D Prabhu
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2012-08-17 18:15 UTC (permalink / raw)
  To: xfs

On 8/17/12 1:02 PM, Christoph Hellwig wrote:
> I'd bet this is my new code added to xfs_buf_item_unpin, but I don't
> quite understand why.  It's been a long time since I wrote that code,
> but I had to add it to make sure we clear all buffers during
> a forced shutdown.  Can you test whether the problem goes away if you
> just remove it (even if that causes other hangs)?

It does go away AFAIK, since the bisect found it.

Sadly it's been on the back burner for me, under other deadline pressure.

-Eric



* Re: XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
  2012-08-17 18:15   ` Eric Sandeen
@ 2012-09-03  0:45     ` Raghavendra D Prabhu
  2012-09-03  3:05       ` Raghavendra D Prabhu
  0 siblings, 1 reply; 5+ messages in thread
From: Raghavendra D Prabhu @ 2012-09-03  0:45 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



Hi,


* On Fri, Aug 17, 2012 at 01:15:43PM -0500, Eric Sandeen <sandeen@sandeen.net> wrote:
>On 8/17/12 1:02 PM, Christoph Hellwig wrote:
>>I'd bet this is my new code added to xfs_buf_item_unpin, but I don't
>>quite understand why.  It's been a long time since I wrote that code,
>>but I had to add it to make sure we clear all buffers during
>>a forced shutdown.  Can you test whether the problem goes away if you
>>just remove it (even if that causes other hangs)?
>
>It does go away AFAIK, since the bisect found it.
>
>Sadly it's been on the back burner for me, under other deadline pressure.
>
>-Eric
>

I hit the same bug on xfstest 137 while testing and it is indeed 
POISON_FREE.

Here are the intermediate backtraces:  http://sprunge.us/HZeD 

I am also attaching the full backtrace.


git head:

commit b686d1f79acb65c6a34473c15fcfa2ee54aed8e2
  Author: Jeff Liu <jeff.liu@oracle.com>
  Date:   Tue Aug 21 17:12:18 2012 +0800


Regards,
-- 
Raghavendra Prabhu
GPG Id : 0xD72BE977
Fingerprint: B93F EBCB 8E05 7039 CD3C A4B8 A616 DCA1 D72B E977
www: wnohang.net

[-- Attachment #1.1.2: full.backtrace --]
[-- Type: text/plain, Size: 6870 bytes --]

#0  xfs_buf_iodone_callbacks (bp=0xffff88007a9d2d00) at fs/xfs/xfs_buf_item.c:1057
        lip = <optimized out>
        mp = <optimized out>
        lasttime = 0
        lasttarg = 0x0 <irq_stack_union>
        __func__ = "xfs_buf_iodone_callbacks"
#1  0xffffffff815824ee in xfs_buf_iodone_work (work=work@entry=0xffff88007a9d2e28) at fs/xfs/xfs_buf.c:1006
        bp = 0xffff88007a9d2d00
        __func__ = "xfs_buf_iodone_work"
#2  0xffffffff81582dec in xfs_buf_ioend (bp=bp@entry=0xffff88007a9d2d00, schedule=schedule@entry=0) at fs/xfs/xfs_buf.c:1027
        __func__ = "xfs_buf_ioend"
#3  0xffffffff8166e161 in xfs_buf_item_unpin (lip=0xffff88007a9c0d20, remove=1) at fs/xfs/xfs_buf_item.c:533
        bp = 0xffff88007a9d2d00
        ailp = 0xffff88007aad5240
        stale = 0
        __func__ = "xfs_buf_item_unpin"
#4  0xffffffff8165d104 in xfs_trans_committed_bulk (ailp=0xffff88007aad5240, log_vector=<optimized out>, commit_lsn=0, aborted=aborted@entry=2) at fs/xfs/xfs_trans.c:1305
        lip = 0xffff88007a9c0d20
        item_lsn = 0
        log_items = {0xffff88007f9d37c0, 0xffff880079fbbe80, 0xffff8800724d5ac8, 0xffffffff81dba277 <__schedule+3791>, 0xffff8800724d5b28, 0xffff88007ad99560, 0xffff88007ad99560, 0xffff88007aebef60, 0x2 <irq_stack_union+2>, 0xffff8800790df578, 0xffff8800724d5b98, 
          0xffffffff8189d08e <trace_hardirqs_on_thunk+58>, 0xffff88007f800000, 0x3 <irq_stack_union+3>, 0x1 <irq_stack_union+1>, 0xffff88007aaf2b28, 0xffff8800724d5fd8, 0xffff8800724d4000, 0x2 <irq_stack_union+2>, 0xffff8800724d5ad8, 0xffffffff810cd59a <irq_exit+410>, 
          0xffffffff81dbeb74 <restore_args>, 0xffff8800724d59d0, 0xffffffff81e03e40 <save_stack_ops>, 0xffff880079151cb8, 0x2 <irq_stack_union+2>, 0xffff8800724adfb0, 0x2 <irq_stack_union+2>, 0x0 <irq_stack_union>, 0xffff88007affd520, 0xffff88007aad5240, 
          0xffffffffffffff10}
        lv = 0xffff88007affd520
        cur = <incomplete type>
        i = <optimized out>
        __func__ = "xfs_trans_committed_bulk"
#5  0xffffffff81669107 in xlog_cil_committed (args=args@entry=0xffff88007ad99560, abort=abort@entry=2) at fs/xfs/xfs_log_cil.c:337
        ctx = 0xffff88007ad99560
        mp = 0xffff88007aebef60
        __func__ = "xlog_cil_committed"
#6  0xffffffff8166a301 in xlog_cil_push (log=log@entry=0xffff8800724adfb0) at fs/xfs/xfs_log_cil.c:582
        cil = 0xffff8800790df480
        lv = <optimized out>
        ctx = 0xffff88007ad99560
        new_ctx = <optimized out>
        commit_iclog = 0xffffffff81170cb4 <__lock_release+100>
        tic = 0xffff880079151c00
        num_iovecs = <optimized out>
        error = <optimized out>
        thdr = {
          th_magic = 1414676814, 
          th_type = 42, 
          th_tid = -576842867, 
          th_num_items = 3777
        }
        lhdr = {
          i_addr = 0xffff8800724d5be8, 
          i_len = 16, 
          i_type = 19
        }
        lvhdr = {
          lv_next = 0xffff88007affd520, 
          lv_niovecs = 1, 
          lv_iovecp = 0xffff8800724d5bd8, 
          lv_item = 0x0 <irq_stack_union>, 
          lv_buf = 0x0 <irq_stack_union>, 
          lv_buf_len = 0
        }
        commit_lsn = <optimized out>
        push_seq = 1
        __func__ = "xlog_cil_push"
#7  0xffffffff8166a4a9 in xlog_cil_push_foreground (log=log@entry=0xffff8800724adfb0, push_seq=push_seq@entry=1) at fs/xfs/xfs_log_cil.c:659
        cil = 0xffff8800790df480
        __func__ = "xlog_cil_push_foreground"
#8  0xffffffff8166a8f1 in xlog_cil_force_lsn (log=log@entry=0xffff8800724adfb0, sequence=1) at fs/xfs/xfs_log_cil.c:771
        cil = 0xffff8800790df480
        ctx = <optimized out>
        commit_lsn = -1
        __func__ = "xlog_cil_force_lsn"
#9  0xffffffff81665d5c in xlog_cil_force (log=0xffff8800724adfb0) at fs/xfs/xfs_log_priv.h:668
No locals.
#10 _xfs_log_force (mp=mp@entry=0xffff88007aebef60, flags=flags@entry=1, log_flushed=log_flushed@entry=0x0 <irq_stack_union>) at fs/xfs/xfs_log.c:2889
        log = 0xffff8800724adfb0
        iclog = <optimized out>
        lsn = <optimized out>
        __func__ = "_xfs_log_force"
#11 0xffffffff81666479 in xfs_log_force (mp=mp@entry=0xffff88007aebef60, flags=flags@entry=1) at fs/xfs/xfs_log.c:3004
        error = <optimized out>
        __func__ = "xfs_log_force"
#12 0xffffffff815ad9ec in xfs_quiesce_data (mp=mp@entry=0xffff88007aebef60) at fs/xfs/xfs_sync.c:310
        error = <optimized out>
        error2 = 0
        __func__ = "xfs_quiesce_data"
#13 0xffffffff815a78a0 in xfs_fs_sync_fs (sb=<optimized out>, wait=<optimized out>) at fs/xfs/xfs_super.c:946
        mp = 0xffff88007aebef60
        error = <optimized out>
        __func__ = "xfs_fs_sync_fs"
#14 0xffffffff813b0389 in __sync_filesystem (sb=sb@entry=0xffff88007a1ad668, wait=wait@entry=1) at fs/sync.c:38
        __func__ = "__sync_filesystem"
#15 0xffffffff813b0477 in sync_filesystem (sb=sb@entry=0xffff88007a1ad668) at fs/sync.c:66
        ret = <optimized out>
        __func__ = "sync_filesystem"
#16 0xffffffff8134da7f in generic_shutdown_super (sb=0xffff88007a1ad668) at fs/super.c:439
        sop = 0xffffffff81e86360 <xfs_super_operations>
        __func__ = "generic_shutdown_super"
#17 0xffffffff8134dc62 in kill_block_super (sb=<optimized out>) at fs/super.c:1104
        bdev = 0xffff88007d0115c0
        mode = 131
        __func__ = "kill_block_super"
#18 0xffffffff8134ed14 in deactivate_locked_super (s=s@entry=0xffff88007a1ad668) at fs/super.c:306
        fs = 0xffffffff828057c0 <xfs_fs_type>
        __func__ = "deactivate_locked_super"
#19 0xffffffff813501c7 in deactivate_super (s=s@entry=0xffff88007a1ad668) at fs/super.c:337
        __func__ = "deactivate_super"
#20 0xffffffff813902f4 in mntfree (mnt=0xffff88007ababd40) at fs/namespace.c:855
        m = 0xffff88007ababd60
        sb = 0xffff88007a1ad668
#21 mntput_no_expire (mnt=mnt@entry=0xffff88007ababd40) at fs/namespace.c:893
        __func__ = "mntput_no_expire"
#22 0xffffffff81392ce5 in sys_umount (name=<optimized out>, flags=0) at fs/namespace.c:1276
        path = <incomplete type>
        mnt = 0xffff88007ababd40
        retval = 0
        lookup_flags = <optimized out>
        __func__ = "sys_umount"
#23 <signal handler called>
No locals.
#24 0x00007f66053c34f7 in ?? ()
No symbol table info available.
#25 0x000000050003123b in ?? ()
No symbol table info available.
#26 0x0000000000100000 in cpu_lock_stats ()
No symbol table info available.
Continuing.

Program received signal SIGINT, Interrupt.
0xffffffff810607bb in native_safe_halt () at /media/Vone/kernel/xfs-next/arch/x86/include/asm/irqflags.h:49
49		asm volatile("sti; hlt": : :"memory");
A debugging session is active.

	Inferior 1 [Remote target] will be killed.

Quit anyway? (y or n) 



* Re: XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
  2012-09-03  0:45     ` Raghavendra D Prabhu
@ 2012-09-03  3:05       ` Raghavendra D Prabhu
  0 siblings, 0 replies; 5+ messages in thread
From: Raghavendra D Prabhu @ 2012-09-03  3:05 UTC (permalink / raw)
  To: Eric Sandeen, xfs



Hi,


* On Mon, Sep 03, 2012 at 06:15:21AM +0530, Raghavendra D Prabhu <raghu.prabhu13@gmail.com> wrote:
>Hi,
>
>
>* On Fri, Aug 17, 2012 at 01:15:43PM -0500, Eric Sandeen <sandeen@sandeen.net> wrote:
>>On 8/17/12 1:02 PM, Christoph Hellwig wrote:
>>>I'd bet this is my new code added to xfs_buf_item_unpin, but I don't
>>>quite understand why.  It's been a long time since I wrote that code,
>>>but I had to add it to make sure we clear all buffers during
>>>a forced shutdown.  Can you test whether the problem goes away if you
>>>just remove it (even if that causes other hangs)?
>>
>>It does go away AFAIK, since the bisect found it.
>>
>>Sadly it's been on the back burner for me, under other deadline pressure.
>>
>>-Eric
>>
>
>I hit the same bug on xfstest 137 while testing and it is indeed 
>POISON_FREE.
>
>Here are the intermediate backtraces:  http://sprunge.us/HZeD
>
>I am also attaching the full backtrace.
>
>
>git head:
>
>commit b686d1f79acb65c6a34473c15fcfa2ee54aed8e2
> Author: Jeff Liu <jeff.liu@oracle.com>
> Date:   Tue Aug 21 17:12:18 2012 +0800
>

With DEBUG_PAGEALLOC enabled, I got the following:

[  182.925026]  [<ffffffff815813ce>] ? xfs_buf_iodone_work+0x43/0xb7
[  182.925026]  [<ffffffff8166c7b5>] xfs_buf_iodone_callbacks+0x4d2/0x5aa
[  182.925026]  [<ffffffff8166d041>] ? xfs_buf_item_unpin+0x7b4/0x812
[  182.925026]  [<ffffffff815813ce>] xfs_buf_iodone_work+0x43/0xb7
[  182.925026]  [<ffffffff81581ccc>] xfs_buf_ioend+0x29a/0x2fc
[  182.925026]  [<ffffffff8166d041>] xfs_buf_item_unpin+0x7b4/0x812
[  182.925026]  [<ffffffff8165bfe4>] xfs_trans_committed_bulk+0x223/0x6d1
[  182.925026]  [<ffffffff81317583>] ? __slab_free+0xa46/0xc2f
[  182.925026]  [<ffffffff81665edc>] ? xlog_write+0x18b/0x95c
[  182.925026]  [<ffffffff8116f30b>] ? debug_check_no_locks_freed+0x121/0x17b
[  182.925026]  [<ffffffff81318ab0>] ? kmem_cache_free+0x338/0x491
[  182.925026]  [<ffffffff81661dcf>] ? xfs_log_ticket_put+0xaf/0xbc
[  182.925026]  [<ffffffff81667fe7>] xlog_cil_committed+0x3b/0x1fa
[  182.925026]  [<ffffffff816691e1>] xlog_cil_push+0x6ca/0x6f6
[  182.925026]  [<ffffffff81170c84>] ? __lock_release+0x64/0xb6
[  182.925026]  [<ffffffff81669389>] xlog_cil_push_foreground+0x17c/0x1fa
[  182.925026]  [<ffffffff816697d1>] xlog_cil_force_lsn+0x90/0x27e
[  182.925026]  [<ffffffff813a4a42>] ? sync_inodes_sb+0x23e/0x26c
[  182.925026]  [<ffffffff81664c3c>] _xfs_log_force+0x67/0x620
[  182.925026]  [<ffffffff81db7f97>] ? wait_for_common+0x231/0x3ac
[  182.925026]  [<ffffffff81665359>] xfs_log_force+0x164/0x1c2
[  182.925026]  [<ffffffff815ac8cc>] xfs_quiesce_data+0x21/0x9f
[  182.925026]  [<ffffffff815a6780>] xfs_fs_sync_fs+0x5a/0xe0
[  182.925026]  [<ffffffff813af269>] __sync_filesystem+0x9e/0xc2
[  182.925026]  [<ffffffff813af357>] sync_filesystem+0xca/0x12d
[  182.925026]  [<ffffffff8134c95f>] generic_shutdown_super+0x61/0x203
[  182.925026]  [<ffffffff8134cb42>] kill_block_super+0x41/0x1a6
[  182.925026]  [<ffffffff8134dbf4>] deactivate_locked_super+0x9b/0x104
[  182.925026]  [<ffffffff8134f0a7>] deactivate_super+0x147/0x187
[  182.925026]  [<ffffffff8138f1d4>] mntput_no_expire+0x308/0x32a
[  182.925026]  [<ffffffff81391bc5>] sys_umount+0x1a6/0x1e4
[  182.925026]  [<ffffffff81dcb3e9>] system_call_fastpath+0x16/0x1b

Full here -- http://sprunge.us/CPKW 

One more thing, in xfs_buf_do_callbacks,


	while ((lip = bp->b_fspriv) != NULL) {
		bp->b_fspriv = lip->li_bio_list;
		ASSERT(lip->li_cb != NULL);

     In the loop before the crash, lip->li_bio_list is NULL, which
     explains the use-after-free.







Regards,
-- 
Raghavendra Prabhu
GPG Id : 0xD72BE977
Fingerprint: B93F EBCB 8E05 7039 CD3C A4B8 A616 DCA1 D72B E977
www: wnohang.net



