public inbox for linux-xfs@vger.kernel.org
From: Raghavendra D Prabhu <raghu.prabhu13@gmail.com>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: xfs@oss.sgi.com
Subject: Re: XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137
Date: Mon, 3 Sep 2012 06:15:21 +0530
Message-ID: <20120903004521.GA61118@Archie>
In-Reply-To: <502E8A4F.9050105@sandeen.net>


[-- Attachment #1.1.1: Type: text/plain, Size: 1228 bytes --]

Hi,


* On Fri, Aug 17, 2012 at 01:15:43PM -0500, Eric Sandeen <sandeen@sandeen.net> wrote:
>On 8/17/12 1:02 PM, Christoph Hellwig wrote:
>>I'd bet this is my new code added to xfs_buf_item_unpin, but I don't
>>quite understand why.  It's been a long time since I wrote that code,
>>but I had to add it to make sure we clear all buffers during
>>a forced shutdown.  Can you test whether the problem goes away if you
>>just remove it (even if that causes other hangs)?
>
>It does go away AFAIK, since the bisect found it.
>
>Sadly it's been on the back burner for me, under other deadline pressure.
>
>-Eric
>
>_______________________________________________
>xfs mailing list
>xfs@oss.sgi.com
>http://oss.sgi.com/mailman/listinfo/xfs

I hit the same bug on xfstest 137 while testing, and the faulting 
address is indeed the POISON_FREE pattern.

Here are the intermediate backtraces:  http://sprunge.us/HZeD 

I am also attaching the full backtrace.


git head:

commit b686d1f79acb65c6a34473c15fcfa2ee54aed8e2
  Author: Jeff Liu <jeff.liu@oracle.com>
  Date:   Tue Aug 21 17:12:18 2012 +0800


Regards,
-- 
Raghavendra Prabhu
GPG Id : 0xD72BE977
Fingerprint: B93F EBCB 8E05 7039 CD3C A4B8 A616 DCA1 D72B E977
www: wnohang.net

[-- Attachment #1.1.2: full.backtrace --]
[-- Type: text/plain, Size: 6870 bytes --]

#0  xfs_buf_iodone_callbacks (bp=0xffff88007a9d2d00) at fs/xfs/xfs_buf_item.c:1057
        lip = <optimized out>
        mp = <optimized out>
        lasttime = 0
        lasttarg = 0x0 <irq_stack_union>
        __func__ = "xfs_buf_iodone_callbacks"
#1  0xffffffff815824ee in xfs_buf_iodone_work (work=work@entry=0xffff88007a9d2e28) at fs/xfs/xfs_buf.c:1006
        bp = 0xffff88007a9d2d00
        __func__ = "xfs_buf_iodone_work"
#2  0xffffffff81582dec in xfs_buf_ioend (bp=bp@entry=0xffff88007a9d2d00, schedule=schedule@entry=0) at fs/xfs/xfs_buf.c:1027
        __func__ = "xfs_buf_ioend"
#3  0xffffffff8166e161 in xfs_buf_item_unpin (lip=0xffff88007a9c0d20, remove=1) at fs/xfs/xfs_buf_item.c:533
        bp = 0xffff88007a9d2d00
        ailp = 0xffff88007aad5240
        stale = 0
        __func__ = "xfs_buf_item_unpin"
#4  0xffffffff8165d104 in xfs_trans_committed_bulk (ailp=0xffff88007aad5240, log_vector=<optimized out>, commit_lsn=0, aborted=aborted@entry=2) at fs/xfs/xfs_trans.c:1305
        lip = 0xffff88007a9c0d20
        item_lsn = 0
        log_items = {0xffff88007f9d37c0, 0xffff880079fbbe80, 0xffff8800724d5ac8, 0xffffffff81dba277 <__schedule+3791>, 0xffff8800724d5b28, 0xffff88007ad99560, 0xffff88007ad99560, 0xffff88007aebef60, 0x2 <irq_stack_union+2>, 0xffff8800790df578, 0xffff8800724d5b98, 
          0xffffffff8189d08e <trace_hardirqs_on_thunk+58>, 0xffff88007f800000, 0x3 <irq_stack_union+3>, 0x1 <irq_stack_union+1>, 0xffff88007aaf2b28, 0xffff8800724d5fd8, 0xffff8800724d4000, 0x2 <irq_stack_union+2>, 0xffff8800724d5ad8, 0xffffffff810cd59a <irq_exit+410>, 
          0xffffffff81dbeb74 <restore_args>, 0xffff8800724d59d0, 0xffffffff81e03e40 <save_stack_ops>, 0xffff880079151cb8, 0x2 <irq_stack_union+2>, 0xffff8800724adfb0, 0x2 <irq_stack_union+2>, 0x0 <irq_stack_union>, 0xffff88007affd520, 0xffff88007aad5240, 
          0xffffffffffffff10}
        lv = 0xffff88007affd520
        cur = <incomplete type>
        i = <optimized out>
        __func__ = "xfs_trans_committed_bulk"
#5  0xffffffff81669107 in xlog_cil_committed (args=args@entry=0xffff88007ad99560, abort=abort@entry=2) at fs/xfs/xfs_log_cil.c:337
        ctx = 0xffff88007ad99560
        mp = 0xffff88007aebef60
        __func__ = "xlog_cil_committed"
#6  0xffffffff8166a301 in xlog_cil_push (log=log@entry=0xffff8800724adfb0) at fs/xfs/xfs_log_cil.c:582
        cil = 0xffff8800790df480
        lv = <optimized out>
        ctx = 0xffff88007ad99560
        new_ctx = <optimized out>
        commit_iclog = 0xffffffff81170cb4 <__lock_release+100>
        tic = 0xffff880079151c00
        num_iovecs = <optimized out>
        error = <optimized out>
        thdr = {
          th_magic = 1414676814, 
          th_type = 42, 
          th_tid = -576842867, 
          th_num_items = 3777
        }
        lhdr = {
          i_addr = 0xffff8800724d5be8, 
          i_len = 16, 
          i_type = 19
        }
        lvhdr = {
          lv_next = 0xffff88007affd520, 
          lv_niovecs = 1, 
          lv_iovecp = 0xffff8800724d5bd8, 
          lv_item = 0x0 <irq_stack_union>, 
          lv_buf = 0x0 <irq_stack_union>, 
          lv_buf_len = 0
        }
        commit_lsn = <optimized out>
        push_seq = 1
        __func__ = "xlog_cil_push"
#7  0xffffffff8166a4a9 in xlog_cil_push_foreground (log=log@entry=0xffff8800724adfb0, push_seq=push_seq@entry=1) at fs/xfs/xfs_log_cil.c:659
        cil = 0xffff8800790df480
        __func__ = "xlog_cil_push_foreground"
#8  0xffffffff8166a8f1 in xlog_cil_force_lsn (log=log@entry=0xffff8800724adfb0, sequence=1) at fs/xfs/xfs_log_cil.c:771
        cil = 0xffff8800790df480
        ctx = <optimized out>
        commit_lsn = -1
        __func__ = "xlog_cil_force_lsn"
#9  0xffffffff81665d5c in xlog_cil_force (log=0xffff8800724adfb0) at fs/xfs/xfs_log_priv.h:668
No locals.
#10 _xfs_log_force (mp=mp@entry=0xffff88007aebef60, flags=flags@entry=1, log_flushed=log_flushed@entry=0x0 <irq_stack_union>) at fs/xfs/xfs_log.c:2889
        log = 0xffff8800724adfb0
        iclog = <optimized out>
        lsn = <optimized out>
        __func__ = "_xfs_log_force"
#11 0xffffffff81666479 in xfs_log_force (mp=mp@entry=0xffff88007aebef60, flags=flags@entry=1) at fs/xfs/xfs_log.c:3004
        error = <optimized out>
        __func__ = "xfs_log_force"
#12 0xffffffff815ad9ec in xfs_quiesce_data (mp=mp@entry=0xffff88007aebef60) at fs/xfs/xfs_sync.c:310
        error = <optimized out>
        error2 = 0
        __func__ = "xfs_quiesce_data"
#13 0xffffffff815a78a0 in xfs_fs_sync_fs (sb=<optimized out>, wait=<optimized out>) at fs/xfs/xfs_super.c:946
        mp = 0xffff88007aebef60
        error = <optimized out>
        __func__ = "xfs_fs_sync_fs"
#14 0xffffffff813b0389 in __sync_filesystem (sb=sb@entry=0xffff88007a1ad668, wait=wait@entry=1) at fs/sync.c:38
        __func__ = "__sync_filesystem"
#15 0xffffffff813b0477 in sync_filesystem (sb=sb@entry=0xffff88007a1ad668) at fs/sync.c:66
        ret = <optimized out>
        __func__ = "sync_filesystem"
#16 0xffffffff8134da7f in generic_shutdown_super (sb=0xffff88007a1ad668) at fs/super.c:439
        sop = 0xffffffff81e86360 <xfs_super_operations>
        __func__ = "generic_shutdown_super"
#17 0xffffffff8134dc62 in kill_block_super (sb=<optimized out>) at fs/super.c:1104
        bdev = 0xffff88007d0115c0
        mode = 131
        __func__ = "kill_block_super"
#18 0xffffffff8134ed14 in deactivate_locked_super (s=s@entry=0xffff88007a1ad668) at fs/super.c:306
        fs = 0xffffffff828057c0 <xfs_fs_type>
        __func__ = "deactivate_locked_super"
#19 0xffffffff813501c7 in deactivate_super (s=s@entry=0xffff88007a1ad668) at fs/super.c:337
        __func__ = "deactivate_super"
#20 0xffffffff813902f4 in mntfree (mnt=0xffff88007ababd40) at fs/namespace.c:855
        m = 0xffff88007ababd60
        sb = 0xffff88007a1ad668
#21 mntput_no_expire (mnt=mnt@entry=0xffff88007ababd40) at fs/namespace.c:893
        __func__ = "mntput_no_expire"
#22 0xffffffff81392ce5 in sys_umount (name=<optimized out>, flags=0) at fs/namespace.c:1276
        path = <incomplete type>
        mnt = 0xffff88007ababd40
        retval = 0
        lookup_flags = <optimized out>
        __func__ = "sys_umount"
#23 <signal handler called>
No locals.
#24 0x00007f66053c34f7 in ?? ()
No symbol table info available.
#25 0x000000050003123b in ?? ()
No symbol table info available.
#26 0x0000000000100000 in cpu_lock_stats ()
No symbol table info available.
Continuing.

Program received signal SIGINT, Interrupt.
0xffffffff810607bb in native_safe_halt () at /media/Vone/kernel/xfs-next/arch/x86/include/asm/irqflags.h:49
49		asm volatile("sti; hlt": : :"memory");
A debugging session is active.

	Inferior 1 [Remote target] will be killed.

Quit anyway? (y or n) 

[-- Attachment #1.2: Type: application/pgp-signature, Size: 490 bytes --]



Thread overview: 5+ messages
2012-08-02 17:44 XFS regression: Oops in xfs_buf_do_callbacks on xfstest 137 Eric Sandeen
2012-08-17 18:02 ` Christoph Hellwig
2012-08-17 18:15   ` Eric Sandeen
2012-09-03  0:45     ` Raghavendra D Prabhu [this message]
2012-09-03  3:05       ` Raghavendra D Prabhu
