public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <axboe@kernel.dk>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [OOPS, 3.13-rc2] null ptr in dio_complete()
Date: Wed, 4 Dec 2013 20:41:43 -0700
Message-ID: <20131205034143.GU5051@kernel.dk>
In-Reply-To: <20131204235656.GG8803@dastard>

On Thu, Dec 05 2013, Dave Chinner wrote:
> On Wed, Dec 04, 2013 at 03:17:49PM +1100, Dave Chinner wrote:
> > On Tue, Dec 03, 2013 at 08:47:12PM -0700, Jens Axboe wrote:
> > > On Wed, Dec 04 2013, Dave Chinner wrote:
> > > > On Wed, Dec 04, 2013 at 12:58:38PM +1100, Dave Chinner wrote:
> > > > > On Wed, Dec 04, 2013 at 08:59:40AM +1100, Dave Chinner wrote:
> > > > > > Hi Jens,
> > > > > > 
> > > > > > Not sure who to direct this to or CC, so I figured you are the
> > > > > > person to do that. I just had xfstests generic/299 (an AIO/DIO test)
> > > > > > oops in dio_complete() like so:
> > > > > > 
> > ....
> > > > > > [ 9650.590630]  <IRQ>
> > > > > > [ 9650.590630]  [<ffffffff811ddae3>] dio_complete+0xa3/0x140
> > > > > > [ 9650.590630]  [<ffffffff811ddc2a>] dio_bio_end_aio+0x7a/0x110
> > > > > > [ 9650.590630]  [<ffffffff811ddbb5>] ? dio_bio_end_aio+0x5/0x110
> > > > > > [ 9650.590630]  [<ffffffff811d8a9d>] bio_endio+0x1d/0x30
> > > > > > [ 9650.590630]  [<ffffffff8175d65f>] blk_mq_complete_request+0x5f/0x120
> > > > > > [ 9650.590630]  [<ffffffff8175d736>] __blk_mq_end_io+0x16/0x20
> > > > > > [ 9650.590630]  [<ffffffff8175d7a8>] blk_mq_end_io+0x68/0xd0
> > > > > > [ 9650.590630]  [<ffffffff818539a7>] virtblk_done+0x67/0x110
> > > > > > [ 9650.590630]  [<ffffffff817f74c5>] vring_interrupt+0x35/0x60
> > .....
> > > > > And I just hit this from running xfs_repair, which is doing
> > > > > multithreaded direct IO on /dev/vdc:
> > > > > 
> > ....
> > > > > [ 1776.510446] IP: [<ffffffff81755b6a>] blk_account_io_done+0x6a/0x180
> > ....
> > > > > [ 1776.512577]  [<ffffffff8175e4b8>] blk_mq_complete_request+0xb8/0x120
> > > > > [ 1776.512577]  [<ffffffff8175e536>] __blk_mq_end_io+0x16/0x20
> > > > > [ 1776.512577]  [<ffffffff8175e5a8>] blk_mq_end_io+0x68/0xd0
> > > > > [ 1776.512577]  [<ffffffff81852e47>] virtblk_done+0x67/0x110
> > > > > [ 1776.512577]  [<ffffffff817f7925>] vring_interrupt+0x35/0x60
> > > > > [ 1776.512577]  [<ffffffff810e48a4>] handle_irq_event_percpu+0x54/0x1e0
> > .....
> > > > > So this is looking like another virtio+blk_mq problem....
> > > > 
> > > > This one is definitely reproducible. Just hit it again...
> > > 
> > > I'll take a look at this. You don't happen to have gdb dumps of the
> > > lines associated with those crashes? Just to save me some digging
> > > time...
> > 
> > Only this:
> > 
> > (gdb) l *(dio_complete+0xa3)
> > 0xffffffff811ddae3 is in dio_complete (fs/direct-io.c:282).
> > 277                     }
> > 278
> > 279                     aio_complete(dio->iocb, ret, 0);
> > 280             }
> > 281
> > 282             kmem_cache_free(dio_cache, dio);
> > 283             return ret;
> > 284     }
> > 285
> > 286     static void dio_aio_complete_work(struct work_struct *work)
> > 
> > And this:
> > 
> > (gdb) l *(blk_account_io_done+0x6a)
> > 0xffffffff81755b6a is in blk_account_io_done (block/blk-core.c:2049).
> > 2044                    int cpu;
> > 2045
> > 2046                    cpu = part_stat_lock();
> > 2047                    part = req->part;
> > 2048
> > 2049                    part_stat_inc(cpu, part, ios[rw]);
> > 2050                    part_stat_add(cpu, part, ticks[rw], duration);
> > 2051                    part_round_stats(cpu, part);
> > 2052                    part_dec_in_flight(part, rw);
> > 2053
> > 
> > as I've rebuilt the kernel with different patches since the one
> > running on the machine that is triggering the problem.
> 
> Any update on this, Jens? I've hit this blk_account_io_done() panic
> 10 times in the past 2 hours while trying to do xfs_repair
> testing....

No, sorry, no updates yet... I haven't had time to look into it today.
To help me reproduce it tomorrow, can you mail me your exact setup (kvm
invocation, etc.), how your guest is set up, and whether there's any
special way I need to run xfstests or xfs_repair?

-- 
Jens Axboe



Thread overview: 13+ messages
2013-12-03 21:59 [OOPS, 3.13-rc2] null ptr in dio_complete() Dave Chinner
2013-12-04  1:58 ` Dave Chinner
2013-12-04  3:38   ` Dave Chinner
2013-12-04  3:47     ` Jens Axboe
2013-12-04  4:17       ` Dave Chinner
2013-12-04 23:56         ` Dave Chinner
2013-12-05  3:41           ` Jens Axboe [this message]
2013-12-05  4:49             ` Dave Chinner
2013-12-05 14:22   ` Ming Lei
2013-12-05 15:57     ` Jens Axboe
2013-12-05 21:26     ` Dave Chinner
2013-12-05 23:16       ` Dave Chinner
2013-12-06 16:46         ` Jens Axboe
