From: Brian Foster <bfoster@redhat.com>
To: Alex Lyakas <alex@zadarastorage.com>
Cc: Christoph Hellwig <hch@infradead.org>, linux-xfs@vger.kernel.org
Subject: Re: xfs_alloc_ag_vextent_near() takes minutes to complete
Date: Thu, 4 May 2017 08:29:50 -0400
Message-ID: <20170504122950.GC3248@bfoster.bfoster>
In-Reply-To: <CAOcd+r0zJQ=5PtJ4KofEbxiZPbS3xo8x_14tCtvdxcp6mhN8yQ@mail.gmail.com>
On Thu, May 04, 2017 at 02:13:40PM +0300, Alex Lyakas wrote:
> Hello,
>
> > # it is still not clear who is holding the lock
> Further analysis shows that the xfs_buf lock is being held because
> the buffer is currently being read from disk. The stack that unlocks
> the buffer is [1]. Each buffer being read is 4 KB.
>
> So the bottom line is that XFS sometimes needs to search through
> thousands of 4 KB metadata blocks, waiting while each one is read
> from disk. This makes allocation very slow in some cases, which in
> turn causes other threads to wait on i_lock/i_mutex, sometimes
> triggering hung-task panics.
>
>
Not that this helps resolve the fundamental problem, but note that hung
task messages don't have to result in kernel panics. From a quick look
at v3.18, the CONFIG_BOOTPARAM_HUNG_TASK_PANIC kernel config option
appears to determine whether a hung task message panics the kernel or
not.
Brian
> Can anything be done to remedy this?
>
> Thanks,
> Alex.
>
>
>
> [1]
> Call Trace:
> [<ffffffff81710c85>] dump_stack+0x4e/0x71
> [<ffffffffc0fade1b>] xfs_buf_ioend+0x22b/0x230 [xfs]
> [<ffffffffc0fade35>] xfs_buf_ioend_work+0x15/0x20 [xfs]
> [<ffffffff8108bd56>] process_one_work+0x146/0x410
> [<ffffffff8108c141>] worker_thread+0x121/0x450
> [<ffffffff8108c020>] ? process_one_work+0x410/0x410
> [<ffffffff810911b9>] kthread+0xc9/0xe0
> [<ffffffff810910f0>] ? kthread_create_on_node+0x180/0x180
> [<ffffffff81717918>] ret_from_fork+0x58/0x90
> [<ffffffff810910f0>] ? kthread_create_on_node+0x180/0x180
> --
> To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 12+ messages
2017-05-01 12:37 xfs_alloc_ag_vextent_near() takes minutes to complete Alex Lyakas
2017-05-01 15:26 ` Brian Foster
2017-05-02 7:35 ` Christoph Hellwig
2017-05-04 8:07 ` Alex Lyakas
2017-05-04 11:13 ` Alex Lyakas
2017-05-04 12:29 ` Brian Foster [this message]
2017-05-04 12:25 ` Brian Foster
2017-05-04 13:53 ` Alex Lyakas
2017-05-05 3:29 ` Dave Chinner
2017-05-07 7:52 ` Alex Lyakas
2017-05-07 8:00 ` Alex Lyakas
2017-05-07 9:12 ` Christoph Hellwig