From: Lachlan McIlroy <lachlan@sgi.com>
To: Lachlan McIlroy <lachlan@sgi.com>,
Christoph Hellwig <hch@infradead.org>, xfs-oss <xfs@oss.sgi.com>,
linux-mm@kvack.org
Subject: Re: deadlock with latest xfs
Date: Mon, 27 Oct 2008 18:31:12 +1100
Message-ID: <49056E40.5040906@sgi.com>
In-Reply-To: <20081027065455.GB4985@disturbed>
Dave Chinner wrote:
> On Mon, Oct 27, 2008 at 05:29:50PM +1100, Lachlan McIlroy wrote:
>> Dave Chinner wrote:
>>> On Mon, Oct 27, 2008 at 12:42:09PM +1100, Lachlan McIlroy wrote:
>>>> Dave Chinner wrote:
>>>>> On Sun, Oct 26, 2008 at 11:53:51AM +1100, Dave Chinner wrote:
>>>>>> On Fri, Oct 24, 2008 at 05:48:04PM +1100, Dave Chinner wrote:
>>>>>>> OK, I just hung a single-threaded rm -rf after this completed:
>>>>>>>
>>>>>>> # fsstress -p 1024 -n 100 -d /mnt/xfs2/fsstress
>>>>>>>
>>>>>>> It has hung with this trace:
>>> ....
>>>>> Got it now. I can reproduce this in a couple of minutes now that both
>>>>> the test fs and the fs hosting the UML fs images are using lazy-count=1
>>>>> (and the frequent 10s long host system freezes have gone away, too).
>>>>>
>>>>> Looks like *another* new memory allocation problem [1]:
>>> .....
>>>>> We've entered memory reclaim inside the xfsdatad while trying to do
>>>>> unwritten extent conversion during I/O completion, and that memory
>>>>> reclaim is now blocked waiting for I/O completion that cannot make
>>>>> progress.
>>>>>
>>>>> Nasty.
>>>>>
>>>>> My initial thought is to make _xfs_trans_alloc() able to take a KM_NOFS argument
>>>>> so we don't re-enter the FS here. If we get an ENOMEM in this case, we should
>>>>> then re-queue the I/O completion at the back of the workqueue and let other
>>>>> I/O completions progress before retrying this one. That way the I/O that
>>>>> is simply cleaning memory will make progress, hence allowing memory
>>>>> allocation to occur successfully when we retry this I/O completion...
>>>> It could work - unless it's a synchronous I/O, in which case the I/O is
>>>> not complete until the extent conversion takes place.
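FWIW, threading an allocation-flags argument through _xfs_trans_alloc()
might look something like the sketch below - just my guess at the shape
of it using the existing kmem wrappers, not an actual patch:

	xfs_trans_t *
	_xfs_trans_alloc(
		xfs_mount_t	*mp,
		uint		type,
		uint		memflags)	/* KM_SLEEP or KM_NOFS */
	{
		xfs_trans_t	*tp;

		atomic_inc(&mp->m_active_trans);

		/* KM_NOFS stops reclaim recursing back into the filesystem */
		tp = kmem_zone_zalloc(xfs_trans_zone, memflags);
		tp->t_magic = XFS_TRANS_MAGIC;
		tp->t_type = type;
		tp->t_mountp = mp;
		return tp;
	}

and the unwritten extent conversion path would then pass KM_NOFS:

	tp = _xfs_trans_alloc(mp, XFS_TRANS_STRAT_WRITE, KM_NOFS);
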
>>> Right. Pushing unwritten extent conversion onto a different
>>> workqueue is probably the only way to handle this easily.
>>> That's the same solution Irix has been using for a long time
>>> (the xfsc thread)....
>> Would that be a workqueue specific to one filesystem? Right now our
>> workqueues are per-cpu so they can contain I/O completions for multiple
>> filesystems.
>
> I've simply implemented another per-cpu workqueue set.
>
>>>> Could we allocate the memory up front before the I/O is issued?
>>> Possibly, but that will create more memory pressure than
>>> allocation in I/O completion because now we could need to hold
>>> thousands of allocations across an I/O - think of the case where
>>> we are running low on memory and have a disk subsystem capable of
>>> a few hundred thousand I/Os per second. The allocation failing would
>>> prevent the I/Os from being issued, and if this is buffered writes
>>> into unwritten extents we'd be preventing dirty pages from being
>>> cleaned....
>> The allocation has to be done sometime - if we have a few hundred thousand
>> I/Os per second then the queue of unwritten extent conversion requests
>> is going to grow very quickly.
>
> Sure, but the difference is that in a workqueue we are doing:
>
> alloc
> free
> alloc
> free
> .....
> alloc
> free
>
> So the instantaneous memory usage is bound by the number of
> workqueue threads doing conversions. The "pre-allocate" case is:
>
> alloc
> alloc
> alloc
> alloc
> ......
> <io completes>
> free
> .....
> <io_completes>
> free
> .....
>
> so the allocation is bound by the number of parallel I/Os we have
> not completed. Given that the transaction structure is *800* bytes,
> they will consume memory very quickly if pre-allocated before the
> I/O is dispatched.
Ah, yes, of course - I see your point. At 800 bytes a transaction, a few
hundred thousand in-flight I/Os would pin well over a hundred megabytes.
It would only really work for synchronous I/O.
Even with the current code we could end up with queues that grow very
large, because buffered writes into unwritten extents don't wait for
the conversion. So even though each queue entry only needs a small
allocation, the total memory consumed could still be significant.
>
>> If a separate workqueue will fix this
>> then that's a better solution anyway.
>
> I think so. The patch I have been testing is below.
Thanks, I'll add it to the list.
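For the archives, I'd expect the split to take roughly this shape in
the I/O completion path - a sketch from memory, not the actual patch
(the xfsconvertd name is just illustrative):

	static struct workqueue_struct *xfsconvertd_workqueue;

	/* created alongside xfsdatad at init time: */
	xfsconvertd_workqueue = create_workqueue("xfsconvertd");

	void
	xfs_finish_ioend(
		xfs_ioend_t	*ioend,
		int		wait)
	{
		if (atomic_dec_and_test(&ioend->io_remaining)) {
			struct workqueue_struct	*wq = xfsdatad_workqueue;

			/*
			 * Unwritten extent conversion allocates a
			 * transaction, so keep it off the queue that
			 * is cleaning dirty pages.
			 */
			if (ioend->io_type == IOMAP_UNWRITTEN)
				wq = xfsconvertd_workqueue;

			queue_work(wq, &ioend->io_work);
			if (wait)
				flush_workqueue(wq);
		}
	}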