From: Qu Wenruo <quwenruo@cn.fujitsu.com>
To: <dsterba@suse.cz>, <linux-btrfs@vger.kernel.org>
Subject: Re: [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions
Date: Thu, 1 Jun 2017 09:01:26 +0800 [thread overview]
Message-ID: <17c52fe5-40fb-5c4c-1066-01928098a69f@cn.fujitsu.com> (raw)
In-Reply-To: <20170531143001.GA12135@twin.jikos.cz>
At 05/31/2017 10:30 PM, David Sterba wrote:
> On Wed, May 31, 2017 at 08:31:35AM +0800, Qu Wenruo wrote:
>>>> Yes it's hard to find such deadlock especially when lockdep will not
>>>> detect it.
>>>>
>>>> And this makes the advantage of using stack memory in v3 patch more obvious.
>>>>
>>>> I didn't realize the extra possible deadlock when memory pressure is
>>>> high, and to make completely correct usage of GFP_ flags we should let
>>>> caller to choose its GFP_ flag, which will introduce more modification
>>>> and more possibility to cause problem.
>>>>
>>>> So now I prefer the stack version a little more.
>>>
>>> The difference is that the stack version will always consume the stack
>>> at runtime. The dynamic allocation will not, but we have to add error
>>> handling and make sure we use right gfp flags. So it's runtime vs review
>>> trade off, I choose to spend time on review.
>>
>> OK, then I'll update the patchset to allow passing gfp flags for each
>> reservation.
>
> You mean to add gfp flags to extent_changeset_alloc and update the
> direct callers or to add gfp flags to the whole reservation codepath?
Yes, I was planning to do it.
> I strongly prefer to use GFP_NOFS for now, although it's not ideal.
OK, then keep GFP_NOFS.
But I also want to know the reason why.
Is it just because we don't have good enough tool to detect possible
deadlock caused by wrong GFP_* flags in write path?
Thanks,
Qu
next prev parent reply other threads:[~2017-06-01 1:01 UTC|newest]
Thread overview: 17+ messages / expand[flat|nested] mbox.gz Atom feed top
2017-05-17 2:56 [RFC PATCH v3.2 0/6] Qgroup fixes, Non-stack version Qu Wenruo
2017-05-17 2:56 ` [RFC PATCH v3.2 1/6] btrfs: qgroup: Add quick exit for non-fs extents Qu Wenruo
2017-05-17 2:56 ` [RFC PATCH v3.2 2/6] btrfs: qgroup: Cleanup btrfs_qgroup_prepare_account_extents function Qu Wenruo
2017-05-17 2:56 ` [RFC PATCH v3.2 3/6] btrfs: qgroup: Return actually freed bytes for qgroup release or free data Qu Wenruo
2017-05-17 2:56 ` [RFC PATCH v3.2 4/6] btrfs: qgroup: Fix qgroup reserved space underflow caused by buffered write and quota enable Qu Wenruo
2017-05-17 2:56 ` [RFC PATCH v3.2 5/6] btrfs: qgroup: Introduce extent changeset for qgroup reserve functions Qu Wenruo
2017-05-17 15:37 ` David Sterba
2017-05-18 0:24 ` Qu Wenruo
2017-05-18 13:45 ` David Sterba
2017-05-19 0:32 ` Qu Wenruo
2017-05-29 15:51 ` David Sterba
2017-05-31 0:31 ` Qu Wenruo
2017-05-31 14:30 ` David Sterba
2017-06-01 1:01 ` Qu Wenruo [this message]
2017-06-02 14:16 ` David Sterba
2017-05-17 2:56 ` [RFC PATCH v3.2 6/6] btrfs: qgroup: Fix qgroup reserved space underflow by only freeing reserved ranges Qu Wenruo
2017-06-21 19:09 ` [RFC PATCH v3.2 0/6] Qgroup fixes, Non-stack version David Sterba
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=17c52fe5-40fb-5c4c-1066-01928098a69f@cn.fujitsu.com \
--to=quwenruo@cn.fujitsu.com \
--cc=dsterba@suse.cz \
--cc=linux-btrfs@vger.kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).