public inbox for linux-btrfs@vger.kernel.org
From: Qu Wenruo <wqu@suse.com>
To: dsterba@suse.cz, Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>
Subject: Re: Should we still go __GFP_NOFAIL? (Was Re: [PATCH] btrfs: refactor alloc_extent_buffer() to allocate-then-attach method)
Date: Wed, 29 Nov 2023 06:36:45 +1030	[thread overview]
Message-ID: <546fab58-974e-462c-ab20-5e31acb7285b@suse.com> (raw)
In-Reply-To: <20231128162636.GK18929@twin.jikos.cz>



On 2023/11/29 02:56, David Sterba wrote:
> On Mon, Nov 27, 2023 at 03:40:41PM +1030, Qu Wenruo wrote:
>> On 2023/11/23 06:33, Qu Wenruo wrote:
>> [...]
>>>> I wonder if we still can keep the __GFP_NOFAIL for the fallback
>>>> allocation, it's there right now and seems to work on systems under
>>>> stress and does not cause random failures due to ENOMEM.
>>>>
>>> Oh, I forgot the __NOFAIL gfp flag; that's not hard to fix, just
>>> re-introduce the gfp flags to btrfs_alloc_page_array().
>>
>> BTW, I think it's a good time to start a new discussion on whether we
>> should go __GFP_NOFAIL.
>> (Although I have updated the patch to keep the GFP_NOFAIL behavior)
>>
>> I totally understand that we need some memory for tree block during
>> transaction commitment and other critical sections.
>>
>> And it's not that uncommon to see __GFP_NOFAIL usage in other mainstream
>> filesystems.
> 
> The use of NOFAIL is either carefully evaluated or it's there for
> historical reasons. The comment for the flag says that,
> https://elixir.bootlin.com/linux/latest/source/include/linux/gfp_types.h#L198
> and I know MM people see the flag as problematic and that it should not
> be used if possible.
> 
>> But my concern is that we also have a lot of memory allocations that
>> can lead to problems as well, like btrfs_csum_one_bio() or even
>> join_transaction().
> 
> While I agree that there are many places that can fail due to memory
> allocations, the extent buffer requires whole 4 pages, other structures
> could be taken from the generic slabs or our named caches. The latter
> has lower chance to fail.
> 
>> I doubt btrfs (or any other filesystem) would be to blame if we're
>> really running out of memory.
> 
> Well, people blame btrfs for everything.
> 
>> Shouldn't the memory-hungry user space programs be killed first, long
>> before we fail to allocate memory?
> 
> That's up to the allocator and I think it does a good job of providing
> the memory to kernel rather than to user space programs.
> 
> We do the critical allocations as GFP_NOFS which so far provides the "do
> not fail" guarantees. It's a long going discussion,
> https://lwn.net/Articles/653573/ (2015). We can let many allocations
> fail with a fallback, but still a lot of them would lead to transaction
> abort. And as Josef said, there are some that can't fail because they're
> too deep or there's no clear exit path.

Yep, for those call sites (e.g. the extent io tree) we still need NOFAIL
until we add error handling for all of them.

> 
>> Furthermore, at least for btrfs, I don't think we would hit a situation
>> where memory allocation failure for metadata would lead to any data
>> corruption.
>> The worst case is we hit transaction abort, and the fs flips RO.
> 
> Yeah, corruption can't happen as long as we have all the error handling
> in place and the transaction abort as the ultimate fallback.
> 
>> Thus I'm wondering if we really need __NOFAIL for btrfs?
> 
> It's hard to say if or when the NOFAIL semantics actually apply. Let's
> say there are applications doing metadata operations, the system is
> under load, memory is freed slowly by writing data etc. Application that
> waits inside the eb allocation will continue eventually. Without the
> NOFAIL it would exit early.
> 
> As a middle ground, we may want something like "try hard" that would not
> fail too soon but could eventually. That's __GFP_RETRY_MAYFAIL.

This sounds good, although I'd say the MM is already doing too good a
job, so I'm not sure we even need the extra retry.

> 
> Right now there are several changes around the extent buffers, I'd like
> to do the conversion first and then replace/drop the NOFAIL flag so we
> don't mix too many changes in one release. The extent buffers are
> critical so one step a time, with lots of testing.

This sounds very reasonable.

Thanks,
Qu


Thread overview: 11+ messages
2023-11-21 23:35 [PATCH] btrfs: refactor alloc_extent_buffer() to allocate-then-attach method Qu Wenruo
2023-11-22 14:14 ` Josef Bacik
2023-11-22 20:00   ` Qu Wenruo
2023-11-27 16:28     ` Josef Bacik
2023-11-27 22:17       ` Qu Wenruo
2023-11-22 14:38 ` David Sterba
2023-11-22 20:03   ` Qu Wenruo
2023-11-27  5:10     ` Should we still go __GFP_NOFAIL? (Was Re: [PATCH] btrfs: refactor alloc_extent_buffer() to allocate-then-attach method) Qu Wenruo
2023-11-27 16:19       ` Josef Bacik
2023-11-28 16:26       ` David Sterba
2023-11-28 20:06         ` Qu Wenruo [this message]
