From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: dsterba@suse.cz, Qu Wenruo <wqu@suse.com>,
linux-btrfs@vger.kernel.org, Josef Bacik <josef@toxicpanda.com>
Subject: Re: [PATCH v3 1/3] btrfs: Introduce per-profile available space facility
Date: Wed, 8 Jan 2020 16:04:41 +0100
Message-ID: <20200108150441.GG3929@twin.jikos.cz>
In-Reply-To: <9c2308bb-c3ae-d502-4b27-f8dbedc25d1a@gmx.com>
On Tue, Jan 07, 2020 at 10:13:43AM +0800, Qu Wenruo wrote:
> >> +	devices_info = kcalloc(fs_devices->rw_devices, sizeof(*devices_info),
> >> +			       GFP_NOFS);
> >
> > Calling kcalloc is another potential slowdown, for the statfs
> > considerations.
>
> That's also what we did in statfs() before, so it shouldn't cause any
> extra problem.
> Furthermore, we didn't use calc_one_profile_avail() directly in
> statfs().
>
> I get your point, but personally speaking, the memory allocation and
> extra in-memory device iteration should be pretty fast compared to
> __btrfs_alloc_chunk().
>
> Thus I don't think this memory allocation would cause extra trouble,
> except for the error handling mentioned below.
Right, current statfs also does allocation via
btrfs_calc_avail_data_space, so it's the same as before.
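For reference, both paths boil down to roughly this shape (a simplified
sketch following the quoted hunk, not the exact kernel code):

/* One slot per rw device; this is the allocation in question. */
devices_info = kcalloc(fs_devices->rw_devices, sizeof(*devices_info),
		       GFP_NOFS);
if (!devices_info)
	return -ENOMEM;	/* the new failure mode discussed below */

/* ... walk the rw devices and compute the available space ... */

kfree(devices_info);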
> [...]
> >> + ret = calc_per_profile_avail(fs_info);
> >
> > Adding new failure modes
>
> Another solution I have tried is to make calc_per_profile_avail()
> return void, ignoring the ENOMEM error and just setting the affected
> profile to 0 available space.
>
> But that method just delays the ENOMEM, and would cause strange
> per-profile values until the next successful update or mount cycle.
>
> Any idea on which method is less bad?
Better to return the error than wrong values in this case. But as the
numbers are sort of a cache, and a mount cycle to get them fixed is not
very user friendly, we need some other way. Since this is global state,
a bit in fs_info::flags can be set and the full recalculation attempted
at some point until it succeeds. That would leave the counters stale for
some time, but I think that's still better than having them suddenly
drop to 0.
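Roughly something like this (an untested sketch; the flag name is made
up, only fs_info::flags and the bit helpers are real):

/* On failure, flag that the cached per-profile numbers are stale. */
if (calc_per_profile_avail(fs_info) == -ENOMEM)
	set_bit(BTRFS_FS_NEED_PROFILE_AVAIL_CALC, &fs_info->flags);

/* Later, at any convenient point, retry until it succeeds. */
if (test_bit(BTRFS_FS_NEED_PROFILE_AVAIL_CALC, &fs_info->flags) &&
    calc_per_profile_avail(fs_info) == 0)
	clear_bit(BTRFS_FS_NEED_PROFILE_AVAIL_CALC, &fs_info->flags);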