From: Martin Steigerwald <Martin@lichtvoll.de>
To: Chris Murphy <lists@colorremedies.com>
Cc: Roman Mamedov <rm@romanrm.net>,
kreijack@inwind.it, kreijack@libero.it,
Brendan Hide <brendan@swiftspirit.co.za>,
linux-btrfs@vger.kernel.org
Subject: Re: Provide a better free space estimate on RAID1
Date: Fri, 07 Feb 2014 11:02:21 +0100
Message-ID: <2016480.0iFfLibb6i@merkaba>
In-Reply-To: <1B8BB06F-64EA-40EC-B0D7-FB4A38928DA3@colorremedies.com>
On Thursday, 6 February 2014, 22:30:46 Chris Murphy wrote:
> On Feb 6, 2014, at 9:40 PM, Roman Mamedov <rm@romanrm.net> wrote:
> > On Thu, 06 Feb 2014 20:54:19 +0100
> > Goffredo Baroncelli <kreijack@libero.it> wrote:
> >
> >> I agree with you about the need for a solution. However, your patch
> >> seems to me even worse than the actual code.
> >>
> >> For example you cannot take into account the mix of data/linear and
> >> metadata/dup (with the pathological case of small files stored in the
> >> metadata chunks), nor different profile levels like raid5/6 (or the
> >> future raidNxM). And do not forget the compression...
> >
> > Every estimate first and foremost should be measured by how precise it is,
> > or in this case "wrong by how many gigabytes". The actual code returns a
> > result that is pretty much always wrong by 2x; after the patch it will be
> > within gigabytes of the correct value in the most common use case
> > (data raid1, metadata raid1 and that's it). Of course that PoC is nowhere
> > near the final solution; what I can't agree with is "if another option is
> > somewhat better, but not ideally perfect, then it's worse than the
> > current one", even considering the current one is absolutely broken.
>
> Is the glass half empty or is it half full?
>
> From the original post, context is a 2x 1TB raid volume:
>
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 1.8T 1.1M 1.8T 1% /mnt/p2
>
> Earlier conventions would have stated Size ~900GB, and Avail ~900GB. But
> that's not exactly true either, is it? It's merely a convention to cut the
> storage available in half, while keeping data file sizes the same as if
> they were on a single device without raid.
>
> On Btrfs the file system Size is reported as the total storage stack size,
> and that's not incorrect. And the amount Avail is likewise not wrong
> because that space is "not otherwise occupied" which is the definition of
> available. It's linguistically consistent, it's just not a familiar
> convention.
I see one issue with it:

There are installers and applications that check the available disk space prior
to installing. This won't work with the current df figures that BTRFS delivers.

While I understand that there is *never* a guarantee that a given amount of free
space can really be allocated by a process, because other processes can allocate
space as well in the meantime, and while I understand that it is difficult to
provide exact figures as soon as RAID settings can be set per subvolume, I still
think it is important to improve on the figures.
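For illustration, here is a minimal sketch of the kind of check such installers
typically do, using plain POSIX statvfs(3) and nothing btrfs-specific; the mount
point and the 200 MB requirement are just made-up example values:

/* Minimal sketch of the kind of check an installer does before copying
 * files.  On the 2x 1TB raid1 from the df output above, "avail" comes
 * back as roughly 1.8T even though only about half of that can actually
 * be written as file data. */
#include <stdio.h>
#include <stdint.h>
#include <sys/statvfs.h>

int main(void)
{
    const char *mountpoint = "/mnt/p2";        /* example path from the df output */
    const uint64_t required = 200ULL << 20;    /* assume the installer needs 200 MB */
    struct statvfs sv;

    if (statvfs(mountpoint, &sv) != 0) {
        perror("statvfs");
        return 1;
    }

    /* df computes "Avail" the same way: free blocks times fragment size. */
    uint64_t avail = (uint64_t)sv.f_bavail * sv.f_frsize;

    printf("available: %llu bytes\n", (unsigned long long)avail);
    printf("install %s\n", avail >= required ? "should fit" : "will not fit");
    return 0;
}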
In the longer term I'd like a function / syscall to ask the filesystem the
following question:

I am about to write 200 MB in this directory, am I likely to succeed with
that?

This way an application can ask a question that is specific to a directory,
which would allow BTRFS to provide a more accurate estimate.

I understand that there is something like that for single files (fallocate),
but there is nothing like this for writing a certain amount of data spread
over several files / directories.
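Such an interface does not exist today; a rough, hypothetical approximation
would be to preallocate a temporary file inside the target directory and drop
it again, along these lines (the helper name and the 200 MB figure are made up
for illustration, and the probe is inherently racy):

/* Rough stand-in for the wished-for "can I write N bytes below this
 * directory?" query: reserve the space with a temporary file in that
 * directory, then give it back.  Only a probe - another process can
 * take the space the moment the file is unlinked. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int can_probably_write(const char *dir, off_t bytes)
{
    char path[4096];
    snprintf(path, sizeof(path), "%s/.space-probe-XXXXXX", dir);

    int fd = mkstemp(path);
    if (fd < 0)
        return -1;

    /* posix_fallocate() returns 0 if the blocks could be reserved. */
    int err = posix_fallocate(fd, 0, bytes);

    unlink(path);
    close(fd);
    return err == 0;
}

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";

    if (can_probably_write(dir, 200LL << 20) > 0)   /* the 200 MB from above */
        printf("writing 200 MB below %s is likely to succeed\n", dir);
    else
        printf("writing 200 MB below %s is likely to fail\n", dir);
    return 0;
}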
Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7