linux-btrfs.vger.kernel.org archive mirror
From: Rakesh Sankeshi <rakesh.sankeshi@gmail.com>
To: Qu Wenruo <quwenruo@cn.fujitsu.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs quota issues
Date: Tue, 16 Aug 2016 16:33:47 -0700	[thread overview]
Message-ID: <CAGj_d3C9274iWJN4vqPLamsEeO+ndPcnSefD-q-PA63ToeLBrw@mail.gmail.com> (raw)
In-Reply-To: <CAGj_d3C4=M7ZTvDhn+twROnf1qO4u-UXBmbP8HVQgfGiD91W6w@mail.gmail.com>

Also, is there any timeframe for when the qgroup/quota issues will be
stabilized in btrfs?

Thanks!


On Tue, Aug 16, 2016 at 9:05 AM, Rakesh Sankeshi
<rakesh.sankeshi@gmail.com> wrote:
> 2) After EDQUOT, I can't write anymore.
>
> I can delete data, but still can't write anything further.
>
> 3) I tested without compression and also with LZO and ZLIB; all
> behave the same way with qgroup. There is no consistency in when it
> hits the quota limit, and I don't understand how it's calculating the
> numbers.
>
> With ext4 and xfs, by contrast, I can clearly see that it hits the quota limit.
>
>
>
> On Mon, Aug 15, 2016 at 6:01 PM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>>
>>
>> At 08/16/2016 03:11 AM, Rakesh Sankeshi wrote:
>>>
>>> Yes, at the subvolume level.
>>>
>>> qgroupid         rfer         excl     max_rfer     max_excl parent  child
>>> --------         ----         ----     --------     -------- ------  -----
>>> 0/5          16.00KiB     16.00KiB         none         none ---     ---
>>> 0/258       119.48GiB    119.48GiB    200.00GiB         none ---     ---
>>> 0/259        92.57GiB     92.57GiB    200.00GiB         none ---     ---
>>>
>>>
>>> Although I have a 200GiB limit on both subvolumes, I run into the
>>> issue at about 120GiB and 92GiB respectively.
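For reference, the headroom relative to max_rfer in a table like the one
above can be pulled out with a quick awk pass over the "btrfs qgroup show
-prce" output. This is only a sketch against the column layout pasted
above; it assumes sizes are printed in GiB and skips rows without a GiB
limit:

```shell
#!/bin/sh
# Report each qgroup's referenced usage as a percentage of its
# max_rfer limit, reading "btrfs qgroup show -prce" output on stdin.
# Rows whose rfer or max_rfer column is not a GiB value (headers,
# "none" limits, separator dashes) are skipped.
usage_report() {
    awk '$2 ~ /GiB$/ && $4 ~ /GiB$/ {
        rfer = $2; sub(/GiB/, "", rfer)
        lim  = $4; sub(/GiB/, "", lim)
        printf "%s %.1f%%\n", $1, 100 * rfer / lim
    }'
}
```

On the table above this reports 0/258 at 59.7% and 0/259 at 46.3% of
their limits, i.e. the errors hit well before the configured 200GiB.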
>>
>>
>> 1) About the workload
>> Would you mind describing the write pattern of your workload?
>>
>> Is it just dd'ing data with LZO compression?
>> The compression case is a little complicated, as the reserved data size
>> and the on-disk extent size are different.
>>
>> It's possible that some code path is leaking reserved data space.
>>
>>
>> 2) Behavior after EDQUOT
>> After EDQUOT happens, can you still write data into the subvolume?
>> If you can still write a lot of data (at least several gigabytes), the
>> problem is probably related to temporarily reserved space.
>>
>> If not, and you can't even remove any file due to EDQUOT, then it's
>> almost certain we have underflowed the reserved data accounting.
>> In that case, unmounting and mounting again is the only workaround.
>> (In fact, not really a workaround at all.)
>>
>> 3) Behavior without compression
>>
>> If it's OK for you, would you mind testing without compression?
>> Currently we mostly rely on the assumption that the on-disk extent size
>> is the same as the in-memory extent size (i.e. no compression).
>>
>> So qgroup + compression has not been the main focus so far, and may
>> well be buggy.
>>
>> If qgroup works sanely without compression, we can at least be sure
>> that the cause is the qgroup + compression combination.
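A no-compression run can reuse the same kind of fill loop. Below is a
rough sketch; the mount point, device, and limit are illustrative (not
from this thread), and only the write-until-failure loop itself is
portable:

```shell
#!/bin/sh
# Fill loop used to probe for EDQUOT: write 1MiB files into $1 until
# a write fails or $2 files have been written, then report how many
# MiB were written successfully.
write_until_error() {
    dir=$1; max=$2; n=0
    while [ "$n" -lt "$max" ]; do
        if ! dd if=/dev/zero of="$dir/fill.$n" bs=1M count=1 \
                status=none 2>/dev/null; then
            break
        fi
        n=$((n + 1))
    done
    echo "$n"
}

# Illustrative no-compression test (not run here):
#   mount /dev/xvdc /mnt/test_nocomp          # no compress= option
#   btrfs qgroup limit 200G /mnt/test_nocomp/subvol
#   write_until_error /mnt/test_nocomp/subvol 250000
```

If the loop stops far short of the limit only when compress=lzo or
compress=zlib is mounted, that would point at the qgroup + compression
combination Qu describes.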
>>
>> Thanks,
>> Qu
>>
>>
>>>
>>>
>>> On Sun, Aug 14, 2016 at 7:11 PM, Qu Wenruo <quwenruo@cn.fujitsu.com>
>>> wrote:
>>>>
>>>>
>>>>
>>>> At 08/12/2016 01:32 AM, Rakesh Sankeshi wrote:
>>>>>
>>>>>
>>>>> I set a 200GB limit for one user and a 100GB limit for another user.
>>>>>
>>>>> As soon as I reached 139GB and 53GB respectively, I started hitting
>>>>> quota errors. Is there any way to work around this quota behavior on
>>>>> an LZO-compressed btrfs filesystem?
>>>>>
>>>>
>>>> Please paste the output of "btrfs qgroup show -prce <mnt>" if you
>>>> are using the btrfs qgroup/quota function.
>>>>
>>>> And AFAIK, btrfs qgroups apply to subvolumes, not users.
>>>>
>>>> So did you mean you limited one subvolume that belongs to each user?
>>>>
>>>> Thanks,
>>>> Qu
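To make that concrete: since limits attach to subvolumes, a "per-user"
quota means one subvolume per user with a qgroup limit on it. A minimal
sketch, with made-up user and path names; DRYRUN=1 just prints the
commands instead of running them:

```shell
#!/bin/sh
# Per-user quota via per-user subvolumes: create a subvolume, hand it
# to the user, and put a qgroup limit on it. With DRYRUN=1 the
# commands are printed rather than executed (the paths and user here
# are illustrative).
setup_user_subvol() {
    mnt=$1; user=$2; limit=$3
    run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
    run btrfs subvolume create "$mnt/$user"
    run chown "$user" "$mnt/$user"
    run btrfs qgroup limit "$limit" "$mnt/$user"
}
```

Quotas must already be enabled on the filesystem ("btrfs quota enable
<mnt>") for the limit to take effect.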
>>>>
>>>>>
>>>>>
>>>>> 4.7.0-040700-generic #201608021801 SMP
>>>>> btrfs-progs v4.7
>>>>>
>>>>>
>>>>> Label: none  uuid: 66a78faf-2052-4864-8a52-c5aec7a56ab8
>>>>> Total devices 2 FS bytes used 150.62GiB
>>>>> devid    1 size 1.00TiB used 78.01GiB path /dev/xvdc
>>>>> devid    2 size 1.00TiB used 78.01GiB path /dev/xvde
>>>>>
>>>>> Data, RAID0: total=150.00GiB, used=149.12GiB
>>>>> System, RAID1: total=8.00MiB, used=16.00KiB
>>>>> Metadata, RAID1: total=3.00GiB, used=1.49GiB
>>>>> GlobalReserve, single: total=512.00MiB, used=0.00B
>>>>>
>>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>>> /dev/xvdc       2.0T  153G  1.9T   8% /test_lzo
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>

Thread overview: 13+ messages
2016-08-11 17:32 btrfs quota issues Rakesh Sankeshi
2016-08-11 19:13 ` Duncan
2016-08-12 15:47   ` Rakesh Sankeshi
2016-08-13 23:05     ` Duncan
2016-08-15  2:11 ` Qu Wenruo
2016-08-15 19:11   ` Rakesh Sankeshi
2016-08-16  1:01     ` Qu Wenruo
2016-08-16 16:05       ` Rakesh Sankeshi
2016-08-16 23:33         ` Rakesh Sankeshi [this message]
2016-08-17  0:09           ` Tim Walberg
2016-08-17  0:56         ` Qu Wenruo
2016-08-23 18:38           ` Rakesh Sankeshi
2016-08-26  1:52             ` Qu Wenruo
