linux-btrfs.vger.kernel.org archive mirror
From: Rakesh Sankeshi <rakesh.sankeshi@gmail.com>
To: Qu Wenruo <quwenruo@cn.fujitsu.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs quota issues
Date: Mon, 15 Aug 2016 12:11:28 -0700	[thread overview]
Message-ID: <CAGj_d3C1nf3LGCXfWxoYNpWP2X81v-+eT22yBx17HgfMaP7ocg@mail.gmail.com> (raw)
In-Reply-To: <fe8222d8-6463-8b4d-0613-b3639455463d@cn.fujitsu.com>

Yes, at the subvolume level.

qgroupid         rfer         excl     max_rfer     max_excl parent  child
--------         ----         ----     --------     -------- ------  -----
0/5          16.00KiB     16.00KiB         none         none ---     ---
0/258       119.48GiB    119.48GiB    200.00GiB         none ---     ---
0/259        92.57GiB     92.57GiB    200.00GiB         none ---     ---


Although I have a 200GB limit on both subvolumes, I am running into quota
errors at only about 120GB and 92GB respectively.
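
A quick way to re-check the numbers (a minimal sketch, assuming the /test_lzo
mount point shown in the df output quoted below; --raw needs a reasonably
recent btrfs-progs, so verify your v4.7 build supports it):

  # restart qgroup accounting and wait for the rescan to finish
  btrfs quota rescan -w /test_lzo
  # show exact byte counts instead of rounded GiB values
  btrfs qgroup show -prce --raw /test_lzo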


On Sun, Aug 14, 2016 at 7:11 PM, Qu Wenruo <quwenruo@cn.fujitsu.com> wrote:
>
>
> At 08/12/2016 01:32 AM, Rakesh Sankeshi wrote:
>>
>> I set a 200GB limit for one user and a 100GB limit for another user.
>>
>> As soon as I reached 139GB and 53GB respectively, I started hitting quota
>> errors. Is there any way to work around the quota functionality on a btrfs
>> LZO-compressed filesystem?
>>
>
> Please paste the "btrfs qgroup show -prce <mnt>" output if you are using the
> btrfs qgroup/quota function.
>
> And, AFAIK btrfs qgroup limits are applied to subvolumes, not users.
>
> So did you mean limiting one subvolume that belongs to one user?
>
> Thanks,
> Qu
>
>>
>>
>> 4.7.0-040700-generic #201608021801 SMP
>>
>> btrfs-progs v4.7
>>
>>
>> Label: none  uuid: 66a78faf-2052-4864-8a52-c5aec7a56ab8
>> Total devices 2 FS bytes used 150.62GiB
>> devid    1 size 1.00TiB used 78.01GiB path /dev/xvdc
>> devid    2 size 1.00TiB used 78.01GiB path /dev/xvde
>>
>> Data, RAID0: total=150.00GiB, used=149.12GiB
>> System, RAID1: total=8.00MiB, used=16.00KiB
>> Metadata, RAID1: total=3.00GiB, used=1.49GiB
>> GlobalReserve, single: total=512.00MiB, used=0.00B
>>
>> Filesystem      Size  Used Avail Use% Mounted on
>> /dev/xvdc       2.0T  153G  1.9T   8% /test_lzo
>
>
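
For reference, the per-subvolume limits Qu describes are set with "btrfs
qgroup limit"; a minimal sketch (the subvolume path /test_lzo/user1 is
hypothetical, the actual paths are not shown in this thread):

  # cap the referenced space of a user's subvolume at 200GiB
  btrfs qgroup limit 200G /test_lzo/user1      # hypothetical subvolume path
  # or address the qgroup directly by id, e.g. 0/258 from the table above
  btrfs qgroup limit 200G 0/258 /test_lzo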

Thread overview: 13+ messages
2016-08-11 17:32 btrfs quota issues Rakesh Sankeshi
2016-08-11 19:13 ` Duncan
2016-08-12 15:47   ` Rakesh Sankeshi
2016-08-13 23:05     ` Duncan
2016-08-15  2:11 ` Qu Wenruo
2016-08-15 19:11   ` Rakesh Sankeshi [this message]
2016-08-16  1:01     ` Qu Wenruo
2016-08-16 16:05       ` Rakesh Sankeshi
2016-08-16 23:33         ` Rakesh Sankeshi
2016-08-17  0:09           ` Tim Walberg
2016-08-17  0:56         ` Qu Wenruo
2016-08-23 18:38           ` Rakesh Sankeshi
2016-08-26  1:52             ` Qu Wenruo
