* [RFC PATCH v1 0/4] cgroup quota
From: Jeff Liu @ 2012-03-09 11:20 UTC
To: cgroups-u79uwXL29TY76Z2rM5mHXA
Cc: lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
xfs-VZNHf3L845pBDgjK7y7TUQ, tj-DgEjT+Ai2ygdnm+yROfE0A, Li Zefan,
Daniel Lezcano, Ben Myers, Christoph Hellwig, Chris Mason,
Christopher Jones, Dave Chinner, jack-AlSwsSmVLrQ,
tytso-DPNOqEs/LNQ
Hello,
A disk quota feature has been requested on the LXC list from time to time.
Project quota has been implemented in XFS for a long time, and support is also in progress for EXT4.
So the main idea is to assign one or more project IDs (or tree IDs?) to a container, while leaving quota setup to cgroup
config files, so that all tasks running in the container have project quota constraints applied.
I'd like to post an initial patch set here. The implementation is very simple and even crashes
in some cases, sorry! But I would like to submit it to get more feedback and make sure I am going down
the right road. :)
Let me introduce it now.
1. First, set up project quota on XFS (mounted with pquota enabled).
For example, "project100" is configured on the "/xfs/quota_test" directory.
$ cat /etc/projects
100:/xfs/quota_test
$ cat /etc/projid
project100:100
$ sudo xfs_quota -x -c 'report -p'
Project quota on /xfs (/dev/sda7)
Blocks
Project ID Used Soft Hard Warn/Grace
---------- --------------------------------------------------
project100 0 0 0 00 [--------]
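For reference, limits for this project would normally be set directly
with xfs_quota, e.g. (the values here are made up, matching the demo below):
$ sudo xfs_quota -x -c 'limit -p isoft=2 ihard=4 project100' /xfs
The point of this patch set is to drive the same kind of limits from the
cgroup side instead.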
2. Mount cgroup on /cgroup.
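For example (a sketch; the exact options depend on the kernel config,
assuming the quota subsystem is built in):
$ sudo mount -t cgroup cgroup /cgroup
after which the mount table shows: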
cgroup on /cgroup type cgroup (rw)
After that, a number of quota.XXXX files appear under /cgroup:
$ ls -l /cgroup/quota.*
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.activate
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.add_project
-r--r--r-- 1 root root 0 Mar 9 18:27 /cgroup/quota.all
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.block_limit_in_bytes
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.deactivate
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.inode_limit
-r--r--r-- 1 root root 0 Mar 9 18:27 /cgroup/quota.projects
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.remove_project
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.reset_block_limit_in_bytes
--w------- 1 root root 0 Mar 9 18:27 /cgroup/quota.reset_inode_limit
3. To assign a project ID to the container, echo it to quota.add_project in "name:id" form:
echo "project100:100" > /cgroup/quota.add_project
To get a short list of the projects currently assigned to the container, check quota.projects:
# cat /cgroup/quota.projects
Project ID (project100:100) status: off
The full quota info can be checked via quota.all, which shows something like:
# cat /cgroup/quota.all
Project ID (project100:100) status: off
block_soft_limit 9223372036854775807
block_hard_limit 9223372036854775807
block_max_usage 0
block_usage 0
inode_soft_limit 9223372036854775807
inode_hard_limit 9223372036854775807
inode_max_usage 0
inode_usage 0
Note the "status: off": by default, a newly assigned project is in the OFF state. The user can
turn it on by echoing the project ID to quota.activate:
# echo 100 > /cgroup/quota.activate
# cat /cgroup/quota.all
Project ID (project100:100) status: on *the status has now changed*
block_soft_limit 9223372036854775807
block_hard_limit 9223372036854775807
block_max_usage 0
block_usage 0
inode_soft_limit 9223372036854775807
inode_hard_limit 9223372036854775807
inode_max_usage 0
inode_usage 0
At this point it still does nothing, though, since no quota limits have been set up.
4. To configure quota via cgroup, the user interacts with quota.inode_limit and quota.block_limit_in_bytes.
For now, I have only added a simple inode quota check to XFS. It looks something like this (the write format is "<project ID> <soft>:<hard>"):
# echo "100 2:4" >> /cgroup/quota.inode_limit
# cat /cgroup/quota.all
Project ID (project100:100) status: on
block_soft_limit 9223372036854775807
block_hard_limit 9223372036854775807
block_max_usage 0
block_usage 0
inode_soft_limit 2
inode_hard_limit 4
inode_max_usage 0
inode_usage 0
# for ((i=0; i < 6; i++)); do touch /xfs/quota_test/test.$i; done
# cat /cgroup/quota.all
Project ID (project100:100) status: on
block_soft_limit 9223372036854775807
block_hard_limit 9223372036854775807
block_max_usage 0
block_usage 0
inode_soft_limit 2
inode_hard_limit 4
inode_max_usage 4
inode_usage 4
Note that only four of the six touch calls succeed: once the hard limit of 4 inodes is reached, the remaining creates are rejected.
Sorry again: the steps above still crash sometimes; this works just for demo purposes. :)
Any criticism and suggestions are welcome!
Thanks,
-Jeff
fs/Makefile | 2 +
fs/quota_cgroup.c | 725 ++++++++++++++++++++++++++++++++++++++++++
fs/xfs/xfs_iomap.c | 30 ++
fs/xfs/xfs_vnodeops.c | 10 +
include/linux/quota_cgroup.h | 60 ++++
include/linux/res_counter.h | 3 +
init/Kconfig | 6 +
kernel/res_counter.c | 58 ++++
8 files changed, 894 insertions(+), 0 deletions(-)
create mode 100644 fs/quota_cgroup.c
create mode 100644 include/linux/quota_cgroup.h
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Jeff Liu @ 2012-03-11 10:50 UTC
To: Glauber Costa
Cc: jack-AlSwsSmVLrQ, Daniel Lezcano, Christopher Jones, Li Zefan,
xfs-VZNHf3L845pBDgjK7y7TUQ, Christoph Hellwig,
tj-DgEjT+Ai2ygdnm+yROfE0A, Ben Myers, tytso-DPNOqEs/LNQ,
lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
cgroups-u79uwXL29TY76Z2rM5mHXA, Chris Mason
Hi Glauber,
On 03/11/2012 07:18 PM, Glauber Costa wrote:
> On 03/09/2012 03:20 PM, Jeff Liu wrote:
>> [... full original RFC message quoted; snipped ...]
>
> I have mixed feelings about this. The feature is obviously welcome, but
> I am not sure if
> the approach you took is the best one... I'll go through the patches
> now, and hopefully will
> have a better opinion by the end =)
Thanks for your response!
Daniel pointed out to me that Anqin took a stab at implementing a
container quota feature in UID/GID form back in 2009; his patch set can
be found at:
https://lkml.org/lkml/2009/2/23/35
However, he had to give up because of a new job.
Judging from Paul's feedback at that time, a possible approach is to
combine cgroups with project quota (or tree quota?):
https://lkml.org/lkml/2009/2/23/35
So I wrote this draft patch to present my basic ideas (the current demo
does not even consider xattr storage space on XFS).
I am definitely a newbie in this area, so please forgive me if I make
some stupid mistakes. :)
Thanks,
-Jeff
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Glauber Costa @ 2012-03-11 11:18 UTC
To: jeff.liu-QHcLZuEGTsvQT0dZR+AlfA
Cc: cgroups-u79uwXL29TY76Z2rM5mHXA,
lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
xfs-VZNHf3L845pBDgjK7y7TUQ, tj-DgEjT+Ai2ygdnm+yROfE0A, Li Zefan,
Daniel Lezcano, Ben Myers, Christoph Hellwig, Chris Mason,
Christopher Jones, Dave Chinner, jack-AlSwsSmVLrQ,
tytso-DPNOqEs/LNQ
On 03/09/2012 03:20 PM, Jeff Liu wrote:
> [... full original RFC message quoted; snipped ...]
I have mixed feelings about this. The feature is obviously welcome, but
I am not sure if the approach you took is the best one... I'll go
through the patches now, and hopefully will have a better opinion by
the end =)
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Jeff Liu @ 2012-03-11 11:47 UTC
To: Glauber Costa
Cc: jack-AlSwsSmVLrQ, Lezcano, Christopher Jones, Li Zefan,
xfs-VZNHf3L845pBDgjK7y7TUQ, Christoph Hellwig,
tj-DgEjT+Ai2ygdnm+yROfE0A, Ben Myers,
Daniel-VZNHf3L845pBDgjK7y7TUQ,
lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
cgroups-u79uwXL29TY76Z2rM5mHXA, Chris Mason, tytso-DPNOqEs/LNQ
On 03/11/2012 07:57 PM, Glauber Costa wrote:
> On 03/09/2012 03:20 PM, Jeff Liu wrote:
>> [... full original RFC message quoted; snipped ...]
> When I started reading through this, I had one question in mind:
>
> "Why cgroups?"
>
> After I read it, I have one question in mind:
>
> "Why cgroups?"
>
> It really seems like the wrong interface for this, especially since
> you don't seem to be doing anything really clever to divide the
> charges, etc. You are basically using cgroups as an interface to
> configure quotas, and I see no reason whatsoever to do that. Quotas
> already have a very well-defined interface.
>
> In summary, I don't see how creating a new cgroup does us any good
> here, especially if we're doing it just for the configuration
> interface.
>
> There are two pieces to the container-quota puzzle: an outer quota,
> which the box admin applies to the container as a whole, and a
> container quota, which the container admin can apply to its users.
>
> The outer quota does not need any relation to cgroups at all!
> As a matter of fact, we already have this feature, you just don't
> realize it: if you assume you have project quota, we just need to
> configure the project to start at the subtree where the container
> starts.
>
> So, for instance, if you have:
> /root/lxc-root/
>
> then you create a project quota on top of it, and you're done.
Thanks for your comments!
I know project quota can be configured outside a container, just as you
mentioned above. Actually, what I am trying to do at present is this:
Given an outside storage path that has already been configured with a
particular project ID and bind mounted into a container, the user wants
to set up quota limits on the bind-mounted path from inside the
container. Assuming the project ID has been attached to the container
through the cgroup interface, he doesn't need to configure quota
outside (although the sysadmin can still set it up outside directly);
he just specifies the limits via the cgroup files.
A further thought: again, assuming a particular project ID has been
assigned to a container, if the sysadmin then tries to configure quota
limits for it from outside, the operation should be denied; the
sysadmin should not even be able to remove the project ID while it is
attached to a container.
Also, if project quota limits are already enforced on a directory from
outside, the user can still set up smaller limits through the cgroup.
The two sets of limits coexist, but the smaller quota only takes effect
for processes running in the container, as sketched below.
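For example, using the interfaces above (the limit values here are made
up for illustration):
On the host, the sysadmin sets a generous project limit the usual way:
$ sudo xfs_quota -x -c 'limit -p isoft=800 ihard=1000 project100' /xfs
Inside the container's cgroup, a tighter limit that binds only its tasks:
# echo "100 400:500" > /cgroup/quota.inode_limit
Container tasks would then hit the 500-inode hard limit, while tasks
outside the container could still use up to 1000 inodes.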
> What we really need here is a way for a privileged user inside a
> container to create normal quotas (user, group) that he can configure,
> and to have those quotas always be smaller than, say, a project quota
> defined for the container from the outside. But cgroups is hardly the
> interface, or the place, for that: usually, the processes inside the
> container won't have access to their cgroups. The cgroups will contain
> the limits the processes are entitled to, and we don't want the
> processes to change those at will. So tying it to cgroups does not
> solve the fundamental problem, which is how we let the container admin
> set up quotas...
Sigh, exactly. I need some time to digest your points. Thanks again.
-Jeff
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Glauber Costa @ 2012-03-11 11:57 UTC
To: jeff.liu-QHcLZuEGTsvQT0dZR+AlfA
Cc: cgroups-u79uwXL29TY76Z2rM5mHXA,
lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
xfs-VZNHf3L845pBDgjK7y7TUQ, tj-DgEjT+Ai2ygdnm+yROfE0A, Li Zefan,
Daniel Lezcano, Ben Myers, Christoph Hellwig, Chris Mason,
Christopher Jones, Dave Chinner, jack-AlSwsSmVLrQ,
tytso-DPNOqEs/LNQ
On 03/09/2012 03:20 PM, Jeff Liu wrote:
> [... full original RFC message quoted; snipped ...]
When I started reading through this, I had one question in mind:
"Why cgroups?"
After I read it, I have one question in mind:
"Why cgroups?"
It really seems like the wrong interface for this, especially since you
don't seem to be doing anything really clever to divide the charges,
etc. You are basically using cgroups as an interface to configure
quotas, and I see no reason whatsoever to do that. Quotas already have
a very well-defined interface.
In summary, I don't see how creating a new cgroup does us any good
here, especially if we're doing it just for the configuration interface.
There are two pieces to the container-quota puzzle: an outer quota,
which the box admin applies to the container as a whole, and a
container quota, which the container admin can apply to its users.
The outer quota does not need any relation to cgroups at all!
As a matter of fact, we already have this feature, you just don't
realize it: if you assume you have project quota, we just need to
configure the project to start at the subtree where the container
starts.
So, for instance, if you have:
/root/lxc-root/
then you create a project quota on top of it, and you're done.
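For illustration, assuming an XFS filesystem mounted at /root and a
made-up project ID of 42, that setup would look something like:
# echo '42:/root/lxc-root' >> /etc/projects
# echo 'lxc1:42' >> /etc/projid
# xfs_quota -x -c 'project -s lxc1' /root
# xfs_quota -x -c 'limit -p bhard=10g lxc1' /root
Everything under the container root is then billed to, and capped by,
project lxc1, with no cgroup involved.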
What we really need here is a way for a privileged user inside a
container to create normal quotas (user, group) that he can configure,
and to have those quotas always be smaller than, say, a project quota
defined for the container from the outside. But cgroups is hardly the
interface, or the place, for that: usually, the processes inside the
container won't have access to their cgroups. The cgroups will contain
the limits the processes are entitled to, and we don't want the
processes to change those at will. So tying it to cgroups does not
solve the fundamental problem, which is how we let the container admin
set up quotas...
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Jeff Liu @ 2012-03-12 7:11 UTC
To: Glauber Costa
Cc: jack-AlSwsSmVLrQ, Lezcano, Christopher Jones, Li Zefan,
xfs-VZNHf3L845pBDgjK7y7TUQ, Christoph Hellwig,
tj-DgEjT+Ai2ygdnm+yROfE0A, Ben Myers,
Daniel-VZNHf3L845pBDgjK7y7TUQ,
lxc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
cgroups-u79uwXL29TY76Z2rM5mHXA, Chris Mason, tytso-DPNOqEs/LNQ
On 03/12/2012 05:36 PM, Glauber Costa wrote:
> On 03/11/2012 03:47 PM, Jeff Liu wrote:
>> [... earlier discussion snipped ...]
>
> My take on this is that you should stick to the quota interface. It
> seems to work well enough for people out there. This means that how
> quotas are configured, viewed, etc. should work with standard tools.
>
> Now, we need some of those quotas to be tied to a particular mnt
> namespace (I believe namespaces are the right isolation abstraction
> here, not cgroups), in the sense that they can only be active inside
> that mnt namespace. And then, when you bill an inode, block, or
> anything else that quota limits, you bill it to any quota structure
> that is possibly interested in it. Right now the code bills it to one
> quota structure, the one that matches your UID, GID, etc. (XFS may be
> a bit more skilled here already, I don't know.)
I started looking into how to isolate quotas in combination with namespaces today. Thanks for your timely suggestions; that sounds much clearer to me.
-Jeff
* Re: [RFC PATCH v1 0/4] cgroup quota
From: Glauber Costa @ 2012-03-12 9:36 UTC
To: jeff.liu
Cc: jack, Lezcano, Christopher Jones, Li Zefan, xfs,
Christoph Hellwig, tj, Ben Myers, Daniel, lxc-devel,
linux-fsdevel@vger.kernel.org, cgroups, Chris Mason, tytso
On 03/11/2012 03:47 PM, Jeff Liu wrote:
> Also, if project quota limits are already enforced on a directory
> from outside, the user can still set up smaller limits through the
> cgroup. The two sets of limits coexist, but the smaller quota only
> takes effect for processes running in the container.
>
>> What we really need here is a way for a privileged user inside a
>> container to create normal quotas (user, group) that he can
>> configure, and to have those quotas always be smaller than, say, a
>> project quota defined for the container from the outside. But
>> cgroups is hardly the interface, or the place, for that: usually,
>> the processes inside the container won't have access to their
>> cgroups. The cgroups will contain the limits the processes are
>> entitled to, and we don't want the processes to change those at
>> will. So tying it to cgroups does not solve the fundamental problem,
>> which is how we let the container admin set up quotas...
>
> Sigh, exactly. I need some time to digest your points. Thanks again.
My take on this is that you should stick to the quota interface. It
seems to work well enough for people out there. This means that how
quotas are configured, viewed, etc. should work with standard tools.
Now, we need some of those quotas to be tied to a particular mnt
namespace (I believe namespaces are the right isolation abstraction
here, not cgroups), in the sense that they can only be active inside
that mnt namespace. And then, when you bill an inode, block, or
anything else that quota limits, you bill it to any quota structure
that is possibly interested in it. Right now the code bills it to one
quota structure, the one that matches your UID, GID, etc. (XFS may be
a bit more skilled here already, I don't know.)
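To make the billing idea concrete, here is a tiny standalone C model
(every name in it is invented purely for illustration; this is not the
actual kernel code or quota API): each charge is applied to all
interested quota structures, and any one of them can reject it.

#include <stdio.h>

/* One quota structure; in the kernel this would be a dquot-like object. */
struct quota {
    const char *name;
    long long used;
    long long hard;
};

/*
 * Charge nblocks against every quota structure interested in this
 * allocation -- e.g. a host-wide project quota plus a per-mnt-namespace
 * quota -- rolling back if any one of them would be exceeded.
 */
static int bill(struct quota **q, int n, long long nblocks)
{
    int i;

    for (i = 0; i < n; i++) {
        if (q[i]->used + nblocks > q[i]->hard) {
            while (i-- > 0)         /* undo the partial charges */
                q[i]->used -= nblocks;
            return -1;              /* -EDQUOT, morally */
        }
        q[i]->used += nblocks;
    }
    return 0;
}

int main(void)
{
    struct quota outer = { "project100 (host)", 0, 100 };
    struct quota inner = { "container mnt ns", 0, 40 };
    struct quota *interested[] = { &outer, &inner };

    printf("%d\n", bill(interested, 2, 30));  /* 0: fits both quotas  */
    printf("%d\n", bill(interested, 2, 30));  /* -1: inner quota trips */
    return 0;
}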