From: Chris Mason <clm@fb.com>
To: "jack@suse.cz" <jack@suse.cz>
Cc: "gnehzuil.liu@gmail.com" <gnehzuil.liu@gmail.com>,
"lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Subject: Re: [Lsf-pc] [LSF/MM ATTEND] Filesystems -- Btrfs, cgroups, Storage topics from Facebook
Date: Tue, 31 Dec 2013 13:19:15 +0000
Message-ID: <1388495991.16965.36.camel@ret>
In-Reply-To: <20131231124535.GE11920@quack.suse.cz>
On Tue, 2013-12-31 at 13:45 +0100, Jan Kara wrote:
> On Tue 31-12-13 16:49:27, Zheng Liu wrote:
> > Hi Chris,
> >
> > On Mon, Dec 30, 2013 at 09:36:20PM +0000, Chris Mason wrote:
> > > Hi everyone,
> > >
> > > I'd like to attend the LSF/MM conference this year. My current
> > > discussion points include:
> > >
> > > All things Btrfs!
> > >
> > > Adding cgroups for more filesystem resources, especially to limit the
> > > speed dirty pages are created.
> >
> > Interesting. If I remember correctly, IO-less dirty throttling has been
> > merged into the upstream kernel, which can limit the speed at which dirty
> > pages are created. Does it have any defects?
> It works as it should. But as Jeff points out, the throttling isn't
> cgroup aware. So it can happen that one memcg is full of dirty pages and
> reclaim has trouble reclaiming pages from it. I guess what Chris is asking
> for is that we watch the number of dirty pages in each memcg and throttle
> processes creating dirty pages in any memcg that is close to its limit on
> dirty pages.
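As a rough userspace model of that idea (the names memcg_model and
memcg_dirty_throttle are made up for illustration; this is a sketch of
the accounting only, not kernel code): each group charges the pages it
dirties against its own limit, and a writer is paused once its group
gets close to that limit while writeback catches up.

#include <stdio.h>
#include <unistd.h>

struct memcg_model {
	unsigned long nr_dirty;     /* dirty pages charged to this group */
	unsigned long dirty_limit;  /* per-group cap on dirty pages */
};

/* Charge newly dirtied pages and pause the writer near the limit. */
static void memcg_dirty_throttle(struct memcg_model *memcg,
				 unsigned long newly_dirtied)
{
	/* throttle once we are within ~6% of the group's limit */
	unsigned long thresh = memcg->dirty_limit - memcg->dirty_limit / 16;

	memcg->nr_dirty += newly_dirtied;
	while (memcg->nr_dirty >= thresh) {
		/* stand-in for a balance_dirty_pages()-style pause ... */
		usleep(1000);
		/* ... during which writeback cleans some of the pages */
		memcg->nr_dirty -= memcg->nr_dirty / 4;
	}
}

int main(void)
{
	struct memcg_model memcg = { .nr_dirty = 0, .dirty_limit = 1024 };

	for (int i = 0; i < 4096; i++)
		memcg_dirty_throttle(&memcg, 1);

	printf("final dirty count: %lu of limit %lu\n",
	       memcg.nr_dirty, memcg.dirty_limit);
	return 0;
}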
Right, the ioless dirty throttling is fantastic, but it's based on the
BDI and you only get one of those per device.
The current cgroup IO controller kicks in after we've already decided to
start sending pages down. From a buffered write point of view, that's
too late: if we delay the buffered IOs at that stage, the
higher-priority tasks will just wait in balance_dirty_pages instead of
waiting on the drive.
So I'd like to throttle the rate at which dirty pages are created,
preferably based on the rates the BDI already calculates for how quickly
the device is doing IO. That way we can limit dirty page creation to a
percentage of the disk's capacity under the current workload
(regardless of random vs buffered).
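To make the arithmetic concrete, here is a minimal sketch of that kind
of rate limit (bdi_model and dirty_rate_pause_ms are hypothetical names;
this models only the pause calculation, not the kernel's
implementation): the group's dirtying rate is capped at a percentage of
the write bandwidth the BDI currently measures for the device, by
pausing the task after each chunk of pages it dirties.

#include <stdio.h>

struct bdi_model {
	unsigned long write_bw;  /* measured device bandwidth, pages/sec */
	unsigned int dirty_pct;  /* share of that bandwidth this group may dirty */
};

/*
 * After dirtying 'pages', how long should the task pause so that its
 * average dirtying rate stays at dirty_pct% of the device bandwidth?
 */
static unsigned long dirty_rate_pause_ms(const struct bdi_model *bdi,
					 unsigned long pages)
{
	unsigned long allowed_rate = bdi->write_bw * bdi->dirty_pct / 100;

	if (allowed_rate == 0)
		allowed_rate = 1;
	/* time = work / rate, in milliseconds */
	return pages * 1000UL / allowed_rate;
}

int main(void)
{
	/* e.g. a disk doing 25600 pages/sec (100 MB/s), group capped at 20% */
	struct bdi_model bdi = { .write_bw = 25600, .dirty_pct = 20 };

	printf("pause after dirtying 512 pages: %lu ms\n",
	       dirty_rate_pause_ms(&bdi, 512));
	return 0;
}

The writeback code already keeps a per-BDI estimate of device write
bandwidth for the IO-less throttling discussed above, so the proposal is
essentially to feed a per-cgroup percentage into that same estimate.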
-chris