linux-mm.kvack.org archive mirror
From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Yafang Shao <laoar.shao@gmail.com>,
	Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Shakeel Butt <shakeelb@google.com>,
	Yafang Shao <shaoyafang@didiglobal.com>,
	linux-xfs@vger.kernel.org
Subject: Re: [PATCH] mm, memcg: support memory.{min, low} protection in cgroup v1
Date: Mon, 8 Jul 2019 08:15:00 -0400	[thread overview]
Message-ID: <20190708121459.GB51396@bfoster> (raw)
In-Reply-To: <20190705235222.GE7689@dread.disaster.area>

On Sat, Jul 06, 2019 at 09:52:22AM +1000, Dave Chinner wrote:
> On Fri, Jul 05, 2019 at 11:10:45AM -0400, Brian Foster wrote:
> > cc linux-xfs
> > 
> > On Fri, Jul 05, 2019 at 10:33:04PM +0800, Yafang Shao wrote:
> > > On Fri, Jul 5, 2019 at 7:10 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > >
> > > > On Fri 05-07-19 17:41:44, Yafang Shao wrote:
> > > > > On Fri, Jul 5, 2019 at 5:09 PM Michal Hocko <mhocko@kernel.org> wrote:
> > > > [...]
> > > > > > Why cannot you move over to v2 and have to stick with v1?
> > > > > Because the interfaces between cgroup v1 and cgroup v2 differ too
> > > > > much, which is unacceptable to our customers.
> > > >
> > > > Could you be more specific about obstacles with respect to interfaces
> > > > please?
> > > >
> > > 
> > > Lots of applications would have to change.
> > > Kubernetes, Docker and other applications that use cgroup v1 would be
> > > affected, which is a problem because they are not maintained by us.
> > > 
> > > > > It may take a long time to move to cgroup v2 in production
> > > > > environments, per my understanding.
> > > > > BTW, the filesystem on our servers is XFS, but cgroup v2
> > > > > writeback throttling is not yet supported on XFS, which is beyond
> > > > > my comprehension.
> > > >
> > > > Are you sure? I would be surprised if v1 throttling worked while v2
> > > > didn't. As far as I remember, it is v2 writeback throttling that
> > > > actually works. The only throttling we have for v1 is the
> > > > reclaim-based one, which is a huge hammer.
> > > > --
> > > 
> > > We did it in cgroup v1 in our kernel.
> > > But upstream still doesn't support it in cgroup v2.
> > > So my real question is: why can't upstream support such an important
> > > file system?
> > > Do you know which companies besides Facebook are using cgroup v2 in
> > > their production environments?
> > > 
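For reference, the cgroup v2 knobs under discussion are plain files under the
unified hierarchy. A minimal sketch of setting them, shown as a dry run (the
group name "workload" and the sizes are hypothetical, and the real writes
need root plus a cgroup2 mount):

```shell
# Minimal sketch of the cgroup v2 memory protection interface.
# Dry run only: the group name "workload" and the sizes are hypothetical;
# on a real system these writes require root and a cgroup2 mount.
CG=/sys/fs/cgroup/workload
MIN=$((512 * 1024 * 1024))     # 512 MiB of hard protection (memory.min)
LOW=$((1024 * 1024 * 1024))    # 1 GiB of best-effort protection (memory.low)
echo "mkdir -p $CG"
echo "echo $MIN > $CG/memory.min"
echo "echo $LOW > $CG/memory.low"
```

In cgroup v1, by contrast, the memcg knobs live under
/sys/fs/cgroup/memory/<group>/ with different names and semantics, which is
the kind of interface difference being objected to above.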
> > 
> > I think the original issue with regard to XFS cgroupv2 writeback
> > throttling support was that at the time the XFS patch was proposed,
> > there wasn't any test coverage to prove that the code worked (and the
> > original author never followed up). That has since been resolved and
> > Christoph has recently posted a new patch [1], which appears to have
> > been accepted by the maintainer.
> 
> I don't think the validation issue has been resolved.
> 
> i.e. we still don't have regression tests that ensure it keeps
> working in future, or that it works correctly in any specific
> distro setting/configuration. The lack of repeatable QoS validation
> infrastructure was the reason I never merged support for this in the
> first place.
> 
> So while the (simple) patch to support it has been merged now,
> there's no guarantee that it will work as expected or continue to do
> so over the long run as nobody upstream or in distro land has a way
> of validating that it is working correctly.
> 
> From that perspective, it is still my opinion that one-off "works
> for me" testing isn't sufficient validation for a QoS feature that
> people will use to implement SLAs with $$$ penalties attached to
> QoS failures....
> 

We do have an fstest to cover the accounting bits (which is what the fs
is responsible for). Christoph also sent a patch[1] to enable that on
XFS. I'm sure there's plenty of room for additional/broader test
coverage, of course...
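For anyone wanting to reproduce, fstests is driven by a small config file
naming the test and scratch devices. A minimal sketch (the device paths and
mount points below are placeholders, not taken from this thread):

```shell
# Minimal fstests local.config sketch; device paths are placeholders.
export TEST_DEV=/dev/sdb1        # device holding a long-lived test fs
export TEST_DIR=/mnt/test        # its mount point
export SCRATCH_DEV=/dev/sdb2     # device reformatted by each test
export SCRATCH_MNT=/mnt/scratch  # its mount point
# From an fstests checkout, with the config in place:
#   ./check -g auto         # run the auto test group
#   ./check generic/001     # or a single test by id
```

The scratch device is destroyed and recreated by each test, so it must not
hold anything you care about.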

Brian

[1] https://marc.info/?l=fstests&m=156138385006173&w=2

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 



Thread overview: 14+ messages
2019-07-05  7:05 [PATCH] mm, memcg: support memory.{min, low} protection in cgroup v1 Yafang Shao
2019-07-05  9:09 ` Michal Hocko
2019-07-05  9:41   ` Yafang Shao
2019-07-05 11:10     ` Michal Hocko
2019-07-05 14:33       ` Yafang Shao
2019-07-05 15:10         ` Brian Foster
2019-07-05 23:39           ` Yafang Shao
2019-07-05 23:52           ` Dave Chinner
2019-07-08 12:15             ` Brian Foster [this message]
2019-07-05 19:54         ` Michal Hocko
2019-07-05 23:54           ` Yafang Shao
2019-07-05 15:52     ` Chris Down
2019-07-05 23:47       ` Yafang Shao
2019-07-06 11:26         ` Chris Down
