From: Mike Galbraith <efault@gmx.de>
To: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	lizefan@huawei.com, hannes@cmpxchg.org, mingo@redhat.com,
	pjt@google.com, luto@amacapital.net, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com,
	lvenanci@redhat.com,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode
Date: Tue, 14 Mar 2017 15:45:42 +0100
Message-ID: <1489502742.4111.29.camel@gmx.de>
In-Reply-To: <20170313192621.GD15709@htj.duckdns.org>

On Mon, 2017-03-13 at 15:26 -0400, Tejun Heo wrote:
> Hello, Mike.
> 
> Sorry about the long delay.
> 
> On Mon, Feb 13, 2017 at 06:45:07AM +0100, Mike Galbraith wrote:
> > > > So, as long as the depth stays reasonable (single digit or lower),
> > > > what we try to do is keep tree traversal operations aggregated or
> > > > located on slow paths.  There still are places where this overhead
> > > > shows up (e.g. the block controllers aren't too optimized), but it
> > > > isn't particularly difficult to make a handful of layers not matter
> > > > at all.
> > > 
> > > A handful of cpu bean counting layers stings considerably.
> 
> Hmm... yeah, I was trying to think about ways to avoid full scheduling
> overhead at each layer (the scheduler does a lot per layer of
> scheduling) but don't think it's possible to circumvent that without
> introducing a whole lot of scheduling artifacts.

Yup.
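
FWIW, the per-layer cost is structural: CFS nests one scheduling
entity per cgroup layer, so every enqueue/dequeue/pick walks the chain
from the task up to the root.  A compilable toy model of just that
walk (names and fields are illustrative, not the kernel's):

#include <stddef.h>
#include <stdio.h>

/* Toy model, not kernel code: one "entity" per task and per
 * cgroup layer, linked toward the root as in CFS. */
struct entity {
	struct entity *parent;		/* NULL at the root */
	unsigned long nr_running;	/* runnable children */
};

/* Each wakeup does bookkeeping at every layer between task and
 * root; nr_running++ stands in for the rbtree insert, vruntime
 * and load-weight updates the real scheduler does per level. */
static unsigned int enqueue_hier(struct entity *se)
{
	unsigned int levels = 0;

	for (; se; se = se->parent) {
		se->nr_running++;
		levels++;
	}
	return levels;
}

int main(void)
{
	struct entity root = { NULL, 0 };
	struct entity grp  = { &root, 0 };
	struct entity task = { &grp, 0 };

	printf("levels touched per enqueue: %u\n", enqueue_hier(&task));
	return 0;
}

O(depth) work on every scheduling event is exactly why the damage
scales with how often a load schedules.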

> In a lot of workloads, the added overhead from several layers of CPU
> controllers doesn't seem to get in the way too much (most threads do
> something other than scheduling after all).

Sure, if you don't schedule a lot it doesn't hurt much, but there are
plenty of loads that routinely do schedule a LOT, and there it matters
a LOT... which is why network benchmarks tend to be severely allergic
to scheduler lard.
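
To put a number on it, a minimal pipe ping-pong in the spirit of
perf bench sched pipe will do; this is a from-scratch sketch, not
anyone's benchmark from this thread.  Run it pinned to one CPU
(taskset -c 0) in the root cgroup, then again a few cpu-controller
layers deep, and the per-layer overhead lands directly in the
round-trip time:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define ITERS 100000

int main(void)
{
	int ping[2], pong[2];
	struct timespec t0, t1;
	char c = 0;
	double ns;
	int i;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {		/* child: echo each byte back */
		for (i = 0; i < ITERS; i++) {
			if (read(ping[0], &c, 1) != 1)
				_exit(1);
			if (write(pong[1], &c, 1) != 1)
				_exit(1);
		}
		_exit(0);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERS; i++) {	/* parent: two context
					 * switches per iteration */
		if (write(ping[1], &c, 1) != 1 ||
		    read(pong[0], &c, 1) != 1)
			return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	wait(NULL);

	ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.0f ns per round trip\n", ns / ITERS);
	return 0;
}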

>   The only major issue that
> we're seeing in the fleet is the cgroup iteration in the idle
> rebalancing code pushing up scheduling latency too much, but that's a
> different issue.

Hm, I would suspect PELT to be the culprit there.  It helps smooth out
load balancing, but will stack "skinny-looking" tasks.
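
For reference, PELT decays a task's load contribution geometrically:
the per-period (~1ms) factor y is chosen so that y^32 = 0.5, i.e. a
32ms half-life.  A task that just slept therefore looks nearly
weightless, the balancer happily stacks several such tasks on one
runqueue, and when they wake together latency pays the bill.  A
back-of-the-envelope sketch of the decay (floating point here; the
kernel uses precomputed fixed-point tables):

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* y^32 = 0.5 gives the 32ms half-life */
	const double y = pow(0.5, 1.0 / 32.0);	/* ~0.97857 */
	double load = 1024.0;	/* contribution when 100% busy */
	int ms;

	for (ms = 0; ms < 100; ms++)	/* task sleeps for 100ms */
		load *= y;

	printf("apparent load after 100ms asleep: %.0f/1024\n", load);
	return 0;
}

That prints roughly 117/1024: an about-to-wake task advertising ~11%
of its real appetite is exactly the sort of skinny-looking task that
ends up stacked.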

	-Mike
