From: "Chris Friesen" <cfriesen@nortel.com>
To: Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	linux-kernel@vger.kernel.org
Subject: cgroup task groups appear sensitive to absolute magnitude of shares
Date: Thu, 09 Oct 2008 17:00:22 -0600
Message-ID: <48EE8D06.9060503@nortel.com>


When using cgroups-based task groups, the amount of CPU time given to
each group should be determined by the relative shares of the
different groups.
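
For reference, setting up groups like the ones below amounts to
creating a directory per group and writing a value into its
cpu.shares file.  Here is a minimal sketch (not what fairtest
actually does): it assumes the cpu controller is mounted at /cgroup,
and it omits attaching the hog tasks to each group's "tasks" file.

/* Sketch only: create a task group and assign its cpu.shares.
 * Assumes "mount -t cgroup -o cpu none /cgroup" was done already;
 * the /cgroup mount point and the group names are placeholders. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

static void make_group(const char *name, unsigned long shares)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/cgroup/%s", name);
	if (mkdir(path, 0755) && errno != EEXIST) {
		perror("mkdir");
		exit(1);
	}

	snprintf(path, sizeof(path), "/cgroup/%s/cpu.shares", name);
	f = fopen(path, "w");
	if (!f) {
		perror("fopen");
		exit(1);
	}
	fprintf(f, "%lu\n", shares);	/* relative weight of this group */
	fclose(f);
}

int main(void)
{
	make_group("1", 40);
	make_group("2", 20);
	make_group("3", 10);
	make_group("4", 2);
	return 0;
}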

However, my testing shows that the absolute magnitude of the shares
matters as well, with larger share values giving more accurate
results (up to a point).  Consider the two test cases below, where
in the second case all of the shares are increased by a factor of 10
(note that the run time differs as well).  Notice that the accuracy
for group 4 is significantly improved.


[root@localhost schedtest]#  ./fairtest  test5.dat
using settling delay of 1 sec, runtime of 2 sec
group hierarchy (name, weight, hogs, expected usage):
1,    40,   2, 55.555553
2,    20,   2, 27.777777
3,    10,   2, 13.888888
4,     2,   2, 2.777778
group       actual(%)    expected(%)   avg latency(ms)  max_latency(ms)
       1        54.90         55.56               5/5              6/57
       2        27.43         27.78               8/7              63/8
       3        13.71         13.89             12/13            18/379
       4         3.96          2.78               7/7             57/57



[root@localhost schedtest]# ./fairtest  test3.dat
using settling delay of 1 sec, runtime of 10 sec
group hierarchy (name, weight, hogs, expected usage):
1,   400,   2, 55.555557
2,   200,   2, 27.777779
3,   100,   2, 13.888889
4,    20,   2, 2.777778
group      actual(%)    expected(%)   avg latency(ms)  max_latency(ms)
       1        55.20         55.56               5/5             22/31
       2        28.02         27.78               7/8             23/21
       3        14.00         13.89             12/11             20/33
       4         2.78          2.78               9/9             24/20


I suspect that this is due to the following calculation in 
__update_group_shares_cpu():

shares = (sd_shares * rq_weight) / (sd_rq_weight + 1);

Because these are integer operations the division truncates; the 
absolute error is less than 1, but relative to the result it grows 
as sd_shares shrinks, so small share values see much larger rounding 
error.
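
To make the truncation concrete, here is a toy program (the weights
are made up, not taken from a real run) comparing the integer result
with the exact one as sd_shares grows:

#include <stdio.h>

int main(void)
{
	/* made-up weights, chosen so the division doesn't come out even */
	unsigned long rq_weight = 3, sd_rq_weight = 6;
	unsigned long sd_shares;

	for (sd_shares = 2; sd_shares <= 2000; sd_shares *= 10) {
		unsigned long shares =
			(sd_shares * rq_weight) / (sd_rq_weight + 1);
		double exact =
			(double)(sd_shares * rq_weight) / (sd_rq_weight + 1);

		printf("sd_shares=%4lu  shares=%4lu  exact=%8.3f\n",
		       sd_shares, shares, exact);
	}
	return 0;
}

which prints:

sd_shares=   2  shares=   0  exact=   0.857
sd_shares=  20  shares=   8  exact=   8.571
sd_shares= 200  shares=  85  exact=  85.714
sd_shares=2000  shares= 857  exact= 857.143

At sd_shares=2 the entire result is lost to truncation, while at
sd_shares=20 the error is already down to about 7%.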

Going to 4000/2000/1000/200 doesn't seem to give noticeable 
improvements, and going to 40000/20000/10000/2000 causes the test to 
behave unpredictably, either taking abnormally long to complete or else 
not completing at all.

Is it worth doing anything about this (automatic normalization of group 
shares?), or should we just document this behaviour somewhere and live 
with it?
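
For what it's worth, one possible shape for the normalization idea 
is sketched below -- purely illustrative, not kernel code: scale 
every group's shares by a common factor so that the smallest value 
is large enough that a truncation error of 1 stays small, leaving 
the ratios unchanged.  The threshold and the function name are 
things I made up, and given the misbehaviour I saw in the 40000 
range an upper bound would presumably be needed as well.

#include <stdio.h>

#define MIN_EFFECTIVE_SHARES 100UL	/* arbitrary threshold */

static void normalize_shares(unsigned long *shares, int n)
{
	unsigned long min = shares[0];
	unsigned long scale;
	int i;

	for (i = 1; i < n; i++)
		if (shares[i] < min)
			min = shares[i];

	if (min == 0 || min >= MIN_EFFECTIVE_SHARES)
		return;

	/* smallest multiplier such that min * scale >= the threshold */
	scale = (MIN_EFFECTIVE_SHARES + min - 1) / min;
	for (i = 0; i < n; i++)
		shares[i] *= scale;
}

int main(void)
{
	unsigned long shares[] = { 40, 20, 10, 2 };
	int i;

	normalize_shares(shares, 4);
	for (i = 0; i < 4; i++)	/* prints 2000 1000 500 100 */
		printf("group %d: %lu\n", i + 1, shares[i]);
	return 0;
}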

Chris

Thread overview: 5+ messages
2008-10-09 23:00 Chris Friesen [this message]
2008-10-10  5:44 ` cgroup task groups appear sensitive to absolute magnitude of shares Peter Zijlstra
2008-10-10  6:03 ` Peter Zijlstra
2008-10-10  7:53   ` Peter Zijlstra
2008-10-10 15:05     ` Chris Friesen
