public inbox for kvm@vger.kernel.org
From: Srivatsa Vaddagiri <vatsa@in.ibm.com>
To: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Pavel Emelyanov <xemul@openvz.org>,
	Dhaval Giani <dhaval@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, Gautham R Shenoy <ego@in.ibm.com>,
	Linux Containers <containers@lists.linux-foundation.org>,
	linux-kernel@vger.kernel.org, Avi Kivity <avi@redhat.com>,
	bharata@linux.vnet.ibm.com, Ingo Molnar <mingo@elte.hu>
Subject: Re: [RFC] CPU hard limits
Date: Mon, 8 Jun 2009 10:07:06 +0530	[thread overview]
Message-ID: <20090608043705.GC16211@in.ibm.com> (raw)
In-Reply-To: <661de9470906070835l383cd388h67e40a31be07aef6@mail.gmail.com>

On Sun, Jun 07, 2009 at 09:05:23PM +0530, Balbir Singh wrote:
> > On further thinking, this is not as simple as that. In the above example
> > of 5 tasks on 4 CPUs, we could cap each task at a hard limit of 80%
> > (4 CPUs/5 tasks), which is still not sufficient to ensure that each
> > task gets the perfect fairness of 80%! Not just that, the hard limit
> > for a group (on each CPU) will have to be adjusted based on its task
> > distribution. For example: a group that has a hard limit of 25% on a
> > 4-cpu system and that has a single task is entitled to claim a whole
> > CPU. So the per-cpu hard limit for the group should be 100% on whatever
> > CPU the task is running. This adjustment of the per-cpu hard limit
> > should happen whenever the task distribution of the group across CPUs
> > changes - which in theory would require you to monitor every task
> > exit/migration event and readjust limits, making it very complex and
> > high-overhead.
> >
> 
> We already do that for shares right? I mean instead of 25% hard limit,
> if the group had 25% of the shares the same thing would apply - no?

Yes and no. We do rebalance shares based on task distribution, but not
upon every task fork/exit/wakeup/migration event. It's done periodically,
frequently enough to give "decent" fairness!
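
The per-cpu adjustment described above can be sketched numerically. This
is a minimal, hypothetical Python sketch, not kernel code - the function
name and the proportional-spread policy are assumptions chosen purely to
illustrate why limits must be recomputed when tasks migrate:

```python
# Illustrative sketch: redistributing a group's global hard limit
# across CPUs in proportion to where its runnable tasks sit.

def per_cpu_limits(group_limit, tasks_per_cpu):
    """group_limit: fraction of total machine capacity (e.g. 0.25).
    tasks_per_cpu: this group's runnable task count on each CPU.
    Returns the per-CPU cap (fraction of one CPU) for each CPU."""
    ncpus = len(tasks_per_cpu)
    entitlement = group_limit * ncpus          # total CPU-time per second
    total_tasks = sum(tasks_per_cpu)
    if total_tasks == 0:
        return [0.0] * ncpus
    # Spread the entitlement in proportion to task placement,
    # never granting more than one full CPU on any single CPU.
    return [min(1.0, entitlement * n / total_tasks) for n in tasks_per_cpu]

# The example from the thread: 25% limit on a 4-CPU box, one task on CPU0
# is entitled to a whole CPU.
print(per_cpu_limits(0.25, [1, 0, 0, 0]))   # [1.0, 0.0, 0.0, 0.0]
# After the task migrates to CPU1, the limits must be recomputed:
print(per_cpu_limits(0.25, [0, 1, 0, 0]))   # [0.0, 1.0, 0.0, 0.0]
```

The recomputation on every migration is exactly the overhead concern
raised above; rebalancing shares sidesteps it by recomputing only
periodically.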

> > Balbir,
> >        I don't think guarantees can be met easily through hard limits
> > in the case of the CPU resource. At least it's not as straightforward
> > as in the case of memory!
> 
> OK, based on the discussion - leaving implementation issues aside - is
> it possible to implement guarantees using shares? My answer would be
> 
> 1. Yes - but then the hard limits will get in the way and can cause
> idle time; some of that can be handled in the implementation. There
> might be other SMP concerns about the accuracy of the fairness -
> thank you for that data.
> 2. We'll update the RFC (second version) with the findings and send it
> out, so that the expectations are clearer.
> 3. From what I've read and seen, there seems to be no strong objection
> to hard limits, but some reservations (based on 1) about using them
> for guarantees, and our RFC will reflect that.
> 
> Do you agree?

Well yes, a guarantee is not a good argument for providing hard limits.
A pay-per-use kind of usage would be a better argument, IMHO.

- vatsa

