From: David Xu <davidxu06@gmail.com>
To: George Dunlap <george.dunlap@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: performance of credit2 on hybrid workload
Date: Thu, 9 Jun 2011 15:50:13 -0400	[thread overview]
Message-ID: <BANLkTimeH=c1LZLVODLwqwReFhB_XSuh=Q@mail.gmail.com> (raw)
In-Reply-To: <1307626494.27103.3243.camel@elijah>



> Remember though -- you can't just give a VM more CPU time.  Giving a VM
> more CPU at one time means taking CPU time away at another time.  I
> think the key is to think the opposite way -- taking away time from a
> VM by giving it a shorter timeslice, so that you can give time back when
> it needs it.

It seems that if the scheduler always schedules a given VM first, that VM
will use up its allocated credits sooner than the other VMs and start
stealing credits from them, which may cause unfairness. Your suggestion to
think the opposite way is reasonable. An efficient method of reducing
scheduling latency for a specific VM is to preempt the currently running VM
when an interrupt arrives. However, overly frequent context switches and
interrupt processing may negatively impact performance as well. By the way,
do you know how to give a VM running a mixed workload a shorter time-slice
(e.g. 5ms) while keeping the other VMs at the default value (30ms)?
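
To make that trade-off concrete, here is a rough toy simulation I put
together. It is plain C rather than Xen code, and the scheduling model, the
30ms/10ms/1ms numbers and the context-switch cost are all invented just to
illustrate the latency cost of waiting for the current slice to end versus
preempting on every interrupt:

/* wakeup_sim.c -- a toy, single-CPU simulation (not Xen code) of the
 * trade-off above: preempt the running VM whenever the latency-sensitive
 * VM gets an interrupt vs. wait for the current timeslice to end.
 * All parameters are made-up illustrations. */
#include <stdio.h>

#define TSLICE_US     30000    /* 30 ms timeslice for the CPU-bound VMs    */
#define IRQ_PERIOD_US 10000    /* VM0 receives an interrupt every 10 ms    */
#define SERVICE_US     1000    /* VM0 needs 1 ms of CPU per interrupt      */
#define CSWITCH_US       30    /* assumed cost of one context switch       */
#define SIM_US      10000000   /* simulate 10 seconds                      */

static void run(int preempt_on_irq)
{
    long now = 0, next_irq = 0;
    long hog_us = 0;                  /* CPU time the CPU-bound VMs got    */
    long lat_sum = 0, lat_max = 0, nirq = 0;

    while (now < SIM_US) {
        if (now >= next_irq) {
            /* VM0 has a pending interrupt and gets the CPU: either right
             * away (preemptive case) or because a slice just ended. */
            long lat = now - next_irq;
            lat_sum += lat;
            nirq++;
            if (lat > lat_max)
                lat_max = lat;
            now += CSWITCH_US + SERVICE_US + CSWITCH_US;
            next_irq += IRQ_PERIOD_US;
        } else {
            /* A CPU-bound VM runs until its slice ends, or (preemptive
             * case) until VM0's next interrupt arrives. */
            long stop = now + TSLICE_US;
            if (preempt_on_irq && next_irq < stop)
                stop = next_irq;
            hog_us += stop - now;
            now = stop;
        }
    }
    printf("%-26s avg wake latency %6.2f ms, worst %6.2f ms, "
           "CPU-bound VMs got %4.1f%% of the CPU\n",
           preempt_on_irq ? "preempt on interrupt:" : "wait for slice end:",
           lat_sum / 1000.0 / nirq, lat_max / 1000.0,
           100.0 * hog_us / SIM_US);
}

int main(void)
{
    run(0);
    run(1);
    return 0;
}

In this toy model the preemptive variant drives VM0's wake-up latency from
tens of milliseconds down to essentially zero; what the model does not
capture is cache pollution and the cost of very high interrupt rates, which
is the part I am worried about.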

> I've just been talking to one of our engineers here who used to work for
> a company which sold network cards.  Our discussion convinced me that we
> shouldn't really need any more information about a VM than the
> interrupts which have been delivered to it: even devices which go into
> polling mode do so for a relatively brief period of time, then re-enable
> interrupts again.

Do you think a pending interrupt generally indicates a latency-sensitive
workload? From my point of view, it indicates an I/O-intensive workload,
which may not be latency-sensitive but may only require high throughput.
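
To make the distinction I have in mind more concrete, here is a small
standalone sketch (again plain C, not Xen code; the counters, thresholds and
sample numbers are all invented for illustration). The idea is that frequent
interrupts with little CPU consumed per interrupt suggest latency
sensitivity, while frequent interrupts with a lot of work per interrupt
suggest a throughput-oriented workload:

/* classify.c -- a rough sketch (not Xen code) of a heuristic that
 * distinguishes "wants low latency" from "wants throughput" using only
 * interrupt and CPU-usage counters the hypervisor already has.
 * All thresholds are invented for illustration. */
#include <stdio.h>

struct vcpu_stats {
    const char *name;
    unsigned long irqs;       /* interrupts delivered this period          */
    unsigned long cpu_us;     /* CPU consumed this period (microseconds)   */
    unsigned long period_us;  /* length of the accounting period           */
};

static const char *classify(const struct vcpu_stats *s)
{
    double irq_rate = (double)s->irqs * 1e6 / s->period_us;  /* irqs/sec   */
    double cpu_per_irq = s->irqs ? (double)s->cpu_us / s->irqs : 0.0;

    if (irq_rate > 100 && cpu_per_irq < 200)
        return "latency-sensitive (frequent irqs, little work per irq)";
    if (irq_rate > 100)
        return "throughput-oriented (frequent irqs, lots of work per irq)";
    return "CPU-bound or idle";
}

int main(void)
{
    /* made-up sample numbers over a 100 ms accounting period */
    struct vcpu_stats samples[] = {
        { "VoIP VM",      100,  5000, 100000 },
        { "bulk-copy VM", 200, 60000, 100000 },
        { "compile VM",     5, 95000, 100000 },
    };
    unsigned int i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("%-12s -> %s\n", samples[i].name, classify(&samples[i]));
    return 0;
}

Whether simple thresholds like these can be chosen robustly, and whether
per-interrupt CPU usage is cheap enough to track, are of course exactly the
open questions.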

> Yes, I look forward to seeing the results of your work.  Are you going
> to be doing this on credit2?

I am not familiar with credit2 yet, but I will delve into it in the future.
Of course, if I make any new progress, I will share my results with you.

2011/6/9 George Dunlap <george.dunlap@citrix.com>

> On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
> > Hi George,
> >
> >
> > Thanks for your reply. I have ideas similar to yours: adding another
> > parameter that indicates the required latency, and letting the
> > scheduler determine the latency characteristics of a VM automatically.
> > Firstly, adding another parameter and letting users set its value in
> > advance sounds similar to SEDF. But sometimes that configuration
> > process is hard and inflexible when the workloads in a VM are complex.
> > So in my opinion, a task-aware scheduler is better. However, manual
> > configuration can help us verify the effectiveness of the new
> > parameter.
>
> Great!  Sounds like we're on the same page.
>
> > On the other hand, as you described, it is also not easy or accurate
> > to make the scheduler determine the latency characteristics of a VM
> > automatically from the information we can get from the hypervisor, for
> > instance delayed interrupts. Therefore, the key point for me is to
> > find and implement a scheduling helper that indicates which VM should
> > be scheduled soon.
>
> Remember though -- you can't just give a VM more CPU time.  Giving a VM
> more CPU at one time means taking CPU time away at another time.  I
> think the key is to think the opposite way -- taking away time from a
> VM by giving it a shorter timeslice, so that you can give time back when
> it needs it.
>
> > For example, for TCP traffic, we can implement a tool similar to a
> > packet sniffer to capture packets and analyze their header information
> > to infer the type of workload. Then the analysis result can help the
> > scheduler make a decision. In fact, not all I/O-intensive workloads
> > require low latency; some of them only require high throughput. Of
> > course, scheduling latency significantly impacts throughput (you
> > handled this problem with the boost mechanism to some extent).
>
> The boost mechanism (and indeed the whole credit1 scheduler) was
> actually written by someone else. :-)  And although it's good in theory,
> the way it's implemented actually causes some problems.
>
> I've just been talking to one of our engineers here who used to work for
> a company which sold network cards.  Our discussion convinced me that we
> shouldn't really need any more information about a VM than the
> interrupts which have been delivered to it: even devices which go into
> polling mode do so for a relatively brief period of time, then re-enable
> interrupts again.
>
> > What I want is to reduce the latency only for VMs which require low
> > latency while postponing other VMs, and to use other techniques such as
> > packet offloading to compensate for their loss and improve their
> > throughput.
> >
> >
> > This is just my coarse idea and there are many problems as well. I
> > hope I can discuss with you often and share our results. Thanks very
> > much.
>
> Yes, I look forward to seeing the results of your work.  Are you going
> to be doing this on credit2?
>
> Peace,
>  -George
>
>
>
>



