From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Xu
Subject: Re: Re: performance of credit2 on hybrid workload
Date: Mon, 13 Jun 2011 12:52:18 -0400
References: <1306340309.21026.8524.camel@elijah> <1306401493.21026.8526.camel@elijah> <1307626494.27103.3243.camel@elijah>
To: George Dunlap, "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

Hi,

Could you tell me how to check for a pending interrupt during scheduling without adding extra risk of a crash? Thanks.

Regards,
Cong

2011/6/9 David Xu

> > Remember though -- you can't just give a VM more CPU time.  Giving a VM
> > more CPU at one time means taking CPU time away at another time.  I
> > think the key is to think the opposite way -- taking away time from a
> > VM by giving it a shorter timeslice, so that you can give time back when
> > it needs it.
>
> It seems that if the scheduler always schedules a VM first, it will use up
> its allocated credits sooner than the other VMs and start stealing credit
> from them, which may cause unfairness. So your suggestion to think the
> opposite way is reasonable. An efficient way to reduce scheduling latency
> for a specific VM is to preempt the currently running VM when an interrupt
> arrives. However, overly frequent context switches and interrupt processing
> may hurt performance as well. BTW, do you know how to give a VM running a
> mixed workload a shorter timeslice (e.g. 5 ms) while keeping the other VMs
> at the default value (30 ms)?
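To make the timeslice trade-off concrete, here is a toy model (plain Python, not Xen code; credit1's timeslice is a single global value, so true per-VM slices would need scheduler changes). Two always-runnable "batch" VMs run back-to-back in fixed slices, and a mostly idle VM wakes periodically: shortening the batch slices cuts the waking VM's average wait, and preempting on wakeup removes it entirely.

```python
# Toy model (not Xen code): two always-runnable batch VMs run back-to-back
# in fixed slices; a mostly-idle latency VM wakes periodically and must
# wait for the current slice to end unless wakeup preemption is allowed.

def mean_wakeup_latency_ms(batch_slice_ms, wake_period_ms,
                           n_wakes=1000, preempt=False):
    """Mean time a waking VM waits before it is scheduled."""
    if preempt:
        return 0.0  # the woken VM preempts the running slice immediately
    total = 0.0
    for i in range(1, n_wakes + 1):
        t = i * wake_period_ms
        # Wait until the batch slice in progress at time t finishes.
        total += (batch_slice_ms - t % batch_slice_ms) % batch_slice_ms
    return total / n_wakes

# 30 ms slices vs 5 ms slices (wakeups every 7 ms, deliberately coprime):
print(mean_wakeup_latency_ms(30, 7))  # roughly 14.5 ms
print(mean_wakeup_latency_ms(5, 7))   # 2.0 ms
```

The point of the model is George's: the short slices do not give the latency VM more CPU overall, they only change *when* it can get the CPU.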
> > I've just been talking to one of our engineers here who used to work for
> > a company which sold network cards.  Our discussion convinced me that we
> > shouldn't really need any more information about a VM than the
> > interrupts which have been delivered to it: even devices which go into
> > polling mode do so for a relatively brief period of time, then re-enable
> > interrupts again.
>
> Do you think a pending interrupt generally indicates a latency-sensitive
> workload? From my point of view, it indicates an I/O-intensive workload,
> which may not be latency-sensitive but may only require high throughput.
>
> > Yes, I look forward to seeing the results of your work.  Are you going
> > to be doing this on credit2?
>
> I am not familiar with credit2 yet, but I will delve into it in the future.
> Of course, if I make any new progress, I will share my results with you.
>
> 2011/6/9 George Dunlap
>
>> On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
>> > Hi George,
>> >
>> > Thanks for your reply. I have similar ideas: adding another
>> > parameter that indicates the required latency, and letting the
>> > scheduler determine the latency characteristics of a VM automatically.
>> > Firstly, adding another parameter and letting users set its value in
>> > advance sounds similar to SEDF, but that configuration process can be
>> > hard and inflexible when the workloads in a VM are complex, so in my
>> > opinion a task-aware scheduler is better. Still, manual configuration
>> > can help us verify the effectiveness of the new parameter.
>>
>> Great!  Sounds like we're on the same page.
>>
>> > On the other hand, as you described, it is also not easy or accurate
>> > to make the scheduler determine the latency characteristics of a VM
>> > automatically from the information we can get from the hypervisor, for
>> > instance the delayed interrupt.
>> > Therefore, the key point for me is to find and implement a scheduling
>> > helper that indicates which VM should be scheduled soon.
>>
>> Remember though -- you can't just give a VM more CPU time.  Giving a VM
>> more CPU at one time means taking CPU time away at another time.  I
>> think the key is to think the opposite way -- taking away time from a
>> VM by giving it a shorter timeslice, so that you can give time back when
>> it needs it.
>>
>> > For example, for TCP networking, we can implement a tool similar to a
>> > packet sniffer that captures packets and analyzes their header
>> > information to infer the type of workload. The analysis result can then
>> > help the scheduler make a decision. In fact, not all I/O-intensive
>> > workloads require low latency; some of them only require high
>> > throughput. Of course, scheduling latency significantly impacts
>> > throughput (you handled this problem with the boost mechanism to some
>> > extent).
>>
>> The boost mechanism (and indeed the whole credit1 scheduler) was
>> actually written by someone else. :-)  And although it's good in theory,
>> the way it's implemented actually causes some problems.
>>
>> I've just been talking to one of our engineers here who used to work for
>> a company which sold network cards.  Our discussion convinced me that we
>> shouldn't really need any more information about a VM than the
>> interrupts which have been delivered to it: even devices which go into
>> polling mode do so for a relatively brief period of time, then re-enable
>> interrupts again.
>>
>> > What I want to do is reduce the latency only for a VM that requires
>> > low latency, while postponing other VMs and using other techniques
>> > such as packet offloading to compensate for their loss and improve
>> > their throughput.
>> >
>> > This is just my coarse idea, and there are many open problems as
>> > well. I hope I can discuss with you often and share our results.
>> > Thanks very much.
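On the question of whether a pending interrupt implies latency sensitivity, one possible heuristic (purely hypothetical, with invented thresholds, not anything that exists in Xen) is to combine the interrupt rate with how long the vCPU runs after each wakeup: short bursts suggest a latency-sensitive guest, long bursts a throughput-bound one.

```python
def classify_vcpu(interrupts_per_sec, mean_burst_ms):
    """Guess a vCPU's workload class from its interrupt rate and the mean
    CPU burst it runs after each wakeup. The thresholds here are invented
    for illustration; a real heuristic would need tuning and hysteresis."""
    if interrupts_per_sec < 10:
        return "cpu-bound-or-idle"   # few wakeups: wakeup latency rarely matters
    if mean_burst_ms < 1.0:
        return "latency-sensitive"   # wakes often, runs briefly, blocks again
    return "throughput-bound"        # keeps running after each interrupt

# A ping-style guest vs. a bulk-transfer guest:
print(classify_vcpu(2000, 0.05))  # latency-sensitive
print(classify_vcpu(5000, 8.0))   # throughput-bound
```

Both inputs are already visible to the hypervisor (interrupts delivered and CPU time consumed), which is in the spirit of the "interrupts are enough" argument above.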
>>
>> Yes, I look forward to seeing the results of your work.  Are you going
>> to be doing this on credit2?
>>
>> Peace,
>>  -George
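For reference, the boost behaviour discussed in this thread can be sketched like this (a deliberate simplification of credit1's BOOST/UNDER/OVER priority levels, written from the description above rather than from the Xen source):

```python
# Simplified sketch of credit1-style wakeup boosting (illustrative only).
BOOST, UNDER, OVER = 0, 1, 2  # lower value = higher scheduling priority

def wake_priority(was_blocked, credit):
    """Priority of a vCPU at wakeup: one that was blocked and still has
    credit gets boosted so it can preempt; one that has burned through its
    credit cannot, which is what keeps boosting from breaking fairness."""
    if was_blocked and credit > 0:
        return BOOST
    return UNDER if credit > 0 else OVER

def should_preempt(waking_prio, running_prio):
    """Preempt only when the waking vCPU strictly outranks the running one."""
    return waking_prio < running_prio

print(should_preempt(wake_priority(True, 50), UNDER))  # True: boosted vCPU preempts
print(should_preempt(wake_priority(True, 0), UNDER))   # False: no credit, no boost
```

The credit check is the fairness guard George alludes to: a vCPU that has already consumed its share cannot keep preempting others just by sleeping and waking.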
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel