* Re: Is guest OS oriented scheduling welcome?
[not found] <1060494.867561240402736300.JavaMail.coremail@app167.163.com>
@ 2009-04-22 12:22 ` Avi Kivity
2009-04-22 12:34 ` Daniel P. Berrange
2009-04-24 0:19 ` Anthony Liguori
1 sibling, 1 reply; 5+ messages in thread
From: Avi Kivity @ 2009-04-22 12:22 UTC (permalink / raw)
To: 刘志建; +Cc: kvm, anthony
刘志建 wrote:
> Hello folks,
> In the past it was said that KVM would like to schedule guest OS threads differently. However, until now each qemu thread has been treated as a conventional user thread, so it is hard to control how much CPU time one guest OS can consume. I don't think a cloud computing provider would like that.
> And, what's more, "Xen and Co.: Communication-aware CPU Scheduling for Consolidated Xen-based Hosting Platforms"(http://www.cse.psu.edu/~sgovinda/papers/vee07.pdf) has shown that the standard thread scheduling in Linux might not fit the virtualization environment well.
> I have ported Xen's credit scheduler to KVM.
Do you mean, to KVM, or to the Linux scheduler?
> With my work, the users can control how much CPU a guest OS can occupy.
> The principles are:
> 1. all vcpu threads are declared as FIFO real-time threads;
> 2. before a vcpu thread enters guest mode, it first checks its blackboard to see whether the scheduler wants it to continue. If the vcpu thread is unlucky, it gives up the CPU and waits for permission.
> 3. there is a per-cpu scheduler, triggered periodically by its timer. The scheduler writes its decisions to the interested parties' blackboards. Those that get permission, the scheduler will try to wake up in case they are sleeping.
> 4. in order to port Xen's scheduler, the host Linux is treated as a virtual machine too. However, it never checks its blackboard.
> 5. a user-level per-VM credit control mechanism is implemented, to allow users to dynamically adjust a virtual machine's CPU quota.
> Although a vcpu might continue to run for a while after the scheduler has decided that it should yield the CPU, our experiments show that this has only a trivial impact.
> Interaction-oriented scheduling is going to be ported/developed next.
>
> If KVM welcomes this function, I will post it out.
>
I think this is very interesting, but should be integrated with the
Linux scheduler, so it could apply to normal processes, and so it can
account for host mode, not just guest mode.
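For illustration, the blackboard gating and periodic credit debiting described in the proposal could be sketched roughly like this (a simplified model with hypothetical names, not the posted patch):

```python
# Simplified sketch of principles 2 and 3 above: each vcpu thread checks a
# per-vcpu "blackboard" before entering guest mode, and a periodic per-cpu
# scheduler tick debits credits and grants or revokes permission to run.
# All names here are hypothetical illustrations, not the actual KVM code.
import threading

class VcpuBlackboard:
    def __init__(self):
        self.may_run = threading.Event()
        self.may_run.set()            # start with permission to run

    def wait_for_permission(self):
        # Principle 2: an unlucky vcpu gives up the CPU and sleeps here
        # until the scheduler grants it another slice.
        self.may_run.wait()

def scheduler_tick(blackboards, credits, quantum):
    # Principle 3: the per-cpu timer handler debits each vcpu's credits
    # and writes the result to its blackboard, waking the winners.
    for vcpu, bb in blackboards.items():
        credits[vcpu] -= quantum
        if credits[vcpu] > 0:
            bb.may_run.set()          # permitted: wake it if it sleeps
        else:
            bb.may_run.clear()        # out of credit: park before guest entry
```

A real implementation would of course debit only the vcpus that actually ran during the slice and refill credits per the VM's quota; this only illustrates the gate-before-guest-entry structure.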
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 5+ messages in thread

* Re: Is guest OS oriented scheduling welcome?
2009-04-22 12:22 ` Is guest OS oriented scheduling welcome? Avi Kivity
@ 2009-04-22 12:34 ` Daniel P. Berrange
0 siblings, 0 replies; 5+ messages in thread
From: Daniel P. Berrange @ 2009-04-22 12:34 UTC (permalink / raw)
To: Avi Kivity; +Cc: 刘志建, kvm, anthony
On Wed, Apr 22, 2009 at 03:22:54PM +0300, Avi Kivity wrote:
> 刘志建 wrote:
> >Hello folks,
> >In the past it was said that KVM would like to schedule guest OS
> >threads differently. However, until now each qemu thread has been
> >treated as a conventional user thread, so it is hard to control how
> >much CPU time one guest OS can consume. I don't think a cloud
> >computing provider would like that.
Although the standard scheduler tunables are per thread, it is possible
to put each QEMU process into a separate CGroup, and use the cpu_shares
tunable to control scheduling priority of the guest as a whole, instead
of individual threads. Not sure if this is sufficient for what you
want, but it is one possible option for guest scheduling.
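As a rough sketch of that approach (group names are hypothetical, and the paths assume the cgroup v1 "cpu" controller mounted at the usual location):

```python
# Rough sketch: give each guest its own cgroup and weight it with cpu.shares,
# so the guest is scheduled as a whole rather than per thread. Group names
# and the mount point are assumptions; on many hosts the v1 "cpu" controller
# lives at /sys/fs/cgroup/cpu.
import os

def set_guest_shares(cgroup_root, guest, pid, shares):
    """Create a cgroup for `guest`, set its cpu.shares, and move `pid` in."""
    path = os.path.join(cgroup_root, guest)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpu.shares"), "w") as f:
        f.write(str(shares))          # relative weight; the default is 1024
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(pid))             # attach the QEMU process
```

With guest-A at 2048 shares and guest-B left at the default 1024, guest-A gets roughly twice the CPU under contention, regardless of how many threads each guest runs.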
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
* Re: Is guest OS oriented scheduling welcome?
[not found] <1060494.867561240402736300.JavaMail.coremail@app167.163.com>
2009-04-22 12:22 ` Is guest OS oriented scheduling welcome? Avi Kivity
@ 2009-04-24 0:19 ` Anthony Liguori
2009-04-24 2:55 ` Andrew de Andrade
1 sibling, 1 reply; 5+ messages in thread
From: Anthony Liguori @ 2009-04-24 0:19 UTC (permalink / raw)
To: 刘志建; +Cc: kvm, avi
刘志建 wrote:
> Hello folks,
> In the past it was said that KVM would like to schedule guest OS threads differently. However, until now each qemu thread has been treated as a conventional user thread, so it is hard to control how much CPU time one guest OS can consume. I don't think a cloud computing provider would like that.
> And, what's more, "Xen and Co.: Communication-aware CPU Scheduling for Consolidated Xen-based Hosting Platforms"(http://www.cse.psu.edu/~sgovinda/papers/vee07.pdf) has shown that the standard thread scheduling in Linux might not fit the virtualization environment well.
By standard thread scheduling, I presume you mean scheduling that
doesn't take into account IO? That is, this paper is arguing that in a
virtualization environment, you want to provide temporary
disproportionate scheduling to favor IO bound workloads over CPU bound
workloads.
I don't think you need the credit scheduler to implement this idea in
KVM. CFS provides a number of tunables to userspace along with pretty
fine-grained control à la cgroups. I think that provides a roughly
equivalent interface to userspace that could be used to make scheduling
adjustments based on IO consumption.
What are the other motivating factors for wanting to use credit over CFS?
Regards,
Anthony Liguori
* Re: Is guest OS oriented scheduling welcome?
2009-04-24 0:19 ` Anthony Liguori
@ 2009-04-24 2:55 ` Andrew de Andrade
0 siblings, 0 replies; 5+ messages in thread
From: Andrew de Andrade @ 2009-04-24 2:55 UTC (permalink / raw)
To: kvm
On Apr 23, 2009, at 9:19 PM, Anthony Liguori wrote:
> 刘志建 wrote:
>> Hello folks,
>> In the past it was said that KVM would like to schedule guest OS
>> threads differently. However, until now each qemu thread has been
>> treated as a conventional user thread, so it is hard to control how
>> much CPU time one guest OS can consume. I don't think a cloud
>> computing provider would like that.
>> And, what's more, "Xen and Co.: Communication-aware CPU Scheduling
>> for Consolidated Xen-based Hosting Platforms"(http://www.cse.psu.edu/~sgovinda/papers/vee07.pdf
>> ) has shown that the standard thread scheduling in Linux might not
>> fit the virtualization environment well.
>
> By standard thread scheduling, I presume you mean scheduling that
> doesn't take into account IO? That is, this paper is arguing that
> in a virtualization environment, you want to provide temporary
> disproportionate scheduling to favor IO bound workloads over CPU
> bound workloads.
>
> I don't think you need the credit scheduler to implement this idea
> in KVM. CFS provides a number of tunables to userspace along with
> pretty fine-grained control à la cgroups. I think that provides a
> roughly equivalent interface to userspace that could be used to make
> scheduling adjustments based on IO consumption.
>
> What are the other motivating factors for wanting to use credit over
> CFS?
>
> Regards,
>
> Anthony Liguori
I'm a lurker on this list, but as someone has raised the point of what
cloud computing providers will and won't like, I figure I could add my
two cents.
If you guys think in terms of "the client of your client is your real
client" - i.e. the clients of the cloud providers are your clients -
then you need to consider how they would buy the product. Here in
Brazil there is still a lot of ignorance surrounding cloud computing
and many clients still think in terms of physical machines; that is,
they use their previous knowledge of provisioning physical machines to
guide their decisions when contracting new ones. As such, they think in
terms of contracting a certain number of MHz/GHz.
I'll give you a perfect and real example of this: We use VMware and we
are testing Xen/KVM because we ultimately want to go fully open source
with our virtualization back-end. VMware allows us to sell the product
to the end client in terms of CPU/RAM/disk space; Xen, on the other
hand, works in terms of CPU prioritization, which AFAIK is the correct
way to think about how to use the physical resources. However, this
approach treats all clients as one unit and runs the highest-priority
tasks first, instead of treating all clients as equals and taking care
of all of them equally (with a guaranteed minimum amount of CPU as per
the client's contract). Adopting Xen would force us to completely
redesign our product plans around a model that would be more alien to
the end user (i.e. harder to compare and evaluate).
At the end of the day I think virtualization has two basic user groups:
(1) companies, laboratories, etc., where certain tasks are assigned
priorities and all computing resources are used to their maximum all
the time (timesharing), and (2) service providers, where all clients
have a contract guaranteeing minimum resources and where some degree of
overselling is taking place (variable resource usage over the course of
the day/week).
Please excuse me if I didn't exactly answer the above question. Coming
from a business/somewhat technical background, I'm working as hard as
I can to immerse myself in the nitty-gritty of virtualization. If
anyone has any other questions for what a cloud computing provider
would or would not like, feel free to contact me directly or post
questions to the list.
Andrew de Andrade
Locaweb
* Re:Re: Is guest OS oriented scheduling welcome?
@ 2009-04-24 6:10 alex
2009-04-24 13:16 ` Anthony Liguori
0 siblings, 1 reply; 5+ messages in thread
From: alex @ 2009-04-24 6:10 UTC (permalink / raw)
To: avi, anthony, kvm
I just tried cgroups. I admit that as far as CPU shares alone are
concerned, cgroups are enough.
However, AFAIK Linux schedules each thread independently, ignoring the
upper-level logic.
For example, suppose VM1 is an SMP guest used to receive network
packets, which causes it to run the TCP/IP stack frequently. The use
of spinlocks assumes that the lock holder will release the lock
quickly. However, when vcpu threads are scheduled independently, one
vcpu may be spinning while the lock holder is off the CPU! This hurts
the VM's SMP scalability.
And there are user-level vcpu dependencies too.
Regards,
alex.
* Re: Is guest OS oriented scheduling welcome?
2009-04-24 6:10 alex
@ 2009-04-24 13:16 ` Anthony Liguori
0 siblings, 0 replies; 5+ messages in thread
From: Anthony Liguori @ 2009-04-24 13:16 UTC (permalink / raw)
To: alex; +Cc: avi, kvm
alex wrote:
> I just tried cgroups. I admit that as far as CPU shares alone are
> concerned, cgroups are enough.
>
> However, AFAIK Linux schedules each thread independently, ignoring the
> upper-level logic.
>
> For example, suppose VM1 is an SMP guest used to receive network
> packets, which causes it to run the TCP/IP stack frequently. The use
> of spinlocks assumes that the lock holder will release the lock
> quickly. However, when vcpu threads are scheduled independently, one
> vcpu may be spinning while the lock holder is off the CPU! This hurts
> the VM's SMP scalability.
>
Sure, but credit doesn't do gang scheduling either.
Instead of gang scheduling, I think the best solution to this problem is
spin lock paravirtualization.
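The core idea behind spin lock paravirtualization can be sketched as follows (a simplified model, not the actual KVM/guest implementation): spin for a bounded number of iterations, and if the lock still isn't free, assume the holder was preempted and yield to the hypervisor instead of burning the rest of the time slice.

```python
# Simplified model of a paravirtualized spinlock slow path: after a bounded
# amount of spinning, ask the hypervisor to run someone else (in a real guest
# this would be a hypercall or similar) rather than spinning the slice away.
SPIN_THRESHOLD = 1000   # illustrative value, not a tuned number

def pv_spin_lock(lock, yield_to_hypervisor):
    spins = 0
    while not lock.acquire(blocking=False):
        spins += 1
        if spins >= SPIN_THRESHOLD:
            yield_to_hypervisor()   # hypothetical hook; real code hypercalls
            spins = 0
```

This way a spinning vcpu donates its slice back, so the descheduled lock holder gets a chance to run and release the lock, without requiring the gang scheduling that the credit scheduler also doesn't provide.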
Regards,
Anthony Liguori