From: Yuehai Xu <yuehaixu@gmail.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, yhxu@wayne.edu
Subject: Re: Question about the ability of credit scheduler to handle I/O and CPU intensive VMs
Date: Thu, 30 Sep 2010 08:28:47 -0400
Message-ID: <AANLkTikBWZdpOviSEQSNi_pf66A+zYW8FyQVjiCX8ojm@mail.gmail.com>
In-Reply-To: <AANLkTin9E1m_jFcj4Ak7nB9OxcQynrznpQ_nNPi_U7hN@mail.gmail.com>
On Tue, Sep 14, 2010 at 5:22 AM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> Credit2 development is mostly stalled; I've just got too many other
> things to do at the moment. If you know someone good at hypervisor
> development that wants to move to Cambridge to help me out, I think we
> have some open positions... :-)
>
> The problem you describe, which I call the "mixed workload" problem,
> is something that I'd like to try to solve with credit2. The actual
> problem with credit1, at the moment, is that when a vcpu is scheduled
> to run, it can always run for 30ms if it wants to. So if it's a CPU
> burner, in order to give it 50%, you have to keep it from running for
> 30ms before letting it run for 30ms again.
>
> I agree, letting a VM with an interrupt run for a short period of time
> makes sense. The challenge is to make sure that it can't simply send
> itself interrupts every 50us and get to run 100% of the time. :-)
I am afraid I don't really understand what the challenge is. In other
words, is this method sound in principle but hard to implement in
practice? As far as I know, the OS always schedules I/O-bound processes
as soon as they become runnable, so as long as we give even a very
short period of time to the woken-up guest VM, the I/O process inside
it should be scheduled at once. In that case, the problem should be
solved. Of course, I haven't run any experiments; saying is always much
easier than doing.
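
To make what I have in mind concrete, below is a rough user-space
simulation. It is only a sketch of the accounting idea; none of the
names (pick_slice, BOOST_SLICE, and so on) come from the real credit
scheduler code. The point is that a woken vcpu gets a short boost slice
so its I/O process can run at once, but that time is still charged
against its credit, so a VM that sends itself an interrupt every 50us
cannot exceed its share:

/*
 * Minimal user-space sketch (NOT real Xen code; all names are made up)
 * of a wake-up boost that is still charged against the vcpu's credit.
 */
#include <stdio.h>
#include <stdbool.h>

#define CREDIT_PER_PERIOD  30000  /* us of CPU this vcpu may use in one */
                                  /* 60ms accounting period (50% share) */
#define BOOST_SLICE          500  /* us: short slice granted on wake-up */
#define NORMAL_SLICE       30000  /* us: normal credit1-style timeslice */

struct vcpu {
    const char *name;
    long credit;       /* remaining credit in this period, in us */
    bool just_woken;   /* set when an interrupt wakes this vcpu */
};

/* Decide how long the vcpu may run when it is picked. */
static long pick_slice(struct vcpu *v)
{
    if (v->credit <= 0)
        return 0;                 /* out of credit: not runnable now */
    if (v->just_woken)
        return BOOST_SLICE < v->credit ? BOOST_SLICE : v->credit;
    return NORMAL_SLICE < v->credit ? NORMAL_SLICE : v->credit;
}

/* Charge the time actually consumed, boosted or not. */
static void account(struct vcpu *v, long ran_us)
{
    v->credit -= ran_us;          /* boosted time is not free */
    v->just_woken = false;
}

int main(void)
{
    struct vcpu v = { "attacker", CREDIT_PER_PERIOD, false };
    long wall = 0, ran_total = 0;

    /* The vcpu wakes itself every 50us for one 60ms period. */
    while (wall < 60000) {
        v.just_woken = true;      /* self-sent interrupt */
        long slice = pick_slice(&v);
        if (slice > 50)
            slice = 50;           /* it blocks again after 50us of work */
        account(&v, slice);
        ran_total += slice;
        wall += 50;
    }
    printf("%s got %ld us of 60000 us (%.0f%%)\n",
           v.name, ran_total, 100.0 * ran_total / 60000);
    return 0;
}

In this toy model the self-interrupting vcpu ends up with exactly its
50% share of the 60ms window, because the boost only changes when it
runs, not how much it is allowed to run in total.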
Thanks,
Yuehai
>
> I don't have time to work on this right now, but if you work up some
> patches, I can give you feedback. Be advised, that getting this stuff
> to work right is not easy.
>
> -George