xen-devel.lists.xenproject.org archive mirror
* Disk IO scheduling in XEN
@ 2011-03-24 20:19 Paresh Nakhe
  2011-03-24 20:25 ` John Weekes
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Paresh Nakhe @ 2011-03-24 20:19 UTC (permalink / raw)
  To: xen-devel


Hi,

We are working on a project to modify the disk scheduling mechanism in
Xen a bit, in an attempt to improve it. The paper "Xen and the Art of
Virtualization" states the following:

*"Xen services batches of requests from competing domains in a
simple round-robin fashion; these are then passed to a standard elevator
scheduler before reaching the disk hardware."*

We went through the linux-jeremy source code trying to map the above
onto the code, but could not do so. On the contrary, we came to the
conclusion that there is no such mechanism: Domain 0 services requests
as soon as it receives a hypercall from a guest domain. Are we right?

Which one would be better, first come first served or round robin, and why?

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel


* Re: Disk IO scheduling in XEN
  2011-03-24 20:19 Disk IO scheduling in XEN Paresh Nakhe
@ 2011-03-24 20:25 ` John Weekes
  2011-03-24 22:29 ` James Harper
  2011-03-25 14:48 ` Jeremy Fitzhardinge
  2 siblings, 0 replies; 5+ messages in thread
From: John Weekes @ 2011-03-24 20:25 UTC (permalink / raw)
  To: Paresh Nakhe, xen-devel@lists.xensource.com


On 3/24/2011 1:19 PM, Paresh Nakhe wrote:
> Hi,
>
> We are working on a project to modify the disk scheduling mechanism in 
> XEN a bit in an attempt to improve it. In the paper "Xen and the Art 
> of Virtualization"  following is mentioned.
>
> /"Xen services batches of requests from competing domains in a
> simple round-robin fashion; these are then passed to a standard elevator
> scheduler before reaching the disk hardware/"
>
> We were going through linux-jeremy source code in an attempt to map 
> the above in code. We could not however do so. On the contrary we came 
> to the conclusion that there is no such mechanism. Domain 0 services 
> requests as soon as it receives a hypercall from a guest domain. Are 
> we right?

Ian Campbell forward-ported the 2.6.18-xen.hg patches for this, but I 
don't think they've been merged into the main git 2.6.32 stable tree 
yet. Check the list archives for the subject line "unfair servicing of 
DomU vbd requests" for more information.

> Which one would be better first come first serve or round robin and why?

Of the two, RR, because domains can become starved otherwise.

-John
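The starvation argument can be sketched with a toy simulation (Python, not Xen code; the domain names, queue sizes, and the `service` helper are all invented for illustration): under FCFS a flooding domain monopolises the device, while round robin bounds each domain's share per pass.

```python
from collections import deque

def service(queues, policy, budget):
    """Serve up to `budget` requests from per-domain queues.

    policy == "fcfs": drain one merged arrival-order queue.
    policy == "rr":   take one request per domain per pass.
    Returns {domain: requests_serviced}.
    """
    served = {dom: 0 for dom in queues}
    if policy == "fcfs":
        # Arrival order here: domain "A" floods the queue before B and C.
        merged = deque(dom for dom, q in queues.items() for _ in q)
        while budget and merged:
            served[merged.popleft()] += 1
            budget -= 1
    else:  # round robin
        pending = {dom: len(q) for dom, q in queues.items()}
        while budget and any(pending.values()):
            for dom in pending:
                if budget and pending[dom]:
                    pending[dom] -= 1
                    served[dom] += 1
                    budget -= 1
    return served

# Domain A submits 100 requests; B and C submit 10 each; 30 slots available.
queues = {"A": [0] * 100, "B": [0] * 10, "C": [0] * 10}
print(service(queues, "fcfs", 30))  # A takes all 30 slots; B and C starve
print(service(queues, "rr", 30))    # each domain gets 10
```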


* RE: Disk IO scheduling in XEN
  2011-03-24 20:19 Disk IO scheduling in XEN Paresh Nakhe
  2011-03-24 20:25 ` John Weekes
@ 2011-03-24 22:29 ` James Harper
  2011-03-25  9:58   ` Ian Campbell
  2011-03-25 14:48 ` Jeremy Fitzhardinge
  2 siblings, 1 reply; 5+ messages in thread
From: James Harper @ 2011-03-24 22:29 UTC (permalink / raw)
  To: Paresh Nakhe, xen-devel

> Hi,
> 
> We are working on a project to modify the disk scheduling mechanism in
> XEN a bit in an attempt to improve it. In the paper "Xen and the Art of
> Virtualization" following is mentioned.
> 
> "Xen services batches of requests from competing domains in a
> simple round-robin fashion; these are then passed to a standard
> elevator scheduler before reaching the disk hardware"
> 
> We were going through linux-jeremy source code in an attempt to map
> the above in code. We could not however do so. On the contrary we came
> to the conclusion that there is no such mechanism. Domain 0 services
> requests as soon as it receives a hypercall from a guest domain. Are
> we right?
> 
> Which one would be better first come first serve or round robin and
> why?
> 

There was a discussion about this recently. It turns out that one DomU
can starve the others, and there are real-life examples of this
happening. Patches have been proposed and posted, but I don't know
which trees they have gone into.

James


* RE: Disk IO scheduling in XEN
  2011-03-24 22:29 ` James Harper
@ 2011-03-25  9:58   ` Ian Campbell
  0 siblings, 0 replies; 5+ messages in thread
From: Ian Campbell @ 2011-03-25  9:58 UTC (permalink / raw)
  To: James Harper, Jeremy Fitzhardinge
  Cc: Paresh Nakhe, xen-devel@lists.xensource.com

On Thu, 2011-03-24 at 22:29 +0000, James Harper wrote:
> There was a discussion about this recently. It turns out that one DomU
> can starve the others and there are real life examples of this
> happening. Patches have been proposed and posted but I don't know into
> what trees.

They are in Linus' tree, so they will be in 2.6.39-rc1.

They aren't yet in Jeremy's tree; I have a branch at:

git://xenbits.xen.org/people/ianc/linux-2.6.git irq-fairness-2.6.32

John Weekes has tested it and reported success
(<4D7935C0.9010509@nuclearfallout.net>) so, Jeremy please pull:

The following changes since commit 6d6ba2f4ea5f5a11f31ed707445ec4a57d225eb6:
  Jeremy Fitzhardinge (1):
        Merge branch 'stable-2.6.32.x-dom0' of git://xenbits.xen.org/people/sstabellini/linux-pvhvm into xen/next-2.6.32

are available in the git repository at:

  git://xenbits.xen.org/people/ianc/linux-2.6.git irq-fairness-2.6.32

Ian Campbell (1):
      xen: events: Make last processed event channel a per-cpu variable.

Keir Fraser (3):
      xen: events: Clean up round-robin evtchn scan.
      xen: events: Make round-robin scan fairer by snapshotting each l2 word
      xen: events: Remove redundant clear of l2i at end of round-robin loop

Scott Rixner (1):
      xen: events: Process event channels notifications in round-robin order.

 drivers/xen/events.c |   80 +++++++++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 72 insertions(+), 8 deletions(-)
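The behaviour these patches aim for can be sketched with a toy model (Python, not the real drivers/xen/events.c, which scans two-level pending/selector bitmaps; `next_event` and the channel numbers here are invented): a per-CPU cursor makes each scan resume just past the last channel handled, so a constantly busy low-numbered channel cannot starve higher-numbered ones.

```python
# Toy model of round-robin event channel scanning: a per-CPU cursor
# records the last channel handled so the next scan resumes after it.
last_processed = {}  # cpu -> last event channel processed on that cpu

def next_event(cpu, pending, nr_channels=16):
    """Return the next pending channel in round-robin order for `cpu`.

    `pending` is a set of channel numbers with events waiting (a stand-in
    for the real two-level pending/selector bitmaps).
    """
    start = (last_processed.get(cpu, -1) + 1) % nr_channels
    for i in range(nr_channels):
        chan = (start + i) % nr_channels
        if chan in pending:
            last_processed[cpu] = chan
            return chan
    return None

# Channels 1 and 9 are both constantly busy on CPU 0.
print(next_event(0, {1, 9}))  # -> 1 (first scan starts at channel 0)
print(next_event(0, {1, 9}))  # -> 9 (scan resumes at 2, so 9 is not starved)
```

Without the cursor, every scan would restart at channel 0 and a busy channel 1 would be picked on every pass.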


* Re: Disk IO scheduling in XEN
  2011-03-24 20:19 Disk IO scheduling in XEN Paresh Nakhe
  2011-03-24 20:25 ` John Weekes
  2011-03-24 22:29 ` James Harper
@ 2011-03-25 14:48 ` Jeremy Fitzhardinge
  2 siblings, 0 replies; 5+ messages in thread
From: Jeremy Fitzhardinge @ 2011-03-25 14:48 UTC (permalink / raw)
  To: Paresh Nakhe; +Cc: xen-devel

On 03/24/2011 08:19 PM, Paresh Nakhe wrote:
> Hi,
>
> We are working on a project to modify the disk scheduling mechanism in
> XEN a bit in an attempt to improve it. In the paper "Xen and the Art
> of Virtualization"  following is mentioned.
>
> /"Xen services batches of requests from competing domains in a
> simple round-robin fashion; these are then passed to a standard elevator
> scheduler before reaching the disk hardware/"
>
> We were going through linux-jeremy source code in an attempt to map
> the above in code. We could not however do so. On the contrary we came
> to the conclusion that there is no such mechanism. Domain 0 services
> requests as soon as it receives a hypercall from a guest domain. Are
> we right?

There's no explicit Xen-specific disk scheduling.  Linux has a great
many intrinsic mechanisms for disk scheduling, including cgroups for
scheduling groups of processes, so you can use those to get whatever
scheduling policy you want.

> Which one would be better first come first serve or round robin and why?

I think there are many more options for disk scheduling than those.

For example: "is it better for guests to schedule their own requests, or
should they just pass requests straight through?".

    J
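The kind of proportional-share policy that cgroup-based I/O controllers provide via per-group weights can be sketched as a simple credit scheduler (a Python toy model; the group names and weights are made up, and this is not the kernel's implementation):

```python
from collections import Counter

def proportional_schedule(weights, slots):
    """Dispatch `slots` requests across always-busy groups in proportion
    to their weights, using a simple credit scheme: each round every
    group earns its weight in credit, and dispatching a request costs 1.
    """
    credit = {g: 0 for g in weights}
    order = []
    while len(order) < slots:
        for g, w in weights.items():
            credit[g] += w
        while len(order) < slots:
            g = max(credit, key=credit.get)
            if credit[g] < 1:
                break
            credit[g] -= 1
            order.append(g)
    return Counter(order)

# A "dom0" group with twice the weight of two guest groups receives
# half of the 40 dispatch slots.
print(proportional_schedule({"dom0": 2, "guest1": 1, "guest2": 1}, 40))
# -> Counter({'dom0': 20, 'guest1': 10, 'guest2': 10})
```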

