xen-devel.lists.xenproject.org archive mirror
From: George Dunlap <george.dunlap@eu.citrix.com>
To: Nathan Studer <nate.studer@dornerworks.com>
Cc: Simon Martin <smartin@milliways.cl>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Robert VanVossen <robert.vanvossen@dornerworks.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH 2/3] arinc: Add cpu-pool support to scheduler.
Date: Tue, 19 Nov 2013 11:30:16 +0000
Message-ID: <528B4BC8.4000300@eu.citrix.com>
In-Reply-To: <1384805814-3597-3-git-send-email-nate.studer@dornerworks.com>

On 11/18/2013 08:16 PM, Nathan Studer wrote:
> From: Nathan Studer <nate.studer@dornerworks.com>
>
> 1.  Remove the restriction that dom0 must be in the schedule, since dom0 may
> not belong to the scheduler's pool.
> 2.  Add a schedule entry for each of dom0's vcpus as they are created.
> 3.  Add code to deal with empty schedules in the do_schedule function.
> 4.  Call the correct idle task for the pcpu on which the scheduling decision
> is being made in do_schedule.
> 5.  Add code to prevent migration of a vcpu.
> 6.  Implement a proper cpu_pick function, which prefers the current processor.
>
> These changes do not implement arinc653 multicore.  Since the schedule only
> supports 1 vcpu entry per slot, even if the vcpus of a domain are run on
> multiple pcpus, the scheduler will essentially serialize their execution.
>
> Signed-off-by: Nathan Studer <nate.studer@dornerworks.com>
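
Item 6 of the changelog describes a pick_cpu hook that prefers the vcpu's
current processor whenever that processor is still part of the scheduler's
cpupool.  A minimal sketch of what such a callback could look like (the
function name, helpers and fields follow the Xen scheduler interface as I
recall it; this is an illustration, not the patch itself):

    static int
    a653sched_pick_cpu(const struct scheduler *ops, struct vcpu *vc)
    {
        /* Processors assigned to the cpupool this scheduler instance serves. */
        cpumask_t *online = cpupool_scheduler_cpumask(vc->domain->cpupool);
        unsigned int cpu = cpumask_first(online);

        /* Prefer the processor the vcpu is already on, if it is still valid. */
        if ( cpumask_test_cpu(vc->processor, online) || cpu >= nr_cpu_ids )
            cpu = vc->processor;

        return cpu;
    }

Combined with item 5 (refusing migration), this effectively keeps each vcpu
on the pcpu it started on, which fits the one-vcpu-per-slot model described
above.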

If this were a change to one of the main schedulers I think I would say 
that it was too late for such an intrusive change.  But at the moment, I 
don't think there are other users of this code, so I'm inclined to be 
more permissive.

Unless someone wants to argue otherwise:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
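
For reference, items 3 and 4 of the changelog amount to falling back to the
idle vcpu of the pcpu that is actually making the scheduling decision,
instead of assuming a populated schedule.  A fragment in the same
illustrative spirit as the sketch above (the private-data field names here
are my guesses, not the actual patch):

    /* Inside the scheduler's do_schedule callback, on the deciding pcpu. */
    const unsigned int cpu = smp_processor_id();
    struct vcpu *new_task;

    /*
     * The schedule may now be empty (dom0 no longer has to be in it), and
     * the current slot's vcpu may not be runnable; in either case run this
     * pcpu's own idle vcpu rather than pcpu 0's.
     */
    if ( sched_priv->num_schedule_entries < 1 ||
         !vcpu_runnable(sched_priv->schedule[sched_index].vc) )
        new_task = idle_vcpu[cpu];
    else
        new_task = sched_priv->schedule[sched_index].vc;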


Thread overview: 16+ messages
2013-11-18 20:16 [PATCH 0/3] arinc: Implement cpu-pool support Nathan Studer
2013-11-18 20:16 ` [PATCH 1/3] arinc: whitespace and formatting fixes Nathan Studer
2013-11-19  9:54   ` Andrew Cooper
2013-11-19 11:30   ` George Dunlap
2013-11-18 20:16 ` [PATCH 2/3] arinc: Add cpu-pool support to scheduler Nathan Studer
2013-11-19 10:30   ` Andrew Cooper
2013-11-19 11:18     ` George Dunlap
2013-11-19 11:33       ` Andrew Cooper
2013-11-19 13:01         ` Nate Studer
2013-11-19 13:58     ` Nate Studer
2013-11-19 14:04       ` Nate Studer
2013-11-19 18:16       ` Andrew Cooper
2013-11-19 11:30   ` George Dunlap [this message]
2013-11-18 20:16 ` [PATCH 3/3] arinc: Add poolid parameter to scheduler get/set functions Nathan Studer
2013-11-19 10:32   ` Andrew Cooper
2013-11-19 11:32     ` George Dunlap
