From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 3/4] xen: sched: reorganize cpu_disable_scheduler()
Date: Thu, 9 Jul 2015 12:24:53 +0200
Message-ID: <1436437493.22672.308.camel@citrix.com>
In-Reply-To: <1436368416.22672.220.camel@citrix.com>


Hey,

One thing...

On Wed, 2015-07-08 at 17:13 +0200, Dario Faggioli wrote:
> On Tue, 2015-07-07 at 13:16 +0200, Juergen Gross wrote:

> > > @@ -645,25 +675,72 @@ int cpu_disable_scheduler(unsigned int cpu)
> > >                   cpumask_setall(v->cpu_hard_affinity);
> > >               }
> > >
> > > -            if ( v->processor == cpu )
> > > +            if ( v->processor != cpu )
> > >               {
> > > -                set_bit(_VPF_migrating, &v->pause_flags);
> > > +                /* This vcpu is not on cpu, so we can move on. */
> > >                   vcpu_schedule_unlock_irqrestore(lock, flags, v);
> > > -                vcpu_sleep_nosync(v);
> > > -                vcpu_migrate(v);
> > > +                continue;
> > >               }
> > > -            else
> > > -                vcpu_schedule_unlock_irqrestore(lock, flags, v);
> > >
> > >               /*
> > > -             * A vcpu active in the hypervisor will not be migratable.
> > > -             * The caller should try again after releasing and reacquiring
> > > -             * all locks.
> > > +             * If we're here, it means that the vcpu is on cpu. Let's see how
> > > +             * it's best to send it away, depending on whether we are shutting
> > > +             * down/suspending, or doing cpupool manipulations.
> > >                */
> > > -            if ( v->processor == cpu )
> > > -                ret = -EAGAIN;
> > > -        }
> > > +            set_bit(_VPF_migrating, &v->pause_flags);
> > > +            vcpu_schedule_unlock_irqrestore(lock, flags, v);
> > > +            vcpu_sleep_nosync(v);
> > > +
> > > +            /*
> > > +             * In case of shutdown/suspend, it is not necessary to ask the
> > > +             * scheduler to chime in. In fact:
> > > +             *  * there is no reason for it: the end result we are after is
> > > +             *    just 'all the vcpus on the boot pcpu, and no vcpu anywhere
> > > +             *    else', so let's just go for it;
> > > +             * it's wrong, when dealing with a cpupool with only non-boot pcpus,
> > > +             *    as the scheduler will always fail to send the vcpus away
> > > +             *    from the last online (non boot) pcpu!
> > 
> > I'd add a comment that in shutdown/suspend case all domains are being
> > paused, so we can be active in dom0/Pool-0 only.
> > 
> Sure, I'll add this.
> 
...while putting such a comment together, I'm realizing that I'm not
sure what you meant, or what you wanted the comment itself to
express.

I mean, it is certainly true that all domains are being paused (in
fact, they've been paused already), but that includes Dom0 too. Also,
we are in Xen, in stop_machine context, so I'm not sure what you meant
with "we can be active in dom0/Pool-0 only" either.

So, I'm adding a line about things being paused. If you think I should
say anything more than that, let me know.
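
Concretely, I'm thinking of something along these lines (draft wording
only, of course; illustrative, not the final comment):

    /*
     * We are either shutting down or suspending: in both cases, all
     * the domains (including Dom0) have been paused already, so no
     * vcpu is running, nor can any be scheduled, at this point.
     */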

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
