From: Ben Guthro <Benjamin.Guthro@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH] x86/S3: Restore broken vcpu affinity on resume
Date: Wed, 27 Mar 2013 08:04:13 -0400
Message-ID: <5152E03D.5040404@citrix.com>
In-Reply-To: <CAFLBxZYLjxTisYSu4bPLnGBHZJ3yG7ak79TcZxNstPkW8q6h+A@mail.gmail.com>



On 03/27/2013 08:01 AM, George Dunlap wrote:
> On Tue, Mar 26, 2013 at 5:20 PM, Ben Guthro <benjamin.guthro@citrix.com> wrote:
>> When in SYS_STATE_suspend and going through the cpu_disable_scheduler
>> path, save a copy of the current cpu affinity, and mark a flag to
>> restore it later.
>>
>> Later, in the resume process, when enabling the nonboot cpus, restore
>> these affinities.
>>
>> This is the second submission of this patch.
>> The primary difference from the first submission is that the formatting
>> problems have been fixed. While doing so, I also tested with another
>> patch in the cpu_disable_scheduler() path that is appropriate here.
>>
>> Signed-off-by: Ben Guthro <benjamin.guthro@citrix.com>
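
For illustration, the save side described above might look roughly like
this inside cpu_disable_scheduler() (the cpu_affinity_saved and
affinity_broken names are placeholders here, not necessarily what the
patch uses):

    if ( system_state == SYS_STATE_suspend )
    {
        /* Remember the affinity we are about to break, and flag it. */
        cpumask_copy(v->cpu_affinity_saved, v->cpu_affinity);
        v->affinity_broken = 1;
    }

    /* Existing fallback: let the vcpu run anywhere for now. */
    cpumask_setall(v->cpu_affinity);

On resume, any vcpu with the flag set gets its saved mask copied back.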
>
> Overall looks fine to me; just a few comments below.
>
>> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
>> index 10b10f8..7a04f5e 100644
>> --- a/xen/common/cpupool.c
>> +++ b/xen/common/cpupool.c
>> @@ -19,13 +19,10 @@
>>   #include <xen/sched-if.h>
>>   #include <xen/cpu.h>
>>
>> -#define for_each_cpupool(ptr)    \
>> -    for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
>> -
>
> You're taking this out because it's not used, I presume?
>
> Since you'll probably be sending another patch anyway (see below), I
> think it would be better if you pull this out into a specific
> "clean-up" patch.

No. This was moved to a header file to allow its use elsewhere.
I'm in the process of looking into Jan's suggestion of eliminating the
need for it by moving some code into thaw_domains().
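
Roughly, the restore side needs to walk every vcpu in every pool, which
is why the macro is wanted outside cpupool.c. A rough sketch (the helper
name and the saved-affinity/flag fields are placeholders, locking is
omitted, it assumes the existing for_each_domain_in_cpupool and
for_each_vcpu iterators, and per Jan's suggestion it may end up living
in or near thaw_domains() instead):

    /* Placeholder helper, run while bringing the nonboot CPUs back up. */
    static void restore_vcpu_affinities(void)
    {
        struct cpupool **c;
        struct domain *d;
        struct vcpu *v;

        for_each_cpupool ( c )            /* macro now shared via a header */
            for_each_domain_in_cpupool ( d, *c )
                for_each_vcpu ( d, v )
                {
                    if ( !v->affinity_broken )   /* flag set during suspend */
                        continue;
                    cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
                    v->affinity_broken = 0;
                }
    }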



>
>
>> @@ -569,6 +609,13 @@ int cpu_disable_scheduler(unsigned int cpu)
>>               {
>>                   printk("Breaking vcpu affinity for domain %d vcpu %d\n",
>>                           v->domain->domain_id, v->vcpu_id);
>> +
>> +                if (system_state == SYS_STATE_suspend)
>> +               {
>
> This appears to have two tabs instead of 16 spaces?

Yes, I'll fix this in v3.
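
i.e. a whitespace-only change, so the new block is indented with spaces
to match the surrounding code:

                if (system_state == SYS_STATE_suspend)
                {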

Thanks for your review

Ben

Thread overview: 11+ messages
2013-03-26 17:20 [PATCH] x86/S3: Restore broken vcpu affinity on resume Ben Guthro
2013-03-26 17:23 ` Ben Guthro
2013-03-27  6:06 ` Juergen Gross
2013-03-27  9:19 ` Jan Beulich
2013-03-27 12:01 ` George Dunlap
2013-03-27 12:04   ` Ben Guthro [this message]
2013-03-27 12:06     ` George Dunlap
  -- strict thread matches above, loose matches on Subject: below --
2013-03-26 16:12 Ben Guthro
2013-03-26 16:54 ` Jan Beulich
2013-03-26 16:58   ` Ben Guthro
2013-03-26 17:04     ` Jan Beulich
