From: Juergen Gross <jgross@suse.com>
To: Anshul Makkar <anshul.makkar@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v2 2/3] xen: add hypercall option to temporarily pin a vcpu
Date: Thu, 3 Mar 2016 06:31:32 +0100	[thread overview]
Message-ID: <56D7CC34.4040202@suse.com> (raw)
In-Reply-To: <d97b65fdc6124ca2a285b6c3c87f5318@AMSPEX02CL03.citrite.net>

On 02/03/16 18:21, Anshul Makkar wrote:
> Hi,
> 
> 
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of George Dunlap
> Sent: 01 March 2016 15:53
> To: Juergen Gross <jgross@suse.com>; xen-devel@lists.xen.org
> Cc: Wei Liu <wei.liu2@citrix.com>; Stefano Stabellini <Stefano.Stabellini@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Dario Faggioli <dario.faggioli@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; David Vrabel <david.vrabel@citrix.com>; jbeulich@suse.com
> Subject: Re: [Xen-devel] [PATCH v2 2/3] xen: add hypercall option to temporarily pin a vcpu
> 
> On 01/03/16 09:02, Juergen Gross wrote:
>> Some hardware (e.g. Dell Studio 1555 laptops) requires SMIs to be
>> called on physical cpu 0 only. Linux drivers like dcdbas or i8k try to
>> achieve this by pinning the running thread to cpu 0, but in Dom0 this
>> is not enough: the vcpu must be pinned to physical cpu 0 via Xen, too.
>>
>> Add a stable hypercall option SCHEDOP_pin_temp to the sched_op
>> hypercall to achieve this. It takes a physical cpu number as a
>> parameter. If pinning is possible (the calling domain has the
>> privilege to make the call and the cpu is available in the domain's
>> cpupool), the calling vcpu is pinned to the specified cpu. The old
>> cpu affinity is saved. To undo the temporary pinning, a cpu value of
>> -1 is specified; this will restore the original cpu affinity for the
>> vcpu.
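
As a quick illustration of the interface described above, a Dom0-side
caller might look roughly like the sketch below. It assumes the usual
Linux HYPERVISOR_sched_op() plumbing; the struct layout, the field name
"pcpu" and the function name are inferred from the description and are
illustrative only:

    /* Sketch only: pin to physical cpu 0 around an SMI, then restore
     * the previous affinity.  Field name "pcpu" is assumed. */
    static int pin_vcpu_to_cpu0_for_smi(void)
    {
        struct sched_pin_temp pin = { .pcpu = 0 };
        int rc;

        rc = HYPERVISOR_sched_op(SCHEDOP_pin_temp, &pin);
        if ( rc )
            return rc;

        /* ... issue the SMI while running on physical cpu 0 ... */

        pin.pcpu = (uint32_t)-1;   /* -1 undoes the temporary pinning */
        return HYPERVISOR_sched_op(SCHEDOP_pin_temp, &pin);
    }
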
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2: - limit operation to hardware domain as suggested by Jan Beulich
>>     - some style issues corrected as requested by Jan Beulich
>>     - use fixed width types in interface as requested by Jan Beulich
>>     - add compat layer checking as requested by Jan Beulich
>> ---
>>  xen/common/compat/schedule.c |  4 ++
>>  xen/common/schedule.c        | 92 +++++++++++++++++++++++++++++++++++++++++---
>>  xen/include/public/sched.h   | 17 ++++++++
>>  xen/include/xlat.lst         |  1 +
>>  4 files changed, 109 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/common/compat/schedule.c b/xen/common/compat/schedule.c
>> index 812c550..73b0f01 100644
>> --- a/xen/common/compat/schedule.c
>> +++ b/xen/common/compat/schedule.c
>> @@ -10,6 +10,10 @@
>>  
>>  #define do_sched_op compat_sched_op
>>  
>> +#define xen_sched_pin_temp sched_pin_temp
>> +CHECK_sched_pin_temp;
>> +#undef xen_sched_pin_temp
>> +
>>  #define xen_sched_shutdown sched_shutdown
>>  CHECK_sched_shutdown;
>>  #undef xen_sched_shutdown
>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index b0d4b18..653f852 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -271,6 +271,12 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
>>      struct scheduler *old_ops;
>>      void *old_domdata;
>>  
>> +    for_each_vcpu ( d, v )
>> +    {
>> +        if ( v->affinity_broken )
>> +            return -EBUSY;
>> +    }
>> +
>>      domdata = SCHED_OP(c->sched, alloc_domdata, d);
>>      if ( domdata == NULL )
>>          return -ENOMEM;
>> @@ -669,6 +675,14 @@ int cpu_disable_scheduler(unsigned int cpu)
>>              if ( cpumask_empty(&online_affinity) &&
>>                   cpumask_test_cpu(cpu, v->cpu_hard_affinity) )
>>              {
>> +                if ( v->affinity_broken )
>> +                {
>> +                    /* The vcpu is temporarily pinned, can't move it. */
>> +                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
>> +                    ret = -EBUSY;
>> +                    break;
>> +                }
> 
> Does this mean that if the user closes the laptop lid while one of these drivers has vcpu0 pinned, Xen will crash (see xen/arch/x86/smpboot.c:__cpu_disable())?  Or is it the OS's job to make sure that all temporary pins are removed before suspending?
> 
> Also -- have you actually tested the "cpupool move while pinned"
> functionality to make sure it actually works?  There's a weird bit in
> cpupool_unassign_cpu_helper() where, after calling cpu_disable_scheduler(cpu), it unconditionally sets the cpu's bit in the cpupool_free_cpus mask, even if cpu_disable_scheduler() returned an error.  That can't be right, even for the existing -EAGAIN case, can it?
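
For reference, the flow being questioned looks roughly like the sketch
below; this is a simplified illustration based on the description above,
not the verbatim Xen source:

    /* Simplified sketch, not verbatim Xen code: the cpu is marked free
     * even when cpu_disable_scheduler() has just reported an error. */
    static long cpupool_unassign_cpu_helper(void *info)
    {
        unsigned int cpu = cpupool_moving_cpu;
        long ret;

        ret = cpu_disable_scheduler(cpu);

        /* Set unconditionally, i.e. also after -EAGAIN (or the new -EBUSY). */
        cpumask_set_cpu(cpu, &cpupool_free_cpus);

        if ( !ret )
        {
            /* Only the success path goes on to detach the cpu from the pool. */
            ret = schedule_cpu_switch(cpu, NULL);
        }

        return ret;
    }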
> 
> I see that you have a loop to retry this call several times in the next patch; but what if it fails every time -- what state is the system in?
> 
> And, in general, what happens if the device driver gets mixed up and forgets to unpin the vcpu?  Is the only recourse to reboot your host (or deal with the fact that you can't reconfigure your cpupools)?
> 
>  -George
> 
> Sorry, I lost the original thread, so I am replying at the top of the mail chain.
> 
> +static XSM_INLINE int xsm_schedop_pin_temp(XSM_DEFAULT_VOID)
> +{
> +    XSM_ASSERT_ACTION(XSM_PRIV);
> +    return xsm_default_action(action, current->domain, NULL);
> +}
> 
> Is the intention to restrict the hypercall usage to dom0 only?

To be more precise: to the hardware domain (the patch snippet you are
referencing was part of V1 of the series; it no longer exists in V2).
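
A minimal sketch of what such a hardware-domain restriction can look like
in the sched_op handler is below; illustrative only, not necessarily the
exact V2 code, and vcpu_pin_temp() is a hypothetical helper name:

    case SCHEDOP_pin_temp:
    {
        struct sched_pin_temp sched_pin_temp;

        /* Refuse the operation unless the caller is the hardware domain. */
        ret = -EPERM;
        if ( !is_hardware_domain(current->domain) )
            break;

        ret = -EFAULT;
        if ( copy_from_guest(&sched_pin_temp, arg, 1) )
            break;

        /* vcpu_pin_temp() is hypothetical: pin, or undo when pcpu is -1. */
        ret = vcpu_pin_temp(current, sched_pin_temp.pcpu);
        break;
    }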


Juergen

Thread overview: 26+ messages
2016-03-01  9:02 [PATCH v2 0/3] add hypercall option to temporarily pin a vcpu Juergen Gross
2016-03-01  9:02 ` [PATCH v2 1/3] xen: silence affinity messages on suspend/resume Juergen Gross
2016-03-02 11:11   ` Dario Faggioli
2016-03-01  9:02 ` [PATCH v2 2/3] xen: add hypercall option to temporarily pin a vcpu Juergen Gross
2016-03-01 11:27   ` Jan Beulich
2016-03-01 11:55   ` David Vrabel
2016-03-01 11:58     ` Juergen Gross
2016-03-01 12:15       ` Dario Faggioli
2016-03-01 14:02         ` George Dunlap
     [not found]   ` <56D58ABF02000078000D7C46@suse.com>
2016-03-01 11:58     ` Juergen Gross
2016-03-01 15:52   ` George Dunlap
2016-03-01 15:55     ` George Dunlap
2016-03-01 16:11       ` Jan Beulich
2016-03-02  7:14     ` Juergen Gross
2016-03-02  9:27       ` Dario Faggioli
2016-03-02 11:19         ` Juergen Gross
2016-03-02 11:49           ` Dario Faggioli
2016-03-02 12:12             ` Juergen Gross
2016-03-02 15:34         ` Juergen Gross
2016-03-02 16:03           ` Dario Faggioli
2016-03-02 17:15             ` Juergen Gross
2016-03-02 17:21     ` Anshul Makkar
2016-03-03  5:31       ` Juergen Gross [this message]
2016-03-01  9:02 ` [PATCH v2 3/3] libxc: do some retries in xc_cpupool_removecpu() for EBUSY case Juergen Gross
2016-03-01 11:58   ` Wei Liu
2016-03-01 11:59     ` Juergen Gross
