From: Juergen Gross <jgross@suse.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
konrad.wilk@oracle.com, boris.ostrovsky@oracle.com,
david.vrabel@citrix.com, mingo@redhat.com,
Douglas_Warzecha@dell.com, pali.rohar@gmail.com,
jdelvare@suse.com, linux@roeck-us.net, tglx@linutronix.de,
hpa@zytor.com, jeremy@goop.org, chrisw@sous-sol.org,
akataria@vmware.com, rusty@rustcorp.com.au,
virtualization@lists.linux-foundation.org, x86@kernel.org
Subject: Re: [PATCH v3 5/6] virt, sched: add cpu pinning to smp_call_sync_on_phys_cpu()
Date: Fri, 1 Apr 2016 11:03:21 +0200 [thread overview]
Message-ID: <56FE3959.2030508@suse.com> (raw)
In-Reply-To: <20160401084408.GF3448@twins.programming.kicks-ass.net>
On 01/04/16 10:44, Peter Zijlstra wrote:
> On Fri, Apr 01, 2016 at 10:28:46AM +0200, Juergen Gross wrote:
>> On 01/04/16 09:43, Peter Zijlstra wrote:
>>> On Fri, Apr 01, 2016 at 09:14:33AM +0200, Juergen Gross wrote:
>>>> --- a/kernel/smp.c
>>>> +++ b/kernel/smp.c
>>>> @@ -14,6 +14,7 @@
>>>> #include <linux/smp.h>
>>>> #include <linux/cpu.h>
>>>> #include <linux/sched.h>
>>>> +#include <linux/hypervisor.h>
>>>>
>>>> #include "smpboot.h"
>>>>
>>>> @@ -758,9 +759,14 @@ struct smp_sync_call_struct {
>>>> static void smp_call_sync_callback(struct work_struct *work)
>>>> {
>>>> struct smp_sync_call_struct *sscs;
>>>> + unsigned int cpu;
>>>>
>>>> sscs = container_of(work, struct smp_sync_call_struct, work);
>>>> + cpu = get_cpu();
>>>> + hypervisor_pin_vcpu(cpu);
>>>> sscs->ret = sscs->func(sscs->data);
>>>> + hypervisor_pin_vcpu(-1);
>>>> + put_cpu();
>>>>
>>>> complete(&sscs->done);
>>>> }
>>>
>>> So I don't really like this; it adds the requirement that the function
>>> cannot schedule, which greatly limits the utility of the construct. At
>>> this point you might as well use the regular IPI stuff.
>>
>> Main reason for disabling preemption was to avoid any suspend/resume
>> cycles while vcpu pinning is active.
>>
>> With the switch to workqueues this might not be necessary, if I've read
>> try_to_freeze_tasks() correctly. Can you confirm, please?
>
> This is not something we should worry about; the caller should ensure
> the CPU stays valid; typically I would expect a caller to do
> get_online_cpus() before 'computing' what CPU to send the function to.
Okay.
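[For reference, the caller-side pattern described above would look roughly like this. This is a sketch only; do_firmware_call() and the error handling are hypothetical, and the smp_call_on_cpu() signature is the one discussed later in this thread, not necessarily the final merged form:

    /* Caller holds the hotplug lock while picking a CPU and issuing
     * the call, so the chosen CPU cannot go offline underneath us.
     * do_firmware_call() is a hypothetical function to run there.
     */
    static int issue_call_on_cpu0(void *data)
    {
            int ret = -ENXIO;

            get_online_cpus();              /* pin the CPU topology */
            if (cpu_online(0))
                    ret = smp_call_on_cpu(0, do_firmware_call, data, true);
            put_online_cpus();

            return ret;
    }
]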
>
>>> So I would propose you add:
>>>
>>> smp_call_on_cpu()
>>>
>>> As per patch 2. No promises about physical or anything. This means it
>>> can be used freely by anyone that wants to run a function on another
>>> cpu -- a much more useful thing.
>>
>> Okay.
>>
>>> And then build a phys variant on top.
>>
>> Hmm, I'm not sure I understand what you are suggesting here.
>>
>> Should this phys variant make use of smp_call_on_cpu() via an
>> intermediate function called on the dedicated cpu which is doing the
>> pinning and calling the user function then?
>>
>> Or do you want the phys variant to either use smp_call_on_cpu() or to
>> do the pinning and call the user function by itself depending on the
>> environment (pinning supported)?
>
> Yeah, uhmm.. not sure on the details; my brain is having a hard time
> engaging this morning.
>
> Maybe just make the vpin thing an option like:
>
> smp_call_on_cpu(int (*func)(void *), int phys_cpu);
Okay.
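[A rough sketch of what the combined interface could look like: one entry point taking a 'phys' flag, with the actual hypervisor_pin_vcpu() call issued from the workqueue callback running on the target CPU. Struct and function names here are assumptions for illustration, not the final merged code:

    int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par,
                        bool phys)
    {
            struct smp_call_on_cpu_struct sscs = {
                    .done = COMPLETION_INITIALIZER_ONSTACK(sscs.done),
                    .func = func,
                    .data = par,
                    .phys = phys,   /* callback pins the vcpu only if set */
            };

            INIT_WORK_ONSTACK(&sscs.work, smp_call_on_cpu_callback);

            if (cpu >= nr_cpu_ids || !cpu_online(cpu))
                    return -ENXIO;

            queue_work_on(cpu, system_wq, &sscs.work);
            wait_for_completion(&sscs.done);

            return sscs.ret;
    }
]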
> Also; is something like the vpin thing possible on KVM? because if we're
> going to expose it to generic code like this we had maybe look at wider
> support.
It is necessary for dom0 under Xen. I don't think there is a need to do
this on KVM, as a guest has no direct access to e.g. BIOS functions of
the real hardware, and the host system needs no vcpu pinning. I'm not
sure about VMware.
Juergen
Thread overview: 15+ messages
2016-04-01 7:14 [PATCH v3 0/6] Support calling functions on dedicated physical cpu Juergen Gross
2016-04-01 7:14 ` [PATCH v3 1/6] xen: sync xen header Juergen Gross
2016-04-01 7:14 ` [PATCH v3 2/6] smp: add function to execute a function synchronously on a physical cpu Juergen Gross
2016-04-01 7:37 ` Peter Zijlstra
2016-04-01 7:40 ` Juergen Gross
2016-04-01 7:14 ` [PATCH v3 3/6] dcdbas: make use of smp_call_sync_on_phys_cpu() Juergen Gross
2016-04-01 7:14 ` [PATCH v3 4/6] hwmon: use smp_call_sync_on_phys_cpu() for dell-smm i8k Juergen Gross
2016-04-01 7:14 ` [PATCH v3 5/6] virt, sched: add cpu pinning to smp_call_sync_on_phys_cpu() Juergen Gross
2016-04-01 7:43 ` Peter Zijlstra
2016-04-01 8:28 ` Juergen Gross
2016-04-01 8:44 ` Peter Zijlstra
2016-04-01 9:03 ` Juergen Gross [this message]
2016-04-01 9:15 ` Peter Zijlstra
2016-04-01 9:26 ` Juergen Gross
2016-04-01 7:14 ` [PATCH v3 6/6] xen: add xen_pin_vcpu() to support calling functions on a dedicated pcpu Juergen Gross