From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: Alexander Graf <agraf@suse.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Paul Mackerras <paulus@samba.org>,
linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org,
"\<kvm\@vger.kernel.org\> list" <kvm@vger.kernel.org>,
Gleb Natapov <gleb@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [RFC PATCH 09/11] kvm: simplify processor compat check
Date: Fri, 27 Sep 2013 18:43:09 +0530 [thread overview]
Message-ID: <87r4ca9zmi.fsf@linux.vnet.ibm.com> (raw)
In-Reply-To: <E5A81672-864D-4E33-B15A-5F6927CC7C13@suse.de>
Alexander Graf <agraf@suse.de> writes:
> On 27.09.2013, at 12:03, Aneesh Kumar K.V wrote:
>
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> Missing patch description.
>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>
> I fail to see how this really simplifies things, but at the end of the
> day it's Gleb's and Paolo's call.
Will do. It avoids open-coding

	for_each_online_cpu(cpu) {
		smp_call_function_single()

in multiple architectures. We also want the smp call function to be a
callback passed in via the opaque pointer; hence this check should be
made arch specific.
int kvm_arch_check_processor_compat(void *opaque)
{
	int r, cpu;
	struct kvmppc_ops *kvm_ops = (struct kvmppc_ops *)opaque;

	for_each_online_cpu(cpu) {
		smp_call_function_single(cpu,
					 kvm_ops->check_processor_compat,
					 &r, 1);
		if (r < 0)
			break;
	}
	return r;
}
against
-	for_each_online_cpu(cpu) {
-		smp_call_function_single(cpu,
-					 kvm_arch_check_processor_compat,
-					 &r, 1);
-		if (r < 0)
-			goto out_free_1;
-	}
+
+	r = kvm_arch_check_processor_compat(opaque);
+	if (r < 0)
+		goto out_free_1;
>
> Which brings me to the next issue: You forgot to CC kvm@vger on your
> patch set. Gleb and Paolo don't read kvm-ppc@vger. And they shouldn't
> have to. Every kvm patch that you want review on or that should get
> applied needs to be sent to kvm@vger. If you want to tag it as PPC
> specific patch, do so by CC'ing kvm-ppc@vger.
Will do in the next update.
-aneesh
Thread overview: 14+ messages
[not found] <1380276233-17095-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
[not found] ` <1380276233-17095-10-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
2013-09-27 12:31 ` [RFC PATCH 09/11] kvm: simplify processor compat check Alexander Graf
2013-09-27 13:13 ` Aneesh Kumar K.V [this message]
2013-09-27 15:14 ` Paolo Bonzini
2013-09-28 15:36 ` Aneesh Kumar K.V
2013-09-29 8:58 ` Gleb Natapov
2013-09-29 15:05 ` Aneesh Kumar K.V
2013-09-29 15:11 ` Gleb Natapov
[not found] ` <878uyibkq2.fsf@linux.vnet.ibm.com>
2013-09-30 10:16 ` [RFC PATCH 00/11 Allow PR and HV KVM to coexist in one kernel Alexander Graf
2013-09-30 13:09 ` Aneesh Kumar K.V
2013-09-30 14:54 ` Alexander Graf
2013-10-01 11:26 ` Aneesh Kumar K.V
2013-10-01 11:36 ` Alexander Graf
2013-10-01 11:41 ` Paolo Bonzini
2013-10-01 11:43 ` Alexander Graf