From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oliver.upton@linux.dev>
Cc: Sebastian Ott <sebott@redhat.com>,
Sean Christopherson <seanjc@google.com>,
kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] KVM: arm64: Fix smp_processor_id() call in preemptible context
Date: Tue, 06 Jun 2023 18:10:42 +0100
Message-ID: <87ilc0o6st.wl-maz@kernel.org>
In-Reply-To: <ZH9jTrR8cdkOdJKu@linux.dev>
On Tue, 06 Jun 2023 17:48:14 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Jun 06, 2023 at 05:17:34PM +0100, Marc Zyngier wrote:
> > On Tue, 06 Jun 2023 15:10:44 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > > diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> > > index 491ca7eb2a4c..933a6331168b 100644
> > > --- a/arch/arm64/kvm/pmu-emul.c
> > > +++ b/arch/arm64/kvm/pmu-emul.c
> > > @@ -700,7 +700,7 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
> > >
> > >  	mutex_lock(&arm_pmus_lock);
> > >
> > > -	cpu = smp_processor_id();
> > > +	cpu = raw_smp_processor_id();
> > >  	list_for_each_entry(entry, &arm_pmus, entry) {
> > >  		tmp = entry->arm_pmu;
> > >
> > >
> >
> > If preemption doesn't matter (and I really don't think it does), why
> > are we looking for the current CPU? I'd rather we pick the PMU that
> > is associated with CPU0 (we're pretty sure it exists), and be done
> > with it.
>
> Getting the current CPU is still useful; we just don't care about that
> cpu# being stale. Unconditionally using CPU0 could break existing usage
> patterns.
>
> A not-too-contrived example would be to taskset QEMU onto a cluster of
> cores in a big.LITTLE system (I do this). The current behavior would
> assign the right PMU to the guest. I've made my opinions about the 'old'
> ABI quite clear, but I don't have much of an appetite for breaking
> existing usage, however fragile it may be.
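>
> Something along these lines is what I have in mind (an untested sketch;
> the comment wording is only there to illustrate the reasoning, not the
> final patch):
>
> 	mutex_lock(&arm_pmus_lock);
>
> 	/*
> 	 * A stale CPU number is fine here: it is only used to pick a
> 	 * plausible PMU off the list, and no per-CPU data is accessed,
> 	 * so there is no need to disable preemption around the lookup.
> 	 */
> 	cpu = raw_smp_processor_id();
> 	list_for_each_entry(entry, &arm_pmus, entry) {
> 		tmp = entry->arm_pmu;
> 		...
> 	}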
Fair enough.
>
> Can we proceed with the fix I had suggested along with a more complete
> description of the baggage that we're carrying?
Sure. Please post a separate patch and I'll queue that together with
Reiji's EL0 PMU stuff for the next bag of fixes.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.