kvm.vger.kernel.org archive mirror
From: Marc Zyngier <marc.zyngier@arm.com>
To: Christoffer Dall <christoffer.dall@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu
Subject: Re: [PATCH 7/8] KVM: arm/arm64: Avoid accessing GICH registers
Date: Wed, 10 Feb 2016 17:43:54 +0000
Message-ID: <56BB76DA.4080905@arm.com>
In-Reply-To: <20160210173054.GR5171@cbox>

On 10/02/16 17:30, Christoffer Dall wrote:
> On Wed, Feb 10, 2016 at 01:34:44PM +0000, Marc Zyngier wrote:
>> On 10/02/16 12:45, Christoffer Dall wrote:
>>> On Mon, Feb 08, 2016 at 11:40:21AM +0000, Marc Zyngier wrote:
>>>> GICv2 registers are *slow*. As in "terrifyingly slow". Which is bad.
>>>> But we're equally bad, as we make a point of accessing them even if
>>>> we don't have any interrupt in flight.
>>>>
>>>> A good solution is to first find out if we have anything useful to
>>>> write into the GIC, and if we don't, to simply not do it. This
>>>> involves tracking which LRs actually have something valid there.
>>>>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>>  arch/arm64/kvm/hyp/vgic-v2-sr.c | 71 ++++++++++++++++++++++++++++-------------
>>>>  include/kvm/arm_vgic.h          |  2 ++
>>>>  2 files changed, 51 insertions(+), 22 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kvm/hyp/vgic-v2-sr.c b/arch/arm64/kvm/hyp/vgic-v2-sr.c
>>>> index e717612..874a08d 100644
>>>> --- a/arch/arm64/kvm/hyp/vgic-v2-sr.c
>>>> +++ b/arch/arm64/kvm/hyp/vgic-v2-sr.c
>>>> @@ -38,28 +38,40 @@ void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
>>>>  
>>>>  	nr_lr = vcpu->arch.vgic_cpu.nr_lr;
>>>>  	cpu_if->vgic_vmcr = readl_relaxed(base + GICH_VMCR);
>>>> -	cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
>>>> -	eisr0  = readl_relaxed(base + GICH_EISR0);
>>>> -	elrsr0 = readl_relaxed(base + GICH_ELRSR0);
>>>> -	if (unlikely(nr_lr > 32)) {
>>>> -		eisr1  = readl_relaxed(base + GICH_EISR1);
>>>> -		elrsr1 = readl_relaxed(base + GICH_ELRSR1);
>>>> -	} else {
>>>> -		eisr1 = elrsr1 = 0;
>>>> -	}
>>>> +
>>>> +	if (vcpu->arch.vgic_cpu.live_lrs) {
>>>> +		eisr0  = readl_relaxed(base + GICH_EISR0);
>>>> +		elrsr0 = readl_relaxed(base + GICH_ELRSR0);
>>>> +		cpu_if->vgic_misr = readl_relaxed(base + GICH_MISR);
>>>> +		cpu_if->vgic_apr    = readl_relaxed(base + GICH_APR);
>>>> +
>>>> +		if (unlikely(nr_lr > 32)) {
>>>> +			eisr1  = readl_relaxed(base + GICH_EISR1);
>>>> +			elrsr1 = readl_relaxed(base + GICH_ELRSR1);
>>>> +		} else {
>>>> +			eisr1 = elrsr1 = 0;
>>>> +		}
>>>> +
>>>>  #ifdef CONFIG_CPU_BIG_ENDIAN
>>>> -	cpu_if->vgic_eisr  = ((u64)eisr0 << 32) | eisr1;
>>>> -	cpu_if->vgic_elrsr = ((u64)elrsr0 << 32) | elrsr1;
>>>> +		cpu_if->vgic_eisr  = ((u64)eisr0 << 32) | eisr1;
>>>> +		cpu_if->vgic_elrsr = ((u64)elrsr0 << 32) | elrsr1;
>>>>  #else
>>>> -	cpu_if->vgic_eisr  = ((u64)eisr1 << 32) | eisr0;
>>>> -	cpu_if->vgic_elrsr = ((u64)elrsr1 << 32) | elrsr0;
>>>> +		cpu_if->vgic_eisr  = ((u64)eisr1 << 32) | eisr0;
>>>> +		cpu_if->vgic_elrsr = ((u64)elrsr1 << 32) | elrsr0;
>>>>  #endif
>>>> -	cpu_if->vgic_apr    = readl_relaxed(base + GICH_APR);
>>>>  
>>>> -	writel_relaxed(0, base + GICH_HCR);
>>>> +		for (i = 0; i < nr_lr; i++)
>>>> +			if (vcpu->arch.vgic_cpu.live_lrs & (1UL << i))
>>>> +				cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
>>>>  
>>>> -	for (i = 0; i < nr_lr; i++)
>>>> -		cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
>>>> +		writel_relaxed(0, base + GICH_HCR);
>>>> +
>>>> +		vcpu->arch.vgic_cpu.live_lrs = 0;
>>>> +	} else {
>>>> +		cpu_if->vgic_eisr = 0;
>>>> +		cpu_if->vgic_elrsr = ~0UL;
>>>> +		cpu_if->vgic_misr = 0;
>>>> +	}
>>>>  }
>>>>  
>>>>  /* vcpu is already in the HYP VA space */
>>>> @@ -70,15 +82,30 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
>>>>  	struct vgic_dist *vgic = &kvm->arch.vgic;
>>>>  	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
>>>>  	int i, nr_lr;
>>>> +	u64 live_lrs = 0;
>>>>  
>>>>  	if (!base)
>>>>  		return;
>>>>  
>>>> -	writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
>>>> -	writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
>>>> -	writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
>>>> -
>>>>  	nr_lr = vcpu->arch.vgic_cpu.nr_lr;
>>>> +
>>>>  	for (i = 0; i < nr_lr; i++)
>>>> -		writel_relaxed(cpu_if->vgic_lr[i], base + GICH_LR0 + (i * 4));
>>>> +		if (cpu_if->vgic_lr[i] & GICH_LR_STATE)
>>>> +			live_lrs |= 1UL << i;
>>>> +
>>>> +	if (live_lrs) {
>>>> +		writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
>>>> +		writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
>>>> +		for (i = 0; i < nr_lr; i++) {
>>>> +			u32 val = 0;
>>>> +
>>>> +			if (live_lrs & (1UL << i))
>>>> +				val = cpu_if->vgic_lr[i];
>>>> +
>>>> +			writel_relaxed(val, base + GICH_LR0 + (i * 4));
>>>> +		}
>>>> +	}
>>>> +
>>>> +	writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
>>>
>>> couldn't you optimize this out by storing the last read value and
>>> compare if anything changed?  (you'd have to invalidate the cached value
>>> on vcpu_put obviously).
>>
>> Yeah, very good point. Only the guest can update this, so we could even
>> move it to vcpu_load/vcpu_put entirely, and never save/restore it inside
>> the run loop.
> 
> If vcpu_load is called *after* loading incoming state on migration, this
> should work, yes.

Hmmm. That could be an issue, actually. I need to check if we do a
vcpu_load on SET_ONE_REG access. If we do, then vcpu_put will overwrite
the value we've written in the shadow copies by reading back the old
value from the HW.

I'll investigate when I get the time.

>>
>> I'll keep that for a further patch, as it requires a bit of infrastructure.
>>
> Sounds good.
> 
> We can probably also optimize the writing of the LRs further, but I
> figure it's not worth it as the interrupt delivery path is the slow path
> anyway and we should care about optimizing the common case.
> 
> I wouldn't think saving 2-3 writes to some LRs would be measurable for
> interrupt delivery anyhow.

I found out that some other simple optimizations did save about 800
cycles for a single interrupt injection, which is about 10% of the
complete exit/enter path.

Guilty ones are the maintenance interrupt status registers (MISR, EISR),
and zeroing of LRs. That's with GICv2 though, and GICv3 seems less
sensitive to that kind of thing...

But I agree with you: this is a fairly slow path overall, and we'll
quickly approach the point of diminishing returns.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

Thread overview: 30+ messages
2016-02-08 11:40 [PATCH 0/8] KVM/ARM: Guest Entry/Exit optimizations Marc Zyngier
2016-02-08 11:40 ` [PATCH 1/8] arm64: KVM: Switch the sys_reg search to be a binary search Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-10 13:49   ` Alex Bennée
2016-02-10 14:00     ` Marc Zyngier
2016-02-08 11:40 ` [PATCH 2/8] ARM: KVM: Properly sort the invariant table Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-08 11:40 ` [PATCH 3/8] ARM: KVM: Enforce sorting of all CP tables Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-08 11:40 ` [PATCH 4/8] ARM: KVM: Rename struct coproc_reg::is_64 to is_64bit Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-08 11:40 ` [PATCH 5/8] ARM: KVM: Switch the CP reg search to be a binary search Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-08 11:40 ` [PATCH 6/8] KVM: arm/arm64: timer: Add active state caching Marc Zyngier
2016-02-10 12:44   ` Christoffer Dall
2016-02-08 11:40 ` [PATCH 7/8] KVM: arm/arm64: Avoid accessing GICH registers Marc Zyngier
2016-02-10 12:45   ` Christoffer Dall
2016-02-10 13:34     ` Marc Zyngier
2016-02-10 17:30       ` Christoffer Dall
2016-02-10 17:43         ` Marc Zyngier [this message]
2016-02-08 11:40 ` [PATCH 8/8] KVM: arm64: Avoid accessing ICH registers Marc Zyngier
2016-02-10 12:45   ` Christoffer Dall
2016-02-10 16:47     ` Marc Zyngier
2016-02-09 20:59 ` [PATCH 0/8] KVM/ARM: Guest Entry/Exit optimizations Christoffer Dall
2016-02-10  8:34   ` Marc Zyngier
2016-02-10 12:02     ` Andrew Jones
2016-02-10 12:24       ` Marc Zyngier
2016-02-10 20:40 ` Christoffer Dall
2016-02-16 20:05   ` Marc Zyngier
2016-02-17  9:15     ` Christoffer Dall
