From: Alexander Graf
Subject: Re: [PATCH] KVM: PPC: Book3S PR: Rework SLB switching code
Date: Wed, 28 May 2014 16:13:10 +0200
Message-ID: <5385EEF6.1080601@suse.de>
To: Paul Mackerras
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
In-Reply-To: <20140517053616.GB22449@iris.ozlabs.ibm.com>

On 17.05.14 07:36, Paul Mackerras wrote:
> On Thu, May 15, 2014 at 02:43:53PM +0200, Alexander Graf wrote:
>> On LPAR guest systems Linux enables the shadow SLB to indicate to the
>> hypervisor a number of SLB entries that always have to be available.
>>
>> Today we go through this shadow SLB and disable all ESIDs' valid bits.
>> However, pHyp doesn't like this approach very much and honors us with
>> fancy machine checks.
>>
>> Fortunately the shadow SLB descriptor also has an entry that indicates
>> the number of valid entries following. During the lifetime of a guest
>> we can just swap that value to 0 and don't have to worry about the
>> SLB restoration magic.
>
> I think this is a great idea; I have been thinking we should do
> something like this.
>
>> While we're touching the code, let's also make it more readable (get
>> rid of rldicl), allow it to deal with a dynamic number of bolted
>> SLB entries, and only do shadow SLB swizzling on LPAR systems.
>>
>> Signed-off-by: Alexander Graf
>
> [snip]
>
>> +#define SHADOW_SLB_ENTRY_LEN	0x10
>
> Normally we'd define structure offsets/sizes like this in
> asm-offsets.c. However, since the structure can't change I guess this
> is OK.
>
>>  	/* Fill SLB with our shadow */
>>
>> +	lis	r7, SLB_ESID_V@h
>> +
>>  	lbz	r12, SVCPU_SLB_MAX(r3)
>>  	mulli	r12, r12, 16
>>  	addi	r12, r12, SVCPU_SLB
>> @@ -99,7 +76,7 @@ slb_loop_enter:
>>
>>  	ld	r10, 0(r11)
>>
>> -	rldicl.	r0, r10, 37, 63
>> +	and.	r9, r10, r7
>
> Or...
>
> 	andis.	r9, r10, SLB_ESID_V@h
>
> and save a register and an instruction.

Good idea :)

>>  	cmpd	cr0, r11, r12
>>  	blt	slb_loop_enter
>>
>> +	isync
>> +	sync
>
> Why?

Hrm, I guess I was trying to narrow down why things broke. I'll omit it
and see whether my test machine can still successfully run PR KVM.

>> +BEGIN_FW_FTR_SECTION
>> +
>> +	/* Declare SLB shadow as SLB_NUM_BOLTED entries big */
>> +
>> +	li	r8, SLB_NUM_BOLTED
>> +	stb	r8, 3(r11)
>
> Currently it's true that slb_shadow.persistent is always
> SLB_NUM_BOLTED, but if you are going to embed that assumption here in

We had that assumption before too ;)

> the KVM code you should at least add some comments over in
> arch/powerpc/mm/slb.c and in arch/powerpc/kernel/paca.c (where
> slb_shadow.persistent gets initialized) warning people that if they
> break that assumption they need to fix KVM code as well.

but I like warnings, so I'll add some.


Alex
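
For reference on the descriptor being discussed: the shadow SLB lives in
struct slb_shadow (arch/powerpc/include/asm/lppaca.h), whose first field,
persistent, is a big-endian u32 holding the number of entries the
hypervisor must restore. The sketch below is illustrative, not the patch
itself; the offsets reflect a reading of that struct, and the guest-entry
store is implied by the commit message rather than quoted above.

	/* struct slb_shadow layout (big endian):
	 *   0x00  u32 persistent      entries pHyp must restore
	 *   0x04  u32 buffer_length   total shadow buffer length
	 *   0x08  u64 reserved
	 *   0x10  save_area[0]        { u64 esid; u64 vsid; },
	 *                             0x10 bytes per entry, hence
	 *                             SHADOW_SLB_ENTRY_LEN 0x10
	 */

	/* r11 = address of the descriptor.  persistent is a big-endian
	 * u32, so its least significant byte sits at offset 3; a single
	 * byte store suffices while the count stays below 256. */
	li	r8, 0			/* guest entry: tell pHyp there */
	stb	r8, 3(r11)		/* is nothing to restore */

	/* ... run the guest ... */

	li	r8, SLB_NUM_BOLTED	/* guest exit: re-declare the */
	stb	r8, 3(r11)		/* bolted entries */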
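
On the valid-bit test itself, a side-by-side sketch of the three variants
mentioned above, assuming the usual definition SLB_ESID_V =
0x0000000008000000 (bit 36, the ESID valid bit):

	/* old code: rotate bit 36 into the LSB and isolate it */
	rldicl.	r0, r10, 37, 63

	/* v1 of this patch: materialize the mask once, test per entry */
	lis	r7, SLB_ESID_V@h	/* r7 = 0x08000000 */
	/* ... per-entry loop ... */
	and.	r9, r10, r7		/* CR0 nonzero iff entry valid */

	/* Paul's suggestion: the mask fits andis.'s shifted 16-bit
	 * immediate (0x0800 << 16), so the lis and the scratch
	 * register r7 both go away */
	andis.	r9, r10, SLB_ESID_V@h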