From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 14 Nov 2014 11:42:37 +1100
From: David Gibson
Message-ID: <20141114004237.GD18600@voom.fritz.box>
In-Reply-To: <5464C207.3020903@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH v3 4/4] target-ppc: Handle ibm,nmi-register RTAS call
To: Aravinda Prasad
Cc: qemu-ppc@nongnu.org, benh@au1.ibm.com, aik@au1.ibm.com,
	qemu-devel@nongnu.org, paulus@samba.org

On Thu, Nov 13, 2014 at 08:06:55PM +0530, Aravinda Prasad wrote:
> 
> 
> On Thursday 13 November 2014 06:14 PM, David Gibson wrote:
> > On Thu, Nov 13, 2014 at 05:18:16PM +0530, Aravinda Prasad wrote:
> >> On Thursday 13 November 2014 04:02 PM, David Gibson wrote:
> >>> On Thu, Nov 13, 2014 at 11:28:30AM +0530, Aravinda Prasad wrote:
> > [snip]
> >>>>>>> Having to retry the hcall from here seems very awkward.
> >>>>>>> This is a private hcall, so you can define it to do whatever
> >>>>>>> retries are necessary internally (and I don't think your current
> >>>>>>> implementation can fail anyway).
> >>>>>>
> >>>>>> Retrying is required in the cases when multi-processors experience
> >>>>>> machine check at or about the same time. As per PAPR, subsequent
> >>>>>> processors should serialize and wait for the first processor to
> >>>>>> issue the ibm,nmi-interlock call. The second processor retries if
> >>>>>> the first processor which received a machine check is still reading
> >>>>>> the error log and is yet to issue the ibm,nmi-interlock call.
> >>>>>
> >>>>> Hmm.. ok. But I don't see any mechanism in the patches by which
> >>>>> H_REPORT_MC_ERR will report failure if another CPU has an MC in
> >>>>> progress.
> >>>>
> >>>> h_report_mc_err returns 0 if another VCPU is processing machine check
> >>>> and in that case we retry. h_report_mc_err returns the error log
> >>>> address if no other VCPU is processing machine check.
> >>>
> >>> Uh.. how?  I'm only seeing one return statement in the implementation
> >>> in 3/4.
> >>
> >> This part is in 4/4 which handles the ibm,nmi-interlock call in
> >> h_report_mc_err()
> >>
> >> +    if (mc_in_progress == 1) {
> >> +        return 0;
> >> +    }
> > 
> > Ah, right, missed the change to h_report_mc_err() in the later patch.
> > 
> >>>>>> Retrying cannot be done internally in the h_report_mc_err hcall:
> >>>>>> only one thread can succeed entering qemu upon parallel hcall and
> >>>>>> hence retrying inside the hcall will not allow the
> >>>>>> ibm,nmi-interlock from the first CPU to succeed.
> >>>>>
> >>>>> It's possible, but would require some fiddling inside the h_call to
> >>>>> unlock and wait for the other CPUs to finish, so yes, it might be
> >>>>> more trouble than it's worth.
> >>>>>
> >>>>>>>> +        mtsprg  2,4
> >>>>>>>
> >>>>>>> Um.. doesn't this clobber the value of r3 you saved in SPRG2 just
> >>>>>>> above.
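[Editor's note: the serialization scheme described above -- h_report_mc_err
returning 0 while another CPU is still reading the error log, and
ibm,nmi-interlock releasing it -- can be sketched as a small toy model.
This is an illustrative sketch only, not the patch's actual code; the
constant ERROR_LOG_ADDR and the function names here are made up.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the PAPR machine-check serialization discussed in the
 * thread.  One guest CPU at a time "owns" the RTAS error log:
 * h_report_mc_err() returns 0 (meaning "retry") while another CPU holds
 * it, and the ibm,nmi-interlock call releases it.
 */

#define ERROR_LOG_ADDR 0x7000u   /* made-up guest-physical address */

static int mc_in_progress;       /* set while a CPU is reading the log */

/* Called by a CPU that has just taken a machine check. */
uint32_t h_report_mc_err(void)
{
    if (mc_in_progress == 1) {
        return 0;                /* another CPU owns the log: caller retries */
    }
    mc_in_progress = 1;
    return ERROR_LOG_ADDR;       /* caller may now read the error log */
}

/* Called by the owning CPU once it has consumed the error log. */
void ibm_nmi_interlock(void)
{
    mc_in_progress = 0;
}
```

A second CPU taking a machine check while the first still owns the log
gets 0 back and must re-issue the hcall, which is exactly why the retry
cannot live inside the hcall itself: the owning CPU's interlock call
would never get a chance to run.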
> >>>>>> The r3 saved in SPRG2 is moved to the rtas area in the private
> >>>>>> hcall and hence it is fine to clobber r3 here.
> >>>>>
> >>>>> Ok, if you're going to do some magic register saving inside the
> >>>>> HCALL, why not do the SRR[01] and CR restoration inside there as
> >>>>> well.
> >>>>
> >>>> SRR0/1 is clobbered while returning from the HCALL and hence cannot
> >>>> be restored in the HCALL. For CR, we need to do the restoration here
> >>>> as we clobber CR after returning from the HCALL (the instruction
> >>>> checking the return value of the hcall clobbers CR).
> >>>
> >>> Hrm.  AFAICT SRR0/1 shouldn't be clobbered when returning from an
> >>
> >> As the hcall is an interrupt, SRR0 is set to nip and SRR1 to msr just
> >> before executing rfid.
> >
> > AFAICT the return path from the hypervisor - including for hcalls -
> > uses HSRR0/1 and hrfid, so ordinary SRR0/SRR1 should be ok.

> I see SRR0 and SRR1 clobbered when the HCALL from the guest returns.
> Previous discussion on this is in the link below:
> 
> http://lists.nongnu.org/archive/html/qemu-devel/2014-09/msg01148.html

Hrm.  Well, I guess if it happened it happened, but Alex's explanation
for why doesn't make sense to me.  Did you execute
cpu_synchronize_state() *before* attempting to set SRR0/1 in the hcall?

> Further, I searched the QEMU source code but could not find whether it
> is using rfid/hrfid. However, the ISA for the sc instruction mentions
> that SRR0 and SRR1 are modified.

Well, of course it isn't in the qemu source; the low-level return to
the guest is within the host kernel, specifically fast_guest_return in
arch/powerpc/kvm/book3s_hv_rmhandlers.S, which uses hrfid.

If I'm reading the ISA correctly then yes, SRR0/1 are clobbered on
entry, but that's on *entry*, so they can be overwritten by the hcall
handler itself.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
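[Editor's note: the guest-side behaviour both participants agree on --
re-issue the private hcall until it returns a non-zero error-log
address -- amounts to the loop sketched below. The helper names are
hypothetical stand-ins, not the real guest firmware code; here
try_hcall() simulates an hcall that is rejected a few times before the
owning CPU issues its interlock.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of a secondary CPU's retry loop: keep issuing the hcall while
 * it returns 0 (another CPU still owns the error log).  try_hcall()
 * stands in for the real hcall and succeeds after `busy_turns`
 * rejections, modelling the first CPU eventually calling
 * ibm,nmi-interlock.
 */

static int busy_turns = 3;       /* simulated rejections before success */

static uint32_t try_hcall(void)
{
    if (busy_turns > 0) {
        busy_turns--;
        return 0;                /* error log still owned elsewhere */
    }
    return 0x7000u;              /* made-up error-log address */
}

uint32_t report_mc_err_with_retry(void)
{
    uint32_t log_addr;

    do {
        log_addr = try_hcall();  /* a real guest would pause/yield here */
    } while (log_addr == 0);

    return log_addr;
}
```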