Date: Thu, 13 Nov 2014 21:32:35 +1100
From: David Gibson
Message-ID: <20141113103235.GK7291@voom.fritz.box>
References: <20141105071019.26196.93729.stgit@aravindap> <20141105071315.26196.68104.stgit@aravindap> <20141111031635.GF15270@voom.redhat.com> <5461B04F.5080204@linux.vnet.ibm.com> <20141113035206.GH7291@voom.fritz.box> <54644886.9050803@linux.vnet.ibm.com>
In-Reply-To: <54644886.9050803@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH v3 4/4] target-ppc: Handle ibm,nmi-register RTAS call
To: Aravinda Prasad
Cc: qemu-ppc@nongnu.org, benh@au1.ibm.com, aik@au1.ibm.com, qemu-devel@nongnu.org, paulus@samba.org

On Thu, Nov 13, 2014 at 11:28:30AM +0530, Aravinda Prasad wrote:
> 
> On Thursday 13 November 2014 09:22 AM, David Gibson wrote:
> > On Tue, Nov 11, 2014 at 12:14:31PM +0530, Aravinda Prasad wrote:
> >> On Tuesday 11 November 2014 08:46 AM, David Gibson wrote:
> >>> On Wed, Nov 05, 2014 at 12:43:15PM +0530, Aravinda Prasad wrote:
> > [snip]
> >>>> +	. = 0x200
> >>>> +	/*
> >>>> +	 * Trampoline saves r3 in sprg2 and issues private hcall
> >>>> +	 * to request qemu to build error log. QEMU builds the
> >>>> +	 * error log, copies to rtas-blob and returns the address.
> >>>> +	 * The initial 16 bytes at the returned address consist of saved
> >>>> +	 * srr0 and srr1, which we restore, and pass on the actual error
> >>>> +	 * log address to the OS machine check notification
> >>>> +	 * routine.
> >>>> +	 *
> >>>> +	 * All the below instructions are copied to interrupt vector
> >>>> +	 * 0x200 at the time of handling the ibm,nmi-register rtas call.
> >>>> +	 */
> >>>> +	mtsprg	2,3
> >>>> +	li	3,0
> >>>> +	/*
> >>>> +	 * ori r3,r3,KVMPPC_H_REPORT_MC_ERR. The KVMPPC_H_REPORT_MC_ERR
> >>>> +	 * value is patched below
> >>>> +	 */
> >>>> +1:	ori	3,3,0
> >>>> +	sc	1		/* Issue H_CALL */
> >>>> +	cmpdi	cr0,3,0
> >>>> +	beq	cr0,1b		/* retry KVMPPC_H_REPORT_MC_ERR */
> >>>
> >>> Having to retry the hcall from here seems very awkward.  This is a
> >>> private hcall, so you can define it to do whatever retries are
> >>> necessary internally (and I don't think your current implementation
> >>> can fail anyway).
> >>
> >> Retrying is required in the case when multiple processors experience
> >> a machine check at or about the same time. As per PAPR, subsequent
> >> processors should serialize and wait for the first processor to issue
> >> the ibm,nmi-interlock call. The second processor retries if the first
> >> processor which received a machine check is still reading the error log
> >> and is yet to issue the ibm,nmi-interlock call.
> > 
> > Hmm.. ok.  But I don't see any mechanism in the patches by which
> > H_REPORT_MC_ERR will report failure if another CPU has an MC in
> > progress.
> 
> h_report_mc_err returns 0 if another VCPU is processing a machine check,
> and in that case we retry. h_report_mc_err returns the error log address
> if no other VCPU is processing a machine check.

Uh.. how?  I'm only seeing one return statement in the implementation
in 3/4.
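For the record, the serialization semantics being described would presumably look something like the sketch below on the qemu side. This is just my reading of the protocol, with invented names (mc_owner, RTAS_ERRLOG_ADDR, nmi_interlock), not the actual code from 3/4:

```c
#include <assert.h>
#include <stdint.h>

/* Invented guest-physical address of the error log in the rtas blob. */
#define RTAS_ERRLOG_ADDR 0x1000ULL

/* Which vcpu currently owns the machine-check path; -1 means none. */
static int mc_owner = -1;

/*
 * Sketch of H_REPORT_MC_ERR: the first vcpu to arrive claims the
 * machine-check path and gets the error-log address back; any other
 * vcpu gets 0 and is expected to retry from the 0x200 trampoline.
 */
uint64_t h_report_mc_err(int vcpu)
{
    if (mc_owner != -1 && mc_owner != vcpu) {
        /* Another vcpu is between H_REPORT_MC_ERR and nmi-interlock. */
        return 0;
    }
    mc_owner = vcpu;
    /* ... build the error log, fold in the r3 saved in SPRG2,
     *     copy everything into the rtas blob ... */
    return RTAS_ERRLOG_ADDR;
}

/* Sketch of ibm,nmi-interlock: releases the path for the next vcpu. */
void nmi_interlock(int vcpu)
{
    assert(mc_owner == vcpu);
    mc_owner = -1;
}
```

i.e. the retry loop in the vector spins on the hcall returning 0 until the first CPU's ibm,nmi-interlock call releases the path.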
> >> Retrying cannot be done internally in the h_report_mc_err hcall: only one
> >> thread can succeed entering qemu upon a parallel hcall, and hence retrying
> >> inside the hcall will not allow the ibm,nmi-interlock from the first CPU
> >> to succeed.
> > 
> > It's possible, but would require some fiddling inside the h_call to
> > unlock and wait for the other CPUs to finish, so yes, it might be more
> > trouble than it's worth.
> > 
> >>
> >>>
> >>>> +	mtsprg	2,4
> >>>
> >>> Um.. doesn't this clobber the value of r3 you saved in SPRG2 just above.
> >>
> >> The r3 saved in SPRG2 is moved to the rtas area in the private hcall, and
> >> hence it is fine to clobber r3 here.
> > 
> > Ok, if you're going to do some magic register saving inside the HCALL,
> > why not do the SRR[01] and CR restoration inside there as well.
> 
> SRR0/1 is clobbered while returning from the HCALL and hence cannot be
> restored in the HCALL. For CR, we need to do the restoration here, as we
> clobber CR after returning from the HCALL (the instruction checking the
> return value of the hcall clobbers CR).

Hrm.  AFAICT SRR0/1 shouldn't be clobbered when returning from an
hcall.

You're right about CR though.  Or more precisely, you can't both
restore CR and use it to return the success state of the hcall.

Well.. actually, you could use crN to return the hcall success, since
you're only restoring cr0 anyway (although only restoring cr0 seems
odd to me in the first place).

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson