Date: Thu, 5 Feb 2015 13:55:56 +1100
From: David Gibson
To: Alexander Graf
Cc: aik@ozlabs.ru, qemu-ppc@nongnu.org, Paul Mackerras, qemu-devel@nongnu.org, mdroth@us.ibm.com
Subject: Re: [Qemu-devel] [RFC] pseries: Enable in-kernel H_LOGICAL_CI_{LOAD, STORE} implementations
Message-ID: <20150205025556.GH25675@voom.fritz.box>
In-Reply-To: <54D2BF4F.1030609@suse.de>
References: <1422943851-25836-1-git-send-email-david@gibson.dropbear.id.au>
 <20150203211906.GA13992@iris.ozlabs.ibm.com>
 <20150204013211.GU28703@voom.fritz.box>
 <54D23872.90007@suse.de>
 <20150205004812.GD25675@voom.fritz.box>
 <54D2BF4F.1030609@suse.de>

On Thu, Feb 05, 2015 at 01:54:39AM +0100, Alexander Graf wrote:
> 
> 
> On 05.02.15 01:48, David Gibson wrote:
> > On Wed, Feb 04, 2015 at 04:19:14PM +0100, Alexander Graf wrote:
> >>
> >>
> >> On 04.02.15 02:32, David Gibson wrote:
> >>> On Wed, Feb 04, 2015 at 08:19:06AM +1100, Paul Mackerras wrote:
> >>>> On Tue, Feb 03, 2015 at 05:10:51PM +1100, David Gibson wrote:
> >>>>> qemu currently implements the hypercalls H_LOGICAL_CI_LOAD and
> >>>>> H_LOGICAL_CI_STORE as PAPR extensions.  These are used by the SLOF firmware
> >>>>> for IO, because performing cache inhibited MMIO accesses with the MMU off
> >>>>> (real mode) is very awkward on POWER.
> >>>>>
> >>>>> This approach breaks when SLOF needs to access IO devices implemented
> >>>>> within KVM instead of in qemu.  The simplest example would be virtio-blk
> >>>>> using an iothread, because the iothread / dataplane mechanism relies on
> >>>>> an in-kernel implementation of the virtio queue notification MMIO.
> >>>>>
> >>>>> To fix this, an in-kernel implementation of these hypercalls has been made,
> >>>>> however, the hypercalls still need to be enabled from qemu.  This performs
> >>>>> the necessary calls to do so.
> >>>>>
> >>>>> Signed-off-by: David Gibson
> >>>>
> >>>> [snip]
> >>>>
> >>>>> +    ret1 = kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_LOAD);
> >>>>> +    if (ret1 != 0) {
> >>>>> +        fprintf(stderr, "Warning: error enabling H_LOGICAL_CI_LOAD in KVM:"
> >>>>> +                " %s\n", strerror(errno));
> >>>>> +    }
> >>>>> +
> >>>>> +    ret2 = kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_STORE);
> >>>>> +    if (ret2 != 0) {
> >>>>> +        fprintf(stderr, "Warning: error enabling H_LOGICAL_CI_STORE in KVM:"
> >>>>> +                " %s\n", strerror(errno));
> >>>>> +    }
> >>>>> +
> >>>>> +    if ((ret1 != 0) || (ret2 != 0)) {
> >>>>> +        fprintf(stderr, "Warning: Couldn't enable H_LOGICAL_CI_* in KVM, SLOF"
> >>>>> +                " may be unable to operate devices with in-kernel emulation\n");
> >>>>> +    }
> >>>>
> >>>> You'll always get these warnings if you're running on an old (meaning
> >>>> current upstream) kernel, which could be annoying.
> >>>
> >>> True.
> >>>
> >>>> Is there any way
> >>>> to tell whether you have configured any devices which need the
> >>>> in-kernel MMIO emulation and only warn if you have?
> >>>
> >>> In theory, I guess so.  In practice I can't see how you'd enumerate
> >>> all devices that might require kernel intervention without something
> >>> horribly invasive.
> >>
> >> We could WARN_ONCE in QEMU if we emulate such a hypercall, but its
> >> handler is io_mem_unassigned (or we add another minimum priority huge
> >> memory region on all 64bits of address space that reports the breakage).
> >
> > Would that work for the virtio+iothread case?  I had the impression
> > the kernel handled notification region was layered over the qemu
> > emulated region in that case.
> 
> IIRC we don't have a way to call back into kvm saying "please write to
> this in-kernel device". But we could at least defer the warning to a
> point where we know that we actually hit it.

Right, but I'm saying we might miss the warning in cases where we
want it, because the KVM device is shadowed by a qemu device, so qemu
won't see the IO as unassigned or unhandled.

In particular, I think that will happen in the case of virtio-blk
with iothread, which is the simplest case in which to observe the
problem.  The virtio-blk device exists in qemu and is functional, but
we rely on KVM catching the queue notification MMIO before it reaches
the qemu implementation of the rest of the device's IO space.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
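For reference, a minimal sketch of what the kvmppc_enable_hcall() calls in the quoted hunk presumably boil down to, assuming the helper simply turns on in-kernel handling of one hypercall via the per-VM KVM_CAP_PPC_ENABLE_HCALL capability; the body and includes below are an assumption about the QEMU build context, not necessarily the committed code:

```c
#include "cpu.h"           /* assumption: QEMU target-ppc unit, for target_ulong */
#include "sysemu/kvm.h"    /* KVMState, kvm_vm_enable_cap() */
#include <linux/kvm.h>     /* KVM_CAP_PPC_ENABLE_HCALL */

/* Enable in-kernel handling of a single hypercall on this VM.
 * With KVM_CAP_PPC_ENABLE_HCALL, args[0] carries the hypercall token and
 * args[1] = 1 enables (0 would disable) the in-kernel handler.  On kernels
 * without the capability the ioctl fails, QEMU keeps emulating the hcall
 * itself, and the warnings in the quoted hunk are what reports that. */
int kvmppc_enable_hcall(KVMState *s, target_ulong hcall)
{
    return kvm_vm_enable_cap(s, KVM_CAP_PPC_ENABLE_HCALL, 0, hcall, 1);
}
```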
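And a rough illustration of the "defer the warning until we actually hit it" idea discussed above: warn once from QEMU's H_LOGICAL_CI_LOAD/STORE emulation path when the target address resolves to nothing QEMU has mapped. This is only a sketch under assumptions: the helper name and hook point are invented, and it substitutes the public address_space_access_valid() check for a direct comparison against io_mem_unassigned (which lives in exec.c):

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "exec/memory.h"           /* address_space_access_valid() */
#include "exec/address-spaces.h"   /* address_space_memory */

/* Hypothetical hook (not in QEMU), called from the H_LOGICAL_CI_LOAD /
 * H_LOGICAL_CI_STORE handlers before performing the access for SLOF. */
static void warn_once_unhandled_ci(hwaddr addr, int size, bool is_write)
{
    static bool warned;

    if (warned) {
        return;
    }

    /* An access that no RAM or registered MMIO region will accept falls
     * through to the unassigned catch-all, so a failed validity check is
     * a hint that SLOF is poking a device that exists only inside KVM. */
    if (!address_space_access_valid(&address_space_memory, addr, size,
                                    is_write)) {
        fprintf(stderr, "Warning: H_LOGICAL_CI_%s to unhandled address 0x%"
                PRIx64 "; devices emulated only inside KVM may not work\n",
                is_write ? "STORE" : "LOAD", (uint64_t)addr);
        warned = true;
    }
}
```

As the reply above points out, a check like this would still stay silent for virtio-blk with an iothread, because the QEMU-side device makes the notification address look perfectly valid even though the in-kernel handler is what actually needs to see the MMIO.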