Date: Tue, 5 May 2015 08:42:36 +0200
From: Thomas Huth
To: David Gibson
Cc: lvivier@redhat.com, qemu-ppc@nongnu.org, agraf@suse.de, stefanha@redhat.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] pseries: Enable in-kernel H_LOGICAL_CI_{LOAD, STORE} implementations
Message-ID: <20150505084236.6bc3a900@thh440s>
In-Reply-To: <1430787601-3320-1-git-send-email-david@gibson.dropbear.id.au>
References: <1430787601-3320-1-git-send-email-david@gibson.dropbear.id.au>

On Tue, 5 May 2015 11:00:01 +1000
David Gibson wrote:

> qemu currently implements the hypercalls H_LOGICAL_CI_LOAD and
> H_LOGICAL_CI_STORE as PAPR extensions. These are used by the SLOF firmware
> for IO, because performing cache inhibited MMIO accesses with the MMU off
> (real mode) is very awkward on POWER.
>
> This approach breaks when SLOF needs to access IO devices implemented
> within KVM instead of in qemu. The simplest example would be virtio-blk
> using an iothread, because the iothread / dataplane mechanism relies on
> an in-kernel implementation of the virtio queue notification MMIO.
>
> To fix this, an in-kernel implementation of these hypercalls has been made
> (kernel commit 99342cf "kvmppc: Implement H_LOGICAL_CI_{LOAD,STORE} in KVM");
> however, the hypercalls still need to be enabled from qemu. This performs
> the necessary calls to do so.
>
> It would be nice to provide some warning if we encounter a problematic
> device with a kernel which doesn't support the new calls. Unfortunately,
> I can't see a way to detect this case which won't either warn in far too
> many cases that will probably work, or which is horribly invasive.

Hmm, is there a function that returns a list of all objects of a given
type? Something like object_resolve_path(TYPE_VIRTIO_BLK, NULL), but
returning all matching objects instead of only one? ... then you could
step through all the possibly affected devices and check whether they
need this kernel feature.
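Untested sketch of what I mean (the helper names are made up, and it would
need qom/object.h and hw/virtio/virtio-blk.h): walking the QOM composition
tree from the root with object_child_foreach() and object_dynamic_cast()
should visit every TYPE_VIRTIO_BLK instance:

static int check_virtio_blk(Object *child, void *opaque)
{
    int *num_found = opaque;

    if (object_dynamic_cast(child, TYPE_VIRTIO_BLK)) {
        (*num_found)++;
        /* ... e.g. check here whether the device uses an iothread ... */
    }
    /* object_child_foreach() is not recursive, so descend manually */
    object_child_foreach(child, check_virtio_blk, opaque);
    return 0;   /* 0 = keep iterating over the remaining siblings */
}

and then somewhere in the machine code:

    int num_found = 0;

    object_child_foreach(object_get_root(), check_virtio_blk, &num_found);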
> Signed-off-by: David Gibson
> ---
>  hw/ppc/spapr.c       |  5 +++++
>  target-ppc/kvm.c     | 18 ++++++++++++++++++
>  target-ppc/kvm_ppc.h |  5 +++++
>  3 files changed, 28 insertions(+)
>
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 644689a..3b5768b 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1504,6 +1504,11 @@ static void ppc_spapr_init(MachineState *machine)
>          qemu_register_reset(spapr_cpu_reset, cpu);
>      }
>
> +    if (kvm_enabled()) {
> +        /* Enable H_LOGICAL_CI_* so SLOF can talk to in-kernel devices */
> +        kvmppc_enable_logical_ci_hcalls();
> +    }
> +
>      /* allocate RAM */
>      spapr->ram_limit = ram_size;
>      memory_region_allocate_system_memory(ram, NULL, "ppc_spapr.ram",
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 12328a4..fde26d0 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1882,6 +1882,24 @@ int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_len)
>      return 0;
>  }
>
> +static inline int kvmppc_enable_hcall(KVMState *s, target_ulong hcall)
> +{
> +    return kvm_vm_enable_cap(s, KVM_CAP_PPC_ENABLE_HCALL, 0, hcall, 1);
> +}
> +
> +void kvmppc_enable_logical_ci_hcalls(void)
> +{
> +    /*
> +     * FIXME: it would be nice if we could detect the cases where
> +     * we're using a device which requires the in kernel
> +     * implementation of these hcalls, but the kernel lacks them and
> +     * produce a warning. So far I haven't seen a practical way to do
> +     * that
> +     */

I'd maybe drop or shorten this comment (at least the last sentence).

> +    kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_LOAD);
> +    kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_STORE);
> +}
> +
>  void kvmppc_set_papr(PowerPCCPU *cpu)
>  {
>      CPUState *cs = CPU(cpu);
> diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h
> index 2e0224c..4d30e27 100644
> --- a/target-ppc/kvm_ppc.h
> +++ b/target-ppc/kvm_ppc.h
> @@ -24,6 +24,7 @@ bool kvmppc_get_host_serial(char **buf);
>  int kvmppc_get_hasidle(CPUPPCState *env);
>  int kvmppc_get_hypercall(CPUPPCState *env, uint8_t *buf, int buf_len);
>  int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level);
> +void kvmppc_enable_logical_ci_hcalls(void);
>  void kvmppc_set_papr(PowerPCCPU *cpu);
>  int kvmppc_set_compat(PowerPCCPU *cpu, uint32_t cpu_version);
>  void kvmppc_set_mpic_proxy(PowerPCCPU *cpu, int mpic_proxy);
> @@ -107,6 +108,10 @@ static inline int kvmppc_set_interrupt(PowerPCCPU *cpu, int irq, int level)
>      return -1;
>  }
>
> +static inline void kvmppc_enable_logical_ci_hcalls(void)
> +{
> +}
> +
>  static inline void kvmppc_set_papr(PowerPCCPU *cpu)
>  {
>  }

Reviewed-by: Thomas Huth
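PS: For reference, the kind of setup that hits the problematic path
(virtio-blk served by an iothread on a KVM pseries guest) would be
something along these lines - the image name and IDs are of course
made up:

  qemu-system-ppc64 -machine pseries,accel=kvm \
      -object iothread,id=iothread0 \
      -drive file=guest.img,if=none,id=drive0,format=raw \
      -device virtio-blk-pci,drive=drive0,iothread=iothread0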