* [Qemu-devel] [PATCH v2 0/3] spapr: fix regression with older machine types
@ 2018-06-29 9:48 Greg Kurz
To: qemu-devel; +Cc: qemu-ppc, David Gibson, Cédric Le Goater

Since the recent cleanups to hide host configuration details from guests,
it isn't possible to start an older machine type with HV KVM [*]:

    qemu-system-ppc64: KVM doesn't support for base page shift 34

This basically boils down to the fact that it isn't safe to call the
kvmppc_hpt_needs_host_contiguous_pages() helper from a class init
function, because:

- KVM isn't initialized yet, and kvm_enabled() always returns false in
  this case. This causes kvmppc_hpt_needs_host_contiguous_pages() to do
  nothing, and we end up choosing a 16G default page size which is not
  supported by KVM.

- even if we drop kvm_enabled(), we then have the issue that
  kvmppc_hpt_needs_host_contiguous_pages() assumes CPUs are created,
  which isn't the case either.

The choice was made to initialize capabilities during machine init,
before creating the CPUs, and I don't think we should revert to the
previous behavior. Let's go forward instead and ensure we can retrieve
the MMU information from KVM before CPUs are created.

To fix this, we first change kvm_get_smmu_info() so that it doesn't need
a CPU object. This allows us to stop using first_cpu in
kvmppc_hpt_needs_host_contiguous_pages(). Then we delay the setting of
the default value to machine init time, so that we're sure that KVM is
fully initialized.

Please comment.

[*] it also breaks PR KVM, actually, but the error is different and I
need to dig some more.
Changes in v2:
- added patch "target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()"
- dropped patch "accel: forbid early use of kvm_enabled() and friends",
  to be sent separately

--
Greg

---

Greg Kurz (3):
      target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
      target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
      spapr: compute default value of "hpt-max-page-size" later

 hw/ppc/spapr.c      |  13 +++---
 hw/ppc/spapr_caps.c |  20 +++++++++
 target/ppc/kvm.c    | 118 +++++++++------------------------------------------
 3 files changed, 46 insertions(+), 105 deletions(-)
* [Qemu-devel] [PATCH v2 1/3] target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
@ 2018-06-29 9:48 Greg Kurz
To: qemu-devel; +Cc: qemu-ppc, David Gibson, Cédric Le Goater

Now that we're checking that our MMU configuration is supported by KVM,
rather than adjusting it to KVM, it doesn't really make sense to have a
fallback for kvm_get_smmu_info(). If KVM is too old or buggy to provide
the details, we should rather treat this as an error.

This patch thus adds error reporting to kvm_get_smmu_info() and gets
rid of the fallback code. QEMU will now terminate if KVM fails to
provide MMU details. This may break some very old setups, but the
simplification is worth the sacrifice.

Signed-off-by: Greg Kurz <groug@kaod.org>
---
 target/ppc/kvm.c | 117 +++++++++--------------------------------------------
 1 file changed, 20 insertions(+), 97 deletions(-)

diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index 4df4ff6cbff2..b6000f12b98f 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -248,107 +248,25 @@ static int kvm_booke206_tlb_init(PowerPCCPU *cpu)
 
 
 #if defined(TARGET_PPC64)
-static void kvm_get_fallback_smmu_info(PowerPCCPU *cpu,
-                                       struct kvm_ppc_smmu_info *info)
+static void kvm_get_smmu_info(PowerPCCPU *cpu, struct kvm_ppc_smmu_info *info,
+                              Error **errp)
 {
-    CPUPPCState *env = &cpu->env;
     CPUState *cs = CPU(cpu);
+    int ret;
 
-    memset(info, 0, sizeof(*info));
-
-    /* We don't have the new KVM_PPC_GET_SMMU_INFO ioctl, so
-     * need to "guess" what the supported page sizes are.
-     *
-     * For that to work we make a few assumptions:
-     *
-     * - Check whether we are running "PR" KVM which only supports 4K
-     *   and 16M pages, but supports them regardless of the backing
-     *   store characteritics. We also don't support 1T segments.
-     *
-     *   This is safe as if HV KVM ever supports that capability or PR
-     *   KVM grows supports for more page/segment sizes, those versions
-     *   will have implemented KVM_CAP_PPC_GET_SMMU_INFO and thus we
-     *   will not hit this fallback
-     *
-     * - Else we are running HV KVM. This means we only support page
-     *   sizes that fit in the backing store. Additionally we only
-     *   advertize 64K pages if the processor is ARCH 2.06 and we assume
-     *   P7 encodings for the SLB and hash table. Here too, we assume
-     *   support for any newer processor will mean a kernel that
-     *   implements KVM_CAP_PPC_GET_SMMU_INFO and thus doesn't hit
-     *   this fallback.
-     */
-    if (kvmppc_is_pr(cs->kvm_state)) {
-        /* No flags */
-        info->flags = 0;
-        info->slb_size = 64;
-
-        /* Standard 4k base page size segment */
-        info->sps[0].page_shift = 12;
-        info->sps[0].slb_enc = 0;
-        info->sps[0].enc[0].page_shift = 12;
-        info->sps[0].enc[0].pte_enc = 0;
-
-        /* Standard 16M large page size segment */
-        info->sps[1].page_shift = 24;
-        info->sps[1].slb_enc = SLB_VSID_L;
-        info->sps[1].enc[0].page_shift = 24;
-        info->sps[1].enc[0].pte_enc = 0;
-    } else {
-        int i = 0;
-
-        /* HV KVM has backing store size restrictions */
-        info->flags = KVM_PPC_PAGE_SIZES_REAL;
-
-        if (ppc_hash64_has(cpu, PPC_HASH64_1TSEG)) {
-            info->flags |= KVM_PPC_1T_SEGMENTS;
-        }
-
-        if (env->mmu_model == POWERPC_MMU_2_06 ||
-            env->mmu_model == POWERPC_MMU_2_07) {
-            info->slb_size = 32;
-        } else {
-            info->slb_size = 64;
-        }
-
-        /* Standard 4k base page size segment */
-        info->sps[i].page_shift = 12;
-        info->sps[i].slb_enc = 0;
-        info->sps[i].enc[0].page_shift = 12;
-        info->sps[i].enc[0].pte_enc = 0;
-        i++;
-
-        /* 64K on MMU 2.06 and later */
-        if (env->mmu_model == POWERPC_MMU_2_06 ||
-            env->mmu_model == POWERPC_MMU_2_07) {
-            info->sps[i].page_shift = 16;
-            info->sps[i].slb_enc = 0x110;
-            info->sps[i].enc[0].page_shift = 16;
-            info->sps[i].enc[0].pte_enc = 1;
-            i++;
-        }
-
-        /* Standard 16M large page size segment */
-        info->sps[i].page_shift = 24;
-        info->sps[i].slb_enc = SLB_VSID_L;
-        info->sps[i].enc[0].page_shift = 24;
-        info->sps[i].enc[0].pte_enc = 0;
+    if (!kvm_check_extension(cs->kvm_state, KVM_CAP_PPC_GET_SMMU_INFO)) {
+        error_setg(errp, "KVM doesn't expose the MMU features it supports");
+        error_append_hint(errp, "Consider switching to a newer KVM\n");
+        return;
     }
-}
-
-static void kvm_get_smmu_info(PowerPCCPU *cpu, struct kvm_ppc_smmu_info *info)
-{
-    CPUState *cs = CPU(cpu);
-    int ret;
 
-    if (kvm_check_extension(cs->kvm_state, KVM_CAP_PPC_GET_SMMU_INFO)) {
-        ret = kvm_vm_ioctl(cs->kvm_state, KVM_PPC_GET_SMMU_INFO, info);
-        if (ret == 0) {
-            return;
-        }
+    ret = kvm_vm_ioctl(cs->kvm_state, KVM_PPC_GET_SMMU_INFO, info);
+    if (ret == 0) {
+        return;
     }
 
-    kvm_get_fallback_smmu_info(cpu, info);
+    error_setg_errno(errp, -ret,
+                     "KVM failed to provide the MMU features it supports");
 }
 
 struct ppc_radix_page_info *kvm_get_radix_page_info(void)
@@ -415,7 +333,7 @@ bool kvmppc_hpt_needs_host_contiguous_pages(void)
         return false;
     }
 
-    kvm_get_smmu_info(cpu, &smmu_info);
+    kvm_get_smmu_info(cpu, &smmu_info, &error_fatal);
     return !!(smmu_info.flags & KVM_PPC_PAGE_SIZES_REAL);
 }
 
@@ -423,13 +341,18 @@ void kvm_check_mmu(PowerPCCPU *cpu, Error **errp)
 {
     struct kvm_ppc_smmu_info smmu_info;
     int iq, ik, jq, jk;
+    Error *local_err = NULL;
 
     /* For now, we only have anything to check on hash64 MMUs */
     if (!cpu->hash64_opts || !kvm_enabled()) {
         return;
     }
 
-    kvm_get_smmu_info(cpu, &smmu_info);
+    kvm_get_smmu_info(cpu, &smmu_info, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
 
     if (ppc_hash64_has(cpu, PPC_HASH64_1TSEG)
         && !(smmu_info.flags & KVM_PPC_1T_SEGMENTS)) {
@@ -2168,7 +2091,7 @@ uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
 
     /* Find the largest hardware supported page size that's less than
      * or equal to the (logical) backing page size of guest RAM */
-    kvm_get_smmu_info(POWERPC_CPU(first_cpu), &info);
+    kvm_get_smmu_info(POWERPC_CPU(first_cpu), &info, &error_fatal);
     rampagesize = qemu_getrampagesize();
     best_page_shift = 0;
* Re: [Qemu-devel] [PATCH v2 1/3] target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
@ 2018-07-02 5:54 David Gibson
To: Greg Kurz; +Cc: qemu-devel, qemu-ppc, Cédric Le Goater

On Fri, Jun 29, 2018 at 11:48:16AM +0200, Greg Kurz wrote:
> Now that we're checking our MMU configuration is supported by KVM,
> rather than adjusting it to KVM, it doesn't really make sense to
> have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy
> to provide the details, we should rather treat this as an error.
>
> This patch thus adds error reporting to kvm_get_smmu_info() and get
> rid of the fallback code. QEMU will now terminate if KVM fails to
> provide MMU details. This may break some very old setups, but the
> simplification is worth the sacrifice.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>

Ok, so immediately failing with an old kernel wasn't actually what I
had in mind.  Instead I was suggesting just skipping the checks and
hoping for the best.  Still, either way the relevant kernels are
really ancient now, so I'll apply anyway.
[quoted patch trimmed]

--
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
* [Qemu-devel] [PATCH v2 2/3] target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
@ 2018-06-29 9:48 Greg Kurz
To: qemu-devel; +Cc: qemu-ppc, David Gibson, Cédric Le Goater

In a future patch the machine code will need to retrieve the MMU
information from KVM during machine initialization, before the CPUs
are created.

Actually, KVM_PPC_GET_SMMU_INFO is a VM-level ioctl, and thus we don't
need to have a CPU object around. We just need KVM to be initialized,
and we can use the kvm_state global. This patch does just that.

Signed-off-by: Greg Kurz <groug@kaod.org>
---
v2: - fallback code was dropped by a previous patch
---
 target/ppc/kvm.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
index b6000f12b98f..9211ee2ee1a0 100644
--- a/target/ppc/kvm.c
+++ b/target/ppc/kvm.c
@@ -248,19 +248,19 @@ static int kvm_booke206_tlb_init(PowerPCCPU *cpu)
 
 
 #if defined(TARGET_PPC64)
-static void kvm_get_smmu_info(PowerPCCPU *cpu, struct kvm_ppc_smmu_info *info,
-                              Error **errp)
+static void kvm_get_smmu_info(struct kvm_ppc_smmu_info *info, Error **errp)
 {
-    CPUState *cs = CPU(cpu);
     int ret;
 
-    if (!kvm_check_extension(cs->kvm_state, KVM_CAP_PPC_GET_SMMU_INFO)) {
+    assert(kvm_state != NULL);
+
+    if (!kvm_check_extension(kvm_state, KVM_CAP_PPC_GET_SMMU_INFO)) {
         error_setg(errp, "KVM doesn't expose the MMU features it supports");
         error_append_hint(errp, "Consider switching to a newer KVM\n");
         return;
     }
 
-    ret = kvm_vm_ioctl(cs->kvm_state, KVM_PPC_GET_SMMU_INFO, info);
+    ret = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_SMMU_INFO, info);
     if (ret == 0) {
         return;
     }
@@ -326,14 +326,13 @@ target_ulong kvmppc_configure_v3_mmu(PowerPCCPU *cpu,
 
 bool kvmppc_hpt_needs_host_contiguous_pages(void)
 {
-    PowerPCCPU *cpu = POWERPC_CPU(first_cpu);
     static struct kvm_ppc_smmu_info smmu_info;
 
     if (!kvm_enabled()) {
         return false;
     }
 
-    kvm_get_smmu_info(cpu, &smmu_info, &error_fatal);
+    kvm_get_smmu_info(&smmu_info, &error_fatal);
     return !!(smmu_info.flags & KVM_PPC_PAGE_SIZES_REAL);
 }
 
@@ -348,7 +347,7 @@ void kvm_check_mmu(PowerPCCPU *cpu, Error **errp)
         return;
     }
 
-    kvm_get_smmu_info(cpu, &smmu_info, &local_err);
+    kvm_get_smmu_info(&smmu_info, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
@@ -2091,7 +2090,7 @@ uint64_t kvmppc_rma_size(uint64_t current_size, unsigned int hash_shift)
 
     /* Find the largest hardware supported page size that's less than
      * or equal to the (logical) backing page size of guest RAM */
-    kvm_get_smmu_info(POWERPC_CPU(first_cpu), &info, &error_fatal);
+    kvm_get_smmu_info(&info, &error_fatal);
     rampagesize = qemu_getrampagesize();
     best_page_shift = 0;
* Re: [Qemu-devel] [PATCH v2 2/3] target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
@ 2018-07-02 6:01 David Gibson
To: Greg Kurz; +Cc: qemu-devel, qemu-ppc, Cédric Le Goater

On Fri, Jun 29, 2018 at 11:48:32AM +0200, Greg Kurz wrote:
> In a future patch the machine code will need to retrieve the MMU
> information from KVM during machine initialization before the CPUs
> are created.
>
> Actually, KVM_PPC_GET_SMMU_INFO is a VM class ioctl, and thus, we
> don't need to have a CPU object around. We just need for KVM to
> be initialized and use the kvm_state global. This patch just does
> that.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>

Applied to ppc-for-3.0.

[quoted patch trimmed]
* [Qemu-devel] [PATCH v2 3/3] spapr: compute default value of "hpt-max-page-size" later
@ 2018-06-29 9:48 Greg Kurz
To: qemu-devel; +Cc: qemu-ppc, David Gibson, Cédric Le Goater

It is currently not possible to run a pseries-2.12 or older machine
with HV KVM. QEMU prints the following and exits right away:

    qemu-system-ppc64: KVM doesn't support for base page shift 34

The "hpt-max-page-size" capability was recently added to spapr to hide
host configuration details from HPT mode guests. Its default value for
newer machine types is 64k.

For backwards compatibility, pseries-2.12 and older machine types need
a different value. This is handled as usual in a class init function.
The default value is 16G, i.e., all page sizes supported by POWER7 and
newer CPUs, but HV KVM requires guest pages to be hpa contiguous as
well as gpa contiguous. In that case the default value is the page size
used to back the guest RAM.

Unfortunately kvmppc_hpt_needs_host_contiguous_pages()->kvm_enabled() is
called way before KVM init and returns false, even if the user requested
KVM. We thus end up selecting 16G, which isn't supported by HV KVM. The
default value must be set during machine init instead, because we can
safely assume that KVM is initialized at that point.

We fix this by moving the logic to a fixup helper called from
spapr_caps_init(). Since the user cannot pass cap-hpt-max-page-size=0,
we set the default to 0 in the pseries-2.12 class init function and use
that as a flag to do the real work.

Signed-off-by: Greg Kurz <groug@kaod.org>
---
v2: - moved logic under spapr_caps_init()
---
 hw/ppc/spapr.c      | 13 ++++++-------
 hw/ppc/spapr_caps.c | 20 ++++++++++++++++++++
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 8cc996d0b822..f6c60f2a1248 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4103,17 +4103,16 @@ static void spapr_machine_2_12_instance_options(MachineState *machine)
 static void spapr_machine_2_12_class_options(MachineClass *mc)
 {
     sPAPRMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
-    uint8_t mps;
 
     spapr_machine_3_0_class_options(mc);
     SET_MACHINE_COMPAT(mc, SPAPR_COMPAT_2_12);
 
-    if (kvmppc_hpt_needs_host_contiguous_pages()) {
-        mps = ctz64(qemu_getrampagesize());
-    } else {
-        mps = 34; /* allow everything up to 16GiB, i.e. everything */
-    }
-    smc->default_caps.caps[SPAPR_CAP_HPT_MAXPAGESIZE] = mps;
+    /* We depend on kvm_enabled() to choose a default value for the
+     * hpt-max-page-size capability. Of course we can't do it here
+     * because this is too early and the HW accelerator isn't initialzed
+     * yet. Postpone this to spapr_caps_init().
+     */
+    smc->default_caps.caps[SPAPR_CAP_HPT_MAXPAGESIZE] = 0;
 }
 
 DEFINE_SPAPR_MACHINE(2_12, "2.12", false);
diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
index 62663ebdf51a..e307161f6425 100644
--- a/hw/ppc/spapr_caps.c
+++ b/hw/ppc/spapr_caps.c
@@ -552,11 +552,31 @@ SPAPR_CAP_MIG_STATE(cfpc, SPAPR_CAP_CFPC);
 SPAPR_CAP_MIG_STATE(sbbc, SPAPR_CAP_SBBC);
 SPAPR_CAP_MIG_STATE(ibs, SPAPR_CAP_IBS);
 
+static void fixup_default_cap_hpt_maxpagesize_pre_3_0(sPAPRMachineState *spapr)
+{
+    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
+    uint8_t mps;
+
+    /* If it is already set, then this is a 3.0 or newer machine */
+    if (smc->default_caps.caps[SPAPR_CAP_HPT_MAXPAGESIZE]) {
+        return;
+    }
+
+    if (kvmppc_hpt_needs_host_contiguous_pages()) {
+        mps = ctz64(qemu_getrampagesize());
+    } else {
+        mps = 34; /* allow everything up to 16GiB, i.e. everything */
+    }
+    smc->default_caps.caps[SPAPR_CAP_HPT_MAXPAGESIZE] = mps;
+}
+
 void spapr_caps_init(sPAPRMachineState *spapr)
 {
     sPAPRCapabilities default_caps;
     int i;
 
+    fixup_default_cap_hpt_maxpagesize_pre_3_0(spapr);
+
     /* Compute the actual set of caps we should run with */
     default_caps = default_caps_with_cpu(spapr, MACHINE(spapr)->cpu_type);
* Re: [Qemu-devel] [PATCH v2 3/3] spapr: compute default value of "hpt-max-page-size" later
@ 2018-07-02 6:04 David Gibson
To: Greg Kurz; +Cc: qemu-devel, qemu-ppc, Cédric Le Goater

On Fri, Jun 29, 2018 at 11:48:40AM +0200, Greg Kurz wrote:
> It is currently not possible to run a pseries-2.12 or older machine
> with HV KVM. QEMU prints the following and exits right away.
>
>   qemu-system-ppc64: KVM doesn't support for base page shift 34
>
[quoted commit message trimmed]
>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
> v2: - moved logic under spapr_caps_init()

As discussed in another thread, it would actually be better to put this
logic into default_caps_with_cpu().  At the moment that just deals with
cpu-dependent defaults, but that's just because that was the only thing
that previously required "late" default calculation.  Now we have
another thing, so it makes sense to put them together, and avoids
altering the class structure at runtime.

[quoted patch trimmed]