* Re: [Qemu-devel] [PATCH -V4 1/4] target-ppc: Update slb array with correct index values.
  [not found] ` <1378369007-958-2-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
@ 2013-09-25 15:41 ` Aneesh Kumar K.V
  2013-09-30 17:40   ` Alexander Graf
  0 siblings, 1 reply; 9+ messages in thread

From: Aneesh Kumar K.V @ 2013-09-25 15:41 UTC (permalink / raw)
To: agraf, paulus; +Cc: qemu-ppc, qemu-devel

Hi Alex,

Any update on this?

-aneesh

"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:

> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> Without this, a value of rb=0 and rs=0 results in replacing the 0th
> index. This can be observed when using gdb remote debugging support.
>
> (gdb) x/10i do_fork
> 0xc000000000085330 <do_fork>:   Cannot access memory at address 0xc000000000085330
> (gdb)
>
> This is because when we do the slb sync via kvm_cpu_synchronize_state,
> we overwrite the slb entry (0th entry) for 0xc000000000085330
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  target-ppc/kvm.c | 17 +++++++++++++++--
>  1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 30a870e..1838465 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1033,9 +1033,22 @@ int kvm_arch_get_registers(CPUState *cs)
>
>      /* Sync SLB */
>  #ifdef TARGET_PPC64
> +    /*
> +     * The packed SLB array we get from KVM_GET_SREGS only contains
> +     * information about valid entries. So we flush our internal
> +     * copy to get rid of stale ones, then put all valid SLB entries
> +     * back in.
> +     */
> +    memset(env->slb, 0, sizeof(env->slb));
>      for (i = 0; i < 64; i++) {
> -        ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
> -                      sregs.u.s.ppc64.slb[i].slbv);
> +        target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
> +        target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
> +        /*
> +         * Only restore valid entries
> +         */
> +        if (rb & SLB_ESID_V) {
> +            ppc_store_slb(env, rb, rs);
> +        }
>      }
>  #endif
>
> --
> 1.8.1.2

^ permalink raw reply	[flat|nested] 9+ messages in thread
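[Editorial note: the failure mode the patch describes — an all-zero slbe/slbv pair from the packed KVM array clobbering SLB slot 0 — is easy to model outside QEMU. The sketch below is a simplified stand-in, not QEMU code: `store_slb()` mimics `ppc_store_slb()` only in that the slot index comes from the low bits of rb, and the `SLB_ESID_V` value and struct layout are invented for illustration.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLB_NR      64
#define SLB_ESID_V  (1ULL << 27)   /* "valid" bit; illustrative value only */

typedef struct { uint64_t esid, vsid; } slb_entry;

/* Mimics ppc_store_slb(): the target slot is taken from rb's low bits,
 * so an rb of 0 always addresses slot 0. */
static void store_slb(slb_entry *slb, uint64_t rb, uint64_t rs)
{
    slb[rb & (SLB_NR - 1)] = (slb_entry){ .esid = rb, .vsid = rs };
}

/* Pre-patch behaviour: store all 64 packed entries unconditionally.
 * Every invalid (all-zero) entry rewrites slot 0. */
static void sync_slb_old(slb_entry *slb, const slb_entry *packed)
{
    for (int i = 0; i < SLB_NR; i++) {
        store_slb(slb, packed[i].esid, packed[i].vsid);
    }
}

/* Patched behaviour: flush the local copy, then restore only entries
 * that have the valid bit set. */
static void sync_slb_new(slb_entry *slb, const slb_entry *packed)
{
    memset(slb, 0, SLB_NR * sizeof(*slb));
    for (int i = 0; i < SLB_NR; i++) {
        uint64_t rb = packed[i].esid, rs = packed[i].vsid;
        if (rb & SLB_ESID_V) {
            store_slb(slb, rb, rs);
        }
    }
}
```

With one valid entry in the packed array and the rest zero, the old sync leaves slot 0 wiped (the trailing zero entries each "store" to index 0), while the new sync keeps it — exactly the do_fork lookup failure above.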
* Re: [Qemu-devel] [PATCH -V4 1/4] target-ppc: Update slb array with correct index values.
  2013-09-25 15:41 ` [Qemu-devel] [PATCH -V4 1/4] target-ppc: Update slb array with correct index values Aneesh Kumar K.V
@ 2013-09-30 17:40 ` Alexander Graf
  2013-10-01  1:29   ` Aneesh Kumar K.V
  0 siblings, 1 reply; 9+ messages in thread

From: Alexander Graf @ 2013-09-30 17:40 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: qemu-ppc, paulus, qemu-devel

On 09/25/2013 05:41 PM, Aneesh Kumar K.V wrote:
> Hi Alex,
>
> Any update on this?

The patch itself never made it to the qemu-devel mailing list which I
pull things off of (through patchworks). Please resend.

Alex

> -aneesh
>
> "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:
>
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>
>> Without this, a value of rb=0 and rs=0 results in replacing the 0th
>> index. This can be observed when using gdb remote debugging support.
>>
>> (gdb) x/10i do_fork
>> 0xc000000000085330 <do_fork>:   Cannot access memory at address 0xc000000000085330
>> (gdb)
>>
>> This is because when we do the slb sync via kvm_cpu_synchronize_state,
>> we overwrite the slb entry (0th entry) for 0xc000000000085330
>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  target-ppc/kvm.c | 17 +++++++++++++++--
>>  1 file changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>> index 30a870e..1838465 100644
>> --- a/target-ppc/kvm.c
>> +++ b/target-ppc/kvm.c
>> @@ -1033,9 +1033,22 @@ int kvm_arch_get_registers(CPUState *cs)
>>
>>      /* Sync SLB */
>>  #ifdef TARGET_PPC64
>> +    /*
>> +     * The packed SLB array we get from KVM_GET_SREGS only contains
>> +     * information about valid entries. So we flush our internal
>> +     * copy to get rid of stale ones, then put all valid SLB entries
>> +     * back in.
>> +     */
>> +    memset(env->slb, 0, sizeof(env->slb));
>>      for (i = 0; i < 64; i++) {
>> -        ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>> -                      sregs.u.s.ppc64.slb[i].slbv);
>> +        target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>> +        target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
>> +        /*
>> +         * Only restore valid entries
>> +         */
>> +        if (rb & SLB_ESID_V) {
>> +            ppc_store_slb(env, rb, rs);
>> +        }
>>      }
>>  #endif
>>
>> --
>> 1.8.1.2

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [Qemu-devel] [PATCH -V4 1/4] target-ppc: Update slb array with correct index values.
  2013-09-30 17:40 ` Alexander Graf
@ 2013-10-01  1:29 ` Aneesh Kumar K.V
  0 siblings, 0 replies; 9+ messages in thread

From: Aneesh Kumar K.V @ 2013-10-01 1:29 UTC (permalink / raw)
To: Alexander Graf; +Cc: qemu-ppc, paulus, qemu-devel

Alexander Graf <agraf@suse.de> writes:

> On 09/25/2013 05:41 PM, Aneesh Kumar K.V wrote:
>> Hi Alex,
>>
>> Any update on this?
>
> The patch itself never made it to the qemu-devel mailing list which I
> pull things off of (through patchworks). Please resend.

My mistake, my alias in .gitconfig had the error. Will resend.

-aneesh

^ permalink raw reply	[flat|nested] 9+ messages in thread
[parent not found: <1378369007-958-3-git-send-email-aneesh.kumar@linux.vnet.ibm.com>]
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled [not found] ` <1378369007-958-3-git-send-email-aneesh.kumar@linux.vnet.ibm.com> @ 2013-09-30 17:48 ` Alexander Graf 2013-09-30 17:50 ` Alexander Graf 1 sibling, 0 replies; 9+ messages in thread From: Alexander Graf @ 2013-09-30 17:48 UTC (permalink / raw) To: Aneesh Kumar K.V; +Cc: qemu-ppc, paulus, qemu-devel On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote: > From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com> > > With kvm enabled, we store the hash page table information in the hypervisor. > Use ioctl to read the htab contents. Without this we get the below error when > trying to read the guest address > > (gdb) x/10 do_fork > 0xc000000000098660<do_fork>: Cannot access memory at address 0xc000000000098660 > (gdb) > > Signed-off-by: Aneesh Kumar K.V<aneesh.kumar@linux.vnet.ibm.com> > --- > target-ppc/kvm.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++ > target-ppc/kvm_ppc.h | 12 +++++++++- > target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++------------------- > 3 files changed, 104 insertions(+), 24 deletions(-) > > diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c > index 1838465..05b066c 100644 > --- a/target-ppc/kvm.c > +++ b/target-ppc/kvm.c > @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr) > void kvm_arch_init_irq_routing(KVMState *s) > { > } > + > +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, target_ulong ptem, > + target_ulong *hpte0, target_ulong *hpte1) > +{ > + int htab_fd; > + uint64_t index; > + hwaddr pte_offset; > + target_ulong pte0, pte1; > + struct kvm_get_htab_fd ghf; > + struct kvm_get_htab_buf { > + struct kvm_get_htab_header header; > + /* > + * Older kernel required one extra byte. Older than what? 
> + */ > + unsigned long hpte[(HPTES_PER_GROUP * 2) + 1]; > + } hpte_buf; > + > + index = (hash * HPTES_PER_GROUP)& cpu->env.htab_mask; > + *hpte0 = 0; > + *hpte1 = 0; > + if (!cap_htab_fd) { > + return 0; > + } > + > + ghf.flags = 0; > + ghf.start_index = index; > + htab_fd = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_HTAB_FD,&ghf); > + if (htab_fd< 0) { > + goto error_out; > + } > + /* > + * Read the hpte group > + */ > + if (read(htab_fd,&hpte_buf, sizeof(hpte_buf))< 0) { > + goto out; > + } > + > + index = 0; > + pte_offset = (hash * HASH_PTEG_SIZE_64)& cpu->env.htab_mask;; > + while (index< hpte_buf.header.n_valid) { > + pte0 = hpte_buf.hpte[(index * 2)]; > + pte1 = hpte_buf.hpte[(index * 2) + 1]; > + if ((pte0& HPTE64_V_VALID) > +&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > +&& HPTE64_V_COMPARE(pte0, ptem)) { > + *hpte0 = pte0; > + *hpte1 = pte1; > + close(htab_fd); > + return pte_offset; > + } > + index++; > + pte_offset += HASH_PTE_SIZE_64; > + } > +out: > + close(htab_fd); > +error_out: > + return -1; > +} > diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h > index 4ae7bf2..dad0e57 100644 > --- a/target-ppc/kvm_ppc.h > +++ b/target-ppc/kvm_ppc.h > @@ -42,7 +42,9 @@ int kvmppc_get_htab_fd(bool write); > int kvmppc_save_htab(QEMUFile *f, int fd, size_t bufsize, int64_t max_ns); > int kvmppc_load_htab_chunk(QEMUFile *f, int fd, uint32_t index, > uint16_t n_valid, uint16_t n_invalid); > - > +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, target_ulong ptem, > + target_ulong *hpte0, target_ulong *hpte1); > #else > > static inline uint32_t kvmppc_get_tbfreq(void) > @@ -181,6 +183,14 @@ static inline int kvmppc_load_htab_chunk(QEMUFile *f, int fd, uint32_t index, > abort(); > } > > +static inline hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, > + target_ulong ptem, > + target_ulong *hpte0, > + target_ulong *hpte1) > +{ > + abort(); > +} > #endif > > #ifndef CONFIG_KVM > diff --git 
a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c > index 67fc1b5..2288fe8 100644 > --- a/target-ppc/mmu-hash64.c > +++ b/target-ppc/mmu-hash64.c > @@ -302,37 +302,50 @@ static int ppc_hash64_amr_prot(CPUPPCState *env, ppc_hash_pte64_t pte) > return prot; > } > > -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off, > +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash, > bool secondary, target_ulong ptem, > ppc_hash_pte64_t *pte) > { > - hwaddr pte_offset = pteg_off; > + hwaddr pte_offset; > target_ulong pte0, pte1; > - int i; > - > - for (i = 0; i< HPTES_PER_GROUP; i++) { > - pte0 = ppc_hash64_load_hpte0(env, pte_offset); > - pte1 = ppc_hash64_load_hpte1(env, pte_offset); > - > - if ((pte0& HPTE64_V_VALID) > -&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > -&& HPTE64_V_COMPARE(pte0, ptem)) { > - pte->pte0 = pte0; > - pte->pte1 = pte1; > - return pte_offset; > + int i, ret = 0; > + > + if (kvm_enabled()) { > + ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash, > + secondary, ptem, > +&pte->pte0,&pte->pte1); Instead of duplicating the search, couldn't you just > + } > + /* > + * We don't support htab fd, check whether we have a copy of htab > + */ > + if (!ret) { > + pte_offset = (hash * HASH_PTEG_SIZE_64)& env->htab_mask;; > + for (i = 0; i< HPTES_PER_GROUP; i++) { > + pte0 = ppc_hash64_load_hpte0(env, pte_offset); > + pte1 = ppc_hash64_load_hpte1(env, pte_offset); > + > + if ((pte0& HPTE64_V_VALID) > +&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > +&& HPTE64_V_COMPARE(pte0, ptem)) { > + pte->pte0 = pte0; > + pte->pte1 = pte1; > + return pte_offset; > + } > + pte_offset += HASH_PTE_SIZE_64; > } > - > - pte_offset += HASH_PTE_SIZE_64; > + /* > + * We didn't find a valid entry. 
> + */ > + ret = -1; > } > - > - return -1; > + return ret; > } > > static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > ppc_slb_t *slb, target_ulong eaddr, > ppc_hash_pte64_t *pte) > { > - hwaddr pteg_off, pte_offset; > + hwaddr pte_offset; > hwaddr hash; > uint64_t vsid, epnshift, epnmask, epn, ptem; > > @@ -367,8 +380,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx > " hash=" TARGET_FMT_plx "\n", > env->htab_base, env->htab_mask, vsid, ptem, hash); > - pteg_off = (hash * HASH_PTEG_SIZE_64)& env->htab_mask; > - pte_offset = ppc_hash64_pteg_search(env, pteg_off, 0, ptem, pte); > + pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte); > > if (pte_offset == -1) { > /* Secondary PTEG lookup */ > @@ -377,8 +389,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > " hash=" TARGET_FMT_plx "\n", env->htab_base, > env->htab_mask, vsid, ptem, ~hash); > > - pteg_off = (~hash * HASH_PTEG_SIZE_64)& env->htab_mask; > - pte_offset = ppc_hash64_pteg_search(env, pteg_off, 1, ptem, pte); > + pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte); > } > > return pte_offset; ^ permalink raw reply [flat|nested] 9+ messages in thread
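[Editorial note: the lookup being modified above probes two PTEGs — the primary at `hash`, then the secondary at `~hash` with the secondary flag expected in pte0. The self-contained model below mirrors only that control flow; the flag bits, the ptem comparison, and the table geometry are invented for illustration and do not match the architected HPTE layout.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HPTES_PER_GROUP    8
#define HASH_PTE_SIZE_64   16
#define HASH_PTEG_SIZE_64  (HASH_PTE_SIZE_64 * HPTES_PER_GROUP)
/* Illustrative flag bits; the architected layout differs. */
#define V_VALID      (1ULL << 0)
#define V_SECONDARY  (1ULL << 1)

typedef struct { uint64_t pte0, pte1; } hpte_t;

/* In-memory model of ppc_hash64_pteg_search(): scan one 8-slot PTEG for
 * a valid entry whose secondary flag and ptem tag both match, returning
 * its byte offset or -1. The tag lives in pte0 above the two flag bits. */
static int64_t pteg_search(const hpte_t *htab, uint64_t n_ptegs,
                           uint64_t hash, int secondary, uint64_t ptem)
{
    uint64_t pteg = hash % n_ptegs;          /* plays the role of htab_mask */
    int64_t off = (int64_t)(pteg * HASH_PTEG_SIZE_64);
    for (int i = 0; i < HPTES_PER_GROUP; i++) {
        uint64_t pte0 = htab[pteg * HPTES_PER_GROUP + i].pte0;
        if ((pte0 & V_VALID)
            && (secondary == !!(pte0 & V_SECONDARY))
            && (pte0 >> 2) == ptem) {        /* simplified HPTE64_V_COMPARE */
            return off;
        }
        off += HASH_PTE_SIZE_64;
    }
    return -1;
}

/* Driver mirroring ppc_hash64_htab_lookup(): primary probe at hash,
 * then secondary probe at ~hash. */
static int64_t htab_lookup(const hpte_t *htab, uint64_t n_ptegs,
                           uint64_t hash, uint64_t ptem)
{
    int64_t off = pteg_search(htab, n_ptegs, hash, 0, ptem);
    if (off == -1) {
        off = pteg_search(htab, n_ptegs, ~hash, 1, ptem);
    }
    return off;
}
```

The point of the model: whether the PTEG words come from guest memory or from a KVM htab-fd read, the matching logic is identical — which is exactly the duplication the review comment is about.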
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled [not found] ` <1378369007-958-3-git-send-email-aneesh.kumar@linux.vnet.ibm.com> 2013-09-30 17:48 ` [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled Alexander Graf @ 2013-09-30 17:50 ` Alexander Graf 2013-10-01 1:27 ` Aneesh Kumar K.V 1 sibling, 1 reply; 9+ messages in thread From: Alexander Graf @ 2013-09-30 17:50 UTC (permalink / raw) To: Aneesh Kumar K.V; +Cc: qemu-ppc, paulus, qemu-devel@nongnu.org On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote: > From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com> > > With kvm enabled, we store the hash page table information in the hypervisor. > Use ioctl to read the htab contents. Without this we get the below error when > trying to read the guest address > > (gdb) x/10 do_fork > 0xc000000000098660<do_fork>: Cannot access memory at address 0xc000000000098660 > (gdb) > > Signed-off-by: Aneesh Kumar K.V<aneesh.kumar@linux.vnet.ibm.com> > --- > target-ppc/kvm.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++ > target-ppc/kvm_ppc.h | 12 +++++++++- > target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++------------------- > 3 files changed, 104 insertions(+), 24 deletions(-) > > diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c > index 1838465..05b066c 100644 > --- a/target-ppc/kvm.c > +++ b/target-ppc/kvm.c > @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr) > void kvm_arch_init_irq_routing(KVMState *s) > { > } > + > +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, target_ulong ptem, > + target_ulong *hpte0, target_ulong *hpte1) > +{ > + int htab_fd; > + uint64_t index; > + hwaddr pte_offset; > + target_ulong pte0, pte1; > + struct kvm_get_htab_fd ghf; > + struct kvm_get_htab_buf { > + struct kvm_get_htab_header header; > + /* > + * Older kernel required one extra byte. Older than what? 
> + */ > + unsigned long hpte[(HPTES_PER_GROUP * 2) + 1]; > + } hpte_buf; > + > + index = (hash * HPTES_PER_GROUP)& cpu->env.htab_mask; > + *hpte0 = 0; > + *hpte1 = 0; > + if (!cap_htab_fd) { > + return 0; > + } > + > + ghf.flags = 0; > + ghf.start_index = index; > + htab_fd = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_HTAB_FD,&ghf); > + if (htab_fd< 0) { > + goto error_out; > + } > + /* > + * Read the hpte group > + */ > + if (read(htab_fd,&hpte_buf, sizeof(hpte_buf))< 0) { > + goto out; > + } > + > + index = 0; > + pte_offset = (hash * HASH_PTEG_SIZE_64)& cpu->env.htab_mask;; > + while (index< hpte_buf.header.n_valid) { > + pte0 = hpte_buf.hpte[(index * 2)]; > + pte1 = hpte_buf.hpte[(index * 2) + 1]; > + if ((pte0& HPTE64_V_VALID) > +&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > +&& HPTE64_V_COMPARE(pte0, ptem)) { > + *hpte0 = pte0; > + *hpte1 = pte1; > + close(htab_fd); > + return pte_offset; > + } > + index++; > + pte_offset += HASH_PTE_SIZE_64; > + } > +out: > + close(htab_fd); > +error_out: > + return -1; > +} > diff --git a/target-ppc/kvm_ppc.h b/target-ppc/kvm_ppc.h > index 4ae7bf2..dad0e57 100644 > --- a/target-ppc/kvm_ppc.h > +++ b/target-ppc/kvm_ppc.h > @@ -42,7 +42,9 @@ int kvmppc_get_htab_fd(bool write); > int kvmppc_save_htab(QEMUFile *f, int fd, size_t bufsize, int64_t max_ns); > int kvmppc_load_htab_chunk(QEMUFile *f, int fd, uint32_t index, > uint16_t n_valid, uint16_t n_invalid); > - > +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, target_ulong ptem, > + target_ulong *hpte0, target_ulong *hpte1); > #else > > static inline uint32_t kvmppc_get_tbfreq(void) > @@ -181,6 +183,14 @@ static inline int kvmppc_load_htab_chunk(QEMUFile *f, int fd, uint32_t index, > abort(); > } > > +static inline hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, > + bool secondary, > + target_ulong ptem, > + target_ulong *hpte0, > + target_ulong *hpte1) > +{ > + abort(); > +} > #endif > > #ifndef CONFIG_KVM > diff --git 
a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c > index 67fc1b5..2288fe8 100644 > --- a/target-ppc/mmu-hash64.c > +++ b/target-ppc/mmu-hash64.c > @@ -302,37 +302,50 @@ static int ppc_hash64_amr_prot(CPUPPCState *env, ppc_hash_pte64_t pte) > return prot; > } > > -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off, > +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash, > bool secondary, target_ulong ptem, > ppc_hash_pte64_t *pte) > { > - hwaddr pte_offset = pteg_off; > + hwaddr pte_offset; > target_ulong pte0, pte1; > - int i; > - > - for (i = 0; i< HPTES_PER_GROUP; i++) { > - pte0 = ppc_hash64_load_hpte0(env, pte_offset); > - pte1 = ppc_hash64_load_hpte1(env, pte_offset); > - > - if ((pte0& HPTE64_V_VALID) > -&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > -&& HPTE64_V_COMPARE(pte0, ptem)) { > - pte->pte0 = pte0; > - pte->pte1 = pte1; > - return pte_offset; > + int i, ret = 0; > + > + if (kvm_enabled()) { > + ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash, > + secondary, ptem, > +&pte->pte0,&pte->pte1); Instead of duplicating the search, couldn't you just hook yourself into ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just cache the current pteg to ensure things don't become dog slow. Alex > + } > + /* > + * We don't support htab fd, check whether we have a copy of htab > + */ > + if (!ret) { > + pte_offset = (hash * HASH_PTEG_SIZE_64)& env->htab_mask;; > + for (i = 0; i< HPTES_PER_GROUP; i++) { > + pte0 = ppc_hash64_load_hpte0(env, pte_offset); > + pte1 = ppc_hash64_load_hpte1(env, pte_offset); > + > + if ((pte0& HPTE64_V_VALID) > +&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) > +&& HPTE64_V_COMPARE(pte0, ptem)) { > + pte->pte0 = pte0; > + pte->pte1 = pte1; > + return pte_offset; > + } > + pte_offset += HASH_PTE_SIZE_64; > } > - > - pte_offset += HASH_PTE_SIZE_64; > + /* > + * We didn't find a valid entry. 
> + */ > + ret = -1; > } > - > - return -1; > + return ret; > } > > static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > ppc_slb_t *slb, target_ulong eaddr, > ppc_hash_pte64_t *pte) > { > - hwaddr pteg_off, pte_offset; > + hwaddr pte_offset; > hwaddr hash; > uint64_t vsid, epnshift, epnmask, epn, ptem; > > @@ -367,8 +380,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > " vsid=" TARGET_FMT_lx " ptem=" TARGET_FMT_lx > " hash=" TARGET_FMT_plx "\n", > env->htab_base, env->htab_mask, vsid, ptem, hash); > - pteg_off = (hash * HASH_PTEG_SIZE_64)& env->htab_mask; > - pte_offset = ppc_hash64_pteg_search(env, pteg_off, 0, ptem, pte); > + pte_offset = ppc_hash64_pteg_search(env, hash, 0, ptem, pte); > > if (pte_offset == -1) { > /* Secondary PTEG lookup */ > @@ -377,8 +389,7 @@ static hwaddr ppc_hash64_htab_lookup(CPUPPCState *env, > " hash=" TARGET_FMT_plx "\n", env->htab_base, > env->htab_mask, vsid, ptem, ~hash); > > - pteg_off = (~hash * HASH_PTEG_SIZE_64)& env->htab_mask; > - pte_offset = ppc_hash64_pteg_search(env, pteg_off, 1, ptem, pte); > + pte_offset = ppc_hash64_pteg_search(env, ~hash, 1, ptem, pte); > } > > return pte_offset; ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled 2013-09-30 17:50 ` Alexander Graf @ 2013-10-01 1:27 ` Aneesh Kumar K.V 2013-10-02 13:59 ` Alexander Graf 0 siblings, 1 reply; 9+ messages in thread From: Aneesh Kumar K.V @ 2013-10-01 1:27 UTC (permalink / raw) To: Alexander Graf; +Cc: qemu-ppc, paulus, qemu-devel@nongnu.org Alexander Graf <agraf@suse.de> writes: > On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote: >> From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com> >> >> With kvm enabled, we store the hash page table information in the hypervisor. >> Use ioctl to read the htab contents. Without this we get the below error when >> trying to read the guest address >> >> (gdb) x/10 do_fork >> 0xc000000000098660<do_fork>: Cannot access memory at address 0xc000000000098660 >> (gdb) >> >> Signed-off-by: Aneesh Kumar K.V<aneesh.kumar@linux.vnet.ibm.com> >> --- >> target-ppc/kvm.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++ >> target-ppc/kvm_ppc.h | 12 +++++++++- >> target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++------------------- >> 3 files changed, 104 insertions(+), 24 deletions(-) >> >> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c >> index 1838465..05b066c 100644 >> --- a/target-ppc/kvm.c >> +++ b/target-ppc/kvm.c >> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr) >> void kvm_arch_init_irq_routing(KVMState *s) >> { >> } >> + >> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, >> + bool secondary, target_ulong ptem, >> + target_ulong *hpte0, target_ulong *hpte1) >> +{ >> + int htab_fd; >> + uint64_t index; >> + hwaddr pte_offset; >> + target_ulong pte0, pte1; >> + struct kvm_get_htab_fd ghf; >> + struct kvm_get_htab_buf { >> + struct kvm_get_htab_header header; >> + /* >> + * Older kernel required one extra byte. > > Older than what? > >> + */ Since we decided to drop that kernel patch, that should be updated as "kernel requires one extra byte". 
>> + unsigned long hpte[(HPTES_PER_GROUP * 2) + 1]; >> + } hpte_buf; >> + >> + index = (hash * HPTES_PER_GROUP)& cpu->env.htab_mask; >> + *hpte0 = 0; >> + *hpte1 = 0; >> + if (!cap_htab_fd) { >> + return 0; >> + } >> + ..... >> >> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off, >> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash, >> bool secondary, target_ulong ptem, >> ppc_hash_pte64_t *pte) >> { >> - hwaddr pte_offset = pteg_off; >> + hwaddr pte_offset; >> target_ulong pte0, pte1; >> - int i; >> - >> - for (i = 0; i< HPTES_PER_GROUP; i++) { >> - pte0 = ppc_hash64_load_hpte0(env, pte_offset); >> - pte1 = ppc_hash64_load_hpte1(env, pte_offset); >> - >> - if ((pte0& HPTE64_V_VALID) >> -&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) >> -&& HPTE64_V_COMPARE(pte0, ptem)) { >> - pte->pte0 = pte0; >> - pte->pte1 = pte1; >> - return pte_offset; >> + int i, ret = 0; >> + >> + if (kvm_enabled()) { >> + ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash, >> + secondary, ptem, >> +&pte->pte0,&pte->pte1); > > Instead of duplicating the search, couldn't you just hook yourself into > ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just > cache the current pteg to ensure things don't become dog slow. > Can you explain this better ? -aneesh ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled 2013-10-01 1:27 ` Aneesh Kumar K.V @ 2013-10-02 13:59 ` Alexander Graf 2013-10-07 13:58 ` Aneesh Kumar K.V 0 siblings, 1 reply; 9+ messages in thread From: Alexander Graf @ 2013-10-02 13:59 UTC (permalink / raw) To: Aneesh Kumar K.V; +Cc: qemu-ppc, paulus, qemu-devel@nongnu.org On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote: > Alexander Graf <agraf@suse.de> writes: > >> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote: >>> From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com> >>> >>> With kvm enabled, we store the hash page table information in the hypervisor. >>> Use ioctl to read the htab contents. Without this we get the below error when >>> trying to read the guest address >>> >>> (gdb) x/10 do_fork >>> 0xc000000000098660<do_fork>: Cannot access memory at address 0xc000000000098660 >>> (gdb) >>> >>> Signed-off-by: Aneesh Kumar K.V<aneesh.kumar@linux.vnet.ibm.com> >>> --- >>> target-ppc/kvm.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++ >>> target-ppc/kvm_ppc.h | 12 +++++++++- >>> target-ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++------------------- >>> 3 files changed, 104 insertions(+), 24 deletions(-) >>> >>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c >>> index 1838465..05b066c 100644 >>> --- a/target-ppc/kvm.c >>> +++ b/target-ppc/kvm.c >>> @@ -1888,3 +1888,62 @@ int kvm_arch_on_sigbus(int code, void *addr) >>> void kvm_arch_init_irq_routing(KVMState *s) >>> { >>> } >>> + >>> +hwaddr kvmppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash, >>> + bool secondary, target_ulong ptem, >>> + target_ulong *hpte0, target_ulong *hpte1) >>> +{ >>> + int htab_fd; >>> + uint64_t index; >>> + hwaddr pte_offset; >>> + target_ulong pte0, pte1; >>> + struct kvm_get_htab_fd ghf; >>> + struct kvm_get_htab_buf { >>> + struct kvm_get_htab_header header; >>> + /* >>> + * Older kernel required one extra byte. >> >> Older than what? 
>> >>> + */ > > Since we decided to drop that kernel patch, that should be updated as > "kernel requires one extra byte". > >>> + unsigned long hpte[(HPTES_PER_GROUP * 2) + 1]; >>> + } hpte_buf; >>> + >>> + index = (hash * HPTES_PER_GROUP)& cpu->env.htab_mask; >>> + *hpte0 = 0; >>> + *hpte1 = 0; >>> + if (!cap_htab_fd) { >>> + return 0; >>> + } >>> + > > ..... > >>> >>> -static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr pteg_off, >>> +static hwaddr ppc_hash64_pteg_search(CPUPPCState *env, hwaddr hash, >>> bool secondary, target_ulong ptem, >>> ppc_hash_pte64_t *pte) >>> { >>> - hwaddr pte_offset = pteg_off; >>> + hwaddr pte_offset; >>> target_ulong pte0, pte1; >>> - int i; >>> - >>> - for (i = 0; i< HPTES_PER_GROUP; i++) { >>> - pte0 = ppc_hash64_load_hpte0(env, pte_offset); >>> - pte1 = ppc_hash64_load_hpte1(env, pte_offset); >>> - >>> - if ((pte0& HPTE64_V_VALID) >>> -&& (secondary == !!(pte0& HPTE64_V_SECONDARY)) >>> -&& HPTE64_V_COMPARE(pte0, ptem)) { >>> - pte->pte0 = pte0; >>> - pte->pte1 = pte1; >>> - return pte_offset; >>> + int i, ret = 0; >>> + >>> + if (kvm_enabled()) { >>> + ret = kvmppc_hash64_pteg_search(ppc_env_get_cpu(env), hash, >>> + secondary, ptem, >>> +&pte->pte0,&pte->pte1); >> >> Instead of duplicating the search, couldn't you just hook yourself into >> ppc_hash64_load_hpte0/1 and return the respective ptes from there? Just >> cache the current pteg to ensure things don't become dog slow. >> > > Can you explain this better ? You're basically doing hwaddr ppc_hash64_pteg_search(...) { if (kvm) { pteg = read_from_kvm(); foreach pte in pteg { if (match) return offset; } return -1; } else { foreach pte in pteg { pte = read_pte_from_memory(); if (match) return offset; } return -1; } } This is massive code duplication. The only difference between kvm and tcg are the source for the pteg read. David already abstracted the actual pte0/pte1 reads away in ppc_hash64_load_hpte0 and ppc_hash64_load_hpte1 wrapper functions. 
Now imagine we could add a temporary pteg store in env. Then have
something like this in ppc_hash64_load_hpte0:

    if (need_kvm_htab_access) {
        if (env->current_cached_pteg != this_pteg) {
            read_pteg(env->cached_pteg);
        }
        return env->cached_pteg[x].pte0;
    } else {
        <do what was done before>
    }

That way the actual resolver doesn't care where the PTEG comes from,
as it only ever checks pte0/pte1 and leaves all the magic on where
those come from to the load function.

Alex

^ permalink raw reply	[flat|nested] 9+ messages in thread
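[Editorial note: the cached-PTEG idea sketched in the message above can be modeled concretely. Every name here is invented: `read_pteg_from_kvm()` stands in for the KVM_PPC_GET_HTAB_FD read, and the counter exists only to show that repeated loads from the same PTEG hit the cache instead of re-reading.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HPTES_PER_GROUP 8
#define N_PTEGS         4

static uint64_t fake_htab[N_PTEGS * HPTES_PER_GROUP * 2]; /* pte0/pte1 pairs */
static int kvm_reads;                                     /* simulated ioctl count */

/* Stand-in for the expensive htab-fd read of one full PTEG. */
static void read_pteg_from_kvm(uint64_t pteg, uint64_t *out)
{
    kvm_reads++;
    memcpy(out, &fake_htab[pteg * HPTES_PER_GROUP * 2],
           HPTES_PER_GROUP * 2 * sizeof(uint64_t));
}

struct pteg_cache {
    int valid;
    uint64_t tag;                         /* which PTEG the buffer holds */
    uint64_t hpte[HPTES_PER_GROUP * 2];
};

/* What a cache-aware ppc_hash64_load_hpte0/1 might look like: refill on
 * a tag miss, otherwise serve pte0 (word 0) or pte1 (word 1) straight
 * from the buffer, so the caller's search loop stays source-agnostic. */
static uint64_t load_hpte_word(struct pteg_cache *c, uint64_t pte_index, int word)
{
    uint64_t pteg = pte_index / HPTES_PER_GROUP;
    if (!c->valid || c->tag != pteg) {
        read_pteg_from_kvm(pteg, c->hpte);
        c->tag = pteg;
        c->valid = 1;
    }
    return c->hpte[(pte_index % HPTES_PER_GROUP) * 2 + (word ? 1 : 0)];
}
```

Walking the eight entries of one PTEG then costs a single simulated KVM read rather than sixteen word loads, which is the "don't become dog slow" part of the suggestion.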
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled
  2013-10-02 13:59 ` Alexander Graf
@ 2013-10-07 13:58 ` Aneesh Kumar K.V
  2013-10-07 14:04   ` Alexander Graf
  0 siblings, 1 reply; 9+ messages in thread

From: Aneesh Kumar K.V @ 2013-10-07 13:58 UTC (permalink / raw)
To: Alexander Graf; +Cc: qemu-ppc, paulus, qemu-devel@nongnu.org

Alexander Graf <agraf@suse.de> writes:

> On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@suse.de> writes:
>>
>>> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote:
>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>

....

>> Can you explain this better ?
>
> You're basically doing
>
> hwaddr ppc_hash64_pteg_search(...)
> {
>     if (kvm) {
>         pteg = read_from_kvm();
>         foreach pte in pteg {
>             if (match) return offset;
>         }
>         return -1;
>     } else {
>         foreach pte in pteg {
>             pte = read_pte_from_memory();
>             if (match) return offset;
>         }
>         return -1;
>     }
> }
>
> This is massive code duplication. The only difference between kvm and
> tcg are the source for the pteg read. David already abstracted the
> actual pte0/pte1 reads away in ppc_hash64_load_hpte0 and
> ppc_hash64_load_hpte1 wrapper functions.
>
> Now imagine we could add a temporary pteg store in env. Then have
> something like this in ppc_hash64_load_hpte0:
>
> if (need_kvm_htab_access) {
>     if (env->current_cached_pteg != this_pteg) {
>         read_pteg(env->cached_pteg);
>     }
>     return env->cached_pteg[x].pte0;
> } else {
>     <do what was done before>
> }
>
> That way the actual resolver doesn't care where the PTEG comes from,
> as it only ever checks pte0/pte1 and leaves all the magic on where
> those come from to the load function.

I tried to do this and didn't like the end result. For one, we
unnecessarily bloat the CPUPPCState struct, which now has to carry PTEG
information and an associated array, i.e., we would need the below in
CPUPPCState:

    int pteg_group;
    unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];

Also, our search can be much more effective with the current code:

    while (index < hpte_buf.header.n_valid) {

versus

    for (i = 0; i < HPTES_PER_GROUP; i++) {

The former is better when the PTEG contains invalid hpte entries.

We would now also need to update kvm_cpu_synchronize_state to clear
pteg_group so that we do not look at stale values. If we ever want to
read PTEGs anywhere else we could possibly look at doing this, but at
this stage, IMHO, it unnecessarily makes it all more complex and less
efficient.

-aneesh

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled 2013-10-07 13:58 ` Aneesh Kumar K.V @ 2013-10-07 14:04 ` Alexander Graf 0 siblings, 0 replies; 9+ messages in thread From: Alexander Graf @ 2013-10-07 14:04 UTC (permalink / raw) To: Aneesh Kumar K.V; +Cc: open list:PReP, Paul Mackerras, qemu-devel@nongnu.org On 07.10.2013, at 15:58, Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> wrote: > Alexander Graf <agraf@suse.de> writes: > >> On 01.10.2013, at 03:27, Aneesh Kumar K.V wrote: >> >>> Alexander Graf <agraf@suse.de> writes: >>> >>>> On 09/05/2013 10:16 AM, Aneesh Kumar K.V wrote: >>>>> From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com> >>>>> > > .... > >>> >>> Can you explain this better ? >> >> You're basically doing >> >> hwaddr ppc_hash64_pteg_search(...) >> { >> if (kvm) { >> pteg = read_from_kvm(); >> foreach pte in pteg { >> if (match) return offset; >> } >> return -1; >> } else { >> foreach pte in pteg { >> pte = read_pte_from_memory(); >> if (match) return offset; >> } >> return -1; >> } >> } >> >> This is massive code duplication. The only difference between kvm and >> tcg are the source for the pteg read. David already abstracted the >> actual pte0/pte1 reads away in ppc_hash64_load_hpte0 and >> ppc_hash64_load_hpte1 wrapper functions. >> >> Now imagine we could add a temporary pteg store in env. Then have something like this in ppc_hash64_load_hpte0: >> >> if (need_kvm_htab_access) { >> if (env->current_cached_pteg != this_pteg) ( >> read_pteg(env->cached_pteg); >> return env->cached_pteg[x].pte0; >> } >> } else { >> <do what was done before> >> } >> >> That way the actual resolver doesn't care where the PTEG comes from, >> as it only ever checks pte0/pte1 and leaves all the magic on where >> those come from to the load function. > > I tried to do this and didn't like the end result. For one we > unnecessarly bloat CPUPPCState struct to now carry a pteg information > and associated array. 
> i.e., we would need the below in CPUPPCState.

How about something like

    token = ppc_hash64_start_access();
    foreach (hpte entry) {
        pte0 = ppc_hash64_load_hpte0(token, ...);
        ...
    }
    ppc_hash64_stop_access(token);

That way you could put the buffer and pteg_group into the token struct
and only allocate and use it when KVM with HV is in use.

> int pteg_group;
> unsigned long hpte[(HPTES_PER_GROUP * 2) + 1];
>
> Also, our search can be much more effective with the current code,

We're anything but performance critical at this point.

> while (index < hpte_buf.header.n_valid) {
>
> versus
>
> for (i = 0; i < HPTES_PER_GROUP; i++) {
>
> The former is better when the PTEG contains invalid hpte entries.
>
> We would now also need to update kvm_cpu_synchronize_state to clear
> pteg_group so that we do not look at stale values. If we ever want to
> read PTEGs anywhere else we could possibly look at doing this, but at
> this stage, IMHO, it unnecessarily makes it all more complex and less
> efficient.

The point is to make it less complex. I don't like the idea of having 2
hash lookups in the same code base that do basically the same thing.
And efficiency only ever counts in the TCG case here.

Alex

^ permalink raw reply	[flat|nested] 9+ messages in thread
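[Editorial note: one minimal shape the token idea could take is sketched below. Everything is illustrative — the names, the fixed-size buffer, and the fake `kvm_read_pteg()`; a real implementation would fill the buffer via the htab fd on the KVM side and read guest memory directly on the TCG side, and `stop_access()` would release whatever the token owns.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HPTES_PER_GROUP 8

typedef struct {
    int from_kvm;
    uint64_t buf[HPTES_PER_GROUP * 2];  /* filled on the "KVM" path */
    const uint64_t *mem;                /* direct pointer on the "TCG" path */
} hpte_token;

/* Fake source standing in for the htab-fd ioctl read of one PTEG. */
static void kvm_read_pteg(const uint64_t *backing, uint64_t pteg, uint64_t *out)
{
    memcpy(out, backing + pteg * HPTES_PER_GROUP * 2,
           HPTES_PER_GROUP * 2 * sizeof(uint64_t));
}

/* Open access to one PTEG; the token hides where the words come from. */
static hpte_token start_access(int kvm, const uint64_t *htab, uint64_t pteg)
{
    hpte_token t = { .from_kvm = kvm };
    if (kvm) {
        kvm_read_pteg(htab, pteg, t.buf);
    } else {
        t.mem = htab + pteg * HPTES_PER_GROUP * 2;
    }
    return t;
}

static uint64_t load_hpte0(const hpte_token *t, int slot)
{
    return t->from_kvm ? t->buf[slot * 2] : t->mem[slot * 2];
}

static uint64_t load_hpte1(const hpte_token *t, int slot)
{
    return t->from_kvm ? t->buf[slot * 2 + 1] : t->mem[slot * 2 + 1];
}

/* stop_access() would close the fd / free the buffer; nothing to do here. */
```

The resolver then runs one unchanged loop over `load_hpte0`/`load_hpte1` regardless of backend, which is the "one hash lookup" property being argued for.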
end of thread, other threads: [~2013-10-07 14:04 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
[not found] <1378369007-958-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
[not found] ` <1378369007-958-2-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
  2013-09-25 15:41 ` [Qemu-devel] [PATCH -V4 1/4] target-ppc: Update slb array with correct index values Aneesh Kumar K.V
  2013-09-30 17:40   ` Alexander Graf
  2013-10-01  1:29     ` Aneesh Kumar K.V
[not found] ` <1378369007-958-3-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
  2013-09-30 17:48 ` [Qemu-devel] [PATCH -V4 2/4] target-ppc: Fix page table lookup with kvm enabled Alexander Graf
  2013-09-30 17:50 ` Alexander Graf
  2013-10-01  1:27   ` Aneesh Kumar K.V
  2013-10-02 13:59     ` Alexander Graf
  2013-10-07 13:58       ` Aneesh Kumar K.V
  2013-10-07 14:04         ` Alexander Graf