* Re: [PATCH] target-ppc: Update slb array with correct index values.
[not found] ` <874nam5ei5.fsf@linux.vnet.ibm.com>
@ 2013-08-19 8:21 ` Alexander Graf
2013-08-20 13:57 ` Aneesh Kumar K.V
2013-08-21 5:11 ` Paul Mackerras
0 siblings, 2 replies; 13+ messages in thread
From: Alexander Graf @ 2013-08-19 8:21 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: kvm-ppc, kvm@vger.kernel.org list
On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> Alexander Graf <agraf@suse.de> writes:
>
>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>
>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>
>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>
>> Wrong mailing list again ;).
>
> Will post the series again with updated commit message to the qemu list.
>
>>
>>> ---
>>> target-ppc/kvm.c | 14 ++++++++++++--
>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>> index 30a870e..5d4e613 100644
>>> --- a/target-ppc/kvm.c
>>> +++ b/target-ppc/kvm.c
>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>> /* Sync SLB */
>>> #ifdef TARGET_PPC64
>>> for (i = 0; i < 64; i++) {
>>> - ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>> - sregs.u.s.ppc64.slb[i].slbv);
>>> + target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>>> + /*
>>> + * KVM_GET_SREGS doesn't return SLB entries with the slot information
>>> + * matching the index. So don't depend on the slot information in
>>> + * the returned value.
>>
>> This is the generating code in book3s_pr.c:
>>
>> if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>> for (i = 0; i < 64; i++) {
>> sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>> sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>> }
>>
>> Where exactly did you see broken slbe entries?
>>
>
> I noticed this when adding support for guest memory dumping via qemu gdb
> server. Now the array we get would look like below
>
> slbe0 slbv0
> slbe1 slbv1
> 0000 00000
> 0000 00000
Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
Are you seeing this with PR KVM or HV KVM?
Alex
> Once we get an array like that when we hit the third value we will
> replace the 0th entry, that is [slbe0 slbv0]. That resulted in failed
> translation of the address by qemu gdb server.
>
>
> -aneesh
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-19 8:21 ` [PATCH] target-ppc: Update slb array with correct index values Alexander Graf
@ 2013-08-20 13:57 ` Aneesh Kumar K.V
2013-08-21 7:02 ` Alexander Graf
2013-08-21 5:11 ` Paul Mackerras
1 sibling, 1 reply; 13+ messages in thread
From: Aneesh Kumar K.V @ 2013-08-20 13:57 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm@vger.kernel.org list
Alexander Graf <agraf@suse.de> writes:
> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@suse.de> writes:
>>
>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>
>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>
>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>>
>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>>
>>> Wrong mailing list again ;).
>>
>> Will post the series again with updated commit message to the qemu list.
>>
>>>
>>>> ---
>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>> index 30a870e..5d4e613 100644
>>>> --- a/target-ppc/kvm.c
>>>> +++ b/target-ppc/kvm.c
>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>> /* Sync SLB */
>>>> #ifdef TARGET_PPC64
>>>> for (i = 0; i < 64; i++) {
>>>> - ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>> - sregs.u.s.ppc64.slb[i].slbv);
>>>> + target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>>>> + /*
>>>> + * KVM_GET_SREGS doesn't return SLB entries with the slot information
>>>> + * matching the index. So don't depend on the slot information in
>>>> + * the returned value.
>>>
>>> This is the generating code in book3s_pr.c:
>>>
>>> if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>> for (i = 0; i < 64; i++) {
>>> sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>> sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>> }
>>>
>>> Where exactly did you see broken slbe entries?
>>>
>>
>> I noticed this when adding support for guest memory dumping via qemu gdb
>> server. Now the array we get would look like below
>>
>> slbe0 slbv0
>> slbe1 slbv1
>> 0000 00000
>> 0000 00000
>
> Ok, so that's where the problem lies. Why are the entries 0 here?
> Either we try to fetch more entries than we should, we populate
> entries incorrectly or the kernel simply returns invalid SLB entry
> values for invalid entries.
The ioctl zeroes out sregs and fills only slb_max entries, so we find
zero-filled entries above slb_max. Also, we don't pass slb_max to user
space, so userspace has to look at all 64 entries.
>
> Are you seeing this with PR KVM or HV KVM?
>
HV KVM
-aneesh
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-19 8:21 ` [PATCH] target-ppc: Update slb array with correct index values Alexander Graf
2013-08-20 13:57 ` Aneesh Kumar K.V
@ 2013-08-21 5:11 ` Paul Mackerras
2013-08-21 7:37 ` Alexander Graf
1 sibling, 1 reply; 13+ messages in thread
From: Paul Mackerras @ 2013-08-21 5:11 UTC (permalink / raw)
To: Alexander Graf; +Cc: Aneesh Kumar K.V, kvm-ppc, kvm@vger.kernel.org list
On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>
> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> > I noticed this when adding support for guest memory dumping via qemu gdb
> > server. Now the array we get would look like below
> >
> > slbe0 slbv0
> > slbe1 slbv1
> > 0000 00000
> > 0000 00000
>
> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>
> Are you seeing this with PR KVM or HV KVM?
I suspect this is to do with the fact that PR and HV KVM use the
vcpu->arch.slb[] array differently. PR stores SLB entry n in
vcpu->arch.slb[n], whereas HV packs the valid entries down in the
low-numbered entries and puts the index in the bottom bits of the esid
field (this is so they can be loaded efficiently with the slbmte
instruction on guest entry).
Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
(valid or not) and puts an index value in the bottom bits of the esid,
whereas on HV it just copies out the valid entries (which already have
the index in the esid field).
So, the question is, what is the ABI here? It sounds a bit like qemu
is ignoring the index value in the esid field. Either qemu needs to
take notice of the index in the esid field or we need to change the HV
versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
sregs->u.s.ppc64.slb[n] like PR does.
Paul.
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-20 13:57 ` Aneesh Kumar K.V
@ 2013-08-21 7:02 ` Alexander Graf
2013-08-21 9:07 ` Aneesh Kumar K.V
0 siblings, 1 reply; 13+ messages in thread
From: Alexander Graf @ 2013-08-21 7:02 UTC (permalink / raw)
To: Aneesh Kumar K.V; +Cc: kvm-ppc, kvm@vger.kernel.org list
On 20.08.2013, at 14:57, Aneesh Kumar K.V wrote:
> Alexander Graf <agraf@suse.de> writes:
>
>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>
>>> Alexander Graf <agraf@suse.de> writes:
>>>
>>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>>
>>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>>
>>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>>>
>>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>>>
>>>> Wrong mailing list again ;).
>>>
>>> Will post the series again with updated commit message to the qemu list.
>>>
>>>>
>>>>> ---
>>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>> index 30a870e..5d4e613 100644
>>>>> --- a/target-ppc/kvm.c
>>>>> +++ b/target-ppc/kvm.c
>>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>> /* Sync SLB */
>>>>> #ifdef TARGET_PPC64
>>>>> for (i = 0; i < 64; i++) {
>>>>> - ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>>> - sregs.u.s.ppc64.slb[i].slbv);
>>>>> + target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>>>>> + /*
>>>>> + * KVM_GET_SREGS doesn't return SLB entries with the slot information
>>>>> + * matching the index. So don't depend on the slot information in
>>>>> + * the returned value.
>>>>
>>>> This is the generating code in book3s_pr.c:
>>>>
>>>> if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>> for (i = 0; i < 64; i++) {
>>>> sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>> sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>> }
>>>>
>>>> Where exactly did you see broken slbe entries?
>>>>
>>>
>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>> server. Now the array we get would look like below
>>>
>>> slbe0 slbv0
>>> slbe1 slbv1
>>> 0000 00000
>>> 0000 00000
>>
>> Ok, so that's where the problem lies. Why are the entries 0 here?
>> Either we try to fetch more entries than we should, we populate
>> entries incorrectly or the kernel simply returns invalid SLB entry
>> values for invalid entries.
>
>
> The ioctl zeroes out sregs and fills only slb_max entries, so we find
> zero-filled entries above slb_max. Also, we don't pass slb_max to user
> space, so userspace has to look at all 64 entries.
We do pass slb_max; it's just called differently and calculated implicitly :). How about something like this:
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..29a2ec3 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -818,6 +818,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
     /* Sync SLB */
 #ifdef TARGET_PPC64
+    /* We need to loop through all entries to give them potentially
+       valid values */
     for (i = 0; i < 64; i++) {
         sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
         sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
@@ -1033,7 +1035,7 @@ int kvm_arch_get_registers(CPUState *cs)
     /* Sync SLB */
 #ifdef TARGET_PPC64
-    for (i = 0; i < 64; i++) {
+    for (i = 0; i < env->slb_nr; i++) {
         ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
                       sregs.u.s.ppc64.slb[i].slbv);
     }
>
>
>>
>> Are you seeing this with PR KVM or HV KVM?
>>
>
> HV KVM
If the above didn't help, could you please dig out how HV KVM assembles its SLB information and just paste it here? :)
Alex
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 5:11 ` Paul Mackerras
@ 2013-08-21 7:37 ` Alexander Graf
2013-08-21 7:42 ` Alexander Graf
2013-08-21 9:25 ` Paul Mackerras
0 siblings, 2 replies; 13+ messages in thread
From: Alexander Graf @ 2013-08-21 7:37 UTC (permalink / raw)
To: Paul Mackerras; +Cc: Aneesh Kumar K.V, kvm-ppc, kvm@vger.kernel.org list
On 21.08.2013, at 06:11, Paul Mackerras wrote:
> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>
>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>> server. Now the array we get would look like below
>>>
>>> slbe0 slbv0
>>> slbe1 slbv1
>>> 0000 00000
>>> 0000 00000
>>
>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>>
>> Are you seeing this with PR KVM or HV KVM?
>
> I suspect this is to do with the fact that PR and HV KVM use the
> vcpu->arch.slb[] array differently. PR stores SLB entry n in
> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
> low-numbered entries and puts the index in the bottom bits of the esid
> field (this is so they can be loaded efficiently with the slbmte
> instruction on guest entry).
>
> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
> (valid or not) and puts an index value in the bottom bits of the esid,
> whereas on HV it just copies out the valid entries (which already have
> the index in the esid field).
>
> So, the question is, what is the ABI here? It sounds a bit like qemu
> is ignoring the index value in the esid field. Either qemu needs to
> take notice of the index in the esid field or we need to change the HV
> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
> sregs->u.s.ppc64.slb[n] like PR does.
It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
Alex
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 7:37 ` Alexander Graf
@ 2013-08-21 7:42 ` Alexander Graf
2013-08-21 9:25 ` Paul Mackerras
1 sibling, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2013-08-21 7:42 UTC (permalink / raw)
To: Paul Mackerras; +Cc: Aneesh Kumar K.V, kvm-ppc, kvm@vger.kernel.org list
On 21.08.2013, at 08:37, Alexander Graf wrote:
>
> On 21.08.2013, at 06:11, Paul Mackerras wrote:
>
>> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>>
>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>> server. Now the array we get would look like below
>>>>
>>>> slbe0 slbv0
>>>> slbe1 slbv1
>>>> 0000 00000
>>>> 0000 00000
>>>
>>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>>>
>>> Are you seeing this with PR KVM or HV KVM?
>>
>> I suspect this is to do with the fact that PR and HV KVM use the
>> vcpu->arch.slb[] array differently. PR stores SLB entry n in
>> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
>> low-numbered entries and puts the index in the bottom bits of the esid
>> field (this is so they can be loaded efficiently with the slbmte
>> instruction on guest entry).
>>
>> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
>> (valid or not) and puts an index value in the bottom bits of the esid,
>> whereas on HV it just copies out the valid entries (which already have
>> the index in the esid field).
>>
>> So, the question is, what is the ABI here?
Oh, and to answer your question here: The original intent was to copy the SLB array 1:1 from kvm to user space. But that's moot by now, since we already have kvm versions out there that return your packed format :).
Alex
>> It sounds a bit like qemu
>> is ignoring the index value in the esid field. Either qemu needs to
>> take notice of the index in the esid field or we need to change the HV
>> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
>> sregs->u.s.ppc64.slb[n] like PR does.
>
> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
>
> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
>
>
> Alex
>
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 7:02 ` Alexander Graf
@ 2013-08-21 9:07 ` Aneesh Kumar K.V
0 siblings, 0 replies; 13+ messages in thread
From: Aneesh Kumar K.V @ 2013-08-21 9:07 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm-ppc, kvm@vger.kernel.org list
Alexander Graf <agraf@suse.de> writes:
> On 20.08.2013, at 14:57, Aneesh Kumar K.V wrote:
>
>> Alexander Graf <agraf@suse.de> writes:
>>
>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>
>>>> Alexander Graf <agraf@suse.de> writes:
>>>>
>>>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>>>
>>>>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>>>>>
>>>>>> Without this, a value of rb=0 and rs=0 results in us replacing the 0th index
>>>>>>
>>>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>>>>>
>>>>> Wrong mailing list again ;).
>>>>
>>>> Will post the series again with updated commit message to the qemu list.
>>>>
>>>>>
>>>>>> ---
>>>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>>> index 30a870e..5d4e613 100644
>>>>>> --- a/target-ppc/kvm.c
>>>>>> +++ b/target-ppc/kvm.c
>>>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>>> /* Sync SLB */
>>>>>> #ifdef TARGET_PPC64
>>>>>> for (i = 0; i < 64; i++) {
>>>>>> - ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>>>> - sregs.u.s.ppc64.slb[i].slbv);
>>>>>> + target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>>>>>> + /*
>>>>>> + * KVM_GET_SREGS doesn't return SLB entries with the slot information
>>>>>> + * matching the index. So don't depend on the slot information in
>>>>>> + * the returned value.
>>>>>
>>>>> This is the generating code in book3s_pr.c:
>>>>>
>>>>> if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>>> for (i = 0; i < 64; i++) {
>>>>> sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>>> sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>>> }
>>>>>
>>>>> Where exactly did you see broken slbe entries?
>>>>>
>>>>
>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>> server. Now the array we get would look like below
>>>>
>>>> slbe0 slbv0
>>>> slbe1 slbv1
>>>> 0000 00000
>>>> 0000 00000
>>>
>>> Ok, so that's where the problem lies. Why are the entries 0 here?
>>> Either we try to fetch more entries than we should, we populate
>>> entries incorrectly or the kernel simply returns invalid SLB entry
>>> values for invalid entries.
>>
>>
>> The ioctl zeroes out sregs and fills only slb_max entries, so we find
>> zero-filled entries above slb_max. Also, we don't pass slb_max to user
>> space, so userspace has to look at all 64 entries.
>
> We do pass slb_max; it's just called differently and calculated implicitly :). How about something like this:
>
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 30a870e..29a2ec3 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -818,6 +818,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
>
> /* Sync SLB */
> #ifdef TARGET_PPC64
> + /* We need to loop through all entries to give them potentially
> + valid values */
> for (i = 0; i < 64; i++) {
> sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
> sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
> @@ -1033,7 +1035,7 @@ int kvm_arch_get_registers(CPUState *cs)
>
> /* Sync SLB */
> #ifdef TARGET_PPC64
> - for (i = 0; i < 64; i++) {
> + for (i = 0; i < env->slb_nr; i++) {
> ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
> sregs.u.s.ppc64.slb[i].slbv);
> }
>
But we don't sync slb_max (the highest valid slb index); env->slb_nr is
slb_nr (the total number of slb slots). We also don't sync env->slb_nr
every time we do kvm_arch_get_registers. The problem we have is that we
first memset sregs with 0 and then fill only slb_max entries. slbe and
slbv entries with value 0 then result in us treating those entries as
being for the 0th index and updating the 0th entry.
-aneesh
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 7:37 ` Alexander Graf
2013-08-21 7:42 ` Alexander Graf
@ 2013-08-21 9:25 ` Paul Mackerras
2013-08-21 10:03 ` Alexander Graf
1 sibling, 1 reply; 13+ messages in thread
From: Paul Mackerras @ 2013-08-21 9:25 UTC (permalink / raw)
To: Alexander Graf; +Cc: Aneesh Kumar K.V, kvm-ppc, kvm@vger.kernel.org list
On Wed, Aug 21, 2013 at 08:37:47AM +0100, Alexander Graf wrote:
>
> On 21.08.2013, at 06:11, Paul Mackerras wrote:
>
> > On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
> >>
> >> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
> >>> I noticed this when adding support for guest memory dumping via qemu gdb
> >>> server. Now the array we get would look like below
> >>>
> >>> slbe0 slbv0
> >>> slbe1 slbv1
> >>> 0000 00000
> >>> 0000 00000
> >>
> >> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
> >>
> >> Are you seeing this with PR KVM or HV KVM?
> >
> > I suspect this is to do with the fact that PR and HV KVM use the
> > vcpu->arch.slb[] array differently. PR stores SLB entry n in
> > vcpu->arch.slb[n], whereas HV packs the valid entries down in the
> > low-numbered entries and puts the index in the bottom bits of the esid
> > field (this is so they can be loaded efficiently with the slbmte
> > instruction on guest entry).
> >
> > Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
> > (valid or not) and puts an index value in the bottom bits of the esid,
> > whereas on HV it just copies out the valid entries (which already have
> > the index in the esid field).
> >
> > So, the question is, what is the ABI here? It sounds a bit like qemu
> > is ignoring the index value in the esid field. Either qemu needs to
> > take notice of the index in the esid field or we need to change the HV
> > versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
> > sregs->u.s.ppc64.slb[n] like PR does.
>
> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
>
> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
On HV KVM yes, that would be the end of the list, but PR KVM could
give you entry 0 containing esid==0 and vsid==0 followed by valid
entries. Perhaps the best approach is to ignore any entries with
SLB_ESID_V clear.
Paul.
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 9:25 ` Paul Mackerras
@ 2013-08-21 10:03 ` Alexander Graf
2013-08-21 15:59 ` Aneesh Kumar K.V
0 siblings, 1 reply; 13+ messages in thread
From: Alexander Graf @ 2013-08-21 10:03 UTC (permalink / raw)
To: Paul Mackerras
Cc: Aneesh Kumar K.V, kvm-ppc@vger.kernel.org,
kvm@vger.kernel.org list
Am 21.08.2013 um 10:25 schrieb Paul Mackerras <paulus@samba.org>:
> On Wed, Aug 21, 2013 at 08:37:47AM +0100, Alexander Graf wrote:
>>
>> On 21.08.2013, at 06:11, Paul Mackerras wrote:
>>
>>> On Mon, Aug 19, 2013 at 10:21:09AM +0200, Alexander Graf wrote:
>>>>
>>>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>>>> I noticed this when adding support for guest memory dumping via qemu gdb
>>>>> server. Now the array we get would look like below
>>>>>
>>>>> slbe0 slbv0
>>>>> slbe1 slbv1
>>>>> 0000 00000
>>>>> 0000 00000
>>>>
>>>> Ok, so that's where the problem lies. Why are the entries 0 here? Either we try to fetch more entries than we should, we populate entries incorrectly or the kernel simply returns invalid SLB entry values for invalid entries.
>>>>
>>>> Are you seeing this with PR KVM or HV KVM?
>>>
>>> I suspect this is to do with the fact that PR and HV KVM use the
>>> vcpu->arch.slb[] array differently. PR stores SLB entry n in
>>> vcpu->arch.slb[n], whereas HV packs the valid entries down in the
>>> low-numbered entries and puts the index in the bottom bits of the esid
>>> field (this is so they can be loaded efficiently with the slbmte
>>> instruction on guest entry).
>>>
>>> Then, kvm_arch_vcpu_ioctl_get_sregs() on PR copies out all 64 entries
>>> (valid or not) and puts an index value in the bottom bits of the esid,
>>> whereas on HV it just copies out the valid entries (which already have
>>> the index in the esid field).
>>>
>>> So, the question is, what is the ABI here? It sounds a bit like qemu
>>> is ignoring the index value in the esid field. Either qemu needs to
>>> take notice of the index in the esid field or we need to change the HV
>>> versions of kvm_arch_vcpu_ioctl_get/set_sregs to put entry n in
>>> sregs->u.s.ppc64.slb[n] like PR does.
>>
>> It's the opposite today - QEMU does honor the index value on sregs get. Aneesh's patch wants to change it to ignore it instead. For sregs set we copy our internal copy of the slb linearly into the array, so we don't pack it there.
>>
>> Can we safely assume on HV KVM that esid == 0 vsid == 0 is the end of the list? If so, we can just add a break statement in the get loop and call it a day. The rest should work just fine.
>
> On HV KVM yes, that would be the end of the list, but PR KVM could
> give you entry 0 containing esid==0 and vsid==0 followed by valid
> entries. Perhaps the best approach is to ignore any entries with
> SLB_ESID_V clear.
That means we don't clear entries that we don't receive from the kernel because they're V=0 now but were V=1 before, which with the current code is probably already broken.
So yes, clearing all cached entries first (to make sure we have no stale ones) and then looping through all of them, only adding entries with V=1, should fix everything for PR as well as HV.
Alex
>
> Paul.
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 10:03 ` Alexander Graf
@ 2013-08-21 15:59 ` Aneesh Kumar K.V
2013-08-21 23:32 ` Alexander Graf
0 siblings, 1 reply; 13+ messages in thread
From: Aneesh Kumar K.V @ 2013-08-21 15:59 UTC (permalink / raw)
To: Alexander Graf, Paul Mackerras
Cc: kvm-ppc@vger.kernel.org, kvm@vger.kernel.org list
Alexander Graf <agraf@suse.de> writes:
....
>>
>> On HV KVM yes, that would be the end of the list, but PR KVM could
>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>> entries. Perhaps the best approach is to ignore any entries with
>> SLB_ESID_V clear.
>
> That means we don't clear entries that we don't receive from the kernel because they're V=0 now but were V=1 before, which with the current code is probably already broken.
>
> So yes, clearing all cached entries first (to make sure we have no stale
> ones) and then looping through all of them, only adding entries with V=1,
> should fix everything for PR as well as HV.
This is more or less what the patch is doing. The kernel already
memsets all the slb entries. The only difference is that we don't
depend on the slb index in the return value; instead we just use the
array index as the slb index. Do we really need to make sure the slb
index remains the same?
-aneesh
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 15:59 ` Aneesh Kumar K.V
@ 2013-08-21 23:32 ` Alexander Graf
2013-08-22 13:20 ` Aneesh Kumar K.V
0 siblings, 1 reply; 13+ messages in thread
From: Alexander Graf @ 2013-08-21 23:32 UTC (permalink / raw)
To: Aneesh Kumar K.V
Cc: Paul Mackerras, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org list
Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:
> Alexander Graf <agraf@suse.de> writes:
>
>
> ....
>
>>>
>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>> entries. Perhaps the best approach is to ignore any entries with
>>> SLB_ESID_V clear.
>>
>> That means we don't clear entries that we don't receive from the kernel because they're V=0 now but were V=1 before, which with the current code is probably already broken.
>>
>> So yes, clearing all cached entries first (to make sure we have no stale
>> ones) and then looping through all of them, only adding entries with V=1,
>> should fix everything for PR as well as HV.
>
> This is more or less what the patch is doing. The kernel already
> memsets all the slb entries. The only difference is that we don't
> depend on the slb index in the return value; instead we just use the
> array index as the slb index. Do we really need to make sure the slb
> index remains the same?
Yes, otherwise get/set changes the SLB numbering, which the guest doesn't expect.
Alex
>
> -aneesh
>
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-21 23:32 ` Alexander Graf
@ 2013-08-22 13:20 ` Aneesh Kumar K.V
2013-08-22 16:36 ` Alexander Graf
0 siblings, 1 reply; 13+ messages in thread
From: Aneesh Kumar K.V @ 2013-08-22 13:20 UTC (permalink / raw)
To: Alexander Graf
Cc: Paul Mackerras, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org list
Alexander Graf <agraf@suse.de> writes:
> Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:
>
>> Alexander Graf <agraf@suse.de> writes:
>>
>>
>> ....
>>
>>>>
>>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>>> entries. Perhaps the best approach is to ignore any entries with
>>>> SLB_ESID_V clear.
>>>
>>> That means we don't clear entries that we don't receive from the kernel because they're V=0 now but were V=1 before, which with the current code is probably already broken.
>>>
>>> So yes, clearing all cached entries first (to make sure we have no stale
>>> ones) and then looping through all of them, only adding entries with V=1,
>>> should fix everything for PR as well as HV.
>>
>> This is more or less what the patch is doing. The kernel already
>> memsets all the slb entries. The only difference is that we don't
>> depend on the slb index in the return value; instead we just use the
>> array index as the slb index. Do we really need to make sure the slb
>> index remains the same?
>
> Yes, otherwise get/set changes the SLB numbering, which the guest doesn't
> expect.
how about
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..313f866 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1033,9 +1033,21 @@ int kvm_arch_get_registers(CPUState *cs)
/* Sync SLB */
#ifdef TARGET_PPC64
+ /*
+ * KVM_GET_SREGS doesn't retun slb entry with slot information
+ * same as index. So don't depend on the slot information in
+ * the returned value.
+ * Zero out the SLB array invalidating all the entries
+ */
+ memset(env->slb, 0, 64 * sizeof(ppc_slb_t));
for (i = 0; i < 64; i++) {
- ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
- sregs.u.s.ppc64.slb[i].slbv);
+ target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
+ target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
+ /*
+ * Only restore valid entries
+ */
+ if (rb & SLB_ESID_V)
+ ppc_store_slb(env, rb, rs);
}
#endif
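The clear-then-restore logic in this hunk can be exercised outside QEMU. Below is a minimal self-contained sketch of the same idea; `ppc_slb_t`, `SLB_ESID_V`, and the helper name `sync_slb_from_kvm` are stand-ins mocked here for illustration, not QEMU's actual definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t target_ulong;

/* Stand-ins for the QEMU/KVM types used by the hunk above (mocked) */
#define SLB_ESID_V 0x0000000008000000ULL   /* SLBE valid bit */

typedef struct {
    target_ulong esid;   /* cached RB word */
    target_ulong vsid;   /* cached RS word */
} ppc_slb_t;

struct kvm_slb_entry {
    target_ulong slbe;
    target_ulong slbv;
};

/*
 * Mirror of the patched loop: wipe the cached SLB so no stale entries
 * survive, then restore only the entries the kernel marked valid,
 * keyed by array index rather than by any slot bits inside slbe.
 */
static void sync_slb_from_kvm(ppc_slb_t *slb,
                              const struct kvm_slb_entry *in, int n)
{
    memset(slb, 0, n * sizeof(ppc_slb_t));
    for (int i = 0; i < n; i++) {
        if (in[i].slbe & SLB_ESID_V) {
            slb[i].esid = in[i].slbe;
            slb[i].vsid = in[i].slbv;
        }
    }
}
```

With this shape, PR KVM returning a mix of valid and all-zero entries leaves the zero slots genuinely empty instead of overwriting slot 0 repeatedly.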
* Re: [PATCH] target-ppc: Update slb array with correct index values.
2013-08-22 13:20 ` Aneesh Kumar K.V
@ 2013-08-22 16:36 ` Alexander Graf
0 siblings, 0 replies; 13+ messages in thread
From: Alexander Graf @ 2013-08-22 16:36 UTC (permalink / raw)
To: Aneesh Kumar K.V
Cc: Paul Mackerras, kvm-ppc@vger.kernel.org, kvm@vger.kernel.org list
On 22.08.2013, at 14:20, Aneesh Kumar K.V wrote:
> Alexander Graf <agraf@suse.de> writes:
>
>> Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:
>>
>>> Alexander Graf <agraf@suse.de> writes:
>>>
>>>
>>> ....
>>>
>>>>>
>>>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>>>> entries. Perhaps the best approach is to ignore any entries with
>>>>> SLB_ESID_V clear.
>>>>
>>>> That means we don't clear entries we don't receive from the kernel because they're now V=0 but were V=1 before, which with the current code is probably already broken.
>>>>
>>>> So yes, clear all cached entries first (to make sure we have no stale
>>>> ones), then loop through all and only add entries with V=1; that should
>>>> fix everything for PR as well as HV.
>>>
>>> This is more or less what the patch is doing. The kernel already
>>> does a memset of all the slb entries. The only difference is that we
>>> don't depend on the slb index in the return value; instead we just use
>>> the array index as the slb index. Do we really need to make sure the
>>> slb index remains the same?
>>
>> Yes, otherwise get/set changes the SLB numbering, which the guest
>> doesn't expect.
>
> how about
>
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 30a870e..313f866 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1033,9 +1033,21 @@ int kvm_arch_get_registers(CPUState *cs)
>
> /* Sync SLB */
> #ifdef TARGET_PPC64
> + /*
> + * KVM_GET_SREGS doesn't retun slb entry with slot information
return
> + * same as index. So don't depend on the slot information in
> + * the returned value.
what returned value?
> + * Zero out the SLB array invalidating all the entries
Better put the below into a function with a speaking name rather than write a comment. But when I read the comment it's only clear to me what you're trying to say because of the email thread. Please write what you're doing and how things look, not what they don't look like. Reading positive statements is easier :).
> + */
> + memset(env->slb, 0, 64 * sizeof(ppc_slb_t));
> for (i = 0; i < 64; i++) {
> - ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
> - sregs.u.s.ppc64.slb[i].slbv);
> + target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
> + target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
> + /*
> + * Only restore valid entries
> + */
> + if (rb & SLB_ESID_V)
> + ppc_store_slb(env, rb, rs);
Coding style.
But the code should work, yes :). Paul, any objections?
Alex
end of thread, other threads:[~2013-08-22 16:36 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <1376245011-20008-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
[not found] ` <8810AF54-BB64-4313-AD32-C42608527B38@suse.de>
[not found] ` <874nam5ei5.fsf@linux.vnet.ibm.com>
2013-08-19 8:21 ` [PATCH] target-ppc: Update slb array with correct index values Alexander Graf
2013-08-20 13:57 ` Aneesh Kumar K.V
2013-08-21 7:02 ` Alexander Graf
2013-08-21 9:07 ` Aneesh Kumar K.V
2013-08-21 5:11 ` Paul Mackerras
2013-08-21 7:37 ` Alexander Graf
2013-08-21 7:42 ` Alexander Graf
2013-08-21 9:25 ` Paul Mackerras
2013-08-21 10:03 ` Alexander Graf
2013-08-21 15:59 ` Aneesh Kumar K.V
2013-08-21 23:32 ` Alexander Graf
2013-08-22 13:20 ` Aneesh Kumar K.V
2013-08-22 16:36 ` Alexander Graf