* Performance problem about address translation
@ 2015-07-06 7:22 xinyue
2015-07-06 8:11 ` Andrew Cooper
From: xinyue @ 2015-07-06 7:22 UTC (permalink / raw)
To: ian.campbell, xen-devel
Hi,
I want to translate a virtual address in an HVM DomU into a virtual address in Xen. But when I use the functions paging_gva_to_gfn and get_gfn, performance drops quickly, the machine becomes very hot, and I then have to force it to shut down.
The code I used is below:
uint32_t pfec = PFEC_page_present;
unsigned long gfn;
unsigned long mfn;
unsigned long virtaddr;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
virtaddr = map_domain_page(mfn_x(mfn));
I also used the dbg_hvm_va2mfn function in debug.c, and the performance problem is still present.
I don't know why; could someone give me some advice?
Thanks for any advice and best regards!
xinyue
* Re: Performance problem about address translation
@ 2015-07-06 7:58 xinyue
2015-07-06 8:37 ` Andrew Cooper
From: xinyue @ 2015-07-06 7:58 UTC (permalink / raw)
To: andrew.cooper3; +Cc: xen-devel
On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:
On 06/07/2015 08:22, xinyue wrote:
Hi,
I want to translate a virtual address in an HVM DomU into a virtual address in Xen. But when I use the functions paging_gva_to_gfn and get_gfn, performance drops quickly, the machine becomes very hot, and I then have to force it to shut down.
Your machine clearly isn't cooled sufficiently, which is the first problem.
The code I used is below:
uint32_t pfec = PFEC_page_present;
unsigned long gfn;
unsigned long mfn;
unsigned long virtaddr;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
virtaddr = map_domain_page(mfn_x(mfn));
I also used the dbg_hvm_va2mfn function in debug.c, and the performance problem is still present.
Walking pagetables in software is slow. There is no getting around this.
Your performance problems will be caused by performing the operation far too often. You should find a way to reduce this.
Thanks very much. I think I only do this once. And after the translation is done, the performance does not return to normal. Does that mean that if I wait long enough it will recover?
~Andrew
* Re: Performance problem about address translation
2015-07-06 7:22 Performance problem about address translation xinyue
@ 2015-07-06 8:11 ` Andrew Cooper
From: Andrew Cooper @ 2015-07-06 8:11 UTC (permalink / raw)
To: xinyue, ian.campbell, xen-devel
On 06/07/2015 08:22, xinyue wrote:
> Hi,
>
> I want to translate a virtual address in an HVM DomU into a virtual
> address in Xen. But when I use the functions paging_gva_to_gfn and
> get_gfn, performance drops quickly, the machine becomes very hot, and
> I then have to force it to shut down.
Your machine clearly isn't cooled sufficiently, which is the first problem.
>
> The code I used is below:
> uint32_t pfec = PFEC_page_present;
> unsigned long gfn;
> unsigned long mfn;
> unsigned long virtaddr;
> struct vcpu *vcpu = current;
> struct domain *d = vcpu->domain;
>
> gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
> mfn = get_gfn(d, gfn, &t);
> virtaddr = map_domain_page(mfn_x(mfn));
>
> I also used the dbg_hvm_va2mfn function in debug.c, and the performance
> problem is still present.
Walking pagetables in software is slow. There is no getting around this.
Your performance problems will be caused by performing the operation far
too often. You should find a way to reduce this.
~Andrew
* Re: Performance problem about address translation
2015-07-06 7:58 xinyue
@ 2015-07-06 8:37 ` Andrew Cooper
From: Andrew Cooper @ 2015-07-06 8:37 UTC (permalink / raw)
To: xinyue; +Cc: xen-devel
On 06/07/2015 08:58, xinyue wrote:
>
>
>
> On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:
> On 06/07/2015 08:22, xinyue wrote:
>>
>> Hi,
>>
>> I want to translate a virtual address in an HVM DomU into a
>> virtual address in Xen. But when I use the functions paging_gva_to_gfn
>> and get_gfn, performance drops quickly, the machine becomes very hot,
>> and I then have to force it to shut down.
>
> Your machine clearly isn't cooled sufficiently, which is the first
> problem.
>
>>
>> The code I used is below:
>> uint32_t pfec = PFEC_page_present;
>> unsigned long gfn;
>> unsigned long mfn;
>> unsigned long virtaddr;
>> struct vcpu *vcpu = current;
>> struct domain *d = vcpu->domain;
>>
>> gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
>> mfn = get_gfn(d, gfn, &t);
>> virtaddr = map_domain_page(mfn_x(mfn));
>>
>> I also used the dbg_hvm_va2mfn function in debug.c, and the performance
>> problem is still present.
>
> Walking pagetables in software is slow. There is no getting around this.
>
> Your performance problems will be caused by performing the operation
> far too often. You should find a way to reduce this.
>
>
>
>
> Thanks very much. I think I only do this once. And after the
> translation is done, the performance does not return to normal. Does
> that mean that if I wait long enough it will recover?
It almost certainly means you are not doing it just once like you suppose.
~Andrew
* Re: Performance problem about address translation
@ 2015-07-06 12:21 xinyue
From: xinyue @ 2015-07-06 12:21 UTC (permalink / raw)
To: andrew.cooper3; +Cc: xen-devel
On 2015-07-06, Mon, 16:11:02, Andrew Cooper wrote:
On 06/07/2015 08:58, xinyue wrote:
On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:
On 06/07/2015 08:22, xinyue wrote:
Hi,
I want to translate a virtual address in an HVM DomU into a virtual address in Xen. But when I use the functions paging_gva_to_gfn and get_gfn, performance drops quickly, the machine becomes very hot, and I then have to force it to shut down.
Your machine clearly isn't cooled sufficiently, which is the first problem.
The code I used is below:
uint32_t pfec = PFEC_page_present;
unsigned long gfn;
unsigned long mfn;
unsigned long virtaddr;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
virtaddr = map_domain_page(mfn_x(mfn));
I also used the dbg_hvm_va2mfn function in debug.c, and the performance problem is still present.
Walking pagetables in software is slow. There is no getting around this.
Your performance problems will be caused by performing the operation far too often. You should find a way to reduce this.
Thanks very much. I think I only do this once. And after the translation is done, the performance does not return to normal. Does that mean that if I wait long enough it will recover?
It almost certainly means you are not doing it just once like you suppose.
~andrew
Yes, you are right. I added a printk in get_gfn and found it was called many times. I'll check why that happens. Thanks a lot!
xinyue
* Re: Performance problem about address translation
@ 2015-07-07 1:46 xinyue
From: xinyue @ 2015-07-07 1:46 UTC (permalink / raw)
To: andrew.cooper3; +Cc: xen-devel
On 2015-07-06, Mon, 16:11:02, Andrew Cooper wrote:
On 06/07/2015 08:58, xinyue wrote:
On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:
On 06/07/2015 08:22, xinyue wrote:
Hi,
I want to translate a virtual address in an HVM DomU into a virtual address in Xen. But when I use the functions paging_gva_to_gfn and get_gfn, performance drops quickly, the machine becomes very hot, and I then have to force it to shut down.
Your machine clearly isn't cooled sufficiently, which is the first problem.
The code I used is below:
uint32_t pfec = PFEC_page_present;
unsigned long gfn;
unsigned long mfn;
unsigned long virtaddr;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
p2m_type_t t;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
virtaddr = map_domain_page(mfn_x(mfn));
I also used the dbg_hvm_va2mfn function in debug.c, and the performance problem is still present.
Walking pagetables in software is slow. There is no getting around this.
Your performance problems will be caused by performing the operation far too often. You should find a way to reduce this.
Thanks very much. I think I only do this once. And after the translation is done, the performance does not return to normal. Does that mean that if I wait long enough it will recover?
It almost certainly means you are not doing it just once like you suppose.
~andrew
Yes, you are right. I added a printk in get_gfn and found it was called many times. I'll check why that happens. Thanks a lot!
Sorry, I was mistaken: the calls to these functions that appear in the log happen before I invoke them. I invoke these functions through a hypercall from the HVM DomU, and from the log I think I invoked them only once. Maybe the performance problem is caused by the parameters I used? Could you help me check whether I used them improperly, as posted before?
Another question: when I add a printk in the paging_gva_to_gfn function, performance also drops so seriously that the HVM DomU can't even boot successfully. I am wondering why.
Thanks again and best regards!
xinyue
* Re: Performance problem about address translation
@ 2015-07-07 3:24 xinyue
2015-07-07 11:49 ` Ian Campbell
From: xinyue @ 2015-07-07 3:24 UTC (permalink / raw)
To: andrew.cooper3; +Cc: xen-devel
On 2015-07-06, Mon, 16:11:02, Andrew Cooper wrote:
On 06/07/2015 08:58, xinyue wrote:
On 2015-07-06, Mon, 15:44:53, Andrew Cooper wrote:
On 06/07/2015 08:22, xinyue wrote:
Hi,
I want to translate a virtual address in an HVM DomU into a virtual address in Xen. But when I use the functions paging_gva_to_gfn and get_gfn, performance drops quickly, the machine becomes very hot, and I then have to force it to shut down.
Your machine clearly isn't cooled sufficiently, which is the first problem.
The code I used is below:
uint32_t pfec = PFEC_page_present;
unsigned long gfn;
unsigned long mfn;
unsigned long virtaddr;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
p2m_type_t t;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
virtaddr = map_domain_page(mfn_x(mfn));
I also used the dbg_hvm_va2mfn function in debug.c, and the performance problem is still present.
Walking pagetables in software is slow. There is no getting around this.
Your performance problems will be caused by performing the operation far too often. You should find a way to reduce this.
Thanks very much. I think I only do this once. And after the translation is done, the performance does not return to normal. Does that mean that if I wait long enough it will recover?
It almost certainly means you are not doing it just once like you suppose.
~andrew
Yes, you are right. I added a printk in get_gfn and found it was called many times. I'll check why that happens. Thanks a lot!
Sorry, I was mistaken: the calls to these functions that appear in the log happen before I invoke them. I invoke these functions through a hypercall from the HVM DomU, and from the log I think I invoked them only once. Maybe the performance problem is caused by the parameters I used? Could you help me check whether I used them improperly, as posted before?
Another question: when I add a printk in the paging_gva_to_gfn function, performance also drops so seriously that the HVM DomU can't even boot successfully. I am wondering why.
Thanks again and best regards!
And after analyzing the performance of the HVM DomU, I found a process named "evolution-data-" using almost 99.9% CPU. Does someone know what this is and why it appears?
xinyue
* Re: Performance problem about address translation
2015-07-07 3:24 xinyue
@ 2015-07-07 11:49 ` Ian Campbell
2015-07-08 6:13 ` xinyue
From: Ian Campbell @ 2015-07-07 11:49 UTC (permalink / raw)
To: xinyue; +Cc: andrew.cooper3, xen-devel
On Tue, 2015-07-07 at 11:24 +0800, xinyue wrote:
Please don't use HTML mail and do proper ">" quoting
> And after analyzing the performance of the HVM DomU, I found a process
> named "evolution-data-" using almost 99.9% CPU. Does someone know
> what this is and why it appears?
evolution-data-server is part of the evolution mail client. It has
nothing to do with Xen I'm afraid so you will have to look elsewhere for
why it is taking so much CPU.
Ian.
* Re: Performance problem about address translation
2015-07-07 11:49 ` Ian Campbell
@ 2015-07-08 6:13 ` xinyue
2015-07-08 6:26 ` xinyue
From: xinyue @ 2015-07-08 6:13 UTC (permalink / raw)
To: Ian Campbell; +Cc: andrew.cooper3, xen-devel
On 2015-07-07 19:49, Ian Campbell wrote:
> On Tue, 2015-07-07 at 11:24 +0800, xinyue wrote:
>
> Please don't use HTML mail and do proper ">" quoting
>
>> And after analyzing the performance of the HVM DomU, I found a process
>> named "evolution-data-" using almost 99.9% CPU. Does someone know
>> what this is and why it appears?
> evolution-data-server is part of the evolution mail client. It has
> nothing to do with Xen I'm afraid so you will have to look elsewhere for
> why it is taking so much CPU.
>
> Ian.
>
Sorry for that and thanks very much.
I think the problem may be caused by the address
* Re: Performance problem about address translation
2015-07-08 6:13 ` xinyue
@ 2015-07-08 6:26 ` xinyue
2015-07-08 7:43 ` xinyue
From: xinyue @ 2015-07-08 6:26 UTC (permalink / raw)
To: Ian Campbell; +Cc: andrew.cooper3, xen-devel
Very sorry for sending the wrong message before.
On 2015-07-08 14:13, xinyue wrote:
>
> On 2015-07-07 19:49, Ian Campbell wrote:
>> On Tue, 2015-07-07 at 11:24 +0800, xinyue wrote:
>>
>> Please don't use HTML mail and do proper ">" quoting
>>
>>> And after analyzing the performance of the HVM DomU, I found a process
>>> named "evolution-data-" using almost 99.9% CPU. Does someone know
>>> what this is and why it appears?
>> evolution-data-server is part of the evolution mail client. It has
>> nothing to do with Xen I'm afraid so you will have to look elsewhere for
>> why it is taking so much CPU.
>>
>> Ian.
>>
>
Sorry for that and thanks very much.
I think the problem may be caused by address alignment. The HVM DomU
crashed after the hypercall, and Dom0 sometimes crashed later with a "Bus
error".
I think the function that caused the crash is get_gfn. The related code is
unsigned long gfn;
unsigned long mfn;
struct vcpu *vcpu = current;
struct domain *d = vcpu->domain;
uint32_t pfec = PFEC_page_present;
p2m_type_t t;
gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
mfn = get_gfn(d, gfn, &t);
Am I missing some type conversion?
Thanks and best regards!
xinyue
* Re: Performance problem about address translation
2015-07-08 6:26 ` xinyue
@ 2015-07-08 7:43 ` xinyue
From: xinyue @ 2015-07-08 7:43 UTC (permalink / raw)
To: Ian Campbell; +Cc: andrew.cooper3, xen-devel
On 2015-07-08 14:26, xinyue wrote:
> Very sorry for sending the wrong message before.
> On 2015-07-08 14:13, xinyue wrote:
>>
>> On 2015-07-07 19:49, Ian Campbell wrote:
>>> On Tue, 2015-07-07 at 11:24 +0800, xinyue wrote:
>>>
>>> Please don't use HTML mail and do proper ">" quoting
>>>
>>>> And after analyzing the performance of the HVM DomU, I found a process
>>>> named "evolution-data-" using almost 99.9% CPU. Does someone know
>>>> what this is and why it appears?
>>> evolution-data-server is part of the evolution mail client. It has
>>> nothing to do with Xen I'm afraid so you will have to look elsewhere
>>> for
>>> why it is taking so much CPU.
>>>
>>> Ian.
>>>
>>
> Sorry for that and thanks very much.
>
> I think the problem may be caused by address alignment. The HVM
> DomU crashed after the hypercall, and Dom0 sometimes crashed later with
> a "Bus error".
>
> I think the function that caused the crash is get_gfn. The related
> code is
>
> unsigned long gfn;
> unsigned long mfn;
> struct vcpu *vcpu = current;
> struct domain *d = vcpu->domain;
> uint32_t pfec = PFEC_page_present;
> p2m_type_t t;
> gfn = paging_gva_to_gfn(current, 0xc0290000, &pfec);
> mfn = get_gfn(d, gfn, &t);
>
> Am I missing some type conversion?
>
>
> Thanks and best regards!
>
> xinyue
Thanks for all the advice. I found that the problem appeared because I forgot
to call the put_gfn function.
Thanks again and best regards!
xinyue
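For readers following the thread, here is a minimal sketch of the balanced sequence being discussed, assuming the Xen 4.5-era APIs used in the snippets above (paging_gva_to_gfn, get_gfn/put_gfn, map_domain_page/unmap_domain_page). The wrapper function name is hypothetical and the error handling is only indicated in comments:

/* Illustrative sketch only -- not code from this thread. */
static void demo_access_guest_va(unsigned long gva)
{
    uint32_t pfec = PFEC_page_present;
    p2m_type_t t;
    unsigned long gfn;
    mfn_t mfn;
    void *va;
    struct vcpu *v = current;
    struct domain *d = v->domain;

    /* Software walk of the guest pagetables: guest virtual -> guest frame. */
    gfn = paging_gva_to_gfn(v, gva, &pfec);
    /* (a real caller would bail out here if the translation failed) */

    /* p2m lookup that takes a reference: guest frame -> machine frame. */
    mfn = get_gfn(d, gfn, &t);
    /* (a real caller would also check the returned mfn and p2m type) */

    /* Temporarily map the machine frame into Xen's virtual address space. */
    va = map_domain_page(mfn_x(mfn));

    /* ... access the guest page through va ... */

    unmap_domain_page(va);
    put_gfn(d, gfn);    /* every get_gfn() must be paired with a put_gfn() */
}

The missing put_gfn() is exactly what the final message identifies: without it, the reference taken by get_gfn() on the p2m entry is never dropped.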