* big latency while under HV
@ 2013-09-09 15:19 Ivan Krivonos
2013-09-10 9:24 ` Diana Craciun
0 siblings, 1 reply; 2+ messages in thread
From: Ivan Krivonos @ 2013-09-09 15:19 UTC (permalink / raw)
To: linuxppc-dev
Hi,
I'm working on an embedded hypervisor targeting QorIQ platforms (P3041/P4080).
I have a working prototype that starts a custom RTOS on a single core in guest
space. What I see is much higher latency (up to 3x) in the RTOS running on top
of the HV compared to the same RTOS running bare-metal. I'm using the lmbench
utility. It shows:
integer mul: 3.48 nanoseconds
integer div: 30.44 nanoseconds
integer mod: 13.92 nanoseconds
int64 bit: 1.75 nanoseconds
int64 add: 1.42 nanoseconds
int64 mul: 6.95 nanoseconds
HV:hvpriv_count 60000
int64 div: 447.56 nanoseconds
int64 mod: 385.42 nanoseconds
float add: 7.12 nanoseconds
float mul: 6.95 nanoseconds
float div: 33.05 nanoseconds
double add: 7.11 nanoseconds
double mul: 8.70 nanoseconds
double div: 57.36 nanoseconds
float bogomflops: 46.98 nanoseconds
double bogomflops: 73.09 nanoseconds
The bare-metal results are 3x better. Does anybody have any idea what the
source of this latency may be? I forward all exceptions to the guest without
involving the HV; only hvpriv is handled by the HV, and that takes no more
than 2 bus cycles.
Sorry for my poor English.
* Re: big latency while under HV
2013-09-09 15:19 big latency while under HV Ivan Krivonos
@ 2013-09-10 9:24 ` Diana Craciun
0 siblings, 0 replies; 2+ messages in thread
From: Diana Craciun @ 2013-09-10 9:24 UTC (permalink / raw)
To: Ivan Krivonos; +Cc: linuxppc-dev
Hi,
Just so we are on the same page: are you using the Freescale Embedded
Hypervisor shipped with the Freescale SDK, or another embedded hypervisor?
In any case, the question is not really related to the Linux kernel, so you
should probably redirect it to Freescale support at support@freescale.com.
Diana
On 09/09/2013 06:19 PM, Ivan Krivonos wrote:
> Hi,
>
> I'm working on an embedded hypervisor targeting QorIQ platforms (P3041/P4080).
> I have a working prototype that starts a custom RTOS on a single core in guest
> space. What I see is much higher latency (up to 3x) in the RTOS running on top
> of the HV compared to the same RTOS running bare-metal. I'm using the lmbench
> utility. It shows:
>
> integer mul: 3.48 nanoseconds
> integer div: 30.44 nanoseconds
> integer mod: 13.92 nanoseconds
> int64 bit: 1.75 nanoseconds
> int64 add: 1.42 nanoseconds
> int64 mul: 6.95 nanoseconds
> HV:hvpriv_count 60000
> int64 div: 447.56 nanoseconds
> int64 mod: 385.42 nanoseconds
> float add: 7.12 nanoseconds
> float mul: 6.95 nanoseconds
> float div: 33.05 nanoseconds
> double add: 7.11 nanoseconds
> double mul: 8.70 nanoseconds
> double div: 57.36 nanoseconds
> float bogomflops: 46.98 nanoseconds
> double bogomflops: 73.09 nanoseconds
>
> The bare-metal results are 3x better. Does anybody have any idea what the
> source of this latency may be? I forward all exceptions to the guest without
> involving the HV; only hvpriv is handled by the HV, and that takes no more
> than 2 bus cycles.
> Sorry for my poor English.
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev
>