xen-devel.lists.xenproject.org archive mirror
* How to reduce high latency on PV-on-HVM?
@ 2011-01-04  7:56 shen.qilong
  2011-01-05 13:47 ` Stefano Stabellini
  0 siblings, 1 reply; 6+ messages in thread
From: shen.qilong @ 2011-01-04  7:56 UTC (permalink / raw)
  To: xen-devel; +Cc: 王鹏



About ping latency in PV-on-HVM:
I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged network model.

I expected the latency to be small most of the time (less than 1 ms), but I found that many packets
show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
 

The following is the test environment and the test results:

The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
The server and client are connected through the same network bridge.
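(To double-check that both guests really share one bridge, something like this can be run in
domain-0; the bridge and vif names below are only placeholders:)

# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.001122334455       no              peth0
                                                        vif1.0
                                                        vif2.0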

The results are as follows:
# ping -i 1 -c 10000 192.18.22.72 | grep -v "time=0" 
PING 192.18.22.72 (192.18.22.72) 56(84) bytes of data.
64 bytes from 192.18.22.72: icmp_seq=125 ttl=64 time=6.78 ms
64 bytes from 192.18.22.72: icmp_seq=244 ttl=64 time=1.54 ms
64 bytes from 192.18.22.72: icmp_seq=510 ttl=64 time=10.4 ms
64 bytes from 192.18.22.72: icmp_seq=597 ttl=64 time=2.90 ms
64 bytes from 192.18.22.72: icmp_seq=883 ttl=64 time=1.60 ms
64 bytes from 192.18.22.72: icmp_seq=968 ttl=64 time=4.26 ms
64 bytes from 192.18.22.72: icmp_seq=1328 ttl=64 time=6.20 ms
64 bytes from 192.18.22.72: icmp_seq=1520 ttl=64 time=2.78 ms
64 bytes from 192.18.22.72: icmp_seq=1606 ttl=64 time=27.4 ms
64 bytes from 192.18.22.72: icmp_seq=1959 ttl=64 time=1.91 ms
64 bytes from 192.18.22.72: icmp_seq=2210 ttl=64 time=6.98 ms
64 bytes from 192.18.22.72: icmp_seq=2381 ttl=64 time=3.65 ms
64 bytes from 192.18.22.72: icmp_seq=2447 ttl=64 time=26.4 ms
64 bytes from 192.18.22.72: icmp_seq=2552 ttl=64 time=14.3 ms
64 bytes from 192.18.22.72: icmp_seq=2616 ttl=64 time=16.3 ms
64 bytes from 192.18.22.72: icmp_seq=2788 ttl=64 time=29.7 ms
64 bytes from 192.18.22.72: icmp_seq=3198 ttl=64 time=2.32 ms
64 bytes from 192.18.22.72: icmp_seq=3374 ttl=64 time=1.89 ms
64 bytes from 192.18.22.72: icmp_seq=3542 ttl=64 time=14.3 ms
64 bytes from 192.18.22.72: icmp_seq=3705 ttl=64 time=14.2 ms
64 bytes from 192.18.22.72: icmp_seq=3739 ttl=64 time=9.91 ms
64 bytes from 192.18.22.72: icmp_seq=3751 ttl=64 time=1.48 ms
64 bytes from 192.18.22.72: icmp_seq=4089 ttl=64 time=4.63 ms
64 bytes from 192.18.22.72: icmp_seq=4103 ttl=64 time=4.59 ms
64 bytes from 192.18.22.72: icmp_seq=4112 ttl=64 time=1.18 ms
64 bytes from 192.18.22.72: icmp_seq=4172 ttl=64 time=1.58 ms
64 bytes from 192.18.22.72: icmp_seq=4185 ttl=64 time=3.02 ms
64 bytes from 192.18.22.72: icmp_seq=4236 ttl=64 time=25.9 ms
64 bytes from 192.18.22.72: icmp_seq=4250 ttl=64 time=1.18 ms
64 bytes from 192.18.22.72: icmp_seq=5394 ttl=64 time=21.2 ms
64 bytes from 192.18.22.72: icmp_seq=5455 ttl=64 time=6.69 ms
64 bytes from 192.18.22.72: icmp_seq=5541 ttl=64 time=4.65 ms
64 bytes from 192.18.22.72: icmp_seq=5842 ttl=64 time=1.68 ms
64 bytes from 192.18.22.72: icmp_seq=5972 ttl=64 time=29.9 ms
64 bytes from 192.18.22.72: icmp_seq=5992 ttl=64 time=23.7 ms
64 bytes from 192.18.22.72: icmp_seq=6291 ttl=64 time=14.5 ms
64 bytes from 192.18.22.72: icmp_seq=6724 ttl=64 time=1.78 ms
64 bytes from 192.18.22.72: icmp_seq=6764 ttl=64 time=3.61 ms
64 bytes from 192.18.22.72: icmp_seq=7244 ttl=64 time=23.7 ms
64 bytes from 192.18.22.72: icmp_seq=7299 ttl=64 time=1.62 ms
64 bytes from 192.18.22.72: icmp_seq=7675 ttl=64 time=28.6 ms
64 bytes from 192.18.22.72: icmp_seq=7892 ttl=64 time=11.0 ms
64 bytes from 192.18.22.72: icmp_seq=7952 ttl=64 time=4.20 ms
64 bytes from 192.18.22.72: icmp_seq=7955 ttl=64 time=1.20 ms
64 bytes from 192.18.22.72: icmp_seq=9025 ttl=64 time=8.04 ms
64 bytes from 192.18.22.72: icmp_seq=9486 ttl=64 time=18.5 ms
64 bytes from 192.18.22.72: icmp_seq=9495 ttl=64 time=1.02 ms
64 bytes from 192.18.22.72: icmp_seq=9579 ttl=64 time=30.3 ms
64 bytes from 192.18.22.72: icmp_seq=9623 ttl=64 time=26.7 ms
64 bytes from 192.18.22.72: icmp_seq=9637 ttl=64 time=17.1 ms
64 bytes from 192.18.22.72: icmp_seq=9858 ttl=64 time=22.8 ms
64 bytes from 192.18.22.72: icmp_seq=9959 ttl=64 time=7.11 ms
--- 192.18.22.72 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 9999321ms
rtt min/avg/max/mdev = 0.081/0.292/30.300/1.036 ms
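(For what it's worth, instead of hiding the sub-millisecond replies with grep -v "time=0", the slow
replies can be counted directly; the awk filter below is only a sketch of the same test.)

# ping -i 1 -c 10000 192.18.22.72 | awk -F'time=' '/time=/ && $2+0 >= 1 {n++} END {print n+0, "replies took 1 ms or more"}'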

Can someone help me or point me in the right direction?

How can I reduce the ping latency?

Best regards!

2011-01-04 



  
**************************************************
Shen Qilong (沈启龙)
Department: 云快线 - Operations Support Center - R&D Center
Mobile: 18910286687
E-mail: shen.qilong@21vianet.com
Address: Building M5, No. 1 Jiuxianqiao East Road, Chaoyang District, Beijing
**************************************************


* Re: How to reduce high latency on PV-on-HVM?
  2011-01-04  7:56 How to reduce high latency on PV-on-HVM? shen.qilong
@ 2011-01-05 13:47 ` Stefano Stabellini
  2011-01-05 13:51   ` Pasi Kärkkäinen
  0 siblings, 1 reply; 6+ messages in thread
From: Stefano Stabellini @ 2011-01-05 13:47 UTC (permalink / raw)
  To: shen.qilong; +Cc: xen-devel, 王鹏


On Tue, 4 Jan 2011, shen.qilong wrote:
> About ping latency in PV-on-HVM:
> I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged
> network model.
>
> I expected the latency to be small most of the time (less than 1 ms), but I found that many
> packets show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
>
> The following is the test environment and the test results:
>
> The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> The server and client are connected through the same network bridge.
>
> Can someone help me or point me in the right direction?

If you boot your guest kernel with loglevel=9, can you see the following line
among the boot messages?

Xen HVM callback vector for event delivery is enabled
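
One way to check after rebooting the guest with loglevel=9 is to grep the kernel log (just a
sketch; the exact pattern is an assumption based on the message above):

# dmesg | grep -i "callback vector"

If the optimization is active, the "Xen HVM callback vector for event delivery is enabled" line
should show up.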


* Re: How to reduce high latency on PV-on-HVM?
  2011-01-05 13:47 ` Stefano Stabellini
@ 2011-01-05 13:51   ` Pasi Kärkkäinen
  2011-01-05 13:59     ` Stefano Stabellini
  0 siblings, 1 reply; 6+ messages in thread
From: Pasi Kärkkäinen @ 2011-01-05 13:51 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: shen.qilong, xen-devel, 王鹏

On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> On Tue, 4 Jan 2011, shen.qilong wrote:
> > About ping latency in PV-on-HVM:
> > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged
> > network model.
> >
> > I expected the latency to be small most of the time (less than 1 ms), but I found that many
> > packets show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
> >
> > The following is the test environment and the test results:
> >
> > The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > The server and client are connected through the same network bridge.
> >
> > Can someone help me or point me in the right direction?
> 
> If you boot your guest kernel with loglevel=9, can you see the following line
> among the boot messages?
> 
> Xen HVM callback vector for event delivery is enabled

Hmm... is this message for the optimization that is available in Xen 4.1?

-- Pasi


* Re: How to reduce high latency on PV-on-HVM?
  2011-01-05 13:51   ` Pasi Kärkkäinen
@ 2011-01-05 13:59     ` Stefano Stabellini
  2011-01-05 14:09       ` Ian Campbell
  0 siblings, 1 reply; 6+ messages in thread
From: Stefano Stabellini @ 2011-01-05 13:59 UTC (permalink / raw)
  To: Pasi Kärkkäinen
  Cc: shen.qilong, xen-devel, 王鹏, Stefano Stabellini


On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > About ping latency in PV-on-HVM:
> > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged
> > > network model.
> > >
> > > I expected the latency to be small most of the time (less than 1 ms), but I found that many
> > > packets show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
> > >
> > > The following is the test environment and the test results:
> > >
> > > The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > The server and client are connected through the same network bridge.
> > >
> > > Can someone help me or point me in the right direction?
> > 
> > If you boot your guest kernel with loglevel=9, can you see the following line
> > among the boot messages?
> > 
> > Xen HVM callback vector for event delivery is enabled
> 
> Hmm... is this message for the optimization that is available in Xen 4.1?
> 
 
Nope, it is for the basic optimization that is in the xen-4.0-testing
tree too.
Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so
it is unlikely you have it.
It would be interesting to do the same test on a more recent 4.0.x
hypervisor.
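
As a sanity check, the hypervisor version actually running can be read from domain-0 (a sketch
using the xm toolstack of that era; xl info reports the same fields on newer toolstacks):

# xm info | grep -E "xen_major|xen_minor|xen_extra"

Anything reporting 4.0.1 or later in the 4.0.x series should carry the change discussed above.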


* Re: How to reduce high latency on PV-on-HVM?
  2011-01-05 13:59     ` Stefano Stabellini
@ 2011-01-05 14:09       ` Ian Campbell
  2011-01-05 14:14         ` Stefano Stabellini
  0 siblings, 1 reply; 6+ messages in thread
From: Ian Campbell @ 2011-01-05 14:09 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: shen.qilong, xen-devel, 王鹏

On Wed, 2011-01-05 at 13:59 +0000, Stefano Stabellini wrote:
> On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> > On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > > About ping latency in PV-on-HVM:
> > > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged
> > > > network model.
> > > >
> > > > I expected the latency to be small most of the time (less than 1 ms), but I found that many
> > > > packets show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
> > > >
> > > > The following is the test environment and the test results:
> > > >
> > > > The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > > The server and client are connected through the same network bridge.
> > > >
> > > > Can someone help me or point me in the right direction?
> > > 
> > > If you boot your guest kernel with loglevel=9, can you see the following line
> > > among the boot messages?
> > > 
> > > Xen HVM callback vector for event delivery is enabled
> > 
> > Hmm... is this message for the optimization that is available in Xen 4.1?
> > 
>  
> Nope, it is for the basic optimization that is in the xen-4.0-testing
> tree too.
> Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so
> it is unlikely you have it.
> It would be interesting to do the same test on a more recent 4.0.x
> hypervisor.

It also depends on precisely which "kernel-2.6.x" is being used in the
guest, doesn't it?

Ian.


* Re: How to reduce high latency on PV-on-HVM?
  2011-01-05 14:09       ` Ian Campbell
@ 2011-01-05 14:14         ` Stefano Stabellini
  0 siblings, 0 replies; 6+ messages in thread
From: Stefano Stabellini @ 2011-01-05 14:14 UTC (permalink / raw)
  To: Ian Campbell; +Cc: shen.qilong, xen-devel, 王鹏, Stefano Stabellini


On Wed, 5 Jan 2011, Ian Campbell wrote:
> On Wed, 2011-01-05 at 13:59 +0000, Stefano Stabellini wrote:
> > On Wed, 5 Jan 2011, Pasi Kärkkäinen wrote:
> > > On Wed, Jan 05, 2011 at 01:47:01PM +0000, Stefano Stabellini wrote:
> > > > On Tue, 4 Jan 2011, shen.qilong wrote:
> > > > > About ping latency in PV-on-HVM:
> > > > > I ran a ping test between two VMs (PV-on-HVM) on the same host server, using the bridged
> > > > > network model.
> > > > >
> > > > > I expected the latency to be small most of the time (less than 1 ms), but I found that many
> > > > > packets show high latency (more than 1 ms) in the PV-on-HVM + bridge environment.
> > > > >
> > > > > The following is the test environment and the test results:
> > > > >
> > > > > The host runs xen-4.0.0, domain-0 runs kernel 2.6.32.13, and the PV-on-HVM guests run kernel 2.6.x.
> > > > > The server and client are connected through the same network bridge.
> > > > >
> > > > > Can someone help me or point me in the right direction?
> > > > 
> > > > If you boot your guest kernel with loglevel=9, can you see the following line
> > > > among the boot messages?
> > > > 
> > > > Xen HVM callback vector for event delivery is enabled
> > > 
> > > Hmm... is this message for the optimization that is available in Xen 4.1?
> > > 
> >  
> > Nope, it is for the basic optimization that is in the xen-4.0-testing
> > tree too.
> > Looking more closely, it should be present in 4.0.1 but not in 4.0.0, so
> > it is unlikely you have it.
> > It would be interesting to do the same test on a more recent 4.0.x
> > hypervisor.
> 
> It also depends on precisely which "kernel-2.6.x" is being used in the
> guest, doesn't it?
> 
 
Of course, "kernel 2.6.x" is very vague.
What kernel are you actually using? Is it an upstream kernel, and if so, which exact version?
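
For reference, something along these lines run inside the guest would answer that (a sketch; the
config file path assumes a distro-style kernel install, and the option names vary between 2.6.x
versions):

# uname -r
# grep -E "CONFIG_XEN" /boot/config-$(uname -r)

The second command lists the Xen-related options, including whether the netfront/blkfront PV
drivers were built for that kernel.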


Thread overview: 6+ messages
2011-01-04  7:56 How to reduce high latency on PV-on-HVM? shen.qilong
2011-01-05 13:47 ` Stefano Stabellini
2011-01-05 13:51   ` Pasi Kärkkäinen
2011-01-05 13:59     ` Stefano Stabellini
2011-01-05 14:09       ` Ian Campbell
2011-01-05 14:14         ` Stefano Stabellini
