kvm.vger.kernel.org archive mirror
* kvm virtio ethernet ring on guest side over high throughput (packet per second)
@ 2014-01-21 18:06 Alejandro Comisario
  2014-01-22 15:22 ` Stefan Hajnoczi
  0 siblings, 1 reply; 8+ messages in thread
From: Alejandro Comisario @ 2014-01-21 18:06 UTC (permalink / raw)
  To: kvm, linux-kernel

Hi guys, in the past, when we were running on physical servers, we
had several throughput issues with our APIs. In our case we measure
throughput in packets per second rather than bandwidth (Mb/s), since
our APIs answer with lots of very small packets (maximum response of
3.5k, average response of 1.5k). On those physical servers, whenever
we reached throughput capacity (detected through client timeouts),
we tuned the ethernet ring configuration and the problem disappeared.
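
For context, by "tuned the ethernet ring configuration" I mean the
standard ethtool ring parameters, roughly as in the sketch below
("eth0" and the 4096 values are just examples; the supported maximum
depends on the NIC driver):

  # show current and maximum supported RX/TX ring sizes
  ethtool -g eth0
  # grow the descriptor rings towards the hardware maximum
  ethtool -G eth0 rx 4096 tx 4096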

Today, with KVM and over 10k virtual instances, when we try to
increase the throughput of the KVM instances we run into the fact
that, when using virtio on the guests, the ring is capped at 256
TX/RX descriptors, and on the host side the attached vnet device has
a txqueuelen of 500.
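
The txqueuelen half of that is at least adjustable from the host
side; a minimal sketch, assuming "vnet0" is the tap device attached
to the guest and 10000 is an arbitrary example value:

  # check the current queue length of the tap device
  ip link show vnet0
  # raise it from the default of 500
  ip link set dev vnet0 txqueuelen 10000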

What I want to know is: how can I tune the guest to handle more
packets per second, given that I know this is my bottleneck?

* Does virtio expose a way to configure a larger ring on the virtual
ethernet device?
* Does using vhost_net help me increase packets per second, and not
only bandwidth? (see the sketch below)
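
To make that second question concrete, this is roughly how vhost-net
gets enabled (a sketch; the -netdev/-device ids are illustrative, and
under OpenStack this is normally wired up through libvirt rather than
by hand):

  # load the in-kernel virtio-net backend on the host
  modprobe vhost_net
  # attach the guest NIC to a tap device with the vhost backend on
  qemu-system-x86_64 ... \
      -netdev tap,id=net0,vhost=on \
      -device virtio-net-pci,netdev=net0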

Has anyone had to struggle with this before and knows where I can
look? There is LOTS of information about KVM networking performance
tuning, but nothing related to increasing throughput in terms of pps
capacity.

These are a couple of the configurations we currently have on the
compute nodes:

* 2x1Gb bonded interfaces (if you want to know about the more than
20 NIC models we are using, just ask)
* Multi-queue interfaces, pinned via IRQ to different cores (see the
sketch after this list)
* Linux bridges, no VLANs, no Open vSwitch
* Ubuntu 12.04, kernel 3.2.0-[40-48]
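
The IRQ pinning is the standard smp_affinity mechanism, roughly as
sketched below (the IRQ number 45 and the CPU mask are examples from
a hypothetical box):

  # find the IRQ numbers of the NIC queues
  grep eth0 /proc/interrupts
  # pin queue 0's IRQ (say, IRQ 45) to CPU 2 (hex mask 0x4)
  echo 4 > /proc/irq/45/smp_affinity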


Any help will be greatly appreciated!

Thank you.


Thread overview: 8+ messages
2014-01-21 18:06 kvm virtio ethernet ring on guest side over high throughput (packet per second) Alejandro Comisario
2014-01-22 15:22 ` Stefan Hajnoczi
2014-01-22 21:32   ` Alejandro Comisario
2014-01-23  3:14     ` Jason Wang
2014-01-23 19:25       ` Alejandro Comisario
2014-01-24 18:40         ` Alejandro Comisario
2014-01-23  3:12   ` Jason Wang
