From: "Michael S. Tsirkin" <mst@redhat.com>
To: Simon Horman <horms@verge.net.au>
Cc: Rusty Russell <rusty@rustcorp.com.au>,
virtualization@lists.linux-foundation.org,
Jesse Gross <jesse@nicira.com>,
dev@openvswitch.org, virtualization@lists.osdl.org,
netdev@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: Flow Control and Port Mirroring Revisited
Date: Thu, 6 Jan 2011 14:07:22 +0200
Message-ID: <20110106120722.GD12142@redhat.com>
In-Reply-To: <20110106113052.GA2541@verge.net.au>
On Thu, Jan 06, 2011 at 08:30:52PM +0900, Simon Horman wrote:
> On Thu, Jan 06, 2011 at 12:27:55PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Jan 06, 2011 at 06:33:12PM +0900, Simon Horman wrote:
> > > Hi,
> > >
> > > Back in October I reported that I noticed a problem whereby flow control
> > > breaks down when openvswitch is configured to mirror a port[1].
> >
> > Apropos the UDP flow control. See this
> > http://www.spinics.net/lists/netdev/msg150806.html
> > for some problems it introduces.
> > Unfortunately UDP does not have built-in flow control.
> > At some level it's just conceptually broken:
> > it's not present in physical networks so why should
> > we try and emulate it in a virtual network?
> >
> >
> > Specifically, when you do:
> > # netperf -c -4 -t UDP_STREAM -H 172.17.60.218 -l 30 -- -m 1472
> > You are asking: what happens if I push data faster than it can be received?
> > But why is this an interesting question?
> > Ask 'what is the maximum rate at which I can send data with %X packet
> > loss' or 'what is the packet loss at rate Y Gb/s'. netperf has
> > -b and -w flags for this. It needs to be configured
> > with --enable-intervals=yes for them to work.
> >
> > If you pose the questions this way the problem of pacing
> > the execution just goes away.
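[Concretely, a paced variant of the command above might look like the
following; the burst and interval values are purely illustrative, and the
-b/-w options only take effect when netperf was built with
--enable-intervals=yes:]

```shell
# Illustrative only: send bursts of 16 messages with a wait between
# bursts instead of blasting as fast as possible.  Requires a netperf
# configured with --enable-intervals=yes; host and values are examples.
netperf -4 -t UDP_STREAM -H 172.17.60.218 -l 30 -b 16 -w 1 -- -m 1472
```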
>
> I am aware that UDP inherently lacks flow control.
Everyone is aware of that, but it is always followed by a 'however'
:).
> The aspect of flow control that I am interested in is situations where the
> guest can create large amounts of work for the host. However, it seems that
> in the case of virtio with vhostnet that the CPU utilisation seems to be
> almost entirely attributable to the vhost and qemu-system processes. And
> in the case of virtio without vhost net the CPU is used by the qemu-system
> process. In both case I assume that I could use a cgroup or something
> similar to limit the guests.
cgroups, yes. The vhost process inherits the cgroups
from the qemu process, so you can limit them all.
If you are after limiting the maximum throughput of the guest,
you can do that with cgroups as well.
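[For example, with the cgroup v1 interface available at the time,
something along these lines would cap a guest's CPU share; the group
name and share value are made up for illustration:]

```shell
# Sketch only (cgroup v1, needs root): create a group, give it a
# reduced CPU share, and move the qemu process into it.  The vhost
# worker threads spawned on its behalf inherit the same cgroup.
mkdir -p /sys/fs/cgroup/cpu/guest0               # hypothetical group name
echo 512 > /sys/fs/cgroup/cpu/guest0/cpu.shares  # half the default 1024
echo "$QEMU_PID" > /sys/fs/cgroup/cpu/guest0/tasks
```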
> Assuming all of that is true then from a resource control problem point of
> view, which is mostly what I am concerned about, the problem goes away.
> However, I still think that it would be nice to resolve the situation I
> described.
We need to articulate what's wrong here, otherwise we won't
be able to resolve the situation. We are sending UDP packets
as fast as we can and some receivers can't cope. Is that the problem?
We have made attempts in the past to add a pseudo flow control
to make UDP on the same host work better.
Maybe they help some, but they certainly also introduce problems.
--
MST