public inbox for kvm@vger.kernel.org
From: Simon Horman <horms@verge.net.au>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jesse Gross <jesse@nicira.com>,
	Eric Dumazet <eric.dumazet@gmail.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	virtualization@lists.linux-foundation.org, dev@openvswitch.org,
	virtualization@lists.osdl.org, netdev@vger.kernel.org,
	kvm@vger.kernel.org
Subject: Re: Flow Control and Port Mirroring Revisited
Date: Mon, 17 Jan 2011 07:37:30 +0900
Message-ID: <20110116223728.GA6279@verge.net.au>
In-Reply-To: <20110114065415.GA30300@redhat.com>

On Fri, Jan 14, 2011 at 08:54:15AM +0200, Michael S. Tsirkin wrote:
> On Fri, Jan 14, 2011 at 03:35:28PM +0900, Simon Horman wrote:
> > On Fri, Jan 14, 2011 at 06:58:18AM +0200, Michael S. Tsirkin wrote:
> > > On Fri, Jan 14, 2011 at 08:41:36AM +0900, Simon Horman wrote:
> > > > On Thu, Jan 13, 2011 at 10:45:38AM -0500, Jesse Gross wrote:
> > > > > On Thu, Jan 13, 2011 at 1:47 AM, Simon Horman <horms@verge.net.au> wrote:
> > > > > > On Mon, Jan 10, 2011 at 06:31:55PM +0900, Simon Horman wrote:
> > > > > >> On Fri, Jan 07, 2011 at 10:23:58AM +0900, Simon Horman wrote:
> > > > > >> > On Thu, Jan 06, 2011 at 05:38:01PM -0500, Jesse Gross wrote:
> > > > > >> >
> > > > > >> > [ snip ]
> > > > > >> > >
> > > > > >> > > I know that everyone likes a nice netperf result but I agree with
> > > > > >> > > Michael that this probably isn't the right question to be asking.  I
> > > > > >> > > don't think that socket buffers are a real solution to the flow
> > > > > >> > > control problem: they happen to provide that functionality but it's
> > > > > >> > > more of a side effect than anything.  It's just that the amount of
> > > > > >> > > memory consumed by packets in the queue(s) doesn't really have any
> > > > > >> > > implicit meaning for flow control (think multiple physical adapters,
> > > > > >> > > all with the same speed instead of a virtual device and a physical
> > > > > >> > > device with wildly different speeds).  The analog in the physical
> > > > > >> > > world that you're looking for would be Ethernet flow control.
> > > > > >> > > Obviously, if the question is limiting CPU or memory consumption then
> > > > > >> > > that's a different story.
> > > > > >> >
> > > > > >> > Point taken. I will see if I can control CPU (and thus memory) consumption
> > > > > >> > using cgroups and/or tc.
> > > > > >>
> > > > > >> I have found that I can successfully control the throughput using
> > > > > >> the following techniques
> > > > > >>
> > > > > >> 1) Place a tc egress filter on dummy0
> > > > > >>
> > > > > >> 2) Use ovs-ofctl to add a flow that sends skbs to dummy0 and then eth1,
> > > > > >>    this is effectively the same as one of my hacks to the datapath
> > > > > >>    that I mentioned in an earlier mail. The result is that eth1
> > > > > >>    "paces" the connection.
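
For the archives, a rough sketch of the setup described above (the device
names, rate, bridge name and OpenFlow port numbers here are illustrative
and will differ per installation):

```shell
# Create dummy0 and attach a TBF qdisc to shape (pace) its egress.
ip link add dummy0 type dummy
ip link set dummy0 up
tc qdisc add dev dummy0 root tbf rate 100mbit burst 32kbit latency 400ms

# Add an Open vSwitch flow whose actions output to dummy0 before eth1
# (assumes dummy0 is OpenFlow port 2 and eth1 is port 1 on bridge br0).
ovs-ofctl add-flow br0 "in_port=LOCAL,actions=output:2,output:1"
```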
> > > 
> > > This is actually a bug. This means that one slow connection will affect
> > > fast ones. I intend to change the default for qemu to sndbuf=0: this
> > > will fix it but break your "pacing". So pls do not count on this
> > > behaviour.
> > 
> > Do you have a patch I could test?
> 
> You can (and users already can) just run qemu with sndbuf=0. But if you
> like, below.

Thanks

> > > > > > Further to this, I wonder if there is any interest in providing
> > > > > > a method to switch the action order - using ovs-ofctl is a hack imho -
> > > > > > and/or switching the default action order for mirroring.
> > > > > 
> > > > > I'm not sure that there is a way to do this that is correct in the
> > > > > generic case.  It's possible that the destination could be a VM while
> > > > > packets are being mirrored to a physical device or we could be
> > > > > multicasting or some other arbitrarily complex scenario.  Just think
> > > > > of what a physical switch would do if it has ports with two different
> > > > > speeds.
> > > > 
> > > > Yes, I have considered that case. And I agree that perhaps there
> > > > is no sensible default. But perhaps we could make it configurable somehow?
> > > 
> > > The fix is at the application level. Run netperf with -b and -w flags to
> > > limit the speed to a sensible value.
> > 
> > Perhaps I should have stated my goals more clearly.
> > I'm interested in situations where I don't control the application.
> 
> Well an application that streams UDP without any throttling
> at the application level will break on a physical network, right?
> So I am not sure why one should try to make it work on the virtual one.
> 
> But let's assume that you do want to throttle the guest
> for reasons such as QOS. The proper approach seems
> to be to throttle the sender, not have a dummy throttled
> receiver "pacing" it. Place the qemu process in the
> correct net_cls cgroup, set the class id and apply a rate limit?

I would like to be able to use a class to rate limit egress packets.
That much works fine for me.
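Something like the following is the kind of setup I mean (the interface
name, class ids, rate and cgroup mount point are assumptions; net_cls
classids are hex, so tc class 1:10 corresponds to 0x00010010):

```shell
# HTB root with a rate-limited class on the physical interface.
tc qdisc add dev eth1 root handle 1: htb
tc class add dev eth1 parent 1: classid 1:10 htb rate 100mbit

# Classify packets according to the sender's net_cls cgroup.
tc filter add dev eth1 parent 1: protocol ip handle 1: cgroup

# Put the qemu process into a net_cls cgroup mapped to class 1:10.
mkdir -p /sys/fs/cgroup/net_cls/guest0
echo 0x00010010 > /sys/fs/cgroup/net_cls/guest0/net_cls.classid
echo $QEMU_PID > /sys/fs/cgroup/net_cls/guest0/tasks
```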

What I would also like is for there to be back-pressure such that the guest
doesn't consume lots of CPU, spinning, sending packets as fast as it can,
almost all of which are dropped. That does seem like a lot of wasted
CPU to me.

Unfortunately there are several problems with this and I am fast concluding
that I will need to use a CPU cgroup. That does make some sense, as what I
am really trying to limit here is CPU usage, not network packet rates - even
if the test using the CPU is netperf.  So long as the CPU usage can
(mostly) be attributed to the guest, using a cgroup should work fine.  And
indeed it seems to in my limited testing.
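That is, something along these lines (the cgroup mount point and share
value are assumptions; note that cpu.shares is a relative weight, so it
only constrains the guest when there is contention for the CPU):

```shell
# Reduce the relative CPU weight of the guest's qemu process
# using the cpu cgroup (the default weight is 1024).
mkdir -p /sys/fs/cgroup/cpu/guest0
echo 128 > /sys/fs/cgroup/cpu/guest0/cpu.shares

# Move the qemu process into the group.
echo $QEMU_PID > /sys/fs/cgroup/cpu/guest0/tasks
```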

One scenario in which I don't think it is possible for there to be
back-pressure in a meaningful sense is if root in the guest sets
/proc/sys/net/core/wmem_default to a large value, say 2000000.


I do think that to some extent there is back-pressure provided by the sockbuf
in the case where a process on the host is sending directly to a physical
interface.  And to my mind it would be "nice" if the same kind of
back-pressure was present in guests.  But through our discussions of the
past week or so I get the feeling that is not your view of things.

Perhaps I could characterise the guest situation by saying:
	Egress packet rates can be controlled using tc on the host;
	Guest CPU usage can be controlled using CPU cgroups on the host;
	Sockbuf controls memory usage on the host;
	Back-pressure is irrelevant.


Thread overview: 40+ messages
2011-01-06  9:33 Flow Control and Port Mirroring Revisited Simon Horman
2011-01-06 10:22 ` Eric Dumazet
2011-01-06 12:44   ` Simon Horman
2011-01-06 13:28     ` Eric Dumazet
2011-01-06 22:01       ` Simon Horman
2011-01-06 22:38     ` Jesse Gross
2011-01-07  1:23       ` Simon Horman
2011-01-10  9:31         ` Simon Horman
2011-01-13  6:47           ` Simon Horman
2011-01-13 15:45             ` Jesse Gross
2011-01-13 23:41               ` Simon Horman
2011-01-14  4:58                 ` Michael S. Tsirkin
2011-01-14  6:35                   ` Simon Horman
2011-01-14  6:54                     ` Michael S. Tsirkin
2011-01-16 22:37                       ` Simon Horman [this message]
2011-01-16 23:56                         ` Rusty Russell
2011-01-17 10:38                           ` Michael S. Tsirkin
2011-01-17 10:26                         ` Michael S. Tsirkin
2011-01-18 19:41                           ` Rick Jones
2011-01-18 20:13                             ` Michael S. Tsirkin
2011-01-18 21:28                               ` Rick Jones
2011-01-19  9:11                               ` Simon Horman
2011-01-20  8:38                             ` Simon Horman
2011-01-21  2:30                               ` Rick Jones
2011-01-21  9:59                               ` Michael S. Tsirkin
2011-01-21 18:04                                 ` Rick Jones
2011-01-21 23:11                                 ` Simon Horman
2011-01-22 21:57                                   ` Michael S. Tsirkin
2011-01-23  6:38                                     ` Simon Horman
2011-01-23 10:39                                       ` Michael S. Tsirkin
2011-01-23 13:53                                         ` Simon Horman
2011-01-24 18:27                                         ` Rick Jones
2011-01-24 18:36                                           ` Michael S. Tsirkin
2011-01-24 19:01                                             ` Rick Jones
2011-01-24 19:42                                               ` Michael S. Tsirkin
2011-01-06 10:27 ` Michael S. Tsirkin
2011-01-06 11:30   ` Simon Horman
2011-01-06 12:07     ` Michael S. Tsirkin
2011-01-06 12:29       ` Simon Horman
2011-01-06 12:47         ` Michael S. Tsirkin
