* NIC emulation with built-in rate limiting?
From: Rick Jones @ 2012-09-11 18:07 UTC
To: netdev, kvm; +Cc: Lee Schermerhorn, Brian Haley
Are there NIC emulations in the kernel with built-in rate limiting? Or
is that supposed to be strictly the province of qdiscs/filters?
I've been messing about with netperf in a VM using virtio_net, with
rate limiting applied to the VM's corresponding vnetN interface on the
host - rate policing on vnetN ingress (the VM's outbound traffic) and
HTB on vnetN egress (the VM's inbound traffic).
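For reference, the host-side plumbing is along the following lines (the
vnetN name and the exact burst/priority values shown here are only
illustrative):

  # VM outbound: police traffic arriving on the host from the VM
  tc qdisc add dev vnet0 handle ffff: ingress
  tc filter add dev vnet0 parent ffff: protocol ip prio 50 u32 \
     match ip src 0.0.0.0/0 \
     police rate 800mbit burst 200k drop flowid :1

  # VM inbound: shape traffic leaving the host toward the VM
  tc qdisc add dev vnet0 root handle 1: htb default 1
  tc class add dev vnet0 parent 1: classid 1:1 htb rate 800mbit ceil 800mbit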
Looking at the "demo mode" output of netperf and a VM throttled to 800
Mbit/s in each direction I see that inbound to the VM is quite steady
over time - right at about 800 Mbit/s. However, looking at that same
sort of data for outbound from the VM shows considerable variability
ranging anywhere from 700 to 900 Mbit/s (though the bulk of the
variability is clustered more like 750 to 850.
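The "demo mode" interim results come from netperf's -D option; a
hypothetical pair of runs from inside the VM, netserver address
assumed, would look like:

  netperf -H 192.168.122.1 -l 60 -D 1.0 -t TCP_STREAM   # VM outbound
  netperf -H 192.168.122.1 -l 60 -D 1.0 -t TCP_MAERTS   # VM inbound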
I was thinking that part of the reason may stem from the lack of
direct feedback to the VM, since the policing is on the vnetN
interface. I wondered whether it might be "better" if the VM's
outbound rate were constrained not by an ingress policing filter on
the vnetN interface, but by the host/hypervisor/emulator portion of
the NIC and how quickly it pulls packets from the tx queue. That
would allow the queue which builds up to be in the VM itself, and
would more accurately represent what a "real NIC" of that bandwidth
would do.
happy benchmarking,
rick jones
* Re: NIC emulation with built-in rate limiting?
From: Gregory Carter @ 2012-09-11 19:05 UTC
To: Rick Jones; +Cc: netdev, kvm, Lee Schermerhorn, Brian Haley
You cannot model TCP/IP accurately in a KVM VM environment.
There is too much background machination going on to make that plausible.
I would use a small network with actual hardware for the testing model.
If you are doing bandwidth sharing, or multi-channel work with
stochastic-queuing qdiscs, you will have to use the actual gear in
place and test and tweak there.
However, you can model various protocols using single-channel qdiscs
fairly well - certainly well enough to use the data to direct your
build-outs.
Modeling application behavior works pretty well if you are simply
limiting bandwidth with single-channel qdiscs - for example,
discovering the lowest acceptable transmission rates for VoIP traffic.
I have had really good success testing various codecs with
single-channel, rate-limited qdiscs to answer questions about latency
and bandwidth/quality issues in the transmission of audio/video,
yielding numbers that reveal useful behavior in the design-planning
phase of network services.
May I suggest allocating one channel per qdisc.
Also, you have to strip the machine down if you want accurate results.
Do not have X running, or anything other than the virtual machines
required as part of your testing process. Strip the process list on
the testing gear down to only the VMs and the virtual network in question.
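On an Ubuntu-style test host that might mean something like the
following (service names assumed; adjust for your distribution):

  service lightdm stop                       # no X / display manager
  for s in cron atd irqbalance; do service $s stop; done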
The lower you go with the VMs' network connections, the more chaotic
and useless your numbers are going to be. In certain situations, if
you strip your test bed down far enough, you can predict how certain
kernel processes will affect your monitoring and screen those out of
the data sets.
By the way, for many of these questions I use stripped-down,
source-built kernels, since I turn off a lot of the extras such as I/O
queuing and scheduling - specifically building kernels for running
complex point-to-point virtualized VM networks with as little
background noise as I can get.
After a while, if you standardize your network setup, you can screen
out a lot of the background noise and get some useful answers as to
how applications and bandwidth-limited endpoints will fare.
-gc
* Re: NIC emulation with built-in rate limiting?
From: Rick Jones @ 2012-09-17 21:48 UTC
To: Gregory Carter; +Cc: netdev, kvm, Lee Schermerhorn, Brian Haley
So, while the question concerns the "stability" of how things get
plumbed for a VM, and whether moving some of that into the NIC
emulation might help :) I've gone ahead and re-run the experiment on
bare iron. This time, just for kicks, I used a 50 Mbit/s throttle
inbound and outbound. The results can be seen in:
ftp://ftp.netperf.org/50_mbits.tgz
Since this is now bare iron, inbound is ingress and outbound is egress.
That is reversed from the VM situation, where VM outbound traverses
the ingress filter and VM inbound traverses the egress qdisc.
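So the 50 Mbit/s throttle on each system is of this general form (NIC
name and burst value here are only illustrative):

  # inbound: ingress policing
  tc qdisc add dev eth2 handle ffff: ingress
  tc filter add dev eth2 parent ffff: protocol ip prio 50 u32 \
     match ip src 0.0.0.0/0 \
     police rate 50mbit burst 50k drop flowid :1

  # outbound: egress shaping with HTB
  tc qdisc add dev eth2 root handle 1: htb default 1
  tc class add dev eth2 parent 1: classid 1:1 htb rate 50mbit ceil 50mbit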
Both systems were running Ubuntu 12.04.1 with 3.2.0-26 kernels; there
was plenty of CPU horsepower (2x E5-2680 in this case), and the network
between them was 10GbE using their 530FLB LOMs (BCM 57810S), connected
via a ProCurve 6120 10GbE switch. That simply happened to be the most
convenient bare-iron hardware I had on hand as one of the cobbler's
children. There was no X running on the systems; the only thing of
note running on them was netperf.
So, is the comparative instability between inbound and outbound
fundamentally inherent in using ingress policing, or is it more a
matter of "Silly Rick, you should be using <these settings> instead"?
If the former, is it then worthwhile to try to have NIC emulation only
pull from the VM at the emulated rate, to keep the queues in the VM
where it can react to them more directly? And are there any NIC
emulations doing that already (as virtio does not seem to at present)?
happy benchmarking,
rick jones