From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>, netdev@vger.kernel.org
Subject: Re: tun mq failure
Date: Wed, 23 Jan 2013 13:41:02 +0200
Message-ID: <20130123114102.GA10426@redhat.com>
In-Reply-To: <20130123110640.GC7005@order.stressinduktion.org>

On Wed, Jan 23, 2013 at 12:06:40PM +0100, Hannes Frederic Sowa wrote:
> On Wed, Jan 23, 2013 at 12:05:16PM +0200, Michael S. Tsirkin wrote:
> > This happens when trying to start a VPN using an old openvpn binary, so MQ
> > is not set.
> > 
> > So
> > 1. I think we should limit allocation of MQ to when MQ flag is set in SETIFF.
> > 2. order 7 allocation is 2^7 pages - about half a megabyte of contiguous
> >    memory. This is quite likely to fail.
> >    Let's start with a small limit on number of queues, like 8?
> >    Then we know it will succeed.
> >    Longer term we might want to solve it differently.
> 
> This has come up before:
> http://thread.gmane.org/gmane.linux.network/255647/focus=255902
> 
> I think a solution to this problem is still outstanding.

Right. What (at least I) missed is that it's the
queue array allocation that fails here.
So I think something like the following will sort out the first issue
(compile-tested only):

For the second issue, for 3.8 the prudent thing to do is probably
to set MAX_TAP_QUEUES to a small value, like 8, so that userspace
does not come to rely on a large number of queues being available,
and to look at a better way to do this longer term, such as
using an array of pointers.
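
For reference, rough arithmetic on where the order-7 allocation comes
from (a sketch only; the exact size of struct netdev_queue depends on
config and cacheline alignment, and this assumes MAX_TAP_QUEUES is
still 1024 here):

/*
 * alloc_netdev_mqs() ends up kcalloc()ing one struct netdev_queue per
 * tx queue.  With the structure cacheline-aligned at a few hundred
 * bytes per entry:
 *
 *   1024 queues * ~320 bytes ~= 320 KiB, which the allocator rounds
 *   up to 512 KiB = 128 pages = order 7
 *
 * An order-7 contiguous allocation is quite likely to fail once memory
 * is fragmented; with a cap of 8 queues the same array fits
 * comfortably within a single page.
 */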

--->

tun: don't waste memory on unused queues

If the IFF_MULTI_QUEUE flag is off, we never attach more than one
queue, so don't allocate memory for the unused ones.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

---

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index af372d0..813d303 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1577,6 +1577,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 	else {
 		char *name;
 		unsigned long flags = 0;
+		unsigned int max_tap_queues;
 
 		if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
 			return -EPERM;
@@ -1599,9 +1600,13 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 		if (*ifr->ifr_name)
 			name = ifr->ifr_name;
 
+		if (ifr->ifr_flags & IFF_MULTI_QUEUE)
+			max_tap_queues = MAX_TAP_QUEUES;
+		else
+			max_tap_queues = 1;
 		dev = alloc_netdev_mqs(sizeof(struct tun_struct), name,
 				       tun_setup,
-				       MAX_TAP_QUEUES, MAX_TAP_QUEUES);
+				       max_tap_queues, max_tap_queues);
 		if (!dev)
 			return -ENOMEM;
 

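For completeness, this is roughly what the userspace side looks like
(a sketch, not taken from openvpn): it is the IFF_MULTI_QUEUE bit
passed at TUNSETIFF time that the patch keys the queue allocation on,
so a binary that never sets it gets the single-queue sizing.

/* Hedged sketch of a TUNSETIFF caller; error handling trimmed. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

static int tun_open(const char *name, int multiqueue)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	/* TAP, no packet info header; request multiqueue only if the
	 * caller asked for it - otherwise the kernel can size the
	 * device for a single queue.
	 */
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
	if (multiqueue)
		ifr.ifr_flags |= IFF_MULTI_QUEUE;
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
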
Thread overview: 8+ messages
2013-01-23 10:05 tun mq failure Michael S. Tsirkin
2013-01-23 11:06 ` Hannes Frederic Sowa
2013-01-23 11:41   ` Michael S. Tsirkin [this message]
2013-01-23 12:10     ` Jason Wang
2013-01-23 13:18       ` Michael S. Tsirkin
2013-01-23 12:08   ` Jason Wang
2013-01-23 13:18     ` Michael S. Tsirkin
2013-01-23 13:20       ` Jason Wang
