From mboxrd@z Thu Jan 1 00:00:00 1970
From: Eric Dumazet
Subject: Re: surprising memory request
Date: Fri, 18 Jan 2013 09:46:30 -0800
Message-ID: <1358531190.11051.402.camel@edumazet-glaptop>
References: <20130118085818.147220.FMU5901@air.gr8dns.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Cc: netdev@vger.kernel.org, David Woodhouse
To: Dirk Hohndel, Jason Wang
Return-path:
Received: from mail-pb0-f52.google.com ([209.85.160.52]:37766 "EHLO mail-pb0-f52.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751542Ab3ARRqc (ORCPT ); Fri, 18 Jan 2013 12:46:32 -0500
Received: by mail-pb0-f52.google.com with SMTP id ro2so2159077pbb.11 for ; Fri, 18 Jan 2013 09:46:32 -0800 (PST)
In-Reply-To: <20130118085818.147220.FMU5901@air.gr8dns.org>
Sender: netdev-owner@vger.kernel.org
List-ID:

On Fri, 2013-01-18 at 08:58 -0800, Dirk Hohndel wrote:
> Running openconnect on a very recent 3.8 (a few commits before Linus cut
> RC4) I get this allocation failure. I'm unclear why we would need 128
> contiguous pages here...
>
> /D
>
> [66015.673818] openconnect: page allocation failure: order:7, mode:0x10c0d0
> [66015.673827] Pid: 3292, comm: openconnect Tainted: G W 3.8.0-rc3-00352-gdfdebc2 #94
> [66015.673830] Call Trace:
> [66015.673841] [] warn_alloc_failed+0xe9/0x140
> [66015.673849] [] ? on_each_cpu_mask+0x87/0xa0
> [66015.673854] [] __alloc_pages_nodemask+0x579/0x720
> [66015.673859] [] __get_free_pages+0x17/0x50
> [66015.673866] [] kmalloc_order_trace+0x39/0xf0
> [66015.673874] [] ? __hw_addr_add_ex+0x78/0xc0
> [66015.673879] [] __kmalloc+0xc8/0x180
> [66015.673883] [] ? dev_addr_init+0x66/0x90
> [66015.673889] [] alloc_netdev_mqs+0x145/0x300
> [66015.673896] [] ? tun_net_fix_features+0x20/0x20
> [66015.673902] [] __tun_chr_ioctl+0xd0a/0xec0
> [66015.673908] [] tun_chr_ioctl+0x13/0x20
> [66015.673913] [] do_vfs_ioctl+0x97/0x530
> [66015.673917] [] ? kmem_cache_free+0x33/0x170
> [66015.673923] [] ? final_putname+0x26/0x50
> [66015.673927] [] sys_ioctl+0x91/0xb0
> [66015.673935] [] system_call_fastpath+0x16/0x1b
> [66015.673938] Mem-Info:

That's because Jason thought that the tun device had to have an insane
number of queues to get good performance.

#define MAX_TAP_QUEUES 1024

That's crazy if your machine has, say, 8 cpus.

And Jason didn't adapt the memory allocations done in
alloc_netdev_mqs() to fall back to vmalloc() when kmalloc() fails.

commit c8d68e6be1c3b242f1c598595830890b65cea64a
Author: Jason Wang
Date:   Wed Oct 31 19:46:00 2012 +0000

    tuntap: multiqueue support

    This patch converts tun/tap to a multiqueue device and exposes the
    multiple queues as multiple file descriptors to userspace.
    Internally, each tun_file is abstracted as a queue, and an array of
    pointers to tun_file structures is stored in the tun_struct device
    structure, so multiple tun_files can be attached to the device as
    multiple queues. When choosing a txq, we first try to identify a flow
    through its rxhash; if there is no such flow, we try the recorded rxq
    and use that to choose the transmit queue. This policy may be changed
    in the future.

    Signed-off-by: Jason Wang
    Signed-off-by: David S. Miller
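
For reference, most of the order-7 (128-page, 512KB) request likely comes
from the per-queue arrays: with MAX_TAP_QUEUES = 1024, alloc_netdev_mqs()
has to allocate roughly 1024 * sizeof(struct netdev_queue), a few hundred
bytes per queue, in one physically contiguous kmalloc() chunk. Below is a
minimal sketch of the vmalloc() fallback being suggested; the helper names
are made up for illustration and this is not the actual alloc_netdev_mqs()
code.

/*
 * Sketch only: try a physically contiguous allocation first, and fall
 * back to vmalloc() when the page allocator cannot satisfy the large
 * order.  Hypothetical helpers, not the real netdev code.
 */
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *queue_array_alloc(size_t size)
{
	void *p;

	/* __GFP_NOWARN: skip the allocation-failure splat, we have a fallback */
	p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!p)
		p = vzalloc(size);	/* virtually contiguous is enough here */
	return p;
}

static void queue_array_free(void *p)
{
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}

Later kernels grew kvzalloc()/kvfree() helpers for exactly this
try-kmalloc-then-vmalloc pattern.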