From: Ido Shamai <idos@dev.mellanox.co.il>
To: Sathya Perla <Sathya.Perla@Emulex.Com>,
Yuval Mintz <yuvalmin@broadcom.com>,
Or Gerlitz <ogerlitz@mellanox.com>,
Or Gerlitz <or.gerlitz@gmail.com>
Cc: Amir Vadai <amirv@mellanox.com>,
"David S. Miller" <davem@davemloft.net>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Eugenia Emantayev <eugenia@mellanox.com>,
Ido Shamay <idos@mellanox.com>
Subject: Re: [PATCH net-next 2/2] net/mlx4: Revert "mlx4: set maximal number of default RSS queues"
Date: Wed, 15 Jan 2014 14:49:32 +0200
Message-ID: <52D683DC.8070202@dev.mellanox.co.il>
In-Reply-To: <CF9D1877D81D214CB0CA0669EFAE020C26B83E12@CMEXMB1.ad.emulex.com>
On 1/15/2014 2:46 PM, Sathya Perla wrote:
>> -----Original Message-----
>> From: netdev-owner@vger.kernel.org [mailto:netdev-owner@vger.kernel.org] On Behalf
>> Of Ido Shamai
>>
>> On 1/2/2014 12:27 PM, Yuval Mintz wrote:
>>>>>> Going back to your original commit 16917b87a "net-next: Add
>>>>>> netif_get_num_default_rss_queues", I am still not clear on the following:
>>>>>>
>>>>>> 1. why do we want a common default for all MQ devices?
>>>>> Although networking benefits from multiple interrupt vectors
>>>>> (enabling more rings, better performance, etc.), bounding this
>>>>> number only by the number of CPUs is unreasonable, as it strains
>>>>> system resources; e.g., consider a 40-CPU server - we might wish
>>>>> to have 40 vectors per device, but that means that connecting
>>>>> several devices to the same server might cause other functions
>>>>> to fail probe, as they would no longer be able to acquire interrupt
>>>>> vectors of their own.
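
For reference, the helper that commit introduced boils down to clamping the
online-CPU count at a fixed constant. Roughly, from memory (see
net/core/dev.c for the authoritative version, this is only a sketch):

#include <linux/cpumask.h>
#include <linux/kernel.h>

/* Rough recollection of what 16917b87a added; not a verbatim quote. */
#define DEFAULT_MAX_NUM_RSS_QUEUES	(8)

int netif_get_num_default_rss_queues(void)
{
	return min_t(int, DEFAULT_MAX_NUM_RSS_QUEUES, num_online_cpus());
}
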
>>>>
>>>> Modern servers with tens of CPUs typically have thousands of MSI-X
>>>> vectors, which means you should easily be able to plug four cards into a
>>>> server with 64 cores and consume only 256 of the 1-4K vectors out
>>>> there. Anyway, let me continue with your approach - how about raising the
>>>> default hard limit to 16, or making it the number of cores at the NUMA
>>>> node where the card is plugged in?
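
A NUMA-local default could look something like the sketch below; this is
purely illustrative, and default_rss_queues_for() is a made-up name, not an
existing helper:

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/numa.h>
#include <linux/topology.h>

/* Illustrative sketch: size the default by the cores of the NUMA node the
 * device is attached to, falling back to the current min(8, online CPUs)
 * cap when the device has no node affinity. */
static unsigned int default_rss_queues_for(struct device *dev)
{
	int node = dev_to_node(dev);

	if (node == NUMA_NO_NODE)
		return min_t(unsigned int, 8, num_online_cpus());

	return cpumask_weight(cpumask_of_node(node));
}
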
>>>
>>> I think an additional issue was memory consumption -
>>> additional interrupts --> additional allocated memory (for Rx rings).
>>> And I do know the issues were real - we've had complaints about devices
>>> failing to load due to lack of resources (not all servers in the world are
>>> state of the art).
>>>
>>> Anyway, I believe 8/16 are simply arbitrary limits with no real meaning;
>>> to judge what's more important, default `slimness' or default performance,
>>> is beyond me.
>>> Perhaps the NUMA approach will prove beneficial (and will make some sense).
>>
>> After reviewing all that was said, I feel there is no need to force this
>> strict, essentially meaningless limitation on vendors.
>>
>> The reverted commit you applied forces the driver to use at most 8 rings at
>> all times, with no way to change that at runtime using ethtool, because the
>> limit is enforced on the PCI driver at module init (restarting the en driver
>> with a different number of requested rings has no effect).
>> So this revert is crucial for performance-oriented applications using mlx4_en.
>
> The number of RSS/RX rings used by a driver can be increased (up to the HW supported value)
> at runtime using the set-channels ethtool interface.
Not in this case; see my comment above: the limit is enforced on the PCI
driver at module init.
In our case the set-channels interface cannot lift this limitation; it only
allows changes up to it.
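
To illustrate why (names below are made up, this is not the mlx4 code): a
set-channels request is validated against the maximum that get_channels
reports, and that maximum comes from the interrupt vectors the PCI driver
already allocated at module init, so ethtool can only move within that cap,
never above it.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct foo_priv {
	unsigned int num_comp_vectors;	/* fixed when the PCI driver probed */
	unsigned int rx_ring_num;	/* current number of RX rings */
};

static void foo_get_channels(struct net_device *dev,
			     struct ethtool_channels *ch)
{
	struct foo_priv *priv = netdev_priv(dev);

	ch->max_rx = priv->num_comp_vectors;
	ch->rx_count = priv->rx_ring_num;
}

static int foo_set_channels(struct net_device *dev,
			    struct ethtool_channels *ch)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* Can never exceed what was allocated at module init. */
	if (ch->rx_count > priv->num_comp_vectors)
		return -EINVAL;

	priv->rx_ring_num = ch->rx_count;
	return 0;
}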