From: Jiri Pirko <jiri@resnulli.us>
To: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>,
Eran Ben Elisha <eranbe@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
"David S. Miller" <davem@davemloft.net>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [net-next 10/16] net/mlx5: Support PCIe buffer congestion handling via Devlink
Date: Thu, 26 Jul 2018 09:14:33 +0200 [thread overview]
Message-ID: <20180726071433.GA2222@nanopsycho> (raw)
In-Reply-To: <20180725174359.6952c937@cakuba.netronome.com>
Thu, Jul 26, 2018 at 02:43:59AM CEST, jakub.kicinski@netronome.com wrote:
>On Wed, 25 Jul 2018 08:23:26 -0700, Alexander Duyck wrote:
>> On Wed, Jul 25, 2018 at 5:31 AM, Eran Ben Elisha wrote:
>> > On 7/24/2018 10:51 PM, Jakub Kicinski wrote:
>> >>>> The devlink params haven't been upstream even for a full cycle and
>> >>>> already you guys are starting to use them to configure standard
>> >>>> features like queuing.
>> >>>
>> >>> We developed the devlink params to support non-standard configuration
>> >>> only, and for non-standard configuration there are both generic and
>> >>> vendor-specific options.
>> >>
>> >> I thought it was developed for performing non-standard and possibly
>> >> vendor specific configuration. Look at DEVLINK_PARAM_GENERIC_* for
>> >> examples of well justified generic options for which we have no
>> >> other API. The vendor mlx4 options look fairly vendor specific if you
>> >> ask me, too.
>> >>
>> >> Configuring queuing has an API. The question is whether it is
>> >> acceptable to enter the risky territory of controlling offloads via
>> >> devlink parameters, or whether we would rather make vendors take the
>> >> time and effort to model things to (a subset of) existing APIs. The HW
>> >> never fits the APIs perfectly.
>> >
>> > I understand what you meant here; I would like to highlight that this
>> > mechanism was not meant to handle SRIOV, representors, etc.
>> > The vendor-specific configuration suggested here is to handle a congestion
>> > state in a Multi-Host environment (which includes a PF and multiple VFs per
>> > host), where one host is not aware of the other hosts, and each is running
>> > on its own PCI/driver. It is a device working-mode configuration.
>> >
>> > This couldn't fit into any existing API, so creating this unique
>> > vendor-specific API is needed.
>>
>> If we are just going to start creating devlink interfaces for every
>> one-off option a device wants to add, why did we even bother trying to
>> prevent drivers from using sysfs? This just feels like a return to the
>> same arguments we had back in the day.
>>
>> I feel like the bigger question here is if devlink is how we are going
>> to deal with all PCIe related features going forward, or should we
>> start looking at creating a new interface/tool for PCI/PCIe related
>> features? My concern is that we have already had features such as DMA
>> Coalescing that didn't really fit into anything and now we are
>> starting to see other things related to DMA and PCIe bus credits. I'm
>> wondering if we shouldn't start looking at a tool/interface to
>> configure all the PCIe related features such as interrupts, error
>> reporting, DMA configuration, power management, etc. Maybe we could
>> even look at sharing it across subsystems and include things like
>> storage, graphics, and other subsystems in the conversation.
>
>Agreed, for actual PCIe configuration (i.e. not ECN marking) we do need
>to build up an API. Sharing it across subsystems would be very cool!
I wonder how come there isn't such an API in place already. Or is there?
If there isn't, do you have any idea what it should look like? Should it be
an extension of the existing PCI uapi or something completely new?
It would probably be good to loop some PCI people in...
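For context on the DEVLINK_PARAM_GENERIC_* options mentioned above, here is a
minimal sketch of how a driver registers a generic devlink parameter. The
DEVLINK_PARAM_GENERIC() macro and devlink_params_register() are the in-tree
API from include/net/devlink.h; the example_* callbacks and the device they
would touch are hypothetical placeholders, not code from this patch set:

```c
/* Sketch only: example_sriov_get/set are hypothetical driver callbacks;
 * the macro and registration helper are the real devlink API.
 */
#include <net/devlink.h>

static int example_sriov_get(struct devlink *devlink, u32 id,
			     struct devlink_param_gset_ctx *ctx)
{
	/* Report the current device state back to userspace. */
	ctx->val.vbool = true;
	return 0;
}

static int example_sriov_set(struct devlink *devlink, u32 id,
			     struct devlink_param_gset_ctx *ctx)
{
	/* Apply ctx->val.vbool to the device here. */
	return 0;
}

static const struct devlink_param example_params[] = {
	/* Generic (cross-vendor) parameter, runtime-configurable. */
	DEVLINK_PARAM_GENERIC(ENABLE_SRIOV,
			      BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			      example_sriov_get, example_sriov_set, NULL),
};

/* Called from the driver's probe path after devlink_register(). */
static int example_register_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, example_params,
				       ARRAY_SIZE(example_params));
}
```

Such a parameter then shows up in userspace via the devlink tool, e.g.
"devlink dev param set pci/0000:01:00.0 name enable_sriov value true
cmode runtime".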
Thread overview: 38+ messages
2018-07-19 1:00 [pull request][net-next 00/16] Mellanox, mlx5e updates 2018-07-18 Saeed Mahameed
2018-07-19 1:00 ` [net-next 01/16] net/mlx5: FW tracer, implement tracer logic Saeed Mahameed
2018-07-19 1:00 ` [net-next 02/16] net/mlx5: FW tracer, create trace buffer and copy strings database Saeed Mahameed
2018-07-19 1:00 ` [net-next 03/16] net/mlx5: FW tracer, register log buffer memory key Saeed Mahameed
2018-07-19 1:00 ` [net-next 04/16] net/mlx5: FW tracer, events handling Saeed Mahameed
2018-07-19 1:00 ` [net-next 05/16] net/mlx5: FW tracer, parse traces and kernel tracing support Saeed Mahameed
2018-07-19 1:00 ` [net-next 06/16] net/mlx5: FW tracer, Enable tracing Saeed Mahameed
2018-07-19 1:00 ` [net-next 07/16] net/mlx5: FW tracer, Add debug prints Saeed Mahameed
2018-07-19 1:00 ` [net-next 08/16] net/mlx5: Move all devlink related functions calls to devlink.c Saeed Mahameed
2018-07-19 1:01 ` [net-next 09/16] net/mlx5: Add MPEGC register configuration functionality Saeed Mahameed
2018-07-19 1:01 ` [net-next 10/16] net/mlx5: Support PCIe buffer congestion handling via Devlink Saeed Mahameed
2018-07-19 1:49 ` Jakub Kicinski
2018-07-24 10:31 ` Eran Ben Elisha
2018-07-24 19:51 ` Jakub Kicinski
2018-07-25 12:31 ` Eran Ben Elisha
2018-07-25 15:23 ` Alexander Duyck
2018-07-26 0:43 ` Jakub Kicinski
2018-07-26 7:14 ` Jiri Pirko [this message]
2018-07-26 14:00 ` Alexander Duyck
2018-07-28 16:06 ` Bjorn Helgaas
2018-07-29 9:23 ` Moshe Shemesh
2018-07-29 22:00 ` Alexander Duyck
2018-07-30 14:07 ` Bjorn Helgaas
2018-07-30 15:02 ` Alexander Duyck
2018-07-30 22:00 ` Jakub Kicinski
2018-07-31 2:33 ` Bjorn Helgaas
2018-07-31 3:19 ` Alexander Duyck
2018-07-31 11:06 ` Bjorn Helgaas
2018-08-01 18:28 ` Moshe Shemesh
2018-07-19 8:24 ` Jiri Pirko
2018-07-19 8:49 ` Eran Ben Elisha
2018-07-19 1:01 ` [net-next 11/16] net/mlx5e: Set ECN for received packets using CQE indication Saeed Mahameed
2018-07-19 1:01 ` [net-next 12/16] net/mlx5e: Remove redundant WARN when we cannot find neigh entry Saeed Mahameed
2018-07-19 1:01 ` [net-next 13/16] net/mlx5e: Support offloading tc double vlan headers match Saeed Mahameed
2018-07-19 1:01 ` [net-next 14/16] net/mlx5e: Refactor tc vlan push/pop actions offloading Saeed Mahameed
2018-07-19 1:01 ` [net-next 15/16] net/mlx5e: Support offloading double vlan push/pop tc actions Saeed Mahameed
2018-07-19 1:01 ` [net-next 16/16] net/mlx5e: Use PARTIAL_GSO for UDP segmentation Saeed Mahameed
2018-07-23 21:35 ` [pull request][net-next 00/16] Mellanox, mlx5e updates 2018-07-18 Saeed Mahameed