* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  [not found] <533b5eff-49b6-16c3-9873-dda3fb05c3d4@solarflare.com>
@ 2018-02-27 17:38 ` David Miller
  2018-02-27 17:55 ` Edward Cree
  0 siblings, 1 reply; 15+ messages in thread
From: David Miller @ 2018-02-27 17:38 UTC (permalink / raw)
To: ecree; +Cc: linux-net-drivers, netdev, linville

Edward, none of these postings are making it to the list.

The problem is there are syntax errors in your email headers.

Any time a person's name contains a special character like ".",
that entire string must be enclosed in double quotes.

This is the case for "John W. Linville" so please add proper
quotes around such names and resend your patch series again.

Thank you.
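[Illustration of the quoting rule above; these header lines are examples,
not the ones from the original mail:

    Cc: John W. Linville <linville@tuxdriver.com>      <- malformed: "." in
                                                          an unquoted name
    Cc: "John W. Linville" <linville@tuxdriver.com>    <- valid (RFC 5322)
]
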
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-27 17:38 ` [PATCH RESEND net-next 0/2] ntuple filters with RSS David Miller
@ 2018-02-27 17:55 ` Edward Cree
  2018-02-27 19:28 ` John W. Linville
  0 siblings, 1 reply; 15+ messages in thread
From: Edward Cree @ 2018-02-27 17:55 UTC (permalink / raw)
To: David Miller; +Cc: linux-net-drivers, netdev, linville

On 27/02/18 17:38, David Miller wrote:
> The problem is there are syntax errors in your email headers.
>
> Any time a person's name contains a special character like ".",
> that entire string must be enclosed in double quotes.
>
> This is the case for "John W. Linville" so please add proper
> quotes around such names and resend your patch series again.
Thank you for spotting this!  I looked at the headers and failed
to notice anything wrong with them.
I'm surprised that git-imap-send doesn't check for this...

Will resend with that fixed.
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-27 17:55 ` Edward Cree
@ 2018-02-27 19:28 ` John W. Linville
  0 siblings, 0 replies; 15+ messages in thread
From: John W. Linville @ 2018-02-27 19:28 UTC (permalink / raw)
To: Edward Cree; +Cc: David Miller, linux-net-drivers, netdev

On Tue, Feb 27, 2018 at 05:55:51PM +0000, Edward Cree wrote:
> On 27/02/18 17:38, David Miller wrote:
> > The problem is there are syntax errors in your email headers.
> >
> > Any time a person's name contains a special character like ".",
> > that entire string must be enclosed in double quotes.
> >
> > This is the case for "John W. Linville" so please add proper
> > quotes around such names and resend your patch series again.
> Thank you for spotting this!  I looked at the headers and failed
> to notice anything wrong with them.
> I'm surprised that git-imap-send doesn't check for this...
>
> Will resend with that fixed.

Haha, sorry for indirectly causing this issue! If it helps, you can
leave off the "W." -- I'll still know it's for me... :-)

John
--
John W. Linville                Someday the world will need a hero, and you
linville@tuxdriver.com          might be all we have.  Be ready.
* [PATCH RESEND net-next 0/2] ntuple filters with RSS
@ 2018-02-27 17:59 Edward Cree
2018-02-27 23:47 ` Jakub Kicinski
2018-03-01 18:36 ` David Miller
0 siblings, 2 replies; 15+ messages in thread
From: Edward Cree @ 2018-02-27 17:59 UTC (permalink / raw)
To: linux-net-drivers, David Miller; +Cc: netdev, John W. Linville

This series introduces the ability to mark an ethtool steering filter to use
RSS spreading, and the ability to create and configure multiple RSS contexts
with different indirection tables, hash keys, and hash fields.
An implementation for the sfc driver (for 7000-series and later SFC NICs) is
included in patch 2/2.

The anticipated use case of this feature is for steering traffic destined for
a container (or virtual machine) to the subset of CPUs on which processes in
the container (or the VM's vCPUs) are bound, while retaining the scalability
of RSS spreading from the viewpoint inside the container.
The use of both a base queue number (ring_cookie) and indirection table is
intended to allow re-use of a single RSS context to target multiple sets of
CPUs. For instance, if an 8-core system is hosting three containers on CPUs
[1,2], [3,4] and [6,7], then a single RSS context with an equal-weight [0,1]
indirection table could be used to target all three containers by setting
ring_cookie to 1, 3 and 6 on the respective filters.
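
For illustration, the above scenario might be driven from userspace roughly
as follows (syntax assumes an ethtool(8) with support for this series; the
device name, addresses and reported context ID are invented):

    # Allocate an RSS context spreading equally over two queues -- the
    # equal-weight [0,1] indirection table described above:
    ethtool -X eth0 context new equal 2
    # Suppose this reports "New RSS context is 1".  The one context then
    # serves all three containers, with ring_cookie (action) selecting
    # the base queue:
    ethtool -N eth0 flow-type tcp4 dst-ip 10.1.0.2 context 1 action 1
    ethtool -N eth0 flow-type tcp4 dst-ip 10.2.0.2 context 1 action 3
    ethtool -N eth0 flow-type tcp4 dst-ip 10.3.0.2 context 1 action 6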
Edward Cree (2):
  net: ethtool: extend RXNFC API to support RSS spreading of filter
    matches
  sfc: support RSS spreading of ethtool ntuple filters

 drivers/net/ethernet/sfc/ef10.c       | 273 ++++++++++++++++++++++------------
 drivers/net/ethernet/sfc/efx.c        |  65 +++++++-
 drivers/net/ethernet/sfc/efx.h        |  12 +-
 drivers/net/ethernet/sfc/ethtool.c    | 153 ++++++++++++++++---
 drivers/net/ethernet/sfc/farch.c      |  11 +-
 drivers/net/ethernet/sfc/filter.h     |   7 +-
 drivers/net/ethernet/sfc/net_driver.h |  44 +++++-
 drivers/net/ethernet/sfc/nic.h        |   2 -
 drivers/net/ethernet/sfc/siena.c      |  26 ++--
 include/linux/ethtool.h               |   5 +
 include/uapi/linux/ethtool.h          |  32 +++-
 net/core/ethtool.c                    |  64 ++++++--
 12 files changed, 523 insertions(+), 171 deletions(-)

* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-27 17:59 Edward Cree
@ 2018-02-27 23:47 ` Jakub Kicinski
  2018-02-28  1:24 ` Alexander Duyck
  2018-03-01 18:36 ` David Miller
  1 sibling, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2018-02-27 23:47 UTC (permalink / raw)
To: Edward Cree
Cc: linux-net-drivers, David Miller, netdev, John W. Linville,
    Or Gerlitz, Alexander Duyck

On Tue, 27 Feb 2018 17:59:12 +0000, Edward Cree wrote:
> This series introduces the ability to mark an ethtool steering filter to use
> RSS spreading, and the ability to create and configure multiple RSS contexts
> with different indirection tables, hash keys, and hash fields.
> An implementation for the sfc driver (for 7000-series and later SFC NICs) is
> included in patch 2/2.
>
> The anticipated use case of this feature is for steering traffic destined for
> a container (or virtual machine) to the subset of CPUs on which processes in
> the container (or the VM's vCPUs) are bound, while retaining the scalability
> of RSS spreading from the viewpoint inside the container.
> The use of both a base queue number (ring_cookie) and indirection table is
> intended to allow re-use of a single RSS context to target multiple sets of
> CPUs.  For instance, if an 8-core system is hosting three containers on CPUs
> [1,2], [3,4] and [6,7], then a single RSS context with an equal-weight [0,1]
> indirection table could be used to target all three containers by setting
> ring_cookie to 1, 3 and 6 on the respective filters.

Please, let's stop extending ethtool_rx_flow APIs.  I bit my tongue
when Intel was adding their "redirection to VF" based on ethtool ntuples
and look now they're adding the same functionality with flower :|  And
wonder how to handle two interfaces doing the same thing.

On the use case itself, I wonder how much sense that makes.  Can your
hardware not tag the packet as well so you could then mux it to
something like macvlan offload?

CC: Alex, Or
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-27 23:47 ` Jakub Kicinski
@ 2018-02-28  1:24 ` Alexander Duyck
  2018-03-02 15:24 ` Edward Cree
  0 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2018-02-28  1:24 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Edward Cree, linux-net-drivers, David Miller, netdev,
    John W. Linville, Or Gerlitz, Alexander Duyck

On Tue, Feb 27, 2018 at 3:47 PM, Jakub Kicinski <kubakici@wp.pl> wrote:
> On Tue, 27 Feb 2018 17:59:12 +0000, Edward Cree wrote:
>> This series introduces the ability to mark an ethtool steering filter to use
>> RSS spreading, and the ability to create and configure multiple RSS contexts
>> with different indirection tables, hash keys, and hash fields.
>> An implementation for the sfc driver (for 7000-series and later SFC NICs) is
>> included in patch 2/2.
>>
>> The anticipated use case of this feature is for steering traffic destined for
>> a container (or virtual machine) to the subset of CPUs on which processes in
>> the container (or the VM's vCPUs) are bound, while retaining the scalability
>> of RSS spreading from the viewpoint inside the container.
>> The use of both a base queue number (ring_cookie) and indirection table is
>> intended to allow re-use of a single RSS context to target multiple sets of
>> CPUs.  For instance, if an 8-core system is hosting three containers on CPUs
>> [1,2], [3,4] and [6,7], then a single RSS context with an equal-weight [0,1]
>> indirection table could be used to target all three containers by setting
>> ring_cookie to 1, 3 and 6 on the respective filters.
>
> Please, let's stop extending ethtool_rx_flow APIs.  I bit my tongue
> when Intel was adding their "redirection to VF" based on ethtool ntuples
> and look now they're adding the same functionality with flower :|  And
> wonder how to handle two interfaces doing the same thing.
>
> On the use case itself, I wonder how much sense that makes.  Can your
> hardware not tag the packet as well so you could then mux it to
> something like macvlan offload?
>
> CC: Alex, Or

We did something like this for i40e. Basically we required creating
the queue groups using mqprio to keep them symmetric on Tx and Rx, and
then allowed for TC ingress filters to redirect traffic to those queue
groups.

- Alex
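[A sketch of the i40e flow Alex describes, for reference -- the device,
addresses and queue counts are invented; see the i40e documentation for
the authoritative syntax:

    # Carve the queues into two symmetric Tx/Rx groups with mqprio
    tc qdisc add dev eth0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 \
        queues 4@0 4@4 hw 1 mode channel
    # Redirect a matching flow to the second queue group in hardware
    tc filter add dev eth0 parent ffff: protocol ip prio 1 flower \
        dst_ip 192.168.1.1 ip_proto tcp dst_port 80 skip_sw hw_tc 1
]
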
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-28  1:24 ` Alexander Duyck
@ 2018-03-02 15:24 ` Edward Cree
  2018-03-02 18:55 ` Jakub Kicinski
  0 siblings, 1 reply; 15+ messages in thread
From: Edward Cree @ 2018-03-02 15:24 UTC (permalink / raw)
To: Alexander Duyck, Jakub Kicinski
Cc: linux-net-drivers, David Miller, netdev, John W. Linville,
    Or Gerlitz, Alexander Duyck

On Tue, Feb 27, 2018 at 3:47 PM, Jakub Kicinski <kubakici@wp.pl> wrote:
> Please, let's stop extending ethtool_rx_flow APIs.  I bit my tongue
> when Intel was adding their "redirection to VF" based on ethtool ntuples
> and look now they're adding the same functionality with flower :|  And
> wonder how to handle two interfaces doing the same thing.
Since sfc only supports ethtool NFC interfaces (we have no flower support,
and I also wonder how one is to support both of those interfaces without
producing an ugly mess), I'd much rather put this in ethtool than have to
implement all of flower just so we can have this extension.
I guess part of the question is, which other drivers besides us would want
to implement something like this, and what are their requirements?

> On the use case itself, I wonder how much sense that makes.  Can your
> hardware not tag the packet as well so you could then mux it to
> something like macvlan offload?
In practice the only way our hardware can "tag the packet" is by the
selection of RX queue.  So you could for instance give a container its
own RX queues (rather than just using the existing RX queues on the
appropriate CPUs), and maybe in future hook those queues up to l2fwd
offload somehow.
But that seems like a separate job (offloading the macvlan switching) to
what this series is about (making the RX processing happen on the right
CPUs).  Is software macvlan switching really noticeably slow, anyway?

Besides, more powerful filtering than just MAC addr might be needed, if,
for instance, the container network is encapsulated.  In that case
something like a UDP 4-tuple filter might be necessary (or, indeed, a
filter looking at the VNID (VxLAN TNI) - which our hardware can do but
ethtool doesn't currently have a way to specify).  AFAICT l2-fwd-offload
can only be used for straight MAC addr, not for overlay networks like
VxLAN or FOU?  At least, existing ndo_dfwd_add_station() implementations
don't seem to check that dev is a macvlan...  Does it even support
VLAN filters?  fm10k implementation doesn't seem to.
Anyway, like I say, filtering traffic onto its own queues seems to be
orthogonal, or at least separate, to binding those queues into an
upperdev for demux offload.

On 28/02/18 01:24, Alexander Duyck wrote:
> We did something like this for i40e.  Basically we required creating
> the queue groups using mqprio to keep them symmetric on Tx and Rx, and
> then allowed for TC ingress filters to redirect traffic to those queue
> groups.
>
> - Alex
If we're not doing macvlan offload, I'm not sure what, if anything, the
TX side would buy us.  So for now it seems to make sense for TX just to
use the TXQ associated with the CPU from which the TX originates, which
I believe already happens automatically.

-Ed
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-02 15:24 ` Edward Cree
@ 2018-03-02 18:55 ` Jakub Kicinski
  2018-03-02 23:24 ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2018-03-02 18:55 UTC (permalink / raw)
To: Edward Cree
Cc: Alexander Duyck, linux-net-drivers, David Miller, netdev,
    John W. Linville, Or Gerlitz, Alexander Duyck

On Fri, 2 Mar 2018 15:24:29 +0000, Edward Cree wrote:
> On Tue, Feb 27, 2018 at 3:47 PM, Jakub Kicinski <kubakici@wp.pl> wrote:
>
> > Please, let's stop extending ethtool_rx_flow APIs.  I bit my tongue
> > when Intel was adding their "redirection to VF" based on ethtool ntuples
> > and look now they're adding the same functionality with flower :|  And
> > wonder how to handle two interfaces doing the same thing.
> Since sfc only supports ethtool NFC interfaces (we have no flower support,
> and I also wonder how one is to support both of those interfaces without
> producing an ugly mess), I'd much rather put this in ethtool than have to
> implement all of flower just so we can have this extension.

"Just this one extension" is exactly the attitude that can lead to
messy APIs :(

> I guess part of the question is, which other drivers besides us would want
> to implement something like this, and what are their requirements?

I think every vendor is trying to come up with ways to make their HW
work with containers better these days.

> > On the use case itself, I wonder how much sense that makes.  Can your
> > hardware not tag the packet as well so you could then mux it to
> > something like macvlan offload?
> In practice the only way our hardware can "tag the packet" is by the
> selection of RX queue.  So you could for instance give a container its
> own RX queues (rather than just using the existing RX queues on the
> appropriate CPUs), and maybe in future hook those queues up to l2fwd
> offload somehow.
> But that seems like a separate job (offloading the macvlan switching) to
> what this series is about (making the RX processing happen on the right
> CPUs).  Is software macvlan switching really noticeably slow, anyway?

OK, thanks for clarifying.

> Besides, more powerful filtering than just MAC addr might be needed, if,
> for instance, the container network is encapsulated.  In that case
> something like a UDP 4-tuple filter might be necessary (or, indeed, a
> filter looking at the VNID (VxLAN TNI) - which our hardware can do but
> ethtool doesn't currently have a way to specify).  AFAICT l2-fwd-offload
> can only be used for straight MAC addr, not for overlay networks like
> VxLAN or FOU?  At least, existing ndo_dfwd_add_station() implementations
> don't seem to check that dev is a macvlan...  Does it even support
> VLAN filters?  fm10k implementation doesn't seem to.

Exactly!  One can come up with many protocol combinations which flower
already has APIs for...  ethtool is not the place for it.

> Anyway, like I say, filtering traffic onto its own queues seems to be
> orthogonal, or at least separate, to binding those queues into an
> upperdev for demux offload.

It is, I was just trying to broaden the scope to more capable HW so we
design APIs that would serve all.

> On 28/02/18 01:24, Alexander Duyck wrote:
>
> > We did something like this for i40e.  Basically we required creating
> > the queue groups using mqprio to keep them symmetric on Tx and Rx, and
> > then allowed for TC ingress filters to redirect traffic to those queue
> > groups.
> >
> > - Alex
> If we're not doing macvlan offload, I'm not sure what, if anything, the
> TX side would buy us.  So for now it seems to make sense for TX just to
> use the TXQ associated with the CPU from which the TX originates, which
> I believe already happens automatically.

I don't think that's what Alex was referring to.  Please see
commit e284fc280473 ("i40e: Add and delete cloud filter") for
instance :)
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-02 18:55 ` Jakub Kicinski
@ 2018-03-02 23:24 ` Alexander Duyck
  0 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-03-02 23:24 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Edward Cree, linux-net-drivers, David Miller, netdev,
    John W. Linville, Or Gerlitz, Alexander Duyck

On Fri, Mar 2, 2018 at 10:55 AM, Jakub Kicinski <kubakici@wp.pl> wrote:
> On Fri, 2 Mar 2018 15:24:29 +0000, Edward Cree wrote:
>> On Tue, Feb 27, 2018 at 3:47 PM, Jakub Kicinski <kubakici@wp.pl> wrote:
>>
>> > Please, let's stop extending ethtool_rx_flow APIs.  I bit my tongue
>> > when Intel was adding their "redirection to VF" based on ethtool ntuples
>> > and look now they're adding the same functionality with flower :|  And
>> > wonder how to handle two interfaces doing the same thing.
>> Since sfc only supports ethtool NFC interfaces (we have no flower support,
>> and I also wonder how one is to support both of those interfaces without
>> producing an ugly mess), I'd much rather put this in ethtool than have to
>> implement all of flower just so we can have this extension.
>
> "Just this one extension" is exactly the attitude that can lead to
> messy APIs :(
>
>> I guess part of the question is, which other drivers besides us would want
>> to implement something like this, and what are their requirements?
>
> I think every vendor is trying to come up with ways to make their HW
> work with containers better these days.
>
>> > On the use case itself, I wonder how much sense that makes.  Can your
>> > hardware not tag the packet as well so you could then mux it to
>> > something like macvlan offload?
>> In practice the only way our hardware can "tag the packet" is by the
>> selection of RX queue.  So you could for instance give a container its
>> own RX queues (rather than just using the existing RX queues on the
>> appropriate CPUs), and maybe in future hook those queues up to l2fwd
>> offload somehow.
>> But that seems like a separate job (offloading the macvlan switching) to
>> what this series is about (making the RX processing happen on the right
>> CPUs).  Is software macvlan switching really noticeably slow, anyway?
>
> OK, thanks for clarifying.
>
>> Besides, more powerful filtering than just MAC addr might be needed, if,
>> for instance, the container network is encapsulated.  In that case
>> something like a UDP 4-tuple filter might be necessary (or, indeed, a
>> filter looking at the VNID (VxLAN TNI) - which our hardware can do but
>> ethtool doesn't currently have a way to specify).  AFAICT l2-fwd-offload
>> can only be used for straight MAC addr, not for overlay networks like
>> VxLAN or FOU?  At least, existing ndo_dfwd_add_station() implementations
>> don't seem to check that dev is a macvlan...  Does it even support
>> VLAN filters?  fm10k implementation doesn't seem to.
>
> Exactly!  One can come up with many protocol combinations which flower
> already has APIs for...  ethtool is not the place for it.
>
>> Anyway, like I say, filtering traffic onto its own queues seems to be
>> orthogonal, or at least separate, to binding those queues into an
>> upperdev for demux offload.
>
> It is, I was just trying to broaden the scope to more capable HW so we
> design APIs that would serve all.
>
>> On 28/02/18 01:24, Alexander Duyck wrote:
>>
>> > We did something like this for i40e.  Basically we required creating
>> > the queue groups using mqprio to keep them symmetric on Tx and Rx, and
>> > then allowed for TC ingress filters to redirect traffic to those queue
>> > groups.
>> >
>> > - Alex
>> If we're not doing macvlan offload, I'm not sure what, if anything, the
>> TX side would buy us.  So for now it seems to make sense for TX just to
>> use the TXQ associated with the CPU from which the TX originates, which
>> I believe already happens automatically.
>
> I don't think that's what Alex was referring to.  Please see
> commit e284fc280473 ("i40e: Add and delete cloud filter") for
> instance :)

Right. And as far as the Tx queue association goes right now we are
basing things off of skb->priority which is easily controlled via
cgroups. So in theory you could associate a given set of cgroups with
a specific set of Tx queues using this approach. Most of the filtering
that Jakub pointed out is applied to the Rx side to make sure the
packets come in on the right queue set.

- Alex
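[For illustration, the skb->priority control Alex mentions, via the
net_prio cgroup -- paths and values invented:

    # Processes in this cgroup send with skb->priority 5 on eth0, which
    # the mqprio "map" then steers to a Tx queue group:
    mkdir /sys/fs/cgroup/net_prio/container1
    echo "eth0 5" > /sys/fs/cgroup/net_prio/container1/net_prio.ifpriomap
    echo $PID > /sys/fs/cgroup/net_prio/container1/tasks
]
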
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-02-27 17:59 Edward Cree
  2018-02-27 23:47 ` Jakub Kicinski
@ 2018-03-01 18:36 ` David Miller
  2018-03-02 16:01 ` Edward Cree
  1 sibling, 1 reply; 15+ messages in thread
From: David Miller @ 2018-03-01 18:36 UTC (permalink / raw)
To: ecree; +Cc: linux-net-drivers, netdev, linville

From: Edward Cree <ecree@solarflare.com>
Date: Tue, 27 Feb 2018 17:59:12 +0000

> This series introduces the ability to mark an ethtool steering filter to use
> RSS spreading, and the ability to create and configure multiple RSS contexts
> with different indirection tables, hash keys, and hash fields.
> An implementation for the sfc driver (for 7000-series and later SFC NICs) is
> included in patch 2/2.
>
> The anticipated use case of this feature is for steering traffic destined for
> a container (or virtual machine) to the subset of CPUs on which processes in
> the container (or the VM's vCPUs) are bound, while retaining the scalability
> of RSS spreading from the viewpoint inside the container.
> The use of both a base queue number (ring_cookie) and indirection table is
> intended to allow re-use of a single RSS context to target multiple sets of
> CPUs.  For instance, if an 8-core system is hosting three containers on CPUs
> [1,2], [3,4] and [6,7], then a single RSS context with an equal-weight [0,1]
> indirection table could be used to target all three containers by setting
> ring_cookie to 1, 3 and 6 on the respective filters.

We really should have the ethtool interfaces under deep freeze until we
convert it to netlink or similar.

Second, this is a real hackish way to extend ethtool with new
semantics.  A structure changes layout based upon a flag bit setting
in an earlier member?  Yikes...

Lastly, there has been feedback asking how practical and useful this
facility actually is, and you must address that.
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-01 18:36 ` David Miller
@ 2018-03-02 16:01 ` Edward Cree
  2018-03-02 17:49 ` David Riddoch
  2018-03-07 15:24 ` David Miller
  0 siblings, 2 replies; 15+ messages in thread
From: Edward Cree @ 2018-03-02 16:01 UTC (permalink / raw)
To: David Miller; +Cc: linux-net-drivers, netdev, linville

On 01/03/18 18:36, David Miller wrote:
> We really should have the ethtool interfaces under deep freeze until we
> convert it to netlink or similar.
> Second, this is a real hackish way to extend ethtool with new
> semantics.  A structure changes layout based upon a flag bit setting
> in an earlier member?  Yikes...
Yeah, while I'm reasonably confident it's ABI-compatible (presence of that
flag in the past should always have led to drivers complaining they didn't
recognise it), and it is somewhat similar to the existing FLOW_EXT flag,
it is indeed rather ugly.  This is the only way I could see to do it
without adding a whole new command number, which I felt might also be
contentious (see: deep freeze) but is probably a better approach.

> Lastly, there has been feedback asking how practical and useful this
> facility actually is, and you must address that.
According to our marketing folks, there is end-user demand for this feature
or something like it.  I didn't see any arguments why this isn't useful,
just that other things might be useful too.  (Also, sorry it took me so
long to address their feedback, but I had to do a bit of background
reading before I could understand what Jakub was suggesting.)

-Ed
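[The layout under discussion, sketched -- the authoritative definition is
in patch 1/2; the FLOW_RSS flag name is assumed from the series:

    struct ethtool_rxnfc {
            __u32                           cmd;
            __u32                           flow_type;
            __u64                           data;
            struct ethtool_rx_flow_spec     fs;
            union {
                    __u32                   rule_cnt;
                    /* interpreted as an RSS context ID only when
                     * FLOW_RSS is set in fs.flow_type */
                    __u32                   rss_context;
            };
            __u32                           rule_locs[0];
    };
]
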
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-02 16:01 ` Edward Cree
@ 2018-03-02 17:49 ` David Riddoch
  0 siblings, 0 replies; 15+ messages in thread
From: David Riddoch @ 2018-03-02 17:49 UTC (permalink / raw)
To: Edward Cree, David Miller; +Cc: linux-net-drivers, netdev, linville

>> Lastly, there has been feedback asking how practical and useful this
>> facility actually is, and you must address that.
> According to our marketing folks, there is end-user demand for this feature
> or something like it.

The main benefit comes on NUMA systems, when you have high-throughput
applications or containers on multiple NUMA nodes.  Using RSS without
steering gives poor efficiency because traffic is often not received on
the same node as the application.  With flow steering to a single queue
you can get a bottleneck, as all traffic for a TCP/UDP port or container
goes to one core.  ARFS doesn't scale to large numbers of flows.

This feature allows the admin to ensure packets are received on the same
NUMA node as the application (improving efficiency) and avoids the single
core bottleneck.

David
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-02 16:01 ` Edward Cree
  2018-03-02 17:49 ` David Riddoch
@ 2018-03-07 15:24 ` David Miller
  2018-03-07 15:40 ` Edward Cree
  1 sibling, 1 reply; 15+ messages in thread
From: David Miller @ 2018-03-07 15:24 UTC (permalink / raw)
To: ecree; +Cc: linux-net-drivers, netdev, linville

From: Edward Cree <ecree@solarflare.com>
Date: Fri, 2 Mar 2018 16:01:47 +0000

> On 01/03/18 18:36, David Miller wrote:
>> We really should have the ethtool interfaces under deep freeze until we
>> convert it to netlink or similar.
>> Second, this is a real hackish way to extend ethtool with new
>> semantics.  A structure changes layout based upon a flag bit setting
>> in an earlier member?  Yikes...
> Yeah, while I'm reasonably confident it's ABI-compatible (presence of that
> flag in the past should always have led to drivers complaining they didn't
> recognise it), and it is somewhat similar to the existing FLOW_EXT flag,
> it is indeed rather ugly.  This is the only way I could see to do it
> without adding a whole new command number, which I felt might also be
> contentious (see: deep freeze) but is probably a better approach.
>
>> Lastly, there has been feedback asking how practical and useful this
>> facility actually is, and you must address that.
> According to our marketing folks, there is end-user demand for this feature
> or something like it.  I didn't see any arguments why this isn't useful,
> just that other things might be useful too.  (Also, sorry it took me so
> long to address their feedback, but I had to do a bit of background
> reading before I could understand what Jakub was suggesting.)

Ok.

Since nobody is really working on the ethtool --> devlink/netlink conversion,
it really isn't reasonable for me to block useful changes like yours.

So please resubmit this series and I will apply it.

Thanks.
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-07 15:24 ` David Miller
@ 2018-03-07 15:40 ` Edward Cree
  2018-03-07 20:55 ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: Edward Cree @ 2018-03-07 15:40 UTC (permalink / raw)
To: David Miller; +Cc: linux-net-drivers, netdev, linville

On 07/03/18 15:24, David Miller wrote:
> Ok.
>
> Since nobody is really working on the ethtool --> devlink/netlink conversion,
> it really isn't reasonable for me to block useful changes like yours.
>
> So please resubmit this series and I will apply it.
>
> Thanks.
Ok, thanks.  Should I stick with the hackish union-and-flag-bit, or define a
new ethtool command number for the extended command?
* Re: [PATCH RESEND net-next 0/2] ntuple filters with RSS
  2018-03-07 15:40 ` Edward Cree
@ 2018-03-07 20:55 ` David Miller
  0 siblings, 0 replies; 15+ messages in thread
From: David Miller @ 2018-03-07 20:55 UTC (permalink / raw)
To: ecree; +Cc: linux-net-drivers, netdev, linville

From: Edward Cree <ecree@solarflare.com>
Date: Wed, 7 Mar 2018 15:40:39 +0000

> On 07/03/18 15:24, David Miller wrote:
>> Ok.
>>
>> Since nobody is really working on the ethtool --> devlink/netlink conversion,
>> it really isn't reasonable for me to block useful changes like yours.
>>
>> So please resubmit this series and I will apply it.
>>
>> Thanks.
> Ok, thanks.  Should I stick with the hackish union-and-flag-bit, or define a
> new ethtool command number for the extended command?

I'd say stick with the union-and-flag-bit hack.