* [PATCH RFC v2 net-next 0/4] Add tx push buf len param to ethtool
@ 2023-03-02 20:30 Shay Agroskin
2023-03-02 20:30 ` [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-02 20:30 UTC (permalink / raw)
To: David Miller, Jakub Kicinski, netdev
Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
Changed since v1:
- Added the new ethtool param to generic netlink specs
- Dropped dynamic advertisement of tx push buf support in ENA.
The driver will advertise it for all platforms.
This patchset adds a new sub-configuration to ethtool get/set queue
params (ethtool -g) called 'tx-push-buf-len'.
This configuration specifies the maximum number of bytes of a
transmitted packet a driver can push directly to the underlying
device ('push' mode). Pushing some of the bytes to the device has the
advantages of:
- Allowing a smart device to take fast actions based on the packet's
header
- Reducing latency for small packets that can be copied completely into
the device
This new param is practically similar to the tx-copybreak value that can
be set using ethtool's tunable interface, but it serves a different
purpose conceptually. While tx-copybreak is used to reduce the overhead
of DMA mapping and makes no sense to use if less than the whole segment
gets copied, tx-push-buf-len makes it possible to improve performance by
letting the device analyze the packet's data (usually its headers)
before performing the DMA operation.
The configuration can be queried and set using the commands:
$ ethtool -g [interface]
# ethtool -G [interface] tx-push-buf-len [number of bytes]
This patchset also adds support for the new configuration in the ENA
driver, for which this parameter ensures efficient resource management
on the device side.
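To make the tx-copybreak vs. tx-push-buf-len distinction concrete, here is a
small sketch of the two policies. It is purely illustrative: the function name,
thresholds, and data model are hypothetical and do not appear in ethtool or any
driver; it only mirrors the behavior described above (copybreak copies a whole
small packet to skip DMA mapping, push-buf-len always pushes up to N leading
bytes so the device can act on headers early).

```python
# Hypothetical sketch, not driver code:
# - tx-copybreak: copy the WHOLE packet into a pre-mapped buffer
#   (skipping DMA mapping), only when the packet fits entirely.
# - tx-push-buf-len: push up to N leading bytes (typically headers) to
#   the device so it can start processing before the payload DMA fetch.

def tx_plan(pkt_len: int, tx_copybreak: int, tx_push_buf_len: int) -> dict:
    copied = pkt_len <= tx_copybreak          # all-or-nothing copy
    pushed = min(pkt_len, tx_push_buf_len)    # leading bytes only
    return {
        "copy_whole_packet": copied,
        "pushed_bytes": pushed,
        "dma_mapped_bytes": 0 if copied else pkt_len - pushed,
    }

# A 1500-byte packet exceeds a 256-byte copybreak, so only the first
# tx_push_buf_len bytes are pushed and the rest is DMA-mapped.
plan = tx_plan(pkt_len=1500, tx_copybreak=256, tx_push_buf_len=96)
print(plan["pushed_bytes"], plan["dma_mapped_bytes"])
```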
David Arinzon (1):
net: ena: Add an option to configure large LLQ headers
Shay Agroskin (3):
ethtool: Add support for configuring tx_push_buf_len
net: ena: Recalculate TX state variables every device reset
net: ena: Add support to changing tx_push_buf_len
Documentation/netlink/specs/ethtool.yaml | 8 +
Documentation/networking/ethtool-netlink.rst | 43 ++++--
drivers/net/ethernet/amazon/ena/ena_eth_com.h | 4 +
drivers/net/ethernet/amazon/ena/ena_ethtool.c | 51 ++++++-
drivers/net/ethernet/amazon/ena/ena_netdev.c | 138 ++++++++++++++----
drivers/net/ethernet/amazon/ena/ena_netdev.h | 15 +-
include/linux/ethtool.h | 14 +-
include/uapi/linux/ethtool_netlink.h | 2 +
net/ethtool/netlink.h | 2 +-
net/ethtool/rings.c | 28 +++-
10 files changed, 245 insertions(+), 60 deletions(-)
--
2.25.1
* [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len
2023-03-02 20:30 [PATCH RFC v2 net-next 0/4] Add tx push buf len param to ethtool Shay Agroskin
@ 2023-03-02 20:30 ` Shay Agroskin
2023-03-03 11:53 ` Simon Horman
2023-03-03 23:50 ` Jakub Kicinski
2023-03-02 20:30 ` [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers Shay Agroskin
` (2 subsequent siblings)
3 siblings, 2 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-02 20:30 UTC (permalink / raw)
To: David Miller, Jakub Kicinski, netdev
Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
This attribute, which is part of ethtool's ring param configuration,
allows the user to specify the maximum number of bytes of a packet's
payload that can be written directly to the device.
Example usage:
# ethtool -G [interface] tx-push-buf-len [number of bytes]
Co-developed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
Documentation/netlink/specs/ethtool.yaml | 8 ++++
Documentation/networking/ethtool-netlink.rst | 43 ++++++++++++--------
include/linux/ethtool.h | 14 +++++--
include/uapi/linux/ethtool_netlink.h | 2 +
net/ethtool/netlink.h | 2 +-
net/ethtool/rings.c | 28 ++++++++++++-
6 files changed, 74 insertions(+), 23 deletions(-)
diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
index 08b776908d15..244864c96feb 100644
--- a/Documentation/netlink/specs/ethtool.yaml
+++ b/Documentation/netlink/specs/ethtool.yaml
@@ -174,6 +174,12 @@ attribute-sets:
-
name: rx-push
type: u8
+ -
+ name: tx-push-buf-len
+ type: u32
+ -
+ name: tx-push-buf-len-max
+ type: u32
-
name: mm-stat
@@ -324,6 +330,8 @@ operations:
- cqe-size
- tx-push
- rx-push
+ - tx-push-buf-len
+ - tx-push-buf-len-max
dump: *ring-get-op
-
name: rings-set
diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
index e1bc6186d7ea..1aa09e7e8dcc 100644
--- a/Documentation/networking/ethtool-netlink.rst
+++ b/Documentation/networking/ethtool-netlink.rst
@@ -860,22 +860,24 @@ Request contents:
Kernel response contents:
- ==================================== ====== ===========================
- ``ETHTOOL_A_RINGS_HEADER`` nested reply header
- ``ETHTOOL_A_RINGS_RX_MAX`` u32 max size of RX ring
- ``ETHTOOL_A_RINGS_RX_MINI_MAX`` u32 max size of RX mini ring
- ``ETHTOOL_A_RINGS_RX_JUMBO_MAX`` u32 max size of RX jumbo ring
- ``ETHTOOL_A_RINGS_TX_MAX`` u32 max size of TX ring
- ``ETHTOOL_A_RINGS_RX`` u32 size of RX ring
- ``ETHTOOL_A_RINGS_RX_MINI`` u32 size of RX mini ring
- ``ETHTOOL_A_RINGS_RX_JUMBO`` u32 size of RX jumbo ring
- ``ETHTOOL_A_RINGS_TX`` u32 size of TX ring
- ``ETHTOOL_A_RINGS_RX_BUF_LEN`` u32 size of buffers on the ring
- ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` u8 TCP header / data split
- ``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
- ``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
- ``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
- ==================================== ====== ===========================
+ ======================================= ====== ===========================
+ ``ETHTOOL_A_RINGS_HEADER`` nested reply header
+ ``ETHTOOL_A_RINGS_RX_MAX`` u32 max size of RX ring
+ ``ETHTOOL_A_RINGS_RX_MINI_MAX`` u32 max size of RX mini ring
+ ``ETHTOOL_A_RINGS_RX_JUMBO_MAX`` u32 max size of RX jumbo ring
+ ``ETHTOOL_A_RINGS_TX_MAX`` u32 max size of TX ring
+ ``ETHTOOL_A_RINGS_RX`` u32 size of RX ring
+ ``ETHTOOL_A_RINGS_RX_MINI`` u32 size of RX mini ring
+ ``ETHTOOL_A_RINGS_RX_JUMBO`` u32 size of RX jumbo ring
+ ``ETHTOOL_A_RINGS_TX`` u32 size of TX ring
+ ``ETHTOOL_A_RINGS_RX_BUF_LEN`` u32 size of buffers on the ring
+ ``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` u8 TCP header / data split
+ ``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
+ ``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
+ ``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` u32 size of TX push buffer
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX`` u32 max size of TX push buffer
+ ======================================= ====== ===========================
``ETHTOOL_A_RINGS_TCP_DATA_SPLIT`` indicates whether the device is usable with
page-flipping TCP zero-copy receive (``getsockopt(TCP_ZEROCOPY_RECEIVE)``).
@@ -891,6 +893,14 @@ through MMIO writes, thus reducing the latency. However, enabling this feature
may increase the CPU cost. Drivers may enforce additional per-packet
eligibility checks (e.g. on packet size).
+``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` specifies the maximum number of bytes of a
+transmitted packet a driver can push directly to the underlying device
+('push' mode). Pushing some of the payload bytes to the device has the
+advantages of reducing latency for small packets by avoiding DMA mapping (same
+as ``ETHTOOL_A_RINGS_TX_PUSH`` parameter) as well as allowing the underlying
+device to process packet headers ahead of fetching its payload.
+This can help the device to make fast actions based on the packet's headers.
+
RINGS_SET
=========
@@ -908,6 +918,7 @@ Request contents:
``ETHTOOL_A_RINGS_CQE_SIZE`` u32 Size of TX/RX CQE
``ETHTOOL_A_RINGS_TX_PUSH`` u8 flag of TX Push mode
``ETHTOOL_A_RINGS_RX_PUSH`` u8 flag of RX Push mode
+ ``ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN`` u32 size of TX push buffer
==================================== ====== ===========================
Kernel checks that requested ring sizes do not exceed limits reported by
diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index 2792185dda22..798d35890118 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -75,6 +75,8 @@ enum {
* @tx_push: The flag of tx push mode
* @rx_push: The flag of rx push mode
* @cqe_size: Size of TX/RX completion queue event
+ * @tx_push_buf_len: Size of TX push buffer
+ * @tx_push_buf_max_len: Maximum allowed size of TX push buffer
*/
struct kernel_ethtool_ringparam {
u32 rx_buf_len;
@@ -82,6 +84,8 @@ struct kernel_ethtool_ringparam {
u8 tx_push;
u8 rx_push;
u32 cqe_size;
+ u32 tx_push_buf_len;
+ u32 tx_push_buf_max_len;
};
/**
@@ -90,12 +94,14 @@ struct kernel_ethtool_ringparam {
* @ETHTOOL_RING_USE_CQE_SIZE: capture for setting cqe_size
* @ETHTOOL_RING_USE_TX_PUSH: capture for setting tx_push
* @ETHTOOL_RING_USE_RX_PUSH: capture for setting rx_push
+ * @ETHTOOL_RING_USE_TX_PUSH_BUF_LEN: capture for setting tx_push_buf_len
*/
enum ethtool_supported_ring_param {
- ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0),
- ETHTOOL_RING_USE_CQE_SIZE = BIT(1),
- ETHTOOL_RING_USE_TX_PUSH = BIT(2),
- ETHTOOL_RING_USE_RX_PUSH = BIT(3),
+ ETHTOOL_RING_USE_RX_BUF_LEN = BIT(0),
+ ETHTOOL_RING_USE_CQE_SIZE = BIT(1),
+ ETHTOOL_RING_USE_TX_PUSH = BIT(2),
+ ETHTOOL_RING_USE_RX_PUSH = BIT(3),
+ ETHTOOL_RING_USE_TX_PUSH_BUF_LEN = BIT(4),
};
#define __ETH_RSS_HASH_BIT(bit) ((u32)1 << (bit))
diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
index d39ce21381c5..1ebf8d455f07 100644
--- a/include/uapi/linux/ethtool_netlink.h
+++ b/include/uapi/linux/ethtool_netlink.h
@@ -357,6 +357,8 @@ enum {
ETHTOOL_A_RINGS_CQE_SIZE, /* u32 */
ETHTOOL_A_RINGS_TX_PUSH, /* u8 */
ETHTOOL_A_RINGS_RX_PUSH, /* u8 */
+ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN, /* u32 */
+ ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX, /* u32 */
/* add new constants above here */
__ETHTOOL_A_RINGS_CNT,
diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h
index f7b189ed96b2..79424b34b553 100644
--- a/net/ethtool/netlink.h
+++ b/net/ethtool/netlink.h
@@ -413,7 +413,7 @@ extern const struct nla_policy ethnl_features_set_policy[ETHTOOL_A_FEATURES_WANT
extern const struct nla_policy ethnl_privflags_get_policy[ETHTOOL_A_PRIVFLAGS_HEADER + 1];
extern const struct nla_policy ethnl_privflags_set_policy[ETHTOOL_A_PRIVFLAGS_FLAGS + 1];
extern const struct nla_policy ethnl_rings_get_policy[ETHTOOL_A_RINGS_HEADER + 1];
-extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_RX_PUSH + 1];
+extern const struct nla_policy ethnl_rings_set_policy[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX + 1];
extern const struct nla_policy ethnl_channels_get_policy[ETHTOOL_A_CHANNELS_HEADER + 1];
extern const struct nla_policy ethnl_channels_set_policy[ETHTOOL_A_CHANNELS_COMBINED_COUNT + 1];
extern const struct nla_policy ethnl_coalesce_get_policy[ETHTOOL_A_COALESCE_HEADER + 1];
diff --git a/net/ethtool/rings.c b/net/ethtool/rings.c
index f358cd57d094..9c2c617ea5a7 100644
--- a/net/ethtool/rings.c
+++ b/net/ethtool/rings.c
@@ -57,7 +57,9 @@ static int rings_reply_size(const struct ethnl_req_info *req_base,
nla_total_size(sizeof(u8)) + /* _RINGS_TCP_DATA_SPLIT */
nla_total_size(sizeof(u32) + /* _RINGS_CQE_SIZE */
nla_total_size(sizeof(u8)) + /* _RINGS_TX_PUSH */
- nla_total_size(sizeof(u8))); /* _RINGS_RX_PUSH */
+ nla_total_size(sizeof(u8))) + /* _RINGS_RX_PUSH */
+ nla_total_size(sizeof(u32)) + /* _RINGS_TX_PUSH_BUF_LEN */
+ nla_total_size(sizeof(u32)); /* _RINGS_TX_PUSH_BUF_LEN_MAX */
}
static int rings_fill_reply(struct sk_buff *skb,
@@ -98,7 +100,11 @@ static int rings_fill_reply(struct sk_buff *skb,
(kr->cqe_size &&
(nla_put_u32(skb, ETHTOOL_A_RINGS_CQE_SIZE, kr->cqe_size))) ||
nla_put_u8(skb, ETHTOOL_A_RINGS_TX_PUSH, !!kr->tx_push) ||
- nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push))
+ nla_put_u8(skb, ETHTOOL_A_RINGS_RX_PUSH, !!kr->rx_push) ||
+ nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,
+ kr->tx_push_buf_max_len) ||
+ nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,
+ kr->tx_push_buf_len))
return -EMSGSIZE;
return 0;
@@ -117,6 +123,7 @@ const struct nla_policy ethnl_rings_set_policy[] = {
[ETHTOOL_A_RINGS_CQE_SIZE] = NLA_POLICY_MIN(NLA_U32, 1),
[ETHTOOL_A_RINGS_TX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1),
[ETHTOOL_A_RINGS_RX_PUSH] = NLA_POLICY_MAX(NLA_U8, 1),
+ [ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] = { .type = NLA_U32 },
};
static int
@@ -158,6 +165,14 @@ ethnl_set_rings_validate(struct ethnl_req_info *req_info,
return -EOPNOTSUPP;
}
+ if (tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN] &&
+ !(ops->supported_ring_params & ETHTOOL_RING_USE_TX_PUSH_BUF_LEN)) {
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+ "setting tx push buf len is not supported");
+ return -EOPNOTSUPP;
+ }
+
return ops->get_ringparam && ops->set_ringparam ? 1 : -EOPNOTSUPP;
}
@@ -189,6 +204,8 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
tb[ETHTOOL_A_RINGS_TX_PUSH], &mod);
ethnl_update_u8(&kernel_ringparam.rx_push,
tb[ETHTOOL_A_RINGS_RX_PUSH], &mod);
+ ethnl_update_u32(&kernel_ringparam.tx_push_buf_len,
+ tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN], &mod);
if (!mod)
return 0;
@@ -209,6 +226,13 @@ ethnl_set_rings(struct ethnl_req_info *req_info, struct genl_info *info)
return -EINVAL;
}
+ if (kernel_ringparam.tx_push_buf_len > kernel_ringparam.tx_push_buf_max_len) {
+ NL_SET_ERR_MSG_ATTR(info->extack, tb[ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN],
+ "Requested TX push buffer exceeds maximum");
+
+ return -EINVAL;
+ }
+
ret = dev->ethtool_ops->set_ringparam(dev, &ringparam,
&kernel_ringparam, info->extack);
return ret < 0 ? ret : 1;
--
2.25.1
* [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers
2023-03-02 20:30 [PATCH RFC v2 net-next 0/4] Add tx push buf len param to ethtool Shay Agroskin
2023-03-02 20:30 ` [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
@ 2023-03-02 20:30 ` Shay Agroskin
2023-03-03 11:51 ` Simon Horman
2023-03-02 20:30 ` [PATCH RFC v2 net-next 3/4] net: ena: Recalculate TX state variables every device reset Shay Agroskin
2023-03-02 20:30 ` [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
3 siblings, 1 reply; 14+ messages in thread
From: Shay Agroskin @ 2023-03-02 20:30 UTC (permalink / raw)
To: David Miller, Jakub Kicinski, netdev
Cc: David Arinzon, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Agroskin, Shay, Itzko, Shahar, Abboud, Osama
From: David Arinzon <darinzon@amazon.com>
Allow configuring the device with large LLQ headers. The Low Latency
Queue (LLQ) allows the driver to write the first N bytes of the packet,
along with the rest of the TX descriptors, directly into the device
(N can be either 96 or 224, the latter for the large LLQ headers
configuration).
Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
is required to get maximum performance from the device.
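The 96- and 224-byte limits follow from the LLQ entry sizes. A sketch of the
arithmetic, following the ENA_LLQ_* macros added in patch 4/4 of this series:
each LLQ entry reserves room for two TX descriptors, and the 16-byte descriptor
size used below is inferred from those macros (128 - 2*16 = 96), not stated
explicitly by the driver here.

```python
# Arithmetic behind the LLQ header limits (values inferred from the
# ENA_LLQ_* macros in patch 4/4; the descriptor size is an inference).
TX_DESC_SIZE = 16              # sizeof(struct ena_eth_io_tx_desc), inferred
DESC_CHUNK = 2 * TX_DESC_SIZE  # ENA_LLQ_ENTRY_DESC_CHUNK_SIZE

def llq_max_header(entry_size: int) -> int:
    """Bytes of packet headers that fit in one LLQ entry of entry_size bytes."""
    return entry_size - DESC_CHUNK

# 128B entries -> 96 header bytes; 256B (large) entries -> 224.
print(llq_max_header(128), llq_max_header(256))  # 96 224
```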
Signed-off-by: David Arinzon <darinzon@amazon.com>
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
drivers/net/ethernet/amazon/ena/ena_netdev.c | 100 ++++++++++++++-----
drivers/net/ethernet/amazon/ena/ena_netdev.h | 8 ++
2 files changed, 84 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index d3999db7c6a2..830d5be22aa9 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -44,6 +44,8 @@ static int ena_rss_init_default(struct ena_adapter *adapter);
static void check_for_admin_com_state(struct ena_adapter *adapter);
static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
static int ena_restore_device(struct ena_adapter *adapter);
+static void ena_calc_io_queue_size(struct ena_adapter *adapter,
+ struct ena_com_dev_get_features_ctx *get_feat_ctx);
static void ena_init_io_rings(struct ena_adapter *adapter,
int first_index, int count);
@@ -3387,13 +3389,30 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
return 0;
}
-static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
+static void set_default_llq_configurations(struct ena_adapter *adapter,
+ struct ena_llq_configurations *llq_config,
+ struct ena_admin_feature_llq_desc *llq)
{
+ struct ena_com_dev *ena_dev = adapter->ena_dev;
+
llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
- llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
- llq_config->llq_ring_entry_size_value = 128;
+
+ adapter->large_llq_header_supported =
+ !!(ena_dev->supported_features & (1 << ENA_ADMIN_LLQ));
+ adapter->large_llq_header_supported &=
+ !!(llq->entry_size_ctrl_supported &
+ ENA_ADMIN_LIST_ENTRY_SIZE_256B);
+
+ if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+ adapter->large_llq_header_enabled) {
+ llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_256B;
+ llq_config->llq_ring_entry_size_value = 256;
+ } else {
+ llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
+ llq_config->llq_ring_entry_size_value = 128;
+ }
}
static int ena_set_queues_placement_policy(struct pci_dev *pdev,
@@ -3412,6 +3431,13 @@ static int ena_set_queues_placement_policy(struct pci_dev *pdev,
return 0;
}
+ if (!ena_dev->mem_bar) {
+ netdev_err(ena_dev->net_device,
+ "LLQ is advertised as supported but device doesn't expose mem bar\n");
+ ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
+ return 0;
+ }
+
rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations);
if (unlikely(rc)) {
dev_err(&pdev->dev,
@@ -3427,15 +3453,8 @@ static int ena_map_llq_mem_bar(struct pci_dev *pdev, struct ena_com_dev *ena_dev
{
bool has_mem_bar = !!(bars & BIT(ENA_MEM_BAR));
- if (!has_mem_bar) {
- if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
- dev_err(&pdev->dev,
- "ENA device does not expose LLQ bar. Fallback to host mode policy.\n");
- ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
- }
-
+ if (!has_mem_bar)
return 0;
- }
ena_dev->mem_bar = devm_ioremap_wc(&pdev->dev,
pci_resource_start(pdev, ENA_MEM_BAR),
@@ -3447,10 +3466,11 @@ static int ena_map_llq_mem_bar(struct pci_dev *pdev, struct ena_com_dev *ena_dev
return 0;
}
-static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
+static int ena_device_init(struct ena_adapter *adapter, struct pci_dev *pdev,
struct ena_com_dev_get_features_ctx *get_feat_ctx,
bool *wd_state)
{
+ struct ena_com_dev *ena_dev = adapter->ena_dev;
struct ena_llq_configurations llq_config;
struct device *dev = &pdev->dev;
bool readless_supported;
@@ -3535,7 +3555,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
*wd_state = !!(aenq_groups & BIT(ENA_ADMIN_KEEP_ALIVE));
- set_default_llq_configurations(&llq_config);
+ set_default_llq_configurations(adapter, &llq_config, &get_feat_ctx->llq);
rc = ena_set_queues_placement_policy(pdev, ena_dev, &get_feat_ctx->llq,
&llq_config);
@@ -3544,6 +3564,8 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct pci_dev *pdev,
goto err_admin_init;
}
+ ena_calc_io_queue_size(adapter, get_feat_ctx);
+
return 0;
err_admin_init:
@@ -3587,7 +3609,8 @@ static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *adapter)
return rc;
}
-static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
+static
+void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
{
struct net_device *netdev = adapter->netdev;
struct ena_com_dev *ena_dev = adapter->ena_dev;
@@ -3633,7 +3656,8 @@ static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
}
-static int ena_restore_device(struct ena_adapter *adapter)
+static
+int ena_restore_device(struct ena_adapter *adapter)
{
struct ena_com_dev_get_features_ctx get_feat_ctx;
struct ena_com_dev *ena_dev = adapter->ena_dev;
@@ -3642,7 +3666,7 @@ static int ena_restore_device(struct ena_adapter *adapter)
int rc;
set_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
- rc = ena_device_init(ena_dev, adapter->pdev, &get_feat_ctx, &wd_state);
+ rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx, &wd_state);
if (rc) {
dev_err(&pdev->dev, "Can not initialize device\n");
goto err;
@@ -4175,6 +4199,15 @@ static void ena_calc_io_queue_size(struct ena_adapter *adapter,
u32 max_tx_queue_size;
u32 max_rx_queue_size;
+ /* If this function is called after driver load, the ring sizes have already
+ * been configured. Take it into account when recalculating ring size.
+ */
+ if (adapter->tx_ring->ring_size)
+ tx_queue_size = adapter->tx_ring->ring_size;
+
+ if (adapter->rx_ring->ring_size)
+ rx_queue_size = adapter->rx_ring->ring_size;
+
if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
struct ena_admin_queue_ext_feature_fields *max_queue_ext =
&get_feat_ctx->max_queue_ext.max_queue_ext;
@@ -4216,6 +4249,24 @@ static void ena_calc_io_queue_size(struct ena_adapter *adapter,
max_tx_queue_size = rounddown_pow_of_two(max_tx_queue_size);
max_rx_queue_size = rounddown_pow_of_two(max_rx_queue_size);
+ /* When forcing large headers, we multiply the entry size by 2, and therefore divide
+ * the queue size by 2, leaving the amount of memory used by the queues unchanged.
+ */
+ if (adapter->large_llq_header_enabled) {
+ if ((llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+ ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+ max_tx_queue_size /= 2;
+ dev_info(&adapter->pdev->dev,
+ "Forcing large headers and decreasing maximum TX queue size to %d\n",
+ max_tx_queue_size);
+ } else {
+ dev_err(&adapter->pdev->dev,
+ "Forcing large headers failed: LLQ is disabled or device does not support large headers\n");
+
+ adapter->large_llq_header_enabled = false;
+ }
+ }
+
tx_queue_size = clamp_val(tx_queue_size, ENA_MIN_RING_SIZE,
max_tx_queue_size);
rx_queue_size = clamp_val(rx_queue_size, ENA_MIN_RING_SIZE,
@@ -4312,18 +4363,18 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_drvdata(pdev, adapter);
- rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state);
+ rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
if (rc) {
- dev_err(&pdev->dev, "ENA device init failed\n");
- if (rc == -ETIME)
- rc = -EPROBE_DEFER;
+ dev_err(&pdev->dev, "ENA LLQ bar mapping failed\n");
goto err_netdev_destroy;
}
- rc = ena_map_llq_mem_bar(pdev, ena_dev, bars);
+ rc = ena_device_init(adapter, pdev, &get_feat_ctx, &wd_state);
if (rc) {
- dev_err(&pdev->dev, "ENA llq bar mapping failed\n");
- goto err_device_destroy;
+ dev_err(&pdev->dev, "ENA device init failed\n");
+ if (rc == -ETIME)
+ rc = -EPROBE_DEFER;
+ goto err_netdev_destroy;
}
/* Initial TX and RX interrupt delay. Assumes 1 usec granularity.
@@ -4333,7 +4384,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
ena_dev->intr_moder_rx_interval = ENA_INTR_INITIAL_RX_INTERVAL_USECS;
ena_dev->intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx);
- ena_calc_io_queue_size(adapter, &get_feat_ctx);
if (unlikely(!max_num_io_queues)) {
rc = -EFAULT;
goto err_device_destroy;
@@ -4366,6 +4416,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
"Failed to query interrupt moderation feature\n");
goto err_device_destroy;
}
+
ena_init_io_rings(adapter,
0,
adapter->xdp_num_queues +
@@ -4486,6 +4537,7 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
rtnl_lock(); /* lock released inside the below if-else block */
adapter->reset_reason = ENA_REGS_RESET_SHUTDOWN;
ena_destroy_device(adapter, true);
+
if (shutdown) {
netif_device_detach(netdev);
dev_close(netdev);
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 2cb141079474..3e8c4a66c7d8 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -334,6 +334,14 @@ struct ena_adapter {
u32 msg_enable;
+ /* large_llq_header_enabled is used for two purposes:
+ * 1. Indicates that large LLQ has been requested.
+ * 2. Indicates whether large LLQ is set or not after device
+ * initialization / configuration.
+ */
+ bool large_llq_header_enabled;
+ bool large_llq_header_supported;
+
u16 max_tx_sgl_size;
u16 max_rx_sgl_size;
--
2.25.1
* [PATCH RFC v2 net-next 3/4] net: ena: Recalculate TX state variables every device reset
2023-03-02 20:30 [PATCH RFC v2 net-next 0/4] Add tx push buf len param to ethtool Shay Agroskin
2023-03-02 20:30 ` [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
2023-03-02 20:30 ` [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers Shay Agroskin
@ 2023-03-02 20:30 ` Shay Agroskin
2023-03-03 11:53 ` Simon Horman
2023-03-02 20:30 ` [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
3 siblings, 1 reply; 14+ messages in thread
From: Shay Agroskin @ 2023-03-02 20:30 UTC (permalink / raw)
To: David Miller, Jakub Kicinski, netdev
Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
With the ability to modify the LLQ entry size, the size of the packet's
payload that can be written directly to the device can change.
This patch makes the driver recalculate this information on every
device negotiation (also called device reset).
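The resync this patch performs in ena_restore_device() can be modeled as
follows. This is a hedged sketch with a deliberately simplified data model: the
field names mirror the patch, but the classes and the reset flow around them
are hypothetical.

```python
# Hypothetical model of the per-ring resync added to ena_restore_device():
# after a reset renegotiates LLQ parameters, every TX ring's cached copy
# of the device-wide values must be refreshed.
from dataclasses import dataclass

@dataclass
class EnaDev:
    tx_mem_queue_type: str = "host"
    tx_max_header_size: int = 96

@dataclass
class TxRing:
    tx_mem_queue_type: str = "host"
    tx_max_header_size: int = 96

def resync_tx_rings(dev: EnaDev, tx_rings: list) -> None:
    # device init may have changed dev-wide values; propagate to all rings
    for txr in tx_rings:
        txr.tx_mem_queue_type = dev.tx_mem_queue_type
        txr.tx_max_header_size = dev.tx_max_header_size

# After a reset that switched to large LLQ headers:
dev = EnaDev(tx_mem_queue_type="dev", tx_max_header_size=224)
rings = [TxRing() for _ in range(3)]
resync_tx_rings(dev, rings)
print(all(r.tx_max_header_size == 224 for r in rings))  # True
```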
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
drivers/net/ethernet/amazon/ena/ena_netdev.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 830d5be22aa9..43e3c76bd6ae 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -3662,8 +3662,9 @@ int ena_restore_device(struct ena_adapter *adapter)
struct ena_com_dev_get_features_ctx get_feat_ctx;
struct ena_com_dev *ena_dev = adapter->ena_dev;
struct pci_dev *pdev = adapter->pdev;
+ struct ena_ring *txr;
+ int rc, count, i;
bool wd_state;
- int rc;
set_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags);
rc = ena_device_init(adapter, adapter->pdev, &get_feat_ctx, &wd_state);
@@ -3673,6 +3674,13 @@ int ena_restore_device(struct ena_adapter *adapter)
}
adapter->wd_state = wd_state;
+ count = adapter->xdp_num_queues + adapter->num_io_queues;
+ for (i = 0 ; i < count; i++) {
+ txr = &adapter->tx_ring[i];
+ txr->tx_mem_queue_type = ena_dev->tx_mem_queue_type;
+ txr->tx_max_header_size = ena_dev->tx_max_header_size;
+ }
+
rc = ena_device_validate_params(adapter, &get_feat_ctx);
if (rc) {
dev_err(&pdev->dev, "Validation of device parameters failed\n");
--
2.25.1
* [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len
2023-03-02 20:30 [PATCH RFC v2 net-next 0/4] Add tx push buf len param to ethtool Shay Agroskin
` (2 preceding siblings ...)
2023-03-02 20:30 ` [PATCH RFC v2 net-next 3/4] net: ena: Recalculate TX state variables every device reset Shay Agroskin
@ 2023-03-02 20:30 ` Shay Agroskin
2023-03-03 11:54 ` Simon Horman
2023-03-03 23:50 ` Jakub Kicinski
3 siblings, 2 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-02 20:30 UTC (permalink / raw)
To: David Miller, Jakub Kicinski, netdev
Cc: Shay Agroskin, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
The ENA driver allows two distinct values for the number of bytes of
the packet's payload that can be written directly to the device.
For a value of 224, the driver turns on Large LLQ Header mode, in which
the first 224 bytes of the packet's payload are written to the LLQ.
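The validation this implies can be sketched as below. This is a hedged,
simplified model of the check in ena_set_ringparam() from this patch: only the
two header sizes implied by the 128B/256B LLQ entry formats are accepted, and
the large value only when the device supports it (in the patch itself the
support check is folded into the advertised maximum; combining the two here is
a simplification).

```python
# Simplified model of the tx_push_buf_len validation in this patch.
# The constants follow the ENA_LLQ_* macros: entry size minus the
# 32-byte descriptor chunk (2 descriptors of 16 bytes each).
ENA_LLQ_HEADER = 128 - 32        # 96
ENA_LLQ_LARGE_HEADER = 256 - 32  # 224

def validate_tx_push_buf_len(value: int, large_llq_supported: bool) -> bool:
    """Return True iff the requested push buffer length is acceptable."""
    if value == ENA_LLQ_HEADER:
        return True
    return value == ENA_LLQ_LARGE_HEADER and large_llq_supported

print(validate_tx_push_buf_len(224, True),
      validate_tx_push_buf_len(224, False),
      validate_tx_push_buf_len(96, False))  # True False True
```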
Signed-off-by: Shay Agroskin <shayagr@amazon.com>
---
drivers/net/ethernet/amazon/ena/ena_eth_com.h | 4 ++
drivers/net/ethernet/amazon/ena/ena_ethtool.c | 51 +++++++++++++++++--
drivers/net/ethernet/amazon/ena/ena_netdev.c | 28 ++++++++--
drivers/net/ethernet/amazon/ena/ena_netdev.h | 7 +--
4 files changed, 78 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
index 689313ee25a8..8bec31fa816c 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
@@ -10,6 +10,10 @@
/* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */
#define ENA_COMP_HEAD_THRESH 4
+/* we allow 2 DMA descriptors per LLQ entry */
+#define ENA_LLQ_ENTRY_DESC_CHUNK_SIZE (2 * sizeof(struct ena_eth_io_tx_desc))
+#define ENA_LLQ_HEADER (128 - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
+#define ENA_LLQ_LARGE_HEADER (256 - ENA_LLQ_ENTRY_DESC_CHUNK_SIZE)
struct ena_com_tx_ctx {
struct ena_com_tx_meta ena_meta;
diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
index 8da79eedc057..a692e6c907ae 100644
--- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c
+++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c
@@ -476,6 +476,18 @@ static void ena_get_ringparam(struct net_device *netdev,
ring->tx_max_pending = adapter->max_tx_ring_size;
ring->rx_max_pending = adapter->max_rx_ring_size;
+ if (adapter->ena_dev->tx_mem_queue_type ==
+ ENA_ADMIN_PLACEMENT_POLICY_HOST)
+ goto no_llq_supported;
+
+ if (adapter->large_llq_header_supported)
+ kernel_ring->tx_push_buf_max_len = ENA_LLQ_LARGE_HEADER;
+ else
+ kernel_ring->tx_push_buf_max_len = ENA_LLQ_HEADER;
+
+ kernel_ring->tx_push_buf_len = adapter->ena_dev->tx_max_header_size;
+
+no_llq_supported:
ring->tx_pending = adapter->tx_ring[0].ring_size;
ring->rx_pending = adapter->rx_ring[0].ring_size;
}
@@ -486,7 +498,8 @@ static int ena_set_ringparam(struct net_device *netdev,
struct netlink_ext_ack *extack)
{
struct ena_adapter *adapter = netdev_priv(netdev);
- u32 new_tx_size, new_rx_size;
+ u32 new_tx_size, new_rx_size, new_tx_push_buf_len;
+ bool changed = false;
new_tx_size = ring->tx_pending < ENA_MIN_RING_SIZE ?
ENA_MIN_RING_SIZE : ring->tx_pending;
@@ -496,11 +509,40 @@ static int ena_set_ringparam(struct net_device *netdev,
ENA_MIN_RING_SIZE : ring->rx_pending;
new_rx_size = rounddown_pow_of_two(new_rx_size);
- if (new_tx_size == adapter->requested_tx_ring_size &&
- new_rx_size == adapter->requested_rx_ring_size)
+ changed |= new_tx_size != adapter->requested_tx_ring_size ||
+ new_rx_size != adapter->requested_rx_ring_size;
+
+ /* This value is ignored if LLQ is not supported */
+ new_tx_push_buf_len = 0;
+ if (adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
+ goto no_llq_supported;
+
+ new_tx_push_buf_len = kernel_ring->tx_push_buf_len;
+
+ /* support for ENA_LLQ_LARGE_HEADER is tested in the 'get' command */
+ if (new_tx_push_buf_len != ENA_LLQ_HEADER &&
+ new_tx_push_buf_len != ENA_LLQ_LARGE_HEADER) {
+ bool large_llq_sup = adapter->large_llq_header_supported;
+ char large_llq_size_str[40];
+
+ snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
+
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Only [%lu%s] tx push buff length values are supported",
+ ENA_LLQ_HEADER,
+ large_llq_sup ? large_llq_size_str : "");
+
+ return -EINVAL;
+ }
+
+ changed |= new_tx_push_buf_len != adapter->ena_dev->tx_max_header_size;
+
+no_llq_supported:
+ if (!changed)
return 0;
- return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
+ return ena_update_queue_params(adapter, new_tx_size, new_rx_size,
+ new_tx_push_buf_len);
}
static u32 ena_flow_hash_to_flow_type(u16 hash_fields)
@@ -900,6 +942,7 @@ static int ena_set_tunable(struct net_device *netdev,
static const struct ethtool_ops ena_ethtool_ops = {
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
+ .supported_ring_params = ETHTOOL_RING_USE_TX_PUSH_BUF_LEN,
.get_link_ksettings = ena_get_link_ksettings,
.get_drvinfo = ena_get_drvinfo,
.get_msglevel = ena_get_msglevel,
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index 43e3c76bd6ae..0625be4619a8 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -2811,11 +2811,13 @@ static int ena_close(struct net_device *netdev)
return 0;
}
-int ena_update_queue_sizes(struct ena_adapter *adapter,
- u32 new_tx_size,
- u32 new_rx_size)
+int ena_update_queue_params(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size,
+ u32 new_llq_header_len)
{
- bool dev_was_up;
+ bool dev_was_up, large_llq_changed = false;
+ int rc = 0;
dev_was_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags);
ena_close(adapter->netdev);
@@ -2825,7 +2827,23 @@ int ena_update_queue_sizes(struct ena_adapter *adapter,
0,
adapter->xdp_num_queues +
adapter->num_io_queues);
- return dev_was_up ? ena_up(adapter) : 0;
+
+ large_llq_changed = adapter->ena_dev->tx_mem_queue_type ==
+ ENA_ADMIN_PLACEMENT_POLICY_DEV;
+ large_llq_changed &=
+ new_llq_header_len != adapter->ena_dev->tx_max_header_size;
+
+ /* a check that the configuration is valid is done by caller */
+ if (!large_llq_changed)
+ goto if_up;
+
+ adapter->large_llq_header_enabled = !adapter->large_llq_header_enabled;
+
+ ena_destroy_device(adapter, false);
+ rc = ena_restore_device(adapter);
+
+if_up:
+ return dev_was_up && !rc ? ena_up(adapter) : 0;
}
int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak)
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
index 3e8c4a66c7d8..5a0d4ee76172 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
@@ -396,9 +396,10 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf);
int ena_update_hw_stats(struct ena_adapter *adapter);
-int ena_update_queue_sizes(struct ena_adapter *adapter,
- u32 new_tx_size,
- u32 new_rx_size);
+int ena_update_queue_params(struct ena_adapter *adapter,
+ u32 new_tx_size,
+ u32 new_rx_size,
+ u32 new_llq_header_len);
int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count);
--
2.25.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers
2023-03-02 20:30 ` [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers Shay Agroskin
@ 2023-03-03 11:51 ` Simon Horman
2023-03-06 9:27 ` Shay Agroskin
0 siblings, 1 reply; 14+ messages in thread
From: Simon Horman @ 2023-03-03 11:51 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, Jakub Kicinski, netdev, David Arinzon,
Woodhouse, David, Machulsky, Zorik, Matushevsky, Alexander,
Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara, Nafea,
Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
Kiyanovski, Arthur, Dagan, Noam, Itzko, Shahar, Abboud, Osama
On Thu, Mar 02, 2023 at 10:30:43PM +0200, Shay Agroskin wrote:
> From: David Arinzon <darinzon@amazon.com>
>
> Allow configuring the device with large LLQ headers. The Low Latency
> Queue (LLQ) allows the driver to write the first N bytes of the packet,
> along with the rest of the TX descriptors, directly into the device (N
> can be either 96 or 224 for the large LLQ headers configuration).
>
> Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
> is required to get maximum performance from the device.
>
> Signed-off-by: David Arinzon <darinzon@amazon.com>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Overall this looks very nice to me, it's a very interesting HW feature.
As this is an RFC I've made a few nit-picking comments inline.
Those notwithstanding,
Reviewed-by: Simon Horman <simon.horman@corigine.com>
> ---
> drivers/net/ethernet/amazon/ena/ena_netdev.c | 100 ++++++++++++++-----
> drivers/net/ethernet/amazon/ena/ena_netdev.h | 8 ++
> 2 files changed, 84 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> index d3999db7c6a2..830d5be22aa9 100644
> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> @@ -44,6 +44,8 @@ static int ena_rss_init_default(struct ena_adapter *adapter);
> static void check_for_admin_com_state(struct ena_adapter *adapter);
> static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
> static int ena_restore_device(struct ena_adapter *adapter);
> +static void ena_calc_io_queue_size(struct ena_adapter *adapter,
> + struct ena_com_dev_get_features_ctx *get_feat_ctx);
>
FWIW, I think it is nicer to move functions rather than provide forward
declarations. That could be done in a preparatory patch if you want
to avoid crowding out the intentions of this patch.
> static void ena_init_io_rings(struct ena_adapter *adapter,
> int first_index, int count);
> @@ -3387,13 +3389,30 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
> return 0;
> }
>
> -static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
> +static void set_default_llq_configurations(struct ena_adapter *adapter,
> + struct ena_llq_configurations *llq_config,
> + struct ena_admin_feature_llq_desc *llq)
> {
> + struct ena_com_dev *ena_dev = adapter->ena_dev;
> +
> llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
> llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
> llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
> - llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
> - llq_config->llq_ring_entry_size_value = 128;
> +
> + adapter->large_llq_header_supported =
> + !!(ena_dev->supported_features & (1 << ENA_ADMIN_LLQ));
nit: BIT(ENA_ADMIN_LLQ)
...
> @@ -3587,7 +3609,8 @@ static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *adapter)
> return rc;
> }
>
> -static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
> +static
> +void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
nit: this change seems unrelated to the rest of this patch.
> {
> struct net_device *netdev = adapter->netdev;
> struct ena_com_dev *ena_dev = adapter->ena_dev;
> @@ -3633,7 +3656,8 @@ static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
> clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
> }
>
> -static int ena_restore_device(struct ena_adapter *adapter)
> +static
> +int ena_restore_device(struct ena_adapter *adapter)
Ditto.
...
> @@ -4333,7 +4384,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> ena_dev->intr_moder_rx_interval = ENA_INTR_INITIAL_RX_INTERVAL_USECS;
> ena_dev->intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
> max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx);
> - ena_calc_io_queue_size(adapter, &get_feat_ctx);
> if (unlikely(!max_num_io_queues)) {
> rc = -EFAULT;
> goto err_device_destroy;
> @@ -4366,6 +4416,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
> "Failed to query interrupt moderation feature\n");
> goto err_device_destroy;
> }
> +
nit: this change seems unrelated to the rest of this patch.
> ena_init_io_rings(adapter,
> 0,
> adapter->xdp_num_queues +
> @@ -4486,6 +4537,7 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
> rtnl_lock(); /* lock released inside the below if-else block */
> adapter->reset_reason = ENA_REGS_RESET_SHUTDOWN;
> ena_destroy_device(adapter, true);
> +
Ditto.
> if (shutdown) {
> netif_device_detach(netdev);
> dev_close(netdev);
...
* Re: [PATCH RFC v2 net-next 3/4] net: ena: Recalculate TX state variables every device reset
2023-03-02 20:30 ` [PATCH RFC v2 net-next 3/4] net: ena: Recalculate TX state variables every device reset Shay Agroskin
@ 2023-03-03 11:53 ` Simon Horman
0 siblings, 0 replies; 14+ messages in thread
From: Simon Horman @ 2023-03-03 11:53 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara,
Wilson, Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel,
Saidi, Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur,
Dagan, Noam, Arinzon, David, Itzko, Shahar, Abboud, Osama
On Thu, Mar 02, 2023 at 10:30:44PM +0200, Shay Agroskin wrote:
> With the ability to modify the LLQ entry size, the size of the packet's
> payload that can be written directly to the device changes.
> This patch makes the driver recalculate this information on every device
> negotiation (also called device reset).
>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
* Re: [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len
2023-03-02 20:30 ` [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
@ 2023-03-03 11:53 ` Simon Horman
2023-03-03 23:50 ` Jakub Kicinski
1 sibling, 0 replies; 14+ messages in thread
From: Simon Horman @ 2023-03-03 11:53 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara,
Wilson, Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel,
Saidi, Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur,
Dagan, Noam, Arinzon, David, Itzko, Shahar, Abboud, Osama
On Thu, Mar 02, 2023 at 10:30:42PM +0200, Shay Agroskin wrote:
> This attribute, which is part of ethtool's ring param configuration,
> allows the user to specify the maximum number of bytes of the packet's
> payload that can be written directly to the device.
>
> Example usage:
> # ethtool -G [interface] tx-push-buf-len [number of bytes]
>
> Co-developed-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
* Re: [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len
2023-03-02 20:30 ` [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
@ 2023-03-03 11:54 ` Simon Horman
2023-03-03 23:50 ` Jakub Kicinski
1 sibling, 0 replies; 14+ messages in thread
From: Simon Horman @ 2023-03-03 11:54 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, Jakub Kicinski, netdev, Woodhouse, David,
Machulsky, Zorik, Matushevsky, Alexander, Saeed Bshara,
Wilson, Matt, Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel,
Saidi, Ali, Herrenschmidt, Benjamin, Kiyanovski, Arthur,
Dagan, Noam, Arinzon, David, Itzko, Shahar, Abboud, Osama
On Thu, Mar 02, 2023 at 10:30:45PM +0200, Shay Agroskin wrote:
> The ENA driver allows for two distinct values for the number of bytes
> of the packet's payload that can be written directly to the device.
>
> For a value of 224, the driver turns on Large LLQ Header mode, in which
> the first 224 bytes of the packet's payload are written to the LLQ.
>
> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
* Re: [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len
2023-03-02 20:30 ` [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len Shay Agroskin
2023-03-03 11:54 ` Simon Horman
@ 2023-03-03 23:50 ` Jakub Kicinski
2023-03-06 11:54 ` Shay Agroskin
1 sibling, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2023-03-03 23:50 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
On Thu, 2 Mar 2023 22:30:45 +0200 Shay Agroskin wrote:
> @@ -496,11 +509,40 @@ static int ena_set_ringparam(struct net_device *netdev,
> ENA_MIN_RING_SIZE : ring->rx_pending;
> new_rx_size = rounddown_pow_of_two(new_rx_size);
>
> - if (new_tx_size == adapter->requested_tx_ring_size &&
> - new_rx_size == adapter->requested_rx_ring_size)
> + changed |= new_tx_size != adapter->requested_tx_ring_size ||
> + new_rx_size != adapter->requested_rx_ring_size;
> +
> + /* This value is ignored if LLQ is not supported */
> + new_tx_push_buf_len = 0;
> + if (adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
> + goto no_llq_supported;
Are you rejecting the unsupported config in this case or just ignoring
it? You need to return an error if the user tries to set something the
device does not support/allow.
BTW your use of gotos to skip code is against the kernel coding style.
gotos are only for complex cases and error handling, you're using them
to save indentation it seems. Factor the code out to a helper instead,
or some such.
> + new_tx_push_buf_len = kernel_ring->tx_push_buf_len;
> +
> + /* support for ENA_LLQ_LARGE_HEADER is tested in the 'get' command */
> + if (new_tx_push_buf_len != ENA_LLQ_HEADER &&
> + new_tx_push_buf_len != ENA_LLQ_LARGE_HEADER) {
> + bool large_llq_sup = adapter->large_llq_header_supported;
> + char large_llq_size_str[40];
> +
> + snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
> +
> + NL_SET_ERR_MSG_FMT_MOD(extack,
> + "Only [%lu%s] tx push buff length values are supported",
> + ENA_LLQ_HEADER,
> + large_llq_sup ? large_llq_size_str : "");
> +
> + return -EINVAL;
> + }
> +
> + changed |= new_tx_push_buf_len != adapter->ena_dev->tx_max_header_size;
> +
> +no_llq_supported:
> + if (!changed)
> return 0;
>
> - return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
> + return ena_update_queue_params(adapter, new_tx_size, new_rx_size,
> + new_tx_push_buf_len);
* Re: [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len
2023-03-02 20:30 ` [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len Shay Agroskin
2023-03-03 11:53 ` Simon Horman
@ 2023-03-03 23:50 ` Jakub Kicinski
2023-03-07 10:09 ` Shay Agroskin
1 sibling, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2023-03-03 23:50 UTC (permalink / raw)
To: Shay Agroskin
Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
On Thu, 2 Mar 2023 22:30:42 +0200 Shay Agroskin wrote:
> + nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,
> + kr->tx_push_buf_max_len) ||
> + nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,
> + kr->tx_push_buf_len))
Only report these if driver declares support, please
* Re: [PATCH RFC v2 net-next 2/4] net: ena: Add an option to configure large LLQ headers
2023-03-03 11:51 ` Simon Horman
@ 2023-03-06 9:27 ` Shay Agroskin
0 siblings, 0 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-06 9:27 UTC (permalink / raw)
To: Simon Horman
Cc: David Miller, Jakub Kicinski, netdev, David Arinzon,
Woodhouse, David, Machulsky, Zorik, Matushevsky, Alexander,
Saeed Bshara, Wilson, Matt, Liguori, Anthony, Bshara, Nafea,
Belgazal, Netanel, Saidi, Ali, Herrenschmidt, Benjamin,
Kiyanovski, Arthur, Dagan, Noam, Itzko, Shahar, Abboud, Osama
Simon Horman <simon.horman@corigine.com> writes:
> On Thu, Mar 02, 2023 at 10:30:43PM +0200, Shay Agroskin wrote:
>> From: David Arinzon <darinzon@amazon.com>
>>
>> Allow configuring the device with large LLQ headers. The Low Latency
>> Queue (LLQ) allows the driver to write the first N bytes of the packet,
>> along with the rest of the TX descriptors, directly into the device (N
>> can be either 96 or 224 for the large LLQ headers configuration).
>>
>> Having L4 TCP/UDP headers contained in the first 96 bytes of the packet
>> is required to get maximum performance from the device.
>>
>> Signed-off-by: David Arinzon <darinzon@amazon.com>
>> Signed-off-by: Shay Agroskin <shayagr@amazon.com>
>
> Overall this looks very nice to me, it's a very interesting HW feature.
>
> As this is an RFC I've made a few nit-picking comments inline.
> Those notwithstanding,
>
> Reviewed-by: Simon Horman <simon.horman@corigine.com>
>
Thanks for reviewing this patchset (:
>> ---
>>  drivers/net/ethernet/amazon/ena/ena_netdev.c | 100 ++++++++++++++-----
>>  drivers/net/ethernet/amazon/ena/ena_netdev.h |   8 ++
>>  2 files changed, 84 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> index d3999db7c6a2..830d5be22aa9 100644
>> --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
>> @@ -44,6 +44,8 @@ static int ena_rss_init_default(struct ena_adapter *adapter);
>>  static void check_for_admin_com_state(struct ena_adapter *adapter);
>>  static void ena_destroy_device(struct ena_adapter *adapter, bool graceful);
>>  static int ena_restore_device(struct ena_adapter *adapter);
>> +static void ena_calc_io_queue_size(struct ena_adapter *adapter,
>> +                                   struct ena_com_dev_get_features_ctx *get_feat_ctx);
>>
>
> FWIW, I think it is nicer to move functions rather than provide forward
> declarations. That could be done in a preparatory patch if you want
> to avoid crowding out the intentions of this patch.
>
Seeing that it is indeed called only once, it does make more sense
to just move the function implementation itself.
>>  static void ena_init_io_rings(struct ena_adapter *adapter,
>>                                int first_index, int count);
>> @@ -3387,13 +3389,30 @@ static int ena_device_validate_params(struct ena_adapter *adapter,
>>          return 0;
>>  }
>>
>> -static void set_default_llq_configurations(struct ena_llq_configurations *llq_config)
>> +static void set_default_llq_configurations(struct ena_adapter *adapter,
>> +                                           struct ena_llq_configurations *llq_config,
>> +                                           struct ena_admin_feature_llq_desc *llq)
>>  {
>> +        struct ena_com_dev *ena_dev = adapter->ena_dev;
>> +
>>          llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
>>          llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
>>          llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
>> -        llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
>> -        llq_config->llq_ring_entry_size_value = 128;
>> +
>> +        adapter->large_llq_header_supported =
>> +                !!(ena_dev->supported_features & (1 << ENA_ADMIN_LLQ));
>
> nit: BIT(ENA_ADMIN_LLQ)
>
Yup, I'll change to it
> ...
>
>> @@ -3587,7 +3609,8 @@ static int ena_enable_msix_and_set_admin_interrupts(struct ena_adapter *adapter)
>>          return rc;
>>  }
>>
>> -static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
>> +static
>> +void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
>
> nit: this change seems unrelated to the rest of this patch.
>
I'll remove it
>> {
>> struct net_device *netdev = adapter->netdev;
>> struct ena_com_dev *ena_dev = adapter->ena_dev;
>> @@ -3633,7 +3656,8 @@ static void ena_destroy_device(struct ena_adapter *adapter, bool graceful)
>>          clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags);
>>  }
>>
>> -static int ena_restore_device(struct ena_adapter *adapter)
>> +static
>> +int ena_restore_device(struct ena_adapter *adapter)
>
> Ditto.
>
> ...
>
I'll remove it
>> @@ -4333,7 +4384,6 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>          ena_dev->intr_moder_rx_interval = ENA_INTR_INITIAL_RX_INTERVAL_USECS;
>>          ena_dev->intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION;
>>          max_num_io_queues = ena_calc_max_io_queue_num(pdev, ena_dev, &get_feat_ctx);
>> -        ena_calc_io_queue_size(adapter, &get_feat_ctx);
>>          if (unlikely(!max_num_io_queues)) {
>>                  rc = -EFAULT;
>>                  goto err_device_destroy;
>> @@ -4366,6 +4416,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
>>                          "Failed to query interrupt moderation feature\n");
>>                  goto err_device_destroy;
>>          }
>> +
>
> nit: this change seems unrelated to the rest of this patch.
>
These are small cosmetic changes to improve code readability. I'll
just create an additional simple commit that adds them.
>> ena_init_io_rings(adapter,
>> 0,
>> adapter->xdp_num_queues +
>> @@ -4486,6 +4537,7 @@ static void __ena_shutoff(struct pci_dev *pdev, bool shutdown)
>>          rtnl_lock(); /* lock released inside the below if-else block */
>> adapter->reset_reason = ENA_REGS_RESET_SHUTDOWN;
>> ena_destroy_device(adapter, true);
>> +
>
> Ditto.
>
I'll move it to another patch
>> if (shutdown) {
>> netif_device_detach(netdev);
>> dev_close(netdev);
>
> ...
* Re: [PATCH RFC v2 net-next 4/4] net: ena: Add support to changing tx_push_buf_len
2023-03-03 23:50 ` Jakub Kicinski
@ 2023-03-06 11:54 ` Shay Agroskin
0 siblings, 0 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-06 11:54 UTC (permalink / raw)
To: Jakub Kicinski
Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
Jakub Kicinski <kuba@kernel.org> writes:
> On Thu, 2 Mar 2023 22:30:45 +0200 Shay Agroskin wrote:
>> @@ -496,11 +509,40 @@ static int ena_set_ringparam(struct net_device *netdev,
>>                  ENA_MIN_RING_SIZE : ring->rx_pending;
>>          new_rx_size = rounddown_pow_of_two(new_rx_size);
>>
>> -        if (new_tx_size == adapter->requested_tx_ring_size &&
>> -            new_rx_size == adapter->requested_rx_ring_size)
>> +        changed |= new_tx_size != adapter->requested_tx_ring_size ||
>> +                   new_rx_size != adapter->requested_rx_ring_size;
>> +
>> +        /* This value is ignored if LLQ is not supported */
>> +        new_tx_push_buf_len = 0;
>> +        if (adapter->ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
>> +                goto no_llq_supported;
>
> Are you rejecting the unsupported config in this case or just ignoring
> it? You need to return an error if the user tries to set something the
> device does not support/allow.
>
I'll explicitly set the push buffer values to 0 in 'get' when LLQ isn't
supported, and return -ENOSUPP if the user tries to set it when no LLQ
is used.
> BTW your use of gotos to skip code is against the kernel coding style.
> gotos are only for complex cases and error handling, you're using them
> to save indentation it seems. Factor the code out to a helper instead,
> or some such.
>
Modified the code to remove the gotos (although I thought they
were an elegant implementation)
>> +        new_tx_push_buf_len = kernel_ring->tx_push_buf_len;
>> +
>> +        /* support for ENA_LLQ_LARGE_HEADER is tested in the 'get' command */
>> +        if (new_tx_push_buf_len != ENA_LLQ_HEADER &&
>> +            new_tx_push_buf_len != ENA_LLQ_LARGE_HEADER) {
>> +                bool large_llq_sup = adapter->large_llq_header_supported;
>> +                char large_llq_size_str[40];
>> +
>> +                snprintf(large_llq_size_str, 40, ", %lu", ENA_LLQ_LARGE_HEADER);
>> +
>> +                NL_SET_ERR_MSG_FMT_MOD(extack,
>> +                                       "Only [%lu%s] tx push buff length values are supported",
>> +                                       ENA_LLQ_HEADER,
>> +                                       large_llq_sup ? large_llq_size_str : "");
>> +
>> +                return -EINVAL;
>> +        }
>> +
>> +        changed |= new_tx_push_buf_len != adapter->ena_dev->tx_max_header_size;
>> +
>> +no_llq_supported:
>> +        if (!changed)
>>                  return 0;
>>
>> -        return ena_update_queue_sizes(adapter, new_tx_size, new_rx_size);
>> +        return ena_update_queue_params(adapter, new_tx_size, new_rx_size,
>> +                                       new_tx_push_buf_len);
* Re: [PATCH RFC v2 net-next 1/4] ethtool: Add support for configuring tx_push_buf_len
2023-03-03 23:50 ` Jakub Kicinski
@ 2023-03-07 10:09 ` Shay Agroskin
0 siblings, 0 replies; 14+ messages in thread
From: Shay Agroskin @ 2023-03-07 10:09 UTC (permalink / raw)
To: Jakub Kicinski
Cc: David Miller, netdev, Woodhouse, David, Machulsky, Zorik,
Matushevsky, Alexander, Saeed Bshara, Wilson, Matt,
Liguori, Anthony, Bshara, Nafea, Belgazal, Netanel, Saidi, Ali,
Herrenschmidt, Benjamin, Kiyanovski, Arthur, Dagan, Noam,
Arinzon, David, Itzko, Shahar, Abboud, Osama
Jakub Kicinski <kuba@kernel.org> writes:
> On Thu, 2 Mar 2023 22:30:42 +0200 Shay Agroskin wrote:
>> + nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN_MAX,
>> + kr->tx_push_buf_max_len) ||
>> + nla_put_u32(skb, ETHTOOL_A_RINGS_TX_PUSH_BUF_LEN,
>> + kr->tx_push_buf_len))
>
> Only report these if driver declares support, please
Added a check in the next patchset, lemme know if that is what you
meant