* [PATCH net-next v2 0/4] Add support to do threaded napi busy poll
@ 2025-01-23 23:12 Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 1/4] Add support to set napi threaded for individual napi Samiullah Khawaja
` (5 more replies)
0 siblings, 6 replies; 8+ messages in thread
From: Samiullah Khawaja @ 2025-01-23 23:12 UTC (permalink / raw)
To: Jakub Kicinski, David S . Miller , Eric Dumazet, Paolo Abeni,
almasrymina
Cc: netdev, skhawaja
Extend the existing threaded napi poll support to do continuous busy
polling.
This is used to continuously poll a napi and fetch descriptors from the
backing RX/TX queues in low latency applications. Allow enabling threaded
busy polling through netlink so it can be enabled on a set of dedicated
napis for low latency applications.
This allows enabling NAPI busy polling for any userspace application,
independent of the userspace API being used for packet and event
processing (epoll, io_uring, raw socket APIs). Once enabled, the user can
fetch the PID of the kthread doing NAPI polling and set its affinity,
priority and scheduler depending on the low-latency requirements.
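For example, the polling kthread could be pinned and prioritized as
follows (a sketch only: the NAPI id 66, the reported PID 258, CPU 3 and
the FIFO priority are all illustrative values, not part of this series):
```
# Fetch the polling kthread's PID for a given NAPI via netlink.
./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --do napi-get --json='{"id": 66}'
# Suppose the reported 'pid' is 258: pin it to CPU 3 and make it SCHED_FIFO.
sudo taskset -pc 3 258
sudo chrt -f -p 50 258
```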
Currently, threaded napi can only be enabled at the device level using
sysfs. Add support to enable/disable threaded mode for an individual napi
using the netlink interface. Extend the `napi-set` op in the netlink spec
to allow setting the `threaded` attribute of a napi.
Extend the threaded attribute in the napi struct with an option to enable
continuous busy polling. Extend the netlink and sysfs interfaces to allow
enabling/disabling threaded busy polling at the device or individual napi
level.
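Concretely, with this series the two knobs could be driven as below (the
interface name `eth0` and NAPI id 66 are illustrative; value `2` on the
sysfs attribute and `busy-poll-enable` on netlink map to the new mode
added in patch 3):
```
# Device-wide: value 2 enables threaded napi with busy polling.
echo 2 | sudo tee /sys/class/net/eth0/threaded

# Per-napi via netlink:
./tools/net/ynl/pyynl/cli.py \
    --spec Documentation/netlink/specs/netdev.yaml \
    --do napi-set --json='{"id": 66, "threaded": "busy-poll-enable"}'
```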
We use this for our AF_XDP-based hard low-latency use case using the
onload stack (https://github.com/Xilinx-CNS/onload) that runs in
userspace. Our use case is fixed-frequency RPC-style traffic with a fixed
request/response size. We simulated this using neper, starting the next
transaction only when the previous one had completed. The experiment
results are listed below.
Setup:
- Running on Google C3 VMs with the idpf driver, with the following
configurations.
- IRQ affinity and coalescing settings are identical in both experiments.
- There is only 1 RX/TX queue configured.
- The first experiment enables busy polling using sysctl for both the
epoll and socket APIs.
- The second experiment enables NAPI threaded busy poll for the full
device using sysfs.
Non-threaded NAPI busy poll enabled using sysctl and sysfs:
```
echo 400 | sudo tee /proc/sys/net/core/busy_poll
echo 400 | sudo tee /proc/sys/net/core/busy_read
echo 2 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
echo 15000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
```
Results using the following command:
```
sudo EF_NO_FAIL=0 EF_POLL_USEC=100000 taskset -c 3-10 onload -v \
--profile=latency ./neper/tcp_rr -Q 200 -R 400 -T 1 -F 50 \
-p 50,90,99,999 -H <IP> -l 10
...
...
num_transactions=2835
latency_min=0.000018976
latency_max=0.049642100
latency_mean=0.003243618
latency_stddev=0.010636847
latency_p50=0.000025270
latency_p90=0.005406710
latency_p99=0.049807350
latency_p99.9=0.049807350
```
Results with NAPI threaded busy poll, using the following command:
```
sudo EF_NO_FAIL=0 EF_POLL_USEC=100000 taskset -c 3-10 onload -v \
--profile=latency ./neper/tcp_rr -Q 200 -R 400 -T 1 -F 50 \
-p 50,90,99,999 -H <IP> -l 10
...
...
num_transactions=460163
latency_min=0.000015707
latency_max=0.200182942
latency_mean=0.000019453
latency_stddev=0.000720727
latency_p50=0.000016950
latency_p90=0.000017270
latency_p99=0.000018710
latency_p99.9=0.000020150
```
Here, with NAPI threaded busy poll running on a separate core, we are
able to consistently poll the NAPI and keep latency to an absolute
minimum. We are also able to do this without any major changes to the
onload stack or its threading model.
v2:
- Add documentation in napi.rst.
- Provide experiment data and usecase details.
- Update busy_poller selftest to include napi threaded poll testcase.
- Define threaded mode enum in netlink interface.
- Include NAPI threaded state in napi config so it is saved/restored.
Samiullah Khawaja (4):
Add support to set napi threaded for individual napi
net: Create separate gro_flush helper function
Extend napi threaded polling to allow kthread based busy polling
selftests: Add napi threaded busy poll test in `busy_poller`
Documentation/ABI/testing/sysfs-class-net | 3 +-
Documentation/netlink/specs/netdev.yaml | 14 ++
Documentation/networking/napi.rst | 80 ++++++++++-
.../net/ethernet/atheros/atl1c/atl1c_main.c | 2 +-
include/linux/netdevice.h | 24 +++-
include/uapi/linux/netdev.h | 7 +
net/core/dev.c | 127 ++++++++++++++----
net/core/net-sysfs.c | 2 +-
net/core/netdev-genl-gen.c | 5 +-
net/core/netdev-genl.c | 9 ++
tools/include/uapi/linux/netdev.h | 7 +
tools/testing/selftests/net/busy_poll_test.sh | 25 +++-
tools/testing/selftests/net/busy_poller.c | 14 +-
13 files changed, 282 insertions(+), 37 deletions(-)
--
2.48.1.262.g85cc9f2d1e-goog
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH net-next v2 1/4] Add support to set napi threaded for individual napi
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
@ 2025-01-23 23:12 ` Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 2/4] net: Create separate gro_flush helper function Samiullah Khawaja
` (4 subsequent siblings)
From: Samiullah Khawaja @ 2025-01-23 23:12 UTC (permalink / raw)
To: Jakub Kicinski, David S . Miller , Eric Dumazet, Paolo Abeni,
almasrymina
Cc: netdev, skhawaja
A net device has a `threaded` sysfs attribute that can be used to enable
threaded napi polling on all of the NAPI contexts under that device.
Allow enabling threaded napi polling at the individual napi level using
netlink.
Extend the netlink operation `napi-set` to allow setting the threaded
attribute of a NAPI. This enables threaded polling on a single napi
context.
Tested using the following command in qemu/virtio-net:
./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
--do napi-set --json '{"id": 66, "threaded": 1}'
Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
---
Documentation/netlink/specs/netdev.yaml | 10 ++++++++
Documentation/networking/napi.rst | 13 ++++++++++-
include/linux/netdevice.h | 10 ++++++++
include/uapi/linux/netdev.h | 1 +
net/core/dev.c | 31 +++++++++++++++++++++++++
net/core/netdev-genl-gen.c | 5 ++--
net/core/netdev-genl.c | 9 +++++++
tools/include/uapi/linux/netdev.h | 1 +
8 files changed, 77 insertions(+), 3 deletions(-)
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index cbb544bd6c84..785240d60df6 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -268,6 +268,14 @@ attribute-sets:
doc: The timeout, in nanoseconds, of how long to suspend irq
processing, if event polling finds events
type: uint
+ -
+ name: threaded
+ doc: Whether the napi is configured to operate in threaded polling
+ mode. If this is set to `1` then the NAPI context operates
+ in threaded polling mode.
+ type: u32
+ checks:
+ max: 1
-
name: queue
attributes:
@@ -659,6 +667,7 @@ operations:
- defer-hard-irqs
- gro-flush-timeout
- irq-suspend-timeout
+ - threaded
dump:
request:
attributes:
@@ -711,6 +720,7 @@ operations:
- defer-hard-irqs
- gro-flush-timeout
- irq-suspend-timeout
+ - threaded
kernel-family:
headers: [ "linux/list.h"]
diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index 6083210ab2a4..41926e7a3dd4 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -413,7 +413,18 @@ dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.
Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
-netdev's sysfs directory.
+netdev's sysfs directory. It can also be enabled for a specific napi using
+the netlink interface.
+
+For example, using the script:
+
+.. code-block:: bash
+
+ $ kernel-source/tools/net/ynl/pyynl/cli.py \
+ --spec Documentation/netlink/specs/netdev.yaml \
+ --do napi-set \
+ --json='{"id": 66,
+ "threaded": 1}'
.. rubric:: Footnotes
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8da4c61f97b9..6afba24b18d1 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -352,6 +352,7 @@ struct napi_config {
u64 gro_flush_timeout;
u64 irq_suspend_timeout;
u32 defer_hard_irqs;
+ bool threaded;
unsigned int napi_id;
};
@@ -572,6 +573,15 @@ static inline bool napi_complete(struct napi_struct *n)
int dev_set_threaded(struct net_device *dev, bool threaded);
+/*
+ * napi_set_threaded - set napi threaded state
+ * @napi: NAPI context
+ * @threaded: whether this napi does threaded polling
+ *
+ * Return 0 on success and negative errno on failure.
+ */
+int napi_set_threaded(struct napi_struct *napi, bool threaded);
+
void napi_disable(struct napi_struct *n);
void napi_disable_locked(struct napi_struct *n);
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index e4be227d3ad6..829648b2ef65 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -125,6 +125,7 @@ enum {
NETDEV_A_NAPI_DEFER_HARD_IRQS,
NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+ NETDEV_A_NAPI_THREADED,
__NETDEV_A_NAPI_MAX,
NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
diff --git a/net/core/dev.c b/net/core/dev.c
index afa2282f2604..3885f3095873 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6787,6 +6787,30 @@ static void init_gro_hash(struct napi_struct *napi)
napi->gro_bitmask = 0;
}
+int napi_set_threaded(struct napi_struct *napi, bool threaded)
+{
+ if (napi->dev->threaded)
+ return -EINVAL;
+
+ if (threaded) {
+ if (!napi->thread) {
+ int err = napi_kthread_create(napi);
+
+ if (err)
+ return err;
+ }
+ }
+
+ if (napi->config)
+ napi->config->threaded = threaded;
+
+ /* Make sure kthread is created before THREADED bit is set. */
+ smp_mb__before_atomic();
+ assign_bit(NAPI_STATE_THREADED, &napi->state, threaded);
+
+ return 0;
+}
+
int dev_set_threaded(struct net_device *dev, bool threaded)
{
struct napi_struct *napi;
@@ -6798,6 +6822,11 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
return 0;
if (threaded) {
+ /* Check if threaded is set at napi level already */
+ list_for_each_entry(napi, &dev->napi_list, dev_list)
+ if (test_bit(NAPI_STATE_THREADED, &napi->state))
+ return -EINVAL;
+
list_for_each_entry(napi, &dev->napi_list, dev_list) {
if (!napi->thread) {
err = napi_kthread_create(napi);
@@ -6880,6 +6909,8 @@ static void napi_restore_config(struct napi_struct *n)
napi_hash_add(n);
n->config->napi_id = n->napi_id;
}
+
+ napi_set_threaded(n, n->config->threaded);
}
static void napi_save_config(struct napi_struct *n)
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index 996ac6a449eb..a1f80e687f53 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -92,11 +92,12 @@ static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_DMABUF_FD + 1]
};
/* NETDEV_CMD_NAPI_SET - do */
-static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT + 1] = {
+static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_THREADED + 1] = {
[NETDEV_A_NAPI_ID] = { .type = NLA_U32, },
[NETDEV_A_NAPI_DEFER_HARD_IRQS] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_napi_defer_hard_irqs_range),
[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT] = { .type = NLA_UINT, },
[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
+ [NETDEV_A_NAPI_THREADED] = NLA_POLICY_MAX(NLA_U32, 1),
};
/* Ops table for netdev */
@@ -187,7 +188,7 @@ static const struct genl_split_ops netdev_nl_ops[] = {
.cmd = NETDEV_CMD_NAPI_SET,
.doit = netdev_nl_napi_set_doit,
.policy = netdev_napi_set_nl_policy,
- .maxattr = NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+ .maxattr = NETDEV_A_NAPI_THREADED,
.flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
},
};
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 715f85c6b62e..208c3dd768ec 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -183,6 +183,9 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
if (napi->irq >= 0 && nla_put_u32(rsp, NETDEV_A_NAPI_IRQ, napi->irq))
goto nla_put_failure;
+ if (nla_put_u32(rsp, NETDEV_A_NAPI_THREADED, !!napi->thread))
+ goto nla_put_failure;
+
if (napi->thread) {
pid = task_pid_nr(napi->thread);
if (nla_put_u32(rsp, NETDEV_A_NAPI_PID, pid))
@@ -321,8 +324,14 @@ netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
{
u64 irq_suspend_timeout = 0;
u64 gro_flush_timeout = 0;
+ u32 threaded = 0;
u32 defer = 0;
+ if (info->attrs[NETDEV_A_NAPI_THREADED]) {
+ threaded = nla_get_u32(info->attrs[NETDEV_A_NAPI_THREADED]);
+ napi_set_threaded(napi, !!threaded);
+ }
+
if (info->attrs[NETDEV_A_NAPI_DEFER_HARD_IRQS]) {
defer = nla_get_u32(info->attrs[NETDEV_A_NAPI_DEFER_HARD_IRQS]);
napi_set_defer_hard_irqs(napi, defer);
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index e4be227d3ad6..829648b2ef65 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -125,6 +125,7 @@ enum {
NETDEV_A_NAPI_DEFER_HARD_IRQS,
NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+ NETDEV_A_NAPI_THREADED,
__NETDEV_A_NAPI_MAX,
NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
--
2.48.1.262.g85cc9f2d1e-goog
* [PATCH net-next v2 2/4] net: Create separate gro_flush helper function
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 1/4] Add support to set napi threaded for individual napi Samiullah Khawaja
@ 2025-01-23 23:12 ` Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling Samiullah Khawaja
` (3 subsequent siblings)
From: Samiullah Khawaja @ 2025-01-23 23:12 UTC (permalink / raw)
To: Jakub Kicinski, David S . Miller , Eric Dumazet, Paolo Abeni,
almasrymina
Cc: netdev, skhawaja
Move the multiple copies of the same code snippet, which does `gro_flush`
and `gro_normal_list`, into a separate helper function.
Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
---
net/core/dev.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 3885f3095873..484947ad5410 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6484,6 +6484,17 @@ static void skb_defer_free_flush(struct softnet_data *sd)
}
}
+static void __napi_gro_flush_helper(struct napi_struct *napi)
+{
+ if (napi->gro_bitmask) {
+ /* flush too old packets
+ * If HZ < 1000, flush all packets.
+ */
+ napi_gro_flush(napi, HZ >= 1000);
+ }
+ gro_normal_list(napi);
+}
+
#if defined(CONFIG_NET_RX_BUSY_POLL)
static void __busy_poll_stop(struct napi_struct *napi, bool skip_schedule)
@@ -6494,14 +6505,8 @@ static void __busy_poll_stop(struct napi_struct *napi, bool skip_schedule)
return;
}
- if (napi->gro_bitmask) {
- /* flush too old packets
- * If HZ < 1000, flush all packets.
- */
- napi_gro_flush(napi, HZ >= 1000);
- }
+ __napi_gro_flush_helper(napi);
- gro_normal_list(napi);
clear_bit(NAPI_STATE_SCHED, &napi->state);
}
@@ -7170,14 +7175,7 @@ static int __napi_poll(struct napi_struct *n, bool *repoll)
return work;
}
- if (n->gro_bitmask) {
- /* flush too old packets
- * If HZ < 1000, flush all packets.
- */
- napi_gro_flush(n, HZ >= 1000);
- }
-
- gro_normal_list(n);
+ __napi_gro_flush_helper(n);
/* Some drivers may have called napi_schedule
* prior to exhausting their budget.
--
2.48.1.262.g85cc9f2d1e-goog
* [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 1/4] Add support to set napi threaded for individual napi Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 2/4] net: Create separate gro_flush helper function Samiullah Khawaja
@ 2025-01-23 23:12 ` Samiullah Khawaja
2025-01-24 13:18 ` kernel test robot
2025-01-23 23:12 ` [PATCH net-next v2 4/4] selftests: Add napi threaded busy poll test in `busy_poller` Samiullah Khawaja
` (2 subsequent siblings)
From: Samiullah Khawaja @ 2025-01-23 23:12 UTC (permalink / raw)
To: Jakub Kicinski, David S . Miller , Eric Dumazet, Paolo Abeni,
almasrymina
Cc: netdev, skhawaja
Add a new state to the napi state enum:
- STATE_THREADED_BUSY_POLL
Threaded busy poll is enabled/running for this napi.
The following changes are introduced in the napi scheduling and state
logic:
- When threaded busy poll is enabled through sysfs, it also enables
NAPI_STATE_THREADED so that a kthread is created per napi. It also sets
the NAPI_STATE_THREADED_BUSY_POLL bit on each napi to indicate that we
are supposed to busy poll it.
- When a napi is scheduled with STATE_SCHED_THREADED and the associated
kthread is woken up, the kthread owns the context. If both
NAPI_STATE_THREADED_BUSY_POLL and NAPI_STATE_SCHED_THREADED are set,
then we can busy poll.
- To keep busy polling and avoid rescheduling of interrupts,
napi_complete_done returns false when both the SCHED_THREADED and
THREADED_BUSY_POLL flags are set. napi_complete_done also returns early
to avoid the STATE_SCHED_THREADED bit being cleared.
- If at any point STATE_THREADED_BUSY_POLL is cleared, napi_complete_done
runs and clears the SCHED_THREADED bit as well. This makes the
associated kthread go to sleep as per the existing logic.
Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
---
Documentation/ABI/testing/sysfs-class-net | 3 +-
Documentation/netlink/specs/netdev.yaml | 12 ++--
Documentation/networking/napi.rst | 67 ++++++++++++++++-
.../net/ethernet/atheros/atl1c/atl1c_main.c | 2 +-
include/linux/netdevice.h | 20 ++++--
include/uapi/linux/netdev.h | 6 ++
net/core/dev.c | 72 ++++++++++++++++---
net/core/net-sysfs.c | 2 +-
net/core/netdev-genl-gen.c | 2 +-
net/core/netdev-genl.c | 2 +-
tools/include/uapi/linux/netdev.h | 6 ++
11 files changed, 168 insertions(+), 26 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-class-net b/Documentation/ABI/testing/sysfs-class-net
index ebf21beba846..15d7d36a8294 100644
--- a/Documentation/ABI/testing/sysfs-class-net
+++ b/Documentation/ABI/testing/sysfs-class-net
@@ -343,7 +343,7 @@ Date: Jan 2021
KernelVersion: 5.12
Contact: netdev@vger.kernel.org
Description:
- Boolean value to control the threaded mode per device. User could
+ Integer value to control the threaded mode per device. User could
set this value to enable/disable threaded mode for all napi
belonging to this device, without the need to do device up/down.
@@ -351,4 +351,5 @@ Description:
== ==================================
0 threaded mode disabled for this dev
1 threaded mode enabled for this dev
+ 2 threaded mode and busy polling enabled for this dev
== ==================================
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 785240d60df6..db3bf1eb9a63 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -78,6 +78,10 @@ definitions:
name: qstats-scope
type: flags
entries: [ queue ]
+ -
+ name: napi-threaded
+ type: enum
+ entries: [ disable, enable, busy-poll-enable ]
attribute-sets:
-
@@ -271,11 +275,11 @@ attribute-sets:
-
name: threaded
doc: Whether the napi is configured to operate in threaded polling
- mode. If this is set to `1` then the NAPI context operates
- in threaded polling mode.
+ mode. If this is set to `enable` then the NAPI context operates
+ in threaded polling mode. If this is set to `busy-poll-enable`
+ then the NAPI kthread also does busy polling.
type: u32
- checks:
- max: 1
+ enum: napi-threaded
-
name: queue
attributes:
diff --git a/Documentation/networking/napi.rst b/Documentation/networking/napi.rst
index 41926e7a3dd4..edecc21f0bca 100644
--- a/Documentation/networking/napi.rst
+++ b/Documentation/networking/napi.rst
@@ -232,7 +232,9 @@ are not well known).
Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
-also exists.
+also exists. Threaded polling of NAPI also has a mode to busy poll for
+packets (:ref:`threaded busy polling<threaded_busy_poll>`) using the same
+thread that is used for NAPI processing.
epoll-based busy polling
------------------------
@@ -395,6 +397,69 @@ Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
the recommended usage, because otherwise setting ``irq-suspend-timeout``
might not have any discernible effect.
+.. _threaded_busy_poll:
+
+Threaded NAPI busy polling
+--------------------------
+
+Threaded NAPI allows the packets of each NAPI to be processed in a kernel
+thread. Threaded NAPI busy polling extends this by adding support for
+continuously busy polling that NAPI. This can be used to enable busy polling
+independent of the userspace application or the API (epoll, io_uring, raw
+sockets) being used in userspace to process the packets.
+
+It can be enabled per NAPI using the netlink interface, or at the device
+level using the threaded NAPI sysfs attribute.
+
+For example, using the following script:
+
+.. code-block:: bash
+
+ $ kernel-source/tools/net/ynl/pyynl/cli.py \
+ --spec Documentation/netlink/specs/netdev.yaml \
+ --do napi-set \
+ --json='{"id": 66,
+ "threaded": "busy-poll-enable"}'
+
+
+Enabling it per NAPI allows finer control, so busy polling can be enabled for
+only the set of NIC queues that carry traffic with low-latency requirements.
+
+Depending on application requirements, the user might want to set the
+affinity of the kthread that busy polls each NAPI, and might also want to set
+its priority and scheduling policy based on the latency requirements.
+
+For a hard low-latency application, the user might want to dedicate a full
+core to NAPI polling so the NIC queue descriptors are picked up as soon as
+they appear in the queue. For a more relaxed low-latency requirement, the
+user might want to share the core with other threads.
+
+Once threaded busy polling is enabled for a NAPI, the PID of the kthread can
+be fetched via the netlink interface so that affinity, priority and scheduler
+configuration can be done.
+
+For example, the following script can be used to fetch the pid:
+
+.. code-block:: bash
+
+ $ kernel-source/tools/net/ynl/pyynl/cli.py \
+ --spec Documentation/netlink/specs/netdev.yaml \
+ --do napi-get \
+ --json='{"id": 66}'
+
+This will output something like the following; the pid `258` is the PID of
+the kthread that is polling this NAPI.
+
+.. code-block:: bash
+
+ $ {'defer-hard-irqs': 0,
+ 'gro-flush-timeout': 0,
+ 'id': 66,
+ 'ifindex': 2,
+ 'irq-suspend-timeout': 0,
+ 'pid': 258,
+ 'threaded': 'enable'}
+
.. _threaded:
Threaded NAPI
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
index c571614b1d50..513328476770 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
@@ -2688,7 +2688,7 @@ static int atl1c_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
adapter->mii.mdio_write = atl1c_mdio_write;
adapter->mii.phy_id_mask = 0x1f;
adapter->mii.reg_num_mask = MDIO_CTRL_REG_MASK;
- dev_set_threaded(netdev, true);
+ dev_set_threaded(netdev, NETDEV_NAPI_THREADED_ENABLE);
for (i = 0; i < adapter->rx_queue_count; ++i)
netif_napi_add(netdev, &adapter->rrd_ring[i].napi,
atl1c_clean_rx);
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 6afba24b18d1..9d6bb0d719b3 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -352,7 +352,7 @@ struct napi_config {
u64 gro_flush_timeout;
u64 irq_suspend_timeout;
u32 defer_hard_irqs;
- bool threaded;
+ u8 threaded;
unsigned int napi_id;
};
@@ -410,6 +410,8 @@ enum {
NAPI_STATE_PREFER_BUSY_POLL, /* prefer busy-polling over softirq processing*/
NAPI_STATE_THREADED, /* The poll is performed inside its own thread*/
NAPI_STATE_SCHED_THREADED, /* Napi is currently scheduled in threaded mode */
+ NAPI_STATE_THREADED_BUSY_POLL, /* The threaded napi poller will busy poll */
+ NAPI_STATE_SCHED_THREADED_BUSY_POLL, /* The threaded napi poller is busy polling */
};
enum {
@@ -423,8 +425,14 @@ enum {
NAPIF_STATE_PREFER_BUSY_POLL = BIT(NAPI_STATE_PREFER_BUSY_POLL),
NAPIF_STATE_THREADED = BIT(NAPI_STATE_THREADED),
NAPIF_STATE_SCHED_THREADED = BIT(NAPI_STATE_SCHED_THREADED),
+ NAPIF_STATE_THREADED_BUSY_POLL = BIT(NAPI_STATE_THREADED_BUSY_POLL),
+ NAPIF_STATE_SCHED_THREADED_BUSY_POLL
+ = BIT(NAPI_STATE_SCHED_THREADED_BUSY_POLL),
};
+#define NAPIF_STATE_THREADED_BUSY_POLL_MASK \
+ (NAPIF_STATE_THREADED | NAPIF_STATE_THREADED_BUSY_POLL)
+
enum gro_result {
GRO_MERGED,
GRO_MERGED_FREE,
@@ -571,16 +579,18 @@ static inline bool napi_complete(struct napi_struct *n)
return napi_complete_done(n, 0);
}
-int dev_set_threaded(struct net_device *dev, bool threaded);
+int dev_set_threaded(struct net_device *dev,
+ enum netdev_napi_threaded threaded);
/*
* napi_set_threaded - set napi threaded state
* @napi: NAPI context
- * @threaded: whether this napi does threaded polling
+ * @threaded: threading mode
*
* Return 0 on success and negative errno on failure.
*/
-int napi_set_threaded(struct napi_struct *napi, bool threaded);
+int napi_set_threaded(struct napi_struct *napi,
+ enum netdev_napi_threaded threaded);
void napi_disable(struct napi_struct *n);
void napi_disable_locked(struct napi_struct *n);
@@ -2404,7 +2414,7 @@ struct net_device {
struct sfp_bus *sfp_bus;
struct lock_class_key *qdisc_tx_busylock;
bool proto_down;
- bool threaded;
+ u8 threaded;
/* priv_flags_slow, ungrouped to save space */
unsigned long see_all_hwtstamp_requests:1;
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 829648b2ef65..c2a9dbb361f6 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -74,6 +74,12 @@ enum netdev_qstats_scope {
NETDEV_QSTATS_SCOPE_QUEUE = 1,
};
+enum netdev_napi_threaded {
+ NETDEV_NAPI_THREADED_DISABLE,
+ NETDEV_NAPI_THREADED_ENABLE,
+ NETDEV_NAPI_THREADED_BUSY_POLL_ENABLE,
+};
+
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
diff --git a/net/core/dev.c b/net/core/dev.c
index 484947ad5410..8a5fde81f0b8 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -78,6 +78,7 @@
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/sched/isolation.h>
+#include <linux/sched/types.h>
#include <linux/sched/mm.h>
#include <linux/smpboot.h>
#include <linux/mutex.h>
@@ -6403,7 +6404,8 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
* the guarantee we will be called later.
*/
if (unlikely(n->state & (NAPIF_STATE_NPSVC |
- NAPIF_STATE_IN_BUSY_POLL)))
+ NAPIF_STATE_IN_BUSY_POLL |
+ NAPIF_STATE_SCHED_THREADED_BUSY_POLL)))
return false;
if (work_done) {
@@ -6792,8 +6794,10 @@ static void init_gro_hash(struct napi_struct *napi)
napi->gro_bitmask = 0;
}
-int napi_set_threaded(struct napi_struct *napi, bool threaded)
+int napi_set_threaded(struct napi_struct *napi,
+ enum netdev_napi_threaded threaded)
{
+ unsigned long val;
if (napi->dev->threaded)
return -EINVAL;
@@ -6811,14 +6815,20 @@ int napi_set_threaded(struct napi_struct *napi, bool threaded)
/* Make sure kthread is created before THREADED bit is set. */
smp_mb__before_atomic();
- assign_bit(NAPI_STATE_THREADED, &napi->state, threaded);
+ val = 0;
+ if (threaded == NETDEV_NAPI_THREADED_BUSY_POLL_ENABLE)
+ val |= NAPIF_STATE_THREADED_BUSY_POLL;
+ if (threaded)
+ val |= NAPIF_STATE_THREADED;
+ set_mask_bits(&napi->state, NAPIF_STATE_THREADED_BUSY_POLL_MASK, val);
return 0;
}
-int dev_set_threaded(struct net_device *dev, bool threaded)
+int dev_set_threaded(struct net_device *dev, enum netdev_napi_threaded threaded)
{
struct napi_struct *napi;
+ unsigned long val;
int err = 0;
netdev_assert_locked_or_invisible(dev);
@@ -6826,17 +6836,22 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
if (dev->threaded == threaded)
return 0;
+ val = 0;
if (threaded) {
/* Check if threaded is set at napi level already */
list_for_each_entry(napi, &dev->napi_list, dev_list)
if (test_bit(NAPI_STATE_THREADED, &napi->state))
return -EINVAL;
+ val |= NAPIF_STATE_THREADED;
+ if (threaded == NETDEV_NAPI_THREADED_BUSY_POLL_ENABLE)
+ val |= NAPIF_STATE_THREADED_BUSY_POLL;
+
list_for_each_entry(napi, &dev->napi_list, dev_list) {
if (!napi->thread) {
err = napi_kthread_create(napi);
if (err) {
- threaded = false;
+ threaded = NETDEV_NAPI_THREADED_DISABLE;
break;
}
}
@@ -6855,9 +6870,13 @@ int dev_set_threaded(struct net_device *dev, bool threaded)
* polled. In this case, the switch between threaded mode and
* softirq mode will happen in the next round of napi_schedule().
* This should not cause hiccups/stalls to the live traffic.
+ *
+ * The switch to busy-poll threaded napi will occur after the threaded
+ * napi is scheduled.
*/
list_for_each_entry(napi, &dev->napi_list, dev_list)
- assign_bit(NAPI_STATE_THREADED, &napi->state, threaded);
+ set_mask_bits(&napi->state,
+ NAPIF_STATE_THREADED_BUSY_POLL_MASK, val);
return err;
}
@@ -7235,7 +7254,7 @@ static int napi_thread_wait(struct napi_struct *napi)
return -1;
}
-static void napi_threaded_poll_loop(struct napi_struct *napi)
+static void napi_threaded_poll_loop(struct napi_struct *napi, bool busy_poll)
{
struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
struct softnet_data *sd;
@@ -7264,22 +7283,53 @@ static void napi_threaded_poll_loop(struct napi_struct *napi)
}
skb_defer_free_flush(sd);
bpf_net_ctx_clear(bpf_net_ctx);
+
+ /* Push the skbs up the stack if busy polling. */
+ if (busy_poll)
+ __napi_gro_flush_helper(napi);
local_bh_enable();
- if (!repoll)
+ /* If busy polling then do not break here because we need to
+ * call cond_resched and rcu_softirq_qs_periodic to prevent
+ * watchdog warnings.
+ */
+ if (!repoll && !busy_poll)
break;
rcu_softirq_qs_periodic(last_qs);
cond_resched();
+
+ if (!repoll)
+ break;
}
}
static int napi_threaded_poll(void *data)
{
struct napi_struct *napi = data;
+ bool busy_poll_sched;
+ unsigned long val;
+ bool busy_poll;
+
+ while (!napi_thread_wait(napi)) {
+ /* Once woken up, this means that we are scheduled as threaded
+ * napi and this thread owns the napi context, if busy poll
+ * state is set then we busy poll this napi.
+ */
+ val = READ_ONCE(napi->state);
+ busy_poll = val & NAPIF_STATE_THREADED_BUSY_POLL;
+ busy_poll_sched = val & NAPIF_STATE_SCHED_THREADED_BUSY_POLL;
+
+ /* Do not busy poll if napi is disabled. */
+ if (unlikely(val & NAPIF_STATE_DISABLE))
+ busy_poll = false;
+
+ if (busy_poll != busy_poll_sched)
+ assign_bit(NAPI_STATE_SCHED_THREADED_BUSY_POLL,
+ &napi->state, busy_poll);
- while (!napi_thread_wait(napi))
- napi_threaded_poll_loop(napi);
+ napi_threaded_poll_loop(napi, busy_poll);
+ }
return 0;
}
@@ -12497,7 +12547,7 @@ static void run_backlog_napi(unsigned int cpu)
{
struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);
- napi_threaded_poll_loop(&sd->backlog);
+ napi_threaded_poll_loop(&sd->backlog, false);
}
static void backlog_napi_setup(unsigned int cpu)
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 07cb99b114bd..beb496bcb633 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -657,7 +657,7 @@ static int modify_napi_threaded(struct net_device *dev, unsigned long val)
if (list_empty(&dev->napi_list))
return -EOPNOTSUPP;
- if (val != 0 && val != 1)
+ if (val > NETDEV_NAPI_THREADED_BUSY_POLL_ENABLE)
return -EOPNOTSUPP;
ret = dev_set_threaded(dev, val);
diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
index a1f80e687f53..b572beba42e7 100644
--- a/net/core/netdev-genl-gen.c
+++ b/net/core/netdev-genl-gen.c
@@ -97,7 +97,7 @@ static const struct nla_policy netdev_napi_set_nl_policy[NETDEV_A_NAPI_THREADED
[NETDEV_A_NAPI_DEFER_HARD_IRQS] = NLA_POLICY_FULL_RANGE(NLA_U32, &netdev_a_napi_defer_hard_irqs_range),
[NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT] = { .type = NLA_UINT, },
[NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT] = { .type = NLA_UINT, },
- [NETDEV_A_NAPI_THREADED] = NLA_POLICY_MAX(NLA_U32, 1),
+ [NETDEV_A_NAPI_THREADED] = NLA_POLICY_MAX(NLA_U32, 2),
};
/* Ops table for netdev */
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index 208c3dd768ec..7ae5f3ed0961 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -329,7 +329,7 @@ netdev_nl_napi_set_config(struct napi_struct *napi, struct genl_info *info)
if (info->attrs[NETDEV_A_NAPI_THREADED]) {
threaded = nla_get_u32(info->attrs[NETDEV_A_NAPI_THREADED]);
- napi_set_threaded(napi, !!threaded);
+ napi_set_threaded(napi, threaded);
}
if (info->attrs[NETDEV_A_NAPI_DEFER_HARD_IRQS]) {
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 829648b2ef65..c2a9dbb361f6 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -74,6 +74,12 @@ enum netdev_qstats_scope {
NETDEV_QSTATS_SCOPE_QUEUE = 1,
};
+enum netdev_napi_threaded {
+ NETDEV_NAPI_THREADED_DISABLE,
+ NETDEV_NAPI_THREADED_ENABLE,
+ NETDEV_NAPI_THREADED_BUSY_POLL_ENABLE,
+};
+
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
--
2.48.1.262.g85cc9f2d1e-goog
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH net-next v2 4/4] selftests: Add napi threaded busy poll test in `busy_poller`
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
` (2 preceding siblings ...)
2025-01-23 23:12 ` [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling Samiullah Khawaja
@ 2025-01-23 23:12 ` Samiullah Khawaja
2025-01-24 1:24 ` [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Jakub Kicinski
2025-01-27 17:06 ` Joe Damato
5 siblings, 0 replies; 8+ messages in thread
From: Samiullah Khawaja @ 2025-01-23 23:12 UTC (permalink / raw)
To: Jakub Kicinski, David S . Miller , Eric Dumazet, Paolo Abeni,
almasrymina
Cc: netdev, skhawaja
Add testcase to run busy poll test with threaded napi busy poll enabled.
Signed-off-by: Samiullah Khawaja <skhawaja@google.com>
---
tools/testing/selftests/net/busy_poll_test.sh | 25 ++++++++++++++++++-
tools/testing/selftests/net/busy_poller.c | 14 ++++++++---
2 files changed, 35 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/net/busy_poll_test.sh b/tools/testing/selftests/net/busy_poll_test.sh
index 7db292ec4884..aeca610dc989 100755
--- a/tools/testing/selftests/net/busy_poll_test.sh
+++ b/tools/testing/selftests/net/busy_poll_test.sh
@@ -27,6 +27,9 @@ NAPI_DEFER_HARD_IRQS=100
GRO_FLUSH_TIMEOUT=50000
SUSPEND_TIMEOUT=20000000
+# NAPI threaded busy poll config
+NAPI_THREADED_POLL=2
+
setup_ns()
{
set -e
@@ -62,6 +65,9 @@ cleanup_ns()
test_busypoll()
{
suspend_value=${1:-0}
+ napi_threaded_value=${2:-0}
+ prefer_busy_poll_value=${3:-$PREFER_BUSY_POLL}
+
tmp_file=$(mktemp)
out_file=$(mktemp)
@@ -73,10 +79,11 @@ test_busypoll()
-b${SERVER_IP} \
-m${MAX_EVENTS} \
-u${BUSY_POLL_USECS} \
- -P${PREFER_BUSY_POLL} \
+ -P${prefer_busy_poll_value} \
-g${BUSY_POLL_BUDGET} \
-i${NSIM_SV_IFIDX} \
-s${suspend_value} \
+ -t${napi_threaded_value} \
-o${out_file}&
wait_local_port_listen nssv ${SERVER_PORT} tcp
@@ -109,6 +116,15 @@ test_busypoll_with_suspend()
return $?
}
+test_busypoll_with_napi_threaded()
+{
+ # Only enable napi threaded poll. Set suspend timeout and prefer busy
+ # poll to 0.
+ test_busypoll 0 ${NAPI_THREADED_POLL} 0
+
+ return $?
+}
+
###
### Code start
###
@@ -154,6 +170,13 @@ if [ $? -ne 0 ]; then
exit 1
fi
+test_busypoll_with_napi_threaded
+if [ $? -ne 0 ]; then
+ echo "test_busypoll_with_napi_threaded failed"
+ cleanup_ns
+ exit 1
+fi
+
echo "$NSIM_SV_FD:$NSIM_SV_IFIDX" > $NSIM_DEV_SYS_UNLINK
echo $NSIM_CL_ID > $NSIM_DEV_SYS_DEL
diff --git a/tools/testing/selftests/net/busy_poller.c b/tools/testing/selftests/net/busy_poller.c
index 04c7ff577bb8..f7407f09f635 100644
--- a/tools/testing/selftests/net/busy_poller.c
+++ b/tools/testing/selftests/net/busy_poller.c
@@ -65,15 +65,16 @@ static uint32_t cfg_busy_poll_usecs;
static uint16_t cfg_busy_poll_budget;
static uint8_t cfg_prefer_busy_poll;
-/* IRQ params */
+/* NAPI params */
static uint32_t cfg_defer_hard_irqs;
static uint64_t cfg_gro_flush_timeout;
static uint64_t cfg_irq_suspend_timeout;
+static enum netdev_napi_threaded cfg_napi_threaded_poll = NETDEV_NAPI_THREADED_DISABLE;
static void usage(const char *filepath)
{
error(1, 0,
- "Usage: %s -p<port> -b<addr> -m<max_events> -u<busy_poll_usecs> -P<prefer_busy_poll> -g<busy_poll_budget> -o<outfile> -d<defer_hard_irqs> -r<gro_flush_timeout> -s<irq_suspend_timeout> -i<ifindex>",
+ "Usage: %s -p<port> -b<addr> -m<max_events> -u<busy_poll_usecs> -P<prefer_busy_poll> -g<busy_poll_budget> -o<outfile> -d<defer_hard_irqs> -r<gro_flush_timeout> -s<irq_suspend_timeout> -t<napi_threaded_poll> -i<ifindex>",
filepath);
}
@@ -86,7 +87,7 @@ static void parse_opts(int argc, char **argv)
if (argc <= 1)
usage(argv[0]);
- while ((c = getopt(argc, argv, "p:m:b:u:P:g:o:d:r:s:i:")) != -1) {
+ while ((c = getopt(argc, argv, "p:m:b:u:P:g:o:d:r:s:i:t:")) != -1) {
/* most options take integer values, except o and b, so reduce
* code duplication a bit for the common case by calling
* strtoull here and leave bounds checking and casting per
@@ -168,6 +169,12 @@ static void parse_opts(int argc, char **argv)
cfg_ifindex = (int)tmp;
break;
+ case 't':
+ if (tmp == ULLONG_MAX || tmp > 2)
+ error(1, ERANGE, "napi threaded poll value must be 0-2");
+
+ cfg_napi_threaded_poll = (enum netdev_napi_threaded)tmp;
+ break;
}
}
@@ -246,6 +253,7 @@ static void setup_queue(void)
cfg_gro_flush_timeout);
netdev_napi_set_req_set_irq_suspend_timeout(set_req,
cfg_irq_suspend_timeout);
+ netdev_napi_set_req_set_threaded(set_req, cfg_napi_threaded_poll);
if (netdev_napi_set(ys, set_req))
error(1, 0, "can't set NAPI params: %s\n", yerr.msg);
--
2.48.1.262.g85cc9f2d1e-goog
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH net-next v2 0/4] Add support to do threaded napi busy poll
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
` (3 preceding siblings ...)
2025-01-23 23:12 ` [PATCH net-next v2 4/4] selftests: Add napi threaded busy poll test in `busy_poller` Samiullah Khawaja
@ 2025-01-24 1:24 ` Jakub Kicinski
2025-01-27 17:06 ` Joe Damato
5 siblings, 0 replies; 8+ messages in thread
From: Jakub Kicinski @ 2025-01-24 1:24 UTC (permalink / raw)
To: Samiullah Khawaja
Cc: David S . Miller , Eric Dumazet, Paolo Abeni, almasrymina, netdev
On Thu, 23 Jan 2025 23:12:32 +0000 Samiullah Khawaja wrote:
> Extend the already existing support of threaded napi poll to do continuous
> busy polling.
## Form letter - net-next-closed
The merge window for v6.14 has begun and we have already posted our pull
request. Therefore net-next is closed for new drivers, features, code
refactoring and optimizations. We are currently accepting bug fixes only.
Please repost when net-next reopens after Feb 3rd.
RFC patches sent for review only are obviously welcome at any time.
See: https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#development-cycle
--
pw-bot: defer
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling
2025-01-23 23:12 ` [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling Samiullah Khawaja
@ 2025-01-24 13:18 ` kernel test robot
0 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2025-01-24 13:18 UTC (permalink / raw)
To: Samiullah Khawaja, Jakub Kicinski, David S . Miller, Eric Dumazet,
Paolo Abeni, almasrymina
Cc: oe-kbuild-all, netdev, skhawaja
Hi Samiullah,
kernel test robot noticed the following build warnings:
[auto build test WARNING on net-next/main]
url: https://github.com/intel-lab-lkp/linux/commits/Samiullah-Khawaja/Add-support-to-set-napi-threaded-for-individual-napi/20250124-071412
base: net-next/main
patch link: https://lore.kernel.org/r/20250123231236.2657321-4-skhawaja%40google.com
patch subject: [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling
config: loongarch-allyesconfig (https://download.01.org/0day-ci/archive/20250124/202501242114.hSuOcqsi-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250124/202501242114.hSuOcqsi-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501242114.hSuOcqsi-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/net/ethernet/renesas/ravb_main.c: In function 'ravb_probe':
>> drivers/net/ethernet/renesas/ravb_main.c:3078:48: warning: implicit conversion from 'enum <anonymous>' to 'enum netdev_napi_threaded' [-Wenum-conversion]
3078 | dev_set_threaded(ndev, true);
| ^~~~
--
drivers/net/ethernet/mellanox/mlxsw/pci.c: In function 'mlxsw_pci_napi_devs_init':
>> drivers/net/ethernet/mellanox/mlxsw/pci.c:159:50: warning: implicit conversion from 'enum <anonymous>' to 'enum netdev_napi_threaded' [-Wenum-conversion]
159 | dev_set_threaded(mlxsw_pci->napi_dev_rx, true);
| ^~~~
--
drivers/net/wireless/ath/ath10k/snoc.c: In function 'ath10k_snoc_hif_start':
>> drivers/net/wireless/ath/ath10k/snoc.c:938:40: warning: implicit conversion from 'enum <anonymous>' to 'enum netdev_napi_threaded' [-Wenum-conversion]
938 | dev_set_threaded(ar->napi_dev, true);
| ^~~~
vim +3078 drivers/net/ethernet/renesas/ravb_main.c
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2901
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2902 static int ravb_probe(struct platform_device *pdev)
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2903 {
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2904 struct device_node *np = pdev->dev.of_node;
ebb091461a9e14 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2905 const struct ravb_hw_info *info;
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2906 struct reset_control *rstc;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2907 struct ravb_private *priv;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2908 struct net_device *ndev;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2909 struct resource *res;
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2910 int error, q;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2911
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2912 if (!np) {
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2913 dev_err(&pdev->dev,
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2914 "this driver is required to be instantiated from device tree\n");
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2915 return -EINVAL;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2916 }
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2917
b1768e3dc47792 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2918 rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2919 if (IS_ERR(rstc))
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2920 return dev_err_probe(&pdev->dev, PTR_ERR(rstc),
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2921 "failed to get cpg reset\n");
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2922
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2923 ndev = alloc_etherdev_mqs(sizeof(struct ravb_private),
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2924 NUM_TX_QUEUE, NUM_RX_QUEUE);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2925 if (!ndev)
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2926 return -ENOMEM;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2927
8912ed25daf6fc drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2928 info = of_device_get_match_data(&pdev->dev);
8912ed25daf6fc drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2929
8912ed25daf6fc drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2930 ndev->features = info->net_features;
8912ed25daf6fc drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2931 ndev->hw_features = info->net_hw_features;
546875ccba938b drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-10-15 2932 ndev->vlan_features = info->vlan_features;
4d86d381862714 drivers/net/ethernet/renesas/ravb_main.c Simon Horman 2017-10-04 2933
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 2934 error = reset_control_deassert(rstc);
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 2935 if (error)
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 2936 goto out_free_netdev;
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 2937
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2938 SET_NETDEV_DEV(ndev, &pdev->dev);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2939
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2940 priv = netdev_priv(ndev);
ebb091461a9e14 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-18 2941 priv->info = info;
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 2942 priv->rstc = rstc;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2943 priv->ndev = ndev;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2944 priv->pdev = pdev;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2945 priv->num_tx_ring[RAVB_BE] = BE_TX_RING_SIZE;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2946 priv->num_rx_ring[RAVB_BE] = BE_RX_RING_SIZE;
1091da579d7ccd drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-12 2947 if (info->nc_queues) {
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2948 priv->num_tx_ring[RAVB_NC] = NC_TX_RING_SIZE;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2949 priv->num_rx_ring[RAVB_NC] = NC_RX_RING_SIZE;
a92f4f0662bf2c drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-01 2950 }
a92f4f0662bf2c drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-01 2951
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2952 error = ravb_setup_irqs(priv);
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2953 if (error)
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2954 goto out_reset_assert;
32f012b8c01ca9 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2955
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2956 priv->clk = devm_clk_get(&pdev->dev, NULL);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2957 if (IS_ERR(priv->clk)) {
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2958 error = PTR_ERR(priv->clk);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2959 goto out_reset_assert;
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2960 }
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2961
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2962 if (info->gptp_ref_clk) {
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2963 priv->gptp_clk = devm_clk_get(&pdev->dev, "gptp");
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2964 if (IS_ERR(priv->gptp_clk)) {
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2965 error = PTR_ERR(priv->gptp_clk);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2966 goto out_reset_assert;
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2967 }
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2968 }
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2969
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2970 priv->refclk = devm_clk_get_optional(&pdev->dev, "refclk");
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2971 if (IS_ERR(priv->refclk)) {
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2972 error = PTR_ERR(priv->refclk);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2973 goto out_reset_assert;
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2974 }
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2975 clk_prepare(priv->refclk);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2976
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2977 platform_set_drvdata(pdev, ndev);
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 2978 pm_runtime_set_autosuspend_delay(&pdev->dev, 100);
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 2979 pm_runtime_use_autosuspend(&pdev->dev);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2980 pm_runtime_enable(&pdev->dev);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2981 error = pm_runtime_resume_and_get(&pdev->dev);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2982 if (error < 0)
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2983 goto out_rpm_disable;
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2984
e89a2cdb1cca51 drivers/net/ethernet/renesas/ravb_main.c Yang Yingliang 2021-06-09 2985 priv->addr = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2986 if (IS_ERR(priv->addr)) {
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2987 error = PTR_ERR(priv->addr);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2988 goto out_rpm_put;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2989 }
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2990
e89a2cdb1cca51 drivers/net/ethernet/renesas/ravb_main.c Yang Yingliang 2021-06-09 2991 /* The Ether-specific entries in the device structure. */
e89a2cdb1cca51 drivers/net/ethernet/renesas/ravb_main.c Yang Yingliang 2021-06-09 2992 ndev->base_addr = res->start;
e89a2cdb1cca51 drivers/net/ethernet/renesas/ravb_main.c Yang Yingliang 2021-06-09 2993
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2994 spin_lock_init(&priv->lock);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2995 INIT_WORK(&priv->work, ravb_tx_timeout_work);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 2996
0c65b2b90d13c1 drivers/net/ethernet/renesas/ravb_main.c Andrew Lunn 2019-11-04 2997 error = of_get_phy_mode(np, &priv->phy_interface);
0c65b2b90d13c1 drivers/net/ethernet/renesas/ravb_main.c Andrew Lunn 2019-11-04 2998 if (error && error != -ENODEV)
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 2999 goto out_rpm_put;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3000
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3001 priv->no_avb_link = of_property_read_bool(np, "renesas,no-ether-link");
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3002 priv->avb_link_active_low =
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3003 of_property_read_bool(np, "renesas,ether-link-active-low");
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3004
1d63864299cafa drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-09-18 3005 ndev->max_mtu = info->tx_max_frame_size -
e82700b8662ce5 drivers/net/ethernet/renesas/ravb_main.c Niklas Söderlund 2024-03-04 3006 (ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN);
75efa06f457bbe drivers/net/ethernet/renesas/ravb_main.c Niklas Söderlund 2018-02-16 3007 ndev->min_mtu = ETH_MIN_MTU;
75efa06f457bbe drivers/net/ethernet/renesas/ravb_main.c Niklas Söderlund 2018-02-16 3008
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3009 /* FIXME: R-Car Gen2 has 4byte alignment restriction for tx buffer
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3010 * Use two descriptor to handle such situation. First descriptor to
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3011 * handle aligned data buffer and second descriptor to handle the
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3012 * overflow data because of alignment.
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3013 */
c81d894226b944 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3014 priv->num_tx_desc = info->aligned_tx ? 2 : 1;
f543305da9b5a5 drivers/net/ethernet/renesas/ravb_main.c Kazuya Mizuguchi 2018-09-19 3015
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3016 /* Set function */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3017 ndev->netdev_ops = &ravb_netdev_ops;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3018 ndev->ethtool_ops = &ravb_ethtool_ops;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3019
f384ab481cab6a drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3020 error = ravb_compute_gti(ndev);
b3d39a8805c510 drivers/net/ethernet/renesas/ravb_main.c Simon Horman 2015-11-20 3021 if (error)
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3022 goto out_rpm_put;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3023
a6f51f2efa742d drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-10-01 3024 ravb_parse_delay_mode(np, ndev);
61fccb2d6274f7 drivers/net/ethernet/renesas/ravb_main.c Kazuya Mizuguchi 2017-01-27 3025
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3026 /* Allocate descriptor base address table */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3027 priv->desc_bat_size = sizeof(struct ravb_desc) * DBAT_ENTRY_NUM;
e2dbb33ad9545d drivers/net/ethernet/renesas/ravb_main.c Kazuya Mizuguchi 2015-09-30 3028 priv->desc_bat = dma_alloc_coherent(ndev->dev.parent, priv->desc_bat_size,
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3029 &priv->desc_bat_dma, GFP_KERNEL);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3030 if (!priv->desc_bat) {
c451113291c193 drivers/net/ethernet/renesas/ravb_main.c Simon Horman 2015-11-02 3031 dev_err(&pdev->dev,
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3032 "Cannot allocate desc base address table (size %d bytes)\n",
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3033 priv->desc_bat_size);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3034 error = -ENOMEM;
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3035 goto out_rpm_put;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3036 }
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3037 for (q = RAVB_BE; q < DBAT_ENTRY_NUM; q++)
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3038 priv->desc_bat[q].die_dt = DT_EOS;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3039
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3040 /* Initialise HW timestamp list */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3041 INIT_LIST_HEAD(&priv->ts_skb_list);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3042
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3043 /* Debug message level */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3044 priv->msg_enable = RAVB_DEF_MSG_ENABLE;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3045
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3046 /* Set config mode as this is needed for PHY initialization. */
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3047 error = ravb_set_opmode(ndev, CCC_OPC_CONFIG);
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3048 if (error)
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3049 goto out_rpm_put;
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3050
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3051 /* Read and set MAC address */
83216e3988cd19 drivers/net/ethernet/renesas/ravb_main.c Michael Walle 2021-04-12 3052 ravb_read_mac_address(np, ndev);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3053 if (!is_valid_ether_addr(ndev->dev_addr)) {
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3054 dev_warn(&pdev->dev,
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3055 "no valid MAC address supplied, using a random one\n");
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3056 eth_hw_addr_random(ndev);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3057 }
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3058
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3059 /* MDIO bus init */
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3060 error = ravb_mdio_init(priv);
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3061 if (error) {
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3062 dev_err(&pdev->dev, "failed to initialize MDIO\n");
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3063 goto out_reset_mode;
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3064 }
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3065
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3066 /* Undo previous switch to config opmode. */
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3067 error = ravb_set_opmode(ndev, CCC_OPC_RESET);
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3068 if (error)
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3069 goto out_mdio_release;
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3070
b48b89f9c189d2 drivers/net/ethernet/renesas/ravb_main.c Jakub Kicinski 2022-09-27 3071 netif_napi_add(ndev, &priv->napi[RAVB_BE], ravb_poll);
1091da579d7ccd drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-12 3072 if (info->nc_queues)
b48b89f9c189d2 drivers/net/ethernet/renesas/ravb_main.c Jakub Kicinski 2022-09-27 3073 netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3074
65c482bc226ab2 drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 3075 if (info->coalesce_irqs) {
7b39c1814ce3bc drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 3076 netdev_sw_irq_coalesce_default_on(ndev);
65c482bc226ab2 drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 3077 if (num_present_cpus() == 1)
65c482bc226ab2 drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 @3078 dev_set_threaded(ndev, true);
65c482bc226ab2 drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 3079 }
7b39c1814ce3bc drivers/net/ethernet/renesas/ravb_main.c Paul Barker 2024-06-04 3080
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3081 /* Network device register */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3082 error = register_netdev(ndev);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3083 if (error)
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3084 goto out_napi_del;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3085
3e3d647715d401 drivers/net/ethernet/renesas/ravb_main.c Niklas Söderlund 2017-08-01 3086 device_set_wakeup_capable(&pdev->dev, 1);
3e3d647715d401 drivers/net/ethernet/renesas/ravb_main.c Niklas Söderlund 2017-08-01 3087
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3088 /* Print device information */
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3089 netdev_info(ndev, "Base address at %#x, %pM, IRQ %d.\n",
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3090 (u32)ndev->base_addr, ndev->dev_addr, ndev->irq);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3091
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 3092 pm_runtime_mark_last_busy(&pdev->dev);
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 3093 pm_runtime_put_autosuspend(&pdev->dev);
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 3094
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3095 return 0;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3096
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3097 out_napi_del:
1091da579d7ccd drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-12 3098 if (info->nc_queues)
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3099 netif_napi_del(&priv->napi[RAVB_NC]);
a92f4f0662bf2c drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-10-01 3100
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3101 netif_napi_del(&priv->napi[RAVB_BE]);
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3102 out_mdio_release:
77972b55fb9d35 drivers/net/ethernet/renesas/ravb_main.c Geert Uytterhoeven 2020-09-22 3103 ravb_mdio_release(priv);
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3104 out_reset_mode:
76fd52c1007785 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3105 ravb_set_opmode(ndev, CCC_OPC_RESET);
e2dbb33ad9545d drivers/net/ethernet/renesas/ravb_main.c Kazuya Mizuguchi 2015-09-30 3106 dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3107 priv->desc_bat_dma);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3108 out_rpm_put:
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3109 pm_runtime_put(&pdev->dev);
88b74831faaee4 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 3110 out_rpm_disable:
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3111 pm_runtime_disable(&pdev->dev);
48f894ab07c444 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-14 3112 pm_runtime_dont_use_autosuspend(&pdev->dev);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3113 clk_unprepare(priv->refclk);
a654f6e875b753 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2024-02-02 3114 out_reset_assert:
0d13a1a464a023 drivers/net/ethernet/renesas/ravb_main.c Biju Das 2021-08-25 3115 reset_control_assert(rstc);
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 3116 out_free_netdev:
d8eb6ea4b302e7 drivers/net/ethernet/renesas/ravb_main.c Claudiu Beznea 2023-11-28 3117 free_netdev(ndev);
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3118 return error;
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3119 }
c156633f135326 drivers/net/ethernet/renesas/ravb.c Sergei Shtylyov 2015-06-11 3120
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH net-next v2 0/4] Add support to do threaded napi busy poll
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
` (4 preceding siblings ...)
2025-01-24 1:24 ` [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Jakub Kicinski
@ 2025-01-27 17:06 ` Joe Damato
5 siblings, 0 replies; 8+ messages in thread
From: Joe Damato @ 2025-01-27 17:06 UTC (permalink / raw)
To: Samiullah Khawaja
Cc: Jakub Kicinski, David S. Miller, Eric Dumazet, Paolo Abeni,
almasrymina, netdev, mkarsten
On Thu, Jan 23, 2025 at 11:12:32PM +0000, Samiullah Khawaja wrote:
> Extend the already existing support of threaded napi poll to do continuous
> busy polling.
>
> This is used for doing continuous polling of napi to fetch descriptors from
> backing RX/TX queues for low latency applications. Allow enabling of threaded
> busypoll using netlink so this can be enabled on a set of dedicated napis for
> low latency applications.
>
> It allows enabling NAPI busy poll for any userspace application
> independent of the userspace API being used for packet and event
> processing (epoll, io_uring, raw socket APIs). Once enabled, the user
> can fetch the PID of the kthread doing NAPI polling and set the
> affinity, priority and scheduler for it depending on the low-latency
> requirements.
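The tuning step described in the quoted cover letter — pin the NAPI polling kthread to a dedicated core and optionally give it a real-time scheduling class — can be sketched with the standard Linux scheduling APIs. This is a hypothetical illustration: how the kthread's PID is discovered (e.g. via the netlink attribute the series exposes, or by spotting the `napi/...` kthread in `ps`), and the choice of CPU and priority, are assumptions, not part of the patches.

```python
import os

def pin_napi_thread(pid, cpu, rt_prio=None):
    """Pin a NAPI polling kthread to one core and optionally make it RT.

    `pid` is assumed to have been obtained out of band (netlink attribute
    or process listing); pid 0 means the calling process, which is handy
    for testing without root.
    """
    # Dedicate a single core to the busy-polling thread.
    os.sched_setaffinity(pid, {cpu})
    if rt_prio is not None:
        # SCHED_FIFO keeps the busy-poll thread from being preempted by
        # normal tasks; this step needs CAP_SYS_NICE (typically root).
        os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(rt_prio))

if __name__ == "__main__":
    # Demonstrate on the current process; a real deployment would pass
    # the NAPI kthread's PID and likely an RT priority as well.
    pin_napi_thread(0, 0)
    print(os.sched_getaffinity(0))  # -> {0}
```

In a real deployment the same calls would be issued against the kthread's PID (or equivalently via `taskset`/`chrt` from the shell), after enabling threaded busy poll on the chosen napis through the netlink interface.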
When you resubmit this after the merge window (or if you resubmit it
as an RFC), would you mind CCing both me (jdamato@fastly.com) and
Martin (mkarsten@uwaterloo.ca)?
We almost missed this revision after commenting on the previous
version, since we weren't included in the CC list.
Both Martin and I read through the cover letter and proposed changes
and have several questions/comments, but given that the thread is
marked as deferred/closed due to the merge window, we'll hold off
on digging in until the next revision is posted.
end of thread, other threads:[~2025-01-27 17:06 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-01-23 23:12 [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 1/4] Add support to set napi threaded for individual napi Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 2/4] net: Create separate gro_flush helper function Samiullah Khawaja
2025-01-23 23:12 ` [PATCH net-next v2 3/4] Extend napi threaded polling to allow kthread based busy polling Samiullah Khawaja
2025-01-24 13:18 ` kernel test robot
2025-01-23 23:12 ` [PATCH net-next v2 4/4] selftests: Add napi threaded busy poll test in `busy_poller` Samiullah Khawaja
2025-01-24 1:24 ` [PATCH net-next v2 0/4] Add support to do threaded napi busy poll Jakub Kicinski
2025-01-27 17:06 ` Joe Damato