* [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
@ 2023-01-12 10:54 Vladimir Oltean
2023-01-12 17:48 ` Alexander H Duyck
2023-01-14 5:40 ` patchwork-bot+netdevbpf
0 siblings, 2 replies; 7+ messages in thread
From: Vladimir Oltean @ 2023-01-12 10:54 UTC (permalink / raw)
To: netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Claudiu Manoil, Y . b . Lu
This lockdep splat says it better than I could:
================================
WARNING: inconsistent lock state
6.2.0-rc2-07010-ga9b9500ffaac-dirty #967 Not tainted
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
kworker/1:3/179 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffff3ec4036ce098 (_xmit_ETHER#2){+.?.}-{3:3}, at: netif_freeze_queues+0x5c/0xc0
{IN-SOFTIRQ-W} state was registered at:
_raw_spin_lock+0x5c/0xc0
sch_direct_xmit+0x148/0x37c
__dev_queue_xmit+0x528/0x111c
ip6_finish_output2+0x5ec/0xb7c
ip6_finish_output+0x240/0x3f0
ip6_output+0x78/0x360
ndisc_send_skb+0x33c/0x85c
ndisc_send_rs+0x54/0x12c
addrconf_rs_timer+0x154/0x260
call_timer_fn+0xb8/0x3a0
__run_timers.part.0+0x214/0x26c
run_timer_softirq+0x3c/0x74
__do_softirq+0x14c/0x5d8
____do_softirq+0x10/0x20
call_on_irq_stack+0x2c/0x5c
do_softirq_own_stack+0x1c/0x30
__irq_exit_rcu+0x168/0x1a0
irq_exit_rcu+0x10/0x40
el1_interrupt+0x38/0x64
irq event stamp: 7825
hardirqs last enabled at (7825): [<ffffdf1f7200cae4>] exit_to_kernel_mode+0x34/0x130
hardirqs last disabled at (7823): [<ffffdf1f708105f0>] __do_softirq+0x550/0x5d8
softirqs last enabled at (7824): [<ffffdf1f7081050c>] __do_softirq+0x46c/0x5d8
softirqs last disabled at (7811): [<ffffdf1f708166e0>] ____do_softirq+0x10/0x20
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(_xmit_ETHER#2);
<Interrupt>
lock(_xmit_ETHER#2);
*** DEADLOCK ***
3 locks held by kworker/1:3/179:
#0: ffff3ec400004748 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1f4/0x6c0
#1: ffff80000a0bbdc8 ((work_completion)(&priv->tx_onestep_tstamp)){+.+.}-{0:0}, at: process_one_work+0x1f4/0x6c0
#2: ffff3ec4036cd438 (&dev->tx_global_lock){+.+.}-{3:3}, at: netif_tx_lock+0x1c/0x34
Workqueue: events enetc_tx_onestep_tstamp
Call trace:
print_usage_bug.part.0+0x208/0x22c
mark_lock+0x7f0/0x8b0
__lock_acquire+0x7c4/0x1ce0
lock_acquire.part.0+0xe0/0x220
lock_acquire+0x68/0x84
_raw_spin_lock+0x5c/0xc0
netif_freeze_queues+0x5c/0xc0
netif_tx_lock+0x24/0x34
enetc_tx_onestep_tstamp+0x20/0x100
process_one_work+0x28c/0x6c0
worker_thread+0x74/0x450
kthread+0x118/0x11c
but I'll say it anyway: the enetc_tx_onestep_tstamp() work item runs in
process context, therefore with softirqs enabled (i.o.w., it can be
interrupted by a softirq). If we hold the netif_tx_lock() when there is
an interrupt, and the NET_TX softirq then gets scheduled, this will take
the netif_tx_lock() a second time and deadlock the kernel.
To solve this, use netif_tx_lock_bh(), which blocks softirqs from
running.
Fixes: 7294380c5211 ("enetc: support PTP Sync packet one-step timestamping")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
drivers/net/ethernet/freescale/enetc/enetc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 5ad0b259e623..0a990d35fe58 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -2288,14 +2288,14 @@ static void enetc_tx_onestep_tstamp(struct work_struct *work)
 
 	priv = container_of(work, struct enetc_ndev_priv, tx_onestep_tstamp);
 
-	netif_tx_lock(priv->ndev);
+	netif_tx_lock_bh(priv->ndev);
 
 	clear_bit_unlock(ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS, &priv->flags);
 	skb = skb_dequeue(&priv->tx_skbs);
 	if (skb)
 		enetc_start_xmit(skb, priv->ndev);
 
-	netif_tx_unlock(priv->ndev);
+	netif_tx_unlock_bh(priv->ndev);
 }
 
 static void enetc_tx_onestep_tstamp_init(struct enetc_ndev_priv *priv)
--
2.34.1
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 10:54 [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp() Vladimir Oltean
@ 2023-01-12 17:48 ` Alexander H Duyck
2023-01-12 18:53 ` Vladimir Oltean
2023-01-14 5:40 ` patchwork-bot+netdevbpf
1 sibling, 1 reply; 7+ messages in thread
From: Alexander H Duyck @ 2023-01-12 17:48 UTC (permalink / raw)
To: Vladimir Oltean, netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Claudiu Manoil, Y . b . Lu
On Thu, 2023-01-12 at 12:54 +0200, Vladimir Oltean wrote:
> This lockdep splat says it better than I could:
>
> ================================
> WARNING: inconsistent lock state
> 6.2.0-rc2-07010-ga9b9500ffaac-dirty #967 Not tainted
> --------------------------------
> inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
> kworker/1:3/179 [HC0[0]:SC0[0]:HE1:SE1] takes:
> ffff3ec4036ce098 (_xmit_ETHER#2){+.?.}-{3:3}, at: netif_freeze_queues+0x5c/0xc0
> {IN-SOFTIRQ-W} state was registered at:
> _raw_spin_lock+0x5c/0xc0
> sch_direct_xmit+0x148/0x37c
> __dev_queue_xmit+0x528/0x111c
> ip6_finish_output2+0x5ec/0xb7c
> ip6_finish_output+0x240/0x3f0
> ip6_output+0x78/0x360
> ndisc_send_skb+0x33c/0x85c
> ndisc_send_rs+0x54/0x12c
> addrconf_rs_timer+0x154/0x260
> call_timer_fn+0xb8/0x3a0
> __run_timers.part.0+0x214/0x26c
> run_timer_softirq+0x3c/0x74
> __do_softirq+0x14c/0x5d8
> ____do_softirq+0x10/0x20
> call_on_irq_stack+0x2c/0x5c
> do_softirq_own_stack+0x1c/0x30
> __irq_exit_rcu+0x168/0x1a0
> irq_exit_rcu+0x10/0x40
> el1_interrupt+0x38/0x64
> irq event stamp: 7825
> hardirqs last enabled at (7825): [<ffffdf1f7200cae4>] exit_to_kernel_mode+0x34/0x130
> hardirqs last disabled at (7823): [<ffffdf1f708105f0>] __do_softirq+0x550/0x5d8
> softirqs last enabled at (7824): [<ffffdf1f7081050c>] __do_softirq+0x46c/0x5d8
> softirqs last disabled at (7811): [<ffffdf1f708166e0>] ____do_softirq+0x10/0x20
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(_xmit_ETHER#2);
> <Interrupt>
> lock(_xmit_ETHER#2);
>
> *** DEADLOCK ***
>
> 3 locks held by kworker/1:3/179:
> #0: ffff3ec400004748 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1f4/0x6c0
> #1: ffff80000a0bbdc8 ((work_completion)(&priv->tx_onestep_tstamp)){+.+.}-{0:0}, at: process_one_work+0x1f4/0x6c0
> #2: ffff3ec4036cd438 (&dev->tx_global_lock){+.+.}-{3:3}, at: netif_tx_lock+0x1c/0x34
>
> Workqueue: events enetc_tx_onestep_tstamp
> Call trace:
> print_usage_bug.part.0+0x208/0x22c
> mark_lock+0x7f0/0x8b0
> __lock_acquire+0x7c4/0x1ce0
> lock_acquire.part.0+0xe0/0x220
> lock_acquire+0x68/0x84
> _raw_spin_lock+0x5c/0xc0
> netif_freeze_queues+0x5c/0xc0
> netif_tx_lock+0x24/0x34
> enetc_tx_onestep_tstamp+0x20/0x100
> process_one_work+0x28c/0x6c0
> worker_thread+0x74/0x450
> kthread+0x118/0x11c
>
> but I'll say it anyway: the enetc_tx_onestep_tstamp() work item runs in
> process context, therefore with softirqs enabled (i.o.w., it can be
> interrupted by a softirq). If we hold the netif_tx_lock() when there is
> an interrupt, and the NET_TX softirq then gets scheduled, this will take
> the netif_tx_lock() a second time and deadlock the kernel.
>
> To solve this, use netif_tx_lock_bh(), which blocks softirqs from
> running.
>
> Fixes: 7294380c5211 ("enetc: support PTP Sync packet one-step timestamping")
> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
> ---
> drivers/net/ethernet/freescale/enetc/enetc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
> index 5ad0b259e623..0a990d35fe58 100644
> --- a/drivers/net/ethernet/freescale/enetc/enetc.c
> +++ b/drivers/net/ethernet/freescale/enetc/enetc.c
> @@ -2288,14 +2288,14 @@ static void enetc_tx_onestep_tstamp(struct work_struct *work)
> 
>  	priv = container_of(work, struct enetc_ndev_priv, tx_onestep_tstamp);
> 
> -	netif_tx_lock(priv->ndev);
> +	netif_tx_lock_bh(priv->ndev);
> 
>  	clear_bit_unlock(ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS, &priv->flags);
>  	skb = skb_dequeue(&priv->tx_skbs);
>  	if (skb)
>  		enetc_start_xmit(skb, priv->ndev);
> 
> -	netif_tx_unlock(priv->ndev);
> +	netif_tx_unlock_bh(priv->ndev);
>  }
> 
>  static void enetc_tx_onestep_tstamp_init(struct enetc_ndev_priv *priv)
Looking at the patch this fixes, I had a question. You have the tx_skbs
queue in the enetc_ndev_priv struct, and from what I can tell it looks
like you support multiple Tx queues. Is there a risk of corrupting the
queue if multiple Tx queues attempt to request the onestep timestamp?
My thought is that you might be better off looking at splitting your
queues up so that they are contained within the enetc_bdr struct. Then
you would only need the individual Tx queue lock instead of having to
take the global Tx queue lock.
Also I am confused. Why do you clear the TSTAMP_IN_PROGRESS flag in
enetc_tx_onestep_tstamp() before checking the state of the queue? It
seems like something you should only be clearing once the queue is
empty.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 17:48 ` Alexander H Duyck
@ 2023-01-12 18:53 ` Vladimir Oltean
2023-01-12 21:29 ` Alexander Duyck
0 siblings, 1 reply; 7+ messages in thread
From: Vladimir Oltean @ 2023-01-12 18:53 UTC (permalink / raw)
To: Alexander H Duyck
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Claudiu Manoil, Y . b . Lu
On Thu, Jan 12, 2023 at 09:48:40AM -0800, Alexander H Duyck wrote:
> Looking at the patch this fixes I had a question. You have the tx_skbs
> in the enet_ndev_priv struct and from what I can tell it looks like you
> support multiple Tx queues. Is there a risk of corrupting the queue if
> multiple Tx queues attempt to request the onestep timestamp?
void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk)
{
	unsigned long flags;

	spin_lock_irqsave(&list->lock, flags);
	__skb_queue_tail(list, newsk);
	spin_unlock_irqrestore(&list->lock, flags);
}
> Also I am confused. Why do you clear the TSTAMP_IN_PROGRESS flag in
> enetc_tx_onestep_tstamp() before checking the state of the queue?
Because when enetc_tx_onestep_tstamp() is called, the one-step
timestamping process is no longer in progress - which is what we need to
know. The resource that needs serialized access is the MAC-wide
ENETC_PM0_SINGLE_STEP register. So from enetc_start_xmit() and until
enetc_clean_tx_ring(), there can only be one one-step Sync message in
flight at a time.
> It seems like something you should only be clearing once the queue is
> empty.
The flag tracks what it says: whether there's a one-step timestamp in
progress. If no TS is in progress and a Sync message must be
timestamped, the flag will be set but the skb will not be queued.
It will be timestamped right away.
The queue is there to ensure that Sync messages sent in a burst are
eventually all sent (and timestamped). Each TX confirmation will
schedule the work item again.
By taking netif_tx_lock[_bh](), enetc_tx_onestep_tstamp() ensures that
it has priority in sending the skbs already queued up in &priv->tx_skbs,
over those coming from ndo_start_xmit -> enetc_xmit(). Not only that,
but if enetc_tx_onestep_tstamp() doesn't clear TSTAMP_IN_PROGRESS before
calling enetc_start_xmit(), this is a PEBKAC, because the skb will end
up being queued right back into &priv->tx_skbs again, rather than ever
getting sent. Keeping the netif_tx_lock() held ensures that the
TSTAMP_IN_PROGRESS bit will remain unset long enough for our own queued skb
to make forward progress in enetc_start_xmit().
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 18:53 ` Vladimir Oltean
@ 2023-01-12 21:29 ` Alexander Duyck
2023-01-12 21:36 ` Vladimir Oltean
0 siblings, 1 reply; 7+ messages in thread
From: Alexander Duyck @ 2023-01-12 21:29 UTC (permalink / raw)
To: Vladimir Oltean
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Claudiu Manoil, Y . b . Lu
On Thu, Jan 12, 2023 at 10:54 AM Vladimir Oltean
<vladimir.oltean@nxp.com> wrote:
>
> On Thu, Jan 12, 2023 at 09:48:40AM -0800, Alexander H Duyck wrote:
> > Looking at the patch this fixes, I had a question. You have the tx_skbs
> > queue in the enetc_ndev_priv struct, and from what I can tell it looks
> > like you support multiple Tx queues. Is there a risk of corrupting the
> > queue if multiple Tx queues attempt to request the onestep timestamp?
>
> void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk)
> {
> 	unsigned long flags;
>
> 	spin_lock_irqsave(&list->lock, flags);
> 	__skb_queue_tail(list, newsk);
> 	spin_unlock_irqrestore(&list->lock, flags);
> }
So yet another layer of locking. As I said, you could spare yourself
some cycles by moving this to a per-queue list rather than a global
one. With that you could use the Tx lock to protect the list instead
of needing both the Tx lock and the queue lock.
> > Also I am confused. Why do you clear the TSTAMP_IN_PROGRESS flag in
> > enetc_tx_onestep_tstamp() before checking the state of the queue?
>
> Because when enetc_tx_onestep_tstamp() is called, the one-step
> timestamping process is no longer in progress - which is what we need to
> know. The resource that needs serialized access is the MAC-wide
> ENETC_PM0_SINGLE_STEP register. So from enetc_start_xmit() and until
> enetc_clean_tx_ring(), there can only be one one-step Sync message in
> flight at a time.
>
> > It seems like something you should only be clearing once the queue is
> > empty.
>
> The flag tracks what it says: whether there's a one-step timestamp in
> progress. If no TS is in progress and a Sync message must be
> timestamped, the flag will be set but the skb will not be queued.
> It will be timestamped right away.
>
> The queue is there to ensure that Sync messages sent in a burst are
> eventually all sent (and timestamped). Each TX confirmation will
> schedule the work item again.
>
> By taking netif_tx_lock[_bh](), enetc_tx_onestep_tstamp() ensures that
> it has priority in sending the skbs already queued up in &priv->tx_skbs,
> over those coming from ndo_start_xmit -> enetc_xmit(). Not only that,
> but if enetc_tx_onestep_tstamp() doesn't clear TSTAMP_IN_PROGRESS before
> calling enetc_start_xmit(), this is a PEBKAC, because the skb will end
> up being queued right back into &priv->tx_skbs again, rather than ever
> getting sent. Keeping the netif_tx_lock() held ensures that the
> TSTAMP_IN_PROGRESS bit will remain unset long enough for our own queued skb
> to make forward progress in enetc_start_xmit().
Yeah, what I realized is that I was looking at the "fixes" patch and
not the current code. I missed the patch "enetc: fix locking for
one-step timestamping packet transfer". It fixes the issue by moving
the test_and_set_bit_lock, but is still dependent on a global lock to
prevent anything else from taking the bit when we attempt a transmit.
So essentially we have to completely disable the Tx path in order to
make sure we don't race against any other Tx thread while we clear and
set the ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS flag.
Not pretty, but it addresses the issue it says it does.
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
One other question I had. How do you handle the event that
enetc_start_xmit returns NETDEV_TX_BUSY or causes the packet to go
down the drop_packet_err path?
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 21:29 ` Alexander Duyck
@ 2023-01-12 21:36 ` Vladimir Oltean
2023-01-12 21:54 ` Alexander Duyck
0 siblings, 1 reply; 7+ messages in thread
From: Vladimir Oltean @ 2023-01-12 21:36 UTC (permalink / raw)
To: Alexander Duyck
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Claudiu Manoil, Y . b . Lu
On Thu, Jan 12, 2023 at 01:29:21PM -0800, Alexander Duyck wrote:
> One other question I had. How do you handle the event that
> enetc_start_xmit returns NETDEV_TX_BUSY or causes the packet to go
> down the drop_packet_err path?
We don't. If enetc_start_xmit() asks the qdisc to requeue the skb via
NETDEV_TX_BUSY, we aren't going to do that, because we aren't the
qdisc. And if the packet just gets dropped without being mapped into
the TX ring, ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS will remain set, with
no possibility of ever becoming unset again.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 21:36 ` Vladimir Oltean
@ 2023-01-12 21:54 ` Alexander Duyck
0 siblings, 0 replies; 7+ messages in thread
From: Alexander Duyck @ 2023-01-12 21:54 UTC (permalink / raw)
To: Vladimir Oltean
Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Claudiu Manoil, Y . b . Lu
On Thu, Jan 12, 2023 at 1:36 PM Vladimir Oltean <vladimir.oltean@nxp.com> wrote:
>
> On Thu, Jan 12, 2023 at 01:29:21PM -0800, Alexander Duyck wrote:
> > One other question I had. How do you handle the event that
> > enetc_start_xmit returns NETDEV_TX_BUSY or causes the packet to go
> > down the drop_packet_err path?
>
> We don't. If enetc_start_xmit() asks the qdisc to requeue the skb via
> NETDEV_TX_BUSY, we aren't going to do that, because we aren't the
> qdisc. And if the packet just gets dropped without being mapped into
> the TX ring, ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS will remain set, with
> no possibility of ever becoming unset again.
That is a separate issue then, right? Just wanted to confirm I wasn't
missing something. I am assuming that leaving it set forever would be
a bad thing.
If NETDEV_TX_BUSY is triggered from the enetc_tx_onestep_tstamp()
function, it is a memory leak, correct?
Also, what mechanism do you have in place to clean out the tx_skbs
queue and clear the flag if the ring is stopped due to something such
as enetc_close() being called? It seems like this is missing some logic
to handle the event that somebody does an "ip link set <iface>
down/up", as the stale packets would be left in the ring, unless I am
missing something else.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
2023-01-12 10:54 [PATCH net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp() Vladimir Oltean
2023-01-12 17:48 ` Alexander H Duyck
@ 2023-01-14 5:40 ` patchwork-bot+netdevbpf
1 sibling, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2023-01-14 5:40 UTC (permalink / raw)
To: Vladimir Oltean
Cc: netdev, davem, edumazet, kuba, pabeni, claudiu.manoil, yangbo.lu
Hello:
This patch was applied to netdev/net.git (master)
by Jakub Kicinski <kuba@kernel.org>:
On Thu, 12 Jan 2023 12:54:40 +0200 you wrote:
> This lockdep splat says it better than I could:
>
> ================================
> WARNING: inconsistent lock state
> 6.2.0-rc2-07010-ga9b9500ffaac-dirty #967 Not tainted
> --------------------------------
> inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
> kworker/1:3/179 [HC0[0]:SC0[0]:HE1:SE1] takes:
> ffff3ec4036ce098 (_xmit_ETHER#2){+.?.}-{3:3}, at: netif_freeze_queues+0x5c/0xc0
> {IN-SOFTIRQ-W} state was registered at:
> _raw_spin_lock+0x5c/0xc0
> sch_direct_xmit+0x148/0x37c
> __dev_queue_xmit+0x528/0x111c
> ip6_finish_output2+0x5ec/0xb7c
> ip6_finish_output+0x240/0x3f0
> ip6_output+0x78/0x360
> ndisc_send_skb+0x33c/0x85c
> ndisc_send_rs+0x54/0x12c
> addrconf_rs_timer+0x154/0x260
> call_timer_fn+0xb8/0x3a0
> __run_timers.part.0+0x214/0x26c
> run_timer_softirq+0x3c/0x74
> __do_softirq+0x14c/0x5d8
> ____do_softirq+0x10/0x20
> call_on_irq_stack+0x2c/0x5c
> do_softirq_own_stack+0x1c/0x30
> __irq_exit_rcu+0x168/0x1a0
> irq_exit_rcu+0x10/0x40
> el1_interrupt+0x38/0x64
> irq event stamp: 7825
> hardirqs last enabled at (7825): [<ffffdf1f7200cae4>] exit_to_kernel_mode+0x34/0x130
> hardirqs last disabled at (7823): [<ffffdf1f708105f0>] __do_softirq+0x550/0x5d8
> softirqs last enabled at (7824): [<ffffdf1f7081050c>] __do_softirq+0x46c/0x5d8
> softirqs last disabled at (7811): [<ffffdf1f708166e0>] ____do_softirq+0x10/0x20
>
> [...]
Here is the summary with links:
- [net] net: enetc: avoid deadlock in enetc_tx_onestep_tstamp()
https://git.kernel.org/netdev/net/c/3c463721a73b
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 7+ messages in thread