From: Jason Gunthorpe <jgg@ziepe.ca>
To: Praveen Kannoju <praveen.kannoju@oracle.com>
Cc: "saeedm@nvidia.com" <saeedm@nvidia.com>,
"leon@kernel.org" <leon@kernel.org>,
"tariqt@nvidia.com" <tariqt@nvidia.com>,
"mbloch@nvidia.com" <mbloch@nvidia.com>,
"andrew+netdev@lunn.ch" <andrew+netdev@lunn.ch>,
"davem@davemloft.net" <davem@davemloft.net>,
"edumazet@google.com" <edumazet@google.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"pabeni@redhat.com" <pabeni@redhat.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Rama Nichanamatlu <rama.nichanamatlu@oracle.com>,
Manjunath Patil <manjunath.b.patil@oracle.com>,
Anand Khoje <anand.a.khoje@oracle.com>
Subject: Re: [PATCH] net/mlx5: poll mlx5 eq during irq migration
Date: Thu, 5 Mar 2026 20:32:17 -0400
Message-ID: <20260306003217.GB1687929@ziepe.ca>
In-Reply-To: <CH3PR10MB7704DD1E6B9A671796FC6B528C7DA@CH3PR10MB7704.namprd10.prod.outlook.com>
On Thu, Mar 05, 2026 at 05:08:52PM +0000, Praveen Kannoju wrote:
> Regardless of the underlying causes, which may include IRQ loss
> or EQ re-arming failure, the TX queue becomes stuck, and the
> timeout handler is only triggered once the queue is declared
> full. In scenarios where only specialized packets, such as
> heartbeat packets, are sent through the queue, it takes
> significantly longer for the queue to fill and be identified as
> stuck. A proven solution for this issue is polling the EQ
> immediately after the corresponding IRQ migration, which allows
> for earlier recovery and prevents the transmission queue from
> becoming stuck.
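For reference, a minimal sketch of the kind of hook the quoted paragraph
describes: running one EQ poll right after an IRQ affinity change.
irq_set_affinity_notifier() and struct irq_affinity_notify are the real
kernel interfaces for being notified of affinity changes; the my_eq
container and the mlx5_poll_eq_once() helper below are hypothetical
stand-ins, not the actual mlx5 code or the patch under discussion.

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/kref.h>

struct my_eq {
	struct irq_affinity_notify affinity_notify;
	/* ... rest of the driver's EQ state ... */
};

/* Hypothetical helper: drain any EQEs already queued and re-arm the EQ. */
static void mlx5_poll_eq_once(struct my_eq *eq)
{
	/* process pending EQEs, then re-arm */
}

/* Runs from a workqueue after the IRQ's affinity mask has changed. */
static void my_eq_irq_affinity_notify(struct irq_affinity_notify *notify,
				      const cpumask_t *mask)
{
	struct my_eq *eq = container_of(notify, struct my_eq,
					affinity_notify);

	/*
	 * If an event fired while the vector was being moved between
	 * CPUs, pick it up now instead of waiting for the next EQE
	 * (or for the TX queue to fill and trigger the timeout handler).
	 */
	mlx5_poll_eq_once(eq);
}

static void my_eq_irq_affinity_release(struct kref *ref)
{
	/* nothing to free in this sketch */
}

static int my_eq_set_affinity_notifier(struct my_eq *eq, unsigned int irq)
{
	eq->affinity_notify.notify  = my_eq_irq_affinity_notify;
	eq->affinity_notify.release = my_eq_irq_affinity_release;

	return irq_set_affinity_notifier(irq, &eq->affinity_notify);
}

Whether the actual patch hooks affinity changes this way or polls from the
driver's own IRQ-rebalance path is not shown here; the point is only that
the poll runs once, immediately after the migration.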
I understand all of this, but for upstreaming we want the root cause,
not bodges like this.
There is no reason to do what this patch does; the IRQ system is not
supposed to lose interrupts on migration. If that is happening on
your systems, it is a serious bug that must be root caused.
Jason
Thread overview: 13+ messages
2026-03-04 16:17 [PATCH] net/mlx5: poll mlx5 eq during irq migration Praveen Kumar Kannoju
2026-03-04 20:11 ` Jason Gunthorpe
[not found] ` <CH3PR10MB7704DD1E6B9A671796FC6B528C7DA@CH3PR10MB7704.namprd10.prod.outlook.com>
2026-03-06 0:32 ` Jason Gunthorpe [this message]
2026-03-06 14:19 ` Praveen Kannoju
2026-03-06 23:10 ` Jason Gunthorpe
2026-03-07 5:43 ` Praveen Kannoju
2026-03-12 0:35 ` Jason Gunthorpe
2026-03-20 16:31 ` Praveen Kannoju
2026-03-05 4:17 ` kernel test robot
2026-03-05 8:45 ` kernel test robot
2026-03-05 9:29 ` kernel test robot
2026-03-05 11:16 ` kernel test robot
2026-03-05 13:15 ` Praveen Kannoju