* [PATCH net v1 0/2] avoid compiler and IQ/OQ reordering
@ 2025-12-19 7:29 Vimlesh Kumar
2025-12-19 7:29 ` [PATCH net v1 1/2] octeon_ep: " Vimlesh Kumar
2025-12-19 7:29 ` [PATCH net v1 2/2] octeon_ep_vf: " Vimlesh Kumar
0 siblings, 2 replies; 5+ messages in thread
From: Vimlesh Kumar @ 2025-12-19 7:29 UTC (permalink / raw)
To: netdev, linux-kernel; +Cc: sedara, srasheed, hgani, Vimlesh Kumar
Use READ_ONCE()/WRITE_ONCE() for IO queue Tx/Rx state so the compiler
cannot cache, tear or reorder those accesses. Ensure the IO queue
OUT/IN_CNT registers are flushed by reading them back after writing.
Move the IQ/OQ IN/OUT_CNT updates ahead of NAPI completion, and
replace napi_complete() with napi_complete_done().
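For reference, the pattern applied in both drivers looks roughly like
the sketch below (structure, field and function names are simplified
and hypothetical; the real changes are in the patches that follow):

#include <linux/compiler.h>
#include <linux/io.h>
#include <linux/netdevice.h>

/* Simplified stand-in for the per-queue state touched by this series. */
struct oq_sketch {
	u32 pkts_pending;
	u32 last_pkt_count;
	void __iomem *pkts_sent_reg;
};

static void oq_update_cnt_sketch(struct oq_sketch *oq)
{
	/* _ONCE() accessors: each evaluation is a single, untorn
	 * load/store that the compiler will not cache, tear or
	 * reorder against other volatile accesses.
	 */
	u32 pending = READ_ONCE(oq->pkts_pending);
	u32 last = READ_ONCE(oq->last_pkt_count);

	if (last - pending) {
		/* Tell hardware how many packets the host consumed ... */
		writel(last - pending, oq->pkts_sent_reg);
		/* ... and read back so the posted MMIO write is flushed
		 * before the interrupt is re-armed.
		 */
		readl(oq->pkts_sent_reg);
		WRITE_ONCE(oq->last_pkt_count, pending);
	}
}

static void enable_ioq_irq_sketch(struct oq_sketch *oq)
{
	/* The real drivers write the OQ/IQ RESEND bits here; elided. */
}

static int napi_poll_tail_sketch(struct napi_struct *napi,
				 struct oq_sketch *oq, int rx_done)
{
	/* Update IN/OUT_CNT before completing NAPI, report the work
	 * actually done, and only then re-enable the queue interrupt.
	 */
	oq_update_cnt_sketch(oq);
	if (napi_complete_done(napi, rx_done))
		enable_ioq_irq_sketch(oq);
	return rx_done;
}
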
Vimlesh Kumar (2):
octeon_ep: avoid compiler and IQ/OQ reordering
octeon_ep_vf: avoid compiler and IQ/OQ reordering
.../ethernet/marvell/octeon_ep/octep_main.c | 40 +++++++++++++------
.../net/ethernet/marvell/octeon_ep/octep_rx.c | 27 +++++++++----
.../marvell/octeon_ep_vf/octep_vf_main.c | 38 +++++++++++++-----
.../marvell/octeon_ep_vf/octep_vf_rx.c | 28 +++++++++----
4 files changed, 95 insertions(+), 38 deletions(-)
--
2.47.0
* [PATCH net v1 1/2] octeon_ep: avoid compiler and IQ/OQ reordering
2025-12-19 7:29 [PATCH net v1 0/2] avoid compiler and IQ/OQ reordering Vimlesh Kumar
@ 2025-12-19 7:29 ` Vimlesh Kumar
2025-12-29 16:15 ` Paolo Abeni
2025-12-19 7:29 ` [PATCH net v1 2/2] octeon_ep_vf: " Vimlesh Kumar
1 sibling, 1 reply; 5+ messages in thread
From: Vimlesh Kumar @ 2025-12-19 7:29 UTC (permalink / raw)
To: netdev, linux-kernel
Cc: sedara, srasheed, hgani, Vimlesh Kumar, Veerasenareddy Burru,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Satananda Burla, Abhijit Ayarekar
Use READ_ONCE()/WRITE_ONCE() for IO queue Tx/Rx variable accesses so
the compiler cannot cache, tear or reorder them.
Additionally, ensure the IO queue OUT/IN_CNT registers are flushed by
performing a read-back after writing.
Move the IQ/OQ IN/OUT_CNT updates ahead of NAPI completion, and
replace napi_complete() with napi_complete_done().
Fixes: 37d79d0596062 ("octeon_ep: add Tx/Rx processing and interrupt support")
Signed-off-by: Sathesh Edara <sedara@marvell.com>
Signed-off-by: Shinas Rasheed <srasheed@marvell.com>
Signed-off-by: Vimlesh Kumar <vimleshk@marvell.com>
---
.../ethernet/marvell/octeon_ep/octep_main.c | 40 +++++++++++++------
.../net/ethernet/marvell/octeon_ep/octep_rx.c | 27 +++++++++----
2 files changed, 47 insertions(+), 20 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
index bcea3fc26a8c..ccfb248d0914 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
@@ -555,28 +555,43 @@ static void octep_clean_irqs(struct octep_device *oct)
}
/**
- * octep_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
+ * octep_update_pkt() - Update IQ/OQ IN/OUT_CNT registers.
*
* @iq: Octeon Tx queue data structure.
* @oq: Octeon Rx queue data structure.
*/
-static void octep_enable_ioq_irq(struct octep_iq *iq, struct octep_oq *oq)
+static void octep_update_pkt(struct octep_iq *iq, struct octep_oq *oq)
{
- u32 pkts_pend = oq->pkts_pending;
+ u32 pkts_pend = READ_ONCE(oq->pkts_pending);
+ u32 last_pkt_count = READ_ONCE(oq->last_pkt_count);
+ u32 pkts_processed = READ_ONCE(iq->pkts_processed);
+ u32 pkt_in_done = READ_ONCE(iq->pkt_in_done);
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
- if (iq->pkts_processed) {
- writel(iq->pkts_processed, iq->inst_cnt_reg);
- iq->pkt_in_done -= iq->pkts_processed;
- iq->pkts_processed = 0;
+ if (pkts_processed) {
+ writel(pkts_processed, iq->inst_cnt_reg);
+ readl(iq->inst_cnt_reg);
+ WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed));
+ WRITE_ONCE(iq->pkts_processed, 0);
}
- if (oq->last_pkt_count - pkts_pend) {
- writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg);
- oq->last_pkt_count = pkts_pend;
+ if (last_pkt_count - pkts_pend) {
+ writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg);
+ readl(oq->pkts_sent_reg);
+ WRITE_ONCE(oq->last_pkt_count, pkts_pend);
}
/* Flush the previous wrties before writing to RESEND bit */
- wmb();
+ smp_wmb();
+}
+
+/**
+ * octep_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
+ *
+ * @iq: Octeon Tx queue data structure.
+ * @oq: Octeon Rx queue data structure.
+ */
+static void octep_enable_ioq_irq(struct octep_iq *iq, struct octep_oq *oq)
+{
writeq(1UL << OCTEP_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg);
writeq(1UL << OCTEP_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg);
}
@@ -602,7 +617,8 @@ static int octep_napi_poll(struct napi_struct *napi, int budget)
if (tx_pending || rx_done >= budget)
return budget;
- napi_complete(napi);
+ octep_update_pkt(ioq_vector->iq, ioq_vector->oq);
+ napi_complete_done(napi, rx_done);
octep_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq);
return rx_done;
}
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
index 82b6b19e76b4..11f1b45d0f92 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
@@ -318,10 +318,16 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
struct octep_oq *oq)
{
u32 pkt_count, new_pkts;
+ u32 last_pkt_count, pkts_pending;
pkt_count = readl(oq->pkts_sent_reg);
- new_pkts = pkt_count - oq->last_pkt_count;
+ last_pkt_count = READ_ONCE(oq->last_pkt_count);
+ new_pkts = pkt_count - last_pkt_count;
+ if (pkt_count < last_pkt_count) {
+ dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n",
+ oq->q_no, pkt_count, last_pkt_count);
+ }
/* Clear the hardware packets counter register if the rx queue is
* being processed continuously with-in a single interrupt and
* reached half its max value.
@@ -332,8 +338,9 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
pkt_count = readl(oq->pkts_sent_reg);
new_pkts += pkt_count;
}
- oq->last_pkt_count = pkt_count;
- oq->pkts_pending += new_pkts;
+ WRITE_ONCE(oq->last_pkt_count, pkt_count);
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts));
return new_pkts;
}
@@ -408,7 +415,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
u16 rx_ol_flags;
u32 read_idx;
- read_idx = oq->host_read_idx;
+ read_idx = READ_ONCE(oq->host_read_idx);
rx_bytes = 0;
desc_used = 0;
for (pkt = 0; pkt < pkts_to_process; pkt++) {
@@ -493,7 +500,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
napi_gro_receive(oq->napi, skb);
}
- oq->host_read_idx = read_idx;
+ WRITE_ONCE(oq->host_read_idx, read_idx);
oq->refill_count += desc_used;
oq->stats->packets += pkt;
oq->stats->bytes += rx_bytes;
@@ -516,22 +523,26 @@ int octep_oq_process_rx(struct octep_oq *oq, int budget)
{
u32 pkts_available, pkts_processed, total_pkts_processed;
struct octep_device *oct = oq->octep_dev;
+ u32 pkts_pending;
pkts_available = 0;
pkts_processed = 0;
total_pkts_processed = 0;
while (total_pkts_processed < budget) {
/* update pending count only when current one exhausted */
- if (oq->pkts_pending == 0)
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ if (pkts_pending == 0)
octep_oq_check_hw_for_pkts(oct, oq);
+ pkts_pending = READ_ONCE(oq->pkts_pending);
pkts_available = min(budget - total_pkts_processed,
- oq->pkts_pending);
+ pkts_pending);
if (!pkts_available)
break;
pkts_processed = __octep_oq_process_rx(oct, oq,
pkts_available);
- oq->pkts_pending -= pkts_processed;
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed));
total_pkts_processed += pkts_processed;
}
--
2.47.0
* [PATCH net v1 2/2] octeon_ep_vf: avoid compiler and IQ/OQ reordering
2025-12-19 7:29 [PATCH net v1 0/2] avoid compiler and IQ/OQ reordering Vimlesh Kumar
2025-12-19 7:29 ` [PATCH net v1 1/2] octeon_ep: " Vimlesh Kumar
@ 2025-12-19 7:29 ` Vimlesh Kumar
2025-12-29 16:18 ` Paolo Abeni
1 sibling, 1 reply; 5+ messages in thread
From: Vimlesh Kumar @ 2025-12-19 7:29 UTC (permalink / raw)
To: netdev, linux-kernel
Cc: sedara, srasheed, hgani, Vimlesh Kumar, Veerasenareddy Burru,
Satananda Burla, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni
Use READ_ONCE()/WRITE_ONCE() for IO queue Tx/Rx variable accesses so
the compiler cannot cache, tear or reorder them.
Additionally, ensure the IO queue OUT/IN_CNT registers are flushed by
performing a read-back after writing.
Move the IQ/OQ IN/OUT_CNT updates ahead of NAPI completion, and
replace napi_complete() with napi_complete_done().
Fixes: 1cd3b407977c3 ("octeon_ep_vf: add Tx/Rx processing and interrupt support")
Signed-off-by: Sathesh Edara <sedara@marvell.com>
Signed-off-by: Shinas Rasheed <srasheed@marvell.com>
Signed-off-by: Vimlesh Kumar <vimleshk@marvell.com>
---
.../marvell/octeon_ep_vf/octep_vf_main.c | 38 ++++++++++++++-----
.../marvell/octeon_ep_vf/octep_vf_rx.c | 28 ++++++++++----
2 files changed, 48 insertions(+), 18 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
index 420c3f4cf741..27a5fc38bccb 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
@@ -286,28 +286,45 @@ static void octep_vf_clean_irqs(struct octep_vf_device *oct)
}
/**
- * octep_vf_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
+ * octep_vf_update_pkt() - Update IQ/OQ IN/OUT_CNT registers.
*
* @iq: Octeon Tx queue data structure.
* @oq: Octeon Rx queue data structure.
*/
-static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq, struct octep_vf_oq *oq)
+
+static void octep_vf_update_pkt(struct octep_vf_iq *iq, struct octep_vf_oq *oq)
{
- u32 pkts_pend = oq->pkts_pending;
+ u32 pkts_pend = READ_ONCE(oq->pkts_pending);
+ u32 last_pkt_count = READ_ONCE(oq->last_pkt_count);
+ u32 pkts_processed = READ_ONCE(iq->pkts_processed);
+ u32 pkt_in_done = READ_ONCE(iq->pkt_in_done);
netdev_dbg(iq->netdev, "enabling intr for Q-%u\n", iq->q_no);
- if (iq->pkts_processed) {
- writel(iq->pkts_processed, iq->inst_cnt_reg);
- iq->pkt_in_done -= iq->pkts_processed;
- iq->pkts_processed = 0;
+ if (pkts_processed) {
+ writel(pkts_processed, iq->inst_cnt_reg);
+ readl(iq->inst_cnt_reg);
+ WRITE_ONCE(iq->pkt_in_done, (pkt_in_done - pkts_processed));
+ WRITE_ONCE(iq->pkts_processed, 0);
}
- if (oq->last_pkt_count - pkts_pend) {
- writel(oq->last_pkt_count - pkts_pend, oq->pkts_sent_reg);
- oq->last_pkt_count = pkts_pend;
+ if (last_pkt_count - pkts_pend) {
+ writel(last_pkt_count - pkts_pend, oq->pkts_sent_reg);
+ readl(oq->pkts_sent_reg);
+ WRITE_ONCE(oq->last_pkt_count, pkts_pend);
}
/* Flush the previous wrties before writing to RESEND bit */
smp_wmb();
+}
+
+/**
+ * octep_vf_enable_ioq_irq() - Enable MSI-x interrupt of a Tx/Rx queue.
+ *
+ * @iq: Octeon Tx queue data structure.
+ * @oq: Octeon Rx queue data structure.
+ */
+static void octep_vf_enable_ioq_irq(struct octep_vf_iq *iq,
+ struct octep_vf_oq *oq)
+{
writeq(1UL << OCTEP_VF_OQ_INTR_RESEND_BIT, oq->pkts_sent_reg);
writeq(1UL << OCTEP_VF_IQ_INTR_RESEND_BIT, iq->inst_cnt_reg);
}
@@ -333,6 +350,7 @@ static int octep_vf_napi_poll(struct napi_struct *napi, int budget)
if (tx_pending || rx_done >= budget)
return budget;
+ octep_vf_update_pkt(ioq_vector->iq, ioq_vector->oq);
if (likely(napi_complete_done(napi, rx_done)))
octep_vf_enable_ioq_irq(ioq_vector->iq, ioq_vector->oq);
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
index d70c8be3cfc4..31380962c212 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
@@ -319,9 +319,16 @@ static int octep_vf_oq_check_hw_for_pkts(struct octep_vf_device *oct,
struct octep_vf_oq *oq)
{
u32 pkt_count, new_pkts;
+ u32 last_pkt_count, pkts_pending;
pkt_count = readl(oq->pkts_sent_reg);
- new_pkts = pkt_count - oq->last_pkt_count;
+ last_pkt_count = READ_ONCE(oq->last_pkt_count);
+ new_pkts = pkt_count - last_pkt_count;
+
+ if (pkt_count < last_pkt_count) {
+ dev_err(oq->dev, "OQ-%u pkt_count(%u) < oq->last_pkt_count(%u)\n",
+ oq->q_no, pkt_count, last_pkt_count);
+ }
/* Clear the hardware packets counter register if the rx queue is
* being processed continuously with-in a single interrupt and
@@ -333,8 +340,9 @@ static int octep_vf_oq_check_hw_for_pkts(struct octep_vf_device *oct,
pkt_count = readl(oq->pkts_sent_reg);
new_pkts += pkt_count;
}
- oq->last_pkt_count = pkt_count;
- oq->pkts_pending += new_pkts;
+ WRITE_ONCE(oq->last_pkt_count, pkt_count);
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ WRITE_ONCE(oq->pkts_pending, (pkts_pending + new_pkts));
return new_pkts;
}
@@ -363,7 +371,7 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
struct sk_buff *skb;
u32 read_idx;
- read_idx = oq->host_read_idx;
+ read_idx = READ_ONCE(oq->host_read_idx);
rx_bytes = 0;
desc_used = 0;
for (pkt = 0; pkt < pkts_to_process; pkt++) {
@@ -457,7 +465,7 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
napi_gro_receive(oq->napi, skb);
}
- oq->host_read_idx = read_idx;
+ WRITE_ONCE(oq->host_read_idx, read_idx);
oq->refill_count += desc_used;
oq->stats->packets += pkt;
oq->stats->bytes += rx_bytes;
@@ -480,22 +488,26 @@ int octep_vf_oq_process_rx(struct octep_vf_oq *oq, int budget)
{
u32 pkts_available, pkts_processed, total_pkts_processed;
struct octep_vf_device *oct = oq->octep_vf_dev;
+ u32 pkts_pending;
pkts_available = 0;
pkts_processed = 0;
total_pkts_processed = 0;
while (total_pkts_processed < budget) {
/* update pending count only when current one exhausted */
- if (oq->pkts_pending == 0)
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ if (pkts_pending == 0)
octep_vf_oq_check_hw_for_pkts(oct, oq);
+ pkts_pending = READ_ONCE(oq->pkts_pending);
pkts_available = min(budget - total_pkts_processed,
- oq->pkts_pending);
+ pkts_pending);
if (!pkts_available)
break;
pkts_processed = __octep_vf_oq_process_rx(oct, oq,
pkts_available);
- oq->pkts_pending -= pkts_processed;
+ pkts_pending = READ_ONCE(oq->pkts_pending);
+ WRITE_ONCE(oq->pkts_pending, (pkts_pending - pkts_processed));
total_pkts_processed += pkts_processed;
}
--
2.47.0
* Re: [PATCH net v1 1/2] octeon_ep: avoid compiler and IQ/OQ reordering
2025-12-19 7:29 ` [PATCH net v1 1/2] octeon_ep: " Vimlesh Kumar
@ 2025-12-29 16:15 ` Paolo Abeni
0 siblings, 0 replies; 5+ messages in thread
From: Paolo Abeni @ 2025-12-29 16:15 UTC (permalink / raw)
To: Vimlesh Kumar, netdev, linux-kernel
Cc: sedara, srasheed, hgani, Veerasenareddy Burru, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Satananda Burla,
Abhijit Ayarekar
On 12/19/25 8:29 AM, Vimlesh Kumar wrote:
> Utilize READ_ONCE and WRITE_ONCE APIs for IO queue Tx/Rx
> variable access to prevent compiler optimization and reordering.
> Additionally, ensure IO queue OUT/IN_CNT registers are flushed
> by performing a read-back after writing.
Please explain why such _ONCE() annotations are required, and what could
be reordered otherwise.
I don't think they are needed.
/P
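For background on the question, a generic, hypothetical sketch of what
plain accesses permit at the compiler level (this is not the driver
code and is not by itself a justification for the annotations):

#include <linux/compiler.h>
#include <linux/types.h>

/* Hypothetical queue state, for illustration only. */
struct sketch_oq {
	u32 pkts_pending;
};

static u32 drain_sketch(struct sketch_oq *oq, u32 budget)
{
	u32 done = 0;

	while (done < budget) {
		/* A plain oq->pkts_pending load may be cached in a
		 * register or torn by the compiler; READ_ONCE() makes
		 * every evaluation a fresh, single load. That only
		 * matters if another context can update the field
		 * concurrently -- which is the open question above.
		 */
		u32 pending = READ_ONCE(oq->pkts_pending);

		if (!pending)
			break;
		WRITE_ONCE(oq->pkts_pending, pending - 1);
		done++;
	}
	return done;
}
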
* Re: [PATCH net v1 2/2] octeon_ep_vf: avoid compiler and IQ/OQ reordering
2025-12-19 7:29 ` [PATCH net v1 2/2] octeon_ep_vf: " Vimlesh Kumar
@ 2025-12-29 16:18 ` Paolo Abeni
0 siblings, 0 replies; 5+ messages in thread
From: Paolo Abeni @ 2025-12-29 16:18 UTC (permalink / raw)
To: Vimlesh Kumar, netdev, linux-kernel
Cc: sedara, srasheed, hgani, Veerasenareddy Burru, Satananda Burla,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski
On 12/19/25 8:29 AM, Vimlesh Kumar wrote:
> Utilize READ_ONCE and WRITE_ONCE APIs for IO queue Tx/Rx
> variable access to prevent compiler optimization and reordering.
> Additionally, ensure IO queue OUT/IN_CNT registers are flushed
> by performing a read-back after writing.
Same question here.
/P