public inbox for netdev@vger.kernel.org
From: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
To: intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com,
	aleksandr.loktionov@intel.com
Cc: netdev@vger.kernel.org
Subject: [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup()
Date: Wed,  8 Apr 2026 15:12:15 +0200	[thread overview]
Message-ID: <20260408131216.2662245-8-aleksandr.loktionov@intel.com> (raw)
In-Reply-To: <20260408131216.2662245-1-aleksandr.loktionov@intel.com>

ixgbe_fcoe_ddp_setup() is always called from process context (FCoE
offload setup paths) and never from an atomic context.  Using GFP_ATOMIC
is therefore unnecessarily restrictive and wastes memory allocator
headroom reserved for genuine atomic callers.

The previous attempt to switch to GFP_KERNEL placed the allocation
inside the get_cpu()/put_cpu() section, which disables preemption.
GFP_KERNEL allocations may sleep under direct reclaim, and sleeping
with preemption disabled triggers a "BUG: sleeping function called
from invalid context" splat, even though the caller is otherwise in
process context.

Restructure the function to split the get_cpu()/put_cpu() usage into
two narrow critical sections:

1. A short initial section that reads the per-CPU pool pointer and
   validates it, then immediately calls put_cpu() before any allocation.
   The pool pointer is saved in a local variable for use after the pin
   is dropped.

2. A second section after the allocation that re-pins the CPU solely to
   update per-CPU counters (noddp, noddp_ext_buff) inside the SG loop.

The DMA mapping and pool allocation sit between these two sections with
preemption enabled, making GFP_KERNEL safe.  The pool pointer saved
from section 1 remains valid because per-CPU DMA pools are only
destroyed during interface teardown under RTNL, not during normal
operation.

Suggested-by: Sebastian Basierski <sebastianx.basierski@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Move dma_pool_alloc() outside the get_cpu()/put_cpu() section;
   split into two narrow preempt-off regions so GFP_KERNEL is safe.

 drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 51 +++++++++++---------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
index 011fda9..064ad17 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
@@ -139,6 +139,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 	struct ixgbe_fcoe *fcoe;
 	struct ixgbe_fcoe_ddp *ddp;
 	struct ixgbe_fcoe_ddp_pool *ddp_pool;
+	struct dma_pool *pool;
 	struct scatterlist *sg;
 	unsigned int i, j, dmacount;
 	unsigned int len;
@@ -179,29 +180,43 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 		return 0;
 	}
 
+	/* Pin to current CPU only to read the per-CPU pool pointer; drop
+	 * the pin before any allocations that may sleep under direct reclaim.
+	 */
 	ddp_pool = per_cpu_ptr(fcoe->ddp_pool, get_cpu());
 	if (!ddp_pool->pool) {
 		e_warn(drv, "xid=0x%x no ddp pool for fcoe\n", xid);
-		goto out_noddp;
+		put_cpu();
+		return 0;
 	}
+	pool = ddp_pool->pool;
+	put_cpu();
 
 	/* setup dma from scsi command sgl */
 	dmacount = dma_map_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
 	if (dmacount == 0) {
 		e_err(drv, "xid 0x%x DMA map error\n", xid);
-		goto out_noddp;
+		return 0;
 	}
 
-	/* alloc the udl from per cpu ddp pool */
-	ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_ATOMIC, &ddp->udp);
+	/* Allocate from per-CPU pool; GFP_KERNEL is safe: preemption is
+	 * re-enabled after the put_cpu() above.  Per-CPU DMA pools are only
+	 * destroyed under RTNL during interface teardown, so the saved pool
+	 * pointer remains valid.
+	 */
+	ddp->udl = dma_pool_alloc(pool, GFP_KERNEL, &ddp->udp);
 	if (!ddp->udl) {
 		e_err(drv, "failed allocated ddp context\n");
-		goto out_noddp_unmap;
+		dma_unmap_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
+		return 0;
 	}
-	ddp->pool = ddp_pool->pool;
+	ddp->pool = pool;
 	ddp->sgl = sgl;
 	ddp->sgc = sgc;
 
+	/* Re-pin CPU for per-CPU statistics updates inside the SG loop. */
+	ddp_pool = per_cpu_ptr(fcoe->ddp_pool, get_cpu());
+
 	j = 0;
 	for_each_sg(sgl, sg, dmacount, i) {
 		addr = sg_dma_address(sg);
@@ -210,7 +225,8 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 			/* max number of buffers allowed in one DDP context */
 			if (j >= IXGBE_BUFFCNT_MAX) {
 				ddp_pool->noddp++;
-				goto out_noddp_free;
+				put_cpu();
+				goto out_noddp_free_unmap;
 			}
 
 			/* get the offset of length of current buffer */
@@ -220,16 +236,20 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 			 * all but the 1st buffer (j == 0)
 			 * must be aligned on bufflen
 			 */
-			if ((j != 0) && (thisoff))
-				goto out_noddp_free;
+			if (j != 0 && thisoff) {
+				put_cpu();
+				goto out_noddp_free_unmap;
+			}
 			/*
 			 * all but the last buffer
 			 * ((i == (dmacount - 1)) && (thislen == len))
 			 * must end at bufflen
 			 */
-			if (((i != (dmacount - 1)) || (thislen != len))
-			    && ((thislen + thisoff) != bufflen))
-				goto out_noddp_free;
+			if ((i != (dmacount - 1) || thislen != len) &&
+			    (thislen + thisoff) != bufflen) {
+				put_cpu();
+				goto out_noddp_free_unmap;
+			}
 
 			ddp->udl[j] = (u64)(addr - thisoff);
 			/* only the first buffer may have none-zero offset */
@@ -250,14 +270,15 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 	if (lastsize == bufflen) {
 		if (j >= IXGBE_BUFFCNT_MAX) {
 			ddp_pool->noddp_ext_buff++;
-			goto out_noddp_free;
+			put_cpu();
+			goto out_noddp_free_unmap;
 		}
 
 		ddp->udl[j] = (u64)(fcoe->extra_ddp_buffer_dma);
 		j++;
 		lastsize = 1;
 	}
 	put_cpu();
 
 	fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT);
 	fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT);
@@ -316,14 +337,10 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 
 	return 1;
 
-out_noddp_free:
+out_noddp_free_unmap:
 	dma_pool_free(ddp->pool, ddp->udl, ddp->udp);
 	ixgbe_fcoe_clear_ddp(ddp);
-
-out_noddp_unmap:
 	dma_unmap_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
-out_noddp:
-	put_cpu();
 	return 0;
 }
 
-- 
2.52.0

Thread overview: 13+ messages
2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation Aleksandr Loktionov
2026-04-14 12:58   ` Simon Horman
2026-04-08 13:12 ` [PATCH iwl-next v2 2/8] ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 3/8] ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 4/8] ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 5/8] ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers Aleksandr Loktionov
2026-04-14 13:23   ` Simon Horman
2026-04-08 13:12 ` [PATCH iwl-next v2 6/8] ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset() Aleksandr Loktionov
2026-04-08 13:12 ` Aleksandr Loktionov [this message]
2026-04-08 14:09   ` [Intel-wired-lan] [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup() Kohei Enju
2026-04-14 13:29   ` Simon Horman
2026-04-08 13:12 ` [PATCH iwl-next v2 8/8] ixgbe: use int instead of u32 for error code variables Aleksandr Loktionov