public inbox for netdev@vger.kernel.org
* [PATCH iwl-next v2 0/8] ixgbe: nits and improvements
@ 2026-04-08 13:12 Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation Aleksandr Loktionov
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

Eight cleanups and improvements for the ixgbe driver.

Patches 1-4 split the adaptive-ITR algorithm improvements into
independently reviewable pieces as requested by Simon Horman.  Each
patch is self-contained and can be reviewed and reverted independently:
  1. Lower IXGBE_ITR_ADAPTIVE_MAX_USECS from 126 to 84 to prevent RX
     starvation at minimum bulk-mode rates.
  2. Add ixgbe_container_is_rx() helper and refine the RX-specific
     latency algorithm (thresholds, no-packet handling, mode tracking).
  3. Limit ITR decrease in latency mode to at most 2 us per update so
     ACK workloads do not overdrive moderation.
  4. Add IXGBE_ITR_ADAPTIVE_MASK_USECS constant to replace the
     open-coded ~IXGBE_ITR_ADAPTIVE_LATENCY complement in ixgbe_set_itr.

Patch 5 removes ixgbe_ping_all_vfs() from both ixgbe_watchdog_link_is_up()
and ixgbe_watchdog_link_is_down().  The original submission only removed
the call from the link-up path; this version also removes it from the
link-down path, which carries the same race window against VF mailbox
initialization.

Patch 6 replaces ktime_to_ns(ktime_get_real()) with ktime_get_real_ns().

Patch 7 restructures ixgbe_fcoe_ddp_setup() so that dma_pool_alloc()
is called outside the get_cpu()/put_cpu() preemption-disabled section,
enabling GFP_KERNEL instead of GFP_ATOMIC.  The prior submission left
the allocation inside the preempt-off block (GFP_KERNEL can sleep under
direct reclaim, which triggers a BUG() with preemption disabled).

Patch 8 fixes ten local variables declared as u32 that are used to
store signed kernel error codes.  Reviewed by Simon Horman.

Changes in v2:
 - 1-4/8: Split monolithic ITR cleanup into four independent patches
          as requested by Simon Horman.
 - 5/8:   Extend fix to also remove ixgbe_ping_all_vfs() from
          ixgbe_watchdog_link_is_down() to cover the identical race.
 - 6/8:   No code change; add [N/M] numbering.
 - 7/8:   Move dma_pool_alloc() outside get_cpu()/put_cpu() so
          GFP_KERNEL is used safely without sleeping under preempt-off.
 - 8/8:   Add Reviewed-by: Simon Horman; no code change.

---

Alexander Duyck (4):
  ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation
  ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR
  ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive
  ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant

Aleksandr Loktionov (3):
  ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers
  ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup()
  ixgbe: use int instead of u32 for error code variables

Jacob Keller (1):
  ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset()

 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |  5 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 50 +++++++----
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 97 ++++++++++++------
 drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c  |  6 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c  |  2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c | 12 +--
 6 files changed, 113 insertions(+), 59 deletions(-)
-- 
2.52.0

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 2/8] ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR Aleksandr Loktionov
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

From: Alexander Duyck <alexander.h.duyck@intel.com>

At the current maximum of 126 us the minimum bulk-mode interrupt rate
is ~7936 interrupts/s.  Under sustained full-line-rate bulk RX traffic
this is low enough that descriptor ring starvation can occur before the
next interrupt fires.

Lower IXGBE_ITR_ADAPTIVE_MAX_USECS from 126 to 84 us.  This raises the
minimum rate to ~11905 interrupts/s (~12K ints/s), providing enough
headroom to drain the ring before it wraps.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Split from monolithic ITR cleanup patch; this patch only lowers
   IXGBE_ITR_ADAPTIVE_MAX_USECS from 126 to 84.

 drivers/net/ethernet/intel/ixgbe/ixgbe.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 59a1cee4..b3f4d21 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -473,6 +473,6 @@ static inline unsigned int ixgbe_rx_pg_order(struct ixgbe_ring *ring)
 
 #define IXGBE_ITR_ADAPTIVE_MIN_INC	2
 #define IXGBE_ITR_ADAPTIVE_MIN_USECS	10
-#define IXGBE_ITR_ADAPTIVE_MAX_USECS	126
+#define IXGBE_ITR_ADAPTIVE_MAX_USECS	84
 #define IXGBE_ITR_ADAPTIVE_LATENCY	0x80
 #define IXGBE_ITR_ADAPTIVE_BULK		0x00
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 2/8] ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 3/8] ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive Aleksandr Loktionov
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

From: Alexander Duyck <alexander.h.duyck@intel.com>

Add an ixgbe_container_is_rx() helper to cleanly distinguish RX from TX
ring containers inside ixgbe_update_itr().

Refine the RX-specific latency-detection path:

 - Replace the shared "packets < 4 or bytes < 9000" threshold with an
   RX-specific check of "1..23 packets and bytes < 12112".  When that
   condition holds, target 8x the observed byte count in the next
   interval by computing avg_wire_size = (bytes + packets * 24) * 2,
   clamped to [2560, 12800], and jumping directly to the speed-based
   ITR calculation.  This provides finer-grained control over low-rate
   RX latency workloads without affecting TX.

 - Remove the separate "no packets" special-case block.  When packets
   is 0 it falls into the "< 48" branch.  The mode-tracking logic in
   that branch is extended: fewer than 8 packets forces latency mode;
   8..47 packets preserves the current mode.  This replaces the old
   unconditional "add LATENCY flag from ring_container->itr" carried
   over from the removed block.

 - Remove the adjust_by_size label and the associated "halve
   avg_wire_size in latency mode" step.  The Rx latency path now
   pre-calculates avg_wire_size independently and the bulk path no
   longer needs the halving to compensate for incorrect thresholds.
   Rename the jump target to adjust_for_speed to reflect its purpose.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Split from monolithic ITR cleanup; adds ixgbe_container_is_rx(),
   refines RX latency thresholds (24 pkts / 12112 B), and removes the
   separate no-packet and adjust_by_size code paths.

 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 72 ++++++++++--------
 1 file changed, 41 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 210c7b9..b3f4a72 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2711,6 +2711,12 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter)
 	IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIAC, mask);
 }
 
+static bool ixgbe_container_is_rx(struct ixgbe_q_vector *q_vector,
+				  struct ixgbe_ring_container *rc)
+{
+	return &q_vector->rx == rc;
+}
+
 /**
  * ixgbe_update_itr - update the dynamic ITR value based on statistics
  * @q_vector: structure containing interrupt and ring information
@@ -2747,35 +2753,24 @@ static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
 		goto clear_counts;
 
 	packets = ring_container->total_packets;
-
-	/* We have no packets to actually measure against. This means
-	 * either one of the other queues on this vector is active or
-	 * we are a Tx queue doing TSO with too high of an interrupt rate.
-	 *
-	 * When this occurs just tick up our delay by the minimum value
-	 * and hope that this extra delay will prevent us from being called
-	 * without any work on our queue.
-	 */
-	if (!packets) {
-		itr = (q_vector->itr >> 2) + IXGBE_ITR_ADAPTIVE_MIN_INC;
-		if (itr > IXGBE_ITR_ADAPTIVE_MAX_USECS)
-			itr = IXGBE_ITR_ADAPTIVE_MAX_USECS;
-		itr += ring_container->itr & IXGBE_ITR_ADAPTIVE_LATENCY;
-		goto clear_counts;
-	}
-
 	bytes = ring_container->total_bytes;
 
-	/* If packets are less than 4 or bytes are less than 9000 assume
-	 * insufficient data to use bulk rate limiting approach. We are
-	 * likely latency driven.
-	 */
-	if (packets < 4 && bytes < 9000) {
-		itr = IXGBE_ITR_ADAPTIVE_LATENCY;
-		goto adjust_by_size;
+	if (ixgbe_container_is_rx(q_vector, ring_container)) {
+		/* If Rx and there are 1 to 23 packets and bytes are less than
+		 * 12112 assume insufficient data to use bulk rate limiting
+		 * approach. Instead we will focus on simply trying to target
+		 * receiving 8 times as much data in the next interrupt.
+		 */
+		if (packets && packets < 24 && bytes < 12112) {
+			itr = IXGBE_ITR_ADAPTIVE_LATENCY;
+			avg_wire_size = (bytes + packets * 24) * 2;
+			avg_wire_size = clamp_t(unsigned int,
+						avg_wire_size, 2560, 12800);
+			goto adjust_for_speed;
+		}
 	}
 
-	/* Between 4 and 48 we can assume that our current interrupt delay
+	/* Less than 48 packets we can assume that our current interrupt delay
 	 * is only slightly too low. As such we should increase it by a small
 	 * fixed amount.
 	 */
@@ -2783,6 +2778,20 @@ static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
 		itr = (q_vector->itr >> 2) + IXGBE_ITR_ADAPTIVE_MIN_INC;
 		if (itr > IXGBE_ITR_ADAPTIVE_MAX_USECS)
 			itr = IXGBE_ITR_ADAPTIVE_MAX_USECS;
+
+		/* If sample size is 0 - 7 we should probably switch
+		 * to latency mode instead of trying to control
+		 * things as though we are in bulk.
+		 *
+		 * Otherwise if the number of packets is less than 48
+		 * we should maintain whatever mode we are currently
+		 * in. The range between 8 and 48 is the cross-over
+		 * point between latency and bulk traffic.
+		 */
+		if (packets < 8)
+			itr += IXGBE_ITR_ADAPTIVE_LATENCY;
+		else
+			itr += ring_container->itr & IXGBE_ITR_ADAPTIVE_LATENCY;
 		goto clear_counts;
 	}
 
@@ -2813,7 +2822,6 @@ static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
 	 */
 	itr = IXGBE_ITR_ADAPTIVE_BULK;
 
-adjust_by_size:
 	/* If packet counts are 256 or greater we can assume we have a gross
 	 * overestimation of what the rate should be. Instead of trying to fine
 	 * tune it just use the formula below to try and dial in an exact value
@@ -2856,12 +2864,7 @@ static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
 		avg_wire_size = 32256;
 	}
 
-	/* If we are in low latency mode half our delay which doubles the rate
-	 * to somewhere between 100K to 16K ints/sec
-	 */
-	if (itr & IXGBE_ITR_ADAPTIVE_LATENCY)
-		avg_wire_size >>= 1;
-
+adjust_for_speed:
 	/* Resultant value is 256 times larger than it needs to be. This
 	 * gives us room to adjust the value as needed to either increase
 	 * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc.
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 3/8] ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 2/8] ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 4/8] ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant Aleksandr Loktionov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

From: Alexander Duyck <alexander.h.duyck@intel.com>

When operating in latency mode and the computed ITR is lower than the
current setting, the algorithm can reduce the interrupt rate too
aggressively in a single step.  For a TCP workload this means the ACK
stream (a latency-sensitive, low-packet-rate workload) can drive the
moderation down to very high interrupt rates, starving CPU time from
the sender side.

After the speed-based ITR calculation is complete, check whether the
result is in latency mode and would decrease below the current setting.
If so, limit the decrease to at most IXGBE_ITR_ADAPTIVE_MIN_INC (2 us)
per update.  This ensures the number of interrupts grows by no more
than 2x per adjustment step for latency-class workloads, dialling in
smoothly rather than overshooting.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Split from monolithic ITR cleanup; this patch only adds the 2 us
   per-step decrease limit for latency-mode workloads.

 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index b3f4a72..d7c5e83 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2888,6 +2888,16 @@ static void ixgbe_update_itr(struct ixgbe_q_vector *q_vector,
 		break;
 	}
 
+	/* In the case of a latency specific workload only allow us to
+	 * reduce the ITR by at most 2us. By doing this we should dial
+	 * in so that our number of interrupts is no more than 2x the number
+	 * of packets for the least busy workload. So for example in the case
+	 * of a TCP workload the ACK packets being received would set the
+	 * interrupt rate as they are a latency specific workload.
+	 */
+	if ((itr & IXGBE_ITR_ADAPTIVE_LATENCY) && itr < ring_container->itr)
+		itr = ring_container->itr - IXGBE_ITR_ADAPTIVE_MIN_INC;
+
 clear_counts:
 	/* write back value */
 	ring_container->itr = itr;
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 4/8] ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
                   ` (2 preceding siblings ...)
  2026-04-08 13:12 ` [PATCH iwl-next v2 3/8] ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 5/8] ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers Aleksandr Loktionov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

From: Alexander Duyck <alexander.h.duyck@intel.com>

ixgbe_set_itr() clears the mode flag (IXGBE_ITR_ADAPTIVE_LATENCY, bit 7)
with the open-coded complement expression ~IXGBE_ITR_ADAPTIVE_LATENCY.
This is equivalent to keeping only bits [6:0], i.e. the usecs sub-field.

Add IXGBE_ITR_ADAPTIVE_MASK_USECS = IXGBE_ITR_ADAPTIVE_LATENCY - 1 =
0x7F to name this mask explicitly and replace the open-coded AND-NOT
operation with the cleaner AND form.  The two expressions agree for all
ITR values, which fit in bits [7:0]; the change improves readability.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Split from monolithic ITR cleanup; this patch only adds the
   IXGBE_ITR_ADAPTIVE_MASK_USECS constant and updates ixgbe_set_itr().

 drivers/net/ethernet/intel/ixgbe/ixgbe.h      | 1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index b3f4d21..c704cc6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -475,6 +475,7 @@ static inline unsigned int ixgbe_rx_pg_order(struct ixgbe_ring *ring)
 #define IXGBE_ITR_ADAPTIVE_MAX_USECS	84
 #define IXGBE_ITR_ADAPTIVE_LATENCY	0x80
 #define IXGBE_ITR_ADAPTIVE_BULK		0x00
+#define IXGBE_ITR_ADAPTIVE_MASK_USECS	(IXGBE_ITR_ADAPTIVE_LATENCY - 1)
 
 struct ixgbe_ring_container {
 	struct ixgbe_ring *ring;	/* pointer to linked list of rings */

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index d7c5e83..1885fe8 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2960,7 +2960,7 @@ static void ixgbe_set_itr(struct ixgbe_q_vector *q_vector)
 	new_itr = min(q_vector->rx.itr, q_vector->tx.itr);
 
 	/* Clear latency flag if set, shift into correct position */
-	new_itr &= ~IXGBE_ITR_ADAPTIVE_LATENCY;
+	new_itr &= IXGBE_ITR_ADAPTIVE_MASK_USECS;
 	new_itr <<= 2;
 
 	if (new_itr != q_vector->itr) {
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 5/8] ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
                   ` (3 preceding siblings ...)
  2026-04-08 13:12 ` [PATCH iwl-next v2 4/8] ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 6/8] ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset() Aleksandr Loktionov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

When multiple VFs are brought up simultaneously, a VF can receive a
CTS (Clear To Send) mailbox message instead of the expected RESET |
(N)ACK response, because the watchdog fires a gratuitous ping right as
the VF mailbox transaction is in progress.

Remove the ixgbe_ping_all_vfs() call from both ixgbe_watchdog_link_is_up()
and ixgbe_watchdog_link_is_down().  Both paths contain an identical call
with the same race window during VF initialization: link-state changes
(up or down) can arrive while VFs are mid-initialization, and the
gratuitous ping from either direction collides with the VF mailbox state
machine for the same reason.

Link-state changes are already communicated to VFs through the normal
mailbox protocol; these extra pings are redundant on both paths.

Suggested-by: Sebastian Basierski <sebastianx.basierski@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Extend fix to also remove ixgbe_ping_all_vfs() from
   ixgbe_watchdog_link_is_down(), which carries the same race window;
   this removes the v1 asymmetry between the two link paths.

 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 1885fe8..7c3d821 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -8151,9 +8151,6 @@ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter)
 
 	/* update the default user priority for VFs */
 	ixgbe_update_default_up(adapter);
-
-	/* ping all the active vfs to let them know link has changed */
-	ixgbe_ping_all_vfs(adapter);
 }
 
 /**
@@ -8183,8 +8180,5 @@ static void ixgbe_watchdog_link_is_down(struct ixgbe_adapter *adapter)
 	e_info(drv, "NIC Link is Down\n");
 	netif_carrier_off(netdev);
-
-	/* ping all the active vfs to let them know link has changed */
-	ixgbe_ping_all_vfs(adapter);
 }
 
 static bool ixgbe_ring_tx_pending(struct ixgbe_adapter *adapter)
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 6/8] ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset()
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
                   ` (4 preceding siblings ...)
  2026-04-08 13:12 ` [PATCH iwl-next v2 5/8] ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup() Aleksandr Loktionov
  2026-04-08 13:12 ` [PATCH iwl-next v2 8/8] ixgbe: use int instead of u32 for error code variables Aleksandr Loktionov
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov
  Cc: netdev, Jacob Keller, Marcin Szycik, Simon Horman

From: Jacob Keller <jacob.e.keller@intel.com>

Replace ktime_to_ns(ktime_get_real()) with the direct equivalent
ktime_get_real_ns() in ixgbe_ptp_reset().  Using the combined helper
avoids the unnecessary intermediate ktime_t variable and makes the
intent clearer.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
---
v1 -> v2:
 - Carry Reviewed-by: Simon Horman from reply email; add [N/M]
   numbering; no code change.

 drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
index 6885d23..a7d1635 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
@@ -1347,7 +1347,7 @@ void ixgbe_ptp_reset(struct ixgbe_adapter *adapter)
 
 	spin_lock_irqsave(&adapter->tmreg_lock, flags);
 	timecounter_init(&adapter->hw_tc, &adapter->hw_cc,
-			 ktime_to_ns(ktime_get_real()));
+			 ktime_get_real_ns());
 	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
 
 	adapter->last_overflow_check = jiffies;
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup()
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
                   ` (5 preceding siblings ...)
  2026-04-08 13:12 ` [PATCH iwl-next v2 6/8] ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset() Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  2026-04-08 14:09   ` [Intel-wired-lan] " Kohei Enju
  2026-04-08 13:12 ` [PATCH iwl-next v2 8/8] ixgbe: use int instead of u32 for error code variables Aleksandr Loktionov
  7 siblings, 1 reply; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov; +Cc: netdev

ixgbe_fcoe_ddp_setup() is always called from process context (FCoE
offload setup paths) and never from an atomic context.  Using GFP_ATOMIC
is therefore unnecessarily restrictive and wastes memory allocator
headroom reserved for genuine atomic callers.

The previous attempt to change this to GFP_KERNEL placed the allocation
inside the get_cpu()/put_cpu() section, which disables preemption.
GFP_KERNEL can sleep under direct reclaim regardless of whether the
caller is in process context, which triggers a BUG() with preemption
disabled.

Restructure the function to split the get_cpu()/put_cpu() usage into
two narrow critical sections:

1. A short initial section that reads the per-CPU pool pointer and
   validates it, then immediately calls put_cpu() before any allocation.
   The pool pointer is saved in a local variable for use after the pin
   is dropped.

2. A second section after the allocation that re-pins the CPU solely to
   update per-CPU counters (noddp, noddp_ext_buff) inside the SG loop.

The DMA mapping and pool allocation sit between these two sections with
preemption enabled, making GFP_KERNEL safe.  The pool pointer saved
from section 1 remains valid because per-CPU DMA pools are only
destroyed during interface teardown under RTNL, not during normal
operation.

Suggested-by: Sebastian Basierski <sebastianx.basierski@intel.com>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Move dma_pool_alloc() outside the get_cpu()/put_cpu() section;
   split into two narrow preempt-off regions so GFP_KERNEL is safe.

 drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c | 51 +++++++++++---------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
index 011fda9..064ad17 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c
@@ -139,6 +139,7 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 	struct ixgbe_fcoe *fcoe;
 	struct ixgbe_fcoe_ddp *ddp;
 	struct ixgbe_fcoe_ddp_pool *ddp_pool;
+	struct dma_pool *pool;
 	struct scatterlist *sg;
 	unsigned int i, j, dmacount;
 	unsigned int len;
@@ -179,29 +180,43 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 		return 0;
 	}
 
+	/* Pin to current CPU only to read the per-CPU pool pointer; drop
+	 * the pin before any allocations that may sleep under direct reclaim.
+	 */
 	ddp_pool = per_cpu_ptr(fcoe->ddp_pool, get_cpu());
 	if (!ddp_pool->pool) {
 		e_warn(drv, "xid=0x%x no ddp pool for fcoe\n", xid);
-		goto out_noddp;
+		put_cpu();
+		return 0;
 	}
+	pool = ddp_pool->pool;
+	put_cpu();
 
 	/* setup dma from scsi command sgl */
 	dmacount = dma_map_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
 	if (dmacount == 0) {
 		e_err(drv, "xid 0x%x DMA map error\n", xid);
-		goto out_noddp;
+		return 0;
 	}
 
-	/* alloc the udl from per cpu ddp pool */
-	ddp->udl = dma_pool_alloc(ddp_pool->pool, GFP_ATOMIC, &ddp->udp);
+	/* Allocate from per-CPU pool; GFP_KERNEL is safe: preemption is
+	 * re-enabled after the put_cpu() above.  Per-CPU DMA pools are only
+	 * destroyed under RTNL during interface teardown, so the saved pool
+	 * pointer remains valid.
+	 */
+	ddp->udl = dma_pool_alloc(pool, GFP_KERNEL, &ddp->udp);
 	if (!ddp->udl) {
 		e_err(drv, "failed allocated ddp context\n");
-		goto out_noddp_unmap;
+		dma_unmap_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
+		return 0;
 	}
-	ddp->pool = ddp_pool->pool;
+	ddp->pool = pool;
 	ddp->sgl = sgl;
 	ddp->sgc = sgc;
 
+	/* Re-pin CPU for per-CPU statistics updates inside the SG loop. */
+	ddp_pool = per_cpu_ptr(fcoe->ddp_pool, get_cpu());
+
 	j = 0;
 	for_each_sg(sgl, sg, dmacount, i) {
 		addr = sg_dma_address(sg);
@@ -210,7 +225,8 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 			/* max number of buffers allowed in one DDP context */
 			if (j >= IXGBE_BUFFCNT_MAX) {
 				ddp_pool->noddp++;
-				goto out_noddp_free;
+				put_cpu();
+				goto out_noddp_free_unmap;
 			}
 
 			/* get the offset of length of current buffer */
@@ -220,16 +236,20 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 			 * all but the 1st buffer (j == 0)
 			 * must be aligned on bufflen
 			 */
-			if ((j != 0) && (thisoff))
-				goto out_noddp_free;
+			if (j != 0 && thisoff) {
+				put_cpu();
+				goto out_noddp_free_unmap;
+			}
 			/*
 			 * all but the last buffer
 			 * ((i == (dmacount - 1)) && (thislen == len))
 			 * must end at bufflen
 			 */
-			if (((i != (dmacount - 1)) || (thislen != len))
-			    && ((thislen + thisoff) != bufflen))
-				goto out_noddp_free;
+			if ((i != (dmacount - 1) || thislen != len) &&
+			    (thislen + thisoff) != bufflen) {
+				put_cpu();
+				goto out_noddp_free_unmap;
+			}
 
 			ddp->udl[j] = (u64)(addr - thisoff);
 			/* only the first buffer may have none-zero offset */
@@ -250,14 +270,15 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 	if (lastsize == bufflen) {
 		if (j >= IXGBE_BUFFCNT_MAX) {
 			ddp_pool->noddp_ext_buff++;
-			goto out_noddp_free;
+			put_cpu();
+			goto out_noddp_free_unmap;
 		}
 
 		ddp->udl[j] = (u64)(fcoe->extra_ddp_buffer_dma);
 		j++;
 		lastsize = 1;
 	}
 	put_cpu();
 
 	fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT);
 	fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT);
@@ -316,14 +337,10 @@ static int ixgbe_fcoe_ddp_setup(struct net_device *netdev, u16 xid,
 
 	return 1;
 
-out_noddp_free:
+out_noddp_free_unmap:
 	dma_pool_free(ddp->pool, ddp->udl, ddp->udp);
 	ixgbe_fcoe_clear_ddp(ddp);
-
-out_noddp_unmap:
 	dma_unmap_sg(&adapter->pdev->dev, sgl, sgc, DMA_FROM_DEVICE);
-out_noddp:
-	put_cpu();
 	return 0;
 }
 
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH iwl-next v2 8/8] ixgbe: use int instead of u32 for error code variables
  2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
                   ` (6 preceding siblings ...)
  2026-04-08 13:12 ` [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup() Aleksandr Loktionov
@ 2026-04-08 13:12 ` Aleksandr Loktionov
  7 siblings, 0 replies; 10+ messages in thread
From: Aleksandr Loktionov @ 2026-04-08 13:12 UTC (permalink / raw)
  To: intel-wired-lan, anthony.l.nguyen, aleksandr.loktionov
  Cc: netdev, Simon Horman

The variables used to store return values of kernel and driver functions
throughout the ixgbe driver are declared as u32 in several places.  Such
functions return negative errno values on error (e.g. -EIO, -EFAULT),
which are negative integers.  Storing them in an unsigned u32 silently
wraps the value: -EIO (-5) stored in a u32 becomes 0xFFFFFFFB, a large
positive number, so any "if (status)" truthiness check still works by
accident, but sign tests such as "if (status < 0)" are always false,
and propagating the value through wider signed types up the call stack
produces wrong results.

In the Linux kernel, u32 is reserved for fixed-width quantities used in
hardware interfaces or protocol structures.  Using it for generic error
codes misleads reviewers into thinking the value is hardware-constrained.

Change all such local variables from u32 to int driver-wide: one in
ixgbe_main.c (ixgbe_resume), three in ixgbe_phy.c
(ixgbe_identify_phy_generic, ixgbe_tn_check_overtemp,
ixgbe_set_copper_phy_power), and six in ixgbe_x550.c
(ixgbe_check_link_t_X550em, ixgbe_get_lasi_ext_t_x550em,
ixgbe_enable_lasi_ext_t_x550em, ixgbe_handle_lasi_ext_t_x550em,
ixgbe_ext_phy_t_x550em_get_link, ixgbe_setup_internal_phy_t_x550em).

No functional change.

Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v1 -> v2:
 - Carry Reviewed-by: Simon Horman from the IWL posting; add [N/M]
   numbering; no code change.

 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |  2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c  |  6 +++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index ed5b75b..99cfccd 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -7531,7 +7531,7 @@ static int ixgbe_resume(struct device *dev_d)
 	struct pci_dev *pdev = to_pci_dev(dev_d);
 	struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
 	struct net_device *netdev = adapter->netdev;
-	u32 err;
+	int err;
 
 	adapter->hw.hw_addr = adapter->io_addr;
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
index ab733e7..de8f6c6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
@@ -262,7 +262,7 @@ static bool ixgbe_probe_phy(struct ixgbe_hw *hw, u16 phy_addr)
  **/
 int ixgbe_identify_phy_generic(struct ixgbe_hw *hw)
 {
-	u32 status = -EFAULT;
+	int status = -EFAULT;
 	u32 phy_addr;
 
 	if (!hw->phy.phy_semaphore_mask) {
@@ -2811,7 +2811,7 @@ static void ixgbe_i2c_bus_clear(struct ixgbe_hw *hw)
 bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
 {
 	u16 phy_data = 0;
-	u32 status;
+	int status;
 
 	if (hw->device_id != IXGBE_DEV_ID_82599_T3_LOM)
 		return false;
@@ -2831,7 +2831,7 @@ bool ixgbe_tn_check_overtemp(struct ixgbe_hw *hw)
  **/
 int ixgbe_set_copper_phy_power(struct ixgbe_hw *hw, bool on)
 {
-	u32 status;
+	int status;
 	u16 reg;
 
 	/* Bail if we don't have copper phy */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
index 76d2fa3..9b14f3b 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
@@ -1911,7 +1911,7 @@ static int ixgbe_check_link_t_X550em(struct ixgbe_hw *hw,
 				     bool *link_up,
 				     bool link_up_wait_to_complete)
 {
-	u32 status;
+	int status;
 	u16 i, autoneg_status;
 
 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
@@ -2330,7 +2330,7 @@ static int ixgbe_get_link_capabilities_X550em(struct ixgbe_hw *hw,
 static int ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc,
 				       bool *is_overtemp)
 {
-	u32 status;
+	int status;
 	u16 reg;
 
 	*is_overtemp = false;
@@ -2418,7 +2418,7 @@ static int ixgbe_get_lasi_ext_t_x550em(struct ixgbe_hw *hw, bool *lsc,
 static int ixgbe_enable_lasi_ext_t_x550em(struct ixgbe_hw *hw)
 {
 	bool lsc, overtemp;
-	u32 status;
+	int status;
 	u16 reg;
 
 	/* Clear interrupt flags */
@@ -2512,7 +2512,7 @@ static int ixgbe_handle_lasi_ext_t_x550em(struct ixgbe_hw *hw,
 {
 	struct ixgbe_phy_info *phy = &hw->phy;
 	bool lsc;
-	u32 status;
+	int status;
 
 	status = ixgbe_get_lasi_ext_t_x550em(hw, &lsc, is_overtemp);
 	if (status)
@@ -2606,7 +2606,7 @@ static int ixgbe_setup_kr_x550em(struct ixgbe_hw *hw)
  **/
 static int ixgbe_ext_phy_t_x550em_get_link(struct ixgbe_hw *hw, bool *link_up)
 {
-	u32 ret;
+	int ret;
 	u16 autoneg_status;
 
 	*link_up = false;
@@ -2642,7 +2642,7 @@ static int ixgbe_setup_internal_phy_t_x550em(struct ixgbe_hw *hw)
 {
 	ixgbe_link_speed force_speed;
 	bool link_up;
-	u32 status;
+	int status;
 	u16 speed;
 
 	if (hw->mac.ops.get_media_type(hw) != ixgbe_media_type_copper)
-- 
2.52.0


* Re: [Intel-wired-lan] [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup()
  2026-04-08 13:12 ` [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup() Aleksandr Loktionov
@ 2026-04-08 14:09   ` Kohei Enju
  0 siblings, 0 replies; 10+ messages in thread
From: Kohei Enju @ 2026-04-08 14:09 UTC (permalink / raw)
  To: Aleksandr Loktionov; +Cc: intel-wired-lan, anthony.l.nguyen, netdev

On 04/08 15:12, Aleksandr Loktionov wrote:
> ixgbe_fcoe_ddp_setup() is always called from process context (FCoE
> offload setup paths) and never from an atomic context.  Using GFP_ATOMIC

As I mentioned in v1, I don't think this path is non-atomic.

fc_exch_seq_send()
  fc_exch_alloc() # acquires ep->ex_lock (spinlock) internally
  fc_fcp_ddp_setup()
    lport->tt.ddp_setup() == fcoe_ddp_setup()
      .ndo_fcoe_ddp_setup() == ixgbe_fcoe_ddp_get()
        ixgbe_fcoe_ddp_setup()
  ...
  spin_unlock_bh(&ep->ex_lock);

So even if this runs in process context, it still appears to be in
atomic context while ep->ex_lock is held and bottom halves are disabled.
GFP_KERNEL still looks unsafe here to me.

If I'm misreading something, please let me know.
Thanks.



Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-08 13:12 [PATCH iwl-next v2 0/8] ixgbe: nits and improvements Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 1/8] ixgbe: lower IXGBE_ITR_ADAPTIVE_MAX_USECS to prevent RX starvation Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 2/8] ixgbe: add ixgbe_container_is_rx() helper and refine RX adaptive ITR Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 3/8] ixgbe: limit ITR decrease in latency mode to prevent ACK overdrive Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 4/8] ixgbe: add IXGBE_ITR_ADAPTIVE_MASK_USECS constant Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 5/8] ixgbe: remove ixgbe_ping_all_vfs() from link state change handlers Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 6/8] ixgbe: use ktime_get_real_ns() in ixgbe_ptp_reset() Aleksandr Loktionov
2026-04-08 13:12 ` [PATCH iwl-next v2 7/8] ixgbe: use GFP_KERNEL in ixgbe_fcoe_ddp_setup() Aleksandr Loktionov
2026-04-08 14:09   ` [Intel-wired-lan] " Kohei Enju
2026-04-08 13:12 ` [PATCH iwl-next v2 8/8] ixgbe: use int instead of u32 for error code variables Aleksandr Loktionov
