From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org
Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
	magnus.karlsson@intel.com, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, andrii@kernel.org,
	kpsingh@kernel.org, kafai@fb.com, yhs@fb.com,
	songliubraving@fb.com, bpf@vger.kernel.org,
	George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Subject: [PATCH net-next 4/9] ice: unify xdp_rings accesses
Date: Fri, 15 Oct 2021 09:29:03 -0700
Message-ID: <20211015162908.145341-5-anthony.l.nguyen@intel.com>
In-Reply-To: <20211015162908.145341-1-anthony.l.nguyen@intel.com>

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

There has been a long-standing issue of improper xdp_rings indexing for
XDP_TX and XDP_REDIRECT actions. Given that rx_ring->q_index is currently
mixed with smp_processor_id(), a situation can arise where Tx descriptors
are produced onto an XDP Tx ring but the tail is never bumped - for
example, when a particular queue id is pinned to a non-matching IRQ line.
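
To make the mismatch concrete, here is a hedged, hypothetical sketch (the
queue and CPU numbers are invented for illustration; the only names used
are those visible in the hunks below):

	/* hypothetical pre-patch scenario, values invented for illustration */
	unsigned int q_index = 2;	/* rx_ring->q_index              */
	unsigned int cpu = 5;		/* smp_processor_id() in softirq */

	/* one code path picks rx_ring->vsi->xdp_rings[cpu] while another
	 * picks rx_ring->vsi->xdp_rings[q_index]; when the queue's IRQ is
	 * pinned to a non-matching CPU (q_index != cpu), descriptors can
	 * be written to one ring while the tail bump lands on the other
	 */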

Address this problem by ignoring the user ring count setting and always
initializing the xdp_rings array to num_possible_cpus() size. Then, always
use smp_processor_id() as the index into the xdp_rings array. This
provides serialization, as at any given time only a single softirq can run
on a particular CPU.
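
As a minimal sketch of the pattern this change converges on (simplified
and re-assembled from the hunks below, not verbatim driver code): the XDP
Tx ring array is sized for every possible CPU at setup time, and the hot
path indexes it by the current CPU, relying on softirq processing staying
on one CPU for its duration:

	/* setup: one XDP Tx ring per possible CPU */
	vsi->num_xdp_txq = num_possible_cpus();

	/* hot path (softirq): pick the ring owned by this CPU; only one
	 * softirq runs on a given CPU at a time, so producers on this
	 * ring are serialized without extra locking
	 */
	if (xdp_res & ICE_XDP_TX) {
		struct ice_tx_ring *xdp_ring =
			rx_ring->vsi->xdp_rings[smp_processor_id()];

		ice_xdp_ring_update_tail(xdp_ring);
	}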

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c      | 2 +-
 drivers/net/ethernet/intel/ice/ice_main.c     | 2 +-
 drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 04155139125c..58fad5f82076 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -3215,7 +3215,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
 
 		ice_vsi_map_rings_to_vectors(vsi);
 		if (ice_is_xdp_ena_vsi(vsi)) {
-			vsi->num_xdp_txq = vsi->alloc_rxq;
+			vsi->num_xdp_txq = num_possible_cpus();
 			ret = ice_prepare_xdp_rings(vsi, vsi->xdp_prog);
 			if (ret)
 				goto err_vectors;
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index e4c12a87785c..856ae223bf4c 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2638,7 +2638,7 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog,
 	}
 
 	if (!ice_is_xdp_ena_vsi(vsi) && prog) {
-		vsi->num_xdp_txq = vsi->alloc_rxq;
+		vsi->num_xdp_txq = num_possible_cpus();
 		xdp_ring_err = ice_prepare_xdp_rings(vsi, prog);
 		if (xdp_ring_err)
 			NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed");
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index d6d71f82142f..bc64610df7cb 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -297,7 +297,7 @@ void ice_finalize_xdp_rx(struct ice_rx_ring *rx_ring, unsigned int xdp_res)
 
 	if (xdp_res & ICE_XDP_TX) {
 		struct ice_tx_ring *xdp_ring =
-			rx_ring->vsi->xdp_rings[rx_ring->q_index];
+			rx_ring->vsi->xdp_rings[smp_processor_id()];
 
 		ice_xdp_ring_update_tail(xdp_ring);
 	}
-- 
2.31.1

