From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Xing <kerneljasonxing@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, bjorn@kernel.org, magnus.karlsson@intel.com,
	maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
	sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
	andrew+netdev@lunn.ch
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
	Jason Xing <kerneljasonxing@gmail.com>
Subject: [PATCH net-next v3 3/3] xsk: remove spin lock protection of cached_prod
Date: Fri, 28 Nov 2025 21:46:01 +0800
Message-Id: <20251128134601.54678-4-kerneljasonxing@gmail.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20251128134601.54678-1-kerneljasonxing@gmail.com>
References: <20251128134601.54678-1-kerneljasonxing@gmail.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Jason Xing <kerneljasonxing@gmail.com>

Remove the spin lock protection and adjust the related functions
accordingly. cached_prod is now fully converted to an atomic, which
helps in the contended case where the umem is shared between multiple
xsks. Removing the lock also avoids touching one extra cacheline in the
extremely hot path, which directly improves performance by around 5%
across different platforms, as Paolo found [1].

[1]: https://lore.kernel.org/all/4c645223-8c52-40d3-889b-f3cf7fa09f89@redhat.com/

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/net/xsk_buff_pool.h |  5 -----
 net/xdp/xsk.c               | 21 ++++-----------------
 net/xdp/xsk_buff_pool.c     |  1 -
 3 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 92a2358c6ce3..0b1abdb99c9e 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -90,11 +90,6 @@ struct xsk_buff_pool {
 	 * destructor callback.
 	 */
 	spinlock_t cq_prod_lock;
-	/* Mutual exclusion of the completion ring in the SKB mode.
-	 * Protect: when sockets share a single cq when the same netdev
-	 * and queue id is shared.
-	 */
-	spinlock_t cq_cached_prod_lock;
 
 	struct xdp_buff_xsk *free_heads[];
 };
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index b63409b1422e..ae8a92c168b8 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -546,17 +546,6 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
 	return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
 }
 
-static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
-{
-	int ret;
-
-	spin_lock(&pool->cq_cached_prod_lock);
-	ret = xsk_cq_cached_prod_reserve(pool->cq);
-	spin_unlock(&pool->cq_cached_prod_lock);
-
-	return ret;
-}
-
 static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 				      struct sk_buff *skb)
 {
@@ -585,11 +574,9 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 	spin_unlock_irqrestore(&pool->cq_prod_lock, flags);
 }
 
-static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
+static void xsk_cq_cached_prod_cancel(struct xsk_buff_pool *pool, u32 n)
 {
-	spin_lock(&pool->cq_cached_prod_lock);
 	atomic_sub(n, &pool->cq->cached_prod_atomic);
-	spin_unlock(&pool->cq_cached_prod_lock);
 }
 static void xsk_inc_num_desc(struct sk_buff *skb)
 {
@@ -643,7 +630,7 @@ static void xsk_consume_skb(struct sk_buff *skb)
 	}
 
 	skb->destructor = sock_wfree;
-	xsk_cq_cancel_locked(xs->pool, num_descs);
+	xsk_cq_cached_prod_cancel(xs->pool, num_descs);
 	/* Free skb without triggering the perf drop trace */
 	consume_skb(skb);
 	xs->skb = NULL;
@@ -860,7 +847,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 		xskq_cons_release(xs->tx);
 	} else {
 		/* Let application retry */
-		xsk_cq_cancel_locked(xs->pool, 1);
+		xsk_cq_cached_prod_cancel(xs->pool, 1);
 	}
 
 	return ERR_PTR(err);
@@ -898,7 +885,7 @@ static int __xsk_generic_xmit(struct sock *sk)
 		 * if there is space in it. This avoids having to implement
 		 * any buffering in the Tx path.
 		 */
-		err = xsk_cq_reserve_locked(xs->pool);
+		err = xsk_cq_cached_prod_reserve(xs->pool->cq);
 		if (err) {
 			err = -EAGAIN;
 			goto out;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 51526034c42a..9539f121b290 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -91,7 +91,6 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 	INIT_LIST_HEAD(&pool->xsk_tx_list);
 	spin_lock_init(&pool->xsk_tx_list_lock);
 	spin_lock_init(&pool->cq_prod_lock);
-	spin_lock_init(&pool->cq_cached_prod_lock);
 	refcount_set(&pool->users, 1);
 
 	pool->fq = xs->fq_tmp;
-- 
2.41.3
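
For reviewers who want to see the scheme in isolation: below is a minimal
user-space sketch (not kernel code, and not part of this patch) of the
lock-free reserve/cancel pattern on a shared producer cursor, written with
C11 atomics. The cq_model and cq_reserve names are invented for the example,
and the exact free-space check inside the real xsk_cq_cached_prod_reserve()
is an assumption here; only the cancel path (a plain atomic subtract) is
shown verbatim by the diff above.

/*
 * Illustrative sketch only: models reserving one completion-ring slot
 * with a single atomic RMW instead of lock/advance/unlock.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cq_model {
	_Atomic uint32_t cached_prod;	/* shared producer cursor */
	uint32_t cached_cons;		/* consumer cursor, refreshed elsewhere */
	uint32_t nentries;		/* ring size */
};

/* Reserve one slot: one atomic add on the ring's own cacheline, no
 * separate lock cacheline is touched. How the kernel checks for free
 * space is an assumption in this sketch. */
static bool cq_reserve(struct cq_model *q)
{
	uint32_t prod = atomic_fetch_add(&q->cached_prod, 1);

	if (prod - q->cached_cons < q->nentries)
		return true;

	/* Ring full: back out, like xsk_cq_cached_prod_cancel(pool, 1). */
	atomic_fetch_sub(&q->cached_prod, 1);
	return false;
}

int main(void)
{
	struct cq_model q = { .nentries = 4 };

	for (int i = 0; i < 6; i++)
		printf("reserve %d: %s\n", i, cq_reserve(&q) ? "ok" : "-EAGAIN");
	return 0;
}

The point of the change is visible in cq_reserve(): contending producers
serialize on one atomic counter they must touch anyway, rather than on an
additional spinlock cacheline in the transmit hot path.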