From: Saeed Mahameed
To: "David S. Miller", Jakub Kicinski, Paolo Abeni, Eric Dumazet
Cc: Saeed Mahameed, netdev@vger.kernel.org, Tariq Toukan, Dragos Tatulea
Subject: [net-next 11/15] net/mlx5e: RX, Hook NAPIs to page pools
Date: Thu, 20 Apr 2023 18:38:46 -0700
Message-Id: <20230421013850.349646-12-saeed@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230421013850.349646-1-saeed@kernel.org>
References: <20230421013850.349646-1-saeed@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: netdev@vger.kernel.org

From: Dragos Tatulea

Link the NAPI to the rq page_pool to improve page_pool cache usage
during skb recycling.

Here are the observed improvements for an iperf single-stream test case:
- For 1500 MTU and legacy rq, a 20% improvement in cache usage.
- For 9K MTU, 33-40% page_pool cache usage improvements for both
  striding and legacy rq (depending on whether the application runs on
  the same core as the rq or not).

Signed-off-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
Signed-off-by: Saeed Mahameed
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 7eb1eeb115ca..f5504b699fcf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -857,6 +857,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		pp_params.pool_size = pool_size;
 		pp_params.nid       = node;
 		pp_params.dev       = rq->pdev;
+		pp_params.napi      = rq->cq.napi;
 		pp_params.dma_dir   = rq->buff.map_dir;
 		pp_params.max_len   = PAGE_SIZE;

-- 
2.39.2
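
For readers less familiar with the page_pool API, the one-line change above is
the whole hook-up: the driver only has to point page_pool_params::napi at the
RX queue's NAPI before creating the pool, and page_pool can then use its
lockless direct cache when skbs are recycled from that same NAPI/softirq
context. Below is a minimal, self-contained sketch of that pattern; it is not
part of the patch, and the function name and arguments are hypothetical.

/* Illustrative sketch only: create an RX page_pool linked to a NAPI instance.
 * With .napi set, pages freed from the same NAPI/softirq context can go back
 * into the pool's direct (lockless) cache, which is the effect measured in
 * the commit message above.
 */
#include <net/page_pool.h>

static struct page_pool *example_create_rx_pool(struct device *dev,
						struct napi_struct *napi,
						int node, unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size	= pool_size,
		.nid		= node,
		.dev		= dev,
		.napi		= napi,		/* hook the RX NAPI to the pool */
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
	};

	/* Returns a valid pool or an ERR_PTR() on failure. */
	return page_pool_create(&pp_params);
}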