From: Zhu Yanjun
Date: Wed, 9 Jul 2025 21:44:50 -0700
Subject: Re: [PATCH rdma-next 1/1] net/mlx5: Fix build -Wframe-larger-than warnings
To: Junxian Huang, saeedm@nvidia.com, leon@kernel.org, tariqt@nvidia.com,
    andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
    kuba@kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Message-ID: <2f147207-fca7-460f-92b9-10b220f026b6@linux.dev>
In-Reply-To: <844e996e-09dd-7f1e-cc21-25be65137152@hisilicon.com>
References: <20250710034948.34179-1-yanjun.zhu@linux.dev>
 <844e996e-09dd-7f1e-cc21-25be65137152@hisilicon.com>
X-Mailing-List: netdev@vger.kernel.org

On 2025/7/9 21:09, Junxian Huang wrote:
>
> On 2025/7/10 11:49, Zhu Yanjun wrote:
>> When building, the following warnings appear:
>> "
>> pci_irq.c: In function ‘mlx5_ctrl_irq_request’:
>> pci_irq.c:494:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>
>> pci_irq.c: In function ‘mlx5_irq_request_vector’:
>> pci_irq.c:561:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>
>> eq.c: In function ‘comp_irq_request_sf’:
>> eq.c:897:1: warning: the frame size of 1080 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>
>> irq_affinity.c: In function ‘irq_pool_request_irq’:
>> irq_affinity.c:74:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> "
>>
>> These warnings indicate that the stack frame size exceeds 1024 bytes in
>> these functions.
>>
>> To resolve this, instead of allocating large memory buffers on the stack,
>> it is better to use kvzalloc to allocate memory dynamically on the heap.
>> This approach reduces stack usage and eliminates these frame size warnings.
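
To add some context on where the oversized frames come from: struct
irq_affinity_desc embeds a full struct cpumask, so its size scales with
CONFIG_NR_CPUS. A rough sketch of the arithmetic (assuming
CONFIG_NR_CPUS=8192 for illustration; the exact numbers depend on the
kernel config):

	/* include/linux/interrupt.h */
	struct irq_affinity_desc {
		struct cpumask	mask;		/* NR_CPUS / 8 bytes of bitmap */
		unsigned int	is_managed : 1;
	};

	/*
	 * With CONFIG_NR_CPUS=8192, sizeof(struct cpumask) is 8192 / 8 =
	 * 1024 bytes, so one on-stack descriptor plus a few locals already
	 * exceeds the 1024-byte -Wframe-larger-than limit, which matches
	 * the 1040/1048/1080-byte frames reported above.
	 */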

>> Signed-off-by: Zhu Yanjun
>> ---
>> 1. Build-tested only;
>> 2. It is not a bug fix, so no Fixes tag;
>> ---
>>  drivers/net/ethernet/mellanox/mlx5/core/eq.c  | 16 +++++---
>>  .../mellanox/mlx5/core/irq_affinity.c         | 15 +++++--
>>  .../net/ethernet/mellanox/mlx5/core/pci_irq.c | 40 +++++++++++++------
>>  3 files changed, 51 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
>> index dfb079e59d85..66ce61eee804 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
>> @@ -873,20 +873,26 @@ static int comp_irq_request_sf(struct mlx5_core_dev *dev, u16 vecidx)
>>  {
>>  	struct mlx5_irq_pool *pool = mlx5_irq_table_get_comp_irq_pool(dev);
>>  	struct mlx5_eq_table *table = dev->priv.eq_table;
>> -	struct irq_affinity_desc af_desc = {};
>> +	struct irq_affinity_desc *af_desc;
>>  	struct mlx5_irq *irq;
>>
>> +	af_desc = kvzalloc(sizeof(*af_desc), GFP_KERNEL);
>> +	if (!af_desc)
>> +		return -ENOMEM;
>> +
>>  	/* In case SF irq pool does not exist, fallback to the PF irqs*/
>>  	if (!mlx5_irq_pool_is_sf_pool(pool))
>>  		return comp_irq_request_pci(dev, vecidx);
> ...
>
>> -	af_desc.is_managed = false;
>> -	cpumask_copy(&af_desc.mask, cpu_online_mask);
>> -	cpumask_andnot(&af_desc.mask, &af_desc.mask, &table->used_cpus);
>> -	irq = mlx5_irq_affinity_request(dev, pool, &af_desc);
>> +	af_desc->is_managed = false;
>> +	cpumask_copy(&af_desc->mask, cpu_online_mask);
>> +	cpumask_andnot(&af_desc->mask, &af_desc->mask, &table->used_cpus);
>> +	irq = mlx5_irq_affinity_request(dev, pool, af_desc);
>>  	if (IS_ERR(irq))
>>  		return PTR_ERR(irq);
> ...
>
>> +	kvfree(af_desc);
>> +
>>  	cpumask_or(&table->used_cpus, &table->used_cpus, mlx5_irq_get_affinity_mask(irq));
>>  	mlx5_core_dbg(pool->dev, "IRQ %u mapped to cpu %*pbl, %u EQs on this irq\n",
>>  		      pci_irq_vector(dev->pdev, mlx5_irq_get_index(irq)),
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
>> index 2691d88cdee1..929f93853843 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
>> @@ -47,29 +47,38 @@ static int cpu_get_least_loaded(struct mlx5_irq_pool *pool,
>>  static struct mlx5_irq *
>>  irq_pool_request_irq(struct mlx5_irq_pool *pool, struct irq_affinity_desc *af_desc)
>>  {
>> -	struct irq_affinity_desc auto_desc = {};
>> +	struct irq_affinity_desc *auto_desc;
>>  	struct mlx5_irq *irq;
>>  	u32 irq_index;
>>  	int err;
>>
>> +	auto_desc = kvzalloc(sizeof(*auto_desc), GFP_KERNEL);
>> +	if (!auto_desc)
>> +		return ERR_PTR(-ENOMEM);
>> +
>>  	err = xa_alloc(&pool->irqs, &irq_index, NULL, pool->xa_num_irqs, GFP_KERNEL);
>>  	if (err)
>>  		return ERR_PTR(err);
>
> You forgot to free the kvzalloc() memory in these error branches.

Thanks a lot. I will fix it ASAP.
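
For v2 I plan something like the following in irq_pool_request_irq() (an
untested sketch; the same kvfree() is also needed on the other early
returns this patch adds, e.g. the comp_irq_request_pci() fallback and the
IS_ERR() branch in comp_irq_request_sf()):

	err = xa_alloc(&pool->irqs, &irq_index, NULL, pool->xa_num_irqs,
		       GFP_KERNEL);
	if (err) {
		/* Free the descriptor instead of leaking it on failure */
		kvfree(auto_desc);
		return ERR_PTR(err);
	}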

Yanjun.Zhu

>
> Junxian
>
>> +
>>  	if (pool->irqs_per_cpu) {
>>  		if (cpumask_weight(&af_desc->mask) > 1)
>>  			/* if req_mask contain more then one CPU, set the least loadad CPU
>>  			 * of req_mask
>>  			 */
>>  			cpumask_set_cpu(cpu_get_least_loaded(pool, &af_desc->mask),
>> -					&auto_desc.mask);
>> +					&auto_desc->mask);
>>  		else
>>  			cpu_get(pool, cpumask_first(&af_desc->mask));
>>  	}
>> +
>>  	irq = mlx5_irq_alloc(pool, irq_index,
>> -			     cpumask_empty(&auto_desc.mask) ? af_desc : &auto_desc,
>> +			     cpumask_empty(&auto_desc->mask) ? af_desc : auto_desc,
>>  			     NULL);
>>  	if (IS_ERR(irq))
>>  		xa_erase(&pool->irqs, irq_index);
>> +
>> +	kvfree(auto_desc);
>> +
>>  	return irq;
>>  }
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
>> index 40024cfa3099..48aad94b0a5d 100644
>> --- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
>> @@ -470,26 +470,32 @@ void mlx5_ctrl_irq_release(struct mlx5_core_dev *dev, struct mlx5_irq *ctrl_irq)
>>  struct mlx5_irq *mlx5_ctrl_irq_request(struct mlx5_core_dev *dev)
>>  {
>>  	struct mlx5_irq_pool *pool = ctrl_irq_pool_get(dev);
>> -	struct irq_affinity_desc af_desc;
>> +	struct irq_affinity_desc *af_desc;
>>  	struct mlx5_irq *irq;
>>
>> -	cpumask_copy(&af_desc.mask, cpu_online_mask);
>> -	af_desc.is_managed = false;
>> +	af_desc = kvzalloc(sizeof(*af_desc), GFP_KERNEL);
>> +	if (!af_desc)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	cpumask_copy(&af_desc->mask, cpu_online_mask);
>> +	af_desc->is_managed = false;
>>  	if (!mlx5_irq_pool_is_sf_pool(pool)) {
>>  		/* In case we are allocating a control IRQ from a pci device's pool.
>>  		 * This can happen also for a SF if the SFs pool is empty.
>>  		 */
>>  		if (!pool->xa_num_irqs.max) {
>> -			cpumask_clear(&af_desc.mask);
>> +			cpumask_clear(&af_desc->mask);
>>  			/* In case we only have a single IRQ for PF/VF */
>> -			cpumask_set_cpu(cpumask_first(cpu_online_mask), &af_desc.mask);
>> +			cpumask_set_cpu(cpumask_first(cpu_online_mask), &af_desc->mask);
>>  		}
>>  		/* Allocate the IRQ in index 0. The vector was already allocated */
>> -		irq = irq_pool_request_vector(pool, 0, &af_desc, NULL);
>> +		irq = irq_pool_request_vector(pool, 0, af_desc, NULL);
>>  	} else {
>> -		irq = mlx5_irq_affinity_request(dev, pool, &af_desc);
>> +		irq = mlx5_irq_affinity_request(dev, pool, af_desc);
>>  	}
>>
>> +	kvfree(af_desc);
>> +
>>  	return irq;
>>  }
>>
>> @@ -548,16 +554,26 @@ struct mlx5_irq *mlx5_irq_request_vector(struct mlx5_core_dev *dev, u16 cpu,
>>  {
>>  	struct mlx5_irq_table *table = mlx5_irq_table_get(dev);
>>  	struct mlx5_irq_pool *pool = table->pcif_pool;
>> -	struct irq_affinity_desc af_desc;
>> +	struct irq_affinity_desc *af_desc;
>>  	int offset = MLX5_IRQ_VEC_COMP_BASE;
>> +	struct mlx5_irq *irq;
>> +
>> +	af_desc = kvzalloc(sizeof(*af_desc), GFP_KERNEL);
>> +	if (!af_desc)
>> +		return ERR_PTR(-ENOMEM);
>>
>>  	if (!pool->xa_num_irqs.max)
>>  		offset = 0;
>>
>> -	af_desc.is_managed = false;
>> -	cpumask_clear(&af_desc.mask);
>> -	cpumask_set_cpu(cpu, &af_desc.mask);
>> -	return mlx5_irq_request(dev, vecidx + offset, &af_desc, rmap);
>> +	af_desc->is_managed = false;
>> +	cpumask_clear(&af_desc->mask);
>> +	cpumask_set_cpu(cpu, &af_desc->mask);
>> +
>> +	irq = mlx5_irq_request(dev, vecidx + offset, af_desc, rmap);
>> +
>> +	kvfree(af_desc);
>> +
>> +	return irq;
>>  }
>>
>>  static struct mlx5_irq_pool *

-- 
Best Regards,
Yanjun.Zhu