Date: Tue, 3 Feb 2026 20:12:37 -0800
From: Jakub Kicinski
To: xuanzhuo@linux.alibaba.com
Cc: lorenzo@kernel.org, andrew+netdev@lunn.ch, pabeni@redhat.com,
 vadim.fedorenko@linux.dev, davem@davemloft.net, guwen@linux.alibaba.com,
 lulie@linux.alibaba.com, hkallweit1@gmail.com, edumazet@google.com,
 lukas.bulwahn@redhat.com, andrew@lunn.ch, dong100@mucse.com,
 dust.li@linux.alibaba.com, netdev@vger.kernel.org
Subject: Re: [net-next,v25,4/6] eea: create/destroy rx,tx queues for netdevice open and stop
Message-ID: <20260203201237.35c5f756@kernel.org>
In-Reply-To: <20260204040054.1698677-2-kuba@kernel.org>
References: <20260202090638.46836-5-xuanzhuo@linux.alibaba.com>
 <20260204040054.1698677-2-kuba@kernel.org>
X-Mailing-List: netdev@vger.kernel.org

On Tue, 3 Feb 2026 20:00:55 -0800 Jakub Kicinski wrote:
> > +	err = enet_bind_new_q_and_cfg(enet, ctx);
> > +	if (err) {
> > +		netdev_err(enet->netdev,
> > +			   "eea reset: bind new queues failed. err %d\n",
> > +			   err);
> > +
> > +		return err;
> > +	}
>
> When enet_bind_new_q_and_cfg() fails, what happens to the queues allocated
> by eea_alloc_rxtx_q_mem() at line 289? They're now assigned to ctx->rx and
> ctx->tx but haven't been bound to enet yet.
>
> After eea_netdev_stop() sets enet->started = false, a subsequent call to
> eea_netdev_stop() will return early at line 228 without calling
> eea_free_rxtx_q_mem(). If enet_bind_new_q_and_cfg() fails before binding,
> the queues remain in ctx with no cleanup path.
>
> The comment suggests deferring cleanup to "normal NIC cleanup" but
> eea_net_remove() doesn't call eea_free_rxtx_q_mem(), and future reset
> attempts would allocate new queues without freeing these.
I think the AI is slightly confused here, but so am I. I don't see where you
free the previous resources in this flow. enet_bind_new_q_and_cfg() just
overwrites the pointers; who frees the old set of rings?

Also, as I already mentioned in a previous manual review, you are not
pre-allocating enough. You should also request the necessary extra IRQs
_before_ you start tearing down the old state.

> > +static struct sk_buff *eea_rx_build_split_hdr_skb(struct eea_net_rx *rx,
> > +						  struct eea_rx_ctx *ctx)
> > +{
> > +	struct eea_rx_meta *meta = ctx->meta;
> > +	struct sk_buff *skb;
> > +	u32 truesize;
> > +
> > +	dma_sync_single_for_cpu(rx->enet->edev->dma_dev, meta->hdr_dma,
> > +				ctx->hdr_len, DMA_FROM_DEVICE);
> > +
> > +	skb = napi_alloc_skb(&rx->napi, ctx->hdr_len);
> > +	if (unlikely(!skb))
> > +		return NULL;
> > +
> > +	truesize = meta->headroom + ctx->len;
> > +
> > +	skb_put_data(skb, ctx->meta->hdr_addr, ctx->hdr_len);
> > +
> > +	if (ctx->len) {
> > +		skb_add_rx_frag(skb, 0, meta->page,
> > +				meta->offset + meta->headroom,
> > +				ctx->len, truesize);
> > +
> > +		eea_consume_rx_buffer(rx, meta, truesize);
> > +	}
>
> Is the truesize calculation correct for split header mode? Looking at line
> 255, truesize is calculated as meta->headroom + ctx->len.
>
> In eea_rx_post() at line 500, buffers are allocated with space for
> [headroom][data][tailroom], where tailroom is typically 128 bytes for
> skb_shared_info. The tailroom is reserved but not included in the truesize
> calculation here.
>
> When eea_consume_rx_buffer() advances meta->offset by only
> (headroom + data_len), the reserved tailroom space remains unconsumed. After
> alignment in meta_align_offset(), the next fragment may overlap with the
> previous fragment's tailroom space.
> Compare with the non-split header path in eea_rx_build_skb() at line 290,
> which includes shinfo_size in truesize:
>
>	truesize = meta->headroom + ctx->len + shinfo_size;
>
> Should the split header path also include meta->tailroom or shinfo_size in
> the truesize calculation?

This one - I think the AI is just confused by how frags work.