Date: Mon, 23 Mar 2026 13:10:37 -0400
From: "Michael S. Tsirkin"
To: Omar Elghoul
Cc: vishs@meta.com, andrew+netdev@lunn.ch, davem@davemloft.net, dw@davidwei.uk,
	edumazet@google.com, eperezma@redhat.com, ilias.apalodimas@linaro.org,
	jasowang@redhat.com, kernel-team@meta.com, kuba@kernel.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org, pabeni@redhat.com,
	technoboy85@gmail.com, virtualization@lists.linux.dev,
	xuanzhuo@linux.alibaba.com
Subject: Re: [PATCH net-next v11] virtio_net: add page_pool support for buffer allocation
Message-ID: <20260323131033-mutt-send-email-mst@kernel.org>
References: <20260310183107.2822016-1-vishs@meta.com>
	<20260323150136.14452-1-oelghoul@linux.ibm.com>
	<20260323114313-mutt-send-email-mst@kernel.org>
	<8e0a5562-4511-41df-9993-a77e51025e95@linux.ibm.com>
In-Reply-To: <8e0a5562-4511-41df-9993-a77e51025e95@linux.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Mar 23, 2026 at 12:54:03PM -0400, Omar Elghoul wrote:
> On 3/23/26 11:52 AM, Michael S. Tsirkin wrote:
> > On Mon, Mar 23, 2026 at 11:01:31AM -0400, Omar Elghoul wrote:
> > > [...]
> >
> > Well... I am not sure how I missed it. Obvious in hindsight:
> >
> > static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
> > 			void *buf, unsigned int len, void **ctx,
> > 			unsigned int *xdp_xmit,
> > 			struct virtnet_rq_stats *stats)
> > {
> > 	struct net_device *dev = vi->dev;
> > 	struct sk_buff *skb;
> > 	u8 flags;
> >
> > 	if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
> > 		pr_debug("%s: short packet %i\n", dev->name, len);
> > 		DEV_STATS_INC(dev, rx_length_errors);
> > 		virtnet_rq_free_buf(vi, rq, buf);
> > 		return;
> > 	}
> >
> > 	/* About the flags below:
> > 	 * 1. Save the flags early, as the XDP program might overwrite them.
> > 	 *    These flags ensure packets marked as VIRTIO_NET_HDR_F_DATA_VALID
> > 	 *    stay valid after XDP processing.
> > 	 * 2. XDP doesn't work with partially checksummed packets (refer to
> > 	 *    virtnet_xdp_set()), so packets marked as
> > 	 *    VIRTIO_NET_HDR_F_NEEDS_CSUM get dropped during XDP processing.
> > 	 */
> > 	if (vi->mergeable_rx_bufs) {
> > 		flags = ((struct virtio_net_common_hdr *)buf)->hdr.flags;
> > 		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
> > 					stats);
> > 	} else if (vi->big_packets) {
> > 		void *p = page_address((struct page *)buf);
> >
> > 		flags = ((struct virtio_net_common_hdr *)p)->hdr.flags;
> > 		skb = receive_big(dev, vi, rq, buf, len, stats);
> > 	} else {
> > 		flags = ((struct virtio_net_common_hdr *)buf)->hdr.flags;
> > 		skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
> > 	}
> >
> > So we are reading the header, before dma sync, which is within
> > receive_mergeable and friends:
>
> Thank you for your analysis and explanation.
>
> > static struct sk_buff *receive_mergeable(struct net_device *dev,
> > 					 struct virtnet_info *vi,
> > 					 struct receive_queue *rq,
> > 					 void *buf,
> > 					 void *ctx,
> > 					 unsigned int len,
> > 					 unsigned int *xdp_xmit,
> > 					 struct virtnet_rq_stats *stats)
> > {
> > 	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> > 	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > 	struct page *page = virt_to_head_page(buf);
> > 	int offset = buf - page_address(page);
> > 	struct sk_buff *head_skb, *curr_skb;
> > 	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> > 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> >
> > 	head_skb = NULL;
> >
> > 	if (rq->use_page_pool_dma)
> > 		page_pool_dma_sync_for_cpu(rq->page_pool, page, offset, len);
> >
> > Just as a test, the below should fix it (compiled only), but the real
> > fix is more complex since we need to be careful to avoid expensive syncing
> > twice.
>
> I applied your patch and tested it on my system. With this change, I could
> not reproduce the same error anymore. I would be happy to test a proper fix
> once you have one.
>
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 97035b49bae7..57b4f5954bed 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -931,9 +931,19 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> >  
> >  static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
> >  {
> > +	void *buf;
> > +
> >  	BUG_ON(!rq->page_pool);
> >  
> > -	return virtqueue_get_buf_ctx(rq->vq, len, ctx);
> > +	buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
> > +	if (buf && rq->use_page_pool_dma && *len) {
> > +		struct page *page = virt_to_head_page(buf);
> > +		int offset = buf - page_address(page);
> > +
> > +		page_pool_dma_sync_for_cpu(rq->page_pool, page, offset, *len);
> > +	}
> > +
> > +	return buf;
> >  }
> >  
> >  static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)

just sent