* performance bug in virtio net xdp
@ 2020-05-06  8:08 Michael S. Tsirkin
From: Michael S. Tsirkin @ 2020-05-06  8:08 UTC
  To: Jason Wang
  Cc: virtualization, netdev, linux-kernel, bpf, Jesper Dangaard Brouer,
	Eugenio Perez Martin

So for mergeable buffers, we use the EWMA machinery to guess the correct
buffer size. If we don't guess correctly, XDP has to do aggressive copies.

The problem is that the XDP paths do not update the EWMA at all, except
sometimes with XDP_PASS. So whatever estimate we happen to have before
XDP is attached will mostly stay around.

The fix is probably to update the EWMA unconditionally.
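
For reference, virtio_net declares the estimator with DECLARE_EWMA
(with a weight reciprocal of 64, if memory serves), so each new sample
contributes about 1/64. A toy userspace model of why a stale estimate
sticks around (the real macro in <linux/average.h> uses fixed-point
shifts; floats here are only for intuition):

#include <stdio.h>

/* Toy model of DECLARE_EWMA(pkt_len, 0, 64): new samples are mixed in
 * with weight 1/64, so the average adapts slowly and carries history. */
struct ewma_pkt_len { double internal; };

static void ewma_pkt_len_add(struct ewma_pkt_len *e, unsigned long len)
{
	if (e->internal == 0)
		e->internal = len;
	else
		e->internal += ((double)len - e->internal) / 64;
}

static unsigned long ewma_pkt_len_read(const struct ewma_pkt_len *e)
{
	return (unsigned long)e->internal;
}

int main(void)
{
	struct ewma_pkt_len avg = { 0 };
	int i;

	/* small-packet traffic seen before XDP is attached */
	for (i = 0; i < 1000; i++)
		ewma_pkt_len_add(&avg, 64);

	/* if the XDP paths never call ewma_pkt_len_add(), later refills
	 * keep sizing buffers from this stale value */
	printf("avg pkt len: %lu\n", ewma_pkt_len_read(&avg));
	return 0;
}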

-- 
MST



* Re: performance bug in virtio net xdp
From: Jason Wang @ 2020-05-06  8:37 UTC
  To: Michael S. Tsirkin
  Cc: virtualization, netdev, linux-kernel, bpf, Jesper Dangaard Brouer,
	Eugenio Perez Martin


On 2020/5/6 4:08 PM, Michael S. Tsirkin wrote:
> So for mergeable buffers, we use the EWMA machinery to guess the correct
> buffer size. If we don't guess correctly, XDP has to do aggressive copies.
>
> The problem is that the XDP paths do not update the EWMA at all, except
> sometimes with XDP_PASS. So whatever estimate we happen to have before
> XDP is attached will mostly stay around.


It looks OK to me, since we always use PAGE_SIZE when XDP is enabled in
get_mergeable_buf_len()?
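
For reference, roughly the logic in question (a simplified, from-memory
paraphrase of get_mergeable_buf_len() around v5.6, so field names and
clamping details may be slightly off):

static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
					  struct ewma_pkt_len *avg_pkt_len,
					  unsigned int room)
{
	const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
	unsigned int len;

	/* XDP attached -> headroom reserved -> EWMA bypassed entirely */
	if (room)
		return PAGE_SIZE - room;

	/* normal path: size the buffer from the running average */
	len = hdr_len + clamp_t(unsigned int,
				ewma_pkt_len_read(avg_pkt_len),
				rq->min_buf_len, PAGE_SIZE - hdr_len);
	return ALIGN(len, L1_CACHE_BYTES);
}

So with XDP enabled, every receive buffer is page-sized regardless of
what the average says.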

Thanks


>
> The fix is probably to update the EWMA unconditionally.
>



* Re: performance bug in virtio net xdp
From: Jesper Dangaard Brouer @ 2020-05-06  8:37 UTC
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, netdev, linux-kernel, bpf,
	Eugenio Perez Martin, brouer

On Wed, 6 May 2020 04:08:27 -0400
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> So for mergeable buffers, we use the EWMA machinery to guess the correct
> buffer size. If we don't guess correctly, XDP has to do aggressive copies.
> 
> The problem is that the XDP paths do not update the EWMA at all, except
> sometimes with XDP_PASS. So whatever estimate we happen to have before
> XDP is attached will mostly stay around.
> 
> The fix is probably to update the EWMA unconditionally.

I personally find the code hard to follow, and I admit it took me some
time to understand this code path (so I might still be wrong).

In patch[1] I tried to explain (my understanding):

  In receive_mergeable() the frame size is more dynamic. There are two
  basic cases: (1) the buffer size is based on an exponentially weighted
  moving average (see DECLARE_EWMA) of the packet length, or (2) if
  virtnet_get_headroom() returns any headroom, the buffer size is
  PAGE_SIZE. The ctx pointer is this time used for encoding two values:
  the buffer length "truesize" and the headroom. In case (1), if the rx
  buffer size is underestimated, the packet will have been split over
  more buffers (the num_buf info in virtio_net_hdr_mrg_rxbuf is placed
  at the top of the buffer area). If that happens, the XDP path does an
  xdp_linearize_page operation.


The EWMA code is not used when headroom is defined, which is e.g. the
case when running XDP.
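
Condensed from the refill path, the two cases and the ctx encoding look
roughly like this (a sketch; the shift constant and exact helper
signatures may differ in-tree):

/* ctx packs the chosen headroom and truesize into one pointer value */
#define MRG_CTX_HEADER_SHIFT 22

static void *mergeable_len_to_ctx(unsigned int truesize,
				  unsigned int headroom)
{
	return (void *)(unsigned long)((headroom << MRG_CTX_HEADER_SHIFT) |
				       truesize);
}

static int add_recvbuf_mergeable(struct virtnet_info *vi,
				 struct receive_queue *rq, gfp_t gfp)
{
	unsigned int headroom = virtnet_get_headroom(vi); /* 0 unless XDP */
	unsigned int len;
	void *ctx;

	/* case (2) when headroom != 0, case (1) EWMA sizing otherwise */
	len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, headroom);
	ctx = mergeable_len_to_ctx(len, headroom);
	/* ... carve len + headroom out of the page frag and post the
	 * buffer to the virtqueue together with ctx ... */
	return 0;
}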


[1] https://lore.kernel.org/netdev/158824572816.2172139.1358700000273697123.stgit@firesoul/
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



* Re: performance bug in virtio net xdp
From: Michael S. Tsirkin @ 2020-05-06 11:57 UTC
  To: Jesper Dangaard Brouer
  Cc: Jason Wang, virtualization, netdev, linux-kernel, bpf,
	Eugenio Perez Martin

On Wed, May 06, 2020 at 10:37:57AM +0200, Jesper Dangaard Brouer wrote:
> On Wed, 6 May 2020 04:08:27 -0400
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> 
> > So for mergeable buffers, we use the EWMA machinery to guess the correct
> > buffer size. If we don't guess correctly, XDP has to do aggressive copies.
> > 
> > The problem is that the XDP paths do not update the EWMA at all, except
> > sometimes with XDP_PASS. So whatever estimate we happen to have before
> > XDP is attached will mostly stay around.
> > 
> > The fix is probably to update the EWMA unconditionally.
> 
> I personally find the code hard to follow, and I admit it took me some
> time to understand this code path (so I might still be wrong).
> 
> In patch[1] I tried to explain (my understanding):
> 
>   In receive_mergeable() the frame size is more dynamic. There are two
>   basic cases: (1) the buffer size is based on an exponentially weighted
>   moving average (see DECLARE_EWMA) of the packet length, or (2) if
>   virtnet_get_headroom() returns any headroom, the buffer size is
>   PAGE_SIZE. The ctx pointer is this time used for encoding two values:
>   the buffer length "truesize" and the headroom. In case (1), if the rx
>   buffer size is underestimated, the packet will have been split over
>   more buffers (the num_buf info in virtio_net_hdr_mrg_rxbuf is placed
>   at the top of the buffer area). If that happens, the XDP path does an
>   xdp_linearize_page operation.
> 
> 
> The EWMA code is not used when headroom is defined, which is e.g. the
> case when running XDP.
> 
> 
> [1] https://lore.kernel.org/netdev/158824572816.2172139.1358700000273697123.stgit@firesoul/

You are right.
So I guess the problem is just inconsistency?

When the XDP program returns XDP_PASS and it all fits in one page, we
trigger

	ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);

If it does not return XDP_PASS, or does not fit in one page, we don't.

Given that XDP does not use the EWMA for sizing, let's not update the
average either.
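
In condensed form, the asymmetry looks like this (a control-flow sketch,
not the literal in-tree code; the multi-buffer merge loop and error
handling are elided):

static struct sk_buff *receive_mergeable(/* ... */)
{
	struct sk_buff *head_skb;

	if (xdp_prog) {
		switch (bpf_prog_run_xdp(xdp_prog, &xdp)) {
		case XDP_PASS:
			break;		/* may fall through to the skb path */
		case XDP_TX:
		case XDP_REDIRECT:
		case XDP_DROP:
		default:
			return NULL;	/* average never touched */
		}
	}

	head_skb = page_to_skb(/* ... */);
	/* only the skb-building path updates the estimate: */
	ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);
	return head_skb;
}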


> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer



* Re: performance bug in virtio net xdp
From: Michael S. Tsirkin @ 2020-05-06 11:57 UTC
  To: Jason Wang
  Cc: virtualization, netdev, linux-kernel, bpf, Jesper Dangaard Brouer,
	Eugenio Perez Martin

On Wed, May 06, 2020 at 04:37:41PM +0800, Jason Wang wrote:
> 
> On 2020/5/6 4:08 PM, Michael S. Tsirkin wrote:
> > So for mergeable buffers, we use the EWMA machinery to guess the correct
> > buffer size. If we don't guess correctly, XDP has to do aggressive copies.
> > 
> > The problem is that the XDP paths do not update the EWMA at all, except
> > sometimes with XDP_PASS. So whatever estimate we happen to have before
> > XDP is attached will mostly stay around.
> 
> 
> It looks OK to me, since we always use PAGE_SIZE when XDP is enabled in
> get_mergeable_buf_len()?
> 
> Thanks
> 

Oh right. Good point! Answered in another thread.

> > 
> > The fix is probably to update the EWMA unconditionally.
> > 


