Subject: [Hackathon] Netback session notes
From: Zoltan Kiss @ 2014-06-03 12:59 UTC
To: Xen-devel@lists.xen.org
[Thanks to Joao Martins for writing this up. I've edited it and added a
lot of notes.]
Compound page skb problem
==========================
- compound pages were introduced in 3.8/3.9
- compound pages can cross a page boundary and be more than 4k
- linear buffer: pointed to by skb->data; it can now have any size and
span a page boundary
- plus up to MAX_SKB_FRAGS frags, which are additional buffers with the
same constraints as the linear buffer
- this can increase the number of slots needed on the ring
- slots in the backend are limited to 18 for historical reasons
- MAX_SKB_FRAGS changed from 18 to 17 around 3.2, by the way
- a packet can't be more than 64k at the moment
- we need a slot to grant every 4k page in a compound page
- the worst case skb looks like this (when the max size is 64k; see the
sketch after this list):
  linear buffer: 70 bytes, spanning a page boundary = 2 slots
  15 frags: 1+PAGE_SIZE+1 bytes each, where those two 1-byte pieces
  spill over into the adjacent pages = 3 * 15 = 45 slots
  2 frags: 2 bytes, spanning a page boundary = 2 * 2 = 4 slots
  SUM: 51 slots
- usually it is only off by one, but who knows when we will run into a
use case where these things happen more often
- in the backend we already use skb_shinfo(skb)->frag_list to handle
guests which send one more slot, but that puts the penalty on Dom0
- we shouldn't let Dom0 pay the performance penalty for this
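A minimal, self-contained sketch of the slot arithmetic above, assuming
a 4k PAGE_SIZE; buf_slots() and the chosen offsets are illustrative,
not actual netback code, but the backend has to count grant slots per
buffer in essentially this way:

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* number of 4k pages (== grant slots) covered by a buffer starting at
 * absolute byte offset 'offset' with length 'len' */
static unsigned long buf_slots(unsigned long offset, unsigned long len)
{
    if (!len)
        return 0;
    return (offset + len - 1) / PAGE_SIZE - offset / PAGE_SIZE + 1;
}

int main(void)
{
    unsigned long slots = 0;
    int i;

    /* linear buffer: 70 bytes crossing a page boundary */
    slots += buf_slots(PAGE_SIZE - 35, 70);
    /* 15 frags of 1+PAGE_SIZE+1 bytes, spilling into both neighbours */
    for (i = 0; i < 15; i++)
        slots += buf_slots(PAGE_SIZE - 1, PAGE_SIZE + 2);
    /* 2 frags of 2 bytes crossing a page boundary (15 + 2 = 17 frags) */
    for (i = 0; i < 2; i++)
        slots += buf_slots(PAGE_SIZE - 1, 2);

    printf("worst case: %lu slots\n", slots); /* prints 51 */
    return 0;
}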
Option:
* decrease the GSO max size (see the sketch after this option)
* might impact performance badly
* doesn't guarantee that the frontend won't receive such a packet, it
just decreases the probability
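A hedged illustration of what the GSO cap could look like on the
frontend side; netif_set_gso_max_size() is the existing core helper,
while xennet_limit_gso() and the 32k value are made up for the example:

#include <linux/netdevice.h>

/* cap the largest GSO skb the stack will build for this device; this
 * only lowers the probability of an oversized slot count, it does not
 * remove the worst case */
static void xennet_limit_gso(struct net_device *dev)
{
    netif_set_gso_max_size(dev, 32 * 1024);
}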
Option:
* Map the guest's compound pages to adjacent pages in the backend, and
present them as compound pages in the backend as well
* Use IOMMU mapping to make the pages received by netback contiguous
for the device
* Increase the limit of slots a guest can send
* this needs PV IOMMU working, and the feature has to be negotiated
through xenstore -> this can only work in the long term
Option:
* straighten out packets in the frontend
* a patch was already proposed by Zoltan; it is a simple and slow way
to do it: allocate 4k pages for the frags and copy everything into
these nicely aligned buffers (see the sketch after this option)
* Stefan Bader proposed another solution, which is faster but doesn't
work in all of the use cases
* Zoltan is working on another algorithm, which is more effective and
handles every scenario, but is therefore more complicated
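A rough sketch of the simple-and-slow approach (not the actual patch),
using the 3.x-era skb frag helpers; xennet_align_frags() is a made-up
name, and error unwinding, truesize accounting, highmem mapping and the
splitting of frags larger than 4k are all left out:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/skbuff.h>
#include <linux/string.h>

/* copy every frag into a freshly allocated, page-aligned 4k page so
 * that no frag crosses a page boundary */
static int xennet_align_frags(struct sk_buff *skb)
{
    struct skb_shared_info *shinfo = skb_shinfo(skb);
    int i;

    for (i = 0; i < shinfo->nr_frags; i++) {
        skb_frag_t *frag = &shinfo->frags[i];
        unsigned int len = skb_frag_size(frag);
        struct page *page;

        if (len > PAGE_SIZE)
            return -EINVAL; /* such a frag would have to be split */

        page = alloc_page(GFP_ATOMIC);
        if (!page)
            return -ENOMEM;

        memcpy(page_address(page), skb_frag_address(frag), len);

        skb_frag_unref(skb, i);          /* drop the old, unaligned page */
        __skb_frag_set_page(frag, page); /* point the frag at the copy */
        frag->page_offset = 0;
    }
    return 0;
}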
Option:
* Turn off compound pages for the network stack if we are on Xen?
* That only helps if the data is coming through a socket, but we should
expect other sources as well
* Probably it wouldn't be accepted upstream, as it is a Xen-specific
hack on the core networking stack
Slot Estimation (RX)
====================
We check whether the guest has offered the necessary number of slots to
fit the packet going towards the guest.
We want to get rid of the estimation and instead just try to fit the
packet into what is in the ring; if it doesn't fit, stop and try again
later (see the sketch below).
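A toy, self-contained model of that direction; the ring and packet
types and the numbers here are made up for illustration, and the real
code would derive the exact slot count from the skb layout (as in the
worst-case sketch above) and the free slots from the RX ring indices:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct rx_ring {
    unsigned int req_prod;   /* slots posted by the guest so far  */
    unsigned int req_cons;   /* slots already consumed by backend */
};

static unsigned int ring_free(const struct rx_ring *ring)
{
    return ring->req_prod - ring->req_cons;
}

struct packet {
    unsigned int slots_needed;   /* exact count, not an estimate */
};

/* push packets towards the guest as long as they fit; stop at the
 * first one that does not and let the caller retry later */
static bool push_packets(struct rx_ring *ring,
                         const struct packet *q, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (q[i].slots_needed > ring_free(ring))
            return false;
        ring->req_cons += q[i].slots_needed;   /* "send" the packet */
    }
    return true;
}

int main(void)
{
    struct rx_ring ring = { .req_prod = 20, .req_cons = 0 };
    const struct packet queue[] = { { 2 }, { 18 }, { 3 } };

    if (!push_packets(&ring, queue, 3))
        printf("stopped with %u free slots, waiting for the guest\n",
               ring_free(&ring));
    return 0;
}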
Testing pktgen
==============
Testing netback with the in-kernel pktgen. It lets us test with
different packet sizes and different ways of composing the skbs, for
example with specific frag sizes.
https://www.kernel.org/doc/Documentation/networking/pktgen.txt