From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, Jason Wang <jasowang@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>
Subject: Re: [PATCH v4 0/3] virtio support cache indirect desc
Date: Thu, 11 Nov 2021 10:02:01 -0500 [thread overview]
Message-ID: <20211111100044-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <1636613527.8447719-1-xuanzhuo@linux.alibaba.com>
On Thu, Nov 11, 2021 at 02:52:07PM +0800, Xuan Zhuo wrote:
> On Wed, 10 Nov 2021 07:53:44 -0500, Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Mon, Nov 08, 2021 at 10:47:40PM +0800, Xuan Zhuo wrote:
> > > On Mon, 8 Nov 2021 08:49:27 -0500, Michael S. Tsirkin <mst@redhat.com> wrote:
> > > >
> > > > Hmm a bunch of comments got ignored. See e.g.
> > > > https://lore.kernel.org/r/20211027043851-mutt-send-email-mst%40kernel.org
> > > > if they aren't relevant add code comments or commit log text explaining the
> > > > design choice please.
> > >
> > > I did respond to the related questions; I suspect some of those emails may
> > > have been lost.
> > >
> > > I have collected the following six questions; if I have missed any, please
> > > let me know.
> > >
> > > 1. use list_head
> > > In the earliest version I used pointers directly. You suggested llist_head,
> > > but llist_head relies on atomic operations, and there is no concurrency to
> > > worry about here, so I used list_head instead.
> > >
> > > In fact, list_head does not add any allocated space, since the same memory is reused:
> > >
> > > use as desc array: | vring_desc | vring_desc | vring_desc | vring_desc |
> > > use as queue item: | list_head ........................................|
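To illustrate the point (sketch only, not the patch code; desc_cache_chunk is a made-up name):

/* A cached chunk is a fixed-size descriptor array; while it sits in the
 * free cache the same memory is reinterpreted as a list node, so the
 * list linkage needs no extra space.
 */
struct desc_cache_chunk {
        union {
                struct vring_desc desc[VIRT_QUEUE_CACHE_DESC_NUM]; /* in use */
                struct list_head  node;                            /* while cached */
        };
};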
> >
> > the concern is that you touch many cache lines when removing an entry.
> >
> > I suggest something like:
> >
> > llist: add a non-atomic list_del_first
> >
> > One has to know what one's doing, but if one has locked the list
> > preventing all accesses, then it's ok to just pop off an entry without
> > atomics.
> >
>
> Oh, great. My way of solving the problem was too conservative.
>
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > ---
> >
> > diff --git a/include/linux/llist.h b/include/linux/llist.h
> > index 24f207b0190b..13a47dddb12b 100644
> > --- a/include/linux/llist.h
> > +++ b/include/linux/llist.h
> > @@ -247,6 +247,17 @@ static inline struct llist_node *__llist_del_all(struct llist_head *head)
> >
> > extern struct llist_node *llist_del_first(struct llist_head *head);
> >
> > +static inline struct llist_node *__llist_del_first(struct llist_head *head)
> > +{
> > + struct llist_node *first = head->first;
> > +
> > + if (!first)
> > + return NULL;
> > +
> > + head->first = first->next;
> > + return first;
> > +}
> > +
> > struct llist_node *llist_reverse_order(struct llist_node *head);
> >
> > #endif /* LLIST_H */
> >
> >
> > -----
> >
> >
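As a rough usage sketch on top of that helper (the desc_cache field and the cast back to a descriptor array are illustrative, not the actual patch):

/* Pop a cached chunk if one is available, otherwise fall back to a fresh
 * allocation.  The non-atomic __llist_del_first() is only safe because
 * the virtqueue code already serializes all access to the cache.
 */
static struct vring_desc *desc_cache_get(struct vring_virtqueue *vq,
                                         unsigned int total_sg, gfp_t gfp)
{
        struct llist_node *node = __llist_del_first(&vq->desc_cache);

        if (node)       /* the free chunk doubles as the llist node */
                return (struct vring_desc *)node;

        return kmalloc_array(total_sg, sizeof(struct vring_desc), gfp);
}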
> > > 2.
> > > > > + if (vq->use_desc_cache && total_sg <= VIRT_QUEUE_CACHE_DESC_NUM) {
> > > > > + if (vq->desc_cache_chain) {
> > > > > + desc = vq->desc_cache_chain;
> > > > > + vq->desc_cache_chain = (void *)desc->addr;
> > > > > + goto got;
> > > > > + }
> > > > > + n = VIRT_QUEUE_CACHE_DESC_NUM;
> > > >
> > > > Hmm. This will allocate more entries than actually used. Why do it?
> > >
> > >
> > > That is because the size of each cache item is fixed, and the logic has been
> > > modified in the latest code, so I believe this problem no longer exists.
> > >
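In other words, roughly (a sketch of the scheme in the quoted fragment, not code from the series):

/* A free chunk stores the pointer to the next free chunk in the addr
 * field of its first descriptor, so the free list needs no separate node.
 */
static void desc_cache_put_chain(struct vring_virtqueue *vq,
                                 struct vring_desc *desc)
{
        desc->addr = (u64)(unsigned long)vq->desc_cache_chain;
        vq->desc_cache_chain = desc;
}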
> > >
> > > 3.
> > > > What bothers me here is what happens if cache gets
> > > > filled on one numa node, then used on another?
> > >
> > > I am wondering about something else: how does the cross-NUMA case arise here?
> > > The virtio desc queue has the same cross-NUMA problem, so do we really need to
> > > handle the cross-NUMA scenario?
> >
> > It's true that the desc queue might be cross-NUMA, and people are looking
> > for ways to improve that. Not a reason to make things worse ...
> >
>
> I will test that.
>
> >
> > > The indirect descs are used together with the virtio desc ring, so they only
> > > need to be on the same NUMA node as the desc ring. We could therefore allocate
> > > the indirect desc cache at the same time as the virtio desc queue.
> >
> > Allocating from the current node, as we do now, seems better.
> >
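To make the trade-off concrete (sketch, not from the series; "size", "cache" and "vdev" are placeholders):

/* (a) Preallocate next to the desc ring, on the device's NUMA node: */
cache = kmalloc_node(size, GFP_KERNEL, dev_to_node(&vdev->dev));

/* (b) Allocate at submit time, i.e. from the node of the submitting CPU,
 *     which is what a plain kmalloc() on that path gives you:
 */
cache = kmalloc(size, GFP_ATOMIC);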
> > > 4.
> > > > So e.g. for rx, we are wasting memory since indirect isn't used.
> > >
> > > In the current version, the desc cache is set up per-queue.
> > >
> > > So if a queue does not use the desc cache, we don't need to set it up for
> > > that queue.
> > >
> > > For example, in virtio-net only the tx queues, plus the rx queues when big
> > > packet mode is used, enable the desc cache.
> >
> >
> > I liked how in older versions adding indirect enabled this implicitly, without
> > the need to hack drivers.
>
> I see.
>
> >
> > > 5.
> > > > Would a better API be a cache size in bytes? This controls how much
> > > > memory is spent after all.
> > >
> > > My design is to set a threshold. When total_sg is greater than this threshold,
> > > the code falls back to kmalloc/kfree; when total_sg is less than or equal to
> > > the threshold, the preallocated cache is used.
> > >
> >
> > I know. My question is this, do devices know what a good threshold is?
> > If yes how do they know?
>
> I think the driver knows the threshold; for example, MAX_SKB_FRAGS + 2 is a
> suitable threshold for virtio-net.
>
I guess... in that case I assume it's a good idea to have
virtio core round the size up to whole cache lines, right?
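Concretely, something like this (sketch only): derive the chunk size from the
driver-supplied threshold and round it up, so chunks never share a cache line.
MAX_SKB_FRAGS + 2 is what virtio-net would pass:

size_t chunk_size = L1_CACHE_ALIGN((MAX_SKB_FRAGS + 2) * sizeof(struct vring_desc));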
> >
> > > 6. kmem_cache_*
> > >
> > > I have tested these; the performance is not as good as with the method used
> > > in this patch.
> >
> > Do you mean kmem_cache_alloc_bulk/kmem_cache_free_bulk?
> > You mentioned just kmem_cache_alloc previously.
>
>
> I will test kmem_cache_alloc_bulk.
>
> Thanks.
>
> >
> > >
> > > Thanks.
> >
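For reference, the bulk API in question looks roughly like this (sketch only;
desc_cachep and the batch size of 16 are illustrative):

/* One bulk call amortizes the per-object overhead of a
 * kmem_cache_alloc() loop when refilling the cache.
 */
void *objs[16];
int n;

n = kmem_cache_alloc_bulk(desc_cachep, GFP_KERNEL, ARRAY_SIZE(objs), objs);
/* ... hand the n chunks over to the per-vq cache ... */
kmem_cache_free_bulk(desc_cachep, n, objs);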