From: "Michael S. Tsirkin" <mst@redhat.com>
To: Benjamin Serebrin <serebrin@google.com>
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Network Development <netdev@vger.kernel.org>,
Jason Wang <jasowang@redhat.com>,
David Miller <davem@davemloft.net>,
Willem de Bruijn <willemb@google.com>,
Venkatesh Srinivas <venkateshs@google.com>,
"Jon Olson (Google Drive)" <jonolson@google.com>,
Rick Jones <rick.jones2@hpe.com>,
James Mattson <jmattson@google.com>,
linux-s390 <linux-s390@vger.kernel.org>,
"linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>
Subject: Re: [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue pairs
Date: Wed, 15 Feb 2017 21:17:54 +0200
Message-ID: <20170215210941-mutt-send-email-mst@kernel.org>
In-Reply-To: <CAN+hb0V5hcMkWwQhk=5G4B+vWBARyEq+g-=XVaygH5s-s7mPZw@mail.gmail.com>
On Wed, Feb 15, 2017 at 10:27:37AM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 15, 2017 at 9:42 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> >
> > > For pure network load, assigning each txqueue IRQ exclusively
> > > to one of the cores that generates traffic on that queue is the
> > > optimal layout in terms of load spreading. Irqbalance does
> > > not have the XPS information to make this optimal decision.
> >
> > Try to add hints for it?
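
(As an aside, a minimal sketch of publishing such hints, assuming one
MSI-X vector per TX queue: irq_set_affinity_hint() is the real
interface that irqbalance reads back through /proc/irq/N/affinity_hint,
while virtnet_queue_irq() is a hypothetical helper for looking up a
queue's IRQ, not an existing virtio API.

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    static void publish_affinity_hints(struct virtnet_info *vi)
    {
            int cpu = cpumask_first(cpu_online_mask);
            int i;

            for (i = 0; i < vi->max_queue_pairs; i++) {
                    /* hypothetical helper, see note above */
                    int irq = virtnet_queue_irq(vi, i);

                    irq_set_affinity_hint(irq, cpumask_of(cpu));

                    /* round-robin across online CPUs, wrapping */
                    cpu = cpumask_next(cpu, cpu_online_mask);
                    if (cpu >= nr_cpu_ids)
                            cpu = cpumask_first(cpu_online_mask);
            }
    }

With the hint published, irqbalance can honor it instead of guessing.)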
> >
> >
> > > Overall system load affects this calculation both in the case of 1:1
> > > mapping and in the case of uneven queue distribution. In both cases, irqbalance
> > > is hopefully smart enough to migrate other non-pinned IRQs to
> > > cpus with lower overall load.
> >
> > Not if everyone starts inserting hacks like this one in code.
>
>
> It seems to me that the default behavior is equally "random" - why would we want
> XPS striped across the cores the way it's done today?
Right. But at least userspace knows it's random. If the kernel supplies
affinity, e.g. the way your patch does, userspace ATM accepts it as
gospel.
> What we're trying to do here is avoid the surprise cliff that guests will
> hit when queue count is limited to less than VCPU count.
And presumably the high VCPU count is there because people
actually run some tasks on all these extra VCPUs,
so I'm not sure I buy the "pure networking workload" argument.
> That will
> happen because
> we limit queue pair count to 32. I'll happily push further complexity
> to user mode.
Why do you do this, BTW? I note that this will interfere with e.g. XPS,
which for better or worse wants its own set of TC queues.
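
(For reference, the striping being discussed can be expressed with the
in-kernel XPS API directly. netif_set_xps_queue() is the real call; the
function around it is only a sketch of the cpu % nr_queues wrap-around
mapping, with error handling mostly omitted:

    #include <linux/netdevice.h>
    #include <linux/cpumask.h>
    #include <linux/slab.h>

    /* Sketch: stripe online CPUs across nr_queues TX queues so that
     * CPU n transmits on queue n % nr_queues, roughly the mapping
     * aimed for when #VCPUs > #queues.
     */
    static int stripe_xps(struct net_device *dev, int nr_queues)
    {
            cpumask_var_t mask;
            int i, cpu;

            if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                    return -ENOMEM;

            for (i = 0; i < nr_queues; i++) {
                    cpumask_clear(mask);
                    for_each_online_cpu(cpu)
                            if (cpu % nr_queues == i)
                                    cpumask_set_cpu(cpu, mask);
                    netif_set_xps_queue(dev, mask, i);
            }

            free_cpumask_var(mask);
            return 0;
    }

Any queues claimed by TC would have to be carved out of this map first.)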
> If this won't fly, we can leave all of this behavior in user code.
> Michael, would
> you prefer that I abandon this patch?
I think it's either that or find a way that does not interfere with what
existing userspace has been doing. That's likely to involve core changes
outside of virtio.
> > That's another problem with this patch. If you care about hyperthreads
> > you want an API to probe for that.
>
>
> It's something of a happy accident that hyperthreads line up that way.
> Keeping the topology knowledge out of the patch and into user space seems
> cleaner, would you agree?
>
> Thanks!
> Ben
Well, the kernel does have topology knowledge already, but I have no
issue with keeping the logic all in userspace.
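
(For completeness, the in-kernel topology query looks roughly like the
sketch below; topology_sibling_cpumask() is the real helper, the loop
itself is only illustrative. Userspace can read the same data from
/sys/devices/system/cpu/cpuN/topology/thread_siblings_list.

    #include <linux/topology.h>
    #include <linux/cpumask.h>
    #include <linux/printk.h>

    /* Sketch: enumerate each online CPU's hyperthread siblings using
     * the kernel's own topology helpers.
     */
    static void dump_smt_siblings(void)
    {
            int cpu, sib;

            for_each_online_cpu(cpu)
                    for_each_cpu(sib, topology_sibling_cpumask(cpu))
                            pr_info("cpu%d: HT sibling cpu%d\n", cpu, sib);
    }
)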
--
MST
Thread overview: 13+ messages
2017-02-07 18:15 [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue pairs Ben Serebrin
2017-02-08 18:37 ` David Miller
2017-02-08 19:37 ` Michael S. Tsirkin
2017-02-14 19:17 ` Benjamin Serebrin
2017-02-14 21:05 ` Michael S. Tsirkin
2017-02-15 16:50 ` Willem de Bruijn
2017-02-15 17:42 ` Michael S. Tsirkin
2017-02-15 18:27 ` Benjamin Serebrin
2017-02-15 19:17 ` Michael S. Tsirkin [this message]
2017-02-15 21:38 ` Benjamin Serebrin
2017-02-15 21:49 ` Michael S. Tsirkin
2017-02-15 22:13 ` Benjamin Serebrin
2017-02-15 15:23 ` Michael S. Tsirkin