From: Jason Wang <jasowang@redhat.com>
To: Benjamin Serebrin <serebrin@google.com>
Cc: netdev@vger.kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
David Miller <davem@davemloft.net>,
Willem de Bruijn <willemb@google.com>,
Venkatesh Srinivas <venkateshs@google.com>,
James Mattson <jmattson@google.com>
Subject: Re: [PATCH net-next] virtio: Fix affinity for >32 VCPUs
Date: Mon, 6 Feb 2017 15:45:38 +0800
Message-ID: <c9042cd9-c8aa-5569-4396-edd1f86bdca8@redhat.com>
In-Reply-To: <CAN+hb0UScuWow6YsaG-kd21ZdUeZrbs1vq36ov8x2_RXHHQKRA@mail.gmail.com>
On 2017-02-06 15:28, Benjamin Serebrin wrote:
> On Sun, Feb 5, 2017 at 11:24 PM, Jason Wang <jasowang@redhat.com> wrote:
>> On 2017-02-03 14:19, Ben Serebrin wrote:
>>> From: Benjamin Serebrin <serebrin@google.com>
>>>
>>> If the number of virtio queue pairs is not equal to the
>>> number of VCPUs, the virtio guest driver doesn't assign
>>> any CPU affinity for the queue interrupts or the xps
>>> aggregation interrupt.
>> So this is in fact not an affinity fix for #cpus > 32, but rather the
>> addition of affinity for #cpus != #queue pairs.
> Fair enough. I'll adjust the title line in the subsequent version.
>
>
>>> Google Compute Engine currently provides 1 queue pair for
>>> every VCPU, but limits that at a maximum of 32 queue pairs.
>>>
>>> This code assigns interrupt affinity even when there are more than
>>> 32 VCPUs.
>>>
>>> Tested:
>>>
>>> (on a 64-VCPU VM with debian 8, jessie-backports 4.9.2)
>>>
>>> Without the fix we see all queues affinitized to all CPUs:
>> [...]
>>
>>> +	/* If there are more cpus than queues, then assign the queues'
>>> +	 * interrupts to the first cpus until we run out.
>>> +	 */
>>>  	i = 0;
>>>  	for_each_online_cpu(cpu) {
>>> +		if (i == vi->max_queue_pairs)
>>> +			break;
>>>  		virtqueue_set_affinity(vi->rq[i].vq, cpu);
>>>  		virtqueue_set_affinity(vi->sq[i].vq, cpu);
>>> -		netif_set_xps_queue(vi->dev, cpumask_of(cpu), i);
>>>  		i++;
>>>  	}
>>> +	/* Stripe the XPS affinities across the online CPUs.
>>> +	 * Hyperthread pairs are typically assigned such that Linux's
>>> +	 * CPU X and X + (numcpus / 2) are hyperthread twins, so we cause
>>> +	 * hyperthread twins to share TX queues, in the case where there are
>>> +	 * more cpus than queues.
>> Since we use combined queue pairs, why not use the same policy for RX?
> XPS is for transmit only.
>
>
Yes, but my point is: since you let hyperthread twins share TX queues
(via XPS), why not also let them share the TX and RX queue interrupt
affinities?

Thanks
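
[Editorial note: for readers following the thread, below is a minimal sketch of
the XPS striping that the quoted comment describes. It is an illustration only,
not the actual patch hunk: the helper name virtnet_stripe_xps and the simple
modulo striping are assumptions, and it presumes the context of
drivers/net/virtio_net.c, where struct virtnet_info provides dev and
max_queue_pairs.]

/* Illustrative sketch, not the actual patch hunk: map CPU n to TX queue
 * (n % max_queue_pairs).  With the typical Linux numbering where CPU X and
 * X + (numcpus / 2) are hyperthread twins, twin CPUs then share a TX queue
 * whenever there are more CPUs than queue pairs.
 */
static void virtnet_stripe_xps(struct virtnet_info *vi)
{
	cpumask_var_t mask;
	int i, cpu;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		cpumask_clear(mask);
		for_each_online_cpu(cpu) {
			if (cpu % vi->max_queue_pairs == i)
				cpumask_set_cpu(cpu, mask);
		}
		netif_set_xps_queue(vi->dev, mask, i);
	}

	free_cpumask_var(mask);
}

[With 64 CPUs and 32 queue pairs, CPUs 0 and 32 (twins under the numbering
described above) both land on queue 0, which is the sharing behavior the
quoted comment aims for.]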
Thread overview: 11+ messages
2017-02-03 6:19 [PATCH net-next] virtio: Fix affinity for >32 VCPUs Ben Serebrin
2017-02-03 15:07 ` Michael S. Tsirkin
2017-02-03 18:22 ` Benjamin Serebrin
2017-02-03 18:31 ` Willem de Bruijn
2017-02-03 18:34 ` Rick Jones
2017-02-03 20:25 ` Willem de Bruijn
2017-02-03 18:33 ` Rick Jones
2017-02-06 7:24 ` Jason Wang
2017-02-06 7:28 ` Benjamin Serebrin
2017-02-06 7:45 ` Jason Wang [this message]
2017-02-06 10:06 ` Christian Borntraeger