From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: kvm@vger.kernel.org, Stefan Hajnoczi <stefanha@redhat.com>,
virtualization@lists.linux-foundation.org
Subject: Re: [RFC 0/3] virtio: NUMA-aware memory allocation
Date: Mon, 29 Jun 2020 11:28:41 -0400 [thread overview]
Message-ID: <20200629112212-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20200629092646.GC31392@stefanha-x1.localdomain>
On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> >
> > On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote:
> > > These patches are not ready to be merged because I was unable to measure a
> > > performance improvement. I'm publishing them so they are archived in case
> > > someone picks up this work again in the future.
> > >
> > > The goal of these patches is to allocate virtqueues and driver state from the
> > > device's NUMA node for optimal memory access latency. Only guests with a vNUMA
> > > topology and virtio devices spread across vNUMA nodes benefit from this. In
> > > other cases the memory placement is fine and we don't need to take NUMA into
> > > account inside the guest.
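A minimal sketch of the idea, assuming a virtio-pci device so that the parent
device carries the NUMA node; the helper name and placement below are
illustrative only, not the RFC code:

	#include <linux/device.h>
	#include <linux/slab.h>
	#include <linux/virtio.h>

	/* Allocate driver state on the NUMA node of the underlying device
	 * (the PCI device for virtio-pci) instead of the local node of the
	 * CPU running .probe().  dev_to_node() may return NUMA_NO_NODE,
	 * which kzalloc_node() treats as "no preference". */
	static void *alloc_driver_state(struct virtio_device *vdev, size_t size)
	{
		int node = dev_to_node(vdev->dev.parent);

		return kzalloc_node(size, GFP_KERNEL, node);
	}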
> > >
> > > These patches could be extended to virtio_net.ko and other devices in the
> > > future. I only tested virtio_blk.ko.
> > >
> > > The benchmark configuration was designed to trigger worst-case NUMA placement:
> > > * Physical NVMe storage controller on host NUMA node 0
It's possible that NUMA is not such a big deal for NVMe.
It's also possible that the BIOS misconfigures ACPI and reports the NUMA
placement incorrectly.
I think the best thing to try is a ramdisk on a specific NUMA node.
> > > * IOThread pinned to host NUMA node 0
> > > * virtio-blk-pci device in vNUMA node 1
> > > * vCPU 0 on host NUMA node 1 and vCPU 1 on host NUMA node 0
> > > * vCPU 0 in vNUMA node 0 and vCPU 1 in vNUMA node 1
> > >
> > > The intent is to have .probe() code run on vCPU 0 in vNUMA node 0 (host NUMA
> > > node 1) so that memory is in the wrong NUMA node for the virtio-blk-pci device.
> > > Applying these patches fixes memory placement so that virtqueues and driver
> > > state are allocated in vNUMA node 1, where the virtio-blk-pci device is located.
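A rough sketch of placing the ring pages themselves on a chosen node
(illustrative only, assuming the ring is not allocated through the DMA API;
this is not the RFC code):

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Allocate the ring pages on 'node' rather than on the local node of
	 * the CPU running .probe().  Returns a kernel virtual address or NULL. */
	static void *vring_alloc_queue_on_node(size_t bytes, int node)
	{
		struct page *pages = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO,
						      get_order(bytes));

		return pages ? page_address(pages) : NULL;
	}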
> > >
> > > The fio 4KB randread benchmark results do not show a significant improvement:
> > >
> > > Name              IOPS      Error
> > > virtio-blk        42373.79  ± 0.54%
> > > virtio-blk-numa   42517.07  ± 0.79%
> >
> >
> > I remember I did something similar in vhost by using page_to_nid() for the
> > descriptor ring, and I got little improvement, similar to what is shown here.
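Roughly, that approach could look like the sketch below; this is only an
illustration of using page_to_nid() on the descriptor ring's backing page,
not the actual vhost change:

	#include <linux/mm.h>
	#include <linux/numa.h>

	/* Find the NUMA node backing the guest descriptor ring (a guest
	 * memory region mapped into the vhost process), so per-vq state can
	 * then be allocated near it with e.g. kvmalloc_node(). */
	static int desc_ring_node(unsigned long desc_uaddr)
	{
		struct page *page;
		int node = NUMA_NO_NODE;

		if (pin_user_pages_fast(desc_uaddr, 1, 0, &page) == 1) {
			node = page_to_nid(page);
			unpin_user_page(page);
		}
		return node;
	}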
> >
> > Michael reminded me that it was probably because all the data was cached. So I
> > suspect the test doesn't put enough stress on the cache ...
>
> Yes, that sounds likely. If there's no real-world performance
> improvement then I'm happy to leave these patches unmerged.
>
> Stefan
Well that was for vhost though. This is virtio, which is different.
Doesn't some benchmark put pressure on the CPU cache?
I kind of feel there should be a difference, and the fact there isn't
means there's some other bottleneck somewhere. Might be worth
figuring out.
--
MST
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Thread overview: 8+ messages
2020-06-25 13:57 [RFC 0/3] virtio: NUMA-aware memory allocation Stefan Hajnoczi
2020-06-25 13:57 ` [RFC 1/3] virtio-pci: use NUMA-aware memory allocation in probe Stefan Hajnoczi
2020-06-25 13:57 ` [RFC 2/3] virtio_ring: " Stefan Hajnoczi
2020-06-25 13:57 ` [RFC 3/3] virtio-blk: " Stefan Hajnoczi
2020-06-28 6:34 ` [RFC 0/3] virtio: NUMA-aware memory allocation Jason Wang
2020-06-29 9:26 ` Stefan Hajnoczi
2020-06-29 15:28 ` Michael S. Tsirkin [this message]
2020-06-30 8:47 ` Stefan Hajnoczi