From: Alex Williamson <alex.williamson@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: "Ankit Agrawal" <ankita@nvidia.com>,
"Cédric Le Goater" <clg@redhat.com>,
"Jason Gunthorpe" <jgg@nvidia.com>,
"shannon.zhaosl@gmail.com" <shannon.zhaosl@gmail.com>,
"peter.maydell@linaro.org" <peter.maydell@linaro.org>,
"ani@anisinha.ca" <ani@anisinha.ca>,
"Aniket Agashe" <aniketa@nvidia.com>, "Neo Jia" <cjia@nvidia.com>,
"Kirti Wankhede" <kwankhede@nvidia.com>,
"Tarun Gupta (SW-GPU)" <targupta@nvidia.com>,
"Vikram Sethi" <vsethi@nvidia.com>,
"Andy Currid" <ACurrid@nvidia.com>,
"qemu-arm@nongnu.org" <qemu-arm@nongnu.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Gavin Shan" <gshan@redhat.com>
Subject: Re: [PATCH v1 0/4] vfio: report NUMA nodes for device memory
Date: Tue, 26 Sep 2023 13:14:27 -0600 [thread overview]
Message-ID: <20230926131427.1e441670.alex.williamson@redhat.com> (raw)
In-Reply-To: <769b577a-65b0-dbfe-3e99-db57cea08529@redhat.com>
On Tue, 26 Sep 2023 18:54:53 +0200
David Hildenbrand <david@redhat.com> wrote:
> On 26.09.23 16:52, Ankit Agrawal wrote:
> >>>>> Good idea. Fundamentally the device should not be creating NUMA
> >>>>> nodes; the VM should be configured with NUMA nodes, and the device
> >>>>> memory associated with those nodes.
> >>>>
> >>>> +1. That would also make it fly with DIMMs and virtio-mem, where you
> >>>> would want NUMA-less nodes as well (imagine passing CXL memory to a VM
> >>>> using virtio-mem).
> >>>>
> >>>
> >>> We actually do not add the device memory on the host; instead, we
> >>> map it into the Qemu VMA using remap_pfn_range(). Please check out the
> >>> mmap function in the vfio-pci variant driver code managing the device:
> >>> https://lore.kernel.org/all/20230915025415.6762-1-ankita@nvidia.com/
> >>> And I think a host memory backend would need memory that is added on
> >>> the host.
> >>>
> >>> Moreover, since we want to pass through the entire device memory, the
> >>> -object memory-backend-ram would have to be passed a size equal to the
> >>> device memory. I wonder if that would be too much trouble for the admin
> >>> (or libvirt) launching the Qemu process.
> >>>
> >>> Both of these items are avoided by exposing the device memory as a BAR,
> >>> as in the current implementation (referenced above), since that lets Qemu
> >>> naturally discover the device memory region and mmap it.
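(For reference, a minimal sketch of what such a variant-driver mmap handler
might look like; "struct my_vfio_dev" and its "mem_phys"/"mem_length" fields
are hypothetical placeholders used only to illustrate the remap_pfn_range()
approach described above, not the actual posted driver.)

    /*
     * Sketch only: map the device memory PFNs straight into the QEMU
     * VMA, so nothing has to be allocated from, or added to, host memory.
     */
    static int my_vfio_mmap(struct vfio_device *core_vdev,
                            struct vm_area_struct *vma)
    {
            struct my_vfio_dev *vdev =
                    container_of(core_vdev, struct my_vfio_dev,
                                 core_device.vdev);
            unsigned long size = vma->vm_end - vma->vm_start;

            if (size > vdev->mem_length)
                    return -EINVAL;

            /* No struct pages involved; the PFNs point at device memory. */
            return remap_pfn_range(vma, vma->vm_start,
                                   PHYS_PFN(vdev->mem_phys), size,
                                   vma->vm_page_prot);
    }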
> >>>
> >>
> >> Just to clarify: NUMA nodes for DIMMs/NVDIMMs/virtio-mem are configured
> >> on the device, not on the memory backend.
> >>
> >> e.g., -device pc-dimm,node=3,memdev=mem1,...
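(Illustratively, a complete invocation wiring a hotplugged DIMM to node 3
might look like the following; the ids and sizes here are made up, node 0
carries the boot memory, and nodes 1-3 are memory-less:

  -m 4G,slots=1,maxmem=36G \
  -object memory-backend-ram,id=m0,size=4G \
  -numa node,nodeid=0,memdev=m0 \
  -numa node,nodeid=1 \
  -numa node,nodeid=2 \
  -numa node,nodeid=3 \
  -object memory-backend-ram,id=mem1,size=16G \
  -device pc-dimm,node=3,memdev=mem1 \

The NUMA association travels with the pc-dimm device; the memory backend only
supplies the backing memory.)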
> >
>
> Also CCing Gavin; I remember he once experimented with virtio-mem +
> multiple memory-less nodes and it was not quite working (because of
> MEM_AFFINITY_HOTPLUGGABLE only on the last node, below).
>
> > Agreed, but we will still have the aforementioned issues, viz.
> > 1. The backing memory for the memory device would need to be allocated
> > on the host. However, we do not add the device memory on the host in this
> > case. Instead, the Qemu VMA is mapped to the device memory physical
> > address using remap_pfn_range().
>
> I don't see why that would be necessary ...
>
> > 2. The memory device needs to be passed an allocation size such that all
> > of the device memory is mapped into the Qemu VMA. This may not be readily
> > available to the admin/libvirt.
>
> ... or that. But your proposal roughly looks like what I had in mind, so
> let's focus on that.
>
> >
> > Based on the suggestions here, can we consider something like the
> > following?
> > 1. Introduce a new -numa subparam 'devnode', which tells Qemu to mark
> > the node with MEM_AFFINITY_HOTPLUGGABLE in the SRAT's memory affinity
> > structure to make it hotpluggable.
>
> Is that "devnode=on" parameter required? Can't we simply expose any node
> that does *not* have any boot memory assigned as MEM_AFFINITY_HOTPLUGGABLE?
>
> Right now, with "ordinary", fixed-location memory devices
> (DIMM/NVDIMM/virtio-mem/virtio-pmem), we create an SRAT entry that
> covers the device memory region for these devices with
> MEM_AFFINITY_HOTPLUGGABLE. We use the highest NUMA node in the machine,
> which does not quite work IIRC. All applicable nodes that don't have
> boot memory would need MEM_AFFINITY_HOTPLUGGABLE for Linux to create them.
>
> In your example, which memory ranges would we use for these nodes in SRAT?
>
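(As a point of reference, QEMU's ACPI code already has a helper for emitting
such entries; a hotpluggable range for a node without boot memory would
presumably come down to something like the sketch below, where table_data,
base, size and node stand in for whatever the machine code chooses.)

  /*
   * Sketch only: emit an SRAT Memory Affinity structure covering the
   * range reserved for the device memory, flagged hot-pluggable so the
   * guest kernel instantiates the node even without boot memory.
   */
  build_srat_memory(table_data, base, size, node,
                    MEM_AFFINITY_ENABLED | MEM_AFFINITY_HOTPLUGGABLE);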
> > 2. Create several NUMA nodes with 'devnode' which are supposed to be
> > associated with the vfio-pci device.
> > 3. Pass the NUMA node start and count to associate the created nodes.
> >
> > So, the command would look something like the following.
> > ...
> > -numa node,nodeid=2,devnode=on \
> > -numa node,nodeid=3,devnode=on \
> > -numa node,nodeid=4,devnode=on \
> > -numa node,nodeid=5,devnode=on \
> > -numa node,nodeid=6,devnode=on \
> > -numa node,nodeid=7,devnode=on \
> > -numa node,nodeid=8,devnode=on \
> > -numa node,nodeid=9,devnode=on \
> > -device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,numa-node-start=2,numa-node-count=8 \
I don't see how these numa-node args on a vfio-pci device have any
general utility. They're only used to create a firmware table, so why
not be explicit about it and define the firmware table as an object?
For example:
-numa node,nodeid=2 \
-numa node,nodeid=3 \
-numa node,nodeid=4 \
-numa node,nodeid=5 \
-numa node,nodeid=6 \
-numa node,nodeid=7 \
-numa node,nodeid=8 \
-numa node,nodeid=9 \
-device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=nvgrace0 \
-object nvidia-gpu-mem-acpi,devid=nvgrace0,nodeset=2-9 \
There are some suggestions in this thread that CXL could have similar
requirements, but I haven't found any evidence that these
dev-mem-pxm-{start,count} attributes in the _DSD are standardized in
any way. If they are, maybe this would be a dev-mem-pxm-acpi object
rather than an NVIDIA-specific one.
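To make the shape of that concrete, such an object would presumably just hang
a _DSD off the device's DSDT node, roughly as sketched below with QEMU's AML
helpers. The dev-mem-pxm-{start,count} property names mirror the posted
series, the values match the nodeset=2-9 example above, and "dev" stands in
for the Aml handle of the passed-through device:

  /* Sketch only: advertise the PXM range reserved for the device memory. */
  Aml *dsd = aml_package(2);
  Aml *props = aml_package(2);
  Aml *start = aml_package(2);
  Aml *count = aml_package(2);

  aml_append(start, aml_string("dev-mem-pxm-start"));
  aml_append(start, aml_int(2));
  aml_append(count, aml_string("dev-mem-pxm-count"));
  aml_append(count, aml_int(8));
  aml_append(props, start);
  aml_append(props, count);

  /* Device Properties UUID defined for _DSD. */
  aml_append(dsd, aml_touuid("DAFFD814-6EBA-4D8C-8A91-BC9BBF4AA301"));
  aml_append(dsd, props);
  aml_append(dev, aml_name_decl("_DSD", dsd));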
It seems like we could almost meet the requirement for this table via
-acpitable, but I think we'd like to avoid having the VM orchestration
tool create, compile, and pass ACPI data blobs into the VM.
Thanks,
Alex