From: David Hildenbrand <david@redhat.com>
To: Jonathan Cameron <Jonathan.Cameron@Huawei.com>, ankita@nvidia.com
Cc: jgg@nvidia.com, alex.williamson@redhat.com, clg@redhat.com,
	shannon.zhaosl@gmail.com, peter.maydell@linaro.org,
	ani@anisinha.ca, berrange@redhat.com, eduardo@habkost.net,
	imammedo@redhat.com, mst@redhat.com, eblake@redhat.com,
	armbru@redhat.com, gshan@redhat.com, aniketa@nvidia.com,
	cjia@nvidia.com, kwankhede@nvidia.com, targupta@nvidia.com,
	vsethi@nvidia.com, acurrid@nvidia.com, dnigam@nvidia.com,
	udhoke@nvidia.com, qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: Re: [PATCH v2 3/3] qom: Link multiple numa nodes to device using a new object
Date: Mon, 9 Oct 2023 14:57:15 +0200	[thread overview]
Message-ID: <10a2b2f6-a52f-a8cd-83cc-8f3b71cbf7f7@redhat.com> (raw)
In-Reply-To: <20231009133048.00003535@Huawei.com>

On 09.10.23 14:30, Jonathan Cameron wrote:
> On Sun, 8 Oct 2023 01:47:40 +0530
> <ankita@nvidia.com> wrote:
> 
>> From: Ankit Agrawal <ankita@nvidia.com>
>>
>> NVIDIA GPUs support the MIG (Multi-Instance GPU) feature [1], which allows
>> partitioning of the GPU device resources (including device memory) into
>> several (up to 8) isolated instances. Each partitioned memory region needs
>> a dedicated NUMA node to operate. The partitions are not fixed; they
>> can be created/deleted at runtime.
>>
>> Unfortunately, the Linux OS does not provide a means to dynamically
>> create/destroy NUMA nodes, and implementing such a feature is not expected
>> to be trivial. The nodes that the OS discovers at boot time while parsing
>> SRAT remain fixed. So we utilize the GI (Generic Initiator) Affinity
>> structures, which allow an association between nodes and devices. Multiple
>> GI structures per BDF are possible, allowing the creation of multiple
>> nodes by exposing a unique PXM in each of these structures.
>>
>> Introduce a new nvidia-acpi-generic-initiator object, which inherits from
>> the generic acpi-generic-initiator object, to allow a BDF to be associated
>> with more than one node.
>>
>> An admin can provide the range of nodes using numa-node-start and
>> numa-node-count and link it to a device by providing its id. The following
>> sample creates 8 nodes and links them to the device dev0:
>>
>>          -numa node,nodeid=2 \
>>          -numa node,nodeid=3 \
>>          -numa node,nodeid=4 \
>>          -numa node,nodeid=5 \
>>          -numa node,nodeid=6 \
>>          -numa node,nodeid=7 \
>>          -numa node,nodeid=8 \
>>          -numa node,nodeid=9 \
>>          -device vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
>>          -object nvidia-acpi-generic-initiator,id=gi0,device=dev0,numa-node-start=2,numa-node-count=8 \
> 
> If you go this way, use an array of references to the numa nodes instead of a start and number.
> There is no obvious reason why they should be contiguous that I can see.

Right, a uint16List should do.
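
[Editor's note: with a uint16List property, the nodes linked to the device
would no longer need to be contiguous. A hypothetical invocation might look
like the following; the property name "node-list" and the repeated-key list
syntax are my invention for illustration, not part of the patch, and the exact
list syntax would depend on the property's visitor.]

```shell
# Hypothetical: an explicit node list replaces numa-node-start/numa-node-count,
# so nodes 2, 4, and 9 can be linked without requiring a contiguous range.
-object nvidia-acpi-generic-initiator,id=gi0,device=dev0,node-list=2,node-list=4,node-list=9
```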


-- 
Cheers,

David / dhildenb




Thread overview: 26+ messages
2023-10-07 20:17 [PATCH v2 0/3] acpi: report numa nodes for device memory using GI ankita
2023-10-07 20:17 ` [PATCH v2 1/3] qom: new object to associate device to numa node ankita
2023-10-09 12:26   ` Jonathan Cameron
2023-10-11 17:37     ` Vikram Sethi
2023-10-12  8:59       ` Jonathan Cameron
2023-10-09 21:16   ` Alex Williamson
2023-10-13 13:16   ` Markus Armbruster
2023-10-17 13:44     ` Ankit Agrawal
2023-10-07 20:17 ` [PATCH v2 2/3] hw/acpi: Implement the SRAT GI affinity structure ankita
2023-10-09 21:16   ` Alex Williamson
2023-10-17 13:51     ` Ankit Agrawal
2023-10-07 20:17 ` [PATCH v2 3/3] qom: Link multiple numa nodes to device using a new object ankita
2023-10-09 12:30   ` Jonathan Cameron
2023-10-09 12:57     ` David Hildenbrand [this message]
2023-10-09 21:27     ` Alex Williamson
2023-10-17 14:18       ` Ankit Agrawal
2023-10-09 21:16   ` Alex Williamson
2023-10-17 14:00     ` Ankit Agrawal
2023-10-17 15:21       ` Alex Williamson
2023-10-17 15:28         ` Jason Gunthorpe
2023-10-17 16:54           ` Alex Williamson
2023-10-17 17:24             ` Jason Gunthorpe
2023-10-13 13:17   ` Markus Armbruster
