From: David Hildenbrand <david@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pagupta@redhat.com>,
Eduardo Habkost <ehabkost@redhat.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>,
qemu-s390x@nongnu.org, qemu-ppc@nongnu.org,
Marcel Apfelbaum <marcel@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Richard Henderson <rth@twiddle.net>,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [PATCH v3 0/3] pc-dimm: factor out MemoryDevice
Date: Wed, 25 Apr 2018 14:46:20 +0200
Message-ID: <50c25d28-4cac-1e68-c149-e7a5ba8b0f40@redhat.com>
In-Reply-To: <20180425141535.5000bcb8@redhat.com>
>>>> For the first phase we are using 'virtio-pmem' as a cold-plugged device. AFAIU
>>>> 'VirtioDeviceClass' is the parent class and the 'hotplug/unplug' methods are
>>>> implemented for the virtio-pmem device. So the PCI bus hotplug/unplug should
>>>> call the corresponding functions?
>>> the problem is with trying to use a PCI-bus-based device with the bus-less
>>> infrastructure used by (pc|nv)dimms.
>>
>> I can understand your reasoning, but for me these are some QEMU internal details
>> that should not stop the virtio-(p)mem train from rolling.
> If it's a quickly hacked-up prototype to play with, then it's fine
> as long as it's not being merged into QEMU.
> If one plans to merge it, then the code should be adapted to
> whatever the QEMU internal requirements are.
At some point we will have to decide if we want to develop good software
(which tolerates layer violations if there is a good excuse) or build
the perfect internal architecture. And we all know we don't have the
latter right now and never will.
So yes, I will be looking into ways to make this work "nicer"
internally, but quite frankly, it has very little priority.
>
>> In my world, device hotplug is composed of the following steps:
>>
>> 1. Resource allocation
>> 2. Attaching the device to a bus (making it accessible by the guest)
>> 3. Notifying the guest
>>
>> I would e.g. also call ACPI sort of a bus structure. Now, the machine hotplug
>> handler currently does parts of 1. and then hands off to ACPI to do 2. and 3.
> it's not a bus, it's a concrete device implementing the GPE logic;
> on x86 it does the job of step 3 (notifying) in case of hotplug.
>
>> virtio-mem and virtio-pmem do 1. partially in the realize function and then
>> let 2. and 3. be handled by the proxy-device-specific hotplug handlers.
>>
>> Mean people might say that the machine should not call the ACPI code, but
>> that there should be an ACPI hotplug handler. So we would end up with the
>> same result.
> it should be fine for a parent to manage its children, but not the other
> way around
A virtio-bus (e.g. CCW) also "belongs" to the machine. But we won't
start passing all devices from the machine downwards to the
concrete implementation.
(but I get your point)
>
>
>> But anyhow, the resource allocation (getting an address and getting plugged)
>> will be done as a first step from within the virtio-(p)mem realize function:
>>
>> static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
>> {
>> ...
>> /* try to get a mapping in guest address space */
>> vm->phys_addr = memory_device_get_free_addr(MACHINE(qdev_get_machine()), ...);
> this should be a property, and if it's not set then realize should error out
It is a property, but if it is 0 we do auto-detection right now (like with
DIMMs).
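For reference, a minimal sketch of how that looks (the property and struct
names are taken from the prototype, so treat them as preliminary):

static Property virtio_mem_properties[] = {
    /* 0 means: auto-assign a free address, like a DIMM without "addr" */
    DEFINE_PROP_UINT64("memaddr", VirtIOMEM, phys_addr, 0),
    DEFINE_PROP_END_OF_LIST(),
};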
>
>> if (local_err) {
>> goto out;
>> }
>> ...
>>
>> /* register the memory region */
>> memory_device_plug_region(MACHINE(qdev_get_machine()), vm->mr,
>> vm->phys_addr);
>> ...
>> }
>>
>> So this happens before any hotplug handler is called. Everything works
>> just fine. What you don't like about this is the qdev_get_machine(). I
>> also don't like it but in the short term I don't see any problem with
>> it. It is resource allocation and not a "device plug" in the typical form.
>
> It's not qdev_get_machine() that's the issue, it's the layer violation,
> where a child device is allocating and mapping resources of one of its
> parents.
Quite simple: introduce a function at the machine level where the child can
"request" to get an address and "request" to plug/unplug a region.
Or what would be wrong with that?
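Something along these lines would already do (a rough sketch; the function
names are hypothetical, device_memory is the container from patch 2/3):

/* machine-owned: the child only *requests*, the machine stays in control */
uint64_t machine_device_memory_request_addr(MachineState *ms, uint64_t align,
                                            uint64_t size, Error **errp)
{
    /* pick a free range in the machine's device memory region */
    return memory_device_get_free_addr(ms, NULL, align, size, errp);
}

void machine_device_memory_plug_region(MachineState *ms, MemoryRegion *mr,
                                       uint64_t addr)
{
    /* map the child's region; the machine remains the owner */
    memory_region_add_subregion(&ms->device_memory->mr,
                                addr - ms->device_memory->base, mr);
}

The child never touches device_memory directly, and the machine is free to
sanity-check or veto any request.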
>
> that's been an issue and a show-stopper for patches in the past,
> and that's probably not going to change in this case either.
>
I can see that, but again, for me these are internal details.
>
>>> The important point which we should not break here while trying to glue
>>> the PCI hotplug handler to the machine hotplug handler is:
>>
>> I could later on imagine something like a 2-step approach.
>>
>> 1. resource allocation handled by the machine for MemoryDevices
>> - assigns the address, registers the memory region
>> 2. hotplug handler (ACPI, PCI, CCW ...)
>> - assigns bus-specific stuff, attaches the device, notifies the guest
>>
>> Importantly, the device is not visible to the guest until step 2.
> So far it's about how QEMU models and manages the wiring process;
> that's why the pre_plug/plug handlers were introduced, to allow
> a resource owner to attach the devices that are plugged into it.
>
> i.e. PCI devices are managed by the PCI subsystem and DIMM
> devices are managed by the board, where they are mapped into
> a reserved address space by the board code that owns it.
>
> Allowing a random device to manage board resources directly
> isn't really acceptable (even as a temporary solution).
I agree regarding "random" devices. This should not be the design principle.
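To make the 2-step approach from above a bit more concrete (a sketch only;
the hook and helper names are hypothetical):

/* step 1: machine-level resource allocation; guest sees nothing yet */
static void memory_device_pre_plug(MachineState *ms, MemoryDeviceState *md,
                                   uint64_t align, uint64_t size, Error **errp)
{
    Error *local_err = NULL;
    uint64_t addr;

    addr = memory_device_get_free_addr(ms, NULL, align, size, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }
    memory_device_plug_region(ms, memory_device_get_region(md), addr);
}

/* step 2: the proxy-specific hotplug handler (ACPI, PCI, CCW, ...) attaches
 * the device and notifies the guest; only now does it become visible */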
>
> In the case of virtio-pmem it might be much cleaner to use the
> mapping mechanism provided by the PCI subsystem than trying
> to bridge bus and bus-less device wiring, as from the device
> modeling point of view (aside from providing RAM to the guest)
> these are 2 quite different devices.
And again: Please don't forget virtio-ccw. We _don't_ want to glue
virtio device specifics to the underlying proxy here.
>
> i.e. if you think the new device is RAM, which is governed by
> the -m option, then model it as a bus-less device like a DIMM and
> plug it directly into the board; if it's plugged into a bus,
> it's that bus owner's responsibility to allocate/manage the
> address space or bridge it to the parent device.
>
> (btw: virtio-pmem looks sort of like ivshmem, maybe they
> can share some code on the QEMU side)
>
>> Of course, we could also take care of pre-plug things as you mentioned.
>>
>>>
>>> The container MachineState::device_memory is owned by the machine, and
>>> it's up to the machine plug handler (the container's owner) to map the
>>> device's mr into its address space.
>>> (i.e. neither the device's realize nor the PCI bus hotplug handler
>>> should do it)
>>
>> I agree, but I think these are internal details.
> it's internal details that we choose not to violate in QEMU,
> and we are working in that direction, getting rid of the places
> that do it wrongly.
Yes, and I'll try my best to avoid it.
>
>>> Not sure about virtio-mem, but if it were to use the device_memory
>>> container, it should use the machine's plug handler.
>>>
>>> I don't have any off-the-top-of-my-head ideas how to glue it cleanly;
>>> maybe MachineState::device_memory is just not the right thing to use
>>> for such devices.
>>
>> I strongly disagree. From the user point of view it should not matter what
>> was added/plugged. There is just one guest physical memory and maxmem is
>> defined for one QEMU instance. Exposing such details to the user should
>> definitely be avoided.
> The QEMU user already has to be exposed to details, as he adds
> -device virtio-pmem,....
> to the CLI; maxmem accounting is a separate matter and probably
> shouldn't be mixed with the device model and how it's mapped into
> the guest's address space.
I can't follow. Please step back and have a look at how it works on the
QEMU command line:
1. You specify a maxmem option
2. You plug in DIMM/NVDIMM/virtio-mem/virtio-pmem
Some machines (e.g. s390x) use maxmem to set up the maximum possible
guest address space in KVM.
Just because DIMM/NVDIMM were the first users does not mean that they are
the only valid users. That is also the reason why it is named
"query-memory-devices" and not "query-dimm-devices". The abstraction is
there for a reason.
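E.g. (the virtio-pmem syntax here is made up for illustration, the device is
not merged yet):

qemu-system-x86_64 -m 4G,maxmem=16G,slots=4 \
    -object memory-backend-ram,id=mb0,size=2G \
    -device pc-dimm,memdev=mb0 \
    -object memory-backend-file,id=mb1,share=on,mem-path=/tmp/pmem,size=2G \
    -device virtio-pmem,memdev=mb1

Both devices consume part of the same maxmem, and query-memory-devices
should report both.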
--
Thanks,
David / dhildenb