From: Gal Hammer <ghammer@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH V11 2/3] i386: Add a Virtual Machine Generation ID device
Date: Mon, 02 Feb 2015 15:13:39 +0200
Message-ID: <54CF7803.7020906@redhat.com>
In-Reply-To: <20150202134635.434233f7@nial.brq.redhat.com>
On 02/02/2015 14:46, Igor Mammedov wrote:
> On Sun, 01 Feb 2015 14:56:26 +0200
> Gal Hammer <ghammer@redhat.com> wrote:
>
>> On 22/01/2015 15:52, Igor Mammedov wrote:
>>> On Tue, 16 Dec 2014 17:50:43 +0200
>>> Gal Hammer <ghammer@redhat.com> wrote:
>>>
>>>> Based on Microsoft's specifications (paper can be downloaded from
>>>> http://go.microsoft.com/fwlink/?LinkId=260709), add a device
>>>> description to the SSDT ACPI table and its implementation.
>>>>
>>>> The GUID is set using a global "vmgenid.uuid" parameter.
>>>>
>>>> Signed-off-by: Gal Hammer <ghammer@redhat.com>
>>>>
>>>
>>>> --- a/hw/i386/acpi-build.c
>>>> +++ b/hw/i386/acpi-build.c
>>>> @@ -257,6 +257,7 @@ static void acpi_get_pci_info(PcPciInfo *info)
>>>> #define ACPI_BUILD_TABLE_FILE "etc/acpi/tables"
>>>> #define ACPI_BUILD_RSDP_FILE "etc/acpi/rsdp"
>>>> #define ACPI_BUILD_TPMLOG_FILE "etc/tpm/log"
>>>> +#define ACPI_BUILD_VMGENID_FILE "etc/vm-generation-id"
>>>>
>>>> static void
>>>> build_header(GArray *linker, GArray *table_data,
>>>> @@ -1068,6 +1069,8 @@ build_ssdt(GArray *table_data, GArray *linker,
>>>> {
>>>> MachineState *machine = MACHINE(qdev_get_machine());
>>>> uint32_t nr_mem = machine->ram_slots;
>>>> + uint32_t vm_gid_physical_address;
>>>> + uint32_t vm_gid_offset = 0;
>>>> unsigned acpi_cpus = guest_info->apic_id_limit;
>>>> int ssdt_start = table_data->len;
>>>> uint8_t *ssdt_ptr;
>>>> @@ -1096,6 +1099,21 @@ build_ssdt(GArray *table_data, GArray *linker,
>>>> ACPI_BUILD_SET_LE(ssdt_ptr, sizeof(ssdp_misc_aml),
>>>> ssdt_isa_pest[0], 16, misc->pvpanic_port);
>>>>
>>>> + if (vm_generation_id_set()) {
>>>> + vm_gid_physical_address = ssdt_start + ssdt_acpi_vm_gid_addr[0];
>>>> + bios_linker_loader_alloc(linker, ACPI_BUILD_VMGENID_FILE, 8, true);
>>>> + bios_linker_loader_add_pointer(linker, ACPI_BUILD_VMGENID_FILE,
>>>> + ACPI_BUILD_TABLE_FILE,
>>>> + table_data,
>>>> + &vm_gid_offset,
>>>> + sizeof(vm_gid_offset));
>>> could someone explain how this pointer magic works?
>>
>> I can try, but don't you think that the magic is gone once explained? ;-)
>>
>>> From my weak understanding it seems broken.
>>> Lets see:
>>>
>>> [1] &vm_gid_offset - must be a pointer inside of the dest_file blob (ACPI_BUILD_VMGENID_FILE)
>>> [2] vm_gid_offset - should hold the offset of the place inside of src_file
>>> (ACPI_BUILD_TABLE_FILE) to which the pointer inside of dest_file should point
>>
>> The vm_gid_offset should point to where in the ACPI_BUILD_VMGENID_FILE the
>> VM's GUID is stored. At the moment, it should always be zero because the
>> GUID is stored at the beginning of the ACPI_BUILD_VMGENID_FILE.
>>
>>>
>>> now:
>>> vm_gid_physical_address - holds [2] i.e. the offset of the VGIA constant inside the SSDT in ACPI_BUILD_TABLE_FILE.
>>>
>>>> + ACPI_BUILD_SET_LE(ssdt_ptr, sizeof(ssdp_misc_aml),
>>>> + ssdt_acpi_vm_gid_addr[0], 32, vm_gid_physical_address);
>>> Then we write this offset into VGIA in ACPI_BUILD_TABLE_FILE.
>>
>> Yes. This offset is later patched by the linker to the full physical
>> address.
>>
>>> After BIOS loads tables it's going to patch at
>>> [3] ACPI_BUILD_VMGENID_FILE + (&vm_gid_offset - table_data->data) /* only god knows where it will be */
>>>
>>> and on top of it write in it value:
>>> *(ACPI_BUILD_TABLE_FILE + *[3])
>>
>> We know exactly where it is, no need to call for god's help :-).
>>
>>> This general approach of patching an arbitrary place in the AML blob
>>> to get the PHY addr of the buffer with the UUID is quite a hack,
>>> especially in light of the fact that we are trying to hide all direct
>>> access to AML blobs with the related pointer arithmetic and manual
>>> patching.
>>>
>>> Why not reserve some portion of RAM and pass a reservation to the
>>> BIOS/guest so it won't be part of AddressRangeMemory or
>>> AddressRangeACPI, as the MS spec requires? Then you won't need to
>>> jump through all the above hoops just to get the buffer's PHY addr.
>>
>> I'll be glad to hear a new idea that I haven't already tried in one of
>> the previous patches. The problem is that the specification requires
>> working with a physical address, so it must be allocated from inside the
>> guest. Since the OS does not exist at this stage, and I also don't want
>> to write a special driver just to allocate this buffer, I had to choose
>> this approach.
> how about creating a device which maps a 4K MMIO region in the PCI hole
> address space and passing it as a reservation via the e820 table we have
> in QEMU. Then the address could be built directly into the ACPI tables
> as a constant value at the time of ACPI table creation.
Won't this cause a VMEXIT when the guest reads the GUID? If it does,
then this idea was already presented and Michael didn't approve it.
> That way it would be possible to get the buffer's address without the
> firmware or guest OS doing anything, avoiding the quite complex chain
> for getting the buffer address (qemu->bios->OSPM->qemu).
> If you go the current route, the linker would need to be taught a new
> command to make a reservation in E820 so that the allocated buffer won't
> be part of AddressRangeMemory (or anything else), as required by the
> spec. That would make the already hard-to-understand/use linker API even
> more complex.
>
>
>>
>>>>
>>> [...]
>>>> typedef
>>>> @@ -1790,6 +1811,11 @@ void acpi_setup(PcGuestInfo *guest_info)
>>>> fw_cfg_add_file(guest_info->fw_cfg, ACPI_BUILD_TPMLOG_FILE,
>>>> tables.tcpalog->data, acpi_data_len(tables.tcpalog));
>>>>
>>>> + /* Add a 128-bit fw cfg file which stores the VM generation id. */
>>>> + g_array_set_size(tables.vmgenid, 16);
>>>> + fw_cfg_add_file(guest_info->fw_cfg, ACPI_BUILD_VMGENID_FILE,
>>>> + tables.vmgenid->data, tables.vmgenid->len);
>>> shouldn't it be migratable? /i.e. acpi_add_rom_blob(...)/
>>>
>>
>> I'm not too familiar with the migration process, but I assume that this
>> memory will be copied as part of the guest memory.
>>
>> Gal.
>>
>
Thread overview: 13+ messages
2014-12-16 15:50 [Qemu-devel] [PATCH V11 0/3] Virtual Machine Generation ID Gal Hammer
2014-12-16 15:50 ` [Qemu-devel] [PATCH V11 1/3] docs: vm generation id device's description Gal Hammer
2014-12-16 15:50 ` [Qemu-devel] [PATCH V11 2/3] i386: Add a Virtual Machine Generation ID device Gal Hammer
2015-01-22 13:52 ` Igor Mammedov
2015-01-22 21:21 ` Michael S. Tsirkin
2015-02-01 12:56 ` Gal Hammer
2015-02-02 12:46 ` Igor Mammedov
2015-02-02 13:13 ` Gal Hammer [this message]
2015-02-02 13:55 ` Igor Mammedov
2015-02-04 15:09 ` Gal Hammer
2015-02-04 15:53 ` Igor Mammedov
2015-02-04 16:07 ` Michael S. Tsirkin
2014-12-16 15:50 ` [Qemu-devel] [PATCH V11 3/3] tests: add a unit test for the vmgenid device Gal Hammer