From: liang yan <liangy@hpe.com>
To: Laszlo Ersek <lersek@redhat.com>
Cc: "Hayes, Bill" <bill.hayes@hpe.com>,
edk2-devel@ml01.01.org, qemu-devel@nongnu.org, "Ramirez,
Laura L (HP Labs)" <laura.ramirez@hpe.com>,
Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: Re: [Qemu-devel] Could not add PCI device with big memory to aarch64 VMs
Date: Mon, 30 Nov 2015 11:45:18 -0700 [thread overview]
Message-ID: <565C993E.9000605@hpe.com> (raw)
In-Reply-To: <563AA879.2030301@redhat.com>
On 11/04/2015 05:53 PM, Laszlo Ersek wrote:
> On 11/04/15 23:22, liang yan wrote:
>> Hello, Laszlo,
>>
>>
>> (2) There is also a problem: once I use a memory size bigger than 256M
>> for ivshmem, it cannot get through UEFI; the error message is
>>
>> PciBus: Discovered PCI @ [00|01|00]
>> BAR[0]: Type = Mem32; Alignment = 0xFFF; Length = 0x100; Offset =
>> 0x10
>> BAR[1]: Type = Mem32; Alignment = 0xFFF; Length = 0x1000; Offset
>> = 0x14
>> BAR[2]: Type = PMem64; Alignment = 0x3FFFFFFF; Length =
>> 0x40000000; Offset = 0x18
>>
>> PciBus: HostBridge->SubmitResources() - Success
>> ASSERT
>> /home/liang/studio/edk2/ArmVirtPkg/PciHostBridgeDxe/PciHostBridge.c(449): ((BOOLEAN)(0==1))
>>
>>
>> I am wondering if there is a memory size limitation for PCIe devices
>> under the QEMU environment?
>>
>>
>> Just thank you in advance and any information would be appreciated.
> (CC'ing Ard.)
>
> "Apparently", the firmware-side counterpart of QEMU commit 5125f9cd2532
> has never been contributed to edk2.
>
> Therefore the ProcessPciHost() function in
> "ArmVirtPkg/VirtFdtDxe/VirtFdtDxe.c" ignores the
> DTB_PCI_HOST_RANGE_MMIO64 type range from the DTB. (Thus only
> DTB_PCI_HOST_RANGE_MMIO32 is recognized as PCI MMIO aperture.)
>
> However, even if said driver was extended to parse the new 64-bit
> aperture into PCDs (which wouldn't be hard), the
> ArmVirtPkg/PciHostBridgeDxe driver would still have to be taught to look
> at that aperture (from the PCDs) and to serve MMIO BAR allocation
> requests from it. That could be hard.
>
> Please check edk2 commits e48f1f15b0e2^..e5ceb6c9d390, approximately,
> for the background on the current code. See also chapter 13 "Protocols -
> PCI Bus Support" in the UEFI spec.
>
> Patches welcome. :)
>
> (A separate note on ACPI vs. DT: the firmware forwards *both* from QEMU
> to the runtime guest OS. However, the firmware parses only the DT for
> its own purposes.)
Hello, Laszlo,
Thanks for your advice above; it's very helpful.
When debugging, I also found some problems with 32-bit PCI devices.
I hope I could get some clues from you.
I checked 512M, 1G, and 2G devices. (4G returns an invalid parameter
error, so I think it may be treated as a 64-bit device; is this right?)
First,
all devices start from base address 0x3EFEFFFF.
ProcessPciHost: Config[0x3F000000+0x1000000) Bus[0x0..0xF]
Io[0x0+0x10000)@0x3EFF0000 Mem[0x10000000+0x2EFF0000)@0x0
PcdPciMmio32Base is 10000000=====================
PcdPciMmio32Size is 2EFF0000=====================
Second,
it could not get a new base address when searching memory space in the GCD map.
For 512M devices,
*BaseAddress = (*BaseAddress + 1 - Length) & (~AlignmentMask);
BaseAddress is 3EFEFFFF==========================
new BaseAddress is 1EEF0000==========================
~AlignmentMask is E0000000==========================
Final BaseAddress is 0000
Status = CoreSearchGcdMapEntry (*BaseAddress, Length, &StartLink,
&EndLink, Map);
For bigger devices:
everything stops when searching memory space because of the code below;
Length will be bigger than MaxAddress (0x3EFEFFFF):
if ((Entry->BaseAddress + Length) > MaxAddress) {
continue;
}
I also checked ArmVirtQemu.dsc, in which these are all set to 0:
gArmPlatformTokenSpaceGuid.PcdPciBusMin|0x0
gArmPlatformTokenSpaceGuid.PcdPciBusMax|0x0
gArmPlatformTokenSpaceGuid.PcdPciIoBase|0x0
gArmPlatformTokenSpaceGuid.PcdPciIoSize|0x0
gArmPlatformTokenSpaceGuid.PcdPciIoTranslation|0x0
gArmPlatformTokenSpaceGuid.PcdPciMmio32Base|0x0
gArmPlatformTokenSpaceGuid.PcdPciMmio32Size|0x0
gEfiMdePkgTokenSpaceGuid.PcdPciExpressBaseAddress|0x0
Do you think I should change PcdPciMmio32Base and PcdPciMmio32Size, or
make some changes to the GCD entry list (CoreSearchGcdMapEntry), so that
it could allocate resources for PCI devices?
Looking forward to your reply.
Thanks,
Liang
> Thanks
> Laszlo
>
Thread overview: 9+ messages
2015-11-04 22:22 [Qemu-devel] Could not add PCI device with big memory to aarch64 VMs liang yan
2015-11-05 0:53 ` Laszlo Ersek
2015-11-30 18:45 ` liang yan [this message]
2015-11-30 22:05 ` [Qemu-devel] [edk2] " Laszlo Ersek
2015-12-01 0:46 ` liang yan
2015-12-01 1:45 ` Laszlo Ersek
2015-12-02 17:28 ` liang yan
2015-12-02 18:29 ` Laszlo Ersek
2016-09-02 21:18 ` Laszlo Ersek