From: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
To: Zihan Yang <whois.zihan.yang@gmail.com>,
Gerd Hoffmann <kraxel@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org, seabios@seabios.org
Subject: Re: [Qemu-devel] [SeaBIOS] [RFC v2 0/3] Support multiple pci domains in pci_device
Date: Tue, 28 Aug 2018 08:37:39 +0300
Message-ID: <5f5dc082-7053-d6b9-b01f-fffba1847ba5@gmail.com>
In-Reply-To: <CAKwiv-gyGoqEpmpicxo_a3cQBsb0-XoT9cqkJu5WOtkUtAx0eQ@mail.gmail.com>
Hi Gerd
On 08/28/2018 07:12 AM, Zihan Yang wrote:
> On Mon, Aug 27, 2018 at 7:04 AM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> Hi,
>>
>>>> However, QEMU only binds ports 0xcf8 and 0xcfc to
>>>> bus pcie.0. To avoid bus conflicts, we should use other port pairs for
>>>> buses under new domains.
>>> I would skip support for IO based configuration and use only MMCONFIG
>>> for extra root buses.
>>>
>>> The question remains: how do we assign MMCONFIG space for
>>> each PCI domain?
Thanks for your comments!
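For reference, here is a minimal sketch of what an MMCONFIG-only config
read could look like once a domain has its own ECAM base (illustrative
code only, not existing SeaBIOS or QEMU source; the helper name is made
up):

#include <stdint.h>

/* Standard ECAM layout: bus in address bits 27:20, device in 19:15,
 * function in 14:12, register offset in 11:0. Per domain, the firmware
 * only needs to know the MMCONFIG base address; no 0xcf8/0xcfc ports
 * are involved. */
static uint32_t ecam_readl(uintptr_t mmcfg_base, uint8_t bus,
                           uint8_t dev, uint8_t fn, uint16_t offset)
{
    volatile uint32_t *reg = (volatile uint32_t *)
        (mmcfg_base + ((uintptr_t)bus << 20) + ((uintptr_t)dev << 15)
                    + ((uintptr_t)fn << 12) + (offset & 0xffc));
    return *reg;
}

The only per-domain question is then where mmcfg_base lives, which is
exactly the allocation problem discussed below.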
>> Allocation-wise it would be easiest to place them above 4G. Right after
>> memory, or after etc/reserved-memory-end (if that fw_cfg file is
>> present), where the 64bit pci bars would have been placed. Move the pci
>> bars up in address space to make room.
>>
>> Only problem is that seabios wouldn't be able to access mmconfig then.
>>
>> Placing them below 4G would work at least for a few pci domains. q35
>> mmconfig bar is placed at 0xb0000000 -> 0xbfffffff, basically for
>> historical reasons. Old qemu versions had 2.75G low memory on q35 (up
>> to 0xafffffff), and I think old machine types still have that for live
>> migration compatibility reasons. Modern qemu uses 2G only, to make
>> gigabyte alignment work.
>>
>> 32bit pci bars are placed above 0xc0000000. The address space from 2G
>> to 2.75G (0x80000000 -> 0xafffffff) is unused on new machine types.
>> Enough room for three additional mmconfig bars (full size), so four
>> pci domains total if you add the q35 one.
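(For reference, the arithmetic behind "three additional mmconfig bars",
assuming full-size ECAM regions of 256 buses x 1 MB = 256 MB each:
0xb0000000 - 0x80000000 = 0x30000000 = 768 MB, and 768 MB / 256 MB = 3.)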
> Maybe we can support 4 domains first before we come up
> with a better solution. But I'm not sure whether four domains are
> enough for users who need a very large number of devices.
(Adding Michael)
Since we will not use all 256 buses of each extra PCI domain,
I think this space will allow us to support more PCI domains.
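For example (just to illustrate the scaling; the per-domain bus count is
not decided here): ECAM needs 1 MB of MMCONFIG space per bus, so a domain
limited to 16 buses needs only 16 MB, and the same 768 MB window below
0xb0000000 could then hold 768 / 16 = 48 such domains instead of 3
full-size ones.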
What would the flow look like?
1. QEMU passes to SeaBIOS the number of extra
PCI domains it needs and the number of buses per domain.
How will it pass this info? A vendor-specific capability,
some PCI registers, or a modified extra-pci-roots fw_cfg file?
(A possible record layout is sketched after this list.)
2. SeaBIOS assigns an MMCFG address to each PCI domain and
returns the information to QEMU.
How will it do that? Through some pxb-pcie registers? Or do we model
the MMCFG region like a PCI BAR?
3. Once QEMU gets the MMCFG addresses, it can respond to
MMIO configuration cycles.
4. SeaBIOS queries the devices on all PCI domains, then computes
and assigns IO/MEM resources (for PCI domains > 0 it will
use MMCFG to configure the PCI devices).
5. QEMU uses the IO/MEM information to create the CRS for each
extra PCI host bridge.
6. SeaBIOS gets the ACPI tables from QEMU and passes them to the
guest OS.
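To make steps 1 and 2 concrete, here is a rough sketch of the kind of
per-domain record QEMU could expose (e.g. in a new fw_cfg file) and
SeaBIOS could fill back in. None of this exists today; the file name and
all field names below are made up purely for illustration:

#include <stdint.h>

/* Hypothetical "etc/extra-pci-domains" entry, one per extra domain. */
struct pxb_domain_info {
    uint16_t domain_nr;   /* PCI domain number (> 0)                  */
    uint16_t nr_buses;    /* number of buses the domain really needs  */
    uint32_t reserved;
    uint64_t mmcfg_base;  /* 0 from QEMU; SeaBIOS writes the chosen   */
                          /* base here in step 2                      */
    uint64_t mmcfg_size;  /* nr_buses * 1 MB                          */
} __attribute__((packed));

SeaBIOS would read the array, pick a base for every entry inside the free
0x80000000 -> 0xafffffff window, write the bases back so QEMU can start
decoding MMIO config cycles (step 3), and then enumerate the new domains
through those MMCONFIG regions (step 4).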
Thanks,
Marcel
>> cheers,
>> Gerd
>>