From: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
To: Julia Suvorova <jusual@redhat.com>,
	Zihan Yang <whois.zihan.yang@gmail.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate
Date: Sat, 31 Aug 2019 22:55:00 +0300	[thread overview]
Message-ID: <b7441a1c-9552-76cc-d879-ae3e7f8af8fb@gmail.com>
In-Reply-To: <c87583f3e8487100fdb8196ad45e6375cc877e3b.camel@redhat.com>

Hi Julia,

On 8/27/19 5:58 PM, Julia Suvorova wrote:
> On Mon, 2018-09-17 at 22:57 +0800, Zihan Yang wrote:
>> Hi all
>>
>> Here is a minimal working version of supporting multiple pci domains.
> Hi Zihan,
> Do you plan to continue working on this project?

Since the last submission was long ago, we can safely assume
Zihan has moved on to other projects.
Thanks, Zihan, for the great work!

>
> I am interested in it, and if you do not mind I would like to finish
> the project, retaining your authorship. I am going to take care
> of this patch set, and the one in SeaBios too.
> How does it sound to you?

I don't think Zihan would mind; in any case, I fully agree on keeping his authorship.
His work was part of GSoC, so there is no problem with the licensing either.

> Any tips and clues are welcomed.

I would start by rebasing his work onto the latest QEMU and giving it a try.
MSI/MSI-X PCI devices should simply work.
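
A quick way to sanity-check that once a guest boots (a sketch; the
device address below is illustrative):

    # inside the guest: confirm the device uses MSI/MSI-X rather than INTx
    lspci -s 0001:01:00.0 -vvv | grep -E 'MSI|MSI-X'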

If I remember correctly, two issues remain:
1. Devices with legacy interrupts do not work (some INTx wiring issue).
2. Running lspci -vt does not show the correct tree; every pxb should
start a separate PCI hierarchy (see the sketch below).
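
For point 2, the expected shape would be roughly the following
(an illustrative sketch, not output captured from a real run):

    -+-[0001:00]---1c.0-[01]----00.0   <- the pxb-pcie in domain 1
     \-[0000:00]-+-00.0                <- the usual domain 0 tree
                 +-...

i.e. one [000N:00] root per pxb instead of everything under domain 0.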

Good luck!
Marcel


>
> Best regards, Julia Suvorova.
>
>> The next few paragraphs illustrate the purpose and give a usage
>> example. Current issues and limitations are in the last 2 paragraphs,
>> followed by the changelog of each version.
>>
>> Currently only the q35 host bridge is allocated an entry in the MCFG
>> table; all pxb-pcie host bridges stay within PCI domain 0. This
>> series of patches allows each pxb-pcie to be put in a separate PCI
>> domain, allocating a new MCFG table entry for it.
>>
>> Users can configure whether to put a pxb host bridge into a separate
>> domain by specifying the 'domain_nr' property of the pxb-pcie device.
>> The 'bus_nr' property indicates the Base Bus Number (BBN) of the
>> pxb-pcie host bridge. Another property, max_bus, specifies the
>> maximum desired bus number, to reduce MCFG space cost. An example
>> command is
>>
>>      -device pxb-pcie,id=bridge3,bus="pcie.0",domain_nr=1,max_bus=15
>>
>> This pxb-pcie host bridge is then placed in PCI domain 1 and
>> reserves only (15+1)=16 buses, which is much smaller than the default
>> 256 buses.
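>>
>> (For reference, each bus needs 1 MiB of ECAM space: 32 devices x 8
>> functions x 4 KiB of config space per function, so these 16 buses
>> cost 16 MiB of MCFG space instead of the 256 MiB a full domain would
>> take.)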
>>
>> Compared with the previous version, this version is much simpler
>> because the MCFG of the extra domain now has a relatively fixed
>> address, as suggested by Marcel and Gerd. Putting the extra MMCONFIG
>> above 4G and letting SeaBIOS leave it for the guest OS is planned for
>> the next version. The range is [0x80000000, 0xb0000000), which allows
>> us to hold 4x as many buses as before.
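>>
>> (That range is 0xb0000000 - 0x80000000 = 0x30000000 bytes = 768 MiB,
>> i.e. room for 768 buses at the 1 MiB of ECAM each bus needs.)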
>>
>> A complete command line for testing follows; you need to replace
>> GUEST_IMAGE, DATA_IMAGE and SEABIOS_BIN with the proper environment
>> variables:
>>
>> ./x86_64-softmmu/qemu-system-x86_64 \
>>      -machine q35,accel=kvm -smp 2 -m 2048 \
>>      -drive file=${GUEST_IMAGE}  -netdev user,id=realnet0 \
>>      -device e1000e,netdev=realnet0,mac=52:54:00:12:34:56 \
>>      -device pxb-pcie,id=bridge3,bus="pcie.0",domain_nr=1 \
>>      -device pcie-root-port,id=rp1,bus=bridge3,addr=1c.0,port=8,chassis=8 \
>>      -drive if=none,id=drive0,file=${DATA_IMAGE} \
>>      -device virtio-scsi-pci,id=scsi,bus=rp1,addr=00.0 \
>>      -bios ${SEABIOS_BIN}
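>>
>> Once the guest boots, running lspci -D inside it should list the
>> virtio-scsi device with its domain prefix (something like
>> 0001:01:00.0; the exact address here is illustrative).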
>>
>> There are a few limitations, though:
>> 1. Legacy interrupt routing is not dealt with yet; there is only
>>    support for devices using MSI/MSI-X.
>> 2. Only 4x devices are supported, so you need to be careful not to
>>    overuse them.
>> 3. I have not fully tested the functionality of devices under a
>>    separate domain yet, but Linux can recognize them when running
>>    lspci.
>>
>> Current issue:
>> * The SCSI storage device will be recognized twice, once in domain 0
>>   as 0000:01.0 and once in domain 1 as 0001:01.0. I will try to fix
>>   this in the next version.
>>
>> v5 <- v4:
>> - Refactor the design and place pxb-pcie's MCFG in [0x80000000,
>>   0xb0000000)
>> - QEMU only decides the desired mcfg_size and leaves mcfg_base to
>>   SeaBIOS
>> - No longer connect PXBDev and PXBPCIEHost with a link property, but
>>   with the PCI bus under them, which makes the code simpler.
>>
>> v4 <- v3:
>> - Fix a bug in setting the MCFG table
>> - bus_nr is not used when pxb-pcie is in a new PCI domain
>>
>> v3 <- v2:
>> - Replace duplicate properties in the pxb-pcie host with a link
>>   property to PXBDev
>> - Allow SeaBIOS to access the config space and data space of the
>>   expander bridge through a different ioport, because 0xcf8 is
>>   attached only to the sysbus
>> - Add a new property start_bus to indicate the BBN of the pxb host
>>   bridge. The bus_nr property is used as the bus number of the
>>   pxb-pcie device on the pcie.0 bus
>>
>> v2 <- v1:
>> - Allow the user to configure whether to put pxb-pcie into a
>>   separate domain
>> - Add the AML description of each host bridge
>> - Move the MCFG space to between the RAM hotplug area and PCI hole64
>>
>> Many thanks to all. Please let me know if you have any suggestions.
>>
>> Zihan Yang (6):
>>    pci_expander_bridge: add type TYPE_PXB_PCIE_HOST
>>    pci_expander_bridge: add domain_nr and max_bus property for pxb-pcie
>>    acpi-build: allocate mcfg for pxb-pcie host bridges
>>    i386/acpi-build: describe new pci domain in AML
>>    pci_expander_bridge: add config_write callback for pxb-pcie
>>    pci_expander_bridge: inform seabios of desired mcfg size via hidden
>>      bar
>>
>>   hw/i386/acpi-build.c                        | 162 ++++++++++++++++++--------
>>   hw/pci-bridge/pci_expander_bridge.c         | 172 +++++++++++++++++++++++++++-
>>   hw/pci/pci.c                                |  30 ++++-
>>   include/hw/pci-bridge/pci_expander_bridge.h |  25 ++++
>>   include/hw/pci/pci.h                        |   2 +
>>   include/hw/pci/pci_bus.h                    |   2 +
>>   include/hw/pci/pci_host.h                   |   2 +-
>>   7 files changed, 336 insertions(+), 59 deletions(-)
>>   create mode 100644 include/hw/pci-bridge/pci_expander_bridge.h
>>



Thread overview: 2+ messages
     [not found] <0dc1a87882d78b071134dba7787d4459b48ed096.camel@gmail.com>
2019-08-27 14:58 ` [Qemu-devel] [RFC v5 0/6] pci_expander_bridge: support separate Julia Suvorova
2019-08-31 19:55   ` Marcel Apfelbaum [this message]
