From: Jonathan Cameron via <qemu-devel@nongnu.org>
To: wangyuquan <wangyuquan1236@phytium.com.cn>
Cc: <fan.ni@samsung.com>, <mst@redhat.com>,
<marcel.apfelbaum@gmail.com>, <qemu-devel@nongnu.org>,
<linux-cxl@vger.kernel.org>
Subject: Re: [RFC PATCH v3 2/2] pci-host/cxl: Support creation of a new CXL Host Bridge
Date: Fri, 25 Jul 2025 16:40:13 +0100 [thread overview]
Message-ID: <20250725164013.00006430@huawei.com> (raw)
In-Reply-To: <20250617040649.81303-3-wangyuquan1236@phytium.com.cn>
On Tue, 17 Jun 2025 12:06:49 +0800
wangyuquan <wangyuquan1236@phytium.com.cn> wrote:
> From: Yuquan Wang <wangyuquan1236@phytium.com.cn>
>
> Define a new CXL host bridge type (TYPE_CXL_HOST). This is an
> independent CXL host bridge which combines GPEX features (ECAM, MMIO
> windows and IRQs) with CXL Host Bridge Component Registers (CHBCR).
>
> The root bus path of CXL_HOST is "0001:00", so it does not affect the
> original PCIe host topology on some platforms. Previously, a
> pxb-cxl-host with any CXL root ports and CXL endpoint devices would
> share the resources (e.g. BDF, MMIO space) of the original PCIe
> domain, which left some platforms, such as sbsa-ref, unable to
> support the original number of PCIe devices. The new type provides a
> solution to this problem.
>
> Signed-off-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
A few trivial things inline. To me this looks fine, but it needs more
eyes, particularly from PCI folk, on whether anything is missing for
the host bridge.
> diff --git a/hw/pci-host/Kconfig b/hw/pci-host/Kconfig
> index 35c0415242..05c772bcf4 100644
> --- a/hw/pci-host/Kconfig
> +++ b/hw/pci-host/Kconfig
> @@ -74,6 +74,10 @@ config PCI_POWERNV
> select MSI_NONBROKEN
> select PCIE_PORT
>
> +config CXL_HOST_BRIDGE
> + bool
> + select PCI_EXPRESS
> +
> config REMOTE_PCIHOST
> bool
>
> diff --git a/hw/pci-host/cxl.c b/hw/pci-host/cxl.c
> new file mode 100644
> index 0000000000..74c8c83314
> --- /dev/null
> +++ b/hw/pci-host/cxl.c
> @@ -0,0 +1,152 @@
> +/*
> + * QEMU CXL Host Bridge Emulation
> + *
> + * Copyright (C) 2025, Phytium Technology Co, Ltd. All rights reserved.
> + *
> + * Based on gpex.c
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include "qemu/osdep.h"
> +#include "hw/pci/pci_bus.h"
> +#include "hw/pci-host/cxl_host_bridge.h"
> +
> +static void cxl_host_set_irq(void *opaque, int irq_num, int level)
> +{
> + CXLHostBridge *host = opaque;
> +
> + qemu_set_irq(host->irq[irq_num], level);
> +}
> +
> +int cxl_host_set_irq_num(CXLHostBridge *host, int index, int gsi)
> +{
> + if (index >= PCI_NUM_PINS) {
> + return -EINVAL;
> + }
> +
> + host->irq_num[index] = gsi;
> + return 0;
> +}
> +
> +static PCIINTxRoute cxl_host_route_intx_pin_to_irq(void *opaque, int pin)
> +{
> + PCIINTxRoute route;
> + CXLHostBridge *host = opaque;
> + int gsi = host->irq_num[pin];
> +
> + route.irq = gsi;
> + if (gsi < 0) {
> + route.mode = PCI_INTX_DISABLED;
> + } else {
> + route.mode = PCI_INTX_ENABLED;
> + }
> +
> + return route;
CXLHostBridge *host = opaque;
int gsi = host->irq_num[pin];
PCIINTxRoute route = {
.irq = gsi,
.mode = gsi < 0 ? PCI_INTX_DISABLED : PCI_INTX_ENABLED,
};
return route;
perhaps? Or maybe not worth bothering.
> +}
> +
> +static void cxl_host_realize(DeviceState *dev, Error **errp)
> +{
> + SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
> + CXLHostBridge *host = CXL_HOST(dev);
> + CXLComponentState *cxl_cstate = &host->cxl_cstate;
> + struct MemoryRegion *mr = &cxl_cstate->crb.component_registers;
> + PCIHostState *pci = PCI_HOST_BRIDGE(dev);
> + PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
> + PCIBus *cxlbus;
> + int i;
> +
> + cxl_host_reset(host);
> + cxl_component_register_block_init(OBJECT(dev), cxl_cstate, TYPE_CXL_HOST);
> + sysbus_init_mmio(sbd, mr);
> +
> + pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> + sysbus_init_mmio(sbd, &pex->mmio);
> +
> + memory_region_init(&host->io_mmio, OBJECT(host), "cxl_host_mmio",
> + UINT64_MAX);
> +
> + memory_region_init_io(&host->io_mmio_window, OBJECT(host),
> + &unassigned_io_ops, OBJECT(host),
Odd code alignment: the & on the continuation line should sit under
the & of the argument above it. Check for other instances of this.
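i.e. something like this (whitespace change only, untested):

```c
    memory_region_init_io(&host->io_mmio_window, OBJECT(host),
                          &unassigned_io_ops, OBJECT(host),
                          "cxl_host_mmio_window", UINT64_MAX);
```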
> + "cxl_host_mmio_window", UINT64_MAX);
> +
> + memory_region_add_subregion(&host->io_mmio_window, 0, &host->io_mmio);
> + sysbus_init_mmio(sbd, &host->io_mmio_window);
> +
> + /* ioport window init, 64K is the legacy size in x86 */
> + memory_region_init(&host->io_ioport, OBJECT(host), "cxl_host_ioport",
> + 64 * 1024);
> +
> + memory_region_init_io(&host->io_ioport_window, OBJECT(host),
> + &unassigned_io_ops, OBJECT(host),
> + "cxl_host_ioport_window", 64 * 1024);
> +
> + memory_region_add_subregion(&host->io_ioport_window, 0, &host->io_ioport);
> + sysbus_init_mmio(sbd, &host->io_ioport_window);
> +
> + /* PCIe host bridge uses 4 legacy IRQ lines */
> + for (i = 0; i < PCI_NUM_PINS; i++) {
> + sysbus_init_irq(sbd, &host->irq[i]);
> + host->irq_num[i] = -1;
> + }
> +
> + pci->bus = pci_register_root_bus(dev, "cxlhost.0", cxl_host_set_irq,
> + pci_swizzle_map_irq_fn, host, &host->io_mmio,
> + &host->io_ioport, 0, 4, TYPE_CXL_BUS);
> + cxlbus = pci->bus;
> + cxlbus->flags |= PCI_BUS_CXL;
As cxlbus is only used for flags, why not simply
pci->bus->flags |= PCI_BUS_CXL;
and drop the local variable. Or...
> +
> + pci_bus_set_route_irq_fn(pci->bus, cxl_host_route_intx_pin_to_irq);
Use cxlbus here instead of pci->bus giving a second user.
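i.e. keep the local variable and use it for both callers, roughly
(untested sketch):

```c
    cxlbus = pci->bus;
    cxlbus->flags |= PCI_BUS_CXL;

    pci_bus_set_route_irq_fn(cxlbus, cxl_host_route_intx_pin_to_irq);
```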
> +}
> diff --git a/include/hw/pci-host/cxl_host_bridge.h b/include/hw/pci-host/cxl_host_bridge.h
> new file mode 100644
> index 0000000000..833e460f01
> --- /dev/null
> +++ b/include/hw/pci-host/cxl_host_bridge.h
> @@ -0,0 +1,23 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include "hw/cxl/cxl.h"
> +#include "hw/irq.h"
> +#include "hw/pci/pcie_host.h"
We should roughly follow include-what-you-use principles, so we also
need the header that provides MemoryRegion here; "system/memory.h"
seems the most likely choice.
> +
> +typedef struct CXLHostBridge {
> + PCIExpressHost parent_obj;
> +
> + CXLComponentState cxl_cstate;
> +
> + MemoryRegion io_ioport;
I'm not sure the io_ prefix is adding much, given this is all in
a CXLHostBridge object?
> + MemoryRegion io_mmio;
> + MemoryRegion io_ioport_window;
> + MemoryRegion io_mmio_window;
> + qemu_irq irq[PCI_NUM_PINS];
> + int irq_num[PCI_NUM_PINS];
> +} CXLHostBridge;
> +
> +int cxl_host_set_irq_num(CXLHostBridge *host, int index, int gsi);
> +void cxl_host_hook_up_registers(CXLState *cxl_state, CXLHostBridge *host);
Thread overview: 6+ messages
2025-06-17 4:06 [RFC PATCH v3 0/2] cxl: Support creation of a new CXL Host Bridge wangyuquan
2025-06-17 4:06 ` [RFC PATCH v3 1/2] hw/pxb-cxl: Rename the pxb cxl host bridge wangyuquan
2025-07-25 15:16 ` Jonathan Cameron via
2025-06-17 4:06 ` [RFC PATCH v3 2/2] pci-host/cxl: Support creation of a new CXL Host Bridge wangyuquan
2025-07-25 15:40 ` Jonathan Cameron via [this message]
2025-07-25 16:32 ` [RFC PATCH v3 0/2] cxl: " Dave Jiang