From: Paolo Bonzini <pbonzini@redhat.com>
To: "Hervé Poussineau" <hpoussin@reactos.org>
Cc: "Andreas Färber" <andreas.faerber@web.de>,
	qemu-ppc@nongnu.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v2 05/10] raven: set a correct PCI I/O memory region
Date: Wed, 04 Sep 2013 08:01:56 +0200	[thread overview]
Message-ID: <5226CCD4.2030204@redhat.com> (raw)
In-Reply-To: <1378247351-8446-6-git-send-email-hpoussin@reactos.org>

On 04/09/2013 00:29, Hervé Poussineau wrote:
> The PCI I/O region is 0x3f800000 bytes starting at 0x80000000.
> Do not use the global QEMU I/O region, which is only 64KB.

You can make the global QEMU I/O region larger, that's not a problem.

Not using address_space_io is fine as well, but that is a separate change,
and I doubt it is a good idea to do it for a single target. If you do it
for all non-x86 PCI bridges, and move the initialization of
address_space_io to target-i386, that's a different story, of course.

Paolo

> Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
> ---
>  hw/pci-host/prep.c |   15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
> index 95fa2ea..af0bf2b 100644
> --- a/hw/pci-host/prep.c
> +++ b/hw/pci-host/prep.c
> @@ -53,6 +53,7 @@ typedef struct PRePPCIState {
>  
>      qemu_irq irq[PCI_NUM_PINS];
>      PCIBus pci_bus;
> +    MemoryRegion pci_io;
>      MemoryRegion pci_intack;
>      RavenPCIState pci_dev;
>  } PREPPCIState;
> @@ -136,13 +137,11 @@ static void raven_pcihost_realizefn(DeviceState *d, Error **errp)
>  
>      memory_region_init_io(&h->conf_mem, OBJECT(h), &pci_host_conf_be_ops, s,
>                            "pci-conf-idx", 1);
> -    sysbus_add_io(dev, 0xcf8, &h->conf_mem);
> -    sysbus_init_ioports(&h->busdev, 0xcf8, 1);
> +    memory_region_add_subregion(&s->pci_io, 0xcf8, &h->conf_mem);
>  
>      memory_region_init_io(&h->data_mem, OBJECT(h), &pci_host_data_be_ops, s,
>                            "pci-conf-data", 1);
> -    sysbus_add_io(dev, 0xcfc, &h->data_mem);
> -    sysbus_init_ioports(&h->busdev, 0xcfc, 1);
> +    memory_region_add_subregion(&s->pci_io, 0xcfc, &h->data_mem);
>  
>      memory_region_init_io(&h->mmcfg, OBJECT(s), &PPC_PCIIO_ops, s, "pciio", 0x00400000);
>      memory_region_add_subregion(address_space_mem, 0x80800000, &h->mmcfg);
> @@ -160,11 +159,15 @@ static void raven_pcihost_initfn(Object *obj)
>      PCIHostState *h = PCI_HOST_BRIDGE(obj);
>      PREPPCIState *s = RAVEN_PCI_HOST_BRIDGE(obj);
>      MemoryRegion *address_space_mem = get_system_memory();
> -    MemoryRegion *address_space_io = get_system_io();
>      DeviceState *pci_dev;
>  
> +    memory_region_init(&s->pci_io, obj, "pci-io", 0x3f800000);
> +
> +    /* CPU address space */
> +    memory_region_add_subregion(address_space_mem, 0x80000000, &s->pci_io);
>      pci_bus_new_inplace(&s->pci_bus, DEVICE(obj), NULL,
> -                        address_space_mem, address_space_io, 0, TYPE_PCI_BUS);
> +                        address_space_mem, &s->pci_io, 0, TYPE_PCI_BUS);
> +
>      h->bus = &s->pci_bus;
>  
>      object_initialize(&s->pci_dev, TYPE_RAVEN_PCI_DEVICE);
> 


Thread overview: 23+ messages
2013-09-03 22:29 [Qemu-devel] [PATCH v2 00/10] prep: improve Raven PCI host emulation Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 01/10] prep: kill get_system_io() usage Hervé Poussineau
2013-09-04  6:13   ` Paolo Bonzini
2013-09-04 18:29     ` Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 02/10] raven: use constant PCI_NUM_PINS instead of 4 Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 03/10] raven: move BIOS loading from board code to PCI host Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 04/10] raven: rename intack region to pci_intack Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 05/10] raven: set a correct PCI I/O memory region Hervé Poussineau
2013-09-04  6:01   ` Paolo Bonzini [this message]
2013-09-04  7:22     ` Peter Maydell
2013-09-04  8:11       ` Paolo Bonzini
2013-09-04  8:25         ` Peter Maydell
2013-09-04  8:31           ` Paolo Bonzini
2013-09-04  8:51             ` Peter Maydell
2013-09-04  8:54           ` Andreas Färber
2013-09-09 20:57           ` Hervé Poussineau
2013-09-09 21:33             ` Peter Maydell
2013-09-10  7:43             ` Paolo Bonzini
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 06/10] raven: set a correct PCI memory region Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 07/10] raven: add PCI bus mastering address space Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 08/10] raven: implement non-contiguous I/O region Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 09/10] raven: fix PCI bus accesses with size > 1 Hervé Poussineau
2013-09-03 22:29 ` [Qemu-devel] [PATCH v2 10/10] raven: use raven_ for all function prefixes Hervé Poussineau
