From: Michael Roth <mdroth@linux.vnet.ibm.com>
To: Claudio Fontana <hw.claudio@gmail.com>, peter.maydell@linaro.org
Cc: QEMU Developers <qemu-devel@nongnu.org>, agraf@suse.de, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH 08/12] pci: allow 0 address for PCI IO regions
Date: Fri, 09 Jan 2015 09:29:19 -0600 [thread overview]
Message-ID: <20150109152919.22996.30407@loki> (raw)
In-Reply-To: <CANv_3YY9vbdN+HWfr8KzAVzxuQNzZJ-7v44bRM7kaD+sDnL32Q@mail.gmail.com>
Quoting Claudio Fontana (2015-01-09 08:57:39)
> Hello,
>
> resurrecting an old thread.. I ran into the same issue discussed
> earlier in this thread, where QEMU silently ignores PCI BAR address
> programming attempts when the I/O space offset is 0 (zero).
>
> I think that from a QEMU "user" standpoint, aside from this particular
> issue, which can easily be worked around by using a non-zero minimum
> offset, it would be good if QEMU produced a warning about this.
>
> I think that at least it would be worth a message if DEBUG_PCI is set.
>
> This silent discarding of BAR programming attempts has been painful
> while doing early enablement, and the same goes for other cases (like
> the requirement to set the I/O space enable bit beforehand) which are
> legitimate but still worthy of a diagnostic, at the very least when
> doing PCI enablement (which to me translates to having DEBUG_PCI set).
>
> What do you guys think, would a patch be welcome trying to address that?
> Would you make the diagnostic dependent on DEBUG_PCI?
I've sent an updated patch (which allows 0-address MEM regions as well)
as a separate patchset. My hope is that we can simply allow the
programming of 0-address mem/io bars, but there are some concerns I
still don't quite understand but have attempted to summarize to
continue the discussion:
http://lists.gnu.org/archive/html/qemu-devel/2014-12/msg03453.html
Debugging output would be useful, but there are undoubtedly cases where
the current behavior prevents proper initialization of devices in
existing guests which expect 0 to be a valid address. So at the very
least I think we should allow the behavior to be enabled at
machine/host-bridge granularity, if not for all machines/host-bridges
as the patch above does.
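[Editor's note: the following is an illustrative sketch, not QEMU source. The function and type names (`pci_bar_effective_address`, `PCIHostBridgeOpts`, `allow_zero_address`) are hypothetical; it only models the two ideas in the exchange above: warning instead of silently discarding a 0-address BAR write, and gating acceptance of 0 behind a per-host-bridge flag.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PCI_BAR_UNMAPPED ((uint64_t)-1)

/* Hypothetical per-host-bridge option, modeling the suggestion that
 * 0-address BARs be allowed at machine/host-bridge granularity. */
typedef struct {
    bool allow_zero_address;
} PCIHostBridgeOpts;

/* Sketch of the check under discussion: current behavior treats a
 * programmed address of 0 as "not mapped" and silently ignores it.
 * Here we (a) optionally accept 0, and (b) emit the diagnostic Claudio
 * asks for when we do ignore the write, instead of staying silent. */
static uint64_t pci_bar_effective_address(uint64_t programmed,
                                          const PCIHostBridgeOpts *opts)
{
    if (programmed == 0 && !opts->allow_zero_address) {
        fprintf(stderr,
                "pci: ignoring BAR programming at address 0 "
                "(zero not allowed on this host bridge)\n");
        return PCI_BAR_UNMAPPED;
    }
    return programmed;
}
```

With `allow_zero_address` set, a guest that legitimately programs a BAR to 0 gets it mapped; with it clear, the write is still dropped but no longer silently.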
>
> Thanks,
>
> Claudio
Thread overview: 11+ messages
2015-01-09 14:57 [Qemu-devel] [PATCH 08/12] pci: allow 0 address for PCI IO regions Claudio Fontana
2015-01-09 15:29 ` Michael Roth [this message]
-- strict thread matches above, loose matches on Subject: below --
2014-08-19 0:21 [Qemu-devel] [PATCH v3 00/12] spapr: add support for pci hotplug Michael Roth
2014-08-19 0:21 ` [Qemu-devel] [PATCH 08/12] pci: allow 0 address for PCI IO regions Michael Roth
2014-08-26 9:14 ` Alexey Kardashevskiy
2014-08-26 11:55 ` Peter Maydell
2014-08-26 18:34 ` Michael Roth
2014-08-26 11:41 ` Alexander Graf
2014-08-27 13:47 ` Michael S. Tsirkin
2014-08-28 21:21 ` Michael Roth
2014-08-28 21:33 ` Peter Maydell
2014-08-28 21:46 ` Michael S. Tsirkin