From: "Michael S. Tsirkin"
To: Peter Maydell
Cc: Paolo Bonzini, Anthony Liguori, Jan Kiszka, QEMU Developers, Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH RFC v2 2/2] hw/pci: handle unassigned pci addresses
Date: Sun, 15 Sep 2013 16:39:44 +0300
Message-ID: <20130915133944.GA5219@redhat.com>

On Sun, Sep 15, 2013 at 02:24:07PM +0100, Peter Maydell wrote:
> On 15 September 2013 13:17, Michael S. Tsirkin wrote:
> > On Sun, Sep 15, 2013 at 12:23:41PM +0100, Peter Maydell wrote:
> >> If it's a "pure container" then it
> >> doesn't respond for areas that none of its subregions
> >> cover (it can't, it has no idea what it should do).
> >
> > Interesting. This really is completely undocumented,
> > though.
> >
> > The documentation merely says:
> >
> >     specifies a priority that allows the core to decide which of two
> >     regions at the same address are visible (highest wins)
> >
> > which makes one think the only thing affecting
> > visibility is the priority.
>
> Yes, we could definitely stand to improve the documentation.
>
> > I wonder whether there's a way to describe this
> > in terms that don't expose the implementation of
> > render_memory_region, somehow.
> >
> > Maybe something like the following: when multiple regions cover the
> > same address, only one region is going to "win" and get invoked for
> > an access. The winner can be determined as follows:
> >
> > - "pure container" regions created with memory_region_init(..)
> >   are ignored
> > - if multiple non-container regions cover an address, the winner is
> >   determined using a priority vector, built of the priority field
> >   values from the address space down to our region (i.e. region
> >   priority, followed by a subregion priority, followed by a
> >   sub-subregion priority, etc.)
> >
> > These priority vectors are compared as follows:
> >
> > - if the vectors are identical, which one wins is undefined
> > - otherwise, if one vector is a sub-vector of the other,
> >   which one wins is undefined
> > - otherwise the vector that comes first in lexicographical
> >   order wins
>
> This just seems to me to be documenting a theoretical
> implementation. If we're lucky it has the same semantics
> as what we actually do; if we're unlucky it doesn't in
> some corner case and is just confusing. And it would
> certainly be confusing if you look at the code and it does
> absolutely nothing like the documentation's algorithm.
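
To make the "highest wins" rule quoted above concrete, here is a minimal
sketch of the simple case (illustrative only: this is not code from the
patch or from docs/memory.txt, the names, sizes and the overlap_example()
and dummy_ops helpers are invented, and it assumes the memory_region_*
API of this period). Two non-container MMIO regions overlap inside one
container, and the priority argument alone decides which of them is
visible where they overlap:

#include "exec/memory.h"

static uint64_t dummy_read(void *opaque, hwaddr addr, unsigned size)
{
    return 0;
}

static void dummy_write(void *opaque, hwaddr addr, uint64_t val,
                        unsigned size)
{
}

static const MemoryRegionOps dummy_ops = {
    .read = dummy_read,
    .write = dummy_write,
};

/* 'container' and 'owner' come from the caller; the layout is made up. */
static void overlap_example(MemoryRegion *container, Object *owner)
{
    static MemoryRegion low, high;

    memory_region_init_io(&low, owner, &dummy_ops, NULL, "low", 0x2000);
    memory_region_init_io(&high, owner, &dummy_ops, NULL, "high", 0x1000);

    /* Both regions cover [0x1000, 0x2000).  "high" (priority 1) is
     * visible there; "low" (priority 0) still responds in [0, 0x1000),
     * where nothing overlaps it. */
    memory_region_add_subregion_overlap(container, 0x0000, &low, 0);
    memory_region_add_subregion_overlap(container, 0x1000, &high, 1);
}
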
> I think we could simply add an extra bullet point in the
> 'Visibility' section of docs/memory.txt saying:
>
> - if a search within a container or alias subregion does not find
>   a match, we continue with the next subregion in priority order.
>
> plus a note at the bottom saying:
>
> "Notice that these rules mean that if a container subregion
> does not handle an address in its range (i.e. it has "holes"
> where it has not mapped its own subregions) then lower
> priority siblings of the container will be checked to see if they
> should be used for accesses within those "holes"."
>
> We could also do with explicitly mentioning in the section
> about priority that priority is container-local, since this seems
> to be the number one confusion among people looking at
> memory region priority.
>
> >> Mostly this doesn't come up because you don't need
> >> to play games with overlapping memory regions and
> >> containers very often: the common case is "nothing
> >> overlaps at all". But the facilities are there if you need
> >> them.
> >
> > Dynamic regions like PCI BARs are actually very common;
> > IMO this means the overlapping case is actually very common.
>
> Yes, that's true, but even then it's usually just "overlaps
> of subregions within a single container" and there isn't
> a need to worry about within-container versus outside-container
> interactions.
>
> -- PMM

Well, actually, if you look at the 'subregion collisions' mail that I
sent, most of the collisions are exactly of this sort.
For example, the MCFG region in q35 overlaps the PCI hole, so any PCI
BAR can be made to overlap it.

I guess the documentation should just explain what happens when regions
overlap, and not try to say whether this case is common or not.

--
MST
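
The "holes" behaviour that the proposed note describes, and that the
MCFG/PCI-hole example relies on, can be sketched the same way (again
illustrative only: hole_example(), stub_ops and the addresses are
invented and do not correspond to the real q35 layout). A pure container
at higher priority overlaps a lower-priority sibling, and accesses that
land in the part of the container not covered by any of its subregions
fall through to that sibling:

#include "exec/memory.h"

static uint64_t stub_read(void *opaque, hwaddr addr, unsigned size)
{
    return 0;
}

static void stub_write(void *opaque, hwaddr addr, uint64_t val,
                       unsigned size)
{
}

static const MemoryRegionOps stub_ops = {
    .read = stub_read,
    .write = stub_write,
};

static void hole_example(MemoryRegion *parent, Object *owner)
{
    static MemoryRegion background, container, bar;

    /* Lower-priority sibling covering the whole 1MB range (think of the
     * PCI hole / MMCONFIG case mentioned above). */
    memory_region_init_io(&background, owner, &stub_ops, NULL,
                          "background", 0x100000);
    memory_region_add_subregion_overlap(parent, 0x0, &background, 0);

    /* Pure container: it never responds by itself. */
    memory_region_init(&container, owner, "container", 0x100000);
    memory_region_add_subregion_overlap(parent, 0x0, &container, 1);

    /* The container maps only [0x40000, 0x50000).  Accesses there go to
     * "bar"; accesses anywhere else in [0x0, 0x100000) are not claimed
     * by the container and therefore reach "background", despite the
     * container's higher priority. */
    memory_region_init_io(&bar, owner, &stub_ops, NULL, "bar", 0x10000);
    memory_region_add_subregion(&container, 0x40000, &bar);
}

This is the same shape as a BAR container sitting on top of a background
region: the container's priority only matters where one of its own
subregions actually responds.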