From: Jesse Barnes <jesse.barnes@intel.com>
To: Adam Jackson <ajackson@redhat.com>
Cc: gregkh@suse.de, linux-kernel@vger.kernel.org
Subject: Re: PCI bridge range sizing bug
Date: Thu, 19 Apr 2007 16:11:50 -0700
Message-ID: <200704191611.52125.jesse.barnes@intel.com>
In-Reply-To: <1175812632.17147.12.camel@localhost.localdomain>
On Thursday, April 5, 2007 3:37 pm Adam Jackson wrote:
> So I'm attempting to do something fairly heinous (X server across
> five video cards), and I hit a fun bug in bridge range setup. See
> attached lspci and dmesg, but the short of it is I've got two VGA
> chips on one card behind a bridge, which is itself behind a second
> PCI bridge, and the bridge ranges get set up so that I can't map the
> ROMs, which means I can't POST them, and therefore can't use them,
> period.
>
> The alignment restriction on the ROMs seems a bit extreme:
>
> % sudo setpci -s 7:2 ROM_ADDRESS=ffffffff
> % sudo setpci -s 7:2 ROM_ADDRESS
> f0000001
>
> (same for 7:1) so that might be part of the problem.
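That f0000001 readback is actually the standard size probe doing its
job: bit 0 is the ROM enable bit, bits 31:11 are the address, and the
address bits that come back hardwired to zero give the size. Decoding
it (a quick userspace sketch of the probe arithmetic, not the kernel's
code path; PCI_ROM_ADDRESS_MASK here is just the usual bits-31:11
mask):

#include <stdint.h>
#include <stdio.h>

#define PCI_ROM_ADDRESS_MASK 0xfffff800u  /* bits 31:11 hold the address */

int main(void)
{
    uint32_t readback = 0xf0000001;  /* what setpci read back above */
    uint32_t size = ~(readback & PCI_ROM_ADDRESS_MASK) + 1;

    /* 0xf0000001 -> 0x10000000, i.e. a 256M size/alignment
     * requirement per ROM -- matching the "#6:10000000" in the
     * allocation failures further down. */
    printf("ROM size/alignment: 0x%08x (%u MB)\n", size, size >> 20);
    return 0;
}

So the alignment isn't extreme by accident: the device genuinely
claims a 256M ROM BAR.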
...
Allocating PCI resources starting at 88000000 (gap: 80000000:7ff00000)
...
That's ~2G of space, which should be plenty for your PCI resources, I
hope? If you have a bunch of cards with large BARs, though, you might
be running out.
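(The gap line is start:size in hex; decoding it as a quick sanity
check of that ~2G figure:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* dmesg prints the gap as start:size */
    uint64_t start = 0x80000000, size = 0x7ff00000;

    printf("gap: %llu MB at 0x%llx\n",
           (unsigned long long)(size >> 20),
           (unsigned long long)start);  /* ~2047 MB, i.e. ~2G */
    return 0;
}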
...
PCI: Bridge: 0000:00:01.0
IO window: 4000-4fff
MEM window: a3500000-a35fffff (1M)
PREFETCH window: 90000000-97ffffff
PCI: Bridge: 0000:00:03.0
IO window: disabled.
MEM window: a3400000-a34fffff (1M)
PREFETCH window: 98000000-9fffffff
PCI: Bridge: 0000:00:1c.0
IO window: disabled.
MEM window: a3300000-a33fffff (1M)
PREFETCH window: 80000000-8fffffff
PCI: Bridge: 0000:00:1c.4
IO window: 3000-3fff
MEM window: a3200000-a32fffff (1M)
PREFETCH window: a3700000-a37fffff
PCI: Bridge: 0000:00:1c.5
IO window: 2000-2fff
MEM window: a3100000-a31fffff (1M)
PREFETCH window: disabled.
PCI: Failed to allocate mem resource #6:10000000@b0000000 for 0000:07:01.0
PCI: Failed to allocate mem resource #6:10000000@b0000000 for 0000:07:02.0
...
Yep, looks like those two devices had a problem. Apparently they each
want 256M of space? Given that we're only giving each bridge 1M of
memory space, that would definitely be a problem.
The total so far is only 5M of non-prefetch PCI memory space... so
we're not making good use of the 2G we were given.
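To decode the failure: resource #6 is the expansion ROM, the size is
0x10000000 (256M), and the allocator tried to place it at b0000000.
Since a 256M resource also needs 256M alignment, it can never land
inside a 1M window. An illustrative fitting check (not the kernel's
allocator, just the rule it has to satisfy):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Can a child resource of the given size (and natural alignment)
 * fit inside a parent window?  Illustrative only. */
static bool window_fits(uint64_t win_start, uint64_t win_end, uint64_t size)
{
    uint64_t start = (win_start + size - 1) & ~(size - 1); /* align up */

    return start + size - 1 <= win_end;
}

int main(void)
{
    /* One of the 1M MEM windows above vs. the 256M ROM BAR: */
    printf("%d\n", window_fits(0xa3300000, 0xa33fffff, 0x10000000));
    /* Prints 0: a 256M resource can never fit in a 1M window.
     * Note the align-up lands at 0xb0000000, matching the address
     * in the failure messages above. */
    return 0;
}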
...
PCI: Bridge: 0000:06:00.0
IO window: disabled.
MEM window: a1000000-a2ffffff (32M)
PREFETCH window: disabled.
PCI: Bridge: 0000:00:1e.0
IO window: 1000-1fff
MEM window: a1000000-a30fffff (33M)
PREFETCH window: a0000000-a0ffffff
...
And these bridges got more space somehow... Greg, who's in charge of
our bridge resource allocation code?
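For contrast, those last two windows nest the way they should:
00:1e.0's MEM window fully contains 06:00.0's, which presumably sits
behind it. A quick containment check on the logged values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Windows from the log above (start, end). */
    uint64_t ps = 0xa1000000, pe = 0xa30fffff;  /* 00:1e.0 MEM */
    uint64_t cs = 0xa1000000, ce = 0xa2ffffff;  /* 06:00.0 MEM */

    printf("parent %llu MB, child %llu MB, contained: %d\n",
           (unsigned long long)((pe - ps + 1) >> 20),
           (unsigned long long)((ce - cs + 1) >> 20),
           ps <= cs && ce <= pe);
    /* -> parent 33 MB, child 32 MB, contained: 1 */
    return 0;
}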
Jesse