* IRQS on 6 Slot Macs
@ 2003-11-03 3:35 Robert E Brose II
2003-11-03 6:54 ` Benjamin Herrenschmidt
2003-11-03 11:30 ` Michel Dänzer
0 siblings, 2 replies; 28+ messages in thread
From: Robert E Brose II @ 2003-11-03 3:35 UTC (permalink / raw)
To: linuxppc-dev
In trying to get OpenGL working on a YDL 3.0 system with kernel 2.4.22-ben2
and XFree86-4.3.0-2.1e, I get a warning on X initialization:
(II) R128(0): [drm] failure adding irq handler, there is a device already using
that irq
Digging into it more, I'm having a hard time understanding why there is a
problem and how the interrupts are allocated on an S900 (w/G3 card).
With the following configuration:
Slot 1:
00:0d.0 VGA compatible controller: ATI Technologies Inc Rage 128 RE/SG (prog-if 00 [VGA])
Flags: bus master, stepping, medium devsel, latency 32, IRQ 23
Slot 2:
00:0e.0 Ethernet controller: Digital Equipment Corporation DECchip 21142/43 (rev 21)
Flags: bus master, medium devsel, latency 32, IRQ 24
Slot 3:
01:00.0 SCSI storage controller: LSI Logic / Symbios Logic 53c875 (rev 04)
Flags: bus master, medium devsel, latency 32, IRQ 25
Slot 4:
01:01.0 Communication controller: Comtrol Corporation RocketPort 8 Intf (rev 02)
Flags: medium devsel, IRQ 25
Slot 5:
01:02.0 FireWire (IEEE 1394): Texas Instruments FireWire Controller (rev 01) (prog-if 10 [OHCI])
Flags: bus master, medium devsel, latency 32, IRQ 25
Slot 6:
01:03.0 USB Controller: Lucent Microelectronics USS-312 USB Controller (rev 10) (prog-if 10 [OHCI])
Flags: bus master, medium devsel, latency 32, IRQ 25
and /proc/interrupts
2: 0 PMAC-PIC Edge MACE-txdma
3: 77087 PMAC-PIC Edge MACE-rxdma
8: 345 PMAC-PIC Edge Built-in Sound out
9: 0 PMAC-PIC Edge Built-in Sound in
13: 71 PMAC-PIC Edge MESH
14: 77840 PMAC-PIC Edge MACE
17: 0 PMAC-PIC Edge Built-in Sound misc
18: 4352 PMAC-PIC Edge ADB
19: 0 PMAC-PIC Edge SWIM3
24: 21259 PMAC-PIC Level eth0
25: 72922493 PMAC-PIC Level sym53c8xx, usb-ohci, ohci1394
A couple of things right off: the rage128 appears to be the only thing on
irq 23, however 23 doesn't show up in /proc/interrupts, meaning, I suppose,
that it's not using the interrupt. So why does the drm complain?
How come everything from slots 3-6 says it's on the same interrupt (25)?
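A rough sketch (not the actual r128 DRM code) of where a message like that typically comes from: in 2.4 a driver claims its line with request_irq(), and the call fails with -EBUSY when the line is already claimed and either the existing or the new registration lacks SA_SHIRQ. The device and function names below are invented for illustration.

/* Sketch only -- not the real r128 DRM code. */
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/interrupt.h>

struct fake_dev { int irq; };                   /* hypothetical device */

static void fake_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        /* check our device's status register, ack it, do the work */
}

static int fake_irq_install(struct fake_dev *dev)
{
        /* Unless both the existing and the new handler pass SA_SHIRQ,
         * a second request_irq() on the same line returns -EBUSY, and
         * the caller reports something like "there is a device already
         * using that irq". */
        int ret = request_irq(dev->irq, fake_interrupt, SA_SHIRQ,
                              "fake-device", dev);

        if (ret)
                printk(KERN_WARNING "fake: failure adding irq handler\n");
        return ret;
}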
Bob
--
/~\ The ASCII | Robert E. Brose II N0QBJ
\ / Ribbon Campaign | http://www.qbjnet.com/
X Help cure | mailto:bob@qbjnet.com
/ \ HTML Email | public key at http://www.qbjnet.com/key.html
* Re: IRQS on 6 Slot Macs
2003-11-03 3:35 IRQS on 6 Slot Macs Robert E Brose II
@ 2003-11-03 6:54 ` Benjamin Herrenschmidt
2003-11-03 14:11 ` Bob Brose
2003-11-04 8:55 ` Jeff Walther
2003-11-03 11:30 ` Michel Dänzer
1 sibling, 2 replies; 28+ messages in thread
From: Benjamin Herrenschmidt @ 2003-11-03 6:54 UTC (permalink / raw)
To: Robert E Brose II; +Cc: linuxppc-dev list
On Mon, 2003-11-03 at 14:35, Robert E Brose II wrote:
> A couple of things right off, the rage128 appears to be the only thing on
> irq 23 however 23 doesn't show up in /proc/interrupts meaning, I suppose,
> that it's not using the interrupt. So why does the drm complain?
>
> How come everything from slots 3-6 says it's on the same interrupt (25)?
I don't know what's up with the DRM not liking your interrupt. I can
answer for the sharing of USB, symbios and firewire interrupts: all 3
slots share one interrupt because of bad motherboard design :)
Basically, what they did when designing that machine was to use a
standard powersurge design with 3 slots and replace one of them
with a PCI<->PCI bridge. Since they didn't "know" how to get more
interrupt lines out of Grand Central, they just also stuffed all
interrupt lines together for those 4 slots (I'm pretty sure GC does
have spare lines they could have used, but that would have meant
updating Open Firmware to understand the layout, I doubt the people
who designed that machine wanted to dive into that).
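What that OR'ed wiring means for the drivers, as a hedged sketch with an invented register layout (not any particular driver): every handler registered on the shared line runs on each interrupt, and each one has to check whether its own card actually asserted the line before doing any work.

/* Sketch of one handler coexisting on a shared (OR'ed) line, 2.4 style;
 * the register layout here is hypothetical. */
#include <linux/types.h>
#include <linux/interrupt.h>
#include <asm/io.h>

#define TOY_STAT_PENDING 0x1

struct toy_dev { unsigned long regs; };         /* hypothetical device */

static void toy_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
        struct toy_dev *dev = dev_id;
        u32 status = readl(dev->regs);          /* hypothetical status reg */

        if (!(status & TOY_STAT_PENDING))
                return;                         /* some other slot 3-6 card fired */

        writel(TOY_STAT_PENDING, dev->regs);    /* ack our device only */
        /* ... service the device ... */
}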
Ben.
* Re: IRQS on 6 Slot Macs
2003-11-03 6:54 ` Benjamin Herrenschmidt
@ 2003-11-03 14:11 ` Bob Brose
2003-11-03 14:34 ` Geert Uytterhoeven
2003-11-03 21:40 ` Benjamin Herrenschmidt
2003-11-04 8:55 ` Jeff Walther
1 sibling, 2 replies; 28+ messages in thread
From: Bob Brose @ 2003-11-03 14:11 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: Robert E Brose II, linuxppc-dev list
User Benjamin Herrenschmidt says:
> On Mon, 2003-11-03 at 14:35, Robert E Brose II wrote:
>
> > A couple of things right off, the rage128 appears to be the only thing on
> > irq 23 however 23 doesn't show up in /proc/interrupts meaning, I suppose,
> > that it's not using the interrupt. So why does the drm complain?
> >
> > How come everything from slots 3-6 says it's on the same interrupt (25)?
>
> I don't know what's up with the DRM not liking your interrupt. I can
> answer for the sharing of USB, symbios and firewire interrupts: all 3
> slots share one interrupt because of bad motherboard design :)
> Basicallly, what they did when designing that machine was to use a
> standard powersurge design with 3 slots and replace one of them
> with a PCI<->PCI bridge. Since they didn't "know" how to get more
> interrupt lines out of Grand Central, they just also stuffed all
> interrupt lines together for those 4 slots (I'm pretty sure GC do
> have spare lines they could have used, but that would have meant
> updating Open Firmware to understand the layout, I doubt the people
> who designed that machine wanted to dive into that).
>
> Ben.
I suppose then that I should reorder the cards so that the ones generating
the most interrupts would be in the first 2 slots? It's pretty funny
having the possibility of using lots of irqs and then ending up with
x86-style sharing. :(
Bob
--
/~\ The ASCII | Robert E. Brose II N0QBJ
\ / Ribbon Campaign | http://www.qbjnet.com/
X Help cure | mailto:bob@qbjnet.com
/ \ HTML Email | public key at http://www.qbjnet.com/key.html
* Re: IRQS on 6 Slot Macs
2003-11-03 14:11 ` Bob Brose
@ 2003-11-03 14:34 ` Geert Uytterhoeven
2003-11-03 21:40 ` Benjamin Herrenschmidt
1 sibling, 0 replies; 28+ messages in thread
From: Geert Uytterhoeven @ 2003-11-03 14:34 UTC (permalink / raw)
To: Bob Brose; +Cc: Benjamin Herrenschmidt, linuxppc-dev list
On Mon, 3 Nov 2003, Bob Brose wrote:
> User Benjamin Herrenschmidt says:
> > On Mon, 2003-11-03 at 14:35, Robert E Brose II wrote:
> > > A couple of things right off, the rage128 appears to be the only thing on
> > > irq 23 however 23 doesn't show up in /proc/interrupts meaning, I suppose,
> > > that it's not using the interrupt. So why does the drm complain?
> > >
> > > How come everything from slots 3-6 says it's on the same interrupt (25)?
> >
> > I don't know what's up with the DRM not liking your interrupt. I can
> > answer for the sharing of USB, symbios and firewire interrupts: all 3
> > slots share one interrupt because of bad motherboard design :)
> > Basicallly, what they did when designing that machine was to use a
> > standard powersurge design with 3 slots and replace one of them
> > with a PCI<->PCI bridge. Since they didn't "know" how to get more
> > interrupt lines out of Grand Central, they just also stuffed all
> > interrupt lines together for those 4 slots (I'm pretty sure GC do
Tsss... They could at least have `swizzled' the lines...
> > have spare lines they could have used, but that would have meant
> > updating Open Firmware to understand the layout, I doubt the people
> > who designed that machine wanted to dive into that).
> >
> > Ben.
>
> I suppose then that I should reorder the cards so that the ones generating
> the most interrupts would be in the first 2 slots? It's pretty funny
> having the possibility of the use of lots of irqs then ending up with
> x86 style sharing. :(
Yes, try to spread out interrupts evenly across interrupt lines.
Furthermore, the presence of the bridge will probably incur a slight slowdown
for the last 3 slots, too. Don't know whether it's significant (should reread
PCI specs first).
Gr{oetje,eeting}s,
Geert
--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
* Re: IRQS on 6 Slot Macs
2003-11-03 14:11 ` Bob Brose
2003-11-03 14:34 ` Geert Uytterhoeven
@ 2003-11-03 21:40 ` Benjamin Herrenschmidt
1 sibling, 0 replies; 28+ messages in thread
From: Benjamin Herrenschmidt @ 2003-11-03 21:40 UTC (permalink / raw)
To: Bob Brose; +Cc: linuxppc-dev list
> I suppose then that I should reorder the cards so that the ones generating
> the most interrupts would be in the first 2 slots? It's pretty funny
> having the possibility of the use of lots of irqs then ending up with
> x86 style sharing. :(
Yup :(
Ben.
* Re: IRQS on 6 Slot Macs
2003-11-03 6:54 ` Benjamin Herrenschmidt
2003-11-03 14:11 ` Bob Brose
@ 2003-11-04 8:55 ` Jeff Walther
2003-11-04 9:14 ` Benjamin Herrenschmidt
1 sibling, 1 reply; 28+ messages in thread
From: Jeff Walther @ 2003-11-04 8:55 UTC (permalink / raw)
To: linuxppc-dev list
At 17:54 +1100 11/03/2003, Benjamin Herrenschmidt wrote:
>On Mon, 2003-11-03 at 14:35, Robert E Brose II wrote:
>
>> A couple of things right off, the rage128 appears to be the only thing on
>> irq 23 however 23 doesn't show up in /proc/interrupts meaning, I suppose,
>> that it's not using the interrupt. So why does the drm complain?
>>
>> How come everything from slots 3-6 says it's on the same interrupt (25)?
>
>I don't know what's up with the DRM not liking your interrupt. I can
>answer for the sharing of USB, symbios and firewire interrupts: all 3
>slots share one interrupt because of bad motherboard design :)
I'm not sure it was bad motherboard design--at least, not unless
following specifications leads to bad motherboard design. I can't
find the reference now, but I could swear that I read that the proper
procedure when implementing a PCI-PCI Bridge is to tie the
subordinate slot's interrupts into the interrupt for the host slot.
The firmware for the host machine is supposed to be able to sort this
out, if written properly.
After all, one can, in theory, add 1024 PCI slots to a machine using
PPBs. There aren't going to be 1024 interrupts available. The
specification for PCI-PCI Bridges had to have some more general
method of handling interrupts for slots behind a PPB, and tying them
to the host slot interrupt makes the most sense.
All that said, the firmware for the x500 and x600 Macs is not written
properly, at least with respect to implementing PCI-PCI Bridges.
>Basicallly, what they did when designing that machine was to use a
>standard powersurge design with 3 slots and replace one of them
>with a PCI<->PCI bridge. Since they didn't "know" how to get more
>interrupt lines out of Grand Central, they just also stuffed all
>interrupt lines together for those 4 slots (I'm pretty sure GC do
>have spare lines they could have used,
The interrupts for the slots (in the 9500/9600) go to the following pins on GC:
Slot # GC pin #
1 193
2 194
3 189
4 188
5 173
6 174
On the S900 (and J700) the interrupts for slots 3 through 6 are tied
to pin 189.
Slots 1 through 3 are also correct for all other PowerSurge machines.
I would love to know if there are other unused interrupts available,
as the PowerSurge architecture supposedly can support up to four
Bandit chips, but as far as I know, if one constructed such a beast,
there'd be no interrupts available for any PCI slots beyond six.
This seems to be borne out (limited interrupts available) by the
gymnastics they went through to arrange the interrupts in the Apple
Network Server, which has six PCI slots, but also four built-in PCI
devices (including Grand Central) on the motherboard. They didn't
use any previously unused interrupts on GC in the ANS, they just
rearranged and combined the interrupts used in the 9500.
However, I can't help but wonder if all that lovely video circuitry
on the 7500 and 8500 requires any interrupts and if so, where they
come from. Do they recycle the interrupts for slots 4-6 or are
there other interrupts available on GC besides the ones for the six
slots?
>but that would have meant
>updating Open Firmware to understand the layout, I doubt the people
>who designed that machine wanted to dive into that).
At 14:16 +1100 11/04/2003, Benjamin Herrenschmidt wrote:
>Probably because updating Apple's 1.0.5 OF code to assign them properly
>with the P2P setup was beyond their ability to deal with crappy code :)
What is P2P setup?
The cloners just soldered Apple ROMs down in their machines. The
ROM/firmware used in the PowerComputing and Umax machines (except
PowerBase and C series) was the same ROM/firmware used in the x500
series of machines--the $77D.28F2. This ROM was used in the 7200,
7500, 8500, 9500, all of PCC's Catalyst clones, PowerWave, PowerTower
Pro, S900 and the J700.
I'm not sure if the cloners' license even allowed them to modify the
ROM. The chips are labeled with Apple part numbers and Apple
markings, so I think they really did purchase them from Apple, rather
than licensing their production.
Anyway, the point being that even if Umax had wanted to go that route
(rewrite/modify/fix OF 1.05), I'm not sure it was technically
feasible under the licensing agreement. Is hacking the interrupt
assignment for the PCI slots the kind of thing one could squeeze into
the NVRAMrc?
Jeff Walther
* Re: IRQS on 6 Slot Macs
2003-11-04 8:55 ` Jeff Walther
@ 2003-11-04 9:14 ` Benjamin Herrenschmidt
[not found] ` <a04310102bbcda40d92fc@[199.170.89.159]>
0 siblings, 1 reply; 28+ messages in thread
From: Benjamin Herrenschmidt @ 2003-11-04 9:14 UTC (permalink / raw)
To: Jeff Walther; +Cc: linuxppc-dev list
> I'm not sure it was bad motherboard design--at least, not unless
> following specifications leads to bad motherboard design. I can't
> find the reference now, but I could swear that I read that the proper
> procedure when implementing a PCI-PCI Bridge is to tie the
> subordinate slot's interrupts into the interrupt for the host slot.
> The firmware for the host machine is supposed to be able to sort this
> out, if written properly.
Hrm... nah nah nah :) If you tie them together, you get one
interrupt shared for all in the end... what you can do on
platforms with an irq router like x86 is to make use of the
4 different irq lines of the bridge and route them to the
sub-slots, but on macs, like on lots of other platforms, the
4 lines are just or-ed together.
Anyway, such a rule really only applies when you design a PCI
card with a P2P bridge on it. As long as you are on the
motherboard, you do what you want.
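The `swizzle' Geert mentioned is the usual way to get distinct lines when you do want them: a card's INTA#..INTD# pin behind a bridge is rotated by its device number, so neighbouring slots land on different parent lines instead of piling onto one. A small self-contained sketch of that convention (essentially the rule Linux's generic PCI code uses):

/* Conventional P2P-bridge interrupt swizzle: the interrupt pin
 * (1 = INTA# ... 4 = INTD#) of a device behind the bridge is rotated
 * by its device number on the way up to the parent bus. */
#include <stdio.h>

static unsigned int swizzle_pin(unsigned int slot, unsigned int pin)
{
        return ((pin - 1 + slot) % 4) + 1;      /* pin seen on the parent bus */
}

int main(void)
{
        unsigned int slot;

        for (slot = 0; slot < 4; slot++)
                printf("device %u, INTA# -> parent INT%c#\n",
                       slot, 'A' + (int)swizzle_pin(slot, 1) - 1);
        return 0;
}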
> After all, one can, in theory, add 1024 PCI slots to a machine using
> PPBs. There aren't going to be 1024 interrupts available.
Why ? Some iSeries have up to 2048 irq lines afaik ;)
> The
> specification for PCI-PCI Bridges had to have some more general
> method of handling interrupts for slots behind a PPB, and tieing them
> to the host slot interrupt makes the most sense.
No. The P2P specification provides nothing for interrupts, just
a generic "guideline" that you may or may not follow depending on
what you are designing. The way you route a motherboard interrupt
line (regardless of it being on a bridge or not) is rather a matter
of common sense.
> All that said, the firmware for the x500 and x600 Macs is not written
> properly, at least with respect to implementing PCI-PCI Bridges.
Well... Again, do not mix what happens on the mobo and what happens
in slots. Indeed, the x500 and x600 machines have an OF bug that
causes it not to properly assign the shared irq line to the child
devices, but that's really only a concern for _slots_.
> >Basicallly, what they did when designing that machine was to use a
> >standard powersurge design with 3 slots and replace one of them
> >with a PCI<->PCI bridge. Since they didn't "know" how to get more
> >interrupt lines out of Grand Central, they just also stuffed all
> >interrupt lines together for those 4 slots (I'm pretty sure GC do
> >have spare lines they could have used,
>
> The interrupts for the slots (in the 9500/9600) go to the following pins on GC:
>
> Slot # GC pin #
> 1 193
> 2 194
> 3 189
> 4 188
> 5 173
> 6 174
>
> On the S900 (and J700) the interrupts for slots 3 through 6 are tied
> to pin 189.
Yup. My point is that those could have been dispatched to spare GC pins.
> Slots 1 through 3 are also correct for all other PowerSurge machines.
> I would love to know if there are other unused interrupts available,
> as the PowerSurge architecture supposedly can support up to four
> Bandit chips, but as far as I know, if one constructed such a beast,
> there'd be no interrupts available for any PCI slots beyond six.
I don't know how much exactly GC provides. It has a single mask
register of 32 interrupts, so if you count all the GC internal ones,
that still leaves a few of them I believe... You'd need the pinout
of GC, I don't have it (maybe you do ? :) I'm interested in any spec
for these old chipsets...)
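For what it's worth, a toy model of what a 32-source controller with a single enable mask implies (the names and numbers below are invented, not Apple documentation): pending sources are ANDed with the mask and dispatched by bit number, so every device, built-in or slot, has to fit into those 32 sources or share one.

/* Self-contained toy model of a 32-source PIC with one enable mask. */
#include <stdio.h>
#include <stdint.h>

static uint32_t pic_events;             /* pending sources */
static uint32_t pic_enable;             /* the single 32-bit enable mask */

static void pic_dispatch(void)
{
        uint32_t pending = pic_events & pic_enable;
        int src;

        for (src = 0; src < 32; src++)
                if (pending & (UINT32_C(1) << src))
                        /* every handler hooked on this source runs; with
                         * slots 3-6 wired together they all hang off one */
                        printf("dispatch source %d\n", src);
}

int main(void)
{
        pic_enable = UINT32_C(1) << 25;  /* the shared slot line from above */
        pic_events = UINT32_C(1) << 25;  /* any of the four cards asserts it */
        pic_dispatch();
        return 0;
}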
> This seems to be born out (limited interrupts available) by the
> gymnastics they went through to arrange the interrupts in the Apple
> Network Server, which has six PCI slots, but also four built-in PCI
> devices (including Grand Central) on the motherboard. They didn't
> use any previously unused interrupts on GC in the ANS, they just
> rearranged and combined the interrupts used in the 9500.
Yup. Still... it would have made a lot of sense for the S900 designers
to actually route the additional slot interrupts to separate GC
interrupt pins. The main problem with that would have been the need to
"teach" Apple's OF about the binding, which of course would have been
a total mess....
> However, I can't help but wonder if all that lovely video circuitry
> on the 7500 and 8500 requires any interrupts and if so, where they
> come from. Do they recycle the interrupts for slots 4 -6 or are
> there other interrupts available on GC besides the ones for the six
> slots?
Maybe compare the interrupt numbers? I don't have my data at hand
but that should give you an idea of who goes where. IIRC, some Mklinux
source (or maybe it's early darwin source) had a map of all the irqs
of GC as well.
> >but that would have meant
> >updating Open Firmware to understand the layout, I doubt the people
> >who designed that machine wanted to dive into that).
>
> At 14:16 +1100 11/04/2003, Benjamin Herrenschmidt wrote:
> >Probably because updating Apple's 1.0.5 OF code to assign them properly
> >with the P2P setup was beyond their ability to deal with crappy code :)
>
> What is P2P setup?
PCI 2 PCI bridge setup.
> The cloners just soldered Apple ROMs down in their machines. The
> ROM/firmware used in the PowerComputing and Umax machines (except
> PowerBase and C series) was the same ROM/firmware used in the x500
> series of machines--the $77D.28F2. This ROM was used in the 7200,
> 7500, 8500, 9500, all of PCC's Catalyst clones, PowerWave, PowerTower
> Pro, S900 and the J700.
Yup. That is part of the problem.
> I'm not sure if the cloner's license even allowed them to modify the
> ROM. The chips are labeled with Apple part numbers and Apple
> markings, so I think they really did purchase them from Apple, rather
> than licensing their production.
Yah, though they probably could have made some small change to OF
to deal with that issue, or have an nvramrc patch (ugh!) at worst.
> Anyway, the point being that even if Umax had wanted to go that route
> (rewrite/modify/fix OF 1.05), I'm not sure it was technically
> feasible under the licensing agreement. Is hacking the interrupt
> assignment for the PCI slots the kind of thing one could squeeze into
> the NVRAMrc?
Yes, that's doable. A bit more tricky: they could have put a routing
circuit optionally OR'ing them all together. By default, the machine
boots with them all OR'ed. If the nvramrc script (or whatever other
possible software patch) doesn't load, they stay that way. The software
patch toggles an IO line disabling that OR'ing after patching either the
device-tree (nvramrc patch) or whatever MacOS used for routing.
Probably doable with a few gates, or bits of a PLD if any was already
there.
Ben.
* Re: IRQS on 6 Slot Macs
2003-11-03 3:35 IRQS on 6 Slot Macs Robert E Brose II
2003-11-03 6:54 ` Benjamin Herrenschmidt
@ 2003-11-03 11:30 ` Michel Dänzer
2003-11-03 14:40 ` Robert E. Brose II
1 sibling, 1 reply; 28+ messages in thread
From: Michel Dänzer @ 2003-11-03 11:30 UTC (permalink / raw)
To: Robert E Brose II; +Cc: linuxppc-dev
On Mon, 2003-11-03 at 04:35, Robert E Brose II wrote:
> In trying to get OpenGL working on a YDL 3.0 system with kernel 2.4.22-ben2
> and XFree86-4.3.0-2.1e, I get a warning on X initialization:
>
> (II) R128(0): [drm] failure adding irq handler, there is a device already using
> that irq
Not sure what's up with that (depending on how old the DRM is, the error
could be misleading though), but this shouldn't prevent the DRI from
working.
--
Earthling Michel Dänzer | Debian (powerpc), X and DRI developer
Software libre enthusiast | http://svcs.affero.net/rm.php?r=daenzer
* Re: IRQS on 6 Slot Macs
2003-11-03 11:30 ` Michel Dänzer
@ 2003-11-03 14:40 ` Robert E. Brose II
2003-11-03 14:54 ` Michel Dänzer
0 siblings, 1 reply; 28+ messages in thread
From: Robert E. Brose II @ 2003-11-03 14:40 UTC (permalink / raw)
To: Michel Dänzer; +Cc: Robert E Brose II, linuxppc-dev
Michel Dänzer wrote:
> On Mon, 2003-11-03 at 04:35, Robert E Brose II wrote:
>
>>In trying to get OpenGL working on a YDL 3.0 system with kernel 2.4.22-ben2
>>and XFree86-4.3.0-2.1e, I get a warning on X initialization:
>>
>>(II) R128(0): [drm] failure adding irq handler, there is a device already using
>>that irq
>
>
> Not sure what's up with that (depending on how old the DRM is, the error
> could be misleading though), but this shouldn't prevent the DRI from
> working.
The reason I was wondering about the IRQ is I've been trying to get 3d
stuff (like tuxracer) to run and I get the error message:
*** tuxracer error: Couldn't initialize video: X11 driver not configured
with OpenGL (Success)
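That error has the shape of tuxracer printing SDL_GetError() after a failed video-mode setup, and the "X11 driver not configured with OpenGL" part usually points at an SDL library built without OpenGL support rather than at the kernel or the DRM. A minimal SDL 1.2 sketch of that call path (not tuxracer's actual code):

/* Minimal SDL 1.2 sketch, not tuxracer's code: SDL_SetVideoMode() with
 * SDL_OPENGL fails and the program reports SDL_GetError(). */
#include <stdio.h>
#include <SDL/SDL.h>

int main(void)
{
        if (SDL_Init(SDL_INIT_VIDEO) < 0) {
                fprintf(stderr, "Couldn't initialize SDL: %s\n", SDL_GetError());
                return 1;
        }

        /* If libSDL itself was built without OpenGL support, this fails
         * with "X11 driver not configured with OpenGL" even though
         * glxgears and direct rendering work fine. */
        if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL) {
                fprintf(stderr, "Couldn't initialize video: %s\n", SDL_GetError());
                SDL_Quit();
                return 1;
        }

        SDL_Quit();
        return 0;
}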
Now the glxgears program runs at about 400 fps (1024x768x24, 400 MHz G3),
but I can't run tuxracer compiled by me or the stock ydl3.0 one. Other 3d
stuff I compile fails as well with the same message. I'm pretty sure I
have the X and kernel configuration right (similar settings work on an
x86 box with a 3dfx card):
Load "GLcore"
Load "dbe"
Load "extmod"
Load "fbdevhw"
Load "dri"
Load "glx"
Load "record"
Load "freetype"
Load "type1"
I'm building the kernel with 4.1 rage 128 drm support, the r128
module is loaded, and there are no errors on X startup other than that
IRQ error (which is also noted in dmesg when the r128 kernel module
loads).
glxinfo also looks ok to me, so I'm having a hard time figuring out the
problem.
root# glxinfo
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.2
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
client glx vendor string: SGI
client glx version string: 1.2
client glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_import_context
OpenGL vendor string: VA Linux Systems, Inc.
OpenGL renderer string: Mesa DRI Rage128 20020221 AGP 1x
OpenGL version string: 1.2 Mesa 4.0.4
OpenGL extensions:
GL_ARB_imaging, GL_ARB_multitexture, GL_ARB_texture_env_add,
GL_ARB_transpose_matrix, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_clip_volume_hint,
GL_EXT_convolution, GL_EXT_compiled_vertex_array, GL_EXT_histogram,
GL_EXT_packed_pixels, GL_EXT_polygon_offset, GL_EXT_rescale_normal,
GL_EXT_texture3D, GL_EXT_texture_env_add, GL_EXT_texture_object,
GL_EXT_vertex_array, GL_IBM_rasterpos_clip, GL_MESA_window_pos,
GL_NV_texgen_reflection, GL_SGI_color_matrix, GL_SGI_color_table
glu version: 1.3
glu extensions:
GLU_EXT_nurbs_tessellator, GLU_EXT_object_space_tess
Any more suggestions are appreciated!
Thanks,
Bob
* Re: IRQS on 6 Slot Macs
@ 2003-11-03 21:30 Mich Lanners
2003-11-04 3:16 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 28+ messages in thread
From: Mich Lanners @ 2003-11-03 21:30 UTC (permalink / raw)
To: linuxppc-dev
On 3 Nov, this message from Geert Uytterhoeven echoed through
cyberspace:
>> > I don't know what's up with the DRM not liking your interrupt. I
>> > can answer for the sharing of USB, symbios and firewire interrupts:
>> > all 3 slots share one interrupt because of bad motherboard design
>> > :) Basicallly, what they did when designing that machine was to use
>> > a standard powersurge design with 3 slots and replace one of them
>> > with a PCI<->PCI bridge. Since they didn't "know" how to get more
>> > interrupt lines out of Grand Central, they just also stuffed all
>> > interrupt lines together for those 4 slots (I'm pretty sure GC do
>
> Tsss... They could at least have `swizzled' the lines...
Sure, but it's indeed hard to understand why they didn't use the
other lines on GrandCentral, since the 9500/9600 design includes six PCI
slots, each with a dedicated IRQ line. Weird...
Michel
-------------------------------------------------------------------------
Michel Lanners | " Read Philosophy. Study Art.
23, Rue Paul Henkes | Ask Questions. Make Mistakes.
L-1710 Luxembourg |
email mlan@cpu.lu |
http://www.cpu.lu/~mlan | Learn Always. "
* Re: IRQS on 6 Slot Macs
2003-11-03 21:30 Mich Lanners
@ 2003-11-04 3:16 ` Benjamin Herrenschmidt
0 siblings, 0 replies; 28+ messages in thread
From: Benjamin Herrenschmidt @ 2003-11-04 3:16 UTC (permalink / raw)
To: Mich Lanners; +Cc: linuxppc-dev list
> Sure, but it's indeed totally not understandable why they didn't use the
> other lines on GrandCentral, since the 9500/9600 design includes six PCI
> slots with each a dedicated IRQ line. Weird...
Probably because updating Apple's 1.0.5 OF code to assign them properly
with the P2P setup was beyond their ability to deal with crappy code :)
Ben.
* Re: IRQS on 6 Slot Macs
@ 2003-11-04 20:27 Mich Lanners
2003-11-04 21:53 ` Jeff Walther
0 siblings, 1 reply; 28+ messages in thread
From: Mich Lanners @ 2003-11-04 20:27 UTC (permalink / raw)
To: trag; +Cc: linuxppc-dev
On 4 Nov, this message from Jeff Walther echoed through cyberspace:
> I'm not sure it was bad motherboard design--at least, not unless
> following specifications leads to bad motherboard design. I can't
> find the reference now, but I could swear that I read that the proper
> procedure when implementing a PCI-PCI Bridge is to tie the
> subordinate slot's interrupts into the interrupt for the host slot.
> The firmware for the host machine is supposed to be able to sort this
> out, if written properly.
>
> After all, one can, in theory, add 1024 PCI slots to a machine using
> PPBs. There aren't going to be 1024 interrupts available. The
> specification for PCI-PCI Bridges had to have some more general
> method of handling interrupts for slots behind a PPB, and tieing them
> to the host slot interrupt makes the most sense.
>
> All that said, the firmware for the x500 and x600 Macs is not written
> properly, at least with respect to implementing PCI-PCI Bridges.
>
>>Basicallly, what they did when designing that machine was to use a
>>standard powersurge design with 3 slots and replace one of them
>>with a PCI<->PCI bridge. Since they didn't "know" how to get more
>>interrupt lines out of Grand Central, they just also stuffed all
>>interrupt lines together for those 4 slots (I'm pretty sure GC do
>>have spare lines they could have used,
>
> The interrupts for the slots (in the 9500/9600) go to the following
> pins on GC:
>
> Slot # GC pin #
> 1 193
> 2 194
> 3 189
> 4 188
> 5 173
> 6 174
>
> On the S900 (and J700) the interrupts for slots 3 through 6 are tied
> to pin 189.
>
> Slots 1 through 3 are also correct for all other PowerSurge machines.
> I would love to know if there are other unused interrupts available,
> as the PowerSurge architecture supposedly can support up to four
> Bandit chips, but as far as I know, if one constructed such a beast,
> there'd be no interrupts available for any PCI slots beyond six.
>
> This seems to be born out (limited interrupts available) by the
> gymnastics they went through to arrange the interrupts in the Apple
> Network Server, which has six PCI slots, but also four built-in PCI
> devices (including Grand Central) on the motherboard. They didn't
> use any previously unused interrupts on GC in the ANS, they just
> rearranged and combined the interrupts used in the 9500.
>
> However, I can't help but wonder if all that lovely video circuitry
> on the 7500 and 8500 requires any interrupts and if so, where they
> come from. Do they recycle the interrupts for slots 4 -6 or are
> there other interrupts available on GC besides the ones for the six
> slots?
Here is my all-slots-filled config on a 7600:
00:0b.0 Host bridge: Apple Computer Inc. Bandit PowerPC host bridge (rev 03)
Flags: bus master, medium devsel, latency 32, IRQ 22
00:0d.0 Unknown mass storage controller: Promise Technology, Inc. 20262 (rev 01)
Flags: bus master, medium devsel, latency 32, IRQ 23
00:0e.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139 (rev 10)
Flags: bus master, medium devsel, latency 32, IRQ 24
00:0f.0 VGA compatible controller: Matrox Graphics, Inc. MGA 2064W [Millennium] (rev 01) (prog-if 00 [VGA])
Flags: stepping, medium devsel, IRQ 25
00:10.0 Class ff00: Apple Computer Inc. Grand Central I/O (rev 02)
Flags: bus master, medium devsel, latency 32, IRQ 22
01:0b.0 Non-VGA unclassified device: Apple Computer Inc. Control Video
Flags: fast devsel, IRQ 26
01:0d.0 Class ff00: Apple Computer Inc. PlanB Video-In (rev 01)
Flags: bus master, medium devsel, latency 32, IRQ 28
So yes, video-in and graphics have separate interrupts. But those could
be the same IRQs as used on slots 4-6 on 9x00 machines. AFAIK, there
have been no machines with video circuitry _and_ 6 PCI slots.
I have no info about video-in; but I think I remember some site
somewhere with all possible device trees online. You might find more
info there...
Cheers
Michel
-------------------------------------------------------------------------
Michel Lanners | " Read Philosophy. Study Art.
23, Rue Paul Henkes | Ask Questions. Make Mistakes.
L-1710 Luxembourg |
email mlan@cpu.lu |
http://www.cpu.lu/~mlan | Learn Always. "
* Re: IRQS on 6 Slot Macs
2003-11-04 20:27 Mich Lanners
@ 2003-11-04 21:53 ` Jeff Walther
0 siblings, 0 replies; 28+ messages in thread
From: Jeff Walther @ 2003-11-04 21:53 UTC (permalink / raw)
To: linuxppc-dev
At 21:27 +0100 11/04/2003, Mich Lanners wrote:
>> However, I can't help but wonder if all that lovely video circuitry
>> on the 7500 and 8500 requires any interrupts and if so, where they
>> come from. Do they recycle the interrupts for slots 4 -6 or are
>> there other interrupts available on GC besides the ones for the six
>> slots?
>
>Here is my all-slots-filled config on a 7600:
>
>00:0b.0 Host bridge: Apple Computer Inc. Bandit PowerPC host bridge (rev 03)
> Flags: bus master, medium devsel, latency 32, IRQ 22
>00:0d.0 Unknown mass storage controller: Promise Technology, Inc.
>20262 (rev 01)
> Flags: bus master, medium devsel, latency 32, IRQ 23
>00:0e.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139 (rev 10)
> Flags: bus master, medium devsel, latency 32, IRQ 24
>00:0f.0 VGA compatible controller: Matrox Graphics, Inc. MGA 2064W
>[Millennium] (rev 01) (prog-if 00 [VGA])
> Flags: stepping, medium devsel, IRQ 25
>00:10.0 Class ff00: Apple Computer Inc. Grand Central I/O (rev 02)
> Flags: bus master, medium devsel, latency 32, IRQ 22
>01:0b.0 Non-VGA unclassified device: Apple Computer Inc. Control Video
> Flags: fast devsel, IRQ 26
>01:0d.0 Class ff00: Apple Computer Inc. PlanB Video-In (rev 01)
> Flags: bus master, medium devsel, latency 32, IRQ 28
>
>So yes, video-in and graphics have separate interrupts. But those could
>be the same IRQ's as used on slots 4-6 on 9x00 machines. AFAIK, there
>have been no machines with video circuitry _and_ 6 PCI slots.
Thank you, Michel. That will get me quite a ways along on that inquiry.
>I have no info about video-in; but I think I remember some site
>somewhere with all possible device trees online. You might find more
>info there...
I have that site bookmarked somewhere--at least I think it's the same
site. I just haven't checked it in a while. However, I think that
the device tree only generates a listing if there is a PCI card
installed in the slot, and as I recall, most of the machines listed
only had one or two PCI cards installed. Still, it's worth a look to
me.
I have a hardware project in mind, but am not actively working on it.
So, when discussion that touches on it comes up, I'll pop up for a
bit to gather/share what info I can, but until I move this to the
front of my hobby queue I won't make a lot of progress.
The thing I'm interested in is building a PowerSurge with three or
four Bandits. The architecture supports it, according to Apple
docs, but the big question is whether the actual implementation of
chips supports the full architecture or just a subset. For example,
one would need twelve interrupts for PCI slots (at least) with four
Bandits. Are there six more external interrupts available on GC?
And how could we ever figure that one out, unless someone sneaks into
a vault at Apple? (Or one could build that hypothetical machine with
6 PCI slots and built-in video which you mentioned above.)
Another essential question is whether there are sufficient
arbitration lines on Hammerhead to support three or four Bandits.
And the thing that usually kills my ambition to make this my primary
hobby at the moment is the firmware mods it would take. I'm into
hardware and not much of a programmer.
As to how I would build such a machine, if Apple's chips supported
it, the S900 has a second CPU slot on the motherboard. However,
that slot could be used for a CPU Bus expansion card, because almost
all the required signals are available there. So, one could put
two or three Bandits on a daughter card, install the card in the
secondary CPU slot on the S900 and run cables from the daughter card
to the PCI backplanes for the additional Bandits. Actually, it
would be fairly trivial to build a daughter card with a single Bandit
on it for the S900, adding three PCI slots....One need only copy
connections from the 9500 in order to do that.
Jeff Walther
* Re: IRQS on 6 Slot Macs
@ 2003-11-04 21:29 Jeff Walther
2003-11-04 22:30 ` Geert Uytterhoeven
0 siblings, 1 reply; 28+ messages in thread
From: Jeff Walther @ 2003-11-04 21:29 UTC (permalink / raw)
To: linuxppc-dev list
At 20:14 +1100 11/04/2003, Benjamin Herrenschmidt wrote:
>> I can't
>> find the reference now, but I could swear that I read that the proper
>> procedure when implementing a PCI-PCI Bridge is to tie the
>> subordinate slot's interrupts into the interrupt for the host slot.
>Hrm... nah nah nah :) If you tie them together, you get one
>interrupt shared for all in the end... what you can do on
>platforms with an irq router like x86 is to make use of the
>4 different irq lines of the bridge and route them to the
>sub-slots, but on macs, like on lots of othre platforms, the
>4 lines are just or-ed together.
Ah, my memory is working this morning (okay, afternoon). The
reference I couldn't find was Apple's Tech Note TN_1135, which is an
Apple reference, not a PCI specification reference. My apologies
for any confusion I may have caused anyone.
>Anyway, such a rule really only apply when you design a PCI
>card with a P2P bridge on it. As long as you are on the
>motherboard, you do what you want.
Right, because with a card, one only has access to what's in the slot
but on the motherboard one can wire to anything present. Thank you.
Brain is feeling much more limber now.
>> After all, one can, in theory, add 1024 PCI slots to a machine using
>> PPBs. There aren't going to be 1024 interrupts available.
>
>Why ? Some iSeries have up to 2048 irq lines afaik ;)
By iSeries, do you mean iBook and iMac? Or is that something else?
I'm still kind of living in the PowerSurge world, with occasional
forays up to Beige G3. :-)
>> The
>> specification for PCI-PCI Bridges had to have some more general
>> method of handling interrupts for slots behind a PPB, and tieing them
>> to the host slot interrupt makes the most sense.
>
>No. The P2P specification provides nothing for interrupts, just
>a generic "guideline" that you may or may not follow depending on
>what you are designing. The way you route a motherboard interrupt
>line (regardless of it beeing on a bridge or not) is a matter
>of commen sense rather.
Right. My mistake for misremembering an Apple note as part of the PCI spec.
>> All that said, the firmware for the x500 and x600 Macs is not written
>> properly, at least with respect to implementing PCI-PCI Bridges.
>
>Well... Again, do not mix what happens on the mobo and what happens
>in slots. Indeed, the x500 and x600 machines have an OF bug that
>cause it to not properly assign the shared irq line to the child
>devices, but that's really only a concern for _slots_.
I first became aware of the bug I've observed because of the S900.
The PowerSurge machines have a problem with more than one level of
PPB. If you put a PPB-bearing card in one of the lower four slots of the
S900, you've created two layers of PPBs. With a few exceptions
(like leaving all the other lower slots empty) this causes the
machine to freeze during initialization at the gray screen.
>> >Basicallly, what they did when designing that machine was to use a
>> >standard powersurge design with 3 slots and replace one of them
>> >with a PCI<->PCI bridge. Since they didn't "know" how to get more
>> >interrupt lines out of Grand Central, they just also stuffed all
>> >interrupt lines together for those 4 slots (I'm pretty sure GC do
>> >have spare lines they could have used,
>I don't know how much exactly GC provides. It has a single mask
>register of 32 interrupts, so if you count all the GC internal ones,
>that still leaves a few of them I beleive... You'd need the pinout
>of GC, I don't have it (maybe you do ? :) I'm interested in any spec
>for these old chipsets...)
Unfortunately, no, I don't have the GC pinout. All the pinout
information I have on the PowerSurge chipset, I've gotten by starting
with the PCI slot pinout and the PPC601 pinout and working backwards
to the various chips. I wish, wish, wish that I had access to
Apple's documents on that chipset. Oh, and the CPU slot listing in
the ANS Hardware Developer Notes helped too, because the ANS used the
same chipset, so some of the pin IDs can be found by working
backwards to Hammerhead on the ANS.
Anyway, the listing of slot interrupt pins on GC was all I have
except that pin 61 is GNT and pin 62 is REQ for Grand Central's
arbitration as a device on the PCI bus. It will be fairly easy to
identify the PCI bus pins on GC. I just haven't done it yet and
that's probably the least interesting component of GC's pinout.
I think I can find GC's connection to MESH as well, but I'm
uncertain. I believe that MESH is just a licensed NEC (now LSI
Logic) 53CF96 and I have the pinout for the 53CF96 so tracing
backwards from that wouldn't be tough, if that's true. I need to
solder a 53CF96 in place of a MESH some time and see if it works....
I do have the complete CPU slot pinout and a mostly complete Bandit
pinout (two or three pins uncertain). But I think I've emailed
those to the list before, so you probably have them. If not, let me
know and I'll shoot you a copy.
>> This seems to be born out (limited interrupts available) by the
>> gymnastics they went through to arrange the interrupts in the Apple
>> Network Server, which has six PCI slots, but also four built-in PCI
>> devices (including Grand Central) on the motherboard. They didn't
>> use any previously unused interrupts on GC in the ANS, they just
>> rearranged and combined the interrupts used in the 9500.
>
>Yup. Still... it would have made a lot of sense for the S900 designers
>to actually route the additional slot interrupts to separate GC
>interrupt pins. The main problem with that would have been the need to
>"teach" Apple's OF about the binding, which of course would have been
>a total mess....
You've convinced me. I agree that would have been better. It's
kind of sad, because I worked with a fellow who interned at Umax's
Mac cloning hardware group, and he said they were engaging in a
pretty intricate firmware development effort for PREP (or was it
CHRP?) before things got shut down. If they put that much effort
into the next generation, they were willing to spend the same type of
resources it would have taken to hack the earlier OF. But it was a
different deal as far as the licensing issues go.
>> However, I can't help but wonder if all that lovely video circuitry
>> on the 7500 and 8500 requires any interrupts and if so, where they
>> come from. Do they recycle the interrupts for slots 4 -6 or are
>> there other interrupts available on GC besides the ones for the six
>> slots?
>
>Maybe compare the interrupt numbers ? I don't have my datas at hand
>but that should give you an idea of who goes where. IIRC, some Mklinux
>source (or maybe it's early darwin source) had a map of all the irqs
>of GC as well.
Now that's interesting. I wonder how they figured them out. I
wonder how hard it's going to be to find...
> A bit more tricky, they could have put a routing
>circuit optionally or'ing them all together. By default, the machine
>boots with them all or'ed. If the nvramrc script (or whatever other
>possible software patch) doesn't load, they stay that way. The software
>patch ticks an IO disabling that OR'ing after patching either the
>device-tree (nvramrc patch) or whatever MacOS used for routing.
>
>Probably doable with a few gates, or bits of a PLD if any was already
>there.
There is no PLD between the PCI interrupt pins and GC on the S900
board. But, since Umax designed the board, they certainly could have
added one. They already did that quirky E100 hack to PCI slot 1.
I haven't dug into that, but I'd like to know how they made that
work. Does an Enet card require an interrupt? Does it ever master
the PCI bus?
Somehow they convinced the machine that there's another PCI slot
there when the E100 (Mercury) card is installed. It's a combined UW
SCSI and 10/100 Enet, for those not familiar with it, and slot 1 on the
S900 has an extender to provide additional signals to the E100 card.
The E100 gets dual functionality without a PPB. I haven't traced the
connections, but my assumption is that the two chipsets on the card
are sharing the common PCI bus lines in the slot, and that the
extender on slot 1 provides the slot specific PCI signals to one of
the two chipsets on the card. But that might include an additional
interrupt and additional bus arbitration (does Enet bus master?).
And somehow the extension for the E100 card tells the S900 firmware
that there's an extra PCI slot called E100.
I read something about a legacy device left in the firmware called
E100, and I think Umax took advantage of that somehow, but I never
really understood what I read, and my memory is vague.
Jeff Walther
* Re: IRQS on 6 Slot Macs
2003-11-04 21:29 Jeff Walther
@ 2003-11-04 22:30 ` Geert Uytterhoeven
2003-11-05 1:48 ` Jeff Walther
0 siblings, 1 reply; 28+ messages in thread
From: Geert Uytterhoeven @ 2003-11-04 22:30 UTC (permalink / raw)
To: Jeff Walther; +Cc: linuxppc-dev list
On Tue, 4 Nov 2003, Jeff Walther wrote:
> At 20:14 +1100 11/04/2003, Benjamin Herrenschmidt wrote:
> >> After all, one can, in theory, add 1024 PCI slots to a machine using
> >> PPBs. There aren't going to be 1024 interrupts available.
> >
> >Why ? Some iSeries have up to 2048 irq lines afaik ;)
>
> By iSeries, do you mean iBook and iMac? Or is that something else?
iSeries are the big IBM boxes formerly known as AS/400.
> I think I can find GC's connection to MESH as well, but I'm
> uncertain. I believe that MESH is just a licensed NEC (now LSI
^^^
> Logic) 53CF96 and I have the pinout for the 53CF96 so tracing
NCR, I assume?
> I haven't dug into that, but I'd like to know how they made that
> work. Does an Enet card require an interrupt? Does it ever master
> the PCI bus?
Ethernet prefers to have an interrupt, else it becomes quite slow :-)
Most modern Ethernet interfaces do bus mastering. Old NE2000 clones don't.
Gr{oetje,eeting}s,
Geert
--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
* Re: IRQS on 6 Slot Macs
2003-11-04 22:30 ` Geert Uytterhoeven
@ 2003-11-05 1:48 ` Jeff Walther
0 siblings, 0 replies; 28+ messages in thread
From: Jeff Walther @ 2003-11-05 1:48 UTC (permalink / raw)
To: linuxppc-dev list
At 23:30 +0100 11/04/2003, Geert Uytterhoeven wrote:
>On Tue, 4 Nov 2003, Jeff Walther wrote:
>> At 20:14 +1100 11/04/2003, Benjamin Herrenschmidt wrote:
>> >> After all, one can, in theory, add 1024 PCI slots to a machine using
>> >> PPBs. There aren't going to be 1024 interrupts available.
>> >
>> >Why ? Some iSeries have up to 2048 irq lines afaik ;)
>>
>> By iSeries, do you mean iBook and iMac? Or is that something else?
>
>iSeries are the big IBM boxes formerly known as AS/400.
How do they provide that many IRQ lines? I'm assuming that there
can't be one wire/pin per IRQ because of the routing/packaging
problems for the traces and chips. Do they do something like an
address bus for IRQs?
>
>> I think I can find GC's connection to MESH as well, but I'm
>> uncertain. I believe that MESH is just a licensed NEC (now LSI
> ^^^
>> Logic) 53CF96 and I have the pinout for the 53CF96 so tracing
>
>NCR, I assume?
Derrrrrrr. Yes. Must. Find. New. Brain.
NCR spun off their semiconductor business as SYMBIOS which was later
acquired by LSI Logic.
>> I haven't dug into that, but I'd like to know how they made that
>> work. Does an Enet card require an interrupt? Does it ever master
>> the PCI bus?
>
>Ethernet prefers to have an interrupt, else it becomes quite slow :-)
>Most modern Ethernet interfaces do bus mastering. Old NE2000 clones don't.
Thank you. Then one would expect to find an interrupt and PCI bus
arbitration lines in the extender on PCI slot 1 of the S900. The
PCI bus arbitration lines are simple. The arbiter that Apple used
has something like six GNT and REQ lines available, so there are
spares of those, but I wonder where Umax got the interrupt, unless
it's sharing one with the UW SCSI portion of the card and the drivers
take that into account.
Jeff Walther