public inbox for linux-kernel@vger.kernel.org
* question about PCI setup with multiple CPUs on the PCI bus(es)
@ 2004-01-29 17:24 Chris Friesen
  2004-01-30 14:27 ` Adrian Cox
  0 siblings, 1 reply; 2+ messages in thread
From: Chris Friesen @ 2004-01-29 17:24 UTC (permalink / raw)
  To: linux-kernel, mj


We have an interesting scenario that's causing us some headaches, and 
before we go and reinvent the wheel, I've been asked to find out whether 
anyone else has had to do something similar.

We have a main board with a processor on it and a number of PCI buses 
connected via bridges, with various devices on the buses.  There are a 
number of PMC slots on two of the buses, to which are connected PMC 
processor boards, each of which has a CPU, memory, various devices, and 
a PCI bridge.

The problem we are running into is as follows:
1) the main board boots up, enumerates and configures the PCI device 
space, and boots the daughterboards
2) the daughterboards boot up, enumerate and (re)configure the PCI 
device space (differently from how the mainboard CPU did), and screw 
everything up

We changed the kernel on the daughterboards to not touch PCI at all, and 
everything worked fine.  However, one of the daughterboards (which is on 
its own PCI bus, separate from the others) needs to control two PCI devices.

We tried to modify the PCI code to just discover what was in PCI space 
without configuring any of it.  However, in doing so it still 
reprogrammed the PCI bridges, wrecking the configuration that the CPU on 
the main board expected.

Surely we aren't the only people who want to put multiple CPUs on a 
single PCI space.  How have people handled this in the past?  Ideally 
what I'm looking for is a CONFIG_NO_MANGLE_PCI or something to that 
effect.  As a last resort we are considering hardcoding the bus/device 
topology for the two drivers on the special daughterboard, but this 
seems really kludgy.

Anyone have any advice?

Chris

-- 
Chris Friesen                    | MailStop: 043/33/F10
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com


^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: question about PCI setup with multiple CPUs on the PCI bus(es)
  2004-01-29 17:24 question about PCI setup with multiple CPUs on the PCI bus(es) Chris Friesen
@ 2004-01-30 14:27 ` Adrian Cox
  0 siblings, 0 replies; 2+ messages in thread
From: Adrian Cox @ 2004-01-30 14:27 UTC (permalink / raw)
  To: Chris Friesen; +Cc: linux-kernel, mj

On Thu, 2004-01-29 at 17:24, Chris Friesen wrote:
> Surely we aren't the only people who want to put multiple CPUs on a 
> single PCI space.  How have people handled this in the past?  Ideally 
> what I'm looking for is a CONFIG_NO_MANGLE_PCI or something to that 
> effect.  As a last resort we are considering hardcoding the bus/device 
> topology for the two drivers on the special daughterboard, but this 
> seems really kludgy.
> 
> Anyone have any advice?

Having done this a few times before, the basic advice is to design with
a non-transparent bridge, such as an Intel 2155x or a PLX 6254/6540.
That's too late to save you, so you'll need a nasty hack instead.

Faced with your situation, I dealt with it by declaring one processor to
be the root of the PCI bus, and having a task on that processor read the
bus address of each PCI device and pass those bus addresses to the
processors that needed them. Only the root processor ever issued
configuration cycles.

- Adrian Cox
http://www.humboldt.co.uk/




end of thread, other threads:[~2004-01-30 14:27 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-01-29 17:24 question about PCI setup with multiple CPUs on the PCI bus(es) Chris Friesen
2004-01-30 14:27 ` Adrian Cox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox