* ACPI vs PCI: configuration space
@ 2003-06-18 22:17 Matthew Wilcox
From: Matthew Wilcox @ 2003-06-18 22:17 UTC (permalink / raw)
To: Greg KH, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
acpi-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
OK, I've been bashing my head against this for two days,
it's time to get more eyes on the problem. The problem is in
acpi_os_read_pci_configuration() and acpi_os_write_pci_configuration().
These functions can be called for busses which have not yet been scanned
(and therefore do not have a corresponding pci_bus). The code in 2.5
doesn't work for ia64 because we need a valid ->sysdata pointer to handle
PCI domains.
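To make that concrete, here's a small userspace mock (struct layouts and
function bodies are invented for illustration; only the names come from this
thread). On ia64 the low-level read has to go through bus->sysdata to find
the segment, so a stack-allocated pci_bus with no sysdata can't work:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct pci_controller { int segment; };        /* ia64's sysdata type */
struct pci_bus { int number; void *sysdata; };

/* Mock of an ia64 config-space read: the PCI domain (segment) lives
 * behind bus->sysdata, so a bus without sysdata can't be addressed. */
static int ia64_pci_read(struct pci_bus *bus, unsigned int devfn,
			 int where, int size, uint32_t *val)
{
	struct pci_controller *ctrl = bus->sysdata;

	(void)devfn; (void)where; (void)size;
	if (!ctrl)
		return -1;	/* no sysdata: can't pick a domain */
	*val = (uint32_t)ctrl->segment;	/* fake read: echo the segment */
	return 0;
}
```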
The current patch in the ia64 tree does this:
struct pci_bus bus;
+#ifdef CONFIG_IA64
+ struct pci_controller ctrl;
+#endif
...
bus.number = pci_id->bus;
+#ifdef CONFIG_IA64
+ ctrl.segment = pci_id->segment;
+ bus.sysdata = &ctrl;
+#endif
result = pci_root_ops->read(&bus, PCI_DEVFN(pci_id->device,
I think we can all agree that's ugly. But there's no _clear_ way to
improve this.
Try 1: Ask the architecture code to provide a sysdata for us. That's bad:
the allocation must use GFP_ATOMIC (this code can be called from interrupt
context), so it can fail under low-memory conditions.
Try 2: Define a `struct pci_controller' on architectures that don't have one,
plus a pci_set_domain() macro so x86 can get away with a zero-length
pci_controller. Not the prettiest idea, but the best of this batch.
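Roughly, Try 2 might look like this (pci_set_domain() and the empty struct
are hypothetical; zero-length structs are a GCC extension):

```c
#ifdef CONFIG_IA64
struct pci_controller { int segment; };
#define pci_set_domain(ctrl, seg) ((ctrl)->segment = (seg))
#else
/* Domain-less arches (e.g. x86) pay nothing: a zero-length struct
 * (GCC extension) and a no-op macro. */
struct pci_controller { };
#define pci_set_domain(ctrl, seg) do { (void)(ctrl); (void)(seg); } while (0)
#endif
```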
Try 3: Move the segment/domain into the pci_bus. This already got NAKed
by a few people.
Try 4: redefine the pci_ops again. Haha, very funny.
It's kind of annoying to invent some structures and put some values into
them only to pull them out again. This leads to try 5 ...
struct acpi_pci_ops {
	int (*read)(int domain, int bus, int devfn, int where, int size, u32 *val);
	int (*write)(int domain, int bus, int devfn, int where, int size, u32 val);
};
It reduces stack consumption, which is a clear win ... it's also _incredibly_
easy to implement since all the existing pci_ops call functions which take
exactly this form.
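A hedged userspace mock of how Try 5 plugs together (the conf1-style backend
and the fake config space are invented; only struct acpi_pci_ops comes from
this mail):

```c
#include <stdint.h>

struct acpi_pci_ops {
	int (*read)(int domain, int bus, int devfn, int where, int size, uint32_t *val);
	int (*write)(int domain, int bus, int devfn, int where, int size, uint32_t val);
};

static uint32_t fake_cfg_space[256];	/* pretend config space */

/* Stand-ins for the real low-level routines; only the signature
 * shape matches what the mail describes. */
static int pci_conf1_read(int domain, int bus, int devfn, int where,
			  int size, uint32_t *val)
{
	(void)domain; (void)bus; (void)devfn; (void)size;
	*val = fake_cfg_space[where & 0xff];
	return 0;
}

static int pci_conf1_write(int domain, int bus, int devfn, int where,
			   int size, uint32_t val)
{
	(void)domain; (void)bus; (void)devfn; (void)size;
	fake_cfg_space[where & 0xff] = val;
	return 0;
}

static struct acpi_pci_ops mock_ops = {
	.read  = pci_conf1_read,
	.write = pci_conf1_write,
};
```

Note there is no fake pci_bus or pci_controller anywhere: the domain travels
as a plain int, which is the whole point.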
Go on, approve Try 5. You know you want to ;-)
--
"It's not Hollywood. War is real, war is primarily not about defeat or
victory, it is about death. I've seen thousands and thousands of dead bodies.
Do you think I want to have an academic debate on this subject?" -- Robert Fisk
* Re: ACPI vs PCI: configuration space
From: Greg KH @ 2003-06-18 22:30 UTC (permalink / raw)
To: Matthew Wilcox
Cc: linux-ia64-u79uwXL29TY76Z2rM5mHXA,
acpi-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Wed, Jun 18, 2003 at 11:17:52PM +0100, Matthew Wilcox wrote:
>
> It's kind of annoying to invent some structures and put some values into
> them only to pull them out again. This leads to try 5 ...
>
> struct acpi_pci_ops {
> int (*read)(int domain, int bus, int devfn, int where, int size, u32 *val);
> int (*write)(int domain, int bus, int devfn, int where, int size, u32 val);
> };
>
> It reduces stack consumption, which is a clear win ... it's also _incredibly_
> easy to implement since all the existing pci_ops call functions which take
> exactly this form.
>
> Go on, approve Try 5. You know you want to ;-)
So for i386, what would domain be?
Anyway, yeah, I agree with try 5, that seems the most sane.
thanks,
greg k-h
* Re: ACPI vs PCI: configuration space
From: Matthew Wilcox @ 2003-06-18 23:00 UTC (permalink / raw)
To: Greg KH
Cc: Matthew Wilcox, linux-ia64-u79uwXL29TY76Z2rM5mHXA,
acpi-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Wed, Jun 18, 2003 at 03:30:03PM -0700, Greg KH wrote:
> So for i386, what would domain be?
The `seg' argument already passed to all these macros ;-)
> Anyway, yeah, I agree with try 5, that seems the most sane.
Yay. You'll have a patch tomorrow.
--
"It's not Hollywood. War is real, war is primarily not about defeat or
victory, it is about death. I've seen thousands and thousands of dead bodies.
Do you think I want to have an academic debate on this subject?" -- Robert Fisk