From: Martyn Welch <martyn.welch@gefanuc.com>
To: "Emilio G. Cota" <cota@braap.org>
Cc: Greg K-H <gregkh@suse.de>,
linux-kernel@vger.kernel.org, devel@driverdev.osuosl.org,
Sebastien Dugue <sebastien.dugue@bull.net>
Subject: Re: [patch 1/5] Staging: VME Framework for the Linux Kernel
Date: Mon, 10 Aug 2009 16:53:39 +0100
Message-ID: <4A804283.5090009@gefanuc.com>
In-Reply-To: <20090810141442.GA18456@braap.org>

Emilio G. Cota wrote:
> Martyn Welch wrote:
>
>> Emilio G. Cota wrote:
>>
>>> - semaphores? isn't it screaming for mutexes?
>>>
>> The semaphores are initialized in mutex mode.
>>
>
> I know, but then please use mutexes.
>
>
I'm clearly missing something here - can you provide me with an example?
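If you simply mean moving from semaphores initialized in mutex mode to
struct mutex, I'd guess something like this minimal sketch (against the
current in-kernel locking API; names illustrative):

```c
#include <linux/semaphore.h>
#include <linux/mutex.h>

/* Before: a counting semaphore initialised in mutex mode */
static struct semaphore sem;

static void old_style(void)
{
	init_MUTEX(&sem);	/* semaphore with count 1 */
	down(&sem);
	/* ... critical section ... */
	up(&sem);
}

/* After: the dedicated mutex primitive, which gets lockdep
 * validation and owner debugging for free */
static DEFINE_MUTEX(lock);

static void new_style(void)
{
	mutex_lock(&lock);
	/* ... critical section ... */
	mutex_unlock(&lock);
}
```

Is that the extent of it, or is there more to the change?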
>>> - some function signatures (get_whatever) pass too many
>>> pointers; it's rather ugly. Passing a pointer to an
>>> appropriate struct seems a better solution.
>>>
>>>
>> When implementing these functions it seemed like a simpler option to
>> pass the parameters rather than defining yet another structure and using
>> it for a single call. If the *_get functions are changed then the *_set
>> functions should also be changed for consistency. I'm not overly
>> bothered either way - if the consensus is that a structure would be
>> better then we can go with a structure.
>>
>
> Yeah, in my opinion a structure there would be prettier. However,
> let's put this off for now; I'm currently re-working this area.
>
>
>>> - vme_alloc_consistent looks pretty fishy; why don't you pass
>>> that responsibility to the specific master? There you obviously
>>> know if you're bridging over PCI or whatever. Or even better;
>>> why is this needed at all?
>>>
>>>
>> In the model I have chosen it is up to the VME device driver to create
>> buffers rather than the VME bridge drivers. After all it is the device
>> drivers for the specific hardware found on the VME bus that knows what
>> it is going to do with these buffers. This method is provided as a
>> helper. It ensures that a VME device driver (not to be confused with the
>> VME bridge driver) is able to allocate a buffer suitable for utilizing
>> the DMA engines found in the bridge or indeed to be mapped onto VME
>> though a slave window without explicitly knowing which VME bridge is
>> being used.
>>
>
> hmm I see. The way you provide coherent DMA mappings for VME is through
> providing coherent (consistent) mappings for PCI, and then mapping the
> VME space onto those. I haven't tested if this works, but sounds
> reasonable.
>
>> Master windows don't require a contiguous buffer, are you referring to
>> something else when you say master?
>>
> I was thinking of the hypothetical case where you'd have a bridge not
> over PCI; but anyway since quite a few things seem tied to PCI this
> is fine.
>
That's one reason for the helper function. If VME bridges are added
which sit on other buses, then vme_alloc_consistent can be extended to
support this without requiring VME device drivers to be altered to
reflect the fact that pci_alloc_consistent might not work for all VME
bridges.
>>> - Please explain me what all the DMA functions do; are they
>>> meant to be used by master or slaves?
>>>
>>>
>> The TODO file (drivers/staging/vme/TODO) contains a description of the
>> current API including the DMA functions, does that provide enough of a
>> description?
>>
>
> I've had a closer look; it seems to me that most of it is unnecessary;
> there's no need to show those lists to a driver. I'd just provide a
> single 'do_dma(attributes)' call that sleeps until it's done (or similar).
>
>
The DMA controller in the tsi-148 can do PCI to PCI; PCI to VME; VME to
PCI; VME to VME; Patterns to VME and Patterns to PCI transfers. How do
you specify all those options without providing a structure where over
50% of the fields aren't used for any given transfer?
Every bridge I have seen is capable of link-list execution. The API
provides the ability to do scatter-gather style DMA transfers; this
could be used as part of a "zero-copy" type driver for high-speed VME
capture devices. How would you support link-list execution with a
single call?
I was also looking at the ability to queue DMA transfers if the
controller was currently busy - if the consensus is that this is
overkill I will happily remove this.
>>> Have a look at the interface for slaves we've got for our tsi148
>>> driver, available at:
>>> http://repo.or.cz/w/tsi148vmebridge.git
>>>
>>> /* API for new drivers */
>>> extern int vme_request_irq(unsigned int, int (*)(void *),
>>> void *, const char *);
>>>
> [snip]
>
>>> extern int vme_bus_error_check(int);
>>>
>>> That's pretty thin and it covers our slaves' needs. Do you see
>>> anything missing there?
>>>
>>>
>> This interface, especially struct vme_mapping, seems to have been written
>> solely for the tsi-148 bridge. The API I have defined aims to provide a
>> consistent API across multiple VME bridges. Whilst this API clearly
>> works for you with the tsi-148 I am unsure how suitable it will be for
>> other bridge chips.
>>
>> The API I have proposed is designed to support more than just the tsi148
>> chipset. Have you thought about how the above API will be supported on
>> other VME bridges?
>>
>
> We only have access to tsi148 chips. Since apparently there aren't
> many bridges around, I think it is saner to just provide an interface
> that works well with the devices we have access to (i.e. tsi148 and
> Tundra Universe in your case), and when new chips come along, we simply
> modify the interface as needed. Of course this doesn't mean we shouldn't
> try to make it generic, but within reason.
>
>
Agreed - which I believe is the approach I've taken.
>>> For masters there's no interface there because it was the
>>> master's driver who directly provided these calls to slaves.
>>> I had it in my to-do list to split that from the tsi148, in
>>> the same fashion as you've done with this work.
>>>
>>> - Note above the interrupt handler; simply needs the cookie. Also,
>>> shouldn't your vme_request_irq() just require the IRQ vector?
>>>
>>>
>> No. I believe that you are using the term IRQ vector where we would use
>> the term status ID, the value which is returned from an IACK cycle. Your
>> interrupt handling code assigns a single interrupt handler to all
>> interrupt levels, purely using the interrupt vector/status ID to
>> determine which interrupt handler will be used. This adds an artificial
>> limitation and would not work in some instances that we have seen. Our
>> framework provides the ability to attach an interrupt handler to each
>> combination of IRQ level and Status ID/vector.
>>
> Fair enough. For sanity we tend not to share IRQ vectors among
> different modules, but yes, the interface should know about this.
>
>
>>> - I'd like to see the whole picture, or 'vertical slice', i.e.
>>> the bus interface + a master + a slave driver. How would
>>> the slave's driver get the addresses and sizes of the mappings,
>>> interrupt lines, etc. for each of the devices it controls?
>>> For the time being we quickly hacked an xml-based scheme to get
>>> this info upon installation, but it's clearly not suitable
>>> for mainline.
>>>
>>>
>> I have written a test driver for a very old slave card we had lying
>> around. In that case we used module parameters to determine the location
>> of the device on the bus (it used a fixed VME address space). In this
>> instance the interrupt level and ID could be configured in the registers
>> available in the VME address space, hence I added module parameters to
>> allow these to be configured. In this respect configuration of VME
>> devices is very similar to ISA devices - neither bus has supported a
>> discovery mechanism from the outset, so old cards cannot be probed. I
>> therefore implemented a mechanism similar to the one I believe ISA
>> drivers use.
>>
>> The framework that I have proposed aims to provide a consistent API,
>> allowing the resources provided by the VME bridges to be managed in
>> as consistent a manner as possible. I believe it is up to the device
>> drivers for the devices found on the VME bus to determine the best
>> way to configure this, as it is not provided by the VME
>> specifications.
>>
>
> The problem is how to manage several (say 10) devices that have to be
> controlled with the same driver; passing parameters upon loading the
> driver's module doesn't seem to scale beyond one device..
I implemented it using a module parameter array. I agree that the
arguments might get quite long with 10 devices, especially if multiple
parameters are required, but I don't see why it shouldn't work.
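For reference, the parameter-array approach looks roughly like this; the
parameter names and the MAX_BOARDS limit are hypothetical, not what the
test driver actually uses:

```c
#include <linux/module.h>

/* One array slot per board, e.g.:
 *   insmod mydrv.ko base=0x100000,0x200000 level=3,4 */
#define MAX_BOARDS 10

static unsigned long base[MAX_BOARDS];
static int num_base;
module_param_array(base, ulong, &num_base, 0444);
MODULE_PARM_DESC(base, "VME base address of each board");

static int level[MAX_BOARDS];
static int num_level;
module_param_array(level, int, &num_level, 0444);
MODULE_PARM_DESC(level, "VME interrupt level of each board");
```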
> At the moment I see no clean way of doing this; right now we're doing
> it through an IOCTL, copying from user-space a tree that describes the
> devices to install, namely mappings and IRQs for each of them.
>
> Greg, any ideas on how to deal with this?
>
>
>> For using the VME bus for communication between two SBCs, I think we
>> could use the CRCSR space to provide some information about the
>> resources used on each board for, say, a virtual network driver like
>> the one the rapidIO subsystem provides. Possibly using the "device
>> tree" stuff used
>> on PowerPC, Blackfin and Sparc archs (I believe) for passing device
>> layout between the firmware and kernel at boot.
>>
>
> hmm I don't see how we could implement this common consistent view of
> the bus in a way that could work with every bridge. For the time
> being I'd defer this and simply pass that responsibility to user-space.
>
>
That's the conclusion I sort of came to. I guess how the VME address
space(s) are used is a system integration issue.
>> Sure, np,
>>
>> Martyn
>>
>
> Thanks,
> E.
>
--
Martyn Welch MEng MPhil MIET (Principal Software Engineer) T:+44(0)1327322748
GE Fanuc Intelligent Platforms Ltd, |Registered in England and Wales
Tove Valley Business Park, Towcester, |(3828642) at 100 Barbirolli Square,
Northants, NN12 6PF, UK T:+44(0)1327359444 |Manchester,M2 3AB VAT:GB 927559189