* [RFQ] New driver architecture questions
@ 2009-05-14 1:57 Mukker, Atul
2009-05-14 2:44 ` Jeff Garzik
0 siblings, 1 reply; 14+ messages in thread
From: Mukker, Atul @ 2009-05-14 1:57 UTC (permalink / raw)
To: linux-kernel@vger.kernel.org; +Cc: Austria, Winston, linux-scsi@vger.kernel.org
Hello Kernel experts out there!
We (a division of LSI Corp.) are planning to initiate a new driver design for future generation of LSI RAID controllers. The new class of RAID controllers would be supported under various operating systems in addition to Linux.
As part of the revamping exercise, we would like to design the driver in such a fashion that much of the driver source code can be made common across drivers offered for various operating systems.
The obvious benefits being:
1. Reduction of feature disparity across various operating systems.
2. Increased customer satisfaction in terms of support consistency across all available operating systems.
3. More synergy between the driver team members and increased collaboration.
4. Decreased overheads in terms of maintenance, fixing issues across the board, defect and change requests tracking etc.
5. More channelized test engineering.
etc.
Our concern is: would such a design be acceptable to the Linux community, given that we intend to keep the model open source?
Insights and feedback would be highly appreciated!
Best regards,
Atul Mukker
Staff Engineer,
LSI Corp.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [RFQ] New driver architecture questions
2009-05-14 1:57 [RFQ] New driver architecture questions Mukker, Atul
@ 2009-05-14 2:44 ` Jeff Garzik
2009-05-14 3:07 ` Mukker, Atul
0 siblings, 1 reply; 14+ messages in thread
From: Jeff Garzik @ 2009-05-14 2:44 UTC (permalink / raw)
To: Mukker, Atul
Cc: linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
Mukker, Atul wrote:
> Hello Kernel experts out there!
>
>
>
> We (a division of LSI Corp.) are planning to initiate a new driver design for future generation of LSI RAID controllers. The new class of RAID controllers would be supported under various operating systems in addition to Linux.
>
> As part of the revamping exercise, we would like to design the driver in such a fashion that much of the driver source code can be made common across drivers offered for various operating systems.
>
> The obvious benefits being:
>
> 1. Reduction of feature disparity across various operating systems.
>
> 2. Increased customer satisfaction in terms of support consistency across all available operating systems.
>
> 3. More synergy between the driver team members and increased collaboration.
>
> 4. Decreased overheads in terms of maintenance, fixing issues across the board, defect and change requests tracking etc.
>
> 5. More channelized test engineering.
If the design is done right, everybody wins, absolutely. Intel has done
this in the past by making their hw-specific modules generic enough to
be used across multiple operating systems, while not compromising the
Linux API guarantees. See drivers/net/e1000e etc.
We hope, though, that design mistakes from the past can be avoided. In
the past, when hardware vendors have created a cross-OS _layer_ for
their drivers, that layer wound up decreasing performance, increasing
code size, introducing bugs, and decreasing overall portability.
In the past, cross-OS driver layers, developed by hardware vendors for
specific drivers, have:
1. Increased feature disparity between Linux drivers for hardware w/
similar capabilities.
2. Decreased customer satisfaction in terms of support consistency
across all Linux drivers.
3. Decreased synergy and collaboration across Linux drivers and Linux
engineers, due to increased driver differences.
4. Increased overhead in terms of maintenance, support, and fixing
issues across multiple Linux drivers, due to wider gulf between Linux
drivers.
5. Decreased total amount of testing and testing breadth, because less
shared code means more code to test, and less focused testing.
So it is clear from past experience that the /wrong/ design can hurt
Linux customers, and it is also clear that the /right/ design, e.g. like
Intel's network drivers, can be made cross-OS without impacting
performance, portability or bug hunting.
Jeff
* RE: [RFQ] New driver architecture questions
2009-05-14 2:44 ` Jeff Garzik
@ 2009-05-14 3:07 ` Mukker, Atul
2009-05-14 4:16 ` Jeff Garzik
0 siblings, 1 reply; 14+ messages in thread
From: Mukker, Atul @ 2009-05-14 3:07 UTC (permalink / raw)
To: Jeff Garzik
Cc: linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
Interesting answer :-)
But it definitely makes a few things clear:
1. The possibility definitely exists, if done right. We will review Intel's code and try to use it as a reference.
2. The earlier the code is made public, the more likely it is to stay on the "right" track.
Are there known pitfalls we should guard against? Why is your focus on Linux "drivers"? Do you expect more than one?
Thanks
Atul
________________________________________
From: Jeff Garzik [jeff@garzik.org]
Sent: Wednesday, May 13, 2009 10:44 PM
To: Mukker, Atul
Cc: linux-kernel@vger.kernel.org; Austria, Winston; linux-scsi@vger.kernel.org
Subject: Re: [RFQ] New driver architecture questions
Mukker, Atul wrote:
> Hello Kernel experts out there!
>
>
>
> We (a division of LSI Corp.) are planning to initiate a new driver design for future generation of LSI RAID controllers. The new class of RAID controllers would be supported under various operating systems in addition to Linux.
>
> As part of the revamping exercise, we would like to design the driver in such a fashion that much of the driver source code can be made common across drivers offered for various operating systems.
>
> The obvious benefits being:
>
> 1. Reduction of feature disparity across various operating systems.
>
> 2. Increased customer satisfaction in terms of support consistency across all available operating systems.
>
> 3. More synergy between the driver team members and increased collaboration.
>
> 4. Decreased overheads in terms of maintenance, fixing issues across the board, defect and change requests tracking etc.
>
> 5. More channelized test engineering.
If the design is done right, everybody wins, absolutely. Intel has done
this in the past by making their hw-specific modules generic enough to
be used across multiple operating systems, while not compromising the
Linux API guarantees. See drivers/net/e1000e etc.
We hope, though, that design mistakes from the past can be avoided. In
the past, when hardware vendors have created a cross-OS _layer_ for
their drivers, that layer wound up decreasing performance, increasing
code size, introducing bugs, and decreasing overall portability.
In the past, cross OS driver layers, developed by hardware vendors for
specific drivers, have
1. Increased feature disparity between Linux drivers for hardware w/
similar capabilities.
2. Decreased customer satisfaction in terms of support consistency
across all Linux drivers.
3. Decreased synergy and collaboration across Linux drivers and Linux
engineers, due to increased driver differences.
4. Increased overhead in terms of maintenance, support, and fixing
issues across multiple Linux drivers, due to wider gulf between Linux
drivers.
> 5. Decreased total amount of testing and testing breadth, because less
> shared code means more code to test, and less focused testing.
So it is clear from past experience that the /wrong/ design can hurt
Linux customers, and it is also clear that the /right/ design, e.g. like
Intel's network drivers, can be made cross-OS without impacting
performance, portability or bug hunting.
Jeff
* Re: [RFQ] New driver architecture questions
2009-05-14 3:07 ` Mukker, Atul
@ 2009-05-14 4:16 ` Jeff Garzik
2009-05-14 8:51 ` Boaz Harrosh
2009-05-15 0:58 ` adam radford
0 siblings, 2 replies; 14+ messages in thread
From: Jeff Garzik @ 2009-05-14 4:16 UTC (permalink / raw)
To: Mukker, Atul
Cc: linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
Mukker, Atul wrote:
> Interesting answer :-)
>
> But it definitely makes a few things clear:
>
> 1. The possibility definitely exists, if done right. We will review Intel's code and try to use it as a reference.
> 2. The earlier the code is made public, the more likely it is to stay on the "right" track.
Agreed!
> Are there known pitfalls we should guard against? Why is your focus on Linux "drivers"? Do you expect more than one?
Good questions :) I use "drivers", plural, to illustrate how Linux
maintainers attempt to take a whole-system approach to driver evaluation.
We have to consider the user experience, support and maintenance of
multiple Linux drivers from multiple hardware vendors.
To pick an easy example in my area of expertise, every major vendor of
[typically non-firmware-based] SATA controllers that I deal with, such
as Intel, NVIDIA, Silicon Image, Promise and Marvell, ship a Windows
driver that includes software code in the OS driver for
* supporting their hardware controller
* implementing software RAID levels 0, 1, and 5
This is fine because the hardware vendor is only concerned with their
own hardware.
However, in Linux, we aim to maintain a consistent level of support
_across_ multiple hardware vendors. This is the reason why the same
driver, drivers/ata/ahci.c, is used for AHCI controllers from
- Intel
- NVIDIA
- ULi
- SiS
- VIA
- JMicron
- Marvell
- ACard/Artop
When a bug is fixed in the ahci.c driver, _all_ customers benefit from
this bug fix. When a new feature is added, _all_ customers benefit from
a new feature.
Of course, if there is an NVIDIA-specific hardware feature, that does
not apply to other hardware vendors, that is welcomed! It is placed in
an NVIDIA-specific driver module.
To pick another example, cross-OS layers from hardware vendor A, created
in the past, have included workarounds for errata in system platforms
from hardware vendor B. In Linux, we typically put system workarounds
in drivers/pci/quirks.c or arch/* so that the workaround is applied to
all _systems_ that need it. (Of course, if the erratum is truly specific
only to A+B, then yes, the workaround should generally be in A's driver.)
Additionally, minimizing duplicate code across hardware vendors
MAXIMIZES TESTING across all Linux drivers.
In Linux, when there is a change to software RAID-5, it is instantly
tested and verified across multiple hardware vendors, on multiple system
architectures and technologies.
So, what does this mean for LSI? In my humble opinion :)
1) A driver should be modular, in order to properly separate out
hardware-specific and OS-specific pieces. Taking drivers/net/e1000e as
an example,
hw.h hardware-specific defines, ~cross-OS
82571.c code specific to 8257x chip family, ~cross-OS
ich8lan.c code specific to ICH8+ chip family, ~cross-OS
netdev.c core driver code, Linux-specific
A key engineering task is decomposing the driver into fine-grained,
OS-specific OR hardware-specific operations.
Avoid large amounts of C pre-processor wrappers, and maximize use of
native C types and enums.
2) Highly standardized, not-specific-to-LSI-hardware routines such as
SAS discovery or software RAID5 XOR'ing should be separate from the
driver itself.
This is very different from Windows!!
As an example, the Adaptec 94xx and Marvell 6440 drivers share the same
SAS discovery code -- drivers/scsi/libsas, because discovery is 99% in
the OS driver.
However, LSI's mpt2sas is more firmware-based, so more of the discovery
process is found in hardware-specific drivers/scsi/mpt2sas.
Another example: RAID5 and RAID6 algorithms in Linux have been
hand-optimized for specific CPU architectures (drivers/md/raid6*).
Implementing your own software RAID would decrease performance and
eliminate the years of field testing performed on the existing code base.
For implementations of RAID that are largely firmware-based, most of the
RAID implementation is found in microcontroller firmware. This relieves
you of the burden of driver code duplication.
3) Ensure that the userland Application Binary Interface (ABI) for your
driver is consistent with other Linux drivers, for the same features.
If there is a feature NOT unique to LSI, attempt to maintain consistency
with existing Linux driver APIs.
If the feature is LSI-specific, use your best design judgement.
This ensures that existing Linux tools work.
4) For reasons stated above, we are FORCED to consider your driver in
the context of other Linux drivers from other hardware vendors.
The main reason, as I said, is to avoid code duplication.
Two implementations of software RAID 5 mean twice the bugs, and twice
the support/maintenance costs for Linux maintainers and distributors.
It is unfortunate but true that Linux maintainers must consider when a
chip reaches end-of-life support, or a hardware vendor goes out of
business, and users still want to keep using their hardware.
Whew, that was long. I hope this makes sense...
Regards,
Jeff
* Re: [RFQ] New driver architecture questions
2009-05-14 4:16 ` Jeff Garzik
@ 2009-05-14 8:51 ` Boaz Harrosh
2009-05-15 0:58 ` adam radford
1 sibling, 0 replies; 14+ messages in thread
From: Boaz Harrosh @ 2009-05-14 8:51 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mukker, Atul, linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
On 05/14/2009 07:16 AM, Jeff Garzik wrote:
> Mukker, Atul wrote:
>> Interesting answer :-)
>>
>> But it definitely makes a few things clear:
>>
>> 1. The possibility definitely exists, if done right. We will review Intel's code and try to use it as a reference.
>> 2. The earlier the code is made public, the more likely it is to stay on the "right" track.
>
> Agreed!
>
>
>> Are there known pitfalls we should guard against? Why is your focus on Linux "drivers"? Do you expect more than one?
>
> Good questions :) I use "drivers", plural, to illustrate how Linux
> maintainers attempt to take a whole-system approach to driver evaluation.
>
> We have to consider the user experience, support and maintenance of
> multiple Linux drivers from multiple hardware vendors.
>
> To pick an easy example in my area of expertise, every major vendor of
> [typically non-firmware-based] SATA controllers that I deal with, such
> as Intel, NVIDIA, Silicon Image, Promise and Marvell, ship a Windows
> driver that includes software code in the OS driver for
> * supporting their hardware controller
> * implementing software RAID levels 0, 1, and 5
>
> This is fine because the hardware vendor is only concerned with their
> own hardware.
>
> However, in Linux, we aim to maintain a consistent level of support
> _across_ multiple hardware vendors. This is the reason why the same
> driver, drivers/ata/ahci.c, is used for AHCI controllers from
> - Intel
> - NVIDIA
> - ULi
> - SiS
> - VIA
> - JMicron
> - Marvell
> - ACard/Artop
>
> When a bug is fixed in the ahci.c driver, _all_ customers benefit from
> this bug fix. When a new feature is added, _all_ customers benefit from
> a new feature.
>
> Of course, if there is an NVIDIA-specific hardware feature, that does
> not apply to other hardware vendors, that is welcomed! It is placed in
> an NVIDIA-specific driver module.
>
> To pick another example, cross-OS layers from hardware vendor A, created
> in the past, have included workarounds for errata in system platforms
> from hardware vendor B. In Linux, we typically put system workarounds
> in drivers/pci/quirks.c or arch/* so that the workaround is applied to
> all _systems_ that need it. (Of course, if the erratum is truly specific
> only to A+B, then yes, the workaround should generally be in A's driver.)
>
> Additionally, minimizing duplicate code across hardware vendors
> MAXIMIZES TESTING across all Linux drivers.
>
> In Linux, when there is a change to software RAID-5, it is instantly
> tested and verified across multiple hardware vendors, on multiple system
> architectures and technologies.
>
>
> So, what does this mean for LSI? In my humble opinion :)
>
> 1) A driver should be modular, in order to properly separate out
> hardware-specific and OS-specific pieces. Taking drivers/net/e1000e as
> an example,
>
> hw.h hardware-specific defines, ~cross-OS
> 82571.c code specific to 8257x chip family, ~cross-OS
> ich8lan.c code specific to ICH8+ chip family, ~cross-OS
> netdev.c core driver code, Linux-specific
>
> A key engineering task is decomposing the driver into fine-grained,
> OS-specific OR hardware-specific operations.
>
> Avoid large amounts of C pre-processor wrappers, and maximize use of
> native C types and enums.
>
>
> 2) Highly standardized, not-specific-to-LSI-hardware routines such as
> SAS discovery or software RAID5 XOR'ing should be separate from the
> driver itself.
>
> This is very different from Windows!!
>
> As an example, the Adaptec 94xx and Marvell 6440 drivers share the same
> SAS discovery code -- drivers/scsi/libsas, because discovery is 99% in
> the OS driver.
>
> However, LSI's mpt2sas is more firmware-based, so more of the discovery
> process is found in hardware-specific drivers/scsi/mpt2sas.
>
> Another example: RAID5 and RAID6 algorithms in Linux have been
> hand-optimized for specific CPU architectures (drivers/md/raid6*).
> Implementing your own software RAID would decrease performance and
> eliminate the years of field testing performed on the existing code base.
>
> For implementations of RAID that are largely firmware-based, most of the
> RAID implementation is found in microcontroller firmware. This relieves
> you of the burden of driver code duplication.
>
>
> 3) Ensure that the userland Application Binary Interface (ABI) for your
> driver is consistent with other Linux drivers, for the same features.
>
> If there is a feature NOT unique to LSI, attempt to maintain consistency
> with existing Linux driver APIs.
>
> If the feature is LSI-specific, use your best design judgement.
>
> This ensures that existing Linux tools work.
>
>
> 4) For reasons stated above, we are FORCED to consider your driver in
> the context of other Linux drivers from other hardware vendors.
>
> The main reason, as I said, is to avoid code duplication.
>
> Two implementations of software RAID 5 mean twice the bugs, and twice
> the support/maintenance costs for Linux maintainers and distributors.
>
> It is unfortunate but true that Linux maintainers must consider when a
> chip reaches end-of-life support, or a hardware vendor goes out of
> business, and users still want to keep using their hardware.
>
>
> Whew, that was long. I hope this makes sense...
>
> Regards,
>
> Jeff
>
On top of everything Jeff said, with which I totally agree:
From past experience it is much (much^3) easier to start
from Linux, clean, then port to Windows. If you are very,
very concerned, do both in parallel, but put the first foot
in Linux.
This is because:
1. On Windows you have no partners looking at your code, and the code
is not shown publicly, so a small HAL or Linux-shim layer there will
raise no eyebrows.
2. The Linux model is much simpler, so it is a good design strategy
to start with the simple basics and add the Windows cross-OS
complexity on top.
3. Coding-style-wise, the layer abstraction is most natural when done
Windows-on-top-of-Linux rather than Linux-on-top-of-Windows.
Trivial bad example:

VOID KeQueryTickCount(OUT PLARGE_INTEGER TickCount)
{
        *TickCount = current_time();
}

You can see how it would be just better to use current_time() everywhere
than to use KeQueryTickCount().
So if you write clean, layered, abstracted code, pure Linux, then porting
it to Windows should be straightforward and maintainable. But if you
write clean, layered, abstracted code, pure Windows, and port it to Linux,
it will most definitely not be so easy or maintainable. And to get it
accepted you will need lots of back-porting into the Windows driver.
(Hell, they don't even have a proper libc in the Windows kernel.)
Just my $0.017
Boaz
* Re: [RFQ] New driver architecture questions
2009-05-14 4:16 ` Jeff Garzik
2009-05-14 8:51 ` Boaz Harrosh
@ 2009-05-15 0:58 ` adam radford
2009-05-15 1:01 ` Julian Calaby
1 sibling, 1 reply; 14+ messages in thread
From: adam radford @ 2009-05-15 0:58 UTC (permalink / raw)
To: Jeff Garzik
Cc: Mukker, Atul, linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
On Wed, May 13, 2009 at 9:16 PM, Jeff Garzik <jeff@garzik.org> wrote:
> Taking drivers/net/e1000e as an
> example,
>
> hw.h hardware-specific defines, ~cross-OS
> 82571.c code specific to 8257x chip family, ~cross-OS
82571.c contains Linux-specific code, such as Linux-specific
header file includes and calls to msleep().
> ich8lan.c code specific to ICH8+ chip family, ~cross-OS
ich8lan.c contains Linux-specific code such as might_sleep(),
mutex_trylock(), mutex_unlock(), udelay(), msleep(), writel(), readl().
Perhaps this is a bad example? It seems like the "common layer"
sections that are "cross-OS" shouldn't contain any Linux-specific code at all.
-Adam
* Re: [RFQ] New driver architecture questions
2009-05-15 0:58 ` adam radford
@ 2009-05-15 1:01 ` Julian Calaby
2009-05-15 3:01 ` Jeff Garzik
0 siblings, 1 reply; 14+ messages in thread
From: Julian Calaby @ 2009-05-15 1:01 UTC (permalink / raw)
To: adam radford
Cc: Jeff Garzik, Mukker, Atul, linux-kernel@vger.kernel.org,
Austria, Winston, linux-scsi@vger.kernel.org
On Fri, May 15, 2009 at 10:58, adam radford <aradford@gmail.com> wrote:
> On Wed, May 13, 2009 at 9:16 PM, Jeff Garzik <jeff@garzik.org> wrote:
>> Taking drivers/net/e1000e as an
>> example,
>>
>> hw.h hardware-specific defines, ~cross-OS
>> 82571.c code specific to 8257x chip family, ~cross-OS
>
> 82571.c contains Linux specific code such as: including Linux
> specific header files, calls to msleep().
>
>> ich8lan.c code specific to ICH8+ chip family, ~cross-OS
>
> ich8lan.c contains Linux specific code such as: might_sleep(),
> mutex_trylock(), mutex_unlock(), udelay(), msleep(), writel(), readl().
>
> Perhaps this is a bad example? It seems like the "common layer"
> sections that are "cross-OS" shouldn't contain any Linux specific code at all.
I think the implication is that the cross-OS parts happen to be coded
in the Linux coding style, using Linux functions, but a Windows layer
then maps these to Windows-specific functions.
E.g. msleep(), mutex_trylock(), etc. are implemented in the Windows
layer by mapping them to the Windows functions that do the same.
Thanks,
--
Julian Calaby
Email: julian.calaby@gmail.com
.Plan: http://sites.google.com/site/juliancalaby/
* Re: [RFQ] New driver architecture questions
2009-05-15 1:01 ` Julian Calaby
@ 2009-05-15 3:01 ` Jeff Garzik
2009-05-15 14:56 ` Mukker, Atul
0 siblings, 1 reply; 14+ messages in thread
From: Jeff Garzik @ 2009-05-15 3:01 UTC (permalink / raw)
To: Julian Calaby
Cc: adam radford, Mukker, Atul, linux-kernel@vger.kernel.org,
Austria, Winston, linux-scsi@vger.kernel.org
Julian Calaby wrote:
> On Fri, May 15, 2009 at 10:58, adam radford <aradford@gmail.com> wrote:
>> On Wed, May 13, 2009 at 9:16 PM, Jeff Garzik <jeff@garzik.org> wrote:
>>> Taking drivers/net/e1000e as an
>>> example,
>>>
>>> hw.h hardware-specific defines, ~cross-OS
>>> 82571.c code specific to 8257x chip family, ~cross-OS
>> 82571.c contains Linux specific code such as: including Linux
>> specific header files, calls to msleep().
>>
>>> ich8lan.c code specific to ICH8+ chip family, ~cross-OS
>> ich8lan.c contains Linux specific code such as: might_sleep(),
>> mutex_trylock(), mutex_unlock(), udelay(), msleep(), writel(), readl().
>>
>> Perhaps this is a bad example? It seems like the "common layer"
>> sections that are "cross-OS" shouldn't contain any Linux specific code at all.
>
> I think the implication is that the cross-OS parts are coded, as it
> happens, in the linux coding style, using linux functions, but then a
> Windows layer maps these to Windows specific functions.
Correct.
Jeff
* RE: [RFQ] New driver architecture questions
2009-05-15 3:01 ` Jeff Garzik
@ 2009-05-15 14:56 ` Mukker, Atul
2009-05-15 16:04 ` James Bottomley
2009-05-15 16:36 ` Matthew Wilcox
0 siblings, 2 replies; 14+ messages in thread
From: Mukker, Atul @ 2009-05-15 14:56 UTC (permalink / raw)
To: Jeff Garzik, Julian Calaby
Cc: adam radford, linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
>>
> >> Perhaps this is a bad example? It seems like the "common layer"
> >> sections that are "cross-OS" shouldn't contain any Linux specific code
> at all.
> >
> > I think the implication is that the cross-OS parts are coded, as it
> > happens, in the linux coding style, using linux functions, but then a
> > Windows layer maps these to Windows specific functions.
>
> Correct.
>
> Jeff
>
>
>
[Atul] I think we are close. For example, the memcpy API in the stack is osi_memcpy(), which translates to memcpy() on Linux and ScsiPortMoveMemory() on Windows.
* RE: [RFQ] New driver architecture questions
2009-05-15 14:56 ` Mukker, Atul
@ 2009-05-15 16:04 ` James Bottomley
2009-05-15 16:36 ` Matthew Wilcox
1 sibling, 0 replies; 14+ messages in thread
From: James Bottomley @ 2009-05-15 16:04 UTC (permalink / raw)
To: Mukker, Atul
Cc: Jeff Garzik, Julian Calaby, adam radford,
linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
On Fri, 2009-05-15 at 08:56 -0600, Mukker, Atul wrote:
> >>
> > >> Perhaps this is a bad example? It seems like the "common layer"
> > >> sections that are "cross-OS" shouldn't contain any Linux specific code
> > at all.
> > >
> > > I think the implication is that the cross-OS parts are coded, as it
> > > happens, in the linux coding style, using linux functions, but then a
> > > Windows layer maps these to Windows specific functions.
> >
> > Correct.
> >
> > Jeff
> >
> >
> >
> [Atul] I think we are close. For example, the memcpy API in the stack is
> osi_memcpy(), which translates to memcpy() on Linux and
> ScsiPortMoveMemory() on Windows.
So what we really don't want to see in Linux drivers is the particular
name you've chosen for your glue layer API (like osi_memcpy). The way
you get around this is to run a macro substituter over the HIM before
you put it in the kernel, so the API logic all appears to be linux
specific, even though it's translated from your generic HIM. This adds
a bit of a burden sometimes in patching because you have to translate
back to the HIM to apply the patch, but this can be largely automated.
James
* Re: [RFQ] New driver architecture questions
2009-05-15 14:56 ` Mukker, Atul
2009-05-15 16:04 ` James Bottomley
@ 2009-05-15 16:36 ` Matthew Wilcox
2009-05-15 18:03 ` Mukker, Atul
1 sibling, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2009-05-15 16:36 UTC (permalink / raw)
To: Mukker, Atul
Cc: Jeff Garzik, Julian Calaby, adam radford,
linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
On Fri, May 15, 2009 at 08:56:25AM -0600, Mukker, Atul wrote:
> [Atul] I think we are close. For example, the memcpy API in the stack is osi_memcpy(), which translates to memcpy() on Linux and ScsiPortMoveMemory() on Windows.
The solution to "We have some people who speak French and other people who
speak German" is not to invent Esperanto ;-)
Using one or the other internally is fine (we don't care what you do),
but we want to see memcpy(). By the way, the documentation I found for
ScsiPortMoveMemory() seems to indicate that it's memmove(), not memcpy().
Mapping memcpy() to ScsiPortMoveMemory() is fine ... but you can't
reliably go the other way.
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
* RE: [RFQ] New driver architecture questions
2009-05-15 16:36 ` Matthew Wilcox
@ 2009-05-15 18:03 ` Mukker, Atul
2009-05-15 18:16 ` Matthew Wilcox
0 siblings, 1 reply; 14+ messages in thread
From: Mukker, Atul @ 2009-05-15 18:03 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Jeff Garzik, Julian Calaby, adam radford,
linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
> -----Original Message-----
> From: Matthew Wilcox [mailto:matthew@wil.cx]
> Sent: Friday, May 15, 2009 12:36 PM
> To: Mukker, Atul
> Cc: Jeff Garzik; Julian Calaby; adam radford; linux-
> kernel@vger.kernel.org; Austria, Winston; linux-scsi@vger.kernel.org
> Subject: Re: [RFQ] New driver architecture questions
>
> On Fri, May 15, 2009 at 08:56:25AM -0600, Mukker, Atul wrote:
> > [Atul] I think we are close. For example, the memcpy API in the stack is
> > osi_memcpy(), which translates to memcpy() on Linux and
> > ScsiPortMoveMemory() on Windows.
>
> The solution to "We have some people who speak French and other people who
> speak German" is not to invent Esperanto ;-)
[Atul] We really wish they could all communicate in English :-). Since that's not an option, we agree in principle that using native Linux kernel APIs wherever possible is probably a good idea.
>
> Using one or the other internally is fine (we don't care what you do),
> but we want to see memcpy(). By the way, the documentation I found for
> ScsiPortMoveMemory() seems to indicate that it's memmove(), not memcpy().
> Mapping memcpy() to ScsiPortMoveMemory() is fine ... but you can't
> reliably go the other way.
[Atul] It's actually memcpy(): http://msdn.microsoft.com/en-us/library/ms805434.aspx
* Re: [RFQ] New driver architecture questions
2009-05-15 18:03 ` Mukker, Atul
@ 2009-05-15 18:16 ` Matthew Wilcox
2009-05-15 18:38 ` Mukker, Atul
0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2009-05-15 18:16 UTC (permalink / raw)
To: Mukker, Atul
Cc: Jeff Garzik, Julian Calaby, adam radford,
linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
On Fri, May 15, 2009 at 12:03:39PM -0600, Mukker, Atul wrote:
> > The solution to "We have some people who speak French and other people who
> > speak German" is not to invent Esperanto ;-)
> [Atul] We really wish they could all communicate in English :-). Since that's not an option, we agree in principle that using native Linux kernel APIs wherever possible is probably a good idea.
I'd stick to the C APIs where possible ... oh, that's what Linux does. OK ;-)
> > Using one or the other internally is fine (we don't care what you do),
> > but we want to see memcpy(). By the way, the documentation I found for
> > ScsiPortMoveMemory() seems to indicate that it's memmove(), not memcpy().
> > Mapping memcpy() to ScsiPortMoveMemory() is fine ... but you can't
> > reliably go the other way.
> [Atul] It's actually memcpy(),http://msdn.microsoft.com/en-us/library/ms805434.aspx
No, it's memmove(). "The (ReadBuffer + Length) can overlap the area
pointed to by WriteBuffer."
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
* RE: [RFQ] New driver architecture questions
2009-05-15 18:16 ` Matthew Wilcox
@ 2009-05-15 18:38 ` Mukker, Atul
0 siblings, 0 replies; 14+ messages in thread
From: Mukker, Atul @ 2009-05-15 18:38 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Jeff Garzik, Julian Calaby, adam radford,
linux-kernel@vger.kernel.org, Austria, Winston,
linux-scsi@vger.kernel.org
> > > Using one or the other internally is fine (we don't care what you do),
> > > but we want to see memcpy(). By the way, the documentation I found
> for
> > > ScsiPortMoveMemory() seems to indicate that it's memmove(), not
> memcpy().
> > > Mapping memcpy() to ScsiPortMoveMemory() is fine ... but you can't
> > > reliably go the other way.
> > [Atul] It's actually memcpy(): http://msdn.microsoft.com/en-us/library/ms805434.aspx
>
> No, it's memmove(). "The (ReadBuffer + Length) can overlap the area
> pointed to by WriteBuffer."
[Atul] Look, you are already finding possible issues with the source code before it is even out :-).
Thanks everyone for your inputs so far!
Atul
end of thread, other threads:[~2009-05-15 18:39 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-14 1:57 [RFQ] New driver architecture questions Mukker, Atul
2009-05-14 2:44 ` Jeff Garzik
2009-05-14 3:07 ` Mukker, Atul
2009-05-14 4:16 ` Jeff Garzik
2009-05-14 8:51 ` Boaz Harrosh
2009-05-15 0:58 ` adam radford
2009-05-15 1:01 ` Julian Calaby
2009-05-15 3:01 ` Jeff Garzik
2009-05-15 14:56 ` Mukker, Atul
2009-05-15 16:04 ` James Bottomley
2009-05-15 16:36 ` Matthew Wilcox
2009-05-15 18:03 ` Mukker, Atul
2009-05-15 18:16 ` Matthew Wilcox
2009-05-15 18:38 ` Mukker, Atul