From: James Smart <James.Smart@Emulex.Com>
To: "Mukker, Atul" <Atul.Mukker@lsi.com>
Cc: Brian King <brking@linux.vnet.ibm.com>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Subject: Re: Recommended HBA management interfaces
Date: Mon, 20 Jul 2009 12:57:46 -0400 [thread overview]
Message-ID: <4A64A20A.5080908@emulex.com> (raw)
In-Reply-To: <6C678488C5CEE74F813A4D1948FD2DC7A99EFDC7@cosmail02.lsi.com>
FYI - netlink (and sysfs, and I believe debugfs) do not exist with
vmware drivers... Additionally, with netlink, many of the distros no
longer include libnl by default in their install images. Even
interfaces that you think exist on vmware may have very different
semantic behavior (almost all of the transport stuff either doesn't
exist or is only partially implemented).
One big caveat I'd give you: it's not so much the interface being used,
but rather what you are doing over the interface. One of the goals of
the community is to present a consistent management paradigm for like
things. Thus, if what you are doing is generic, you should do it in a
generic manner so that all drivers for like hardware can utilize it.
This was the motivation for the protocol transports. Interestingly, even
the transports use different interfaces for different things. It all
depends on what it is.
Lastly, some things are considered bad practice from a kernel safety
point of view. Example: driver-specific ioctls passing around user-space
buffer pointers. In these cases, it doesn't matter what interface you
pick, they'll be rejected.
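As an illustration of the safer pattern James is pointing at (a hypothetical sketch, not taken from any real driver): instead of embedding a user-space buffer pointer inside a driver-private struct, define a flat, fixed-size ioctl argument so the kernel side can validate it and move it with a single bounded copy_from_user()/copy_to_user(). The 'H' magic, command number, and size cap below are all made up for illustration.

```c
/* Hypothetical example of a flat, fixed-size ioctl argument.
 * The payload is carried inline, so the driver never has to
 * dereference a user-space pointer hidden inside the struct. */
#include <linux/ioctl.h>
#include <stdint.h>

#define HBA_MAX_PAYLOAD 512  /* hypothetical cap on inline data */

struct hba_mgmt_cmd {
	uint32_t opcode;                   /* vendor command code */
	uint32_t payload_len;              /* bytes used in payload[] */
	uint8_t  payload[HBA_MAX_PAYLOAD]; /* data inline, no pointer */
};

/* 'H' and 0x01 are placeholder values for illustration only. */
#define HBA_IOC_MGMT _IOWR('H', 0x01, struct hba_mgmt_cmd)
```

Because the struct size is encoded into the ioctl number itself, the kernel side can reject mismatched binaries cheaply before copying anything.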
-- james s
Mukker, Atul wrote:
> Thanks Brian. Netlink seems to be appropriate for our purpose as well, almost too good :-)
>
> That makes me think: what's the catch? For one, the SCSI drivers are not heavy users of this interface.
>
> Are there other caveats associated with it?
>
> Best regards,
> Atul Mukker
>
>
>> -----Original Message-----
>> From: Brian King [mailto:brking@linux.vnet.ibm.com]
>> Sent: Friday, July 17, 2009 11:36 AM
>> To: Mukker, Atul
>> Cc: linux-scsi@vger.kernel.org
>> Subject: Re: Recommended HBA management interfaces
>>
>> Mukker, Atul wrote:
>>
>>> Hi All,
>>>
>>> We would like expert comments on the following questions regarding
>>> management of HBA from applications.
>>>
>>> Traditionally, our drivers create a character device node, whose
>>> file_operations are then used by the management applications to
>>> transfer HBA specific commands. In addition to being quirky, this
>>> interface has a few limitations which we would like to remove, most
>>> important being able to seamlessly handle asynchronous events with
>>> data transfer.
>>>
>>> 1. What is (are) the other standard/recommended interfaces which
>>> applications can use to transfer HBA specific commands and data.
>>>
>> Depends on what the commands look like. With ipr, the commands that
>> the management application needs to send to the HBA look sufficiently
>> like SCSI that I was able to report an sg device node for the adapter
>> and use SG_IO to send these commands.
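A minimal user-space sketch of the SG_IO approach Brian describes (the device path and the standard 6-byte INQUIRY are only illustrative; a vendor management command that "looks like SCSI" would be packaged the same way):

```c
/* Sketch: build an sg_io_hdr for a 6-byte INQUIRY and send it
 * through the SG_IO ioctl on an sg device node. */
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

static void build_inquiry(struct sg_io_hdr *hdr,
			  unsigned char *cdb, unsigned char *buf,
			  unsigned char *sense, int buf_len)
{
	memset(hdr, 0, sizeof(*hdr));
	memset(cdb, 0, 6);
	cdb[0] = 0x12;             /* INQUIRY opcode */
	cdb[4] = buf_len;          /* allocation length (<= 255 here) */

	hdr->interface_id = 'S';   /* required by the sg driver */
	hdr->cmd_len = 6;
	hdr->cmdp = cdb;
	hdr->dxfer_direction = SG_DXFER_FROM_DEV;
	hdr->dxferp = buf;
	hdr->dxfer_len = buf_len;
	hdr->sbp = sense;
	hdr->mx_sb_len = 32;
	hdr->timeout = 5000;       /* milliseconds */
}

/* Usage (needs a real sg node, e.g. /dev/sg0):
 *   int fd = open("/dev/sg0", O_RDWR);
 *   ioctl(fd, SG_IO, &hdr);
 */
```

The same framing carries any CDB the sg driver will pass through, which is what makes this attractive when management commands are already SCSI-shaped.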
>>
>> sysfs, debugfs, and configfs are options as well.
>>
>>
>>
>>> 2. How should an LLD implement interfaces to transmit asynchronous
>>> information to the management applications? The requirement is to be
>>> able to transmit data buffer as well as notifications for events.
>>>
>> I've had good success with netlink. In my use I only send a notification
>> to userspace and let the application send some commands to figure out
>> what happened, but netlink does allow sending data as well. It makes it
>> very easy to have multiple concurrent readers of the data, which I've
>> found very useful.
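A sketch of the user-space side of such a listener, assuming a netlink event channel along the lines Brian describes. NETLINK_USERSOCK and multicast group 1 are placeholder choices; a real driver would define or reuse its own protocol number and group. Shown here is just the message framing:

```c
/* Sketch: frame an event payload into a netlink message buffer,
 * the shape a listener would read back with recv(). */
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define EVENT_GROUP 1  /* hypothetical multicast group */

static struct nlmsghdr *frame_event(void *buf, size_t buf_len,
				    const void *data, int data_len)
{
	struct nlmsghdr *nlh = buf;

	if ((size_t)NLMSG_SPACE(data_len) > buf_len)
		return NULL;
	memset(buf, 0, NLMSG_SPACE(data_len));
	nlh->nlmsg_len = NLMSG_LENGTH(data_len);
	nlh->nlmsg_type = NLMSG_DONE;
	memcpy(NLMSG_DATA(nlh), data, data_len);
	return nlh;
}

/* Receiving side (sketch): each subscriber to the multicast group
 * gets its own copy of every message, which is what makes multiple
 * concurrent readers cheap.
 *
 *   int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_USERSOCK);
 *   struct sockaddr_nl sa = { .nl_family = AF_NETLINK,
 *                             .nl_groups = EVENT_GROUP };
 *   bind(fd, (struct sockaddr *)&sa, sizeof(sa));
 *   recv(fd, buf, sizeof(buf), 0);
 */
```

Since multicast delivery fans out in the kernel, no driver-side bookkeeping of readers is needed, and the channel works whether or not any SCSI devices are exported.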
>>
>>
>>> 3. The interface should be able to work even if no SCSI devices are
>>> exported to the kernel.
>>>
>> netlink allows this.
>>
>>
>>> 4. Should work seamlessly across vmware and xen kernels.
>>>
>> netlink should work here too.
>>
>> -Brian
>>
>> --
>> Brian King
>> Linux on Power Virtualization
>> IBM Linux Technology Center
>>
>>
>
Thread overview: 13+ messages
2009-07-17 13:16 Recommended HBA management interfaces Mukker, Atul
2009-07-17 15:35 ` Brian King
2009-07-20 16:28 ` Mukker, Atul
2009-07-20 16:57 ` James Smart [this message]
2009-07-20 18:03 ` Mukker, Atul
2009-07-20 19:08 ` James Smart
2009-07-20 20:33 ` Mukker, Atul
2009-07-21 12:29 ` James Smart
2009-07-21 13:38 ` Mukker, Atul
2009-07-21 13:48 ` Drew
2009-07-21 13:58 ` Mukker, Atul
2009-07-21 14:59 ` James Smart
2009-07-21 16:27 ` Drew