From: James Smart <James.Smart@Emulex.Com>
To: "Mukker, Atul" <Atul.Mukker@lsi.com>
Cc: Brian King <brking@linux.vnet.ibm.com>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Subject: Re: Recommended HBA management interfaces
Date: Mon, 20 Jul 2009 15:08:45 -0400	[thread overview]
Message-ID: <4A64C0BD.5040503@emulex.com> (raw)
In-Reply-To: <6C678488C5CEE74F813A4D1948FD2DC7A99EFE16@cosmail02.lsi.com>

Mukker, Atul wrote:
> Thanks for restating my original question.
>
> 1. What interface should be used by the HBA management applications to obtain (non-generic) information from the HBA?
>   
My opinions:

sysfs:
  Pro: Good for singular data items and simple status (link state, f/w rev,
         etc.; a minimal driver-side sketch follows below).
       Very good for things that really don't need a tool, i.e. simplistic
         admin commands (show state, reset board, etc.).
  Con: Doesn't work well for "transactions" that need multiple data elements.
       No insight into the process life cycle, so multi-step and concurrent
         transactions are difficult.
       Doesn't work with binary data, buffers, etc.
       Difficult to use concurrently by multiple processes.
       Can't push async info to user space.
       No support for complex things.
       The list of attributes can get big. Not a big deal, but...
       Security is based on attribute permissions (not always the best model).
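
For reference, a minimal sketch of the "singular data item" case: one
read-only attribute reporting a firmware revision string. The foo_hba
structure, the attribute name and the way the attribute array gets wired
into the host template are all hypothetical and driver-specific; this is
only meant to show the shape of the interface, not any particular driver.

/*
 * Sketch of a single-item sysfs attribute.  "struct foo_hba" and the
 * attribute name are illustrative only.
 */
#include <linux/device.h>
#include <scsi/scsi_host.h>

struct foo_hba {
	char fw_rev[16];			/* filled in at probe time */
};

static ssize_t foo_show_fw_rev(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct Scsi_Host *shost = class_to_shost(dev);
	struct foo_hba *hba = shost_priv(shost);

	/* one value per attribute file, the usual sysfs rule */
	return snprintf(buf, PAGE_SIZE, "%s\n", hba->fw_rev);
}
static DEVICE_ATTR(fw_rev, S_IRUGO, foo_show_fw_rev, NULL);

/* typically exported through the driver's host attribute list */
static struct device_attribute *foo_shost_attrs[] = {
	&dev_attr_fw_rev,
	NULL,
};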

configfs:
  Pro: Basically sysfs, but built for transactions with multiple data
         elements (a rough user-space sketch of the pattern follows below).
  Con: Same as sysfs, just minus the multiple-data-element con.
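
To make the "transaction" idea concrete, here is a rough user-space sketch
of the pattern: one directory per transaction, one file per data element,
then a final write that kicks the whole thing off. The foo_mgmt subsystem
name and the attribute names are made up; a real driver defines its own
layout and semantics.

/*
 * User-space view of a hypothetical configfs-based transaction.
 * All paths and attribute names are placeholders.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

static int put(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	const char *item = "/sys/kernel/config/foo_mgmt/xact0";
	char path[256];

	/* creating the directory creates one transaction object */
	if (mkdir(item, 0755) && errno != EEXIST)
		return 1;

	/* fill in the individual data elements ... */
	snprintf(path, sizeof(path), "%s/opcode", item);
	put(path, "42");
	snprintf(path, sizeof(path), "%s/target", item);
	put(path, "3");

	/* ... then trigger the whole transaction with one final write */
	snprintf(path, sizeof(path), "%s/go", item);
	return put(path, "1") ? 1 : 0;
}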

netlink:
  Pro: Very good for "multi-cast" operations, i.e. pushing async events to
         multiple receivers.
       Handles requests and responses with multiple data elements easily.
       Can track per-process life cycles.
       Socket based, so it could even support mgmt from a different machine.
       Security checking is easy to build in.
  Con: Doesn't work well for large payloads.
       Payloads can't be referenced via a data pointer (they need to be
         inline in the packet).
       Direct DMA is not supported; data has to be staged in a driver buffer
         and copied in/out of the socket.
       Multi-step transactions are doable, but difficult. Maintaining
         relationships per pid is difficult.
       Multiple machines means dealing with endian-ness and data typing.
       The netlink sockets do have memory-related issues that must be
         watched.

   Note: to avoid burning NETLINK id space, and possibly colliding in
         different distro kernels, please use the midlayer's netlink
         infrastructure, which does allow driver-specific messaging.
         (A rough user-space listener sketch follows below.)
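
For the receive side, a user-space listener sketch is below. The protocol
number and the multicast group used here are assumptions; the real values
come from linux/netlink.h and the kernel's scsi_netlink.h and should be
checked against the kernel you are building for.

/*
 * Sketch of a user-space listener for async events pushed over the SCSI
 * transport netlink channel.  Protocol/group values are assumptions.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#ifndef NETLINK_SCSITRANSPORT
#define NETLINK_SCSITRANSPORT	18	/* assumed; verify in linux/netlink.h */
#endif
#define ASSUMED_EVENT_GROUP	1	/* placeholder multicast group number */

int main(void)
{
	struct sockaddr_nl sa;
	char buf[4096];
	int fd, len;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SCSITRANSPORT);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&sa, 0, sizeof(sa));
	sa.nl_family = AF_NETLINK;
	sa.nl_groups = 1 << (ASSUMED_EVENT_GROUP - 1);	/* join one group */
	if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("bind");
		return 1;
	}

	/* each datagram is one or more nlmsghdr-framed event messages */
	while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len))
			printf("event: type %u, %u bytes\n",
			       (unsigned)nlh->nlmsg_type,
			       (unsigned)nlh->nlmsg_len);
	}
	close(fd);
	return 0;
}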

bsg:
  (Specifically the new midlayer sgio support that was recently added for
   ELS passthru.)
  Pro: Supports requests and responses with multiple data elements easily
         (a rough user-space sketch follows below).
       Supports separate request and response DMA-able payload buffers.
       Supports big payloads easily.
  Con: No insight into the process life cycle, so multi-step and concurrent
         transactions are difficult.
       Async response generation (w/o an associated request) is very
         difficult.
       It's really a wrappered ioctl, with the midlayer protecting the
         kernel from bad ioctl practice via the way it converts the sgio
         ioctl into a midlayer request. That creates an odd programming
         interface, as you really want to wrapper the ioctl on the user
         side too.
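
A rough user-space sketch of one request/response exchange over a bsg node
is below. The device node path, the subprotocol choice and the request and
response encodings are all placeholders for whatever the driver actually
defines; buffer sizes are arbitrary.

/*
 * Sketch of a request/response pair over a bsg node via sg_io_v4.
 * Node path, subprotocol and message layouts are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>		/* SG_IO */
#include <linux/bsg.h>		/* struct sg_io_v4 */

int main(void)
{
	unsigned char req[32], rsp[96], din[4096];
	struct sg_io_v4 io;
	int fd;

	fd = open("/dev/bsg/fc_host0", O_RDWR);	/* hypothetical node */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(req, 0, sizeof(req));		/* driver-defined request */
	memset(&io, 0, sizeof(io));
	io.guard = 'Q';				/* marks this as sg_io_v4 */
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_TRANSPORT;
	io.request_len = sizeof(req);
	io.request = (unsigned long)req;
	io.max_response_len = sizeof(rsp);
	io.response = (unsigned long)rsp;
	io.din_xfer_len = sizeof(din);		/* DMA-able read payload */
	io.din_xferp = (unsigned long)din;

	if (ioctl(fd, SG_IO, &io) < 0) {
		perror("SG_IO");
		close(fd);
		return 1;
	}
	printf("done: driver_status 0x%x, %u response bytes\n",
	       io.driver_status, io.response_len);
	close(fd);
	return 0;
}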

Thus, when you look across the pros and cons, it's easy to see why the
transports use different interfaces for different purposes.

> 2. How should driver notify such applications of asynchronous events happening on the HBA?
>   
This is already there with the midlayer netlink support.  Vendor-unique 
events
are already supported.
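
As one concrete example, the FC transport already exports a helper for
exactly this. A driver-side sketch might look like the following; the
vendor id and payload layout are made up for illustration, and a non-FC
driver would need whatever equivalent its own transport provides.

/*
 * Driver-side sketch of pushing a vendor-unique async event through the
 * FC transport's netlink support.  Vendor id and payload are illustrative.
 */
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

#define FOO_VENDOR_ID	0x123456ULL	/* hypothetical OUI-based id */

static void foo_notify_temp_alarm(struct Scsi_Host *shost, u32 seqno)
{
	u32 payload[2] = { 0x01 /* event code */, 85 /* degrees C */ };

	/* broadcast to every user-space listener on the transport channel */
	fc_host_post_vendor_event(shost, seqno, sizeof(payload),
				  (char *)payload, FOO_VENDOR_ID);
}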

> Please keep in mind, all the data transfer between the applications and the HBA is a private protocol.
>   
Private or not, the code that uses the interface will have to be in the
driver.  That code will be inspected for proper/safe usage of the
interfaces.  Coding such that things in the messaging are black boxes will
always be a point of contention.

-- james s


> Thanks
> Atul Mukker
>  
>
>   
>> -----Original Message-----
>> From: James Smart [mailto:James.Smart@Emulex.Com]
>> Sent: Monday, July 20, 2009 12:58 PM
>> To: Mukker, Atul
>> Cc: Brian King; linux-scsi@vger.kernel.org
>> Subject: Re: Recommended HBA management interfaces
>>
>> FYI - netlink (and sysfs, and I believe debugfs) do not exist with
>> vmware drivers...   Additionally, with netlink, many of the distros no
>> longer include libnl by default in their install images.  Even
>> interfaces that you think exist on vmware may have very different
>> semantic behavior (almost all of the transport stuff either doesn't
>> exist or is only partially implemented).
>>
>> One big caveat I'd give you:  It's not so much the interface being used,
>> but rather what you are doing over the interface.  One of the goals of
>> the community is to present a consistent management paradigm for like
>> things.  Thus, if what you are doing is generic, you should do it in a
>> generic manner so that all drivers for like hardware can utilize it.
>> This was the motivation for the protocol transports. Interestingly, even
>> the transports use different interfaces for different things. It all
>> depends on what it is.
>>
>> Lastly, some things are considered bad practice from a kernel safety
>> point of view. Example: driver-specific ioctls passing around user-space
>> buffer pointers.  In these cases, it doesn't matter what interface you
>> pick, they'll be rejected.
>>
>> -- james s
>>
>>
>> Mukker, Atul wrote:
>>     
>>> Thanks Brian. Netlink seems to be appropriate for our purpose as well,
>>> almost too good :-)
>>>
>>> That makes me think, what's the catch? The SCSI drivers don't make
>>> heavy use of this interface, for one.
>>>
>>> Are there other caveats associated with it?
>>>
>>> Best regards,
>>> Atul Mukker
>>>
>>>
>>>       
>>>> -----Original Message-----
>>>> From: Brian King [mailto:brking@linux.vnet.ibm.com]
>>>> Sent: Friday, July 17, 2009 11:36 AM
>>>> To: Mukker, Atul
>>>> Cc: linux-scsi@vger.kernel.org
>>>> Subject: Re: Recommended HBA management interfaces
>>>>
>>>> Mukker, Atul wrote:
>>>>
>>>>         
>>>>> Hi All,
>>>>>
>>>>> We would like expert comments on the following questions regarding
>>>>> management of HBAs from applications.
>>>>>
>>>>> Traditionally, our drivers create a character device node, whose
>>>>> file_operations are then used by the management applications to
>>>>> transfer HBA specific commands. In addition to being quirky, this
>>>>> interface has a few limitations which we would like to remove, most
>>>>> important being able to seamlessly handle asynchronous events with
>>>>> data transfer.
>>>>>
>>>>> 1. What is (are) the other standard/recommended interfaces which
>>>>> applications can use to transfer HBA specific commands and data.
>>>>>
>>>>>           
>>>> Depends on what the commands look like. With ipr, the commands that
>>>> the management application needs to send to the HBA look sufficiently
>>>> like SCSI that I was able to report an sg device node for the adapter
>>>> and use SG_IO to send these commands.
>>>>
>>>> sysfs, debugfs, and configfs are options as well.
>>>>
>>>>
>>>>
>>>>         
>>>>> 2. How should an LLD implement interfaces to transmit asynchronous
>>>>> information to the management applications? The requirement is to be
>>>>> able to transmit data buffers as well as notifications for events.
>>>>>
>>>>>           
>>>> I've had good success with netlink. In my use I only send a notification
>>>> to userspace and let the application send some commands to figure out
>>>> what happened, but netlink does allow sending data as well. It makes it
>>>> very easy to have multiple concurrent readers of the data, which I've
>>>> found very useful.
>>>>
>>>>
>>>>         
>>>>> 3. The interface should be able to work even if no SCSI devices are
>>>>> exported to the kernel.
>>>>>
>>>>>           
>>>> netlink allows this.
>>>>
>>>>
>>>>         
>>>>> 4. Should work seamlessly across vmware and xen kernels.
>>>>>
>>>>>           
>>>> netlink should work here too.
>>>>
>>>> -Brian
>>>>
>>>> --
>>>> Brian King
>>>> Linux on Power Virtualization
>>>> IBM Linux Technology Center
>>>>
>>>>
>>>>         
>
>   

Thread overview: 13+ messages
2009-07-17 13:16 Recommended HBA management interfaces Mukker, Atul
2009-07-17 15:35 ` Brian King
2009-07-20 16:28   ` Mukker, Atul
2009-07-20 16:57     ` James Smart
2009-07-20 18:03       ` Mukker, Atul
2009-07-20 19:08         ` James Smart [this message]
2009-07-20 20:33           ` Mukker, Atul
2009-07-21 12:29             ` James Smart
2009-07-21 13:38               ` Mukker, Atul
2009-07-21 13:48               ` Drew
2009-07-21 13:58                 ` Mukker, Atul
2009-07-21 14:59                   ` James Smart
2009-07-21 16:27                     ` Drew
