* sfc userland MCDI - request for guidance
@ 2016-01-15 17:34 Edward Cree
  2016-01-16 10:03 ` Jiri Pirko
  0 siblings, 1 reply; 2+ messages in thread
From: Edward Cree @ 2016-01-15 17:34 UTC (permalink / raw)
  To: netdev; +Cc: linux-net-drivers

I have a design problem with a few possible solutions and I'd like some
 guidance on which ones would be likely to be acceptable.

The sfc driver communicates with the hardware using a protocol called MCDI -
 Management Controller to Driver Interface - and for various reasons
 (ranging from test automation to configuration utilities) we would like to
 be able to do this from userspace.  We currently have two ways of handling
 this, neither of which is satisfactory.
One is to use libpci to talk directly to the hardware; however this is
 unsafe when the driver is loaded because both driver and userland could try
 to send MCDI commands at the same time using the same doorbell.
The other is a private ioctl which is implemented in the out-of-tree version
 of our driver.  However, as an ioctl it presumably would not be acceptable
 in-tree.

The possible solutions we've come up with so far are:
* Generic Netlink.  Define a netlink family for EFX_MCDI, registered at
  driver load time, and using ifindex to identify which device to send the
  MCDI to.  The MCDI payload would be sent over netlink as a binary blob,
  because converting it to attributes and back would be a great deal of
  unnecessary work (there are many commands and many, many arguments).  The
  response from
  the hardware would be sent back to userland the same way.
* Sysfs.  Have a sysfs node attached to the net device, to which MCDI
  commands are written and from which the responses are read.  This does
  mean userland has to handle mutual exclusion, else it could get the
  response to another process's request.
* Have the driver reserve an extra VI ('Virtual Interface') on the NIC
  beyond its own requirements, and report the index of that VI in a sysfs
  node attached to the net device.  Then the userland app can read it, and
  use that VI to do its MCDI through libpci.  Since each VI has its own MCDI
  doorbell, this is safe, but involves libpci and requires that a VI always
  be reserved for this.  Again, mutual exclusion is left to userspace.
* Have firmware expose a fake MTD partition, writes to which are interpreted
  as MCDI commands to run; no modification to the driver would be needed.
  This is incredibly ugly and our firmware team would rather not do it :)

Are any of these appropriate?


* Re: sfc userland MCDI - request for guidance
  2016-01-15 17:34 sfc userland MCDI - request for guidance Edward Cree
@ 2016-01-16 10:03 ` Jiri Pirko
  0 siblings, 0 replies; 2+ messages in thread
From: Jiri Pirko @ 2016-01-16 10:03 UTC (permalink / raw)
  To: Edward Cree; +Cc: netdev, linux-net-drivers

Fri, Jan 15, 2016 at 06:34:04PM CET, ecree@solarflare.com wrote:
>I have a design problem with a few possible solutions and I'd like some
> guidance on which ones would be likely to be acceptable.
>
>The sfc driver communicates with the hardware using a protocol called MCDI -
> Management Controller to Driver Interface - and for various reasons
> (ranging from test automation to configuration utilities) we would like to
> be able to do this from userspace.  We currently have two ways of handling
> this, neither of which is satisfactory.

It is wrong to expose sending commands directly to HW from userspace.
Please do a proper in-kernel, vendor-neutral feature abstraction and then
implement it in your driver. That is how things work. A userspace->HW
bypass is simply unacceptable.

