public inbox for linux-rdma@vger.kernel.org
* InfiniBand Adapter with Documentation
@ 2015-06-22 22:04 Brandon Falk
From: Brandon Falk @ 2015-06-22 22:04 UTC
  To: linux-rdma@vger.kernel.org

Heya,

I have a compute cluster that runs a completely custom OS (not binary
or source compatible with Linux by any means), and I'm really
interested in InfiniBand support. Are there any adapters out there
with development guides for the system-level details (PCI BARs, the
MMIO register layout, and so on)? I'd ideally implement for the
Mellanox ConnectX-4, but I'm willing to go wherever the documentation is.
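
To be concrete about "system-level": I want enough PCI and register
documentation to write code like the rough sketch below from the manual
alone. pci_cfg_read32() here is just a stand-in for whatever config-space
accessor my OS exposes, not a real API:

    #include <stdint.h>

    /* Hypothetical config-space accessor provided by my OS. */
    extern uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);

    #define PCI_BAR0_OFFSET 0x10  /* first Base Address Register in the standard header */

    /* Find the physical address behind BAR0 so its registers can be mapped for MMIO. */
    static uint64_t read_bar0(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint32_t lo  = pci_cfg_read32(bus, dev, fn, PCI_BAR0_OFFSET);
        uint64_t bar = lo & ~0xFULL;              /* mask off the type/prefetch bits */

        if ((lo & 0x6) == 0x4)                    /* 64-bit memory BAR: upper half is in the next dword */
            bar |= (uint64_t)pci_cfg_read32(bus, dev, fn, PCI_BAR0_OFFSET + 4) << 32;

        return bar;
    }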

I just want to write a limited driver capable of RDMA writes and reads;
I'm not planning on supporting much beyond that. How feasible is that?
I've written multiple 1GbE drivers and a 10GbE driver (specifically for
the X540), and that was an 8-hour project thanks to good documentation.
Is documentation of that sort available for InfiniBand hardware?
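
For scope, on Linux the operation I care about is roughly the verbs-level
sequence below (libibverbs names used purely as a reference point; my driver
would talk to the hardware directly, and the rdma_write() wrapper is just my
own sketch):

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* Post a one-sided RDMA WRITE of an already-registered local buffer to a
     * remote address/rkey over a connected RC queue pair. */
    static int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                          uint32_t len, uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = len,
            .lkey   = mr->lkey,                  /* local key from ibv_reg_mr() */
        };
        struct ibv_send_wr wr = {
            .opcode     = IBV_WR_RDMA_WRITE,
            .sg_list    = &sge,
            .num_sge    = 1,
            .send_flags = IBV_SEND_SIGNALED,     /* ask for a completion on the send CQ */
        };
        struct ibv_send_wr *bad_wr = NULL;

        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);
    }

An RDMA READ is the same thing with IBV_WR_RDMA_READ, so the hardware surface
I need to understand is essentially memory registration, queue pair setup, and
the doorbell/work-queue format behind ibv_post_send().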

I'd be looking for the InfiniBand equivalent of the X540 datasheet:
https://www-ssl.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-x540-datasheet.pdf

-B


Thread overview: 2 messages
2015-06-22 22:04 InfiniBand Adapter with Documentation  Brandon Falk
2015-06-23  5:32 ` Anuj Kalia
