* Infiniband Adapter with Documentation
@ 2015-06-22 22:04 Brandon Falk
From: Brandon Falk @ 2015-06-22 22:04 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
Heya,
I have a compute cluster that runs a completely custom OS (not binary
or source compatible with Linux by any means), and I'm really
interested in InfiniBand support. Are there any adapters out there
with development guides covering system-level details (PCI BAR layout,
MMIO space, etc.)? I'd ideally implement for the Mellanox ConnectX-4,
but I'm willing to go where the documentation is.
I just want to write a limited driver capable of RDMA reads and
writes; I'm not planning to support much beyond that. How feasible is
that? I've written multiple 1GbE drivers and a 10GbE driver
(specifically for the Intel X540), which was an 8-hour project thanks
to good documentation. Is documentation of this sort available for
InfiniBand?
I'd be looking for the InfiniBand equivalent of this:
https://www-ssl.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-x540-datasheet.pdf
-B
* Re: Infiniband Adapter with Documentation
From: Anuj Kalia @ 2015-06-23 5:32 UTC (permalink / raw)
To: Brandon Falk; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
I have looked hard for publicly available documentation for
RDMA-capable NICs, but without success.
AFAIK, Mellanox's datasheets (Programmer Reference Manuals, or PRMs)
are available under NDA to "customers with a design-in portfolio":
https://community.mellanox.com/thread/1052. Can someone with more
experience shed some light on this? For example, if I buy a Mellanox
card, am I eligible for its PRM?
--Anuj