* "isert: isert_setup_id: rdma_bind_addr() failed: -19" spam, followed by Recursive Fault on reboot
From: Stevie Trujillo @ 2017-01-28 23:01 UTC (permalink / raw)
To: target-devel, linux-rdma, linux-kernel, Sagi Grimberg,
Doug Ledford, Sean Hefty, Hal Rosenstock
Hello
I'm trying (failing) to get iSER working. After rebooting with some settings
saved in targetcli, I got an endless stream of messages like this:
[ 192.701299] isert: isert_setup_id: rdma_bind_addr() failed: -19
[ 192.702733] isert: isert_setup_id: rdma_bind_addr() failed: -19
[ 192.704021] isert: isert_setup_id: rdma_bind_addr() failed: -19
[ 192.705458] isert: isert_setup_id: rdma_bind_addr() failed: -19
[ 192.706979] isert: isert_setup_id: rdma_bind_addr() failed: -19
I tried deleting everything from targetcli, but the flood would not stop. The
ib_isert module did not unload. When rebooting I got a "Recursive Fault"
with a stacktrace inside configfs.
I hope this is enough information to fix this bug. I assumed the stacktrace
would be saved to the log so I didn't write it down, and I haven't been able to
retrace all the wrong stuff I did trying to make iSER work.
Linux Version: Linux 4.8.15-2~bpo8+2 (Debian 8 Backports)
--
Stevie Trujillo
* Re: "isert: isert_setup_id: rdma_bind_addr() failed: -19" spam, followed by Recursive Fault on reboot
From: Sagi Grimberg @ 2017-01-29 7:39 UTC (permalink / raw)
To: Stevie Trujillo, target-devel, linux-rdma, linux-kernel,
Doug Ledford, Sean Hefty, Hal Rosenstock
> Hello
Hey Stevie,
> I'm trying (failing) to get iSER working. After rebooting with some settings
> saved in targetcli, I got an endless stream of messages like this:
>
> [ 192.701299] isert: isert_setup_id: rdma_bind_addr() failed: -19
> [ 192.702733] isert: isert_setup_id: rdma_bind_addr() failed: -19
> [ 192.704021] isert: isert_setup_id: rdma_bind_addr() failed: -19
> [ 192.705458] isert: isert_setup_id: rdma_bind_addr() failed: -19
> [ 192.706979] isert: isert_setup_id: rdma_bind_addr() failed: -19
You get -ENODEV errors because you don't have an RDMA device.
This is probably because the mlx5_ib (or mlx4_ib, depending on your
device) module is not loaded.
Can you try loading the mlx[4|5]_ib module before you enable iser on
a network portal?
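For reference, error code 19 is ENODEV. A quick way to confirm the mapping from Python's errno tables (the interpretation of rdma_bind_addr()'s return value as a negated errno is standard kernel convention):

```python
import errno
import os

# -19 returned by rdma_bind_addr() is the kernel's -ENODEV,
# i.e. "No such device": no usable RDMA device was registered.
assert errno.errorcode[19] == "ENODEV"
print(os.strerror(errno.ENODEV))
```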
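A minimal sketch of that ordering (the mlx5_ib module name, portal address, and file path are assumptions; substitute whatever matches your HCA and setup):

```shell
# Load the IB driver for the HCA first (mlx4_ib or mlx5_ib,
# depending on the device); hypothetical example for mlx5:
modprobe mlx5_ib

# Verify an RDMA device is now visible before touching targetcli:
ls /sys/class/infiniband/

# Then enable iSER on the portal from within targetcli, e.g.:
#   /iscsi/<iqn>/tpg1/portals/ create 192.168.1.10 3260
#   /iscsi/<iqn>/tpg1/portals/192.168.1.10:3260 enable_iser true

# To make the ordering survive reboots, list the module in
# /etc/modules-load.d/ so it is loaded before the saved target
# configuration is restored at boot:
echo mlx5_ib > /etc/modules-load.d/rdma.conf
```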
I do see that mlx5 and mlx4 are requesting the mlx[4|5]_ib module at
probe time, so I wonder how that didn't happen on your system..
I also didn't expect an endless loop of this error; can you share your
targetcli json?
> I tried deleting everything from targetcli, but the flood would not stop. The
> ib_isert module did not unload. When rebooting I got a "Recursive Fault"
> with a stacktrace inside configfs.
>
> I hope this is enough information to fix this bug. I assumed the stacktrace
> would be saved to the log so I didn't write it down, and I haven't been able to
> retrace all the wrong stuff I did trying to make iSER work.
>
> Linux Version: Linux 4.8.15-2~bpo8+2 (Debian 8 Backports)
Would it be possible to try with an upstream kernel and report what you
are seeing?