* srp-ha backport
From: Bart Van Assche @ 2012-09-15  9:23 UTC
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hello,

In case anyone would like to start using the srp-ha patch series before
it gets upstream, a backported version of that patch series is available
here: http://github.com/bvanassche/ib_srp-backport. The advantages of that
version of ib_srp over what's upstream are:
- Better robustness against cable pulling.
- Allows closing an SRP connection from the initiator side (via the new
  "delete" attribute in sysfs).
- Configurable dev_loss_tmo and fast_io_fail_tmo parameters.
- Builds against any kernel in the range 2.6.32..3.6.
- Can be used on RHEL 6.x systems.

In combination with srp_daemon and multipath-tools this should make it
possible to build a reliable H.A. SRP solution.

Note: I haven't been able to test that code against every existing
mainline or distro kernel. Feedback is welcome though.

Bart.
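For illustration only: in more recent ib_srp versions the per-target timeout
and "delete" attributes are exposed through the SRP transport class in sysfs.
The exact paths and values below are assumptions and may differ between the
backport and mainline releases.

  # set the timeouts for every SRP remote port (values are examples only)
  for p in /sys/class/srp_remote_ports/port-*; do
          echo 10 > $p/fast_io_fail_tmo   # fail blocked I/O after 10 s
          echo 60 > $p/dev_loss_tmo       # give up on the rport after 60 s
  done

  # close an SRP connection from the initiator side
  echo 1 > /sys/class/srp_remote_ports/port-1:1/delete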
* Re: srp-ha backport
From: Vasiliy Tolstov @ 2012-11-20  4:04 UTC
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Bart Van Assche <bvanassche@...> writes:
> In case anyone would like to start using the srp-ha patch series before
> it gets upstream, a backported version of that patch series is available
> here: http://github.com/bvanassche/ib_srp-backport. [...]
> In combination with srp_daemon and multipath-tools this should make it
> possible to build a reliable H.A. SRP solution.

Thanks for this backport! I have a problem under SLES 11 SP2 (kernel
3.0.42-0.7-xen): when I shut down the SRP target (reboot one SAS server),
multipath -ll does not respond. Configuring identical dev_loss_tmo and
fast_io_fail_tmo values in multipath and srp changes nothing. multipath -ll
unblocks only when the server comes back up.

dev_loss_tmo = 15
fast_io_fail_tmo = 10

multipath.conf:

defaults {
        getuid_callout          "/bin/cat /sys/block/%n/device/model"
        path_grouping_policy    failover
        failback                immediate
        no_path_retry           fail
        path_checker            tur
        rr_weight               uniform
        rr_min_io               100
        polling_interval        10
        checker_timeout         10
        fast_io_fail_tmo        60
        dev_loss_tmo            120
}

blacklist {
        devnode cciss
        devnode fd
        devnode hd
        devnode md
        devnode sr
        devnode scd
        devnode st
        devnode ram
        devnode raw
        devnode loop
}
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-20 13:25 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 11/20/12 05:04, Vasiliy Tolstov wrote:
> Thanks for this backport! I have a problem under SLES 11 SP2 (kernel
> 3.0.42-0.7-xen): when I shut down the SRP target (reboot one SAS server),
> multipath -ll does not respond. Configuring identical dev_loss_tmo and
> fast_io_fail_tmo values in multipath and srp changes nothing. multipath -ll
> unblocks only when the server comes back up.

That's strange. After the fast_io_fail_tmo timer has fired, multipath -ll
should unblock independently of the state of the SRP target.

Bart.
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-21 14:31 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hello Vasiliy,

You should already get reasonable behavior with the default settings of all
these timeout parameters. How long had you been waiting for multipath -ll
before giving up?

Bart.

On 11/21/12 15:26, Vasiliy Tolstov wrote:
> Hmm, OK. Which timeouts do I need to set to fail devices correctly?
> For example, there is a timeout on each sd* device in
> /sys/block/sd*/device/timeout, fast_io_fail_tmo and dev_loss_tmo in
> multipath, and dev_loss_tmo and fast_io_fail_tmo in ib_srp...
> Does the timeout on the sd* devices need to be smaller than the srp and
> multipath timeouts? Do the multipath timeouts have to be equal to the srp
> timeouts?
>
> 2012/11/20 Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>:
>> On 11/20/12 05:04, Vasiliy Tolstov wrote:
>>> Thanks for this backport! I have a problem under SLES 11 SP2 [...]
>>
>> That's strange. After the fast_io_fail_tmo timer has fired, multipath -ll
>> should unblock independently of the state of the SRP target.
>>
>> Bart.
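To see which of these timeout knobs are actually in effect on a given
initiator, something like the following can be used; the srp_remote_ports
path is an assumption that depends on the ib_srp/transport version in use:

  # per-command SCSI timeout (seconds) for one path device
  cat /sys/block/sdb/device/timeout
  # SRP rport timeouts, printed with their file names
  grep . /sys/class/srp_remote_ports/port-*/fast_io_fail_tmo \
         /sys/class/srp_remote_ports/port-*/dev_loss_tmo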
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-21 18:35 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 11/21/12 18:41, Vasiliy Tolstov wrote:
> Another test has very bad results: a multipath -ll that was running on the
> initiator when the storage went down never returns (deadlock), while
> subsequent multipath -ll invocations return immediately with output (I
> rebooted sas01).

It could be helpful to have a look at the call stacks generated by
"echo w > /proc/sysrq-trigger". If this output reveals that device removal
triggers hanging I/O then that might indicate that one or more SCSI device
removal patches have not yet been backported to SLES 11 SP2. Have you
already tried whether the same test succeeds with kernel 3.6.7? Several
SCSI device removal fixes have been integrated in kernel 3.6.

Bart.
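A minimal way to capture those stacks for posting (the dmesg tail length is
arbitrary):

  echo w > /proc/sysrq-trigger   # dump the stacks of all blocked (D state) tasks
  dmesg | tail -n 200            # the traces end up in the kernel log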
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-22  7:36 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hello Vasily,

Replacing the kernel on the initiator system is sufficient. You don't have
to replace the kernel on the target system for this test. Regarding which
InfiniBand stack to use at the initiator side in combination with kernel
3.6: unless there is something I do not yet know about, it is fine to use
the InfiniBand stack included in that kernel and it is not necessary to
install OFED for this test.

Bart.

On 11/21/12 20:52, Vasiliy Tolstov wrote:
> Do I need that on the storage host side or on the initiator side? My
> storage runs Debian Squeeze with OFED from OpenFabrics.
>
> On 21.11.2012 22:35, Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org> wrote:
>> On 11/21/12 18:41, Vasiliy Tolstov wrote:
>>> Another test has very bad results: a multipath -ll that was running on
>>> the initiator when the storage went down never returns (deadlock), [...]
>>
>> It could be helpful to have a look at the call stacks generated by
>> "echo w > /proc/sysrq-trigger". [...]
* Re: srp-ha backport
From: Vasiliy Tolstov @ 2012-12-05 18:10 UTC
To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

I have tried the stable 3.0.53 kernel from kernel.org but still have lock
issues. Using your ib_srp-backport from github does not solve the problem
either. Below is the dmesg output (without the backported ib_srp) from the
moment multipath -ll locks up. Is it possible to determine where the
problem is?

[  729.936560] SysRq : Show Blocked State
[  729.936669]   task                        PC stack   pid father
[  729.936704] multipathd      D 0000000000000083     0  8152      1 0x00000000
[  729.936709]  ffff880115b97a68 0000000000000282 0000000100000000 ffff880115b979e8
[  729.936713]  ffff880115b96010 ffff880115b97a30 ffff880115b94400 ffff880115b94400
[  729.936718]  ffff880115b94400 ffff880115b97fd8 ffff880115b97fd8 ffff880115b94400
[  729.936722] Call Trace:
[  729.936760]  [<ffffffffa017b2ad>] scsi_block_when_processing_errors+0xcd/0xf0 [scsi_mod]
[  729.936776]  [<ffffffffa04b7cf8>] sd_open+0xb8/0x1f0 [sd_mod]
[  729.936797]  [<ffffffff80160bb8>] __blkdev_get+0x388/0x460
[  729.936803]  [<ffffffff8016113a>] blkdev_get+0x5a/0x1f0
[  729.936808]  [<ffffffff80161303>] blkdev_get_by_dev+0x33/0x70
[  729.936819]  [<ffffffffa00ffe43>] open_dev+0x33/0xb0 [dm_mod]
[  729.936835]  [<ffffffffa01000f1>] __table_get_device+0x231/0x2c0 [dm_mod]
[  729.936849]  [<ffffffffa03719d7>] parse_path+0xe7/0x380 [dm_multipath]
[  729.936859]  [<ffffffffa0371de1>] parse_priority_group+0x171/0x220 [dm_multipath]
[  729.936868]  [<ffffffffa03720c2>] multipath_ctr+0x232/0x32c [dm_multipath]
[  729.936879]  [<ffffffffa0100a63>] dm_table_add_target+0x193/0x260 [dm_mod]
[  729.936895]  [<ffffffffa0102cb9>] table_load+0xc9/0x2c0 [dm_mod]
[  729.936914]  [<ffffffffa0103f8d>] ctl_ioctl+0x1ed/0x270 [dm_mod]
[  729.936934]  [<ffffffffa010401e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
[  729.936950]  [<ffffffff8013b613>] do_vfs_ioctl+0x93/0x3f0
[  729.936955]  [<ffffffff8013ba11>] sys_ioctl+0xa1/0xb0
[  729.936962]  [<ffffffff80402233>] system_call_fastpath+0x16/0x1b
[  729.936970]  [<00007f73a8679fa7>] 0x7f73a8679fa6
[  729.936981] scsi_eh_2       D 0000000000000000     0 30416      2 0x00000000
[  729.936985]  ffff880120ff3b70 0000000000000246 0000000000000026 ffff880120ff3af0
[  729.936989]  ffff880120ff2010 ffff880120ff3b38 ffff88011f616480 ffff88011f616480
[  729.936993]  ffff88011f616480 ffff880120ff3fd8 ffff880120ff3fd8 ffff88011f616480
[  729.936997] Call Trace:
[  729.937004]  [<ffffffff803f804d>] schedule_timeout+0x21d/0x2c0
[  729.937010]  [<ffffffff803f6f35>] wait_for_common+0xe5/0x210
[  729.937018]  [<ffffffffa037b75c>] srp_disconnect_target+0x18c/0x210 [ib_srp]
[  729.937029]  [<ffffffffa037cbe0>] srp_reconnect_target+0x110/0x3a0 [ib_srp]
[  729.937042]  [<ffffffffa037cea9>] srp_reset_host+0x39/0x50 [ib_srp]
[  729.937059]  [<ffffffffa017898d>] scsi_try_host_reset+0x4d/0x120 [scsi_mod]
[  729.937077]  [<ffffffffa017a604>] scsi_eh_host_reset+0x44/0x170 [scsi_mod]
[  729.937096]  [<ffffffffa017ab81>] scsi_eh_ready_devs+0x91/0x130 [scsi_mod]
[  729.937115]  [<ffffffffa017aefd>] scsi_unjam_host+0xfd/0x200 [scsi_mod]
[  729.937134]  [<ffffffffa017b188>] scsi_error_handler+0x188/0x1e0 [scsi_mod]
[  729.937147]  [<ffffffff80067ae6>] kthread+0x96/0xa0
[  729.937153]  [<ffffffff80402bd4>] kernel_thread_helper+0x4/0x10
[  729.937156] scsi_eh_3       D 0000000000000000     0 30421      2 0x00000000
[  729.937161]  ffff88010d0d3b70 0000000000000246 0000000000000000 ffff88010d0d3af0
[  729.937165]  ffff88010d0d2010 ffff88010d0d3b38 ffff88011166a180 ffff88011166a180
[  729.937169]  ffff88011166a180 ffff88010d0d3fd8 ffff88010d0d3fd8 ffff88011166a180
[  729.937173] Call Trace:
[  729.937178]  [<ffffffff803f804d>] schedule_timeout+0x21d/0x2c0
[  729.937183]  [<ffffffff803f6f35>] wait_for_common+0xe5/0x210
[  729.937190]  [<ffffffffa037b75c>] srp_disconnect_target+0x18c/0x210 [ib_srp]
[  729.937201]  [<ffffffffa037cbe0>] srp_reconnect_target+0x110/0x3a0 [ib_srp]
[  729.937213]  [<ffffffffa037cea9>] srp_reset_host+0x39/0x50 [ib_srp]
[  729.937230]  [<ffffffffa017898d>] scsi_try_host_reset+0x4d/0x120 [scsi_mod]
[  729.937248]  [<ffffffffa017a604>] scsi_eh_host_reset+0x44/0x170 [scsi_mod]
[  729.937267]  [<ffffffffa017ab81>] scsi_eh_ready_devs+0x91/0x130 [scsi_mod]
[  729.937286]  [<ffffffffa017aefd>] scsi_unjam_host+0xfd/0x200 [scsi_mod]
[  729.937306]  [<ffffffffa017b188>] scsi_error_handler+0x188/0x1e0 [scsi_mod]
[  729.937317]  [<ffffffff80067ae6>] kthread+0x96/0xa0
[  729.937322]  [<ffffffff80402bd4>] kernel_thread_helper+0x4/0x10
[  729.937333] multipath       D 0000000000000001     0  5230   5056 0x00000004
[  729.937337]  ffff880128467c18 0000000000000246 0000000000000001 ffff880128467b98
[  729.937341]  ffff880128466010 ffff880128467be0 ffff8800a297a5c0 ffff8800a297a5c0
[  729.937345]  ffff8800a297a5c0 ffff880128467fd8 ffff880128467fd8 ffff8800a297a5c0
[  729.937349] Call Trace:
[  729.937354]  [<ffffffff803f7c1c>] io_schedule+0x9c/0xf0
[  729.937361]  [<ffffffff8016ff7f>] wait_for_all_aios+0xff/0x180
[  729.937366]  [<ffffffff80170396>] exit_aio+0x46/0xb0
[  729.937372]  [<ffffffff80042edd>] mmput+0x1d/0x100
[  729.937378]  [<ffffffff8004768c>] exit_mm+0x12c/0x170
[  729.937384]  [<ffffffff80048aca>] do_exit+0x18a/0x440
[  729.937389]  [<ffffffff8004911f>] do_group_exit+0x3f/0xe0
[  729.937395]  [<ffffffff8005b4f3>] get_signal_to_deliver+0x2a3/0x530
[  729.937403]  [<ffffffff80006d81>] do_signal+0x71/0x1b0
[  729.937408]  [<ffffffff80006f48>] do_notify_resume+0x88/0xa0
[  729.937413]  [<ffffffff804024c3>] int_signal+0x12/0x17
[  729.937421]  [<00007fb5b9e5e6a4>] 0x7fb5b9e5e6a3
[  729.937426] Sched Debug Version: v0.10, 3.0.53 #1

2012/11/22 Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>:
> Hello Vasily,
>
> Replacing the kernel on the initiator system is sufficient. You don't have
> to replace the kernel on the target system for this test. Regarding which
> InfiniBand stack to use at the initiator side in combination with kernel
> 3.6: unless there is something I do not yet know about, it is fine to use
> the InfiniBand stack included in that kernel and it is not necessary to
> install OFED for this test.
>
> Bart.

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
jabber: vase-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-23  8:06 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 11/23/12 07:53, Vasiliy Tolstov wrote:
> Is it possible to backport the needed patches to SLES 11 SP2? (I can't
> switch kernels now because I'm using Xen on the initiator node and would
> need to recompile many packages for a new kernel.)

In every Linux distribution I know of, the SCSI core is not a kernel module
but is built into the kernel itself. For SLES that means that only Novell
can backport SCSI core patches to SLES.

Bart.
* Re: srp-ha backport
From: Bart Van Assche @ 2012-11-23 14:20 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 11/23/12 09:07, Vasiliy Tolstov wrote:
> Is it possible to determine which patches need to be backported to fix
> my problem?

Having a look at the output of the command below will help a lot:

git log drivers/scsi/{hosts,scsi,scsi_lib,scsi_sysfs}.c

Bart.
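For illustration, assuming a mainline git tree is available, that output can
be narrowed to the SCSI core changes between the 3.0 base of SLES 11 SP2 and
kernel 3.6 (the tag names are an assumption):

  git log --oneline v3.0..v3.6 -- drivers/scsi/hosts.c drivers/scsi/scsi.c \
          drivers/scsi/scsi_lib.c drivers/scsi/scsi_sysfs.c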
* Re: srp-ha backport
From: Vasiliy Tolstov @ 2012-12-06  9:52 UTC
To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Hello again. I have now switched from the SLES kernel to 3.6.7. Everything
works fine, but now your patches from github produce an error:

/sbin/service openibd restart
Unloading ib_srp                                   [FAILED]
Removing 'ib_srp': Device or resource busy

xen11:~ # rmmod ib_srp
ERROR: Removing 'ib_srp': Device or resource busy
xen11:~ # lsmod | grep srp
ib_srp                 47710  0 [permanent]
ib_cm                  46778  2 rdma_cm,ib_srp
ib_sa                  33627  4 rdma_ucm,rdma_cm,ib_srp,ib_cm
ib_core                82311  9 rdma_ucm,rdma_cm,iw_cm,ib_srp,ib_cm,ib_sa,ib_uverbs,ib_umad,ib_mad

How can I solve this?

2012/11/23 Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>:
> On 11/23/12 07:53, Vasiliy Tolstov wrote:
>> Is it possible to backport the needed patches to SLES 11 SP2? [...]
>
> In every Linux distribution I know of, the SCSI core is not a kernel
> module but is built into the kernel itself. For SLES that means that only
> Novell can backport SCSI core patches to SLES.
>
> Bart.

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
jabber: vase-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
* Re: srp-ha backport
From: Bart Van Assche @ 2012-12-06 10:53 UTC
To: Vasiliy Tolstov; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 12/06/12 10:52, Vasiliy Tolstov wrote:
> I have now switched from the SLES kernel to 3.6.7. Everything works fine,
> but now your patches from github produce an error:
>
> /sbin/service openibd restart
> Unloading ib_srp                                   [FAILED]
> Removing 'ib_srp': Device or resource busy
>
> xen11:~ # rmmod ib_srp
> ERROR: Removing 'ib_srp': Device or resource busy
> xen11:~ # lsmod | grep srp
> ib_srp                 47710  0 [permanent]
> ib_cm                  46778  2 rdma_cm,ib_srp
> ib_sa                  33627  4 rdma_ucm,rdma_cm,ib_srp,ib_cm
> ib_core                82311  9 rdma_ucm,rdma_cm,iw_cm,ib_srp,ib_cm,ib_sa,ib_uverbs,ib_umad,ib_mad
>
> How can I solve this?

I'm not sure. According to the information I found on
http://stackoverflow.com/questions/7482469/why-is-this-kernel-module-marked-at-permanent-on-2-6-39,
it's a toolchain or a kernel configuration issue.

Bart.
* Re: srp-ha backport
From: Vasiliy Tolstov @ 2012-12-06 12:00 UTC
To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

2012/12/6 Bart Van Assche <bvanassche-HInyCGIudOg@public.gmane.org>:
> I'm not sure. According to the information I found on
> http://stackoverflow.com/questions/7482469/why-is-this-kernel-module-marked-at-permanent-on-2-6-39,
> it's a toolchain or a kernel configuration issue.

Bingo. The kernel was built with gcc 4.7 and the module with gcc 4.3. I
rebuilt the module with 4.7 and now it can be unloaded. Thanks!
(Doing more tests now...)

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
jabber: vase-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
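A quick way to spot this kind of compiler mismatch before rebuilding an
out-of-tree module:

  cat /proc/version   # shows the gcc version the running kernel was built with
  gcc --version       # compiler that will be used to build the module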
* Re: srp-ha backport
From: Bruce McKenzie @ 2013-06-08  2:31 UTC
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA

Bart Van Assche <bvanassche@...> writes:
> That's strange. After the fast_io_fail_tmo timer has fired, multipath -ll
> should unblock independently of the state of the SRP target.

Hi Bart,

any advice on using this fix with MD RAID 1? Is there a guide or site you
know of?

I've compiled Ubuntu 13.04 with kernel 3.6.11 and OFED 2 from Mellanox, and
it works OK; performance is a little better with SRP. Some packages don't
seem to work, though: in srptools and IB-diags some commands fail, which
suggests those tools haven't been tested against, or updated for, 3.6.11.

I've tried DRBD with Pacemaker, STONITH etc. (which also works on 3.6.11),
but only with iSCSI over IPoIB, i.e. a virtual NIC with a mounted LV, using
SCST to present file I/O and Pacemaker to fail the VIP over to node 2. OFED
2 doesn't seem to support SDP, so replication has to go over IPoIB, which is
slow even over a dedicated IPoIB interface: DRBD replication runs at about
200 MB/s.

Any help or direction would be gratefully received.

Cheers
Bruce McKenzie
* Re: Combining distro IB tools and OFED
From: Bart Van Assche @ 2013-06-08 16:52 UTC
To: Bruce McKenzie; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 06/08/13 04:31, Bruce McKenzie wrote:
> I've compiled Ubuntu 13.04 with kernel 3.6.11 and OFED 2 from Mellanox, and
> it works OK; performance is a little better with SRP. Some packages don't
> seem to work, though: in srptools and IB-diags some commands fail, which
> suggests those tools haven't been tested against, or updated for, 3.6.11.

(changed subject into something I think is more appropriate)

Hello Bruce,

Regarding combining InfiniBand tools from a Linux distribution with OFED:
the ABI between user space and kernel differs between the upstream kernel
and several OFED versions. So it's important to install the IB kernel
drivers and user space from the same source: either stick to the IB user
space and kernel components provided by the Linux distribution you are
using, or use OFED and make sure to enable all user space components you
need while configuring and building OFED. I think the OFED build process
takes care of uninstalling redundant distro components such that only one
version of the user space components remains on your system.

Bart.
* Re: How to do replication right with SRP or remote storage?
From: Sebastian Riemer @ 2013-06-10 12:05 UTC
To: Bruce McKenzie; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 08.06.2013 04:31, Bruce McKenzie wrote:
> Hi Bart,
>
> any advice on using this fix with MD RAID 1? Is there a guide or site you
> know of?
> [...]
> I've tried DRBD with Pacemaker, STONITH etc. (which also works on 3.6.11),
> but only with iSCSI over IPoIB. [...] OFED 2 doesn't seem to support SDP,
> so replication has to go over IPoIB, which is slow even over a dedicated
> IPoIB interface: DRBD replication runs at about 200 MB/s.
>
> Any help or direction would be gratefully received.

(changed subject into something I think is more appropriate)

Hi Bruce,

thanks for contacting me privately in parallel. I can answer the replication
questions; in order to share the experience with others I reply here again.

Please evaluate the ib_srp fixes from Bart and from me as well, and send us
your feedback! We are still negotiating how to do fast IO failing and the
automatic reconnect right, together with the Mellanox SRP guys Sagi
Grimberg, Vu Pham, Oren Duer and others. You need these patches in order to
fail IO up to the upper layers within the time you want, so that
dm-multipath can fail over the path first while ib_srp continuously tries to
reconnect the failed path. If the other path also fails, then very likely
the storage server is down, so you fail the IO further up to MD RAID-1 so
that it can fail that replica.

For replication, the last slide of my talk at LinuxTag this year could be
interesting for you:
http://www.slideshare.net/SebastianRiemer/infini-band-rdmaforstoragesrpvsiser-21791250

That slide caused a lot of discussion afterwards. The point is that
replication of remote storage is best done on the initiator (a single kernel
manages all replicas, parallel network paths, symmetric latency, ...).

The bad news is that replication of virtual/remote storage with MD RAID-1 is
a use case which basically works but has some issues which Neil Brown
doesn't want to have fixed in mainline. So you need a kernel developer for
some cool features like e.g. safe VM live migration. Perhaps I should
collect all the people who require MD RAID-1 for remote storage replication
in order to put some pressure on Neil. At least some aspects of this use
case are easy to merge with mainline behavior, like letting MD assembly
scale properly (mdadm searches the whole of /dev without any need). I was
surprised that he will make the data offset settable again, so that you can
set it to 4 MiB (1 LV extent). We already have that through custom patches
on top of mdadm 3.2.6.

DRBD with iSCSI is already bad: 200 MB/s with IB sounds familiar. I got
250 MB/s in a primary/secondary setup with DRBD during evaluation. That's
store-and-forward writes to the secondary, which is slow: chained network
paths! With Ethernet that hurts even more; people report 70 MB/s with that.
I've taught them how to use blktrace and it became obvious that they were
trapped in latency.

I can also recommend Vasiliy Tolstov <v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org>.
He also uses SRP with MD RAID-1, and he could convince Neil to fix the MD
data offset. Open source is all about the right allies, ...

Cheers,
Sebastian
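A minimal sketch of the initiator-side mirroring described above (device
names are assumptions; the write-intent bitmap avoids a full resync after a
short outage of one replica):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
        /dev/mapper/mpatha /dev/mapper/mpathb

Recent mdadm versions additionally accept a settable data offset (e.g. 4 MiB,
one LV extent), as discussed elsewhere in this thread.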
* Re: How to do replication right with SRP or remote storage?
From: Bart Van Assche @ 2013-06-10 12:44 UTC
To: Sebastian Riemer; +Cc: Bruce McKenzie, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 06/10/13 14:05, Sebastian Riemer wrote:
> Perhaps I should collect all the people who require MD RAID-1 for remote
> storage replication in order to put some pressure on Neil.

If I remember correctly, one of the things Neil is trying to explain to md
users is that when md is used without a write-intent bitmap there is a risk
of triggering a so-called write hole after a power failure?

Bart.
* Re: How to do replication right with SRP or remote storage?
From: Sebastian Riemer @ 2013-06-10 13:27 UTC
To: Bart Van Assche; +Cc: Bruce McKenzie, linux-rdma-u79uwXL29TY76Z2rM5mHXA

On 10.06.2013 14:44, Bart Van Assche wrote:
> On 06/10/13 14:05, Sebastian Riemer wrote:
>> Perhaps I should collect all the people who require MD RAID-1 for remote
>> storage replication in order to put some pressure on Neil.
>
> If I remember correctly, one of the things Neil is trying to explain to md
> users is that when md is used without a write-intent bitmap there is a
> risk of triggering a so-called write hole after a power failure?

I'm not sure. I haven't seen something like this on the mailing list. Do you
have a reference from the archives? I think this is handled by superblock
writes in the correct order by now.

To my knowledge the main reason for the write-intent bitmap remains that
without it you need a full resync if a component device is down for a short
moment in time: the device becomes faulty. If you know that there can't be a
hardware issue (e.g. virtual storage), you can remove the faulty device and
re-add it to the array; a device that was faulty then assembles again. There
is an error counter in the /sys/block/mdX/md/ sysfs directory and a maximum
read error count (usually 20) after which a faulty device doesn't assemble
again:

/sys/block/mdX/md/dev-Y/errors
/sys/block/mdX/md/max_read_errors

Cheers,
Sebastian
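A sketch of the remove/re-add step described above (array and device names
are assumptions):

  mdadm /dev/md0 --remove /dev/mapper/mpathb   # drop the failed replica
  mdadm /dev/md0 --re-add /dev/mapper/mpathb   # put it back; with a bitmap only changed blocks resync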
* Re: How to do replication right with SRP or remote storage?
From: Vasiliy Tolstov @ 2013-06-11  9:48 UTC
To: Sebastian Riemer; +Cc: Bruce McKenzie, linux-rdma-u79uwXL29TY76Z2rM5mHXA

2013/6/10 Sebastian Riemer <sebastian.riemer-EIkl63zCoXaH+58JC4qpiA@public.gmane.org>:
> I can also recommend Vasiliy Tolstov <v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org>.
> He also uses SRP with MD RAID-1, and he could convince Neil to fix the MD
> data offset. Open source is all about the right allies, ...

Thanks for the recommendation... but --data-offset has already been
addressed. Some time ago Neil added this option for operations on an array,
but when I tried it, it did not work. After my messages on LKML and on the
mdadm developers' list, Neil said that it could be fixed. As I see at
http://git.neil.brown.name/?p=mdadm.git;a=shortlog this was fixed in May.
Maybe we need to test this.

--
Vasiliy Tolstov,
e-mail: v.tolstov-+9FY0jupvH6HXe+LvDLADg@public.gmane.org
jabber: vase-+9FY0jupvH6HXe+LvDLADg@public.gmane.org