public inbox for linux-rdma@vger.kernel.org
 help / color / mirror / Atom feed
* sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found] ` <1213561283.21604993.1457793870012.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-03-12 14:56   ` Laurence Oberman
       [not found]     ` <1195068688.21605141.1457794577569.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Laurence Oberman @ 2016-03-12 14:56 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA; +Cc: James Hartsock

Hello

I am seeing an issue with 100Gbit EDR InfiniBand (mlx5_ib and ConnectX-4) connecting to high-speed arrays when we tune the ib_srp parameters to their maximum allowed values.

The tuning is being done to maximize performance using:

options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048 

We get into a situation where srp_map_data() fails in srp_queuecommand():

[  353.811594] scsi host4: ib_srp: Failed to map data (-5)
[  353.811619] scsi host4: Could not fit S/G list into SRP_CMD

On the array

[ 6097.205716] ib_srpt IB send queue full (needed 68)
[ 6097.233325] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12

This is an issue with the latest upstream, RHEL7.2 and Mellanox code bases.

What is the impact of using allow_ext_sg=1 prefer_fr=1 to avoid the sg_map failures?

If we cap the tuning at ib_srp cmd_sg_entries=128 indirect_sg_entries=512 we avoid this, but it constrains the maximum performance that can be achieved.


static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd)
{

        len = srp_map_data(scmnd, ch, req);  --------------------------------------------------------- See (2) below
        if (len < 0) {
                shost_printk(KERN_ERR, target->scsi_host,
                             PFX "Failed to map data (%d)\n", len);
                /*
                 * If we ran out of memory descriptors (-ENOMEM) because an
                 * application is queuing many requests with more than
                 * max_pages_per_mr sg-list elements, tell the SCSI mid-layer
                 * to reduce queue depth temporarily.
                 */
                scmnd->result = len == -ENOMEM ?
                        DID_OK << 16 | QUEUE_FULL << 1 : DID_ERROR << 16;
                goto err_iu;
        }


[  353.811594] scsi host4: ib_srp: Failed to map data (-5)
[  353.811619] scsi host4: Could not fit S/G list into SRP_CMD
[  353.811620] scsi host4: ib_srp: Failed to map data (-5)
[  353.811637] scsi host4: Could not fit S/G list into SRP_CMD
[  353.811639] scsi host4: ib_srp: Failed to map data (-5)
[  353.811646] scsi host4: Could not fit S/G list into SRP_CMD
[  353.811647] scsi host4: ib_srp: Failed to map data (-5)
[  353.811652] scsi host4: Could not fit S/G list into SRP_CMD

My array logs the queue full.

On the array

[ 6097.205716] ib_srpt IB send queue full (needed 68)
[ 6097.233325] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12
[ 6097.266589] ib_srpt IB send queue full (needed 69)
[ 6097.266988] ib_srpt IB send queue full (needed 67)
[ 6097.266990] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12
[ 6097.269996] ib_srpt IB send queue full (needed 64)
[ 6097.269997] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12


(2)  ************************************************************************************* (2)
static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
                        struct srp_request *req)
{
 
..
..

        if (unlikely(target->cmd_sg_cnt < state.ndesc &&
                                                !target->allow_ext_sg)) {
                shost_printk(KERN_ERR, target->scsi_host,
                             "Could not fit S/G list into SRP_CMD\n");
                return -EIO;
        }
..
..


Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]     ` <1195068688.21605141.1457794577569.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-03-12 22:06       ` Sagi Grimberg
       [not found]         ` <56E492F0.1070609-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
  2016-03-13  0:38       ` Bart Van Assche
  1 sibling, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2016-03-12 22:06 UTC (permalink / raw)
  To: Laurence Oberman, linux-rdma-u79uwXL29TY76Z2rM5mHXA; +Cc: James Hartsock


> Hello
>
> I am seeing an issue with 100Gbit EDR Infiniband (mlx5_ib and ConnectX-4) and connecting to high speed arrays when we tune the ib_srp parameters to maximum allowed values.
>
> The tuning is being done to maximize performance using:
>
> options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048
>
> We get into a situation where in srp_queuecommand we fail the srp_map_data().
>
> [  353.811594] scsi host4: ib_srp: Failed to map data (-5)
> [  353.811619] scsi host4: Could not fit S/G list into SRP_CMD

I'd say that's an unusual limit to hit. What is your workload?
With CX4 (FR by default) you'd need a *very* unaligned SG layout
or a huge transfer size (huge).

> On the array
>
> [ 6097.205716] ib_srpt IB send queue full (needed 68)
> [ 6097.233325] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12

Is this upstream srpt? And if all the srp commands contain ~255
(or even ~50) descriptors then I'm not at all surprised you get queue
overrun. Each command includes num_sg_entries worth of rdma posts...

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]         ` <56E492F0.1070609-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
@ 2016-03-12 22:21           ` Laurence Oberman
       [not found]             ` <1578713476.21612303.1457821295989.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Laurence Oberman @ 2016-03-12 22:21 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

Hello Sagi
Thanks, hope all is well with you.

I understand the reason for the queue full, and I agree this may simply be over-subscription of the tuning here.

This issue exists upstream, in MOFED and in RHEL 7.2 SRP drivers.
We are using a 4MB transfer size as this is what the customer wants.

What I found in testing today is that if I use:

options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048 allow_ext_sg=1 prefer_fr=1, this avoids the sg_map failure (it is clear in the code why),
but then I overrun the array and lock up the LIO target.

If the customer's array can keep up, is adding allow_ext_sg=1 prefer_fr=1 safe to do?

As already mentioned, we believe this may simply be over-commitment: the parameters allow these values, but maxing them out causes these issues.

Array issue here
-------------------
Mar 12 15:48:53 localhost kernel: ib_srpt received unsupported SRP_CMD request type (128 out + 0 in != 2288 / 16)
Mar 12 15:48:53 localhost kernel: ib_srpt 0x3e: parsing SRP descriptor table failed.
Mar 12 15:48:55 localhost kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
Mar 12 15:48:55 localhost kernel: IP: [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
Mar 12 15:48:55 localhost kernel: PGD 0
Mar 12 15:48:55 localhost kernel: Oops: 0002 [#1] SMP
Mar 12 15:48:55 localhost kernel: Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock iscsi_target_mod ib_srp scsi_transport_srp ib_srpt target_core_mod mlx5_ib ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_filter ebtable_nat ebtable_broute bridge stp llc ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr coretemp kvm_intel iTCO_wdt iTCO_vendor_support gpio_ich joydev ipmi_ssif kvm pcc_cpufreq acpi_power_meter i7core_e
 dac nfsd hpilo hpwdt acpi_cpufreq
Mar 12 15:48:55 localhost kernel: ipmi_si edac_core shpchp wmi pcspkr irqbypass ipmi_msghandler tpm_tis lpc_ich tpm auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c amdkfd amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper ttm drm mlx5_core crc32c_intel serio_raw netxen_nic hpsa nvme ata_generic pata_acpi scsi_transport_sas fjes
Mar 12 15:48:55 localhost kernel: CPU: 40 PID: 2495 Comm: ib_srpt_compl Not tainted 4.4.5 #1
Mar 12 15:48:55 localhost kernel: Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
Mar 12 15:48:55 localhost kernel: task: ffff8813e5750000 ti: ffff8813c1560000 task.ti: ffff8813c1560000
Mar 12 15:48:55 localhost kernel: RIP: 0010:[<ffffffff81524a2d>]  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
Mar 12 15:48:55 localhost kernel: RSP: 0018:ffff8813c1563d30  EFLAGS: 00010246
Mar 12 15:48:55 localhost kernel: RAX: 0000000000000000 RBX: ffff8813d6418468 RCX: 0000000000000024
Mar 12 15:48:55 localhost kernel: RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000000
Mar 12 15:48:55 localhost kernel: RBP: ffff8813c1563d30 R08: 0000000000000000 R09: 00000000000005b4
Mar 12 15:48:55 localhost kernel: R10: ffff8813c26ee030 R11: 00000000000005b4 R12: 0000000000000000
Mar 12 15:48:55 localhost kernel: R13: 0000000000000008 R14: ffffffffa06c6640 R15: ffff8813dae1e000
Mar 12 15:48:55 localhost kernel: FS:  0000000000000000(0000) GS:ffff8827efc00000(0000) knlGS:0000000000000000
Mar 12 15:48:55 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
Mar 12 15:48:55 localhost kernel: Stack:
Mar 12 15:48:55 localhost kernel: ffff8813c1563d78 ffffffffa06b65bb ffff8813c1563d78 002400000000003e
Mar 12 15:48:55 localhost kernel: 0000000018e17efa ffff8813c2bab040 ffff8813d6418400 ffff8813c26ee000
Mar 12 15:48:55 localhost kernel: ffff8813d6418468 ffff8813c1563e00 ffffffffa0705d89 ffff881300000020
Mar 12 15:48:55 localhost kernel: Call Trace:
Mar 12 15:48:55 localhost kernel: [<ffffffffa06b65bb>] transport_send_check_condition_and_sense+0x18b/0x250 [target_core_mod]
Mar 12 15:48:55 localhost kernel: [<ffffffffa0705d89>] srpt_handle_new_iu+0x2c9/0x700 [ib_srpt]
Mar 12 15:48:55 localhost kernel: [<ffffffffa0706848>] srpt_process_completion+0xc8/0x4b0 [ib_srpt]
Mar 12 15:48:55 localhost kernel: [<ffffffffa0706cfb>] srpt_compl_thread+0xcb/0x140 [ib_srpt]
Mar 12 15:48:55 localhost kernel: [<ffffffff810e4c20>] ? wake_atomic_t_function+0x70/0x70
Mar 12 15:48:55 localhost kernel: [<ffffffffa0706c30>] ? srpt_process_completion+0x4b0/0x4b0 [ib_srpt]
Mar 12 15:48:55 localhost kernel: [<ffffffff810c1628>] kthread+0xd8/0xf0
Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
Mar 12 15:48:55 localhost kernel: [<ffffffff8179aa8f>] ret_from_fork+0x3f/0x70
Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
Mar 12 15:48:55 localhost kernel: Code: 89 c8 5d c3 0f b6 01 5d 39 d0 b8 00 00 00 00 48 0f 44 c1 c3 31 c0 eb ea 66 0f 1f 44 00 00 66 66 66 66 90 55 85 ff 48 89 e5 75 13 <c6> 06 70 88 56 02 c6 46 07 0a 88 4e 0c 44 88 46 0d 5d c3 c6 06
Mar 12 15:48:55 localhost kernel: RIP  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
Mar 12 15:48:55 localhost kernel: RSP <ffff8813c1563d30>
Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000
Mar 12 15:48:55 localhost kernel: ---[ end trace 9b27fcc1c864f7f3 ]---
Mar 12 15:50:02 localhost kernel: ib_srpt Received DREQ and sent DREP for session 0x4f6e72000390fe7c7cfe900300726ed2.
Mar 12 15:50:02 localhost kernel: ib_srpt Received SRP_LOGIN_REQ with i_port_id 0x4f6e72000390fe7c:0x7cfe900300726ed2, t_port_id 0x7cfe900300726e4e:0x7cfe900300726e4e and it_iu_len 2116 on port 1 (guid=0xfe80000000000000:0x7cfe900300726e4f)
Mar 12 15:50:02 localhost kernel: ib_srpt Session : kernel thread ib_srpt_compl (PID 2529) started
Mar 12 15:50:03 localhost kernel: ib_srpt received unsupported SRP_CMD request type (128 out + 0 in != 2576 / 16)
Mar 12 15:50:03 localhost kernel: ib_srpt 0x35: parsing SRP descriptor table failed.
Mar 12 15:50:05 localhost kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
Mar 12 15:50:05 localhost kernel: IP: [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
Mar 12 15:50:05 localhost kernel: PGD 0
Mar 12 15:50:05 localhost kernel: Oops: 0002 [#2] SMP
Mar 12 15:50:05 localhost kernel: Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock iscsi_target_mod ib_srp scsi_transport_srp ib_srpt target_core_mod mlx5_ib ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_filter ebtable_nat ebtable_broute bridge stp llc ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr coretemp kvm_intel iTCO_wdt iTCO_vendor_support gpio_ich joydev ipmi_ssif kvm pcc_cpufreq acpi_power_meter i7core_e
 dac nfsd hpilo hpwdt acpi_cpufreq
Mar 12 15:50:05 localhost kernel: ipmi_si edac_core shpchp wmi pcspkr irqbypass ipmi_msghandler tpm_tis lpc_ich tpm auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c amdkfd amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper ttm drm mlx5_core crc32c_intel serio_raw netxen_nic hpsa nvme ata_generic pata_acpi scsi_transport_sas fjes
Mar 12 15:50:05 localhost kernel: CPU: 15 PID: 2529 Comm: ib_srpt_compl Tainted: G      D         4.4.5 #1
Mar 12 15:50:05 localhost kernel: Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
Mar 12 15:50:05 localhost kernel: task: ffff8813e8ab1bc0 ti: ffff8813c0c40000 task.ti: ffff8813c0c40000
Mar 12 15:50:05 localhost kernel: RIP: 0010:[<ffffffff81524a2d>]  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
Mar 12 15:50:05 localhost kernel: RSP: 0018:ffff8813c0c43d30  EFLAGS: 00010246
Mar 12 15:50:05 localhost kernel: RAX: 0000000000000000 RBX: ffff880e55f20468 RCX: 0000000000000024
Mar 12 15:50:05 localhost kernel: RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000000
Mar 12 15:50:05 localhost kernel: RBP: ffff8813c0c43d30 R08: 0000000000000000 R09: 00000000000005e0
Mar 12 15:50:05 localhost kernel: R10: ffff8813c2032030 R11: 00000000000005e0 R12: 0000000000000000
Mar 12 15:50:05 localhost kernel: R13: 0000000000000008 R14: ffffffffa06c6640 R15: ffff8813dae18800
Mar 12 15:50:05 localhost kernel: FS:  0000000000000000(0000) GS:ffff8827efbc0000(0000) knlGS:0000000000000000
Mar 12 15:50:05 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Mar 12 15:50:05 localhost kernel: CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
Mar 12 15:50:05 localhost kernel: Stack:

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]             ` <1578713476.21612303.1457821295989.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-03-12 22:24               ` Laurence Oberman
  2016-03-13  0:34               ` Bart Van Assche
  1 sibling, 0 replies; 10+ messages in thread
From: Laurence Oberman @ 2016-03-12 22:24 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

I meant to add:

By keeping these settings at sane values, like:

options ib_srp cmd_sg_entries=128 indirect_sg_entries=512

we are rock-solid stable. I ran for 4 days like that, doing reads and writes to 35 NVMe-backed targets.

Thanks

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]             ` <1578713476.21612303.1457821295989.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2016-03-12 22:24               ` Laurence Oberman
@ 2016-03-13  0:34               ` Bart Van Assche
       [not found]                 ` <56E4B59D.4070701-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Bart Van Assche @ 2016-03-13  0:34 UTC (permalink / raw)
  To: Laurence Oberman, Sagi Grimberg
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

On 03/12/16 14:21, Laurence Oberman wrote:
> Array issue here
> -------------------
> Mar 12 15:48:53 localhost kernel: ib_srpt received unsupported SRP_CMD request type (128 out + 0 in != 2288 / 16)
> Mar 12 15:48:53 localhost kernel: ib_srpt 0x3e: parsing SRP descriptor table failed.
> Mar 12 15:48:55 localhost kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
> Mar 12 15:48:55 localhost kernel: IP: [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: PGD 0
> Mar 12 15:48:55 localhost kernel: Oops: 0002 [#1] SMP
> Mar 12 15:48:55 localhost kernel: Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock iscsi_target_mod ib_srp scsi_transport_srp ib_srpt target_core_mod mlx5_ib ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_filter ebtable_nat ebtable_broute bridge stp llc ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr coretemp kvm_intel iTCO_wdt iTCO_vendor_support gpio_ich joydev ipmi_ssif kvm pcc_cpufreq acpi_power_meter i7core
 _edac nfsd hpilo hpwdt acpi_cpufreq
> Mar 12 15:48:55 localhost kernel: ipmi_si edac_core shpchp wmi pcspkr irqbypass ipmi_msghandler tpm_tis lpc_ich tpm auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c amdkfd amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper ttm drm mlx5_core crc32c_intel serio_raw netxen_nic hpsa nvme ata_generic pata_acpi scsi_transport_sas fjes
> Mar 12 15:48:55 localhost kernel: CPU: 40 PID: 2495 Comm: ib_srpt_compl Not tainted 4.4.5 #1
> Mar 12 15:48:55 localhost kernel: Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
> Mar 12 15:48:55 localhost kernel: task: ffff8813e5750000 ti: ffff8813c1560000 task.ti: ffff8813c1560000
> Mar 12 15:48:55 localhost kernel: RIP: 0010:[<ffffffff81524a2d>]  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: RSP: 0018:ffff8813c1563d30  EFLAGS: 00010246
> Mar 12 15:48:55 localhost kernel: RAX: 0000000000000000 RBX: ffff8813d6418468 RCX: 0000000000000024
> Mar 12 15:48:55 localhost kernel: RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000000
> Mar 12 15:48:55 localhost kernel: RBP: ffff8813c1563d30 R08: 0000000000000000 R09: 00000000000005b4
> Mar 12 15:48:55 localhost kernel: R10: ffff8813c26ee030 R11: 00000000000005b4 R12: 0000000000000000
> Mar 12 15:48:55 localhost kernel: R13: 0000000000000008 R14: ffffffffa06c6640 R15: ffff8813dae1e000
> Mar 12 15:48:55 localhost kernel: FS:  0000000000000000(0000) GS:ffff8827efc00000(0000) knlGS:0000000000000000
> Mar 12 15:48:55 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
> Mar 12 15:48:55 localhost kernel: Stack:
> Mar 12 15:48:55 localhost kernel: ffff8813c1563d78 ffffffffa06b65bb ffff8813c1563d78 002400000000003e
> Mar 12 15:48:55 localhost kernel: 0000000018e17efa ffff8813c2bab040 ffff8813d6418400 ffff8813c26ee000
> Mar 12 15:48:55 localhost kernel: ffff8813d6418468 ffff8813c1563e00 ffffffffa0705d89 ffff881300000020
> Mar 12 15:48:55 localhost kernel: Call Trace:
> Mar 12 15:48:55 localhost kernel: [<ffffffffa06b65bb>] transport_send_check_condition_and_sense+0x18b/0x250 [target_core_mod]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0705d89>] srpt_handle_new_iu+0x2c9/0x700 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706848>] srpt_process_completion+0xc8/0x4b0 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706cfb>] srpt_compl_thread+0xcb/0x140 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffff810e4c20>] ? wake_atomic_t_function+0x70/0x70
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706c30>] ? srpt_process_completion+0x4b0/0x4b0 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1628>] kthread+0xd8/0xf0
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
> Mar 12 15:48:55 localhost kernel: [<ffffffff8179aa8f>] ret_from_fork+0x3f/0x70
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
> Mar 12 15:48:55 localhost kernel: Code: 89 c8 5d c3 0f b6 01 5d 39 d0 b8 00 00 00 00 48 0f 44 c1 c3 31 c0 eb ea 66 0f 1f 44 00 00 66 66 66 66 90 55 85 ff 48 89 e5 75 13 <c6> 06 70 88 56 02 c6 46 07 0a 88 4e 0c 44 88 46 0d 5d c3 c6 06
> Mar 12 15:48:55 localhost kernel: RIP  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: RSP <ffff8813c1563d30>
> Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000

Hello Laurence,

This is one of the issues for which a fix is present in the patch series 
"IB/srpt patches for Linux kernel v4.6" 
(http://thread.gmane.org/gmane.linux.drivers.rdma/33715), a patch series 
that is expected to be sent to Linus during the v4.6 merge window.

Bart.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]     ` <1195068688.21605141.1457794577569.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2016-03-12 22:06       ` Sagi Grimberg
@ 2016-03-13  0:38       ` Bart Van Assche
       [not found]         ` <56E4B677.6020809-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Bart Van Assche @ 2016-03-13  0:38 UTC (permalink / raw)
  To: Laurence Oberman, linux-rdma-u79uwXL29TY76Z2rM5mHXA; +Cc: James Hartsock

On 03/12/16 06:56, Laurence Oberman wrote:
> [ 6097.205716] ib_srpt IB send queue full (needed 68)
> [ 6097.233325] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12

Hello Laurence,

Are you familiar with the ib_srpt parameter srp_sq_size, a parameter 
that is configurable through configfs? Please note that changing this 
parameter only affects new sessions; it does not affect any existing 
sessions.
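
As a minimal sketch of what that looks like in practice (the configfs base
and the srpt/*/tpgt_*/attrib layout are assumptions based on a typical LIO
setup; the GUID path components will differ per target):

```shell
# Sketch only: print the current srp_sq_size for each srpt target portal
# group found under the given configfs base (defaults to the usual mount).
show_srp_sq_size() {
    base=${1:-/sys/kernel/config/target}
    for f in "$base"/srpt/*/tpgt_*/attrib/srp_sq_size; do
        [ -e "$f" ] || continue
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done
}
```

Running `show_srp_sq_size` on a configured array should list one value per
target portal group; a value written back to those files only takes effect
for sessions established afterwards.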

Bart.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]                 ` <56E4B59D.4070701-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-03-13  0:39                   ` Laurence Oberman
  0 siblings, 0 replies; 10+ messages in thread
From: Laurence Oberman @ 2016-03-13  0:39 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Sagi Grimberg, linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

Hi Bart
Thanks!!
I will pull that patch and apply and test on my array.
Much appreciated.

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services

----- Original Message -----
From: "Bart Van Assche" <bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, "Sagi Grimberg" <sagig-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, "James Hartsock" <hartsjc-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sent: Saturday, March 12, 2016 7:34:37 PM
Subject: Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries

On 03/12/16 14:21, Laurence Oberman wrote:
> Array issue here
> -------------------
> Mar 12 15:48:53 localhost kernel: ib_srpt received unsupported SRP_CMD request type (128 out + 0 in != 2288 / 16)
> Mar 12 15:48:53 localhost kernel: ib_srpt 0x3e: parsing SRP descriptor table failed.
> Mar 12 15:48:55 localhost kernel: BUG: unable to handle kernel NULL pointer dereference at           (null)
> Mar 12 15:48:55 localhost kernel: IP: [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: PGD 0
> Mar 12 15:48:55 localhost kernel: Oops: 0002 [#1] SMP
> Mar 12 15:48:55 localhost kernel: Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock iscsi_target_mod ib_srp scsi_transport_srp ib_srpt target_core_mod mlx5_ib ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_filter ebtable_nat ebtable_broute bridge stp llc ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr coretemp kvm_intel iTCO_wdt iTCO_vendor_support gpio_ich joydev ipmi_ssif kvm pcc_cpufreq acpi_power_meter i7core_edac nfsd hpilo hpwdt acpi_cpufreq
> Mar 12 15:48:55 localhost kernel: ipmi_si edac_core shpchp wmi pcspkr irqbypass ipmi_msghandler tpm_tis lpc_ich tpm auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c amdkfd amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper ttm drm mlx5_core crc32c_intel serio_raw netxen_nic hpsa nvme ata_generic pata_acpi scsi_transport_sas fjes
> Mar 12 15:48:55 localhost kernel: CPU: 40 PID: 2495 Comm: ib_srpt_compl Not tainted 4.4.5 #1
> Mar 12 15:48:55 localhost kernel: Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
> Mar 12 15:48:55 localhost kernel: task: ffff8813e5750000 ti: ffff8813c1560000 task.ti: ffff8813c1560000
> Mar 12 15:48:55 localhost kernel: RIP: 0010:[<ffffffff81524a2d>]  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: RSP: 0018:ffff8813c1563d30  EFLAGS: 00010246
> Mar 12 15:48:55 localhost kernel: RAX: 0000000000000000 RBX: ffff8813d6418468 RCX: 0000000000000024
> Mar 12 15:48:55 localhost kernel: RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000000
> Mar 12 15:48:55 localhost kernel: RBP: ffff8813c1563d30 R08: 0000000000000000 R09: 00000000000005b4
> Mar 12 15:48:55 localhost kernel: R10: ffff8813c26ee030 R11: 00000000000005b4 R12: 0000000000000000
> Mar 12 15:48:55 localhost kernel: R13: 0000000000000008 R14: ffffffffa06c6640 R15: ffff8813dae1e000
> Mar 12 15:48:55 localhost kernel: FS:  0000000000000000(0000) GS:ffff8827efc00000(0000) knlGS:0000000000000000
> Mar 12 15:48:55 localhost kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000 CR3: 0000000001c09000 CR4: 00000000000006e0
> Mar 12 15:48:55 localhost kernel: Stack:
> Mar 12 15:48:55 localhost kernel: ffff8813c1563d78 ffffffffa06b65bb ffff8813c1563d78 002400000000003e
> Mar 12 15:48:55 localhost kernel: 0000000018e17efa ffff8813c2bab040 ffff8813d6418400 ffff8813c26ee000
> Mar 12 15:48:55 localhost kernel: ffff8813d6418468 ffff8813c1563e00 ffffffffa0705d89 ffff881300000020
> Mar 12 15:48:55 localhost kernel: Call Trace:
> Mar 12 15:48:55 localhost kernel: [<ffffffffa06b65bb>] transport_send_check_condition_and_sense+0x18b/0x250 [target_core_mod]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0705d89>] srpt_handle_new_iu+0x2c9/0x700 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706848>] srpt_process_completion+0xc8/0x4b0 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706cfb>] srpt_compl_thread+0xcb/0x140 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffff810e4c20>] ? wake_atomic_t_function+0x70/0x70
> Mar 12 15:48:55 localhost kernel: [<ffffffffa0706c30>] ? srpt_process_completion+0x4b0/0x4b0 [ib_srpt]
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1628>] kthread+0xd8/0xf0
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
> Mar 12 15:48:55 localhost kernel: [<ffffffff8179aa8f>] ret_from_fork+0x3f/0x70
> Mar 12 15:48:55 localhost kernel: [<ffffffff810c1550>] ? kthread_worker_fn+0x160/0x160
> Mar 12 15:48:55 localhost kernel: Code: 89 c8 5d c3 0f b6 01 5d 39 d0 b8 00 00 00 00 48 0f 44 c1 c3 31 c0 eb ea 66 0f 1f 44 00 00 66 66 66 66 90 55 85 ff 48 89 e5 75 13 <c6> 06 70 88 56 02 c6 46 07 0a 88 4e 0c 44 88 46 0d 5d c3 c6 06
> Mar 12 15:48:55 localhost kernel: RIP  [<ffffffff81524a2d>] scsi_build_sense_buffer+0xd/0x40
> Mar 12 15:48:55 localhost kernel: RSP <ffff8813c1563d30>
> Mar 12 15:48:55 localhost kernel: CR2: 0000000000000000

Hello Laurence,

This is one of the issues for which a fix is present in the patch series 
"IB/srpt patches for Linux kernel v4.6" 
(http://thread.gmane.org/gmane.linux.drivers.rdma/33715), a patch series 
that is expected to be sent to Linus during the v4.6 merge window.

Bart.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]         ` <56E4B677.6020809-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-03-13  0:58           ` Laurence Oberman
       [not found]             ` <2043305499.21615736.1457830705865.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Laurence Oberman @ 2016-03-13  0:58 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

Hi Bart

Linux localhost.localdomain 4.4.5 #1 SMP Sat Mar 12 14:37:30 EST 2016 x86_64 x86_64 x86_64 GNU/Linux

Within srpt on the array I have options ib_srpt srp_max_req_size=4148
On the client I also only have options ib_srpt srp_max_req_size=4148

I have not tuned srp_sq_size as I was only aware of

parm:           srp_max_req_size:Maximum size of SRP request messages in bytes. (int)
parm:           srpt_srq_size:Shared receive queue (SRQ) size. (int)
parm:           srpt_service_guid:Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.

Please explain what that does. Thanks.

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services

----- Original Message -----
From: "Bart Van Assche" <bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Cc: "James Hartsock" <hartsjc-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sent: Saturday, March 12, 2016 7:38:15 PM
Subject: Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries

On 03/12/16 06:56, Laurence Oberman wrote:
> [ 6097.205716] ib_srpt IB send queue full (needed 68)
> [ 6097.233325] ib_srpt srpt_xfer_data[2731] queue full -- ret=-12

Hello Laurence,

Are you familiar with the ib_srpt parameter srp_sq_size, a parameter 
that is configurable through configfs? Please note that changing this 
parameter only affects new sessions; it does not affect any existing 
sessions.

Bart.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]             ` <2043305499.21615736.1457830705865.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2016-03-13  1:29               ` Bart Van Assche
       [not found]                 ` <56E4C25E.7050000-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Bart Van Assche @ 2016-03-13  1:29 UTC (permalink / raw)
  To: Laurence Oberman; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock

On 03/12/16 16:58, Laurence Oberman wrote:
> Within srpt on the array I have options ib_srpt srp_max_req_size=4148
> On the client I also only have options ib_srpt srp_max_req_size=4148
>
> I have not tuned srp_sq_size as I was only aware of
>
> parm:           srp_max_req_size:Maximum size of SRP request messages in bytes. (int)
> parm:           srpt_srq_size:Shared receive queue (SRQ) size. (int)
> parm:           srpt_service_guid:Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.
>
> Please explain what that does.

Hello Laurence,

The srp_sq_size parameter controls the send queue size per RDMA channel. 
The default value of this parameter is 4096. I think this is the 
parameter that has to be increased to avoid hitting "IB send queue full" 
errors.
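
A sketch of how the increase might be applied across all configured targets
(the configfs paths are assumptions based on a typical LIO layout, and the
16384 value mirrors the figure used later in this thread):

```shell
# Sketch only: write a new srp_sq_size to every srpt target portal group
# under the given configfs base. Because the parameter is read at session
# setup, clients must reconnect before the new queue depth takes effect.
set_srp_sq_size() {
    size=$1
    base=${2:-/sys/kernel/config/target}
    for f in "$base"/srpt/*/tpgt_*/attrib/srp_sq_size; do
        [ -e "$f" ] || continue
        echo "$size" > "$f"
    done
}
```

For example, `set_srp_sq_size 16384` followed by forcing the initiators to
log in again should give each new session the larger send queue.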

Bart.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries
       [not found]                 ` <56E4C25E.7050000-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
@ 2016-03-13 22:15                   ` Laurence Oberman
  0 siblings, 0 replies; 10+ messages in thread
From: Laurence Oberman @ 2016-03-13 22:15 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, James Hartsock, Doug Ledford

Hi Bart, Doug

You can probably add a Tested-by from me for 
http://thread.gmane.org/gmane.linux.drivers.rdma/33715
I will email a response to that original thread.

It has settled and stabilized my array: I now only get the queue-full messages, which I think is going to 
be a client-side overcommitment issue.

Testing logs

Array side
-----------
[root@localhost ~]# cat /etc/modprobe.d/ib_srp.conf 
options ib_srp cmd_sg_entries=255 indirect_sg_entries=2048

[root@localhost ~]# cat /etc/modprobe.d/ib_srpt.conf 
options ib_srpt srp_max_req_size=4148
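
The 4148 figure appears to pair with cmd_sg_entries=255. As a back-of-the-envelope
check (the 52-byte fixed overhead and 16-byte per-descriptor size are inferred
from the numbers in this thread, not verified against the SRP headers):

```shell
# Hedged arithmetic sketch: 52 + (255 + 1) * 16 = 4148.
cmd_sg_entries=255
fixed_bytes=52      # assumed fixed SRP_CMD + indirect-table overhead
desc_bytes=16       # assumed size of one memory descriptor
req_size=$(( fixed_bytes + (cmd_sg_entries + 1) * desc_bytes ))
echo "$req_size"    # prints 4148
```

If those assumed sizes are right, raising cmd_sg_entries further would require
a correspondingly larger srp_max_req_size on both sides.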

Then I tuned these

Default is 4096

[root@localhost sys]# cat ./kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e/tpgt_1/attrib/srp_sq_size
4096

Set it to 16384

[root@localhost sys]# echo 16384 > ./kernel/config/target/srpt/0xfe800000000000007cfe900300726e4e/tpgt_1/attrib/srp_sq_size
[root@localhost sys]# echo 16384 > ./kernel/config/target/srpt/0xfe800000000000007cfe900300726e4f/tpgt_1/attrib/srp_sq_size

Fedora 23 (Server Edition)
Kernel 4.5.0-rc7+ on an x86_64 (ttyS1)

..
Many of these, likely way too many queued requests from the client.
..
..

[ 1814.417508] ib_srpt IB send queue full (needed 131)
[ 1814.442723] ib_srpt srpt_xfer_data[2478] queue full -- ret=-12
[ 1814.474973] ib_srpt IB send queue full (needed 131)
[ 1814.477444] ib_srpt IB send queue full (needed 1)
[ 1814.477446] ib_srpt sending cmd response failed for tag 17
[ 1814.477925] ib_srpt IB send queue full (needed 144)
[ 1814.477926] ib_srpt srpt_xfer_data[2478] queue full -- ret=-12
[ 1814.478237] ib_srpt IB send queue full (needed 160)
[ 1814.478237] ib_srpt srpt_xfer_data[2478] queue full -- ret=-12
[ 1814.478559] ib_srpt IB send queue full (needed 184)
[ 1814.478560] ib_srpt srpt_xfer_data[2478] queue full -- ret=-12
[ 1814.478871] ib_srpt IB send queue full (needed 157)
..
..
.. After the aborts, seeing the TMR messages is expected
..
[ 1818.051125] ABORT_TASK: Sending TMR_TASK_DOES_NOT_EXIST for ref_tag: 111
[ 1823.595409] ABORT_TASK: Found referenced srpt task_tag: 88
[ 1823.623385] ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 88
[ 1824.475646] ABORT_TASK: Found referenced srpt task_tag: 0
[ 1824.505863] ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 0
[ 1824.543904] ABORT_TASK: Found referenced srpt task_tag: 58
[ 1824.573565] ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for ref_tag: 58
[ 1824.634873] ABORT_TASK: Found referenced srpt task_tag: 55

On the client
--------------
localhost login: [  593.363357] scsi host4: SRP abort called
[  599.261519] scsi host4: SRP abort called
[  599.290285] scsi host4: SRP abort called
..
..
[  625.847278] scsi host4: SRP abort called
[  626.246293] scsi host4: SRP abort called
[  722.672833] INFO: task systemd-udevd:3843 blocked for more than 120 seconds.
[  722.710870] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  722.754207] systemd-udevd   D ffff8811df412720     0  3843    802 0x00000080
[  722.794078]  ffff880086c1bb20 0000000000000086 ffff8823bcc6ae00 ffff880086c1bfd8
[  722.836676]  ffff880086c1bfd8 ffff880086c1bfd8 ffff8823bcc6ae00 ffff8811df412718
[  722.879162]  ffff8811df41271c ffff8823bcc6ae00 00000000ffffffff ffff8811df412720
[  722.921464] Call Trace:
[  722.935067]  [<ffffffff8163baa9>] schedule_preempt_disabled+0x29/0x70
[  722.972515]  [<ffffffff816397a5>] __mutex_lock_slowpath+0xc5/0x1c0
[  723.008003]  [<ffffffff81638c0f>] mutex_lock+0x1f/0x2f
[  723.037253]  [<ffffffff8121a3c6>] __blkdev_get+0x76/0x4d0
[  723.068997]  [<ffffffff8121a9f5>] blkdev_get+0x1d5/0x360
[  723.098180]  [<ffffffff8121ac2b>] blkdev_open+0x5b/0x80
[  723.127296]  [<ffffffff811dc0b7>] do_dentry_open+0x1a7/0x2e0
[  723.159133]  [<ffffffff8121abd0>] ? blkdev_get_by_dev+0x50/0x50
[  723.192497]  [<ffffffff811dc2e9>] vfs_open+0x39/0x70
[  723.220155]  [<ffffffff811eb8dd>] do_last+0x1ed/0x1270
[  723.248745]  [<ffffffff811c11be>] ? kmem_cache_alloc_trace+0x1ce/0x1f0
[  723.284548]  [<ffffffff811ee642>] path_openat+0xc2/0x490
[  723.314101]  [<ffffffff811efe0b>] do_filp_open+0x4b/0xb0
[  723.343628]  [<ffffffff811fc9a7>] ? __alloc_fd+0xa7/0x130
[  723.372032]  [<ffffffff811dd7b3>] do_sys_open+0xf3/0x1f0
[  723.402086]  [<ffffffff811dd8ce>] SyS_open+0x1e/0x20
[  723.430490]  [<ffffffff81645a49>] system_call_fastpath+0x16/0x1b
[  760.532038] scsi host4: ib_srp: failed receive status 5 for iu ffff8823bee8d680
[  760.536192] scsi host4: ib_srp: FAST_REG_MR failed status 5
[  770.772150] scsi host4: ib_srp: reconnect succeeded

[  836.572018] scsi host4: SRP abort called
[  842.125673] scsi host4: SRP abort called
[  843.005018] scsi host4: SRP abort called
[  843.070957] scsi host4: SRP abort called
[  843.159205] scsi host4: SRP abort called
[  843.369763] INFO: task systemd-udevd:3846 blocked for more than 120 seconds.
[  843.406044] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  843.450570] systemd-udevd   D ffff8811df4113a0     0  3846    802 0x00000080
[  843.490878]  ffff880b4ce3bb20 0000000000000086 ffff8811c03e5080 ffff880b4ce3bfd8
[  843.533065]  ffff880b4ce3bfd8 ffff880b4ce3bfd8 ffff8811c03e5080 ffff8811df411398
[  843.575303]  ffff8811df41139c ffff8811c03e5080 00000000ffffffff ffff8811df4113a0
[  843.616197] Call Trace:
[  843.629627]  [<ffffffff8163baa9>] schedule_preempt_disabled+0x29/0x70
[  843.663667]  [<ffffffff816397a5>] __mutex_lock_slowpath+0xc5/0x1c0
[  843.696872]  [<ffffffff81638c0f>] mutex_lock+0x1f/0x2f
[  843.725684]  [<ffffffff8121a3c6>] __blkdev_get+0x76/0x4d0
[  843.755051]  [<ffffffff8121a9f5>] blkdev_get+0x1d5/0x360
[  843.784317]  [<ffffffff8121ac2b>] blkdev_open+0x5b/0x80
[  843.813211]  [<ffffffff811dc0b7>] do_dentry_open+0x1a7/0x2e0
[  843.845213]  [<ffffffff8121abd0>] ? blkdev_get_by_dev+0x50/0x50
[  843.878693]  [<ffffffff811dc2e9>] vfs_open+0x39/0x70
[  843.906081]  [<ffffffff811eb8dd>] do_last+0x1ed/0x1270
[  843.935605]  [<ffffffff811c11be>] ? kmem_cache_alloc_trace+0x1ce/0x1f0
[  843.972008]  [<ffffffff811ee642>] path_openat+0xc2/0x490
[  844.000212] scsi host4: SRP abort called
[  844.024556]  [<ffffffff811efe0b>] do_filp_open+0x4b/0xb0
[  844.053528]  [<ffffffff811fc9a7>] ? __alloc_fd+0xa7/0x130
[  844.065679] scsi host4: SRP abort called
[  844.105880]  [<ffffffff811dd7b3>] do_sys_open+0xf3/0x1f0
[  844.135357]  [<ffffffff811dd8ce>] SyS_open+0x1e/0x20
[  844.135403] scsi host4: SRP abort called
[  844.183447]  [<ffffffff81645a49>] system_call_fastpath+0x16/0x1b
[  844.202725] scsi host4: SRP abort called
[  844.999434] scsi host4: SRP abort called
[  845.085156] scsi host4: SRP abort called

Going to retest client with upstream now.

Thanks

Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services

----- Original Message -----
From: "Bart Van Assche" <bart.vanassche-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, "James Hartsock" <hartsjc-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sent: Saturday, March 12, 2016 8:29:02 PM
Subject: Re: sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries

On 03/12/16 16:58, Laurence Oberman wrote:
> Within srpt on the array I have options ib_srpt srp_max_req_size=4148
> On the client I also only have options ib_srpt srp_max_req_size=4148
>
> I have not tuned srp_sq_size as I was only aware of
>
> parm:           srp_max_req_size:Maximum size of SRP request messages in bytes. (int)
> parm:           srpt_srq_size:Shared receive queue (SRQ) size. (int)
> parm:           srpt_service_guid:Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.
>
> Please explain what that does.

Hello Laurence,

The srp_sq_size parameter controls the send queue size per RDMA channel. 
The default value of this parameter is 4096. I think this is the 
parameter that has to be increased to avoid hitting "IB send queue full" 
errors.

Bart.

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2016-03-13 22:15 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1213561283.21604993.1457793870012.JavaMail.zimbra@redhat.com>
     [not found] ` <1213561283.21604993.1457793870012.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-03-12 14:56   ` sg_map failures when tuning SRP via ib_srp module parameters for maximum SG entries Laurence Oberman
     [not found]     ` <1195068688.21605141.1457794577569.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-03-12 22:06       ` Sagi Grimberg
     [not found]         ` <56E492F0.1070609-LDSdmyG8hGV8YrgS2mwiifqBs+8SCbDb@public.gmane.org>
2016-03-12 22:21           ` Laurence Oberman
     [not found]             ` <1578713476.21612303.1457821295989.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-03-12 22:24               ` Laurence Oberman
2016-03-13  0:34               ` Bart Van Assche
     [not found]                 ` <56E4B59D.4070701-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-03-13  0:39                   ` Laurence Oberman
2016-03-13  0:38       ` Bart Van Assche
     [not found]         ` <56E4B677.6020809-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-03-13  0:58           ` Laurence Oberman
     [not found]             ` <2043305499.21615736.1457830705865.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2016-03-13  1:29               ` Bart Van Assche
     [not found]                 ` <56E4C25E.7050000-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
2016-03-13 22:15                   ` Laurence Oberman
