From: Laurence Oberman <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
To: emilne-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org
Cc: "Martin K. Petersen"
<martin.petersen-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>,
linux-scsi <linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Can't write to max_sectors_kb on 4.5.0 SRP target
Date: Fri, 8 Apr 2016 09:11:19 -0400 (EDT)
Message-ID: <1975890115.28041373.1460121079252.JavaMail.zimbra@redhat.com>
In-Reply-To: <1460119192.25335.40.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
Hi Ewan,
OK, that makes sense.
I suspected after everybody's responses that RHEL was somehow ignoring the array-imposed limit here.
I actually got lucky, because I needed to be able to issue 4MB I/Os to reproduce the failures seen
at the customer on the initiator side.
Looking at the target-LIO array now, it's clamped to 1MB I/O sizes, which makes sense.
I really was not focusing on the array at the time, expecting it might chop the I/O up as many arrays do.
Knowing what's up now, I can continue to test and figure out which patches I need to pull into SRP on RHEL to make progress.
Thank you to all that responded.
Laurence Oberman
Principal Software Maintenance Engineer
Red Hat Global Support Services
----- Original Message -----
From: "Ewan D. Milne" <emilne-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: "Martin K. Petersen" <martin.petersen-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>, "linux-scsi" <linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>, linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Sent: Friday, April 8, 2016 8:39:52 AM
Subject: Re: Can't write to max_sectors_kb on 4.5.0 SRP target
The version of RHEL you are using does not have:
commit ca369d51b3e1649be4a72addd6d6a168cfb3f537
Author: Martin K. Petersen <martin.petersen-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
Date: Fri Nov 13 16:46:48 2015 -0500
block/sd: Fix device-imposed transfer length limits
(which will be added during the next update).
In the upstream kernel queue_max_sectors_store() does not
permit you to set a value larger than the device-imposed
limit. This value, stored in q->limits.max_dev_sectors,
is not visible via the block queue sysfs interface.
The code that sets q->limits.max_sectors and q->limits.io_opt
in sd.c does not take the device limit into account, but
the sysfs code to change max_sectors ("max_sectors_kb") does.
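The upstream check can be modeled roughly as follows. This is a simplified Python sketch, not the kernel's actual C code; the field names mirror queue_limits, and the exact lower-bound handling in the kernel differs (it is page-size based):

```python
# Simplified model of the upstream queue_max_sectors_store() behavior:
# a requested max_sectors_kb is rejected with EINVAL unless it fits
# within both the host limit (max_hw_sectors) and the device-imposed
# limit (max_dev_sectors, from the VPD page B0 MAXIMUM TRANSFER LENGTH).

SECTORS_PER_KB = 2  # two 512-byte sectors per KB


def set_max_sectors_kb(limits, value_kb):
    """Return the new max_sectors (in 512-byte sectors), or raise on EINVAL."""
    effective_hw = min(limits["max_hw_sectors"],
                       limits["max_dev_sectors"] or limits["max_hw_sectors"])
    if not 1 <= value_kb <= effective_hw // SECTORS_PER_KB:
        raise ValueError("Invalid argument")  # what echo reports in the shell
    limits["max_sectors"] = value_kb * SECTORS_PER_KB
    return limits["max_sectors"]


# Values matching the SRP LUN in this thread: max_hw_sectors_kb = 4096
# (8192 sectors), device limit 128 KB (256 blocks).
limits = {"max_hw_sectors": 8192, "max_dev_sectors": 256, "max_sectors": 2560}
```

With these numbers, writing 512 or 256 to max_sectors_kb fails while 128 succeeds, which matches the write errors seen on the upstream kernel later in this thread.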
So there are a couple of problems here: one is that RHEL
is not clamping to the device limit, and the other is
that neither RHEL nor upstream kernels take the device limit
into account when setting q->limits.io_opt. This only seems
to be a problem for you because your target is reporting
an optimal I/O size in VPD page B0 that is *smaller* than
the reported maximum I/O size.
The target is clearly reporting inconsistent data; the
question is whether we should change the code to clamp the
optimal I/O size, or whether we should assume the value
the target is reporting is wrong.
So the question is: does the target actually process
requests that are larger than the VPD page B0 reported
maximum size? If so, maybe we should just issue a warning
message rather than reducing the optimal I/O size.
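If clamping were chosen, the fix could be as small as this sketch (a hypothetical helper, not the actual sd.c change):

```python
def clamp_io_opt(io_opt_blocks, max_dev_blocks):
    """One possible handling of a target whose reported OPTIMAL TRANSFER
    LENGTH exceeds its MAXIMUM TRANSFER LENGTH: clamp io_opt to the
    device-imposed limit rather than trusting the reported value."""
    return min(io_opt_blocks, max_dev_blocks)


# The target in this thread reports OTL = 768 blocks but MTL = 256 blocks:
print(clamp_io_opt(768, 256))  # -> 256
```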
-Ewan
On Fri, 2016-04-08 at 04:31 -0400, Laurence Oberman wrote:
> Hello Martin
>
> Yes, Ewan also noticed that.
>
> This started out as me testing the SRP stack on RHEL 7.2 and baselining against upstream.
> We have a customer that requires 4MB I/O.
> I bumped into a number of SRP issues, including sg_map failures, so I started reviewing upstream changes to the SRP code and patches.
>
> The RHEL kernel is ignoring this, so perhaps we have an issue on our side (RHEL kernel) and upstream is behaving as it should.
>
> What is interesting is that I cannot change the max_sectors_kb at all on the upstream kernel for the SRP LUNs.
>
> Here is an HP SmartArray LUN
>
> [root@srptest ~]# sg_inq --p 0xb0 /dev/sda
> VPD INQUIRY: page=0xb0
> inquiry: field in cdb illegal (page not supported) **** Known that it's not supported
>
> However
>
> /sys/block/sda/queue
>
> [root@srptest queue]# cat max_hw_sectors_kb max_sectors_kb
> 4096
> 1280
> [root@srptest queue]# echo 4096 > max_sectors_kb
> [root@srptest queue]# cat max_hw_sectors_kb max_sectors_kb
> 4096
> 4096
>
> On the SRP LUNs I am unable to change max_sectors_kb to a lower value unless I change it to 128.
> So perhaps the size limit on the array is the issue here, as Nicholas said, and the RHEL kernel has a bug and ignores it.
>
> /sys/block/sdc/queue
>
> [root@srptest queue]# cat max_hw_sectors_kb max_sectors_kb
> 4096
> 1280
>
> [root@srptest queue]# echo 512 > max_sectors_kb
> -bash: echo: write error: Invalid argument
>
> [root@srptest queue]# echo 256 > max_sectors_kb
> -bash: echo: write error: Invalid argument
>
> 128 works
> [root@srptest queue]# echo 128 > max_sectors_kb
>
>
>
>
> Laurence Oberman
> Principal Software Maintenance Engineer
> Red Hat Global Support Services
>
> ----- Original Message -----
> From: "Martin K. Petersen" <martin.petersen-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
> To: "Laurence Oberman" <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: "linux-scsi" <linux-scsi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>, linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Sent: Thursday, April 7, 2016 11:00:16 PM
> Subject: Re: Can't write to max_sectors_kb on 4.5.0 SRP target
>
> >>>>> "Laurence" == Laurence Oberman <loberman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> writes:
>
> Laurence,
>
> The target is reporting inconsistent values here:
>
> > [root@srptest queue]# sg_inq --p 0xb0 /dev/sdb
> > VPD INQUIRY: Block limits page (SBC)
> > Maximum compare and write length: 1 blocks
> > Optimal transfer length granularity: 256 blocks
> > Maximum transfer length: 256 blocks
> > Optimal transfer length: 768 blocks
>
> OPTIMAL TRANSFER LENGTH GRANULARITY roughly translates to physical block
> size or RAID chunk size. It's the smallest I/O unit that does not
> require read-modify-write. It would typically be either 1 or 8 blocks
> for a drive and maybe 64, 128 or 256 for a RAID5 array. io_min in
> queue_limits.
>
> OPTIMAL TRANSFER LENGTH indicates the stripe width and is a multiple of
> OPTIMAL TRANSFER LENGTH GRANULARITY. io_opt in queue_limits.
>
> MAXIMUM TRANSFER LENGTH indicates the biggest READ/WRITE command the
> device can handle in a single command. In this case it's 256 blocks, so that's
> 128K. max_dev_sectors in queue_limits.
>
> From SBC:
>
> "A MAXIMUM TRANSFER LENGTH field set to a non-zero value indicates the
> maximum transfer length in logical blocks that the device server accepts
> for a single command shown in table 250. If a device server receives one
> of these commands with a transfer size greater than this value, then the
> device server shall terminate the command with CHECK CONDITION status
> [...]"
>
> So those reported values are off.
>
> logical block size <= physical block size <= OTLG <= OTL <= MTL
>
> Or in terms of queue_limits:
>
> lbs <= pbs <= io_min <= io_opt <=
> min_not_zero(max_dev_sectors, max_hw_sectors, max_sectors)
>
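The ordering Martin gives can be checked directly against the sg_inq values quoted above (a quick sanity sketch; values in 512-byte logical blocks):

```python
def vpd_b0_consistent(otlg, otl, mtl):
    """True if OTLG <= OTL <= MTL, the ordering a well-behaved target's
    VPD page B0 block-limits values must satisfy."""
    return otlg <= otl <= mtl


# Values reported by the target in this thread:
otlg, otl, mtl = 256, 768, 256

print(vpd_b0_consistent(otlg, otl, mtl))  # -> False: OTL (768) exceeds MTL (256)
print(mtl * 512 // 1024)                  # -> 128 (the 1MB clamp comes from elsewhere;
                                          #    MTL itself is only 128 KB)
```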