linux-nfs.vger.kernel.org archive mirror
* question about map_read_chunks()
@ 2012-02-20  9:50 Dan Carpenter
  2013-09-27 12:21 ` Dan Carpenter
  0 siblings, 1 reply; 5+ messages in thread
From: Dan Carpenter @ 2012-02-20  9:50 UTC (permalink / raw)
  To: Tom Tucker; +Cc: J. Bruce Fields, netdev, linux-nfs

I have a couple of questions about map_read_chunks().

net/sunrpc/xprtrdma/svc_rdma_recvfrom.c

   150          ch_bytes = ntohl(ch->rc_target.rs_length);
                ^^^^^^^^
It looks like this is 32 bits from the network?

   151          head->arg.head[0] = rqstp->rq_arg.head[0];
   152          head->arg.tail[0] = rqstp->rq_arg.tail[0];
   153          head->arg.pages = &head->pages[head->count];
   154          head->hdr_count = head->count; /* save count of hdr pages */
   155          head->arg.page_base = 0;
   156          head->arg.page_len = ch_bytes;
   157          head->arg.len = rqstp->rq_arg.len + ch_bytes;
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Can overflow.
   158          head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Same.  I didn't follow it through to see if an overflow matters.  Does
it?

   159          head->count++;
   160          chl_map->ch[0].start = 0;
   161          while (byte_count) {
   162                  rpl_map->sge[sge_no].iov_base =
   163                          page_address(rqstp->rq_arg.pages[page_no]) + page_off;
   164                  sge_bytes = min_t(int, PAGE_SIZE-page_off, ch_bytes);
                                          ^^^
This is the wrong cast to use.  A large ch_bytes would be counted as a
negative value and get around the cap here.

   165                  rpl_map->sge[sge_no].iov_len = sge_bytes;
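
To illustrate the min_t() issue at line 164 (made-up values, not taken
from the driver): once the unsigned 32-bit length is compared as an int,
a large value turns negative and slips past the PAGE_SIZE cap.  Comparing
in an unsigned type keeps the cap intact:

        u32 ch_bytes = 0xffffff00;  /* hostile rs_length off the wire */

        /* (int)ch_bytes == -256, so this "cap" picks -256, not PAGE_SIZE */
        int bad = min_t(int, PAGE_SIZE, ch_bytes);

        /* unsigned comparison: the cap holds and we get PAGE_SIZE */
        size_t ok = min_t(size_t, PAGE_SIZE, ch_bytes);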

regards,
dan carpenter


* question about map_read_chunks()
       [not found] <20120220182003.GL2912@mwanda>
@ 2012-02-20 18:44 ` Tom Tucker
  0 siblings, 0 replies; 5+ messages in thread
From: Tom Tucker @ 2012-02-20 18:44 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: tom, Linux NFS Mailing List

Hi Dan,
> ----- Forwarded message from Dan Carpenter <dan.carpenter@oracle.com> -----
>
> Date: Mon, 20 Feb 2012 12:50:19 +0300
> From: Dan Carpenter <dan.carpenter@oracle.com>
> To: Tom Tucker <tom@ogc.us>
> Cc: "J. Bruce Fields" <bfields@redhat.com>, netdev@vger.kernel.org, linux-nfs@vger.kernel.org
> Subject: question about map_read_chunks()
>
> I have a couple of questions about map_read_chunks().
>
> net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
>
>     150          ch_bytes = ntohl(ch->rc_target.rs_length);
>                  ^^^^^^^^
> It looks like this is 32 bits from the network?

Yes, it is. All of these values should be clamped by the wsize/rsize;
however, I don't think we enforce that. We should add a check to
svc_rdma_xdr_decode_req().

>
>     151          head->arg.head[0] = rqstp->rq_arg.head[0];
>     152          head->arg.tail[0] = rqstp->rq_arg.tail[0];
>     153          head->arg.pages = &head->pages[head->count];
>     154          head->hdr_count = head->count; /* save count of hdr pages */
>     155          head->arg.page_base = 0;
>     156          head->arg.page_len = ch_bytes;
>     157          head->arg.len = rqstp->rq_arg.len + ch_bytes;
>                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Can overflow.
Without the proposed check, it can.

>     158          head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;
>                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Same.  I didn't follow it through to see if an overflow matters.  Does
> it?

I think it could cause bad things to happen, yes.

>
>     159          head->count++;
>     160          chl_map->ch[0].start = 0;
>     161          while (byte_count) {
>     162                  rpl_map->sge[sge_no].iov_base =
>     163                          page_address(rqstp->rq_arg.pages[page_no]) + page_off;
>     164                  sge_bytes = min_t(int, PAGE_SIZE-page_off, ch_bytes);
>                                            ^^^
> This is the wrong cast to use.  A large ch_bytes would be counted as a
> negative value and get around the cap here.

I suppose that, in theory, we could have an r/wsize > 2G, in which case
you are correct.

>     165                  rpl_map->sge[sge_no].iov_len = sge_bytes;
>
> regards,
> dan carpenter
>

I think adding a check to svc_rdma_xdr_decode_req() would avoid any 
possibility of overflow. I'll code up a patch.
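
Roughly along these lines (only a sketch, not the eventual patch; the
sc_max_req_size limit and the exact spot in the decode loop are my
assumptions):

        u32 ch_len = ntohl(ch->rc_target.rs_length);

        /* Reject a read chunk whose advertised length exceeds what the
         * transport will accept, so the later additions and min_t()
         * comparisons never see an absurd value.
         */
        if (ch_len > xprt->sc_max_req_size)
                return -EINVAL;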

Thanks,
Tom




* Re: question about map_read_chunks()
  2012-02-20  9:50 question about map_read_chunks() Dan Carpenter
@ 2013-09-27 12:21 ` Dan Carpenter
  2013-09-27 15:23   ` Tom Tucker
  0 siblings, 1 reply; 5+ messages in thread
From: Dan Carpenter @ 2013-09-27 12:21 UTC (permalink / raw)
  To: Tom Tucker; +Cc: J. Bruce Fields, netdev, linux-nfs

I have looked at this again, and I still worry that it is a bug (it is
remotely triggerable, so potentially security relevant).

regards,
dan carpenter

On Mon, Feb 20, 2012 at 12:50:19PM +0300, Dan Carpenter wrote:
> I have a couple of questions about map_read_chunks().
> 
> net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> 
>    150          ch_bytes = ntohl(ch->rc_target.rs_length);
>                 ^^^^^^^^
> It looks like this is 32 bits from the network?
> 
>    151          head->arg.head[0] = rqstp->rq_arg.head[0];
>    152          head->arg.tail[0] = rqstp->rq_arg.tail[0];
>    153          head->arg.pages = &head->pages[head->count];
>    154          head->hdr_count = head->count; /* save count of hdr pages */
>    155          head->arg.page_base = 0;
>    156          head->arg.page_len = ch_bytes;
>    157          head->arg.len = rqstp->rq_arg.len + ch_bytes;
>                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Can overflow.
>    158          head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;
>                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Same.  I didn't follow it through to see if an overflow matters.  Does
> it?
> 
>    159          head->count++;
>    160          chl_map->ch[0].start = 0;
>    161          while (byte_count) {
>    162                  rpl_map->sge[sge_no].iov_base =
>    163                          page_address(rqstp->rq_arg.pages[page_no]) + page_off;
>    164                  sge_bytes = min_t(int, PAGE_SIZE-page_off, ch_bytes);
>                                           ^^^
> This is the wrong cast to use.  A large ch_bytes would be counted as a
> negative value and get around the cap here.
> 
>    165                  rpl_map->sge[sge_no].iov_len = sge_bytes;
> 
> regards,
> dan carpenter


* Re: question about map_read_chunks()
  2013-09-27 12:21 ` Dan Carpenter
@ 2013-09-27 15:23   ` Tom Tucker
  2013-09-27 20:15     ` J. Bruce Fields
  0 siblings, 1 reply; 5+ messages in thread
From: Tom Tucker @ 2013-09-27 15:23 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: Tom Tucker, J. Bruce Fields, netdev, linux-nfs

Hi Dan,

On 9/27/13 7:21 AM, Dan Carpenter wrote:
> I have looked at this again, and I still worry that it looks like a bug.
> (remote security related blah blah blah).
>
> regards,
> dan carpenter
>
> On Mon, Feb 20, 2012 at 12:50:19PM +0300, Dan Carpenter wrote:
>> I have a couple of questions about map_read_chunks().
>>
>> net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
>>
>>     150          ch_bytes = ntohl(ch->rc_target.rs_length);
>>                  ^^^^^^^^
>> It looks like this is 32 bits from the network?
>>
>>     151          head->arg.head[0] = rqstp->rq_arg.head[0];
>>     152          head->arg.tail[0] = rqstp->rq_arg.tail[0];
>>     153          head->arg.pages = &head->pages[head->count];
>>     154          head->hdr_count = head->count; /* save count of hdr pages */
>>     155          head->arg.page_base = 0;
>>     156          head->arg.page_len = ch_bytes;
>>     157          head->arg.len = rqstp->rq_arg.len + ch_bytes;
>>                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Can overflow.
>>     158          head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;
agreed.
>>                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Same.  I didn't follow it through to see if an overflow matters.  Does
>> it?
>>
>>     159          head->count++;
>>     160          chl_map->ch[0].start = 0;
>>     161          while (byte_count) {
>>     162                  rpl_map->sge[sge_no].iov_base =
>>     163                          page_address(rqstp->rq_arg.pages[page_no]) + page_off;
>>     164                  sge_bytes = min_t(int, PAGE_SIZE-page_off, ch_bytes);
>>                                            ^^^
>> This is the wrong cast to use.  A large ch_bytes would be counted as a
>> negative value and get around the cap here.
True, but if we validate the wire data like we should, that's probably 
not an issue.
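
Even so, a cheap belt-and-braces guard on the additions at lines 157/158
would not hurt.  A sketch only, assuming arg.len and arg.buflen stay
unsigned int as in struct xdr_buf:

        /* refuse the chunk if accumulating it would wrap the 32-bit length */
        if (ch_bytes > UINT_MAX - rqstp->rq_arg.buflen)
                return -EINVAL;
        head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;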

>>     165                  rpl_map->sge[sge_no].iov_len = sge_bytes;
>>
>> regards,
>> dan carpenter



* Re: question about map_read_chunks()
  2013-09-27 15:23   ` Tom Tucker
@ 2013-09-27 20:15     ` J. Bruce Fields
  0 siblings, 0 replies; 5+ messages in thread
From: J. Bruce Fields @ 2013-09-27 20:15 UTC (permalink / raw)
  To: Tom Tucker; +Cc: Dan Carpenter, Tom Tucker, J. Bruce Fields, netdev, linux-nfs

On Fri, Sep 27, 2013 at 10:23:10AM -0500, Tom Tucker wrote:
> Hi Dan,
> 
> On 9/27/13 7:21 AM, Dan Carpenter wrote:
> >I have looked at this again, and I still worry that it looks like a bug.
> >(remote security related blah blah blah).
> >
> >regards,
> >dan carpenter
> >
> >On Mon, Feb 20, 2012 at 12:50:19PM +0300, Dan Carpenter wrote:
> >>I have a couple of questions about map_read_chunks().
> >>
> >>net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
> >>
> >>    150          ch_bytes = ntohl(ch->rc_target.rs_length);
> >>                 ^^^^^^^^
> >>It looks like this is 32 bits from the network?
> >>
> >>    151          head->arg.head[0] = rqstp->rq_arg.head[0];
> >>    152          head->arg.tail[0] = rqstp->rq_arg.tail[0];
> >>    153          head->arg.pages = &head->pages[head->count];
> >>    154          head->hdr_count = head->count; /* save count of hdr pages */
> >>    155          head->arg.page_base = 0;
> >>    156          head->arg.page_len = ch_bytes;
> >>    157          head->arg.len = rqstp->rq_arg.len + ch_bytes;
> >>                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>Can overflow.
> >>    158          head->arg.buflen = rqstp->rq_arg.buflen + ch_bytes;
> agreed.
> >>                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>Same.  I didn't follow it through to see if an overflow matters.  Does
> >>it?
> >>
> >>    159          head->count++;
> >>    160          chl_map->ch[0].start = 0;
> >>    161          while (byte_count) {
> >>    162                  rpl_map->sge[sge_no].iov_base =
> >>    163                          page_address(rqstp->rq_arg.pages[page_no]) + page_off;
> >>    164                  sge_bytes = min_t(int, PAGE_SIZE-page_off, ch_bytes);
> >>                                           ^^^
> >>This is the wrong cast to use.  A large ch_bytes would be counted as a
> >>negative value and get around the cap here.
> True, but if we validate the wire data like we should, that's
> probably not an issue.

Well, my last attempt to do RDMA NFS I/O resulted in an instant server
reboot, so I can take a look at this, but it may be the least of our
problems....

--b.


end of thread, other threads: [~2013-09-27 20:15 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-02-20  9:50 question about map_read_chunks() Dan Carpenter
2013-09-27 12:21 ` Dan Carpenter
2013-09-27 15:23   ` Tom Tucker
2013-09-27 20:15     ` J. Bruce Fields
     [not found] <20120220182003.GL2912@mwanda>
2012-02-20 18:44 ` Tom Tucker
