* attempted oversize allocations in tcp_recvmsg.
@ 2011-12-28 18:44 Dave Jones
2011-12-28 19:06 ` David Miller
0 siblings, 1 reply; 2+ messages in thread
From: Dave Jones @ 2011-12-28 18:44 UTC (permalink / raw)
To: netdev
I got this trace from the page allocator while fuzzing sys_recvfrom
WARNING: at mm/page_alloc.c:2089 __alloc_pages_nodemask+0x39b/0xa50()
Hardware name: X8DTN
Modules linked in: nfnetlink binfmt_misc ip6_queue can_raw can_bcm rfcomm ipt_ULOG cmtp kernelcapi bnep sctp libcrc32c ip_queue dccp_ipv6 dccp_ipv4 >
Pid: 26212, comm: trinity Not tainted 3.1.6-1.fc16.x86_64.debug #1
Call Trace:
[<ffffffff8107940f>] warn_slowpath_common+0x7f/0xc0
[<ffffffff8107946a>] warn_slowpath_null+0x1a/0x20
[<ffffffff811461db>] __alloc_pages_nodemask+0x39b/0xa50
[<ffffffff8117efa3>] alloc_pages_current+0xa3/0x110
[<ffffffff81141604>] __get_free_pages+0x14/0x50
[<ffffffff8118b57f>] kmalloc_order_trace+0x3f/0x170
[<ffffffff8118bc08>] __kmalloc+0x268/0x290
[<ffffffff8139a64d>] dma_pin_iovec_pages+0x9d/0x220
[<ffffffff8157b7e7>] tcp_recvmsg+0x787/0xcb0
[<ffffffff815a34cb>] inet_recvmsg+0x10b/0x180
[<ffffffff81511ead>] sock_recvmsg+0x11d/0x140
[<ffffffff815159e1>] sys_recvfrom+0xf1/0x170
[<ffffffff816698c2>] system_call_fastpath+0x16/0x1b
---[ end trace 9a0c4dd55e1dbe8a ]---
The code in tcp_recvmsg that passes down the enormous size has these checks..
	if (skb)
		available = TCP_SKB_CB(skb)->seq + skb->len - (*seq);
	if ((available < target) &&
	    (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
	    !sysctl_tcp_low_latency &&
	    dma_find_channel(DMA_MEMCPY)) {
		preempt_enable_no_resched();
		tp->ucopy.pinned_list =
			dma_pin_iovec_pages(msg->msg_iov, len);
	} else {
		preempt_enable_no_resched();
	}
I'm guessing there should be a (len < 65535) (or similar constant) in that check?
Or should we be doing this even sooner, in one of the earlier functions?
Also, when that dma_pin_iovec_pages fails, we still proceed through the rest of
tcp_recvmsg. Is that expected? Or should it be doing a goto out; in that case?
Dave
^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: attempted oversize allocations in tcp_recvmsg.
2011-12-28 18:44 attempted oversize allocations in tcp_recvmsg Dave Jones
@ 2011-12-28 19:06 ` David Miller
0 siblings, 0 replies; 2+ messages in thread
From: David Miller @ 2011-12-28 19:06 UTC (permalink / raw)
To: davej; +Cc: netdev
From: Dave Jones <davej@redhat.com>
Date: Wed, 28 Dec 2011 13:44:17 -0500
> I got this trace from the page allocator while fuzzing sys_recvfrom
...
> The code in tcp_recvmsg that passes down the enormous size has these checks..
>
> 	if (skb)
> 		available = TCP_SKB_CB(skb)->seq + skb->len - (*seq);
> 	if ((available < target) &&
> 	    (len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) &&
> 	    !sysctl_tcp_low_latency &&
> 	    dma_find_channel(DMA_MEMCPY)) {
> 		preempt_enable_no_resched();
> 		tp->ucopy.pinned_list =
> 			dma_pin_iovec_pages(msg->msg_iov, len);
> 	} else {
> 		preempt_enable_no_resched();
> 	}
>
> I'm guessing there should be a (len < 65535) (or similar constant) in that check ?
> Or should we be doing this even sooner in one of the earlier functions?
I would say that it is dma_pin_iovec_pages()'s job to validate things, since the
fact that it does this kmalloc() whose size is some function of the given length
is its business.
> Also, when that dma_pin_iovec_pages fails, we still proceed through the rest of
> tcp_recvmsg. Is that expected ? Or should it be doing a goto out; in that case ?
That's fine, we'll just try to process the recvmsg() without using the
DMA memcpy offloading. It's exactly the same as if we took the else
branch here.
Anyways, please report this to the DMA layer maintainer.
Thanks.