* running out of file descriptors
@ 2009-02-16 5:48 Bryan Christ
2009-02-16 6:18 ` Joe Damato
0 siblings, 1 reply; 10+ messages in thread
From: Bryan Christ @ 2009-02-16 5:48 UTC (permalink / raw)
To: linux-c-programming
I am writing a multi-threaded application which services hundreds of
remote connections for data transfer. Several instances of this
program are run simultaneously. The problem is that whenever the
total number of active user connections (cumulative total of all open
sockets tallied from all process instances) reaches about 700 the
system appears to run out of file descriptors. I have tried raising
the open files limit via "ulimit -n" and by using the setrlimit()
facility. Neither of these seems to help. I am currently having to
limit the number of processes running on the system to 2 instances
allowing no more than 256 connections each. In this configuration the
server will run for days without failure until I stop it. If I try to
add a third process or restart one of the process with a higher
connection limit, bad things will start happening at about 700 open
sockets.
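Roughly what I'm doing with setrlimit()--a trimmed-down sketch (the
wrapper name is just illustrative):

    #include <sys/resource.h>

    /* Raise the per-process open-file limit. Sketch only. */
    static int raise_nofile(rlim_t n)
    {
        struct rlimit rl;
        rl.rlim_cur = n;   /* soft limit */
        rl.rlim_max = n;   /* hard limit; raising it requires root */
        return setrlimit(RLIMIT_NOFILE, &rl);   /* 0 on success, -1 on error */
    }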
Thanks in advance to anyone who can help.
--
Bryan
<><
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 5:48 running out of file descriptors Bryan Christ
@ 2009-02-16 6:18 ` Joe Damato
2009-02-16 7:09 ` Bryan Christ
0 siblings, 1 reply; 10+ messages in thread
From: Joe Damato @ 2009-02-16 6:18 UTC (permalink / raw)
To: Bryan Christ; +Cc: linux-c-programming
On Sun, Feb 15, 2009 at 9:48 PM, Bryan Christ <bryan.christ@gmail.com> wrote:
> I am writing a multi-threaded application which services hundreds of
> remote connections for data transfer. Several instances of this
> program are run simultaneously. The problem is that whenever the
> total number of active user connections (cumulative total of all open
> sockets tallied from all process instances) reaches about 700 the
> system appears to run out of file descriptors. I have tried raising
> the open files limit via "ulimit -n" and by using the setrlimit()
> facility. Neither of these seems to help. I am currently having to
> limit the number of processes running on the system to 2 instances
> allowing no more than 256 connections each.
Have you tried editing /etc/security/limits.conf (or equivalent file
on your system) to increase the max number of open files?
perhaps something like:
* - nofile 524288
is what you want?
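Note that limits.conf is applied by pam_limits at login, so test from
a fresh session. A quick sketch to confirm the limit your process
actually got:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
            perror("getrlimit");
            return 1;
        }
        printf("soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
        return 0;
    }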
joe
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 6:18 ` Joe Damato
@ 2009-02-16 7:09 ` Bryan Christ
2009-02-16 13:01 ` Eric Bambach
2009-02-16 13:06 ` Glynn Clements
0 siblings, 2 replies; 10+ messages in thread
From: Bryan Christ @ 2009-02-16 7:09 UTC (permalink / raw)
To: Joe Damato; +Cc: linux-c-programming
On Mon, Feb 16, 2009 at 12:18 AM, Joe Damato <ice799@gmail.com> wrote:
> On Sun, Feb 15, 2009 at 9:48 PM, Bryan Christ <bryan.christ@gmail.com> wrote:
>> I am writing a multi-threaded application which services hundreds of
>> remote connections for data transfer. Several instances of this
>> program are run simultaneously. The problem is that whenever the
>> total number of active user connections (cumulative total of all open
>> sockets tallied from all process instances) reaches about 700 the
It seems that would be the same as setting RLIMIT_NOFILE via
setrlimit() or the same as using the userspace tool "ulimit -n". Am I
wrong? Isn't this the same?
>> system appears to run out of file descriptors. I have tried raising
>> the open files limit via "ulimit -n" and by using the setrlimit()
>> facility. Neither of these seems to help. I am currently having to
>> limit the number of processes running on the system to 2 instances
>> allowing no more than 256 connections each.
>
> Have you tried editing /etc/security/limits.conf (or equivalent file
> on your system) to increase the max number of open files?
>
> perhaps something like:
> * - nofile 524288
>
> is what you want?
>
> joe
>
--
Bryan
<><
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 7:09 ` Bryan Christ
@ 2009-02-16 13:01 ` Eric Bambach
2009-02-16 17:35 ` Bryan Christ
2009-02-16 13:06 ` Glynn Clements
1 sibling, 1 reply; 10+ messages in thread
From: Eric Bambach @ 2009-02-16 13:01 UTC (permalink / raw)
To: Bryan Christ; +Cc: Joe Damato, linux-c-programming
On Monday 16 February 2009 01:09:42 Bryan Christ wrote:
> On Mon, Feb 16, 2009 at 12:18 AM, Joe Damato <ice799@gmail.com> wrote:
> > On Sun, Feb 15, 2009 at 9:48 PM, Bryan Christ <bryan.christ@gmail.com>
wrote:
> >> I am writing a multi-threaded application which services hundreds of
> >> remote connections for data transfer. Several instances of this
> >> program are run simultaneously. The problem is that whenever the
> >> total number of active user connections (cumulative total of all open
> >> sockets tallied from all process instances) reaches about 700 the
>
> It seems that would be the same as setting RLIMIT_NOFILE via
> setrlimit() or the same as using the userspace tool "ulimit -n". Am I
> wrong? Isn't this the same?
>
> >> system appears to run out of file descriptors. I have tried raising
> >> the open files limit via "ulimit -n" and by using the setrlimit()
> >> facility. Neither of these seems to help. I am currently having to
> >> limit the number of processes running on the system to 2 instances
> >> allowing no more than 256 connections each.
> >
> > Have you tried editing /etc/security/limits.conf (or equivalent file
> > on your system) to increase the max number of open files?
> >
> > perhaps something like:
> > * - nofile 524288
> >
> > is what you want?
> >
> > joe
The solution used to be editing the kernel and tuning NR_OPEN and NR_FILE in
/your/kernel/source/include/linux/fs.h
The kernel used to have an absolute hard limit that setrlimit and ulimit
couldn't go past, but I'm not sure whether this has changed.
You could try bumping those up and recompiling or at least googling along
those lines.
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 13:01 ` Eric Bambach
@ 2009-02-16 17:35 ` Bryan Christ
2009-02-16 21:43 ` Glynn Clements
0 siblings, 1 reply; 10+ messages in thread
From: Bryan Christ @ 2009-02-16 17:35 UTC (permalink / raw)
To: eric; +Cc: Joe Damato, linux-c-programming
I'm pretty sure you can just echo to /proc/sys/fs/file-nr to set that
value (which already seems really high at 560178). I doubt I'm
exhausting 560178 descriptors.
On Mon, Feb 16, 2009 at 7:01 AM, Eric Bambach <bot403@cisu.net> wrote:
> On Monday 16 February 2009 01:09:42 Bryan Christ wrote:
>> On Mon, Feb 16, 2009 at 12:18 AM, Joe Damato <ice799@gmail.com> wrote:
>> > On Sun, Feb 15, 2009 at 9:48 PM, Bryan Christ <bryan.christ@gmail.com>
> wrote:
>> >> I am writing a multi-threaded application which services hundreds of
>> >> remote connections for data transfer. Several instances of this
>> >> program are run simultaneously. The problem is that whenever the
>> >> total number of active user connections (cumulative total of all open
>> >> sockets tallied from all process instances) reaches about 700 the
>>
>> It seems that would be the same as setting RLIMIT_NOFILE via
>> setrlimit() or the same as using the userspace tool "ulimit -n". Am I
>> wrong? Isn't this the same?
>>
>> >> system appears to run out of file descriptors. I have tried raising
>> >> the open files limit via "ulimit -n" and by using the setrlimit()
>> >> facility. Neither of these seems to help. I am currently having to
>> >> limit the number of processes running on the system to 2 instances
>> >> allowing no more than 256 connections each.
>> >
>> > Have you tried editing /etc/security/limits.conf (or equivalent file
>> > on your system) to increase the max number of open files?
>> >
>> > perhaps something like:
>> > * - nofile 524288
>> >
>> > is what you want?
>> >
>> > joe
>
> The solution used to be editing the kernel and tuning NR_OPEN and NR_FILE in
>
> /your/kernel/source/include/linux/fs.h
>
> The kernel used to have an absolute hard limit that setrlimit and ulimit
> couldn't go past, but I'm not sure whether this has changed.
>
> You could try bumping those up and recompiling or at least googling along
> those lines.
>
--
Bryan
<><
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 17:35 ` Bryan Christ
@ 2009-02-16 21:43 ` Glynn Clements
2009-02-17 18:51 ` Bryan Christ
0 siblings, 1 reply; 10+ messages in thread
From: Glynn Clements @ 2009-02-16 21:43 UTC (permalink / raw)
To: Bryan Christ; +Cc: eric, Joe Damato, linux-c-programming
Bryan Christ wrote:
> I'm pretty sure you can just echo to /proc/sys/fs/file-nr to set that
> value (which already seems really high at 560178). I doubt I'm
> exhausting 560178 descriptors.
file-nr is read-only, and reports the number of open descriptors, the
number of free descriptors, and the limit.
file-max is writable, and controls the system-wide limit.
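If you want the three fields from C, a quick sketch:

    #include <stdio.h>

    int main(void)
    {
        unsigned long allocated, nr_free, max;
        FILE *f = fopen("/proc/sys/fs/file-nr", "r");
        if (!f) { perror("fopen"); return 1; }
        if (fscanf(f, "%lu %lu %lu", &allocated, &nr_free, &max) == 3)
            printf("allocated=%lu free=%lu max=%lu\n",
                   allocated, nr_free, max);
        fclose(f);
        return 0;
    }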
--
Glynn Clements <glynn@gclements.plus.com>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 21:43 ` Glynn Clements
@ 2009-02-17 18:51 ` Bryan Christ
2009-02-18 7:07 ` Holger Kiehl
0 siblings, 1 reply; 10+ messages in thread
From: Bryan Christ @ 2009-02-17 18:51 UTC (permalink / raw)
To: Glynn Clements; +Cc: eric, Joe Damato, linux-c-programming
Thanks Glynn.
Well, according to these values I'm only using about 1500 file
descriptors out of the 560178.
On Mon, Feb 16, 2009 at 3:43 PM, Glynn Clements
<glynn@gclements.plus.com> wrote:
>
> Bryan Christ wrote:
>
>> I'm pretty sure you can just echo to /proc/sys/fs/file-nr to set that
>> value (which already seems really high at 560178). I doubt I'm
>> exhausting 560178 descriptors.
>
> file-nr is read-only, and reports the number of open descriptors, the
> number of free descriptors, and the limit.
>
> file-max is writable, and controls the system-wide limit.
>
> --
> Glynn Clements <glynn@gclements.plus.com>
>
--
Bryan
<><
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-17 18:51 ` Bryan Christ
@ 2009-02-18 7:07 ` Holger Kiehl
0 siblings, 0 replies; 10+ messages in thread
From: Holger Kiehl @ 2009-02-18 7:07 UTC (permalink / raw)
To: Bryan Christ; +Cc: linux-c-programming
On Tue, 17 Feb 2009, Bryan Christ wrote:
> Thanks Glynn.
>
> Well according to these values I'm only using about 1500 file
> descriptors out of the 560178.
>
If I remember correctly, you are using sockets. How many sockets does
your system have in TIME_WAIT? You can check this with
'netstat -nt | grep TIME_WAIT | wc -l'.
If you open and close TCP sockets at a high rate, you will soon find
that you hit a limit here as well.
Holger
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 7:09 ` Bryan Christ
2009-02-16 13:01 ` Eric Bambach
@ 2009-02-16 13:06 ` Glynn Clements
2009-02-16 17:39 ` Bryan Christ
1 sibling, 1 reply; 10+ messages in thread
From: Glynn Clements @ 2009-02-16 13:06 UTC (permalink / raw)
To: Bryan Christ; +Cc: Joe Damato, linux-c-programming
Bryan Christ wrote:
> >> I am writing a multi-threaded application which services hundreds of
> >> remote connections for data transfer. Several instances of this
> >> program are run simultaneously. The problem is that whenever the
> >> total number of active user connections (cumulative total of all open
> >> sockets tallied from all process instances) reaches about 700 the
> >> system appears to run out of file descriptors. I have tried raising
> >> the open files limit via "ulimit -n" and by using the setrlimit()
> >> facility. Neither of these seems to help. I am currently having to
> >> limit the number of processes running on the system to 2 instances
> >> allowing no more than 256 connections each.
> >
> > Have you tried editing /etc/security/limits.conf (or equivalent file
> > on your system) to increase the max number of open files?
>
> It seems that would be the same as setting RLIMIT_NOFILE via
> setrlimit() or the same as using the userspace tool "ulimit -n". Am I
> wrong? Isn't this the same?
Is your daemon running as root? If not, it cannot increase any hard
resource limit. Are you checking the return value (and errno) from
setrlimit()?
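Checking might look something like this (a sketch; 'set_nofile' is
just an illustrative wrapper):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>

    /* Try to raise RLIMIT_NOFILE and report exactly why it fails. */
    static int set_nofile(rlim_t n)
    {
        struct rlimit rl;
        rl.rlim_cur = n;
        rl.rlim_max = n;
        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
            /* Typical failures: EPERM (raising the hard limit without
               privilege), EINVAL (soft limit above the hard limit). */
            fprintf(stderr, "setrlimit(RLIMIT_NOFILE, %lu): %s\n",
                    (unsigned long)n, strerror(errno));
            return -1;
        }
        return 0;
    }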
BTW, what do you mean by "appears to run out of file descriptors"?
Which system call fails, and with what error?
--
Glynn Clements <glynn@gclements.plus.com>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: running out of file descriptors
2009-02-16 13:06 ` Glynn Clements
@ 2009-02-16 17:39 ` Bryan Christ
0 siblings, 0 replies; 10+ messages in thread
From: Bryan Christ @ 2009-02-16 17:39 UTC (permalink / raw)
To: Glynn Clements; +Cc: Joe Damato, linux-c-programming
Yes--the daemon runs as root. I believe it's running out of file
descriptors because as soon as I hit that magic 700 number, the
daemons stop accepting connections and all of the transfer threads
start aborting with EPIPE. My only other thought is that perhaps I am
somehow hitting a limit on the number of open sockets I can have...
but as far as I know that limit is the same as the maximum number of
open file descriptors.
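To pin down which call is failing, I'll try logging the errno from
accept()--something like this sketch, where 'listen_fd' stands in for
our listening socket:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Accept one connection and report which limit was hit:
       EMFILE = per-process fd limit, ENFILE = system-wide file table. */
    int accept_logged(int listen_fd)
    {
        int cfd = accept(listen_fd, NULL, NULL);
        if (cfd == -1) {
            if (errno == EMFILE)
                fprintf(stderr, "accept: per-process fd limit (EMFILE)\n");
            else if (errno == ENFILE)
                fprintf(stderr, "accept: system file table full (ENFILE)\n");
            else
                fprintf(stderr, "accept: %s\n", strerror(errno));
        }
        return cfd;
    }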
On Mon, Feb 16, 2009 at 7:06 AM, Glynn Clements
<glynn@gclements.plus.com> wrote:
>
> Bryan Christ wrote:
>
>> >> I am writing a multi-threaded application which services hundreds of
>> >> remote connections for data transfer. Several instances of this
>> >> program are run simultaneously. The problem is that whenever the
>> >> total number of active user connections (cumulative total of all open
>> >> sockets tallied from all process instances) reaches about 700 the
>> >> system appears to run out of file descriptors. I have tried raising
>> >> the open files limit via "ulimit -n" and by using the setrlimit()
>> >> facility. Neither of these seems to help. I am currently having to
>> >> limit the number of processes running on the system to 2 instances
>> >> allowing no more than 256 connections each.
>> >
>> > Have you tried editing /etc/security/limits.conf (or equivalent file
>> > on your system) to increase the max number of open files?
>>
>> It seems that would be the same as setting RLIMIT_NOFILE via
>> setrlimit() or the same as using the userspace tool "ulimit -n". Am I
>> wrong? Isn't this the same?
>
> Is your daemon running as root? If not, it cannot increase any hard
> resource limit. Are you checking the return value (and errno) from
> setrlimit()?
>
> BTW, what do you mean by "appears to run out of file descriptors"?
> Which system call fails, and with what error?
>
> --
> Glynn Clements <glynn@gclements.plus.com>
>
--
Bryan
<><
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads: [~2009-02-18 7:07 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-02-16 5:48 running out of file descriptors Bryan Christ
2009-02-16 6:18 ` Joe Damato
2009-02-16 7:09 ` Bryan Christ
2009-02-16 13:01 ` Eric Bambach
2009-02-16 17:35 ` Bryan Christ
2009-02-16 21:43 ` Glynn Clements
2009-02-17 18:51 ` Bryan Christ
2009-02-18 7:07 ` Holger Kiehl
2009-02-16 13:06 ` Glynn Clements
2009-02-16 17:39 ` Bryan Christ