* a libipq problem(FAQ cannot help)
@ 2004-08-27 10:01 He Ke
0 siblings, 0 replies; 4+ messages in thread
From: He Ke @ 2004-08-27 10:01 UTC (permalink / raw)
To: netfilter-devel
I've now got a serious problem with libipq programming. I wrote an application
which gets packets from iptables' QUEUE target (ip_queue and ip6_queue) and
then deals with them (drop, accept, ...).
It works well with a normal amount of packets, but when traffic goes above
10M/s it soon dies and shows:
"Failed to receive netlink message: No buffer space available"
I've checked the FAQ; it says "you can tune their receive buffer sizes via
/proc/sys/net/core, sysctl, or use the SO_RCVBUF socket option on the file
descriptor". I tried both, but it doesn't work: my application still dies
soon when traffic goes above 10M/s.
I tuned
/proc/sys/net/core/rmem_default
/proc/sys/net/core/rmem_max
up to 1048576.
I modified the ipq_create_handle function in libipq.c and added the following
code:

int bufsize = 1048576;
if (setsockopt(h->fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) == -1) {
        ipq_errno = IPQ_ERR_RECVBUF;
        free(h);
        return NULL;
}
Am I right?
If what I've done is what the FAQ says, why doesn't it work?
snort_inline, which uses the same technique, has the same problem; I've
checked it.
Would you please tell me how I can deal with this problem?
I'd really appreciate it!
^ permalink raw reply [flat|nested] 4+ messages in thread
* a libipq problem(FAQ cannot help)
@ 2004-08-28 8:46 He Ke
2004-08-29 4:13 ` Pablo Neira
0 siblings, 1 reply; 4+ messages in thread
From: He Ke @ 2004-08-28 8:46 UTC (permalink / raw)
To: netfilter-devel
Hello,
I've now got a serious problem with libipq programming. I wrote an application
which gets packets from iptables' QUEUE target (ip_queue and ip6_queue) and
then deals with them (drop, accept, ...).
It works well with a normal amount of packets, but when traffic goes above
10M/s it soon dies and shows:
"Failed to receive netlink message: No buffer space available"
I've checked the FAQ; it says "you can tune their receive buffer sizes via
/proc/sys/net/core, sysctl, or use the SO_RCVBUF socket option on the file
descriptor". I tried both, but it doesn't work: my application still dies
soon when traffic goes above 10M/s.
I tuned
/proc/sys/net/core/rmem_default
/proc/sys/net/core/rmem_max
up to 1048576.
I modified the ipq_create_handle function in libipq.c and added the following
code:

int bufsize = 1048576;
if (setsockopt(h->fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) == -1) {
        ipq_errno = IPQ_ERR_RECVBUF;
        free(h);
        return NULL;
}
Am I right?
If what I've done is what the FAQ says, why doesn't it work?
snort_inline, which uses the same technique, has the same problem; I've
checked it.
Would you please tell me how I can deal with this problem?
I'd really appreciate it!
* Re: a libipq problem(FAQ cannot help)
2004-08-28 8:46 He Ke
@ 2004-08-29 4:13 ` Pablo Neira
0 siblings, 0 replies; 4+ messages in thread
From: Pablo Neira @ 2004-08-29 4:13 UTC (permalink / raw)
To: He Ke; +Cc: netfilter-devel
Hi He Ke,
He Ke wrote:
>Hello,
>i've now got a serious problem on the libipq programming.I wrote an application
>which gets
>packets from iptables's QUEUE target(ip_queue&ip6_queue),and then deals with
>them(drop,accept,...).
>It works well with normal amout of packets,but when packets comes above 10M/s,it
>dies soon,and show
> "Failed to received netlink message: No buffer space available".
>
>
This problem is netlink-socket related, so ip_queue inherits it because it's
built on top of netlink sockets...
> I've checked the FAQ,it says "you can tune their receive buffer sizes via
> /proc/sys/net/core, sysctl, or use the SO_RCVBUF socket option on the file
> descriptor".I tried them both , but it doesn't work,my application still die
>soon
> when it meets packets above 10M/s.
> I tuned
> /proc/sys/net/core/rmem_default
> /proc/sys/net/core/rmem_max
>
>
I just submitted a patch which fixes this problem; it will go into 2.6.9 if
everything goes OK.
It's also available here (2.6.x version):
http://eurodev.net/~pablo/netlink-workqueue-2.patch
A 2.4.x version of the patch is on the way, stay tuned.
regards,
Pablo
Thread overview: 4+ messages
2004-08-27 10:01 a libipq problem(FAQ cannot help) He Ke
-- strict thread matches above, loose matches on Subject: below --
2004-08-27 10:01 He Ke
2004-08-28 8:46 He Ke
2004-08-29 4:13 ` Pablo Neira