From: andrew hendry <andrew.hendry@gmail.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Zhu Yi <yi.zhu@intel.com>, netdev@vger.kernel.org
Subject: Re: [PATCH 8/8] x25: use limited socket backlog
Date: Wed, 3 Mar 2010 22:38:21 +1100 [thread overview]
Message-ID: <d45a3acc1003030338m3102a95p1b06b7db1b24e58e@mail.gmail.com> (raw)
In-Reply-To: <1267600109.2839.101.camel@edumazet-laptop>
Will wait for the next spin and in the meantime think about whether there
is a way to test it. x25, with no loopback and being so slow, probably
can't generate the same load as your UDP case.
Andrew.
On Wed, Mar 3, 2010 at 6:08 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Wednesday, 3 March 2010 at 14:35 +0800, Zhu Yi wrote:
>> Make x25 adapt to the limited socket backlog change.
>>
>> Cc: Andrew Hendry <andrew.hendry@gmail.com>
>> Signed-off-by: Zhu Yi <yi.zhu@intel.com>
>> ---
>> net/x25/x25_dev.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
>> index 3e1efe5..5688123 100644
>> --- a/net/x25/x25_dev.c
>> +++ b/net/x25/x25_dev.c
>> @@ -53,7 +53,7 @@ static int x25_receive_data(struct sk_buff *skb, struct x25_neigh *nb)
>> if (!sock_owned_by_user(sk)) {
>> queued = x25_process_rx_frame(sk, skb);
>> } else {
>> - sk_add_backlog(sk, skb);
>> + __sk_add_backlog(sk, skb);
>> }
>> bh_unlock_sock(sk);
>> sock_put(sk);
>
> Please respin your patch the other way
>
> I.e., let sk_add_backlog(sk, skb) keep doing its previous job (not
> leaking skbs, and returning a void status).
>
> Add a new function that limits the backlog and returns an error code, so
> that the caller can free the skb and increment SNMP counters accordingly.
>
> Callers MUST test the return value, or use another helper that frees the
> skb for them.
>
> Name it sk_move_backlog(), for example.
>
> This will permit you to split the work as you tried.
>
> sk_add_backlog() could be redefined as the helper:
>
> /* Try to queue the skb on the backlog; if the backlog is full,
>  * free the skb and account for the drop. */
> void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
> {
> 	if (sk_move_backlog(sk, skb)) {
> 		kfree_skb(skb);
> 		atomic_inc(&sk->sk_drops);
> 	}
> }
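>
> For the x25 path in the patch above, the caller side could then read
> (a sketch only, assuming the error-returning helper is named
> sk_move_backlog() as suggested):
>
> 	if (!sock_owned_by_user(sk)) {
> 		queued = x25_process_rx_frame(sk, skb);
> 	} else if (sk_move_backlog(sk, skb)) {
> 		/* backlog limit hit: drop the frame and count it */
> 		kfree_skb(skb);
> 		atomic_inc(&sk->sk_drops);
> 	}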
>
> Thanks
>
>
>
Thread overview: 21+ messages
2010-03-03 6:35 [PATCH 1/8] net: add limit for socket backlog Zhu Yi
2010-03-03 6:35 ` [PATCH 2/8] dccp: use limited " Zhu Yi
2010-03-03 6:35 ` [PATCH 3/8] tcp: " Zhu Yi
2010-03-03 6:35 ` [PATCH 4/8] udp: " Zhu Yi
2010-03-03 6:35 ` [PATCH 5/8] llc: " Zhu Yi
2010-03-03 6:35 ` [PATCH 6/8] sctp: " Zhu Yi
2010-03-03 6:35 ` [PATCH 7/8] tipc: " Zhu Yi
2010-03-03 6:35 ` [PATCH 8/8] x25: " Zhu Yi
2010-03-03 7:08 ` Eric Dumazet
2010-03-03 11:38 ` andrew hendry [this message]
2010-03-03 14:00 ` Zhu, Yi
2010-03-03 14:33 ` Eric Dumazet
2010-03-03 22:44 ` andrew hendry
2010-03-03 6:56 ` [PATCH 2/8] dccp: " Eric Dumazet
2010-03-03 7:43 ` Zhu Yi
2010-03-03 6:54 ` [PATCH 1/8] net: add limit for " Eric Dumazet
2010-03-03 7:35 ` Zhu Yi
2010-03-03 8:02 ` Eric Dumazet
2010-03-03 8:14 ` Zhu Yi
2010-03-03 8:47 ` Eric Dumazet
2010-03-03 8:59 ` Zhu Yi