Date: Wed, 27 Feb 2019 17:00:24 -0800 (PST)
From: David Miller <davem@davemloft.net>
To: netdev@vger.kernel.org
CC: marcelo.leitner@gmail.com, lucien.xin@gmail.com, nhorman@tuxdriver.com
Subject: [PATCH RFC v3 4/5] sctp: Make sctp_enqueue_event take an skb list.
Message-Id: <20190227.170024.1735727318522180688.davem@davemloft.net>
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Pass an skb list to sctp_enqueue_event(), instead of a single event.
Then everything trickles down and we always have events on a non-empty
list.  We then need a list-creating stub to place into .enqueue_event
for sctp_stream_interleave_1.

Signed-off-by: David S. Miller <davem@davemloft.net>
---
 net/sctp/stream_interleave.c | 44 +++++++++++++++++++++++++++---------
 1 file changed, 33 insertions(+), 11 deletions(-)
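[ Reviewer note, kept below the cut so it stays out of the commit
  message: a minimal sketch of the conversion pattern applied at every
  call site, assuming in-kernel context.  It uses only helpers that
  already exist in the tree (skb_queue_head_init(), __skb_queue_tail(),
  __skb_peek(), and SCTP's sctp_event2skb()/sctp_skb2event() macros);
  the wrapper name enqueue_one_event() is invented for illustration:

	#include <linux/skbuff.h>
	#include <net/sctp/sctp.h>

	/* Wrap a single event in an on-stack, one-element skb list so
	 * it can be handed to the list-taking sctp_enqueue_event(),
	 * just as the do_sctp_enqueue_event() stub in the diff does.
	 */
	static int enqueue_one_event(struct sctp_ulpq *ulpq,
				     struct sctp_ulpevent *event)
	{
		struct sk_buff_head temp;

		skb_queue_head_init(&temp);
		__skb_queue_tail(&temp, sctp_event2skb(event));
		return sctp_enqueue_event(ulpq, &temp);
	}

  The callee then recovers the first event without unlinking it:

	struct sk_buff *skb = __skb_peek(skb_list);
	struct sctp_ulpevent *event = sctp_skb2event(skb);
]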
diff --git a/net/sctp/stream_interleave.c b/net/sctp/stream_interleave.c
index b6b251b8b3cf..0bc3d9329d9a 100644
--- a/net/sctp/stream_interleave.c
+++ b/net/sctp/stream_interleave.c
@@ -484,14 +484,15 @@ static struct sctp_ulpevent *sctp_intl_order(struct sctp_ulpq *ulpq,
 }
 
 static int sctp_enqueue_event(struct sctp_ulpq *ulpq,
-			      struct sctp_ulpevent *event)
+			      struct sk_buff_head *skb_list)
 {
-	struct sk_buff *skb = sctp_event2skb(event);
 	struct sock *sk = ulpq->asoc->base.sk;
 	struct sctp_sock *sp = sctp_sk(sk);
-	struct sk_buff_head *skb_list;
+	struct sctp_ulpevent *event;
+	struct sk_buff *skb;
 
-	skb_list = (struct sk_buff_head *)skb->prev;
+	skb = __skb_peek(skb_list);
+	event = sctp_skb2event(skb);
 
 	if (sk->sk_shutdown & RCV_SHUTDOWN &&
 	    (sk->sk_shutdown & SEND_SHUTDOWN ||
@@ -866,11 +867,15 @@ static int sctp_ulpevent_idata(struct sctp_ulpq *ulpq,
 		}
 	} else {
 		event = sctp_intl_reasm_uo(ulpq, event);
+		if (event) {
+			skb_queue_head_init(&temp);
+			__skb_queue_tail(&temp, sctp_event2skb(event));
+		}
 	}
 
 	if (event) {
 		event_eor = (event->msg_flags & MSG_EOR) ? 1 : 0;
-		sctp_enqueue_event(ulpq, event);
+		sctp_enqueue_event(ulpq, &temp);
 	}
 
 	return event_eor;
@@ -944,20 +949,27 @@ static struct sctp_ulpevent *sctp_intl_retrieve_first(struct sctp_ulpq *ulpq)
 static void sctp_intl_start_pd(struct sctp_ulpq *ulpq, gfp_t gfp)
 {
 	struct sctp_ulpevent *event;
+	struct sk_buff_head temp;
 
 	if (!skb_queue_empty(&ulpq->reasm)) {
 		do {
 			event = sctp_intl_retrieve_first(ulpq);
-			if (event)
-				sctp_enqueue_event(ulpq, event);
+			if (event) {
+				skb_queue_head_init(&temp);
+				__skb_queue_tail(&temp, sctp_event2skb(event));
+				sctp_enqueue_event(ulpq, &temp);
+			}
 		} while (event);
 	}
 
 	if (!skb_queue_empty(&ulpq->reasm_uo)) {
 		do {
 			event = sctp_intl_retrieve_first_uo(ulpq);
-			if (event)
-				sctp_enqueue_event(ulpq, event);
+			if (event) {
+				skb_queue_head_init(&temp);
+				__skb_queue_tail(&temp, sctp_event2skb(event));
+				sctp_enqueue_event(ulpq, &temp);
+			}
		} while (event);
 	}
 }
@@ -1059,7 +1071,7 @@ static void sctp_intl_reap_ordered(struct sctp_ulpq *ulpq, __u16 sid)
 
 	if (event) {
 		sctp_intl_retrieve_ordered(ulpq, event);
-		sctp_enqueue_event(ulpq, event);
+		sctp_enqueue_event(ulpq, &temp);
 	}
 }
 
@@ -1326,6 +1338,16 @@ static struct sctp_stream_interleave sctp_stream_interleave_0 = {
 	.handle_ftsn		= sctp_handle_fwdtsn,
 };
 
+static int do_sctp_enqueue_event(struct sctp_ulpq *ulpq,
+				 struct sctp_ulpevent *event)
+{
+	struct sk_buff_head temp;
+
+	skb_queue_head_init(&temp);
+	__skb_queue_tail(&temp, sctp_event2skb(event));
+	return sctp_enqueue_event(ulpq, &temp);
+}
+
 static struct sctp_stream_interleave sctp_stream_interleave_1 = {
 	.data_chunk_len		= sizeof(struct sctp_idata_chunk),
 	.ftsn_chunk_len		= sizeof(struct sctp_ifwdtsn_chunk),
@@ -1334,7 +1356,7 @@ static struct sctp_stream_interleave sctp_stream_interleave_1 = {
 	.assign_number		= sctp_chunk_assign_mid,
 	.validate_data		= sctp_validate_idata,
 	.ulpevent_data		= sctp_ulpevent_idata,
-	.enqueue_event		= sctp_enqueue_event,
+	.enqueue_event		= do_sctp_enqueue_event,
 	.renege_events		= sctp_renege_events,
 	.start_pd		= sctp_intl_start_pd,
 	.abort_pd		= sctp_intl_abort_pd,
-- 
2.20.1
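P.S.: For context on the __skb_peek() call in the first hunk: the old
code recovered the destination list by casting skb->prev, i.e.
"skb_list = (struct sk_buff_head *)skb->prev;", while the new code
receives the list explicitly.  __skb_peek() only returns the head skb;
it neither checks for nor unlinks from an empty list, so the one-element
temp queue built at each call site is what makes the peek safe.  A
sketch of that contract, with the helper name first_event() invented
for illustration:

	/* Caller must guarantee skb_list is non-empty; __skb_peek()
	 * returns the first queued skb without unlinking it.
	 */
	static struct sctp_ulpevent *first_event(struct sk_buff_head *skb_list)
	{
		return sctp_skb2event(__skb_peek(skb_list));
	}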