From mboxrd@z Thu Jan 1 00:00:00 1970
From: Julian Wiedmann
To: David Miller
Cc: Martin Schwidefsky, Heiko Carstens, Stefan Raspl, Ursula Braun,
	Julian Wiedmann
Subject: [PATCH net-next 6/8] s390/qeth: cache max number of available buffer elements
Date: Thu, 25 Apr 2019 18:25:59 +0200
Message-Id: <20190425162601.91997-7-jwi@linux.ibm.com>
In-Reply-To: <20190425162601.91997-1-jwi@linux.ibm.com>
References: <20190425162601.91997-1-jwi@linux.ibm.com>
X-Mailer: git-send-email 2.16.4
X-Mailing-List: netdev@vger.kernel.org

The QETH_MAX_BUFFER_ELEMENTS() macro effectively returns a constant
value. To avoid some redundant pointer chasing and computations in the
xmit hot path, cache this value in the queue struct. Take this as an
opportunity to shrink some of the queue struct's fields to their
appropriate value range, slightly reducing its total size.
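For illustration only, a minimal, compilable sketch of the pattern this
patch applies; the names below (struct card, struct out_queue, queue_init,
buffer_has_room, MAX_BUFFER_ELEMENTS) are simplified stand-ins, not the
actual qeth definitions. The idea: a value that is derived from a linked
object and never changes after setup is computed once at init time and
cached in the queue itself, so the transmit hot path reads one small field
instead of dereferencing queue->card and re-evaluating a macro per packet.

#include <stdio.h>

struct card {
	unsigned int qdio_buf_elements;	/* hypothetical per-card property */
};

struct out_queue {
	struct card *card;
	unsigned char max_elements;	/* cached at init, constant afterwards */
	unsigned char next_element_to_fill;
};

/* stand-in for QETH_MAX_BUFFER_ELEMENTS(card) */
#define MAX_BUFFER_ELEMENTS(card) ((card)->qdio_buf_elements)

static void queue_init(struct out_queue *q, struct card *c)
{
	q->card = c;
	/* compute the effectively-constant value once and cache it */
	q->max_elements = MAX_BUFFER_ELEMENTS(c);
	q->next_element_to_fill = 0;
}

/* hot path: no pointer chasing through q->card, no macro re-evaluation */
static int buffer_has_room(const struct out_queue *q, unsigned int needed)
{
	return q->next_element_to_fill + needed <= q->max_elements;
}

int main(void)
{
	struct card c = { .qdio_buf_elements = 16 };
	struct out_queue q;

	queue_init(&q, &c);
	printf("room for 3 elements: %s\n",
	       buffer_has_room(&q, 3) ? "yes" : "no");
	return 0;
}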
Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core.h      | 10 ++++------
 drivers/s390/net/qeth_core_main.c | 34 +++++++++++++++++-----------------
 drivers/s390/net/qeth_l2_main.c   |  2 +-
 3 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 92441593caf3..73afbb8b69e5 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -484,14 +484,12 @@ struct qeth_qdio_out_q {
 	struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
 	struct qdio_outbuf_state *bufstates; /* convenience pointer */
 	struct qeth_out_q_stats stats;
-	int queue_no;
+	u8 next_buf_to_fill;
+	u8 max_elements;
+	u8 queue_no;
+	u8 do_pack;
 	struct qeth_card *card;
 	atomic_t state;
-	int do_pack;
-	/*
-	 * index of buffer to be filled by driver; state EMPTY or PACKING
-	 */
-	int next_buf_to_fill;
 	/*
 	 * number of buffers that are currently filled (PRIMED)
 	 * -> these buffers are hardware-owned
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 5d8777c4d1a6..009f2c0ec504 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -1165,15 +1165,14 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,

 	qeth_release_skbs(buf);

-	for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(queue->card); ++i) {
+	for (i = 0; i < queue->max_elements; ++i) {
 		if (buf->buffer->element[i].addr && buf->is_header[i])
 			kmem_cache_free(qeth_core_header_cache,
 					buf->buffer->element[i].addr);
 		buf->is_header[i] = 0;
 	}

-	qeth_scrub_qdio_buffer(buf->buffer,
-			       QETH_MAX_BUFFER_ELEMENTS(queue->card));
+	qeth_scrub_qdio_buffer(buf->buffer, queue->max_elements);
 	buf->next_element_to_fill = 0;
 	atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
 }
@@ -2727,14 +2726,15 @@ int qeth_init_qdio_queues(struct qeth_card *card)

 	/* outbound queue */
 	for (i = 0; i < card->qdio.no_out_queues; ++i) {
-		qdio_reset_buffers(card->qdio.out_qs[i]->qdio_bufs,
-				   QDIO_MAX_BUFFERS_PER_Q);
-		card->qdio.out_qs[i]->next_buf_to_fill = 0;
-		card->qdio.out_qs[i]->do_pack = 0;
-		atomic_set(&card->qdio.out_qs[i]->used_buffers, 0);
-		atomic_set(&card->qdio.out_qs[i]->set_pci_flags_count, 0);
-		atomic_set(&card->qdio.out_qs[i]->state,
-			   QETH_OUT_Q_UNLOCKED);
+		struct qeth_qdio_out_q *queue = card->qdio.out_qs[i];
+
+		qdio_reset_buffers(queue->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
+		queue->max_elements = QETH_MAX_BUFFER_ELEMENTS(card);
+		queue->next_buf_to_fill = 0;
+		queue->do_pack = 0;
+		atomic_set(&queue->used_buffers, 0);
+		atomic_set(&queue->set_pci_flags_count, 0);
+		atomic_set(&queue->state, QETH_OUT_Q_UNLOCKED);
 	}
 	return 0;
 }
@@ -3558,7 +3558,7 @@ static void qeth_qdio_output_handler(struct ccw_device *ccwdev,

 			/* prepare the queue slot for re-use: */
 			qeth_scrub_qdio_buffer(buffer->buffer,
-					       QETH_MAX_BUFFER_ELEMENTS(card));
+					       queue->max_elements);
 			if (qeth_init_qdio_out_buf(queue, bidx)) {
 				QETH_CARD_TEXT(card, 2, "outofbuf");
 				qeth_schedule_recovery(card);
@@ -3705,8 +3705,8 @@ static int qeth_add_hw_header(struct qeth_qdio_out_q *queue,
 			      unsigned int hdr_len, unsigned int proto_len,
 			      unsigned int *elements)
 {
-	const unsigned int max_elements = QETH_MAX_BUFFER_ELEMENTS(queue->card);
 	const unsigned int contiguous = proto_len ? proto_len : 1;
+	const unsigned int max_elements = queue->max_elements;
 	unsigned int __elements;
 	addr_t start, end;
 	bool push_ok;
@@ -3878,8 +3878,8 @@ static int qeth_fill_buffer(struct qeth_qdio_out_q *queue,
 			QETH_TXQ_STAT_INC(queue, skbs_pack);

 		/* If the buffer still has free elements, keep using it. */
-		if (!flush && buf->next_element_to_fill <
-			      QETH_MAX_BUFFER_ELEMENTS(queue->card))
+		if (!flush &&
+		    buf->next_element_to_fill < queue->max_elements)
 			return 0;
 	}

@@ -3959,8 +3959,8 @@ int qeth_do_send_packet(struct qeth_card *card, struct qeth_qdio_out_q *queue,
 	if (queue->do_pack) {
 		do_pack = 1;
 		/* does packet fit in current buffer? */
-		if ((QETH_MAX_BUFFER_ELEMENTS(card) -
-		     buffer->next_element_to_fill) < elements_needed) {
+		if (buffer->next_element_to_fill + elements_needed >
+		    queue->max_elements) {
 			/* ... no -> set state PRIMED */
 			atomic_set(&buffer->state, QETH_QDIO_BUF_PRIMED);
 			flush_count++;
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index fb21136c0ec2..cee9a99dd463 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -581,7 +581,7 @@ static int qeth_l2_xmit_osn(struct qeth_card *card, struct sk_buff *skb,
 	}

 	elements += qeth_count_elements(skb, hd_len);
-	if (elements > QETH_MAX_BUFFER_ELEMENTS(card)) {
+	if (elements > queue->max_elements) {
 		rc = -E2BIG;
 		goto out;
 	}
-- 
2.16.4