From: Julian Wiedmann <jwi@linux.ibm.com>
To: David Miller <davem@davemloft.net>
Cc: <netdev@vger.kernel.org>, <linux-s390@vger.kernel.org>,
Heiko Carstens <heiko.carstens@de.ibm.com>,
Stefan Raspl <raspl@linux.ibm.com>,
Ursula Braun <ubraun@linux.ibm.com>,
Julian Wiedmann <jwi@linux.ibm.com>
Subject: [PATCH net-next 02/13] s390/qeth: use mm helpers
Date: Tue, 11 Jun 2019 18:37:49 +0200
Message-ID: <20190611163800.64730-3-jwi@linux.ibm.com>
In-Reply-To: <20190611163800.64730-1-jwi@linux.ibm.com>

Slightly reduce the complexity of the core xmit path by replacing some
open-coded logic with the corresponding helpers.
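
The helpers in question are offset_in_page(), PAGE_ALIGNED() and min_t().
The small userspace sketch below is illustration only: the macro
definitions are simplified stand-ins for the kernel ones (assuming
PAGE_SIZE is a power of two), and the buffer, sizes and alignment are
made up for the example. The loop mirrors the page-chunking pattern the
rewritten fill-buffer code relies on:

#include <stdio.h>

#define PAGE_SIZE          4096UL
#define PAGE_MASK          (~(PAGE_SIZE - 1))
#define offset_in_page(p)  ((unsigned long)(p) & ~PAGE_MASK)  /* byte offset within the page */
#define PAGE_ALIGNED(p)    (offset_in_page(p) == 0)           /* true if p starts a page */
#define min_t(type, a, b)  ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	/* page-aligned backing buffer, so the offsets below are deterministic */
	static char buf[3 * 4096] __attribute__((aligned(4096)));
	char *data = buf + 1000;	/* pretend skb->data starts mid-page */
	unsigned int length = 6000;	/* linear data to map into elements */
	unsigned int elem_length;
	int element = 0;

	/* qeth_add_hw_header() now applies this test to skb->data */
	printf("data page-aligned? %d\n", PAGE_ALIGNED(data));

	while (length > 0) {
		/* cap each element at the end of the current page */
		elem_length = min_t(unsigned int, length,
				    PAGE_SIZE - offset_in_page(data));
		printf("element %d: page offset %lu, length %u\n",
		       element, offset_in_page(data), elem_length);
		length -= elem_length;
		data += elem_length;
		element++;
	}
	return 0;
}
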
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
---
drivers/s390/net/qeth_core_main.c | 33 +++++++++++++++----------------
1 file changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index cd9e2f70d8f6..feb9e1c9d506 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -20,6 +20,7 @@
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/mii.h>
+#include <linux/mm.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/if_vlan.h>
@@ -3723,8 +3724,8 @@ static int qeth_add_hw_header(struct qeth_qdio_out_q *queue,
__elements = 1 + qeth_count_elements(skb, proto_len);
else
__elements = qeth_count_elements(skb, 0);
- } else if (!proto_len && qeth_get_elements_for_range(start, end) == 1) {
- /* Push HW header into a new page. */
+ } else if (!proto_len && PAGE_ALIGNED(skb->data)) {
+ /* Push HW header into preceding page, flush with skb->data. */
push_ok = true;
__elements = 1 + qeth_count_elements(skb, 0);
} else {
@@ -3778,18 +3779,16 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
int element = buf->next_element_to_fill;
int length = skb_headlen(skb) - offset;
char *data = skb->data + offset;
- int length_here, cnt;
+ unsigned int elem_length, cnt;
/* map linear part into buffer element(s) */
while (length > 0) {
- /* length_here is the remaining amount of data in this page */
- length_here = PAGE_SIZE - ((unsigned long) data % PAGE_SIZE);
- if (length < length_here)
- length_here = length;
+ elem_length = min_t(unsigned int, length,
+ PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = data;
- buffer->element[element].length = length_here;
- length -= length_here;
+ buffer->element[element].length = elem_length;
+ length -= elem_length;
if (is_first_elem) {
is_first_elem = false;
if (length || skb_is_nonlinear(skb))
@@ -3802,7 +3801,8 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
buffer->element[element].eflags =
SBAL_EFLAGS_MIDDLE_FRAG;
}
- data += length_here;
+
+ data += elem_length;
element++;
}
@@ -3813,17 +3813,16 @@ static void __qeth_fill_buffer(struct sk_buff *skb,
data = skb_frag_address(frag);
length = skb_frag_size(frag);
while (length > 0) {
- length_here = PAGE_SIZE -
- ((unsigned long) data % PAGE_SIZE);
- if (length < length_here)
- length_here = length;
+ elem_length = min_t(unsigned int, length,
+ PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = data;
- buffer->element[element].length = length_here;
+ buffer->element[element].length = elem_length;
buffer->element[element].eflags =
SBAL_EFLAGS_MIDDLE_FRAG;
- length -= length_here;
- data += length_here;
+
+ length -= elem_length;
+ data += elem_length;
element++;
}
}
--
2.17.1
Thread overview: 15+ messages
2019-06-11 16:37 [PATCH net-next 00/13] s390/qeth: updates 2019-06-11 Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 01/13] s390/qeth: don't mask TX errors on IQD devices Julian Wiedmann
2019-06-11 16:37 ` Julian Wiedmann [this message]
2019-06-11 16:37 ` [PATCH net-next 03/13] s390/qeth: simplify DOWN state handling Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 04/13] s390/qeth: restart pending READ cmd from callback Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 05/13] s390/qeth: clean up setting of BLKT defaults Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 06/13] s390/qeth: remove qeth_wait_for_buffer() Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 07/13] s390/qeth: remove OSN-specific IO code Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 08/13] s390/qeth: convert device-specific trace entries Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 09/13] s390/qeth: remove 'channel' parameter from callbacks Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 10/13] s390/qeth: add support for dynamically allocated cmds Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 11/13] s390/qeth: convert RCD code to common IO infrastructure Julian Wiedmann
2019-06-11 16:37 ` [PATCH net-next 12/13] s390/qeth: command-chain the IDX sequence Julian Wiedmann
2019-06-11 16:38 ` [PATCH net-next 13/13] s390/qeth: allocate a single cmd on read channel Julian Wiedmann
2019-06-14 5:41 ` [PATCH net-next 00/13] s390/qeth: updates 2019-06-11 David Miller