From: Julien Grall <julien.grall@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: <ian.campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<xen-devel@lists.xenproject.org>,
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v3 18/20] net/xen-netback: Make it running on 64KB page granularity
Date: Mon, 10 Aug 2015 13:00:40 +0100
Message-ID: <55C89268.2030102@citrix.com>
In-Reply-To: <20150810113907.GH8857@zion.uk.xensource.com>
On 10/08/15 12:39, Wei Liu wrote:
> On Mon, Aug 10, 2015 at 10:57:48AM +0100, Julien Grall wrote:
>>	while (size > 0) {
>>		BUG_ON(offset >= PAGE_SIZE);
>>
>>		bytes = PAGE_SIZE - offset;
>>		if (bytes > size)
>>			bytes = size;
>>
>>		info.page = page;
>>		gnttab_foreach_grant_in_range(page, offset, bytes,
>>					      xenvif_gop_frag_copy_grant,
>>					      &info);
>>		size -= bytes;
>>		offset = 0;
>>
>>		/* Next page */
>>		if (size) {
>>			BUG_ON(!PageCompound(page));
>>			page++;
>>		}
>>	}
>>
>
> Right. That doesn't mean the original code was wrong or anything. But I
> don't want to bikeshed about this.
I never said the original code was wrong... The original code allowed for
copying less data than the length contained in the page.
In the new version, that handling has been pushed into the callback
xenvif_gop_frag_copy_grant.
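To make that concrete, here is a minimal standalone sketch of the idea
(illustration only, not the netback code: the helper name
foreach_grant_in_range, the simplified callback signature and the
hard-coded 4KB grant size are assumptions). It splits a byte range within
one 64KB page into 4KB grant-sized chunks and hands each chunk to a
callback, which is why a single callback invocation can end up copying
less than the full requested length:

/* Illustration only -- a simplified stand-in for what
 * gnttab_foreach_grant_in_range() does, not the real helper. */
#include <stdio.h>

#define XEN_PAGE_SIZE 4096u	/* grant granularity (4KB) */

typedef void (*grant_fn_t)(unsigned int offset, unsigned int len,
			   void *data);

static void foreach_grant_in_range(unsigned int offset, unsigned int len,
				   grant_fn_t fn, void *data)
{
	while (len) {
		/* Bytes left in the current 4KB grant. */
		unsigned int chunk =
			XEN_PAGE_SIZE - (offset & (XEN_PAGE_SIZE - 1));

		if (chunk > len)
			chunk = len;

		fn(offset, chunk, data);

		offset += chunk;
		len -= chunk;
	}
}

static void copy_chunk(unsigned int offset, unsigned int len, void *data)
{
	printf("copy %4u bytes at page offset %5u\n", len, offset);
}

int main(void)
{
	/* 10000 bytes starting 3000 bytes into a 64KB page are split
	 * into grant-sized copies of 1096 + 4096 + 4096 + 712 bytes. */
	foreach_grant_in_range(3000, 10000, copy_chunk, NULL);
	return 0;
}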
> Please add a comment saying that offset is always 0 starting from second
> iteration because the gnttab_foreach_grant_in_range makes sure we handle
> one page in one go.
I think this is superfluous. To be honest, the comment should have been on
the original version rather than the new one: from the way that loop was
constructed, it was far from obvious that we could copy less data than a
full page.
In this new version, the reason is not that gnttab_foreach_grant_in_range
always handles a full page, but how the loop has been constructed.
If you look at how bytes is defined, it will always contain
    min(PAGE_SIZE - offset, size)
So for the first page this will be PAGE_SIZE - offset. At the end of the
loop we reset offset to 0, since by then we have copied all the data of
the first page. For the second page onwards this will always be PAGE_SIZE,
except for the last page where it is whatever size remains.
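If it helps, here is a standalone simulation of just that loop
(illustration only, not the driver code; the starting offset and size
values are made up) showing that offset is non-zero at most for the first
page:

#include <stdio.h>

#define PAGE_SIZE 65536u	/* 64KB page granularity */

int main(void)
{
	unsigned int offset = 5000;	/* arbitrary start in the first page */
	unsigned int size   = 200000;	/* spans four 64KB pages */
	unsigned int page   = 0;

	while (size > 0) {
		/* bytes == min(PAGE_SIZE - offset, size) */
		unsigned int bytes = PAGE_SIZE - offset;

		if (bytes > size)
			bytes = size;

		printf("page %u: offset %5u, bytes %5u\n", page, offset, bytes);

		size -= bytes;
		offset = 0;	/* always 0 from the second page on */
		page++;
	}

	/* Prints:
	 * page 0: offset  5000, bytes 60536
	 * page 1: offset     0, bytes 65536
	 * page 2: offset     0, bytes 65536
	 * page 3: offset     0, bytes  8392
	 */
	return 0;
}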
Regards,
--
Julien Grall
Thread overview: 9+ messages
[not found] <1438966019-19322-1-git-send-email-julien.grall@citrix.com>
2015-08-07 16:46 ` [PATCH v3 01/20] net/xen-netback: xenvif_gop_frag_copy: move GSO check out of the loop Julien Grall
2015-08-08 13:59   ` Wei Liu
2015-08-07 16:46 ` [PATCH v3 17/20] net/xen-netfront: Make it running on 64KB page granularity Julien Grall
2015-08-20 10:03   ` [Xen-devel] " David Vrabel
2015-08-07 16:46 ` [PATCH v3 18/20] net/xen-netback: " Julien Grall
2015-08-08 14:55   ` Wei Liu
2015-08-10  9:57     ` Julien Grall
2015-08-10 11:39       ` Wei Liu
2015-08-10 12:00         ` Julien Grall [this message]