From: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>, anton@au1.ibm.com
Cc: michael@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCHv2] ibmveth: Fix off-by-one error in ibmveth_change_mtu()
Date: Wed, 22 Apr 2015 16:42:37 -0500 [thread overview]
Message-ID: <553815CD.1000407@linux.vnet.ibm.com> (raw)
In-Reply-To: <1429578471-9717-1-git-send-email-david@gibson.dropbear.id.au>
On 04/20/2015 08:07 PM, David Gibson wrote:
> AFAIK the PAPR document which defines the virtual device interface used by
> the ibmveth driver doesn't specify a specific maximum MTU. So, in the
> ibmveth driver, the maximum allowed MTU is determined by the maximum
> allocated buffer size of 64k (corresponding to one page in the common case)
> minus the per-buffer overhead IBMVETH_BUFF_OH (which has value 22 for 14
> bytes of ethernet header, plus 8 bytes for an opaque handle).
>
> This suggests a maximum allowable MTU of 65514 bytes, but in fact the
> driver only permits a maximum MTU of 65513. This is because there is a <
> instead of an <= in ibmveth_change_mtu(), which only permits an MTU which
> is strictly smaller than the buffer size, rather than allowing the buffer
> to be completely filled.
>
> This patch fixes the buglet.
Thanks!
Acked-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
>
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> drivers/net/ethernet/ibm/ibmveth.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> Changes since v1:
> * Fixed a second instance of the same off-by-one error. Thanks to
> Thomas Falcon for spotting this.
>
> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
> index cd7675a..1813476 100644
> --- a/drivers/net/ethernet/ibm/ibmveth.c
> +++ b/drivers/net/ethernet/ibm/ibmveth.c
> @@ -1238,7 +1238,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
> return -EINVAL;
>
> for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
> - if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size)
> + if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size)
> break;
>
> if (i == IBMVETH_NUM_BUFF_POOLS)
> @@ -1257,7 +1257,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
> for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
> adapter->rx_buff_pool[i].active = 1;
>
> - if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) {
> + if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size) {
> dev->mtu = new_mtu;
> vio_cmo_set_dev_desired(viodev,
> ibmveth_get_desired_dma