From: Leon Romanovsky <leon@kernel.org>
To: Or Gerlitz <ogerlitz@mellanox.com>
Cc: "David S. Miller" <davem@davemloft.net>,
netdev@vger.kernel.org, Tariq Toukan <tariqt@mellanox.com>,
Hadar Hen Zion <hadarh@mellanox.com>,
Amir Vadai <amirva@mellanox.com>, Roi Dayan <roid@mellanox.com>
Subject: Re: [PATCH net 1/3] net/mlx5: Fix flow counter bulk command out mailbox allocation
Date: Sun, 18 Sep 2016 21:02:23 +0300
Message-ID: <20160918180223.GM2923@leon.nu>
In-Reply-To: <1474212029-1052-2-git-send-email-ogerlitz@mellanox.com>
On Sun, Sep 18, 2016 at 06:20:27PM +0300, Or Gerlitz wrote:
> From: Roi Dayan <roid@mellanox.com>
>
> The FW command output length should cover only the length of the 'out'
> field of struct mlx5_cmd_fc_bulk. Failing to do so causes the memcpy
> call invoked later in the driver to write past the end of the allocated
> buffer, corrupting kernel memory and resulting in random crashes.
>
> This bug was found using the kernel address sanitizer (KASAN).
>
> Fixes: a351a1b03bf1 ('net/mlx5: Introduce bulk reading of flow counters')
> Signed-off-by: Roi Dayan <roid@mellanox.com>
> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
> index 9134010..287ade1 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
> @@ -425,11 +425,11 @@ struct mlx5_cmd_fc_bulk *
>  mlx5_cmd_fc_bulk_alloc(struct mlx5_core_dev *dev, u16 id, int num)
>  {
>  	struct mlx5_cmd_fc_bulk *b;
> -	int outlen = sizeof(*b) +
> +	int outlen =
>  		MLX5_ST_SZ_BYTES(query_flow_counter_out) +
>  		MLX5_ST_SZ_BYTES(traffic_counter) * num;
>
> -	b = kzalloc(outlen, GFP_KERNEL);
> +	b = kzalloc(sizeof(*b) + outlen, GFP_KERNEL);
>  	if (!b)
>  		return NULL;
^^^^^^^^^ very controversial decision.
In the code flow mlx5_fc_stats_query->mlx5_cmd_fc_bulk_alloc->kzalloc,
the failure path behaves the same as the success scenario.
It is not related to the proposed patch, though.
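
For context, a minimal sketch of what goes wrong (the struct below is
paraphrased from fs_cmd.c; treat the exact field list as an assumption,
not a verbatim copy):

	/* Bookkeeping for a bulk flow counter query: the header fields
	 * are driver-private, out[] is the mailbox the FW response is
	 * copied into.
	 */
	struct mlx5_cmd_fc_bulk {
		u16 id;      /* first flow counter id in the bulk */
		int num;     /* number of counters queried */
		int outlen;  /* bytes the command layer may write to out[] */
		u32 out[];   /* FW output mailbox */
	};

The query path later hands b->out and b->outlen to the command
interface, which memcpys the FW response into b->out. Before this
patch, outlen also counted sizeof(*b), so that copy could run
sizeof(*b) bytes past the end of the kzalloc'ed buffer -- the
corruption KASAN caught. With the fix, the allocation is
sizeof(*b) + outlen while outlen covers only the mailbox, so the
copy fits exactly.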
>
> --
> 2.3.7
>
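
A self-contained way to play with the pattern outside the kernel
(plain C; struct bulk and bulk_alloc are hypothetical stand-ins for
the driver code, not its API):

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>

	struct bulk {
		int outlen;     /* payload bytes only, header excluded */
		uint32_t out[]; /* buffer the producer writes into */
	};

	static struct bulk *bulk_alloc(int payload_bytes)
	{
		/* one chunk: header immediately followed by payload */
		struct bulk *b = calloc(1, sizeof(*b) + payload_bytes);

		if (!b)
			return NULL;
		/* the pre-fix bug: this also counted sizeof(*b) */
		b->outlen = payload_bytes;
		return b;
	}

	int main(void)
	{
		struct bulk *b = bulk_alloc(64);

		if (!b)
			return 1;
		/* writes b->outlen bytes; stays inside the allocation.
		 * With the buggy outlen it would overrun by
		 * sizeof(struct bulk) bytes.
		 */
		memset(b->out, 0xab, b->outlen);
		free(b);
		return 0;
	}

Building the buggy variant (b->outlen = sizeof(*b) + payload_bytes)
with -fsanitize=address makes the memset trip a heap-buffer-overflow
report, the userspace analogue of the KASAN splat from the commit
message.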
Thread overview: 7+ messages
2016-09-18 15:20 [PATCH net 0/3] mlx5 fixes to 4.8-rc6 Or Gerlitz
2016-09-18 15:20 ` [PATCH net 1/3] net/mlx5: Fix flow counter bulk command out mailbox allocation Or Gerlitz
2016-09-18 18:02 ` Leon Romanovsky [this message]
2016-09-18 20:24 ` Or Gerlitz
2016-09-18 15:20 ` [PATCH net 2/3] net/mlx5: E-Switch, Fix error flow in the SRIOV e-switch init code Or Gerlitz
2016-09-18 15:20 ` [PATCH net 3/3] net/mlx5: E-Switch, Handle mode change failures Or Gerlitz
2016-09-20 2:10 ` [PATCH net 0/3] mlx5 fixes to 4.8-rc6 David Miller