From: "Jarkko Sakkinen" <jarkko@kernel.org>
To: "Matthew R. Ochs" <mochs@nvidia.com>, <peterhuewe@gmx.de>,
<jgg@ziepe.ca>, <kyarlagadda@nvidia.com>,
<linux-tegra@vger.kernel.org>, <linux-integrity@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Cc: <va@nvidia.com>, <csoto@nvidia.com>
Subject: Re: [PATCH v2] tpm_tis_spi: Account for SPI header when allocating TPM SPI xfer buffer
Date: Wed, 22 May 2024 15:04:36 +0300
Message-ID: <D1G5QU7X1NK0.VACMWFR8IB49@kernel.org>
In-Reply-To: <20240522015932.3742421-1-mochs@nvidia.com>
On Wed May 22, 2024 at 4:59 AM EEST, Matthew R. Ochs wrote:
> The TPM SPI transfer mechanism uses MAX_SPI_FRAMESIZE for computing the
> maximum transfer length and the size of the transfer buffer. As such, it
> does not account for the 4-byte header that is prepended to the SPI data
> frame. This can result in out-of-bounds accesses and was confirmed with
> KASAN.
>
> Introduce SPI_HDRSIZE to account for the header and use it to allocate the
> transfer buffer.
>
> Fixes: a86a42ac2bd6 ("tpm_tis_spi: Add hardware wait polling")
> Signed-off-by: Matthew R. Ochs <mochs@nvidia.com>
> Tested-by: Carol Soto <csoto@nvidia.com>
> ---
> v2: Removed MAX_SPI_BUFSIZE in favor of open coding the buffer allocation
> ---
> drivers/char/tpm/tpm_tis_spi_main.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
> index 3f9eaf27b41b..c9eca24bbad4 100644
> --- a/drivers/char/tpm/tpm_tis_spi_main.c
> +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> @@ -37,6 +37,7 @@
> #include "tpm_tis_spi.h"
>
> #define MAX_SPI_FRAMESIZE 64
> +#define SPI_HDRSIZE 4
>
> /*
> * TCG SPI flow control is documented in section 6.4 of the spec[1]. In short,
> @@ -247,7 +248,7 @@ static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr,
> int tpm_tis_spi_init(struct spi_device *spi, struct tpm_tis_spi_phy *phy,
> int irq, const struct tpm_tis_phy_ops *phy_ops)
> {
> - phy->iobuf = devm_kmalloc(&spi->dev, MAX_SPI_FRAMESIZE, GFP_KERNEL);
> + phy->iobuf = devm_kmalloc(&spi->dev, SPI_HDRSIZE + MAX_SPI_FRAMESIZE, GFP_KERNEL);
> if (!phy->iobuf)
> return -ENOMEM;
>
Thanks, this is much better. Now, when reading the code months from now,
it will be easy to see why the buffer is sized the way it is.
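For anyone reading this later, here is a minimal sketch of the framing that
motivates the size (illustrative only, not the exact driver code;
direction_read, addr, transfer_len and out are placeholder names): every
transfer builds a 4-byte TCG SPI header in front of the payload in the same
iobuf, so a full-size frame needs SPI_HDRSIZE + MAX_SPI_FRAMESIZE bytes:

    /* Illustrative TCG PTP SPI framing inside phy->iobuf */
    phy->iobuf[0] = (direction_read ? 0x80 : 0x00) | (transfer_len - 1);
    phy->iobuf[1] = 0xd4;            /* TIS register base (0xD4xxxx) */
    phy->iobuf[2] = addr >> 8;       /* register address, high byte */
    phy->iobuf[3] = addr;            /* register address, low byte */
    memcpy(&phy->iobuf[4], out, transfer_len); /* up to MAX_SPI_FRAMESIZE */

With the old MAX_SPI_FRAMESIZE-only allocation, a 64-byte payload writes
through iobuf[67], four bytes past the end of the buffer, which is exactly
the kind of overflow KASAN flags.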
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Given that it is a bug fix, I can put this into the rc2 pull request. Thanks
for finding and fixing the bug.
BR, Jarkko
Thread overview: 6+ messages
2024-05-21 15:40 [PATCH] tpm_tis_spi: Account for SPI header when allocating TPM SPI xfer buffer Matthew R. Ochs
2024-05-21 15:55 ` Jarkko Sakkinen
2024-05-21 17:59 ` Matt Ochs
2024-05-21 18:57 ` Jarkko Sakkinen
2024-05-22 1:59 ` [PATCH v2] " Matthew R. Ochs
2024-05-22 12:04 ` Jarkko Sakkinen [this message]