From: Pengfei Xu <pengfei.xu@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <iommu@lists.linux.dev>, Kevin Tian <kevin.tian@intel.com>,
<linux-kselftest@vger.kernel.org>,
Lixiao Yang <lixiao.yang@intel.com>,
Matthew Rosato <mjrosato@linux.ibm.com>,
Nicolin Chen <nicolinc@nvidia.com>, Yi Liu <yi.l.liu@intel.com>
Subject: Re: [PATCH 2/4] iommufd: Fix unpinning of pages when an access is present
Date: Sat, 1 Apr 2023 14:23:12 +0800
Message-ID: <ZCfN0MSBxfYTm7kI@xpf.sh.intel.com>
In-Reply-To: <2-v1-ceab6a4d7d7a+94-iommufd_syz_jgg@nvidia.com>
Hi Jason,

On 2023-03-31 at 12:32:25 -0300, Jason Gunthorpe wrote:
> syzkaller found that the calculation of batch_last_index should use
> 'start_index' since at input to this function the batch is either empty or
> it has already been adjusted to cross any accesses so it will start at the
> point we are unmapping from.
>
> Getting this wrong causes the unmap to run over the end of the pages
> which corrupts pages that were never mapped. In most cases this triggers
> the num pinned debugging:
>
> WARNING: CPU: 0 PID: 557 at drivers/iommu/iommufd/pages.c:294 __iopt_area_unfill_domain+0x152/0x560
> Modules linked in:
> CPU: 0 PID: 557 Comm: repro Not tainted 6.3.0-rc2-eeac8ede1755 #1
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> RIP: 0010:__iopt_area_unfill_domain+0x152/0x560
> Code: d2 0f ff 44 8b 64 24 54 48 8b 44 24 48 31 ff 44 89 e6 48 89 44 24 38 e8 fc d3 0f ff 45 85 e4 0f 85 eb 01 00 00 e8 0e d2 0f ff <0f> 0b e8 07 d2 0f ff 48 8b 44 24 38 89 5c 24 58 89 18 8b 44 24 54
> RSP: 0018:ffffc9000108baf0 EFLAGS: 00010246
> RAX: 0000000000000000 RBX: 00000000ffffffff RCX: ffffffff821e3f85
> RDX: 0000000000000000 RSI: ffff88800faf0000 RDI: 0000000000000002
> RBP: ffffc9000108bd18 R08: 000000000003ca25 R09: 0000000000000014
> R10: 000000000003ca00 R11: 0000000000000024 R12: 0000000000000004
> R13: 0000000000000801 R14: 00000000000007ff R15: 0000000000000800
> FS: 00007f3499ce1740(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000020000243 CR3: 00000000179c2001 CR4: 0000000000770ef0
> PKRU: 55555554
> Call Trace:
> <TASK>
> iopt_area_unfill_domain+0x32/0x40
> iopt_table_remove_domain+0x23f/0x4c0
> iommufd_device_selftest_detach+0x3a/0x90
> iommufd_selftest_destroy+0x55/0x70
> iommufd_object_destroy_user+0xce/0x130
> iommufd_destroy+0xa2/0xc0
> iommufd_fops_ioctl+0x206/0x330
> __x64_sys_ioctl+0x10e/0x160
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x72/0xdc
>
> Also add some useful WARN_ON sanity checks.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d160cd4d506 ("iommufd: Algorithms for PFN storage")
> Reported-by: Pengfei Xu <pengfei.xu@intel.com>
> Link: https://lore.kernel.org/r/ZBE1k040xAhIuTmq@xpf.sh.intel.com
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/iommufd/pages.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
> index 400ec7c91ed7e7..b11aace836542d 100644
> --- a/drivers/iommu/iommufd/pages.c
> +++ b/drivers/iommu/iommufd/pages.c
> @@ -1207,13 +1207,21 @@ iopt_area_unpin_domain(struct pfn_batch *batch, struct iopt_area *area,
> unsigned long start =
> max(start_index, *unmapped_end_index);
>
> + if (IS_ENABLED(CONFIG_IOMMUFD_TEST) &&
> + batch->total_pfns)
> + WARN_ON(*unmapped_end_index -
> + batch->total_pfns !=
> + start_index);
> batch_from_domain(batch, domain, area, start,
> last_index);
> - batch_last_index = start + batch->total_pfns - 1;
> + batch_last_index = start_index + batch->total_pfns - 1;
> } else {
> batch_last_index = last_index;
> }
>
> + if (IS_ENABLED(CONFIG_IOMMUFD_TEST))
> + WARN_ON(batch_last_index > real_last_index);
> +
> /*
> * unmaps must always 'cut' at a place where the pfns are not
> * contiguous to pair with the maps that always install
> --
> 2.40.0
>

I tested the syzkaller reproducer on a kernel with all three fix patches applied.
Syzkaller reproducer: https://github.com/xupengfe/syzkaller_logs/raw/main/230314_094459___iopt_area_unfill_domain/repro
The issue no longer reproduces, so this series fixes it.

Thanks!
BR.