From: Alistair Popple <apopple@nvidia.com>
To: Bernard Metzler <bmt@zurich.ibm.com>
Cc: linux-rdma@vger.kernel.org, jgg@nvidia.com, leonro@nvidia.com
Subject: Re: [PATCH] RDMA/siw: Fix user page pinning accounting
Date: Tue, 31 Jan 2023 11:09:27 +1100
Message-ID: <878rhjzhbg.fsf@nvidia.com>
In-Reply-To: <20230130133235.223440-1-bmt@zurich.ibm.com>
Bernard Metzler <bmt@zurich.ibm.com> writes:
> To avoid racing with other user memory reservations, immediately
> account full amount of pages to be pinned.
>
> Fixes: 2251334dcac9 ("rdma/siw: application buffer management")
> Reported-by: Jason Gunthorpe <jgg@nvidia.com>
> Suggested-by: Alistair Popple <apopple@nvidia.com>
> Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
> ---
> drivers/infiniband/sw/siw/siw_mem.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
> index b2b33dd3b4fa..7afdbe3f2266 100644
> --- a/drivers/infiniband/sw/siw/siw_mem.c
> +++ b/drivers/infiniband/sw/siw/siw_mem.c
> @@ -398,7 +398,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
>
> mlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
>
> - if (num_pages + atomic64_read(&mm_s->pinned_vm) > mlock_limit) {
> + if (atomic64_add_return(num_pages, &mm_s->pinned_vm) > mlock_limit) {
> rv = -ENOMEM;
> goto out_sem_up;
> }
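
The difference between the old check and this one can be sketched in userspace with C11 atomics (an illustrative analogue, not the driver code; the names are made up). With a plain read-then-add, two threads can both pass the limit test before either one adds, overshooting the rlimit; adding first and testing the returned sum closes that window, at the cost of having to subtract the reservation back out on failure:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_long pinned_vm;   /* stands in for mm_s->pinned_vm */

/* Racy variant, like the old code: check first, then add.
 * Two concurrent callers can both see room available. */
static bool reserve_racy(long num_pages, long limit)
{
        if (num_pages + atomic_load(&pinned_vm) > limit)
                return false;
        atomic_fetch_add(&pinned_vm, num_pages);
        return true;
}

/* Patched variant: add first, then test the resulting total
 * (atomic_fetch_add returns the old value, so add num_pages to get
 * the equivalent of atomic64_add_return). On failure the reservation
 * must be rolled back, which is what the new error path does. */
static bool reserve_atomic(long num_pages, long limit)
{
        if (atomic_fetch_add(&pinned_vm, num_pages) + num_pages > limit) {
                atomic_fetch_sub(&pinned_vm, num_pages);
                return false;
        }
        return true;
}
```
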
> @@ -429,7 +429,6 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
> goto out_sem_up;
>
> umem->num_pages += rv;
> - atomic64_add(rv, &mm_s->pinned_vm);
> first_page_va += rv * PAGE_SIZE;
> nents -= rv;
> got += rv;
> @@ -442,6 +441,10 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
> if (rv > 0)
> return umem;
>
> + /* Adjust accounting for pages not pinned */
> + if (num_pages)
> + atomic64_sub(num_pages, &mm_s->pinned_vm);
> +
> siw_umem_release(umem, false);
Won't this subtract some pages from the accounting twice if we bail out of this loop early:
        while (nents) {
                struct page **plist = &umem->page_chunk[i].plist[got];

                rv = pin_user_pages(first_page_va, nents,
                                    foll_flags | FOLL_LONGTERM,
                                    plist, NULL);
                if (rv < 0)
                        goto out_sem_up;

                umem->num_pages += rv;
                first_page_va += rv * PAGE_SIZE;
                nents -= rv;
                got += rv;
        }
        num_pages -= got;
Because siw_umem_release() will subtract umem->num_pages, but by that
point num_pages won't always have been updated? Looks like you could
just decrement num_pages in the inner loop and eliminate the `got`
variable, right?
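
To make the double-subtraction concrete, a small userspace simulation of the accounting paths (not driver code: pin_fake() stands in for pin_user_pages() and is forced to fail after partial progress; all names here are made up):

```c
static long pinned_vm;          /* stands in for mm_s->pinned_vm */

/* Pretend to pin pages: partial progress on the first call, then fail. */
static int pin_fake(int nents, int *calls)
{
        if ((*calls)++ == 0)
                return nents / 2;   /* pinned only half the request */
        return -14;                 /* -EFAULT: bail out of the loop */
}

/* Models the failure path only (pin_fake always fails eventually).
 * Returns pinned_vm after cleanup. If fix_num_pages is set, num_pages
 * is decremented inside the loop (the suggested change); otherwise it
 * is only reduced after the loop completes, as in the patch. */
static long run(int num_pages, int fix_num_pages)
{
        int umem_num_pages = 0, nents = num_pages, got = 0, calls = 0;
        int rv;

        pinned_vm += num_pages;             /* up-front accounting */

        while (nents) {
                rv = pin_fake(nents, &calls);
                if (rv < 0)
                        goto out;
                umem_num_pages += rv;
                if (fix_num_pages)
                        num_pages -= rv;
                nents -= rv;
                got += rv;
        }
        if (!fix_num_pages)
                num_pages -= got;
out:
        if (num_pages)
                pinned_vm -= num_pages;     /* undo pages never pinned */
        pinned_vm -= umem_num_pages;        /* siw_umem_release() */
        return pinned_vm;
}
```

With the patch as posted, run(8, 0) ends with pinned_vm at -4: the 4 pages that were pinned before the bailout get subtracted once via num_pages and again via siw_umem_release(). With num_pages decremented in the loop, run(8, 1) balances back to 0.
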
> return ERR_PTR(rv);
2023-01-30 13:32 [PATCH] RDMA/siw: Fix user page pinning accounting Bernard Metzler
2023-01-31 0:09 ` Alistair Popple [this message]
2023-01-31 11:05 ` Bernard Metzler
2023-01-31 23:10 ` Alistair Popple
-- strict thread matches above, loose matches on Subject: below --
2023-02-02 10:10 Bernard Metzler
2023-02-06 12:48 ` Leon Romanovsky
2023-02-01 11:55 Bernard Metzler
2023-02-02 7:44 ` Alistair Popple
2023-01-30 13:28 Bernard Metzler
2023-01-30 13:29 ` Jason Gunthorpe
2023-01-30 13:30 ` Bernard Metzler