From: Alex Bennée
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    David Matlack, David Stevens
Subject: Re: [PATCH v12 11/84] KVM: Rename gfn_to_page_many_atomic() to kvm_prefetch_pages()
In-Reply-To: <20240726235234.228822-12-seanjc@google.com> (Sean Christopherson's message of "Fri, 26 Jul 2024 16:51:20 -0700")
References: <20240726235234.228822-1-seanjc@google.com> <20240726235234.228822-12-seanjc@google.com>
Date: Fri, 02 Aug 2024 12:16:04 +0100
Message-ID: <87frrncgzv.fsf@draig.linaro.org>

Sean Christopherson writes:

> Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
> communicate its true purpose, as the "atomic" aspect is essentially a
> side effect of the fact that x86 uses the API while holding mmu_lock.
It's never too late to start adding some kdoc annotations to a
function, and renaming a kvm_host API call seems like a good time to
do it.

> E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
> as the goal is to opportunistically grab surrounding pages that have
> already been accessed and/or dirtied by the host, and to do so quickly.
>
> Signed-off-by: Sean Christopherson
> ---

/**
 * kvm_prefetch_pages() - opportunistically grab previously accessed pages
 * @slot: which @kvm_memory_slot the pages are in
 * @gfn: guest frame
 * @pages: array to receive page pointers
 * @nr_pages: number of pages
 *
 * Returns the number of pages actually mapped.
 */

?

>
> -int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
> -			    struct page **pages, int nr_pages)
> +int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
> +		       struct page **pages, int nr_pages)
>  {
>  	unsigned long addr;
>  	gfn_t entry = 0;
> @@ -3075,7 +3075,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
>
>  	return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
>  }
> -EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
> +EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
>
>  /*
>   * Do not use this helper unless you are absolutely certain the gfn _must_ be

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro