From: Sean Christopherson <seanjc@google.com>
Date: Wed, 13 Sep 2023 18:55:13 -0700
Subject: [RFC PATCH v12 15/33] KVM: Add transparent hugepage support for dedicated guest memory
Message-ID: <20230914015531.1419405-16-seanjc@google.com>
In-Reply-To: <20230914015531.1419405-1-seanjc@google.com>
References: <20230914015531.1419405-1-seanjc@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 "Matthew Wilcox (Oracle)", Andrew Morton, Paul Moore, James Morris,
 "Serge E. Hallyn"
Cc: kvm@vger.kernel.org, David Hildenbrand, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Chao Peng, linux-riscv@lists.infradead.org,
 Isaku Yamahata, linux-security-module@vger.kernel.org, Wang, Fuad Tabba,
 linux-arm-kernel@lists.infradead.org, Maciej Szmigiero, Michael Roth,
 Ackerley Tng, kvmarm@lists.linux.dev, Vlastimil Babka, Isaku Yamahata,
 Quentin Perret, linux-mips@vger.kernel.org, Jarkko Sakkinen, Yu Zhang,
 "Kirill A. Shutemov", kvm-riscv@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, Liam Merwick, Vishal Annapurve,
 linuxppc-dev@lists.ozlabs.org, Xu Yilun, Anish Moorthy

Extend guest_memfd to allow backing guest memory with transparent
hugepages.  Add a new KVM_CREATE_GUEST_MEMFD flag,
KVM_GUEST_MEMFD_ALLOW_HUGEPAGE, so that userspace must explicitly opt in
to hugepage allocations.  When the flag is set, require the guest_memfd
size to be HPAGE_PMD_SIZE-aligned and attempt to allocate and insert a
PMD-sized folio before falling back to the existing order-0
filemap_grab_folio() path.  Mark the guest_memfd mapping as large-folio
capable via mapping_set_large_folios().  The new flag is accepted only
when CONFIG_TRANSPARENT_HUGEPAGE is enabled; otherwise
KVM_CREATE_GUEST_MEMFD rejects it with -EINVAL.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
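A minimal userspace sketch (not part of this patch) of how the new flag is
intended to be consumed.  It assumes <linux/kvm.h> comes from a tree with
this series applied, i.e. that it already provides
struct kvm_create_guest_memfd, KVM_CREATE_GUEST_MEMFD and
KVM_GUEST_MEMFD_ALLOW_HUGEPAGE, and that vm_fd is a VM file descriptor
previously obtained via KVM_CREATE_VM:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Create a guest_memfd whose memory may be backed by PMD-sized folios. */
static int create_hugepage_gmem(int vm_fd, __u64 size)
{
        struct kvm_create_guest_memfd gmem = {
                /* Size must be HPAGE_PMD_SIZE-aligned (2MiB on x86-64). */
                .size  = size,
                .flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE,
        };
        int fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        if (fd < 0)
                perror("KVM_CREATE_GUEST_MEMFD");
        return fd;
}

On success the ioctl returns a new guest_memfd file descriptor; when the
flag is set, a size that is not HPAGE_PMD_SIZE-aligned is rejected by
kvm_gmem_is_valid_size().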
 include/uapi/linux/kvm.h |  2 ++
 virt/kvm/guest_mem.c     | 54 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 51 insertions(+), 5 deletions(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b6f90a273e2e..2df18796fd8e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -2314,6 +2314,8 @@ struct kvm_memory_attributes {
 
 #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
 
+#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE		(1ULL << 0)
+
 struct kvm_create_guest_memfd {
 	__u64 size;
 	__u64 flags;
diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 0dd3f836cf9c..a819367434e9 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -17,15 +17,48 @@ struct kvm_gmem {
 	struct list_head entry;
 };
 
-static struct folio *kvm_gmem_get_folio(struct file *file, pgoff_t index)
+static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index)
 {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	unsigned long huge_index = round_down(index, HPAGE_PMD_NR);
+	unsigned long flags = (unsigned long)inode->i_private;
+	struct address_space *mapping = inode->i_mapping;
+	gfp_t gfp = mapping_gfp_mask(mapping);
 	struct folio *folio;
 
-	/* TODO: Support huge pages. */
-	folio = filemap_grab_folio(file->f_mapping, index);
-	if (IS_ERR_OR_NULL(folio))
+	if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
 		return NULL;
 
+	if (filemap_range_has_page(mapping, huge_index << PAGE_SHIFT,
+				   (huge_index + HPAGE_PMD_NR - 1) << PAGE_SHIFT))
+		return NULL;
+
+	folio = filemap_alloc_folio(gfp, HPAGE_PMD_ORDER);
+	if (!folio)
+		return NULL;
+
+	if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
+		folio_put(folio);
+		return NULL;
+	}
+
+	return folio;
+#else
+	return NULL;
+#endif
+}
+
+static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
+{
+	struct folio *folio;
+
+	folio = kvm_gmem_get_huge_folio(inode, index);
+	if (!folio) {
+		folio = filemap_grab_folio(inode->i_mapping, index);
+		if (IS_ERR_OR_NULL(folio))
+			return NULL;
+	}
+
 	/*
 	 * Use the up-to-date flag to track whether or not the memory has been
 	 * zeroed before being handed off to the guest. There is no backing
@@ -323,7 +356,8 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr = kvm_gmem_setattr,
 };
 
-static int __kvm_gmem_create(struct kvm *kvm, loff_t size, struct vfsmount *mnt)
+static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
+			     struct vfsmount *mnt)
 {
 	const char *anon_name = "[kvm-gmem]";
 	const struct qstr qname = QSTR_INIT(anon_name, strlen(anon_name));
@@ -346,6 +380,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, struct vfsmount *mnt)
 	inode->i_mode |= S_IFREG;
 	inode->i_size = size;
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_large_folios(inode->i_mapping);
 	mapping_set_unmovable(inode->i_mapping);
 	/* Unmovable mappings are supposed to be marked unevictable as well. */
 	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
@@ -396,6 +431,12 @@ static bool kvm_gmem_is_valid_size(loff_t size, u64 flags)
 	if (size < 0 || !PAGE_ALIGNED(size))
 		return false;
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
+	    !IS_ALIGNED(size, HPAGE_PMD_SIZE))
+		return false;
+#endif
+
 	return true;
 }
 
@@ -405,6 +446,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	u64 flags = args->flags;
 	u64 valid_flags = 0;
 
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		valid_flags |= KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
+
 	if (flags & ~valid_flags)
 		return -EINVAL;
 
-- 
2.42.0.283.g2d96d420d3-goog