From: Fuad Tabba <tabba@google.com>
Date: Mon, 2 Jun 2025 10:43:05 +0100
Subject: Re: [RFC PATCH v2 02/51] KVM: guest_memfd: Introduce and use shareability to guard faulting
To: Ackerley Tng
Cc: Yan Zhao, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org, aik@amd.com, ajones@ventanamicro.com,
    akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
    anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
    binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
    chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com, david@redhat.com,
    dmatlack@google.com, dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com,
    fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com, hch@infradead.org,
    hughd@google.com, ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
    james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
    jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com, jun.miao@intel.com,
    kai.huang@intel.com, keirf@google.com, kent.overstreet@linux.dev,
    kirill.shutemov@intel.com, liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net, michael.roth@amd.com,
    mpe@ellerman.id.au, muchun.song@linux.dev, nikunj@amd.com, nsaenz@amazon.es,
    oliver.upton@linux.dev, palmer@dabbelt.com, pankaj.gupta@amd.com,
    paul.walmsley@sifive.com, pbonzini@redhat.com, pdurrant@amazon.co.uk,
    peterx@redhat.com, pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com, rick.p.edgecombe@intel.com,
    rientjes@google.com, roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com,
    shuah@kernel.org, steven.price@arm.com, steven.sistare@oracle.com,
    suzuki.poulose@arm.com, thomas.lendacky@amd.com,
    usama.arif@bytedance.com, vannapurve@google.com, vbabka@suse.cz,
    viro@zeniv.linux.org.uk, vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
    willy@infradead.org, xiaoyao.li@intel.com, yilun.xu@intel.com, yuzenghui@huawei.com,
    zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

Hi Ackerley,

On Fri, 30 May 2025 at 19:32, Ackerley Tng wrote:
>
> Fuad Tabba writes:
>
> > Hi,
> >
> > .. snip..
> >
> >> I noticed that in [1], the kvm_gmem_mmap() does not check the range.
> >> So, the WARN() here can be hit when userspace mmap() an area larger than the
> >> inode size and accesses the out of band HVA.
> >>
> >> Maybe limit the mmap() range?
> >>
> >> @@ -1609,6 +1620,10 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> >>         if (!kvm_gmem_supports_shared(file_inode(file)))
> >>                 return -ENODEV;
> >>
> >> +       if (vma->vm_end - vma->vm_start + (vma->vm_pgoff << PAGE_SHIFT) > i_size_read(file_inode(file)))
> >> +               return -EINVAL;
> >> +
> >>         if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> >>             (VM_SHARED | VM_MAYSHARE)) {
> >>                 return -EINVAL;
> >>
> >> [1] https://lore.kernel.org/all/20250513163438.3942405-8-tabba@google.com/
> >
> > I don't think we want to do that for a couple of reasons. We catch
> > such invalid accesses on faulting, and, by analogy, afaikt, neither
> > secretmem nor memfd perform a similar check on mmap (nor do
> > memory-mapped files in general).
> >
> > There are also valid reasons why a user would want to deliberately
> > mmap more memory than the backing store, knowing that it's only going
> > to fault what it's going to use, e.g., alignment.
> >
>
> This is a good point.
>
> I think there's no check against the inode size on faulting now though?
> v10's [1] kvm_gmem_fault_shared() calls kvm_gmem_get_folio()
> straightaway.
>
> We should add a check like [2] to kvm_gmem_fault_shared().

Yes! I mistakenly thought that kvm_gmem_get_folio() had such a check; I
just verified that it doesn't. I have added the check, as well as a new
selftest to make sure we don't miss it in the future.

Thanks!
/fuad
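[ For readers following along: a minimal, untested sketch of the kind of
  fault-time bounds check being discussed, modelled on the filemap_fault()
  check referenced as [2] just below. The placement inside
  kvm_gmem_fault_shared() mirrors the v10 code quoted above; it is an
  illustration only, not necessarily the fix that was actually applied.

        filemap_invalidate_lock_shared(inode->i_mapping);

        /*
         * mmap() may legitimately cover more than the inode size; an
         * access past EOF must fail at fault time, as it would for a
         * regular memory-mapped file.
         */
        if (vmf->pgoff >= DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE)) {
                filemap_invalidate_unlock_shared(inode->i_mapping);
                return VM_FAULT_SIGBUS;
        }

        folio = kvm_gmem_get_shared_folio(inode, vmf->pgoff);
]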
> [1] https://lore.kernel.org/all/20250513163438.3942405-8-tabba@google.com/
> [2] https://github.com/torvalds/linux/blob/8477ab143069c6b05d6da4a8184ded8b969240f5/mm/filemap.c#L3373
>
> > Cheers,
> > /fuad
> >
> >
> >> > +       return xa_to_value(entry);
> >> > +}
> >> > +
> >> > +static struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
> >> > +{
> >> > +       if (kvm_gmem_shareability_get(inode, index) != SHAREABILITY_ALL)
> >> > +               return ERR_PTR(-EACCES);
> >> > +
> >> > +       return kvm_gmem_get_folio(inode, index);
> >> > +}
> >> > +
> >> > +#else
> >> > +
> >> > +static int kvm_gmem_shareability_setup(struct maple_tree *mt, loff_t size, u64 flags)
> >> > +{
> >> > +       return 0;
> >> > +}
> >> > +
> >> > +static inline struct folio *kvm_gmem_get_shared_folio(struct inode *inode, pgoff_t index)
> >> > +{
> >> > +       WARN_ONCE("Unexpected call to get shared folio.")
> >> > +       return NULL;
> >> > +}
> >> > +
> >> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> >> > +
> >> >  static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> >> >                                      pgoff_t index, struct folio *folio)
> >> >  {
> >> > @@ -333,7 +404,7 @@ static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> >> >
> >> >         filemap_invalidate_lock_shared(inode->i_mapping);
> >> >
> >> > -       folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> >> > +       folio = kvm_gmem_get_shared_folio(inode, vmf->pgoff);
> >> >         if (IS_ERR(folio)) {
> >> >                 int err = PTR_ERR(folio);
> >> >
> >> > @@ -420,8 +491,33 @@ static struct file_operations kvm_gmem_fops = {
> >> >         .fallocate = kvm_gmem_fallocate,
> >> >  };
> >> >
> >> > +static void kvm_gmem_free_inode(struct inode *inode)
> >> > +{
> >> > +       struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
> >> > +
> >> > +       kfree(private);
> >> > +
> >> > +       free_inode_nonrcu(inode);
> >> > +}
> >> > +
> >> > +static void kvm_gmem_destroy_inode(struct inode *inode)
> >> > +{
> >> > +       struct kvm_gmem_inode_private *private = kvm_gmem_private(inode);
> >> > +
> >> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> >> > +       /*
> >> > +        * mtree_destroy() can't be used within rcu callback, hence can't be
> >> > +        * done in ->free_inode().
> >> > +        */
> >> > +       if (private)
> >> > +               mtree_destroy(&private->shareability);
> >> > +#endif
> >> > +}
> >> > +
> >> >  static const struct super_operations kvm_gmem_super_operations = {
> >> >         .statfs         = simple_statfs,
> >> > +       .destroy_inode  = kvm_gmem_destroy_inode,
> >> > +       .free_inode     = kvm_gmem_free_inode,
> >> >  };
> >> >
> >> >  static int kvm_gmem_init_fs_context(struct fs_context *fc)
> >> > @@ -549,12 +645,26 @@ static const struct inode_operations kvm_gmem_iops = {
> >> >  static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
> >> >                                                        loff_t size, u64 flags)
> >> >  {
> >> > +       struct kvm_gmem_inode_private *private;
> >> >         struct inode *inode;
> >> > +       int err;
> >> >
> >> >         inode = alloc_anon_secure_inode(kvm_gmem_mnt->mnt_sb, name);
> >> >         if (IS_ERR(inode))
> >> >                 return inode;
> >> >
> >> > +       err = -ENOMEM;
> >> > +       private = kzalloc(sizeof(*private), GFP_KERNEL);
> >> > +       if (!private)
> >> > +               goto out;
> >> > +
> >> > +       mt_init(&private->shareability);
> >> Wrap the mt_init() inside "#ifdef CONFIG_KVM_GMEM_SHARED_MEM" ?
> >>
> >> > +       inode->i_mapping->i_private_data = private;
> >> > +
> >> > +       err = kvm_gmem_shareability_setup(private, size, flags);
> >> > +       if (err)
> >> > +               goto out;
> >> > +
> >> >         inode->i_private = (void *)(unsigned long)flags;
> >> >         inode->i_op = &kvm_gmem_iops;
> >> >         inode->i_mapping->a_ops = &kvm_gmem_aops;
> >> > @@ -566,6 +676,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
> >> >         WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> >> >
> >> >         return inode;
> >> > +
> >> > +out:
> >> > +       iput(inode);
> >> > +
> >> > +       return ERR_PTR(err);
> >> >  }
> >> >
> >> >  static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
> >> > @@ -654,6 +769,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> >> >         if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
> >> >                 valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> >> >
> >> > +       if (flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED)
> >> > +               valid_flags |= GUEST_MEMFD_FLAG_INIT_PRIVATE;
> >> > +
> >> >         if (flags & ~valid_flags)
> >> >                 return -EINVAL;
> >> >
> >> > @@ -842,6 +960,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> >> >         if (!file)
> >> >                 return -EFAULT;
> >> >
> >> > +       filemap_invalidate_lock_shared(file_inode(file)->i_mapping);
> >> > +
> >> >         folio = __kvm_gmem_get_pfn(file, slot, index, pfn, &is_prepared, max_order);
> >> >         if (IS_ERR(folio)) {
> >> >                 r = PTR_ERR(folio);
> >> > @@ -857,8 +977,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> >> >                 *page = folio_file_page(folio, index);
> >> >         else
> >> >                 folio_put(folio);
> >> > -
> >> >  out:
> >> > +       filemap_invalidate_unlock_shared(file_inode(file)->i_mapping);
> >> >         fput(file);
> >> >         return r;
> >> >  }
> >> > --
> >> > 2.49.0.1045.g170613ef41-goog
> >> >
> >> >
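[ For completeness, an untested userspace sketch of the semantics agreed on
  above: mapping a window larger than the guest_memfd is allowed at mmap()
  time, and only an access past EOF is caught, with SIGBUS, when it faults.
  The fd is assumed to come from KVM_CREATE_GUEST_MEMFD with the
  shared-mapping support proposed in this series; this is not the selftest
  referred to above.

        #include <setjmp.h>
        #include <signal.h>
        #include <stddef.h>
        #include <stdio.h>
        #include <sys/mman.h>

        static sigjmp_buf sigbus_jmp;

        static void sigbus_handler(int sig)
        {
                siglongjmp(sigbus_jmp, 1);
        }

        /* gmem_fd: a guest_memfd created with shared-mapping support. */
        static void probe_past_eof(int gmem_fd, size_t gmem_size)
        {
                /* Map twice the file size; mmap() itself should succeed. */
                char *p = mmap(NULL, gmem_size * 2, PROT_READ | PROT_WRITE,
                               MAP_SHARED, gmem_fd, 0);

                if (p == MAP_FAILED) {
                        perror("mmap");
                        return;
                }

                signal(SIGBUS, sigbus_handler);

                p[0] = 1;                       /* in range: faults in a folio */

                if (sigsetjmp(sigbus_jmp, 1) == 0) {
                        p[gmem_size] = 1;       /* past EOF: expect SIGBUS */
                        printf("no SIGBUS past EOF (unexpected)\n");
                } else {
                        printf("SIGBUS past EOF, as expected\n");
                }

                munmap(p, gmem_size * 2);
        }
]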