Date: Thu, 21 Sep 2023 14:34:46 -0700
Subject: Re: [RFC PATCH v2 1/6] KVM: gmem: Truncate pages on punch hole
From: Sean Christopherson
To: isaku.yamahata@intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	isaku.yamahata@gmail.com, Michael Roth, Paolo Bonzini,
	erdemaktas@google.com, Sagi Shahar, David Matlack, Kai Huang,
	Zhi Wang, chen.bo@intel.com, linux-coco@lists.linux.dev,
	Chao Peng, Ackerley Tng, Vishal Annapurve, Yuan Yao,
	Jarkko Sakkinen, Xu Yilun, Quentin Perret, wei.w.wang@intel.com,
	Fuad Tabba
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Sep 21, 2023, Sean Christopherson wrote:
> On Thu, Sep 21, 2023, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata
> >
> > Although kvm_gmem_punch_hole() keeps all pages in the mapping when
> > punching a hole, the common expectation is that the pages are truncated.
> > Truncate pages on punch hole. Since page contents can be encrypted,
> > avoid zeroing a partial folio by refusing a partial punch hole.
> >
> > Signed-off-by: Isaku Yamahata
> > ---
> >  virt/kvm/guest_mem.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
> > index a819367434e9..01fb4ca861d0 100644
> > --- a/virt/kvm/guest_mem.c
> > +++ b/virt/kvm/guest_mem.c
> > @@ -130,22 +130,32 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
> >  static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
> >  {
> >  	struct list_head *gmem_list = &inode->i_mapping->private_list;
> > +	struct address_space *mapping = inode->i_mapping;
> >  	pgoff_t start = offset >> PAGE_SHIFT;
> >  	pgoff_t end = (offset + len) >> PAGE_SHIFT;
> >  	struct kvm_gmem *gmem;
> >
> > +	/*
> > +	 * punch hole may result in zeroing partial area. As pages can be
> > +	 * encrypted, prohibit zeroing partial area.
> > +	 */
> > +	if (offset & ~PAGE_MASK || len & ~PAGE_MASK)
> > +		return -EINVAL;
> 
> This should be unnecessary, kvm_gmem_fallocate() does
> 
> 	if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
> 		return -EINVAL;
> 
> before invoking kvm_gmem_punch_hole(). If that's not working, i.e. your
> test fails, then that code needs to be fixed. I'll run your test to
> double-check, but AFAICT this is unnecessary.

I confirmed that the test case passes without the extra checks. Just to
close the loop, what prompted adding more checks to kvm_gmem_punch_hole()?