Date: Wed, 24 May 2023 11:29:45 -0700
Subject: Re: [PATCH v6 1/4] KVM: mmu: introduce new gfn_to_pfn_noref functions
From: Sean Christopherson
To: Peter Xu
Cc: David Stevens, Marc Zyngier, Oliver Upton, Paolo Bonzini, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
References: <20230330085802.2414466-1-stevensd@google.com> <20230330085802.2414466-2-stevensd@google.com>

On Wed, May 24, 2023, Peter Xu wrote:
> On Wed, May 24, 2023 at 09:46:13AM -0700, Sean Christopherson wrote:
> > If we hack kvm_pfn_to_refcounted_page(), then all of those protections
> > are lost because KVM would drop its assertions and also skip dirtying
> > pages, i.e. would effectively suppress the latent detection by
> > check_new_page_bad().
>
> So it's probably that I totally have no idea what the attributes of
> those special pages are, so I don't understand enough about why we need
> to handle those pages differently from e.g. PFNMAP pages, and what the
> benefits are.
>
> I think what I can tell is that they're pages that don't have the
> PageCompound bit set on either head or tails, yet each is still a
> multi-2-order large page.  Is there an example of how these pages are
> used and allocated?  Why would we need those pages, and do these pages
> need to be set dirty/accessed after all?

The use case David is interested in is where an AMD GPU driver kmalloc()s
a chunk of memory, lets it be mmap()'d by userspace, and userspace then
maps it into the guest for a virtual (passthrough?) GPU.  For all intents
and purposes, it's normal memory, just not refcounted.
> > static bool kvm_is_ad_tracked_page(struct page *page)
> > {
> > +	/*
> > +	 * Assert that KVM isn't attempting to mark a freed page as Accessed
> > +	 * or Dirty, i.e. that KVM's MMU doesn't have a use-after-free bug.
> > +	 * KVM (typically) doesn't pin pages that are mapped in KVM's MMU,
> > +	 * and instead relies on mmu_notifiers to know when a mapping needs
> > +	 * to be zapped/invalidated.  Unmapping from KVM's MMU must happen
> > +	 * _before_ KVM returns from its mmu_notifier, i.e. the page should
> > +	 * have an elevated refcount at this point even though KVM doesn't
> > +	 * hold a reference of its own.
> > +	 */
> > +	if (WARN_ON_ONCE(!page_count(page)))
> > +		return false;
> > +
> > 	/*
> > 	 * Per page-flags.h, pages tagged PG_reserved "should in general not be
> > 	 * touched (e.g. set dirty) except by its owner".
>
> This looks like a good thing to have, indeed.  But again it doesn't seem
> like anything special to the pages we're discussing here, say, !Compound
> && refcount==0 ones.

The problem is that if KVM ignores refcount==0 pages, then KVM can't
distinguish between the legitimate[*] refcount==0 AMD GPU case and a buggy
refcount==0 use-after-free scenario.  I don't want to make that sacrifice,
as the legitimate !refcounted use case is a very specific use case, whereas
consuming refcounted memory is ubiquitous (outside of maybe AWS).

[*] Consuming !refcounted pages is safe only for flows that are tied into
the mmu_notifiers.  The current proposal/plan is to add an off-by-default
module param that lets userspace opt in to kmap() use of !refcounted
memory, e.g. this case and PFNMAP memory.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel