Message-ID: <80cc204b-a24f-684f-ec66-1361b69cae39@intel.com>
Date: Tue, 2 Aug 2022 16:46:38 -0700
Subject: Re: [PATCHv7 10/14] x86/mm: Avoid load_unaligned_zeropad() stepping into unaccepted memory
To: Borislav Petkov, Kirill A. Shutemov
Cc: Andy Lutomirski, Sean Christopherson, Andrew Morton, Joerg Roedel,
 Ard Biesheuvel, Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
 Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
 Paolo Bonzini, Ingo Molnar, Varad Gautam, Dario Faggioli, Mike Rapoport,
 David Hildenbrand, marcelo.cerri@canonical.com, tim.gardner@canonical.com,
 khalid.elmously@canonical.com, philip.cox@canonical.com, x86@kernel.org,
 linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20220614120231.48165-1-kirill.shutemov@linux.intel.com>
 <20220614120231.48165-11-kirill.shutemov@linux.intel.com>
From: Dave Hansen

On 7/26/22 03:21, Borislav Petkov wrote:
> On Tue, Jun 14, 2022 at 03:02:27PM +0300, Kirill A. Shutemov wrote:
>> But, this approach does not work for unaccepted memory. For TDX, a load
>> from unaccepted memory will not lead to a recoverable exception within
>> the guest. The guest will exit to the VMM where the only recourse is to
>> terminate the guest.
>
> FTR, this random-memory-access-to-unaccepted-memory-is-deadly thing is
> really silly. We should be able to handle such cases - because they do
> happen often - in a more resilient way. Just look at the complex dance
> this patch needs to do just to avoid this.
>
> IOW, this part of the coco technology needs improvement.

This particular wound is self-inflicted. The hardware can *today*
generate a #VE for these accesses. But, to make writing the #VE code
more straightforward, we asked that the hardware not even bother
delivering the exception.
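An aside for anyone reading along without the helper in front of them:
load_unaligned_zeropad() performs a word-sized load that can
intentionally run past the end of a buffer into the next page, relying
on exception fixup to zero the bytes it could not read. The following
is only a conceptual sketch of those fixup semantics on little-endian
x86, not the kernel's actual word-at-a-time code:

#include <stdint.h>
#include <string.h>

/*
 * Sketch only: what the "zeropad" fixup conceptually produces when an
 * 8-byte load at 'addr' would cross into an unreadable page.  Redo the
 * load from the aligned address, which sits entirely inside the
 * readable page, then shift so the bytes that live on the next page
 * read back as zero.
 */
static uint64_t zeropad_fixup_sketch(const unsigned char *addr)
{
	uintptr_t aligned = (uintptr_t)addr & ~(uintptr_t)7;
	unsigned int shift = ((uintptr_t)addr & 7) * 8;
	uint64_t word;

	memcpy(&word, (const void *)aligned, sizeof(word));
	return word >> shift;
}

The point being: a perfectly normal, in-bounds string operation can
legitimately touch the first bytes of the following page, and that is
exactly the access pattern that trips over unaccepted memory.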
At the time, nobody could come up with a case why there would ever be a
legitimate, non-buggy access to unaccepted memory. We learned about
load_unaligned_zeropad() the hard way. I never ran into it and never
knew it was there. Dangit.

We _could_ go back to the way it was originally. We could add
load_unaligned_zeropad() support to the #VE handler, and there's little
risk of load_unaligned_zeropad() itself being used in the
interrupts-disabled window early in the #VE handler. That would get rid
of all the nasty adjacent page handling in the unaccepted memory code.

But, that would mean that we can land in the #VE handler from more
contexts. Any normal, non-buggy use of load_unaligned_zeropad() can end
up there, obviously. We would, for instance, need to be more careful
about #VE recursion. We'd also have to make sure that _bugs_ that land
in the #VE handler can still be handled in a sane way.

To sum it all up, I'm not happy with the complexity of the page
acceptance code either, but I'm not sure that it's a bad tradeoff
compared to greater #VE complexity or fragility.

Does anyone think we should go back and really reconsider this?
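For concreteness, the adjacent-page handling being weighed here against
extra #VE handler complexity amounts to widening every acceptance
request so that the bytes just past it are already accepted. A rough
sketch of that idea, with placeholder names rather than the helpers the
series actually adds:

#include <stdbool.h>

typedef unsigned long long phys_addr_t;

/* Placeholders assumed for the sketch; not the posted interfaces. */
extern bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end);
extern void arch_accept_memory(phys_addr_t start, phys_addr_t end);

/*
 * Before treating [start, end) as accepted, also accept the unit that
 * immediately follows 'end'.  A word-sized load that starts in the last
 * bytes of the range and spills across the boundary (the
 * load_unaligned_zeropad() pattern) then lands in accepted memory
 * instead of terminating the guest.
 */
static void accept_memory_with_guard(phys_addr_t start, phys_addr_t end,
				     unsigned long unit_size)
{
	if (range_contains_unaccepted_memory(end, end + unit_size))
		end += unit_size;

	arch_accept_memory(start, end);
}

Whether that bookkeeping or a more capable #VE handler is the better
home for the complexity is exactly the tradeoff in question above.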