Subject: [MODERATED] Re: [PATCH 1/2] L1TF KVM 1
From: Dave Hansen
Date: Tue, 5 Jun 2018 17:34:34 -0700
To: speck@linutronix.de

On 05/29/2018 12:42 PM, speck for Paolo Bonzini wrote:
> 	r = -ENOMEM;
> +	page = alloc_pages(GFP_ATOMIC, L1D_CACHE_ORDER);
> +	if (!page)
> +		goto out;
> +	empty_zero_pages = page_address(page);

There is also an Intel suggestion to have guard pages before and after
the L1D flush buffer.  As it stands, the prefetchers might pull data
into the cache from pages adjacent to the allocation you have there.

You can use vmalloc(), where we get (unmapped) guard pages already.
Or, you can just oversize the allocation using:

	alloc_pages_exact((PAGE_SIZE << L1D_CACHE_ORDER) + 2 * PAGE_SIZE,
			  GFP_ATOMIC)

(note that L1D_CACHE_ORDER is a page-allocator order, so the buffer
size is PAGE_SIZE << L1D_CACHE_ORDER, and alloc_pages_exact() takes a
gfp mask) and just point empty_zero_pages to the second page in the
buffer.  Since alloc_pages_exact() returns a kernel virtual address
rather than a struct page, that is:

	empty_zero_pages = buf + PAGE_SIZE;

I'd suggest the alloc_pages_exact() version.  Its memory comes out of
the direct map, which uses large pages, while vmalloc() maps 4k pages,
so it will chew up fewer TLB entries.
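
Something like this, untested, keeping the r/-ENOMEM error handling and
the empty_zero_pages name from your patch (the 'buf' local is mine):

	void *buf;

	r = -ENOMEM;
	/*
	 * One extra page on each side of the flush buffer acts as a
	 * guard: anything the prefetchers pull in from just beyond the
	 * buffer is still our own allocation, not neighboring data.
	 */
	buf = alloc_pages_exact((PAGE_SIZE << L1D_CACHE_ORDER) + 2 * PAGE_SIZE,
				GFP_ATOMIC);
	if (!buf)
		goto out;
	/* Skip the leading guard page; the last page is the trailing guard. */
	empty_zero_pages = buf + PAGE_SIZE;

The matching teardown would then be free_pages_exact() on buf with the
same size, rather than __free_pages().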